[
  {
    "path": ".cursor/BUGBOT.md",
    "content": "Always suggest one or two ways to patch the issues you are highlighting.\n\n"
  },
  {
    "path": ".cursor/rules/build.mdc",
    "content": "---\nname: build\ndescription: \"Arweave build commands and structure\"\nalwaysApply: true\n---\n\n# Arweave Project Rules\n\nTo better reason about the Arweave protocol, **read `.cursor/rules/protocol.mdc`** in this directory.\n\n## Build Commands\n- To compile the project: `./ar-rebar3 prod compile`\n\n## Project Structure\n- This is an Erlang/OTP project using rebar3\n- It contains multiple apps in the `apps` directory\n- The main app is `arweave`\n- Source files: `apps/<app>/src/`\n- Test files: `apps/<app>/test/`\n- Include files: `apps/<app>/include/`\n\n## Running Tests\n- To run tests for a specific module: `./bin/test <module>`\n  - Example: `./bin/test ar_mining_io_tests`\n- To run tests for a specific test: `./bin/test <module>:<test>`\n  - Example: `./bin/test ar_mining_io_tests:read_recall_range_test_`\n- Multiple modules and/or tests can be run together\n  - Example: `./bin/test ar_mining_io_tests ar_data_sync_root_tests:data_roots_syncs_from_peer_test_`\n\n## Testing Notes\n- Test profile uses smaller values for constants like `?PARTITION_SIZE` and `?REPLICA_2_9_ENTROPY_COUNT` (defined in rebar.config)"
  },
  {
    "path": ".cursor/rules/protocol.mdc",
    "content": "# Arweave protocol technical details and caveats\n\nRecall bytes always point to chunks, not sub-chunks. We always consider all sub-chunks of every chunk durining mining. Nonce rem SubChunkCount determines which sub-chunk goes into the proof.\n"
  },
  {
    "path": ".gitattributes",
    "content": "localnet_snapshot/** filter=lfs diff=lfs merge=lfs -text\n"
  },
  {
    "path": ".github/workflows/e2e-test.yml",
    "content": "name: \"Arweave e2e  Tests Suites\"\non:\n  workflow_dispatch:\n  schedule:\n    - cron: \"0 13 * * *\"\n\njobs:\n  build:\n    runs-on: [self-hosted, ubuntu, amd64, build-runner]\n    steps:\n      - uses: actions/checkout@v4\n        with:\n          submodules: \"recursive\"\n          lfs: true\n\n      # only arweave dependencies are being cached,\n      # those are not updated everyday and this is\n      # unecessary to fetch them everytime.\n      - uses: actions/cache@v4\n        id: cache\n        with:\n          path: |\n            _build/default/lib/accept\n            _build/default/lib/b64fast\n            _build/default/lib/cowboy\n            _build/default/lib/cowlib\n            _build/default/lib/gun\n            _build/default/lib/jiffy\n            _build/default/lib/prometheus\n            _build/default/lib/prometheus_cowboy\n            _build/default/lib/prometheus_httpd\n            _build/default/lib/prometheus_process_collector\n            _build/default/lib/quantile_estimator\n            _build/default/lib/ranch\n            _build/default/lib/.rebar3\n            _build/default/lib/recon\n            _build/default/lib/rocksdb\n            _build/default/plugins/\n            _build/default/plugins/aleppo\n            _build/default/plugins/geas\n            _build/default/plugins/geas_rebar3\n            _build/default/plugins/hex_core\n            _build/default/plugins/katana_code\n            _build/default/plugins/pc\n            _build/default/plugins/.rebar3\n            _build/default/plugins/rebar3_archive_plugin\n            _build/default/plugins/rebar3_elvis_plugin\n            _build/default/plugins/rebar3_hex\n            _build/default/plugins/samovar\n            _build/default/plugins/verl\n            _build/default/plugins/zipper\n          key: deps-cache-${{ hashFiles('rebar.lock') }}\n          restore-keys: |\n            deps-cache-${{ hashFiles('rebar.lock') }}\n\n      - name: Get dependencies\n        if: steps.cache.outputs.cache-hit != 'true'\n        run: ./ar-rebar3 test get-deps\n\n      - uses: actions/cache@v4\n        if: steps.cache.outputs.cache-hit != 'true'\n        with:\n          path: |\n            _build/default/lib/accept\n            _build/default/lib/b64fast\n            _build/default/lib/cowboy\n            _build/default/lib/cowlib\n            _build/default/lib/gun\n            _build/default/lib/jiffy\n            _build/default/lib/prometheus\n            _build/default/lib/prometheus_cowboy\n            _build/default/lib/prometheus_httpd\n            _build/default/lib/prometheus_process_collector\n            _build/default/lib/quantile_estimator\n            _build/default/lib/ranch\n            _build/default/lib/.rebar3\n            _build/default/lib/recon\n            _build/default/lib/rocksdb\n            _build/default/plugins/\n            _build/default/plugins/aleppo\n            _build/default/plugins/geas\n            _build/default/plugins/geas_rebar3\n            _build/default/plugins/hex_core\n            _build/default/plugins/katana_code\n            _build/default/plugins/pc\n            _build/default/plugins/.rebar3\n            _build/default/plugins/rebar3_archive_plugin\n            _build/default/plugins/rebar3_elvis_plugin\n            _build/default/plugins/rebar3_hex\n            _build/default/plugins/samovar\n            _build/default/plugins/verl\n            _build/default/plugins/zipper\n          key: deps-cache-${{ hashFiles('rebar.lock') }}\n\n   
   - name: Compile arweave release\n        run: ./ar-rebar3 default release\n\n      - name: Build arweave test sources\n        run: ./ar-rebar3 test compile\n\n      - name: Build arweave e2e test sources\n        run: ./ar-rebar3 e2e compile\n\n      # some artifacts are compiled and only available\n      # in arweave directory (libraries)\n      - name: Prepare artifacts\n        run: |\n          chmod -R u+w ./_build\n          tar czfp _build.tar.gz ./_build ./bin/arweave\n          tar czfp apps.tar.gz ./apps\n\n      # to avoid reusing artifacts from someone else\n      # and generating issues, a unique artifact is\n      # produced using github checksum.\n      - name: upload artifacts\n        uses: actions/upload-artifact@v4\n        with:\n          name: build-${{ github.sha }}\n          if-no-files-found: error\n          include-hidden-files: true\n          retention-days: 7\n          overwrite: true\n          path: |\n            _build.tar.gz\n            apps.tar.gz\n\n  e2e-tests:\n    needs: build\n    runs-on: [self-hosted, ubuntu, amd64, build-runner]\n    strategy:\n      max-parallel: 4\n      matrix:\n        core_test_mod: [\n            ar_sync_pack_mine_tests,\n            ar_repack_mine_tests,\n            ar_repack_in_place_mine_tests\n          ]\n    steps:\n      - uses: actions/checkout@v4\n        with:\n          submodules: \"recursive\"\n          lfs: true\n\n      - name: Download artifact\n        uses: actions/download-artifact@v4\n        with:\n          name: build-${{ github.sha }}\n\n      # Both artifacts (_build and apps dir) are\n      # required.\n      - name: Extract artifact\n        run: |\n          tar zxfp _build.tar.gz\n          tar zxfp apps.tar.gz\n\n      - name: ${{ matrix.core_test_mod }}.erl\n        id: tests\n        run: bash scripts/github_workflow.sh \"e2e\" \"${{ matrix.core_test_mod }}\"\n\n      # this part of the job produces test artifacts from logs\n      # generated by the tests. It also collects dumps and the files\n      # present in .tmp (temporary arweave data store)\n      - name: upload artifacts in case of failure\n        uses: actions/upload-artifact@v4\n        if: always() && failure()\n        with:\n          name: \"logs-${{ matrix.core_test_mod }}-${{ github.run_attempt }}-${{ job.status }}-${{ runner.name }}-${{ github.sha }}\"\n          retention-days: 7\n          overwrite: true\n          include-hidden-files: true\n          path: |\n            ./logs\n            *.out\n            *.dump\n"
  },
  {
    "path": ".github/workflows/gitstamp.yaml",
    "content": "# See: https://github.com/weavery/gitstamp-action\n---\nname: Gitstamp\non: \n  push:\n    branches:\n      - 'master'\n      - 'release/**'\n      - 'releases/**'\n  pull_request_target:\n    types: [closed]\n\njobs:\n  gitstamp:\n    runs-on: [self-hosted, ubuntu, amd64, build-runner]\n    name: Timestamp commit with Gitstamp\n    steps:\n      - name: Clone repository\n        uses: actions/checkout@v2\n      - name: Submit Gitstamp transaction\n        uses: weavery/gitstamp-action@v1\n        with:\n          wallet-key: ${{ secrets.GITSTAMP_KEYFILE }}\n          commit-link: true\n"
  },
  {
    "path": ".github/workflows/release.yml",
    "content": "######################################################################\n# All releases are generated using this workflow. The goal is to\n# make our life easier when releasing a new version of Arweave.\n# Instead of doing the process manually, some builders will be used\n# to produce the necessary tarballs, create the checksums, packs them\n# together and finally create the release on github side with a\n# message from the repository (in release_notes directory).\n######################################################################\nname: Release\n\non:\n  push:\n    tags:\n      - N.**\n  workflow_dispatch:\n    inputs:\n      tag:\n        required: true\n        type: string\n      make_latest:\n        required: false\n        default: true\n        type: boolean\n      prerelease:\n        required: false\n        default: true\n        type: boolean\n      draft:\n        required: false\n        default: false\n        type: boolean\n\njobs:\n  # prepare ubuntu 22.04 release (jammy) on amd64 arch\n  ubuntu-jammy-release:\n    uses: ./.github/workflows/x-release-linux.yml\n    secrets: inherit\n    with:\n      os_arch: amd64\n      os_release: jammy\n      os_name: ubuntu\n      tag: ${{ github.ref_name || inputs.tag }}\n\n  # prepare ubuntu 24.04 release (noble) on amd64 arch\n  ubuntu-noble-release:\n    uses: ./.github/workflows/x-release-linux.yml\n    secrets: inherit\n    with:\n      os_arch: amd64\n      os_release: noble\n      os_name: ubuntu\n      tag: ${{ github.ref_name || inputs.tag }}\n\n  # prepare rocky 9 release on amd64 arch\n  rockylinux-9-release:\n    uses: ./.github/workflows/x-release-linux.yml\n    secrets: inherit\n    with:\n      os_arch: x86_64\n      os_release: 9\n      os_name: rockylinux\n      tag: ${{ github.ref_name || inputs.tag }}\n\n  # prepare macos release on arm64 arch\n  macos-release:\n    uses: ./.github/workflows/x-release-macos.yml\n    secrets: inherit\n    with:\n      tag: ${{ github.ref_name || inputs.tag }}\n\n  # craft the release using the previous builds\n  release:\n    needs:\n      - ubuntu-jammy-release\n      - ubuntu-noble-release\n      - rockylinux-9-release\n      - macos-release\n\n    permissions:\n      contents: write\n      packages: write\n\n    runs-on:\n      - self-hosted\n      - release-runner\n      - amd64\n\n    steps:\n\n      # a new checkout is required to have the release\n      # message from releases_notes directory.\n      - uses: actions/checkout@v4\n        with:\n          submodules: \"recursive\"\n          lfs: true\n\n      # let fetch ubuntu jammy tarball\n      - name: Download Ubuntu Jammy Release\n        uses: actions/download-artifact@v5\n        with:\n          name: arweave-ubuntu-jammy-amd64\n          path: ./arweave-ubuntu-jammy-amd64\n\n      # let fetch ubuntu noble tarball\n      - name: Download Ubuntu Noble Release\n        uses: actions/download-artifact@v5\n        with:\n          name: arweave-ubuntu-noble-amd64\n          path: ./arweave-ubuntu-noble-amd64\n\n      # let fetch rocky 9 tarball\n      - name: Download Rockylinux 9 Release\n        uses: actions/download-artifact@v5\n        with:\n          name: arweave-rockylinux-9-x86_64\n          path: ./arweave-rockylinux-9-x86_64\n\n      # let fetch macos tarball\n      - name: Download MacOS Release\n        uses: actions/download-artifact@v5\n        with:\n          name: arweave-macos-26-arm64\n          path: ./arweave-macos-26-arm64\n\n      # now this part is a bit tricky. 
it will rename\n      # all tarball using the tag pushed. To avoid\n      # some weird behaviors, the name is sanitized\n      # by removing few symbols and replacing them\n      # with a dash (-).\n      - name: Prepare Release Tarballs and Checksums\n        run: |\n          #!/bin/sh\n          set -eux\n\n          # define variables\n          releasedir=\"$(pwd)/_releases\"\n          ref=${{ github.ref_name }}\n          release_name=$(echo ${ref} | sed -Ee 's!(/|:|@|\\[|\\]|\\(|\\)|\\~)!-!g' -e 's!^N\\.!!')\n\n          # prepare release directory\n          mkdir -p \"${releasedir}\"\n\n          # prepare a release using a directory and a postfix name\n          _prepare_release() {\n            local dir=${1}\n            local name=${2}\n            local checksum\n            cd ${dir}\n\n            # rename arweave tarball\n            cp arweave*.tar.gz ${releasedir}/arweave-${release_name}.${name}.tar.gz\n\n            # prepare checksum file\n            checksum=$(cat arweave*.tar.gz.SHA256 | awk '{ print $1 }')\n            echo \"${checksum} arweave-${release_name}.${name}.tar.gz\" \\\n              > ${releasedir}/arweave-${release_name}.${name}.tar.gz.SHA256\n\n            cd ..\n          }\n\n          # prepare releases for each distribution\n          _prepare_release arweave-ubuntu-jammy-amd64 ubuntu22.x86_64\n          _prepare_release arweave-ubuntu-noble-amd64 ubuntu24.x86_64\n          _prepare_release arweave-rockylinux-9-x86_64 rocky9.x86_64\n          _prepare_release arweave-macos-26-arm64 macos.arm64\n\n          # check if the checksums are correct\n          cd ${releasedir}\n          cat *.SHA256 > checksums.txt\n          cat *.SHA256 > SHA256\n          sha256sum -c checksums.txt\n          sha256sum -c SHA256\n          cd ..\n\n      # Release the version based on the new tag\n      - name: Release the new version on github\n        uses: softprops/action-gh-release@v2\n        if: startsWith(github.ref, 'refs/tags/')\n        with:\n          name: Release ${{ github.ref_name }}\n          body_path: release_notes/${{ github.ref_name }}/README.md\n          files: |\n            _releases/*.tar.gz\n            _releases/checksums.txt\n            _releases/SHA256\n            LICENSE.md\n          make_latest: ${{ inputs.make_latest || true }}\n          prerelease: ${{ inputs.prerelease || true }}\n          draft: ${{ inputs.draft || false }}\n"
  },
  {
    "path": ".github/workflows/test-amd64-ubuntu-22.04.yml",
    "content": "######################################################################\n# Test suite for Ubuntu 22.04. This is the official OS supported by\n# arweave team. The complete test suite should be executed.\n######################################################################\nname: \"Test on arch:amd64 distribution:ubuntu release:22.04\"\n\non:\n  push:\n    branches: [\"**\"]\n  workflow_dispatch:\n\njobs:\n  build:\n    uses: ./.github/workflows/x-build.yml\n    secrets: inherit\n\n  test-canary:\n    needs: [build]\n    uses: ./.github/workflows/x-test-canary.yml\n    secrets: inherit\n\n  common-test:\n    needs: [test-canary]\n    uses: ./.github/workflows/x-common-test.yml\n    secrets: inherit\n\n  test:\n    needs: [test-canary]\n    uses: ./.github/workflows/x-test-full.yml\n    secrets: inherit\n"
  },
  {
    "path": ".github/workflows/test-arm64-macos-26.yml",
    "content": "######################################################################\n# Test suite for MacOS. The support of MacOS is mainly for the VDF\n# part, it should not be required to do the full test suite.\n######################################################################\nname: \"Test on arch:arm64 distribution:macos release:26\"\n\non:\n  push:\n    branches: [\"**\"]\n  workflow_dispatch:\n\njobs:\n  build:\n    uses: ./.github/workflows/x-build.yml\n    secrets: inherit\n    with:\n      os_arch: arm64\n      os_name: macos\n      os_release: 26\n\n  test-canary:\n    needs: [build]\n    uses: ./.github/workflows/x-test-canary.yml\n    secrets: inherit\n    with:\n      os_arch: arm64\n      os_name: macos\n      os_release: 26\n\n  common-test:\n    needs: [test-canary]\n    uses: ./.github/workflows/x-common-test.yml\n    secrets: inherit\n    with:\n      os_arch: arm64\n      os_name: macos\n      os_release: 26\n\n  test:\n    needs: [test-canary]\n    uses: ./.github/workflows/x-test-vdf.yml\n    secrets: inherit\n    with:\n      os_arch: arm64\n      os_name: macos\n      os_release: 26\n"
  },
  {
    "path": ".github/workflows/x-build.yml",
    "content": "######################################################################\n# Common way to build arweave. This template should be compatible with\n# any kind of systems but it expects the images/vm/servers used to\n# compile already have every requirement installed.\n######################################################################\nname: \"arweave-build-template\"\n\non:\n  workflow_call:\n    inputs:\n      os_arch:\n        description: \"operating system architecture\"\n        default: \"amd64\"\n        type: \"string\"\n      os_name:\n        description: \"operating system name\"\n        default: \"ubuntu\"\n        type: \"string\"\n      os_release:\n        description: \"operating system release\"\n        default: \"22.04\"\n        type: \"string\"\n\nenv:\n  cache_key: ${{ inputs.os_arch }}-${{ inputs.os_name }}-${{ inputs.os_release }}\n\njobs:\n  build-template:\n    runs-on:\n      - self-hosted\n      - build-runner\n      - ${{ inputs.os_arch }}\n      - ${{ inputs.os_name }}\n      - ${{ inputs.os_release }}\n\n    steps:\n      # On standalone runners, it is always required to cleanup\n      # first. By default executed on all runners, to start with\n      # clean environment.\n      - name: cleanup\n        if: always()\n        run: |\n          rm -rf \"${GITHUB_WORKSPACE}\" && mkdir -p \"${GITHUB_WORKSPACE}\"\n\n      # checkout arweave repository and extract it in\n      # working directory.\n      - name: checkout arweave repository\n        uses: actions/checkout@v4\n        with:\n          submodules: \"recursive\"\n          lfs: true\n\n      # before doing anything, we would like to know what kind of\n      # software and libraries are present on the system for debugging\n      # purpose\n      - name: get software and libraries information\n        run: |\n          sh ./scripts/system_info.sh > _version.yaml\n          cat _version.yaml\n\n      # only arweave dependencies are being cached,\n      # those are not updated everyday and this is\n      # unecessary to fetch them everytime.\n      - name: extract cache\n        uses: actions/cache@v4\n        id: cache\n        with:\n          path: |\n            _build/default/lib/accept\n            _build/default/lib/b64fast\n            _build/default/lib/cowboy\n            _build/default/lib/cowlib\n            _build/default/lib/gun\n            _build/default/lib/jiffy\n            _build/default/lib/prometheus\n            _build/default/lib/prometheus_cowboy\n            _build/default/lib/prometheus_httpd\n            _build/default/lib/prometheus_process_collector\n            _build/default/lib/quantile_estimator\n            _build/default/lib/ranch\n            _build/default/lib/.rebar3\n            _build/default/lib/recon\n            _build/default/lib/rocksdb\n            _build/default/plugins/\n            _build/default/plugins/aleppo\n            _build/default/plugins/geas\n            _build/default/plugins/geas_rebar3\n            _build/default/plugins/hex_core\n            _build/default/plugins/katana_code\n            _build/default/plugins/pc\n            _build/default/plugins/.rebar3\n            _build/default/plugins/rebar3_archive_plugin\n            _build/default/plugins/rebar3_elvis_plugin\n            _build/default/plugins/rebar3_hex\n            _build/default/plugins/samovar\n            _build/default/plugins/verl\n            _build/default/plugins/zipper\n          key: deps-cache-${{ hashFiles('rebar.lock') }}-${{ env.cache_key }}\n         
 restore-keys: |\n            deps-cache-${{ hashFiles('rebar.lock') }}-${{ env.cache_key }}\n\n      - name: Get dependencies\n        if: steps.cache.outputs.cache-hit != 'true'\n        run: ./ar-rebar3 test get-deps\n\n      - uses: actions/cache@v4\n        if: steps.cache.outputs.cache-hit != 'true'\n        with:\n          path: |\n            _build/default/lib/accept\n            _build/default/lib/b64fast\n            _build/default/lib/cowboy\n            _build/default/lib/cowlib\n            _build/default/lib/gun\n            _build/default/lib/jiffy\n            _build/default/lib/prometheus\n            _build/default/lib/prometheus_cowboy\n            _build/default/lib/prometheus_httpd\n            _build/default/lib/prometheus_process_collector\n            _build/default/lib/quantile_estimator\n            _build/default/lib/ranch\n            _build/default/lib/.rebar3\n            _build/default/lib/recon\n            _build/default/lib/rocksdb\n            _build/default/plugins/\n            _build/default/plugins/aleppo\n            _build/default/plugins/geas\n            _build/default/plugins/geas_rebar3\n            _build/default/plugins/hex_core\n            _build/default/plugins/katana_code\n            _build/default/plugins/pc\n            _build/default/plugins/.rebar3\n            _build/default/plugins/rebar3_archive_plugin\n            _build/default/plugins/rebar3_elvis_plugin\n            _build/default/plugins/rebar3_hex\n            _build/default/plugins/samovar\n            _build/default/plugins/verl\n            _build/default/plugins/zipper\n          key: deps-cache-${{ hashFiles('rebar.lock') }}-${{ env.cache_key }}\n\n      - name: Compile arweave release\n        run: ./ar-rebar3 default release\n\n      - name: Build arweave test sources\n        run: ./ar-rebar3 test release\n\n      # some artifacts are compiled and only available\n      # in arweave directory (libraries).\n      - name: Prepare artifacts\n        run: |\n          chmod -R u+w ./_build\n\n          # rebar is using a lot of absolute symlink,\n          # this can generate issue on standalone worker\n          # to avoid this problem, links must be\n          # deferenced with -h flag.\n          tar czfhp _build.tar.gz ./_build ./bin/arweave\n          tar czfhp apps.tar.gz ./apps\n\n      # to avoid reusing artifacts from someone else\n      # and generating issues, an unique artifact is\n      # produced using github checksum.\n      - name: upload artifacts\n        uses: actions/upload-artifact@v4\n        with:\n          name: build-${{ github.sha }}-${{ env.cache_key }}\n          if-no-files-found: error\n          retention-days: 1\n          overwrite: true\n          path: |\n            _version.yaml\n            _build.tar.gz\n            apps.tar.gz\n"
  },
  {
    "path": ".github/workflows/x-common-test.yml",
    "content": "######################################################################\n# Full Arweave Test Suite. Mostly used on Linux like systems.\n######################################################################\nname: \"arweave-common-test-suite\"\n\non:\n  workflow_call:\n    inputs:\n      os_arch:\n        description: \"operating system architecture\"\n        default: \"amd64\"\n        type: \"string\"\n      os_name:\n        description: \"operating system name\"\n        default: \"ubuntu\"\n        type: \"string\"\n      os_release:\n        description: \"operating system release\"\n        default: \"22.04\"\n        type: \"string\"\n\nenv:\n  cache_key: ${{ inputs.os_arch }}-${{ inputs.os_name }}-${{ inputs.os_release }}\n\n####################################################################\n# Test modules (note: that _tests are implicitly run by a matching\n# prefix name\n####################################################################\njobs:\n  common-test-suite:\n    runs-on:\n      - self-hosted\n      - build-runner\n      - ${{ inputs.os_arch }}\n      - ${{ inputs.os_name }}\n      - ${{ inputs.os_release }}\n    steps:\n      - name: cleanup\n        if: always()\n        run: |\n          rm -rf \"${GITHUB_WORKSPACE}\" && mkdir -p \"${GITHUB_WORKSPACE}\"\n\n      - uses: actions/checkout@v4\n        with:\n          submodules: \"recursive\"\n          lfs: true\n\n      - name: Download artifact\n        uses: actions/download-artifact@v4\n        with:\n          name: build-${{ github.sha }}-${{ env.cache_key }}\n\n      # Both artifacts (_build and apps dir) are\n      # required.\n      - name: Extract artifact\n        run: |\n          tar zxfp _build.tar.gz\n          tar zxfp apps.tar.gz\n\n      # This is a temporary fix to prevent test failures when\n      # executing the test suite on a system like MacOS. The libraries\n      # must be rebuilt due to CMake full path usage, if CMake is\n      # re-used on another user (then with a different full path), it\n      # will fail. Here, we force to clean all external libraries\n      # objects before executing the test.\n      - name: temporary external libraries fix\n        run: make -C apps/arweave/lib clean\n\n      # execute common test suite with test profile and coverage\n      # enabled.\n      - name: run common test suite\n        id: common-tests\n        run: |\n          ./rebar3 as test ct \\\n            --cover \\\n            --verbose \\\n            --logdir _ct_logs\n\n      # execute eunit in standalone on only a small subset of\n      # application available. 
Because arweave main application is\n      # using eunit for its test suite in a specific way, testing this\n      # application will crash the test suite.\n      - name: run eunit test suite\n        id: eunit\n        run: |\n          ./rebar3 as test eunit \\\n            --cover \\\n            --application arweave_config\n\n      # generate coverage report, it will be stored in\n      # _build/test/cover directory\n      - name: run cover\n        id: cover\n        run: |\n          ./rebar3 as test cover \\\n            --verbose\n\n      # generate markdown/html like report to have a quick view of\n      # what happened during the tests\n      - name: generate workflow summary\n        run: |-\n          if test -f ./_build/test/cover/index.html\n          then\n            echo \"# Coverage Summary\" >> $GITHUB_STEP_SUMMARY\n            echo >> $GITHUB_STEP_SUMMARY\n            cat ./_build/test/cover/index.html >> $GITHUB_STEP_SUMMARY\n          fi\n\n          if test -d ./_build/test/surefire\n          then\n            echo \"# Eunit Report\" >> $GITHUB_STEP_SUMMARY\n            echo >> $GITHUB_STEP_SUMMARY\n            for f in ./_build/test/surefire/*.xml\n            do\n              ./scripts/surefire_to_html.py \"${f}\" >> $GITHUB_STEP_SUMMARY\n            done\n          fi\n\n      # upload test coverage report as artifact.\n      - name: upload coverage report\n        uses: actions/upload-artifact@v4\n        with:\n          name: \"coverage-common_test-${{ github.run_attempt }}-${{ job.status }}-${{ runner.name }}-${{ github.sha }}\"\n          retention-days: 7\n          overwrite: true\n          include-hidden-files: true\n          path: |\n            ./_build/test/cover\n            ./_build/test/surefire\n\n      # this part of the job produces test artifacts from logs\n      # generated by the tests. It also collect dumps and the files\n      # present in .tmp (temporary arweave data store)\n      - name: upload artifacts in case of failure\n        uses: actions/upload-artifact@v4\n        if: always()\n        with:\n          name: \"logs-common_test-${{ github.run_attempt }}-${{ job.status }}-${{ runner.name }}-${{ github.sha }}\"\n          retention-days: 7\n          overwrite: true\n          include-hidden-files: true\n          path: |\n            ./_ct_logs\n            ./logs\n\n\n"
  },
  {
    "path": ".github/workflows/x-release-linux.yml",
    "content": "######################################################################\n# Release template for Linux based systems, including at this time\n# Ubuntu and Rockylinux.\n######################################################################\nname: \"arweave-release-linux-template\"\n\non:\n  workflow_call:\n    inputs:\n      tag:\n        required: true\n        type: string\n      os_arch:\n        description: \"operating system architecture\"\n        default: \"amd64\"\n        type: \"string\"\n      os_name:\n        description: \"operating system name\"\n        default: \"ubuntu\"\n        type: \"string\"\n      os_release:\n        description: \"operating system release\"\n        default: \"22.04\"\n        type: \"string\"\n\njobs:\n  linux-release:\n    runs-on:\n      - self-hosted\n      - release-runner\n      - ${{ inputs.os_arch }}\n      - ${{ inputs.os_name }}\n      - ${{ inputs.os_release }}\n    steps:\n      - name: cleanup\n        if: always()\n        run: |\n          rm -rf \"${GITHUB_WORKSPACE}\" && mkdir -p \"${GITHUB_WORKSPACE}\"\n\n      - uses: actions/checkout@v4\n        with:\n          submodules: \"recursive\"\n          lfs: true\n\n      - name: Create Arweave Release\n        run: |\n          set -eux\n          ./rebar3 as prod tar\n          f=$(ls _build/prod/rel/arweave/arweave-*.tar.gz)\n          sha256sum ${f} > ${f}.SHA256\n\n      - name: Prepare tarball tests\n        run: |\n          set -eux\n          f=$(ls ${PWD}/_build/prod/rel/arweave/arweave-*.tar.gz)\n          mkdir _extract\n          cd _extract\n          tar zxf \"${f}\"\n\n      - name: Test vdf openssl\n        run: |\n          cd _extract\n          ./bin/arweave benchmark vdf mode openssl verify true 2>&1 \\\n            | grep -E \"^VDF step computed in .* seconds.\"\n\n      - name: Test vdf fused\n        run: |\n          cd _extract\n          ./bin/arweave benchmark vdf mode openssl verify true 2>&1 \\\n            | grep -E \"^VDF step computed in .* seconds.\"\n\n      - name: Test create rsa key\n        run: |\n          cd _extract\n          mkdir _rsa\n          set +e\n          (./bin/arweave wallet create rsa _rsa 2>&1) >/dev/null\n          set -e\n          test 1 = $(ls _rsa/wallets/arweave_keyfile*.json | wc -l)\n\n      - name: Test create ecdsa\n        run: |\n          cd _extract\n          mkdir _ecdsa\n          set +e\n          (./bin/arweave wallet create ecdsa _ecdsa 2>&1) >/dev/null\n          set -e\n          test 1 = $(ls _ecdsa/wallets/arweave_keyfile*.json | wc -l)\n\n      - name: Test arweave execution\n        run: |\n          cd _extract\n          ./bin/arweave foreground 2>&1 \\\n            | grep -Ee '^Usage: arweave-server' \\\n                   -e '^Compatible with network: arweave.N.1'\n\n      - name: Cleanup tests\n        if: always()\n        run: |\n          rm -rf _extract\n\n      - name: Upload Arweave artifacts\n        uses: actions/upload-artifact@v4\n        with:\n          name: arweave-${{ inputs.os_name }}-${{ inputs.os_release }}-${{ inputs.os_arch }}\n          if-no-files-found: error\n          path: |\n            _build/prod/rel/arweave/arweave-*.tar.gz\n            _build/prod/rel/arweave/arweave-*.tar.gz.SHA256\n"
  },
  {
    "path": ".github/workflows/x-release-macos.yml",
    "content": "######################################################################\n# release template for MacOS.\n######################################################################\nname: \"arweave-release-macos-template\"\n\non:\n  workflow_call:\n    inputs:\n      tag:\n        required: true\n        type: string\n      os_arch:\n        description: \"operating system architecture\"\n        default: \"arm64\"\n        type: \"string\"\n      os_name:\n        description: \"operating system name\"\n        default: \"macos\"\n        type: \"string\"\n      os_release:\n        description: \"operating system release\"\n        default: \"26\"\n        type: \"string\"\n\njobs:\n  macos-release:\n    runs-on:\n      - self-hosted\n      - release-runner\n      - ${{ inputs.os_arch }}\n      - ${{ inputs.os_name }}\n\n    steps:\n      - name: cleanup\n        if: always()\n        run: |\n          rm -rf \"${GITHUB_WORKSPACE}\" && mkdir -p \"${GITHUB_WORKSPACE}\"\n\n      - uses: actions/checkout@v4\n        with:\n          submodules: \"recursive\"\n          lfs: true\n\n      - name: Create Arweave Release\n        run: |\n          set -eux\n          ./rebar3 as prod tar\n          f=$(ls _build/prod/rel/arweave/arweave-*.tar.gz)\n          sha256sum ${f} > ${f}.SHA256\n\n      - name: Prepare tarball tests\n        run: |\n          set -eux\n          f=$(ls ${PWD}/_build/prod/rel/arweave/arweave-*.tar.gz)\n          mkdir _extract\n          cd _extract\n          tar zxf \"${f}\"\n\n      - name: Test vdf openssl\n        run: |\n          cd _extract\n          ./bin/arweave benchmark vdf mode openssl verify true 2>&1 \\\n            | grep -E \"^VDF step computed in .* seconds.\"\n\n      - name: Test vdf fused\n        run: |\n          cd _extract\n          ./bin/arweave benchmark vdf mode openssl verify true 2>&1 \\\n            | grep -E \"^VDF step computed in .* seconds.\" 2>&1 \\\n\n      - name: Test vdf hiopt_m4\n        run: |\n          cd _extract\n          ./bin/arweave benchmark vdf mode openssl hiopt_m4 true 2>&1 \\\n            | grep -E \"^VDF step computed in .* seconds.\"\n\n      - name: Test create rsa key\n        run: |\n          cd _extract\n          mkdir _rsa\n          set +e\n          (./bin/arweave wallet create rsa _rsa 2>&1) >/dev/null\n          set -e\n          test 1 = $(ls _rsa/wallets/arweave_keyfile*.json | wc -l)\n\n      - name: Test create ecdsa\n        run: |\n          cd _extract\n          mkdir _ecdsa\n          set +e\n          (./bin/arweave wallet create ecdsa _ecdsa 2>&1) >/dev/null\n          set -e\n          test 1 = $(ls _ecdsa/wallets/arweave_keyfile*.json | wc -l)\n\n      - name: Test arweave execution\n        run: |\n          cd _extract\n          ./bin/arweave foreground 2>&1 \\\n            | grep -Ee '^Usage: arweave-server' \\\n                   -e '^Compatible with network: arweave.N.1'\n\n      - name: Cleanup tests\n        if: always()\n        run: |\n          rm -rf _extract\n\n      - name: Upload Arweave artifacts\n        uses: actions/upload-artifact@v4\n        with:\n          name: arweave-${{ inputs.os_name }}-${{ inputs.os_release }}-${{ inputs.os_arch }}\n          if-no-files-found: error\n          path: |\n            _build/prod/rel/arweave/arweave-*.tar.gz\n            _build/prod/rel/arweave/arweave-*.tar.gz.SHA256\n"
  },
  {
    "path": ".github/workflows/x-test-canary.yml",
    "content": "######################################################################\n# A canary test suite, checking if the test suite is correctly\n# working.\n######################################################################\nname: \"arweave-test-suite-canary-template\"\n\non:\n  workflow_call:\n    inputs:\n      os_arch:\n        description: \"operating system architecture\"\n        default: \"amd64\"\n        type: \"string\"\n      os_name:\n        description: \"operating system name\"\n        default: \"ubuntu\"\n        type: \"string\"\n      os_release:\n        description: \"operating system release\"\n        default: \"22.04\"\n        type: \"string\"\n\nenv:\n  cache_key: ${{ inputs.os_arch }}-${{ inputs.os_name }}-${{ inputs.os_release }}\n\n####################################################################\n# Canary testing, should fail.\n####################################################################\njobs:\n  canary:\n    runs-on:\n      - self-hosted\n      - build-runner\n      - ${{ inputs.os_arch }}\n      - ${{ inputs.os_name }}\n      - ${{ inputs.os_release }}\n\n    steps:\n      - name: cleanup\n        if: always()\n        run: |\n          rm -rf \"${GITHUB_WORKSPACE}\" && mkdir -p \"${GITHUB_WORKSPACE}\"\n\n      - uses: actions/checkout@v4\n        with:\n          submodules: \"recursive\"\n          lfs: true\n\n      - name: Download artifact\n        uses: actions/download-artifact@v4\n        with:\n          name: build-${{ github.sha }}-${{ env.cache_key }}\n\n      # Both artifacts (_build and apps dir) are\n      # required.\n      - name: Extract artifact\n        run: |\n          tar zxfp _build.tar.gz\n          tar zxfp apps.tar.gz\n\n      - id: canary\n        name: ar_canary.erl\n        continue-on-error: true\n        run: bash scripts/github_workflow.sh \"tests\" \"ar_canary\"\n\n      - name: should fail\n        run: |\n          if test \"${{ steps.canary.outcome }}\" = \"failure\"\n          then\n            exit 0\n          else\n            exit 1\n          fi\n"
  },
  {
    "path": ".github/workflows/x-test-full.yml",
    "content": "######################################################################\n# Full Arweave Test Suite. Mostly used on Linux like systems.\n######################################################################\nname: \"arweave-test-suite-full\"\n\non:\n  workflow_call:\n    inputs:\n      os_arch:\n        description: \"operating system architecture\"\n        default: \"amd64\"\n        type: \"string\"\n      os_name:\n        description: \"operating system name\"\n        default: \"ubuntu\"\n        type: \"string\"\n      os_release:\n        description: \"operating system release\"\n        default: \"22.04\"\n        type: \"string\"\n\nenv:\n  cache_key: ${{ inputs.os_arch }}-${{ inputs.os_name }}-${{ inputs.os_release }}\n\n####################################################################\n# Test modules (note: that _tests are implicitly run by a matching\n# prefix name\n####################################################################\njobs:\n  full-test-modules:\n    runs-on:\n      - self-hosted\n      - build-runner\n      - ${{ inputs.os_arch }}\n      - ${{ inputs.os_name }}\n      - ${{ inputs.os_release }}\n    outputs:\n      core_test_mods: ${{ steps.core-test-mods.outputs.core_test_mods }}\n    steps:\n      - name: cleanup\n        if: always()\n        run: |\n          rm -rf \"${GITHUB_WORKSPACE}\" && mkdir -p \"${GITHUB_WORKSPACE}\"\n\n      - uses: actions/checkout@v4\n        with:\n          submodules: \"recursive\"\n\n      - name: load full test modules\n        id: core-test-mods\n        run: |\n          echo \"core_test_mods=$(bash scripts/list_test_modules.sh json)\" >> \"$GITHUB_OUTPUT\"\n\n  eunit-tests-suite:\n    needs: full-test-modules\n    runs-on:\n      - self-hosted\n      - build-runner\n      - ${{ inputs.os_arch }}\n      - ${{ inputs.os_name }}\n      - ${{ inputs.os_release }}\n\n    strategy:\n      fail-fast: false\n      max-parallel: 12\n      matrix:\n        core_test_mod: ${{ fromJson(needs.full-test-modules.outputs.core_test_mods) }}\n    steps:\n      - name: cleanup\n        if: always()\n        run: |\n          rm -rf \"${GITHUB_WORKSPACE}\" && mkdir -p \"${GITHUB_WORKSPACE}\"\n\n      - uses: actions/checkout@v4\n        with:\n          submodules: \"recursive\"\n          lfs: true\n\n      - name: Download artifact\n        uses: actions/download-artifact@v4\n        with:\n          name: build-${{ github.sha }}-${{ env.cache_key }}\n\n      # Both artifacts (_build and apps dir) are\n      # required.\n      - name: Extract artifact\n        run: |\n          tar zxfp _build.tar.gz\n          tar zxfp apps.tar.gz\n\n      - name: ${{ matrix.core_test_mod }}.erl\n        id: tests\n        run: bash scripts/github_workflow.sh \"tests\" \"${{ matrix.core_test_mod }}\"\n\n      # this part of the job produces test artifacts from logs\n      # generated by the tests. 
It also collect dumps and the files\n      # present in .tmp (temporary arweave data store)\n      - name: upload artifacts in case of failure\n        uses: actions/upload-artifact@v4\n        if: always() && failure()\n        with:\n          name: \"logs-${{ matrix.core_test_mod }}-${{ github.run_attempt }}-${{ job.status }}-${{ runner.name }}-${{ github.sha }}\"\n          retention-days: 7\n          overwrite: true\n          include-hidden-files: true\n          path: |\n            ./logs\n            *.out\n            *.dump\n  notebook-pricing-localnet:\n    runs-on:\n      - self-hosted\n      - build-runner\n      - ${{ inputs.os_arch }}\n      - ${{ inputs.os_name }}\n      - ${{ inputs.os_release }}\n    steps:\n      - name: cleanup\n        if: always()\n        run: |\n          rm -rf \"${GITHUB_WORKSPACE}\" && mkdir -p \"${GITHUB_WORKSPACE}\"\n      - uses: actions/checkout@v4\n        with:\n          submodules: \"recursive\"\n          lfs: true\n      - name: Fetch Git LFS objects\n        run: |\n          git lfs install --local\n          git lfs pull\n          git lfs checkout\n      - name: Setup Python\n        uses: actions/setup-python@v5\n        with:\n          python-version: \"3.11\"\n      - name: Download artifact\n        uses: actions/download-artifact@v4\n        with:\n          name: build-${{ github.sha }}-${{ env.cache_key }}\n      # Both artifacts (_build and apps dir) are\n      # required.\n      - name: Extract artifact\n        run: |\n          tar zxfp _build.tar.gz\n          tar zxfp apps.tar.gz\n      - name: Setup notebook env\n        run: scripts/setup_notebook_env.sh\n      - name: Run pricing transition notebook headless\n        run: scripts/run_notebook_headless.sh pricing_transition_localnet\n  notebook-autoredenomination-localnet:\n    runs-on:\n      - self-hosted\n      - build-runner\n      - ${{ inputs.os_arch }}\n      - ${{ inputs.os_name }}\n      - ${{ inputs.os_release }}\n    steps:\n      - name: cleanup\n        if: always()\n        run: |\n          rm -rf \"${GITHUB_WORKSPACE}\" && mkdir -p \"${GITHUB_WORKSPACE}\"\n      - uses: actions/checkout@v4\n        with:\n          submodules: \"recursive\"\n          lfs: true\n      - name: Fetch Git LFS objects\n        run: |\n          git lfs install --local\n          git lfs pull\n          git lfs checkout\n      - name: Setup Python\n        uses: actions/setup-python@v5\n        with:\n          python-version: \"3.11\"\n      - name: Download artifact\n        uses: actions/download-artifact@v4\n        with:\n          name: build-${{ github.sha }}-${{ env.cache_key }}\n      # Both artifacts (_build and apps dir) are\n      # required.\n      - name: Extract artifact\n        run: |\n          tar zxfp _build.tar.gz\n          tar zxfp apps.tar.gz\n      - name: Setup notebook env\n        run: scripts/setup_notebook_env.sh\n      - name: Run autoredenomination notebook headless\n        run: scripts/run_notebook_headless.sh autoredenomination_localnet\n\n"
  },
  {
    "path": ".github/workflows/x-test-on-demand.yml",
    "content": "######################################################################\n# Full Arweave Test Suite. On demand.\n######################################################################\nname: \"arweave-test-suite-full\"\n\non:\n  workflow_call:\n    inputs:\n      test:\n        type: \"string\"\n        required: true\n      os_arch:\n        description: \"operating system architecture\"\n        default: \"amd64\"\n        type: \"string\"\n      os_name:\n        description: \"operating system name\"\n        default: \"ubuntu\"\n        type: \"string\"\n      os_release:\n        description: \"operating system release\"\n        default: \"22.04\"\n        type: \"string\"\n\nenv:\n  cache_key: ${{ inputs.os_arch }}-${{ inputs.os_name }}-${{ inputs.os_release }}\n\n####################################################################\n# Test modules (note: that _tests are implicitly run by a matching\n# prefix name\n####################################################################\njobs:\n  eunit-tests-suite:\n    runs-on:\n      - self-hosted\n      - build-runner\n      - ${{ inputs.os_arch }}\n      - ${{ inputs.os_name }}\n      - ${{ inputs.os_release }}\n\n    strategy:\n      fail-fast: false\n\n    steps:\n      - name: cleanup\n        if: always()\n        run: |\n          rm -rf \"${GITHUB_WORKSPACE}\" && mkdir -p \"${GITHUB_WORKSPACE}\"\n\n      - uses: actions/checkout@v4\n        with:\n          submodules: \"recursive\"\n          lfs: true\n\n      - name: Download artifact\n        uses: actions/download-artifact@v4\n        with:\n          name: build-${{ github.sha }}-${{ env.cache_key }}\n\n      # Both artifacts (_build and apps dir) are\n      # required.\n      - name: Extract artifact\n        run: |\n          tar zxfp _build.tar.gz\n          tar zxfp apps.tar.gz\n\n      - name: ${{ inputs.test }}.erl\n        id: tests\n        run: bash scripts/github_workflow.sh \"tests\" \"${{ inputs.test }}\"\n\n      # this part of the job produces test artifacts from logs\n      # generated by the tests. It also collect dumps and the files\n      # present in .tmp (temporary arweave data store)\n      - name: upload artifacts in case of failure\n        uses: actions/upload-artifact@v4\n        if: always() && failure()\n        with:\n          name: \"logs-${{ inputs.test }}-${{ github.run_attempt }}-${{ job.status }}-${{ runner.name }}-${{ github.sha }}\"\n          retention-days: 7\n          overwrite: true\n          include-hidden-files: true\n          path: |\n            ./logs\n            *.out\n            *.dump\n"
  },
  {
    "path": ".github/workflows/x-test-vdf.yml",
    "content": "######################################################################\n# Arweave Test Suite dedicated for VDF testing.\n######################################################################\nname: \"arweave-test-suite-vdf\"\n\non:\n  workflow_call:\n    inputs:\n      os_arch:\n        description: \"operating system architecture\"\n        default: \"amd64\"\n        type: \"string\"\n      os_name:\n        description: \"operating system name\"\n        default: \"ubuntu\"\n        type: \"string\"\n      os_release:\n        description: \"operating system release\"\n        default: \"22.04\"\n        type: \"string\"\n\nenv:\n  cache_key: ${{ inputs.os_arch }}-${{ inputs.os_name }}-${{ inputs.os_release }}\n\n####################################################################\n# Test modules (note: that _tests are implicitly run by a matching\n# prefix name\n####################################################################\njobs:\n  eunit-tests-suite:\n    runs-on:\n      - self-hosted\n      - build-runner\n      - ${{ inputs.os_arch }}\n      - ${{ inputs.os_name }}\n      - ${{ inputs.os_release }}\n\n    strategy:\n      fail-fast: false\n      max-parallel: 12\n      matrix:\n        core_test_mod: [\n            # modules\n            ar,\n            ar_block,\n            ar_block_cache,\n            ar_chain_stats,\n            ar_chunk_copy,\n            ar_chunk_storage,\n            ar_deep_hash,\n            ar_device_lock,\n            ar_diff_dag,\n            ar_entropy_gen,\n            ar_entropy_storage,\n            ar_ets_intervals,\n            ar_events,\n            ar_footprint_record,\n            ar_inflation,\n            ar_intervals,\n            ar_join,\n            ar_kv,\n            arweave_limiter_group,\n            arweave_limiter_metrics_collector,\n            ar_merkle,\n            ar_node,\n            ar_node_utils,\n            ar_nonce_limiter,\n            ar_patricia_tree,\n            ar_peers,\n            ar_peer_intervals,\n            ar_pricing,\n            ar_retarget,\n            ar_serialize,\n            ar_tx_db,\n            ar_unbalanced_merkle,\n            ar_util,\n            ar_wallet,\n            ar_webhook,\n            ar_pool,\n            # standard\n            ar_base64_compatibility_tests,\n            ar_config_tests,\n            ar_difficulty_tests,\n            ar_forced_validation_tests,\n            ar_header_sync_tests,\n            ar_http_iface_tests,\n            ar_http_util_tests,\n            ar_info_tests,\n            ar_mempool_tests,\n            ar_mine_vdf_tests,\n            ar_semaphore_tests,\n            ar_start_from_block_tests,\n            ar_tx_blacklist_tests,\n            ar_vdf_tests,\n            # long running\n            ar_vdf_block_validation_tests,\n            ar_vdf_server_tests,\n            ar_vdf_external_update_tests,\n            ar_cli_parser\n          ]\n    steps:\n      - name: cleanup\n        if: always()\n        run: |\n          rm -rf \"${GITHUB_WORKSPACE}\" && mkdir -p \"${GITHUB_WORKSPACE}\"\n\n      - uses: actions/checkout@v4\n        with:\n          submodules: \"recursive\"\n          lfs: true\n\n      - name: Download artifact\n        uses: actions/download-artifact@v4\n        with:\n          name: build-${{ github.sha }}-${{ env.cache_key }}\n\n      # Both artifacts (_build and apps dir) are\n      # required.\n      - name: Extract artifact\n        run: |\n          tar zxfp _build.tar.gz\n          tar zxfp 
apps.tar.gz\n\n      - name: ${{ matrix.core_test_mod }}.erl\n        id: tests\n        run: bash scripts/github_workflow.sh \"tests\" \"${{ matrix.core_test_mod }}\"\n\n      # this part of the job produces test artifacts from logs\n      # generated by the tests. It also collects dumps and the files\n      # present in .tmp (temporary arweave data store)\n      - name: upload artifacts in case of failure\n        uses: actions/upload-artifact@v4\n        if: always() && failure()\n        with:\n          name: \"logs-${{ matrix.core_test_mod }}-${{ github.run_attempt }}-${{ job.status }}-${{ runner.name }}-${{ github.sha }}\"\n          retention-days: 7\n          overwrite: true\n          include-hidden-files: true\n          path: |\n            ./logs\n            *.out\n            *.dump\n"
  },
  {
    "path": ".gitignore",
    "content": "*.beam\n*.log\n*.dat\n.vscode\n*.out\n*.code-workspace\n.arweave.plt\ntestlog\ndebug_logs\napps/arweave/priv/tls\napps/arweave/priv/*.so\n_build\nreleases\nlib\n/result\n.rebar3\napps/arweave/c_src/**/*.o\napps/arweave/c_src/tests/tests\ndata_test_*\nerl_crash.dump\ntags\nmetrics\nblocks\ntxs\nhash_lists\nwallet_lists\n/wallets\nlogs*\n!bin/logs\nebin\nmetrics_*\nrelease/output\n.DS_Store\n.tmp\nnode_modules\nscreenlog.0\n_*\n*.swp\n*~\nlocalnet_snapshot/wallets/arweave_keyfile_*\n!**/_*.json\n.jupyter/migrated\nnotebooks/.ipynb_checkpoints\n\n"
  },
  {
    "path": ".gitmodules",
    "content": "[submodule \"lib/RandomX\"]\n\tpath = apps/arweave/lib/RandomX\n\turl = https://github.com/ArweaveTeam/RandomX.git\n[submodule \"apps/arweave/lib/secp256k1\"]\n\tpath = apps/arweave/lib/secp256k1\n\turl = https://github.com/bitcoin-core/secp256k1\n[submodule \"apps/arweave/lib/openssl-sha-lite\"]\n\tpath = apps/arweave/lib/openssl-sha-lite\n\turl = https://github.com/ArweaveTeam/openssl-sha-lite\n"
  },
  {
    "path": ".jupyter/jupyter_server_config.py",
    "content": "import os\n\n\ndef _strip_outputs_pre_save(model, **kwargs):\n    if os.getenv(\"NOTEBOOK_SAVE_OUTPUTS\") == \"1\":\n        return\n    if model.get(\"type\") != \"notebook\":\n        return\n\n    content = model.get(\"content\")\n    if not content:\n        return\n\n    for cell in content.get(\"cells\", []):\n        if cell.get(\"cell_type\") != \"code\":\n            continue\n        cell[\"outputs\"] = []\n        cell[\"execution_count\"] = None\n        metadata = cell.get(\"metadata\")\n        if isinstance(metadata, dict):\n            metadata.pop(\"execution\", None)\n\n\nc.FileContentsManager.pre_save_hook = _strip_outputs_pre_save\n"
  },
  {
    "path": "CANARY.md",
    "content": "# Arweave Team Warrant Canary\n\n- The Arweave Team has not been contacted by any law enforcement officials regarding the project. Last update: 22 March 2024.\n\n- The Arweave Team has not been asked to break the encryption of the system by any party. Last update: 22 March 2024.\n\n- The Arweave Team has not been asked to reveal the identities of any backers. Last update: 22 March 2024.\n\n- The Arweave Team is not in any way under duress. Last update: 22 March 2024.\n"
  },
  {
    "path": "CONTRIBUTING.md",
    "content": "# Contributing\n\nThis is a quick overview for what you should know when contributing to this Git repository.\n\n - There is a code style guide in `arweave_styleguide.md`. Please note that we're using tabs for indentation.\n - Make sure the tests pass (see [README](README.md) for how to run the tests).\n - You can discuss development and get help from the Arweave organization and community in the `#dev` channel on [our Discord server](https://discord.gg/3UTNZky).\n\n## Workflow\n\n 1. Fork the main Git repo `https://github.com/ArweaveTeam/arweave.git`\n 2. Branch out from `master`.\n 3. Add your changes.\n 4. Run the tests (see above).\n 5. Rebase your branch on the upstream `master` if the upstream `master` has moved since you branched out.\n 6. Create a PR back to the upstream `master`.\n\nHappy hacking! :)\n"
  },
  {
    "path": "LICENSE.md",
    "content": "                    GNU GENERAL PUBLIC LICENSE\n                       Version 2, June 1991\n\n Copyright (C) 1989, 1991 Free Software Foundation, Inc., <http://fsf.org/>\n 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n Everyone is permitted to copy and distribute verbatim copies\n of this license document, but changing it is not allowed.\n\n                            Preamble\n\n  The licenses for most software are designed to take away your\nfreedom to share and change it.  By contrast, the GNU General Public\nLicense is intended to guarantee your freedom to share and change free\nsoftware--to make sure the software is free for all its users.  This\nGeneral Public License applies to most of the Free Software\nFoundation's software and to any other program whose authors commit to\nusing it.  (Some other Free Software Foundation software is covered by\nthe GNU Lesser General Public License instead.)  You can apply it to\nyour programs, too.\n\n  When we speak of free software, we are referring to freedom, not\nprice.  Our General Public Licenses are designed to make sure that you\nhave the freedom to distribute copies of free software (and charge for\nthis service if you wish), that you receive source code or can get it\nif you want it, that you can change the software or use pieces of it\nin new free programs; and that you know you can do these things.\n\n  To protect your rights, we need to make restrictions that forbid\nanyone to deny you these rights or to ask you to surrender the rights.\nThese restrictions translate to certain responsibilities for you if you\ndistribute copies of the software, or if you modify it.\n\n  For example, if you distribute copies of such a program, whether\ngratis or for a fee, you must give the recipients all the rights that\nyou have.  You must make sure that they, too, receive or can get the\nsource code.  And you must show them these terms so they know their\nrights.\n\n  We protect your rights with two steps: (1) copyright the software, and\n(2) offer you this license which gives you legal permission to copy,\ndistribute and/or modify the software.\n\n  Also, for each author's protection and ours, we want to make certain\nthat everyone understands that there is no warranty for this free\nsoftware.  If the software is modified by someone else and passed on, we\nwant its recipients to know that what they have is not the original, so\nthat any problems introduced by others will not reflect on the original\nauthors' reputations.\n\n  Finally, any free program is threatened constantly by software\npatents.  We wish to avoid the danger that redistributors of a free\nprogram will individually obtain patent licenses, in effect making the\nprogram proprietary.  To prevent this, we have made it clear that any\npatent must be licensed for everyone's free use or not licensed at all.\n\n  The precise terms and conditions for copying, distribution and\nmodification follow.\n\n                    GNU GENERAL PUBLIC LICENSE\n   TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION\n\n  0. This License applies to any program or other work which contains\na notice placed by the copyright holder saying it may be distributed\nunder the terms of this General Public License.  
The \"Program\", below,\nrefers to any such program or work, and a \"work based on the Program\"\nmeans either the Program or any derivative work under copyright law:\nthat is to say, a work containing the Program or a portion of it,\neither verbatim or with modifications and/or translated into another\nlanguage.  (Hereinafter, translation is included without limitation in\nthe term \"modification\".)  Each licensee is addressed as \"you\".\n\nActivities other than copying, distribution and modification are not\ncovered by this License; they are outside its scope.  The act of\nrunning the Program is not restricted, and the output from the Program\nis covered only if its contents constitute a work based on the\nProgram (independent of having been made by running the Program).\nWhether that is true depends on what the Program does.\n\n  1. You may copy and distribute verbatim copies of the Program's\nsource code as you receive it, in any medium, provided that you\nconspicuously and appropriately publish on each copy an appropriate\ncopyright notice and disclaimer of warranty; keep intact all the\nnotices that refer to this License and to the absence of any warranty;\nand give any other recipients of the Program a copy of this License\nalong with the Program.\n\nYou may charge a fee for the physical act of transferring a copy, and\nyou may at your option offer warranty protection in exchange for a fee.\n\n  2. You may modify your copy or copies of the Program or any portion\nof it, thus forming a work based on the Program, and copy and\ndistribute such modifications or work under the terms of Section 1\nabove, provided that you also meet all of these conditions:\n\n    a) You must cause the modified files to carry prominent notices\n    stating that you changed the files and the date of any change.\n\n    b) You must cause any work that you distribute or publish, that in\n    whole or in part contains or is derived from the Program or any\n    part thereof, to be licensed as a whole at no charge to all third\n    parties under the terms of this License.\n\n    c) If the modified program normally reads commands interactively\n    when run, you must cause it, when started running for such\n    interactive use in the most ordinary way, to print or display an\n    announcement including an appropriate copyright notice and a\n    notice that there is no warranty (or else, saying that you provide\n    a warranty) and that users may redistribute the program under\n    these conditions, and telling the user how to view a copy of this\n    License.  (Exception: if the Program itself is interactive but\n    does not normally print such an announcement, your work based on\n    the Program is not required to print an announcement.)\n\nThese requirements apply to the modified work as a whole.  If\nidentifiable sections of that work are not derived from the Program,\nand can be reasonably considered independent and separate works in\nthemselves, then this License, and its terms, do not apply to those\nsections when you distribute them as separate works.  
But when you\ndistribute the same sections as part of a whole which is a work based\non the Program, the distribution of the whole must be on the terms of\nthis License, whose permissions for other licensees extend to the\nentire whole, and thus to each and every part regardless of who wrote it.\n\nThus, it is not the intent of this section to claim rights or contest\nyour rights to work written entirely by you; rather, the intent is to\nexercise the right to control the distribution of derivative or\ncollective works based on the Program.\n\nIn addition, mere aggregation of another work not based on the Program\nwith the Program (or with a work based on the Program) on a volume of\na storage or distribution medium does not bring the other work under\nthe scope of this License.\n\n  3. You may copy and distribute the Program (or a work based on it,\nunder Section 2) in object code or executable form under the terms of\nSections 1 and 2 above provided that you also do one of the following:\n\n    a) Accompany it with the complete corresponding machine-readable\n    source code, which must be distributed under the terms of Sections\n    1 and 2 above on a medium customarily used for software interchange; or,\n\n    b) Accompany it with a written offer, valid for at least three\n    years, to give any third party, for a charge no more than your\n    cost of physically performing source distribution, a complete\n    machine-readable copy of the corresponding source code, to be\n    distributed under the terms of Sections 1 and 2 above on a medium\n    customarily used for software interchange; or,\n\n    c) Accompany it with the information you received as to the offer\n    to distribute corresponding source code.  (This alternative is\n    allowed only for noncommercial distribution and only if you\n    received the program in object code or executable form with such\n    an offer, in accord with Subsection b above.)\n\nThe source code for a work means the preferred form of the work for\nmaking modifications to it.  For an executable work, complete source\ncode means all the source code for all modules it contains, plus any\nassociated interface definition files, plus the scripts used to\ncontrol compilation and installation of the executable.  However, as a\nspecial exception, the source code distributed need not include\nanything that is normally distributed (in either source or binary\nform) with the major components (compiler, kernel, and so on) of the\noperating system on which the executable runs, unless that component\nitself accompanies the executable.\n\nIf distribution of executable or object code is made by offering\naccess to copy from a designated place, then offering equivalent\naccess to copy the source code from the same place counts as\ndistribution of the source code, even though third parties are not\ncompelled to copy the source along with the object code.\n\n  4. You may not copy, modify, sublicense, or distribute the Program\nexcept as expressly provided under this License.  Any attempt\notherwise to copy, modify, sublicense or distribute the Program is\nvoid, and will automatically terminate your rights under this License.\nHowever, parties who have received copies, or rights, from you under\nthis License will not have their licenses terminated so long as such\nparties remain in full compliance.\n\n  5. You are not required to accept this License, since you have not\nsigned it.  
However, nothing else grants you permission to modify or\ndistribute the Program or its derivative works.  These actions are\nprohibited by law if you do not accept this License.  Therefore, by\nmodifying or distributing the Program (or any work based on the\nProgram), you indicate your acceptance of this License to do so, and\nall its terms and conditions for copying, distributing or modifying\nthe Program or works based on it.\n\n  6. Each time you redistribute the Program (or any work based on the\nProgram), the recipient automatically receives a license from the\noriginal licensor to copy, distribute or modify the Program subject to\nthese terms and conditions.  You may not impose any further\nrestrictions on the recipients' exercise of the rights granted herein.\nYou are not responsible for enforcing compliance by third parties to\nthis License.\n\n  7. If, as a consequence of a court judgment or allegation of patent\ninfringement or for any other reason (not limited to patent issues),\nconditions are imposed on you (whether by court order, agreement or\notherwise) that contradict the conditions of this License, they do not\nexcuse you from the conditions of this License.  If you cannot\ndistribute so as to satisfy simultaneously your obligations under this\nLicense and any other pertinent obligations, then as a consequence you\nmay not distribute the Program at all.  For example, if a patent\nlicense would not permit royalty-free redistribution of the Program by\nall those who receive copies directly or indirectly through you, then\nthe only way you could satisfy both it and this License would be to\nrefrain entirely from distribution of the Program.\n\nIf any portion of this section is held invalid or unenforceable under\nany particular circumstance, the balance of the section is intended to\napply and the section as a whole is intended to apply in other\ncircumstances.\n\nIt is not the purpose of this section to induce you to infringe any\npatents or other property right claims or to contest validity of any\nsuch claims; this section has the sole purpose of protecting the\nintegrity of the free software distribution system, which is\nimplemented by public license practices.  Many people have made\ngenerous contributions to the wide range of software distributed\nthrough that system in reliance on consistent application of that\nsystem; it is up to the author/donor to decide if he or she is willing\nto distribute software through any other system and a licensee cannot\nimpose that choice.\n\nThis section is intended to make thoroughly clear what is believed to\nbe a consequence of the rest of this License.\n\n  8. If the distribution and/or use of the Program is restricted in\ncertain countries either by patents or by copyrighted interfaces, the\noriginal copyright holder who places the Program under this License\nmay add an explicit geographical distribution limitation excluding\nthose countries, so that distribution is permitted only in or among\ncountries not thus excluded.  In such case, this License incorporates\nthe limitation as if written in the body of this License.\n\n  9. The Free Software Foundation may publish revised and/or new versions\nof the General Public License from time to time.  Such new versions will\nbe similar in spirit to the present version, but may differ in detail to\naddress new problems or concerns.\n\nEach version is given a distinguishing version number.  
If the Program\nspecifies a version number of this License which applies to it and \"any\nlater version\", you have the option of following the terms and conditions\neither of that version or of any later version published by the Free\nSoftware Foundation.  If the Program does not specify a version number of\nthis License, you may choose any version ever published by the Free Software\nFoundation.\n\n  10. If you wish to incorporate parts of the Program into other free\nprograms whose distribution conditions are different, write to the author\nto ask for permission.  For software which is copyrighted by the Free\nSoftware Foundation, write to the Free Software Foundation; we sometimes\nmake exceptions for this.  Our decision will be guided by the two goals\nof preserving the free status of all derivatives of our free software and\nof promoting the sharing and reuse of software generally.\n\n                            NO WARRANTY\n\n  11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY\nFOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.  EXCEPT WHEN\nOTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES\nPROVIDE THE PROGRAM \"AS IS\" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED\nOR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF\nMERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.  THE ENTIRE RISK AS\nTO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.  SHOULD THE\nPROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,\nREPAIR OR CORRECTION.\n\n  12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING\nWILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR\nREDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,\nINCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING\nOUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED\nTO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY\nYOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER\nPROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE\nPOSSIBILITY OF SUCH DAMAGES.\n\n                     END OF TERMS AND CONDITIONS\n\n            How to Apply These Terms to Your New Programs\n\n  If you develop a new program, and you want it to be of the greatest\npossible use to the public, the best way to achieve this is to make it\nfree software which everyone can redistribute and change under these terms.\n\n  To do so, attach the following notices to the program.  It is safest\nto attach them to the start of each source file to most effectively\nconvey the exclusion of warranty; and each file should have at least\nthe \"copyright\" line and a pointer to where the full notice is found.\n\n    {description}\n    Copyright (C) {year}  {fullname}\n\n    This program is free software; you can redistribute it and/or modify\n    it under the terms of the GNU General Public License as published by\n    the Free Software Foundation; either version 2 of the License, or\n    (at your option) any later version.\n\n    This program is distributed in the hope that it will be useful,\n    but WITHOUT ANY WARRANTY; without even the implied warranty of\n    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  
See the\n    GNU General Public License for more details.\n\n    You should have received a copy of the GNU General Public License along\n    with this program; if not, write to the Free Software Foundation, Inc.,\n    51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n\nAlso add information on how to contact you by electronic and paper mail.\n\nIf the program is interactive, make it output a short notice like this\nwhen it starts in an interactive mode:\n\n    Gnomovision version 69, Copyright (C) year name of author\n    Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.\n    This is free software, and you are welcome to redistribute it\n    under certain conditions; type `show c' for details.\n\nThe hypothetical commands `show w' and `show c' should show the appropriate\nparts of the General Public License.  Of course, the commands you use may\nbe called something other than `show w' and `show c'; they could even be\nmouse-clicks or menu items--whatever suits your program.\n\nYou should also get your employer (if you work as a programmer) or your\nschool, if any, to sign a \"copyright disclaimer\" for the program, if\nnecessary.  Here is a sample; alter the names:\n\n  Yoyodyne, Inc., hereby disclaims all copyright interest in the program\n  `Gnomovision' (which makes passes at compilers) written by James Hacker.\n\n  {signature of Ty Coon}, 1 April 1989\n  Ty Coon, President of Vice\n\nThis General Public License does not permit incorporating your program into\nproprietary programs.  If your program is a subroutine library, you may\nconsider it more useful to permit linking proprietary applications with the\nlibrary.  If this is what you want to do, use the GNU Lesser General\nPublic License instead of this License.\n"
  },
  {
    "path": "README.md",
    "content": "# Arweave Server\n\nThis is the repository for the official Erlang implementation of the Arweave\nprotocol and a gateway implementation.\n\nArweave is a distributed, cryptographically verified permanent archive built\non a cryptocurrency that aims to, for the first time, provide feasible data\npermanence. By leveraging our novel Blockweave datastructure, data is stored\nin a decentralised, peer-to-peer manner where miners are incentivised to\nstore rare data.\n\n# Contributing\n\nFor instructions on how to build and update the source, please refer to the [Development Docs](https://docs.arweave.org/developers/development/getting-started)\n\n# Contact\n\nIf you have questions or comments about Arweave you can get in touch by\nfinding us on [Twitter](https://twitter.com/ArweaveTeam/), [Reddit](https://www.reddit.com/r/arweave), [Discord](https://discord.gg/DjAFMJc) or by\nemailing us at team@arweave.org.\n\n\nFor more information about the Arweave project visit [https://www.arweave.org](https://www.arweave.org/)\nor have a look at our [yellow paper](https://yellow-paper.arweave.dev).\n\n# License\n\nThe Arweave project is released under GNU General Public License v2.0.\nSee [LICENSE](LICENSE.md) for full license conditions.\n"
  },
  {
    "path": "apps/arweave/c_src/Makefile",
    "content": "# Based on c_src.mk from erlang.mk by Loic Hoguin <essen@ninenines.eu>\n\nCURDIR := $(shell pwd)\nBASEDIR := $(abspath $(CURDIR)/..)\n\nPROJECT ?= $(notdir $(BASEDIR))\nPROJECT := $(strip $(PROJECT))\n\nifeq ($(MODE), debug)\n\tCFLAGS ?= -O0 -g\n\tCXXFLAGS ?= -O0 -g\nelse\n\tCFLAGS ?= -O3\n\tCXXFLAGS ?= -O3\nendif\n\nUNAME_SYS := $(shell uname -s)\n\n# Configure SHA external libraries, we are using OPENSSL_LITE\n# by default, for all systems\nRANDOMX_LDFLAGS = ../lib/openssl-sha-lite/libcrypto.a\n\n# Set default libs path for secp256k1 implementation\nSECP256K1_LDLIBS = -L /usr/lib -L /usr/local/lib\n\nifeq ($(UNAME_SYS), Linux)\n\t# _mm_crc32_u32 support\n\tCFLAGS += -msse4.2\n\tCXXFLAGS += -msse4.2\nendif\n\nERTS_INCLUDE_DIR ?= $(shell erl -noshell -eval 'io:format(\"~ts/erts-~ts/include/\", [code:root_dir(), erlang:system_info(version)]).' -s init stop)\nERL_INTERFACE_INCLUDE_DIR ?= $(shell erl -noshell -eval 'io:format(\"~ts\", [code:lib_dir(erl_interface, include)]).' -s init stop)\nERL_INTERFACE_LIB_DIR ?= $(shell erl -noshell -eval 'io:format(\"~ts\", [code:lib_dir(erl_interface, lib)]).' -s init stop)\n\n# System type and C compiler/flags.\n\nifeq ($(UNAME_SYS), Darwin)\n\tOSX_CPU_ARCH ?= x86_64\n\t# nix systems may not have sysctl where uname -m will return the correct arch\n\tSYSCTL_EXISTS := $(shell which sysctl 2>/dev/null)\n\tifneq ($(shell uname -m | egrep \"arm64\"),)\n\t\tOSX_CPU_ARCH = arm64\n\telse\n\t\tifdef SYSCTL_EXISTS\n\t\t\tifneq ($(shell sysctl -n machdep.cpu.brand_string | egrep \"M(1|2)\"),)\n\t\t\t\tOSX_CPU_ARCH = arm64\n\t\t\tendif\n\t\tendif\n\tendif\n\tCC ?= cc\n\tCFLAGS += -std=c99 -arch $(OSX_CPU_ARCH) -finline-functions -Wall -Wmissing-prototypes\n\tCXXFLAGS += -arch $(OSX_CPU_ARCH) -finline-functions -Wall\n\tLDFLAGS ?= -arch $(OSX_CPU_ARCH)\n\tLDFLAGS += -undefined suppress\n\t# on MacOS, some libs are also present in /opt/homebrew/lib\n\tSECP256K1_LDLIBS += -L /opt/homebrew/lib\nelse ifeq ($(UNAME_SYS), FreeBSD)\n\tCC ?= cc\n\tCFLAGS += -std=c99 -finline-functions -Wall -Wmissing-prototypes\n\tCXXFLAGS += -finline-functions -Wall\nelse ifeq ($(UNAME_SYS), Linux)\n\tCC ?= gcc\n\tCFLAGS += -std=c99 -finline-functions -Wall -Wmissing-prototypes\n\tCXXFLAGS += -finline-functions -Wall\nendif\n\nifneq (, $(shell which pkg-config))\n\tCFLAGS   += -I../lib/openssl-sha-lite/include\n\tCXXFLAGS += -I../lib/openssl-sha-lite/include\nendif\n\nC_SRC_DIR = $(CURDIR)\n\nSECP256K1_CFLAGS += $(CFLAGS)\nSECP256K1_LDLIBS += $(LDFLAGS)\nCFLAGS += -fPIC -I $(ERTS_INCLUDE_DIR) -I $(ERL_INTERFACE_INCLUDE_DIR) -I /usr/local/include -I ../lib/RandomX/src -I $(C_SRC_DIR)\nCXXFLAGS += -fPIC -I $(ERTS_INCLUDE_DIR) -I $(ERL_INTERFACE_INCLUDE_DIR) -I ../lib/RandomX/src -std=c++11\nLDLIBS += -L $(ERL_INTERFACE_LIB_DIR) -L /usr/local/lib -lei\n\n\nRX512_OUTPUT ?= $(CURDIR)/../priv/rx512_arweave.so\nRX4096_OUTPUT ?= $(CURDIR)/../priv/rx4096_arweave.so\nRXSQUARED_OUTPUT ?= $(CURDIR)/../priv/rxsquared_arweave.so\nVDF_OUTPUT ?= $(CURDIR)/../priv/vdf_arweave.so\n\nCOMMON_RANDOMX_SOURCES = $(wildcard $(C_SRC_DIR)/randomx/*.c $(C_SRC_DIR)/randomx/*.cpp)\nRX512_SOURCES = $(COMMON_RANDOMX_SOURCES) $(wildcard $(C_SRC_DIR)/*.c $(C_SRC_DIR)/randomx/rx512/*.c)\nRX4096_SOURCES = $(COMMON_RANDOMX_SOURCES) $(wildcard $(C_SRC_DIR)/*.c $(C_SRC_DIR)/randomx/rx4096/*.c)\nRXSQUARED_SOURCES = $(COMMON_RANDOMX_SOURCES) $(wildcard $(C_SRC_DIR)/*.c $(C_SRC_DIR)/randomx/rxsquared/*.c)\nVDF_SOURCES = $(wildcard $(C_SRC_DIR)/*.c $(C_SRC_DIR)/vdf/*.c $(C_SRC_DIR)/vdf/*.cpp)\n\nRX512_OBJECTS = 
$(addsuffix .o, $(basename $(RX512_SOURCES)))\nRX4096_OBJECTS = $(addsuffix .o, $(basename $(RX4096_SOURCES)))\nRXSQUARED_OBJECTS = $(addsuffix .o, $(basename $(RXSQUARED_SOURCES)))\nVDF_OBJECTS = $(addsuffix .o, $(basename $(VDF_SOURCES)))\n\n# NOTE tabs here will cause build fail\nifeq ($(UNAME_SYS), Linux)\n  $(C_SRC_DIR)/vdf/vdf_fused_x86.o: CXXFLAGS += -msha\nendif\nifeq ($(UNAME_SYS), Darwin)\n  $(C_SRC_DIR)/vdf/vdf_fused_arm.o: CXXFLAGS += -march=armv8-a+crypto\n  $(C_SRC_DIR)/vdf/vdf_hiopt_arm.o: CXXFLAGS += -march=armv8-a+crypto\nendif\nifeq ($(UNAME_SYS), Darwin)\n\tVDF_ARM_ASM_OBJ = $(C_SRC_DIR)/vdf/sha256-armv8.o\n\tVDF_OBJECTS += $(VDF_ARM_ASM_OBJ)\n$(VDF_ARM_ASM_OBJ): $(C_SRC_DIR)/vdf/sha256-armv8.S\n\t@echo \"Assembling ARM64 specific file: $<\"\n\tclang -O3 -arch arm64 -c $(C_SRC_DIR)/vdf/sha256-armv8.S -o $(VDF_ARM_ASM_OBJ)\nendif\n\n# Verbosity.\n\nc_verbose_0 = @echo \" C     \" $(?F);\nc_verbose = $(c_verbose_$(V))\n\ncpp_verbose_0 = @echo \" CPP   \" $(?F);\ncpp_verbose = $(cpp_verbose_$(V))\n\nlink_verbose_0 = @echo \" LD    \" $(@F);\nlink_verbose = $(link_verbose_$(V))\n\nCOMPILE_C = $(c_verbose) $(CC) $(CFLAGS) $(CPPFLAGS) -c\nCOMPILE_CPP = $(cpp_verbose) $(CXX) $(CXXFLAGS) $(CPPFLAGS) -c\n\n$(RX512_OUTPUT): $(RX512_OBJECTS)\n\t@mkdir -p $(BASEDIR)/priv/\n\t$(link_verbose) $(CXX) $(RX512_OBJECTS) $(RANDOMX_LDFLAGS) $(LDFLAGS) $(LDLIBS) ../lib/RandomX/build512/librandomx512.a -shared -o $(RX512_OUTPUT)\n\n$(RX4096_OUTPUT): $(RX4096_OBJECTS)\n\t@mkdir -p $(BASEDIR)/priv/\n\t$(link_verbose) $(CXX) $(RX4096_OBJECTS) $(RANDOMX_LDFLAGS) $(LDFLAGS) $(LDLIBS) ../lib/RandomX/build4096/librandomx4096.a -shared -o $(RX4096_OUTPUT)\n\n$(RXSQUARED_OUTPUT): $(RXSQUARED_OBJECTS)\n\t@mkdir -p $(BASEDIR)/priv/\n\t$(link_verbose) $(CXX) $(RXSQUARED_OBJECTS) $(RANDOMX_LDFLAGS) $(LDFLAGS) $(LDLIBS) ../lib/RandomX/buildsquared/librandomxsquared.a -shared -o $(RXSQUARED_OUTPUT)\n\n$(VDF_OUTPUT): $(VDF_OBJECTS)\n\t@mkdir -p $(BASEDIR)/priv/\n\t$(link_verbose) $(CXX) $(VDF_OBJECTS) $(RANDOMX_LDFLAGS) $(LDFLAGS) $(LDLIBS) -shared -o $(VDF_OUTPUT)\n\nSECP256K1_SOURCES = $(wildcard $(C_SRC_DIR)/*.c $(C_SRC_DIR)/secp256k1/*.c)\nSECP256K1_OBJECTS = $(addsuffix .o, $(basename $(SECP256K1_SOURCES)))\nSECP256K1_CFLAGS += -fPIC -I $(ERTS_INCLUDE_DIR) -I $(ERL_INTERFACE_INCLUDE_DIR) -I /usr/local/include -I $(CURDIR)/../lib/secp256k1/src -I $(CURDIR)/../lib/secp256k1/include -I $(C_SRC_DIR)\nSECP256K1_LDLIBS += -L $(ERL_INTERFACE_LIB_DIR)\nSECP256K1_OUTPUT ?= $(CURDIR)/../priv/secp256k1_arweave.so\n\n$(SECP256K1_OUTPUT): $(SECP256K1_OBJECTS)\n\t@mkdir -p $(BASEDIR)/priv/\n\t$(link_verbose) $(CXX) $(SECP256K1_OBJECTS) $(SECP256K1_LDLIBS) ../lib/secp256k1/build/lib/libsecp256k1.a -shared -o $(SECP256K1_OUTPUT)\n\n%secp256k1_nif.o: %secp256k1_nif.c\n\t$(c_verbose) $(CC) $(SECP256K1_CFLAGS) -c $(OUTPUT_OPTION) $<\n\n%.o: %.c\n\t$(COMPILE_C) $(OUTPUT_OPTION) $<\n\n%.o: %.cc\n\t$(COMPILE_CPP) $(OUTPUT_OPTION) $<\n\n%.o: %.C\n\t$(COMPILE_CPP) $(OUTPUT_OPTION) $<\n\n%.o: %.cpp\n\t$(COMPILE_CPP) $(OUTPUT_OPTION) $<\n\nall: $(RX512_OUTPUT) $(RX4096_OUTPUT) $(RXSQUARED_OUTPUT) $(VDF_OUTPUT) $(SECP256K1_OUTPUT)\n\nclean:\n\t@rm -f $(RX512_OUTPUT) $(RX4096_OUTPUT) $(RXSQUARED_OUTPUT) $(VDF_OUTPUT) $(RX512_OBJECTS) $(RX4096_OBJECTS) $(RXSQUARED_OBJECTS) $(VDF_OBJECTS) $(SECP256K1_OUTPUT) $(SECP256K1_OBJECTS)\n\n\n\n\n\n"
  },
  {
    "path": "apps/arweave/c_src/ar_nif.c",
    "content": "#include \"ar_nif.h\"\n#include <string.h>\n\n// Utility functions.\n\nERL_NIF_TERM solution_tuple(ErlNifEnv* envPtr, ERL_NIF_TERM hashTerm) {\n\treturn enif_make_tuple2(envPtr, enif_make_atom(envPtr, \"true\"), hashTerm);\n}\n\nERL_NIF_TERM ok_tuple(ErlNifEnv* envPtr, ERL_NIF_TERM term)\n{\n\treturn enif_make_tuple2(envPtr, enif_make_atom(envPtr, \"ok\"), term);\n}\n\nERL_NIF_TERM ok_tuple2(ErlNifEnv* envPtr, ERL_NIF_TERM term1, ERL_NIF_TERM term2)\n{\n\treturn enif_make_tuple3(envPtr, enif_make_atom(envPtr, \"ok\"), term1, term2);\n}\n\nERL_NIF_TERM error_tuple(ErlNifEnv* envPtr, const char* reason)\n{\n\tERL_NIF_TERM reasonTerm = enif_make_string(envPtr, reason, ERL_NIF_LATIN1);\n\treturn enif_make_tuple2(envPtr, enif_make_atom(envPtr, \"error\"), reasonTerm);\n}\n\nERL_NIF_TERM make_output_binary(ErlNifEnv* envPtr, unsigned char *dataPtr, size_t size)\n{\n\tERL_NIF_TERM outputTerm;\n\tunsigned char *outputTermDataPtr;\n\n\toutputTermDataPtr = enif_make_new_binary(envPtr, size, &outputTerm);\n\tmemcpy(outputTermDataPtr, dataPtr, size);\n\treturn outputTerm;\n}\n"
  },
  {
    "path": "apps/arweave/c_src/ar_nif.h",
    "content": "#ifndef AR_NIF_H\n#define AR_NIF_H\n\n#include <erl_nif.h>\n\nERL_NIF_TERM solution_tuple(ErlNifEnv*, ERL_NIF_TERM);\nERL_NIF_TERM ok_tuple(ErlNifEnv*, ERL_NIF_TERM);\nERL_NIF_TERM ok_tuple2(ErlNifEnv*, ERL_NIF_TERM, ERL_NIF_TERM);\nERL_NIF_TERM error_tuple(ErlNifEnv*, const char*);\nERL_NIF_TERM make_output_binary(ErlNifEnv*, unsigned char*, size_t);\n\n#endif // AR_NIF_H"
  },
  {
    "path": "apps/arweave/c_src/randomx/ar_randomx_impl.h",
    "content": "#ifndef AR_RANDOMX_IMPL_H\n#define AR_RANDOMX_IMPL_H\n\n// Thif file includes the full definitions of any function that is shared between the\n// rx512 and rx4096 shared libraries. Although ugly this was the only way I could get\n// everything to work without causing symbol conflicts or seg faults once the two .so's\n// are loaded into arweave and the NIFs registered. There may be a better way!\n\n#include <erl_nif.h>\n#include <randomx.h>\n\n// From RandomX/src/jit_compiler.hpp\n// needed for the JIT compiler to work on OpenBSD, NetBSD and Apple Silicon\n#if defined(__OpenBSD__) || defined(__NetBSD__) || (defined(__APPLE__) && defined(__aarch64__))\n#define RANDOMX_FORCE_SECURE\n#endif\n\ntypedef enum { FALSE, TRUE } boolean;\n\nstruct workerThread {\n\tErlNifTid threadId;\n\tErlNifThreadOpts *optsPtr;\n\trandomx_cache *cachePtr;\n\trandomx_dataset *datasetPtr;\n\tunsigned long datasetInitStartItem;\n\tunsigned long datasetInitItemCount;\n};\n\ntypedef enum {\n\tHASHING_MODE_FAST = 0,\n\tHASHING_MODE_LIGHT = 1,\n} hashing_mode;\n\nstruct state {\n\tErlNifRWLock*     lockPtr;\n\tint               isRandomxReleased;\n\thashing_mode      mode;\n\trandomx_dataset*  datasetPtr;\n\trandomx_cache*    cachePtr;\n};\n\nErlNifResourceType* stateType;\n\nstatic ERL_NIF_TERM init_nif(ErlNifEnv* envPtr, int argc, const ERL_NIF_TERM argv[]);\nstatic ERL_NIF_TERM info_nif(const char* rxSize, ErlNifEnv* envPtr, int argc, const ERL_NIF_TERM argv[]);\nstatic ERL_NIF_TERM hash_nif(ErlNifEnv* envPtr, int argc, const ERL_NIF_TERM argv[]);\nstatic int load(ErlNifEnv* envPtr, void** priv, ERL_NIF_TERM info);\nstatic void state_dtor(ErlNifEnv* envPtr, void* objPtr);\nstatic boolean init_dataset(\n\trandomx_dataset *datasetPtr,\n\trandomx_cache *cachePtr,\n\tunsigned int numWorkers\n);\nstatic void *init_dataset_thread(void *objPtr);\nstatic ERL_NIF_TERM init_failed(ErlNifEnv *envPtr, struct state *statePtr, const char* reason);\nstatic randomx_vm* create_vm(struct state* statePtr,\n\t\tint fullMemEnabled, int jitEnabled, int largePagesEnabled, int hardwareAESEnabled,\n\t\tint* isRandomxReleased);\nstatic void destroy_vm(struct state* statePtr, randomx_vm* vmPtr);\n\nstatic ERL_NIF_TERM init_nif(ErlNifEnv* envPtr, int argc, const ERL_NIF_TERM argv[])\n{\n\tErlNifBinary key;\n\thashing_mode mode;\n\tstruct state *statePtr;\n\tERL_NIF_TERM resource;\n\tunsigned int numWorkers;\n\tint jitEnabled, largePagesEnabled;\n\trandomx_flags flags;\n\n\tif (!enif_inspect_binary(envPtr, argv[0], &key)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[1], &mode)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[2], &jitEnabled)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[3], &largePagesEnabled)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_uint(envPtr, argv[4], &numWorkers)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\n\tstatePtr = enif_alloc_resource(stateType, sizeof(struct state));\n\tstatePtr->cachePtr = NULL;\n\tstatePtr->datasetPtr = NULL;\n\tstatePtr->isRandomxReleased = 0;\n\tstatePtr->mode = mode;\n\n\tstatePtr->lockPtr = enif_rwlock_create(\"state_rw_lock\");\n\tif (statePtr->lockPtr == NULL) {\n\t\treturn init_failed(envPtr, statePtr, \"enif_rwlock_create failed\");\n\t}\n\n\tflags = RANDOMX_FLAG_DEFAULT;\n\tif (jitEnabled) {\n\t\tflags |= RANDOMX_FLAG_JIT;\n#ifdef RANDOMX_FORCE_SECURE\n\t\tflags |= RANDOMX_FLAG_SECURE;\n#endif\n\t}\n\tif (largePagesEnabled) {\n\t\tflags |= 
RANDOMX_FLAG_LARGE_PAGES;\n\t}\n\n\tstatePtr->cachePtr = randomx_alloc_cache(flags);\n\tif (statePtr->cachePtr == NULL) {\n\t\treturn init_failed(envPtr, statePtr, \"randomx_alloc_cache failed\");\n\t}\n\n\trandomx_init_cache(\n\t\tstatePtr->cachePtr,\n\t\tkey.data,\n\t\tkey.size);\n\n\tif (mode == HASHING_MODE_FAST) {\n\t\tstatePtr->datasetPtr = randomx_alloc_dataset(flags);\n\t\tif (statePtr->datasetPtr == NULL) {\n\t\t\treturn init_failed(envPtr, statePtr, \"randomx_alloc_dataset failed\");\n\t\t}\n\t\tif (!init_dataset(statePtr->datasetPtr, statePtr->cachePtr, numWorkers)) {\n\t\t\treturn init_failed(envPtr, statePtr, \"init_dataset failed\");\n\t\t}\n\t\trandomx_release_cache(statePtr->cachePtr);\n\t\tstatePtr->cachePtr = NULL;\n\t} else {\n\t\tstatePtr->datasetPtr = NULL;\n\t}\n\n\tresource = enif_make_resource(envPtr, statePtr);\n\tenif_release_resource(statePtr);\n\n\treturn ok_tuple(envPtr, resource);\n}\n\nstatic ERL_NIF_TERM info_nif(\n    const char* rxSize, ErlNifEnv* envPtr, int argc, const ERL_NIF_TERM argv[])\n{\n\tstruct state* statePtr;\n\tunsigned int datasetSize;\n\tunsigned int scratchpadSize;\n\thashing_mode hashingMode;\n\tERL_NIF_TERM hashingModeTerm;\n\n\tif (argc != 1) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_resource(envPtr, argv[0], stateType, (void**) &statePtr)) {\n\t\treturn error_tuple(envPtr, \"failed to read state\");\n\t}\n\n\thashingMode = statePtr->mode;\n\n\tif (hashingMode == HASHING_MODE_FAST) {\n\t\tif (statePtr->datasetPtr == NULL) {\n\t\t\treturn error_tuple(envPtr, \"dataset is not initialized for fast hashing mode\");\n\t\t}\n\t\tif (statePtr->cachePtr != NULL) {\n\t\t\treturn error_tuple(envPtr, \"cache is initialized for fast hashing mode\");\n\t\t}\n\t\thashingModeTerm = enif_make_atom(envPtr, \"fast\");\n\t\tdatasetSize = randomx_dataset_item_count();\n\t\tscratchpadSize = randomx_get_scratchpad_size();\n\t} else if (hashingMode == HASHING_MODE_LIGHT) {\n\t\tif (statePtr->datasetPtr != NULL) {\n\t\t\treturn error_tuple(envPtr, \"dataset is initialized for light hashing mode\");\n\t\t}\n\t\tif (statePtr->cachePtr == NULL) {\n\t\t\treturn error_tuple(envPtr, \"cache is not initialized for light hashing mode\");\n\t\t}\n\t\thashingModeTerm = enif_make_atom(envPtr, \"light\");\n\t\tdatasetSize = 0;\n\t\tscratchpadSize = randomx_get_scratchpad_size();\n\t} else {\n\t\treturn error_tuple(envPtr, \"invalid hashing mode\");\n\t}\n\n\tERL_NIF_TERM infoTerm = enif_make_tuple4(envPtr,\n\t\tenif_make_atom(envPtr, rxSize),\n\t\thashingModeTerm,\n\t\tenif_make_uint(envPtr, datasetSize),\n\t\tenif_make_uint(envPtr, scratchpadSize));\n\treturn ok_tuple(envPtr, infoTerm);\n}\n\n\nstatic ERL_NIF_TERM hash_nif(\n\tErlNifEnv* envPtr,\n\tint argc,\n\tconst ERL_NIF_TERM argv[]\n) {\n\tint jitEnabled, largePagesEnabled, hardwareAESEnabled;\n\tunsigned char hashPtr[RANDOMX_HASH_SIZE];\n\tstruct state* statePtr;\n\tErlNifBinary inputData;\n\n\tif (argc != 5) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_resource(envPtr, argv[0], stateType, (void**) &statePtr)) {\n\t\treturn error_tuple(envPtr, \"failed to read state\");\n\t}\n\tif (!enif_inspect_binary(envPtr, argv[1], &inputData)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[2], &jitEnabled)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[3], &largePagesEnabled)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[4], &hardwareAESEnabled)) {\n\t\treturn 
enif_make_badarg(envPtr);\n\t}\n\n\tint isRandomxReleased;\n\trandomx_vm *vmPtr = create_vm(statePtr, (statePtr->mode == HASHING_MODE_FAST), jitEnabled, largePagesEnabled, hardwareAESEnabled, &isRandomxReleased);\n\tif (vmPtr == NULL) {\n\t\tif (isRandomxReleased != 0) {\n\t\t\treturn error_tuple(envPtr, \"state has been released\");\n\t\t}\n\t\treturn error_tuple(envPtr, \"randomx_create_vm failed\");\n\t}\n\n\trandomx_calculate_hash(vmPtr, inputData.data, inputData.size, hashPtr);\n\n\tdestroy_vm(statePtr, vmPtr);\n\n\treturn ok_tuple(envPtr, make_output_binary(envPtr, hashPtr, RANDOMX_HASH_SIZE));\n}\n\nstatic int load(ErlNifEnv* envPtr, void** priv, ERL_NIF_TERM info)\n{\n\tint flags = ERL_NIF_RT_CREATE;\n\tstateType = enif_open_resource_type(envPtr, NULL, \"state\", state_dtor, flags, NULL);\n\tif (stateType == NULL) {\n\t\treturn 1;\n\t}\n\n\treturn 0;\n}\n\nstatic void state_dtor(ErlNifEnv* envPtr, void* objPtr)\n{\n\tstruct state *statePtr = (struct state*) objPtr;\n\n    if (statePtr->datasetPtr != NULL) {\n\t\trandomx_release_dataset(statePtr->datasetPtr);\n\t\tstatePtr->datasetPtr = NULL;\n\t}\n\tif (statePtr->cachePtr != NULL) {\n\t\trandomx_release_cache(statePtr->cachePtr);\n\t\tstatePtr->cachePtr = NULL;\n\t}\n\tstatePtr->isRandomxReleased = 1;\n\n\tif (statePtr->lockPtr != NULL) {\n\t\tenif_rwlock_destroy(statePtr->lockPtr);\n\t\tstatePtr->lockPtr = NULL;\n\t}\n}\n\nstatic boolean init_dataset(\n\trandomx_dataset *datasetPtr,\n\trandomx_cache *cachePtr,\n\tunsigned int numWorkers\n) {\n\tstruct workerThread **workerPtrPtr;\n\tstruct workerThread *workerPtr;\n\tunsigned long itemsPerThread;\n\tunsigned long itemsRemainder;\n\tunsigned long startItem;\n\tboolean anyThreadFailed;\n\n\tworkerPtrPtr = enif_alloc(sizeof(struct workerThread *) * numWorkers);\n\titemsPerThread = randomx_dataset_item_count() / numWorkers;\n\titemsRemainder = randomx_dataset_item_count() % numWorkers;\n\tstartItem = 0;\n\tfor (int i = 0; i < numWorkers; i++) {\n\t\tworkerPtrPtr[i] = enif_alloc(sizeof(struct workerThread));\n\t\tworkerPtr = workerPtrPtr[i];\n\n\t\tworkerPtr->cachePtr = cachePtr;\n\t\tworkerPtr->datasetPtr = datasetPtr;\n\n\t\tworkerPtr->datasetInitStartItem = startItem;\n\t\tif (i + 1 == numWorkers) {\n\t\t\tworkerPtr->datasetInitItemCount = itemsPerThread + itemsRemainder;\n\t\t} else {\n\t\t\tworkerPtr->datasetInitItemCount = itemsPerThread;\n\t\t}\n\t\tstartItem += workerPtr->datasetInitItemCount;\n\t\tworkerPtr->optsPtr = enif_thread_opts_create(\"init_fast_worker\");\n\t\tif (0 != enif_thread_create(\n\t\t\t\t\"init_dataset_worker\",\n\t\t\t\t&(workerPtr->threadId),\n\t\t\t\t&init_dataset_thread,\n\t\t\t\tworkerPtr,\n\t\t\t\tworkerPtr->optsPtr))\n\t\t{\n\t\t\tenif_thread_opts_destroy(workerPtr->optsPtr);\n\t\t\tenif_free(workerPtrPtr[i]);\n\t\t\tworkerPtrPtr[i] = NULL;\n\t\t}\n\t}\n\tanyThreadFailed = FALSE;\n\tfor (int i = 0; i < numWorkers; i++) {\n\t\tworkerPtr = workerPtrPtr[i];\n\t\tif (workerPtr == NULL) {\n\t\t\tanyThreadFailed = TRUE;\n\t\t} else if (0 != enif_thread_join(workerPtr->threadId, NULL)) {\n\t\t\tanyThreadFailed = TRUE;\n\t\t}\n\t\tif (workerPtr != NULL) {\n\t\t\tenif_thread_opts_destroy(workerPtr->optsPtr);\n\t\t\tenif_free(workerPtr);\n\t\t}\n\t}\n\tenif_free(workerPtrPtr);\n\treturn !anyThreadFailed;\n}\n\nstatic void *init_dataset_thread(void *objPtr)\n{\n\tstruct workerThread *workerPtr = (struct workerThread*) 
objPtr;\n\trandomx_init_dataset(\n\t\tworkerPtr->datasetPtr,\n\t\tworkerPtr->cachePtr,\n\t\tworkerPtr->datasetInitStartItem,\n\t\tworkerPtr->datasetInitItemCount);\n\treturn NULL;\n}\n\nstatic ERL_NIF_TERM init_failed(ErlNifEnv *envPtr, struct state *statePtr, const char* reason)\n{\n\tif (statePtr->lockPtr != NULL) {\n\t\tenif_rwlock_destroy(statePtr->lockPtr);\n\t\tstatePtr->lockPtr = NULL;\n\t}\n\tif (statePtr->cachePtr != NULL) {\n\t\trandomx_release_cache(statePtr->cachePtr);\n\t\tstatePtr->cachePtr = NULL;\n\t}\n\tif (statePtr->datasetPtr != NULL) {\n\t\trandomx_release_dataset(statePtr->datasetPtr);\n\t\tstatePtr->datasetPtr = NULL;\n\t}\n\tenif_release_resource(statePtr);\n\treturn error_tuple(envPtr, reason);\n}\n\nstatic randomx_vm* create_vm(struct state* statePtr,\n\t\tint fullMemEnabled, int jitEnabled, int largePagesEnabled, int hardwareAESEnabled,\n\t\tint* isRandomxReleased) {\n\tenif_rwlock_rlock(statePtr->lockPtr);\n\t*isRandomxReleased = statePtr->isRandomxReleased;\n\tif (statePtr->isRandomxReleased != 0) {\n\t\tenif_rwlock_runlock(statePtr->lockPtr);\n\t\treturn NULL;\n\t}\n\n\trandomx_flags flags = RANDOMX_FLAG_DEFAULT;\n\tif (fullMemEnabled) {\n\t\tflags |= RANDOMX_FLAG_FULL_MEM;\n\t}\n\tif (hardwareAESEnabled) {\n\t\tflags |= RANDOMX_FLAG_HARD_AES;\n\t}\n\tif (jitEnabled) {\n\t\tflags |= RANDOMX_FLAG_JIT;\n#ifdef RANDOMX_FORCE_SECURE\n\t\tflags |= RANDOMX_FLAG_SECURE;\n#endif\n\t}\n\tif (largePagesEnabled) {\n\t\tflags |= RANDOMX_FLAG_LARGE_PAGES;\n\t}\n\n\trandomx_vm *vmPtr = randomx_create_vm(flags, statePtr->cachePtr, statePtr->datasetPtr);\n\tif (vmPtr == NULL) {\n\t\tenif_rwlock_runlock(statePtr->lockPtr);\n\t\treturn NULL;\n\t}\n\treturn vmPtr;\n}\n\nstatic void destroy_vm(struct state* statePtr, randomx_vm* vmPtr) {\n\trandomx_destroy_vm(vmPtr);\n\tenif_rwlock_runlock(statePtr->lockPtr);\n}\n\n#endif"
  },
  {
    "path": "apps/arweave/c_src/randomx/crc32.h",
    "content": "#ifndef CRC32_H\n#define CRC32_H\n\n#if defined(__x86_64__) || defined(__i386__) || defined(_M_X64) || defined(_M_IX86)\n  #include <immintrin.h>\n  #define crc32(a, b) _mm_crc32_u32(a, b)\n#elif defined(__aarch64__) || defined(__arm__) || defined(_M_ARM64) || defined(_M_ARM)\n  #include <arm_acle.h>\n  #define crc32(a, b) __crc32cw(a, b)\n#else\n  // TODO make support for soft crc32\n  #error \"Unsupported architecture for CRC32 operations.\"\n#endif\n\n#endif // CRC32_H"
  },
  {
    "path": "apps/arweave/c_src/randomx/feistel_msgsize_key_cipher.cpp",
    "content": "#include <openssl/sha.h>\n\n#include \"feistel_msgsize_key_cipher.h\"\n\n// NOTE feistel_encrypt_block/feistel_decrypt_block with less than 2 blocks have no sense\n\nvoid feistel_hash(const unsigned char *in_r, const unsigned char *in_k, unsigned char *out) {\n\tSHA256_CTX sha256;\n\tSHA256_Init(&sha256);\n\tSHA256_Update(&sha256, in_r, 32);\n\tSHA256_Update(&sha256, in_k, 32);\n\tSHA256_Final(out, &sha256);\n}\n\n// size_t key_len, \nvoid feistel_encrypt_block(const unsigned char *in_left, const unsigned char *in_right, const unsigned char *in_key, unsigned char *out_left, unsigned char *out_right) {\n\t// size_t round_count = key_len / FEISTEL_BLOCK_LENGTH;\n\n\t// unsigned char temp;\n\tunsigned char key_hash[FEISTEL_BLOCK_LENGTH];\n\tunsigned char left[FEISTEL_BLOCK_LENGTH];\n\tunsigned char right[FEISTEL_BLOCK_LENGTH];\n\tconst unsigned char *key = in_key;\n\n\tfeistel_hash(in_right, key, key_hash);\n\tkey += FEISTEL_BLOCK_LENGTH;\n\tfor(int j = 0; j < FEISTEL_BLOCK_LENGTH; j++) {\n\t\t// temp = in_left[j] ^ key_hash[j];\n\t\tright[j] = in_left[j] ^ key_hash[j];\n\t\tleft[j] = in_right[j];\n\t\t// right[j] = temp;\n\t}\n\n\t// NOTE will be unused by arweave\n\t// for (size_t i = 1; i < round_count - 1; i++) {\n\t// \tfeistel_hash(right, key, key_hash);\n\t// \tkey += FEISTEL_BLOCK_LENGTH;\n\t// \tfor(int j = 0; j < FEISTEL_BLOCK_LENGTH; j++) {\n\t// \t\ttemp = left[j] ^ key_hash[j];\n\t// \t\tleft[j] = right[j];\n\t// \t\tright[j] = temp;\n\t// \t}\n\t// }\n\n\tfeistel_hash(right, key, key_hash);\n\tfor(int j = 0; j < FEISTEL_BLOCK_LENGTH; j++) {\n\t\t// temp = left[j] ^ key_hash[j];\n\t\tout_right[j] = left[j] ^ key_hash[j];\n\t\tout_left[j] = right[j];\n\t\t// out_right[j] = temp;\n\t}\n}\n\nvoid feistel_decrypt_block(const unsigned char *in_left, const unsigned char *in_right, const unsigned char *in_key, unsigned char *out_left, unsigned char *out_right) {\n\t// size_t round_count = key_len / FEISTEL_BLOCK_LENGTH;\n\n\t// unsigned char temp;\n\tunsigned char key_hash[FEISTEL_BLOCK_LENGTH];\n\tunsigned char left[FEISTEL_BLOCK_LENGTH];\n\tunsigned char right[FEISTEL_BLOCK_LENGTH];\n\t// const unsigned char *key = in_key + FEISTEL_BLOCK_LENGTH + 2*FEISTEL_BLOCK_LENGTH*(round_count - 1);\n\tconst unsigned char *key = in_key + FEISTEL_BLOCK_LENGTH;\n\n\tfeistel_hash(in_left, key, key_hash);\n\tkey -= FEISTEL_BLOCK_LENGTH;\n\tfor(int j = 0; j < FEISTEL_BLOCK_LENGTH; j++) {\n\t\t// temp = in_right[j] ^ key_hash[j];\n\t\tleft[j] = in_right[j] ^ key_hash[j];\n\t\tright[j] = in_left[j];\n\t\t// left[j] = temp;\n\t}\n\n\t// NOTE will be unused by arweave\n\t// for (size_t i = 1; i < round_count - 1; i++) {\n\t// \tfeistel_hash(left, key, key_hash);\n\t// \tkey -= FEISTEL_BLOCK_LENGTH;\n\t// \tfor(int j = 0; j < FEISTEL_BLOCK_LENGTH; j++) {\n\t// \t\ttemp = right[j] ^ key_hash[j];\n\t// \t\tright[j] = left[j];\n\t// \t\tleft[j] = temp;\n\t// \t}\n\t// }\n\n\tfeistel_hash(left, key, key_hash);\n\tfor(int j = 0; j < FEISTEL_BLOCK_LENGTH; j++) {\n\t\t// temp = right[j] ^ key_hash[j];\n\t\tout_left[j] = right[j] ^ key_hash[j];\n\t\tout_right[j] = left[j];\n\t\t// out_left[j] = temp;\n\t}\n}\n\n// feistel_encrypt accepts padded message with 2*FEISTEL_BLOCK_LENGTH = 64 bytes\n// in_key_length == plaintext_len\n// CBC\nvoid feistel_encrypt(const unsigned char *plaintext, const size_t plaintext_len, const unsigned char *in_key, unsigned char *ciphertext) {\n\tsize_t block_count = plaintext_len / (2*FEISTEL_BLOCK_LENGTH);\n\tunsigned char feed_key[2*FEISTEL_BLOCK_LENGTH] = 
{0};\n\n\tconst unsigned char *in = plaintext;\n\tunsigned char *out = ciphertext;\n\tconst unsigned char *key = in_key;\n\n\tfeistel_encrypt_block(in, in + FEISTEL_BLOCK_LENGTH, key, out, out + FEISTEL_BLOCK_LENGTH);\n\tin  += 2*FEISTEL_BLOCK_LENGTH;\n\tkey += 2*FEISTEL_BLOCK_LENGTH;\n\n\tfor(size_t i = 1; i < block_count; i++) {\n\t\tfor(int j = 0; j < 2*FEISTEL_BLOCK_LENGTH; j++) {\n\t\t\tfeed_key[j] = key[j] ^ out[j];\n\t\t}\n\t\tout += 2*FEISTEL_BLOCK_LENGTH;\n\n\t\tfeistel_encrypt_block(in, in + FEISTEL_BLOCK_LENGTH, feed_key, out, out + FEISTEL_BLOCK_LENGTH);\n\t\tin  += 2*FEISTEL_BLOCK_LENGTH;\n\t\tkey += 2*FEISTEL_BLOCK_LENGTH;\n\t}\n}\n\nvoid feistel_decrypt(const unsigned char *ciphertext, const size_t ciphertext_len, const unsigned char *in_key, unsigned char *plaintext) {\n\tsize_t block_count = ciphertext_len / (2*FEISTEL_BLOCK_LENGTH);\n\tunsigned char feed_key[2*FEISTEL_BLOCK_LENGTH] = {0};\n\n\tconst unsigned char *in = ciphertext + ciphertext_len - 2*FEISTEL_BLOCK_LENGTH;\n\tunsigned char *out = plaintext + ciphertext_len - 2*FEISTEL_BLOCK_LENGTH;\n\tconst unsigned char *key = in_key + ciphertext_len - 2*FEISTEL_BLOCK_LENGTH;\n\n\tfor(size_t i = 0; i < block_count-1; i++) {\n\t\tfor(int j = 0; j < 2*FEISTEL_BLOCK_LENGTH; j++) {\n\t\t\tfeed_key[j] = key[j] ^ in[j - 2*FEISTEL_BLOCK_LENGTH];\n\t\t}\n\n\t\tfeistel_decrypt_block(in, in + FEISTEL_BLOCK_LENGTH, feed_key, out, out + FEISTEL_BLOCK_LENGTH);\n\t\tin  -= 2*FEISTEL_BLOCK_LENGTH;\n\t\tkey -= 2*FEISTEL_BLOCK_LENGTH;\n\t\tout -= 2*FEISTEL_BLOCK_LENGTH;\n\t}\n\n\tfeistel_decrypt_block(in, in + FEISTEL_BLOCK_LENGTH, key, out, out + FEISTEL_BLOCK_LENGTH);\n}\n\n"
  },
  {
    "path": "apps/arweave/c_src/randomx/feistel_msgsize_key_cipher.h",
    "content": "#ifndef FEISTEL_MSGSIZE_KEY_CIPHER_H\n#define FEISTEL_MSGSIZE_KEY_CIPHER_H\n\n\n#define FEISTEL_BLOCK_LENGTH 32\n\n#if defined(__cplusplus)\nextern \"C\" {\n#endif\n\nvoid feistel_encrypt(const unsigned char *plaintext, const size_t plaintext_len, const unsigned char *key, unsigned char *ciphertext);\nvoid feistel_decrypt(const unsigned char *ciphertext, const size_t ciphertext_len, const unsigned char *key, unsigned char *plaintext);\n\n#if defined(__cplusplus)\n}\n#endif\n\n#endif // FEISTEL_MSGSIZE_KEY_CIPHER_H\n"
  },
  {
    "path": "apps/arweave/c_src/randomx/randomx_long_with_entropy.cpp",
    "content": "#include <cassert>\n#include \"randomx_long_with_entropy.h\"\n#include \"vm_interpreted.hpp\"\n#include \"vm_interpreted_light.hpp\"\n#include \"vm_compiled.hpp\"\n#include \"vm_compiled_light.hpp\"\n#include \"blake2/blake2.h\"\n#include \"feistel_msgsize_key_cipher.h\"\n\n// NOTE. possible optimisation with outputEntropySize\n// can improve performance for less memcpy (has almost no impact because randomx is too long 99+%)\n\nextern \"C\" {\n\tconst unsigned char *randomx_calculate_hash_long_with_entropy_get_entropy(randomx_vm *machine, const unsigned char *input, const size_t inputSize, const int randomxProgramCount) {\n\t\tassert(machine != nullptr);\n\t\tassert(inputSize == 0 || input != nullptr);\n\t\talignas(16) uint64_t tempHash[8];\n\t\tint blakeResult = randomx_blake2b(tempHash, sizeof(tempHash), input, inputSize, nullptr, 0);\n\t\tassert(blakeResult == 0);\n\t\tmachine->initScratchpad(&tempHash);\n\t\tmachine->resetRoundingMode();\n\t\tfor (int chain = 0; chain < randomxProgramCount - 1; ++chain) {\n\t\t\tmachine->run(&tempHash);\n\t\t\tblakeResult = randomx_blake2b(tempHash, sizeof(tempHash), machine->getRegisterFile(), sizeof(randomx::RegisterFile), nullptr, 0);\n\t\t\tassert(blakeResult == 0);\n\t\t}\n\t\tmachine->run(&tempHash);\n\t\tunsigned char output[64];\n\t\tmachine->getFinalResult(output, RANDOMX_HASH_SIZE);\n\t\treturn (const unsigned char*)machine->getScratchpad();\n\t}\n\n\t// feistel_encrypt accepts padded message with 2*FEISTEL_BLOCK_LENGTH = 64 bytes\n\tRANDOMX_EXPORT void randomx_encrypt_chunk(randomx_vm *machine, const unsigned char *input, const size_t inputSize, const unsigned char *inChunk, const size_t inChunkSize, unsigned char *outChunk, const int randomxProgramCount) {\n\t\tassert(inChunkSize <= RANDOMX_ENTROPY_SIZE);\n\t\tassert(inChunkSize % (2*FEISTEL_BLOCK_LENGTH) == 0);\n\t\tconst unsigned char *outputEntropy = randomx_calculate_hash_long_with_entropy_get_entropy(machine, input, inputSize, randomxProgramCount);\n\n\t\tfeistel_encrypt((const unsigned char*)inChunk, inChunkSize, outputEntropy, (unsigned char*)outChunk);\n\t}\n\n\tRANDOMX_EXPORT void randomx_decrypt_chunk(randomx_vm *machine, const unsigned char *input, const size_t inputSize, const unsigned char *inChunk, const size_t inChunkSize, unsigned char *outChunk, const int randomxProgramCount) {\n\t\tassert(inChunkSize <= RANDOMX_ENTROPY_SIZE);\n\t\tassert(inChunkSize % (2*FEISTEL_BLOCK_LENGTH) == 0);\n\n\t\tconst unsigned char *outputEntropy = randomx_calculate_hash_long_with_entropy_get_entropy(machine, input, inputSize, randomxProgramCount);\n\n\t\tfeistel_decrypt((const unsigned char*)inChunk, inChunkSize, outputEntropy, (unsigned char*)outChunk);\n\t}\n}\n"
  },
  {
    "path": "apps/arweave/c_src/randomx/randomx_long_with_entropy.h",
    "content": "#ifndef RANDOMX_LONG_WITH_ENTROPY_H\n#define RANDOMX_LONG_WITH_ENTROPY_H\n\n#include \"randomx.h\"\n\n#define RANDOMX_ENTROPY_SIZE (256*1024)\n\n#if defined(__cplusplus)\nextern \"C\" {\n#endif\n\nRANDOMX_EXPORT void randomx_encrypt_chunk(randomx_vm *machine, const unsigned char *input, const size_t inputSize, const unsigned char *inChunk, const size_t inChunkSize,  unsigned char *outChunk, const int randomxProgramCount);\nRANDOMX_EXPORT void randomx_decrypt_chunk(randomx_vm *machine, const unsigned char *input, const size_t inputSize, const unsigned char *inChunk, const size_t outChunkSize, unsigned char *outChunk, const int randomxProgramCount);\n\n#if defined(__cplusplus)\n}\n#endif\n\n#endif // RANDOMX_LONG_WITH_ENTROPY_H"
  },
  {
    "path": "apps/arweave/c_src/randomx/randomx_squared.cpp",
    "content": "#include <cassert>\n#include <openssl/sha.h>\n#include \"crc32.h\"\n#include \"randomx_squared.h\"\n#include \"feistel_msgsize_key_cipher.h\"\n\n// imports from randomx\n#include \"vm_compiled.hpp\"\n#include \"blake2/blake2.h\"\n\nextern \"C\" {\n\n\tvoid _rsp_mix_entropy_near(\n\t\tconst unsigned char *inEntropy,\n\t\tunsigned char *outEntropy,\n\t\tconst size_t entropySize\n\t) {\n\t\t// NOTE we can't use _mm_crc32_u64, because it output only final 32-bit result\n\t\t// NOTE commented variant is more readable but unoptimized\n\t\tunsigned int state = ~0;\n\t\t// unsigned int state = 0;\n\t\tconst unsigned int *inEntropyPtr = (const unsigned int*)inEntropy;\n\t\tunsigned int *outEntropyPtr = (unsigned int*)outEntropy;\n\t\tfor(size_t i=0;i<entropySize;i+=8) {\n\t\t\t//state = crc32(~state, *inEntropyPtr);\n\t\t\t//*outEntropyPtr = *inEntropyPtr ^ ~state;\n\n\t\t\tstate = ~crc32(state, *inEntropyPtr);\n\t\t\t*outEntropyPtr = *inEntropyPtr ^ state;\n\t\t\tinEntropyPtr++;\n\t\t\toutEntropyPtr++;\n\t\t\t*outEntropyPtr = *inEntropyPtr;\n\t\t\tinEntropyPtr++;\n\t\t\toutEntropyPtr++;\n\t\t}\n\n\t\t// keep state\n\t\t// reset output entropy from start\n\t\toutEntropyPtr = (unsigned int*)outEntropy;\n\t\toutEntropyPtr++;\n\t\t// take input from output now\n\t\tinEntropyPtr = outEntropyPtr;\n\t\t// Note it's optimizeable now with only 1 pointer, but we will reduce readability later\n\t\tfor(size_t i=0;i<entropySize;i+=8) {\n\t\t\t//state = crc32(~state, *inEntropyPtr);\n\t\t\t//*outEntropyPtr = *inEntropyPtr ^ ~state;\n\n\t\t\tstate = ~crc32(state, *inEntropyPtr);\n\t\t\t*outEntropyPtr = *inEntropyPtr ^ state;\n\t\t\tinEntropyPtr += 2;\n\t\t\toutEntropyPtr += 2;\n\t\t}\n\t}\n\n\t// Runs 1 RX2 round of programCount RandomX execs + 1 CRC mix on a single lane.\n\t// VM scratchpad is updated in place.\n\tvoid _rsp_exec_inplace(\n\t\trandomx_vm* machine,\n\t\tuint64_t* tempHash,\n\t\tint programCount,\n\t\tsize_t scratchpadSize\n\t) {\n\t\tmachine->resetRoundingMode();\n\t\tfor (int chain = 0; chain < programCount-1; chain++) {\n\t\t\tmachine->run(tempHash);\n\t\t\tint blakeResult = randomx_blake2b(\n\t\t\t\ttempHash, 64,\n\t\t\t\tmachine->getRegisterFile(),\n\t\t\t\tsizeof(randomx::RegisterFile),\n\t\t\t\tnullptr, 0\n\t\t\t);\n\t\t\tassert(blakeResult == 0);\n\t\t}\n\t\tmachine->run(tempHash);\n\t\tint blakeResult = randomx_blake2b(\n\t\t\ttempHash, 64,\n\t\t\tmachine->getRegisterFile(),\n\t\t\tsizeof(randomx::RegisterFile),\n\t\t\tnullptr, 0\n\t\t);\n\t\tassert(blakeResult == 0);\n\t\t_rsp_mix_entropy_near(\n\t\t\t(const unsigned char*)machine->getScratchpad(),\n\t\t\t(unsigned char*)(void*)machine->getScratchpad(),\n\t\t\tscratchpadSize);\n\t}\n\n\tvoid _copy_chunk_cross_lane(\n\t\trandomx_vm** inSet,\n\t\trandomx_vm** outSet,\n\t\tsize_t srcPos,\n\t\tsize_t dstPos,\n\t\tsize_t length,\n\t\tsize_t scratchpadSize\n\t) {\n\t\twhile (length > 0) {\n\t\t\tint srcLane = (int)(srcPos / scratchpadSize);\n\t\t\tsize_t offsetInSrcLane = srcPos % scratchpadSize;\n\n\t\t\tint dstLane = (int)(dstPos / scratchpadSize);\n\t\t\tsize_t offsetInDstLane = dstPos % scratchpadSize;\n\n\t\t\tsize_t srcLaneRemain = scratchpadSize - offsetInSrcLane;\n\t\t\tsize_t dstLaneRemain = scratchpadSize - offsetInDstLane;\n\n\t\t\tsize_t chunkSize = length;\n\t\t\tif (chunkSize > srcLaneRemain) {\n\t\t\t\tchunkSize = srcLaneRemain;\n\t\t\t}\n\t\t\tif (chunkSize > dstLaneRemain) {\n\t\t\t\tchunkSize = dstLaneRemain;\n\t\t\t}\n\n\t\t\tunsigned char* srcSp = (unsigned char*)(void*) 
inSet[srcLane]->getScratchpad();\n\t\t\tunsigned char* dstSp = (unsigned char*)(void*) outSet[dstLane]->getScratchpad();\n\t\t\tmemcpy(dstSp + offsetInDstLane, srcSp + offsetInSrcLane, chunkSize);\n\n\t\t\tsrcPos += chunkSize;\n\t\t\tdstPos += chunkSize;\n\t\t\tlength -= chunkSize;\n\t\t}\n\t}\n\n\tvoid _rsp_mix_entropy_far(\n\t\trandomx_vm** inSet,\n\t\trandomx_vm** outSet,\n\t\tint count,\n\t\tsize_t scratchpadSize,\n\t\tsize_t jumpSize,\n\t\tsize_t blockSize)\n\t{\n\t\tsize_t totalSize = (size_t)count * scratchpadSize;\n\n\t\tsize_t entropySize = totalSize;\n\t\tsize_t numJumps = entropySize / jumpSize;\n\t\tsize_t numBlocksPerJump = jumpSize / blockSize;\n\t\tsize_t leftover = jumpSize % blockSize;\n\n\t\tsize_t outOffset = 0;\n\t\tfor (size_t offset = 0; offset < numBlocksPerJump; ++offset) {\n\t\t\tfor (size_t i = 0; i < numJumps; ++i) {\n\t\t\t\tsize_t srcPos = i * jumpSize + offset * blockSize;\n\t\t\t\t_copy_chunk_cross_lane(inSet, outSet, srcPos, outOffset, blockSize, scratchpadSize);\n\t\t\t\toutOffset += blockSize;\n\t\t\t}\n\t\t}\n\n\t\tif (leftover > 0) {\n\t\t\tfor (size_t i = 0; i < numJumps; ++i) {\n\t\t\t\tsize_t srcPos = i * jumpSize + numBlocksPerJump * blockSize;\n\t\t\t\t_copy_chunk_cross_lane(inSet, outSet, srcPos, outOffset, leftover, scratchpadSize);\n\t\t\t\toutOffset += leftover;\n\t\t\t}\n\t\t}\n\t}\n\n\tint rsp_fused_entropy(\n\t\trandomx_vm** vmList,\n\t\tsize_t scratchpadSize,\n\t\tint subChunkCount,\n\t\tint subChunkSize,\n\t\tint laneCount,\n\t\tint rxDepth,\n\t\tint randomxProgramCount,\n\t\tint blockSize,\n\t\tconst unsigned char* keyData,\n\t\tsize_t keySize,\n\t\tunsigned char* outEntropy\n\t) {\n\t\tstruct vm_hash_t {\n\t\t\talignas(16) uint64_t tempHash[8]; // 64 bytes\n\t\t};\n\n\t\tvm_hash_t* vmHashes = new (std::nothrow) vm_hash_t[laneCount];\n\t\tif (!vmHashes) {\n\t\t\treturn 0;\n\t\t}\n\n\t\t// Initialize the scratchaps for each lane\n\t\tfor (int i = 0; i < laneCount; i++) {\n\t\t\t// laneSeed = sha256(<<keyData, i>>)\n\t\t\t// laneSeed should be unique - i.e. now two lanes across all entropies and all\n\t\t\t// replicas should have the same seed. 
Current key (as off 2025-01-01) is\n\t\t\t// <<Partition, EntropyIndex, RewardAddr>> where entropy index is unique within\n\t\t\t// a given partition.\n\t\t\tunsigned char laneSeed[32];\n\t\t\t{\n\t\t\t\tSHA256_CTX sha256;\n\t\t\t\tSHA256_Init(&sha256);\n\t\t\t\tSHA256_Update(&sha256, keyData, keySize);\n\t\t\t\tunsigned char laneIndex = (unsigned char)i + 1;\n\t\t\t\tSHA256_Update(&sha256, &laneIndex, 1);\n\t\t\t\tSHA256_Final(laneSeed, &sha256);\n\t\t\t}\n\t\t\tint blakeResult = randomx_blake2b(\n\t\t\t\tvmHashes[i].tempHash, sizeof(vmHashes[i].tempHash),\n\t\t\t\tlaneSeed, 32,\n\t\t\t\tnullptr, 0\n\t\t\t);\n\t\t\tif (blakeResult != 0) {\n\t\t\t\tdelete[] vmHashes;\n\t\t\t\treturn 0;\n\t\t\t}\n\t\t\tvmList[i]->initScratchpad(&vmHashes[i].tempHash);\n\t\t}\n\n\t\tfor (int d = 0; d < rxDepth; d++) {\n\t\t\tfor (int lane = 0; lane < laneCount; lane++) {\n\t\t\t\t_rsp_exec_inplace(\n\t\t\t\t\tvmList[lane],\n\t\t\t\t\tvmHashes[lane].tempHash,\n\t\t\t\t\trandomxProgramCount, scratchpadSize);\n\t\t\t}\n\t\t\t_rsp_mix_entropy_far(&vmList[0], &vmList[laneCount],\n\t\t\t\t\t\t\t\t\t\t laneCount, scratchpadSize, scratchpadSize,\n\t\t\t\t\t\t\t\t\t\t blockSize);\n\n\t\t\tif (d + 1 < rxDepth) {\n\t\t\t\td++;\n\t\t\t\tfor (int lane = 0; lane < laneCount; lane++) {\n\t\t\t\t\t_rsp_exec_inplace(\n\t\t\t\t\t\tvmList[lane+laneCount],\n\t\t\t\t\t\tvmHashes[lane].tempHash,\n\t\t\t\t\t\trandomxProgramCount, scratchpadSize);\n\t\t\t\t}\n\t\t\t\t_rsp_mix_entropy_far(&vmList[laneCount], &vmList[0],\n\t\t\t\t\t\t\t\t\t\t\t laneCount, scratchpadSize, scratchpadSize,\n\t\t\t\t\t\t\t\t\t\t\t blockSize);\n\t\t\t}\n\t\t}\n\t\t// NOTE still unoptimal. Last copy can be performed from scratchpad to output.\n\t\t// But requires +1 variation (set to buffer)\n\n\t\tif ((rxDepth % 2) == 0) {\n\t\t\tunsigned char* outEntropyPtr = outEntropy;\n\t\t\tfor (int i = 0; i < laneCount; i++) {\n\t\t\t\tvoid* sp = (void*)vmList[i]->getScratchpad();\n\t\t\t\tmemcpy(outEntropyPtr, sp, scratchpadSize);\n\t\t\t\toutEntropyPtr += scratchpadSize;\n\t\t\t}\n\t\t} else {\n\t\t\tunsigned char* outEntropyPtr = outEntropy;\n\t\t\tfor (int i = laneCount; i < 2*laneCount; i++) {\n\t\t\t\tvoid* sp = (void*)vmList[i]->getScratchpad();\n\t\t\t\tmemcpy(outEntropyPtr, sp, scratchpadSize);\n\t\t\t\toutEntropyPtr += scratchpadSize;\n\t\t\t}\n\t\t}\n\n\t\tdelete[] vmHashes;\n\n\t\treturn 1;\n\t}\n\n\t// TODO optimized packing_apply_to_subchunk (NIF only uses slice)\n}\n"
  },
  {
    "path": "apps/arweave/c_src/randomx/randomx_squared.h",
    "content": "#ifndef RANDOMX_SQUARED_H\n#define RANDOMX_SQUARED_H\n\n#include \"randomx.h\"\n\n#if defined(__cplusplus)\nextern \"C\" {\n#endif\n\nRANDOMX_EXPORT int rsp_fused_entropy(\n    randomx_vm** vmList,\n    size_t scratchpadSize,\n    int subChunkCount,\n    int subChunkSize,\n    int laneCount,\n    int rxDepth,\n    int randomxProgramCount,\n    int blockSize,\n    const unsigned char* keyData,\n    size_t keySize,\n    unsigned char* outEntropy  // We'll pass in a pointer for final scratchpad data\n);\n\n// TODO optimized packing_apply_to_subchunk (NIF only uses slice)\n\n#if defined(__cplusplus)\n}\n#endif\n\n#endif // RANDOMX_SQUARED_H\n"
  },
  {
    "path": "apps/arweave/c_src/randomx/rx4096/ar_rx4096_nif.c",
    "content": "#include <string.h>\n#include <openssl/sha.h>\n#include <ar_nif.h>\n#include \"../randomx_long_with_entropy.h\"\n#include \"../feistel_msgsize_key_cipher.h\"\n\n#include \"../ar_randomx_impl.h\"\n\nconst int PACKING_KEY_SIZE = 32;\nconst int MAX_CHUNK_SIZE = 256*1024;\n\nstatic int rx4096_load(ErlNifEnv* envPtr, void** priv, ERL_NIF_TERM info);\nstatic ERL_NIF_TERM rx4096_info_nif(ErlNifEnv* envPtr, int argc, const ERL_NIF_TERM argv[]);\nstatic ERL_NIF_TERM rx4096_init_nif(ErlNifEnv* envPtr, int argc, const ERL_NIF_TERM argv[]);\nstatic ERL_NIF_TERM rx4096_hash_nif(ErlNifEnv* envPtr, int argc, const ERL_NIF_TERM argv[]);\nstatic ERL_NIF_TERM rx4096_encrypt_composite_chunk_nif(\n\tErlNifEnv* envPtr,\n\tint argc,\n\tconst ERL_NIF_TERM argv[]\n);\nstatic ERL_NIF_TERM rx4096_decrypt_composite_chunk_nif(\n\tErlNifEnv* envPtr,\n\tint argc,\n\tconst ERL_NIF_TERM argv[]\n);\nstatic ERL_NIF_TERM rx4096_decrypt_composite_sub_chunk_nif(\n\tErlNifEnv* envPtr,\n\tint argc,\n\tconst ERL_NIF_TERM argv[]\n);\nstatic ERL_NIF_TERM rx4096_reencrypt_composite_chunk_nif(\n\tErlNifEnv* envPtr,\n\tint argc,\n\tconst ERL_NIF_TERM argv[]\n);\n\nstatic int rx4096_load(ErlNifEnv* envPtr, void** priv, ERL_NIF_TERM info)\n{\n\treturn load(envPtr, priv, info);\n}\n\nstatic ERL_NIF_TERM rx4096_info_nif(ErlNifEnv* envPtr, int argc, const ERL_NIF_TERM argv[])\n{\n\treturn info_nif(\"rx4096\", envPtr, argc, argv);\n}\n\nstatic ERL_NIF_TERM rx4096_init_nif(ErlNifEnv* envPtr, int argc, const ERL_NIF_TERM argv[])\n{\n\treturn init_nif(envPtr, argc, argv);\n}\n\nstatic ERL_NIF_TERM rx4096_hash_nif(ErlNifEnv* envPtr, int argc, const ERL_NIF_TERM argv[])\n{\n\treturn hash_nif(envPtr, argc, argv);\n}\n\nstatic ERL_NIF_TERM encrypt_composite_chunk(ErlNifEnv* envPtr,\n\t\trandomx_vm *vmPtr, ErlNifBinary *inputDataPtr, ErlNifBinary *inputChunkPtr,\n\t\tconst int subChunkCount, const int iterations,\n\t\tconst int randomxRoundCount, const int jitEnabled,\n\t\tconst int largePagesEnabled, const int hardwareAESEnabled) {\n\n\tunsigned char *paddedChunk = (unsigned char*)malloc(MAX_CHUNK_SIZE);\n\tif (inputChunkPtr->size == MAX_CHUNK_SIZE) {\n\t\tmemcpy(paddedChunk, inputChunkPtr->data, inputChunkPtr->size);\n\t} else {\n\t\tmemset(paddedChunk, 0, MAX_CHUNK_SIZE);\n\t\tmemcpy(paddedChunk, inputChunkPtr->data, inputChunkPtr->size);\n\t}\n\n\tERL_NIF_TERM encryptedChunkTerm;\n\tunsigned char* encryptedChunk = enif_make_new_binary(envPtr, MAX_CHUNK_SIZE,\n\t\t\t&encryptedChunkTerm);\n\t// MAX_CHUNK_SIZE / subChunkCount is a multiple of 64 so all sub-chunks\n\t// are of the same size.\n\tuint32_t subChunkSize = MAX_CHUNK_SIZE / subChunkCount;\n\tuint32_t offset = 0;\n\tunsigned char key[PACKING_KEY_SIZE];\n\t// Encrypt each sub-chunk independently and then concatenate the encrypted sub-chunks\n\t// to yield encrypted composite chunk.\n\tfor (int i = 0; i < subChunkCount; i++) {\n\t\tunsigned char* subChunk = paddedChunk + offset;\n\t\tunsigned char* encryptedSubChunk = (unsigned char*)malloc(subChunkSize);\n\n\t\t// 3 bytes is sufficient to represent offsets up to at most MAX_CHUNK_SIZE.\n\t\tint offsetByteSize = 3;\n\t\tunsigned char offsetBytes[offsetByteSize];\n\t\t// Byte string representation of the sub-chunk start offset: i * subChunkSize.\n\t\tfor (int k = 0; k < offsetByteSize; k++) {\n\t\t\toffsetBytes[k] = ((offset + subChunkSize) >> (8 * (offsetByteSize - 1 - k))) & 0xFF;\n\t\t}\n\t\t// Sub-chunk encryption key is the SHA256 hash of the concatenated\n\t\t// input data and the sub-chunk start 
offset.\n\t\tSHA256_CTX sha256;\n\t\tSHA256_Init(&sha256);\n\t\tSHA256_Update(&sha256, inputDataPtr->data, inputDataPtr->size);\n\t\tSHA256_Update(&sha256, offsetBytes, offsetByteSize);\n\t\tSHA256_Final(key, &sha256);\n\n\t\t// Sequentially encrypt each sub-chunk 'iterations' times.\n\t\tfor (int j = 0; j < iterations; j++) {\n\t\t\trandomx_encrypt_chunk(\n\t\t\t\tvmPtr, key, PACKING_KEY_SIZE, subChunk, subChunkSize,\n\t\t\t\tencryptedSubChunk, randomxRoundCount);\n\t\t\tif (j < iterations - 1) {\n\t\t\t\tmemcpy(subChunk, encryptedSubChunk, subChunkSize);\n\t\t\t}\n\t\t}\n\t\tmemcpy(encryptedChunk + offset, encryptedSubChunk, subChunkSize);\n\t\tfree(encryptedSubChunk);\n\t\toffset += subChunkSize;\n\t}\n\tfree(paddedChunk);\n\treturn encryptedChunkTerm;\n}\n\nstatic ERL_NIF_TERM decrypt_composite_chunk(ErlNifEnv* envPtr,\n\t\trandomx_vm *vmPtr, ErlNifBinary *inputDataPtr, ErlNifBinary *inputChunkPtr,\n\t\tconst int outChunkLen, const int subChunkCount, const int iterations,\n\t\tconst int randomxRoundCount, const int jitEnabled,\n\t\tconst int largePagesEnabled, const int hardwareAESEnabled) {\n\n\tunsigned char *chunk = (unsigned char*)malloc(MAX_CHUNK_SIZE);\n\tmemcpy(chunk, inputChunkPtr->data, inputChunkPtr->size);\n\n\tERL_NIF_TERM decryptedChunkTerm;\n\tunsigned char* decryptedChunk = enif_make_new_binary(envPtr, outChunkLen,\n\t\t\t&decryptedChunkTerm);\n\tunsigned char* decryptedSubChunk;\n\t// outChunkLen / subChunkCount is a multiple of 64 so all sub-chunks\n\t// are of the same size.\n\tuint32_t subChunkSize = outChunkLen / subChunkCount;\n\tuint32_t offset = 0;\n\tunsigned char key[PACKING_KEY_SIZE];\n\t// Decrypt each sub-chunk independently and then concatenate the decrypted sub-chunks\n\t// to yield encrypted composite chunk.\n\tfor (int i = 0; i < subChunkCount; i++) {\n\t\tunsigned char* subChunk = chunk + offset;\n\t\tdecryptedSubChunk = (unsigned char*)malloc(subChunkSize);\n\n\t\t// 3 bytes is sufficient to represent offsets up to at most MAX_CHUNK_SIZE.\n\t\tint offsetByteSize = 3;\n\t\tunsigned char offsetBytes[offsetByteSize];\n\t\t// Byte string representation of the sub-chunk start offset: i * subChunkSize.\n\t\tfor (int k = 0; k < offsetByteSize; k++) {\n\t\t\toffsetBytes[k] = ((offset + subChunkSize) >> (8 * (offsetByteSize - 1 - k))) & 0xFF;\n\t\t}\n\t\t// Sub-chunk encryption key is the SHA256 hash of the concatenated\n\t\t// input data and the sub-chunk start offset.\n\t\tSHA256_CTX sha256;\n\t\tSHA256_Init(&sha256);\n\t\tSHA256_Update(&sha256, inputDataPtr->data, inputDataPtr->size);\n\t\tSHA256_Update(&sha256, offsetBytes, offsetByteSize);\n\t\tSHA256_Final(key, &sha256);\n\n\t\t// Sequentially decrypt each sub-chunk 'iterations' times.\n\t\tfor (int j = 0; j < iterations; j++) {\n\t\t\trandomx_decrypt_chunk(\n\t\t\t\tvmPtr, key, PACKING_KEY_SIZE, subChunk, subChunkSize,\n\t\t\t\tdecryptedSubChunk, randomxRoundCount);\n\t\t\tif (j < iterations - 1) {\n\t\t\t\tmemcpy(subChunk, decryptedSubChunk, subChunkSize);\n\t\t\t}\n\t\t}\n\t\tmemcpy(decryptedChunk + offset, decryptedSubChunk, subChunkSize);\n\t\tfree(decryptedSubChunk);\n\t\toffset += subChunkSize;\n\t}\n\tfree(chunk);\n\treturn decryptedChunkTerm;\n}\n\nstatic ERL_NIF_TERM rx4096_encrypt_composite_chunk_nif(\n\tErlNifEnv* envPtr,\n\tint argc,\n\tconst ERL_NIF_TERM argv[]\n) {\n\t// RandomX rounds per sub-chunk.\n\tint randomxRoundCount;\n\t// RandomX iterations (randomxRoundCount each) per sub-chunk.\n\tint iterations;\n\t// The number of sub-chunks in the chunk.\n\tint subChunkCount;\n\tint 
jitEnabled, largePagesEnabled, hardwareAESEnabled;\n\tstruct state* statePtr;\n\tErlNifBinary inputData;\n\tErlNifBinary inputChunk;\n\n\tif (argc != 9) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_resource(envPtr, argv[0], stateType, (void**) &statePtr)) {\n\t\treturn error_tuple(envPtr, \"failed to read state\");\n\t}\n\tif (!enif_inspect_binary(envPtr, argv[1], &inputData)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_inspect_binary(envPtr, argv[2], &inputChunk) ||\n\t\tinputChunk.size == 0 ||\n\t\tinputChunk.size > MAX_CHUNK_SIZE) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[3], &jitEnabled)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[4], &largePagesEnabled)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[5], &hardwareAESEnabled)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[6], &randomxRoundCount)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[7], &iterations) ||\n\t\titerations < 1) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[8], &subChunkCount) ||\n\t\tsubChunkCount < 1 ||\n\t\tMAX_CHUNK_SIZE % subChunkCount != 0 ||\n\t\t(MAX_CHUNK_SIZE / subChunkCount) % 64 != 0 ||\n\t\tsubChunkCount > (MAX_CHUNK_SIZE / 64)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\n\tint isRandomxReleased;\n\trandomx_vm *vmPtr = create_vm(statePtr, (statePtr->mode == HASHING_MODE_FAST),\n\t\t\tjitEnabled, largePagesEnabled, hardwareAESEnabled, &isRandomxReleased);\n\tif (vmPtr == NULL) {\n\t\tif (isRandomxReleased != 0) {\n\t\t\treturn error_tuple(envPtr, \"state has been released\");\n\t\t}\n\t\treturn error_tuple(envPtr, \"randomx_create_vm failed\");\n\t}\n\n\tERL_NIF_TERM encryptedChunkTerm = encrypt_composite_chunk(envPtr, vmPtr, &inputData,\n\t\t\t&inputChunk, subChunkCount, iterations, randomxRoundCount,\n\t\t\tjitEnabled, largePagesEnabled, hardwareAESEnabled);\n\tdestroy_vm(statePtr, vmPtr);\n\treturn ok_tuple(envPtr, encryptedChunkTerm);\n}\n\nstatic ERL_NIF_TERM rx4096_decrypt_composite_chunk_nif(\n\tErlNifEnv* envPtr,\n\tint argc,\n\tconst ERL_NIF_TERM argv[]\n) {\n\tint outChunkLen;\n\t// RandomX rounds per sub-chunk.\n\tint randomxRoundCount;\n\t// RandomX iterations (randomxRoundCount each) per sub-chunk.\n\tint iterations;\n\t// The number of sub-chunks in the chunk.\n\tint subChunkCount;\n\tint jitEnabled, largePagesEnabled, hardwareAESEnabled;\n\tstruct state* statePtr;\n\tErlNifBinary inputData;\n\tErlNifBinary inputChunk;\n\n\tif (argc != 10) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_resource(envPtr, argv[0], stateType, (void**) &statePtr)) {\n\t\treturn error_tuple(envPtr, \"failed to read state\");\n\t}\n\tif (!enif_inspect_binary(envPtr, argv[1], &inputData)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_inspect_binary(envPtr, argv[2], &inputChunk) ||\n\t\tinputChunk.size == 0 ||\n\t\tinputChunk.size > MAX_CHUNK_SIZE) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[3], &outChunkLen) ||\n\t\toutChunkLen > MAX_CHUNK_SIZE ||\n\t\toutChunkLen < 64 ||\n\t\tinputChunk.size != outChunkLen) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[4], &jitEnabled)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[5], &largePagesEnabled)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[6], &hardwareAESEnabled)) 
{\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[7], &randomxRoundCount)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[8], &iterations) ||\n\t\titerations < 1) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[9], &subChunkCount) ||\n\t\tsubChunkCount < 1 ||\n\t\toutChunkLen % subChunkCount != 0 ||\n\t\t(outChunkLen / subChunkCount) % 64 != 0 ||\n\t\tsubChunkCount > (outChunkLen / 64)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\n\tint isRandomxReleased;\n\trandomx_vm *vmPtr = create_vm(statePtr, (statePtr->mode == HASHING_MODE_FAST),\n\t\t\tjitEnabled, largePagesEnabled, hardwareAESEnabled, &isRandomxReleased);\n\tif (vmPtr == NULL) {\n\t\tif (isRandomxReleased != 0) {\n\t\t\treturn error_tuple(envPtr, \"state has been released\");\n\t\t}\n\t\treturn error_tuple(envPtr, \"randomx_create_vm failed\");\n\t}\n\tERL_NIF_TERM decryptedChunkTerm = decrypt_composite_chunk(envPtr, vmPtr,\n\t\t\t&inputData, &inputChunk, outChunkLen, subChunkCount, iterations,\n\t\t\trandomxRoundCount, jitEnabled, largePagesEnabled, hardwareAESEnabled);\n\n\tdestroy_vm(statePtr, vmPtr);\n\n\treturn ok_tuple(envPtr, decryptedChunkTerm);\n}\n\nstatic ERL_NIF_TERM rx4096_decrypt_composite_sub_chunk_nif(\n\tErlNifEnv* envPtr,\n\tint argc,\n\tconst ERL_NIF_TERM argv[]\n) {\n\tint outChunkLen;\n\t// RandomX rounds per sub-chunk.\n\tint randomxRoundCount;\n\t// RandomX iterations (randomxRoundCount each) per sub-chunk.\n\tint iterations;\n\t// The relative sub-chunk start offset. We add the chunk size to it, encode the result,\n\t// add it to the base packing key, and SHA256-hash it to get the packing key.\n\tuint32_t offset;\n\tint jitEnabled, largePagesEnabled, hardwareAESEnabled;\n\tstruct state* statePtr;\n\tErlNifBinary inputData;\n\tErlNifBinary inputChunk;\n\n\tif (argc != 10) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_resource(envPtr, argv[0], stateType, (void**) &statePtr)) {\n\t\treturn error_tuple(envPtr, \"failed to read state\");\n\t}\n\tif (!enif_inspect_binary(envPtr, argv[1], &inputData)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_inspect_binary(envPtr, argv[2], &inputChunk)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[3], &outChunkLen) ||\n\t\toutChunkLen > MAX_CHUNK_SIZE ||\n\t\toutChunkLen < 64 ||\n\t\tinputChunk.size != outChunkLen ) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[4], &jitEnabled)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[5], &largePagesEnabled)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[6], &hardwareAESEnabled)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[7], &randomxRoundCount)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[8], &iterations) ||\n\t\titerations < 1) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_uint(envPtr, argv[9], &offset) ||\n\t\toffset < 0 ||\n\t\toffset > MAX_CHUNK_SIZE) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\n\tint isRandomxReleased;\n\tERL_NIF_TERM decryptedSubChunkTerm;\n\tunsigned char* decryptedSubChunk = enif_make_new_binary(envPtr, outChunkLen,\n\t\t\t&decryptedSubChunkTerm);\n\tuint32_t subChunkSize = outChunkLen;\n\tunsigned char key[PACKING_KEY_SIZE];\n\n\trandomx_vm *vmPtr = create_vm(statePtr, (statePtr->mode == HASHING_MODE_FAST),\n\t\t\tjitEnabled, largePagesEnabled, hardwareAESEnabled, 
&isRandomxReleased);\n\tif (vmPtr == NULL) {\n\t\tif (isRandomxReleased != 0) {\n\t\t\treturn error_tuple(envPtr, \"state has been released\");\n\t\t}\n\t\treturn error_tuple(envPtr, \"randomx_create_vm failed\");\n\t}\n\n\tunsigned char* subChunk = (unsigned char*)malloc(inputChunk.size);\n\tmemcpy(subChunk, inputChunk.data, inputChunk.size);\n\n\t// 3 bytes is sufficient to represent offsets up to at most MAX_CHUNK_SIZE.\n\tint offsetByteSize = 3;\n\tunsigned char offsetBytes[offsetByteSize];\n\tfor (int k = 0; k < offsetByteSize; k++) {\n\t\toffsetBytes[k] = ((offset + subChunkSize) >> (8 * (offsetByteSize - 1 - k))) & 0xFF;\n\t}\n\t// Sub-chunk encryption key is the SHA256 hash of the concatenated\n\t// input data and the sub-chunk start offset.\n\tSHA256_CTX sha256;\n\tSHA256_Init(&sha256);\n\tSHA256_Update(&sha256, inputData.data, inputData.size);\n\tSHA256_Update(&sha256, offsetBytes, offsetByteSize);\n\tSHA256_Final(key, &sha256);\n\n\t// Sequentially decrypt each sub-chunk 'iterations' times.\n\tfor (int j = 0; j < iterations; j++) {\n\t\trandomx_decrypt_chunk(vmPtr, key, PACKING_KEY_SIZE, subChunk, subChunkSize,\n\t\t\tdecryptedSubChunk, randomxRoundCount);\n\t\tif (j < iterations - 1) {\n\t\t\tmemcpy(subChunk, decryptedSubChunk, subChunkSize);\n\t\t}\n\t}\n\tfree(subChunk);\n\tdestroy_vm(statePtr, vmPtr);\n\n\treturn ok_tuple(envPtr, decryptedSubChunkTerm);\n}\n\nstatic ERL_NIF_TERM rx4096_reencrypt_composite_chunk_nif(\n\tErlNifEnv* envPtr,\n\tint argc,\n\tconst ERL_NIF_TERM argv[]\n) {\n\tint decryptRandomxRoundCount, encryptRandomxRoundCount;\n\tint jitEnabled, largePagesEnabled, hardwareAESEnabled;\n\tint decryptSubChunkCount, encryptSubChunkCount, decryptIterations, encryptIterations;\n\tstruct state* statePtr;\n\tErlNifBinary decryptKey;\n\tErlNifBinary encryptKey;\n\tErlNifBinary inputChunk;\n\tERL_NIF_TERM inputChunkTerm;\n\n\tif (argc != 13) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_resource(envPtr, argv[0], stateType, (void**) &statePtr)) {\n\t\treturn error_tuple(envPtr, \"failed to read state\");\n\t}\n\tif (!enif_inspect_binary(envPtr, argv[1], &decryptKey)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_inspect_binary(envPtr, argv[2], &encryptKey)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_inspect_binary(envPtr, argv[3], &inputChunk) ||\n\t\t\tinputChunk.size != MAX_CHUNK_SIZE) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tinputChunkTerm = argv[3];\n\tif (!enif_get_int(envPtr, argv[4], &jitEnabled)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[5], &largePagesEnabled)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[6], &hardwareAESEnabled)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[7], &decryptRandomxRoundCount)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[8], &encryptRandomxRoundCount)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[9], &decryptIterations) ||\n\t\tdecryptIterations < 1) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[10], &encryptIterations) ||\n\t\tencryptIterations < 1) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[11], &decryptSubChunkCount) ||\n\t\tdecryptSubChunkCount < 1 ||\n\t\tMAX_CHUNK_SIZE % decryptSubChunkCount != 0 ||\n\t\t(MAX_CHUNK_SIZE / decryptSubChunkCount) % 64 != 0 ||\n\t\tdecryptSubChunkCount > (MAX_CHUNK_SIZE / 64)) {\n\t\treturn 
enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[12], &encryptSubChunkCount) ||\n\t\tencryptSubChunkCount < 1 ||\n\t\tMAX_CHUNK_SIZE % encryptSubChunkCount != 0 ||\n\t\t(MAX_CHUNK_SIZE / encryptSubChunkCount) % 64 != 0 ||\n\t\tencryptSubChunkCount > (MAX_CHUNK_SIZE / 64)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\n\tint isRandomxReleased;\n\trandomx_vm *vmPtr = create_vm(statePtr, (statePtr->mode == HASHING_MODE_FAST),\n\t\t\tjitEnabled, largePagesEnabled, hardwareAESEnabled, &isRandomxReleased);\n\tif (vmPtr == NULL) {\n\t\tif (isRandomxReleased != 0) {\n\t\t\treturn error_tuple(envPtr, \"state has been released\");\n\t\t}\n\t\treturn error_tuple(envPtr, \"randomx_create_vm failed\");\n\t}\n\n\tint keysMatch = 0;\n\tif (decryptKey.size == encryptKey.size) {\n\t\tif (memcmp(decryptKey.data, encryptKey.data, decryptKey.size) == 0) {\n\t\t\tkeysMatch = 1;\n\t\t}\n\t}\n\tint encryptionsMatch = 0;\n\tif (keysMatch && (decryptSubChunkCount == encryptSubChunkCount) &&\n\t\t\t(decryptRandomxRoundCount == encryptRandomxRoundCount)) {\n\t\tencryptionsMatch = 1;\n\t}\n\n\tif (encryptionsMatch && (encryptIterations <= decryptIterations)) {\n\t\tdestroy_vm(statePtr, vmPtr);\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\n\tunsigned char decryptedChunk[MAX_CHUNK_SIZE];\n\tErlNifBinary *decryptedChunkBinPtr;\n\tERL_NIF_TERM decryptedChunkTerm;\n\tif (!encryptionsMatch) {\n\t\tdecryptedChunkTerm = decrypt_composite_chunk(envPtr, vmPtr,\n\t\t\t\t&decryptKey, &inputChunk, inputChunk.size, decryptSubChunkCount,\n\t\t\t\tdecryptIterations, decryptRandomxRoundCount, jitEnabled,\n\t\t\t\tlargePagesEnabled, hardwareAESEnabled);\n\t\tErlNifBinary decryptedChunkBin;\n\t\tif (!enif_inspect_binary(envPtr, decryptedChunkTerm, &decryptedChunkBin)) {\n\t\t\tdestroy_vm(statePtr, vmPtr);\n\t\t\treturn enif_make_badarg(envPtr);\n\t\t}\n\t\tdecryptedChunkBinPtr = &decryptedChunkBin;\n\t} else {\n\t\tdecryptedChunkBinPtr = &inputChunk;\n\t\tdecryptedChunkTerm = inputChunkTerm;\n\t}\n\tint iterations = encryptIterations;\n\tif (encryptionsMatch) {\n\t\titerations = encryptIterations - decryptIterations;\n\t}\n\n\tERL_NIF_TERM reencryptedChunkTerm = encrypt_composite_chunk(envPtr, vmPtr, &encryptKey,\n\t\t\tdecryptedChunkBinPtr, encryptSubChunkCount, iterations, encryptRandomxRoundCount,\n\t\t\tjitEnabled, largePagesEnabled, hardwareAESEnabled);\n\tdestroy_vm(statePtr, vmPtr);\n\treturn ok_tuple2(envPtr, reencryptedChunkTerm, decryptedChunkTerm);\n}\n\nstatic ErlNifFunc rx4096_funcs[] = {\n\t{\"rx4096_info_nif\", 1, rx4096_info_nif, ERL_NIF_DIRTY_JOB_CPU_BOUND},\n\t{\"rx4096_init_nif\", 5, rx4096_init_nif, ERL_NIF_DIRTY_JOB_CPU_BOUND},\n\t{\"rx4096_hash_nif\", 5, rx4096_hash_nif, ERL_NIF_DIRTY_JOB_CPU_BOUND},\n\t{\"rx4096_encrypt_composite_chunk_nif\", 9, rx4096_encrypt_composite_chunk_nif,\n\t\tERL_NIF_DIRTY_JOB_CPU_BOUND},\n\t{\"rx4096_decrypt_composite_chunk_nif\", 10, rx4096_decrypt_composite_chunk_nif,\n\t\tERL_NIF_DIRTY_JOB_CPU_BOUND},\n\t{\"rx4096_decrypt_composite_sub_chunk_nif\", 10, rx4096_decrypt_composite_sub_chunk_nif,\n\t\tERL_NIF_DIRTY_JOB_CPU_BOUND},\n\t{\"rx4096_reencrypt_composite_chunk_nif\", 13,\n\t\trx4096_reencrypt_composite_chunk_nif, ERL_NIF_DIRTY_JOB_CPU_BOUND}\n};\n\nERL_NIF_INIT(ar_rx4096_nif, rx4096_funcs, rx4096_load, NULL, NULL, NULL);\n"
  },
  {
    "path": "apps/arweave/c_src/randomx/rx512/ar_rx512_nif.c",
    "content": "#include <string.h>\n#include <openssl/sha.h>\n#include <ar_nif.h>\n#include \"../randomx_long_with_entropy.h\"\n#include \"../feistel_msgsize_key_cipher.h\"\n\n#include \"../ar_randomx_impl.h\"\n\nconst int MAX_CHUNK_SIZE = 256*1024;\n\nstatic int rx512_load(ErlNifEnv* envPtr, void** priv, ERL_NIF_TERM info);\nstatic ERL_NIF_TERM rx512_info_nif(ErlNifEnv* envPtr, int argc, const ERL_NIF_TERM argv[]);\nstatic ERL_NIF_TERM rx512_init_nif(ErlNifEnv* envPtr, int argc, const ERL_NIF_TERM argv[]);\nstatic ERL_NIF_TERM rx512_hash_nif(ErlNifEnv* envPtr, int argc, const ERL_NIF_TERM argv[]);\nstatic ERL_NIF_TERM rx512_encrypt_chunk_nif(ErlNifEnv* envPtr, int argc, const ERL_NIF_TERM argv[]);\nstatic ERL_NIF_TERM rx512_decrypt_chunk_nif(ErlNifEnv* envPtr, int argc, const ERL_NIF_TERM argv[]);\nstatic ERL_NIF_TERM rx512_reencrypt_chunk_nif(ErlNifEnv* envPtr, int argc, const ERL_NIF_TERM argv[]);\n\nstatic int rx512_load(ErlNifEnv* envPtr, void** priv, ERL_NIF_TERM info)\n{\n\treturn load(envPtr, priv, info);\n}\n\nstatic ERL_NIF_TERM rx512_info_nif(ErlNifEnv* envPtr, int argc, const ERL_NIF_TERM argv[])\n{\n\treturn info_nif(\"rx512\", envPtr, argc, argv);\n}\n\nstatic ERL_NIF_TERM rx512_init_nif(ErlNifEnv* envPtr, int argc, const ERL_NIF_TERM argv[])\n{\n\treturn init_nif(envPtr, argc, argv);\n}\n\nstatic ERL_NIF_TERM rx512_hash_nif(ErlNifEnv* envPtr, int argc, const ERL_NIF_TERM argv[])\n{\n\treturn hash_nif(envPtr, argc, argv);\n}\n\nstatic ERL_NIF_TERM decrypt_chunk(ErlNifEnv* envPtr,\n\t\trandomx_vm *machine, const unsigned char *input, const size_t inputSize,\n\t\tconst unsigned char *inChunk, const size_t inChunkSize,\n\t\tunsigned char* outChunk, const size_t outChunkSize,\n\t\tconst int randomxProgramCount) {\n\trandomx_decrypt_chunk(\n\t\tmachine, input, inputSize, inChunk, inChunkSize, outChunk, randomxProgramCount);\n\treturn make_output_binary(envPtr, outChunk, outChunkSize);\n}\n\nstatic ERL_NIF_TERM encrypt_chunk(ErlNifEnv* envPtr,\n\t\trandomx_vm *machine, const unsigned char *input, const size_t inputSize,\n\t\tconst unsigned char *inChunk, const size_t inChunkSize,\n\t\tconst int randomxProgramCount) {\n\tERL_NIF_TERM encryptedChunkTerm;\n\tunsigned char* encryptedChunk = enif_make_new_binary(\n\t\t\t\t\t\t\t\t\t\tenvPtr, MAX_CHUNK_SIZE, &encryptedChunkTerm);\n\n\tif (inChunkSize < MAX_CHUNK_SIZE) {\n\t\tunsigned char *paddedInChunk = (unsigned char*)malloc(MAX_CHUNK_SIZE);\n\t\tmemset(paddedInChunk, 0, MAX_CHUNK_SIZE);\n\t\tmemcpy(paddedInChunk, inChunk, inChunkSize);\n\t\trandomx_encrypt_chunk(\n\t\t\tmachine, input, inputSize, paddedInChunk, MAX_CHUNK_SIZE,\n\t\t\tencryptedChunk, randomxProgramCount);\n\t\tfree(paddedInChunk);\n\t} else {\n\t\trandomx_encrypt_chunk(\n\t\t\tmachine, input, inputSize, inChunk, inChunkSize,\n\t\t\tencryptedChunk, randomxProgramCount);\n\t}\n\n\treturn encryptedChunkTerm;\n}\n\nstatic ERL_NIF_TERM rx512_encrypt_chunk_nif(\n\tErlNifEnv* envPtr,\n\tint argc,\n\tconst ERL_NIF_TERM argv[]\n) {\n\tint randomxRoundCount, jitEnabled, largePagesEnabled, hardwareAESEnabled;\n\tstruct state* statePtr;\n\tErlNifBinary inputData;\n\tErlNifBinary inputChunk;\n\n\tif (argc != 7) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_resource(envPtr, argv[0], stateType, (void**) &statePtr)) {\n\t\treturn error_tuple(envPtr, \"failed to read state\");\n\t}\n\tif (!enif_inspect_binary(envPtr, argv[1], &inputData)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_inspect_binary(envPtr, argv[2], &inputChunk) ||\n\t\tinputChunk.size == 0 
||\n\t\tinputChunk.size > MAX_CHUNK_SIZE) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[3], &randomxRoundCount)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[4], &jitEnabled)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[5], &largePagesEnabled)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[6], &hardwareAESEnabled)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\n\tint isRandomxReleased;\n\trandomx_vm *vmPtr = create_vm(statePtr, (statePtr->mode == HASHING_MODE_FAST),\n\t\tjitEnabled, largePagesEnabled, hardwareAESEnabled, &isRandomxReleased);\n\tif (vmPtr == NULL) {\n\t\tif (isRandomxReleased != 0) {\n\t\t\treturn error_tuple(envPtr, \"state has been released\");\n\t\t}\n\t\treturn error_tuple(envPtr, \"randomx_create_vm failed\");\n\t}\n\n\tERL_NIF_TERM outChunkTerm = encrypt_chunk(envPtr, vmPtr,\n\t\tinputData.data, inputData.size, inputChunk.data, inputChunk.size, randomxRoundCount);\n\n\tdestroy_vm(statePtr, vmPtr);\n\treturn ok_tuple(envPtr, outChunkTerm);\n}\n\nstatic ERL_NIF_TERM rx512_decrypt_chunk_nif(\n\tErlNifEnv* envPtr,\n\tint argc,\n\tconst ERL_NIF_TERM argv[]\n) {\n\tint outChunkLen, randomxRoundCount, jitEnabled, largePagesEnabled, hardwareAESEnabled;\n\tstruct state* statePtr;\n\tErlNifBinary inputData;\n\tErlNifBinary inputChunk;\n\n\tif (argc != 8) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_resource(envPtr, argv[0], stateType, (void**) &statePtr)) {\n\t\treturn error_tuple(envPtr, \"failed to read state\");\n\t}\n\tif (!enif_inspect_binary(envPtr, argv[1], &inputData)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_inspect_binary(envPtr, argv[2], &inputChunk) ||\n\t\tinputChunk.size == 0 ||\n\t\tinputChunk.size > MAX_CHUNK_SIZE) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[3], &outChunkLen)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[4], &randomxRoundCount)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[5], &jitEnabled)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[6], &largePagesEnabled)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[7], &hardwareAESEnabled)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\n\tint isRandomxReleased;\n\trandomx_vm *vmPtr = create_vm(statePtr, (statePtr->mode == HASHING_MODE_FAST),\n\t\tjitEnabled, largePagesEnabled, hardwareAESEnabled, &isRandomxReleased);\n\tif (vmPtr == NULL) {\n\t\tif (isRandomxReleased != 0) {\n\t\t\treturn error_tuple(envPtr, \"state has been released\");\n\t\t}\n\t\treturn error_tuple(envPtr, \"randomx_create_vm failed\");\n\t}\n\n\t// NOTE. Because randomx_decrypt_chunk will unpack padding too, decrypt always uses the\n\t// full 256KB chunk size. 
We'll then truncate the output to the correct feistel-padded\n\t// outChunkSize.\n\tunsigned char outChunk[MAX_CHUNK_SIZE];\n\tERL_NIF_TERM decryptedChunkTerm = decrypt_chunk(envPtr, vmPtr,\n\t\tinputData.data, inputData.size, inputChunk.data, inputChunk.size,\n\t\toutChunk, outChunkLen, randomxRoundCount);\n\n\tdestroy_vm(statePtr, vmPtr);\n\n\treturn ok_tuple(envPtr, decryptedChunkTerm);\n}\n\nstatic ERL_NIF_TERM rx512_reencrypt_chunk_nif(\n\tErlNifEnv* envPtr,\n\tint argc,\n\tconst ERL_NIF_TERM argv[]\n) {\n\tint chunkSize, decryptRandomxRoundCount, encryptRandomxRoundCount;\n\tint jitEnabled, largePagesEnabled, hardwareAESEnabled;\n\tstruct state* statePtr;\n\tErlNifBinary decryptKey;\n\tErlNifBinary encryptKey;\n\tErlNifBinary inputChunk;\n\n\tif (argc != 10) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_resource(envPtr, argv[0], stateType, (void**) &statePtr)) {\n\t\treturn error_tuple(envPtr, \"failed to read state\");\n\t}\n\tif (!enif_inspect_binary(envPtr, argv[1], &decryptKey)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_inspect_binary(envPtr, argv[2], &encryptKey)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_inspect_binary(envPtr, argv[3], &inputChunk) || inputChunk.size == 0) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[4], &chunkSize)  ||\n\t\tchunkSize == 0 ||\n\t\tchunkSize > MAX_CHUNK_SIZE) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[5], &decryptRandomxRoundCount)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[6], &encryptRandomxRoundCount)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[7], &jitEnabled)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[8], &largePagesEnabled)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[9], &hardwareAESEnabled)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\n\tint isRandomxReleased;\n\trandomx_vm *vmPtr = create_vm(statePtr, (statePtr->mode == HASHING_MODE_FAST),\n\t\tjitEnabled, largePagesEnabled, hardwareAESEnabled, &isRandomxReleased);\n\tif (vmPtr == NULL) {\n\t\tif (isRandomxReleased != 0) {\n\t\t\treturn error_tuple(envPtr, \"state has been released\");\n\t\t}\n\t\treturn error_tuple(envPtr, \"randomx_create_vm failed\");\n\t}\n\n\t// NOTE. Because randomx_decrypt_chunk will unpack padding too, decrypt always uses the\n\t// full 256KB chunk size. 
We'll then truncate the output to the correct feistel-padded\n\t// outChunkSize.\n\tunsigned char decryptedChunk[MAX_CHUNK_SIZE];\n\tERL_NIF_TERM decryptedChunkTerm = decrypt_chunk(envPtr, vmPtr,\n\t\tdecryptKey.data, decryptKey.size, inputChunk.data, inputChunk.size,\n\t\tdecryptedChunk, chunkSize, decryptRandomxRoundCount);\n\n\tERL_NIF_TERM reencryptedChunkTerm = encrypt_chunk(envPtr, vmPtr,\n\t\tencryptKey.data, encryptKey.size, decryptedChunk, chunkSize, encryptRandomxRoundCount);\n\n\tdestroy_vm(statePtr, vmPtr);\n\n\treturn ok_tuple2(envPtr, reencryptedChunkTerm, decryptedChunkTerm);\n}\n\nstatic ErlNifFunc rx512_funcs[] = {\n\t{\"rx512_info_nif\", 1, rx512_info_nif, ERL_NIF_DIRTY_JOB_CPU_BOUND},\n\t{\"rx512_init_nif\", 5, rx512_init_nif, ERL_NIF_DIRTY_JOB_CPU_BOUND},\n\t{\"rx512_hash_nif\", 5, rx512_hash_nif, ERL_NIF_DIRTY_JOB_CPU_BOUND},\n\t{\"rx512_encrypt_chunk_nif\", 7, rx512_encrypt_chunk_nif, ERL_NIF_DIRTY_JOB_CPU_BOUND},\n\t{\"rx512_decrypt_chunk_nif\", 8, rx512_decrypt_chunk_nif, ERL_NIF_DIRTY_JOB_CPU_BOUND},\n\t{\"rx512_reencrypt_chunk_nif\", 10, rx512_reencrypt_chunk_nif,\n\t\tERL_NIF_DIRTY_JOB_CPU_BOUND}\n};\n\n\nERL_NIF_INIT(ar_rx512_nif, rx512_funcs, rx512_load, NULL, NULL, NULL);\n\n"
  },
  {
    "path": "apps/arweave/c_src/randomx/rxsquared/ar_rxsquared_nif.c",
    "content": "#include <string.h>\n#include <openssl/sha.h>\n#include <ar_nif.h>\n#include \"../randomx_long_with_entropy.h\"\n#include \"../feistel_msgsize_key_cipher.h\"\n#include \"../randomx_squared.h\"\n\n#include \"../ar_randomx_impl.h\"\n\nconst int PACKING_KEY_SIZE = 32;\nconst int MAX_CHUNK_SIZE = 256*1024;\n\nstatic int rxsquared_load(ErlNifEnv* envPtr, void** priv, ERL_NIF_TERM info);\nstatic ERL_NIF_TERM rxsquared_info_nif(ErlNifEnv* envPtr, int argc, const ERL_NIF_TERM argv[]);\nstatic ERL_NIF_TERM rxsquared_init_nif(ErlNifEnv* envPtr, int argc, const ERL_NIF_TERM argv[]);\nstatic ERL_NIF_TERM rxsquared_hash_nif(ErlNifEnv* envPtr, int argc, const ERL_NIF_TERM argv[]);\n\nstatic int rxsquared_load(ErlNifEnv* envPtr, void** priv, ERL_NIF_TERM info) {\n\treturn load(envPtr, priv, info);\n}\n\nstatic ERL_NIF_TERM rxsquared_info_nif(ErlNifEnv* envPtr, int argc, const ERL_NIF_TERM argv[]) {\n\treturn info_nif(\"rxsquared\", envPtr, argc, argv);\n}\n\nstatic ERL_NIF_TERM rxsquared_init_nif(ErlNifEnv* envPtr, int argc, const ERL_NIF_TERM argv[]) {\n\treturn init_nif(envPtr, argc, argv);\n}\n\nstatic ERL_NIF_TERM rxsquared_hash_nif(ErlNifEnv* envPtr, int argc, const ERL_NIF_TERM argv[]) {\n\treturn hash_nif(envPtr, argc, argv);\n}\n\n\nstatic ERL_NIF_TERM rsp_feistel_encrypt_nif(\n\t\tErlNifEnv* envPtr, int argc, const ERL_NIF_TERM argv[]) {\n\tErlNifBinary inMsgBin;\n\tErlNifBinary inKeyBin;\n\tERL_NIF_TERM outMsgTerm;\n\tunsigned char* outMsgData;\n\n\tif (argc != 2) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\n\tif (!enif_inspect_binary(envPtr, argv[0], &inMsgBin)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\n\tif (!enif_inspect_binary(envPtr, argv[1], &inKeyBin)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\n\tsize_t msgSize = inMsgBin.size;\n\n\tif (inKeyBin.size != msgSize) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\n\tif (msgSize % 64 != 0) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\n\toutMsgData = enif_make_new_binary(envPtr, msgSize, &outMsgTerm);\n\tif (outMsgData == NULL) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\n\tfeistel_encrypt(inMsgBin.data, msgSize, inKeyBin.data, outMsgData);\n\n\treturn ok_tuple(envPtr, outMsgTerm);\n}\n\nstatic ERL_NIF_TERM rsp_feistel_decrypt_nif(\n\t\tErlNifEnv* envPtr, int argc, const ERL_NIF_TERM argv[]) {\n\tErlNifBinary inMsgBin;\n\tErlNifBinary inKeyBin;\n\tERL_NIF_TERM outMsgTerm;\n\tunsigned char* outMsgData;\n\n\tif (argc != 2) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\n\tif (!enif_inspect_binary(envPtr, argv[0], &inMsgBin)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\n\tif (!enif_inspect_binary(envPtr, argv[1], &inKeyBin)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\n\tsize_t msgSize = inMsgBin.size;\n\n\tif (inKeyBin.size != msgSize) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\n\tif (msgSize % 64 != 0) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\n\toutMsgData = enif_make_new_binary(envPtr, msgSize, &outMsgTerm);\n\tif (outMsgData == NULL) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\n\tfeistel_decrypt(inMsgBin.data, msgSize, inKeyBin.data, outMsgData);\n\n\treturn ok_tuple(envPtr, outMsgTerm);\n}\n\nstatic ERL_NIF_TERM rsp_fused_entropy_nif(ErlNifEnv* envPtr, int argc, const ERL_NIF_TERM argv[]) {\n\tif (argc != 10) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\n\t// 1. Parse the state resource\n\tstruct state* statePtr;\n\tif (!enif_get_resource(envPtr, argv[0], stateType, (void**)&statePtr)) {\n\t\treturn error_tuple(envPtr, \"failed_to_read_state\");\n\t}\n\n\t// 2. 
Parse each integer\n\tint subChunkCount;\n\tif (!enif_get_int(envPtr, argv[1], &subChunkCount)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\n\tint subChunkSize;\n\tif (!enif_get_int(envPtr, argv[2], &subChunkSize)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\n\tint laneCount;\n\tif (!enif_get_int(envPtr, argv[3], &laneCount)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\n\tint rxDepth;\n\tif (!enif_get_int(envPtr, argv[4], &rxDepth)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\n\tint jitEnabled;\n\tif (!enif_get_int(envPtr, argv[5], &jitEnabled)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\n\tint largePagesEnabled;\n\tif (!enif_get_int(envPtr, argv[6], &largePagesEnabled)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\n\tint hardwareAESEnabled;\n\tif (!enif_get_int(envPtr, argv[7], &hardwareAESEnabled)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\n\tint randomxProgramCount;\n\tif (!enif_get_int(envPtr, argv[8], &randomxProgramCount)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\n\t// 3. Parse key as a binary\n\tErlNifBinary keyBin;\n\tif (!enif_inspect_binary(envPtr, argv[9], &keyBin)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\n\t// 4. Create VMs\n\tint totalVMs = 2 * laneCount;\n\trandomx_vm** vmList = (randomx_vm**)calloc(totalVMs, sizeof(randomx_vm*));\n\tif (!vmList) {\n\t\treturn error_tuple(envPtr, \"vmList_alloc_failed\");\n\t}\n\n\tsize_t scratchpadSize = randomx_get_scratchpad_size();\n\n\t// 5. Pre-allocate the final output binary to store all scratchpads\n\tsize_t outEntropySize = scratchpadSize * laneCount;\n\tERL_NIF_TERM outEntropyTerm;\n\tunsigned char* outEntropy =\n\t\tenif_make_new_binary(envPtr, outEntropySize, &outEntropyTerm);\n\tif (!outEntropy) {\n\t\tfree(vmList);\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\n\t// 6. Create the randomx_vm objects\n\tint isRandomxReleased = 0;\n\tfor (int i = 0; i < totalVMs; i++) {\n\t\tvmList[i] = create_vm(\n\t\t\tstatePtr,\n\t\t\t(statePtr->mode == HASHING_MODE_FAST),\n\t\t\tjitEnabled,\n\t\t\tlargePagesEnabled,\n\t\t\thardwareAESEnabled,\n\t\t\t&isRandomxReleased\n\t\t);\n\t\tif (!vmList[i]) {\n\t\t\t// Clean up partial\n\t\t\tfor (int j = 0; j < i; j++) {\n\t\t\t\tdestroy_vm(statePtr, vmList[j]);\n\t\t\t}\n\t\t\tfree(vmList);\n\t\t\tif (isRandomxReleased != 0) {\n\t\t\t\treturn error_tuple(envPtr, \"state_has_been_released\");\n\t\t\t}\n\t\t\treturn error_tuple(envPtr, \"randomx_create_vm_failed\");\n\t\t}\n\t}\n\n\t// 7. Call the pure C++ function that does the heavy logic and returns bool\n\tint success = rsp_fused_entropy(\n\t\tvmList,\n\t\tscratchpadSize,\n\t\tsubChunkCount,\n\t\tsubChunkSize,\n\t\tlaneCount,\n\t\trxDepth,\n\t\trandomxProgramCount,\n\t\t6,\n\t\tkeyBin.data,\n\t\tkeyBin.size,\n\t\toutEntropy  // final buffer for the output entropy\n\t);\n\n\t// 8. If the function returned false, we interpret that as an error\n\tif (!success) {\n\t\t// Cleanup\n\t\tfor (int i = 0; i < totalVMs; i++) {\n\t\t\tif (vmList[i]) {\n\t\t\t\tdestroy_vm(statePtr, vmList[i]);\n\t\t\t}\n\t\t}\n\t\tfree(vmList);\n\t\treturn error_tuple(envPtr, \"cxx_fused_entropy_failed\");\n\t}\n\n\t// 9. 
If success, destroy VMs and return {ok, outEntropyTerm}\n\tfor (int i = 0; i < totalVMs; i++) {\n\t\tdestroy_vm(statePtr, vmList[i]);\n\t}\n\tfree(vmList);\n\n\treturn ok_tuple(envPtr, outEntropyTerm);\n}\n\n\nstatic ErlNifFunc rxsquared_funcs[] = {\n\t{\"rxsquared_info_nif\", 1, rxsquared_info_nif, ERL_NIF_DIRTY_JOB_CPU_BOUND},\n\t{\"rxsquared_init_nif\", 5, rxsquared_init_nif, ERL_NIF_DIRTY_JOB_CPU_BOUND},\n\t{\"rxsquared_hash_nif\", 5, rxsquared_hash_nif, ERL_NIF_DIRTY_JOB_CPU_BOUND},\n\n\t{\"rsp_fused_entropy_nif\", 10,\n\t\trsp_fused_entropy_nif, ERL_NIF_DIRTY_JOB_CPU_BOUND},\n\t{\"rsp_feistel_encrypt_nif\", 2, rsp_feistel_encrypt_nif, ERL_NIF_DIRTY_JOB_CPU_BOUND},\n\t{\"rsp_feistel_decrypt_nif\", 2, rsp_feistel_decrypt_nif, ERL_NIF_DIRTY_JOB_CPU_BOUND}\n};\n\nERL_NIF_INIT(ar_rxsquared_nif, rxsquared_funcs, rxsquared_load, NULL, NULL, NULL);\n"
  },
  {
    "path": "apps/arweave/c_src/secp256k1/secp256k1_nif.c",
    "content": "#define _GNU_SOURCE\n#include <string.h>\n#include <errno.h>\n#include <stddef.h>\n#include <stdint.h>\n#include <fcntl.h>\n#include <unistd.h>\n#include <sys/random.h>\n\n#include <secp256k1.h>\n#include <secp256k1_recovery.h>\n\n#include <ar_nif.h>\n\n#define SECP256K1_PUBKEY_UNCOMPRESSED_SIZE 65\n#define SECP256K1_PUBKEY_COMPRESSED_SIZE 33\n#define SECP256K1_SIGNATURE_COMPACT_SIZE 64\n#define SECP256K1_SIGNATURE_RECOVERABLE_SIZE 65\n#define SECP256K1_PRIVKEY_SIZE 32\n#define SECP256K1_CONTEXT_SEED_SIZE 32\n#define SECP256K1_DIGEST_SIZE 32\n\nstatic int secp256k1_load(ErlNifEnv* env, void** priv, ERL_NIF_TERM load_info) {\n\treturn 0;\n}\n\nstatic int fill_devurandom(void* buffer, size_t size) {\n\tint fd = open(\"/dev/urandom\", O_RDONLY | O_CLOEXEC);\n\tif (fd == -1) {\n\t\treturn 0;\n\t}\n\n\tsize_t offset = 0;\n\twhile (offset < size) {\n\t\tssize_t result = read(fd, (char*)buffer + offset, size - offset);\n\t\tif (result == -1) {\n\t\t\tif (errno == EINTR) continue;\n\t\t\tgoto error;\n\t\t}\n\t\t// EOF\n\t\tif (result == 0) {\n\t\t\tgoto error;\n\t\t}\n\t\toffset += (size_t)result;\n\t}\n\n\tclose(fd);\n\treturn 1;\n\nerror:\n\tclose(fd);\n\treturn 0;\n}\n\nstatic int fill_random(void* buffer, size_t size) {\n#if defined(__linux__) || defined(__FreeBSD__)\n\n\tsize_t offset = 0;\n\twhile (offset < size) {\n\t\tssize_t result = getrandom((char*)buffer + offset, size - offset, 0);\n\t\tif (result == -1) {\n\t\t\tif (errno == EINTR) continue;\n\t\t\tif (errno == ENOSYS) return fill_devurandom(buffer, size);\n\t\t\treturn 0;\n\t\t}\n\t\toffset += (size_t)result;\n\t}\n\n#elif defined(__APPLE__)\n\n\tsize_t offset = 0;\n\twhile (offset < size) {\n\t\t// max allowed length is 256 bytes\n\t\tsize_t chunk = (size - offset > 256) ? 256 : (size - offset);\n\t\tif (getentropy((char*)buffer + offset, chunk) == -1) {\n\t\t\tif (errno == ENOSYS) return fill_devurandom(buffer, size);\n\t\t\treturn 0;\n\t\t}\n\t\toffset += chunk;\n\t}\n\n#else\n\t// Unsupported platform\n\treturn 0;\n#endif\n\treturn 1;\n}\n\n/* Cleanses memory to prevent leaking sensitive info. Won't be optimized out. */\nstatic void secure_erase(void *ptr, size_t len) {\n#if defined(__GNUC__)\n\t/* We use a memory barrier that scares the compiler away from optimizing out the memset.\n\t *\n\t * Quoting Adam Langley <agl@google.com> in commit ad1907fe73334d6c696c8539646c21b11178f20f\n\t * in BoringSSL (ISC License):\n\t *\tAs best as we can tell, this is sufficient to break any optimisations that\n\t *\tmight try to eliminate \"superfluous\" memsets.\n\t * This method used in memzero_explicit() the Linux kernel, too. Its advantage is that it is\n\t * pretty efficient, because the compiler can still implement the memset() efficiently,\n\t * just not remove it entirely. See \"Dead Store Elimination (Still) Considered Harmful\" by\n\t * Yang et al. 
(USENIX Security 2017) for more background.\n\t */\n\tmemset(ptr, 0, len);\n\t__asm__ __volatile__(\"\" : : \"r\"(ptr) : \"memory\");\n#else\n\tvoid *(*volatile const volatile_memset)(void *, int, size_t) = memset;\n\tvolatile_memset(ptr, 0, len);\n#endif\n}\n\nstatic ERL_NIF_TERM sign_recoverable(ErlNifEnv *env, int argc, const ERL_NIF_TERM argv[]) {\n\tif (argc != 2) {\n\t\treturn enif_make_badarg(env);\n\t}\n\tErlNifBinary Digest, PrivateBytes;\n\tif (!enif_inspect_binary(env, argv[0], &Digest)) {\n\t\treturn enif_make_badarg(env);\n\t}\n\tif (Digest.size != SECP256K1_DIGEST_SIZE) {\n\t\treturn enif_make_badarg(env);\n\t}\n\n\tif (!enif_inspect_binary(env, argv[1], &PrivateBytes)) {\n\t\treturn enif_make_badarg(env);\n\t}\n\tif (PrivateBytes.size != SECP256K1_PRIVKEY_SIZE) {\n\t\treturn enif_make_badarg(env);\n\t}\n\n\tchar *error = NULL;\n\tunsigned char seed[SECP256K1_CONTEXT_SEED_SIZE];\n\tunsigned char digest[SECP256K1_DIGEST_SIZE];\n\tunsigned char privbytes[SECP256K1_PRIVKEY_SIZE];\n\tunsigned char signature_compact[SECP256K1_SIGNATURE_COMPACT_SIZE];\n\tunsigned char signature_recoverable[SECP256K1_SIGNATURE_RECOVERABLE_SIZE];\n\tint recid;\n\tsecp256k1_ecdsa_recoverable_signature s;\n\tsecp256k1_context* ctx = secp256k1_context_create(SECP256K1_CONTEXT_NONE);\n\n\tmemcpy(digest, Digest.data, SECP256K1_DIGEST_SIZE);\n\tmemcpy(privbytes, PrivateBytes.data, SECP256K1_PRIVKEY_SIZE);\n\n\tif (!secp256k1_ec_seckey_verify(ctx, privbytes)) {\n\t\terror = \"secp256k1 key is invalid.\";\n\t\tgoto cleanup;\n\t}\n\n\tif (!fill_random(seed, sizeof(seed))) {\n\t\terror = \"Failed to generate random seed for context.\";\n\t\tgoto cleanup;\n\t}\n\n\tif (!secp256k1_context_randomize(ctx, seed)) {\n\t\terror = \"Failed to randomize context.\";\n\t\tgoto cleanup;\n\t}\n\n\tif(!secp256k1_ecdsa_sign_recoverable(ctx, &s, digest, privbytes, NULL, NULL)) {\n\t\terror = \"Failed to create signature.\";\n\t\tgoto cleanup;\n\t}\n\n\tif(!secp256k1_ecdsa_recoverable_signature_serialize_compact(ctx, signature_compact, &recid, &s)) {\n\t\terror = \"Failed to serialize signature.\";\n\t\tgoto cleanup;\n\t}\n\tmemcpy(signature_recoverable, signature_compact, SECP256K1_SIGNATURE_COMPACT_SIZE);\n\tsignature_recoverable[64] = (unsigned char)(recid);\n\n\tERL_NIF_TERM signature_term = make_output_binary(env, signature_recoverable, SECP256K1_SIGNATURE_RECOVERABLE_SIZE);\n\ncleanup:\n\tsecp256k1_context_destroy(ctx);\n\tsecure_erase(seed, sizeof(seed));\n\tsecure_erase(privbytes, sizeof(privbytes));\n\tmemset(signature_compact, 0, SECP256K1_SIGNATURE_COMPACT_SIZE);\n\tmemset(signature_recoverable, 0, SECP256K1_SIGNATURE_RECOVERABLE_SIZE);\n\n\tif (error) {\n\t\treturn error_tuple(env, error);\n\t}\n\treturn ok_tuple(env, signature_term);\n}\n\nstatic ERL_NIF_TERM recover_pk_and_verify(ErlNifEnv *env, int argc, const ERL_NIF_TERM argv[]) {\n\tif (argc != 2) {\n\t\treturn enif_make_badarg(env);\n\t}\n\tErlNifBinary Digest, Signature;\n\tif (!enif_inspect_binary(env, argv[0], &Digest)) {\n\t\treturn enif_make_badarg(env);\n\t}\n\tif (Digest.size != SECP256K1_DIGEST_SIZE) {\n\t\treturn enif_make_badarg(env);\n\t}\n\n\tif (!enif_inspect_binary(env, argv[1], &Signature)) {\n\t\treturn enif_make_badarg(env);\n\t}\n\tif (Signature.size != SECP256K1_SIGNATURE_RECOVERABLE_SIZE) {\n\t\treturn enif_make_badarg(env);\n\t}\n\n\tchar *error = NULL;\n\tunsigned char digest[SECP256K1_DIGEST_SIZE];\n\tunsigned char signature_recoverable[SECP256K1_SIGNATURE_RECOVERABLE_SIZE];\n\tunsigned char 
signature_compact[SECP256K1_SIGNATURE_COMPACT_SIZE];\n\tunsigned char pubbytes[SECP256K1_PUBKEY_COMPRESSED_SIZE];\n\tint recid;\n\tsecp256k1_ecdsa_recoverable_signature rs;\n\tsecp256k1_ecdsa_signature s;\n\tsecp256k1_pubkey pubkey;\n\n\tmemcpy(digest, Digest.data, SECP256K1_DIGEST_SIZE);\n\tmemcpy(signature_recoverable, Signature.data, SECP256K1_SIGNATURE_RECOVERABLE_SIZE);\n\n\tmemcpy(signature_compact, signature_recoverable, SECP256K1_SIGNATURE_COMPACT_SIZE);\n\trecid = (int)signature_recoverable[64];\n\n\tif (recid < 0 || recid > 3) {\n\t\terror = \"Invalid signature recid. recid >= 0 && recid <= 3.\";\n\t\tgoto cleanup;\n\t}\n\n\tif (!secp256k1_ecdsa_recoverable_signature_parse_compact(secp256k1_context_static, &rs, signature_compact, recid)) {\n\t\terror = \"Failed to deserialize/parse recoverable signature.\";\n\t\tgoto cleanup;\n\t}\n\n\tif (!secp256k1_ecdsa_recover(secp256k1_context_static, &pubkey, &rs, digest)) {\n\t\terror = \"Failed to recover public key.\";\n\t\tgoto cleanup;\n\t}\n\tsize_t l = SECP256K1_PUBKEY_COMPRESSED_SIZE;\n\tif (!secp256k1_ec_pubkey_serialize(secp256k1_context_static, pubbytes, &l, &pubkey, SECP256K1_EC_COMPRESSED)) {\n\t\terror = \"Failed to serialize the recovered public key.\";\n\t\tgoto cleanup;\n\t}\n\n\tif (!secp256k1_ecdsa_recoverable_signature_convert(secp256k1_context_static, &s, &rs)) {\n\t\terror = \"Failed to convert recoverable signature to compact signature.\";\n\t\tgoto cleanup;\n\t}\n\n\t// NOTE. https://github.com/bitcoin-core/secp256k1/blob/f79f46c70386c693ff4e7aef0b9e7923ba284e56/src/secp256k1.c#L461\n\t// Verify performs check for low-s\n\tint is_valid = secp256k1_ecdsa_verify(secp256k1_context_static, &s, digest, &pubkey);\n\tERL_NIF_TERM pubkey_term = make_output_binary(env, pubbytes, SECP256K1_PUBKEY_COMPRESSED_SIZE);\n\ncleanup:\n\tmemset(digest, 0, SECP256K1_DIGEST_SIZE);\n\tmemset(pubbytes, 0, SECP256K1_PUBKEY_COMPRESSED_SIZE);\n\tmemset(signature_compact, 0, SECP256K1_SIGNATURE_COMPACT_SIZE);\n\tmemset(signature_recoverable, 0, SECP256K1_SIGNATURE_RECOVERABLE_SIZE);\n\n\tif (error) {\n\t\treturn error_tuple(env, error);\n\t}\n\tif (is_valid) {\n\t\treturn ok_tuple2(env, enif_make_atom(env, \"true\"), pubkey_term);\n\t}\n\treturn ok_tuple2(env, enif_make_atom(env, \"false\"), pubkey_term);\n}\n\nstatic ErlNifFunc nif_funcs[] = {\n\t{\"sign_recoverable\", 2, sign_recoverable},\n\t{\"recover_pk_and_verify\", 2, recover_pk_and_verify}\n};\n\nERL_NIF_INIT(secp256k1_nif, nif_funcs, secp256k1_load, NULL, NULL, NULL)\n"
  },
  {
    "path": "apps/arweave/c_src/vdf/ar_vdf_nif.c",
    "content": "#include <erl_nif.h>\n#include <string.h>\n#include <openssl/sha.h>\n#include <ar_nif.h>\n#include \"vdf.h\"\n\n#if defined(__x86_64__) || defined(__amd64__) || defined(__i386__)\n#include <cpuid.h>\n#endif\n#if defined(__linux__)\n\t#include <sys/auxv.h>\n#endif\n#if defined(__APPLE__)\n\t#include <sys/types.h>\n\t#include <sys/sysctl.h>\n#endif\n\n////////////////////////////////////////////////////////////////////////////////////////////////////\n//    SHA\n////////////////////////////////////////////////////////////////////////////////////////////////////\ntypedef void (*vdf_sha2_fn)(\n\tunsigned char* saltBuffer,\n\tunsigned char* seed,\n\tunsigned char* out,\n\tunsigned char* outCheckpoint,\n\tint checkpointCount,\n\tint skipCheckpointCount,\n\tint hashingIterations\n);\nstatic vdf_sha2_fn vdf_sha2_fused_ptr = NULL;\nstatic vdf_sha2_fn vdf_sha2_hiopt_ptr = NULL;\n\nstatic int vdf_load(ErlNifEnv* env, void** priv, ERL_NIF_TERM load_info) {\n\t#if defined(__x86_64__) || defined(__i386__)\n\t\t{\n\t\t\tunsigned int eax, ebx, ecx, edx;\n\t\t\t// leaf 7, subleaf 0\n\t\t\tif (__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx) && (ebx & (1u << 29))) {\n\t\t\t\tprintf(\"VDF arch x86\\n\");\n\t\t\t\tvdf_sha2_fused_ptr = vdf_sha2_fused_x86;\n\t\t\t\tvdf_sha2_hiopt_ptr = vdf_sha2_fused_x86; // fallback\n\t\t\t\treturn 0;\n\t\t\t}\n\t\t}\n\t#endif\n\n\t#if defined(__aarch64__) || defined(__arm__)\n\t\t#if defined(__linux__)\n\t\t\tif (getauxval(AT_HWCAP) & HWCAP_SHA2) {\n\t\t\t\tprintf(\"VDF arch ARM linux\\n\");\n\t\t\t\tvdf_sha2_fused_ptr = vdf_sha2_fused_arm;\n\t\t\t\tvdf_sha2_hiopt_ptr = vdf_sha2_hiopt_arm;\n\t\t\t\treturn 0;\n\t\t\t}\n\t\t#elif defined(__APPLE__)\n\t\t\t{\n\t\t\t\tint val = 0; size_t len = sizeof(val);\n\t\t\t\tif (sysctlbyname(\"hw.optional.arm.FEAT_SHA256\", &val, &len, NULL, 0) == 0 && val != 0) {\n\t\t\t\t\tprintf(\"VDF arch ARM macos\\n\");\n\t\t\t\t\tvdf_sha2_fused_ptr = vdf_sha2_fused_arm;\n\t\t\t\t\tvdf_sha2_hiopt_ptr = vdf_sha2_hiopt_arm;\n\t\t\t\t\treturn 0;\n\t\t\t\t}\n\t\t\t}\n\t\t#endif\n\t#endif\n\n\tprintf(\"VDF arch unknown\\n\");\n\tvdf_sha2_fused_ptr = vdf_sha2;\n\tvdf_sha2_hiopt_ptr = vdf_sha2;\n\treturn 0;\n}\nstatic ERL_NIF_TERM vdf_sha2_nif(ErlNifEnv* envPtr, int argc, const ERL_NIF_TERM argv[])\n{\n\tErlNifBinary Salt, Seed;\n\tint checkpointCount;\n\tint skipCheckpointCount;\n\tint hashingIterations;\n\n\tif (argc != 5) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_inspect_binary(envPtr, argv[0], &Salt)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (Salt.size != SALT_SIZE) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_inspect_binary(envPtr, argv[1], &Seed)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (Seed.size != VDF_SHA_HASH_SIZE) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[2], &checkpointCount)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[3], &skipCheckpointCount)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[4], &hashingIterations)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\n\tunsigned char temp_result[VDF_SHA_HASH_SIZE];\n\tsize_t outCheckpointSize = VDF_SHA_HASH_SIZE*checkpointCount;\n\tERL_NIF_TERM outputTermCheckpoint;\n\tunsigned char* outCheckpoint = enif_make_new_binary(envPtr, outCheckpointSize, &outputTermCheckpoint);\n\tvdf_sha2(Salt.data, Seed.data, temp_result, outCheckpoint, checkpointCount, skipCheckpointCount, hashingIterations);\n\n\treturn ok_tuple2(envPtr, 
make_output_binary(envPtr, temp_result, VDF_SHA_HASH_SIZE), outputTermCheckpoint);\n}\nstatic ERL_NIF_TERM vdf_sha2_fused_nif(ErlNifEnv* envPtr, int argc, const ERL_NIF_TERM argv[])\n{\n\tErlNifBinary Salt, Seed;\n\tint checkpointCount;\n\tint skipCheckpointCount;\n\tint hashingIterations;\n\n\tif (argc != 5) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_inspect_binary(envPtr, argv[0], &Salt)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (Salt.size != SALT_SIZE) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_inspect_binary(envPtr, argv[1], &Seed)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (Seed.size != VDF_SHA_HASH_SIZE) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[2], &checkpointCount)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[3], &skipCheckpointCount)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[4], &hashingIterations)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\n\tunsigned char temp_result[VDF_SHA_HASH_SIZE];\n\tsize_t outCheckpointSize = VDF_SHA_HASH_SIZE*checkpointCount;\n\tERL_NIF_TERM outputTermCheckpoint;\n\tunsigned char* outCheckpoint = enif_make_new_binary(envPtr, outCheckpointSize, &outputTermCheckpoint);\n\tvdf_sha2_fused_ptr(Salt.data, Seed.data, temp_result, outCheckpoint, checkpointCount, skipCheckpointCount, hashingIterations);\n\n\treturn ok_tuple2(envPtr, make_output_binary(envPtr, temp_result, VDF_SHA_HASH_SIZE), outputTermCheckpoint);\n}\nstatic ERL_NIF_TERM vdf_sha2_hiopt_nif(ErlNifEnv* envPtr, int argc, const ERL_NIF_TERM argv[])\n{\n\tErlNifBinary Salt, Seed;\n\tint checkpointCount;\n\tint skipCheckpointCount;\n\tint hashingIterations;\n\n\tif (argc != 5) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_inspect_binary(envPtr, argv[0], &Salt)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (Salt.size != SALT_SIZE) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_inspect_binary(envPtr, argv[1], &Seed)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (Seed.size != VDF_SHA_HASH_SIZE) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[2], &checkpointCount)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[3], &skipCheckpointCount)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[4], &hashingIterations)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\n\tunsigned char temp_result[VDF_SHA_HASH_SIZE];\n\tsize_t outCheckpointSize = VDF_SHA_HASH_SIZE*checkpointCount;\n\tERL_NIF_TERM outputTermCheckpoint;\n\tunsigned char* outCheckpoint = enif_make_new_binary(envPtr, outCheckpointSize, &outputTermCheckpoint);\n\tvdf_sha2_hiopt_ptr(Salt.data, Seed.data, temp_result, outCheckpoint, checkpointCount, skipCheckpointCount, hashingIterations);\n\n\treturn ok_tuple2(envPtr, make_output_binary(envPtr, temp_result, VDF_SHA_HASH_SIZE), outputTermCheckpoint);\n}\n\nstatic ERL_NIF_TERM vdf_parallel_sha_verify_with_reset_nif(\n\tErlNifEnv* envPtr,\n\tint argc,\n\tconst ERL_NIF_TERM argv[]\n) {\n\tErlNifBinary Salt, Seed, InCheckpoint, InRes, ResetSalt, ResetSeed;\n\tint checkpointCount;\n\tint skipCheckpointCount;\n\tint hashingIterations;\n\tint maxThreadCount;\n\n\tif (argc != 10) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\n\tif (!enif_inspect_binary(envPtr, argv[0], &Salt)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (Salt.size != SALT_SIZE) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif 
(!enif_inspect_binary(envPtr, argv[1], &Seed)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (Seed.size != VDF_SHA_HASH_SIZE) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[2], &checkpointCount)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[3], &skipCheckpointCount)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[4], &hashingIterations)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_inspect_binary(envPtr, argv[5], &InCheckpoint)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (InCheckpoint.size != checkpointCount*VDF_SHA_HASH_SIZE) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_inspect_binary(envPtr, argv[6], &InRes)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (InRes.size != VDF_SHA_HASH_SIZE) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_inspect_binary(envPtr, argv[7], &ResetSalt)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (ResetSalt.size != 32) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_inspect_binary(envPtr, argv[8], &ResetSeed)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (ResetSeed.size != VDF_SHA_HASH_SIZE) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (!enif_get_int(envPtr, argv[9], &maxThreadCount)) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\tif (maxThreadCount < 1) {\n\t\treturn enif_make_badarg(envPtr);\n\t}\n\n\t// NOTE last paramemter will be array later\n\tsize_t outCheckpointSize = VDF_SHA_HASH_SIZE*(1+checkpointCount)*(1+skipCheckpointCount);\n\tERL_NIF_TERM outputTermCheckpoint;\n\tunsigned char* outCheckpoint = enif_make_new_binary(\n\t\tenvPtr, outCheckpointSize, &outputTermCheckpoint);\n\tbool res = vdf_parallel_sha_verify_with_reset(\n\t\tSalt.data, Seed.data, checkpointCount, skipCheckpointCount, hashingIterations,\n\t\tInRes.data, InCheckpoint.data, outCheckpoint, ResetSalt.data, ResetSeed.data,\n\t\tmaxThreadCount);\n\t// TODO return all checkpoints\n\tif (!res) {\n\t\treturn error_tuple(envPtr, \"verification failed\");\n\t}\n\n\treturn ok_tuple(envPtr, outputTermCheckpoint);\n}\n\nstatic ErlNifFunc nif_funcs[] = {\n\t{\"vdf_sha2_nif\", 5, vdf_sha2_nif, ERL_NIF_DIRTY_JOB_CPU_BOUND},\n\t{\"vdf_sha2_fused_nif\", 5, vdf_sha2_fused_nif, ERL_NIF_DIRTY_JOB_CPU_BOUND},\n\t{\"vdf_sha2_hiopt_nif\", 5, vdf_sha2_hiopt_nif, ERL_NIF_DIRTY_JOB_CPU_BOUND},\n\t{\"vdf_parallel_sha_verify_with_reset_nif\", 10, vdf_parallel_sha_verify_with_reset_nif,\n\t\tERL_NIF_DIRTY_JOB_CPU_BOUND}\n};\n\nERL_NIF_INIT(ar_vdf_nif, nif_funcs, vdf_load, NULL, NULL, NULL);\n"
  },
  {
    "path": "apps/arweave/c_src/vdf/sha256-armv8.S",
    "content": ".text\n.globl\t_sha256_block_vdf_order\n.align\t6\n_sha256_block_vdf_order:\n\tstp\tx29,x30,[sp,#-16]!\n\tadd\tx29,sp,#0\n    adr    x3,K2564\n\n    ld1    {v4.16b,v5.16b,v6.16b,v7.16b},[x1]\n\tld1\t   {v0.4s,v1.4s},[x0]\n    \n    ld1    {v16.4s},[x3],#16\n    add    v16.4s,v16.4s,v4.4s\n    orr    v2.16b,v0.16b,v0.16b\n.long    0x5e104020    //sha256h v0.16b,v1.16b,v16.4s\n.long    0x5e105041    //sha256h2 v1.16b,v2.16b,v16.4s\n\n    ld1    {v17.4s},[x3],#16\n    add    v17.4s,v17.4s,v5.4s\n    orr    v2.16b,v0.16b,v0.16b\n.long    0x5e114020    //sha256h v0.16b,v1.16b,v17.4s\n.long    0x5e115041    //sha256h2 v1.16b,v2.16b,v17.4s\n    orr    v20.16b,v0.16b,v0.16b\n    orr    v21.16b,v1.16b,v1.16b\n    ld1    {v18.4s,v19.4s},[x0]\n\nLoop_hw:\n\tsub\tx2,x2,#1\t\n//\trev32\tv4.16b,v4.16b\n//\trev32\tv5.16b,v5.16b\n//\trev32\tv6.16b,v6.16b\n//\trev32\tv7.16b,v7.16b\n\n    ld1    {v4.16b,v5.16b},[x1]\n    orr    v0.16b,v20.16b,v20.16b\n    orr    v1.16b,v21.16b,v21.16b\n\t\n.long\t0x5e2828a4\t//sha256su0 v4.16b,v5.16b\n.long\t0x5e0760c4\t//sha256su1 v4.16b,v6.16b,v7.16b\n\tld1\t{v16.4s},[x3],#16\n.long\t0x5e2828c5\t//sha256su0 v5.16b,v6.16b\n.long\t0x5e0460e5\t//sha256su1 v5.16b,v7.16b,v4.16b\n\tld1\t{v17.4s},[x3],#16\n\tadd\tv16.4s,v16.4s,v6.4s\n.long\t0x5e2828e6\t//sha256su0 v6.16b,v7.16b\n\torr\tv2.16b,v0.16b,v0.16b\n.long\t0x5e104020\t//sha256h v0.16b,v1.16b,v16.4s\n.long\t0x5e105041\t//sha256h2 v1.16b,v2.16b,v16.4s\n.long\t0x5e056086\t//sha256su1 v6.16b,v4.16b,v5.16b\n\tld1\t{v16.4s},[x3],#16\n\tadd\tv17.4s,v17.4s,v7.4s\n.long\t0x5e282887\t//sha256su0 v7.16b,v4.16b\n\torr\tv2.16b,v0.16b,v0.16b\n.long\t0x5e114020\t//sha256h v0.16b,v1.16b,v17.4s\n.long\t0x5e115041\t//sha256h2 v1.16b,v2.16b,v17.4s\n.long\t0x5e0660a7\t//sha256su1 v7.16b,v5.16b,v6.16b\n\tld1\t{v17.4s},[x3],#16\n\tadd\tv16.4s,v16.4s,v4.4s\n.long\t0x5e2828a4\t//sha256su0 v4.16b,v5.16b\n\torr\tv2.16b,v0.16b,v0.16b\n.long\t0x5e104020\t//sha256h v0.16b,v1.16b,v16.4s\n.long\t0x5e105041\t//sha256h2 v1.16b,v2.16b,v16.4s\n.long\t0x5e0760c4\t//sha256su1 v4.16b,v6.16b,v7.16b\n\tld1\t{v16.4s},[x3],#16\n\tadd\tv17.4s,v17.4s,v5.4s\n.long\t0x5e2828c5\t//sha256su0 v5.16b,v6.16b\n\torr\tv2.16b,v0.16b,v0.16b\n.long\t0x5e114020\t//sha256h v0.16b,v1.16b,v17.4s\n.long\t0x5e115041\t//sha256h2 v1.16b,v2.16b,v17.4s\n.long\t0x5e0460e5\t//sha256su1 v5.16b,v7.16b,v4.16b\n\tld1\t{v17.4s},[x3],#16\n\tadd\tv16.4s,v16.4s,v6.4s\n.long\t0x5e2828e6\t//sha256su0 v6.16b,v7.16b\n\torr\tv2.16b,v0.16b,v0.16b\n.long\t0x5e104020\t//sha256h v0.16b,v1.16b,v16.4s\n.long\t0x5e105041\t//sha256h2 v1.16b,v2.16b,v16.4s\n.long\t0x5e056086\t//sha256su1 v6.16b,v4.16b,v5.16b\n\tld1\t{v16.4s},[x3],#16\n\tadd\tv17.4s,v17.4s,v7.4s\n.long\t0x5e282887\t//sha256su0 v7.16b,v4.16b\n\torr\tv2.16b,v0.16b,v0.16b\n.long\t0x5e114020\t//sha256h v0.16b,v1.16b,v17.4s\n.long\t0x5e115041\t//sha256h2 v1.16b,v2.16b,v17.4s\n.long\t0x5e0660a7\t//sha256su1 v7.16b,v5.16b,v6.16b\n\tld1\t{v17.4s},[x3],#16\n\tadd\tv16.4s,v16.4s,v4.4s\n.long\t0x5e2828a4\t//sha256su0 v4.16b,v5.16b\n\torr\tv2.16b,v0.16b,v0.16b\n.long\t0x5e104020\t//sha256h v0.16b,v1.16b,v16.4s\n.long\t0x5e105041\t//sha256h2 v1.16b,v2.16b,v16.4s\n.long\t0x5e0760c4\t//sha256su1 v4.16b,v6.16b,v7.16b\n\tld1\t{v16.4s},[x3],#16\n\tadd\tv17.4s,v17.4s,v5.4s\n.long\t0x5e2828c5\t//sha256su0 v5.16b,v6.16b\n\torr\tv2.16b,v0.16b,v0.16b\n.long\t0x5e114020\t//sha256h v0.16b,v1.16b,v17.4s\n.long\t0x5e115041\t//sha256h2 v1.16b,v2.16b,v17.4s\n.long\t0x5e0460e5\t//sha256su1 
v5.16b,v7.16b,v4.16b\n\tld1\t{v17.4s},[x3],#16\n\tadd\tv16.4s,v16.4s,v6.4s\n.long\t0x5e2828e6\t//sha256su0 v6.16b,v7.16b\n\torr\tv2.16b,v0.16b,v0.16b\n.long\t0x5e104020\t//sha256h v0.16b,v1.16b,v16.4s\n.long\t0x5e105041\t//sha256h2 v1.16b,v2.16b,v16.4s\n.long\t0x5e056086\t//sha256su1 v6.16b,v4.16b,v5.16b\n\tld1\t{v16.4s},[x3],#16\n\tadd\tv17.4s,v17.4s,v7.4s\n.long\t0x5e282887\t//sha256su0 v7.16b,v4.16b\n\torr\tv2.16b,v0.16b,v0.16b\n.long\t0x5e114020\t//sha256h v0.16b,v1.16b,v17.4s\n.long\t0x5e115041\t//sha256h2 v1.16b,v2.16b,v17.4s\n.long\t0x5e0660a7\t//sha256su1 v7.16b,v5.16b,v6.16b\n\tld1\t{v17.4s},[x3],#16\n\tadd\tv16.4s,v16.4s,v4.4s\n\torr\tv2.16b,v0.16b,v0.16b\n.long\t0x5e104020\t//sha256h v0.16b,v1.16b,v16.4s\n.long\t0x5e105041\t//sha256h2 v1.16b,v2.16b,v16.4s\n\n\tld1\t{v16.4s},[x3],#16\n\tadd\tv17.4s,v17.4s,v5.4s\n\torr\tv2.16b,v0.16b,v0.16b\n.long\t0x5e114020\t//sha256h v0.16b,v1.16b,v17.4s\n.long\t0x5e115041\t//sha256h2 v1.16b,v2.16b,v17.4s\n\n\tld1\t{v17.4s},[x3],#16\n\tadd\tv16.4s,v16.4s,v6.4s\n\torr\tv2.16b,v0.16b,v0.16b\n.long\t0x5e104020\t//sha256h v0.16b,v1.16b,v16.4s\n.long\t0x5e105041\t//sha256h2 v1.16b,v2.16b,v16.4s\n\t\n\tadd\tv17.4s,v17.4s,v7.4s\n\torr\tv2.16b,v0.16b,v0.16b\n.long\t0x5e114020\t//sha256h v0.16b,v1.16b,v17.4s\n.long\t0x5e115041\t//sha256h2 v1.16b,v2.16b,v17.4s\n\n\tadd\tv6.4s,v0.4s,v18.4s\n\tadd\tv7.4s,v1.4s,v19.4s\n\n//64B\n\n\tld1\t{v16.4s},[x3],#16\n\torr\tv2.16b,v6.16b,v6.16b\n    orr    v0.16b,v6.16b,v6.16b\n    orr    v1.16b,v7.16b,v7.16b\n//.long   0x5e1040e0  //sha256h v0.16b,v7.16b,v16.4s\n.long   0x5e104020  //sha256h v0.16b,v1.16b,v16.4s\n.long\t0x5e105041\t//sha256h2 v1.16b,v2.16b,v16.4s\n\n\tld1\t{v16.4s},[x3],#16\n\torr\tv2.16b,v0.16b,v0.16b\n.long\t0x5e104020\t//sha256h v0.16b,v1.16b,v16.4s\n.long\t0x5e105041\t//sha256h2 v1.16b,v2.16b,v16.4s\n\n\tld1\t{v16.4s},[x3],#16\n\torr\tv2.16b,v0.16b,v0.16b\n.long\t0x5e104020\t//sha256h v0.16b,v1.16b,v16.4s\n.long\t0x5e105041\t//sha256h2 v1.16b,v2.16b,v16.4s\n\n\tld1\t{v16.4s},[x3],#16\n\torr\tv2.16b,v0.16b,v0.16b\n.long\t0x5e104020\t//sha256h v0.16b,v1.16b,v16.4s\n.long\t0x5e105041\t//sha256h2 v1.16b,v2.16b,v16.4s\n\n\tld1\t{v16.4s},[x3],#16\n\torr\tv2.16b,v0.16b,v0.16b\n.long\t0x5e104020\t//sha256h v0.16b,v1.16b,v16.4s\n.long\t0x5e105041\t//sha256h2 v1.16b,v2.16b,v16.4s\n\n\tld1\t{v16.4s},[x3],#16\n\torr\tv2.16b,v0.16b,v0.16b\n.long\t0x5e104020\t//sha256h v0.16b,v1.16b,v16.4s\n.long\t0x5e105041\t//sha256h2 v1.16b,v2.16b,v16.4s\n\n\tld1\t{v16.4s},[x3],#16\n\torr\tv2.16b,v0.16b,v0.16b\n.long\t0x5e104020\t//sha256h v0.16b,v1.16b,v16.4s\n.long\t0x5e105041\t//sha256h2 v1.16b,v2.16b,v16.4s\n\n\tld1\t{v16.4s},[x3],#16\n\torr\tv2.16b,v0.16b,v0.16b\n.long\t0x5e104020\t//sha256h v0.16b,v1.16b,v16.4s\n.long\t0x5e105041\t//sha256h2 v1.16b,v2.16b,v16.4s\n\n\tld1\t{v16.4s},[x3],#16\n\torr\tv2.16b,v0.16b,v0.16b\n.long\t0x5e104020\t//sha256h v0.16b,v1.16b,v16.4s\n.long\t0x5e105041\t//sha256h2 v1.16b,v2.16b,v16.4s\n\n\tld1\t{v16.4s},[x3],#16\n\torr\tv2.16b,v0.16b,v0.16b\n.long\t0x5e104020\t//sha256h v0.16b,v1.16b,v16.4s\n.long\t0x5e105041\t//sha256h2 v1.16b,v2.16b,v16.4s\n\n\tld1\t{v16.4s},[x3],#16\n\torr\tv2.16b,v0.16b,v0.16b\n.long\t0x5e104020\t//sha256h v0.16b,v1.16b,v16.4s\n.long\t0x5e105041\t//sha256h2 v1.16b,v2.16b,v16.4s\n\n\tld1\t{v16.4s},[x3],#16\n\torr\tv2.16b,v0.16b,v0.16b\n.long\t0x5e104020\t//sha256h v0.16b,v1.16b,v16.4s\n.long\t0x5e105041\t//sha256h2 v1.16b,v2.16b,v16.4s\n\n\tld1\t{v16.4s},[x3],#16\n\torr\tv2.16b,v0.16b,v0.16b\n.long\t0x5e104020\t//sha256h 
v0.16b,v1.16b,v16.4s\n.long\t0x5e105041\t//sha256h2 v1.16b,v2.16b,v16.4s\n\n\tld1\t{v16.4s},[x3],#16\n\torr\tv2.16b,v0.16b,v0.16b\n.long\t0x5e104020\t//sha256h v0.16b,v1.16b,v16.4s\n.long\t0x5e105041\t//sha256h2 v1.16b,v2.16b,v16.4s\n\n\tld1\t{v16.4s},[x3],#16\n\torr\tv2.16b,v0.16b,v0.16b\n.long\t0x5e104020\t//sha256h v0.16b,v1.16b,v16.4s\n.long\t0x5e105041\t//sha256h2 v1.16b,v2.16b,v16.4s\n\n\tld1\t{v16.4s},[x3]\n\tsub\tx3,x3,#128*4-48\t// rewind\n\torr\tv2.16b,v0.16b,v0.16b\n.long\t0x5e104020\t//sha256h v0.16b,v1.16b,v16.4s\n.long\t0x5e105041\t//sha256h2 v1.16b,v2.16b,v16.4s\n\n\tadd\tv6.4s,v0.4s,v6.4s\n\tadd\tv7.4s,v1.4s,v7.4s\n\t\n\tcbnz\tx2,Loop_hw\n\n\tst1\t{v6.4s,v7.4s},[x0]\n\n\tldr\tx29,[sp],#16\n\tret\n\t\nK2564:\n.long\t0x428a2f98,0x71374491,0xb5c0fbcf,0xe9b5dba5\n.long\t0x3956c25b,0x59f111f1,0x923f82a4,0xab1c5ed5\n.long\t0xd807aa98,0x12835b01,0x243185be,0x550c7dc3\n.long\t0x72be5d74,0x80deb1fe,0x9bdc06a7,0xc19bf174\n.long\t0xe49b69c1,0xefbe4786,0x0fc19dc6,0x240ca1cc\n.long\t0x2de92c6f,0x4a7484aa,0x5cb0a9dc,0x76f988da\n.long\t0x983e5152,0xa831c66d,0xb00327c8,0xbf597fc7\n.long\t0xc6e00bf3,0xd5a79147,0x06ca6351,0x14292967\n.long\t0x27b70a85,0x2e1b2138,0x4d2c6dfc,0x53380d13\n.long\t0x650a7354,0x766a0abb,0x81c2c92e,0x92722c85\n.long\t0xa2bfe8a1,0xa81a664b,0xc24b8b70,0xc76c51a3\n.long\t0xd192e819,0xd6990624,0xf40e3585,0x106aa070\n.long\t0x19a4c116,0x1e376c08,0x2748774c,0x34b0bcb5\n.long\t0x391c0cb3,0x4ed8aa4a,0x5b9cca4f,0x682e6ff3\n.long\t0x748f82ee,0x78a5636f,0x84c87814,0x8cc70208\n.long\t0x90befffa,0xa4506ceb,0xbef9a3f7,0xc67178f2\n.long\t0xC28A2F98,0x71374491,0xB5C0FBCF,0xE9B5DBA5\t\t//64B\n.long\t0x3956C25B,0x59F111F1,0x923F82A4,0xAB1C5ED5\n.long\t0xD807AA98,0x12835B01,0x243185BE,0x550C7DC3\n.long\t0x72BE5D74,0x80DEB1FE,0x9BDC06A7,0xC19BF374\n.long\t0x649B69C1,0xF0FE4786,0x0FE1EDC6,0x240CF254\n.long\t0x4FE9346F,0x6CC984BE,0x61B9411E,0x16F988FA\n.long\t0xF2C65152,0xA88E5A6D,0xB019FC65,0xB9D99EC7\n.long\t0x9A1231C3,0xE70EEAA0,0xFDB1232B,0xC7353EB0\n.long\t0x3069BAD5,0xCB976D5F,0x5A0F118F,0xDC1EEEFD\n.long\t0x0A35B689,0xDE0B7A04,0x58F4CA9D,0xE15D5B16\n.long\t0x007F3E86,0x37088980,0xA507EA32,0x6FAB9537\n.long\t0x17406110,0x0D8CD6F1,0xCDAA3B6D,0xC0BBBE37\n.long\t0x83613BDA,0xDB48A363,0x0B02E931,0x6FD15CA7\n.long\t0x521AFACA,0x31338431,0x6ED41A95,0x6D437890\n.long\t0xC39C91F2,0x9ECCABBD,0xB5C9A0E6,0x532FB63C\n.long\t0xD2C741C6,0x07237EA3,0xA4954B68,0x4C191D76\n.long\t0\t//terminator\n"
  },
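Reading the register usage in `_sha256_block_vdf_order` (the state is loaded from and stored back to `[x0]`, message data is read from `[x1]`, and `x2` is decremented as the block counter), a plausible C-side declaration is sketched below. The prototype is an assumption inferred from the assembly, not a documented interface; the leading underscore in the label is the usual Mach-O symbol prefix.

```cpp
#include <cstdint>
#include <cstddef>

// Assumed prototype, inferred from sha256-armv8.S register usage:
//   x0 = pointer to the 8-word SHA-256 state (read and written back),
//   x1 = pointer to the message data,
//   x2 = number of blocks to process (loop counter).
extern "C" void sha256_block_vdf_order(std::uint32_t state[8],
                                       const std::uint8_t* data,
                                       std::size_t num_blocks);
```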
  {
    "path": "apps/arweave/c_src/vdf/vdf.cpp",
    "content": "#include <thread>\n#include <cstdint>\n#include <cstring>\n#include <cstdlib>\n#include <vector>\n#include <mutex>\n#include <openssl/sha.h>\n#include \"vdf.h\"\n\nextern \"C\" {\n\nstruct vdf_sha_thread_arg {\n\tunsigned char* saltBuffer;\n\tunsigned char* seed;\n\tunsigned char* outCheckpoint;\n\tint checkpointCount;\n\tint skipCheckpointCount;\n\tint hashingIterations;\n\tunsigned char* out;\n};\n\nstruct vdf_sha_verify_thread_arg;\n\nclass vdf_verify_job {\npublic:\n\tunsigned char* startSaltBuffer;\n\tunsigned char* seed;\n\tunsigned char* inCheckpointSha;\n\tunsigned char* outCheckpointSha;\n\tunsigned char* inCheckpointRandomx;\n\tunsigned char* resetStepNumberBin256;\n\tunsigned char* resetSeed;\n\tint checkpointCount;\n\tint skipCheckpointCount;\n\tint hashingIterationsSha;\n\tint hashingIterationsRandomx;\n\n\tstd::vector<vdf_sha_verify_thread_arg> _vdf_sha_verify_thread_arg_list;\n\tvolatile bool verifyRes;\n\tstd::mutex lock;\n};\n\nstruct vdf_sha_verify_thread_arg {\n\tstd::thread* thread;\n\tvolatile bool in_progress;\n\n\tvdf_verify_job* job;\n\tint checkpointIdx;\n};\n\n////////////////////////////////////////////////////////////////////////////////////////////////////\n//    SHA\n////////////////////////////////////////////////////////////////////////////////////////////////////\n// NOTE saltBuffer is mutable in progress\nvoid _vdf_sha2(unsigned char* saltBuffer, unsigned char* seed, unsigned char* out, unsigned char* outCheckpoint, int checkpointCount, int skipCheckpointCount, int hashingIterations) {\n\tunsigned char tempOut[VDF_SHA_HASH_SIZE];\n\t// 2 different branches for different optimisation cases\n\tif (skipCheckpointCount == 0) {\n\t\tfor(int checkpointIdx = 0; checkpointIdx <= checkpointCount; checkpointIdx++) {\n\t\t\tunsigned char* locIn  = checkpointIdx == 0               ? seed : (outCheckpoint + VDF_SHA_HASH_SIZE*(checkpointIdx-1));\n\t\t\tunsigned char* locOut = checkpointIdx == checkpointCount ? out  : (outCheckpoint + VDF_SHA_HASH_SIZE*checkpointIdx);\n\n\t\t\t{\n\t\t\t\tSHA256_CTX sha256;\n\t\t\t\tSHA256_Init(&sha256);\n\t\t\t\tSHA256_Update(&sha256, saltBuffer, SALT_SIZE);\n\t\t\t\tSHA256_Update(&sha256, locIn, VDF_SHA_HASH_SIZE); // -1 memcpy\n\t\t\t\tSHA256_Final(tempOut, &sha256);\n\t\t\t}\n\t\t\tfor(int i = 2; i < hashingIterations; i++) {\n\t\t\t\tSHA256_CTX sha256;\n\t\t\t\tSHA256_Init(&sha256);\n\t\t\t\tSHA256_Update(&sha256, saltBuffer, SALT_SIZE);\n\t\t\t\tSHA256_Update(&sha256, tempOut, VDF_SHA_HASH_SIZE);\n\t\t\t\tSHA256_Final(tempOut, &sha256);\n\t\t\t}\n\t\t\t{\n\t\t\t\tSHA256_CTX sha256;\n\t\t\t\tSHA256_Init(&sha256);\n\t\t\t\tSHA256_Update(&sha256, saltBuffer, SALT_SIZE);\n\t\t\t\tSHA256_Update(&sha256, tempOut, VDF_SHA_HASH_SIZE);\n\t\t\t\tSHA256_Final(locOut, &sha256);\n\t\t\t}\n\t\t\tlong_add(saltBuffer, 1);\n\t\t}\n\t} else {\n\t\tfor(int checkpointIdx = 0; checkpointIdx <= checkpointCount; checkpointIdx++) {\n\t\t\tunsigned char* locIn  = checkpointIdx == 0               ? seed : (outCheckpoint + VDF_SHA_HASH_SIZE*(checkpointIdx-1));\n\t\t\tunsigned char* locOut = checkpointIdx == checkpointCount ? 
out  : (outCheckpoint + VDF_SHA_HASH_SIZE*checkpointIdx);\n\n\t\t\t{\n\t\t\t\tSHA256_CTX sha256;\n\t\t\t\tSHA256_Init(&sha256);\n\t\t\t\tSHA256_Update(&sha256, saltBuffer, SALT_SIZE);\n\t\t\t\tSHA256_Update(&sha256, locIn, VDF_SHA_HASH_SIZE); // -1 memcpy\n\t\t\t\tSHA256_Final(tempOut, &sha256);\n\t\t\t}\n\t\t\t// 1 skip on start\n\t\t\tfor(int i = 1; i < hashingIterations; i++) {\n\t\t\t\tSHA256_CTX sha256;\n\t\t\t\tSHA256_Init(&sha256);\n\t\t\t\tSHA256_Update(&sha256, saltBuffer, SALT_SIZE);\n\t\t\t\tSHA256_Update(&sha256, tempOut, VDF_SHA_HASH_SIZE);\n\t\t\t\tSHA256_Final(tempOut, &sha256);\n\t\t\t}\n\t\t\tlong_add(saltBuffer, 1);\n\t\t\tfor(int j = 1; j < skipCheckpointCount; j++) {\n\t\t\t\t// no skips\n\t\t\t\tfor(int i = 0; i < hashingIterations; i++) {\n\t\t\t\t\tSHA256_CTX sha256;\n\t\t\t\t\tSHA256_Init(&sha256);\n\t\t\t\t\tSHA256_Update(&sha256, saltBuffer, SALT_SIZE);\n\t\t\t\t\tSHA256_Update(&sha256, tempOut, VDF_SHA_HASH_SIZE);\n\t\t\t\t\tSHA256_Final(tempOut, &sha256);\n\t\t\t\t}\n\t\t\t\tlong_add(saltBuffer, 1);\n\t\t\t}\n\t\t\t// 1 skip on end\n\t\t\tfor(int i = 1; i < hashingIterations; i++) {\n\t\t\t\tSHA256_CTX sha256;\n\t\t\t\tSHA256_Init(&sha256);\n\t\t\t\tSHA256_Update(&sha256, saltBuffer, SALT_SIZE);\n\t\t\t\tSHA256_Update(&sha256, tempOut, VDF_SHA_HASH_SIZE);\n\t\t\t\tSHA256_Final(tempOut, &sha256);\n\t\t\t}\n\t\t\t{\n\t\t\t\tSHA256_CTX sha256;\n\t\t\t\tSHA256_Init(&sha256);\n\t\t\t\tSHA256_Update(&sha256, saltBuffer, SALT_SIZE);\n\t\t\t\tSHA256_Update(&sha256, tempOut, VDF_SHA_HASH_SIZE);\n\t\t\t\tSHA256_Final(locOut, &sha256);\n\t\t\t}\n\t\t\tlong_add(saltBuffer, 1);\n\t\t}\n\t}\n}\n\n// use\n//   unsigned char out[VDF_SHA_HASH_SIZE];\n//   unsigned char* outCheckpoint = (unsigned char*)malloc(checkpointCount*VDF_SHA_HASH_SIZE);\n//   free(outCheckpoint);\n// for call\nvoid vdf_sha2(unsigned char* saltBuffer, unsigned char* seed, unsigned char* out, unsigned char* outCheckpoint, int checkpointCount, int skipCheckpointCount, int hashingIterations) {\n\tunsigned char saltBufferStack[SALT_SIZE];\n\t// ensure 1 L1 cache page used\n\t// no access to heap, except of 0-iteration\n\tmemcpy(saltBufferStack, saltBuffer, SALT_SIZE);\n\n\t_vdf_sha2(saltBufferStack, seed, out, outCheckpoint, checkpointCount, skipCheckpointCount, hashingIterations);\n}\n\n////////////////////////////////////////////////////////////////////////////////////////////////////\n//    Verify SHA\n////////////////////////////////////////////////////////////////////////////////////////////////////\nvoid _vdf_sha_verify_thread(vdf_sha_verify_thread_arg* _arg) {\n\tvdf_sha_verify_thread_arg* arg = _arg;\n\twhile(true) {\n\t\tif (!arg->job->verifyRes) {\n\t\t\treturn;\n\t\t}\n\n\t\tunsigned char expdOut[VDF_SHA_HASH_SIZE];\n\t\tunsigned char* in = arg->checkpointIdx == 0 ? 
arg->job->seed : (arg->job->inCheckpointSha + (arg->checkpointIdx-1)*VDF_SHA_HASH_SIZE);\n\t\tunsigned char* out = arg->job->inCheckpointSha + arg->checkpointIdx*VDF_SHA_HASH_SIZE;\n\t\tunsigned char* outFullCheckpoint = arg->job->outCheckpointSha + arg->checkpointIdx*(1+arg->job->skipCheckpointCount)*VDF_SHA_HASH_SIZE;\n\t\tunsigned char  saltBuffer[SALT_SIZE];\n\t\tmemcpy(saltBuffer, arg->job->startSaltBuffer, SALT_SIZE);\n\t\tlong_add(saltBuffer, arg->checkpointIdx*(1+arg->job->skipCheckpointCount));\n\n\t\t// unrolled for memcpy inject\n\t\t// _vdf_sha2(saltBuffer, in, expdOut, NULL, 0, arg->job->skipCheckpointCount, arg->job->hashingIterationsSha);\n\n\t\t// do not rewrite in\n\t\tunsigned char inCopy[VDF_SHA_HASH_SIZE];\n\t\tmemcpy(inCopy, in, VDF_SHA_HASH_SIZE);\n\t\tfor(int i=0;i<=arg->job->skipCheckpointCount;i++) {\n\t\t\t_vdf_sha2(saltBuffer, inCopy, expdOut, NULL, 0, 0, arg->job->hashingIterationsSha);\n\t\t\tmemcpy(outFullCheckpoint, expdOut, VDF_SHA_HASH_SIZE);\n\t\t\toutFullCheckpoint += VDF_SHA_HASH_SIZE;\n\t\t\tmemcpy(inCopy, expdOut, VDF_SHA_HASH_SIZE);\n\t\t\t// NOTE long_add included\n\t\t}\n\t\t// 0 == equal\n\t\tif (0 != memcmp(expdOut, out, VDF_SHA_HASH_SIZE)) {\n\t\t\targ->job->verifyRes = false;\n\t\t\treturn;\n\t\t}\n\n\t\t{\n\t\t\tconst std::lock_guard<std::mutex> lock(arg->job->lock);\n\n\t\t\tbool found = false;\n\t\t\tfor(int i=arg->checkpointIdx+1;i<arg->job->checkpointCount;i++) {\n\t\t\t\tvdf_sha_verify_thread_arg* new_arg = &arg->job->_vdf_sha_verify_thread_arg_list[i];\n\t\t\t\tif (!new_arg->in_progress) {\n\t\t\t\t\tnew_arg->in_progress = true;\n\t\t\t\t\targ = new_arg;\n\t\t\t\t\tfound = true;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (!found) break;\n\t\t}\n\t}\n\n\t// TODO steal job from other hash function\n}\n\nvoid reset_mix(unsigned char* res, unsigned char* prevOutput, unsigned char* resetSeed) {\n\tSHA256_CTX sha256;\n\tSHA256_Init(&sha256);\n\tSHA256_Update(&sha256, prevOutput, VDF_SHA_HASH_SIZE);\n\tSHA256_Update(&sha256, resetSeed, VDF_SHA_HASH_SIZE);\n\tSHA256_Final(res, &sha256);\n}\n\nbool fast_rev_cmp256(unsigned char* a, unsigned char* b) {\n\tfor(int i=31; i>=0; i--) {\n\t\tif (a[i] != b[i])\n\t\t\treturn false;\n\t}\n\treturn true;\n}\n\nvoid _vdf_sha_verify_with_reset_thread(vdf_sha_verify_thread_arg* _arg) {\n\tvdf_sha_verify_thread_arg* arg = _arg;\n\twhile(true) {\n\t\tif (!arg->job->verifyRes) {\n\t\t\treturn;\n\t\t}\n\n\t\tunsigned char expdOut[VDF_SHA_HASH_SIZE];\n\t\tunsigned char* in = arg->checkpointIdx == 0 ? 
arg->job->seed : (arg->job->inCheckpointSha + (arg->checkpointIdx-1)*VDF_SHA_HASH_SIZE);\n\t\tunsigned char* out = arg->job->inCheckpointSha + arg->checkpointIdx*VDF_SHA_HASH_SIZE;\n\t\tunsigned char* outFullCheckpoint = arg->job->outCheckpointSha + arg->checkpointIdx*(1+arg->job->skipCheckpointCount)*VDF_SHA_HASH_SIZE;\n\t\tunsigned char  saltBuffer[SALT_SIZE];\n\t\tmemcpy(saltBuffer, arg->job->startSaltBuffer, SALT_SIZE);\n\t\tlong_add(saltBuffer, arg->checkpointIdx*(1+arg->job->skipCheckpointCount));\n\n\t\t// unrolled for memcpy inject and reset_mix\n\t\t// _vdf_sha2(saltBuffer, in, expdOut, NULL, 0, arg->job->skipCheckpointCount, arg->job->hashingIterationsSha);\n\n\t\t// do not rewrite in\n\t\tunsigned char inCopy[VDF_SHA_HASH_SIZE];\n\t\tmemcpy(inCopy, in, VDF_SHA_HASH_SIZE);\n\n\t\tif (fast_rev_cmp256(saltBuffer, arg->job->resetStepNumberBin256)) {\n\t\t\treset_mix(inCopy, inCopy, arg->job->resetSeed);\n\t\t}\n\n\t\tfor(int i=0;i<=arg->job->skipCheckpointCount;i++) {\n\t\t\t_vdf_sha2(saltBuffer, inCopy, expdOut, NULL, 0, 0, arg->job->hashingIterationsSha);\n\t\t\tmemcpy(outFullCheckpoint, expdOut, VDF_SHA_HASH_SIZE);\n\t\t\toutFullCheckpoint += VDF_SHA_HASH_SIZE;\n\t\t\tmemcpy(inCopy, expdOut, VDF_SHA_HASH_SIZE);\n\n\t\t\tif (fast_rev_cmp256(saltBuffer, arg->job->resetStepNumberBin256)) {\n\t\t\t\treset_mix(inCopy, inCopy, arg->job->resetSeed);\n\t\t\t}\n\t\t\t// NOTE long_add included\n\t\t}\n\t\t// 0 == equal\n\t\tif (0 != memcmp(expdOut, out, VDF_SHA_HASH_SIZE)) {\n\t\t\targ->job->verifyRes = false;\n\t\t\treturn;\n\t\t}\n\n\t\t{\n\t\t\tconst std::lock_guard<std::mutex> lock(arg->job->lock);\n\n\t\t\tbool found = false;\n\t\t\tfor(int i=arg->checkpointIdx+1;i<arg->job->checkpointCount;i++) {\n\t\t\t\tvdf_sha_verify_thread_arg* new_arg = &arg->job->_vdf_sha_verify_thread_arg_list[i];\n\t\t\t\tif (!new_arg->in_progress) {\n\t\t\t\t\tnew_arg->in_progress = true;\n\t\t\t\t\targ = new_arg;\n\t\t\t\t\tfound = true;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (!found) break;\n\t\t}\n\t}\n\n\t// TODO steal job from other hash function\n}\n\nbool vdf_parallel_sha_verify_with_reset(unsigned char* startSaltBuffer, unsigned char* seed, int checkpointCount, int skipCheckpointCount, int hashingIterations, unsigned char* inRes, unsigned char* inCheckpoint, unsigned char* outCheckpoint, unsigned char* resetStepNumberBin256, unsigned char* resetSeed, int maxThreadCount) {\n\tint freeThreadCount = maxThreadCount;\n\n\tvdf_verify_job job;\n\tjob.startSaltBuffer = startSaltBuffer;\n\tjob.seed = seed;\n\tjob.inCheckpointSha = inCheckpoint;\n\tjob.outCheckpointSha = outCheckpoint;\n\tjob.checkpointCount = checkpointCount;\n\tjob.skipCheckpointCount = skipCheckpointCount;\n\tjob.hashingIterationsSha = hashingIterations;\n\tjob.resetStepNumberBin256 = resetStepNumberBin256;\n\tjob.resetSeed = resetSeed;\n\tjob.verifyRes = true;\n\n\tjob._vdf_sha_verify_thread_arg_list    .resize(checkpointCount);\n\n\tfor (int checkpointIdx=0;checkpointIdx<checkpointCount;checkpointIdx++) {\n\t\tstruct vdf_sha_verify_thread_arg*     _vdf_sha_verify_thread_arg     = &job._vdf_sha_verify_thread_arg_list[checkpointIdx];\n\t\t_vdf_sha_verify_thread_arg    ->checkpointIdx = checkpointIdx;\n\t\t_vdf_sha_verify_thread_arg    ->thread = NULL;\n\t\t_vdf_sha_verify_thread_arg    ->in_progress = false;\n\t\t_vdf_sha_verify_thread_arg    ->job = &job;\n\t}\n\n\tfor (int checkpointIdx=0;checkpointIdx<checkpointCount;checkpointIdx++) {\n\t\tstruct vdf_sha_verify_thread_arg*     _vdf_sha_verify_thread_arg     = 
&job._vdf_sha_verify_thread_arg_list[checkpointIdx];\n\t\tif (freeThreadCount > 0) {\n\t\t\tfreeThreadCount--;\n\t\t\tconst std::lock_guard<std::mutex> lock(job.lock);\n\t\t\t_vdf_sha_verify_thread_arg->in_progress = true;\n\t\t\t_vdf_sha_verify_thread_arg->thread = new std::thread(_vdf_sha_verify_with_reset_thread, _vdf_sha_verify_thread_arg);\n\t\t}\n\t\tif (freeThreadCount == 0) break;\n\t}\n\n\tif (job.verifyRes) {\n\t\tunsigned char expdOut[VDF_SHA_HASH_SIZE];\n\t\tunsigned char* sha_temp_result;\n\t\tif (checkpointCount == 0) {\n\t\t\tsha_temp_result = seed;\n\t\t} else {\n\t\t\tsha_temp_result = inCheckpoint + (checkpointCount-1)*VDF_SHA_HASH_SIZE;\n\t\t}\n\n\t\tunsigned char finalSaltBuffer[SALT_SIZE];\n\t\tmemcpy(finalSaltBuffer, startSaltBuffer, SALT_SIZE);\n\t\tlong_add(finalSaltBuffer, checkpointCount*(1+skipCheckpointCount));\n\n\t\tunsigned char* outFullCheckpoint = outCheckpoint + checkpointCount*(1+skipCheckpointCount)*VDF_SHA_HASH_SIZE;\n\n\t\t// unrolled for memcpy inject\n\t\t// _vdf_sha2(finalSaltBuffer, sha_temp_result, expdOut, NULL, 0, skipCheckpointCount, hashingIterations);\n\n\t\t// do not rewrite in\n\t\tunsigned char inCopy[VDF_SHA_HASH_SIZE];\n\t\tmemcpy(inCopy, sha_temp_result, VDF_SHA_HASH_SIZE);\n\n\t\tif (fast_rev_cmp256(finalSaltBuffer, resetStepNumberBin256)) {\n\t\t\treset_mix(inCopy, inCopy, resetSeed);\n\t\t}\n\n\t\tfor(int i=0;i<=skipCheckpointCount;i++) {\n\t\t\t_vdf_sha2(finalSaltBuffer, inCopy, expdOut, NULL, 0, 0, hashingIterations);\n\t\t\tmemcpy(outFullCheckpoint, expdOut, VDF_SHA_HASH_SIZE);\n\t\t\toutFullCheckpoint += VDF_SHA_HASH_SIZE;\n\t\t\tmemcpy(inCopy, expdOut, VDF_SHA_HASH_SIZE);\n\n\t\t\tif (fast_rev_cmp256(finalSaltBuffer, resetStepNumberBin256)) {\n\t\t\t\treset_mix(inCopy, inCopy, resetSeed);\n\t\t\t}\n\t\t\t// NOTE long_add included\n\t\t}\n\t\tif (0 != memcmp(expdOut, inRes, VDF_SHA_HASH_SIZE)) {\n\t\t\tjob.verifyRes = false;\n\t\t}\n\t}\n\n\tfor (int checkpointIdx=0;checkpointIdx<checkpointCount;checkpointIdx++) {\n\t\tstruct vdf_sha_verify_thread_arg*     _vdf_sha_verify_thread_arg     = &job._vdf_sha_verify_thread_arg_list[checkpointIdx];\n\n\t\tif (_vdf_sha_verify_thread_arg->thread) {\n\t\t\t_vdf_sha_verify_thread_arg->thread->join();\n\t\t\t// the thread objects are allocated with new, so release them with delete\n\t\t\tdelete _vdf_sha_verify_thread_arg->thread;\n\t\t}\n\t}\n\n\treturn job.verifyRes;\n}\n\n}"
  },
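The `// use ... // for call` comment above `vdf_sha2` spells out the buffers a caller is expected to provide. A minimal round-trip sketch under those assumptions, computing a short VDF and then checking it with `vdf_parallel_sha_verify_with_reset` (parameter values are illustrative, not protocol constants; the reset salt is chosen so the reset branch never fires):

```cpp
#include <cstring>
#include <cstdio>
#include "vdf.h"

int main() {
	const int checkpointCount = 25, skipCheckpointCount = 0, hashingIterations = 5000;
	unsigned char salt[SALT_SIZE] = {0};          // big-endian step number
	unsigned char seed[VDF_SHA_HASH_SIZE] = {1};  // illustrative seed
	unsigned char out[VDF_SHA_HASH_SIZE];
	unsigned char checkpoints[checkpointCount * VDF_SHA_HASH_SIZE];

	vdf_sha2(salt, seed, out, checkpoints, checkpointCount, skipCheckpointCount,
		hashingIterations);

	// Verification recomputes every step; its output buffer holds
	// (1 + checkpointCount) * (1 + skipCheckpointCount) hashes.
	unsigned char fullCheckpoints[(1 + checkpointCount) * VDF_SHA_HASH_SIZE];
	unsigned char resetSalt[SALT_SIZE];
	memset(resetSalt, 0xFF, SALT_SIZE);           // assumed to match no step in range
	unsigned char resetSeed[VDF_SHA_HASH_SIZE] = {0};

	bool ok = vdf_parallel_sha_verify_with_reset(salt, seed, checkpointCount,
		skipCheckpointCount, hashingIterations, out, checkpoints, fullCheckpoints,
		resetSalt, resetSeed, /*maxThreadCount*/ 4);
	printf("verify: %s\n", ok ? "ok" : "failed");
	return ok ? 0 : 1;
}
```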
  {
    "path": "apps/arweave/c_src/vdf/vdf.h",
    "content": "#ifndef VDF_H\n#define VDF_H\n\n#include <stdbool.h>\n\nconst int SALT_SIZE = 32;\nconst int VDF_SHA_HASH_SIZE = 32;\n\nstatic inline void long_add(unsigned char* saltBuffer, int checkpointIdx) {\n\tunsigned int acc = checkpointIdx;\n\t// big endian from erlang\n\tfor(int i=SALT_SIZE-1;i>=0;i--) {\n\t\tunsigned int value = saltBuffer[i];\n\t\tvalue += acc;\n\t\tsaltBuffer[i] = value & 0xFF;\n\t\tacc = value >> 8;\n\t\tif (acc == 0) break;\n\t}\n}\n\n#if defined(__cplusplus)\nextern \"C\" {\n#endif\n\n// out checkpoint should return all checkpoints including skipCheckpointCount\nvoid vdf_sha2(unsigned char* saltBuffer, unsigned char* seed, unsigned char* out, unsigned char* outCheckpoint, int checkpointCount, int skipCheckpointCount, int hashingIterations);\nvoid vdf_sha2_fused_x86(unsigned char* saltBuffer, unsigned char* seed, unsigned char* out, unsigned char* outCheckpoint, int checkpointCount, int skipCheckpointCount, int hashingIterations);\nvoid vdf_sha2_fused_arm(unsigned char* saltBuffer, unsigned char* seed, unsigned char* out, unsigned char* outCheckpoint, int checkpointCount, int skipCheckpointCount, int hashingIterations);\nvoid vdf_sha2_hiopt_arm(unsigned char* saltBuffer, unsigned char* seed, unsigned char* out, unsigned char* outCheckpoint, int checkpointCount, int skipCheckpointCount, int hashingIterations);\nbool vdf_parallel_sha_verify_with_reset(unsigned char* startSaltBuffer, unsigned char* seed, int checkpointCount, int skipCheckpointCount, int hashingIterations, unsigned char* inRes, unsigned char* inCheckpoint, unsigned char* outCheckpoint, unsigned char* resetSalt, unsigned char* resetSeed, int maxThreadCount);\n\n#if defined(__cplusplus)\n}\n#endif\n\n\n#endif\n"
  },
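`long_add` in the header treats the 32-byte salt as a single big-endian integer and adds a small increment, propagating the carry from the last byte upward and stopping once the carry is exhausted. A small sketch of the expected behaviour (values are illustrative):

```cpp
#include <cstdio>
#include "vdf.h"

int main() {
	// Salt ending in ... 0x00 0xFF 0xFF; adding 1 carries two bytes up.
	unsigned char salt[SALT_SIZE] = {0};
	salt[SALT_SIZE - 1] = 0xFF;
	salt[SALT_SIZE - 2] = 0xFF;

	long_add(salt, 1);

	// Expected last three bytes: 0x01 0x00 0x00
	printf("%02x %02x %02x\n",
		salt[SALT_SIZE - 3], salt[SALT_SIZE - 2], salt[SALT_SIZE - 1]);
	return 0;
}
```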
  {
    "path": "apps/arweave/c_src/vdf/vdf_fused_arm.cpp",
    "content": "#include <cstdint>\n#include <cstring>\n#include \"vdf.h\"\n\n#if defined(__aarch64__) || defined(__arm__)\n\n#include <arm_neon.h>\n#include <arm_acle.h>\n\nstatic const uint32_t K[] = {\n\t0x428A2F98, 0x71374491, 0xB5C0FBCF, 0xE9B5DBA5,\n\t0x3956C25B, 0x59F111F1, 0x923F82A4, 0xAB1C5ED5,\n\t0xD807AA98, 0x12835B01, 0x243185BE, 0x550C7DC3,\n\t0x72BE5D74, 0x80DEB1FE, 0x9BDC06A7, 0xC19BF174,\n\t0xE49B69C1, 0xEFBE4786, 0x0FC19DC6, 0x240CA1CC,\n\t0x2DE92C6F, 0x4A7484AA, 0x5CB0A9DC, 0x76F988DA,\n\t0x983E5152, 0xA831C66D, 0xB00327C8, 0xBF597FC7,\n\t0xC6E00BF3, 0xD5A79147, 0x06CA6351, 0x14292967,\n\t0x27B70A85, 0x2E1B2138, 0x4D2C6DFC, 0x53380D13,\n\t0x650A7354, 0x766A0ABB, 0x81C2C92E, 0x92722C85,\n\t0xA2BFE8A1, 0xA81A664B, 0xC24B8B70, 0xC76C51A3,\n\t0xD192E819, 0xD6990624, 0xF40E3585, 0x106AA070,\n\t0x19A4C116, 0x1E376C08, 0x2748774C, 0x34B0BCB5,\n\t0x391C0CB3, 0x4ED8AA4A, 0x5B9CCA4F, 0x682E6FF3,\n\t0x748F82EE, 0x78A5636F, 0x84C87814, 0x8CC70208,\n\t0x90BEFFFA, 0xA4506CEB, 0xBEF9A3F7, 0xC67178F2\n};\n\nvoid sha2_p2_32_32_rev_norm (unsigned char *output,\n                             const unsigned char *input1,\n                             const unsigned char *input2)\n{\n\tuint32_t state[8] = {\n\t\t0x6a09e667, 0xbb67ae85, 0x3c6ef372, 0xa54ff53a,\n\t\t0x510e527f, 0x9b05688c, 0x1f83d9ab, 0x5be0cd19\n\t};\n\n\tuint32x4_t STATE0, STATE1, ABEF_SAVE, CDGH_SAVE;\n\tuint32x4_t MSG0, MSG1, MSG2, MSG3;\n\tuint32x4_t TMP0, TMP2;\n\n\t// Load state\n\tSTATE0 = vld1q_u32(&state[0]);\n\tSTATE1 = vld1q_u32(&state[4]);\n\n\t// Save current state\n\tABEF_SAVE = STATE0;\n\tCDGH_SAVE = STATE1;\n\n\t// Load input1 (32 bytes) and input2 (32 bytes) into two message blocks\n\t// These constitute our 64-byte block\n\tMSG0 = vld1q_u32((const uint32_t *)(input1 + 0));\n\tMSG1 = vld1q_u32((const uint32_t *)(input1 + 16));\n\tMSG2 = vld1q_u32((const uint32_t *)(input2 + 0));\n\tMSG3 = vld1q_u32((const uint32_t *)(input2 + 16));\n\n\t// Adjust endianness\n\tMSG0 = vreinterpretq_u32_u8(vrev32q_u8(vreinterpretq_u8_u32(MSG0)));\n\tMSG1 = vreinterpretq_u32_u8(vrev32q_u8(vreinterpretq_u8_u32(MSG1)));\n\tMSG2 = vreinterpretq_u32_u8(vrev32q_u8(vreinterpretq_u8_u32(MSG2)));\n\tMSG3 = vreinterpretq_u32_u8(vrev32q_u8(vreinterpretq_u8_u32(MSG3)));\n\n\t// Rounds 1-4\n\tTMP0 = vaddq_u32(MSG0, vld1q_u32(&K[0]));\n\tTMP2 = STATE0;\n\tMSG0 = vsha256su0q_u32(MSG0, MSG1);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG0 = vsha256su1q_u32(MSG0, MSG2, MSG3);\n\n\t// Rounds 5-8\n\tTMP0 = vaddq_u32(MSG1, vld1q_u32(&K[4]));\n\tTMP2 = STATE0;\n\tMSG1 = vsha256su0q_u32(MSG1, MSG2);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG1 = vsha256su1q_u32(MSG1, MSG3, MSG0);\n\n\t// Rounds 9-12\n\tTMP0 = vaddq_u32(MSG2, vld1q_u32(&K[8]));\n\tTMP2 = STATE0;\n\tMSG2 = vsha256su0q_u32(MSG2, MSG3);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG2 = vsha256su1q_u32(MSG2, MSG0, MSG1);\n\n\t// Rounds 13-16\n\tTMP0 = vaddq_u32(MSG3, vld1q_u32(&K[12]));\n\tTMP2 = STATE0;\n\tMSG3 = vsha256su0q_u32(MSG3, MSG0);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG3 = vsha256su1q_u32(MSG3, MSG1, MSG2);\n\n\t// Rounds 17-20\n\tTMP0 = vaddq_u32(MSG0, vld1q_u32(&K[16]));\n\tTMP2 = STATE0;\n\tMSG0 = vsha256su0q_u32(MSG0, MSG1);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG0 = 
vsha256su1q_u32(MSG0, MSG2, MSG3);\n\n\t// Rounds 21-24\n\tTMP0 = vaddq_u32(MSG1, vld1q_u32(&K[20]));\n\tTMP2 = STATE0;\n\tMSG1 = vsha256su0q_u32(MSG1, MSG2);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG1 = vsha256su1q_u32(MSG1, MSG3, MSG0);\n\n\t// Rounds 25-28\n\tTMP0 = vaddq_u32(MSG2, vld1q_u32(&K[24]));\n\tTMP2 = STATE0;\n\tMSG2 = vsha256su0q_u32(MSG2, MSG3);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG2 = vsha256su1q_u32(MSG2, MSG0, MSG1);\n\n\t// Rounds 29-32\n\tTMP0 = vaddq_u32(MSG3, vld1q_u32(&K[28]));\n\tTMP2 = STATE0;\n\tMSG3 = vsha256su0q_u32(MSG3, MSG0);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG3 = vsha256su1q_u32(MSG3, MSG1, MSG2);\n\n\t// Rounds 33-36\n\tTMP0 = vaddq_u32(MSG0, vld1q_u32(&K[32]));\n\tTMP2 = STATE0;\n\tMSG0 = vsha256su0q_u32(MSG0, MSG1);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG0 = vsha256su1q_u32(MSG0, MSG2, MSG3);\n\n\t// Rounds 37-40\n\tTMP0 = vaddq_u32(MSG1, vld1q_u32(&K[36]));\n\tTMP2 = STATE0;\n\tMSG1 = vsha256su0q_u32(MSG1, MSG2);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG1 = vsha256su1q_u32(MSG1, MSG3, MSG0);\n\n\t// Rounds 41-44\n\tTMP0 = vaddq_u32(MSG2, vld1q_u32(&K[40]));\n\tTMP2 = STATE0;\n\tMSG2 = vsha256su0q_u32(MSG2, MSG3);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG2 = vsha256su1q_u32(MSG2, MSG0, MSG1);\n\n\t// Rounds 45-48\n\tTMP0 = vaddq_u32(MSG3, vld1q_u32(&K[44]));\n\tTMP2 = STATE0;\n\tMSG3 = vsha256su0q_u32(MSG3, MSG0);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG3 = vsha256su1q_u32(MSG3, MSG1, MSG2);\n\n\t// Rounds 49-52\n\tTMP0 = vaddq_u32(MSG0, vld1q_u32(&K[48]));\n\tTMP2 = STATE0;\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\n\t// Rounds 53-56\n\tTMP0 = vaddq_u32(MSG1, vld1q_u32(&K[52]));\n\tTMP2 = STATE0;\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\n\t// Rounds 57-60\n\tTMP0 = vaddq_u32(MSG2, vld1q_u32(&K[56]));\n\tTMP2 = STATE0;\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\n\t// Rounds 61-64\n\tTMP0 = vaddq_u32(MSG3, vld1q_u32(&K[60]));\n\tTMP2 = STATE0;\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\n\t// Update state\n\tSTATE0 = vaddq_u32(STATE0, ABEF_SAVE);\n\tSTATE1 = vaddq_u32(STATE1, CDGH_SAVE);\n\n\t// Now we need to process the padding block\n\t// For a 64-byte input (32+32), the padding block consists of 0x80 followed by zeros\n\t// and the 64-bit length (512 bits)\n\n\t// TODO merge with endian fixes\n\tuint8_t padding[64] = {0};\n\tpadding[0] = 0x80;  // Padding start marker\n\n\t// Set the 64-bit length value (512 bits = 0x0200)\n\tpadding[56] = 0x00;\n\tpadding[57] = 0x00;\n\tpadding[58] = 0x00;\n\tpadding[59] = 0x00;\n\tpadding[60] = 0x00;\n\tpadding[61] = 0x00;\n\tpadding[62] = 0x02;\n\tpadding[63] = 0x00;\n\n\t// Save current state\n\tABEF_SAVE = STATE0;\n\tCDGH_SAVE = STATE1;\n\n\t// Load padding block\n\tMSG0 = vld1q_u32((const uint32_t *)(padding + 0));\n\tMSG1 = vld1q_u32((const uint32_t *)(padding + 16));\n\tMSG2 = vld1q_u32((const uint32_t *)(padding + 32));\n\tMSG3 = vld1q_u32((const 
uint32_t *)(padding + 48));\n\n\tMSG0 = vreinterpretq_u32_u8(vrev32q_u8(vreinterpretq_u8_u32(MSG0)));\n\tMSG1 = vreinterpretq_u32_u8(vrev32q_u8(vreinterpretq_u8_u32(MSG1)));\n\tMSG2 = vreinterpretq_u32_u8(vrev32q_u8(vreinterpretq_u8_u32(MSG2)));\n\tMSG3 = vreinterpretq_u32_u8(vrev32q_u8(vreinterpretq_u8_u32(MSG3)));\n\n\t// Rounds 1-4\n\tTMP0 = vaddq_u32(MSG0, vld1q_u32(&K[0]));\n\tTMP2 = STATE0;\n\tMSG0 = vsha256su0q_u32(MSG0, MSG1);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG0 = vsha256su1q_u32(MSG0, MSG2, MSG3);\n\n\t// Rounds 5-8\n\tTMP0 = vaddq_u32(MSG1, vld1q_u32(&K[4]));\n\tTMP2 = STATE0;\n\tMSG1 = vsha256su0q_u32(MSG1, MSG2);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG1 = vsha256su1q_u32(MSG1, MSG3, MSG0);\n\n\t// Rounds 9-12\n\tTMP0 = vaddq_u32(MSG2, vld1q_u32(&K[8]));\n\tTMP2 = STATE0;\n\tMSG2 = vsha256su0q_u32(MSG2, MSG3);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG2 = vsha256su1q_u32(MSG2, MSG0, MSG1);\n\n\t// Rounds 13-16\n\tTMP0 = vaddq_u32(MSG3, vld1q_u32(&K[12]));\n\tTMP2 = STATE0;\n\tMSG3 = vsha256su0q_u32(MSG3, MSG0);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG3 = vsha256su1q_u32(MSG3, MSG1, MSG2);\n\n\t// Rounds 17-20\n\tTMP0 = vaddq_u32(MSG0, vld1q_u32(&K[16]));\n\tTMP2 = STATE0;\n\tMSG0 = vsha256su0q_u32(MSG0, MSG1);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG0 = vsha256su1q_u32(MSG0, MSG2, MSG3);\n\n\t// Rounds 21-24\n\tTMP0 = vaddq_u32(MSG1, vld1q_u32(&K[20]));\n\tTMP2 = STATE0;\n\tMSG1 = vsha256su0q_u32(MSG1, MSG2);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG1 = vsha256su1q_u32(MSG1, MSG3, MSG0);\n\n\t// Rounds 25-28\n\tTMP0 = vaddq_u32(MSG2, vld1q_u32(&K[24]));\n\tTMP2 = STATE0;\n\tMSG2 = vsha256su0q_u32(MSG2, MSG3);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG2 = vsha256su1q_u32(MSG2, MSG0, MSG1);\n\n\t// Rounds 29-32\n\tTMP0 = vaddq_u32(MSG3, vld1q_u32(&K[28]));\n\tTMP2 = STATE0;\n\tMSG3 = vsha256su0q_u32(MSG3, MSG0);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG3 = vsha256su1q_u32(MSG3, MSG1, MSG2);\n\n\t// Rounds 33-36\n\tTMP0 = vaddq_u32(MSG0, vld1q_u32(&K[32]));\n\tTMP2 = STATE0;\n\tMSG0 = vsha256su0q_u32(MSG0, MSG1);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG0 = vsha256su1q_u32(MSG0, MSG2, MSG3);\n\n\t// Rounds 37-40\n\tTMP0 = vaddq_u32(MSG1, vld1q_u32(&K[36]));\n\tTMP2 = STATE0;\n\tMSG1 = vsha256su0q_u32(MSG1, MSG2);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG1 = vsha256su1q_u32(MSG1, MSG3, MSG0);\n\n\t// Rounds 41-44\n\tTMP0 = vaddq_u32(MSG2, vld1q_u32(&K[40]));\n\tTMP2 = STATE0;\n\tMSG2 = vsha256su0q_u32(MSG2, MSG3);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG2 = vsha256su1q_u32(MSG2, MSG0, MSG1);\n\n\t// Rounds 45-48\n\tTMP0 = vaddq_u32(MSG3, vld1q_u32(&K[44]));\n\tTMP2 = STATE0;\n\tMSG3 = vsha256su0q_u32(MSG3, MSG0);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG3 = vsha256su1q_u32(MSG3, MSG1, MSG2);\n\n\t// Rounds 49-52\n\tTMP0 = vaddq_u32(MSG0, 
vld1q_u32(&K[48]));\n\tTMP2 = STATE0;\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\n\t// Rounds 53-56\n\tTMP0 = vaddq_u32(MSG1, vld1q_u32(&K[52]));\n\tTMP2 = STATE0;\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\n\t// Rounds 57-60\n\tTMP0 = vaddq_u32(MSG2, vld1q_u32(&K[56]));\n\tTMP2 = STATE0;\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\n\t// Rounds 61-64\n\tTMP0 = vaddq_u32(MSG3, vld1q_u32(&K[60]));\n\tTMP2 = STATE0;\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\n\t// Update state\n\tSTATE0 = vaddq_u32(STATE0, ABEF_SAVE);\n\tSTATE1 = vaddq_u32(STATE1, CDGH_SAVE);\n\n\t/* write 32-bit words little-endian */\n\tvst1q_u32((uint32_t *)output,      STATE0);\n\tvst1q_u32((uint32_t *)(output+16), STATE1);\n}\n\nvoid sha2_p2_32_32_norm_loop(unsigned char  *tempOut,\n                        const unsigned char *saltBuffer,\n                        int                 iterations)\n{\n\tuint32_t state[8] = {\n\t\t0x6a09e667, 0xbb67ae85, 0x3c6ef372, 0xa54ff53a,\n\t\t0x510e527f, 0x9b05688c, 0x1f83d9ab, 0x5be0cd19\n\t};\n\n\tuint32x4_t STATE0, STATE1, ABEF_SAVE, CDGH_SAVE;\n\tuint32x4_t MSG0, MSG1, MSG2, MSG3;\n\tuint32x4_t TMP0, TMP2;\n\n\tMSG2 = vld1q_u32((const uint32_t *)(tempOut + 0));\n\tMSG3 = vld1q_u32((const uint32_t *)(tempOut + 16));\n\n\tfor (int i = 0; i < iterations; ++i) {\n\t\tSTATE0 = vld1q_u32(&state[0]);\n\t\tSTATE1 = vld1q_u32(&state[4]);\n\n\t\t// Save current state\n\t\tABEF_SAVE = STATE0;\n\t\tCDGH_SAVE = STATE1;\n\n\t\t// Load input1 (32 bytes) and input2 (32 bytes) into two message blocks\n\t\t// These constitute our 64-byte block\n\t\tMSG0 = vld1q_u32((const uint32_t *)(saltBuffer + 0));\n\t\tMSG1 = vld1q_u32((const uint32_t *)(saltBuffer + 16));\n\n\t\t// Adjust endianness\n\t\tMSG0 = vreinterpretq_u32_u8(vrev32q_u8(vreinterpretq_u8_u32(MSG0)));\n\t\tMSG1 = vreinterpretq_u32_u8(vrev32q_u8(vreinterpretq_u8_u32(MSG1)));\n\n\t\t// Rounds 1-4\n\t\tTMP0 = vaddq_u32(MSG0, vld1q_u32(&K[0]));\n\t\tTMP2 = STATE0;\n\t\tMSG0 = vsha256su0q_u32(MSG0, MSG1);\n\t\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\t\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\t\tMSG0 = vsha256su1q_u32(MSG0, MSG2, MSG3);\n\n\t\t// Rounds 5-8\n\t\tTMP0 = vaddq_u32(MSG1, vld1q_u32(&K[4]));\n\t\tTMP2 = STATE0;\n\t\tMSG1 = vsha256su0q_u32(MSG1, MSG2);\n\t\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\t\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\t\tMSG1 = vsha256su1q_u32(MSG1, MSG3, MSG0);\n\n\t\t// Rounds 9-12\n\t\tTMP0 = vaddq_u32(MSG2, vld1q_u32(&K[8]));\n\t\tTMP2 = STATE0;\n\t\tMSG2 = vsha256su0q_u32(MSG2, MSG3);\n\t\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\t\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\t\tMSG2 = vsha256su1q_u32(MSG2, MSG0, MSG1);\n\n\t\t// Rounds 13-16\n\t\tTMP0 = vaddq_u32(MSG3, vld1q_u32(&K[12]));\n\t\tTMP2 = STATE0;\n\t\tMSG3 = vsha256su0q_u32(MSG3, MSG0);\n\t\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\t\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\t\tMSG3 = vsha256su1q_u32(MSG3, MSG1, MSG2);\n\n\t\t// Rounds 17-20\n\t\tTMP0 = vaddq_u32(MSG0, vld1q_u32(&K[16]));\n\t\tTMP2 = STATE0;\n\t\tMSG0 = vsha256su0q_u32(MSG0, MSG1);\n\t\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\t\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\t\tMSG0 = vsha256su1q_u32(MSG0, MSG2, MSG3);\n\n\t\t// Rounds 21-24\n\t\tTMP0 = vaddq_u32(MSG1, vld1q_u32(&K[20]));\n\t\tTMP2 = 
STATE0;\n\t\tMSG1 = vsha256su0q_u32(MSG1, MSG2);\n\t\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\t\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\t\tMSG1 = vsha256su1q_u32(MSG1, MSG3, MSG0);\n\n\t\t// Rounds 25-28\n\t\tTMP0 = vaddq_u32(MSG2, vld1q_u32(&K[24]));\n\t\tTMP2 = STATE0;\n\t\tMSG2 = vsha256su0q_u32(MSG2, MSG3);\n\t\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\t\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\t\tMSG2 = vsha256su1q_u32(MSG2, MSG0, MSG1);\n\n\t\t// Rounds 29-32\n\t\tTMP0 = vaddq_u32(MSG3, vld1q_u32(&K[28]));\n\t\tTMP2 = STATE0;\n\t\tMSG3 = vsha256su0q_u32(MSG3, MSG0);\n\t\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\t\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\t\tMSG3 = vsha256su1q_u32(MSG3, MSG1, MSG2);\n\n\t\t// Rounds 33-36\n\t\tTMP0 = vaddq_u32(MSG0, vld1q_u32(&K[32]));\n\t\tTMP2 = STATE0;\n\t\tMSG0 = vsha256su0q_u32(MSG0, MSG1);\n\t\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\t\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\t\tMSG0 = vsha256su1q_u32(MSG0, MSG2, MSG3);\n\n\t\t// Rounds 37-40\n\t\tTMP0 = vaddq_u32(MSG1, vld1q_u32(&K[36]));\n\t\tTMP2 = STATE0;\n\t\tMSG1 = vsha256su0q_u32(MSG1, MSG2);\n\t\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\t\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\t\tMSG1 = vsha256su1q_u32(MSG1, MSG3, MSG0);\n\n\t\t// Rounds 41-44\n\t\tTMP0 = vaddq_u32(MSG2, vld1q_u32(&K[40]));\n\t\tTMP2 = STATE0;\n\t\tMSG2 = vsha256su0q_u32(MSG2, MSG3);\n\t\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\t\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\t\tMSG2 = vsha256su1q_u32(MSG2, MSG0, MSG1);\n\n\t\t// Rounds 45-48\n\t\tTMP0 = vaddq_u32(MSG3, vld1q_u32(&K[44]));\n\t\tTMP2 = STATE0;\n\t\tMSG3 = vsha256su0q_u32(MSG3, MSG0);\n\t\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\t\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\t\tMSG3 = vsha256su1q_u32(MSG3, MSG1, MSG2);\n\n\t\t// Rounds 49-52\n\t\tTMP0 = vaddq_u32(MSG0, vld1q_u32(&K[48]));\n\t\tTMP2 = STATE0;\n\t\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\t\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\n\t\t// Rounds 53-56\n\t\tTMP0 = vaddq_u32(MSG1, vld1q_u32(&K[52]));\n\t\tTMP2 = STATE0;\n\t\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\t\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\n\t\t// Rounds 57-60\n\t\tTMP0 = vaddq_u32(MSG2, vld1q_u32(&K[56]));\n\t\tTMP2 = STATE0;\n\t\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\t\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\n\t\t// Rounds 61-64\n\t\tTMP0 = vaddq_u32(MSG3, vld1q_u32(&K[60]));\n\t\tTMP2 = STATE0;\n\t\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\t\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\n\t\t// Update state\n\t\tSTATE0 = vaddq_u32(STATE0, ABEF_SAVE);\n\t\tSTATE1 = vaddq_u32(STATE1, CDGH_SAVE);\n\n\t\t// Now we need to process the padding block\n\t\t// For a 64-byte input (32+32), the padding block consists of 0x80 followed by zeros\n\t\t// and the 64-bit length (512 bits)\n\n\t\t// TODO merge with endian fixes\n\t\tuint8_t padding[64] = {0};\n\t\tpadding[0] = 0x80;  // Padding start marker\n\n\t\t// Set the 64-bit length value (512 bits = 0x0200)\n\t\tpadding[56] = 0x00;\n\t\tpadding[57] = 0x00;\n\t\tpadding[58] = 0x00;\n\t\tpadding[59] = 0x00;\n\t\tpadding[60] = 0x00;\n\t\tpadding[61] = 0x00;\n\t\tpadding[62] = 0x02;\n\t\tpadding[63] = 0x00;\n\n\t\t// Save current state\n\t\tABEF_SAVE = STATE0;\n\t\tCDGH_SAVE = STATE1;\n\n\t\t// Load padding block\n\t\tMSG0 = vld1q_u32((const uint32_t *)(padding + 0));\n\t\tMSG1 = vld1q_u32((const uint32_t *)(padding + 16));\n\t\tMSG2 = 
vld1q_u32((const uint32_t *)(padding + 32));\n\t\tMSG3 = vld1q_u32((const uint32_t *)(padding + 48));\n\n\t\tMSG0 = vreinterpretq_u32_u8(vrev32q_u8(vreinterpretq_u8_u32(MSG0)));\n\t\tMSG1 = vreinterpretq_u32_u8(vrev32q_u8(vreinterpretq_u8_u32(MSG1)));\n\t\tMSG2 = vreinterpretq_u32_u8(vrev32q_u8(vreinterpretq_u8_u32(MSG2)));\n\t\tMSG3 = vreinterpretq_u32_u8(vrev32q_u8(vreinterpretq_u8_u32(MSG3)));\n\n\t\t// Rounds 1-4\n\t\tTMP0 = vaddq_u32(MSG0, vld1q_u32(&K[0]));\n\t\tTMP2 = STATE0;\n\t\tMSG0 = vsha256su0q_u32(MSG0, MSG1);\n\t\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\t\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\t\tMSG0 = vsha256su1q_u32(MSG0, MSG2, MSG3);\n\n\t\t// Rounds 5-8\n\t\tTMP0 = vaddq_u32(MSG1, vld1q_u32(&K[4]));\n\t\tTMP2 = STATE0;\n\t\tMSG1 = vsha256su0q_u32(MSG1, MSG2);\n\t\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\t\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\t\tMSG1 = vsha256su1q_u32(MSG1, MSG3, MSG0);\n\n\t\t// Rounds 9-12\n\t\tTMP0 = vaddq_u32(MSG2, vld1q_u32(&K[8]));\n\t\tTMP2 = STATE0;\n\t\tMSG2 = vsha256su0q_u32(MSG2, MSG3);\n\t\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\t\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\t\tMSG2 = vsha256su1q_u32(MSG2, MSG0, MSG1);\n\n\t\t// Rounds 13-16\n\t\tTMP0 = vaddq_u32(MSG3, vld1q_u32(&K[12]));\n\t\tTMP2 = STATE0;\n\t\tMSG3 = vsha256su0q_u32(MSG3, MSG0);\n\t\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\t\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\t\tMSG3 = vsha256su1q_u32(MSG3, MSG1, MSG2);\n\n\t\t// Rounds 17-20\n\t\tTMP0 = vaddq_u32(MSG0, vld1q_u32(&K[16]));\n\t\tTMP2 = STATE0;\n\t\tMSG0 = vsha256su0q_u32(MSG0, MSG1);\n\t\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\t\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\t\tMSG0 = vsha256su1q_u32(MSG0, MSG2, MSG3);\n\n\t\t// Rounds 21-24\n\t\tTMP0 = vaddq_u32(MSG1, vld1q_u32(&K[20]));\n\t\tTMP2 = STATE0;\n\t\tMSG1 = vsha256su0q_u32(MSG1, MSG2);\n\t\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\t\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\t\tMSG1 = vsha256su1q_u32(MSG1, MSG3, MSG0);\n\n\t\t// Rounds 25-28\n\t\tTMP0 = vaddq_u32(MSG2, vld1q_u32(&K[24]));\n\t\tTMP2 = STATE0;\n\t\tMSG2 = vsha256su0q_u32(MSG2, MSG3);\n\t\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\t\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\t\tMSG2 = vsha256su1q_u32(MSG2, MSG0, MSG1);\n\n\t\t// Rounds 29-32\n\t\tTMP0 = vaddq_u32(MSG3, vld1q_u32(&K[28]));\n\t\tTMP2 = STATE0;\n\t\tMSG3 = vsha256su0q_u32(MSG3, MSG0);\n\t\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\t\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\t\tMSG3 = vsha256su1q_u32(MSG3, MSG1, MSG2);\n\n\t\t// Rounds 33-36\n\t\tTMP0 = vaddq_u32(MSG0, vld1q_u32(&K[32]));\n\t\tTMP2 = STATE0;\n\t\tMSG0 = vsha256su0q_u32(MSG0, MSG1);\n\t\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\t\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\t\tMSG0 = vsha256su1q_u32(MSG0, MSG2, MSG3);\n\n\t\t// Rounds 37-40\n\t\tTMP0 = vaddq_u32(MSG1, vld1q_u32(&K[36]));\n\t\tTMP2 = STATE0;\n\t\tMSG1 = vsha256su0q_u32(MSG1, MSG2);\n\t\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\t\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\t\tMSG1 = vsha256su1q_u32(MSG1, MSG3, MSG0);\n\n\t\t// Rounds 41-44\n\t\tTMP0 = vaddq_u32(MSG2, vld1q_u32(&K[40]));\n\t\tTMP2 = STATE0;\n\t\tMSG2 = vsha256su0q_u32(MSG2, MSG3);\n\t\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\t\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\t\tMSG2 = vsha256su1q_u32(MSG2, MSG0, MSG1);\n\n\t\t// Rounds 45-48\n\t\tTMP0 = vaddq_u32(MSG3, vld1q_u32(&K[44]));\n\t\tTMP2 = 
STATE0;\n\t\tMSG3 = vsha256su0q_u32(MSG3, MSG0);\n\t\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\t\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\t\tMSG3 = vsha256su1q_u32(MSG3, MSG1, MSG2);\n\n\t\t// Rounds 49-52\n\t\tTMP0 = vaddq_u32(MSG0, vld1q_u32(&K[48]));\n\t\tTMP2 = STATE0;\n\t\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\t\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\n\t\t// Rounds 53-56\n\t\tTMP0 = vaddq_u32(MSG1, vld1q_u32(&K[52]));\n\t\tTMP2 = STATE0;\n\t\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\t\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\n\t\t// Rounds 57-60\n\t\tTMP0 = vaddq_u32(MSG2, vld1q_u32(&K[56]));\n\t\tTMP2 = STATE0;\n\t\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\t\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\n\t\t// Rounds 61-64\n\t\tTMP0 = vaddq_u32(MSG3, vld1q_u32(&K[60]));\n\t\tTMP2 = STATE0;\n\t\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\t\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\n\t\t// Update state\n\t\tMSG2 = vaddq_u32(STATE0, ABEF_SAVE);\n\t\tMSG3 = vaddq_u32(STATE1, CDGH_SAVE);\n\t}\n\n\tvst1q_u32((uint32_t *)tempOut,      MSG2);\n\tvst1q_u32((uint32_t *)(tempOut+16), MSG3);\n}\n\nvoid sha2_p2_32_32_norm_rev (unsigned char *output,\n                             const unsigned char *input1,\n                             const unsigned char *input2)\n{\n\tuint32_t state[8] = {\n\t\t0x6a09e667, 0xbb67ae85, 0x3c6ef372, 0xa54ff53a,\n\t\t0x510e527f, 0x9b05688c, 0x1f83d9ab, 0x5be0cd19\n\t};\n\n\tuint32x4_t STATE0, STATE1, ABEF_SAVE, CDGH_SAVE;\n\tuint32x4_t MSG0, MSG1, MSG2, MSG3;\n\tuint32x4_t TMP0, TMP2;\n\n\t// Load state\n\tSTATE0 = vld1q_u32(&state[0]);\n\tSTATE1 = vld1q_u32(&state[4]);\n\n\t// Save current state\n\tABEF_SAVE = STATE0;\n\tCDGH_SAVE = STATE1;\n\n\t// Load input1 (32 bytes) and input2 (32 bytes) into two message blocks\n\t// These constitute our 64-byte block\n\tMSG0 = vld1q_u32((const uint32_t *)(input1 + 0));\n\tMSG1 = vld1q_u32((const uint32_t *)(input1 + 16));\n\tMSG2 = vld1q_u32((const uint32_t *)(input2 + 0));\n\tMSG3 = vld1q_u32((const uint32_t *)(input2 + 16));\n\n\t// Adjust endianness\n\tMSG0 = vreinterpretq_u32_u8(vrev32q_u8(vreinterpretq_u8_u32(MSG0)));\n\tMSG1 = vreinterpretq_u32_u8(vrev32q_u8(vreinterpretq_u8_u32(MSG1)));\n\n\t// Rounds 1-4\n\tTMP0 = vaddq_u32(MSG0, vld1q_u32(&K[0]));\n\tTMP2 = STATE0;\n\tMSG0 = vsha256su0q_u32(MSG0, MSG1);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG0 = vsha256su1q_u32(MSG0, MSG2, MSG3);\n\n\t// Rounds 5-8\n\tTMP0 = vaddq_u32(MSG1, vld1q_u32(&K[4]));\n\tTMP2 = STATE0;\n\tMSG1 = vsha256su0q_u32(MSG1, MSG2);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG1 = vsha256su1q_u32(MSG1, MSG3, MSG0);\n\n\t// Rounds 9-12\n\tTMP0 = vaddq_u32(MSG2, vld1q_u32(&K[8]));\n\tTMP2 = STATE0;\n\tMSG2 = vsha256su0q_u32(MSG2, MSG3);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG2 = vsha256su1q_u32(MSG2, MSG0, MSG1);\n\n\t// Rounds 13-16\n\tTMP0 = vaddq_u32(MSG3, vld1q_u32(&K[12]));\n\tTMP2 = STATE0;\n\tMSG3 = vsha256su0q_u32(MSG3, MSG0);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG3 = vsha256su1q_u32(MSG3, MSG1, MSG2);\n\n\t// Rounds 17-20\n\tTMP0 = vaddq_u32(MSG0, vld1q_u32(&K[16]));\n\tTMP2 = STATE0;\n\tMSG0 = vsha256su0q_u32(MSG0, MSG1);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG0 = 
vsha256su1q_u32(MSG0, MSG2, MSG3);\n\n\t// Rounds 21-24\n\tTMP0 = vaddq_u32(MSG1, vld1q_u32(&K[20]));\n\tTMP2 = STATE0;\n\tMSG1 = vsha256su0q_u32(MSG1, MSG2);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG1 = vsha256su1q_u32(MSG1, MSG3, MSG0);\n\n\t// Rounds 25-28\n\tTMP0 = vaddq_u32(MSG2, vld1q_u32(&K[24]));\n\tTMP2 = STATE0;\n\tMSG2 = vsha256su0q_u32(MSG2, MSG3);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG2 = vsha256su1q_u32(MSG2, MSG0, MSG1);\n\n\t// Rounds 29-32\n\tTMP0 = vaddq_u32(MSG3, vld1q_u32(&K[28]));\n\tTMP2 = STATE0;\n\tMSG3 = vsha256su0q_u32(MSG3, MSG0);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG3 = vsha256su1q_u32(MSG3, MSG1, MSG2);\n\n\t// Rounds 33-36\n\tTMP0 = vaddq_u32(MSG0, vld1q_u32(&K[32]));\n\tTMP2 = STATE0;\n\tMSG0 = vsha256su0q_u32(MSG0, MSG1);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG0 = vsha256su1q_u32(MSG0, MSG2, MSG3);\n\n\t// Rounds 37-40\n\tTMP0 = vaddq_u32(MSG1, vld1q_u32(&K[36]));\n\tTMP2 = STATE0;\n\tMSG1 = vsha256su0q_u32(MSG1, MSG2);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG1 = vsha256su1q_u32(MSG1, MSG3, MSG0);\n\n\t// Rounds 41-44\n\tTMP0 = vaddq_u32(MSG2, vld1q_u32(&K[40]));\n\tTMP2 = STATE0;\n\tMSG2 = vsha256su0q_u32(MSG2, MSG3);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG2 = vsha256su1q_u32(MSG2, MSG0, MSG1);\n\n\t// Rounds 45-48\n\tTMP0 = vaddq_u32(MSG3, vld1q_u32(&K[44]));\n\tTMP2 = STATE0;\n\tMSG3 = vsha256su0q_u32(MSG3, MSG0);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG3 = vsha256su1q_u32(MSG3, MSG1, MSG2);\n\n\t// Rounds 49-52\n\tTMP0 = vaddq_u32(MSG0, vld1q_u32(&K[48]));\n\tTMP2 = STATE0;\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\n\t// Rounds 53-56\n\tTMP0 = vaddq_u32(MSG1, vld1q_u32(&K[52]));\n\tTMP2 = STATE0;\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\n\t// Rounds 57-60\n\tTMP0 = vaddq_u32(MSG2, vld1q_u32(&K[56]));\n\tTMP2 = STATE0;\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\n\t// Rounds 61-64\n\tTMP0 = vaddq_u32(MSG3, vld1q_u32(&K[60]));\n\tTMP2 = STATE0;\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\n\t// Update state\n\tSTATE0 = vaddq_u32(STATE0, ABEF_SAVE);\n\tSTATE1 = vaddq_u32(STATE1, CDGH_SAVE);\n\n\t// Now we need to process the padding block\n\t// For a 64-byte input (32+32), the padding block consists of 0x80 followed by zeros\n\t// and the 64-bit length (512 bits)\n\n\t// TODO merge with endian fixes\n\tuint8_t padding[64] = {0};\n\tpadding[0] = 0x80;  // Padding start marker\n\n\t// Set the 64-bit length value (512 bits = 0x0200)\n\tpadding[56] = 0x00;\n\tpadding[57] = 0x00;\n\tpadding[58] = 0x00;\n\tpadding[59] = 0x00;\n\tpadding[60] = 0x00;\n\tpadding[61] = 0x00;\n\tpadding[62] = 0x02;\n\tpadding[63] = 0x00;\n\n\t// Save current state\n\tABEF_SAVE = STATE0;\n\tCDGH_SAVE = STATE1;\n\n\t// Load padding block\n\tMSG0 = vld1q_u32((const uint32_t *)(padding + 0));\n\tMSG1 = vld1q_u32((const uint32_t *)(padding + 16));\n\tMSG2 = vld1q_u32((const uint32_t *)(padding + 32));\n\tMSG3 = vld1q_u32((const 
uint32_t *)(padding + 48));\n\n\tMSG0 = vreinterpretq_u32_u8(vrev32q_u8(vreinterpretq_u8_u32(MSG0)));\n\tMSG1 = vreinterpretq_u32_u8(vrev32q_u8(vreinterpretq_u8_u32(MSG1)));\n\tMSG2 = vreinterpretq_u32_u8(vrev32q_u8(vreinterpretq_u8_u32(MSG2)));\n\tMSG3 = vreinterpretq_u32_u8(vrev32q_u8(vreinterpretq_u8_u32(MSG3)));\n\n\t// Rounds 1-4\n\tTMP0 = vaddq_u32(MSG0, vld1q_u32(&K[0]));\n\tTMP2 = STATE0;\n\tMSG0 = vsha256su0q_u32(MSG0, MSG1);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG0 = vsha256su1q_u32(MSG0, MSG2, MSG3);\n\n\t// Rounds 5-8\n\tTMP0 = vaddq_u32(MSG1, vld1q_u32(&K[4]));\n\tTMP2 = STATE0;\n\tMSG1 = vsha256su0q_u32(MSG1, MSG2);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG1 = vsha256su1q_u32(MSG1, MSG3, MSG0);\n\n\t// Rounds 9-12\n\tTMP0 = vaddq_u32(MSG2, vld1q_u32(&K[8]));\n\tTMP2 = STATE0;\n\tMSG2 = vsha256su0q_u32(MSG2, MSG3);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG2 = vsha256su1q_u32(MSG2, MSG0, MSG1);\n\n\t// Rounds 13-16\n\tTMP0 = vaddq_u32(MSG3, vld1q_u32(&K[12]));\n\tTMP2 = STATE0;\n\tMSG3 = vsha256su0q_u32(MSG3, MSG0);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG3 = vsha256su1q_u32(MSG3, MSG1, MSG2);\n\n\t// Rounds 17-20\n\tTMP0 = vaddq_u32(MSG0, vld1q_u32(&K[16]));\n\tTMP2 = STATE0;\n\tMSG0 = vsha256su0q_u32(MSG0, MSG1);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG0 = vsha256su1q_u32(MSG0, MSG2, MSG3);\n\n\t// Rounds 21-24\n\tTMP0 = vaddq_u32(MSG1, vld1q_u32(&K[20]));\n\tTMP2 = STATE0;\n\tMSG1 = vsha256su0q_u32(MSG1, MSG2);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG1 = vsha256su1q_u32(MSG1, MSG3, MSG0);\n\n\t// Rounds 25-28\n\tTMP0 = vaddq_u32(MSG2, vld1q_u32(&K[24]));\n\tTMP2 = STATE0;\n\tMSG2 = vsha256su0q_u32(MSG2, MSG3);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG2 = vsha256su1q_u32(MSG2, MSG0, MSG1);\n\n\t// Rounds 29-32\n\tTMP0 = vaddq_u32(MSG3, vld1q_u32(&K[28]));\n\tTMP2 = STATE0;\n\tMSG3 = vsha256su0q_u32(MSG3, MSG0);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG3 = vsha256su1q_u32(MSG3, MSG1, MSG2);\n\n\t// Rounds 33-36\n\tTMP0 = vaddq_u32(MSG0, vld1q_u32(&K[32]));\n\tTMP2 = STATE0;\n\tMSG0 = vsha256su0q_u32(MSG0, MSG1);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG0 = vsha256su1q_u32(MSG0, MSG2, MSG3);\n\n\t// Rounds 37-40\n\tTMP0 = vaddq_u32(MSG1, vld1q_u32(&K[36]));\n\tTMP2 = STATE0;\n\tMSG1 = vsha256su0q_u32(MSG1, MSG2);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG1 = vsha256su1q_u32(MSG1, MSG3, MSG0);\n\n\t// Rounds 41-44\n\tTMP0 = vaddq_u32(MSG2, vld1q_u32(&K[40]));\n\tTMP2 = STATE0;\n\tMSG2 = vsha256su0q_u32(MSG2, MSG3);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG2 = vsha256su1q_u32(MSG2, MSG0, MSG1);\n\n\t// Rounds 45-48\n\tTMP0 = vaddq_u32(MSG3, vld1q_u32(&K[44]));\n\tTMP2 = STATE0;\n\tMSG3 = vsha256su0q_u32(MSG3, MSG0);\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\tMSG3 = vsha256su1q_u32(MSG3, MSG1, MSG2);\n\n\t// Rounds 49-52\n\tTMP0 = vaddq_u32(MSG0, 
vld1q_u32(&K[48]));\n\tTMP2 = STATE0;\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\n\t// Rounds 53-56\n\tTMP0 = vaddq_u32(MSG1, vld1q_u32(&K[52]));\n\tTMP2 = STATE0;\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\n\t// Rounds 57-60\n\tTMP0 = vaddq_u32(MSG2, vld1q_u32(&K[56]));\n\tTMP2 = STATE0;\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\n\t// Rounds 61-64\n\tTMP0 = vaddq_u32(MSG3, vld1q_u32(&K[60]));\n\tTMP2 = STATE0;\n\tSTATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);\n\tSTATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);\n\n\t// Update state\n\tSTATE0 = vaddq_u32(STATE0, ABEF_SAVE);\n\tSTATE1 = vaddq_u32(STATE1, CDGH_SAVE);\n\n\t/* big-endian, canonical SHA-256 layout */\n\tuint32_t tmp[8];\n\tvst1q_u32(&tmp[0], STATE0);\n\tvst1q_u32(&tmp[4], STATE1);\n\tfor (int i = 0; i < 8; i++) {\n\t\toutput[(i<<2)+0] = (uint8_t)(tmp[i] >> 24);\n\t\toutput[(i<<2)+1] = (uint8_t)(tmp[i] >> 16);\n\t\toutput[(i<<2)+2] = (uint8_t)(tmp[i] >>  8);\n\t\toutput[(i<<2)+3] = (uint8_t)(tmp[i]      );\n\t}\n}\n\nvoid _vdf_sha2_fused_arm(unsigned char* saltBuffer, unsigned char* seed, unsigned char* out, unsigned char* outCheckpoint, int checkpointCount, int skipCheckpointCount, int hashingIterations) {\n\tunsigned char tempOut[VDF_SHA_HASH_SIZE];\n\t// 2 different branches for different optimisation cases\n\tif (skipCheckpointCount == 0) {\n\t\tfor(int checkpointIdx = 0; checkpointIdx <= checkpointCount; checkpointIdx++) {\n\t\t\tunsigned char* locIn  = checkpointIdx == 0               ? seed : (outCheckpoint + VDF_SHA_HASH_SIZE*(checkpointIdx-1));\n\t\t\tunsigned char* locOut = checkpointIdx == checkpointCount ? out  : (outCheckpoint + VDF_SHA_HASH_SIZE*checkpointIdx);\n\n\t\t\tsha2_p2_32_32_rev_norm(tempOut, saltBuffer, locIn);\n\t\t\tsha2_p2_32_32_norm_loop(tempOut, saltBuffer, hashingIterations-2);\n\t\t\tsha2_p2_32_32_norm_rev(locOut, saltBuffer, tempOut);\n\t\t\tlong_add(saltBuffer, 1);\n\t\t}\n\t} else {\n\t\tfor(int checkpointIdx = 0; checkpointIdx <= checkpointCount; checkpointIdx++) {\n\t\t\tunsigned char* locIn  = checkpointIdx == 0               ? seed : (outCheckpoint + VDF_SHA_HASH_SIZE*(checkpointIdx-1));\n\t\t\tunsigned char* locOut = checkpointIdx == checkpointCount ? out  : (outCheckpoint + VDF_SHA_HASH_SIZE*checkpointIdx);\n\n\t\t\tsha2_p2_32_32_rev_norm(tempOut, saltBuffer, locIn);\n\t\t\t// 1 skip on start\n\t\t\tsha2_p2_32_32_norm_loop(tempOut, saltBuffer, hashingIterations-1);\n\t\t\tlong_add(saltBuffer, 1);\n\t\t\tfor(int j = 1; j < skipCheckpointCount; j++) {\n\t\t\t\t// no skips\n\t\t\t\tsha2_p2_32_32_norm_loop(tempOut, saltBuffer, hashingIterations);\n\t\t\t\tlong_add(saltBuffer, 1);\n\t\t\t}\n\t\t\t// 1 skip on end\n\t\t\tsha2_p2_32_32_norm_loop(tempOut, saltBuffer, hashingIterations-1);\n\t\t\tsha2_p2_32_32_norm_rev(locOut, saltBuffer, tempOut);\n\t\t\tlong_add(saltBuffer, 1);\n\t\t}\n\t}\n}\n\nvoid vdf_sha2_fused_arm(unsigned char* saltBuffer, unsigned char* seed, unsigned char* out, unsigned char* outCheckpoint, int checkpointCount, int skipCheckpointCount, int hashingIterations) {\n\tunsigned char saltBufferStack[SALT_SIZE];\n\t// ensure 1 L1 cache page used\n\t// no access to heap, except of 0-iteration\n\tmemcpy(saltBufferStack, saltBuffer, SALT_SIZE);\n\n\t_vdf_sha2_fused_arm(saltBufferStack, seed, out, outCheckpoint, checkpointCount, skipCheckpointCount, hashingIterations);\n}\n\n#endif"
  },
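For reference, the checkpointed salted-iteration loop that both vdf_sha2_fused_arm (above) and vdf_sha2_fused_x86 (below) compute can be sketched in plain C. This is a hypothetical illustration, not part of the repository or build: it assumes SALT_SIZE and VDF_SHA_HASH_SIZE are both 32 bytes (as the fixed 64-byte SHA-256 message in the fused code implies), assumes vdf.h provides those constants and long_add(), substitutes OpenSSL's one-shot SHA256() for the hand-written intrinsics, and, like the internal _vdf_* helpers, advances the salt in place.

```c
/* Hypothetical reference implementation, not part of the build: the
 * checkpointed loop that vdf_sha2_fused_arm/vdf_sha2_fused_x86 compute,
 * written against OpenSSL's one-shot SHA256(). Assumes SALT_SIZE ==
 * VDF_SHA_HASH_SIZE == 32 and long_add() from vdf.h. */
#include <string.h>
#include <openssl/sha.h>
#include "vdf.h"

void vdf_sha2_reference(unsigned char* saltBuffer, unsigned char* seed,
		unsigned char* out, unsigned char* outCheckpoint,
		int checkpointCount, int skipCheckpointCount, int hashingIterations) {
	unsigned char msg[SALT_SIZE + VDF_SHA_HASH_SIZE];
	unsigned char cur[VDF_SHA_HASH_SIZE];
	memcpy(cur, seed, VDF_SHA_HASH_SIZE);
	for (int checkpointIdx = 0; checkpointIdx <= checkpointCount; checkpointIdx++) {
		/* each stored checkpoint spans (skipCheckpointCount + 1) salt values */
		for (int s = 0; s <= skipCheckpointCount; s++) {
			for (int i = 0; i < hashingIterations; i++) {
				memcpy(msg, saltBuffer, SALT_SIZE);
				memcpy(msg + SALT_SIZE, cur, VDF_SHA_HASH_SIZE);
				SHA256(msg, sizeof(msg), cur);	/* cur = SHA-256(salt || cur) */
			}
			long_add(saltBuffer, 1);
		}
		unsigned char* locOut = checkpointIdx == checkpointCount
				? out : (outCheckpoint + VDF_SHA_HASH_SIZE * checkpointIdx);
		memcpy(locOut, cur, VDF_SHA_HASH_SIZE);
	}
}
```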
  {
    "path": "apps/arweave/c_src/vdf/vdf_fused_x86.cpp",
    "content": "#include <cstdint>\n#include <cstring>\n#include \"vdf.h\"\n\n#if defined(__x86_64__) || defined(__i386__)\n\n#include <immintrin.h>\n\n// NOTE spaces here, because tabs are much more difficult for backslash alignment\n#define sha256_compress1(state, data) {                                                             \\\n    __m128i STATE0, STATE1, ABEF_SAVE, CDGH_SAVE;                                                   \\\n    __m128i MSG, TMP, MASK;                                                                         \\\n    __m128i TMSG0, TMSG1, TMSG2, TMSG3;                                                             \\\n                                                                                                    \\\n                                                                                                    \\\n    TMP   = _mm_loadu_si128((const __m128i*) &state[0]);                                            \\\n    STATE1= _mm_loadu_si128((const __m128i*) &state[4]);                                            \\\n                                                                                                    \\\n    MASK  = _mm_set_epi64x(0x0c0d0e0f08090a0bULL, 0x0405060700010203ULL);                           \\\n                                                                                                    \\\n                                                                                                    \\\n    TMP   = _mm_shuffle_epi32(TMP, 0xB1);                                                           \\\n    STATE1= _mm_shuffle_epi32(STATE1, 0x1B);                                                        \\\n    STATE0= _mm_alignr_epi8(TMP, STATE1, 8);                                                        \\\n    STATE1= _mm_blend_epi16(STATE1, TMP, 0xF0);                                                     \\\n                                                                                                    \\\n                                                                                                    \\\n    {                                                                                               \\\n                                                                                                    \\\n        ABEF_SAVE = STATE0;                                                                         \\\n        CDGH_SAVE = STATE1;                                                                         \\\n                                                                                                    \\\n                                                                                                    \\\n        MSG   = _mm_loadu_si128((const __m128i*) (data + 0));                                       \\\n        TMSG0 = _mm_shuffle_epi8(MSG, MASK);                                                        \\\n                                                                                                    \\\n        MSG   = _mm_add_epi32(TMSG0, _mm_set_epi64x(0xE9B5DBA5B5C0FBCFULL,                          \\\n                                                   0x71374491428A2F98ULL));                         \\\n        STATE1= _mm_sha256rnds2_epu32(STATE1, STATE0, MSG);                                         \\\n        MSG   = _mm_shuffle_epi32(MSG, 0x0E);                                                       \\\n        STATE0= _mm_sha256rnds2_epu32(STATE0, STATE1, MSG);                                     
    \\\n                                                                                                    \\\n                                                                                                    \\\n        TMSG1 = _mm_loadu_si128((const __m128i*) (data + 16));                                      \\\n        TMSG1 = _mm_shuffle_epi8(TMSG1, MASK);                                                      \\\n        MSG   = _mm_add_epi32(TMSG1, _mm_set_epi64x(0xAB1C5ED5923F82A4ULL,                          \\\n                                                   0x59F111F13956C25BULL));                         \\\n        STATE1= _mm_sha256rnds2_epu32(STATE1, STATE0, MSG);                                         \\\n        MSG   = _mm_shuffle_epi32(MSG, 0x0E);                                                       \\\n        STATE0= _mm_sha256rnds2_epu32(STATE0, STATE1, MSG);                                         \\\n        TMSG0= _mm_sha256msg1_epu32(TMSG0, TMSG1);                                                  \\\n                                                                                                    \\\n                                                                                                    \\\n        TMSG2 = _mm_loadu_si128((const __m128i*) (data + 32));                                      \\\n        TMSG2 = _mm_shuffle_epi8(TMSG2, MASK);                                                      \\\n        MSG   = _mm_add_epi32(TMSG2, _mm_set_epi64x(0x550C7DC3243185BEULL,                          \\\n                                                   0x12835B01D807AA98ULL));                         \\\n        STATE1= _mm_sha256rnds2_epu32(STATE1, STATE0, MSG);                                         \\\n        MSG   = _mm_shuffle_epi32(MSG, 0x0E);                                                       \\\n        STATE0= _mm_sha256rnds2_epu32(STATE0, STATE1, MSG);                                         \\\n        TMSG1= _mm_sha256msg1_epu32(TMSG1, TMSG2);                                                  \\\n                                                                                                    \\\n                                                                                                    \\\n        TMSG3 = _mm_loadu_si128((const __m128i*) (data + 48));                                      \\\n        TMSG3 = _mm_shuffle_epi8(TMSG3, MASK);                                                      \\\n        MSG   = _mm_add_epi32(TMSG3, _mm_set_epi64x(0xC19BF1749BDC06A7ULL,                          \\\n                                                   0x80DEB1FE72BE5D74ULL));                         \\\n        STATE1= _mm_sha256rnds2_epu32(STATE1, STATE0, MSG);                                         \\\n        TMP   = _mm_alignr_epi8(TMSG3, TMSG2, 4);                                                   \\\n        TMSG0= _mm_add_epi32(TMSG0, TMP);                                                           \\\n        TMSG0= _mm_sha256msg2_epu32(TMSG0, TMSG3);                                                  \\\n        MSG   = _mm_shuffle_epi32(MSG, 0x0E);                                                       \\\n        STATE0= _mm_sha256rnds2_epu32(STATE0, STATE1, MSG);                                         \\\n        TMSG2= _mm_sha256msg1_epu32(TMSG2, TMSG3);                                                  \\\n                                                                                                    \\\n           
                                                                                         \\\n        MSG   = _mm_add_epi32(TMSG0, _mm_set_epi64x(0x240CA1CC0FC19DC6ULL,                          \\\n                                                   0xEFBE4786E49B69C1ULL));                         \\\n        STATE1= _mm_sha256rnds2_epu32(STATE1, STATE0, MSG);                                         \\\n        TMP   = _mm_alignr_epi8(TMSG0, TMSG3, 4);                                                   \\\n        TMSG1= _mm_add_epi32(TMSG1, TMP);                                                           \\\n        TMSG1= _mm_sha256msg2_epu32(TMSG1, TMSG0);                                                  \\\n        MSG   = _mm_shuffle_epi32(MSG, 0x0E);                                                       \\\n        STATE0= _mm_sha256rnds2_epu32(STATE0, STATE1, MSG);                                         \\\n        TMSG3= _mm_sha256msg1_epu32(TMSG3, TMSG0);                                                  \\\n                                                                                                    \\\n                                                                                                    \\\n        MSG   = _mm_add_epi32(TMSG1, _mm_set_epi64x(0x76F988DA5CB0A9DCULL,                          \\\n                                                   0x4A7484AA2DE92C6FULL));                         \\\n        STATE1= _mm_sha256rnds2_epu32(STATE1, STATE0, MSG);                                         \\\n        TMP   = _mm_alignr_epi8(TMSG1, TMSG0, 4);                                                   \\\n        TMSG2= _mm_add_epi32(TMSG2, TMP);                                                           \\\n        TMSG2= _mm_sha256msg2_epu32(TMSG2, TMSG1);                                                  \\\n        MSG   = _mm_shuffle_epi32(MSG, 0x0E);                                                       \\\n        STATE0= _mm_sha256rnds2_epu32(STATE0, STATE1, MSG);                                         \\\n        TMSG0= _mm_sha256msg1_epu32(TMSG0, TMSG1);                                                  \\\n                                                                                                    \\\n                                                                                                    \\\n        MSG   = _mm_add_epi32(TMSG2, _mm_set_epi64x(0xBF597FC7B00327C8ULL,                          \\\n                                                   0xA831C66D983E5152ULL));                         \\\n        STATE1= _mm_sha256rnds2_epu32(STATE1, STATE0, MSG);                                         \\\n        TMP   = _mm_alignr_epi8(TMSG2, TMSG1, 4);                                                   \\\n        TMSG3= _mm_add_epi32(TMSG3, TMP);                                                           \\\n        TMSG3= _mm_sha256msg2_epu32(TMSG3, TMSG2);                                                  \\\n        MSG   = _mm_shuffle_epi32(MSG, 0x0E);                                                       \\\n        STATE0= _mm_sha256rnds2_epu32(STATE0, STATE1, MSG);                                         \\\n        TMSG1= _mm_sha256msg1_epu32(TMSG1, TMSG2);                                                  \\\n                                                                                                    \\\n                                                                                                    \\\n        MSG   = 
_mm_add_epi32(TMSG3, _mm_set_epi64x(0x1429296706CA6351ULL,                          \\\n                                                   0xD5A79147C6E00BF3ULL));                         \\\n        STATE1= _mm_sha256rnds2_epu32(STATE1, STATE0, MSG);                                         \\\n        TMP   = _mm_alignr_epi8(TMSG3, TMSG2, 4);                                                   \\\n        TMSG0= _mm_add_epi32(TMSG0, TMP);                                                           \\\n        TMSG0= _mm_sha256msg2_epu32(TMSG0, TMSG3);                                                  \\\n        MSG   = _mm_shuffle_epi32(MSG, 0x0E);                                                       \\\n        STATE0= _mm_sha256rnds2_epu32(STATE0, STATE1, MSG);                                         \\\n        TMSG2= _mm_sha256msg1_epu32(TMSG2, TMSG3);                                                  \\\n                                                                                                    \\\n                                                                                                    \\\n        MSG   = _mm_add_epi32(TMSG0, _mm_set_epi64x(0x53380D134D2C6DFCULL,                          \\\n                                                   0x2E1B213827B70A85ULL));                         \\\n        STATE1= _mm_sha256rnds2_epu32(STATE1, STATE0, MSG);                                         \\\n        TMP   = _mm_alignr_epi8(TMSG0, TMSG3, 4);                                                   \\\n        TMSG1= _mm_add_epi32(TMSG1, TMP);                                                           \\\n        TMSG1= _mm_sha256msg2_epu32(TMSG1, TMSG0);                                                  \\\n        MSG   = _mm_shuffle_epi32(MSG, 0x0E);                                                       \\\n        STATE0= _mm_sha256rnds2_epu32(STATE0, STATE1, MSG);                                         \\\n        TMSG3= _mm_sha256msg1_epu32(TMSG3, TMSG0);                                                  \\\n                                                                                                    \\\n                                                                                                    \\\n        MSG   = _mm_add_epi32(TMSG1, _mm_set_epi64x(0x92722C8581C2C92EULL,                          \\\n                                                   0x766A0ABB650A7354ULL));                         \\\n        STATE1= _mm_sha256rnds2_epu32(STATE1, STATE0, MSG);                                         \\\n        TMP   = _mm_alignr_epi8(TMSG1, TMSG0, 4);                                                   \\\n        TMSG2= _mm_add_epi32(TMSG2, TMP);                                                           \\\n        TMSG2= _mm_sha256msg2_epu32(TMSG2, TMSG1);                                                  \\\n        MSG   = _mm_shuffle_epi32(MSG, 0x0E);                                                       \\\n        STATE0= _mm_sha256rnds2_epu32(STATE0, STATE1, MSG);                                         \\\n        TMSG0= _mm_sha256msg1_epu32(TMSG0, TMSG1);                                                  \\\n                                                                                                    \\\n                                                                                                    \\\n        MSG   = _mm_add_epi32(TMSG2, _mm_set_epi64x(0xC76C51A3C24B8B70ULL,                          \\\n                                   
                0xA81A664BA2BFE8A1ULL));                         \\\n        STATE1= _mm_sha256rnds2_epu32(STATE1, STATE0, MSG);                                         \\\n        TMP   = _mm_alignr_epi8(TMSG2, TMSG1, 4);                                                   \\\n        TMSG3= _mm_add_epi32(TMSG3, TMP);                                                           \\\n        TMSG3= _mm_sha256msg2_epu32(TMSG3, TMSG2);                                                  \\\n        MSG   = _mm_shuffle_epi32(MSG, 0x0E);                                                       \\\n        STATE0= _mm_sha256rnds2_epu32(STATE0, STATE1, MSG);                                         \\\n        TMSG1= _mm_sha256msg1_epu32(TMSG1, TMSG2);                                                  \\\n                                                                                                    \\\n                                                                                                    \\\n        MSG   = _mm_add_epi32(TMSG3, _mm_set_epi64x(0x106AA070F40E3585ULL,                          \\\n                                                   0xD6990624D192E819ULL));                         \\\n        STATE1= _mm_sha256rnds2_epu32(STATE1, STATE0, MSG);                                         \\\n        TMP   = _mm_alignr_epi8(TMSG3, TMSG2, 4);                                                   \\\n        TMSG0= _mm_add_epi32(TMSG0, TMP);                                                           \\\n        TMSG0= _mm_sha256msg2_epu32(TMSG0, TMSG3);                                                  \\\n        MSG   = _mm_shuffle_epi32(MSG, 0x0E);                                                       \\\n        STATE0= _mm_sha256rnds2_epu32(STATE0, STATE1, MSG);                                         \\\n        TMSG2= _mm_sha256msg1_epu32(TMSG2, TMSG3);                                                  \\\n                                                                                                    \\\n                                                                                                    \\\n        MSG   = _mm_add_epi32(TMSG0, _mm_set_epi64x(0x34B0BCB52748774CULL,                          \\\n                                                   0x1E376C0819A4C116ULL));                         \\\n        STATE1= _mm_sha256rnds2_epu32(STATE1, STATE0, MSG);                                         \\\n        TMP   = _mm_alignr_epi8(TMSG0, TMSG3, 4);                                                   \\\n        TMSG1= _mm_add_epi32(TMSG1, TMP);                                                           \\\n        TMSG1= _mm_sha256msg2_epu32(TMSG1, TMSG0);                                                  \\\n        MSG   = _mm_shuffle_epi32(MSG, 0x0E);                                                       \\\n        STATE0= _mm_sha256rnds2_epu32(STATE0, STATE1, MSG);                                         \\\n        TMSG3= _mm_sha256msg1_epu32(TMSG3, TMSG0);                                                  \\\n                                                                                                    \\\n                                                                                                    \\\n        MSG   = _mm_add_epi32(TMSG1, _mm_set_epi64x(0x682E6FF35B9CCA4FULL,                          \\\n                                                   0x4ED8AA4A391C0CB3ULL));                         \\\n        STATE1= _mm_sha256rnds2_epu32(STATE1, STATE0, 
MSG);                                         \\\n        TMP   = _mm_alignr_epi8(TMSG1, TMSG0, 4);                                                   \\\n        TMSG2= _mm_add_epi32(TMSG2, TMP);                                                           \\\n        TMSG2= _mm_sha256msg2_epu32(TMSG2, TMSG1);                                                  \\\n        MSG   = _mm_shuffle_epi32(MSG, 0x0E);                                                       \\\n        STATE0= _mm_sha256rnds2_epu32(STATE0, STATE1, MSG);                                         \\\n                                                                                                    \\\n                                                                                                    \\\n        MSG   = _mm_add_epi32(TMSG2, _mm_set_epi64x(0x8CC7020884C87814ULL,                          \\\n                                                   0x78A5636F748F82EEULL));                         \\\n        STATE1= _mm_sha256rnds2_epu32(STATE1, STATE0, MSG);                                         \\\n        TMP   = _mm_alignr_epi8(TMSG2, TMSG1, 4);                                                   \\\n        TMSG3= _mm_add_epi32(TMSG3, TMP);                                                           \\\n        TMSG3= _mm_sha256msg2_epu32(TMSG3, TMSG2);                                                  \\\n        MSG   = _mm_shuffle_epi32(MSG, 0x0E);                                                       \\\n        STATE0= _mm_sha256rnds2_epu32(STATE0, STATE1, MSG);                                         \\\n                                                                                                    \\\n                                                                                                    \\\n        MSG   = _mm_add_epi32(TMSG3, _mm_set_epi64x(0xC67178F2BEF9A3F7ULL,                          \\\n                                                   0xA4506CEB90BEFFFAULL));                         \\\n        STATE1= _mm_sha256rnds2_epu32(STATE1, STATE0, MSG);                                         \\\n        MSG   = _mm_shuffle_epi32(MSG, 0x0E);                                                       \\\n        STATE0= _mm_sha256rnds2_epu32(STATE0, STATE1, MSG);                                         \\\n                                                                                                    \\\n                                                                                                    \\\n        STATE0 = _mm_add_epi32(STATE0, ABEF_SAVE);                                                  \\\n        STATE1 = _mm_add_epi32(STATE1, CDGH_SAVE);                                                  \\\n    }                                                                                               \\\n                                                                                                    \\\n                                                                                                    \\\n    TMP    = _mm_shuffle_epi32(STATE0, 0x1B);                                                       \\\n    STATE1 = _mm_shuffle_epi32(STATE1, 0xB1);                                                       \\\n    STATE0 = _mm_blend_epi16(TMP, STATE1, 0xF0);                                                    \\\n    STATE1 = _mm_alignr_epi8(STATE1, TMP, 8);                                                       \\\n                                                                         
                           \\\n                                                                                                    \\\n    _mm_storeu_si128((__m128i*) &state[0], STATE0);                                                 \\\n    _mm_storeu_si128((__m128i*) &state[4], STATE1);                                                 \\\n}\n\n#define sha256_compress1_2(state, data1, data2) {                                                   \\\n    __m128i STATE0, STATE1, ABEF_SAVE, CDGH_SAVE;                                                   \\\n    __m128i MSG, TMP, MASK;                                                                         \\\n    __m128i TMSG0, TMSG1, TMSG2, TMSG3;                                                             \\\n                                                                                                    \\\n                                                                                                    \\\n    TMP   = _mm_loadu_si128((const __m128i*) &state[0]);                                            \\\n    STATE1= _mm_loadu_si128((const __m128i*) &state[4]);                                            \\\n                                                                                                    \\\n    MASK  = _mm_set_epi64x(0x0c0d0e0f08090a0bULL, 0x0405060700010203ULL);                           \\\n                                                                                                    \\\n                                                                                                    \\\n    TMP   = _mm_shuffle_epi32(TMP, 0xB1);                                                           \\\n    STATE1= _mm_shuffle_epi32(STATE1, 0x1B);                                                        \\\n    STATE0= _mm_alignr_epi8(TMP, STATE1, 8);                                                        \\\n    STATE1= _mm_blend_epi16(STATE1, TMP, 0xF0);                                                     \\\n                                                                                                    \\\n                                                                                                    \\\n    {                                                                                               \\\n                                                                                                    \\\n        ABEF_SAVE = STATE0;                                                                         \\\n        CDGH_SAVE = STATE1;                                                                         \\\n                                                                                                    \\\n                                                                                                    \\\n        MSG   = _mm_loadu_si128((const __m128i*) (data1 + 0));                                      \\\n        TMSG0 = _mm_shuffle_epi8(MSG, MASK);                                                        \\\n                                                                                                    \\\n        MSG   = _mm_add_epi32(TMSG0, _mm_set_epi64x(0xE9B5DBA5B5C0FBCFULL,                          \\\n                                                   0x71374491428A2F98ULL));                         \\\n        STATE1= _mm_sha256rnds2_epu32(STATE1, STATE0, MSG);                                         \\\n        MSG   = _mm_shuffle_epi32(MSG, 0x0E);                                          
             \\\n        STATE0= _mm_sha256rnds2_epu32(STATE0, STATE1, MSG);                                         \\\n                                                                                                    \\\n                                                                                                    \\\n        TMSG1 = _mm_loadu_si128((const __m128i*) (data1 + 16));                                     \\\n        TMSG1 = _mm_shuffle_epi8(TMSG1, MASK);                                                      \\\n        MSG   = _mm_add_epi32(TMSG1, _mm_set_epi64x(0xAB1C5ED5923F82A4ULL,                          \\\n                                                   0x59F111F13956C25BULL));                         \\\n        STATE1= _mm_sha256rnds2_epu32(STATE1, STATE0, MSG);                                         \\\n        MSG   = _mm_shuffle_epi32(MSG, 0x0E);                                                       \\\n        STATE0= _mm_sha256rnds2_epu32(STATE0, STATE1, MSG);                                         \\\n        TMSG0= _mm_sha256msg1_epu32(TMSG0, TMSG1);                                                  \\\n                                                                                                    \\\n                                                                                                    \\\n        TMSG2 = _mm_loadu_si128((const __m128i*) (data2 + 0));                                      \\\n        TMSG2 = _mm_shuffle_epi8(TMSG2, MASK);                                                      \\\n        MSG   = _mm_add_epi32(TMSG2, _mm_set_epi64x(0x550C7DC3243185BEULL,                          \\\n                                                   0x12835B01D807AA98ULL));                         \\\n        STATE1= _mm_sha256rnds2_epu32(STATE1, STATE0, MSG);                                         \\\n        MSG   = _mm_shuffle_epi32(MSG, 0x0E);                                                       \\\n        STATE0= _mm_sha256rnds2_epu32(STATE0, STATE1, MSG);                                         \\\n        TMSG1= _mm_sha256msg1_epu32(TMSG1, TMSG2);                                                  \\\n                                                                                                    \\\n                                                                                                    \\\n        TMSG3 = _mm_loadu_si128((const __m128i*) (data2 + 16));                                     \\\n        TMSG3 = _mm_shuffle_epi8(TMSG3, MASK);                                                      \\\n        MSG   = _mm_add_epi32(TMSG3, _mm_set_epi64x(0xC19BF1749BDC06A7ULL,                          \\\n                                                   0x80DEB1FE72BE5D74ULL));                         \\\n        STATE1= _mm_sha256rnds2_epu32(STATE1, STATE0, MSG);                                         \\\n        TMP   = _mm_alignr_epi8(TMSG3, TMSG2, 4);                                                   \\\n        TMSG0= _mm_add_epi32(TMSG0, TMP);                                                           \\\n        TMSG0= _mm_sha256msg2_epu32(TMSG0, TMSG3);                                                  \\\n        MSG   = _mm_shuffle_epi32(MSG, 0x0E);                                                       \\\n        STATE0= _mm_sha256rnds2_epu32(STATE0, STATE1, MSG);                                         \\\n        TMSG2= _mm_sha256msg1_epu32(TMSG2, TMSG3);                                                  \\\n  
                                                                                                  \\\n                                                                                                    \\\n        MSG   = _mm_add_epi32(TMSG0, _mm_set_epi64x(0x240CA1CC0FC19DC6ULL,                          \\\n                                                   0xEFBE4786E49B69C1ULL));                         \\\n        STATE1= _mm_sha256rnds2_epu32(STATE1, STATE0, MSG);                                         \\\n        TMP   = _mm_alignr_epi8(TMSG0, TMSG3, 4);                                                   \\\n        TMSG1= _mm_add_epi32(TMSG1, TMP);                                                           \\\n        TMSG1= _mm_sha256msg2_epu32(TMSG1, TMSG0);                                                  \\\n        MSG   = _mm_shuffle_epi32(MSG, 0x0E);                                                       \\\n        STATE0= _mm_sha256rnds2_epu32(STATE0, STATE1, MSG);                                         \\\n        TMSG3= _mm_sha256msg1_epu32(TMSG3, TMSG0);                                                  \\\n                                                                                                    \\\n                                                                                                    \\\n        MSG   = _mm_add_epi32(TMSG1, _mm_set_epi64x(0x76F988DA5CB0A9DCULL,                          \\\n                                                   0x4A7484AA2DE92C6FULL));                         \\\n        STATE1= _mm_sha256rnds2_epu32(STATE1, STATE0, MSG);                                         \\\n        TMP   = _mm_alignr_epi8(TMSG1, TMSG0, 4);                                                   \\\n        TMSG2= _mm_add_epi32(TMSG2, TMP);                                                           \\\n        TMSG2= _mm_sha256msg2_epu32(TMSG2, TMSG1);                                                  \\\n        MSG   = _mm_shuffle_epi32(MSG, 0x0E);                                                       \\\n        STATE0= _mm_sha256rnds2_epu32(STATE0, STATE1, MSG);                                         \\\n        TMSG0= _mm_sha256msg1_epu32(TMSG0, TMSG1);                                                  \\\n                                                                                                    \\\n                                                                                                    \\\n        MSG   = _mm_add_epi32(TMSG2, _mm_set_epi64x(0xBF597FC7B00327C8ULL,                          \\\n                                                   0xA831C66D983E5152ULL));                         \\\n        STATE1= _mm_sha256rnds2_epu32(STATE1, STATE0, MSG);                                         \\\n        TMP   = _mm_alignr_epi8(TMSG2, TMSG1, 4);                                                   \\\n        TMSG3= _mm_add_epi32(TMSG3, TMP);                                                           \\\n        TMSG3= _mm_sha256msg2_epu32(TMSG3, TMSG2);                                                  \\\n        MSG   = _mm_shuffle_epi32(MSG, 0x0E);                                                       \\\n        STATE0= _mm_sha256rnds2_epu32(STATE0, STATE1, MSG);                                         \\\n        TMSG1= _mm_sha256msg1_epu32(TMSG1, TMSG2);                                                  \\\n                                                                                                    \\\n                     
                                                                               \\\n        MSG   = _mm_add_epi32(TMSG3, _mm_set_epi64x(0x1429296706CA6351ULL,                          \\\n                                                   0xD5A79147C6E00BF3ULL));                         \\\n        STATE1= _mm_sha256rnds2_epu32(STATE1, STATE0, MSG);                                         \\\n        TMP   = _mm_alignr_epi8(TMSG3, TMSG2, 4);                                                   \\\n        TMSG0= _mm_add_epi32(TMSG0, TMP);                                                           \\\n        TMSG0= _mm_sha256msg2_epu32(TMSG0, TMSG3);                                                  \\\n        MSG   = _mm_shuffle_epi32(MSG, 0x0E);                                                       \\\n        STATE0= _mm_sha256rnds2_epu32(STATE0, STATE1, MSG);                                         \\\n        TMSG2= _mm_sha256msg1_epu32(TMSG2, TMSG3);                                                  \\\n                                                                                                    \\\n                                                                                                    \\\n        MSG   = _mm_add_epi32(TMSG0, _mm_set_epi64x(0x53380D134D2C6DFCULL,                          \\\n                                                   0x2E1B213827B70A85ULL));                         \\\n        STATE1= _mm_sha256rnds2_epu32(STATE1, STATE0, MSG);                                         \\\n        TMP   = _mm_alignr_epi8(TMSG0, TMSG3, 4);                                                   \\\n        TMSG1= _mm_add_epi32(TMSG1, TMP);                                                           \\\n        TMSG1= _mm_sha256msg2_epu32(TMSG1, TMSG0);                                                  \\\n        MSG   = _mm_shuffle_epi32(MSG, 0x0E);                                                       \\\n        STATE0= _mm_sha256rnds2_epu32(STATE0, STATE1, MSG);                                         \\\n        TMSG3= _mm_sha256msg1_epu32(TMSG3, TMSG0);                                                  \\\n                                                                                                    \\\n                                                                                                    \\\n        MSG   = _mm_add_epi32(TMSG1, _mm_set_epi64x(0x92722C8581C2C92EULL,                          \\\n                                                   0x766A0ABB650A7354ULL));                         \\\n        STATE1= _mm_sha256rnds2_epu32(STATE1, STATE0, MSG);                                         \\\n        TMP   = _mm_alignr_epi8(TMSG1, TMSG0, 4);                                                   \\\n        TMSG2= _mm_add_epi32(TMSG2, TMP);                                                           \\\n        TMSG2= _mm_sha256msg2_epu32(TMSG2, TMSG1);                                                  \\\n        MSG   = _mm_shuffle_epi32(MSG, 0x0E);                                                       \\\n        STATE0= _mm_sha256rnds2_epu32(STATE0, STATE1, MSG);                                         \\\n        TMSG0= _mm_sha256msg1_epu32(TMSG0, TMSG1);                                                  \\\n                                                                                                    \\\n                                                                                                    \\\n        MSG   = _mm_add_epi32(TMSG2, 
_mm_set_epi64x(0xC76C51A3C24B8B70ULL,                          \\\n                                                   0xA81A664BA2BFE8A1ULL));                         \\\n        STATE1= _mm_sha256rnds2_epu32(STATE1, STATE0, MSG);                                         \\\n        TMP   = _mm_alignr_epi8(TMSG2, TMSG1, 4);                                                   \\\n        TMSG3= _mm_add_epi32(TMSG3, TMP);                                                           \\\n        TMSG3= _mm_sha256msg2_epu32(TMSG3, TMSG2);                                                  \\\n        MSG   = _mm_shuffle_epi32(MSG, 0x0E);                                                       \\\n        STATE0= _mm_sha256rnds2_epu32(STATE0, STATE1, MSG);                                         \\\n        TMSG1= _mm_sha256msg1_epu32(TMSG1, TMSG2);                                                  \\\n                                                                                                    \\\n                                                                                                    \\\n        MSG   = _mm_add_epi32(TMSG3, _mm_set_epi64x(0x106AA070F40E3585ULL,                          \\\n                                                   0xD6990624D192E819ULL));                         \\\n        STATE1= _mm_sha256rnds2_epu32(STATE1, STATE0, MSG);                                         \\\n        TMP   = _mm_alignr_epi8(TMSG3, TMSG2, 4);                                                   \\\n        TMSG0= _mm_add_epi32(TMSG0, TMP);                                                           \\\n        TMSG0= _mm_sha256msg2_epu32(TMSG0, TMSG3);                                                  \\\n        MSG   = _mm_shuffle_epi32(MSG, 0x0E);                                                       \\\n        STATE0= _mm_sha256rnds2_epu32(STATE0, STATE1, MSG);                                         \\\n        TMSG2= _mm_sha256msg1_epu32(TMSG2, TMSG3);                                                  \\\n                                                                                                    \\\n                                                                                                    \\\n        MSG   = _mm_add_epi32(TMSG0, _mm_set_epi64x(0x34B0BCB52748774CULL,                          \\\n                                                   0x1E376C0819A4C116ULL));                         \\\n        STATE1= _mm_sha256rnds2_epu32(STATE1, STATE0, MSG);                                         \\\n        TMP   = _mm_alignr_epi8(TMSG0, TMSG3, 4);                                                   \\\n        TMSG1= _mm_add_epi32(TMSG1, TMP);                                                           \\\n        TMSG1= _mm_sha256msg2_epu32(TMSG1, TMSG0);                                                  \\\n        MSG   = _mm_shuffle_epi32(MSG, 0x0E);                                                       \\\n        STATE0= _mm_sha256rnds2_epu32(STATE0, STATE1, MSG);                                         \\\n        TMSG3= _mm_sha256msg1_epu32(TMSG3, TMSG0);                                                  \\\n                                                                                                    \\\n                                                                                                    \\\n        MSG   = _mm_add_epi32(TMSG1, _mm_set_epi64x(0x682E6FF35B9CCA4FULL,                          \\\n                                                   
0x4ED8AA4A391C0CB3ULL));                         \\\n        STATE1= _mm_sha256rnds2_epu32(STATE1, STATE0, MSG);                                         \\\n        TMP   = _mm_alignr_epi8(TMSG1, TMSG0, 4);                                                   \\\n        TMSG2= _mm_add_epi32(TMSG2, TMP);                                                           \\\n        TMSG2= _mm_sha256msg2_epu32(TMSG2, TMSG1);                                                  \\\n        MSG   = _mm_shuffle_epi32(MSG, 0x0E);                                                       \\\n        STATE0= _mm_sha256rnds2_epu32(STATE0, STATE1, MSG);                                         \\\n                                                                                                    \\\n                                                                                                    \\\n        MSG   = _mm_add_epi32(TMSG2, _mm_set_epi64x(0x8CC7020884C87814ULL,                          \\\n                                                   0x78A5636F748F82EEULL));                         \\\n        STATE1= _mm_sha256rnds2_epu32(STATE1, STATE0, MSG);                                         \\\n        TMP   = _mm_alignr_epi8(TMSG2, TMSG1, 4);                                                   \\\n        TMSG3= _mm_add_epi32(TMSG3, TMP);                                                           \\\n        TMSG3= _mm_sha256msg2_epu32(TMSG3, TMSG2);                                                  \\\n        MSG   = _mm_shuffle_epi32(MSG, 0x0E);                                                       \\\n        STATE0= _mm_sha256rnds2_epu32(STATE0, STATE1, MSG);                                         \\\n                                                                                                    \\\n                                                                                                    \\\n        MSG   = _mm_add_epi32(TMSG3, _mm_set_epi64x(0xC67178F2BEF9A3F7ULL,                          \\\n                                                   0xA4506CEB90BEFFFAULL));                         \\\n        STATE1= _mm_sha256rnds2_epu32(STATE1, STATE0, MSG);                                         \\\n        MSG   = _mm_shuffle_epi32(MSG, 0x0E);                                                       \\\n        STATE0= _mm_sha256rnds2_epu32(STATE0, STATE1, MSG);                                         \\\n                                                                                                    \\\n                                                                                                    \\\n        STATE0 = _mm_add_epi32(STATE0, ABEF_SAVE);                                                  \\\n        STATE1 = _mm_add_epi32(STATE1, CDGH_SAVE);                                                  \\\n    }                                                                                               \\\n                                                                                                    \\\n                                                                                                    \\\n    TMP    = _mm_shuffle_epi32(STATE0, 0x1B);                                                       \\\n    STATE1 = _mm_shuffle_epi32(STATE1, 0xB1);                                                       \\\n    STATE0 = _mm_blend_epi16(TMP, STATE1, 0xF0);                                                    \\\n    STATE1 = _mm_alignr_epi8(STATE1, TMP, 8);                         
                              \\\n                                                                                                    \\\n                                                                                                    \\\n    _mm_storeu_si128((__m128i*) &state[0], STATE0);                                                 \\\n    _mm_storeu_si128((__m128i*) &state[4], STATE1);                                                 \\\n}\n\n\n\n// Optimized sha2_p2_32_32: Computes SHA-256 on the concatenation of two 32-byte inputs.\n#define sha2_p2_32_32(output, input1, input2) {                                                     \\\n                                                                                                    \\\n    uint32_t state[8];                                                                              \\\n                                                                                                    \\\n    state[0] = 0x6a09e667U;                                                                         \\\n    state[1] = 0xbb67ae85U;                                                                         \\\n    state[2] = 0x3c6ef372U;                                                                         \\\n    state[3] = 0xa54ff53aU;                                                                         \\\n    state[4] = 0x510e527fU;                                                                         \\\n    state[5] = 0x9b05688cU;                                                                         \\\n    state[6] = 0x1f83d9abU;                                                                         \\\n    state[7] = 0x5be0cd19U;                                                                         \\\n                                                                                                    \\\n    sha256_compress1_2(state, input1, input2);                                                      \\\n                                                                                                    \\\n    uint8_t block2[64] = { 0 };                                                                     \\\n    block2[0] = 0x80;                                                                               \\\n                                                                                                    \\\n    uint64_t bit_len = 64ULL * 8ULL;                                                                \\\n    block2[56] = (uint8_t)(bit_len >> 56);                                                          \\\n    block2[57] = (uint8_t)(bit_len >> 48);                                                          \\\n    block2[58] = (uint8_t)(bit_len >> 40);                                                          \\\n    block2[59] = (uint8_t)(bit_len >> 32);                                                          \\\n    block2[60] = (uint8_t)(bit_len >> 24);                                                          \\\n    block2[61] = (uint8_t)(bit_len >> 16);                                                          \\\n    block2[62] = (uint8_t)(bit_len >> 8);                                                           \\\n    block2[63] = (uint8_t)(bit_len);                                                                \\\n                                                                                                    \\\n                                                                                              
      \\\n    sha256_compress1(state, block2);                                                                \\\n                                                                                                    \\\n                                                                                                    \\\n    output[ 0] = (uint8_t)(state[0] >> 24);                                                         \\\n    output[ 1] = (uint8_t)(state[0] >> 16);                                                         \\\n    output[ 2] = (uint8_t)(state[0] >> 8);                                                          \\\n    output[ 3] = (uint8_t)(state[0]);                                                               \\\n                                                                                                    \\\n    output[ 4] = (uint8_t)(state[1] >> 24);                                                         \\\n    output[ 5] = (uint8_t)(state[1] >> 16);                                                         \\\n    output[ 6] = (uint8_t)(state[1] >> 8);                                                          \\\n    output[ 7] = (uint8_t)(state[1]);                                                               \\\n                                                                                                    \\\n    output[ 8]  = (uint8_t)(state[2] >> 24);                                                        \\\n    output[ 9]  = (uint8_t)(state[2] >> 16);                                                        \\\n    output[10] = (uint8_t)(state[2] >> 8);                                                          \\\n    output[11] = (uint8_t)(state[2]);                                                               \\\n                                                                                                    \\\n    output[12] = (uint8_t)(state[3] >> 24);                                                         \\\n    output[13] = (uint8_t)(state[3] >> 16);                                                         \\\n    output[14] = (uint8_t)(state[3] >> 8);                                                          \\\n    output[15] = (uint8_t)(state[3]);                                                               \\\n                                                                                                    \\\n    output[16] = (uint8_t)(state[4] >> 24);                                                         \\\n    output[17] = (uint8_t)(state[4] >> 16);                                                         \\\n    output[18] = (uint8_t)(state[4] >> 8);                                                          \\\n    output[19] = (uint8_t)(state[4]);                                                               \\\n                                                                                                    \\\n    output[20] = (uint8_t)(state[5] >> 24);                                                         \\\n    output[21] = (uint8_t)(state[5] >> 16);                                                         \\\n    output[22] = (uint8_t)(state[5] >> 8);                                                          \\\n    output[23] = (uint8_t)(state[5]);                                                               \\\n                                                                                                    \\\n    output[24] = (uint8_t)(state[6] >> 24);                                                         \\\n    
output[25] = (uint8_t)(state[6] >> 16);                                                         \\\n    output[26] = (uint8_t)(state[6] >> 8);                                                          \\\n    output[27] = (uint8_t)(state[6]);                                                               \\\n                                                                                                    \\\n    output[28] = (uint8_t)(state[7] >> 24);                                                         \\\n    output[29] = (uint8_t)(state[7] >> 16);                                                         \\\n    output[30] = (uint8_t)(state[7] >> 8);                                                          \\\n    output[31] = (uint8_t)(state[7]);                                                               \\\n                                                                                                    \\\n}\n\n// TODO make even better impl with ideas from ARM impl\nvoid _vdf_sha2_fused_x86(unsigned char* saltBuffer, unsigned char* seed, unsigned char* out, unsigned char* outCheckpoint, int checkpointCount, int skipCheckpointCount, int hashingIterations) {\n\t// 2 different branches for different optimisation cases\n\tif (skipCheckpointCount == 0) {\n\t\tfor(int checkpointIdx = 0; checkpointIdx <= checkpointCount; checkpointIdx++) {\n\t\t\tunsigned char* locIn  = checkpointIdx == 0               ? seed : (outCheckpoint + VDF_SHA_HASH_SIZE*(checkpointIdx-1));\n\t\t\tunsigned char* locOut = checkpointIdx == checkpointCount ? out  : (outCheckpoint + VDF_SHA_HASH_SIZE*checkpointIdx);\n\t\t\tmemcpy(locOut, locIn, VDF_SHA_HASH_SIZE);\n\n\t\t\tfor(int i = 0; i < hashingIterations; i++) {\n\t\t\t\tsha2_p2_32_32(locOut, saltBuffer, locOut);\n\t\t\t}\n\t\t\tlong_add(saltBuffer, 1);\n\t\t}\n\t} else {\n\t\tfor(int checkpointIdx = 0; checkpointIdx <= checkpointCount; checkpointIdx++) {\n\t\t\tunsigned char* locIn  = checkpointIdx == 0               ? seed : (outCheckpoint + VDF_SHA_HASH_SIZE*(checkpointIdx-1));\n\t\t\tunsigned char* locOut = checkpointIdx == checkpointCount ? out  : (outCheckpoint + VDF_SHA_HASH_SIZE*checkpointIdx);\n\t\t\tmemcpy(locOut, locIn, VDF_SHA_HASH_SIZE);\n\n\t\t\t// 1 skip on start\n\t\t\tfor(int i = 0; i < hashingIterations; i++) {\n\t\t\t\tsha2_p2_32_32(locOut, saltBuffer, locOut);\n\t\t\t}\n\t\t\tlong_add(saltBuffer, 1);\n\t\t\tfor(int j = 1; j < skipCheckpointCount; j++) {\n\t\t\t\t// no skips\n\t\t\t\tfor(int i = 0; i < hashingIterations; i++) {\n\t\t\t\t\tsha2_p2_32_32(locOut, saltBuffer, locOut);\n\t\t\t\t}\n\t\t\t\tlong_add(saltBuffer, 1);\n\t\t\t}\n\t\t\t// 1 skip on end\n\t\t\tfor(int i = 0; i < hashingIterations; i++) {\n\t\t\t\tsha2_p2_32_32(locOut, saltBuffer, locOut);\n\t\t\t}\n\t\t\tlong_add(saltBuffer, 1);\n\t\t}\n\t}\n}\n\nvoid vdf_sha2_fused_x86(unsigned char* saltBuffer, unsigned char* seed, unsigned char* out, unsigned char* outCheckpoint, int checkpointCount, int skipCheckpointCount, int hashingIterations) {\n\tunsigned char saltBufferStack[SALT_SIZE];\n\t// ensure 1 L1 cache page used\n\t// no access to heap, except of 0-iteration\n\tmemcpy(saltBufferStack, saltBuffer, SALT_SIZE);\n\n\t_vdf_sha2_fused_x86(saltBufferStack, seed, out, outCheckpoint, checkpointCount, skipCheckpointCount, hashingIterations);\n}\n\n#endif\n"
  },
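A possible call pattern for vdf_sha2_fused_x86, mirroring the usage note kept in vdf_hiopt_arm.cpp below. The wrapper name example_vdf_call and the choice of zero skipped checkpoints are placeholders, and it assumes vdf.h declares the function and the size constants.

```c
/* Hypothetical call pattern for vdf_sha2_fused_x86; mirrors the usage note
 * in vdf_hiopt_arm.cpp. Assumes vdf.h declares the prototype and constants. */
#include <stdlib.h>
#include "vdf.h"

void example_vdf_call(unsigned char* salt, unsigned char* seed,
		int checkpointCount, int hashingIterations) {
	unsigned char out[VDF_SHA_HASH_SIZE];			/* final VDF output */
	unsigned char* outCheckpoint =				/* intermediate checkpoints */
			(unsigned char*)malloc(checkpointCount * VDF_SHA_HASH_SIZE);

	/* skipCheckpointCount = 0: every checkpoint is stored */
	vdf_sha2_fused_x86(salt, seed, out, outCheckpoint,
			checkpointCount, 0, hashingIterations);

	free(outCheckpoint);
}
```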
  {
    "path": "apps/arweave/c_src/vdf/vdf_hiopt_arm.cpp",
    "content": "#include <cstdint>\n#include <cstring>\n#include \"vdf.h\"\n\n#if defined(__aarch64__) || defined(__arm__)\n\nextern \"C\" {\n\tvoid sha256_block_vdf_order(unsigned int* cth, const void* in, size_t diff_num);\n\tvoid reverse_endianness_asm(const uint32_t h[8], uint8_t* md) {\n\t\tuint64_t h1, h2;\n\t\t__asm__ volatile (\n\t\t\t\"LDP %[t1], %[t2], [%[in]], #16\\n\\t\"\n\t\t\t\"REV32 %[t1], %[t1]\\n\\t\"\n\t\t\t\"REV32 %[t2], %[t2]\\n\\t\"\n\t\t\t\"STP %[t1], %[t2], [%[out]], #16\\n\\t\"\n\n\t\t\t\"LDP %[t1], %[t2], [%[in]]\\n\\t\"\n\t\t\t\"REV32 %[t1], %[t1]\\n\\t\"\n\t\t\t\"REV32 %[t2], %[t2]\\n\\t\"\n\t\t\t\"STP %[t1], %[t2], [%[out]]\\n\\t\"\n\n\t\t\t//\n\t\t\t: [t1] \"+r\" (h1),\n\t\t\t[t2] \"+r\" (h2),\n\t\t\t[in] \"+r\" (h),\t\n\t\t\t[out] \"+r\" (md)\t\n\t\t\t: \n\t\t\t: \"memory\"\n\t\t\t);\n\t}\n}\n\n//sha256 h0-h7\nunsigned int H07[8] = { 0x6a09e667U,0xbb67ae85U,0x3c6ef372U,0xa54ff53aU,\n\t0x510e527fU,0x9b05688cU,0x1f83d9abU,0x5be0cd19U };\n\n////////////////////////////////////////////////////////////////////////////////////////////////////\n//    SHA\n////////////////////////////////////////////////////////////////////////////////////////////////////\n// NOTE saltBuffer is mutable in progress\nvoid _vdf_sha2_hiopt_arm(unsigned char* saltBuffer, unsigned char* seed, unsigned char* out, unsigned char* outCheckpoint, int checkpointCount, int skipCheckpointCount, int hashingIterations) {\n\t//unsigned char tempOut[VDF_SHA_HASH_SIZE];\n\t// 2 different branches for different optimisation cases\n\n\tunsigned int sha256[8];\n\t//one sha256 block\n\tunsigned char inBuffer[64];\n\n\tif (skipCheckpointCount == 0) {\n\t\tfor (int checkpointIdx = 0; checkpointIdx <= checkpointCount; checkpointIdx++) {\n\t\t\tunsigned char* locIn = checkpointIdx == 0 ? seed : (outCheckpoint + VDF_SHA_HASH_SIZE * (checkpointIdx - 1));\n\t\t\tunsigned char* locOut = checkpointIdx == checkpointCount ? out : (outCheckpoint + VDF_SHA_HASH_SIZE * checkpointIdx);\n\n\t\t\tmemcpy(sha256, H07, 32);\n\t\t\treverse_endianness_asm((uint32_t*)saltBuffer, inBuffer);\n\t\t\treverse_endianness_asm((uint32_t*)locIn, &inBuffer[32]);\n\t\t\t\n\t\t\tsha256_block_vdf_order(sha256, inBuffer, hashingIterations);\n\t\t\treverse_endianness_asm(sha256, locOut);\n\t\t\tlong_add(saltBuffer, 1);\n\t\t}\n\t}\n\telse {\n\t\tfor (int checkpointIdx = 0; checkpointIdx <= checkpointCount; checkpointIdx++) {\n\t\t\tunsigned char* locIn = checkpointIdx == 0 ? seed : (outCheckpoint + VDF_SHA_HASH_SIZE * (checkpointIdx - 1));\n\t\t\tunsigned char* locOut = checkpointIdx == checkpointCount ? 
out : (outCheckpoint + VDF_SHA_HASH_SIZE * checkpointIdx);\n\t\t\t\n\t\t\t// 1 skip on start\n\t\t\tmemcpy(sha256, H07, 32);\n\t\t\treverse_endianness_asm((uint32_t*)saltBuffer, inBuffer);\n\t\t\treverse_endianness_asm((uint32_t*)locIn, &inBuffer[32]);\n\t\t\t\n\t\t\tsha256_block_vdf_order(sha256, inBuffer, hashingIterations);\n\t\t\tmemcpy(&inBuffer[32], sha256, 32);\n\t\t\tlong_add(saltBuffer, 1);\n\t\t\treverse_endianness_asm((uint32_t*)saltBuffer, inBuffer);\n\t\t\tfor (int j = 1; j < skipCheckpointCount; j++) {\n\t\t\t\t// no skips\n\t\t\t\tmemcpy(sha256, H07, 32);\n\t\t\t\tsha256_block_vdf_order(sha256, inBuffer, hashingIterations);\n\t\t\t\tmemcpy(&inBuffer[32], sha256, 32);\n\t\t\t\tlong_add(saltBuffer, 1);\n\t\t\t\treverse_endianness_asm((uint32_t*)saltBuffer, inBuffer);\n\t\t\t}\n\t\t\t// 1 skip on end\n\t\t\tmemcpy(sha256, H07, 32);\n\t\t\tsha256_block_vdf_order(sha256, inBuffer, hashingIterations);\n\t\t\treverse_endianness_asm(sha256, locOut);\n\t\t\tlong_add(saltBuffer, 1);\n\t\t}\n\t}\n}\n\n// Typical call pattern:\n//   unsigned char out[VDF_SHA_HASH_SIZE];\n//   unsigned char* outCheckpoint = (unsigned char*)malloc(checkpointCount*VDF_SHA_HASH_SIZE);\n//   vdf_sha2_hiopt_arm(saltBuffer, seed, out, outCheckpoint, ...);\n//   free(outCheckpoint);\nvoid vdf_sha2_hiopt_arm(unsigned char* saltBuffer, unsigned char* seed, unsigned char* out, unsigned char* outCheckpoint, int checkpointCount, int skipCheckpointCount, int hashingIterations) {\n\tunsigned char saltBufferStack[SALT_SIZE];\n\t// keep the salt in a small stack buffer so the hot loop stays L1-resident;\n\t// the heap-resident salt is only read during this initial copy\n\tmemcpy(saltBufferStack, saltBuffer, SALT_SIZE);\n\n\t_vdf_sha2_hiopt_arm(saltBufferStack, seed, out, outCheckpoint, checkpointCount, skipCheckpointCount, hashingIterations);\n}\n\n\n#endif"
  },
  {
    "path": "apps/arweave/include/ar.hrl",
    "content": "-ifndef(AR_HRL).\n-define(AR_HRL, true).\n\n%%% A collection of record structures used throughout the Arweave server.\n\n%% True if arweave was launched with -setcookie=test\n%% (e.g. bin/test or bin/shell)\n-define(IS_TEST, erlang:get_cookie() == test).\n\n%% Default gen_server:call timeout.\n%% Is used to safely replace deprecated `infinity` timeout, that was used in\n%% multiple places, with a more reasonable value.\n%% Is a subject for future changes.\n-define(DEFAULT_CALL_TIMEOUT, 600000).\n\n%% The mainnet name. Does not change at the hard forks.\n-ifndef(NETWORK_NAME).\n\t-ifdef(AR_TEST).\n\t\t-define(NETWORK_NAME, \"arweave.localtest\").\n\t-else.\n\t\t-define(NETWORK_NAME, \"arweave.N.1\").\n\t-endif.\n-endif.\n\n%% When a request is received without specifing the X-Network header, this network name\n%% is assumed.\n-ifndef(DEFAULT_NETWORK_NAME).\n\t-define(DEFAULT_NETWORK_NAME, \"arweave.N.1\").\n-endif.\n\n%% The current release number of the arweave client software.\n%% @deprecated Not used apart from being included in the /info response.\n-define(CLIENT_VERSION, 5).\n\n%% The current build number -- incremented for every release.\n-define(RELEASE_NUMBER, 91).\n\n-define(DEFAULT_REQUEST_HEADERS,\n\t[\n\t\t{<<\"X-Network\">>, ?NETWORK_NAME},\n\t\t{<<\"X-Version\">>, <<\"8\">>},\n\t\t{<<\"X-Block-Format\">>, <<\"3\">>}\n\t]).\n\n-define(CORS_HEADERS,\n\t#{<<\"access-control-allow-origin\">> => <<\"*\">>}).\n\n-ifdef(FORKS_RESET).\n-define(FORK_1_6, 0).\n-else.\n%%% FORK INDEX\n%%% @deprecated Fork heights from 1.7 on are defined in the ar_fork module.\n-define(FORK_1_6, 95000).\n-endif.\n\n%% The hashing algorithm used to calculate wallet addresses.\n-define(HASH_ALG, sha256).\n\n-define(DEEP_HASH_ALG, sha384).\n\n-define(MERKLE_HASH_ALG, sha384).\n\n-define(RSA_SIGN_ALG, rsa).\n-define(RSA_PRIV_KEY_SZ, 4096).\n\n-define(ECDSA_SIGN_ALG, ecdsa).\n-define(ECDSA_TYPE_BYTE, <<2>>).\n\n-define(EDDSA_SIGN_ALG, eddsa).\n-define(EDDSA_TYPE_BYTE, <<3>>).\n\n%% The default key type used by transactions that do not specify a signature type.\n-define(DEFAULT_KEY_TYPE, {?RSA_SIGN_ALG, 65537}).\n\n-define(RSA_KEY_TYPE, {?RSA_SIGN_ALG, 65537}).\n-define(ECDSA_KEY_TYPE, {?ECDSA_SIGN_ALG, secp256k1}).\n\n-define(RSA_BLOCK_SIG_SIZE, 512).\n-define(ECDSA_PUB_KEY_SIZE, 33).\n-define(ECDSA_SIG_SIZE, 65).\n\n%% The difficulty a new weave is started with.\n-define(DEFAULT_DIFF, 6).\n\n-ifndef(TARGET_BLOCK_TIME).\n-define(TARGET_BLOCK_TIME, 120).\n-endif.\n\n-ifndef(RETARGET_BLOCKS).\n-define(RETARGET_BLOCKS, 10).\n-endif.\n\n%% We only do retarget if the time it took to mine ?RETARGET_BLOCKS is more than\n%% 1.1 times bigger or smaller than ?TARGET_BLOCK_TIME * ?RETARGET_BLOCKS. 
Was used before\n%% the fork 2.5 where we got rid of the floating point calculations.\n-define(RETARGET_TOLERANCE, 0.1).\n\n-define(JOIN_CLOCK_TOLERANCE, 15).\n\n-define(MAX_BLOCK_PROPAGATION_TIME, 60).\n\n-define(CLOCK_DRIFT_MAX, 5).\n\n%% The total supply of tokens in the Genesis block.\n-define(GENESIS_TOKENS, 55000000).\n\n%% Winstons per AR.\n-define(WINSTON_PER_AR, 1000000000000).\n\n%% The number of bytes in a gibibyte.\n-define(KiB, (1024)).\n-define(MiB, (1024 * ?KiB)).\n-define(GiB, (1024 * ?MiB)).\n-define(TiB, (1024 * ?GiB)).\n\n%% How far into the past or future the block can be in order to be accepted for\n%% processing.\n-ifdef(AR_TEST).\n-define(STORE_BLOCKS_BEHIND_CURRENT, 10).\n-else.\n-define(STORE_BLOCKS_BEHIND_CURRENT, 50).\n-endif.\n\n%% The maximum lag when fork recovery (chain reorganisation) is performed.\n-ifdef(AR_TEST).\n-define(CHECKPOINT_DEPTH, 4).\n-else.\n-define(CHECKPOINT_DEPTH, 18).\n-endif.\n\n%% The recommended depth of the block to use as an anchor for transactions.\n%% The corresponding block hash is returned by the GET /tx_anchor endpoint.\n-ifdef(AR_TEST).\n-define(SUGGESTED_TX_ANCHOR_DEPTH, 5).\n-else.\n-define(SUGGESTED_TX_ANCHOR_DEPTH, 6).\n-endif.\n\n%% The number of blocks returned in the /info 'recent' field\n-ifdef(AR_TEST).\n-define(RECENT_BLOCKS_WITHOUT_TIMESTAMP, 2).\n-else.\n-define(RECENT_BLOCKS_WITHOUT_TIMESTAMP, 5).\n-endif.\n\n%% How long to wait before giving up on unit test(s).\n-define(TEST_SUITE_TIMEOUT, 90 * 60). %% 90 minutes\n%% How long to wait before giving up on e2e test(s).\n-define(E2E_TEST_SUITE_TIMEOUT, 6 * 60 * 60). %% 6 hours\n%% Default test timeout to use if a test starts a node. We keep having test failures due to\n%% the timeout elapsing, and I think it may be that sometimes on the runner it just takes a\n%% while to launch a test node.\n-define(TEST_NODE_TIMEOUT, 300). %% 5 minutes\n\n%% The maximum byte size of a single POST body.\n-define(MAX_BODY_SIZE, 15 * ?MiB).\n\n%% The maximum allowed size in bytes for the data field of\n%% a format=1 transaction.\n-define(TX_DATA_SIZE_LIMIT, 10 * ?MiB).\n\n%% The maximum allowed size in bytes for the combined data fields of\n%% the format=1 transactions included in a block. Must be greater than\n%% or equal to ?TX_DATA_SIZE_LIMIT.\n-define(BLOCK_TX_DATA_SIZE_LIMIT, ?TX_DATA_SIZE_LIMIT).\n\n%% The maximum number of transactions (both format=1 and format=2) in a block.\n-ifdef(AR_TEST).\n-define(BLOCK_TX_COUNT_LIMIT, 10).\n-else.\n-define(BLOCK_TX_COUNT_LIMIT, 1000).\n-endif.\n\n%% The base transaction size the transaction fee must pay for.\n-define(TX_SIZE_BASE, 3210).\n\n%% Mempool Limits.\n%%\n%% The reason we have two different mempool limits has to do with the way\n%% format=2 transactions are distributed. To achieve faster consensus and\n%% reduce the network congestion, the miner does not gossip data of format=2\n%% transactions, but serves it later to those who request it after the\n%% corresponding transaction is included into a block. A single mempool limit\n%% would therefore be reached much quicker by a peer accepting format=2\n%% transactions with data. This would prevent this miner from accepting any\n%% further transactions. 
Having a separate limit for data allows the miner\n%% to continue accepting transaction headers.\n\n%% The maximum allowed size of transaction headers stored in mempool.\n%% The data field of a format=1 transaction is considered to belong to\n%% its headers.\n-ifdef(AR_TEST).\n-define(MEMPOOL_HEADER_SIZE_LIMIT, 50 * ?MiB).\n-else.\n-define(MEMPOOL_HEADER_SIZE_LIMIT, 250 * ?MiB).\n-endif.\n\n%% The maximum allowed size of transaction data stored in mempool.\n%% The format=1 transactions are not counted as their data is considered\n%% to be part of the header.\n-ifdef(AR_TEST).\n-define(MEMPOOL_DATA_SIZE_LIMIT, 50 * ?MiB).\n-else.\n-define(MEMPOOL_DATA_SIZE_LIMIT, 500 * ?MiB).\n-endif.\n\n%% Default timeout for establishing an HTTP connection.\n-define(HTTP_REQUEST_CONNECT_TIMEOUT, 10 * 1000).\n\n%% Default timeout used when sending to and receiving from a TCP socket\n%% when making an HTTP request.\n-define(HTTP_REQUEST_SEND_TIMEOUT, 60 * 1000).\n\n%% The time in milliseconds to wait before retrying\n%% a failed join (block index download) attempt.\n-define(REJOIN_TIMEOUT, 10 * 1000).\n\n%% How many times to retry fetching the block index from each of\n%% the peers before giving up.\n-define(REJOIN_RETRIES, 3).\n\n%% Maximum allowed number of accepted requests per minute per IP.\n-ifdef(AR_TEST).\n-define(DEFAULT_REQUESTS_PER_MINUTE_LIMIT, 100_000).\n-else.\n-define(DEFAULT_REQUESTS_PER_MINUTE_LIMIT, 900).\n-endif.\n\n%% Number of seconds an IP address should be completely banned from doing\n%% HTTP requests after posting an invalid block.\n-define(BAD_BLOCK_BAN_TIME, 24 * 60 * 60).\n\n%% A part of transaction propagation delay independent from the size, in seconds.\n-ifdef(AR_TEST).\n-define(BASE_TX_PROPAGATION_DELAY, 0).\n-else.\n-ifndef(BASE_TX_PROPAGATION_DELAY).\n-define(BASE_TX_PROPAGATION_DELAY, 30).\n-endif.\n-endif.\n\n%% A conservative assumption of the network speed used to\n%% estimate the transaction propagation delay. It does not include\n%% the base delay, the time the transaction spends in the priority\n%% queue, and the time it takes to propagate the transaction to peers.\n-ifdef(AR_TEST).\n-define(TX_PROPAGATION_BITS_PER_SECOND, 1000000000).\n-else.\n-define(TX_PROPAGATION_BITS_PER_SECOND, 3000000). 
% 3 mbps\n-endif.\n\n%% The number of peers to send new blocks to in parallel.\n-define(BLOCK_PROPAGATION_PARALLELIZATION, 20).\n\n%% The maximum number of peers to propagate txs to, by default.\n-define(DEFAULT_MAX_PROPAGATION_PEERS, 16).\n\n%% The maximum number of peers to propagate blocks to, by default.\n-define(DEFAULT_MAX_BLOCK_PROPAGATION_PEERS, 1000).\n\n%% When the transaction data size is smaller than this number of bytes,\n%% the transaction is gossiped to the peer without a prior check if the peer\n%% already has this transaction.\n-define(TX_SEND_WITHOUT_ASKING_SIZE_LIMIT, 1000).\n\n%% Block headers directory, relative to the data dir.\n-define(BLOCK_DIR, \"blocks\").\n\n%% Transaction headers directory, relative to the data dir.\n-define(TX_DIR, \"txs\").\n\n%% Disk cache directory, relative to the data dir.\n-define(DISK_CACHE_DIR, \"disk_cache\").\n\n%% Block headers directory, relative to the disk cache directory.\n-define(DISK_CACHE_BLOCK_DIR, \"blocks\").\n\n%% Transaction headers directory, relative to the disk cache directory.\n-define(DISK_CACHE_TX_DIR, \"txs\").\n\n%% Backup block hash list storage directory, relative to the data dir.\n-define(HASH_LIST_DIR, \"hash_lists\").\n\n%% Directory for storing miner wallets, relative to the data dir.\n-define(WALLET_DIR, \"wallets\").\n\n%% Directory for storing unique wallet lists, relative to the data dir.\n-define(WALLET_LIST_DIR, \"wallet_lists\").\n\n%% Directory for storing data chunks, relative to the data dir.\n-define(DATA_CHUNK_DIR, \"data_chunks\").\n\n%% Directory for RocksDB key-value storages, relative to the data dir.\n-define(ROCKS_DB_DIR, \"rocksdb\").\n\n%% Log output directory, NOT relative to the data dir.\n-define(LOG_DIR, \"logs\").\n\n%% The directory for persisted metrics, NOT relative to the data dir.\n-define(METRICS_DIR, \"metrics\").\n\n%% The ID and module for the default storage module.\n-define(DEFAULT_MODULE, \"default\").\n\n%% Default TCP port.\n-define(DEFAULT_HTTP_IFACE_PORT, 1984).\n\n%% Number of transaction propagation processes to spawn.\n%% Each emitter picks the most valued transaction from the queue\n%% and propagates it to the chosen peers.\n%% Can be overriden by a command line argument.\n-define(NUM_EMITTER_PROCESSES, 16).\n\n%% The adjustment of difficutly going from SHA-384 to RandomX.\n-define(RANDOMX_DIFF_ADJUSTMENT, (-14)).\n\n%% Max allowed difficulty multiplication and division factors, before the fork 2.4.\n-define(DIFF_ADJUSTMENT_DOWN_LIMIT, 2).\n-define(DIFF_ADJUSTMENT_UP_LIMIT, 4).\n\n%% Maximum size of a single data chunk, in bytes.\n-define(DATA_CHUNK_SIZE, (256 * 1024)).\n\n%% The maximum allowed packing difficulty.\n-define(MAX_PACKING_DIFFICULTY, 32).\n\n%% The number of sub-chunks in a compositely packed chunk.\n%% The composite packing with the packing difficulty 1 matches approximately the non-composite\n%% 2.6 packing in terms of computational costs.\n-define(COMPOSITE_PACKING_SUB_CHUNK_COUNT, 32).\n\n%% The size of a unit sub-chunk in the compositely packed chunk.\n-define(COMPOSITE_PACKING_SUB_CHUNK_SIZE,\n\t\t(?DATA_CHUNK_SIZE div ?COMPOSITE_PACKING_SUB_CHUNK_COUNT)).\n\n%% The number of RandomX rounds used for a single iteration of packing of a single sub-chunk\n%% during the composite packing.\n-define(COMPOSITE_PACKING_ROUND_COUNT, 10).\n\n%% Maximum size of a `data_path`, in bytes.\n-define(MAX_PATH_SIZE, (256 * 1024)).\n\n%% The size of data chunk hashes, in bytes.\n-define(CHUNK_ID_HASH_SIZE, 32).\n\n-define(NOTE_SIZE, 32).\n\n%% Disk cache size in 
MB\n-ifdef(AR_TEST).\n-define(DISK_CACHE_SIZE, 1).\n-define(DISK_CACHE_CLEAN_PERCENT_MAX, 20).\n-else.\n-define(DISK_CACHE_SIZE, 5120).\n-define(DISK_CACHE_CLEAN_PERCENT_MAX, 20).\n-endif.\n\n%% The speed in chunks/s of moving the fork 2.5 packing threshold.\n-ifdef(AR_TEST).\n-define(PACKING_2_5_THRESHOLD_CHUNKS_PER_SECOND, 1).\n-else.\n-define(PACKING_2_5_THRESHOLD_CHUNKS_PER_SECOND, 10).\n-endif.\n\n%% The data_root of the system \"padding\" nodes inserted in the transaction Merkle trees\n%% since the 2.5 fork block. User transactions cannot set <<>> for data_root unless\n%% data_size == 0. The motivation is to place all chunks including those\n%% smaller than 256 KiB into the 256 KiB buckets on the weave, to even out their chances to be\n%% picked as recall chunks and therefore equally incentivize the storage.\n-define(PADDING_NODE_DATA_ROOT, <<>>).\n\n-ifndef(INITIAL_VDF_DIFFICULTY).\n-define(INITIAL_VDF_DIFFICULTY, 600_000).\n-endif.\n\n%% @doc A chunk with the proofs of its presence in the weave at a particular offset.\n-record(poa, {\n\t%% DEPRECATED. Not used since the fork 2.4.\n\toption = 1,\n\t%% The path through the Merkle tree of transactions' \"data_root\"s.\n\t%% Proves the inclusion of the \"data_root\" in the corresponding \"tx_root\"\n\t%% under the particular offset.\n\ttx_path = <<>>,\n\t%% The path through the Merkle tree of the identifiers of the chunks\n\t%% of the corresponding transaction. Proves the inclusion of the chunk\n\t%% in the corresponding \"data_root\" under a particular offset.\n\tdata_path = <<>>,\n\t%% When packing difficulty is 0 chunk stores a full ?DATA_CHUNK_SIZE-sized packed chunk.\n\t%% When packing difficulty >= 1, chunk stores a ?COMPOSITE_PACKING_SUB_CHUNK_SIZE-sized\n\t%% packed sub-chunk.\n\tchunk = <<>>,\n\t%% When packing difficulty is 0 unpacked_chunk is <<>>.\n\t%% When packing difficulty >= 1, unpacked_chunk stores a full 0-padded\n\t%% ?DATA_CHUNK_SIZE-sized unpacked chunk.\n\tunpacked_chunk = <<>>\n}).\n\n%% @doc The information which simplifies validation of the nonce limiting procedures.\n-record(nonce_limiter_info, {\n\t%% The output of the latest step - the source of the entropy for the mining nonces.\n\toutput = <<>>,\n\t%% The output of the latest step of the previous block.\n\tprev_output = <<>>,\n\t%% The hash of the latest block mined below the current reset line.\n\tseed = <<>>,\n\t%% The hash of the latest block mined below the future reset line.\n\tnext_seed = <<>>,\n\t%% The weave size of the latest block mined below the current reset line.\n\tpartition_upper_bound = 0,\n\t%% The weave size of the latest block mined below the future reset line.\n\tnext_partition_upper_bound = 0,\n\t%% The global sequence number of the nonce limiter step at which the block was found.\n\tglobal_step_number = 1,\n\t%% ?VDF_CHECKPOINT_COUNT_IN_STEP checkpoints from the most recent step in the nonce\n\t%% limiter process.\n\tlast_step_checkpoints = [],\n\t%% A list of the output of each step of the nonce limiting process. Note: each step\n\t%% has ?VDF_CHECKPOINT_COUNT_IN_STEP checkpoints, the last of which is that step's output.\n\tsteps = [],\n\n\t%% The fields added at the fork 2.7\n\n\t%% The number of SHA2-256 iterations in a single VDF checkpoint. The protocol aims to keep the\n\t%% checkpoint calculation time to around 40ms by varying this parameter. 
Note: there are\n\t%% 25 checkpoints in a single VDF step - so the protocol aims to keep the step calculation at\n\t%% 1 second by varying this parameter.\n\tvdf_difficulty = ?INITIAL_VDF_DIFFICULTY,\n\t%% The VDF difficulty scheduled for to be applied after the next VDF reset line.\n\tnext_vdf_difficulty = ?INITIAL_VDF_DIFFICULTY\n}).\n\n%% @doc A VDF session.\n-record(vdf_session, {\n\tstep_number,\n\tseed,\n\tstep_checkpoints_map = #{},\n\tsteps,\n\tprev_session_key,\n\tupper_bound,\n\tnext_upper_bound,\n\tvdf_difficulty,\n\tnext_vdf_difficulty\n}).\n\n%% @doc The format of the nonce limiter update provided by the configured trusted peer.\n-record(nonce_limiter_update, {\n\tsession_key,\n\tsession,\n\tis_partial = true\n}).\n\n%% @doc The format of the response to nonce limiter updates by configured trusted peers.\n-record(nonce_limiter_update_response, {\n\tsession_found = true,\n\tstep_number,\n\tpostpone = 0,\n\tformat = 2\n}).\n\n%% @doc A compact announcement of a new block gossiped to peers. Peers\n%% who have not received this block yet and decide to receive it from us,\n%% should reply with a #block_announcement_response.\n-record(block_announcement, {\n\tindep_hash,\n\tprevious_block,\n\trecall_byte,\n\ttx_prefixes = [], % 8 byte prefixes of transaction identifiers.\n\trecall_byte2,\n\tsolution_hash\n}).\n\n%% @doc A reply to a block announcement when we are willing to receive this\n%% block from the announcing peer.\n-record(block_announcement_response, {\n\tmissing_chunk = false,\n\tmissing_tx_indices = [], % Missing transactions' indices, 0 =<, =< 999.\n\tmissing_chunk2\n}).\n\n%% @doc A block (txs is a list of tx records) or a block shadow (txs is a list of\n%% transaction identifiers).\n-record(block, {\n\t%% The nonce chosen to solve the mining problem.\n\tnonce,\n\t%% `indep_hash` of the previous block in the weave.\n\tprevious_block = <<>>,\n\t%% POSIX time of block discovery.\n\ttimestamp,\n\t%% POSIX time of the last difficulty retarget.\n\tlast_retarget,\n\t%% Mining difficulty, the number `hash` must be greater than.\n\tdiff,\n\theight = 0,\n\t%% Mining solution hash.\n\thash = <<>>,\n\t%% The block identifier.\n\tindep_hash,\n\t%% The list of transaction identifiers or transactions (tx records).\n\ttxs = [],\n\t%% The Merkle root of the tree of Merkle roots of block's transactions' data.\n\ttx_root = <<>>,\n\t%% The Merkle tree of Merkle roots of block's transactions' data. Used internally,\n\t%% not gossiped.\n\ttx_tree = [],\n\t%% Deprecated. Not used, not gossiped.\n\thash_list = unset,\n\t%% The Merkle root of the block index - the list of\n\t%% {`indep_hash`, `weave_size`, `tx_root`} triplets describing the past blocks\n\t%% excluding this one.\n\thash_list_merkle = <<>>,\n\t%% The root hash of the Merkle Patricia Tree containing all wallet (account) balances and\n\t%% the identifiers of the last transactions posted by them, if any\n\twallet_list,\n\t%% The mining address. Before the fork 2.6, either the atom 'unclaimed' or\n\t%% a SHA2-256 hash of the RSA PSS public key. 
In 2.6, 'unclaimed' is not supported.\n    reward_addr = unclaimed,\n\t%% Miner-specified tags (a list of strings) to store with the block.\n    tags = [],\n\t%% The number of Winston in the endowment pool.\n\treward_pool,\n\t%% The total number of bytes whose storage is incentivized.\n\tweave_size,\n\t%% The total number of bytes added to the storage incentivization by this block.\n\tblock_size,\n\t%% The sum of the average number of hashes computed by the network to produce the past\n\t%% blocks including this one.\n\tcumulative_diff,\n\t%% The list of {{`tx_id`, `data_root`}, `offset`} pairs. Used internally, not gossiped.\n\tsize_tagged_txs = unset,\n\t%% The first proof of access.\n\tpoa = #poa{},\n\t%% The estimated USD to AR conversion rate used in the pricing calculations.\n\t%% A tuple {Dividend, Divisor}.\n\t%% Used until the transition to the new fee calculation method is complete.\n\tusd_to_ar_rate,\n\t%% The estimated USD to AR conversion rate scheduled to be used a bit later, used to\n\t%% compute the necessary fee for the currently signed txs. A tuple {Dividend, Divisor}.\n\t%% Used until the transition to the new fee calculation method is complete.\n\tscheduled_usd_to_ar_rate,\n\t%% The offset on the weave separting the data which has to be packed for mining after the\n\t%% fork 2.5 from the data which does not have to be packed yet. It is set to the\n\t%% weave_size of the 50th previous block at the hard fork block and moves down at a speed\n\t%% of ?PACKING_2_5_THRESHOLD_CHUNKS_PER_SECOND chunks/s. The motivation behind the\n\t%% threshold is a smooth transition to the new algorithm - big miners who might not want\n\t%% to adopt the new algorithm are still incentivized to upgrade and stay in the network\n\t%% for some time.\n\tpacking_2_5_threshold,\n\t%% The offset on the weave separating the data which has to be split according to the\n\t%% stricter rules introduced in the fork 2.5 from the historical data. The new rules\n\t%% require all chunk sizes to be 256 KiB excluding the last or the only chunks of the\n\t%% corresponding transactions and the second last chunks of their transactions where they\n\t%% exceed 256 KiB in size when combined with the following (last) chunk. Furthermore, the\n\t%% new chunks may not be smaller than their Merkle proofs unless they are the last chunks.\n\t%% The motivation is to be able to put all chunks into 256 KiB buckets. It makes all\n\t%% chunks equally attractive because they have equal chances of being chosen as recall\n\t%% chunks. Moreover, every chunk costs the same in terms of storage and computation\n\t%% expenditure when packed (smaller chunks are simply padded before packing).\n\tstrict_data_split_threshold,\n\t%% Used internally by tests.\n\taccount_tree,\n\n\t%%\n\t%% The fields below were added at the fork 2.6.\n\t%%\n\n\t%% A part of the solution hash preimage. 
Used for the initial solution validation\n\t%% without a data chunk.\n\thash_preimage = <<>>,\n\t%% The absolute recall offset.\n\trecall_byte,\n\t%% The total amount of winston the miner receives for this block.\n\treward = 0,\n\t%% The solution hash of the previous block.\n\tprevious_solution_hash = <<>>,\n\t%% The sequence number of the mining partition where the block was found.\n\tpartition_number,\n\t%% The nonce limiter information.\n\tnonce_limiter_info = #nonce_limiter_info{},\n\t%% The second proof of access (empty when the solution was found with only one chunk).\n\tpoa2 = #poa{},\n\t%% The absolute second recall offset.\n\trecall_byte2,\n\t%% The block signature.\n\tsignature = <<>>,\n\t%% {KeyType, PubKey} - the public key the block was signed with.\n\t%% The only supported KeyType is currently {rsa, 65537}.\n\treward_key,\n\t%% The estimated number of Winstons it costs the network to store one gibibyte\n\t%% for one minute.\n\tprice_per_gib_minute = 0,\n\t%% The updated estimation of the number of Winstons it costs the network to store\n\t%% one gibibyte for one minute.\n\tscheduled_price_per_gib_minute = 0,\n\t%% The recursive hash of the network hash rates, block rewards, mining addresses,\n\t%% and denominations.\n\t%% Note that the length of the reward history has increased from\n\t%% ?LEGACY_REWARD_HISTORY_BLOCKS to ?REWARD_HISTORY_BLOCKS in 2.8.\n\t%% Before 2.8 every new hash was computed over the latest ?REWARD_HISTORY_BLOCKS.\n\t%% After 2.8 the new hash is computed from the new history element and the previous hash.\n\treward_history_hash,\n\t%% The network hash rates, block rewards, and mining addresses from the latest\n\t%% ?REWARD_HISTORY_BLOCKS + ar_block:get_consensus_window_size() blocks. Used internally, not gossiped.\n\treward_history = [],\n\t%% The total number of Winston emitted when the endowment was not sufficient\n\t%% to compensate mining.\n\tdebt_supply = 0,\n\t%% An additional multiplier for the transaction fees doubled every time the\n\t%% endowment pool becomes empty.\n\tkryder_plus_rate_multiplier = 1,\n\t%% A lock controlling the updates of kryder_plus_rate_multiplier. It is set to 1\n\t%% after the update and back to 0 when the endowment pool is bigger than\n\t%% ?RESET_KRYDER_PLUS_LATCH_THRESHOLD (redenominated according to the denomination\n\t%% used at the time).\n\tkryder_plus_rate_multiplier_latch = 0,\n\t%% The code for the denomination of AR in base units.\n\t%% 1 is the default which corresponds to the original denomination of 10^12 base units.\n\t%% Every time the available supply falls below ?REDENOMINATION_THRESHOLD,\n\t%% the denomination is multiplied by 1000, the code is incremented.\n\t%% Transaction denomination code must not exceed the block's denomination code.\n\tdenomination = 1,\n\t%% The biggest known redenomination height (0 means there were no redenominations yet).\n\tredenomination_height = 0,\n\t%% The proof of signing the same block several times or extending two equal forks.\n\tdouble_signing_proof,\n\t%% The cumulative difficulty of the previous block.\n\tprevious_cumulative_diff = 0,\n\n\t%%\n\t%% The fields below were added at the fork 2.7 (note that 2.6.8 was a hard fork too).\n\t%%\n\n\t%% The merkle trees of the data written after this weave offset may be constructed\n\t%% in a way where some subtrees are \"rebased\", i.e., their offsets start from 0 as if\n\t%% they were the leftmost subtree of the entire tree. 
The merkle paths for the chunks\n\t%% belonging to the subtrees will include a 32-byte 0-sequence preceding the pivot to\n\t%% the corresponding subtree. The rebases allow for flexible combination of data before\n\t%% registering it on the weave, extremely useful e.g., for the bundling services.\n\tmerkle_rebase_support_threshold,\n\t%% The SHA2-256 of the packed chunk.\n\tchunk_hash,\n\t%% The SHA2-256 of the packed chunk2, when present.\n\tchunk2_hash,\n\n\t%% The hashes of the history of block times (in seconds), VDF times (in steps),\n\t%% and solution types (one-chunk vs two-chunk) of the latest\n\t%% ?BLOCK_TIME_HISTORY_BLOCKS blocks.\n\tblock_time_history_hash,\n\t%% The block times (in seconds), VDF times (in steps), and solution types (one-chunk vs\n\t%% two-chunk) of the latest ?BLOCK_TIME_HISTORY_BLOCKS blocks.\n\t%% Used internally, not gossiped.\n\tblock_time_history = [], % {block_interval, vdf_interval, chunk_count}\n\n\t%%\n\t%% The fields below were added at the fork 2.8.\n\t%%\n\n\t%% The packing difficulty of the replica the block was mined with.\n\t%% Applies to both poa1 and poa2.\n\t%%\n\t%% Packing difficulty 0 denotes the usual pre-2.8 packing scheme.\n\t%% Packing difficulty 1 refers to the new composite packing of approximately the same\n\t%% computational cost as the difficulty 0 packing. Packing difficulty 2 is the composite\n\t%% packing where each sub-chunk is hashed twice as many times. The maximum allowed\n\t%% value is 32.\n\t%%\n\t%% When packing_difficulty >= 1, both poa1 and poa2 contain the unpacked chunks.\n\t%% The values of the \"chunk\" fields are now 8192-byte packed sub-chunks.\n\t%%\n\t%% If the block is associated with the new replication format (replica_format=1,)\n\t%% the packing difficulty is constant and determines the number of nonces\n\t%% (also, sub-chunks) in the recall range and their mining difficulty, in line with\n\t%% the chosen computational difficulty of the entropy computation.\n\tpacking_difficulty = 0,\n\t%% The SHA2-256 of the unpacked 0-padded (if less than 256 KiB) chunk.\n\t%% undefined when packing_difficulty == 0, has a value otherwise.\n\tunpacked_chunk_hash,\n\t%% The SHA2-256 of the unpacked 0-padded (if less than 256 KiB) chunk2.\n\t%% undefined when packing_difficulty == 0 or recall_byte2 == undefined,\n\t%% has a value otherwise.\n\tunpacked_chunk2_hash,\n\n\t%% The replica format 0 is the inefficient \"packing\" where every chunk is packed\n\t%% independently. The replica format 1 is new the blazing fast replication format.\n\treplica_format = 0,\n\n\t%% Used internally, not gossiped. Convenient for validating potentially non-unique\n\t%% merkle proofs assigned to the different signatures of the same solution\n\t%% (see validate_poa_against_cached_poa in ar_block_pre_validator.erl).\n\tpoa_cache,\n\t%% Used internally, not gossiped. Convenient for validating potentially non-unique\n\t%% merkle proofs assigned to the different signatures of the same solution\n\t%% (see validate_poa_against_cached_poa in ar_block_pre_validator.erl).\n\tpoa2_cache,\n\n\t%% Used internally, not gossiped.\n\treceive_timestamp\n}).\n\n%% @doc A transaction.\n-record(tx, {\n\t%% 1 or 2.\n\tformat = 1,\n\t%% The transaction identifier.\n\tid = <<>>,\n\t%% Either the identifier of the previous transaction from\n\t%% the same wallet or the identifier of one of the\n\t%% last ar_block:get_max_tx_anchor_depth() blocks.\n\tlast_tx = <<>>,\n\t%% The public key the transaction is signed with.\n\towner =\t<<>>,\n\t%% The owner address. 
Used as a cache to avoid recomputing it, not serialized.\n\towner_address = not_set,\n\t%% A list of arbitrary key-value pairs. Keys and values are binaries.\n\ttags = [],\n\t%% The address of the recipient, if any. The SHA2-256 hash of the public key.\n\ttarget = <<>>,\n\t%% The amount of Winstons to send to the recipient, if any.\n\tquantity = 0,\n\t%% The data to upload, if any. For v2 transactions, the field is optional - a fee\n\t%% is charged based on the \"data_size\" field, data itself may be uploaded any time\n\t%% later in chunks.\n\tdata = <<>>,\n\t%% Size in bytes of the transaction data.\n\tdata_size = 0,\n\t%% Deprecated. Not used, not gossiped.\n\tdata_tree = [],\n\t%% The Merkle root of the Merkle tree of data chunks.\n\tdata_root = <<>>,\n\t%% The signature.\n\tsignature = <<>>,\n\t%% The fee in Winstons.\n\treward = 0,\n\n\t%% The code for the denomination of AR in base units.\n\t%%\n\t%% 1 corresponds to the original denomination of 10^12 base units.\n\t%% Every time the available supply falls below ?REDENOMINATION_THRESHOLD,\n\t%% the denomination is multiplied by 1000, the code is incremented.\n\t%%\n\t%% 0 is the default denomination code. It is treated as the denomination code of the\n\t%% current block. We do NOT default to 1 because we want to distinguish between the\n\t%% transactions with the explicitly assigned denomination (the denomination then becomes\n\t%% a part of the signature preimage) and transactions signed the way they were signed\n\t%% before the upgrade. The motivation is to keep supporting legacy client libraries after\n\t%% redenominations and at the same time protect users from an attack where\n\t%% a post-redenomination transaction is included in a pre-redenomination block. The attack\n\t%% is prevented by forbidding inclusion of transactions with denomination=0 in the 100\n\t%% blocks preceding the redenomination block.\n\t%%\n\t%% Transaction denomination code must not exceed the block's denomination code.\n\tdenomination = 0,\n\n\t%% The type of signature this transaction was signed with. 
A system field,\n\t%% not used by the protocol yet.\n\tsignature_type = ?DEFAULT_KEY_TYPE\n}).\n\n%% @doc The data_path field will only be not_found if the chunk record is corrupt/invalid.\n%% This can happen if the chunk entry exists in the chunks_index but not in the chunk_data_db.\n%% In this case:\n%% - not_set means that a field has not been queried yet.\n%% - not_found means that the field has been queried but could not be found.\n-record(chunk_metadata, {\n\tchunk_data_key = not_set :: not_set | binary(),\n\ttx_root = not_set :: not_set | binary(),\n\ttx_path = not_set :: not_set | binary(),\n\tdata_root = not_set :: not_set | binary(),\n\tdata_path = not_set :: not_set | not_found | binary(),\n\tchunk_size = not_set :: not_set | non_neg_integer()\n}).\n\n-record(chunk_offsets, {\n\tabsolute_offset = not_set :: not_set | non_neg_integer(),\n\tbucket_end_offset = not_set :: not_set | non_neg_integer(),\n\tpadded_end_offset = not_set :: not_set | non_neg_integer(),\n\trelative_offset = not_set :: not_set | non_neg_integer()\n}).\n\n%% A macro to convert AR into Winstons.\n-define(AR(AR), (?WINSTON_PER_AR * AR)).\n\n%% A macro to return whether a term is a block record.\n-define(IS_BLOCK(X), (is_record(X, block))).\n\n%% Convert a v2.0 block index into an old style block hash list.\n-define(BI_TO_BHL(BI), ([BH || {BH, _, _} <- BI])).\n\n%% Pattern matches on ok-tuple and returns the value.\n-define(OK(Tuple), begin (case (Tuple) of {ok, SuccessValue} -> (SuccessValue) end) end).\n\n%% The messages to be stored inside the genesis block.\n-define(GENESIS_BLOCK_MESSAGES, []).\n\n%% Minimum number of characters for internal API secret. Used in the optional HTTP API\n%% for signing transactions.\n-define(INTERNAL_API_SECRET_MIN_LEN, 16).\n\n%% The frequency of issuing a reminder to the console and the logfile\n%% about the insufficient disk space, in milliseconds.\n-define(DISK_SPACE_WARNING_FREQUENCY, 24 * 60 * 60 * 1000).\n\n%% Use a standard way of logging.\n%% For more details see https://erlang.org/doc/man/logger.html#macros.\n-include_lib(\"kernel/include/logger.hrl\").\n\n-endif.\n"
  },
  {
    "path": "apps/arweave/include/ar_blacklist_middleware.hrl",
    "content": "-define(THROTTLE_PERIOD, 30000).\n\n-define(BAN_CLEANUP_INTERVAL, 60000).\n\n-define(RPM_BY_PATH(Path), fun() ->\n\t?RPM_BY_PATH(Path, #{})()\nend).\n\n-define(RPM_BY_PATH(Path, LimitByIP), fun() ->\n\t{ok, Config} = arweave_config:get_env(),\n\t?RPM_BY_PATH(Path, LimitByIP, Config#config.requests_per_minute_limit)()\nend).\n\n-ifdef(AR_TEST).\n-define(RPM_BY_PATH(Path, LimitByIP, DefaultPathLimit), fun() ->\n\tcase Path of\n\t\t[<<\"chunk\">> | _] ->\n\t\t\t{chunk,\tmaps:get(chunk, LimitByIP, DefaultPathLimit)}; % ~50 MB/s.\n\t\t[<<\"chunk2\">> | _] ->\n\t\t\t{chunk,\tmaps:get(chunk, LimitByIP, DefaultPathLimit)}; % ~50 MB/s.\n\t\t[<<\"data_sync_record\">> | _] ->\n\t\t\t{data_sync_record, maps:get(data_sync_record, LimitByIP, DefaultPathLimit)};\n\t\t[<<\"recent_hash_list_diff\">> | _] ->\n\t\t\t{recent_hash_list_diff,\tmaps:get(recent_hash_list_diff, LimitByIP, DefaultPathLimit)};\n\t\t[<<\"hash_list\">>] ->\n\t\t\t{block_index, maps:get(block_index, LimitByIP, DefaultPathLimit)};\n\t\t[<<\"hash_list2\">>] ->\n\t\t\t{block_index, maps:get(block_index, LimitByIP, DefaultPathLimit)};\n\t\t[<<\"block_index\">>] ->\n\t\t\t{block_index, maps:get(block_index, LimitByIP, DefaultPathLimit)};\n\t\t[<<\"block_index2\">>] ->\n\t\t\t{block_index, maps:get(block_index, LimitByIP, DefaultPathLimit)};\n\t\t[<<\"wallet_list\">>] ->\n\t\t\t{wallet_list, maps:get(wallet_list, LimitByIP, DefaultPathLimit)};\n\t\t[<<\"block\">>, _Type, _ID, <<\"wallet_list\">>] ->\n\t\t\t{wallet_list, maps:get(wallet_list, LimitByIP, DefaultPathLimit)};\n\t\t[<<\"block\">>, _Type, _ID, <<\"hash_list\">>] ->\n\t\t\t{block_index, maps:get(block_index, LimitByIP, DefaultPathLimit)};\n\t\t[<<\"vdf\">>] ->\n\t\t\t{get_vdf, maps:get(get_vdf, LimitByIP, DefaultPathLimit)};\n\t\t[<<\"vdf\">>, <<\"session\">>] ->\n\t\t\t{get_vdf_session, maps:get(get_vdf_session, LimitByIP, DefaultPathLimit)};\n\t\t[<<\"vdf2\">>, <<\"session\">>] ->\n\t\t\t{get_vdf_session, maps:get(get_vdf_session, LimitByIP, DefaultPathLimit)};\n\t\t[<<\"vdf3\">>, <<\"session\">>] ->\n\t\t\t{get_vdf_session, maps:get(get_vdf_session, LimitByIP, DefaultPathLimit)};\n\t\t[<<\"vdf4\">>, <<\"session\">>] ->\n\t\t\t{get_vdf_session, maps:get(get_vdf_session, LimitByIP, DefaultPathLimit)};\n\t\t[<<\"vdf\">>, <<\"previous_session\">>] ->\n\t\t\t{get_previous_vdf_session, maps:get(get_previous_vdf_session, LimitByIP, DefaultPathLimit)};\n\t\t[<<\"vdf2\">>, <<\"previous_session\">>] ->\n\t\t\t{get_previous_vdf_session, maps:get(get_previous_vdf_session, LimitByIP, DefaultPathLimit)};\n\t\t[<<\"vdf4\">>, <<\"previous_session\">>] ->\n\t\t\t{get_previous_vdf_session, maps:get(get_previous_vdf_session, LimitByIP, DefaultPathLimit)};\n\t\t_ ->\n\t\t\t{default, maps:get(default, LimitByIP, DefaultPathLimit)}\n\tend\nend).\n-else.\n-define(RPM_BY_PATH(Path, LimitByIP, DefaultPathLimit), fun() ->\n\tcase Path of\n\t\t[<<\"chunk\">> | _] ->\n\t\t\t{chunk,\tmaps:get(chunk, LimitByIP, 12000)}; % ~50 MB/s.\n\t\t[<<\"chunk2\">> | _] ->\n\t\t\t{chunk,\tmaps:get(chunk, LimitByIP, 12000)}; % ~50 MB/s.\n\t\t[<<\"data_sync_record\">> | _] ->\n\t\t\t{data_sync_record,\tmaps:get(data_sync_record, LimitByIP, 40)};\n\t\t[<<\"footprints\">> | _] ->\n\t\t\t%% 262144 * 1024 (chunks per footprint) * 200 (rpm) / 60 (seconds) =~ 800 MB/s\n\t\t\t{footprints,\tmaps:get(footprints, LimitByIP, 200)};\n\t\t[<<\"recent_hash_list_diff\">> | _] ->\n\t\t\t{recent_hash_list_diff,\tmaps:get(recent_hash_list_diff, LimitByIP, 240)};\n\t\t[<<\"hash_list\">>] ->\n\t\t\t{block_index, 
maps:get(block_index, LimitByIP, 2)};\n\t\t[<<\"hash_list2\">>] ->\n\t\t\t{block_index, maps:get(block_index, LimitByIP, 2)};\n\t\t[<<\"block_index\">>] ->\n\t\t\t{block_index, maps:get(block_index, LimitByIP, 2)};\n\t\t[<<\"block_index2\">>] ->\n\t\t\t{block_index, maps:get(block_index, LimitByIP, 2)};\n\t\t[<<\"wallet_list\">>] ->\n\t\t\t{wallet_list, maps:get(wallet_list, LimitByIP, 2)};\n\t\t[<<\"block\">>, _Type, _ID, <<\"wallet_list\">>] ->\n\t\t\t{wallet_list, maps:get(wallet_list, LimitByIP, 2)};\n\t\t[<<\"block\">>, _Type, _ID, <<\"hash_list\">>] ->\n\t\t\t{block_index, maps:get(block_index, LimitByIP, 2)};\n\t\t[<<\"vdf\">>] ->\n\t\t\t{get_vdf, maps:get(get_vdf, LimitByIP, 180)};\n\t\t[<<\"vdf\">>, <<\"session\">>] ->\n\t\t\t{get_vdf_session, maps:get(get_vdf_session, LimitByIP, 60)};\n\t\t[<<\"vdf2\">>, <<\"session\">>] ->\n\t\t\t{get_vdf_session, maps:get(get_vdf_session, LimitByIP, 60)};\n\t\t[<<\"vdf3\">>, <<\"session\">>] ->\n\t\t\t{get_vdf_session, maps:get(get_vdf_session, LimitByIP, 60)};\n\t\t[<<\"vdf4\">>, <<\"session\">>] ->\n\t\t\t{get_vdf_session, maps:get(get_vdf_session, LimitByIP, 60)};\n\t\t[<<\"vdf\">>, <<\"previous_session\">>] ->\n\t\t\t{get_previous_vdf_session, maps:get(get_previous_vdf_session, LimitByIP, 60)};\n\t\t[<<\"vdf2\">>, <<\"previous_session\">>] ->\n\t\t\t{get_previous_vdf_session, maps:get(get_previous_vdf_session, LimitByIP, 60)};\n\t\t[<<\"vdf4\">>, <<\"previous_session\">>] ->\n\t\t\t{get_previous_vdf_session, maps:get(get_previous_vdf_session, LimitByIP, 60)};\n\t\t_ ->\n\t\t\t{default, maps:get(default, LimitByIP, DefaultPathLimit)}\n\tend\nend).\n-endif.\n"
  },
  {
    "path": "apps/arweave/include/ar_block.hrl",
    "content": "%% Size in bytes of the timestamp and last_retarget block fields.\n-define(TIMESTAMP_FIELD_SIZE_LIMIT, 12).\n"
  },
  {
    "path": "apps/arweave/include/ar_chain_stats.hrl",
    "content": "-ifndef(AR_CHAIN_STATS_HRL).\n-define(AR_CHAIN_STATS_HRL, true).\n\n-define(RECENT_FORKS_AGE, 60 * 60 * 24 * 30). %% last 30 days of forks\n\n-ifdef(AR_TEST).\n-define(RECENT_FORKS_LENGTH, 5).\n-else.\n-define(RECENT_FORKS_LENGTH, 20). %% only return the last 20 forks\n-endif.\n\n-record(fork, {\n    id,\n    height,\n    timestamp,\n    block_ids\n}).\n\n-endif.\n"
  },
  {
    "path": "apps/arweave/include/ar_chunk_storage.hrl",
    "content": "-define(OFFSET_SIZE, 3). % Sufficient to represent a number up to 256 * 1024 (?DATA_CHUNK_SIZE).\n-define(OFFSET_BIT_SIZE, (?OFFSET_SIZE * 8)).\n\n-define(CHUNK_DIR, \"chunk_storage\").\n"
  },
  {
    "path": "apps/arweave/include/ar_consensus.hrl",
    "content": "%% The number of RandomX hashes to compute to pack a chunk.\n-define(PACKING_DIFFICULTY, 20).\n\n%% The number of RandomX hashes to compute to pack a chunk after the fork 2.6.\n%% we want packing to take ~30x longer than the regular one\n%% 8   iterations - 2 ms\n%% 360 iterations - 59 ms\n%% 360/8 = 45\n-define(PACKING_DIFFICULTY_2_6, 45).\n\n-define(RANDOMX_PACKING_ROUNDS, 8 * (?PACKING_DIFFICULTY)).\n-define(RANDOMX_PACKING_ROUNDS_2_6, 8 * (?PACKING_DIFFICULTY_2_6)).\n\n%% Stop supporting the legacy non-composite packing after this number of blocks\n%% passed since the fork 2.8. 365 * 24 * 60 * 60 / 128 = 246375.\n-define(SPORA_PACKING_EXPIRATION_PERIOD_BLOCKS, (246375 * 4)).\n\n%% Stop supporting the composite packing after ~60 days have passed since the 2.9 fork.\n%% 30 days = 30 * 24 * 60 * 60 / 128 = 20250.\n-ifndef(COMPOSITE_PACKING_EXPIRATION_PERIOD_BLOCKS).\n-define(COMPOSITE_PACKING_EXPIRATION_PERIOD_BLOCKS, (20250 * 2)).\n-endif.\n\n%% The number of times we apply an RX hash in each RX2 lane in-between every pair\n%% of mixings.\n-define(REPLICA_2_9_RANDOMX_PROGRAM_COUNT, 6).\n\n%% The number of RX2 lanes.\n-define(REPLICA_2_9_RANDOMX_LANE_COUNT, 4).\n\n%% The RX2 depth: the number of RX2 rounds. A round of RX2 has:\n%% 1. REPLICA_2_9_RANDOMX_LANE_COUNT lanes\n%% 2. Each lane evaluates REPLICA_2_9_RANDOMX_PROGRAM_COUNT RandomX programs\n%% 3. The output entropy of each lane is then mixed with crc32 (aka \"near mix\")\n%% 4. The mixed output from all lanes is then shuffled (aka \"far mix\")\n-define(REPLICA_2_9_RANDOMX_DEPTH, 3).\n\n%% The size in bytes of the component RX2 scratchpad (aka the output from each RX2 lane). This\n%% is NOT the total output entropy (that size is defined in REPLICA_2_9_ENTROPY_SIZE).\n-define(RANDOMX_SCRATCHPAD_SIZE, 2097152).\n\n%% The size in bytes of the total RX2 entropy (# of lanes * scratchpad size).\n-ifdef(AR_TEST).\n%% 32_768 bytes worth of entropy.\n-define(REPLICA_2_9_ENTROPY_SIZE, (4 * ?COMPOSITE_PACKING_SUB_CHUNK_SIZE)).\n-else.\n%% 8_388_608 bytes worth of entropy.\n-define(REPLICA_2_9_ENTROPY_SIZE, (\n\t?REPLICA_2_9_RANDOMX_LANE_COUNT * ?RANDOMX_SCRATCHPAD_SIZE\n)).\n-endif.\n\n%% The number of entropies generated per partition.\n%% The value is chosen depending on the PARTITION_SIZE and REPLICA_2_9_ENTROPY_SIZE constants\n%% such that\n%% 1. Entropy Partition Size =\n%%      REPLICA_2_9_ENTROPY_COUNT * REPLICA_2_9_ENTROPY_SIZE >= PARTITION_SIZE\n%% 2. Sector Size =\n%%      REPLICA_2_9_ENTROPY_COUNT * COMPOSITE_PACKING_SUB_CHUNK_SIZE and\n%%      is divisible by DATA_CHUNK_SIZE\n%% This proves very convenient for chunk-by-chunk syncing.\n%%\n%% Equation to solve for REPLICA_2_9_ENTROPY_COUNT:\n%% round(PARTITION_SIZE / REPLICA_2_9_ENTROPY_SIZE) to nearest multiple of 32\n%%\n%% e.g.\n%% 3_600_000_000_000 / 8_388_608 = 429153.4423828125\n%% (429_153 + 31) = 429_184 (nearest multiple of 32)\n%%\n%% Entropy Partition Size is 429_184 * 8_388_608 = 3_600_256_335_872\n%% Sector Size is 429_184 * 8192 = 3_515_875_328\n%%\n%% Each slice of an entropy is distributed to a different sector such that consecutive slices\n%% map to chunks that are as far as possible from each other within a partition. 
With\n%% an entropy size of 8_388_608 bytes and a slice size of 8192 bytes, there are 1024 slices per\n%% entropy, which yields 1024 sectors per partition.\n-ifndef(REPLICA_2_9_ENTROPY_COUNT).\n-define(REPLICA_2_9_ENTROPY_COUNT, 429_184).\n-endif.\n\n%% The effective packing difficulty of the new replication format (replica_format=1.)\n%% Determines the recall range size and the mining difficulty of the mining nonces.\n-ifndef(REPLICA_2_9_PACKING_DIFFICULTY).\n-define(REPLICA_2_9_PACKING_DIFFICULTY, 10).\n-endif.\n\n%% The size of the mining partition. The weave is broken down into partitions\n%% of equal size. A miner can search for a solution in each of the partitions\n%% in parallel, per mining address.\n-ifndef(PARTITION_SIZE).\n-define(PARTITION_SIZE, 3_600_000_000_000). % 90% of 4 TB.\n-endif.\n\n%% The size of a recall range. The first range is randomly chosen from the given\n%% mining partition. The second range is chosen from the entire weave.\n-ifdef(AR_TEST).\n-define(RECALL_RANGE_SIZE, (128 * 1024)).\n-else.\n-define(RECALL_RANGE_SIZE, 26_214_400). % == 25 * 1024 * 1024\n-endif.\n\n%% The size of a recall range before the fork 2.8.\n-ifdef(AR_TEST).\n-define(LEGACY_RECALL_RANGE_SIZE, (512 * 1024)).\n-else.\n-define(LEGACY_RECALL_RANGE_SIZE, 104_857_600). % == 100 * 1024 * 1024\n-endif.\n\n-ifndef(STRICT_DATA_SPLIT_THRESHOLD).\n%% The threshold was determined on the mainnet at the 2.5 fork block. The chunks\n%% submitted after the threshold must adhere to stricter validation rules.\n%% This offset is about half way through partition 8\n-define(STRICT_DATA_SPLIT_THRESHOLD, 30_607_159_107_830).\n-endif.\n\n-ifdef(FORKS_RESET).\n\t-ifdef(AR_TEST).\n\t\t-define(MERKLE_REBASE_SUPPORT_THRESHOLD, (ar_block:strict_data_split_threshold() * 2)).\n\t-else.\n\t\t-define(MERKLE_REBASE_SUPPORT_THRESHOLD, 0).\n\t-endif.\n-else.\n%% The threshold was determined on the mainnet at the 2.7 fork block. The chunks\n%% submitted after the threshold must adhere to a different set of validation rules.\n-define(MERKLE_REBASE_SUPPORT_THRESHOLD, 151066495197430).\n-endif.\n\n%% Recall bytes are only picked from the subspace up to the size\n%% of the weave at the block of the depth defined by this constant.\n-ifdef(AR_TEST).\n-define(SEARCH_SPACE_UPPER_BOUND_DEPTH, 3).\n-else.\n-define(SEARCH_SPACE_UPPER_BOUND_DEPTH, 50).\n-endif.\n\n%% The maximum mining difficulty. 2 ^ 256. The network difficulty\n%% may theoretically be at most ?MAX_DIFF - 1.\n-define(MAX_DIFF, (\n\t115792089237316195423570985008687907853269984665640564039457584007913129639936\n)).\n\n%% Increase the difficulty of PoA1 solutions by this multiplier (e.g. 100x).\n-ifndef(POA1_DIFF_MULTIPLIER).\n-define(POA1_DIFF_MULTIPLIER, 100).\n-endif.\n\n%% The number of nonce limiter steps sharing the entropy. We add the entropy\n%% from a past block every so often. If we did not add any entropy at all, even\n%% a slight speedup of the nonce limiting function (considering its low cost) allows\n%% one to eventually pre-compute a very significant amount of nonces opening up\n%% the possibility of mining without the speed limitation. 
On the other hand,\n%% adding the entropy at certain blocks (rather than nonce limiter steps) allows\n%% miners to use extra bandwidth (bearing almost no additional costs) to compute\n%% nonces on the short forks with different-entropy nonce limiting chains.\n-ifndef(NONCE_LIMITER_RESET_FREQUENCY).\n-define(NONCE_LIMITER_RESET_FREQUENCY, (10 * 120)).\n-endif.\n\n%% The maximum number of one-step checkpoints the block header may include.\n-ifndef(NONCE_LIMITER_MAX_CHECKPOINTS_COUNT).\n-define(NONCE_LIMITER_MAX_CHECKPOINTS_COUNT, 10800).\n-endif.\n\n%% The minimum difficulty allowed.\n-ifndef(SPORA_MIN_DIFFICULTY).\n-define(SPORA_MIN_DIFFICULTY(Height), fun() ->\n\tForks = {\n\t\tar_fork:height_2_4(),\n\t\tar_fork:height_2_6()\n\t},\n\tcase Forks of\n\t\t{_Fork_2_4, Fork_2_6} when Height >= Fork_2_6 ->\n\t\t\t2;\n\t\t{Fork_2_4, _Fork_2_6} when Height >= Fork_2_4 ->\n\t\t\t21\n\tend\nend()).\n-else.\n-define(SPORA_MIN_DIFFICULTY(_Height), ?SPORA_MIN_DIFFICULTY).\n-endif.\n\n%%%===================================================================\n%%% Pre-fork 2.6 constants.\n%%%===================================================================\n\n%% The size of the search space - a share of the weave randomly sampled\n%% at every block. The solution must belong to the search space.\n-define(SPORA_SEARCH_SPACE_SIZE(SearchSpaceUpperBound), fun() ->\n\t%% The divisor must be equal to SPORA_SEARCH_SPACE_SHARE\n\t%% defined in c_src/ar_mine_randomx.h.\n\tSearchSpaceUpperBound div 10 % 10% of the weave.\nend()).\n\n%% The number of contiguous subspaces of the search space, a roughly equal\n%% share of the search space is sampled from each of the subspaces.\n%% Must be equal to SPORA_SUBSPACES_COUNT defined in c_src/ar_mine_randomx.h.\n-define(SPORA_SEARCH_SPACE_SUBSPACES_COUNT, 1024).\n\n%% The key to initialize the RandomX state from, for RandomX packing.\n-define(RANDOMX_PACKING_KEY, <<\"default arweave 2.5 pack key\">>).\n\n-define(RANDOMX_HASHING_MODE_FAST, 0).\n-define(RANDOMX_HASHING_MODE_LIGHT, 1).\n\n%% The original plan was to cap the proof at 262144 (also the maximum chunk size).\n%% The maximum tree depth is then (262144 - 64) / (32 + 32 + 32) = 2730.\n%% Later we added support for offset rebases by recognizing the extra 32 bytes,\n%% possibly at every branching point, as indicating a rebase. To preserve the depth maximum,\n%% we now cap the size at 2730 * (96 + 32) + 65 = 349504.\n-define(MAX_DATA_PATH_SIZE, 349504).\n\n%% We may have at most 1000 transactions + 1000 padding nodes => depth=11\n%% => at most 11 * 96 + 64 bytes worth of the proof. Due to its small size, we\n%% extend it somewhat for better future-compatibility.\n-define(MAX_TX_PATH_SIZE, 2176).\n"
  },
  {
    "path": "apps/arweave/include/ar_data_discovery.hrl",
    "content": "%% The size in bytes of a bucket used to group peers' sync records. When we want to sync\n%% an interval, we process it bucket by bucket: for every bucket, a few peers who are known\n%% to have some data there are asked for the intervals they have, and we check which of them\n%% cross the desired interval.\n-ifdef(AR_TEST).\n-define(NETWORK_DATA_BUCKET_SIZE, 10_000_000). % 10 MB\n-else.\n-define(NETWORK_DATA_BUCKET_SIZE, 10_000_000_000). % 10 GB\n-endif.\n\n%% Similar to ?NETWORK_DATA_BUCKET_SIZE, except that a footprint bucket\n%% contains several \"footprints\" - sets of chunks spread out across the partition.\n-ifdef(AR_TEST).\n-define(NETWORK_FOOTPRINT_BUCKET_SIZE, 36). % 12 (footprints) * 3 (chunks); ~10 MB\n-else.\n-define(NETWORK_FOOTPRINT_BUCKET_SIZE, 37888). % 37 (footprints) * 1024 (chunks); ~10 GB\n-endif.\n\n%% The maximum number of synced intervals shared with peers.\n-ifdef(AR_TEST).\n-define(MAX_SHARED_SYNCED_INTERVALS_COUNT, 20).\n-else.\n-define(MAX_SHARED_SYNCED_INTERVALS_COUNT, 10_000).\n-endif.\n\n%% The upper limit for the size of a sync record serialized using Erlang Term Format.\n-define(MAX_ETF_SYNC_RECORD_SIZE, 80 * ?MAX_SHARED_SYNCED_INTERVALS_COUNT).\n\n%% byte_size(ar_serialize:jsonify(jiffy:encode(#{ packing => \"replica_2_9_\" ++ binary_to_list(crypto:strong_rand_bytes(32)), intervals => [[integer_to_list(trunc(math:pow(2, 256) - 1)), integer_to_list(trunc(math:pow(2, 256) - 1))] || _ <- lists:seq(1, 512)] }))).\n%% 243238\n-define(MAX_FOOTPRINT_PAYLOAD_SIZE, 250_000).\n\n%% The upper limit for the size of the serialized (in Erlang Term Format) sync buckets.\n-define(MAX_SYNC_BUCKETS_SIZE, 100_000).\n\n%% How many peers with the biggest synced shares in the given bucket to query per bucket\n%% per sync job iteration.\n-define(QUERY_BEST_PEERS_COUNT, 15).\n\n%% The number of the release adding support for the\n%% GET /data_sync_record/[start]/[end]/[limit] endpoint.\n-define(GET_SYNC_RECORD_RIGHT_BOUND_SUPPORT_RELEASE, 83).\n\n%% The number of the release adding support for endpoints:\n%% GET /footprints/[partition]/[footprint] \n%% GET /footprint_buckets\n-define(GET_FOOTPRINT_SUPPORT_RELEASE, 91)."
  },
  {
    "path": "apps/arweave/include/ar_data_sync.hrl",
    "content": "%% The size in bits of the offset key in kv databases.\n-define(OFFSET_KEY_BITSIZE, 256).\n\n%% The size in bits of the key prefix used in prefix bloom filter\n%% when looking up chunks by offsets from kv database.\n%% 29 bytes of the prefix correspond to the 16777216 (16 Mib) max distance\n%% between the keys with the same prefix. The prefix should be bigger than\n%% max chunk size (256 KiB) so that the chunk in question is likely to be\n%% found in the filter and smaller than an SST table (200 MiB) so that the\n%% filter lookup can narrow the search down to a single table.\n-define(OFFSET_KEY_PREFIX_BITSIZE, 232).\n\n%% The upper size limit for a serialized chunk with its proof\n%% as it travels around the network.\n%%\n%% It is computed as ?MAX_PATH_SIZE (data_path) + DATA_CHUNK_SIZE (chunk) +\n%% 32 * 1000 (tx_path, considering the 1000 txs per block limit),\n%% multiplied by 1.34 (Base64), rounded to the nearest 50000 -\n%% the difference is sufficient to fit an offset, a data_root,\n%% and special JSON chars.\n-define(MAX_SERIALIZED_CHUNK_PROOF_SIZE, 750000).\n\n%% Transaction data bigger than this limit is not served in\n%% GET /tx/<id>/data endpoint. Clients interested in downloading\n%% such data should fetch it chunk by chunk.\n-define(MAX_SERVED_TX_DATA_SIZE, 12 * 1024 * 1024).\n\n%% The time to wait until the next full disk pool scan.\n-ifdef(AR_TEST).\n-define(DISK_POOL_SCAN_DELAY_MS, 2000).\n-else.\n-define(DISK_POOL_SCAN_DELAY_MS, 10000).\n-endif.\n\n%% How often to measure the number of chunks in the disk pool index.\n-ifdef(AR_TEST).\n-define(RECORD_DISK_POOL_CHUNKS_COUNT_FREQUENCY_MS, 1000).\n-else.\n-define(RECORD_DISK_POOL_CHUNKS_COUNT_FREQUENCY_MS, 5000).\n-endif.\n\n%% How long to keep the offsets of the recently processed \"matured\" chunks in a cache.\n%% We use the cache to quickly skip matured chunks when scanning the disk pool.\n-define(CACHE_RECENTLY_PROCESSED_DISK_POOL_OFFSET_LIFETIME_MS, 30 * 60 * 1000).\n\n%% The frequency of removing expired data roots from the disk pool.\n-define(REMOVE_EXPIRED_DATA_ROOTS_FREQUENCY_MS, 60000).\n\n%% The frequency of storing the server state on disk.\n-define(STORE_STATE_FREQUENCY_MS, 30000).\n\n%% The maximum number of chunks currently being downloaded or processed.\n-ifdef(AR_TEST).\n-define(SYNC_BUFFER_SIZE, 100).\n-else.\n-define(SYNC_BUFFER_SIZE, 1000).\n-endif.\n\n%% Defines how long we keep the interval excluded from syncing.\n%% If we cannot find an interval by peers, we temporarily exclude\n%% it from the sought ranges to prevent the syncing process from slowing down.\n-ifdef(AR_TEST).\n-define(EXCLUDE_MISSING_INTERVAL_TIMEOUT_MS, 5000).\n-else.\n-define(EXCLUDE_MISSING_INTERVAL_TIMEOUT_MS, 10 * 60 * 1000).\n-endif.\n\n%% Let at least this many chunks stack up, per storage module, then write them on disk in the\n%% ascending order, to reduce out-of-order disk writes causing fragmentation.\n-ifdef(AR_TEST).\n-define(STORE_CHUNK_QUEUE_FLUSH_SIZE_THRESHOLD, 2).\n-else.\n-define(STORE_CHUNK_QUEUE_FLUSH_SIZE_THRESHOLD, 100). % ~ 25 MB worth of chunks.\n-endif.\n\n%% If a chunk spends longer than this in the store queue, write it on disk without waiting\n%% for ?STORE_CHUNK_QUEUE_FLUSH_SIZE_THRESHOLD chunks to stack up.\n-ifdef(AR_TEST).\n-define(STORE_CHUNK_QUEUE_FLUSH_TIME_THRESHOLD, 1000).\n-else.\n-define(STORE_CHUNK_QUEUE_FLUSH_TIME_THRESHOLD, 2_000). 
% 2 seconds.\n-endif.\n\n%% @doc The state of the server managing data synchronization.\n-record(sync_data_state, {\n\t%% The last entries of the block index.\n\t%% Used to determine orphaned data upon startup or chain reorg.\n\tblock_index,\n\t%% The current weave size. The upper limit for the absolute chunk end offsets.\n\tweave_size,\n\t%% A reference to the on-disk key-value storage mapping\n\t%% AbsoluteChunkEndOffset\n\t%%   => {ChunkDataKey, TXRoot, DataRoot, TXPath, ChunkOffset, ChunkSize}\n\t%%\n\t%% Chunks themselves and their DataPaths are stored separately (in chunk_data_db)\n\t%% because the offsets may change after a reorg. However, after the offset falls below\n\t%% DiskPoolThreshold, the chunk is packed for mining and recorded in the fast storage\n\t%% under the offset key.\n\t%%\n\t%% The index is used to look up the chunk by a random offset when a peer\n\t%% asks for it and to look up chunks of a transaction.\n\tchunks_index,\n\t%% A reference to the on-disk key-value storage mapping\n\t%% << DataRoot/binary, TXSize/binary, AbsoluteTXStartOffset/binary >> => TXPath.\n\t%%\n\t%% The index is used to look up tx_root for a submitted chunk and compute\n\t%% AbsoluteChunkEndOffset for the accepted chunk.\n\t%%\n\t%% We need the index because users should be able to submit their data without\n\t%% monitoring the chain, otherwise chain reorganisations might make the experience\n\t%% very unnerving. The index is NOT consulted when serving random chunks therefore\n\t%% it is possible to develop a lightweight client which would sync and serve random\n\t%% portions of the weave without maintaining this index.\n\tdata_root_index,\n\tdata_root_index_old,\n\t%% A reference to the on-disk key-value storage mapping\n\t%% AbsoluteBlockStartOffset => {TXRoot, BlockSize, DataRootIndexKeySet}.\n\t%% Each key in DataRootIndexKeySet is a << DataRoot/binary, TXSize:256 >> binary.\n\t%% Used to remove orphaned entries from DataRootIndex.\n\tdata_root_offset_index,\n\t%% A reference to the on-disk key value storage mapping\n\t%% << DataRootTimestamp:256, ChunkDataIndexKey/binary >> =>\n\t%%     {RelativeChunkEndOffset, ChunkSize, DataRoot, TXSize, ChunkDataKey, IsStrictSplit}.\n\t%%\n\t%% The index is used to keep track of pending, orphaned, and recent chunks.\n\t%% A periodic process iterates over chunks from earliest to latest, consults\n\t%% DiskPoolDataRoots and data_root_index to decide whether each chunk needs to\n\t%% be removed from disk as orphaned, reincluded into the weave (by updating chunks_index),\n\t%% or removed from disk_pool_chunks_index by expiration.\n\tdisk_pool_chunks_index,\n\tdisk_pool_chunks_index_old,\n\t%% One of the keys from disk_pool_chunks_index or the atom \"first\".\n\t%% The disk pool is processed chunk by chunk going from the oldest entry to the newest,\n\t%% trying not to block the syncing process if the disk pool accumulates a lot of orphaned\n\t%% and pending chunks. The cursor remembers the key after the last processed on the\n\t%% previous iteration. After reaching the last key in the storage, we go back to\n\t%% the first one. Not stored.\n\tdisk_pool_cursor,\n\t%% The weave offset for the disk pool - chunks above this offset are stored there.\n\tdisk_pool_threshold = 0,\n\t%% A reference to the on-disk key value storage mapping\n\t%% TXID => {AbsoluteTXEndOffset, TXSize}.\n\t%% Is used to serve transaction data by TXID.\n\ttx_index,\n\t%% A reference to the on-disk key value storage mapping\n\t%% AbsoluteTXStartOffset => TXID. 
Is used to cleanup orphaned transactions from tx_index.\n\ttx_offset_index,\n\t%% A reference to the on-disk key value storage mapping\n\t%% << Timestamp:256, DataPathHash/binary >> to raw chunk data (possibly packed).\n\t%%\n\t%% Is used to store disk pool chunks (their global offsets cannot be determined with\n\t%% certainty yet).\n\t%%\n\t%% The timestamp prefix is used to make the written entries sorted from the start,\n\t%% to minimize the LSTM compaction overhead.\n\tchunk_data_db,\n\t%% A reference to the on-disk key value storage mapping migration names to their stages.\n\tmigrations_index,\n\t%% A flag indicating the process has started collecting the intervals for syncing.\n\t%% We consult the other storage modules first, then search among the network peers.\n\tsync_status = undefined,\n\t%% The offsets of the chunks currently scheduled for (re-)packing (keys) and\n\t%% some chunk metadata needed for storing the chunk once it is packed.\n\tpacking_map = #{},\n\t%% The queue with unique {Start, End, Peer} triplets. Sync jobs are taking intervals\n\t%% from this queue and syncing them.\n\tsync_intervals_queue = gb_sets:new(),\n\t%% A compact set of non-overlapping intervals containing all the intervals from the\n\t%% sync intervals queue. We use it to quickly check which intervals have been queued\n\t%% already and avoid syncing the same interval twice.\n\tsync_intervals_queue_intervals = ar_intervals:new(),\n\t%% A key marking the beginning of a full disk pool scan.\n\tdisk_pool_full_scan_start_key = none,\n\t%% The timestamp of the beginning of a full disk pool scan. Used to measure\n\t%% the time it takes to scan the current disk pool - if it is too short, we postpone\n\t%% the next scan to save some disk IO.\n\tdisk_pool_full_scan_start_timestamp,\n\t%% A cache of the offsets of the recently \"matured\" chunks. We use it to quickly\n\t%% skip matured chunks when scanning the disk pool. The reason the chunk is still\n\t%% in the disk pool is some of its offsets have not matured yet (the same data can be\n\t%% submitted several times).\n\trecently_processed_disk_pool_offsets = #{},\n\t%% A registry of the currently processed disk pool chunks consulted by different\n\t%% disk pool jobs to avoid double-processing.\n\tcurrently_processed_disk_pool_keys = sets:new(),\n\t%% A flag used to temporarily pause all disk pool jobs.\n\tdisk_pool_scan_pause = false,\n\t%% The mining address the chunks are packed with in 2.6.\n\tmining_address,\n\t%% The identifier of the storage module the process is responsible for.\n\tstore_id,\n\t%% The start offset of the range the module is responsible for.\n\trange_start = -1,\n\t%% The end offset of the range the module is responsible for.\n\trange_end = -1,\n\t%% The list of {StoreID, {Start, End}} - the ranges we want to copy\n\t%% from the other storage modules (possibly, (re)packing the data in the process).\n\tunsynced_intervals_from_other_storage_modules = [],\n\t%% The list of identifiers of the non-default storage modules intersecting with the given\n\t%% storage module to be searched for missing data before attempting to sync the data\n\t%% from the network.\n\tother_storage_modules_with_unsynced_intervals = [],\n\t%% The priority queue of chunks sorted by offset. 
The motivation is to have chunks\n\t%% stack up, per storage module, before writing them on disk so that we can write\n\t%% them in the ascending order and reduce out-of-order disk writes causing fragmentation.\n\tstore_chunk_queue = gb_sets:new(),\n\t%% The length of the store chunk queue.\n\tstore_chunk_queue_len = 0,\n\t%% The threshold controlling the brief accumulation of the chunks in the queue before\n\t%% the actual disk dump, to reduce the chance of out-of-order writes causing disk\n\t%% fragmentation.\n\tstore_chunk_queue_threshold = ?STORE_CHUNK_QUEUE_FLUSH_SIZE_THRESHOLD,\n\t%% The phase of the syncing process.\n\t%% The phases are:\n\t%% - normal: normal left-to-right syncing (normally, of the unpacked data).\n\t%% - footprint: footprint-based syncing of replica 2.9 data.\n\tsync_phase = undefined\n}).\n"
  },
  {
    "path": "apps/arweave/include/ar_header_sync.hrl",
    "content": "%% The frequency of processing items in the queue.\n-ifdef(AR_TEST).\n-define(PROCESS_ITEM_INTERVAL_MS, 1000).\n-else.\n-define(PROCESS_ITEM_INTERVAL_MS, 100).\n-endif.\n\n%% The frequency of checking if there are headers to sync after everything\n%% is synced. Also applies to a fresh node without any data waiting for a block index.\n%% Another case is when the process misses a few blocks (e.g. blocks were sent while the\n%% supervisor was restarting it after a crash).\n-ifdef(AR_TEST).\n-define(CHECK_AFTER_SYNCED_INTERVAL_MS, 500).\n-else.\n-define(CHECK_AFTER_SYNCED_INTERVAL_MS, 5000).\n-endif.\n\n%% The initial value for the exponential backoff for failing requests.\n-ifdef(AR_TEST).\n-define(INITIAL_BACKOFF_INTERVAL_S, 1).\n-else.\n-define(INITIAL_BACKOFF_INTERVAL_S, 30).\n-endif.\n\n%% The maximum exponential backoff interval for failing requests.\n-ifdef(AR_TEST).\n-define(MAX_BACKOFF_INTERVAL_S, 2).\n-else.\n-define(MAX_BACKOFF_INTERVAL_S, 2 * 60 * 60).\n-endif.\n\n%% The frequency of storing the server state on disk.\n-define(STORE_HEADER_STATE_FREQUENCY_MS, 30000).\n"
  },
  {
    "path": "apps/arweave/include/ar_inflation.hrl",
    "content": "-ifndef(AR_INFLATION_HRL).\n-define(AR_INFLATION_HRL, true).\n\n-include_lib(\"arweave/include/ar.hrl\").\n\n%% An approximation of the natural logarithm of 2,\n%% expressed as a decimal fraction, with the precision of math:log.\n-define(LN2, {6931471805599453, 10000000000000000}).\n\n%% The precision of computing the natural exponent as a decimal fraction,\n%% expressed as the maximal power of the argument in the Taylor series.\n-define(INFLATION_NATURAL_EXPONENT_DECIMAL_FRACTION_PRECISION, 24).\n\n%% The tolerance used in the inflation schedule tests.\n-define(DEFAULT_TOLERANCE_PERCENT, 0.001).\n\n%% Height at which the 1.5.0.0 fork takes effect.\n-ifdef(AR_TEST).\n%% The inflation tests serve as a documentation of how rewards are computed.\n%% Therefore, we keep the mainnet value in these tests. Other tests have\n%% FORK 1.6 height set to zero from now on.\n-define(FORK_15_HEIGHT, 95000).\n-else.\n-define(FORK_15_HEIGHT, ?FORK_1_6).\n-endif.\n\n%% Blocks per year prior to 1.5.0.0 release.\n-define(PRE_15_BLOCKS_PER_YEAR, 525600 / (120 / 60)).\n\n%% Blocks per year prior to 2.5.0.0 release.\n-define(PRE_25_BLOCKS_PER_YEAR, (525600 / (120 / 60))).\n\n%% The number of extra tokens to grant for blocks between the 1.5.0.0 release\n%% and the end of year one.\n%% \n%% calculate_post_15_y1_extra() ->\n%%     Pre15 = erlang:trunc(sum_rewards(fun calculate/1, 0, ?FORK_15_HEIGHT)),\n%%     Base = erlang:trunc(sum_rewards(fun calculate_base/1, 0, ?FORK_15_HEIGHT)),\n%%     Post15Diff = Base - Pre15,\n%%     erlang:trunc(Post15Diff / (?BLOCKS_PER_YEAR - ?FORK_15_HEIGHT)).\n-define(POST_15_Y1_EXTRA, 13275279633337).\n\n-endif.\n"
  },
  {
    "path": "apps/arweave/include/ar_mining.hrl",
    "content": "-ifndef(AR_MINING_HRL).\n-define(AR_MINING_HRL, true).\n\n-define(GC_LOG_THRESHOLD, 1000).\n\n%% Fields prefixed with cm_ are only set when a solution is distributed across miners as part\n%% of a coordinated mining set.\n-record(mining_candidate, {\n\tcache_ref = not_set, %% not serialized\n\tchunk1 = not_set, %% not serialized\n\tchunk2 = not_set, %% not serialized\n\tcm_diff = not_set, %% serialized. set to the difficulty used by the H1 miner\n\tcm_h1_list = [], %% serialized. list of {h1, nonce} pairs\n\tcm_lead_peer = not_set, %% not serialized. if set, this candidate came from another peer\n\th0 = not_set, %% serialized\n\th1 = not_set, %% serialized\n\th2 = not_set, %% serialized\n\tmining_address = not_set, %% serialized\n\tnext_seed = not_set, %% serialized\n\tnext_vdf_difficulty = not_set, %% serialized\n\tnonce = not_set, %% serialized\n\tnonce_limiter_output = not_set, %% serialized\n\tpartition_number = not_set, %% serialized\n\tpartition_number2 = not_set, %% serialized\n\tpartition_upper_bound = not_set, %% serialized\n\tpoa2 = not_set, %% serialized\n\tpreimage = not_set, %% serialized. this can be either the h1 or h2 preimage\n\tseed = not_set, %% serialized\n\tsession_key = not_set, %% serialized\n\tstart_interval_number = not_set, %% serialized\n\tstep_number = not_set, %% serialized\n\tpacking_difficulty = 0, %% serialized\n\treplica_format = 0, %% serialized\n\tlabel = <<\"not_set\">> %% not an atom, to prevent atom table pollution DoS\n}).\n\n-record(mining_solution, {\n\tlast_step_checkpoints = [],\n\tmerkle_rebase_threshold = 0,\n\tnext_seed = << 0:(8 * 48) >>,\n\tnext_vdf_difficulty = 0,\n\tnonce = 0,\n\tnonce_limiter_output = << 0:256 >>,\n\tpartition_number = 0,\n\tpoa1 = #poa{},\n\tpoa2 = #poa{},\n\tpreimage = << 0:256 >>,\n\trecall_byte1 = 0,\n\trecall_byte2 = undefined,\n\tsolution_hash = << 0:256 >>,\n\tstart_interval_number = 0,\n\tstep_number = 0,\n\tsteps = [],\n\tseed = << 0:(8 * 48) >>,\n\tmining_address = << 0:256 >>,\n\tpartition_upper_bound = 0,\n\tpacking_difficulty = 0,\n\treplica_format = 0\n}).\n\n-endif.\n"
  },
  {
    "path": "apps/arweave/include/ar_mining_cache.hrl",
    "content": "-ifndef(AR_MINING_CACHE_HRL).\n-define(AR_MINING_CACHE_HRL, true).\n\n-record(ar_mining_cache_value, {\n  chunk1 :: binary() | undefined,\n  chunk2 :: binary() | undefined,\n  chunk1_failed = false :: boolean(),\n  chunk2_failed = false :: boolean(),\n  h1 :: binary() | undefined,\n  h1_passes_diff_checks = false :: boolean(),\n  h2 :: binary() | undefined\n}).\n\n-record(ar_mining_cache_session, {\n  mining_cache = #{} :: #{term() => #ar_mining_cache_value{}},\n  mining_cache_size_bytes = 0 :: non_neg_integer(),\n  reserved_mining_cache_bytes = 0 :: non_neg_integer()\n}).\n\n\n-record(ar_mining_cache, {\n  name = not_set :: term(),\n  mining_cache_sessions = #{} :: #{term() => #ar_mining_cache_session{}},\n  mining_cache_sessions_queue = queue:new() :: queue:queue(),\n  mining_cache_limit_bytes = 0 :: non_neg_integer()\n}).\n\n-endif.\n"
  },
  {
    "path": "apps/arweave/include/ar_peers.hrl",
    "content": "-ifndef(AR_PEERS_HRL).\n-define(AR_PEERS_HRL, true).\n\n-include_lib(\"ar.hrl\").\n\n-record(performance, {\n\tversion = 3,\n\trelease = -1,\n\ttotal_bytes = 0,\n\ttotal_throughput = 0.0, %% bytes per millisecond\n\ttotal_transfers = 0,\n\taverage_latency = 0.0, %% milliseconds\n\taverage_throughput = 0.0, %% bytes per millisecond\n\taverage_success = 1.0,\n\tlifetime_rating = 0, %% longer time window\n\tcurrent_rating = 0 %% shorter time window\n}).\n\n-endif."
  },
  {
    "path": "apps/arweave/include/ar_poa.hrl",
    "content": "-ifndef(AR_POA_HRL).\n-define(AR_POA_HRL, true).\n\n-include(\"ar.hrl\").\n\n-record(chunk_proof, {\n\tmetadata :: #chunk_metadata{},\n\tseek_byte :: non_neg_integer(),\n\ttx_start_offset :: non_neg_integer(),\n\ttx_end_offset :: non_neg_integer(),\n\tblock_start_offset :: non_neg_integer(),\n\tblock_end_offset :: non_neg_integer(),\n\tchunk_id :: binary(),\n\tchunk_start_offset :: non_neg_integer(),\n\tchunk_end_offset :: non_neg_integer(),\n\tvalidate_data_path_ruleset :: \n\t\t'offset_rebase_support_ruleset' |\n\t\t'strict_data_split_ruleset' |\n\t\t'strict_borders_ruleset',\n\ttx_path_is_valid = not_validated :: 'not_validated' | 'valid' | 'invalid',\n\tdata_path_is_valid = not_validated :: 'not_validated' | 'valid' | 'invalid',\n\tchunk_is_valid = not_validated :: 'not_validated' | 'valid' | 'invalid'\n}).\n\n-endif.\n"
  },
  {
    "path": "apps/arweave/include/ar_pool.hrl",
    "content": "%% The number of VDF steps (\"jobs\") the pool server serves at a time.\n-define(GET_JOBS_COUNT, 10).\n\n%% The time in seconds the pool server waits before giving up on replying with\n%% new jobs when the client already has the newest job.\n-define(GET_JOBS_TIMEOUT_S, 2).\n\n%% The frequency in milliseconds of asking the pool or CM exit node about new jobs.\n-define(FETCH_JOBS_FREQUENCY_MS, 500).\n\n%% The time in milliseconds we wait before retrying a failed fetch jobs request.\n-define(FETCH_JOBS_RETRY_MS, 2000).\n\n%% The frequency in milliseconds of asking the pool or CM exit node about new CM jobs.\n-define(FETCH_CM_JOBS_FREQUENCY_MS, 1000).\n\n%% The time in milliseconds we wait before retrying a failed fetch CM jobs request.\n-define(FETCH_CM_JOBS_RETRY_MS, 2000).\n\n%% @doc A collection of mining jobs.\n-record(jobs, {\n\tjobs = [], %% The information about a single VDF output (a \"job\").\n\tpartial_diff = {0, 0}, %% Partial difficulty.\n\tseed = <<>>,\n\tnext_seed = <<>>,\n\tinterval_number = 0,\n\tnext_vdf_difficulty = 0\n}).\n\n%% @doc A mining job.\n-record(job, {\n\toutput = <<>>,\n\tglobal_step_number = 0,\n\tpartition_upper_bound = 0\n}).\n\n%% @doc Partial solution validation response.\n-record(partial_solution_response, {\n\tindep_hash = <<>>,\n\tstatus = <<>>\n}).\n\n%% @doc A set of coordinated mining jobs provided by the pool.\n%%\n%% Miners fetch and submit pool CM jobs via the same POST /pool_cm_jobs endpoint.\n%% When miners fetch jobs, they specify the partitions and leave the job fields empty.\n%% When miners submit jobs, they leave the partitions field empty.\n-record(pool_cm_jobs, {\n\th1_to_h2_jobs = [], % [#mining_candidate{}]\n\th1_read_jobs = [], % [#mining_candidate{}]\n\t%% A list of {[{bucket, ...}, {bucketsize, ...}, {addr, ...}]} or\n\t%% {[{bucket, ...}, {bucketsize, ...}, {addr, ...}, {pdiff, ...}]} JSON structs.\n\tpartitions = []\n}).\n"
  },
  {
    "path": "apps/arweave/include/ar_pricing.hrl",
    "content": "%% @doc Pricing macros.\n\n%% For a new account, we charge the fee equal to the price of uploading\n%% this number of bytes. The fee is about 0.1$ at the time.\n-define(NEW_ACCOUNT_FEE_DATA_SIZE_EQUIVALENT, 20_000_000).\n\n%% The target number of replications.\n-ifdef(AR_TEST).\n-define(N_REPLICATIONS, fun(_MACRO_Height) -> 200 end).\n-else.\n-define(N_REPLICATIONS, fun(MACRO_Height) ->\n\tMACRO_Forks = {\n\t\tar_fork:height_2_5(),\n\t\tar_fork:height_2_6()\n\t},\n\tcase MACRO_Forks of\n\t\t{_MACRO_Fork_2_5, MACRO_Fork_2_6} when MACRO_Height >= MACRO_Fork_2_6 ->\n\t\t\t20;\n\t\t{MACRO_Fork_2_5, _MACRO_Fork_2_6} when MACRO_Height >= MACRO_Fork_2_5 ->\n\t\t\t45;\n\t\t_ ->\n\t\t\t10\n\tend\nend).\n-endif.\n\n%% The miners always receive ?MINER_FEE_SHARE of the transaction fees, even\n%% when the fees are bigger than the required minimum.\n-define(MINER_FEE_SHARE, {1, 21}).\n\n%% When a double-signing proof is provided, we reward the prover with the\n%% ?DOUBLE_SIGNING_PROVER_REWARD_SHARE of the minimum reward among the preceding\n%% ?DOUBLE_SIGNING_REWARD_SAMPLE_SIZE blocks.\n-ifdef(AR_TEST).\n-define(DOUBLE_SIGNING_REWARD_SAMPLE_SIZE, 2).\n-else.\n-define(DOUBLE_SIGNING_REWARD_SAMPLE_SIZE, 100).\n-endif.\n\n%% When a double-signing proof is provided, we reward the prover with the\n%% ?DOUBLE_SIGNING_PROVER_REWARD_SHARE of the minimum reward among the preceding\n%% ?DOUBLE_SIGNING_REWARD_SAMPLE_SIZE blocks.\n-define(DOUBLE_SIGNING_PROVER_REWARD_SHARE, {1, 2}).\n\n%% Every transaction fee has to be at least\n%% X + X * ?MINER_MINIMUM_ENDOWMENT_CONTRIBUTION_SHARE\n%% where X is the amount sent to the endowment pool.\n-define(MINER_MINIMUM_ENDOWMENT_CONTRIBUTION_SHARE, {1, 20}).\n\n%% The fixed USD to AR rate used after the fork 2.6 until the automatic transition to the new\n%% pricing scheme is complete. We fix the rate because the network difficulty is expected\n%% fluctuate a lot around the fork.\n-define(FORK_2_6_PRE_TRANSITION_USD_TO_AR_RATE, {1, 10}).\n\n%% The number of recent blocks with the reserved (temporarily locked) mining rewards.\n-ifdef(AR_TEST).\n\t% testnet value should have same ratio 30:1 to VDF_DIFFICULTY_RETARGET\n\t% BUT. For tests we are using old value\n\t-define(LOCKED_REWARDS_BLOCKS, 3).\n-else.\n\t-ifndef(LOCKED_REWARDS_BLOCKS).\n\t\t-define(LOCKED_REWARDS_BLOCKS, (30 * 24 * 30)).\n\t-endif.\n-endif.\n\n%% The number of recent blocks contributing data points to the continuous estimation\n%% of the average price of storing a gibibyte for a minute. 
A recent subset of the\n%% reward history is used for tracking the reserved mining rewards.\n-ifdef(AR_TEST).\n\t-define(REWARD_HISTORY_BLOCKS, 3).\n-else.\n\t-ifndef(REWARD_HISTORY_BLOCKS).\n\t\t-define(REWARD_HISTORY_BLOCKS, (3 * 30 * 24 * 30)).\n\t-endif.\n-endif.\n\n%% The REWARD_HISTORY_BLOCKS before 2.8.\n-ifdef(AR_TEST).\n\t-define(LEGACY_REWARD_HISTORY_BLOCKS, 3).\n-else.\n\t-ifndef(LEGACY_REWARD_HISTORY_BLOCKS).\n\t\t-define(LEGACY_REWARD_HISTORY_BLOCKS, (30 * 24 * 30)).\n\t-endif.\n-endif.\n\n%% The prices are re-estimated every so many blocks.\n-ifdef(AR_TEST).\n-define(PRICE_ADJUSTMENT_FREQUENCY, 2).\n-else.\n\t-ifndef(PRICE_ADJUSTMENT_FREQUENCY).\n\t\t-define(PRICE_ADJUSTMENT_FREQUENCY, 50).\n\t-endif.\n-endif.\n\n%% An approximation of the natural logarithm of ?PRICE_DECAY_ANNUAL (0.995),\n%% expressed as a decimal fraction, with the precision of math:log.\n-define(LN_PRICE_DECAY_ANNUAL, {-5012541823544286, 1000000000000000000}).\n\n%% The assumed annual decay rate of the Arweave prices, expressed as a decimal fraction.\n-define(PRICE_DECAY_ANNUAL, {995, 1000}). % 0.995, i.e., 0.5% annual decay rate.\n\n%% The precision of computing the natural exponent as a decimal fraction,\n%% expressed as the maximal power of the argument in the Taylor series.\n-define(TX_PRICE_NATURAL_EXPONENT_DECIMAL_FRACTION_PRECISION, 9).\n\n%% When/if the endowment fund runs empty, we increase storage fees and lock a \"Kryder+ rate\n%% multiplier latch\" to make sure we do not increase the fees several times while the\n%% endowment size remains low. Once the endowment is bigger than this constant again,\n%% we open the latch (and will increase the fees again when/if the endowment is empty).\n%% The value is redenominated according the denomination used at the time.\n-ifdef(AR_TEST).\n-define(RESET_KRYDER_PLUS_LATCH_THRESHOLD, 100_000_000_000).\n-else.\n-define(RESET_KRYDER_PLUS_LATCH_THRESHOLD, 10_000_000_000_000_000).\n-endif.\n\n%% The total supply, in Winston (the sum of genesis balances + the total emission).\n%% Does NOT include the additional emission which may start in the far future if and when\n%% the endowment pool runs empty.\n-ifdef(AR_TEST).\n\t%% The debug constant is not always actually equal to the sum of genesis balances plust\n\t%% the total emission. We just set a relatively low value so that we can reproduce\n\t%% autoredenomination in tests.\n\t-define(TOTAL_SUPPLY, 1500000000000).\n-else.\n\t-ifdef(FORKS_RESET).\n\t\t%% This value should be ideally adjusted if the genesis balances\n\t\t%% of a new weave differ from those in mainnet.\n\t\t-define(TOTAL_SUPPLY, 66000015859279336957).\n\t-else.\n\t\t-define(TOTAL_SUPPLY, 66_000_015_859_279_336_957).\n\t-endif.\n-endif.\n\n%% Re-denominate AR (multiply by 1000) when the available supply falls below this\n%% number of units.\n-ifdef(AR_TEST).\n-define(REDENOMINATION_THRESHOLD, 1350000000000).\n-else.\n-define(REDENOMINATION_THRESHOLD, 1000_000_000_000_000_000).\n-endif.\n\n%% The number of blocks which has to pass after we assign the redenomination height before\n%% the redenomination occurs. Transactions without an explicitly assigned denomination are\n%% not allowed in these blocks. 
The motivation is to protect the legacy libraries' users from\n%% an attack where a post-redenomination transaction is included in a pre-redenomination\n%% block, potentially charging the user a thousand times the intended fee or transfer amount.\n-ifdef(AR_TEST).\n-define(REDENOMINATION_DELAY_BLOCKS, 2).\n-else.\n-define(REDENOMINATION_DELAY_BLOCKS, 100).\n-endif.\n\n%% USD to AR exchange rates by height defined together with INITIAL_USD_TO_AR_HEIGHT\n%% and INITIAL_USD_TO_AR_DIFF. The protocol uses these constants to estimate the\n%% USD to AR rate at any block based on the change in the network difficulty and inflation\n%% rewards.\n-define(INITIAL_USD_TO_AR(Height), fun() ->\n\tForks = {\n\t\tar_fork:height_2_4(),\n\t\tar_fork:height_2_5()\n\t},\n\tcase Forks of\n\t\t{_Fork_2_4, Fork_2_5} when Height >= Fork_2_5 ->\n\t\t\t{1, 65};\n\t\t{Fork_2_4, _Fork_2_5} when Height >= Fork_2_4 ->\n\t\t\t?INITIAL_USD_TO_AR_PRE_FORK_2_5\n\tend\nend).\n\n%% The original USD to AR conversion rate, defined as a fraction. Set up at fork 2.4.\n%% Used until the fork 2.5.\n-define(INITIAL_USD_TO_AR_PRE_FORK_2_5, {1, 5}).\n\n%% The network difficulty at the time when the USD to AR exchange rate was\n%% ?INITIAL_USD_TO_AR(Height). Used to account for the change in the network\n%% difficulty when estimating the new USD to AR rate.\n-define(INITIAL_USD_TO_AR_DIFF(Height), fun() ->\n\tForks = {\n\t\tar_fork:height_1_9(),\n\t\tar_fork:height_2_2(),\n\t\tar_fork:height_2_5()\n\t},\n\tcase Forks of\n\t\t{_Fork_1_9, _Fork_2_2, Fork_2_5} when Height >= Fork_2_5 ->\n\t\t\t32;\n\t\t{_Fork_1_9, Fork_2_2, _Fork_2_5} when Height >= Fork_2_2 ->\n\t\t\t34;\n\t\t{Fork_1_9, _Fork_2_2, _Fork_2_5} when Height < Fork_1_9 ->\n\t\t\t28;\n\t\t_ ->\n\t\t\t29\n\tend\nend).\n\n%% The network height at the time when the USD to AR exchange rate was\n%% ?INITIAL_USD_TO_AR(Height). Used to account for the change in inflation\n%% rewards when estimating the new USD to AR rate.\n-define(INITIAL_USD_TO_AR_HEIGHT(Height), fun() ->\n\tForks = {\n\t\tar_fork:height_1_9(),\n\t\tar_fork:height_2_2(),\n\t\tar_fork:height_2_5(),\n\t\tar_fork:height_2_6()\n\t},\n\t%% In case the fork heights are reset to 0 (e.g. on testnets),\n\t%% set the initial height to 1 - the height where the inflation\n\t%% emission essentially begins.\n\tcase Forks of\n\t\t{_Fork_1_9, _Fork_2_2, _Fork_2_5, Fork_2_6} when Height >= Fork_2_6 ->\n\t\t\tmax(Fork_2_6, 1);\n\t\t{_Fork_1_9, _Fork_2_2, Fork_2_5, _Fork_2_6} when Height >= Fork_2_5 ->\n\t\t\tmax(Fork_2_5, 1);\n\t\t{_Fork_1_9, Fork_2_2, _Fork_2_5, _Fork_2_6} when Height >= Fork_2_2 ->\n\t\t\tmax(Fork_2_2, 1);\n\t\t{Fork_1_9, _Fork_2_2, _Fork_2_5, _Fork_2_6} when Height < Fork_1_9 ->\n\t\t\tmax(ar_fork:height_1_8(), 1);\n\t\t{Fork_1_9, _Fork_2_2, _Fork_2_5, _Fork_2_6} ->\n\t\t\tmax(Fork_1_9, 1)\n\tend\nend).\n\n%% The base wallet generation fee in USD, defined as a fraction.\n%% The amount in AR depends on the current difficulty and height.\n%% Used until the transition to the new fee calculation method is complete.\n-define(WALLET_GEN_FEE_USD, {1, 10}).\n\n%% The estimated historical price of storing 1 GB of data for the year 2018,\n%% expressed as a decimal fraction.\n%% Used until the transition to the new fee calculation method is complete.\n-define(USD_PER_GBY_2018, {1045, 1000000}). 
% 0.001045\n\n%% The estimated historical price of storing 1 GB of data for the year 2019,\n%% expressed as a decimal fraction.\n%% Used until the transition to the new fee calculation method is complete.\n-define(USD_PER_GBY_2019, {925, 1000000}). % 0.000925\n\n-define(STATIC_2_6_8_FEE_WINSTON, 858_000_000_000).\n\n%% The largest possible multiplier for a one-step increase of the USD to AR Rate.\n-define(USD_TO_AR_MAX_ADJUSTMENT_UP_MULTIPLIER, {1005, 1000}).\n\n%% The largest possible multiplier for a one-step decrease of the USD to AR Rate.\n-define(USD_TO_AR_MAX_ADJUSTMENT_DOWN_MULTIPLIER, {995, 1000}).\n\n%% Reduce the USD to AR fraction if both the dividend and the divisor become bigger than this.\n-ifdef(AR_TEST).\n-define(USD_TO_AR_FRACTION_REDUCTION_LIMIT, 100).\n-else.\n-define(USD_TO_AR_FRACTION_REDUCTION_LIMIT, 1000000).\n-endif.\n\n%% Every transaction fee has to be at least X + X * ?MINING_REWARD_MULTIPLIER\n%% where X is the amount sent to the endowment pool.\n%% Used until the transition to the new fee calculation method is complete.\n-ifdef(AR_TEST).\n-define(MINING_REWARD_MULTIPLIER, {2, 10000}).\n-else.\n-define(MINING_REWARD_MULTIPLIER, {2, 10}).\n-endif.\n\n%% The USD to AR exchange rate for a new chain, e.g. a testnet.\n-define(NEW_WEAVE_USD_TO_AR_RATE, ?INITIAL_USD_TO_AR_PRE_FORK_2_5).\n\n%% Initial $/AR exchange rate. Used until the fork 2.4.\n-define(INITIAL_USD_PER_AR(Height), fun() ->\n\tForks = {\n\t\tar_fork:height_1_9(),\n\t\tar_fork:height_2_2()\n\t},\n\tcase Forks of\n\t\t{Fork_1_9, _Fork_2_2} when Height < Fork_1_9 ->\n\t\t\t1.5;\n\t\t{_Fork_1_9, Fork_2_2} when Height >= Fork_2_2 ->\n\t\t\t4;\n\t\t_ ->\n\t\t\t1.2\n\tend\nend).\n\n%% Base wallet generation fee. Used until fork 2.2.\n-define(WALLET_GEN_FEE, 250000000000).\n"
  },
  {
    "path": "apps/arweave/include/ar_repack.hrl",
    "content": "-ifndef(AR_REPACK_HRL).\n-define(AR_REPACK_HRL, true).\n\n-record(repack_chunk, {\n\tstate = needs_chunk :: \n\t\tneeds_chunk | invalid | entropy_only | already_repacked | needs_data_path |\n\t\tneeds_repack | needs_entropy | needs_encipher | needs_write | error,\n\tmetadata = not_set :: not_set | not_found | #chunk_metadata{},\n\toffsets = not_set :: not_set | not_found | #chunk_offsets{},\n\t%% source_packing is used to track the current packing format of the chunk. It starts\n\t%% set to the original format of the chunk, but will be updated as the chunk is\n\t%% repacked (sometimes through intermediate formats) and ultimately will be set equal\n\t%% to target_packing.\n\tsource_packing = not_set :: not_set | not_found | ar_packing:packing(),\n\ttarget_packing = not_set :: not_set | not_found | ar_packing:packing(),\n\tchunk = not_set :: not_set | not_found | binary(),\n\tsource_entropy = not_set :: not_set | binary(),\n\ttarget_entropy = not_set :: not_set | binary()\n}).\n\n-endif.\n"
  },
  {
    "path": "apps/arweave/include/ar_sup.hrl",
    "content": "%% The number of milliseconds the supervisor gives every process for shutdown.\n-ifdef(AR_TEST).\n-define(SHUTDOWN_TIMEOUT, 30_000).\n-else.\n-define(SHUTDOWN_TIMEOUT, 300_000).\n-endif.\n\n-define(CHILD(I, Type), #{\n\tid => I,\n\tstart => {I, start_link, []},\n\trestart => permanent,\n\tshutdown => ?SHUTDOWN_TIMEOUT,\n\ttype => Type,\n\tmodules => [I]\n}).\n\n-define(CHILD_WITH_ARGS(I, Type, Name, Args), #{\n\tid => Name,\n\tstart => {I, start_link, Args},\n\trestart => permanent,\n\tshutdown => ?SHUTDOWN_TIMEOUT,\n\ttype => Type,\n\tmodules => [Name]\n}).\n\n%% From the Erlang docs:\n%%\n%% An integer time-out value means that the supervisor tells the child process to terminate\n%% by calling exit(Child,shutdown) and then wait for an exit signal with reason shutdown back\n%% from the child process. If no exit signal is received within the specified number of\n%% milliseconds, the child process is unconditionally terminated using exit(Child,kill).\n%% If the child process is another supervisor, the shutdown time must be set to infinity to\n%% give the subtree ample time to shut down.\n-define(CHILD_SUP(I, Type), #{\n\tid => I,\n\tstart => {I, start_link, []},\n\trestart => permanent,\n\tshutdown => infinity,\n\ttype => Type,\n\tmodules => [I]\n}).\n"
  },
  {
    "path": "apps/arweave/include/ar_sync_buckets.hrl",
    "content": "%% The size in bytes of a bucket in sync buckets. The bigger the bucket,\n%% the more compact the structure is, but also the higher the number of \"misses\"\n%% encountered when asking peers about the presence of particular chunks.\n%% If the serialized buckets do not fit in ?MAX_SYNC_BUCKETS_SIZE, the bucket\n%% size is doubled until they fit.\n-ifdef(AR_TEST).\n-define(DEFAULT_SYNC_BUCKET_SIZE, 10000000).\n-else.\n-define(DEFAULT_SYNC_BUCKET_SIZE, 10_000_000_000). % 10 GB\n-endif.\n\n%% The maximum ratio between a peer's reported bucket size and the expected bucket size.\n-define(MAX_SYNC_BUCKET_SIZE_RATIO, 4096).\n"
  },
  {
    "path": "apps/arweave/include/ar_vdf.hrl",
    "content": "% 25 checkpoints 40 ms each = 1000 ms\n-define(VDF_CHECKPOINT_COUNT_IN_STEP, 25).\n\n-define(VDF_BYTE_SIZE, 32).\n\n%% Typical ryzen 5900X iterations for 1 sec\n-define(VDF_SHA_1S, 15_000_000).\n\n-ifndef(VDF_DIFFICULTY).\n\t-define(VDF_DIFFICULTY, ?VDF_SHA_1S div ?VDF_CHECKPOINT_COUNT_IN_STEP).\n-endif.\n\n-ifdef(AR_TEST).\n\t% NOTE. VDF_DIFFICULTY_RETARGET should be > 10 because it's > 10 in mainnet\n\t% So VDF difficulty should change slower than difficulty\n\t-define(VDF_DIFFICULTY_RETARGET, 20).\n\t-define(VDF_HISTORY_CUT, 2).\n-else.\n\t-ifndef(VDF_DIFFICULTY_RETARGET).\n\t\t-define(VDF_DIFFICULTY_RETARGET, 720).\n\t-endif.\n\t-ifndef(VDF_HISTORY_CUT).\n\t\t-define(VDF_HISTORY_CUT, 50).\n\t-endif.\n-endif.\n\n\n"
  },
  {
    "path": "apps/arweave/include/ar_verify_chunks.hrl",
    "content": "-ifndef(AR_VERIFY_CHUNKS_HRL).\n-define(AR_VERIFY_CHUNKS_HRL, true).\n\n-record(verify_report, {\n\tstart_time :: non_neg_integer(),\n\ttotal_error_bytes = 0 :: non_neg_integer(),\n\ttotal_error_chunks = 0 :: non_neg_integer(),\n\terror_bytes = #{} :: #{atom() => non_neg_integer()},\n\terror_chunks = #{} :: #{atom() => non_neg_integer()},\n\tbytes_processed = 0 :: non_neg_integer(),\n\tprogress = 0 :: non_neg_integer(),\n\tstatus = not_ready :: not_ready | running | done\n}).\n\n-record(sample_report, {\n\tsamples = 0 :: non_neg_integer(),\n\ttotal = 0 :: non_neg_integer(),\n\tsuccess = 0 :: non_neg_integer(),\n\tfailure = 0 :: non_neg_integer()\n}).\n\n-define(SAMPLE_CHUNK_COUNT, 1000).\n\n-endif.\n"
  },
  {
    "path": "apps/arweave/include/ar_wallets.hrl",
    "content": "%% @doc The maximum number of wallets served via /wallet_list/<root_hash>[/<cursor>].\n-ifdef(AR_TEST).\n-define(WALLET_LIST_CHUNK_SIZE, 2).\n-else.\n-define(WALLET_LIST_CHUNK_SIZE, 2500).\n-endif.\n\n%% @doc The upper limit for the size of the response fetched from\n%% /wallet_list/<root_hash>[/<cursor>], when serialized using Erlang Term Format.\n%% The actual size of the binary for so many wallets is a few kilobytes smaller,\n%% so the response may contain some metadata.\n-define(MAX_SERIALIZED_WALLET_LIST_CHUNK_SIZE, ?WALLET_LIST_CHUNK_SIZE * 202). % = 505000\n"
  },
  {
    "path": "apps/arweave/include/user_default.hrl",
    "content": "%%\n%% This file is only intended to be included into the user_default.erl file.\n%% The reason to include these headers into the user_default module is to\n%% enable records to be rendered properly in the REPL.\n%% It might be a good idea to also include some third-party library headers\n%% here as well (e.g. cowboy's request, etc.)\n%%\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave/include/ar_blacklist_middleware.hrl\").\n-include_lib(\"arweave/include/ar_block.hrl\").\n-include_lib(\"arweave/include/ar_chain_stats.hrl\").\n-include_lib(\"arweave/include/ar_chunk_storage.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"arweave/include/ar_consensus.hrl\").\n-include_lib(\"arweave/include/ar_data_discovery.hrl\").\n-include_lib(\"arweave/include/ar_data_sync.hrl\").\n-include_lib(\"arweave/include/ar_header_sync.hrl\").\n-include_lib(\"arweave/include/ar_inflation.hrl\").\n-include_lib(\"arweave/include/ar_mining.hrl\").\n-include_lib(\"arweave/include/ar_peers.hrl\").\n-include_lib(\"arweave/include/ar_poa.hrl\").\n-include_lib(\"arweave/include/ar_pool.hrl\").\n-include_lib(\"arweave/include/ar_pricing.hrl\").\n-include_lib(\"arweave/include/ar_sup.hrl\").\n-include_lib(\"arweave/include/ar_sync_buckets.hrl\").\n-include_lib(\"arweave/include/ar_vdf.hrl\").\n-include_lib(\"arweave/include/ar_verify_chunks.hrl\").\n-include_lib(\"arweave/include/ar_wallets.hrl\").\n"
  },
  {
    "path": "apps/arweave/src/ar.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2025 (c) Arweave\n%%% @author Arweave Team\n%%% @doc Arweave server entrypoint and basic utilities.\n%%% @end\n%%%===================================================================\n-module(ar).\n-behaviour(application).\n-compile(warnings_as_errors).\n-export([\n\tbenchmark_hash/0,\n\tbenchmark_hash/1,\n\tbenchmark_packing/0,\n\tbenchmark_packing/1,\n\tbenchmark_vdf/0,\n\tbenchmark_vdf/1,\n\tconsole/1,\n\tconsole/2,\n\tcreate_ecdsa_wallet/0,\n\tcreate_ecdsa_wallet/1,\n\tcreate_wallet/0,\n\tcreate_wallet/1,\n\tdocs/0,\n\te2e/0,\n\te2e/1,\n\tmain/1,\n\tprep_stop/1,\n\tshell/0,\n\tshell_e2e/0,\n\tshell_localnet/0,\n\tshell_localnet/1,\n\tshutdown/1,\n\tstart/1,\n\tstart/2,\n\tstart_dependencies/0,\n\tstop/1,\n\tstop_dependencies/0,\n\tstop_shell/0,\n\tstop_shell_e2e/0,\n\tstop_shell_localnet/0,\n\ttests/0,\n\ttests/1\n]).\n\n-include(\"ar.hrl\").\n-include(\"ar_consensus.hrl\").\n-include(\"ar_verify_chunks.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n%% Supported feature flags (default behaviour)\n% http_logging (false)\n% disk_logging (false)\n% miner_logging (true)\n% subfield_queries (false)\n% blacklist (true)\n% time_syncing (true)\n\n%%--------------------------------------------------------------------\n%% @doc Command line program entrypoint. Takes a list of arguments.\n%% @end\n%%--------------------------------------------------------------------\nmain(\"\") ->\n\tar_cli_parser:show_help(),\n\tinit:stop(1);\nmain(Args) ->\n\t% arweave_config must be the first application started, it\n\t% will keep the configuration for all other arweave\n\t% applications or processes.\n\tarweave_config:start(),\n\n\t% let parse the arguments and initialize arweave_config. The\n\t% idea here is to let full control over the configuration to\n\t% arweave_config and then return the correct configuration\n\t% file. 
In case of error, the application is stopped.\n\tcase arweave_config_bootstrap:start(Args) of\n\t\t{ok, Config} ->\n\t\t\tstart(Config);\n\t\tElse ->\n\t\t\tar_cli_parser:show_help(),\n\t\t\tinit:stop(1),\n\t\t\t{error, Else}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc Start an Arweave node on this BEAM.\n%% @end\n%%--------------------------------------------------------------------\nstart(Port) when is_integer(Port) ->\n\tstart(#config{ port = Port });\nstart(Config) ->\n\t%% Start the logging system.\n\tcase os:getenv(\"TERM\") of\n\t\t\"dumb\" ->\n\t\t\t% Set logger to output all levels of logs to the console\n\t\t\t% when running in a dumb terminal.\n\t\t\tlogger:add_handler(console, logger_std_h, #{level => all});\n\t\t_->\n\t\t\tok\n\tend,\n\tcase ar_config:validate_config(Config) of\n\t\ttrue ->\n\t\t\tok;\n\t\tfalse ->\n\t\t\ttimer:sleep(2000),\n\t\t\tinit:stop(1)\n\tend,\n\tConfig2 = ar_config:set_dependent_flags(Config),\n\tok = arweave_config:set_env(Config2),\n\tfilelib:ensure_dir(Config2#config.log_dir ++ \"/\"),\n\twarn_if_single_scheduler(),\n\tcase Config2#config.nonce_limiter_server_trusted_peers of\n\t\t[] ->\n\t\t\tVDFSpeed = ar_bench_vdf:run_benchmark(),\n\t\t\t?LOG_INFO([{event, vdf_benchmark}, {vdf_s, VDFSpeed / 1000000}]);\n\t\t_ ->\n\t\t\tok\n\tend,\n\tstart_dependencies().\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc application `start/2' callback. function used to start arweave\n%% application using `application:start/1' or while using OTP.\n%% @end\n%%--------------------------------------------------------------------\nstart(normal, _Args) ->\n\t% Load configuration from environment variable, it will\n\t% impact only feature supporting arweave_config.\n\tarweave_config_environment:load(),\n\n\t% Load the old configuration from arweave_config.\n\t{ok, Config} = arweave_config:get_env(),\n\n\t% arweave_config can now switch in runtime mode. 
Setting\n\t% parameters without \"runtime\" flag set to true will fail now.\n\tarweave_config:runtime(),\n\n\t%% Set erlang socket backend\n\tpersistent_term:put({kernel, inet_backend}, Config#config.'socket.backend'),\n\n\t%% Configure logger\n\tar_logger:init(Config),\n\n\t?LOG_INFO(\"========== Starting Arweave Node  ==========\"),\n\tar_config:log_config(Config),\n\n\t%% Start the Prometheus metrics subsystem.\n\tprometheus_registry:register_collector(prometheus_process_collector),\n\tprometheus_registry:register_collector(ar_metrics_collector),\n\n\t%% Register custom metrics.\n\tar_metrics:register(),\n\n\t%% Start other apps which we depend on.\n\tset_mining_address(Config),\n\tar_chunk_storage:run_defragmentation(),\n\n\t%% Start Arweave.\n\tar_sup:start_link().\n\nset_mining_address(#config{ mining_addr = not_set } = C) ->\n\tcase ar_wallet:get_or_create_wallet([{?RSA_SIGN_ALG, 65537}]) of\n\t\t{error, Reason} ->\n\t\t\tar:console(\"~nFailed to create a wallet, reason: ~p.~n\",\n\t\t\t\t[io_lib:format(\"~p\", [Reason])]),\n\t\t\ttimer:sleep(500),\n\t\t\tinit:stop(1);\n\t\tW ->\n\t\t\tAddr = ar_wallet:to_address(W),\n\t\t\tar:console(\"~nSetting the mining address to ~s.~n\", [ar_util:encode(Addr)]),\n\t\t\tC2 = C#config{ mining_addr = Addr },\n\t\t\tarweave_config:set_env(C2),\n\t\t\tset_mining_address(C2)\n\tend;\nset_mining_address(#config{ mine = false }) ->\n\tok;\nset_mining_address(#config{ mining_addr = Addr, cm_exit_peer = CmExitPeer,\n\t\tis_pool_client = PoolClient }) ->\n\tcase ar_wallet:load_key(Addr) of\n\t\tnot_found ->\n\t\t\tcase {CmExitPeer, PoolClient} of\n\t\t\t\t{not_set, false} ->\n\t\t\t\t\tar:console(\"~nThe mining key for the address ~s was not found.\"\n\t\t\t\t\t\t\" Make sure you placed the file in [data_dir]/~s (the node is looking for\"\n\t\t\t\t\t\t\" [data_dir]/~s/[mining_addr].json or \"\n\t\t\t\t\t\t\"[data_dir]/~s/arweave_keyfile_[mining_addr].json file).\"\n\t\t\t\t\t\t\" Do not specify \\\"mining_addr\\\" if you want one to be generated.~n~n\",\n\t\t\t\t\t\t[ar_util:encode(Addr), ?WALLET_DIR, ?WALLET_DIR, ?WALLET_DIR]),\n\t\t\t\t\tinit:stop(1);\n\t\t\t\t_ ->\n\t\t\t\t\tok\n\t\t\tend;\n\t\t_Key ->\n\t\t\tok\n\tend.\n\ncreate_wallet([DataDir]) ->\n\tcreate_wallet(DataDir, ?RSA_KEY_TYPE);\ncreate_wallet(_) ->\n\tcreate_wallet_fail(?RSA_KEY_TYPE).\n\ncreate_ecdsa_wallet() ->\n\tcreate_wallet_fail(?ECDSA_KEY_TYPE).\n\ncreate_ecdsa_wallet([DataDir]) ->\n\tcreate_wallet(DataDir, ?ECDSA_KEY_TYPE);\ncreate_ecdsa_wallet(_) ->\n\tcreate_wallet_fail(?ECDSA_KEY_TYPE).\n\ncreate_wallet(DataDir, KeyType) ->\n\tcase filelib:is_dir(DataDir) of\n\t\tfalse ->\n\t\t\tcreate_wallet_fail(KeyType);\n\t\ttrue ->\n\t\t\tok = arweave_config:set_env(#config{ data_dir = DataDir }),\n\t\t\tcase ar_wallet:new_keyfile(KeyType) of\n\t\t\t\t{error, Reason} ->\n\t\t\t\t\tar:console(\"Failed to create a wallet, reason: ~p.~n~n\",\n\t\t\t\t\t\t\t[io_lib:format(\"~p\", [Reason])]),\n\t\t\t\t\ttimer:sleep(500),\n\t\t\t\t\tinit:stop(1);\n\t\t\t\tW ->\n\t\t\t\t\tAddr = ar_wallet:to_address(W),\n\t\t\t\t\tar:console(\"Created a wallet with address ~s.~n\", [ar_util:encode(Addr)]),\n\t\t\t\t\tinit:stop(1)\n\t\t\tend\n\tend.\n\ncreate_wallet() ->\n\tcreate_wallet_fail(?RSA_KEY_TYPE).\n\ncreate_wallet_fail(?RSA_KEY_TYPE) ->\n\tio:format(\"Usage: ./bin/create-wallet [data_dir]~n\"),\n\tinit:stop(1);\ncreate_wallet_fail(?ECDSA_KEY_TYPE) ->\n\tio:format(\"Usage: ./bin/create-ecdsa-wallet [data_dir]~n\"),\n\tinit:stop(1).\n\nbenchmark_vdf() ->\n\tbenchmark_vdf([]).\nbenchmark_vdf(Args) 
->\n\tok = arweave_config:set_env(#config{}),\n\tar_bench_vdf:run_benchmark_from_cli(Args),\n\tinit:stop(1).\n\nbenchmark_hash() ->\n\tbenchmark_hash([]).\nbenchmark_hash(Args) ->\n\tar_bench_hash:run_benchmark_from_cli(Args),\n\tinit:stop(1).\n\nbenchmark_packing() ->\n\tbenchmark_packing([]).\nbenchmark_packing(Args) ->\n\tar_bench_packing:run_benchmark_from_cli(Args),\n\tinit:stop(1).\n\nshutdown([NodeName]) ->\n\trpc:cast(NodeName, init, stop, []).\n\nprep_stop(State) ->\n\t% the service will be stopped, ar_shutdown_manager\n\t% must be noticed and its state modified.\n\t_ = ar_shutdown_manager:shutdown(),\n\n\t% When arweave is stopped, the first step is to stop\n\t% accepting connections from other peers, and then\n\t% start the shutdown procedure.\n\tok = ranch:suspend_listener(ar_http_iface_listener),\n\n\t% all timers/intervals must be stopped.\n\tar_timer:terminate_timers(),\n\tState.\n\nstop(_State) ->\n\t?LOG_INFO([{stop, ?MODULE}]).\n\nstop_dependencies() ->\n\t?LOG_INFO(\"========== Stopping Arweave Node  ==========\"),\n\tapplication:stop(arweave_limiter),\n\t{ok, [_Kernel, _Stdlib, _SASL, _OSMon | Deps]} = application:get_key(arweave, applications),\n\tlists:foreach(fun(Dep) -> application:stop(Dep) end, lists:reverse(Deps)).\n\nstart_dependencies() ->\n\tok = arweave_limiter:start(),\n\t{ok, _} = application:ensure_all_started(arweave, permanent),\n\tok.\n\n%% One scheduler => one dirty scheduler => Calculating a RandomX hash, e.g.\n%% for validating a block, will be blocked on initializing a RandomX dataset,\n%% which takes minutes.\nwarn_if_single_scheduler() ->\n\tcase erlang:system_info(schedulers_online) of\n\t\t1 ->\n\t\t\t?LOG_WARNING(\n\t\t\t\t\"WARNING: Running only one CPU core / Erlang scheduler may cause issues.\");\n\t\t_ ->\n\t\t\tok\n\tend.\n\nshell() ->\n\tar_test_runner:start_shell(test).\n\nshell_e2e() ->\n\tar_test_runner:start_shell(e2e).\n\nstop_shell() ->\n\tar_test_runner:stop_shell(test).\n\nstop_shell_e2e() ->\n\tar_test_runner:stop_shell(e2e).\n\n%% @doc Run unit tests.\n%% Usage: ./bin/test [module | module:test ...]\ntests()     -> ar_test_runner:run(test).\ntests(Args) -> ar_test_runner:run(test, Args).\n\nshell_localnet() ->\n\tshell_localnet([]).\n\nshell_localnet(Args) ->\n\ttry\n\t\tcase Args of\n\t\t\t[] ->\n\t\t\t\tar_localnet:start(),\n\t\t\t\tio:format(\"Shell is ready.~n\");\n\t\t\t[SnapshotDir] ->\n\t\t\t\tar_localnet:start(SnapshotDir),\n\t\t\t\tio:format(\"Shell is ready.~n\");\n\t\t\t_ ->\n\t\t\t\tio:format(\"Usage: ./bin/localnet_shell [snapshot_dir]~n\"),\n\t\t\t\terlang:error({invalid_args, Args})\n\t\tend\n\tcatch\n\t\tType:Reason:S ->\n\t\t\tio:format(\"Failed to start localnet due to ~p:~p:~p~n\", [Type, Reason, S]),\n\t\t\tinit:stop(1)\n\tend.\n\nstop_shell_localnet() ->\n\tar_test_node:stop(),\n\tinit:stop().\n\n%% @doc Run e2e tests.\n%% Usage: ./bin/e2e [module | module:test ...]\ne2e()     -> ar_test_runner:run(e2e).\ne2e(Args) -> ar_test_runner:run(e2e, Args).\n\n%% @doc Generate the project documentation.\ndocs() ->\n\tMods =\n\t\tlists:filter(\n\t\t\tfun(File) -> filename:extension(File) == \".erl\" end,\n\t\t\telement(2, file:list_dir(\"apps/arweave/src\"))\n\t\t),\n\tedoc:files(\n\t\t[\"apps/arweave/src/\" ++ Mod || Mod <- Mods],\n\t\t[\n\t\t\t{dir, \"source_code_docs\"},\n\t\t\t{hidden, true},\n\t\t\t{private, true}\n\t\t]\n\t).\n\n\n-ifdef(AR_TEST).\nconsole(Format) ->\n\t?LOG_INFO(io_lib:format(Format, [])).\n\nconsole(Format, Params) ->\n\t?LOG_INFO(io_lib:format(Format, Params)).\n-else.\nconsole(Format) 
->\n\tio:format(Format).\n\nconsole(Format, Params) ->\n\tio:format(Format, Params).\n-endif.\n"
  },
  {
    "path": "apps/arweave/src/ar_base32.erl",
    "content": "%% @doc This module is very strongly inspired by OTP's base64 source code.\n%% See https://github.com/erlang/otp/blob/93ec8bb2dbba9456395a54551fe9f1e0f86184b1/lib/stdlib/src/base64.erl#L66-L80\n-module(ar_base32).\n\n-export([encode/1]).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Encode data into a lowercase unpadded RFC 4648 base32 alphabet\nencode(Bin) when is_binary(Bin) ->\n\tencode_binary(Bin, <<>>).\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nencode_binary(<<>>, A) ->\n\tA;\nencode_binary(<<B1:8>>, A) ->\n\t<<A/bits, (b32e(B1 bsr 3)):8, (b32e((B1 band 7) bsl 2)):8>>;\nencode_binary(<<B1:8, B2:8>>, A) ->\n\tBB = (B1 bsl 8) bor B2,\n\t<<A/bits,\n\t\t(b32e(BB bsr 11)):8,\n\t\t(b32e((BB bsr 6) band 31)):8,\n\t\t(b32e((BB bsr 1) band 31)):8,\n\t\t(b32e((BB bsl 4) band 31)):8>>;\nencode_binary(<<B1:8, B2:8, B3:8>>, A) ->\n\tBB = (B1 bsl 16) bor (B2 bsl 8) bor B3,\n\t<<A/bits,\n\t\t(b32e(BB bsr 19)):8,\n\t\t(b32e((BB bsr 14) band 31)):8,\n\t\t(b32e((BB bsr 9) band 31)):8,\n\t\t(b32e((BB bsr 4) band 31)):8,\n\t\t(b32e((BB bsl 1) band 31)):8>>;\nencode_binary(<<B1:8, B2:8, B3:8, B4:8>>, A) ->\n\tBB = (B1 bsl 24) bor (B2 bsl 16) bor (B3 bsl 8) bor B4,\n\t<<A/bits,\n\t\t(b32e(BB bsr 27)):8,\n\t\t(b32e((BB bsr 22) band 31)):8,\n\t\t(b32e((BB bsr 17) band 31)):8,\n\t\t(b32e((BB bsr 12) band 31)):8,\n\t\t(b32e((BB bsr 7) band 31)):8,\n\t\t(b32e((BB bsr 2) band 31)):8,\n\t\t(b32e((BB bsl 3) band 31)):8>>;\nencode_binary(<<B1:8, B2:8, B3:8, B4:8, B5:8, Ls/bits>>, A) ->\n\tBB = (B1 bsl 32) bor (B2 bsl 24) bor (B3 bsl 16) bor (B4 bsl 8) bor B5,\n\tencode_binary(\n\t\tLs,\n\t\t<<A/bits,\n\t\t\t(b32e(BB bsr 35)):8,\n\t\t\t(b32e((BB bsr 30) band 31)):8,\n\t\t\t(b32e((BB bsr 25) band 31)):8,\n\t\t\t(b32e((BB bsr 20) band 31)):8,\n\t\t\t(b32e((BB bsr 15) band 31)):8,\n\t\t\t(b32e((BB bsr 10) band 31)):8,\n\t\t\t(b32e((BB bsr 5) band 31)):8,\n\t\t\t(b32e(BB band 31)):8>>\n\t).\n\n-compile({inline, [{b32e, 1}]}).\nb32e(X) ->\n\telement(X+1, {\n\t\t$a, $b, $c, $d, $e, $f, $g, $h, $i, $j, $k, $l, $m,\n\t\t$n, $o, $p, $q, $r, $s, $t, $u, $v, $w, $x, $y, $z,\n\t\t$2, $3, $4, $5, $6, $7, $8, $9\n\t}).\n"
  },
  {
    "path": "apps/arweave/src/ar_bench_hash.erl",
    "content": "-module(ar_bench_hash).\n\n-export([run_benchmark_from_cli/1, run_benchmark/1]).\n\n-include_lib(\"arweave/include/ar_consensus.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\nrun_benchmark_from_cli(Args) ->\n\tRandomX = get_flag_value(Args, \"randomx\", \"512\"),\n\tJIT = list_to_integer(get_flag_value(Args, \"jit\", \"1\")),\n\tLargePages = list_to_integer(get_flag_value(Args, \"large_pages\", \"1\")),\n\tHardwareAES = list_to_integer(get_flag_value(Args, \"hw_aes\", \"1\")),\n\n\tRandomXMode = case RandomX of\n\t\t\"512\" -> rx512;\n\t\t\"4096\" -> rx4096;\n\t\t\"squared\" -> rxsquared;\n\t\t_ -> show_help()\n\tend,\n\n\tSchedulers = erlang:system_info(dirty_cpu_schedulers_online),\n\tRandomXState = ar_mine_randomx:init_fast2(\n\t\tRandomXMode, ?RANDOMX_PACKING_KEY, JIT, LargePages, Schedulers),\n\t{H0, H1} = run_benchmark(RandomXState, JIT, LargePages, HardwareAES),\n\tH0String = io_lib:format(\"~.3f\", [H0 / 1000]),\n\tH1String = io_lib:format(\"~.3f\", [H1 / 1000]),\n\tar:console(\"Hashing benchmark~nH0: ~s ms~nH1/H2: ~s ms~n\", [H0String, H1String]).\n\nget_flag_value([], _, DefaultValue) ->\n    DefaultValue;\nget_flag_value([Flag | [Value | _Tail]], TargetFlag, _DefaultValue) when Flag == TargetFlag ->\n    Value;\nget_flag_value([_ | Tail], TargetFlag, DefaultValue) ->\n    get_flag_value(Tail, TargetFlag, DefaultValue).\n\nshow_help() ->\n\tio:format(\"~nUsage: benchmark-hash [options]~n\"),\n\tio:format(\"Options:~n\"),\n\tio:format(\"  randomx <512|4096> (default: 512)~n\"),\n\tio:format(\"  jit <0|1> (default: 1)~n\"),\n\tio:format(\"  large_pages <0|1> (default: 1)~n\"),\n\tio:format(\"  hw_aes <0|1> (default: 1)~n\"),\n\tinit:stop(1).\n\nrun_benchmark(RandomXState) ->\n\trun_benchmark(RandomXState, ar_mine_randomx:jit(),\n\t\tar_mine_randomx:large_pages(), ar_mine_randomx:hardware_aes()).\n\nrun_benchmark(RandomXState, JIT, LargePages, HardwareAES) ->\n\tNonceLimiterOutput = crypto:strong_rand_bytes(32),\n\tSeed = crypto:strong_rand_bytes(32),\n\tMiningAddr = crypto:strong_rand_bytes(32),\n\tIterations = 1000,\n\t{H0Time, _} = timer:tc(fun() ->\n\t\tlists:foreach(\n\t\t\tfun(I) ->\n\t\t\t\tData = << NonceLimiterOutput:32/binary,\n\t\t\t\tI:256, Seed:32/binary, MiningAddr/binary >>,\n\t\t\t\tar_mine_randomx:hash(RandomXState, Data, JIT, LargePages, HardwareAES)\n\t\t\tend,\n\t\t\tlists:seq(1, Iterations))\n\t\tend),\n\tH0Microseconds = H0Time / Iterations,\n\n\tH0 = crypto:strong_rand_bytes(32),\n\tChunk = crypto:strong_rand_bytes(?DATA_CHUNK_SIZE),\n\t{H1Time, _} = timer:tc(fun() ->\n\t\tlists:foreach(\n\t\t\tfun(_) ->\n\t\t\t\tNonce = rand:uniform(1000),\n\t\t\t\tPreimage = crypto:hash(sha256, << H0:32/binary, Nonce:64, Chunk/binary >>),\n\t\t\t\tcrypto:hash(sha256, << H0:32/binary, Preimage/binary >>)\n\t\t\tend,\n\t\t\tlists:seq(1, Iterations))\n\t\tend),\n\tH1Microseconds = H1Time / Iterations,\n\n\t{H0Microseconds, H1Microseconds}."
  },
  {
    "path": "apps/arweave/src/ar_bench_packing.erl",
    "content": "-module(ar_bench_packing).\n\n-export([show_help/0, run_benchmark_from_cli/1, run_benchmark/1]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave/include/ar_consensus.hrl\").\n-include_lib(\"arweave/include/ar_chunk_storage.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"kernel/include/file.hrl\").\n\n-define(TB, 1_000_000_000_000).\n-define(FOOTPRINTS_PER_ITERATION, 1).\n\n%%%===================================================================\n%%% CLI and Entry Points\n%%%===================================================================\n\nrun_benchmark_from_cli(Args) ->\n\tConfig = parse_cli_args(Args),\n\tvalidate_config(Config),\n\trun_benchmark(Config).\n\nparse_cli_args(Args) ->\n\tThreads = list_to_integer(get_flag_value(Args, \"threads\",\n\t\tinteger_to_list(erlang:system_info(dirty_cpu_schedulers_online)))),\n\tSamples = list_to_integer(get_flag_value(Args, \"samples\", \"20\")),\n\tLargePages = list_to_integer(get_flag_value(Args, \"large_pages\", \"1\")),\n\tRatedSpeedMB = list_to_integer(get_flag_value(Args, \"rated_speed\", \"250\")),\n\tReadLoadThreads = list_to_integer(get_flag_value(Args, \"read_load\", \"2\")),\n\tReadFileGB = list_to_integer(get_flag_value(Args, \"read_file_gb\", \"4\")),\n\tDir = get_flag_value(Args, \"dir\", undefined),\n\t{Dir, Threads, Samples, LargePages, RatedSpeedMB, ReadLoadThreads, ReadFileGB}.\n\nvalidate_config({Dir, _Threads, _Samples, _LargePages, _RatedSpeedMB, _ReadLoadThreads, _ReadFileGB}) ->\n\tcase Dir of\n\t\tundefined ->\n\t\t\tio:format(\"~nNo directory specified - will benchmark entropy generation only.~n\"),\n\t\t\tio:format(\"For disk I/O benchmark, specify: dir /path/to/storage~n~n\");\n\t\t_ ->\n\t\t\tcase filelib:ensure_dir(filename:join(Dir, \"dummy\")) of\n\t\t\t\tok -> ok;\n\t\t\t\t{error, Reason} ->\n\t\t\t\t\tio:format(\"Error: Could not ensure directory ~p exists: ~p~n\", [Dir, Reason]),\n\t\t\t\t\terlang:halt(1)\n\t\t\tend\n\tend.\n\nget_flag_value([], _, DefaultValue) ->\n\tDefaultValue;\nget_flag_value([Flag, Value | _Tail], TargetFlag, _DefaultValue) when Flag == TargetFlag ->\n\tValue;\nget_flag_value([_ | Tail], TargetFlag, DefaultValue) ->\n\tget_flag_value(Tail, TargetFlag, DefaultValue).\n\nshow_help() ->\n\tio:format(\"~nUsage: benchmark packing [options]~n~n\"),\n\tio:format(\"Options:~n\"),\n\tio:format(\"  threads       Number of threads. Default: number of CPU cores.~n\"),\n\tio:format(\"  samples       Number of samples to average. Default: 20.~n\"),\n\tio:format(\"  large_pages   Use large pages for RandomX (0=off, 1=on). Default: 1.~n\"),\n\tio:format(\"  rated_speed   Expected disk write speed in MB/s. Benchmark will exclude samples\\n\"),\n\tio:format(\"                that are too fast as they are likely cached. Default: 250.~n\"),\n\tio:format(\"  read_load     Background read threads (simulates other disk activity). Default: 2.~n\"),\n\tio:format(\"  read_file_gb  Size of read load file in GB (larger = less caching). 
Default: 4.~n\"),\n\tio:format(\"  dir           Directory to write to (optional).~n~n\"),\n\tio:format(\"Examples:~n\"),\n\tio:format(\"  benchmark packing threads 8 dir /mnt/storage1~n\"),\n\tio:format(\"  benchmark packing rated_speed 246 dir /tmp/bench~n~n\"),\n\tio:format(\"For more information, see the Benchmarking section at docs.arweave.org~n~n\"),\n\tinit:stop(1).\n\n%%%===================================================================\n%%% Main Benchmark Orchestration\n%%%===================================================================\n\nrun_benchmark({Dir, Threads, TargetSamples, LargePages, RatedSpeedMB, ReadLoadThreads, ReadFileGB}) ->\n\tconfigure_randomx(LargePages),\n\t\n\tprint_header(Threads, TargetSamples, RatedSpeedMB, ReadLoadThreads, ReadFileGB, Dir),\n\t\n\tar:console(\"~nInitializing...~n\"),\n\tRandomXState = init_randomx_state(Threads),\n\tRewardAddress = crypto:strong_rand_bytes(32),\n\t\n\tChunkDir = init_chunk_dir(Dir),\n\tReadPids = start_read_load(ReadLoadThreads, ReadFileGB, Dir),\n\t\n\t%% Phase 1: Entropy Preparation\n\tprint_cache_fill_start(ChunkDir),\n\tMinDiskMs = calculate_min_disk_ms(RatedSpeedMB),\n\t{AllEntropyResults, ValidEntropyResults} = collect_entropy_samples(\n\t\tTargetSamples, MinDiskMs, Threads, RandomXState, RewardAddress, ChunkDir),\n\t\n\tprint_entropy_results(AllEntropyResults, ValidEntropyResults, ChunkDir, Threads),\n\t\n\t%% Phase 2: Packing (only if disk I/O enabled)\n\tcase ChunkDir of\n\t\tundefined ->\n\t\t\tok;\n\t\t_ ->\n\t\t\t%% Sync and drop page cache so reads hit disk\n\t\t\tsync_and_drop_cache(),\n\t\t\tclose_file_handles(),\n\t\t\tPackingResults = collect_packing_samples(\n\t\t\t\tTargetSamples, Threads, RandomXState, ChunkDir),\n\t\t\tprint_packing_results(PackingResults, Threads)\n\tend,\n\t\n\tstop_read_load(ReadPids),\n\tclose_file_handles(),\n\tar:console(\"~n\").\n\nconfigure_randomx(LargePages) ->\n\tcase LargePages of\n\t\t1 -> arweave_config:set_env(#config{disable = [], enable = [randomx_large_pages]});\n\t\t0 -> arweave_config:set_env(#config{disable = [randomx_large_pages], enable = []})\n\tend.\n\ncalculate_mib_per_iteration() ->\n\tBytesPerIteration = ?FOOTPRINTS_PER_ITERATION * ar_block:get_replica_2_9_footprint_size(),\n\tBytesPerIteration div ?MiB.\n\ncalculate_min_disk_ms(RatedSpeedMB) ->\n\t%% Convert MB/s (decimal, marketing) to MiB/s (binary)\n\t%% 1 MiB = 1.048576 MB, so MiB/s = MB/s / 1.048576\n\tRatedSpeedMiB = RatedSpeedMB / 1.048576,\n\t%% Add 10% margin - writes up to 10% faster than rated still count as valid\n\tEffectiveSpeed = RatedSpeedMiB * 1.1,\n\t%% Calculate minimum expected time for a \"real\" disk write\n\tcalculate_mib_per_iteration() / EffectiveSpeed * 1000.\n\ninit_chunk_dir(undefined) ->\n\tundefined;\ninit_chunk_dir(Dir) ->\n\tChunkDir = filename:join(Dir, \"benchmark_chunk_storage\"),\n\tfilelib:ensure_dir(filename:join(ChunkDir, \"dummy\")),\n\tclear_dir(ChunkDir),\n\tChunkDir.\n\ninit_randomx_state(Threads) ->\n\ttry\n\t\tar_mine_randomx:init_fast(rxsquared, ?RANDOMX_PACKING_KEY, Threads)\n\tcatch\n\t\terror:{badmatch, {error, Reason}} ->\n\t\t\tar:console(\"~nError: Failed to initialize RandomX: ~p~n\", [Reason]),\n\t\t\tar:console(\"~nTry running with large_pages 0 if large pages are not supported.~n~n\"),\n\t\t\terlang:halt(1),\n\t\t\tundefined\n\tend.\n\ncollect_entropy_samples(TargetSamples, MinDiskMs, Threads, RandomXState, RewardAddress, ChunkDir) ->\n\tcollect_entropy_samples_loop(\n\t\t0, TargetSamples, MinDiskMs, Threads, RandomXState, RewardAddress, ChunkDir, 
[], [], 0, []).\n\ncollect_entropy_samples_loop(_Iteration, TargetSamples, _MinDiskMs, _Threads, _RandomXState, \n\t\t_RewardAddr, _ChunkDir, AllResults, ValidResults, _CachedCount, _AllDiskMs) \n\t\twhen length(ValidResults) >= TargetSamples ->\n\tar:console(\"~n\"),\n\t{lists:reverse(AllResults), lists:reverse(ValidResults)};\ncollect_entropy_samples_loop(_Iteration, _TargetSamples, _MinDiskMs, _Threads, _RandomXState, \n\t\t_RewardAddr, ChunkDir, _AllResults, [], CachedCount, AllDiskMs) \n\t\twhen CachedCount >= 100, ChunkDir /= undefined ->\n\t%% 100 consecutive cached iterations without finding a valid sample\n\tprint_rated_speed_too_low_error(AllDiskMs),\n\terlang:halt(1);\ncollect_entropy_samples_loop(Iteration, TargetSamples, MinDiskMs, Threads, RandomXState, \n\t\tRewardAddr, ChunkDir, AllResults, ValidResults, CachedCount, AllDiskMs) ->\n\t{_EntropyMs, DiskMs} = Result = run_entropy_iteration(Iteration, Threads, RandomXState, RewardAddr, ChunkDir),\n\t{NewValidResults, NewCachedCount} = process_entropy_sample(Result, MinDiskMs, ChunkDir, ValidResults, CachedCount),\n\tNewAllDiskMs = case DiskMs > 0 of\n\t\ttrue -> [DiskMs | AllDiskMs];\n\t\tfalse -> AllDiskMs\n\tend,\n\tcollect_entropy_samples_loop(\n\t\tIteration + 1, TargetSamples, MinDiskMs, Threads, RandomXState, RewardAddr, ChunkDir, \n\t\t[Result | AllResults], NewValidResults, NewCachedCount, NewAllDiskMs).\n\nprocess_entropy_sample({EntropyMs, DiskMs} = Result, MinDiskMs, ChunkDir, ValidResults, CachedCount) ->\n\tcase ChunkDir of\n\t\tundefined ->\n\t\t\t%% CPU-only: all samples count\n\t\t\tprint_entropy_cpu_sample(length(ValidResults) + 1, EntropyMs),\n\t\t\t{[Result | ValidResults], 0};\n\t\t_ ->\n\t\t\tcase DiskMs >= MinDiskMs of\n\t\t\t\ttrue ->\n\t\t\t\t\tprint_entropy_valid_sample(length(ValidResults) + 1, EntropyMs, DiskMs),\n\t\t\t\t\t{[Result | ValidResults], 0};\n\t\t\t\tfalse ->\n\t\t\t\t\tprint_cached_sample(),\n\t\t\t\t\t{ValidResults, CachedCount + 1}\n\t\t\tend\n\tend.\n\n%%%===================================================================\n%%% Phase 1: Entropy Preparation - Iteration Execution\n%%%===================================================================\n\nrun_entropy_iteration(Iteration, _Threads, RandomXState, RewardAddr, ChunkDir) ->\n\t{EntropyMs, Entropies} = time_entropy_generation(Iteration, RandomXState, RewardAddr),\n\tDiskMs = time_entropy_disk_write(Iteration, ChunkDir, Entropies),\n\t{EntropyMs, DiskMs}.\n\ntime_entropy_generation(Iteration, RandomXState, RewardAddr) ->\n\tStartTime = erlang:monotonic_time(microsecond),\n\tEntropies = generate_all_footprints(Iteration, RandomXState, RewardAddr),\n\tEndTime = erlang:monotonic_time(microsecond),\n\t{(EndTime - StartTime) / 1000, Entropies}.\n\ntime_entropy_disk_write(_Iteration, undefined, _Entropies) ->\n\t0;\ntime_entropy_disk_write(Iteration, ChunkDir, Entropies) ->\n\tBaseOffset = Iteration * ?FOOTPRINTS_PER_ITERATION * ?DATA_CHUNK_SIZE,\n\tStartTime = erlang:monotonic_time(microsecond),\n\twrite_all_entropies(ChunkDir, Entropies, BaseOffset),\n\tEndTime = erlang:monotonic_time(microsecond),\n\t(EndTime - StartTime) / 1000.\n\n%%%===================================================================\n%%% Phase 2: Packing - Sample Collection\n%%%===================================================================\n\ncollect_packing_samples(TargetSamples, Threads, RandomXState, ChunkDir) ->\n\tar:console(\"~n=== Phase 2: Packing Benchmark ===~n\"),\n\tar:console(\"~nRunning packing benchmark:\"),\n\t%% Generate a new reward address 
for unpacking entropy\n\tUnpackRewardAddr = crypto:strong_rand_bytes(32),\n\tcollect_packing_samples_loop(0, TargetSamples, Threads, RandomXState, UnpackRewardAddr, ChunkDir, []).\n\ncollect_packing_samples_loop(SampleNum, TargetSamples, _Threads, _RandomXState, _UnpackRewardAddr, _ChunkDir, Results) \n\t\twhen SampleNum >= TargetSamples ->\n\tar:console(\"~n\"),\n\tlists:reverse(Results);\ncollect_packing_samples_loop(SampleNum, TargetSamples, Threads, RandomXState, UnpackRewardAddr, ChunkDir, Results) ->\n\tResult = run_packing_iteration(SampleNum, Threads, RandomXState, UnpackRewardAddr, ChunkDir),\n\tprint_packing_sample(SampleNum + 1, Result, Threads),\n\tcollect_packing_samples_loop(SampleNum + 1, TargetSamples, Threads, RandomXState, UnpackRewardAddr, ChunkDir, [Result | Results]).\n\nrun_packing_iteration(Iteration, _Threads, RandomXState, UnpackRewardAddr, ChunkDir) ->\n\tBaseOffset = Iteration * ?FOOTPRINTS_PER_ITERATION * ?DATA_CHUNK_SIZE,\n\t\n\t%% Step 1: Generate unpack entropy (parallelized across threads)\n\t{UnpackEntropyMs, UnpackEntropies} = time_entropy_generation(Iteration, RandomXState, UnpackRewardAddr),\n\t\n\t%% Step 2-5: Walk through entropy using map_entropies (same pattern as phase 1)\n\t{DecipherMs, ReadMs, EncipherMs, WriteMs} = pack_all_chunks(ChunkDir, UnpackEntropies, BaseOffset),\n\t\n\t{UnpackEntropyMs, DecipherMs, ReadMs, EncipherMs, WriteMs}.\n\npack_all_chunks(_ChunkDir, [], _BaseOffset) ->\n\t{0, 0, 0, 0};\npack_all_chunks(ChunkDir, [Footprint | Rest], BaseOffset) ->\n\tOffsets = ar_entropy_gen:entropy_offsets(BaseOffset + ?DATA_CHUNK_SIZE, ?PARTITION_SIZE),\n\t{D1, R1, E1, W1} = ar_entropy_gen:map_entropies(\n\t\tFootprint, Offsets, 0, [], <<>>,\n\t\tfun pack_chunk_callback/5, [ChunkDir], {0, 0, 0, 0}),\n\t{D2, R2, E2, W2} = pack_all_chunks(ChunkDir, Rest, BaseOffset + ?DATA_CHUNK_SIZE),\n\t{D1 + D2, R1 + R2, E1 + E2, W1 + W2}.\n\npack_chunk_callback(UnpackEntropy, BucketEndOffset, _RewardAddr, ChunkDir, {DAcc, RAcc, EAcc, WAcc}) ->\n\t{DecipherMs, ReadMs, EncipherMs, WriteMs} = pack_single_chunk(ChunkDir, BucketEndOffset, UnpackEntropy),\n\t{DAcc + DecipherMs, RAcc + ReadMs, EAcc + EncipherMs, WAcc + WriteMs}.\n\npack_single_chunk(ChunkDir, PaddedEndOffset, UnpackEntropy) ->\n\t%% Step 2: Generate random packed chunk and decipher it\n\tRandomPackedChunk = crypto:strong_rand_bytes(?DATA_CHUNK_SIZE),\n\tDecipherStart = erlang:monotonic_time(microsecond),\n\tUnpackedChunk = ar_packing_server:exor_replica_2_9_chunk(RandomPackedChunk, UnpackEntropy),\n\tDecipherEnd = erlang:monotonic_time(microsecond),\n\tDecipherMs = (DecipherEnd - DecipherStart) / 1000,\n\t\n\t%% Step 3: Read pack entropy from disk\n\tChunkFileStart = ar_chunk_storage:get_chunk_file_start(PaddedEndOffset),\n\t{Position, _ChunkOffset} = ar_chunk_storage:get_position_and_relative_chunk_offset(\n\t\tChunkFileStart, PaddedEndOffset),\n\tFilepath = filename:join(ChunkDir, integer_to_list(ChunkFileStart)),\n\tFH = get_file_handle(Filepath),\n\t\n\tReadStart = erlang:monotonic_time(microsecond),\n\t{ok, EntropyWithHeader} = file:pread(FH, Position, ?OFFSET_BIT_SIZE div 8 + ?DATA_CHUNK_SIZE),\n\t<< _StoredChunkOffset:?OFFSET_BIT_SIZE, PackEntropy/binary >> = EntropyWithHeader,\n\tReadEnd = erlang:monotonic_time(microsecond),\n\tReadMs = (ReadEnd - ReadStart) / 1000,\n\t\n\t%% Step 4: Encipher the unpacked chunk with pack entropy\n\tEncipherStart = erlang:monotonic_time(microsecond),\n\tPackedChunk = ar_packing_server:exor_replica_2_9_chunk(UnpackedChunk, PackEntropy),\n\tEncipherEnd = 
erlang:monotonic_time(microsecond),\n\tEncipherMs = (EncipherEnd - EncipherStart) / 1000,\n\t\n\t%% Step 5: Write the packed chunk\n\tWriteStart = erlang:monotonic_time(microsecond),\n\tok = file:pwrite(FH, Position + (?OFFSET_BIT_SIZE div 8), PackedChunk),\n\tWriteEnd = erlang:monotonic_time(microsecond),\n\tWriteMs = (WriteEnd - WriteStart) / 1000,\n\t\n\t{DecipherMs, ReadMs, EncipherMs, WriteMs}.\n\n%%%===================================================================\n%%% Entropy Generation\n%%%===================================================================\n\ngenerate_all_footprints(Iteration, RandomXState, RewardAddr) ->\n\tFootprintIds = lists:seq(0, ?FOOTPRINTS_PER_ITERATION - 1),\n\tar_util:pmap(\n\t\tfun(FootprintId) ->\n\t\t\tUniqueId = Iteration * ?FOOTPRINTS_PER_ITERATION + FootprintId,\n\t\t\tgenerate_footprint(RandomXState, RewardAddr, UniqueId)\n\t\tend,\n\t\tFootprintIds, infinity).\n\ngenerate_footprint(RandomXState, RewardAddr, UniqueId) ->\n\tSubChunkIndices = lists:seq(0, ?COMPOSITE_PACKING_SUB_CHUNK_COUNT - 1),\n\tar_util:pmap(\n\t\tfun(SubChunkIndex) ->\n\t\t\tAbsoluteOffset = (UniqueId + 1) * ?DATA_CHUNK_SIZE,\n\t\t\tSubChunkOffset = SubChunkIndex * ?COMPOSITE_PACKING_SUB_CHUNK_SIZE,\n\t\t\tKey = ar_replica_2_9:get_entropy_key(RewardAddr, AbsoluteOffset, SubChunkOffset),\n\t\t\tar_mine_randomx:randomx_generate_replica_2_9_entropy(RandomXState, Key)\n\t\tend,\n\t\tSubChunkIndices, infinity).\n\n%%%===================================================================\n%%% Disk I/O - Chunk Storage\n%%%===================================================================\n\nwrite_all_entropies(_ChunkDir, [], _BaseOffset) ->\n\tok;\nwrite_all_entropies(ChunkDir, [Footprint | Rest], BaseOffset) ->\n\tOffsets = ar_entropy_gen:entropy_offsets(BaseOffset + ?DATA_CHUNK_SIZE, ?PARTITION_SIZE),\n\tar_entropy_gen:map_entropies(\n\t\tFootprint, Offsets, 0, [], <<>>,\n\t\tfun write_chunk_callback/5, [ChunkDir], ok),\n\twrite_all_entropies(ChunkDir, Rest, BaseOffset + ?DATA_CHUNK_SIZE).\n\nwrite_chunk_callback(ChunkEntropy, BucketEndOffset, _RewardAddr, ChunkDir, ok) ->\n\twrite_chunk(ChunkDir, BucketEndOffset, ChunkEntropy),\n\tok.\n\nwrite_chunk(ChunkDir, PaddedEndOffset, Chunk) ->\n\tChunkFileStart = ar_chunk_storage:get_chunk_file_start(PaddedEndOffset),\n\t{Position, ChunkOffset} = ar_chunk_storage:get_position_and_relative_chunk_offset(\n\t\tChunkFileStart, PaddedEndOffset),\n\tFilepath = filename:join(ChunkDir, integer_to_list(ChunkFileStart)),\n\tFH = get_file_handle(Filepath),\n\tok = file:pwrite(FH, Position, [<< ChunkOffset:?OFFSET_BIT_SIZE >> | Chunk]).\n\nget_file_handle(Filepath) ->\n\tcase erlang:get({write_handle, Filepath}) of\n\t\tundefined ->\n\t\t\t{ok, FH} = file:open(Filepath, [read, write, raw, binary]),\n\t\t\terlang:put({write_handle, Filepath}, FH),\n\t\t\tFH;\n\t\tFH ->\n\t\t\tFH\n\tend.\n\nclose_file_handles() ->\n\tlists:foreach(\n\t\tfun({write_handle, _} = Key) ->\n\t\t\tfile:close(erlang:get(Key)),\n\t\t\terlang:erase(Key);\n\t\t   (_) ->\n\t\t\tok\n\t\tend,\n\t\terlang:get_keys()).\n\nsync_and_drop_cache() ->\n\tar:console(\"~nSyncing and dropping page cache\", []),\n\tlists:foreach(\n\t\tfun({write_handle, _} = Key) ->\n\t\t\tFH = erlang:get(Key),\n\t\t\t%% Sync to disk\n\t\t\tfile:sync(FH),\n\t\t\t%% Get file size for advise\n\t\t\tcase file:position(FH, eof) of\n\t\t\t\t{ok, Size} ->\n\t\t\t\t\t%% Tell kernel we don't need these pages cached\n\t\t\t\t\tfile:advise(FH, 0, Size, dont_need);\n\t\t\t\t_ 
->\n\t\t\t\t\tok\n\t\t\tend,\n\t\t\tar:console(\".\", []);\n\t\t   (_) ->\n\t\t\tok\n\t\tend,\n\t\terlang:get_keys()),\n\tar:console(\"~n\", []).\n\nclear_dir(Dir) ->\n\tcase file:list_dir(Dir) of\n\t\t{ok, Files} ->\n\t\t\tlists:foreach(fun(File) -> file:delete(filename:join(Dir, File)) end, Files);\n\t\t{error, enoent} ->\n\t\t\tok\n\tend.\n\n%%%===================================================================\n%%% Disk I/O - Read Load Simulation\n%%%===================================================================\n\nstart_read_load(0, _ReadFileGB, _Dir) ->\n\t[];\nstart_read_load(_N, _ReadFileGB, undefined) ->\n\t[];\nstart_read_load(NumThreads, ReadFileSizeGB, Dir) ->\n\tReadFile = create_read_load_file(Dir, ReadFileSizeGB),\n\tspawn_read_load_threads(NumThreads, ReadFile).\n\ncreate_read_load_file(Dir, SizeGB) ->\n\tReadFile = filename:join(Dir, \"benchmark_read_load.bin\"),\n\tSizeMB = SizeGB * 1024,\n\tfile:delete(ReadFile),\n\tar:console(\"Creating read load file...~n\"),\n\t{ok, FH} = file:open(ReadFile, [write, raw, binary]),\n\tlists:foreach(\n\t\tfun(_) -> file:write(FH, crypto:strong_rand_bytes(?MiB)) end,\n\t\tlists:seq(1, SizeMB)),\n\tfile:close(FH),\n\tReadFile.\n\nspawn_read_load_threads(NumThreads, ReadFile) ->\n\t[spawn_link(fun() -> read_load_loop(ReadFile) end) || _ <- lists:seq(1, NumThreads)].\n\nread_load_loop(ReadFile) ->\n\t{ok, FH} = file:open(ReadFile, [read, raw, binary, {read_ahead, 0}]),\n\t{ok, FileInfo} = file:read_file_info(ReadFile),\n\tFileSize = FileInfo#file_info.size,\n\tread_load_loop(FH, FileSize).\n\nread_load_loop(FH, FileSize) ->\n\t%% Random read of 4-64KB (typical RocksDB read sizes)\n\tReadSize = 4096 + rand:uniform(60 * 1024),\n\tMaxOffset = max(0, FileSize - ReadSize),\n\tOffset = rand:uniform(MaxOffset + 1) - 1,\n\tfile:pread(FH, Offset, ReadSize),\n\tread_load_loop(FH, FileSize).\n\nstop_read_load(Pids) ->\n\tlists:foreach(fun(Pid) -> exit(Pid, kill) end, Pids).\n\n%%%===================================================================\n%%% Output - Progress and Results\n%%%===================================================================\n\nprint_header(Threads, TargetSamples, RatedSpeedMB, ReadLoadThreads, ReadFileGB, Dir) ->\n\tar:console(\"~n=== Replica 2.9 Preparation Benchmark ===~n\"),\n\tar:console(\"See docs.arweave.org for more information.~n~n\"),\n\tar:console(\"Configuration:~n\"),\n\tar:console(\"  Threads:            ~p~n\", [Threads]),\n\tar:console(\"  Samples:            ~p~n\", [TargetSamples]),\n\tar:console(\"  Data per iteration: ~p MiB~n\", [calculate_mib_per_iteration()]),\n\tar:console(\"  Rated disk speed:   ~p MB/s~n\", [RatedSpeedMB]),\n\tar:console(\"  Read load threads:  ~p~n\", [ReadLoadThreads]),\n\tar:console(\"  Read file size:     ~p GB~n\", [ReadFileGB]),\n\tcase Dir of\n\t\tundefined -> ar:console(\"  Directory:          (none - CPU benchmark only)~n\");\n\t\t_ -> ar:console(\"  Directory:          ~p~n\", [Dir])\n\tend.\n\nprint_cache_fill_start(undefined) ->\n\tok;\nprint_cache_fill_start(_ChunkDir) ->\n\tar:console(\"~n=== Phase 1: Entropy Preparation ===~n\"),\n\tar:console(\"~nFilling write cache\", []).\n\nprint_entropy_cpu_sample(SampleNum, EntropyMs) ->\n\tEntropyRate = calculate_mib_per_iteration() / (EntropyMs / 1000),\n\tar:console(\"~nSample ~p: Entropy: ~p MiB/s\", [SampleNum, round(EntropyRate)]).\n\nprint_entropy_valid_sample(SampleNum, EntropyMs, DiskMs) ->\n\tcase SampleNum of\n\t\t1 -> ar:console(\"~n~nRunning entropy benchmark:\");\n\t\t_ -> ok\n\tend,\n\tMiBPerIteration = 
calculate_mib_per_iteration(),\n\tEntropyRate = MiBPerIteration / (EntropyMs / 1000),\n\tDiskRate = MiBPerIteration / (DiskMs / 1000),\n\tar:console(\"~nSample ~p: Entropy: ~p MiB/s, Write: ~p MiB/s\", [SampleNum, round(EntropyRate), round(DiskRate)]).\n\nprint_cached_sample() ->\n\tar:console(\".\", []).\n\nprint_rated_speed_too_low_error(AllDiskMs) ->\n\tMiBPerIteration = calculate_mib_per_iteration(),\n\tMinDiskMs = lists:min(AllDiskMs),\n\tMinWriteSpeed = MiBPerIteration / (MinDiskMs / 1000),\n\tar:console(\"~n~n=== Benchmark Stopped ===~n~n\"),\n\tar:console(\"Benchmark is unable to proceed as the configured rated_speed is too low.~n\"),\n\tar:console(\"The slowest write speed observed was ~p MiB/s (~p MB/s).~n~n\", \n\t\t[round(MinWriteSpeed), round(MinWriteSpeed * 1.048576)]),\n\tar:console(\"Please re-run the benchmark with a rated_speed that more correctly~n\"),\n\tar:console(\"reflects the rated speed of the disk being written to.~n~n\").\n\nprint_entropy_results(AllResults, ValidResults, ChunkDir, Threads) ->\n\tMiBPerIteration = calculate_mib_per_iteration(),\n\t{AllEntropyTimes, _} = lists:unzip(AllResults),\n\t{ValidEntropyTimes, ValidDiskTimes} = lists:unzip(ValidResults),\n\t\n\tTotalIterations = length(AllResults),\n\tValidCount = length(ValidResults),\n\tCachedCount = TotalIterations - ValidCount,\n\t\n\tAvgEntropyMs = lists:sum(AllEntropyTimes) / TotalIterations,\n\tAvgDiskMs = safe_average(ValidDiskTimes),\n\tAvgValidEntropyMs = case ValidCount > 0 of\n\t\ttrue -> lists:sum(ValidEntropyTimes) / ValidCount;\n\t\tfalse -> AvgEntropyMs\n\tend,\n\t\n\tEntropyRate = MiBPerIteration / (AvgValidEntropyMs / 1000),\n\t\n\tar:console(\"~n--- Entropy Preparation Results ---~n~n\"),\n\tar:console(\"Data per iteration:   ~p MiB (~p chunks)~n\", [MiBPerIteration, MiBPerIteration * 4]),\n\tar:console(\"Total iterations:     ~p (~p excluded, ~p samples)~n\", [TotalIterations, CachedCount, ValidCount]),\n\tar:console(\"~n\"),\n\tar:console(\"Entropy generation:   ~.2f MiB/s (~p threads)~n\", [EntropyRate, Threads]),\n\t\n\tcase ChunkDir of\n\t\tundefined ->\n\t\t\tar:console(\"~nNo disk I/O measured.~n\"),\n\t\t\tprint_preparation_extrapolation(EntropyRate);\n\t\t_ ->\n\t\t\tDiskRate = MiBPerIteration / (AvgDiskMs / 1000),\n\t\t\tar:console(\"Disk write:           ~.2f MiB/s~n\", [DiskRate]),\n\t\t\t{EffectiveRate, Bottleneck} = case EntropyRate < DiskRate of\n\t\t\t\ttrue -> {EntropyRate, \"CPU (entropy generation)\"};\n\t\t\t\tfalse -> {DiskRate, \"Disk Write\"}\n\t\t\tend,\n\t\t\tar:console(\"~nBottleneck:           ~s~n\", [Bottleneck]),\n\t\t\tar:console(\"Effective rate:       ~.2f MiB/s~n\", [EffectiveRate]),\n\t\t\tprint_preparation_extrapolation(EffectiveRate)\n\tend.\n\nprint_preparation_extrapolation(EffectiveRate) ->\n\tPartitionSizeTB = ?PARTITION_SIZE / ?TB,\n\tTotalSeconds = ?PARTITION_SIZE / (EffectiveRate * ?MiB),\n\tar:console(\"~nEstimated preparation time for ~.1f TB partition: ~s~n\", \n\t\t[PartitionSizeTB, format_duration(TotalSeconds)]).\n\nprint_packing_sample(SampleNum, {UnpackEntropyMs, DecipherMs, ReadMs, EncipherMs, WriteMs}, _Threads) ->\n\tMiBPerIteration = calculate_mib_per_iteration(),\n\tUnpackEntropyRate = MiBPerIteration / (UnpackEntropyMs / 1000),\n\tDecipherRate = MiBPerIteration / (DecipherMs / 1000),\n\tReadRate = MiBPerIteration / (ReadMs / 1000),\n\tEncipherRate = MiBPerIteration / (EncipherMs / 1000),\n\tWriteRate = MiBPerIteration / (WriteMs / 1000),\n\tar:console(\"~nSample ~p: Entropy: ~p MiB/s, Decipher: ~p MiB/s, Read: ~p MiB/s, Encipher: 
~p MiB/s, Write: ~p MiB/s\", \n\t\t[SampleNum, round(UnpackEntropyRate), round(DecipherRate), round(ReadRate), \n\t\t round(EncipherRate), round(WriteRate)]).\n\nprint_packing_results(Results, Threads) ->\n\tMiBPerIteration = calculate_mib_per_iteration(),\n\tSampleCount = length(Results),\n\t\n\t{UnpackEntropyTimes, DecipherTimes, ReadTimes, EncipherTimes, WriteTimes} = lists:foldl(\n\t\tfun({UE, D, R, E, W}, {UEAcc, DAcc, RAcc, EAcc, WAcc}) -> \n\t\t\t{UEAcc + UE, DAcc + D, RAcc + R, EAcc + E, WAcc + W} \n\t\tend,\n\t\t{0, 0, 0, 0, 0},\n\t\tResults),\n\t\n\tAvgUnpackEntropyMs = UnpackEntropyTimes / SampleCount,\n\tAvgDecipherMs = DecipherTimes / SampleCount,\n\tAvgReadMs = ReadTimes / SampleCount,\n\tAvgEncipherMs = EncipherTimes / SampleCount,\n\tAvgWriteMs = WriteTimes / SampleCount,\n\t\n\tUnpackEntropyRate = MiBPerIteration / (AvgUnpackEntropyMs / 1000),\n\tDecipherRate = MiBPerIteration / (AvgDecipherMs / 1000),\n\tReadRate = MiBPerIteration / (AvgReadMs / 1000),\n\tEncipherRate = MiBPerIteration / (AvgEncipherMs / 1000),\n\tWriteRate = MiBPerIteration / (AvgWriteMs / 1000),\n\t\n\t%% Identify bottleneck (slowest operation = lowest rate)\n\t{EffectiveRate, Bottleneck} = find_bottleneck([\n\t\t{UnpackEntropyRate, io_lib:format(\"CPU (unpack entropy, ~p threads)\", [Threads])},\n\t\t{DecipherRate, \"CPU (decipher)\"},\n\t\t{ReadRate, \"Disk read\"},\n\t\t{EncipherRate, \"CPU (encipher)\"},\n\t\t{WriteRate, \"Disk write\"}\n\t]),\n\t\n\tar:console(\"~n--- Packing Results ---~n~n\"),\n\tar:console(\"Samples:              ~p~n\", [SampleCount]),\n\tar:console(\"~n\"),\n\tar:console(\"Unpack entropy:       ~.2f MiB/s (~p threads)~n\", [UnpackEntropyRate, Threads]),\n\tar:console(\"Decipher:             ~.2f MiB/s~n\", [DecipherRate]),\n\tar:console(\"Read:                 ~.2f MiB/s~n\", [ReadRate]),\n\tar:console(\"Encipher:             ~.2f MiB/s~n\", [EncipherRate]),\n\tar:console(\"Write:                ~.2f MiB/s~n\", [WriteRate]),\n\tar:console(\"~nBottleneck:           ~s~n\", [Bottleneck]),\n\tar:console(\"Effective rate:       ~.2f MiB/s~n\", [EffectiveRate]),\n\tprint_packing_extrapolation(EffectiveRate).\n\nfind_bottleneck([{Rate, Name} | Rest]) ->\n\tfind_bottleneck(Rest, Rate, Name).\n\nfind_bottleneck([], MinRate, MinName) ->\n\t{MinRate, MinName};\nfind_bottleneck([{Rate, Name} | Rest], MinRate, MinName) ->\n\tcase Rate < MinRate of\n\t\ttrue -> find_bottleneck(Rest, Rate, Name);\n\t\tfalse -> find_bottleneck(Rest, MinRate, MinName)\n\tend.\n\nprint_packing_extrapolation(EffectiveRate) ->\n\tPartitionSizeTB = ?PARTITION_SIZE / ?TB,\n\tTotalSeconds = ?PARTITION_SIZE / (EffectiveRate * ?MiB),\n\tar:console(\"Estimated packing time for ~.1f TB partition: ~s~n\", \n\t\t[PartitionSizeTB, format_duration(TotalSeconds)]).\n\n%%%===================================================================\n%%% Utilities\n%%%===================================================================\n\nsafe_average([]) ->\n\t0;\nsafe_average(List) ->\n\tlists:sum(List) / length(List).\n\nformat_duration(Seconds) when Seconds < 60 ->\n\tio_lib:format(\"~.1f seconds\", [Seconds]);\nformat_duration(Seconds) when Seconds < 3600 ->\n\tio_lib:format(\"~.1f minutes\", [Seconds / 60]);\nformat_duration(Seconds) when Seconds < 86400 ->\n\tio_lib:format(\"~.1f hours\", [Seconds / 3600]);\nformat_duration(Seconds) ->\n\tDays = Seconds / 86400,\n\tio_lib:format(\"~.1f days (~p hours)\", [Days, trunc(Days * 24)]).\n"
  },
  {
    "path": "apps/arweave/src/ar_bench_timer.erl",
    "content": "-module(ar_bench_timer).\n\n-export([initialize/0, reset/0, record/3, start/1, stop/1, get_timing_data/0, print_timing_data/0, get_total/1, get_max/1, get_min/1, get_avg/1]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave/include/ar_vdf.hrl\").\n-include_lib(\"arweave/include/ar_consensus.hrl\").\n\nrecord(Key, Fun, Args) ->\n    {Time, Result} = timer:tc(Fun, Args),\n    update_total(Key, Time),\n    Result.\n\nstart(Key) ->\n\tStartTime = erlang:timestamp(),\n\tets:insert(start_time, {real_key(Key), StartTime}).\n\nstop(Key) ->\n\tcase ets:lookup(start_time, real_key(Key)) of\n\t\t[{_, StartTime}] ->\n\t\t\tEndTime = erlang:timestamp(),\n\t\t\tElapsedTime = timer:now_diff(EndTime, StartTime),\n\t\t\t% io:format(\"Elapsed ~p: ~p -> ~p = ~p~n\", [Key, StartTime, EndTime, ElapsedTime]),\n\t\t\tupdate_total(Key, ElapsedTime),\n\t\t\tElapsedTime;\n\t\t[] ->\n\t\t\t% Key not found, throw an error\n\t\t\t{error, {not_started, Key}}\n\tend.\n\nupdate_total(Key, ElapsedTime) ->\n\tets:update_counter(total_time, real_key(Key), {2, ElapsedTime}, {real_key(Key), 0}).\n\nget_total([]) ->\n\t0;\nget_total(Times) when is_list(Times) ->\n\tlists:sum(Times);\nget_total(Key) ->\n    get_total(get_times(Key)).\n\nget_max([]) ->\n\t0;\nget_max(Times) when is_list(Times) ->\n    lists:max(Times);\nget_max(Key) ->\n\tget_max(get_times(Key)).\n\nget_min([]) ->\n\t0;\nget_min(Times) when is_list(Times) ->\n    lists:min(Times);\nget_min(Key) ->\n    get_min(get_times(Key)).\n\nget_avg([]) ->\n\t0;\nget_avg(Times) when is_list(Times) ->\n\tTotalTime = lists:sum(Times),\n    case length(Times) of\n        0 -> 0;\n        N -> TotalTime / N\n    end;\nget_avg(Key) ->\n\tget_avg(get_times(Key)).\n\t\nget_times(Key) ->\n\t[Match || [Match] <- ets:match(total_time, {{Key, '_'}, '$1'})].\nget_timing_keys() ->\n    Keys = [Key || {{Key, _PID}, _Value} <- get_timing_data()],\n    UniqueKeys = sets:to_list(sets:from_list(Keys)),\n\tUniqueKeys.\nget_timing_data() ->\n    ets:tab2list(total_time).\nprint_timing_data() ->\n\tlists:foreach(fun(Key) ->\n\t\t\tSeconds = get_total(Key) / 1000000,\n\t\t\t?LOG_ERROR(\"~p: ~p\", [Key, Seconds])\n\t\tend, get_timing_keys()).\n\nreset() ->\n\tets:delete_all_objects(total_time),\n\tets:delete_all_objects(start_time).\n\ninitialize() ->\n    ets:new(total_time, [set, named_table, public]),\n\tets:new(start_time, [set, named_table, public]).\n\nreal_key(Key) ->\n\t{Key, self()}.\n\n\n"
  },
  {
    "path": "apps/arweave/src/ar_bench_vdf.erl",
    "content": "-module(ar_bench_vdf).\n\n-export([run_benchmark/0, run_benchmark_from_cli/1]).\n\n-include_lib(\"arweave/include/ar_vdf.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\nrun_benchmark_from_cli(Args) ->\n\tMode = list_to_atom(get_flag_value(Args, \"mode\", \"default\")),\n\tDifficulty = list_to_integer(get_flag_value(Args, \"difficulty\", integer_to_list(?VDF_DIFFICULTY))),\n\tVerify = list_to_atom(get_flag_value(Args, \"verify\", \"false\")),\n\n\trun_benchmark(Mode, Difficulty, Verify).\n\nget_flag_value([], _, DefaultValue) ->\n\tDefaultValue;\nget_flag_value([Flag | [Value | _Tail]], TargetFlag, _DefaultValue) when Flag == TargetFlag ->\n\tValue;\nget_flag_value([_ | Tail], TargetFlag, DefaultValue) ->\n\tget_flag_value(Tail, TargetFlag, DefaultValue).\n\nshow_help() ->\n\tio:format(\"~nUsage: benchmark vdf [options]~n\"),\n\tio:format(\"Options:~n\"),\n\tio:format(\"  mode <default|openssl|fused|hiopt_m4> (default: default)~n\"),\n\tio:format(\"  difficulty <vdf_difficulty> (default: ~p)~n\", [?VDF_DIFFICULTY]),\n\tio:format(\"  verify <true|false> (default: false)~n\"),\n\tinit:stop(1).\n\nrun_benchmark() ->\n\trun_benchmark(none, ?VDF_DIFFICULTY, false).\n\nrun_benchmark(Mode, Difficulty, Verify) ->\n\tcase Mode of\n\t\tnone ->\n\t\t\t%% Run as part of startup, use whatever is set in the config\n\t\t\tok;\n\t\topenssl ->\n\t\t\tok = arweave_config:set_env(#config{ vdf = openssl });\n\t\tfused ->\n\t\t\tok = arweave_config:set_env(#config{ vdf = fused });\n\t\thiopt_m4 ->\n\t\t\tok = arweave_config:set_env(#config{ vdf = hiopt_m4 });\n\t\tdefault ->\n\t\t\tok = arweave_config:set_env(#config{})\n\tend,\n\tInput = crypto:strong_rand_bytes(32),\n\t{Time, {ok, Output, Checkpoints}} = timer:tc(fun() -> \n\t\t\tar_vdf:compute2(1, Input, Difficulty)\n\tend),\n\tio:format(\"~n~n\"),\n\tmaybe_verify(Verify, Input, Difficulty, Output, Checkpoints),\n\tio:format(\"VDF step computed in ~.2f seconds.~n~n\", [Time / 1000000]),\n\tcase Time > 1150000 of\n\t\ttrue ->\n\t\t\tio:format(\"WARNING: your VDF computation speed is low - consider fetching \"\n\t\t\t\t\t\"VDF outputs from an external source (see vdf_server_trusted_peer \"\n\t\t\t\t\t\"and vdf_client_peer command line parameters).~n~n\");\n\t\tfalse ->\n\t\t\tok\n\tend,\n\tTime.\n\nmaybe_verify(true, Input, Difficulty, Output, Checkpoints) ->\n\t{ok, VerifyOutput, VerifyCheckpoints} = ar_vdf:debug_sha2(1, Input, Difficulty),\n\tcase Output == VerifyOutput of\n\t\ttrue ->\n\t\t\tio:format(\"Output matches.~n\");\n\t\tfalse ->\n\t\t\tio:format(\"Output mismatch. Expected: ~p, Got: ~p~n\",\n\t\t\t\t[ar_util:encode(Output), ar_util:encode(VerifyOutput)])\n\tend,\n\tcase Checkpoints == VerifyCheckpoints of\n\t\ttrue ->\n\t\t\tio:format(\"Checkpoints match.~n\");\n\t\tfalse ->\n\t\t\tio:format(\"Checkpoints mismatch. Expected: ~p, Got: ~p~n\",\n\t\t\t\t[Checkpoints, VerifyCheckpoints])\n\tend;\nmaybe_verify(false, _Input, _Difficulty, _Output, _Checkpoints) ->\n\tok.\n"
  },
  {
    "path": "apps/arweave/src/ar_blacklist_middleware.erl",
    "content": "-module(ar_blacklist_middleware).\n\n-export([start/0, ban_peer/2, is_peer_banned/1, cleanup_ban/1]).\n-export([start_link/0]).\n\n-ifdef(AR_TEST).\n-export([reset/0]).\n-endif.\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"arweave/include/ar_blacklist_middleware.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\nstart_link() ->\n\t{ok, spawn_link(fun() -> start() end)}.\n\nstart() ->\n\t?LOG_INFO([{start, ?MODULE}, {pid, self()}]),\n\t{ok, _} = ar_timer:apply_after(\n\t\t?BAN_CLEANUP_INTERVAL,\n\t\t?MODULE,\n\t\tcleanup_ban,\n\t\t[ets:whereis(?MODULE)],\n\t\t#{ skip_on_shutdown => false }\n\t).\n\nreset() ->\n    true = ets:delete_all_objects(?MODULE),\n    ok.\n\n%% Ban a peer completely for TTLSeconds seoncds. Since we cannot trust the port,\n%% we ban the whole IP address.\nban_peer(Peer, TTLSeconds) ->\n\t?LOG_DEBUG([{event, ban_peer}, {peer, ar_util:format_peer(Peer)}, {seconds, TTLSeconds}]),\n\tKey = {ban, peer_to_ip_addr(Peer)},\n\tExpires = os:system_time(seconds) + TTLSeconds,\n\tets:insert(?MODULE, {Key, Expires}).\n\nis_peer_banned(Peer) ->\n\tKey = {ban, peer_to_ip_addr(Peer)},\n\tcase ets:lookup(?MODULE, Key) of\n\t\t[] -> not_banned;\n\t\t[_] -> banned\n\tend.\n\ncleanup_ban(TableID) ->\n\tcase ets:whereis(?MODULE) of\n\t\tTableID ->\n\t\t\tNow = os:system_time(seconds),\n\t\t\tFolder = fun\n\t\t\t\t({{ban, _} = Key, Expires}, Acc) when Expires < Now ->\n\t\t\t\t\t[Key | Acc];\n\t\t\t\t(_, Acc) ->\n\t\t\t\t\tAcc\n\t\t\tend,\n\t\t\tRemoveKeys = ets:foldl(Folder, [], ?MODULE),\n\t\t\tDelete = fun(Key) -> ets:delete(?MODULE, Key) end,\n\t\t\tlists:foreach(Delete, RemoveKeys),\n\t\t\t_ = ar_timer:apply_after(\n\t\t\t\t?BAN_CLEANUP_INTERVAL,\n\t\t\t\t?MODULE,\n\t\t\t\tcleanup_ban,\n\t\t\t\t[TableID],\n\t\t\t\t#{ skip_on_shutdown => true }\n\t\t\t);\n\t\t_ ->\n\t\t\ttable_owner_died\n\tend.\n\n%private functions\npeer_to_ip_addr({A, B, C, D, _}) -> {A, B, C, D}.\n"
  },
  {
    "path": "apps/arweave/src/ar_block.erl",
    "content": "-module(ar_block).\n\n-export([get_consensus_window_size/0, get_max_tx_anchor_depth/0,\n\t\tpartition_size/0,\n\t\tget_replica_2_9_entropy_sector_size/0, get_replica_2_9_entropy_partition_size/0,\n\t\tget_sub_chunks_per_replica_2_9_entropy/0, get_replica_2_9_entropy_count/0,\n\t\tget_replica_2_9_footprint_size/0, strict_data_split_threshold/0,\n\t\tblock_field_size_limit/1, verify_timestamp/2, get_max_timestamp_deviation/0, verify_last_retarget/2,\n\t\tverify_weave_size/3, verify_cumulative_diff/2, verify_block_hash_list_merkle/2,\n\t\tcompute_hash_list_merkle/1, compute_h0/2, compute_h0/5, compute_h0/6,\n\t\tcompute_h1/3, compute_h2/3, compute_solution_h/2,\n\t\tindep_hash/1, indep_hash/2, indep_hash2/2, get_block_signature_preimage/4,\n\t\tgenerate_signed_hash/1, verify_signature/3, get_reward_key/2,\n\t\tgenerate_block_data_segment/1, generate_block_data_segment/2,\n\t\tgenerate_block_data_segment_base/1, get_recall_range/3, get_recall_range/5, verify_tx_root/1,\n\t\thash_wallet_list/1, generate_hash_list_for_block/2,\n\t\tgenerate_tx_root_for_block/1, generate_tx_root_for_block/2,\n\t\tgenerate_size_tagged_list_from_txs/2, generate_tx_tree/1, generate_tx_tree/2,\n\t\ttest_wallet_list_performance/0, test_wallet_list_performance/1,\n\t\ttest_wallet_list_performance/2, test_wallet_list_performance/3,\n\t\tpoa_to_list/1, shift_packing_2_5_threshold/1,\n\t\tget_packing_threshold/2, compute_next_vdf_difficulty/1,\n\t\tvalidate_proof_size/1, vdf_step_number/1, get_packing/3,\n\t\tvalidate_replica_format/3,\n\t\tget_max_nonce/1, get_recall_range_size/1, get_recall_byte/3,\n\t\tget_sub_chunk_size/1, get_nonces_per_chunk/1, get_nonces_per_recall_range/1,\n\t\tget_sub_chunk_index/2,\n\t\tget_chunk_padded_offset/1, get_double_signing_condition/4]).\n\n-include(\"ar.hrl\").\n-include(\"ar_consensus.hrl\").\n-include(\"ar_block.hrl\").\n-include(\"ar_vdf.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Return the number of blocks we track during consensus. The node\n%% does not accept new blocks originating from blocks older than the oldest\n%% block in this window.\nget_consensus_window_size() ->\n\t?STORE_BLOCKS_BEHIND_CURRENT.\n\n%% @doc Return the maximum allowed block depth of the transaction block anchor.\nget_max_tx_anchor_depth() ->\n\tar_block:get_consensus_window_size().\n\n%% @doc Expose constants through a function to allow mocking/injection in tests.\npartition_size() -> ?PARTITION_SIZE.\nstrict_data_split_threshold() -> ?STRICT_DATA_SPLIT_THRESHOLD.\n\n%% @doc Return the 2.9 entropy sector size - the largest total size in bytes of the contiguous\n%% area where the 2.9 entropy of every chunk is unique.\n-spec get_replica_2_9_entropy_sector_size() -> pos_integer().\nget_replica_2_9_entropy_sector_size() ->\n\t?REPLICA_2_9_ENTROPY_COUNT * ?COMPOSITE_PACKING_SUB_CHUNK_SIZE.\n\n%% @doc Return the size of the 2.9 entropy partition.\n-spec get_replica_2_9_entropy_partition_size() -> pos_integer().\nget_replica_2_9_entropy_partition_size() ->\n\t?REPLICA_2_9_ENTROPY_COUNT * ?REPLICA_2_9_ENTROPY_SIZE.\n\n%% @doc Return the number of sub-chunks per entropy. 
We'll generally create 32x entropies\n%% in order to fully encipher this many chunks.\n-spec get_sub_chunks_per_replica_2_9_entropy() -> pos_integer().\nget_sub_chunks_per_replica_2_9_entropy() ->\n\t?REPLICA_2_9_ENTROPY_SIZE div ?COMPOSITE_PACKING_SUB_CHUNK_SIZE.\n\n%% @doc Return the total size in bytes for a full footprint of entropy.\n-spec get_replica_2_9_footprint_size() -> pos_integer().\nget_replica_2_9_footprint_size() ->\n\t?REPLICA_2_9_ENTROPY_SIZE * ?COMPOSITE_PACKING_SUB_CHUNK_COUNT.\n\n%% @doc Return the number of entropies per partition.\n-spec get_replica_2_9_entropy_count() -> pos_integer().\nget_replica_2_9_entropy_count() ->\n\t?REPLICA_2_9_ENTROPY_COUNT div ?COMPOSITE_PACKING_SUB_CHUNK_COUNT.\n\n%% @doc Check whether the block fields conform to the specified size limits.\nblock_field_size_limit(B = #block{ reward_addr = unclaimed }) ->\n\tblock_field_size_limit(B#block{ reward_addr = <<>> });\nblock_field_size_limit(B) ->\n\tDiffBytesLimit =\n\t\tcase ar_fork:height_1_8() of\n\t\t\tHeight when B#block.height >= Height ->\n\t\t\t\t78;\n\t\t\t_ ->\n\t\t\t\t10\n\t\tend,\n\t{ChunkSize, DataPathSize} =\n\t\tcase B#block.poa of\n\t\t\tPOA when is_record(POA, poa) ->\n\t\t\t\t{\n\t\t\t\t\tbyte_size((B#block.poa)#poa.chunk),\n\t\t\t\t\tbyte_size((B#block.poa)#poa.data_path)\n\t\t\t\t};\n\t\t\t_ -> {0, 0}\n\t\tend,\n\tRewardAddrCheck = byte_size(B#block.reward_addr) =< 32,\n\tCheck = (byte_size(B#block.nonce) =< 512) and\n\t\t(byte_size(B#block.previous_block) =< 48) and\n\t\t(byte_size(integer_to_binary(B#block.timestamp)) =< ?TIMESTAMP_FIELD_SIZE_LIMIT) and\n\t\t(byte_size(integer_to_binary(B#block.last_retarget))\n\t\t\t\t=< ?TIMESTAMP_FIELD_SIZE_LIMIT) and\n\t\t(byte_size(integer_to_binary(B#block.diff)) =< DiffBytesLimit) and\n\t\t(byte_size(integer_to_binary(B#block.height)) =< 20) and\n\t\t(byte_size(B#block.hash) =< 48) and\n\t\t(byte_size(B#block.indep_hash) =< 48) and\n\t\tRewardAddrCheck and\n\t\tvalidate_tags_size(B) and\n\t\t(byte_size(integer_to_binary(B#block.weave_size)) =< 64) and\n\t\t(byte_size(integer_to_binary(B#block.block_size)) =< 64) and\n\t\t(ChunkSize =< ?DATA_CHUNK_SIZE) and\n\t\t(DataPathSize =< ?MAX_PATH_SIZE),\n\tcase Check of\n\t\tfalse ->\n\t\t\t?LOG_INFO(\n\t\t\t\t[\n\t\t\t\t\t{event, received_block_with_invalid_field_size},\n\t\t\t\t\t{nonce, byte_size(B#block.nonce)},\n\t\t\t\t\t{previous_block, byte_size(B#block.previous_block)},\n\t\t\t\t\t{timestamp, byte_size(integer_to_binary(B#block.timestamp))},\n\t\t\t\t\t{last_retarget, byte_size(integer_to_binary(B#block.last_retarget))},\n\t\t\t\t\t{diff, byte_size(integer_to_binary(B#block.diff))},\n\t\t\t\t\t{height, byte_size(integer_to_binary(B#block.height))},\n\t\t\t\t\t{hash, byte_size(B#block.hash)},\n\t\t\t\t\t{indep_hash, byte_size(B#block.indep_hash)},\n\t\t\t\t\t{reward_addr, byte_size(B#block.reward_addr)},\n\t\t\t\t\t{tags, byte_size(list_to_binary(B#block.tags))},\n\t\t\t\t\t{weave_size, byte_size(integer_to_binary(B#block.weave_size))},\n\t\t\t\t\t{block_size, byte_size(integer_to_binary(B#block.block_size))}\n\t\t\t\t]\n\t\t\t);\n\t\t_ ->\n\t\t\tok\n\tend,\n\tCheck.\n\n%% @doc Verify the block timestamp is not too far in the future nor too far in\n%% the past. We calculate the maximum reasonable clock difference between any\n%% two nodes. This is a simplification since there is a chaining effect in the\n%% network which we don't take into account. 
Instead, we assume two nodes can\n%% deviate JOIN_CLOCK_TOLERANCE seconds in the opposite direction from each\n%% other.\nverify_timestamp(#block{ timestamp = Timestamp }, #block{ timestamp = PrevTimestamp }) ->\n\tMaxNodesClockDeviation = get_max_timestamp_deviation(),\n\tcase Timestamp >= PrevTimestamp - MaxNodesClockDeviation of\n\t\tfalse ->\n\t\t\tfalse;\n\t\ttrue ->\n\t\t\tCurrentTime = os:system_time(seconds),\n\t\t\tTimestamp =< CurrentTime + MaxNodesClockDeviation\n\tend.\n\n%% @doc Return the largest possible value by which the previous block's timestamp\n%% may exceed the next block's timestamp.\nget_max_timestamp_deviation() ->\n\t?JOIN_CLOCK_TOLERANCE * 2 + ?CLOCK_DRIFT_MAX.\n\n%% @doc Verify the retarget timestamp on NewB is correct.\nverify_last_retarget(NewB, OldB) ->\n\tcase ar_retarget:is_retarget_height(NewB#block.height) of\n\t\ttrue ->\n\t\t\tNewB#block.last_retarget == NewB#block.timestamp;\n\t\tfalse ->\n\t\t\tNewB#block.last_retarget == OldB#block.last_retarget\n\tend.\n\n%% @doc Verify the new weave size is computed correctly given the previous block\n%% and the list of transactions of the new block.\nverify_weave_size(NewB, OldB, TXs) ->\n\tBlockSize = lists:foldl(\n\t\tfun(TX, Acc) ->\n\t\t\tAcc + ar_tx:get_weave_size_increase(TX, NewB#block.height)\n\t\tend,\n\t\t0,\n\t\tTXs\n\t),\n\t(NewB#block.height < ar_fork:height_2_6() orelse BlockSize == NewB#block.block_size)\n\t\t\tandalso NewB#block.weave_size == OldB#block.weave_size + BlockSize.\n\n%% @doc Verify the new cumulative difficulty is computed correctly.\nverify_cumulative_diff(NewB, OldB) ->\n\tNewB#block.cumulative_diff ==\n\t\tar_difficulty:next_cumulative_diff(\n\t\t\tOldB#block.cumulative_diff,\n\t\t\tNewB#block.diff,\n\t\t\tNewB#block.height\n\t\t).\n\n%% @doc Verify the root of the new block tree is computed correctly.\nverify_block_hash_list_merkle(NewB, CurrentB) ->\n\ttrue = NewB#block.height > ar_fork:height_2_0(),\n\tNewB#block.hash_list_merkle == ar_unbalanced_merkle:root(CurrentB#block.hash_list_merkle,\n\t\t\t{CurrentB#block.indep_hash, CurrentB#block.weave_size, CurrentB#block.tx_root},\n\t\t\tfun ar_unbalanced_merkle:hash_block_index_entry/1).\n\n%% @doc Compute the root of the new block tree given the previous block.\ncompute_hash_list_merkle(B) ->\n\tar_unbalanced_merkle:root(\n\t\tB#block.hash_list_merkle,\n\t\t{B#block.indep_hash, B#block.weave_size, B#block.tx_root},\n\t\tfun ar_unbalanced_merkle:hash_block_index_entry/1\n\t).\n\n%% @doc Compute \"h0\" - a cryptographic hash used as a source of entropy when choosing\n%% two recall ranges on the weave as unlocked by the given nonce limiter output.\ncompute_h0(B, PrevB) ->\n\t#block{ nonce_limiter_info = NonceLimiterInfo,\n\t\t\tpartition_number = PartitionNumber, reward_addr = MiningAddr,\n\t\t\tpacking_difficulty = PackingDifficulty } = B,\n\tPrevNonceLimiterInfo = PrevB#block.nonce_limiter_info,\n\tSeed = PrevNonceLimiterInfo#nonce_limiter_info.seed,\n\tNonceLimiterOutput = NonceLimiterInfo#nonce_limiter_info.output,\n\tcompute_h0(NonceLimiterOutput, PartitionNumber, Seed, MiningAddr, PackingDifficulty).\n\ncompute_h0(NonceLimiterOutput, PartitionNumber, Seed, MiningAddr, PackingDifficulty) ->\n\tcompute_h0(NonceLimiterOutput, PartitionNumber, Seed, MiningAddr, PackingDifficulty,\n\t\t\tar_packing_server:get_packing_state()).\n\n%% @doc Compute \"h0\" - a cryptographic hash used as a source of entropy when choosing\n%% two recall ranges on the weave as unlocked by the given nonce limiter output.\ncompute_h0(NonceLimiterOutput, 
PartitionNumber, Seed, MiningAddr, PackingDifficulty,\n\t\tPackingState) ->\n\tPreimage =\n\t\tcase PackingDifficulty of\n\t\t\t0 ->\n\t\t\t\t<< NonceLimiterOutput:32/binary,\n\t\t\t\t\tPartitionNumber:256, Seed:32/binary, MiningAddr/binary >>;\n\t\t\t_ ->\n\t\t\t\t<< NonceLimiterOutput:32/binary,\n\t\t\t\t\tPartitionNumber:256, Seed:32/binary, MiningAddr/binary,\n\t\t\t\t\tPackingDifficulty:8 >>\n\t\tend,\n\tRandomXState = ar_packing_server:get_randomx_state_for_h0(PackingDifficulty, PackingState),\n\tar_mine_randomx:hash(RandomXState, Preimage).\n\n%% @doc Compute \"h1\" - a cryptographic hash which is either the hash of a solution not\n%% involving the second chunk or a carrier of the information about the first chunk\n%% used when computing the solution hash off the second chunk.\ncompute_h1(H0, Nonce, Chunk) ->\n\tPreimage = crypto:hash(sha256, << H0:32/binary, Nonce:64, Chunk/binary >>),\n\t{compute_solution_h(H0, Preimage), Preimage}.\n\n%% @doc Compute \"h2\" - the hash of a solution involving the second chunk.\ncompute_h2(H1, Chunk, H0) ->\n\tPreimage = crypto:hash(sha256, << H1:32/binary, Chunk/binary >>),\n\t{compute_solution_h(H0, Preimage), Preimage}.\n\n%% @doc Compute the solution hash from the preimage and H0.\ncompute_solution_h(H0, Preimage) ->\n\tcrypto:hash(sha256, << H0:32/binary, Preimage/binary >>).\n\ncompute_next_vdf_difficulty(PrevB) ->\n\tHeight = PrevB#block.height + 1,\n\t#nonce_limiter_info{\n\t\tvdf_difficulty = VDFDifficulty,\n\t\tnext_vdf_difficulty = NextVDFDifficulty\n\t} = PrevB#block.nonce_limiter_info,\n\tcase ar_block_time_history:has_history(Height) of\n\t\ttrue ->\n\t\t\tcase (Height rem ?VDF_DIFFICULTY_RETARGET == 0) andalso\n\t\t\t\t\t(VDFDifficulty == NextVDFDifficulty) of\n\t\t\t\tfalse ->\n\t\t\t\t\tNextVDFDifficulty;\n\t\t\t\ttrue ->\n\t\t\t\t\tcase Height < ar_fork:height_2_7_1() of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\tHistoryPart = lists:nthtail(?VDF_HISTORY_CUT,\n\t\t\t\t\t\t\t\t\tar_block_time_history:get_history(PrevB)),\n\t\t\t\t\t\t\t{IntervalTotal, VDFIntervalTotal} =\n\t\t\t\t\t\t\t\tlists:foldl(\n\t\t\t\t\t\t\t\t\tfun({BlockInterval, VDFInterval, _ChunkCount}, {Acc1, Acc2}) ->\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\tAcc1 + BlockInterval,\n\t\t\t\t\t\t\t\t\t\t\tAcc2 + VDFInterval\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\tend,\n\t\t\t\t\t\t\t\t\t{0, 0},\n\t\t\t\t\t\t\t\t\tHistoryPart\n\t\t\t\t\t\t\t\t),\n\t\t\t\t\t\t\tNewVDFDifficulty =\n\t\t\t\t\t\t\t\t(VDFIntervalTotal * VDFDifficulty) div IntervalTotal,\n\t\t\t\t\t\t\t?LOG_DEBUG([{event, vdf_difficulty_retarget},\n\t\t\t\t\t\t\t\t\t{height, Height},\n\t\t\t\t\t\t\t\t\t{old_vdf_difficulty, VDFDifficulty},\n\t\t\t\t\t\t\t\t\t{new_vdf_difficulty, NewVDFDifficulty},\n\t\t\t\t\t\t\t\t\t{interval_total, IntervalTotal},\n\t\t\t\t\t\t\t\t\t{vdf_interval_total, VDFIntervalTotal}]),\n\t\t\t\t\t\t\tNewVDFDifficulty;\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tHistoryPartCut1 = lists:nthtail(?VDF_HISTORY_CUT,\n\t\t\t\t\t\t\t\tar_block_time_history:get_history(PrevB)),\n\t\t\t\t\t\t\tHistoryPart = lists:sublist(HistoryPartCut1, ?VDF_DIFFICULTY_RETARGET),\n\t\t\t\t\t\t\t{IntervalTotal, VDFIntervalTotal} =\n\t\t\t\t\t\t\t\tlists:foldl(\n\t\t\t\t\t\t\t\t\tfun({BlockInterval, VDFInterval, _ChunkCount}, {Acc1, Acc2}) ->\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\tAcc1 + BlockInterval,\n\t\t\t\t\t\t\t\t\t\t\tAcc2 + VDFInterval\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\tend,\n\t\t\t\t\t\t\t\t\t{0, 0},\n\t\t\t\t\t\t\t\t\tHistoryPart\n\t\t\t\t\t\t\t\t),\n\t\t\t\t\t\t\tNewVDFDifficulty 
=\n\t\t\t\t\t\t\t\t(VDFIntervalTotal * VDFDifficulty) div IntervalTotal,\n\t\t\t\t\t\t\tEMAVDFDifficulty = (9*VDFDifficulty + NewVDFDifficulty) div 10,\n\t\t\t\t\t\t\t?LOG_DEBUG([{event, vdf_difficulty_retarget},\n\t\t\t\t\t\t\t\t\t{height, Height},\n\t\t\t\t\t\t\t\t\t{old_vdf_difficulty, VDFDifficulty},\n\t\t\t\t\t\t\t\t\t{new_vdf_difficulty, NewVDFDifficulty},\n\t\t\t\t\t\t\t\t\t{ema_vdf_difficulty, EMAVDFDifficulty},\n\t\t\t\t\t\t\t\t\t{interval_total, IntervalTotal},\n\t\t\t\t\t\t\t\t\t{vdf_interval_total, VDFIntervalTotal}]),\n\t\t\t\t\t\t\tEMAVDFDifficulty\n\t\t\t\t\tend\n\t\t\tend;\n\t\tfalse ->\n\t\t\t?VDF_DIFFICULTY\n\tend.\n\nvalidate_proof_size(PoA) ->\n\tbyte_size(PoA#poa.tx_path) =< ?MAX_TX_PATH_SIZE andalso\n\t\t\tbyte_size(PoA#poa.data_path) =< ?MAX_DATA_PATH_SIZE andalso\n\t\t\tbyte_size(PoA#poa.chunk) =< ?DATA_CHUNK_SIZE andalso\n\t\t\tbyte_size(PoA#poa.unpacked_chunk) =< ?DATA_CHUNK_SIZE.\n\n%% @doc Compute the block identifier (also referred to as \"independent hash\").\nindep_hash(B) ->\n\tcase B#block.height >= ar_fork:height_2_6() of\n\t\ttrue ->\n\t\t\tH = ar_block:generate_signed_hash(B),\n\t\t\tindep_hash2(H, B#block.signature);\n\t\tfalse ->\n\t\t\tBDS = ar_block:generate_block_data_segment(B),\n\t\t\tindep_hash(BDS, B)\n\tend.\n\n%% @doc Compute the hash signed by the block producer.\ngenerate_signed_hash(#block{ previous_block = PrevH, timestamp = TS,\n\t\tnonce = Nonce, height = Height, diff = Diff, cumulative_diff = CDiff,\n\t\tlast_retarget = LastRetarget, hash = Hash, block_size = BlockSize,\n\t\tweave_size = WeaveSize, tx_root = TXRoot, wallet_list = WalletList,\n\t\thash_list_merkle = HashListMerkle, reward_pool = RewardPool,\n\t\tpacking_2_5_threshold = Packing_2_5_Threshold, reward_addr = Addr,\n\t\treward_key = RewardKey, strict_data_split_threshold = StrictChunkThreshold,\n\t\tusd_to_ar_rate = {RateDividend, RateDivisor},\n\t\tscheduled_usd_to_ar_rate = {ScheduledRateDividend, ScheduledRateDivisor},\n\t\ttags = Tags, txs = TXs,\n\t\treward = Reward, hash_preimage = HashPreimage, recall_byte = RecallByte,\n\t\tpartition_number = PartitionNumber, recall_byte2 = RecallByte2,\n\t\tnonce_limiter_info = NonceLimiterInfo,\n\t\tprevious_solution_hash = PreviousSolutionHash,\n\t\tprice_per_gib_minute = PricePerGiBMinute,\n\t\tscheduled_price_per_gib_minute = ScheduledPricePerGiBMinute,\n\t\treward_history_hash = RewardHistoryHash,\n\t\tblock_time_history_hash = BlockTimeHistoryHash, debt_supply = DebtSupply,\n\t\tkryder_plus_rate_multiplier = KryderPlusRateMultiplier,\n\t\tkryder_plus_rate_multiplier_latch = KryderPlusRateMultiplierLatch,\n\t\tdenomination = Denomination, redenomination_height = RedenominationHeight,\n\t\tdouble_signing_proof = DoubleSigningProof, previous_cumulative_diff = PrevCDiff,\n\t\tmerkle_rebase_support_threshold = RebaseThreshold,\n\t\tpoa = #poa{ data_path = DataPath, tx_path = TXPath },\n\t\tpoa2 = #poa{ data_path = DataPath2, tx_path = TXPath2 },\n\t\tchunk_hash = ChunkHash, chunk2_hash = Chunk2Hash,\n\t\tpacking_difficulty = PackingDifficulty,\n\t\tunpacked_chunk_hash = UnpackedChunkHash,\n\t\tunpacked_chunk2_hash = UnpackedChunk2Hash,\n\t\treplica_format = ReplicaFormat }) ->\n\tGetTXID = fun(TXID) when is_binary(TXID) -> TXID; (TX) -> TX#tx.id end,\n\tNonce2 = binary:encode_unsigned(Nonce),\n\t%% The only block where reward_address may be unclaimed\n\t%% is the genesis block of a new weave.\n\tAddr2 = case Addr of unclaimed -> <<>>; _ -> Addr end,\n\tRewardKey2 = case RewardKey of undefined -> undefined; {_Type, Pub} -> Pub 
end,\n\t#nonce_limiter_info{ output = Output, global_step_number = N, seed = Seed,\n\t\t\tnext_seed = NextSeed, partition_upper_bound = PartitionUpperBound,\n\t\t\tnext_partition_upper_bound = NextPartitionUpperBound,\n\t\t\tsteps = Steps, prev_output = PrevOutput,\n\t\t\tlast_step_checkpoints = LastStepCheckpoints,\n\t\t\tvdf_difficulty = VDFDifficulty,\n\t\t\tnext_vdf_difficulty = NextVDFDifficulty } = NonceLimiterInfo,\n\t{RebaseThresholdBin, DataPathBin, TXPathBin, DataPath2Bin, TXPath2Bin,\n\t\t\tChunkHashBin, Chunk2HashBin, BlockTimeHistoryHashBin,\n\t\t\tVDFDifficultyBin, NextVDFDifficultyBin} =\n\t\tcase Height >= ar_fork:height_2_7() of\n\t\t\ttrue ->\n\t\t\t\t{encode_int(RebaseThreshold, 16), ar_serialize:encode_bin(DataPath, 24),\n\t\t\t\t\t\tar_serialize:encode_bin(TXPath, 24),\n\t\t\t\t\t\tar_serialize:encode_bin(DataPath2, 24),\n\t\t\t\t\t\tar_serialize:encode_bin(TXPath2, 24),\n\t\t\t\t\t\t<< ChunkHash:32/binary >>,\n\t\t\t\t\t\tar_serialize:encode_bin(Chunk2Hash, 8),\n\t\t\t\t\t\t<< BlockTimeHistoryHash:32/binary >>,\n\t\t\t\t\t\tar_serialize:encode_int(VDFDifficulty, 8),\n\t\t\t\t\t\tar_serialize:encode_int(NextVDFDifficulty, 8)};\n\t\t\tfalse ->\n\t\t\t\t{<<>>, <<>>, <<>>, <<>>, <<>>, <<>>, <<>>, <<>>, <<>>, <<>>}\n\t\tend,\n\t{PackingDifficultyBin, UnpackedChunkHashBin, UnpackedChunk2HashBin} =\n\t\tcase Height >= ar_fork:height_2_8() of\n\t\t\ttrue ->\n\t\t\t\t{<< PackingDifficulty:8 >>,\n\t\t\t\t\t\tar_serialize:encode_bin(UnpackedChunkHash, 8),\n\t\t\t\t\t\tar_serialize:encode_bin(UnpackedChunk2Hash, 8)};\n\t\t\tfalse ->\n\t\t\t\t{<<>>, <<>>, <<>>}\n\t\tend,\n\tReplicaFormatBin =\n\t\tcase Height >= ar_fork:height_2_9() of\n\t\t\ttrue ->\n\t\t\t\t<< ReplicaFormat:8 >>;\n\t\t\tfalse ->\n\t\t\t\t<<>>\n\t\tend,\n\t%% The elements must be either fixed-size or separated by the size separators (\n\t%% the ar_serialize:encode_* functions).\n\tSegment = << (encode_bin(PrevH, 8))/binary, (encode_int(TS, 8))/binary,\n\t\t\t(encode_bin(Nonce2, 16))/binary, (encode_int(Height, 8))/binary,\n\t\t\t(encode_int(Diff, 16))/binary, (encode_int(CDiff, 16))/binary,\n\t\t\t(encode_int(LastRetarget, 8))/binary, (encode_bin(Hash, 8))/binary,\n\t\t\t(encode_int(BlockSize, 16))/binary, (encode_int(WeaveSize, 16))/binary,\n\t\t\t(encode_bin(Addr2, 8))/binary, (encode_bin(TXRoot, 8))/binary,\n\t\t\t(encode_bin(WalletList, 8))/binary,\n\t\t\t(encode_bin(HashListMerkle, 8))/binary, (encode_int(RewardPool, 8))/binary,\n\t\t\t(encode_int(Packing_2_5_Threshold, 8))/binary,\n\t\t\t(encode_int(StrictChunkThreshold, 8))/binary,\n\t\t\t\t\t(encode_int(RateDividend, 8))/binary,\n\t\t\t(encode_int(RateDivisor, 8))/binary,\n\t\t\t\t\t(encode_int(ScheduledRateDividend, 8))/binary,\n\t\t\t(encode_int(ScheduledRateDivisor, 8))/binary,\n\t\t\t(encode_bin_list(Tags, 16, 16))/binary,\n\t\t\t(encode_bin_list([GetTXID(TX) || TX <- TXs], 16, 8))/binary,\n\t\t\t(encode_int(Reward, 8))/binary,\n\t\t\t(encode_int(RecallByte, 16))/binary, (encode_bin(HashPreimage, 8))/binary,\n\t\t\t(encode_int(RecallByte2, 16))/binary, (encode_bin(RewardKey2, 16))/binary,\n\t\t\t(encode_int(PartitionNumber, 8))/binary, Output:32/binary, N:64,\n\t\t\tSeed:48/binary, NextSeed:48/binary, PartitionUpperBound:256,\n\t\t\tNextPartitionUpperBound:256, (encode_bin(PrevOutput, 8))/binary,\n\t\t\t(length(Steps)):16, (iolist_to_binary(Steps))/binary,\n\t\t\t(length(LastStepCheckpoints)):16, (iolist_to_binary(LastStepCheckpoints))/binary,\n\t\t\t(encode_bin(PreviousSolutionHash, 8))/binary,\n\t\t\t(encode_int(PricePerGiBMinute, 
8))/binary,\n\t\t\t(encode_int(ScheduledPricePerGiBMinute, 8))/binary,\n\t\t\tRewardHistoryHash:32/binary, (encode_int(DebtSupply, 8))/binary,\n\t\t\tKryderPlusRateMultiplier:24, KryderPlusRateMultiplierLatch:8, Denomination:24,\n\t\t\t(encode_int(RedenominationHeight, 8))/binary,\n\t\t\t(ar_serialize:encode_double_signing_proof(DoubleSigningProof, Height))/binary,\n\t\t\t(encode_int(PrevCDiff, 16))/binary, RebaseThresholdBin/binary,\n\t\t\tDataPathBin/binary, TXPathBin/binary, DataPath2Bin/binary, TXPath2Bin/binary,\n\t\t\tChunkHashBin/binary, Chunk2HashBin/binary, BlockTimeHistoryHashBin/binary,\n\t\t\tVDFDifficultyBin/binary, NextVDFDifficultyBin/binary,\n\t\t\tPackingDifficultyBin/binary, UnpackedChunkHashBin/binary,\n\t\t\tUnpackedChunk2HashBin/binary, ReplicaFormatBin/binary >>,\n\tcrypto:hash(sha256, Segment).\n\n%% @doc Compute the block identifier from the signed hash and block signature.\nindep_hash2(SignedH, Signature) ->\n\tcrypto:hash(sha384, << SignedH:32/binary, Signature/binary >>).\n\n%% @doc Compute the block identifier of a pre-2.6 block.\nindep_hash(BDS, B) ->\n\tcase B#block.height >= ar_fork:height_2_4() of\n\t\ttrue ->\n\t\t\tar_deep_hash:hash([BDS, B#block.hash, B#block.nonce,\n\t\t\t\t\tar_block:poa_to_list(B#block.poa)]);\n\t\tfalse ->\n\t\t\tar_deep_hash:hash([BDS, B#block.hash, B#block.nonce])\n\tend.\n\n%% @doc Return the signed block signature preimage.\nget_block_signature_preimage(CDiff, PrevCDiff, Preimage, Height) ->\n\tEncodedCDiff = ar_serialize:encode_int(CDiff, 16),\n\tEncodedPrevCDiff = ar_serialize:encode_int(PrevCDiff, 16),\n\tSignaturePreimage = << EncodedCDiff/binary,\n\t\t\tEncodedPrevCDiff/binary, Preimage/binary >>,\n\tcase Height >= ar_fork:height_2_9() of\n\t\tfalse ->\n\t\t\tSignaturePreimage;\n\t\ttrue ->\n\t\t\t<< 0:(32 * 8), SignaturePreimage/binary >>\n\tend.\n\n%% @doc Verify the block signature.\nverify_signature(BlockPreimage, PrevCDiff,\n\t\t#block{ signature = Signature, reward_key = {?RSA_KEY_TYPE, Pub} = RewardKey,\n\t\t\t\treward_addr = RewardAddr, previous_solution_hash = PrevSolutionH,\n\t\t\t\tcumulative_diff = CDiff, height = Height })\n\t\twhen byte_size(Signature) == ?RSA_BLOCK_SIG_SIZE,\n\t\t\t\tbyte_size(Pub) == ?RSA_BLOCK_SIG_SIZE ->\n\tSignaturePreimage = get_block_signature_preimage(CDiff, PrevCDiff,\n\t\t\t<< PrevSolutionH/binary, BlockPreimage/binary >>, Height),\n\tar_wallet:to_address(RewardKey) == RewardAddr andalso\n\t\t\tar_wallet:verify(RewardKey, SignaturePreimage, Signature);\nverify_signature(BlockPreimage, PrevCDiff,\n\t\t#block{ signature = Signature, reward_key = {?ECDSA_KEY_TYPE, Pub} = RewardKey,\n\t\t\t\treward_addr = RewardAddr, previous_solution_hash = PrevSolutionH,\n\t\t\t\tcumulative_diff = CDiff, height = Height })\n\t\twhen byte_size(Signature) == ?ECDSA_SIG_SIZE, byte_size(Pub) == ?ECDSA_PUB_KEY_SIZE ->\n\tSignaturePreimage = get_block_signature_preimage(CDiff, PrevCDiff,\n\t\t\t<< PrevSolutionH/binary, BlockPreimage/binary >>, Height),\n\tcase Height >= ar_fork:height_2_9() of\n\t\ttrue ->\n\t\t\tar_wallet:to_address(RewardKey) == RewardAddr andalso\n\t\t\t\t\tar_wallet:verify(RewardKey, SignaturePreimage, Signature);\n\t\tfalse ->\n\t\t\tfalse\n\tend;\nverify_signature(_BlockPreimage, _PrevCDiff, _B) ->\n\tfalse.\n\n%% @doc Return the key suitable for ar_wallet:sign/3 from the given public key.\nget_reward_key(Pub, Height) ->\n\tcase Height >= ar_fork:height_2_9() of\n\t\tfalse ->\n\t\t\t{?DEFAULT_KEY_TYPE, Pub};\n\t\ttrue ->\n\t\t\tcase byte_size(Pub) of\n\t\t\t\t?ECDSA_PUB_KEY_SIZE 
->\n\t\t\t\t\t{?ECDSA_KEY_TYPE, Pub};\n\t\t\t\t_ ->\n\t\t\t\t\t{?RSA_KEY_TYPE, Pub}\n\t\t\tend\n\tend.\n\n%% @doc Generate a block data segment for a pre-2.6 block. It is combined with a nonce\n%% when computing a solution candidate.\ngenerate_block_data_segment(B) ->\n\tgenerate_block_data_segment(generate_block_data_segment_base(B), B).\n\n%% @doc Generate a pre-2.6 block data segment given the computed \"base\".\ngenerate_block_data_segment(BDSBase, B) ->\n\tProps = [\n\t\tBDSBase,\n\t\tinteger_to_binary(B#block.timestamp),\n\t\tinteger_to_binary(B#block.last_retarget),\n\t\tinteger_to_binary(B#block.diff),\n\t\tinteger_to_binary(B#block.cumulative_diff),\n\t\tinteger_to_binary(B#block.reward_pool),\n\t\tB#block.wallet_list,\n\t\tB#block.hash_list_merkle\n\t],\n\tar_deep_hash:hash(Props).\n\n%% @doc Generate a hash, which is used to produce a block data segment\n%% when combined with the time-dependent parameters, which frequently\n%% change during mining - timestamp, last retarget timestamp, difficulty,\n%% cumulative difficulty, (before the fork 2.4, also miner's wallet, reward pool).\n%% Also excludes the merkle root of the block index, which is hashed with the rest\n%% as the last step - it was used before the fork 2.4 to allow verifiers to quickly\n%% validate PoW against the current state. After the fork 2.4, the hash of the\n%% previous block prefixes the solution hash preimage of the new block.\ngenerate_block_data_segment_base(B) ->\n\tGetTXID = fun(TXID) when is_binary(TXID) -> TXID; (TX) -> TX#tx.id end,\n\tcase B#block.height >= ar_fork:height_2_4() of\n\t\ttrue ->\n\t\t\tProps = [\n\t\t\t\tinteger_to_binary(B#block.height),\n\t\t\t\tB#block.previous_block,\n\t\t\t\tB#block.tx_root,\n\t\t\t\tlists:map(GetTXID, B#block.txs),\n\t\t\t\tinteger_to_binary(B#block.block_size),\n\t\t\t\tinteger_to_binary(B#block.weave_size),\n\t\t\t\tcase B#block.reward_addr of\n\t\t\t\t\tunclaimed ->\n\t\t\t\t\t\t<<\"unclaimed\">>;\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tB#block.reward_addr\n\t\t\t\tend,\n\t\t\t\tencode_tags(B)\n\t\t\t],\n\t\t\tProps2 =\n\t\t\t\tcase B#block.height >= ar_fork:height_2_5() of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\t{RateDividend, RateDivisor} = B#block.usd_to_ar_rate,\n\t\t\t\t\t\t{ScheduledRateDividend, ScheduledRateDivisor} =\n\t\t\t\t\t\t\tB#block.scheduled_usd_to_ar_rate,\n\t\t\t\t\t\t[\n\t\t\t\t\t\t\tinteger_to_binary(RateDividend),\n\t\t\t\t\t\t\tinteger_to_binary(RateDivisor),\n\t\t\t\t\t\t\tinteger_to_binary(ScheduledRateDividend),\n\t\t\t\t\t\t\tinteger_to_binary(ScheduledRateDivisor),\n\t\t\t\t\t\t\tinteger_to_binary(B#block.packing_2_5_threshold),\n\t\t\t\t\t\t\tinteger_to_binary(B#block.strict_data_split_threshold)\n\t\t\t\t\t\t\t| Props\n\t\t\t\t\t\t];\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\tProps\n\t\t\t\tend,\n\t\t\tar_deep_hash:hash(Props2);\n\t\tfalse ->\n\t\t\tar_deep_hash:hash([\n\t\t\t\tinteger_to_binary(B#block.height),\n\t\t\t\tB#block.previous_block,\n\t\t\t\tB#block.tx_root,\n\t\t\t\tlists:map(GetTXID, B#block.txs),\n\t\t\t\tinteger_to_binary(B#block.block_size),\n\t\t\t\tinteger_to_binary(B#block.weave_size),\n\t\t\t\tcase B#block.reward_addr of\n\t\t\t\t\tunclaimed ->\n\t\t\t\t\t\t<<\"unclaimed\">>;\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tB#block.reward_addr\n\t\t\t\tend,\n\t\t\t\tencode_tags(B),\n\t\t\t\tpoa_to_list(B#block.poa)\n\t\t\t])\n\tend.\n\n%% @doc Return {RecallRange1Start, RecallRange2Start} - the start offsets\n%% of the two recall ranges.\n-ifdef(LOCALNET).\nget_recall_range(H0, PartitionNumber, PartitionUpperBound, not_set, not_set) ->\n\tRecallRange1Offset = 
binary:decode_unsigned(binary:part(H0, 0, 8), big),\n\tRecallRange1Start = PartitionNumber * ar_block:partition_size()\n\t\t\t+ RecallRange1Offset rem min(ar_block:partition_size(), PartitionUpperBound),\n\tRecallRange2Start = binary:decode_unsigned(H0, big) rem PartitionUpperBound,\n\t{RecallRange1Start, RecallRange2Start};\n\n%% In LOCALNET mode, RecallRange1 and RecallRange2 are passed through directly.\n%% In normal mode, they are computed from H0 and PartitionNumber.\nget_recall_range(_H0, _PartitionNumber, _PartitionUpperBound, RecallRange1, RecallRange2) ->\n\t{RecallRange1, RecallRange2}.\n-else.\nget_recall_range(H0, PartitionNumber, PartitionUpperBound, _RecallRange1, _RecallRange2) ->\n\tRecallRange1Offset = binary:decode_unsigned(binary:part(H0, 0, 8), big),\n\tRecallRange1Start = PartitionNumber * ar_block:partition_size()\n\t\t\t+ RecallRange1Offset rem min(ar_block:partition_size(), PartitionUpperBound),\n\tRecallRange2Start = binary:decode_unsigned(H0, big) rem PartitionUpperBound,\n\t{RecallRange1Start, RecallRange2Start}.\n-endif.\n\n%% @doc Compatibility version for 3 arguments.\nget_recall_range(H0, PartitionNumber, PartitionUpperBound) ->\n\tget_recall_range(H0, PartitionNumber, PartitionUpperBound, not_set, not_set).\n\nvdf_step_number(#block{ nonce_limiter_info = Info }) ->\n\tInfo#nonce_limiter_info.global_step_number.\n\nget_packing(PackingDifficulty, MiningAddress, 0) ->\n\tcase PackingDifficulty >= 1 of\n\t\ttrue ->\n\t\t\t{composite, MiningAddress, PackingDifficulty};\n\t\tfalse ->\n\t\t\t{spora_2_6, MiningAddress}\n\tend;\nget_packing(_PackingDifficulty, MiningAddress, 1) ->\n\t{replica_2_9, MiningAddress}.\n\nvalidate_replica_format(Height, PackingDifficulty, 1) ->\n\tHeight >= ar_fork:height_2_9()\n\t\t\tandalso PackingDifficulty == ?REPLICA_2_9_PACKING_DIFFICULTY;\nvalidate_replica_format(Height, 0, 0) ->\n\t%% Support for spora_2_6 discontinued at\n\t%% ar_fork:height_2_8() + ?SPORA_PACKING_EXPIRATION_PERIOD_BLOCKS.\n\tHeight - ?SPORA_PACKING_EXPIRATION_PERIOD_BLOCKS < ar_fork:height_2_8();\nvalidate_replica_format(Height, CompositePackingDifficulty, 0) ->\n\tcase Height - ?COMPOSITE_PACKING_EXPIRATION_PERIOD_BLOCKS < ar_fork:height_2_9() of\n\t\ttrue ->\n\t\t\t%% Composite is still supported - difficulty 1 through 32\n\t\t\tHeight >= ar_fork:height_2_8()\n\t\t\t\tandalso CompositePackingDifficulty =< ?MAX_PACKING_DIFFICULTY;\n\t\tfalse ->\n\t\t\t%% Composite packing is no longer supported.\n\t\t\tfalse\n\tend;\nvalidate_replica_format(_, _, _) ->\n\tfalse.\n\nget_recall_range_size(0) ->\n\t?LEGACY_RECALL_RANGE_SIZE;\nget_recall_range_size(PackingDifficulty) ->\n\t?RECALL_RANGE_SIZE div PackingDifficulty.\n\nget_recall_byte(RecallRangeStart, Nonce, 0) ->\n\tRecallRangeStart + Nonce * ?DATA_CHUNK_SIZE;\nget_recall_byte(RecallRangeStart, Nonce, _PackingDifficulty) ->\n\tChunkNumber = Nonce div ?COMPOSITE_PACKING_SUB_CHUNK_COUNT,\n\tRecallRangeStart + ChunkNumber * ?DATA_CHUNK_SIZE.\n\n%% @doc Return the number of bytes per sub-chunk. 
This also drives how far each mining nonce\n%% increments the recall byte.\nget_sub_chunk_size(0) ->\n\t?DATA_CHUNK_SIZE;\nget_sub_chunk_size(_PackingDifficulty) ->\n\t?COMPOSITE_PACKING_SUB_CHUNK_SIZE.\n\n%% @doc Return the number of mining nonces contained in each data chunk.\nget_nonces_per_chunk(0) ->\n\t1;\nget_nonces_per_chunk(_PackingDifficulty) ->\n\t?COMPOSITE_PACKING_SUB_CHUNK_COUNT.\n\nget_nonces_per_recall_range(PackingDifficulty) ->\n\tmax(1, get_recall_range_size(PackingDifficulty) div get_sub_chunk_size(PackingDifficulty)).\n\n%% @doc For packing difficulty 0 (aka spora_2_6 packing), there is one nonce per chunk, so\n%% the max nonce is the same as the max chunk number. For packing difficulty >= 1 (aka\n%% composite packing and the 2.9 replication), there are ?COMPOSITE_PACKING_SUB_CHUNK_COUNT\n%% nonces per chunk.\nget_max_nonce(PackingDifficulty) ->\n\t%% The max(...) is included mostly for testing, where the recall range can be less than\n\t%% a chunk.\n\tmax(get_nonces_per_chunk(PackingDifficulty) - 1,\n\t\tget_nonces_per_recall_range(PackingDifficulty) - 1).\n\n%% @doc Return the 0-based sub-chunk index the mining nonce is pointing to.\nget_sub_chunk_index(0, _Nonce) ->\n\t-1;\nget_sub_chunk_index(_PackingDifficulty, Nonce) ->\n\tNonce rem ?COMPOSITE_PACKING_SUB_CHUNK_COUNT.\n\n%% @doc Return Offset if it is smaller than or equal to ar_block:strict_data_split_threshold().\n%% Otherwise, return the offset of the last byte of the chunk + the size of the padding.\n-spec get_chunk_padded_offset(Offset :: non_neg_integer()) -> non_neg_integer().\nget_chunk_padded_offset(Offset) ->\n\tcase Offset > ar_block:strict_data_split_threshold() of\n\t\ttrue ->\n\t\t\tar_poa:get_padded_offset(Offset, ar_block:strict_data_split_threshold());\n\t\tfalse ->\n\t\t\tOffset\n\tend.\n\n%% @doc Return true if the given cumulative difficulty - previous cumulative difficulty\n%% pairs satisfy the double signing condition.\n-spec get_double_signing_condition(\n\t\tCDiff1 :: non_neg_integer(),\n\t\tPrevCDiff1 :: non_neg_integer(),\n\t\tCDiff2 :: non_neg_integer(),\n\t\tPrevCDiff2 :: non_neg_integer()\n) -> boolean().\nget_double_signing_condition(CDiff1, PrevCDiff1, CDiff2, PrevCDiff2) ->\n\tCDiff1 == CDiff2 orelse (CDiff1 > PrevCDiff2 andalso CDiff2 > PrevCDiff1).\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nvalidate_tags_size(B) ->\n\tcase B#block.height >= ar_fork:height_2_5() of\n\t\ttrue ->\n\t\t\tTags = B#block.tags,\n\t\t\tvalidate_tags_length(Tags, 0) andalso byte_size(list_to_binary(Tags)) =< 2048;\n\t\tfalse ->\n\t\t\tbyte_size(list_to_binary(B#block.tags)) =< 2048\n\tend.\n\nvalidate_tags_length(_, N) when N > 2048 ->\n\tfalse;\nvalidate_tags_length([_ | Tags], N) ->\n\tvalidate_tags_length(Tags, N + 1);\nvalidate_tags_length([], _) ->\n\ttrue.\n\nencode_int(N, S) -> ar_serialize:encode_int(N, S).\nencode_bin(N, S) -> ar_serialize:encode_bin(N, S).\nencode_bin_list(L, LS, ES) -> ar_serialize:encode_bin_list(L, LS, ES).\n\nhash_wallet_list(WalletList) ->\n\tar_patricia_tree:compute_hash(WalletList,\n\t\tfun\t(Addr, {Balance, LastTX}) ->\n\t\t\t\tEncodedBalance = binary:encode_unsigned(Balance),\n\t\t\t\tar_deep_hash:hash([Addr, EncodedBalance, LastTX]);\n\t\t\t(Addr, {Balance, LastTX, Denomination, MiningPermission}) ->\n\t\t\t\tMiningPermissionBin =\n\t\t\t\t\tcase MiningPermission of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t<<1>>;\n\t\t\t\t\t\tfalse 
->\n\t\t\t\t\t\t\t<<0>>\n\t\t\t\t\tend,\n\t\t\t\tPreimage = << (ar_serialize:encode_bin(Addr, 8))/binary,\n\t\t\t\t\t\t(ar_serialize:encode_int(Balance, 8))/binary,\n\t\t\t\t\t\t(ar_serialize:encode_bin(LastTX, 8))/binary,\n\t\t\t\t\t\t(ar_serialize:encode_int(Denomination, 8))/binary,\n\t\t\t\t\t\tMiningPermissionBin/binary >>,\n\t\t\t\tcrypto:hash(sha384, Preimage)\n\t\tend\n\t).\n\n%% @doc Generate the TX tree and set the TX root for a block.\ngenerate_tx_tree(B) ->\n\tSizeTaggedTXs = generate_size_tagged_list_from_txs(B#block.txs, B#block.height),\n\tSizeTaggedDataRoots = [{Root, Offset} || {{_, Root}, Offset} <- SizeTaggedTXs],\n\tgenerate_tx_tree(B, SizeTaggedDataRoots).\n\ngenerate_tx_tree(B, SizeTaggedDataRoots) ->\n\t{Root, Tree} = ar_merkle:generate_tree(SizeTaggedDataRoots),\n\tB#block{ tx_tree = Tree, tx_root = Root }.\n\ngenerate_size_tagged_list_from_txs(TXs, Height) ->\n\tlists:reverse(\n\t\telement(2,\n\t\t\tlists:foldl(\n\t\t\t\tfun(TX, {Pos, List}) ->\n\t\t\t\t\tDataSize = TX#tx.data_size,\n\t\t\t\t\tEnd = Pos + DataSize,\n\t\t\t\t\tcase Height >= ar_fork:height_2_5() of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\tPadding = ar_tx:get_weave_size_increase(DataSize, Height)\n\t\t\t\t\t\t\t\t\t- DataSize,\n\t\t\t\t\t\t\t%% Encode the padding information in the Merkle tree.\n\t\t\t\t\t\t\tcase Padding > 0 of\n\t\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\t\tPaddingRoot = ?PADDING_NODE_DATA_ROOT,\n\t\t\t\t\t\t\t\t\t{End + Padding, [{{padding, PaddingRoot}, End + Padding},\n\t\t\t\t\t\t\t\t\t\t\t{{TX#tx.id, get_tx_data_root(TX)}, End} | List]};\n\t\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t\t{End, [{{TX#tx.id, get_tx_data_root(TX)}, End} | List]}\n\t\t\t\t\t\t\tend;\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t{End, [{{TX#tx.id, get_tx_data_root(TX)}, End} | List]}\n\t\t\t\t\tend\n\t\t\t\tend,\n\t\t\t\t{0, []},\n\t\t\t\tlists:sort(TXs)\n\t\t\t)\n\t\t)\n\t).\n\n%% @doc Find the appropriate block hash list for a block, from a block index.\ngenerate_hash_list_for_block(_BlockOrHash, []) -> [];\ngenerate_hash_list_for_block(B, BI) when ?IS_BLOCK(B) ->\n\tgenerate_hash_list_for_block(B#block.indep_hash, BI);\ngenerate_hash_list_for_block(Hash, BI) ->\n\tdo_generate_hash_list_for_block(Hash, BI).\n\ndo_generate_hash_list_for_block(_, []) ->\n\terror(cannot_generate_hash_list);\ndo_generate_hash_list_for_block(IndepHash, [{IndepHash, _, _} | BI]) -> ?BI_TO_BHL(BI);\ndo_generate_hash_list_for_block(IndepHash, [_ | Rest]) ->\n\tdo_generate_hash_list_for_block(IndepHash, Rest).\n\nencode_tags(B) ->\n\tcase B#block.height >= ar_fork:height_2_5() of\n\t\ttrue ->\n\t\t\tB#block.tags;\n\t\tfalse ->\n\t\t\tar_tx:tags_to_list(B#block.tags)\n\tend.\n\npoa_to_list(POA) ->\n\t[\n\t\tinteger_to_binary(POA#poa.option),\n\t\tPOA#poa.tx_path,\n\t\tPOA#poa.data_path,\n\t\tPOA#poa.chunk\n\t].\n\n%% @doc Compute the 2.5 packing threshold.\nget_packing_threshold(B, SearchSpaceUpperBound) ->\n\t#block{ height = Height, packing_2_5_threshold = PrevPackingThreshold } = B,\n\tFork_2_5 = ar_fork:height_2_5(),\n\tcase Height + 1 == Fork_2_5 of\n\t\ttrue ->\n\t\t\tSearchSpaceUpperBound;\n\t\tfalse ->\n\t\t\tcase Height + 1 > Fork_2_5 of\n\t\t\t\ttrue ->\n\t\t\t\t\tar_block:shift_packing_2_5_threshold(PrevPackingThreshold);\n\t\t\t\tfalse ->\n\t\t\t\t\tundefined\n\t\t\tend\n\tend.\n\n%% @doc Move the fork 2.5 packing threshold.\nshift_packing_2_5_threshold(0) ->\n\t0;\nshift_packing_2_5_threshold(Threshold) ->\n\tTargetTime = ar_testnet:target_block_time(ar_fork:height_2_5()),\n\tShift = (?DATA_CHUNK_SIZE) * 
(?PACKING_2_5_THRESHOLD_CHUNKS_PER_SECOND) * TargetTime,\n\tmax(0, Threshold - Shift).\n\nverify_tx_root(B) ->\n\tB#block.tx_root == generate_tx_root_for_block(B).\n\n%% @doc Given a list of TXs in various formats, or a block, generate the\n%% correct TX merkle tree root.\ngenerate_tx_root_for_block(B) when is_record(B, block) ->\n\tgenerate_tx_root_for_block(B#block.txs, B#block.height).\n\ngenerate_tx_root_for_block(TXIDs = [TXID | _], Height) when is_binary(TXID) ->\n\tgenerate_tx_root_for_block(ar_storage:read_tx(TXIDs), Height);\ngenerate_tx_root_for_block([], _Height) ->\n\t<<>>;\ngenerate_tx_root_for_block(TXs = [TX | _], Height) when is_record(TX, tx) ->\n\tSizeTaggedTXs = generate_size_tagged_list_from_txs(TXs, Height),\n\tSizeTaggedDataRoots = [{Root, Offset} || {{_, Root}, Offset} <- SizeTaggedTXs],\n\t{Root, _Tree} = ar_merkle:generate_tree(SizeTaggedDataRoots),\n\tRoot.\n\nget_tx_data_root(#tx{ format = 2, data_root = DataRoot }) ->\n\tDataRoot;\nget_tx_data_root(TX) ->\n\t(ar_tx:generate_chunk_tree(TX))#tx.data_root.\n\n%%%===================================================================\n%%% Tests.\n%%%===================================================================\n\nhash_list_gen_test_() ->\n\t{timeout, 120, fun test_hash_list_gen/0}.\n\ntest_hash_list_gen() ->\n\t[B0] = ar_weave:init(),\n\tar_test_node:start(B0),\n\tar_test_node:mine(),\n\tBI1 = ar_test_node:wait_until_height(main, 1),\n\tB1 = ar_storage:read_block(hd(BI1)),\n\tar_test_node:mine(),\n\tBI2 = ar_test_node:wait_until_height(main, 2),\n\tB2 = ar_storage:read_block(hd(BI2)),\n\t?assertEqual([B0#block.indep_hash], generate_hash_list_for_block(B1, BI2)),\n\t?assertEqual([H || {H, _, _} <- BI1],\n\t\t\tgenerate_hash_list_for_block(B2#block.indep_hash, BI2)).\n\ngenerate_size_tagged_list_from_txs_test() ->\n\tFork_2_5 = ar_fork:height_2_5(),\n\t?assertEqual([], generate_size_tagged_list_from_txs([], Fork_2_5)),\n\t?assertEqual([], generate_size_tagged_list_from_txs([], Fork_2_5 - 1)),\n\tEmptyV1Root = (ar_tx:generate_chunk_tree(#tx{}))#tx.data_root,\n\t?assertEqual([{{<<>>, EmptyV1Root}, 0}],\n\t\t\tgenerate_size_tagged_list_from_txs([#tx{}], Fork_2_5)),\n\t?assertEqual([{{<<>>, <<>>}, 0}],\n\t\t\tgenerate_size_tagged_list_from_txs([#tx{ format = 2 }], Fork_2_5)),\n\t?assertEqual([{{<<>>, <<>>}, 0}],\n\t\t\tgenerate_size_tagged_list_from_txs([#tx{ format = 2}], Fork_2_5 - 1)),\n\t?assertEqual([{{<<>>, <<\"r\">>}, 1}, {{padding, <<>>}, 262144}],\n\t\t\tgenerate_size_tagged_list_from_txs([#tx{ format = 2, data_root = <<\"r\">>,\n\t\t\t\t\tdata_size = 1 }], Fork_2_5)),\n\t?assertEqual([\n\t\t\t{{<<\"1\">>, <<\"r\">>}, 1}, {{padding, <<>>}, 262144},\n\t\t\t{{<<\"2\">>, <<>>}, 262144},\n\t\t\t{{<<\"3\">>, <<>>}, 262144 * 5},\n\t\t\t{{<<\"4\">>, <<>>}, 262144 * 5},\n\t\t\t{{<<\"5\">>, <<>>}, 262144 * 5},\n\t\t\t{{<<\"6\">>, <<>>}, 262144 * 6}],\n\t\t\tgenerate_size_tagged_list_from_txs([\n\t\t\t\t\t#tx{ id = <<\"1\">>, format = 2, data_root = <<\"r\">>, data_size = 1 },\n\t\t\t\t\t#tx{ id = <<\"2\">>, format = 2 },\n\t\t\t\t\t#tx{ id = <<\"3\">>, format = 2, data_size = 262144 * 4 },\n\t\t\t\t\t#tx{ id = <<\"4\">>, format = 2 },\n\t\t\t\t\t#tx{ id = <<\"5\">>, format = 2 },\n\t\t\t\t\t#tx{ id = <<\"6\">>, format = 2, data_size = 262144 }], Fork_2_5)).\n\ntest_wallet_list_performance() ->\n\ttest_wallet_list_performance(250_000, ar_deep_hash, mixed).\n\ntest_wallet_list_performance(Length) ->\n\ttest_wallet_list_performance(Length, ar_deep_hash, mixed).\n\ntest_wallet_list_performance(Length, Algo) 
->\n\ttest_wallet_list_performance(Length, Algo, mixed).\n\ntest_wallet_list_performance(Length, Algo, Denominations) ->\n\tSupportedAlgos = [ar_deep_hash, no_ar_deep_hash_sha384, sha256],\n\tcase lists:member(Algo, SupportedAlgos) of\n\t\tfalse ->\n\t\t\tio:format(\"Supported Algo: ~p~n\", [SupportedAlgos]);\n\t\ttrue ->\n\t\t\tSupportedDenominations = [old, new, mixed],\n\t\t\tcase lists:member(Denominations, SupportedDenominations) of\n\t\t\t\tfalse ->\n\t\t\t\t\tio:format(\"Supported Algo: ~p~n\", [SupportedDenominations]);\n\t\t\t\ttrue ->\n\t\t\t\t\ttest_wallet_list_performance2(Length, Algo, Denominations)\n\t\t\tend\n\tend.\n\ntest_wallet_list_performance2(Length, Algo, Denominations) ->\n\n\tio:format(\"# ~B wallets, denominations: ~p, algo: ~p~n\", [Length, Denominations, Algo]),\n\tio:format(\"============~n\"),\n\tWL = [random_wallet() || _ <- lists:seq(1, Length)],\n\t{Time1, T1} =\n\t\ttimer:tc(\n\t\t\tfun() ->\n\t\t\t\tlists:foldl(\n\t\t\t\t\tfun({A, B, LastTX}, Acc) ->\n\t\t\t\t\t\tcase Denominations of\n\t\t\t\t\t\t\told ->\n\t\t\t\t\t\t\t\tar_patricia_tree:insert(A, {B, LastTX}, Acc);\n\t\t\t\t\t\t\tnew ->\n\t\t\t\t\t\t\t\tar_patricia_tree:insert(A, {B, LastTX,\n\t\t\t\t\t\t\t\t\t\t1 + rand:uniform(10), true}, Acc);\n\t\t\t\t\t\t\tmixed ->\n\t\t\t\t\t\t\t\tcase rand:uniform(2) == 1 of\n\t\t\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\t\t\tar_patricia_tree:insert(A, {B, LastTX}, Acc);\n\t\t\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t\t\tar_patricia_tree:insert(A, {B, LastTX,\n\t\t\t\t\t\t\t\t\t\t\t\t1 + rand:uniform(10), true}, Acc)\n\t\t\t\t\t\t\t\tend\n\t\t\t\t\t\tend\n\t\t\t\t\tend,\n\t\t\t\t\tar_patricia_tree:new(),\n\t\t\t\t\tWL\n\t\t\t\t)\n\t\t\tend\n\t\t),\n\tio:format(\"tree buildup                    | ~f seconds~n\", [Time1 / 1000000]),\n\t{Time2, Binary} =\n\t\ttimer:tc(\n\t\t\tfun() ->\n\t\t\t\tar_serialize:jsonify(\n\t\t\t\t\tar_serialize:wallet_list_to_json_struct(unclaimed, false, T1)\n\t\t\t\t)\n\t\t\tend\n\t\t),\n\tio:format(\"serialization                   | ~f seconds~n\", [Time2 / 1000000]),\n\tio:format(\"                                | ~B bytes~n\", [byte_size(Binary)]),\n\tComputeHashFun =\n\t\tfun\t(Addr, {Balance, LastTX}) ->\n\t\t\t\tcase Algo of\n\t\t\t\t\tar_deep_hash ->\n\t\t\t\t\t\tEncodedBalance = binary:encode_unsigned(Balance),\n\t\t\t\t\t\tar_deep_hash:hash([Addr, EncodedBalance, LastTX]);\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tDenomination = 0,\n\t\t\t\t\t\tMiningPermissionBin = <<1>>,\n\t\t\t\t\t\tPreimage = << (ar_serialize:encode_bin(Addr, 8))/binary,\n\t\t\t\t\t\t\t\t(ar_serialize:encode_int(Balance, 8))/binary,\n\t\t\t\t\t\t\t\t(ar_serialize:encode_bin(LastTX, 8))/binary,\n\t\t\t\t\t\t\t\t(ar_serialize:encode_int(Denomination, 8))/binary,\n\t\t\t\t\t\t\t\tMiningPermissionBin/binary >>,\n\t\t\t\t\t\tcase Algo of\n\t\t\t\t\t\t\tno_ar_deep_hash_sha384 ->\n\t\t\t\t\t\t\t\tcrypto:hash(sha384, Preimage);\n\t\t\t\t\t\t\tsha256 ->\n\t\t\t\t\t\t\t\tcrypto:hash(sha256, Preimage)\n\t\t\t\t\t\tend\n\t\t\t\tend;\n\t\t\t(Addr, {Balance, LastTX, Denomination, MiningPermission}) ->\n\t\t\t\tMiningPermissionBin =\n\t\t\t\t\tcase MiningPermission of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t<<1>>;\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t<<0>>\n\t\t\t\t\tend,\n\t\t\t\tPreimage = << (ar_serialize:encode_bin(Addr, 8))/binary,\n\t\t\t\t\t\t(ar_serialize:encode_int(Balance, 8))/binary,\n\t\t\t\t\t\t(ar_serialize:encode_bin(LastTX, 8))/binary,\n\t\t\t\t\t\t(ar_serialize:encode_int(Denomination, 8))/binary,\n\t\t\t\t\t\tMiningPermissionBin/binary >>,\n\t\t\t\tcase Algo 
of\n\t\t\t\t\tsha256 ->\n\t\t\t\t\t\tcrypto:hash(sha256, Preimage);\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tcrypto:hash(sha384, Preimage)\n\t\t\t\tend\n\t\tend,\n\t{Time3, {_, T2, _}} =\n\t\ttimer:tc(fun() -> ar_patricia_tree:compute_hash(T1, ComputeHashFun) end),\n\tio:format(\"root hash from scratch          | ~f seconds~n\", [Time3 / 1000000]),\n\t{Time4, T3} =\n\t\ttimer:tc(\n\t\t\tfun() ->\n\t\t\t\tlists:foldl(\n\t\t\t\t\tfun({A, B, LastTX}, Acc) ->\n\t\t\t\t\t\tar_patricia_tree:insert(A, {B, LastTX}, Acc)\n\t\t\t\t\tend,\n\t\t\t\t\tT2,\n\t\t\t\t\t[random_wallet() || _ <- lists:seq(1, 2000)]\n\t\t\t\t)\n\t\t\tend\n\t\t),\n\tio:format(\"2000 inserts                    | ~f seconds~n\", [Time4 / 1000000]),\n\t{Time5, _} =\n\t\ttimer:tc(fun() -> ar_patricia_tree:compute_hash(T3, ComputeHashFun) end),\n\tio:format(\"recompute hash after 2k inserts | ~f seconds~n\", [Time5 / 1000000]),\n\t{Time6, T4} =\n\t\ttimer:tc(\n\t\t\tfun() ->\n\t\t\t\t{A, B, LastTX} = random_wallet(),\n\t\t\t\tar_patricia_tree:insert(A, {B, LastTX}, T2)\n\t\t\tend\n\t\t),\n\tio:format(\"1 insert                        | ~f seconds~n\", [Time6 / 1000000]),\n\t{Time7, _} =\n\t\ttimer:tc(fun() -> ar_patricia_tree:compute_hash(T4, ComputeHashFun) end),\n\tio:format(\"recompute hash after 1 insert   | ~f seconds~n\", [Time7 / 1000000]).\n\nrandom_wallet() ->\n\t{\n\t\tcrypto:strong_rand_bytes(32),\n\t\trand:uniform(1000000000000000000),\n\t\tcrypto:strong_rand_bytes(32)\n\t}.\n\nvalidate_replica_format_test_() ->\n\t[\n\t\tar_test_node:test_with_mocked_functions([\n\t\t\t\t{ar_fork, height_2_8, fun() -> 10 end},\n\t\t\t\t{ar_fork, height_2_9, fun() -> 20 end}\n\t\t\t],\n\t\t\tfun test_validate_replica_format/0, 30)\n\t].\ntest_validate_replica_format() ->\n\t%% pre 2.8, only spora_2_6 is supported\n\t?assertEqual(true, validate_replica_format(0, 0, 0)),\n\t?assertEqual(false, validate_replica_format(0, 1, 0)),\n\t?assertEqual(false, validate_replica_format(0, 33, 0)),\n\t?assertEqual(false, validate_replica_format(0, 25, 0)),\n\t?assertEqual(false, validate_replica_format(0, 0, 1)),\n\t?assertEqual(false, validate_replica_format(0, 1, 1)),\n\t?assertEqual(false, validate_replica_format(0, 33, 1)),\n\t?assertEqual(false, validate_replica_format(0, 25, 1)),\n\t%% post-2.8, pre-2.9, spora_2_6 and composite are supported\n\t?assertEqual(true, validate_replica_format(15, 0, 0)),\n\t?assertEqual(true, validate_replica_format(15, 1, 0)),\n\t?assertEqual(false, validate_replica_format(15, 33, 0)),\n\t?assertEqual(false, validate_replica_format(15, 100, 0)),\n\t?assertEqual(false, validate_replica_format(15, 0, 1)),\n\t?assertEqual(false, validate_replica_format(15, 1, 1)),\n\t?assertEqual(false, validate_replica_format(15, 33, 1)),\n\t?assertEqual(false, validate_replica_format(15, 25, 1)),\n\t%% post-2.9, pre-composite expiration\n\t?assertEqual(true, validate_replica_format(25, 0, 0)),\n\t?assertEqual(true, validate_replica_format(25, 1, 0)),\n\t?assertEqual(false, validate_replica_format(25, 33, 0)),\n\t?assertEqual(false, validate_replica_format(25, 100, 0)),\n\t?assertEqual(false, validate_replica_format(25, 0, 1)),\n\t?assertEqual(false, validate_replica_format(25, 1, 1)),\n\t?assertEqual(false, validate_replica_format(25, 33, 1)),\n\t?assertEqual(true, validate_replica_format(25, 2, 1)), %% 2 in tests.\n\t%% post-2.9, post-composite expiration\n\tCompositeExpiration = ar_fork:height_2_9() + ?COMPOSITE_PACKING_EXPIRATION_PERIOD_BLOCKS,\n\t?assertEqual(true, validate_replica_format(CompositeExpiration, 0, 0)),\n\t?assertEqual(false, 
validate_replica_format(CompositeExpiration, 1, 0)),\n\t?assertEqual(false, validate_replica_format(CompositeExpiration, 33, 0)),\n\t?assertEqual(false, validate_replica_format(CompositeExpiration, 25, 0)),\n\t?assertEqual(false, validate_replica_format(CompositeExpiration, 0, 1)),\n\t?assertEqual(false, validate_replica_format(CompositeExpiration, 1, 1)),\n\t?assertEqual(false, validate_replica_format(CompositeExpiration, 33, 1)),\n\t?assertEqual(true, validate_replica_format(CompositeExpiration, 2, 1)),\n\t%% post-2.9, post-spora expiration\n\tSporaExpiration = ar_fork:height_2_8() + ?SPORA_PACKING_EXPIRATION_PERIOD_BLOCKS,\n\t?assertEqual(false, validate_replica_format(SporaExpiration, 0, 0)),\n\t?assertEqual(false, validate_replica_format(SporaExpiration, 1, 0)),\n\t?assertEqual(false, validate_replica_format(SporaExpiration, 33, 0)),\n\t?assertEqual(false, validate_replica_format(SporaExpiration, 25, 0)),\n\t?assertEqual(false, validate_replica_format(SporaExpiration, 0, 1)),\n\t?assertEqual(false, validate_replica_format(SporaExpiration, 1, 1)),\n\t?assertEqual(false, validate_replica_format(SporaExpiration, 33, 1)),\n\t?assertEqual(true, validate_replica_format(SporaExpiration, 2, 1)).\n"
  },
  {
    "path": "apps/arweave/src/ar_block_cache.erl",
    "content": "%%% @doc The module maintains a DAG of blocks that have passed the PoW validation, in ETS.\n%%% NOTE It is not safe to call functions which modify the state from different processes.\n-module(ar_block_cache).\n\n-export([new/2, initialize_from_list/2, add/2, mark_nonce_limiter_validated/2,\n\t\tadd_validated/2,\n\t\tmark_tip/2, get/2, get_earliest_not_validated_from_longest_chain/1,\n\t\tget_longest_chain_cache/1,\n\t\tget_block_and_status/2, remove/2, get_checkpoint_block/1, prune/2,\n\t\tget_by_solution_hash/5, is_known_solution_hash/2,\n\t\tget_siblings/2, get_fork_blocks/2, update_timestamp/3,\n\t\tget_blocks_by_miner/2]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n%% The expiration time in seconds for every \"alternative\" block (a block with non-unique\n%% solution).\n-define(ALTERNATIVE_BLOCK_EXPIRATION_TIME_SECONDS, 10).\n\n%% @doc Block validation status\n%% on_chain: block is validated and belongs to the tip fork\n%% validated: block is validated but does not belong to the tip fork\n%% not_validated: block is not validated yet\n%% none: null status\n\n%% @doc ETS table: block_cache\n%% {block, BlockHash} => {#block{}, block_status(), Timestamp, set(Children)}\n%%   - Children is a set of all blocks that have this block as their previous block. Children is\n%%     used to track branches in the chain that fork off this block (i.e. they are DAG children)\n%% max_cdiff => {CDiff, BlockHash}\n%%   - maximum cumulative difficulty encountered and its BlockHash. This is used to determine\n%%     whether we need to switch from the current tip to a fork tip.\n%% {solution, SolutionHash} => set(BlockHash)\n%%   - all blocks with the same solution hash\n%% longest_chain => [{BlockHash, [TXIDs]}]\n%%  - the top ar_block:get_consensus_window_size() blocks of the longest chain\n%% tip -> BlockHash\n%%   - curent block chain tip\n%% links -> gb_set({Height, BlockHash})\n%%   - all blocks in the cache sorted by height. This is used when pruning the cache and\n%%     discarding all blocks below a certain height (and all off-chain children of those blocks\n%%     regardless of their height)\n\n%%%===================================================================\n%%% Public API.\n%%%===================================================================\n\n%% @doc Create a cache, initialize it with the given block. The block is marked as on-chain\n%% and as a tip block.\nnew(Tab, B) ->\n\t#block{ indep_hash = H, hash = SolutionH, cumulative_diff = CDiff, height = Height } = B,\n\tets:delete_all_objects(Tab),\n\tar_ignore_registry:add(H),\n\tinsert(Tab, [\n\t\t{max_cdiff, {CDiff, H}},\n\t\t{links, gb_sets:from_list([{Height, H}])},\n\t\t{{solution, SolutionH}, sets:from_list([H])},\n\t\t{tip, H},\n\t\t{{block, H}, {B, on_chain, erlang:timestamp(), sets:new()}}\n\t]).\n\n%% @doc Initialize a cache from the given list of validated blocks. Mark the latest\n%% block as the tip block. The given blocks must be sorted from newest to oldest.\ninitialize_from_list(Tab, [B]) ->\n\tnew(Tab, B);\ninitialize_from_list(Tab, [#block{ indep_hash = H } = B | Blocks]) ->\n\tinitialize_from_list(Tab, Blocks),\n\tadd_validated(Tab, B),\n\tmark_tip(Tab, H).\n\n%% @doc Add a block to the cache. The block is marked as not validated yet.\n%% If the block is already present in the cache and has not been yet validated, it is\n%% overwritten. 
If the block is validated, we do nothing and issue a warning.\nadd(Tab,\n\t\t#block{\n\t\t\tindep_hash = H,\n\t\t\thash = SolutionH,\n\t\t\tprevious_block = PrevH,\n\t\t\theight = Height\n\t\t} = B) ->\n\tcase ets:lookup(Tab, {block, H}) of\n\t\t[] ->\n\t\t\tar_ignore_registry:add(H),\n\t\t\tRemainingHs = remove_expired_alternative_blocks(Tab, SolutionH),\n\t\t\tSolutionSet = sets:from_list([H | RemainingHs]),\n\t\t\t[{_, Set}] = ets:lookup(Tab, links),\n\t\t\t[{_, {PrevB, PrevStatus, PrevTimestamp, Children}}] = ets:lookup(Tab, {block, PrevH}),\n\t\t\tSet2 = gb_sets:insert({Height, H}, Set),\n\t\t\tStatus = {not_validated, awaiting_nonce_limiter_validation},\n\t\t\t%% If CDiff > MaxCDiff it means this block belongs to the heaviest fork we're aware\n\t\t\t%% of. If our current tip is not on this fork, ar_node_worker may switch to this fork.\n\t\t\tinsert(Tab, [\n\t\t\t\t{max_cdiff, maybe_increase_max_cdiff(Tab, B, Status)},\n\t\t\t\t{links, Set2},\n\t\t\t\t{{solution, SolutionH}, SolutionSet},\n\t\t\t\t{{block, H}, {B, Status, erlang:timestamp(), sets:new()}},\n\t\t\t\t{{block, PrevH},\n\t\t\t\t\t\t{PrevB, PrevStatus, PrevTimestamp, sets:add_element(H, Children)}}\n\t\t\t]);\n\t\t[{_, {_B, {not_validated, _} = CurrentStatus, CurrentTimestamp, Children}}] ->\n\t\t\tinsert(Tab, {{block, H}, {B, CurrentStatus, CurrentTimestamp, Children}});\n\t\t_ ->\n\t\t\t?LOG_WARNING([{event, attempt_to_update_already_validated_cached_block},\n\t\t\t\t\t{h, ar_util:encode(H)}, {height, Height},\n\t\t\t\t\t{previous_block, ar_util:encode(PrevH)}]),\n\t\t\tok\n\tend.\n\n%% @doc Check all blocks that share the same solution and remove those that expired.\nremove_expired_alternative_blocks(Tab, SolutionH) ->\n\tSolutionSet =\n\t\tcase ets:lookup(Tab, {solution, SolutionH}) of\n\t\t\t[] ->\n\t\t\t\tsets:new();\n\t\t\t[{_, SolutionSet2}] ->\n\t\t\t\tSolutionSet2\n\t\tend,\n\tremove_expired_alternative_blocks2(Tab, sets:to_list(SolutionSet)).\n\nremove_expired_alternative_blocks2(_Tab, []) ->\n\t[];\nremove_expired_alternative_blocks2(Tab, [H | Hs]) ->\n\t[{_, {_B, Status, Timestamp, Children}}] = ets:lookup(Tab, {block, H}),\n\tcase Status of\n\t\ton_chain ->\n\t\t\t[H | remove_expired_alternative_blocks2(Tab, Hs)];\n\t\t_ ->\n\t\t\tLifetimeSeconds = get_alternative_block_lifetime(Tab, Children),\n\t\t\t{MegaSecs, Secs, MicroSecs} = Timestamp,\n\t\t\tExpirationTimestamp = {MegaSecs, Secs + LifetimeSeconds, MicroSecs},\n\t\t\tcase timer:now_diff(erlang:timestamp(), ExpirationTimestamp) >= 0 of\n\t\t\t\ttrue ->\n\t\t\t\t\t?LOG_INFO([{event, removing_expired_alternative_block_from_cache},\n\t\t\t\t\t\t\t{block, ar_util:encode(H)},\n\t\t\t\t\t\t\t{status, Status}]),\n\t\t\t\t\tremove(Tab, H),\n\t\t\t\t\tremove_expired_alternative_blocks2(Tab, Hs);\n\t\t\t\tfalse ->\n\t\t\t\t\t[H | remove_expired_alternative_blocks2(Tab, Hs)]\n\t\t\tend\n\tend.\n\nget_alternative_block_lifetime(Tab, Children) ->\n\tForkLen = get_fork_length(Tab, sets:to_list(Children)),\n\t(?ALTERNATIVE_BLOCK_EXPIRATION_TIME_SECONDS) * ForkLen.\n\nget_fork_length(Tab, Branches) when is_list(Branches) ->\n\t1 + lists:max([0 | [get_fork_length(Tab, Branch) || Branch <- Branches]]);\nget_fork_length(Tab, Branch) ->\n\t[{_, {_B, _Status, _Timestamp, Children}}] = ets:lookup(Tab, {block, Branch}),\n\tcase sets:size(Children) == 0 of\n\t\ttrue ->\n\t\t\t1;\n\t\tfalse ->\n\t\t\t1 + get_fork_length(Tab, sets:to_list(Children))\n\tend.\n\n%% @doc Update the status of the given block to 'nonce_limiter_validated'.\n%% Do nothing if the block is not found in 
cache or if its status is\n%% not 'awaiting_nonce_limiter_validation'.\nmark_nonce_limiter_validated(Tab, H) ->\n\tcase ets:lookup(Tab, {block, H}) of\n\t\t[{_, {B, {not_validated, awaiting_nonce_limiter_validation}, Timestamp, Children}}] ->\n\t\t\tinsert(Tab, {{block, H}, {B,\n\t\t\t\t\t{not_validated, nonce_limiter_validated}, Timestamp, Children}});\n\t\t_ ->\n\t\t\tok\n\tend.\n\n%% @doc Add a validated block to the cache. If the block is already in the cache, it\n%% is overwritten. However, the function assumes the height, hash, previous hash, and\n%% the cumulative difficulty do not change.\n%% Raises previous_block_not_found if the previous block is not in the cache.\n%% Raises previous_block_not_validated if the previous block is not validated.\nadd_validated(Tab, B) ->\n\t#block{ indep_hash = H, hash = SolutionH, previous_block = PrevH, height = Height } = B,\n\tcase ets:lookup(Tab, {block, PrevH}) of\n\t\t[] ->\n\t\t\terror(previous_block_not_found);\n\t\t[{_, {_PrevB, {not_validated, _}, _Timestamp, _Children}}] ->\n\t\t\terror(previous_block_not_validated);\n\t\t[{_, {PrevB, PrevStatus, PrevTimestamp, PrevChildren}}] ->\n\t\t\tcase ets:lookup(Tab, {block, H}) of\n\t\t\t\t[] ->\n\t\t\t\t\tRemainingHs = remove_expired_alternative_blocks(Tab, SolutionH),\n\t\t\t\t\tSolutionSet = sets:from_list([H | RemainingHs]),\n\t\t\t\t\t[{_, Set}] = ets:lookup(Tab, links),\n\t\t\t\t\tStatus = validated,\n\t\t\t\t\tinsert(Tab, [\n\t\t\t\t\t\t{{block, PrevH}, {PrevB, PrevStatus, PrevTimestamp,\n\t\t\t\t\t\t\t\tsets:add_element(H, PrevChildren)}},\n\t\t\t\t\t\t{{block, H}, {B, Status, erlang:timestamp(), sets:new()}},\n\t\t\t\t\t\t{max_cdiff, maybe_increase_max_cdiff(Tab, B, Status)},\n\t\t\t\t\t\t{links, gb_sets:insert({Height, H}, Set)},\n\t\t\t\t\t\t{{solution, SolutionH}, SolutionSet}\n\t\t\t\t\t]);\n\t\t\t\t[{_, {_B, on_chain, Timestamp, Children}}] ->\n\t\t\t\t\tinsert(Tab, [\n\t\t\t\t\t\t{{block, PrevH}, {PrevB, PrevStatus, PrevTimestamp,\n\t\t\t\t\t\t\t\tsets:add_element(H, PrevChildren)}},\n\t\t\t\t\t\t{{block, H}, {B, on_chain, Timestamp, Children}}\n\t\t\t\t\t]);\n\t\t\t\t[{_, {_B, _Status, Timestamp, Children}}] ->\n\t\t\t\t\tinsert(Tab, [\n\t\t\t\t\t\t{{block, PrevH}, {PrevB, PrevStatus, PrevTimestamp,\n\t\t\t\t\t\t\t\tsets:add_element(H, PrevChildren)}},\n\t\t\t\t\t\t{{block, H}, {B, validated, Timestamp, Children}}\n\t\t\t\t\t])\n\t\t\tend\n\tend.\n\n%% @doc Get the block from cache. Returns not_found if the block is not in cache.\nget(Tab, H) ->\n\tcase ets:lookup(Tab, {block, H}) of\n\t\t[] ->\n\t\t\tnot_found;\n\t\t[{_, {B, _Status, _Timestamp, _Children}}] ->\n\t\t\tB\n\tend.\n\n%% @doc Get the block and its status from cache.\n%% Returns not_found if the block is not in cache.\nget_block_and_status(Tab, H) ->\n\tcase ets:lookup(Tab, {block, H}) of\n\t\t[] ->\n\t\t\tnot_found;\n\t\t[{_, {B, Status, Timestamp, _Children}}] ->\n\t\t\t{B, {Status, Timestamp}}\n\tend.\n\n%% @doc Get a {block, previous blocks, status} tuple for the earliest block from\n%% the longest chain, which has not been validated yet. The previous blocks are\n%% sorted from newest to oldest. 
The last one is a block from the current fork.\n%% status is a tuple that indicates where in the validation process block is.\nget_earliest_not_validated_from_longest_chain(Tab) ->\n\t[{_, Tip}] = ets:lookup(Tab, tip),\n\t[{_, {CDiff, H}}] = ets:lookup(Tab, max_cdiff),\n\t[{_, {#block{ cumulative_diff = TipCDiff }, _, _, _}}] = ets:lookup(Tab, {block, Tip}),\n\tcase TipCDiff >= CDiff of\n\t\ttrue ->\n\t\t\t%% Current Tip is tip of the longest chain\n\t\t\tnot_found;\n\t\tfalse ->\n\t\t\t[{_, {B, Status, Timestamp, _Children}}] = ets:lookup(Tab, {block, H}),\n\t\t\tcase Status of\n\t\t\t\t{not_validated, _} ->\n\t\t\t\t\tget_earliest_not_validated(Tab, B, Status, Timestamp);\n\t\t\t\t_ ->\n\t\t\t\t\tnot_found\n\t\t\tend\n\tend.\n\n%% @doc Return the list of {BH, TXIDs} pairs corresponding to the top up to the\n%% ar_block:get_consensus_window_size() blocks of the longest chain and the number of blocks\n%% in this list that are not on chain yet.\n%%\n%% The cache is updated via update_longest_chain_cache/1 which calls\n%% get_longest_chain_block_txs_pairs/7\nget_longest_chain_cache(Tab) ->\n\t[{longest_chain, LongestChain}] = ets:lookup(Tab, longest_chain),\n\tLongestChain.\n\nget_longest_chain_block_txs_pairs(_Tab, _H, 0, _PrevStatus, _PrevH, Pairs, NotOnChainCount) ->\n\t{lists:reverse(Pairs), NotOnChainCount};\nget_longest_chain_block_txs_pairs(Tab, H, N, PrevStatus, PrevH, Pairs, NotOnChainCount) ->\n\tcase ets:lookup(Tab, {block, H}) of\n\t\t[{_, {B, {not_validated, awaiting_nonce_limiter_validation}, _Timestamp,\n\t\t\t\t_Children}}] ->\n\t\t\tget_longest_chain_block_txs_pairs(Tab, B#block.previous_block,\n\t\t\t\t\tar_block:get_consensus_window_size(), none, none, [], 0);\n\t\t[{_, {B, Status, _Timestamp, _Children}}] ->\n\t\t\tcase PrevStatus == on_chain andalso Status /= on_chain of\n\t\t\t\ttrue ->\n\t\t\t\t\t%% A reorg should have happened in the meantime - an unlikely\n\t\t\t\t\t%% event, retry.\n\t\t\t\t\tget_longest_chain_cache(Tab);\n\t\t\t\tfalse ->\n\t\t\t\t\tNotOnChainCount2 =\n\t\t\t\t\t\tcase Status of\n\t\t\t\t\t\t\ton_chain ->\n\t\t\t\t\t\t\t\tNotOnChainCount;\n\t\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t\tNotOnChainCount + 1\n\t\t\t\t\t\tend,\n\t\t\t\t\tPairs2 = [{B#block.indep_hash, [tx_id(TX) || TX <- B#block.txs]} | Pairs],\n\t\t\t\t\tget_longest_chain_block_txs_pairs(Tab, B#block.previous_block, N - 1,\n\t\t\t\t\t\t\tStatus, H, Pairs2, NotOnChainCount2)\n\t\t\tend;\n\t\t[] ->\n\t\t\tcase PrevStatus of\n\t\t\t\ton_chain ->\n\t\t\t\t\tcase ets:lookup(Tab, {block, PrevH}) of\n\t\t\t\t\t\t[] ->\n\t\t\t\t\t\t\t%% The block has been pruned -\n\t\t\t\t\t\t\t%% an unlikely race condition so we retry.\n\t\t\t\t\t\t\tget_longest_chain_cache(Tab);\n\t\t\t\t\t\t[_] ->\n\t\t\t\t\t\t\t%% Pairs already contains the deepest block of the cache.\n\t\t\t\t\t\t\t{lists:reverse(Pairs), NotOnChainCount}\n\t\t\t\t\tend;\n\t\t\t\t_ ->\n\t\t\t\t\t%% The block has been invalidated -\n\t\t\t\t\t%% an unlikely race condition so we retry.\n\t\t\t\t\tget_longest_chain_cache(Tab)\n\t\t\tend\n\tend.\n\ntx_id(#tx{ id = ID }) ->\n\tID;\ntx_id(TXID) ->\n\tTXID.\n\n%% @doc Mark the given block as the tip block. Mark the previous blocks as on-chain.\n%% Mark the on-chain blocks from other forks as validated. Raises invalid_tip if\n%% one of the preceeding blocks is not validated. 
Raises not_found if the block\n%% is not found.\n%%\n%% Setting a new tip can cause some branches to be invalidated by the checkpoint, so we need\n%% to recalculate max_cdiff.\nmark_tip(Tab, H) ->\n\tcase ets:lookup(Tab, {block, H}) of\n\t\t[{_, {B, Status, Timestamp, Children}}] ->\n\t\t\tcase is_valid_fork(Tab, B, Status) of\n\t\t\t\ttrue ->\n\t\t\t\t\tinsert(Tab, [\n\t\t\t\t\t\t{tip, H},\n\t\t\t\t\t\t{{block, H}, {B, on_chain, Timestamp, Children}} |\n\t\t\t\t\t\tmark_on_chain(Tab, B)\n\t\t\t\t\t]),\n\t\t\t\t\t%% We would only update max_cdiff if somehow the old max_cdiff was on a branch\n\t\t\t\t\t%% that has been invalidated due to the new tip causing the checkpoint to move.\n\t\t\t\t\t%% In practice we would not expect this to happen.\n\t\t\t\t\t[{_, {_CDiff, CDiffH}}] = ets:lookup(Tab, max_cdiff),\n\t\t\t\t\tcase is_valid_fork(Tab, CDiffH) of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\tok;\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tinsert(Tab, {max_cdiff, find_max_cdiff(Tab, B#block.height)})\n\t\t\t\t\tend;\n\t\t\t\tfalse ->\n\t\t\t\t\terror(invalid_tip)\n\t\t\tend;\n\t\t[] ->\n\t\t\terror(not_found)\n\tend.\n\n%% @doc Remove the block and all the blocks on top from the cache.\nremove(Tab, H) ->\n\tcase ets:lookup(Tab, {block, H}) of\n\t\t[] ->\n\t\t\tok;\n\t\t[{_, {#block{ previous_block = PrevH }, _Status, _Timestamp, _Children}}] ->\n\t\t\t[{_, C = {_, H2}}] = ets:lookup(Tab, max_cdiff),\n\t\t\t[{_, {PrevB, PrevBStatus, PrevTimestamp, PrevBChildren}}] = ets:lookup(Tab,\n\t\t\t\t\t{block, PrevH}),\n\t\t\tremove2(Tab, H),\n\t\t\tinsert(Tab, [\n\t\t\t\t{max_cdiff, case ets:lookup(Tab, {block, H2}) of\n\t\t\t\t\t\t\t\t[] ->\n\t\t\t\t\t\t\t\t\tfind_max_cdiff(Tab, get_tip_height(Tab));\n\t\t\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t\t\tC\n\t\t\t\t\t\t\tend},\n\t\t\t\t{{block, PrevH}, {PrevB, PrevBStatus, PrevTimestamp,\n\t\t\t\t\t\tsets:del_element(H, PrevBChildren)}}\n\t\t\t]),\n\t\t\tar_ignore_registry:remove(H),\n\t\t\tok\n\tend.\n\nget_checkpoint_block(RecentBI) ->\n\tget_checkpoint_block2(RecentBI, 1, ?CHECKPOINT_DEPTH).\n\n%% @doc Prune the cache.  Discard all blocks deeper than Depth from the tip and\n%% all of their children that are not on_chain.\n%% \n%% Height 99              A    B' C\n%%                         \\  /   |\n%%        98                D'    E\n%%\t\t\t\t\t\t      \\  /\n%%        97                   F' \n%%\n%% B' is the Tip. prune(Tab, 1) will remove F', E, and C from the cache.             \nprune(Tab, Depth) ->\n\tprune2(Tab, Depth, get_tip_height(Tab)).\n\n%% @doc Return true if there is at least one block in the cache with the given solution hash.\nis_known_solution_hash(Tab, SolutionH) ->\n\tcase ets:lookup(Tab, {solution, SolutionH}) of\n\t\t[] ->\n\t\t\tfalse;\n\t\t[{_, _Set}] ->\n\t\t\ttrue\n\tend.\n\n%% @doc Return a block from the block cache meeting the following requirements:\n%% - hash == SolutionH;\n%% - indep_hash /= H.\n%%\n%% If there are several blocks, choose one with the same cumulative difficulty\n%% or CDiff > PrevCDiff2 and CDiff2 > PrevCDiff (double-signing). If there are no\n%% such blocks, return any other block matching the conditions above. 
Return not_found\n%% if there are no blocks matching those conditions.\nget_by_solution_hash(Tab, SolutionH, H, CDiff, PrevCDiff) ->\n\tcase ets:lookup(Tab, {solution, SolutionH}) of\n\t\t[] ->\n\t\t\tnot_found;\n\t\t[{_, Set}] ->\n\t\t\tget_by_solution_hash(Tab, SolutionH, H, CDiff, PrevCDiff, sets:to_list(Set), none)\n\tend.\n\nget_by_solution_hash(_Tab, _SolutionH, _H, _CDiff, _PrevCDiff, [], B) ->\n\tcase B of\n\t\tnone ->\n\t\t\tnot_found;\n\t\t_ ->\n\t\t\tB\n\tend;\nget_by_solution_hash(Tab, SolutionH, H, CDiff, PrevCDiff, [H | L], B) ->\n\tget_by_solution_hash(Tab, SolutionH, H, CDiff, PrevCDiff, L, B);\nget_by_solution_hash(Tab, SolutionH, H, CDiff, PrevCDiff, [H2 | L], _B) ->\n\tcase get(Tab, H2) of\n\t\tnot_found ->\n\t\t\t%% An extremely unlikely race condition - simply retry.\n\t\t\tget_by_solution_hash(Tab, SolutionH, H, CDiff, PrevCDiff);\n\t\t#block{ cumulative_diff = CDiff } = B2 ->\n\t\t\tB2;\n\t\t#block{ cumulative_diff = CDiff2, previous_cumulative_diff = PrevCDiff2 } = B2\n\t\t\t\twhen CDiff2 > PrevCDiff, CDiff > PrevCDiff2 ->\n\t\t\tB2;\n\t\tB2 ->\n\t\t\tget_by_solution_hash(Tab, SolutionH, H, CDiff, PrevCDiff, L, B2)\n\tend.\n\n%% @doc Return the list of siblings of the given block, if any.\nget_siblings(Tab, B) ->\n\tH = B#block.indep_hash,\n\tPrevH = B#block.previous_block,\n\tcase ets:lookup(Tab, {block, PrevH}) of\n\t\t[] ->\n\t\t\t[];\n\t\t[{_, {_B, _Status, _CurrentTimestamp, Children}}] ->\n\t\t\tsets:fold(\n\t\t\t\tfun(SibH, Acc) ->\n\t\t\t\t\tcase SibH of\n\t\t\t\t\t\tH ->\n\t\t\t\t\t\t\tAcc;\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\tcase ets:lookup(Tab, {block, SibH}) of\n\t\t\t\t\t\t\t\t[] ->\n\t\t\t\t\t\t\t\t\tAcc;\n\t\t\t\t\t\t\t\t[{_, {Sib, _, _, _}}] ->\n\t\t\t\t\t\t\t\t\t[Sib | Acc]\n\t\t\t\t\t\t\tend\n\t\t\t\t\tend\n\t\t\t\tend,\n\t\t\t\t[],\n\t\t\t\tChildren\n\t\t\t)\n\tend.\n\nupdate_timestamp(Tab, H, ReceiveTimestamp) ->\n\tcase ets:lookup(Tab, {block, H}) of\n\t\t[] ->\n\t\t\tnot_found;\n\t\t[{_, {B, Status, Timestamp, Children}}] ->\n\t\t\tcase B#block.receive_timestamp of\n\t\t\t\tundefined ->\n\t\t\t\t\tinsert(Tab, {\n\t\t\t\t\t\t\t{block, H},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tB#block{receive_timestamp = ReceiveTimestamp},\n\t\t\t\t\t\t\t\tStatus,\n\t\t\t\t\t\t\t\tTimestamp,\n\t\t\t\t\t\t\t\tChildren\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}, false);\n\t\t\t\t_ ->\n\t\t\t\t\tok\n\t\t\tend\n\tend.\n\n%% @doc Return all blocks from the cache mined by the given address.\nget_blocks_by_miner(Tab, MinerAddr) ->\n\tcase ets:lookup(Tab, links) of\n\t\t[{links, Set}] ->\n\t\t\tgb_sets:fold(\n\t\t\t\tfun({_Height, H}, Acc) ->\n\t\t\t\t\tcase ets:lookup(Tab, {block, H}) of\n\t\t\t\t\t\t[{_, {B, _Status, _Timestamp, _Children}}] when B#block.reward_addr == MinerAddr ->\n\t\t\t\t\t\t\t[B | Acc];\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\tAcc\n\t\t\t\t\tend\n\t\t\t\tend,\n\t\t\t\t[],\n\t\t\t\tSet\n\t\t\t);\n\t\t_ ->\n\t\t\t[]\n\tend.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\ninsert(Tab, Args) ->\n\tinsert(Tab, Args, true).\ninsert(Tab, Args, UpdateCache) ->\n\tets:insert(Tab, Args),\n\tcase UpdateCache of\n\t\ttrue ->\n\t\t\tupdate_longest_chain_cache(Tab);\n\t\tfalse ->\n\t\t\tok\n\tend.\n\ndelete(Tab, Args) ->\n\tdelete(Tab, Args, true).\ndelete(Tab, Args, UpdateCache) ->\n\tets:delete(Tab, Args),\n\tcase UpdateCache of\n\t\ttrue ->\n\t\t\tupdate_longest_chain_cache(Tab);\n\t\tfalse ->\n\t\t\tok\n\tend.\n\nget_earliest_not_validated(Tab, #block{ 
previous_block = PrevH } = B, Status, Timestamp) ->\n\t[{_, {PrevB, PrevStatus, PrevTimestamp, _Children}}] = ets:lookup(Tab, {block, PrevH}),\n\tcase PrevStatus of\n\t\t{not_validated, _} ->\n\t\t\tget_earliest_not_validated(Tab, PrevB, PrevStatus, PrevTimestamp);\n\t\t_ ->\n\t\t\t{B, get_fork_blocks(Tab, B), {Status, Timestamp}}\n\tend.\n\nget_fork_blocks(Tab, #block{ previous_block = PrevH }) ->\n\t[{_, {PrevB, Status, _Timestamp, _Children}}] = ets:lookup(Tab, {block, PrevH}),\n\tcase Status of\n\t\ton_chain ->\n\t\t\t[PrevB];\n\t\t_ ->\n\t\t\t[PrevB | get_fork_blocks(Tab, PrevB)]\n\tend.\n\nmark_on_chain(Tab, #block{ previous_block = PrevH, indep_hash = H }) ->\n\tcase ets:lookup(Tab, {block, PrevH}) of\n\t\t[{_, {_PrevB, {not_validated, _}, _Timestamp, _Children}}] ->\n\t\t\terror(invalid_tip);\n\t\t[{_, {_PrevB, on_chain, _Timestamp, Children}}] ->\n\t\t\t%% Mark the blocks from the previous main fork as validated, not on-chain.\n\t\t\tmark_off_chain(Tab, sets:del_element(H, Children));\n\t\t[{_, {PrevB, validated, Timestamp, Children}}] ->\n\t\t\t[{{block, PrevH}, {PrevB, on_chain, Timestamp, Children}}\n\t\t\t\t\t| mark_on_chain(Tab, PrevB)]\n\tend.\n\nmark_off_chain(Tab, Set) ->\n\tsets:fold(\n\t\tfun(H, Acc) ->\n\t\t\tcase ets:lookup(Tab, {block, H}) of\n\t\t\t\t[{_, {B, on_chain, Timestamp, Children}}] ->\n\t\t\t\t\t[{{block, H}, {B, validated, Timestamp, Children}}\n\t\t\t\t\t\t\t| mark_off_chain(Tab, Children)];\n\t\t\t\t_ ->\n\t\t\t\t\tAcc\n\t\t\tend\n\t\tend,\n\t\t[],\n\t\tSet\n\t).\n\nremove2(Tab, H) ->\n\t[{_, Set}] = ets:lookup(Tab, links),\n\tcase ets:lookup(Tab, {block, H}) of\n\t\tnot_found ->\n\t\t\tok;\n\t\t[{_, {#block{ hash = SolutionH, height = Height }, _Status, _Timestamp, Children}}] ->\n\t\t\t%% Don't update the cache here. 
remove/2 will do it.\n\t\t\tdelete(Tab, {block, H}, false), \n\t\t\tar_ignore_registry:remove(H),\n\t\t\tremove_solution(Tab, H, SolutionH),\n\t\t\tinsert(Tab, {links, gb_sets:del_element({Height, H}, Set)}, false),\n\t\t\tsets:fold(\n\t\t\t\tfun(Child, ok) ->\n\t\t\t\t\tremove2(Tab, Child)\n\t\t\t\tend,\n\t\t\t\tok,\n\t\t\t\tChildren\n\t\t\t)\n\tend.\n\nremove_solution(Tab, H, SolutionH) ->\n\t[{_, SolutionSet}] = ets:lookup(Tab, {solution, SolutionH}),\n\tcase sets:size(SolutionSet) of\n\t\t1 ->\n\t\t\tdelete(Tab, {solution, SolutionH}, false);\n\t\t_ ->\n\t\t\tSolutionSet2 = sets:del_element(H, SolutionSet),\n\t\t\tinsert(Tab, {{solution, SolutionH}, SolutionSet2}, false)\n\tend.\n\nget_tip_height(Tab) ->\n\t[{_, Tip}] = ets:lookup(Tab, tip),\n\t[{_, {#block{ height = Height }, _Status, _Timestamp, _Children}}] = ets:lookup(Tab,\n\t\t\t{block, Tip}),\n\tHeight.\n\nget_checkpoint_height(TipHeight) ->\n\tTipHeight - ?CHECKPOINT_DEPTH + 1.\n\nget_checkpoint_block2([{H, _, _}], _N, _CheckpointDepth) ->\n\t%% The genesis block.\n\tar_block_cache:get(block_cache, H);\nget_checkpoint_block2([{H, _, _} | BI], N, CheckpointDepth) ->\n\t B = ar_block_cache:get(block_cache, H),\n\t get_checkpoint_block2(BI, N + 1, B, CheckpointDepth).\n\nget_checkpoint_block2([{H, _, _}], _N, B, _CheckpointDepth) ->\n\t%% The genesis block.\n\tcase ar_block_cache:get(block_cache, H) of\n\t\tnot_found ->\n\t\t\tB;\n\t\tB2 ->\n\t\t\tB2\n\tend;\nget_checkpoint_block2([{H, _, _} | _], N, B, CheckpointDepth) when N == CheckpointDepth ->\n\tcase ar_block_cache:get(block_cache, H) of\n\t\tnot_found ->\n\t\t\tB;\n\t\tB2 ->\n\t\t\tB2\n\tend;\nget_checkpoint_block2([{H, _, _} | BI], N, B, CheckpointDepth) ->\n\tcase ar_block_cache:get(block_cache, H) of\n\t\tnot_found ->\n\t\t\tB;\n\t\tB2 ->\n\t\t\tget_checkpoint_block2(BI, N + 1, B2, CheckpointDepth)\n\tend.\n\n%% @doc Return true if B is either on the main fork or on a fork which branched off at the\n%% checkpoint height or higher.\nis_valid_fork(Tab, H) ->\n\t[{_, {B, Status, _Timestamp, _Children}}] = ets:lookup(Tab, {block, H}),\n\tis_valid_fork(Tab, B, Status).\nis_valid_fork(Tab, B, Status) ->\n\tCheckpointHeight = get_checkpoint_height(get_tip_height(Tab)),\n\tis_valid_fork(Tab, B, Status, CheckpointHeight).\n\nis_valid_fork(_Tab, #block{ height = Height, indep_hash = H }, _Status, CheckpointHeight)\n  \t\twhen Height < CheckpointHeight ->\n\t?LOG_WARNING([{event, found_invalid_heavy_fork}, {hash, ar_util:encode(H)},\n\t\t\t\t{height, Height}, {checkpoint_height, CheckpointHeight}]),\n\tfalse;\nis_valid_fork(_Tab, _B, on_chain, _CheckpointHeight) ->\n\ttrue;\nis_valid_fork(Tab, B, _Status, CheckpointHeight) ->\n\t[{_, {PrevB, PrevStatus, _, _}}] = ets:lookup(Tab, {block, B#block.previous_block}),\n\tis_valid_fork(Tab, PrevB, PrevStatus, CheckpointHeight).\n\nmaybe_increase_max_cdiff(Tab, B, Status) ->\n\t[{_, C}] = ets:lookup(Tab, max_cdiff),\n\tmaybe_increase_max_cdiff(Tab, B, Status, C).\n\nmaybe_increase_max_cdiff(Tab, B, Status, {MaxCDiff, _H} = C) ->\n\tcase B#block.cumulative_diff > MaxCDiff andalso is_valid_fork(Tab, B, Status) of\n\t\ttrue ->\n\t\t\t{B#block.cumulative_diff, B#block.indep_hash};\n\t\tfalse ->\n\t\t\tC\n\tend.\n\nfind_max_cdiff(Tab, TipHeight) ->\n\tCheckpointHeight = get_checkpoint_height(TipHeight),\n\t[{_, Set}] = ets:lookup(Tab, links),\n\tgb_sets:fold(\n\t\tfun ({Height, _H}, Acc) when Height < CheckpointHeight ->\n\t\t\t\tAcc;\n\t\t\t({_Height, H}, not_set) ->\n\t\t\t\t[{_, {#block{ cumulative_diff = CDiff }, _, _, _}}] = 
ets:lookup(Tab,\n\t\t\t\t\t\t{block, H}),\n\t\t\t\t{CDiff, H};\n\t\t\t({_Height, H}, Acc) ->\n\t\t\t\t[{_, {B, Status, _, _}}] = ets:lookup(Tab, {block, H}),\n\t\t\t\tmaybe_increase_max_cdiff(Tab, B, Status, Acc)\n\t\tend,\n\t\tnot_set,\n\t\tSet\n\t).\n\nprune2(Tab, Depth, TipHeight) ->\n\t[{_, Set}] = ets:lookup(Tab, links),\n\tcase gb_sets:is_empty(Set) of\n\t\ttrue ->\n\t\t\tok;\n\t\tfalse ->\n\t\t\t{{Height, H}, Set2} = gb_sets:take_smallest(Set),\n\t\t\tcase Height >= TipHeight - Depth of\n\t\t\t\ttrue ->\n\t\t\t\t\tok;\n\t\t\t\tfalse ->\n\t\t\t\t\tinsert(Tab, {links, Set2}, false),\n\t\t\t\t\t%% The lowest block must be on-chain by construction.\n\t\t\t\t\t[{_, {B, on_chain, _Timestamp, Children}}] = ets:lookup(Tab, {block, H}),\n\t\t\t\t\t#block{ hash = SolutionH } = B,\n\t\t\t\t\tsets:fold(\n\t\t\t\t\t\tfun(Child, ok) ->\n\t\t\t\t\t\t\t[{_, {_, Status, _, _}}] = ets:lookup(Tab, {block, Child}),\n\t\t\t\t\t\t\tcase Status of\n\t\t\t\t\t\t\t\ton_chain ->\n\t\t\t\t\t\t\t\t\tok;\n\t\t\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t\t\tremove(Tab, Child)\n\t\t\t\t\t\t\tend\n\t\t\t\t\t\tend,\n\t\t\t\t\t\tok,\n\t\t\t\t\t\tChildren\n\t\t\t\t\t),\n\t\t\t\t\tremove_solution(Tab, H, SolutionH),\n\t\t\t\t\tdelete(Tab, {block, H}),\n\t\t\t\t\tar_ignore_registry:remove(H),\n\t\t\t\t\tprune2(Tab, Depth, TipHeight)\n\t\t\tend\n\tend.\n\nupdate_longest_chain_cache(Tab) ->\n\t[{_, {_CDiff, H}}] = ets:lookup(Tab, max_cdiff),\n\tResult = get_longest_chain_block_txs_pairs(Tab, H, ar_block:get_consensus_window_size(),\n\t\t\tnone, none, [], 0),\n\tcase ets:update_element(Tab, longest_chain, {2, Result}) of\n\t\ttrue -> ok;\n\t\tfalse ->\n\t\t\t%% if insert_new fails it means another process added the longest_chain key\n\t\t\t%% between when we called update_element here. Extremely unlikely, really only\n\t\t\t%% possible when the node first starts up, and ultimately not super relevant since\n\t\t\t%% the cache will likely be refreshed again shortly. So we'll ignore.\n\t\t\tets:insert_new(Tab, {longest_chain, Result})\n\tend,\n\tResult.\n\n%%%===================================================================\n%%% Tests.\n%%%===================================================================\n\ncheckpoint_test() ->\n\tets:new(bcache_test, [set, named_table]),\n\n\t%% Height\tBlock/Status\n\t%%\n\t%% 3\t\tB3/on_chain\n\t%%\t\t\t     | \n\t%% 2\t\tB2/on_chain        B2B/not_validated\n\t%%\t\t\t     |                  |\n\t%% 1\t\tB1/on_chain        B1B/not_validated\n\t%%\t\t\t\t\t  \\            /\n\t%% 0\t\t\t\t\tB0/on_chain\n\tnew(bcache_test, B0 = random_block(0)),\n\tadd(bcache_test, B1 = on_top(random_block(1), B0)),\n\tadd(bcache_test, B1B = on_top(random_block(1), B0)),\n\tadd(bcache_test, B2 = on_top(random_block(2), B1)),\n\tadd(bcache_test, B2B = on_top(random_block(2), B1B)),\n\tadd(bcache_test, B3 = on_top(random_block(3), B2)),\n\tmark_tip(bcache_test, block_id(B1)),\n\tmark_tip(bcache_test, block_id(B2)),\n\tmark_tip(bcache_test, block_id(B3)),\n\n\t?assertMatch(not_found, get_earliest_not_validated_from_longest_chain(bcache_test)),\n\tassert_longest_chain([B3, B2, B1, B0], 0),\n\tassert_tip(block_id(B3)),\n\tassert_max_cdiff({3, block_id(B3)}),\n\tassert_is_valid_fork(true, on_chain, B0),\n\tassert_is_valid_fork(true, on_chain, B1),\n\tassert_is_valid_fork(true, not_validated, B1B),\n\tassert_is_valid_fork(true, on_chain, B2),\n\tassert_is_valid_fork(true, not_validated, B2B),\n\tassert_is_valid_fork(true, on_chain, B3),\n\n\t%% Add B4 as not_validated. 
No blocks are pushed beneath the checkpoint height since B4\n\t%% has not been validated.\n\t%%\n\t%% Height\tBlock/Status\n\t%%\n\t%% 4\t\tB4/not_validated\n\t%%\t\t\t     |\n\t%% 3\t\tB3/on_chain\n\t%%\t\t\t     | \n\t%% 2\t\tB2/on_chain        B2B/not_validated/invalid_fork\n\t%%\t\t\t     |                  |\n\t%% 1\t\tB1/on_chain        B1B/not_validated/invalid_fork\n\t%%\t\t\t\t\t  \\            /\n\t%% 0\t\t\t\t\tB0/on_chain/invalid_fork\n\tadd(bcache_test, B4 = on_top(random_block(4), B3)),\n\n\t?assertMatch({B4, [B3], {{not_validated, awaiting_nonce_limiter_validation}, _}},\n\t\tget_earliest_not_validated_from_longest_chain(bcache_test)),\n\tassert_longest_chain([B3, B2, B1, B0], 0),\n\tassert_tip(block_id(B3)),\n\tassert_max_cdiff({4, block_id(B4)}),\n\tassert_is_valid_fork(true, on_chain, B0),\n\tassert_is_valid_fork(true, on_chain, B1),\n\tassert_is_valid_fork(true, not_validated, B1B),\n\tassert_is_valid_fork(true, on_chain, B2),\n\tassert_is_valid_fork(true, not_validated, B2B),\n\tassert_is_valid_fork(true, on_chain, B3),\n\tassert_is_valid_fork(true, not_validated, B4),\n\n\t%% Mark B4 as the tip, this pushes B0 below the checkpoint height and invalidates B1B and B2B.\n\t%%\n\t%% Height\tBlock/Status\n\t%%\n\t%% 4\t\tB4/on_chain\n\t%%\t\t\t     |\n\t%% 3\t\tB3/on_chain\n\t%%\t\t\t     | \n\t%% 2\t\tB2/on_chain        B2B/not_validated/invalid_fork\n\t%%\t\t\t     |                  |\n\t%% 1\t\tB1/on_chain        B1B/not_validated/invalid_fork\n\t%%\t\t\t\t\t  \\            /\n\t%% 0\t\t\t\t\tB0/on_chain/invalid_fork\n\tmark_tip(bcache_test, block_id(B4)),\n\n\t?assertMatch(not_found, get_earliest_not_validated_from_longest_chain(bcache_test)),\n\tassert_longest_chain([B4, B3, B2, B1, B0], 0),\n\tassert_tip(block_id(B4)),\n\tassert_max_cdiff({4, block_id(B4)}),\n\tassert_is_valid_fork(false, on_chain, B0),\n\tassert_is_valid_fork(true, on_chain, B1),\n\tassert_is_valid_fork(false, not_validated, B1B),\n\tassert_is_valid_fork(true, on_chain, B2),\n\tassert_is_valid_fork(false, not_validated, B2B),\n\tassert_is_valid_fork(true, on_chain, B3),\n\tassert_is_valid_fork(true, on_chain, B4),\n\n\t%% Add B3B with cdiff 5 to the invalid fork. 
Nothing should change.\n\t%%\n\t%% Height\tBlock/Status\n\t%%\n\t%% 4\t\tB4/on_chain\n\t%%\t\t\t     |\n\t%% 3\t\tB3/on_chain        B3B/not_validated/invalid_fork\n\t%%\t\t\t     |                  |\n\t%% 2\t\tB2/on_chain        B2B/not_validated/invalid_fork\n\t%%\t\t\t     |                  |\n\t%% 1\t\tB1/on_chain        B1B/not_validated/invalid_fork\n\t%%\t\t\t\t\t  \\            /\n\t%% 0\t\t\t\t\tB0/on_chain/invalid_fork\n\tadd(bcache_test, B3B = on_top(random_block(5), B2B)),\n\n\t?assertMatch(not_found, get_earliest_not_validated_from_longest_chain(bcache_test)),\n\tassert_longest_chain([B4, B3, B2, B1, B0], 0),\n\tassert_tip(block_id(B4)),\n\tassert_max_cdiff({4, block_id(B4)}),\n\tassert_is_valid_fork(false, on_chain, B0),\n\tassert_is_valid_fork(true, on_chain, B1),\n\tassert_is_valid_fork(false, not_validated, B1B),\n\tassert_is_valid_fork(true, on_chain, B2),\n\tassert_is_valid_fork(false, not_validated, B2B),\n\tassert_is_valid_fork(true, on_chain, B3),\n\tassert_is_valid_fork(true, on_chain, B4),\n\tassert_is_valid_fork(false, not_validated, B3B),\n\t\n\t%% Remove B4, this should revalidate the fork and make it the max_cdiff\n\t%%\n\t%% Height\tBlock/Status\n\t%%\n\t%% 3\t\tB3/on_chain        B3B/not_validated\n\t%%\t\t\t     |                  |\n\t%% 2\t\tB2/on_chain        B2B/not_validated\n\t%%\t\t\t     |                  |\n\t%% 1\t\tB1/on_chain        B1B/not_validated\n\t%%\t\t\t\t\t  \\            /\n\t%% 0\t\t\t\t\tB0/on_chain\n\tmark_tip(bcache_test, block_id(B3)),\n\tremove(bcache_test, block_id(B4)),\n\t\n\t?assertMatch({B1B, [B0], {{not_validated, awaiting_nonce_limiter_validation}, _}},\n\t\tget_earliest_not_validated_from_longest_chain(bcache_test)),\n\tassert_longest_chain([B0], 0),\n\tassert_tip(block_id(B3)),\n\tassert_max_cdiff({5, block_id(B3B)}),\n\tassert_is_valid_fork(true, on_chain, B0),\n\tassert_is_valid_fork(true, on_chain, B1),\n\tassert_is_valid_fork(true, not_validated, B1B),\n\tassert_is_valid_fork(true, on_chain, B2),\n\tassert_is_valid_fork(true, not_validated, B2B),\n\tassert_is_valid_fork(true, on_chain, B3),\n\tassert_is_valid_fork(true, not_validated, B3B),\n\n\t%% We should not be able to mark blocks on the invalid fork as the tip.\n\t%%\n\t%% Height\tBlock/Status\n\t%%\n\t%% 4\t\tB4/on_chain\n\t%%\t\t\t     |\n\t%% 3\t\tB3/on_chain        B3B/not_validated/invalid_fork\n\t%%\t\t\t     |                  |\n\t%% 2\t\tB2/on_chain        B2B/not_validated/invalid_fork\n\t%%\t\t\t     |                  |\n\t%% 1\t\tB1/on_chain        B1B/not_validated/invalid_fork\n\t%%\t\t\t\t\t  \\            /\n\t%% 0\t\t\t\t\tB0/on_chain/invalid_fork\n\tadd(bcache_test, B4),\n\tmark_tip(bcache_test, block_id(B4)),\n\n\t?assertException(error, invalid_tip, mark_tip(bcache_test, block_id(B1B))),\n\t?assertMatch(not_found, get_earliest_not_validated_from_longest_chain(bcache_test)),\n\tassert_longest_chain([B4, B3, B2, B1, B0], 0),\n\tassert_tip(block_id(B4)),\n\tassert_max_cdiff({4, block_id(B4)}),\n\tassert_is_valid_fork(false, on_chain, B0),\n\tassert_is_valid_fork(true, on_chain, B1),\n\tassert_is_valid_fork(false, not_validated, B1B),\n\tassert_is_valid_fork(true, on_chain, B2),\n\tassert_is_valid_fork(false, not_validated, B2B),\n\tassert_is_valid_fork(true, on_chain, B3),\n\tassert_is_valid_fork(false, not_validated, B3B),\n\n\tets:delete(bcache_test).\n\ncheckpoint_invalidate_max_cdiff_test() ->\n\tets:new(bcache_test, [set, named_table]),\n\n\t%% B2B is heaviest.\n\t%%\n\t%% Height\tBlock/Status\n\t%%\n\t%% 4\t\tB4/not_validated\n\t%%\t\t\t 
    |\n\t%% 3\t\tB3/on_chain\n\t%%\t\t\t     | \n\t%% 2\t\tB2/on_chain        B2B/not_validated\n\t%%\t\t\t     |                  |\n\t%% 1\t\tB1/on_chain        B1B/not_validated\n\t%%\t\t\t\t\t  \\            /\n\t%% 0\t\t\t\t\tB0/on_chain\n\tnew(bcache_test, B0 = random_block(0)),\n\tadd(bcache_test, B1 = on_top(random_block(1), B0)),\n\tadd(bcache_test, B2 = on_top(random_block(2), B1)),\n\tadd(bcache_test, B3 = on_top(random_block(3), B2)),\n\tadd(bcache_test, B4 = on_top(random_block(4), B3)),\n\tadd(bcache_test, B1B = on_top(random_block(1), B0)),\n\tadd(bcache_test, B2B = on_top(random_block(5), B1B)),\t\n\tmark_tip(bcache_test, block_id(B1)),\n\tmark_tip(bcache_test, block_id(B2)),\n\tmark_tip(bcache_test, block_id(B3)),\n\n\t?assertMatch({B1B, [B0], {{not_validated, awaiting_nonce_limiter_validation}, _}},\n\t\tget_earliest_not_validated_from_longest_chain(bcache_test)),\n\tassert_longest_chain([B0], 0),\n\tassert_tip(block_id(B3)),\n\tassert_max_cdiff({5, block_id(B2B)}),\n\tassert_is_valid_fork(true, on_chain, B0),\n\tassert_is_valid_fork(true, on_chain, B1),\n\tassert_is_valid_fork(true, not_validated, B1B),\n\tassert_is_valid_fork(true, on_chain, B2),\n\tassert_is_valid_fork(true, not_validated, B2B),\n\tassert_is_valid_fork(true, on_chain, B3),\n\tassert_is_valid_fork(true, not_validated, B4),\n\n\t%% B2B is still heaviest, but since B4 is the new tip, B2B's branch has been pushed below\n\t%% the checkpoint.\n\t%%\n\t%% Height\tBlock/Status\n\t%%\n\t%% 4\t\tB4/on_chain\n\t%%\t\t\t     |\n\t%% 3\t\tB3/on_chain\n\t%%\t\t\t     | \n\t%% 2\t\tB2/on_chain        B2B/not_validated/invalid_fork\n\t%%\t\t\t     |                  |\n\t%% 1\t\tB1/on_chain        B1B/not_validated/invalid_fork\n\t%%\t\t\t\t\t  \\            /\n\t%% 0\t\t\t\t\tB0/on_chain/invalid_fork\n\tmark_tip(bcache_test, block_id(B4)),\n\n\t?assertMatch(not_found, get_earliest_not_validated_from_longest_chain(bcache_test)),\n\tassert_longest_chain([B4, B3, B2, B1, B0], 0),\n\tassert_tip(block_id(B4)),\n\tassert_max_cdiff({4, block_id(B4)}),\n\tassert_is_valid_fork(false, on_chain, B0),\n\tassert_is_valid_fork(true, on_chain, B1),\n\tassert_is_valid_fork(false, not_validated, B1B),\n\tassert_is_valid_fork(true, on_chain, B2),\n\tassert_is_valid_fork(false, not_validated, B2B),\n\tassert_is_valid_fork(true, on_chain, B3),\n\tassert_is_valid_fork(true, on_chain, B4),\n\n\tets:delete(bcache_test).\n\ncheckpoint_invalidate_tip_test() ->\n\tets:new(bcache_test, [set, named_table]),\n\n\t%% B2B is the tip\n\t%%\n\t%% Height\tBlock/Status\n\t%%\n\t%% 4\t\tB4/not_validated\n\t%%\t\t\t     |\n\t%% 3\t\tB3/validated\n\t%%\t\t\t     | \n\t%% 2\t\tB2/validated       B2B/on_chain\n\t%%\t\t\t     |                  |\n\t%% 1\t\tB1/validated       B1B/on_chain\n\t%%\t\t\t\t\t  \\            /\n\t%% 0\t\t\t\t\tB0/on_chain\n\tnew(bcache_test, B0 = random_block(0)),\n\tadd(bcache_test, B1 = on_top(random_block(1), B0)),\n\tadd(bcache_test, B2 = on_top(random_block(2), B1)),\n\tadd(bcache_test, B3 = on_top(random_block(3), B2)),\n\tadd(bcache_test, B4 = on_top(random_block(4), B3)),\n\tadd(bcache_test, B1B = on_top(random_block(1), B0)),\n\tadd(bcache_test, B2B = on_top(random_block(5), B1B)),\n\tmark_tip(bcache_test, block_id(B1)),\n\tmark_tip(bcache_test, block_id(B2)),\n\tmark_tip(bcache_test, block_id(B3)),\n\tmark_tip(bcache_test, block_id(B1B)),\n\tmark_tip(bcache_test, block_id(B2B)),\n\n\t?assertMatch(not_found,\n\t\tget_earliest_not_validated_from_longest_chain(bcache_test)),\n\tassert_longest_chain([B2B, B1B, B0], 
0),\n\tassert_tip(block_id(B2B)),\n\tassert_max_cdiff({5, block_id(B2B)}),\n\tassert_is_valid_fork(true, on_chain, B0),\n\tassert_is_valid_fork(true, validated, B1),\n\tassert_is_valid_fork(true, on_chain, B1B),\n\tassert_is_valid_fork(true, validated, B2),\n\tassert_is_valid_fork(true, on_chain, B2B),\n\tassert_is_valid_fork(true, validated, B3),\n\tassert_is_valid_fork(true, not_validated, B4),\n\n\t%% When we mark B4 as the tip it will also invalidate the B2B branch.\n\t%%\n\t%% Height\tBlock/Status\n\t%%\n\t%% 4\t\tB4/on_chain\n\t%%\t\t\t     |\n\t%% 3\t\tB3/on_chain\n\t%%\t\t\t     | \n\t%% 2\t\tB2/on_chain        B2B/not_validated/invalid_fork\n\t%%\t\t\t     |                  |\n\t%% 1\t\tB1/on_chain        B1B/not_validated/invalid_fork\n\t%%\t\t\t\t\t  \\            /\n\t%% 0\t\t\t\t\tB0/on_chain/invalid_fork\n\tmark_tip(bcache_test, block_id(B4)),\n\n\t?assertMatch(not_found, get_earliest_not_validated_from_longest_chain(bcache_test)),\n\tassert_longest_chain([B4, B3, B2, B1, B0], 0),\n\tassert_tip(block_id(B4)),\n\tassert_max_cdiff({4, block_id(B4)}),\n\tassert_is_valid_fork(false, on_chain, B0),\n\tassert_is_valid_fork(true, on_chain, B1),\n\tassert_is_valid_fork(false, validated, B1B),\n\tassert_is_valid_fork(true, on_chain, B2),\n\tassert_is_valid_fork(false, validated, B2B),\n\tassert_is_valid_fork(true, on_chain, B3),\n\tassert_is_valid_fork(true, on_chain, B4),\n\n\tets:delete(bcache_test).\n\nblock_cache_test() ->\n\tets:new(bcache_test, [set, named_table]),\n\n\t%% Initialize block_cache from B1\n\t%%\n\t%% Height\t\tBlock/Status\n\t%%\n\t%% 0\t\t\tB1/on_chain\n\tnew(bcache_test, B1 = random_block(0)),\n\t?assertEqual(not_found, get(bcache_test, crypto:strong_rand_bytes(48))),\n\t?assertEqual(not_found, get_by_solution_hash(bcache_test, crypto:strong_rand_bytes(32),\n\t\t\tcrypto:strong_rand_bytes(32), 1, 1)),\n\t?assertEqual(B1, get(bcache_test, block_id(B1))),\n\t?assertEqual(B1, get_by_solution_hash(bcache_test, B1#block.hash,\n\t\t\tcrypto:strong_rand_bytes(32), 1, 1)),\n\t?assertEqual(not_found, get_by_solution_hash(bcache_test, B1#block.hash,\n\t\t\tblock_id(B1), 1, 1)),\n\tassert_longest_chain([B1], 0),\n\tassert_tip(block_id(B1)),\n\tassert_max_cdiff({0, block_id(B1)}),\n\tassert_is_valid_fork(true, on_chain, B1),\n\t?assertEqual([], get_siblings(bcache_test, B1)),\n\n\t%% Re-adding B1 shouldn't change anything - i.e. 
nothing should be updated because the\n\t%% block is already on chain\n\t%%\n\t%% Height\t\tBlock/Status\n\t%%\n\t%% 0\t\t\tB1/on_chain\n\tadd(bcache_test, B1#block{ txs = [crypto:strong_rand_bytes(32)] }),\n\t?assertEqual(B1#block{ txs = [] }, get(bcache_test, block_id(B1))),\n\t?assertEqual(B1#block{ txs = [] }, get_by_solution_hash(bcache_test, B1#block.hash,\n\t\t\tcrypto:strong_rand_bytes(32), 1, 1)),\n\tassert_longest_chain([B1], 0),\n\tassert_max_cdiff({0, block_id(B1)}),\n\tassert_is_valid_fork(true, on_chain, B1),\n\n\t%% Same as above.\n\t%%\n\t%% Height\t\tBlock/Status\n\t%%\n\t%% 0\t\t\tB1/on_chain\n\tadd(bcache_test, B1),\n\t?assertEqual(not_found, get_earliest_not_validated_from_longest_chain(bcache_test)),\n\tassert_longest_chain([B1], 0),\n\tassert_tip(block_id(B1)),\n\tassert_max_cdiff({0, block_id(B1)}),\n\tassert_is_valid_fork(true, on_chain, B1),\n\n\t%% Add B2 as not_validated\n\t%%\n\t%% Height\tBlock/Status\n\t%%\n\t%% 1\t\tB2/not_validated\n\t%%\t\t\t\t|\n\t%% 0\t\tB1/on_chain\n\tadd(bcache_test, B2 = on_top(random_block(1), B1)),\n\tExpectedStatus = awaiting_nonce_limiter_validation,\n\t?assertMatch({B2, [B1], {{not_validated, ExpectedStatus}, _}},\n\t\t\tget_earliest_not_validated_from_longest_chain(bcache_test)),\n\tassert_longest_chain([B1], 0),\n\tassert_tip(block_id(B1)),\n\tassert_max_cdiff({1, block_id(B2)}),\n\tassert_is_valid_fork(true, on_chain, B1),\n\tassert_is_valid_fork(true, not_validated, B2),\n\t?assertEqual([], get_siblings(bcache_test, B2)),\n\n\t%% Add a TXID to B2, but still don't mark as validated\n\t%%\n\t%% Height\tBlock/Status\n\t%%\n\t%% 1\t\tB2/not_validated\n\t%%\t\t\t\t|\n\t%% 0\t\tB1/on_chain\n\tTXID = crypto:strong_rand_bytes(32),\n\tadd(bcache_test, B2#block{ txs = [TXID] }),\n\t?assertEqual(B2#block{ txs = [TXID] }, get(bcache_test, block_id(B2))),\n\t?assertEqual(B2#block{ txs = [TXID] }, get_by_solution_hash(bcache_test, B2#block.hash,\n\t\t\tcrypto:strong_rand_bytes(32), 1, 1)),\n\t?assertEqual(B2#block{ txs = [TXID] }, get_by_solution_hash(bcache_test, B2#block.hash,\n\t\t\tblock_id(B1), 1, 1)),\n\tassert_longest_chain([B1], 0),\n\tassert_tip(block_id(B1)),\n\tassert_max_cdiff({1, block_id(B2)}),\n\tassert_is_valid_fork(true, on_chain, B1),\n\tassert_is_valid_fork(true, not_validated, B2),\n\n\t%% Remove B2\n\t%%\n\t%% Height\tBlock/Status\n\t%%\n\t%% 0\t\tB1/on_chain\n\tremove(bcache_test, block_id(B2)),\n\t?assertEqual(not_found, get(bcache_test, block_id(B2))),\n\t?assertEqual(B1, get(bcache_test, block_id(B1))),\n\tassert_longest_chain([B1], 0),\n\tassert_tip(block_id(B1)),\n\tassert_max_cdiff({0, block_id(B1)}),\n\tassert_is_valid_fork(true, on_chain, B1),\n\n\t%% Add B and B1_2 creating a fork, with B1_2 at a higher difficulty. 
Nether are validated.\n\t%%\n\t%% Height\tBlock/Status\n\t%%\n\t%% 1\t\tB2/not_validated    B1_2/not_validated\n\t%%\t\t\t\t\t  \\             /\n\t%% 0\t\t\t\t\tB1/on_chain\n\tadd(bcache_test, B2),\n\tadd(bcache_test, B1_2 = (on_top(random_block(2), B1))#block{ hash = B1#block.hash }),\n\t?assertEqual(B1, get_by_solution_hash(bcache_test, B1#block.hash, block_id(B1_2),\n\t\t\t1, 1)),\n\t?assertEqual(B1, get_by_solution_hash(bcache_test, B1#block.hash,\n\t\t\tcrypto:strong_rand_bytes(32), B1#block.cumulative_diff, 1)),\n\t?assertEqual(B1_2, get_by_solution_hash(bcache_test, B1#block.hash,\n\t\t\tcrypto:strong_rand_bytes(32), B1_2#block.cumulative_diff, 1)),\n\t?assert(lists:member(get_by_solution_hash(bcache_test, B1#block.hash, <<>>, 1, 1),\n\t\t\t[B1, B1_2])),\n\tassert_longest_chain([B1], 0),\n\tassert_tip(block_id(B1)),\n\tassert_max_cdiff({2, block_id(B1_2)}),\n\tassert_is_valid_fork(true, on_chain, B1),\n\tassert_is_valid_fork(true, not_validated, B2),\n\tassert_is_valid_fork(true, not_validated, B1_2),\n\t?assertEqual([B2], get_siblings(bcache_test, B1_2)),\n\t?assertEqual([B1_2], get_siblings(bcache_test, B2)),\n\n\t%% Even though B2 is marked as a tip, it is still lower difficulty than B1_2 so will\n\t%% not be included in the longest chain\n\t%%\n\t%% Height\tBlock/Status\n\t%%\n\t%% 1\t\tB2/on_chain      B1_2/not_validated\n\t%%\t\t\t\t\t  \\             /\n\t%% 0\t\t\t\t\tB1/on_chain\n\tmark_tip(bcache_test, block_id(B2)),\n\t?assertEqual(B1_2, get(bcache_test, block_id(B1_2))),\n\tassert_longest_chain([B1], 0),\n\tassert_tip(block_id(B2)),\n\tassert_max_cdiff({2, block_id(B1_2)}),\n\tassert_is_valid_fork(true, on_chain, B1),\n\tassert_is_valid_fork(true, on_chain, B2),\n\tassert_is_valid_fork(true, not_validated, B1_2),\n\n\t%% Remove B1_2, causing B2 to now be the tip of the heaviest chain\n\t%%\n\t%% Height\tBlock/Status\n\t%%\n\t%% 1\t\tB2/on_chain \n\t%%\t\t\t\t\t  \\ \n\t%% 0\t\t\t\t\tB1/on_chain\n\tremove(bcache_test, block_id(B1_2)),\n\t?assertEqual(not_found, get(bcache_test, block_id(B1_2))),\n\t?assertEqual(B1, get_by_solution_hash(bcache_test, B1#block.hash,\n\t\t\tcrypto:strong_rand_bytes(32), 0, 0)),\n\tassert_longest_chain([B2, B1], 0),\n\tassert_tip(block_id(B2)),\n\tassert_max_cdiff({1, block_id(B2)}),\n\tassert_is_valid_fork(true, on_chain, B1),\n\tassert_is_valid_fork(true, on_chain, B2),\n\n\t%% Height\tBlock/Status\n\t%%\n\t%% 1\t\tB2/on_chain \n\t%%\t\t\t\t\t  \\ \n\t%% 0\t\t\t\t\tB1/on_chain\n\tprune(bcache_test, 1),\n\t?assertEqual(B1, get(bcache_test, block_id(B1))),\n\t?assertEqual(B1, get_by_solution_hash(bcache_test, B1#block.hash,\n\t\t\tcrypto:strong_rand_bytes(32), 0, 0)),\n\tassert_longest_chain([B2, B1], 0),\n\tassert_tip(block_id(B2)),\n\tassert_max_cdiff({1, block_id(B2)}),\n\tassert_is_valid_fork(true, on_chain, B1),\n\tassert_is_valid_fork(true, on_chain, B2),\n\n\t%% Height\tBlock/Status\n\t%%\n\t%% 1\t\tB2/on_chain \n\tprune(bcache_test, 0),\n\t?assertEqual(not_found, get(bcache_test, block_id(B1))),\n\t?assertEqual(not_found, get_by_solution_hash(bcache_test, B1#block.hash, <<>>, 0, 0)),\n\tassert_longest_chain([B2], 0),\n\tassert_tip(block_id(B2)),\n\tassert_max_cdiff({1, block_id(B2)}),\n\tassert_is_valid_fork(true, on_chain, B2),\n\n\tprune(bcache_test, 0),\n\t?assertEqual(not_found, get(bcache_test, block_id(B1_2))),\n\t?assertEqual(not_found, get_by_solution_hash(bcache_test, B1_2#block.hash, <<>>, 0, 0)),\n\tassert_longest_chain([B2], 0),\n\tassert_tip(block_id(B2)),\n\tassert_max_cdiff({1, 
block_id(B2)}),\n\tassert_is_valid_fork(true, on_chain, B2),\n\n\t%% B1_2->B1 fork is the heaviest, but only B1 is validated. B2_2->B2->B1 is longer but\n\t%% has a lower cdiff.\n\t%%\n\t%% Height\tBlock/Status\n\t%%\n\t%% 2\t\tB2_2/not_validated\n\t%%               |\n\t%% 1\t\tB2/on_chain      B1_2/not_validated\n\t%%\t\t\t\t\t  \\             /\n\t%% 0\t\t\t\t\tB1/on_chain\n\tnew(bcache_test, B1),\n\tadd(bcache_test, B1_2),\n\tadd(bcache_test, B2),\n\tmark_tip(bcache_test, block_id(B2)),\n\tadd(bcache_test, B2_2 = on_top(random_block(1), B2)),\n\t?assertMatch({B1_2, [B1], {{not_validated, ExpectedStatus}, _Timestamp}},\n\t\t\tget_earliest_not_validated_from_longest_chain(bcache_test)),\n\tassert_longest_chain([B1], 0),\n\tassert_tip(block_id(B2)),\n\tassert_max_cdiff({2, block_id(B1_2)}),\n\tassert_is_valid_fork(true, on_chain, B1),\n\tassert_is_valid_fork(true, on_chain, B2),\n\tassert_is_valid_fork(true, not_validated, B1_2),\n\tassert_is_valid_fork(true, not_validated, B2_2),\n\n\t%% B2_3->B2_2->B2->B1 is now longer and heavier, but only B2->B1 are validated.\n\t%%\n\t%% Height\tBlock/Status\n\t%%\n\t%% 3\t\tB2_3/not_validated\n\t%%               |\n\t%% 2\t\tB2_2/not_validated\n\t%%               |\n\t%% 1\t\tB2/on_chain      B1_2/not_validated\n\t%%\t\t\t\t\t  \\             /\n\t%% 0\t\t\t\t\tB1/on_chain\n\tadd(bcache_test, B2_3 = on_top(random_block(3), B2_2)),\n\t?assertMatch({B2_2, [B2], {{not_validated, ExpectedStatus}, _Timestamp}},\n\t\t\tget_earliest_not_validated_from_longest_chain(bcache_test)),\n\t?assertException(error, invalid_tip, mark_tip(bcache_test, block_id(B2_3))),\n\tassert_longest_chain([B2, B1], 0),\n\tassert_tip(block_id(B2)),\n\tassert_max_cdiff({3, block_id(B2_3)}),\n\tassert_is_valid_fork(true, on_chain, B1),\n\tassert_is_valid_fork(true, on_chain, B2),\n\tassert_is_valid_fork(true, not_validated, B1_2),\n\tassert_is_valid_fork(true, not_validated, B2_2),\n\tassert_is_valid_fork(true, not_validated, B2_3),\n\n\t%% Now B2_2->B2->B1 are validated.\n\t%%\n\t%% Height\tBlock/Status\n\t%%\n\t%% 3\t\tB2_3/not_validated\n\t%%               |\n\t%% 2\t\tB2_2/validated\n\t%%               |\n\t%% 1\t\tB2/on_chain      B1_2/not_validated\n\t%%\t\t\t\t\t  \\             /\n\t%% 0\t\t\t\t\tB1/on_chain\n\tadd_validated(bcache_test, B2_2),\n\t?assertMatch({B2_2, {validated, _}},\n\t\t\tget_block_and_status(bcache_test, B2_2#block.indep_hash)),\n\t?assertMatch({B2_3, [B2_2, B2], {{not_validated, ExpectedStatus}, _}},\n\t\t\tget_earliest_not_validated_from_longest_chain(bcache_test)),\n\tassert_longest_chain([B2_2, B2, B1], 1),\n\tassert_tip(block_id(B2)),\n\tassert_max_cdiff({3, block_id(B2_3)}),\n\tassert_is_valid_fork(true, on_chain, B1),\n\tassert_is_valid_fork(true, on_chain, B2),\n\tassert_is_valid_fork(true, not_validated, B1_2),\n\tassert_is_valid_fork(true, validated, B2_2),\n\tassert_is_valid_fork(true, not_validated, B2_3),\n\n\t%% Now the B3->B2->B1 fork is heaviest\n\t%%\n\t%% Height\tBlock/Status\n\t%%\n\t%% 3\t\tB2_3/not_validated\n\t%%               |\n\t%% 2\t\tB2_2/validated          B3/on_chain\n\t%%                    \\            /\n\t%% 1                   B2/on_chain      B1_2/not_validated\n\t%%\t\t\t                    \\           /\n\t%% 0                             B1/on_chain\n\tB3 = on_top(random_block(4), B2),\n\tB3ID = block_id(B3),\n\tadd(bcache_test, B3),\n\tadd_validated(bcache_test, B3),\n\tmark_tip(bcache_test, B3ID),\n\t?assertEqual(not_found, get_earliest_not_validated_from_longest_chain(bcache_test)),\n\tassert_longest_chain([B3, 
B2, B1], 0),\n\tassert_tip(block_id(B3)),\n\tassert_max_cdiff({4, block_id(B3)}),\n\tassert_is_valid_fork(true, on_chain, B1),\n\tassert_is_valid_fork(true, on_chain, B2),\n\tassert_is_valid_fork(true, not_validated, B1_2),\n\tassert_is_valid_fork(true, validated, B2_2),\n\tassert_is_valid_fork(true, not_validated, B2_3),\n\tassert_is_valid_fork(true, on_chain, B3),\n\t?assertEqual([B2_2], get_siblings(bcache_test, B3)),\n\t?assertEqual([B3], get_siblings(bcache_test, B2_2)),\n\t?assertEqual([B2], get_siblings(bcache_test, B1_2)),\n\n\t%% B3->B2->B1 fork is still heaviest\n\t%%\n\t%% Height\tBlock/Status\n\t%%\n\t%% 3\t\tB2_3/not_validated\n\t%%               |\n\t%% 2\t\tB2_2/on_chain        B3/validated\n\t%%                    \\            /\n\t%% 1                   B2/on_chain      B1_2/not_validated\n\t%%\t\t\t                    \\           /\n\t%% 0                             B1/on_chain\n\tmark_tip(bcache_test, block_id(B2_2)),\n\t?assertEqual(not_found, get_earliest_not_validated_from_longest_chain(bcache_test)),\n\tassert_longest_chain([B3, B2, B1], 1),\n\tassert_tip(block_id(B2_2)),\n\tassert_max_cdiff({4, block_id(B3)}),\n\tassert_is_valid_fork(true, on_chain, B1),\n\tassert_is_valid_fork(true, on_chain, B2),\n\tassert_is_valid_fork(true, not_validated, B1_2),\n\tassert_is_valid_fork(true, on_chain, B2_2),\n\tassert_is_valid_fork(true, not_validated, B2_3),\n\tassert_is_valid_fork(true, validated, B3),\n\n\t%% Height\tBlock/Status\n\t%%\n\t%% 3\t\tB2_3/not_validated   B4/not_validated\n\t%%               |                     |\n\t%% 2\t\tB2_2/on_chain        B3/validated\n\t%%                    \\            /\n\t%% 1                   B2/on_chain      B1_2/not_validated\n\t%%\t\t\t                    \\           /\n\t%% 0                             B1/on_chain\n\tadd(bcache_test, B4 = on_top(random_block(5), B3)),\n\t?assertMatch({B4, [B3, B2], {{not_validated, ExpectedStatus}, _Timestamp}},\n\t\t\tget_earliest_not_validated_from_longest_chain(bcache_test)),\n\tassert_longest_chain([B3, B2, B1], 1),\n\tassert_tip(block_id(B2_2)),\n\tassert_max_cdiff({5, block_id(B4)}),\n\tassert_is_valid_fork(true, on_chain, B1),\n\tassert_is_valid_fork(true, on_chain, B2),\n\tassert_is_valid_fork(true, not_validated, B1_2),\n\tassert_is_valid_fork(true, on_chain, B2_2),\n\tassert_is_valid_fork(true, not_validated, B2_3),\n\tassert_is_valid_fork(true, validated, B3),\n\tassert_is_valid_fork(true, not_validated, B4),\n\n\t%% Height\tBlock/Status\n\t%%\n\t%% 3\t\tB2_3/not_validated   B4/not_validated\n\t%%               |                     |\n\t%% 2\t\tB2_2/on_chain        B3/validated\n\t%%                    \\            /\n\t%% 1                   B2/on_chain\n\tprune(bcache_test, 1),\n\t?assertEqual(not_found, get(bcache_test, block_id(B1))),\n\t?assertEqual(not_found, get_by_solution_hash(bcache_test, B1#block.hash, <<>>, 0, 0)),\n\tassert_longest_chain([B3, B2], 1),\n\tassert_tip(block_id(B2_2)),\n\tassert_max_cdiff({5, block_id(B4)}),\n\tassert_is_valid_fork(true, on_chain, B2),\n\tassert_is_valid_fork(true, on_chain, B2_2),\n\tassert_is_valid_fork(true, not_validated, B2_3),\n\tassert_is_valid_fork(true, validated, B3),\n\tassert_is_valid_fork(true, not_validated, B4),\n\t?assertEqual([], get_siblings(bcache_test, B2_3)),\n\t?assertEqual([], get_siblings(bcache_test, B4)),\n\n\t%% Height\tBlock/Status\n\t%%\n\t%% 3\t\tB2_3/on_chain\n\t%%               |\n\t%% 2\t\tB2_2/on_chain\n\tmark_tip(bcache_test, block_id(B2_3)),\n\tprune(bcache_test, 1),\n\t?assertEqual(not_found, 
get(bcache_test, block_id(B2))),\n\t?assertEqual(not_found, get_by_solution_hash(bcache_test, B2#block.hash, <<>>, 0, 0)),\n\tassert_longest_chain([B2_3, B2_2], 0),\n\tassert_tip(block_id(B2_3)),\n\tassert_max_cdiff({3, block_id(B2_3)}),\n\tassert_is_valid_fork(true, on_chain, B2_2),\n\tassert_is_valid_fork(true, on_chain, B2_3),\n\n\t%% Height\tBlock/Status\n\t%%\n\t%% 3\t\tB2_3/on_chain\n\t%%               |\n\t%% 2\t\tB2_2/on_chain\n\tprune(bcache_test, 1),\n\t?assertEqual(not_found, get(bcache_test, block_id(B3))),\n\t?assertEqual(not_found, get_by_solution_hash(bcache_test, B3#block.hash, <<>>, 0, 0)),\n\tassert_longest_chain([B2_3, B2_2], 0),\n\tassert_tip(block_id(B2_3)),\n\tassert_max_cdiff({3, block_id(B2_3)}),\n\tassert_is_valid_fork(true, on_chain, B2_2),\n\tassert_is_valid_fork(true, on_chain, B2_3),\n\n\t%% Height\tBlock/Status\n\t%%\n\t%% 3\t\tB2_3/on_chain\n\t%%               |\n\t%% 2\t\tB2_2/on_chain\n\tprune(bcache_test, 1),\n\t?assertEqual(not_found, get(bcache_test, block_id(B4))),\n\t?assertEqual(not_found, get_by_solution_hash(bcache_test, B4#block.hash, <<>>, 0, 0)),\n\tassert_longest_chain([B2_3, B2_2], 0),\n\tassert_tip(block_id(B2_3)),\n\tassert_max_cdiff({3, block_id(B2_3)}),\n\tassert_is_valid_fork(true, on_chain, B2_2),\n\tassert_is_valid_fork(true, on_chain, B2_3),\n\n\t%% Height\tBlock/Status\n\t%%\n\t%% 3\t\tB2_3/on_chain\n\t%%               |\n\t%% 2\t\tB2_2/on_chain\n\tprune(bcache_test, 1),\n\t?assertEqual(B2_2, get(bcache_test, block_id(B2_2))),\n\t?assertEqual(B2_2, get_by_solution_hash(bcache_test, B2_2#block.hash, <<>>, 0, 0)),\n\tassert_longest_chain([B2_3, B2_2], 0),\n\tassert_tip(block_id(B2_3)),\n\tassert_max_cdiff({3, block_id(B2_3)}),\n\tassert_is_valid_fork(true, on_chain, B2_2),\n\tassert_is_valid_fork(true, on_chain, B2_3),\n\n\t%% Height\tBlock/Status\n\t%%\n\t%% 3\t\tB2_3/on_chain\n\t%%               |\n\t%% 2\t\tB2_2/on_chain\n\tprune(bcache_test, 1),\n\t?assertEqual(B2_3, get(bcache_test, block_id(B2_3))),\n\t?assertEqual(B2_3, get_by_solution_hash(bcache_test, B2_3#block.hash, <<>>, 0, 0)),\n\tassert_longest_chain([B2_3, B2_2], 0),\n\tassert_tip(block_id(B2_3)),\n\tassert_max_cdiff({3, block_id(B2_3)}),\n\tassert_is_valid_fork(true, on_chain, B2_2),\n\tassert_is_valid_fork(true, on_chain, B2_3),\n\n\t%% Height\tBlock/Status\n\t%%\n\t%% 3\t\tB2_3/on_chain\n\t%%               |\n\t%% 2\t\tB2_2/on_chain\n\tremove(bcache_test, block_id(B3)),\n\t?assertEqual(not_found, get(bcache_test, block_id(B3))),\n\t?assertEqual(not_found, get_by_solution_hash(bcache_test, B3#block.hash, <<>>, 0, 0)),\n\tassert_longest_chain([B2_3, B2_2], 0),\n\tassert_tip(block_id(B2_3)),\n\tassert_max_cdiff({3, block_id(B2_3)}),\n\tassert_is_valid_fork(true, on_chain, B2_2),\n\tassert_is_valid_fork(true, on_chain, B2_3),\n\n\t%% Height\tBlock/Status\n\t%%\n\t%% 3\t\tB2_3/on_chain\n\t%%               |\n\t%% 2\t\tB2_2/on_chain\n\tremove(bcache_test, block_id(B3)),\n\t?assertEqual(not_found, get(bcache_test, block_id(B4))),\n\t?assertEqual(not_found, get_by_solution_hash(bcache_test, B4#block.hash, <<>>, 0, 0)),\n\tassert_longest_chain([B2_3, B2_2], 0),\n\tassert_tip(block_id(B2_3)),\n\tassert_max_cdiff({3, block_id(B2_3)}),\n\tassert_is_valid_fork(true, on_chain, B2_2),\n\tassert_is_valid_fork(true, on_chain, B2_3),\n\n\t%% Height\tBlock/Status\n\t%%\n\t%% 1\t\tB12/not_validated    B13/on_chain\n\t%%\t\t\t\t\t  \\              /\n\t%% 0\t\t\t\t\tB11/on_chain\n\tnew(bcache_test, B11 = random_block(0)),\n\tadd(bcache_test, B12 = on_top(random_block(1), 
B11)),\n\tadd_validated(bcache_test, B13 = on_top(random_block(1), B11)),\n\tmark_tip(bcache_test, block_id(B13)),\n\t%% Although the first block added at height 1 was B12, B13 then became the tip,\n\t%% so we should not reorganize.\n\t?assertEqual(not_found, get_earliest_not_validated_from_longest_chain(bcache_test)),\n\t%% The longest chain starts at the max_cdiff block which in this case is B12 since B13\n\t%% was added second and has the same cdiff. So the longest chain stays as just [B11]\n\tassert_longest_chain([B11], 0),\n\tassert_tip(block_id(B13)),\n\tassert_max_cdiff({1, block_id(B12)}),\n\tassert_is_valid_fork(true, on_chain, B11),\n\tassert_is_valid_fork(true, not_validated, B12),\n\tassert_is_valid_fork(true, on_chain, B13),\n\n\t%% Height\tBlock/Status\n\t%%\n\t%% 2\t\t\t\t\t\t\t B14/not_validated\n\t%%\t\t\t\t\t\t\t\t\t  |\n\t%% 1\t\tB12/not_validated    B13/on_chain\n\t%%\t\t\t\t\t  \\              /\n\t%% 0\t\t\t\t\tB11/on_chain\n\tadd(bcache_test, B14 = on_top(random_block_after_repacking(2), B13)),\n\t?assertMatch({B14, [B13], {{not_validated, awaiting_nonce_limiter_validation}, _}},\n\t\t\tget_earliest_not_validated_from_longest_chain(bcache_test)),\n\tassert_longest_chain([B13, B11], 0),\n\tassert_tip(block_id(B13)),\n\tassert_max_cdiff({2, block_id(B14)}),\n\tassert_is_valid_fork(true, on_chain, B11),\n\tassert_is_valid_fork(true, not_validated, B12),\n\tassert_is_valid_fork(true, on_chain, B13),\n\tassert_is_valid_fork(true, not_validated, B14),\n\n\t%% Height\tBlock/Status\n\t%%\n\t%% 2\t\t\t\t\t\t\t B14/not_validated\n\t%%\t\t\t\t\t\t\t\t\t  |\n\t%% 1\t\tB12/not_validated    B13/on_chain\n\t%%\t\t\t\t\t  \\              /\n\t%% 0\t\t\t\t\tB11/on_chain\n\tmark_nonce_limiter_validated(bcache_test, crypto:strong_rand_bytes(48)),\n\tmark_nonce_limiter_validated(bcache_test, block_id(B13)),\n\t?assertMatch({B13, {on_chain, _}},\n\t\t\tget_block_and_status(bcache_test, block_id(B13))),\n\t?assertMatch({B14, {{not_validated, awaiting_nonce_limiter_validation}, _}},\n\t\t\tget_block_and_status(bcache_test, block_id(B14))),\n\tassert_longest_chain([B13, B11], 0),\n\tassert_tip(block_id(B13)),\n\tassert_max_cdiff({2, block_id(B14)}),\n\tassert_is_valid_fork(true, on_chain, B11),\n\tassert_is_valid_fork(true, not_validated, B12),\n\tassert_is_valid_fork(true, on_chain, B13),\n\tassert_is_valid_fork(true, not_validated, B14),\n\n\t%% Height\tBlock/Status\n\t%%\n\t%% 2\t\t\t\t\t\t\t B14/not_validated\n\t%%\t\t\t\t\t\t\t\t\t  |\n\t%% 1\t\tB12/not_validated    B13/on_chain\n\t%%\t\t\t\t\t  \\              /\n\t%% 0\t\t\t\t\tB11/on_chain\n\t?assertMatch({B14, {{not_validated, awaiting_nonce_limiter_validation}, _}},\n\t\t\tget_block_and_status(bcache_test, block_id(B14))),\n\t?assertMatch({B14, [B13], {{not_validated, awaiting_nonce_limiter_validation}, _}},\n\t\t\tget_earliest_not_validated_from_longest_chain(bcache_test)),\n\tassert_longest_chain([B13, B11], 0),\n\tassert_tip(block_id(B13)),\n\tassert_max_cdiff({2, block_id(B14)}),\n\tassert_is_valid_fork(true, on_chain, B11),\n\tassert_is_valid_fork(true, not_validated, B12),\n\tassert_is_valid_fork(true, on_chain, B13),\n\tassert_is_valid_fork(true, not_validated, B14),\n\n\t%% Height\tBlock/Status\n\t%%\n\t%% 2\t\t\t\t\t\t\t B14/nonce_limiter_validated\n\t%%\t\t\t\t\t\t\t\t\t  |\n\t%% 1\t\tB12/not_validated    B13/on_chain\n\t%%\t\t\t\t\t  \\              /\n\t%% 0\t\t\t\t\tB11/on_chain\n\tmark_nonce_limiter_validated(bcache_test, block_id(B14)),\n\t?assertMatch({B14, {{not_validated, nonce_limiter_validated}, 
_}},\n\t\t\tget_block_and_status(bcache_test, block_id(B14))),\n\t?assertMatch({B14, [B13], {{not_validated, nonce_limiter_validated}, _}},\n\t\t\tget_earliest_not_validated_from_longest_chain(bcache_test)),\n\t%% Longest chain now includes B14 because its status changed to nonce_limiter_validated\n\tassert_longest_chain([B14, B13, B11], 1),\n\tassert_tip(block_id(B13)),\n\tassert_max_cdiff({2, block_id(B14)}),\n\tassert_is_valid_fork(true, on_chain, B11),\n\tassert_is_valid_fork(true, not_validated, B12),\n\tassert_is_valid_fork(true, on_chain, B13),\n\tassert_is_valid_fork(true, not_validated, B14),\n\n\t%% Height\tBlock/Status\n\t%%\n\t%% 3\t\t\t\t\t\t\t B15/not_validated\n\t%%\t\t\t\t\t\t\t\t\t  |\n\t%% 2\t\t\t\t\t\t\t B14/nonce_limiter_validated\n\t%%\t\t\t\t\t\t\t\t\t  |\n\t%% 1\t\tB12/not_validated    B13/on_chain\n\t%%\t\t\t\t\t  \\              /\n\t%% 0\t\t\t\t\tB11/on_chain\n\tadd(bcache_test, B15 = on_top(random_block_after_repacking(3), B14)),\n\t?assertMatch({B14, [B13], {{not_validated, nonce_limiter_validated}, _}},\n\t\t\tget_earliest_not_validated_from_longest_chain(bcache_test)),\n\tassert_longest_chain([B14, B13, B11], 1),\n\tassert_tip(block_id(B13)),\n\tassert_max_cdiff({3, block_id(B15)}),\n\tassert_is_valid_fork(true, on_chain, B11),\n\tassert_is_valid_fork(true, not_validated, B12),\n\tassert_is_valid_fork(true, on_chain, B13),\n\tassert_is_valid_fork(true, not_validated, B14),\n\tassert_is_valid_fork(true, not_validated, B15),\n\n\t%% Height\tBlock/Status\n\t%%\n\t%% 3\t\t\t\t\t\t\t B15/not_validated\n\t%%\t\t\t\t\t\t\t\t\t  |\n\t%% 2\t\t\t\t\t\t\t B14/validated\n\t%%\t\t\t\t\t\t\t\t\t  |\n\t%% 1\t\tB12/not_validated    B13/on_chain\n\t%%\t\t\t\t\t  \\              /\n\t%% 0\t\t\t\t\tB11/on_chain\n\tadd_validated(bcache_test, B14),\n\t?assertMatch({B15, [B14, B13], {{not_validated, awaiting_nonce_limiter_validation}, _}},\n\t\t\tget_earliest_not_validated_from_longest_chain(bcache_test)),\n\t?assertMatch({B14, {validated, _}}, get_block_and_status(bcache_test, block_id(B14))),\n\tassert_longest_chain([B14, B13, B11], 1),\n\tassert_tip(block_id(B13)),\n\tassert_max_cdiff({3, block_id(B15)}),\n\tassert_is_valid_fork(true, on_chain, B11),\n\tassert_is_valid_fork(true, not_validated, B12),\n\tassert_is_valid_fork(true, on_chain, B13),\n\tassert_is_valid_fork(true, validated, B14),\n\tassert_is_valid_fork(true, not_validated, B15),\n\n\t%% Height\tBlock/Status\n\t%%\n\t%% 3\t\t\t\t\t\t\t B16/not_validated\n\t%%\t\t\t\t\t\t\t\t\t  |\n\t%% 3\t\t\t\t\t\t\t B15/not_validated\n\t%%\t\t\t\t\t\t\t\t\t  |\n\t%% 2\t\t\t\t\t\t\t B14/validated\n\t%%\t\t\t\t\t\t\t\t\t  |\n\t%% 1\t\tB12/not_validated    B13/on_chain\n\t%%\t\t\t\t\t  \\              /\n\t%% 0\t\t\t\t\tB11/on_chain\n\tadd(bcache_test, B16 = on_top(random_block_after_repacking(4), B15)),\n\t?assertMatch({B15, [B14, B13], {{not_validated, awaiting_nonce_limiter_validation}, _}},\n\t\t\tget_earliest_not_validated_from_longest_chain(bcache_test)),\n\tassert_longest_chain([B14, B13, B11], 1),\n\tassert_tip(block_id(B13)),\n\tassert_max_cdiff({4, block_id(B16)}),\n\tassert_is_valid_fork(true, on_chain, B11),\n\tassert_is_valid_fork(true, not_validated, B12),\n\tassert_is_valid_fork(true, on_chain, B13),\n\tassert_is_valid_fork(true, validated, B14),\n\tassert_is_valid_fork(true, not_validated, B15),\n\tassert_is_valid_fork(true, not_validated, B16),\t\n\n\t%% Height\tBlock/Status\n\t%%\n\t%% 3\t\t\t\t\t\t\t B16/nonce_limiter_validated\n\t%%\t\t\t\t\t\t\t\t\t  |\n\t%% 3\t\t\t\t\t\t\t 
B15/not_validated\n\t%%\t\t\t\t\t\t\t\t\t  |\n\t%% 2\t\t\t\t\t\t\t B14/validated\n\t%%\t\t\t\t\t\t\t\t\t  |\n\t%% 1\t\tB12/not_validated    B13/on_chain\n\t%%\t\t\t\t\t  \\              /\n\t%% 0\t\t\t\t\tB11/on_chain\n\tmark_nonce_limiter_validated(bcache_test, block_id(B16)),\n\t?assertMatch({B15, [B14, B13], {{not_validated, awaiting_nonce_limiter_validation}, _}},\n\t\t\tget_earliest_not_validated_from_longest_chain(bcache_test)),\n\t?assertMatch({B16, {{not_validated, nonce_limiter_validated}, _}},\n\t\t\tget_block_and_status(bcache_test, block_id(B16))),\n\tassert_longest_chain([B14, B13, B11], 1),\n\tassert_tip(block_id(B13)),\n\tassert_max_cdiff({4, block_id(B16)}),\n\tassert_is_valid_fork(true, on_chain, B11),\n\tassert_is_valid_fork(true, not_validated, B12),\n\tassert_is_valid_fork(true, on_chain, B13),\n\tassert_is_valid_fork(true, validated, B14),\n\tassert_is_valid_fork(true, not_validated, B15),\n\tassert_is_valid_fork(true, not_validated, B16),\t\t\n\n\t%% Height\tBlock/Status\n\t%%\n\t%% 3\t\t\t\t\t\t\t B16/nonce_limiter_validated\n\t%%\t\t\t\t\t\t\t\t\t  |\n\t%% 3\t\t\t\t\t\t\t B15/not_validated\n\t%%\t\t\t\t\t\t\t\t\t  |\n\t%% 2\t\t\t\t\t\t\t B14/on_chain\n\t%%\t\t\t\t\t\t\t\t\t  |\n\t%% 1\t\tB12/not_validated    B13/on_chain\n\t%%\t\t\t\t\t  \\              /\n\t%% 0\t\t\t\t\tB11/on_chain\n\tmark_tip(bcache_test, block_id(B14)),\n\t?assertMatch({B14, {on_chain, _}}, get_block_and_status(bcache_test, block_id(B14))),\n\t?assertMatch({B15, [B14], {{not_validated, awaiting_nonce_limiter_validation}, _}},\n\t\t\tget_earliest_not_validated_from_longest_chain(bcache_test)),\n\tassert_longest_chain([B14, B13, B11], 0),\n\tassert_tip(block_id(B14)),\n\tassert_max_cdiff({4, block_id(B16)}),\n\tassert_is_valid_fork(true, on_chain, B11),\n\tassert_is_valid_fork(true, not_validated, B12),\n\tassert_is_valid_fork(true, on_chain, B13),\n\tassert_is_valid_fork(true, on_chain, B14),\n\tassert_is_valid_fork(true, not_validated, B15),\n\tassert_is_valid_fork(true, not_validated, B16),\t\t\n\n\tets:delete(bcache_test).\n\nassert_longest_chain(Chain, NotOnChainCount) ->\n\tExpectedPairs =  [{B#block.indep_hash, []} || B <- Chain],\n\t?assertEqual({ExpectedPairs, NotOnChainCount}, get_longest_chain_cache(bcache_test)).\n\nassert_max_cdiff(ExpectedMaxCDiff) ->\n\t[{_, MaxCDiff}] = ets:lookup(bcache_test, max_cdiff),\n\t?assertEqual(ExpectedMaxCDiff, MaxCDiff).\n\nassert_is_valid_fork(ExpectedFork, ExpectedStatus, B) ->\n\t[{_, {_, Status, _, _}}] = ets:lookup(bcache_test, {block, block_id(B)}),\n\tcase ExpectedStatus of\n\t\tnot_validated ->\n\t\t\t?assertMatch({not_validated, _}, Status);\n\t\t_ ->\n\t\t\t?assertEqual(ExpectedStatus, Status)\n\tend,\n\t?assertEqual(ExpectedFork, is_valid_fork(bcache_test, B, Status)).\t\n\nassert_tip(ExpectedTip) ->\n\t[{_, Tip}] = ets:lookup(bcache_test, tip),\n\t?assertEqual(ExpectedTip,Tip).\n\nrandom_block(CDiff) ->\n\t#block{ indep_hash = crypto:strong_rand_bytes(48), height = 0, cumulative_diff = CDiff,\n\t\t\thash = crypto:strong_rand_bytes(32) }.\n\nrandom_block_after_repacking(CDiff) ->\n\t#block{ indep_hash = crypto:strong_rand_bytes(48), height = 0, cumulative_diff = CDiff,\n\t\t\thash = crypto:strong_rand_bytes(32) }.\n\nblock_id(#block{ indep_hash = H }) ->\n\tH.\n\non_top(B, PrevB) ->\n\tB#block{ previous_block = PrevB#block.indep_hash, height = PrevB#block.height + 1,\n\t\t\tprevious_cumulative_diff = PrevB#block.cumulative_diff }.\n\n%% @doc Test that get_blocks_by_miner returns the correct blocks for a given 
miner.\nget_blocks_by_miner_test() ->\n\tets:new(bcache_test, [set, named_table]),\n\tnew(bcache_test, B0 = random_block(0)),\n\tTab = bcache_test,\n\t?assertEqual([], get_blocks_by_miner(Tab, <<\"miner1\">>)),\n\t% Create some test blocks\n\tB1 = #block{ indep_hash = <<\"hash1\">>, reward_addr = <<\"miner1\">> },\n\tB2 = #block{ indep_hash = <<\"hash2\">>, reward_addr = <<\"miner2\">> },\n\tB3 = #block{ indep_hash = <<\"hash3\">>, reward_addr = <<\"miner1\">> },\n\t% Add blocks to cache\n\tadd(Tab, on_top(B1, B0)),\n\tadd(Tab, on_top(B2, B0)),\n\tadd(Tab, on_top(B3, B0)),\n\tB1_1 = B1#block{\n\t\theight = 1,\n\t\tprevious_block = B0#block.indep_hash,\n\t\tprevious_cumulative_diff = B0#block.cumulative_diff },\n\tB2_1 = B2#block{\n\t\theight = 1,\n\t\tprevious_block = B0#block.indep_hash,\n\t\tprevious_cumulative_diff = B0#block.cumulative_diff },\n\tB3_1 = B3#block{\n\t\theight = 1,\n\t\tprevious_block = B0#block.indep_hash,\n\t\tprevious_cumulative_diff = B0#block.cumulative_diff },\n\n\t% Test getting blocks by miner\n\t?assertEqual([B1_1, B3_1], lists:sort(fun(A, B) -> A#block.indep_hash < B#block.indep_hash end, get_blocks_by_miner(Tab, <<\"miner1\">>))),\n\t?assertEqual([B2_1], get_blocks_by_miner(Tab, <<\"miner2\">>)),\n\t?assertEqual([], get_blocks_by_miner(Tab, <<\"miner3\">>)),\n\tets:delete(Tab).\n"
  },
  {
    "path": "apps/arweave/src/ar_block_index.erl",
    "content": "-module(ar_block_index).\n\n-export([init/1, update/2, member/1, get_list/1, get_list_by_hash/1, get_element_by_height/1,\n\t\tget_block_bounds/1, get_block_bounds_with_height/1, get_intersection/2, get_intersection/1, get_range/2, get_last/0]).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Store the given block index in ETS.\ninit(BI) ->\n\tinit(lists:reverse(BI), 0).\n\n%% @doc Insert the new block index elements from BI and remove the N orphaned ones.\nupdate([], 0) ->\n\tok;\nupdate(BI, 0) ->\n\t{_WeaveSize, Height, _H, _TXRoot} = ets:last(block_index),\n\tupdate2(BI, Height + 1);\nupdate(BI, N) ->\n\tets:delete(block_index, ets:last(block_index)),\n\tupdate(BI, N - 1).\n\n%% @doc Return true if the given block hash is found in the index.\nmember(H) ->\n\tmember(H, ets:last(block_index)).\n\n%% @doc Return the list of {H, WeaveSize, TXRoot} triplets up to the given Height (including)\n%% sorted from latest to earliest.\nget_list(Height) ->\n\tget_list([], ets:first(block_index), -1, Height).\n\n%% @doc Return the list of {H, WeaveSize, TXRoot} triplets up to the block with the given\n%% hash H (including) sorted from latest to earliest.\nget_list_by_hash(H) ->\n\tget_list_by_hash([], ets:first(block_index), -1, H).\n\n%% @doc Return the {H, WeaveSize, TXRoot} triplet for the given Height or not_found.\nget_element_by_height(Height) ->\n\tcase catch ets:slot(block_index, Height) of\n\t\t{'EXIT', _} ->\n\t\t\tnot_found;\n\t\t'$end_of_table' ->\n\t\t\tnot_found;\n\t\t[{{WeaveSize, Height, H, TXRoot}}] ->\n\t\t\t{H, WeaveSize, TXRoot}\n\tend.\n\n%% @doc Return {BlockStartOffset, BlockEndOffset, TXRoot} where Offset >= BlockStartOffset,\n%% Offset < BlockEndOffset.\nget_block_bounds(Offset) ->\n\t{BlockStart, BlockEnd, TXRoot, _} = get_block_bounds_with_height(Offset),\n\t{BlockStart, BlockEnd, TXRoot}.\n\n%% @doc Return {BlockStartOffset, BlockEndOffset, TXRoot, Height} where Offset >= BlockStartOffset,\n%% Offset < BlockEndOffset.\nget_block_bounds_with_height(Offset) ->\n\t{WeaveSize, Height, _H, TXRoot} = Key = ets:next(block_index, {Offset, n, n, n}),\n\tcase Height of\n\t\t0 ->\n\t\t\t{0, WeaveSize, TXRoot, 0};\n\t\t_ ->\n\t\t\t{PrevWeaveSize, _, _, _} = ets:prev(block_index, Key),\n\t\t\t{PrevWeaveSize, WeaveSize, TXRoot, Height}\n\tend.\n\n%% @doc Return {Height, {H, WeaveSize, TXRoot}} with the triplet present in both\n%% the cached block index and the given BI or no_intersection.\nget_intersection(Height, _BI) when Height < 0 ->\n\tno_intersection;\nget_intersection(_Height, []) ->\n\tno_intersection;\nget_intersection(Height, BI) ->\n\tReverseBI = lists:reverse(BI),\n\t[{H, _, _} = Elem | ReverseBI2] = ReverseBI,\n\tcase catch ets:slot(block_index, Height) of\n\t\t[{{_, Height, H, _} = Entry}] ->\n\t\t\tget_intersection(Height + 1, Elem, ReverseBI2, ets:next(block_index, Entry));\n\t\t_ ->\n\t\t\tno_intersection\n\tend.\n\n%% @doc Return the {H, WeaveSize, TXRoot} triplet present in both\n%% the cached block index and the given BI or no_intersection.\nget_intersection([]) ->\n\tno_intersection;\nget_intersection(BI) ->\n\t{H, WeaveSize, _TXRoot} = lists:last(BI),\n\tget_intersection2({H, WeaveSize}, tl(lists:reverse(BI)),\n\t\t\tets:next(block_index, {WeaveSize - 1, n, n, n})).\n\n%% @doc Return the list of {H, WeaveSize, TXRoot} for blocks with Height >= Start, =< End,\n%% sorted from the largest height to the 
smallest.\nget_range(Start, End) when Start > End ->\n\t[];\nget_range(Start, End) ->\n\tcase catch ets:slot(block_index, Start) of\n\t\t[{{WeaveSize, _Height, H, TXRoot} = Entry}] ->\n\t\t\tlists:reverse([{H, WeaveSize, TXRoot}\n\t\t\t\t| get_range2(Start + 1, End, ets:next(block_index, Entry))]);\n\t\t_ ->\n\t\t\t{error, invalid_start}\n\tend.\n\n%% @doc Return the last element in the block index.\nget_last() ->\n\tets:last(block_index).\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\ninit([], _Height) ->\n\tok;\ninit([{H, WeaveSize, TXRoot} | BI], Height) ->\n\tets:insert(block_index, {{WeaveSize, Height, H, TXRoot}}),\n\tinit(BI, Height + 1).\n\nupdate2([], _Height) ->\n\tok;\nupdate2([{H, WeaveSize, TXRoot} | BI], Height) ->\n\tets:insert(block_index, {{WeaveSize, Height, H, TXRoot}}),\n\tupdate2(BI, Height + 1).\n\nmember(H, {_, _, H, _}) ->\n\ttrue;\nmember(_H, '$end_of_table') ->\n\tfalse;\nmember(H, Key) ->\n\tmember(H, ets:prev(block_index, Key)).\n\nget_list(BI, '$end_of_table', _Height, _MaxHeight) ->\n\tBI;\nget_list(BI, _Elem, Height, MaxHeight) when Height >= MaxHeight ->\n\tBI;\nget_list(BI, {WeaveSize, NextHeight, H, TXRoot} = Key, Height, MaxHeight)\n\t\twhen NextHeight == Height + 1 ->\n\tget_list([{H, WeaveSize, TXRoot} | BI], ets:next(block_index, Key), Height + 1, MaxHeight);\nget_list(_BI, _Key, _Height, MaxHeight) ->\n\t%% An extremely unlikely race condition should have occurred where some blocks were\n\t%% orphaned right after we passed some of them here, and new blocks have been added\n\t%% right before we reached the end of the table.\n\tget_list(MaxHeight).\n\nget_list_by_hash(BI, '$end_of_table', _Height, _H) ->\n\tBI;\nget_list_by_hash(BI, {WeaveSize, NextHeight, H, TXRoot}, Height, H)\n\t\twhen NextHeight == Height + 1 ->\n\t[{H, WeaveSize, TXRoot} | BI];\nget_list_by_hash(BI, {WeaveSize, NextHeight, H, TXRoot} = Key, Height, H2)\n\t\twhen NextHeight == Height + 1 ->\n\tget_list_by_hash([{H, WeaveSize, TXRoot} | BI], ets:next(block_index, Key), Height + 1,\n\t\t\tH2);\nget_list_by_hash(_BI, _Key, _Height, H) ->\n\t%% An extremely unlikely race condition should have occurred where some blocks were\n\t%% orphaned right after we passed some of them here, and new blocks have been added\n\t%% right before we reached the end of the table.\n\tget_list_by_hash(H).\n\nget_intersection(Height, Entry, _ReverseBI, '$end_of_table') ->\n\t{Height - 1, Entry};\nget_intersection(Height, Entry, [], _Entry) ->\n\t{Height - 1, Entry};\nget_intersection(Height, _Entry, [{H, _, _} = Elem | ReverseBI], {_, Height, H, _} = Entry) ->\n\tget_intersection(Height + 1, Elem, ReverseBI, ets:next(block_index, Entry));\nget_intersection(Height, Entry, _ReverseBI, _TableEntry) ->\n\t{Height - 1, Entry}.\n\nget_intersection2(_, _, '$end_of_table') ->\n\tno_intersection;\nget_intersection2({_, WeaveSize}, _, {WeaveSize2, _, _, _}) when WeaveSize2 > WeaveSize ->\n\tno_intersection;\nget_intersection2({H, WeaveSize}, BI, {WeaveSize, _, H, TXRoot} = Elem) ->\n\tget_intersection3(ets:next(block_index, Elem), BI, {H, WeaveSize, TXRoot});\nget_intersection2({H, WeaveSize}, BI, {WeaveSize, _, _, _} = Elem) ->\n\tget_intersection2({H, WeaveSize}, BI, ets:next(block_index, Elem)).\n\nget_intersection3({WeaveSize, _, H, TXRoot} = Key, [{H, WeaveSize, TXRoot} | BI], _Elem) ->\n\tget_intersection3(ets:next(block_index, Key), BI, {H, WeaveSize, TXRoot});\nget_intersection3(_, _, {H, 
WeaveSize, TXRoot}) ->\n\t{H, WeaveSize, TXRoot}.\n\nget_range2(Start, End, _Elem) when Start > End ->\n\t[];\nget_range2(_Start, _End, '$end_of_table') ->\n\t[];\nget_range2(Start, End, {WeaveSize, _Height, H, TXRoot} = Elem) ->\n\t[{H, WeaveSize, TXRoot} | get_range2(Start + 1, End, ets:next(block_index, Elem))].\n"
  },
  {
    "path": "apps/arweave/src/ar_block_pre_validator.erl",
    "content": "-module(ar_block_pre_validator).\n\n-behaviour(gen_server).\n\n-export([start_link/0, pre_validate/3]).\n\n-export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2]).\n\n-include(\"ar.hrl\").\n-include(\"ar_consensus.hrl\").\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-record(state, {\n\t%% The priority queue storing the validation requests.\n\tpqueue = gb_sets:new(),\n\t%% The total size in bytes of the priority queue.\n\tsize = 0,\n\t%% The map IP => the timestamp of the last block from this IP.\n\tip_timestamps = #{},\n\tthrottle_by_ip_interval,\n\t%% The map SolutionHash => the timestamp of the last block with this solution hash.\n\thash_timestamps = #{},\n\tthrottle_by_solution_interval\n}).\n\n%% The maximum size in bytes the blocks enqueued for pre-validation can occupy.\n-define(MAX_PRE_VALIDATION_QUEUE_SIZE, (200 * ?MiB)).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\n%% @doc Partially validate the received block. The validation consists of multiple\n%% stages. The process is aiming to increase resistance against DDoS attacks.\n%% The first stage is the quickest and performed synchronously when this function\n%% is called. Afterwards, the block is put in a limited-size priority queue.\n%% Bigger-height blocks from better-rated peers have higher priority. Additionally,\n%% the processing is throttled by IP and solution hash.\n%% Returns: ok, invalid, skipped\npre_validate(B, Peer, ReceiveTimestamp) ->\n\t#block{ indep_hash = H } = B,\n\tcase ar_ignore_registry:member(H) of\n\t\ttrue ->\n\t\t\tskipped;\n\t\tfalse ->\n\t\t\tRef = make_ref(),\n\t\t\tar_ignore_registry:add_ref(H, Ref),\n\t\t\terlang:put(ignore_registry_ref, Ref),\n\t\t\tB2 = B#block{ receive_timestamp = ReceiveTimestamp },\n\t\t\tcase pre_validate_is_peer_banned(B2, Peer) of\n\t\t\t\tenqueued ->\n\t\t\t\t\t?LOG_DEBUG([{event, enqueued_block},\n\t\t\t\t\t\t\t{hash, ar_util:encode(H)},\n\t\t\t\t\t\t\t{peer, ar_util:format_peer(Peer)}]),\n\t\t\t\t\tok;\n\t\t\t\tOther ->\n\t\t\t\t\tar_ignore_registry:remove_ref(H, Ref),\n\t\t\t\t\tOther\n\t\t\tend\n\tend.\n\n%%%===================================================================\n%%% gen_server callbacks.\n%%%===================================================================\n\ninit([]) ->\n\tgen_server:cast(?MODULE, pre_validate),\n\tok = ar_events:subscribe(block),\n\t{ok, Config} = arweave_config:get_env(),\n\tThrottleBySolutionInterval = Config#config.block_throttle_by_solution_interval,\n\tThrottleByIPInterval = Config#config.block_throttle_by_ip_interval,\n\t{ok, #state{ throttle_by_ip_interval = ThrottleByIPInterval,\n\t\t\tthrottle_by_solution_interval = ThrottleBySolutionInterval }}.\n\nhandle_cast(pre_validate, #state{ pqueue = Q, size = Size, ip_timestamps = IPTimestamps,\n\t\t\thash_timestamps = HashTimestamps,\n\t\t\tthrottle_by_ip_interval = ThrottleByIPInterval,\n\t\t\tthrottle_by_solution_interval = ThrottleBySolutionInterval } = State) ->\n\tcase gb_sets:is_empty(Q) of\n\t\ttrue ->\n\t\t\tar_util:cast_after(50, ?MODULE, pre_validate),\n\t\t\t{noreply, State};\n\t\tfalse ->\n\t\t\t{{_, {B, PrevB, SolutionResigned, Peer, Ref}}, Q2} = gb_sets:take_largest(Q),\n\t\t\tBlockSize = byte_size(term_to_binary(B)),\t\t\t\t\n\t\t\tSize2 = Size - BlockSize,\n\t\t\tBH = B#block.indep_hash,\n\t\t\tcase 
ar_ignore_registry:permanent_member(BH) of\n\t\t\t\ttrue ->\n\t\t\t\t\t?LOG_DEBUG([{event, indep_hash_already_processed2},\n\t\t\t\t\t\t\t{hash, ar_util:encode(BH)}]),\n\t\t\t\t\tar_ignore_registry:remove_ref(BH, Ref),\n\t\t\t\t\tgen_server:cast(?MODULE, pre_validate),\n\t\t\t\t\t{noreply, State#state{ pqueue = Q2, size = Size2 }};\n\t\t\t\tfalse ->\n\t\t\t\t\tThrottleByIPResult = throttle_by_ip(Peer, IPTimestamps,\n\t\t\t\t\t\t\tThrottleByIPInterval),\n\t\t\t\t\t{IPTimestamps3, HashTimestamps3} =\n\t\t\t\t\t\tcase ThrottleByIPResult of\n\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t?LOG_DEBUG([{event, dropping_block},\n\t\t\t\t\t\t\t\t\t\t{reason, throttle_by_ip},\n\t\t\t\t\t\t\t\t\t\t{hash, ar_util:encode(BH)},\n\t\t\t\t\t\t\t\t\t\t{peer, ar_util:format_peer(Peer)}]),\n\t\t\t\t\t\t\t\tar_ignore_registry:remove_ref(BH, Ref),\n\t\t\t\t\t\t\t\t{IPTimestamps, HashTimestamps};\n\t\t\t\t\t\t\t{true, IPTimestamps2} ->\n\t\t\t\t\t\t\t\tcase throttle_by_solution_hash(B#block.hash, HashTimestamps,\n\t\t\t\t\t\t\t\t\t\tThrottleBySolutionInterval) of\n\t\t\t\t\t\t\t\t\t{true, HashTimestamps2} ->\n\t\t\t\t\t\t\t\t\t\t?LOG_INFO([{event, processing_block},\n\t\t\t\t\t\t\t\t\t\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t\t\t\t\t\t\t\t\t\t{height, B#block.height},\n\t\t\t\t\t\t\t\t\t\t\t\t{step_number, ar_block:vdf_step_number(B)},\n\t\t\t\t\t\t\t\t\t\t\t\t{block, ar_util:encode(BH)},\n\t\t\t\t\t\t\t\t\t\t\t\t{miner_address,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\tar_util:encode(B#block.reward_addr)},\n\t\t\t\t\t\t\t\t\t\t\t\t{previous_block,\n\t\t\t\t\t\t\t\t\t\t\t\t\tar_util:encode(PrevB#block.indep_hash)},\n\t\t\t\t\t\t\t\t\t\t\t\t{solution_hash, ar_util:encode(B#block.hash)},\n\t\t\t\t\t\t\t\t\t\t\t\t{cdiff, B#block.cumulative_diff},\n\t\t\t\t\t\t\t\t\t\t\t\t{prev_cdiff, PrevB#block.cumulative_diff}]),\n\t\t\t\t\t\t\t\t\t\tpre_validate_nonce_limiter_seed_data(B, PrevB,\n\t\t\t\t\t\t\t\t\t\t\t\tSolutionResigned, Peer),\n\t\t\t\t\t\t\t\t\t\tar_ignore_registry:remove_ref(BH, Ref),\n\t\t\t\t\t\t\t\t\t\trecord_block_pre_validation_time(\n\t\t\t\t\t\t\t\t\t\t\t\tB#block.receive_timestamp),\n\t\t\t\t\t\t\t\t\t\t{IPTimestamps2, HashTimestamps2};\n\t\t\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t\t\t?LOG_DEBUG([{event, dropping_block},\n\t\t\t\t\t\t\t\t\t\t\t\t{reason, throttle_by_solution_hash},\n\t\t\t\t\t\t\t\t\t\t\t\t{hash, ar_util:encode(BH)},\n\t\t\t\t\t\t\t\t\t\t\t\t{peer, ar_util:format_peer(Peer)}]),\n\t\t\t\t\t\t\t\t\t\tar_ignore_registry:remove_ref(BH, Ref),\n\t\t\t\t\t\t\t\t\t\t{IPTimestamps2, HashTimestamps}\n\t\t\t\t\t\t\t\tend\n\t\t\t\t\t\tend,\n\t\t\t\t\tgen_server:cast(?MODULE, pre_validate),\n\t\t\t\t\t{noreply, State#state{ pqueue = Q2, size = Size2,\n\t\t\t\t\t\t\tip_timestamps = IPTimestamps3,\n\t\t\t\t\t\t\thash_timestamps = HashTimestamps3 }}\n\t\t\tend\n\tend;\n\nhandle_cast({enqueue, {B, PrevB, SolutionResigned, Peer, Ref}}, State) ->\n\t#state{ pqueue = Q, size = Size } = State,\n\tPriority = priority(B, Peer),\n\tBlockSize = byte_size(term_to_binary(B)),\n\tSize2 = Size + BlockSize,\n\tQ2 = gb_sets:add_element({Priority, {B, PrevB, SolutionResigned, Peer, Ref}}, Q),\n\t{Q3, Size3} =\n\t\tcase Size2 > ?MAX_PRE_VALIDATION_QUEUE_SIZE of\n\t\t\ttrue ->\n\t\t\t\tdrop_tail(Q2, Size2);\n\t\t\tfalse ->\n\t\t\t\t{Q2, Size2}\n\t\tend,\n\t{noreply, State#state{ pqueue = Q3, size = Size3 }};\n\nhandle_cast({may_be_remove_ip_timestamp, IP}, #state{ ip_timestamps = Timestamps,\n\t\tthrottle_by_ip_interval = ThrottleInterval } = State) ->\n\tNow = os:system_time(millisecond),\n\tcase maps:get(IP, Timestamps, 
not_set) of\n\t\tnot_set ->\n\t\t\t{noreply, State};\n\t\tTimestamp when Timestamp < Now - ThrottleInterval ->\n\t\t\t{noreply, State#state{ ip_timestamps = maps:remove(IP, Timestamps) }};\n\t\t_ ->\n\t\t\t{noreply, State}\n\tend;\n\nhandle_cast({may_be_remove_h_timestamp, H}, #state{ hash_timestamps = Timestamps,\n\t\tthrottle_by_solution_interval = ThrottleInterval } = State) ->\n\tNow = os:system_time(millisecond),\n\tcase maps:get(H, Timestamps, not_set) of\n\t\tnot_set ->\n\t\t\t{noreply, State};\n\t\tTimestamp when Timestamp < Now - ThrottleInterval ->\n\t\t\t{noreply, State#state{ hash_timestamps = maps:remove(H, Timestamps) }};\n\t\t_ ->\n\t\t\t{noreply, State}\n\tend;\n\nhandle_cast(Msg, State) ->\n\t?LOG_ERROR([{event, unhandled_cast}, {module, ?MODULE}, {message, Msg}]),\n\t{noreply, State}.\n\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {request, Request}]),\n\t{reply, ok, State}.\n\nhandle_info({event, block, _}, State) ->\n\t{noreply, State};\n\nhandle_info(Info, State) ->\n\t?LOG_ERROR([{event, unhandled_info}, {module, ?MODULE}, {info, Info}]),\n\t{noreply, State}.\n\nterminate(Reason, _State) ->\n\t?LOG_INFO([{module, ?MODULE},{pid, self()},{callback, terminate},{reason, Reason}]),\n\tok.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\npre_validate_is_peer_banned(B, Peer) ->\n\tcase ar_blacklist_middleware:is_peer_banned(Peer) of\n\t\tnot_banned ->\n\t\t\tpre_validate_previous_block(B, Peer);\n\t\tbanned ->\n\t\t\t?LOG_DEBUG([{event, peer_banned},\n\t\t\t\t\t{hash, ar_util:encode(B#block.indep_hash)}]),\n\t\t\tskipped\n\tend.\n\npre_validate_previous_block(B, Peer) ->\n\tPrevH = B#block.previous_block,\n\tcase ar_node:get_block_shadow_from_cache(PrevH) of\n\t\tnot_found ->\n\t\t\t%% We have not seen the previous block yet - might happen if two\n\t\t\t%% successive blocks are distributed at the same time. Do not\n\t\t\t%% ban the peer as the block might be valid. 
If the network adopts\n\t\t\t%% this block, ar_poller will catch up.\n\t\t\t?LOG_DEBUG([{event, previous_block_not_found},\n\t\t\t\t\t{hash, ar_util:encode(B#block.indep_hash)},\n\t\t\t\t\t{prev_hash, ar_util:encode(PrevH)}]),\n\t\t\tskipped;\n\t\t#block{ height = PrevHeight } = PrevB ->\n\t\t\tcase B#block.height == PrevHeight + 1 of\n\t\t\t\tfalse ->\n\t\t\t\t\t?LOG_DEBUG([{event, previous_block_height_mismatch},\n\t\t\t\t\t\t\t{hash, ar_util:encode(B#block.indep_hash)},\n\t\t\t\t\t\t\t{prev_hash, ar_util:encode(PrevH)},\n\t\t\t\t\t\t\t{height, B#block.height},\n\t\t\t\t\t\t\t{prev_height, PrevHeight}]),\n\t\t\t\t\tinvalid;\n\t\t\t\ttrue ->\n\t\t\t\t\ttrue = B#block.height >= ar_fork:height_2_6(),\n\t\t\t\t\tPrevCDiff = B#block.previous_cumulative_diff,\n\t\t\t\t\tcase PrevB#block.cumulative_diff == PrevCDiff of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\tpre_validate_proof_sizes(B, PrevB, Peer);\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t?LOG_DEBUG([{event, previous_block_cumulative_diff_mismatch},\n\t\t\t\t\t\t\t\t\t{hash, ar_util:encode(B#block.indep_hash)},\n\t\t\t\t\t\t\t\t\t{prev_hash, ar_util:encode(PrevH)},\n\t\t\t\t\t\t\t\t\t{cumulative_diff, PrevCDiff},\n\t\t\t\t\t\t\t\t\t{prev_cumulative_diff, PrevB#block.cumulative_diff}]),\n\t\t\t\t\t\t\tinvalid\n\t\t\t\t\tend\n\t\t\tend\n\tend.\n\npre_validate_proof_sizes(B, PrevB, Peer) ->\n\tcase ar_block:validate_proof_size(B#block.poa) andalso\n\t\t\tar_block:validate_proof_size(B#block.poa2) of\n\t\ttrue ->\n\t\t\tmay_be_pre_validate_first_chunk_hash(B, PrevB, Peer);\n\t\tfalse ->\n\t\t\tpost_block_reject_warn(B, check_proof_size, Peer),\n\t\t\tar_events:send(block, {rejected, invalid_proof_size, B#block.indep_hash, Peer}),\n\t\t\tinvalid\n\tend.\n\nmay_be_pre_validate_first_chunk_hash(B, PrevB, Peer) ->\n\tcase crypto:hash(sha256, (B#block.poa)#poa.chunk) == B#block.chunk_hash of\n\t\tfalse ->\n\t\t\tpost_block_reject_warn(B, check_first_chunk, Peer),\n\t\t\tar_events:send(block, {rejected, invalid_first_chunk, B#block.indep_hash, Peer}),\n\t\t\tinvalid;\n\t\ttrue ->\n\t\t\tmay_be_pre_validate_second_chunk_hash(B, PrevB, Peer)\n\tend.\n\nmay_be_pre_validate_second_chunk_hash(#block{ recall_byte2 = undefined } = B, PrevB, Peer) ->\n\tcase B#block.height < ar_fork:height_2_7_2() orelse B#block.poa2 == #poa{} of\n\t\tfalse ->\n\t\t\tpost_block_reject_warn(B, check_second_chunk, Peer),\n\t\t\tar_events:send(block, {rejected, invalid_poa2_recall_byte2_undefined,\n\t\t\t\t\tB#block.indep_hash, Peer}),\n\t\t\tinvalid;\n\t\ttrue ->\n\t\t\t%% The block is not supposed to have the second chunk.\n\t\t\tmay_be_pre_validate_first_unpacked_chunk_hash(B, PrevB, Peer)\n\tend;\nmay_be_pre_validate_second_chunk_hash(B, PrevB, Peer) ->\n\tcase crypto:hash(sha256, (B#block.poa2)#poa.chunk) == B#block.chunk2_hash of\n\t\tfalse ->\n\t\t\tpost_block_reject_warn(B, check_second_chunk, Peer),\n\t\t\tar_events:send(block, {rejected, invalid_second_chunk, B#block.indep_hash, Peer}),\n\t\t\tinvalid;\n\t\ttrue ->\n\t\t\tmay_be_pre_validate_first_unpacked_chunk_hash(B, PrevB, Peer)\n\tend.\n\nmay_be_pre_validate_first_unpacked_chunk_hash(\n\t\t#block{ packing_difficulty = PackingDifficulty } = B, PrevB, Peer)\n\t\t\twhen PackingDifficulty >= 1 ->\n\tPoA = B#block.poa,\n\tcase crypto:hash(sha256, PoA#poa.unpacked_chunk) == B#block.unpacked_chunk_hash\n\t\t\t%% The unpacked chunk is expected to be 0-padded when smaller than\n\t\t\t%% ?DATA_CHUNK_SIZE.\n\t\t\tandalso byte_size(PoA#poa.unpacked_chunk) == ?DATA_CHUNK_SIZE of\n\t\tfalse ->\n\t\t\tpost_block_reject_warn(B, 
check_first_unpacked_chunk, Peer),\n\t\t\tar_events:send(block, {rejected, invalid_first_unpacked_chunk,\n\t\t\t\t\tB#block.indep_hash, Peer}),\n\t\t\tinvalid;\n\t\ttrue ->\n\t\t\tmay_be_pre_validate_second_unpacked_chunk_hash(B, PrevB, Peer)\n\tend;\nmay_be_pre_validate_first_unpacked_chunk_hash(B, PrevB, Peer) ->\n\t#block{ poa = PoA, poa2 = PoA2 } = B,\n\t#block{ unpacked_chunk_hash = UnpackedChunkHash,\n\t\t\tunpacked_chunk2_hash = UnpackedChunk2Hash } = B,\n\tcase {UnpackedChunkHash, UnpackedChunk2Hash} == {undefined, undefined} of\n\t\tfalse ->\n\t\t\tpost_block_reject_warn(B, check_first_unpacked_chunk_hash, Peer),\n\t\t\tar_events:send(block, {rejected, invalid_first_unpacked_chunk_hash,\n\t\t\t\t\tB#block.indep_hash, Peer}),\n\t\t\tinvalid;\n\t\ttrue ->\n\t\t\tpre_validate_indep_hash(B#block{\n\t\t\t\t\tpoa = PoA#poa{ unpacked_chunk = <<>> },\n\t\t\t\t\tpoa2 = PoA2#poa{ unpacked_chunk = <<>> } }, PrevB, Peer)\n\tend.\n\nmay_be_pre_validate_second_unpacked_chunk_hash(\n\t\t#block{ recall_byte2 = RecallByte2 } = B, PrevB, Peer)\n\t\t\twhen RecallByte2 /= undefined ->\n\tPoA2 = B#block.poa2,\n\tcase crypto:hash(sha256, PoA2#poa.unpacked_chunk) == B#block.unpacked_chunk2_hash\n\t\t\t%% The unpacked chunk is expected to be 0-padded when smaller than\n\t\t\t%% ?DATA_CHUNK_SIZE.\n\t\t\tandalso byte_size(PoA2#poa.unpacked_chunk) == ?DATA_CHUNK_SIZE of\n\t\tfalse ->\n\t\t\tpost_block_reject_warn(B, check_second_unpacked_chunk, Peer),\n\t\t\tar_events:send(block, {rejected, invalid_second_unpacked_chunk,\n\t\t\t\t\tB#block.indep_hash, Peer}),\n\t\t\tinvalid;\n\t\ttrue ->\n\t\t\tpre_validate_indep_hash(B, PrevB, Peer)\n\tend;\nmay_be_pre_validate_second_unpacked_chunk_hash(B, PrevB, Peer) ->\n\t#block{ poa2 = PoA2 } = B,\n\tcase B#block.unpacked_chunk2_hash == undefined of\n\t\tfalse ->\n\t\t\tpost_block_reject_warn(B, check_second_unpacked_chunk_hash, Peer),\n\t\t\tar_events:send(block, {rejected, invalid_second_unpacked_chunk_hash,\n\t\t\t\t\tB#block.indep_hash, Peer}),\n\t\t\tinvalid;\n\t\ttrue ->\n\t\t\tpre_validate_indep_hash(B#block{\n\t\t\t\t\tpoa2 = PoA2#poa{ unpacked_chunk = <<>> } }, PrevB, Peer)\n\tend.\n\npre_validate_indep_hash(#block{ indep_hash = H } = B, PrevB, Peer) ->\n\tcase catch compute_hash(B, PrevB#block.cumulative_diff) of\n\t\t{ok, H} ->\n\t\t\tcase ar_ignore_registry:permanent_member(H) of\n\t\t\t\ttrue ->\n\t\t\t\t\t?LOG_DEBUG([{event, indep_hash_already_processed},\n\t\t\t\t\t\t\t{hash, ar_util:encode(H)}]),\n\t\t\t\t\tskipped;\n\t\t\t\tfalse ->\n\t\t\t\t\tpre_validate_timestamp(B, PrevB, Peer)\n\t\t\tend;\n\t\t{error, invalid_signature} ->\n\t\t\tpost_block_reject_warn(B, check_signature, Peer),\n\t\t\tar_events:send(block, {rejected, invalid_signature, B#block.indep_hash, Peer}),\n\t\t\tinvalid;\n\t\t{ok, _DifferentH} ->\n\t\t\tpost_block_reject_warn(B, check_indep_hash, Peer),\n\t\t\tar_events:send(block, {rejected, invalid_hash, B#block.indep_hash, Peer}),\n\t\t\tinvalid\n\tend.\n\npre_validate_timestamp(B, PrevB, Peer) ->\n\t#block{ indep_hash = H } = B,\n\tcase ar_block:verify_timestamp(B, PrevB) of\n\t\ttrue ->\n\t\t\tpre_validate_existing_solution_hash(B, PrevB, Peer);\n\t\tfalse ->\n\t\t\tpost_block_reject_warn(B, check_timestamp, Peer, [{block_time,\n\t\t\t\t\tB#block.timestamp}, {current_time, os:system_time(seconds)}]),\n\t\t\tar_events:send(block, {rejected, invalid_timestamp, H, Peer}),\n\t\t\tinvalid\n\tend.\n\npre_validate_existing_solution_hash(B, PrevB, Peer) ->\n\tHeight = B#block.height,\n\tSolutionH = B#block.hash,\n\t#block{ hash = SolutionH, 
nonce = Nonce, reward_addr = RewardAddr,\n\t\t\thash_preimage = HashPreimage, recall_byte = RecallByte,\n\t\t\tpartition_number = PartitionNumber, recall_byte2 = RecallByte2,\n\t\t\tnonce_limiter_info = #nonce_limiter_info{ output = Output,\n\t\t\t\t\tglobal_step_number = StepNumber, seed = Seed,\n\t\t\t\t\tpartition_upper_bound = UpperBound,\n\t\t\t\t\tlast_step_checkpoints = LastStepCheckpoints },\n\t\t\tchunk_hash = ChunkHash, chunk2_hash = Chunk2Hash,\n\t\t\tunpacked_chunk_hash = UnpackedChunkHash,\n\t\t\tunpacked_chunk2_hash = UnpackedChunk2Hash,\n\t\t\tpacking_difficulty = PackingDifficulty,\n\t\t\treplica_format = ReplicaFormat } = B,\n\tH = B#block.indep_hash,\n\tCDiff = B#block.cumulative_diff,\n\tPrevCDiff = PrevB#block.cumulative_diff,\n\tGetCachedSolution =\n\t\tcase ar_block_cache:get_by_solution_hash(block_cache, SolutionH, H,\n\t\t\t\tCDiff, PrevCDiff) of\n\t\t\tnot_found ->\n\t\t\t\tnot_found;\n\t\t\t#block{ hash = SolutionH, nonce = Nonce,\n\t\t\t\t\treward_addr = RewardAddr, hash_preimage = HashPreimage,\n\t\t\t\t\trecall_byte = RecallByte, partition_number = PartitionNumber,\n\t\t\t\t\tnonce_limiter_info = #nonce_limiter_info{ output = Output,\n\t\t\t\t\t\t\tlast_step_checkpoints = LastStepCheckpoints,\n\t\t\t\t\t\t\tseed = Seed, partition_upper_bound = UpperBound,\n\t\t\t\t\t\t\tglobal_step_number = StepNumber },\n\t\t\t\t\tchunk_hash = ChunkHash, chunk2_hash = Chunk2Hash,\n\t\t\t\t\tunpacked_chunk_hash = UnpackedChunkHash,\n\t\t\t\t\tunpacked_chunk2_hash = UnpackedChunk2Hash,\n\t\t\t\t\tpoa = #poa{ chunk = Chunk }, poa2 = #poa{ chunk = Chunk2 },\n\t\t\t\t\trecall_byte2 = RecallByte2,\n\t\t\t\t\tpacking_difficulty = PackingDifficulty2,\n\t\t\t\t\treplica_format = ReplicaFormat } = CacheB ->\n\t\t\t\tLastStepPrevOutput = get_last_step_prev_output(B),\n\t\t\t\tLastStepPrevOutput2 = get_last_step_prev_output(CacheB),\n\t\t\t\tcase LastStepPrevOutput == LastStepPrevOutput2\n\t\t\t\t\t\tandalso (Height < ar_fork:height_2_9()\n\t\t\t\t\t\t\torelse PackingDifficulty == PackingDifficulty2) of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tB2 = B#block{ poa = (B#block.poa)#poa{ chunk = Chunk },\n\t\t\t\t\t\t\t\tpoa2 = (B#block.poa2)#poa{ chunk = Chunk2 } },\n\t\t\t\t\t\tcase validate_poa_against_cached_poa(B2, CacheB) of\n\t\t\t\t\t\t\t{true, B3} ->\n\t\t\t\t\t\t\t\t{valid, B3};\n\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t{invalid, #{\n\t\t\t\t\t\t\t\t\tcode => check_resigned_solution_hash_poa_mismatch,\n\t\t\t\t\t\t\t\t\tb2 => B2, cache_b => CacheB, prev_b => PrevB }}\n\t\t\t\t\t\tend;\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\t{invalid, #{ code => check_resigned_solution_hash_last_step_prev_output_mismatch,\n\t\t\t\t\t\t\t\t\tpacking_difficulty => PackingDifficulty,\n\t\t\t\t\t\t\t\t\tpacking_difficulty2 => PackingDifficulty2,\n\t\t\t\t\t\t\t\t\tlast_step_prev_output => LastStepPrevOutput,\n\t\t\t\t\t\t\t\t\tlast_step_prev_output2 => LastStepPrevOutput2,\n\t\t\t\t\t\t\t\t\tb => B, cache_b => CacheB, prev_b => PrevB }}\n\t\t\t\tend;\n\t\t\tCacheB2 ->\n\t\t\t\t{invalid, #{ code => check_resigned_solution_hash_block_mismatch,\n\t\t\t\t\t\t\tcache_b => CacheB2, b => B, prev_b => PrevB }}\n\t\tend,\n\tValidatedCachedSolutionDiff =\n\t\tcase GetCachedSolution of\n\t\t\tnot_found ->\n\t\t\t\tnot_found;\n\t\t\t{invalid, ExtraData} ->\n\t\t\t\t{invalid, ExtraData};\n\t\t\t{valid, B4} ->\n\t\t\t\tcase ar_node_utils:block_passes_diff_check(B) of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\t{valid, B4};\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\t{invalid, #{ code => check_resigned_solution_hash_diff_mismatch,\n\t\t\t\t\t\t\t\t\tb => B 
}}\n\t\t\t\tend\n\t\tend,\n\tcase ValidatedCachedSolutionDiff of\n\t\tnot_found ->\n\t\t\tpre_validate_nonce_limiter_global_step_number(B, PrevB, false, Peer);\n\t\t{invalid, ExtraData2} ->\n\t\t\tCode = maps:get(code, ExtraData2, check_resigned_solution_hash),\n\t\t\t{ok, Config} = arweave_config:get_env(),\n\t\t\tcase lists:member(extended_block_validation_trace, Config#config.enable) of\n\t\t\t\ttrue ->\n\t\t\t\t\tpost_block_reject_warn_and_error_dump(B, Code, Peer, ExtraData2);\n\t\t\t\tfalse ->\n\t\t\t\t\tpost_block_reject_warn(B, Code, Peer)\n\t\t\tend,\n\t\t\tar_events:send(block, {rejected, invalid_resigned_solution_hash,\n\t\t\t\t\tB#block.indep_hash, Peer}),\n\t\t\tinvalid;\n\t\t{valid, B5} ->\n\t\t\tpre_validate_nonce_limiter_global_step_number(B5, PrevB, true, Peer)\n\tend.\n\nget_last_step_prev_output(B) ->\n\t#block{ nonce_limiter_info = Info } = B,\n\t#nonce_limiter_info{ steps = Steps, prev_output = PrevOutput } = Info,\n\tcase Steps of\n\t\t[_, PrevStepOutput | _] ->\n\t\t\tPrevStepOutput;\n\t\t_ ->\n\t\t\tPrevOutput\n\tend.\n\nvalidate_poa_against_cached_poa(B, CacheB) ->\n\t#block{ poa_cache = {ArgCache, ChunkID}, poa2_cache = Cache2 } = CacheB,\n\tArgs = erlang:append_element(erlang:insert_element(5, ArgCache, B#block.poa), ChunkID),\n\tcase ar_poa:validate(Args) of\n\t\t{true, ChunkID} ->\n\t\t\tB2 = B#block{ poa_cache = {ArgCache, ChunkID} },\n\t\t\tcase B#block.recall_byte2 of\n\t\t\t\tundefined ->\n\t\t\t\t\t{true, B2};\n\t\t\t\t_ ->\n\t\t\t\t\t{ArgCache2, Chunk2ID} = Cache2,\n\t\t\t\t\tArgs2 = erlang:append_element(\n\t\t\t\t\t\t\terlang:insert_element(5, ArgCache2, B#block.poa2), Chunk2ID),\n\t\t\t\t\tcase ar_poa:validate(Args2) of\n\t\t\t\t\t\t{true, Chunk2ID} ->\n\t\t\t\t\t\t\t{true, B2#block{ poa2_cache = Cache2 }};\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\tfalse\n\t\t\t\t\tend\n\t\t\tend;\n\t\t_ ->\n\t\t\tfalse\n\tend.\n\npre_validate_nonce_limiter_global_step_number(B, PrevB, SolutionResigned, Peer) ->\n\tBlockInfo = B#block.nonce_limiter_info,\n\tStepNumber = ar_block:vdf_step_number(B),\n\tPrevBlockInfo = PrevB#block.nonce_limiter_info,\n\tPrevStepNumber = ar_block:vdf_step_number(PrevB),\n\tCurrentStepNumber =\n\t\tcase ar_nonce_limiter:get_current_step_number(PrevB) of\n\t\t\tnot_found ->\n\t\t\t\t%% Not necessarily computed already, but will be after we\n\t\t\t\t%% validate the previous block's chain.\n\t\t\t\tPrevStepNumber;\n\t\t\tN ->\n\t\t\t\tN\n\t\tend,\n\tIsAhead = ar_nonce_limiter:is_ahead_on_the_timeline(\n\t\t\tBlockInfo, PrevBlockInfo),\n\tMaxDistance = ?NONCE_LIMITER_MAX_CHECKPOINTS_COUNT,\n\tSteps = BlockInfo#nonce_limiter_info.steps,\n\tExpectedStepCount =\n\t\tget_expected_step_count(StepNumber, PrevStepNumber, MaxDistance, Steps),\n\tPrevOutput = BlockInfo#nonce_limiter_info.prev_output,\n\tcase IsAhead andalso StepNumber - CurrentStepNumber =< MaxDistance\n\t\t\tandalso length(Steps) == ExpectedStepCount\n\t\t\tandalso PrevOutput == PrevBlockInfo#nonce_limiter_info.output of\n\t\tfalse ->\n\t\t\tpost_block_reject_warn(B, check_nonce_limiter_step_number, Peer,\n\t\t\t\t\t[{block_step_number, StepNumber},\n\t\t\t\t\t{current_step_number, CurrentStepNumber}]),\n\t\t\tH = B#block.indep_hash,\n\t\t\tar_events:send(block,\n\t\t\t\t\t{rejected, invalid_nonce_limiter_global_step_number, H, Peer}),\n\t\t\tinvalid;\n\t\ttrue ->\n\t\t\tprometheus_gauge:set(block_vdf_advance, StepNumber - CurrentStepNumber),\n\t\t\tpre_validate_previous_solution_hash(B, PrevB, SolutionResigned, Peer)\n\tend.\n\n-ifdef(LOCALNET).\n%% In localnet we allow same-step blocks for 
faster block production. Consequent\n%% blocks on the same steps have the same \"steps\" and \"expected step count\" values.\nget_expected_step_count(StepNumber, PrevStepNumber, _MaxDistance, Steps) ->\n\tcase StepNumber - PrevStepNumber > 0 of\n\t\ttrue ->\n\t\t\tStepNumber - PrevStepNumber;\n\t\tfalse ->\n\t\t\tlength(Steps)\n\tend.\n-else.\nget_expected_step_count(StepNumber, PrevStepNumber, MaxDistance, _Steps) ->\n\tmin(MaxDistance, StepNumber - PrevStepNumber).\n-endif.\n\npre_validate_previous_solution_hash(B, PrevB, SolutionResigned, Peer) ->\n\tcase B#block.previous_solution_hash == PrevB#block.hash of\n\t\tfalse ->\n\t\t\tpost_block_reject_warn_and_error_dump(B, check_previous_solution_hash, Peer),\n\t\t\tar_events:send(block, {rejected, invalid_previous_solution_hash,\n\t\t\t\t\tB#block.indep_hash, Peer}),\n\t\t\tinvalid;\n\t\ttrue ->\n\t\t\tpre_validate_last_retarget(B, PrevB, SolutionResigned, Peer)\n\tend.\n\npre_validate_last_retarget(B, PrevB, SolutionResigned, Peer) ->\n\ttrue = B#block.height >= ar_fork:height_2_6(),\n\tcase ar_block:verify_last_retarget(B, PrevB) of\n\t\ttrue ->\n\t\t\tpre_validate_difficulty(B, PrevB, SolutionResigned, Peer);\n\t\tfalse ->\n\t\t\tpost_block_reject_warn_and_error_dump(B, check_last_retarget, Peer),\n\t\t\tar_events:send(block, {rejected, invalid_last_retarget,\n\t\t\t\t\tB#block.indep_hash, Peer}),\n\t\t\tinvalid\n\tend.\n\npre_validate_difficulty(B, PrevB, SolutionResigned, Peer) ->\n\ttrue = B#block.height >= ar_fork:height_2_6(),\n\tDiffValid = ar_retarget:validate_difficulty(B, PrevB),\n\tcase DiffValid of\n\t\ttrue ->\n\t\t\tpre_validate_cumulative_difficulty(B, PrevB, SolutionResigned, Peer);\n\t\t_ ->\n\t\t\tpost_block_reject_warn_and_error_dump(B, check_difficulty, Peer),\n\t\t\tar_events:send(block, {rejected, invalid_difficulty, B#block.indep_hash, Peer}),\n\t\t\tinvalid\n\tend.\n\npre_validate_cumulative_difficulty(B, PrevB, SolutionResigned, Peer) ->\n\ttrue = B#block.height >= ar_fork:height_2_6(),\n\tcase ar_block:verify_cumulative_diff(B, PrevB) of\n\t\tfalse ->\n\t\t\tpost_block_reject_warn_and_error_dump(B, check_cumulative_difficulty, Peer),\n\t\t\tar_events:send(block, {rejected, invalid_cumulative_difficulty,\n\t\t\t\t\tB#block.indep_hash, Peer}),\n\t\t\tinvalid;\n\t\ttrue ->\n\t\t\tpre_validate_packing_difficulty(B, PrevB, SolutionResigned, Peer)\n\tend.\n\npre_validate_packing_difficulty(B, PrevB, SolutionResigned, Peer) ->\n\tcase ar_block:validate_replica_format(B#block.height, B#block.packing_difficulty,\n\t\t\tB#block.replica_format) of\n\t\tfalse ->\n\t\t\tpost_block_reject_warn_and_error_dump(B, check_packing_difficulty, Peer),\n\t\t\tar_events:send(block, {rejected, invalid_packing_difficulty,\n\t\t\t\t\tB#block.indep_hash, Peer}),\n\t\t\tinvalid;\n\t\ttrue ->\n\t\t\tcase SolutionResigned of\n\t\t\t\ttrue ->\n\t\t\t\t\tRef = erlang:get(ignore_registry_ref),\n\t\t\t\t\tgen_server:cast(?MODULE, {enqueue, {B, PrevB, true, Peer, Ref}}),\n\t\t\t\t\tenqueued;\n\t\t\t\tfalse ->\n\t\t\t\t\tpre_validate_quick_pow(B, PrevB, false, Peer)\n\t\t\tend\n\tend.\n\npre_validate_quick_pow(B, PrevB, SolutionResigned, Peer) ->\n\t#block{ hash_preimage = HashPreimage } = B,\n\tH0 = ar_block:compute_h0(B, PrevB),\n\tSolutionHash = ar_block:compute_solution_h(H0, HashPreimage),\n\tcase ar_node_utils:block_passes_diff_check(SolutionHash, B) of\n\t\tfalse ->\n\t\t\tpost_block_reject_warn_and_error_dump(B, check_hash_preimage, Peer),\n\t\t\tar_events:send(block, {rejected, invalid_hash_preimage,\n\t\t\t\t\tB#block.indep_hash, 
Peer}),\n\t\t\tinvalid;\n\t\ttrue ->\n\t\t\tRef = erlang:get(ignore_registry_ref),\n\t\t\tgen_server:cast(?MODULE, {enqueue, {B, PrevB, SolutionResigned, Peer, Ref}}),\n\t\t\tenqueued\n\tend.\n\npre_validate_nonce_limiter_seed_data(B, PrevB, SolutionResigned, Peer) ->\n\tInfo = B#block.nonce_limiter_info,\n\t#nonce_limiter_info{ global_step_number = StepNumber, seed = Seed,\n\t\t\tnext_seed = NextSeed, partition_upper_bound = PartitionUpperBound,\n\t\t\tvdf_difficulty = VDFDifficulty,\n\t\t\tnext_partition_upper_bound = NextPartitionUpperBound } = Info,\n\tStepNumber = ar_block:vdf_step_number(B),\n\tExpectedSeedData = ar_nonce_limiter:get_seed_data(StepNumber, PrevB),\n\tcase ExpectedSeedData == {Seed, NextSeed, PartitionUpperBound,\n\t\t\tNextPartitionUpperBound, VDFDifficulty} of\n\t\ttrue ->\n\t\t\tpre_validate_partition_number(B, PrevB, PartitionUpperBound,\n\t\t\t\t\tSolutionResigned, Peer);\n\t\tfalse ->\n\t\t\tpost_block_reject_warn_and_error_dump(B, check_nonce_limiter_seed_data, Peer),\n\t\t\tar_events:send(block, {rejected, invalid_nonce_limiter_seed_data,\n\t\t\t\t\tB#block.indep_hash, Peer}),\n\t\t\tinvalid\n\tend.\n\npre_validate_partition_number(B, PrevB, PartitionUpperBound, SolutionResigned, Peer) ->\n\tMax = ar_node:get_max_partition_number(PartitionUpperBound),\n\tcase B#block.partition_number > Max of\n\t\ttrue ->\n\t\t\tpost_block_reject_warn_and_error_dump(B, check_partition_number, Peer),\n\t\t\tar_events:send(block, {rejected, invalid_partition_number, B#block.indep_hash,\n\t\t\t\t\tPeer}),\n\t\t\tinvalid;\n\t\tfalse ->\n\t\t\tpre_validate_nonce(B, PrevB, PartitionUpperBound, SolutionResigned, Peer)\n\tend.\n\npre_validate_nonce(B, PrevB, PartitionUpperBound, SolutionResigned, Peer) ->\n\tMax = ar_block:get_max_nonce(B#block.packing_difficulty),\n\tcase B#block.nonce > Max of\n\t\ttrue ->\n\t\t\tpost_block_reject_warn_and_error_dump(B, check_nonce, Peer),\n\t\t\tar_events:send(block, {rejected, invalid_nonce, B#block.indep_hash, Peer}),\n\t\t\tinvalid;\n\t\tfalse ->\n\t\t\tcase SolutionResigned of\n\t\t\t\ttrue ->\n\t\t\t\t\taccept_block(B, Peer, false);\n\t\t\t\tfalse ->\n\t\t\t\t\tpre_validate_pow_2_6(B, PrevB, PartitionUpperBound, Peer)\n\t\t\tend\n\tend.\n\npre_validate_pow_2_6(B, PrevB, PartitionUpperBound, Peer) ->\n\tH0 = ar_block:compute_h0(B, PrevB),\n\tChunk1 = (B#block.poa)#poa.chunk,\n\t{H1, Preimage1} = ar_block:compute_h1(H0, B#block.nonce, Chunk1),\n\tDiffPair = ar_difficulty:diff_pair(B),\n\tcase H1 == B#block.hash andalso ar_node_utils:h1_passes_diff_check(H1, DiffPair,\n\t\t\t\tB#block.packing_difficulty)\n\t\t\tandalso Preimage1 == B#block.hash_preimage\n\t\t\tandalso B#block.recall_byte2 == undefined\n\t\t\tandalso B#block.chunk2_hash == undefined of\n\t\ttrue ->\n\t\t\tpre_validate_poa(B, PrevB, PartitionUpperBound, H0, H1, Peer);\n\t\tfalse ->\n\t\t\tChunk2 = (B#block.poa2)#poa.chunk,\n\t\t\t{H2, Preimage2} = ar_block:compute_h2(H1, Chunk2, H0),\n\t\t\tcase H2 == B#block.hash andalso ar_node_utils:h2_passes_diff_check(H2, DiffPair,\n\t\t\t\t\t\tB#block.packing_difficulty)\n\t\t\t\t\tandalso Preimage2 == B#block.hash_preimage of\n\t\t\t\ttrue ->\n\t\t\t\t\tpre_validate_poa(B, PrevB, PartitionUpperBound, H0, H1, Peer);\n\t\t\t\tfalse ->\n\t\t\t\t\tpost_block_reject_warn_and_error_dump(B, check_pow, Peer),\n\t\t\t\t\tar_events:send(block, {rejected, invalid_pow, B#block.indep_hash, Peer}),\n\t\t\t\t\tinvalid\n\t\t\tend\n\tend.\n\n-ifdef(LOCALNET).\n%% On localnet we want to freely choose chunks, so we derive the recall range\n%% from the chosen 
chunk (recall_byte) rather than the other way around.\nget_precalculated_recall_range(B) ->\n\tcase B#block.packing_difficulty of\n\t\t0 ->\n\t\t\t{B#block.recall_byte - B#block.nonce * ?DATA_CHUNK_SIZE,\n\t\t\t\tcase B#block.recall_byte2 of\n\t\t\t\t\tundefined ->\n\t\t\t\t\t\tnot_set;\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tB#block.recall_byte2 - B#block.nonce * ?DATA_CHUNK_SIZE\n\t\t\t\tend};\n\t\t_ ->\n\t\t\tChunkNumber = B#block.nonce div ?COMPOSITE_PACKING_SUB_CHUNK_COUNT,\n\t\t\t{B#block.recall_byte - ChunkNumber * ?DATA_CHUNK_SIZE,\n\t\t\t\tcase B#block.recall_byte2 of\n\t\t\t\t\tundefined ->\n\t\t\t\t\t\tnot_set;\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tB#block.recall_byte2 - ChunkNumber * ?DATA_CHUNK_SIZE\n\t\t\t\tend}\n\tend.\n-else.\nget_precalculated_recall_range(_B) ->\n\t{not_set, not_set}.\n-endif.\n\npre_validate_poa(B, PrevB, PartitionUpperBound, H0, H1, Peer) ->\n\t{PrecalculatedRecallRange1, PrecalculatedRecallRange2} = get_precalculated_recall_range(B),\n\t{RecallRange1Start, RecallRange2Start} = ar_block:get_recall_range(H0,\n\t\tB#block.partition_number, PartitionUpperBound,\n\t\tPrecalculatedRecallRange1, PrecalculatedRecallRange2),\n\tRecallByte1 = ar_block:get_recall_byte(RecallRange1Start, B#block.nonce,\n\t\t\tB#block.packing_difficulty),\n\t{BlockStart1, BlockEnd1, TXRoot1} = ar_block_index:get_block_bounds(RecallByte1),\n\tBlockSize1 = BlockEnd1 - BlockStart1,\n\tPackingDifficulty = B#block.packing_difficulty,\n\tNonce = B#block.nonce,\n\t%% The packing difficulty >0 is only allowed after the 2.8 hard fork (validated earlier\n\t%% here), and the composite packing is only possible for packing difficulty >= 1.\n\t%% The new shared entropy format is supported starting from 2.9.\n\tPacking = ar_block:get_packing(PackingDifficulty, B#block.reward_addr,\n\t\t\tB#block.replica_format),\n\tSubChunkIndex = ar_block:get_sub_chunk_index(PackingDifficulty, Nonce),\n\tArgCache = {BlockStart1, RecallByte1, TXRoot1, BlockSize1, Packing, SubChunkIndex},\n\tcase RecallByte1 == B#block.recall_byte andalso\n\t\t\tar_poa:validate({BlockStart1, RecallByte1, TXRoot1, BlockSize1, B#block.poa,\n\t\t\t\t\tPacking, SubChunkIndex, not_set}) of\n\t\terror ->\n\t\t\t?LOG_ERROR([{event, failed_to_validate_proof_of_access},\n\t\t\t\t\t{block, ar_util:encode(B#block.indep_hash)}]),\n\t\t\tinvalid;\n\t\tfalse ->\n\t\t\tpost_block_reject_warn_and_error_dump(B, check_poa, Peer),\n\t\t\tar_events:send(block, {rejected, invalid_poa, B#block.indep_hash, Peer}),\n\t\t\tinvalid;\n\t\t{true, ChunkID} ->\n\t\t\t%% Cache the proof so that in case the miner signs additional blocks\n\t\t\t%% using the same solution, we can re-validate the potentially new\n\t\t\t%% proofs quickly, without re-validating the solution and re-unpacking\n\t\t\t%% the chunk.\n\t\t\tB2 = B#block{ poa_cache = {ArgCache, ChunkID} },\n\t\t\tcase B#block.hash == H1 of\n\t\t\t\ttrue ->\n\t\t\t\t\tpre_validate_nonce_limiter(B2, PrevB, Peer);\n\t\t\t\tfalse ->\n\t\t\t\t\tRecallByte2 = ar_block:get_recall_byte(RecallRange2Start, B#block.nonce,\n\t\t\t\t\t\t\tB#block.packing_difficulty),\n\t\t\t\t\t{BlockStart2, BlockEnd2, TXRoot2} = ar_block_index:get_block_bounds(\n\t\t\t\t\t\t\tRecallByte2),\n\t\t\t\t\tBlockSize2 = BlockEnd2 - BlockStart2,\n\t\t\t\t\tArgCache2 = {BlockStart2, RecallByte2, TXRoot2, BlockSize2, Packing,\n\t\t\t\t\t\t\tSubChunkIndex},\n\t\t\t\t\tcase RecallByte2 == B#block.recall_byte2 andalso\n\t\t\t\t\t\t\tar_poa:validate({BlockStart2, RecallByte2, TXRoot2, BlockSize2,\n\t\t\t\t\t\t\t\t\tB#block.poa2, Packing, SubChunkIndex, not_set}) 
of\n\t\t\t\t\t\terror ->\n\t\t\t\t\t\t\t?LOG_ERROR([{event, failed_to_validate_proof_of_access},\n\t\t\t\t\t\t\t\t\t{block, ar_util:encode(B#block.indep_hash)}]),\n\t\t\t\t\t\t\tinvalid;\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tpost_block_reject_warn_and_error_dump(B, check_poa2, Peer),\n\t\t\t\t\t\t\tar_events:send(block, {rejected, invalid_poa2,\n\t\t\t\t\t\t\t\t\tB#block.indep_hash, Peer}),\n\t\t\t\t\t\t\tinvalid;\n\t\t\t\t\t\t{true, Chunk2ID} ->\n\t\t\t\t\t\t\t%% Cache the proof so that in case the miner signs additional\n\t\t\t\t\t\t\t%% blocks using the same solution, we can re-validate the\n\t\t\t\t\t\t\t%% potentially new proofs quickly, without re-validating the\n\t\t\t\t\t\t\t%% solution and re-unpacking the chunk.\n\t\t\t\t\t\t\tB3 = B2#block{ poa2_cache = {ArgCache2, Chunk2ID} },\n\t\t\t\t\t\t\tpre_validate_nonce_limiter(B3, PrevB, Peer)\n\t\t\t\t\tend\n\t\t\tend\n\tend.\n\npre_validate_nonce_limiter(B, PrevB, Peer) ->\n\tPrevOutput = get_last_step_prev_output(B),\n\tcase ar_nonce_limiter:validate_last_step_checkpoints(B, PrevB, PrevOutput) of\n\t\t{false, cache_mismatch, CachedSteps} ->\n\t\t\tar_ignore_registry:add(B#block.indep_hash),\n\t\t\tpost_block_reject_warn_and_error_dump(B, check_nonce_limiter_cache_mismatch,\n\t\t\t\t\tPeer, #{ prev_b => PrevB, cached_steps => CachedSteps }),\n\t\t\tar_events:send(block, {rejected, invalid_nonce_limiter_cache_mismatch,\n\t\t\t\t\tB#block.indep_hash, Peer}),\n\t\t\tinvalid;\n\t\tfalse ->\n\t\t\tpost_block_reject_warn_and_error_dump(B, check_nonce_limiter, Peer),\n\t\t\tar_events:send(block, {rejected, invalid_nonce_limiter, B#block.indep_hash, Peer}),\n\t\t\tinvalid;\n\t\t{true, cache_match} ->\n\t\t\taccept_block(B, Peer, true);\n\t\ttrue ->\n\t\t\taccept_block(B, Peer, false)\n\tend.\n\naccept_block(B, Peer, Gossip) ->\n\tar_ignore_registry:add(B#block.indep_hash),\n\tar_events:send(block, {new, B, \n\t\t#{ source => {peer, Peer}, gossip => Gossip }}),\n\t?LOG_INFO([{event, accepted_block}, {height, B#block.height},\n\t\t\t{indep_hash, ar_util:encode(B#block.indep_hash)}]),\n\tok.\n\ncompute_hash(B, PrevCDiff) ->\n\ttrue = B#block.height >= ar_fork:height_2_6(),\n\tSignedH = ar_block:generate_signed_hash(B),\n\tcase ar_block:verify_signature(SignedH, PrevCDiff, B) of\n\t\tfalse ->\n\t\t\t{error, invalid_signature};\n\t\ttrue ->\n\t\t\t{ok, ar_block:indep_hash2(SignedH, B#block.signature)}\n\tend.\n\npost_block_reject_warn_and_error_dump(B, Step, Peer) ->\n\tpost_block_reject_warn_and_error_dump(B, Step, Peer, #{}).\n\npost_block_reject_warn_and_error_dump(B, Step, Peer, ExtraData) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tID = binary_to_list(ar_util:encode(crypto:strong_rand_bytes(16))),\n\tFile = filename:join(Config#config.data_dir, \"invalid_block_dump_\" ++ ID),\n\tfile:write_file(File, term_to_binary({B, ExtraData})),\n\tpost_block_reject_warn(B, Step, Peer),\n\t?LOG_WARNING([{event, post_block_rejected},\n\t\t\t{hash, ar_util:encode(B#block.indep_hash)}, {step, Step},\n\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t{error_dump, File}]).\n\npost_block_reject_warn(B, Step, Peer) ->\n\t?LOG_WARNING([{event, post_block_rejected},\n\t\t\t{hash, ar_util:encode(B#block.indep_hash)}, {step, Step},\n\t\t\t{peer, ar_util:format_peer(Peer)}]).\n\npost_block_reject_warn(B, Step, Peer, Params) ->\n\t?LOG_WARNING([{event, post_block_rejected},\n\t\t\t{hash, ar_util:encode(B#block.indep_hash)}, {step, Step},\n\t\t\t{params, Params}, {peer, ar_util:format_peer(Peer)}]).\n\nrecord_block_pre_validation_time(ReceiveTimestamp) ->\n\tTimeMs 
= timer:now_diff(erlang:timestamp(), ReceiveTimestamp) / 1000,\n\tprometheus_histogram:observe(block_pre_validation_time, TimeMs).\n\npriority(B, Peer) ->\n\t{B#block.height, get_peer_score(Peer)}.\n\nget_peer_score(Peer) ->\n\tget_peer_score(Peer, ar_peers:get_peers(lifetime), 0).\n\nget_peer_score(Peer, [Peer | _Peers], N) ->\n\tN;\nget_peer_score(Peer, [_Peer | Peers], N) ->\n\tget_peer_score(Peer, Peers, N - 1);\nget_peer_score(_Peer, [], N) ->\n\tN - rand:uniform(100).\n\ndrop_tail(Q, Size) when Size =< ?MAX_PRE_VALIDATION_QUEUE_SIZE ->\n\t{Q, Size};\ndrop_tail(Q, Size) ->\n\t{{_Priority, {B, _PrevB, _SolutionResigned, _Peer, Ref}}, Q2} = gb_sets:take_smallest(Q),\n\tar_ignore_registry:remove_ref(B#block.indep_hash, Ref),\n\tBlockSize = byte_size(term_to_binary(B)),\n\tdrop_tail(Q2, Size - BlockSize).\n\nthrottle_by_ip(Peer, Timestamps, ThrottleInterval) ->\n\tIP = get_ip(Peer),\n\tNow = os:system_time(millisecond),\n\tar_util:cast_after(ThrottleInterval * 2, ?MODULE, {may_be_remove_ip_timestamp, IP}),\n\tcase maps:get(IP, Timestamps, not_set) of\n\t\tnot_set ->\n\t\t\t{true, maps:put(IP, Now, Timestamps)};\n\t\tTimestamp when Timestamp < Now - ThrottleInterval ->\n\t\t\t{true, maps:put(IP, Now, Timestamps)};\n\t\t_ ->\n\t\t\tfalse\n\tend.\n\nget_ip({A, B, C, D, _Port}) ->\n\t{A, B, C, D}.\n\nthrottle_by_solution_hash(H, Timestamps, ThrottleInterval) ->\n\tNow = os:system_time(millisecond),\n\tar_util:cast_after(ThrottleInterval * 2, ?MODULE, {may_be_remove_h_timestamp, H}),\n\tcase maps:get(H, Timestamps, not_set) of\n\t\tnot_set ->\n\t\t\t{true, maps:put(H, Now, Timestamps)};\n\t\tTimestamp when Timestamp < Now - ThrottleInterval ->\n\t\t\t{true, maps:put(H, Now, Timestamps)};\n\t\t_ ->\n\t\t\tfalse\n\tend.\n"
  },
  {
    "path": "apps/arweave/src/ar_block_pre_validator_sup.erl",
    "content": "-module(ar_block_pre_validator_sup).\n\n-behaviour(supervisor).\n\n-include_lib(\"arweave/include/ar_sup.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-export([start_link/0]).\n-export([init/1]).\n\n%%%===================================================================\n%%% Public API.\n%%%===================================================================\n\nstart_link() ->\n\tsupervisor:start_link({local, ?MODULE}, ?MODULE, []).\n\n%%%===================================================================\n%%% Supervisor callbacks.\n%%%===================================================================\n\ninit([]) ->\n\tChildren = [?CHILD(ar_block_pre_validator, worker)],\n\t{ok, {{one_for_one, 5, 10}, Children}}.\n"
  },
  {
    "path": "apps/arweave/src/ar_block_propagation_worker.erl",
    "content": "-module(ar_block_propagation_worker).\n\n-behaviour(gen_server).\n\n-export([start_link/1]).\n\n-export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n\n-record(state, {}).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nstart_link(Name) ->\n\tgen_server:start_link({local, Name}, ?MODULE, [], []).\n\n%%%===================================================================\n%%% gen_server callbacks.\n%%%===================================================================\n\ninit([]) ->\n\t{ok, #state{}}.\n\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {request, Request}]),\n\t{reply, ok, State}.\n\nhandle_cast({send_block, SendFun, RetryCount, From}, State) ->\n\tcase SendFun() of\n\t\t{ok, {{<<\"412\">>, _}, _, _, _, _}} when RetryCount > 0 ->\n\t\t\tar_util:cast_after(2000, self(),\n\t\t\t\t\t{send_block, SendFun, RetryCount - 1, From}),\n\t\t\t{noreply, State};\n\t\t_ ->\n\t\t\tFrom ! {worker_sent_block, self()},\n\t\t\t{noreply, State}\n\tend;\n\nhandle_cast({send_block2, Peer, SendAnnouncementFun, SendFun, RetryCount, From}, State) ->\n\tcase SendAnnouncementFun() of\n\t\t{ok, {{<<\"412\">>, _}, _, _, _, _}} when RetryCount > 0 ->\n\t\t\tar_util:cast_after(2000, self(),\n\t\t\t\t\t{send_block2, Peer, SendAnnouncementFun, SendFun,\n\t\t\t\t\t\t\tRetryCount - 1, From});\n\t\t{ok, {{<<\"200\">>, _}, _, Body, _, _}} ->\n\t\t\tcase catch ar_serialize:binary_to_block_announcement_response(Body) of\n\t\t\t\t{'EXIT', Reason} ->\n\t\t\t\t\t?LOG_INFO([{event, send_announcement_response}, {peer, ar_util:format_peer(Peer)},\n\t\t\t\t\t\t{exit, Reason}]),\n\t\t\t\t\tar_peers:issue_warning(Peer, block_announcement, Reason),\n\t\t\t\t\tFrom ! {worker_sent_block, self()};\n\t\t\t\t{error, Reason} ->\n\t\t\t\t\t?LOG_INFO([{event, send_announcement_response}, {peer, ar_util:format_peer(Peer)},\n\t\t\t\t\t\t{error, Reason}]),\n\t\t\t\t\tar_peers:issue_warning(Peer, block_announcement, Reason),\n\t\t\t\t\tFrom ! {worker_sent_block, self()};\n\t\t\t\t{ok, #block_announcement_response{ missing_tx_indices = L,\n\t\t\t\t\t\tmissing_chunk = MissingChunk, missing_chunk2 = MissingChunk2 }} ->\n\t\t\t\t\tcase SendFun(MissingChunk, MissingChunk2, L) of\n\t\t\t\t\t\t{ok, {{<<\"418\">>, _}, _, Bin, _, _}} when RetryCount > 0 ->\n\t\t\t\t\t\t\tcase parse_txids(Bin) of\n\t\t\t\t\t\t\t\terror ->\n\t\t\t\t\t\t\t\t\tok;\n\t\t\t\t\t\t\t\t{ok, TXIDs} ->\n\t\t\t\t\t\t\t\t\tSendFun(MissingChunk, MissingChunk2, TXIDs)\n\t\t\t\t\t\t\tend;\n\t\t\t\t\t\t{ok, {{<<\"419\">>, _}, _, _, _, _}} when RetryCount > 0 ->\n\t\t\t\t\t\t\tSendFun(true, true, L);\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\tok\n\t\t\t\t\tend,\n\t\t\t\t\tFrom ! {worker_sent_block, self()}\n\t\t\tend;\n\t\t_ ->\t%% 208 (the peer has already received this block) or\n\t\t\t\t%% an unexpected response.\n\t\t\tFrom ! 
{worker_sent_block, self()}\n\tend,\n\t{noreply, State};\n\nhandle_cast(Msg, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {message, Msg}]),\n\t{noreply, State}.\n\nhandle_info({gun_down, _PID, http, closed, _, _}, State) ->\n\t{noreply, State};\n\nhandle_info(Info, State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {module, ?MODULE}, {info, Info}]),\n\t{noreply, State}.\n\nterminate(Reason, _State) ->\n\t?LOG_INFO([{event, terminate}, {module, ?MODULE}, {reason, Reason}]),\n\tok.\n\n%%%===================================================================\n%%% Internal functions\n%%%===================================================================\n\nparse_txids(<< TXID:32/binary, Rest/binary >>) ->\n\tcase parse_txids(Rest) of\n\t\terror ->\n\t\t\terror;\n\t\t{ok, TXIDs} ->\n\t\t\t{ok, [TXID | TXIDs]}\n\tend;\nparse_txids(<<>>) ->\n\t{ok, []}.\n"
  },
  {
    "path": "apps/arweave/src/ar_block_time_history.erl",
    "content": "-module(ar_block_time_history).\n\n-export([history_length/0, has_history/1, get_history/1, get_history_from_blocks/2, \n\tset_history/2, get_hashes/1, sum_history/1, compute_block_interval/1,\n\tvalidate_hashes/2, hash/1, update_history/2]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n\n-ifdef(AR_TEST).\n\t-define(BLOCK_TIME_HISTORY_BLOCKS, 3).\n-else.\n\t-ifndef(BLOCK_TIME_HISTORY_BLOCKS).\n\t\t-define(BLOCK_TIME_HISTORY_BLOCKS, (30 * 24 * 30)).\n\t-endif.\n-endif.\n\nhistory_length() ->\n\t?BLOCK_TIME_HISTORY_BLOCKS.\n\nhas_history(Height) ->\n\tHeight - history_length() > ar_fork:height_2_7().\n\nget_history(B) ->\n\tlists:sublist(B#block.block_time_history, history_length()).\n\nget_history_from_blocks([], _PrevB) ->\n\t[];\nget_history_from_blocks([B | Blocks], PrevB) ->\n\tcase B#block.height >= ar_fork:height_2_7() of\n\t\tfalse ->\n\t\t\tget_history_from_blocks(Blocks, B);\n\t\ttrue ->\n\t\t\t[{B#block.indep_hash, get_history_element(B, PrevB)}\n\t\t\t\t\t| get_history_from_blocks(Blocks, B)]\n\tend.\n\nset_history([], _History) ->\n\t[];\nset_history(Blocks, []) ->\n\tBlocks;\nset_history([B | Blocks], History) ->\n\t[B#block{ block_time_history = History } | set_history(Blocks, tl(History))].\n\nget_hashes(Blocks) ->\n\tTipB = hd(Blocks),\n\tLen = min(TipB#block.height - ar_fork:height_2_7() + 1, ar_block:get_consensus_window_size()),\n\t[B#block.block_time_history_hash || B <- lists:sublist(Blocks, Len)].\n\nsum_history(B) ->\n\t{IntervalTotal, VDFIntervalTotal, OneChunkCount, TwoChunkCount} =\n\t\tlists:foldl(\n\t\t\tfun({BlockInterval, VDFInterval, ChunkCount}, {Acc1, Acc2, Acc3, Acc4}) ->\n\t\t\t\t{\n\t\t\t\t\tAcc1 + BlockInterval,\n\t\t\t\t\tAcc2 + VDFInterval,\n\t\t\t\t\tcase ChunkCount of\n\t\t\t\t\t\t1 -> Acc3 + 1;\n\t\t\t\t\t\t_ -> Acc3\n\t\t\t\t\tend,\n\t\t\t\t\tcase ChunkCount of\n\t\t\t\t\t\t1 -> Acc4;\n\t\t\t\t\t\t_ -> Acc4 + 1\n\t\t\t\t\tend\n\t\t\t\t}\n\t\t\tend,\n\t\t\t{0, 0, 0, 0},\n\t\t\tget_history(B)\n\t\t),\n\t{IntervalTotal, VDFIntervalTotal, OneChunkCount, TwoChunkCount}.\n\ncompute_block_interval(B) ->\n\tHeight = B#block.height + 1,\n\tcase has_history(Height) of\n\t\ttrue ->\n\t\t\tIntervalTotal =\n\t\t\t\tlists:foldl(\n\t\t\t\t\tfun({BlockInterval, _VDFInterval, _ChunkCount}, Acc) ->\n\t\t\t\t\t\tAcc + BlockInterval\n\t\t\t\t\tend,\n\t\t\t\t\t0,\n\t\t\t\t\tget_history(B)\n\t\t\t\t),\n\t\t\tIntervalTotal div history_length();\n\t\tfalse -> 120\n\tend.\n\nvalidate_hashes(_History, []) ->\n\ttrue;\nvalidate_hashes(History, [H | Hashes]) ->\n\tcase validate_hash(H, History) of\n\t\ttrue ->\n\t\t\tvalidate_hashes(tl(History), Hashes);\n\t\tfalse ->\n\t\t\tfalse\n\tend.\n\nvalidate_hash(H, History) ->\n\tH == hash(History).\n\nhash(History) ->\n\tHistory2 = lists:sublist(History, history_length()),\n\thash(History2, [ar_serialize:encode_int(length(History2), 8)]).\n\nhash([], IOList) ->\n\tcrypto:hash(sha256, iolist_to_binary(IOList));\nhash([{BlockInterval, VDFInterval, ChunkCount} | History], IOList) ->\n\tBlockIntervalBin = ar_serialize:encode_int(BlockInterval, 8),\n\tVDFIntervalBin = ar_serialize:encode_int(VDFInterval, 8),\n\tChunkCountBin = ar_serialize:encode_int(ChunkCount, 8),\n\thash(History, [BlockIntervalBin, VDFIntervalBin, ChunkCountBin | IOList]).\n\nupdate_history(B, PrevB) ->\n\tcase B#block.height >= ar_fork:height_2_7() of\n\t\tfalse ->\n\t\t\tPrevB#block.block_time_history;\n\t\ttrue ->\n\t\t\t[get_history_element(B, PrevB) | PrevB#block.block_time_history]\n\tend.\n\nget_history_element(B, PrevB) ->\n\tBlockInterval = 
max(1, B#block.timestamp - PrevB#block.timestamp),\n\tVDFInterval = ar_block:vdf_step_number(B) - ar_block:vdf_step_number(PrevB),\n\tChunkCount =\n\t\tcase B#block.recall_byte2 of\n\t\t\tundefined ->\n\t\t\t\t1;\n\t\t\t_ ->\n\t\t\t\t2\n\t\tend,\n\t{BlockInterval, VDFInterval, ChunkCount}.\n"
  },
  {
    "path": "apps/arweave/src/ar_bridge.erl",
    "content": "%% This Source Code Form is subject to the terms of the GNU General\n%% Public License, v. 2.0. If a copy of the GPLv2 was not distributed\n%% with this file, You can obtain one at\n%% https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html\n\n%%% @doc The module gossips blocks to peers.\n-module(ar_bridge).\n\n-behaviour(gen_server).\n\n-export([start_link/2, start_gossip/0, stop_gossip/0]).\n\n-export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2]).\n\n-export([block_propagation_parallelization/0]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-record(state, {\n\tblock_propagation_queue = gb_sets:new(),\n\tworkers,\n\tgossip = true\n}).\n\n%%%===================================================================\n%%% API\n%%%===================================================================\nblock_propagation_parallelization() ->\n\t?BLOCK_PROPAGATION_PARALLELIZATION.\n\n%%--------------------------------------------------------------------\n%% @doc\n%% Starts the server\n%%\n%% @spec start_link() -> {ok, Pid} | ignore | {error, Error}\n%% @end\n%%--------------------------------------------------------------------\nstart_link(Name, Workers) ->\n\tgen_server:start_link({local, Name}, ?MODULE, Workers, []).\n\nstart_gossip() ->\n\tgen_server:call(?MODULE, start_gossip).\n\nstop_gossip() ->\n\tgen_server:call(?MODULE, stop_gossip).\n\n%%% gen_server callbacks\n%%%===================================================================\n\n%%--------------------------------------------------------------------\n%% @private\n%% @doc\n%% Initializes the server\n%%\n%% @spec init(Args) -> {ok, State} |\n%%\t\t\t\t\t   {ok, State, Timeout} |\n%%\t\t\t\t\t   ignore |\n%%\t\t\t\t\t   {stop, Reason}\n%% @end\n%%--------------------------------------------------------------------\ninit(Workers) ->\n\tar_events:subscribe(block),\n\tWorkerMap = lists:foldl(fun(W, Acc) -> maps:put(W, free, Acc) end, #{}, Workers),\n\tState = #state{ workers = WorkerMap },\n\t{ok, State}.\n\n%%--------------------------------------------------------------------\n%% @private\n%% @doc\n%% Handling call messages\n%%\n%% @spec handle_call(Request, From, State) ->\n%%\t\t\t\t\t\t\t\t\t {reply, Reply, State} |\n%%\t\t\t\t\t\t\t\t\t {reply, Reply, State, Timeout} |\n%%\t\t\t\t\t\t\t\t\t {noreply, State} |\n%%\t\t\t\t\t\t\t\t\t {noreply, State, Timeout} |\n%%\t\t\t\t\t\t\t\t\t {stop, Reason, Reply, State} |\n%%\t\t\t\t\t\t\t\t\t {stop, Reason, State}\n%% @end\n%%--------------------------------------------------------------------\nhandle_call(start_gossip, _From, State) ->\n\t{reply, ok, State#state{ gossip = true }};\nhandle_call(stop_gossip, _From, State) ->\n\t{reply, ok, State#state{ gossip = false, block_propagation_queue = gb_sets:new() }};\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING(\"unhandled call: ~p\", [Request]),\n\t{reply, ok, State}.\n\n%%--------------------------------------------------------------------\n%% @private\n%% @doc\n%% Handling cast messages\n%%\n%% @spec handle_cast(Msg, State) -> {noreply, State} |\n%%\t\t\t\t\t\t\t\t\t{noreply, State, Timeout} |\n%%\t\t\t\t\t\t\t\t\t{stop, Reason, State}\n%% @end\n%%--------------------------------------------------------------------\n\nhandle_cast({may_be_send_block, W}, State) ->\n\t#state{ workers = Workers, block_propagation_queue = Q } = State,\n\tcase dequeue(Q) of\n\t\tempty ->\n\t\t\t{noreply, State};\n\t\t{{_Priority, Peer, BlockData}, Q2} 
->\n\t\t\tcase maps:get(W, Workers) of\n\t\t\t\tfree ->\n\t\t\t\t\tsend_to_worker(Peer, BlockData, W),\n\t\t\t\t\t{noreply, State#state{ block_propagation_queue = Q2,\n\t\t\t\t\t\t\tworkers = maps:put(W, busy, Workers) }};\n\t\t\t\tbusy ->\n\t\t\t\t\t{noreply, State}\n\t\t\tend\n\tend;\n\nhandle_cast(Msg, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {message, Msg}]),\n\t{noreply, State}.\n\n%%--------------------------------------------------------------------\n%% @private\n%% @doc\n%% Handling all non call/cast messages\n%%\n%% @spec handle_info(Info, State) -> {noreply, State} |\n%%\t\t\t\t\t\t\t\t\t {noreply, State, Timeout} |\n%%\t\t\t\t\t\t\t\t\t {stop, Reason, State}\n%% @end\n%%--------------------------------------------------------------------\nhandle_info({event, block, {new, _B, #{ gossip := false }}}, State) ->\n\t{noreply, State};\nhandle_info({event, block, {new, _B, _}}, State = #state{ gossip = false }) ->\n\t{noreply, State};\nhandle_info({event, block, {new, B, _}}, State) ->\n\t#state{ block_propagation_queue = Q, workers = Workers } = State,\n\tcase ar_block_cache:get(block_cache, B#block.previous_block) of\n\t\tnot_found ->\n\t\t\t%% The cache should have been just pruned and this block is old.\n\t\t\t{noreply, State};\n\t\t_ ->\n\t\t\t{ok, Config} = arweave_config:get_env(),\n\t\t\tTrustedPeers = ar_peers:get_trusted_peers(),\n\t\t\tSpecialPeers = Config#config.block_gossip_peers,\n\t\t\tPeers = ((SpecialPeers ++ ar_peers:get_peers(current)) -- TrustedPeers) ++ TrustedPeers,\n\t\t\tJSON =\n\t\t\t\tcase B#block.height >= ar_fork:height_2_6() of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tnone;\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\tblock_to_json(B)\n\t\t\t\tend,\n\t\t\tQ2 = enqueue_block(Peers, B#block.height, {JSON, B}, Q),\n\t\t\t[gen_server:cast(?MODULE, {may_be_send_block, W}) || W <- maps:keys(Workers)],\n\t\t\t{noreply, State#state{ block_propagation_queue = Q2 }}\n\tend;\n\nhandle_info({event, block, _}, State) ->\n\t{noreply, State};\n\nhandle_info({worker_sent_block, W},\n\t\t#state{ workers = Workers, block_propagation_queue = Q } = State) ->\n\tcase dequeue(Q) of\n\t\tempty ->\n\t\t\t{noreply, State#state{ workers = maps:put(W, free, Workers) }};\n\t\t{{_Priority, Peer, BlockData}, Q2} ->\n\t\t\tsend_to_worker(Peer, BlockData, W),\n\t\t\t{noreply, State#state{ block_propagation_queue = Q2,\n\t\t\t\t\tworkers = maps:put(W, busy, Workers) }}\n\tend;\n\nhandle_info(Info, State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {module, ?MODULE}, {info, Info}]),\n\t{noreply, State}.\n\n%%--------------------------------------------------------------------\n%% @private\n%% @doc\n%% This function is called by a gen_server when it is about to\n%% terminate. It should be the opposite of Module:init/1 and do any\n%% necessary cleaning up. When it returns, the gen_server terminates\n%% with Reason. 
The return value is ignored.\n%%\n%% @spec terminate(Reason, State) -> void()\n%% @end\n%%--------------------------------------------------------------------\nterminate(_Reason, _State) ->\n\t?LOG_INFO([{event, ar_bridge_terminated}, {module, ?MODULE}]),\n\tok.\n\n%%%===================================================================\n%%% Internal functions\n%%%===================================================================\n\nenqueue_block(Peers, Height, BlockData, Q) ->\n\tenqueue_block(Peers, Height, BlockData, Q, 0).\n\nenqueue_block([], _Height, _BlockData, Q, _N) ->\n\tQ;\nenqueue_block([Peer | Peers], Height, BlockData, Q, N) ->\n\tPriority = {N, Height},\n\tenqueue_block(Peers, Height, BlockData,\n\t\t\tgb_sets:add_element({Priority, Peer, BlockData}, Q), N + 1).\n\ndequeue(Q) ->\n\tcase gb_sets:is_empty(Q) of\n\t\ttrue ->\n\t\t\tempty;\n\t\tfalse ->\n\t\t\tgb_sets:take_smallest(Q)\n\tend.\n\nsend_to_worker(Peer, {JSON, B}, W) ->\n\t#block{ height = Height, indep_hash = H, previous_block = PrevH, txs = TXs,\n\t\t\thash = SolutionH } = B,\n\tRelease = ar_peers:get_peer_release(Peer),\n\tFork_2_6 = ar_fork:height_2_6(),\n\tSolutionH2 = case Height >= ar_fork:height_2_6() of true -> SolutionH; _ -> undefined end,\n\tcase Release >= 52 orelse Height >= Fork_2_6 of\n\t\ttrue ->\n\t\t\tSendAnnouncementFun =\n\t\t\t\tfun() ->\n\t\t\t\t\tAnnouncement = #block_announcement{ indep_hash = H,\n\t\t\t\t\t\t\tprevious_block = PrevH,\n\t\t\t\t\t\t\trecall_byte = B#block.recall_byte,\n\t\t\t\t\t\t\trecall_byte2 = B#block.recall_byte2,\n\t\t\t\t\t\t\tsolution_hash = SolutionH2,\n\t\t\t\t\t\t\ttx_prefixes = [ar_node_worker:tx_id_prefix(ID)\n\t\t\t\t\t\t\t\t\t|| #tx{ id = ID } <- TXs] },\n\t\t\t\t\tar_http_iface_client:send_block_announcement(Peer, Announcement)\n\t\t\t\tend,\n\t\t\tSendFun =\n\t\t\t\tfun(MissingChunk, MissingChunk2, MissingTXs) ->\n\t\t\t\t\t%% Some transactions might be absent from our mempool. We still gossip\n\t\t\t\t\t%% this block further and search for the missing transactions afterwards\n\t\t\t\t\t%% (the process is initiated by ar_node_worker). We are gradually moving\n\t\t\t\t\t%% to the new process where blocks are sent over POST /block2 along with\n\t\t\t\t\t%% all the missing transactions specified in the preceding\n\t\t\t\t\t%% POST /block_announcement reply. 
Once the network adopts the new release,\n\t\t\t\t\t%% we will turn off POST /block and remove the missing transactions search\n\t\t\t\t\t%% in ar_node_worker.\n\t\t\t\t\tcase determine_included_transactions(TXs, MissingTXs) of\n\t\t\t\t\t\tmissing ->\n\t\t\t\t\t\t\tcase Height >= ar_fork:height_2_6() of\n\t\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\t\t%% POST /block is not supported after 2.6.\n\t\t\t\t\t\t\t\t\t%% The recipient would have to download this block\n\t\t\t\t\t\t\t\t\t%% along with its transactions via ar_poller (which\n\t\t\t\t\t\t\t\t\t%% we made trustless in the 2.6 release).\n\t\t\t\t\t\t\t\t\tok;\n\t\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t\tsend_and_log(Peer, H, Height, json, JSON, n)\n\t\t\t\t\t\t\tend;\n\t\t\t\t\t\tTXs2 ->\n\t\t\t\t\t\t\tPoA = case MissingChunk of true -> B#block.poa;\n\t\t\t\t\t\t\t\t\tfalse -> (B#block.poa)#poa{ chunk = <<>> } end,\n\t\t\t\t\t\t\tPoA2 = case MissingChunk2 of false ->\n\t\t\t\t\t\t\t\t\t(B#block.poa2)#poa{ chunk = <<>> };\n\t\t\t\t\t\t\t\t\t_ -> B#block.poa2 end,\n\t\t\t\t\t\t\tBin = ar_serialize:block_to_binary(B#block{ txs = TXs2,\n\t\t\t\t\t\t\t\t\tpoa = PoA, poa2 = PoA2 }),\n\t\t\t\t\t\t\tsend_and_log(Peer, H, Height, binary, Bin, B#block.recall_byte)\n\t\t\t\t\tend\n\t\t\t\tend,\n\t\t\tgen_server:cast(W, {send_block2, Peer, SendAnnouncementFun, SendFun, 1, self()});\n\t\tfalse ->\n\t\t\tSendFun = fun() -> send_and_log(Peer, H, Height, json, JSON, n) end,\n\t\t\tgen_server:cast(W, {send_block, SendFun, 1, self()})\n\tend.\n\nsend_and_log(Peer, H, Height, Format, Bin, RecallByte) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tReply =\n\t\tcase Format of\n\t\t\tjson ->\n\t\t\t\tar_http_iface_client:send_block_json(Peer, H, Bin);\n\t\t\tbinary ->\n\t\t\t\tar_http_iface_client:send_block_binary(Peer, H, Bin, RecallByte)\n\t\tend,\n\tcase lists:member(Peer, Config#config.block_gossip_peers) of\n\t\ttrue ->\n\t\t\t?LOG_INFO([{event, sent_block_to_block_gossip_peer},\n\t\t\t\t{format, Format},\n\t\t\t\t{height, Height},\n\t\t\t\t{block, ar_util:encode(H)},\n\t\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t\t{reply, ar_metrics:get_status_class(Reply)}]);\n\t\tfalse ->\n\t\t\tok\n\tend.\n\nblock_to_json(B) ->\n\tBDS = ar_block:generate_block_data_segment(B),\n\t{BlockProps} = ar_serialize:block_to_json_struct(B),\n\tPostProps = [\n\t\t{<<\"new_block\">>, {BlockProps}},\n\t\t%% Add the P2P port field to be backwards compatible with nodes\n\t\t%% running the old version of the P2P port feature.\n\t\t{<<\"port\">>, ?DEFAULT_HTTP_IFACE_PORT},\n\t\t{<<\"block_data_segment\">>, ar_util:encode(BDS)}\n\t],\n\tar_serialize:jsonify({PostProps}).\n\n%% @doc Return the list of transactions to gossip or 'missing'. TXs is a list of possibly\n%% both tx records and transaction identifiers - whatever is found in the gossiped block.\n%% MissingTXs is a list of possibly both 0-based indices and tx identifiers. The items\n%% in the new list are in the same order they occur in TXs. Identifiers are simply placed\n%% as-is in the new list. The tx records might be converted to their identifiers (to avoid\n%% sending the entire transactions to peers who already know them) if either their 0-based\n%% indices or identifiers are found in MissingTXs. Elements in MissingTXs are assumed sorted\n%% in the order of their appearance in TXs. 
Return 'missing' if TXs contains an identifier (\n%% not a tx record) which (or its index) is found in MissingTXs.\ndetermine_included_transactions(TXs, MissingTXs) ->\n\tdetermine_included_transactions(TXs, MissingTXs, [], 0).\n\ndetermine_included_transactions([], _MissingTXs, Included, _N) ->\n\tlists:reverse(Included);\ndetermine_included_transactions([TXIDOrTX | TXs], [], Included, N) ->\n\tdetermine_included_transactions(TXs, [], [tx_id(TXIDOrTX) | Included], N);\ndetermine_included_transactions([TXIDOrTX | TXs], [TXIDOrIndex | MissingTXs], Included, N) ->\n\tTXID = tx_id(TXIDOrTX),\n\tcase TXIDOrIndex == N orelse TXIDOrIndex == TXID of\n\t\ttrue ->\n\t\t\tcase TXID == TXIDOrTX of\n\t\t\t\ttrue ->\n\t\t\t\t\tmissing;\n\t\t\t\tfalse ->\n\t\t\t\t\tdetermine_included_transactions(TXs, MissingTXs, [strip_v2_data(TXIDOrTX)\n\t\t\t\t\t\t\t| Included], N + 1)\n\t\t\tend;\n\t\tfalse ->\n\t\t\tdetermine_included_transactions(TXs, [TXIDOrIndex | MissingTXs], [TXID | Included],\n\t\t\t\t\tN + 1)\n\tend.\n\ntx_id(#tx{ id = TXID }) ->\n\tTXID;\ntx_id(TXID) ->\n\tTXID.\n\nstrip_v2_data(#tx{ format = 2 } = TX) ->\n\tTX#tx{ data = <<>> };\nstrip_v2_data(TX) ->\n\tTX.\n"
  },
  {
    "path": "apps/arweave/src/ar_bridge_sup.erl",
    "content": "-module(ar_bridge_sup).\n\n-behaviour(supervisor).\n\n-export([start_link/0]).\n\n-export([init/1]).\n\n-include_lib(\"arweave/include/ar_sup.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nstart_link() ->\n\tsupervisor:start_link({local, ?MODULE}, ?MODULE, []).\n\n%% ===================================================================\n%% Supervisor callbacks.\n%% ===================================================================\n\ninit([]) ->\n\tChildren = lists:map(\n\t\tfun(Num) ->\n\t\t\tName = list_to_atom(\"ar_block_propagation_worker\" ++ integer_to_list(Num)),\n\t\t\t{Name, {ar_block_propagation_worker, start_link, [Name]}, permanent,\n\t\t\t ?SHUTDOWN_TIMEOUT, worker, [ar_block_propagation_worker]}\n\t\tend,\n\t\tlists:seq(1, ar_bridge:block_propagation_parallelization())\n\t),\n\tWorkers = [element(1, El) || El <- Children],\n\tChildren2 = [?CHILD_WITH_ARGS(ar_bridge, worker, ar_bridge, [ar_bridge, Workers]) | Children],\n\t{ok, {{one_for_one, 5, 10}, Children2}}.\n"
  },
  {
    "path": "apps/arweave/src/ar_chain_stats.erl",
    "content": "-module(ar_chain_stats).\n\n-behaviour(gen_server).\n\n-include(\"ar.hrl\").\n-include(\"ar_chain_stats.hrl\").\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-export([log_fork/2, log_fork/3, get_forks/1]).\n\n-export([start_link/0, init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2]).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\n\nlog_fork(Orphans, ForkRootB) ->\n\tlog_fork(Orphans, ForkRootB, os:system_time(millisecond)).\n\nlog_fork([], _ForkRootB, _ForkTime) ->\n\t%% No fork to log\n\tok;\nlog_fork(Orphans, ForkRootB, ForkTime) ->\n\tgen_server:cast(?MODULE, {log_fork, Orphans, ForkRootB, ForkTime}).\n\t\n\n%% @doc Returns all forks that have been logged since the given start time\n%% (system time in seconds)\nget_forks(StartTime) ->\n\tcase catch gen_server:call(?MODULE, {get_forks, StartTime}) of\n\t\t{'EXIT', {timeout, {gen_server, call, _}}} ->\n\t\t\t{error, timeout};\n\t\tReply ->\n\t\t\tReply\n\tend.\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\ninit([]) ->\n\t%% Trap exit to avoid corrupting any open files on quit..\n\tprocess_flag(trap_exit, true),\n\t{ok, Config} = arweave_config:get_env(),\n\tok = ar_kv:open(#{\n\t\tpath => filename:join([Config#config.data_dir, ?ROCKS_DB_DIR, \"forks_db\"]),\n\t\tname => forks_db}),\n\t{ok, #{}}.\n\nhandle_call({get_forks, StartTime}, _From, State) ->\n\t{ok, ForksMap} = ar_kv:get_range(forks_db, <<(StartTime * 1000):64>>),\n\t%% Sort forks by their key (the timestamp when they were detected) - sorts in\n\t%% chronological / ascending order (i.e. 
first element of the list is the oldest fork)\n\tSortedForks = lists:sort(maps:to_list(ForksMap)),\n\tForks = [binary_to_term(Fork, [safe]) || {_Timestamp, Fork} <- SortedForks],\n\t{reply, Forks, State};\nhandle_call(_Request, _From, State) ->\n\t{reply, ok, State}.\n\nhandle_cast({log_fork, Orphans, ForkRootB, ForkTime}, State) ->\n\tdo_log_fork(Orphans, ForkRootB, ForkTime),\n\t{noreply, State};\nhandle_cast(_Msg, State) ->\n\t{noreply, State}.\n\nhandle_info(_Info, State) ->\n\t{noreply, State}.\n\nterminate(Reason, _state) ->\n\t?LOG_INFO([{module, ?MODULE}, {pid, self()}, {callback, terminate},{reason, Reason}]),\n\tok.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\ndo_log_fork(Orphans, ForkRootB, ForkTime) ->\n\tFork = create_fork(Orphans, ForkRootB, ForkTime),\n\tar_kv:put(forks_db, <<ForkTime:64>>, term_to_binary(Fork)),\n\trecord_fork_depth(Orphans, ForkRootB),\n\tok.\n\ncreate_fork(Orphans, ForkRootB, ForkTime) ->\n\tForkID = crypto:hash(sha256, list_to_binary(Orphans)),\n\t#fork{\n\t\tid = ForkID,\n\t\theight = ForkRootB#block.height + 1,\n\t\ttimestamp = ForkTime,\n\t\tblock_ids = Orphans\n\t}.\n\nrecord_fork_depth(Orphans, ForkRootB) ->\n\trecord_fork_depth(Orphans, ForkRootB, 0).\n\nrecord_fork_depth([], _ForkRootB, 0) ->\n\tok;\nrecord_fork_depth([], _ForkRootB, N) ->\n\tok;\nrecord_fork_depth([H | Orphans], ForkRootB, N) ->\n\tSolutionHashInfo =\n\t\tcase ar_block_cache:get(block_cache, H) of\n\t\t\tnot_found ->\n\t\t\t\t%% Should never happen, by construction.\n\t\t\t\t?LOG_ERROR([{event, block_not_found_in_cache}, {h, ar_util:encode(H)}]),\n\t\t\t\t[];\n\t\t\t#block{ hash = SolutionH } ->\n\t\t\t\t[{solution_hash, ar_util:encode(SolutionH)}]\n\t\tend,\n\tLogInfo = [\n\t\t{event, orphaning_block}, {block, ar_util:encode(H)}, {depth, N},\n\t\t{fork_root, ar_util:encode(ForkRootB#block.indep_hash)},\n\t\t{fork_height, ForkRootB#block.height + 1} | SolutionHashInfo],\n\t?LOG_INFO(LogInfo),\n\trecord_fork_depth(Orphans, ForkRootB, N + 1).\n\n%%%===================================================================\n%%% Tests.\n%%%===================================================================\nforks_test_() ->\n\t[\n\t\t{timeout, 30, fun test_forks/0}\n\t].\n\ntest_forks() ->\n\tclear_forks_db(),\n\tStartTimeSeconds = 60,\n\tForkRootB1 = #block{ indep_hash = <<\"1\">>, height = 1 },\n\tForkRootB2= #block{ indep_hash = <<\"2\">>, height = 2 },\n\n\tOrphans1 = [<<\"a\">>],\n\tTime1 = (StartTimeSeconds * 1000) + 5,\n\tlog_fork(Orphans1, ForkRootB1, Time1),\n\tExpectedFork1 = #fork{\n\t\tid = crypto:hash(sha256, list_to_binary(Orphans1)),\n\t\theight = 2,\n\t\tblock_ids = Orphans1,\n\t\ttimestamp = Time1\n\t},\n\tassert_forks_equal([ExpectedFork1], get_forks(StartTimeSeconds)),\n\n\tOrphans2 = [<<\"b\">>, <<\"c\">>],\n\tTime2 = (StartTimeSeconds * 1000) + 10,\n\tlog_fork(Orphans2, ForkRootB1, Time2),\n\tExpectedFork2 = #fork{\n\t\tid = crypto:hash(sha256, list_to_binary(Orphans2)),\n\t\theight = 2,\n\t\tblock_ids = Orphans2,\n\t\ttimestamp = Time2\n\t},\n\tassert_forks_equal([ExpectedFork1, ExpectedFork2], get_forks(StartTimeSeconds)),\n\n\tOrphans3 = [<<\"b\">>, <<\"c\">>, <<\"d\">>],\n\tTime3 = (StartTimeSeconds * 1000) + 15,\n\tlog_fork(Orphans3, ForkRootB1, Time3),\n\tExpectedFork3 = #fork{\n\t\tid = crypto:hash(sha256, list_to_binary(Orphans3)),\n\t\theight = 2,\n\t\tblock_ids = Orphans3,\n\t\ttimestamp = 
Time3\n\t},\n\tassert_forks_equal(\n\t\t[ExpectedFork1, ExpectedFork2, ExpectedFork3],\n\t\tget_forks(StartTimeSeconds)),\n\n\tOrphans4 = [<<\"e\">>, <<\"f\">>, <<\"g\">>],\n\tTime4 = (StartTimeSeconds * 1000) + 1000,\n\tlog_fork(Orphans4, ForkRootB2, Time4),\n\tExpectedFork4 = #fork{\n\t\tid = crypto:hash(sha256, list_to_binary(Orphans4)),\n\t\theight = 3,\n\t\tblock_ids = Orphans4,\n\t\ttimestamp = Time4\n\t},\n\tassert_forks_equal(\n\t\t[ExpectedFork1, ExpectedFork2, ExpectedFork3, ExpectedFork4],\n\t\tget_forks(StartTimeSeconds)),\n\n\t%% Same fork seen again - not sure this is possible, but since we're just tracking\n\t%% forks based on when they occur, it should be handled.\n\tTime5 = (StartTimeSeconds * 1000) + 1005,\n\tlog_fork(Orphans3, ForkRootB1, Time5),\n\tExpectedFork5 = ExpectedFork3#fork{timestamp = Time5},\n\tassert_forks_equal(\n\t\t[ExpectedFork1, ExpectedFork2, ExpectedFork3, ExpectedFork4, ExpectedFork5],\n\t\tget_forks(StartTimeSeconds)),\n\n\t%% If the fork is empty, ignore it.\n\tTime6 = (StartTimeSeconds * 1000) + 1010,\n\tlog_fork([], ForkRootB2, Time6),\n\tassert_forks_equal(\n\t\t[ExpectedFork1, ExpectedFork2, ExpectedFork3, ExpectedFork4, ExpectedFork5],\n\t\tget_forks(StartTimeSeconds)),\n\n\t%% Check that the cutoff time is handled correctly\n\tassert_forks_equal(\n\t\t[ExpectedFork4, ExpectedFork5],\n\t\tget_forks(StartTimeSeconds+1)),\n\tok.\n\nassert_forks_equal(ExpectedForks, ActualForks) ->\n\t?assertEqual(ExpectedForks, ActualForks).\n\nclear_forks_db() ->\n\tTime = os:system_time(millisecond),\n\tar_kv:delete_range(forks_db, integer_to_binary(0), integer_to_binary(Time)).\n"
  },
  {
    "path": "apps/arweave/src/ar_chunk_copy.erl",
    "content": "%%% @doc The module maintains a queue of processes fetching data from the network\n%%% and from the local storage modules.\n-module(ar_chunk_copy).\n\n-behaviour(gen_server).\n\n-export([start_link/1, register_workers/0, read_range/4]).\n\n-export([init/1, handle_cast/2, handle_call/3, handle_info/2, terminate/2]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave/include/ar_sup.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-define(READ_RANGE_CHUNKS, 400).\n-define(MAX_ACTIVE_TASKS, 10).\n-define(MAX_QUEUED_TASKS, 50).\n\n-record(worker_tasks, {\n\tworker,\n\ttask_queue = queue:new(),\n\tactive_count = 0\n}).\n\n-record(state, {\n\tworkers = #{}\n}).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Start the server.\nstart_link(WorkerMap) ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, WorkerMap, []).\n\nregister_workers() ->\n\t{Workers, WorkerMap} = register_read_workers(),\n\tChunkCopy = ?CHILD_WITH_ARGS(ar_chunk_copy, worker, ar_chunk_copy, [WorkerMap]),\n\tWorkers ++ [ChunkCopy].\n\nregister_read_workers() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tStoreIDs = [\n\t\tar_storage_module:id(StorageModule) || StorageModule <- Config#config.storage_modules\n\t] ++ [?DEFAULT_MODULE],\n\t{Workers, WorkerMap} = \n\t\tlists:foldl(\n\t\t\tfun(StoreID, {AccWorkers, AccWorkerMap}) ->\n\t\t\t\tLabel = ar_storage_module:label(StoreID),\n\t\t\t\tName = list_to_atom(\"ar_data_sync_worker_\" ++ Label),\n\n\t\t\t\tWorker = ?CHILD_WITH_ARGS(ar_data_sync_worker, worker, Name, [Name, read]),\n\n\t\t\t\t{[ Worker | AccWorkers], AccWorkerMap#{StoreID => Name}}\n\t\t\tend,\n\t\t\t{[], #{}},\n\t\t\tStoreIDs\n\t\t),\n\t{Workers, WorkerMap}.\n\n%% @doc Returns true if we can accept new tasks. Will always return false if syncing is\n%% disabled (i.e. 
sync_jobs = 0).\nready_for_work(StoreID) ->\n\ttry\n\t\tgen_server:call(?MODULE, {ready_for_work, StoreID}, 1000)\n\tcatch\n\t\texit:{timeout,_} ->\n\t\t\tfalse\n\tend.\n\nread_range(Start, End, OriginStoreID, TargetStoreID) ->\n\tcase ready_for_work(OriginStoreID) of\n\t\ttrue ->\n\t\t\tArgs = {Start, End, OriginStoreID, TargetStoreID},\n\t\t\tgen_server:cast(?MODULE, {read_range, Args}),\n\t\t\ttrue;\n\t\tfalse ->\n\t\t\tfalse\n\tend.\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\n\ninit(WorkerMap) ->\n\t?LOG_DEBUG([{event, init}, {module, ?MODULE}, {worker_map, WorkerMap}]),\n\tWorkers = maps:fold(\n\t\tfun(StoreID, Name, Acc) ->\n\t\t\tAcc#{StoreID => #worker_tasks{worker = Name}}\n\t\tend,\n\t\t#{},\n\t\tWorkerMap\n\t),\n\tar_util:cast_after(1000, self(), process_queues),\n\t{ok, #state{\n\t\tworkers = Workers\n\t}}.\n\nhandle_call({ready_for_work, StoreID}, _From, State) ->\n\t{reply, do_ready_for_work(StoreID, State), State};\n\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {request, Request}]),\n\t{reply, ok, State}.\n\nhandle_cast({read_range, Args}, State) ->\n\t{noreply, enqueue_read_range(Args, State)};\n\nhandle_cast(process_queues, State) ->\n\tar_util:cast_after(1000, self(), process_queues),\n\t{noreply, process_queues(State)};\n\nhandle_cast({task_completed, {read_range, {Worker, _, Args}}}, State) ->\n\t{noreply, task_completed(Args, State)};\n\nhandle_cast(Cast, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {cast, Cast}]),\n\t{noreply, State}.\n\nhandle_info(Message, State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {module, ?MODULE}, {message, Message}]),\n\t{noreply, State}.\n\nterminate(Reason, _State) ->\n\t?LOG_DEBUG([{event, terminate}, {module, ?MODULE}, {reason, io_lib:format(\"~p\", [Reason])}]),\n\tok.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\ndo_ready_for_work(StoreID, State) ->\n\tWorker = maps:get(StoreID, State#state.workers, undefined),\n\tcase Worker of\n\t\tundefined ->\n\t\t\t?LOG_ERROR([{event, worker_not_found}, {module, ?MODULE}, {call, ready_for_work},\n\t\t\t\t{store_id, StoreID}]),\n\t\t\tfalse;\n\t\t_ ->\n\t\t\tqueue:len(Worker#worker_tasks.task_queue) < ?MAX_QUEUED_TASKS\n\tend.\n\nenqueue_read_range(Args, State) ->\n\t{_Start, _End, OriginStoreID, _TargetStoreID} = Args,\n\tWorker = maps:get(OriginStoreID, State#state.workers, undefined),\n\tcase Worker of\n\t\tundefined ->\n\t\t\t?LOG_ERROR([{event, worker_not_found}, {module, ?MODULE},\n\t\t\t\t{call, enqueue_read_range}, {store_id, OriginStoreID}]),\n\t\t\tState;\n\t\t_ ->\n\t\t\tWorker2 = do_enqueue_read_range(Args, Worker),\n\t\t\tState#state{\n\t\t\t\tworkers = maps:put(OriginStoreID, Worker2, State#state.workers)\n\t\t\t}\n\tend.\n\ndo_enqueue_read_range(Args, Worker) ->\n\t{Start, End, OriginStoreID, TargetStoreID} = Args,\n\tEnd2 = min(Start + (?READ_RANGE_CHUNKS * ?DATA_CHUNK_SIZE), End),\n\tArgs2 = {Start, End2, OriginStoreID, TargetStoreID},\n\tTaskQueue = queue:in(Args2, Worker#worker_tasks.task_queue),\n\tWorker2 = Worker#worker_tasks{task_queue = TaskQueue},\n\tcase End2 == End of\n\t\ttrue ->\n\t\t\tWorker2;\n\t\tfalse ->\n\t\t\tArgs3 = {End2, End, OriginStoreID, TargetStoreID},\n\t\t\tdo_enqueue_read_range(Args3, 
Worker2)\n\tend.\n\nprocess_queues(State) ->\n\tWorkers = State#state.workers,\n\tUpdatedWorkers = maps:map(\n\t\tfun(_Key, Worker) ->\n\t\t\tprocess_queue(Worker)\n\t\tend,\n\t\tWorkers\n\t),\n\tState#state{workers = UpdatedWorkers}.\n\nprocess_queue(Worker) ->\n\tcase Worker#worker_tasks.active_count < ?MAX_ACTIVE_TASKS of\n\t\ttrue ->\n\t\t\tcase queue:out(Worker#worker_tasks.task_queue) of\n\t\t\t\t{empty, _} ->\n\t\t\t\t\tWorker;\n\t\t\t\t{{value, Args}, Q2}->\n\t\t\t\t\tgen_server:cast(Worker#worker_tasks.worker, {read_range, Args}),\n\t\t\t\t\tWorker2 = Worker#worker_tasks{\n\t\t\t\t\t\ttask_queue = Q2,\n\t\t\t\t\t\tactive_count = Worker#worker_tasks.active_count + 1\n\t\t\t\t\t},\n\t\t\t\t\tprocess_queue(Worker2)\n\t\t\tend;\n\t\tfalse ->\n\t\t\tWorker\n\tend.\n\ntask_completed(Args, State) ->\n\t{_Start, _End, OriginStoreID, _TargetStoreID} = Args,\n\tWorker = maps:get(OriginStoreID, State#state.workers, undefined),\n\tcase Worker of\n\t\tundefined ->\n\t\t\t?LOG_ERROR([{event, worker_not_found}, {module, ?MODULE}, {call, task_completed},\n\t\t\t\t{store_id, OriginStoreID}]),\n\t\t\tState;\n\t\t_ ->\n\t\t\tActiveCount = Worker#worker_tasks.active_count - 1,\n\t\t\tWorker2 = Worker#worker_tasks{active_count = ActiveCount},\n\t\t\tWorker3 = process_queue(Worker2),\n\t\t\tState2 = State#state{\n\t\t\t\tworkers = maps:put(OriginStoreID, Worker3, State#state.workers)\n\t\t\t},\n\t\t\tState2\n\tend.\n\n%%%===================================================================\n%%% Tests. Included in the module so they can reference private\n%%% functions.\n%%%===================================================================\n\nhelpers_test_() ->\n\t[\n\t\t{timeout, 30, fun test_ready_for_work/0},\n\t\t{timeout, 30, fun test_enqueue_read_range/0},\n\t\t{timeout, 30, fun test_process_queue/0},\n\t\t{timeout, 30, fun test_register_workers/0}\n\t].\n\ntest_ready_for_work() ->\n\tState = #state{\n\t\tworkers = #{\n\t\t\t\"store1\" => #worker_tasks{\n\t\t\t\ttask_queue = queue:from_list(lists:seq(1, ?MAX_QUEUED_TASKS - 1))},\n\t\t\t\"store2\" => #worker_tasks{\n\t\t\t\ttask_queue = queue:from_list(lists:seq(1, ?MAX_QUEUED_TASKS))}\n\t\t}\n\t},\n\t?assertEqual(true, do_ready_for_work(\"store1\", State)),\n\t?assertEqual(false, do_ready_for_work(\"store2\", State)).\n\ntest_enqueue_read_range() ->\n\tExpectedWorker = #worker_tasks{\n\t\ttask_queue = queue:from_list(\n\t\t\t\t\t[{\n\t\t\t\t\t\tfloor(2.5 * ?DATA_CHUNK_SIZE),\n\t\t\t\t\t\tfloor((2.5 + ?READ_RANGE_CHUNKS) * ?DATA_CHUNK_SIZE),\n\t\t\t\t\t\t\"store1\", \"store2\"\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\tfloor((2.5 + ?READ_RANGE_CHUNKS) * ?DATA_CHUNK_SIZE),\n\t\t\t\t\t\tfloor((2.5 + 2 * ?READ_RANGE_CHUNKS) * ?DATA_CHUNK_SIZE),\n\t\t\t\t\t\t\"store1\", \"store2\"\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\tfloor((2.5 + 2 * ?READ_RANGE_CHUNKS) * ?DATA_CHUNK_SIZE),\n\t\t\t\t\t\tfloor((2.5 + 3 * ?READ_RANGE_CHUNKS) * ?DATA_CHUNK_SIZE),\n\t\t\t\t\t\t\"store1\", \"store2\"\n\t\t\t\t\t}]\n\t\t\t\t)\n\t\t\t},\n\tWorker = do_enqueue_read_range(\n\t\t{\n\t\t\tfloor(2.5 * ?DATA_CHUNK_SIZE),\n\t\t\tfloor((2.5 + 3 * ?READ_RANGE_CHUNKS) * ?DATA_CHUNK_SIZE),\n\t\t\t\"store1\", \"store2\"\n\t\t},\n\t\t#worker_tasks{task_queue = queue:new()}\n\t),\n\t?assertEqual(\n\t\tqueue:to_list(ExpectedWorker#worker_tasks.task_queue),\n\t\tqueue:to_list(Worker#worker_tasks.task_queue)).\n\ntest_process_queue() ->\n\tWorker1 = #worker_tasks{\n\t\tactive_count = ?MAX_ACTIVE_TASKS\n\t},\n\t?assertEqual(Worker1, process_queue(Worker1)),\n\n\tWorker2 = 
#worker_tasks{\n\t\tactive_count = ?MAX_ACTIVE_TASKS + 1\n\t},\n\t?assertEqual(Worker2, process_queue(Worker2)),\n\n\tWorker3 = process_queue(\n\t\t#worker_tasks{\n\t\t\tactive_count = ?MAX_ACTIVE_TASKS - 2,\n\t\t\ttask_queue = queue:from_list(\n\t\t\t\t[{floor(2.5 * ?DATA_CHUNK_SIZE), floor(12.5 * ?DATA_CHUNK_SIZE),\n\t\t\t\t\"store1\", \"store2\"},\n\t\t\t{floor(12.5 * ?DATA_CHUNK_SIZE), floor(22.5 * ?DATA_CHUNK_SIZE),\n\t\t\t\t\"store1\", \"store2\"},\n\t\t\t{floor(22.5 * ?DATA_CHUNK_SIZE), floor(30 * ?DATA_CHUNK_SIZE),\n\t\t\t\t\"store1\", \"store2\"}])\n\t\t}\n\t),\n\tExpectedWorker3 = #worker_tasks{\n\t\tactive_count = ?MAX_ACTIVE_TASKS,\n\t\ttask_queue = queue:from_list(\n\t\t\t[{floor(22.5 * ?DATA_CHUNK_SIZE), floor(30 * ?DATA_CHUNK_SIZE),\n\t\t\t\t\"store1\", \"store2\"}]\n\t\t)\n\t},\n\t?assertEqual(\n\t\tExpectedWorker3#worker_tasks.active_count, Worker3#worker_tasks.active_count),\n\t?assertEqual(\n\t\tqueue:to_list(ExpectedWorker3#worker_tasks.task_queue),\n\t\tqueue:to_list(Worker3#worker_tasks.task_queue)).\n\ntest_register_workers() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tStoreIDs = [\n\t\tar_storage_module:id(StorageModule) || StorageModule <- Config#config.storage_modules],\n\tlists:foreach(\n\t\tfun(StoreID) ->\n\t\t\t?assertEqual(true, ready_for_work(StoreID))\n\t\tend,\n\t\tStoreIDs ++ [?DEFAULT_MODULE]\n\t).\n"
  },
  {
    "path": "apps/arweave/src/ar_chunk_storage.erl",
    "content": "%% The blob storage optimized for fast reads.\n-module(ar_chunk_storage).\n\n-behaviour(gen_server).\n\n-export([start_link/2, name/1, register_workers/0, is_storage_supported/3, put/4,\n\t\topen_files/1, get/2, get/3, locate_chunk_on_disk/2,\n\t\tget_range/2, get_range/3, cut/2, delete/1, delete/2,\n\t\tset_entropy_complete/1,\n\t\tget_filepath/2, get_handle_by_filepath/1, close_file/2, close_files/1, \n\t\tlist_files/2, run_defragmentation/0, get_position_and_relative_chunk_offset/2,\n\t\tget_storage_module_path/2, get_chunk_storage_path/2,\n\t\tget_chunk_bucket_start/1, get_chunk_bucket_end/1, \n\t\tget_chunk_byte_from_bucket_end/1, get_chunk_seek_offset/1,\n\t\tget_chunk_file_start/1,\n\t\tsync_record_id/1, write_chunk/4, record_chunk/5, read_offset/2]).\n\n-export([init/1, handle_cast/2, handle_call/3, handle_info/2, terminate/2]).\n\n%% Used in tests.\n-export([delete_chunk/2]).\n\n-include(\"ar.hrl\").\n-include(\"ar_sup.hrl\").\n-include(\"ar_consensus.hrl\").\n-include(\"ar_chunk_storage.hrl\").\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n-include_lib(\"kernel/include/file.hrl\").\n\n-record(state, {\n\tfile_index,\n\tstore_id,\n\tentropy_context = none, %% some data we need pass to ar_entropy_storage\n\trange_start,\n\trange_end\n}).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Start the server.\nstart_link(Name, StoreID) ->\n\tgen_server:start_link({local, Name}, ?MODULE, StoreID, []).\n\n%% @doc Return the name of the server serving the given StoreID.\nname(StoreID) ->\n\tlist_to_atom(\"ar_chunk_storage_\" ++ ar_storage_module:label(StoreID)).\n\nregister_workers() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tConfiguredWorkers = lists:map(\n\t\tfun(StorageModule) ->\n\t\t\tStoreID = ar_storage_module:id(StorageModule),\n\n\t\t\tChunkStorageName = ar_chunk_storage:name(StoreID),\n\t\t\t?CHILD_WITH_ARGS(ar_chunk_storage, worker,\n\t\t\t\tChunkStorageName, [ChunkStorageName, StoreID])\n\t\tend,\n\t\tConfig#config.storage_modules\n\t),\n\t\n\tDefaultChunkStorageWorker = ?CHILD_WITH_ARGS(ar_chunk_storage, worker,\n\t\tar_chunk_storage_default, [ar_chunk_storage_default, ?DEFAULT_MODULE]),\n\n\tRepackInPlaceWorkers = lists:map(\n\t\tfun({StorageModule, _Packing}) ->\n\t\t\tStoreID = ar_storage_module:id(StorageModule),\n\t\t\t%% Note: the config validation will prevent a StoreID from being used in both\n\t\t\t%% `storage_modules` and `repack_in_place_storage_modules`, so there's\n\t\t\t%% no risk of a `Name` clash with the workers spawned above.\n\t\t\tChunkStorageName = ar_chunk_storage:name(StoreID),\n\t\t\t?CHILD_WITH_ARGS(ar_chunk_storage, worker,\n\t\t\t\tChunkStorageName, [ChunkStorageName, StoreID])\n\t\tend,\n\t\tConfig#config.repack_in_place_storage_modules\n\t),\n\n\tConfiguredWorkers ++ RepackInPlaceWorkers ++ [DefaultChunkStorageWorker].\n\n%% @doc Return true if we can accept the chunk for storage.\n%% 256 KiB chunks are stored on disk in chunk_storage optimized for read speed.\n%% Unpacked chunks smaller than 256 KiB cannot be stored here currently,\n%% because the module does not keep track of the chunk sizes - all chunks\n%% are assumed to be 256 KiB.\n%% \n%% Put another way:\n%% 1. Small chunks from before the strict data split threshold are never packed and\n%%    never mined, so we store them as unpacked chunks in the rocksdb only.\n%% 2. 
Small chunks after the strict data split threshold are:\n%%    - stored in the rocksdb when they are unpacked\n%%    - stored in chunk_storage as normal when they are packed\n-spec is_storage_supported(\n\t\tOffset :: non_neg_integer(),\n\t\tChunkSize :: non_neg_integer(),\n\t\tPacking :: term()\n) -> true | false.\nis_storage_supported(Offset, ChunkSize, Packing) ->\n\tcase Offset > ar_block:strict_data_split_threshold() of\n\t\ttrue ->\n\t\t\t%% All chunks above ar_block:strict_data_split_threshold() are placed in 256 KiB\n\t\t\t%% buckets so technically can be stored in ar_chunk_storage. However, to avoid\n\t\t\t%% managing padding in ar_chunk_storage for unpacked chunks smaller than 256 KiB\n\t\t\t%% (we do not need fast random access to unpacked chunks after\n\t\t\t%% ar_block:strict_data_split_threshold() anyway), we put them in RocksDB.\n\t\t\tPacking /= unpacked orelse ChunkSize == (?DATA_CHUNK_SIZE);\n\t\tfalse ->\n\t\t\tChunkSize == (?DATA_CHUNK_SIZE)\n\tend.\n\n%% @doc Store the chunk under the given end offset,\n%% bytes Offset - ?DATA_CHUNK_SIZE, Offset - ?DATA_CHUNK_SIZE + 1, .., Offset - 1.\nput(PaddedOffset, Chunk, Packing, StoreID) ->\n\tGenServerID = name(StoreID),\n\tcase catch gen_server:call(GenServerID, {put, PaddedOffset, Chunk, Packing}, 180_000) of\n\t\t{'EXIT', {shutdown, {gen_server, call, _}}} ->\n\t\t\t%% Handle to avoid the large badmatch log on shutdown.\n\t\t\t{error, shutdown};\n\t\t{'EXIT', {timeout, {gen_server, call, _}}} ->\n\t\t\t?LOG_ERROR([{event, gen_server_timeout_putting_chunk},\n\t\t\t\t{padded_offset, PaddedOffset},\n\t\t\t\t{store_id, StoreID}\n\t\t\t]),\n\t\t\t{error, timeout};\n\t\tReply ->\n\t\t\tReply\n\tend.\n\n%% @doc Open all the storage files. The subsequent calls to get/1 in the\n%% caller process will use the opened file descriptors.\nopen_files(StoreID) ->\n\tets:foldl(\n\t\tfun ({{Key, ID}, Filepath}, _) when ID == StoreID ->\n\t\t\t\tcase erlang:get({cfile, {Key, ID}}) of\n\t\t\t\t\tundefined ->\n\t\t\t\t\t\tcase file:open(Filepath, [read, raw, binary]) of\n\t\t\t\t\t\t\t{ok, F} ->\n\t\t\t\t\t\t\t\terlang:put({cfile, {Key, ID}}, F);\n\t\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t\tok\n\t\t\t\t\t\tend;\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tok\n\t\t\t\tend;\n\t\t\t(_, _) ->\n\t\t\t\tok\n\t\tend,\n\t\tok,\n\t\tchunk_storage_file_index\n\t).\n\n%% @doc Return {PaddedEndOffset, Chunk} for the chunk containing the given byte.\nget(Byte, StoreID) ->\n\tcase ar_sync_record:get_interval(Byte + 1, ar_chunk_storage, StoreID) of\n\t\tnot_found ->\n\t\t\tnot_found;\n\t\t{_End, IntervalStart} ->\n\t\t\tget(Byte, IntervalStart, StoreID)\n\tend.\n\nget(Byte, IntervalStart, StoreID) ->\n\t%% The synced ranges begin at IntervalStart => the chunk\n\t%% should begin at a multiple of ?DATA_CHUNK_SIZE to the right of IntervalStart.\n\tChunkStart = Byte - (Byte - IntervalStart) rem ?DATA_CHUNK_SIZE,\n\tChunkFileStart = get_chunk_file_start_by_start_offset(ChunkStart),\n\n\tcase get(Byte, ChunkStart, ChunkFileStart, StoreID, 1) of\n\t\t[] ->\n\t\t\tnot_found;\n\t\t[{PaddedEndOffset, Chunk}] ->\n\t\t\t{PaddedEndOffset, Chunk}\n\tend.\n\nlocate_chunk_on_disk(PaddedEndOffset, StoreID) ->\n\tlocate_chunk_on_disk(PaddedEndOffset, StoreID, #{}).\n\nlocate_chunk_on_disk(PaddedEndOffset, StoreID, FileIndex) ->\n\tChunkFileStart = get_chunk_file_start(PaddedEndOffset),\n\tFilepath = filepath(ChunkFileStart, FileIndex, StoreID),\n\t{Position, ChunkOffset} =\n        get_position_and_relative_chunk_offset(ChunkFileStart, PaddedEndOffset),\n\t{ChunkFileStart, Filepath, Position, 
ChunkOffset}.\n\n%% @doc Return a list of {PaddedEndOffset, Chunk} pairs for the stored chunks\n%% inside the given range. The given interval does not have to cover every chunk\n%% completely - we return all chunks at the intersection with the range.\nget_range(Start, Size) ->\n\tget_range(Start, Size, ?DEFAULT_MODULE).\n\n%% @doc Return a list of {PaddedEndOffset, Chunk} pairs for the stored chunks\n%% inside the given range. The given interval does not have to cover every chunk\n%% completely - we return all chunks at the intersection with the range. The\n%% very last chunk might be outside of the interval - its start offset is\n%% at most Start + Size + ?DATA_CHUNK_SIZE - 1.\nget_range(Start, Size, StoreID) ->\n\t?assert(Size < get_chunk_group_size()),\n\tcase ar_sync_record:get_next_synced_interval(Start, infinity, ar_chunk_storage, StoreID) of\n\t\t{_End, IntervalStart} when Start + Size > IntervalStart ->\n\t\t\tStart2 = max(Start, IntervalStart),\n\t\t\tSize2 = Start + Size - Start2,\n\t\t\tChunkStart = Start2 - (Start2 - IntervalStart) rem ?DATA_CHUNK_SIZE,\n\t\t\tChunkFileStart = get_chunk_file_start_by_start_offset(ChunkStart),\n\t\t\tEnd = Start2 + Size2,\n\t\t\tLastChunkStart = (End - 1) - ((End - 1) - IntervalStart) rem ?DATA_CHUNK_SIZE,\n\t\t\tLastChunkFileStart = get_chunk_file_start_by_start_offset(LastChunkStart),\n\t\t\tChunkCount = (LastChunkStart - ChunkStart) div ?DATA_CHUNK_SIZE + 1,\n\t\t\tcase ChunkFileStart /= LastChunkFileStart of\n\t\t\t\tfalse ->\n\t\t\t\t\t%% All chunks are from the same chunk file.\n\t\t\t\t\tget(Start2, ChunkStart, ChunkFileStart, StoreID, ChunkCount);\n\t\t\t\ttrue ->\n\t\t\t\t\tSizeBeforeBorder = ChunkFileStart + get_chunk_group_size() - ChunkStart,\n\t\t\t\t\tChunkCountBeforeBorder = max(SizeBeforeBorder, ?DATA_CHUNK_SIZE) div ?DATA_CHUNK_SIZE,\n\t\t\t\t\tStartAfterBorder = ChunkStart + ChunkCountBeforeBorder * ?DATA_CHUNK_SIZE,\n\t\t\t\t\tSizeAfterBorder = Size2 - ChunkCountBeforeBorder * ?DATA_CHUNK_SIZE\n\t\t\t\t\t\t\t+ (Start2 - ChunkStart),\n\t\t\t\t\tget(Start2, ChunkStart, ChunkFileStart, StoreID, ChunkCountBeforeBorder)\n\t\t\t\t\t\t++ get_range(StartAfterBorder, SizeAfterBorder, StoreID)\n\t\t\tend;\n\t\t_ ->\n\t\t\t[]\n\tend.\n\n%% @doc Close the file with the given Key.\nclose_file(Key, StoreID) ->\n\tcase erlang:erase({cfile, {Key, StoreID}}) of\n\t\tundefined ->\n\t\t\tok;\n\t\tF ->\n\t\t\tfile:close(F)\n\tend.\n\n%% @doc Close the files opened by open_files/1.\nclose_files(StoreID) ->\n\tclose_files(erlang:get_keys(), StoreID).\n\n%% @doc Soft-delete everything above the given end offset.\ncut(Offset, StoreID) ->\n\tar_sync_record:cut(Offset, ar_chunk_storage, StoreID).\n\n%% @doc Remove the chunk with the given end offset.\ndelete(Offset) ->\n\tdelete(Offset, ?DEFAULT_MODULE).\n\n%% @doc Remove the chunk with the given end offset.\ndelete(PaddedOffset, StoreID) ->\n\tGenServerID = name(StoreID),\n\tcase catch gen_server:call(GenServerID, {delete, PaddedOffset}, 20000) of\n\t\t{'EXIT', {shutdown, {gen_server, call, _}}} ->\n\t\t\t%% Handle to avoid the large badmatch log on shutdown.\n\t\t\t{error, shutdown};\n\t\t{'EXIT', {timeout, {gen_server, call, _}}} ->\n\t\t\t{error, timeout};\n\t\tReply ->\n\t\t\tReply\n\tend.\n\n%% @doc Run defragmentation of chunk files if enabled\nrun_defragmentation() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tcase Config#config.run_defragmentation of\n\t\tfalse ->\n\t\t\tok;\n\t\ttrue ->\n\t\t\tar:console(\"Defragmentation threshold: ~B bytes.~n\",\n\t\t\t\t\t   
[Config#config.defragmentation_trigger_threshold]),\n\t\t\tDefragModules = modules_to_defrag(Config),\n\t\t\tSizes = read_chunks_sizes(Config#config.data_dir),\n\t\t\tFiles = files_to_defrag(DefragModules,\n\t\t\t\t\t\t\t\t\tConfig#config.data_dir,\n\t\t\t\t\t\t\t\t\tConfig#config.defragmentation_trigger_threshold,\n\t\t\t\t\t\t\t\t\tSizes),\n\t\t\tok = defrag_files(Files),\n\t\t\tok = update_sizes_file(Files, #{})\n\tend.\n\nget_storage_module_path(DataDir, ?DEFAULT_MODULE) ->\n\tDataDir;\nget_storage_module_path(DataDir, StoreID) ->\n\tfilename:join([DataDir, \"storage_modules\", StoreID]).\n\nget_chunk_storage_path(DataDir, StoreID) ->\n\tfilename:join([get_storage_module_path(DataDir, StoreID), ?CHUNK_DIR]).\n\n%% @doc Return the start and end offset of the bucket containing the given offset.\n%% A chunk bucket is a 0-based, 256-KiB wide, 256-KiB aligned range that\n%% ar_chunk_storage uses to index chunks. The bucket start does NOT necessarily\n%% match the chunk's start offset.\n-spec get_chunk_bucket_start(Offset :: non_neg_integer()) -> non_neg_integer().\nget_chunk_bucket_start(Offset) ->\n\tPaddedEndOffset = ar_block:get_chunk_padded_offset(Offset),\n\tar_util:floor_int(max(0, PaddedEndOffset - ?DATA_CHUNK_SIZE), ?DATA_CHUNK_SIZE).\n\n-spec get_chunk_bucket_end(Offset :: non_neg_integer()) -> non_neg_integer().\nget_chunk_bucket_end(Offset) ->\n\tget_chunk_bucket_start(Offset) + ?DATA_CHUNK_SIZE.\n\n%% @doc Return the byte (>= ChunkStartOffset, < ChunkEndOffset)\n%% that necessarily belongs to the chunk stored  in the bucket with the given bucket end\n%% offset. For buckets above the strict data split threshold, the byte is the first byte\n%% of the chunk that is mapped to the bucket. For buckets below the strict data split\n%% threshold, the byte is just guaranteed to belong to the chunk but is not necessarily the\n%% chunk's first byte.\n-spec get_chunk_byte_from_bucket_end(non_neg_integer()) -> non_neg_integer().\nget_chunk_byte_from_bucket_end(BucketEndOffset) ->\n\t%% sanity checks\n\tBucketEndOffset = get_chunk_bucket_end(BucketEndOffset),\n\t%% end sanity checks\n\t\n\tget_chunk_seek_offset(BucketEndOffset) - 1.\n\n%% @doc Returns a byte that is guaranteed to be in the unpadded portion of the chunk\n%% identified by Offset. Offset can be any byte within the chunk - in either the unpadded\n%% part or the pad. 
This typically equates to the first byte of the chunk plus one.\n%% \n%% If Offset is before the ar_block:strict_data_split_threshold() we just return it because we don't\n%% have any information about where chunks start or end.\n-spec get_chunk_seek_offset(non_neg_integer()) -> non_neg_integer().\nget_chunk_seek_offset(Offset) ->\n\tcase Offset > ar_block:strict_data_split_threshold() of\n\t\ttrue ->\n\t\t\tar_poa:get_padded_offset(Offset, ar_block:strict_data_split_threshold())\n\t\t\t\t\t- (?DATA_CHUNK_SIZE)\n\t\t\t\t\t+ 1;\n\t\tfalse ->\n\t\t\tOffset\n\tend.\n\n\nset_entropy_complete(StoreID) ->\n\tgen_server:cast(name(StoreID), entropy_complete).\n\nread_offset(PaddedOffset, StoreID) ->\n\t{_ChunkFileStart, Filepath, Position, _ChunkOffset} =\n\t\t\tar_chunk_storage:locate_chunk_on_disk(PaddedOffset, StoreID),\n\tcase file:open(Filepath, [read, raw, binary]) of\n\t\t{ok, F} ->\n\t\t\tResult = file:pread(F, Position, ?OFFSET_SIZE),\n\t\t\tfile:close(F),\n\t\t\tResult;\n\t\tError ->\n\t\t\tError\n\tend.\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\n\ninit(?DEFAULT_MODULE = StoreID) ->\n\t%% Trap exit to avoid corrupting any open files on quit..\n\tprocess_flag(trap_exit, true),\n\t{ok, Config} = arweave_config:get_env(),\n\tDataDir = Config#config.data_dir,\n\tDir = get_storage_module_path(DataDir, StoreID),\n\tok = filelib:ensure_dir(Dir ++ \"/\"),\n\tok = filelib:ensure_dir(filename:join(Dir, ?CHUNK_DIR) ++ \"/\"),\n\tFileIndex = read_file_index(Dir),\n\tFileIndex2 = maps:map(\n\t\tfun(Key, Filepath) ->\n\t\t\tFilepath2 = filename:join([DataDir, ?CHUNK_DIR, Filepath]),\n\t\t\tets:insert(chunk_storage_file_index, {{Key, StoreID}, Filepath2}),\n\t\t\tFilepath2\n\t\tend,\n\t\tFileIndex\n\t),\n\twarn_custom_chunk_group_size(StoreID),\n\t{ok, #state{\n\t\tfile_index = FileIndex2, store_id = StoreID }};\ninit(StoreID) ->\n\t%% Trap exit to avoid corrupting any open files on quit..\n\tprocess_flag(trap_exit, true),\n\t{ok, Config} = arweave_config:get_env(),\n\tDataDir = Config#config.data_dir,\n\tDir = get_storage_module_path(DataDir, StoreID),\n\tok = filelib:ensure_dir(Dir ++ \"/\"),\n\tok = filelib:ensure_dir(filename:join(Dir, ?CHUNK_DIR) ++ \"/\"),\n\tFileIndex = read_file_index(Dir),\n\tFileIndex2 = maps:map(\n\t\tfun(Key, Filepath) ->\n\t\t\tets:insert(chunk_storage_file_index, {{Key, StoreID}, Filepath}),\n\t\t\tFilepath\n\t\tend,\n\t\tFileIndex\n\t),\n\twarn_custom_chunk_group_size(StoreID),\n\t{RangeStart, RangeEnd} = ar_storage_module:get_range(StoreID),\n\n\tState = #state{\n\t\tfile_index = FileIndex2,\n\t\tstore_id = StoreID,\n\t\trange_start = RangeStart,\n\t\trange_end = RangeEnd\n\t},\n\n\tEntropyContext = ar_entropy_gen:initialize_context(\n\t\tStoreID, ar_storage_module:get_packing(StoreID)),\n\tState2 = State#state{ entropy_context = EntropyContext },\n\n\t{ok, State2}.\n\nwarn_custom_chunk_group_size(StoreID) ->\n\tcase StoreID == ?DEFAULT_MODULE andalso get_chunk_group_size() /= ?CHUNK_GROUP_SIZE of\n\t\ttrue ->\n\t\t\t%% This warning applies to all store ids, but we will only print it when loading\n\t\t\t%% the default StoreID to ensure it is only printed once.\n\t\t\tWarningMessage = \"WARNING: changing chunk_storage_file_size is not \"\n\t\t\t\t\"recommended and may cause errors if different sizes are used for the same \"\n\t\t\t\t\"chunk storage 
files.\",\n\t\t\tar:console(WarningMessage),\n\t\t\t?LOG_WARNING(WarningMessage);\n\t\tfalse ->\n\t\t\tok\n\tend.\n\nhandle_cast(entropy_complete, State) ->\n\t#state{ entropy_context = {_, RewardAddr} } = State,\n\tState2 = State#state{ entropy_context = {true, RewardAddr} },\n\t{noreply, State2};\n\nhandle_cast(Cast, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {cast, Cast}]),\n\t{noreply, State}.\n\nhandle_call({put, PaddedEndOffset, Chunk, Packing}, _From, State)\n\t\twhen byte_size(Chunk) == ?DATA_CHUNK_SIZE ->\n\t#state{ store_id = StoreID,\n\t\tentropy_context = EntropyContext, file_index = FileIndex } = State,\n\n\tResult = store_chunk(\n\t\tPaddedEndOffset, Chunk, Packing, StoreID, FileIndex, EntropyContext),\n\tcase Result of\n\t\t{ok, FileIndex2, NewPacking} ->\n\t\t\t{reply, {ok, NewPacking}, State#state{ file_index = FileIndex2 }};\n\t\tError ->\n\t\t\t{reply, Error, State}\n\tend;\n\nhandle_call({delete, PaddedEndOffset}, _From, State) ->\n\t#state{\tstore_id = StoreID } = State,\n\tStartOffset = PaddedEndOffset - ?DATA_CHUNK_SIZE,\n\tcase ar_sync_record:delete(PaddedEndOffset, StartOffset, ar_chunk_storage, StoreID) of\n\t\tok ->\n\t\t\tcase ar_entropy_storage:delete_record(PaddedEndOffset, StoreID) of\n\t\t\t\tok ->\n\t\t\t\t\tcase delete_chunk(PaddedEndOffset, StoreID) of\n\t\t\t\t\t\tok ->\n\t\t\t\t\t\t\t{reply, ok, State};\n\t\t\t\t\t\tError ->\n\t\t\t\t\t\t\t{reply, Error, State}\n\t\t\t\t\tend;\n\t\t\t\tError2 ->\n\t\t\t\t\t{reply, Error2, State}\n\t\t\tend;\n\t\tError3 ->\n\t\t\t{reply, Error3, State}\n\tend;\n\nhandle_call(reset, _, #state{ store_id = StoreID, file_index = FileIndex } = State) ->\n\tmaps:map(\n\t\tfun(_Key, Filepath) ->\n\t\t\tfile:delete(Filepath)\n\t\tend,\n\t\tFileIndex\n\t),\n\tok = ar_sync_record:cut(0, ar_chunk_storage, StoreID),\n\terlang:erase(),\n\t{reply, ok, State#state{ file_index = #{} }};\n\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {request, Request}]),\n\t{reply, ok, State}.\n\nhandle_info({Ref, _Reply}, State) when is_reference(Ref) ->\n\t?LOG_ERROR([{event, stale_gen_server_call_reply}, {ref, Ref}, {reply, _Reply}]),\n\t%% A stale gen_server:call reply.\n\t{noreply, State};\n\nhandle_info({'EXIT', _PID, normal}, State) ->\n\t{noreply, State};\n\nhandle_info({entropy_generated, _Ref, _Entropy}, State) ->\n\t?LOG_WARNING([{event, entropy_generation_timed_out}]),\n\t{noreply, State};\n\nhandle_info(Info, State) ->\n\t?LOG_ERROR([{event, unhandled_info}, {info, io_lib:format(\"~p\", [Info])}]),\n\t{noreply, State}.\n\nterminate(Reason, _State) ->\n\tsync_and_close_files(),\n\t?LOG_INFO([{module, ?MODULE},{pid, self()},{callback, terminate},{reason, Reason}]),\n\tok.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nget_chunk_group_size() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tConfig#config.chunk_storage_file_size.\n\nget_filepath(Name, StoreID) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tDataDir = Config#config.data_dir,\n\tChunkDir = get_chunk_storage_path(DataDir, StoreID),\n\tfilename:join([ChunkDir, Name]).\n\nstore_chunk(PaddedEndOffset, Chunk, Packing, StoreID, FileIndex, EntropyContext) ->\n\tcase Packing == unpacked_padded of\n\t\ttrue ->\n\t\t\tar_entropy_storage:record_chunk(\n\t\t\t\tPaddedEndOffset, Chunk, StoreID, FileIndex, EntropyContext);\n\t\tfalse ->\n\t\t\trecord_chunk(\n\t\t\t\tPaddedEndOffset, 
Chunk, Packing, StoreID, FileIndex)\n\tend.\n\nrecord_chunk(\n\t\tPaddedEndOffset, Chunk, Packing, StoreID, FileIndex) ->\n\tcase write_chunk(PaddedEndOffset, Chunk, FileIndex, StoreID) of\n\t\t{ok, Filepath} ->\n\t\t\tprometheus_counter:inc(chunks_stored,\n\t\t\t\t[ar_storage_module:packing_label(Packing), ar_storage_module:label(StoreID)]),\n\t\t\tcase ar_sync_record:add(\n\t\t\t\t\tPaddedEndOffset, PaddedEndOffset - ?DATA_CHUNK_SIZE,\n\t\t\t\t\tsync_record_id(Packing), StoreID) of\n\t\t\t\tok ->\n\t\t\t\t\tChunkFileStart = get_chunk_file_start(PaddedEndOffset),\n\t\t\t\t\tets:insert(chunk_storage_file_index,\n\t\t\t\t\t\t{{ChunkFileStart, StoreID}, Filepath}),\n\t\t\t\t\t{ok, maps:put(ChunkFileStart, Filepath, FileIndex), Packing};\n\t\t\t\tError ->\n\t\t\t\t\tError\n\t\t\tend;\n\t\tError2 ->\n\t\t\tError2\n\tend.\n\nsync_record_id(unpacked_padded) ->\n\t%% Entropy indexing changed between 2.9.0 and 2.9.1. So we'll use a new\n\t%% sync_record id (ar_chunk_storage_replica_2_9_1_unpacked) going forward.\n\t%% The old id (ar_chunk_storage_replica_2_9_unpacked) should not be used.\n\tar_chunk_storage_replica_2_9_1_unpacked;\nsync_record_id(_Packing) ->\n\tar_chunk_storage.\n\nget_chunk_file_start(EndOffset) ->\n\tStartOffset = EndOffset - ?DATA_CHUNK_SIZE,\n\tget_chunk_file_start_by_start_offset(StartOffset).\n\nget_chunk_file_start_by_start_offset(StartOffset) ->\n\tar_util:floor_int(StartOffset, get_chunk_group_size()).\n\nwrite_chunk(PaddedOffset, Chunk, FileIndex, StoreID) ->\n\t{_ChunkFileStart, Filepath, Position, ChunkOffset} =\n\t\t\t\tlocate_chunk_on_disk(PaddedOffset, StoreID, FileIndex),\n\tcase get_handle_by_filepath(Filepath) of\n\t\t{error, _} = Error ->\n\t\t\tError;\n\t\tF ->\n\t\t\twrite_chunk2(PaddedOffset, ChunkOffset, Chunk, Filepath, F, Position)\n\tend.\n\nfilepath(ChunkFileStart, FileIndex, StoreID) ->\n\tcase maps:get(ChunkFileStart, FileIndex, not_found) of\n\t\tnot_found ->\n\t\t\tfilepath(ChunkFileStart, StoreID);\n\t\tFilepath ->\n\t\t\tFilepath\n\tend.\n\nfilepath(ChunkFileStart, StoreID) ->\n\tget_filepath(integer_to_binary(ChunkFileStart), StoreID).\n\nget_handle_by_filepath(Filepath) ->\n\tcase erlang:get({write_handle, Filepath}) of\n\t\tundefined ->\n\t\t\tcase file:open(Filepath, [read, write, raw]) of\n\t\t\t\t{error, Reason} = Error ->\n\t\t\t\t\t?LOG_ERROR([\n\t\t\t\t\t\t{event, failed_to_open_chunk_file},\n\t\t\t\t\t\t{file, Filepath},\n\t\t\t\t\t\t{reason, io_lib:format(\"~p\", [Reason])}\n\t\t\t\t\t]),\n\t\t\t\t\tError;\n\t\t\t\t{ok, F} ->\n\t\t\t\t\terlang:put({write_handle, Filepath}, F),\n\t\t\t\t\tF\n\t\t\tend;\n\t\tF ->\n\t\t\tF\n\tend.\n\nwrite_chunk2(_PaddedOffset, ChunkOffset, Chunk, Filepath, F, Position) ->\n\tResult = file:pwrite(F, Position, [<< ChunkOffset:?OFFSET_BIT_SIZE >> | Chunk]),\n\tcase Result of\n\t\t{error, _Reason} = Error ->\n\t\t\tError;\n\t\tok ->\n\t\t\t{ok, Filepath}\n\tend.\n\nget_special_zero_offset() ->\n\t?DATA_CHUNK_SIZE.\n\nget_position_and_relative_chunk_offset(ChunkFileStart, Offset) ->\n\tBucketPickOffset = Offset - ?DATA_CHUNK_SIZE,\n\tget_position_and_relative_chunk_offset_by_start_offset(ChunkFileStart, BucketPickOffset).\n\nget_position_and_relative_chunk_offset_by_start_offset(ChunkFileStart, BucketPickOffset) ->\n\tBucketStart = ar_util:floor_int(BucketPickOffset, ?DATA_CHUNK_SIZE),\n\tChunkOffset = case BucketPickOffset - BucketStart of\n\t\t0 ->\n\t\t\t%% Represent 0 as the largest possible offset plus one,\n\t\t\t%% to distinguish zero offset from not yet written 
data.\n\t\t\tget_special_zero_offset();\n\t\tOffset ->\n\t\t\tOffset\n\tend,\n\tRelativeOffset = BucketStart - ChunkFileStart,\n\tPosition = RelativeOffset + ?OFFSET_SIZE * (RelativeOffset div ?DATA_CHUNK_SIZE),\n\t{Position, ChunkOffset}.\n\ndelete_chunk(PaddedOffset, StoreID) ->\n\t{_ChunkFileStart, Filepath, Position, _ChunkOffset} =\n\t\tlocate_chunk_on_disk(PaddedOffset, StoreID),\n\tcase file:open(Filepath, [read, write, raw]) of\n\t\t{ok, F} ->\n\t\t\tZeroChunk =\n\t\t\t\tcase erlang:get(zero_chunk) of\n\t\t\t\t\tundefined ->\n\t\t\t\t\t\tOffsetBytes = << 0:?OFFSET_BIT_SIZE >>,\n\t\t\t\t\t\tZeroBytes = << <<0>> || _ <- lists:seq(1, ?DATA_CHUNK_SIZE) >>,\n\t\t\t\t\t\tChunk = << OffsetBytes/binary, ZeroBytes/binary >>,\n\t\t\t\t\t\t%% Cache the zero chunk in the process memory, constructing\n\t\t\t\t\t\t%% it is expensive.\n\t\t\t\t\t\terlang:put(zero_chunk, Chunk),\n\t\t\t\t\t\tChunk;\n\t\t\t\t\tChunk ->\n\t\t\t\t\t\tChunk\n\t\t\t\tend,\n\t\t\tar_entropy_storage:acquire_semaphore(Filepath),\n\t\t\tResult = file:pwrite(F, Position, ZeroChunk),\n\t\t\tar_entropy_storage:release_semaphore(Filepath),\n\t\t\tResult;\n\t\t{error, enoent} ->\n\t\t\tok;\n\t\tError ->\n\t\t\tError\n\tend.\n\nget(Byte, Start, ChunkFileStart, StoreID, ChunkCount) ->\n\tReadChunks =\n\t\tcase erlang:get({cfile, {ChunkFileStart, StoreID}}) of\n\t\t\tundefined ->\n\t\t\t\tcase ets:lookup(chunk_storage_file_index, {ChunkFileStart, StoreID}) of\n\t\t\t\t\t[] ->\n\t\t\t\t\t\t[];\n\t\t\t\t\t[{_, Filepath}] ->\n\t\t\t\t\t\tread_chunk(Byte, Start, ChunkFileStart, Filepath, ChunkCount, StoreID)\n\t\t\t\tend;\n\t\t\tFile ->\n\t\t\t\tread_chunk2(Byte, Start, ChunkFileStart, File, ChunkCount, StoreID)\n\t\tend,\n\tcase ar_storage_module:is_repack_in_place(StoreID) of\n\t\ttrue ->\n\t\t\tReadChunks;\n\t\tfalse ->\n\t\t\tfilter_by_sync_record(ReadChunks, Byte, Start, ChunkFileStart, StoreID, ChunkCount)\n\tend.\n\nread_chunk(Byte, Start, ChunkFileStart, Filepath, ChunkCount, StoreID) ->\n\tcase file:open(Filepath, [read, raw, binary]) of\n\t\t{error, enoent} ->\n\t\t\t[];\n\t\t{error, Reason} ->\n\t\t\t?LOG_ERROR([\n\t\t\t\t{event, failed_to_open_chunk_file},\n\t\t\t\t{byte, Byte},\n\t\t\t\t{reason, io_lib:format(\"~p\", [Reason])}\n\t\t\t]),\n\t\t\t[];\n\t\t{ok, File} ->\n\t\t\tResult = read_chunk2(Byte, Start, ChunkFileStart, File, ChunkCount, StoreID),\n\t\t\tfile:close(File),\n\t\t\tResult\n\tend.\n\nread_chunk2(Byte, Start, ChunkFileStart, File, ChunkCount, StoreID) ->\n\t{Position, _ChunkOffset} =\n\t\t\tget_position_and_relative_chunk_offset_by_start_offset(ChunkFileStart, Start),\n\tBucketStart = ar_util:floor_int(Start, ?DATA_CHUNK_SIZE),\n\tread_chunk3(Byte, Position, BucketStart, File, ChunkCount, StoreID).\n\nread_chunk3(Byte, Position, BucketStart, File, ChunkCount, StoreID) ->\n\tStartTime = erlang:monotonic_time(),\n\tcase file:pread(File, Position, (?DATA_CHUNK_SIZE + ?OFFSET_SIZE) * ChunkCount) of\n\t\t{ok, << ChunkOffset:?OFFSET_BIT_SIZE, _Chunk/binary >> = Bin} ->\n\t\t\tStoreIDLabel = ar_storage_module:label(StoreID),\n\t\t\tar_metrics:record_rate_metric(\n\t\t\t\tStartTime, byte_size(Bin), \n\t\t\t\tchunk_read_rate_bytes_per_second, [StoreIDLabel, raw]),\n\t\t\tprometheus_counter:inc(chunks_read, [StoreIDLabel], ChunkCount),\n\t\t\tcase is_offset_valid(Byte, BucketStart, ChunkOffset) of\n\t\t\t\ttrue ->\n\t\t\t\t\textract_end_offset_chunk_pairs(Bin, BucketStart, 1);\n\t\t\t\tfalse ->\n\t\t\t\t\t[]\n\t\t\tend;\n\t\t{error, Reason} ->\n\t\t\t?LOG_ERROR([\n\t\t\t\t{event, 
failed_to_read_chunk},\n\t\t\t\t{byte, Byte},\n\t\t\t\t{position, Position},\n\t\t\t\t{reason, io_lib:format(\"~p\", [Reason])}\n\t\t\t]),\n\t\t\t[];\n\t\teof ->\n\t\t\t[]\n\tend.\n\nextract_end_offset_chunk_pairs(\n\t\t<< 0:?OFFSET_BIT_SIZE, _ZeroChunk:?DATA_CHUNK_SIZE/binary, Rest/binary >>,\n\t\tBucketStart,\n\t\tShift\n ) ->\n\textract_end_offset_chunk_pairs(Rest, BucketStart, Shift + 1);\nextract_end_offset_chunk_pairs(\n\t\t<< ChunkOffset:?OFFSET_BIT_SIZE, Chunk:?DATA_CHUNK_SIZE/binary, Rest/binary >>,\n\t\tBucketStart,\n\t\tShift\n ) ->\n\tChunkOffsetLimit = ?DATA_CHUNK_SIZE,\n\tEndOffset =\n\t\tBucketStart\n\t\t+ (ChunkOffset rem ChunkOffsetLimit)\n\t\t+ (?DATA_CHUNK_SIZE * Shift),\n\t[{EndOffset, Chunk}\n\t\t\t| extract_end_offset_chunk_pairs(Rest, BucketStart, Shift + 1)];\nextract_end_offset_chunk_pairs(<<>>, _BucketStart, _Shift) ->\n\t[];\nextract_end_offset_chunk_pairs(<< ChunkOffset:?OFFSET_BIT_SIZE, Chunk/binary >>,\n\t\tBucketStart, Shift) ->\n\t?LOG_ERROR([{event, unexpected_chunk_data}, {chunk_offset, ChunkOffset},\n\t\t\t{bucket_start, BucketStart}, {shift, Shift}, {chunk_size, byte_size(Chunk)}]),\n\t[].\n\nis_offset_valid(_Byte, _BucketStart, 0) ->\n\t%% 0 is interpreted as \"data has not been written yet\".\n\tfalse;\nis_offset_valid(Byte, BucketStart, ChunkOffset) ->\n\tDelta = Byte - (BucketStart + ChunkOffset rem ?DATA_CHUNK_SIZE),\n\tDelta >= 0 andalso Delta < ?DATA_CHUNK_SIZE.\n\nget_sync_record_intervals(Start, ChunkCount, StoreID) ->\n\tEnd = Start + (ChunkCount + 1) * ?DATA_CHUNK_SIZE,\n\tget_sync_record_intervals(Start, End, StoreID, ar_intervals:new()).\n\nget_sync_record_intervals(Start, End, _StoreID, Intervals) when Start >= End ->\n\tIntervals;\nget_sync_record_intervals(Start, End, StoreID, Intervals) ->\n\tcase ar_sync_record:get_next_synced_interval(Start, End, ar_chunk_storage, StoreID) of\n\t\tnot_found ->\n\t\t\tIntervals;\n\t\t{End2, Start2} ->\n\t\t\tget_sync_record_intervals(End2, End, StoreID,\n\t\t\t\t\tar_intervals:add(Intervals, min(End, End2), Start2))\n\tend.\n\nfilter_by_sync_record(ReadChunks, Byte, Start, ChunkFileStart, StoreID, ChunkCount) ->\n\tprometheus_histogram:observe_duration(chunk_storage_sync_record_check_duration_milliseconds,\n\t\t[ChunkCount],\n\t\tfun() ->\n\t\t\tIntervals = get_sync_record_intervals(Start, ChunkCount, StoreID),\n\t\t\tfilter_by_sync_record(ReadChunks, Intervals, Byte, Start, ChunkFileStart, StoreID, ChunkCount)\n\t\tend).\n\nfilter_by_sync_record(Chunks, _Intervals, _Byte, _Start, _ChunkFileStart, _StoreID, 1) ->\n\t%% The code paths which query a single chunk have already implicitly checked that\n\t%% the chunk belongs to the sync_record. E.g. 
ar_chunk_storage:get/2\n\tChunks;\nfilter_by_sync_record([], _Intervals, _Byte, _Start, _ChunkFileStart, _StoreID, _ChunkCount) ->\n\t[];\nfilter_by_sync_record([{PaddedEndOffset, Chunk} | Rest], Intervals, Byte, Start, ChunkFileStart, StoreID, ChunkCount) ->\n\tcase ar_intervals:is_inside(Intervals, PaddedEndOffset) of\n\t\tfalse ->\n\t\t\t%% The holes between chunks may be filled with entropy.\n\t\t\tfilter_by_sync_record(Rest, Intervals, Byte, Start, ChunkFileStart, StoreID, ChunkCount);\n\t\t_ ->\n\t\t\t[{PaddedEndOffset, Chunk}\n\t\t\t\t| filter_by_sync_record(Rest, Intervals, Byte, Start, ChunkFileStart, StoreID, ChunkCount)]\n\tend.\n\nclose_files([{cfile, {_, StoreID} = Key} | Keys], StoreID) ->\n\tfile:close(erlang:get({cfile, Key})),\n\tclose_files(Keys, StoreID);\nclose_files([_ | Keys], StoreID) ->\n\tclose_files(Keys, StoreID);\nclose_files([], _StoreID) ->\n\tok.\n\nread_file_index(Dir) ->\n\tChunkDir = filename:join(Dir, ?CHUNK_DIR),\n\t{ok, Filenames} = file:list_dir(ChunkDir),\n\tlists:foldl(\n\t\tfun(Filename, Acc) ->\n\t\t\tcase catch list_to_integer(Filename) of\n\t\t\t\tKey when is_integer(Key) ->\n\t\t\t\t\tmaps:put(Key, filename:join(ChunkDir, Filename), Acc);\n\t\t\t\t_ ->\n\t\t\t\t\tAcc\n\t\t\tend\n\t\tend,\n\t\t#{},\n\t\tFilenames\n\t).\n\nsync_and_close_files() ->\n\tsync_and_close_files(erlang:get_keys()).\n\nsync_and_close_files([{write_handle, _} = Key | Keys]) ->\n\tF = erlang:get(Key),\n\tok = file:sync(F),\n\tfile:close(F),\n\tsync_and_close_files(Keys);\nsync_and_close_files([_ | Keys]) ->\n\tsync_and_close_files(Keys);\nsync_and_close_files([]) ->\n\tok.\n\nlist_files(DataDir, StoreID) ->\n\tDir = get_storage_module_path(DataDir, StoreID),\n\tok = filelib:ensure_dir(Dir ++ \"/\"),\n\tok = filelib:ensure_dir(filename:join(Dir, ?CHUNK_DIR) ++ \"/\"),\n\tStorageIndex = read_file_index(Dir),\n\tmaps:values(StorageIndex).\n\nfiles_to_defrag(StorageModules, DataDir, ByteSizeThreshold, Sizes) ->\n\tAllFiles = lists:flatmap(\n\t\tfun(StorageModule) ->\n\t\t\tlist_files(DataDir, ar_storage_module:id(StorageModule))\n\t\tend, StorageModules),\n\tlists:filter(\n\t\tfun(Filepath) ->\n\t\t\tcase file:read_file_info(Filepath) of\n\t\t\t\t{ok, #file_info{ size = Size }} ->\n\t\t\t\t\tLastSize = maps:get(Filepath, Sizes, 1),\n\t\t\t\t\tGrowth = (Size - LastSize) / LastSize,\n\t\t\t\t\tSize >= ByteSizeThreshold andalso Growth > 0.1;\n\t\t\t\t{error, Reason} ->\n\t\t\t\t\t?LOG_ERROR([\n\t\t\t\t\t\t{event, failed_to_read_chunk_file_info},\n\t\t\t\t\t\t{file, Filepath},\n\t\t\t\t\t\t{reason, io_lib:format(\"~p\", [Reason])}\n\t\t\t\t\t]),\n\t\t\t\t\tfalse\n\t\t\tend\n\t\tend, AllFiles).\n\ndefrag_files([]) ->\n\tok;\ndefrag_files([Filepath | Rest]) ->\n\t?LOG_DEBUG([{event, defragmenting_file}, {file, Filepath}]),\n\tar:console(\"Defragmenting ~s...~n\", [Filepath]),\n\tTmpFilepath = Filepath ++ \".tmp\",\n\tDefragCmd = io_lib:format(\"rsync --sparse --quiet ~ts ~ts\", [Filepath, TmpFilepath]),\n\tMoveDefragCmd = io_lib:format(\"mv ~ts ~ts\", [TmpFilepath, Filepath]),\n\t%% We expect nothing to be returned on successful calls.\n\t[] = os:cmd(DefragCmd),\n\t[] = os:cmd(MoveDefragCmd),\n\tar:console(\"Defragmented ~s...~n\", [Filepath]),\n\tdefrag_files(Rest).\n\nupdate_sizes_file([], Sizes) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tSizesFile = filename:join(Config#config.data_dir, \"chunks_sizes\"),\n\tcase file:open(SizesFile, [write, raw]) of\n\t\t{error, Reason} ->\n\t\t\t?LOG_ERROR([\n\t\t\t\t{event, failed_to_open_chunk_sizes_file},\n\t\t\t\t{file, 
SizesFile},\n\t\t\t\t{reason, io_lib:format(\"~p\", [Reason])}\n\t\t\t]),\n\t\t\terror;\n\t\t{ok, F} ->\n\t\t\tSizesBinary = erlang:term_to_binary(Sizes),\n\t\t\tok = file:write(F, SizesBinary),\n\t\t\tfile:close(F)\n\tend;\nupdate_sizes_file([Filepath | Rest], Sizes) ->\n\tcase file:read_file_info(Filepath) of\n\t\t{ok, #file_info{ size = Size }} ->\n\t\t\tupdate_sizes_file(Rest, Sizes#{ Filepath => Size });\n\t\t{error, Reason} ->\n\t\t\t?LOG_ERROR([\n\t\t\t\t{event, failed_to_read_chunk_file_info},\n\t\t\t\t{file, Filepath},\n\t\t\t\t{reason, io_lib:format(\"~p\", [Reason])}\n\t\t\t]),\n\t\t\terror\n\tend.\n\nread_chunks_sizes(DataDir) ->\n\tSizesFile = filename:join(DataDir, \"chunks_sizes\"),\n\tcase file:read_file(SizesFile) of\n\t\t{ok, Content} ->\n\t\t\terlang:binary_to_term(Content, [safe]);\n\t\t{error, enoent} ->\n\t\t\t#{};\n\t\t{error, Reason} ->\n\t\t\t?LOG_ERROR([\n\t\t\t\t{event, failed_to_read_chunk_sizes_file},\n\t\t\t\t{file, SizesFile},\n\t\t\t\t{reason, io_lib:format(\"~p\", [Reason])}\n\t\t\t]),\n\t\t\terror\n\tend.\n\nmodules_to_defrag(#config{defragmentation_modules = [_ | _] = Modules}) -> Modules;\nmodules_to_defrag(#config{storage_modules = Modules}) -> Modules.\n\n%%%===================================================================\n%%% Tests.\n%%%===================================================================\n\nchunk_bucket_test() ->\n\tar_test_node:test_with_mocked_functions([\n\t\t{ar_block, strict_data_split_threshold, fun() -> 700_000 end}\n\t],\n\tfun test_chunk_bucket/0, 30).\n\ntest_chunk_bucket() ->\n\tcase ar_block:strict_data_split_threshold() of\n\t\t700_000 ->\n\t\t\tok;\n\t\t_ ->\n\t\t\tthrow(unexpected_strict_data_split_threshold)\n\tend,\n\n\t%% get_chunk_bucket_end pads the provided offset\n\t%% get_chunk_bucket_start does not pad the provided offset\n\n\t%% At and before the STRICT_DATA_SPLIT_THRESHOLD, offsets are not padded.\n\t?assertEqual(262144, get_chunk_bucket_end(0)),\n\t?assertEqual(0, get_chunk_bucket_start(0)),\n\n\t?assertEqual(262144, get_chunk_bucket_end(1)),\n\t?assertEqual(0, get_chunk_bucket_start(1)),\n\n\t?assertEqual(262144, get_chunk_bucket_end(?DATA_CHUNK_SIZE - 1)),\n\t?assertEqual(0, get_chunk_bucket_start(?DATA_CHUNK_SIZE - 1)),\n\n\t?assertEqual(262144, get_chunk_bucket_end(?DATA_CHUNK_SIZE)),\n\t?assertEqual(0, get_chunk_bucket_start(?DATA_CHUNK_SIZE)),\n\n\t?assertEqual(262144, get_chunk_bucket_end(?DATA_CHUNK_SIZE + 1)),\n\t?assertEqual(0, get_chunk_bucket_start(?DATA_CHUNK_SIZE + 1)),\n\n\t?assertEqual(524288, get_chunk_bucket_end(2 * ?DATA_CHUNK_SIZE)),\n\t?assertEqual(262144, get_chunk_bucket_start(2 * ?DATA_CHUNK_SIZE)),\n\n\t?assertEqual(524288, get_chunk_bucket_end(2 * ?DATA_CHUNK_SIZE + 1)),\n\t?assertEqual(262144, get_chunk_bucket_start(2 * ?DATA_CHUNK_SIZE + 1)),\n\n\t?assertEqual(524288, get_chunk_bucket_end(ar_block:strict_data_split_threshold() - 1)),\n\t?assertEqual(262144, get_chunk_bucket_start(ar_block:strict_data_split_threshold() - 1)),\n\n\t?assertEqual(524288, get_chunk_bucket_end(ar_block:strict_data_split_threshold())),\n\t?assertEqual(262144, get_chunk_bucket_start(ar_block:strict_data_split_threshold())),\n\n\t%% After the STRICT_DATA_SPLIT_THRESHOLD, offsets are padded.\n\t?assertEqual(786432, get_chunk_bucket_end(ar_block:strict_data_split_threshold() + 1)),\n\t?assertEqual(524288, get_chunk_bucket_start(ar_block:strict_data_split_threshold() + 1)),\n\n\t?assertEqual(786432, get_chunk_bucket_end(3 * ?DATA_CHUNK_SIZE - 1)),\n\t?assertEqual(524288, get_chunk_bucket_start(3 * 
?DATA_CHUNK_SIZE - 1)),\n\n\t?assertEqual(786432, get_chunk_bucket_end(3 * ?DATA_CHUNK_SIZE)),\n\t?assertEqual(524288, get_chunk_bucket_start(3 * ?DATA_CHUNK_SIZE)),\n\n\t?assertEqual(786432, get_chunk_bucket_end(3 * ?DATA_CHUNK_SIZE + 1)),\n\t?assertEqual(524288, get_chunk_bucket_start(3 * ?DATA_CHUNK_SIZE + 1)),\n\n\t?assertEqual(1048576, get_chunk_bucket_end(4 * ?DATA_CHUNK_SIZE - 1)),\n\t?assertEqual(786432, get_chunk_bucket_start(4 * ?DATA_CHUNK_SIZE - 1)),\n\n\t?assertEqual(1048576, get_chunk_bucket_end(4 * ?DATA_CHUNK_SIZE)),\n\t?assertEqual(786432, get_chunk_bucket_start(4 * ?DATA_CHUNK_SIZE)),\n\n\t?assertEqual(1048576, get_chunk_bucket_end(4 * ?DATA_CHUNK_SIZE + 1)),\n\t?assertEqual(786432, get_chunk_bucket_start(4 * ?DATA_CHUNK_SIZE + 1)),\n\n\t?assertEqual(1310720, get_chunk_bucket_end(5 * ?DATA_CHUNK_SIZE - 1)),\n\t?assertEqual(1048576, get_chunk_bucket_start(5 * ?DATA_CHUNK_SIZE - 1)),\n\n\t?assertEqual(1310720, get_chunk_bucket_end(5 * ?DATA_CHUNK_SIZE)),\n\t?assertEqual(1048576, get_chunk_bucket_start(5 * ?DATA_CHUNK_SIZE)),\n\n\t?assertEqual(1310720, get_chunk_bucket_end(5 * ?DATA_CHUNK_SIZE + 1)),\n\t?assertEqual(1048576, get_chunk_bucket_start(5 * ?DATA_CHUNK_SIZE + 1)).\n\nget_chunk_byte_from_bucket_end_test() ->\n\tar_test_node:test_with_mocked_functions([\n\t\t{ar_block, strict_data_split_threshold, fun() -> 700_000 end}\n\t],\n\tfun test_get_chunk_byte_from_bucket_end/0, 30).\n\ntest_get_chunk_byte_from_bucket_end() ->\n\t?assertEqual(262143, get_chunk_byte_from_bucket_end(262144)),\n\t?assertEqual(524287, get_chunk_byte_from_bucket_end(524288)),\n\t?assertEqual(700000, get_chunk_byte_from_bucket_end(786432)),\n\t?assertEqual(962144, get_chunk_byte_from_bucket_end(1048576)),\n\t?assertEqual(1224288, get_chunk_byte_from_bucket_end(1310720)),\n\t?assertEqual(1486432, get_chunk_byte_from_bucket_end(1572864)),\n\t?assertEqual(1748576, get_chunk_byte_from_bucket_end(1835008)),\n\t?assertEqual(2010720, get_chunk_byte_from_bucket_end(2097152)),\n\t?assertEqual(2272864, get_chunk_byte_from_bucket_end(2359296)).\n\t\n\t\nwell_aligned_test_() ->\n\t{timeout, 20, fun test_well_aligned/0}.\n\ntest_well_aligned() ->\n\tclear(?DEFAULT_MODULE),\n\tPacking = ar_storage_module:get_packing(?DEFAULT_MODULE),\n\tC1 = crypto:strong_rand_bytes(?DATA_CHUNK_SIZE),\n\tC2 = crypto:strong_rand_bytes(?DATA_CHUNK_SIZE),\n\tC3 = crypto:strong_rand_bytes(?DATA_CHUNK_SIZE),\n\t{ok, unpacked} = ar_chunk_storage:put(2 * ?DATA_CHUNK_SIZE, C1, Packing, ?DEFAULT_MODULE),\n\tassert_get(C1, 2 * ?DATA_CHUNK_SIZE),\n\t?assertEqual(not_found, ar_chunk_storage:get(2 * ?DATA_CHUNK_SIZE, ?DEFAULT_MODULE)),\n\t?assertEqual(not_found, ar_chunk_storage:get(2 * ?DATA_CHUNK_SIZE + 1, ?DEFAULT_MODULE)),\n\tar_chunk_storage:delete(2 * ?DATA_CHUNK_SIZE),\n\tassert_get(not_found, 2 * ?DATA_CHUNK_SIZE),\n\tar_chunk_storage:put(?DATA_CHUNK_SIZE, C2, Packing, ?DEFAULT_MODULE),\n\tassert_get(C2, ?DATA_CHUNK_SIZE),\n\tassert_get(not_found, 2 * ?DATA_CHUNK_SIZE),\n\tar_chunk_storage:put(2 * ?DATA_CHUNK_SIZE, C1, Packing, ?DEFAULT_MODULE),\n\tassert_get(C1, 2 * ?DATA_CHUNK_SIZE),\n\tassert_get(C2, ?DATA_CHUNK_SIZE),\n\t?assertEqual([{?DATA_CHUNK_SIZE, C2}, {2 * ?DATA_CHUNK_SIZE, C1}],\n\t\t\tar_chunk_storage:get_range(0, 2 * ?DATA_CHUNK_SIZE)),\n\t?assertEqual([{?DATA_CHUNK_SIZE, C2}, {2 * ?DATA_CHUNK_SIZE, C1}],\n\t\t\tar_chunk_storage:get_range(1, 2 * ?DATA_CHUNK_SIZE)),\n\t?assertEqual([{?DATA_CHUNK_SIZE, C2}, {2 * ?DATA_CHUNK_SIZE, C1}],\n\t\t\tar_chunk_storage:get_range(1, 2 * ?DATA_CHUNK_SIZE - 
1)),\n\t?assertEqual([{?DATA_CHUNK_SIZE, C2}, {2 * ?DATA_CHUNK_SIZE, C1}],\n\t\t\tar_chunk_storage:get_range(0, 3 * ?DATA_CHUNK_SIZE)),\n\t?assertEqual([{?DATA_CHUNK_SIZE, C2}, {2 * ?DATA_CHUNK_SIZE, C1}],\n\t\t\tar_chunk_storage:get_range(0, ?DATA_CHUNK_SIZE + 1)),\n\tar_chunk_storage:put(3 * ?DATA_CHUNK_SIZE, C3, Packing, ?DEFAULT_MODULE),\n\tassert_get(C2, ?DATA_CHUNK_SIZE),\n\tassert_get(C1, 2 * ?DATA_CHUNK_SIZE),\n\tassert_get(C3, 3 * ?DATA_CHUNK_SIZE),\n\t?assertEqual(not_found, ar_chunk_storage:get(3 * ?DATA_CHUNK_SIZE, ?DEFAULT_MODULE)),\n\t?assertEqual(not_found, ar_chunk_storage:get(3 * ?DATA_CHUNK_SIZE + 1, ?DEFAULT_MODULE)),\n\tar_chunk_storage:put(2 * ?DATA_CHUNK_SIZE, C2, Packing, ?DEFAULT_MODULE),\n\tassert_get(C2, ?DATA_CHUNK_SIZE),\n\tassert_get(C2, 2 * ?DATA_CHUNK_SIZE),\n\tassert_get(C3, 3 * ?DATA_CHUNK_SIZE),\n\tar_chunk_storage:delete(?DATA_CHUNK_SIZE),\n\tassert_get(not_found, ?DATA_CHUNK_SIZE),\n\t?assertEqual([], ar_chunk_storage:get_range(0, ?DATA_CHUNK_SIZE)),\n\tassert_get(C2, 2 * ?DATA_CHUNK_SIZE),\n\tassert_get(C3, 3 * ?DATA_CHUNK_SIZE),\n\t?assertEqual([{2 * ?DATA_CHUNK_SIZE, C2}, {3 * ?DATA_CHUNK_SIZE, C3}],\n\t\t\tar_chunk_storage:get_range(0, 4 * ?DATA_CHUNK_SIZE)),\n\t?assertEqual([], ar_chunk_storage:get_range(7 * ?DATA_CHUNK_SIZE, 13 * ?DATA_CHUNK_SIZE)).\n\nnot_aligned_test_() ->\n\t{timeout, 20, fun test_not_aligned/0}.\n\ntest_not_aligned() ->\n\tclear(?DEFAULT_MODULE),\n\tPacking = ar_storage_module:get_packing(?DEFAULT_MODULE),\n\tC1 = crypto:strong_rand_bytes(?DATA_CHUNK_SIZE),\n\tC2 = crypto:strong_rand_bytes(?DATA_CHUNK_SIZE),\n\tC3 = crypto:strong_rand_bytes(?DATA_CHUNK_SIZE),\n\tar_chunk_storage:put(2 * ?DATA_CHUNK_SIZE + 7, C1, Packing, ?DEFAULT_MODULE),\n\tassert_get(C1, 2 * ?DATA_CHUNK_SIZE + 7),\n\tar_chunk_storage:delete(2 * ?DATA_CHUNK_SIZE + 7),\n\tassert_get(not_found, 2 * ?DATA_CHUNK_SIZE + 7),\n\tar_chunk_storage:put(2 * ?DATA_CHUNK_SIZE + 7, C1, Packing, ?DEFAULT_MODULE),\n\tassert_get(C1, 2 * ?DATA_CHUNK_SIZE + 7),\n\t?assertEqual(not_found, ar_chunk_storage:get(2 * ?DATA_CHUNK_SIZE + 7, ?DEFAULT_MODULE)),\n\t?assertEqual(not_found, ar_chunk_storage:get(?DATA_CHUNK_SIZE + 7 - 1, ?DEFAULT_MODULE)),\n\t?assertEqual(not_found, ar_chunk_storage:get(?DATA_CHUNK_SIZE, ?DEFAULT_MODULE)),\n\t?assertEqual(not_found, ar_chunk_storage:get(?DATA_CHUNK_SIZE - 1, ?DEFAULT_MODULE)),\n\t?assertEqual(not_found, ar_chunk_storage:get(0, ?DEFAULT_MODULE)),\n\t?assertEqual(not_found, ar_chunk_storage:get(1, ?DEFAULT_MODULE)),\n\tar_chunk_storage:put(?DATA_CHUNK_SIZE + 3, C2, Packing, ?DEFAULT_MODULE),\n\tassert_get(C2, ?DATA_CHUNK_SIZE + 3),\n\t?assertEqual(not_found, ar_chunk_storage:get(0, ?DEFAULT_MODULE)),\n\t?assertEqual(not_found, ar_chunk_storage:get(1, ?DEFAULT_MODULE)),\n\t?assertEqual(not_found, ar_chunk_storage:get(2, ?DEFAULT_MODULE)),\n\tar_chunk_storage:delete(2 * ?DATA_CHUNK_SIZE + 7),\n\tassert_get(C2, ?DATA_CHUNK_SIZE + 3),\n\tassert_get(not_found, 2 * ?DATA_CHUNK_SIZE + 7),\n\tar_chunk_storage:put(3 * ?DATA_CHUNK_SIZE + 7, C3, Packing, ?DEFAULT_MODULE),\n\tassert_get(C3, 3 * ?DATA_CHUNK_SIZE + 7),\n\tar_chunk_storage:put(3 * ?DATA_CHUNK_SIZE + 7, C1, Packing, ?DEFAULT_MODULE),\n\tassert_get(C1, 3 * ?DATA_CHUNK_SIZE + 7),\n\tar_chunk_storage:put(4 * ?DATA_CHUNK_SIZE + ?DATA_CHUNK_SIZE div 2, C2, Packing, ?DEFAULT_MODULE),\n\tassert_get(C2, 4 * ?DATA_CHUNK_SIZE + ?DATA_CHUNK_SIZE div 2),\n\t?assertEqual(\n\t\tnot_found,\n\t\tar_chunk_storage:get(4 * ?DATA_CHUNK_SIZE + ?DATA_CHUNK_SIZE div 2, 
?DEFAULT_MODULE)\n\t),\n\t?assertEqual(not_found, ar_chunk_storage:get(3 * ?DATA_CHUNK_SIZE + 7, ?DEFAULT_MODULE)),\n\t?assertEqual(not_found, ar_chunk_storage:get(3 * ?DATA_CHUNK_SIZE + 8, ?DEFAULT_MODULE)),\n\tar_chunk_storage:put(5 * ?DATA_CHUNK_SIZE + ?DATA_CHUNK_SIZE div 2 + 1, C2, Packing, ?DEFAULT_MODULE),\n\tassert_get(C2, 5 * ?DATA_CHUNK_SIZE + ?DATA_CHUNK_SIZE div 2 + 1),\n\tassert_get(not_found, 2 * ?DATA_CHUNK_SIZE + 7),\n\tar_chunk_storage:delete(4 * ?DATA_CHUNK_SIZE + ?DATA_CHUNK_SIZE div 2),\n\tassert_get(not_found, 4 * ?DATA_CHUNK_SIZE + ?DATA_CHUNK_SIZE div 2),\n\tassert_get(C2, 5 * ?DATA_CHUNK_SIZE + ?DATA_CHUNK_SIZE div 2 + 1),\n\tassert_get(C1, 3 * ?DATA_CHUNK_SIZE + 7),\n\t?assertEqual([{3 * ?DATA_CHUNK_SIZE + 7, C1}],\n\t\t\tar_chunk_storage:get_range(2 * ?DATA_CHUNK_SIZE + 7, 2 * ?DATA_CHUNK_SIZE)),\n\t?assertEqual([{3 * ?DATA_CHUNK_SIZE + 7, C1}],\n\t\t\tar_chunk_storage:get_range(2 * ?DATA_CHUNK_SIZE + 6, 2 * ?DATA_CHUNK_SIZE)),\n\t?assertEqual([{3 * ?DATA_CHUNK_SIZE + 7, C1},\n\t\t\t{5 * ?DATA_CHUNK_SIZE + ?DATA_CHUNK_SIZE div 2 + 1, C2}],\n\t\t\t%% The end offset of the second chunk is bigger than Start + Size but\n\t\t\t%% it is included because Start + Size is bigger than the start offset\n\t\t\t%% of the bucket where the last chunk is placed.\n\t\t\tar_chunk_storage:get_range(2 * ?DATA_CHUNK_SIZE + 7, 2 * ?DATA_CHUNK_SIZE + 1)),\n\t?assertEqual([{3 * ?DATA_CHUNK_SIZE + 7, C1},\n\t\t\t{5 * ?DATA_CHUNK_SIZE + ?DATA_CHUNK_SIZE div 2 + 1, C2}],\n\t\t\tar_chunk_storage:get_range(2 * ?DATA_CHUNK_SIZE + 7, 3 * ?DATA_CHUNK_SIZE)),\n\t?assertEqual([{3 * ?DATA_CHUNK_SIZE + 7, C1},\n\t\t\t{5 * ?DATA_CHUNK_SIZE + ?DATA_CHUNK_SIZE div 2 + 1, C2}],\n\t\t\tar_chunk_storage:get_range(2 * ?DATA_CHUNK_SIZE + 7 - 1, 3 * ?DATA_CHUNK_SIZE)),\n\t?assertEqual([{3 * ?DATA_CHUNK_SIZE + 7, C1},\n\t\t\t{5 * ?DATA_CHUNK_SIZE + ?DATA_CHUNK_SIZE div 2 + 1, C2}],\n\t\t\tar_chunk_storage:get_range(2 * ?DATA_CHUNK_SIZE, 4 * ?DATA_CHUNK_SIZE)).\n\ncross_file_aligned_test_() ->\n\t{timeout, 20, fun test_cross_file_aligned/0}.\n\ntest_cross_file_aligned() ->\n\tclear(?DEFAULT_MODULE),\n\tPacking = ar_storage_module:get_packing(?DEFAULT_MODULE),\n\tC1 = crypto:strong_rand_bytes(?DATA_CHUNK_SIZE),\n\tC2 = crypto:strong_rand_bytes(?DATA_CHUNK_SIZE),\n\tar_chunk_storage:put(get_chunk_group_size(), C1, Packing, ?DEFAULT_MODULE),\n\tassert_get(C1, get_chunk_group_size()),\n\t?assertEqual(not_found, ar_chunk_storage:get(get_chunk_group_size(), ?DEFAULT_MODULE)),\n\t?assertEqual(not_found, ar_chunk_storage:get(get_chunk_group_size() + 1, ?DEFAULT_MODULE)),\n\t?assertEqual(not_found, ar_chunk_storage:get(0, ?DEFAULT_MODULE)),\n\t?assertEqual(not_found, ar_chunk_storage:get(get_chunk_group_size() - ?DATA_CHUNK_SIZE - 1, ?DEFAULT_MODULE)),\n\tar_chunk_storage:put(get_chunk_group_size() + ?DATA_CHUNK_SIZE, C2, Packing, ?DEFAULT_MODULE),\n\tassert_get(C2, get_chunk_group_size() + ?DATA_CHUNK_SIZE),\n\tassert_get(C1, get_chunk_group_size()),\n\t?assertEqual([{get_chunk_group_size(), C1}, {get_chunk_group_size() + ?DATA_CHUNK_SIZE, C2}],\n\t\t\tar_chunk_storage:get_range(get_chunk_group_size() - ?DATA_CHUNK_SIZE,\n\t\t\t\t\t2 * ?DATA_CHUNK_SIZE)),\n\t?assertEqual([{get_chunk_group_size(), C1}, {get_chunk_group_size() + ?DATA_CHUNK_SIZE, C2}],\n\t\t\tar_chunk_storage:get_range(get_chunk_group_size() - 2 * ?DATA_CHUNK_SIZE - 1,\n\t\t\t\t\t4 * ?DATA_CHUNK_SIZE)),\n\t?assertEqual(not_found, ar_chunk_storage:get(0, ?DEFAULT_MODULE)),\n\t?assertEqual(not_found, ar_chunk_storage:get(get_chunk_group_size() - 
?DATA_CHUNK_SIZE - 1, ?DEFAULT_MODULE)),\n\tar_chunk_storage:delete(get_chunk_group_size(), ?DEFAULT_MODULE),\n\tassert_get(not_found, get_chunk_group_size(), ?DEFAULT_MODULE),\n\tassert_get(C2, get_chunk_group_size() + ?DATA_CHUNK_SIZE),\n\tar_chunk_storage:put(get_chunk_group_size(), C2, Packing, ?DEFAULT_MODULE),\n\tassert_get(C2, get_chunk_group_size()).\n\ncross_file_not_aligned_test_() ->\n\t{timeout, 20, fun test_cross_file_not_aligned/0}.\n\ntest_cross_file_not_aligned() ->\n\tclear(?DEFAULT_MODULE),\n\tPacking = ar_storage_module:get_packing(?DEFAULT_MODULE),\n\tC1 = crypto:strong_rand_bytes(?DATA_CHUNK_SIZE),\n\tC2 = crypto:strong_rand_bytes(?DATA_CHUNK_SIZE),\n\tC3 = crypto:strong_rand_bytes(?DATA_CHUNK_SIZE),\n\tC4 = crypto:strong_rand_bytes(?DATA_CHUNK_SIZE),\n\tC5 = crypto:strong_rand_bytes(?DATA_CHUNK_SIZE),\n\tar_chunk_storage:put(get_chunk_group_size() + 1, C1, Packing, ?DEFAULT_MODULE),\n\tassert_get(C1, get_chunk_group_size() + 1),\n\t?assertEqual(not_found, ar_chunk_storage:get(get_chunk_group_size() + 1, ?DEFAULT_MODULE)),\n\t?assertEqual(not_found, ar_chunk_storage:get(get_chunk_group_size() - ?DATA_CHUNK_SIZE, ?DEFAULT_MODULE)),\n\tar_chunk_storage:put(2 * get_chunk_group_size() + ?DATA_CHUNK_SIZE div 2, C2, Packing, ?DEFAULT_MODULE),\n\tassert_get(C2, 2 * get_chunk_group_size() + ?DATA_CHUNK_SIZE div 2),\n\t?assertEqual(not_found, ar_chunk_storage:get(get_chunk_group_size() + 1, ?DEFAULT_MODULE)),\n\tar_chunk_storage:put(2 * get_chunk_group_size() - ?DATA_CHUNK_SIZE div 2, C3, Packing, ?DEFAULT_MODULE),\n\tassert_get(C2, 2 * get_chunk_group_size() + ?DATA_CHUNK_SIZE div 2),\n\tassert_get(C3, 2 * get_chunk_group_size() - ?DATA_CHUNK_SIZE div 2),\n\tar_chunk_storage:put(2 * get_chunk_group_size() + 3 * ?DATA_CHUNK_SIZE div 2, C4, Packing, ?DEFAULT_MODULE),\n\tar_chunk_storage:put(2 * get_chunk_group_size() + 5 * ?DATA_CHUNK_SIZE div 2, C5, Packing, ?DEFAULT_MODULE),\n\t?assertEqual([{2 * get_chunk_group_size() + ?DATA_CHUNK_SIZE div 2, C2},\n\t\t\t{2 * get_chunk_group_size() + 3 * ?DATA_CHUNK_SIZE div 2, C4}],\n\t\t\tar_chunk_storage:get_range(2 * get_chunk_group_size()\n\t\t\t\t\t- ?DATA_CHUNK_SIZE div 2, ?DATA_CHUNK_SIZE * 2)),\n\t?assertEqual([{2 * get_chunk_group_size() + ?DATA_CHUNK_SIZE div 2, C2},\n\t\t\t{2 * get_chunk_group_size() + 3 * ?DATA_CHUNK_SIZE div 2, C4},\n\t\t\t{2 * get_chunk_group_size() + 5 * ?DATA_CHUNK_SIZE div 2, C5}],\n\t\t\tar_chunk_storage:get_range(2 * get_chunk_group_size()\n\t\t\t\t\t- ?DATA_CHUNK_SIZE div 2 + 10, ?DATA_CHUNK_SIZE * 2)),\n\n\t?assertEqual([{2 * get_chunk_group_size() - ?DATA_CHUNK_SIZE div 2, C3},\n\t\t\t{2 * get_chunk_group_size() + ?DATA_CHUNK_SIZE div 2, C2}],\n\t\t\tar_chunk_storage:get_range(2 * get_chunk_group_size()\n\t\t\t\t\t- ?DATA_CHUNK_SIZE div 2 - ?DATA_CHUNK_SIZE, ?DATA_CHUNK_SIZE * 2)),\n\t?assertEqual([{2 * get_chunk_group_size() - ?DATA_CHUNK_SIZE div 2, C3},\n\t\t\t{2 * get_chunk_group_size() + ?DATA_CHUNK_SIZE div 2, C2},\n\t\t\t{2 * get_chunk_group_size() + 3 * ?DATA_CHUNK_SIZE div 2, C4}],\n\t\t\tar_chunk_storage:get_range(2 * get_chunk_group_size()\n\t\t\t\t\t- ?DATA_CHUNK_SIZE div 2 - ?DATA_CHUNK_SIZE + 10, ?DATA_CHUNK_SIZE * 2)),\n\n\t?assertEqual(not_found, ar_chunk_storage:get(get_chunk_group_size() + 1, ?DEFAULT_MODULE)),\n\t?assertEqual(\n\t\tnot_found,\n\t\tar_chunk_storage:get(get_chunk_group_size() + ?DATA_CHUNK_SIZE div 2 - 1, ?DEFAULT_MODULE)\n\t),\n\tar_chunk_storage:delete(2 * get_chunk_group_size() - ?DATA_CHUNK_SIZE div 2),\n\tassert_get(not_found, 2 * get_chunk_group_size() - 
?DATA_CHUNK_SIZE div 2),\n\tassert_get(C2, 2 * get_chunk_group_size() + ?DATA_CHUNK_SIZE div 2),\n\tassert_get(C1, get_chunk_group_size() + 1),\n\tar_chunk_storage:delete(get_chunk_group_size() + 1),\n\tassert_get(not_found, get_chunk_group_size() + 1),\n\tassert_get(not_found, 2 * get_chunk_group_size() - ?DATA_CHUNK_SIZE div 2),\n\tassert_get(C2, 2 * get_chunk_group_size() + ?DATA_CHUNK_SIZE div 2),\n\tar_chunk_storage:delete(2 * get_chunk_group_size() + ?DATA_CHUNK_SIZE div 2),\n\tassert_get(not_found, 2 * get_chunk_group_size() + ?DATA_CHUNK_SIZE div 2),\n\tar_chunk_storage:delete(get_chunk_group_size() + 1),\n\tar_chunk_storage:delete(100 * get_chunk_group_size() + 1),\n\tar_chunk_storage:put(2 * get_chunk_group_size() - ?DATA_CHUNK_SIZE div 2, C1, Packing, ?DEFAULT_MODULE),\n\tassert_get(C1, 2 * get_chunk_group_size() - ?DATA_CHUNK_SIZE div 2),\n\t?assertEqual(not_found,\n\t\t\tar_chunk_storage:get(2 * get_chunk_group_size() - ?DATA_CHUNK_SIZE div 2, ?DEFAULT_MODULE)).\n\nclear(StoreID) ->\n\tok = gen_server:call(name(StoreID), reset).\n\nassert_get(Expected, Offset) ->\n\tassert_get(Expected, Offset, ?DEFAULT_MODULE).\n\nassert_get(Expected, Offset, StoreID) ->\n\tExpectedResult =\n\t\tcase Expected of\n\t\t\tnot_found ->\n\t\t\t\tnot_found;\n\t\t\t_ ->\n\t\t\t\t{Offset, Expected}\n\t\tend,\n\t?assertEqual(ExpectedResult, ar_chunk_storage:get(Offset - 1, StoreID)),\n\t?assertEqual(ExpectedResult, ar_chunk_storage:get(Offset - 2, StoreID)),\n\t?assertEqual(ExpectedResult, ar_chunk_storage:get(Offset - ?DATA_CHUNK_SIZE, StoreID)),\n\t?assertEqual(ExpectedResult, ar_chunk_storage:get(Offset - ?DATA_CHUNK_SIZE + 1, StoreID)),\n\t?assertEqual(ExpectedResult, ar_chunk_storage:get(Offset - ?DATA_CHUNK_SIZE + 2, StoreID)),\n\t?assertEqual(ExpectedResult,\n\t\t\tar_chunk_storage:get(Offset - ?DATA_CHUNK_SIZE div 2, StoreID)),\n\t?assertEqual(ExpectedResult,\n\t\t\tar_chunk_storage:get(Offset - ?DATA_CHUNK_SIZE div 2 + 1, StoreID)),\n\t?assertEqual(ExpectedResult,\n\t\t\tar_chunk_storage:get(Offset - ?DATA_CHUNK_SIZE div 2 - 1, StoreID)),\n\t?assertEqual(ExpectedResult,\n\t\t\tar_chunk_storage:get(Offset - ?DATA_CHUNK_SIZE div 3, StoreID)).\n\ndefrag_command_test() ->\n\tRandomID = crypto:strong_rand_bytes(16),\n\tFilepath = \"test_defrag_\" ++ binary_to_list(ar_util:encode(RandomID)),\n\t{ok, F} = file:open(Filepath, [binary, write]),\n\t{O1, C1} = {236, crypto:strong_rand_bytes(262144)},\n\t{O2, C2} = {262144, crypto:strong_rand_bytes(262144)},\n\t{O3, C3} = {262143, crypto:strong_rand_bytes(262144)},\n\tfile:pwrite(F, 1, <<\"a\">>),\n\tfile:pwrite(F, 1000, <<\"b\">>),\n\tfile:pwrite(F, 1000000, <<\"cde\">>),\n\tfile:pwrite(F, 10000001, << O1:24, C1/binary, O2:24, C2/binary >>),\n\tfile:pwrite(F, 30000001, << O3:24, C3/binary >>),\n\tfile:close(F),\n\tdefrag_files([Filepath]),\n\t{ok, F2} = file:open(Filepath, [binary, read]),\n\t?assertEqual({ok, <<0>>}, file:pread(F2, 0, 1)),\n\t?assertEqual({ok, <<\"a\">>}, file:pread(F2, 1, 1)),\n\t?assertEqual({ok, <<0>>}, file:pread(F2, 2, 1)),\n\t?assertEqual({ok, <<\"b\">>}, file:pread(F2, 1000, 1)),\n\t?assertEqual({ok, <<\"c\">>}, file:pread(F2, 1000000, 1)),\n\t?assertEqual({ok, <<\"cde\">>}, file:pread(F2, 1000000, 3)),\n\t?assertEqual({ok, C1}, file:pread(F2, 10000001 + 3, 262144)),\n\t?assertMatch({ok, << O1:24, _/binary >>}, file:pread(F2, 10000001, 10)),\n\t?assertMatch({ok, << O1:24, C1:262144/binary, O2:24, C2:262144/binary,\n\t\t\t0:((262144 + 3) * 2 * 8) >>}, file:pread(F2, 10000001, (262144 + 3) * 4)),\n\t?assertMatch({ok, << O3:24, 
C3:262144/binary >>},\n\t\t\tfile:pread(F2, 30000001, 262144 + 3 + 100)). % End of file => +100 is ignored.\n"
  },
  {
    "path": "apps/arweave/src/ar_chunk_storage_sup.erl",
    "content": "-module(ar_chunk_storage_sup).\n\n-behaviour(supervisor).\n\n-export([start_link/0]).\n\n-export([init/1]).\n\n-include_lib(\"arweave/include/ar_sup.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nstart_link() ->\n\tsupervisor:start_link({local, ?MODULE}, ?MODULE, []).\n\n%% ===================================================================\n%% Supervisor callbacks.\n%% ===================================================================\n\ninit([]) ->\n\tets:new(chunk_storage_file_index, [set, public, named_table, {read_concurrency, true}]),\n\n\tWorkers = ar_chunk_storage:register_workers() ++\n\t\tar_repack:register_workers() ++\n\t\tar_entropy_gen:register_workers(ar_entropy_gen) ++\n\t\tar_entropy_gen:register_workers(ar_entropy_storage),\n\t{ok, {{one_for_one, 5, 10}, Workers}}.\n"
  },
  {
    "path": "apps/arweave/src/ar_chunk_visualization.erl",
    "content": "-module(ar_chunk_visualization).\n\n-export([get_chunk_packings/3, get_chunk_packings/4, generate_bitmap/1, bitmap_to_binary/1, print_chunk_stats/1]).\n\n-include_lib(\"ar.hrl\").\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Build a list of lists where each inner list represents a sector worth of packing\n%% formats. Each sector row will form a row in the bitmap.\nget_chunk_packings(ModuleStart, ModuleEnd, StoreID) ->\n\tget_chunk_packings(ModuleStart, ModuleEnd, StoreID, false).\nget_chunk_packings(ModuleStart, ModuleEnd, StoreID, PrintProgress) ->\n\tPartition = ar_node:get_partition_number(ModuleStart),\n\tPartitionStart = ar_chunk_storage:get_chunk_bucket_start(ModuleStart),\n\tSectorSize = ar_block:get_replica_2_9_entropy_sector_size(),\n\tBucketsPerSector = SectorSize div ?DATA_CHUNK_SIZE,\n\tNumSectors = ar_block:get_replica_2_9_entropy_partition_size() div SectorSize,\n\n\tcase PrintProgress of\n\t\ttrue ->\n\t\t\tar:console(\"Partition ~p~n\", [Partition]),\n\t\t\tar:console(\"PartitionStart: ~p~n\", [PartitionStart]),\n\t\t\tar:console(\"SectorSize: ~p~n\", [SectorSize]),\n\t\t\tar:console(\"BucketsPerSector: ~p~n\", [BucketsPerSector]),\n\t\t\tar:console(\"NumSectors: ~p~n\", [NumSectors]);\n\t\t_ ->\n\t\t\tok\n\tend,\n\t\n\tlists:map(\n\t\tfun(SectorIndex) ->\n\t\t\tSectorStart = PartitionStart + SectorIndex * SectorSize,\n\t\t\tSectorEnd = SectorStart + SectorSize,\n\t\t\t%% Chunk Range will be a bit larger than the sector range to make sure we don't\n\t\t\t%% miss any chunks.\n\t\t\tChunkRangeStart = ar_chunk_storage:get_chunk_byte_from_bucket_end(SectorStart),\n\t\t\tChunkRangeEnd =\n\t\t\t\tar_chunk_storage:get_chunk_byte_from_bucket_end(SectorEnd) + ?DATA_CHUNK_SIZE,\n\t\t\tcase PrintProgress of\n\t\t\t\ttrue ->\n\t\t\t\t\tar:console(\n\t\t\t\t\t\t\"Partition ~p sector ~4B. Bucket Offsets ~p to ~p. 
\"\n\t\t\t\t\t\t\"Chunk Range ~p to ~p.~n\", [\n\t\t\t\t\t\t\tPartition, SectorIndex,\n\t\t\t\t\t\t\tSectorStart, SectorEnd,\n\t\t\t\t\t\t\tChunkRangeStart, ChunkRangeEnd]);\n\t\t\t\tfalse ->\n\t\t\t\t\tok\n\t\t\tend,\n\t\t\t{ok, MetadataRange} = ar_data_sync:get_chunk_metadata_range(\n\t\t\t\t\tChunkRangeStart, ChunkRangeEnd, StoreID),\n\t\t\t\n\t\t\t% Initialize map with all bucket end offsets set to 'missing'\n\t\t\tBucketMap = lists:foldl(\n\t\t\t\tfun(J, Acc) ->\n\t\t\t\t\tBucketEndOffset = SectorStart + J * ?DATA_CHUNK_SIZE,\n\t\t\t\t\tcase BucketEndOffset < ModuleStart orelse BucketEndOffset > ModuleEnd of\n\t\t\t\t\t\ttrue -> Acc;\n\t\t\t\t\t\tfalse -> maps:put(BucketEndOffset, missing, Acc)\n\t\t\t\t\tend\n\t\t\t\tend,\n\t\t\t\t#{},\n\t\t\t\tlists:seq(1, BucketsPerSector)),\n\n\t\t\t% Process metadata to update the map\n\t\t\tUpdatedMap = maps:fold(\n\t\t\t\tfun(AbsoluteEndOffset, Metadata, Acc) ->\n\t\t\t\t\tBucketEndOffset = ar_chunk_storage:get_chunk_bucket_end(AbsoluteEndOffset),\n\t\t\t\t\tcase maps:is_key(BucketEndOffset, Acc) of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\tIsRecorded = ar_sync_record:is_recorded(\n\t\t\t\t\t\t\t\tAbsoluteEndOffset, ar_data_sync, StoreID),\n\t\t\t\t\t\t\tmaps:put(BucketEndOffset,\n\t\t\t\t\t\t\t\tnormalize_sync_record(IsRecorded, AbsoluteEndOffset, Metadata),\n\t\t\t\t\t\t\t\tAcc);\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tAcc\n\t\t\t\t\tend\n\t\t\t\tend,\n\t\t\t\tBucketMap,\n\t\t\t\tMetadataRange),\n\n\t\t\t% Convert map to list in order of bucket end offsets\n\t\t\tlists:map(\n\t\t\t\tfun(J) ->\n\t\t\t\t\tBucketEndOffset = SectorStart + J * ?DATA_CHUNK_SIZE,\n\t\t\t\t\tcase BucketEndOffset < ModuleStart orelse BucketEndOffset > ModuleEnd of\n\t\t\t\t\t\ttrue -> none;\n\t\t\t\t\t\tfalse -> maps:get(BucketEndOffset, UpdatedMap)\n\t\t\t\t\tend\n\t\t\t\tend,\n\t\t\t\tlists:seq(1, BucketsPerSector))\n\t\tend,\n\t\tlists:seq(0, NumSectors - 1)).\n\t\n%% @doc Convert packing formats to RGB pixels.\ngenerate_bitmap(PackingRows) ->\n\tlists:map(\n\t\tfun(Row) ->\n\t\t\tlists:map(fun packing_color/1, Row)\n\t\tend,\n\t\tPackingRows).\n\n%% @doc Convert a bitmap (list of rows; each row a list of {R, G, B} tuples)\n%% into a binary PPM image.\nbitmap_to_binary(BitmapRows) ->\n\tHeight = length(BitmapRows),\n\tWidth =\n\t\tcase BitmapRows of\n\t\t\t[Row | _] ->\n\t\t\t\tlength(Row);\n\t\t\t[] ->\n\t\t\t\t0\n\t\tend,\n\tHeader = io_lib:format(\"P6\\n~w ~w\\n255\\n\", [Width, Height]),\n\t%% Build pixel binary data (each pixel is 3 bytes: R,G,B)\n\tPixelData = [<<R:8, G:8, B:8>> || Row <- BitmapRows, {R, G, B} <- Row],\n\tlist_to_binary([Header, PixelData]).\n\nprint_chunk_stats(ChunkPackings) ->\n\tCounts = chunk_statistics(ChunkPackings),\n\tTotal = maps:fold(fun(_Format, Count, Acc) -> Count + Acc end, 0, Counts),\n\tar:console(\"Total chunks: ~p~n\", [Total]),\n\tar:console(\"Chunk counts by packing format:~n\"),\n\tlists:foreach(\n\t\tfun({Packing, Count}) ->\n\t\t\tPercentage =\n\t\t\t\tcase Total of\n\t\t\t\t\t0 -> 0.0;\n\t\t\t\t\t_ -> Count * 100 / Total\n\t\t\t\tend,\n\t\t\tar:console(\"~p (~p): ~p chunks (~.2f%)~n\",\n\t\t\t\t[ar_serialize:encode_packing(Packing, false),\n\t\t\t\t\tpacking_color(Packing),\n\t\t\t\t\tCount,\n\t\t\t\t\tPercentage])\n\t\tend,\n\t\tlists:sort(\n\t\t\tmaps:to_list(Counts))).\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nnormalize_sync_record(false, _, _) ->\n\tmissing;\nnormalize_sync_record(_, _, 
not_found) ->\n\terror;\nnormalize_sync_record({true, Packing}, PaddedEndOffset, Metadata) ->\n\t{_, _, _, _, _, ChunkSize} = Metadata,\n\tcase ar_chunk_storage:is_storage_supported(PaddedEndOffset, ChunkSize, Packing) of\n\t\ttrue ->\n\t\t\tPacking;\n\t\tfalse ->\n\t\t\ttoo_small\n\tend;\nnormalize_sync_record(_, _, _) ->\n\terror.\n\n%% @doc Returns a unique color (as an {R,G,B} tuple) for each recognized packing format.\npacking_color(missing) ->\n\t{0, 0, 0};\npacking_color(error) ->\n\t{255, 0, 0};\npacking_color(too_small) ->\n\t{255, 0, 255};\npacking_color(unpacked) ->\n\t{255, 255, 255};\npacking_color(unpacked_padded) ->\n\t{128, 128, 128};\npacking_color(none) ->\n\t{0, 255, 255};\npacking_color({Format, Addr, _PackingDifficulty}) ->\n\tpacking_color({Format, Addr});\npacking_color({Format, Addr}) ->\n\tBaseColor = packing_color(Format),\n\t%% Compute a hash from Addr and extract offsets\n\tHash = erlang:phash2(Addr, 16777216),\n\tRoffset = Hash band 255,\n\tGoffset = (Hash bsr 8) band 255,\n\tBoffset = (Hash bsr 16) band 255,\n\t{(element(1, BaseColor) + Roffset) rem 256,\n\t (element(2, BaseColor) + Goffset) rem 256,\n\t (element(3, BaseColor) + Boffset) rem 256};\n%% Base colors for known packing formats\npacking_color(replica_2_9) ->\n\t{0, 0, 255}; %% blue\npacking_color(spora_2_6) ->\n\t{0, 255, 0}; %% green\npacking_color(composite) ->\n\t{255, 255, 0}; %% yellow\npacking_color(_) ->\n\t{255, 0, 0}. %% red for unknown packings\n\nchunk_statistics(ChunkPackings) ->\n\tlists:foldl(\n\t\tfun(Row, AccCounts) ->\n\t\t\tlists:foldl(\n\t\t\t\tfun(Packing, RowAccCounts) ->\n\t\t\t\t\tmaps:update_with(Packing, fun(N) -> N + 1 end, 1, RowAccCounts)\n\t\t\t\tend,\n\t\t\t\tAccCounts,\n\t\t\t\tRow)\n\t\tend,\n\t\t#{},\n\t\tChunkPackings).\n"
  },
  {
    "path": "apps/arweave/src/ar_cli_parser.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2025 (c) Arweave\n%%% @author Arweave Team\n%%% @doc `ar_cli_parser' module legacy command line argument parser.\n%%%\n%%% This module has been created from `ar' module. The code is mostly\n%%% the same, except the exported interfaces have been renamed.\n%%%\n%%% @end\n%%%===================================================================\n-module(ar_cli_parser).\n-compile(warnings_as_errors).\n-export([\n\teval/2,\n\tparse/2,\n\tshow_help/0\n]).\n-include(\"ar.hrl\").\n-include(\"ar_consensus.hrl\").\n-include(\"ar_verify_chunks.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n%%--------------------------------------------------------------------\n%% @doc show help and stop the node.\n%% @end\n%%--------------------------------------------------------------------\nshow_help() ->\n\tio:format(\"Usage: arweave-server [options]~n\"),\n\tio:format(\"Compatible with network: ~s~n\", [?NETWORK_NAME]),\n\tio:format(\"Options:~n\"),\n\tlists:foreach(\n\t\tfun({Opt, Desc}) ->\n\t\t\tio:format(\"\\t~s~s~n\",\n\t\t\t\t[\n\t\t\t\t\tstring:pad(Opt, 40, trailing, $ ),\n\t\t\t\t\tDesc\n\t\t\t\t]\n\t\t\t)\n\t\tend,\n\t\t[\n\t\t\t{\"config_file (path)\", io_lib:format(\"Load the configuration from the \"\n\t\t\t\t\"specified JSON file.~n~n\"\n\t\t\t\t\"The configuration file is currently the only place where you may configure \"\n\t\t\t\t\"webhooks and tune semaphores.~n~n\"\n\t\t\t\t\"An example:~n~n\"\n\t\t\t\t\"{~n\"\n\t\t\t\t\"  \\\"webhooks\\\": [~n\"\n\t\t\t\t\"    {~n\"\n\t\t\t\t\"      \\\"events\\\": [\\\"transaction\\\", \\\"block\\\"],~n\"\n\t\t\t\t\"      \\\"url\\\": \\\"https://example.com/block_or_tx\\\",~n\"\n\t\t\t\t\"      \\\"headers\\\": {~n\"\n\t\t\t\t\"        \\\"Authorization\\\": \\\"Bearer 123\\\"~n\"\n\t\t\t\t\"       }~n\"\n\t\t\t\t\"    },~n\"\n\t\t\t\t\"    {~n\"\n\t\t\t\t\"      \\\"events\\\": [\\\"transaction_data\\\"],~n\"\n\t\t\t\t\"      \\\"url\\\": \\\"http://127.0.0.1:1985/tx_data\\\"~n\"\n\t\t\t\t\"    },~n\"\n\t\t\t\t\"    {~n\"\n\t\t\t\t\"      \\\"events\\\": [\\\"solution\\\"],~n\"\n\t\t\t\t\"      \\\"url\\\": \\\"http://127.0.0.1:1985/solution\\\"~n\"\n\t\t\t\t\"    },~n\"\n\t\t\t\t\"  \\\"semaphores\\\": {\\\"post_tx\\\": 100}~n\"\n\t\t\t\t\"}~n~n\"\n\t\t\t\t\"100 means the node will validate up to 100 incoming transactions in \"\n\t\t\t\t\"parallel.~n\"\n\t\t\t\t\"The supported semaphore keys are get_chunk, get_and_pack_chunk, get_tx_data, \"\n\t\t\t\t\"post_chunk, post_tx, get_block_index, get_wallet_list, get_sync_record.~n~n\",\n\t\t\t\t[])},\n\t\t\t{\"peer (IP:port)\", \"Join a network on a peer (or set of peers).\"},\n\t\t\t{\"block_gossip_peer (IP:port)\", \"Optionally specify peer(s) to always\"\n\t\t\t\t\t\" send blocks to.\"},\n\t\t\t{\"local_peer (IP:port)\", \"The local network peer. 
Local peers do not rate limit \"\n\t\t\t\t\t\"each other so we recommend you connect all your nodes from the same \"\n\t\t\t\t\t\"network via this configuration parameter.\"},\n\t\t\t{\"sync_from_local_peers_only\", \"If set, the data (not headers) is only synced \"\n\t\t\t\t\t\"from the local network peers specified via the local_peer parameter.\"},\n\t\t\t{\"start_from_latest_state\", \"Start the node from the latest stored state.\"},\n\t\t\t{\"start_from_state (folder)\", \"Start the node from the state stored in the \"\n\t\t\t\t\t\"specified folder. This folder must be different from data_dir. \"\n\t\t\t\t\t\"Implicitly sets start_from_latest_state to true.\"},\n\t\t\t{\"start_from_block (block hash)\", \"Start the node from the state corresponding \"\n\t\t\t\t\t\"to the given block hash.\"},\n\t\t\t{\"start_from_block_index\", \"The legacy name for start_from_latest_state.\"},\n\t\t\t{\"mine\", \"Automatically start mining once the network has been joined.\"},\n\t\t\t{\"port\", \"The local port to use for mining. \"\n\t\t\t\t\t\t\"This port must be accessible by remote peers.\"},\n\t\t\t{\"data_dir\",\n\t\t\t\t\"The directory for storing the weave and the wallets (when generated).\"},\n\t\t\t{\"log_dir\", \"The directory for logs. If the \\\"debug\\\" flag is set, the debug logs \"\n\t\t\t\t\t\"are written to logs/debug_logs/. The RocksDB logs are written to \"\n\t\t\t\t\t\"logs/rocksdb/.\"},\n\t\t\t{\"storage_module\", \"A storage module is responsible for synchronizing and storing \"\n\t\t\t\t\t\"a particular data range. The data and metadata related to the module \"\n\t\t\t\t\t\"are stored in a dedicated folder \"\n\t\t\t\t\t\"([data_dir]/storage_modules/storage_module_[partition_number]_[replica_type]/\"\n\t\t\t\t\t\") where replica_type is either {mining_address} or\"\n\t\t\t\t\t\" {mining address}.{composite packing difficulty} or\"\n\t\t\t\t\t\" {mining address}.replica.2.9 or \\\"unpacked\\\".\"\n\t\t\t\t\t\" Example: storage_module 0,En2eqsVJARnTVOSh723PBXAKGmKgrGSjQ2YIGwE_ZRI.1. \"\n\t\t\t\t\t\"To configure a module of a custom size, set \"\n\t\t\t\t\t\"storage_module {number},{size_in_bytes},{replica_type}. For instance, \"\n\t\t\t\t\t\"storage_module \"\n\t\t\t\t\t\"22,1000000000000,En2eqsVJARnTVOSh723PBXAKGmKgrGSjQ2YIGwE_ZRI.1 will be \"\n\t\t\t\t\t\"syncing the weave data between the offsets 22 TB and 23 TB. Make sure \"\n\t\t\t\t\t\"the corresponding disk contains some extra space for the proofs and \"\n\t\t\t\t\t\"other metadata, about 10% of the configured size. \"\n\n\t\t\t\t\t\"You may repack a storage module in-place. To do that, specify \"\n\t\t\t\t\t\"storage_module \"\n\t\t\t\t\t\"{partition_number},{packing},repack_in_place,{target_packing}. \"\n\t\t\t\t\t\"For example, if you want to repack a storage module \"\n\t\t\t\t\t\"22,En2eqsVJARnTVOSh723PBXAKGmKgrGSjQ2YIGwE_ZRI.1 to the new address \"\n\t\t\t\t\t\"Q5EfKawrRazp11HEDf_NJpxjYMV385j21nlQNjR8_pY, specify \"\n\t\t\t\t\t\"storage_module \"\n\t\t\t\t\t\"22,En2eqsVJARnTVOSh723PBXAKGmKgrGSjQ2YIGwE_ZRI.1,repack_in_place,\"\n\t\t\t\t\t\"Q5EfKawrRazp11HEDf_NJpxjYMV385j21nlQNjR8_pY.replica.2.9. This storage module \"\n\t\t\t\t\t\"will only do the repacking - it won't be used for mining and won't \"\n\t\t\t\t\t\"serve any data to peers. Once the repacking is complete, a message will \"\n\t\t\t\t\t\"be logged to the file and written to the console. We suggest you rename \"\n\t\t\t\t\t\"the storage module folder according to the new packing then. 
\"\n\n\t\t\t\t\t\"Note: as of 2.9.1 you can only repack in place to the replica_2_9 \"\n\t\t\t\t\t\"format.\"\n\t\t\t},\n\t\t\t{\"repack_batch_size\", io_lib:format(\"The number of batches to process at a time \"\n\t\t\t\t\"during in-place repacking. For each partition being repacked, a batch \"\n\t\t\t\t\"requires about 512 MiB of memory. Default: ~B.\",\n\t\t\t\t[?DEFAULT_REPACK_BATCH_SIZE])},\n\t\t\t{\"repack_cache_size_mb\", io_lib:format(\"The size (in MiB) of the cache for \"\n\t\t\t\t\"in-place repacking. The node will restrict the cache size to this amount for \"\n\t\t\t\t\"each partition being repacked. Default: ~B.\",\n\t\t\t\t[?DEFAULT_REPACK_CACHE_SIZE_MB])},\n\t\t\t{\"polling (num)\", lists:flatten(\n\t\t\t\t\tio_lib:format(\n\t\t\t\t\t\t\"Ask some peers about new blocks every N seconds. Default is ~p.\",\n\t\t\t\t\t\t[?DEFAULT_POLLING_INTERVAL]\n\t\t\t\t\t)\n\t\t\t)},\n\t\t\t{\"block_pollers (num)\", io_lib:format(\n\t\t\t\t\t\"How many peer polling jobs to run. Default is ~p.\",\n\t\t\t\t\t[?DEFAULT_BLOCK_POLLERS])},\n\t\t\t{\"no_auto_join\", \"Do not automatically join the network of your peers.\"},\n\t\t\t{\"join_workers (num)\", io_lib:format(\"The number of workers fetching the recent \"\n\t\t\t\t\t\"blocks and transactions simultaneously when joining the network. \"\n\t\t\t\t\t\"Default: ~B. \", [?DEFAULT_JOIN_WORKERS])},\n\t\t\t{\"mining_addr (addr)\",\n\t\t\t\tio_lib:format(\n\t\t\t\t\"The address mining rewards should be credited to. If the \\\"mine\\\" flag\"\n\t\t\t\t\" is set but no mining_addr is specified, an RSA PSS key is created\"\n\t\t\t\t\" and stored in the [data_dir]/~s directory. If the directory already\"\n\t\t\t\t\" contains such keys, the one written later is picked, no new files are\"\n\t\t\t\t\" created. After the fork 2.6, the specified address is also a replication key, \"\n\t\t\t\t\"so it is used to prepare synced data for mining even if the \\\"mine\\\" flag is not \"\n\t\t\t\t\"specified. The data already packed with different addresses is not repacked.\",\n\t\t\t\t[?WALLET_DIR])},\n\t\t\t{\"hashing_threads (num)\", io_lib:format(\"The number of hashing processes to spawn.\"\n\t\t\t\t\t\" Takes effect starting from the fork 2.6 block.\"\n\t\t\t\t\t\" Default is ~B.\", [?NUM_HASHING_PROCESSES])},\n\t\t\t{\"data_cache_size_limit (num)\", \"The approximate maximum number of data chunks \"\n\t\t\t\t\t\"kept in memory by the syncing processes.\"},\n\t\t\t{\"packing_cache_size_limit (num)\", \"The approximate maximum number of data chunks \"\n\t\t\t\t\t\"kept in memory by the packing process.\"},\n\t\t\t{\"mining_cache_size_mb (num)\", \"The total amount of cache \"\n\t\t\t\t\"(in MiB) allocated to store unprocessed chunks while mining. The mining \"\n\t\t\t\t\"server will only read new data when there is room in the cache to store \"\n\t\t\t\t\"more chunks. This cache is subdivided into sub-caches for each mined \"\n\t\t\t\t\"partition. When omitted, it is determined based on the number of \"\n\t\t\t\t\"mining partitions.\"},\n\t\t\t{\"max_emitters (num)\", io_lib:format(\"The number of transaction propagation \"\n\t\t\t\t\"processes to spawn. Must be at least 1. Default is ~B.\", [?NUM_EMITTER_PROCESSES])},\n\t\t\t{\"post_tx_timeout\", io_lib:format(\"The time in seconds to wait for the available\"\n\t\t\t\t\" tx validation process before dropping the POST /tx request. Default is ~B.\"\n\t\t\t\t\" By default ~B validation processes are running. 
You can override it by\"\n\t\t\t\t\" setting a different value for the post_tx key in the semaphores object\"\n\t\t\t\t\" in the configuration file.\", [?DEFAULT_POST_TX_TIMEOUT,\n\t\t\t\t\t\t?MAX_PARALLEL_POST_TX_REQUESTS])},\n\t\t\t{\"max_propagation_peers\", io_lib:format(\n\t\t\t\t\"The maximum number of peers to propagate transactions to. \"\n\t\t\t\t\"Default is ~B.\", [?DEFAULT_MAX_PROPAGATION_PEERS])},\n\t\t\t{\"max_block_propagation_peers\", io_lib:format(\n\t\t\t\t\"The maximum number of best peers to propagate blocks to. \"\n\t\t\t\t\"Default is ~B.\", [?DEFAULT_MAX_BLOCK_PROPAGATION_PEERS])},\n\t\t\t{\"sync_jobs (num)\",\n\t\t\t\tio_lib:format(\n\t\t\t\t\t\"The number of data syncing jobs to run. Default: ~B.\"\n\t\t\t\t\t\" Each job periodically picks a range and downloads it from peers.\",\n\t\t\t\t\t[?DEFAULT_SYNC_JOBS]\n\t\t\t\t)},\n\t\t\t{\"header_sync_jobs (num)\",\n\t\t\t\tio_lib:format(\n\t\t\t\t\t\"The number of header syncing jobs to run. Default: ~B.\"\n\t\t\t\t\t\" Each job periodically picks the latest not synced block header\"\n\t\t\t\t\t\" and downloads it from peers.\",\n\t\t\t\t\t[?DEFAULT_HEADER_SYNC_JOBS]\n\t\t\t\t)},\n\t\t\t{\"enable_data_roots_syncing [true|false]\",\n\t\t\t\t\"Enable or disable background data roots syncing. Default: true.\"},\n\t\t\t{\"data_sync_request_packed_chunks\",\n\t\t\t\t\"Enables requesting the packed chunks from peers.\"},\n\t\t\t{\"disk_pool_jobs (num)\",\n\t\t\t\tio_lib:format(\n\t\t\t\t\t\"The number of disk pool jobs to run. Default: ~B.\"\n\t\t\t\t\t\" Disk pool jobs scan the disk pool to index no longer pending or\"\n\t\t\t\t\t\" orphaned chunks, schedule packing for chunks with a sufficient\"\n\t\t\t\t\t\" number of confirmations and remove abandoned chunks.\",\n\t\t\t\t\t[?DEFAULT_DISK_POOL_JOBS]\n\t\t\t\t)},\n\t\t\t{\"load_mining_key (file)\", \"DEPRECATED. Does not take effect anymore.\"},\n\t\t\t{\"transaction_blacklist (file)\", \"A file containing blacklisted transactions. \"\n\t\t\t\t\t\t\t\t\t\t\t \"One Base64 encoded transaction ID per line.\"},\n\t\t\t{\"transaction_blacklist_url\", \"An HTTP endpoint serving a transaction blacklist.\"},\n\t\t\t{\"transaction_whitelist (file)\", \"A file containing whitelisted transactions. \"\n\t\t\t\t\t\t\t\t\t\t\t \"One Base64 encoded transaction ID per line. \"\n\t\t\t\t\t\t\t\t\t\t\t \"If a transaction is in both lists, it is \"\n\t\t\t\t\t\t\t\t\t\t\t \"considered whitelisted.\"},\n\t\t\t{\"transaction_whitelist_url\", \"An HTTP endpoint serving a transaction whitelist.\"},\n\t\t\t{\"disk_space_check_frequency (num)\",\n\t\t\t\tio_lib:format(\n\t\t\t\t\t\"The frequency in seconds of requesting the information \"\n\t\t\t\t\t\"about the available disk space from the operating system, \"\n\t\t\t\t\t\"used to decide on whether to continue syncing the historical \"\n\t\t\t\t\t\"data or clean up some space. Default is ~B.\",\n\t\t\t\t\t[?DISK_SPACE_CHECK_FREQUENCY_MS div 1000]\n\t\t\t\t)},\n\t\t\t{\"init\", \"Start a new weave.\"},\n\t\t\t{\"internal_api_secret (secret)\",\n\t\t\t\tlists:flatten(io_lib:format(\n\t\t\t\t\t\t\"Enables the internal API endpoints, only accessible with this secret.\"\n\t\t\t\t\t\t\" Min. ~B chars.\",\n\t\t\t\t\t\t[?INTERNAL_API_SECRET_MIN_LEN]))},\n\t\t\t{\"enable (feature)\", \"Enable a specific (normally disabled) feature. 
For example, \"\n\t\t\t\t\t\"subfield_queries.\"},\n\t\t\t{\"disable (feature)\", \"Disable a specific (normally enabled) feature.\"},\n\t\t\t{\"requests_per_minute_limit (number)\", \"Limit the maximum allowed number of HTTP \"\n\t\t\t\t\t\"requests per IP address per minute. Default is 900.\"},\n\t\t\t{\"max_connections\", io_lib:format(\n\t\t\t\t\"The number of connections to be handled concurrently. \"\n\t\t\t\t\"Its purpose is to prevent your system from being overloaded and \"\n\t\t\t\t\"ensure all the connections are handled optimally. \"\n\t\t\t\t\"Default is ~p.\",\n\t\t\t\t[?DEFAULT_COWBOY_TCP_MAX_CONNECTIONS]\n\t\t\t)},\n\t\t\t{\"disk_pool_data_root_expiration_time\",\n\t\t\t\t\"The time in seconds of how long a pending or orphaned data root is kept in \"\n\t\t\t\t\"the disk pool. The default is 2 * 60 * 60 (2 hours).\"},\n\t\t\t{\"max_disk_pool_buffer_mb\",\n\t\t\t\t\"The max total size (in MiB) of the pending chunks in the disk pool. \"\n\t\t\t\t\"The default is 2000 (2 GiB).\"},\n\t\t\t{\"max_disk_pool_data_root_buffer_mb\",\n\t\t\t\t\"The max size (in MiB) per data root of the pending chunks in the disk\"\n\t\t\t\t\" pool. The default is 50.\"},\n\t\t\t{\"max_duplicate_data_roots\",\n\t\t\t\tio_lib:format(\n\t\t\t\t\t\"The maximum number of duplicate data roots to inspect when \"\n\t\t\t\t\t\"checking whether a posted chunk is already synced. Default is ~B.\",\n\t\t\t\t\t[?DEFAULT_MAX_DUPLICATE_DATA_ROOTS]\n\t\t\t\t)},\n\t\t\t{\"disk_cache_size_mb\",\n\t\t\t\tlists:flatten(io_lib:format(\n\t\t\t\t\t\"The maximum size (in MiB) of the disk space allocated for\"\n\t\t\t\t\t\" storing recent block and transaction headers. Default is ~B.\",\n\t\t\t\t\t[?DISK_CACHE_SIZE]\n\t\t\t\t)\n\t\t\t)},\n\t\t\t{\"packing_workers (num)\",\n\t\t\t\t\"The number of packing workers to spawn. The default is the number of \"\n\t\t\t\t\"logical CPU cores.\"},\n\t\t\t{\"replica_2_9_workers (num)\", io_lib:format(\n\t\t\t\t\"The number of replica 2.9 workers to spawn. Replica 2.9 workers are used \"\n\t\t\t\t\"to generate entropy for the replica.2.9 format. By default, at most one \"\n\t\t\t\t\"worker will be active per physical disk at a time. Default: ~B\",\n\t\t\t\t[?DEFAULT_REPLICA_2_9_WORKERS]\n\t\t\t)},\n\t\t\t{\"disable_replica_2_9_device_limit\",\n\t\t\t\t\"Disable the device limit for the replica.2.9 format. By default, at most \"\n\t\t\t\t\"one worker will be active per physical disk at a time, setting this flag \"\n\t\t\t\t\"removes this limit allowing multiple workers to be active on a given \"\n\t\t\t\t\"physical disk.\"\n\t\t\t},\n\t\t\t{\"replica_2_9_entropy_cache_size_mb (num)\", io_lib:format(\n\t\t\t\t\"The maximum cache size (in MiB) to allocate for replica.2.9 entropy. \"\n\t\t\t\t\"Each cached entropy is 256 MiB. The bigger the cache, the more replica.2.9 data \"\n\t\t\t\t\"can be synced concurrently. Default: ~B\",\n\t\t\t\t[?DEFAULT_REPLICA_2_9_ENTROPY_CACHE_SIZE_MB]\n\t\t\t)},\n\t\t\t{\"max_vdf_validation_thread_count\", io_lib:format(\"\\tThe maximum number \"\n\t\t\t\t\t\"of threads used for VDF validation. Default: ~B\",\n\t\t\t\t\t[?DEFAULT_MAX_NONCE_LIMITER_VALIDATION_THREAD_COUNT])},\n\t\t\t{\"max_vdf_last_step_validation_thread_count\", io_lib:format(\n\t\t\t\t\t\"\\tThe maximum number of threads used for VDF last step \"\n\t\t\t\t\t\"validation. 
Default: ~B\",\n\t\t\t\t\t[?DEFAULT_MAX_NONCE_LIMITER_LAST_STEP_VALIDATION_THREAD_COUNT])},\n\t\t\t{\"vdf_server_trusted_peer\", \"If the option is set, we expect the given \"\n\t\t\t\t\t\"peer(s) to push VDF updates to us; we will thus not compute VDF outputs \"\n\t\t\t\t\t\"ourselves. Recommended on CPUs without hardware extensions for computing\"\n\t\t\t\t\t\" SHA-2. We will nevertheless validate VDF chains in blocks. Also, we \"\n\t\t\t\t\t\"recommend you specify at least two trusted peers to aim for shorter \"\n\t\t\t\t\t\"mining downtime.\"},\n\t\t\t{\"vdf_client_peer\", \"If the option is set, the node will push VDF updates \"\n\t\t\t\t\t\"to this peer. You can specify several vdf_client_peer options.\"},\n\t\t\t{\"debug\",\n\t\t\t\t\"Enable extended logging.\"},\n\t\t\t{\"run_defragmentation\",\n\t\t\t\t\"Run defragmentation of chunk storage files.\"},\n\t\t\t{\"defragmentation_trigger_threshold\",\n\t\t\t\t\"File size threshold in bytes for it to be considered for defragmentation.\"},\n\t\t\t{\"block_throttle_by_ip_interval (number)\",\n\t\t\t\t\tio_lib:format(\"The number of milliseconds that have to pass before \"\n\t\t\t\t\t\t\t\"we accept another block from the same IP address. Default: ~B.\",\n\t\t\t\t\t\t\t[?DEFAULT_BLOCK_THROTTLE_BY_IP_INTERVAL_MS])},\n\t\t\t{\"block_throttle_by_solution_interval (number)\",\n\t\t\t\t\tio_lib:format(\"The number of milliseconds that have to pass before \"\n\t\t\t\t\t\t\t\"we accept another block with the same solution hash. \"\n\t\t\t\t\t\t\t\"Default: ~B.\",\n\t\t\t\t\t\t\t[?DEFAULT_BLOCK_THROTTLE_BY_SOLUTION_INTERVAL_MS])},\n\t\t\t{\"defragment_module\",\n\t\t\t\t\"Run defragmentation of the chunk storage files from the given storage module.\"\n\t\t\t\t\" Assumes the run_defragmentation flag is provided.\"},\n\t\t\t{\"tls_cert_file\",\n\t\t\t\t\"Optional path to the TLS certificate file for TLS support, \"\n\t\t\t\t\"depends on 'tls_key_file' being set as well.\"},\n\t\t\t{\"tls_key_file\",\n\t\t\t\t\"The path to the TLS key file for TLS support, depends \"\n\t\t\t\t\"on 'tls_cert_file' being set as well.\"},\n\t\t\t{\"coordinated_mining\", \"Enable coordinated mining. If you are a solo pool miner \"\n\t\t\t\t\t\"coordinating on a replica with other pool miners, set this flag too. \"\n\t\t\t\t\t\"To connect the internal nodes, set cm_api_secret, cm_peer, \"\n\t\t\t\t\t\"and cm_exit_peer. Make sure every node specifies every other node in the \"\n\t\t\t\t\t\"cluster via cm_peer or cm_exit_peer. The same peer may be both cm_peer \"\n\t\t\t\t\t\"and cm_exit_peer. Also, set the mine flag on every CM peer. You may or \"\n\t\t\t\t\t\"may not set the mine flag on the exit peer.\"},\n\t\t\t{\"cm_api_secret\", \"Coordinated mining secret for authenticated \"\n\t\t\t\t\t\"requests between private peers. You need to also set coordinated_mining, \"\n\t\t\t\t\t\"cm_peer, and cm_exit_peer.\"},\n\t\t\t{\"cm_poll_interval\", io_lib:format(\"The frequency in milliseconds of asking the \"\n\t\t\t\t\t\"other nodes in the coordinated mining setup about their partition \"\n\t\t\t\t\t\"tables. Default is ~B.\", [?DEFAULT_CM_POLL_INTERVAL_MS])},\n\t\t\t{\"cm_out_batch_timeout (num)\", io_lib:format(\"The frequency in milliseconds of \"\n\t\t\t\t\t\"sending other nodes in the coordinated mining setup a batch of H1 \"\n\t\t\t\t\t\"values to hash. A higher value reduces network traffic, a lower value \"\n\t\t\t\t\t\"reduces hashing latency. 
Default is ~B.\",\n\t\t\t\t\t[?DEFAULT_CM_BATCH_TIMEOUT_MS])},\n\t\t\t{\"cm_peer (IP:port)\", \"The peer(s) to mine in coordination with. You need to also \"\n\t\t\t\t\t\"set coordinated_mining, cm_api_secret, and cm_exit_peer. The same \"\n\t\t\t\t\t\"peer may be specified as cm_peer and cm_exit_peer. If we are an exit \"\n\t\t\t\t\t\"peer, make sure to also set cm_peer for every miner we work with.\"},\n\t\t\t{\"cm_exit_peer (IP:port)\", \"The peer to send mining solutions to in the \"\n\t\t\t\t\t\"coordinated mining mode. You need to also set coordinated_mining, \"\n\t\t\t\t\t\"cm_api_secret, and cm_peer. If cm_exit_peer is not set, we are the \"\n\t\t\t\t\t\"exit peer. When is_pool_client is set, the exit peer \"\n\t\t\t\t\t\"is a proxy through which we communicate with the pool.\"},\n\t\t\t{\"is_pool_server\", \"Configure the node as a pool server. The pool node may not \"\n\t\t\t\t\t\"participate in the coordinated mining.\"},\n\t\t\t{\"is_pool_client\", \"Configure the node as a pool client. The node may be an \"\n\t\t\t\t\t\"exit peer in the coordinated mining setup or a standalone node.\"},\n\t\t\t{\"pool_api_key\", \"API key for the requests to the pool.\"},\n\t\t\t{\"pool_server_address\", \"The pool address.\"},\n\t\t\t{\"pool_worker_name\", \"(optional) The pool worker name. \"\n\t\t\t\t\t\"Useful if you have multiple machines (or replicas) \"\n\t\t\t\t\t\"and you want to monitor them separately on the pool.\"},\n\t\t\t{\"rocksdb_flush_interval\", \"RocksDB flush interval in seconds\"},\n\t\t\t{\"rocksdb_wal_sync_interval\", \"RocksDB WAL sync interval in seconds\"},\n\t\t\t{\"verify\", \"Run in verify mode. There are two valid values: 'purge' or 'log'. \"\n\t\t\t\t\"The node will run several checks on all listed storage_modules, and flag any \"\n\t\t\t\t\"errors. In 'log' mode the errors are just logged, in 'purge' mode the chunks \"\n\t\t\t\t\"are invalidated so that they have to be repacked. After completing a full \"\n\t\t\t\t\"verification cycle, you can restart the node in normal mode to have it \"\n\t\t\t\t\"resync and/or repack any flagged chunks. When running in verify mode several \"\n\t\t\t\t\"flags are disallowed. See the node output for details.\"},\n\t\t\t{\"verify_samples (num)\", io_lib:format(\"Number of chunks to sample and unpack \"\n\t\t\t\t\"during 'verify'. Default is ~B.\", [?SAMPLE_CHUNK_COUNT])},\n\t\t\t{\"vdf (mode)\", io_lib:format(\"VDF implementation (openssl (default), openssllite,\"\n\t\t\t\t\" fused, hiopt_m4). Default is openssl.\", [])},\n\n\t\t\t% Shutdown management\n\t\t\t{\"network.tcp.shutdown.connection_timeout\", io_lib:format(\n\t\t\t\t\"Configure shutdown TCP connection timeout (seconds). \"\n\t\t\t\t\"Default is '~p'.\",\n\t\t\t\t[?SHUTDOWN_TCP_CONNECTION_TIMEOUT]\n\t\t\t)},\n\t\t\t{\"network.tcp.shutdown.mode\", io_lib:format(\n\t\t\t\t\"Configure shutdown TCP mode (shutdown or close). \"\n\t\t\t\t\"Default is '~p'.\",\n\t\t\t\t[?SHUTDOWN_TCP_MODE]\n\t\t\t)},\n\n\t\t\t% Global socket configuration\n\t\t\t{\"network.socket.backend\", io_lib:format(\n\t\t\t\t\"Configure Erlang default socket backend (inet or socket). \"\n\t\t\t\t\"Default is '~p'.\",\n\t\t\t\t[?DEFAULT_SOCKET_BACKEND]\n\t\t\t)},\n\n\t\t\t% Gun HTTP Client Tuning\n\t\t\t{\"http_client.http.closing_timeout\", io_lib:format(\n\t\t\t\t\"Configure HTTP Client closing timeout parameter (milliseconds). 
\"\n\t\t\t\t\"Default is '~p'.\",\n\t\t\t\t[?DEFAULT_GUN_HTTP_CLOSING_TIMEOUT]\n\t\t\t)},\n\t\t\t{\"http_client.http.keepalive\", io_lib:format(\n\t\t\t\t\"Configure HTTP Client keep alive parameter (seconds or infinity). \"\n\t\t\t\t\"Default is '~p'.\",\n\t\t\t\t[?DEFAULT_GUN_HTTP_KEEPALIVE]\n\t\t\t)},\n\t\t\t{\"http_client.tcp.delay_send\", io_lib:format(\n\t\t\t\t\"Configure HTTP Client TCP delay send parameter (boolean). \"\n\t\t\t\t\"Default is '~p'.\",\n\t\t\t\t[?DEFAULT_GUN_TCP_DELAY_SEND]\n\t\t\t)},\n\t\t\t{\"http_client.tcp.keepalive\", io_lib:format(\n\t\t\t\t\"Configure HTTP Client TCP keepalive parameter (boolean). \"\n\t\t\t\t\"Default is '~p'.\",\n\t\t\t\t[?DEFAULT_GUN_TCP_KEEPALIVE]\n\t\t\t)},\n\t\t\t{\"http_client.tcp.linger\", io_lib:format(\n\t\t\t\t\"Configure HTTP Client TCP linger parameter (boolean). \"\n\t\t\t\t\"Default is '~p'.\",\n\t\t\t\t[?DEFAULT_GUN_TCP_LINGER]\n\t\t\t)},\n\t\t\t{\"http_client.tcp.linger_timeout\", io_lib:format(\n\t\t\t\t\"Configure HTTP Client TCP linger timeout parameter (seconds). \"\n\t\t\t\t\"Default is '~p'.\",\n\t\t\t\t[?DEFAULT_GUN_TCP_LINGER_TIMEOUT]\n\t\t\t)},\n\t\t\t{\"http_client.tcp.nodelay\", io_lib:format(\n\t\t\t\t\"Configure HTTP Client TCP nodelay parameter (boolean). \"\n\t\t\t\t\"Default is '~p'.\",\n\t\t\t\t[?DEFAULT_GUN_TCP_NODELAY]\n\t\t\t)},\n\t\t\t{\"http_client.tcp.send_timeout_close\", io_lib:format(\n\t\t\t\t\"Configure HTTP Client TCP send timeout close parameter (boolean). \"\n\t\t\t\t\"Default is '~p'.\",\n\t\t\t\t[?DEFAULT_GUN_TCP_SEND_TIMEOUT_CLOSE]\n\t\t\t)},\n\t\t\t{\"http_client.tcp.send_timeout\", io_lib:format(\n\t\t\t\t\"Configure HTTP Client TCP send timeout parameter (milliseconds). \"\n\t\t\t\t\"Default is '~p'.\",\n\t\t\t\t[?DEFAULT_GUN_TCP_SEND_TIMEOUT]\n\t\t\t)},\n\n\t\t\t% Cowboy HTTP Server Tuning\n\t\t\t{\"http_api.http.active_n\", io_lib:format(\n\t\t\t\t\"Configure HTTP Server number of packets requested per sockets (integer). \"\n\t\t\t\t\"Default is '~p'.\",\n\t\t\t\t[?DEFAULT_COWBOY_HTTP_ACTIVE_N]\n\t\t\t)},\n\t\t\t{\"http_api.tcp.idle_timeout_seconds\", io_lib:format(\n\t\t\t\t\"The number of seconds allowed for incoming API client connections to be idle \"\n\t\t\t\t\"before closing them. Default is '~p' seconds. \"\n\t\t\t\t\"Please, do not set this value too low \"\n\t\t\t\t\"as it will negatively affect the performance of the node.\",\n\t\t\t\t[?DEFAULT_COWBOY_TCP_IDLE_TIMEOUT_SECOND]\n\t\t\t)},\n\t\t\t{\"http_api.http.inactivity_timeout\", io_lib:format(\n\t\t\t\t\"Configure HTTP Server inactivity timeout (milliseconds). \"\n\t\t\t\t\"Default is '~p'.\",\n\t\t\t\t[?DEFAULT_COWBOY_HTTP_INACTIVITY_TIMEOUT]\n\t\t\t)},\n\t\t\t{\"http_api.http.linger_timeout\", io_lib:format(\n\t\t\t\t\"Configure HTTP Server linger timeout (milliseconds). \"\n\t\t\t\t\"Default is '~p'.\",\n\t\t\t\t[?DEFAULT_COWBOY_HTTP_LINGER_TIMEOUT]\n\t\t\t)},\n\t\t\t{\"http_api.http.request_timeout\", io_lib:format(\n\t\t\t\t\"Configure HTTP Server request timeout (milliseconds). \"\n\t\t\t\t\"Default is '~p'.\",\n\t\t\t\t[?DEFAULT_COWBOY_HTTP_REQUEST_TIMEOUT]\n\t\t\t)},\n\t\t\t{\"http_api.tcp.backlog\", io_lib:format(\n\t\t\t\t\"Configure HTTP Server TCP backlog parameter (integer). \"\n\t\t\t\t\"Default is '~p'.\",\n\t\t\t\t[?DEFAULT_COWBOY_TCP_BACKLOG]\n\t\t\t)},\n\t\t\t{\"http_api.tcp.delay_send\", io_lib:format(\n\t\t\t\t\"Configure HTTP Server TCP delay send parameter (boolean). 
\"\n\t\t\t\t\"Default is '~p'.\",\n\t\t\t\t[?DEFAULT_COWBOY_TCP_DELAY_SEND]\n\t\t\t)},\n\t\t\t{\"http_api.tcp.keepalive\", io_lib:format(\n\t\t\t\t\"Configure HTTP Server TCP keepalive parameter (boolean). \"\n\t\t\t\t\"Default is '~p'.\",\n\t\t\t\t[?DEFAULT_COWBOY_TCP_KEEPALIVE]\n\t\t\t)},\n\t\t\t{\"http_api.tcp.linger\", io_lib:format(\n\t\t\t\t\"Configure HTTP Server TCP linger parameter (boolean). \"\n\t\t\t\t\"Default is '~p'.\",\n\t\t\t\t[?DEFAULT_COWBOY_TCP_LINGER]\n\t\t\t)},\n\t\t\t{\"http_api.tcp.linger_timeout\", io_lib:format(\n\t\t\t\t\"Configure HTTP Server TCP linger timeout parameter (seconds). \"\n\t\t\t\t\"Default is '~p'.\",\n\t\t\t\t[?DEFAULT_COWBOY_TCP_LINGER_TIMEOUT]\n\t\t\t)},\n\t\t\t{\"http_api.tcp.listener_shutdown\", io_lib:format(\n\t\t\t\t\"Configure HTTP Server listener shutdown (seconds). \"\n\t\t\t\t\"Default is '~p'.\",\n\t\t\t\t[?DEFAULT_COWBOY_TCP_LISTENER_SHUTDOWN]\n\t\t\t)},\n\t\t\t{\"http_api.tcp.nodelay\", io_lib:format(\n\t\t\t\t\"Configure HTTP Server TCP nodelay parameter (boolean). \"\n\t\t\t\t\"Default is '~p'.\",\n\t\t\t\t[?DEFAULT_COWBOY_TCP_NODELAY]\n\t\t\t)},\n\t\t\t{\"http_api.tcp.num_acceptors\", io_lib:format(\n\t\t\t\t\"Configure HTTP Server TCP acceptors (integer). \"\n\t\t\t\t\"Default is '~p'.\",\n\t\t\t\t[?DEFAULT_COWBOY_TCP_NUM_ACCEPTORS]\n\t\t\t)},\n\t\t\t{\"http_api.tcp.send_timeout_close\", io_lib:format(\n\t\t\t\t\"Configure HTTP Server TCP send timeout close parameter (boolean). \"\n\t\t\t\t\"Default is '~p'.\",\n\t\t\t\t[?DEFAULT_COWBOY_TCP_SEND_TIMEOUT_CLOSE]\n\t\t\t)},\n\t\t\t{\"http_api.tcp.send_timeout\", io_lib:format(\n\t\t\t\t\"Configure HTTP Server TCP send timeout parameter (milliseconds). \"\n\t\t\t\t\"Default is '~p'.\",\n\t\t\t\t[?DEFAULT_COWBOY_TCP_SEND_TIMEOUT]\n\t\t\t)}\n\t\t]\n\t).\n\n\n%%--------------------------------------------------------------------\n%% @doc Evaluate the `parse/2' function and execute the actions it\n%% returns if required.\n%% @end\n%%--------------------------------------------------------------------\n-spec eval(Args, Config) -> Return when\n\tArgs :: [string()],\n\tConfig :: #config{},\n\tReturn :: Config | [term()].\n\neval(Args, Config) ->\n\tcase parse(Args, Config) of\n\t\t{ok, C} -> C;\n\t\t{error, Actions, _C} ->\n\t\t\t[ {M, F, A, erlang:apply(M, F, A)}\n\t\t\t  || {M,F,A} <- Actions\n\t\t\t]\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc Legacy argument parser. This function will return the\n%% configuration as `#config{}' record in case of success, or returns\n%% an error with a list of actions to execute as MFA.\n%% @end\n%%--------------------------------------------------------------------\n-spec parse(Args, Config) -> Return when\n\tArgs :: [string()],\n\tConfig :: #config{},\n\tReturn :: {ok, Config} |\n\t\t{error, Actions, Config},\n\tActions :: [{M, F, A}],\n\tM :: atom(),\n\tF :: atom(),\n\tA :: [term()].\n\nparse([], C) ->\n\t{ok, C};\nparse([\"config_file\",_|Rest], C) ->\n\t% ignore config_file parameter when using arguments parser.\n\tparse(Rest, C);\nparse([\"mine\" | Rest], C) ->\n\tparse(Rest, C#config{ mine = true });\nparse([\"verify\", \"purge\" | Rest], C) ->\n\tparse(Rest, C#config{ verify = purge });\nparse([\"verify\", \"log\" | Rest], C) ->\n\tparse(Rest, C#config{ verify = log });\nparse([\"verify\", _ | _], C) ->\n\tio:format(\"Invalid verify mode. 
Valid modes are 'purge' or 'log'.~n\"),\n\t{error, [\n\t\t{timer, sleep, [1000]},\n\t\t{init, stop, [1]}\n\t], C};\nparse([\"verify_samples\", \"all\" | Rest], C) ->\n\tparse(Rest, C#config{ verify_samples = all });\nparse([\"verify_samples\", N | Rest], C) ->\n\tparse(Rest, C#config{ verify_samples = list_to_integer(N) });\nparse([\"vdf\", Mode | Rest], C) ->\n\tParsedMode = case Mode of\n\t\t\"openssl\" ->openssl;\n\t\t\"openssllite\" ->openssllite;\n\t\t\"fused\" ->fused;\n\t\t\"hiopt_m4\" ->hiopt_m4;\n\t\t_ ->\n\t\t\tio:format(\"VDF ~p is invalid.~n\", [Mode]),\n\t\t\topenssl\n\tend,\n\tparse(Rest, C#config{ vdf = ParsedMode });\nparse([\"peer\", Peer | Rest], C = #config{ peers = Ps }) ->\n\tcase ar_util:safe_parse_peer(Peer) of\n\t\t{ok, ValidPeers} when is_list(ValidPeers) ->\n\t\t\tNewConfig = C#config{peers = ValidPeers ++ Ps},\n\t\t\tparse(Rest, NewConfig);\n\t\t{error, _} ->\n\t\t\tio:format(\"Peer ~p is invalid.~n\", [Peer]),\n\t\t\tparse(Rest, C)\n\tend;\nparse([\"block_gossip_peer\", Peer | Rest],\n\t\tC = #config{ block_gossip_peers = Peers }) ->\n\tcase ar_util:safe_parse_peer(Peer) of\n\t\t{ok, ValidPeer} when is_list(ValidPeer) ->\n\t\t\tparse(Rest, C#config{ block_gossip_peers = ValidPeer ++ Peers });\n\t\t{error, _} ->\n\t\t\tio:format(\"Peer ~p invalid ~n\", [Peer]),\n\t\t\tparse(Rest, C)\n\tend;\nparse([\"local_peer\", Peer | Rest], C = #config{ local_peers = Peers }) ->\n\tcase ar_util:safe_parse_peer(Peer) of\n\t\t{ok, ValidPeer} when is_list(ValidPeer) ->\n\t\t\tparse(Rest, C#config{ local_peers = ValidPeer ++ Peers });\n\t\t{error, _} ->\n\t\t\tio:format(\"Peer ~p is invalid.~n\", [Peer]),\n\t\t\tparse(Rest, C)\n\tend;\nparse([\"sync_from_local_peers_only\" | Rest], C) ->\n\tparse(Rest, C#config{ sync_from_local_peers_only = true });\nparse([\"transaction_blacklist\", File | Rest],\n\tC = #config{ transaction_blacklist_files = Files } ) ->\n\tparse(Rest, C#config{ transaction_blacklist_files = [File | Files] });\nparse([\"transaction_blacklist_url\", URL | Rest],\n\t\tC = #config{ transaction_blacklist_urls = URLs} ) ->\n\tparse(Rest, C#config{ transaction_blacklist_urls = [URL | URLs] });\nparse([\"transaction_whitelist\", File | Rest],\n\t\tC = #config{ transaction_whitelist_files = Files } ) ->\n\tparse(Rest, C#config{ transaction_whitelist_files = [File | Files] });\nparse([\"transaction_whitelist_url\", URL | Rest],\n\t\tC = #config{ transaction_whitelist_urls = URLs} ) ->\n\tparse(Rest, C#config{ transaction_whitelist_urls = [URL | URLs] });\nparse([\"port\", Port | Rest], C) ->\n\tparse(Rest, C#config{ port = list_to_integer(Port) });\nparse([\"data_dir\", DataDir | Rest], C) ->\n\tparse(Rest, C#config{ data_dir = DataDir });\nparse([\"log_dir\", Dir | Rest], C) ->\n\tparse(Rest, C#config{ log_dir = Dir });\nparse([\"storage_module\", StorageModuleString | Rest], C) ->\n\ttry\n\t\tcase ar_config:parse_storage_module(StorageModuleString) of\n\t\t\t{ok, StorageModule} ->\n\t\t\t\tStorageModules = C#config.storage_modules,\n\t\t\t\tparse(Rest, C#config{\n\t\t\t\t\t\tstorage_modules = [StorageModule | StorageModules] });\n\t\t\t{repack_in_place, StorageModule} ->\n\t\t\t\tStorageModules = C#config.repack_in_place_storage_modules,\n\t\t\t\tparse(Rest, C#config{\n\t\t\t\t\t\trepack_in_place_storage_modules = [StorageModule | StorageModules] })\n\t\tend\n\tcatch _:_ ->\n\t\tio:format(\"~nstorage_module value must be \"\n\t\t\t\t\"in the {number},{address}[,repack_in_place,{to_packing}] format.~n~n\"),\n\t\t{error, [\n\t\t\t{init, stop, [1]}\n\t\t], 
C}\n\tend;\nparse([\"repack_batch_size\", N | Rest], C) ->\n\tparse(Rest, C#config{ repack_batch_size = list_to_integer(N) });\nparse([\"repack_cache_size_mb\", N | Rest], C) ->\n\tparse(Rest, C#config{ repack_cache_size_mb = list_to_integer(N) });\nparse([\"polling\", Frequency | Rest], C) ->\n\tparse(Rest, C#config{ polling = list_to_integer(Frequency) });\nparse([\"block_pollers\", N | Rest], C) ->\n\tparse(Rest, C#config{ block_pollers = list_to_integer(N) });\nparse([\"no_auto_join\" | Rest], C) ->\n\tparse(Rest, C#config{ auto_join = false });\nparse([\"join_workers\", N | Rest], C) ->\n\tparse(Rest, C#config{ join_workers = list_to_integer(N) });\nparse([\"diff\", N | Rest], C) ->\n\tparse(Rest, C#config{ diff = list_to_integer(N) });\nparse([\"mining_addr\", Addr | Rest], C) ->\n\tcase C#config.mining_addr of\n\t\tnot_set ->\n\t\t\tcase ar_util:safe_decode(Addr) of\n\t\t\t\t{ok, DecodedAddr} when byte_size(DecodedAddr) == 32 ->\n\t\t\t\t\tparse(Rest, C#config{ mining_addr = DecodedAddr });\n\t\t\t\t_ ->\n\t\t\t\t\tio:format(\"~nmining_addr must be a valid Base64Url string, 43\"\n\t\t\t\t\t\t\t\" characters long.~n~n\"),\n\t\t\t\t\t{error, [{init, stop, [1]}], C}\n\t\t\tend;\n\t\t_ ->\n\t\t\tio:format(\"~nYou may specify at most one mining_addr.~n~n\"),\n\t\t\t{error, [{init, stop, [1]}], C}\n\tend;\nparse([\"hashing_threads\", Num | Rest], C) ->\n\tparse(Rest, C#config{ hashing_threads = list_to_integer(Num) });\nparse([\"data_cache_size_limit\", Num | Rest], C) ->\n\tparse(Rest, C#config{\n\t\t\tdata_cache_size_limit = list_to_integer(Num) });\nparse([\"packing_cache_size_limit\", Num | Rest], C) ->\n\tparse(Rest, C#config{\n\t\t\tpacking_cache_size_limit = list_to_integer(Num) });\nparse([\"mining_cache_size_mb\", Num | Rest], C) ->\n\tparse(Rest, C#config{\n\t\t\tmining_cache_size_mb = list_to_integer(Num) });\nparse([\"max_emitters\", Num | Rest], C) ->\n\tparse(Rest, C#config{ max_emitters = list_to_integer(Num) });\nparse([\"disk_space_check_frequency\", Frequency | Rest], C) ->\n\tparse(Rest, C#config{\n\t\tdisk_space_check_frequency = list_to_integer(Frequency) * 1000\n\t});\nparse([\"start_from_block_index\" | Rest], C) ->\n\tparse(Rest, C#config{ start_from_latest_state = true });\nparse([\"start_from_state\", Folder | Rest], C) ->\n\tparse(Rest, C#config{ start_from_state = Folder });\nparse([\"start_from_block\", H | Rest], C) ->\n\tcase ar_util:safe_decode(H) of\n\t\t{ok, Decoded} when byte_size(Decoded) == 48 ->\n\t\t\tparse(Rest, C#config{ start_from_block = Decoded });\n\t\t_ ->\n\t\t\tio:format(\"Invalid start_from_block.~n\", []),\n\t\t\t{error, [\n\t\t\t\t{timer, sleep, [1000]},\n\t\t\t\t{init, stop, [1]}\n\t\t\t], C}\n\tend;\nparse([\"start_from_latest_state\" | Rest], C) ->\n\tparse(Rest, C#config{ start_from_latest_state = true });\nparse([\"init\" | Rest], C)->\n\tparse(Rest, C#config{ init = true });\nparse([\"internal_api_secret\", Secret | Rest], C)\n\t\twhen length(Secret) >= ?INTERNAL_API_SECRET_MIN_LEN ->\n\tparse(Rest, C#config{ internal_api_secret = list_to_binary(Secret)});\nparse([\"internal_api_secret\", _ | _], C) ->\n\tio:format(\"~nThe internal_api_secret must be at least ~B characters long.~n~n\",\n\t\t\t[?INTERNAL_API_SECRET_MIN_LEN]),\n\t{error, [\n\t\t{init, stop, [1]}\n\t], C};\nparse([\"enable\", Feature | Rest ], C = #config{ enable = Enabled }) ->\n\tparse(Rest, C#config{ enable = [ list_to_atom(Feature) | Enabled ] });\nparse([\"disable\", Feature | Rest ], C = #config{ disable = Disabled }) ->\n\tparse(Rest, C#config{ disable = [ 
list_to_atom(Feature) | Disabled ] });\nparse([\"custom_domain\", _ | Rest], C = #config{ }) ->\n\t?LOG_WARNING(\"Deprecated option found 'custom_domain': \"\n\t\t\t\" this option has been removed and is a no-op.\", []),\n\tparse(Rest, C#config{ });\nparse([\"requests_per_minute_limit\", Num | Rest], C) ->\n\tparse(Rest, C#config{ requests_per_minute_limit = list_to_integer(Num) });\nparse([\"max_propagation_peers\", Num | Rest], C) ->\n\tparse(Rest, C#config{ max_propagation_peers = list_to_integer(Num) });\nparse([\"max_block_propagation_peers\", Num | Rest], C) ->\n\tparse(Rest, C#config{ max_block_propagation_peers = list_to_integer(Num) });\nparse([\"sync_jobs\", Num | Rest], C) ->\n\tparse(Rest, C#config{ sync_jobs = list_to_integer(Num) });\nparse([\"header_sync_jobs\", Num | Rest], C) ->\n\tparse(Rest, C#config{ header_sync_jobs = list_to_integer(Num) });\nparse([\"enable_data_roots_syncing\", \"true\" | Rest], C) ->\n\tparse(Rest, C#config{ enable_data_roots_syncing = true });\nparse([\"enable_data_roots_syncing\", \"false\" | Rest], C) ->\n\tparse(Rest, C#config{ enable_data_roots_syncing = false });\nparse([\"data_sync_request_packed_chunks\" | Rest], C) ->\n\tparse(Rest, C#config{ data_sync_request_packed_chunks = true });\nparse([\"post_tx_timeout\", Num | Rest], C) ->\n\tparse(Rest, C#config { post_tx_timeout = list_to_integer(Num) });\nparse([\"max_connections\", Num | Rest], C) ->\n\ttry list_to_integer(Num) of\n\t\tN when N >= 1 ->\n\t\t\tparse(Rest, C#config{ 'http_api.tcp.max_connections' = N });\n\t\t_ ->\n\t\t\tio:format(\"Invalid max_connections ~p\", [Num]),\n\t\t\tparse(Rest, C)\n\tcatch\n\t\t_:_ ->\n\t\t\tio:format(\"Invalid max_connections ~p\", [Num]),\n\t\t\tparse(Rest, C)\n\n\tend;\nparse([\"disk_pool_data_root_expiration_time\", Num | Rest], C) ->\n\tparse(Rest, C#config{\n\t\t\tdisk_pool_data_root_expiration_time = list_to_integer(Num) });\nparse([\"max_disk_pool_buffer_mb\", Num | Rest], C) ->\n\tparse(Rest, C#config{ max_disk_pool_buffer_mb = list_to_integer(Num) });\nparse([\"max_disk_pool_data_root_buffer_mb\", Num | Rest], C) ->\n\tparse(Rest, C#config{ max_disk_pool_data_root_buffer_mb = list_to_integer(Num) });\nparse([\"max_duplicate_data_roots\", Num | Rest], C) ->\n\tparse(Rest, C#config{ max_duplicate_data_roots = list_to_integer(Num) });\nparse([\"disk_cache_size_mb\", Num | Rest], C) ->\n\tparse(Rest, C#config{ disk_cache_size = list_to_integer(Num) });\nparse([\"packing_workers\", Num | Rest], C) ->\n\tparse(Rest, C#config{ packing_workers = list_to_integer(Num) });\nparse([\"replica_2_9_workers\", Num | Rest], C) ->\n\tparse(Rest, C#config{ replica_2_9_workers = list_to_integer(Num) });\nparse([\"disable_replica_2_9_device_limit\" | Rest], C) ->\n\tparse(Rest, C#config{ disable_replica_2_9_device_limit = true });\nparse([\"replica_2_9_entropy_cache_size_mb\", Num | Rest], C) ->\n\tparse(Rest, C#config{ replica_2_9_entropy_cache_size_mb = list_to_integer(Num) });\nparse([\"max_vdf_validation_thread_count\", Num | Rest], C) ->\n\tparse(Rest,\n\t\t\tC#config{ max_nonce_limiter_validation_thread_count = list_to_integer(Num) });\nparse([\"max_vdf_last_step_validation_thread_count\", Num | Rest], C) ->\n\tparse(Rest, C#config{\n\t\t\tmax_nonce_limiter_last_step_validation_thread_count = list_to_integer(Num) });\nparse([\"vdf_server_trusted_peer\", Peer | Rest], C) ->\n\t#config{ nonce_limiter_server_trusted_peers = Peers } = C,\n\tparse(Rest, C#config{ nonce_limiter_server_trusted_peers = [Peer | Peers] });\nparse([\"vdf_client_peer\", RawPeer | 
Rest],\n\t\tC = #config{ nonce_limiter_client_peers = Peers }) ->\n\tparse(Rest, C#config{ nonce_limiter_client_peers = [RawPeer | Peers] });\nparse([\"debug\" | Rest], C) ->\n\tparse(Rest, C#config{ debug = true });\nparse([\"run_defragmentation\" | Rest], C) ->\n\tparse(Rest, C#config{ run_defragmentation = true });\nparse([\"defragmentation_trigger_threshold\", Num | Rest], C) ->\n\tparse(Rest, C#config{ defragmentation_trigger_threshold = list_to_integer(Num) });\nparse([\"block_throttle_by_ip_interval\", Num | Rest], C) ->\n\tparse(Rest, C#config{ block_throttle_by_ip_interval = list_to_integer(Num) });\nparse([\"block_throttle_by_solution_interval\", Num | Rest], C) ->\n\tparse(Rest, C#config{\n\t\t\tblock_throttle_by_solution_interval = list_to_integer(Num) });\nparse([\"defragment_module\", DefragModuleString | Rest], C) ->\n\tDefragModules = C#config.defragmentation_modules,\n\ttry\n\t\t{ok, DefragModule} = ar_config:parse_storage_module(DefragModuleString),\n\t\tDefragModules2 = [DefragModule | DefragModules],\n\t\tparse(Rest, C#config{ defragmentation_modules = DefragModules2 })\n\tcatch _:_ ->\n\t\tio:format(\"~ndefragment_module value must be in the {number},{address} format.~n~n\"),\n\t\t{error, [\n\t\t\t{init, stop, [1]}\n\t\t], C}\n\tend;\nparse([\"tls_cert_file\", CertFilePath | Rest], C) ->\n    AbsCertFilePath = filename:absname(CertFilePath),\n    ar_util:assert_file_exists_and_readable(AbsCertFilePath),\n    parse(Rest, C#config{ tls_cert_file = AbsCertFilePath });\nparse([\"tls_key_file\", KeyFilePath | Rest], C) ->\n    AbsKeyFilePath = filename:absname(KeyFilePath),\n    ar_util:assert_file_exists_and_readable(AbsKeyFilePath),\n    parse(Rest, C#config{ tls_key_file = AbsKeyFilePath });\nparse([\"http_api.tcp.idle_timeout_seconds\", Num | Rest], C) ->\n\tparse(Rest, C#config { http_api_transport_idle_timeout = list_to_integer(Num) * 1000 });\nparse([\"coordinated_mining\" | Rest], C) ->\n\tparse(Rest, C#config{ coordinated_mining = true });\nparse([\"cm_api_secret\", CMSecret | Rest], C)\n\t\twhen length(CMSecret) >= ?INTERNAL_API_SECRET_MIN_LEN ->\n\tparse(Rest, C#config{ cm_api_secret = list_to_binary(CMSecret) });\nparse([\"cm_api_secret\", _ | _], C) ->\n\tio:format(\"~nThe cm_api_secret must be at least ~B characters long.~n~n\",\n\t\t\t[?INTERNAL_API_SECRET_MIN_LEN]),\n\t{error, [\n\t\t{init, stop, [1]}\n\t], C};\nparse([\"cm_poll_interval\", Num | Rest], C) ->\n\tparse(Rest, C#config{ cm_poll_interval = list_to_integer(Num) });\nparse([\"cm_peer\", Peer | Rest], C = #config{ cm_peers = Ps }) ->\n\tcase ar_util:safe_parse_peer(Peer) of\n\t\t{ok, ValidPeer} when is_list(ValidPeer) ->\n\t\t\tparse(Rest, C#config{ cm_peers = ValidPeer ++ Ps });\n\t\t{error, _} ->\n\t\t\tio:format(\"Peer ~p is invalid.~n\", [Peer]),\n\t\t\tparse(Rest, C)\n\tend;\nparse([\"cm_exit_peer\", Peer | Rest], C) ->\n\tcase ar_util:safe_parse_peer(Peer) of\n\t\t{ok, [ValidPeer|_]} ->\n\t\t\tparse(Rest, C#config{ cm_exit_peer = ValidPeer });\n\t\t{error, _} ->\n\t\t\tio:format(\"Peer ~p is invalid.~n\", [Peer]),\n\t\t\tparse(Rest, C)\n\tend;\nparse([\"cm_out_batch_timeout\", Num | Rest], C) ->\n\tparse(Rest, C#config{ cm_out_batch_timeout = list_to_integer(Num) });\nparse([\"is_pool_server\" | Rest], C) ->\n\tparse(Rest, C#config{ is_pool_server = true });\nparse([\"is_pool_client\" | Rest], C) ->\n\tparse(Rest, C#config{ is_pool_client = true });\nparse([\"pool_api_key\", Key | Rest], C) ->\n\tparse(Rest, C#config{ pool_api_key = list_to_binary(Key) });\nparse([\"pool_server_address\", 
Host | Rest], C) ->\n\tparse(Rest, C#config{ pool_server_address = list_to_binary(Host) });\nparse([\"pool_worker_name\", Host | Rest], C) ->\n\tparse(Rest, C#config{ pool_worker_name = list_to_binary(Host) });\nparse([\"rocksdb_flush_interval\", Seconds | Rest], C) ->\n\tparse(Rest, C#config{ rocksdb_flush_interval_s = list_to_integer(Seconds) });\nparse([\"rocksdb_wal_sync_interval\", Seconds | Rest], C) ->\n\tparse(Rest, C#config{ rocksdb_wal_sync_interval_s = list_to_integer(Seconds) });\n\n%% tcp shutdown procedure\nparse([\"network.tcp.connection_timeout\", Delay|Rest], C) ->\n\tparse(Rest, C#config{ shutdown_tcp_connection_timeout = list_to_integer(Delay) });\nparse([\"network.tcp.shutdown.mode\", RawMode|Rest], C) ->\n\tcase RawMode of\n\t\t\"shutdown\" ->\n\t\t\tparse(Rest, C#config{ shutdown_tcp_mode = shutdown});\n\t\t\"close\" ->\n\t\t\tparse(Rest, C#config{ shutdown_tcp_mode = close });\n\t\tMode ->\n\t\t\tio:format(\"Mode ~p is invalid.~n\", [Mode]),\n\t\t\tparse(Rest, C)\n\tend;\n\n%% global socket configuration\nparse([\"network.socket.backend\", Backend|Rest], C) ->\n\tcase Backend of\n\t\t\"inet\" ->\n\t\t\tparse(Rest, C#config{ 'socket.backend' = inet });\n\t\t\"socket\" ->\n\t\t\tparse(Rest, C#config{ 'socket.backend' = socket });\n\t\t_ ->\n\t\t\tio:format(\"Invalid socket.backend ~p.\", [Backend]),\n\t\t\tparse(Rest, C)\n\tend;\n\n%% gun http client configuration\nparse([\"http_client.http.keepalive\", \"infinity\"|Rest], C) ->\n\tparse(Rest, C#config{ 'http_client.http.keepalive' = infinity});\nparse([\"http_client.http.keepalive\", Keepalive|Rest], C) ->\n\ttry list_to_integer(Keepalive) of\n\t\tK when K >= 0 ->\n\t\t\tparse(Rest, C#config{ 'http_client.http.keepalive' = K });\n\t\t_ ->\n\t\t\tio:format(\"Invalid http_client.http.keepalive ~p.\", [Keepalive]),\n\t\t\tparse(Rest, C)\n\tcatch\n\t\t_:_ ->\n\t\t\tio:format(\"Invalid http_client.http.keepalive ~p.\", [Keepalive]),\n\t\t\tparse(Rest, C)\n\tend;\nparse([\"http_client.tcp.delay_send\", DelaySend|Rest], C) ->\n\tcase DelaySend of\n\t\t\"true\" ->\n\t\t\tparse(Rest, C#config{ 'http_client.tcp.delay_send' = true });\n\t\t\"false\" ->\n\t\t\tparse(Rest, C#config{ 'http_client.tcp.delay_send' = false });\n\t\t_ ->\n\t\t\tio:format(\"Invalid http_client.tcp.delay_send ~p.\", [DelaySend]),\n\t\t\tparse(Rest, C)\n\tend;\nparse([\"http_client.tcp.keepalive\", Keepalive|Rest], C) ->\n\tcase Keepalive of\n\t\t\"true\" ->\n\t\t\tparse(Rest, C#config{ 'http_client.tcp.keepalive' = true });\n\t\t\"false\" ->\n\t\t\tparse(Rest, C#config{ 'http_client.tcp.keepalive' = false });\n\t\t_ ->\n\t\t\tio:format(\"Invalid http_client.tcp.keepalive ~p.\", [Keepalive]),\n\t\t\tparse(Rest, C)\n\tend;\nparse([\"http_client.tcp.linger\", Linger|Rest], C) ->\n\tcase Linger of\n\t\t\"true\" ->\n\t\t\tparse(Rest, C#config{ 'http_client.tcp.linger' = true });\n\t\t\"false\" ->\n\t\t\tparse(Rest, C#config{ 'http_client.tcp.linger' = false});\n\t\t_ ->\n\t\t\tio:format(\"Invalid http_client.tcp.linger ~p.\", [Linger]),\n\t\t\tparse(Rest, C)\n\tend;\nparse([\"http_client.tcp.linger_timeout\", Timeout|Rest], C) ->\n\ttry list_to_integer(Timeout) of\n\t\tT when T >= 0 ->\n\t\t\tparse(Rest, C#config{ 'http_client.tcp.linger_timeout' = T });\n\t\t_ ->\n\t\t\tio:format(\"Invalid http_client.tcp.linger_timeout ~p.\", [Timeout]),\n\t\t\tparse(Rest, C)\n\tcatch\n\t\t_:_ ->\n\t\t\tio:format(\"Invalid http_client.tcp.linger_timeout timeout ~p.\", [Timeout]),\n\t\t\tparse(Rest, C)\n\tend;\nparse([\"http_client.tcp.nodelay\", Nodelay|Rest], C) ->\n\tcase 
Nodelay of\n\t\t\"true\" ->\n\t\t\tparse(Rest, C#config{ 'http_client.tcp.nodelay' = true });\n\t\t\"false\" ->\n\t\t\tparse(Rest, C#config{ 'http_client.tcp.nodelay' = false });\n\t\t_ ->\n\t\t\tio:format(\"Invalid http_client.tcp.nodelay ~p.\", [Nodelay]),\n\t\t\tparse(Rest, C)\n\tend;\nparse([\"http_client.tcp.send_timeout_close\", Value|Rest], C) ->\n\tcase Value of\n\t\t\"true\" ->\n\t\t\tparse(Rest, C#config{ 'http_client.tcp.send_timeout_close' = true });\n\t\t\"false\" ->\n\t\t\tparse(Rest, C#config{ 'http_client.tcp.send_timeout_close' = false });\n\t\t_ ->\n\t\t\tio:format(\"Invalid http_client.tcp.send_timeout_close ~p.\", [Value]),\n\t\t\tparse(Rest, C)\n\tend;\nparse([\"http_client.tcp.send_timeout\", Timeout|Rest], C) ->\n\ttry list_to_integer(Timeout) of\n\t\tT when T >= 0 ->\n\t\t\tparse(Rest, C#config{ 'http_client.tcp.send_timeout' = T });\n\t\t_ ->\n\t\t\tio:format(\"Invalid http_client.tcp.send_timeout ~p.\", [Timeout]),\n\t\t\tparse(Rest, C)\n\tcatch\n\t\t_:_ ->\n\t\t\tio:format(\"Invalid http_client.tcp.send_timeout ~p.\", [Timeout]),\n\t\t\tparse(Rest, C)\n\tend;\n\n%% cowboy http server configuration\nparse([\"http_api.http.active_n\", Active|Rest], C) ->\n\ttry list_to_integer(Active) of\n\t\tN when N >= 1 ->\n\t\t\tparse(Rest, C#config{ 'http_api.http.active_n' = N });\n\t\t_ ->\n\t\t\tio:format(\"Invalid http_api.http.active_n ~p.\", [Active]),\n\t\t\tparse(Rest, C)\n\tcatch\n\t\t_:_ ->\n\t\t\tio:format(\"Invalid http_api.http.active_n ~p.\", [Active]),\n\t\t\tparse(Rest, C)\n\tend;\nparse([\"http_api.http.inactivity_timeout\", Timeout|Rest], C) ->\n\ttry list_to_integer(Timeout) of\n\t\tT when T >= 0 ->\n\t\t\tparse(Rest, C#config{ 'http_api.http.inactivity_timeout' = T });\n\t\t_ ->\n\t\t\tio:format(\"Invalid http_api.http.inactivity_timeout ~p.\", [Timeout]),\n\t\t\tparse(Rest, C)\n\tcatch\n\t\t_:_ ->\n\t\t\tio:format(\"Invalid http_api.http.inactivity_timeout ~p.\", [Timeout]),\n\t\t\tparse(Rest, C)\n\tend;\nparse([\"http_api.http.linger_timeout\", Timeout|Rest], C) ->\n\ttry list_to_integer(Timeout) of\n\t\tT when T >= 0 ->\n\t\t\tparse(Rest, C#config{ 'http_api.http.linger_timeout' = T });\n\t\t_ ->\n\t\t\tio:format(\"Invalid http_api.http.linger_timeout ~p.\", [Timeout]),\n\t\t\tparse(Rest, C)\n\tcatch\n\t\t_:_ ->\n\t\t\tio:format(\"Invalid http_api.http.linger_timeout ~p.\", [Timeout]),\n\t\t\tparse(Rest, C)\n\tend;\nparse([\"http_api.http.request_timeout\", Timeout|Rest], C) ->\n\ttry list_to_integer(Timeout) of\n\t\tT when T >= 0 ->\n\t\t\tparse(Rest, C#config{ 'http_api.http.request_timeout' = T });\n\t\t_ ->\n\t\t\tio:format(\"Invalid http_api.http.request_timeout ~p.\", [Timeout]),\n\t\t\tparse(Rest, C)\n\tcatch\n\t\t_:_ ->\n\t\t\tio:format(\"Invalid http_api.http.request_timeout ~p.\", [Timeout]),\n\t\t\tparse(Rest, C)\n\tend;\nparse([\"http_api.tcp.backlog\", Backlog|Rest], C) ->\n\ttry list_to_integer(Backlog)of\n\t\tB when B >= 1 ->\n\t\t\tparse(Rest, C#config{ 'http_api.tcp.backlog' = B });\n\t\t_ ->\n\t\t\tio:format(\"Invalid http_api.tcp.backlog ~p.\", [Backlog]),\n\t\t\tparse(Rest, C)\n\tcatch\n\t\t_:_ ->\n\t\t\tio:format(\"Invalid http_api.tcp.backlog ~p.\", [Backlog]),\n\t\t\tparse(Rest, C)\n\tend;\nparse([\"http_api.tcp.delay_send\", DelaySend|Rest], C) ->\n\tcase DelaySend of\n\t\t\"true\" ->\n\t\t\tparse(Rest, C#config{ 'http_api.tcp.delay_send' = true });\n\t\t\"false\" ->\n\t\t\tparse(Rest, C#config{ 'http_api.tcp.delay_send' = false });\n\t\t_ ->\n\t\t\tio:format(\"Invalid http_api.tcp.delay_send ~p.\", 
[DelaySend]),\n\t\t\tparse(Rest, C)\n\tend;\nparse([\"http_api.tcp.keepalive\", \"true\"|Rest], C) ->\n\tparse(Rest, C#config{ 'http_api.tcp.keepalive' = true});\nparse([\"http_api.tcp.keepalive\", \"false\"|Rest], C) ->\n\tparse(Rest, C#config{ 'http_api.tcp.keepalive' = false});\nparse([\"http_api.tcp.keepalive\", Keepalive|Rest], C) ->\n\tio:format(\"Invalid http_api.tcp.keepalive ~p.\", [Keepalive]),\n\tparse(Rest, C);\nparse([\"http_api.tcp.linger\", Linger|Rest], C) ->\n\tcase Linger of\n\t\t\"true\" ->\n\t\t\tparse(Rest, C#config{ 'http_api.tcp.linger' = true });\n\t\t\"false\" ->\n\t\t\tparse(Rest, C#config{ 'http_api.tcp.linger' = false});\n\t\t_ ->\n\t\t\tio:format(\"Invalid http_api.tcp.linger ~p.\", [Linger]),\n\t\t\tparse(Rest, C)\n\tend;\nparse([\"http_api.tcp.linger_timeout\", Timeout|Rest], C) ->\n\ttry list_to_integer(Timeout) of\n\t\tT when T >= 0 ->\n\t\t\tparse(Rest, C#config{ 'http_api.tcp.linger_timeout' = T });\n\t\t_ ->\n\t\t\tio:format(\"Invalid http_api.tcp.linger_timeout ~p.\", [Timeout]),\n\t\t\tparse(Rest, C)\n\tcatch\n\t\t_:_ ->\n\t\t\tio:format(\"Invalid http_api.tcp.linger_timeout ~p.\", [Timeout]),\n\t\t\tparse(Rest, C)\n\tend;\nparse([\"http_api.tcp.listener_shutdown\", \"brutal_kill\"|Rest], C) ->\n\tparse(Rest, C#config{ 'http_api.tcp.listener_shutdown' = brutal_kill});\nparse([\"http_api.tcp.listener_shutdown\", \"infinity\"|Rest], C) ->\n\tparse(Rest, C#config{ 'http_api.tcp.listener_shutdown' = infinity });\nparse([\"http_api.tcp.listener_shutdown\", Shutdown|Rest], C) ->\n\ttry list_to_integer(Shutdown) of\n\t\tS when S >= 0 ->\n\t\t\tparse(Rest, C#config{ 'http_api.tcp.listener_shutdown' = S });\n\t\t_ ->\n\t\t\tio:format(\"Invalid http_api.tcp.listener_shutdown ~p.\", [Shutdown]),\n\t\t\tparse(Rest, C)\n\tcatch\n\t\t_:_ ->\n\t\t\tio:format(\"Invalid http_api.tcp.listener_shutdown ~p.\", [Shutdown]),\n\t\t\tparse(Rest, C)\n\tend;\nparse([\"http_api.tcp.nodelay\", Nodelay|Rest], C) ->\n\tcase Nodelay of\n\t\t\"true\" ->\n\t\t\tparse(Rest, C#config{ 'http_api.tcp.nodelay' = true });\n\t\t\"false\" ->\n\t\t\tparse(Rest, C#config{ 'http_api.tcp.nodelay' = false });\n\t\t_ ->\n\t\t\tio:format(\"Invalid http_api.tcp.nodelay ~p.\", [Nodelay]),\n\t\t\tparse(Rest, C)\n\tend;\nparse([\"http_api.tcp.num_acceptors\", Acceptors|Rest], C) ->\n\ttry list_to_integer(Acceptors) of\n\t\tN when N >= 0 ->\n\t\t\tparse(Rest, C#config{ 'http_api.tcp.num_acceptors' = N });\n\t\t_ ->\n\t\t\tio:format(\"Invalid http_api.tcp.num_acceptors ~p.\", [Acceptors]),\n\t\t\tparse(Rest, C)\n\tcatch\n\t\t_:_ ->\n\t\t\tio:format(\"Invalid http_api.tcp.num_acceptors ~p.\", [Acceptors]),\n\t\t\tparse(Rest, C)\n\tend;\nparse([\"http_api.tcp.send_timeout_close\", Value|Rest], C) ->\n\tcase Value of\n\t\t\"true\" ->\n\t\t\tparse(Rest, C#config{ 'http_api.tcp.send_timeout_close' = true });\n\t\t\"false\" ->\n\t\t\tparse(Rest, C#config{ 'http_api.tcp.send_timeout_close' = false });\n\t\t_ ->\n\t\t\tio:format(\"Invalid http_api.tcp.send_timeout_close ~p.\", [Value]),\n\t\t\tparse(Rest, C)\n\tend;\nparse([\"http_api.tcp.send_timeout\", Timeout|Rest], C) ->\n\ttry list_to_integer(Timeout) of\n\t\tT when T >= 0 ->\n\t\t\tparse(Rest, C#config{ 'http_api.tcp.send_timeout' = T });\n\t\t_ ->\n\t\t\tio:format(\"Invalid http_api.tcp.send_timeout ~p.\", [Timeout]),\n\t\t\tparse(Rest, C)\n\tcatch\n\t\t_:_ ->\n\t\t\tio:format(\"Invalid http_api.tcp.send_timeout ~p.\", [Timeout]),\n\t\t\tparse(Rest, C)\n\tend;\n\n%% Undocumented/unsupported options\nparse([\"chunk_storage_file_size\", Num | Rest], C) 
->\n\tparse(Rest, C#config{ chunk_storage_file_size = list_to_integer(Num) });\n\nparse([Arg | _Rest], C) ->\n\tio:format(\"~nUnknown argument: ~s.~n\", [Arg]),\n\t{error, [\n\t\t{?MODULE, show_help, []}\n\t], C}.\n\n%% @doc Ensure that parsing of core command line options functions correctly.\ncommandline_parser_test_() ->\n\t{timeout, 60, fun() ->\n\t\tAddr = crypto:strong_rand_bytes(32),\n\t\tTests =\n\t\t\t[\n\t\t\t\t{\"peer 1.2.3.4 peer 5.6.7.8:9\", #config.peers, [{5,6,7,8,9},{1,2,3,4,1984}]},\n\t\t\t\t{\"mine\", #config.mine, true},\n\t\t\t\t{\"port 22\", #config.port, 22},\n\t\t\t\t{\"mining_addr \"\n\t\t\t\t\t++ binary_to_list(ar_util:encode(Addr)), #config.mining_addr, Addr}\n\t\t\t],\n\t\tX = string:split(string:join([ L || {L, _, _} <- Tests ], \" \"), \" \", all),\n\t\tC = eval(X, #config{}),\n\t\tlists:foreach(\n\t\t\tfun({_, Index, Value}) ->\n\t\t\t\t?assertEqual(element(Index, C), Value)\n\t\t\tend,\n\t\t\tTests\n\t\t)\n\tend}.\n"
  },
  {
    "path": "apps/arweave/src/ar_config.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2026 (c) Arweave\n%%% @author Arweave Team\n%%% @doc arweave legacy configuration parser module.\n%%% @end\n%%%===================================================================\n-module(ar_config).\n-export([\n\tcompute_own_vdf/0,\n\tis_public_vdf_server/0,\n\tis_vdf_server/0,\n\tlog_config/1,\n\tparse/1,\n\tparse_config_file/1,\n\tparse_config_file/2,\n\tparse_storage_module/1,\n\tpull_from_remote_vdf_server/0,\n\tset_dependent_flags/1,\n\tuse_remote_vdf_server/0,\n\tvalidate_config/1\n]).\n-include(\"ar.hrl\").\n-include(\"ar_consensus.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%%--------------------------------------------------------------------\n%% @doc\n%% @see parse_config_file/2\n%% @end\n%%--------------------------------------------------------------------\n-spec parse_config_file(Args) -> Return when\n\tArgs :: [string()],\n\tReturn :: {ok, #config{}}\n\t\t| {error, term(), term()}\n\t\t| {error, term()}.\n\nparse_config_file(Args) ->\n\tparse_config_file(Args, [], #config{}).\n\n%%--------------------------------------------------------------------\n%% @doc Take legacy command line argument and look for config_file\n%% parameter, then read and parse the file.\n%% @end\n%%--------------------------------------------------------------------\n-spec parse_config_file(Args, Config) -> Return when\n\tArgs :: [string()],\n\tConfig :: #config{},\n\tReturn :: {ok, #config{}}\n\t\t| {error, term(), term()}\n\t\t| {error, term()}.\n\nparse_config_file(Args, Config) ->\n\tparse_config_file(Args, [], Config).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\n-spec parse_config_file(Args, Skipped, Config) -> Return when\n\tArgs :: [string()],\n\tSkipped :: [string()],\n\tConfig :: #config{},\n\tReturn :: {ok, #config{}}\n\t\t| {error, term(), term()}\n\t\t| {error, term()}.\n\nparse_config_file([], _, Config) ->\n\t{ok, Config};\nparse_config_file([\"config_file\", Path | Rest], Skipped, _) ->\n\tcase read_config_from_file(Path) of\n\t\t{ok, Config} ->\n\t\t\tparse_config_file(Rest, Skipped, Config);\n\t\t{error, Reason, Item} ->\n\t\t\tio:format(\"Failed to parse config: ~p: ~p.~n\", [Reason, Item]),\n\t\t\tar_cli_parser:show_help(),\n\t\t\t{error, Reason, Item};\n\t\t{error, Reason} ->\n\t\t\tio:format(\"Failed to parse config: ~p.~n\", [Reason]),\n\t\t\tar_cli_parser:show_help(),\n\t\t\t{error, Reason}\n\tend;\nparse_config_file([Arg | Rest], Skipped, Config) ->\n\tparse_config_file(Rest, [Arg | Skipped], Config).\n\n%%--------------------------------------------------------------------\n%% @doc read the content of a configuration and then parse it with\n%% `ar_config:parse/1'.\n%% @end\n%%--------------------------------------------------------------------\n-spec read_config_from_file(Path) -> Return when\n\tPath :: string(),\n\tReturn :: {ok, binary()}\n\t\t| {error, file_unreadable, Path}.\n\nread_config_from_file(Path) ->\n\tcase file:read_file(Path) of\n\t\t{ok, FileData} 
->\n\t\t\tar_config:parse(FileData);\n\t\t{error, _} ->\n\t\t\t{error, file_unreadable, Path}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc Validate legacy configuration file as `#config{}' record.\n%% @end\n%%--------------------------------------------------------------------\n-spec validate_config(Config :: #config{}) -> boolean().\n\nvalidate_config(Config) ->\n\tvalidate_init(Config) andalso\n\tvalidate_storage_modules(Config) andalso\n\tvalidate_repack_in_place(Config) andalso\n\tvalidate_cm_pool(Config) andalso\n\tvalidate_cm(Config) andalso\n\tvalidate_unique_replication_type(Config) andalso\n\tvalidate_verify(Config) andalso\n\tvalidate_start_from_state(Config).\n\n%%--------------------------------------------------------------------\n%% @doc Some flags force other flags to be set.\n%% @end\n%%--------------------------------------------------------------------\n-spec set_dependent_flags(Config :: #config{}) -> #config{}.\n\nset_dependent_flags(Config) ->\n\tConfig1 = set_start_from_state_flags(Config),\n\tConfig2 = set_verify_flags(Config1),\n\tConfig2.\n\nset_start_from_state_flags(#config{ start_from_state = not_set } = Config) ->\n\tConfig;\nset_start_from_state_flags(Config) ->\n\tConfig#config{ start_from_latest_state = true }.\n\nuse_remote_vdf_server() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tcase Config#config.nonce_limiter_server_trusted_peers of\n\t\t[] ->\n\t\t\tfalse;\n\t\t_ ->\n\t\t\ttrue\n\tend.\n\npull_from_remote_vdf_server() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tnot lists:member(vdf_server_pull, Config#config.disable).\n\ncompute_own_vdf() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tcase Config#config.nonce_limiter_server_trusted_peers of\n\t\t[] ->\n\t\t\t%% Not a VDF client - compute VDF unless explicitly disabled.\n\t\t\tnot lists:member(compute_own_vdf, Config#config.disable);\n\t\t_ ->\n\t\t\t%% Computing your own VDF needs to be explicitly enabled on a VDF client.\n\t\t\tlists:member(compute_own_vdf, Config#config.enable)\n\tend.\n\nis_vdf_server() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tcase Config#config.nonce_limiter_client_peers of\n\t\t[] ->\n\t\t\tlists:member(public_vdf_server, Config#config.enable);\n\t\t_ ->\n\t\t\ttrue\n\tend.\n\nis_public_vdf_server() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tlists:member(public_vdf_server, Config#config.enable).\n\nparse(Config) when is_binary(Config) ->\n\tcase ar_serialize:json_decode(Config) of\n\t\t{ok, JsonValue} -> parse_options(JsonValue);\n\t\t{error, _} -> {error, bad_json, Config}\n\tend.\n\nparse_storage_module(IOList) ->\n\tBin = iolist_to_binary(IOList),\n\tcase binary:split(Bin, <<\",\">>, [global]) of\n\t\t[PartitionNumberBin, PackingBin, <<\"repack_in_place\">>, ToPackingBin] ->\n\t\t\tPartitionNumber = binary_to_integer(PartitionNumberBin),\n\t\t\ttrue = PartitionNumber >= 0,\n\t\t\tparse_storage_module(PartitionNumber, ar_block:partition_size(), PackingBin, ToPackingBin);\n\t\t[RangeNumberBin, RangeSizeBin, PackingBin, <<\"repack_in_place\">>, ToPackingBin] ->\n\t\t\tRangeNumber = binary_to_integer(RangeNumberBin),\n\t\t\ttrue = RangeNumber >= 0,\n\t\t\tRangeSize = binary_to_integer(RangeSizeBin),\n\t\t\ttrue = RangeSize >= 0,\n\t\t\tparse_storage_module(RangeNumber, RangeSize, PackingBin, ToPackingBin);\n\t\t[PartitionNumberBin, PackingBin] ->\n\t\t\tPartitionNumber = binary_to_integer(PartitionNumberBin),\n\t\t\ttrue = PartitionNumber >= 0,\n\t\t\tparse_storage_module(PartitionNumber, 
ar_block:partition_size(), PackingBin);\n\t\t[RangeNumberBin, RangeSizeBin, PackingBin] ->\n\t\t\tRangeNumber = binary_to_integer(RangeNumberBin),\n\t\t\ttrue = RangeNumber >= 0,\n\t\t\tRangeSize = binary_to_integer(RangeSizeBin),\n\t\t\ttrue = RangeSize >= 0,\n\t\t\tparse_storage_module(RangeNumber, RangeSize, PackingBin)\n\tend.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\n\n%% -------------------------------------------------------------------\n%% @doc Parse the configuration options.\n%% -------------------------------------------------------------------\nparse_options({KVPairs}) when is_list(KVPairs) ->\n\tparse_options(KVPairs, #config{});\nparse_options(JsonValue) ->\n\t{error, root_not_object, JsonValue}.\n\nparse_options([{_, null} | Rest], Config) ->\n\tparse_options(Rest, Config);\n\nparse_options([{<<\"config_file\">>, _} | _], _) ->\n\t{error, config_file_set};\n\nparse_options([{<<\"peers\">>, Peers} | Rest], Config) when is_list(Peers) ->\n\tcase parse_peers(Peers, []) of\n\t\t{ok, ParsedPeers} ->\n\t\t\tparse_options(Rest, Config#config{ peers = ParsedPeers });\n\t\terror ->\n\t\t\t{error, bad_peers, Peers}\n\tend;\nparse_options([{<<\"peers\">>, Peers} | _], _) ->\n\t{error, {bad_type, peers, array}, Peers};\n\nparse_options([{<<\"block_gossip_peers\">>, Peers} | Rest], Config) when is_list(Peers) ->\n\tcase parse_peers(Peers, []) of\n\t\t{ok, ParsedPeers} ->\n\t\t\tparse_options(Rest, Config#config{ block_gossip_peers = ParsedPeers });\n\t\terror ->\n\t\t\t{error, bad_peers, Peers}\n\tend;\nparse_options([{<<\"block_gossip_peers\">>, Peers} | _], _) ->\n\t{error, {bad_type, peers, array}, Peers};\n\nparse_options([{<<\"local_peers\">>, Peers} | Rest], Config) when is_list(Peers) ->\n\tcase parse_peers(Peers, []) of\n\t\t{ok, ParsedPeers} ->\n\t\t\tparse_options(Rest, Config#config{ local_peers = ParsedPeers });\n\t\terror ->\n\t\t\t{error, bad_local_peers, Peers}\n\tend;\nparse_options([{<<\"local_peers\">>, Peers} | _], _) ->\n\t{error, {bad_type, local_peers, array}, Peers};\n\nparse_options([{<<\"sync_from_local_peers_only\">>, true} | Rest], Config) ->\n\tparse_options(Rest, Config#config{ sync_from_local_peers_only = true });\nparse_options([{<<\"sync_from_local_peers_only\">>, false} | Rest], Config) ->\n\tparse_options(Rest, Config#config{ sync_from_local_peers_only = false });\nparse_options([{<<\"sync_from_local_peers_only\">>, Opt} | _], _) ->\n\t{error, {bad_type, sync_from_local_peers_only, boolean}, Opt};\n\nparse_options([{<<\"start_from_latest_state\">>, true} | Rest], Config) ->\n\tparse_options(Rest, Config#config{ start_from_latest_state = true });\nparse_options([{<<\"start_from_latest_state\">>, false} | Rest], Config) ->\n\tparse_options(Rest, Config#config{ start_from_latest_state = false });\nparse_options([{<<\"start_from_latest_state\">>, Opt} | _], _) ->\n\t{error, {bad_type, start_from_latest_state, boolean}, Opt};\n\nparse_options([{<<\"start_from_state\">>, Folder} | Rest], Config) when is_binary(Folder) ->\n\tparse_options(Rest, Config#config{ start_from_state = binary_to_list(Folder) });\nparse_options([{<<\"start_from_state\">>, Folder} | _], _) ->\n\t{error, {bad_type, start_from_state, string}, Folder};\n\nparse_options([{<<\"start_from_block\">>, H} | Rest], Config) when is_binary(H) ->\n\tcase ar_util:safe_decode(H) of\n\t\t{ok, Decoded} when byte_size(Decoded) == 48 ->\n\t\t\tparse_options(Rest, Config#config{ 
start_from_block = Decoded });\n\t\t_ ->\n\t\t\t{error, bad_block, H}\n\tend;\nparse_options([{<<\"start_from_block\">>, Opt} | _], _) ->\n\t{error, {bad_type, start_from_block, string}, Opt};\n\nparse_options([{<<\"start_from_block_index\">>, true} | Rest], Config) ->\n\tparse_options(Rest, Config#config{ start_from_latest_state = true });\nparse_options([{<<\"start_from_block_index\">>, false} | Rest], Config) ->\n\tparse_options(Rest, Config#config{ start_from_latest_state = false });\nparse_options([{<<\"start_from_block_index\">>, Opt} | _], _) ->\n\t{error, {bad_type, start_from_block_index, boolean}, Opt};\n\nparse_options([{<<\"mine\">>, true} | Rest], Config) ->\n\tparse_options(Rest, Config#config{ mine = true });\nparse_options([{<<\"mine\">>, false} | Rest], Config) ->\n\tparse_options(Rest, Config);\nparse_options([{<<\"mine\">>, Opt} | _], _) ->\n\t{error, {bad_type, mine, boolean}, Opt};\n\nparse_options([{<<\"verify\">>, <<\"purge\">>} | Rest], Config) ->\n\tparse_options(Rest, Config#config{ verify = purge });\nparse_options([{<<\"verify\">>, <<\"log\">>} | Rest], Config) ->\n\tparse_options(Rest, Config#config{ verify = log });\nparse_options([{<<\"verify\">>, Opt} | _], _) ->\n\t{error, bad_verify_mode, Opt};\n\nparse_options([{<<\"verify_samples\">>, N} | Rest], Config) when is_integer(N) ->\n\tparse_options(Rest, Config#config{ verify_samples = N });\nparse_options([{<<\"verify_samples\">>, <<\"all\">>} | Rest], Config) ->\n\tparse_options(Rest, Config#config{ verify_samples = all });\nparse_options([{<<\"verify_samples\">>, Opt} | _], _) ->\n\t{error, {bad_type, verify_samples, number}, Opt};\n\nparse_options([{<<\"vdf\">>, Mode} | Rest], Config) ->\n\tParsedMode = case Mode of\n\t\t<<\"openssl\">> -> openssl;\n\t\t<<\"fused\">> -> fused;\n\t\t<<\"hiopt_m4\">> -> hiopt_m4;\n\t\t_ ->\n\t\t\tio:format(\"VDF ~p is invalid.~n\", [Mode]),\n\t\t\topenssl\n\tend,\n\tparse_options(Rest, Config#config{ vdf = ParsedMode });\n\nparse_options([{<<\"port\">>, Port} | Rest], Config) when is_integer(Port) ->\n\tparse_options(Rest, Config#config{ port = Port });\nparse_options([{<<\"port\">>, Port} | _], _) ->\n\t{error, {bad_type, port, number}, Port};\n\nparse_options([{<<\"data_dir\">>, DataDir} | Rest], Config) when is_binary(DataDir) ->\n\tparse_options(Rest, Config#config{ data_dir = binary_to_list(DataDir) });\nparse_options([{<<\"data_dir\">>, DataDir} | _], _) ->\n\t{error, {bad_type, data_dir, string}, DataDir};\n\nparse_options([{<<\"log_dir\">>, Dir} | Rest], Config) when is_binary(Dir) ->\n\tparse_options(Rest, Config#config{ log_dir = binary_to_list(Dir) });\nparse_options([{<<\"log_dir\">>, Dir} | _], _) ->\n\t{error, {bad_type, log_dir, string}, Dir};\n\nparse_options([{<<\"storage_modules\">>, L} | Rest], Config) when is_list(L) ->\n\ttry\n\t\t{StorageModules, RepackInPlaceStorageModules} =\n\t\t\tlists:foldr(\n\t\t\t\tfun(Bin, {Acc1, Acc2}) ->\n\t\t\t\t\tcase parse_storage_module(Bin) of\n\t\t\t\t\t\t{ok, Module} ->\n\t\t\t\t\t\t\t{[Module | Acc1], Acc2};\n\t\t\t\t\t\t{repack_in_place, Module} ->\n\t\t\t\t\t\t\t{Acc1, [Module | Acc2]}\n\t\t\t\t\tend\n\t\t\t\tend,\n\t\t\t\t{[], []},\n\t\t\t\tL\n\t\t\t),\n\t\tparse_options(Rest, Config#config{\n\t\t\t\tstorage_modules = StorageModules,\n\t\t\t\trepack_in_place_storage_modules = RepackInPlaceStorageModules })\n\tcatch Error:Reason ->\n\t\t?LOG_ERROR([{event, parse_failure}, {option, storage_modules},\n\t\t\t{error, Error}, {reason, Reason}]),\n\t\t{error, {bad_format, storage_modules, \"an array of 
\"\n\t\t\t\t\"\\\"{number},{address}[,repack_in_place,{to_packing}]\\\"\"}, L}\n\tend;\nparse_options([{<<\"storage_modules\">>, Bin} | _], _) ->\n\t{error, {bad_type, storage_modules, array}, Bin};\n\nparse_options([{<<\"repack_batch_size\">>, N} | Rest], Config) when is_integer(N) ->\n\tparse_options(Rest, Config#config{ repack_batch_size = N });\nparse_options([{<<\"repack_batch_size\">>, Opt} | _], _) ->\n\t{error, {bad_type, repack_batch_size, number}, Opt};\n\nparse_options([{<<\"repack_cache_size_mb\">>, N} | Rest], Config) when is_integer(N) ->\n\tparse_options(Rest, Config#config{ repack_cache_size_mb = N });\nparse_options([{<<\"repack_cache_size_mb\">>, Opt} | _], _) ->\n\t{error, {bad_type, repack_cache_size_mb, number}, Opt};\n\nparse_options([{<<\"polling\">>, Frequency} | Rest], Config) when is_integer(Frequency) ->\n\tparse_options(Rest, Config#config{ polling = Frequency });\nparse_options([{<<\"polling\">>, Opt} | _], _) ->\n\t{error, {bad_type, polling, number}, Opt};\n\nparse_options([{<<\"block_pollers\">>, N} | Rest], Config) when is_integer(N) ->\n\tparse_options(Rest, Config#config{ block_pollers = N });\nparse_options([{<<\"block_pollers\">>, Opt} | _], _) ->\n\t{error, {bad_type, block_pollers, number}, Opt};\n\nparse_options([{<<\"no_auto_join\">>, true} | Rest], Config) ->\n\tparse_options(Rest, Config#config{ auto_join = false });\nparse_options([{<<\"no_auto_join\">>, false} | Rest], Config) ->\n\tparse_options(Rest, Config);\nparse_options([{<<\"no_auto_join\">>, Opt} | _], _) ->\n\t{error, {bad_type, no_auto_join, boolean}, Opt};\n\nparse_options([{<<\"join_workers\">>, N} | Rest], Config) when is_integer(N)->\n\tparse_options(Rest, Config#config{ join_workers = N });\nparse_options([{<<\"join_workers\">>, Opt} | _], _) ->\n\t{error, {bad_type, join_workers, number}, Opt};\n\nparse_options([{<<\"packing_workers\">>, N} | Rest], Config) when is_integer(N)->\n\tparse_options(Rest, Config#config{ packing_workers = N });\nparse_options([{<<\"packing_workers\">>, Opt} | _], _) ->\n\t{error, {bad_type, packing_workers, number}, Opt};\n\nparse_options([{<<\"replica_2_9_workers\">>, N} | Rest], Config) when is_integer(N)->\n\tparse_options(Rest, Config#config{ replica_2_9_workers = N });\nparse_options([{<<\"replica_2_9_workers\">>, Opt} | _], _) ->\n\t{error, {bad_type, replica_2_9_workers, number}, Opt};\n\nparse_options([{<<\"disable_replica_2_9_device_limit\">>, true} | Rest], Config) ->\n\tparse_options(Rest, Config#config{ disable_replica_2_9_device_limit = true });\nparse_options([{<<\"disable_replica_2_9_device_limit\">>, false} | Rest], Config) ->\n\tparse_options(Rest, Config);\nparse_options([{<<\"disable_replica_2_9_device_limit\">>, Opt} | _], _) ->\n\t{error, {bad_type, disable_replica_2_9_device_limit, boolean}, Opt};\n\nparse_options([{<<\"replica_2_9_entropy_cache_size_mb\">>, N} | Rest], Config) when is_integer(N)->\n\tparse_options(Rest, Config#config{ replica_2_9_entropy_cache_size_mb = N });\nparse_options([{<<\"replica_2_9_entropy_cache_size_mb\">>, Opt} | _], _) ->\n\t{error, {bad_type, replica_2_9_entropy_cache_size_mb, number}, Opt};\n\nparse_options([{<<\"diff\">>, Diff} | Rest], Config) when is_integer(Diff) ->\n\tparse_options(Rest, Config#config{ diff = Diff });\nparse_options([{<<\"diff\">>, Diff} | _], _) ->\n\t{error, {bad_type, diff, number}, Diff};\n\nparse_options([{<<\"mining_addr\">>, Addr} | Rest], Config) when is_binary(Addr) ->\n\tcase Config#config.mining_addr of\n\t\tnot_set ->\n\t\t\tcase ar_util:safe_decode(Addr) 
of\n\t\t\t\t{ok, D} when byte_size(D) == 32 ->\n\t\t\t\t\tparse_options(Rest, Config#config{ mining_addr = D });\n\t\t\t\t_ -> {error, bad_mining_addr, Addr}\n\t\t\tend;\n\t\t_ ->\n\t\t\t{error, at_most_one_mining_addr_is_supported, Addr}\n\tend;\nparse_options([{<<\"mining_addr\">>, Addr} | _], _) ->\n\t{error, {bad_type, mining_addr, string}, Addr};\n\nparse_options([{<<\"hashing_threads\">>, Threads} | Rest], Config) when is_integer(Threads) ->\n\tparse_options(Rest, Config#config{ hashing_threads = Threads });\nparse_options([{<<\"hashing_threads\">>, Threads} | _], _) ->\n\t{error, {bad_type, hashing_threads, number}, Threads};\n\nparse_options([{<<\"data_cache_size_limit\">>, Limit} | Rest], Config)\n\t\twhen is_integer(Limit) ->\n\tparse_options(Rest, Config#config{ data_cache_size_limit = Limit });\nparse_options([{<<\"data_cache_size_limit\">>, Limit} | _], _) ->\n\t{error, {bad_type, data_cache_size_limit, number}, Limit};\n\nparse_options([{<<\"packing_cache_size_limit\">>, Limit} | Rest], Config)\n\t\twhen is_integer(Limit) ->\n\tparse_options(Rest, Config#config{ packing_cache_size_limit = Limit });\nparse_options([{<<\"packing_cache_size_limit\">>, Limit} | _], _) ->\n\t{error, {bad_type, packing_cache_size_limit, number}, Limit};\n\nparse_options([{<<\"mining_cache_size_mb\">>, Limit} | Rest], Config)\n\t\twhen is_integer(Limit) ->\n\tparse_options(Rest, Config#config{ mining_cache_size_mb = Limit });\nparse_options([{<<\"mining_cache_size_mb\">>, Limit} | _], _) ->\n\t{error, {bad_type, mining_cache_size_mb, number}, Limit};\n\nparse_options([{<<\"max_emitters\">>, Value} | Rest], Config) when is_integer(Value) ->\n\tparse_options(Rest, Config#config{ max_emitters = Value });\nparse_options([{<<\"max_emitters\">>, Value} | _], _) ->\n\t{error, {bad_type, max_emitters, number}, Value};\n\nparse_options([{<<\"post_tx_timeout\">>, Value} | Rest], Config)\n\t\twhen is_integer(Value) ->\n\tparse_options(Rest, Config#config{ post_tx_timeout = Value });\nparse_options([{<<\"post_tx_timeout\">>, Value} | _], _) ->\n\t{error, {bad_type, post_tx_timeout, number}, Value};\n\nparse_options([{<<\"max_propagation_peers\">>, Value} | Rest], Config)\n\t\twhen is_integer(Value) ->\n\tparse_options(Rest, Config#config{ max_propagation_peers = Value });\nparse_options([{<<\"max_propagation_peers\">>, Value} | _], _) ->\n\t{error, {bad_type, max_propagation_peers, number}, Value};\n\nparse_options([{<<\"max_block_propagation_peers\">>, Value} | Rest], Config)\n\t\twhen is_integer(Value) ->\n\tparse_options(Rest, Config#config{ max_block_propagation_peers = Value });\nparse_options([{<<\"max_block_propagation_peers\">>, Value} | _], _) ->\n\t{error, {bad_type, max_block_propagation_peers, number}, Value};\n\nparse_options([{<<\"sync_jobs\">>, Value} | Rest], Config)\n\t\twhen is_integer(Value) ->\n\tparse_options(Rest, Config#config{ sync_jobs = Value });\nparse_options([{<<\"sync_jobs\">>, Value} | _], _) ->\n\t{error, {bad_type, sync_jobs, number}, Value};\n\nparse_options([{<<\"header_sync_jobs\">>, Value} | Rest], Config)\n\t\twhen is_integer(Value) ->\n\tparse_options(Rest, Config#config{ header_sync_jobs = Value });\nparse_options([{<<\"header_sync_jobs\">>, Value} | _], _) ->\n\t{error, {bad_type, header_sync_jobs, number}, Value};\n\nparse_options([{<<\"enable_data_roots_syncing\">>, Value} | Rest], Config)\n\t\twhen is_boolean(Value) ->\n\tparse_options(Rest, Config#config{ enable_data_roots_syncing = Value });\nparse_options([{<<\"enable_data_roots_syncing\">>, Value} | _], _) 
->\n\t{error, {bad_type, enable_data_roots_syncing, boolean}, Value};\n\nparse_options([{<<\"disk_pool_jobs\">>, Value} | Rest], Config)\n\t\twhen is_integer(Value) ->\n\tparse_options(Rest, Config#config{ disk_pool_jobs = Value });\nparse_options([{<<\"disk_pool_jobs\">>, Value} | _], _) ->\n\t{error, {bad_type, disk_pool_jobs, number}, Value};\n\nparse_options([{<<\"requests_per_minute_limit\">>, L} | Rest], Config) when is_integer(L) ->\n\tparse_options(Rest, Config#config{ requests_per_minute_limit = L });\nparse_options([{<<\"requests_per_minute_limit\">>, L} | _], _) ->\n\t{error, {bad_type, requests_per_minute_limit, number}, L};\n\nparse_options([{<<\"requests_per_minute_limit_by_ip\">>, Object} | Rest], Config)\n\t\twhen is_tuple(Object) ->\n\tcase parse_requests_per_minute_limit_by_ip(Object) of\n\t\t{ok, ParsedMap} ->\n\t\t\tparse_options(Rest, Config#config{ requests_per_minute_limit_by_ip = ParsedMap });\n\t\terror ->\n\t\t\t{error, bad_requests_per_minute_limit_by_ip, Object}\n\tend;\nparse_options([{<<\"requests_per_minute_limit_by_ip\">>, Object} | _], _) ->\n\t{error, {bad_type, requests_per_minute_limit_by_ip, object}, Object};\n\nparse_options([{<<\"transaction_blacklists\">>, TransactionBlacklists} | Rest], Config)\n\t\twhen is_list(TransactionBlacklists) ->\n\tcase safe_map(fun binary_to_list/1, TransactionBlacklists) of\n\t\t{ok, TransactionBlacklistStrings} ->\n\t\t\tparse_options(Rest, Config#config{\n\t\t\t\ttransaction_blacklist_files = TransactionBlacklistStrings\n\t\t\t});\n\t\terror ->\n\t\t\t{error, bad_transaction_blacklists}\n\tend;\nparse_options([{<<\"transaction_blacklists\">>, TransactionBlacklists} | _], _) ->\n\t{error, {bad_type, transaction_blacklists, array}, TransactionBlacklists};\n\nparse_options([{<<\"transaction_blacklist_urls\">>, TransactionBlacklistURLs} | Rest], Config)\n\t\twhen is_list(TransactionBlacklistURLs) ->\n\tcase safe_map(fun binary_to_list/1, TransactionBlacklistURLs) of\n\t\t{ok, TransactionBlacklistURLStrings} ->\n\t\t\tparse_options(Rest, Config#config{\n\t\t\t\ttransaction_blacklist_urls = TransactionBlacklistURLStrings\n\t\t\t});\n\t\terror ->\n\t\t\t{error, bad_transaction_blacklist_urls}\n\tend;\nparse_options([{<<\"transaction_blacklist_urls\">>, TransactionBlacklistURLs} | _], _) ->\n\t{error, {bad_type, transaction_blacklist_urls, array}, TransactionBlacklistURLs};\n\nparse_options([{<<\"transaction_whitelists\">>, TransactionWhitelists} | Rest], Config)\n\t\twhen is_list(TransactionWhitelists) ->\n\tcase safe_map(fun binary_to_list/1, TransactionWhitelists) of\n\t\t{ok, TransactionWhitelistStrings} ->\n\t\t\tparse_options(Rest, Config#config{\n\t\t\t\ttransaction_whitelist_files = TransactionWhitelistStrings\n\t\t\t});\n\t\terror ->\n\t\t\t{error, bad_transaction_whitelists}\n\tend;\nparse_options([{<<\"transaction_whitelists\">>, TransactionWhitelists} | _], _) ->\n\t{error, {bad_type, transaction_whitelists, array}, TransactionWhitelists};\n\nparse_options([{<<\"transaction_whitelist_urls\">>, TransactionWhitelistURLs} | Rest], Config)\n\t\twhen is_list(TransactionWhitelistURLs) ->\n\tcase safe_map(fun binary_to_list/1, TransactionWhitelistURLs) of\n\t\t{ok, TransactionWhitelistURLStrings} ->\n\t\t\tparse_options(Rest, Config#config{\n\t\t\t\ttransaction_whitelist_urls = TransactionWhitelistURLStrings\n\t\t\t});\n\t\terror ->\n\t\t\t{error, bad_transaction_whitelist_urls}\n\tend;\nparse_options([{<<\"transaction_whitelist_urls\">>, TransactionWhitelistURLs} | _], _) ->\n\t{error, {bad_type, 
transaction_whitelist_urls, array}, TransactionWhitelistURLs};\n\nparse_options([{<<\"disk_space_check_frequency\">>, Frequency} | Rest], Config)\n\t\twhen is_integer(Frequency) ->\n\tparse_options(Rest, Config#config{ disk_space_check_frequency = Frequency * 1000 });\nparse_options([{<<\"disk_space_check_frequency\">>, Frequency} | _], _) ->\n\t{error, {bad_type, disk_space_check_frequency, number}, Frequency};\n\nparse_options([{<<\"init\">>, true} | Rest], Config) ->\n\tparse_options(Rest, Config#config{ init = true });\nparse_options([{<<\"init\">>, false} | Rest], Config) ->\n\tparse_options(Rest, Config#config{ init = false });\nparse_options([{<<\"init\">>, Opt} | _], _) ->\n\t{error, {bad_type, init, boolean}, Opt};\n\nparse_options([{<<\"internal_api_secret\">>, Secret} | Rest], Config)\n\t\twhen is_binary(Secret), byte_size(Secret) >= ?INTERNAL_API_SECRET_MIN_LEN ->\n\tparse_options(Rest, Config#config{ internal_api_secret = Secret });\nparse_options([{<<\"internal_api_secret\">>, Secret} | _], _) ->\n\t{error, bad_secret, Secret};\n\nparse_options([{<<\"enable\">>, Features} | Rest], Config) when is_list(Features) ->\n\tcase safe_map(fun(Feature) -> binary_to_atom(Feature, latin1) end, Features) of\n\t\t{ok, FeatureAtoms} ->\n\t\t\tparse_options(Rest, Config#config{ enable = FeatureAtoms });\n\t\terror ->\n\t\t\t{error, bad_enable}\n\tend;\nparse_options([{<<\"enable\">>, Features} | _], _) ->\n\t{error, {bad_type, enable, array}, Features};\n\nparse_options([{<<\"disable\">>, Features} | Rest], Config) when is_list(Features) ->\n\tcase safe_map(fun(Feature) -> binary_to_atom(Feature, latin1) end, Features) of\n\t\t{ok, FeatureAtoms} ->\n\t\t\tparse_options(Rest, Config#config{ disable = FeatureAtoms });\n\t\terror ->\n\t\t\t{error, bad_disable}\n\tend;\nparse_options([{<<\"disable\">>, Features} | _], _) ->\n\t{error, {bad_type, disable, array}, Features};\n\nparse_options([{<<\"webhooks\">>, WebhookConfigs} | Rest], Config) when is_list(WebhookConfigs) ->\n\tcase parse_webhooks(WebhookConfigs, []) of\n\t\t{ok, ParsedWebhooks} ->\n\t\t\tparse_options(Rest, Config#config{ webhooks = ParsedWebhooks });\n\t\terror ->\n\t\t\t{error, bad_webhooks, WebhookConfigs}\n\tend;\nparse_options([{<<\"webhooks\">>, Webhooks} | _], _) ->\n\t{error, {bad_type, webhooks, array}, Webhooks};\n\nparse_options([{<<\"semaphores\">>, Semaphores} | Rest], Config) when is_tuple(Semaphores) ->\n\tcase parse_atom_number_map(Semaphores, Config#config.semaphores) of\n\t\t{ok, ParsedSemaphores} ->\n\t\t\tparse_options(Rest, Config#config{ semaphores = ParsedSemaphores });\n\t\terror ->\n\t\t\t{error, bad_semaphores, Semaphores}\n\tend;\nparse_options([{<<\"semaphores\">>, Semaphores} | _], _) ->\n\t{error, {bad_type, semaphores, object}, Semaphores};\n\nparse_options([{<<\"max_connections\">>, MaxConnections} | Rest], Config)\n\t\twhen is_integer(MaxConnections), MaxConnections >= 1 ->\n\tparse_options(Rest, Config#config{ 'http_api.tcp.max_connections' = MaxConnections });\n\nparse_options([{<<\"disk_pool_data_root_expiration_time\">>, D} | Rest], Config)\n\t\twhen is_integer(D) ->\n\tparse_options(Rest, Config#config{ disk_pool_data_root_expiration_time = D });\n\nparse_options([{<<\"max_disk_pool_buffer_mb\">>, D} | Rest], Config) when is_integer(D) ->\n\tparse_options(Rest, Config#config{ max_disk_pool_buffer_mb= D });\n\nparse_options([{<<\"max_disk_pool_data_root_buffer_mb\">>, D} | Rest], Config)\n\t\twhen is_integer(D) ->\n\tparse_options(Rest, Config#config{ max_disk_pool_data_root_buffer_mb = D 
});\n\nparse_options([{<<\"max_duplicate_data_roots\">>, D} | Rest], Config)\n\t\twhen is_integer(D) ->\n\tparse_options(Rest, Config#config{ max_duplicate_data_roots = D });\n\nparse_options([{<<\"disk_cache_size_mb\">>, D} | Rest], Config) when is_integer(D) ->\n\tparse_options(Rest, Config#config{ disk_cache_size = D });\n\nparse_options([{<<\"max_nonce_limiter_validation_thread_count\">>, D} | Rest], Config)\n\t\twhen is_integer(D) ->\n\tparse_options(Rest, Config#config{ max_nonce_limiter_validation_thread_count = D });\n\nparse_options([{<<\"max_nonce_limiter_last_step_validation_thread_count\">>, D} | Rest], Config)\n\t\twhen is_integer(D) ->\n\tparse_options(Rest,\n\t\t\tConfig#config{ max_nonce_limiter_last_step_validation_thread_count = D });\n\nparse_options([{<<\"vdf_server_trusted_peer\">>, <<>>} | Rest], Config) ->\n\tparse_options(Rest, Config);\nparse_options([{<<\"vdf_server_trusted_peer\">>, Peer} | Rest], Config) ->\n\tparse_options(Rest, parse_vdf_server_trusted_peer(Peer, Config));\n\nparse_options([{<<\"vdf_server_trusted_peers\">>, Peers} | Rest], Config) when is_list(Peers) ->\n\tparse_options(Rest, parse_vdf_server_trusted_peers(Peers, Config));\nparse_options([{<<\"vdf_server_trusted_peers\">>, Peers} | _], _) ->\n\t{error, {bad_type, vdf_server_trusted_peers, array}, Peers};\n\nparse_options([{<<\"vdf_client_peers\">>, Peers} | Rest], Config) when is_list(Peers) ->\n\tparse_options(Rest, Config#config{ nonce_limiter_client_peers = Peers });\nparse_options([{<<\"vdf_client_peers\">>, Peers} | _], _) ->\n\t{error, {bad_type, vdf_client_peers, array}, Peers};\n\nparse_options([{<<\"debug\">>, B} | Rest], Config) when is_boolean(B) ->\n\tparse_options(Rest, Config#config{ debug = B });\n\nparse_options([{<<\"run_defragmentation\">>, B} | Rest], Config) when is_boolean(B) ->\n\tparse_options(Rest, Config#config{ run_defragmentation = B });\n\nparse_options([{<<\"defragmentation_trigger_threshold\">>, D} | Rest], Config)\n\t\twhen is_integer(D) ->\n\tparse_options(Rest, Config#config{ defragmentation_trigger_threshold = D });\n\nparse_options([{<<\"block_throttle_by_ip_interval\">>, D} | Rest], Config)\n\t\twhen is_integer(D) ->\n\tparse_options(Rest, Config#config{ block_throttle_by_ip_interval = D });\n\nparse_options([{<<\"block_throttle_by_solution_interval\">>, D} | Rest], Config)\n\t\twhen is_integer(D) ->\n\tparse_options(Rest, Config#config{ block_throttle_by_solution_interval = D });\n\nparse_options([{<<\"defragment_modules\">>, L} | Rest], Config) when is_list(L) ->\n\ttry\n\t\tDefragModules =\n\t\t\tlists:foldr(\n\t\t\t\tfun(Bin, Acc) ->\n\t\t\t\t\t{ok, M} = parse_storage_module(Bin),\n\t\t\t\t\t[M | Acc]\n\t\t\t\tend,\n\t\t\t\t[],\n\t\t\t\tL\n\t\t\t),\n\t\tparse_options(Rest, Config#config{ defragmentation_modules = DefragModules })\n\tcatch _:_ ->\n\t\t{error, {bad_format, defragment_modules, \"an array of \\\"{number},{address}\\\"\"}, L}\n\tend;\nparse_options([{<<\"defragment_modules\">>, Bin} | _], _) ->\n\t{error, {bad_type, defragment_modules, array}, Bin};\n\nparse_options([{<<\"http_api.tcp.idle_timeout_seconds\">>, D} | Rest], Config) when is_integer(D) ->\n\tparse_options(Rest, Config#config{ http_api_transport_idle_timeout = D * 1000 });\n\nparse_options([{<<\"coordinated_mining\">>, true} | Rest], Config) ->\n\tparse_options(Rest, Config#config{ coordinated_mining = true });\nparse_options([{<<\"coordinated_mining\">>, false} | Rest], Config) ->\n\tparse_options(Rest, Config);\nparse_options([{<<\"coordinated_mining\">>, Opt} | _], _) 
->\n\t{error, {bad_type, coordinated_mining, boolean}, Opt};\n\nparse_options([{<<\"cm_api_secret\">>, CMSecret} | Rest], Config)\n\t\twhen is_binary(CMSecret), byte_size(CMSecret) >= ?INTERNAL_API_SECRET_MIN_LEN ->\n\tparse_options(Rest, Config#config{ cm_api_secret = CMSecret });\nparse_options([{<<\"cm_api_secret\">>, CMSecret} | _], _) ->\n\t{error, {bad_type, cm_api_secret, string}, CMSecret};\n\nparse_options([{<<\"cm_poll_interval\">>, CMPollInterval} | Rest], Config)\n\t\twhen is_integer(CMPollInterval) ->\n\tparse_options(Rest, Config#config{ cm_poll_interval = CMPollInterval });\nparse_options([{<<\"cm_poll_interval\">>, CMPollInterval} | _], _) ->\n\t{error, {bad_type, cm_poll_interval, number}, CMPollInterval};\n\nparse_options([{<<\"cm_peers\">>, Peers} | Rest], Config) when is_list(Peers) ->\n\tcase parse_peers(Peers, []) of\n\t\t{ok, ParsedPeers} ->\n\t\t\tparse_options(Rest, Config#config{ cm_peers = ParsedPeers });\n\t\terror ->\n\t\t\t{error, bad_peers, Peers}\n\tend;\n\nparse_options([{<<\"cm_exit_peer\">>, Peer} | Rest], Config) ->\n\tcase ar_util:safe_parse_peer(Peer) of\n\t\t{ok, [ParsedPeer|_]} ->\n\t\t\tparse_options(Rest, Config#config{ cm_exit_peer = ParsedPeer });\n\t\t{error, _} ->\n\t\t\t{error, bad_cm_exit_peer, Peer}\n\tend;\n\nparse_options([{<<\"cm_out_batch_timeout\">>, CMBatchTimeout} | Rest], Config)\n\t\twhen is_integer(CMBatchTimeout) ->\n\tparse_options(Rest, Config#config{ cm_out_batch_timeout = CMBatchTimeout });\nparse_options([{<<\"cm_out_batch_timeout\">>, CMBatchTimeout} | _], _) ->\n\t{error, {bad_type, cm_out_batch_timeout, number}, CMBatchTimeout};\n\nparse_options([{<<\"is_pool_server\">>, true} | Rest], Config) ->\n\tparse_options(Rest, Config#config{ is_pool_server = true });\nparse_options([{<<\"is_pool_server\">>, false} | Rest], Config) ->\n\tparse_options(Rest, Config);\nparse_options([{<<\"is_pool_server\">>, Opt} | _], _) ->\n\t{error, {bad_type, is_pool_server, boolean}, Opt};\n\nparse_options([{<<\"is_pool_client\">>, true} | Rest], Config) ->\n\tparse_options(Rest, Config#config{ is_pool_client = true });\nparse_options([{<<\"is_pool_client\">>, false} | Rest], Config) ->\n\tparse_options(Rest, Config);\nparse_options([{<<\"is_pool_client\">>, Opt} | _], _) ->\n\t{error, {bad_type, is_pool_client, boolean}, Opt};\n\nparse_options([{<<\"pool_api_key\">>, Key} | Rest], Config) when is_binary(Key) ->\n\tparse_options(Rest, Config#config{ pool_api_key = Key });\nparse_options([{<<\"pool_api_key\">>, Key} | _], _) ->\n\t{error, {bad_type, pool_api_key, string}, Key};\n\nparse_options([{<<\"pool_server_address\">>, Host} | Rest], Config) when is_binary(Host) ->\n\tparse_options(Rest, Config#config{ pool_server_address = Host });\nparse_options([{<<\"pool_server_address\">>, Host} | _], _) ->\n\t{error, {bad_type, pool_server_address, string}, Host};\n\n%% Undocumented/unsupported options\nparse_options([{<<\"chunk_storage_file_size\">>, ChunkGroupSize} | Rest], Config)\n\t\twhen is_integer(ChunkGroupSize) ->\n\tparse_options(Rest, Config#config{ chunk_storage_file_size = ChunkGroupSize });\nparse_options([{<<\"chunk_storage_file_size\">>, ChunkGroupSize} | _], _) ->\n\t{error, {bad_type, chunk_storage_file_size, number}, ChunkGroupSize};\n\nparse_options([{<<\"rocksdb_flush_interval\">>, IntervalS} | Rest], Config)\n\t\twhen is_integer(IntervalS) ->\n\tparse_options(Rest, Config#config{ rocksdb_flush_interval_s = IntervalS });\nparse_options([{<<\"rocksdb_flush_interval\">>, IntervalS} | _], _) ->\n\t{error, {bad_type, 
rocksdb_flush_interval, number}, IntervalS};\n\nparse_options([{<<\"rocksdb_wal_sync_interval\">>, IntervalS} | Rest], Config)\n\t\twhen is_integer(IntervalS) ->\n\tparse_options(Rest, Config#config{ rocksdb_wal_sync_interval_s = IntervalS });\nparse_options([{<<\"rocksdb_wal_sync_interval\">>, IntervalS} | _], _) ->\n\t{error, {bad_type, rocksdb_wal_sync_interval, number}, IntervalS};\n\nparse_options([{<<\"data_sync_request_packed_chunks\">>, Bool} | Rest], Config)\n\t\twhen is_boolean(Bool) ->\n\tparse_options(Rest, Config#config{ data_sync_request_packed_chunks = Bool });\nparse_options([{<<\"data_sync_request_packed_chunks\">>, InvalidValue} | _Rest], _Config) ->\n\t{error, {bad_type, data_sync_request_packed_chunks, boolean}, InvalidValue};\n\n%% shutdown procedure\nparse_options([{<<\"network.tcp.shutdown.connection_timeout\">>, Delay} | Rest], Config)\n\twhen is_integer(Delay) andalso Delay > 0 ->\n\t\tNewConfig = Config#config{ shutdown_tcp_connection_timeout = Delay },\n\t\tparse_options(Rest, NewConfig);\nparse_options([{<<\"network.tcp.shutdown.connection_timeout\">>, InvalidValue} | _Rest], _Config) ->\n\t{error, {bad_type, shutdown_tcp_connection_timeout, integer}, InvalidValue};\nparse_options([{<<\"network.tcp.shutdown.mode\">>, Mode}|Rest], Config) ->\n\tcase Mode of\n\t\t<<\"shutdown\">> ->\n\t\t\tNewConfig = Config#config{ shutdown_tcp_mode = shutdown },\n\t\t\tparse_options(Rest, NewConfig);\n\t\t<<\"close\">> ->\n\t\t\tNewConfig = Config#config{ shutdown_tcp_mode = close },\n\t\t\tparse_options(Rest, NewConfig);\n\t\tMode ->\n\t\t\t{error, {bad_value, shutdown_tcp_mode}, Mode}\n\tend;\n\n%% Global socket configuration\nparse_options([{<<\"network.socket.backend\">>, Backend}|Rest], Config) ->\n\tcase Backend of\n\t\t<<\"inet\">> ->\n\t\t\tparse_options(Rest, Config#config{ 'socket.backend' = inet });\n\t\t<<\"socket\">> ->\n\t\t\tparse_options(Rest, Config#config{ 'socket.backend' = socket });\n\t\t_ ->\n\t\t\t{error, {bad_value, 'socket.backend'}, Backend}\n\tend;\n\n%% Gun client parameters\nparse_options([{<<\"http_client.http.closing_timeout\">>, Timeout}|Rest], Config) ->\n\tcase Timeout of\n\t\t_ when is_integer(Timeout), Timeout >= 0 ->\n\t\t\tparse_options(Rest, Config#config{ 'http_client.http.closing_timeout' = Timeout });\n\t\t_ ->\n\t\t\t{error, {bad_value, 'http_client.http.closing_timeout'}, Timeout}\n\tend;\nparse_options([{<<\"http_client.http.keepalive\">>, Timeout}|Rest], Config) ->\n\tcase Timeout of\n\t\t<<\"infinity\">> ->\n\t\t\tparse_options(Rest, Config#config{ 'http_client.http.keepalive' = infinity });\n\t\t_ when is_integer(Timeout), Timeout >= 0 ->\n\t\t\tparse_options(Rest, Config#config{ 'http_client.http.keepalive' = Timeout });\n\t\t_ ->\n\t\t\t{error, {bad_value, 'http_client.http.keepalive'}, Timeout}\n\tend;\nparse_options([{<<\"http_client.tcp.delay_send\">>, Delay}|Rest], Config) ->\n\tcase Delay of\n\t\t_ when is_boolean(Delay) ->\n\t\t\tparse_options(Rest, Config#config{ 'http_client.tcp.delay_send' = Delay });\n\t\t_ ->\n\t\t\t{error, {bad_value, 'http_client.tcp.delay_send'}, Delay}\n\tend;\nparse_options([{<<\"http_client.tcp.keepalive\">>, Keepalive}|Rest], Config) ->\n\tcase Keepalive of\n\t\t_ when is_boolean(Keepalive) ->\n\t\t\tparse_options(Rest, Config#config{ 'http_client.tcp.keepalive' = Keepalive });\n\t\t_ ->\n\t\t\t{error, {bad_value, 'http_client.tcp.keepalive'}, Keepalive}\n\tend;\nparse_options([{<<\"http_client.tcp.linger\">>, Linger}|Rest], Config) ->\n\tcase Linger of\n\t\t_ when is_boolean(Linger) 
->\n\t\t\tparse_options(Rest, Config#config{ 'http_client.tcp.linger' = Linger });\n\t\t_ ->\n\t\t\t{error, {bad_value, 'http_client.tcp.linger'}, Linger}\n\tend;\nparse_options([{<<\"http_client.tcp.linger_timeout\">>, Timeout}|Rest], Config) ->\n\tcase Timeout of\n\t\t_ when is_integer(Timeout), Timeout >= 0 ->\n\t\t\tparse_options(Rest, Config#config{ 'http_client.tcp.linger_timeout' = Timeout });\n\n\t\t_ ->\n\t\t\t{error, {bad_value, 'http_client.tcp.linger_timeout'}, Timeout}\n\tend;\nparse_options([{<<\"http_client.tcp.nodelay\">>, Nodelay}|Rest], Config) ->\n\tcase Nodelay of\n\t\t_ when is_boolean(Nodelay) ->\n\t\t\tparse_options(Rest, Config#config{ 'http_client.tcp.nodelay' = Nodelay });\n\t\t_ ->\n\t\t\t{error, {bad_value, 'http_client.tcp.nodelay'}, Nodelay }\n\tend;\nparse_options([{<<\"http_client.tcp.send_timeout_close\">>, Value}|Rest], Config) ->\n\tcase Value of\n\t\t_ when is_boolean(Value) ->\n\t\t\tparse_options(Rest, Config#config{ 'http_client.tcp.send_timeout_close' = Value });\n\t\t_ ->\n\t\t\t{error, {bad_value, 'http_client.tcp.send_timeout_close'}, Value}\n\tend;\nparse_options([{<<\"http_client.tcp.send_timeout\">>, Timeout}|Rest], Config) ->\n\tcase Timeout of\n\t\t_ when is_integer(Timeout), Timeout >= 0 ->\n\t\t\tparse_options(Rest, Config#config{ 'http_client.tcp.send_timeout' = Timeout });\n\t\t_ ->\n\t\t\t{error, {bad_value, 'http_client.tcp.send_timeout'}, Timeout}\n\tend;\n\n%% Cowboy server parameters\nparse_options([{<<\"http_api.http.active_n\">>, Active}|Rest], Config) ->\n\tcase Active of\n\t\t_ when is_integer(Active), Active >= 1 ->\n\t\t\tparse_options(Rest, Config#config{ 'http_api.http.active_n' = Active });\n\t\t_ ->\n\t\t\t{error, {bad_value, 'http_api.http.active_n'}, Active}\n\tend;\nparse_options([{<<\"http_api.http.inactivity_timeout\">>, Timeout}|Rest], Config) ->\n\tcase Timeout of\n\t\t_ when is_integer(Timeout), Timeout >= 0 ->\n\t\t\tparse_options(Rest, Config#config{ 'http_api.http.inactivity_timeout' = Timeout });\n\t\t_ ->\n\t\t\t{error, {bad_value, 'http_api.http.inactivity_timeout'}, Timeout}\n\tend;\nparse_options([{<<\"http_api.http.linger_timeout\">>, Timeout}|Rest], Config) ->\n\tcase Timeout of\n\t\t_ when is_integer(Timeout), Timeout >= 0 ->\n\t\t\tparse_options(Rest, Config#config{ 'http_api.http.linger_timeout' = Timeout });\n\t\t_ ->\n\t\t\t{error, {bad_value, 'http_api.http.linger_timeout'}, Timeout}\n\tend;\nparse_options([{<<\"http_api.http.request_timeout\">>, Timeout}|Rest], Config) ->\n\tcase Timeout of\n\t\t_ when is_integer(Timeout), Timeout >= 0 ->\n\t\t\tparse_options(Rest, Config#config{ 'http_api.http.request_timeout' = Timeout });\n\t\t_ ->\n\t\t\t{error, {bad_value, 'http_api.http.request_timeout'}, Timeout}\n\tend;\nparse_options([{<<\"http_api.tcp.backlog\">>, Backlog}|Rest], Config) ->\n\tcase Backlog of\n\t\t_ when is_integer(Backlog), Backlog >= 1 ->\n\t\t\tparse_options(Rest, Config#config{ 'http_api.tcp.backlog' = Backlog });\n\t\t_ ->\n\t\t\t{error, {bad_value, 'http_api.tcp.backlog'}, Backlog}\n\tend;\nparse_options([{<<\"http_api.tcp.delay_send\">>, Delay}|Rest], Config) ->\n\tcase Delay of\n\t\t_ when is_boolean(Delay) ->\n\t\t\tparse_options(Rest, Config#config{ 'http_api.tcp.delay_send' = Delay });\n\t\t_ ->\n\t\t\t{error, {bad_value, 'http_api.tcp.delay_send'}, Delay}\n\tend;\nparse_options([{<<\"http_api.tcp.keepalive\">>, Keepalive}|Rest], Config) ->\n\tcase Keepalive of\n\t\t_ when is_boolean(Keepalive) ->\n\t\t\tparse_options(Rest, Config#config{ 'http_api.tcp.keepalive' = Keepalive 
});\n\t\t_ ->\n\t\t\t{error, {bad_value, 'http_api.tcp.keepalive'}, Keepalive}\n\tend;\nparse_options([{<<\"http_api.tcp.linger\">>, Linger}|Rest], Config) ->\n\tcase Linger of\n\t\t_ when is_boolean(Linger) ->\n\t\t\tparse_options(Rest, Config#config{ 'http_api.tcp.linger' = Linger });\n\t\t_ ->\n\t\t\t{error, {bad_value, 'http_api.tcp.linger'}, Linger}\n\tend;\nparse_options([{<<\"http_api.tcp.linger_timeout\">>, Timeout}|Rest], Config) ->\n\tcase Timeout of\n\t\t_ when is_integer(Timeout), Timeout >= 0 ->\n\t\t\tparse_options(Rest, Config#config{ 'http_api.tcp.linger_timeout' = Timeout });\n\t\t_ ->\n\t\t\t{error, {bad_value, 'http_api.tcp.linger_timeout'}, Timeout}\n\tend;\nparse_options([{<<\"http_api.tcp.listener_shutdown\">>, Shutdown}|Rest], Config) ->\n\tcase Shutdown of\n\t\t<<\"brutal_kill\">> ->\n\t\t\tparse_options(Rest, Config#config{ 'http_api.tcp.listener_shutdown' = brutal_kill });\n\t\t<<\"infinity\">> ->\n\t\t\tparse_options(Rest, Config#config{ 'http_api.tcp.listener_shutdown' = infinity });\n\t\t_ when is_integer(Shutdown), Shutdown >= 0 ->\n\t\t\tparse_options(Rest, Config#config{ 'http_api.tcp.listener_shutdown' = Shutdown });\n\t\t_ ->\n\t\t\t{error, {bad_value, 'http_api.tcp.listener_shutdown'}, Shutdown}\n\tend;\nparse_options([{<<\"http_api.tcp.nodelay\">>, Nodelay}|Rest], Config) ->\n\tcase Nodelay of\n\t\t_ when is_boolean(Nodelay) ->\n\t\t\tparse_options(Rest, Config#config{ 'http_api.tcp.nodelay' = Nodelay });\n\t\t_ ->\n\t\t\t{error, {bad_value, 'http_api.tcp.nodelay'}, Nodelay }\n\tend;\nparse_options([{<<\"http_api.tcp.num_acceptors\">>, Acceptors}|Rest], Config) ->\n\tcase Acceptors of\n\t\t_ when is_integer(Acceptors), Acceptors >= 1 ->\n\t\t\tparse_options(Rest, Config#config{ 'http_api.tcp.num_acceptors' = Acceptors });\n\t\t_ ->\n\t\t\t{error, {bad_value, 'http_api.tcp.num_acceptors'}, Acceptors}\n\tend;\nparse_options([{<<\"http_api.tcp.send_timeout_close\">>, Value}|Rest], Config) ->\n\tcase Value of\n\t\t_ when is_boolean(Value) ->\n\t\t\tparse_options(Rest, Config#config{ 'http_api.tcp.send_timeout_close' = Value });\n\t\t_ ->\n\t\t\t{error, {bad_value, 'http_api.tcp.send_timeout_close'}, Value}\n\tend;\nparse_options([{<<\"http_api.tcp.send_timeout\">>, Timeout}|Rest], Config) ->\n\tcase Timeout of\n\t\t_ when is_integer(Timeout), Timeout >= 0 ->\n\t\t\tparse_options(Rest, Config#config{ 'http_api.tcp.send_timeout' = Timeout });\n\t\t_ ->\n\t\t\t{error, {bad_value, 'http_api.tcp.send_timeout'}, Timeout}\n\tend;\n\n%% RATE LIMITER GENERAL\nparse_options([{<<\"http_api.limiter.general.sliding_window_limit\">>, Limit}|Rest], Config) ->\n    case Limit of\n        Limit when is_integer(Limit), Limit >= 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.general.sliding_window_limit' = Limit });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.general.sliding_window_limit'}, Limit}\n    end;\n\nparse_options([{<<\"http_api.limiter.general.sliding_window_duration\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.general.sliding_window_duration' = Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.general.sliding_window_duration'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.general.sliding_window_timestamp_cleanup_interval\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            
parse_options(\n              Rest, Config#config{'http_api.limiter.general.sliding_window_timestamp_cleanup_interval' =\n                                      Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.general.sliding_window_timestamp_cleanup_interval'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.general.sliding_window_timestamp_cleanup_expiry\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(\n              Rest, Config#config{'http_api.limiter.general.sliding_window_timestamp_cleanup_expiry' =\n                                      Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.general.sliding_window_timestamp_cleanup_expiry'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.general.leaky_limit\">>, Limit}|Rest], Config) ->\n    case Limit of\n        Limit when is_integer(Limit), Limit >= 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.general.leaky_limit' = Limit });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.general.leaky_limit'}, Limit}\n    end;\n\nparse_options([{<<\"http_api.limiter.general.leaky_tick_interval\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(\n              Rest, Config#config{'http_api.limiter.general.leaky_tick_interval' =\n                                      Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.general.leaky_tick_interval'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.general.leaky_tick_reduction\">>, Reduction}|Rest], Config) ->\n    case Reduction of\n        Reduction when is_integer(Reduction), Reduction > 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.general.leaky_tick_reduction' = Reduction });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.general.leaky_tick_reduction'}, Reduction}\n    end;\n\nparse_options([{<<\"http_api.limiter.general.concurrency_limit\">>, Limit}|Rest], Config) ->\n    case Limit of\n        Limit when is_integer(Limit), Limit > 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.general.concurrency_limit' = Limit });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.general.concurrency_limit'}, Limit}\n    end;\n\nparse_options([{<<\"http_api.limiter.general.is_manual_reduction_disabled\">>, IsDisabled}|Rest], Config) ->\n    case IsDisabled of\n        IsDisabled when is_boolean(IsDisabled) ->\n            parse_options(Rest, Config#config{'http_api.limiter.general.is_manual_reduction_disabled' = IsDisabled });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.general.is_manual_reduction_disabled'}, IsDisabled}\n    end;\n\n%% RATE LIMITER CHUNK\nparse_options([{<<\"http_api.limiter.chunk.sliding_window_limit\">>, Limit}|Rest], Config) ->\n    case Limit of\n        Limit when is_integer(Limit), Limit >= 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.chunk.sliding_window_limit' = Limit });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.chunk.sliding_window_limit'}, Limit}\n    end;\n\nparse_options([{<<\"http_api.limiter.chunk.sliding_window_duration\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(Rest, 
Config#config{'http_api.limiter.chunk.sliding_window_duration' = Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.chunk.sliding_window_duration'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.chunk.sliding_window_timestamp_cleanup_interval\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(\n              Rest, Config#config{'http_api.limiter.chunk.sliding_window_timestamp_cleanup_interval' =\n                                      Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.chunk.sliding_window_timestamp_cleanup_interval'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.chunk.sliding_window_timestamp_cleanup_expiry\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(\n              Rest, Config#config{'http_api.limiter.chunk.sliding_window_timestamp_cleanup_expiry' =\n                                      Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.chunk.sliding_window_timestamp_cleanup_expiry'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.chunk.leaky_limit\">>, Limit}|Rest], Config) ->\n    case Limit of\n        Limit when is_integer(Limit), Limit >= 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.chunk.leaky_limit' = Limit });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.chunk.leaky_limit'}, Limit}\n    end;\n\nparse_options([{<<\"http_api.limiter.chunk.leaky_tick_interval\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(\n              Rest, Config#config{'http_api.limiter.chunk.leaky_tick_interval' =\n                                      Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.chunk.leaky_tick_interval'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.chunk.leaky_tick_reduction\">>, Reduction}|Rest], Config) ->\n    case Reduction of\n        Reduction when is_integer(Reduction), Reduction > 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.chunk.leaky_tick_reduction' = Reduction });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.chunk.leaky_tick_reduction'}, Reduction}\n    end;\n\nparse_options([{<<\"http_api.limiter.chunk.concurrency_limit\">>, Limit}|Rest], Config) ->\n    case Limit of\n        Limit when is_integer(Limit), Limit > 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.chunk.concurrency_limit' = Limit });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.chunk.concurrency_limit'}, Limit}\n    end;\n\nparse_options([{<<\"http_api.limiter.chunk.is_manual_reduction_disabled\">>, IsDisabled}|Rest], Config) ->\n    case IsDisabled of\n        IsDisabled when is_boolean(IsDisabled) ->\n            parse_options(Rest, Config#config{'http_api.limiter.chunk.is_manual_reduction_disabled' = IsDisabled });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.chunk.is_manual_reduction_disabled'}, IsDisabled}\n    end;\n\n%% RATE LIMITER DATA_SYNC_RECORD\nparse_options([{<<\"http_api.limiter.data_sync_record.sliding_window_limit\">>, Limit}|Rest], Config) ->\n    case Limit of\n        Limit when is_integer(Limit), Limit >= 0 ->\n            parse_options(Rest, 
Config#config{'http_api.limiter.data_sync_record.sliding_window_limit' = Limit });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.data_sync_record.sliding_window_limit'}, Limit}\n    end;\n\nparse_options([{<<\"http_api.limiter.data_sync_record.sliding_window_duration\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.data_sync_record.sliding_window_duration' = Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.data_sync_record.sliding_window_duration'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.data_sync_record.sliding_window_timestamp_cleanup_interval\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(\n              Rest, Config#config{'http_api.limiter.data_sync_record.sliding_window_timestamp_cleanup_interval' =\n                                      Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.data_sync_record.sliding_window_timestamp_cleanup_interval'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.data_sync_record.sliding_window_timestamp_cleanup_expiry\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(\n              Rest, Config#config{'http_api.limiter.data_sync_record.sliding_window_timestamp_cleanup_expiry' =\n                                      Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.data_sync_record.sliding_window_timestamp_cleanup_expiry'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.data_sync_record.leaky_limit\">>, Limit}|Rest], Config) ->\n    case Limit of\n        Limit when is_integer(Limit), Limit >= 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.data_sync_record.leaky_limit' = Limit });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.data_sync_record.leaky_limit'}, Limit}\n    end;\n\nparse_options([{<<\"http_api.limiter.data_sync_record.leaky_tick_interval\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(\n              Rest, Config#config{'http_api.limiter.data_sync_record.leaky_tick_interval' =\n                                      Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.data_sync_record.leaky_tick_interval'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.data_sync_record.leaky_tick_reduction\">>, Reduction}|Rest], Config) ->\n    case Reduction of\n        Reduction when is_integer(Reduction), Reduction > 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.data_sync_record.leaky_tick_reduction' = Reduction });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.data_sync_record.leaky_tick_reduction'}, Reduction}\n    end;\n\nparse_options([{<<\"http_api.limiter.data_sync_record.concurrency_limit\">>, Limit}|Rest], Config) ->\n    case Limit of\n        Limit when is_integer(Limit), Limit > 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.data_sync_record.concurrency_limit' = Limit });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.data_sync_record.concurrency_limit'}, Limit}\n    
end;\n\nparse_options([{<<\"http_api.limiter.data_sync_record.is_manual_reduction_disabled\">>, IsDisabled}|Rest], Config) ->\n    case IsDisabled of\n        IsDisabled when is_boolean(IsDisabled) ->\n            parse_options(Rest, Config#config{'http_api.limiter.data_sync_record.is_manual_reduction_disabled' = IsDisabled });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.data_sync_record.is_manual_reduction_disabled'}, IsDisabled}\n    end;\n\n%% RATE LIMITER RECENT_HASH_LIST_DIFF\nparse_options([{<<\"http_api.limiter.recent_hash_list_diff.sliding_window_limit\">>, Limit}|Rest], Config) ->\n    case Limit of\n        Limit when is_integer(Limit), Limit >= 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.recent_hash_list_diff.sliding_window_limit' = Limit });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.recent_hash_list_diff.sliding_window_limit'}, Limit}\n    end;\n\nparse_options([{<<\"http_api.limiter.recent_hash_list_diff.sliding_window_duration\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.recent_hash_list_diff.sliding_window_duration' = Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.recent_hash_list_diff.sliding_window_duration'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.recent_hash_list_diff.sliding_window_timestamp_cleanup_interval\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(\n              Rest, Config#config{'http_api.limiter.recent_hash_list_diff.sliding_window_timestamp_cleanup_interval' =\n                                      Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.recent_hash_list_diff.sliding_window_timestamp_cleanup_interval'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.recent_hash_list_diff.sliding_window_timestamp_cleanup_expiry\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(\n              Rest, Config#config{'http_api.limiter.recent_hash_list_diff.sliding_window_timestamp_cleanup_expiry' =\n                                      Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.recent_hash_list_diff.sliding_window_timestamp_cleanup_expiry'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.recent_hash_list_diff.leaky_limit\">>, Limit}|Rest], Config) ->\n    case Limit of\n        Limit when is_integer(Limit), Limit >= 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.recent_hash_list_diff.leaky_limit' = Limit });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.recent_hash_list_diff.leaky_limit'}, Limit}\n    end;\n\nparse_options([{<<\"http_api.limiter.recent_hash_list_diff.leaky_tick_interval\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(\n              Rest, Config#config{'http_api.limiter.recent_hash_list_diff.leaky_tick_interval' =\n                                      Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.recent_hash_list_diff.leaky_tick_interval'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.recent_hash_list_diff.leaky_tick_reduction\">>, 
Reduction}|Rest], Config) ->\n    case Reduction of\n        Reduction when is_integer(Reduction), Reduction > 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.recent_hash_list_diff.leaky_tick_reduction' = Reduction });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.recent_hash_list_diff.leaky_tick_reduction'}, Reduction}\n    end;\n\nparse_options([{<<\"http_api.limiter.recent_hash_list_diff.concurrency_limit\">>, Limit}|Rest], Config) ->\n    case Limit of\n        Limit when is_integer(Limit), Limit > 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.recent_hash_list_diff.concurrency_limit' = Limit });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.recent_hash_list_diff.concurrency_limit'}, Limit}\n    end;\n\nparse_options([{<<\"http_api.limiter.recent_hash_list_diff.is_manual_reduction_disabled\">>, IsDisabled}|Rest], Config) ->\n    case IsDisabled of\n        IsDisabled when is_boolean(IsDisabled) ->\n            parse_options(Rest, Config#config{'http_api.limiter.recent_hash_list_diff.is_manual_reduction_disabled' = IsDisabled });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.recent_hash_list_diff.is_manual_reduction_disabled'}, IsDisabled}\n    end;\n\n%% RATE LIMITER BLOCK_INDEX\nparse_options([{<<\"http_api.limiter.block_index.sliding_window_limit\">>, Limit}|Rest], Config) ->\n    case Limit of\n        Limit when is_integer(Limit), Limit >= 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.block_index.sliding_window_limit' = Limit });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.block_index.sliding_window_limit'}, Limit}\n    end;\n\nparse_options([{<<\"http_api.limiter.block_index.sliding_window_duration\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.block_index.sliding_window_duration' = Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.block_index.sliding_window_duration'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.block_index.sliding_window_timestamp_cleanup_interval\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(\n              Rest, Config#config{'http_api.limiter.block_index.sliding_window_timestamp_cleanup_interval' =\n                                      Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.block_index.sliding_window_timestamp_cleanup_interval'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.block_index.sliding_window_timestamp_cleanup_expiry\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(\n              Rest, Config#config{'http_api.limiter.block_index.sliding_window_timestamp_cleanup_expiry' =\n                                      Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.block_index.sliding_window_timestamp_cleanup_expiry'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.block_index.leaky_limit\">>, Limit}|Rest], Config) ->\n    case Limit of\n        Limit when is_integer(Limit), Limit >= 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.block_index.leaky_limit' = Limit });\n        _ ->\n            {error, {bad_value, 
'http_api.limiter.block_index.leaky_limit'}, Limit}\n    end;\n\nparse_options([{<<\"http_api.limiter.block_index.leaky_tick_interval\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(\n              Rest, Config#config{'http_api.limiter.block_index.leaky_tick_interval' =\n                                      Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.block_index.leaky_tick_interval'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.block_index.leaky_tick_reduction\">>, Reduction}|Rest], Config) ->\n    case Reduction of\n        Reduction when is_integer(Reduction), Reduction > 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.block_index.leaky_tick_reduction' = Reduction });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.block_index.leaky_tick_reduction'}, Reduction}\n    end;\n\nparse_options([{<<\"http_api.limiter.block_index.concurrency_limit\">>, Limit}|Rest], Config) ->\n    case Limit of\n        Limit when is_integer(Limit), Limit > 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.block_index.concurrency_limit' = Limit });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.block_index.concurrency_limit'}, Limit}\n    end;\n\nparse_options([{<<\"http_api.limiter.block_index.is_manual_reduction_disabled\">>, IsDisabled}|Rest], Config) ->\n    case IsDisabled of\n        IsDisabled when is_boolean(IsDisabled) ->\n            parse_options(Rest, Config#config{'http_api.limiter.block_index.is_manual_reduction_disabled' = IsDisabled });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.block_index.is_manual_reduction_disabled'}, IsDisabled}\n    end;\n\n%% RATE LIMITER WALLET_LIST\nparse_options([{<<\"http_api.limiter.wallet_list.sliding_window_limit\">>, Limit}|Rest], Config) ->\n    case Limit of\n        Limit when is_integer(Limit), Limit >= 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.wallet_list.sliding_window_limit' = Limit });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.wallet_list.sliding_window_limit'}, Limit}\n    end;\n\nparse_options([{<<\"http_api.limiter.wallet_list.sliding_window_duration\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.wallet_list.sliding_window_duration' = Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.wallet_list.sliding_window_duration'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.wallet_list.sliding_window_timestamp_cleanup_interval\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(\n              Rest, Config#config{'http_api.limiter.wallet_list.sliding_window_timestamp_cleanup_interval' =\n                                      Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.wallet_list.sliding_window_timestamp_cleanup_interval'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.wallet_list.sliding_window_timestamp_cleanup_expiry\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(\n              Rest, 
Config#config{'http_api.limiter.wallet_list.sliding_window_timestamp_cleanup_expiry' =\n                                      Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.wallet_list.sliding_window_timestamp_cleanup_expiry'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.wallet_list.leaky_limit\">>, Limit}|Rest], Config) ->\n    case Limit of\n        Limit when is_integer(Limit), Limit >= 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.wallet_list.leaky_limit' = Limit });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.wallet_list.leaky_limit'}, Limit}\n    end;\n\nparse_options([{<<\"http_api.limiter.wallet_list.leaky_tick_interval\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(\n              Rest, Config#config{'http_api.limiter.wallet_list.leaky_tick_interval' =\n                                      Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.wallet_list.leaky_tick_interval'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.wallet_list.leaky_tick_reduction\">>, Reduction}|Rest], Config) ->\n    case Reduction of\n        Reduction when is_integer(Reduction), Reduction > 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.wallet_list.leaky_tick_reduction' = Reduction });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.wallet_list.leaky_tick_reduction'}, Reduction}\n    end;\n\nparse_options([{<<\"http_api.limiter.wallet_list.concurrency_limit\">>, Limit}|Rest], Config) ->\n    case Limit of\n        Limit when is_integer(Limit), Limit > 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.wallet_list.concurrency_limit' = Limit });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.wallet_list.concurrency_limit'}, Limit}\n    end;\n\nparse_options([{<<\"http_api.limiter.wallet_list.is_manual_reduction_disabled\">>, IsDisabled}|Rest], Config) ->\n    case IsDisabled of\n        IsDisabled when is_boolean(IsDisabled) ->\n            parse_options(Rest, Config#config{'http_api.limiter.wallet_list.is_manual_reduction_disabled' = IsDisabled });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.wallet_list.is_manual_reduction_disabled'}, IsDisabled}\n    end;\n\n%% RATE LIMITER GET_VDF\nparse_options([{<<\"http_api.limiter.get_vdf.sliding_window_limit\">>, Limit}|Rest], Config) ->\n    case Limit of\n        Limit when is_integer(Limit), Limit >= 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.get_vdf.sliding_window_limit' = Limit });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.get_vdf.sliding_window_limit'}, Limit}\n    end;\n\nparse_options([{<<\"http_api.limiter.get_vdf.sliding_window_duration\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.get_vdf.sliding_window_duration' = Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.get_vdf.sliding_window_duration'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.get_vdf.sliding_window_timestamp_cleanup_interval\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(\n              Rest, 
Config#config{'http_api.limiter.get_vdf.sliding_window_timestamp_cleanup_interval' =\n                                      Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.get_vdf.sliding_window_timestamp_cleanup_interval'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.get_vdf.sliding_window_timestamp_cleanup_expiry\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(\n              Rest, Config#config{'http_api.limiter.get_vdf.sliding_window_timestamp_cleanup_expiry' =\n                                      Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.get_vdf.sliding_window_timestamp_cleanup_expiry'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.get_vdf.leaky_limit\">>, Limit}|Rest], Config) ->\n    case Limit of\n        Limit when is_integer(Limit), Limit >= 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.get_vdf.leaky_limit' = Limit });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.get_vdf.leaky_limit'}, Limit}\n    end;\n\nparse_options([{<<\"http_api.limiter.get_vdf.leaky_tick_interval\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(\n              Rest, Config#config{'http_api.limiter.get_vdf.leaky_tick_interval' =\n                                      Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.get_vdf.leaky_tick_interval'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.get_vdf.leaky_tick_reduction\">>, Reduction}|Rest], Config) ->\n    case Reduction of\n        Reduction when is_integer(Reduction), Reduction > 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.get_vdf.leaky_tick_reduction' = Reduction });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.get_vdf.leaky_tick_reduction'}, Reduction}\n    end;\n\nparse_options([{<<\"http_api.limiter.get_vdf.concurrency_limit\">>, Limit}|Rest], Config) ->\n    case Limit of\n        Limit when is_integer(Limit), Limit > 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.get_vdf.concurrency_limit' = Limit });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.get_vdf.concurrency_limit'}, Limit}\n    end;\n\nparse_options([{<<\"http_api.limiter.get_vdf.is_manual_reduction_disabled\">>, IsDisabled}|Rest], Config) ->\n    case IsDisabled of\n        IsDisabled when is_boolean(IsDisabled) ->\n            parse_options(Rest, Config#config{'http_api.limiter.get_vdf.is_manual_reduction_disabled' = IsDisabled });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.get_vdf.is_manual_reduction_disabled'}, IsDisabled}\n    end;\n\n%% RATE LIMITER GET_VDF_SESSION\nparse_options([{<<\"http_api.limiter.get_vdf_session.sliding_window_limit\">>, Limit}|Rest], Config) ->\n    case Limit of\n        Limit when is_integer(Limit), Limit >= 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.get_vdf_session.sliding_window_limit' = Limit });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.get_vdf_session.sliding_window_limit'}, Limit}\n    end;\n\nparse_options([{<<\"http_api.limiter.get_vdf_session.sliding_window_duration\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(Rest, 
Config#config{'http_api.limiter.get_vdf_session.sliding_window_duration' = Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.get_vdf_session.sliding_window_duration'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.get_vdf_session.sliding_window_timestamp_cleanup_interval\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(\n              Rest, Config#config{'http_api.limiter.get_vdf_session.sliding_window_timestamp_cleanup_interval' =\n                                      Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.get_vdf_session.sliding_window_timestamp_cleanup_interval'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.get_vdf_session.sliding_window_timestamp_cleanup_expiry\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(\n              Rest, Config#config{'http_api.limiter.get_vdf_session.sliding_window_timestamp_cleanup_expiry' =\n                                      Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.get_vdf_session.sliding_window_timestamp_cleanup_expiry'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.get_vdf_session.leaky_limit\">>, Limit}|Rest], Config) ->\n    case Limit of\n        Limit when is_integer(Limit), Limit >= 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.get_vdf_session.leaky_limit' = Limit });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.get_vdf_session.leaky_limit'}, Limit}\n    end;\n\nparse_options([{<<\"http_api.limiter.get_vdf_session.leaky_tick_interval\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(\n              Rest, Config#config{'http_api.limiter.get_vdf_session.leaky_tick_interval' =\n                                      Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.get_vdf_session.leaky_tick_interval'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.get_vdf_session.leaky_tick_reduction\">>, Reduction}|Rest], Config) ->\n    case Reduction of\n        Reduction when is_integer(Reduction), Reduction > 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.get_vdf_session.leaky_tick_reduction' = Reduction });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.get_vdf_session.leaky_tick_reduction'}, Reduction}\n    end;\n\nparse_options([{<<\"http_api.limiter.get_vdf_session.concurrency_limit\">>, Limit}|Rest], Config) ->\n    case Limit of\n        Limit when is_integer(Limit), Limit > 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.get_vdf_session.concurrency_limit' = Limit });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.get_vdf_session.concurrency_limit'}, Limit}\n    end;\n\nparse_options([{<<\"http_api.limiter.get_vdf_session.is_manual_reduction_disabled\">>, IsDisabled}|Rest], Config) ->\n    case IsDisabled of\n        IsDisabled when is_boolean(IsDisabled) ->\n            parse_options(Rest, Config#config{'http_api.limiter.get_vdf_session.is_manual_reduction_disabled' = IsDisabled });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.get_vdf_session.is_manual_reduction_disabled'}, IsDisabled}\n    end;\n\n%% RATE LIMITER 
GET_PREVIOUS_VDF_SESSION\nparse_options([{<<\"http_api.limiter.get_previous_vdf_session.sliding_window_limit\">>, Limit}|Rest], Config) ->\n    case Limit of\n        Limit when is_integer(Limit), Limit >= 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.get_previous_vdf_session.sliding_window_limit' = Limit });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.get_previous_vdf_session.sliding_window_limit'}, Limit}\n    end;\n\nparse_options([{<<\"http_api.limiter.get_previous_vdf_session.sliding_window_duration\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.get_previous_vdf_session.sliding_window_duration' = Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.get_previous_vdf_session.sliding_window_duration'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.get_previous_vdf_session.sliding_window_timestamp_cleanup_interval\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(\n              Rest, Config#config{'http_api.limiter.get_previous_vdf_session.sliding_window_timestamp_cleanup_interval' =\n                                      Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.get_previous_vdf_session.sliding_window_timestamp_cleanup_interval'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.get_previous_vdf_session.sliding_window_timestamp_cleanup_expiry\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(\n              Rest, Config#config{'http_api.limiter.get_previous_vdf_session.sliding_window_timestamp_cleanup_expiry' =\n                                      Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.get_previous_vdf_session.sliding_window_timestamp_cleanup_expiry'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.get_previous_vdf_session.leaky_limit\">>, Limit}|Rest], Config) ->\n    case Limit of\n        Limit when is_integer(Limit), Limit >= 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.get_previous_vdf_session.leaky_limit' = Limit });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.get_previous_vdf_session.leaky_limit'}, Limit}\n    end;\n\nparse_options([{<<\"http_api.limiter.get_previous_vdf_session.leaky_tick_interval\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(\n              Rest, Config#config{'http_api.limiter.get_previous_vdf_session.leaky_tick_interval' =\n                                      Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.get_previous_vdf_session.leaky_tick_interval'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.get_previous_vdf_session.leaky_tick_reduction\">>, Reduction}|Rest], Config) ->\n    case Reduction of\n        Reduction when is_integer(Reduction), Reduction > 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.get_previous_vdf_session.leaky_tick_reduction' = Reduction });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.get_previous_vdf_session.leaky_tick_reduction'}, Reduction}\n    
end;\n\nparse_options([{<<\"http_api.limiter.get_previous_vdf_session.concurrency_limit\">>, Limit}|Rest], Config) ->\n    case Limit of\n        Limit when is_integer(Limit), Limit > 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.get_previous_vdf_session.concurrency_limit' = Limit });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.get_previous_vdf_session.concurrency_limit'}, Limit}\n    end;\n\nparse_options([{<<\"http_api.limiter.get_previous_vdf_session.is_manual_reduction_disabled\">>, IsDisabled}|Rest], Config) ->\n    case IsDisabled of\n        IsDisabled when is_boolean(IsDisabled) ->\n            parse_options(Rest, Config#config{'http_api.limiter.get_previous_vdf_session.is_manual_reduction_disabled' = IsDisabled });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.get_previous_vdf_session.is_manual_reduction_disabled'}, IsDisabled}\n    end;\n\n%% RATE LIMITER METRICS\nparse_options([{<<\"http_api.limiter.metrics.sliding_window_limit\">>, Limit}|Rest], Config) ->\n    case Limit of\n        Limit when is_integer(Limit), Limit >= 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.metrics.sliding_window_limit' = Limit });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.metrics.sliding_window_limit'}, Limit}\n    end;\n\nparse_options([{<<\"http_api.limiter.metrics.sliding_window_duration\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.metrics.sliding_window_duration' = Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.metrics.sliding_window_duration'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.metrics.sliding_window_timestamp_cleanup_interval\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(\n              Rest, Config#config{'http_api.limiter.metrics.sliding_window_timestamp_cleanup_interval' =\n                                      Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.metrics.sliding_window_timestamp_cleanup_interval'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.metrics.sliding_window_timestamp_cleanup_expiry\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(\n              Rest, Config#config{'http_api.limiter.metrics.sliding_window_timestamp_cleanup_expiry' =\n                                      Duration });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.metrics.sliding_window_timestamp_cleanup_expiry'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.metrics.leaky_limit\">>, Limit}|Rest], Config) ->\n    case Limit of\n        Limit when is_integer(Limit), Limit >= 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.metrics.leaky_limit' = Limit });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.metrics.leaky_limit'}, Limit}\n    end;\n\nparse_options([{<<\"http_api.limiter.metrics.leaky_tick_interval\">>, Duration}|Rest], Config) ->\n    case Duration of\n        Duration when is_integer(Duration), Duration > 0 ->\n            parse_options(\n              Rest, Config#config{'http_api.limiter.metrics.leaky_tick_interval' =\n                                      Duration });\n        _ ->\n            
{error, {bad_value, 'http_api.limiter.metrics.leaky_tick_interval'}, Duration}\n    end;\n\nparse_options([{<<\"http_api.limiter.metrics.leaky_tick_reduction\">>, Reduction}|Rest], Config) ->\n    case Reduction of\n        Reduction when is_integer(Reduction), Reduction > 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.metrics.leaky_tick_reduction' = Reduction });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.metrics.leaky_tick_reduction'}, Reduction}\n    end;\n\nparse_options([{<<\"http_api.limiter.metrics.concurrency_limit\">>, Limit}|Rest], Config) ->\n    case Limit of\n        Limit when is_integer(Limit), Limit > 0 ->\n            parse_options(Rest, Config#config{'http_api.limiter.metrics.concurrency_limit' = Limit });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.metrics.concurrency_limit'}, Limit}\n    end;\n\nparse_options([{<<\"http_api.limiter.metrics.is_manual_reduction_disabled\">>, IsDisabled}|Rest], Config) ->\n    case IsDisabled of\n        IsDisabled when is_boolean(IsDisabled) ->\n            parse_options(Rest, Config#config{'http_api.limiter.metrics.is_manual_reduction_disabled' = IsDisabled });\n        _ ->\n            {error, {bad_value, 'http_api.limiter.metrics.is_manual_reduction_disabled'}, IsDisabled}\n    end;\n\nparse_options([Opt | _], _) ->\n\t{error, unknown, Opt};\nparse_options([], Config) ->\n\t{ok, Config}.\n\nparse_storage_module(RangeNumber, RangeSize, PackingBin) ->\n\tPacking =\n\t\tcase PackingBin of\n\t\t\t<<\"unpacked\">> ->\n\t\t\t\tunpacked;\n\t\t\t<< MiningAddr:43/binary, \".replica.2.9\" >> ->\n\t\t\t\t{replica_2_9, ar_util:decode(MiningAddr)};\n\t\t\t<< MiningAddr:43/binary, \".\", PackingDifficultyBin/binary >> ->\n\t\t\t\tPackingDifficulty = binary_to_integer(PackingDifficultyBin),\n\t\t\t\ttrue = PackingDifficulty >= 1\n\t\t\t\t\t\tandalso PackingDifficulty =< ?MAX_PACKING_DIFFICULTY\n\t\t\t\t\t\tandalso PackingDifficulty /= ?REPLICA_2_9_PACKING_DIFFICULTY,\n\t\t\t\t{composite, ar_util:decode(MiningAddr), PackingDifficulty};\n\t\t\tMiningAddr when byte_size(MiningAddr) == 43 ->\n\t\t\t\t{spora_2_6, ar_util:decode(MiningAddr)}\n\t\tend,\n\t{ok, {RangeSize, RangeNumber, Packing}}.\n\nparse_storage_module(RangeNumber, RangeSize, PackingBin, ToPackingBin) ->\n\tPacking =\n\t\tcase PackingBin of\n\t\t\t<<\"unpacked\">> ->\n\t\t\t\tunpacked;\n\t\t\t<< MiningAddr:43/binary, \".replica.2.9\" >> ->\n\t\t\t\t{replica_2_9, ar_util:decode(MiningAddr)};\n\t\t\tMiningAddr when byte_size(MiningAddr) == 43 ->\n\t\t\t\t{spora_2_6, ar_util:decode(MiningAddr)}\n\t\tend,\n\tToPacking =\n\t\tcase ToPackingBin of\n\t\t\t<<\"unpacked\">> ->\n\t\t\t\tunpacked;\n\t\t\t<< ToMiningAddr:43/binary, \".replica.2.9\" >> ->\n\t\t\t\t{replica_2_9, ar_util:decode(ToMiningAddr)};\n\t\t\tToMiningAddr when byte_size(ToMiningAddr) == 43 ->\n\t\t\t\t{spora_2_6, ar_util:decode(ToMiningAddr)}\n\t\tend,\n\t{repack_in_place, {{RangeSize, RangeNumber, Packing}, ToPacking}}.\n\nsafe_map(Fun, List) ->\n\ttry\n\t\t{ok, lists:map(Fun, List)}\n\tcatch\n\t\t_:_ -> error\n\tend.\n\nparse_peers([Peer | Rest], ParsedPeers) ->\n\tcase ar_util:safe_parse_peer(Peer) of\n\t\t{ok, ParsedPeer} -> parse_peers(Rest, ParsedPeer ++ ParsedPeers);\n\t\t{error, _} -> error\n\tend;\nparse_peers([], ParsedPeers) ->\n\tFlatten = lists:flatten(ParsedPeers),\n\tReverse = lists:reverse(Flatten),\n\t{ok, Reverse}.\n\nparse_webhooks([{WebhookConfig} | Rest], ParsedWebhookConfigs) when is_list(WebhookConfig) ->\n\tcase parse_webhook(WebhookConfig, 
#config_webhook{}) of\n\t\t{ok, ParsedWebhook} -> parse_webhooks(Rest, [ParsedWebhook | ParsedWebhookConfigs]);\n\t\terror -> error\n\tend;\nparse_webhooks([_ | _], _) ->\n\terror;\nparse_webhooks([], ParsedWebhookConfigs) ->\n\t{ok, lists:reverse(ParsedWebhookConfigs)}.\n\nparse_webhook([{<<\"events\">>, Events} | Rest], Webhook) when is_list(Events) ->\n\tcase parse_webhook_events(Events, []) of\n\t\t{ok, ParsedEvents} ->\n\t\t\tparse_webhook(Rest, Webhook#config_webhook{ events = ParsedEvents });\n\t\terror ->\n\t\t\terror\n\tend;\nparse_webhook([{<<\"events\">>, _} | _], _) ->\n\terror;\nparse_webhook([{<<\"url\">>, Url} | Rest], Webhook) when is_binary(Url) ->\n\tparse_webhook(Rest, Webhook#config_webhook{ url = Url });\nparse_webhook([{<<\"url\">>, _} | _], _) ->\n\terror;\nparse_webhook([{<<\"headers\">>, {Headers}} | Rest], Webhook) when is_list(Headers) ->\n\tparse_webhook(Rest, Webhook#config_webhook{ headers = Headers });\nparse_webhook([{<<\"headers\">>, _} | _], _) ->\n\terror;\nparse_webhook([], Webhook) ->\n\t{ok, Webhook}.\n\nparse_webhook_events([Event | Rest], Events) ->\n\tcase Event of\n\t\t<<\"transaction\">> -> parse_webhook_events(Rest, [transaction | Events]);\n\t\t<<\"transaction_data\">> -> parse_webhook_events(Rest, [transaction_data | Events]);\n\t\t<<\"block\">> -> parse_webhook_events(Rest, [block | Events]);\n\t\t<<\"solution\">> -> parse_webhook_events(Rest, [solution | Events]);\n\t\t_ -> error\n\tend;\nparse_webhook_events([], Events) ->\n\t{ok, lists:reverse(Events)}.\n\nparse_atom_number_map({[Pair | Pairs]}, Parsed) when is_tuple(Pair) ->\n\tparse_atom_number_map({Pairs}, parse_atom_number(Pair, Parsed));\nparse_atom_number_map({[]}, Parsed) ->\n\t{ok, Parsed};\nparse_atom_number_map(_, _) ->\n\terror.\n\nparse_atom_number({Name, Number}, Parsed) when is_binary(Name), is_number(Number) ->\n\tmaps:put(binary_to_atom(Name), Number, Parsed);\nparse_atom_number({Key, Value}, Parsed) ->\n\t?LOG_WARNING([{event, parse_config_bad_type},\n\t\t{key, io_lib:format(\"~p\", [Key])}, {value, io_lib:format(\"~p\", [Value])}]),\n\tParsed.\n\nparse_requests_per_minute_limit_by_ip(Input) ->\n\tparse_requests_per_minute_limit_by_ip(Input, #{}).\n\nparse_requests_per_minute_limit_by_ip({[{IP, Object} | Pairs]}, Parsed) ->\n\tcase ar_util:safe_parse_peer(IP) of\n\t\t{error, invalid} ->\n\t\t\terror;\n\t\t{ok, [{A, B, C, D, _Port}]} ->\n\t\t\tcase parse_atom_number_map(Object, #{}) of\n\t\t\t\terror ->\n\t\t\t\t\terror;\n\t\t\t\t{ok, ParsedMap} ->\n\t\t\t\t\tparse_requests_per_minute_limit_by_ip({Pairs},\n\t\t\t\t\t\t\tmaps:put({A, B, C, D}, ParsedMap, Parsed))\n\t\t\tend\n\tend;\nparse_requests_per_minute_limit_by_ip({[]}, Parsed) ->\n\t{ok, Parsed};\nparse_requests_per_minute_limit_by_ip(_, _) ->\n\terror.\n\nparse_vdf_server_trusted_peers([Peer | Rest], Config) ->\n\tConfig2 = parse_vdf_server_trusted_peer(Peer, Config),\n\tparse_vdf_server_trusted_peers(Rest, Config2);\nparse_vdf_server_trusted_peers([], Config) ->\n\tConfig.\n\nparse_vdf_server_trusted_peer(Peer, Config) when is_binary(Peer) ->\n\tparse_vdf_server_trusted_peer(binary_to_list(Peer), Config);\nparse_vdf_server_trusted_peer(Peer, Config) ->\n\t#config{ nonce_limiter_server_trusted_peers = Peers } = Config,\n\tConfig#config{ nonce_limiter_server_trusted_peers = Peers ++ [Peer] }.\n\nlog_config(Config) ->\n\tFields = record_info(fields, config),\n\t?LOG_INFO(\"=============== Start Config ===============\"),\n\tlog_config(Config, Fields, 2, []),\n\t?LOG_INFO(\"=============== End Config   
===============\").\n\nlog_config(_Config, [], _Index, _Acc) ->\n\tok;\nlog_config(Config, [Field | Rest], Index, Acc) ->\n\tFieldValue = erlang:element(Index, Config),\n\t%% Wrap formatting in a try/catch just in case - we don't want any issues in formatting\n\t%% to cause a crash.\n\tFormattedValue = try\n\t\tlog_config_value(Field, FieldValue)\n\tcatch _:_ ->\n\t\tFieldValue\n\tend,\n\tLine = ?LOG_INFO(\"~s: ~tp\", [atom_to_list(Field), FormattedValue]),\n\tlog_config(Config, Rest, Index+1, [Line | Acc]).\n\nlog_config_value(peers, FieldValue) ->\n\tformat_peers(FieldValue);\nlog_config_value(block_gossip_peers, FieldValue) ->\n\tformat_peers(FieldValue);\nlog_config_value(local_peers, FieldValue) ->\n\tformat_peers(FieldValue);\nlog_config_value(mining_addr, FieldValue) ->\n\tformat_binary(FieldValue);\nlog_config_value(start_from_state, FieldValue) ->\n\tFieldValue;\nlog_config_value(start_from_block, FieldValue) ->\n\tformat_binary(FieldValue);\nlog_config_value(storage_modules, FieldValue) ->\n\t[format_storage_module(StorageModule) || StorageModule <- FieldValue];\nlog_config_value(repack_in_place_storage_modules, FieldValue) ->\n\t[{format_storage_module(StorageModule), ar_serialize:encode_packing(ToPacking, false)}\n\t\t\t|| {StorageModule, ToPacking} <- FieldValue];\nlog_config_value(_, FieldValue) ->\n\tFieldValue.\n\nformat_peers(Peers) ->\n\t[ar_util:format_peer(Peer) || Peer <- Peers].\nformat_binary(Address) ->\n\tar_util:encode(Address).\nformat_storage_module({RangeSize, RangeNumber, {spora_2_6, MiningAddress}}) ->\n\t{RangeSize, RangeNumber, {spora_2_6, format_binary(MiningAddress)}};\nformat_storage_module({RangeSize, RangeNumber, {composite, MiningAddress, PackingDiff}}) ->\n\t{RangeSize, RangeNumber, {composite, format_binary(MiningAddress), PackingDiff}};\nformat_storage_module({RangeSize, RangeNumber, {replica_2_9, MiningAddress}}) ->\n\t{RangeSize, RangeNumber, {replica_2_9, format_binary(MiningAddress)}};\nformat_storage_module(StorageModule) ->\n\tStorageModule.\n\n%% -------------------------------------------------------------------\n%% @doc Validate the configuration options.\n%% -------------------------------------------------------------------\nvalidate_init(Config) ->\n\tcase Config#config.init of\n\t\ttrue ->\n\t\t\tcase ?NETWORK_NAME of\n\t\t\t\t\"arweave.N.1\" ->\n\t\t\t\t\tio:format(\"~nCannot start a new network with the mainnet name! \"\n\t\t\t\t\t\t\t\"Use ./bin/start-localnet ... when running from sources \"\n\t\t\t\t\t\t\t\"or compile via ./rebar3 as localnet tar and use \"\n\t\t\t\t\t\t\t\"./bin/start ... 
as usual.~n~n\"),\n\t\t\t\t\tfalse;\n\t\t\t\t_ ->\n\t\t\t\t\ttrue\n\t\t\tend;\n\t\tfalse ->\n\t\t\ttrue\n\tend.\n\nvalidate_storage_modules(#config{ storage_modules = StorageModules }) ->\n\tcase length(StorageModules) =:= length(lists:usort(StorageModules)) of\n\t\ttrue ->\n\t\t\ttrue;\n\t\tfalse ->\n\t\t\tio:format(\"~nDuplicate value detected in the storage_modules option.~n~n\"),\n\t\t\tfalse\n\tend.\nvalidate_repack_in_place(Config) ->\n\tModules = [ar_storage_module:id(M) || M <- Config#config.storage_modules],\n\tvalidate_repack_in_place(Config#config.repack_in_place_storage_modules, Modules).\n\nvalidate_repack_in_place([], _Modules) ->\n\ttrue;\nvalidate_repack_in_place([{Module, _ToPacking} | L], Modules) ->\n\tID = ar_storage_module:id(Module),\n\tModuleInUse = lists:member(ID, Modules),\n\tcase ModuleInUse of\n\t\ttrue ->\n\t\t\tio:format(\"~nCannot use the storage module ~s \"\n\t\t\t\t\t\"while it is being repacked in place.~n~n\", [ID]),\n\t\t\tfalse;\n\t\tfalse ->\n\t\t\tvalidate_repack_in_place(L, Modules)\n\tend.\n\nvalidate_cm_pool(Config) ->\n\tA = case {Config#config.coordinated_mining, Config#config.is_pool_server} of\n\t\t{true, true} ->\n\t\t\tio:format(\"~nThe pool server node cannot participate \"\n\t\t\t\t\t\"in the coordinated mining.~n~n\"),\n\t\t\tfalse;\n\t\t_ ->\n\t\t\ttrue\n\tend,\n\tB = case {Config#config.is_pool_server, Config#config.is_pool_client} of\n\t\t{true, true} ->\n\t\t\tio:format(\"~nThe node cannot be a pool server and a pool client \"\n\t\t\t\t\t\"at the same time.~n~n\"),\n\t\t\tfalse;\n\t\t_ ->\n\t\t\ttrue\n\tend,\n\tC = case {Config#config.is_pool_client, Config#config.mine} of\n\t\t{true, false} ->\n\t\t\tio:format(\"~nThe mine flag must be set along with the is_pool_client flag.~n~n\"),\n\t\t\tfalse;\n\t\t_ ->\n\t\t\ttrue\n\tend,\n\tA andalso B andalso C.\n\nvalidate_cm(#config{ coordinated_mining = false }) ->\n\ttrue;\nvalidate_cm(#config{ cm_api_secret = not_set }) ->\n\tio:format(\"~nThe cm_api_secret must be set when coordinated_mining is set.~n~n\"),\n\tfalse;\nvalidate_cm(#config{ mine = false }) ->\n\tio:format(\"~nThe mine flag must be set when coordinated_mining is set.~n~n\"),\n\tfalse;\nvalidate_cm(_Config) ->\n\ttrue.\n\nvalidate_unique_replication_type(#config{ mine = false }) ->\n\ttrue;\nvalidate_unique_replication_type(Config) ->\n\tMiningAddr = Config#config.mining_addr,\n\tUniquePackingDifficulties = lists:foldl(\n\t\tfun({_, _, {composite, Addr, Difficulty}}, Acc) when Addr =:= MiningAddr ->\n\t\t\tsets:add_element({composite, Difficulty}, Acc);\n\t\t({_, _, {spora_2_6, Addr}}, Acc) when Addr =:= MiningAddr ->\n\t\t\tsets:add_element(spora_2_6, Acc);\n\t\t({_, _, {replica_2_9, Addr}}, Acc) when Addr =:= MiningAddr ->\n\t\t\tsets:add_element(replica_2_9, Acc);\n\t\t(_, Acc) ->\n\t\t\tAcc\n\t\tend,\n\t\tsets:new(),\n\t\tConfig#config.storage_modules\n\t),\n\tcase sets:size(UniquePackingDifficulties) =< 1 of\n\t\ttrue ->\n\t\t\ttrue;\n\t\tfalse ->\n\t\t\tio:format(\"~nThe node cannot mine multiple replication types \"\n\t\t\t\t\t\"for the same mining address.~n~n\"),\n\t\t\tfalse\n\tend.\n\nvalidate_verify(#config{ verify = false }) ->\n\ttrue;\nvalidate_verify(#config{ mine = true }) ->\n\tio:format(\"~nThe verify flag cannot be set together with the mine flag.~n~n\"),\n\tfalse;\nvalidate_verify(#config{ repack_in_place_storage_modules = RepackInPlaceStorageModules })\n\t\t\twhen RepackInPlaceStorageModules =/= [] ->\n\tio:format(\"~nThe verify flag cannot be set together with the repack_in_place 
flag.~n~n\"),\n\tfalse;\nvalidate_verify(_Config) ->\n\ttrue.\n\nvalidate_start_from_state(#config{ start_from_state = not_set }) ->\n\ttrue;\nvalidate_start_from_state(#config{ start_from_state = Folder, data_dir = DataDir }) ->\n\tcase filename:absname(Folder) == filename:absname(DataDir) of\n\t\ttrue ->\n\t\t\tio:format(\"~nstart_from_state folder cannot be the same as data_dir.~n~n\"),\n\t\t\tfalse;\n\t\tfalse ->\n\t\t\ttrue\n\tend.\n\ndisable_vdf(Config) ->\n\tRemovePublicVDFServer =\n\t\tlists:filter(fun(Item) -> Item =/= public_vdf_server end, Config#config.enable),\n\tConfig#config{\n\t\tnonce_limiter_client_peers = [],\n\t\tnonce_limiter_server_trusted_peers = [],\n\t\tenable = RemovePublicVDFServer,\n\t\tdisable = [compute_own_vdf | Config#config.disable]\n\t}.\n\nset_verify_flags(#config{ verify = false } = Config) ->\n\tConfig;\nset_verify_flags(Config) ->\n\tio:format(\"~n~nWARNING: The verify flag is set. Forcing the following options:\"),\n\tio:format(\"~n  - auto_join false\"),\n\tio:format(\"~n  - start_from_latest_state true\"),\n\tio:format(\"~n  - sync_jobs 0\"),\n\tio:format(\"~n  - block_pollers 0\"),\n\tio:format(\"~n  - header_sync_jobs 0\"),\n\tio:format(\"~n  - disable tx_poller\"),\n\tio:format(\"~n  - replica_2_9_workers 0\"),\n\tio:format(\"~n  - max_propagation_peers 0\"),\n\tio:format(\"~n  - max_block_propagation_peers 0\"),\n\tio:format(\"~n  - coordinated_mining false\"),\n\tio:format(\"~n  - cm_peers []\"),\n\tio:format(\"~n  - cm_exit_peer not_set\"),\n\tio:format(\"~n  - all VDF features disabled\"),\n\tConfig2 = disable_vdf(Config),\n\tConfig2#config{\n\t\tauto_join = false,\n\t\tstart_from_latest_state = true,\n\t\tsync_jobs = 0,\n\t\tblock_pollers = 0,\n\t\theader_sync_jobs = 0,\n\t\tdisable = [tx_poller | Config#config.disable],\n\t\treplica_2_9_workers = 0,\n\t\tcoordinated_mining = false,\n\t\tcm_peers = [],\n\t\tcm_exit_peer = not_set,\n\t\tmax_propagation_peers = 0,\n\t\tmax_block_propagation_peers = 0\n\t}.\n"
  },
  {
    "path": "apps/arweave/src/ar_coordination.erl",
    "content": "-module(ar_coordination).\n\n-behaviour(gen_server).\n\n-export([\n\tstart_link/0, computed_h1/2, compute_h2_for_peer/2, computed_h2_for_peer/1,\n\tget_public_state/0, send_h1_batch_to_peer/0, stat_loop/0,\n\tget_peers/1, get_peer/1,\n\tupdate_peer/2, remove_peer/1, garbage_collect/0, is_exit_peer/0,\n\tget_unique_partitions_list/0, get_self_plus_external_partitions_list/0,\n\tget_cluster_partitions_list/0, is_coordinated_miner/0\n]).\n\n-export([init/1, handle_cast/2, handle_call/3, handle_info/2, terminate/2]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"arweave/include/ar_consensus.hrl\").\n-include_lib(\"arweave/include/ar_mining.hrl\").\n\n-record(state, {\n\tlast_peer_response = #{},\n\tpeers_by_partition = #{},\n\tout_batches = #{},\n\tout_batch_timeout = ?DEFAULT_CM_BATCH_TIMEOUT_MS\n}).\n\n-define(START_DELAY, 1000).\n\n-ifdef(AR_TEST).\n-define(BATCH_SIZE_LIMIT, 2).\n-else.\n-define(BATCH_SIZE_LIMIT, 400).\n-endif.\n\n-define(BATCH_POLL_INTERVAL_MS, 20).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Start the gen_server.\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\n%% Helper function to see state while testing and later for monitoring API\nget_public_state() ->\n\tgen_server:call(?MODULE, get_public_state).\n\n%% @doc An H1 has been generated. Store it to send it later to a\n%% coordinated mining peer\ncomputed_h1(Candidate, DiffPair) ->\n\t#mining_candidate{\n\t\th1 = H1,\n\t\tnonce = Nonce\n\t} = Candidate,\n\t%% Prepare Candidate to be shared with a remote miner.\n\t%% 1. Add the current difficulty (the remote peer will use this instead of\n\t%%    its local difficulty).\n\t%% 2. Remove any data that's not needed by the peer. This cuts down on the volume of data\n\t%%    shared.\n\t%% 3. 
The peer field will be set to this peer's address by the remote miner.\n\tShareableCandidate = Candidate#mining_candidate{\n\t\tchunk1 = not_set,\n\t\tchunk2 = not_set,\n\t\tcm_diff = DiffPair,\n\t\tcm_lead_peer = not_set,\n\t\th1 = not_set,\n\t\th2 = not_set,\n\t\tnonce = not_set,\n\t\tpoa2 = not_set,\n\t\tpreimage = not_set\n\t},\n\tgen_server:cast(?MODULE, {computed_h1, ShareableCandidate, H1, Nonce}).\n\nsend_h1_batch_to_peer() ->\n\tgen_server:cast(?MODULE, send_h1_batch_to_peer).\n\n%% @doc Compute h2 for a remote peer\ncompute_h2_for_peer(Peer, Candidate) ->\n\tgen_server:cast(?MODULE, {compute_h2_for_peer,\n\t\tCandidate#mining_candidate{ cm_lead_peer = Peer }}).\n\ncomputed_h2_for_peer(Candidate) ->\n\tgen_server:cast(?MODULE, {computed_h2_for_peer, Candidate}).\n\nstat_loop() ->\n\tgen_server:call(?MODULE, stat_loop).\n\nget_peer(PartitionNumber) ->\n\tgen_server:call(?MODULE, {get_peer, PartitionNumber}).\n\nget_peers(PartitionNumber) ->\n\tgen_server:call(?MODULE, {get_peers, PartitionNumber}).\n\nupdate_peer(Peer, PartitionList) ->\n\tgen_server:cast(?MODULE, {update_peer, {Peer, PartitionList}}).\n\nremove_peer(Peer) ->\n\tgen_server:cast(?MODULE, {remove_peer, Peer}).\n\ngarbage_collect() ->\n\tgen_server:cast(?MODULE, garbage_collect).\n\n%% Return true if we are an exit peer in the coordinated mining setup.\nis_exit_peer() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tConfig#config.coordinated_mining == true andalso\n\t\t\tConfig#config.cm_exit_peer == not_set.\n\n%% Return true if we are a CM miner in the coordinated mining setup.\n%% A CM miner may be but does not have to be an exit node.\nis_coordinated_miner() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tConfig#config.coordinated_mining == true.\n\n%% @doc Return a list of unique partitions including local partitions and all of\n%% external (relevant pool peers') partitions.\n%%\n%% A single partition in the following format:\n%% {[\n%%   {bucket, PartitionID},\n%%   {bucketsize, ar_block:partition_size()},\n%%   {addr, EncodedMiningAddress}\n%% ]}\n%%\n%% A single partition with the composite packing is in the following format:\n%% {[\n%%   {bucket, PartitionID},\n%%   {bucketsize, ar_block:partition_size()},\n%%   {addr, EncodedMiningAddress},\n%%   {pdiff, PackingDifficulty}\n%% ]}\nget_self_plus_external_partitions_list() ->\n\tPoolPeer = ar_pool:pool_peer(),\n\tPoolPartitions = get_peer_partitions(PoolPeer),\n\tLocalPartitions = get_unique_partitions_set(),\n\tlists:sort(sets:to_list(get_unique_partitions_set(PoolPartitions, LocalPartitions))).\n\n%% @doc Return a list of unique partitions including local partitions and all of\n%% CM peers' partitions.\n%%\n%% A single partition in the following format:\n%% {[\n%%   {bucket, PartitionID},\n%%   {bucketsize, ar_block:partition_size()},\n%%   {addr, EncodedMiningAddress}\n%% ]}\n%%\n%% A single partition with the composite packing is in the following format:\n%% {[\n%%   {bucket, PartitionID},\n%%   {bucketsize, ar_block:partition_size()},\n%%   {addr, EncodedMiningAddress},\n%%   {pdiff, PackingDifficulty}\n%% ]}\nget_cluster_partitions_list() ->\n\tgen_server:call(?MODULE, get_cluster_partitions_list, ?DEFAULT_CALL_TIMEOUT).\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\n\ninit([]) ->\n\t{ok, Config} = arweave_config:get_env(),\n\n\tar_util:cast_after(?BATCH_POLL_INTERVAL_MS, ?MODULE, check_batches),\n\tState = 
#state{\n\t\tlast_peer_response = #{}\n\t},\n\tState2 = case Config#config.coordinated_mining of\n\t\tfalse ->\n\t\t\tState;\n\t\ttrue ->\n\t\t\tcase Config#config.cm_exit_peer of\n\t\t\t\tnot_set ->\n\t\t\t\t\tar:console(\n\t\t\t\t\t\t\"This node is configured as a Coordinated Mining Exit Node. If this is \"\n\t\t\t\t\t\t\"not correct, set 'cm_exit_peer' and relaunch.~n\");\n\t\t\t\t_ ->\n\t\t\t\t\tok\n\t\t\tend,\n\t\t\tar_util:cast_after(?START_DELAY, ?MODULE, refetch_peer_partitions),\n\t\t\tState#state{\n\t\t\t\tlast_peer_response = #{}\n\t\t\t}\n\tend,\n\t{ok, State2#state{\n\t\tout_batch_timeout = Config#config.cm_out_batch_timeout }}.\n\n%% Helper function to see state while testing and later for monitoring API\nhandle_call(get_public_state, _From, State) ->\n\tPublicState = {State#state.last_peer_response},\n\t{reply, {ok, PublicState}, State};\n\nhandle_call({get_peer, PartitionNumber}, _From, State) ->\n\t{reply, get_peer(PartitionNumber, State), State};\n\nhandle_call({get_peers, PartitionNumber}, _From, State) ->\n\t{reply, get_peers(PartitionNumber, State), State};\n\nhandle_call({get_peer_partitions, Peer}, _From, State) ->\n\t#state{ last_peer_response = Map } = State,\n\tcase maps:get(Peer, Map, []) of\n\t\t[] ->\n\t\t\t{reply, [], State};\n\t\t{true, Partitions} ->\n\t\t\t{reply, Partitions, State}\n\tend;\n\nhandle_call(get_cluster_partitions_list, _From, State) ->\n\tPeerPartitions =\n\t\tmaps:fold(\n\t\t\tfun(PartitionID, Items, Acc) ->\n\t\t\t\tlists:foldl(\n\t\t\t\t\tfun\t({{pool, _}, _, _}, Acc2) ->\n\t\t\t\t\t\t\tAcc2;\n\t\t\t\t\t\t({_Peer, PackingAddr, PackingDifficulty}, Acc2) ->\n\t\t\t\t\t\t\tsets:add_element(ar_serialize:partition_to_json_struct(\n\t\t\t\t\t\t\t\t\tPartitionID, ar_block:partition_size(), PackingAddr,\n\t\t\t\t\t\t\t\t\tPackingDifficulty), Acc2)\n\t\t\t\t\tend,\n\t\t\t\t\tAcc,\n\t\t\t\t\tItems\n\t\t\t\t)\n\t\t\tend,\n\t\t\tsets:new(),\n\t\t\tState#state.peers_by_partition\n\t\t),\n\tSet = get_unique_partitions_set(ar_mining_io:get_partitions(), PeerPartitions),\n\t{reply, lists:sort(sets:to_list(Set)), State};\n\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {request, Request}]),\n\t{reply, ok, State}.\n\nhandle_cast(check_batches, State) ->\n\tar_util:cast_after(?BATCH_POLL_INTERVAL_MS, ?MODULE, check_batches),\n\tOutBatches = check_out_batches(State),\n\t{noreply, State#state{ out_batches = OutBatches }};\n\nhandle_cast({computed_h1, ShareableCandidate, H1, Nonce}, State) ->\n\t#state{ out_batches = OutBatches } = State,\n\t#mining_candidate{\n\t\tcache_ref = CacheRef\n\t} = ShareableCandidate,\n\tNow = os:system_time(millisecond),\n\t{Start, ShareableCandidate2} = maps:get(CacheRef, OutBatches, {Now, ShareableCandidate}),\n\tH1List = [{H1, Nonce} | ShareableCandidate2#mining_candidate.cm_h1_list],\n\tShareableCandidate3 = ShareableCandidate2#mining_candidate{ cm_h1_list = H1List },\n\tOutBatches2 = case length(H1List) >= ?BATCH_SIZE_LIMIT of\n\t\ttrue ->\n\t\t\tsend_h1(ShareableCandidate3, State),\n\t\t\tmaps:remove(CacheRef, OutBatches);\n\t\tfalse ->\n\t\t\tmaps:put(CacheRef, {Start, ShareableCandidate3}, OutBatches)\n\tend,\n\t{noreply, State#state{ out_batches = OutBatches2 }};\n\nhandle_cast({compute_h2_for_peer, Candidate}, State) ->\n\t%% We don't need to batch inbound batches since ar_mining_io will cache the recall\n\t%% range for a short period, greatly lowering the cost of processing the same\n\t%% range multiple times across several 
batches.\n\tar_mining_server:compute_h2_for_peer(Candidate),\n\t{noreply, State};\n\nhandle_cast({computed_h2_for_peer, Candidate}, State) ->\n\t#mining_candidate{ cm_lead_peer = Peer, chunk2 = Chunk2 } = Candidate,\n\tPoA2 = case ar_mining_server:prepare_poa(poa2, Candidate, #poa{}) of\n\t\t{ok, PoA} -> PoA;\n\t\t{error, _Error} ->\n\t\t\t%% Fallback. This will probably fail later, but prepare_poa/3 should\n\t\t\t%% have already printed several errors so we'll continue just in case.\n\t\t\t%% df: Is this the right fallback?..\n\t\t\t#poa{ chunk = Chunk2 }\n\tend,\n\tsend_h2(Peer, Candidate#mining_candidate{ poa2 = PoA2 }),\n\t{noreply, State};\n\nhandle_cast(refetch_peer_partitions, State) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tPeers = Config#config.cm_peers,\n\tPeers2 =\n\t\tcase Config#config.cm_exit_peer == not_set\n\t\t\t\torelse lists:member(Config#config.cm_exit_peer, Peers) of\n\t\t\ttrue ->\n\t\t\t\t%% Either we are the exit node or the exit node\n\t\t\t\t%% is already configured as yet another mining peer.\n\t\t\t\tPeers;\n\t\t\tfalse ->\n\t\t\t\t[Config#config.cm_exit_peer | Peers]\n\t\tend,\n\tar_util:cast_after(Config#config.cm_poll_interval, ?MODULE, refetch_peer_partitions),\n\trefetch_peer_partitions(Peers2),\n\t{noreply, State};\n\nhandle_cast({update_peer, {Peer, PartitionList}}, State) ->\n\tSetValue = {true, PartitionList},\n\tState2 = State#state{\n\t\tlast_peer_response = maps:put(Peer, SetValue, State#state.last_peer_response)\n\t},\n\tState3 = remove_mining_peer(Peer, State2),\n\tState4 = add_mining_peer({Peer, PartitionList}, State3),\n\t{noreply, State4};\n\nhandle_cast({remove_peer, Peer}, State) ->\n\tState3 = case maps:get(Peer, State#state.last_peer_response, none) of\n\t\tnone ->\n\t\t\tState;\n\t\t{_, OldPartitionList} ->\n\t\t\tSetValue = {false, OldPartitionList},\n\t\t\t% NOTE. We keep OldPartitionList because we don't want blinky stat\n\t\t\tState2 = State#state{\n\t\t\t\tlast_peer_response = maps:put(Peer, SetValue, State#state.last_peer_response)\n\t\t\t},\n\t\t\t?LOG_INFO([{event, cm_peer_removed}, {peer, ar_util:format_peer(Peer)}]),\n\t\t\tremove_mining_peer(Peer, State2)\n\tend,\n\t{noreply, State3};\n\nhandle_cast(refetch_pool_peer_partitions, State) ->\n\t%% Casted when we are a CM exit peer and a pool client. 
We collect our local peer\n\t%% partitions and push them to the pool getting the pool's complementary partitions\n\t%% in response.\n\tUniquePeerPartitions =\n\t\tmaps:fold(\n\t\t\tfun\t({pool, _}, _Value, Acc) ->\n\t\t\t\t\tAcc;\n\t\t\t\t(_Peer, {_, Partitions}, Acc) ->\n\t\t\t\t\tget_unique_partitions_set(Partitions, Acc)\n\t\t\tend,\n\t\t\tsets:new(),\n\t\t\tState#state.last_peer_response\n\t\t),\n\trefetch_pool_peer_partitions(UniquePeerPartitions),\n\t{noreply, State};\n\nhandle_cast(garbage_collect, State) ->\n\terlang:garbage_collect(self(),\n\t\t[{async, {ar_coordination, self(), erlang:monotonic_time()}}]),\n\t{noreply, State};\n\nhandle_cast(Cast, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {cast, Cast}]),\n\t{noreply, State}.\n\nhandle_info({garbage_collect, {Name, Pid, StartTime}, GCResult}, State) ->\n\tEndTime = erlang:monotonic_time(),\n\tElapsedTime = erlang:convert_time_unit(EndTime-StartTime, native, millisecond),\n\tcase GCResult == false orelse ElapsedTime > ?GC_LOG_THRESHOLD of\n\t\ttrue ->\n\t\t\t?LOG_DEBUG([\n\t\t\t\t{event, mining_debug_garbage_collect}, {process, Name}, {pid, Pid},\n\t\t\t\t{gc_time, ElapsedTime}, {gc_result, GCResult}]);\n\t\tfalse ->\n\t\t\tok\n\tend,\n\t{noreply, State};\n\nhandle_info(Message, State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {module, ?MODULE}, {message, Message}]),\n\t{noreply, State}.\n\nterminate(Reason, _State) ->\n\t?LOG_INFO([{module, ?MODULE},{pid, self()},{callback, terminate},{reason, Reason}]),\n\tok.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\n%% @doc Return the list of the partitions of the given Peer, to the best\n%% of our knowledge.\nget_peer_partitions(Peer) ->\n\tgen_server:call(?MODULE, {get_peer_partitions, Peer}, ?DEFAULT_CALL_TIMEOUT).\n\ncheck_out_batches(#state{out_batches = OutBatches}) when map_size(OutBatches) == 0 ->\n\tOutBatches;\ncheck_out_batches(State) ->\n\t#state{ out_batches = OutBatches, out_batch_timeout = BatchTimeout } = State,\n\tNow = os:system_time(millisecond),\n\tmaps:filter(\n\t\tfun\t(_CacheRef, {Start, Candidate}) ->\n\t\t\tcase Now - Start >= BatchTimeout of\n\t\t\t\ttrue ->\n\t\t\t\t\t%% send this batch, and remove it from the map\n\t\t\t\t\tsend_h1(Candidate, State),\n\t\t\t\t\tfalse;\n\t\t\t\tfalse ->\n\t\t\t\t\t%% not yet time to send this batch, keep it in the map\n\t\t\t\t\ttrue\n\t\t\tend\n\t\tend,\n\t\tOutBatches\n\t).\n\nget_peers(PartitionNumber, State) ->\n\t[element(1, El) || El <- maps:get(PartitionNumber, State#state.peers_by_partition, [])].\n\nget_peer(PartitionNumber, State) ->\n\tcase get_peers(PartitionNumber, State) of\n\t\t[] ->\n\t\t\tnone;\n\t\tPeers ->\n\t\t\tlists:last(Peers)\n\tend.\n\nsend_h1(Candidate, State) ->\n\t#mining_candidate{\n\t\tpartition_number2 = PartitionNumber2, cm_h1_list = H1List } = Candidate,\n\tcase get_peer(PartitionNumber2, State) of\n\t\tnone ->\n\t\t\tok;\n\t\tPeer ->\n\t\t\tCandidate2 = Candidate#mining_candidate{ label = <<\"cm\">> },\n\t\t\tspawn(fun() ->\n\t\t\t\tar_http_iface_client:cm_h1_send(Peer, Candidate2)\n\t\t\tend),\n\t\t\tcase Peer of\n\t\t\t\t{pool, _} ->\n\t\t\t\t\tar_mining_stats:h1_sent_to_peer(pool, length(H1List));\n\t\t\t\t_ ->\n\t\t\t\t\tar_mining_stats:h1_sent_to_peer(Peer, length(H1List))\n\t\t\tend\n\tend.\n\nsend_h2(Peer, Candidate) ->\n\tspawn(fun() ->\n\t\tar_http_iface_client:cm_h2_send(Peer, Candidate)\n\tend),\n\tcase Peer of\n\t\t{pool, _} 
->\n\t\t\tar_mining_stats:h2_sent_to_peer(pool);\n\t\t_ ->\n\t\t\tar_mining_stats:h2_sent_to_peer(Peer)\n\tend.\n\nadd_mining_peer({Peer, StorageModules}, State) ->\n\tPartitions = lists:map(\n\t\tfun({PartitionID, _PartitionSize, PackingAddr, PackingDifficulty}) ->\n\t\t\t{PartitionID, PackingAddr, PackingDifficulty} end, StorageModules),\n\t?LOG_INFO([{event, cm_peer_updated},\n\t\t{peer, ar_util:format_peer(Peer)},\n\t\t{partitions, io_lib:format(\"~p\",\n\t\t\t[[{ID, ar_util:encode(Addr), PackingDifficulty}\n\t\t\t\t|| {ID, Addr, PackingDifficulty} <- Partitions]])}]),\n\tPeersByPartition =\n\t\tlists:foldl(\n\t\t\tfun({PartitionID, PackingAddr, PackingDifficulty}, Acc) ->\n\t\t\t\tItems = maps:get(PartitionID, Acc, []),\n\t\t\t\tmaps:put(PartitionID, [{Peer, PackingAddr, PackingDifficulty} | Items], Acc)\n\t\t\tend,\n\t\t\tState#state.peers_by_partition,\n\t\t\tPartitions\n\t\t),\n\tState#state{ peers_by_partition = PeersByPartition }.\n\nremove_mining_peer(Peer, State) ->\n\tPeersByPartition = maps:fold(\n\t\tfun(PartitionID, Peers, Acc) ->\n\t\t\tPeers2 = [{Peer2, Addr, PackingDifficulty}\n\t\t\t\t\t|| {Peer2, Addr, PackingDifficulty} <- Peers, Peer2 /= Peer],\n\t\t\tmaps:put(PartitionID, Peers2, Acc)\n\t\tend,\n\t\t#{},\n\t\tState#state.peers_by_partition\n\t),\n\tState#state{ peers_by_partition = PeersByPartition }.\n\nrefetch_peer_partitions(Peers) ->\n\tspawn(fun() ->\n\t\ttry\n\t\t\tMapFun = fun(Peer) ->\n\t\t\t\tcase ar_http_iface_client:get_cm_partition_table(Peer) of\n\t\t\t\t\t{ok, PartitionList} -> ar_coordination:update_peer(Peer, PartitionList);\n\t\t\t\t\t_ -> ok\n\t\t\t\tend\n\t\t\tend,\n\t\t\tar_util:pmap(MapFun, Peers)\n\t\tcatch\n\t\t\tthrow:{pmap_timeout, _} -> ?LOG_WARNING([{event, pmap_timeout}, {module, ?MODULE}, {peers, Peers}]);\n\t\t\tErrT:Other -> ?LOG_ERROR([{event, pmap_error}, {module, ?MODULE}, {peers, Peers}, {ErrT, Other}])\n\t\tend,\n\t\t%% ar_util:pmap ensures we fetch all the local up-to-date CM peer partitions first,\n\t\t%% then share them with the Pool to fetch the complementary pool CM peer partitions.\n\t\tcase {ar_pool:is_client(), ar_coordination:is_exit_peer()} of\n\t\t\t{true, true} ->\n\t\t\t\trefetch_pool_peer_partitions();\n\t\t\t_ ->\n\t\t\t\tok\n\t\tend\n\tend).\n\nrefetch_pool_peer_partitions() ->\n\tgen_server:cast(?MODULE, refetch_pool_peer_partitions).\n\nget_unique_partitions_list() ->\n\tSet = get_unique_partitions_set(ar_mining_io:get_partitions(), sets:new()),\n\tlists:sort(sets:to_list(Set)).\n\nget_unique_partitions_set() ->\n\tget_unique_partitions_set(ar_mining_io:get_partitions(), sets:new()).\n\nget_unique_partitions_set([], UniquePartitions) ->\n\tUniquePartitions;\nget_unique_partitions_set([{PartitionID, MiningAddress, PackingDifficulty} | Partitions],\n\t\tUniquePartitions) ->\n\tget_unique_partitions_set(\n\t\tPartitions,\n\t\tsets:add_element(ar_serialize:partition_to_json_struct(PartitionID, ar_block:partition_size(),\n\t\t\t\tMiningAddress, PackingDifficulty), UniquePartitions)\n\t);\nget_unique_partitions_set([{PartitionID, BucketSize, MiningAddress, PackingDifficulty}\n\t\t| Partitions], UniquePartitions) ->\n\tget_unique_partitions_set(\n\t\tPartitions,\n\t\tsets:add_element(\n\t\t\tar_serialize:partition_to_json_struct(PartitionID, BucketSize,\n\t\t\t\t\tMiningAddress, PackingDifficulty), UniquePartitions)\n\t).\n\nrefetch_pool_peer_partitions(UniquePeerPartitions) ->\n\tspawn(fun() ->\n\t\tJSON = ar_serialize:jsonify(lists:sort(sets:to_list(UniquePeerPartitions))),\n\t\tPoolPeer = 
ar_pool:pool_peer(),\n\t\tcase ar_http_iface_client:post_cm_partition_table_to_pool(PoolPeer, JSON) of\n\t\t\t{ok, PartitionList} ->\n\t\t\t\tar_coordination:update_peer(PoolPeer, PartitionList);\n\t\t\t_ ->\n\t\t\t\tok\n\t\tend\n\tend).\n"
  },
  {
    "path": "apps/arweave/src/ar_data_discovery.erl",
    "content": "-module(ar_data_discovery).\n\n-behaviour(gen_server).\n\n-export([start_link/0, get_bucket_peers/1, get_footprint_bucket_peers/1,\n\t\tcollect_peers/0, pick_peers/2, report_bucket_stats/0]).\n\n-export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2]).\n\n-include(\"ar.hrl\").\n-include(\"ar_data_discovery.hrl\").\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-record(state, {\n\tpeer_queue,\n\tpeers_pending,\n\tnetwork_map,\n\tfootprint_map,\n\texpiration_map\n}).\n\n%% The frequency of asking peers about their data.\n-ifdef(AR_TEST).\n-define(DATA_DISCOVERY_COLLECT_PEERS_FREQUENCY_MS, 2 * 1000).\n-else.\n-define(DATA_DISCOVERY_COLLECT_PEERS_FREQUENCY_MS, 4 * 60 * 1000).\n-endif.\n\n%% The frequency of logging bucket stats.\n-ifdef(AR_TEST).\n-define(REPORT_BUCKET_STATS_FREQUENCY_MS, 10 * 1000).\n-else.\n-define(REPORT_BUCKET_STATS_FREQUENCY_MS, 60 * 1000).\n-endif.\n\n%% The expiration time of peer's buckets. If a peer is found in the list of\n%% the first best ?DATA_DISCOVERY_COLLECT_PEERS_COUNT peers (checked every\n%% ?DATA_DISCOVERY_COLLECT_PEERS_FREQUENCY_MS milliseconds), the timer is refreshed.\n-define(PEER_EXPIRATION_TIME_MS, 60 * 60 * 1000).\n\n%% The maximum number of requests running at any time.\n-define(DATA_DISCOVERY_PARALLEL_PEER_REQUESTS, 10).\n\n%% The number of peers from the top of the rating to schedule for inclusion\n%% into the peer map every DATA_DISCOVERY_COLLECT_PEERS_FREQUENCY_MS milliseconds.\n-define(DATA_DISCOVERY_COLLECT_PEERS_COUNT, 1000).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\n%% @doc Return the list of ?QUERY_BEST_PEERS_COUNT peers who have at least one byte of\n%% data synced in the given Bucket of size ?NETWORK_DATA_BUCKET_SIZE. 
80% of the peers\n%% are chosen from the 20% of peers with the biggest share in the given bucket.\nget_bucket_peers(Bucket) ->\n\tcase ets:member(ar_peers, block_connections) of\n\t\ttrue ->\n\t\t\t[];\n\t\tfalse ->\n\t\t\tget_bucket_peers(Bucket, {Bucket, 0, no_peer}, [])\n\tend.\n\nget_bucket_peers(Bucket, Cursor, Peers) ->\n\tcase ets:next(?MODULE, Cursor) of\n\t\t{Bucket, _Share, Peer} = Key ->\n\t\t\tget_bucket_peers(Bucket, Key, [Peer | Peers]);\n\t\t_ -> % matches `end_of_table` or an unexpected value\n\t\t\tar_util:unique(Peers)\n\tend.\n\n%% @doc Return the list of ?QUERY_BEST_PEERS_COUNT peers who have at least one byte of\n%% data synced in the given footprint bucket of size ?NETWORK_FOOTPRINT_BUCKET_SIZE.\n%% 80% of the peers are chosen from the 20% of peers with the biggest share\n%% in the given bucket.\nget_footprint_bucket_peers(Bucket) ->\n\tcase ets:member(ar_peers, block_connections) of\n\t\ttrue ->\n\t\t\t[];\n\t\tfalse ->\n\t\t\tget_footprint_bucket_peers(Bucket, {Bucket, 0, no_peer}, [])\n\tend.\n\nget_footprint_bucket_peers(Bucket, Cursor, Peers) ->\n\tcase ets:next(ar_data_discovery_footprint_buckets, Cursor) of\n\t\t{Bucket, _Share, Peer} = Key ->\n\t\t\tget_footprint_bucket_peers(Bucket, Key, [Peer | Peers]);\n\t\t_ ->\n\t\t\tar_util:unique(Peers)\n\tend.\n\n%% @doc Return a list of peers where 80% of the peers are randomly chosen\n%% from the first 20% of Peers and the other 20% of the peers are randomly\n%% chosen from the other 80% of Peers.\npick_peers(Peers, N) ->\n\tpick_peers(Peers, length(Peers), N).\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\n\ninit([]) ->\n\t{ok, _} = ar_timer:apply_interval(\n\t\t?DATA_DISCOVERY_COLLECT_PEERS_FREQUENCY_MS,\n\t\t?MODULE,\n\t\tcollect_peers,\n\t\t[],\n\t\t#{ skip_on_shutdown => false }\n\t),\n\t{ok, _} = ar_timer:apply_interval(\n\t\t?REPORT_BUCKET_STATS_FREQUENCY_MS,\n\t\t?MODULE,\n\t\treport_bucket_stats,\n\t\t[],\n\t\t#{ skip_on_shutdown => true }\n\t),\n\tgen_server:cast(?MODULE, update_network_data_map),\n\tok = ar_events:subscribe(peer),\n\t{ok, #state{\n\t\tpeer_queue = queue:new(),\n\t\tpeers_pending = 0,\n\t\tnetwork_map = #{},\n\t\tfootprint_map = #{},\n\t\texpiration_map = #{}\n\t}}.\n\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {request, Request}]),\n\t{reply, ok, State}.\n\nhandle_cast({add_peer, Peer}, #state{ peer_queue = Queue } = State) ->\n\t{noreply, State#state{ peer_queue = queue:in(Peer, Queue) }};\n\nhandle_cast(update_network_data_map, #state{ peers_pending = N } = State)\n\t\twhen N < ?DATA_DISCOVERY_PARALLEL_PEER_REQUESTS ->\n\tcase queue:out(State#state.peer_queue) of\n\t\t{empty, _} ->\n\t\t\tar_util:cast_after(200, ?MODULE, update_network_data_map),\n\t\t\t{noreply, State};\n\t\t{{value, Peer}, Queue} ->\n\t\t\tmonitor(process, spawn_link(\n\t\t\t\tfun() ->\n\t\t\t\t\tfetch_sync_buckets(Peer),\n\t\t\t\t\tfetch_footprint_buckets(Peer)\n\t\t\t\tend\n\t\t\t)),\n\t\t\tgen_server:cast(?MODULE, update_network_data_map),\n\t\t\t{noreply, State#state{ peers_pending = N + 1, peer_queue = Queue }}\n\tend;\nhandle_cast(update_network_data_map, State) ->\n\tar_util:cast_after(200, ?MODULE, update_network_data_map),\n\t{noreply, State};\n\nhandle_cast({add_peer_sync_buckets, Peer, SyncBuckets}, State) ->\n\t#state{ network_map = Map } = State,\n\tState2 = refresh_expiration_timer(Peer, State),\n\tMap2 = maps:put(Peer, SyncBuckets, 
Map),\n\tar_sync_buckets:foreach(\n\t\tfun(Bucket, Share) ->\n\t\t\tets:insert(?MODULE, {{Bucket, Share, Peer}})\n\t\tend,\n\t\t?NETWORK_DATA_BUCKET_SIZE,\n\t\tSyncBuckets\n\t),\n\t{noreply, State2#state{ network_map = Map2 }};\n\nhandle_cast({add_peer_footprint_buckets, Peer, FootprintBuckets}, State) ->\n\t#state{ footprint_map = Map } = State,\n\tState2 = refresh_expiration_timer(Peer, State),\n\tMap2 = maps:put(Peer, FootprintBuckets, Map),\n\tar_sync_buckets:foreach(\n\t\tfun(Bucket, Share) ->\n\t\t\tets:insert(ar_data_discovery_footprint_buckets, {{Bucket, Share, Peer}})\n\t\tend,\n\t\t?NETWORK_FOOTPRINT_BUCKET_SIZE,\n\t\tFootprintBuckets\n\t),\n\t{noreply, State2#state{ footprint_map = Map2 }};\n\nhandle_cast({remove_peer, Peer}, State) ->\n\t#state{ network_map = Map, footprint_map = FootprintMap, expiration_map = E } = State,\n\tMap2 =\n\t\tcase maps:take(Peer, Map) of\n\t\t\terror ->\n\t\t\t\tMap;\n\t\t\t{SyncBuckets, Map3} ->\n\t\t\t\tar_sync_buckets:foreach(\n\t\t\t\t\tfun(Bucket, Share) ->\n\t\t\t\t\t\tets:delete(?MODULE, {Bucket, Share, Peer})\n\t\t\t\t\tend,\n\t\t\t\t\t?NETWORK_DATA_BUCKET_SIZE,\n\t\t\t\t\tSyncBuckets\n\t\t\t\t),\n\t\t\t\tMap3\n\t\tend,\n\tFootprintMap2 =\n\t\tcase maps:take(Peer, FootprintMap) of\n\t\t\terror ->\n\t\t\t\tFootprintMap;\n\t\t\t{FootprintBuckets, Map4} ->\n\t\t\t\tar_sync_buckets:foreach(\n\t\t\t\t\tfun(Bucket, Share) ->\n\t\t\t\t\t\tets:delete(ar_data_discovery_footprint_buckets, {Bucket, Share, Peer})\n\t\t\t\t\tend,\n\t\t\t\t\t?NETWORK_FOOTPRINT_BUCKET_SIZE,\n\t\t\t\t\tFootprintBuckets\n\t\t\t\t),\n\t\t\t\tMap4\n\t\tend,\n\tE2 = maps:remove(Peer, E),\n\t{noreply, State#state{ network_map = Map2, footprint_map = FootprintMap2, expiration_map = E2 }};\n\nhandle_cast(Cast, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {cast, Cast}]),\n\t{noreply, State}.\n\nhandle_info({'DOWN', _,  process, _, _}, #state{ peers_pending = N } = State) ->\n\t{noreply, State#state{ peers_pending = N - 1 }};\n\nhandle_info({event, peer, {removed, Peer}}, State) ->\n\tgen_server:cast(?MODULE, {remove_peer, Peer}),\n\t{noreply, State};\n\nhandle_info({event, peer, _}, State) ->\n\t{noreply, State};\n\nhandle_info(Message, State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {message, Message}]),\n\t{noreply, State}.\n\nterminate(Reason, _State) ->\n\t?LOG_INFO([{pid, self()},{callback, terminate},{reason, Reason}]),\n\tok.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\npick_peers(Peers, PeerLen, N) when N >= PeerLen ->\n\tPeers;\npick_peers([], _PeerLen, _N) ->\n\t[];\npick_peers(_Peers, _PeerLen, N) when N =< 0 ->\n\t[];\npick_peers(Peers, PeerLen, N) ->\n\t%% N: the target number of peers to pick\n\t%% Best: top 20% of the Peers list\n\t%% Other: the rest of the Peers list\n\t{Best, Other} = lists:split(max(PeerLen div 5, 1), Peers),\n\t%% TakeBest: Select 80% of N worth of Best - or all of Best if Best is short.\n\tTakeBest = max((8 * N) div 10, 1),\n\tPart1 = ar_util:pick_random(Best, min(length(Best), TakeBest)),\n\t%% TakeOther: rather than strictly take 20% of N, take enough to ensure we're\n\t%% getting the full N of picked peers.\n\tTakeOther = N - length(Part1),\n\tPart2 = ar_util:pick_random(Other, min(length(Other), TakeOther)),\n\tPart1 ++ Part2.\n\ncollect_peers() ->\n\tN = ?DATA_DISCOVERY_COLLECT_PEERS_COUNT,\n\t{ok, Config} = arweave_config:get_env(),\n\tPeers =\n\t\tcase Config#config.sync_from_local_peers_only 
of\n\t\t\ttrue ->\n\t\t\t\tConfig#config.local_peers;\n\t\t\tfalse ->\n\t\t\t\t%% rank peers by current rating since we care about their recent throughput performance\n\t\t\t\tar_peers:get_peers(current)\n\t\tend,\n\tcollect_peers(lists:sublist(Peers, N)).\n\ncollect_peers([Peer | Peers]) ->\n\tgen_server:cast(?MODULE, {add_peer, Peer}),\n\tcollect_peers(Peers);\ncollect_peers([]) ->\n\tok.\n\n%% @doc Log bucket statistics for each configured storage module.\nreport_bucket_stats() ->\n\tStartTime = erlang:monotonic_time(millisecond),\n\t{ok, Config} = arweave_config:get_env(),\n\tStorageModules = Config#config.storage_modules,\n\tlists:foreach(\n\t\tfun(Module) ->\n\t\t\tStoreID = ar_storage_module:id(Module),\n\t\t\t{RangeStart, RangeEnd} = ar_storage_module:get_range(StoreID),\n\t\t\treport_bucket_stats(StoreID, RangeStart, RangeEnd, normal),\n\t\t\treport_bucket_stats(StoreID, RangeStart, RangeEnd, footprint)\n\t\tend,\n\t\tStorageModules\n\t),\n\tElapsedMs = erlang:monotonic_time(millisecond) - StartTime,\n\t?LOG_DEBUG([{event, bucket_stats_complete}, {elapsed_ms, ElapsedMs}]).\n\nreport_bucket_stats(StoreID, RangeStart, RangeEnd, normal) ->\n\tStartBucket = RangeStart div ?NETWORK_DATA_BUCKET_SIZE,\n\tEndBucket = (RangeEnd - 1) div ?NETWORK_DATA_BUCKET_SIZE,\n\tTotalBuckets = EndBucket - StartBucket + 1,\n\t{AllPeers, ZeroCount, HealthyCount} =\n\t\tbucket_stats(StartBucket, EndBucket, ?MODULE, sets:new()),\n\tset_bucket_stats_metrics(StoreID, normal, AllPeers, TotalBuckets, ZeroCount, HealthyCount);\nreport_bucket_stats(StoreID, RangeStart, RangeEnd, footprint) ->\n\tStartBucket = ar_footprint_record:get_footprint_bucket(RangeStart + ?DATA_CHUNK_SIZE),\n\tEndBucket = ar_footprint_record:get_footprint_bucket(RangeEnd),\n\tTotalBuckets = max(0, EndBucket - StartBucket + 1),\n\t{AllPeers, ZeroCount, HealthyCount} =\n\t\tbucket_stats(StartBucket, EndBucket, ar_data_discovery_footprint_buckets, sets:new()),\n\tset_bucket_stats_metrics(StoreID, footprint, AllPeers, TotalBuckets, ZeroCount, HealthyCount).\n\nbucket_stats(StartBucket, EndBucket, _Table, AllPeers) when StartBucket > EndBucket ->\n\t{AllPeers, 0, 0};\nbucket_stats(StartBucket, EndBucket, Table, AllPeers) ->\n\tbucket_stats(StartBucket, EndBucket, Table, AllPeers, 0, 0).\n\nbucket_stats(Bucket, EndBucket, _Table, AllPeers, ZeroCount, HealthyCount)\n\t\twhen Bucket > EndBucket ->\n\t{AllPeers, ZeroCount, HealthyCount};\nbucket_stats(Bucket, EndBucket, Table, AllPeers, ZeroCount, HealthyCount) ->\n\t{BucketPeers, AllPeers2} = get_bucket_peers_and_collect(Bucket, Table, AllPeers),\n\tPeerCount = length(BucketPeers),\n\t{ZeroCount2, HealthyCount2} =\n\t\tcase PeerCount of\n\t\t\t0 -> {ZeroCount + 1, HealthyCount};\n\t\t\tN when N >= 3 -> {ZeroCount, HealthyCount + 1};\n\t\t\t_ -> {ZeroCount, HealthyCount}\n\t\tend,\n\tbucket_stats(Bucket + 1, EndBucket, Table, AllPeers2, ZeroCount2, HealthyCount2).\n\nget_bucket_peers_and_collect(Bucket, Table, AllPeers) ->\n\tget_bucket_peers_and_collect(Bucket, Table, {Bucket, 0, no_peer}, [], AllPeers).\n\nget_bucket_peers_and_collect(Bucket, Table, Cursor, BucketPeers, AllPeers) ->\n\tcase ets:next(Table, Cursor) of\n\t\t{Bucket, _Share, Peer} = Key ->\n\t\t\tget_bucket_peers_and_collect(Bucket, Table, Key,\n\t\t\t\t[Peer | BucketPeers], sets:add_element(Peer, AllPeers));\n\t\t_ ->\n\t\t\t{ar_util:unique(BucketPeers), AllPeers}\n\tend.\n\nset_bucket_stats_metrics(StoreID, Type, AllPeers, TotalBuckets, ZeroCount, HealthyCount) ->\n\tNumPeers = sets:size(AllPeers),\n\tStoreIDLabel = 
ar_storage_module:label(StoreID),\n\tprometheus_gauge:set(data_discovery, [Type, StoreIDLabel, num_peers], NumPeers),\n\tprometheus_gauge:set(data_discovery, [Type, StoreIDLabel, total_buckets], TotalBuckets),\n\tprometheus_gauge:set(data_discovery, [Type, StoreIDLabel, zero_peer_count], ZeroCount),\n\tprometheus_gauge:set(data_discovery, [Type, StoreIDLabel, healthy_peer_count], HealthyCount).\n\nfetch_sync_buckets(Peer) ->\n\tcase ar_http_iface_client:get_sync_buckets(Peer) of\n\t\t{ok, SyncBuckets} ->\n\t\t\tgen_server:cast(?MODULE, {add_peer_sync_buckets, Peer, SyncBuckets});\n\t\t{error, request_type_not_found} ->\n\t\t\t?LOG_DEBUG([{event, sync_buckets_request_type_not_found},\n\t\t\t\t\t{peer, ar_util:format_peer(Peer)}]);\n\t\t{error, Reason} ->\n\t\t\tar_http_iface_client:log_failed_request(Reason,\n\t\t\t\t[{event, failed_to_fetch_sync_buckets},\n\t\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t\t{reason, io_lib:format(\"~p\", [Reason])}]);\n\t\tError ->\n\t\t\t?LOG_DEBUG([{event, failed_to_fetch_sync_buckets},\n\t\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t\t{reason, io_lib:format(\"~p\", [Error])}])\n\tend.\n\nfetch_footprint_buckets(Peer) ->\n\tcase ar_peers:get_peer_release(Peer) >= ?GET_FOOTPRINT_SUPPORT_RELEASE of\n\t\ttrue ->\n\t\t\tfetch_footprint_buckets2(Peer);\n\t\tfalse ->\n\t\t\tok\n\tend.\n\nfetch_footprint_buckets2(Peer) ->\n\tcase ar_http_iface_client:get_footprint_buckets(Peer) of\n\t\t{ok, SyncBuckets} ->\n\t\t\tgen_server:cast(?MODULE, {add_peer_footprint_buckets, Peer, SyncBuckets});\n\t\t{error, request_type_not_found} ->\n\t\t\t?LOG_DEBUG([{event, footprint_buckets_request_type_not_found},\n\t\t\t\t\t{peer, ar_util:format_peer(Peer)}]);\n\t\t{error, Reason} ->\n\t\t\tar_http_iface_client:log_failed_request(Reason,\n\t\t\t\t[{event, failed_to_fetch_footprint_buckets},\n\t\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t\t{reason, io_lib:format(\"~p\", [Reason])}]);\n\t\tError ->\n\t\t\t?LOG_DEBUG([{event, failed_to_fetch_footprint_buckets},\n\t\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t\t{reason, io_lib:format(\"~p\", [Error])}])\n\tend.\n\nrefresh_expiration_timer(Peer, State) ->\n\t#state{ expiration_map = Map } = State,\n\tcase maps:get(Peer, Map, not_found) of\n\t\tnot_found ->\n\t\t\tok;\n\t\tTimer ->\n\t\t\ttimer:cancel(Timer)\n\tend,\n\tTimer2 = ar_util:cast_after(?PEER_EXPIRATION_TIME_MS, ?MODULE, {remove_peer, Peer}),\n\tState#state{ expiration_map = maps:put(Peer, Timer2, Map) }.\n"
  },
  {
    "path": "apps/arweave/src/ar_data_doctor.erl",
    "content": "-module(ar_data_doctor).\n\n-export([main/0, main/1]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"arweave/include/ar_chunk_storage.hrl\").\n-include_lib(\"arweave/include/ar_consensus.hrl\").\n\nmain() ->\n\tmain([]).\nmain([]) ->\n\thelp(),\n\tinit:stop(1);\nmain(Args) ->\n    logger:set_handler_config(default, level, error),\n    Command = hd(Args),\n    Success = case Command of\n        \"merge\" ->\n            ar_doctor_merge:main(tl(Args));\n        \"bench\" ->\n            ar_doctor_bench:main(tl(Args));\n        \"dump\" ->\n            ar_doctor_dump:main(tl(Args));\n        \"inspect\" ->\n            ar_doctor_inspect:main(tl(Args));\n        _ ->\n            false\n    end,\n    case Success of\n        true ->\n            init:stop(0);\n        _ ->\n            help(),\n            init:stop(1)\n    end. \n\nhelp() ->\n\tar:console(\"~n\"),\n\tar_doctor_merge:help(),\n\tar:console(\"~n\"),\n\tar_doctor_bench:help(),\n\tar:console(\"~n\"),\n\tar_doctor_dump:help(),\n\tar:console(\"~n\"),\n\tar_doctor_inspect:help(),\n\tar:console(\"~n\").\n"
  },
  {
    "path": "apps/arweave/src/ar_data_root_sync.erl",
    "content": "-module(ar_data_root_sync).\n\n-behaviour(gen_server).\n\n-export([start_link/1, name/1, store_data_roots/4, store_data_roots_sync/4, validate_data_roots/4]).\n-export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2]).\n\n-include(\"ar.hrl\").\n-include(\"ar_data_sync.hrl\").\n-include(\"ar_sup.hrl\").\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-record(state, {\n\tstore_id,\n\trange_start,\n\trange_end,\n\tscan_cursor\n}).\n\n-define(DATA_ROOTS_SYNC_RELEASE_NUMBER, 91).\n\n%% How long we wait before (re-)scanning our range for unsynced data roots.\n-ifdef(AR_TEST).\n-define(DATA_ROOTS_SYNC_SCAN_INTERVAL_MS, 2000).\n-else.\n-define(DATA_ROOTS_SYNC_SCAN_INTERVAL_MS, 600_000). % 10 minutes.\n-endif.\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nstart_link(StoreID) ->\n\tName = name(StoreID),\n\tgen_server:start_link({local, Name}, ?MODULE, [StoreID], []).\n\t\nname(StoreID) ->\n\tlist_to_atom(\"ar_data_root_sync_\" ++ ar_storage_module:label(StoreID)).\n\n%% @doc Store the given data roots.\nstore_data_roots(BlockStart, BlockEnd, TXRoot, Entries) ->\n\tgen_server:cast(ar_data_sync_default, {store_data_roots,\n\t\tBlockStart, BlockEnd, TXRoot, Entries}).\n\n%% @doc Store the given data roots synchronously.\nstore_data_roots_sync(BlockStart, BlockEnd, TXRoot, Entries) ->\n\tgen_server:call(ar_data_sync_default, {store_data_roots_sync,\n\t\tBlockStart, BlockEnd, TXRoot, Entries}, 120000).\n\n%% @doc Validate the given data roots against the local block index.\n%% Also recompute the TXRoot from entries and verify Merkle paths.\nvalidate_data_roots(TXRoot, BlockSize, Entries, Offset) ->\n\t{BlockStart, BlockEnd, ExpectedTXRoot} = ar_block_index:get_block_bounds(Offset),\n\tCheckBlockBounds =\n\t\tcase Offset >= BlockStart andalso Offset < BlockEnd of\n\t\t\tfalse ->\n\t\t\t\t{error, invalid_block_bounds};\n\t\t\ttrue ->\n\t\t\t\tok\n\t\tend,\n\tCheckBlockSize =\n\t\tcase CheckBlockBounds of\n\t\t\tok ->\n\t\t\t\tcase BlockSize == BlockEnd - BlockStart of\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\t{error, invalid_block_size};\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tok\n\t\t\t\tend;\n\t\t\tError ->\n\t\t\t\tError\n\t\tend,\n\tPrepareDataRootPairs =\n\t\tcase CheckBlockSize of\n\t\t\tok ->\n\t\t\t\tprepare_data_root_pairs(Entries, BlockStart, BlockSize);\n\t\t\tError2 ->\n\t\t\t\tError2\n\t\tend,\n\tValidateTXRoot =\n\t\tcase PrepareDataRootPairs of\n\t\t\t{ok, Triplets} ->\n\t\t\t\tcase TXRoot == ExpectedTXRoot of\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\t{error, invalid_tx_root};\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\t{ok, Triplets}\n\t\t\t\tend;\n\t\t\t{error, _} = Error3 ->\n\t\t\t\tError3\n\t\tend,\n\tcase ValidateTXRoot of\n\t\t{ok, Triplets2} ->\n\t\t\tcase verify_tx_paths(Triplets2, TXRoot, BlockStart, BlockEnd, 0) of\n\t\t\t\tok ->\n\t\t\t\t\t{ok, {TXRoot, BlockSize, Entries}};\n\t\t\t\tError4 ->\n\t\t\t\t\tError4\n\t\t\tend;\n\t\tError5 ->\n\t\t\tError5\n\tend.\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\n\ninit([StoreID]) ->\n\t{RangeStart, RangeEnd} = ar_storage_module:get_range(StoreID),\n\tgen_server:cast(self(), sync),\n\t{ok, #state{ store_id = StoreID,\n\t\t\trange_start = RangeStart,\n\t\t\trange_end = RangeEnd,\n\t\t\tscan_cursor = RangeStart }}.\n\nhandle_cast(sync, State) ->\n\tcase ar_node:is_joined() 
of\n\t\tfalse ->\n\t\t\tar_util:cast_after(500, self(), sync),\n\t\t\t{noreply, State};\n\t\ttrue ->\n\t\t\t{ok, Config} = arweave_config:get_env(),\n\t\t\t{Delay, State2} =\n\t\t\t\tcase Config#config.enable_data_roots_syncing of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tsync_block_data_roots(State);\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\t{?DATA_ROOTS_SYNC_SCAN_INTERVAL_MS, State}\n\t\t\t\tend,\n\t\t\tar_util:cast_after(Delay, self(), sync),\n\t\t\t{noreply, State2}\n\tend;\n\nhandle_cast(Cast, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {cast, Cast}]),\n\t{noreply, State}.\n\nhandle_info(Message, State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {module, ?MODULE}, {message, Message}]),\n\t{noreply, State}.\n\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {request, Request}]),\n\t{reply, ignored, State}.\n\nterminate(_Reason, _State) ->\n\tok.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nsync_block_data_roots(#state{ store_id = StoreID, range_start = RangeStart,\n\trange_end = RangeEnd, scan_cursor = Cursor } = State) ->\n\tEnd = min(RangeEnd, ar_data_sync:get_disk_pool_threshold()),\n\t{ok, Cursor2} = sync_block_data_roots(StoreID, Cursor, End),\n\t{Delay, Cursor3} =\n\t\tcase Cursor2 >= End of\n\t\t\ttrue ->\n\t\t\t\t{?DATA_ROOTS_SYNC_SCAN_INTERVAL_MS, RangeStart};\n\t\t\tfalse ->\n\t\t\t\t{0, Cursor2}\n\t\tend,\n\t{Delay, State#state{ scan_cursor = Cursor3 }}.\n\nsync_block_data_roots(_StoreID, Cursor, RangeEnd) when Cursor >= RangeEnd ->\n\t{ok, Cursor};\nsync_block_data_roots(StoreID, Cursor, RangeEnd) ->\n\t{BlockStart, BlockEnd, TXRoot} = ar_block_index:get_block_bounds(Cursor),\n\tCursor2 =\n\t\tcase BlockStart >= RangeEnd of\n\t\t\ttrue ->\n\t\t\t\tRangeEnd;\n\t\t\tfalse ->\n\t\t\t\tcase ar_data_sync:are_data_roots_synced(BlockStart, BlockEnd, TXRoot) of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tBlockEnd;\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\tmaybe_fetch_and_store(BlockStart, BlockEnd),\n\t\t\t\t\t\tBlockEnd\n\t\t\t\tend\n\t\tend,\n\tsync_block_data_roots(StoreID, Cursor2, RangeEnd).\n\nmaybe_fetch_and_store(BlockStart, BlockEnd) ->\n\tPeers = ar_peers:get_peers(current),\n\tPeers2 = lists:filter(\n\t\tfun(Peer) ->\n\t\t\tar_peers:get_peer_release(Peer) >= ?DATA_ROOTS_SYNC_RELEASE_NUMBER\n\t\tend,\n\t\tPeers\n\t),\n\tcase fetch_data_roots_from_peers(Peers2, BlockStart) of\n\t\t{ok, {TXRoot, BlockSize, Entries}} ->\n\t\t\tBlockSize = BlockEnd - BlockStart,\n\t\t\tstore_data_roots(BlockStart, BlockEnd, TXRoot, Entries);\n\t\t_ ->\n\t\t\tok\n\tend.\n\nfetch_data_roots_from_peers([], _Offset) ->\n\t{error, not_found};\nfetch_data_roots_from_peers([Peer | Rest], Offset) ->\n\tcase ar_http_iface_client:get_data_roots(Peer, Offset) of\n\t\t{ok, _} = Reply ->\n\t\t\tReply;\n\t\t{error, Error} ->\n\t\t\t?LOG_DEBUG([{event, fetch_data_roots_from_peers_error},\n\t\t\t\t\t{peer, Peer},\n\t\t\t\t\t{offset, Offset},\n\t\t\t\t\t{error, io_lib:format(\"~p\", [Error])}]),\n\t\t\tfetch_data_roots_from_peers(Rest, Offset)\n\tend.\n\nprepare_data_root_pairs(Entries, BlockStart, BlockSize) ->\n\tResult = lists:foldr(\n\t\tfun\n\t\t\t(_, {error, _} = Error) ->\n\t\t\t\tError;\n\t\t\t({_DataRoot, 0, _TXStartOffset, _TXPath}, _Acc) ->\n\t\t\t\t{error, invalid_zero_tx_size};\n\t\t\t({DataRoot, TXSize, TXStartOffset, TXPath}, {ok, {Total, Acc}}) ->\n\t\t\t\tMerkleLabel = TXStartOffset + TXSize - BlockStart,\n\t\t\t\tcase 
MerkleLabel >= 0 of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tPaddedSize = get_padded_size(TXSize, BlockStart),\n\t\t\t\t\t\t{ok, {Total + PaddedSize, [{DataRoot, MerkleLabel, TXPath} | Acc]}};\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\t{error, invalid_entry_merkle_label}\n\t\t\t\tend\n\t\tend,\n\t\t{ok, {0,[]}},\n\t\tEntries\n\t),\n\tcase Result of\n\t\t{ok, {Total, Entries2}} ->\n\t\t\tcase Total == BlockSize of\n\t\t\t\ttrue ->\n\t\t\t\t\t{ok, Entries2};\n\t\t\t\tfalse ->\n\t\t\t\t\t{error, invalid_total_tx_size}\n\t\t\tend;\n\t\tError6 ->\n\t\t\tError6\n\tend.\n\nget_padded_size(TXSize, BlockStart) ->\n\tcase BlockStart >= ar_block:strict_data_split_threshold() of\n\t\ttrue ->\n\t\t\tar_poa:get_padded_offset(TXSize, 0);\n\t\tfalse ->\n\t\t\tTXSize\n\tend.\n\nverify_tx_paths([], _TXRoot, _BlockStart, _BlockEnd, _TXStartOffset) ->\n\tok;\nverify_tx_paths([Entry | Entries], TXRoot, BlockStart, BlockEnd, TXStartOffset) ->\n\t{DataRoot, TXEndOffset, TXPath} = Entry,\n\tBlockSize = BlockEnd - BlockStart,\n\tcase ar_merkle:validate_path(TXRoot, TXEndOffset - 1, BlockSize, TXPath) of\n\t\tfalse ->\n\t\t\t{error, invalid_tx_path};\n\t\t{DataRoot, TXStartOffset, TXEndOffset} ->\n\t\t\tPaddedEndOffset = get_padded_size(TXEndOffset, BlockStart),\n\t\t\tverify_tx_paths(Entries, TXRoot, BlockStart, BlockEnd, PaddedEndOffset);\n\t\t_ ->\n\t\t\t{error, invalid_tx_path}\n\tend.\n"
  },
  {
    "path": "apps/arweave/src/ar_data_root_sync_sup.erl",
    "content": "-module(ar_data_root_sync_sup).\n\n-behaviour(supervisor).\n\n-export([start_link/0]).\n-export([init/1]).\n\n%% internal\n-export([register_workers/0]).\n\n-include(\"ar_sup.hrl\").\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nstart_link() ->\n\tsupervisor:start_link({local, ?MODULE}, ?MODULE, []).\n\n%% ===================================================================\n%% Supervisor callbacks.\n%% ===================================================================\n\ninit([]) ->\n\tWorkers = register_workers(),\n\t{ok, {{one_for_one, 5, 10}, Workers}}.\n\nregister_workers() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tlists:map(\n\t\tfun(StorageModule) ->\n\t\t\tStoreID = ar_storage_module:id(StorageModule),\n\t\t\tName = ar_data_root_sync:name(StoreID),\n\t\t\t?CHILD_WITH_ARGS(ar_data_root_sync, worker, Name, [StoreID])\n\t\tend,\n\t\tConfig#config.storage_modules\n\t)."
  },
  {
    "path": "apps/arweave/src/ar_data_sync.erl",
    "content": "-module(ar_data_sync).\n\n-behaviour(gen_server).\n\n-export([name/1, start_link/2, register_workers/0, join/1, add_tip_block/2, add_block/2,\n\t\tinvalidate_bad_data_record/4, is_chunk_proof_ratio_attractive/3,\n\t\tadd_data_root_to_disk_pool/3, maybe_drop_data_root_from_disk_pool/3,\n\t\tget_chunk/2, get_chunk_data/2, get_chunk_proof/2, get_tx_data/1, get_tx_data/2,\n\t\tget_tx_offset/1, get_tx_offset_data_in_range/2, has_data_root/2,\n\t\trequest_tx_data_removal/3, request_data_removal/4, record_disk_pool_chunks_count/0,\n\t\trecord_chunk_cache_size_metric/0, is_chunk_cache_full/0, is_disk_space_sufficient/1,\n\t\tget_chunk_by_byte/2, advance_chunks_index_cursor/1,\n\t\tread_chunk/3, write_chunk/5, read_data_path/2,\n\t\tincrement_chunk_cache_size/0, decrement_chunk_cache_size/0,\n\t\tget_chunk_metadata_range/3, get_merkle_rebase_threshold/0,\n\t\tis_footprint_record_supported/3, get_data_roots_for_offset/1, are_data_roots_synced/3,\n\t\tget_disk_pool_threshold/0]).\n\n-export([add_chunk_to_disk_pool/5]).\n\n-export([debug_get_disk_pool_chunks/0]).\n\n%% For data-doctor tools\n-export([init_kv/1]).\n\n-export([init/1, handle_cast/2, handle_call/3, handle_info/2, terminate/2]).\n-export([enqueue_intervals/3, remove_expired_disk_pool_data_roots/0]).\n\n-include(\"ar.hrl\").\n-include(\"ar_sup.hrl\").\n-include(\"ar_consensus.hrl\").\n-include(\"ar_poa.hrl\").\n-include(\"ar_data_discovery.hrl\").\n-include(\"ar_data_sync.hrl\").\n-include(\"ar_sync_buckets.hrl\").\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-ifdef(AR_TEST).\n-include_lib(\"eunit/include/eunit.hrl\").\n-endif.\n\n%% The key for storing migration cursor in the migrations_index database.\n-define(FOOTPRINT_MIGRATION_CURSOR_KEY, <<\"footprint_migration_cursor\">>).\n\n-ifdef(AR_TEST).\n-define(COLLECT_SYNC_INTERVALS_FREQUENCY_MS, 1_000).\n-else.\n-define(COLLECT_SYNC_INTERVALS_FREQUENCY_MS, 10_000).\n-endif.\n\n-ifdef(AR_TEST).\n-define(DEVICE_LOCK_WAIT, 100).\n-else.\n-define(DEVICE_LOCK_WAIT, 5_000).\n-endif.\n\n%% The number of chunks to migrate per batch during footprint migration.\n-ifdef(AR_TEST).\n-define(FOOTPRINT_MIGRATION_BATCH_SIZE, 10).\n-else.\n-define(FOOTPRINT_MIGRATION_BATCH_SIZE, 200).\n-endif.\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nname(StoreID) ->\n\tlist_to_atom(\"ar_data_sync_\" ++ ar_storage_module:label(StoreID)).\n\nstart_link(Name, Args) ->\n\tgen_server:start_link({local, Name}, ?MODULE, Args, []).\n\n%% @doc Register the workers that will be monitored by ar_data_sync_sup.erl.\nregister_workers() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tStorageModuleWorkers = lists:map(\n\t\tfun(StorageModule) ->\n\t\t\tStoreID = ar_storage_module:id(StorageModule),\n\t\t\tStoreLabel = ar_storage_module:label(StoreID),\n\t\t\tName = list_to_atom(\"ar_data_sync_\" ++ StoreLabel),\n\t\t\t?CHILD_WITH_ARGS(ar_data_sync, worker, Name, [Name, {StoreID, none}])\n\t\tend,\n\t\tConfig#config.storage_modules\n\t),\n\tDefaultStorageModuleWorker = ?CHILD_WITH_ARGS(ar_data_sync, worker,\n\t\tar_data_sync_default, [ar_data_sync_default, {?DEFAULT_MODULE, none}]),\n\tRepackInPlaceWorkers = lists:map(\n\t\tfun({StorageModule, TargetPacking}) ->\n\t\t\tStoreID = ar_storage_module:id(StorageModule),\n\t\t\tName = ar_data_sync:name(StoreID),\n\t\t\t?CHILD_WITH_ARGS(ar_data_sync, worker, Name, [Name, {StoreID, 
TargetPacking}])\n\t\tend,\n\t\tConfig#config.repack_in_place_storage_modules\n\t),\n\tStorageModuleWorkers ++ [DefaultStorageModuleWorker] ++ RepackInPlaceWorkers.\n\n%% @doc Notify the server the node has joined the network on the given block index.\njoin(RecentBI) ->\n\tgen_server:cast(ar_data_sync_default, {join, RecentBI}).\n\n%% @doc Notify the server about the new tip block.\nadd_tip_block(BlockTXPairs, RecentBI) ->\n\tgen_server:cast(ar_data_sync_default, {add_tip_block, BlockTXPairs, RecentBI}).\n\ninvalidate_bad_data_record(AbsoluteEndOffset, ChunkSize, StoreID, Case) ->\n\tgen_server:cast(name(StoreID), {invalidate_bad_data_record,\n\t\t{AbsoluteEndOffset, ChunkSize, StoreID, Case}}).\n\n%% @doc The condition which is true if the chunk is too small compared to the proof.\n%% Small chunks make syncing slower and increase space amplification. A small chunk\n%% is accepted if it is the last chunk of the corresponding transaction - such chunks\n%% may be produced by ar_tx:chunk_binary/1, the legacy splitting method used to split\n%% v1 data or determine the data root of a v2 tx when data is uploaded via the data field.\n%% Due to the block limit we can only get up to 1k such chunks per block.\nis_chunk_proof_ratio_attractive(ChunkSize, TXSize, DataPath) ->\n\tDataPathSize = byte_size(DataPath),\n\tcase DataPathSize of\n\t\t0 ->\n\t\t\tfalse;\n\t\t_ ->\n\t\t\tcase catch ar_merkle:extract_note(DataPath) of\n\t\t\t\t{'EXIT', _} ->\n\t\t\t\t\tfalse;\n\t\t\t\tOffset ->\n\t\t\t\t\tOffset == TXSize orelse DataPathSize =< ChunkSize\n\t\t\tend\n\tend.\n\n%% @doc Store the given chunk if the proof is valid.\n%% Called when a chunk is pushed to the node via POST /chunk.\n%% The chunk is placed in the disk pool. The periodic process\n%% scanning the disk pool will later record it as synced.\n%% The item is removed from the disk pool when the chunk's offset\n%% drops below the disk pool threshold.\nadd_chunk_to_disk_pool(DataRoot, DataPath, Chunk, Offset, TXSize) ->\n\tDataRootIndex = {data_root_index, ?DEFAULT_MODULE},\n\t[{_, DiskPoolSize}] = ets:lookup(ar_data_sync_state, disk_pool_size),\n\tDiskPoolChunksIndex = {disk_pool_chunks_index, ?DEFAULT_MODULE},\n\tDataRootKey = << DataRoot/binary, TXSize:?OFFSET_KEY_BITSIZE >>,\n\tDataRootOffsetReply = get_data_root_offset(DataRootKey, ?DEFAULT_MODULE),\n\tDataRootInDiskPool = ets:lookup(ar_disk_pool_data_roots, DataRootKey),\n\tChunkSize = byte_size(Chunk),\n\t{ok, Config} = arweave_config:get_env(),\n\tDataRootLimit = Config#config.max_disk_pool_data_root_buffer_mb * ?MiB,\n\tDiskPoolLimit = Config#config.max_disk_pool_buffer_mb * ?MiB,\n\tCheckDiskPool =\n\t\tcase {DataRootOffsetReply, DataRootInDiskPool} of\n\t\t\t{not_found, []} ->\n\t\t\t\t?LOG_INFO([{event, failed_to_add_chunk_to_disk_pool},\n\t\t\t\t\t{reason, data_root_not_found}, {offset, Offset},\n\t\t\t\t\t{data_root, ar_util:encode(DataRoot)}]),\n\t\t\t\t{error, data_root_not_found};\n\t\t\t{not_found, [{_, {Size, Timestamp, TXIDSet}}]} ->\n\t\t\t\tcase Size + ChunkSize > DataRootLimit\n\t\t\t\t\t\torelse DiskPoolSize + ChunkSize > DiskPoolLimit of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\t?LOG_INFO([{event, failed_to_add_chunk_to_disk_pool},\n\t\t\t\t\t\t\t{reason, exceeds_disk_pool_size_limit1}, {offset, Offset},\n\t\t\t\t\t\t\t{data_root_size, Size}, {chunk_size, ChunkSize},\n\t\t\t\t\t\t\t{data_root_limit, DataRootLimit}, {disk_pool_size, DiskPoolSize},\n\t\t\t\t\t\t\t{disk_pool_limit, DiskPoolLimit}]),\n\t\t\t\t\t\t{error, exceeds_disk_pool_size_limit};\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\t{ok, 
{Size + ChunkSize, Timestamp, TXIDSet}}\n\t\t\t\tend;\n\t\t\t_ ->\n\t\t\t\tcase DiskPoolSize + ChunkSize > DiskPoolLimit of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\t?LOG_INFO([{event, failed_to_add_chunk_to_disk_pool},\n\t\t\t\t\t\t\t{reason, exceeds_disk_pool_size_limit2}, {offset, Offset},\n\t\t\t\t\t\t\t{chunk_size, ChunkSize}, {disk_pool_size, DiskPoolSize},\n\t\t\t\t\t\t\t{disk_pool_limit, DiskPoolLimit}]),\n\t\t\t\t\t\t{error, exceeds_disk_pool_size_limit};\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\tTimestamp =\n\t\t\t\t\t\t\tcase DataRootInDiskPool of\n\t\t\t\t\t\t\t\t[] ->\n\t\t\t\t\t\t\t\t\tos:system_time(microsecond);\n\t\t\t\t\t\t\t\t[{_, {_, Timestamp2, _}}] ->\n\t\t\t\t\t\t\t\t\tTimestamp2\n\t\t\t\t\t\t\tend,\n\t\t\t\t\t\t{ok, {ChunkSize, Timestamp, not_set}}\n\t\t\t\tend\n\t\tend,\n\tValidateProof =\n\t\tcase CheckDiskPool of\n\t\t\t{error, _} = Error ->\n\t\t\t\tError;\n\t\t\t{ok, DiskPoolDataRootValue} ->\n\t\t\t\tcase validate_data_path(DataRoot, Offset, TXSize, DataPath, Chunk) of\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\t?LOG_INFO([{event, failed_to_add_chunk_to_disk_pool},\n\t\t\t\t\t\t\t{reason, invalid_proof}, {offset, Offset}]),\n\t\t\t\t\t\t{error, invalid_proof};\n\t\t\t\t\t{true, PassesBase, PassesStrict, PassesRebase, EndOffset} ->\n\t\t\t\t\t\t{ok, {EndOffset, PassesBase, PassesStrict, PassesRebase,\n\t\t\t\t\t\t\t\tDiskPoolDataRootValue}}\n\t\t\t\tend\n\t\tend,\n\tCheckSynced =\n\t\tcase ValidateProof of\n\t\t\t{error, _} = Error2 ->\n\t\t\t\tError2;\n\t\t\t{ok, {EndOffset2, _PassesBase2, _PassesStrict2, _PassesRebase2,\n\t\t\t\t\t{_, Timestamp3, _}} = PassedState2} ->\n\t\t\t\tDataPathHash = crypto:hash(sha256, DataPath),\n\t\t\t\tDiskPoolChunkKey = << Timestamp3:256, DataPathHash/binary >>,\n\t\t\t\tcase ar_kv:get(DiskPoolChunksIndex, DiskPoolChunkKey) of\n\t\t\t\t\t{ok, _DiskPoolChunk} ->\n\t\t\t\t\t\t%% The chunk is already in disk pool.\n\t\t\t\t\t\t{synced_disk_pool, EndOffset2};\n\t\t\t\t\tnot_found ->\n\t\t\t\t\t\tcase DataRootOffsetReply of\n\t\t\t\t\t\t\tnot_found ->\n\t\t\t\t\t\t\t\t{ok, {DataPathHash, DiskPoolChunkKey, PassedState2}};\n\t\t\t\t\t\t\t{ok, {TXStartOffset, _}} ->\n\t\t\t\t\t\t\t\t{ok, Config} = arweave_config:get_env(),\n\t\t\t\t\t\t\t\tcase chunk_offsets_synced(DataRootIndex, DataRootKey,\n\t\t\t\t\t\t\t\t\t\t%% The same data may be uploaded several times.\n\t\t\t\t\t\t\t\t\t\t%% Here we only accept the chunk if any of the\n\t\t\t\t\t\t\t\t\t\t%% last configured number of instances of this\n\t\t\t\t\t\t\t\t\t\t%% data is not filled in yet.\n\t\t\t\t\t\t\t\t\t\tEndOffset2, TXStartOffset, Config#config.max_duplicate_data_roots) of\n\t\t\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\t\t\tsynced;\n\t\t\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t\t\t{ok, {DataPathHash, DiskPoolChunkKey, PassedState2}}\n\t\t\t\t\t\t\t\tend\n\t\t\t\t\t\tend;\n\t\t\t\t\t{error, Reason} ->\n\t\t\t\t\t\t?LOG_WARNING([{event, failed_to_read_chunk_from_disk_pool},\n\t\t\t\t\t\t\t\t{reason, io_lib:format(\"~p\", [Reason])},\n\t\t\t\t\t\t\t\t{data_path_hash, ar_util:encode(DataPathHash)},\n\t\t\t\t\t\t\t\t{data_root, ar_util:encode(DataRoot)},\n\t\t\t\t\t\t\t\t{relative_offset, EndOffset2}]),\n\t\t\t\t\t\t{error, failed_to_store_chunk}\n\t\t\t\tend\n\t\tend,\n\tcase CheckSynced of\n\t\tsynced ->\n\t\t\tok;\n\t\t{synced_disk_pool, EndOffset4} ->\n\t\t\tcase is_estimated_long_term_chunk(DataRootOffsetReply, EndOffset4) of\n\t\t\t\tfalse ->\n\t\t\t\t\ttemporary;\n\t\t\t\ttrue ->\n\t\t\t\t\tok\n\t\t\tend;\n\t\t{error, _} = Error4 ->\n\t\t\tError4;\n\t\t{ok, {DataPathHash2, DiskPoolChunkKey2, {EndOffset3, 
PassesBase3, PassesStrict3,\n\t\t\t\tPassesRebase3, DiskPoolDataRootValue2}}} ->\n\t\t\tChunkDataKey = get_chunk_data_key(DataPathHash2),\n\t\t\tcase put_chunk_data(ChunkDataKey, ?DEFAULT_MODULE, {Chunk, DataPath}) of\n\t\t\t\t{error, Reason2} ->\n\t\t\t\t\t?LOG_WARNING([{event, failed_to_store_chunk_in_disk_pool},\n\t\t\t\t\t\t{reason, io_lib:format(\"~p\", [Reason2])},\n\t\t\t\t\t\t{data_path_hash, ar_util:encode(DataPathHash2)},\n\t\t\t\t\t\t{data_root, ar_util:encode(DataRoot)},\n\t\t\t\t\t\t{relative_offset, EndOffset3}]),\n\t\t\t\t\t{error, failed_to_store_chunk};\n\t\t\t\tok ->\n\t\t\t\t\tDiskPoolChunkValue = term_to_binary({EndOffset3, ChunkSize, DataRoot,\n\t\t\t\t\t\t\tTXSize, ChunkDataKey, PassesBase3, PassesStrict3, PassesRebase3}),\n\t\t\t\t\tcase ar_kv:put(DiskPoolChunksIndex, DiskPoolChunkKey2,\n\t\t\t\t\t\t\tDiskPoolChunkValue) of\n\t\t\t\t\t\t{error, Reason3} ->\n\t\t\t\t\t\t\t?LOG_WARNING([{event, failed_to_record_chunk_in_disk_pool},\n\t\t\t\t\t\t\t\t{reason, io_lib:format(\"~p\", [Reason3])},\n\t\t\t\t\t\t\t\t{data_path_hash, ar_util:encode(DataPathHash2)},\n\t\t\t\t\t\t\t\t{data_root, ar_util:encode(DataRoot)},\n\t\t\t\t\t\t\t\t{relative_offset, EndOffset3}]),\n\t\t\t\t\t\t\t{error, failed_to_store_chunk};\n\t\t\t\t\t\tok ->\n\t\t\t\t\t\t\tets:insert(ar_disk_pool_data_roots,\n\t\t\t\t\t\t\t\t\t{DataRootKey, DiskPoolDataRootValue2}),\n\t\t\t\t\t\t\tets:update_counter(ar_data_sync_state, disk_pool_size,\n\t\t\t\t\t\t\t\t\t{2, ChunkSize}),\n\t\t\t\t\t\t\tprometheus_gauge:inc(pending_chunks_size, ChunkSize),\n\t\t\t\t\t\t\tcase is_estimated_long_term_chunk(DataRootOffsetReply, EndOffset3) of\n\t\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t\ttemporary;\n\t\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\t\tok\n\t\t\t\t\t\t\tend\n\t\t\t\t\tend\n\t\t\tend\n\tend.\n\n%% @doc Store the given value in the chunk data DB.\n-spec put_chunk_data(\n\tChunkDataKey :: binary(),\n\tStoreID :: term(),\n\tValue :: DataPath :: binary() | {Chunk :: binary(), DataPath :: binary()}) ->\n\t\tok | {error, term()}.\nput_chunk_data(ChunkDataKey, StoreID, Value) ->\n\tar_kv:put({chunk_data_db, StoreID}, ChunkDataKey, term_to_binary(Value)).\n\nget_chunk_data(ChunkDataKey, StoreID) ->\n\tar_kv:get({chunk_data_db, StoreID}, ChunkDataKey).\n\ndelete_chunk_data(ChunkDataKey, StoreID) ->\n\tar_kv:delete({chunk_data_db, StoreID}, ChunkDataKey).\n\n-spec put_chunk_metadata(\n\tAbsoluteEndOffset :: non_neg_integer(),\n\tStoreID :: term(),\n\tMetadata :: term()) -> ok | {error, term()}.\nput_chunk_metadata(AbsoluteEndOffset, StoreID,\n\t{_ChunkDataKey, _TXRoot, _DataRoot, _TXPath, _Offset, _ChunkSize} = Metadata) ->\n\tKey = << AbsoluteEndOffset:?OFFSET_KEY_BITSIZE >>,\n\tar_kv:put({chunks_index, StoreID}, Key, term_to_binary(Metadata)).\n\nget_chunk_metadata(AbsoluteEndOffset, StoreID) ->\n\tcase ar_kv:get({chunks_index, StoreID}, << AbsoluteEndOffset:?OFFSET_KEY_BITSIZE >>) of\n\t\t{ok, Value} ->\n\t\t\t{ok, binary_to_term(Value, [safe])};\n\t\tnot_found ->\n\t\t\tnot_found\n\tend.\n\ndelete_chunk_metadata(AbsoluteEndOffset, StoreID) ->\n\tar_kv:delete({chunks_index, StoreID}, << AbsoluteEndOffset:?OFFSET_KEY_BITSIZE >>).\n\n%% @doc Return {ok, Map} | {error, Error} where\n%% Map is\n%% AbsoluteEndOffset => {ChunkDataKey, TXRoot, DataRoot, TXPath, RelativeOffset, ChunkSize}\n%% map with all the chunk metadata found within the given range AbsoluteEndOffset >= Start,\n%% AbsoluteEndOffset =< End. 
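For instance, a single synced chunk in the range\n%% contributes one entry keyed by its absolute end offset and holding the\n%% metadata tuple above.\n%% 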
Return the empty map if no metadata is found.\nget_chunk_metadata_range(Start, End, StoreID) ->\n\tcase ar_kv:get_range({chunks_index, StoreID},\n\t\t\t<< Start:?OFFSET_KEY_BITSIZE >>, << End:?OFFSET_KEY_BITSIZE >>) of\n\t\t{ok, Map} ->\n\t\t\t{ok, maps:fold(\n\t\t\t\t\tfun(K, V, Acc) ->\n\t\t\t\t\t\t<< Offset:?OFFSET_KEY_BITSIZE >> = K,\n\t\t\t\t\t\tmaps:put(Offset, binary_to_term(V, [safe]), Acc)\n\t\t\t\t\tend,\n\t\t\t\t\t#{},\n\t\t\t\t\tMap)};\n\t\tError ->\n\t\t\tError\n\tend.\ndelete_chunk_metadata_range(Start, End, State) ->\n\t#sync_data_state{ chunks_index = ChunksIndex } = State,\n\tar_kv:delete_range(ChunksIndex, << (Start + 1):?OFFSET_KEY_BITSIZE >>,\n\t\t\t<< (End + 1):?OFFSET_KEY_BITSIZE >>).\n\n%% @doc Return true if we expect the chunk with the given data root index value and\n%% relative end offset to end up in one of the configured storage modules.\nis_estimated_long_term_chunk(DataRootOffsetReply, EndOffset) ->\n\tWeaveSize = ar_node:get_current_weave_size(),\n\tcase DataRootOffsetReply of\n\t\tnot_found ->\n\t\t\t%% A chunk from a pending transaction.\n\t\t\tis_offset_vicinity_covered(WeaveSize);\n\t\t{ok, {TXStartOffset, _}} ->\n\t\t\tWeaveSize = ar_node:get_current_weave_size(),\n\t\t\tSize = ar_node:get_recent_max_block_size(),\n\t\t\tAbsoluteEndOffset = TXStartOffset + EndOffset,\n\t\t\tcase AbsoluteEndOffset > WeaveSize - Size * 4 of\n\t\t\t\ttrue ->\n\t\t\t\t\t%% A relatively recent offset - do not expect this chunk to be\n\t\t\t\t\t%% persisted unless we have some storage modules configured for\n\t\t\t\t\t%% the space ahead (the data may be rearranged during after a reorg).\n\t\t\t\t\tis_offset_vicinity_covered(AbsoluteEndOffset);\n\t\t\t\tfalse ->\n\t\t\t\t\tar_storage_module:has_any(AbsoluteEndOffset)\n\t\t\tend\n\tend.\n\nis_offset_vicinity_covered(Offset) ->\n\tSize = ar_node:get_recent_max_block_size(),\n\tar_storage_module:has_range(max(0, Offset - Size * 2), Offset + Size * 2).\n\n%% @doc Notify the server about the new pending data root (added to mempool).\n%% The server may accept pending chunks and store them in the disk pool.\nadd_data_root_to_disk_pool(_, 0, _) ->\n\tok;\nadd_data_root_to_disk_pool(DataRoot, _, _) when byte_size(DataRoot) < 32 ->\n\tok;\nadd_data_root_to_disk_pool(DataRoot, TXSize, TXID) ->\n\tKey = << DataRoot:32/binary, TXSize:?OFFSET_KEY_BITSIZE >>,\n\tcase ets:lookup(ar_disk_pool_data_roots, Key) of\n\t\t[] ->\n\t\t\tets:insert(ar_disk_pool_data_roots, {Key,\n\t\t\t\t\t{0, os:system_time(microsecond), sets:from_list([TXID])}});\n\t\t[{_, {_, _, not_set}}] ->\n\t\t\tok;\n\t\t[{_, {Size, Timestamp, TXIDSet}}] ->\n\t\t\tets:insert(ar_disk_pool_data_roots,\n\t\t\t\t\t{Key, {Size, Timestamp, sets:add_element(TXID, TXIDSet)}})\n\tend,\n\tok.\n\n%% @doc Notify the server the given data root has been removed from the mempool.\nmaybe_drop_data_root_from_disk_pool(_, 0, _) ->\n\tok;\nmaybe_drop_data_root_from_disk_pool(DataRoot, _, _) when byte_size(DataRoot) < 32 ->\n\tok;\nmaybe_drop_data_root_from_disk_pool(DataRoot, TXSize, TXID) ->\n\tKey = << DataRoot:32/binary, TXSize:?OFFSET_KEY_BITSIZE >>,\n\tcase ets:lookup(ar_disk_pool_data_roots, Key) of\n\t\t[] ->\n\t\t\tok;\n\t\t[{_, {_, _, not_set}}] ->\n\t\t\tok;\n\t\t[{_, {Size, Timestamp, TXIDs}}] ->\n\t\t\tcase sets:subtract(TXIDs, sets:from_list([TXID])) of\n\t\t\t\tTXIDs ->\n\t\t\t\t\tok;\n\t\t\t\tTXIDs2 ->\n\t\t\t\t\tcase sets:size(TXIDs2) of\n\t\t\t\t\t\t0 ->\n\t\t\t\t\t\t\tets:delete(ar_disk_pool_data_roots, Key);\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\tets:insert(ar_disk_pool_data_roots, 
{Key,\n\t\t\t\t\t\t\t\t\t{Size, Timestamp, TXIDs2}})\n\t\t\t\t\tend\n\t\t\tend\n\tend,\n\tok.\n\n%% @doc Fetch the chunk corresponding to Offset. When Offset is less than or equal to\n%% the strict split data threshold, the chunk returned contains the byte with the given\n%% Offset (the indexing is 1-based). Otherwise, the chunk returned ends in the same 256 KiB\n%% bucket as Offset counting from the first 256 KiB after the strict split data threshold.\n%% The strict split data threshold is weave_size of the block preceding the fork 2.5 block.\n%%\n%% Options:\n%%\t_________________________________________________________________________________________\n%%\tpacking\t\t\t\t| required; spora_2_5 or unpacked or {spora_2_6, <Mining Address>}\n%%\t\t\t\t\t\t\tor {composite, <Mining Address>, <Difficulty>}\n%%\t\t\t\t\t\t\tor {replica_2_9, <Mining Address>}\n%%\t_________________________________________________________________________________________\n%%\tpack\t\t\t\t| if false and a packed chunk is requested but stored unpacked or\n%%\t\t\t\t\t\t| an unpacked chunk is requested but stored packed, return\n%%\t\t\t\t\t\t| {error, chunk_not_found} instead of packing/unpacking;\n%%\t\t\t\t\t\t| true by default;\n%%\t_________________________________________________________________________________________\n%%\tbucket_based_offset\t| does not play a role for the offsets before\n%%\t\t\t\t\t\t| strict_data_split_threshold (weave_size of the block preceding\n%%\t\t\t\t\t\t| the fork 2.5 block); if true, return the chunk which ends in\n%%\t\t\t\t\t\t| the same 256 KiB bucket starting from\n%%\t\t\t\t\t\t| strict_data_split_threshold where borders belong to the\n%%\t\t\t\t\t\t| buckets on the left; true by default.\nget_chunk(Offset, #{ packing := Packing } = Options) ->\n\tPack = maps:get(pack, Options, true),\n\tRequestOrigin = maps:get(origin, Options, unknown),\n\tIsRecorded =\n\t\tcase {RequestOrigin, Pack} of\n\t\t\t{miner, _} ->\n\t\t\t\tStorageModules = ar_storage_module:get_all(Offset),\n\t\t\t\tar_sync_record:is_recorded_any(Offset, ar_data_sync, StorageModules);\n\t\t\t{_, false} ->\n\t\t\t\tar_sync_record:is_recorded(Offset, {ar_data_sync, Packing});\n\t\t\t{_, true} ->\n\t\t\t\tar_sync_record:is_recorded(Offset, ar_data_sync)\n\t\tend,\n\tSeekOffset =\n\t\tcase maps:get(bucket_based_offset, Options, true) of\n\t\t\ttrue ->\n\t\t\t\tar_chunk_storage:get_chunk_seek_offset(Offset);\n\t\t\tfalse ->\n\t\t\t\tOffset\n\t\tend,\n\tcase IsRecorded of\n\t\t{{true, StoredPacking}, StoreID} ->\n\t\t\tget_chunk(Offset, SeekOffset, Pack, Packing, StoredPacking, StoreID,\n\t\t\t\tRequestOrigin);\n\t\t{true, StoreID} ->\n\t\t\tUnpackedReply = ar_sync_record:is_recorded(Offset, {ar_data_sync, unpacked}),\n\t\t\tlog_chunk_error(RequestOrigin, chunk_record_not_associated_with_packing,\n\t\t\t\t\t[{store_id, StoreID}, {seek_offset, SeekOffset},\n\t\t\t\t\t{is_recorded_unpacked, io_lib:format(\"~p\", [UnpackedReply])}]),\n\t\t\t{error, chunk_not_found};\n\t\tReply ->\n\t\t\tUnpackedReply = ar_sync_record:is_recorded(Offset, {ar_data_sync, unpacked}),\n\t\t\tModules = ar_storage_module:get_all(Offset),\n\t\t\tModuleIDs = [ar_storage_module:id(Module) || Module <- Modules],\n\t\t\tRootRecords = [ets:lookup(sync_records, {ar_data_sync, ID})\n\t\t\t\t\t|| ID <- ModuleIDs],\n\t\t\tcase RequestOrigin of\n\t\t\t\tminer ->\n\t\t\t\t\tlog_chunk_error(RequestOrigin, chunk_record_not_found,\n\t\t\t\t\t\t\t[{modules_covering_offset, ModuleIDs},\n\t\t\t\t\t\t\t{root_sync_records, RootRecords},\n\t\t\t\t\t\t\t{seek_offset, 
SeekOffset},\n\t\t\t\t\t\t\t{reply, io_lib:format(\"~p\", [Reply])},\n\t\t\t\t\t\t\t{is_recorded_unpacked, io_lib:format(\"~p\", [UnpackedReply])}]);\n\t\t\t\t_ ->\n\t\t\t\t\tok\n\t\t\tend,\n\t\t\t{error, chunk_not_found}\n\tend.\n\n%% @doc Fetch the merkle proofs for the chunk corresponding to Offset.\nget_chunk_proof(Offset, Options) ->\n\tRequestOrigin = maps:get(origin, Options, unknown),\n\tIsRecorded = ar_sync_record:is_recorded(Offset, ar_data_sync),\n\tSeekOffset =\n\t\tcase maps:get(bucket_based_offset, Options, true) of\n\t\t\ttrue ->\n\t\t\t\tar_chunk_storage:get_chunk_seek_offset(Offset);\n\t\t\tfalse ->\n\t\t\t\tOffset\n\t\tend,\n\tcase IsRecorded of\n\t\t{{true, StoredPacking}, StoreID} ->\n\t\t\tget_chunk_proof(Offset, SeekOffset, StoredPacking, StoreID, RequestOrigin);\n\t\t_ ->\n\t\t\t{error, chunk_not_found}\n\tend.\n\n%% @doc Fetch the transaction data. Return {error, tx_data_too_big} if\n%% the size is bigger than ?MAX_SERVED_TX_DATA_SIZE, unless the limitation\n%% is disabled in the configuration.\nget_tx_data(TXID) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tSizeLimit =\n\t\tcase lists:member(serve_tx_data_without_limits, Config#config.enable) of\n\t\t\ttrue ->\n\t\t\t\tinfinity;\n\t\t\tfalse ->\n\t\t\t\t?MAX_SERVED_TX_DATA_SIZE\n\t\tend,\n\tget_tx_data(TXID, SizeLimit).\n\n%% @doc Fetch the transaction data. Return {error, tx_data_too_big} if\n%% the size is bigger than SizeLimit.\nget_tx_data(TXID, SizeLimit) ->\n\tcase get_tx_offset(TXID) of\n\t\t{error, not_found} ->\n\t\t\t{error, not_found};\n\t\t{error, failed_to_read_tx_offset} ->\n\t\t\t{error, failed_to_read_tx_data};\n\t\t{ok, {Offset, Size}} ->\n\t\t\tcase Size > SizeLimit of\n\t\t\t\ttrue ->\n\t\t\t\t\t{error, tx_data_too_big};\n\t\t\t\tfalse ->\n\t\t\t\t\t{ok, Config} = arweave_config:get_env(),\n\t\t\t\t\tPack = lists:member(pack_served_chunks, Config#config.enable),\n\t\t\t\t\tget_tx_data(Offset - Size, Offset, [], Pack)\n\t\t\tend\n\tend.\n\n%% @doc Return the global end offset and size for the given transaction.\nget_tx_offset(TXID) ->\n\tTXIndex = {tx_index, ?DEFAULT_MODULE},\n\tget_tx_offset(TXIndex, TXID).\n\n%% @doc Return {ok, [{TXID, AbsoluteStartOffset, AbsoluteEndOffset}, ...]}\n%% where AbsoluteStartOffset, AbsoluteEndOffset are transaction borders\n%% (not clipped by the given range) for all TXIDs intersecting the given range.\nget_tx_offset_data_in_range(Start, End) ->\n\tTXIndex = {tx_index, ?DEFAULT_MODULE},\n\tTXOffsetIndex = {tx_offset_index, ?DEFAULT_MODULE},\n\tget_tx_offset_data_in_range(TXOffsetIndex, TXIndex, Start, End).\n\n%% @doc Return true if the given {DataRoot, DataSize} is in the mempool\n%% or in the index.\nhas_data_root(DataRoot, DataSize) ->\n\tDataRootKey = << DataRoot:32/binary, DataSize:256 >>,\n\tcase ets:member(ar_disk_pool_data_roots, DataRootKey) of\n\t\ttrue ->\n\t\t\ttrue;\n\t\tfalse ->\n\t\t\tcase get_data_root_offset(DataRootKey, ?DEFAULT_MODULE) of\n\t\t\t\t{ok, _} ->\n\t\t\t\t\ttrue;\n\t\t\t\t_ ->\n\t\t\t\t\tfalse\n\t\t\tend\n\tend.\n\n%% @doc Record the metadata of the given block.\nadd_block(B, SizeTaggedTXs) ->\n\tgen_server:call(ar_data_sync_default, {add_block, B, SizeTaggedTXs}, ?DEFAULT_CALL_TIMEOUT).\n\n%% @doc Request the removal of the transaction data.\nrequest_tx_data_removal(TXID, Ref, ReplyTo) ->\n\tTXIndex = {tx_index, ?DEFAULT_MODULE},\n\tcase ar_kv:get(TXIndex, TXID) of\n\t\t{ok, Value} ->\n\t\t\t{End, Size} = binary_to_term(Value, [safe]),\n\t\t\tremove_range(End - Size, End, Ref, ReplyTo);\n\t\tnot_found 
->\n\t\t\t?LOG_WARNING([{event, tx_offset_not_found}, {tx, ar_util:encode(TXID)}]),\n\t\t\tok;\n\t\t{error, Reason} ->\n\t\t\t?LOG_ERROR([{event, failed_to_fetch_blacklisted_tx_offset},\n\t\t\t\t\t{tx, ar_util:encode(TXID)}, {reason, Reason}]),\n\t\t\tok\n\tend.\n\n%% @doc Request the removal of the given byte range.\nrequest_data_removal(Start, End, Ref, ReplyTo) ->\n\tremove_range(Start, End, Ref, ReplyTo).\n\n%% @doc Return true if the in-memory data chunk cache is full. Return not_initialized\n%% if there is no information yet.\nis_chunk_cache_full() ->\n\tcase ets:lookup(ar_data_sync_state, chunk_cache_size_limit) of\n\t\t[{_, Limit}] ->\n\t\t\tcase ets:lookup(ar_data_sync_state, chunk_cache_size) of\n\t\t\t\t[{_, Size}] when Size > Limit ->\n\t\t\t\t\ttrue;\n\t\t\t\t_ ->\n\t\t\t\t\tfalse\n\t\t\tend;\n\t\t_ ->\n\t\t\tnot_initialized\n\tend.\n\n-ifdef(AR_TEST).\nis_disk_space_sufficient(StoreID) ->\n\t%% When testing, disk space is always sufficient *unless* the storage module has not\n\t%% been properly initialized.\n\tcase is_disk_space_sufficient2(StoreID) of\n\t\tnot_initialized ->\n\t\t\tnot_initialized;\n\t\t_ ->\n\t\t\ttrue\n\tend.\n-else.\n%% @doc Return true if we have sufficient disk space to write new data for the\n%% given StoreID. Return not_initialized if there is no information yet.\nis_disk_space_sufficient(StoreID) ->\n\tis_disk_space_sufficient2(StoreID).\n-endif.\n\n%% @doc Return true if we have sufficient disk space to write new data for the\n%% given StoreID. Return not_initialized if there is no information yet.\nis_disk_space_sufficient2(StoreID) ->\n\tcase ets:lookup(ar_data_sync_state, {is_disk_space_sufficient, StoreID}) of\n\t\t[{_, false}] ->\n\t\t\tfalse;\n\t\t[{_, true}] ->\n\t\t\ttrue;\n\t\t_ ->\n\t\t\tnot_initialized\n\tend.\n\nget_chunk_by_byte(Byte, StoreID) ->\n\tResult = ar_kv:get_next_by_prefix({chunks_index, StoreID}, ?OFFSET_KEY_PREFIX_BITSIZE,\n\t\t?OFFSET_KEY_BITSIZE, << Byte:?OFFSET_KEY_BITSIZE >>),\n\tcase Result of\n\t\t{error, Reason} ->\n\t\t\t{error, Reason};\n\t\t{ok, Key, Metadata} ->\n\t\t\t<< AbsoluteEndOffset:?OFFSET_KEY_BITSIZE >> = Key,\n\t\t\t{\n\t\t\t\tChunkDataKey, TXRoot, DataRoot, TXPath, RelativeOffset, ChunkSize\n\t\t\t} = binary_to_term(Metadata, [safe]),\n\t\t\tFullMetaData = {AbsoluteEndOffset, ChunkDataKey, TXRoot, DataRoot, TXPath,\n\t\t\t\tRelativeOffset, ChunkSize},\n\t\t\t{ok, Key, FullMetaData}\n\tend.\n\n%% @doc: handle situation where get_chunks_by_byte returns invalid_iterator, so we can't\n%% use the chunk's end offset to advance the cursor.\n%%\n%% get_chunk_by_byte looks for a key with the same prefix or the next prefix.\n%% Therefore, if there is no such key, it does not make sense to look for any\n%% key smaller than the prefix + 2 in the next iteration.\nadvance_chunks_index_cursor(Cursor) ->\n\tPrefixSpaceSize = trunc(math:pow(2, ?OFFSET_KEY_BITSIZE - ?OFFSET_KEY_PREFIX_BITSIZE)),\n\t((Cursor div PrefixSpaceSize) + 2) * PrefixSpaceSize.\n\nread_chunk(Offset, ChunkDataKey, StoreID) ->\n\tcase get_chunk_data(ChunkDataKey, StoreID) of\n\t\tnot_found ->\n\t\t\tnot_found;\n\t\t{ok, Value} ->\n\t\t\tcase binary_to_term(Value, [safe]) of\n\t\t\t\t{Chunk, DataPath} ->\n\t\t\t\t\t{ok, {Chunk, DataPath}};\n\t\t\t\tDataPath ->\n\t\t\t\t\tcase ar_chunk_storage:get(Offset - 1, StoreID) of\n\t\t\t\t\t\tnot_found ->\n\t\t\t\t\t\t\tnot_found;\n\t\t\t\t\t\t{_EndOffset, Chunk} ->\n\t\t\t\t\t\t\t{ok, {Chunk, DataPath}}\n\t\t\t\t\tend\n\t\t\tend;\n\t\tError ->\n\t\t\tError\n\tend.\n\nwrite_chunk(Offset, ChunkMetadata, Chunk, 
Packing, StoreID) ->\n\t#chunk_metadata{\n\t\tchunk_data_key = ChunkDataKey,\n\t\tchunk_size = ChunkSize,\n\t\tdata_path = DataPath\n\t} = ChunkMetadata,\n\twrite_chunk(Offset, ChunkDataKey, Chunk, ChunkSize, DataPath, Packing, StoreID).\n\nread_data_path(ChunkDataKey, StoreID) ->\n\tread_data_path(undefined, ChunkDataKey, StoreID).\n%% The first argument is introduced to match the read_chunk/3 signature.\nread_data_path(_Offset, ChunkDataKey, StoreID) ->\n\tcase get_chunk_data(ChunkDataKey, StoreID) of\n\t\tnot_found ->\n\t\t\tnot_found;\n\t\t{ok, Value} ->\n\t\t\tcase binary_to_term(Value, [safe]) of\n\t\t\t\t{_Chunk, DataPath} ->\n\t\t\t\t\t{ok, DataPath};\n\t\t\t\tDataPath ->\n\t\t\t\t\t{ok, DataPath}\n\t\t\tend;\n\t\tError ->\n\t\t\tError\n\tend.\n\ndecrement_chunk_cache_size() ->\n\tets:update_counter(ar_data_sync_state, chunk_cache_size, {2, -1}, {chunk_cache_size, 0}).\n\nincrement_chunk_cache_size() ->\n\tets:update_counter(ar_data_sync_state, chunk_cache_size, {2, 1}, {chunk_cache_size, 1}).\n\ndebug_get_disk_pool_chunks() ->\n\tdebug_get_disk_pool_chunks(first).\n\ndebug_get_disk_pool_chunks(Cursor) ->\n\tcase ar_kv:get_next({disk_pool_chunks_index, ?DEFAULT_MODULE}, Cursor) of\n\t\tnone ->\n\t\t\t[];\n\t\t{ok, K, V} ->\n\t\t\tK2 = << K/binary, <<\"a\">>/binary >>,\n\t\t\t[{K, V} | debug_get_disk_pool_chunks(K2)]\n\tend.\n\n%% @doc Check if the footprint record should be updated for the given chunk.\n%% We maintain the footprint record for all chunks so that footprint-based syncing\n%% correctly identifies already-synced chunks. Note: in the early weave (before the\n%% strict data split threshold), multiple small chunks may map to the same bucket,\n%% making the footprint record imprecise. This is acceptable since footprint syncing\n%% is primarily for replica 2.9 data and small chunks can use \"normal\" syncing.\nis_footprint_record_supported(_AbsoluteOffset, _ChunkSize, _Packing) ->\n\ttrue.\n\n%% @doc Return the disk pool threshold, a byte offset where\n%% the disk pool begins - the data above this offset is considered\n%% to belong to the disk pool. 
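(The value is recomputed on join and on every new tip\n%% block and cached in the ar_data_sync_state ETS table.) 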
For example, we do not store the\n%% disk pool data in the storage modules due to the risk of orphans.\nget_disk_pool_threshold() ->\n\tcase ets:lookup(ar_data_sync_state, disk_pool_threshold) of\n\t\t[] ->\n\t\t\t0;\n\t\t[{_, DiskPoolThreshold}] ->\n\t\t\tDiskPoolThreshold\n\tend.\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\n\ninit({?DEFAULT_MODULE = StoreID, _}) ->\n\t%% Trap exit to avoid corrupting any open files on quit..\n\tprocess_flag(trap_exit, true),\n\t{ok, Config} = arweave_config:get_env(),\n\t[ok, ok] = ar_events:subscribe([node_state, disksup]),\n\tState = init_kv(StoreID),\n\tmove_disk_pool_index(State),\n\tmove_data_root_index(State),\n\t{ok, _} = ar_timer:apply_interval(\n\t\t?RECORD_DISK_POOL_CHUNKS_COUNT_FREQUENCY_MS,\n\t\tar_data_sync,\n\t\trecord_disk_pool_chunks_count,\n\t\t[],\n\t\t#{ skip_on_shutdown => false }\n\t),\n\n\tStateMap = read_data_sync_state(),\n\tCurrentBI = maps:get(block_index, StateMap),\n\t%% Maintain a map of pending, recently uploaded, and orphaned data roots.\n\t%% << DataRoot:32/binary, TXSize:256 >> => {Size, Timestamp, TXIDSet}.\n\t%%\n\t%% Unconfirmed chunks can be accepted only after their data roots end up in this set.\n\t%% New chunks for these data roots are accepted until the corresponding size reaches\n\t%% #config.max_disk_pool_data_root_buffer_mb or the total size of added pending and\n\t%% seeded chunks reaches #config.max_disk_pool_buffer_mb. When a data root is orphaned,\n\t%% its timestamp is refreshed so that the chunks have chance to be reincluded later.\n\t%% After a data root expires, the corresponding chunks are removed from\n\t%% disk_pool_chunks_index and if they do not belong to any storage module - from storage.\n\t%% TXIDSet keeps track of pending transaction identifiers - if all pending transactions\n\t%% with the << DataRoot:32/binary, TXSize:256 >> key are dropped from the mempool,\n\t%% the corresponding entry is removed from DiskPoolDataRoots. 
When a data root is\n\t%% confirmed, TXIDSet is set to not_set - from this point on, the key is only dropped\n\t%% after expiration.\n\tDiskPoolDataRoots = maps:get(disk_pool_data_roots, StateMap),\n\trecalculate_disk_pool_size(DiskPoolDataRoots, State),\n\tDiskPoolThreshold = maps:get(disk_pool_threshold, StateMap),\n\tets:insert(ar_data_sync_state, {disk_pool_threshold, DiskPoolThreshold}),\n\tState2 = State#sync_data_state{\n\t\tblock_index = CurrentBI,\n\t\tweave_size = maps:get(weave_size, StateMap),\n\t\tdisk_pool_cursor = first,\n\t\tdisk_pool_threshold = DiskPoolThreshold,\n\t\tstore_id = StoreID,\n\t\tsync_status = init_sync_status(StoreID)\n\t},\n\t?LOG_INFO([{event, ar_data_sync_start}, {store_id, StoreID},\n\t\t{range_start, State2#sync_data_state.range_start},\n\t\t{range_end, State2#sync_data_state.range_end}]),\n\t{ok, _} = ar_timer:apply_interval(\n\t\t?REMOVE_EXPIRED_DATA_ROOTS_FREQUENCY_MS,\n\t\t?MODULE,\n\t\tremove_expired_disk_pool_data_roots,\n\t\t[],\n\t\t#{ skip_on_shutdown => false }\n\t),\n\tlists:foreach(\n\t\tfun(_DiskPoolJobNumber) ->\n\t\t\tgen_server:cast(self(), process_disk_pool_item)\n\t\tend,\n\t\tlists:seq(1, Config#config.disk_pool_jobs)\n\t),\n\tgen_server:cast(self(), store_sync_state),\n\t{ok, Config} = arweave_config:get_env(),\n\tLimit =\n\t\tcase Config#config.data_cache_size_limit of\n\t\t\tundefined ->\n\t\t\t\tFree = proplists:get_value(free_memory, memsup:get_system_memory_data(),\n\t\t\t\t\t\t2000000000),\n\t\t\t\tLimit2 = min(1000, erlang:ceil(Free * 0.9 / 3 / 262144)),\n\t\t\t\tLimit3 = ar_util:ceil_int(Limit2, 100),\n\t\t\t\tLimit3;\n\t\t\tLimit2 ->\n\t\t\t\tLimit2\n\t\tend,\n\tar:console(\"~nSetting the data chunk cache size limit to ~B chunks.~n\", [Limit]),\n\tets:insert(ar_data_sync_state, {chunk_cache_size_limit, Limit}),\n\tets:insert(ar_data_sync_state, {chunk_cache_size, 0}),\n\t{ok, _} = ar_timer:apply_interval(\n\t\t200,\n\t\t?MODULE,\n\t\trecord_chunk_cache_size_metric,\n\t\t[],\n\t\t#{ skip_on_shutdown => false }\n\t),\n\tgen_server:cast(self(), process_store_chunk_queue),\n\t{ok, State2};\ninit({StoreID, RepackInPlacePacking}) ->\n\t?LOG_INFO([{event, ar_data_sync_start}, {store_id, StoreID}]),\n\t%% Trap exit to avoid corrupting any open files on quit..\n\tprocess_flag(trap_exit, true),\n\t[ok, ok] = ar_events:subscribe([node_state, disksup]),\n\n\tState = init_kv(StoreID),\n\n\t{RangeStart, RangeEnd} = ar_storage_module:get_range(StoreID),\n\tRangeStart2 = max(0, ar_block:get_chunk_padded_offset(RangeStart) - ?DATA_CHUNK_SIZE),\n\tRangeEnd2 = ar_block:get_chunk_padded_offset(RangeEnd),\n\tState2 = State#sync_data_state{\n\t\tstore_id = StoreID,\n\t\trange_start = RangeStart2,\n\t\trange_end = RangeEnd2,\n\t\t%% weave_size and disk_pool_threshold will be set on join\n\t\tweave_size = 0,\n\t\tdisk_pool_threshold = 0\n\t},\n\n\tcase RepackInPlacePacking of\n\t\tnone ->\n\t\t\tgen_server:cast(self(), process_store_chunk_queue),\n\t\t\tState3 = State2#sync_data_state{\n\t\t\t\tsync_status = init_sync_status(StoreID)\n\t\t\t},\n\t\t\t%% Start syncing immediately. 
For replica_2_9 packing, chunks will be\n\t\t\t%% written as unpacked_padded first and upgraded once entropy arrives.\n\t\t\tgen_server:cast(self(), sync_intervals),\n\t\t\tgen_server:cast(self(), sync_data),\n\t\t\tmaybe_run_footprint_record_initialization(State3),\n\t\t\t{ok, State3};\n\t\t_ ->\n\t\t\tState3 = State2#sync_data_state{\n\t\t\t\tsync_status = off\n\t\t\t},\n\t\t\tar_device_lock:set_device_lock_metric(StoreID, sync, off),\n\t\t\t{ok, State3}\n\tend.\n\nhandle_cast({move_data_root_index, Cursor, N}, State) ->\n\tmove_data_root_index(Cursor, N, State),\n\t{noreply, State};\n\nhandle_cast({store_data_roots, BlockStart, BlockEnd, TXRoot, Entries}, State) ->\n\t{_, State2} = handle_store_data_roots(BlockStart, BlockEnd, TXRoot, Entries, State),\n\t{noreply, State2};\n\nhandle_cast(process_store_chunk_queue, State) ->\n\tar_util:cast_after(200, self(), process_store_chunk_queue),\n\t{noreply, process_store_chunk_queue(State)};\n\nhandle_cast({initialize_footprint_record, Cursor, Packing}, State) ->\n\tState2 = initialize_footprint_record(Cursor, Packing, State),\n\t{noreply, State2};\n\nhandle_cast({join, RecentBI}, State) ->\n\t#sync_data_state{ block_index = CurrentBI, store_id = StoreID } = State,\n\t[{_, WeaveSize, _} | _] = RecentBI,\n\tcase {CurrentBI, ar_block_index:get_intersection(CurrentBI)} of\n\t\t{[], _} ->\n\t\t\tok;\n\t\t{_, no_intersection} ->\n\t\t\tio:format(\"~nWARNING: the stored block index of the data syncing module \"\n\t\t\t\t\t\"has no intersection with the new one \"\n\t\t\t\t\t\"in the most recent blocks. If you have just started a new weave using \"\n\t\t\t\t\t\"the init option, restart from the local state \"\n\t\t\t\t\t\"or specify some peers.~n~n\"),\n\t\t\tinit:stop(1);\n\t\t{_, {_H, Offset, _TXRoot}} ->\n\t\t\tPreviousWeaveSize = element(2, hd(CurrentBI)),\n\t\t\t{ok, OrphanedDataRoots} = remove_orphaned_data(State, Offset, PreviousWeaveSize),\n\t\t\t{ok, Config} = arweave_config:get_env(),\n\t\t\t[gen_server:cast(name(ar_storage_module:id(Module)),\n\t\t\t\t\t{cut, Offset}) || Module <- Config#config.storage_modules],\n\t\t\tok = ar_chunk_storage:cut(Offset, StoreID),\n\t\t\tok = ar_sync_record:cut(Offset, ar_data_sync, StoreID),\n\t\t\tar_events:send(sync_record, {global_cut, Offset}),\n\t\t\treset_orphaned_data_roots_disk_pool_timestamps(OrphanedDataRoots)\n\tend,\n\tBI = ar_block_index:get_list_by_hash(element(1, lists:last(RecentBI))),\n\trepair_data_root_offset_index(BI, State),\n\tDiskPoolThreshold = get_disk_pool_threshold(RecentBI),\n\tets:insert(ar_data_sync_state, {disk_pool_threshold, DiskPoolThreshold}),\n\tState2 = store_sync_state(\n\t\tState#sync_data_state{\n\t\t\tweave_size = WeaveSize,\n\t\t\tblock_index = RecentBI,\n\t\t\tdisk_pool_threshold = DiskPoolThreshold\n\t\t}),\n\t{noreply, State2};\n\nhandle_cast({cut, Start}, #sync_data_state{ store_id = StoreID,\n\t\trange_end = End } = State) ->\n\tcase ar_sync_record:get_next_synced_interval(Start, End, ar_data_sync, StoreID) of\n\t\tnot_found ->\n\t\t\tok;\n\t\t_Interval ->\n\t\t\t{ok, Config} = arweave_config:get_env(),\n\t\t\tcase lists:member(remove_orphaned_storage_module_data, Config#config.enable) of\n\t\t\t\tfalse ->\n\t\t\t\t\tar:console(\"The storage module ~s contains some orphaned data above the \"\n\t\t\t\t\t\t\t\"weave offset ~B. 
Make sure you are joining the network through \"\n\t\t\t\t\t\t\t\"trusted in-sync peers and restart with \"\n\t\t\t\t\t\t\t\"`enable remove_orphaned_storage_module_data`.~n\",\n\t\t\t\t\t\t\t[StoreID, Start]),\n\t\t\t\t\ttimer:sleep(2000),\n\t\t\t\t\tinit:stop(1);\n\t\t\t\ttrue ->\n\t\t\t\t\tok = delete_chunk_metadata_range(Start, End, State),\n\t\t\t\t\tok = ar_chunk_storage:cut(Start, StoreID),\n\t\t\t\t\tok = ar_sync_record:cut(Start, ar_data_sync, StoreID)\n\t\t\tend\n\tend,\n\t{noreply, State};\n\nhandle_cast({add_tip_block, BlockTXPairs, BI}, State) ->\n\t#sync_data_state{ store_id = StoreID, weave_size = CurrentWeaveSize,\n\t\t\tblock_index = CurrentBI } = State,\n\t{BlockStartOffset, Blocks} = pick_missing_blocks(CurrentBI, BlockTXPairs),\n\t{ok, OrphanedDataRoots} = remove_orphaned_data(State, BlockStartOffset, CurrentWeaveSize),\n\t{WeaveSize, AddedDataRoots} = lists:foldl(\n\t\tfun ({_BH, []}, Acc) ->\n\t\t\t\tAcc;\n\t\t\t({_BH, SizeTaggedTXs}, {StartOffset, CurrentAddedDataRoots}) ->\n\t\t\t\t{ok, DataRoots} = add_block_data_roots(SizeTaggedTXs, StartOffset, StoreID),\n\t\t\t\tok = update_tx_index(SizeTaggedTXs, StartOffset, StoreID),\n\t\t\t\t{StartOffset + element(2, lists:last(SizeTaggedTXs)),\n\t\t\t\t\tsets:union(CurrentAddedDataRoots, DataRoots)}\n\t\tend,\n\t\t{BlockStartOffset, sets:new()},\n\t\tBlocks\n\t),\n\tadd_block_data_roots_to_disk_pool(AddedDataRoots),\n\treset_orphaned_data_roots_disk_pool_timestamps(OrphanedDataRoots),\n\tok = ar_chunk_storage:cut(BlockStartOffset, StoreID),\n\tok = ar_sync_record:cut(BlockStartOffset, ar_data_sync, StoreID),\n\tar_events:send(sync_record, {global_cut, BlockStartOffset}),\n\tDiskPoolThreshold = get_disk_pool_threshold(BI),\n\tets:insert(ar_data_sync_state, {disk_pool_threshold, DiskPoolThreshold}),\n\tState2 = store_sync_state(\n\t\tState#sync_data_state{\n\t\t\tweave_size = WeaveSize,\n\t\t\tblock_index = BI,\n\t\t\tdisk_pool_threshold = DiskPoolThreshold\n\t\t}),\n\t{noreply, State2};\n\nhandle_cast(sync_data, State) ->\n\t#sync_data_state{ store_id = StoreID } = State,\n\tStatus = ar_device_lock:acquire_lock(sync, StoreID, State#sync_data_state.sync_status),\n\tState2 = State#sync_data_state{ sync_status = Status },\n\tState3 = case Status of\n\t\tactive ->\n\t\t\tdo_sync_data(State2);\n\t\tpaused ->\n\t\t\tar_util:cast_after(?DEVICE_LOCK_WAIT, self(), sync_data),\n\t\t\tState2;\n\t\t_ ->\n\t\t\tState2\n\tend,\n\t{noreply, State3};\n\nhandle_cast(sync_data2, State) ->\n\t#sync_data_state{ store_id = StoreID } = State,\n\tStatus = ar_device_lock:acquire_lock(sync, StoreID, State#sync_data_state.sync_status),\n\tState2 = State#sync_data_state{ sync_status = Status },\n\tState3 = case Status of\n\t\tactive ->\n\t\t\tdo_sync_data2(State2);\n\t\tpaused ->\n\t\t\tar_util:cast_after(?DEVICE_LOCK_WAIT, self(), sync_data2),\n\t\t\tState2;\n\t\t_ ->\n\t\t\tState2\n\tend,\n\t{noreply, State3};\n\n%% Schedule syncing of the unsynced intervals. Choose a peer for each of the intervals.\n%% There are two message payloads:\n%% 1. collect_peer_intervals\n%%    Start the collection process over the full storage_module range.\n%% 2. {collect_peer_intervals, Start, End}\n%%    Collect intervals for the specified range. This interface is used to pick up where\n%%    we left off after a pause. There are 2 main conditions that can trigger a pause:\n%%    a. Insufficient disk space. Will pause until disk space frees up\n%%    b. Sync queue is busy. 
Will pause until previously queued intervals are scheduled to the\n%%       ar_data_sync_coordinator for syncing.\nhandle_cast(collect_peer_intervals, State) ->\n\t#sync_data_state{ range_start = Start, range_end = End,\n\t\t\tdisk_pool_threshold = DiskPoolThreshold,\n\t\t\tsync_phase = SyncPhase,\n\t\t\tmigrations_index = MI,\n\t\t\tstore_id = StoreID,\n\t\t\tsync_intervals_queue = Q } = State,\n\tCheckIsJoined =\n\t\tcase ar_node:is_joined() of\n\t\t\tfalse ->\n\t\t\t\tar_util:cast_after(1000, self(), collect_peer_intervals),\n\t\t\t\tfalse;\n\t\t\ttrue ->\n\t\t\t\ttrue\n\t\tend,\n\tIsFootprintRecordMigrated =\n\t\tcase ar_kv:get(MI, ?FOOTPRINT_MIGRATION_CURSOR_KEY) of\n\t\t\t{ok, <<\"complete\">>} ->\n\t\t\t\ttrue;\n\t\t\t_ ->\n\t\t\t\tar_util:cast_after(5_000, self(), collect_peer_intervals),\n\t\t\t\tfalse\n\t\tend,\n\tIntersectsDiskPool =\n\t\tcase CheckIsJoined andalso IsFootprintRecordMigrated of\n\t\t\tfalse ->\n\t\t\t\tnoop;\n\t\t\ttrue ->\n\t\t\t\tEnd > DiskPoolThreshold\n\t\tend,\n\t%% Alternate between \"normal\" and footprint-based syncing.\n\t%% Footprint-based syncing downloads replica 2.9 chunks footprint by footprint\n\t%% to avoid redundant entropy generations for unpacking. \"Normal\" syncing ignores\n\t%% replica 2.9 data and mostly downloads unpacked data from peers storing it.\n\tSyncPhase2 =\n\t\tcase SyncPhase of\n\t\t\tundefined ->\n\t\t\t\t%% Start with normal syncing.\n\t\t\t\tnormal;\n\t\t\tnormal ->\n\t\t\t\tfootprint;\n\t\t\tfootprint ->\n\t\t\t\tnormal\n\t\tend,\n\t?LOG_DEBUG([{event, collect_peer_intervals_start},\n\t\t{function, collect_peer_intervals},\n\t\t{store_id, StoreID},\n\t\t{s, Start}, {e, End},\n\t\t{queue_size, gb_sets:size(Q)},\n\t\t{is_joined, CheckIsJoined}, \n\t\t{is_footprint_record_migrated, IsFootprintRecordMigrated},\n\t\t{intersects_disk_pool, IntersectsDiskPool},\n\t\t{sync_phase, SyncPhase2}]),\n\tState2 =\n\t\tcase IntersectsDiskPool of\n\t\t\tnoop ->\n\t\t\t\tState;\n\t\t\ttrue ->\n\t\t\t\tcase SyncPhase2 of\n\t\t\t\t\tfootprint ->\n\t\t\t\t\t\tEnd2 = min(End, DiskPoolThreshold),\n\t\t\t\t\t\tgen_server:cast(self(),\n\t\t\t\t\t\t\t{collect_peer_intervals, Start, Start, End2, footprint}),\n\t\t\t\t\t\tState#sync_data_state{ sync_phase = footprint };\n\t\t\t\t\t_ ->\n\t\t\t\t\t\t%% The disk pool is only synced during the \"normal\" phase.\n\t\t\t\t\t\tgen_server:cast(self(),\n\t\t\t\t\t\t\t{collect_peer_intervals, Start, Start, End, normal}),\n\t\t\t\t\t\tState#sync_data_state{ sync_phase = normal }\n\t\t\t\tend;\n\t\t\tfalse ->\n\t\t\t\tgen_server:cast(self(), {collect_peer_intervals, Start, Start, End, SyncPhase2}),\n\t\t\t\tState#sync_data_state{ sync_phase = SyncPhase2 }\n\t\tend,\n\t{noreply, State2};\n\nhandle_cast({collect_peer_intervals, Offset, Start, End, Type}, State) when Offset >= End ->\n\t%% We've finished collecting intervals for the whole storage_module range. 
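(The clause guard Offset >= End\n\t%% means the scan cursor has advanced past the end of the range.) 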
Schedule\n\t%% the collection process to restart in ?COLLECT_SYNC_INTERVALS_FREQUENCY_MS.\n\t?LOG_DEBUG([{event, collect_peer_intervals_done},\n\t\t{function, collect_peer_intervals},\n\t\t{store_id, State#sync_data_state.store_id},\n\t\t{offset, Offset}, {s, Start}, {e, End}, {type, Type}]),\n\tar_util:cast_after(?COLLECT_SYNC_INTERVALS_FREQUENCY_MS, self(), collect_peer_intervals),\n\t{noreply, State};\nhandle_cast({collect_peer_intervals, Offset, Start, End, Type}, State) ->\n\t#sync_data_state{ sync_intervals_queue = Q,\n\t\t\tstore_id = StoreID, weave_size = WeaveSize } = State,\n\tIsDiskSpaceSufficient =\n\t\tcase is_disk_space_sufficient(StoreID) of\n\t\t\ttrue ->\n\t\t\t\ttrue;\n\t\t\tIsSufficient ->\n\t\t\t\tDelay = case IsSufficient of\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\t30_000;\n\t\t\t\t\tnot_initialized ->\n\t\t\t\t\t\t1000\n\t\t\t\tend,\n\t\t\t\tar_util:cast_after(Delay, self(),\n\t\t\t\t\t{collect_peer_intervals, Offset, Start, End, Type}),\n\t\t\t\tfalse\n\t\tend,\n\tIsSyncQueueBusy =\n\t\tcase IsDiskSpaceSufficient of\n\t\t\tfalse ->\n\t\t\t\ttrue;\n\t\t\ttrue ->\n\t\t\t\t%% Q contains chunks we've already queued for syncing. We need\n\t\t\t\t%% to manage the queue length.\n\t\t\t\t%% 1. Periodically sync_intervals will pull from Q and send work to\n\t\t\t\t%%    ar_data_sync_coordinator. We need to make sure Q is long enough so\n\t\t\t\t%%    that we never starve ar_data_sync_coordinator of work.\n\t\t\t\t%% 2. On the flip side we don't want Q to get so long as to trigger an\n\t\t\t\t%%    out-of-memory condition. In the extreme case we could collect and\n\t\t\t\t%%    enqueue all chunks in the entire storage module (usually 3.6 TB).\n\t\t\t\t%%    A Q of this length would have a roughly 500 MB memory footprint per\n\t\t\t\t%%    storage module. For a node that is syncing multiple storage modules,\n\t\t\t\t%%    this can add up fast.\n\t\t\t\t%% 3. We also want to make sure we are using the most up to date information\n\t\t\t\t%%    we can. Every time we add a task to the Q we're locking in a specific\n\t\t\t\t%%    view of Peer data availability. If that peer goes offline before we\n\t\t\t\t%%    get to the task it can result in wasted work or syncing stalls. A\n\t\t\t\t%%    shorter queue helps ensure we're always syncing from the \"best\" peers\n\t\t\t\t%%    at any point in time.\n\t\t\t\t%%\n\t\t\t\t%% With all that in mind, we'll pause collection once the Q hits roughly\n\t\t\t\t%% a bucket size worth of chunks. 
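As a rough illustration - assuming, say, a 10 GB\n\t\t\t\t%% ?NETWORK_DATA_BUCKET_SIZE and 256 KiB chunks - that is about 38,000 queued\n\t\t\t\t%% chunks, i.e. roughly 10 GB of pending downloads per storage module. 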
This number is slightly arbitrary and we\n\t\t\t\t%% should feel free to adjust as necessary.\n\t\t\t\tIntervalsQueueSize = gb_sets:size(Q),\n\t\t\t\tStoreIDLabel = ar_storage_module:label(StoreID),\n\t\t\t\tprometheus_gauge:set(sync_intervals_queue_size,\n\t\t\t\t\t[StoreIDLabel], IntervalsQueueSize),\n\t\t\t\tcase IntervalsQueueSize > (?NETWORK_DATA_BUCKET_SIZE / ?DATA_CHUNK_SIZE) of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tar_util:cast_after(500, self(),\n\t\t\t\t\t\t\t{collect_peer_intervals, Offset, Start, End, Type}),\n\t\t\t\t\t\ttrue;\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\tfalse\n\t\t\t\tend\n\t\tend,\n\tcase IsSyncQueueBusy of\n\t\ttrue ->\n\t\t\t?LOG_DEBUG([{event, collect_peer_intervals_skipped},\n\t\t\t\t\t{function, collect_peer_intervals},\n\t\t\t\t\t{store_id, StoreID},\n\t\t\t\t\t{offset, Offset}, {s, Start}, {e, End},\n\t\t\t\t\t{weave_size, WeaveSize},\n\t\t\t\t\t{is_disk_space_sufficient, IsDiskSpaceSufficient},\n\t\t\t\t\t{is_sync_queue_busy, IsSyncQueueBusy}]),\n\t\t\tok;\n\t\tfalse ->\n\t\t\tEnd2 = min(End, WeaveSize),\n\t\t\tcase Offset >= End2 of\n\t\t\t\ttrue ->\n\t\t\t\t\tar_util:cast_after(500, self(),\n\t\t\t\t\t\t{collect_peer_intervals, Offset, Start, End, Type});\n\t\t\t\tfalse ->\n\t\t\t\t\t%% All checks have passed, find and enqueue intervals for one\n\t\t\t\t\t%% sync bucket worth of chunks starting at offset Start.\n\t\t\t\t\tar_peer_intervals:fetch(Offset, Start, End2, StoreID, Type)\n\t\t\tend\n\tend,\n\n\t{noreply, State};\n\nhandle_cast({enqueue_intervals, []}, State) ->\n\t{noreply, State};\nhandle_cast({enqueue_intervals, Intervals}, State) ->\n\t#sync_data_state{ sync_intervals_queue = Q,\n\t\t\tsync_intervals_queue_intervals = QIntervals } = State,\n\t%% When enqueuing intervals, we want to distribute the intervals among many peers,\n\t%% so that:\n\t%% 1. We can better saturate our network bandwidth without overwhelming any one peer.\n\t%% 2. So that we limit the risk of blocking on one particularly slow peer.\n\t%%\n\t%% We do a probabilistic distribution:\n\t%% 1. We shuffle the peers list so that the ordering differs from call to call\n\t%% 2. We cap the number of chunks to enqueue per peer - at roughly 50% more than\n\t%%    their \"fair\" share (i.e. ?DEFAULT_SYNC_BUCKET_SIZE / NumPeers).\n\t%%\n\t%% The compute overhead of these 2 steps is minimal and results in a pretty good\n\t%% distribution of sync requests among peers.\n\n\t%% This is an approximation. The intent is to enqueue one sync_bucket at a time - but\n\t%% due to the selection of each peer's intervals, the total number of bytes may be\n\t%% less than a full sync_bucket. 
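(A peer that only advertises a small slice of the\n\t%% bucket contributes correspondingly few chunks.) 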
But for the purposes of distributing requests among\n\t%% many peers - the approximation is fine (and much cheaper to calculate than taking\n\t%% the sum of all the peer intervals).\n\tTotalChunksToEnqueue = ?DEFAULT_SYNC_BUCKET_SIZE div ?DATA_CHUNK_SIZE,\n\tNumPeers = length(Intervals),\n\t%% Allow each Peer to sync slightly more chunks than their strict share - this allows\n\t%% us to more reliably sync the full set of requested intervals.\n\tScalingFactor = 1.5,\n\tChunksPerPeer = trunc(((TotalChunksToEnqueue + NumPeers - 1) div NumPeers) * ScalingFactor),\n\n\t{Q2, QIntervals2} = enqueue_intervals(\n\t\tar_util:shuffle_list(Intervals), ChunksPerPeer, {Q, QIntervals}),\n\n\t?LOG_DEBUG([{event, enqueue_intervals}, {pid, self()},\n\t\t{queue_before, gb_sets:size(Q)}, {queue_after, gb_sets:size(Q2)},\n\t\t{num_peers, NumPeers}, {chunks_per_peer, ChunksPerPeer},\n\t\t{q_intervals_before, ar_intervals:sum(QIntervals)},\n\t\t{q_intervals_after, ar_intervals:sum(QIntervals2)}]),\n\n\t{noreply, State#sync_data_state{ sync_intervals_queue = Q2,\n\t\t\tsync_intervals_queue_intervals = QIntervals2 }};\n\nhandle_cast(sync_intervals, State) ->\n\t#sync_data_state{ store_id = StoreID } = State,\n\tStatus = ar_device_lock:acquire_lock(sync, StoreID, State#sync_data_state.sync_status),\n\tState2 = State#sync_data_state{ sync_status = Status },\n\tState3 = case Status of\n\t\tactive ->\n\t\t\tdo_sync_intervals(State2);\n\t\tpaused ->\n\t\t\tar_util:cast_after(?DEVICE_LOCK_WAIT, self(), sync_intervals),\n\t\t\tState2;\n\t\t_ ->\n\t\t\tState2\n\tend,\n\t{noreply, State3};\n\nhandle_cast({invalidate_bad_data_record, Args}, State) ->\n\tinvalidate_bad_data_record(Args),\n\t{noreply, State};\n\nhandle_cast({pack_and_store_chunk, Args} = Cast,\n\t\t\t#sync_data_state{ store_id = StoreID } = State) ->\n\tcase is_disk_space_sufficient(StoreID) of\n\t\ttrue ->\n\t\t\tpack_and_store_chunk(Args, State);\n\t\t_ ->\n\t\t\tar_util:cast_after(30000, self(), Cast),\n\t\t\t{noreply, State}\n\tend;\n\nhandle_cast({store_fetched_chunk, Peer, Byte, Proof} = Cast, State) ->\n\t{store_fetched_chunk, Peer, Byte, Proof} = Cast,\n\t#{ data_path := DataPath, tx_path := TXPath, chunk := Chunk, packing := Packing } = Proof,\n\tSeekByte = ar_chunk_storage:get_chunk_seek_offset(Byte + 1) - 1,\n\tcase validate_proof(SeekByte, Proof) of\n\t\t{need_unpacking, AbsoluteEndOffset, ChunkProof2} ->\n\t\t\t#chunk_proof{\n\t\t\t\tblock_start_offset = BlockStartOffset,\n\t\t\t\ttx_start_offset = TXStartOffset,\n\t\t\t\ttx_end_offset = TXEndOffset,\n\t\t\t\tchunk_end_offset = ChunkEndOffset,\n\t\t\t\tchunk_id = ChunkID,\n\t\t\t\tmetadata = #chunk_metadata{\n\t\t\t\t\ttx_root = TXRoot,\n\t\t\t\t\tdata_root = DataRoot,\n\t\t\t\t\tchunk_size = ChunkSize\n\t\t\t\t}\n\t\t\t} = ChunkProof2,\n\t\t\tTXSize = TXEndOffset - TXStartOffset,\n\t\t\tAbsoluteTXStartOffset = BlockStartOffset + TXStartOffset,\n\t\t\tChunkArgs = {Packing, Chunk, AbsoluteEndOffset, TXRoot, ChunkSize},\n\t\t\tArgs = {AbsoluteTXStartOffset, TXSize, DataPath, TXPath, DataRoot,\n\t\t\t\t\tChunk, ChunkID, ChunkEndOffset, Peer, Byte},\n\t\t\tunpack_fetched_chunk(Cast, AbsoluteEndOffset, ChunkArgs, Args, State);\n\t\tfalse ->\n\t\t\tdecrement_chunk_cache_size(),\n\t\t\tprocess_invalid_fetched_chunk(Peer, Byte, State);\n\t\t{true, ChunkProof2} ->\n\t\t\t#chunk_proof{\n\t\t\t\tblock_start_offset = BlockStartOffset,\n\t\t\t\ttx_start_offset = TXStartOffset,\n\t\t\t\ttx_end_offset = TXEndOffset,\n\t\t\t\tchunk_end_offset = ChunkEndOffset,\n\t\t\t\tchunk_id = ChunkID,\n\t\t\t\tmetadata = 
#chunk_metadata{\n\t\t\t\t\ttx_root = TXRoot,\n\t\t\t\t\tdata_root = DataRoot,\n\t\t\t\t\tchunk_size = ChunkSize\n\t\t\t\t}\n\t\t\t} = ChunkProof2,\n\t\t\tTXSize = TXEndOffset - TXStartOffset,\n\t\t\tAbsoluteTXStartOffset = BlockStartOffset + TXStartOffset,\n\t\t\tAbsoluteEndOffset = AbsoluteTXStartOffset + ChunkEndOffset,\n\t\t\tChunkArgs = {unpacked, Chunk, AbsoluteEndOffset, TXRoot, ChunkSize},\n\t\t\tArgs = {AbsoluteTXStartOffset, TXSize, DataPath, TXPath, DataRoot,\n\t\t\t\t\tChunk, ChunkID, ChunkEndOffset, Peer, Byte},\n\t\t\tprocess_valid_fetched_chunk(ChunkArgs, Args, State)\n\tend;\n\nhandle_cast(process_disk_pool_item, #sync_data_state{ disk_pool_scan_pause = true } = State) ->\n\tar_util:cast_after(?DISK_POOL_SCAN_DELAY_MS, self(), process_disk_pool_item),\n\t{noreply, State};\nhandle_cast(process_disk_pool_item, State) ->\n\t#sync_data_state{ disk_pool_cursor = Cursor, disk_pool_chunks_index = DiskPoolChunksIndex,\n\t\t\tdisk_pool_full_scan_start_key = FullScanStartKey,\n\t\t\tdisk_pool_full_scan_start_timestamp = Timestamp,\n\t\t\tcurrently_processed_disk_pool_keys = CurrentlyProcessedDiskPoolKeys } = State,\n\tNextKey =\n\t\tcase ar_kv:get_next(DiskPoolChunksIndex, Cursor) of\n\t\t\t{ok, Key1, Value1} ->\n\t\t\t\tcase sets:is_element(Key1, CurrentlyProcessedDiskPoolKeys) of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tnone;\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\t{ok, Key1, Value1}\n\t\t\t\tend;\n\t\t\tnone ->\n\t\t\t\tcase ar_kv:get_next(DiskPoolChunksIndex, first) of\n\t\t\t\t\tnone ->\n\t\t\t\t\t\tnone;\n\t\t\t\t\t{ok, Key2, Value2} ->\n\t\t\t\t\t\tcase sets:is_element(Key2, CurrentlyProcessedDiskPoolKeys) of\n\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\tnone;\n\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t{ok, Key2, Value2}\n\t\t\t\t\t\tend\n\t\t\t\tend\n\t\tend,\n\tcase NextKey of\n\t\tnone ->\n\t\t\tar_util:cast_after(?DISK_POOL_SCAN_DELAY_MS, self(), resume_disk_pool_scan),\n\t\t\tar_util:cast_after(?DISK_POOL_SCAN_DELAY_MS, self(), process_disk_pool_item),\n\t\t\t{noreply, State#sync_data_state{ disk_pool_cursor = first,\n\t\t\t\t\tdisk_pool_full_scan_start_key = none, disk_pool_scan_pause = true }};\n\t\t{ok, Key3, Value3} ->\n\t\t\tcase FullScanStartKey of\n\t\t\t\tnone ->\n\t\t\t\t\tprocess_disk_pool_item(State#sync_data_state{\n\t\t\t\t\t\t\tdisk_pool_full_scan_start_key = Key3,\n\t\t\t\t\t\t\tdisk_pool_full_scan_start_timestamp = erlang:timestamp() },\n\t\t\t\t\t\t\tKey3, Value3);\n\t\t\t\tKey3 ->\n\t\t\t\t\tTimePassed = timer:now_diff(erlang:timestamp(), Timestamp),\n\t\t\t\t\tcase TimePassed < (?DISK_POOL_SCAN_DELAY_MS) * 1000 of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\tar_util:cast_after(?DISK_POOL_SCAN_DELAY_MS, self(),\n\t\t\t\t\t\t\t\t\tresume_disk_pool_scan),\n\t\t\t\t\t\t\tar_util:cast_after(?DISK_POOL_SCAN_DELAY_MS, self(),\n\t\t\t\t\t\t\t\t\tprocess_disk_pool_item),\n\t\t\t\t\t\t\t{noreply, State#sync_data_state{ disk_pool_cursor = first,\n\t\t\t\t\t\t\t\t\tdisk_pool_full_scan_start_key = none,\n\t\t\t\t\t\t\t\t\tdisk_pool_scan_pause = true }};\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tprocess_disk_pool_item(State, Key3, Value3)\n\t\t\t\t\tend;\n\t\t\t\t_ ->\n\t\t\t\t\tprocess_disk_pool_item(State, Key3, Value3)\n\t\t\tend\n\tend;\n\nhandle_cast(resume_disk_pool_scan, State) ->\n\t{noreply, State#sync_data_state{ disk_pool_scan_pause = false }};\n\nhandle_cast({process_disk_pool_chunk_offsets, Iterator, MayConclude, Args}, State) ->\n\t{Offset, _, _, _, _, _, Key, _, _, _} = Args,\n\t%% Place the chunk under its last 10 offsets in the weave (the same data\n\t%% may be uploaded several 
times).\n\tcase data_root_index_next_v2(Iterator, 10) of\n\t\tnone ->\n\t\t\tState2 =\n\t\t\t\tcase MayConclude of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tIterator2 = data_root_index_reset(Iterator),\n\t\t\t\t\t\tdelete_disk_pool_chunk(Iterator2, Args, State),\n\t\t\t\t\t\tmaybe_reset_disk_pool_full_scan_key(Key, State);\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\tState\n\t\t\t\tend,\n\t\t\tgen_server:cast(self(), process_disk_pool_item),\n\t\t\t{noreply, deregister_currently_processed_disk_pool_key(Key, State2)};\n\t\t{TXArgs, Iterator2} ->\n\t\t\tState2 = register_currently_processed_disk_pool_key(Key, State),\n\t\t\t{TXStartOffset, TXRoot, TXPath} = TXArgs,\n\t\t\tAbsoluteEndOffset = TXStartOffset + Offset,\n\t\t\tprocess_disk_pool_chunk_offset(Iterator2, TXRoot, TXPath, AbsoluteEndOffset,\n\t\t\t\t\tMayConclude, Args, State2)\n\tend;\n\nhandle_cast({remove_range, End, Cursor, Ref, PID}, State) when Cursor > End ->\n\tPID ! {removed_range, Ref},\n\t{noreply, State};\nhandle_cast({remove_range, End, Cursor, Ref, PID}, State) ->\n\t#sync_data_state{ store_id = StoreID } = State,\n\tcase get_chunk_by_byte(Cursor, StoreID) of\n\t\t{ok, _Key, {AbsoluteEndOffset, _, _, _, _, _, _}}\n\t\t\t\twhen AbsoluteEndOffset > End ->\n\t\t\tPID ! {removed_range, Ref},\n\t\t\t{noreply, State};\n\t\t{ok, _Key, {AbsoluteEndOffset, _, _, _, _, _, ChunkSize}} ->\n\t\t\tPaddedStartOffset = ar_block:get_chunk_padded_offset(AbsoluteEndOffset - ChunkSize),\n\t\t\tPaddedOffset = ar_block:get_chunk_padded_offset(AbsoluteEndOffset),\n\t\t\t%% 1) store updated sync record\n\t\t\t%% 2) remove chunk\n\t\t\t%% 3) update chunks_index\n\t\t\t%%\n\t\t\t%% The order is important - in case the VM crashes,\n\t\t\t%% we will not report false positives to peers,\n\t\t\t%% and the chunk can still be removed upon retry.\n\t\t\tRemoveFromFootprint = ar_footprint_record:delete(PaddedOffset, StoreID),\n\t\t\tRemoveFromSyncRecord =\n\t\t\t\tcase RemoveFromFootprint of\n\t\t\t\t\tok ->\n\t\t\t\t\t\tar_sync_record:delete(PaddedOffset,\n\t\t\t\t\t\t\t\tPaddedStartOffset, ar_data_sync, StoreID);\n\t\t\t\t\tError ->\n\t\t\t\t\t\tError\n\t\t\t\tend,\n\t\t\tRemoveFromChunkStorage =\n\t\t\t\tcase RemoveFromSyncRecord of\n\t\t\t\t\tok ->\n\t\t\t\t\t\tar_chunk_storage:delete(PaddedOffset, StoreID);\n\t\t\t\t\tError2 ->\n\t\t\t\t\t\tError2\n\t\t\t\tend,\n\t\t\tRemoveFromChunksIndex =\n\t\t\t\tcase RemoveFromChunkStorage of\n\t\t\t\t\tok ->\n\t\t\t\t\t\tdelete_chunk_metadata(AbsoluteEndOffset, StoreID);\n\t\t\t\t\tError3 ->\n\t\t\t\t\t\tError3\n\t\t\t\tend,\n\t\t\tcase RemoveFromChunksIndex of\n\t\t\t\tok ->\n\t\t\t\t\tNextCursor = AbsoluteEndOffset + 1,\n\t\t\t\t\tgen_server:cast(self(), {remove_range, End, NextCursor, Ref, PID});\n\t\t\t\t{error, Reason} ->\n\t\t\t\t\t?LOG_ERROR([{event,\n\t\t\t\t\t\t\tdata_removal_aborted_since_failed_to_remove_chunk},\n\t\t\t\t\t\t\t{offset, Cursor},\n\t\t\t\t\t\t\t{reason, io_lib:format(\"~p\", [Reason])}])\n\t\t\tend,\n\t\t\t{noreply, State};\n\t\t{error, invalid_iterator} ->\n\t\t\tNextCursor = advance_chunks_index_cursor(Cursor),\n\t\t\tgen_server:cast(self(), {remove_range, End, NextCursor, Ref, PID}),\n\t\t\t{noreply, State};\n\t\t{error, Reason} ->\n\t\t\t?LOG_ERROR([{event, data_removal_aborted_since_failed_to_query_chunk},\n\t\t\t\t\t{offset, Cursor}, {reason, io_lib:format(\"~p\", [Reason])}]),\n\t\t\t{noreply, State}\n\tend;\n\nhandle_cast({expire_repack_request, Key}, State) ->\n\t#sync_data_state{ packing_map = PackingMap } = State,\n\tcase maps:get(Key, PackingMap, not_found) of\n\t\t{pack_chunk, {_, DataPath, 
Offset, DataRoot, _, _, _}} ->\n\t\t\tdecrement_chunk_cache_size(),\n\t\t\tDataPathHash = crypto:hash(sha256, DataPath),\n\t\t\t?LOG_DEBUG([{event, expired_repack_chunk_request},\n\t\t\t\t\t{data_path_hash, ar_util:encode(DataPathHash)},\n\t\t\t\t\t{data_root, ar_util:encode(DataRoot)},\n\t\t\t\t\t{relative_offset, Offset}]),\n\t\t\tState2 = State#sync_data_state{ packing_map = maps:remove(Key, PackingMap) },\n\t\t\t{noreply, State2};\n\t\t_ ->\n\t\t\t{noreply, State}\n\tend;\n\nhandle_cast({expire_unpack_request, Key}, State) ->\n\t#sync_data_state{ packing_map = PackingMap } = State,\n\tcase maps:get(Key, PackingMap, not_found) of\n\t\t{unpack_fetched_chunk, _Args} ->\n\t\t\tdecrement_chunk_cache_size(),\n\t\t\tState2 = State#sync_data_state{ packing_map = maps:remove(Key, PackingMap) },\n\t\t\t{noreply, State2};\n\t\t_ ->\n\t\t\t{noreply, State}\n\tend;\n\nhandle_cast(store_sync_state, State) ->\n\tstore_sync_state(State),\n\tar_util:cast_after(?STORE_STATE_FREQUENCY_MS, self(), store_sync_state),\n\t{noreply, State};\n\nhandle_cast({remove_recently_processed_disk_pool_offset, Offset, ChunkDataKey}, State) ->\n\t{noreply, remove_recently_processed_disk_pool_offset(Offset, ChunkDataKey, State)};\n\nhandle_cast(Cast, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {cast, Cast}]),\n\t{noreply, State}.\n\nhandle_call({add_block, B, SizeTaggedTXs}, _From, State) ->\n\t#sync_data_state{ store_id = StoreID } = State,\n\t{reply, add_block(B, SizeTaggedTXs, StoreID), State};\n\nhandle_call({store_data_roots_sync, BlockStart, BlockEnd, TXRoot, Entries}, _From, State) ->\n\t{Reply, State2} = handle_store_data_roots(BlockStart, BlockEnd, TXRoot, Entries, State),\n\t{reply, Reply, State2};\n\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {request, Request}]),\n\t{reply, ok, State}.\n\nhandle_store_data_roots(BlockStart, BlockEnd, TXRoot, Entries, State) ->\n\t#sync_data_state{ store_id = ?DEFAULT_MODULE } = State,\n\tDataRootIndexKeySet = sets:from_list([\n\t\t<< DataRoot:32/binary, TXSize:?OFFSET_KEY_BITSIZE >>\n\t\t|| {DataRoot, TXSize, _TXStart, _TXPath} <- Entries\n\t]),\n\tBlockSize = BlockEnd - BlockStart,\n\tlists:foreach(\n\t\tfun({DataRoot, TXSize, TXStartOffset, TXPath}) ->\n\t\t\tok = update_data_root_index(DataRoot, TXSize, TXStartOffset, TXPath, ?DEFAULT_MODULE)\n\t\tend,\n\t\tEntries\n\t),\n\tok = ar_kv:put({data_root_offset_index, ?DEFAULT_MODULE},\n\t\t\t<< BlockStart:?OFFSET_KEY_BITSIZE >>,\n\t\t\tterm_to_binary({TXRoot, BlockSize, DataRootIndexKeySet})),\n\t{ok, State}.\n\nhandle_info({event, node_state, {initialized, B}}, State) ->\n\t{noreply, State#sync_data_state{ weave_size = B#block.weave_size }};\n\nhandle_info({event, node_state, {new_tip, B, _PrevB}}, State) ->\n\t{noreply, State#sync_data_state{ weave_size = B#block.weave_size }};\n\nhandle_info({event, node_state, {search_space_upper_bound, Bound}}, State) ->\n\t{noreply, State#sync_data_state{ disk_pool_threshold = Bound }};\n\nhandle_info({event, node_state, _}, State) ->\n\t{noreply, State};\n\nhandle_info({chunk, {unpacked, Key, ChunkArgs}}, State) ->\n\t#sync_data_state{ packing_map = PackingMap } = State,\n\tcase maps:get(Key, PackingMap, not_found) of\n\t\t{unpack_fetched_chunk, Args} ->\n\t\t\tState2 = State#sync_data_state{ packing_map = maps:remove(Key, PackingMap) },\n\t\t\tprocess_unpacked_chunk(ChunkArgs, Args, State2);\n\t\tResult ->\n\t\t\t{Packing, _U, AbsoluteEndOffset, _TXRoot, ChunkSize} = ChunkArgs,\n\t\t\tReason = 
missing_unpacked_chunk,\n\t\t\tprometheus_counter:inc(sync_chunks_skipped, [Reason]),\n\t\t\t?LOG_DEBUG([{event, skipping_synced_chunk}, \n\t\t\t\t\t{reason, Reason}, {key, Key},\n\t\t\t\t\t{packing, ar_serialize:encode_packing(Packing, true)},\n\t\t\t\t\t{absolute_offset, AbsoluteEndOffset},\n\t\t\t\t\t{chunk_size, ChunkSize}, {result, Result}]),\n\t\t\t{noreply, State}\n\tend;\n\nhandle_info({chunk, {unpack_error, Key, ChunkArgs, Error}}, State) ->\n\t#sync_data_state{ packing_map = PackingMap } = State,\n\tcase maps:get(Key, PackingMap, not_found) of\n\t\t{unpack_fetched_chunk, Args} ->\n\t\t\t{Packing, _Chunk1, AbsoluteEndOffset, _TXRoot, ChunkSize} = ChunkArgs,\n\t\t\t{_AbsoluteTXStartOffset, _TXSize, _DataPath, _TXPath, _DataRoot,\n\t\t\t\t\t_Chunk2, _ChunkID, _ChunkEndOffset, Peer, _Byte} = Args,\n\t\t\t?LOG_WARNING([{event, got_invalid_packed_chunk},\n\t\t\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t\t\t{absolute_end_offset, AbsoluteEndOffset},\n\t\t\t\t\t{packing, ar_serialize:encode_packing(Packing, true)},\n\t\t\t\t\t{chunk_size, ChunkSize},\n\t\t\t\t\t{error, io_lib:format(\"~p\", [Error])}]),\n\t\t\tState2 = State#sync_data_state{ packing_map = maps:remove(Key, PackingMap) },\n\t\t\tar_peers:issue_warning(Peer, chunk, Error),\n\t\t\t{noreply, State2};\n\t\t_ ->\n\t\t\t{noreply, State}\n\tend;\n\nhandle_info({chunk, {packed, Key, ChunkArgs}}, State) ->\n\t#sync_data_state{ packing_map = PackingMap } = State,\n\tPacking = element(1, ChunkArgs),\n\tcase maps:get(Key, PackingMap, not_found) of\n\t\t{pack_chunk, Args} when element(1, Args) == Packing ->\n\t\t\tState2 = State#sync_data_state{ packing_map = maps:remove(Key, PackingMap) },\n\t\t\t{noreply, store_chunk(ChunkArgs, Args, State2)};\n\t\t_ ->\n\t\t\t{noreply, State}\n\tend;\n\nhandle_info({chunk, _}, State) ->\n\t{noreply, State};\n\nhandle_info({event, disksup, {remaining_disk_space, StoreID, false, Percentage, _Bytes}},\n\t\t#sync_data_state{ store_id = StoreID } = State) ->\n\tcase Percentage < 0.01 of\n\t\ttrue ->\n\t\t\tcase is_disk_space_sufficient(StoreID) of\n\t\t\t\tfalse ->\n\t\t\t\t\tok;\n\t\t\t\t_ ->\n\t\t\t\t\tlog_insufficient_disk_space(StoreID)\n\t\t\tend,\n\t\t\tets:insert(ar_data_sync_state, {{is_disk_space_sufficient, StoreID}, false});\n\t\tfalse ->\n\t\t\tcase Percentage > 0.05 of\n\t\t\t\ttrue ->\n\t\t\t\t\tcase is_disk_space_sufficient(StoreID) of\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tlog_sufficient_disk_space(StoreID);\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\tok\n\t\t\t\t\tend;\n\t\t\t\tfalse ->\n\t\t\t\t\tok\n\t\t\tend,\n\t\t\tets:insert(ar_data_sync_state, {{is_disk_space_sufficient, StoreID}, true})\n\tend,\n\t{noreply, State};\nhandle_info({event, disksup, {remaining_disk_space, StoreID, true, _Percentage, Bytes}},\n\t\t#sync_data_state{ store_id = StoreID } = State) ->\n\t{ok, Config} = arweave_config:get_env(),\n\t%% Default values:\n\t%% max_disk_pool_buffer_mb = ?DEFAULT_MAX_DISK_POOL_BUFFER_MB = 100_000\n\t%% disk_cache_size = ?DISK_CACHE_SIZE = 5_120\n\t%% DiskPoolSize = ~100GB\n\t%% DiskCacheSize = ~5GB\n\t%% BufferSize = ~10GB\n\tDiskPoolSize = Config#config.max_disk_pool_buffer_mb * ?MiB,\n\tDiskCacheSize = Config#config.disk_cache_size * ?MiB,\n\tBufferSize = 10_000_000_000,\n\tcase Bytes < DiskPoolSize + DiskCacheSize + (BufferSize div 2) of\n\t\ttrue ->\n\t\t\tar:console(\"error: Not enough disk space left on 'data_dir' disk for \"\n\t\t\t\t\"the requested 'max_disk_pool_buffer_mb' ~Bmb and 'disk_cache_size_mb' ~Bmb. \"\n\t\t\t\t\"Either lower these values or add more disk 
space.~n\",\n\t\t\t[Config#config.max_disk_pool_buffer_mb, Config#config.disk_cache_size]),\n\t\t\tcase is_disk_space_sufficient(StoreID) of\n\t\t\t\tfalse ->\n\t\t\t\t\tok;\n\t\t\t\t_ ->\n\t\t\t\t\tlog_insufficient_disk_space(StoreID)\n\t\t\tend,\n\t\t\tets:insert(ar_data_sync_state, {{is_disk_space_sufficient, StoreID}, false});\n\t\tfalse ->\n\t\t\tcase Bytes > DiskPoolSize + DiskCacheSize + BufferSize of\n\t\t\t\ttrue ->\n\t\t\t\t\tcase is_disk_space_sufficient(StoreID) of\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tlog_sufficient_disk_space(StoreID);\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\tok\n\t\t\t\t\tend;\n\t\t\t\tfalse ->\n\t\t\t\t\tok\n\t\t\tend,\n\t\t\tets:insert(ar_data_sync_state, {{is_disk_space_sufficient, StoreID}, true})\n\tend,\n\t{noreply, State};\n\nhandle_info({event, disksup, _}, State) ->\n\t{noreply, State};\n\nhandle_info({'EXIT', _PID, normal}, State) ->\n\t{noreply, State};\n\nhandle_info({'DOWN', _,  process, _, normal}, State) ->\n\t{noreply, State};\nhandle_info({'DOWN', _,  process, _, noproc}, State) ->\n\t{noreply, State};\nhandle_info({'DOWN', _,  process, _, Reason},  #sync_data_state{ store_id = StoreID } = State) ->\n\t?LOG_WARNING([{event, collect_intervals_job_failed},\n\t\t\t{reason, io_lib:format(\"~p\", [Reason])}, {action, spawning_another_one},\n\t\t\t{store_id, StoreID}]),\n\tgen_server:cast(self(), collect_peer_intervals),\n\t{noreply, State};\n\nhandle_info(Message,  #sync_data_state{ store_id = StoreID } = State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {store_id, StoreID}, {message, Message}]),\n\t{noreply, State}.\n\nterminate(Reason, #sync_data_state{ store_id = StoreID } = State) ->\n\tstore_sync_state(State),\n\t?LOG_INFO([{event, terminate}, {store_id, StoreID},\n\t\t\t{reason, io_lib:format(\"~p\", [Reason])}]),\n\tok.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\ninit_sync_status(StoreID) ->\n\tSyncStatus = case ar_data_sync_coordinator:is_syncing_enabled() of\n\t\ttrue -> paused;\n\t\tfalse -> off\n\tend,\n\tar_device_lock:set_device_lock_metric(StoreID, sync, SyncStatus),\n\tSyncStatus.\ndo_log_chunk_error(LogType, Event, ExtraLogData) ->\n\tLogData = [{event, Event}, {tags, [solution_proofs]} | ExtraLogData],\n\tcase LogType of\n\t\terror ->\n\t\t\t?LOG_ERROR(LogData);\n\t\tinfo ->\n\t\t\t?LOG_INFO(LogData)\n\tend.\n\nlog_chunk_error(http, _, _) ->\n\tok;\nlog_chunk_error(tx_data, _, _) ->\n\tok;\nlog_chunk_error(verify, Event, ExtraLogData) ->\n\tdo_log_chunk_error(info, Event, [{request_origin, verify} | ExtraLogData]);\nlog_chunk_error(RequestOrigin, Event, ExtraLogData) ->\n\tdo_log_chunk_error(error, Event, [{request_origin, RequestOrigin} | ExtraLogData]).\n\ndo_sync_intervals(State) ->\n\t#sync_data_state{ sync_intervals_queue = Q,\n\t\t\tsync_intervals_queue_intervals = QIntervals, store_id = StoreID } = State,\n\tIsQueueEmpty =\n\t\tcase gb_sets:is_empty(Q) of\n\t\t\ttrue ->\n\t\t\t\tar_util:cast_after(500, self(), sync_intervals),\n\t\t\t\ttrue;\n\t\t\tfalse ->\n\t\t\t\tfalse\n\t\tend,\n\tIsDiskSpaceSufficient =\n\t\tcase IsQueueEmpty of\n\t\t\ttrue ->\n\t\t\t\tfalse;\n\t\t\tfalse ->\n\t\t\t\tcase is_disk_space_sufficient(StoreID) of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\ttrue;\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tar_util:cast_after(30000, self(), sync_intervals),\n\t\t\t\t\t\tfalse\n\t\t\t\tend\n\t\tend,\n\tIsChunkCacheFull =\n\t\tcase IsDiskSpaceSufficient of\n\t\t\tfalse ->\n\t\t\t\ttrue;\n\t\t\ttrue ->\n\t\t\t\tcase 
is_chunk_cache_full() of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tar_util:cast_after(1000, self(), sync_intervals),\n\t\t\t\t\t\ttrue;\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\tfalse\n\t\t\t\tend\n\t\tend,\n\tAreSyncWorkersBusy =\n\t\tcase IsChunkCacheFull of\n\t\t\ttrue ->\n\t\t\t\ttrue;\n\t\t\tfalse ->\n\t\t\t\tcase ar_data_sync_coordinator:ready_for_work() of\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\tar_util:cast_after(200, self(), sync_intervals),\n\t\t\t\t\t\ttrue;\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tfalse\n\t\t\t\tend\n\t\tend,\n\tcase AreSyncWorkersBusy of\n\t\ttrue ->\n\t\t\tState;\n\t\tfalse ->\n\t\t\tgen_server:cast(self(), sync_intervals),\n\t\t\t{{FootprintKey, Start, End, Peer}, Q2} = gb_sets:take_smallest(Q),\n\t\t\tI2 = ar_intervals:delete(QIntervals, End, Start),\n\t\t\tgen_server:cast(ar_data_sync_coordinator,\n\t\t\t\t\t{sync_range, {Start, End, Peer, StoreID, FootprintKey}}),\n\t\t\tState#sync_data_state{ sync_intervals_queue = Q2,\n\t\t\t\t\tsync_intervals_queue_intervals = I2 }\n\tend.\n\ndo_sync_data(State) ->\n\t#sync_data_state{ store_id = StoreID, range_start = RangeStart, range_end = RangeEnd,\n\t\t\tdisk_pool_threshold = DiskPoolThreshold } = State,\n\t%% See if any of StoreID's unsynced intervals can be found in the \"default\"\n\t%% storage_module\n\tIntervals = get_unsynced_intervals_from_other_storage_modules(\n\t\tStoreID, ?DEFAULT_MODULE, RangeStart, min(RangeEnd, DiskPoolThreshold)),\n\tgen_server:cast(self(), sync_data2),\n\t%% Find all storage_modules that might include the target chunks (e.g. neighboring\n\t%% storage_modules with an overlap, or unpacked copies used for packing, etc...)\n\tOtherStorageModules = [ar_storage_module:id(Module)\n\t\t\t|| Module <- ar_storage_module:get_all(RangeStart, RangeEnd),\n\t\t\tar_storage_module:id(Module) /= StoreID],\n\t?LOG_INFO([{event, sync_data}, {stage, copy_from_default_storage_module},\n\t\t{store_id, StoreID}, {range_start, RangeStart}, {range_end, RangeEnd},\n\t\t{disk_pool_threshold, DiskPoolThreshold},\n\t\t{default_intervals, length(Intervals)},\n\t\t{other_storage_modules, length(OtherStorageModules)}]),\n\tState#sync_data_state{\n\t\tunsynced_intervals_from_other_storage_modules = Intervals,\n\t\tother_storage_modules_with_unsynced_intervals = OtherStorageModules\n\t}.\n\n%% @doc No unsynced overlap intervals, proceed with syncing\ndo_sync_data2(#sync_data_state{\n\t\tunsynced_intervals_from_other_storage_modules = [],\n\t\tother_storage_modules_with_unsynced_intervals = [] } = State) ->\n\t#sync_data_state{ store_id = StoreID,\n\t\trange_start = RangeStart, range_end = RangeEnd } = State,\n\t?LOG_INFO([{event, sync_data}, {stage, complete},\n\t\t{store_id, StoreID}, {range_start, RangeStart}, {range_end, RangeEnd}]),\n\tar_util:cast_after(2000, self(), collect_peer_intervals),\n\tState;\n%% @doc Check to see if a neighboring storage_module may have already synced one of our\n%% unsynced intervals\ndo_sync_data2(#sync_data_state{\n\t\t\tstore_id = StoreID, range_start = RangeStart, range_end = RangeEnd,\n\t\t\tunsynced_intervals_from_other_storage_modules = [],\n\t\t\tother_storage_modules_with_unsynced_intervals = [OtherStoreID | OtherStoreIDs]\n\t\t} = State) ->\n\tPacking = ar_storage_module:get_packing(StoreID),\n\tOtherPacking = ar_storage_module:get_packing(OtherStoreID),\n\tIntervals = get_unsynced_intervals_from_other_storage_modules(StoreID, OtherStoreID,\n\t\t\tRangeStart, RangeEnd),\n\t?LOG_INFO([{event, sync_data}, {stage, copy_from_other_storage_modules},\n\t\t{store_id, StoreID}, 
{other_store_id, OtherStoreID}, \n\t\t{range_start, RangeStart}, {range_end, RangeEnd},\n\t\t{found_intervals, length(Intervals)}]),\n\tgen_server:cast(self(), sync_data2),\n\tState#sync_data_state{\n\t\tunsynced_intervals_from_other_storage_modules = Intervals,\n\t\tother_storage_modules_with_unsynced_intervals = OtherStoreIDs\n\t};\n%% @doc Read an unsynced interval from the disk of a neighboring storage_module\ndo_sync_data2(#sync_data_state{\n\t\tstore_id = StoreID,\n\t\tunsynced_intervals_from_other_storage_modules =\n\t\t\t[{OtherStoreID, {Start, End}} | Intervals]\n\t\t} = State) ->\n\tState2 =\n\t\tcase ar_chunk_copy:read_range(Start, End, OtherStoreID, StoreID) of\n\t\t\ttrue ->\n\t\t\t\tState#sync_data_state{\n\t\t\t\t\tunsynced_intervals_from_other_storage_modules = Intervals };\n\t\t\tfalse ->\n\t\t\t\tState\n\t\tend,\n\tar_util:cast_after(50, self(), sync_data2),\n\tState2.\n\nremove_expired_disk_pool_data_roots() ->\n\tNow = os:system_time(microsecond),\n\t{ok, Config} = arweave_config:get_env(),\n\tExpirationTime = Config#config.disk_pool_data_root_expiration_time * 1000000,\n\tets:foldl(\n\t\tfun({Key, {_Size, Timestamp, _TXIDSet}}, _Acc) ->\n\t\t\tcase Timestamp + ExpirationTime > Now of\n\t\t\t\ttrue ->\n\t\t\t\t\tok;\n\t\t\t\tfalse ->\n\t\t\t\t\tets:delete(ar_disk_pool_data_roots, Key),\n\t\t\t\t\tok\n\t\t\tend\n\t\tend,\n\t\tok,\n\t\tar_disk_pool_data_roots\n\t).\n\nget_chunk(Offset, SeekOffset, Pack, Packing, StoredPacking, StoreID, RequestOrigin) ->\n\tcase read_chunk_with_metadata(Offset, SeekOffset, StoredPacking, StoreID, true,\n\t\t\tRequestOrigin) of\n\t\t{error, Reason} ->\n\t\t\t{error, Reason};\n\t\t{ok, {Chunk, DataPath}, AbsoluteEndOffset, TXRoot, ChunkSize, TXPath} ->\n\t\t\tChunkID =\n\t\t\t\tcase validate_fetched_chunk({AbsoluteEndOffset, DataPath, TXPath, TXRoot,\n\t\t\t\t\t\tChunkSize, StoreID, RequestOrigin}) of\n\t\t\t\t\t{true, ID} ->\n\t\t\t\t\t\tID;\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\terror\n\t\t\t\tend,\n\t\t\tPackResult =\n\t\t\t\tcase {ChunkID, Packing == StoredPacking, Pack} of\n\t\t\t\t\t{error, _, _} ->\n\t\t\t\t\t\t%% Chunk was read but could not be validated.\n\t\t\t\t\t\t{error, chunk_failed_validation};\n\t\t\t\t\t{_, false, false} ->\n\t\t\t\t\t\t%% Requested and stored chunk are in different formats,\n\t\t\t\t\t\t%% and repacking is disabled.\n\t\t\t\t\t\t{error, chunk_stored_in_different_packing_only};\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tar_packing_server:repack(\n\t\t\t\t\t\t\tPacking, StoredPacking, AbsoluteEndOffset, TXRoot, Chunk, ChunkSize)\n\t\t\t\tend,\n\t\t\tcase {PackResult, ChunkID} of\n\t\t\t\t{{error, Reason}, _} ->\n\t\t\t\t\tlog_chunk_error(RequestOrigin, failed_to_repack_chunk,\n\t\t\t\t\t\t\t[{packing, ar_serialize:encode_packing(Packing, true)},\n\t\t\t\t\t\t\t{stored_packing, ar_serialize:encode_packing(StoredPacking, true)},\n\t\t\t\t\t\t\t{absolute_end_offset, AbsoluteEndOffset},\n\t\t\t\t\t\t\t{store_id, StoreID},\n\t\t\t\t\t\t\t{error, io_lib:format(\"~p\", [Reason])}]),\n\t\t\t\t\t{error, Reason};\n\t\t\t\t{{ok, PackedChunk, none}, _} ->\n\t\t\t\t\t%% PackedChunk is the requested format.\n\t\t\t\t\tProof = #{ tx_root => TXRoot, chunk => PackedChunk,\n\t\t\t\t\t\t\tdata_path => DataPath, tx_path => TXPath,\n\t\t\t\t\t\t\tabsolute_end_offset => AbsoluteEndOffset,\n\t\t\t\t\t\t\tchunk_size => ChunkSize },\n\t\t\t\t\t{ok, Proof};\n\t\t\t\t{{ok, PackedChunk, MaybeUnpackedChunk}, none} ->\n\t\t\t\t\t%% PackedChunk is the requested format, but the ChunkID could\n\t\t\t\t\t%% not be determined\n\t\t\t\t\tProof = #{ tx_root => 
TXRoot, chunk => PackedChunk,\n\t\t\t\t\t\t\tdata_path => DataPath, tx_path => TXPath,\n\t\t\t\t\t\t\tabsolute_end_offset => AbsoluteEndOffset,\n\t\t\t\t\t\t\tchunk_size => ChunkSize },\n\t\t\t\t\tcase MaybeUnpackedChunk of\n\t\t\t\t\t\tnone ->\n\t\t\t\t\t\t\t{ok, Proof};\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t{ok, Proof#{ unpacked_chunk => MaybeUnpackedChunk }}\n\t\t\t\t\tend;\n\t\t\t\t{{ok, PackedChunk, MaybeUnpackedChunk}, _} ->\n\t\t\t\t\tProof = #{ tx_root => TXRoot, chunk => PackedChunk,\n\t\t\t\t\t\t\tdata_path => DataPath, tx_path => TXPath,\n\t\t\t\t\t\t\tabsolute_end_offset => AbsoluteEndOffset,\n\t\t\t\t\t\t\tchunk_size => ChunkSize },\n\t\t\t\t\tcase MaybeUnpackedChunk of\n\t\t\t\t\t\tnone ->\n\t\t\t\t\t\t\t{ok, Proof};\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\tComputedChunkID = ar_tx:generate_chunk_id(MaybeUnpackedChunk),\n\t\t\t\t\t\t\tcase ComputedChunkID == ChunkID of\n\t\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\t\t{ok, Proof#{ unpacked_chunk => MaybeUnpackedChunk }};\n\t\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t\tlog_chunk_error(RequestOrigin, get_chunk_invalid_id,\n\t\t\t\t\t\t\t\t\t\t\t[{chunk_size, ChunkSize},\n\t\t\t\t\t\t\t\t\t\t\t{actual_chunk_size, byte_size(MaybeUnpackedChunk)},\n\t\t\t\t\t\t\t\t\t\t\t{requested_packing,\n\t\t\t\t\t\t\t\t\t\t\t\tar_serialize:encode_packing(Packing, true)},\n\t\t\t\t\t\t\t\t\t\t\t{stored_packing,\n\t\t\t\t\t\t\t\t\t\t\t\tar_serialize:encode_packing(StoredPacking, true)},\n\t\t\t\t\t\t\t\t\t\t\t{absolute_end_offset, AbsoluteEndOffset},\n\t\t\t\t\t\t\t\t\t\t\t{offset, Offset},\n\t\t\t\t\t\t\t\t\t\t\t{seek_offset, SeekOffset},\n\t\t\t\t\t\t\t\t\t\t\t{store_id, StoreID},\n\t\t\t\t\t\t\t\t\t\t\t{expected_chunk_id, ar_util:encode(ChunkID)},\n\t\t\t\t\t\t\t\t\t\t\t{chunk_id, ar_util:encode(ComputedChunkID)},\n\t\t\t\t\t\t\t\t\t\t\t{actual_chunk, binary:part(MaybeUnpackedChunk, 0, 32)}]),\n\t\t\t\t\t\t\t\t\tinvalidate_bad_data_record({AbsoluteEndOffset, ChunkSize,\n\t\t\t\t\t\t\t\t\t\tStoreID, get_chunk_invalid_id}),\n\t\t\t\t\t\t\t\t\t{error, chunk_not_found}\n\t\t\t\t\t\t\tend\n\t\t\t\t\tend\n\t\t\tend\n\tend.\n\nget_chunk_proof(Offset, SeekOffset, StoredPacking, StoreID, RequestOrigin) ->\n\tcase read_chunk_with_metadata(\n\t\t\tOffset, SeekOffset, StoredPacking, StoreID, false, RequestOrigin) of\n\t\t{error, Reason} ->\n\t\t\t{error, Reason};\n\t\t{ok, DataPath, AbsoluteEndOffset, TXRoot, ChunkSize, TXPath} ->\n\t\t\tCheckProof =\n\t\t\t\tcase validate_fetched_chunk({AbsoluteEndOffset, DataPath, TXPath, TXRoot,\n\t\t\t\t\t\tChunkSize, StoreID, false}) of\n\t\t\t\t\t{true, ID} ->\n\t\t\t\t\t\tID;\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\terror\n\t\t\t\tend,\n\t\t\tcase CheckProof of\n\t\t\t\terror ->\n\t\t\t\t\t%% Proof was read but could not be validated.\n\t\t\t\t\tlog_chunk_error(RequestOrigin, chunk_proof_failed_validation,\n\t\t\t\t\t\t\t[{offset, Offset},\n\t\t\t\t\t\t\t{seek_offset, SeekOffset},\n\t\t\t\t\t\t\t{stored_packing, ar_serialize:encode_packing(StoredPacking, true)},\n\t\t\t\t\t\t\t{store_id, StoreID}]),\n\t\t\t\t\t{error, chunk_not_found};\n\t\t\t\t_ ->\n\t\t\t\t\tProof = #{ data_path => DataPath, tx_path => TXPath },\n\t\t\t\t\t{ok, Proof}\n\t\t\tend\n\tend.\n\n%% @doc Read the chunk metadata and optionally the chunk itself.\n%%\n%% When ReadChunk=true, the response is of the format:\n%% {ok, {Chunk, DataPath}, AbsoluteEndOffset, TXRoot, ChunkSize, TXPath}\n%%\n%% Otherwise, the format is\n%% {ok, DataPath, AbsoluteEndOffset, TXRoot, ChunkSize, TXPath}\nread_chunk_with_metadata(\n\t\tOffset, SeekOffset, unpacked_padded, StoreID, _ReadChunk, 
RequestOrigin) ->\n\t%% unpacked_padded is an intermediate format and should not be read. Since not all\n\t%% the records and indices have been fully setup, trying to read the chunk can cause\n\t%% its offset to be invalidated.\n\tlog_chunk_error(RequestOrigin, read_unpacked_padded_chunk,\n\t\t\t[{seek_offset, SeekOffset},\n\t\t\t{offset, Offset},\n\t\t\t{store_id, StoreID},\n\t\t\t{stored_packing, unpacked_padded}]),\n\t{error, chunk_not_found};\nread_chunk_with_metadata(\n\t\tOffset, SeekOffset, StoredPacking, StoreID, ReadChunk, RequestOrigin) ->\n\tcase get_chunk_by_byte(SeekOffset, StoreID) of\n\t\t{error, invalid_iterator} ->\n\t\t\t%% No error log needed since this is expected behavior when the chunk simply\n\t\t\t%% isn't stored.\n\t\t\t{error, chunk_not_found};\n\t\t{error, Err} ->\n\t\t\tModules = ar_storage_module:get_all(SeekOffset),\n\t\t\tModuleIDs = [ar_storage_module:id(Module) || Module <- Modules],\n\t\t\tlog_chunk_error(RequestOrigin, failed_to_fetch_chunk_metadata,\n\t\t\t\t[{seek_offset, SeekOffset},\n\t\t\t\t{store_id, StoreID},\n\t\t\t\t{stored_packing, ar_serialize:encode_packing(StoredPacking, true)},\n\t\t\t\t{modules_covering_seek_offset, ModuleIDs},\n\t\t\t\t{error, io_lib:format(\"~p\", [Err])}]),\n\t\t\t{error, chunk_not_found};\n\t\t{ok, _, {AbsoluteEndOffset, _, _, _, _, _, ChunkSize}}\n\t\t\t\twhen AbsoluteEndOffset - SeekOffset >= ChunkSize ->\n\t\t\tlog_chunk_error(RequestOrigin, chunk_offset_mismatch,\n\t\t\t\t\t[{absolute_offset, AbsoluteEndOffset},\n\t\t\t\t\t{seek_offset, SeekOffset},\n\t\t\t\t\t{store_id, StoreID},\n\t\t\t\t\t{stored_packing, ar_serialize:encode_packing(StoredPacking, true)}]),\n\t\t\t{error, chunk_not_found};\n\t\t{ok, _, {AbsoluteEndOffset, ChunkDataKey, TXRoot, _, TXPath, _, ChunkSize}} ->\n\t\t\tReadFun =\n\t\t\t\tcase ReadChunk of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tfun read_chunk/3;\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tfun read_data_path/3\n\t\t\t\tend,\n\t\t\tcase ReadFun(AbsoluteEndOffset, ChunkDataKey, StoreID) of\n\t\t\t\tnot_found ->\n\t\t\t\t\tModules = ar_storage_module:get_all(SeekOffset),\n\t\t\t\t\tModuleIDs = [ar_storage_module:id(Module) || Module <- Modules],\n\t\t\t\t\tlog_chunk_error(RequestOrigin, failed_to_read_chunk_data_path,\n\t\t\t\t\t\t[{seek_offset, SeekOffset},\n\t\t\t\t\t\t{absolute_offset, AbsoluteEndOffset},\n\t\t\t\t\t\t{store_id, StoreID},\n\t\t\t\t\t\t{stored_packing,\n\t\t\t\t\t\t\tar_serialize:encode_packing(StoredPacking, true)},\n\t\t\t\t\t\t{modules_covering_seek_offset, ModuleIDs},\n\t\t\t\t\t\t{chunk_data_key, ar_util:encode(ChunkDataKey)},\n\t\t\t\t\t\t{read_fun, ReadFun}]),\n\t\t\t\t\tinvalidate_bad_data_record({AbsoluteEndOffset, ChunkSize, StoreID,\n\t\t\t\t\t\tfailed_to_read_chunk_data_path}),\n\t\t\t\t\t{error, chunk_not_found};\n\t\t\t\t{error, Error} ->\n\t\t\t\t\tlog_chunk_error(RequestOrigin, failed_to_read_chunk,\n\t\t\t\t\t\t\t[{reason, io_lib:format(\"~p\", [Error])},\n\t\t\t\t\t\t\t{chunk_data_key, ar_util:encode(ChunkDataKey)},\n\t\t\t\t\t\t\t{absolute_end_offset, Offset}]),\n\t\t\t\t\t{error, failed_to_read_chunk};\n\t\t\t\t{ok, {Chunk, DataPath}} ->\n\t\t\t\t\tcase ar_sync_record:is_recorded(Offset, StoredPacking, ar_data_sync,\n\t\t\t\t\t\t\tStoreID) of\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tModules = ar_storage_module:get_all(SeekOffset),\n\t\t\t\t\t\t\tModuleIDs = [ar_storage_module:id(Module) || Module <- Modules],\n\t\t\t\t\t\t\tRootRecords = [ets:lookup(sync_records, {ar_data_sync, ID})\n\t\t\t\t\t\t\t\t\t|| ID <- ModuleIDs],\n\t\t\t\t\t\t\tlog_chunk_error(RequestOrigin, 
chunk_metadata_read_sync_record_race_condition,\n\t\t\t\t[{seek_offset, SeekOffset},\n\t\t\t\t{storeID, StoreID},\n\t\t\t\t{modules_covering_seek_offset, ModuleIDs},\n\t\t\t\t{root_sync_records, RootRecords},\n\t\t\t\t{stored_packing,\n\t\t\t\t\tar_serialize:encode_packing(StoredPacking, true)}]),\n\t\t\t\t\t\t\t%% The chunk should have been re-packed\n\t\t\t\t\t\t\t%% in the meantime - very unlucky timing.\n\t\t\t\t\t\t\t{error, chunk_not_found};\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t{ok, {Chunk, DataPath}, AbsoluteEndOffset, TXRoot, ChunkSize, TXPath}\n\t\t\t\t\tend;\n\t\t\t\t{ok, DataPath} ->\n\t\t\t\t\t{ok, DataPath, AbsoluteEndOffset, TXRoot, ChunkSize, TXPath}\n\t\t\tend\n\tend.\n\ninvalidate_bad_data_record({AbsoluteEndOffset, ChunkSize, StoreID, Type}) ->\n\t[{_, T}] = ets:lookup(ar_data_sync_state, disk_pool_threshold),\n\tcase AbsoluteEndOffset > T of\n\t\ttrue ->\n\t\t\t%% Do not invalidate fresh records - a reorg may be in progress.\n\t\t\tok;\n\t\tfalse ->\n\t\t\tinvalidate_bad_data_record2({AbsoluteEndOffset, ChunkSize, StoreID, Type})\n\tend.\n\ninvalidate_bad_data_record2({AbsoluteEndOffset, ChunkSize, StoreID, Type}) ->\n\tPaddedEndOffset = ar_block:get_chunk_padded_offset(AbsoluteEndOffset),\n\tStartOffset = AbsoluteEndOffset - ChunkSize,\n\t?LOG_WARNING([{event, invalidating_bad_data_record}, {type, Type},\n\t\t\t{range_start, StartOffset}, {range_end, PaddedEndOffset},\n\t\t\t{store_id, StoreID}]),\n\tcase remove_invalid_sync_records(PaddedEndOffset, StartOffset, StoreID) of\n\t\tok ->\n\t\t\tar_sync_record:add(PaddedEndOffset, StartOffset, invalid_chunks, StoreID),\n\t\t\tcase delete_invalid_metadata(AbsoluteEndOffset, StoreID) of\n\t\t\t\tok ->\n\t\t\t\t\tok;\n\t\t\t\tError2 ->\n\t\t\t\t\t?LOG_WARNING([{event, failed_to_remove_chunks_index_key},\n\t\t\t\t\t\t\t{absolute_end_offset, AbsoluteEndOffset},\n\t\t\t\t\t\t\t{error, io_lib:format(\"~p\", [Error2])}])\n\t\t\tend;\n\t\tError ->\n\t\t\t?LOG_WARNING([{event, failed_to_remove_sync_record_range},\n\t\t\t\t\t{range_end, PaddedEndOffset}, {range_start, StartOffset},\n\t\t\t\t\t{error, io_lib:format(\"~p\", [Error])}])\n\tend.\n\nremove_invalid_sync_records(PaddedEndOffset, StartOffset, StoreID) ->\n\tRemove1 = ar_footprint_record:delete(PaddedEndOffset, StoreID),\n\tRemove2 =\n\t\tcase Remove1 of\n\t\t\tok ->\n\t\t\t\tar_sync_record:delete(PaddedEndOffset, StartOffset, ar_data_sync, StoreID);\n\t\t\tError ->\n\t\t\t\tError\n\t\tend,\n\tIsSmallChunkBeforeThreshold = PaddedEndOffset - StartOffset < ?DATA_CHUNK_SIZE,\n\tRemove3 =\n\t\tcase {Remove2, IsSmallChunkBeforeThreshold} of\n\t\t\t{ok, false} ->\n\t\t\t\tar_sync_record:delete(PaddedEndOffset, StartOffset,\n\t\t\t\t\t\tar_chunk_storage, StoreID);\n\t\t\t_ ->\n\t\t\t\tRemove2\n\t\tend,\n\tRemove4 =\n\t\tcase {Remove3, IsSmallChunkBeforeThreshold} of\n\t\t\t{ok, false} ->\n\t\t\t\tar_entropy_storage:delete_record(PaddedEndOffset, StartOffset, StoreID);\n\t\t\t_ ->\n\t\t\t\tRemove3\n\t\tend,\n\tcase {Remove4, IsSmallChunkBeforeThreshold} of\n\t\t{ok, false} ->\n\t\t\tar_sync_record:delete(PaddedEndOffset, StartOffset,\n\t\t\t\t\tar_chunk_storage_replica_2_9_1_unpacked, StoreID);\n\t\t_ ->\n\t\t\tRemove4\n\tend.\n\ndelete_invalid_metadata(AbsoluteEndOffset, StoreID) ->\n\tcase 
get_chunk_metadata(AbsoluteEndOffset, StoreID) of\n\t\tnot_found ->\n\t\t\tok;\n\t\t{ok, Metadata} ->\n\t\t\t{ChunkDataKey, _, _, _, _, _} = Metadata,\n\t\t\tdelete_chunk_data(ChunkDataKey, StoreID),\n\t\t\tdelete_chunk_metadata(AbsoluteEndOffset, StoreID)\n\tend.\n\nvalidate_fetched_chunk(Args) ->\n\t{Offset, DataPath, TXPath, TXRoot, ChunkSize, StoreID, RequestOrigin} = Args,\n\t[{_, T}] = ets:lookup(ar_data_sync_state, disk_pool_threshold),\n\tcase Offset > T orelse not ar_node:is_joined() of\n\t\ttrue ->\n\t\t\tcase RequestOrigin of\n\t\t\t\tminer ->\n\t\t\t\t\tlog_chunk_error(RequestOrigin, miner_requested_disk_pool_chunk,\n\t\t\t\t\t\t\t[{disk_pool_threshold, T}, {end_offset, Offset}]);\n\t\t\t\t_ ->\n\t\t\t\t\tok\n\t\t\tend,\n\t\t\t{true, none};\n\t\tfalse ->\n\t\t\tcase ar_block_index:get_block_bounds(Offset - 1) of\n\t\t\t\t{BlockStart, BlockEnd, TXRoot} ->\n\n\t\t\t\t\tChunkOffset = Offset - BlockStart - 1,\n\t\t\t\t\tcase validate_proof2(TXRoot, TXPath, DataPath, BlockStart, BlockEnd,\n\t\t\t\t\t\t\tChunkOffset, ChunkSize, RequestOrigin) of\n\t\t\t\t\t\t{true, ChunkID} ->\n\t\t\t\t\t\t\t{true, ChunkID};\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tlog_chunk_error(RequestOrigin, failed_to_validate_chunk_proofs,\n\t\t\t\t\t\t\t\t[{absolute_end_offset, Offset}, {store_id, StoreID}]),\n\t\t\t\t\t\t\tinvalidate_bad_data_record({Offset, ChunkSize, StoreID,\n\t\t\t\t\t\t\t\tfailed_to_validate_chunk_proofs}),\n\t\t\t\t\t\t\tfalse\n\t\t\t\t\tend;\n\t\t\t\t{_BlockStart, _BlockEnd, TXRoot2} ->\n\t\t\t\t\tlog_chunk_error(RequestOrigin, stored_chunk_invalid_tx_root,\n\t\t\t\t\t\t[{end_offset, Offset}, {tx_root, ar_util:encode(TXRoot2)},\n\t\t\t\t\t\t{stored_tx_root, ar_util:encode(TXRoot)}, {store_id, StoreID}]),\n\t\t\t\t\tinvalidate_bad_data_record({Offset, ChunkSize, StoreID,\n\t\t\t\t\t\tstored_chunk_invalid_tx_root}),\n\t\t\t\t\tfalse\n\t\t\tend\n\tend.\n\n\nget_tx_offset(TXIndex, TXID) ->\n\tcase ar_kv:get(TXIndex, TXID) of\n\t\t{ok, Value} ->\n\t\t\t{ok, binary_to_term(Value, [safe])};\n\t\tnot_found ->\n\t\t\t{error, not_found};\n\t\t{error, Reason} ->\n\t\t\t?LOG_ERROR([{event, failed_to_read_tx_offset},\n\t\t\t\t\t{reason, io_lib:format(\"~p\", [Reason])},\n\t\t\t\t\t{tx, ar_util:encode(TXID)}]),\n\t\t\t{error, failed_to_read_offset}\n\tend.\n\nget_tx_offset_data_in_range(TXOffsetIndex, TXIndex, Start, End) ->\n\tcase ar_kv:get_prev(TXOffsetIndex, << Start:?OFFSET_KEY_BITSIZE >>) of\n\t\tnone ->\n\t\t\tget_tx_offset_data_in_range2(TXOffsetIndex, TXIndex, Start, End);\n\t\t{ok, << Start2:?OFFSET_KEY_BITSIZE >>, _} ->\n\t\t\tget_tx_offset_data_in_range2(TXOffsetIndex, TXIndex, Start2, End);\n\t\tError ->\n\t\t\tError\n\tend.\n\nget_tx_offset_data_in_range2(TXOffsetIndex, TXIndex, Start, End) ->\n\tcase ar_kv:get_range(TXOffsetIndex, << Start:?OFFSET_KEY_BITSIZE >>,\n\t\t\t<< (End - 1):?OFFSET_KEY_BITSIZE >>) of\n\t\t{ok, EmptyMap} when map_size(EmptyMap) == 0 ->\n\t\t\t{ok, []};\n\t\t{ok, Map} ->\n\t\t\tcase maps:fold(\n\t\t\t\tfun\n\t\t\t\t\t(_, _Value, {error, _} = Error) ->\n\t\t\t\t\t\tError;\n\t\t\t\t\t(_, TXID, Acc) ->\n\t\t\t\t\t\tcase get_tx_offset(TXIndex, TXID) of\n\t\t\t\t\t\t\t{ok, {EndOffset, Size}} ->\n\t\t\t\t\t\t\t\tcase EndOffset =< Start of\n\t\t\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\t\t\tAcc;\n\t\t\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t\t\t[{TXID, EndOffset - Size, EndOffset} | Acc]\n\t\t\t\t\t\t\t\tend;\n\t\t\t\t\t\t\tnot_found ->\n\t\t\t\t\t\t\t\tAcc;\n\t\t\t\t\t\t\tError ->\n\t\t\t\t\t\t\t\tError\n\t\t\t\t\t\tend\n\t\t\t\tend,\n\t\t\t\t[],\n\t\t\t\tMap\n\t\t\t) 
of\n\t\t\t\t{error, _} = Error ->\n\t\t\t\t\tError;\n\t\t\t\tList ->\n\t\t\t\t\t{ok, lists:reverse(List)}\n\t\t\tend;\n\t\tError ->\n\t\t\tError\n\tend.\n\nget_tx_data(Start, End, Chunks, _Pack) when Start >= End ->\n\t{ok, iolist_to_binary(Chunks)};\nget_tx_data(Start, End, Chunks, Pack) ->\n\tcase get_chunk(Start + 1, #{ pack => Pack, packing => unpacked,\n\t\t\tbucket_based_offset => false, origin => tx_data }) of\n\t\t{ok, #{ chunk := Chunk }} ->\n\t\t\tget_tx_data(Start + byte_size(Chunk), End, [Chunks | Chunk], Pack);\n\t\t{error, chunk_not_found} ->\n\t\t\t{error, not_found};\n\t\t{error, Reason} ->\n\t\t\t?LOG_ERROR([{event, failed_to_get_tx_data},\n\t\t\t\t\t{reason, io_lib:format(\"~p\", [Reason])}]),\n\t\t\t{error, failed_to_get_tx_data}\n\tend.\n\nget_data_root_offset(DataRootKey, StoreID) ->\n\t<< DataRoot:32/binary, TXSize:?OFFSET_KEY_BITSIZE >> = DataRootKey,\n\tDataRootIndex = {data_root_index, StoreID},\n\tcase ar_kv:get_prev(DataRootIndex, << DataRoot:32/binary,\n\t\t\t(ar_serialize:encode_int(TXSize, 8))/binary, <<\"a\">>/binary >>) of\n\t\tnone ->\n\t\t\tnot_found;\n\t\t{ok, << DataRoot:32/binary, TXSizeSize:8, TXSize:(TXSizeSize * 8),\n\t\t\t\tOffsetSize:8, Offset:(OffsetSize * 8) >>, TXPath} ->\n\t\t\t{ok, {Offset, TXPath}};\n\t\t{ok, _, _} ->\n\t\t\tnot_found;\n\t\t{error, _} = Error ->\n\t\t\tError\n\tend.\n\nremove_range(Start, End, Ref, ReplyTo) ->\n\tReplyFun =\n\t\tfun(Fun, StorageRefs) ->\n\t\t\tcase sets:is_empty(StorageRefs) of\n\t\t\t\ttrue ->\n\t\t\t\t\tReplyTo ! {removed_range, Ref},\n\t\t\t\t\tar_events:send(sync_record, {global_remove_range, Start, End});\n\t\t\t\tfalse ->\n\t\t\t\t\treceive\n\t\t\t\t\t\t{removed_range, StorageRef} ->\n\t\t\t\t\t\t\tFun(Fun, sets:del_element(StorageRef, StorageRefs))\n\t\t\t\t\tafter 10000 ->\n\t\t\t\t\t\t?LOG_DEBUG([{event,\n\t\t\t\t\t\t\t\twaiting_for_data_range_removal_longer_than_ten_seconds}]),\n\t\t\t\t\t\tFun(Fun, StorageRefs)\n\t\t\t\t\tend\n\t\t\tend\n\t\tend,\n\tStorageModules = ar_storage_module:get_all(Start, End),\n\tStoreIDs = [?DEFAULT_MODULE | [ar_storage_module:id(M) || M <- StorageModules]],\n\tRefL = [make_ref() || _ <- StoreIDs],\n\tPID = spawn(fun() -> ReplyFun(ReplyFun, sets:from_list(RefL)) end),\n\tlists:foreach(\n\t\tfun({StoreID, R}) ->\n\t\t\tgen_server:cast(name(StoreID), {remove_range, End, Start + 1, R, PID})\n\t\tend,\n\t\tlists:zip(StoreIDs, RefL)\n\t).\n\ninit_kv(StoreID) ->\n\tBasicOpts = [{max_open_files, 10000}],\n\tBloomFilterOpts = [\n\t\t{block_based_table_options, [\n\t\t\t{cache_index_and_filter_blocks, true}, % Keep bloom filters in memory.\n\t\t\t{bloom_filter_policy, 10} % ~1% false positive probability.\n\t\t]},\n\t\t{optimize_filters_for_hits, true}\n\t],\n\tPrefixBloomFilterOpts =\n\t\tBloomFilterOpts ++ [\n\t\t\t{prefix_extractor, {capped_prefix_transform, ?OFFSET_KEY_PREFIX_BITSIZE div 8}}],\n\tColumnFamilyDescriptors = [\n\t\t{\"default\", BasicOpts},\n\t\t{\"chunks_index\", BasicOpts ++ PrefixBloomFilterOpts},\n\t\t{\"data_root_index\", BasicOpts ++ BloomFilterOpts},\n\t\t{\"data_root_offset_index\", BasicOpts},\n\t\t{\"tx_index\", BasicOpts ++ BloomFilterOpts},\n\t\t{\"tx_offset_index\", BasicOpts},\n\t\t{\"disk_pool_chunks_index\", BasicOpts ++ BloomFilterOpts},\n\t\t{\"migrations_index\", BasicOpts}\n\t],\n\t{ok, Config} = arweave_config:get_env(),\n\tDataDir = Config#config.data_dir,\n\tDir =\n\t\tcase StoreID of\n\t\t\t?DEFAULT_MODULE ->\n\t\t\t\tfilename:join(DataDir, ?ROCKS_DB_DIR);\n\t\t\t_ ->\n\t\t\t\tfilename:join([DataDir, \"storage_modules\", StoreID, 
?ROCKS_DB_DIR])\n\t\tend,\n\tok = ar_kv:open(#{\n\t\tpath => filename:join(Dir, \"ar_data_sync_db\"),\n\t\tcf_descriptors => ColumnFamilyDescriptors,\n\t\tcf_names => [{ar_data_sync, StoreID}, {chunks_index, StoreID}, {data_root_index_old, StoreID},\n\t\t\t{data_root_offset_index, StoreID}, {tx_index, StoreID}, {tx_offset_index, StoreID},\n\t\t\t{disk_pool_chunks_index_old, StoreID}, {migrations_index, StoreID}]}),\n\tok = ar_kv:open(#{\n\t\tpath => filename:join(Dir, \"ar_data_sync_chunk_db\"),\n\t\tname => {chunk_data_db, StoreID},\n\t\toptions => [{max_open_files, 10000},\n\t\t\t{max_background_compactions, 8},\n\t\t\t{write_buffer_size, 256 * ?MiB}, % 256 MiB per memtable.\n\t\t\t{target_file_size_base, 256 * ?MiB}, % 256 MiB per SST file.\n\t\t\t%% 10 files in L1 to make L1 == L0 as recommended by the\n\t\t\t%% RocksDB guide https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide.\n\t\t\t{max_bytes_for_level_base, 10 * 256 * ?MiB}]}),\n\tok = ar_kv:open(#{\n\t\tpath => filename:join(Dir, \"ar_data_sync_disk_pool_chunks_index_db\"),\n\t\tname => {disk_pool_chunks_index, StoreID},\n\t\toptions => [{max_open_files, 1000}, {max_background_compactions, 8},\n\t\t\t{write_buffer_size, 256 * ?MiB}, % 256 MiB per memtable.\n\t\t\t{target_file_size_base, 256 * ?MiB}, % 256 MiB per SST file.\n\t\t\t%% 10 files in L1 to make L1 == L0 as recommended by the\n\t\t\t%% RocksDB guide https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide.\n\t\t\t{max_bytes_for_level_base, 10 * 256 * ?MiB}] ++ BloomFilterOpts}),\n\tok = ar_kv:open(#{\n\t\tpath => filename:join(Dir, \"ar_data_sync_data_root_index_db\"),\n\t\tname => {data_root_index, StoreID},\n\t\toptions => [{max_open_files, 100}, {max_background_compactions, 8},\n\t\t\t{write_buffer_size, 256 * ?MiB}, % 256 MiB per memtable.\n\t\t\t{target_file_size_base, 256 * ?MiB}, % 256 MiB per SST file.\n\t\t\t%% 10 files in L1 to make L1 == L0 as recommended by the\n\t\t\t%% RocksDB guide https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide.\n\t\t\t{max_bytes_for_level_base, 10 * 256 * ?MiB}] ++ BloomFilterOpts}),\n\t#sync_data_state{\n\t\tchunks_index = {chunks_index, StoreID},\n\t\tdata_root_index = {data_root_index, StoreID},\n\t\tdata_root_index_old = {data_root_index_old, StoreID},\n\t\tdata_root_offset_index = {data_root_offset_index, StoreID},\n\t\tchunk_data_db = {chunk_data_db, StoreID},\n\t\ttx_index = {tx_index, StoreID},\n\t\ttx_offset_index = {tx_offset_index, StoreID},\n\t\tdisk_pool_chunks_index = {disk_pool_chunks_index, StoreID},\n\t\tdisk_pool_chunks_index_old = {disk_pool_chunks_index_old, StoreID},\n\t\tmigrations_index = {migrations_index, StoreID}\n\t}.\n\nmove_disk_pool_index(State) ->\n\tmove_disk_pool_index(first, State).\n\nmove_disk_pool_index(Cursor, State) ->\n\t#sync_data_state{ disk_pool_chunks_index_old = Old,\n\t\t\tdisk_pool_chunks_index = New } = State,\n\tcase ar_kv:get_next(Old, Cursor) of\n\t\tnone ->\n\t\t\tok;\n\t\t{ok, Key, Value} ->\n\t\t\tok = ar_kv:put(New, Key, Value),\n\t\t\tok = ar_kv:delete(Old, Key),\n\t\t\tmove_disk_pool_index(Key, State)\n\tend.\n\nmove_data_root_index(#sync_data_state{ migrations_index = MI,\n\t\tdata_root_index_old = DI } = State) ->\n\tcase ar_kv:get(MI, <<\"move_data_root_index\">>) of\n\t\t{ok, <<\"complete\">>} ->\n\t\t\tets:insert(ar_data_sync_state, {move_data_root_index_migration_complete}),\n\t\t\tok;\n\t\t{ok, Cursor} ->\n\t\t\tmove_data_root_index(Cursor, 1, State);\n\t\tnot_found ->\n\t\t\tcase ar_kv:get_next(DI, last) of\n\t\t\t\tnone 
->\n\t\t\t\t\tets:insert(ar_data_sync_state, {move_data_root_index_migration_complete}),\n\t\t\t\t\tok;\n\t\t\t\t{ok, Key, _} ->\n\t\t\t\t\tmove_data_root_index(Key, 1, State)\n\t\t\tend\n\tend.\n\nmove_data_root_index(Cursor, N, State) ->\n\t#sync_data_state{ migrations_index = MI, data_root_index_old = Old,\n\t\t\tdata_root_index = New } = State,\n\tcase N rem 50000 of\n\t\t0 ->\n\t\t\t?LOG_DEBUG([{event, moving_data_root_index}, {moved_keys, N}]),\n\t\t\tok = ar_kv:put(MI, <<\"move_data_root_index\">>, Cursor),\n\t\t\tgen_server:cast(self(), {move_data_root_index, Cursor, N + 1});\n\t\t_ ->\n\t\t\tcase ar_kv:get_prev(Old, Cursor) of\n\t\t\t\tnone ->\n\t\t\t\t\tok = ar_kv:put(MI, <<\"move_data_root_index\">>, <<\"complete\">>),\n\t\t\t\t\tets:insert(ar_data_sync_state, {move_data_root_index_migration_complete}),\n\t\t\t\t\tok;\n\t\t\t\t{ok, << DataRoot:32/binary, TXSize:?OFFSET_KEY_BITSIZE >>, Value} ->\n\t\t\t\t\tM = binary_to_term(Value, [safe]),\n\t\t\t\t\tmove_data_root_index(DataRoot, TXSize, data_root_index_iterator(M), New),\n\t\t\t\t\tPrevKey = << DataRoot:32/binary, (TXSize - 1):?OFFSET_KEY_BITSIZE >>,\n\t\t\t\t\tmove_data_root_index(PrevKey, N + 1, State);\n\t\t\t\t{ok, Key, _} ->\n\t\t\t\t\t%% The empty data root key (from transactions without data) was\n\t\t\t\t\t%% unnecessarily recorded in the index.\n\t\t\t\t\tPrevKey = binary:part(Key, 0, byte_size(Key) - 1),\n\t\t\t\t\tmove_data_root_index(PrevKey, N + 1, State)\n\t\t\tend\n\tend.\n\nmove_data_root_index(DataRoot, TXSize, Iterator, DB) ->\n\tcase data_root_index_next(Iterator, infinity) of\n\t\tnone ->\n\t\t\tok;\n\t\t{{Offset, _TXRoot, TXPath}, Iterator2} ->\n\t\t\tKey = data_root_key_v2(DataRoot, TXSize, Offset),\n\t\t\tok = ar_kv:put(DB, Key, TXPath),\n\t\t\tmove_data_root_index(DataRoot, TXSize, Iterator2, DB)\n\tend.\n\ndata_root_key_v2(DataRoot, TXSize, Offset) ->\n\t<< DataRoot:32/binary, (ar_serialize:encode_int(TXSize, 8))/binary,\n\t\t\t(ar_serialize:encode_int(Offset, 8))/binary >>.\n\nrecord_disk_pool_chunks_count() ->\n\tDB = {disk_pool_chunks_index, ?DEFAULT_MODULE},\n\tcase ar_kv:count(DB) of\n\t\tCount when is_integer(Count) ->\n\t\t\tprometheus_gauge:set(disk_pool_chunks_count, Count);\n\t\tError ->\n\t\t\t?LOG_WARNING([{event, failed_to_read_disk_pool_chunks_count},\n\t\t\t\t\t{error, io_lib:format(\"~p\", [Error])}])\n\tend.\n\nread_data_sync_state() ->\n\tcase ar_storage:read_term(data_sync_state) of\n\t\t{ok, #{ block_index := RecentBI } = M} ->\n\t\t\tmaps:merge(M, #{\n\t\t\t\tweave_size => case RecentBI of [] -> 0; _ -> element(2, hd(RecentBI)) end,\n\t\t\t\tdisk_pool_threshold => maps:get(disk_pool_threshold, M,\n\t\t\t\t\t\tget_disk_pool_threshold(RecentBI)) });\n\t\tnot_found ->\n\t\t\t#{ block_index => [], disk_pool_data_roots => #{}, disk_pool_size => 0,\n\t\t\t\t\tweave_size => 0, packing_2_5_threshold => infinity,\n\t\t\t\t\tdisk_pool_threshold => 0 }\n\tend.\n\nrecalculate_disk_pool_size(DataRootMap, State) ->\n\t#sync_data_state{ disk_pool_chunks_index = Index } = State,\n\tDataRootMap2 = maps:map(fun(_DataRootKey, {_Size, Timestamp, TXIDSet}) ->\n\t\t\t{0, Timestamp, TXIDSet} end, DataRootMap),\n\trecalculate_disk_pool_size(Index, DataRootMap2, first, 0).\n\nrecalculate_disk_pool_size(Index, DataRootMap, Cursor, Sum) ->\n\tcase ar_kv:get_next(Index, Cursor) of\n\t\tnone ->\n\t\t\tprometheus_gauge:set(pending_chunks_size, Sum),\n\t\t\tmaps:map(fun(DataRootKey, V) -> ets:insert(ar_disk_pool_data_roots,\n\t\t\t\t\t{DataRootKey, V}) end, DataRootMap),\n\t\t\tets:insert(ar_data_sync_state, 
{disk_pool_size, Sum});\n\t\t{ok, Key, Value} ->\n\t\t\tDecodedValue = binary_to_term(Value, [safe]),\n\t\t\tChunkSize = element(2, DecodedValue),\n\t\t\tDataRoot = element(3, DecodedValue),\n\t\t\tTXSize = element(4, DecodedValue),\n\t\t\tDataRootKey = << DataRoot:32/binary, TXSize:?OFFSET_KEY_BITSIZE >>,\n\t\t\tDataRootMap2 =\n\t\t\t\tcase maps:get(DataRootKey, DataRootMap, not_found) of\n\t\t\t\t\tnot_found ->\n\t\t\t\t\t\tDataRootMap;\n\t\t\t\t\t{Size, Timestamp, TXIDSet} ->\n\t\t\t\t\t\tmaps:put(DataRootKey, {Size + ChunkSize, Timestamp, TXIDSet},\n\t\t\t\t\t\t\t\tDataRootMap)\n\t\t\t\tend,\n\t\t\tCursor2 = << Key/binary, <<\"a\">>/binary >>,\n\t\t\trecalculate_disk_pool_size(Index, DataRootMap2, Cursor2, Sum + ChunkSize)\n\tend.\n\nget_disk_pool_threshold([]) ->\n\t0;\nget_disk_pool_threshold(BI) ->\n\tar_node:get_partition_upper_bound(BI).\n\nremove_orphaned_data(State, BlockStartOffset, WeaveSize) ->\n\tok = remove_tx_index_range(BlockStartOffset, WeaveSize, State),\n\t{ok, OrphanedDataRoots} = remove_data_root_index_range(BlockStartOffset, WeaveSize, State),\n\tok = remove_data_root_offset_index_range(BlockStartOffset, WeaveSize, State),\n\tok = delete_chunk_metadata_range(BlockStartOffset, WeaveSize, State),\n\t{ok, OrphanedDataRoots}.\n\nremove_tx_index_range(Start, End, State) ->\n\t#sync_data_state{ tx_offset_index = TXOffsetIndex, tx_index = TXIndex } = State,\n\tok = case ar_kv:get_range(TXOffsetIndex, << Start:?OFFSET_KEY_BITSIZE >>,\n\t\t\t<< (End - 1):?OFFSET_KEY_BITSIZE >>) of\n\t\t{ok, EmptyMap} when map_size(EmptyMap) == 0 ->\n\t\t\tok;\n\t\t{ok, Map} ->\n\t\t\tmaps:fold(\n\t\t\t\tfun\n\t\t\t\t\t(_, _Value, {error, _} = Error) ->\n\t\t\t\t\t\tError;\n\t\t\t\t\t(_, TXID, ok) ->\n\t\t\t\t\t\tar_kv:delete(TXIndex, TXID),\n\t\t\t\t\t\tar_tx_blacklist:norify_about_orphaned_tx(TXID)\n\t\t\t\tend,\n\t\t\t\tok,\n\t\t\t\tMap\n\t\t\t);\n\t\tError ->\n\t\t\tError\n\tend,\n\tar_kv:delete_range(TXOffsetIndex, << Start:?OFFSET_KEY_BITSIZE >>,\n\t\t\t<< End:?OFFSET_KEY_BITSIZE >>).\n\nremove_data_root_index_range(Start, End, State) ->\n\t#sync_data_state{ data_root_offset_index = DataRootOffsetIndex,\n\t\t\tdata_root_index = DataRootIndex } = State,\n\tcase ar_kv:get_range(DataRootOffsetIndex, << Start:?OFFSET_KEY_BITSIZE >>,\n\t\t\t<< (End - 1):?OFFSET_KEY_BITSIZE >>) of\n\t\t{ok, EmptyMap} when map_size(EmptyMap) == 0 ->\n\t\t\t{ok, sets:new()};\n\t\t{ok, Map} ->\n\t\t\tmaps:fold(\n\t\t\t\tfun\n\t\t\t\t\t(_, _Value, {error, _} = Error) ->\n\t\t\t\t\t\tError;\n\t\t\t\t\t(_, Value, {ok, RemovedDataRoots}) ->\n\t\t\t\t\t\t{_TXRoot, _BlockSize, DataRootIndexKeySet} = binary_to_term(Value, [safe]),\n\t\t\t\t\t\tsets:fold(\n\t\t\t\t\t\t\tfun (_Key, {error, _} = Error) ->\n\t\t\t\t\t\t\t\t\tError;\n\t\t\t\t\t\t\t\t(<< _DataRoot:32/binary, _TXSize:?OFFSET_KEY_BITSIZE >> = Key,\n\t\t\t\t\t\t\t\t\t\t{ok, Removed}) ->\n\t\t\t\t\t\t\t\t\tcase remove_data_root(DataRootIndex, Key, Start, End) of\n\t\t\t\t\t\t\t\t\t\tremoved ->\n\t\t\t\t\t\t\t\t\t\t\t{ok, sets:add_element(Key, Removed)};\n\t\t\t\t\t\t\t\t\t\tok ->\n\t\t\t\t\t\t\t\t\t\t\t{ok, Removed};\n\t\t\t\t\t\t\t\t\t\tError ->\n\t\t\t\t\t\t\t\t\t\t\tError\n\t\t\t\t\t\t\t\t\tend;\n\t\t\t\t\t\t\t\t(_, Acc) ->\n\t\t\t\t\t\t\t\t\tAcc\n\t\t\t\t\t\t\tend,\n\t\t\t\t\t\t\t{ok, RemovedDataRoots},\n\t\t\t\t\t\t\tDataRootIndexKeySet\n\t\t\t\t\t\t)\n\t\t\t\tend,\n\t\t\t\t{ok, sets:new()},\n\t\t\t\tMap\n\t\t\t);\n\t\tError ->\n\t\t\tError\n\tend.\n\nremove_data_root(DataRootIndex, DataRootKey, Start, End) ->\n\t<< DataRoot:32/binary, 
TXSize:?OFFSET_KEY_BITSIZE >> = DataRootKey,\n\tStartKey = data_root_key_v2(DataRoot, TXSize, Start),\n\tEndKey = data_root_key_v2(DataRoot, TXSize, End),\n\tcase ar_kv:delete_range(DataRootIndex, StartKey, EndKey) of\n\t\tok ->\n\t\t\tcase ar_kv:get_prev(DataRootIndex, StartKey) of\n\t\t\t\t{ok, << DataRoot:32/binary, TXSizeSize:8, TXSize:(TXSizeSize * 8),\n\t\t\t\t\t\t_Rest/binary >>, _} ->\n\t\t\t\t\tok;\n\t\t\t\t{ok, _, _} ->\n\t\t\t\t\tremoved;\n\t\t\t\tnone ->\n\t\t\t\t\tremoved;\n\t\t\t\t{error, _} = Error ->\n\t\t\t\t\tError\n\t\t\tend;\n\t\tError ->\n\t\t\tError\n\tend.\n\nremove_data_root_offset_index_range(Start, End, State) ->\n\t#sync_data_state{ data_root_offset_index = DataRootOffsetIndex } = State,\n\tar_kv:delete_range(DataRootOffsetIndex, << Start:?OFFSET_KEY_BITSIZE >>,\n\t\t\t<< End:?OFFSET_KEY_BITSIZE >>).\n\nrepair_data_root_offset_index(BI, State) ->\n\t#sync_data_state{ migrations_index = DB } = State,\n\tcase ar_kv:get(DB, <<\"repair_data_root_offset_index\">>) of\n\t\tnot_found ->\n\t\t\t?LOG_INFO([{event, starting_data_root_offset_index_scan}]),\n\t\t\tReverseBI = lists:reverse(BI),\n\t\t\tResyncBlocks = repair_data_root_offset_index(ReverseBI, <<>>, 0, [], State),\n\t\t\t[ar_header_sync:remove_block(Height) || Height <- ResyncBlocks],\n\t\t\tok = ar_kv:put(DB, <<\"repair_data_root_offset_index\">>, <<>>),\n\t\t\t?LOG_INFO([{event, data_root_offset_index_scan_complete}]);\n\t\t_ ->\n\t\t\tok\n\tend.\n\nrepair_data_root_offset_index(BI, Cursor, Height, ResyncBlocks, State) ->\n\t#sync_data_state{ data_root_offset_index = DRI } = State,\n\tcase ar_kv:get_next(DRI, Cursor) of\n\t\tnone ->\n\t\t\tResyncBlocks;\n\t\t{ok, Key, Value} ->\n\t\t\t<< BlockStart:?OFFSET_KEY_BITSIZE >> = Key,\n\t\t\t{TXRoot, BlockSize, _DataRootKeys} = binary_to_term(Value, [safe]),\n\t\t\tBlockEnd = BlockStart + BlockSize,\n\t\t\tcase shift_block_index(TXRoot, BlockStart, BlockEnd, Height, ResyncBlocks, BI) of\n\t\t\t\t{ok, {Height2, BI2}} ->\n\t\t\t\t\tCursor2 = << (BlockStart + 1):?OFFSET_KEY_BITSIZE >>,\n\t\t\t\t\trepair_data_root_offset_index(BI2, Cursor2, Height2, ResyncBlocks, State);\n\t\t\t\t{bad_key, []} ->\n\t\t\t\t\tResyncBlocks;\n\t\t\t\t{bad_key, ResyncBlocks2} ->\n\t\t\t\t\t?LOG_INFO([{event, removing_data_root_index_range},\n\t\t\t\t\t\t\t{range_start, BlockStart}, {range_end, BlockEnd}]),\n\t\t\t\t\tok = remove_tx_index_range(BlockStart, BlockEnd, State),\n\t\t\t\t\t{ok, _} = remove_data_root_index_range(BlockStart, BlockEnd, State),\n\t\t\t\t\tok = remove_data_root_offset_index_range(BlockStart, BlockEnd, State),\n\t\t\t\t\trepair_data_root_offset_index(BI, Cursor, Height, ResyncBlocks2, State)\n\t\t\tend\n\tend.\n\nshift_block_index(_TXRoot, _BlockStart, _BlockEnd, _Height, ResyncBlocks, []) ->\n\t{bad_key, ResyncBlocks};\nshift_block_index(TXRoot, BlockStart, BlockEnd, Height, ResyncBlocks,\n\t\t[{_H, WeaveSize, _TXRoot} | BI]) when BlockEnd > WeaveSize ->\n\tResyncBlocks2 = case BlockStart < WeaveSize of true -> [Height | ResyncBlocks];\n\t\t\t_ -> ResyncBlocks end,\n\tshift_block_index(TXRoot, BlockStart, BlockEnd, Height + 1, ResyncBlocks2, BI);\nshift_block_index(TXRoot, _BlockStart, WeaveSize, Height, _ResyncBlocks,\n\t\t[{_H, WeaveSize, TXRoot} | BI]) ->\n\t{ok, {Height + 1, BI}};\nshift_block_index(_TXRoot, _BlockStart, _WeaveSize, Height, ResyncBlocks, _BI) ->\n\t{bad_key, [Height | ResyncBlocks]}.\n\nadd_block(B, SizeTaggedTXs, StoreID) ->\n\t#block{ indep_hash = H, weave_size = WeaveSize, tx_root = TXRoot } = B,\n\tcase 
ar_block_index:get_element_by_height(B#block.height) of\n\t\t{H, WeaveSize, TXRoot} ->\n\t\t\tBlockStart = B#block.weave_size - B#block.block_size,\n\t\t\tcase ar_kv:get({data_root_offset_index, StoreID},\n\t\t\t\t\t<< BlockStart:?OFFSET_KEY_BITSIZE >>) of\n\t\t\t\tnot_found ->\n\t\t\t\t\t{ok, _} = add_block_data_roots(SizeTaggedTXs, BlockStart, StoreID),\n\t\t\t\t\tok = update_tx_index(SizeTaggedTXs, BlockStart, StoreID),\n\t\t\t\t\tok;\n\t\t\t\t_ ->\n\t\t\t\t\tok\n\t\t\tend;\n\t\t_ ->\n\t\t\tok\n\tend.\n\nupdate_tx_index([], _BlockStartOffset, _StoreID) ->\n\tok;\nupdate_tx_index(SizeTaggedTXs, BlockStartOffset, StoreID) ->\n\tlists:foldl(\n\t\tfun ({_, Offset}, Offset) ->\n\t\t\t\tOffset;\n\t\t\t({{padding, _}, Offset}, _) ->\n\t\t\t\tOffset;\n\t\t\t({{TXID, _}, TXEndOffset}, PreviousOffset) ->\n\t\t\t\tAbsoluteEndOffset = BlockStartOffset + TXEndOffset,\n\t\t\t\tTXSize = TXEndOffset - PreviousOffset,\n\t\t\t\tAbsoluteStartOffset = AbsoluteEndOffset - TXSize,\n\t\t\t\tcase ar_kv:put({tx_offset_index, StoreID},\n\t\t\t\t\t\t<< AbsoluteStartOffset:?OFFSET_KEY_BITSIZE >>, TXID) of\n\t\t\t\t\tok ->\n\t\t\t\t\t\tcase ar_kv:put({tx_index, StoreID}, TXID,\n\t\t\t\t\t\t\t\tterm_to_binary({AbsoluteEndOffset, TXSize})) of\n\t\t\t\t\t\t\tok ->\n\t\t\t\t\t\t\t\tar_events:send(tx, {registered_offset, TXID, AbsoluteEndOffset,\n\t\t\t\t\t\t\t\t\t\tTXSize}),\n\t\t\t\t\t\t\t\tar_tx_blacklist:notify_about_added_tx(TXID, AbsoluteEndOffset,\n\t\t\t\t\t\t\t\t\t\tAbsoluteStartOffset),\n\t\t\t\t\t\t\t\tTXEndOffset;\n\t\t\t\t\t\t\t{error, Reason} ->\n\t\t\t\t\t\t\t\t?LOG_ERROR([{event, failed_to_update_tx_index},\n\t\t\t\t\t\t\t\t\t\t{reason, io_lib:format(\"~p\", [Reason])},\n\t\t\t\t\t\t\t\t\t\t{tx, ar_util:encode(TXID)}]),\n\t\t\t\t\t\t\t\tTXEndOffset\n\t\t\t\t\t\tend;\n\t\t\t\t\t{error, Reason} ->\n\t\t\t\t\t\t?LOG_ERROR([{event, failed_to_update_tx_offset_index},\n\t\t\t\t\t\t\t\t{reason, io_lib:format(\"~p\", [Reason])},\n\t\t\t\t\t\t\t\t{tx, ar_util:encode(TXID)}]),\n\t\t\t\t\t\tTXEndOffset\n\t\t\t\tend\n\t\tend,\n\t\t0,\n\t\tSizeTaggedTXs\n\t),\n\tok.\n\nadd_block_data_roots([], _CurrentWeaveSize, _StoreID) ->\n\t{ok, sets:new()};\nadd_block_data_roots(SizeTaggedTXs, CurrentWeaveSize, StoreID) ->\n\tSizeTaggedDataRoots = [{Root, Offset} || {{_, Root}, Offset} <- SizeTaggedTXs],\n\t{TXRoot, TXTree} = ar_merkle:generate_tree(SizeTaggedDataRoots),\n\t{BlockSize, DataRootIndexKeySet, Args} = lists:foldl(\n\t\tfun ({_, Offset}, {Offset, _, _} = Acc) ->\n\t\t\t\tAcc;\n\t\t\t({{padding, _}, Offset}, {_, Acc1, Acc2}) ->\n\t\t\t\t{Offset, Acc1, Acc2};\n\t\t\t({{_, DataRoot}, Offset}, {_, Acc1, Acc2}) when byte_size(DataRoot) < 32 ->\n\t\t\t\t{Offset, Acc1, Acc2};\n\t\t\t({{_, DataRoot}, TXEndOffset}, {PrevOffset, CurrentDataRootSet, CurrentArgs}) ->\n\t\t\t\tTXPath = ar_merkle:generate_path(TXRoot, TXEndOffset - 1, TXTree),\n\t\t\t\tTXOffset = CurrentWeaveSize + PrevOffset,\n\t\t\t\tTXSize = TXEndOffset - PrevOffset,\n\t\t\t\tDataRootKey = << DataRoot:32/binary, TXSize:?OFFSET_KEY_BITSIZE >>,\n\t\t\t\t{TXEndOffset, sets:add_element(DataRootKey, CurrentDataRootSet),\n\t\t\t\t\t\t[{DataRoot, TXSize, TXOffset, TXPath} | CurrentArgs]}\n\t\tend,\n\t\t{0, sets:new(), []},\n\t\tSizeTaggedTXs\n\t),\n\tcase BlockSize > 0 of\n\t\ttrue ->\n\t\t\tok = ar_kv:put({data_root_offset_index, StoreID},\n\t\t\t\t\t<< CurrentWeaveSize:?OFFSET_KEY_BITSIZE >>,\n\t\t\t\t\tterm_to_binary({TXRoot, BlockSize, DataRootIndexKeySet})),\n\t\t\tlists:foreach(\n\t\t\t\tfun({DataRoot, TXSize, TXOffset, TXPath}) ->\n\t\t\t\t\tok = 
update_data_root_index(DataRoot, TXSize, TXOffset, TXPath, StoreID)\n\t\t\t\tend,\n\t\t\t\tArgs\n\t\t\t);\n\t\tfalse ->\n\t\t\tdo_not_update_data_root_offset_index\n\tend,\n\t{ok, DataRootIndexKeySet}.\n\nupdate_data_root_index(DataRoot, TXSize, AbsoluteTXStartOffset, TXPath, StoreID) ->\n\tar_kv:put({data_root_index, StoreID},\n\t\t\tdata_root_key_v2(DataRoot, TXSize, AbsoluteTXStartOffset), TXPath).\n\nadd_block_data_roots_to_disk_pool(DataRootKeySet) ->\n\tsets:fold(\n\t\tfun(R, T) ->\n\t\t\tcase ets:lookup(ar_disk_pool_data_roots, R) of\n\t\t\t\t[] ->\n\t\t\t\t\tets:insert(ar_disk_pool_data_roots, {R, {0, T, not_set}});\n\t\t\t\t[{_, {Size, Timeout, _}}] ->\n\t\t\t\t\tets:insert(ar_disk_pool_data_roots, {R, {Size, Timeout, not_set}})\n\t\t\tend,\n\t\t\tT + 1\n\t\tend,\n\t\tos:system_time(microsecond),\n\t\tDataRootKeySet\n\t).\n\nreset_orphaned_data_roots_disk_pool_timestamps(DataRootKeySet) ->\n\tsets:fold(\n\t\tfun(R, T) ->\n\t\t\tcase ets:lookup(ar_disk_pool_data_roots, R) of\n\t\t\t\t[] ->\n\t\t\t\t\tets:insert(ar_disk_pool_data_roots, {R, {0, T, not_set}});\n\t\t\t\t[{_, {Size, _, TXIDSet}}] ->\n\t\t\t\t\tets:insert(ar_disk_pool_data_roots, {R, {Size, T, TXIDSet}})\n\t\t\tend,\n\t\t\tT + 1\n\t\tend,\n\t\tos:system_time(microsecond),\n\t\tDataRootKeySet\n\t).\n\nstore_sync_state(#sync_data_state{ store_id = ?DEFAULT_MODULE } = State) ->\n\t#sync_data_state{ block_index = BI } = State,\n\tDiskPoolDataRoots = ets:foldl(\n\t\t\tfun({DataRootKey, V}, Acc) -> maps:put(DataRootKey, V, Acc) end, #{},\n\t\t\tar_disk_pool_data_roots),\n\tStoredState = #{ block_index => BI, disk_pool_data_roots => DiskPoolDataRoots,\n\t\t\t%% Storing it for backwards-compatibility.\n\t\t\tstrict_data_split_threshold => ar_block:strict_data_split_threshold() },\n\tcase ar_storage:write_term(data_sync_state, StoredState) of\n\t\t{error, enospc} ->\n\t\t\t?LOG_WARNING([{event, failed_to_dump_state}, {reason, disk_full},\n\t\t\t\t\t{store_id, ?DEFAULT_MODULE}]),\n\t\t\tok;\n\t\tok ->\n\t\t\tok\n\tend,\n\tState;\nstore_sync_state(State) ->\n\tState.\n\n%% @doc Look to StoreID to find data that TargetStoreID is missing.\n%% Args:\n%%   StoreID - The ID of the storage module to sync to (this module is missing data)\n%%   OtherStoreID - The ID of the storage module to sync from (this module might have the data)\n%%   RangeStart - The start offset of the range to check\n%%   RangeEnd - The end offset of the range to check\nget_unsynced_intervals_from_other_storage_modules(StoreID, OtherStoreID, RangeStart,\n\t\tRangeEnd) ->\n\tget_unsynced_intervals_from_other_storage_modules(StoreID, OtherStoreID, RangeStart,\n\t\t\tRangeEnd, []).\n\nget_unsynced_intervals_from_other_storage_modules(_StoreID, _OtherStoreID, RangeStart,\n\t\tRangeEnd, Intervals) when RangeStart >= RangeEnd ->\n\tIntervals;\nget_unsynced_intervals_from_other_storage_modules(StoreID, OtherStoreID, RangeStart,\n\t\tRangeEnd, Intervals) ->\n\tFindNextMissing =\n\t\tcase ar_sync_record:get_next_synced_interval(RangeStart, RangeEnd, ar_data_sync,\n\t\tStoreID) of\n\t\t\tnot_found ->\n\t\t\t\t{request, {RangeStart, RangeEnd}};\n\t\t\t{End, Start} when Start =< RangeStart ->\n\t\t\t\t{skip, End};\n\t\t\t{_End, Start} ->\n\t\t\t\t{request, {RangeStart, Start}}\n\t\tend,\n\tcase FindNextMissing of\n\t\t{skip, End2} ->\n\t\t\tget_unsynced_intervals_from_other_storage_modules(StoreID, OtherStoreID, End2,\n\t\t\t\t\tRangeEnd, Intervals);\n\t\t{request, {Cursor, RightBound}} ->\n\t\t\tcase ar_sync_record:get_next_synced_interval(Cursor, RightBound, 
ar_data_sync,\n\t\t\t\t\tOtherStoreID) of\n\t\t\t\tnot_found ->\n\t\t\t\t\tget_unsynced_intervals_from_other_storage_modules(StoreID, OtherStoreID,\n\t\t\t\t\t\t\tRightBound, RangeEnd, Intervals);\n\t\t\t\t{End2, Start2} ->\n\t\t\t\t\tStart3 = max(Start2, Cursor),\n\t\t\t\t\tIntervals2 = [{OtherStoreID, {Start3, End2}} | Intervals],\n\t\t\t\t\tget_unsynced_intervals_from_other_storage_modules(StoreID, OtherStoreID,\n\t\t\t\t\t\t\tEnd2, RangeEnd, Intervals2)\n\t\t\tend\n\tend.\n\nenqueue_intervals([], _ChunksToEnqueue, {Q, QIntervals}) ->\n\t{Q, QIntervals};\nenqueue_intervals([{Peer, Intervals, FootprintKey} | Rest], ChunksToEnqueue, {Q, QIntervals}) ->\n\t{Q2, QIntervals2} = enqueue_peer_intervals(Peer, Intervals, FootprintKey, ChunksToEnqueue, {Q, QIntervals}),\n\tenqueue_intervals(Rest, ChunksToEnqueue, {Q2, QIntervals2}).\n\nenqueue_peer_intervals(Peer, Intervals, FootprintKey, ChunksToEnqueue, {Q, QIntervals}) ->\n\t%% Only keep unique intervals. We may get some duplicates for two\n\t%% reasons:\n\t%% 1) find_peer_intervals might choose the same interval several\n\t%%    times in a row even when there are other unsynced intervals\n\t%%    to pick because it is probabilistic.\n\t%% 2) We ask many peers simultaneously about the same interval\n\t%%    to make finding of the relatively rare intervals quicker.\n\tOuterJoin = ar_intervals:outerjoin(QIntervals, Intervals),\n\t{_, {Q2, QIntervals2}}  = ar_intervals:fold(\n\t\tfun\t(_, {0, {QAcc, QIAcc}}) ->\n\t\t\t\t{0, {QAcc, QIAcc}};\n\t\t\t({End, Start}, {ChunksToEnqueue2, {QAcc, QIAcc}}) ->\n\t\t\t\tRangeEnd = min(End, Start + (ChunksToEnqueue2 * ?DATA_CHUNK_SIZE)),\n\t\t\t\tChunkOffsets = lists:seq(Start, RangeEnd - 1, ?DATA_CHUNK_SIZE),\n\t\t\t\tChunksEnqueued = length(ChunkOffsets),\n\t\t\t\t{ChunksToEnqueue2 - ChunksEnqueued,\n\t\t\t\t\tenqueue_peer_range(Peer, FootprintKey, Start, RangeEnd, ChunkOffsets, {QAcc, QIAcc})}\n\t\tend,\n\t\t{ChunksToEnqueue, {Q, QIntervals}},\n\t\tOuterJoin\n\t),\n\t{Q2, QIntervals2}.\n\nenqueue_peer_range(Peer, FootprintKey, RangeStart, RangeEnd, ChunkOffsets, {Q, QIntervals}) ->\n\tQ2 = lists:foldl(\n\t\tfun(ChunkStart, QAcc) ->\n\t\t\tgb_sets:add_element(\n\t\t\t\t{FootprintKey, ChunkStart, min(ChunkStart + ?DATA_CHUNK_SIZE, RangeEnd), Peer},\n\t\t\t\tQAcc)\n\t\tend,\n\t\tQ,\n\t\tChunkOffsets\n\t),\n\tQIntervals2 = ar_intervals:add(QIntervals, RangeEnd, RangeStart),\n\t{Q2, QIntervals2}.\n\nunpack_fetched_chunk(Cast, AbsoluteEndOffset, ChunkArgs, Args, State) ->\n\t#sync_data_state{ packing_map = PackingMap } = State,\n\tcase maps:is_key({AbsoluteEndOffset, unpacked}, PackingMap) of\n\t\ttrue ->\n\t\t\tdecrement_chunk_cache_size(),\n\t\t\t{noreply, State};\n\t\tfalse ->\n\t\t\tcase ar_packing_server:is_buffer_full() of\n\t\t\t\ttrue ->\n\t\t\t\t\tar_util:cast_after(1000, self(), Cast),\n\t\t\t\t\t{noreply, State};\n\t\t\t\tfalse ->\n\t\t\t\t\tar_packing_server:request_unpack({AbsoluteEndOffset, unpacked}, ChunkArgs),\n\t\t\t\t\t{noreply, State#sync_data_state{\n\t\t\t\t\t\t\tpacking_map = PackingMap#{\n\t\t\t\t\t\t\t\t{AbsoluteEndOffset, unpacked} => {unpack_fetched_chunk,\n\t\t\t\t\t\t\t\t\t\tArgs} } }}\n\t\t\tend\n\tend.\n\nvalidate_proof(SeekByte, Proof) ->\n\t#{ data_path := DataPath, tx_path := TXPath, chunk := Chunk, packing := Packing } = Proof,\n\n\tChunkMetadata = #chunk_metadata{\n\t\ttx_path = TXPath,\n\t\tdata_path = DataPath\n\t},\n\n\tChunkProof = ar_poa:chunk_proof(ChunkMetadata, SeekByte, get_merkle_rebase_threshold()),\n\tcase ar_poa:validate_paths(ChunkProof) of\n\t\t{false, _} 
->\n\t\t\tfalse;\n\t\t{true, ChunkProof2} ->\n\t\t\t#chunk_proof{\n\t\t\t\tmetadata = Metadata,\n\t\t\t\tchunk_id = ChunkID,\n\t\t\t\tblock_start_offset = BlockStartOffset,\n\t\t\t\tchunk_end_offset = ChunkEndOffset,\n\t\t\t\ttx_start_offset = TXStartOffset\n\t\t\t} = ChunkProof2,\n\t\t\t#chunk_metadata{\n\t\t\t\tchunk_size = ChunkSize\n\t\t\t} = Metadata,\n\t\t\tAbsoluteEndOffset = BlockStartOffset + TXStartOffset + ChunkEndOffset,\n\t\t\tcase Packing of\n\t\t\t\tunpacked ->\n\t\t\t\t\tcase ar_tx:generate_chunk_id(Chunk) == ChunkID of\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tfalse;\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\tcase ChunkSize == byte_size(Chunk) of\n\t\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\t\t{true, ChunkProof2};\n\t\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t\tfalse\n\t\t\t\t\t\t\tend\n\t\t\t\t\tend;\n\t\t\t\t_ ->\n\t\t\t\t\t{need_unpacking, AbsoluteEndOffset, ChunkProof2}\n\t\t\tend\n\tend.\n\nvalidate_proof2(\n\t\tTXRoot, TXPath, DataPath, BlockStartOffset, BlockEndOffset, BlockRelativeOffset,\n\t\tExpectedChunkSize, RequestOrigin) ->\n\tChunkMetadata = #chunk_metadata{\n\t\ttx_root = TXRoot,\n\t\ttx_path = TXPath,\n\t\tdata_path = DataPath\n\t},\n\tValidateDataPathRuleset = ar_poa:get_data_path_validation_ruleset(\n\t\tBlockStartOffset, get_merkle_rebase_threshold()),\n\tAbsoluteEndOffset = BlockStartOffset + BlockRelativeOffset,\n\tChunkProof = ar_poa:chunk_proof(ChunkMetadata, BlockStartOffset, BlockEndOffset, AbsoluteEndOffset, ValidateDataPathRuleset),\n\t{IsValid, ChunkProof2} = ar_poa:validate_paths(ChunkProof),\n\tcase IsValid of\n\t\ttrue ->\n\t\t\t#chunk_proof{\n\t\t\t\tchunk_id = ChunkID,\n\t\t\t\tchunk_start_offset = ChunkStartOffset,\n\t\t\t\tchunk_end_offset = ChunkEndOffset\n\t\t\t} = ChunkProof2,\n\t\t\tcase ChunkEndOffset - ChunkStartOffset == ExpectedChunkSize of\n\t\t\t\tfalse ->\n\t\t\t\t\tlog_chunk_error(RequestOrigin, failed_to_validate_data_path_offset,\n\t\t\t\t\t\t\t[{chunk_end_offset, ChunkEndOffset},\n\t\t\t\t\t\t\t{chunk_start_offset, ChunkStartOffset},\n\t\t\t\t\t\t\t{chunk_size, ExpectedChunkSize}]),\n\t\t\t\t\tfalse;\n\t\t\t\ttrue ->\n\t\t\t\t\t{true, ChunkID}\n\t\t\tend;\n\t\tfalse ->\n\t\t\t#chunk_proof{\n\t\t\t\ttx_path_is_valid = TXPathIsValid,\n\t\t\t\tdata_path_is_valid = DataPathIsValid\n\t\t\t} = ChunkProof2,\n\t\t\tcase {TXPathIsValid, DataPathIsValid} of\n\t\t\t\t{invalid, _} ->\n\t\t\t\t\tlog_chunk_error(RequestOrigin, failed_to_validate_tx_path,\n\t\t\t\t\t\t\t[{block_start_offset, BlockStartOffset},\n\t\t\t\t\t\t\t{block_end_offset, BlockEndOffset},\n\t\t\t\t\t\t\t{block_relative_offset, BlockRelativeOffset}]),\n\t\t\t\t\tfalse;\n\t\t\t\t{_, invalid} ->\n\t\t\t\t\tlog_chunk_error(RequestOrigin, failed_to_validate_data_path,\n\t\t\t\t\t\t\t[{block_start_offset, BlockStartOffset},\n\t\t\t\t\t\t\t{block_end_offset, BlockEndOffset},\n\t\t\t\t\t\t\t{block_relative_offset, BlockRelativeOffset}]),\n\t\t\t\t\tfalse\n\t\t\tend\n\tend.\n\nvalidate_data_path(DataRoot, Offset, TXSize, DataPath, Chunk) ->\n\tBase = ar_merkle:validate_path(DataRoot, Offset, TXSize, DataPath, strict_borders_ruleset),\n\tStrict = ar_merkle:validate_path(DataRoot, Offset, TXSize, DataPath,\n\t\t\tstrict_data_split_ruleset),\n\tRebase = ar_merkle:validate_path(DataRoot, Offset, TXSize, DataPath,\n\t\t\toffset_rebase_support_ruleset),\n\tResult =\n\t\tcase {Base, Strict, Rebase} of\n\t\t\t{false, false, false} ->\n\t\t\t\tfalse;\n\t\t\t{_, {_, _, _} = StrictResult, _} ->\n\t\t\t\tStrictResult;\n\t\t\t{_, _, {_, _, _} = RebaseResult} ->\n\t\t\t\tRebaseResult;\n\t\t\t{{_, _, _} = 
BaseResult, _, _} ->\n\t\t\t\tBaseResult\n\t\tend,\n\tcase Result of\n\t\tfalse ->\n\t\t\tfalse;\n\t\t{ChunkID, StartOffset, EndOffset} ->\n\t\t\tcase ar_tx:generate_chunk_id(Chunk) == ChunkID of\n\t\t\t\tfalse ->\n\t\t\t\t\tfalse;\n\t\t\t\ttrue ->\n\t\t\t\t\tcase EndOffset - StartOffset == byte_size(Chunk) of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\tPassesBase = not (Base == false),\n\t\t\t\t\t\t\tPassesStrict = not (Strict == false),\n\t\t\t\t\t\t\tPassesRebase = not (Rebase == false),\n\t\t\t\t\t\t\t{true, PassesBase, PassesStrict, PassesRebase, EndOffset};\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tfalse\n\t\t\t\t\tend\n\t\t\tend\n\tend.\n\nchunk_offsets_synced(_, _, _, _, N) when N == 0 ->\n\ttrue;\nchunk_offsets_synced(DataRootIndex, DataRootKey, ChunkOffset, TXStartOffset, N) ->\n\tcase ar_sync_record:is_recorded(TXStartOffset + ChunkOffset, ar_data_sync) of\n\t\t{{true, _}, _StoreID} ->\n\t\t\tcase TXStartOffset of\n\t\t\t\t0 ->\n\t\t\t\t\ttrue;\n\t\t\t\t_ ->\n\t\t\t\t\t<< DataRoot:32/binary, TXSize:?OFFSET_KEY_BITSIZE >> = DataRootKey,\n\t\t\t\t\tKey = data_root_key_v2(DataRoot, TXSize, TXStartOffset - 1),\n\t\t\t\t\tcase ar_kv:get_prev(DataRootIndex, Key) of\n\t\t\t\t\t\tnone ->\n\t\t\t\t\t\t\ttrue;\n\t\t\t\t\t\t{ok, << DataRoot:32/binary, TXSizeSize:8, TXSize:(TXSizeSize * 8),\n\t\t\t\t\t\t\t\tTXStartOffset2Size:8,\n\t\t\t\t\t\t\t\tTXStartOffset2:(TXStartOffset2Size * 8) >>, _} ->\n\t\t\t\t\t\t\tchunk_offsets_synced(DataRootIndex, DataRootKey, ChunkOffset,\n\t\t\t\t\t\t\t\t\tTXStartOffset2, N - 1);\n\t\t\t\t\t\t{ok, _, _} ->\n\t\t\t\t\t\t\ttrue;\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\tfalse\n\t\t\t\t\tend\n\t\t\tend;\n\t\tfalse ->\n\t\t\tfalse\n\tend.\n\n%% @doc Return a storage reference to the chunk proof (and possibly the chunk itself).\nget_chunk_data_key(DataPathHash) ->\n\tTimestamp = os:system_time(microsecond),\n\t<< Timestamp:256, DataPathHash/binary >>.\n\nwrite_chunk(Offset, ChunkDataKey, Chunk, ChunkSize, DataPath, Packing, StoreID) ->\n\tcase ar_tx_blacklist:is_byte_blacklisted(Offset) of\n\t\ttrue ->\n\t\t\t{ok, Packing};\n\t\tfalse ->\n\t\t\twrite_not_blacklisted_chunk(Offset, ChunkDataKey, Chunk, ChunkSize, DataPath,\n\t\t\t\t\tPacking, StoreID)\n\tend.\n\nwrite_not_blacklisted_chunk(Offset, ChunkDataKey, Chunk, ChunkSize, DataPath, Packing,\n\t\tStoreID) ->\n\tShouldStoreInChunkStorage =\n\t\tar_chunk_storage:is_storage_supported(Offset, ChunkSize, Packing),\n\tcase {ShouldStoreInChunkStorage, is_binary(DataPath)} of\n\t\t{true, true} ->\n\t\t\tPaddedOffset = ar_block:get_chunk_padded_offset(Offset),\n\t\t\tcase ar_chunk_storage:put(PaddedOffset, Chunk, Packing, StoreID) of\n\t\t\t\t{ok, NewPacking} ->\n\t\t\t\t\tcase put_chunk_data(ChunkDataKey, StoreID, DataPath) of\n\t\t\t\t\t\tok -> {ok, NewPacking};\n\t\t\t\t\t\tError -> Error\n\t\t\t\t\tend;\n\t\t\t\tOther -> Other\n\t\t\tend;\n\t\t{true, false} ->\n\t\t\t%% If ar_data_sync:write_chunk/7 is called directly without a DataPath, we\n\t\t\t%% should just update chunk storage without modifying chunk_data_db. 
This\n\t\t\t%% can happen, for example, during repack in place.\n\t\t\tPaddedOffset = ar_block:get_chunk_padded_offset(Offset),\n\t\t\tar_chunk_storage:put(PaddedOffset, Chunk, Packing, StoreID);\n\t\t{false, true} ->\n\t\t\tcase put_chunk_data(ChunkDataKey, StoreID, {Chunk, DataPath}) of\n\t\t\t\tok ->\n\t\t\t\t\tprometheus_counter:inc(chunks_stored, [\n\t\t\t\t\t\tar_storage_module:packing_label(Packing),\n\t\t\t\t\t\tar_storage_module:label(StoreID)]),\n\t\t\t\t\t{ok, Packing};\n\t\t\t\tError -> Error\n\t\t\tend;\n\t\t{false, false} ->\n\t\t\t%% For chunks which are only stored in chunk_data_db, we currently require that\n\t\t\t%% both the Chunk and the DataPath are present.\n\t\t\t{error, invalid_data_path}\n\tend.\n\nupdate_chunks_index(Args, UpdateFootprint, State) ->\n\tAbsoluteChunkOffset = element(1, Args),\n\tcase ar_tx_blacklist:is_byte_blacklisted(AbsoluteChunkOffset) of\n\t\ttrue ->\n\t\t\tok;\n\t\tfalse ->\n\t\t\tupdate_chunks_index2(Args, UpdateFootprint, State)\n\tend.\n\nupdate_chunks_index2(Args, UpdateFootprint, State) ->\n\t{AbsoluteEndOffset, Offset, ChunkDataKey, TXRoot, DataRoot, TXPath, ChunkSize,\n\t\t\tPacking} = Args,\n\t#sync_data_state{ store_id = StoreID } = State,\n\tMetadata = {ChunkDataKey, TXRoot, DataRoot, TXPath, Offset, ChunkSize},\n\tcase put_chunk_metadata(AbsoluteEndOffset, StoreID, Metadata) of\n\t\tok ->\n\t\t\tStartOffset = ar_block:get_chunk_padded_offset(AbsoluteEndOffset - ChunkSize),\n\t\t\tPaddedOffset = ar_block:get_chunk_padded_offset(AbsoluteEndOffset),\n\t\t\tcase ar_sync_record:add(PaddedOffset, StartOffset, Packing, ar_data_sync, StoreID) of\n\t\t\t\tok ->\n\t\t\t\t\tcase UpdateFootprint of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\tcase ar_footprint_record:add(PaddedOffset, Packing, StoreID) of\n\t\t\t\t\t\t\t\tok ->\n\t\t\t\t\t\t\t\t\tok;\n\t\t\t\t\t\t\t\t{error, Reason} ->\n\t\t\t\t\t\t\t\t\t{error, Reason}\n\t\t\t\t\t\t\tend;\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tok\n\t\t\t\t\tend;\n\t\t\t\t{error, Reason} ->\n\t\t\t\t\t{error, Reason}\n\t\t\tend;\n\t\t{error, Reason} ->\n\t\t\t{error, Reason}\n\tend.\n\npick_missing_blocks([{H, WeaveSize, _} | CurrentBI], BlockTXPairs) ->\n\t{After, Before} = lists:splitwith(fun({BH, _}) -> BH /= H end, BlockTXPairs),\n\tcase Before of\n\t\t[] ->\n\t\t\tpick_missing_blocks(CurrentBI, BlockTXPairs);\n\t\t_ ->\n\t\t\t{WeaveSize, lists:reverse(After)}\n\tend.\n\nprocess_invalid_fetched_chunk(Peer, Byte, State) ->\n\t%% Not necessarily a malicious peer, it might happen\n\t%% if the chunk is recent and from a different fork.\n\tprocess_invalid_fetched_chunk(Peer, Byte, State, got_invalid_proof_from_peer, []).\nprocess_invalid_fetched_chunk(Peer, Byte, State, Event, ExtraLogs) ->\n\t#sync_data_state{ weave_size = WeaveSize } = State,\n\tprometheus_counter:inc(sync_chunks_skipped, [Event]),\n\t?LOG_WARNING([{event, skipping_synced_chunk},\n\t\t\t{reason, Event}, {peer, ar_util:format_peer(Peer)},\n\t\t\t{byte, Byte}, {weave_size, WeaveSize} | ExtraLogs]),\n\t{noreply, State}.\n\nprocess_valid_fetched_chunk(ChunkArgs, Args, State) ->\n\t#sync_data_state{ store_id = StoreID, disk_pool_threshold = DiskPoolThreshold } = State,\n\t{Packing, UnpackedChunk, AbsoluteEndOffset, TXRoot, ChunkSize} = ChunkArgs,\n\t{AbsoluteTXStartOffset, TXSize, DataPath, TXPath, DataRoot, Chunk, _ChunkID,\n\t\t\tChunkEndOffset, Peer, Byte} = Args,\n\tcase is_chunk_proof_ratio_attractive(ChunkSize, TXSize, DataPath) of\n\t\tfalse ->\n\t\t\tReason = got_too_big_proof_from_peer,\n\t\t\tprometheus_counter:inc(sync_chunks_skipped, 
[Reason]),\n\t\t\t?LOG_WARNING([{event, skipping_synced_chunk},\n\t\t\t\t\t{reason, Reason},\n\t\t\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t\t\t{absolute_end_offset, AbsoluteEndOffset},\n\t\t\t\t\t{store_id, StoreID}]),\n\t\t\tdecrement_chunk_cache_size(),\n\t\t\t{noreply, State};\n\t\ttrue ->\n\t\t\tcase ar_sync_record:is_recorded(Byte + 1, ar_data_sync, StoreID) of\n\t\t\t\t{true, _} ->\n\t\t\t\t\tReason = chunk_already_synced,\n\t\t\t\t\tprometheus_counter:inc(sync_chunks_skipped, [Reason]),\n\t\t\t\t\t?LOG_DEBUG([{event, skipping_synced_chunk},\n\t\t\t\t\t\t{reason, Reason},\n\t\t\t\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t\t\t\t{absolute_end_offset, AbsoluteEndOffset},\n\t\t\t\t\t\t{store_id, StoreID}]),\n\t\t\t\t\t%% The chunk has been synced by another job already.\n\t\t\t\t\tdecrement_chunk_cache_size(),\n\t\t\t\t\t{noreply, State};\n\t\t\t\tfalse ->\n\t\t\t\t\ttrue = AbsoluteEndOffset == AbsoluteTXStartOffset + ChunkEndOffset,\n\t\t\t\t\tcase AbsoluteEndOffset >= DiskPoolThreshold of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\tadd_chunk_to_disk_pool(\n\t\t\t\t\t\t\t\tDataRoot, DataPath, UnpackedChunk, ChunkEndOffset - 1, TXSize),\n\t\t\t\t\t\t\tdecrement_chunk_cache_size(),\n\t\t\t\t\t\t\t{noreply, State};\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tpack_and_store_chunk({DataRoot, AbsoluteEndOffset, TXPath, TXRoot,\n\t\t\t\t\t\t\t\t\tDataPath, Packing, ChunkEndOffset, ChunkSize, Chunk,\n\t\t\t\t\t\t\t\t\tUnpackedChunk, none, none}, State)\n\t\t\t\t\tend\n\t\t\tend\n\tend.\n\npack_and_store_chunk({_, AbsoluteEndOffset, _, _, _, _, _, _, _, _, _, _},\n\t\t#sync_data_state{ store_id = StoreID, disk_pool_threshold = DiskPoolThreshold } = State)\n\t\twhen AbsoluteEndOffset > DiskPoolThreshold ->\n\t%% We do not put data into storage modules unless it is well confirmed.\n\tReason = chunk_is_above_disk_pool_threshold,\n\tprometheus_counter:inc(sync_chunks_skipped, [Reason]),\n\t?LOG_DEBUG([{event, skipping_synced_chunk},\n\t\t{reason, Reason},\n\t\t{absolute_end_offset, AbsoluteEndOffset},\n\t\t{store_id, StoreID}]),\n\tdecrement_chunk_cache_size(),\n\t{noreply, State};\npack_and_store_chunk(Args, State) ->\n\t{DataRoot, AbsoluteEndOffset, TXPath, TXRoot, DataPath, Packing, Offset, ChunkSize, Chunk,\n\t\t\tUnpackedChunk, OriginStoreID, OriginChunkDataKey} = Args,\n\t#sync_data_state{ store_id = StoreID, packing_map = PackingMap } = State,\n\tRequiredPacking = get_required_chunk_packing(AbsoluteEndOffset, ChunkSize, State),\n\tPackingStatus =\n\t\tcase {RequiredPacking, Packing} of\n\t\t\t{Packing, Packing} ->\n\t\t\t\t{ready, {Packing, Chunk}};\n\t\t\t{DifferentPacking, _} ->\n\t\t\t\t{need_packing, DifferentPacking}\n\t\tend,\n\tcase PackingStatus of\n\t\t{ready, {StoredPacking, StoredChunk}} ->\n\t\t\tChunkArgs = {StoredPacking, StoredChunk, AbsoluteEndOffset, TXRoot, ChunkSize},\n\t\t\t{noreply, store_chunk(ChunkArgs, {StoredPacking, DataPath, Offset, DataRoot,\n\t\t\t\t\tTXPath, OriginStoreID, OriginChunkDataKey}, State)};\n\t\t{need_packing, RequiredPacking} ->\n\t\t\tcase maps:is_key({AbsoluteEndOffset, RequiredPacking}, PackingMap) of\n\t\t\t\ttrue ->\n\t\t\t\t\tReason = chunk_already_being_packed,\n\t\t\t\t\tprometheus_counter:inc(sync_chunks_skipped, [Reason]),\n\t\t\t\t\t?LOG_DEBUG([{event, skipping_synced_chunk},\n\t\t\t\t\t\t{reason, Reason},\n\t\t\t\t\t\t{absolute_end_offset, AbsoluteEndOffset},\n\t\t\t\t\t\t{store_id, StoreID}]),\n\t\t\t\t\tdecrement_chunk_cache_size(),\n\t\t\t\t\t{noreply, State};\n\t\t\t\tfalse ->\n\t\t\t\t\tcase ar_packing_server:is_buffer_full() 
of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\tar_util:cast_after(1000, self(), {pack_and_store_chunk, Args}),\n\t\t\t\t\t\t\t{noreply, State};\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t{Packing2, Chunk2} =\n\t\t\t\t\t\t\t\tcase UnpackedChunk of\n\t\t\t\t\t\t\t\t\tnone ->\n\t\t\t\t\t\t\t\t\t\t{Packing, Chunk};\n\t\t\t\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t\t\t\t{unpacked, UnpackedChunk}\n\t\t\t\t\t\t\t\tend,\n\t\t\t\t\t\t\tar_packing_server:request_repack({AbsoluteEndOffset, RequiredPacking},\n\t\t\t\t\t\t\t\t\t{RequiredPacking, Packing2, Chunk2, AbsoluteEndOffset,\n\t\t\t\t\t\t\t\t\t\tTXRoot, ChunkSize}),\n\t\t\t\t\t\t\tPackingArgs = {pack_chunk, {RequiredPacking, DataPath,\n\t\t\t\t\t\t\t\t\tOffset, DataRoot, TXPath, OriginStoreID,\n\t\t\t\t\t\t\t\t\tOriginChunkDataKey}},\n\t\t\t\t\t\t\t{noreply, State#sync_data_state{\n\t\t\t\t\t\t\t\tpacking_map = PackingMap#{\n\t\t\t\t\t\t\t\t\t{AbsoluteEndOffset, RequiredPacking} => PackingArgs }}}\n\t\t\t\t\tend\n\t\t\tend\n\tend.\n\nprocess_store_chunk_queue(#sync_data_state{ store_chunk_queue_len = StartLen } = State) ->\n\tprocess_store_chunk_queue(State, StartLen).\n\nprocess_store_chunk_queue(#sync_data_state{ store_chunk_queue_len = 0 } = State, _StartLen) ->\n\tState;\nprocess_store_chunk_queue(State, StartLen) ->\n\t#sync_data_state{ store_chunk_queue = Q, store_chunk_queue_len = Len,\n\t\t\tstore_chunk_queue_threshold = Threshold } = State,\n\tTimestamp = element(2, gb_sets:smallest(Q)),\n\tNow = os:system_time(millisecond),\n\tThreshold2 =\n\t\tcase Threshold < ?STORE_CHUNK_QUEUE_FLUSH_SIZE_THRESHOLD of\n\t\t\ttrue ->\n\t\t\t\tThreshold;\n\t\t\tfalse ->\n\t\t\t\tcase Len > Threshold of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\t0;\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\tThreshold\n\t\t\t\tend\n\t\tend,\n\tcase Len > Threshold2\n\t\t\torelse Now - Timestamp > ?STORE_CHUNK_QUEUE_FLUSH_TIME_THRESHOLD of\n\t\ttrue ->\n\t\t\t{{_Offset, _Timestamp, _Ref, ChunkArgs, Args}, Q2} = gb_sets:take_smallest(Q),\n\n\t\t\tstore_chunk2(ChunkArgs, Args, State),\n\n\t\t\tdecrement_chunk_cache_size(),\n\t\t\tState2 = State#sync_data_state{ store_chunk_queue = Q2,\n\t\t\t\t\tstore_chunk_queue_len = Len - 1,\n\t\t\t\t\tstore_chunk_queue_threshold = min(Threshold2 + 1,\n\t\t\t\t\t\t\t?STORE_CHUNK_QUEUE_FLUSH_SIZE_THRESHOLD) },\n\t\t\tprocess_store_chunk_queue(State2, StartLen);\n\t\tfalse ->\n\t\t\tState\n\tend.\n\nstore_chunk(ChunkArgs, Args, State) ->\n\t%% Let at least N chunks stack up, then write them in the ascending order,\n\t%% to reduce out-of-order disk writes causing fragmentation.\n\t#sync_data_state{ store_chunk_queue = Q, store_chunk_queue_len = Len } = State,\n\tNow = os:system_time(millisecond),\n\tOffset = element(3, ChunkArgs),\n\tQ2 = gb_sets:add_element({Offset, Now, make_ref(), ChunkArgs, Args}, Q),\n\tState2 = State#sync_data_state{ store_chunk_queue = Q2, store_chunk_queue_len = Len + 1 },\n\tprocess_store_chunk_queue(State2).\n\nstore_chunk2(ChunkArgs, Args, State) ->\n\t#sync_data_state{ store_id = StoreID } = State,\n\t{Packing, Chunk, AbsoluteEndOffset, TXRoot, ChunkSize} = ChunkArgs,\n\t{_Packing, DataPath, Offset, DataRoot, TXPath, OriginStoreID, OriginChunkDataKey} = Args,\n\tPaddedOffset = ar_block:get_chunk_padded_offset(AbsoluteEndOffset),\n\tStartOffset = ar_block:get_chunk_padded_offset(AbsoluteEndOffset - ChunkSize),\n\t%% This will fail if DataPath is not a string - which is fine as it serves as a sanity\n\t%% check that store_chunk2 is called with valid arguments.\n\tDataPathHash = crypto:hash(sha256, DataPath),\n\tShouldStoreInChunkStorage = 
ar_chunk_storage:is_storage_supported(AbsoluteEndOffset,\n\t\t\tChunkSize, Packing),\n\tCleanRecord =\n\t\tcase {ShouldStoreInChunkStorage, ar_storage_module:get_packing(StoreID)} of\n\t\t\t{true, {replica_2_9, _}} ->\n\t\t\t\t%% The 2.9 chunk storage is write-once.\n\t\t\t\tok;\n\t\t\t_ ->\n\t\t\t\tcase ar_footprint_record:delete(PaddedOffset, StoreID) of\n\t\t\t\t\tok ->\n\t\t\t\t\t\tar_sync_record:delete(PaddedOffset, StartOffset, ar_data_sync, StoreID);\n\t\t\t\t\tError ->\n\t\t\t\t\t\tError\n\t\t\t\tend\n\t\tend,\n\tcase CleanRecord of\n\t\t{error, Reason} ->\n\t\t\tlog_failed_to_store_chunk(Reason, AbsoluteEndOffset, Offset, DataRoot, DataPathHash,\n\t\t\t\t\tStoreID),\n\t\t\t{error, Reason};\n\t\tok ->\n\t\t\tChunkDataKey =\n\t\t\t\tcase StoreID == OriginStoreID of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tOriginChunkDataKey;\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tget_chunk_data_key(DataPathHash)\n\t\t\t\tend,\n\t\t\tStoreIndex =\n\t\t\t\tcase write_chunk(AbsoluteEndOffset, ChunkDataKey, Chunk, ChunkSize, DataPath,\n\t\t\t\t\t\tPacking, StoreID) of\n\t\t\t\t\t{ok, NewPacking} ->\n\t\t\t\t\t\t{true, NewPacking};\n\t\t\t\t\tError2 ->\n\t\t\t\t\t\tError2\n\t\t\t\tend,\n\t\t\tProcessAlreadyStored =\n\t\t\t\tcase StoreIndex of\n\t\t\t\t\talready_stored ->\n\t\t\t\t\t\tcase ar_sync_record:is_recorded(PaddedOffset, Packing, ar_data_sync, StoreID) of\n\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\tinvalidate_bad_data_record({AbsoluteEndOffset, ChunkSize,\n\t\t\t\t\t\t\t\t\t\tStoreID, chunk_already_stored_but_not_in_sync_record});\n\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\tcase ar_footprint_record:is_recorded(PaddedOffset, StoreID) of\n\t\t\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t\t\t%% Repair the broken footprint record.\n\t\t\t\t\t\t\t\t\t\tar_footprint_record:add(PaddedOffset, Packing, StoreID);\n\t\t\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\t\t\tok\n\t\t\t\t\t\t\t\tend\n\t\t\t\t\t\tend,\n\t\t\t\t\t\talready_stored;\n\t\t\t\t\tElse ->\n\t\t\t\t\t\tElse\n\t\t\t\tend,\n\t\t\tcase ProcessAlreadyStored of\n\t\t\t\t{true, Packing2} ->\n\t\t\t\t\tUpdateFootprintRecord = is_footprint_record_supported(AbsoluteEndOffset, ChunkSize, Packing2),\n\t\t\t\t\tcase update_chunks_index({AbsoluteEndOffset, Offset, ChunkDataKey, TXRoot,\n\t\t\t\t\t\t\tDataRoot, TXPath, ChunkSize, Packing2}, UpdateFootprintRecord, State) of\n\t\t\t\t\t\tok ->\n\t\t\t\t\t\t\tok;\n\t\t\t\t\t\t{error, Reason} ->\n\t\t\t\t\t\t\tlog_failed_to_store_chunk(Reason, AbsoluteEndOffset, Offset, DataRoot,\n\t\t\t\t\t\t\t\t\tDataPathHash, StoreID),\n\t\t\t\t\t\t\t{error, Reason}\n\t\t\t\t\tend;\n\t\t\t\t{error, Reason} ->\n\t\t\t\t\tlog_failed_to_store_chunk(Reason, AbsoluteEndOffset, Offset, DataRoot,\n\t\t\t\t\t\t\tDataPathHash, StoreID),\n\t\t\t\t\t{error, Reason}\n\t\t\tend\n\tend.\n\nlog_failed_to_store_chunk(already_stored,\n\t\tAbsoluteEndOffset, Offset, DataRoot, DataPathHash, StoreID) ->\n\t?LOG_INFO([{event, chunk_already_stored},\n\t\t\t{absolute_end_offset, AbsoluteEndOffset},\n\t\t\t{relative_offset, Offset},\n\t\t\t{data_path_hash, ar_util:safe_encode(DataPathHash)},\n\t\t\t{data_root, ar_util:safe_encode(DataRoot)},\n\t\t\t{store_id, StoreID}]);\nlog_failed_to_store_chunk(not_prepared_yet,\n\t\tAbsoluteEndOffset, Offset, DataRoot, DataPathHash, StoreID) ->\n\t?LOG_WARNING([{event, chunk_not_prepared_yet},\n\t\t\t{absolute_end_offset, AbsoluteEndOffset},\n\t\t\t{relative_offset, Offset},\n\t\t\t{data_path_hash, ar_util:safe_encode(DataPathHash)},\n\t\t\t{data_root, ar_util:safe_encode(DataRoot)},\n\t\t\t{store_id, 
StoreID}]);\nlog_failed_to_store_chunk(Reason, AbsoluteEndOffset, Offset, DataRoot, DataPathHash, StoreID) ->\n\t?LOG_ERROR([{event, failed_to_store_chunk},\n\t\t\t{reason, io_lib:format(\"~p\", [Reason])},\n\t\t\t{absolute_end_offset, AbsoluteEndOffset},\n\t\t\t{relative_offset, Offset},\n\t\t\t{data_path_hash, ar_util:safe_encode(DataPathHash)},\n\t\t\t{data_root, ar_util:safe_encode(DataRoot)},\n\t\t\t{store_id, StoreID}]).\n\nget_required_chunk_packing(_Offset, _ChunkSize, #sync_data_state{ store_id = ?DEFAULT_MODULE }) ->\n\tunpacked;\nget_required_chunk_packing(Offset, ChunkSize, State) ->\n\t#sync_data_state{ store_id = StoreID } = State,\n\tIsEarlySmallChunk =\n\t\tOffset =< ar_block:strict_data_split_threshold() andalso ChunkSize < ?DATA_CHUNK_SIZE,\n\tcase IsEarlySmallChunk of\n\t\ttrue ->\n\t\t\tunpacked;\n\t\tfalse ->\n\t\t\tcase ar_storage_module:get_packing(StoreID) of\n\t\t\t\t{replica_2_9, _Addr} ->\n\t\t\t\t\tunpacked_padded;\n\t\t\t\tPacking ->\n\t\t\t\t\tPacking\n\t\t\tend\n\tend.\n\nprocess_disk_pool_item(State, Key, Value) ->\n\t#sync_data_state{ disk_pool_chunks_index = DiskPoolChunksIndex,\n\t\t\tdata_root_index = DataRootIndex, store_id = StoreID } = State,\n\tprometheus_counter:inc(disk_pool_processed_chunks),\n\t<< Timestamp:256, DataPathHash/binary >> = Key,\n\tDiskPoolChunk = parse_disk_pool_chunk(Value),\n\t{Offset, ChunkSize, DataRoot, TXSize, ChunkDataKey,\n\t\t\tPassedBaseValidation, PassedStrictValidation,\n\t\t\tPassedRebaseValidation} = DiskPoolChunk,\n\tDataRootKey = << DataRoot:32/binary, TXSize:?OFFSET_KEY_BITSIZE >>,\n\tInDataRootIndex = get_data_root_offset(DataRootKey, StoreID),\n\tInDiskPool = ets:member(ar_disk_pool_data_roots, DataRootKey),\n\tcase {InDataRootIndex, InDiskPool} of\n\t\t{not_found, true} ->\n\t\t\t%% Increment the timestamp by one (microsecond), so that the new cursor is\n\t\t\t%% a prefix of the first key of the next data root. 
We want to quickly skip\n\t\t\t%% all chunks belonging to the same data root because the data root is not\n\t\t\t%% yet on chain.\n\t\t\tNextCursor = {seek, << (Timestamp + 1):256 >>},\n\t\t\tgen_server:cast(self(), process_disk_pool_item),\n\t\t\t{noreply, State#sync_data_state{ disk_pool_cursor = NextCursor }};\n\t\t{not_found, false} ->\n\t\t\t%% The chunk was either orphaned or never made it to the chain.\n\t\t\tcase ets:member(ar_data_sync_state, move_data_root_index_migration_complete) of\n\t\t\t\ttrue ->\n\t\t\t\t\tok = ar_kv:delete(DiskPoolChunksIndex, Key),\n\t\t\t\t\tok = delete_chunk_data(ChunkDataKey, StoreID),\n\t\t\t\t\tdecrease_occupied_disk_pool_size(ChunkSize, DataRootKey);\n\t\t\t\tfalse ->\n\t\t\t\t\t%% Do not remove the chunk from the disk pool until the data root index\n\t\t\t\t\t%% migration is complete, because the data root might still exist in the\n\t\t\t\t\t%% old index.\n\t\t\t\t\tok\n\t\t\tend,\n\t\t\tNextCursor = << Key/binary, <<\"a\">>/binary >>,\n\t\t\tgen_server:cast(self(), process_disk_pool_item),\n\t\t\tState2 = maybe_reset_disk_pool_full_scan_key(Key, State),\n\t\t\t{noreply, State2#sync_data_state{ disk_pool_cursor = NextCursor }};\n\t\t{{ok, {TXStartOffset, _TXPath}}, _} ->\n\t\t\tDataRootIndexIterator = data_root_index_iterator_v2(DataRootKey, TXStartOffset + 1,\n\t\t\t\t\tDataRootIndex),\n\t\t\tNextCursor = << Key/binary, <<\"a\">>/binary >>,\n\t\t\tState2 = State#sync_data_state{ disk_pool_cursor = NextCursor },\n\t\t\tArgs = {Offset, InDiskPool, ChunkSize, DataRoot, DataPathHash, ChunkDataKey, Key,\n\t\t\t\t\tPassedBaseValidation, PassedStrictValidation, PassedRebaseValidation},\n\t\t\tgen_server:cast(self(), {process_disk_pool_chunk_offsets, DataRootIndexIterator,\n\t\t\t\t\ttrue, Args}),\n\t\t\t{noreply, State2}\n\tend.\n\ndecrease_occupied_disk_pool_size(Size, DataRootKey) ->\n\tets:update_counter(ar_data_sync_state, disk_pool_size, {2, -Size}),\n\tprometheus_gauge:dec(pending_chunks_size, Size),\n\tcase ets:lookup(ar_disk_pool_data_roots, DataRootKey) of\n\t\t[] ->\n\t\t\tok;\n\t\t[{_, {Size2, Timestamp, TXIDSet}}] ->\n\t\t\tets:insert(ar_disk_pool_data_roots, {DataRootKey,\n\t\t\t\t\t{Size2 - Size, Timestamp, TXIDSet}}),\n\t\t\tok\n\tend.\n\nmaybe_reset_disk_pool_full_scan_key(Key,\n\t\t#sync_data_state{ disk_pool_full_scan_start_key = Key } = State) ->\n\tState#sync_data_state{ disk_pool_full_scan_start_key = none };\nmaybe_reset_disk_pool_full_scan_key(_Key, State) ->\n\tState.\n\nparse_disk_pool_chunk(Bin) ->\n\tcase binary_to_term(Bin, [safe]) of\n\t\t{Offset, ChunkSize, DataRoot, TXSize, ChunkDataKey} ->\n\t\t\t{Offset, ChunkSize, DataRoot, TXSize, ChunkDataKey, true, false, false};\n\t\t{Offset, ChunkSize, DataRoot, TXSize, ChunkDataKey, PassesStrict} ->\n\t\t\t{Offset, ChunkSize, DataRoot, TXSize, ChunkDataKey, true, PassesStrict, false};\n\t\tR ->\n\t\t\tR\n\tend.\n\ndelete_disk_pool_chunk(Iterator, Args, State) ->\n\t#sync_data_state{\n\t\t\tdisk_pool_chunks_index = DiskPoolChunksIndex, store_id = StoreID } = State,\n\t{Offset, _, ChunkSize, _, _, ChunkDataKey, DiskPoolKey, _, _, _} = Args,\n\tcase data_root_index_next_v2(Iterator, 10) of\n\t\tnone ->\n\t\t\tok = ar_kv:delete(DiskPoolChunksIndex, DiskPoolKey),\n\t\t\tok = delete_chunk_data(ChunkDataKey, StoreID),\n\t\t\tDataRootKey = data_root_index_get_key(Iterator),\n\t\t\tdecrease_occupied_disk_pool_size(ChunkSize, DataRootKey);\n\t\t{TXArgs, Iterator2} ->\n\t\t\t{TXStartOffset, _TXRoot, _TXPath} = TXArgs,\n\t\t\tAbsoluteEndOffset = TXStartOffset + Offset,\n\t\t\tcase 
get_chunk_metadata(AbsoluteEndOffset, StoreID) of\n\t\t\t\tnot_found ->\n\t\t\t\t\tok;\n\t\t\t\t{ok, ChunkArgs} ->\n\t\t\t\t\tcase element(1, ChunkArgs) of\n\t\t\t\t\t\tChunkDataKey ->\n\t\t\t\t\t\t\tPaddedOffset = ar_block:get_chunk_padded_offset(AbsoluteEndOffset),\n\t\t\t\t\t\t\tStartOffset = ar_block:get_chunk_padded_offset(\n\t\t\t\t\t\t\t\t\tAbsoluteEndOffset - ChunkSize),\n\t\t\t\t\t\t\tok = ar_footprint_record:delete(PaddedOffset, StoreID),\n\t\t\t\t\t\t\tok = ar_sync_record:delete(PaddedOffset, StartOffset, ar_data_sync,\n\t\t\t\t\t\t\t\t\tStoreID),\n\t\t\t\t\t\t\tcase ar_sync_record:is_recorded(PaddedOffset, ar_data_sync) of\n\t\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t\tar_events:send(sync_record,\n\t\t\t\t\t\t\t\t\t\t\t{global_remove_range, StartOffset, PaddedOffset});\n\t\t\t\t\t\t\t\t{{true, {replica_2_9, _}}, _StoreID} ->\n\t\t\t\t\t\t\t\t\t%% Replica 2.9 data is recorded in the footprint record.\n\t\t\t\t\t\t\t\t\tar_events:send(sync_record,\n\t\t\t\t\t\t\t\t\t\t\t{global_remove_range, StartOffset, PaddedOffset});\n\t\t\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t\t\tok\n\t\t\t\t\t\t\tend,\n\t\t\t\t\t\t\tok = delete_chunk_metadata(AbsoluteEndOffset, StoreID);\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t%% The entry has been written by the 2.5 version thus has\n\t\t\t\t\t\t\t%% a different key. We do not want to remove chunks from\n\t\t\t\t\t\t\t%% the existing 2.5 dataset.\n\t\t\t\t\t\t\tok\n\t\t\t\t\tend\n\t\t\tend,\n\t\t\tdelete_disk_pool_chunk(Iterator2, Args, State)\n\tend.\n\nregister_currently_processed_disk_pool_key(Key, State) ->\n\t#sync_data_state{ currently_processed_disk_pool_keys = Keys } = State,\n\tKeys2 = sets:add_element(Key, Keys),\n\tState#sync_data_state{ currently_processed_disk_pool_keys = Keys2 }.\n\nderegister_currently_processed_disk_pool_key(Key, State) ->\n\t#sync_data_state{ currently_processed_disk_pool_keys = Keys } = State,\n\tKeys2 = sets:del_element(Key, Keys),\n\tState#sync_data_state{ currently_processed_disk_pool_keys = Keys2 }.\n\nget_merkle_rebase_threshold() ->\n\tcase ets:lookup(node_state, merkle_rebase_support_threshold) of\n\t\t[] ->\n\t\t\tinfinity;\n\t\t[{_, Threshold}] ->\n\t\t\tThreshold\n\tend.\n\nprocess_disk_pool_chunk_offset(Iterator, TXRoot, TXPath, AbsoluteEndOffset, MayConclude, Args,\n\t\tState) ->\n\t#sync_data_state{ disk_pool_threshold = DiskPoolThreshold } = State,\n\t{Offset, _, _, DataRoot, DataPathHash, _, _,\n\t\t\tPassedBase, PassedStrictValidation, PassedRebaseValidation} = Args,\n\tPassedValidation =\n\t\tcase {AbsoluteEndOffset >= get_merkle_rebase_threshold(),\n\t\t\t\tAbsoluteEndOffset >= ar_block:strict_data_split_threshold(),\n\t\t\t\tPassedBase, PassedStrictValidation, PassedRebaseValidation} of\n\t\t\t%% At the rebase threshold we relax some of the validation rules so the strict\n\t\t\t%% validation may fail.\n\t\t\t{true, true, _, _, true} ->\n\t\t\t\ttrue;\n\t\t\t%% Between the \"strict\" and \"rebase\" thresholds the \"base\" and \"strict split\"\n\t\t\t%% rules must be followed.\n\t\t\t{false, true, true, true, _} ->\n\t\t\t\ttrue;\n\t\t\t%% Before the strict threshold only the base (most relaxed) validation must\n\t\t\t%% pass.\n\t\t\t{false, false, true, _, _} ->\n\t\t\t\ttrue;\n\t\t\t_ ->\n\t\t\t\tfalse\n\t\tend,\n\tcase PassedValidation of\n\t\tfalse ->\n\t\t\t%% When we accept chunks into the disk pool, we do not know where they will\n\t\t\t%% end up on the weave. 
Therefore, we cannot require all Merkle proofs pass\n\t\t\t%% the strict validation rules taking effect only after\n\t\t\t%% ar_block:strict_data_split_threshold() or allow the merkle tree offset rebases\n\t\t\t%% supported after the yet another special weave threshold.\n\t\t\t%% Instead we note down whether the chunk passes the strict and rebase validations\n\t\t\t%% and take it into account here where the chunk is associated with a global weave\n\t\t\t%% offset.\n\t\t\t?LOG_INFO([{event, disk_pool_chunk_from_bad_split},\n\t\t\t\t\t{absolute_end_offset, AbsoluteEndOffset},\n\t\t\t\t\t{merkle_rebase_threshold, get_merkle_rebase_threshold()},\n\t\t\t\t\t{strict_data_split_threshold, ar_block:strict_data_split_threshold()},\n\t\t\t\t\t{passed_base, PassedBase}, {passed_strict, PassedStrictValidation},\n\t\t\t\t\t{passed_rebase, PassedRebaseValidation},\n\t\t\t\t\t{relative_offset, Offset},\n\t\t\t\t\t{data_path_hash, ar_util:encode(DataPathHash)},\n\t\t\t\t\t{data_root, ar_util:encode(DataRoot)}]),\n\t\t\tgen_server:cast(self(), {process_disk_pool_chunk_offsets, Iterator,\n\t\t\t\t\tMayConclude, Args}),\n\t\t\t{noreply, State};\n\t\ttrue ->\n\t\t\tcase AbsoluteEndOffset > DiskPoolThreshold of\n\t\t\t\ttrue ->\n\t\t\t\t\tprocess_disk_pool_immature_chunk_offset(Iterator, TXRoot, TXPath,\n\t\t\t\t\t\t\tAbsoluteEndOffset, Args, State);\n\t\t\t\tfalse ->\n\t\t\t\t\tprocess_disk_pool_matured_chunk_offset(Iterator, TXRoot, TXPath,\n\t\t\t\t\t\t\tAbsoluteEndOffset, MayConclude, Args, State)\n\t\t\tend\n\tend.\n\nprocess_disk_pool_immature_chunk_offset(Iterator, TXRoot, TXPath, AbsoluteEndOffset, Args,\n\t\tState) ->\n\t#sync_data_state{ store_id = StoreID } = State,\n\tcase ar_sync_record:is_recorded(AbsoluteEndOffset, ar_data_sync, StoreID) of\n\t\t{true, unpacked} ->\n\t\t\t%% Pass MayConclude as false because we have encountered an offset\n\t\t\t%% above the disk pool threshold => we need to keep the chunk in the\n\t\t\t%% disk pool for now and not pack and move to the offset-based storage.\n\t\t\t%% The motivation is to keep chain reorganisations cheap.\n\t\t\tgen_server:cast(self(), {process_disk_pool_chunk_offsets, Iterator, false, Args}),\n\t\t\t{noreply, State};\n\t\tfalse ->\n\t\t\t{Offset, _, ChunkSize, DataRoot, DataPathHash, ChunkDataKey, Key, _, _, _} = Args,\n\t\t\tcase update_chunks_index({AbsoluteEndOffset, Offset, ChunkDataKey,\n\t\t\t\t\tTXRoot, DataRoot, TXPath, ChunkSize, unpacked}, false, State) of\n\t\t\t\tok ->\n\t\t\t\t\tgen_server:cast(self(), {process_disk_pool_chunk_offsets, Iterator, false,\n\t\t\t\t\t\t\tArgs}),\n\t\t\t\t\t{noreply, State};\n\t\t\t\t{error, Reason} ->\n\t\t\t\t\t?LOG_WARNING([{event, failed_to_index_disk_pool_chunk},\n\t\t\t\t\t\t\t{reason, io_lib:format(\"~p\", [Reason])},\n\t\t\t\t\t\t\t{data_path_hash, ar_util:encode(DataPathHash)},\n\t\t\t\t\t\t\t{data_root, ar_util:encode(DataRoot)},\n\t\t\t\t\t\t\t{absolute_end_offset, AbsoluteEndOffset},\n\t\t\t\t\t\t\t{relative_offset, Offset},\n\t\t\t\t\t\t\t{chunk_data_key, ar_util:encode(element(5, Args))}]),\n\t\t\t\t\tgen_server:cast(self(), process_disk_pool_item),\n\t\t\t\t\t{noreply, deregister_currently_processed_disk_pool_key(Key, State)}\n\t\t\tend\n\tend.\n\nprocess_disk_pool_matured_chunk_offset(Iterator, TXRoot, TXPath, AbsoluteEndOffset, MayConclude,\n\t\tArgs, State) ->\n\t%% The chunk has received a decent number of confirmations so we put it in storage\n\t%% module(s). If we have no storage modules configured covering this offset, proceed to\n\t%% the next offset. 
If there are several suitable storage modules, send the chunk\n\t%% to those modules that do not have it synced yet.\n\t#sync_data_state{ store_id = DefaultStoreID } = State,\n\t{Offset, _, ChunkSize, DataRoot, DataPathHash, ChunkDataKey, Key, _PassedBaseValidation,\n\t\t\t_PassedStrictValidation, _PassedRebaseValidation} = Args,\n\tFindStorageModules =\n\t\tcase ar_storage_module:get_all(AbsoluteEndOffset - ChunkSize, AbsoluteEndOffset) of\n\t\t\t[] ->\n\t\t\t\tgen_server:cast(self(), {process_disk_pool_chunk_offsets, Iterator,\n\t\t\t\t\t\tMayConclude, Args}),\n\t\t\t\t{noreply, State};\n\t\t\tModules ->\n\t\t\t\t[ar_storage_module:id(Module) || Module <- Modules]\n\t\tend,\n\tIsBlacklisted =\n\t\tcase FindStorageModules of\n\t\t\t{noreply, State2} ->\n\t\t\t\t{noreply, State2};\n\t\t\tStoreIDs ->\n\t\t\t\tcase ar_tx_blacklist:is_byte_blacklisted(AbsoluteEndOffset) of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tgen_server:cast(self(), {process_disk_pool_chunk_offsets, Iterator,\n\t\t\t\t\t\t\t\tMayConclude, Args}),\n\t\t\t\t\t\t{noreply, remove_recently_processed_disk_pool_offset(AbsoluteEndOffset,\n\t\t\t\t\t\t\t\tChunkDataKey, State)};\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\tStoreIDs\n\t\t\t\tend\n\t\tend,\n\tIsSynced =\n\t\tcase IsBlacklisted of\n\t\t\t{noreply, State3} ->\n\t\t\t\t{noreply, State3};\n\t\t\tStoreIDs2 ->\n\t\t\t\tcase filter_storage_modules_by_synced_offset(AbsoluteEndOffset, StoreIDs2) of\n\t\t\t\t\t[] ->\n\t\t\t\t\t\tgen_server:cast(self(), {process_disk_pool_chunk_offsets, Iterator,\n\t\t\t\t\t\t\t\tMayConclude, Args}),\n\t\t\t\t\t\t{noreply, remove_recently_processed_disk_pool_offset(AbsoluteEndOffset,\n\t\t\t\t\t\t\t\tChunkDataKey, State)};\n\t\t\t\t\tStoreIDs3 ->\n\t\t\t\t\t\tStoreIDs3\n\t\t\t\tend\n\t\tend,\n\tIsProcessed =\n\t\tcase IsSynced of\n\t\t\t{noreply, State4} ->\n\t\t\t\t{noreply, State4};\n\t\t\tStoreIDs4 ->\n\t\t\t\tcase is_recently_processed_offset(AbsoluteEndOffset, ChunkDataKey, State) of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tgen_server:cast(self(), {process_disk_pool_chunk_offsets, Iterator,\n\t\t\t\t\t\t\t\tfalse, Args}),\n\t\t\t\t\t\t{noreply, State};\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\tStoreIDs4\n\t\t\t\tend\n\t\tend,\n\tIsChunkCacheFull =\n\t\tcase IsProcessed of\n\t\t\t{noreply, State5} ->\n\t\t\t\t{noreply, State5};\n\t\t\tStoreIDs5 ->\n\t\t\t\tcase is_chunk_cache_full() of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tgen_server:cast(self(), {process_disk_pool_chunk_offsets, Iterator,\n\t\t\t\t\t\t\t\tfalse, Args}),\n\t\t\t\t\t\t{noreply, State};\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\tStoreIDs5\n\t\t\t\tend\n\t\tend,\n\tcase IsChunkCacheFull of\n\t\t{noreply, State6} ->\n\t\t\t{noreply, State6};\n\t\tStoreIDs6 ->\n\t\t\tcase read_chunk(AbsoluteEndOffset, ChunkDataKey, DefaultStoreID) of\n\t\t\t\tnot_found ->\n\t\t\t\t\t?LOG_ERROR([{event, disk_pool_chunk_not_found},\n\t\t\t\t\t\t\t{data_path_hash, ar_util:encode(DataPathHash)},\n\t\t\t\t\t\t\t{data_root, ar_util:encode(DataRoot)},\n\t\t\t\t\t\t\t{absolute_end_offset, AbsoluteEndOffset},\n\t\t\t\t\t\t\t{relative_offset, Offset},\n\t\t\t\t\t\t\t{chunk_data_key, ar_util:encode(element(5, Args))}]),\n\t\t\t\t\tgen_server:cast(self(), {process_disk_pool_chunk_offsets, Iterator,\n\t\t\t\t\t\t\tMayConclude, Args}),\n\t\t\t\t\t{noreply, State};\n\t\t\t\t{error, Reason2} ->\n\t\t\t\t\t?LOG_ERROR([{event, failed_to_read_disk_pool_chunk},\n\t\t\t\t\t\t\t{reason, io_lib:format(\"~p\", [Reason2])},\n\t\t\t\t\t\t\t{data_path_hash, ar_util:encode(DataPathHash)},\n\t\t\t\t\t\t\t{data_root, 
ar_util:encode(DataRoot)},\n\t\t\t\t\t\t\t{absolute_end_offset, AbsoluteEndOffset},\n\t\t\t\t\t\t\t{relative_offset, Offset},\n\t\t\t\t\t\t\t{chunk_data_key, ar_util:encode(element(5, Args))}]),\n\t\t\t\t\tgen_server:cast(self(), process_disk_pool_item),\n\t\t\t\t\t{noreply, deregister_currently_processed_disk_pool_key(Key, State)};\n\t\t\t\t{ok, {Chunk, DataPath}} ->\n\t\t\t\t\tincrement_chunk_cache_size(),\n\t\t\t\t\tArgs2 = {DataRoot, AbsoluteEndOffset, TXPath, TXRoot, DataPath, unpacked,\n\t\t\t\t\t\t\tOffset, ChunkSize, Chunk, Chunk, none, none},\n\t\t\t\t\t[gen_server:cast(name(StoreID6), {pack_and_store_chunk, Args2})\n\t\t\t\t\t\t|| StoreID6 <- StoreIDs6],\n\t\t\t\t\tgen_server:cast(self(), {process_disk_pool_chunk_offsets, Iterator,\n\t\t\t\t\t\t\tfalse, Args}),\n\t\t\t\t\t{noreply, cache_recently_processed_offset(AbsoluteEndOffset, ChunkDataKey,\n\t\t\t\t\t\t\tState)}\n\t\t\tend\n\tend.\n\nremove_recently_processed_disk_pool_offset(Offset, ChunkDataKey, State) ->\n\t#sync_data_state{ recently_processed_disk_pool_offsets = Map } = State,\n\tcase maps:get(Offset, Map, not_found) of\n\t\tnot_found ->\n\t\t\tState;\n\t\tSet ->\n\t\t\tSet2 = sets:del_element(ChunkDataKey, Set),\n\t\t\tMap2 =\n\t\t\t\tcase sets:is_empty(Set2) of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tmaps:remove(Offset, Map);\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\tmaps:put(Offset, Set2, Map)\n\t\t\t\tend,\n\t\t\tState#sync_data_state{ recently_processed_disk_pool_offsets = Map2 }\n\tend.\n\nis_recently_processed_offset(Offset, ChunkDataKey, State) ->\n\t#sync_data_state{ recently_processed_disk_pool_offsets = Map } = State,\n\tSet = maps:get(Offset, Map, sets:new()),\n\tsets:is_element(ChunkDataKey, Set).\n\ncache_recently_processed_offset(Offset, ChunkDataKey, State) ->\n\t#sync_data_state{ recently_processed_disk_pool_offsets = Map } = State,\n\tSet = maps:get(Offset, Map, sets:new()),\n\tMap2 =\n\t\tcase sets:is_element(ChunkDataKey, Set) of\n\t\t\tfalse ->\n\t\t\t\tar_util:cast_after(?CACHE_RECENTLY_PROCESSED_DISK_POOL_OFFSET_LIFETIME_MS,\n\t\t\t\t\t\tself(), {remove_recently_processed_disk_pool_offset, Offset,\n\t\t\t\t\t\tChunkDataKey}),\n\t\t\t\tmaps:put(Offset, sets:add_element(ChunkDataKey, Set), Map);\n\t\t\ttrue ->\n\t\t\t\tMap\n\t\tend,\n\tState#sync_data_state{ recently_processed_disk_pool_offsets = Map2 }.\n\nfilter_storage_modules_by_synced_offset(AbsoluteEndOffset, [StoreID | StoreIDs]) ->\n\tcase ar_sync_record:is_recorded(AbsoluteEndOffset, ar_data_sync, StoreID) of\n\t\t{true, _Packing} ->\n\t\t\tfilter_storage_modules_by_synced_offset(AbsoluteEndOffset, StoreIDs);\n\t\tfalse ->\n\t\t\t[StoreID | filter_storage_modules_by_synced_offset(AbsoluteEndOffset, StoreIDs)]\n\tend;\nfilter_storage_modules_by_synced_offset(_, []) ->\n\t[].\n\nprocess_unpacked_chunk(ChunkArgs, Args, State) ->\n\t{_AbsoluteTXStartOffset, _TXSize, _DataPath, _TXPath, _DataRoot, _Chunk, ChunkID,\n\t\t\t_ChunkEndOffset, Peer, Byte} = Args,\n\t{_Packing, Chunk, _AbsoluteEndOffset, _TXRoot, ChunkSize} = ChunkArgs,\n\tcase validate_chunk_id_size(Chunk, ChunkID, ChunkSize) of\n\t\tfalse ->\n\t\t\tdecrement_chunk_cache_size(),\n\t\t\tprocess_invalid_fetched_chunk(Peer, Byte, State);\n\t\ttrue ->\n\t\t\tprocess_valid_fetched_chunk(ChunkArgs, Args, State)\n\tend.\n\nvalidate_chunk_id_size(Chunk, ChunkID, ChunkSize) ->\n\tcase ar_tx:generate_chunk_id(Chunk) == ChunkID of\n\t\tfalse ->\n\t\t\tfalse;\n\t\ttrue ->\n\t\t\tChunkSize == byte_size(Chunk)\n\tend.\n\nlog_sufficient_disk_space(StoreID) ->\n\tar:console(\"~nThe node has detected available disk 
space and resumed syncing data \"\n\t\t\t\"into the storage module ~s.~n\", [StoreID]),\n\t?LOG_INFO([{event, storage_module_resumed_syncing}, {storage_module, StoreID}]).\n\nlog_insufficient_disk_space(StoreID) ->\n\tar:console(\"~nThe node has stopped syncing data into the storage module ~s due to \"\n\t\t\t\"the insufficient disk space.~n\", [StoreID]),\n\t?LOG_INFO([{event, storage_module_stopped_syncing},\n\t\t\t{reason, insufficient_disk_space}, {storage_module, StoreID}]).\n\ndata_root_index_iterator_v2(DataRootKey, TXStartOffset, DataRootIndex) ->\n\t{DataRootKey, TXStartOffset, TXStartOffset, DataRootIndex, 1}.\n\ndata_root_index_next_v2({_, _, _, _, Count}, Limit) when Count > Limit ->\n\tnone;\ndata_root_index_next_v2({_, 0, _, _, _}, _Limit) ->\n\tnone;\ndata_root_index_next_v2(Args, _Limit) ->\n\t{DataRootKey, TXStartOffset, LatestTXStartOffset, DataRootIndex, Count} = Args,\n\t<< DataRoot:32/binary, TXSize:?OFFSET_KEY_BITSIZE >> = DataRootKey,\n\tKey = data_root_key_v2(DataRoot, TXSize, TXStartOffset - 1),\n\tcase ar_kv:get_prev(DataRootIndex, Key) of\n\t\tnone ->\n\t\t\tnone;\n\t\t{ok, << DataRoot:32/binary, TXSizeSize:8, TXSize:(TXSizeSize * 8),\n\t\t\t\tTXStartOffset2Size:8, TXStartOffset2:(TXStartOffset2Size * 8) >>, TXPath} ->\n\t\t\t{ok, TXRoot} = ar_merkle:extract_root(TXPath),\n\t\t\t{{TXStartOffset2, TXRoot, TXPath},\n\t\t\t\t\t{DataRootKey, TXStartOffset2, LatestTXStartOffset, DataRootIndex,\n\t\t\t\t\t\t\tCount + 1}};\n\t\t{ok, _, _} ->\n\t\t\tnone\n\tend.\n\ndata_root_index_reset({DataRootKey, _, TXStartOffset, DataRootIndex, _}) ->\n\t{DataRootKey, TXStartOffset, TXStartOffset, DataRootIndex, 1}.\n\ndata_root_index_get_key(Iterator) ->\n\telement(1, Iterator).\n\ndata_root_index_iterator(TXRootMap) ->\n\t{maps:fold(\n\t\tfun(TXRoot, Map, Acc) ->\n\t\t\tmaps:fold(\n\t\t\t\tfun(Offset, TXPath, Acc2) ->\n\t\t\t\t\tgb_sets:insert({Offset, TXRoot, TXPath}, Acc2)\n\t\t\t\tend,\n\t\t\t\tAcc,\n\t\t\t\tMap\n\t\t\t)\n\t\tend,\n\t\tgb_sets:new(),\n\t\tTXRootMap\n\t), 0}.\n\ndata_root_index_next({_Index, Count}, Limit) when Count >= Limit ->\n\tnone;\ndata_root_index_next({Index, Count}, _Limit) ->\n\tcase gb_sets:is_empty(Index) of\n\t\ttrue ->\n\t\t\tnone;\n\t\tfalse ->\n\t\t\t{Element, Index2} = gb_sets:take_largest(Index),\n\t\t\t{Element, {Index2, Count + 1}}\n\tend.\n\nrecord_chunk_cache_size_metric() ->\n\tcase ets:lookup(ar_data_sync_state, chunk_cache_size) of\n\t\t[{_, Size}] ->\n\t\t\tprometheus_gauge:set(chunk_cache_size, Size);\n\t\t_ ->\n\t\t\tok\n\tend.\n\n%% @doc Get data roots for a given offset (>= BlockStartOffset, < BlockEndOffset) from local indices.\n%% Return only entries corresponding to non-empty transactions.\n%% Return the complete list of entries in the order they appear in the data root index,\n%% which corresponds to sorted #tx records in the block.\n%% Return {ok, {TXRoot, BlockSize, [{DataRoot, TXSize, TXStartOffset, TXPath}, ...]}}\n%% or {error, Reason}.\nget_data_roots_for_offset(Offset) ->\n\tcase Offset >= get_disk_pool_threshold() of\n\t\ttrue ->\n\t\t\t{error, not_found};\n\t\tfalse ->\n\t\t\t{BlockStart, BlockEnd, TXRoot} = ar_block_index:get_block_bounds(Offset),\n\t\t\ttrue = Offset >= BlockStart andalso Offset < BlockEnd,\n\t\t\tStoreID = ?DEFAULT_MODULE,\n\t\t\tDB = {data_root_offset_index, StoreID},\n\t\t\tcase ar_kv:get(DB, << BlockStart:?OFFSET_KEY_BITSIZE >>) of\n\t\t\t\tnot_found ->\n\t\t\t\t\t{error, not_found};\n\t\t\t\t{ok, Bin} ->\n\t\t\t\t\t{TXRoot2, BlockSize, DataRootIndexKeySet} = binary_to_term(Bin),\n\t\t\t\t\ttrue = 
TXRoot2 == TXRoot,\n\t\t\t\t\t{ok, {TXRoot, BlockSize, lists:sort(\n\t\t\t\t\t\tfun({_DataRoot1, _TXSize1, TXStart1, _TXPath1}, {_DataRoot2, _TXSize2, TXStart2, _TXPath2}) ->\n\t\t\t\t\t\t\tTXStart1 < TXStart2\n\t\t\t\t\t\tend,\n\t\t\t\t\t\tsets:fold(\n\t\t\t\t\t\t\tfun(<< DataRoot:32/binary, TXSize:?OFFSET_KEY_BITSIZE >>, Acc) ->\n\t\t\t\t\t\t\t\tread_data_root_entries(DataRoot, TXSize, BlockStart, BlockEnd, StoreID, Acc)\n\t\t\t\t\t\t\tend,\n\t\t\t\t\t\t[],\n\t\t\t\t\t\tDataRootIndexKeySet\n\t\t\t\t\t))}}\n\t\t\tend\n\tend.\n\n%% @doc Return true if the data roots for the given block range are synced, false otherwise.\n%% Assert the given BlockEnd and TXRoot match the stored values.\nare_data_roots_synced(BlockStart, BlockEnd, TXRoot) ->\n\tDB = {data_root_offset_index, ?DEFAULT_MODULE},\n\tcase ar_kv:get(DB, << BlockStart:?OFFSET_KEY_BITSIZE >>) of\n\t\tnot_found ->\n\t\t\tfalse;\n\t\t{ok, Bin} ->\n\t\t\t{TXRoot2, BlockSize, _DataRootIndexKeySet} = binary_to_term(Bin),\n\t\t\ttrue = TXRoot2 == TXRoot,\n\t\t\ttrue = BlockSize == BlockEnd - BlockStart,\n\t\t\ttrue\n\tend.\n\nread_data_root_entries(_DataRoot, _TXSize, _BlockStart, 0, _StoreID, Acc) ->\n\tAcc;\nread_data_root_entries(DataRoot, TXSize, BlockStart, Cursor, StoreID, Acc) ->\n\tKey = data_root_key_v2(DataRoot, TXSize, Cursor - 1),\n\tcase ar_kv:get_prev({data_root_index, StoreID}, Key) of\n\t\t{ok, << DataRoot:32/binary, TXSizeSize:8, TXSize:(TXSizeSize * 8),\n\t\t\t\tTXStartSize:8, TXStart:(TXStartSize * 8) >>, TXPath} when TXStart >= BlockStart ->\n\t\t\t[{DataRoot, TXSize, TXStart, TXPath}\n\t\t\t\t| read_data_root_entries(DataRoot, TXSize, BlockStart, TXStart, StoreID, Acc)];\n\t\t{ok, _, _} ->\n\t\t\tAcc;\n\t\tnone ->\n\t\t\tAcc\n\tend.\n\nmaybe_run_footprint_record_initialization(State) ->\n\t#sync_data_state{ store_id = StoreID } = State,\n\tPacking = ar_storage_module:get_packing(StoreID),\n\t{FootprintRecordCursor, InitializationComplete} = get_footprint_record_initialization_state(State),\n\tcase InitializationComplete of\n\t\ttrue ->\n\t\t\tok;\n\t\tfalse ->\n\t\t\t?LOG_INFO([{event, initializing_footprint_record},\n\t\t\t\t\t{cursor, FootprintRecordCursor}, {store_id, StoreID},\n\t\t\t\t\t{packing, ar_serialize:encode_packing(Packing, false)}]),\n\t\t\tgen_server:cast(self(), {initialize_footprint_record, FootprintRecordCursor, Packing})\n\tend.\n\nget_footprint_record_initialization_state(State) ->\n\t#sync_data_state{ migrations_index = MI } = State,\n\tcase ar_kv:get(MI, ?FOOTPRINT_MIGRATION_CURSOR_KEY) of\n\t\tnot_found ->\n\t\t\t{0, false};\n\t\t{ok, <<\"complete\">>} ->\n\t\t\t{complete, true};\n\t\t{ok, CursorBin} ->\n\t\t\tCursor = binary:decode_unsigned(CursorBin),\n\t\t\t{Cursor, false}\n\tend.\n\n%% @doc Initialize the footprint record from the ar_data_sync record.\n%% We don't filter by packing to ensure all synced intervals are migrated.\ninitialize_footprint_record(complete, _Packing, State) ->\n\tState;\ninitialize_footprint_record(Cursor, Packing, State) ->\n\t#sync_data_state{\n\t\tstore_id = StoreID,\n\t\trange_end = RangeEnd,\n\t\tmigrations_index = MI\n\t} = State,\n\tBatchSize = ?FOOTPRINT_MIGRATION_BATCH_SIZE,\n\n\tcase ar_sync_record:get_next_synced_interval(Cursor, RangeEnd, ar_data_sync, StoreID) of\n\t\tnot_found ->\n\t\t\tok = ar_kv:put(MI, ?FOOTPRINT_MIGRATION_CURSOR_KEY, <<\"complete\">>),\n\t\t\t?LOG_INFO([{event, footprint_record_initialized}, {store_id, StoreID}]),\n\t\t\tState;\n\t\t{IntervalEnd, IntervalStart} ->\n\t\t\tCursor2 = max(Cursor, IntervalStart),\n\t\t\tEndPosition = 
min(Cursor2 + (BatchSize * ?DATA_CHUNK_SIZE), IntervalEnd),\n\t\t\tinitialize_footprint_range(Cursor2, EndPosition, Packing, StoreID),\n\t\t\tNewCursor = EndPosition,\n\t\t\tok = ar_kv:put(MI, ?FOOTPRINT_MIGRATION_CURSOR_KEY, binary:encode_unsigned(NewCursor)),\n\t\t\tar_util:cast_after(1_000, self(), {initialize_footprint_record, NewCursor, Packing}),\n\t\t\tState\n\tend.\n\n%% @doc Migrate chunks in the given range to footprint records.\ninitialize_footprint_range(Start, End, _Packing, _StoreID) when Start >= End ->\n\tok;\ninitialize_footprint_range(Start, End, Packing, StoreID) ->\n\tar_footprint_record:add(Start + 1, Packing, StoreID),\n\tinitialize_footprint_range(Start + ?DATA_CHUNK_SIZE, End, Packing, StoreID).\n"
  },
  {
    "path": "apps/arweave/src/ar_data_sync_coordinator.erl",
    "content": "%%% @doc Coordinates data sync tasks between worker processes and peer workers.\n%%%\n%%% This module acts as a coordinator that:\n%%% - Dispatches sync tasks to ar_data_sync_worker processes\n%%% - Coordinates with ar_peer_worker processes (one per peer) that manage:\n%%%   - Peer task queues and dispatch limits\n%%%   - Footprint management (grouping tasks to limit entropy cache usage)\n%%%   - Peer performance tracking\n%%% - Performs periodic rebalancing based on peer performance metrics\n%%%\n%%% Architecture:\n%%% - Each peer has its own ar_peer_worker process that manages peer-specific state\n%%%   (queues, footprints, dispatch limits, performance metrics)\n%%% - This coordinator manages the pool of ar_data_sync_worker processes and\n%%%   dispatches tasks from peer queues to available workers\n%%% - Worker selection uses round-robin with load balancing\n%%%\n%%% Task Flow:\n%%% 1. Tasks are enqueued to the appropriate ar_peer_worker\n%%% 2. Peer workers store those tasks in either\n%%%    - their task_queue (ready for dispatch) if they belong to an active footprint\n%%%    - or in a waiting queue (not ready for dispatch) if they don't belong to an\n%%%      active footprint\n%%% 3. Periodically the coordinator pulls tasks from peer queues and dispatches to workers.\n%%%    This is event based and happens in response to one of these events:\n%%%    - a new task is sent to the coordinator\n%%%    - a task is completed by an ar_data_sync_worker\n%%% 4. On task completion, peer workers update metrics and notify coordinator.\n%%% 5. When a footprint completes, a new footprint is activated. Footprint activation is\n%%%    handled both by the ar_peer_worker (if it has waiting tasks) or by the coordinator\n%%%    (if the ar_peer_worker does not have waiting tasks, coordinator will find another\n%%%    peer that does). Note: footprint activation does not immediately dispatch tasks.\n%%% \n%%% Tasks can be in one of three states:\n%%% - waiting: the task belongs to an inactive footprint and is stored in a\n%%%            \"waiting\" queue on the ar_peer_worker. A task in the \"waiting\"\n%%%            state contributes to the total_queued_count, but can not be dispatched\n%%%            until its footprint becomes active.\n%%% - queued: the task belongs to an activae footprint and is stored in the\n%%%           ar_peer_worker's task queue. It will be dispatched as soon as\n%%%           an ar_data_sync_worker becomes available.  A task in the \"queued\"\n%%%           state contributes to the total_queued_count.\n%%% - dispatched: the task has been dispatched to an ar_data_sync_worker and is\n%%%            being processed. A task in the \"dispatched\" state contributes to the\n%%%            total_dispatched_count.\n%%% \n%%% Footprints can be in one of two states:\n%%% - active: All tasks belonging to an active footprint are moved to the\n%%%           ar_peer_worker's task queue and are eligible to be dispatched.\n%%% - inactive: All tasks belonging to an inactive footprint are stored in the\n%%%             ar_peer_worker's \"waiting\" queue. 
They are not eligible to be \n%%%             dispatched until their footprint becomes active.\n%%%\n-module(ar_data_sync_coordinator).\n\n-behaviour(gen_server).\n\n-export([start_link/1, register_workers/0, is_syncing_enabled/0, ready_for_work/0]).\n\n-export([init/1, handle_cast/2, handle_call/3, handle_info/2, terminate/2]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave/include/ar_sup.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"arweave/include/ar_peers.hrl\").\n\n-define(REBALANCE_FREQUENCY_MS, 10*1000).\n\n-record(state, {\n\ttotal_queued_count = 0,       %% total count of non-dispatched tasks across all peers\n\ttotal_dispatched_count = 0,   %% total count tasks currently assigned to a worker\n\tworkers = queue:new(),\n\tdispatched_count_per_worker = #{},\n\tknown_peers = #{},          %% #{Peer => Pid} - cached peer worker Pids\n\t%% Global footprint tracking\n\ttotal_active_footprints = 0,  %% count of active footprints across all peers\n\tmax_footprints = 0            %% global max footprints limit\n}).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Start the server.\nstart_link(Workers) ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, Workers, []).\n\nregister_workers() ->\n\tcase is_syncing_enabled() of\n\t\ttrue ->\n\t\t\t{Workers, WorkerNames} = register_sync_workers(),\n\t\t\tWorkerMaster = ?CHILD_WITH_ARGS(\n\t\t\t\tar_data_sync_coordinator, worker, ar_data_sync_coordinator,\n\t\t\t\t[WorkerNames]),\n\t\t\t\t[WorkerMaster] ++ Workers;\n\t\tfalse ->\n\t\t\t[]\n\tend.\n\nregister_sync_workers() ->\n\t{ok, Config} = arweave_config:get_env(),\n\t{Workers, WorkerNames} = lists:foldl(\n\t\tfun(Number, {AccWorkers, AccWorkerNames}) ->\n\t\t\tName = list_to_atom(\"ar_data_sync_worker_\" ++ integer_to_list(Number)),\n\t\t\tWorker = ?CHILD_WITH_ARGS(ar_data_sync_worker, worker, Name, [Name, sync]),\n\t\t\t{[Worker | AccWorkers], [Name | AccWorkerNames]}\n\t\tend,\n\t\t{[], []},\n\t\tlists:seq(1, Config#config.sync_jobs)\n\t),\n\t{Workers, WorkerNames}.\n\n%% @doc Returns true if syncing is enabled (i.e. sync_jobs > 0).\nis_syncing_enabled() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tConfig#config.sync_jobs > 0.\n\n%% @doc Returns true if we can accept new tasks. Will always return false if syncing is\n%% disabled (i.e. 
sync_jobs = 0).\nready_for_work() ->\n\ttry\n\t\tgen_server:call(?MODULE, ready_for_work, 1000)\n\tcatch\n\t\texit:{timeout,_} ->\n\t\t\tfalse\n\tend.\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\n\ninit(Workers) ->\n\tar_util:cast_after(?REBALANCE_FREQUENCY_MS, ?MODULE, rebalance_peers),\n\tMaxFootprints = calculate_max_footprints(),\n\t?LOG_INFO([{event, init}, {module, ?MODULE}, {workers, Workers}, \n\t\t{max_footprints, MaxFootprints}]),\n\t{ok, #state{\n\t\tworkers = queue:from_list(Workers),\n\t\tmax_footprints = MaxFootprints\n\t}}.\n\ncalculate_max_footprints() ->\n\t{ok, Config} = arweave_config:get_env(),\n\t%% Calculate global max footprints based on entropy cache size\n\tFootprintSize = ar_block:get_replica_2_9_footprint_size(),\n\tmax(1, (Config#config.replica_2_9_entropy_cache_size_mb * ?MiB) div FootprintSize).\n\n\nhandle_call(ready_for_work, _From, State) ->\n\tWorkerCount = queue:len(State#state.workers),\n\tTotalTaskCount = State#state.total_dispatched_count + State#state.total_queued_count,\n\tReadyForWork = TotalTaskCount < max_tasks(WorkerCount),\n\t{reply, ReadyForWork, State};\n\nhandle_call({reset_worker, Worker}, _From, State) ->\n\tActiveCount = maps:get(Worker, State#state.dispatched_count_per_worker, 0),\n\tState2 = State#state{\n\t\ttotal_dispatched_count = State#state.total_dispatched_count - ActiveCount,\n\t\tdispatched_count_per_worker = maps:put(Worker, 0, State#state.dispatched_count_per_worker)\n\t},\n\t{reply, ok, State2};\n\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {request, Request}]),\n\t{reply, ok, State}.\n\nhandle_cast({sync_range, Args}, State) ->\n\tcase queue:is_empty(State#state.workers) of\n\t\ttrue ->\n\t\t\t{noreply, State};\n\t\tfalse ->\n\t\t\t{_Start, _End, Peer, _TargetStoreID, FootprintKey} = Args,\n\t\t\t%% Track this peer and get cached Pid\n\t\t\t{Pid, State1} = maybe_add_peer(Peer, State),\n\t\t\tcase Pid of\n\t\t\t\tundefined ->\n\t\t\t\t\t{noreply, State1};\n\t\t\t\t_ ->\n\t\t\t\t\t%% Check if there's global capacity for new footprints\n\t\t\t\t\tHasCapacity = State1#state.total_active_footprints < State1#state.max_footprints,\n\t\t\t\t\tWorkerCount = queue:len(State1#state.workers),\n\t\t\t\t\t{WasActivated, TasksToDispatch} = ar_peer_worker:enqueue_task(\n\t\t\t\t\t\tPid, FootprintKey, Args, HasCapacity, WorkerCount),\n\t\t\t\t\tState2 = case WasActivated of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\tState1#state{ \n\t\t\t\t\t\t\t\ttotal_active_footprints = State1#state.total_active_footprints + 1,\n\t\t\t\t\t\t\t\ttotal_queued_count = State1#state.total_queued_count + 1\n\t\t\t\t\t\t\t};\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tState1#state{\n\t\t\t\t\t\t\t\ttotal_queued_count = State1#state.total_queued_count + 1\n\t\t\t\t\t\t\t}\n\t\t\t\t\tend,\n\t\t\t\t\t%% Dispatch tasks to workers\n\t\t\t\t\tState3 = dispatch_tasks(Pid, TasksToDispatch, State2),\n\t\t\t\t\t{noreply, State3}\n\t\t\tend\n\tend;\n\nhandle_cast({task_completed, {sync_range, {Worker, Result, Args, ElapsedNative}}}, State) ->\n\t{Start, End, Peer, _, _, FootprintKey} = Args,\n\tDataSize = End - Start,\n\tState2 = increment_dispatched_task_count(Worker, -1, State),\n\t%% Notify peer worker (handles footprint completion, performance rating)\n\tcase maps:find(Peer, State2#state.known_peers) of\n\t\t{ok, Pid} ->\n\t\t\tar_peer_worker:task_completed(Pid, FootprintKey, Result, ElapsedNative, 
DataSize);\n\t\terror ->\n\t\t\t%% Peer not in cache (shouldn't happen normally)\n\t\t\t?LOG_WARNING([{event, task_completed_unknown_peer}, {peer, ar_util:format_peer(Peer)}])\n\tend,\n\t%% Process all peer queues, starting with the peer that just completed\n\tState3 = process_all_peer_queues(Peer, State2),\n\t{noreply, State3};\n\nhandle_cast(rebalance_peers, State) ->\n\tar_util:cast_after(?REBALANCE_FREQUENCY_MS, ?MODULE, rebalance_peers),\n\t%% TODO: Add logic to purge empty peer workers (no queued tasks, no dispatched tasks).\n\tPeerPids = State#state.known_peers,  %% #{Peer => Pid}\n\tPeers = maps:keys(PeerPids),\n\tAllPeerPerformances = ar_peers:get_peer_performances(Peers),\n\tTargets = calculate_targets(PeerPids, AllPeerPerformances, State),\n\tState2 = rebalance_peers(PeerPids, AllPeerPerformances, Targets, State),\n\t{noreply, State2};\n\nhandle_cast({footprint_deactivated, _Peer}, State) ->\n\tNewCount = max(0, State#state.total_active_footprints - 1),\n\tState2 = State#state{ total_active_footprints = NewCount },\n\t%% Notify all peers that capacity is available so they can activate waiting footprints\n\tState3 = case NewCount < State2#state.max_footprints of\n\t\ttrue ->\n\t\t\tnotify_peers_capacity_available(State2);\n\t\tfalse ->\n\t\t\tState2\n\tend,\n\t{noreply, State3};\n\nhandle_cast({peer_worker_started, Peer, Pid}, State) ->\n\t%% Peer worker (re)started - update cached PID\n\tState2 = State#state{ known_peers = maps:put(Peer, Pid, State#state.known_peers) },\n\t{noreply, State2};\n\nhandle_cast(Cast, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {cast, Cast}]),\n\t{noreply, State}.\n\nhandle_info(Message, State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {module, ?MODULE}, {message, Message}]),\n\t{noreply, State}.\n\nterminate(Reason, _State) ->\n\t?LOG_INFO([{event, terminate}, {module, ?MODULE}, {reason, io_lib:format(\"~p\", [Reason])}]),\n\tok.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\n%%--------------------------------------------------------------------\n%% Peer queue management\n%%--------------------------------------------------------------------\n\n%% @doc If a peer has capacity, take tasks from its queue and dispatch them.\n%% Note: a new footprint will not be activated while dispatching tasks as only tasks\n%% belonging to an already active footprint will be in the ar_peer_worker's task queue\nprocess_peer_queue(Pid, State) ->\n\tWorkerCount = queue:len(State#state.workers),\n\tTasksToDispatch = ar_peer_worker:process_queue(Pid, WorkerCount),\n\tdispatch_tasks(Pid, TasksToDispatch, State).\n\n%% @doc Process all peer queues, with the priority peer processed first.\nprocess_all_peer_queues(PriorityPeer, State) ->\n\tcase queue:is_empty(State#state.workers) of\n\t\ttrue ->\n\t\t\tState;\n\t\tfalse ->\n\t\t\tPeerPids = State#state.known_peers,\n\t\t\tPriorityPid = maps:get(PriorityPeer, PeerPids, undefined),\n\t\t\tOtherPids = maps:values(maps:remove(PriorityPeer, PeerPids)),\n\t\t\tAllPids = case PriorityPid of\n\t\t\t\tundefined -> OtherPids;\n\t\t\t\t_ -> [PriorityPid | OtherPids]\n\t\t\tend,\n\t\t\tprocess_all_peer_queues2(AllPids, State)\n\tend.\n\nprocess_all_peer_queues2([], State) ->\n\tState;\nprocess_all_peer_queues2([Pid | Rest], State) ->\n\tState2 = process_peer_queue(Pid, State),\n\tprocess_all_peer_queues2(Rest, State2).\n\n%% @doc the maximum number of tasks we can have in 
process.\nmax_tasks(WorkerCount) ->\n\tWorkerCount * 50.\n\n%%--------------------------------------------------------------------\n%% Dispatch tasks to be run on workers\n%%--------------------------------------------------------------------\n\n%% @doc Dispatch tasks to workers.\n%% Caller must ensure workers are available before calling.\ndispatch_tasks(_Pid, [], State) ->\n\tState;\ndispatch_tasks(Pid, [Args | Rest], State) ->\n\t{Worker, State2} = get_worker(State),\n\t{Start, End, Peer, TargetStoreID, FootprintKey} = Args,\n\t%% Pass FootprintKey as 6th element so it comes back in task_completed\n\tgen_server:cast(Worker, {sync_range, {Start, End, Peer, TargetStoreID, 3, FootprintKey}}),\n\t%% When a task is dispatched it's removed from the ar_peer_worker's task queue and sent to\n\t%% an ar_data_sync_worker.\n\tState3 = increment_dispatched_task_count(Worker, 1, State2),\n\tState4 = State3#state{ total_queued_count = max(0, State3#state.total_queued_count - 1) },\n\tdispatch_tasks(Pid, Rest, State4).\n\n%%--------------------------------------------------------------------\n%% Rebalancing\n%%--------------------------------------------------------------------\n\n%% @doc Calculate rebalance parameters.\n%% PeerPids is #{Peer => Pid}\n%% Returns {WorkerCount, TargetLatency, TotalThroughput, TotalMaxDispatched}\ncalculate_targets(PeerPids, AllPeerPerformances, State) ->\n\tWorkerCount = queue:len(State#state.workers),\n\tPeers = maps:keys(PeerPids),\n\tTotalThroughput =\n\t\tlists:foldl(\n\t\t\tfun(Peer, Acc) -> \n\t\t\t\tPerformance = maps:get(Peer, AllPeerPerformances, #performance{}),\n\t\t\t\tAcc + Performance#performance.current_rating\n\t\t\tend, 0.0, Peers),\n    TotalLatency = \n\t\tlists:foldl(\n\t\t\tfun(Peer, Acc) -> \n\t\t\t\tPerformance = maps:get(Peer, AllPeerPerformances, #performance{}),\n\t\t\t\tAcc + Performance#performance.average_latency\n\t\t\tend, 0.0, Peers),\n\tTargetLatency = case length(Peers) > 0 of\n\t\ttrue -> TotalLatency / length(Peers);\n\t\tfalse -> 0.0\n\tend,\n\tTotalMaxDispatched = maps:fold(\n\t\tfun(_Peer, Pid, Acc) ->\n\t\t\tcase ar_peer_worker:get_max_dispatched(Pid) of\n\t\t\t\t{error, _} -> Acc;\n\t\t\t\tMaxDispatched -> MaxDispatched + Acc\n\t\t\tend\n\t\tend,\n\t\t0, PeerPids),\n\t?LOG_DEBUG([{event, sync_performance_targets},\n\t\t{worker_count, WorkerCount},\n\t\t{target_latency, TargetLatency},\n\t\t{total_throughput, TotalThroughput},\n\t\t{total_max_dispatched, TotalMaxDispatched}]),\n    {WorkerCount, TargetLatency, TotalThroughput, TotalMaxDispatched}.\n\n%% PeerPidsList is [{Peer, Pid}]\nrebalance_peers(PeerPids, AllPeerPerformances, Targets, State) ->\n\trebalance_peers2(maps:to_list(PeerPids), AllPeerPerformances, Targets, State).\n\nrebalance_peers2([], _AllPeerPerformances, _Targets, State) ->\n\tState;\nrebalance_peers2([{Peer, Pid} | Rest], AllPeerPerformances, Targets, State) ->\n\t{WorkerCount, TargetLatency, TotalThroughput, TotalMaxDispatched} = Targets,\n\tPerformance = maps:get(Peer, AllPeerPerformances, #performance{}),\n\t%% Calculate rebalance params (peer calculates FasterThanTarget from Performance)\n\tQueueScalingFactor = queue_scaling_factor(TotalThroughput, WorkerCount),\n\tWorkersStarved = TotalMaxDispatched < WorkerCount,\n\tRebalanceParams = {QueueScalingFactor, TargetLatency, WorkersStarved},\n\tResult = ar_peer_worker:rebalance(Pid, Performance, RebalanceParams),\n\tState2 = case Result of\n\t\t{shutdown, RemovedCount} ->\n\t\t\t%% Peer worker is idle and should be shutdown\n\t\t\t?LOG_INFO([{event, 
shutdown_idle_peer_worker},\n\t\t\t\t{peer, ar_util:format_peer(Peer)}]),\n\t\t\tar_peer_worker:stop(Pid),\n\t\t\tState#state{\n\t\t\t\ttotal_queued_count = max(0, State#state.total_queued_count - RemovedCount),\n\t\t\t\tknown_peers = maps:remove(Peer, State#state.known_peers)\n\t\t\t};\n\t\t{ok, RemovedCount} ->\n\t\t\tState#state{ total_queued_count = max(0, State#state.total_queued_count - RemovedCount) };\n\t\t{error, timeout} ->\n\t\t\t%% Peer worker timed out, skip it\n\t\t\tState\n\tend,\n\trebalance_peers2(Rest, AllPeerPerformances, Targets, State2).\n\n%% @doc Scaling factor for calculating per-peer max queue size.\n%% Peer worker calculates: MaxQueue = max(PeerThroughput * ScalingFactor, MIN_PEER_QUEUE)\nqueue_scaling_factor(0, _WorkerCount) -> infinity;\nqueue_scaling_factor(0.0, _WorkerCount) -> infinity;\nqueue_scaling_factor(TotalThroughput, WorkerCount) ->\n\tmax_tasks(WorkerCount) / TotalThroughput.\n\n%%--------------------------------------------------------------------\n%% Helpers\n%%--------------------------------------------------------------------\n\nincrement_dispatched_task_count(Worker, N, State) ->\n\tActiveCount = maps:get(Worker, State#state.dispatched_count_per_worker, 0) + N,\n\tState#state{\n\t\ttotal_dispatched_count = State#state.total_dispatched_count + N,\n\t\tdispatched_count_per_worker = maps:put(Worker, ActiveCount, State#state.dispatched_count_per_worker)\n\t}.\n\n%% @doc Add a peer to known_peers if not already present. Returns {Pid, State}.\n%% The Pid is cached so we don't have to do whereis + atom lookup on every call.\nmaybe_add_peer(Peer, State) ->\n\tcase State#state.known_peers of\n\t\t#{Peer := Pid} -> \n\t\t\t{Pid, State};\n\t\t_ -> \n\t\t\tcase ar_peer_worker:get_or_start(Peer) of\n\t\t\t\t{ok, Pid} ->\n\t\t\t\t\t{Pid, State#state{ known_peers = maps:put(Peer, Pid, State#state.known_peers) }};\n\t\t\t\t{error, _} ->\n\t\t\t\t\t{undefined, State}\n\t\t\tend\n\tend.\n\nget_worker(State) ->\n\tWorkerCount = queue:len(State#state.workers),\n\tAverageLoad = State#state.total_dispatched_count / WorkerCount,\n\tcycle_workers(AverageLoad, State).\n\ncycle_workers(AverageLoad, State) ->\n\t#state{ workers = Workers } = State,\n\t{{value, Worker}, Workers2} = queue:out(Workers),\n\tState2 = State#state{ workers = queue:in(Worker, Workers2) },\n\tActiveCount = maps:get(Worker, State2#state.dispatched_count_per_worker, 0),\n\tcase ActiveCount =< AverageLoad of\n\t\ttrue ->\n\t\t\t{Worker, State2};\n\t\tfalse ->\n\t\t\tcycle_workers(AverageLoad, State2)\n\tend.\n\n%% Notify all known peers that global footprint capacity is available.\nnotify_peers_capacity_available(#state{ known_peers = KnownPeers } = State) ->\n\t%% Iterate through peers, stop when one activates a footprint to avoid over-activation\n\tPeerList = maps:to_list(KnownPeers),\n\tcase try_activate_footprint(PeerList) of\n\t\ttrue ->\n\t\t\tState#state{ total_active_footprints = State#state.total_active_footprints + 1 };\n\t\tfalse ->\n\t\t\tState\n\tend.\n\ntry_activate_footprint([]) ->\n\tfalse;\ntry_activate_footprint([{_Peer, Pid} | Rest]) ->\n\tcase ar_peer_worker:try_activate_footprint(Pid) of\n\t\ttrue ->\n\t\t\t%% A footprint was activated, stop here\n\t\t\ttrue;\n\t\tfalse ->\n\t\t\t%% No footprint activated, try next peer\n\t\t\ttry_activate_footprint(Rest)\n\tend.\n\n%%%===================================================================\n%%% 
Tests.\n%%%===================================================================\n\n-ifdef(AR_TEST).\n-include_lib(\"eunit/include/eunit.hrl\").\n\ncoordinator_test_() ->\n\t[\n\t\t{timeout, 30, fun test_get_worker/0},\n\t\t{timeout, 30, fun test_max_tasks/0},\n\t\t{timeout, 30, fun test_increment_dispatched_task_count/0},\n\t\t{timeout, 30, fun test_queue_scaling_factor/0},\n\t\t{timeout, 30, fun test_footprint_deactivated/0},\n\t\t{timeout, 30, fun test_peer_worker_started_updates_cache/0},\n\t\t{timeout, 30, fun test_reset_worker/0},\n\t\t{timeout, 30, fun test_dispatch_tasks_updates_counts/0},\n\t\t{timeout, 30, fun test_try_activate_footprint_stops_on_success/0},\n\t\t{timeout, 30, fun test_try_activate_footprint_tries_all/0},\n\t\t{timeout, 30, fun test_calculate_targets/0}\n\t].\n\ntest_get_worker() ->\n\tState0 = #state{\n\t\tworkers = queue:from_list([worker1, worker2, worker3]),\n\t\ttotal_dispatched_count = 6,\n\t\tdispatched_count_per_worker = #{worker1 => 3, worker2 => 2, worker3 => 1}\n\t},\n\t{worker2, State1} = get_worker(State0),\n\tState2 = increment_dispatched_task_count(worker2, 1, State1),\n\t{worker3, State3} = get_worker(State2),\n\tState4 = increment_dispatched_task_count(worker3, 1, State3),\n\t{worker3, State5} = get_worker(State4),\n\tState6 = increment_dispatched_task_count(worker3, 1, State5),\n\t{worker1, _} = get_worker(State6).\n\ntest_max_tasks() ->\n\t?assertEqual(50, max_tasks(1)),\n\t?assertEqual(100, max_tasks(2)),\n\t?assertEqual(500, max_tasks(10)),\n\t?assertEqual(5000, max_tasks(100)).\n\ntest_increment_dispatched_task_count() ->\n\tState0 = #state{\n\t\ttotal_dispatched_count = 5,\n\t\tdispatched_count_per_worker = #{worker1 => 3, worker2 => 2}\n\t},\n\t%% Increment worker1\n\tState1 = increment_dispatched_task_count(worker1, 2, State0),\n\t?assertEqual(7, State1#state.total_dispatched_count),\n\t?assertEqual(5, maps:get(worker1, State1#state.dispatched_count_per_worker)),\n\t\n\t%% Decrement worker2\n\tState2 = increment_dispatched_task_count(worker2, -1, State1),\n\t?assertEqual(6, State2#state.total_dispatched_count),\n\t?assertEqual(1, maps:get(worker2, State2#state.dispatched_count_per_worker)),\n\t\n\t%% Add new worker\n\tState3 = increment_dispatched_task_count(worker3, 1, State2),\n\t?assertEqual(7, State3#state.total_dispatched_count),\n\t?assertEqual(1, maps:get(worker3, State3#state.dispatched_count_per_worker)).\n\ntest_queue_scaling_factor() ->\n\t%% Zero throughput returns infinity\n\t?assertEqual(infinity, queue_scaling_factor(0, 10)),\n\t?assertEqual(infinity, queue_scaling_factor(0.0, 10)),\n\t\n\t%% Normal calculation: max_tasks(WorkerCount) / TotalThroughput\n\t%% max_tasks(10) = 500\n\t?assertEqual(5.0, queue_scaling_factor(100.0, 10)),\n\t?assertEqual(2.5, queue_scaling_factor(200.0, 10)),\n\t?assertEqual(50.0, queue_scaling_factor(10.0, 10)).\n\ntest_footprint_deactivated() ->\n\tState0 = #state{ total_active_footprints = 5, max_footprints = 10, known_peers = #{} },\n\t\n\t%% Simulate footprint_deactivated cast\n\t{noreply, State1} = handle_cast({footprint_deactivated, {1,2,3,4,1984}}, State0),\n\t?assertEqual(4, State1#state.total_active_footprints),\n\t\n\t%% Should not go below 0\n\tState2 = State0#state{ total_active_footprints = 0 },\n\t{noreply, State3} = handle_cast({footprint_deactivated, {1,2,3,4,1984}}, State2),\n\t?assertEqual(0, State3#state.total_active_footprints).\n\ntest_peer_worker_started_updates_cache() ->\n\tPeer1 = {1,2,3,4,1984},\n\tPeer2 = {5,6,7,8,1985},\n\tPid1 = self(),  %% Use self() as a fake Pid for 
testing\n\tPid2 = self(),\n\t\n\tState0 = #state{ known_peers = #{} },\n\t\n\t%% Add first peer\n\t{noreply, State1} = handle_cast({peer_worker_started, Peer1, Pid1}, State0),\n\t?assertEqual(Pid1, maps:get(Peer1, State1#state.known_peers)),\n\t\n\t%% Add second peer\n\t{noreply, State2} = handle_cast({peer_worker_started, Peer2, Pid2}, State1),\n\t?assertEqual(Pid1, maps:get(Peer1, State2#state.known_peers)),\n\t?assertEqual(Pid2, maps:get(Peer2, State2#state.known_peers)),\n\t\n\t%% Update existing peer (simulating restart)\n\tNewPid = spawn(fun() -> ok end),\n\t{noreply, State3} = handle_cast({peer_worker_started, Peer1, NewPid}, State2),\n\t?assertEqual(NewPid, maps:get(Peer1, State3#state.known_peers)).\n\ntest_reset_worker() ->\n\tState0 = #state{\n\t\ttotal_dispatched_count = 10,\n\t\tdispatched_count_per_worker = #{worker1 => 5, worker2 => 3, worker3 => 2}\n\t},\n\t\n\t%% Reset worker1 (had 5 tasks)\n\t{reply, ok, State1} = handle_call({reset_worker, worker1}, self(), State0),\n\t?assertEqual(5, State1#state.total_dispatched_count),  %% 10 - 5 = 5\n\t?assertEqual(0, maps:get(worker1, State1#state.dispatched_count_per_worker)),\n\t\n\t%% Reset worker2 (had 3 tasks)\n\t{reply, ok, State2} = handle_call({reset_worker, worker2}, self(), State1),\n\t?assertEqual(2, State2#state.total_dispatched_count),  %% 5 - 3 = 2\n\t?assertEqual(0, maps:get(worker2, State2#state.dispatched_count_per_worker)),\n\t\n\t%% Reset unknown worker (should handle gracefully - count is 0)\n\t{reply, ok, State3} = handle_call({reset_worker, unknown_worker}, self(), State2),\n\t?assertEqual(2, State3#state.total_dispatched_count).\n\ntest_dispatch_tasks_updates_counts() ->\n\tState0 = #state{\n\t\tworkers = queue:from_list([worker1, worker2]),\n\t\ttotal_dispatched_count = 0,\n\t\ttotal_queued_count = 5,\n\t\tdispatched_count_per_worker = #{}\n\t},\n\t\n\t%% Dispatch empty list - no changes\n\tState1 = dispatch_tasks(self(), [], State0),\n\t?assertEqual(0, State1#state.total_dispatched_count),\n\t?assertEqual(5, State1#state.total_queued_count),\n\t\n\t%% Note: We can't fully test dispatch_tasks without mocking workers,\n\t%% but we can verify the state changes for the helper functions.\n\t\n\t%% Verify that dispatching would decrement total_queued_count\n\t%% by manually simulating what dispatch_tasks does\n\tState2 = State1#state{ total_queued_count = max(0, State1#state.total_queued_count - 1) },\n\t?assertEqual(4, State2#state.total_queued_count).\n\ntest_try_activate_footprint_stops_on_success() ->\n\t%% Create mock peer processes that track if they were called\n\tParent = self(),\n\t\n\t%% Peer1 returns false (no waiting footprint)\n\tPid1 = spawn(fun() -> mock_peer_worker(Parent, peer1, false) end),\n\t%% Peer2 returns true (activates a footprint)\n\tPid2 = spawn(fun() -> mock_peer_worker(Parent, peer2, true) end),\n\t%% Peer3 should NOT be called (iteration should stop at Peer2)\n\tPid3 = spawn(fun() -> mock_peer_worker(Parent, peer3, false) end),\n\t\n\t%% Register mock processes so ar_peer_worker:try_activate_footprint can find them\n\t%% We need to call try_activate_footprint directly with our mock list\n\tPeerList = [{peer1, Pid1}, {peer2, Pid2}, {peer3, Pid3}],\n\t\n\t%% Call the function directly\n\ttrue = try_activate_footprint(PeerList),\n\t\n\t%% Wait a bit for messages\n\ttimer:sleep(50),\n\t\n\t%% Check which peers were called\n\t?assertEqual([peer1, peer2], collect_called_peers([])),\n\t\n\t%% Cleanup\n\texit(Pid1, kill),\n\texit(Pid2, kill),\n\texit(Pid3, 
kill).\n\ntest_try_activate_footprint_tries_all() ->\n\t%% Create mock peer processes that all return false\n\tParent = self(),\n\t\n\tPid1 = spawn(fun() -> mock_peer_worker(Parent, peer1, false) end),\n\tPid2 = spawn(fun() -> mock_peer_worker(Parent, peer2, false) end),\n\tPid3 = spawn(fun() -> mock_peer_worker(Parent, peer3, false) end),\n\t\n\tPeerList = [{peer1, Pid1}, {peer2, Pid2}, {peer3, Pid3}],\n\t\n\t%% Call the function directly\n\tfalse = try_activate_footprint(PeerList),\n\t\n\t%% Wait a bit for messages\n\ttimer:sleep(50),\n\t\n\t%% All three peers should have been called\n\t?assertEqual([peer1, peer2, peer3], collect_called_peers([])),\n\t\n\t%% Cleanup\n\texit(Pid1, kill),\n\texit(Pid2, kill),\n\texit(Pid3, kill).\n\n%% Mock peer worker that responds to try_activate_footprint calls\nmock_peer_worker(Parent, PeerName, ReturnValue) ->\n\treceive\n\t\t{'$gen_call', From, try_activate_footprint} ->\n\t\t\tParent ! {called, PeerName},\n\t\t\tgen_server:reply(From, ReturnValue)\n\tafter 5000 ->\n\t\tok\n\tend.\n\n%% Collect all {called, PeerName} messages\ncollect_called_peers(Acc) ->\n\treceive\n\t\t{called, PeerName} ->\n\t\t\tcollect_called_peers(Acc ++ [PeerName])\n\tafter 10 ->\n\t\tAcc\n\tend.\n\ntest_calculate_targets() ->\n\t%% Create mock peer workers that respond to get_max_dispatched\n\tPid1 = spawn(fun() -> mock_peer_worker_max_dispatched(10) end),\n\tPid2 = spawn(fun() -> mock_peer_worker_max_dispatched(15) end),\n\tPid3 = spawn(fun() -> mock_peer_worker_max_dispatched(20) end),\n\t\n\tPeer1 = {1,2,3,4,1984},\n\tPeer2 = {5,6,7,8,1985},\n\tPeer3 = {9,10,11,12,1986},\n\t\n\tPeerPids = #{Peer1 => Pid1, Peer2 => Pid2, Peer3 => Pid3},\n\t\n\t%% Create performance records\n\tAllPeerPerformances = #{\n\t\tPeer1 => #performance{ current_rating = 100.0, average_latency = 50.0 },\n\t\tPeer2 => #performance{ current_rating = 200.0, average_latency = 100.0 },\n\t\tPeer3 => #performance{ current_rating = 300.0, average_latency = 150.0 }\n\t},\n\t\n\tState = #state{ workers = queue:from_list([w1, w2, w3, w4, w5]) },\n\t\n\t{WorkerCount, TargetLatency, TotalThroughput, TotalMaxDispatched} = \n\t\tcalculate_targets(PeerPids, AllPeerPerformances, State),\n\t\n\t%% WorkerCount = 5 (number of workers in queue)\n\t?assertEqual(5, WorkerCount),\n\t\n\t%% TotalThroughput = 100 + 200 + 300 = 600\n\t?assertEqual(600.0, TotalThroughput),\n\t\n\t%% TargetLatency = (50 + 100 + 150) / 3 = 100\n\t?assertEqual(100.0, TargetLatency),\n\t\n\t%% TotalMaxDispatched = 10 + 15 + 20 = 45\n\t?assertEqual(45, TotalMaxDispatched),\n\t\n\t%% Cleanup\n\texit(Pid1, kill),\n\texit(Pid2, kill),\n\texit(Pid3, kill).\n\n%% Mock peer worker for get_max_dispatched\nmock_peer_worker_max_dispatched(MaxDispatched) ->\n\treceive\n\t\t{'$gen_call', From, get_max_dispatched} ->\n\t\t\tgen_server:reply(From, MaxDispatched),\n\t\t\tmock_peer_worker_max_dispatched(MaxDispatched)\n\tafter 5000 ->\n\t\tok\n\tend.\n\n-endif.\n"
  },
  {
    "path": "apps/arweave/src/ar_data_sync_sup.erl",
    "content": "-module(ar_data_sync_sup).\n\n-behaviour(supervisor).\n\n-export([start_link/0]).\n\n-export([init/1]).\n\n-include_lib(\"ar_sup.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nstart_link() ->\n\tsupervisor:start_link({local, ?MODULE}, ?MODULE, []).\n\n%% ===================================================================\n%% Supervisor callbacks.\n%% ===================================================================\n\ninit([]) ->\n\t%% Peer worker supervisor must start before worker master\n\tPeerWorkerSup = #{\n\t\tid => ar_peer_worker_sup,\n\t\tstart => {ar_peer_worker_sup, start_link, []},\n\t\trestart => permanent,\n\t\tshutdown => infinity,\n\t\ttype => supervisor,\n\t\tmodules => [ar_peer_worker_sup]\n\t},\n\tChildren = \n\t\t[PeerWorkerSup] ++\n\t\tar_data_sync_coordinator:register_workers() ++\n\t\tar_chunk_copy:register_workers() ++\n\t\tar_data_sync:register_workers(),\n\t{ok, {{one_for_one, 5, 10}, Children}}.\n"
  },
  {
    "path": "apps/arweave/src/ar_data_sync_worker.erl",
    "content": "%%% @doc A process fetching the weave data from the network and from the local\n%%% storage modules, one chunk (or a range of chunks) at a time. The workers\n%%% are coordinated by ar_data_sync_coordinator. The workers do not update the\n%%% storage - updates are handled by ar_data_sync_* processes.\n-module(ar_data_sync_worker).\n\n-behaviour(gen_server).\n\n-export([start_link/2]).\n\n-export([init/1, handle_cast/2, handle_call/3, handle_info/2, terminate/2]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"arweave/include/ar_data_sync.hrl\").\n\n-record(state, {\n\tname = undefined,\n\trequest_packed_chunks = false\n}).\n\n %% # of messages to cast to ar_data_sync at once. Each message carries at least 1 chunk worth\n %% of data (256 KiB). Since there are dozens or hundreds of workers, if each one posts too\n %% many messages at once it can overload the available memory.\n-define(READ_RANGE_MESSAGES_PER_BATCH, 40).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nstart_link(Name, Mode) ->\n\tgen_server:start_link({local, Name}, ?MODULE, {Name, Mode}, []).\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\n\ninit({Name, Mode}) ->\n\t?LOG_INFO([{event, init}, {module, ?MODULE}, {name, Name}]),\n\t{ok, Config} = arweave_config:get_env(),\n\t%% In case there has been a restart we need to tell\n\t%% ar_data_sync_coordinator to erase pending worker tasks.\n\t%% We only want to do this for sync workers, not read workers.\n\tcase Mode  of\n\t\tsync ->\n\t\t\tgen_server:call(ar_data_sync_coordinator, {reset_worker, Name}, 30_000);\n\t\t_ ->\n\t\t\tok\n\tend,\n\t{ok, #state{\n\t\tname = Name,\n\t\trequest_packed_chunks = Config#config.data_sync_request_packed_chunks\n\t}}.\n\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {request, Request}]),\n\t{reply, ok, State}.\n\nhandle_cast({read_range, Args}, State) ->\n\tcase read_range(Args) of\n\t\trecast ->\n\t\t\tok;\n\t\tReadResult ->\n\t\t\tgen_server:cast(ar_chunk_copy,\n\t\t\t\t{task_completed, {read_range, {State#state.name, ReadResult, Args}}})\n\tend,\n\t{noreply, State};\n\nhandle_cast({sync_range, Args}, State) ->\n\t{_, _, Peer, _, RetryCount, FootprintKey} = Args,\n\tStartTime = erlang:monotonic_time(),\n\tSyncResult = sync_range(Args, State),\n\tEndTime = erlang:monotonic_time(),\n\tcase SyncResult of\n\t\trecast ->\n\t\t\t%% Only log at WARNING when retries are exhausted (RetryCount <= 1),\n\t\t\t%% otherwise log at DEBUG to reduce noise\n\t\t\tcase RetryCount =< 1 of\n\t\t\t\ttrue ->\n\t\t\t\t\t?LOG_WARNING([{event, sync_range_recast_exhausted}, \n\t\t\t\t\t\t{peer, ar_util:format_peer(Peer)}, \n\t\t\t\t\t\t{footprint_key, FootprintKey},\n\t\t\t\t\t\t{retry_count, RetryCount},\n\t\t\t\t\t\t{worker, State#state.name}]);\n\t\t\t\tfalse ->\n\t\t\t\t\tok\n\t\t\tend,\n\t\t\tok;\n\t\t_ ->\n\t\t\tgen_server:cast(ar_data_sync_coordinator, {task_completed,\n\t\t\t\t{sync_range, {State#state.name, SyncResult, Args, EndTime-StartTime}}})\n\tend,\n\t{noreply, State};\n\nhandle_cast(Cast, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {cast, Cast}]),\n\t{noreply, State}.\n\nhandle_info(_Message, State) ->\n\t{noreply, 
State}.\n\nterminate(Reason, _State) ->\n\t?LOG_INFO([{event, terminate}, {module, ?MODULE}, {reason, io_lib:format(\"~p\", [Reason])}]),\n\tok.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nread_range({Start, End, _OriginStoreID, _TargetStoreID})\n\t\twhen Start >= End ->\n\tok;\nread_range({Start, End, _OriginStoreID, TargetStoreID} = Args) ->\n\tcase ar_data_sync:is_chunk_cache_full() of\n\t\tfalse ->\n\t\t\tcase ar_data_sync:is_disk_space_sufficient(TargetStoreID) of\n\t\t\t\ttrue ->\n\t\t\t\t\t?LOG_DEBUG([{event, read_range}, {pid, self()},\n\t\t\t\t\t\t{size_mb, (End - Start) / ?MiB}, {args, Args}]),\n\t\t\t\t\tread_range2(?READ_RANGE_MESSAGES_PER_BATCH, Args);\n\t\t\t\t_ ->\n\t\t\t\t\tar_util:cast_after(30000, self(), {read_range, Args}),\n\t\t\t\t\trecast\n\t\t\tend;\n\t\t_ ->\n\t\t\tar_util:cast_after(200, self(), {read_range, Args}),\n\t\t\trecast\n\tend.\n\nread_range2(0, Args) ->\n\tar_util:cast_after(1000, self(), {read_range, Args}),\n\trecast;\nread_range2(_MessagesRemaining,\n\t\t{Start, End, _OriginStoreID, _TargetStoreID})\n\t\twhen Start >= End ->\n\tok;\nread_range2(MessagesRemaining, {Start, End, OriginStoreID, TargetStoreID}) ->\n\tCheckIsRecordedAlready =\n\t\tcase ar_sync_record:is_recorded(Start + 1, ar_data_sync, TargetStoreID) of\n\t\t\t{true, _} ->\n\t\t\t\tcase ar_sync_record:get_next_unsynced_interval(Start, End, ar_data_sync,\n\t\t\t\t\t\tTargetStoreID) of\n\t\t\t\t\tnot_found ->\n\t\t\t\t\t\tok;\n\t\t\t\t\t{_, Start2} ->\n\t\t\t\t\t\tread_range2(MessagesRemaining,\n\t\t\t\t\t\t\t\t{Start2, End, OriginStoreID, TargetStoreID})\n\t\t\t\tend;\n\t\t\t_ ->\n\t\t\t\tfalse\n\t\tend,\n\tIsRecordedInTheSource =\n\t\tcase CheckIsRecordedAlready of\n\t\t\tok ->\n\t\t\t\tok;\n\t\t\trecast ->\n\t\t\t\tok;\n\t\t\tfalse ->\n\t\t\t\tcase ar_sync_record:is_recorded(Start + 1, ar_data_sync, OriginStoreID) of\n\t\t\t\t\t{true, Packing} ->\n\t\t\t\t\t\t{true, Packing};\n\t\t\t\t\tSyncRecordReply ->\n\t\t\t\t\t\t?LOG_ERROR([{event, cannot_read_requested_range},\n\t\t\t\t\t\t\t\t{origin_store_id, OriginStoreID},\n\t\t\t\t\t\t\t\t{missing_start_offset, Start + 1},\n\t\t\t\t\t\t\t\t{end_offset, End},\n\t\t\t\t\t\t\t\t{target_store_id, TargetStoreID},\n\t\t\t\t\t\t\t\t{sync_record_reply, io_lib:format(\"~p\", [SyncRecordReply])}])\n\t\t\t\tend\n\t\tend,\n\tReadChunkMetadata =\n\t\tcase IsRecordedInTheSource of\n\t\t\tok ->\n\t\t\t\tok;\n\t\t\t{true, Packing2} ->\n\t\t\t\t{Packing2, ar_data_sync:get_chunk_by_byte(Start + 1, OriginStoreID)}\n\t\tend,\n\tPaddedEnd = ar_block:get_chunk_padded_offset(End),\n\tcase ReadChunkMetadata of\n\t\tok ->\n\t\t\tok;\n\t\t{_, {error, invalid_iterator}} ->\n\t\t\t%% get_chunk_by_byte looks for a key with the same prefix or the next\n\t\t\t%% prefix. 
Therefore, if there is no such key, it does not make sense to\n\t\t\t%% look for any key smaller than the prefix + 2 in the next iteration.\n\t\t\tPrefixSpaceSize = trunc(math:pow(2,\n\t\t\t\t\t?OFFSET_KEY_BITSIZE - ?OFFSET_KEY_PREFIX_BITSIZE)),\n\t\t\tStart3 = ((Start div PrefixSpaceSize) + 2) * PrefixSpaceSize,\n\t\t\tread_range2(MessagesRemaining,\n\t\t\t\t\t{Start3, End, OriginStoreID, TargetStoreID});\n\t\t{_, {error, Reason}} ->\n\t\t\t?LOG_ERROR([{event, failed_to_query_chunk_metadata}, {offset, Start + 1},\n\t\t\t\t\t{reason, io_lib:format(\"~p\", [Reason])}]);\n\t\t{_, {ok, _Key, {AbsoluteOffset, _, _, _, _, _, _}}} when AbsoluteOffset > PaddedEnd ->\n\t\t\tok;\n\t\t{Packing3, {ok, _Key, {AbsoluteOffset, ChunkDataKey, TXRoot, DataRoot, TXPath,\n\t\t\t\tRelativeOffset, ChunkSize}}} ->\n\t\t\tReadChunk = ar_data_sync:read_chunk(AbsoluteOffset, ChunkDataKey, OriginStoreID),\n\t\t\tcase ReadChunk of\n\t\t\t\tnot_found ->\n\t\t\t\t\tar_data_sync:invalidate_bad_data_record(\n\t\t\t\t\t\tAbsoluteOffset, ChunkSize, OriginStoreID, read_range_chunk_not_found),\n\t\t\t\t\tread_range2(MessagesRemaining-1,\n\t\t\t\t\t\t\t{Start + ChunkSize, End, OriginStoreID, TargetStoreID});\n\t\t\t\t{error, Error} ->\n\t\t\t\t\t?LOG_ERROR([{event, failed_to_read_chunk},\n\t\t\t\t\t\t\t{absolute_end_offset, AbsoluteOffset},\n\t\t\t\t\t\t\t{chunk_data_key, ar_util:encode(ChunkDataKey)},\n\t\t\t\t\t\t\t{reason, io_lib:format(\"~p\", [Error])}]),\n\t\t\t\t\tread_range2(MessagesRemaining,\n\t\t\t\t\t\t\t{Start + ChunkSize, End, OriginStoreID, TargetStoreID});\n\t\t\t\t{ok, {Chunk, DataPath}} ->\n\t\t\t\t\tcase ar_sync_record:is_recorded(AbsoluteOffset, ar_data_sync,\n\t\t\t\t\t\t\tOriginStoreID) of\n\t\t\t\t\t\t{true, Packing3} ->\n\t\t\t\t\t\t\tar_data_sync:increment_chunk_cache_size(),\n\t\t\t\t\t\t\tUnpackedChunk =\n\t\t\t\t\t\t\t\tcase Packing3 of\n\t\t\t\t\t\t\t\t\tunpacked ->\n\t\t\t\t\t\t\t\t\t\tChunk;\n\t\t\t\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t\t\t\tnone\n\t\t\t\t\t\t\t\tend,\n\t\t\t\t\t\t\tArgs = {DataRoot, AbsoluteOffset, TXPath, TXRoot, DataPath,\n\t\t\t\t\t\t\t\t\tPacking3, RelativeOffset, ChunkSize, Chunk,\n\t\t\t\t\t\t\t\t\tUnpackedChunk, TargetStoreID, ChunkDataKey},\n\t\t\t\t\t\t\tgen_server:cast(ar_data_sync:name(TargetStoreID),\n\t\t\t\t\t\t\t\t\t{pack_and_store_chunk, Args}),\n\t\t\t\t\t\t\tread_range2(MessagesRemaining-1,\n\t\t\t\t\t\t\t\t{Start + ChunkSize, End, OriginStoreID, TargetStoreID});\n\t\t\t\t\t\t{true, _DifferentPacking} ->\n\t\t\t\t\t\t\t%% Unlucky timing - the chunk should have been repacked\n\t\t\t\t\t\t\t%% in the meantime.\n\t\t\t\t\t\t\tread_range2(MessagesRemaining,\n\t\t\t\t\t\t\t\t\t{Start, End, OriginStoreID, TargetStoreID});\n\t\t\t\t\t\tReply ->\n\t\t\t\t\t\t\t?LOG_ERROR([{event, chunk_record_not_found},\n\t\t\t\t\t\t\t\t\t{absolute_end_offset, AbsoluteOffset},\n\t\t\t\t\t\t\t\t\t{ar_sync_record_reply, io_lib:format(\"~p\", [Reply])}]),\n\t\t\t\t\t\t\tread_range2(MessagesRemaining,\n\t\t\t\t\t\t\t\t\t{Start + ChunkSize, End, OriginStoreID, TargetStoreID})\n\t\t\t\t\tend\n\t\t\tend\n\tend.\n\nsync_range({Start, End, Peer, _TargetStoreID, _RetryCount, _FootprintKey} = Args, _State) when Start >= End ->\n\tok;\nsync_range({Start, End, Peer, _TargetStoreID, 0, _FootprintKey}, _State) ->\n\t?LOG_DEBUG([{event, sync_range_retries_exhausted},\n\t\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t\t{start_offset, Start}, {end_offset, End}]),\n\t{error, timeout};\nsync_range({Start, End, Peer, TargetStoreID, RetryCount, FootprintKey} = Args, State) ->\n\tIsChunkCacheFull =\n\t\tcase 
ar_data_sync:is_chunk_cache_full() of\n\t\t\ttrue ->\n\t\t\t\tar_util:cast_after(500, self(), {sync_range, Args}),\n\t\t\t\ttrue;\n\t\t\tfalse ->\n\t\t\t\tfalse\n\t\tend,\n\tIsDiskSpaceSufficient =\n\t\tcase IsChunkCacheFull of\n\t\t\tfalse ->\n\t\t\t\tcase ar_data_sync:is_disk_space_sufficient(TargetStoreID) of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\ttrue;\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tar_util:cast_after(30000, self(), {sync_range, Args}),\n\t\t\t\t\t\tfalse\n\t\t\t\tend;\n\t\t\ttrue ->\n\t\t\t\tfalse\n\t\tend,\n\tcase IsDiskSpaceSufficient of\n\t\tfalse ->\n\t\t\trecast;\n\t\ttrue ->\n\t\t\tStart2 = ar_tx_blacklist:get_next_not_blacklisted_byte(Start + 1),\n\t\t\tByte = Start2 - 1,\n\t\t\tIsRecorded = ar_sync_record:is_recorded(Byte + 1, ar_data_sync, TargetStoreID),\n\t\t\tcase {Byte >= End, IsRecorded} of\n\t\t\t\t{true, _} ->\n\t\t\t\t\tok;\n\t\t\t\t{_, {true, _}} ->\n\t\t\t\t\tok;\n\t\t\t\t_ ->\n\t\t\t\t\tPacking = get_target_packing(TargetStoreID, State#state.request_packed_chunks),\n\t\t\t\t\tcase ar_http_iface_client:get_chunk_binary(Peer, Start2, Packing) of\n\t\t\t\t\t\t{ok, #{ chunk := Chunk } = Proof, _Time, _TransferSize} ->\n\t\t\t\t\t\t\t%% In case we fetched a packed small chunk,\n\t\t\t\t\t\t\t%% we may potentially skip some chunks by\n\t\t\t\t\t\t\t%% continuing with Start2 + byte_size(Chunk) - the skipped\n\t\t\t\t\t\t\t%% chunks will then be requested later.\n\t\t\t\t\t\t\tStart3 = ar_block:get_chunk_padded_offset(\n\t\t\t\t\t\t\t\t\tStart2 + byte_size(Chunk)) + 1,\n\t\t\t\t\t\t\tgen_server:cast(ar_data_sync:name(TargetStoreID),\n\t\t\t\t\t\t\t\t\t{store_fetched_chunk, Peer, Byte, Proof}),\n\t\t\t\t\t\t\tar_data_sync:increment_chunk_cache_size(),\n\t\t\t\t\t\t\tsync_range(\n\t\t\t\t\t\t\t\t{Start3, End, Peer, TargetStoreID, RetryCount, FootprintKey},\n\t\t\t\t\t\t\t\tState);\n\t\t\t\t\t\t{error, timeout} ->\n\t\t\t\t\t\t\t?LOG_DEBUG([{event, timeout_fetching_chunk},\n\t\t\t\t\t\t\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t\t\t\t\t\t\t{start_offset, Start2}, {end_offset, End}]),\n\t\t\t\t\t\t\tArgs2 = {Start, End, Peer, TargetStoreID, RetryCount - 1, FootprintKey},\n\t\t\t\t\t\t\tar_util:cast_after(1000, self(), {sync_range, Args2}),\n\t\t\t\t\t\t\trecast;\n\t\t\t\t\t\t{error, {ok, {{<<\"404\">>, _}, _, _, _, _}} = Reason} ->\n\t\t\t\t\t\t\t{error, Reason};\n\t\t\t\t\t\t{error, Reason} ->\n\t\t\t\t\t\t\tar_http_iface_client:log_failed_request({error, Reason}, [\n\t\t\t\t\t\t\t\t{event, failed_to_fetch_chunk},\n\t\t\t\t\t\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t\t\t\t\t\t{start_offset, Start2}, {end_offset, End},\n\t\t\t\t\t\t\t\t{reason, io_lib:format(\"~p\", [Reason])}]),\n\t\t\t\t\t\t\t{error, Reason}\n\t\t\t\t\tend\n\t\t\tend\n\tend.\n\nget_target_packing(StoreID, true) ->\n\tar_storage_module:get_packing(StoreID);\nget_target_packing(_StoreID, false) ->\n\tany.\n"
  },
  {
    "path": "apps/arweave/src/ar_deep_hash.erl",
    "content": "-module(ar_deep_hash).\n-export([hash/1]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\nhash(List) when is_list(List) -> hash_bin_or_list(List).\n\n%%% INTERNAL\n\nhash_bin_or_list(Bin) when is_binary(Bin) ->\n\tTag = <<\"blob\", (integer_to_binary(byte_size(Bin)))/binary>>,\n\thash_bin(<<(hash_bin(Tag))/binary, (hash_bin(Bin))/binary>>);\nhash_bin_or_list(List) when is_list(List) ->\n\tTag = <<\"list\", (integer_to_binary(length(List)))/binary>>,\n\thash_list(List, hash_bin(Tag)).\n\nhash_list([], Acc) ->\n\tAcc;\nhash_list([Head | List], Acc) ->\n\tHashPair = <<Acc/binary, (hash_bin_or_list(Head))/binary>>,\n\tNewAcc = hash_bin(HashPair),\n\thash_list(List, NewAcc).\n\nhash_bin(Bin) when is_binary(Bin) ->\n\tcrypto:hash(?DEEP_HASH_ALG, Bin).\n\n\n%%% TESTS\n\nhash_test() ->\n\tV1 = crypto:strong_rand_bytes(32),\n\tV2 = crypto:strong_rand_bytes(32),\n\tV3 = crypto:strong_rand_bytes(32),\n\tV4 = crypto:strong_rand_bytes(32),\n\tDeepList = [V1, [V2, V3], V4],\n\tH1 = hash_bin(<<(hash_bin(<<\"blob\", \"32\">>))/binary, (hash_bin(V1))/binary>>),\n\tH2 = hash_bin(<<(hash_bin(<<\"blob\", \"32\">>))/binary, (hash_bin(V2))/binary>>),\n\tH3 = hash_bin(<<(hash_bin(<<\"blob\", \"32\">>))/binary, (hash_bin(V3))/binary>>),\n\tH4 = hash_bin(<<(hash_bin(<<\"blob\", \"32\">>))/binary, (hash_bin(V4))/binary>>),\n\tHSublistTag = hash_bin(<<\"list\", \"2\">>),\n\tHSublistHead = hash_bin(<<HSublistTag/binary, H2/binary>>),\n\tHSublist = hash_bin(<<HSublistHead/binary, H3/binary>>),\n\tHListTag = hash_bin(<<\"list\", \"3\">>),\n\tHHead = hash_bin(<<HListTag/binary, H1/binary>>),\n\tHWithSublist = hash_bin(<<HHead/binary, HSublist/binary>>),\n\tH = hash_bin(<<HWithSublist/binary, H4/binary>>),\n\t?assertEqual(H, hash(DeepList)).\n\nhash_empty_list_test() ->\n\t?assertEqual(hash_bin(<<\"list\", \"0\">>), hash([])).\n\nhash_uniqueness_test() ->\n\t?assertNotEqual(\n\t\thash([<<\"a\">>]),\n\t\thash([[<<\"a\">>]])\n\t),\n\t?assertNotEqual(\n\t\thash([<<\"a\">>, <<\"b\">>]),\n\t\thash([<<\"b\">>, <<\"a\">>])\n\t),\n\t?assertNotEqual(\n\t\thash([<<\"a\">>, <<>>]),\n\t\thash([<<\"a\">>])\n\t),\n\t?assertNotEqual(\n\t\thash([<<\"a\">>, <<\"b\">>]),\n\t\thash([[<<\"a\">>], <<\"b\">>])\n\t),\n\t?assertNotEqual(\n\t\thash([<<\"a\">>, [<<\"b\">>, <<\"c\">>]]),\n\t\thash([<<\"a\">>, <<\"b\">>, <<\"c\">>])\n\t),\n\t?assertNotEqual(\n\t\thash([<<\"a\">>, [<<\"b\">>, <<\"c\">>], [<<\"d\">>, <<\"e\">>]]),\n\t\thash([<<\"a\">>, [<<\"b\">>, <<\"c\">>, <<\"d\">>, <<\"e\">>]])\n\t),\n\t?assertNotEqual(\n\t\thash([<<\"a\">>, [<<\"b\">>], <<\"c\">>, <<\"d\">>]),\n\t\thash([<<\"a\">>, [<<\"b\">>, <<\"c\">>], <<\"d\">>])\n\t).\n"
  },
  {
    "path": "apps/arweave/src/ar_device_lock.erl",
    "content": "-module(ar_device_lock).\n\n-behaviour(gen_server).\n\n-export([get_store_id_to_device_map/0, is_ready/0, acquire_lock/3, release_lock/2,\n\tset_device_lock_metric/3]).\n\n-export([start_link/0, init/1, handle_call/3, handle_info/2, handle_cast/2, terminate/2]).\n\n-include(\"ar.hrl\").\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-record(state, {\n\tstore_id_to_device = #{},\n\tdevice_locks = #{}, %% used when device_limit is true\n\tstore_id_locks = #{}, %% used when device_limit is false\n\tinitialized = false,\n\tnum_replica_2_9_workers = 0,\n\tdevice_limit = true\n}).\n\n-type device_mode() :: prepare | sync | repack.\n\n-ifdef(AR_TEST).\n-define(LOCK_LOG_INTERVAL_MS, 10_000). %% 10 seconds\n-else.\n-define(LOCK_LOG_INTERVAL_MS, 600_000). %% 10 minutes\n-endif.\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nget_store_id_to_device_map() ->\n\tcase catch gen_server:call(?MODULE, get_state) of\n\t\t{'EXIT', {Reason, {gen_server, call, _}}} ->\n\t\t\t{error, Reason};\n\t\tState ->\n\t\t\tState#state.store_id_to_device\n\tend.\n\nis_ready() ->\n\tcase catch gen_server:call(?MODULE, get_state) of\n\t\t{'EXIT', {Reason, {gen_server, call, _}}} ->\n\t\t\t?LOG_WARNING([{event, error_getting_device_lock_state},\n\t\t\t\t\t{module, ?MODULE}, {reason, Reason}]),\n\t\t\tfalse;\n\t\tState ->\n\t\t\tState#state.initialized\n\tend.\n\n%% @doc Helper function to wrap common logic around acquiring a device lock.\n-spec acquire_lock(device_mode(), string(), atom()) -> atom().\nacquire_lock(Mode, StoreID, CurrentStatus) ->\n\tNewStatus = case CurrentStatus of\n\t\t_ when CurrentStatus == complete; CurrentStatus == off ->\n\t\t\t% No change needed when we're done or off.\n\t\t\tCurrentStatus;\n\t\t_ ->\n\t\t\tcase catch gen_server:call(?MODULE, {acquire_lock, Mode, StoreID}) of\n\t\t\t\t{'EXIT', {Reason, {gen_server, call, _}}} ->\n\t\t\t\t\t?LOG_WARNING([{event, error_acquiring_device_lock},\n\t\t\t\t\t\t\t{module, ?MODULE}, {reason, Reason}]),\n\t\t\t\t\tCurrentStatus;\n\t\t\t\ttrue ->\n\t\t\t\t\tactive;\n\t\t\t\tfalse ->\n\t\t\t\t\tpaused\n\t\t\tend\n\tend,\n\n\tcase NewStatus == CurrentStatus of\n\t\ttrue ->\n\t\t\tok;\n\t\tfalse ->\n\t\t\tset_device_lock_metric(StoreID, Mode, NewStatus),\n\t\t\t?LOG_INFO([{event, acquire_device_lock}, {mode, Mode}, {store_id, StoreID},\n\t\t\t\t\t{old_status, CurrentStatus}, {new_status, NewStatus}])\n\tend,\n\tNewStatus.\n\nrelease_lock(Mode, StoreID) ->\n\tgen_server:cast(?MODULE, {release_lock, Mode, StoreID}).\n\nset_device_lock_metric(StoreID, Mode, Status) ->\n\tStatusCode = case Status of\n\t\toff -> -1;\n\t\tpaused -> 0;\n\t\tactive -> 1;\n\t\tcomplete -> 2;\n\t\t_ -> -2\t\t\n\tend,\n\tStoreIDLabel = ar_storage_module:label(StoreID),\n\tprometheus_gauge:set(device_lock_status, [StoreIDLabel, Mode], StatusCode).\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\ninit([]) ->\n\tgen_server:cast(self(), initialize_state),\n\t{ok, Config} = arweave_config:get_env(),\n\n\tState = #state{\n\t\tnum_replica_2_9_workers = Config#config.replica_2_9_workers,\n\t\tdevice_limit = not Config#config.disable_replica_2_9_device_limit\n\t},\n\t?LOG_INFO([{event, 
starting_device_lock_server},\n\t\t{num_replica_2_9_workers, State#state.num_replica_2_9_workers},\n\t\t{device_limit, State#state.device_limit}]),\n\t{ok, State}.\n\nhandle_call(get_state, _From, State) ->\n\t{reply, State, State};\nhandle_call({acquire_lock, Mode, StoreID}, _From, State) ->\n\tcase State#state.initialized of\n\t\tfalse ->\n\t\t\t% Not yet initialized.\n\t\t\t{reply, false, State};\n\t\t_ ->\n\t\t\t{Acquired, State2} = do_acquire_lock(Mode, StoreID, State),\n\t\t\t{reply, Acquired, State2}\n\tend;\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {request, Request}]),\n\t{reply, ok, State}.\n\nhandle_cast(initialize_state, State) ->\n\tState2 = case ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tar_util:cast_after(1000, self(), initialize_state),\n\t\t\tState;\n\t\ttrue ->\n\t\t\tinitialize_state(State)\n\tend,\n\t{noreply, State2};\nhandle_cast({release_lock, Mode, StoreID}, State) ->\n\tcase State#state.initialized of\n\t\tfalse ->\n\t\t\t% Not yet initialized.\n\t\t\t{noreply, State};\n\t\t_ ->\n\t\t\tState2 = do_release_lock(Mode, StoreID, State),\n\t\t\t?LOG_INFO([{event, release_device_lock}, {mode, Mode}, {store_id, StoreID}]),\n\t\t\t{noreply, State2}\n\tend;\nhandle_cast(log_locks, State) ->\n\tlog_locks(State),\n\tar_util:cast_after(?LOCK_LOG_INTERVAL_MS, ?MODULE, log_locks), \n\t{noreply, State};\nhandle_cast(Request, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {request, Request}]),\n\t{noreply, State}.\n\n\nhandle_info(Message, State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {module, ?MODULE}, {message, Message}]),\n\t{noreply, State}.\n\nterminate(Reason, _State) ->\n\t?LOG_INFO([{module, ?MODULE},{pid, self()},{callback, terminate},{reason, Reason}]),\n\tok.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\ninitialize_state(State) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tStorageModules = Config#config.storage_modules,\n\tRepackInPlaceModules = [element(1, El)\n\t\t\t|| El <- Config#config.repack_in_place_storage_modules],\n\tStoreIDToDevice = lists:foldl(\n\t\tfun(Module, Acc) ->\n\t\t\tStoreID = ar_storage_module:id(Module),\n\t\t\tDevice = get_system_device(Module),\n\t\t\t?LOG_INFO([\n\t\t\t\t{event, storage_module_device}, {store_id, StoreID}, {device, Device}]),\n\t\t\tmaps:put(StoreID, Device, Acc)\n\t\tend,\n\t\t#{},\n\t\tStorageModules ++ RepackInPlaceModules\n\t),\n\tState2 = State#state{\n\t\tstore_id_to_device = StoreIDToDevice,\n\t\tinitialized = true\n\t},\n\t\n\tlog_locks(State2),\n\tar_util:cast_after(?LOCK_LOG_INTERVAL_MS, ?MODULE, log_locks), \n\n\tState2.\n\nget_system_device(StorageModule) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tStoreID = ar_storage_module:id(StorageModule),\n\tPath = ar_chunk_storage:get_chunk_storage_path(Config#config.data_dir, StoreID),\n\tDevice = ar_util:get_system_device(Path),\n\tcase Device of\n\t\t\"\" -> StoreID;  % If the command fails or returns an empty string, return StoreID\n\t\t_ -> Device\n\tend.\n\ndo_acquire_lock(Mode, ?DEFAULT_MODULE, State) ->\n\t%% \"default\" storage module is a special case. 
It can only be in sync mode.\n\tcase Mode of\n\t\tsync ->\n\t\t\t{true, State};\n\t\t_ ->\n\t\t\t{false, State}\n\tend;\ndo_acquire_lock(Mode, StoreID, State) ->\n\tMaxPrepareLocks = State#state.num_replica_2_9_workers,\n\tLock = query_lock(StoreID, State),\n\tPrepareLocks = count_prepare_locks(State),\n\t{Acquired, NewLock} = case Mode of\n\t\tsync ->\n\t\t\t%% Can only acquire a sync lock if the device is in sync mode\n\t\t\tcase Lock of\n\t\t\t\tsync -> {true, sync};\n\t\t\t\t_ -> {false, Lock}\n\t\t\tend;\n\t\tprepare ->\n\t\t\t%% Can only acquire a prepare lock if the device is in sync mode or this\n\t\t\t%% StoreID already has the prepare lock\n\t\t\tcase {Lock, PrepareLocks} of\n\t\t\t\t{sync, _} when PrepareLocks < MaxPrepareLocks -> {true, prepare};\n\t\t\t\t{prepare, _} -> {true, Lock};\n\t\t\t\t_ -> {false, Lock}\n\t\t\tend;\n\t\trepack ->\n\t\t\t%% Can only acquire a repack lock if the device is in sync mode or this\n\t\t\t%% StoreID already has the repack lock\n\t\t\tcase {Lock, PrepareLocks} of\n\t\t\t\t{sync, _} when PrepareLocks < MaxPrepareLocks -> {true, repack};\n\t\t\t\t{repack, _} -> {true, Lock};\n\t\t\t\t_ -> {false, Lock}\n\t\t\tend\n\tend,\n\t{Acquired, update_lock(StoreID, NewLock, State)}.\n\ndo_release_lock(Mode, StoreID, State) ->\n\tLock = query_lock(StoreID, State),\n\tNewLock = case Mode of\n\t\tsync ->\n\t\t\t%% Releasing a sync lock does nothing.\n\t\t\tLock;\n\t\tprepare ->\n\t\t\tcase Lock of\n\t\t\t\tprepare ->\n\t\t\t\t\t%% This StoreID had a prepare lock on this device, so now we can\n\t\t\t\t\t%% put the device back in sync mode so it's ready to be locked again\n\t\t\t\t\t%% if needed.\n\t\t\t\t\tsync;\n\t\t\t\t_ ->\n\t\t\t\t\t%% We should only be able to release a prepare lock if we previously\n\t\t\t\t\t%% held it. If we hit this branch something is wrong.\n\t\t\t\t\t?LOG_WARNING([{event, invalid_release_lock},\n\t\t\t\t\t\t\t{module, ?MODULE}, {mode, Mode}, {store_id, StoreID},\n\t\t\t\t\t\t\t{current_lock, Lock}]),\n\t\t\t\t\tLock\n\t\t\tend;\n\t\trepack ->\n\t\t\tcase Lock of\n\t\t\t\trepack ->\n\t\t\t\t\t%% This StoreID had a repack lock on this device, so now we can\n\t\t\t\t\t%% put the device back in sync mode so it's ready to be locked again\n\t\t\t\t\t%% if needed.\n\t\t\t\t\tsync;\n\t\t\t\t_ ->\n\t\t\t\t\t%% We should only be able to release a repack lock if we previously\n\t\t\t\t\t%% held it. 
If we hit this branch something is wrong.\n\t\t\t\t\t?LOG_WARNING([{event, invalid_release_lock},\n\t\t\t\t\t\t\t{module, ?MODULE}, {mode, Mode}, {store_id, StoreID},\n\t\t\t\t\t\t\t{current_lock, Lock}]),\n\t\t\t\t\tLock\n\t\t\tend\n\tend,\n\n\tupdate_lock(StoreID, NewLock, State).\n\ncount_prepare_locks(#state{ device_limit = true } = State) ->\n\tmaps:fold(\n\t\tfun(_Device, {prepare, _}, Acc) -> Acc + 1;\n\t\t   (_Device, _, Acc) -> Acc\n\t\tend,\n\t\t0,\n\t\tState#state.device_locks\n\t);\ncount_prepare_locks(#state{ device_limit = false } = State) ->\n\tmaps:fold(\n\t\tfun(_Device, prepare, Acc) -> Acc + 1;\n\t\t   (_Device, _, Acc) -> Acc\n\t\tend,\n\t\t0,\n\t\tState#state.store_id_locks\n\t).\n\nquery_lock(StoreID, #state{ device_limit = true } = State) ->\n\tDevice = maps:get(StoreID, State#state.store_id_to_device),\n\tDeviceLock = maps:get(Device, State#state.device_locks, sync),\n\tcase DeviceLock of\n\t\tsync -> sync;\n\t\t{prepare, StoreID} -> prepare;\n\t\t{repack, StoreID} -> repack;\n\t\t_ -> paused\n\tend;\nquery_lock(StoreID, #state{ device_limit = false } = State) ->\n\tmaps:get(StoreID, State#state.store_id_locks, sync).\n\nupdate_lock(StoreID, Lock, #state{ device_limit = true } = State) ->\n\tDevice = maps:get(StoreID, State#state.store_id_to_device),\n\tDeviceLocks = case Lock of\n\t\tpaused -> State#state.device_locks;\n\t\tsync -> maps:put(Device, sync, State#state.device_locks);\n\t\tprepare -> maps:put(Device, {prepare, StoreID}, State#state.device_locks);\n\t\trepack -> maps:put(Device, {repack, StoreID}, State#state.device_locks)\n\tend,\n\tState#state{device_locks = DeviceLocks};\nupdate_lock(StoreID, Lock, #state{ device_limit = false } = State) ->\n\tStoreIDLocks = maps:put(StoreID, Lock, State#state.store_id_locks),\n\tState#state{store_id_locks = StoreIDLocks}.\n\nlog_locks(State) ->\n\tLogs = do_log_locks(State),\n\tlists:foreach(fun(Log) -> ?LOG_INFO(Log) end, Logs).\n\ndo_log_locks(#state{ device_limit = true } = State) ->\n\tStoreIDToDevice = State#state.store_id_to_device,\n\tDeviceLocks = State#state.device_locks,\n\tSortedStoreIDList = lists:sort(\n\t\tfun({StoreID1, Device1}, {StoreID2, Device2}) ->\n\t\t\tcase Device1 =:= Device2 of\n\t\t\t\ttrue -> StoreID1 =< StoreID2;\n\t\t\t\tfalse -> Device1 < Device2\n\t\t\tend\n\t\tend,\n\t\tmaps:to_list(StoreIDToDevice)),\n\tlists:foldr(\n\t\tfun({StoreID, Device}, Acc) ->\n\t\t\tDeviceLock = maps:get(Device, DeviceLocks, sync),\n\t\t\tStatus = case DeviceLock of\n\t\t\t\tsync -> sync;\n\t\t\t\t{prepare, StoreID} -> prepare;\n\t\t\t\t{repack, StoreID} -> repack;\n\t\t\t\t_ -> paused\n\t\t\tend,\n\t\t\t[\n\t\t\t\t[{event, device_lock_status}, {device, Device}, {store_id, StoreID}, {status, Status}] \n\t\t\t\t| Acc\n\t\t\t]\n\t\tend,\n\t\t[],\n\t\tSortedStoreIDList\n\t);\ndo_log_locks(#state{ device_limit = false } = State) ->\n\tStoreIDToDevice = State#state.store_id_to_device,\n\tStoreIDLocks = State#state.store_id_locks,\n\tSortedStoreIDList = lists:sort(\n\t\tfun({StoreID1, Device1}, {StoreID2, Device2}) ->\n\t\t\tcase Device1 =:= Device2 of\n\t\t\t\ttrue -> StoreID1 =< StoreID2;\n\t\t\t\tfalse -> Device1 < Device2\n\t\t\tend\n\t\tend,\n\t\tmaps:to_list(StoreIDToDevice)),\n\tlists:foldr(\n\t\tfun({StoreID, Device}, Acc) ->\n\t\t\tStatus = maps:get(StoreID, StoreIDLocks, sync),\n\t\t\t[\n\t\t\t\t[{event, device_lock_status}, {device, Device}, {store_id, StoreID}, {status, Status}] \n\t\t\t\t| 
Acc\n\t\t\t]\n\t\tend,\n\t\t[],\n\t\tSortedStoreIDList\n\t).\n\n%%%===================================================================\n%%% Tests.\n%%%===================================================================\ndevice_locks_test_() ->\n\t[\n\t\t{timeout, 30, fun test_acquire_lock/0},\n\t\t{timeout, 30, fun test_acquire_lock_without_device_limit/0},\n\t\t{timeout, 30, fun test_release_lock/0},\n\t\t{timeout, 30, fun test_release_lock_without_device_limit/0},\n\t\t{timeout, 30, fun test_count_prepare_locks/0},\n\t\t{timeout, 30, fun test_log_locks/0}\n\t].\n\ntest_acquire_lock() ->\n\tState = #state{\n\t\tstore_id_to_device = #{\n\t\t\t\"storage_module_0_unpacked\" => \"device1\",\n\t\t\t\"storage_module_1_unpacked\" => \"device1\",\n\t\t\t\"storage_module_2_unpacked\" => \"device2\",\n\t\t\t\"storage_module_3_unpacked\" => \"device2\",\n\t\t\t\"storage_module_4_unpacked\" => \"device3\",\n\t\t\t\"storage_module_5_unpacked\" => \"device3\"\n\t\t},\n\t\tdevice_locks = #{\n\t\t\t\"device1\" => sync,\n\t\t\t\"device2\" => {prepare, \"storage_module_2_unpacked\"},\n\t\t\t\"device3\" => {repack, \"storage_module_4_unpacked\"}\n\t\t},\n\t\tnum_replica_2_9_workers = 2\n\t},\n\n\t?assertEqual(\n\t\t{true, State}, \n\t\tdo_acquire_lock(sync, \"storage_module_0_unpacked\", State)),\n\t?assertEqual(\n\t\t{false, State}, \n\t\tdo_acquire_lock(sync, \"storage_module_2_unpacked\", State)),\n\t?assertEqual(\n\t\t{false, State}, \n\t\tdo_acquire_lock(sync, \"storage_module_3_unpacked\", State)),\n\t?assertEqual(\n\t\t{false, State}, \n\t\tdo_acquire_lock(sync, \"storage_module_4_unpacked\", State)),\n\n\t?assertEqual(\n\t\t{false, State#state{num_replica_2_9_workers = 1}},\n\t\tdo_acquire_lock(prepare, \"storage_module_0_unpacked\",\n\t\t\tState#state{num_replica_2_9_workers = 1})),\n\t?assertEqual(\n\t\t{true, State#state{device_locks = #{\n\t\t\t\"device1\" => {prepare, \"storage_module_0_unpacked\"},\n\t\t\t\"device2\" => {prepare, \"storage_module_2_unpacked\"},\n\t\t\t\"device3\" => {repack, \"storage_module_4_unpacked\"}\n\t\t}}}, \n\t\tdo_acquire_lock(prepare, \"storage_module_0_unpacked\", State)),\n\t?assertEqual(\n\t\t{true, State}, \n\t\tdo_acquire_lock(prepare, \"storage_module_2_unpacked\", State)),\n\t?assertEqual(\n\t\t{false, State}, \n\t\tdo_acquire_lock(prepare, \"storage_module_3_unpacked\", State)),\n\t?assertEqual(\n\t\t{false, State}, \n\t\tdo_acquire_lock(prepare, \"storage_module_4_unpacked\", State)),\n\t\n\t?assertEqual(\n\t\t{true, State#state{device_locks = #{\n\t\t\t\"device1\" => {repack, \"storage_module_0_unpacked\"},\n\t\t\t\"device2\" => {prepare, \"storage_module_2_unpacked\"},\n\t\t\t\"device3\" => {repack, \"storage_module_4_unpacked\"}\n\t\t}}}, \n\t\tdo_acquire_lock(repack, \"storage_module_0_unpacked\", State)),\n\t?assertEqual(\n\t\t{false, State},\n\t\tdo_acquire_lock(repack, \"storage_module_2_unpacked\", State)),\n\t?assertEqual(\n\t\t{false, State}, \n\t\tdo_acquire_lock(repack, \"storage_module_3_unpacked\", State)),\n\t?assertEqual(\n\t\t{true, State}, \n\t\tdo_acquire_lock(repack, \"storage_module_4_unpacked\", State)),\n\t?assertEqual(\n\t\t{false, State}, \n\t\tdo_acquire_lock(repack, \"storage_module_5_unpacked\", State)).\n\ntest_acquire_lock_without_device_limit() ->\n\tState = #state{\n\t\tstore_id_to_device = #{\n\t\t\t\"storage_module_0_unpacked\" => \"device1\",\n\t\t\t\"storage_module_1_unpacked\" => \"device1\",\n\t\t\t\"storage_module_2_unpacked\" => \"device2\",\n\t\t\t\"storage_module_3_unpacked\" => 
\"device2\",\n\t\t\t\"storage_module_4_unpacked\" => \"device3\",\n\t\t\t\"storage_module_5_unpacked\" => \"device3\"\n\t\t},\n\t\tstore_id_locks = #{\n\t\t\t\"storage_module_0_unpacked\" => sync,\n\t\t\t\"storage_module_1_unpacked\" => sync,\n\t\t\t\"storage_module_2_unpacked\" => prepare,\n\t\t\t\"storage_module_3_unpacked\" => prepare,\n\t\t\t\"storage_module_4_unpacked\" => repack,\n\t\t\t\"storage_module_5_unpacked\" => repack\n\t\t},\n\t\tnum_replica_2_9_workers = 3,\n\t\tdevice_limit = false\n\t},\n\n\t?assertEqual(\n\t\t{true, State}, \n\t\tdo_acquire_lock(sync, \"storage_module_0_unpacked\", State)),\n\t?assertEqual(\n\t\t{false, State}, \n\t\tdo_acquire_lock(sync, \"storage_module_2_unpacked\", State)),\n\t?assertEqual(\n\t\t{false, State}, \n\t\tdo_acquire_lock(sync, \"storage_module_3_unpacked\", State)),\n\t?assertEqual(\n\t\t{false, State}, \n\t\tdo_acquire_lock(sync, \"storage_module_4_unpacked\", State)),\n\n\t\n\t?assertEqual(\n\t\t{false, State#state{num_replica_2_9_workers = 2}},\n\t\tdo_acquire_lock(prepare, \"storage_module_0_unpacked\",\n\t\t\tState#state{num_replica_2_9_workers = 2})),\n\t?assertEqual(\n\t\t{true, State#state{store_id_locks = #{\n\t\t\t\"storage_module_0_unpacked\" => prepare,\n\t\t\t\"storage_module_1_unpacked\" => sync,\n\t\t\t\"storage_module_2_unpacked\" => prepare,\n\t\t\t\"storage_module_3_unpacked\" => prepare,\n\t\t\t\"storage_module_4_unpacked\" => repack,\n\t\t\t\"storage_module_5_unpacked\" => repack\n\t\t}}},\n\t\tdo_acquire_lock(prepare, \"storage_module_0_unpacked\", State)),\n\t?assertEqual(\n\t\t{true, State}, \n\t\tdo_acquire_lock(prepare, \"storage_module_2_unpacked\", State)),\n\t?assertEqual(\n\t\t{true, State}, \n\t\tdo_acquire_lock(prepare, \"storage_module_3_unpacked\", State)),\n\t?assertEqual(\n\t\t{false, State}, \n\t\tdo_acquire_lock(prepare, \"storage_module_4_unpacked\", State)),\n\t\n\t?assertEqual(\n\t\t{true, State#state{store_id_locks = #{\n\t\t\t\"storage_module_0_unpacked\" => repack,\n\t\t\t\"storage_module_1_unpacked\" => sync,\n\t\t\t\"storage_module_2_unpacked\" => prepare,\n\t\t\t\"storage_module_3_unpacked\" => prepare,\n\t\t\t\"storage_module_4_unpacked\" => repack,\n\t\t\t\"storage_module_5_unpacked\" => repack\n\t\t}}},\n\t\tdo_acquire_lock(repack, \"storage_module_0_unpacked\", State)),\n\t?assertEqual(\n\t\t{false, State},\n\t\tdo_acquire_lock(repack, \"storage_module_2_unpacked\", State)),\n\t?assertEqual(\n\t\t{false, State}, \n\t\tdo_acquire_lock(repack, \"storage_module_3_unpacked\", State)),\n\t?assertEqual(\n\t\t{true, State}, \n\t\tdo_acquire_lock(repack, \"storage_module_4_unpacked\", State)),\n\t?assertEqual(\n\t\t{true, State}, \n\t\tdo_acquire_lock(repack, \"storage_module_5_unpacked\", State)).\n\ntest_release_lock() ->\n\tState = #state{\n\t\tstore_id_to_device = #{\n\t\t\t\"storage_module_0_unpacked\" => \"device1\",\n\t\t\t\"storage_module_1_unpacked\" => \"device1\",\n\t\t\t\"storage_module_2_unpacked\" => \"device2\",\n\t\t\t\"storage_module_3_unpacked\" => \"device2\",\n\t\t\t\"storage_module_4_unpacked\" => \"device3\",\n\t\t\t\"storage_module_5_unpacked\" => \"device3\",\n\t\t\t\"storage_module_6_unpacked\" => \"device4\"\n\t\t},\n\t\tdevice_locks = DeviceLocks = #{\n\t\t\t\"device1\" => sync,\n\t\t\t\"device2\" => {prepare, \"storage_module_2_unpacked\"},\n\t\t\t\"device3\" => {repack, \"storage_module_4_unpacked\"}\n\t\t}\n\t},\n\n\t?assertEqual(\n\t\tState, \n\t\tdo_release_lock(sync, \"storage_module_0_unpacked\", State)),\n\t?assertEqual(\n\t\tState, \n\t\tdo_release_lock(sync, 
\"storage_module_2_unpacked\", State)),\n\t?assertEqual(\n\t\tState, \n\t\tdo_release_lock(sync, \"storage_module_3_unpacked\", State)),\n\t?assertEqual(\n\t\tState, \n\t\tdo_release_lock(sync, \"storage_module_4_unpacked\", State)),\n\t?assertEqual(\n\t\tState#state{ device_locks = DeviceLocks#{ \"device4\" => sync }},\n\t\tdo_release_lock(sync, \"storage_module_6_unpacked\", State)),\n\n\t?assertEqual(\n\t\tState, \n\t\tdo_release_lock(prepare, \"storage_module_0_unpacked\", State)),\n\t?assertEqual(\n\t\tState#state{device_locks = #{\n\t\t\t\"device1\" => sync,\n\t\t\t\"device2\" => sync,\n\t\t\t\"device3\" => {repack, \"storage_module_4_unpacked\"}\n\t\t}}, \n\t\tdo_release_lock(prepare, \"storage_module_2_unpacked\", State)),\n\t?assertEqual(\n\t\tState, \n\t\tdo_release_lock(prepare, \"storage_module_3_unpacked\", State)),\n\t?assertEqual(\n\t\tState, \n\t\tdo_release_lock(prepare, \"storage_module_4_unpacked\", State)),\n\n\t?assertEqual(\n\t\tState, \n\t\tdo_release_lock(repack, \"storage_module_0_unpacked\", State)),\n\t?assertEqual(\n\t\tState, \n\t\tdo_release_lock(repack, \"storage_module_2_unpacked\", State)),\n\t?assertEqual(\n\t\tState, \n\t\tdo_release_lock(repack, \"storage_module_3_unpacked\", State)),\n\t?assertEqual(\n\t\tState#state{device_locks = #{\n\t\t\t\"device1\" => sync,\n\t\t\t\"device2\" => {prepare, \"storage_module_2_unpacked\"},\n\t\t\t\"device3\" => sync\n\t\t}}, \n\t\tdo_release_lock(repack, \"storage_module_4_unpacked\", State)),\n\t?assertEqual(\n\t\tState, \n\t\tdo_release_lock(repack, \"storage_module_5_unpacked\", State)).\n\ntest_release_lock_without_device_limit() ->\n\tState = #state{\n\t\tstore_id_to_device = #{\n\t\t\t\"storage_module_0_unpacked\" => \"device1\",\n\t\t\t\"storage_module_1_unpacked\" => \"device1\",\n\t\t\t\"storage_module_2_unpacked\" => \"device2\",\n\t\t\t\"storage_module_3_unpacked\" => \"device2\",\n\t\t\t\"storage_module_4_unpacked\" => \"device3\",\n\t\t\t\"storage_module_5_unpacked\" => \"device3\",\n\t\t\t\"storage_module_6_unpacked\" => \"device4\"\n\t\t},\n\t\tstore_id_locks = StoreIDLocks = #{\n\t\t\t\"storage_module_0_unpacked\" => sync,\n\t\t\t\"storage_module_1_unpacked\" => sync,\n\t\t\t\"storage_module_2_unpacked\" => prepare,\n\t\t\t\"storage_module_3_unpacked\" => prepare,\n\t\t\t\"storage_module_4_unpacked\" => repack,\n\t\t\t\"storage_module_5_unpacked\" => repack\n\t\t},\n\t\tdevice_limit = false\n\t},\n\n\t?assertEqual(\n\t\tState, \n\t\tdo_release_lock(sync, \"storage_module_0_unpacked\", State)),\n\t?assertEqual(\n\t\tState, \n\t\tdo_release_lock(sync, \"storage_module_2_unpacked\", State)),\n\t?assertEqual(\n\t\tState, \n\t\tdo_release_lock(sync, \"storage_module_3_unpacked\", State)),\n\t?assertEqual(\n\t\tState, \n\t\tdo_release_lock(sync, \"storage_module_4_unpacked\", State)),\n\t?assertEqual(\n\t\tState#state{ store_id_locks = StoreIDLocks#{ \"storage_module_6_unpacked\" => sync }},\n\t\tdo_release_lock(sync, \"storage_module_6_unpacked\", State)),\n\n\t?assertEqual(\n\t\tState, \n\t\tdo_release_lock(prepare, \"storage_module_0_unpacked\", State)),\n\t?assertEqual(\n\t\tState#state{store_id_locks =  #{\n\t\t\t\"storage_module_0_unpacked\" => sync,\n\t\t\t\"storage_module_1_unpacked\" => sync,\n\t\t\t\"storage_module_2_unpacked\" => sync,\n\t\t\t\"storage_module_3_unpacked\" => prepare,\n\t\t\t\"storage_module_4_unpacked\" => repack,\n\t\t\t\"storage_module_5_unpacked\" => repack\n\t\t}}, \n\t\tdo_release_lock(prepare, \"storage_module_2_unpacked\", 
State)),\n\t?assertEqual(\n\t\tState#state{store_id_locks = #{\n\t\t\t\"storage_module_0_unpacked\" => sync,\n\t\t\t\"storage_module_1_unpacked\" => sync,\n\t\t\t\"storage_module_2_unpacked\" => prepare,\n\t\t\t\"storage_module_3_unpacked\" => sync,\n\t\t\t\"storage_module_4_unpacked\" => repack,\n\t\t\t\"storage_module_5_unpacked\" => repack\n\t\t}},\n\t\tdo_release_lock(prepare, \"storage_module_3_unpacked\", State)),\n\t?assertEqual(\n\t\tState, \n\t\tdo_release_lock(prepare, \"storage_module_4_unpacked\", State)),\n\n\t?assertEqual(\n\t\tState, \n\t\tdo_release_lock(repack, \"storage_module_0_unpacked\", State)),\n\t?assertEqual(\n\t\tState, \n\t\tdo_release_lock(repack, \"storage_module_2_unpacked\", State)),\n\t?assertEqual(\n\t\tState, \n\t\tdo_release_lock(repack, \"storage_module_3_unpacked\", State)),\n\t?assertEqual(\n\t\tState#state{store_id_locks =  #{\n\t\t\t\"storage_module_0_unpacked\" => sync,\n\t\t\t\"storage_module_1_unpacked\" => sync,\n\t\t\t\"storage_module_2_unpacked\" => prepare,\n\t\t\t\"storage_module_3_unpacked\" => prepare,\n\t\t\t\"storage_module_4_unpacked\" => sync,\n\t\t\t\"storage_module_5_unpacked\" => repack\n\t\t}}, \n\t\tdo_release_lock(repack, \"storage_module_4_unpacked\", State)),\n\t?assertEqual(\n\t\tState#state{store_id_locks =  #{\n\t\t\t\"storage_module_0_unpacked\" => sync,\n\t\t\t\"storage_module_1_unpacked\" => sync,\n\t\t\t\"storage_module_2_unpacked\" => prepare,\n\t\t\t\"storage_module_3_unpacked\" => prepare,\n\t\t\t\"storage_module_4_unpacked\" => repack,\n\t\t\t\"storage_module_5_unpacked\" => sync\n\t\t}}, \n\t\tdo_release_lock(repack, \"storage_module_5_unpacked\", State)).\n\ntest_count_prepare_locks() ->\n\tState = #state{\n\t\tstore_id_to_device = #{\n\t\t\t\"storage_module_0_unpacked\" => \"device1\",\n\t\t\t\"storage_module_1_unpacked\" => \"device1\",\n\t\t\t\"storage_module_2_unpacked\" => \"device2\",\n\t\t\t\"storage_module_3_unpacked\" => \"device2\",\n\t\t\t\"storage_module_4_unpacked\" => \"device3\",\n\t\t\t\"storage_module_5_unpacked\" => \"device3\"\n\t\t},\n\t\tdevice_locks = #{\n\t\t\t\"device1\" => {prepare, \"storage_module_0_unpacked\"},\n\t\t\t\"device2\" => {prepare, \"storage_module_2_unpacked\"},\n\t\t\t\"device3\" => {repack, \"storage_module_4_unpacked\"}\n\t\t},\n\t\tstore_id_locks = #{\n\t\t\t\"storage_module_0_unpacked\" => sync,\n\t\t\t\"storage_module_1_unpacked\" => sync,\n\t\t\t\"storage_module_2_unpacked\" => prepare,\n\t\t\t\"storage_module_3_unpacked\" => prepare,\n\t\t\t\"storage_module_4_unpacked\" => prepare,\n\t\t\t\"storage_module_5_unpacked\" => repack\n\t\t}\n\t},\n\t\n\t?assertEqual(2, count_prepare_locks(State#state{ device_limit = true })),\n\t?assertEqual(3, count_prepare_locks(State#state{ device_limit = false })).\n\ntest_log_locks() ->\n\tState = #state{\n\t\tstore_id_to_device = #{\n\t\t\t\"storage_module_0_unpacked\" => \"device1\",\n\t\t\t\"storage_module_1_unpacked\" => \"device1\",\n\t\t\t\"storage_module_2_unpacked\" => \"device2\",\n\t\t\t\"storage_module_3_unpacked\" => \"device2\",\n\t\t\t\"storage_module_4_unpacked\" => \"device3\",\n\t\t\t\"storage_module_5_unpacked\" => \"device3\"\n\t\t},\n\t\tdevice_locks = #{\n\t\t\t\"device1\" => sync,\n\t\t\t\"device2\" => {prepare, \"storage_module_2_unpacked\"},\n\t\t\t\"device3\" => {repack, \"storage_module_4_unpacked\"}\n\t\t},\n\t\tstore_id_locks = #{\n\t\t\t\"storage_module_0_unpacked\" => sync,\n\t\t\t\"storage_module_1_unpacked\" => sync,\n\t\t\t\"storage_module_2_unpacked\" => prepare,\n\t\t\t\"storage_module_3_unpacked\" => 
prepare,\n\t\t\t\"storage_module_4_unpacked\" => repack,\n\t\t\t\"storage_module_5_unpacked\" => repack\n\t\t}\n\t},\n\n\t?assertEqual(\n\t\t[\n\t\t\t[{event, device_lock_status}, {device, \"device1\"}, {store_id, \"storage_module_0_unpacked\"}, {status, sync}],\n\t\t\t[{event, device_lock_status}, {device, \"device1\"}, {store_id, \"storage_module_1_unpacked\"}, {status, sync}],\n\t\t\t[{event, device_lock_status}, {device, \"device2\"}, {store_id, \"storage_module_2_unpacked\"}, {status, prepare}],\n\t\t\t[{event, device_lock_status}, {device, \"device2\"}, {store_id, \"storage_module_3_unpacked\"}, {status, paused}],\n\t\t\t[{event, device_lock_status}, {device, \"device3\"}, {store_id, \"storage_module_4_unpacked\"}, {status, repack}],\n\t\t\t[{event, device_lock_status}, {device, \"device3\"}, {store_id, \"storage_module_5_unpacked\"}, {status, paused}]\n\t\t],\n\t\tdo_log_locks(State#state{device_limit = true})),\n\t?assertEqual(\n\t\t[\n\t\t\t[{event, device_lock_status}, {device, \"device1\"}, {store_id, \"storage_module_0_unpacked\"}, {status, sync}],\n\t\t\t[{event, device_lock_status}, {device, \"device1\"}, {store_id, \"storage_module_1_unpacked\"}, {status, sync}],\n\t\t\t[{event, device_lock_status}, {device, \"device2\"}, {store_id, \"storage_module_2_unpacked\"}, {status, prepare}],\n\t\t\t[{event, device_lock_status}, {device, \"device2\"}, {store_id, \"storage_module_3_unpacked\"}, {status, prepare}],\n\t\t\t[{event, device_lock_status}, {device, \"device3\"}, {store_id, \"storage_module_4_unpacked\"}, {status, repack}],\n\t\t\t[{event, device_lock_status}, {device, \"device3\"}, {store_id, \"storage_module_5_unpacked\"}, {status, repack}]\n\t\t],\n\t\tdo_log_locks(State#state{device_limit = false}))."
  },
  {
    "path": "apps/arweave/src/ar_diff_dag.erl",
    "content": "%%% @doc A directed acyclic graph with a single sink node. The sink node is supposed\n%%% to store some big expensive to replicate entity (e.g., a wallet tree). Edges store\n%%% diffs. To compute a representation of the entity corresponding to a particular vertice,\n%%% one needs to walk from this vertice down to the sink node, collect all the diffs, and\n%%% apply them in the reverse order.\n-module(ar_diff_dag).\n\n-export([new/3, get_sink/1, is_sink/2, is_node/2, add_node/5, update_leaf_source/3,\n\t\tupdate_sink/3, get_metadata/2, get_sink_metadata/1, reconstruct/3, move_sink/4,\n\t\tfilter/2]).\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Create a new DAG with a sink node under the given identifier storing the given entity.\nnew(ID, Entity, Metadata) ->\n\t{#{ ID => {sink, Entity, {0, Metadata}} }, ID, #{ ID => sets:new() }}.\n\n%% @doc Return the entity stored in the sink node.\nget_sink(DAG) ->\n\tID = element(2, DAG),\n\telement(2, maps:get(ID, element(1, DAG))).\n\n%% @doc Return true if the given identifier is the identifier of the sink node.\nis_sink({_Sinks, ID, _Sources}, ID) ->\n\ttrue;\nis_sink(_DAG, _ID) ->\n\tfalse.\n\n%% @doc Return true if the node with the given identifier exists.\nis_node({Sinks, _Sink, _Sources}, ID) ->\n\tmaps:is_key(ID, Sinks).\n\n%% @doc Create a node with an edge connecting the given source and sink identifiers,\n%% directed towards the given sink identifier.\n%% If the node with the given sink identifier does not exist or the node with the given source\n%% identifier already exists, the call fails with a badkey exception.\nadd_node(DAG, SourceID, SinkID, Diff, Metadata) when SourceID /= SinkID ->\n\tassert_exists(SinkID, DAG),\n\tassert_not_exists(SourceID, DAG),\n\t{Sinks, Sink, Sources} = DAG,\n\tSinkSources = maps:get(SinkID, Sources, sets:new()),\n\tUpdatedSources = Sources#{\n\t\tSinkID => sets:add_element(SourceID, SinkSources),\n\t\tSourceID => sets:new()\n\t},\n\t{_ID, _Entity, {Counter, _Metadata}} = maps:get(SinkID, Sinks),\n\t{Sinks#{ SourceID => {SinkID, Diff, {Counter + 1, Metadata}} }, Sink, UpdatedSources}.\n\n%% @doc Update the given node via the given function of a diff and a metadata, which\n%% returns a \"new node identifier, new diff, new metadata\" triplet. 
The node must be\n%% a source (must have a sink) and a leaf (must be a sink for no node).\n%% If the node does not exist or is not a leaf source, the call fails with a badkey exception.\nupdate_leaf_source(DAG, ID, UpdateFun) ->\n\tassert_exists(ID, DAG),\n\tassert_not_sink(ID, DAG),\n\t{#{ ID := {SinkID, Diff, {Counter, Metadata}} } = Sinks, Sink, Sources} = DAG,\n\tcase sets:is_empty(maps:get(ID, Sources, sets:new())) of\n\t\tfalse ->\n\t\t\terror({badkey, ID});\n\t\ttrue ->\n\t\t\t{NewID, UpdatedDiff, UpdatedMetadata} = UpdateFun(Diff, Metadata),\n\t\t\tSinks2 = maps:remove(ID, Sinks),\n\t\t\tSources2 = maps:remove(ID, Sources),\n\t\t\tSet = sets:add_element(NewID, sets:del_element(ID, maps:get(SinkID, Sources))),\n\t\t\tSources3 = Sources2#{ SinkID => Set },\n\t\t\t{Sinks2#{ NewID => {SinkID, UpdatedDiff, {Counter, UpdatedMetadata}} }, Sink,\n\t\t\t\t\tSources3}\n\tend.\n\n%% @doc Update the sink via the given function of an entity and a metadata, which\n%% returns a \"new node identifier, new entity, new metadata\" triplet.\n%% If the node does not exist or is not a sink, the call fails with a badkey exception.\nupdate_sink({Sinks, ID, Sources}, ID, UpdateFun) ->\n\t#{ ID := {sink, Entity, {Counter, Metadata}} } = Sinks,\n\t{NewID, NewEntity, NewMetadata} = UpdateFun(Entity, Metadata),\n\tSinks2 = maps:remove(ID, Sinks),\n\tSinks3 = Sinks2#{ NewID => {sink, NewEntity, {Counter, NewMetadata}} },\n\t{Set, Sources2} =\n\t\tcase maps:take(ID, Sources) of\n\t\t\terror ->\n\t\t\t\t{sets:new(), Sources};\n\t\t\tUpdate ->\n\t\t\t\tUpdate\n\t\tend,\n\tSinks4 = sets:fold(\n\t\tfun(SourceID, Acc) ->\n\t\t\t{ID, Diff, Meta} = maps:get(SourceID, Acc),\n\t\t\tAcc#{ SourceID => {NewID, Diff, Meta} }\n\t\tend,\n\t\tSinks3,\n\t\tSet\n\t),\n\tSources3 = Sources2#{ NewID => Set },\n\t{Sinks4, NewID, Sources3};\nupdate_sink(_DAG, ID, _Fun) ->\n\terror({badkey, ID}).\n\n%% @doc Return metadata stored at the given node. If the node with the given identifier\n%% does not exist, the call fails with a badkey exception.\nget_metadata(DAG, ID) ->\n\telement(2, element(3, maps:get(ID, element(1, DAG)))).\n\n%% @doc Return metadata stored at the sink node. If the node with the given identifier\n%% does not exist, the call fails with a badkey exception.\nget_sink_metadata(DAG) ->\n\tID = element(2, DAG),\n\tget_metadata(DAG, ID).\n\n%% @doc Reconstruct the entity corresponding to the given node using\n%% the given diff application function - a function of a diff and an entity.\n%% If the node with the given identifier does not exist, returns {error, not_found}.\nreconstruct(DAG, ID, ApplyDiffFun) ->\n\tSinks = element(1, DAG),\n\tcase maps:is_key(ID, Sinks) of\n\t\ttrue ->\n\t\t\treconstruct(DAG, ID, ApplyDiffFun, []);\n\t\tfalse ->\n\t\t\t{error, not_found}\n\tend.\n\n%% @doc Make the given node the sink node. 
The diffs are reversed\n%% according to the given function of a diff and an entity.\n%% The new entity is constructed by applying the diffs on the path from the previous\n%% sink to the new one using the given diff application function of a diff and an entity.\n%% If the node with the given identifier does not exist, the call fails with a badkey exception.\nmove_sink(DAG, ID, ApplyDiffFun, ReverseDiffFun) ->\n\tassert_exists(ID, DAG),\n\tmove_sink(DAG, ID, ApplyDiffFun, ReverseDiffFun, []).\n\n%% @doc Remove the nodes further away from the sink than the given distance.\nfilter({Sinks, ID, Sources}, Depth) ->\n\t{sink, _Entity, {SinkCounter, _Metadata}} = maps:get(ID, Sinks),\n\t{ToRemove, Sources2} = filter(maps:iterator(Sinks), SinkCounter, Depth, Sources, sets:new()),\n\t{UpdatedSinks, UpdatedSources} = sets:fold(\n\t\tfun(RemoveID, {CurrentSinks, CurrentSources}) ->\n\t\t\t#{ RemoveID := {SinkID, _CurrentEntity, _CurrentMetadata} } = CurrentSinks,\n\t\t\tCurrentSources2 =\n\t\t\t\tcase sets:is_element(SinkID, ToRemove) of\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\tSet = maps:get(SinkID, CurrentSources, sets:new()),\n\t\t\t\t\t\tmaps:put(\n\t\t\t\t\t\t\tSinkID,\n\t\t\t\t\t\t\tsets:del_element(RemoveID, Set),\n\t\t\t\t\t\t\tCurrentSources\n\t\t\t\t\t\t);\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tCurrentSources\n\t\t\t\tend,\n\t\t\t{maps:remove(RemoveID, CurrentSinks), CurrentSources2}\n\t\tend,\n\t\t{Sinks, Sources2},\n\t\tToRemove\n\t),\n\t{UpdatedSinks, ID, UpdatedSources}.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nassert_exists(ID, {Sinks, _, _}) ->\n\tcase maps:is_key(ID, Sinks) of\n\t\ttrue ->\n\t\t\tok;\n\t\tfalse ->\n\t\t\terror({badkey, ID})\n\tend.\n\nassert_not_exists(ID, {Sinks, _, _}) ->\n\tcase maps:is_key(ID, Sinks) of\n\t\ttrue ->\n\t\t\terror({badkey, ID});\n\t\tfalse ->\n\t\t\tok\n\tend.\n\nassert_not_sink(ID, {_, ID, _}) ->\n\terror({badkey, ID});\nassert_not_sink(_ID, _DAG) ->\n\tok.\n\nreconstruct(DAG, ID, ApplyDiffFun, Diffs) ->\n\tcase DAG of\n\t\t{#{ ID := {sink, Entity, _Meta} }, ID, _} ->\n\t\t\tlists:foldl(ApplyDiffFun, Entity, Diffs);\n\t\t{#{ ID := {SinkID, Diff, _Meta} }, _Sink, _Sinks} ->\n\t\t\treconstruct(DAG, SinkID, ApplyDiffFun, [Diff | Diffs])\n\tend.\n\nmove_sink(DAG, ID, ApplyDiffFun, ReverseDiffFun, Diffs) ->\n\tcase DAG of\n\t\t{#{ ID := {sink, Entity, Metadata} }, ID, _Sources} ->\n\t\t\t{UpdatedSinkID, UpdatedEntity, UpdatedMetadata, UpdatedDAG} = lists:foldl(\n\t\t\t\tfun({SinkID, Diff, Meta}, {SourceID, CurrentEntity, CurrentMeta, CurrentDAG}) ->\n\t\t\t\t\tReversedDiff = ReverseDiffFun(Diff, CurrentEntity),\n\t\t\t\t\t{Sinks, _Sink, Sources} = CurrentDAG,\n\t\t\t\t\tSinks2 = Sinks#{ SourceID => {SinkID, ReversedDiff, CurrentMeta} },\n\t\t\t\t\tSourceIDSet2 = sets:del_element(SinkID, maps:get(SourceID, Sources)),\n\t\t\t\t\tSinkIDSet2 =\n\t\t\t\t\t\tsets:add_element(SourceID, maps:get(SinkID, Sources, sets:new())),\n\t\t\t\t\tSources2 = Sources#{ SinkID => SinkIDSet2, SourceID => SourceIDSet2 },\n\t\t\t\t\t{SinkID, ApplyDiffFun(Diff, CurrentEntity), Meta, {Sinks2, SinkID, Sources2}}\n\t\t\t\tend,\n\t\t\t\t{ID, Entity, Metadata, DAG},\n\t\t\t\tDiffs\n\t\t\t),\n\t\t\t{UpdatedSinks, UpdatedSinkID, UpdatedSources} = UpdatedDAG,\n\t\t\tUpdatedSinks2 = UpdatedSinks#{\n\t\t\t\tUpdatedSinkID => {sink, UpdatedEntity, UpdatedMetadata}\n\t\t\t},\n\t\t\t{UpdatedSinks2, UpdatedSinkID, UpdatedSources};\n\t\t{#{ ID := {SinkID, Diff, Metadata} }, 
_Sink, _Sinks} ->\n\t\t\tmove_sink(DAG, SinkID, ApplyDiffFun, ReverseDiffFun, [{ID, Diff, Metadata} | Diffs])\n\tend.\n\nfilter(SinkIterator, SinkCounter, Depth, Sources, ToRemove) ->\n\tcase maps:next(SinkIterator) of\n\t\tnone ->\n\t\t\t{ToRemove, Sources};\n\t\t{ID, {_ID, _Entity, {Counter, _Metadata}}, NextIterator} ->\n\t\t\tcase sets:is_element(ID, ToRemove) of\n\t\t\t\ttrue ->\n\t\t\t\t\tfilter(NextIterator, SinkCounter, Depth, Sources, ToRemove);\n\t\t\t\tfalse ->\n\t\t\t\t\tcase abs(Counter - SinkCounter) =< Depth of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\tfilter(NextIterator, SinkCounter, Depth, Sources, ToRemove);\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t{Sources2, ToRemove2} =\n\t\t\t\t\t\t\t\textend_with_subtree_identifiers(ID, {Sources, ToRemove}),\n\t\t\t\t\t\t\tfilter(NextIterator, SinkCounter, Depth, Sources2, ToRemove2)\n\t\t\t\t\tend\n\t\t\tend\n\tend.\n\nextend_with_subtree_identifiers(ID, {Sources, ToRemove}) ->\n\tsets:fold(\n\t\tfun(RemoveID, Acc) ->\n\t\t\textend_with_subtree_identifiers(RemoveID, Acc)\n\t\tend,\n\t\t{maps:remove(ID, Sources), sets:add_element(ID, ToRemove)},\n\t\tmaps:get(ID, Sources, sets:new())\n\t).\n\n%%%===================================================================\n%%% Tests.\n%%%===================================================================\n\ndiff_dag_test() ->\n\t%% node-1: {0, meta_1}\n\tDAG1 = new(\"node-1\", 0, meta_1),\n\t?assertEqual(0, get_sink(DAG1)),\n\t?assertEqual(DAG1, filter(DAG1, 0)),\n\t?assertEqual(DAG1, filter(DAG1, 1)),\n\t?assertEqual(DAG1, filter(DAG1, 2)),\n\t?assertEqual(0, reconstruct(DAG1, \"node-1\", fun(_Diff, _E) -> not_called end)),\n\t?assertEqual(\n\t\t{error, not_found},\n\t\treconstruct(DAG1, \"node-2\", fun(_Diff, _E) -> not_called end)\n\t),\n\t?assertEqual(meta_1, get_metadata(DAG1, \"node-1\")),\n\t%% node-1: {0, meta_1} <- node-2-1: {1, meta_2_1}\n\tDAG2 = add_node(DAG1, \"node-2-1\", \"node-1\", 1, meta_2_1),\n\t?assertEqual(0, get_sink(DAG2)),\n\t?assertEqual(DAG2, filter(DAG2, 1)),\n\t?assertEqual(DAG1, filter(DAG2, 0)),\n\t?assertEqual(DAG2, filter(DAG2, 2)),\n\t?assertEqual(DAG2, filter(DAG2, 3)),\n\t?assertEqual(1, reconstruct(DAG2, \"node-2-1\", fun(Diff, E) -> E + Diff end)),\n\t?assertEqual(-1, reconstruct(DAG2, \"node-2-1\", fun(Diff, E) -> E - Diff end)),\n\t?assertEqual(meta_1, get_metadata(DAG2, \"node-1\")),\n\t?assertEqual(meta_2_1, get_metadata(DAG2, \"node-2-1\")),\n\t%% node-1: {0, meta_1} <- node-2-1: {2, meta_2_2}\n\tDAG3 = update_leaf_source(DAG2, \"node-2-1\", fun(D, _M) -> {\"node-2-1\", D + 1, meta_2_2} end),\n\t?assertEqual(0, get_sink(DAG3)),\n\t?assertEqual(DAG1, filter(DAG3, 0)),\n\t?assertEqual(DAG3, filter(DAG3, 1)),\n\t?assertEqual(2, reconstruct(DAG3, \"node-2-1\", fun(Diff, E) -> E + Diff end)),\n\t?assertEqual(meta_2_2, get_metadata(DAG3, \"node-2-1\")),\n\t?assertException(error, {badkey, \"node-1\"}, update_leaf_source(DAG2, \"node-1\", no_function)),\n\t%% node-1: {0, meta_1} <- node-2-2: {1, meta_2_2}\n\tDAG4 = update_leaf_source(DAG3, \"node-2-1\", fun(D, M) -> {\"node-2-2\", D - 1, M} end),\n\t?assertEqual(0, get_sink(DAG4)),\n\t?assertEqual(1, reconstruct(DAG4, \"node-2-2\", fun(Diff, E) -> E + Diff end)),\n\t?assertEqual(meta_2_2, get_metadata(DAG4, \"node-2-2\")),\n\t?assertException(error, {badkey, \"node-2-1\"}, get_metadata(DAG4, \"node-2-1\")),\n\t%% node-1: {0, meta_1} <- node-2-2: {1, meta_2_2}\n\t%%                     <- node-2-3: {2, meta_2_3}\n\tDAG5 = add_node(DAG4, \"node-2-3\", \"node-1\", 2, meta_2_3),\n\t?assertEqual(1, reconstruct(DAG5, 
\"node-2-2\", fun(Diff, E) -> E + Diff end)),\n\t?assertEqual(2, reconstruct(DAG5, \"node-2-3\", fun(Diff, E) -> E + Diff end)),\n\t%% node-1: {0, meta_1} <- node-2-2: {1, meta_2_2}\n\t%%                     <- node-2-3: {2, meta_2_3} <- node-3-1: {-3, meta_3_1}\n\tDAG6 = add_node(DAG5, \"node-3-1\", \"node-2-3\", -3, meta_3_1),\n\t?assertEqual(1, reconstruct(DAG6, \"node-2-2\", fun(Diff, E) -> E + Diff end)),\n\t?assertEqual(-1, reconstruct(DAG6, \"node-3-1\", fun(Diff, E) -> E + Diff end)),\n\t?assertEqual(DAG5, filter(DAG6, 1)),\n\t?assertEqual(DAG1, filter(DAG6, 0)),\n\t%% node-1: {-2, meta_1} <- node-2-2: {1, meta_2_2}\n\t%%                      -> node-2-3: {3, meta_2_3} -> node-3-1: {-1, meta_3_1}\n\tDAG7 = move_sink(DAG6, \"node-3-1\", fun(Diff, E) -> E + Diff end, fun(Diff, _E) -> -Diff end),\n\t?assertEqual(-1, get_sink(DAG7)),\n\t?assertEqual(1, reconstruct(DAG7, \"node-2-2\", fun(Diff, E) -> E + Diff end)),\n\t?assertEqual(meta_2_2, get_metadata(DAG7, \"node-2-2\")),\n\t?assertEqual(0, reconstruct(DAG7, \"node-1\", fun(Diff, E) -> E + Diff end)),\n\t?assertEqual(meta_1, get_metadata(DAG7, \"node-1\")),\n\t?assertEqual(2, reconstruct(DAG7, \"node-2-3\", fun(Diff, E) -> E + Diff end)),\n\t?assertEqual(meta_2_3, get_metadata(DAG7, \"node-2-3\")),\n\t?assertEqual(-1, reconstruct(DAG7, \"node-3-1\", fun(Diff, E) -> E + Diff end)),\n\t?assertEqual(meta_3_1, get_metadata(DAG7, \"node-3-1\")),\n\t?assert(not is_node(filter(DAG7, 0), \"node-2-3\")),\n\t?assert(not is_node(filter(DAG7, 0), \"node-1\")),\n\t?assert(not is_node(filter(DAG7, 0), \"node-2-2\")),\n\t?assert(is_node(filter(DAG7, 0), \"node-3-1\")),\n\t?assert(is_node(filter(DAG7, 1), \"node-3-1\")),\n\t?assert(is_node(filter(DAG7, 1), \"node-2-3\")),\n\t?assert(not is_node(filter(DAG7, 1), \"node-1\")),\n\t?assert(is_node(filter(DAG7, 1), \"node-3-1\")),\n\t?assert(not is_node(filter(DAG7, 1), \"node-2-2\")),\n\t?assertEqual(DAG7, filter(DAG7, 2)),\n\t?assertEqual(DAG7, filter(DAG7, 3)),\n\t%% node-1: {-1, meta_1} -> node-2-2: {1, meta_2_2}\n\t%%                      <- node-2-3: {2, meta_2_3} <- node-3-1: {-3, meta_3_1}\n\tDAG9 = move_sink(DAG7, \"node-2-2\", fun(Diff, E) -> E + Diff end, fun(Diff, _E) -> -Diff end),\n\t?assertEqual(1, get_sink(DAG9)),\n\t?assertEqual(0, reconstruct(DAG9, \"node-1\", fun(Diff, E) -> E + Diff end)),\n\t?assertEqual(meta_1, get_metadata(DAG9, \"node-1\")),\n\t?assertEqual(2, reconstruct(DAG9, \"node-2-3\", fun(Diff, E) -> E + Diff end)),\n\t?assertEqual(meta_2_3, get_metadata(DAG9, \"node-2-3\")),\n\t?assertEqual(-1, reconstruct(DAG9, \"node-3-1\", fun(Diff, E) -> E + Diff end)),\n\t?assertEqual(meta_3_1, get_metadata(DAG9, \"node-3-1\")),\n\t%% node-1: {-1, meta_1} -> node-2-2: {1, meta_2_2} <- node-3-2: {10, meta_3_2}\n\t%%                      <- node-2-3: {2, meta_2_3} <- node-3-1: {-3, meta_3_1}\n\tDAG10 = add_node(DAG9, \"node-3-2\", \"node-2-2\", 10, meta_3_2),\n\t%% node-1: {-1, meta_1} -> node-2-2: {-10, meta_2_2} -> node-3-2: {11, meta_3_2}\n\t%%                      <- node-2-3: {2, meta_2_3} <- node-3-1: {-3, meta_3_1}\n\tDAG11 =\n\t\tmove_sink(DAG10, \"node-3-2\", fun(Diff, E) -> E + Diff end, fun(Diff, _E) -> -Diff end),\n\t?assertEqual(11, get_sink(DAG11)),\n\t?assertEqual(1, reconstruct(DAG11, \"node-2-2\", fun(Diff, E) -> E + Diff end)),\n\t?assertEqual(meta_2_2, get_metadata(DAG11, \"node-2-2\")),\n\t?assertEqual(0, reconstruct(DAG11, \"node-1\", fun(Diff, E) -> E + Diff end)),\n\t?assertEqual(meta_1, get_metadata(DAG11, \"node-1\")),\n\t?assertEqual(2, reconstruct(DAG11, 
\"node-2-3\", fun(Diff, E) -> E + Diff end)),\n\t?assertEqual(meta_2_3, get_metadata(DAG11, \"node-2-3\")),\n\t?assertEqual(-1, reconstruct(DAG11, \"node-3-1\", fun(Diff, E) -> E + Diff end)),\n\t?assertEqual(meta_3_1, get_metadata(DAG11, \"node-3-1\")),\n\t?assertException(\n\t\terror, {badkey, \"node-2-2\"},\n\t\tupdate_leaf_source(DAG11, \"node-2-2\", no_function)\n\t),\n\t%% node-1: {-1, meta_1} -> node-2-2: {-10, meta_2_2} -> node-3-2: {12, meta_3_2}\n\t%%                      <- node-2-3: {2, meta_2_3} <- node-3-1: {-3, meta_3_1}\n\tDAG12 = update_sink(DAG11, \"node-3-2\", fun(11, meta_3_2) -> {\"node-3-2\", 12, meta_3_2} end),\n\t?assertEqual(12, get_sink(DAG12)),\n\t?assertEqual(meta_3_2, get_metadata(DAG12, \"node-3-2\")),\n\t?assertEqual(2, reconstruct(DAG12, \"node-2-2\", fun(Diff, E) -> E + Diff end)),\n\t?assertEqual(meta_2_2, get_metadata(DAG12, \"node-2-2\")),\n\t?assertEqual(1, reconstruct(DAG12, \"node-1\", fun(Diff, E) -> E + Diff end)),\n\t?assertEqual(meta_1, get_metadata(DAG12, \"node-1\")),\n\t?assertEqual(3, reconstruct(DAG12, \"node-2-3\", fun(Diff, E) -> E + Diff end)),\n\t?assertEqual(meta_2_3, get_metadata(DAG12, \"node-2-3\")),\n\t?assertEqual(0, reconstruct(DAG12, \"node-3-1\", fun(Diff, E) -> E + Diff end)),\n\t?assertEqual(meta_3_1, get_metadata(DAG12, \"node-3-1\")),\n\t?assertException(error, {badkey, \"node-2-2\"}, update_sink(DAG11, \"node-2-2\", no_function)),\n\t?assertException(error, {badkey, \"node-3-1\"}, update_sink(DAG11, \"node-3-1\", no_function)),\n\t%% node-1: {-1, meta_1} -> node-2-2: {-10, meta_2_2} -> new-node-3-2: {13, meta_3_2}\n\t%%                      <- node-2-3: {2, meta_2_3} <- node-3-1: {-3, meta_3_1}\n\tDAG13 =\n\t\tupdate_sink(DAG12, \"node-3-2\", fun(12, meta_3_2) -> {\"new-node-3-2\", 13, meta_3_2} end),\n\t?assertEqual(13, get_sink(DAG13)),\n\t?assertEqual(meta_3_2, get_metadata(DAG13, \"new-node-3-2\")),\n\t?assertException(error, {badkey, \"node-3-2\"}, get_metadata(DAG13, \"node-3-2\")),\n\t?assertEqual(3, reconstruct(DAG13, \"node-2-2\", fun(Diff, E) -> E + Diff end)),\n\t?assertEqual(meta_2_2, get_metadata(DAG13, \"node-2-2\")),\n\t?assertEqual(2, reconstruct(DAG13, \"node-1\", fun(Diff, E) -> E + Diff end)),\n\t?assertEqual(meta_1, get_metadata(DAG13, \"node-1\")),\n\t?assertEqual(4, reconstruct(DAG13, \"node-2-3\", fun(Diff, E) -> E + Diff end)),\n\t?assertEqual(meta_2_3, get_metadata(DAG13, \"node-2-3\")),\n\t?assertEqual(1, reconstruct(DAG13, \"node-3-1\", fun(Diff, E) -> E + Diff end)),\n\t?assertEqual(meta_3_1, get_metadata(DAG13, \"node-3-1\")),\n\t%% node-1: {0, meta_1} <- node-2: {1, meta_2}\n\tDAG14 = add_node(new(\"node-1\", 0, meta_1), \"node-2\", \"node-1\", 1, meta_2),\n\t?assertEqual(0, get_sink(DAG14)),\n\t?assertEqual(1, reconstruct(DAG14, \"node-2\", fun(Diff, E) -> E + Diff end)),\n\t%% node-1: {-1, meta_1} -> node-2: {1, meta_2}\n\tDAG15 = move_sink(DAG14, \"node-2\", fun(Diff, E) -> E + Diff end, fun(Diff, _E) -> -Diff end),\n\t?assertEqual(1, get_sink(DAG15)),\n\t?assertEqual(0, reconstruct(DAG15, \"node-1\", fun(Diff, E) -> E + Diff end)),\n\t?assertException(error, {badkey, \"node-2\"}, add_node(DAG15, \"node-2\", \"node-1\", 1, meta_1)),\n\t?assertException(error, {badkey, \"node-1\"}, add_node(DAG15, \"node-1\", \"node-2\", 1, meta_2)).\n"
  },
  {
    "path": "apps/arweave/src/ar_difficulty.erl",
    "content": "-module(ar_difficulty).\n\n-export([get_hash_rate_fixed_ratio/1, next_cumulative_diff/3, multiply_diff_pre_fork_2_5/2,\n\t\t\tdiff_pair/1, poa1_diff_multiplier/1, poa1_diff/2, scale_diff/3,\n\t\t\tmin_difficulty/1, switch_to_randomx_fork_diff/1, sub_diff/2]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave/include/ar_consensus.hrl\").\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Return the block time hash rate for the given difficulty.\nget_hash_rate_fixed_ratio(B) ->\n\tHashRate = ?MAX_DIFF div (?MAX_DIFF - B#block.diff),\n\tcase B#block.height >= ar_fork:height_2_8() of\n\t\ttrue ->\n\t\t\tHashRate;\n\t\tfalse ->\n\t\t\t%% Adjusting the hash rate by\n\t\t\t%% (TwoChunkCount + OneChunkCount) / TwoChunkCount counts\n\t\t\t%% the number of all the hashing attempts. In other words,\n\t\t\t%% the adjusted value is useful when we want to see the total\n\t\t\t%% amount of CPU work put into mining a block. This is not what\n\t\t\t%% we use it for. We use it as a denominator when computing\n\t\t\t%% a share contributed by a single partition - see\n\t\t\t%% ar_pricing:get_v2_price_per_gib_minute. Therefore, the hash\n\t\t\t%% rate computed here needs to have the same \"units\" as\n\t\t\t%% the hash rate we estimate for the partition -\n\t\t\t%% the \"normalized\" hash rate where a recall range only\n\t\t\t%% produces 4 nonces from one recall range (chunk-1)\n\t\t\t%% plus up to 400 nonces (chunk-2).\n\t\t\t%%\n\t\t\t%% Note that we did not even adjust it\n\t\t\t%% by (TwoChunkCount + OneChunkCount) / TwoChunkCount but by\n\t\t\t%% 100 div (100 + 1), what is wrong.\n\t\t\tMultiplier = poa1_diff_multiplier(B#block.height),\n\t\t\tHashRate = ?MAX_DIFF div (?MAX_DIFF - B#block.diff),\n\t\t\tcase Multiplier > 1 of\n\t\t\t\ttrue ->\n\t\t\t\t\tHashRate * Multiplier div (Multiplier + 1);\n\t\t\t\tfalse ->\n\t\t\t\t\tHashRate\n\t\t\tend\n\tend.\n\n%% @doc Calculate the cumulative difficulty for the next block.\nnext_cumulative_diff(OldCDiff, NewDiff, Height) ->\n\tcase Height >= ar_fork:height_1_6() of\n\t\ttrue ->\n\t\t\tnext_cumulative_diff2(OldCDiff, NewDiff, Height);\n\t\tfalse ->\n\t\t\t0\n\tend.\n\n%% @doc Get a difficulty that makes it harder to mine by Multiplier number of times.\n%% The function was used up to the fork 2.4 and must be reimplemented without the\n%% floating point numbers in case the need arises after the fork 2.5.\n%% @end\nmultiply_diff_pre_fork_2_5(Diff, Multiplier) ->\n\t?MAX_DIFF - erlang:trunc(1 / Multiplier * (?MAX_DIFF - Diff)).\n\ndiff_pair(Block) ->\n\tDiff = Block#block.diff,\n\tHeight = Block#block.height,\n\t{poa1_diff(Diff, Height), Diff}.\n\npoa1_diff_multiplier(Height) ->\n\tcase Height >= ar_fork:height_2_7_2() of\n\t\ttrue ->\n\t\t\t?POA1_DIFF_MULTIPLIER;\n\t\tfalse ->\n\t\t\t1\n\tend.\n\npoa1_diff(Diff, Height) ->\n\tScale = {poa1_diff_multiplier(Height), 1},\n\tscale_diff(Diff, Scale, Height).\n\n%% @doc Scale the difficulty by ScaleDividend/ScaleDivisor.\n%% Example: scale_diff(Diff, {100, 1}, Height) will scale the difficulty by 100, increasing it\n%% Example: scale_diff(Diff, {3, 10}, Height) will scale the difficulty by 3/10, decreasing it\nscale_diff(infinity, _Coeff, _Height) ->\n\tinfinity;\nscale_diff(Diff, {1, 1}, _Height) ->\n\tDiff;\nscale_diff(Diff, {ScaleDividend, ScaleDivisor}, Height) ->\n\tMaxDiff = ?MAX_DIFF,\n\tMinDiff = min_difficulty(Height),\n\t%% Scale DiffInverse by 
ScaleDivisor/ScaleDividend because it's an inverse value.\n\t%% I.e. passing in {100, 1} will scale DiffInverse by 1/100 and *increase* the difficulty.\n\tDiffInverse = (MaxDiff - Diff) * ScaleDivisor div ScaleDividend,\n\tar_util:between(\n\t\tMaxDiff - DiffInverse,\n\t\tMinDiff,\n\t\tMaxDiff - 1\n\t).\n\n%% @doc Return the new difficulty computed such that N candidates have approximately the same\n%% chances with the new difficulty as a single candidate with the Diff difficulty.\n%%\n%% Let the probability a candidate satisfies the new difficulty be x.\n%% Let the probability a candidate satisfies the old diffiuclty be p.\n%% Then, the probability at least one of the N candidates satisfies\n%% the new difficulty is 1 - (1 - x) ^ N. We want it to be equal to p.\n%% So, (1 - x) ^ N = 1 - p => x = 1 - 32th root of (1 - p).\n%% The first three terms of the infinite series of (1 - p) ^ (1 / 32) are\n%% 1 - (1 / 32) * p - (31 * p ^ 2)/(2 * 32 ^ 2).\n%% Therefore, x is approximately p/32 + 31 * (p ^ 2) / (2 * 32 ^ 2).\n%% x = NewReverseDiff / MaxDiff, p = ReverseDiff / MaxDiff.\nsub_diff(infinity, _N) ->\n\tinfinity;\nsub_diff(Diff, N) ->\n\tMaxDiff = ?MAX_DIFF,\n\tReverseDiff = MaxDiff - Diff,\n\tMaxDiffSquared = MaxDiff * MaxDiff,\n\tReverseDiffSquared = ReverseDiff * ReverseDiff,\n\tDividend = 2 * N * ReverseDiff * MaxDiff + (N - 1) * ReverseDiffSquared,\n\tDivisor = 2 * N * N * MaxDiffSquared,\n\t(MaxDiff * Divisor - Dividend  * MaxDiff) div Divisor.\n\n-ifdef(AR_TEST).\nmin_difficulty(_Height) ->\n\t1.\nswitch_to_randomx_fork_diff(_) ->\n\t1.\n-else.\nmin_spora_difficulty(Height) ->\n\t?SPORA_MIN_DIFFICULTY(Height).\n\nmin_randomx_difficulty() ->\n\tmin_sha384_difficulty() + ?RANDOMX_DIFF_ADJUSTMENT.\n\nmin_sha384_difficulty() ->\n\t31.\n\nmin_difficulty(Height) ->\n\tDiff =\n\t\tcase Height >= ar_fork:height_1_7() of\n\t\t\ttrue ->\n\t\t\t\tcase Height >= ar_fork:height_2_4() of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tmin_spora_difficulty(Height);\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\tmin_randomx_difficulty()\n\t\t\t\tend;\n\t\t\tfalse ->\n\t\t\t\tmin_sha384_difficulty()\n\t\tend,\n\tcase Height >= ar_fork:height_1_8() of\n\t\ttrue ->\n\t\t\tcase Height >= ar_fork:height_2_5() of\n\t\t\t\ttrue ->\n\t\t\t\t\tar_retarget:switch_to_linear_diff(Diff);\n\t\t\t\tfalse ->\n\t\t\t\t\tar_retarget:switch_to_linear_diff_pre_fork_2_5(Diff)\n\t\t\tend;\n\t\tfalse ->\n\t\t\tDiff\n\tend.\n\nsha384_diff_to_randomx_diff(Sha384Diff) ->\n\tmax(Sha384Diff + ?RANDOMX_DIFF_ADJUSTMENT, min_randomx_difficulty()).\n\nswitch_to_randomx_fork_diff(OldDiff) ->\n\tsha384_diff_to_randomx_diff(OldDiff) - 2.\n-endif.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nnext_cumulative_diff2(OldCDiff, NewDiff, Height) ->\n\tDelta =\n\t\tcase Height >= ar_fork:height_1_8() of\n\t\t\tfalse ->\n\t\t\t\tNewDiff * NewDiff;\n\t\t\ttrue  ->\n\t\t\t\t%% The number of hashes to try on average to find a solution.\n\t\t\t\tcase Height >= ar_fork:height_2_5() of\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\terlang:trunc(?MAX_DIFF / (?MAX_DIFF - NewDiff));\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\t?MAX_DIFF div (?MAX_DIFF - NewDiff)\n\t\t\t\tend\n\t\tend,\n\tOldCDiff + Delta.\n"
  },
  {
    "path": "apps/arweave/src/ar_disk_cache.erl",
    "content": "%% This Source Code Form is subject to the terms of the GNU General\n%% Public License, v. 2.0. If a copy of the GPLv2 was not distributed\n%% with this file, You can obtain one at\n%% https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html\n\n-module(ar_disk_cache).\n\n-behaviour(gen_server).\n\n-export([lookup_block_filename/1, lookup_block_filename/2, lookup_tx_filename/1, lookup_tx_filename/2, write_block/1, write_block_shadow/1,\n\t\treset/0, write_tx/1]).\n\n-export([start_link/0, init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2,\n\t\tcode_change/3]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"arweave/include/ar_wallets.hrl\").\n\n%% Internal state definition.\n-record(state, {\n\tlimit_max,\n\tlimit_min,\n\tsize = 0,\n\tpath\n}).\n\n%%%===================================================================\n%%% API\n%%%===================================================================\n\nlookup_block_filename(H) ->\n\tlookup_block_filename(H, not_set).\n\nlookup_block_filename(H, CustomDir) when is_binary(H)->\n\t%% Use the process dictionary to keep the path.\n\tPathBlock =\n\t\tcase CustomDir of\n\t\t\tnot_set ->\n\t\tcase get(ar_disk_cache_path) of\n\t\t\tundefined ->\n\t\t\t\t{ok, Config} = arweave_config:get_env(),\n\t\t\t\tPath = filename:join(Config#config.data_dir, ?DISK_CACHE_DIR),\n\t\t\t\tput(ar_disk_cache_path, Path),\n\t\t\t\tfilename:join(Path, ?DISK_CACHE_BLOCK_DIR);\n\t\t\tPath ->\n\t\t\t\tfilename:join(Path, ?DISK_CACHE_BLOCK_DIR)\n\t\t\t\tend;\n\t\t\t_ ->\n\t\t\t\tfilename:join([CustomDir, ?DISK_CACHE_DIR, ?DISK_CACHE_BLOCK_DIR])\n\t\tend,\n\tFileName = binary_to_list(ar_util:encode(H)),\n\tFilePath = filename:join(PathBlock, FileName),\n\tFilePathJSON = iolist_to_binary([FilePath, \".json\"]),\n\tcase ar_storage:is_file(FilePathJSON) of\n\t\ttrue ->\n\t\t\t{ok, {FilePathJSON, json}};\n\t\t_ ->\n\t\t\tFilePathBin = iolist_to_binary([FilePath, \".bin\"]),\n\t\t\tcase ar_storage:is_file(FilePathBin) of\n\t\t\t\ttrue ->\n\t\t\t\t\t{ok, {FilePathBin, binary}};\n\t\t\t\t_ ->\n\t\t\t\t\tunavailable\n\t\t\tend\n\tend.\n\nlookup_tx_filename(Hash) ->\n\tlookup_tx_filename(Hash, not_set).\n\nlookup_tx_filename(Hash, CustomDir) when is_binary(Hash) ->\n\tPathTX =\n\t\tcase CustomDir of\n\t\t\tnot_set ->\n\t\t\t\tcase get(ar_disk_cache_path) of\n\t\tundefined ->\n\t\t\t{ok, Config} = arweave_config:get_env(),\n\t\t\tPath = filename:join(Config#config.data_dir, ?DISK_CACHE_DIR),\n\t\t\tput(ar_disk_cache_path, Path),\n\t\t\tfilename:join(Path, ?DISK_CACHE_TX_DIR);\n\t\tPath ->\n\t\t\tfilename:join(Path, ?DISK_CACHE_TX_DIR)\n\t\t\t\tend;\n\t\t\t_ ->\n\t\t\t\tfilename:join([CustomDir, ?DISK_CACHE_DIR, ?DISK_CACHE_TX_DIR])\n\tend,\n\tFileName = binary_to_list(ar_util:encode(Hash)) ++ \".json\",\n\tFile = filename:join(PathTX, FileName),\n\tcase ar_storage:is_file(File) of\n\t\ttrue ->\n\t\t\t{ok, File};\n\t\t_ ->\n\t\t\tunavailable\n\tend.\n\nwrite_block_shadow(B) ->\n\tName = binary_to_list(ar_util:encode(B#block.indep_hash)) ++ \".bin\",\n\tFile = filename:join(get_block_path(), Name),\n\tBin = ar_serialize:block_to_binary(B),\n\tSize = byte_size(Bin),\n\t?LOG_DEBUG([{event, write_block_shadow},\n\t\t{hash, ar_util:encode(B#block.indep_hash)}, {size, Size}]),\n\tgen_server:cast(?MODULE, {record_written_data, Size}),\n\tcase ar_storage:write_file_atomic(File, Bin) of\n\t\tok ->\n\t\t\tok;\n\t\t{error, Reason} = Error ->\n\t\t\t?LOG_ERROR([{event, 
failed_to_store_block_in_disk_cache},\n\t\t\t\t{reason, io_lib:format(\"~p\", [Reason])}]),\n\t\t\tError\n\tend.\n\nwrite_block(B) ->\n\tBShadow = B#block{ txs = [TX#tx.id || TX <- B#block.txs] },\n\tcase write_block_shadow(BShadow) of\n\t\tok ->\n\t\t\twrite_txs(B#block.txs);\n\t\tReply ->\n\t\t\tReply\n\tend.\n\nwrite_txs([]) ->\n\tok;\nwrite_txs([TX | TXs]) ->\n\tcase write_tx(TX) of\n\t\tok ->\n\t\t\twrite_txs(TXs);\n\t\tReply ->\n\t\t\tReply\n\tend.\n\nreset() ->\n\tgen_server:call(?MODULE, reset).\n\n%%--------------------------------------------------------------------\n%% @doc\n%% Starts the server\n%%\n%% @spec start_link() -> {ok, Pid} | ignore | {error, Error}\n%% @end\n%%--------------------------------------------------------------------\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\n%%% gen_server callbacks\n%%%===================================================================\n\n%%--------------------------------------------------------------------\n%% @private\n%% @doc\n%% Initializes the server\n%%\n%% @spec init(Args) -> {ok, State} |\n%%\t\t\t\t\t   {ok, State, Timeout} |\n%%\t\t\t\t\t   ignore |\n%%\t\t\t\t\t   {stop, Reason}\n%% @end\n%%--------------------------------------------------------------------\ninit([]) ->\n\t%% Trap exit to avoid corrupting any open files on quit.\n\tprocess_flag(trap_exit, true),\n\t{ok, Config} = arweave_config:get_env(),\n\tPath = filename:join(Config#config.data_dir, ?DISK_CACHE_DIR),\n\tBlockPath = filename:join(Path, ?DISK_CACHE_BLOCK_DIR),\n\tTXPath = filename:join(Path, ?DISK_CACHE_TX_DIR),\n\tok = filelib:ensure_dir(BlockPath ++ \"/\"),\n\tok = filelib:ensure_dir(TXPath ++ \"/\"),\n\tSize =\n\t\tfilelib:fold_files(\n\t\t\tPath,\n\t\t\t\"(.*\\\\.json$)|(.*\\\\.bin$)\",\n\t\t\ttrue,\n\t\t\tfun(F,Acc) -> filelib:file_size(F) + Acc end,\n\t\t\t0\n\t\t),\n\tLimitMax = Config#config.disk_cache_size * 1048576, % MB to Bytes.\n\tLimitMin = trunc(LimitMax * (100 - ?DISK_CACHE_CLEAN_PERCENT_MAX) / 100),\n\tState = #state{\n\t\tlimit_max = LimitMax,\n\t\tlimit_min = LimitMin,\n\t\tsize = Size,\n\t\tpath = Path\n\t},\n\terlang:garbage_collect(),\n\t{ok, State}.\n\n%%--------------------------------------------------------------------\n%% @private\n%% @doc\n%% Handling call messages\n%%\n%% @spec handle_call(Request, From, State) ->\n%%\t\t\t\t\t\t\t\t\t {reply, Reply, State} |\n%%\t\t\t\t\t\t\t\t\t {reply, Reply, State, Timeout} |\n%%\t\t\t\t\t\t\t\t\t {noreply, State} |\n%%\t\t\t\t\t\t\t\t\t {noreply, State, Timeout} |\n%%\t\t\t\t\t\t\t\t\t {stop, Reason, Reply, State} |\n%%\t\t\t\t\t\t\t\t\t {stop, Reason, State}\n%% @end\n%%--------------------------------------------------------------------\nhandle_call(reset, _From, State) ->\n\tPath = State#state.path,\n\t?LOG_DEBUG([{event, reset_disk_cache}, {path, Path}]),\n\tos:cmd(\"rm -r \" ++ Path ++ \"/*\"),\n\tBlockPath = filename:join(Path, ?DISK_CACHE_BLOCK_DIR),\n\tTXPath = filename:join(Path, ?DISK_CACHE_TX_DIR),\n\tok = filelib:ensure_dir(BlockPath ++ \"/\"),\n\tok = filelib:ensure_dir(TXPath ++ \"/\"),\n\t{reply, ok, State#state{ size = 0 }};\n\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {request, Request}]),\n\t{reply, ok, State}.\n\n%%--------------------------------------------------------------------\n%% @private\n%% @doc\n%% Handling cast messages\n%%\n%% @spec handle_cast(Msg, State) -> {noreply, State} |\n%%\t\t\t\t\t\t\t\t\t{noreply, State, Timeout} |\n%%\t\t\t\t\t\t\t\t\t{stop, Reason, 
State}\n%% @end\n%%--------------------------------------------------------------------\n\nhandle_cast({record_written_data, Size}, State) ->\n\tCacheSize = State#state.size + Size,\n\tgen_server:cast(?MODULE, may_be_clean_up),\n\t{noreply, State#state{ size = CacheSize }};\n\nhandle_cast(may_be_clean_up, State) when State#state.size > State#state.limit_max ->\n\t?LOG_DEBUG([{event, disk_cache_exceeds_limit}, {limit, State#state.limit_max},\n\t\t\t{cache_size, State#state.size}]),\n\tFiles =\n\t\tlists:sort(filelib:fold_files(\n\t\t\tState#state.path,\n\t\t\t\"(.*\\\\.json$)|(.*\\\\.bin$)\",\n\t\t\ttrue,\n\t\t\tfun(F, A) ->\n\t\t\t\t [{filelib:last_modified(F), filelib:file_size(F), F} | A]\n\t\t\tend,\n\t\t\t[])\n\t\t),\n\t%% How much space should be cleaned up.\n\tToRemove = State#state.size - State#state.limit_min,\n\tRemoved = delete_file(Files, ToRemove, 0),\n\tSize = State#state.size - Removed,\n\terlang:garbage_collect(),\n\t{noreply, State#state{ size = Size }};\nhandle_cast(may_be_clean_up, State) ->\n\t{noreply, State};\n\nhandle_cast(Cast, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {cast, Cast}]),\n\t{noreply, State}.\n\n%%--------------------------------------------------------------------\n%% @private\n%% @doc\n%% Handling all non call/cast messages\n%%\n%% @spec handle_info(Info, State) -> {noreply, State} |\n%%\t\t\t\t\t\t\t\t\t {noreply, State, Timeout} |\n%%\t\t\t\t\t\t\t\t\t {stop, Reason, State}\n%% @end\n%%--------------------------------------------------------------------\nhandle_info(Message, State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {module, ?MODULE}, {message, Message}]),\n\t{noreply, State}.\n\n%%--------------------------------------------------------------------\n%% @private\n%% @doc\n%% This function is called by a gen_server when it is about to\n%% terminate. It should be the opposite of Module:init/1 and do any\n%% necessary cleaning up. When it returns, the gen_server terminates\n%% with Reason. 
The return value is ignored.\n%%\n%% @spec terminate(Reason, State) -> void()\n%% @end\n%%--------------------------------------------------------------------\nterminate(Reason, _State) ->\n\t?LOG_INFO([{module, ?MODULE},{pid, self()},{callback, terminate},{reason, Reason}]),\n\tok.\n\n%%--------------------------------------------------------------------\n%% @private\n%% @doc\n%% Convert process state when code is changed\n%%\n%% @spec code_change(OldVsn, State, Extra) -> {ok, NewState}\n%% @end\n%%--------------------------------------------------------------------\ncode_change(_OldVsn, State, _Extra) ->\n\t{ok, State}.\n\n%%%===================================================================\n%%% Internal functions\n%%%===================================================================\n\nget_block_path() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tPath = filename:join(Config#config.data_dir, ?DISK_CACHE_DIR),\n\tfilename:join(Path, ?DISK_CACHE_BLOCK_DIR).\n\nget_tx_path() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tPath = filename:join(Config#config.data_dir, ?DISK_CACHE_DIR),\n\tfilename:join(Path, ?DISK_CACHE_TX_DIR).\n\nwrite_tx(TX) ->\n\tName = binary_to_list(ar_util:encode(TX#tx.id)) ++ \".json\",\n\tFile = filename:join(get_tx_path(), Name),\n\tTXHeader = case TX#tx.format of 1 -> TX; 2 -> TX#tx{ data = <<>> } end,\n\tJSONStruct = ar_serialize:tx_to_json_struct(TXHeader),\n\tData = ar_serialize:jsonify(JSONStruct),\n\tSize = byte_size(Data),\n\t?LOG_DEBUG([{event, write_tx}, {txid, ar_util:encode(TX#tx.id)}, {size, Size}]),\n\tgen_server:cast(?MODULE, {record_written_data, Size}),\n\tcase ar_storage:write_file_atomic(File, Data) of\n\t\tok ->\n\t\t\tok;\n\t\t{error, Reason} = Error ->\n\t\t\t?LOG_ERROR([{event, failed_to_store_transaction_in_disk_cache},\n\t\t\t\t{reason, io_lib:format(\"~p\", [Reason])}]),\n\t\t\tError\n\tend.\n\ndelete_file([], _ToRemove, Removed) ->\n\tRemoved;\ndelete_file(_Files, ToRemove, Removed) when ToRemove < 0 ->\n\tRemoved;\ndelete_file([{_DateTime, Size, Filename} | Files], ToRemove, Removed) ->\n\tcase file:delete(Filename) of\n\t\tok ->\n\t\t\t?LOG_DEBUG([{event, cleaned_disk_cache}, {removed_file, Filename},\n\t\t\t\t\t{cleaned_size, Size}]),\n\t\t\tdelete_file(Files, ToRemove - Size, Removed + Size);\n\t\t{error, Reason} ->\n\t\t\t?LOG_ERROR([{event, failed_to_remove_disk_cache_file},\n\t\t\t\t\t{file, Filename}, {reason, io_lib:format(\"~p\", [Reason])}]),\n\t\t\tdelete_file(Files, ToRemove, Removed)\n\tend.\n"
  },
  {
    "path": "apps/arweave/src/ar_disksup.erl",
    "content": "%% Erlang OTP disksup copyright note:\n%%\n%% %CopyrightBegin%\n%%\n%% Copyright Ericsson AB 1996-2018. All Rights Reserved.\n%%\n%% Licensed under the Apache License, Version 2.0 (the \"License\");\n%% you may not use this file except in compliance with the License.\n%% You may obtain a copy of the License at\n%%\n%%\t   http://www.apache.org/licenses/LICENSE-2.0\n%%\n%% Unless required by applicable law or agreed to in writing, software\n%% distributed under the License is distributed on an \"AS IS\" BASIS,\n%% WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n%% See the License for the specific language governing permissions and\n%% limitations under the License.\n%%\n%% %CopyrightEnd%\n%%\n\n%%% @doc The server is a modified version of disksup from Erlang OTP - it periodically\n%%% checks for available disk space and returns it in bytes (disksup only serves it in %).\n%%% @end\n-module(ar_disksup).\n-behaviour(gen_server).\n\n-export([start_link/0, get_disk_space_check_frequency/0, get_disk_data/0, pause/0, resume/0]).\n\n-export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-record(state, {\n\ttimeout,\n\tos,\n\tdiskdata = [],\n\tport,\n\tpaused = false\n}).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\nget_disk_space_check_frequency() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tConfig#config.disk_space_check_frequency.\n\nget_disk_data() ->\n\tgen_server:call(?MODULE, get_disk_data, ?DEFAULT_CALL_TIMEOUT).\n\npause() ->\n\tgen_server:call(?MODULE, pause, ?DEFAULT_CALL_TIMEOUT).\n\nresume() ->\n\tgen_server:call(?MODULE, resume, ?DEFAULT_CALL_TIMEOUT).\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\n\ninit([]) ->\n\tprocess_flag(trap_exit, true),\n\tprocess_flag(priority, low),\n\tOS = get_os(),\n\tPort =\n\t\tcase OS of\n\t\t\t{unix, Flavor}\n\t\t\t\twhen Flavor==sunos4;\n\t\t\t\t\tFlavor==solaris;\n\t\t\t\t\tFlavor==freebsd;\n\t\t\t\t\tFlavor==dragonfly;\n\t\t\t\t\tFlavor==darwin;\n\t\t\t\t\tFlavor==linux;\n\t\t\t\t\tFlavor==posix;\n\t\t\t\t\tFlavor==openbsd;\n\t\t\t\t\tFlavor==netbsd ->\n\t\t\t\tstart_portprogram();\n\t\t{win32, _OSname} ->\n\t\t\tnot_used;\n\t\t_ ->\n\t\t\texit({unsupported_os, OS})\n\t\tend,\n\t%% Initiate the first check.\n\tself() ! 
timeout,\n\tTimeout = get_disk_space_check_frequency(),\n\t?LOG_INFO([{event, disksup_init}, {os, OS}, {port, Port}, {timeout, Timeout}]),\n\t{ok, #state{ port = Port, os = OS, timeout = Timeout }}.\n\nhandle_call(get_disk_data, _From, State) ->\n\t{reply, State#state.diskdata, State};\n\nhandle_call(pause, _From, State) ->\n\t?LOG_INFO([{event, pausing_disksup}]),\n\t{reply, ok, State#state{ paused = true }};\n\nhandle_call(resume, _From, State) ->\n\t?LOG_INFO([{event, resuming_disksup}]),\n\t{reply, ok, State#state{ paused = false }}.\n\nhandle_cast(_Msg, State) ->\n\t{noreply, State}.\n\nhandle_info(timeout, #state{ paused = true } = State) ->\n\t?LOG_INFO([{event, disksup_paused}]),\n\t{ok, _} = ar_timer:send_after(\n\t\tState#state.timeout,\n\t\tself(),\n\t\ttimeout,\n\t\t#{ skip_on_shutdown => false }\n\t),\n\t{noreply, State};\nhandle_info(timeout, State) ->\n\tNewDiskData = check_disk_space(State#state.os, State#state.port),\n\tensure_storage_modules_paths(),\n\tbroadcast_disk_free(State#state.os, State#state.port),\n\t{ok, _} = ar_timer:send_after(\n\t\tState#state.timeout,\n\t\tself(),\n\t\ttimeout,\n\t\t#{ skip_on_shutdown => false }\n\t),\n\t{noreply, State#state{ diskdata = NewDiskData }};\n\nhandle_info({'EXIT', _Port, Reason}, State) ->\n\t{stop, {port_died, Reason}, State#state{ port = not_used }};\n\nhandle_info(_Info, State) ->\n\t{noreply, State}.\n\nterminate(Reason, State) ->\n\tcase State#state.port of\n\t\tnot_used ->\n\t\t\tok;\n\t\tPort ->\n\t\t\tport_close(Port)\n\tend,\n\t?LOG_INFO([{module, ?MODULE},{pid, self()},{callback, terminate},{reason, Reason}]),\n\tok.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nget_os() ->\n\tcase os:type() of\n\t\t{unix, sunos} ->\n\t\t\tcase os:version() of\n\t\t\t\t{5, _, _} ->\n\t\t\t\t\t{unix, solaris};\n\t\t\t\t{4, _, _} ->\n\t\t\t\t\t{unix, sunos4};\n\t\t\t\tV -> exit({unknown_os_version, V})\n\t\t\tend;\n\t\tOS ->\n\t\t\tOS\n\tend.\n\nstart_portprogram() ->\n\topen_port({spawn, \"sh -s disksup 2>&1\"}, [stream]).\n\nmy_cmd(Cmd0, Port) ->\n\t%% Insert a new line after the command, in case the command\n\t%% contains a comment character.\n\tCmd = io_lib:format(\"(~s\\n) </dev/null; echo  \\\"\\^M\\\"\\n\", [Cmd0]),\n\tPort ! 
{self(), {command, [Cmd, 10]}},\n\tget_reply(Port, []).\n\nget_reply(Port, O) ->\n\treceive\n\t\t{Port, {data, N}} ->\n\t\t\tcase newline(N, O) of\n\t\t\t\t{ok, Str} -> Str;\n\t\t\t\t{more, Acc} -> get_reply(Port, Acc)\n\t\t\tend;\n\t\t{'EXIT', Port, Reason} ->\n\t\t\texit({port_died, Reason})\n\tend.\n\nnewline([13 | _], B) -> {ok, lists:reverse(B)};\nnewline([H | T], B) -> newline(T, [H | B]);\nnewline([], B) -> {more, B}.\n\nfind_cmd(Cmd) ->\n\tos:find_executable(Cmd).\n\nfind_cmd(Cmd, Path) ->\n\t%% Try to find it at the specific location.\n\tcase os:find_executable(Cmd, Path) of\n\t\tfalse ->\n\t\t\tfind_cmd(Cmd);\n\t\tFound ->\n\t\t\tFound\n\tend.\n\n%% We use as many absolute paths as possible below as there may be stale\n%% NFS handles in the PATH which cause these commands to hang.\ncheck_disk_space({win32, _}, not_used) ->\n\tResult = os_mon_sysinfo:get_disk_info(),\n\tcheck_disks_win32(Result);\ncheck_disk_space({unix, solaris}, Port) ->\n\tResult = my_cmd(\"/usr/bin/df -k\", Port),\n\tcheck_disks_solaris(skip_to_eol(Result));\ncheck_disk_space({unix, linux}, Port) ->\n\tDf = find_cmd(\"df\", \"/bin\"),\n\tResult = my_cmd(Df ++ \" -k -x squashfs\", Port),\n\tcheck_disks_solaris(skip_to_eol(Result));\ncheck_disk_space({unix, posix}, Port) ->\n\tResult = my_cmd(\"df -k -P\", Port),\n\tcheck_disks_solaris(skip_to_eol(Result));\ncheck_disk_space({unix, dragonfly}, Port) ->\n\tResult = my_cmd(\"/bin/df -k -t ufs,hammer\", Port),\n\tcheck_disks_solaris(skip_to_eol(Result));\ncheck_disk_space({unix, freebsd}, Port) ->\n\tResult = my_cmd(\"/bin/df -k\", Port),\n\tcheck_disks_solaris(skip_to_eol(Result));\ncheck_disk_space({unix, openbsd}, Port) ->\n\tResult = my_cmd(\"/bin/df -k\", Port),\n\tcheck_disks_solaris(skip_to_eol(Result));\ncheck_disk_space({unix, netbsd}, Port) ->\n\tResult = my_cmd(\"/bin/df -k -t ffs\", Port),\n\tcheck_disks_solaris(skip_to_eol(Result));\ncheck_disk_space({unix, sunos4}, Port) ->\n\tResult = my_cmd(\"df\", Port),\n\tcheck_disks_solaris(skip_to_eol(Result));\ncheck_disk_space({unix, darwin}, Port) ->\n\tResult = my_cmd(\"/bin/df -i -k -t ufs,hfs,apfs\", Port),\n\tcheck_disks_susv3(skip_to_eol(Result)).\n\ndisk_free_cmd({unix, darwin}, Df, DataDirPath, Port) ->\n\t my_cmd(Df ++ \" -Pa \" ++ DataDirPath ++ \"/\", Port);\ndisk_free_cmd({unix, _}, Df, DataDirPath, Port) ->\n\t my_cmd(Df ++ \" -Pa -B1 \" ++ DataDirPath ++ \"/\", Port).\n\n%% check for hardware errors in df output\ncheck_for_hardware_error(DfOutput, ThrowOnError) ->\n    case lists:member(\"Input/output error\", DfOutput) of\n        true ->\n            ar:console(\"~nERROR: one or more of your disks are in corrupt/failing state.~n~p~n\", [DfOutput]),\n            case ThrowOnError of\n                true ->\n                    erlang:error({input_output_error_detected, DfOutput});\n                _ ->\n                    true\n            end;\n        false ->\n            false\n    end.\n\n%% doc: iterates trough storage modules\nbroadcast_disk_free({unix, _} = Os, Port) ->\n\tDf = find_cmd(\"df\"),\n\t[DataDirPathData | StorageModulePaths] = get_storage_modules_paths(),\n\t{DataDirID, DataDirPath} = DataDirPathData,\n\tDataDirDfResult = disk_free_cmd(Os, Df, DataDirPath, Port),\n\tcheck_for_hardware_error(DataDirDfResult, true),\n\t[DataDirFs, DataDirBytes, DataDirPercentage] = parse_df_2(DataDirDfResult),\n\tar_events:send(disksup, {\n\t\tremaining_disk_space,\n\t\t\tDataDirID,\n\t\t\ttrue,\n\t\t\tDataDirPercentage,\n\t\t\tDataDirBytes\n\t}),\n\tHandleSmPath = fun({StoreID, 
StorageModulePath}) ->\n\t\tResult = disk_free_cmd(Os, Df, StorageModulePath, Port),\n\t\tHasDiskError = check_for_hardware_error(Result, false),\n\t\tcase HasDiskError of\n\t\t\ttrue ->\n\t\t\t\t\t\tar:console(\"~nERROR: storage module ~p is offline.~n\", [StorageModulePath]),\n\t\t\t\t\t\tok;\n\t\t\tfalse ->\n\t\t\t\t\t\t[StorageModuleFs, Bytes, Percentage] = parse_df_2(Result),\n\t\t\t\t\t\tIsDataDirDrive = string:equal(DataDirFs, StorageModuleFs),\n\t\t\t\t\t\tar_events:send(disksup, {\n\t\t\t\t\t\t\tremaining_disk_space,\n\t\t\t\t\t\t\tStoreID, IsDataDirDrive, Percentage, Bytes\n\t\t\t\t\t\t})\n\t\t\tend\n\tend,\n\tlists:foreach(HandleSmPath, StorageModulePaths);\nbroadcast_disk_free(_, _) ->\n\tar:console(\"~nWARNING: disk space checks are not supported on your platform. The node \"\n\t\t\t\"may stop working if it runs out of space.~n\", []).\n\n%% This code works for Linux and FreeBSD as well.\ncheck_disks_solaris(\"\") ->\n\t[];\ncheck_disks_solaris(\"\\n\") ->\n\t[];\ncheck_disks_solaris(Str) ->\n\tcase parse_df(Str, posix) of\n\t\t{ok, {KB, CapKB, MntOn}, RestStr} ->\n\t\t\t[{MntOn, KB, CapKB} | check_disks_solaris(RestStr)];\n\t\t_Other ->\n\t\t\tcheck_disks_solaris(skip_to_eol(Str))\n\tend.\n\n%% @private\n%% @doc Predicate to take a word from the input string until a space or\n%% a percent '%' sign (the Capacity field is followed by a %)\nparse_df_is_not_space($ ) -> false;\nparse_df_is_not_space($%) -> false;\nparse_df_is_not_space(_) -> true.\n\n%% @private\n%% @doc Predicate to take spaces away from string. Stops on a non-space\nparse_df_is_space($ ) -> true;\nparse_df_is_space(_) -> false.\n\n%% @private\n%% @doc Predicate to consume remaining characters until end of line.\nparse_df_is_not_eol($\\r) -> false;\nparse_df_is_not_eol($\\n) -> false;\nparse_df_is_not_eol(_)\t -> true.\n\n%% @private\n%% @doc Trims leading non-spaces (the word) from the string then trims spaces.\nparse_df_skip_word(Input) ->\n\tRemaining = lists:dropwhile(fun parse_df_is_not_space/1, Input),\n\tlists:dropwhile(fun parse_df_is_space/1, Remaining).\n\n%% @private\n%% @doc Takes all non-spaces and then drops following spaces.\nparse_df_take_word(Input) ->\n\t{Word, Remaining0} = lists:splitwith(fun parse_df_is_not_space/1, Input),\n\tRemaining1 = lists:dropwhile(fun parse_df_is_space/1, Remaining0),\n\t{Word, Remaining1}.\n\n%% @private\n%% @doc Takes all non-spaces and then drops the % after it and the spaces.\nparse_df_take_word_percent(Input) ->\n\t{Word, Remaining0} = lists:splitwith(fun parse_df_is_not_space/1, Input),\n\t%% Drop the leading % or do nothing.\n\tRemaining1 =\n\t\tcase Remaining0 of\n\t\t\t[$% | R1] -> R1;\n\t\t\t_ -> Remaining0 % Might be no % or empty list even.\n\t\tend,\n\tRemaining2 = lists:dropwhile(fun parse_df_is_space/1, Remaining1),\n\t{Word, Remaining2}.\n\n%% @private\n%% @doc Given a line of 'df' POSIX/SUSv3 output split it into fields:\n%% a string (mounted device), 4 integers (kilobytes, used, available\n%% and capacity), skip % sign, (optionally for susv3 can also skip IUsed, IFree\n%% and ICap% fields) then take remaining characters as the mount path\n-spec parse_df(string(), posix | susv3) ->\n\t{error, parse_df} | {ok, {integer(), integer(), list()}, string()}.\nparse_df(Input0, Flavor) ->\n\t%% Format of Posix/Linux df output looks like Header + Lines\n\t%% Filesystem\t  1024-blocks\t  Used Available Capacity Mounted on\n\t%% udev\t\t\t\t  2467108\t\t 0\t 2467108\t   0% /dev\n\tInput1 = parse_df_skip_word(Input0), % Skip device path field.\n\t{KBStr, Input2} 
= parse_df_take_word(Input1), % Take KB field.\n\tInput3 = parse_df_skip_word(Input2), % Skip Used field.\n\t{AvailKBStr, Input4} = parse_df_take_word(Input3), % Take Avail field.\n\t{_, Input5} = parse_df_take_word_percent(Input4), % Skip Capacity% field.\n\t%% Format of OS X/SUSv3 df looks similar to POSIX but has 3 extra columns\n\t%% Filesystem 1024-blocks Used Available Capacity iused ifree %iused Mounted\n\t%% /dev/disk1\t243949060 2380\t86690680\t65% 2029724 37555\t 0%  /\n\tInput6 =\n\t\tcase Flavor of\n\t\t\tposix -> Input5;\n\t\t\tsusv3 -> % There are 3 extra integers we want to skip.\n\t\t\t\tInput5a = parse_df_skip_word(Input5), % Skip IUsed field.\n\t\t\t\tInput5b = parse_df_skip_word(Input5a), % Skip IFree field.\n\t\t\t\t%% Skip the value of ICap + '%' field.\n\t\t\t\t{_, Input5c} = parse_df_take_word_percent(Input5b),\n\t\t\t\tInput5c\n\t\tend,\n\t%% Path is the remaining string till end of line.\n\t{MountPath, Input7} = lists:splitwith(fun parse_df_is_not_eol/1, Input6),\n\t%% Trim the newlines.\n\tRemaining = lists:dropwhile(fun(X) -> not parse_df_is_not_eol(X) end, Input7),\n\ttry\n\t\tKB = erlang:list_to_integer(KBStr),\n\t\tCapacityKB = erlang:list_to_integer(AvailKBStr),\n\t\t{ok, {KB, CapacityKB, MountPath}, Remaining}\n\tcatch error:badarg ->\n\t\t{error, parse_df}\n\tend.\n\n%% Parse per SUSv3 specification, notably recent OS X.\ncheck_disks_susv3(\"\") ->\n\t[];\ncheck_disks_susv3(\"\\n\") ->\n\t[];\ncheck_disks_susv3(Str) ->\n\tcase parse_df(Str, susv3) of\n\t\t{ok, {KB, CapKB, MntOn}, RestStr} ->\n\t\t\t[{MntOn, KB, CapKB} | check_disks_susv3(RestStr)];\n\t\t_Other ->\n\t\t\tcheck_disks_susv3(skip_to_eol(Str))\n\tend.\n\ncheck_disks_win32([]) ->\n\t[];\ncheck_disks_win32([H|T]) ->\n\tcase io_lib:fread(\"~s~s~d~d~d\", H) of\n\t\t{ok, [Drive, \"DRIVE_FIXED\", BAvail, BTot, _TotFree], _RestStr} ->\n\t\t\t[{Drive, BTot div 1024, BAvail div 1024} | check_disks_win32(T)];\n\t\t{ok, _, _RestStr} ->\n\t\t\tcheck_disks_win32(T);\n\t\t_Other ->\n\t\t\t[]\n\tend.\n\nskip_to_eol([]) ->\n\t[];\nskip_to_eol([$\\n | T]) ->\n\tT;\nskip_to_eol([_ | T]) ->\n\tskip_to_eol(T).\n\nget_storage_modules_paths() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tDataDir = Config#config.data_dir,\n\tSMDirs = lists:map(\n\t\tfun(StorageModule) ->\n\t\t\tStoreID = ar_storage_module:id(StorageModule),\n\t\t\t{StoreID, filename:join([DataDir, \"storage_modules\", StoreID])}\n\t\tend,\n\t\tConfig#config.storage_modules\n\t),\n\t[{?DEFAULT_MODULE, DataDir} | SMDirs].\n\nensure_storage_modules_paths() ->\n\tStoragePaths = get_storage_modules_paths(),\n\tEnsurePaths = fun({_, StorageModulePath}) ->\n\t\tfilelib:ensure_dir(StorageModulePath ++ \"/\")\n\tend,\n\tlists:foreach(EnsurePaths, StoragePaths).\n\nparse_df_2(Input) ->\n\t[DfHeader, DfInfo] = string:tokens(Input, \"\\n\"),\n\t[_, BlocksInfo | _] = string:tokens(DfHeader, \" \\t\"),\n\tBlocksNum = case string:tokens(BlocksInfo, \"-\") of\n\t\t [Num, _] ->\n\t\t\t erlang:list_to_integer(Num);\n\t\t _->\n\t\t\t 1\n  end,\n\t[Filesystem, Total, _, Available, _, _] = string:tokens(DfInfo, \" \\t\"),\n\tBytesAvailable = erlang:list_to_integer(Available),\n\tTotalCapacity = erlang:list_to_integer(Total),\n\tPercentageAvailable = BytesAvailable / TotalCapacity,\n\t[Filesystem, BytesAvailable * BlocksNum, PercentageAvailable].\n"
  },
  {
    "path": "apps/arweave/src/ar_doctor_bench.erl",
    "content": "-module(ar_doctor_bench).\n\n-export([main/1, help/0]).\n\n-include_lib(\"kernel/include/file.hrl\").\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"arweave/include/ar_mining.hrl\").\n-include_lib(\"arweave/include/ar_consensus.hrl\").\n\n-define(NUM_ITERATIONS, 5).\n-define(NUM_FILES, 15).\n-define(OUTPUT_FILENAME, \"<storage_module>.benchmark.csv\").\n-define(FILE_FORMAT, \"timestamp,bytes_read,elapsed_time_ms,throughput_bps\").\n\nmain(Args) ->\n\tbench_read(Args).\n\nhelp() ->\n\tar:console(\"data-doctor bench <duration> <data_dir> <storage_module> [<storage_module> ...]~n\"),\n\tar:console(\"  duration: How long, in seconds, to run the benchmark for.~n\"), \n\tar:console(\"  data_dir: Full path to your data_dir.~n\"), \n\tar:console(\"  storage_module: List of storage modules in same format used for Arweave ~n\"),\n\tar:console(\"                  configuration (e.g. 0,En2eqsVJARnTVOSh723PBXAKGmKgrGSjQ2YIGwE_ZRI).~n\"), \n\tar:console(\"                  It's recommended that you specify all configured storage_modules ~n\"),\n\tar:console(\"                  in order to benchmark the overall system performance including  ~n\"),\n\tar:console(\"                  any data busses that are shared across disks.~n\"), \n\tar:console(\"~n\"), \n\tar:console(\"Example:~n\"), \n\tar:console(\"data-doctor bench 60 /mnt/arweave-data 0,En2eqsVJARnTVOSh723PBXAKGmKgrGSjQ2YIGwE_ZRI \\\\~n\"),\n\tar:console(\"    1,En2eqsVJARnTVOSh723PBXAKGmKgrGSjQ2YIGwE_ZRI \\\\~n\"),\n\tar:console(\"    2,En2eqsVJARnTVOSh723PBXAKGmKgrGSjQ2YIGwE_ZRI \\\\~n\"),\n\tar:console(\"    3,En2eqsVJARnTVOSh723PBXAKGmKgrGSjQ2YIGwE_ZRI~n\"),\n\tar:console(\"~n\"), \n\tar:console(\"Note: During the run data will be logged to ~p in the format:~n\", [?OUTPUT_FILENAME]),\n\tar:console(\"      '~s'~n\", [?FILE_FORMAT]).\n\nbench_read(Args) when length(Args) < 3 ->\n\tfalse;\nbench_read(Args) ->\n\t[DurationString, DataDir | StorageModuleConfigs] = Args,\n\tDuration = list_to_integer(DurationString),\n\n\t{StorageModules, Address} = parse_storage_modules(StorageModuleConfigs, [], undefined),\n\tar:console(\"Assuming mining address: ~p~n\", [ar_util:safe_encode(Address)]),\n\tConfig = #config{\n\t\tdata_dir = DataDir,\n\t\tstorage_modules = StorageModules,\n\t\tmining_addr = Address},\n\tarweave_config:set_env(Config),\n\n\tar_kv_sup:start_link(),\n\tar_storage_sup:start_link(),\n\tar_sync_record_sup:start_link(),\n\tar_chunk_storage_sup:start_link(),\n\tar_mining_io:start_link(standalone),\n\n\tar:console(\"~n~nDisk read benchmark will run for ~B seconds.~n\", [Duration]),\n\tar:console(\"Data will be logged continuously to ~p in the format:~n\", [?OUTPUT_FILENAME]),\n\tar:console(\"'~s'~n~n\", [?FILE_FORMAT]),\n\n\tStopTime = erlang:monotonic_time() + erlang:convert_time_unit(Duration, second, native),\n\n\tResults = ar_util:pmap(\n\t\tfun(StorageModule) ->\n\t\t\tread_storage_module(DataDir, StorageModule, StopTime)\n\t\tend,\n\t\tStorageModules\n\t),\n\n\tlists:foreach(\n\t\tfun({StoreID, SumChunks, SumElapsedTime}) ->\n\t\t\tReadRate = (SumChunks * 1000 div 4) div SumElapsedTime,\n\t\t\tar:console(\"~s read ~B chunks in ~B ms (~B MiB/s)~n\", [StoreID, SumChunks, SumElapsedTime, ReadRate])\n\t\tend,\n\t\tResults),\n\n\tar:console(\"~n\"),\n\t\n\ttrue.\n\nparse_storage_modules([], StorageModules, Address) ->\n\t{StorageModules, Address};\nparse_storage_modules([StorageModuleConfig | StorageModuleConfigs], StorageModules, Address) ->\n\t{ok, 
StorageModule} = ar_config:parse_storage_module(StorageModuleConfig),\n\tAddress2 = ar_storage_module:module_address(StorageModule),\n\tcase Address2 == Address orelse Address == undefined of\n\t\ttrue ->\n\t\t\tok;\n\t\tfalse ->\n\t\t\tar:console(\"Warning: multiple mining addresses specified in storage_modules:~n\")\n\tend,\n\tparse_storage_modules(\n\t\tStorageModuleConfigs,\t\n\t\tStorageModules ++ [StorageModule],\n\t\tAddress2).\n\t\nread_storage_module(_DataDir, StorageModule, StopTime) ->\n\tStoreID = ar_storage_module:id(StorageModule),\n\tar_chunk_storage:open_files(StoreID),\n\t{StartOffset, EndOffset} = ar_storage_module:module_range(StorageModule),\t\n\n\tOutputFileName = string:replace(?OUTPUT_FILENAME, \"<storage_module>\", StoreID),\n\n\trandom_read(StorageModule, StartOffset, EndOffset, StopTime, OutputFileName).\n\n\t% random_chunk_pread(DataDir, StoreID),\n\t% random_dev_pread(DataDir, StoreID),\n\t% dd_chunk_files_read(DataDir, StoreID),\n\t% dd_chunk_file_read(DataDir, StoreID),\n\t% dd_devs_read(DataDir, StoreID),\n\t% dd_dev_read(DataDir, StoreID),\n\nrandom_read(StorageModule, StartOffset, EndOffset, StopTime, OutputFileName) ->\n\trandom_read(StorageModule, StartOffset, EndOffset, StopTime, OutputFileName, 0, 0).\nrandom_read(StorageModule, StartOffset, EndOffset, StopTime, OutputFileName, SumChunks, SumElapsedTime) ->\n\tStartTime = erlang:monotonic_time(),\n\tcase StartTime < StopTime of\n\t\ttrue ->\n\t\t\tChunks = read(StorageModule, StartOffset, EndOffset, ?RECALL_RANGE_SIZE, ?NUM_FILES),\n\t\t\tEndTime = erlang:monotonic_time(),\n\t\t\tElapsedTime = erlang:convert_time_unit(EndTime - StartTime, native, millisecond),\n\n\t\t\t%% timestamp,bytes_read,elapsed_time_ms,throughput_bps\n\t\t\tTimestamp = os:system_time(second),\n\t\t\tBytesRead = Chunks * ?DATA_CHUNK_SIZE,\n\t\t\tLine = io_lib:format(\"~B,~B,~B,~B~n\", [\n\t\t\t\tTimestamp, BytesRead, ElapsedTime, BytesRead * 1000 div ElapsedTime]),\n\t\t\tfile:write_file(OutputFileName, Line, [append]),\n\t\t\trandom_read(StorageModule, StartOffset, EndOffset, StopTime, OutputFileName,\n\t\t\t\tSumChunks + Chunks, SumElapsedTime + ElapsedTime);\n\t\tfalse ->\n\t\t\tStoreID = ar_storage_module:id(StorageModule),\n\t\t\t{StoreID, SumChunks, SumElapsedTime}\n\tend.\n\t\nread(StorageModule, StartOffset, EndOffset, Size, NumReads) ->\n\tread(StorageModule, StartOffset, EndOffset, Size, 0, NumReads).\n\nread(_StorageModule, _StartOffset, _EndOffset, _Size, NumChunks, 0) ->\n\tNumChunks;\nread(StorageModule, StartOffset, EndOffset, Size, NumChunks, NumReads) ->\n\tOffset = rand:uniform(EndOffset - Size - StartOffset + 1) + StartOffset,\n\tCandidate = #mining_candidate{\n\t\tmining_address = ar_storage_module:module_address(StorageModule),\n\t\tpacking_difficulty = ar_storage_module:module_packing_difficulty(StorageModule)\n\t},\n\tRangeExists = ar_mining_io:read_recall_range(chunk1, self(), Candidate, Offset),\n\tcase RangeExists of\n\t\ttrue ->\n\t\t\treceive\n\t\t\t\t{chunks_read, _WhichChunk, _Candidate, _RecallRangeStart, ChunkOffsets} ->\n\t\t\t\t\tread(StorageModule, StartOffset, EndOffset, Size,\n\t\t\t\t\t\tNumChunks + length(ChunkOffsets), NumReads - 1)\n\t\t\tend;\n\t\tfalse ->\n\t\t\t%% Try again with a new random offset\n\t\t\tread(StorageModule, StartOffset, EndOffset, Size, NumChunks, NumReads)\n\tend.\n\n\t\n%% XXX: the following functions are not used, but may be useful in the future to benchmark\n%% different read strategies. 
They can be deleted when they are no longer useful.\n\nrandom_chunk_pread(DataDir, StoreID) ->\n\trandom_chunk_pread(DataDir, StoreID, ?NUM_ITERATIONS, 0, 0).\nrandom_chunk_pread(_DataDir, _StoreID, 0, SumBytes, SumElapsedTime) ->\n\tReadRate = (SumBytes * 1000 div ?MiB) div SumElapsedTime,\n\tar:console(\"*Random* chunk pread ~B MiB in ~B ms (~B MiB/s)~n\", [SumBytes div ?MiB, SumElapsedTime, ReadRate]);\nrandom_chunk_pread(DataDir, StoreID, Count, SumBytes, SumElapsedTime) ->\n\tFiles = open_files(DataDir, StoreID),\n\tStartTime = erlang:monotonic_time(),\n\tBytes = pread(Files, ?RECALL_RANGE_SIZE, 0),\n\tEndTime = erlang:monotonic_time(),\n\tElapsedTime = erlang:convert_time_unit(EndTime - StartTime, native, millisecond),\n\trandom_chunk_pread(DataDir, StoreID, Count - 1, SumBytes + Bytes, SumElapsedTime + ElapsedTime).\n\nrandom_dev_pread(DataDir, StoreID) ->\n\trandom_dev_pread(DataDir, StoreID, ?NUM_ITERATIONS, 0, 0).\nrandom_dev_pread(_DataDir, _StoreID, 0, SumBytes, SumElapsedTime) ->\n\tReadRate = (SumBytes * 1000 div ?MiB) div SumElapsedTime,\n\tar:console(\"*Random* device pread ~B MiB in ~B ms (~B MiB/s)~n\", [SumBytes div ?MiB, SumElapsedTime, ReadRate]);\nrandom_dev_pread(DataDir, StoreID, Count, SumBytes, SumElapsedTime) ->\n\tFilepath = hd(ar_chunk_storage:list_files(DataDir, StoreID)),\n\tDevice = get_mounted_device(Filepath),\n\t{ok, File} = file:open(Device, [read, raw, binary]),\n\tFiles = [{Device, File, ar_block:partition_size()} || _ <- lists:seq(1, ?NUM_FILES)],\n\tStartTime = erlang:monotonic_time(),\n\tBytes = pread(Files, ?RECALL_RANGE_SIZE, 0),\n\tEndTime = erlang:monotonic_time(),\n\tElapsedTime = erlang:convert_time_unit(EndTime - StartTime, native, millisecond),\n\trandom_dev_pread(DataDir, StoreID, Count - 1, SumBytes + Bytes, SumElapsedTime + ElapsedTime).\n\ndd_chunk_files_read(DataDir, StoreID) ->\n\tdd_chunk_files_read(DataDir, StoreID, ?NUM_ITERATIONS, 0, 0).\ndd_chunk_files_read(_DataDir, _StoreID, 0, SumBytes, SumElapsedTime) ->\n\tReadRate = (SumBytes * 1000 div ?MiB) div SumElapsedTime,\n\tar:console(\"*dd* multi chunk files read ~B MiB in ~B ms (~B MiB/s)~n\", [SumBytes div ?MiB, SumElapsedTime, ReadRate]);\ndd_chunk_files_read(DataDir, StoreID, Count, SumBytes, SumElapsedTime) ->\n\tFiles = open_files(DataDir, StoreID),\n\tStartTime = erlang:monotonic_time(),\n\tBytes = dd_files(Files, ?RECALL_RANGE_SIZE, 0),\n\tEndTime = erlang:monotonic_time(),\n\tElapsedTime = erlang:convert_time_unit(EndTime - StartTime, native, millisecond),\n\tdd_chunk_files_read(DataDir, StoreID, Count - 1, SumBytes + Bytes, SumElapsedTime + ElapsedTime).\n\ndd_chunk_file_read(DataDir, StoreID) ->\n\tdd_chunk_file_read(DataDir, StoreID, ?NUM_ITERATIONS, 0, 0).\ndd_chunk_file_read(_DataDir, _StoreID, 0, SumBytes, SumElapsedTime) ->\n\tReadRate = (SumBytes * 1000 div ?MiB) div SumElapsedTime,\n\tar:console(\"*dd* single chunk file read ~B MiB in ~B ms (~B MiB/s)~n\", [SumBytes div ?MiB, SumElapsedTime, ReadRate]);\ndd_chunk_file_read(DataDir, StoreID, Count, SumBytes, SumElapsedTime) ->\n\tFiles = open_files(DataDir, StoreID),\n\t{Filepath, _File, FileSize} = hd(Files),\n\tStartTime = erlang:monotonic_time(),\n\tdd(Filepath, FileSize, ?RECALL_RANGE_SIZE, ?NUM_FILES),\n\tEndTime = erlang:monotonic_time(),\n\tElapsedTime = erlang:convert_time_unit(EndTime - StartTime, native, millisecond),\n\tBytes = ?RECALL_RANGE_SIZE * ?NUM_FILES,\n\tdd_chunk_file_read(DataDir, StoreID, Count - 1, SumBytes + Bytes, SumElapsedTime + ElapsedTime).\n\ndd_dev_file_read(DataDir, StoreID) 
->\n\tdd_dev_file_read(DataDir, StoreID, ?NUM_ITERATIONS, 0, 0).\ndd_dev_file_read(_DataDir, _StoreID, 0, SumBytes, SumElapsedTime) ->\n\tReadRate = (SumBytes * 1000 div ?MiB) div SumElapsedTime,\n\tar:console(\"*dd* multi dev file read ~B MiB in ~B ms (~B MiB/s)~n\", [SumBytes div ?MiB, SumElapsedTime, ReadRate]);\ndd_dev_file_read(DataDir, StoreID, Count, SumBytes, SumElapsedTime) ->\n\tFilepath = \"/opt/prod/data/storage_modules/storage_module_19_cLGt682uYLJCl47QsRHfdTzMhSPTHPsUnUOzuvTm1HQ/dd.10GB\",\n\tStartTime = erlang:monotonic_time(),\n\tdd(Filepath, 10*?GiB, ?RECALL_RANGE_SIZE, ?NUM_FILES),\n\tEndTime = erlang:monotonic_time(),\n\tElapsedTime = erlang:convert_time_unit(EndTime - StartTime, native, millisecond),\n\tBytes = ?RECALL_RANGE_SIZE * ?NUM_FILES,\n\tdd_dev_file_read(DataDir, StoreID, Count - 1, SumBytes + Bytes, SumElapsedTime + ElapsedTime).\n\ndd_devs_read(DataDir, StoreID) ->\n\tdd_devs_read(DataDir, StoreID, ?NUM_ITERATIONS, 0, 0).\ndd_devs_read(_DataDir, _StoreID, 0, SumBytes, SumElapsedTime) ->\n\tReadRate = (SumBytes * 1000 div ?MiB) div SumElapsedTime,\n\tar:console(\"*dd* multi devs read ~B MiB in ~B ms (~B MiB/s)~n\", [SumBytes div ?MiB, SumElapsedTime, ReadRate]);\ndd_devs_read(DataDir, StoreID, Count, SumBytes, SumElapsedTime) ->\n\tFilepath = hd(ar_chunk_storage:list_files(DataDir, StoreID)),\n\tDevice = get_mounted_device(Filepath),\n\tDevices = [{Device, not_set, ar_block:partition_size()} || _ <- lists:seq(1, ?NUM_FILES)],\n\tStartTime = erlang:monotonic_time(),\n\tBytes = dd_files(Devices, ?RECALL_RANGE_SIZE, 0),\n\tEndTime = erlang:monotonic_time(),\n\tElapsedTime = erlang:convert_time_unit(EndTime - StartTime, native, millisecond),\n\tdd_devs_read(DataDir, StoreID, Count - 1, SumBytes + Bytes, SumElapsedTime + ElapsedTime).\n\ndd_dev_read(DataDir, StoreID) ->\n\tdd_dev_read(DataDir, StoreID, ?NUM_ITERATIONS, 0, 0).\ndd_dev_read(_DataDir, _StoreID, 0, SumBytes, SumElapsedTime) ->\n\tReadRate = (SumBytes * 1000 div ?MiB) div SumElapsedTime,\n\tar:console(\"*dd* single dev read ~B MiB in ~B ms (~B MiB/s)~n\", [SumBytes div ?MiB, SumElapsedTime, ReadRate]);\ndd_dev_read(DataDir, StoreID, Count, SumBytes, SumElapsedTime) ->\n\tFilepath = hd(ar_chunk_storage:list_files(DataDir, StoreID)),\n\tDevice = get_mounted_device(Filepath),\n\tStartTime = erlang:monotonic_time(),\n\tdd(Device, ar_block:partition_size(), ?RECALL_RANGE_SIZE, ?NUM_FILES),\n\tEndTime = erlang:monotonic_time(),\n\tElapsedTime = erlang:convert_time_unit(EndTime - StartTime, native, millisecond),\n\tBytes = ?RECALL_RANGE_SIZE * ?NUM_FILES,\n\tdd_dev_read(DataDir, StoreID, Count - 1, SumBytes + Bytes, SumElapsedTime + ElapsedTime).\n\t\nget_mounted_device(FilePath) ->\n\tCmd = \"df \" ++ FilePath ++ \" | awk 'NR==2 {print $1}'\",\n\tDevice = os:cmd(Cmd),\n\tstring:trim(Device, both, \"\\n\").\n\t\nopen_files(DataDir, StoreID) ->\n\tAllFilepaths = ar_chunk_storage:list_files(DataDir, StoreID),\n\tFilepaths = lists:sublist(ar_util:shuffle_list(AllFilepaths), ?NUM_FILES),\n\tlists:foldl(\n\t\tfun(Filepath, Acc) ->\n\t\t\t{ok, FileInfo} = file:read_file_info(Filepath),\n\t\t\t{ok, File} = file:open(Filepath, [read, raw, binary]),\n\t\t\t[{Filepath, File, FileInfo#file_info.size} | Acc]\n\t\tend,\n\t\t[], Filepaths).\n\npread([], _Size, NumBytes) ->\n\tNumBytes;\npread([{Filepath, File, FileSize} | Files], Size, NumBytes) ->\n\tPosition = max(0, rand:uniform(FileSize - Size)),\n \t% ar:console(\"pread: ~p ~B ~B ~B ~B~n\", [Filepath, FileSize, Position, Size, NumBytes]),\n\t{ok, Bin} = 
file:pread(File, Position, Size),\n\tpread(Files, Size, NumBytes + byte_size(Bin)).\n\ndd_files([], _Size, NumBytes) ->\n\tNumBytes;\ndd_files([{Filepath, _File, FileSize} | Files], Size, NumBytes) ->\n\tdd(Filepath, FileSize, Size, 1),\n\tdd_files(Files, Size, NumBytes + Size).\n\ndd(Filepath, FileSize, Size, Count) ->\n\tBlockSize = ?RECALL_RANGE_SIZE,\n\tBytes = Size * Count,\n\tBlocks = Bytes div BlockSize,\n\tMaxOffset = max(1, FileSize - Bytes),\n\tPosition = rand:uniform(MaxOffset) div BlockSize,\n\tCommand = io_lib:format(\"dd iflag=direct if=~s skip=~B of=/dev/null bs=~B count=~B\", [Filepath, Position, BlockSize, Blocks]),\n\t% ar:console(\"~s~n\", [Command]),\n\tos:cmd(Command).\n"
  },
  {
    "path": "apps/arweave/src/ar_doctor_dump.erl",
    "content": "-module(ar_doctor_dump).\n\n-export([main/1, help/0]).\n\n-include_lib(\"kernel/include/file.hrl\").\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\nmain(Args) ->\n\tdump(Args).\n\nhelp() ->\n\tar:console(\"data-doctor dump <include_txs> <block_id> <min_height> <data_dir> <output_dir>~n\"),\n\tar:console(\"  include_txs: Whether to include transactions in the dump (true/false).~n\"),\n\tar:console(\"  block_id: The block ID to start the dump from.~n\"),\n\tar:console(\"  min_height: The minimum height of the blocks to dump.~n\"),\n\tar:console(\"  data_dir: Full path to your data_dir.~n\"),\n\tar:console(\"  output_dir: Full path to a directory where the dumped data will be written.~n\"),\n\tar:console(\"~nExample:~n\"),\n\tar:console(\"data-doctor dump true ZR7zbobdw55a....pRpUabEkLD0V 100000 /mnt/arweave-data /mnt/output~n\").\n\ndump([IncludeTXs, H, MinHeight, DataDir, OutputDir]) ->\n\tok = filelib:ensure_dir(filename:join([OutputDir, \"blocks\", \"dummy\"])),\n\tok = filelib:ensure_dir(filename:join([OutputDir, \"txs\", \"dummy\"])),\n\n\tConfig = #config{data_dir = DataDir},\n\tarweave_config:set_env(Config),\n\tar_kv_sup:start_link(),\n\tar_storage_sup:start_link(),\n\n\tdump_blocks(ar_util:decode(H),\n\t\tlist_to_integer(MinHeight),\n\t\tOutputDir,\n\t\tlist_to_boolean(IncludeTXs)),\n\ttrue;\ndump(_) ->\n\tfalse.\n\nlist_to_boolean(\"true\") -> true;\nlist_to_boolean(\"false\") -> false;\nlist_to_boolean(_) -> false.\n\ndump_blocks(BH, MinHeight, OutputDir, IncludeTXs) ->\n\tH = ar_util:encode(BH),\n\tcase ar_kv:get(block_db, BH) of\n\t\t{ok, Bin} ->\n\t\t\ttry\n\t\t\t\tcase ar_serialize:binary_to_block(Bin) of\n\t\t\t\t\t{ok, B} ->\n\t\t\t\t\t\tcase B#block.height >= MinHeight of\n\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\tio:format(\"Block: ~p / ~p\", [B#block.height, H]),\n\t\t\t\t\t\t\t\tJsonFilename = io_lib:format(\"~s.json\", [ar_util:encode(B#block.indep_hash)]),\n\t\t\t\t\t\t\t\tOutputFilePath = filename:join([OutputDir, \"blocks\", JsonFilename]),\n\t\t\t\t\t\t\t\tcase file:read_file_info(OutputFilePath) of\n\t\t\t\t\t\t\t\t\t{ok, _FileInfo} ->\n\t\t\t\t\t\t\t\t\t\tio:format(\" ... skipping~n\"),\n\t\t\t\t\t\t\t\t\t\tok; % File exists, do nothing\n\t\t\t\t\t\t\t\t\t{error, enoent} ->\n\t\t\t\t\t\t\t\t\t\tio:format(\" ... 
writing~n\"),\n\t\t\t\t\t\t\t\t\t\t% File does not exist, proceed with processing\n\t\t\t\t\t\t\t\t\t\tcase IncludeTXs of\n\t\t\t\t\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\t\t\t\t\tdump_txs(B#block.txs, OutputDir);\n\t\t\t\t\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t\t\t\t\tok\n\t\t\t\t\t\t\t\t\t\tend,\n\t\t\t\t\t\t\t\t\t\tJson = ar_serialize:block_to_json_struct(B),\n\t\t\t\t\t\t\t\t\t\tJsonString = ar_serialize:jsonify(Json),\n\t\t\t\t\t\t\t\t\t\tfile:write_file(OutputFilePath, JsonString)\n\t\t\t\t\t\t\t\tend,\n\t\t\t\t\t\t\t\tPrevBH = B#block.previous_block,\n\t\t\t\t\t\t\t\tdump_blocks(PrevBH, MinHeight, OutputDir, IncludeTXs);\n\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\tio:format(\"Done.~n\")\n\t\t\t\t\t\tend;\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tok\n\t\t\t\tend\n\t\t\tcatch\n\t\t\t\tType:Reason ->\n\t\t\t\t\tio:format(\"Error processing block ~p: ~p:~p~n\", [H, Type, Reason])\n\t\t\tend;\n        not_found ->\n            io:format(\"Block ~p not found.~n\", [H])\n    end.\n\ndump_txs([], _OutputDir) ->\n\tok;\ndump_txs([TXID | TXIDs], OutputDir) ->\n\tcase ar_kv:get(tx_db, TXID) of\n\t\t{ok, Bin} ->\n\t\t\t{ok, TX} = ar_serialize:binary_to_tx(Bin),\n\t\t\tJson = ar_serialize:tx_to_json_struct(TX),\n\t\t\tJsonString = ar_serialize:jsonify(Json),\n\t\t\tJsonFilename = io_lib:format(\"~s.json\", [ar_util:encode(TXID)]),\n\t\t\tOutputFilePath = filename:join([OutputDir, \"txs\", JsonFilename]),\n\t\t\tfile:write_file(OutputFilePath, JsonString);\n\t\t_ ->\n\t\t\tok\n\tend,\n\tdump_txs(TXIDs, OutputDir).\n"
  },
  {
    "path": "apps/arweave/src/ar_doctor_inspect.erl",
    "content": "-module(ar_doctor_inspect).\n\n-export([main/1, help/0]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave/include/ar_consensus.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"arweave/include/ar_chunk_storage.hrl\").\n\n%%--------------------------------------------------------------------\n%% API\n%%--------------------------------------------------------------------\n\n%% main/1 expects either:\n%% 1. [Dir, StartStr, EndStr, Address1, Address2, ...] for traditional inspection\n%% 2. [\"bitmap\", DataDir, StorageModule] for generating a bitmap of chunk states\nmain(Args) ->\n\tcase Args of\n\t\t[\"bitmap\", DataDir, StorageModuleConfig] ->\n\t\t\tbitmap(DataDir, StorageModuleConfig),\n\t\t\ttrue;\n\t\t[\"chunks\", Dir, StartStr, EndStr | AddrListStr] when length(AddrListStr) >= 1 ->\n\t\t\tAddresses = [ar_util:decode(AddrStr) || AddrStr <- AddrListStr],\n\t\t\tarweave_config:set_env(#config{\n\t\t\t\tdisable = [], enable  = [randomx_large_pages]\n\t\t\t}),\n\t\t\tar_metrics:register(),\n\t\t\tar_packing_sup:start_link(),\n\t\t\tStart = ar_block:get_chunk_padded_offset(list_to_integer(StartStr)),\n\t\t\tEnd = ar_block:get_chunk_padded_offset(list_to_integer(EndStr)),\n\t\t\tar:console(\"~nInspecting chunks from padded offset ~p to ~p~n\", [Start, End]),\n\t\t\tEncodedAddresses = [ar_util:encode(Address) || Address <- Addresses],\n\t\t\tar:console(\"~nChecking chunks against unpacked and all addresses: ~p~n\",\n\t\t\t\t[EncodedAddresses]),\n\t\t\tinspect_range(Dir, Start, End, Addresses),\n\t\t\ttrue;\n\t\t_ ->\n\t\t\tfalse\n\tend.\n\nhelp() ->\n\tar:console(\"Usage: inspect chunks <directory> <start_range> <end_range> <address1> [address2 ...]~n\"),\n\tar:console(\"       inspect bitmap <data_dir> <storage_module>~n\").\n\n%%--------------------------------------------------------------------\n%% Inspect Chunks\n%%--------------------------------------------------------------------\n\n%% iterate from Padded (chunk end offset) = Start to End (inclusive)\ninspect_range(_Dir, Start, End, _Addresses) when Start > End ->\n\tok;\ninspect_range(Dir, Start, End, Addresses) ->\n\tinspect_chunk(Dir, Start, Addresses),\n\tNext = Start + ?DATA_CHUNK_SIZE,\n\tinspect_range(Dir, Next, End, Addresses).\n\n%% inspect_chunk/2 locates the chunk file and reads the local chunk,\n%% then queries the remote chunk and prints their generated ids.\ninspect_chunk(Dir, PaddedEndOffset, Addresses) ->\n\tar:console(\"~n~n--- Inspecting padded offset: ~p ---~n\", [PaddedEndOffset]),\n\n\tChunkFileStart = ar_chunk_storage:get_chunk_file_start(PaddedEndOffset),\n\tFilepath = filename:join([Dir, integer_to_binary(ChunkFileStart)]),\n\t{Position, ChunkOffset} =\n\t\tar_chunk_storage:get_position_and_relative_chunk_offset(\n\t\t\tChunkFileStart, PaddedEndOffset),\n\n\tar:console(\"File path: ~p~n\", [Filepath]),\n\tar:console(\"Position: ~p~n\", [Position]),\n\tar:console(\"Chunk offset: ~p~n\", [ChunkOffset]),\n\n\t%% Fetch the expected chunk from arweave.net\n\t{ok, Proof} = fetch_remote_chunk(PaddedEndOffset),\n\tExpectedChunk = maps:get(chunk, Proof),\n\tTXPath = maps:get(tx_path, Proof),\n\t{ok, TXRoot} = ar_merkle:extract_root(TXPath),\n\tChunkSize = byte_size(ExpectedChunk),\n\tExpectedChunkID = ar_tx:generate_chunk_id(ExpectedChunk),\n\tar:console(\"~nExpected chunk size: ~p~n\", [byte_size(ExpectedChunk)]),\n\tar:console(\"Expected chunk ID: ~p~n\", [ar_util:encode(ExpectedChunkID)]),\n\n\t%% Read local chunk from disk.\n\t{RawChunkOffset, 
RawChunk} = read_local_chunk(Filepath, Position),\n\tar:console(\"~nRaw chunk: ~p~n\", [byte_size(RawChunk)]),\n\tar:console(\"Raw chunk offset: ~p~n\", [RawChunkOffset]),\n\tRawChunkID = ar_tx:generate_chunk_id(RawChunk),\n\tar:console(\"Raw chunk ID: ~p~n\", [ar_util:encode(RawChunkID)]),\n\n\t%% Try unpacking the local chunk a number of different ways to see if any match the\n\t%% expected chunk ID.\n\tResult = check_all(\n\t\tExpectedChunkID, RawChunk, PaddedEndOffset, Addresses, TXRoot, ChunkSize),\n\tprint_match(Result).\n\n%% New functions for checking unpacked chunks without printing per test;\n%% only the first matching test is reported.\n\ncheck_unpacked([], _PaddedEndOffset, _TXRoot, _LocalChunk, _ChunkSize, _ExpectedChunkID) ->\n\tno_match;\ncheck_unpacked(\n\t\t[Address | Rest], PaddedEndOffset, TXRoot, LocalChunk, ChunkSize, ExpectedChunkID) ->\n\tcase check_packings_for_address(\n\t\t\tAddress, PaddedEndOffset, TXRoot, LocalChunk, ChunkSize, ExpectedChunkID) of\n\t\t{match, Packing} ->\n\t\t\t{match, Packing};\n\t\tno_match ->\n\t\t\tcheck_unpacked(\n\t\t\t\tRest, PaddedEndOffset, TXRoot, LocalChunk, ChunkSize, ExpectedChunkID)\n\tend.\n\ncheck_packings_for_address(\n\t\tAddress, PaddedEndOffset, TXRoot, LocalChunk, ChunkSize, ExpectedChunkID) ->\n\tPackings = [\n\t\t{replica_2_9, Address},\n\t\t{spora_2_6, Address},\n\t\t{composite, Address, 1},\n\t\t{composite, Address, 2}\n\t],\n\tcheck_packings(Packings, PaddedEndOffset, TXRoot, LocalChunk, ChunkSize, ExpectedChunkID).\n\ncheck_packings([], _PaddedEndOffset, _TXRoot, _LocalChunk, _ChunkSize, _ExpectedChunkID) ->\n\tno_match;\ncheck_packings(\n\t\t[Packing | Rest], PaddedEndOffset, TXRoot, LocalChunk, ChunkSize, ExpectedChunkID) ->\n\tcase check_packing(\n\t\t\tPacking, PaddedEndOffset, TXRoot, LocalChunk, ChunkSize, ExpectedChunkID) of\n\t\t{match, _} = Match ->\n\t\t\tMatch;\n\t\tno_match ->\n\t\t\tcheck_packings(\n\t\t\t\tRest, PaddedEndOffset, TXRoot, LocalChunk, ChunkSize, ExpectedChunkID)\n\tend.\n\ncheck_packing(Packing, PaddedEndOffset, TXRoot, LocalChunk, ChunkSize, ExpectedChunkID) ->\n\tcase ar_packing_server:unpack(Packing, PaddedEndOffset, TXRoot, LocalChunk, ChunkSize) of\n\t\t{ok, Unpacked} ->\n\t\t\tUnpackedID = ar_tx:generate_chunk_id(Unpacked),\n\t\t\tif UnpackedID =:= ExpectedChunkID ->\n\t\t\t\t{match, Packing};\n\t\t\ttrue ->\n\t\t\t\tno_match\n\t\t\tend;\n\t\t{error, _Reason} ->\n\t\t\tno_match\n\tend.\n\n%% read_local_chunk/2 opens the file, reads ?OFFSET_SIZE+?DATA_CHUNK_SIZE bytes \n%% starting at Position and closes the file.\nread_local_chunk(Filepath, Position) ->\n\tcase file:open(Filepath, [read, binary, raw]) of\n\t\t{ok, F} ->\n\t\t\t%% Read header + chunk data.\n\t\t\tLength = ?OFFSET_SIZE + ?DATA_CHUNK_SIZE,\n\t\t\tcase file:pread(F, Position, Length) of\n\t\t\t\t{ok, << ChunkOffset:?OFFSET_BIT_SIZE, Chunk:?DATA_CHUNK_SIZE/binary, Rest/binary >>} ->\n\t\t\t\t\tfile:close(F),\n\t\t\t\t\t{ChunkOffset, Chunk};\n\t\t\t\tError ->\n\t\t\t\t\tfile:close(F),\n\t\t\t\t\tar:console(\"Error reading file ~s at position ~p: ~p~n\", [Filepath, Position, Error]),\n\t\t\t\t\t{0, <<>>}\n\t\t\tend;\n\t\t{error, Reason} ->\n\t\t\tar:console(\"Error opening file ~s: ~p~n\", [Filepath, Reason]),\n\t\t\t{0, <<>>}\n\tend.\n\n%% fetch_remote_chunk/1 uses httpc (in inets application) to query the remote URL.\nfetch_remote_chunk(PaddedOffset) ->\n\t%% Build URL e.g. 
\"http://arweave.net/chunk2/123456\" \n\tURL = lists:concat([\"https://arweave.net/chunk2/\", integer_to_list(PaddedOffset)]),\n\tar:console(\"Fetching remote chunk from ~s~n\", [URL]),\n\t%% Ensure inets is started.\n\tapplication:ensure_all_started(inets),\n\tcase httpc:request(get, {URL, []}, [{body_format, binary}], []) of\n\t\t{ok, {{_, 200, _}, _Headers, Body}} ->\n\t\t\tBin = list_to_binary(Body),\n\t\t\tar_serialize:binary_to_poa(Bin);\n\t\t{ok, Response} ->\n\t\t\tar:console(\"Unexpected response for ~s: ~p~n\", [URL, Response]),\n\t\t\t{error, Response};\n\t\t{error, Reason} ->\n\t\t\tar:console(\"HTTP request error for ~s: ~p~n\", [URL, Reason]),\n\t\t\t{error, Reason}\n\tend.\n\n%% check_all/6 performs the raw, entropy, and unpacking checks sequentially\ncheck_all(ExpectedChunkID, LocalChunk, PaddedEndOffset, Addresses, TXRoot, ChunkSize) ->\n\tLocalID = ar_tx:generate_chunk_id(LocalChunk),\n\tcase LocalID =:= ExpectedChunkID of\n\t\ttrue ->\n\t\t\t{match, \"Raw chunk\"};\n\t\tfalse ->\n\t\t\tEntropy = ar_entropy_storage:generate_missing_entropy(\n\t\t\t\tPaddedEndOffset, hd(Addresses)),\n\t\t\tEntropyID = ar_tx:generate_chunk_id(Entropy),\n\t\t\tcase EntropyID =:= ExpectedChunkID of\n\t\t\t\ttrue ->\n\t\t\t\t\t{match, \"Entropy\"};\n\t\t\t\tfalse ->\n\t\t\t\t\tcheck_unpacked(\n\t\t\t\t\t\tAddresses, PaddedEndOffset, TXRoot, LocalChunk, ChunkSize,\n\t\t\t\t\t\tExpectedChunkID)\n\t\t\tend\n\tend.\n\n%% print_match/1 prints the match result.\nprint_match({match, Type}) when is_list(Type) ->\n\tar:console(\"~nMATCH: ~s~n\", [Type]);\nprint_match({match, Packing}) ->\n\tar:console(\"~nMATCH: ~p~n\", [ar_serialize:encode_packing(Packing, true)]);\nprint_match(no_match) ->\n\tar:console(\"~nNO MATCH~n\").\n\n%%--------------------------------------------------------------------\n%% Inspect Bitmap\n%%--------------------------------------------------------------------\n\n%% @doc Generates a bitmap of the provided storage module. Each pixel is a chunk where\n%% the color is determined by the packing format of the chunk. Each row of the bitmap\n%% is a replica.2.9 sector (so the bitmap is 1024 rows high).\nbitmap(DataDir, StorageModuleConfig) ->\n\t{ok, StorageModule} = ar_config:parse_storage_module(StorageModuleConfig),\n\t\n\tConfig = #config{\n\t\tdata_dir = DataDir,\n\t\tstorage_modules = [StorageModule]},\n\tarweave_config:set_env(Config),\n\n\tStoreID = ar_storage_module:id(StorageModule),\n\t\n\tar_kv_sup:start_link(),\n\tar_storage_sup:start_link(),\n\tar_sync_record_sup:start_link(),\n\tar_data_sync:init_kv(StoreID),\n\t\n\t{ModuleStart, ModuleEnd} = ar_storage_module:module_range(StorageModule),\n\n\tChunkPackings = ar_chunk_visualization:get_chunk_packings(\n\t\tModuleStart, ModuleEnd, StoreID, true),\n\tar_chunk_visualization:print_chunk_stats(ChunkPackings),\n\tBitmap = ar_chunk_visualization:generate_bitmap(ChunkPackings),\n\t\n\tFilename = \"bitmap_\" ++ StoreID ++ \".ppm\",\n\tfile:write_file(Filename, ar_chunk_visualization:bitmap_to_binary(Bitmap)),\n\tar:console(\"Bitmap written to ~s~n\", [Filename])."
  },
  {
    "path": "apps/arweave/src/ar_doctor_merge.erl",
    "content": "-module(ar_doctor_merge).\n\n-export([main/1, help/0]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"arweave/include/ar_chunk_storage.hrl\").\n-include_lib(\"arweave/include/ar_consensus.hrl\").\n\nmain(Args) ->\n\tmerge(Args).\n\nhelp() ->\n\tar:console(\"data-doctor merge data_dir storage_module src_directories~n\").\n\nmerge(Args) when length(Args) < 3 ->\n\tfalse;\nmerge(Args) ->\n\t[DataDir, StorageModuleConfig | SrcDirs ] = Args,\n\n\tStorageModule = ar_config:parse_storage_module(StorageModuleConfig),\n\tStoreID = ar_storage_module:id(StorageModule),\n\n\tok = merge(DataDir, StorageModule, StoreID, SrcDirs),\n\ttrue.\n\nmerge(_DataDir, _StorageModule, _StoreID, []) ->\n\tok;\nmerge(DataDir, StorageModule, StoreID, [SrcDir | SrcDirs]) ->\n\n\tDstDir = filename:join([DataDir, \"storage_modules\", StoreID]),\n\tar:console(\"~n~nMerge data from ~p into ~p~n~n\", [SrcDir, DstDir]),\n\n\tmove_chunk_storage(SrcDir, DstDir),\n\n\tcopy_db(\"ar_data_sync_db\", SrcDir, DstDir),\n\tcopy_db(\"ar_data_sync_chunk_db\", SrcDir, DstDir),\n\tcopy_db(\"ar_data_sync_disk_pool_chunks_index_db\", SrcDir, DstDir),\n\tcopy_db(\"ar_data_sync_data_root_index_db\", SrcDir, DstDir),\n\tcopy_sync_records(SrcDir, DstDir),\n\n\tmerge(DataDir, StorageModule, StoreID, SrcDirs).\n\nmove_chunk_storage(SrcDir, DstDir) ->\n\tMkDir = io_lib:format(\"mkdir -p ~s/chunk_storage ~s/rocksdb~n\", [DstDir, DstDir]),\n\tMv = io_lib:format(\"mv ~s/chunk_storage/* ~s/chunk_storage~n\", [SrcDir, DstDir]),\n\tar:console(MkDir),\n\tos:cmd(MkDir),\n\tar:console(Mv),\n\tos:cmd(Mv).\n\n% Function to copy all key/value pairs from one DB to another\ncopy_db(DB, SrcDir, DstDir) ->\n\tar:console(\"~nCopying DB ~p~n\", [DB]),\n\tSrcPath = filename:join([SrcDir, \"rocksdb\", DB]),\n\tDstPath = filename:join([DstDir, \"rocksdb\", DB]),\n    % List all column families in the source database\n    {ok, ColumnFamilies} = rocksdb:list_column_families(SrcPath, [{create_if_missing, false}]),\n\n\tCFDescriptors = lists:foldl(\n\t\tfun(CF, Acc) ->\n\t\t\t[{CF, []} | Acc]\n\t\tend,\n\t\t[],\n\t\tColumnFamilies\n\t),\n\t\n    % Open Source Database with all column families\n    {ok, SrcDB, SrcCFs} = rocksdb:open(SrcPath, [{create_if_missing, false}], CFDescriptors),\n\n    % Open Destination Database with all column families, creating them if necessary\n    {ok, DstDB, DstCFs} = rocksdb:open(DstPath,\n\t\t[{create_if_missing, true}, {create_missing_column_families, true}], CFDescriptors),\n\n    % Iterate and copy for each column family\n    lists:zipwith(\n\t\tfun({SrcCF, DstCF}, ColumnFamily) -> \n\t\t\tar:console(\"Copying family ~p~n\", [ColumnFamily]),\n\t\t\tcopy_column_family(SrcDB, DstDB, SrcCF, DstCF) \n\t\tend,\n\t\tlists:zip(SrcCFs, DstCFs), ColumnFamilies),\n\n    % Close databases\n    rocksdb:close(SrcDB),\n    rocksdb:close(DstDB).\n\n% Function to copy a specific column family\ncopy_column_family(SrcDB, DstDB, SrcCF, DstCF) ->\n    % Create an Iterator for this column family in Source Database\n    {ok, Itr} = rocksdb:iterator(SrcDB, SrcCF, []),\n\tcopy_from_iterator(Itr, rocksdb:iterator_move(Itr, first), DstDB, DstCF),\n    rocksdb:iterator_close(Itr).\n\n% Helper function to copy key/value pairs from iterator to destination DB\ncopy_from_iterator(Itr, Res, DstDB, DstCF) ->\n\tcase Res of\n\t\t{ok, Key, Value} ->\n            ok = rocksdb:put(DstDB, DstCF, Key, Value, []),\n            copy_from_iterator(Itr, rocksdb:iterator_move(Itr, next), 
DstDB, DstCF);\n        {error, invalid_iterator} ->\n            % End of iteration\n            ok\n    end.\n\ncopy_sync_records(SrcDir, DstDir) ->\n\tar:console(\"Copying sync records~n\", []),\n\tSrcPath = filename:join([SrcDir, \"rocksdb\", \"ar_sync_record_db\"]),\n\tDstPath = filename:join([DstDir, \"rocksdb\", \"ar_sync_record_db\"]),\n\t{ok, SrcDB} = rocksdb:open(SrcPath, [{create_if_missing, false}]),\n\t{ok, DstDB} = rocksdb:open(DstPath, [{create_if_missing, true}]),\n\tSrcSyncRecords = get_sync_records(SrcDB),\n\tDstSyncRecords = get_sync_records(DstDB),\n\tUnion = merge_sync_records(SrcSyncRecords, DstSyncRecords),\n\tput_sync_records(DstDB, Union),\n\trocksdb:close(SrcDB),\n\trocksdb:close(DstDB).\n\nget_sync_records(DB) ->\n\tRecord = rocksdb:get(DB, <<\"sync_records\">>, []),\n\tcase Record of\n\t\t{ok, Bin} ->\n\t\t\tbinary_to_term(Bin, [safe]);\n\t\t_ ->\n\t\t\t{#{}, #{}}\n\tend.\n\nput_sync_records(DB, Intervals) ->\n\trocksdb:put(DB, <<\"sync_records\">>, term_to_binary(Intervals), []).\n\nmerge_sync_records(\n\t\t{SrcSyncRecordByID, SrcSyncRecordByIDType}, {DstSyncRecordByID, DstSyncRecordByIDType}) ->\n\tUnionSyncRecordByID = maps:merge_with(\n\t\tfun(_Key, Src, Dst) -> \n\t\t\tar_intervals:union(Src, Dst)\n\t\tend,\n\t\tSrcSyncRecordByID, DstSyncRecordByID),\n\tUnionRecordByIDType = maps:merge_with(\n\t\tfun(_Key, Src, Dst) -> \n\t\t\tar_intervals:union(Src, Dst)\n\t\tend,\n\t\tSrcSyncRecordByIDType, DstSyncRecordByIDType),\n\t{UnionSyncRecordByID, UnionRecordByIDType}."
  },
  {
    "path": "apps/arweave/src/ar_domain.erl",
    "content": "-module(ar_domain).\n\n-export([get_labeling/3, lookup_arweave_txt_record/1, derive_tx_label/2]).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nget_labeling(ApexDomain, CustomDomains, Hostname) ->\n\tSize = byte_size(ApexDomain),\n\tcase binary:match(Hostname, ApexDomain) of\n\t\t{0, Size} ->\n\t\t\tapex;\n\t\t{N, Size} ->\n\t\t\tLabel = binary:part(Hostname, {0, N-1}),\n\t\t\t{labeled, Label};\n\t\tnomatch ->\n\t\t\tget_labeling_1(CustomDomains, Hostname)\n\tend.\n\nlookup_arweave_txt_record(Domain) ->\n\tcase inet_res:lookup(\"_arweave.\" ++ binary_to_list(Domain), in, txt) of\n\t\t[] ->\n\t\t\tnot_found;\n\t\t[RecordChunks|_] ->\n\t\t\tlist_to_binary(lists:concat(RecordChunks))\n\tend.\n\nderive_tx_label(TXID, BH) ->\n\tData = <<TXID/binary, BH/binary>>,\n\tDigest = crypto:hash(sha256, Data),\n\tbinary:part(ar_base32:encode(Digest), {0, 12}).\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nget_labeling_1(CustomDomains, Hostname) ->\n\tcase lists:member(Hostname, CustomDomains) of\n\t\ttrue ->\n\t\t\t{custom, Hostname};\n\t\tfalse ->\n\t\t\tunknown\n\tend.\n"
  },
  {
    "path": "apps/arweave/src/ar_entropy_cache.erl",
    "content": "-module(ar_entropy_cache).\n\n-export([get/1, clean_up_space/2, put/3, total_size/0]).\n\n-include(\"ar.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Return the stored value, if any, for the given Key.\n-spec get(Key :: string()) -> {ok, term()} | not_found.\nget(Key) ->\n\tget(Key, ar_entropy_cache).\n\n%% @doc Make sure the cache has enough space (i.e., clean up the oldest records, if any)\n%% to store Size worth of elements such that the total size does not exceed MaxSize.\n%% In other words, if you want to store new elements with the total size Size,\n%% call clean_up_space(Size, MaxSize) then call put/3 to store new elements.\n-spec clean_up_space(\n\t\tSize :: non_neg_integer(),\n\t\tMaxSize :: non_neg_integer()\n) -> ok.\nclean_up_space(Size, MaxSize) ->\n\tTable = ar_entropy_cache,\n\tOrderedKeyTable = ar_entropy_cache_ordered_keys,\n\tclean_up_space(Size, MaxSize, Table, OrderedKeyTable).\n\n%% @doc Store the given Value in the cache. Associate it with the given Size and\n%% increase the total cache size accordingly.\n-spec put(\n\t\tKey :: string(),\n\t\tValue :: term(),\n\t\tSize :: non_neg_integer()\n) -> ok.\nput(Key, Value, Size) ->\n\tTable = ar_entropy_cache,\n\tOrderedKeyTable = ar_entropy_cache_ordered_keys,\n\tput(Key, Value, Size, Table, OrderedKeyTable).\n\n%% @doc Return the size of the cache.\n-spec total_size() -> non_neg_integer().\ntotal_size() ->\n\tTable = ar_entropy_cache,\n\ttotal_size(Table).\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\ntotal_size(Table) ->\n\tcase ets:lookup(Table, total_size) of\n\t\t[] ->\n\t\t\t0;\n\t\t[{_, Value}] ->\n\t\t\tValue\n\tend.\n\nget(Key, Table) ->\n\tcase ets:lookup(Table, {key, Key}) of\n\t\t[] ->\n\t\t\tnot_found;\n\t\t[{_, Value}] ->\n\t\t\t%% Track the number of used keys per entropy to estimate the efficiency\n\t\t\t%% of the cache.\n\t\t\tets:update_counter(Table, {fetched_key_count, Key}, 1,\n\t\t\t\t\t{{fetched_key_count, Key}, 0}),\n\t\t\t{ok, Value}\n\tend.\n\nclean_up_space(Size, MaxSize, Table, OrderedKeyTable) ->\n\tTotalSize = total_size(Table),\n\tcase TotalSize + Size > MaxSize of\n\t\ttrue ->\n\t\t\tcase ets:first(OrderedKeyTable) of\n\t\t\t\t'$end_of_table' ->\n\t\t\t\t\tok;\n\t\t\t\t{_Timestamp, Key, ElementSize} = EarliestKey ->\n\t\t\t\t\tets:delete(Table, {key, Key}),\n\t\t\t\t\tets:update_counter(Table, total_size, -ElementSize, {total_size, 0}),\n\t\t\t\t\tets:delete(OrderedKeyTable, EarliestKey),\n\t\t\t\t\tets:delete(Table, {fetched_key_count, Key}),\n\t\t\t\t\tclean_up_space(Size, MaxSize, Table, OrderedKeyTable)\n\t\t\tend;\n\t\tfalse ->\n\t\t\tprometheus_gauge:set(replica_2_9_entropy_cache, TotalSize + Size),\n\t\t\tok\n\tend.\n\nget_fetched_key_count(Table, Key) ->\n\tcase ets:lookup(Table, {fetched_key_count, Key}) of\n\t\t[] ->\n\t\t\t0;\n\t\t[{_, Count}] ->\n\t\t\tCount\n\tend.\n\nput(Key, Value, Size, Table, OrderedKeyTable) ->\n\tets:insert(Table, {{key, Key}, Value}),\n\tTimestamp = os:system_time(microsecond),\n\tets:insert(OrderedKeyTable, {{Timestamp, Key, Size}}),\n\tets:update_counter(Table, total_size, Size, {total_size, 0}).\n\n%%%===================================================================\n%%% 
Tests.\n%%%===================================================================\n\ncache_test() ->\n\tTable = 'test_entropy_cache_table',\n\tOrderedKeyTable = 'test_entropy_cache_ordered_key_table',\n\tets:new(Table, [set, public, named_table]),\n\tets:new(OrderedKeyTable, [ordered_set, public, named_table]),\n\t?assertEqual(0, get_fetched_key_count(Table, some_key)),\n\t?assertEqual(not_found, get(some_key, Table)),\n\t?assertEqual(0, get_fetched_key_count(Table, some_key)),\n\tclean_up_space(64, 128, Table, OrderedKeyTable),\n\tput(some_key, some_value, 64, Table, OrderedKeyTable),\n\t?assertEqual({ok, some_value}, get(some_key, Table)),\n\t?assertEqual(1, get_fetched_key_count(Table, some_key)),\n\t?assertEqual({ok, some_value}, get(some_key, Table)),\n\t?assertEqual(2, get_fetched_key_count(Table, some_key)),\n\tclean_up_space(64, 128, Table, OrderedKeyTable),\n\t?assertEqual({ok, some_value}, get(some_key, Table)),\n\t?assertEqual(3, get_fetched_key_count(Table, some_key)),\n\tclean_up_space(64, 128, Table, OrderedKeyTable),\n\t?assertEqual({ok, some_value}, get(some_key, Table)),\n\t?assertEqual(4, get_fetched_key_count(Table, some_key)),\n\tclean_up_space(128, 128, Table, OrderedKeyTable),\n\t%% We requested an allocation of > MaxSize so the old key needs to be removed.\n\t?assertEqual(not_found, get(some_key, Table)),\n\t?assertEqual(0, get_fetched_key_count(Table, some_key)),\n\t%% The put itself does not clean up the cache.\n\tput(some_key, some_value, 64, Table, OrderedKeyTable),\n\tput(some_other_key, some_other_value, 64, Table, OrderedKeyTable),\n\tput(yet_another_key, yet_another_value, 64, Table, OrderedKeyTable),\n\t?assertEqual(0, get_fetched_key_count(Table, some_key)),\n\t?assertEqual({ok, some_value}, get(some_key, Table)),\n\t?assertEqual({ok, some_other_value}, get(some_other_key, Table)),\n\t?assertEqual({ok, yet_another_value}, get(yet_another_key, Table)),\n\t?assertEqual(1, get_fetched_key_count(Table, some_key)),\n\t?assertEqual(1, get_fetched_key_count(Table, some_other_key)),\n\t?assertEqual(1, get_fetched_key_count(Table, yet_another_key)),\n\t%% Basically, we are simply reducing the cache 192 -> 128.\n\tclean_up_space(0, 128, Table, OrderedKeyTable),\n\t?assertEqual(not_found, get(some_key, Table)),\n\t?assertEqual({ok, some_other_value}, get(some_other_key, Table)),\n\t?assertEqual({ok, yet_another_value}, get(yet_another_key, Table)),\n\tclean_up_space(64, 128, Table, OrderedKeyTable),\n\t?assertEqual(not_found, get(some_other_key, Table)),\n\t?assertEqual({ok, yet_another_value}, get(yet_another_key, Table))."
  },
  {
    "path": "apps/arweave/src/ar_entropy_gen.erl",
    "content": "-module(ar_entropy_gen).\n\n-behaviour(gen_server).\n\n-export([name/1, register_workers/1,  initialize_context/2,\n\tmap_entropies/8, entropy_offsets/2,\n\tgenerate_entropies/2, generate_entropies/4, generate_entropy_keys/2, \n\tshift_entropy_offset/2]).\n\n-export([start_link/2, init/1, handle_cast/2, handle_call/3, handle_info/2, terminate/2]).\n\n-include(\"ar.hrl\").\n-include(\"ar_sup.hrl\").\n-include(\"ar_consensus.hrl\").\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-record(state, {\n\tstore_id,\n\tpacking,\n\tmodule_start,\n\tmodule_end,\n\tcursor,\n\tprepare_status = undefined\n}).\n\n-ifdef(AR_TEST).\n-define(DEVICE_LOCK_WAIT, 100).\n-else.\n-define(DEVICE_LOCK_WAIT, 5_000).\n-endif.\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Start the server.\nstart_link(Name, {StoreID, Packing}) ->\n\tgen_server:start_link({local, Name}, ?MODULE, {StoreID, Packing}, []).\n\n%% @doc Return the name of the server serving the given StoreID.\nname(StoreID) ->\n\tlist_to_atom(\"ar_entropy_gen_\" ++ ar_storage_module:label(StoreID)).\n\nregister_workers(Module) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tConfiguredWorkers = lists:filtermap(\n\t\tfun(StorageModule) ->\n\t\t\t\tStoreID = ar_storage_module:id(StorageModule),\n\t\t\t\tPacking = ar_storage_module:get_packing(StoreID),\n\n\t\t\t\tcase is_entropy_packing(Packing) of\n\t\t\t\t\t true ->\n\t\t\t\t\t\tWorker = ?CHILD_WITH_ARGS(\n\t\t\t\t\t\t\tModule, worker, Module:name(StoreID),\n\t\t\t\t\t\t\t[Module:name(StoreID), {StoreID, Packing}]),\n\t\t\t\t\t\t{true, Worker};\n\t\t\t\t\t false ->\n\t\t\t\t\t\tfalse\n\t\t\t\tend\n\t\tend,\n\t\tConfig#config.storage_modules\n\t),\n\t \n\tRepackInPlaceWorkers = lists:filtermap(\n\t\tfun({StorageModule, ToPacking}) ->\n\t\t\t\tStoreID = ar_storage_module:id(StorageModule),\n\t\t\t\tConfiguredPacking = ar_storage_module:get_packing(StorageModule),\n\t\t\t\t%% Note: the config validation will prevent a StoreID from being used in both\n\t\t\t\t%% `storage_modules` and `repack_in_place_storage_modules`, so there's\n\t\t\t\t%% no risk of a `Name` clash with the workers spawned above.\n\t\t\t\tIsEntropyPacking = (\n\t\t\t\t\tis_entropy_packing(ConfiguredPacking) orelse is_entropy_packing(ToPacking)\n\t\t\t\t),\n\t\t\t\tcase IsEntropyPacking of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tWorker = ?CHILD_WITH_ARGS(\n\t\t\t\t\t\t\tModule, worker, Module:name(StoreID),\n\t\t\t\t\t\t\t[Module:name(StoreID), {StoreID, ToPacking}]),\n\t\t\t\t\t\t{true, Worker};\n\t\t\t\t\t false ->\n\t\t\t\t\t\tfalse\n\t\t\t\tend\n\t\tend,\n\t\tConfig#config.repack_in_place_storage_modules\n\t),\n\n\tConfiguredWorkers ++ RepackInPlaceWorkers.\n\n-spec initialize_context(ar_storage_module:store_id(), ar_chunk_storage:packing()) ->\n\t {IsPrepared :: boolean(), RewardAddr :: none | ar_wallet:address()}.\ninitialize_context(StoreID, Packing) ->\n\tcase Packing of\n\t\t{replica_2_9, Addr} ->\n\t\t\t{ModuleStart, ModuleEnd} = ar_storage_module:get_range(StoreID),\n\t\t\tCursor = read_cursor(StoreID, ModuleStart),\n\t\t\tcase Cursor =< ModuleEnd of\n\t\t\t\ttrue ->\n\t\t\t\t\t{false, Addr};\n\t\t\t\tfalse ->\n\t\t\t\t\t{true, Addr}\n\t\t\tend;\n\t\t_ ->\n\t\t\t{true, none}\n\tend.\n\n-spec is_entropy_packing(ar_chunk_storage:packing()) -> boolean().\nis_entropy_packing(unpacked_padded) 
->\n\ttrue;\nis_entropy_packing({replica_2_9, _}) ->\n\ttrue;\nis_entropy_packing(_) ->\n\tfalse.\n\n%% @doc Return a list of all BucketEndOffsets covered by the entropy needed to encipher\n%% the chunk at the given offset. The list returned may include offsets that occur before\n%% the provided offset. This is expected if Offset does not refer to a sector 0 chunk.\n-spec entropy_offsets(non_neg_integer(), non_neg_integer()) -> [non_neg_integer()].\nentropy_offsets(Offset, ModuleEnd) ->\n\tBucketEndOffset = ar_chunk_storage:get_chunk_bucket_end(Offset),\n\tBucketEndOffset2 = reset_entropy_offset(BucketEndOffset),\n\tPartition = ar_replica_2_9:get_entropy_partition(BucketEndOffset),\n\t{_, EntropyPartitionEnd} = ar_replica_2_9:get_entropy_partition_range(Partition),\n\tEnd = min(EntropyPartitionEnd, ModuleEnd),\n\tentropy_offsets2(BucketEndOffset2, End).\n\nentropy_offsets2(BucketEndOffset, PaddedPartitionEnd)\n\twhen BucketEndOffset > PaddedPartitionEnd ->\n\t[];\nentropy_offsets2(BucketEndOffset, PaddedPartitionEnd) ->\n\tNextOffset = shift_entropy_offset(BucketEndOffset, 1),\n\t[BucketEndOffset | entropy_offsets2(NextOffset, PaddedPartitionEnd)].\n\n%% @doc If we are not at the beginning of the entropy, shift the offset to\n%% the left. store_entropy_footprint will traverse the entire 2.9 partition shifting\n%% the offset by sector size.\nreset_entropy_offset(BucketEndOffset) ->\n\t%% Sanity checks\n\tBucketEndOffset = ar_chunk_storage:get_chunk_bucket_end(BucketEndOffset),\n\t%% End sanity checks\n\tSliceIndex = ar_replica_2_9:get_slice_index(BucketEndOffset),\n\tshift_entropy_offset(BucketEndOffset, -SliceIndex).\n\nshift_entropy_offset(Offset, SectorCount) ->\n\tSectorSize = ar_block:get_replica_2_9_entropy_sector_size(),\n\tar_chunk_storage:get_chunk_bucket_end(Offset + SectorSize * SectorCount).\n\n%% @doc Returns a list of 32x 8 MiB entropies. These entropies will need to be sliced\n%% and recombined before they can be used. When properly recombined they contain enough\n%% entropy to cover 1024 chunks. 
The chunks covered (aka the \"footprint\") are distributed\n%% throughout the partition.\n-spec generate_entropies(StoreID :: ar_storage_module:store_id(),\n\t\t\t\t\t\t RewardAddr :: ar_wallet:address(),\n\t\t\t\t\t\t BucketEndOffset :: non_neg_integer(),\n\t\t\t\t\t\t ReplyTo :: pid()) ->\n\t\t\t\t\t\t\tok.\ngenerate_entropies(StoreID, RewardAddr, BucketEndOffset, ReplyTo) ->\n\tgen_server:cast(name(StoreID), {generate_entropies, RewardAddr, BucketEndOffset, ReplyTo}).\n\n-spec generate_entropies(RewardAddr :: ar_wallet:address(),\n\t\t\t\t\t\t BucketEndOffset :: non_neg_integer()) ->\n\t   \t\t\t\t\t\t[binary()] | {error, term()}.\ngenerate_entropies(RewardAddr, BucketEndOffset) ->\n\tgenerate_entropies(RewardAddr, BucketEndOffset, true).\n\n-spec generate_entropies(RewardAddr :: ar_wallet:address(),\n\tBucketEndOffset :: non_neg_integer(),\n\tCacheEntropy :: boolean()) ->\n\t   [binary()] | {error, term()}.\ngenerate_entropies(RewardAddr, BucketEndOffset, CacheEntropy) ->\n\tprometheus_histogram:observe_duration(replica_2_9_entropy_duration_milliseconds, [], \n\t\tfun() ->\n\t\t\tdo_generate_entropies(RewardAddr, BucketEndOffset, CacheEntropy)\n\t\tend).\n\nmap_entropies(_Entropies,\n\t\t\t[],\n\t\t\t_RangeStart,\n\t\t\t_Keys,\n\t\t\t_RewardAddr,\n\t\t\t_Fun,\n\t\t\t_Args,\n\t\t\tAcc) ->\n\t%% The amount of entropy generated per partition is slightly more than the amount needed.\n\t%% So at the end of a partition we will have finished processing chunks, but still have\n\t%% some entropy left. In this case we stop the recursion early and wait for the writes\n\t%% to complete.\n\tAcc;\nmap_entropies(Entropies,\n\t\t\t[BucketEndOffset | EntropyOffsets],\n\t\t\tRangeStart,\n\t\t\tKeys,\n\t\t\tRewardAddr,\n\t\t\tFun,\n\t\t\tArgs,\n\t\t\tAcc) ->\n\t\n\tcase take_and_combine_entropy_slices(Entropies) of\n\t\t{ChunkEntropy, Rest} ->\n\t\t\t%% Sanity checks\n\t\t\tsanity_check_replica_2_9_entropy_keys(BucketEndOffset, RewardAddr, Keys),\n\t\t\t%% End sanity checks\n\n\t\t\tAcc2 = case BucketEndOffset > RangeStart of\n\t\t\t\ttrue ->\n\t\t\t\t\terlang:apply(Fun,\n\t\t\t\t\t\t[ChunkEntropy, BucketEndOffset, RewardAddr] ++ Args ++ [Acc]);\n\t\t\t\tfalse ->\n\t\t\t\t\t%% Don't write entropy before the start of the range.\n\t\t\t\t\tAcc\n\t\t\tend,\n\n\t\t\t%% Jump to the next sector covered by this entropy.\n\t\t\tmap_entropies(\n\t\t\t\tRest,\n\t\t\t\tEntropyOffsets,\n\t\t\t\tRangeStart,\n\t\t\t\tKeys,\n\t\t\t\tRewardAddr,\n\t\t\t\tFun,\n\t\t\t\tArgs,\n\t\t\t\tAcc2)\n\tend.\n\n\ninit({StoreID, Packing}) ->\n\t?LOG_INFO([{event, ar_entropy_gen_init},\n\t\t{name, name(StoreID)}, {store_id, StoreID},\n\t\t{packing, ar_serialize:encode_packing(Packing, true)}]),\n\n\tConfiguredPacking = ar_storage_module:get_packing(StoreID),\n\t%% Sanity checks\n\ttrue = is_entropy_packing(ConfiguredPacking) orelse is_entropy_packing(Packing),\n\t%% End sanity checks\n\n\t{ModuleStart, ModuleEnd} = ar_storage_module:get_range(StoreID),\n\tPaddedRangeEnd = ar_chunk_storage:get_chunk_bucket_end(ModuleEnd),\n\n\t%% Provided Packing will only differ from the StoreID packing when this\n\t%% module is configured to repack in place.\n\tIsRepackInPlace = Packing /= ConfiguredPacking,\n\tState = case IsRepackInPlace of\n\t\ttrue ->\n\t\t\t#state{};\n\t\tfalse ->\n\t\t\t%% Only kick off the prepare entropy process if we're not repacking in place.\n\t\t\tCursor = read_cursor(StoreID, ModuleStart),\n\t\t\t?LOG_INFO([{event, read_prepare_replica_2_9_cursor}, {store_id, StoreID},\n\t\t\t\t\t{cursor, Cursor}, {module_start, 
ModuleStart},\n\t\t\t\t\t{module_end, ModuleEnd}, {padded_range_end, PaddedRangeEnd}]),\n\t\t\tPrepareStatus = \n\t\t\t\tcase initialize_context(StoreID, Packing) of\n\t\t\t\t\t{_IsPrepared, none} ->\n\t\t\t\t\t\t%% ar_entropy_gen is only used for replica_2_9 packing\n\t\t\t\t\t\t?LOG_ERROR([{event, invalid_packing_for_entropy}, {module, ?MODULE},\n\t\t\t\t\t\t\t{store_id, StoreID},\n\t\t\t\t\t\t\t{packing, ar_serialize:encode_packing(Packing, true)}]),\n\t\t\t\t\t\toff;\n\t\t\t\t\t{false, _} ->\n\t\t\t\t\t\tgen_server:cast(self(), prepare_entropy),\n\t\t\t\t\t\tpaused;\n\t\t\t\t\t{true, _} ->\n\t\t\t\t\t\t%% Entropy generation is complete\n\t\t\t\t\t\tcomplete\n\t\t\t\tend,\n\t\t\tar_device_lock:set_device_lock_metric(StoreID, prepare, PrepareStatus),\n\t\t\t#state{\n\t\t\t\tcursor = Cursor,\n\t\t\t\tprepare_status = PrepareStatus\n\t\t\t}\n\tend,\n\n\tState2 = State#state{\n\t\tstore_id = StoreID,\n\t\tpacking = Packing, \n\t\tmodule_start = ModuleStart,\n\t\tmodule_end = PaddedRangeEnd\n\t},\n\n\t{ok, State2}.\n\nhandle_cast(prepare_entropy, State) ->\n\t#state{ store_id = StoreID } = State,\n\tNewStatus = ar_device_lock:acquire_lock(prepare, StoreID, State#state.prepare_status),\n\tState2 = State#state{ prepare_status = NewStatus },\n\tState3 = case NewStatus of\n\t\tactive ->\n\t\t\tdo_prepare_entropy(State2);\n\t\tpaused ->\n\t\t\tar_util:cast_after(?DEVICE_LOCK_WAIT, self(), prepare_entropy),\n\t\t\tState2;\n\t\t_ ->\n\t\t\tState2\n\tend,\n\t{noreply, State3};\n\nhandle_cast({generate_entropies, RewardAddr, BucketEndOffset, ReplyTo}, State) ->\n\tEntropies = generate_entropies(RewardAddr, BucketEndOffset),\n\tReplyTo ! {entropy, BucketEndOffset, RewardAddr, Entropies},\n\t{noreply, State};\n\nhandle_cast(Cast, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {cast, Cast}]),\n\t{noreply, State}.\n\nhandle_call(Call, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {call, Call}]),\n\t{reply, {error, unhandled_call}, State}.\n\nhandle_info({entropy_generated, _Ref, _Entropy}, State) ->\n\t?LOG_WARNING([{event, entropy_generation_timed_out}]),\n\t{noreply, State};\n\nhandle_info(Info, State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {module, ?MODULE}, {info, Info}]),\n\t{noreply, State}.\n\nterminate(Reason, State) ->\n\t?LOG_INFO([{event, terminate},\n\t\t\t\t{module, ?MODULE},\n\t\t\t\t{reason, Reason},\n\t\t\t\t{name, name(State#state.store_id)},\n\t\t\t\t{store_id, State#state.store_id}]),\n\tok.\n\ndo_prepare_entropy(State) ->\n\t#state{ \n\t\tcursor = Start, module_start = ModuleStart, module_end = ModuleEnd,\n\t\tpacking = Packing,\n\t\tstore_id = StoreID\n\t} = State,\n\n\t{replica_2_9, RewardAddr} = Packing,\n\tBucketEndOffset = ar_chunk_storage:get_chunk_bucket_end(Start),\n\n\t%% Sanity checks:\n\tBucketEndOffset = ar_chunk_storage:get_chunk_bucket_end(BucketEndOffset),\n\ttrue = (\n\t\tar_chunk_storage:get_chunk_bucket_start(Start) ==\n\t\tar_chunk_storage:get_chunk_bucket_start(BucketEndOffset)\n\t),\n\ttrue = (\n\t\tmax(0, BucketEndOffset - ?DATA_CHUNK_SIZE) == \n\t\tar_chunk_storage:get_chunk_bucket_start(BucketEndOffset)\n\t),\n\t%% End of sanity checks.\n\n\t%% Make sure all prior entropy writes are complete.\n\tar_entropy_storage:is_ready(StoreID),\n\n\tCheckRangeEnd =\n\t\tcase BucketEndOffset > ModuleEnd of\n\t\t\ttrue ->\n\t\t\t\tar_device_lock:release_lock(prepare, StoreID),\n\t\t\t\t?LOG_INFO([{event, storage_module_entropy_preparation_complete},\n\t\t\t\t\t\t{store_id, StoreID}]),\n\t\t\t\tar:console(\"The 
storage module ~s is prepared for 2.9 replication.~n\",\n\t\t\t\t\t\t[StoreID]),\n\t\t\t\tar_chunk_storage:set_entropy_complete(StoreID),\n\t\t\t\tcomplete;\n\t\t\tfalse ->\n\t\t\t\tfalse\n\t\tend,\n\n\tCheckIsRecorded =\n\t\tcase CheckRangeEnd of\n\t\t\tcomplete ->\n\t\t\t\tcomplete;\n\t\t\tfalse ->\n\t\t\t\tar_entropy_storage:is_entropy_recorded(BucketEndOffset, Packing, StoreID)\n\t\tend,\n\n\tStoreEntropy =\n\t\tcase CheckIsRecorded of\n\t\t\tcomplete ->\n\t\t\t\tcomplete;\n\t\t\ttrue ->\n\t\t\t\tis_recorded;\n\t\t\tfalse ->\n\t\t\t\t%% Get all the entropies needed to encipher the chunk at BucketEndOffset.\n\t\t\t\tEntropies = generate_entropies(RewardAddr, BucketEndOffset, false),\n\t\t\t\tcase Entropies of\n\t\t\t\t\t{error, Reason} ->\n\t\t\t\t\t\t{error, Reason};\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tEntropyKeys = generate_entropy_keys(RewardAddr, BucketEndOffset),\n\t\t\t\t\t\tEntropyOffsets = entropy_offsets(BucketEndOffset, ModuleEnd),\n\t\t\t\t\t\tar_entropy_storage:store_entropy_footprint(\n\t\t\t\t\t\t\tStoreID, Entropies, EntropyOffsets,\n\t\t\t\t\t\t\tModuleStart, EntropyKeys, RewardAddr)\n\t\t\t\tend\n\t\tend,\n\tNextCursor = advance_entropy_offset(BucketEndOffset, Packing, StoreID),\n\tcase StoreEntropy of\n\t\tcomplete ->\n\t\t\tar_device_lock:set_device_lock_metric(StoreID, prepare, complete),\n\t\t\tState#state{ prepare_status = complete };\n\t\tis_recorded ->\n\t\t\tgen_server:cast(self(), prepare_entropy),\n\t\t\tState#state{ cursor = NextCursor };\n\t\t{error, Error} ->\n\t\t\t?LOG_WARNING([{event, failed_to_store_entropy},\n\t\t\t\t\t{cursor, Start},\n\t\t\t\t\t{store_id, StoreID},\n\t\t\t\t\t{reason, io_lib:format(\"~p\", [Error])}]),\n\t\t\tar_util:cast_after(500, self(), prepare_entropy),\n\t\t\tState;\n\t\tok ->\n\t\t\tgen_server:cast(self(), prepare_entropy),\n\t\t\tcase store_cursor(NextCursor, StoreID) of\n\t\t\t\tok ->\n\t\t\t\t\tok;\n\t\t\t\t{error, Error} ->\n\t\t\t\t\t?LOG_WARNING([{event, failed_to_store_prepare_entropy_cursor},\n\t\t\t\t\t\t\t{chunk_cursor, NextCursor},\n\t\t\t\t\t\t\t{store_id, StoreID},\n\t\t\t\t\t\t\t{reason, io_lib:format(\"~p\", [Error])}])\n\t\t\tend,\n\t\t\tState#state{ cursor = NextCursor }\n\tend.\n\n\n\ndo_generate_entropies(RewardAddr, BucketEndOffset, CacheEntropy) ->\n\tSubChunkSize = ?COMPOSITE_PACKING_SUB_CHUNK_SIZE,\n\tEntropyTasks =\n\t\tlists:map(\n\t\t\tfun(Offset) ->\n\t\t\t\tRef = make_ref(),\n\t\t\t\tar_packing_server:request_entropy_generation(\n\t\t\t\t\tRef, self(), {RewardAddr, BucketEndOffset, Offset, CacheEntropy}),\n\t\t\t\tRef\n\t\t\tend,\n\t\t\tlists:seq(0, ?DATA_CHUNK_SIZE - SubChunkSize, SubChunkSize)),\n\tEntropies = collect_entropies(EntropyTasks, []),\n\tcase Entropies of\n\t\t{error, _Reason} ->\n\t\t\tflush_entropy_messages();\n\t\t_ ->\n\t\t\tok\n\tend,\n\tEntropies.\n\n%% @doc Take the first slice of each entropy and combine into a single binary. 
This binary\n%% can be used to encipher a single chunk.\n-spec take_and_combine_entropy_slices(Entropies :: [binary()]) ->\n\t\t\t\t\t\t\t\t\t\t {ChunkEntropy :: binary(),\n\t\t\t\t\t\t\t\t\t\t  RemainingSlicesOfEachEntropy :: [binary()]}.\ntake_and_combine_entropy_slices(Entropies) ->\n\ttrue = ?COMPOSITE_PACKING_SUB_CHUNK_COUNT == length(Entropies),\n\ttake_and_combine_entropy_slices(Entropies, [], []).\n\ntake_and_combine_entropy_slices([], Acc, RestAcc) ->\n\t{iolist_to_binary(Acc), lists:reverse(RestAcc)};\ntake_and_combine_entropy_slices([<<>> | Entropies], _Acc, _RestAcc) ->\n\ttrue = lists:all(fun(Entropy) -> Entropy == <<>> end, Entropies),\n\t{<<>>, []};\ntake_and_combine_entropy_slices([<<EntropySlice:?COMPOSITE_PACKING_SUB_CHUNK_SIZE/binary,\n\t\t\t\t\t\t\t\t   Rest/binary>>\n\t\t\t\t\t\t\t\t | Entropies],\n\t\t\t\t\t\t\t\tAcc,\n\t\t\t\t\t\t\t\tRestAcc) ->\n\ttake_and_combine_entropy_slices(Entropies, [Acc, EntropySlice], [Rest | RestAcc]).\n\nsanity_check_replica_2_9_entropy_keys(PaddedEndOffset, RewardAddr, Keys) ->\n\tsanity_check_replica_2_9_entropy_keys(PaddedEndOffset, RewardAddr, 0, Keys).\n\nsanity_check_replica_2_9_entropy_keys(\n\t\t_PaddedEndOffset, _RewardAddr, _SubChunkStartOffset, []) ->\n\tok;\nsanity_check_replica_2_9_entropy_keys(\n\t\tPaddedEndOffset, RewardAddr, SubChunkStartOffset, [Key | Keys]) ->\n\t\tKey = ar_replica_2_9:get_entropy_key(RewardAddr, PaddedEndOffset, SubChunkStartOffset),\n\tSubChunkSize = ?COMPOSITE_PACKING_SUB_CHUNK_SIZE,\n\tsanity_check_replica_2_9_entropy_keys(PaddedEndOffset,\n\t\t\t\t\t\t\t\t\t\tRewardAddr,\n\t\t\t\t\t\t\t\t\t\tSubChunkStartOffset + SubChunkSize,\n\t\t\t\t\t\t\t\t\t\tKeys).\n\nadvance_entropy_offset(BucketEndOffset, Packing, StoreID) ->\n\tcase ar_entropy_storage:get_next_unsynced_interval(BucketEndOffset, Packing, StoreID) of\n\t\tnot_found ->\n\t\t\tBucketEndOffset + ?DATA_CHUNK_SIZE;\n\t\t{_, Start} ->\n\t\t\tStart + ?DATA_CHUNK_SIZE\n\tend.\n\ngenerate_entropy_keys(RewardAddr, Offset) ->\n\tgenerate_entropy_keys(RewardAddr, Offset, 0).\n\ngenerate_entropy_keys(_RewardAddr, _Offset, SubChunkStart)\n\twhen SubChunkStart == ?DATA_CHUNK_SIZE ->\n\t[];\ngenerate_entropy_keys(RewardAddr, Offset, SubChunkStart) ->\n\tSubChunkSize = ?COMPOSITE_PACKING_SUB_CHUNK_SIZE,\n\t[ar_replica_2_9:get_entropy_key(RewardAddr, Offset, SubChunkStart)\n\t | generate_entropy_keys(RewardAddr, Offset, SubChunkStart + SubChunkSize)].\n\ncollect_entropies([], Acc) ->\n\tlists:reverse(Acc);\ncollect_entropies([Ref | Rest], Acc) ->\n\treceive\n\t\t{entropy_generated, Ref, Entropy} ->\n\t\t\tcollect_entropies(Rest, [Entropy | Acc])\n\tafter 600_000 ->\n\t\t?LOG_ERROR([{event, entropy_generation_timeout}, {ref, Ref}]),\n\t\t{error, timeout}\n\tend.\n\nflush_entropy_messages() ->\n\t?LOG_INFO([{event, flush_entropy_messages}]),\n\treceive\n\t\t{entropy_generated, _, _} ->\n\t\t\tflush_entropy_messages()\n\tafter 0 ->\n\t\tok\n\tend.\n\nread_cursor(StoreID, ModuleStart) ->\n\tFilepath = ar_chunk_storage:get_filepath(\"prepare_replica_2_9_cursor\", StoreID),\n\tDefault = ModuleStart + 1,\n\tcase file:read_file(Filepath) of\n\t\t{ok, Bin} ->\n\t\tcase catch binary_to_term(Bin, [safe]) of\n\t\t\tCursor when is_integer(Cursor) ->\n\t\t\t\tCursor;\n\t\t\t\t_ ->\n\t\t\t\t\tDefault\n\t\t\tend;\n\t\t_ ->\n\t\t\tDefault\n\tend.\n\nstore_cursor(Cursor, StoreID) ->\n\tFilepath = ar_chunk_storage:get_filepath(\"prepare_replica_2_9_cursor\", StoreID),\n\tfile:write_file(Filepath, 
term_to_binary(Cursor)).\n\n%%%===================================================================\n%%% Tests.\n%%%===================================================================\n\nentropy_offsets_test_() ->\n\tar_test_node:test_with_mocked_functions([\n\t\t{ar_block, strict_data_split_threshold, fun() -> 700_000 end}\n\t],\n\tfun test_entropy_offsets/0, 30).\n\ntest_entropy_offsets() ->\n\tSectorSize = ar_block:get_replica_2_9_entropy_sector_size(),\n\t?assertEqual(2 * ?DATA_CHUNK_SIZE, SectorSize),\n\n\tModule0 = {ar_block:partition_size(), 0, unpacked},\n\tModule1 = {ar_block:partition_size(), 1, unpacked},\n\n\t{_ModuleStart0, ModuleEnd0} = ar_storage_module:module_range(Module0),\n\t{_ModuleStart1, ModuleEnd1} = ar_storage_module:module_range(Module1),\n\t\n\tPaddedModuleEnd0 = ar_chunk_storage:get_chunk_bucket_end(ModuleEnd0),\n\tPaddedModuleEnd1 = ar_chunk_storage:get_chunk_bucket_end(ModuleEnd1),\n\n\t?assertEqual(2097152, PaddedModuleEnd0, \"1\"),\n\t?assertEqual(4194304, PaddedModuleEnd1, \"2\"),\n\t\n\t?assertEqual([262144, 786432, 1310720, 1835008], entropy_offsets(0, PaddedModuleEnd0), \"3\"), %% bucket end: 262144\n\t?assertEqual([262144, 786432, 1310720, 1835008], entropy_offsets(1000, PaddedModuleEnd0), \"4\"), %% bucket end: 262144\n\t?assertEqual([262144, 786432, 1310720, 1835008], entropy_offsets(262144, PaddedModuleEnd0), \"5\"), %% bucket end: 262144\n\n\t?assertEqual([524288, 1048576, 1572864, 2097152], entropy_offsets(524288, PaddedModuleEnd0), \"6\"), %% bucket end: 524288\n\n\t?assertEqual([524288, 1048576, 1572864, 2097152], entropy_offsets(699999, PaddedModuleEnd0), \"7\"), %% bucket end: 524288\n\t?assertEqual([524288, 1048576, 1572864, 2097152], entropy_offsets(700000, PaddedModuleEnd0), \"8\"), %% bucket end: 524288\n\t?assertEqual([262144, 786432, 1310720, 1835008], entropy_offsets(700001, PaddedModuleEnd0), \"9\"), %% bucket end: 786432\n\n\t?assertEqual([262144, 786432, 1310720, 1835008], entropy_offsets(786432, PaddedModuleEnd0), \"10\"), %% bucket end: 786432\n\t?assertEqual([262144, 786432, 1310720, 1835008], entropy_offsets(786433, PaddedModuleEnd0), \"11\"), %% bucket end: 786432\n\t?assertEqual([524288, 1048576, 1572864, 2097152], entropy_offsets(1048576, PaddedModuleEnd0), \"12\"), %% bucket end: 1048576\n\t?assertEqual([262144, 786432, 1310720, 1835008], entropy_offsets(1835007, PaddedModuleEnd0), \"13\"), %% bucket end: 1835008\n\t?assertEqual([262144, 786432, 1310720, 1835008], entropy_offsets(1835008, PaddedModuleEnd0), \"14\"), %% bucket end: 1835008\n\t?assertEqual([262144, 786432, 1310720, 1835008], entropy_offsets(1835009, PaddedModuleEnd0), \"15\"), %% bucket end: 1835008\n\n\t%% entropy partition is determined by the bucket *start* offset. So offsets that are in \n\t%% recall partition 1 may still be in entropy partition 0 (e.g. 
2000001, 2097152)\n\t?assertEqual([262144, 786432, 1310720, 1835008], entropy_offsets(1999999, PaddedModuleEnd0), \"16\"), %% bucket end: 1835008\n\t?assertEqual([262144, 786432, 1310720, 1835008], entropy_offsets(2000000, PaddedModuleEnd0), \"17\"), %% bucket end: 1835008\n\t?assertEqual([262144, 786432, 1310720, 1835008], entropy_offsets(2000001, PaddedModuleEnd0), \"18\"), %% bucket end: 1835008\n\t?assertEqual([524288, 1048576, 1572864, 2097152], entropy_offsets(2097152, PaddedModuleEnd0), \"19\"), %% bucket end: 2097152\n\t?assertEqual([524288, 1048576, 1572864, 2097152], entropy_offsets(2097153, PaddedModuleEnd0), \"20\"), %% bucket end: 2097152\n\n\t%% Even when ModuleEnd is high, we should limit entropy to the current entropy partition.\n\t?assertEqual([524288, 1048576, 1572864, 2097152], entropy_offsets(2097152, PaddedModuleEnd1), \"21\"), %% bucket end: 2097152\n\n\t%% Retstrict offsets to module end.\n\t?assertEqual([524288, 1048576, 1572864], entropy_offsets(2097152, 2_000_000), \"22\"), %% bucket end: 2097152\n\n\t%% Entropy partition 1\n\t?assertEqual([2359296, 2883584, 3407872, 3932160], entropy_offsets(2359297, PaddedModuleEnd1), \"23\"), %% bucket end: 2359296\n\t?assertEqual([2621440, 3145728, 3670016, 4194304], entropy_offsets(2621441, PaddedModuleEnd1), \"24\"), %% bucket end: 2621440\n\n\tok.\n"
  },
  {
    "path": "apps/arweave/src/ar_entropy_storage.erl",
    "content": "-module(ar_entropy_storage).\n\n-behaviour(gen_server).\n\n-export([name/1, acquire_semaphore/1, release_semaphore/1, is_ready/1,\n\tsync_record_id/0, is_entropy_recorded/3, get_next_unsynced_interval/3,\n\tadd_record/3, add_record_async/4, delete_record/2, delete_record/3,\n\tstore_entropy_footprint/6, store_entropy/4, record_chunk/5]).\n\n-export([start_link/2, init/1, handle_cast/2, handle_call/3, handle_info/2, terminate/2]).\n\n-include(\"ar.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-record(state, {\n\tstore_id\n}).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Start the server.\nstart_link(Name, {StoreID, _}) ->\n\tgen_server:start_link({local, Name}, ?MODULE, StoreID, []).\n\n%% @doc Return the name of the server serving the given StoreID.\nname(StoreID) ->\n\tlist_to_atom(\"ar_entropy_storage_\" ++ ar_storage_module:label(StoreID)).\n\ninit(StoreID) ->\n\t?LOG_INFO([{event, ar_entropy_storage_init}, {name, name(StoreID)}, {store_id, StoreID}]),\n\t{ok, #state{ store_id = StoreID }}.\n\nsync_record_id() ->\n\tar_chunk_storage_replica_2_9_5_entropy.\n\n%% @doc Write all of the entropies in a full 256 MiB entropy footprint to disk.\n-spec store_entropy_footprint(\t\n\tStoreID :: ar_storage_module:store_id(),\n\tEntropies :: [binary()],\n\tEntropyOffsets :: [non_neg_integer()],\n\tRangeStart :: non_neg_integer(),\n\tKeys :: [binary()],\n\tRewardAddr :: ar_wallet:address()) -> ok.\nstore_entropy_footprint(\n\tStoreID, Entropies, EntropyOffsets, RangeStart, Keys, RewardAddr) ->\n\tgen_server:cast(name(StoreID), {store_entropy_footprint,\n\t\tEntropies, EntropyOffsets, RangeStart, Keys, RewardAddr}).\n\nstore_entropy(ChunkEntropy, BucketEndOffset, StoreID, RewardAddr) ->\n\tcase catch gen_server:call(\n\t\t\tname(StoreID), \n\t\t\t{store_entropy, ChunkEntropy, BucketEndOffset, StoreID, RewardAddr},\n\t\t\t?DEFAULT_CALL_TIMEOUT) of\n\t\t{'EXIT', {Reason, {gen_server, call, _}}} ->\n\t\t\t?LOG_WARNING([{event, store_entropy}, {module, ?MODULE},\n\t\t\t\t{name, name(StoreID)}, {store_id, StoreID},\n\t\t\t\t{bucket_end_offset, BucketEndOffset}, {reason, Reason}]),\n\t\t\tfalse;\n\t\tReply ->\n\t\t\tReply\n\tend.\n\nis_ready(StoreID) ->\n\tcase catch gen_server:call(name(StoreID), is_ready, ?DEFAULT_CALL_TIMEOUT) of\n\t\t{'EXIT', {Reason, {gen_server, call, _}}} ->\n\t\t\t?LOG_WARNING([{event, is_ready_error}, {module, ?MODULE},\n\t\t\t\t{name, name(StoreID)}, {store_id, StoreID}, {reason, Reason}]),\n\t\t\tfalse;\n\t\tReply ->\n\t\t\tReply\n\tend.\n\nhandle_cast({store_entropy_footprint,\n\t\tEntropies, EntropyOffsets, RangeStart, Keys, RewardAddr}, State) ->\n\t#state{ store_id = StoreID } = State,\n\tStart = erlang:monotonic_time(millisecond),\n\tar_entropy_gen:map_entropies(\n\t\tEntropies,\n\t\tEntropyOffsets,\n\t\tRangeStart,\n\t\tKeys,\n\t\tRewardAddr,\n\t\tfun do_store_entropy/5,\n\t\t[StoreID],\n\t\tok),\n\tEnd = erlang:monotonic_time(millisecond),\n\t?LOG_DEBUG([{event, store_entropy_footprint}, {module, ?MODULE},\n\t\t{name, name(StoreID)}, {offset, hd(EntropyOffsets)},\n\t\t{num_entropies, length(Entropies)}, {duration, End - Start}]),\n\t{noreply, State};\n\nhandle_cast(Cast, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {cast, Cast}]),\n\t{noreply, State}.\n\nhandle_call(is_ready, _From, State) ->\n\t{reply, true, State};\n\n\nhandle_call({store_entropy, ChunkEntropy, BucketEndOffset, StoreID, 
RewardAddr},\n\t\t_From, State) ->\n\t#state{ store_id = StoreID } = State,\n\tdo_store_entropy(ChunkEntropy, BucketEndOffset, RewardAddr, StoreID),\n\t{reply, ok, State};\n\nhandle_call(Call, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {call, Call}]),\n\t{reply, {error, unhandled_call}, State}.\n\nterminate(Reason, State) ->\n\t?LOG_INFO([{event, terminate}, {module, ?MODULE},\n\t\t{reason, Reason}, {name, name(State#state.store_id)},\n\t\t{store_id, State#state.store_id}]),\n\tok.\n\nhandle_info(Info, State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {module, ?MODULE}, {info, Info}]),\n\t{noreply, State}.\n\n%% @doc Return true if the 2.9 entropy with the given offset is recorded.\nis_entropy_recorded(PaddedEndOffset, {replica_2_9, _} = Packing, StoreID) ->\n\tChunkBucketStart = ar_chunk_storage:get_chunk_bucket_start(PaddedEndOffset),\n\tIsRecorded = ar_sync_record:is_recorded(\n\t\tChunkBucketStart + 1, Packing, sync_record_id(), StoreID),\n\tcase IsRecorded of\n\t\tfalse ->\n\t\t\t%% Included for backwards compatibility with entropy written prior to 2.9.5.\n\t\t\tar_sync_record:is_recorded(\n\t\t\t\tChunkBucketStart + 1, ar_chunk_storage_replica_2_9_1_entropy, StoreID);\n\t\t_ ->\n\t\t\ttrue\n\tend;\nis_entropy_recorded(_PaddedEndOffset, _Packing, _StoreID) ->\n\tfalse.\n\nget_next_unsynced_interval(Offset, Packing, StoreID) ->\n\tcase ar_sync_record:get_next_unsynced_interval(\n\t\t\tOffset, infinity, Packing, sync_record_id(), StoreID) of\n\t\tnot_found ->\n\t\t\t%% Included for backwards compatibility with entropy written prior to 2.9.5.\n\t\t\tcase ar_sync_record:get_next_unsynced_interval(\n\t\t\t\t\tOffset, infinity, ar_chunk_storage_replica_2_9_1_entropy, StoreID) of\n\t\t\t\tnot_found ->\n\t\t\t\t\tnot_found;\n\t\t\t\tInterval ->\n\t\t\t\t\tInterval\n\t\t\tend;\n\t\tInterval ->\n\t\t\tInterval\n\tend.\n\nupdate_sync_records(IsComplete, PaddedEndOffset, StoreID, RewardAddr) ->\n\tBucketEnd = ar_chunk_storage:get_chunk_bucket_end(PaddedEndOffset),\n\tadd_record_async(replica_2_9_entropy, BucketEnd, {replica_2_9, RewardAddr}, StoreID),\n\tprometheus_counter:inc(replica_2_9_entropy_stored,\n\t\t[ar_storage_module:label(StoreID)], ?DATA_CHUNK_SIZE),\n\tStartOffset = PaddedEndOffset - ?DATA_CHUNK_SIZE,\n\tcase IsComplete of\n\t\ttrue ->\n\t\t\tPacking = {replica_2_9, RewardAddr},\n\t\t\t\n\t\t\tprometheus_counter:inc(chunks_stored,\n\t\t\t\t[ar_storage_module:packing_label(Packing),\n\t\t\t\tar_storage_module:label(StoreID)]),\n\t\t\tar_sync_record:add_async(replica_2_9_entropy_with_chunk,\n\t\t\t\t\t\t\t\t\t\tPaddedEndOffset,\n\t\t\t\t\t\t\t\t\t\tStartOffset,\n\t\t\t\t\t\t\t\t\t\tar_chunk_storage,\n\t\t\t\t\t\t\t\t\t\tStoreID),\n\t\t\tar_sync_record:add_async(replica_2_9_entropy_with_chunk,\n\t\t\t\t\t\t\t\t\t\tPaddedEndOffset,\n\t\t\t\t\t\t\t\t\t\tStartOffset,\n\t\t\t\t\t\t\t\t\t\t{replica_2_9, RewardAddr},\n\t\t\t\t\t\t\t\t\t\tar_data_sync,\n\t\t\t\t\t\t\t\t\t\tStoreID),\n\t\t\t%% Here we assume we do not store unpadded small chunks (small chunks\n\t\t\t%% before the strict data split threshold), thus ?DATA_CHUNK_SIZE.\n\t\t\tcase ar_data_sync:is_footprint_record_supported(PaddedEndOffset, ?DATA_CHUNK_SIZE, Packing) of\n\t\t\t\ttrue ->\n\t\t\t\t\tar_footprint_record:add_async(replica_2_9_entropy_with_chunk,\n\t\t\t\t\t\t\tPaddedEndOffset, Packing, StoreID);\n\t\t\t\tfalse ->\n\t\t\t\t\tok\n\t\t\tend;\n\t\tfalse ->\n\t\t\tok\n\tend.\n\nadd_record(BucketEndOffset, {replica_2_9, _} = Packing, StoreID) ->\n\tBucketStartOffset = BucketEndOffset - 
?DATA_CHUNK_SIZE,\n\tar_sync_record:add(BucketEndOffset, BucketStartOffset, Packing, sync_record_id(), StoreID).\n\nadd_record_async(Event, BucketEndOffset, {replica_2_9, _} = Packing, StoreID) ->\n\tBucketStartOffset = BucketEndOffset - ?DATA_CHUNK_SIZE,\n\tar_sync_record:add_async(Event,\n\t\tBucketEndOffset, BucketStartOffset, Packing, sync_record_id(), StoreID).\n\ndelete_record(PaddedEndOffset, StoreID) ->\n\tBucketStart = ar_chunk_storage:get_chunk_bucket_start(PaddedEndOffset),\n\tdelete_record(BucketStart + ?DATA_CHUNK_SIZE, BucketStart, StoreID).\n\ndelete_record(EndOffset, StartOffset, StoreID) ->\n\tcase ar_sync_record:delete(EndOffset, StartOffset, sync_record_id(), StoreID) of\n\t\tok ->\n\t\t\t%% Included for backwards compatibility with entropy written prior to 2.9.5.\n\t\t\tar_sync_record:delete(\n\t\t\t\tEndOffset, StartOffset, ar_chunk_storage_replica_2_9_1_entropy, StoreID);\n\t\tError ->\n\t\t\tError\n\tend.\n\t\t\n\ngenerate_missing_entropy(PaddedEndOffset, RewardAddr) ->\n\tEntropies = ar_entropy_gen:generate_entropies(RewardAddr, PaddedEndOffset),\n\tcase Entropies of\n\t\t{error, Reason} ->\n\t\t\t{error, Reason};\n\t\t_ ->\n\t\t\tEntropyIndex = ar_replica_2_9:get_slice_index(PaddedEndOffset),\n\t\t\ttake_combined_entropy_by_index(Entropies, EntropyIndex)\n\tend.\n\nrecord_chunk(\n\t\tPaddedEndOffset, Chunk, StoreID, FileIndex, {IsPrepared, RewardAddr}) ->\n\t%% Sanity checks\n\tPaddedEndOffset = ar_block:get_chunk_padded_offset(PaddedEndOffset),\n\t%% End sanity checks\n\n\tPacking = {replica_2_9, RewardAddr},\n\tStartOffset = ar_chunk_storage:get_chunk_bucket_start(PaddedEndOffset),\n\t{_ChunkFileStart, Filepath, _Position, _ChunkOffset} =\n\t\tar_chunk_storage:locate_chunk_on_disk(PaddedEndOffset, StoreID),\n\tacquire_semaphore(Filepath),\n\tCheckIsChunkStoredAlready =\n\t\tar_sync_record:is_recorded(PaddedEndOffset, ar_chunk_storage, StoreID),\n\tCheckIsEntropyRecorded =\n\t\tcase CheckIsChunkStoredAlready of\n\t\t\ttrue ->\n\t\t\t\t{error, already_stored};\n\t\t\tfalse ->\n\t\t\t\tis_entropy_recorded(PaddedEndOffset, Packing, StoreID)\n\t\tend,\n\tReadEntropy =\n\t\tcase CheckIsEntropyRecorded of\n\t\t\t{error, _} = Error ->\n\t\t\t\tError;\n\t\t\tfalse ->\n\t\t\t\tcase IsPrepared of\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\tno_entropy_yet;\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tmissing_entropy\n\t\t\t\tend;\n\t\t\ttrue ->\n\t\t\t\tar_chunk_storage:get(StartOffset, StartOffset, StoreID)\n\t\tend,\n\tRecordChunk = case ReadEntropy of\n\t\t{error, _} = Error2 ->\n\t\t\tError2;\n\t\tnot_found ->\n\t\t\tdelete_record(PaddedEndOffset, StoreID),\n\t\t\t{error, not_prepared_yet};\n\t\tmissing_entropy ->\n\t\t\t?LOG_WARNING([{event, missing_entropy}, {padded_end_offset, PaddedEndOffset},\n\t\t\t\t{store_id, StoreID}, {packing, ar_serialize:encode_packing(Packing, true)}]),\n\t\t\tEntropy = generate_missing_entropy(PaddedEndOffset, RewardAddr),\n\t\t\tcase Entropy of\n\t\t\t\t{error, Reason} ->\n\t\t\t\t\t{error, Reason};\n\t\t\t\t_ ->\n\t\t\t\t\tPackedChunk = ar_packing_server:encipher_replica_2_9_chunk(Chunk, Entropy),\n\t\t\t\t\tar_chunk_storage:record_chunk(\n\t\t\t\t\t\tPaddedEndOffset, PackedChunk, Packing, StoreID, FileIndex)\n\t\t\tend;\n\t\tno_entropy_yet ->\n\t\t\tar_chunk_storage:record_chunk(\n\t\t\t\tPaddedEndOffset, Chunk, unpacked_padded, StoreID,  FileIndex);\n\t\t{_EndOffset, Entropy} ->\n\t\t\tPackedChunk = ar_packing_server:encipher_replica_2_9_chunk(Chunk, Entropy),\n\t\t\tar_chunk_storage:record_chunk(\n\t\t\t\tPaddedEndOffset, PackedChunk, Packing, StoreID, 
FileIndex)\n\tend,\n\trelease_semaphore(Filepath),\n\tRecordChunk.\n\ndo_store_entropy(ChunkEntropy, BucketEndOffset, RewardAddr, StoreID, ok) ->\n\tdo_store_entropy(ChunkEntropy, BucketEndOffset, RewardAddr, StoreID).\n\ndo_store_entropy(ChunkEntropy, BucketEndOffset, RewardAddr, StoreID) ->\n\t%% Sanity checks\n\ttrue = byte_size(ChunkEntropy) == ?DATA_CHUNK_SIZE,\n\t%% End sanity checks\n\n\tByte = ar_chunk_storage:get_chunk_byte_from_bucket_end(BucketEndOffset),\n\tCheckUnpackedChunkRecorded = ar_sync_record:get_interval(\n\t\tByte + 1, ar_chunk_storage:sync_record_id(unpacked_padded), StoreID),\n\n\t{IsUnpackedChunkRecorded, PaddedEndOffset} =\n\t\tcase CheckUnpackedChunkRecorded of\n\t\t\tnot_found ->\n\t\t\t\t{false, BucketEndOffset};\n\t\t\t{_IntervalEnd, IntervalStart} ->\n\t\t\t\tEndOffset2 = IntervalStart\n\t\t\t\t\t+ ar_util:floor_int(Byte - IntervalStart, ?DATA_CHUNK_SIZE)\n\t\t\t\t\t+ ?DATA_CHUNK_SIZE,\n\t\t\t\tcase ar_chunk_storage:get_chunk_bucket_end(EndOffset2) of\n\t\t\t\t\tBucketEndOffset ->\n\t\t\t\t\t\t{true, EndOffset2};\n\t\t\t\t\t_ ->\n\t\t\t\t\t\t%% This chunk is from a different bucket. It may happen near the\n\t\t\t\t\t\t%% strict data split threshold where there is no single byte\n\t\t\t\t\t\t%% unambiguously determining the bucket the chunk will be routed to.\n\t\t\t\t\t\t?LOG_INFO([{event, record_entropy_read_chunk_from_another_bucket},\n\t\t\t\t\t\t\t\t{bucket_end_offset, BucketEndOffset},\n\t\t\t\t\t\t\t\t{chunk_end_offset, EndOffset2}]),\n\t\t\t\t\t\t{false, BucketEndOffset}\n\t\t\t\tend\n\t\tend,\n\n\t{ChunkFileStart, Filepath, _Position, _ChunkOffset} =\n\t\tar_chunk_storage:locate_chunk_on_disk(PaddedEndOffset, StoreID),\n\n\t%% We allow generating and filling in the 2.9 entropy and storing unpacked chunks (to\n\t%% be enciphered later) asynchronously. 
Whatever comes first, is stored.\n\t%% If the other counterpart is stored already, we read it, encipher and store the\n\t%% packed chunk.\n\tacquire_semaphore(Filepath),\n\n\tChunk = case IsUnpackedChunkRecorded of\n\t\ttrue ->\n\t\t\tStartOffset = PaddedEndOffset - ?DATA_CHUNK_SIZE,\n\t\t\tcase ar_chunk_storage:get(Byte, StartOffset, StoreID) of\n\t\t\t\tnot_found ->\n\t\t\t\t\t{error, not_found};\n\t\t\t\t{error, _} = Error ->\n\t\t\t\t\tError;\n\t\t\t\t{_, UnpackedChunk} ->\n\t\t\t\t\tar_sync_record:delete(PaddedEndOffset, StartOffset, ar_data_sync, StoreID),\n\t\t\t\t\tar_footprint_record:delete(PaddedEndOffset, StoreID),\n\t\t\t\t\tar_packing_server:encipher_replica_2_9_chunk(UnpackedChunk, ChunkEntropy)\n\t\t\tend;\n\t\tfalse ->\n\t\t\t%% The entropy for the first sub-chunk of the chunk.\n\t\t\t%% The zero-offset does not have a real meaning, it is set\n\t\t\t%% to make sure we pass offset validation on read.\n\t\t\tChunkEntropy\n\tend,\n\n\tResult = case Chunk of\n\t\t{error, _} = Error2 ->\n\t\t\tError2;\n\t\t_ ->\n\t\t\tWriteChunkResult = ar_chunk_storage:write_chunk(\n\t\t\t\tPaddedEndOffset, Chunk, #{}, StoreID),\n\t\t\tcase WriteChunkResult of\n\t\t\t\t{ok, Filepath} ->\n\t\t\t\t\tets:insert(chunk_storage_file_index,\n\t\t\t\t\t\t{{ChunkFileStart, StoreID}, Filepath}),\n\t\t\t\t\tupdate_sync_records(\n\t\t\t\t\t\tIsUnpackedChunkRecorded, PaddedEndOffset, StoreID, RewardAddr);\n\t\t\t\tError2 ->\n\t\t\t\t\tError2\n\t\t\tend\n\tend,\n\n\tcase Result of\n\t\t{error, Reason} ->\n\t\t\t?LOG_ERROR([{event, failed_to_store_replica_2_9_chunk_entropy},\n\t\t\t\t\t\t\t{filepath, Filepath},\n\t\t\t\t\t\t\t{byte, Byte},\n\t\t\t\t\t\t\t{padded_end_offset, PaddedEndOffset},\n\t\t\t\t\t\t\t{bucket_end_offset, BucketEndOffset},\n\t\t\t\t\t\t\t{store_id, StoreID},\n\t\t\t\t\t\t\t{reason, io_lib:format(\"~p\", [Reason])}]);\n\t\t_ ->\n\t\t\tok\n\tend,\n\n\trelease_semaphore(Filepath),\n\tok.\n\t\ntake_combined_entropy_by_index(Entropies, Index) ->\n\ttake_combined_entropy_by_index(Entropies, Index, []).\n\ntake_combined_entropy_by_index([], _Index, Acc) ->\n\tiolist_to_binary(Acc);\ntake_combined_entropy_by_index([Entropy | Entropies], Index, Acc) ->\n\tSubChunkSize = ?COMPOSITE_PACKING_SUB_CHUNK_SIZE,\n\ttake_combined_entropy_by_index(\n\t\tEntropies,\n\t\tIndex,\n\t\t[Acc, binary:part(Entropy, Index * SubChunkSize, SubChunkSize)]).\n\nacquire_semaphore(Filepath) ->\n\tcase ets:insert_new(ar_entropy_storage, {{semaphore, Filepath}}) of\n\t\tfalse ->\n\t\t\t?LOG_DEBUG([\n\t\t\t\t{event, details_store_chunk}, {section, waiting_on_semaphore}, {filepath, Filepath}]),\n\t\t\ttimer:sleep(20),\n\t\t\tacquire_semaphore(Filepath);\n\t\ttrue ->\n\t\t\tok\n\tend.\n\nrelease_semaphore(Filepath) ->\n\tets:delete(ar_entropy_storage, {semaphore, Filepath}).\n\n%%%===================================================================\n%%% Tests.\n%%%===================================================================\n\nreplica_2_9_test_() ->\n\t{timeout, 60, fun test_replica_2_9/0}.\n\ntest_replica_2_9() ->\n\tcase ar_block:strict_data_split_threshold() of\n\t\t786432 ->\n\t\t\tok;\n\t\t_ ->\n\t\t\tthrow(unexpected_strict_data_split_threshold)\n\tend,\n\n\tRewardAddr = ar_wallet:to_address(ar_wallet:new_keyfile()),\n\tPacking = {replica_2_9, RewardAddr},\n\tStorageModules = [\n\t\t\t{ar_block:partition_size(), 0, Packing},\n\t\t\t{ar_block:partition_size(), 1, Packing}\n\t],\n\t{ok, Config} = arweave_config:get_env(),\n\ttry\n\t\tar_test_node:start(#{ reward_addr => RewardAddr, storage_modules => StorageModules 
}),\n\t\tStoreID1 = ar_storage_module:id(lists:nth(1, StorageModules)),\n\t\tStoreID2 = ar_storage_module:id(lists:nth(2, StorageModules)),\n\t\tC1 = crypto:strong_rand_bytes(?DATA_CHUNK_SIZE),\n\t\t%% ar_chunk_storage does not allow overwriting a chunk\n\t\t%% with an unpacked_padded chunk.\n\t\t?assertEqual({error, already_stored},\n\t\t\t\tar_chunk_storage:put(?DATA_CHUNK_SIZE, C1, unpacked_padded, StoreID1)),\n\t\t?assertEqual({error, already_stored},\n\t\t\t\tar_chunk_storage:put(2 * ?DATA_CHUNK_SIZE, C1, unpacked_padded, StoreID1)),\n\t\t?assertEqual({error, already_stored},\n\t\t\t\tar_chunk_storage:put(3 * ?DATA_CHUNK_SIZE, C1, unpacked_padded, StoreID1)),\n\t\t?assertEqual({ok, Packing},\n\t\t\t\tar_chunk_storage:put(?DATA_CHUNK_SIZE, C1, Packing, StoreID1)),\n\t\tassert_get(C1, ?DATA_CHUNK_SIZE, StoreID1),\n\t\t?assertEqual({ok, Packing},\n\t\t\t\tar_chunk_storage:put(2 * ?DATA_CHUNK_SIZE, C1, Packing, StoreID1)),\n\t\tassert_get(C1, 2 * ?DATA_CHUNK_SIZE, StoreID1),\n\t\t?assertEqual({ok, Packing},\n\t\t\t\tar_chunk_storage:put(3 * ?DATA_CHUNK_SIZE, C1, Packing, StoreID1)),\n\t\tassert_get(C1, 3 * ?DATA_CHUNK_SIZE, StoreID1),\n\n\t\t%% Store the new unpacked_padded chunk. Expect it to be enciphered with\n\t\t%% the its entropy.\n\t\t?assertEqual({ok, Packing},\n\t\t\t\tar_chunk_storage:put(4 * ?DATA_CHUNK_SIZE, C1, unpacked_padded, StoreID1)),\n\t\t{ok, P1, _Entropy} =\n\t\t\t\tar_packing_server:pack_replica_2_9_chunk(RewardAddr, 4 * ?DATA_CHUNK_SIZE, C1),\n\t\tassert_get(P1, 4 * ?DATA_CHUNK_SIZE, StoreID1),\n\n\t\tassert_get(not_found, 8 * ?DATA_CHUNK_SIZE, StoreID1),\n\t\t?assertEqual({ok, Packing},\n\t\t\t\tar_chunk_storage:put(8 * ?DATA_CHUNK_SIZE, C1, unpacked_padded, StoreID1)),\n\t\t{ok, P2, _} =\n\t\t\t\tar_packing_server:pack_replica_2_9_chunk(RewardAddr, 8 * ?DATA_CHUNK_SIZE, C1),\n\t\tassert_get(P2, 8 * ?DATA_CHUNK_SIZE, StoreID1),\n\n\t\t%% Store chunks in the second partition.\n\t\t?assertEqual({ok, Packing},\n\t\t\t\tar_chunk_storage:put(12 * ?DATA_CHUNK_SIZE, C1, unpacked_padded, StoreID2)),\n\t\t{ok, P3, Entropy3} =\n\t\t\t\tar_packing_server:pack_replica_2_9_chunk(RewardAddr, 12 * ?DATA_CHUNK_SIZE, C1),\n\n\t\tassert_get(P3, 12 * ?DATA_CHUNK_SIZE, StoreID2),\n\t\t?assertEqual({ok, Packing},\n\t\t\t\tar_chunk_storage:put(15 * ?DATA_CHUNK_SIZE, C1, unpacked_padded, StoreID2)),\n\t\t{ok, P4, Entropy4} =\n\t\t\t\tar_packing_server:pack_replica_2_9_chunk(RewardAddr, 15 * ?DATA_CHUNK_SIZE, C1),\n\t\tassert_get(P4, 15 * ?DATA_CHUNK_SIZE, StoreID2),\n\t\t?assertNotEqual(P3, P4),\n\t\t?assertNotEqual(Entropy3, Entropy4),\n\n\t\t?assertEqual({ok, Packing},\n\t\t\t\tar_chunk_storage:put(16 * ?DATA_CHUNK_SIZE, C1, unpacked_padded, StoreID2)),\n\t\t{ok, P5, Entropy5} =\n\t\t\t\tar_packing_server:pack_replica_2_9_chunk(RewardAddr, 16 * ?DATA_CHUNK_SIZE, C1),\n\t\tassert_get(P5, 16 * ?DATA_CHUNK_SIZE, StoreID2),\n\t\t?assertNotEqual(Entropy4, Entropy5)\n\tafter\n\t\tok = arweave_config:set_env(Config)\n\tend.\n\nassert_get(Expected, Offset, StoreID) ->\n\tExpectedResult =\n\t\tcase Expected of\n\t\t\tnot_found ->\n\t\t\t\tnot_found;\n\t\t\t_ ->\n\t\t\t\t{Offset, Expected}\n\t\tend,\n\t?assertEqual(ExpectedResult, ar_chunk_storage:get(Offset - 1, StoreID)),\n\t?assertEqual(ExpectedResult, ar_chunk_storage:get(Offset - 2, StoreID)),\n\t?assertEqual(ExpectedResult, ar_chunk_storage:get(Offset - ?DATA_CHUNK_SIZE, StoreID)),\n\t?assertEqual(ExpectedResult, ar_chunk_storage:get(Offset - ?DATA_CHUNK_SIZE + 1, StoreID)),\n\t?assertEqual(ExpectedResult, ar_chunk_storage:get(Offset - 
?DATA_CHUNK_SIZE + 2, StoreID)),\n\t?assertEqual(ExpectedResult,\n\t\t\tar_chunk_storage:get(Offset - ?DATA_CHUNK_SIZE div 2, StoreID)),\n\t?assertEqual(ExpectedResult,\n\t\t\tar_chunk_storage:get(Offset - ?DATA_CHUNK_SIZE div 2 + 1, StoreID)),\n\t?assertEqual(ExpectedResult,\n\t\t\tar_chunk_storage:get(Offset - ?DATA_CHUNK_SIZE div 2 - 1, StoreID)),\n\t?assertEqual(ExpectedResult,\n\t\t\tar_chunk_storage:get(Offset - ?DATA_CHUNK_SIZE div 3, StoreID)).\n"
  },
  {
    "path": "apps/arweave/src/ar_ets_intervals.erl",
    "content": "%%% @doc The utilities for managing sets of non-overlapping intervals stored in an ETS table.\n%%% The API is similar to the one of the ar_intervals module. Keeping the intervals in ETS\n%%% is a convenient way to share them between processes, e.g. the mining module can quickly\n%%% check whether the given recall byte is synced. ar_intervals, in turn, is helpful\n%%% for manipulating multiple sets of intervals, e.g. the syncing process uses it to look for\n%%% the intersections between our data and peers' data.\n%%% @end\n-module(ar_ets_intervals).\n\n-export([init_from_gb_set/2, add/3, delete/3, cut/2, is_inside/2, get_interval_with_byte/2,\n\t\tget_next_interval_outside/3, get_next_interval/3, get_intersection_size/3]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Record intervals from the given gb_sets set.\ninit_from_gb_set(Table, Set) ->\n\tinit_from_gb_set_iterator(Table, gb_sets:iterator(Set)).\n\n%% @doc Record an interval, bytes Start + 1, Start + 2 ... End.\nadd(Table, End, Start) when End > Start ->\n\t{End2, Start2, InnerEnds} = find_largest_continuous_interval(Table, End, Start),\n\tets:insert(Table, [{End2, Start2}]),\n\tremove_inner_intervals(Table, InnerEnds, End2).\n\n%% @doc Remove the given interval, bytes Start + 1, Start + 2 ... End.\ndelete(Table, End, Start) when End > Start ->\n\tcase ets:next(Table, Start) of\n\t\t'$end_of_table' ->\n\t\t\tok;\n\t\tEnd2 ->\n\t\t\tcase ets:lookup(Table, End2) of\n\t\t\t\t[] ->\n\t\t\t\t\t%% The key has just been removed, very unlucky timing.\n\t\t\t\t\tdelete(Table, End, Start);\n\t\t\t\t[{_End2, Start2}] when Start2 >= End ->\n\t\t\t\t\tok;\n\t\t\t\t[{End2, Start2}] ->\n\t\t\t\t\tInsert =\n\t\t\t\t\t\tcase Start2 < Start of\n\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\t[{Start, Start2}];\n\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t[]\n\t\t\t\t\t\tend,\n\t\t\t\t\tInsert2 =\n\t\t\t\t\t\tcase End2 > End of\n\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\t[{End2, End} | Insert];\n\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\tInsert\n\t\t\t\t\t\tend,\n\t\t\t\t\tets:insert(Table, Insert2),\n\t\t\t\t\tcase End2 > End of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t%% We have already inserted {End2, End} above.\n\t\t\t\t\t\t\tok;\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tets:delete(Table, End2),\n\t\t\t\t\t\t\tcase End2 < End of\n\t\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\t\tdelete(Table, End, End2);\n\t\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t\tok\n\t\t\t\t\t\t\tend\n\t\t\t\t\tend\n\t\t\tend\n\tend.\n\n%% @doc Cut the set by removing all the intervals and interval's parts above Offset.\ncut(Table, Offset) ->\n\tcase ets:next(Table, Offset) of\n\t\t'$end_of_table' ->\n\t\t\tok;\n\t\tEnd ->\n\t\t\tcase ets:lookup(Table, End) of\n\t\t\t\t[] ->\n\t\t\t\t\t%% The key has just been removed, very unlucky timing.\n\t\t\t\t\tcut(Table, Offset);\n\t\t\t\t[{End, Start}] when Start < Offset ->\n\t\t\t\t\tets:insert(Table, [{Offset, Start}]),\n\t\t\t\t\tets:delete(Table, End),\n\t\t\t\t\tcut(Table, Offset);\n\t\t\t\t[{End, _Start}] ->\n\t\t\t\t\tets:delete(Table, End),\n\t\t\t\t\tcut(Table, Offset)\n\t\t\tend\n\tend.\n\n%% @doc Return true if the given offset is inside one of the intervals, including\n%% the right bound, excluding the left bound.\n%%\n%% E.g. 
a table with byte 1:\n%%   add(table, 1, 0),\n%%   is_inside(table, 1) == true,\n%%   is_inside(table, 0) == false\n%% for a table with bytes 3, 4, and 5:\n%%   add(table, 5, 2),\n%%   is_inside(table, 5) == true,\n%%   is_inside(table, 4) == true,\n%%   is_inside(table, 3) == true,\n%%   is_inside(table, 2) == false.\n%% @end\nis_inside(Table, Offset) ->\n\tcase ets:next(Table, Offset - 1) of\n\t\t'$end_of_table' ->\n\t\t\tfalse;\n\t\tNextOffset ->\n\t\t\tcase ets:lookup(Table, NextOffset) of\n\t\t\t\t[{NextOffset, Start}] ->\n\t\t\t\t\tOffset > Start;\n\t\t\t\t[] ->\n\t\t\t\t\t%% The key should have been just removed, unlucky timing.\n\t\t\t\t\tis_inside(Table, Offset)\n\t\t\tend\n\tend.\n\n%% @doc Return the interval containing the given offset, including the right bound,\n%% excluding the left bound, or not_found.\n%% @end\nget_interval_with_byte(Table, Offset) ->\n\tcase ets:next(Table, Offset - 1) of\n\t\t'$end_of_table' ->\n\t\t\tnot_found;\n\t\tNextOffset ->\n\t\t\tcase ets:lookup(Table, NextOffset) of\n\t\t\t\t[{NextOffset, Start}] ->\n\t\t\t\t\tcase Offset > Start of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t{NextOffset, Start};\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tnot_found\n\t\t\t\t\tend;\n\t\t\t\t[] ->\n\t\t\t\t\t%% The key should have been just removed, unlucky timing.\n\t\t\t\t\tget_interval_with_byte(Table, Offset)\n\t\t\tend\n\tend.\n\n%% @doc Return the lowest interval outside the recorded set of intervals,\n%% strictly above the given Offset, and with the end offset at most EndOffsetUpperBound.\n%% Return not_found if there are no such intervals.\nget_next_interval_outside(_Table, Offset, EndOffsetUpperBound)\n\t\twhen Offset >= EndOffsetUpperBound ->\n\tnot_found;\nget_next_interval_outside(Table, Offset, EndOffsetUpperBound) ->\n\tcase ets:next(Table, Offset) of\n\t\t'$end_of_table' ->\n\t\t\t{EndOffsetUpperBound, Offset};\n\t\tNextOffset ->\n\t\t\tcase ets:lookup(Table, NextOffset) of\n\t\t\t\t[{NextOffset, Start}] when Start > Offset ->\n\t\t\t\t\t{min(EndOffsetUpperBound, Start), Offset};\n\t\t\t\t_ ->\n\t\t\t\t\tget_next_interval_outside(Table, NextOffset, EndOffsetUpperBound)\n\t\t\tend\n\tend.\n\n%% @doc Return the lowest interval inside the recorded set of intervals with the\n%% end offset strictly above the given offset, and with the end offset\n%% at most EndOffsetUpperBound.\n%% Return not_found if there are no such intervals.\nget_next_interval(_Table, Offset, EndOffsetUpperBound) when Offset >= EndOffsetUpperBound ->\n\tnot_found;\nget_next_interval(Table, Offset, EndOffsetUpperBound) ->\n\tcase ets:next(Table, Offset) of\n\t\t'$end_of_table' ->\n\t\t\tnot_found;\n\t\tNextOffset ->\n\t\t\tcase ets:lookup(Table, NextOffset) of\n\t\t\t\t[{_NextOffset, Start}] when Start >= EndOffsetUpperBound ->\n\t\t\t\t\tnot_found;\n\t\t\t\t[{NextOffset, Start}] ->\n\t\t\t\t\t{min(NextOffset, EndOffsetUpperBound), Start};\n\t\t\t\t[] ->\n\t\t\t\t\t%% The key should have been just removed, unlucky timing.\n\t\t\t\t\tget_next_interval(Table, Offset, EndOffsetUpperBound)\n\t\t\tend\n\tend.\n\n%% @doc Return the size of the intersection between the stored intervals and the given range.\nget_intersection_size(Table, End, Start) when End > Start ->\n\tcase ets:next(Table, Start) of\n\t\t'$end_of_table' ->\n\t\t\t0;\n\t\tOffset when Offset >= End ->\n\t\t\tcase ets:lookup(Table, Offset) of\n\t\t\t\t[] ->\n\t\t\t\t\t%% An extremely unlikely race condition: just retry.\n\t\t\t\t\tget_intersection_size(Table, End, Start);\n\t\t\t\t[{_, Start2}] when Start2 >= End 
->\n\t\t\t\t\t0;\n\t\t\t\t[{_, Start2}] ->\n\t\t\t\t\tEnd - max(Start, Start2)\n\t\t\tend;\n\t\tOffset ->\n\t\t\tcase ets:lookup(Table, Offset) of\n\t\t\t\t[] ->\n\t\t\t\t\t%% An extremely unlikely race condition: just retry.\n\t\t\t\t\tget_intersection_size(Table, End, Start);\n\t\t\t\t[{_, Start2}] ->\n\t\t\t\t\tOffset - max(Start, Start2) + get_intersection_size(Table, End, Offset)\n\t\t\tend\n\tend.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\ninit_from_gb_set_iterator(Table, Iterator) ->\n\tcase gb_sets:next(Iterator) of\n\t\tnone ->\n\t\t\tok;\n\t\t{{End, Start}, Iterator2} ->\n\t\t\tadd(Table, End, Start),\n\t\t\tinit_from_gb_set_iterator(Table, Iterator2)\n\tend.\n\nfind_largest_continuous_interval(Table, End, Start) ->\n\tfind_largest_continuous_interval(Table, End, Start, End, Start, []).\n\nfind_largest_continuous_interval(Table, End, Start, End2, Start2, InnerEnds) ->\n\tcase ets:next(Table, Start - 1) of\n\t\t'$end_of_table' ->\n\t\t\t{End2, Start2, InnerEnds};\n\t\tEnd3 ->\n\t\t\tcase ets:lookup(Table, End3) of\n\t\t\t\t[] ->\n\t\t\t\t\t%% The key has just been removed, very unlucky timing.\n\t\t\t\t\tfind_largest_continuous_interval(Table, End, Start, End2, Start2, InnerEnds);\n\t\t\t\t[{_End3, Start3}] when Start3 > End ->\n\t\t\t\t\t{End2, Start2, InnerEnds};\n\t\t\t\t[{End3, Start3}] ->\n\t\t\t\t\tfind_largest_continuous_interval(\n\t\t\t\t\t\tTable,\n\t\t\t\t\t\tEnd,\n\t\t\t\t\t\tEnd3 + 1,\n\t\t\t\t\t\tmax(End2, End3),\n\t\t\t\t\t\tmin(Start2, Start3),\n\t\t\t\t\t\t[End3 | InnerEnds]\n\t\t\t\t\t)\n\t\t\tend\n\tend.\n\nremove_inner_intervals(_Table, [], _End) ->\n\tok;\nremove_inner_intervals(Table, [End | InnerEnds], End) ->\n\tremove_inner_intervals(Table, InnerEnds, End);\nremove_inner_intervals(Table, [InnerEnd | InnerEnds], End) ->\n\tets:delete(Table, InnerEnd),\n\tremove_inner_intervals(Table, InnerEnds, End).\n\n%%%===================================================================\n%%% Tests.\n%%%===================================================================\n\nets_intervals_test() ->\n\tets:new(ets_intervals_test, [named_table, ordered_set]),\n\tassert_is_not_inside(100, 0),\n\t?assertEqual(ok, cut(ets_intervals_test, 10)),\n\t?assertEqual(ok, delete(ets_intervals_test, 10, 5)),\n\tSet = gb_sets:from_list([{1, 0}, {5, 3}, {16, 10}]),\n\tinit_from_gb_set(ets_intervals_test, Set),\n\tassert_is_inside(1, 0),\n\tassert_is_not_inside(3, 1),\n\tassert_is_inside(5, 3),\n\tassert_is_not_inside(10, 5),\n\tassert_is_inside(16, 10),\n\tassert_is_not_inside(20, 16),\n\t%% 1,0 16,3\n\tadd(ets_intervals_test, 11, 4),\n\tassert_is_inside(16, 3),\n\tassert_is_inside(1, 0),\n\tassert_is_not_inside(3, 1),\n\tassert_is_not_inside(20, 16),\n\t%% back to 1,0 5,3 16,10\n\tdelete(ets_intervals_test, 10, 5),\n\tassert_is_inside(5, 3),\n\tassert_is_inside(1, 0),\n\tassert_is_inside(16, 10),\n\tassert_is_not_inside(3, 1),\n\tassert_is_not_inside(10, 5),\n\tassert_is_not_inside(20, 16),\n\t%% 1,0 5,3 16,10 20,18\n\tadd(ets_intervals_test, 20, 18),\n\tassert_is_inside(5, 3),\n\tassert_is_inside(1, 0),\n\tassert_is_inside(16, 10),\n\tassert_is_inside(20, 18),\n\tassert_is_not_inside(3, 1),\n\tassert_is_not_inside(10, 5),\n\tassert_is_not_inside(18, 16),\n\tassert_is_not_inside(22, 20),\n\t%% 1,0 5,3 8,7 16,10 20,18\n\tadd(ets_intervals_test, 8, 7),\n\tassert_is_inside(5, 3),\n\tassert_is_inside(1, 0),\n\tassert_is_inside(16, 10),\n\tassert_is_inside(20, 
18),\n\tassert_is_inside(8, 7),\n\tassert_is_not_inside(3, 1),\n\tassert_is_not_inside(7, 5),\n\tassert_is_not_inside(10, 8),\n\tassert_is_not_inside(18, 16),\n\tassert_is_not_inside(22, 20),\n\t%% 5,0 8,7 16,10 20,18\n\tadd(ets_intervals_test, 3, 1),\n\tassert_is_inside(5, 0),\n\tassert_is_inside(8, 7),\n\tassert_is_inside(16, 10),\n\tassert_is_inside(20, 18),\n\tassert_is_not_inside(7, 5),\n\tassert_is_not_inside(10, 8),\n\tassert_is_not_inside(18, 16),\n\tassert_is_not_inside(22, 20),\n\t%% 5,0 8,7 16,10 20,18\n\tcut(ets_intervals_test, 22),\n\tassert_is_inside(5, 0),\n\tassert_is_inside(8, 7),\n\tassert_is_inside(16, 10),\n\tassert_is_inside(20, 18),\n\tassert_is_not_inside(7, 5),\n\tassert_is_not_inside(10, 8),\n\tassert_is_not_inside(18, 16),\n\tassert_is_not_inside(22, 20),\n\t%% 5,0 8,7 16,10 20,18\n\tcut(ets_intervals_test, 20),\n\tassert_is_inside(5, 0),\n\tassert_is_inside(8, 7),\n\tassert_is_inside(16, 10),\n\tassert_is_inside(20, 18),\n\tassert_is_not_inside(7, 5),\n\tassert_is_not_inside(10, 8),\n\tassert_is_not_inside(18, 16),\n\tassert_is_not_inside(22, 20),\n\t%% 5,0 8,7 16,10 19,18\n\tcut(ets_intervals_test, 19),\n\tassert_is_inside(5, 0),\n\tassert_is_inside(8, 7),\n\tassert_is_inside(16, 10),\n\tassert_is_inside(19, 18),\n\tassert_is_not_inside(7, 5),\n\tassert_is_not_inside(10, 8),\n\tassert_is_not_inside(18, 16),\n\tassert_is_not_inside(22, 19),\n\t%% 5,0 8,7 14,10\n\tcut(ets_intervals_test, 14),\n\tassert_is_inside(5, 0),\n\tassert_is_inside(8, 7),\n\tassert_is_inside(14, 10),\n\tassert_is_not_inside(7, 5),\n\tassert_is_not_inside(10, 8),\n\tassert_is_not_inside(20, 14),\n\t%% 1,0 8,7 14,10\n\tdelete(ets_intervals_test, 5, 1),\n\tassert_is_inside(1, 0),\n\tassert_is_inside(8, 7),\n\tassert_is_inside(14, 10),\n\tassert_is_not_inside(7, 1),\n\tassert_is_not_inside(10, 8),\n\tassert_is_not_inside(20, 14),\n\t%% 8,7 14,10\n\tdelete(ets_intervals_test, 5, 0),\n\tassert_is_inside(8, 7),\n\tassert_is_inside(14, 10),\n\tassert_is_not_inside(7, 0),\n\tassert_is_not_inside(10, 8),\n\tassert_is_not_inside(20, 14),\n\t%% 8,7 15,10 30,20\n\tadd(ets_intervals_test, 15, 14),\n\tadd(ets_intervals_test, 30, 20),\n\tassert_is_inside(8, 7),\n\tassert_is_inside(15, 10),\n\tassert_is_inside(30, 20),\n\tassert_is_not_inside(7, 0),\n\tassert_is_not_inside(10, 8),\n\tassert_is_not_inside(20, 15),\n\tassert_is_not_inside(40, 30),\n\t%% 8,7 30,25\n\tdelete(ets_intervals_test, 25, 8),\n\tassert_is_inside(8, 7),\n\tassert_is_inside(30, 25),\n\tassert_is_not_inside(25, 8),\n\tassert_is_not_inside(7, 0),\n\tassert_is_not_inside(40, 30),\n\t%% 30,7\n\tadd(ets_intervals_test, 25, 8),\n\tassert_is_inside(30, 7),\n\tassert_is_not_inside(7, 0),\n\tassert_is_not_inside(40, 30),\n\t%% 12,7 18,16 30,25\n\tdelete(ets_intervals_test, 16, 12),\n\tdelete(ets_intervals_test, 25, 18),\n\tassert_is_inside(12, 7),\n\tassert_is_inside(18, 16),\n\tassert_is_inside(30, 25),\n\tassert_is_not_inside(16, 12),\n\tassert_is_not_inside(25, 18),\n\tassert_is_not_inside(40, 30),\n\t%% 12,7 21,16 30,25\n\tadd(ets_intervals_test, 21, 18),\n\tassert_is_inside(12, 7),\n\tassert_is_inside(21, 16),\n\tassert_is_inside(30, 25),\n\tassert_is_not_inside(16, 12),\n\tassert_is_not_inside(25, 21),\n\tassert_is_not_inside(40, 30),\n\t%% 12,7 33,13\n\tadd(ets_intervals_test, 33, 13),\n\tassert_is_inside(33, 13),\n\tassert_is_inside(12, 7),\n\tassert_is_not_inside(13, 12),\n\tassert_is_not_inside(40, 33),\n\t%% 12,7 34,13\n\tadd(ets_intervals_test, 34, 13),\n\tassert_is_inside(34, 13),\n\tassert_is_inside(12, 
7),\n\tassert_is_not_inside(13, 12),\n\tassert_is_not_inside(40, 34),\n\t%% 12,7 35,13\n\tadd(ets_intervals_test, 35, 22),\n\tassert_is_inside(35, 13),\n\tassert_is_inside(12, 7),\n\tassert_is_not_inside(13, 12),\n\tassert_is_not_inside(40, 35).\n\nassert_is_inside(End, End) ->\n\tok;\nassert_is_inside(End, Start) ->\n\t?assertEqual(true, is_inside(ets_intervals_test, Start + 1)),\n\tassert_is_inside(End, Start + 1).\n\nassert_is_not_inside(End, End) ->\n\tok;\nassert_is_not_inside(End, Start) ->\n\t?assertEqual(false, is_inside(ets_intervals_test, Start + 1)),\n\tassert_is_not_inside(End, Start + 1).\n"
  },
  {
    "path": "apps/arweave/src/ar_events.erl",
    "content": "%% This Source Code Form is subject to the terms of the GNU General\n%% Public License, v. 2.0. If a copy of the GPLv2 was not distributed\n%% with this file, You can obtain one at\n%% https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html\n\n-module(ar_events).\n\n-behaviour(gen_server).\n\n-export([\n\tevent_to_process/1,\n\tsubscribe/1,\n\tcancel/1,\n\tsend/2\n]).\n\n-export([\n\tstart_link/1,\n\tinit/1,\n\thandle_call/3,\n\thandle_cast/2,\n\thandle_info/2,\n\tterminate/2,\n\tcode_change/3\n]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n%% Internal state definition.\n-record(state, {\n\tname,\n\tsubscribers = #{}\n}).\n\n%%%===================================================================\n%%% API\n%%%===================================================================\n\nevent_to_process(Event) when is_atom(Event) -> list_to_atom(\"ar_event_\" ++ atom_to_list(Event)).\n\nsubscribe(Event) when is_atom(Event) ->\n\tProcess = event_to_process(Event),\n\tgen_server:call(Process, subscribe);\nsubscribe([]) ->\n\t[];\nsubscribe([Event | Events]) ->\n\t[subscribe(Event) | subscribe(Events)].\n\ncancel(Event) ->\n\tProcess = event_to_process(Event),\n\tgen_server:call(Process, cancel).\n\nsend(Event, Value) ->\n\tProcess = event_to_process(Event),\n\tcase whereis(Process) of\n\t\tundefined ->\n\t\t\terror;\n\t\t_ ->\n\t\t\tgen_server:cast(Process, {send, self(), Value})\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc\n%% Starts the server\n%%\n%% @spec start_link() -> {ok, Pid} | ignore | {error, Error}\n%% @end\n%%--------------------------------------------------------------------\nstart_link(Name) ->\n    RegName = ar_events:event_to_process(Name),\n\tgen_server:start_link({local, RegName}, ?MODULE, Name, []).\n\n%%% gen_server callbacks\n%%%===================================================================\n\n%%--------------------------------------------------------------------\n%% @private\n%% @doc\n%% Initializes the server\n%%\n%% @spec init(Args) -> {ok, State} |\n%%\t\t\t\t\t   {ok, State, Timeout} |\n%%\t\t\t\t\t   ignore |\n%%\t\t\t\t\t   {stop, Reason}\n%% @end\n%%--------------------------------------------------------------------\ninit(Name) ->\n\t{ok, #state{ name = Name }}.\n\n%%--------------------------------------------------------------------\n%% @private\n%% @doc\n%% Handling call messages\n%%\n%% @spec handle_call(Request, From, State) ->\n%%\t\t\t\t\t\t\t\t\t {reply, Reply, State} |\n%%\t\t\t\t\t\t\t\t\t {reply, Reply, State, Timeout} |\n%%\t\t\t\t\t\t\t\t\t {noreply, State} |\n%%\t\t\t\t\t\t\t\t\t {noreply, State, Timeout} |\n%%\t\t\t\t\t\t\t\t\t {stop, Reason, Reply, State} |\n%%\t\t\t\t\t\t\t\t\t {stop, Reason, State}\n%% @end\n%%--------------------------------------------------------------------\nhandle_call(subscribe , {From, _Tag}, State) ->\n\tcase maps:get(From, State#state.subscribers, unknown) of\n\t\tunknown ->\n\t\t\tRef = erlang:monitor(process, From),\n\t\t\tSubscribers = maps:put(From, Ref, State#state.subscribers),\n\t\t\t{reply, ok, State#state{subscribers = Subscribers}};\n\t\t_ ->\n\t\t\t{reply, already_subscribed, State}\n\tend;\n\nhandle_call(cancel, {From, _Tag}, State) ->\n\tcase maps:get(From, State#state.subscribers, unknown) of\n\t\tunknown ->\n\t\t\t{reply, unknown, State};\n\t\tRef ->\n\t\t\tSubscribers = maps:remove(From, 
State#state.subscribers),\n\t\t\terlang:demonitor(Ref),\n\t\t\t{reply, ok, State#state{ subscribers = Subscribers }}\n\tend;\n\nhandle_call(Request, _From, State) ->\n\t?LOG_ERROR([{event, unhandled_call}, {message, Request}]),\n\t{reply, ok, State}.\n\n%%--------------------------------------------------------------------\n%% @private\n%% @doc\n%% Handling cast messages\n%%\n%% @spec handle_cast(Msg, State) -> {noreply, State} |\n%%\t\t\t\t\t\t\t\t\t{noreply, State, Timeout} |\n%%\t\t\t\t\t\t\t\t\t{stop, Reason, State}\n%% @end\n%%--------------------------------------------------------------------\nhandle_cast({send, From, Value}, State) ->\n\t%% Send to the subscribers except self.\n\t[Pid ! {event, State#state.name, Value}\n\t\t|| Pid <- maps:keys(State#state.subscribers), Pid /= From],\n\t{noreply, State};\nhandle_cast(Msg, State) ->\n\t?LOG_ERROR([{event, unhandled_cast}, {message, Msg}]),\n\t{noreply, State}.\n\n%%--------------------------------------------------------------------\n%% @private\n%% @doc\n%% Handling all non call/cast messages\n%%\n%% @spec handle_info(Info, State) -> {noreply, State} |\n%%\t\t\t\t\t\t\t\t\t {noreply, State, Timeout} |\n%%\t\t\t\t\t\t\t\t\t {stop, Reason, State}\n%% @end\n%%--------------------------------------------------------------------\nhandle_info({'DOWN', _,  process, From, _}, State) ->\n\t{_, _, State1} = handle_call(cancel, {From, x}, State),\n\t{noreply, State1};\n\nhandle_info(Info, State) ->\n\t?LOG_ERROR([{event, unhandled_info}, {module, ?MODULE}, {info, Info}]),\n\t{noreply, State}.\n\n%%--------------------------------------------------------------------\n%% @private\n%% @doc\n%% This function is called by a gen_server when it is about to\n%% terminate. It should be the opposite of Module:init/1 and do any\n%% necessary cleaning up. When it returns, the gen_server terminates\n%% with Reason. 
The return value is ignored.\n%%\n%% @spec terminate(Reason, State) -> void()\n%% @end\n%%--------------------------------------------------------------------\nterminate(Reason, _State) ->\n\t?LOG_INFO([{module, ?MODULE},{pid, self()},{callback, terminate},{reason, Reason}]),\n\tok.\n\n%%--------------------------------------------------------------------\n%% @private\n%% @doc\n%% Convert process state when code is changed\n%%\n%% @spec code_change(OldVsn, State, Extra) -> {ok, NewState}\n%% @end\n%%--------------------------------------------------------------------\ncode_change(_OldVsn, State, _Extra) ->\n\t{ok, State}.\n\n%%%===================================================================\n%%% Internal functions\n%%%===================================================================\n\nsubscribe_send_cancel_test() ->\n\t%% Check whether all the \"event\"-processes are alive.\n\t%% This list should be aligned with the total number\n\t%% of gen_servers run by ar_events_sup.\n\tProcesses = [tx, block, testing],\n\ttrue = lists:all(fun(P) -> whereis(ar_events:event_to_process(P)) /= undefined end, Processes),\n\tEventNetworkStateOnStart = sys:get_state(ar_events:event_to_process(testing)),\n\tok = ar_events:subscribe(testing),\n\talready_subscribed = ar_events:subscribe(testing),\n\t[ok, already_subscribed] = ar_events:subscribe([tx, testing]),\n\n\t%% Sender shouldn't receive its own event.\n\tok = ar_events:send(testing, 12345),\n\treceive\n\t\t{event, testing, 12345} ->\n\t\t\t?assert(false, \"Received an unexpected event.\")\n\tafter 200 ->\n\t\tok\n\tend,\n\t%% Sender should receive an event triggered by another process.\n\tspawn(fun() -> ar_events:send(testing, 12345) end),\n\treceive\n\t\t{event, testing, 12345} ->\n\t\t\tok\n\tafter 200 ->\n\t\t?assert(false, \"Did not receive an expected event within 200 milliseconds.\")\n\tend,\n\tok = ar_events:cancel(testing),\n\tEventNetworkStateOnStart = sys:get_state(ar_events:event_to_process(testing)).\n\nprocess_terminated_test() ->\n\t%% If a subscriber has been terminated without an explicit \"cancel\" call,\n\t%% it should be cleaned up from the subscription list.\n\tEventNetworkStateOnStart = sys:get_state(ar_events:event_to_process(testing)),\n\tspawn(fun() -> ar_events:subscribe(testing) end),\n\ttimer:sleep(200),\n\tEventNetworkStateOnStart = sys:get_state(ar_events:event_to_process(testing)).\n"
  },
  {
    "path": "apps/arweave/src/ar_events_sup.erl",
    "content": "%% This Source Code Form is subject to the terms of the GNU General\n%% Public License, v. 2.0. If a copy of the GPLv2 was not distributed\n%% with this file, You can obtain one at\n%% https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html\n\n-module(ar_events_sup).\n\n-behaviour(supervisor).\n\n%% API\n-export([start_link/0]).\n\n%% Supervisor callbacks\n-export([init/1]).\n\n-include(\"ar_sup.hrl\").\n\n%% Helper macro for declaring children of supervisor.\n-define(CHILD(Mod, I, Type), {I, {Mod, start_link, [I]}, permanent, ?SHUTDOWN_TIMEOUT, Type,\n\t\t[Mod]}).\n\n%% ===================================================================\n%% API functions\n%% ===================================================================\n\nstart_link() ->\n\tsupervisor:start_link({local, ?MODULE}, ?MODULE, []).\n\n%% ===================================================================\n%% Supervisor callbacks\n%% ===================================================================\n\ninit([]) ->\n\t{ok, {{one_for_one, 5, 10}, [\n\t\t%% Events: remaining_disk_space.\n\t\t?CHILD(ar_events, disksup, worker),\n\t\t%% Events: new, ready_for_mining, orphaned, emitting_scheduled,\n\t\t%% preparing_unblacklisting, ready_for_unblacklisting, registered_offset.\n\t\t?CHILD(ar_events, tx, worker),\n\t\t%% Events: discovered, rejected, new, mined_block_received.\n\t\t?CHILD(ar_events, block, worker),\n\t\t%% Events: unpacked, packed.\n\t\t?CHILD(ar_events, chunk, worker),\n\t\t%% Events: removed\n\t\t?CHILD(ar_events, peer, worker),\n\t\t%% Events: account_tree_initialized, initialized,\n\t\t%% new_tip, checkpoint_block, search_space_upper_bound.\n\t\t?CHILD(ar_events, node_state, worker),\n\t\t%% Events: initialized, valid, invalid, validation_error, refuse_validation,\n\t\t%% computed_output.\n\t\t?CHILD(ar_events, nonce_limiter, worker),\n\t\t%% Events: removed_file.\n\t\t?CHILD(ar_events, chunk_storage, worker),\n\t\t%% Events: add_range, remove_range, global_remove_range, cut, global_cut.\n\t\t?CHILD(ar_events, sync_record, worker),\n\t\t%% Events: rejected, stale, partial, accepted, confirmed, orphaned.\n\t\t?CHILD(ar_events, solution, worker),\n\t\t%% Used for the testing purposes.\n\t\t?CHILD(ar_events, testing, worker)\n\t]}}.\n"
  },
  {
    "path": "apps/arweave/src/ar_footprint_record.erl",
    "content": "-module(ar_footprint_record).\n\n-export([add/3, add_async/4, delete/2, get_offset/1, get_padded_offset_from_footprint_offset/1,\n\t\tget_footprint/1, get_footprint_bucket/1, get_intervals/3,\n\t\tget_intervals/4, get_unsynced_intervals/3,\n\t\tget_intervals_from_footprint_intervals/1,\n\t\tget_footprint_size/0, get_footprints_per_partition/0,\n\t\tis_recorded/2]).\n\n-include(\"ar.hrl\").\n-include(\"ar_consensus.hrl\").\n-include(\"ar_data_discovery.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-moduledoc \"\"\"\n    This module exports functions for maintaining\n    a replica 2.9 entropy-aligned record of the synced chunks.\n    It differs from the normal record (ar_data_sync) in that it only registers\n    the bucket numbers of the synced chunks and records chunks with the same footprint\n    next to each other. For example, a record may contain intervals 0-10, 1000-1024,\n    1028-2048. This means the node has the first 10 chunks of the first entropy footprint,\n    the last 24 chunks of the first entropy footprint and chunks 4-44 of the second\n    entropy footprint. These chunks are from the first partition. The offset of the chunks\n    from the second partition is shifted by the number of chunks in the replica 2.9\n    entropy generated per partition (which is slightly bigger than the number of chunks\n    that can fit in the 3.6 TB partition).\n\n    Note that Packing does not have to be replica_2_9. We maintain this record\n    for any packing so that it is convenient to serve the data to any client.\n\"\"\".\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Add a chunk to the footprint record.\n-spec add(Offset :: non_neg_integer(), Packing :: term(), StoreID :: string()) -> ok.\nadd(Offset, Packing, StoreID) ->\n\tFootprintOffset = get_offset(Offset),\n\tar_sync_record:add(FootprintOffset, FootprintOffset - 1, Packing, ar_data_sync_footprints, StoreID).\n\n%% @doc Add a chunk to the footprint record asynchronously.\n-spec add_async(Tag :: term(), Offset :: non_neg_integer(), Packing :: term(), StoreID :: string()) -> ok.\nadd_async(Tag, Offset, Packing, StoreID) ->\n\tFootprintOffset = get_offset(Offset),\n\tar_sync_record:add_async(Tag, FootprintOffset, FootprintOffset - 1, Packing, ar_data_sync_footprints, StoreID).\n\n%% @doc Get the offset of a chunk in the footprint record.\n-spec get_offset(Offset :: non_neg_integer()) -> non_neg_integer().\nget_offset(Offset) ->\n\tPaddedOffset = ar_block:get_chunk_padded_offset(Offset),\n\tFootprintSize = get_footprint_size(),\n\tFootprintsPerPartition = get_footprints_per_partition(),\n\n\tChunksPerPartition = get_chunks_per_partition(),\n\tPartition = ar_replica_2_9:get_entropy_partition(PaddedOffset),\n\tPartitionOffset = (PaddedOffset - Partition * ?PARTITION_SIZE) div ?DATA_CHUNK_SIZE - 1,\n\n\t%% Which footprint within the partition\n\tFootprint = PartitionOffset rem FootprintsPerPartition,\n\t%% Position within the footprint\n\tFootprintOffset = PartitionOffset div FootprintsPerPartition,\n\tPartition * ChunksPerPartition + Footprint * FootprintSize + FootprintOffset + 1.\n\n%% @doc Return the largest end offset of the chunk that maps to the given footprint offset.\nget_padded_offset_from_footprint_offset(FootprintOffset) ->\n\tStart = FootprintOffset - 1,\n\tFootprintSize = get_footprint_size(),\n\tChunksPerPartition = get_chunks_per_partition(),\n\tPartition = Start div 
ChunksPerPartition,\n\tFootprintsPerPartition = get_footprints_per_partition(),\n\tPartitionStart = Partition * ChunksPerPartition,\n\tFootprint = (Start - PartitionStart) div FootprintSize,\n\tInFootprintOffset = (Start - PartitionStart) rem FootprintSize,\n\tEndOffset = Partition * ?PARTITION_SIZE + (InFootprintOffset * FootprintsPerPartition + (Footprint + 1)) * ?DATA_CHUNK_SIZE,\n\tar_block:get_chunk_padded_offset(EndOffset).\n\n%% @doc Get the chunk's footprint's number, >= 0, < the maximum number of footprints\n%% in a partition.\n-spec get_footprint(Offset :: non_neg_integer()) -> non_neg_integer().\nget_footprint(Offset) ->\n\tEntropyIndex = ar_replica_2_9:get_entropy_index(Offset, 0),\n\tEntropyIndex div ?COMPOSITE_PACKING_SUB_CHUNK_COUNT.\n\n%% @doc Get the footprint bucket number of a chunk.\n-spec get_footprint_bucket(Offset :: non_neg_integer()) -> non_neg_integer().\nget_footprint_bucket(Offset) ->\n\tget_offset(Offset) div ?NETWORK_FOOTPRINT_BUCKET_SIZE.\n\n%% @doc Get the synced footprint intervals of a chunk.\n-spec get_intervals(\n\tPartition :: non_neg_integer(),\n\tFootprint :: non_neg_integer(),\n\tStoreID :: string()\n) -> term().\nget_intervals(Partition, Footprint, StoreID) ->\n\tget_intervals(Partition, Footprint, any, StoreID).\n\n%% @doc Get the synced footprint intervals of a chunk.\n-spec get_intervals(\n\tPartition :: non_neg_integer(),\n\tFootprint :: non_neg_integer(),\n\tPacking :: term(),\n\tStoreID :: string()\n) -> term().\nget_intervals(Partition, Footprint, Packing, StoreID) ->\n\tFootprintSize = get_footprint_size(),\n\tChunksPerPartition = get_chunks_per_partition(),\n\tPartitionStartOffset = Partition * ChunksPerPartition,\n\tFootprintStart = PartitionStartOffset + Footprint * FootprintSize,\n\tEnd = min(FootprintStart + FootprintSize, PartitionStartOffset + ChunksPerPartition),\n\tcollect_intervals(FootprintStart, End, Packing, StoreID).\n\n%% @doc Get the unsynced footprint intervals of a chunk.\n-spec get_unsynced_intervals(\n\tPartition :: non_neg_integer(),\n\tFootprint :: non_neg_integer(),\n\tStoreID :: string()\n) -> term().\nget_unsynced_intervals(Partition, Footprint, StoreID) ->\n\tFootprintSize = get_footprint_size(),\n\tChunksPerPartition = get_chunks_per_partition(),\n\tPartitionStartOffset = Partition * ChunksPerPartition,\n\tFootprintStart = PartitionStartOffset + Footprint * FootprintSize,\n\tEnd = min(FootprintStart + FootprintSize, PartitionStartOffset + ChunksPerPartition),\n\tcollect_unsynced_intervals(FootprintStart, End, StoreID).\n\n%% @doc Delete a chunk from the footprint record.\n-spec delete(Offset :: non_neg_integer(), StoreID :: string()) -> ok.\ndelete(Offset, StoreID) ->\n\tFootprintOffset = get_offset(Offset),\n\tar_sync_record:delete(FootprintOffset, FootprintOffset - 1, ar_data_sync_footprints, StoreID).\n\n%% @doc Convert a list of footprint intervals to a list of intervals.\n-spec get_intervals_from_footprint_intervals(FootprintIntervals :: term()) -> term().\nget_intervals_from_footprint_intervals(FootprintIntervals) ->\n\tget_intervals_from_footprint_intervals(ar_intervals:to_list(FootprintIntervals), ar_intervals:new()).\n\n%% @doc Get the number of footprints contained in a partition.\n-spec get_footprints_per_partition() -> non_neg_integer().\nget_footprints_per_partition() ->\n\t?REPLICA_2_9_ENTROPY_COUNT div ?COMPOSITE_PACKING_SUB_CHUNK_COUNT.\n\n%% @doc Return true if a chunk containing the given Offset (=< EndOffset, > StartOffset)\n%% is found in the footprint record.\n-spec is_recorded(Offset :: 
non_neg_integer(), StoreID :: string()) -> boolean().\nis_recorded(Offset, StoreID) ->\n\tFootprintOffset = get_offset(Offset),\n\tar_sync_record:is_recorded(FootprintOffset, ar_data_sync_footprints, StoreID).\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nget_footprint_size() ->\n\t?REPLICA_2_9_ENTROPY_SIZE div ?COMPOSITE_PACKING_SUB_CHUNK_SIZE.\n\nget_chunks_per_partition() ->\n\tFootprintSize = get_footprint_size(),\n\tar_util:pad_to_closest_multiple_equal_or_above(?PARTITION_SIZE, ?DATA_CHUNK_SIZE * FootprintSize) div ?DATA_CHUNK_SIZE.\n\ncollect_intervals(Start, End, Packing, StoreID) ->\n\tcollect_intervals(Start, End, Packing, StoreID, ar_intervals:new()).\n\ncollect_intervals(Start, End, _Packing, _StoreID, Intervals) when Start >= End ->\n\tIntervals;\ncollect_intervals(Start, End, Packing, StoreID, Intervals) ->\n\tQuery =\n\t\tcase Packing of\n\t\t\tany ->\n\t\t\t\tar_sync_record:get_next_synced_interval(Start, End,\n\t\t\t\t\tar_data_sync_footprints, StoreID);\n\t\t\tPacking ->\n\t\t\t\tar_sync_record:get_next_synced_interval(Start, End,\n\t\t\t\t\t\tPacking, ar_data_sync_footprints, StoreID)\n\t\tend,\n\tcase Query of\n\t\tnot_found ->\n\t\t\tIntervals;\n\t\t{End2, Start2} ->\n\t\t\tEnd3 = min(End2, End),\n\t\t\tStart3 = max(Start2, Start),\n\t\t\tcollect_intervals(End3, End, Packing, StoreID,\n\t\t\t\t\tar_intervals:add(Intervals, End3, Start3))\n\tend.\n\ncollect_unsynced_intervals(Start, End, StoreID) ->\n\tcollect_unsynced_intervals(Start, End, StoreID, ar_intervals:new()).\n\ncollect_unsynced_intervals(Start, End, _StoreID, Intervals) when Start >= End ->\n\tIntervals;\ncollect_unsynced_intervals(Start, End, StoreID, Intervals) ->\n\tQuery = ar_sync_record:get_next_unsynced_interval(Start, End, ar_data_sync_footprints, StoreID),\n\tcase Query of\n\t\tnot_found ->\n\t\t\tIntervals;\n\t\t{End2, Start2} ->\n\t\t\tEnd3 = min(End2, End),\n\t\t\tStart3 = max(Start2, Start),\n\t\t\tcollect_unsynced_intervals(End3, End, StoreID,\n\t\t\t\t\tar_intervals:add(Intervals, End3, Start3))\n\tend.\n\nget_intervals_from_footprint_intervals([], Intervals) ->\n\tIntervals;\nget_intervals_from_footprint_intervals([{End, Start} | Rest], Intervals) ->\n\tIntervals2 = get_intervals_from_footprint_intervals(Start, End, Intervals),\n\tget_intervals_from_footprint_intervals(Rest, Intervals2).\n\nget_intervals_from_footprint_intervals(Start, End, Intervals) when Start >= End ->\n\tIntervals;\nget_intervals_from_footprint_intervals(Start, End, Intervals) ->\n\tOffset = get_padded_offset_from_footprint_offset(Start + 1),\n\tIntervals2 = ar_intervals:add(Intervals, Offset, Offset - ?DATA_CHUNK_SIZE),\n\tget_intervals_from_footprint_intervals(Start + 1, End, Intervals2).\n\n%%%===================================================================\n%%% Tests.\n%%%===================================================================\n\n-ifdef(AR_TEST).\n\nget_offset_test() ->\n\t%% The first chunk of the first footprint.\n\t?assertEqual(1, get_offset(?DATA_CHUNK_SIZE)),\n\t%% The first chunk of the second footprint.\n\t?assertEqual(5, get_offset(?DATA_CHUNK_SIZE * 2)),\n\t%% The second chunk of the first footprint.\n\t?assertEqual(2, get_offset(?DATA_CHUNK_SIZE * 3)),\n\t%% The second chunk of the second footprint.\n\t?assertEqual(6, get_offset(?DATA_CHUNK_SIZE * 4)),\n\t%% The third chunk of the first footprint.\n\t?assertEqual(3, get_offset(?DATA_CHUNK_SIZE * 5)),\n\t%% The third 
chunk of the second footprint.\n\t?assertEqual(7, get_offset(?DATA_CHUNK_SIZE * 6)),\n\t%% The fourth chunk of the first footprint.\n\t?assertEqual(4, get_offset(?DATA_CHUNK_SIZE * 7)),\n\t%% The fourth chunk of the second footprint.\n\t?assertEqual(8, get_offset(?DATA_CHUNK_SIZE * 8)),\n\t%% The first chunk of the first footprint of the second partition.\n\t?assertEqual(9, get_offset(?DATA_CHUNK_SIZE * 9)),\n\t%% The first chunk of the second footprint of the second partition.\n\t?assertEqual(13, get_offset(?DATA_CHUNK_SIZE * 10)),\n\t%% The second chunk of the first footprint of the second partition.\n\t?assertEqual(10, get_offset(?DATA_CHUNK_SIZE * 11)),\n\t%% The second chunk of the second footprint of the second partition.\n\t?assertEqual(14, get_offset(?DATA_CHUNK_SIZE * 12)),\n\t%% The third chunk of the first footprint of the second partition.\n\t?assertEqual(11, get_offset(?DATA_CHUNK_SIZE * 13)),\n\t%% The third chunk of the second footprint of the second partition.\n\t?assertEqual(15, get_offset(?DATA_CHUNK_SIZE * 14)),\n\t%% The fourth chunk of the first footprint of the second partition.\n\t?assertEqual(12, get_offset(?DATA_CHUNK_SIZE * 15)),\n\t%% The fourth chunk of the second footprint of the second partition.\n\t?assertEqual(16, get_offset(?DATA_CHUNK_SIZE * 16)),\n\t%% The first chunk of the first footprint of the third partition.\n\t?assertEqual(17, get_offset(?DATA_CHUNK_SIZE * 17)),\n\t%% The first chunk of the second footprint of the third partition.\n\t?assertEqual(21, get_offset(?DATA_CHUNK_SIZE * 18)),\n\t%% The second chunk of the first footprint of the third partition.\n\t?assertEqual(18, get_offset(?DATA_CHUNK_SIZE * 19)).\n\nget_padded_offset_from_footprint_offset_test() ->\n\t?assertEqual(262144, get_padded_offset_from_footprint_offset(1)),\n\t?assertEqual(786432, get_padded_offset_from_footprint_offset(2)),\n\t?assertEqual(1310720, get_padded_offset_from_footprint_offset(3)),\n\t?assertEqual(1835008, get_padded_offset_from_footprint_offset(4)),\n\t?assertEqual(524288, get_padded_offset_from_footprint_offset(5)),\n\t?assertEqual(1048576, get_padded_offset_from_footprint_offset(6)),\n\t?assertEqual(1572864, get_padded_offset_from_footprint_offset(7)),\n\t?assertEqual(2097152, get_padded_offset_from_footprint_offset(8)),\n\t?assertEqual(2359296, get_padded_offset_from_footprint_offset(9)),\n\t?assertEqual(2883584, get_padded_offset_from_footprint_offset(10)),\n\n\t79280870522880 = ar_block:get_chunk_padded_offset(79280870522880),\n\t?assertEqual(317123481, ar_footprint_record:get_offset(79280870522880)),\n\t?assertEqual(79280870522880, ar_footprint_record:get_padded_offset_from_footprint_offset(317123481)).\n\nget_offset_get_intervals_from_footprint_intervals_reversal_test() ->\n\tOffsets = [?DATA_CHUNK_SIZE, ?DATA_CHUNK_SIZE * 2, ?DATA_CHUNK_SIZE * 3, ?DATA_CHUNK_SIZE * 4,\n\t\t\t?DATA_CHUNK_SIZE * 8, ?DATA_CHUNK_SIZE * 9],\n\t[get_offset_get_intervals_from_footprint_intervals_reversal(Offset) || Offset <- Offsets].\n\nget_offset_get_intervals_from_footprint_intervals_reversal(ByteOffset) ->\n\tFootprintOffset = get_offset(ByteOffset),\n\n\tFootprintInterval = ar_intervals:from_list([{FootprintOffset, FootprintOffset - 1}]),\n\tResultingByteIntervals = get_intervals_from_footprint_intervals(FootprintInterval),\n\t[{GotEnd, GotStart}] = ar_intervals:to_list(ResultingByteIntervals),\n\n\t?assertEqual(ByteOffset, GotEnd),\n\t?assertEqual(ByteOffset - ?DATA_CHUNK_SIZE, GotStart).\n\nget_unsynced_intervals_test_() 
->\n\tar_test_node:test_with_mocked_functions(\n\t\t[{ar_storage_module, get_by_id, fun(test_unsynced_store) -> test_unsynced_store end}],\n\t\tfun() ->\n\t\t\t%% Set up a test sync record server.\n\t\t\tTestStoreID = test_unsynced_store,\n\t\t\tTestProcessName = list_to_atom(\"ar_sync_record_\" ++ atom_to_list(TestStoreID)),\n\n\t\t\t%% Initialize sync_records ETS table if it does not exist.\n\t\t\tcase ets:info(sync_records) of\n\t\t\t\tundefined ->\n\t\t\t\t\tets:new(sync_records, [named_table, public, {read_concurrency, true}]);\n\t\t\t\t_ ->\n\t\t\t\t\t%% Clear existing data from previous tests.\n\t\t\t\t\tets:delete_all_objects(sync_records)\n\t\t\tend,\n\n\t\t\t%% Start the sync record process.\n\t\t\tcase whereis(TestProcessName) of\n\t\t\t\tundefined ->\n\t\t\t\t\t{ok, _Pid} = ar_sync_record:start_link(TestProcessName, TestStoreID);\n\t\t\t\t_ ->\n\t\t\t\t\tok\n\t\t\tend,\n\n\t\t\tPartition = 0,\n\t\t\tFootprint = 0,\n\n\t\t\t%% Get unsynced intervals before adding any data.\n\t\t\tUnsyncedBefore = get_unsynced_intervals(Partition, Footprint, TestStoreID),\n\t\t\tUnsyncedBeforeList = ar_intervals:to_list(UnsyncedBefore),\n\t\t\t?assertEqual([{4, 0}], UnsyncedBeforeList),\n\n\t\t\t%% Add some data to the footprint.\n\t\t\t%% This should map to partition 0, footprint 0.\n\t\t\tPaddedOffset = ?DATA_CHUNK_SIZE,\n\t\t\tPacking = unpacked,\n\t\t\tok = add(PaddedOffset, Packing, TestStoreID),\n\n\t\t\tUnsyncedAfter = get_unsynced_intervals(Partition, Footprint, TestStoreID),\n\t\t\tUnsyncedAfterList = ar_intervals:to_list(UnsyncedAfter),\n\t\t\t?assertEqual([{4, 1}], UnsyncedAfterList)\n\t\tend).\n\nget_intervals_test_() ->\n\tar_test_node:test_with_mocked_functions(\n\t\t[{ar_storage_module, get_by_id, fun(test_intervals_store) -> test_intervals_store end}],\n\t\tfun() ->\n\t\t\t%% Set up a test sync record server.\n\t\t\tTestStoreID = test_intervals_store,\n\t\t\tTestProcessName = list_to_atom(\"ar_sync_record_\" ++ atom_to_list(TestStoreID)),\n\n\t\t\t%% Initialize sync_records ETS table if it does not exist.\n\t\t\tcase ets:info(sync_records) of\n\t\t\t\tundefined ->\n\t\t\t\t\tets:new(sync_records, [named_table, public, {read_concurrency, true}]);\n\t\t\t\t_ ->\n\t\t\t\t\t%% Clear existing data from previous tests.\n\t\t\t\t\tets:delete_all_objects(sync_records)\n\t\t\tend,\n\n\t\t\t%% Start the sync record process.\n\t\t\tcase whereis(TestProcessName) of\n\t\t\t\tundefined ->\n\t\t\t\t\t{ok, _Pid} = ar_sync_record:start_link(TestProcessName, TestStoreID);\n\t\t\t\t_ ->\n\t\t\t\t\tok\n\t\t\tend,\n\n\t\t\tPacking = unpacked,\n\t\t\tar_sync_record:add(32, 0,\n\t\t\t\t\tPacking, ar_data_sync_footprints, TestStoreID),\n\n\t\t\tPartition = 0,\n\t\t\tFootprint = 0,\n\t\t\tSyncedIntervals = get_intervals(Partition, Footprint, TestStoreID),\n\t\t\tSyncedIntervalsList = ar_intervals:to_list(SyncedIntervals),\n\t\t\t?assertEqual([{4, 0}], SyncedIntervalsList),\n\t\t\tPartition2 = 0,\n\t\t\tFootprint2 = 1,\n\t\t\tSyncedIntervals2 = get_intervals(Partition2, Footprint2, TestStoreID),\n\t\t\tSyncedIntervalsList2 = ar_intervals:to_list(SyncedIntervals2),\n\t\t\t?assertEqual([{8, 4}], SyncedIntervalsList2),\n\t\t\tPartition3 = 0,\n\t\t\tFootprint3 = 2,\n\t\t\tSyncedIntervals3 = get_intervals(Partition3, Footprint3, TestStoreID),\n\t\t\tSyncedIntervalsList3 = ar_intervals:to_list(SyncedIntervals3),\n\t\t\t?assertEqual([], SyncedIntervalsList3),\n\t\t\tPartition4 = 1,\n\t\t\tFootprint4 = 0,\n\t\t\tSyncedIntervals4 = get_intervals(Partition4, Footprint4, TestStoreID),\n\t\t\tSyncedIntervalsList4 = 
ar_intervals:to_list(SyncedIntervals4),\n\t\t\t?assertEqual([{12, 8}], SyncedIntervalsList4),\n\t\t\tPartition5 = 1,\n\t\t\tFootprint5 = 1,\n\t\t\tSyncedIntervals5 = get_intervals(Partition5, Footprint5, TestStoreID),\n\t\t\tSyncedIntervalsList5 = ar_intervals:to_list(SyncedIntervals5),\n\t\t\t?assertEqual([{16, 12}], SyncedIntervalsList5),\n\t\t\tPartition6 = 1,\n\t\t\tFootprint6 = 2,\n\t\t\tSyncedIntervals6 = get_intervals(Partition6, Footprint6, TestStoreID),\n\t\t\tSyncedIntervalsList6 = ar_intervals:to_list(SyncedIntervals6),\n\t\t\t?assertEqual([], SyncedIntervalsList6),\n\t\t\tPartition7 = 2,\n\t\t\tFootprint7 = 0,\n\t\t\tSyncedIntervals7 = get_intervals(Partition7, Footprint7, TestStoreID),\n\t\t\tSyncedIntervalsList7 = ar_intervals:to_list(SyncedIntervals7),\n\t\t\t?assertEqual([{20, 16}], SyncedIntervalsList7)\n\t\tend).\n\nget_offset_get_padded_offset_from_footprint_offset_reversal_test() ->\n\tOffsets = [\n\t\t?DATA_CHUNK_SIZE,\n\t\t?DATA_CHUNK_SIZE * 2,\n\t\t?DATA_CHUNK_SIZE * 3,\n\t\t?PARTITION_SIZE,\n\t\t?DATA_CHUNK_SIZE * 8,\n\t\t?DATA_CHUNK_SIZE * 9,\n\t\t?PARTITION_SIZE * 2,\n\t\t?PARTITION_SIZE * 3,\n\t\t?PARTITION_SIZE * 4,\n\t\t?PARTITION_SIZE * 5,\n\t\t?PARTITION_SIZE * 6,\n\t\t?PARTITION_SIZE * 7,\n\t\t?PARTITION_SIZE * 8,\n\t\t?PARTITION_SIZE * 9,\n\t\t?PARTITION_SIZE * 10,\n\t\t?PARTITION_SIZE * 11,\n\t\t?PARTITION_SIZE * 12,\n\t\t?PARTITION_SIZE * 13,\n\t\t?PARTITION_SIZE * 6249,\n\t\t?PARTITION_SIZE * 6250,\n\t\t?PARTITION_SIZE * 6249 + ?DATA_CHUNK_SIZE,\n\t\t?PARTITION_SIZE * 6250 + ?DATA_CHUNK_SIZE,\n\t\t?PARTITION_SIZE * 6249 + ?DATA_CHUNK_SIZE * 2,\n\t\t?PARTITION_SIZE * 6250 + ?DATA_CHUNK_SIZE * 2,\n\t\t?PARTITION_SIZE * 6249 + ?DATA_CHUNK_SIZE * 8,\n\t\t?PARTITION_SIZE * 6250 + ?DATA_CHUNK_SIZE * 8,\n\t\t?PARTITION_SIZE * 6249 + ?DATA_CHUNK_SIZE * 9\n\t],\n\t[get_offset_get_padded_offset_from_footprint_offset_reversal(F) || F <- Offsets],\n\tok.\n\nget_offset_get_padded_offset_from_footprint_offset_reversal(Offset) ->\n\tFootprintOffset = get_offset(Offset),\n\tPaddedEndOffset = get_padded_offset_from_footprint_offset(FootprintOffset),\n\t?assertEqual(ar_block:get_chunk_padded_offset(Offset), PaddedEndOffset).\n\nget_intervals_from_footprint_intervals_test() ->\n\tTestCases =\n\t[\n\t\t{[], [], \"Empty\"},\n\t\t{[{1, 0}], [{?DATA_CHUNK_SIZE, 0}], \"One chunk\"},\n\t\t{[{2, 0}], [\n\t\t\t{?DATA_CHUNK_SIZE, 0}, {?DATA_CHUNK_SIZE * 3, ?DATA_CHUNK_SIZE * 2}], \"Two chunks\"},\n\t\t{[{4, 0}], [\n\t\t\t{?DATA_CHUNK_SIZE, 0},\n\t\t\t{?DATA_CHUNK_SIZE * 3, ?DATA_CHUNK_SIZE * 2}, \n\t\t\t{?DATA_CHUNK_SIZE * 5, ?DATA_CHUNK_SIZE * 4},\n\t\t\t{?DATA_CHUNK_SIZE * 7, ?DATA_CHUNK_SIZE * 6}], \"Full footprint\"},\n\t\t{[{5, 0}], [\n\t\t\t{?DATA_CHUNK_SIZE * 5, ?DATA_CHUNK_SIZE * 4},\n\t\t\t{?DATA_CHUNK_SIZE * 3, 0}, \n\t\t\t{?DATA_CHUNK_SIZE * 7, ?DATA_CHUNK_SIZE * 6}], \"Footprint wraparound\"},\n\t\t{[{6, 3}], [\n\t\t\t{?DATA_CHUNK_SIZE * 7, ?DATA_CHUNK_SIZE * 6},\n\t\t\t{?DATA_CHUNK_SIZE * 2, ?DATA_CHUNK_SIZE * 1},\n\t\t\t{?DATA_CHUNK_SIZE * 4, ?DATA_CHUNK_SIZE * 3}], \"Bits of two footprints\"},\n\t\t{[{1, 0}, {3, 2}], [\n\t\t\t{?DATA_CHUNK_SIZE, 0},\n\t\t\t{?DATA_CHUNK_SIZE * 5,\n\t\t\t?DATA_CHUNK_SIZE * 4}], \"Two chunks with a hole\"},\n\t\t{[{8, 0}], [\n\t\t\t{?DATA_CHUNK_SIZE * 8, 0}], \"Completely covered partition\"},\n\t\t{[{9, 0}], [\n\t\t\t{?DATA_CHUNK_SIZE * 9, 0}], \"Completely covered partition plus one chunk\"},\n\t\t{[{9, 0}, {13, 12}], [\n\t\t\t{?DATA_CHUNK_SIZE * 10, 0}], \"Completely covered partition plus two chunks\"},\n\t\t{[{9, 0}, {13, 12}, {15, 
14}], [\n\t\t\t{?DATA_CHUNK_SIZE * 10, 0},\n\t\t\t{?DATA_CHUNK_SIZE * 14, ?DATA_CHUNK_SIZE * 13}],\n\t\t\t\"Completely covered partition plus three chunks\"}\n\t],\n\ttest_get_intervals_from_footprint_intervals(TestCases).\n\ntest_get_intervals_from_footprint_intervals([]) ->\n\tok;\ntest_get_intervals_from_footprint_intervals([{Input, Expected, Title} | Rest]) ->\n\t?assertEqual(ar_intervals:from_list(Expected),\n\t\t\tget_intervals_from_footprint_intervals(ar_intervals:from_list(Input)), Title),\n\ttest_get_intervals_from_footprint_intervals(Rest).\n\n-endif."
  },
  {
    "path": "apps/arweave/src/ar_fork.erl",
    "content": "%%%\n%%% @doc The module defines Arweave hard forks' heights.\n%%%\n\n-module(ar_fork).\n\n-export([height_1_6/0, height_1_7/0, height_1_8/0, height_1_9/0, height_2_0/0, height_2_2/0,\n\t\theight_2_3/0, height_2_4/0, height_2_5/0, height_2_6/0, height_2_6_8/0,\n\t\theight_2_7/0, height_2_7_1/0, height_2_7_2/0,\n\t\theight_2_8/0, height_2_9/0]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave/include/ar_consensus.hrl\").\n\n-ifdef(FORKS_RESET).\nheight_1_6() ->\n\t0.\n-else.\nheight_1_6() ->\n\t95000.\n-endif.\n\n-ifdef(FORKS_RESET).\nheight_1_7() ->\n\t0.\n-else.\nheight_1_7() ->\n\t235200. % Targeting 2019-07-08 UTC\n-endif.\n\n-ifdef(FORKS_RESET).\nheight_1_8() ->\n\t0.\n-else.\nheight_1_8() ->\n\t269510. % Targeting 2019-08-29 UTC\n-endif.\n\n-ifdef(FORKS_RESET).\nheight_1_9() ->\n\t0.\n-else.\nheight_1_9() ->\n\t315700. % Targeting 2019-11-04 UTC\n-endif.\n\n-ifdef(FORKS_RESET).\nheight_2_0() ->\n\t0.\n-else.\nheight_2_0() ->\n\t422250. % Targeting 2020-04-09 10:00 UTC\n-endif.\n\n-ifdef(FORKS_RESET).\nheight_2_2() ->\n\t0.\n-else.\nheight_2_2() ->\n\t552180. % Targeting 2020-10-21 13:00 UTC\n-endif.\n\n-ifdef(FORKS_RESET).\nheight_2_3() ->\n\t0.\n-else.\nheight_2_3() ->\n\t591140. % Targeting 2020-12-21 11:00 UTC\n-endif.\n\n-ifdef(FORKS_RESET).\nheight_2_4() ->\n\t0.\n-else.\nheight_2_4() ->\n\t633720. % Targeting 2021-02-24 11:50 UTC\n-endif.\n\n-ifdef(FORKS_RESET).\nheight_2_5() ->\n\t0.\n-else.\nheight_2_5() ->\n\t812970.\n-endif.\n\n-ifdef(FORK_2_6_HEIGHT).\nheight_2_6() ->\n\t?FORK_2_6_HEIGHT.\n-else.\n\t-ifdef(FORKS_RESET).\n\t\theight_2_6() ->\n\t\t\t0.\n\t-else.\n\t\theight_2_6() ->\n\t\t\t1132210. % Targeting 2023-03-06 14:00 UTC\n\t-endif.\n-endif.\n\n-ifdef(FORK_2_6_8_HEIGHT).\nheight_2_6_8() ->\n\t?FORK_2_6_8_HEIGHT.\n-else.\n\t-ifdef(FORKS_RESET).\n\t\theight_2_6_8() ->\n\t\t\t0.\n\t-else.\n\t\theight_2_6_8() ->\n\t\t\t1189560. % Targeting 2023-05-30 16:00 UTC\n\t-endif.\n-endif.\n\n-ifdef(FORK_2_7_HEIGHT).\nheight_2_7() ->\n\t?FORK_2_7_HEIGHT.\n-else.\n\t-ifdef(FORKS_RESET).\n\t\theight_2_7() ->\n\t\t\t0.\n\t-else.\n\t\theight_2_7() ->\n\t\t\t1275480. % Targeting 2023-10-04 14:00 UTC\n\t-endif.\n-endif.\n\n-ifdef(FORK_2_7_1_HEIGHT).\nheight_2_7_1() ->\n\t?FORK_2_7_1_HEIGHT.\n-else.\n\t-ifdef(FORKS_RESET).\n\t\theight_2_7_1() ->\n\t\t\t0.\n\t-else.\n\t\theight_2_7_1() ->\n\t\t\t1316410. % Targeting 2023-12-05 14:00 UTC\n\t-endif.\n-endif.\n\n-ifdef(FORK_2_7_2_HEIGHT).\nheight_2_7_2() ->\n\t?FORK_2_7_2_HEIGHT.\n-else.\n\t-ifdef(FORKS_RESET).\n\t\theight_2_7_2() ->\n\t\t\t0.\n\t-else.\n\t\theight_2_7_2() ->\n\t\t\t1391330. % Targeting 2024-03-26 14:00 UTC\n\t-endif.\n-endif.\n\n-ifdef(FORK_2_8_HEIGHT).\nheight_2_8() ->\n\t?FORK_2_8_HEIGHT.\n-else.\n\t-ifdef(FORKS_RESET).\n\t\theight_2_8() ->\n\t\t\t0.\n\t-else.\n\t\theight_2_8() ->\n\t\t\t1547120. % Targeting 2024-11-13 14:00 UTC\n\t-endif.\n-endif.\n\n-ifdef(FORK_2_9_HEIGHT).\nheight_2_9() ->\n\t?FORK_2_9_HEIGHT.\n-else.\n\t-ifdef(FORKS_RESET).\n\t\theight_2_9() ->\n\t\t\t0.\n\t-else.\n\t\theight_2_9() ->\n\t\t\t1602350. % Targeting 2025-02-03 14:00 UTC\n\t-endif.\n-endif.\n"
  },
  {
    "path": "apps/arweave/src/ar_fraction.erl",
    "content": "%%% @doc The module with utilities for performing computations on fractions.\n-module(ar_fraction).\n\n-export([pow/2, natural_exponent/2, factorial/1, minimum/2, maximum/2, multiply/2, reduce/2,\n\t\tadd/2]).\n\n%%%===================================================================\n%%% Types.\n%%%===================================================================\n\n-type fraction() :: {integer(), integer()}.\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Compute the given power of the given integer.\n-spec pow(X::integer(), P::integer()) -> integer().\npow(_X, 0) ->\n\t1;\npow(X, 1) ->\n\tX;\npow(X, 2) ->\n\tX * X;\npow(X, 3) ->\n\tX * X * X;\npow(X, N) ->\n\tcase N rem 2 of\n\t\t0 ->\n\t\t\tpow(X * X, N div 2);\n\t\t1 ->\n\t\t\tX * pow(X * X, N div 2)\n\tend.\n\n%% @doc Compute the X's power of e by summing up the terms of the Taylor series where\n%% the last term is a multiple of X to the power of P.\n-spec natural_exponent(X::fraction(), P::integer()) -> fraction().\nnatural_exponent({0, _Divisor}, _P) ->\n\t{1, 1};\nnatural_exponent(X, P) ->\n\t{natural_exponent_dividend(X, P, 0, 1), natural_exponent_divisor(X, P)}.\n\n%% @doc Return the smaller of D1 and D2.\n-spec minimum(D1::fraction(), D2::fraction()) -> fraction().\nminimum({Dividend1, Divisor1} = D1, {Dividend2, Divisor2} = D2) ->\n\tcase Dividend1 * Divisor2 < Dividend2 * Divisor1 of\n\t\ttrue ->\n\t\t\tD1;\n\t\tfalse ->\n\t\t\tD2\n\tend.\n\n%% @doc Return the bigger of D1 and D2.\n-spec maximum(D1::fraction(), D2::fraction()) -> fraction().\nmaximum(D1, D2) ->\n\tcase minimum(D1, D2) of\n\t\tD1 ->\n\t\t\tD2;\n\t\tD2 ->\n\t\t\tD1\n\tend.\n\n%% @doc Return the product of D1 and D2.\n-spec multiply(D1::fraction(), D2::fraction()) -> fraction().\nmultiply({Dividend1, Divisor1}, {Dividend2, Divisor2}) ->\n\t{Dividend1 * Dividend2, Divisor1 * Divisor2}.\n\n%% @doc Reduce the fraction until both the divisor and dividend are smaller than\n%% or equal to Max. 
Return at most Max or at least 1 / Max.\n-spec reduce(D::fraction(), Max::integer()) -> fraction().\nreduce({0, Divisor}, _Max) ->\n\t{0, Divisor};\nreduce({Dividend, Divisor}, Max) ->\n\tGCD = gcd(Dividend, Divisor),\n\treduce2({Dividend div GCD, Divisor div GCD}, Max).\n\n%% @doc Return the sum of two fractions.\n-spec add(A::fraction(), B::fraction()) -> fraction().\nadd({Dividend1, Divisor1}, {Dividend2, Divisor2}) ->\n\t{Dividend1 * Divisor2 + Dividend2 * Divisor1, Divisor1 * Divisor2}.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nnatural_exponent_dividend(_X, -1, _K, _M) ->\n\t0;\nnatural_exponent_dividend({Dividend, Divisor} = X, P, K, M) ->\n\tpow(Dividend, P) * pow(Divisor, K) * M + natural_exponent_dividend(X, P - 1, K + 1, M * P).\n\nnatural_exponent_divisor({_Dividend, Divisor}, P) ->\n\tpow(Divisor, P) * factorial(P).\n\nfactorial(0) ->\n\t1;\nfactorial(1) ->\n\t1;\nfactorial(9) ->\n\t362880;\nfactorial(10) ->\n\t3628800;\nfactorial(20) ->\n\t2432902008176640000;\nfactorial(N) ->\n\tN * factorial(N - 1).\n\nreduce2({Dividend, Divisor}, Max) when Dividend > Max ->\n\tcase Divisor div 2 of\n\t\t0 ->\n\t\t\t{Max, 1};\n\t\t_ ->\n\t\t\treduce({Dividend div 2, Divisor div 2}, Max)\n\tend;\nreduce2({Dividend, Divisor}, Max) when Divisor > Max ->\n\tcase Dividend div 2 of\n\t\t0 ->\n\t\t\t{1, Max};\n\t\t_ ->\n\t\t\treduce({Dividend div 2, Divisor div 2}, Max)\n\tend;\nreduce2(R, _Max) ->\n\tR.\n\ngcd(A, B) when B > A ->\n\tgcd(B, A);\ngcd(A, B) when A rem B > 0 ->\n\tgcd(B, A rem B);\ngcd(A, B) when A rem B =:= 0 ->\n\tB.\n"
  },
  {
    "path": "apps/arweave/src/ar_global_sync_record.erl",
    "content": "-module(ar_global_sync_record).\n\n-behaviour(gen_server).\n\n-include(\"ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include(\"ar_data_discovery.hrl\").\n-include(\"ar_sync_buckets.hrl\").\n\n-export([start_link/0, get_serialized_sync_record/1, get_serialized_sync_buckets/0,\n\t\tget_serialized_footprint_buckets/0]).\n\n-export([init/1, handle_cast/2, handle_call/3, handle_info/2, terminate/2]).\n\n%% The frequency in seconds of updating serialized sync buckets.\n-ifdef(AR_TEST).\n-define(UPDATE_SERIALIZED_SYNC_BUCKETS_FREQUENCY_S, 2).\n-else.\n-define(UPDATE_SERIALIZED_SYNC_BUCKETS_FREQUENCY_S, 300).\n-endif.\n\n-record(state, {\n\tsync_record,\n\tsync_buckets,\n\tfootprint_record,\n\tfootprint_buckets\n}).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Start the server.\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\n%% @doc Return a set of data intervals from all configured storage modules.\n%%\n%% Args is a map with the following keys\n%%\n%% format\t\t\trequired\tetf or json\t\tserialize in Erlang Term Format or JSON\n%% random_subset\toptional\tany()\t\t\tpick a random subset if the key is present\n%% start\t\t\toptional\tinteger()\t\tpick intervals with right bound >= start\n%% right_bound\t\toptional\tinteger()\t\tpick intervals with right bound <= right_bound\n%% limit\t\t\toptional\tinteger()\t\tthe number of intervals to pick\n%%\n%% ?MAX_SHARED_SYNCED_INTERVALS_COUNT is both the default and the maximum value for limit.\n%% If random_subset key is present, a random subset of intervals is picked, the start key is\n%% ignored. If random_subset key is not present, the start key must be provided.\nget_serialized_sync_record(Args) ->\n\tcase catch gen_server:call(?MODULE, {get_serialized_sync_record, Args}, 10000) of\n\t\t{'EXIT', {timeout, {gen_server, call, _}}} ->\n\t\t\t{error, timeout};\n\t\tReply ->\n\t\t\tReply\n\tend.\n\n%% @doc Return an ETF-serialized compact but imprecise representation of the synced data -\n%% a bucket size and a map where every key is the sequence number of the bucket, every value -\n%% the percentage of data synced in the reported bucket.\nget_serialized_sync_buckets() ->\n\tcase ets:lookup(?MODULE, serialized_sync_buckets) of\n\t\t[] ->\n\t\t\t{error, not_initialized};\n\t\t[{_, SerializedSyncBuckets}] ->\n\t\t\t{ok, SerializedSyncBuckets}\n\tend.\n\n%% @doc Return an ETF-serialized compact but imprecise representation of the synced footprints -\n%% a bucket size and a map where every key is the sequence number of the bucket, every value -\n%% the percentage of data synced in the reported bucket. Every bucket contains one or more\n%% footprints. 
Note that while footprints in the buckets are adjacent in the sense that\n%% the first chunk in the first footprint is adjacent to the first chunk in the second footprint,\n%% chunks are generally not adjacent because of how the logic of 2.9 footprints goes.\nget_serialized_footprint_buckets() ->\n\tcase ets:lookup(?MODULE, serialized_footprint_buckets) of\n\t\t[] ->\n\t\t\t{error, not_initialized};\n\t\t[{_, SerializedFootprintBuckets}] ->\n\t\t\t{ok, SerializedFootprintBuckets}\n\tend.\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\n\ninit([]) ->\n\tok = ar_events:subscribe(sync_record),\n\tSyncRecord = get_sync_record(),\n\tSyncBuckets = cache_and_get_sync_buckets(SyncRecord, serialized_sync_buckets,\n\t\t\tar_sync_buckets:new()),\n\tFootprintRecord = get_footprint_record(),\n\tFootprintBuckets = cache_and_get_sync_buckets(FootprintRecord,\n\t\t\tserialized_footprint_buckets,\n\t\t\tar_sync_buckets:new(?NETWORK_FOOTPRINT_BUCKET_SIZE)),\n\t{ok, #state{ sync_record = SyncRecord, sync_buckets = SyncBuckets,\n\t\t\tfootprint_record = FootprintRecord, footprint_buckets = FootprintBuckets }}.\n\nhandle_call({get_serialized_sync_record, Args}, _From, State) ->\n\t#state{ sync_record = SyncRecord } = State,\n\tLimit = min(maps:get(limit, Args, ?MAX_SHARED_SYNCED_INTERVALS_COUNT),\n\t\t\t?MAX_SHARED_SYNCED_INTERVALS_COUNT),\n\t{reply, {ok, ar_intervals:serialize(Args#{ limit => Limit }, SyncRecord)}, State};\n\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {request, Request}]),\n\t{reply, ok, State}.\n\nhandle_cast({update_serialized_sync_buckets, serialized_sync_buckets = Key}, State) ->\n\t#state{ sync_buckets = SyncBuckets } = State,\n\t{SyncBuckets2, SerializedSyncBuckets} = ar_sync_buckets:serialize(SyncBuckets,\n\t\t\t?MAX_SYNC_BUCKETS_SIZE),\n\tets:insert(?MODULE, {Key, SerializedSyncBuckets}),\n\tar_util:cast_after(?UPDATE_SERIALIZED_SYNC_BUCKETS_FREQUENCY_S * 1000,\n\t\t\t?MODULE, {update_serialized_sync_buckets, Key}),\n\t{noreply, State#state{ sync_buckets = SyncBuckets2 }};\nhandle_cast({update_serialized_sync_buckets, serialized_footprint_buckets = Key}, State) ->\n\t#state{ footprint_buckets = FootprintBuckets } = State,\n\t{FootprintBuckets2, SerializedFootprintBuckets} = ar_sync_buckets:serialize(\n\t\t\tFootprintBuckets, ?MAX_SYNC_BUCKETS_SIZE),\n\tets:insert(?MODULE, {Key, SerializedFootprintBuckets}),\n\tar_util:cast_after(?UPDATE_SERIALIZED_SYNC_BUCKETS_FREQUENCY_S * 1000,\n\t\t\t?MODULE, {update_serialized_sync_buckets, Key}),\n\t{noreply, State#state{ footprint_buckets = FootprintBuckets2 }};\n\nhandle_cast(Cast, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {cast, Cast}]),\n\t{noreply, State}.\n\nhandle_info({event, sync_record, {add_range, Start, End, ar_data_sync,\n\t\t#{ packing := Packing }}}, State) ->\n\t#state{ sync_record = SyncRecord, sync_buckets = SyncBuckets } = State,\n\tcase Packing of\n\t\t{replica_2_9, _} ->\n\t\t\t%% Replica 2.9 data is recorded in the footprint record. 
It is synced\n\t\t\t%% footprint by footprint (not left to right).\n\t\t\t{noreply, State};\n\t\t_ ->\n\t\t\tSyncRecord2 = ar_intervals:add(SyncRecord, End, Start),\n\t\t\tSyncBuckets2 = ar_sync_buckets:add(End, Start, SyncBuckets),\n\t\t\t{noreply, State#state{ sync_record = SyncRecord2, sync_buckets = SyncBuckets2 }}\n\tend;\n\nhandle_info({event, sync_record, {add_range, Start, End, ar_data_sync_footprints, _Options}}, State) ->\n\tState2 = update_footprint_data(Start, End, State),\n\t{noreply, State2};\n\nhandle_info({event, sync_record, {global_cut, Offset}}, State) ->\n\t#state{ sync_record = SyncRecord, sync_buckets = SyncBuckets } = State,\n\tSyncRecord2 = ar_intervals:cut(SyncRecord, Offset),\n\tSyncBuckets2 = ar_sync_buckets:cut(Offset, SyncBuckets),\n\t{noreply, State#state{ sync_record = SyncRecord2, sync_buckets = SyncBuckets2 }};\n\nhandle_info({event, sync_record, {global_remove_range, Start, End}}, State) ->\n\t#state{ sync_record = SyncRecord, sync_buckets = SyncBuckets } = State,\n\tSyncRecord2 = ar_intervals:delete(SyncRecord, End, Start),\n\tSyncBuckets2 = ar_sync_buckets:delete(End, Start, SyncBuckets),\n\tState2 = remove_footprint_data(Start, End, State),\n\t{noreply, State2#state{ sync_record = SyncRecord2, sync_buckets = SyncBuckets2 }};\n\nhandle_info({event, sync_record, _}, State) ->\n\t{noreply, State};\n\nhandle_info(Message, State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {module, ?MODULE}, {message, Message}]),\n\t{noreply, State}.\n\nterminate(Reason, _State) ->\n\t?LOG_INFO([{event, terminate}, {module, ?MODULE}, {reason, io_lib:format(\"~p\", [Reason])}]).\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nget_sync_record() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tlists:foldl(\n\t\tfun(Module, Acc) ->\n\t\t\tcase Module of\n\t\t\t\t{_, _, {replica_2_9, _}} ->\n\t\t\t\t\t%% Replica 2.9 data is recorded in the footprint record. 
It is synced\n\t\t\t\t\t%% footprint by footprint (not left to right).\n\t\t\t\t\tAcc;\n\t\t\t\t_ ->\n\t\t\t\t\tStoreID = ar_storage_module:id(Module),\n\t\t\t\t\tar_intervals:union(ar_sync_record:get(ar_data_sync, StoreID), Acc)\n\t\t\tend\n\t\tend,\n\t\tar_intervals:new(),\n\t\t[?DEFAULT_MODULE | Config#config.storage_modules]\n\t).\n\nget_footprint_record() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tlists:foldl(\n\t\tfun(Module, Acc) ->\n\t\t\tStoreID = ar_storage_module:id(Module),\n\t\t\tar_intervals:union(ar_sync_record:get(ar_data_sync_footprints, StoreID), Acc)\n\t\tend,\n\t\tar_intervals:new(),\n\t\tConfig#config.storage_modules\n\t).\n\ncache_and_get_sync_buckets(SyncRecord, Key, SyncBuckets) ->\n\tSyncBuckets2 = ar_sync_buckets:from_intervals(SyncRecord, SyncBuckets),\n\t{SyncBuckets3, SerializedSyncBuckets} = ar_sync_buckets:serialize(SyncBuckets2,\n\t\t\t\t\t?MAX_SYNC_BUCKETS_SIZE),\n\tets:insert(?MODULE, {Key, SerializedSyncBuckets}),\n\tar_util:cast_after(?UPDATE_SERIALIZED_SYNC_BUCKETS_FREQUENCY_S * 1000,\n\t\t\t?MODULE, {update_serialized_sync_buckets, Key}),\n\tSyncBuckets3.\n\nupdate_footprint_data(Start, End, State) when Start >= End ->\n\tState;\nupdate_footprint_data(Start, End, State) ->\n\t#state{ footprint_record = FootprintRecord,\n\t\t\tfootprint_buckets = FootprintBuckets } = State,\n\tFootprintRecord2 = ar_intervals:add(FootprintRecord, Start + 1, Start),\n\tFootprintBuckets2 = ar_sync_buckets:add(Start + 1, Start, FootprintBuckets),\n\tState2 = State#state{ footprint_record = FootprintRecord2,\n\t\tfootprint_buckets = FootprintBuckets2 },\n\tupdate_footprint_data(Start + 1, End, State2).\n\nremove_footprint_data(Start, End, State) when Start >= End ->\n\tState;\nremove_footprint_data(Start, End, State) ->\n\t#state{ footprint_record = FootprintRecord,\n\t\t\tfootprint_buckets = FootprintBuckets } = State,\n\tOffset = ar_footprint_record:get_offset(Start + ?DATA_CHUNK_SIZE),\n\tFootprintRecord2 = ar_intervals:delete(FootprintRecord, Offset, Offset - 1),\n\tFootprintBuckets2 = ar_sync_buckets:delete(Offset, Offset - 1, FootprintBuckets),\n\tState2 = State#state{ footprint_record = FootprintRecord2,\n\t\tfootprint_buckets = FootprintBuckets2 },\n\tremove_footprint_data(Start + ?DATA_CHUNK_SIZE, End, State2)."
  },
  {
    "path": "apps/arweave/src/ar_header_sync.erl",
    "content": "-module(ar_header_sync).\n\n-behaviour(gen_server).\n\n-export([start_link/0, join/3, add_tip_block/2, add_block/1, request_tx_removal/1,\n\t\tremove_block/1]).\n\n-export([init/1, handle_cast/2, handle_call/3, handle_info/2, terminate/2]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"arweave/include/ar_header_sync.hrl\").\n-include_lib(\"arweave/include/ar_data_sync.hrl\").\n-include_lib(\"arweave/include/ar_chunk_storage.hrl\").\n\n%%% This module syncs block and transaction headers and maintains a persisted record of synced\n%%% headers. Headers are synced from latest to earliest.\n\n-record(state, {\n\tblock_index,\n\theight,\n\tsync_record,\n\tretry_queue,\n\tretry_record,\n\tis_disk_space_sufficient\n}).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\n%% @doc Update the tip after the node joins the network.\njoin(Height, RecentBI, Blocks) ->\n\tgen_server:cast(?MODULE, {join, Height, RecentBI, Blocks}).\n\n%% @doc Add a new tip block to the index and storage, record the new recent block index.\nadd_tip_block(B, RecentBI) ->\n\tgen_server:cast(?MODULE, {add_tip_block, B, RecentBI}).\n\n%% @doc Add a block to the index and storage.\nadd_block(B) ->\n\tgen_server:cast(?MODULE, {add_block, B}).\n\n%% @doc Remove the given transaction.\nrequest_tx_removal(TXID) ->\n\tgen_server:cast(?MODULE, {remove_tx, TXID}).\n\n%% @doc Remove the block header with the given Height from the record. The process\n%% will therefore re-sync it later (if there is available disk space).\nremove_block(Height) ->\n\tgen_server:cast(?MODULE, {remove_block, Height}).\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\n\ninit([]) ->\n\t?LOG_INFO([{event, ar_header_sync_start}]),\n\t%% Trap exit to avoid corrupting any open files on quit..\n\tprocess_flag(trap_exit, true),\n\t[ok, ok] = ar_events:subscribe([tx, disksup]),\n\t{ok, Config} = arweave_config:get_env(),\n\tok = ar_kv:open(#{\n\t\tpath => filename:join([Config#config.data_dir, ?ROCKS_DB_DIR, \"ar_header_sync_db\"]),\n\t\tname => ?MODULE}),\n\t{SyncRecord, Height, CurrentBI} =\n\t\tcase ar_storage:read_term(header_sync_state) of\n\t\t\tnot_found ->\n\t\t\t\t{ar_intervals:new(), -1, []};\n\t\t\t{ok, StoredState} ->\n\t\t\t\tStoredState\n\t\tend,\n\tlists:foreach(\n\t\tfun(_) ->\n\t\t\tgen_server:cast(self(), process_item)\n\t\tend,\n\t\tlists:seq(1, Config#config.header_sync_jobs)\n\t),\n\tgen_server:cast(self(), store_sync_state),\n\tets:insert(?MODULE, {synced_blocks, ar_intervals:sum(SyncRecord)}),\n\t{ok,\n\t\t#state{\n\t\t\tsync_record = SyncRecord,\n\t\t\theight = Height,\n\t\t\tblock_index = CurrentBI,\n\t\t\tretry_queue = queue:new(),\n\t\t\tretry_record = ar_intervals:new(),\n\t\t\tis_disk_space_sufficient = true\n\t\t}}.\n\nhandle_cast({join, Height, RecentBI, Blocks}, State) ->\n\t#state{ height = PrevHeight, block_index = CurrentBI,\n\t\t\tsync_record = SyncRecord, retry_record = RetryRecord } = State,\n\tState2 =\n\t\tState#state{\n\t\t\theight = Height,\n\t\t\tblock_index = RecentBI\n\t\t },\n\tStartHeight = PrevHeight - length(CurrentBI) + 1,\n\t{Status, State4} =\n\t\tcase {CurrentBI, 
ar_block_index:get_intersection(StartHeight, CurrentBI)} of\n\t\t\t{[], _} ->\n\t\t\t\t{ok, State2};\n\t\t\t{_, no_intersection} ->\n\t\t\t\tio:format(\"~nWARNING: the stored block index of the header syncing module \"\n\t\t\t\t\t\t\"has no intersection with the \"\n\t\t\t\t\t\t\"new one in the most recent blocks. If you have just started a new \"\n\t\t\t\t\t\t\"weave using the init option, restart from the local state \"\n\t\t\t\t\t\t\"or specify some peers.~n~n\"),\n\t\t\t\t\tinit:stop(1),\n\t\t\t\t\t{error, State2};\n\t\t\t{_, {IntersectionHeight, _}} ->\n\t\t\t\tState3 = State2#state{\n\t\t\t\t\t\tsync_record = ar_intervals:cut(SyncRecord, IntersectionHeight),\n\t\t\t\t\t\tretry_record = ar_intervals:cut(RetryRecord, IntersectionHeight) },\n\t\t\t\tok = store_sync_state(State3),\n\t\t\t\t%% Delete from the kv store only after the sync record is saved - no matter\n\t\t\t\t%% what happens to the process, if a height is in the record, it must be\n\t\t\t\t%% present in the kv store.\n\t\t\t\tok = ar_kv:delete_range(?MODULE, << (IntersectionHeight + 1):256 >>,\n\t\t\t\t\t\t<< (PrevHeight + 1):256 >>),\n\t\t\t\t{ok, State3}\n\t\tend,\n\tcase Status of\n\t\terror ->\n\t\t\t{noreply, State4};\n\t\tok ->\n\t\t\tState5 =\n\t\t\t\tlists:foldl(\n\t\t\t\t\tfun(B, Acc) ->\n\t\t\t\t\t\telement(2, add_block(B, Acc))\n\t\t\t\t\tend,\n\t\t\t\t\tState4,\n\t\t\t\t\tBlocks\n\t\t\t\t),\n\t\t\tok = store_sync_state(State5),\n\t\t\t{noreply, State5}\n\tend;\n\nhandle_cast({add_tip_block, #block{ height = Height } = B, RecentBI}, State) ->\n\t#state{ sync_record = SyncRecord, retry_record = RetryRecord,\n\t\t\tblock_index = CurrentBI, height = PrevHeight } = State,\n\tBaseHeight = get_base_height(CurrentBI, PrevHeight, RecentBI),\n\tState2 = State#state{\n\t\tsync_record = ar_intervals:cut(SyncRecord, BaseHeight),\n\t\tretry_record = ar_intervals:cut(RetryRecord, BaseHeight),\n\t\tblock_index = RecentBI,\n\t\theight = Height\n\t},\n\tState3 = element(2, add_block(B, State2)),\n\tcase store_sync_state(State3) of\n\t\tok ->\n\t\t\t%% Delete from the kv store only after the sync record is saved - no matter\n\t\t\t%% what happens to the process, if a height is in the record, it must be present\n\t\t\t%% in the kv store.\n\t\t\tok = ar_kv:delete_range(?MODULE, << (BaseHeight + 1):256 >>,\n\t\t\t\t\t<< (PrevHeight + 1):256 >>),\n\t\t\t{noreply, State3};\n\t\tError ->\n\t\t\t?LOG_WARNING([{event, failed_to_store_state},\n\t\t\t\t\t{reason, io_lib:format(\"~p\", [Error])}]),\n\t\t\t{noreply, State}\n\tend;\n\nhandle_cast({add_historical_block, _, _, _, _, _},\n\t\t#state{ is_disk_space_sufficient = false } = State) ->\n\tgen_server:cast(self(), process_item),\n\t{noreply, State};\nhandle_cast({add_historical_block, B, H, H2, TXRoot, Backoff}, State) ->\n\tcase add_block(B, State) of\n\t\t{ok, State2} ->\n\t\t\tgen_server:cast(self(), process_item),\n\t\t\t{noreply, State2};\n\t\t{_Error, State2} ->\n\t\t\tgen_server:cast(self(), {failed_to_get_block, H, H2, TXRoot, B#block.height,\n\t\t\t\t\tBackoff}),\n\t\t\t{noreply, State2}\n\tend;\n\nhandle_cast({add_block, B}, State) ->\n\t{noreply, element(2, add_block(B, State))};\n\nhandle_cast(process_item, #state{ is_disk_space_sufficient = false } = State) ->\n\tar_util:cast_after(?CHECK_AFTER_SYNCED_INTERVAL_MS, self(), process_item),\n\t{noreply, State};\nhandle_cast(process_item, #state{ retry_queue = Queue, retry_record = RetryRecord } = State) ->\n\tprometheus_gauge:set(downloader_queue_size, queue:len(Queue)),\n\tQueue2 = process_item(Queue),\n\tState2 = State#state{ 
retry_queue = Queue2 },\n\tcase pick_unsynced_block(State) of\n\t\tnothing_to_sync ->\n\t\t\t{noreply, State2};\n\t\tHeight ->\n\t\t\tcase ar_node:get_block_index_entry(Height) of\n\t\t\t\tnot_joined ->\n\t\t\t\t\t{noreply, State2};\n\t\t\t\tnot_found ->\n\t\t\t\t\t{noreply, State2};\n\t\t\t\t{H, _WeaveSize, TXRoot} ->\n\t\t\t\t\t%% Before 2.0, to compute a block hash, the complete wallet list\n\t\t\t\t\t%% and all the preceding hashes were required. Getting a wallet list\n\t\t\t\t\t%% and a hash list for every historical block to verify it belongs to\n\t\t\t\t\t%% the weave is very costly. Therefore, a list of 2.0 hashes for 1.0\n\t\t\t\t\t%% blocks was computed and stored along with the network client.\n\t\t\t\t\tH2 =\n\t\t\t\t\t\tcase Height < ar_fork:height_2_0() of\n\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\tar_node:get_2_0_hash_of_1_0_block(Height);\n\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\tnot_set\n\t\t\t\t\t\tend,\n\t\t\t\t\t{noreply, State2#state{\n\t\t\t\t\t\t\tretry_queue = enqueue({block, {H, H2, TXRoot, Height}}, Queue2),\n\t\t\t\t\t\t\tretry_record = ar_intervals:add(RetryRecord, Height, Height - 1) }}\n\t\t\tend\n\tend;\n\nhandle_cast({failed_to_get_block, H, H2, TXRoot, Height, Backoff},\n\t\t#state{ retry_queue = Queue } = State) ->\n\tBackoff2 = update_backoff(Backoff),\n\tQueue2 = enqueue({block, {H, H2, TXRoot, Height}}, Backoff2, Queue),\n\tgen_server:cast(self(), process_item),\n\t{noreply, State#state{ retry_queue = Queue2 }};\n\nhandle_cast({remove_tx, TXID}, State) ->\n\t{ok, _Size} = ar_storage:delete_blacklisted_tx(TXID),\n\tar_tx_blacklist:notify_about_removed_tx(TXID),\n\t{noreply, State};\n\nhandle_cast({remove_block, Height}, State) ->\n\t#state{ sync_record = Record } = State,\n\tok = ar_kv:delete(?MODULE, << Height:256 >>),\n\t{noreply, State#state{ sync_record = ar_intervals:delete(Record, Height, Height - 1) }};\n\nhandle_cast(store_sync_state, State) ->\n\tar_util:cast_after(?STORE_HEADER_STATE_FREQUENCY_MS, self(), store_sync_state),\n\tcase store_sync_state(State) of\n\t\tok ->\n\t\t\t{noreply, State};\n\t\tError ->\n\t\t\t?LOG_WARNING([{event, failed_to_store_state},\n\t\t\t\t\t{reason, io_lib:format(\"~p\", [Error])}]),\n\t\t\t{noreply, State}\n\tend;\n\nhandle_cast(Msg, State) ->\n\t?LOG_ERROR([{event, unhandled_cast}, {module, ?MODULE}, {message, Msg}]),\n\t{noreply, State}.\n\nhandle_call(_Msg, _From, State) ->\n\t{reply, not_implemented, State}.\n\nhandle_info({event, tx, {preparing_unblacklisting, TXID}}, State) ->\n\t#state{ sync_record = SyncRecord, retry_record = RetryRecord } = State,\n\tcase ar_storage:get_tx_confirmation_data(TXID) of\n\t\t{ok, {Height, _BH}} ->\n\t\t\t?LOG_DEBUG([{event, mark_block_with_blacklisted_tx_for_resyncing},\n\t\t\t\t\t{tx, ar_util:encode(TXID)}, {height, Height}]),\n\t\t\tState2 = State#state{ sync_record = ar_intervals:delete(SyncRecord, Height,\n\t\t\t\t\tHeight - 1), retry_record = ar_intervals:delete(RetryRecord, Height,\n\t\t\t\t\tHeight - 1) },\n\t\t\tok = store_sync_state(State2),\n\t\t\tok = ar_kv:delete(?MODULE, << Height:256 >>),\n\t\t\tar_events:send(tx, {ready_for_unblacklisting, TXID}),\n\t\t\t{noreply, State2};\n\t\tnot_found ->\n\t\t\tar_events:send(tx, {ready_for_unblacklisting, TXID}),\n\t\t\t{noreply, State};\n\t\t{error, Reason} ->\n\t\t\t?LOG_WARNING([{event, failed_to_read_tx_confirmation_index},\n\t\t\t\t\t{error, io_lib:format(\"~p\", [Reason])}]),\n\t\t\t{noreply, State}\n\tend;\n\nhandle_info({event, tx, _}, State) ->\n\t{noreply, State};\n\nhandle_info({event, disksup, {remaining_disk_space, 
?DEFAULT_MODULE, true, _Percentage, Bytes}},\n\t\tState) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tDiskPoolSize = Config#config.max_disk_pool_buffer_mb * ?MiB,\n\tDiskCacheSize = Config#config.disk_cache_size * 1048576,\n\tBufferSize = 10_000_000_000,\n\tcase Bytes < DiskPoolSize + DiskCacheSize + BufferSize div 2 of\n\t\ttrue ->\n\t\t\tcase State#state.is_disk_space_sufficient of\n\t\t\t\ttrue ->\n\t\t\t\t\tMsg = \"~nThe node has stopped syncing headers. Add more disk space \"\n\t\t\t\t\t\t\t\"if you wish to store more block and transaction headers. \"\n\t\t\t\t\t\t\t\"The node will keep recording account tree updates and \"\n\t\t\t\t\t\t\t\"transaction confirmations - they do not take up a lot of \"\n\t\t\t\t\t\t\t\"space but you need to make sure the remaining disk space \"\n\t\t\t\t\t\t\t\"stays available for the node.~n~n\"\n\t\t\t\t\t\t\t\"The mining performance is not affected.~n\",\n\t\t\t\t\tar:console(Msg, []),\n\t\t\t\t\t?LOG_INFO([{event, ar_header_sync_stopped_syncing},\n\t\t\t\t\t\t\t{reason, insufficient_disk_space}]);\n\t\t\t\tfalse ->\n\t\t\t\t\tok\n\t\t\tend,\n\t\t\t{noreply, State#state{ is_disk_space_sufficient = false }};\n\t\tfalse ->\n\t\t\tcase Bytes > DiskPoolSize + DiskCacheSize + BufferSize of\n\t\t\t\ttrue ->\n\t\t\t\t\tcase State#state.is_disk_space_sufficient of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\tok;\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tMsg = \"The available disk space has been detected, \"\n\t\t\t\t\t\t\t\t\t\"resuming header syncing.~n\",\n\t\t\t\t\t\t\tar:console(Msg, []),\n\t\t\t\t\t\t\t?LOG_INFO([{event, ar_header_sync_resumed_syncing}])\n\t\t\t\t\tend,\n\t\t\t\t\t{noreply, State#state{ is_disk_space_sufficient = true }};\n\t\t\t\tfalse ->\n\t\t\t\t\t{noreply, State}\n\t\t\tend\n\tend;\nhandle_info({event, disksup, _}, State) ->\n\t{noreply, State};\n\nhandle_info({'DOWN', _,  process, _, normal}, State) ->\n\t{noreply, State};\nhandle_info({'DOWN', _,  process, _, noproc}, State) ->\n\t{noreply, State};\nhandle_info({'DOWN', _,  process, _, Reason}, State) ->\n\t?LOG_WARNING([{event, header_sync_job_failed}, {reason, io_lib:format(\"~p\", [Reason])},\n\t\t\t{action, spawning_another_one}]),\n\tgen_server:cast(self(), process_item),\n\t{noreply, State};\n\nhandle_info({_Ref, _Atom}, State) ->\n\t%% Some older versions of Erlang OTP have a bug where gen_tcp:close may leak\n\t%% a message. 
https://github.com/ninenines/gun/issues/193,\n\t%% https://bugs.erlang.org/browse/ERL-1049.\n\t{noreply, State};\n\nhandle_info(Info, State) ->\n\t?LOG_ERROR([{event, unhandled_info}, {message, Info}]),\n\t{noreply, State}.\n\nterminate(Reason, _State) ->\n\t?LOG_INFO([{event, ar_header_sync_terminate}, {reason, Reason}]).\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nstore_sync_state(State) ->\n\t#state{ sync_record = SyncRecord, height = LastHeight, block_index = BI } = State,\n\tSyncedCount = ar_intervals:sum(SyncRecord),\n\tprometheus_gauge:set(synced_blocks, SyncedCount),\n\tets:insert(?MODULE, {synced_blocks, SyncedCount}),\n\tar_storage:write_term(header_sync_state, {SyncRecord, LastHeight, BI}).\n\nget_base_height([{H, _, _} | CurrentBI], CurrentHeight, RecentBI) ->\n\tcase lists:search(fun({BH, _, _}) -> BH == H end, RecentBI) of\n\t\tfalse ->\n\t\t\tget_base_height(CurrentBI, CurrentHeight - 1, RecentBI);\n\t\t_ ->\n\t\t\tCurrentHeight\n\tend.\n\nadd_block(B, State) ->\n\tcase check_fork(B#block.height, B#block.indep_hash, B#block.tx_root) of\n\t\tfalse ->\n\t\t\t{ok, State};\n\t\ttrue ->\n\t\t\tcase B#block.height == 0 andalso ?NETWORK_NAME == \"arweave.N.1\" of\n\t\t\t\ttrue ->\n\t\t\t\t\tok;\n\t\t\t\tfalse ->\n\t\t\t\t\tar_data_sync:add_block(B, B#block.size_tagged_txs)\n\t\t\tend,\n\t\t\tadd_block2(B, State)\n\tend.\n\nadd_block2(B, #state{ is_disk_space_sufficient = false } = State) ->\n\tcase ar_storage:update_confirmation_index(B) of\n\t\tok ->\n\t\t\t{ok, State};\n\t\tError ->\n\t\t\t?LOG_ERROR([{event, failed_to_record_block_confirmations},\n\t\t\t\t{reason, io_lib:format(\"~p\", [Error])}]),\n\t\t\t{Error, State}\n\tend;\nadd_block2(B, #state{ sync_record = SyncRecord, retry_record = RetryRecord } = State) ->\n\t#block{ indep_hash = H, previous_block = PrevH, height = Height } = B,\n\tcase ar_storage:write_full_block(B, B#block.txs) of\n\t\tok ->\n\t\t\tcase ar_intervals:is_inside(SyncRecord, Height) of\n\t\t\t\ttrue ->\n\t\t\t\t\t{ok, State};\n\t\t\t\tfalse ->\n\t\t\t\t\tok = ar_kv:put(?MODULE, << Height:256 >>, term_to_binary({H, PrevH})),\n\t\t\t\t\tSyncRecord2 = ar_intervals:add(SyncRecord, Height, Height - 1),\n\t\t\t\t\tRetryRecord2 = ar_intervals:delete(RetryRecord, Height, Height - 1),\n\t\t\t\t\t{ok, State#state{ sync_record = SyncRecord2, retry_record = RetryRecord2 }}\n\t\t\tend;\n\t\t{error, Reason} ->\n\t\t\t?LOG_WARNING([{event, failed_to_store_block}, {block, ar_util:encode(H)},\n\t\t\t\t\t{height, Height}, {reason, Reason}]),\n\t\t\t{{error, Reason}, State}\n\tend.\n\n%% @doc Return the latest height we have not synced or put in the retry queue yet.\n%% Return 'nothing_to_sync' if everything is either synced or in the retry queue.\npick_unsynced_block(#state{ height = Height, sync_record = SyncRecord,\n\t\tretry_record = RetryRecord }) ->\n\tUnion = ar_intervals:union(SyncRecord, RetryRecord),\n\tcase ar_intervals:is_empty(Union) of\n\t\ttrue ->\n\t\t\tHeight;\n\t\tfalse ->\n\t\t\tcase ar_intervals:take_largest(Union) of\n\t\t\t\t{{End, _Start}, _Union2} when Height > End ->\n\t\t\t\t\tHeight;\n\t\t\t\t{{_End, -1}, _Union2} ->\n\t\t\t\t\tnothing_to_sync;\n\t\t\t\t{{_End, Start}, _Union2} ->\n\t\t\t\t\tStart\n\t\t\tend\n\tend.\n\nenqueue(Item, Queue) ->\n\tqueue:in({Item, initial_backoff()}, Queue).\n\ninitial_backoff() ->\n\t{os:system_time(seconds), ?INITIAL_BACKOFF_INTERVAL_S}.\n\nprocess_item(Queue) ->\n\tNow = 
os:system_time(second),\n\tcase queue:out(Queue) of\n\t\t{empty, _Queue} ->\n\t\t\tar_util:cast_after(?PROCESS_ITEM_INTERVAL_MS, self(), process_item),\n\t\t\tQueue;\n\t\t{{value, {Item, {BackoffTimestamp, _} = Backoff}}, Queue2}\n\t\t\t\twhen BackoffTimestamp > Now ->\n\t\t\tar_util:cast_after(?PROCESS_ITEM_INTERVAL_MS, self(), process_item),\n\t\t\tenqueue(Item, Backoff, Queue2);\n\t\t{{value, {{block, {H, H2, TXRoot, Height}}, Backoff}}, Queue2} ->\n\t\t\tcase check_fork(Height, H, TXRoot) of\n\t\t\t\tfalse ->\n\t\t\t\t\tok;\n\t\t\t\ttrue ->\n\t\t\t\t\tParent = self(),\n\t\t\t\t\tmonitor(process, spawn(\n\t\t\t\t\t\tfun() ->\n\t\t\t\t\t\t\t%% Trap exit to avoid corrupting any open files on quit..\n\t\t\t\t\t\t\tprocess_flag(trap_exit, true),\n\t\t\t\t\t\t\tcase download_block(H, H2, TXRoot) of\n\t\t\t\t\t\t\t\t{error, _Reason} ->\n\t\t\t\t\t\t\t\t\tgen_server:cast(Parent, {failed_to_get_block, H, H2,\n\t\t\t\t\t\t\t\t\t\t\tTXRoot, Height, Backoff});\n\t\t\t\t\t\t\t\t{ok, B} ->\n\t\t\t\t\t\t\t\t\tgen_server:cast(Parent, {add_historical_block, B, H, H2,\n\t\t\t\t\t\t\t\t\t\t\tTXRoot, Backoff})\n\t\t\t\t\t\t\tend\n\t\t\t\t\t\tend\n\t\t\t\t\t))\n\t\t\tend,\n\t\t\tQueue2\n\tend.\n\nenqueue(Item, Backoff, Queue) ->\n\tqueue:in({Item, Backoff}, Queue).\n\nupdate_backoff({_Timestamp, Interval}) ->\n\tInterval2 = min(?MAX_BACKOFF_INTERVAL_S, Interval * 2),\n\t{os:system_time(second) + Interval2, Interval2}.\n\ncheck_fork(Height, H, TXRoot) ->\n\tcase Height < ar_fork:height_2_0() of\n\t\ttrue ->\n\t\t\ttrue;\n\t\tfalse ->\n\t\t\tcase ar_node:get_block_index_entry(Height) of\n\t\t\t\tnot_joined ->\n\t\t\t\t\tfalse;\n\t\t\t\tnot_found ->\n\t\t\t\t\tfalse;\n\t\t\t\t{H, _WeaveSize, TXRoot} ->\n\t\t\t\t\ttrue;\n\t\t\t\t_ ->\n\t\t\t\t\tfalse\n\t\t\tend\n\tend.\n\ndownload_block(H, H2, TXRoot) ->\n\tPeers = ar_peers:get_peers(current),\n\tcase ar_storage:read_block(H) of\n\t\tunavailable ->\n\t\t\tdownload_block(Peers, H, H2, TXRoot);\n\t\tB ->\n\t\t\tdownload_txs(Peers, B, TXRoot)\n\tend.\n\ndownload_block(Peers, H, H2, TXRoot) ->\n\tFork_2_0 = ar_fork:height_2_0(),\n\tOpts = #{ rand_min => length(Peers) },\n\tcase ar_http_iface_client:get_block_shadow(Peers, H, Opts) of\n\t\tunavailable ->\n\t\t\t?LOG_WARNING([\n\t\t\t\t{event, ar_header_sync_failed_to_download_block_header},\n\t\t\t\t{block, ar_util:encode(H)}\n\t\t\t]),\n\t\t\t{error, block_header_unavailable};\n\t\t{Peer, #block{ height = Height } = B, Time, BlockSize} ->\n\t\t\tBH =\n\t\t\t\tcase Height >= Fork_2_0 of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tar_block:indep_hash(B);\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\tar_block:indep_hash(\n\t\t\t\t\t\t\tB#block{ tx_root = TXRoot, txs = lists:sort(B#block.txs) }\n\t\t\t\t\t\t)\n\t\t\t\tend,\n\t\t\tcase BH of\n\t\t\t\tH when Height >= Fork_2_0 ->\n\t\t\t\t\tar_peers:rate_fetched_data(Peer, block, Time, BlockSize),\n\t\t\t\t\tdownload_txs(Peers, B, TXRoot);\n\t\t\t\tH2 when Height < Fork_2_0 ->\n\t\t\t\t\tar_peers:rate_fetched_data(Peer, block, Time, BlockSize),\n\t\t\t\t\tdownload_txs(Peers, B, TXRoot);\n\t\t\t\t_ ->\n\t\t\t\t\t?LOG_WARNING([\n\t\t\t\t\t\t{event, ar_header_sync_block_hash_mismatch},\n\t\t\t\t\t\t{block, ar_util:encode(H)},\n\t\t\t\t\t\t{peer, ar_util:format_peer(Peer)}\n\t\t\t\t\t]),\n\t\t\t\t\t{error, block_hash_mismatch}\n\t\t\tend\n\tend.\n\ndownload_txs(Peers, B, TXRoot) ->\n\tcase ar_http_iface_client:get_txs(Peers, B) of\n\t\t{ok, TXs} ->\n\t\t\tSizeTaggedTXs = ar_block:generate_size_tagged_list_from_txs(TXs, B#block.height),\n\t\t\tSizeTaggedDataRoots = [{Root, Offset} || {{_, Root}, 
Offset} <- SizeTaggedTXs],\n\t\t\t{Root, _Tree} = ar_merkle:generate_tree(SizeTaggedDataRoots),\n\t\t\tcase Root of\n\t\t\t\tTXRoot ->\n\t\t\t\t\t{ok, B#block{ txs = TXs, size_tagged_txs = SizeTaggedTXs }};\n\t\t\t\t_ ->\n\t\t\t\t\t?LOG_WARNING([\n\t\t\t\t\t\t{event, ar_header_sync_block_tx_root_mismatch},\n\t\t\t\t\t\t{block, ar_util:encode(B#block.indep_hash)}\n\t\t\t\t\t]),\n\t\t\t\t\t{error, block_tx_root_mismatch}\n\t\t\tend;\n\t\t{error, txs_exceed_block_size_limit} ->\n\t\t\t?LOG_WARNING([\n\t\t\t\t{event, ar_header_sync_block_txs_exceed_block_size_limit},\n\t\t\t\t{block, ar_util:encode(B#block.indep_hash)}\n\t\t\t]),\n\t\t\t{error, txs_exceed_block_size_limit};\n\t\t{error, txs_count_exceeds_limit} ->\n\t\t\t?LOG_WARNING([\n\t\t\t\t{event, ar_header_sync_block_txs_count_exceeds_limit},\n\t\t\t\t{block, ar_util:encode(B#block.indep_hash)}\n\t\t\t]),\n\t\t\t{error, txs_count_exceeds_limit};\n\t\t{error, tx_not_found} ->\n\t\t\t?LOG_WARNING([\n\t\t\t\t{event, ar_header_sync_block_tx_not_found},\n\t\t\t\t{block, ar_util:encode(B#block.indep_hash)}\n\t\t\t]),\n\t\t\t{error, tx_not_found}\n\tend.\n"
  },
  {
    "path": "apps/arweave/src/ar_header_sync_sup.erl",
    "content": "-module(ar_header_sync_sup).\n-behaviour(supervisor).\n\n-export([start_link/1]).\n-export([init/1]).\n\n%%%===================================================================\n%%% Public API.\n%%%===================================================================\n\nstart_link(Args) ->\n\tsupervisor:start_link({local, ?MODULE}, ?MODULE, Args).\n\n%%%===================================================================\n%%% Supervisor callbacks.\n%%%===================================================================\n\ninit(Args) ->\n\tSupFlags = #{strategy => one_for_one, intensity => 10, period => 1},\n\tChildSpec = #{\n\t\tid => ar_header_sync,\n\t\tstart => {ar_header_sync, start_link, [Args]}\n\t},\n\t{ok, {SupFlags, [ChildSpec]}}.\n"
  },
  {
    "path": "apps/arweave/src/ar_http.erl",
    "content": "%%% A wrapper library for gun.\n-module(ar_http).\n\n-behaviour(gen_server).\n\n-export([start_link/0, req/1]).\n\n-ifdef(AR_TEST).\n-export([block_peer_connections/0, unblock_peer_connections/0]).\n-endif.\n\n-export([init/1, handle_cast/2, handle_call/3, handle_info/2, terminate/2]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-record(state, {\n\tpid_by_peer = #{},\n\tstatus_by_pid = #{}\n}).\n\n%%% ==================================================================\n%%% Public interface.\n%%% ==================================================================\n\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\n\n-ifdef(AR_TEST).\nblock_peer_connections() ->\n\tets:insert(?MODULE, {block_peer_connections}),\n\tok.\n\nunblock_peer_connections() ->\n\tets:delete(?MODULE, block_peer_connections),\n\tok.\n\nreq(Args) ->\n\tcase ar_shutdown_manager:state() of\n\t\trunning ->\n\t\t\treq2(Args);\n\t\tshutdown ->\n\t\t\t{error, shutdown}\n\tend.\n\nreq2(#{ peer := {_, _} } = Args) ->\n\treq(Args, false);\nreq2(#{ peer := Peer } = Args) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tcase Config#config.port == element(5, Peer) of\n\t\ttrue ->\n\t\t\t%% Do not block requests to self.\n\t\t\treq(Args, false);\n\t\tfalse ->\n\t\t\tcase ets:lookup(?MODULE, block_peer_connections) of\n\t\t\t\t[{_}] ->\n\t\t\t\t\tcase lists:keyfind(<<\"x-p2p-port\">>, 1, maps:get(headers, Args, [])) of\n\t\t\t\t\t\t{_, _} ->\n\t\t\t\t\t\t\t{error, blocked};\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t%% Do not block requests made from the test processes.\n\t\t\t\t\t\t\treq(Args, false)\n\t\t\t\t\tend;\n\t\t\t\t_ ->\n\t\t\t\t\treq(Args, false)\n\t\t\tend\n\tend.\n-else.\nreq(Args) ->\n\treq(Args, false).\n-endif.\n\nreq(Args, ReestablishedConnection) ->\n\tStartTime = erlang:monotonic_time(),\n\t#{ peer := Peer, path := Path, method := Method } = Args,\n\tok = ar_rate_limiter:throttle(Peer, Path),\n\tResponse = case catch gen_server:call(?MODULE, {get_connection, Args}, 15000) of\n\t\t{ok, PID} ->\n\t\t\tcase request(PID, Args) of\n\t\t\t\t{error, Error} ->\n\t\t\t\t\tcase {ReestablishedConnection, should_retry_closed_connection(Error)} of\n\t\t\t\t\t\t{false, true} ->\n\t\t\t\t\t\t\treq(Args, true);\n\t\t\t\t\t\t{_, true} ->\n\t\t\t\t\t\t\t{error, client_error};\n\t\t\t\t\t\t{_, false} ->\n\t\t\t\t\t\t\t{error, Error}\n\t\t\t\t\tend;\n\t\t\t\tReply ->\n\t\t\t\t\tReply\n\t\t\tend;\n\t\t{'EXIT', _} -> {error, client_error};\n\t\tError -> Error\n\tend,\n\tEndTime = erlang:monotonic_time(),\n\t%% Only log the metric for the top-level call to req/2 - not the recursive call\n\t%% that happens when the connection is reestablished.\n\tcase ReestablishedConnection of\n\t\ttrue ->\n\t\t\tok;\n\t\tfalse ->\n\t\t\t%% NOTE: the erlang prometheus client looks at the metric name to determine units.\n\t\t\t%%       If it sees <name>_duration_<unit> it assumes the observed value is in\n\t\t\t%%       native units and it converts it to <unit> .To query native units, use:\n\t\t\t%%       erlant:monotonic_time() without any arguments.\n\t\t\t%%       See: https://github.com/deadtrickster/prometheus.erl/blob/6dd56bf321e99688108bb976283a80e4d82b3d30/src/prometheus_time.erl#L2-L84\n\t\t\tprometheus_histogram:observe(ar_http_request_duration_seconds, [\n\t\t\t\t\tmethod_to_list(Method),\n\t\t\t\t\tar_http_iface_server:label_http_path(list_to_binary(Path)),\n\t\t\t\t\tar_metrics:get_status_class(Response)\n\t\t\t\t], EndTime - 
StartTime)\n\tend,\n\tResponse.\n\n%%% ==================================================================\n%%% gen_server callbacks.\n%%% ==================================================================\n\ninit([]) ->\n\t{ok, #state{}}.\n\nhandle_call({get_connection, Args}, From,\n\t\t#state{ pid_by_peer = PIDByPeer, status_by_pid = StatusByPID } = State) ->\n\tPeer = maps:get(peer, Args),\n\tcase maps:get(Peer, PIDByPeer, not_found) of\n\t\tnot_found ->\n\t\t\t{ok, PID} = open_connection(Args),\n\t\t\tMonitorRef = monitor(process, PID),\n\t\t\tPIDByPeer2 = maps:put(Peer, PID, PIDByPeer),\n\t\t\tStatusByPID2 = maps:put(PID, {{connecting, [{From, Args}]}, MonitorRef, Peer},\n\t\t\t\t\tStatusByPID),\n\t\t\t{noreply, State#state{ pid_by_peer = PIDByPeer2, status_by_pid = StatusByPID2 }};\n\t\tPID ->\n\t\t\tcase maps:get(PID, StatusByPID) of\n\t\t\t\t{{connecting, PendingRequests}, MonitorRef, Peer} ->\n\t\t\t\t\tStatusByPID2 = maps:put(PID, {{connecting,\n\t\t\t\t\t\t\t[{From, Args} | PendingRequests]}, MonitorRef, Peer}, StatusByPID),\n\t\t\t\t\t{noreply, State#state{ status_by_pid = StatusByPID2 }};\n\t\t\t\t{connected, _MonitorRef, Peer} ->\n\t\t\t\t\t{reply, {ok, PID}, State}\n\t\t\tend\n\tend;\n\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {request, Request}]),\n\t{reply, ok, State}.\n\nhandle_cast(Cast, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {cast, Cast}]),\n\t{noreply, State}.\n\nhandle_info({gun_up, PID, _Protocol}, #state{ status_by_pid = StatusByPID } = State) ->\n\tcase maps:get(PID, StatusByPID, not_found) of\n\t\tnot_found ->\n\t\t\t%% A connection timeout must have occurred.\n\t\t\t{noreply, State};\n\t\t{{connecting, PendingRequests}, MonitorRef, Peer} ->\n\t\t\t[gen_server:reply(ReplyTo, {ok, PID}) || {ReplyTo, _} <- PendingRequests],\n\t\t\tStatusByPID2 = maps:put(PID, {connected, MonitorRef, Peer}, StatusByPID),\n\t\t\tprometheus_gauge:inc(outbound_connections),\n\t\t\tar_peers:connected_peer(Peer),\n\t\t\t{noreply, State#state{ status_by_pid = StatusByPID2 }};\n\t\t{connected, _MonitorRef, Peer} ->\n\t\t\t?LOG_WARNING([{event, gun_up_pid_already_exists},\n\t\t\t\t\t{peer, ar_util:format_peer(Peer)}]),\n\t\t\tar_peers:connected_peer(Peer),\n\t\t\t{noreply, State}\n\tend;\n\nhandle_info({gun_error, PID, Reason},\n\t\t#state{ pid_by_peer = PIDByPeer, status_by_pid = StatusByPID } = State) ->\n\tcase maps:get(PID, StatusByPID, not_found) of\n\t\tnot_found ->\n\t\t\t?LOG_WARNING([{event, gun_connection_error_with_unknown_pid}]),\n\t\t\t{noreply, State};\n\t\t{Status, _MonitorRef, Peer} ->\n\t\t\tPIDByPeer2 = maps:remove(Peer, PIDByPeer),\n\t\t\tStatusByPID2 = maps:remove(PID, StatusByPID),\n\t\t\tReason2 =\n\t\t\t\tcase Reason of\n\t\t\t\t\ttimeout ->\n\t\t\t\t\t\tconnect_timeout;\n\t\t\t\t\t{Type, _} ->\n\t\t\t\t\t\tType;\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tReason\n\t\t\t\tend,\n\t\t\tcase Status of\n\t\t\t\t{connecting, PendingRequests} ->\n\t\t\t\t\treply_error(PendingRequests, Reason2);\n\t\t\t\tconnected ->\n\t\t\t\t\tprometheus_gauge:dec(outbound_connections),\n\t\t\t\t\tok\n\t\t\tend,\n\t\t\tar_peers:disconnected_peer(Peer),\n\t\t\tgun:shutdown(PID),\n\t\t\t?LOG_DEBUG([{event, connection_error}, {reason, io_lib:format(\"~p\", [Reason])}]),\n\t\t\t{noreply, State#state{ status_by_pid = StatusByPID2, pid_by_peer = PIDByPeer2 }}\n\tend;\n\n% missing pattern from gun 2.2+\nhandle_info({gun_down, Pid, Protocol, Reason, Streams}, State) ->\n\thandle_info({gun_down, Pid, Protocol, Reason, [], 
Streams}, State);\n\nhandle_info({gun_down, PID, Protocol, Reason, _KilledStreams, _UnprocessedStreams},\n\t\t\t#state{ pid_by_peer = PIDByPeer, status_by_pid = StatusByPID } = State) ->\n\tcase maps:get(PID, StatusByPID, not_found) of\n\t\tnot_found ->\n\t\t\t?LOG_WARNING([{event, gun_connection_down_with_unknown_pid},\n\t\t\t\t\t{protocol, Protocol}]),\n\t\t\t{noreply, State};\n\t\t{Status, _MonitorRef, Peer} ->\n\t\t\tPIDByPeer2 = maps:remove(Peer, PIDByPeer),\n\t\t\tStatusByPID2 = maps:remove(PID, StatusByPID),\n\t\t\tReason2 =\n\t\t\t\tcase Reason of\n\t\t\t\t\t{Type, _} ->\n\t\t\t\t\t\tType;\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tReason\n\t\t\t\tend,\n\t\t\tcase Status of\n\t\t\t\t{connecting, PendingRequests} ->\n\t\t\t\t\treply_error(PendingRequests, Reason2);\n\t\t\t\t_ ->\n\t\t\t\t\tprometheus_gauge:dec(outbound_connections),\n\t\t\t\t\tok\n\t\t\tend,\n\t\t\tar_peers:disconnected_peer(Peer),\n\t\t\t{noreply, State#state{ status_by_pid = StatusByPID2, pid_by_peer = PIDByPeer2 }}\n\tend;\n\nhandle_info({'DOWN', _Ref, process, PID, Reason},\n\t\t#state{ pid_by_peer = PIDByPeer, status_by_pid = StatusByPID } = State) ->\n\tcase maps:get(PID, StatusByPID, not_found) of\n\t\tnot_found ->\n\t\t\t{noreply, State};\n\t\t{Status, _MonitorRef, Peer} ->\n\t\t\tPIDByPeer2 = maps:remove(Peer, PIDByPeer),\n\t\t\tStatusByPID2 = maps:remove(PID, StatusByPID),\n\t\t\tcase Status of\n\t\t\t\t{connecting, PendingRequests} ->\n\t\t\t\t\treply_error(PendingRequests, Reason);\n\t\t\t\t_ ->\n\t\t\t\t\tprometheus_gauge:dec(outbound_connections),\n\t\t\t\t\tok\n\t\t\tend,\n\t\t\tar_peers:disconnected_peer(Peer),\n\t\t\t{noreply, State#state{ status_by_pid = StatusByPID2, pid_by_peer = PIDByPeer2 }}\n\tend;\n\nhandle_info(Message, State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {module, ?MODULE}, {message, Message}]),\n\t{noreply, State}.\n\nterminate(Reason, #state{ status_by_pid = StatusByPID }) ->\n\tmaps:map(fun(PID, _Status) -> gun:shutdown(PID) end, StatusByPID),\n\t?LOG_INFO([{event, http_client_terminating}, {reason, io_lib:format(\"~p\", [Reason])}]),\n\tok.\n\n%%% ==================================================================\n%%% Private functions.\n%%% ==================================================================\n\nopen_connection(#{ peer := Peer } = Args) ->\n\t{ok, Config} = arweave_config:get_env(),\n\t{IPOrHost, Port} = get_ip_port(Peer),\n\tConnectTimeout = maps:get(connect_timeout, Args,\n\t\t\tmaps:get(timeout, Args, ?HTTP_REQUEST_CONNECT_TIMEOUT)),\n\tGunOpts = #{\n\t\tretry => 0,\n\t\tconnect_timeout => ConnectTimeout,\n\t\thttp_opts => #{\n\t\t\tclosing_timeout => Config#config.'http_client.http.closing_timeout',\n\t\t\tkeepalive => Config#config.'http_client.http.keepalive'\n\t\t},\n\t\ttcp_opts => [\n\t\t\t{delay_send, Config#config.'http_client.tcp.delay_send'},\n\t\t\t{keepalive, Config#config.'http_client.tcp.keepalive'},\n\t\t\t{linger, {\n\t\t\t\t\tConfig#config.'http_client.tcp.linger',\n\t\t\t\t\tConfig#config.'http_client.tcp.linger_timeout'\n\t\t\t\t}\n\t\t\t},\n\t\t\t{nodelay, Config#config.'http_client.tcp.nodelay'},\n\t\t\t{send_timeout_close, Config#config.'http_client.tcp.send_timeout_close'},\n\t\t\t{send_timeout, Config#config.'http_client.tcp.send_timeout'}\n\t\t]\n\t},\n\tgun:open(IPOrHost, Port, GunOpts).\n\nget_ip_port({_, _} = Peer) ->\n\tPeer;\nget_ip_port(Peer) ->\n\t{erlang:delete_element(size(Peer), Peer), erlang:element(size(Peer), Peer)}.\n\nreply_error([], _Reason) ->\n\tok;\nreply_error([PendingRequest | PendingRequests], Reason) ->\n\tReplyTo = 
element(1, PendingRequest),\n\tArgs = element(2, PendingRequest),\n\tMethod = maps:get(method, Args),\n\tPath = maps:get(path, Args),\n\trecord_response_status(Method, Path, {error, Reason}),\n\tgen_server:reply(ReplyTo, {error, Reason}),\n\treply_error(PendingRequests, Reason).\n\nrecord_response_status(Method, Path, Response) ->\n\tprometheus_counter:inc(gun_requests_total, [method_to_list(Method),\n\t\t\tar_http_iface_server:label_http_path(list_to_binary(Path)),\n\t\t\tar_metrics:get_status_class(Response)]).\n\nmethod_to_list(get) ->\n\t\"GET\";\nmethod_to_list(post) ->\n\t\"POST\";\nmethod_to_list(put) ->\n\t\"PUT\";\nmethod_to_list(head) ->\n\t\"HEAD\";\nmethod_to_list(delete) ->\n\t\"DELETE\";\nmethod_to_list(connect) ->\n\t\"CONNECT\";\nmethod_to_list(options) ->\n\t\"OPTIONS\";\nmethod_to_list(trace) ->\n\t\"TRACE\";\nmethod_to_list(patch) ->\n\t\"PATCH\";\nmethod_to_list(_) ->\n\t\"unknown\".\n\nrequest(PID, Args) ->\n\tTimeout = maps:get(timeout, Args, ?HTTP_REQUEST_SEND_TIMEOUT),\n\tRef = request2(PID, Args),\n\tResponseArgs = #{ pid => PID\n\t\t\t, stream_ref => Ref\n\t\t\t, timeout => Timeout\n\t\t\t, limit => maps:get(limit, Args, infinity)\n\t\t\t, counter => 0\n\t\t\t, acc => []\n\t\t\t, start => os:system_time(microsecond)\n\t\t\t, is_peer_request => maps:get(is_peer_request, Args, true)\n\t\t\t},\n\tResponse = await_response(maps:merge(Args, ResponseArgs)),\n\tMethod = maps:get(method, Args),\n\tPath = maps:get(path, Args),\n\trecord_response_status(Method, Path, Response),\n\tResponse.\n\nrequest2(PID, #{ path := Path } = Args) ->\n\tHeaders =\n\t\tcase maps:get(is_peer_request, Args, true) of\n\t\t\ttrue ->\n\t\t\t\tmerge_headers(?DEFAULT_REQUEST_HEADERS, maps:get(headers, Args, []));\n\t\t\t_ ->\n\t\t\t\tmaps:get(headers, Args, [])\n\t\tend,\n\tMethod = case maps:get(method, Args) of get -> \"GET\"; post -> \"POST\" end,\n\tgun:request(PID, Method, Path, Headers, maps:get(body, Args, <<>>)).\n\nmerge_headers(HeadersA, HeadersB) ->\n\tlists:ukeymerge(1, lists:keysort(1, HeadersB), lists:keysort(1, HeadersA)).\n\nawait_response( #{ pid := PID, stream_ref := Ref, timeout := Timeout\n\t\t , start := Start, limit := Limit, counter := Counter\n\t\t , acc := Acc, method := Method, path := Path } = Args) ->\n\tcase gun:await(PID, Ref, Timeout) of\n\t\t{response, fin, Status, Headers} ->\n\t\t\tEnd = os:system_time(microsecond),\n\t\t\tupload_metric(Args),\n\t\t\t{ok, {{integer_to_binary(Status), <<>>}, Headers, <<>>, Start, End}};\n\n\t\t{response, nofin, Status, Headers} ->\n\t\t\tawait_response(Args#{ status => Status, headers => Headers });\n\n\t\t{data, nofin, Data} ->\n\t\t\tcase Limit of\n\t\t\t\tinfinity ->\n\t\t\t\t\tawait_response(Args#{ acc := [Acc | Data] });\n\t\t\t\tLimit ->\n\t\t\t\t\tCounter2 = size(Data) + Counter,\n\t\t\t\t\tcase Limit >= Counter2 of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\tawait_response(Args#{ counter := Counter2, acc := [Acc | Data] });\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tlog(err, http_fetched_too_much_data, Args,\n\t\t\t\t\t\t\t\t\t<<\"Fetched too much data\">>),\n\t\t\t\t\t\t\t{error, too_much_data}\n\t\t\t\t\tend\n\t\t\tend;\n\n\t\t{data, fin, Data} ->\n\t\t\tEnd = os:system_time(microsecond),\n\t\t\tFinData = iolist_to_binary([Acc | Data]),\n\t\t\tdownload_metric(FinData, Args),\n\t\t\tupload_metric(Args),\n\t\t\tResponseCode = gen_code_rest(maps:get(status, Args)),\n\t\t\tResponseHeaders = maps:get(headers, Args),\n\t\t\tResponse = {ResponseCode, ResponseHeaders, FinData, Start, End},\n\t\t\t{ok, Response};\n\n\t\t{error, timeout} = 
Response ->\n\t\t\trecord_response_status(Method, Path, Response),\n\t\t\tgun:cancel(PID, Ref),\n\t\t\tlog(warn, gun_await_process_down, Args, Response),\n\t\t\tResponse;\n\n\t\t{error, Reason} = Response when is_tuple(Reason) ->\n\t\t\trecord_response_status(Method, Path, Response),\n\t\t\tgun:cancel(PID, Ref),\n\t\t\tlog(warn, gun_await_process_down, Args, Reason),\n\t\t\tResponse;\n\n\t\tResponse ->\n\t\t\trecord_response_status(Method, Path, Response),\n\t\t\tgun:cancel(PID, Ref),\n\t\t\tlog(warn, gun_await_unknown, Args, Response),\n\t\t\tResponse\n\tend.\n\nlog(Type, Event, #{method := Method, peer := Peer, path := Path}, Reason) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tcase lists:member(http_logging, Config#config.enable) of\n\t\ttrue when Type == warn ->\n\t\t\t?LOG_WARNING([\n\t\t\t\t{event, Event},\n\t\t\t\t{http_method, Method},\n\t\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t\t{path, Path},\n\t\t\t\t{reason, Reason}\n\t\t\t]);\n\t\ttrue when Type == err ->\n\t\t\t?LOG_ERROR([\n\t\t\t\t{event, Event},\n\t\t\t\t{http_method, Method},\n\t\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t\t{path, Path},\n\t\t\t\t{reason, Reason}\n\t\t\t]);\n\t\t_ ->\n\t\t\tok\n\tend.\n\ndownload_metric(Data, #{path := Path}) ->\n\tprometheus_counter:inc(\n\t\thttp_client_downloaded_bytes_total,\n\t\t[ar_http_iface_server:label_http_path(list_to_binary(Path))],\n\t\tbyte_size(Data)\n\t).\n\nupload_metric(#{method := post, path := Path, body := Body}) ->\n\tprometheus_counter:inc(\n\t\thttp_client_uploaded_bytes_total,\n\t\t[ar_http_iface_server:label_http_path(list_to_binary(Path))],\n\t\tbyte_size(Body)\n\t);\nupload_metric(_) ->\n\tok.\n\nshould_retry_closed_connection({shutdown, normal}) ->\n\ttrue;\nshould_retry_closed_connection(noproc) ->\n\ttrue;\nshould_retry_closed_connection({down, {shutdown, closed}}) ->\n\ttrue;\nshould_retry_closed_connection({down, {shutdown, {error, einval}}}) ->\n\ttrue;\nshould_retry_closed_connection({stream_error, closed}) ->\n\ttrue;\nshould_retry_closed_connection({stream_error, closing}) ->\n\ttrue;\nshould_retry_closed_connection({stream_error, {closed, normal}}) ->\n\ttrue;\nshould_retry_closed_connection({shutdown, closed}) ->\n\ttrue;\nshould_retry_closed_connection(closed) ->\n\ttrue;\nshould_retry_closed_connection(closing) ->\n\ttrue;\nshould_retry_closed_connection(_) ->\n\tfalse.\n\ngen_code_rest(200) ->\n\t{<<\"200\">>, <<\"OK\">>};\ngen_code_rest(201) ->\n\t{<<\"201\">>, <<\"Created\">>};\ngen_code_rest(202) ->\n\t{<<\"202\">>, <<\"Accepted\">>};\ngen_code_rest(208) ->\n\t{<<\"208\">>, <<\"Transaction already processed\">>};\ngen_code_rest(400) ->\n\t{<<\"400\">>, <<\"Bad Request\">>};\ngen_code_rest(419) ->\n\t{<<\"419\">>, <<\"419 Missing Chunk\">>};\ngen_code_rest(421) ->\n\t{<<\"421\">>, <<\"Misdirected Request\">>};\ngen_code_rest(429) ->\n\t{<<\"429\">>, <<\"Too Many Requests\">>};\ngen_code_rest(N) ->\n\t{integer_to_binary(N), <<>>}.\n"
  },
  {
    "path": "apps/arweave/src/ar_http_iface_client.erl",
    "content": "%%%\n%%% @doc Exposes access to an internal Arweave client to external nodes on the network.\n%%%\n\n-module(ar_http_iface_client).\n\n-export([send_tx_json/3, send_tx_json/4, send_tx_binary/3, send_tx_binary/4]).\n-export([send_block_json/3, send_block_binary/3, send_block_binary/4,\n\tsend_block_announcement/2,\n\tget_block/3, get_tx/2, get_txs/2, get_tx_from_remote_peers/3,\n\tget_tx_data/2, get_wallet_list_chunk/2, get_wallet_list_chunk/3,\n\tget_wallet_list/2, add_peer/1, get_info/1, get_info/2, get_peers/1,\n\tget_time/2, get_height/1, get_block_index/3,\n\tget_sync_record/1, get_sync_record/3, get_sync_record/4, get_footprints/3,\n\tget_chunk_binary/3, get_mempool/1,\n\tget_sync_buckets/1, get_footprint_buckets/1, get_recent_hash_list/1,\n\tget_recent_hash_list_diff/2, get_reward_history/3,\n\tget_block_time_history/3, push_nonce_limiter_update/3,\n\tget_vdf_update/1, get_vdf_session/1, get_previous_vdf_session/1,\n\tget_cm_partition_table/1, cm_h1_send/2, cm_h2_send/2,\n\tcm_publish_send/2, get_jobs/2, post_partial_solution/2,\n\tget_pool_cm_jobs/2, post_pool_cm_jobs/2,\n\tpost_cm_partition_table_to_pool/2, get_data_roots/2]).\n-export([get_block_shadow/2, get_block_shadow/3, get_block_shadow/4]).\n-export([log_failed_request/2]).\n\n%% -- Testing exports\n-export([get_tx_from_remote_peer/3]).\n%% -- End of testing exports\n\n-include(\"ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include(\"ar_consensus.hrl\").\n-include(\"ar_data_sync.hrl\").\n-include(\"ar_sync_buckets.hrl\").\n-include(\"ar_data_discovery.hrl\").\n-include(\"ar_mining.hrl\").\n-include(\"ar_wallets.hrl\").\n-include(\"ar_pool.hrl\").\n\n%%--------------------------------------------------------------------\n%% @doc Send a JSON-encoded transaction to the given Peer with default\n%% parameters.\n%%\n%% == Examples ==\n%%\n%% ```\n%% Host = {127,0,0,1},\n%% Port = 1984,\n%% Peer = {Host, Port},\n%% TXID = <<0:256>>,\n%% Bin = ar_serialize:tx_to_binary(#tx{}),\n%% send_tx_json(Peer, TXID, Bin).\n%% '''\n%%\n%% @see send_tx_json/4\n%% @end\n%%--------------------------------------------------------------------\nsend_tx_json(Peer, TXID, Bin) ->\n\tsend_tx_json(Peer, TXID, Bin, #{}).\n\n%%--------------------------------------------------------------------\n%% @doc Send a JSON-encoded transaction to the given Peer.\n%%\n%% == Examples ==\n%%\n%% ```\n%% Host = {127,0,0,1},\n%% Port = 1984,\n%% Peer = {Host, Port},\n%% TXID = <<0:256>>,\n%% Bin = ar_serialize:tx_to_binary(#tx{}),\n%% Opts = #{ connect_timeout => 5\n%%         , timeout => 30\n%%         },\n%% send_tx_json(Peer, TXID, Bin, Opts).\n%% '''\n%%\n%% @end\n%%--------------------------------------------------------------------\nsend_tx_json(Peer, TXID, Bin, Opts) ->\n\tConnectTimeout = maps:get(connect_timeout, Opts, 5),\n\tTimeout = maps:get(timeout, Opts, 30),\n\tar_http:req(#{\n\t\tmethod => post,\n\t\tpeer => Peer,\n\t\tpath => \"/tx\",\n\t\theaders => add_header(<<\"arweave-tx-id\">>, ar_util:encode(TXID), p2p_headers()),\n\t\tbody => Bin,\n\t\tconnect_timeout => ConnectTimeout * 1000,\n\t\ttimeout => Timeout * 1000\n\t}).\n\n%%--------------------------------------------------------------------\n%% @doc Send a binary-encoded transaction to the given Peer with\n%% default parameters.\n%% @see send_tx_binary/4\n%% @end\n%%--------------------------------------------------------------------\nsend_tx_binary(Peer, TXID, Bin) ->\n\tsend_tx_binary(Peer, TXID, Bin, 
#{}).\n\n%%--------------------------------------------------------------------\n%% @doc Send a binary-encoded transaction to the given Peer.\n%% @end\n%%--------------------------------------------------------------------\nsend_tx_binary(Peer, TXID, Bin, Opts) ->\n\tConnectTimeout = maps:get(connect_timeout, Opts, 5),\n\tTimeout = maps:get(timeout, Opts, 30),\n\tar_http:req(#{\n\t\tmethod => post,\n\t\tpeer => Peer,\n\t\tpath => \"/tx2\",\n\t\theaders => add_header(<<\"arweave-tx-id\">>, ar_util:encode(TXID), p2p_headers()),\n\t\tbody => Bin,\n\t\tconnect_timeout => ConnectTimeout * 1000,\n\t\ttimeout => Timeout * 1000\n\t}).\n\n%% @doc Announce a block to Peer.\nsend_block_announcement(Peer, Announcement) ->\n\tar_http:req(#{\n\t\tmethod => post,\n\t\tpeer => Peer,\n\t\tpath => \"/block_announcement\",\n\t\theaders => p2p_headers(),\n\t\tbody => ar_serialize:block_announcement_to_binary(Announcement),\n\t\ttimeout => 10 * 1000\n\t}).\n\n%% @doc Send the given JSON-encoded block to the given peer.\nsend_block_json(Peer, H, Payload) ->\n\tar_http:req(#{\n\t\tmethod => post,\n\t\tpeer => Peer,\n\t\tpath => \"/block\",\n\t\theaders => add_header(<<\"arweave-block-hash\">>, ar_util:encode(H), p2p_headers()),\n\t\tbody => Payload,\n\t\tconnect_timeout => 5000,\n\t\ttimeout => 120 * 1000\n\t}).\n\n%% @doc Send the given binary-encoded block to the given peer.\nsend_block_binary(Peer, H, Payload) ->\n\tsend_block_binary(Peer, H, Payload, undefined).\n\nsend_block_binary(Peer, H, Payload, RecallByte) ->\n\tHeaders = add_header(<<\"arweave-block-hash\">>, ar_util:encode(H), p2p_headers()),\n\t%% The way of informing the recipient about the recall byte used before the fork\n\t%% 2.6. Since the fork 2.6 blocks have a \"recall_byte\" field.\n\tHeaders2 = case RecallByte of undefined -> Headers; _ ->\n\t\t\tadd_header(<<\"arweave-recall-byte\">>, integer_to_binary(RecallByte), Headers) end,\n\tar_http:req(#{\n\t\tmethod => post,\n\t\tpeer => Peer,\n\t\tpath => \"/block2\",\n\t\theaders => Headers2,\n\t\tbody => Payload,\n\t\ttimeout => 20 * 1000\n\t}).\n\n%% @doc Request to be added as a peer to a remote host.\nadd_peer(Peer) ->\n\tar_http:req(#{\n\t\tmethod => post,\n\t\tpeer => Peer,\n\t\tpath => \"/peers\",\n\t\theaders => p2p_headers(),\n\t\tbody => ar_serialize:jsonify({[{network, list_to_binary(?NETWORK_NAME)}]}),\n\t\ttimeout => 3 * 1000\n\t}).\n\n%% @doc Retrieve a block. We request the peer to include complete\n%% transactions at the given positions (in the sorted transaction list).\nget_block(Peer, H, TXIndices) ->\n\tcase handle_block_response(Peer, binary,\n\t\t\tar_http:req(#{\n\t\t\t\tmethod => get,\n\t\t\t\tpeer => Peer,\n\t\t\t\tpath => \"/block2/hash/\" ++ binary_to_list(ar_util:encode(H)),\n\t\t\t\theaders => p2p_headers(),\n\t\t\t\tconnect_timeout => 1000,\n\t\t\t\ttimeout => 15 * 1000,\n\t\t\t\tbody => ar_util:encode_list_indices(TXIndices),\n\t\t\t\tlimit => ?MAX_BODY_SIZE\n\t\t\t})) of\n\t\tnot_found ->\n\t\t\tnot_found;\n\t\t{ok, B, Time, Size} ->\n\t\t\t{B, Time, Size}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc get a block shadow using default parameter.\n%% @end\n%%--------------------------------------------------------------------\nget_block_shadow(Peers, ID) ->\n\tget_block_shadow(Peers, ID, #{}).\n\n%%--------------------------------------------------------------------\n%% @doc Retrieve a block shadow by hash or height from one of the given\n%%      peers. 
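A minimal illustrative call (a hedged\n%%      sketch following the examples elsewhere in this module; the peer\n%%      address and block height below are example values, not values taken\n%%      from this module):\n%%\n%% ```\n%% Host = {127,0,0,1},\n%% Port = 1984,\n%% Peer = {Host, Port},\n%% get_block_shadow([Peer], 1000000, #{ rand_min => 1 }).\n%% '''\n%%\n%%      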
Some options can be modified like `rand_min',\n%%      `connect_timeout' and `timeout'.\n%% @see get_block_shadow/4\n%% @end\n%%--------------------------------------------------------------------\nget_block_shadow([], _ID, _Opts) ->\n\tunavailable;\nget_block_shadow(Peers, ID, Opts) ->\n\tRandMin = maps:get(rand_min, Opts, 5),\n\tRandom = rand:uniform(min(RandMin, length(Peers))),\n\tPeer = lists:nth(Random, Peers),\n\tcase get_block_shadow(ID, Peer, binary, Opts) of\n\t\tnot_found ->\n\t\t\tget_block_shadow(Peers -- [Peer], ID, Opts);\n\t\t{ok, B, Time, Size} ->\n\t\t\t{Peer, B, Time, Size}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc Retrieve a block shadow by hash or height from the given peer.\n%% @end\n%%--------------------------------------------------------------------\nget_block_shadow(ID, Peer, Encoding, _Opts) ->\n\thandle_block_response(Peer, Encoding,\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => Peer,\n\t\t\tpath => get_block_path(ID, Encoding),\n\t\t\theaders => p2p_headers(),\n\t\t\tconnect_timeout => 500,\n\t\t\ttimeout => 30 * 1000,\n\t\t\tlimit => ?MAX_BODY_SIZE\n\t\t})).\n\n%% @doc Generate an appropriate URL for a block by its identifier.\nget_block_path({ID, _, _}, Encoding) ->\n\tget_block_path(ID, Encoding);\nget_block_path(ID, Encoding) when is_binary(ID) ->\n\tcase Encoding of\n\t\tbinary ->\n\t\t\t\"/block2/hash/\" ++ binary_to_list(ar_util:encode(ID));\n\t\tjson ->\n\t\t\t\"/block/hash/\" ++ binary_to_list(ar_util:encode(ID))\n\tend;\nget_block_path(ID, Encoding) when is_integer(ID) ->\n\tcase Encoding of\n\t\tbinary ->\n\t\t\t\"/block2/height/\" ++ integer_to_list(ID);\n\t\tjson ->\n\t\t\t\"/block/height/\" ++ integer_to_list(ID)\n\tend.\n\n%% @doc Get a bunch of wallets by the given root hash from external peers.\nget_wallet_list_chunk(Peers, H) ->\n\tget_wallet_list_chunk(Peers, H, start).\n\nget_wallet_list_chunk([], _H, _Cursor) ->\n\t{error, not_found};\nget_wallet_list_chunk([Peer | Peers], H, Cursor) ->\n\tBasePath = \"/wallet_list/\" ++ binary_to_list(ar_util:encode(H)),\n\tPath =\n\t\tcase Cursor of\n\t\t\tstart ->\n\t\t\t\tBasePath;\n\t\t\t_ ->\n\t\t\t\tBasePath ++ \"/\" ++ binary_to_list(ar_util:encode(Cursor))\n\t\tend,\n\tResponse =\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => Peer,\n\t\t\tpath => Path,\n\t\t\theaders => p2p_headers(),\n\t\t\tlimit => ?MAX_SERIALIZED_WALLET_LIST_CHUNK_SIZE,\n\t\t\ttimeout => 10 * 1000,\n\t\t\tconnect_timeout => 1000\n\t\t}),\n\tcase Response of\n\t\t{ok, {{<<\"200\">>, _}, _, Body, _, _}} ->\n\t\t\tcase ar_serialize:etf_to_wallet_chunk_response(Body) of\n\t\t\t\t{ok, #{ next_cursor := NextCursor, wallets := Wallets }} ->\n\t\t\t\t\t{ok, {NextCursor, Wallets}};\n\t\t\t\tDeserializationResult ->\n\t\t\t\t\t?LOG_ERROR([\n\t\t\t\t\t\t{event, got_unexpected_wallet_list_chunk_deserialization_result},\n\t\t\t\t\t\t{deserialization_result, DeserializationResult}\n\t\t\t\t\t]),\n\t\t\t\t\tget_wallet_list_chunk(Peers, H, Cursor)\n\t\t\tend;\n\t\tResponse ->\n\t\t\tget_wallet_list_chunk(Peers, H, Cursor)\n\tend.\n\n%% @doc Get a wallet list by the given block hash from external peers.\nget_wallet_list([], _H) ->\n\tnot_found;\nget_wallet_list([Peer | Peers], H) ->\n\tcase get_wallet_list(Peer, H) of\n\t\tunavailable ->\n\t\t\tget_wallet_list(Peers, H);\n\t\tnot_found ->\n\t\t\tget_wallet_list(Peers, H);\n\t\tWL ->\n\t\t\tWL\n\tend;\nget_wallet_list(Peer, H) ->\n\tResponse =\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => Peer,\n\t\t\tpath => 
\"/block/hash/\" ++ binary_to_list(ar_util:encode(H)) ++ \"/wallet_list\",\n\t\t\theaders => p2p_headers()\n\t\t}),\n\tcase Response of\n\t\t{ok, {{<<\"200\">>, _}, _, Body, _, _}} ->\n\t\t\t{ok, ar_serialize:json_struct_to_wallet_list(Body)};\n\t\t{ok, {{<<\"404\">>, _}, _, _, _, _}} -> not_found;\n\t\t_ -> unavailable\n\tend.\n\nget_block_index(Peer, Start, End) ->\n\tget_block_index(Peer, Start, End, binary).\n\nget_block_index(Peer, Start, End, Encoding) ->\n\tStartList = integer_to_list(Start),\n\tEndList = integer_to_list(End),\n\tRoot = case Encoding of binary -> \"/block_index2/\"; json -> \"/block_index/\" end,\n\tcase ar_http:req(#{\n\t\t\t\tmethod => get,\n\t\t\t\tpeer => Peer,\n\t\t\t\tpath => Root ++ StartList ++ \"/\" ++ EndList,\n\t\t\t\ttimeout => 20000,\n\t\t\t\tconnect_timeout => 5000,\n\t\t\t\theaders => p2p_headers()\n\t\t\t}) of\n\t\t{ok, {{<<\"400\">>, _}, _, <<\"Request type not found.\">>, _, _}}\n\t\t\t\twhen Encoding == binary ->\n\t\t\tget_block_index(Peer, Start, End, json);\n\t\t{ok, {{<<\"200\">>, _}, _, Body, _, _}} ->\n\t\t\tcase decode_block_index(Body, Encoding) of\n\t\t\t\t{ok, BI} ->\n\t\t\t\t\t{ok, BI};\n\t\t\t\tError ->\n\t\t\t\t\t?LOG_WARNING([{event, failed_to_decode_block_index_range}, Error]),\n\t\t\t\t\tError\n\t\t\tend;\n\t\tError ->\n\t\t\t?LOG_WARNING([{event, failed_to_fetch_block_index_range},\n\t\t\t\t\t{error, io_lib:format(\"~p\", [Error])}]),\n\t\t\t{error, Error}\n\tend.\n\ndecode_block_index(Bin, binary) ->\n\tar_serialize:binary_to_block_index(Bin);\ndecode_block_index(Bin, json) ->\n\tcase ar_serialize:json_decode(Bin) of\n\t\t{ok, Struct} ->\n\t\t\tcase catch ar_serialize:json_struct_to_block_index(Struct) of\n\t\t\t\t{'EXIT', _} = Exc ->\n\t\t\t\t\t{error, Exc};\n\t\t\t\tBI ->\n\t\t\t\t\t{ok, BI}\n\t\t\tend;\n\t\tError ->\n\t\t\tError\n\tend.\n\nget_sync_record(Peer) ->\n\tHeaders = [{<<\"Content-Type\">>, <<\"application/etf\">>}],\n\thandle_sync_record_response(ar_http:req(#{\n\t\tpeer => Peer,\n\t\tmethod => get,\n\t\tpath => \"/data_sync_record\",\n\t\ttimeout => 30 * 1000,\n\t\tconnect_timeout => 2000,\n\t\tlimit => ?MAX_ETF_SYNC_RECORD_SIZE,\n\t\theaders => Headers\n\t})).\n\nget_sync_record(Peer, Start, Limit) ->\n\tHeaders = [{<<\"Content-Type\">>, <<\"application/etf\">>}],\n\thandle_sync_record_response(ar_http:req(#{\n\t\tpeer => Peer,\n\t\tmethod => get,\n\t\tpath => \"/data_sync_record/\" ++ integer_to_list(Start) ++ \"/\"\n\t\t\t\t++ integer_to_list(Limit),\n\t\ttimeout => 30 * 1000,\n\t\tconnect_timeout => 5000,\n\t\tlimit => ?MAX_ETF_SYNC_RECORD_SIZE,\n\t\theaders => Headers\n\t}), Start, Limit).\n\nget_sync_record(Peer, Start, End, Limit) ->\n\tHeaders = [{<<\"Content-Type\">>, <<\"application/etf\">>}],\n\thandle_sync_record_response(ar_http:req(#{\n\t\tpeer => Peer,\n\t\tmethod => get,\n\t\tpath => \"/data_sync_record/\" ++ integer_to_list(Start) ++ \"/\"\n\t\t\t\t++ integer_to_list(End) ++ \"/\" ++ integer_to_list(Limit),\n\t\ttimeout => 30 * 1000,\n\t\tconnect_timeout => 5000,\n\t\tlimit => ?MAX_ETF_SYNC_RECORD_SIZE,\n\t\theaders => Headers\n\t}), Start, Limit).\n\nget_footprints(Peer, Partition, Footprint) ->\n\thandle_footprints_response(ar_http:req(#{\n\t\tpeer => Peer,\n\t\tmethod => get,\n\t\tpath => \"/footprints/\" ++ integer_to_list(Partition) ++ \"/\" ++ integer_to_list(Footprint),\n\t\ttimeout => 10_000,\n\t\tconnect_timeout => 5_000,\n\t\tlimit => ?MAX_FOOTPRINT_PAYLOAD_SIZE,\n\t\theaders => p2p_headers()\n\t})).\n\nget_chunk_binary(Peer, Offset, RequestedPacking) ->\n\tPackingBinary = 
iolist_to_binary(ar_serialize:encode_packing(RequestedPacking, false)),\n\tHeaders = [{<<\"x-packing\">>, PackingBinary},\n\t\t\t%% The nodes not upgraded to the 2.5 version would ignore this header.\n\t\t\t%% It is fine because all offsets before 2.5 are not bucket-based.\n\t\t\t%% Client libraries do not send this header - normally they do not need\n\t\t\t%% bucket-based offsets. Bucket-based offsets are required in mining\n\t\t\t%% after the fork 2.5 and it is convenient to use them for syncing,\n\t\t\t%% thus setting the header here. A bucket-based offset corresponds to\n\t\t\t%% the chunk that ends in the same 256 KiB bucket starting from the\n\t\t\t%% 2.5 block. In most cases a bucket-based offset would correspond to\n\t\t\t%% the same chunk as the normal offset except for the offsets of the\n\t\t\t%% last and second last chunks of the transactions when these chunks\n\t\t\t%% are smaller than 256 KiB.\n\t\t\t{<<\"x-bucket-based-offset\">>, <<\"true\">>}],\n\tStartTime = erlang:monotonic_time(),\n\tResponse = ar_http:req(#{\n\t\tpeer => Peer,\n\t\tmethod => get,\n\t\tpath => \"/chunk2/\" ++ integer_to_binary(Offset),\n\t\ttimeout => 120 * 1000,\n\t\tconnect_timeout => 5000,\n\t\tlimit => ?MAX_SERIALIZED_CHUNK_PROOF_SIZE,\n\t\theaders => p2p_headers() ++ Headers\n\t}),\n\tprometheus_histogram:observe(\n\t\thttp_client_get_chunk_duration_seconds,\n\t\t[\n\t\t\tar_metrics:get_status_class(Response),\n\t\t\tar_util:format_peer(Peer)\n\t\t],\n\t\terlang:monotonic_time() - StartTime),\n\n\thandle_chunk_response(Response, RequestedPacking, Peer).\n\nget_mempool([]) ->\n\t{error, not_found};\nget_mempool([Peer | Peers]) ->\n    case get_mempool(Peer) of\n\t\t{{ok, TXIDs}, Peer} ->\n\t\t\t{{ok, TXIDs}, Peer};\n\t\t{error, Error} ->\n\t\t\tlog_failed_request(Error, [{event, failed_to_get_mempool_txids_from_peer},\n\t\t\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t\t\t{error, io_lib:format(\"~p\", [Error])}]),\n\t\t\tget_mempool(Peers -- [Peer])\n    end;\n\nget_mempool(Peer) ->\n\thandle_mempool_response(ar_http:req(#{\n\t\tpeer => Peer,\n\t\tmethod => get,\n\t\tpath => \"/tx/pending\",\n\t\ttimeout => 5 * 1000,\n\t\tconnect_timeout => 500,\n\t\t%% Sufficient for a JSON-encoded list of the transaction identifiers\n\t\t%% from a mempool with 250 MiB worth of transaction headers with no data.\n\t\tlimit => 3000000,\n\t\theaders => p2p_headers()\n\t}), Peer).\n\nget_sync_buckets(Peer) ->\n\thandle_get_sync_buckets_response(ar_http:req(#{\n\t\tpeer => Peer,\n\t\tmethod => get,\n\t\tpath => \"/sync_buckets\",\n\t\ttimeout => 10 * 1000,\n\t\tconnect_timeout => 2000,\n\t\tlimit => ?MAX_SYNC_BUCKETS_SIZE,\n\t\theaders => p2p_headers()\n\t}), ?DEFAULT_SYNC_BUCKET_SIZE).\n\nget_footprint_buckets(Peer) ->\n\thandle_get_sync_buckets_response(ar_http:req(#{\n\t\tpeer => Peer,\n\t\tmethod => get,\n\t\tpath => \"/footprint_buckets\",\n\t\ttimeout => 10 * 1000,\n\t\tconnect_timeout => 2000,\n\t\tlimit => ?MAX_SYNC_BUCKETS_SIZE,\n\t\theaders => p2p_headers()\n\t}), ?NETWORK_FOOTPRINT_BUCKET_SIZE).\n\nget_recent_hash_list(Peer) ->\n\thandle_get_recent_hash_list_response(ar_http:req(#{\n\t\tpeer => Peer,\n\t\tmethod => get,\n\t\tpath => \"/recent_hash_list\",\n\t\ttimeout => 2 * 1000,\n\t\tconnect_timeout => 1000,\n\t\tlimit => 3400,\n\t\theaders => p2p_headers()\n\t})).\n\nget_recent_hash_list_diff(Peer, HL) ->\n\tReverseHL = lists:reverse(HL),\n\thandle_get_recent_hash_list_diff_response(ar_http:req(#{\n\t\tpeer => Peer,\n\t\tmethod => get,\n\t\tpath => \"/recent_hash_list_diff\",\n\t\ttimeout => 10 * 
1000,\n\t\tconnect_timeout => 1000,\n\t\t%%        PrevH H    Len        TXID\n\t\tlimit => (48 + (48 + 2 + 1000 * 32) * 49), % 1570498 bytes,\n\t\t\t\t\t\t\t\t\t\t\t\t\t% very pessimistic case.\n\t\tbody => iolist_to_binary(ReverseHL),\n\t\theaders => p2p_headers()\n\t}), HL, Peer).\n\n%% @doc Fetch the reward history from one of the given peers. The reward history\n%% must contain ar_rewards:buffered_reward_history_length/1 elements. The reward history\n%% hashes are validated against the given ExpectedRewardHistoryHashes. Return not_found\n%% if we fail to fetch a reward history of the expected length from any of the peers.\nget_reward_history([Peer | Peers], B, ExpectedRewardHistoryHashes) ->\n\t#block{ height = Height, indep_hash = H } = B,\n\tExpectedLength = ar_rewards:buffered_reward_history_length(Height),\n\tDoubleCheckLength = ar_rewards:expected_hashes_length(Height),\n\ttrue = length(ExpectedRewardHistoryHashes) == min(\n\t\t\t\t\t\t\t\t\t\t\t\t\tHeight - ar_fork:height_2_6() + 1,\n\t\t\t\t\t\t\t\t\t\t\t\t\tDoubleCheckLength),\n\tcase ar_http:req(#{\n\t\t\t\tpeer => Peer,\n\t\t\t\tmethod => get,\n\t\t\t\tpath => \"/reward_history/\" ++ binary_to_list(ar_util:encode(H)),\n\t\t\t\ttimeout => 30000,\n\t\t\t\theaders => p2p_headers()\n\t\t\t}) of\n\t\t{ok, {{<<\"200\">>, _}, _, Body, _, _}} ->\n\t\t\tcase ar_serialize:binary_to_reward_history(Body) of\n\t\t\t\t{ok, RewardHistory} -> % when length(RewardHistory) == ExpectedLength ->\n\t\t\t\t\tcase ar_rewards:validate_reward_history_hashes(Height, RewardHistory,\n\t\t\t\t\t\t\tExpectedRewardHistoryHashes) of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t?LOG_DEBUG([\n\t\t\t\t\t\t\t\t{event, received_valid_reward_history},\n\t\t\t\t\t\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t\t\t\t\t\t{height, Height},\n\t\t\t\t\t\t\t\t{expected_length, ExpectedLength},\n\t\t\t\t\t\t\t\t{length, length(RewardHistory)}\n\t\t\t\t\t\t\t]),\n\t\t\t\t\t\t\t{ok, RewardHistory};\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t?LOG_WARNING([{event, received_invalid_reward_history},\n\t\t\t\t\t\t\t\t\t{peer, ar_util:format_peer(Peer)}]),\n\t\t\t\t\t\t\tget_reward_history(Peers, B, ExpectedRewardHistoryHashes)\n\t\t\t\t\tend;\n\t\t\t\t% {ok, L} ->\n\t\t\t\t%\t?LOG_WARNING([{event, received_reward_history_of_unexpected_length},\n\t\t\t\t%\t\t\t{expected_length, ExpectedLength}, {received_length, length(L)},\n\t\t\t\t%\t\t\t{peer, ar_util:format_peer(Peer)}]),\n\t\t\t\t%\tget_reward_history(Peers, B, ExpectedRewardHistoryHashes);\n\t\t\t\t{error, _} ->\n\t\t\t\t\t?LOG_WARNING([{event, failed_to_parse_reward_history},\n\t\t\t\t\t\t\t{peer, ar_util:format_peer(Peer)}]),\n\t\t\t\t\tget_reward_history(Peers, B, ExpectedRewardHistoryHashes)\n\t\t\tend;\n\t\tReply ->\n\t\t\t?LOG_WARNING([{event, failed_to_fetch_reward_history},\n\t\t\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t\t\t{reply, io_lib:format(\"~p\", [Reply])}]),\n\t\t\tget_reward_history(Peers, B, ExpectedRewardHistoryHashes)\n\tend;\nget_reward_history([], _B, _RewardHistoryHashes) ->\n\tnot_found.\n\nget_block_time_history([Peer | Peers], B, ExpectedBlockTimeHistoryHashes) ->\n\t#block{ height = Height, indep_hash = H } = B,\n\tFork_2_7 = ar_fork:height_2_7(),\n\ttrue = Height >= Fork_2_7,\n\tExpectedLength = min(Height - Fork_2_7 + 1,\n\t\t\tar_block_time_history:history_length() + ar_block:get_consensus_window_size()),\n\ttrue = length(ExpectedBlockTimeHistoryHashes) == min(Height - Fork_2_7 + 1,\n\t\t\tar_block:get_consensus_window_size()),\n\tcase ar_http:req(#{\n\t\t\t\tpeer => Peer,\n\t\t\t\tmethod => 
get,\n\t\t\t\tpath => \"/block_time_history/\" ++ binary_to_list(ar_util:encode(H)),\n\t\t\t\ttimeout => 30000,\n\t\t\t\theaders => p2p_headers()\n\t\t\t}) of\n\t\t{ok, {{<<\"200\">>, _}, _, Body, _, _}} ->\n\t\t\tcase ar_serialize:binary_to_block_time_history(Body) of\n\t\t\t\t{ok, BlockTimeHistory} when length(BlockTimeHistory) == ExpectedLength ->\n\t\t\t\t\tcase ar_block_time_history:validate_hashes(BlockTimeHistory,\n\t\t\t\t\t\t\tExpectedBlockTimeHistoryHashes) of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t{ok, BlockTimeHistory};\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t?LOG_WARNING([{event, received_invalid_block_time_history},\n\t\t\t\t\t\t\t\t\t{peer, ar_util:format_peer(Peer)}]),\n\t\t\t\t\t\t\tget_block_time_history(Peers, B, ExpectedBlockTimeHistoryHashes)\n\t\t\t\t\tend;\n\t\t\t\t{ok, L} ->\n\t\t\t\t\t?LOG_WARNING([{event, received_block_time_history_of_unexpected_length},\n\t\t\t\t\t\t\t{expected_length, ExpectedLength}, {received_length, length(L)},\n\t\t\t\t\t\t\t{peer, ar_util:format_peer(Peer)}]),\n\t\t\t\t\tget_block_time_history(Peers, B, ExpectedBlockTimeHistoryHashes);\n\t\t\t\t{error, _} ->\n\t\t\t\t\t?LOG_WARNING([{event, failed_to_parse_block_time_history},\n\t\t\t\t\t\t\t{peer, ar_util:format_peer(Peer)}]),\n\t\t\t\t\tget_block_time_history(Peers, B, ExpectedBlockTimeHistoryHashes)\n\t\t\tend;\n\t\tReply ->\n\t\t\t?LOG_WARNING([{event, failed_to_fetch_block_time_history},\n\t\t\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t\t\t{reply, io_lib:format(\"~p\", [Reply])}]),\n\t\t\tget_block_time_history(Peers, B, ExpectedBlockTimeHistoryHashes)\n\tend;\nget_block_time_history([], _B, _RewardHistoryHashes) ->\n\tnot_found.\n\npush_nonce_limiter_update(Peer, Update, Format) ->\n\tBody = ar_serialize:nonce_limiter_update_to_binary(Format, Update),\n\tcase ar_http:req(#{\n\t\t\t\tpeer => Peer,\n\t\t\t\tmethod => post,\n\t\t\t\tpath => \"/vdf\",\n\t\t\t\tbody => Body,\n\t\t\t\ttimeout => 2000,\n\t\t\t\tlimit => 100,\n\t\t\t\theaders => p2p_headers()\n\t\t\t}) of\n\t\t{ok, {{<<\"200\">>, _}, _, <<>>, _, _}} ->\n\t\t\tok;\n\t\t{ok, {{<<\"202\">>, _}, _, ResponseBody, _, _}} ->\n\t\t\tar_serialize:binary_to_nonce_limiter_update_response(ResponseBody);\n\t\t{ok, {{Status, _}, _, ResponseBody, _, _}} ->\n\t\t\t{error, {Status, ResponseBody}};\n\t\tReply ->\n\t\t\tReply\n\tend.\n\nget_vdf_update(Peer) ->\n\tcase ar_http:req(#{ peer => Peer, method => get, path => \"/vdf2\",\n\t\t\ttimeout => 2000, headers => p2p_headers()\n\t\t\t}) of\n\t\t{ok, {{<<\"200\">>, _}, _, Bin, _, _}} ->\n\t\t\tar_serialize:binary_to_nonce_limiter_update(2, Bin);\n\t\t{ok, {{<<\"404\">>, _}, _, _, _, _}} ->\n\t\t\t{error, not_found};\n\t\t{ok, {{Status, _}, _, ResponseBody, _, _}} ->\n\t\t\t{error, {Status, ResponseBody}};\n\t\tReply ->\n\t\t\tReply\n\tend.\n\nget_vdf_session(Peer) ->\n\t{Path, Format} =\n\t\tcase ar_config:compute_own_vdf() of\n\t\t\ttrue ->\n\t\t\t\t%% If we compute our own VDF, we need to know the VDF difficulties\n\t\t\t\t%% so that we can continue extending the new session. 
The VDF difficulties\n\t\t\t\t%% have been introduced in the format number 4.\n\t\t\t\t{\"/vdf4/session\", 4};\n\t\t\tfalse ->\n\t\t\t\t{\"/vdf3/session\", 3}\n\t\tend,\n\tcase ar_http:req(#{ peer => Peer, method => get, path => Path,\n\t\t\ttimeout => 10000, headers => p2p_headers() }) of\n\t\t{ok, {{<<\"200\">>, _}, _, Bin, _, _}} ->\n\t\t\tar_serialize:binary_to_nonce_limiter_update(Format, Bin);\n\t\t{ok, {{<<\"404\">>, _}, _, _, _, _}} ->\n\t\t\t{error, not_found};\n\t\t{ok, {{Status, _}, _, ResponseBody, _, _}} ->\n\t\t\t{error, {Status, ResponseBody}};\n\t\tReply ->\n\t\t\tReply\n\tend.\n\nget_previous_vdf_session(Peer) ->\n\t{Path, Format} =\n\t\tcase ar_config:compute_own_vdf() of\n\t\t\ttrue ->\n\t\t\t\t%% If we compute our own VDF, we need to know the VDF difficulties\n\t\t\t\t%% so that we can continue extending the new session. The VDF difficulties\n\t\t\t\t%% have been introduced in the format number 4.\n\t\t\t\t{\"/vdf4/previous_session\", 4};\n\t\t\tfalse ->\n\t\t\t\t{\"/vdf2/previous_session\", 2}\n\t\tend,\n\tcase ar_http:req(#{ peer => Peer, method => get, path => Path,\n\t\t\ttimeout => 10000, headers => p2p_headers() }) of\n\t\t{ok, {{<<\"200\">>, _}, _, Bin, _, _}} ->\n\t\t\tar_serialize:binary_to_nonce_limiter_update(Format, Bin);\n\t\t{ok, {{<<\"404\">>, _}, _, _, _, _}} ->\n\t\t\t{error, not_found};\n\t\t{ok, {{Status, _}, _, ResponseBody, _, _}} ->\n\t\t\t{error, {Status, ResponseBody}};\n\t\tReply ->\n\t\t\tReply\n\tend.\n\n%% -----------------------------------------------------------------------------\n%% Coordinated Mining and Pool Request\n%% -----------------------------------------------------------------------------\n\nget_cm_partition_table(Peer) ->\n\tReq = build_cm_or_pool_request(get, Peer, \"/coordinated_mining/partition_table\"),\n\thandle_cm_partition_table_response(ar_http:req(Req)).\n\ncm_h1_send(Peer, Candidate) ->\n\tJSON = ar_serialize:jsonify(ar_serialize:candidate_to_json_struct(Candidate)),\n\tReq = build_cm_or_pool_request(post, Peer, \"/coordinated_mining/h1\", JSON),\n\thandle_cm_noop_response(ar_http:req(Req)).\n\ncm_h2_send(Peer, Candidate) ->\n\tJSON = ar_serialize:jsonify(ar_serialize:candidate_to_json_struct(Candidate)),\n\tReq = build_cm_or_pool_request(post, Peer, \"/coordinated_mining/h2\", JSON),\n\thandle_cm_noop_response(ar_http:req(Req)).\n\ncm_publish_send(Peer, Solution) ->\n\t?LOG_DEBUG([{event, cm_publish_send}, {peer, ar_util:format_peer(Peer)},\n\t\t{solution, ar_util:encode(Solution#mining_solution.solution_hash)},\n\t\t{step_number, Solution#mining_solution.step_number},\n\t\t{start_interval_number, Solution#mining_solution.start_interval_number},\n\t\t{seed, ar_util:encode(Solution#mining_solution.seed)}]),\n\tJSON = ar_serialize:jsonify(ar_serialize:solution_to_json_struct(Solution)),\n\tReq = build_cm_or_pool_request(post, Peer, \"/coordinated_mining/publish\", JSON),\n\thandle_cm_noop_response(ar_http:req(Req)).\n\n%% @doc Fetch the jobs from the pool or coordinated mining exit peer.\nget_jobs(Peer, PrevOutput) ->\n\tprometheus_counter:inc(pool_job_request_count),\n\tReq = build_cm_or_pool_request(get, Peer,\n\t\t\"/jobs/\" ++ binary_to_list(ar_util:encode(PrevOutput))),\n\thandle_get_jobs_response(ar_http:req(Req)).\n\n%% @doc Post the partial solution to the pool or coordinated mining exit peer.\npost_partial_solution(Peer, Solution) ->\n\tPayload =\n\t\tcase is_binary(Solution) of\n\t\t\ttrue ->\n\t\t\t\tSolution;\n\t\t\tfalse 
->\n\t\t\t\tar_serialize:jsonify(ar_serialize:solution_to_json_struct(Solution))\n\t\tend,\n\tReq = build_cm_or_pool_request(post, Peer, \"/partial_solution\", Payload),\n\thandle_post_partial_solution_response(ar_http:req(Req#{\n\t\ttimeout => 20 * 1000,\n\t\tconnect_timeout => 5 * 1000\n\t})).\n\nget_pool_cm_jobs(Peer, Jobs) ->\n\tJSON = ar_serialize:jsonify(ar_serialize:pool_cm_jobs_to_json_struct(Jobs)),\n\tReq = build_cm_or_pool_request(post, Peer, \"/pool_cm_jobs\", JSON),\n\thandle_get_pool_cm_jobs_response(ar_http:req(Req#{\n\t\tconnect_timeout => 1000\n\t})).\n\npost_pool_cm_jobs(Peer, Payload) ->\n\tReq = build_cm_or_pool_request(post, Peer, \"/pool_cm_jobs\", Payload),\n\thandle_post_pool_cm_jobs_response(ar_http:req(Req#{\n\t\ttimeout => 10 * 1000,\n\t\tconnect_timeout => 2000\n\t})).\n\npost_cm_partition_table_to_pool(Peer, Payload) ->\n\tReq = build_cm_or_pool_request(post, Peer, \"/coordinated_mining/partition_table\", Payload),\n\thandle_cm_partition_table_response(ar_http:req(Req#{\n\t\ttimeout => 10 * 1000,\n\t\tconnect_timeout => 2000\n\t})).\n\n%% @doc Fetch data_root metadata for the block that starts at or before the given offset,\n%% and validate it against the local block index. Also recompute the TXRoot from entries.\nget_data_roots(Peer, Offset) ->\n\tPath = \"/data_roots/\" ++ integer_to_list(Offset),\n\tResponse = ar_http:req(#{\n\t\t\tpeer => Peer,\n\t\t\tmethod => get,\n\t\t\tpath => Path,\n\t\t\ttimeout => 10 * 1000,\n\t\t\tconnect_timeout => 2000,\n\t\t\theaders => p2p_headers()\n\t\t}),\n\thandle_get_data_roots_response(Response, Offset).\n\nhandle_get_data_roots_response({ok, {{<<\"200\">>, _}, _, Body, _, _}}, Offset) ->\n\tcase ar_serialize:binary_to_data_roots(Body) of\n\t\t{ok, {TXRoot, BlockSize, Entries}} ->\n\t\t\tar_data_root_sync:validate_data_roots(TXRoot, BlockSize, Entries, Offset);\n\t\t_ ->\n\t\t\t{error, invalid_response}\n\tend;\nhandle_get_data_roots_response({ok, {{<<\"404\">>, _}, _, _, _, _}}, _Offset) ->\n\t{error, not_found};\nhandle_get_data_roots_response(Other, _Offset) ->\n\t{error, Other}.\n\nget_peer_and_path_from_url(URL) ->\n\t#{ host := Host, path := P } = Parsed = uri_string:parse(URL),\n\tPeer = case maps:get(port, Parsed, undefined) of\n\t\tundefined ->\n\t\t\tcase maps:get(scheme, Parsed, undefined) of\n\t\t\t\t\"https\" ->\n\t\t\t\t\t{binary_to_list(Host), 443};\n\t\t\t\t_ ->\n\t\t\t\t\t{binary_to_list(Host), 1984}\n\t\t\tend;\n\t\tPort ->\n\t\t\t{binary_to_list(Host), Port}\n\tend,\n\t{Peer, binary_to_list(P)}.\n\nbuild_cm_or_pool_request(Method, Peer, Path) ->\n\tbuild_cm_or_pool_request(Method, Peer, Path, <<>>).\nbuild_cm_or_pool_request(Method, Peer, Path, Body) ->\n\t{Peer3, Headers, BasePath, IsPeerRequest} =\n\t\tcase Peer of\n\t\t\t{pool, URL} ->\n\t\t\t\t{Peer2, Path2} = get_peer_and_path_from_url(URL),\n\t\t\t\t{Peer2, pool_client_headers(), Path2, false};\n\t\t\t_ ->\n\t\t\t\t{Peer, cm_p2p_headers(), \"\", true}\n\t\tend,\n\tHeaders2 = case Method of\n\t\tget ->\n\t\t\tHeaders;\n\t\t_ ->\n\t\t\tadd_header(<<\"content-type\">>, <<\"application/json\">>, Headers)\n\tend,\n\t#{\n\t\tpeer => Peer3,\n\t\tmethod => Method,\n\t\tpath => BasePath ++ Path,\n\t\ttimeout => 5 * 1000,\n\t\tconnect_timeout => 500,\n\t\theaders => Headers2,\n\t\tbody => Body,\n\t\tis_peer_request => IsPeerRequest\n\t}.\n\nhandle_get_pool_cm_jobs_response({ok, {{<<\"200\">>, _}, _, Body, _, _}}) ->\n\tcase catch ar_serialize:json_map_to_pool_cm_jobs(\n\t\t\telement(2, ar_serialize:json_decode(Body, [return_maps]))) of\n\t\t{'EXIT', _} 
->\n\t\t\t{error, invalid_json};\n\t\tJobs ->\n\t\t\t{ok, Jobs}\n\tend;\nhandle_get_pool_cm_jobs_response(Reply) ->\n\t{error, Reply}.\n\nhandle_post_pool_cm_jobs_response({ok, {{<<\"200\">>, _}, _, _, _, _}}) ->\n\tok;\nhandle_post_pool_cm_jobs_response(Reply) ->\n\t{error, Reply}.\n\nhandle_post_partial_solution_response({ok, {{<<\"200\">>, _}, _, Body, _, _}}) ->\n\tcase catch jiffy:decode(Body, [return_maps]) of\n\t\t{'EXIT', _} ->\n\t\t\t{error, invalid_json};\n\t\tResponse ->\n\t\t\t{ok, Response}\n\tend;\nhandle_post_partial_solution_response(Reply) ->\n\t{error, Reply}.\n\nhandle_get_jobs_response({ok, {{<<\"200\">>, _}, _, Body, _, _}}) ->\n\tcase catch ar_serialize:json_struct_to_jobs(ar_serialize:dejsonify(Body)) of\n\t\t{'EXIT', _} ->\n\t\t\t{error, invalid_json};\n\t\tJobs ->\n\t\t\tprometheus_counter:inc(pool_total_job_got_count, length(Jobs#jobs.jobs)),\n\t\t\t{ok, Jobs}\n\tend;\nhandle_get_jobs_response(Reply) ->\n\t{error, Reply}.\n\nhandle_sync_record_response({ok, {{<<\"200\">>, _}, _, Body, _, _}}) ->\n\tar_intervals:safe_from_etf(Body);\nhandle_sync_record_response({ok, {{<<\"429\">>, _}, _, _, _, _}}) ->\n\t{error, too_many_requests};\nhandle_sync_record_response(Reply) ->\n\t{error, Reply}.\n\nhandle_sync_record_response({ok, {{<<\"200\">>, _}, _, Body, _, _}}, Start, Limit) ->\n\tcase ar_intervals:safe_from_etf(Body) of\n\t\t{ok, Intervals} ->\n\t\t\tcase ar_intervals:count(Intervals) > Limit of\n\t\t\t\ttrue ->\n\t\t\t\t\t{error, too_many_intervals};\n\t\t\t\tfalse ->\n\t\t\t\t\tcase ar_intervals:is_empty(Intervals) of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t{ok, Intervals};\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tcase element(1, ar_intervals:smallest(Intervals)) < Start of\n\t\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\t\t{error, intervals_do_not_match_cursor};\n\t\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t\t{ok, Intervals}\n\t\t\t\t\t\t\tend\n\t\t\t\t\tend\n\t\t\tend;\n\t\tError ->\n\t\t\tError\n\tend;\nhandle_sync_record_response({ok, {{<<\"429\">>, _}, _, _, _, _}}, _, _) ->\n\t{error, too_many_requests};\nhandle_sync_record_response(Reply, _, _) ->\n\t{error, Reply}.\n\nhandle_footprints_response({ok, {{<<\"200\">>, _}, _, Body, _, _}}) ->\n\tcase catch ar_serialize:json_map_to_footprint(jiffy:decode(Body, [return_maps])) of\n\t\t{'EXIT', Reason} ->\n\t\t\t{error, Reason};\n\t\tFootprint ->\n\t\t\t{ok, Footprint}\n\tend;\nhandle_footprints_response({ok, {{<<\"404\">>, _}, _, _, _, _}}) ->\n\tnot_found;\nhandle_footprints_response({ok, {{<<\"400\">>, _}, _, Body, _, _}}) ->\n\tcase catch jiffy:decode(Body, [return_maps]) of\n\t\t{'EXIT', Reason} ->\n\t\t\t{error, Reason};\n\t\t#{ <<\"error\">> := <<\"footprint_number_too_large\">> } ->\n\t\t\t{error, footprint_number_too_large};\n\t\t#{ <<\"error\">> := <<\"negative_footprint_number\">> } ->\n\t\t\t{error, negative_footprint_number};\n\t\t#{ <<\"error\">> := <<\"negative_partition_number\">> } ->\n\t\t\t{error, negative_partition_number};\n\t\t#{ <<\"error\">> := <<\"invalid_footprint_number_encoding\">> } ->\n\t\t\t{error, invalid_footprint_number_encoding};\n\t\tResponse ->\n\t\t\t{error, Response}\n\tend;\nhandle_footprints_response({ok, {{<<\"429\">>, _}, _, _, _, _}}) ->\n\t{error, too_many_requests};\nhandle_footprints_response(Reply) ->\n\t{error, Reply}.\n\nhandle_chunk_response({ok, {{<<\"200\">>, _}, _, Body, Start, End}}, RequestedPacking, Peer) ->\n\tcase catch ar_serialize:binary_to_poa(Body) of\n\t\t{'EXIT', Reason} ->\n\t\t\t{error, Reason};\n\t\t{error, Reason} ->\n\t\t\t{error, Reason};\n\t\t{ok, #{ packing := 
Packing } = Proof} ->\n\t\t\tCheckPacking =\n\t\t\t\tcase RequestedPacking of\n\t\t\t\t\tany ->\n\t\t\t\t\t\ttrue;\n\t\t\t\t\tPacking ->\n\t\t\t\t\t\ttrue;\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tfalse\n\t\t\t\tend,\n\t\t\tcase CheckPacking of\n\t\t\t\ttrue ->\n\t\t\t\t\tcase maps:get(chunk, Proof) of\n\t\t\t\t\t\t<<>> ->\n\t\t\t\t\t\t\t{error, empty_chunk};\n\t\t\t\t\t\tChunk when byte_size(Chunk) > ?DATA_CHUNK_SIZE ->\n\t\t\t\t\t\t\t{error, chunk_bigger_than_256kib};\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t{ok, Proof, End - Start, byte_size(term_to_binary(Proof))}\n\t\t\t\t\tend;\n\t\t\t\tfalse ->\n\t\t\t\t\t?LOG_WARNING([{event, peer_served_proof_with_wrong_packing},\n\t\t\t\t\t\t{requested_packing, ar_serialize:encode_packing(RequestedPacking, false)},\n\t\t\t\t\t\t{got_packing, ar_serialize:encode_packing(Packing, false)},\n\t\t\t\t\t\t{peer, ar_util:format_peer(Peer)}]),\n\t\t\t\t\t{error, wrong_packing}\n\t\t\tend\n\tend;\nhandle_chunk_response({error, _} = Response, _RequestedPacking, _Peer) ->\n\tResponse;\nhandle_chunk_response(Response, _RequestedPacking, _Peer) ->\n\t{error, Response}.\n\nhandle_mempool_response({ok, {{<<\"200\">>, _}, _, Body, _, _}}, Peer) ->\n\tcase catch jiffy:decode(Body) of\n\t\t{'EXIT', Error} ->\n\t\t\t?LOG_WARNING([{event, failed_to_parse_peer_mempool},\n\t\t\t\t{error, io_lib:format(\"~p\", [Error])}]),\n\t\t\t{error, invalid_json};\n\t\tL when is_list(L) ->\n\t\t\tResult = lists:foldr(\n\t\t\t\tfun\t(_, {error, Reason}) ->\n\t\t\t\t\t\t{error, Reason};\n\t\t\t\t\t(EncodedTXID, {ok, Acc}) ->\n\t\t\t\t\t\tcase ar_util:safe_decode(EncodedTXID) of\n\t\t\t\t\t\t\t{ok, TXID} when byte_size(TXID) /= 32 ->\n\t\t\t\t\t\t\t\t?LOG_WARNING([{event, failed_to_parse_peer_mempool},\n\t\t\t\t\t\t\t\t\t{reason, invalid_txid},\n\t\t\t\t\t\t\t\t\t{txid, io_lib:format(\"~p\", [EncodedTXID])}]),\n\t\t\t\t\t\t\t\t{error, invalid_txid};\n\t\t\t\t\t\t\t{ok, TXID} ->\n\t\t\t\t\t\t\t\t{ok, [TXID | Acc]};\n\t\t\t\t\t\t\t{error, invalid} ->\n\t\t\t\t\t\t\t\t?LOG_WARNING([{event, failed_to_parse_peer_mempool},\n\t\t\t\t\t\t\t\t\t{reason, invalid_txid},\n\t\t\t\t\t\t\t\t\t{txid, io_lib:format(\"~p\", [EncodedTXID])}]),\n\t\t\t\t\t\t\t\t{error, invalid_txid}\n\t\t\t\t\t\tend\n\t\t\t\tend,\n\t\t\t\t{ok, []},\n\t\t\t\tL\n\t\t\t),\n\t\t\tcase Result of\n\t\t\t\t{ok, TXIDs} ->\n\t\t\t\t\t{{ok, TXIDs}, Peer};\n\t\t\t\t{error, Reason2} ->\n\t\t\t\t\t{error, Reason2}\n\t\t\tend;\n\t\tNotList ->\n\t\t\t?LOG_WARNING([{event, failed_to_parse_peer_mempool}, {reason, invalid_format},\n\t\t\t\t{reply, io_lib:format(\"~p\", [NotList])}]),\n\t\t\t{error, invalid_format}\n\tend;\nhandle_mempool_response(Response, _Peer) ->\n\t{error, Response}.\n\nhandle_get_sync_buckets_response({ok, {{<<\"200\">>, _}, _, Body, _, _}}, BucketSize) ->\n\tcase ar_sync_buckets:deserialize(Body, BucketSize) of\n\t\t{ok, Buckets} ->\n\t\t\t{ok, Buckets};\n\t\t{'EXIT', Reason} ->\n\t\t\t{error, Reason};\n\t\t_ ->\n\t\t\t{error, invalid_response_type}\n\tend;\nhandle_get_sync_buckets_response({ok, {{<<\"400\">>, _}, _,\n\t\t<<\"Request type not found.\">>, _, _}}, _BucketSize) ->\n\t{error, request_type_not_found};\nhandle_get_sync_buckets_response(Response, _BucketSize) ->\n\t{error, Response}.\n\nhandle_get_recent_hash_list_response({ok, {{<<\"200\">>, _}, _, Body, _, _}}) ->\n\tcase ar_serialize:json_decode(Body) of\n\t\t{ok, HL} when is_list(HL) ->\n\t\t\tdecode_hash_list(HL);\n\t\t{ok, _} ->\n\t\t\t{error, invalid_hash_list};\n\t\tError ->\n\t\t\tError\n\tend;\nhandle_get_recent_hash_list_response({ok, {{<<\"400\">>, _}, 
_,\n\t\t<<\"Request type not found.\">>, _, _}}) ->\n\t{error, request_type_not_found};\nhandle_get_recent_hash_list_response(Response) ->\n\t{error, Response}.\n\nhandle_get_recent_hash_list_diff_response({ok, {{<<\"200\">>, _}, _, Body, _, _}}, HL, Peer) ->\n\tcase parse_recent_hash_list_diff(Body, HL) of\n\t\t{error, invalid_input} ->\n\t\t\tar_peers:issue_warning(Peer, recent_hash_list_diff, invalid_input),\n\t\t\t{error, invalid_input};\n\t\t{error, unknown_base} ->\n\t\t\tar_peers:issue_warning(Peer, recent_hash_list_diff, unknown_base),\n\t\t\t{error, unknown_base};\n\t\t{ok, Reply} ->\n\t\t\t{ok, Reply}\n\tend;\nhandle_get_recent_hash_list_diff_response({ok, {{<<\"404\">>, _}, _,\n\t\t_, _, _}}, _HL, _Peer) ->\n\t{error, not_found};\nhandle_get_recent_hash_list_diff_response({ok, {{<<\"400\">>, _}, _,\n\t\t<<\"Request type not found.\">>, _, _}}, _HL, _Peer) ->\n\t{error, request_type_not_found};\nhandle_get_recent_hash_list_diff_response(Response, _HL, _Peer) ->\n\t{error, Response}.\n\ndecode_hash_list(HL) ->\n\tdecode_hash_list(HL, []).\n\ndecode_hash_list([H | HL], DecodedHL) ->\n\tcase ar_util:safe_decode(H) of\n\t\t{ok, DecodedH} ->\n\t\t\tdecode_hash_list(HL, [DecodedH | DecodedHL]);\n\t\tError ->\n\t\t\tError\n\tend;\ndecode_hash_list([], DecodedHL) ->\n\t{ok, lists:reverse(DecodedHL)}.\n\nparse_recent_hash_list_diff(<< PrevH:48/binary, Rest/binary >>, HL) ->\n\tcase lists:member(PrevH, HL) of\n\t\ttrue ->\n\t\t\tparse_recent_hash_list_diff(Rest);\n\t\tfalse ->\n\t\t\t{error, unknown_base}\n\tend;\nparse_recent_hash_list_diff(_Input, _HL) ->\n\t{error, invalid_input}.\n\nparse_recent_hash_list_diff(<<>>) ->\n\t{ok, in_sync};\nparse_recent_hash_list_diff(<< H:48/binary, Len:16, TXIDs:(32 * Len)/binary, Rest/binary >>)\n\t\twhen Len =< ?BLOCK_TX_COUNT_LIMIT ->\n\tcase ar_block_cache:get(block_cache, H) of\n\t\tnot_found ->\n\t\t\tcase count_blocks_on_top(Rest) of\n\t\t\t\t{ok, N} ->\n\t\t\t\t\t{ok, {H, parse_txids(TXIDs), N}};\n\t\t\t\tError ->\n\t\t\t\t\tError\n\t\t\tend;\n\t\t_ ->\n\t\t\tparse_recent_hash_list_diff(Rest)\n\tend;\nparse_recent_hash_list_diff(_Input) ->\n\t{error, invalid_input4}.\n\ncount_blocks_on_top(Bin) ->\n\tcount_blocks_on_top(Bin, 0).\n\ncount_blocks_on_top(<<>>, N) ->\n\t{ok, N};\ncount_blocks_on_top(<< _H:48/binary, Len:16, _TXIDs:(32 * Len)/binary, Rest/binary >>, N) ->\n\tcount_blocks_on_top(Rest, N + 1);\ncount_blocks_on_top(_Bin, _N) ->\n\t{error, invalid_input5}.\n\nparse_txids(<< TXID:32/binary, Rest/binary >>) ->\n\t[TXID | parse_txids(Rest)];\nparse_txids(<<>>) ->\n\t[].\n\n%% @doc Return the current height of a remote node.\nget_height(Peer) ->\n\tResponse =\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => Peer,\n\t\t\tpath => \"/height\",\n\t\t\theaders => p2p_headers()\n\t\t}),\n\tcase Response of\n\t\t{ok, {{<<\"200\">>, _}, _, Body, _, _}} ->\n\t\t\tcase catch binary_to_integer(Body) of\n\t\t\t\t{'EXIT', _} ->\n\t\t\t\t\t{error, invalid_height};\n\t\t\t\tHeight ->\n\t\t\t\t\tHeight\n\t\t\tend;\n\t\t{ok, {{<<\"500\">>, _}, _, _, _, _}} -> not_joined\n\tend.\n\nget_txs(Peers, B) ->\n\tcase B#block.txs of\n\t\tTXIDs when length(TXIDs) > ?BLOCK_TX_COUNT_LIMIT ->\n\t\t\t?LOG_ERROR([{event, downloaded_txs_count_exceeds_limit}]),\n\t\t\t{error, txs_count_exceeds_limit};\n\t\tTXIDs ->\n\t\t\tget_txs(B#block.height, Peers, TXIDs, [], 0)\n\tend.\n\nget_txs(_Height, _Peers, [], TXs, _TotalSize) ->\n\t{ok, lists:reverse(TXs)};\nget_txs(Height, Peers, [TXID | Rest], TXs, TotalSize) ->\n\tFork_2_0 = ar_fork:height_2_0(),\n\tcase get_tx(Peers, 
TXID) of\n\t\t#tx{ format = 2 } = TX ->\n\t\t\tget_txs(Height, Peers, Rest, [TX | TXs], TotalSize);\n\t\t#tx{} = TX when Height < Fork_2_0 ->\n\t\t\tget_txs(Height, Peers, Rest, [TX | TXs], TotalSize);\n\t\t#tx{ format = 1 } = TX ->\n\t\t\tcase TotalSize + TX#tx.data_size of\n\t\t\t\tNewTotalSize when NewTotalSize > ?BLOCK_TX_DATA_SIZE_LIMIT ->\n\t\t\t\t\t?LOG_ERROR([{event, downloaded_txs_exceed_block_size_limit}]),\n\t\t\t\t\t{error, txs_exceed_block_size_limit};\n\t\t\t\tNewTotalSize ->\n\t\t\t\t\tget_txs(Height, Peers, Rest, [TX | TXs], NewTotalSize)\n\t\t\tend;\n\t\t_ ->\n\t\t\t{error, tx_not_found}\n\tend.\n\n%% @doc Retrieve a tx by ID from the memory pool, disk, or a remote peer.\nget_tx(Peer, TX) when not is_list(Peer) ->\n\tget_tx([Peer], TX);\nget_tx(_Peers, #tx{} = TX) ->\n\tTX;\nget_tx(Peers, TXID) ->\n\tcase ar_mempool:get_tx(TXID) of\n\t\tnot_found ->\n\t\t\tget_tx_from_disk_or_peers(Peers, TXID);\n\t\tTX ->\n\t\t\tTX\n\tend.\n\nget_tx_from_disk_or_peers(Peers, TXID) ->\n\tcase ar_storage:read_tx(TXID) of\n\t\tunavailable ->\n\t\t\tcase get_tx_from_remote_peers(Peers, TXID) of\n\t\t\t\tnot_found ->\n\t\t\t\t\tnot_found;\n\t\t\t\t{TX, _Peer, _Time, _Size} ->\n\t\t\t\t\tTX\n\t\t\tend;\n\t\tTX ->\n\t\t\tTX\n\tend.\n\nget_tx_from_remote_peers(Peers, TXID) ->\n\tget_tx_from_remote_peers(Peers, TXID, true).\n\nget_tx_from_remote_peers([], _TXID, _RatePeer) ->\n\tnot_found;\nget_tx_from_remote_peers(Peers, TXID, RatePeer) ->\n\tPeer = lists:nth(rand:uniform(min(5, length(Peers))), Peers),\n\tcase get_tx_from_remote_peer(Peer, TXID, RatePeer) of\n\t\t{#tx{} = TX, Peer, Time, Size} ->\n\t\t\t{TX, Peer, Time, Size};\n\t\t_ ->\n\t\t\tget_tx_from_remote_peers(Peers -- [Peer], TXID, RatePeer)\n\tend.\n\nget_tx_from_remote_peer(Peer, TXID, RatePeer) ->\n\tRelease = ar_peers:get_peer_release(Peer),\n\tEncoding = case Release >= 52 of true -> binary; _ -> json end,\n\tcase handle_tx_response(Peer, Encoding,\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => Peer,\n\t\t\tpath => get_tx_path(TXID, Encoding),\n\t\t\theaders => p2p_headers(),\n\t\t\tconnect_timeout => 1000,\n\t\t\ttimeout => 30 * 1000,\n\t\t\tlimit => ?MAX_BODY_SIZE\n\t\t})\n\t) of\n\t\t{ok, #tx{} = TX, Time, Size} ->\n\t\t\tcase ar_tx:verify_tx_id(TXID, TX) of\n\t\t\t\tfalse ->\n\t\t\t\t\t?LOG_WARNING([\n\t\t\t\t\t\t{event, peer_served_invalid_tx},\n\t\t\t\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t\t\t\t{tx, ar_util:encode(TXID)}\n\t\t\t\t\t]),\n\t\t\t\t\tar_peers:issue_warning(Peer, tx, invalid),\n\t\t\t\t\t{error, invalid_tx};\n\t\t\t\ttrue ->\n\t\t\t\t\tcase RatePeer of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\tar_peers:rate_fetched_data(Peer, tx, Time, Size);\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tok\n\t\t\t\t\tend,\n\t\t\t\t\t{TX, Peer, Time, Size}\n\t\t\tend;\n\t\tError ->\n\t\t\tError\n\tend.\n\nget_tx_path(TXID, json) ->\n\t\"/unconfirmed_tx/\" ++ binary_to_list(ar_util:encode(TXID));\nget_tx_path(TXID, binary) ->\n\t\"/unconfirmed_tx2/\" ++ binary_to_list(ar_util:encode(TXID)).\n\n%% @doc Retrieve only the data associated with a transaction.\n%% The function must only be used when it is known that the transaction\n%% has data.\nget_tx_data([], _Hash) ->\n\tunavailable;\nget_tx_data(Peers, Hash) when is_list(Peers) ->\n\tPeer = lists:nth(rand:uniform(min(5, length(Peers))), Peers),\n\tcase get_tx_data(Peer, Hash) of\n\t\tunavailable ->\n\t\t\tget_tx_data(Peers -- [Peer], Hash);\n\t\tData ->\n\t\t\tData\n\tend;\nget_tx_data(Peer, Hash) ->\n\tReply =\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => 
Peer,\n\t\t\tpath => \"/tx/\" ++ binary_to_list(ar_util:encode(Hash)) ++ \"/data\",\n\t\t\theaders => p2p_headers(),\n\t\t\tconnect_timeout => 500,\n\t\t\ttimeout => 120 * 1000,\n\t\t\tlimit => ?MAX_BODY_SIZE\n\t\t}),\n\tcase Reply of\n\t\t{ok, {{<<\"200\">>, _}, _, <<>>, _, _}} ->\n\t\t\tunavailable;\n\t\t{ok, {{<<\"200\">>, _}, _, EncodedData, _, _}} ->\n\t\t\tcase ar_util:safe_decode(EncodedData) of\n\t\t\t\t{ok, Data} ->\n\t\t\t\t\tData;\n\t\t\t\t{error, invalid} ->\n\t\t\t\t\tunavailable\n\t\t\tend;\n\t\t_ ->\n\t\t\tunavailable\n\tend.\n\n%% @doc Retrieve the current universal time as claimed by a foreign node.\nget_time(Peer, Timeout) ->\n\tcase ar_http:req(#{method => get, peer => Peer, path => \"/time\",\n\t\t\theaders => p2p_headers(), timeout => Timeout + 100}) of\n\t\t{ok, {{<<\"200\">>, _}, _, Body, Start, End}} ->\n\t\t\tcase catch binary_to_integer(Body) of\n\t\t\t\t{'EXIT', _} ->\n\t\t\t\t\t{error, invalid_time};\n\t\t\t\tTime ->\n\t\t\t\t\tRequestTime = ceil((End - Start) / 1000000),\n\t\t\t\t\t%% The timestamp returned by the HTTP daemon is floored second precision.\n\t\t\t\t\t%% Thus the upper bound is increased by 1.\n\t\t\t\t\t{ok, {Time - RequestTime, Time + RequestTime + 1}}\n\t\t\tend;\n\t\tOther ->\n\t\t\t{error, Other}\n\tend.\n\n%% @doc Retrieve information from a peer. Optionally, filter the resulting\n%% keyval list for required information.\nget_info(Peer, Type) ->\n\tcase get_info(Peer) of\n\t\tinfo_unavailable -> info_unavailable;\n\t\tInfo ->\n\t\t\tmaps:get(atom_to_binary(Type), Info, info_unavailable)\n\tend.\nget_info(Peer) ->\n\tcase\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => Peer,\n\t\t\tpath => \"/info\",\n\t\t\theaders => p2p_headers(),\n\t\t\tconnect_timeout => 1000,\n\t\t\ttimeout => 2 * 1000\n\t\t})\n\tof\n\t\t{ok, {{<<\"200\">>, _}, _, JSON, _, _}} ->\n\t\t\tcase ar_serialize:json_decode(JSON, [return_maps]) of\n\t\t\t\t{ok, JsonMap} ->\n\t\t\t\t\tJsonMap;\n\t\t\t\t{error, _} ->\n\t\t\t\t\tinfo_unavailable\n\t\t\tend;\n\t\t_ -> info_unavailable\n\tend.\n\n%% @doc Return a list of parsed peer IPs for a remote server.\nget_peers(Peer) ->\n\ttry\n\t\tbegin\n\t\t\t{ok, {{<<\"200\">>, _}, _, Body, _, _}} =\n\t\t\t\tar_http:req(#{\n\t\t\t\t\tmethod => get,\n\t\t\t\t\tpeer => Peer,\n\t\t\t\t\tpath => \"/peers\",\n\t\t\t\t\theaders => p2p_headers(),\n\t\t\t\t\tconnect_timeout => 500,\n\t\t\t\t\ttimeout => 2 * 1000\n\t\t\t\t}),\n\t\t\tPeerArray = ar_serialize:dejsonify(Body),\n\t\t\tPeersList = lists:map(fun ar_util:parse_peer/1, PeerArray),\n\t\t\tlists:flatten(PeersList)\n\t\tend\n\tcatch _:_ -> unavailable\n\tend.\n\n\n%% @doc Process the response of a /block call.\nhandle_block_response(_Peer, _Encoding, {ok, {{<<\"400\">>, _}, _, _, _, _}}) ->\n\tnot_found;\nhandle_block_response(_Peer, _Encoding, {ok, {{<<\"404\">>, _}, _, _, _, _}}) ->\n\tnot_found;\nhandle_block_response(Peer, Encoding, {ok, {{<<\"200\">>, _}, _, Body, Start, End}}) ->\n\tDecodeFun = case Encoding of json ->\n\t\t\tfun(Input) ->\n\t\t\t\tar_serialize:json_struct_to_block(ar_serialize:dejsonify(Input))\n\t\t\tend; binary -> fun ar_serialize:binary_to_block/1 end,\n\tcase catch DecodeFun(Body) of\n\t\t{'EXIT', Reason} ->\n\t\t\t?LOG_INFO(\n\t\t\t\t\"event: failed_to_parse_block_response, peer: ~s, reason: ~p\",\n\t\t\t\t[ar_util:format_peer(Peer), Reason]),\n\t\t\tar_peers:issue_warning(Peer, block, Reason),\n\t\t\tnot_found;\n\t\t{ok, B} ->\n\t\t\t{ok, B, End - Start, byte_size(term_to_binary(B))};\n\t\tB when is_record(B, block) ->\n\t\t\t{ok, B, End - Start, 
byte_size(term_to_binary(B))};\n\t\tError ->\n\t\t\t?LOG_INFO(\n\t\t\t\t\"event: failed_to_parse_block_response, peer: ~s, error: ~p\",\n\t\t\t\t[ar_util:format_peer(Peer), Error]),\n\t\t\tar_peers:issue_warning(Peer, block, Error),\n\t\t\tnot_found\n\tend;\nhandle_block_response(Peer, _Encoding, Response) ->\n\tar_peers:issue_warning(Peer, block, Response),\n\tnot_found.\n\n%% @doc Process the response of a GET /unconfirmed_tx call.\nhandle_tx_response(_Peer, _Encoding, {ok, {{<<\"404\">>, _}, _, _, _, _}}) ->\n\t{error, not_found};\nhandle_tx_response(_Peer, _Encoding, {ok, {{<<\"400\">>, _}, _, _, _, _}}) ->\n\t{error, bad_request};\nhandle_tx_response(Peer, Encoding, {ok, {{<<\"200\">>, _}, _, Body, Start, End}}) ->\n\tDecodeFun = case Encoding of json -> fun ar_serialize:json_struct_to_tx/1;\n\t\t\tbinary -> fun ar_serialize:binary_to_tx/1 end,\n\tcase catch DecodeFun(Body) of\n\t\t{ok, TX} ->\n\t\t\tSize = byte_size(term_to_binary(TX)),\n\t\t\tcase TX#tx.format == 1 of\n\t\t\t\ttrue ->\n\t\t\t\t\t{ok, TX, End - Start, Size};\n\t\t\t\t_ ->\n\t\t\t\t\tDataSize = byte_size(TX#tx.data),\n\t\t\t\t\t{ok, TX#tx{ data = <<>> }, End - Start, Size - DataSize}\n\t\t\tend;\n\t\tTX when is_record(TX, tx) ->\n\t\t\tSize = byte_size(term_to_binary(TX)),\n\t\t\tcase TX#tx.format == 1 of\n\t\t\t\ttrue ->\n\t\t\t\t\t{ok, TX, End - Start, Size};\n\t\t\t\t_ ->\n\t\t\t\t\tDataSize = byte_size(TX#tx.data),\n\t\t\t\t\t{ok, TX#tx{ data = <<>> }, End - Start, Size - DataSize}\n\t\t\tend;\n\t\t{'EXIT', Reason} ->\n\t\t\tar_peers:issue_warning(Peer, tx, Reason),\n\t\t\t{error, Reason};\n\t\tReply ->\n\t\t\tar_peers:issue_warning(Peer, tx, Reply),\n\t\t\tReply\n\tend;\nhandle_tx_response(Peer, _Encoding, Response) ->\n\tar_peers:issue_warning(Peer, tx, Response),\n\t{error, Response}.\n\nhandle_cm_partition_table_response({ok, {{<<\"200\">>, _}, _, Body, _, _}}) ->\n\tcase catch jiffy:decode(Body) of\n\t\t{'EXIT', Error} ->\n\t\t\t?LOG_WARNING([{event, failed_to_parse_cm_partition_table},\n\t\t\t\t{error, io_lib:format(\"~p\", [Error])}]),\n\t\t\t{error, invalid_json};\n\t\tL when is_list(L) ->\n\t\t\tlists:foldr(\n\t\t\t\tfun\t(_, {error, Reason}) ->\n\t\t\t\t\t\t{error, Reason};\n\t\t\t\t\t(Partition, {ok, Acc}) ->\n\t\t\t\t\t\tcase Partition of\n\t\t\t\t\t\t\t{[\n\t\t\t\t\t\t\t\t{<<\"bucket\">>, Bucket},\n\t\t\t\t\t\t\t\t{<<\"bucketsize\">>, BucketSize},\n\t\t\t\t\t\t\t\t{<<\"addr\">>, EncodedAddr}\n\t\t\t\t\t\t\t]} ->\n\t\t\t\t\t\t\t\tDecodedPartition = {\n\t\t\t\t\t\t\t\t\tBucket,\n\t\t\t\t\t\t\t\t\tBucketSize,\n\t\t\t\t\t\t\t\t\tar_util:decode(EncodedAddr),\n\t\t\t\t\t\t\t\t\t0\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{ok, [DecodedPartition | Acc]};\n\t\t\t\t\t\t\t{[\n\t\t\t\t\t\t\t\t{<<\"bucket\">>, Bucket},\n\t\t\t\t\t\t\t\t{<<\"bucketsize\">>, BucketSize},\n\t\t\t\t\t\t\t\t{<<\"addr\">>, EncodedAddr},\n\t\t\t\t\t\t\t\t{<<\"pdiff\">>, PackingDifficulty}\n\t\t\t\t\t\t\t]} when is_integer(PackingDifficulty) andalso (\n\t\t\t\t\t\t\t\t(PackingDifficulty >= 1\n\t\t\t\t\t\t\t\t\tandalso PackingDifficulty =< ?MAX_PACKING_DIFFICULTY)\n\t\t\t\t\t\t\t\t\t\torelse\n\t\t\t\t\t\t\t\t\t(PackingDifficulty == ?REPLICA_2_9_PACKING_DIFFICULTY)) ->\n\t\t\t\t\t\t\t\tDecodedPartition = {\n\t\t\t\t\t\t\t\t\tBucket,\n\t\t\t\t\t\t\t\t\tBucketSize,\n\t\t\t\t\t\t\t\t\tar_util:decode(EncodedAddr),\n\t\t\t\t\t\t\t\t\tPackingDifficulty\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{ok, [DecodedPartition | Acc]};\n\t\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t\t?LOG_WARNING([{event, failed_to_parse_cm_partition_table},\n\t\t\t\t\t\t\t\t\t{reason, 
invalid_partition},\n\t\t\t\t\t\t\t\t\t{partition, io_lib:format(\"~p\", [Partition])}]),\n\t\t\t\t\t\t\t\t{error, invalid_partition}\n\t\t\t\t\t\tend\n\t\t\t\tend,\n\t\t\t\t{ok, []},\n\t\t\t\tL\n\t\t\t);\n\t\tNotList ->\n\t\t\t?LOG_WARNING([{event, failed_to_parse_cm_partition_table}, {reason, invalid_format},\n\t\t\t\t{reply, io_lib:format(\"~p\", [NotList])}]),\n\t\t\t{error, invalid_format}\n\tend;\nhandle_cm_partition_table_response(Response) ->\n\t{error, Response}.\n\nhandle_cm_noop_response({ok, {{<<\"200\">>, _}, _, _Body, _, _}}) ->\n\t{ok, []};\nhandle_cm_noop_response(Response) ->\n\t{error, Response}.\n\np2p_headers() ->\n\t{ok, Config} = arweave_config:get_env(),\n\t[{<<\"x-p2p-port\">>, integer_to_binary(Config#config.port)},\n\t\t\t{<<\"x-release\">>, integer_to_binary(?RELEASE_NUMBER)}].\n\ncm_p2p_headers() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tadd_header(<<\"x-cm-api-secret\">>, Config#config.cm_api_secret, p2p_headers()).\n\npool_client_headers() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tHeaders = add_header(<<\"x-pool-api-key\">>, Config#config.pool_api_key, p2p_headers()),\n\tcase Config#config.pool_worker_name of\n\t\tnot_set ->\n\t\t\tHeaders;\n\t\tWorkerName ->\n\t\t\tadd_header(<<\"worker\">>, WorkerName, Headers)\n\tend.\n\nadd_header(Name, Value, Headers) when is_binary(Name) andalso is_binary(Value) ->\n\t[{Name, Value} | Headers];\nadd_header(Name, Value, Headers) ->\n\t?LOG_ERROR([{event, invalid_header}, {name, Name}, {value, Value}]),\n\tHeaders.\n\n%% @doc Utility to filter out some log spam. We generally don't want to log a failed HTTP\n%% request if the peer just times out or is no longer taking connections.\nlog_failed_request(Reason, Log) ->\n\tcase Reason of\n\t\t{error,{shutdown,econnrefused}} -> ok;\n\t\t{error,{shutdown,timeout}} -> ok;\n\t\t{error,timeout} -> ok;\n\t\t{error,{shutdown,ehostunreach}} -> ok;\n\t\t{error,{stream_error,closed}} -> ok;\n\t\t{error,{stream_error,closing}} -> ok;\n\t\t{error,{stream_error,{closed,normal}}} -> ok;\n\t\t_ -> ?LOG_DEBUG(Log)\n\tend.\n"
  },
  {
    "path": "apps/arweave/src/ar_http_iface_middleware.erl",
    "content": "-module(ar_http_iface_middleware).\n\n-behaviour(cowboy_middleware).\n\n-export([execute/2, read_body_chunk/4]).\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-include(\"ar.hrl\").\n-include(\"ar_consensus.hrl\").\n-include(\"ar_mining.hrl\").\n-include(\"ar_data_sync.hrl\").\n-include(\"ar_data_discovery.hrl\").\n\n-include(\"ar_pool.hrl\").\n\n\n-define(HANDLER_TIMEOUT, ?DEFAULT_HTTP_HANDLER_TIMEOUT_MS).\n\n-define(MAX_SERIALIZED_RECENT_HASH_LIST_DIFF, 2400). % 50 * 48.\n-define(MAX_SERIALIZED_MISSING_TX_INDICES, 125). % Every byte encodes 8 positions.\n\n-define(MAX_BLOCK_INDEX_RANGE_SIZE, 10000).\n\n%%%===================================================================\n%%% Cowboy handler callbacks.\n%%%===================================================================\n\n%% To allow prometheus_cowboy2_handler to be run when the\n%% cowboy_router middleware matches on the /metrics route, this\n%% middleware runs between the cowboy_router and cowboy_handler\n%% middlewares. It uses the `handler` env value set by cowboy_router\n%% to determine whether or not it should run, otherwise it lets\n%% the cowboy_handler middleware run prometheus_cowboy2_handler.\nexecute(Req, #{ handler := ar_http_iface_handler }) ->\n\tPid = self(),\n\tHandlerPid = spawn_link(fun() ->\n\t\t{Duration, Response = {Code, _, _, Resp}}\n\t\t\t= timer:tc(fun() ->\n\t\t\t\t\thandle(Req, Pid)\n\t\t\t\tend),\n\t\tlog(Code, Resp, #{duration => Duration}),\n\t\tPid ! {handled, Response}\n\tend),\n\t{ok, TimeoutRef} = ar_timer:send_after(\n\t\t?HANDLER_TIMEOUT,\n\t\tself(),\n\t\t{timeout, HandlerPid, Req},\n\t\t#{ skip_on_shutdown => false }\n\t),\n\tloop(TimeoutRef);\nexecute(Req, Env) ->\n\t{ok, Req, Env}.\n\n%%--------------------------------------------------------------------\n%% @doc Logs client requests. 
HTTP logging is done only if\n%% arweave_http_api handler is started.\n%% @end\n%%--------------------------------------------------------------------\nlog(Code, Resp, Init) ->\n\tcase ar_logger:is_started(arweave_http_api) of\n\t\ttrue ->\n\t\t\tBuffer = Init#{\n\t\t\t\tdomain => [arweave,http,api],\n\t\t\t\tcode => \"undefined\",\n\t\t\t\tmethod => \"undefined\",\n\t\t\t\tpath => \"undefined\",\n\t\t\t\tpeer_ip => \"undefined\",\n\t\t\t\tpeer_port => \"undefined\",\n\t\t\t\tbody_length => 0,\n\t\t\t\tversion => \"undefined\"\n\t\t\t},\n\t\t\tMeta = log_code(Code, Resp, Buffer),\n\t\t\tlogger:info(\"\", [], Meta);\n\t\t_ ->\n\t\t\tok\n\tend.\n\nlog_code(Code, Resp, Buffer) when is_integer(Code) ->\n\tNewBuffer = Buffer#{code => integer_to_list(Code)},\n\tlog_method(Code, Resp, NewBuffer);\nlog_code(Code, Resp, Buffer) ->\n\tlog_method(Code, Resp, Buffer).\n\nlog_method(Code, Resp = #{method := Method}, Buffer)\n\twhen is_binary(Method) ->\n\t\tNewBuffer = Buffer#{method => binary_to_list(Method)},\n\t\tlog_path(Code, Resp, NewBuffer);\nlog_method(Code, Resp, Buffer) ->\n\tlog_path(Code, Resp, Buffer).\n\nlog_path(Code, Resp = #{path := Path}, Buffer)\n\twhen is_binary(Path) ->\n\t\tNewBuffer = Buffer#{path => binary_to_list(Path)},\n\t\tlog_peer(Code, Resp, NewBuffer);\nlog_path(Code, Resp, Buffer) ->\n\tlog_peer(Code, Resp, Buffer).\n\nlog_peer(Code, Resp = #{peer := {IP={A,B,C,D},Port}}, Buffer)\n\twhen is_integer(A), is_integer(B), is_integer(C), is_integer(D),\n\t     is_integer(Port) ->\n\t\tPeerIP = inet:ntoa(IP),\n\t\tPeerPort = integer_to_list(Port),\n\t\tNewBuffer = Buffer#{\n\t\t\tpeer_ip => PeerIP,\n\t\t\tpeer_port => PeerPort\n\t\t},\n\t\tlog_body_length(Code, Resp, NewBuffer);\nlog_peer(Code, Resp, Buffer) ->\n\tlog_body_length(Code, Resp, Buffer).\n\nlog_body_length(Code, Resp = #{body_length := BodyLength}, Buffer)\n\twhen is_integer(BodyLength) ->\n\t\tNewBuffer = Buffer#{body_length => integer_to_list(BodyLength)},\n\t\tlog_version(Code, Resp, NewBuffer);\nlog_body_length(Code, Resp, Buffer) ->\n\tlog_version(Code, Resp, Buffer).\n\nlog_version(_Code, _Resp = #{version := Version}, Buffer)\n\twhen is_atom(Version) ->\n\t\tBuffer#{version => atom_to_list(Version)};\nlog_version(_Code, _Resp, Buffer) ->\n\tBuffer.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\n%% @doc In order to be able to have a handler-side timeout, we need to\n%% handle the request asynchronously. However, cowboy doesn't allow\n%% reading the request's body from a process other than its handler's.\n%% This following loop function allows us to work around this\n%% limitation. (see https://github.com/ninenines/cowboy/issues/1374)\n%% @end\nloop(TimeoutRef) ->\n\treceive\n\t\t{handled, {Status, Headers, Body, HandledReq}} ->\n\t\t\ttimer:cancel(TimeoutRef),\n\t\t\tCowboyStatus = handle_custom_codes(Status),\n\t\t\tRepliedReq = cowboy_req:reply(CowboyStatus, Headers, Body, HandledReq),\n\t\t\t{stop, RepliedReq};\n\t\t{read_complete_body, From, Req, SizeLimit} ->\n\t\t\tcase catch ar_http_req:body(Req, SizeLimit) of\n\t\t\t\tTerm ->\n\t\t\t\t\tFrom ! {read_complete_body, Term}\n\t\t\tend,\n\t\t\tloop(TimeoutRef);\n\t\t{read_body_chunk, From, Req, Size, Timeout} ->\n\t\t\tcase catch ar_http_req:read_body_chunk(Req, Size, Timeout) of\n\t\t\t\tTerm ->\n\t\t\t\t\tFrom ! 
{read_body_chunk, Term}\n\t\t\tend,\n\t\t\tloop(TimeoutRef);\n\t\t{timeout, HandlerPid, InitialReq} ->\n\t\t\tunlink(HandlerPid),\n\t\t\texit(HandlerPid, handler_timeout),\n\t\t\t?LOG_WARNING([{event, handler_timeout},\n\t\t\t\t\t{method, cowboy_req:method(InitialReq)},\n\t\t\t\t\t{path, cowboy_req:path(InitialReq)}]),\n\t\t\tRepliedReq = cowboy_req:reply(500, #{}, <<\"Handler timeout\">>, InitialReq),\n\t\t\t{stop, RepliedReq}\n\tend.\n\nhandle(Req, Pid) ->\n\tPeer = ar_http_util:arweave_peer(Req),\n\thandle(Peer, Req, Pid).\n\nhandle(Peer, Req, Pid) ->\n\tMethod = cowboy_req:method(Req),\n\tSplitPath = ar_http_iface_server:split_path(cowboy_req:path(Req)),\n\t{ok, Config} = arweave_config:get_env(),\n\tcase lists:member(http_logging, Config#config.enable) of\n\t\ttrue ->\n\t\t\t?LOG_INFO([\n\t\t\t\t{event, http_request},\n\t\t\t\t{method, Method},\n\t\t\t\t{path, SplitPath},\n\t\t\t\t{peer, ar_util:format_peer(Peer)}\n\t\t\t]);\n\t\t_ ->\n\t\t\tdo_nothing\n\tend,\n\tResponse2 = handle4(Method, SplitPath, Req, Pid),\n\tadd_cors_headers(Req, Response2).\n\nadd_cors_headers(Req, Response) ->\n\tcase Response of\n\t\t{Status, Hdrs, Body, HandledReq} ->\n\t\t\t{Status, maps:merge(?CORS_HEADERS, Hdrs), Body, HandledReq};\n\t\t{Status, Body, HandledReq} ->\n\t\t\t{Status, ?CORS_HEADERS, Body, HandledReq};\n\t\t{error, timeout} ->\n\t\t\t{503, ?CORS_HEADERS, jiffy:encode(#{ error => timeout }), Req}\n\tend.\n\n-ifdef(TESTNET).\nhandle4(<<\"POST\">>, [<<\"mine\">>], Req, _Pid) ->\n\tar_test_node:mine(),\n\t{200, #{}, <<>>, Req};\n\nhandle4(<<\"GET\">>, [<<\"tx\">>, <<\"ready_for_mining\">>], Req, _Pid) ->\n\t{200, #{},\n\t\t\tar_serialize:jsonify(\n\t\t\t\tlists:map(\n\t\t\t\t\tfun ar_util:encode/1,\n\t\t\t\t\tar_node:get_ready_for_mining_txs()\n\t\t\t\t)\n\t\t\t),\n\tReq};\n\nhandle4(Method, SplitPath, Req, Pid) ->\n\thandle(Method, SplitPath, Req, Pid).\n-else.\nhandle4(Method, SplitPath, Req, Pid) ->\n\thandle(Method, SplitPath, Req, Pid).\n-endif.\n\n%% Return network information from a given node.\n%% GET request to endpoint /info.\nhandle(<<\"GET\">>, [], Req, _Pid) ->\n\t{200, #{}, ar_serialize:jsonify(ar_info:get_info()), Req};\n\nhandle(<<\"GET\">>, [<<\"info\">>], Req, _Pid) ->\n\t{200, #{}, ar_serialize:jsonify(ar_info:get_info()), Req};\n\nhandle(<<\"GET\">>, [<<\"recent\">>], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\t{200, #{}, ar_serialize:jsonify(ar_info:get_recent()), Req}\n\tend;\n\nhandle(<<\"GET\">>, [<<\"is_tx_blacklisted\">>, EncodedTXID], Req, _Pid) ->\n\tcase ar_util:safe_decode(EncodedTXID) of\n\t\t{error, invalid} ->\n\t\t\t{400, #{}, jiffy:encode(#{ error => invalid_tx_id }), Req};\n\t\t{ok, TXID} ->\n\t\t\t{200, #{}, jiffy:encode(ar_tx_blacklist:is_tx_blacklisted(TXID)), Req}\n\tend;\n\n%% Some load balancers use 'HEAD's rather than 'GET's to tell if a node\n%% is alive. 
Appease them.\nhandle(<<\"HEAD\">>, [], Req, _Pid) ->\n\t{200, #{}, <<>>, Req};\nhandle(<<\"HEAD\">>, [<<\"info\">>], Req, _Pid) ->\n\t{200, #{}, <<>>, Req};\n\n%% Return permissive CORS headers for all endpoints.\nhandle(<<\"OPTIONS\">>, [<<\"block\">>], Req, _Pid) ->\n\t{200, #{<<\"access-control-allow-methods\">> => <<\"GET, POST\">>,\n\t\t\t<<\"access-control-allow-headers\">> => <<\"Content-Type\">>}, <<\"OK\">>, Req};\nhandle(<<\"OPTIONS\">>, [<<\"tx\">>], Req, _Pid) ->\n\t{200, #{<<\"access-control-allow-methods\">> => <<\"GET, POST\">>,\n\t\t\t<<\"access-control-allow-headers\">> => <<\"Content-Type\">>}, <<\"OK\">>, Req};\nhandle(<<\"OPTIONS\">>, [<<\"peer\">> | _], Req, _Pid) ->\n\t{200, #{<<\"access-control-allow-methods\">> => <<\"GET, POST\">>,\n\t\t\t<<\"access-control-allow-headers\">> => <<\"Content-Type\">>}, <<\"OK\">>, Req};\nhandle(<<\"OPTIONS\">>, _, Req, _Pid) ->\n\t{200, #{<<\"access-control-allow-methods\">> => <<\"GET\">>}, <<\"OK\">>, Req};\n\n%% Return the current universal time in seconds.\nhandle(<<\"GET\">>, [<<\"time\">>], Req, _Pid) ->\n\t{200, #{}, integer_to_binary(os:system_time(second)), Req};\n\n%% Return all mempool transactions.\n%% GET request to endpoint /tx/pending.\nhandle(<<\"GET\">>, [<<\"tx\">>, <<\"pending\">>], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\t{200, #{},\n\t\t\t\t\tar_serialize:jsonify(\n\t\t\t\t\t\t%% Should encode\n\t\t\t\t\t\tlists:map(\n\t\t\t\t\t\t\tfun ar_util:encode/1,\n\t\t\t\t\t\t\tar_mempool:get_all_txids()\n\t\t\t\t\t\t)\n\t\t\t\t\t),\n\t\t\tReq}\n\tend;\n\n%% Return outgoing transaction priority queue.\n%% GET request to endpoint /queue.\n%% @deprecated\nhandle(<<\"GET\">>, [<<\"queue\">>], Req, _Pid) ->\n\t{200, #{}, <<\"[]\">>, Req};\n\n%% Return additional information about the transaction with the given identifier (hash).\n%% GET request to endpoint /tx/{hash}/status.\nhandle(<<\"GET\">>, [<<\"tx\">>, Hash, <<\"status\">>], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\thandle_get_tx_status(Hash, Req)\n\tend;\n\n%% Return a JSON-encoded transaction.\n%% GET request to endpoint /tx/{hash}.\nhandle(<<\"GET\">>, [<<\"tx\">>, Hash], Req, _Pid) ->\n\thandle_get_tx(Hash, Req, json);\n\n%% Return a binary-encoded transaction.\n%% GET request to endpoint /tx2/{hash}.\nhandle(<<\"GET\">>, [<<\"tx2\">>, Hash], Req, _Pid) ->\n\thandle_get_tx(Hash, Req, binary);\n\n%% Return a possibly unconfirmed JSON-encoded transaction.\n%% GET request to endpoint /unconfirmed_tx/{hash}.\nhandle(<<\"GET\">>, [<<\"unconfirmed_tx\">>, Hash], Req, _Pid) ->\n\thandle_get_unconfirmed_tx(Hash, Req, json);\n\n%% Return a possibly unconfirmed binary-encoded transaction.\n%% GET request to endpoint /unconfirmed_tx2/{hash}.\nhandle(<<\"GET\">>, [<<\"unconfirmed_tx2\">>, Hash], Req, _Pid) ->\n\thandle_get_unconfirmed_tx(Hash, Req, binary);\n\n%% Return the data field of the transaction specified via the transaction ID (hash)\n%% served as HTML.\n%% GET request to endpoint /tx/{hash}/data.html\nhandle(<<\"GET\">>, [<<\"tx\">>, Hash, << \"data.\", _/binary >>], Req, _Pid) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tcase lists:member(serve_html_data, Config#config.disable) of\n\t\ttrue ->\n\t\t\t{421, #{}, <<\"Serving HTML data is disabled on this node.\">>, Req};\n\t\t_ ->\n\t\t\tcase ar_util:safe_decode(Hash) of\n\t\t\t\t{error, invalid} ->\n\t\t\t\t\t{400, #{}, <<\"Invalid hash.\">>, Req};\n\t\t\t\t{ok, ID} ->\n\t\t\t\t\tcase 
ar_storage:read_tx(ID) of\n\t\t\t\t\t\tunavailable ->\n\t\t\t\t\t\t\t{404, #{ <<\"content-type\">> => <<\"text/html; charset=utf-8\">> }, sendfile(\"genesis_data/not_found.html\"), Req};\n\t\t\t\t\t\t#tx{} = TX ->\n\t\t\t\t\t\t\tserve_tx_html_data(Req, TX)\n\t\t\t\t\tend\n\t\t\tend\n\tend;\n\nhandle(<<\"GET\">>, [<<\"sync_buckets\">>], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\tok = ar_semaphore:acquire(get_sync_record, ?DEFAULT_CALL_TIMEOUT),\n\t\t\tcase ar_global_sync_record:get_serialized_sync_buckets() of\n\t\t\t\t{ok, Binary} ->\n\t\t\t\t\t{200, #{}, Binary, Req};\n\t\t\t\t{error, not_initialized} ->\n\t\t\t\t\t{500, #{}, jiffy:encode(#{ error => not_initialized }), Req};\n\t\t\t\t{error, timeout} ->\n\t\t\t\t\t{503, #{}, jiffy:encode(#{ error => timeout }), Req}\n\t\t\tend\n\tend;\n\nhandle(<<\"GET\">>, [<<\"footprint_buckets\">>], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\tok = ar_semaphore:acquire(get_sync_record, ?DEFAULT_CALL_TIMEOUT),\n\t\t\tcase ar_global_sync_record:get_serialized_footprint_buckets() of\n\t\t\t\t{ok, Binary} ->\n\t\t\t\t\t{200, #{}, Binary, Req};\n\t\t\t\t{error, not_initialized} ->\n\t\t\t\t\t{500, #{}, jiffy:encode(#{ error => not_initialized }), Req};\n\t\t\t\t{error, timeout} ->\n\t\t\t\t\t{503, #{}, jiffy:encode(#{ error => timeout }), Req}\n\t\t\tend\n\tend;\n\nhandle(<<\"GET\">>, [<<\"data_sync_record\">>], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\tFormat =\n\t\t\t\tcase cowboy_req:header(<<\"content-type\">>, Req) of\n\t\t\t\t\t<<\"application/json\">> ->\n\t\t\t\t\t\tjson;\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tetf\n\t\t\tend,\n\t\t\tok = ar_semaphore:acquire(get_sync_record, ?DEFAULT_CALL_TIMEOUT),\n\t\t\tOptions = #{ format => Format, random_subset => true },\n\t\t\tcase ar_global_sync_record:get_serialized_sync_record(Options) of\n\t\t\t\t{ok, Binary} ->\n\t\t\t\t\t{200, #{}, Binary, Req};\n\t\t\t\t{error, timeout} ->\n\t\t\t\t\t{503, #{}, jiffy:encode(#{ error => timeout }), Req}\n\t\t\tend\n\tend;\n\nhandle(<<\"GET\">>, [<<\"data_sync_record\">>, EncodedStart, EncodedLimit], Req, _Pid) ->\n\tcase catch binary_to_integer(EncodedStart) of\n\t\t{'EXIT', _} ->\n\t\t\t{400, #{}, jiffy:encode(#{ error => invalid_start_encoding }), Req};\n\t\tStart ->\n\t\t\tcase catch binary_to_integer(EncodedLimit) of\n\t\t\t\t{'EXIT', _} ->\n\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => invalid_limit_encoding }), Req};\n\t\t\t\tLimit ->\n\t\t\t\t\tcase Limit > ?MAX_SHARED_SYNCED_INTERVALS_COUNT of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => limit_too_big }), Req};\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tok = ar_semaphore:acquire(get_sync_record, ?DEFAULT_CALL_TIMEOUT),\n\t\t\t\t\t\t\thandle_get_data_sync_record(Start, Limit, Req)\n\t\t\t\t\tend\n\t\t\tend\n\tend;\n\nhandle(<<\"GET\">>, [<<\"data_sync_record\">>, EncodedStart, EncodedEnd, EncodedLimit], Req, _Pid) ->\n\tcase catch binary_to_integer(EncodedStart) of\n\t\t{'EXIT', _} ->\n\t\t\t{400, #{}, jiffy:encode(#{ error => invalid_start_encoding }), Req};\n\t\tStart ->\n\t\t\tcase catch binary_to_integer(EncodedEnd) of\n\t\t\t\t{'EXIT', _} ->\n\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => invalid_end_encoding }), Req};\n\t\t\t\tEnd ->\n\t\t\t\t\tcase catch binary_to_integer(EncodedLimit) of\n\t\t\t\t\t\t{'EXIT', _} ->\n\t\t\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => invalid_limit_encoding }), Req};\n\t\t\t\t\t\tLimit 
->\n\t\t\t\t\t\t\tcase Limit > ?MAX_SHARED_SYNCED_INTERVALS_COUNT of\n\t\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => limit_too_big }), Req};\n\t\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t\tok = ar_semaphore:acquire(get_sync_record, ?DEFAULT_CALL_TIMEOUT),\n\t\t\t\t\t\t\t\t\thandle_get_data_sync_record(Start, End, Limit, Req)\n\t\t\t\t\t\t\tend\n\t\t\t\t\tend\n\t\t\tend\n\tend;\n\n%% Return the information about the presence of the data from the given footprint\n%% in the given partition. The returned intervals contain the numbers of the chunks\n%% starting from 0 belonging to the given footprint (and present on this node).\n%% The footprint is constructed like a replica 2.9 entropy footprint where chunks are\n%% spread out across the partition. Therefore, the interval [0, 2] does not denote\n%% two adjacent chunks but rather two chunks separated by\n%% ar_block:get_replica_2_9_entropy_count() chunks.\n%% Note that we do not only record footprints for replica_2_9 storage modules, but\n%% for any packing, because we want to make it convenient for any client to fetch\n%% the data from us.\n%%\n%% Example response:\n%% {\n%%   \"packing\": \"replica_2_9_A5KJQ7LjCyfGpNj-L-pasroRRVA7z_vWDNcK4aSgZs0\",\n%%   \"intervals\": [\n%%     [\"0\", \"1\"],\n%%     [\"2\", \"10\"],\n%%     [\"12\", \"1024\"]\n%%   ]\n%% }\n%%\n%% Example response:\n%% {\n%%   \"packing\": \"unpacked\",\n%%   \"intervals\": [\"0\", \"1024\"]\n%% }\n%%\n%% Return 404 when no storage module is configured for the given partition.\n%%\n%% Return 400 when the partition or footprint number is not a non-negative integer or the\n%% footprint number is too large.\n%%\n%% GET /footprints/{partition_number}/{footprint_number}\nhandle(<<\"GET\">>, [<<\"footprints\">>, EncodedPartition, EncodedFootprintNumber], Req, _Pid) ->\n\tcase catch binary_to_integer(EncodedPartition) of\n\t\t{'EXIT', _} ->\n\t\t\t{400, #{}, jiffy:encode(#{ error => invalid_partition_encoding }), Req};\n\t\tPartition when Partition >= 0 ->\n\t\t\tcase catch binary_to_integer(EncodedFootprintNumber) of\n\t\t\t\t{'EXIT', _} ->\n\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => invalid_footprint_number_encoding }), Req};\n\t\t\t\tFootprintNumber when FootprintNumber >= 0 ->\n\t\t\t\t\tok = ar_semaphore:acquire(get_sync_record, ?DEFAULT_CALL_TIMEOUT),\n\t\t\t\t\thandle_get_footprints(Partition, FootprintNumber, Req);\n\t\t\t\t_ ->\n\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => negative_footprint_number }), Req}\n\t\t\tend;\n\t\t_ ->\n\t\t\t{400, #{}, jiffy:encode(#{ error => negative_partition_number }), Req}\n\tend;\n\nhandle(<<\"GET\">>, [<<\"chunk\">>, OffsetBinary], Req, _Pid) ->\n\thandle_get_chunk(OffsetBinary, Req, json);\n\nhandle(<<\"GET\">>, [<<\"chunk_proof\">>, OffsetBinary], Req, _Pid) ->\n\thandle_get_chunk_proof(OffsetBinary, Req, json);\n\nhandle(<<\"GET\">>, [<<\"chunk2\">>, OffsetBinary], Req, _Pid) ->\n\thandle_get_chunk(OffsetBinary, Req, binary);\n\nhandle(<<\"GET\">>, [<<\"chunk_proof2\">>, OffsetBinary], Req, _Pid) ->\n\thandle_get_chunk_proof(OffsetBinary, Req, binary);\n\nhandle(<<\"GET\">>, [<<\"tx\">>, EncodedID, <<\"offset\">>], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\tcase ar_util:safe_decode(EncodedID) of\n\t\t\t\t{error, invalid} ->\n\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => invalid_address }), Req};\n\t\t\t\t{ok, ID} ->\n\t\t\t\t\tcase ar_data_sync:get_tx_offset(ID) of\n\t\t\t\t\t\t{ok, {Offset, Size}} ->\n\t\t\t\t\t\t\tResponseBody = 
jiffy:encode(#{\n\t\t\t\t\t\t\t\toffset => integer_to_binary(Offset),\n\t\t\t\t\t\t\t\tsize => integer_to_binary(Size)\n\t\t\t\t\t\t\t}),\n\t\t\t\t\t\t\t{200, #{}, ResponseBody, Req};\n\t\t\t\t\t\t{error, not_found} ->\n\t\t\t\t\t\t\t{404, #{}, <<>>, Req};\n\t\t\t\t\t\t{error, failed_to_read_offset} ->\n\t\t\t\t\t\t\t{500, #{}, <<>>, Req};\n\t\t\t\t\t\t{error, timeout} ->\n\t\t\t\t\t\t\t{503, #{}, jiffy:encode(#{ error => timeout }), Req}\n\t\t\t\t\tend\n\t\t\tend\n\tend;\n\n%% Return data root metadata for the block containing the offset, >= BlockStartOffset, < BlockEndOffset.\n%% Return only entries corresponding to non-empty transactions.\n%% Return the complete list of entries in the order they appear in the data root index,\n%% which corresponds to sorted #tx records in the block.\n%% GET /data_roots/{offset}\nhandle(<<\"GET\">>, [<<\"data_roots\">>, OffsetBin], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\tok = ar_semaphore:acquire(get_data_roots, ?DEFAULT_CALL_TIMEOUT),\n\t\t\tcase catch binary_to_integer(OffsetBin) of\n\t\t\t\t{'EXIT', _} ->\n\t\t\t\t\t{400, #{}, <<>>, Req};\n\t\t\t\tOffset ->\n\t\t\t\t\tcase ar_data_sync:get_data_roots_for_offset(Offset) of\n\t\t\t\t\t\t{ok, {TXRoot, BlockSize, Entries}} ->\n\t\t\t\t\t\t\tPayload = ar_serialize:data_roots_to_binary({TXRoot, BlockSize, Entries}),\n\t\t\t\t\t\t\t{200, #{}, Payload, Req};\n\t\t\t\t\t\t{error, not_found} ->\n\t\t\t\t\t\t\t{404, #{}, jiffy:encode(#{ error => not_found }), Req};\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t{500, #{}, <<>>, Req}\n\t\t\t\t\tend\n\t\t\tend\n\tend;\n\n%% Accept data roots for a given block offset (>= BlockStartOffset, < BlockEndOffset).\n%% Expect only entries corresponding to non-empty transactions.\n%% Expect the complete list of entries in the order they appear in the data root index,\n%% which corresponds to sorted #tx records in the block.\n%% POST /data_roots/{offset}\nhandle(<<\"POST\">>, [<<\"data_roots\">>, OffsetBin], Req, Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\tok = ar_semaphore:acquire(get_data_roots, ?DEFAULT_CALL_TIMEOUT),\n\t\t\tDiskPoolThreshold = ar_data_sync:get_disk_pool_threshold(),\n\t\t\tReadOffset =\n\t\t\t\tcase catch binary_to_integer(OffsetBin) of\n\t\t\t\t\t{'EXIT', _} ->\n\t\t\t\t\t\t{reply, {400, #{}, <<>>, Req}};\n\t\t\t\t\tOffset when Offset >= DiskPoolThreshold ->\n\t\t\t\t\t\t{reply, {400, #{}, jiffy:encode(#{ error => offset_above_disk_pool_threshold }), Req}};\n\t\t\t\t\tOffset when Offset < 0 ->\n\t\t\t\t\t\t{reply, {400, #{}, jiffy:encode(#{ error => negative_offset }), Req}};\n\t\t\t\t\tOffset ->\n\t\t\t\t\t\t{BlockStart, BlockEnd, ExpectedTXRoot} = ar_block_index:get_block_bounds(Offset),\n\t\t\t\t\t\tcase ar_data_sync:are_data_roots_synced(BlockStart, BlockEnd, ExpectedTXRoot) of\n\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\t{reply, {200, #{}, <<>>, Req}};\n\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t{Offset, BlockStart, BlockEnd}\n\t\t\t\t\t\tend\n\t\t\t\tend,\n\t\t\tcase ReadOffset of\n\t\t\t\t{reply, Reply} ->\n\t\t\t\t\tReply;\n\t\t\t\t{Offset2, BlockStart2, BlockEnd2} ->\n\t\t\t\t\tcase read_complete_body(Req, Pid) of\n\t\t\t\t\t\t{ok, Body, Req2} ->\n\t\t\t\t\t\t\tcase ar_serialize:binary_to_data_roots(Body) of\n\t\t\t\t\t\t\t\t{ok, {TXRoot, BlockSize, Entries}} ->\n\t\t\t\t\t\t\t\t\tcase ar_data_root_sync:validate_data_roots(TXRoot, BlockSize, Entries, Offset2) of\n\t\t\t\t\t\t\t\t\t\t{ok, _} ->\n\t\t\t\t\t\t\t\t\t\t\tcase catch 
ar_data_root_sync:store_data_roots_sync(\n\t\t\t\t\t\t\t\t\t\t\t\t\tBlockStart2, BlockEnd2, TXRoot, Entries) of\n\t\t\t\t\t\t\t\t\t\t\t\tok ->\n\t\t\t\t\t\t\t\t\t\t\t\t\t{200, #{}, <<>>, Req2};\n\t\t\t\t\t\t\t\t\t\t\t\t{'EXIT', {timeout, _}} ->\n\t\t\t\t\t\t\t\t\t\t\t\t\t{503, #{}, jiffy:encode(#{ error => timeout }), Req2};\n\t\t\t\t\t\t\t\t\t\t\t\t{'EXIT', _} ->\n\t\t\t\t\t\t\t\t\t\t\t\t\t{503, #{}, jiffy:encode(#{ error => timeout }), Req2};\n\t\t\t\t\t\t\t\t\t\t\t\t{error, Reason} ->\n\t\t\t\t\t\t\t\t\t\t\t\t\t{503, #{}, jiffy:encode(#{ error => Reason }), Req2}\n\t\t\t\t\t\t\t\t\t\t\tend;\n\t\t\t\t\t\t\t\t\t\t{error, Reason} ->\n\t\t\t\t\t\t\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => Reason }), Req2}\n\t\t\t\t\t\t\t\t\tend;\n\t\t\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => invalid_format }), Req2}\n\t\t\t\t\t\t\tend;\n\t\t\t\t\t\t{error, body_size_too_large} ->\n\t\t\t\t\t\t\t{400, #{}, <<>>, Req};\n\t\t\t\t\t\t{error, timeout} ->\n\t\t\t\t\t\t\t{503, #{}, jiffy:encode(#{ error => timeout }), Req}\n\t\t\t\t\tend\n\t\t\tend\n\tend;\n\nhandle(<<\"POST\">>, [<<\"chunk\">>], Req, Pid) ->\n\tJoined =\n\t\tcase ar_node:is_joined() of\n\t\t\tfalse ->\n\t\t\t\tnot_joined(Req);\n\t\t\ttrue ->\n\t\t\t\tok\n\t\tend,\n\tDataRootKnown =\n\t\tcase Joined of\n\t\t\tok ->\n\t\t\t\tcase get_data_root_from_headers(Req) of\n\t\t\t\t\tnot_set ->\n\t\t\t\t\t\tok;\n\t\t\t\t\t{ok, {DataRoot, DataSize}} ->\n\t\t\t\t\t\tcase ar_data_sync:has_data_root(DataRoot, DataSize) of\n\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\tok;\n\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => data_root_not_found }),\n\t\t\t\t\t\t\t\t\t\tReq}\n\t\t\t\t\t\tend\n\t\t\t\tend;\n\t\t\tReply ->\n\t\t\t\tReply\n\t\tend,\n\tParseChunk =\n\t\tcase DataRootKnown of\n\t\t\tok ->\n\t\t\t\tparse_chunk(Req, Pid);\n\t\t\tReply2 ->\n\t\t\t\tReply2\n\t\tend,\n\tcase ParseChunk of\n\t\t{ok, {Proof, Req2}} ->\n\t\t\tcase ar_semaphore:acquire(post_chunk, 5000) of\n\t\t\t\tok ->\n\t\t\t\t\thandle_post_chunk(Proof, Req2);\n\t\t\t\t{error, timeout} ->\n\t\t\t\t\t{503, #{}, jiffy:encode(#{ error => timeout }), Req2}\n\t\t\tend;\n\t\tReply3 ->\n\t\t\tReply3\n\tend;\n\n%% Accept an announcement of a block. 
Reply 412 (no previous block),\n%% 200 (optionally specifying missing transactions and chunk in the response)\n%% or 208 (already processing the block).\nhandle(<<\"POST\">>, [<<\"block_announcement\">>], Req, Pid) ->\n\tcase read_complete_body(Req, Pid) of\n\t\t{ok, Body, Req2} ->\n\t\t\tcase catch ar_serialize:binary_to_block_announcement(Body) of\n\t\t\t\t{ok, Announcement} ->\n\t\t\t\t\thandle_block_announcement(Announcement, Req2);\n\t\t\t\t{'EXIT', _Reason} ->\n\t\t\t\t\t{400, #{}, <<>>, Req2};\n\t\t\t\t{error, _Reason} ->\n\t\t\t\t\t{400, #{}, <<>>, Req2}\n\t\t\tend;\n\t\t{error, body_size_too_large} ->\n\t\t\t{400, #{}, <<>>, Req};\n\t\t{error, timeout} ->\n\t\t\t{503, #{}, jiffy:encode(#{ error => timeout }), Req}\n\tend;\n\n%% Accept a JSON-encoded block with Base64Url encoded fields.\nhandle(<<\"POST\">>, [<<\"block\">>], Req, Pid) ->\n\tpost_block(request, {Req, Pid, json}, erlang:timestamp());\n\n%% Accept a binary-encoded block.\nhandle(<<\"POST\">>, [<<\"block2\">>], Req, Pid) ->\n\terlang:put(post_block2, true),\n\tpost_block(request, {Req, Pid, binary}, erlang:timestamp());\n\n%% Accept a (partial) solution from a pool or a CM node and validate it.\n%%\n%% If the node is a CM exit node and a pool client, send the given solution to\n%% the pool and return an empty JSON object.\n%%\n%% If the node is a pool server, return a JSON object:\n%% {\n%%   \"indep_hash\": \"\",\n%%   \"status\": \"\"\n%% },\n%% where the status is one of \"accepted\", \"accepted_block\", \"rejected_bad_poa\",\n%% \"rejected_wrong_hash\", \"rejected_bad_vdf\", \"rejected_mining_address_banned\",\n%% \"stale\", \"rejected_vdf_not_found\", \"rejected_missing_key_file\",\n%% \"rejected_invalid_packing_difficulty\".\n%% If the solution is partial, \"indep_hash\" string is empty.\nhandle(<<\"POST\">>, [<<\"partial_solution\">>], Req, Pid) ->\n\tcase ar_node:is_joined() of\n\t\ttrue ->\n\t\t\thandle_post_partial_solution(Req, Pid);\n\t\tfalse ->\n\t\t\tnot_joined(Req)\n\tend;\n\n%% Return the information about up to ?GET_JOBS_COUNT latest VDF steps and a difficulty.\n%%\n%% If the given VDF output is present in the latest 10 VDF steps, return only the steps\n%% strictly above the given output. If the given output is our latest output,\n%% wait for up to ?GET_JOBS_TIMEOUT_S and return an empty list if no new steps are\n%% computed by the time. 
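\n%%\n%% Illustrative request sketch (not taken from this module; the localhost address,\n%% the default port 1984 and the placeholder previous output are assumptions):\n%%   curl http://127.0.0.1:1984/jobs/<base64url_prev_output>\n%%\n%% 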
Also, only return the steps strictly above the latest block.\n%%\n%% If we are a pool server, return the current network difficulty along with the VDF\n%% information.\n%%\n%% If we are a CM exit node and a pool client, return the partial difficulty provided\n%% by the pool.\n%%\n%% Return a JSON object:\n%% {\n%%   \"jobs\":\n%%     [\n%%       {\"nonce_limiter_output\": \"...\", \"step_number\": \"...\", \"partition_upper_bound\": \"...\"},\n%%       ...\n%%     ],\n%%   \"partial_diff\": \"...\",\n%%   \"next_seed\": \"...\",\n%%   \"interval_number\": \"...\",\n%%   \"next_vdf_difficulty\": \"...\"\n%% }\nhandle(<<\"GET\">>, [<<\"jobs\">>, EncodedPrevOutput], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\tcase ar_util:safe_decode(EncodedPrevOutput) of\n\t\t\t\t{ok, PrevOutput} ->\n\t\t\t\t\thandle_get_jobs(PrevOutput, Req);\n\t\t\t\t{error, invalid} ->\n\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => invalid_prev_output }), Req}\n\t\t\tend\n\tend;\n\nhandle(<<\"GET\">>, [<<\"jobs\">>], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\thandle_get_jobs(<<>>, Req)\n\tend;\n\nhandle(<<\"POST\">>, [<<\"pool_cm_jobs\">>], Req, Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\thandle_post_pool_cm_jobs(Req, Pid)\n\tend;\n\n%% Generate a wallet and receive a secret key identifying it.\n%% Requires internal_api_secret startup option to be set.\n%% WARNING: only use it if you really really know what you are doing.\nhandle(<<\"POST\">>, [<<\"wallet\">>], Req, _Pid) ->\n\tcase check_internal_api_secret(Req) of\n\t\tpass ->\n\t\t\tWalletAccessCode = ar_util:encode(crypto:strong_rand_bytes(32)),\n\t\t\tcase ar_wallet:new_keyfile(?DEFAULT_KEY_TYPE, WalletAccessCode) of\n\t\t\t\t{error, Reason} ->\n\t\t\t\t\t?LOG_ERROR([{event, failed_to_create_new_wallet},\n\t\t\t\t\t\t\t{reason, io_lib:format(\"~p\", [Reason])}]),\n\t\t\t\t\t{500, #{}, <<>>, Req};\n\t\t\t\t{_, Pub} ->\n\t\t\t\t\tResponseProps = [\n\t\t\t\t\t\t{<<\"wallet_address\">>, ar_util:encode(ar_wallet:to_address(Pub))},\n\t\t\t\t\t\t{<<\"wallet_access_code\">>, WalletAccessCode}\n\t\t\t\t\t],\n\t\t\t\t\t{200, #{}, ar_serialize:jsonify({ResponseProps}), Req}\n\t\t\tend;\n\t\t{reject, {Status, Headers, Body}} ->\n\t\t\t{Status, Headers, Body, Req}\n\tend;\n\n%% Accept a new JSON-encoded transaction.\n%% POST request to endpoint /tx.\nhandle(<<\"POST\">>, [<<\"tx\">>], Req, Pid) ->\n\thandle_post_tx({Req, Pid, json});\n\n%% Accept a new binary-encoded transaction.\n%% POST request to endpoint /tx2.\nhandle(<<\"POST\">>, [<<\"tx2\">>], Req, Pid) ->\n\thandle_post_tx({Req, Pid, binary});\n\n%% Sign and send a tx to the network.\n%% Fetches the wallet by the provided key generated via POST /wallet.\n%% Requires internal_api_secret startup option to be set.\n%% WARNING: only use it if you really really know what you are doing.\nhandle(<<\"POST\">>, [<<\"unsigned_tx\">>], Req, Pid) ->\n\tcase {ar_node:is_joined(), check_internal_api_secret(Req)} of\n\t\t{false, _} ->\n\t\t\tnot_joined(Req);\n\t\t{true, pass} ->\n\t\t\tcase read_complete_body(Req, Pid) of\n\t\t\t\t{ok, Body, Req2} ->\n\t\t\t\t\t{UnsignedTXProps} = ar_serialize:dejsonify(Body),\n\t\t\t\t\tWalletAccessCode =\n\t\t\t\t\t\tproplists:get_value(<<\"wallet_access_code\">>, UnsignedTXProps),\n\t\t\t\t\t%% ar_serialize:json_struct_to_tx/1 requires all properties to be there,\n\t\t\t\t\t%% so we're adding id, owner and signature with bogus 
values. These\n\t\t\t\t\t%% will later be overwritten in ar_tx:sign/2\n\t\t\t\t\tFullTxProps = lists:append(\n\t\t\t\t\t\tproplists:delete(<<\"wallet_access_code\">>, UnsignedTXProps),\n\t\t\t\t\t\t[\n\t\t\t\t\t\t\t{<<\"id\">>, ar_util:encode(crypto:strong_rand_bytes(32))},\n\t\t\t\t\t\t\t{<<\"owner\">>, ar_util:encode(<<\"owner placeholder\">>)},\n\t\t\t\t\t\t\t{<<\"signature\">>, ar_util:encode(<<\"signature placeholder\">>)}\n\t\t\t\t\t\t]\n\t\t\t\t\t),\n\t\t\t\t\tKeyPair = ar_wallet:load_keyfile(\n\t\t\t\t\t\t\tar_wallet:wallet_filepath(WalletAccessCode)),\n\t\t\t\t\tUnsignedTX = ar_serialize:json_struct_to_tx({FullTxProps}),\n\t\t\t\t\tData = UnsignedTX#tx.data,\n\t\t\t\t\tDataSize = byte_size(Data),\n\t\t\t\t\tDataRoot = case DataSize > 0 of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\tTreeTX = ar_tx:generate_chunk_tree(#tx{ data = Data }),\n\t\t\t\t\t\t\tTreeTX#tx.data_root;\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t<<>>\n\t\t\t\t\tend,\n\t\t\t\t\tFormat2TX = UnsignedTX#tx{\n\t\t\t\t\t\tformat = 2,\n\t\t\t\t\t\tdata_size = DataSize,\n\t\t\t\t\t\tdata_root = DataRoot\n\t\t\t\t\t},\n\t\t\t\t\tSignedTX = ar_tx:sign(Format2TX, KeyPair),\n\t\t\t\t\tPeer = ar_http_util:arweave_peer(Req),\n\t\t\t\t\tReply = ar_serialize:jsonify({[{<<\"id\">>,\n\t\t\t\t\t\t\tar_util:encode(SignedTX#tx.id)}]}),\n\t\t\t\t\tcase handle_post_tx(Req2, Peer, SignedTX) of\n\t\t\t\t\t\tok ->\n\t\t\t\t\t\t\t{200, #{}, Reply, Req2};\n\t\t\t\t\t\t{error_response, {Status, Headers, ErrBody}} ->\n\t\t\t\t\t\t\t{Status, Headers, ErrBody, Req2}\n\t\t\t\t\tend;\n\t\t\t\t{error, body_size_too_large} ->\n\t\t\t\t\t{413, #{}, <<\"Payload too large\">>, Req};\n\t\t\t\t{error, timeout} ->\n\t\t\t\t\t{500, #{}, <<\"Handler timeout\">>, Req}\n\t\t\tend;\n\t\t{true, {reject, {Status, Headers, Body}}} ->\n\t\t\t{Status, Headers, Body, Req}\n\tend;\n\n%% Return the list of peers held by the node.\n%% GET request to endpoint /peers.\nhandle(<<\"GET\">>, [<<\"peers\">>], Req, _Pid) ->\n\t{200, #{},\n\t\tar_serialize:jsonify(\n\t\t\t[\n\t\t\t\tlist_to_binary(ar_util:format_peer(P))\n\t\t\t||\n\t\t\t\tP <- ar_peers:get_peers(current),\n\t\t\t\tP /= ar_http_util:arweave_peer(Req),\n\t\t\t\tar_peers:is_public_peer(P)\n\t\t\t]\n\t\t),\n\tReq};\n\n%% Return the inflation reward emitted at the given block.\n%% GET request to endpoint /inflation/{height}.\nhandle(<<\"GET\">>, [<<\"inflation\">>, EncodedHeight], Req, _Pid) ->\n\tcase catch binary_to_integer(EncodedHeight) of\n\t\t{'EXIT', _} ->\n\t\t\t{400, #{}, jiffy:encode(#{ error => height_must_be_an_integer }), Req};\n\t\tHeight when Height < 0 ->\n\t\t\t{400, #{}, jiffy:encode(#{ error => height_must_be_non_negative }), Req};\n\t\tHeight when Height > 13000000 -> % An approximate number.\n\t\t\t{200, #{}, \"0\", Req};\n\t\tHeight ->\n\t\t\t{200, #{}, integer_to_list(trunc(ar_inflation:calculate(Height))), Req}\n\tend;\n\n%% Return the estimated transaction fee not including a new wallet fee.\n%% GET request to endpoint /price/{bytes}.\nhandle(<<\"GET\">>, [<<\"price\">>, SizeInBytesBinary], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\tcase catch binary_to_integer(SizeInBytesBinary) of\n\t\t\t\t{'EXIT', _} ->\n\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => size_must_be_an_integer }), Req};\n\t\t\t\tSize ->\n\t\t\t\t\t{Fee, _Denomination} = estimate_tx_fee(Size, <<>>),\n\t\t\t\t\t{200, #{}, integer_to_binary(Fee), Req}\n\t\t\tend\n\tend;\n\n%% Return the estimated transaction fee (not including a new wallet fee) along with the\n%% denomination 
code.\n%% GET request to endpoint /price2/{bytes}.\nhandle(<<\"GET\">>, [<<\"price2\">>, SizeInBytesBinary], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\tcase catch binary_to_integer(SizeInBytesBinary) of\n\t\t\t\t{'EXIT', _} ->\n\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => size_must_be_an_integer }), Req};\n\t\t\t\tSize ->\n\t\t\t\t\t{Fee, Denomination} = estimate_tx_fee(Size, <<>>),\n\t\t\t\t\t{200, #{}, jiffy:encode(#{ fee => integer_to_binary(Fee),\n\t\t\t\t\t\t\tdenomination => Denomination }), Req}\n\t\t\tend\n\tend;\n\n%% Return the optimistic transaction fee not (including a new wallet fee) along with the\n%% denomination code.\n%% GET request to endpoint /optimistic_price/{bytes}.\nhandle(<<\"GET\">>, [<<\"optimistic_price\">>, SizeInBytesBinary], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\tcase catch binary_to_integer(SizeInBytesBinary) of\n\t\t\t\t{'EXIT', _} ->\n\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => size_must_be_an_integer }), Req};\n\t\t\t\tSize ->\n\t\t\t\t\t{Fee, Denomination} = estimate_tx_fee(Size, <<>>, optimistic),\n\t\t\t\t\t{200, #{}, jiffy:encode(#{ fee => integer_to_binary(Fee),\n\t\t\t\t\t\t\tdenomination => Denomination }), Req}\n\t\t\tend\n\tend;\n\n%% Return the estimated transaction fee (including a new wallet fee if the given address\n%% is not found in the account tree).\n%% GET request to endpoint /price/{bytes}/{address}.\nhandle(<<\"GET\">>, [<<\"price\">>, SizeInBytesBinary, EncodedAddr], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\tcase ar_wallet:base64_address_with_optional_checksum_to_decoded_address_safe(\n\t\t\t\t\tEncodedAddr) of\n\t\t\t\t{error, invalid} ->\n\t\t\t\t\t{400, #{}, <<\"Invalid address.\">>, Req};\n\t\t\t\t{ok, Addr} ->\n\t\t\t\t\tcase catch binary_to_integer(SizeInBytesBinary) of\n\t\t\t\t\t\t{'EXIT', _} ->\n\t\t\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => size_must_be_an_integer }),\n\t\t\t\t\t\t\t\t\tReq};\n\t\t\t\t\t\tSize ->\n\t\t\t\t\t\t\t{Fee, _Denomination} = estimate_tx_fee(Size, Addr),\n\t\t\t\t\t\t\t{200, #{}, integer_to_binary(Fee), Req}\n\t\t\t\t\tend\n\t\t\tend\n\tend;\n\n%% Return the estimated transaction fee (including a new wallet fee if the given address\n%% is not found in the account tree) along with the denomination code.\n%% GET request to endpoint /price2/{bytes}/{address}.\nhandle(<<\"GET\">>, [<<\"price2\">>, SizeInBytesBinary, EncodedAddr], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\tcase ar_wallet:base64_address_with_optional_checksum_to_decoded_address_safe(\n\t\t\t\t\tEncodedAddr) of\n\t\t\t\t{error, invalid} ->\n\t\t\t\t\t{400, #{}, <<\"Invalid address.\">>, Req};\n\t\t\t\t{ok, Addr} ->\n\t\t\t\t\tcase catch binary_to_integer(SizeInBytesBinary) of\n\t\t\t\t\t\t{'EXIT', _} ->\n\t\t\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => size_must_be_an_integer }),\n\t\t\t\t\t\t\t\t\tReq};\n\t\t\t\t\t\tSize ->\n\t\t\t\t\t\t\t{Fee, Denomination} = estimate_tx_fee(Size, Addr),\n\t\t\t\t\t\t\t{200, #{}, jiffy:encode(#{ fee => integer_to_binary(Fee),\n\t\t\t\t\t\t\t\t\tdenomination => Denomination }), Req}\n\t\t\t\t\tend\n\t\t\tend\n\tend;\n\n%% Return the estimated transaction fee (including a new wallet fee if the given address\n%% is not found in the account tree) along with the denomination code.\n%% GET request to endpoint 
/optimistic_price/{bytes}/{address}.\nhandle(<<\"GET\">>, [<<\"optimistic_price\">>, SizeInBytesBinary, EncodedAddr], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\tcase ar_wallet:base64_address_with_optional_checksum_to_decoded_address_safe(\n\t\t\t\t\tEncodedAddr) of\n\t\t\t\t{error, invalid} ->\n\t\t\t\t\t{400, #{}, <<\"Invalid address.\">>, Req};\n\t\t\t\t{ok, Addr} ->\n\t\t\t\t\tcase catch binary_to_integer(SizeInBytesBinary) of\n\t\t\t\t\t\t{'EXIT', _} ->\n\t\t\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => size_must_be_an_integer }),\n\t\t\t\t\t\t\t\t\tReq};\n\t\t\t\t\t\tSize ->\n\t\t\t\t\t\t\t{Fee, Denomination} = estimate_tx_fee(Size, Addr, optimistic),\n\t\t\t\t\t\t\t{200, #{}, jiffy:encode(#{ fee => integer_to_binary(Fee),\n\t\t\t\t\t\t\t\t\tdenomination => Denomination }), Req}\n\t\t\t\t\tend\n\t\t\tend\n\tend;\n\n%% Return the estimated transaction fee not including a new wallet fee. The fee is estimated\n%% using the new pricing scheme.\n%% GET request to endpoint /v2price/{bytes}.\nhandle(<<\"GET\">>, [<<\"v2price\">>, SizeInBytesBinary], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\tcase catch binary_to_integer(SizeInBytesBinary) of\n\t\t\t\t{'EXIT', _} ->\n\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => size_must_be_an_integer }), Req};\n\t\t\t\tSize ->\n\t\t\t\t\tFee = estimate_tx_fee_v2(Size, <<>>),\n\t\t\t\t\t{200, #{}, integer_to_binary(Fee), Req}\n\t\t\tend\n\tend;\n\n%% Return the estimated transaction fee (including a new wallet fee if the given address\n%% is not found in the account tree). The fee is estimated using the new pricing scheme.\n%% GET request to endpoint /v2price/{bytes}/{address}.\nhandle(<<\"GET\">>, [<<\"v2price\">>, SizeInBytesBinary, EncodedAddr], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\tcase ar_wallet:base64_address_with_optional_checksum_to_decoded_address_safe(\n\t\t\t\t\tEncodedAddr) of\n\t\t\t\t{error, invalid} ->\n\t\t\t\t\t{400, #{}, <<\"Invalid address.\">>, Req};\n\t\t\t\t{ok, Addr} ->\n\t\t\t\t\tcase catch binary_to_integer(SizeInBytesBinary) of\n\t\t\t\t\t\t{'EXIT', _} ->\n\t\t\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => size_must_be_an_integer }),\n\t\t\t\t\t\t\t\t\tReq};\n\t\t\t\t\t\tSize ->\n\t\t\t\t\t\t\tFee = estimate_tx_fee_v2(Size, Addr),\n\t\t\t\t\t\t\t{200, #{}, integer_to_binary(Fee), Req}\n\t\t\t\t\tend\n\t\t\tend\n\tend;\n\nhandle(<<\"GET\">>, [<<\"reward_history\">>, EncodedBH], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\tok = ar_semaphore:acquire(get_reward_history, ?DEFAULT_CALL_TIMEOUT),\n\t\t\tcase ar_util:safe_decode(EncodedBH) of\n\t\t\t\t{ok, BH} ->\n\t\t\t\t\tFork_2_6 = ar_fork:height_2_6(),\n\t\t\t\t\tcase ar_block_cache:get_block_and_status(block_cache, BH) of\n\t\t\t\t\t\t{#block{ height = Height, reward_history = RewardHistory }, {Status, _}}\n\t\t\t\t\t\t\t\twhen (Status == on_chain orelse Status == validated),\n\t\t\t\t\t\t\t\t\tHeight >= Fork_2_6 ->\n\t\t\t\t\t\t\tRewardHistory2 = ar_rewards:trim_buffered_reward_history(Height,\n\t\t\t\t\t\t\t\t\tRewardHistory),\n\t\t\t\t\t\t\t{200, #{}, ar_serialize:reward_history_to_binary(RewardHistory2),\n\t\t\t\t\t\t\t\t\tReq};\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t{404, #{}, <<>>, Req}\n\t\t\t\t\tend;\n\t\t\t\t{error, invalid} ->\n\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => invalid_block_hash }), Req}\n\t\t\tend\n\tend;\n\nhandle(<<\"GET\">>, 
[<<\"block_time_history\">>, EncodedBH], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\tcase ar_util:safe_decode(EncodedBH) of\n\t\t\t\t{ok, BH} ->\n\t\t\t\t\tFork_2_7 = ar_fork:height_2_7(),\n\t\t\t\t\tcase ar_block_cache:get_block_and_status(block_cache, BH) of\n\t\t\t\t\t\t{#block{ height = Height,\n\t\t\t\t\t\t\t\t\tblock_time_history = BlockTimeHistory }, {Status, _}}\n\t\t\t\t\t\t\t\twhen (Status == on_chain orelse Status == validated),\n\t\t\t\t\t\t\t\t\tHeight >= Fork_2_7 ->\n\t\t\t\t\t\t\t{200, #{}, ar_serialize:block_time_history_to_binary(\n\t\t\t\t\t\t\t\t\tBlockTimeHistory), Req};\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t{404, #{}, <<>>, Req}\n\t\t\t\t\tend;\n\t\t\t\t{error, invalid} ->\n\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => invalid_block_hash }), Req}\n\t\t\tend\n\tend;\n\n%% Return the current JSON-encoded hash list held by the node.\n%% GET request to endpoint /block_index.\nhandle(<<\"GET\">>, [<<\"hash_list\">>], Req, _Pid) ->\n\thandle(<<\"GET\">>, [<<\"block_index\">>], Req, _Pid);\n\nhandle(<<\"GET\">>, [<<\"block_index\">>], Req, _Pid) ->\n\tok = ar_semaphore:acquire(get_block_index, ?DEFAULT_CALL_TIMEOUT),\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\tcase ar_node:get_height() >= ar_fork:height_2_6() of\n\t\t\t\ttrue ->\n\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => not_supported_since_fork_2_6 }), Req};\n\t\t\t\tfalse ->\n\t\t\t\t\tBI = ar_node:get_block_index(),\n\t\t\t\t\t{200, #{},\n\t\t\t\t\t\tar_serialize:jsonify(\n\t\t\t\t\t\t\tar_serialize:block_index_to_json_struct(\n\t\t\t\t\t\t\t\tformat_bi_for_peer(BI, Req)\n\t\t\t\t\t\t\t)\n\t\t\t\t\t\t),\n\t\t\t\t\tReq}\n\t\t\tend\n\tend;\n\n%% Return the current binary-encoded block index held by the node.\n%% GET request to endpoint /block_index2.\nhandle(<<\"GET\">>, [<<\"block_index2\">>], Req, _Pid) ->\n\tok = ar_semaphore:acquire(get_block_index, ?DEFAULT_CALL_TIMEOUT),\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\tcase ar_node:get_height() >= ar_fork:height_2_6() of\n\t\t\t\ttrue ->\n\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => not_supported_since_fork_2_6 }), Req};\n\t\t\t\tfalse ->\n\t\t\t\t\tBI = ar_node:get_block_index(),\n\t\t\t\t\tBin = ar_serialize:block_index_to_binary(BI),\n\t\t\t\t\t{200, #{}, Bin, Req}\n\t\t\tend\n\tend;\n\nhandle(<<\"GET\">>, [<<\"hash_list\">>, From, To], Req, _Pid) ->\n\thandle(<<\"GET\">>, [<<\"block_index\">>, From, To], Req, _Pid);\n\nhandle(<<\"GET\">>, [<<\"hash_list2\">>, From, To], Req, _Pid) ->\n\thandle(<<\"GET\">>, [<<\"block_index2\">>, From, To], Req, _Pid);\n\nhandle(<<\"GET\">>, [<<\"block_index2\">>, From, To], Req, _Pid) ->\n\terlang:put(encoding, binary),\n\thandle(<<\"GET\">>, [<<\"block_index\">>, From, To], Req, _Pid);\n\nhandle(<<\"GET\">>, [<<\"block_index\">>, From, To], Req, _Pid) ->\n\tok = ar_semaphore:acquire(get_block_index, ?DEFAULT_CALL_TIMEOUT),\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\tProps =\n\t\t\t\tets:select(\n\t\t\t\t\tnode_state,\n\t\t\t\t\t[{{'$1', '$2'},\n\t\t\t\t\t\t[{'or',\n\t\t\t\t\t\t\t{'==', '$1', height},\n\t\t\t\t\t\t\t{'==', '$1', recent_block_index}}], ['$_']}]\n\t\t\t\t),\n\t\t\tHeight = proplists:get_value(height, Props),\n\t\t\tRecentBI = proplists:get_value(recent_block_index, Props),\n\t\t\ttry\n\t\t\t\tStart = binary_to_integer(From),\n\t\t\t\tEnd = binary_to_integer(To),\n\t\t\t\tEncoding = case erlang:get(encoding) of undefined -> json; 
Enc -> Enc end,\n\t\t\t\thandle_get_block_index_range(Start, End, Height, RecentBI, Req, Encoding)\n\t\t\tcatch _:_ ->\n\t\t\t\t{400, #{}, jiffy:encode(#{ error => invalid_range }), Req}\n\t\t\tend\n\tend;\n\nhandle(<<\"GET\">>, [<<\"recent_hash_list\">>], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\tEncoded = [ar_util:encode(H) || H <- ar_node:get_block_anchors()],\n\t\t\t{200, #{}, ar_serialize:jsonify(Encoded), Req}\n\tend;\n\n%% Accept the list of independent block hashes ordered from oldest to newest\n%% and return the deviation of our hash list from the given one.\n%% Peers may use this endpoint to make sure they did not miss blocks or learn\n%% about the missed blocks and their transactions so that they can catch up quickly.\nhandle(<<\"GET\">>, [<<\"recent_hash_list_diff\">>], Req, Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\tcase read_complete_body(Req, Pid, ?MAX_SERIALIZED_RECENT_HASH_LIST_DIFF) of\n\t\t\t\t{ok, Body, Req2} ->\n\t\t\t\t\tcase decode_recent_hash_list(Body) of\n\t\t\t\t\t\t{ok, ReverseHL} ->\n\t\t\t\t\t\t\t{BlockTXPairs, _}\n\t\t\t\t\t\t\t\t\t= ar_block_cache:get_longest_chain_cache(block_cache),\n\t\t\t\t\t\t\tcase get_recent_hash_list_diff(ReverseHL,\n\t\t\t\t\t\t\t\t\tlists:reverse(BlockTXPairs)) of\n\t\t\t\t\t\t\t\tno_intersection ->\n\t\t\t\t\t\t\t\t\t{404, #{}, <<>>, Req2};\n\t\t\t\t\t\t\t\tBin ->\n\t\t\t\t\t\t\t\t\t{200, #{}, Bin, Req2}\n\t\t\t\t\t\t\tend;\n\t\t\t\t\t\terror ->\n\t\t\t\t\t\t\t{400, #{}, <<>>, Req2}\n\t\t\t\t\tend;\n\t\t\t\t{error, timeout} ->\n\t\t\t\t\t{503, #{}, jiffy:encode(#{ error => timeout }), Req};\n\t\t\t\t{error, body_size_too_large} ->\n\t\t\t\t\t{413, #{}, <<\"Payload too large\">>, Req}\n\t\t\tend\n\tend;\n\n%% Return the sum of all the existing accounts in the latest state, in Winston.\n%% GET request to endpoint /total_supply.\nhandle(<<\"GET\">>, [<<\"total_supply\">>], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\tok = ar_semaphore:acquire(get_wallet_list, ?DEFAULT_CALL_TIMEOUT),\n\t\t\tB = ar_node:get_current_block(),\n\t\t\tTotalSupply = get_total_supply(B#block.wallet_list, first, 0,\n\t\t\t\t\tB#block.denomination),\n\t\t\t{200, #{}, integer_to_binary(TotalSupply), Req}\n\tend;\n\n%% Return the current wallet list held by the node.\n%% GET request to endpoint /wallet_list.\nhandle(<<\"GET\">>, [<<\"wallet_list\">>], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\tH = ar_node:get_current_block_hash(),\n\t\t\tprocess_request(get_block, [<<\"hash\">>, ar_util:encode(H), <<\"wallet_list\">>], Req)\n\tend;\n\n%% Return a bunch of wallets, up to ?WALLET_LIST_CHUNK_SIZE, from the tree with\n%% the given root hash. 
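\n%% Illustrative request sketch (not taken from this module; host, port and the\n%% root hash placeholder are assumptions):\n%%   curl http://127.0.0.1:1984/wallet_list/<base64url_root_hash>\n%% 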
The wallet addresses are picked in the ascending alphabetical order.\nhandle(<<\"GET\">>, [<<\"wallet_list\">>, EncodedRootHash], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\tprocess_get_wallet_list_chunk(EncodedRootHash, first, Req)\n\tend;\n\n%% Return a bunch of wallets, up to ?WALLET_LIST_CHUNK_SIZE, from the tree with\n%% the given root hash, starting with the provided cursor, taken the wallet addresses\n%% are picked in the ascending alphabetical order.\nhandle(<<\"GET\">>, [<<\"wallet_list\">>, EncodedRootHash, EncodedCursor], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\tprocess_get_wallet_list_chunk(EncodedRootHash, EncodedCursor, Req)\n\tend;\n\n%% Return the balance of the given address from the wallet tree with the given root hash.\nhandle(<<\"GET\">>, [<<\"wallet_list\">>, EncodedRootHash, EncodedAddr, <<\"balance\">>], Req,\n\t\t_Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\tcase {ar_util:safe_decode(EncodedRootHash), ar_util:safe_decode(EncodedAddr)} of\n\t\t\t\t{{error, invalid}, _} ->\n\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => invalid_root_hash_encoding }), Req};\n\t\t\t\t{_, {error, invalid}} ->\n\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => invalid_address_encoding }), Req};\n\t\t\t\t{{ok, RootHash}, {ok, Addr}} ->\n\t\t\t\t\tcase ar_wallets:get_balance(RootHash, Addr) of\n\t\t\t\t\t\t{error, not_found} ->\n\t\t\t\t\t\t\t{404, #{}, jiffy:encode(#{ error => root_hash_not_found }), Req};\n\t\t\t\t\t\tBalance when is_integer(Balance) ->\n\t\t\t\t\t\t\t{200, #{}, integer_to_binary(Balance), Req};\n\t\t\t\t\t\t_Error ->\n\t\t\t\t\t\t\t{500, #{}, <<>>, Req}\n\t\t\t\t\tend\n\t\t\tend\n\tend;\n\n%% Share your IP with another peer.\n%% @deprecated To make a node learn your IP, you can make any request to it.\nhandle(<<\"POST\">>, [<<\"peers\">>], Req, _Pid) ->\n\t{200, #{}, <<>>, Req};\n\n%% Return the balance of the wallet specified via wallet_address.\n%% GET request to endpoint /wallet/{wallet_address}/balance.\nhandle(<<\"GET\">>, [<<\"wallet\">>, Addr, <<\"balance\">>], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\tcase ar_wallet:base64_address_with_optional_checksum_to_decoded_address_safe(Addr) of\n\t\t\t\t{error, invalid} ->\n\t\t\t\t\t{400, #{}, <<\"Invalid address.\">>, Req};\n\t\t\t\t{ok, AddrOK} ->\n\t\t\t\t\tcase ar_node:get_balance(AddrOK) of\n\t\t\t\t\t\tnode_unavailable ->\n\t\t\t\t\t\t\t{503, #{}, <<\"Internal timeout.\">>, Req};\n\t\t\t\t\t\tBalance ->\n\t\t\t\t\t\t\t{200, #{}, integer_to_binary(Balance), Req}\n\t\t\t\t\tend\n\t\t\tend\n\tend;\n\n%% Return the sum of reserved mining rewards of the given account.\n%% GET request to endpoint /wallet/{wallet_address}/reserved_rewards_total.\nhandle(<<\"GET\">>, [<<\"wallet\">>, Addr, <<\"reserved_rewards_total\">>], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\tcase ar_wallet:base64_address_with_optional_checksum_to_decoded_address_safe(Addr) of\n\t\t\t\t{ok, AddrOK} when byte_size(AddrOK) == 32 ->\n\t\t\t\t\tB = ar_node:get_current_block(),\n\t\t\t\t\tSum = ar_rewards:get_total_reward_for_address(AddrOK, B),\n\t\t\t\t\t{200, #{}, integer_to_binary(Sum), Req};\n\t\t\t\t_ ->\n\t\t\t\t\t{400, #{}, <<\"Invalid address.\">>, Req}\n\t\t\tend\n\tend;\n\n%% Return the last transaction ID (hash) for the wallet specified via wallet_address.\n%% 
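Illustrative request sketch (not taken from this module; host and port are assumptions):\n%%   curl http://127.0.0.1:1984/wallet/<wallet_address>/last_tx\n%% 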
GET request to endpoint /wallet/{wallet_address}/last_tx.\nhandle(<<\"GET\">>, [<<\"wallet\">>, Addr, <<\"last_tx\">>], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\tcase ar_wallet:base64_address_with_optional_checksum_to_decoded_address_safe(Addr) of\n\t\t\t\t{error, invalid} ->\n\t\t\t\t\t{400, #{}, <<\"Invalid address.\">>, Req};\n\t\t\t\t{ok, AddrOK} ->\n\t\t\t\t\t{200, #{},\n\t\t\t\t\t\tar_util:encode(\n\t\t\t\t\t\t\t?OK(ar_node:get_last_tx(AddrOK))\n\t\t\t\t\t\t),\n\t\t\t\t\tReq}\n\t\t\tend\n\tend;\n\n%% Return a block anchor to use for building transactions.\nhandle(<<\"GET\">>, [<<\"tx_anchor\">>], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\tList = ar_node:get_block_anchors(),\n\t\t\tSuggestedAnchor = lists:nth(min(length(List), ?SUGGESTED_TX_ANCHOR_DEPTH), List),\n\t\t\t{200, #{}, ar_util:encode(SuggestedAnchor), Req}\n\tend;\n\n%% Return the JSON-encoded block with the given height or hash.\n%% GET request to endpoint /block/{height|hash}/{height|hash}.\nhandle(<<\"GET\">>, [<<\"block\">>, Type, ID], Req, Pid)\n\t\twhen Type == <<\"height\">> orelse Type == <<\"hash\">> ->\n\thandle_get_block(Type, ID, Req, Pid, json);\n\n%% Return the binary-encoded block with the given height or hash.\n%% GET request to endpoint /block2/{height|hash}/{height|hash}.\n%% Optionally accept an HTTP body, up to 125 bytes - the encoded\n%% transaction indices where the Nth bit being 1 asks to include\n%% the Nth transaction in the alphabetical order (not just its identifier)\n%% in the response. The node only includes transactions in the response\n%% when the corresponding indices are present in the request and those\n%% transactions are found in the block cache - the motivation is to keep\n%% the endpoint lightweight.\nhandle(<<\"GET\">>, [<<\"block2\">>, Type, ID], Req, Pid)\n\t\twhen Type == <<\"height\">> orelse Type == <<\"hash\">> ->\n\thandle_get_block(Type, ID, Req, Pid, binary);\n\n%% Return block or block field.\nhandle(<<\"GET\">>, [<<\"block\">>, Type, ID, Field], Req, _Pid)\n\t\twhen Type == <<\"height\">> orelse Type == <<\"hash\">> ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\tprocess_request(get_block, [Type, ID, Field], Req)\n\tend;\n\n%% Return the balance of the given wallet at the given block.\nhandle(<<\"GET\">>, [<<\"block\">>, <<\"height\">>, Height, <<\"wallet\">>, Addr, <<\"balance\">>], Req,\n\t\t_Pid) ->\n\tok = ar_semaphore:acquire(get_wallet_list, ?DEFAULT_CALL_TIMEOUT),\n\thandle_get_block_wallet_balance(Height, Addr, Req);\n\n%% Return the current block.\n%% GET request to endpoint /block/current.\nhandle(<<\"GET\">>, [<<\"block\">>, <<\"current\">>], Req, Pid) ->\n\tcase ar_node:get_current_block_hash() of\n\t\tnot_joined ->\n\t\t\tnot_joined(Req);\n\t\tH when is_binary(H) ->\n\t\t\thandle(<<\"GET\">>, [<<\"block\">>, <<\"hash\">>, ar_util:encode(H)], Req, Pid)\n\tend;\n\n%% DEPRECATED (12/07/2018)\nhandle(<<\"GET\">>, [<<\"current_block\">>], Req, Pid) ->\n\thandle(<<\"GET\">>, [<<\"block\">>, <<\"current\">>], Req, Pid);\n\n%% Return a given field of the transaction specified by the transaction ID (hash).\n%% GET request to endpoint /tx/{hash}/{field}\n%%\n%% {field} := { id | last_tx | owner | tags | target | quantity | data | signature | reward }\nhandle(<<\"GET\">>, [<<\"tx\">>, Hash, Field], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\tReadTX 
=\n\t\t\t\tcase ar_util:safe_decode(Hash) of\n\t\t\t\t\t{error, invalid} ->\n\t\t\t\t\t\t{reply, {400, #{}, <<\"Invalid hash.\">>, Req}};\n\t\t\t\t\t{ok, ID} ->\n\t\t\t\t\t\t{ar_storage:read_tx(ID), ID}\n\t\t\t\tend,\n\t\t\tcase ReadTX of\n\t\t\t\t{unavailable, TXID} ->\n\t\t\t\t\tcase is_a_pending_tx(TXID) of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t{202, #{}, <<\"Pending\">>, Req};\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t{404, #{}, <<\"Not Found.\">>, Req}\n\t\t\t\t\tend;\n\t\t\t\t{reply, Reply} ->\n\t\t\t\t\tReply;\n\t\t\t\t{#tx{} = TX, _} ->\n\t\t\t\t\tcase Field of\n\t\t\t\t\t\t<<\"tags\">> ->\n\t\t\t\t\t\t\t{200, #{}, ar_serialize:jsonify(lists:map(\n\t\t\t\t\t\t\t\t\tfun({Name, Value}) ->\n\t\t\t\t\t\t\t\t\t\t{[{name, ar_util:encode(Name)},\n\t\t\t\t\t\t\t\t\t\t\t\t{value, ar_util:encode(Value)}]}\n\t\t\t\t\t\t\t\t\tend,\n\t\t\t\t\t\t\t\t\tTX#tx.tags)), Req};\n\t\t\t\t\t\t<<\"data\">> ->\n\t\t\t\t\t\t\tserve_tx_data(Req, TX);\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\tcase catch binary_to_existing_atom(Field) of\n\t\t\t\t\t\t\t\t{'EXIT', _} ->\n\t\t\t\t\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => invalid_field }), Req};\n\t\t\t\t\t\t\t\tFieldAtom ->\n\t\t\t\t\t\t\t\t\t{TXJSON} = ar_serialize:tx_to_json_struct(TX),\n\t\t\t\t\t\t\t\t\tcase catch val_for_key(FieldAtom, TXJSON) of\n\t\t\t\t\t\t\t\t\t\t{'EXIT', _} ->\n\t\t\t\t\t\t\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => invalid_field }),\n\t\t\t\t\t\t\t\t\t\t\t\t\tReq};\n\t\t\t\t\t\t\t\t\t\tVal ->\n\t\t\t\t\t\t\t\t\t\t\t{200, #{}, Val, Req}\n\t\t\t\t\t\t\t\t\tend\n\t\t\t\t\t\t\tend\n\t\t\t\t\tend\n\t\t\tend\n\tend;\n\n%% Return the current block height, or 503 if the node has not joined the network yet.\nhandle(Method, [<<\"height\">>], Req, _Pid)\n\t\twhen (Method == <<\"GET\">>) or (Method == <<\"HEAD\">>) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\tH = ar_node:get_height(),\n\t\t\t{200, #{}, integer_to_binary(H), Req}\n\tend;\n\n%% If we are given a hash with no specifier (block, tx, etc), assume that\n%% the user is requesting the data from the TX associated with that hash.\n%% Optionally allow a file extension.\nhandle(<<\"GET\">>, [<<Hash:43/binary, MaybeExt/binary>>], Req, Pid) ->\n\thandle(<<\"GET\">>, [<<\"tx\">>, Hash, <<\"data.\", MaybeExt/binary>>], Req, Pid);\n\n%% Accept a nonce limiter (VDF) update from a configured peer, if any.\n%% POST request to /vdf.\nhandle(<<\"POST\">>, [<<\"vdf\">>], Req, Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\thandle_post_vdf(Req, Pid)\n\tend;\n\n%% Serve a VDF update to a configured VDF client.\n%% GET request to /vdf.\nhandle(<<\"GET\">>, [<<\"vdf\">>], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\thandle_get_vdf(Req, get_update, 2)\n\tend;\n\n%% Serve a VDF update to a configured VDF client.\n%% GET request to /vdf2.\nhandle(<<\"GET\">>, [<<\"vdf2\">>], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\thandle_get_vdf(Req, get_update, 2)\n\tend;\n\n%% Serve the current VDF session to a configured VDF client.\n%% GET request to /vdf/session.\nhandle(<<\"GET\">>, [<<\"vdf\">>, <<\"session\">>], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\thandle_get_vdf(Req, get_session, 2)\n\tend;\n\n%% Serve the current VDF session to a configured VDF client.\n%% GET request to /vdf2/session.\nhandle(<<\"GET\">>, [<<\"vdf2\">>, <<\"session\">>], Req, _Pid) ->\n\tcase ar_node:is_joined() 
of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\thandle_get_vdf(Req, get_session, 2)\n\tend;\n\n%% Serve the current VDF session to a configured VDF client.\n%% GET request to /vdf3/session.\nhandle(<<\"GET\">>, [<<\"vdf3\">>, <<\"session\">>], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\thandle_get_vdf(Req, get_session, 3)\n\tend;\n\n%% Serve the current VDF session to a configured VDF client.\n%% GET request to /vdf4/session.\nhandle(<<\"GET\">>, [<<\"vdf4\">>, <<\"session\">>], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\thandle_get_vdf(Req, get_session, 4)\n\tend;\n\n%% Serve the previous VDF session to a configured VDF client.\n%% GET request to /vdf/previous_session.\nhandle(<<\"GET\">>, [<<\"vdf\">>, <<\"previous_session\">>], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\thandle_get_vdf(Req, get_previous_session, 2)\n\tend;\n\n%% Serve the previous VDF session to a configured VDF client.\n%% GET request to /vdf2/previous_session.\nhandle(<<\"GET\">>, [<<\"vdf2\">>, <<\"previous_session\">>], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\thandle_get_vdf(Req, get_previous_session, 2)\n\tend;\n\n%% Serve the previous VDF session to a configured VDF client.\n%% GET request to /vdf4/previous_session.\nhandle(<<\"GET\">>, [<<\"vdf4\">>, <<\"previous_session\">>], Req, _Pid) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\thandle_get_vdf(Req, get_previous_session, 4)\n\tend;\n\nhandle(<<\"GET\">>, [<<\"coordinated_mining\">>, <<\"partition_table\">>], Req, _Pid) ->\n\tcase check_cm_api_secret(Req) of\n\t\tpass ->\n\t\t\tcase ar_node:is_joined() of\n\t\t\t\tfalse ->\n\t\t\t\t\tnot_joined(Req);\n\t\t\t\ttrue ->\n\t\t\t\t\tPartitions =\n\t\t\t\t\t\tcase {ar_pool:is_client(), ar_coordination:is_exit_peer()} of\n\t\t\t\t\t\t\t{true, true} ->\n\t\t\t\t\t\t\t\t%% When we work with a pool, the exit node shares\n\t\t\t\t\t\t\t\t%% the information about external partitions with\n\t\t\t\t\t\t\t\t%% every internal miner.\n\t\t\t\t\t\t\t\tar_coordination:get_self_plus_external_partitions_list();\n\t\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t\t%% CM miners ask each other about their local\n\t\t\t\t\t\t\t\t%% partitions. 
A CM exit node is not an exception - it\n\t\t\t\t\t\t\t\t%% does NOT aggregate peer partitions in this case.\n\t\t\t\t\t\t\t\tar_coordination:get_unique_partitions_list()\n\t\t\t\t\t\tend,\n\t\t\t\t\tJSON = ar_serialize:jsonify(Partitions),\n\t\t\t\t\t{200, #{}, JSON, Req}\n\t\t\tend;\n\t\t{reject, {Status, Headers, Body}} ->\n\t\t\t{Status, Headers, Body, Req}\n\tend;\n\n% If somebody want to make GUI, monitoring tool\nhandle(<<\"GET\">>, [<<\"coordinated_mining\">>, <<\"state\">>], Req, _Pid) ->\n\tcase check_cm_api_secret(Req) of\n\t\tpass ->\n\t\t\tcase ar_node:is_joined() of\n\t\t\t\tfalse ->\n\t\t\t\t\tnot_joined(Req);\n\t\t\t\ttrue ->\n\t\t\t\t\t{ok, {LastPeerResponse}} = ar_coordination:get_public_state(),\n\t\t\t\t\tPeers = maps:fold(fun(Peer, Value, Acc) ->\n\t\t\t\t\t\t{AliveStatus, PartitionList} = Value,\n\t\t\t\t\t\tTable = lists:map(\n\t\t\t\t\t\t\tfun\t(ListValue) ->\n\t\t\t\t\t\t\t\t{Bucket, BucketSize, Addr, PackingDifficulty} = ListValue,\n\t\t\t\t\t\t\t\tar_serialize:partition_to_json_struct(Bucket, BucketSize,\n\t\t\t\t\t\t\t\t\tAddr, PackingDifficulty)\n\t\t\t\t\t\t\tend,\n\t\t\t\t\t\t\tPartitionList\n\t\t\t\t\t\t),\n\t\t\t\t\t\tVal = {[\n\t\t\t\t\t\t\t{peer, list_to_binary(ar_util:format_peer(Peer))},\n\t\t\t\t\t\t\t{alive, AliveStatus},\n\t\t\t\t\t\t\t{partition_table, Table}\n\t\t\t\t\t\t]},\n\t\t\t\t\t\t[Val | Acc]\n\t\t\t\t\t\tend,\n\t\t\t\t\t\t[],\n\t\t\t\t\t\tLastPeerResponse\n\t\t\t\t\t),\n\t\t\t\t{200, #{}, ar_serialize:jsonify(Peers), Req}\n\t\t\tend;\n\t\t{reject, {Status, Headers, Body}} ->\n\t\t\t{Status, Headers, Body, Req}\n\tend;\n\n%% POST request to /coordinated_mining/h1.\nhandle(<<\"POST\">>, [<<\"coordinated_mining\">>, <<\"h1\">>], Req, Pid) ->\n\tcase check_cm_api_secret(Req) of\n\t\tpass ->\n\t\t\tcase ar_node:is_joined() of\n\t\t\t\tfalse ->\n\t\t\t\t\tnot_joined(Req);\n\t\t\t\ttrue ->\n\t\t\t\t\thandle_mining_h1(Req, Pid)\n\t\t\tend;\n\t\t{reject, {Status, Headers, Body}} ->\n\t\t\t{Status, Headers, Body, Req}\n\tend;\n\n%% POST request to /coordinated_mining/h2.\nhandle(<<\"POST\">>, [<<\"coordinated_mining\">>, <<\"h2\">>], Req, Pid) ->\n\tcase check_cm_api_secret(Req) of\n\t\tpass ->\n\t\t\tcase ar_node:is_joined() of\n\t\t\t\tfalse ->\n\t\t\t\t\tnot_joined(Req);\n\t\t\t\ttrue ->\n\t\t\t\t\thandle_mining_h2(Req, Pid)\n\t\t\tend;\n\t\t{reject, {Status, Headers, Body}} ->\n\t\t\t{Status, Headers, Body, Req}\n\tend;\n\nhandle(<<\"POST\">>, [<<\"coordinated_mining\">>, <<\"publish\">>], Req, Pid) ->\n\tcase check_cm_api_secret(Req) of\n\t\tpass ->\n\t\t\tcase ar_node:is_joined() of\n\t\t\t\tfalse ->\n\t\t\t\t\tnot_joined(Req);\n\t\t\t\ttrue ->\n\t\t\t\t\thandle_mining_cm_publish(Req, Pid)\n\t\t\tend;\n\t\t{reject, {Status, Headers, Body}} ->\n\t\t\t{Status, Headers, Body, Req}\n\tend;\n\n%% Catch case for requests made to unknown endpoints.\n%% Returns error code 400 - Request type not found.\nhandle(_, _, Req, _Pid) ->\n\tnot_found(Req).\n\n%% Cowlib does not yet support status codes 208 and 419 properly.\n%% See https://github.com/ninenines/cowlib/pull/79\nhandle_custom_codes(208) -> <<\"208 Already Reported\">>;\nhandle_custom_codes(419) -> <<\"419 Missing Chunk\">>;\nhandle_custom_codes(Status) -> Status.\n\nformat_bi_for_peer(BI, Req) ->\n\tcase cowboy_req:header(<<\"x-block-format\">>, Req, <<\"2\">>) of\n\t\t<<\"2\">> -> ?BI_TO_BHL(BI);\n\t\t_ -> BI\n\tend.\n\nhandle_get_block_index_range(Start, _End, _CurrentHeight, _RecentBI, Req, _Encoding)\n\t\twhen Start < 0 ->\n\t{400, #{}, jiffy:encode(#{ error => negative_start }), 
Req};\nhandle_get_block_index_range(Start, End, _CurrentHeight, _RecentBI, Req, _Encoding)\n\t\twhen Start > End ->\n\t{400, #{}, jiffy:encode(#{ error => start_bigger_than_end }), Req};\nhandle_get_block_index_range(Start, End, _CurrentHeight, _RecentBI, Req, _Encoding)\n\t\twhen End - Start + 1 > ?MAX_BLOCK_INDEX_RANGE_SIZE ->\n\t{400, #{}, jiffy:encode(#{ error => range_too_big,\n\t\t\tmax_range_size => ?MAX_BLOCK_INDEX_RANGE_SIZE }), Req};\nhandle_get_block_index_range(Start, _End, CurrentHeight, _RecentBI, Req, _Encoding)\n\t\twhen Start > CurrentHeight ->\n\t{400, #{}, jiffy:encode(#{ error => start_too_big }), Req};\nhandle_get_block_index_range(Start, End, CurrentHeight, RecentBI, Req, Encoding) ->\n\tCheckpointHeight = CurrentHeight - ar_block:get_consensus_window_size() + 1,\n\tRecentRange =\n\t\tcase End >= CheckpointHeight of\n\t\t\ttrue ->\n\t\t\t\tTop = min(CurrentHeight, End),\n\t\t\t\tRange1 = lists:nthtail(CurrentHeight - Top, RecentBI),\n\t\t\t\tlists:sublist(Range1, min(Top - Start + 1,\n\t\t\t\t\t\tar_block:get_consensus_window_size() - (CurrentHeight - Top)));\n\t\t\tfalse ->\n\t\t\t\t[]\n\t\tend,\n\tRange =\n\t\tcase Start < CheckpointHeight of\n\t\t\ttrue ->\n\t\t\t\tRecentRange ++ ar_block_index:get_range(Start, min(End, CheckpointHeight - 1));\n\t\t\tfalse ->\n\t\t\t\tRecentRange\n\t\tend,\n\tcase Encoding of\n\t\tbinary ->\n\t\t\t{200, #{}, ar_serialize:block_index_to_binary(Range), Req};\n\t\tjson ->\n\t\t\t{200, #{}, ar_serialize:jsonify(ar_serialize:block_index_to_json_struct(\n\t\t\t\t\tformat_bi_for_peer(Range, Req))), Req}\n\tend.\n\nsendfile(Filename) ->\n\t{sendfile, 0, filelib:file_size(Filename), Filename}.\n\nnot_found(Req) ->\n\t{400, #{}, <<\"Request type not found.\">>, Req}.\n\nnot_joined(Req) ->\n\t{503, #{}, jiffy:encode(#{ error => not_joined }), Req}.\n\nhandle_get_tx_status(EncodedTXID, Req) ->\n\tcase ar_util:safe_decode(EncodedTXID) of\n\t\t{error, invalid} ->\n\t\t\t{400, #{}, <<\"Invalid address.\">>, Req};\n\t\t{ok, TXID} ->\n\t\t\tcase is_a_pending_tx(TXID) of\n\t\t\t\ttrue ->\n\t\t\t\t\t{202, #{}, <<\"Pending\">>, Req};\n\t\t\t\tfalse ->\n\t\t\t\t\tcase ar_storage:get_tx_confirmation_data(TXID) of\n\t\t\t\t\t\t{ok, {Height, BH}} ->\n\t\t\t\t\t\t\tPseudoTags = [\n\t\t\t\t\t\t\t\t{<<\"block_height\">>, Height},\n\t\t\t\t\t\t\t\t{<<\"block_indep_hash\">>, ar_util:encode(BH)}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\tcase ar_block_index:get_element_by_height(Height) of\n\t\t\t\t\t\t\t\tnot_found ->\n\t\t\t\t\t\t\t\t\t{404, #{}, <<\"Not Found.\">>, Req};\n\t\t\t\t\t\t\t\t{BH, _, _} ->\n\t\t\t\t\t\t\t\t\tCurrentHeight = ar_node:get_height(),\n\t\t\t\t\t\t\t\t\t%% First confirmation is when the TX is\n\t\t\t\t\t\t\t\t\t%% in the latest block.\n\t\t\t\t\t\t\t\t\tNumberOfConfirmations = CurrentHeight - Height + 1,\n\t\t\t\t\t\t\t\t\tStatus = PseudoTags\n\t\t\t\t\t\t\t\t\t\t\t++ [{<<\"number_of_confirmations\">>,\n\t\t\t\t\t\t\t\t\t\t\t\tNumberOfConfirmations}],\n\t\t\t\t\t\t\t\t\t{200, #{}, ar_serialize:jsonify({Status}), Req};\n\t\t\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t\t\t{404, #{}, <<\"Not Found.\">>, Req}\n\t\t\t\t\t\t\tend;\n\t\t\t\t\t\tnot_found ->\n\t\t\t\t\t\t\t{404, #{}, <<\"Not Found.\">>, Req};\n\t\t\t\t\t\t{error, timeout} ->\n\t\t\t\t\t\t\t{503, #{}, <<\"ArQL unavailable.\">>, Req}\n\t\t\t\t\tend\n\t\t\tend\n\tend.\n\nhandle_get_tx(Hash, Req, Encoding) ->\n\tcase ar_util:safe_decode(Hash) of\n\t\t{error, invalid} ->\n\t\t\t{400, #{}, <<\"Invalid hash.\">>, Req};\n\t\t{ok, ID} ->\n\t\t\tok = ar_semaphore:acquire(get_tx, 
?DEFAULT_CALL_TIMEOUT),\n\t\t\tcase ar_storage:read_tx(ID) of\n\t\t\t\tunavailable ->\n\t\t\t\t\tmaybe_tx_is_pending_response(ID, Req);\n\t\t\t\t#tx{} = TX ->\n\t\t\t\t\tBody =\n\t\t\t\t\t\tcase Encoding of\n\t\t\t\t\t\t\tjson ->\n\t\t\t\t\t\t\t\tar_serialize:jsonify(ar_serialize:tx_to_json_struct(TX));\n\t\t\t\t\t\t\tbinary ->\n\t\t\t\t\t\t\t\tar_serialize:tx_to_binary(TX)\n\t\t\t\t\t\tend,\n\t\t\t\t\t{200, #{}, Body, Req}\n\t\t\tend\n\tend.\n\nhandle_get_unconfirmed_tx(Hash, Req, Encoding) ->\n\tcase ar_util:safe_decode(Hash) of\n\t\t{error, invalid} ->\n\t\t\t{400, #{}, <<\"Invalid hash.\">>, Req};\n\t\t{ok, TXID} ->\n\t\t\tcase ar_mempool:get_tx(TXID) of\n\t\t\t\tnot_found ->\n\t\t\t\t\thandle_get_tx(Hash, Req, Encoding);\n\t\t\t\tTX ->\n\t\t\t\t\tBody =\n\t\t\t\t\t\tcase Encoding of\n\t\t\t\t\t\t\tjson ->\n\t\t\t\t\t\t\t\tar_serialize:jsonify(ar_serialize:tx_to_json_struct(TX));\n\t\t\t\t\t\t\tbinary ->\n\t\t\t\t\t\t\t\tar_serialize:tx_to_binary(TX)\n\t\t\t\t\t\tend,\n\t\t\t\t\t{200, #{}, Body, Req}\n\t\t\tend\n\tend.\n\nmaybe_tx_is_pending_response(ID, Req) ->\n\tcase is_a_pending_tx(ID) of\n\t\ttrue ->\n\t\t\t{202, #{}, <<\"Pending\">>, Req};\n\t\tfalse ->\n\t\t\tcase ar_tx_db:get_error_codes(ID) of\n\t\t\t\t{ok, ErrorCodes} ->\n\t\t\t\t\tErrorBody = list_to_binary(lists:join(\" \", ErrorCodes)),\n\t\t\t\t\t{410, #{}, ErrorBody, Req};\n\t\t\t\tnot_found ->\n\t\t\t\t\t{404, #{}, <<\"Not Found.\">>, Req}\n\t\t\tend\n\tend.\n\nserve_tx_data(Req, #tx{ format = 1 } = TX) ->\n\t{200, #{}, ar_util:encode(TX#tx.data), Req};\nserve_tx_data(Req, #tx{ format = 2, id = ID, data_size = DataSize } = TX) ->\n\tDataFilename = ar_storage:tx_data_filepath(TX),\n\tcase filelib:is_file(DataFilename) of\n\t\ttrue ->\n\t\t\t{200, #{}, sendfile(DataFilename), Req};\n\t\tfalse ->\n\t\t\tok = ar_semaphore:acquire(get_tx_data, ?DEFAULT_CALL_TIMEOUT),\n\t\t\tcase ar_data_sync:get_tx_data(ID) of\n\t\t\t\t{ok, Data} ->\n\t\t\t\t\t{200, #{}, ar_util:encode(Data), Req};\n\t\t\t\t{error, tx_data_too_big} ->\n\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => tx_data_too_big }), Req};\n\t\t\t\t{error, not_found} when DataSize == 0 ->\n        \t{200, #{}, <<>>, Req};\n\t\t\t\t{error, not_found} ->\n\t\t\t\t\t{404, #{ <<\"content-type\">> => <<\"text/html; charset=utf-8\">> }, sendfile(\"genesis_data/not_found.html\"), Req};\n\t\t\t\t{error, timeout} ->\n\t\t\t\t\t{503, #{}, jiffy:encode(#{ error => timeout }), Req}\n\t\t\tend\n\tend.\n\nserve_tx_html_data(Req, TX) ->\n\tserve_tx_html_data(Req, TX, ar_http_util:get_tx_content_type(TX)).\n\nserve_tx_html_data(Req, #tx{ format = 1 } = TX, {valid, ContentType}) ->\n\t{200, #{ <<\"content-type\">> => ContentType }, TX#tx.data, Req};\nserve_tx_html_data(Req, #tx{ format = 1 } = TX, none) ->\n\t{200, #{ <<\"content-type\">> => <<\"text/html\">> }, TX#tx.data, Req};\nserve_tx_html_data(Req, #tx{ format = 2 } = TX, {valid, ContentType}) ->\n\tserve_format_2_html_data(Req, ContentType, TX);\nserve_tx_html_data(Req, #tx{ format = 2 } = TX, none) ->\n\tserve_format_2_html_data(Req, <<\"text/html\">>, TX);\nserve_tx_html_data(Req, _TX, invalid) ->\n\t{421, #{}, <<>>, Req}.\n\nserve_format_2_html_data(Req, ContentType, TX) ->\n\tcase ar_storage:read_tx_data(TX) of\n\t\t{ok, Data} ->\n\t\t\t{200, #{ <<\"content-type\">> => ContentType }, Data, Req};\n\t\t{error, enoent} ->\n\t\t\tok = ar_semaphore:acquire(get_tx_data, ?DEFAULT_CALL_TIMEOUT),\n\t\t\tcase ar_data_sync:get_tx_data(TX#tx.id) of\n\t\t\t\t{ok, Data} ->\n\t\t\t\t\t{200, #{ <<\"content-type\">> => ContentType }, Data, 
Req};\n\t\t\t\t{error, tx_data_too_big} ->\n\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => tx_data_too_big }), Req};\n\t\t\t\t{error, not_found} when TX#tx.data_size == 0 ->\n        \t{200, #{ <<\"content-type\">> => ContentType }, <<>>, Req};\n\t\t\t\t{error, not_found} ->\n\t\t\t\t\t{404, #{ <<\"content-type\">> => <<\"text/html; charset=utf-8\">> }, sendfile(\"genesis_data/not_found.html\"), Req};\n\t\t\t\t{error, timeout} ->\n\t\t\t\t\t{503, #{}, jiffy:encode(#{ error => timeout }), Req}\n\t\t\tend\n\tend.\n\nestimate_tx_fee(Size, Addr) ->\n\testimate_tx_fee(Size, Addr, pessimistic).\n\nestimate_tx_fee(Size, Addr, Type) ->\n\tProps =\n\t\tets:select(\n\t\t\tnode_state,\n\t\t\t[{{'$1', '$2'},\n\t\t\t\t[{'or',\n\t\t\t\t\t{'==', '$1', height},\n\t\t\t\t\t{'==', '$1', wallet_list},\n\t\t\t\t\t{'==', '$1', usd_to_ar_rate},\n\t\t\t\t\t{'==', '$1', scheduled_usd_to_ar_rate},\n\t\t\t\t\t{'==', '$1', price_per_gib_minute},\n\t\t\t\t\t{'==', '$1', denomination},\n\t\t\t\t\t{'==', '$1', scheduled_price_per_gib_minute},\n\t\t\t\t\t{'==', '$1', kryder_plus_rate_multiplier}}], ['$_']}]\n\t\t),\n\tHeight = proplists:get_value(height, Props),\n\tCurrentPricePerGiBMinute =  proplists:get_value(price_per_gib_minute, Props),\n\tDenomination = proplists:get_value(denomination, Props),\n\tScheduledPricePerGiBMinute = proplists:get_value(scheduled_price_per_gib_minute, Props),\n\tKryderPlusRateMultiplier = proplists:get_value(kryder_plus_rate_multiplier, Props),\n\tPricePerGiBMinute =\n\t\tcase Type of\n\t\t\tpessimistic ->\n\t\t\t\tmax(CurrentPricePerGiBMinute, ScheduledPricePerGiBMinute);\n\t\t\toptimistic ->\n\t\t\t\tmin(CurrentPricePerGiBMinute, ScheduledPricePerGiBMinute)\n\t\tend,\n\tRootHash = proplists:get_value(wallet_list, Props),\n\tAccounts =\n\t\tcase Addr of\n\t\t\t<<>> ->\n\t\t\t\t#{};\n\t\t\t_ ->\n\t\t\t\tar_wallets:get(RootHash, Addr)\n\t\tend,\n\tSize2 = ar_tx:get_weave_size_increase(Size, Height + 1),\n\tArgs = {Size2, PricePerGiBMinute, KryderPlusRateMultiplier, Addr, Accounts, Height + 1},\n\tDenomination2 =\n\t\tcase Height >= ar_fork:height_2_6() of\n\t\t\ttrue ->\n\t\t\t\tDenomination;\n\t\t\tfalse ->\n\t\t\t\t0\n\t\tend,\n\t{ar_tx:get_tx_fee(Args), Denomination2}.\n\nestimate_tx_fee_v2(Size, Addr) ->\n\tProps =\n\t\tets:select(\n\t\t\tnode_state,\n\t\t\t[{{'$1', '$2'},\n\t\t\t\t[{'or',\n\t\t\t\t\t{'==', '$1', height},\n\t\t\t\t\t{'==', '$1', wallet_list},\n\t\t\t\t\t{'==', '$1', price_per_gib_minute},\n\t\t\t\t\t{'==', '$1', scheduled_price_per_gib_minute},\n\t\t\t\t\t{'==', '$1', kryder_plus_rate_multiplier}}], ['$_']}]\n\t\t),\n\tHeight = proplists:get_value(height, Props),\n\tCurrentPricePerGiBMinute = proplists:get_value(price_per_gib_minute, Props),\n\tScheduledPricePerGiBMinute = proplists:get_value(scheduled_price_per_gib_minute, Props),\n\tKryderPlusRateMultiplier = proplists:get_value(kryder_plus_rate_multiplier, Props),\n\tPricePerGiBMinute = max(CurrentPricePerGiBMinute, ScheduledPricePerGiBMinute),\n\tRootHash = proplists:get_value(wallet_list, Props),\n\tAccounts =\n\t\tcase Addr of\n\t\t\t<<>> ->\n\t\t\t\t#{};\n\t\t\t_ ->\n\t\t\t\tar_wallets:get(RootHash, Addr)\n\t\tend,\n\tSize2 = ar_tx:get_weave_size_increase(Size, Height + 1),\n\tArgs = {Size2, PricePerGiBMinute, KryderPlusRateMultiplier, Addr, Accounts, Height + 1},\n\tar_tx:get_tx_fee2(Args).\n\nhandle_get_block(Type, ID, Req, Pid, Encoding) ->\n\tcase Type of\n\t\t<<\"hash\">> ->\n\t\t\tcase ar_util:safe_decode(ID) of\n\t\t\t\t{error, invalid} ->\n\t\t\t\t\t{404, #{}, <<\"Block not found.\">>, Req};\n\t\t\t\t{ok, 
H} ->\n\t\t\t\t\thandle_get_block(H, Req, Pid, Encoding)\n\t\t\tend;\n\t\t<<\"height\">> ->\n\t\t\tcase ar_node:is_joined() of\n\t\t\t\tfalse ->\n\t\t\t\t\tnot_joined(Req);\n\t\t\t\ttrue ->\n\t\t\t\t\tCurrentHeight = ar_node:get_height(),\n\t\t\t\t\ttry binary_to_integer(ID) of\n\t\t\t\t\t\tHeight when Height < 0 ->\n\t\t\t\t\t\t\t{400, #{}, <<\"Invalid height.\">>, Req};\n\t\t\t\t\t\tHeight when Height > CurrentHeight ->\n\t\t\t\t\t\t\t{404, #{}, <<\"Block not found.\">>, Req};\n\t\t\t\t\t\tHeight ->\n\t\t\t\t\t\t\tcase ar_block_index:get_element_by_height(Height) of\n\t\t\t\t\t\t\t\tnot_found ->\n\t\t\t\t\t\t\t\t\t{404, #{}, <<\"Block not found.\">>, Req};\n\t\t\t\t\t\t\t\t{H, _, _} ->\n\t\t\t\t\t\t\t\t\thandle_get_block(<<\"hash\">>, ar_util:encode(H), Req, Pid,\n\t\t\t\t\t\t\t\t\t\t\tEncoding)\n\t\t\t\t\t\t\tend\n\t\t\t\t\tcatch _:_ ->\n\t\t\t\t\t\t{400, #{}, <<\"Invalid height.\">>, Req}\n\t\t\t\t\tend\n\t\t\tend\n\tend.\n\nhandle_get_block(H, Req, Pid, Encoding) ->\n\tcase ar_block_cache:get(block_cache, H) of\n\t\tnot_found ->\n\t\t\thandle_get_block2(H, Req, Encoding);\n\t\tB ->\n\t\t\tcase {Encoding, lists:any(fun(TX) -> is_binary(TX) end, B#block.txs)} of\n\t\t\t\t{binary, false} ->\n\t\t\t\t\t%% We have found the block in the block cache. Therefore, we can\n\t\t\t\t\t%% include the requested transactions without doing disk lookups.\n\t\t\t\t\tcase read_complete_body(Req, Pid, ?MAX_SERIALIZED_MISSING_TX_INDICES) of\n\t\t\t\t\t\t{ok, Body, Req2} ->\n\t\t\t\t\t\t\tcase ar_util:parse_list_indices(Body) of\n\t\t\t\t\t\t\t\terror ->\n\t\t\t\t\t\t\t\t\t{400, #{}, <<>>, Req2};\n\t\t\t\t\t\t\t\tIndices ->\n\t\t\t\t\t\t\t\t\tMap = collect_missing_transactions(B#block.txs, Indices),\n\t\t\t\t\t\t\t\t\tTXs2 = [maps:get(TX#tx.id, Map, TX#tx.id)\n\t\t\t\t\t\t\t\t\t\t\t|| TX <- B#block.txs],\n\t\t\t\t\t\t\t\t\thandle_get_block3(B#block{ txs = TXs2 }, Req2, binary)\n\t\t\t\t\t\t\tend;\n\t\t\t\t\t\t{error, body_size_too_large} ->\n\t\t\t\t\t\t\t{413, #{}, <<\"Payload too large\">>, Req};\n\t\t\t\t\t\t{error, timeout} ->\n\t\t\t\t\t\t\t{503, #{}, jiffy:encode(#{ error => timeout }), Req}\n\t\t\t\t\tend;\n\t\t\t\t_ ->\n\t\t\t\t\thandle_get_block3(B, Req, Encoding)\n\t\t\tend\n\tend.\n\nhandle_get_block2(H, Req, Encoding) ->\n\tcase ar_storage:read_block(H) of\n\t\tunavailable ->\n\t\t\t{404, #{}, <<\"Block not found.\">>, Req};\n\t\t#block{} = B ->\n\t\t\thandle_get_block3(B, Req, Encoding)\n\tend.\n\nhandle_get_block3(B, Req, Encoding) ->\n\tBin =\n\t\tcase Encoding of\n\t\t\tjson ->\n\t\t\t\tar_serialize:jsonify(ar_serialize:block_to_json_struct(B));\n\t\t\tbinary ->\n\t\t\t\tar_serialize:block_to_binary(B)\n\t\tend,\n\t{200, #{}, Bin, Req}.\n\ncollect_missing_transactions(TXs, Indices) ->\n\tcollect_missing_transactions(TXs, Indices, 0).\n\ncollect_missing_transactions([#tx{ id = TXID } = TX | TXs], [N | Indices], N) ->\n\tmaps:put(TXID, TX, collect_missing_transactions(TXs, Indices, N + 1));\ncollect_missing_transactions([_TX | TXs], Indices, N) ->\n\tcollect_missing_transactions(TXs, Indices, N + 1);\ncollect_missing_transactions(_TXs, [], _N) ->\n\t#{};\ncollect_missing_transactions([], _Indices, _N) ->\n\t#{}.\n\nhandle_post_tx({Req, Pid, Encoding}) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\tcase post_tx_parse_id({Req, Pid, Encoding}) of\n\t\t\t\t{error, invalid_hash, Req2} ->\n\t\t\t\t\t{400, #{}, <<\"Invalid hash.\">>, Req2};\n\t\t\t\t{error, tx_already_processed, _TXID, Req2} ->\n\t\t\t\t\t{208, #{}, <<\"Transaction already processed.\">>, 
Req2};\n\t\t\t\t{error, invalid_signature_type, Req2} ->\n\t\t\t\t\t{400, #{}, <<\"Invalid signature type.\">>, Req2};\n\t\t\t\t{error, invalid_json, Req2} ->\n\t\t\t\t\t{400, #{}, <<\"Invalid JSON.\">>, Req2};\n\t\t\t\t{error, body_size_too_large, Req2} ->\n\t\t\t\t\t{413, #{}, <<\"Payload too large\">>, Req2};\n\t\t\t\t{error, timeout} ->\n\t\t\t\t\t{503, #{}, <<>>, Req};\n\t\t\t\t{ok, TX, Req2} ->\n\t\t\t\t\t{ok, Config} = arweave_config:get_env(),\n\t\t\t\t\tcase ar_semaphore:acquire(post_tx,\n\t\t\t\t\t\t\tConfig#config.post_tx_timeout * 1000) of\n\t\t\t\t\t\t{error, timeout} ->\n\t\t\t\t\t\t\t{503, #{}, <<>>, Req2};\n\t\t\t\t\t\tok ->\n\t\t\t\t\t\t\tPeer = ar_http_util:arweave_peer(Req),\n\t\t\t\t\t\t\tcase handle_post_tx(Req2, Peer, TX) of\n\t\t\t\t\t\t\t\tok ->\n\t\t\t\t\t\t\t\t\t{200, #{}, <<\"OK\">>, Req2};\n\t\t\t\t\t\t\t\t{error_response, {Status, Headers, Body}} ->\n\t\t\t\t\t\t\t\t\tRef = erlang:get(tx_id_ref),\n\t\t\t\t\t\t\t\t\tar_ignore_registry:remove_ref(TX#tx.id, Ref),\n\t\t\t\t\t\t\t\t\t{Status, Headers, Body, Req2}\n\t\t\t\t\t\t\tend\n\t\t\t\t\tend\n\t\t\tend\n\tend.\n\nhandle_post_tx(Req, Peer, TX) ->\n\tcase ar_tx_validator:validate(TX) of\n\t\t{invalid, tx_verification_failed} ->\n\t\t\thandle_post_tx_verification_response();\n\t\t{invalid, last_tx_in_mempool} ->\n\t\t\thandle_post_tx_last_tx_in_mempool_response();\n\t\t{invalid, invalid_last_tx} ->\n\t\t\thandle_post_tx_verification_response();\n\t\t{invalid, tx_bad_anchor} ->\n\t\t\thandle_post_tx_bad_anchor_response();\n\t\t{invalid, tx_already_in_weave} ->\n\t\t\thandle_post_tx_already_in_weave_response();\n\t\t{invalid, tx_already_in_mempool} ->\n\t\t\thandle_post_tx_already_in_mempool_response();\n\t\t{invalid, invalid_data_root_size} ->\n\t\t\thandle_post_tx_invalid_data_root_response();\n\t\t{valid, TX2} ->\n\t\t\tar_data_sync:add_data_root_to_disk_pool(TX2#tx.data_root, TX2#tx.data_size,\n\t\t\t\t\tTX#tx.id),\n\t\t\thandle_post_tx_accepted(Req, TX, Peer)\n\tend.\n\nhandle_post_tx_accepted(Req, TX, Peer) ->\n\t%% Exclude successful requests with valid transactions from the\n\t%% IP-based throttling, to avoid connectivity issues at the times\n\t%% of excessive transaction volumes.\n\t{A, B, C, D, _} = Peer, %%-> Peer is the peer key for the general rate limiter group.\n\tarweave_limiter:reduce_for_peer(general, {A, B, C, D}),\n\tBodyReadTime = ar_http_req:body_read_time(Req),\n\tar_peers:rate_gossiped_data(Peer, tx,\n\t\terlang:convert_time_unit(BodyReadTime, native, microsecond),\n\t\tbyte_size(term_to_binary(TX))),\n\tar_events:send(tx, {new, TX, {pushed, Peer}}),\n\tTXID = TX#tx.id,\n\tRef = erlang:get(tx_id_ref),\n\tar_ignore_registry:remove_ref(TXID, Ref),\n\tar_ignore_registry:add_temporary(TXID, 10 * 60 * 1000),\n\tok.\n\nhandle_post_tx_verification_response() ->\n\t{error_response, {400, #{}, <<\"Transaction verification failed.\">>}}.\n\nhandle_post_tx_last_tx_in_mempool_response() ->\n\t{error_response, {400, #{}, <<\"Invalid anchor (last_tx from mempool).\">>}}.\n\nhandle_post_tx_bad_anchor_response() ->\n\t{error_response, {400, #{}, <<\"Invalid anchor (last_tx).\">>}}.\n\nhandle_post_tx_already_in_weave_response() ->\n\t{error_response, {400, #{}, <<\"Transaction is already on the weave.\">>}}.\n\nhandle_post_tx_already_in_mempool_response() ->\n\t{error_response, {400, #{}, <<\"Transaction is already in the mempool.\">>}}.\n\nhandle_post_tx_invalid_data_root_response() ->\n\t{error_response, {400, #{}, <<\"The attached data is split in an unknown 
way.\">>}}.\n\nhandle_get_data_sync_record(Start, Limit, Req) ->\n\tFormat =\n\t\tcase cowboy_req:header(<<\"content-type\">>, Req) of\n\t\t\t<<\"application/json\">> ->\n\t\t\t\tjson;\n\t\t\t_ ->\n\t\t\t\tetf\n\t\tend,\n\tOptions = #{ start => Start, limit => Limit, format => Format },\n\tcase ar_global_sync_record:get_serialized_sync_record(Options) of\n\t\t{ok, Binary} ->\n\t\t\t{200, #{}, Binary, Req};\n\t\t{error, timeout} ->\n\t\t\t{503, #{}, jiffy:encode(#{ error => timeout }), Req}\n\tend.\n\nhandle_get_data_sync_record(Start, End, Limit, Req) ->\n\tFormat =\n\t\tcase cowboy_req:header(<<\"content-type\">>, Req) of\n\t\t\t<<\"application/json\">> ->\n\t\t\t\tjson;\n\t\t\t_ ->\n\t\t\t\tetf\n\t\tend,\n\tOptions = #{ start => Start, right_bound => End, limit => Limit, format => Format },\n\tcase ar_global_sync_record:get_serialized_sync_record(Options) of\n\t\t{ok, Binary} ->\n\t\t\t{200, #{}, Binary, Req};\n\t\t{error, timeout} ->\n\t\t\t{503, #{}, jiffy:encode(#{ error => timeout }), Req}\n\tend.\n\nhandle_get_footprints(Partition, FootprintNumber, Req) ->\n\tFootprintsPerPartition = ar_block:get_replica_2_9_entropy_count(),\n\tCheckFootprintNumber =\n\t\tcase FootprintNumber >= FootprintsPerPartition of\n\t\t\ttrue ->\n\t\t\t\t{400, #{}, jiffy:encode(#{ error => footprint_number_too_large }), Req};\n\t\t\tfalse ->\n\t\t\t\tok\n\t\tend,\n\t{Start, End} = ar_replica_2_9:get_entropy_partition_range(Partition),\n\tFindStorageModules =\n\t\tcase CheckFootprintNumber of\n\t\t\tok ->\n\t\t\t\tcase ar_storage_module:get_all(Start, End) of\n\t\t\t\t\t[] ->\n\t\t\t\t\t\t{404, #{}, <<>>, Req};\n\t\t\t\t\tModules ->\n\t\t\t\t\t\t{ok, Modules}\n\t\t\t\tend;\n\t\t\tReply ->\n\t\t\t\tReply\n\t\tend,\n\tFindStoreIDPacking =\n\t\tcase FindStorageModules of\n\t\t\t{ok, StorageModules} ->\n\t\t\t\t{ok, [{ar_storage_module:id(Module), Packing}\n\t\t\t\t\t\t|| {_, _, Packing} = Module <- StorageModules]};\n\t\t\tReply2 ->\n\t\t\t\tReply2\n\t\tend,\n\tCollectIntervals =\n\t\tcase FindStoreIDPacking of\n\t\t\t{ok, L} ->\n\t\t\t\t{ok, lists:foldl(\n\t\t\t\t\tfun({StoreID2, Packing2}, Acc) ->\n\t\t\t\t\t\tIntervals = ar_footprint_record:get_intervals(Partition, FootprintNumber, Packing2, StoreID2),\n\t\t\t\t\t\tar_intervals:union(Acc, Intervals)\n\t\t\t\t\tend,\n\t\t\t\t\tar_intervals:new(),\n\t\t\t\t\tL\n\t\t\t\t)};\n\t\t\tReply3 ->\n\t\t\t\tReply3\n\t\tend,\n\tcase CollectIntervals of\n\t\t{ok, Intervals2} ->\n\t\t\tPayload = jiffy:encode(ar_serialize:footprint_to_json_map(Intervals2)),\n\t\t\t{200, #{}, Payload, Req};\n\t\tReply4 ->\n\t\t\tReply4\n\tend.\n\nhandle_get_chunk(OffsetBinary, Req, Encoding) ->\n\tcase catch binary_to_integer(OffsetBinary) of\n\t\tOffset when is_integer(Offset) ->\n\t\t\tcase << Offset:(?NOTE_SIZE * 8) >> of\n\t\t\t\t%% A positive number represented by =< ?NOTE_SIZE bytes.\n\t\t\t\t<< Offset:(?NOTE_SIZE * 8) >> ->\n\t\t\t\t\tRequestedPacking = ar_serialize:decode_packing(\n\t\t\t\t\t\tcowboy_req:header(<<\"x-packing\">>, Req, <<\"unpacked\">>),\n\t\t\t\t\t\tany\n\t\t\t\t\t),\n\t\t\t\t\tIsBucketBasedOffset =\n\t\t\t\t\t\tcase cowboy_req:header(<<\"x-bucket-based-offset\">>, Req, not_set) of\n\t\t\t\t\t\t\tnot_set ->\n\t\t\t\t\t\t\t\tfalse;\n\t\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t\ttrue\n\t\t\t\t\t\tend,\n\t\t\t\t\t{ReadPacking, CheckRecords} =\n\t\t\t\t\t\tcase ar_sync_record:is_recorded(Offset, ar_data_sync) of\n\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t{none, {reply, {404, #{}, <<>>, Req}}};\n\t\t\t\t\t\t\t{true, _} ->\n\t\t\t\t\t\t\t\t%% Chunk is recorded but packing is 
unknown.\n\t\t\t\t\t\t\t\t{none, {reply, {404, #{}, <<>>, Req}}};\n\t\t\t\t\t\t\t{{true, RequestedPacking}, _StoreID} ->\n\t\t\t\t\t\t\t\tok = ar_semaphore:acquire(get_chunk, ?DEFAULT_CALL_TIMEOUT),\n\t\t\t\t\t\t\t\t{RequestedPacking, ok};\n\t\t\t\t\t\t\t{{true, Packing}, _StoreID} when RequestedPacking == any ->\n\t\t\t\t\t\t\t\tok = ar_semaphore:acquire(get_chunk, ?DEFAULT_CALL_TIMEOUT),\n\t\t\t\t\t\t\t\t{Packing, ok};\n\t\t\t\t\t\t\t{{true, _}, _StoreID} ->\n\t\t\t\t\t\t\t\t{ok, Config} = arweave_config:get_env(),\n\t\t\t\t\t\t\t\tcase lists:member(pack_served_chunks, Config#config.enable) of\n\t\t\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t\t\t{none, {reply, {404, #{}, <<>>, Req}}};\n\t\t\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\t\t\tok = ar_semaphore:acquire(get_and_pack_chunk,\n\t\t\t\t\t\t\t\t\t\t\t\t?DEFAULT_CALL_TIMEOUT),\n\t\t\t\t\t\t\t\t\t\t{RequestedPacking, ok}\n\t\t\t\t\t\t\t\tend\n\t\t\t\t\t\tend,\n\t\t\t\t\tcase CheckRecords of\n\t\t\t\t\t\t{reply, Reply} ->\n\t\t\t\t\t\t\tReply;\n\t\t\t\t\t\tok ->\n\t\t\t\t\t\t\tArgs = #{ packing => ReadPacking,\n\t\t\t\t\t\t\t\t\tbucket_based_offset => IsBucketBasedOffset,\n\t\t\t\t\t\t\t\t\torigin => http },\n\t\t\t\t\t\t\tcase ar_data_sync:get_chunk(Offset, Args) of\n\t\t\t\t\t\t\t\t{ok, Proof} ->\n\t\t\t\t\t\t\t\t\tProof2 = maps:remove(unpacked_chunk,\n\t\t\t\t\t\t\t\t\t\t\tProof#{ packing => ReadPacking }),\n\t\t\t\t\t\t\t\t\tReply =\n\t\t\t\t\t\t\t\t\t\tcase Encoding of\n\t\t\t\t\t\t\t\t\t\t\tjson ->\n\t\t\t\t\t\t\t\t\t\t\t\tjiffy:encode(\n\t\t\t\t\t\t\t\t\t\t\t\t\tar_serialize:poa_map_to_json_map(\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tProof2));\n\t\t\t\t\t\t\t\t\t\t\tbinary ->\n\t\t\t\t\t\t\t\t\t\t\t\tar_serialize:poa_map_to_binary(Proof2)\n\t\t\t\t\t\t\t\t\t\tend,\n\t\t\t\t\t\t\t\t\t{200, #{}, Reply, Req};\n\t\t\t\t\t\t\t\t{error, chunk_not_found} ->\n\t\t\t\t\t\t\t\t\t{404, #{}, <<>>, Req};\n\t\t\t\t\t\t\t\t{error, invalid_padding} ->\n\t\t\t\t\t\t\t\t\t{404, #{}, <<>>, Req};\n\t\t\t\t\t\t\t\t{error, chunk_failed_validation} ->\n\t\t\t\t\t\t\t\t\t{404, #{}, <<>>, Req};\n\t\t\t\t\t\t\t\t{error, chunk_stored_in_different_packing_only} ->\n\t\t\t\t\t\t\t\t\t{404, #{}, <<>>, Req};\n\t\t\t\t\t\t\t\t{error, not_joined} ->\n\t\t\t\t\t\t\t\t\tnot_joined(Req);\n\t\t\t\t\t\t\t\t{error, Error} ->\n\t\t\t\t\t\t\t\t\t?LOG_ERROR([{event, get_chunk_error}, {offset, Offset},\n\t\t\t\t\t\t\t\t\t\t{requested_packing,\n\t\t\t\t\t\t\t\t\t\t\tar_serialize:encode_packing(RequestedPacking, false)},\n\t\t\t\t\t\t\t\t\t\t{read_packing,\n\t\t\t\t\t\t\t\t\t\t\tar_serialize:encode_packing(ReadPacking, false)},\n\t\t\t\t\t\t\t\t\t\t{error, Error}]),\n\t\t\t\t\t\t\t\t\t{500, #{}, <<>>, Req}\n\t\t\t\t\t\t\tend\n\t\t\t\t\tend;\n\t\t\t\t_ ->\n\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => offset_out_of_bounds }), Req}\n\t\t\tend;\n\t\t_ ->\n\t\t\t{400, #{}, jiffy:encode(#{ error => invalid_offset }), Req}\n\tend.\n\nhandle_get_chunk_proof(OffsetBinary, Req, Encoding) ->\n\tcase catch binary_to_integer(OffsetBinary) of\n\t\tOffset when is_integer(Offset) ->\n\t\t\tcase << Offset:(?NOTE_SIZE * 8) >> of\n\t\t\t\t%% A positive number represented by =< ?NOTE_SIZE bytes.\n\t\t\t\t<< Offset:(?NOTE_SIZE * 8) >> ->\n\t\t\t\t\thandle_get_chunk_proof2(Offset, Req, Encoding);\n\t\t\t\t_ ->\n\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => offset_out_of_bounds }), Req}\n\t\t\tend;\n\t\t_ ->\n\t\t\t{400, #{}, jiffy:encode(#{ error => invalid_offset }), Req}\n\tend.\n\nhandle_get_chunk_proof2(Offset, Req, Encoding) ->\n\tIsBucketBasedOffset =\n\t\tcase 
cowboy_req:header(<<\"x-bucket-based-offset\">>, Req, not_set) of\n\t\t\tnot_set ->\n\t\t\t\tfalse;\n\t\t\t_ ->\n\t\t\t\ttrue\n\t\tend,\n\tok = ar_semaphore:acquire(get_chunk, ?DEFAULT_CALL_TIMEOUT),\n\tCheckRecords =\n\t\tcase ar_sync_record:is_recorded(Offset, ar_data_sync) of\n\t\t\tfalse ->\n\t\t\t\t{reply, {404, #{}, <<>>, Req}};\n\t\t\t{{true, _Packing}, _StoreID} ->\n\t\t\t\tok\n\t\tend,\n\tcase CheckRecords of\n\t\t{reply, Reply} ->\n\t\t\tReply;\n\t\tok ->\n\t\t\tArgs = #{ bucket_based_offset => IsBucketBasedOffset },\n\t\t\tcase ar_data_sync:get_chunk_proof(Offset, Args) of\n\t\t\t\t{ok, Proof} ->\n\t\t\t\t\tReply =\n\t\t\t\t\t\tcase Encoding of\n\t\t\t\t\t\t\tjson ->\n\t\t\t\t\t\t\t\tjiffy:encode(\n\t\t\t\t\t\t\t\t\tar_serialize:poa_no_chunk_map_to_json_map(\n\t\t\t\t\t\t\t\t\t\t\tProof));\n\t\t\t\t\t\t\tbinary ->\n\t\t\t\t\t\t\t\tar_serialize:poa_no_chunk_map_to_binary(Proof)\n\t\t\t\t\t\tend,\n\t\t\t\t\t{200, #{}, Reply, Req};\n\t\t\t\t{error, chunk_not_found} ->\n\t\t\t\t\t{404, #{}, <<>>, Req};\n\t\t\t\t{error, not_joined} ->\n\t\t\t\t\tnot_joined(Req);\n\t\t\t\t{error, failed_to_read_chunk} ->\n\t\t\t\t\t{500, #{}, <<>>, Req}\n\t\t\tend\n\tend.\n\nget_data_root_from_headers(Req) ->\n\tcase {cowboy_req:header(<<\"arweave-data-root\">>, Req, not_set),\n\t\t\tcowboy_req:header(<<\"arweave-data-size\">>, Req, not_set)} of\n\t\t{not_set, _} ->\n\t\t\tnot_set;\n\t\t{_, not_set} ->\n\t\t\tnot_set;\n\t\t{EncodedDataRoot, EncodedDataSize} when byte_size(EncodedDataRoot) == 43 ->\n\t\t\tcase catch binary_to_integer(EncodedDataSize) of\n\t\t\t\tDataSize when is_integer(DataSize) ->\n\t\t\t\t\tcase ar_util:safe_decode(EncodedDataRoot) of\n\t\t\t\t\t\t{ok, DataRoot} ->\n\t\t\t\t\t\t\t{ok, {DataRoot, DataSize}};\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\tnot_set\n\t\t\t\t\tend;\n\t\t\t\t_ ->\n\t\t\t\t\tnot_set\n\t\t\tend;\n\t\t_ ->\n\t\t\tnot_set\n\tend.\n\nparse_chunk(Req, Pid) ->\n\tcase read_complete_body(Req, Pid, ?MAX_SERIALIZED_CHUNK_PROOF_SIZE) of\n\t\t{ok, Body, Req2} ->\n\t\t\tcase ar_serialize:json_decode(Body, [return_maps]) of\n\t\t\t\t{ok, JSON} ->\n\t\t\t\t\tcase catch ar_serialize:json_map_to_poa_map(JSON) of\n\t\t\t\t\t\t{'EXIT', _} ->\n\t\t\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => invalid_json }), Req2};\n\t\t\t\t\t\tProof ->\n\t\t\t\t\t\t\t{ok, {Proof, Req2}}\n\t\t\t\t\tend;\n\t\t\t\t{error, _} ->\n\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => invalid_json }), Req2}\n\t\t\tend;\n\t\t{error, body_size_too_large} ->\n\t\t\t{413, #{}, <<\"Payload too large\">>, Req};\n\t\t{error, timeout} ->\n\t\t\t{503, #{}, jiffy:encode(#{ error => timeout }), Req}\n\tend.\n\nhandle_post_chunk(Proof, Req) ->\n\thandle_post_chunk(check_data_size, Proof, Req).\n\nhandle_post_chunk(check_data_size, Proof, Req) ->\n\tcase maps:get(data_size, Proof) > trunc(math:pow(2, ?NOTE_SIZE * 8)) - 1 of\n\t\ttrue ->\n\t\t\t{400, #{}, jiffy:encode(#{ error => data_size_too_big }), Req};\n\t\tfalse ->\n\t\t\thandle_post_chunk(check_chunk_size, Proof, Req)\n\tend;\nhandle_post_chunk(check_chunk_size, Proof, Req) ->\n\tcase byte_size(maps:get(chunk, Proof)) > ?DATA_CHUNK_SIZE of\n\t\ttrue ->\n\t\t\t{400, #{}, jiffy:encode(#{ error => chunk_too_big }), Req};\n\t\tfalse ->\n\t\t\thandle_post_chunk(check_data_path_size, Proof, Req)\n\tend;\nhandle_post_chunk(check_data_path_size, Proof, Req) ->\n\tcase byte_size(maps:get(data_path, Proof)) > ?MAX_PATH_SIZE of\n\t\ttrue ->\n\t\t\t{400, #{}, jiffy:encode(#{ error => data_path_too_big }), Req};\n\t\tfalse ->\n\t\t\thandle_post_chunk(check_offset_field, Proof, 
Req)\n\tend;\nhandle_post_chunk(check_offset_field, Proof, Req) ->\n\tcase maps:is_key(offset, Proof) of\n\t\tfalse ->\n\t\t\t{400, #{}, jiffy:encode(#{ error => offset_field_required }), Req};\n\t\ttrue ->\n\t\t\thandle_post_chunk(check_offset_size, Proof, Req)\n\tend;\nhandle_post_chunk(check_offset_size, Proof, Req) ->\n\tcase maps:get(offset, Proof) > trunc(math:pow(2, ?NOTE_SIZE * 8)) - 1 of\n\t\ttrue ->\n\t\t\t{400, #{}, jiffy:encode(#{ error => offset_too_big }), Req};\n\t\tfalse ->\n\t\t\thandle_post_chunk(check_chunk_proof_ratio, Proof, Req)\n\tend;\nhandle_post_chunk(check_chunk_proof_ratio, Proof, Req) ->\n\tDataPath = maps:get(data_path, Proof),\n\tChunk = maps:get(chunk, Proof),\n\tDataSize = maps:get(data_size, Proof),\n\tcase ar_data_sync:is_chunk_proof_ratio_attractive(byte_size(Chunk), DataSize, DataPath) of\n\t\tfalse ->\n\t\t\t{400, #{}, jiffy:encode(#{ error => chunk_proof_ratio_not_attractive }), Req};\n\t\ttrue ->\n\t\t\thandle_post_chunk(validate_proof, Proof, Req)\n\tend;\nhandle_post_chunk(validate_proof, Proof, Req) ->\n\tParent = self(),\n\t#{ chunk := Chunk, data_path := DataPath, data_size := TXSize, offset := Offset,\n\t\t\tdata_root := DataRoot } = Proof,\n\tspawn(fun() ->\n\t\t\tParent ! ar_data_sync:add_chunk_to_disk_pool(\n\t\t\t\tDataRoot, DataPath, Chunk, Offset, TXSize)\n\t\t\tend),\n\treceive\n\t\tok ->\n\t\t\t{200, #{}, <<>>, Req};\n\t\ttemporary ->\n\t\t\t{303, #{}, <<>>, Req};\n\t\t{error, data_root_not_found} ->\n\t\t\t{400, #{}, jiffy:encode(#{ error => data_root_not_found }), Req};\n\t\t{error, exceeds_disk_pool_size_limit} ->\n\t\t\t{400, #{}, jiffy:encode(#{ error => exceeds_disk_pool_size_limit }), Req};\n\t\t{error, disk_full} ->\n\t\t\t{400, #{}, jiffy:encode(#{ error => disk_full }), Req};\n\t\t{error, failed_to_store_chunk} ->\n\t\t\t{500, #{}, <<>>, Req};\n\t\t{error, invalid_proof} ->\n\t\t\t{400, #{}, jiffy:encode(#{ error => invalid_proof }), Req}\n\tend.\n\ncheck_internal_api_secret(Req) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tcheck_api_secret(\n\t\t<<\"x-internal-api-secret\">>, Config#config.internal_api_secret, <<\"Internal API\">>, Req).\n\ncheck_cm_api_secret(Req) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tcheck_api_secret(<<\"x-cm-api-secret\">>, Config#config.cm_api_secret, <<\"CM API\">>, Req).\n\ncheck_api_secret(Header, Secret, APIName, Req) ->\n\tReject = fun(Msg) ->\n\t\tlog_api_reject(Msg, Req),\n\t\t%% Reduce efficiency of timing attacks by sleeping randomly between 1-2s.\n\t\ttimer:sleep(rand:uniform(1000) + 1000),\n\t\t{reject, {\n\t\t\t421, #{},\n\t\t\t<<APIName/bitstring, \" disabled or invalid \", APIName/bitstring, \" secret in request.\">>\n\t\t}}\n\tend,\n\tcase {Secret, cowboy_req:header(Header, Req)} of\n\t\t{not_set, _} ->\n\t\t\tReject(<<\"Request to disabled \", APIName/bitstring>>);\n\t\t{Secret, Secret} when is_binary(Secret) ->\n\t\t\tpass;\n\t\t_ ->\n\t\t\tReject(<<\"Invalid secret for \", APIName/bitstring, \" request\">>)\n\tend.\n\nlog_api_reject(Msg, Req) ->\n\tspawn(fun() ->\n\t\tPath = ar_http_iface_server:split_path(cowboy_req:path(Req)),\n\t\t{IpAddr, _Port} = cowboy_req:peer(Req),\n\t\tBinIpAddr = list_to_binary(inet:ntoa(IpAddr)),\n\t\t?LOG_WARNING(\"~s: IP address: ~s Path: ~p\", [Msg, BinIpAddr, Path])\n\tend).\n\n%% @doc Convert a blocks field with the given label into a string.\nblock_field_to_string(<<\"timestamp\">>, Res) -> integer_to_list(Res);\nblock_field_to_string(<<\"last_retarget\">>, Res) -> integer_to_list(Res);\nblock_field_to_string(<<\"diff\">>, Res) -> 
integer_to_list(Res);\nblock_field_to_string(<<\"cumulative_diff\">>, Res) -> integer_to_list(Res);\nblock_field_to_string(<<\"height\">>, Res) -> integer_to_list(Res);\nblock_field_to_string(<<\"txs\">>, Res) -> ar_serialize:jsonify(Res);\nblock_field_to_string(<<\"hash_list\">>, Res) -> ar_serialize:jsonify(Res);\nblock_field_to_string(<<\"wallet_list\">>, Res) -> ar_serialize:jsonify(Res);\nblock_field_to_string(<<\"usd_to_ar_rate\">>, Res) -> ar_serialize:jsonify(Res);\nblock_field_to_string(<<\"scheduled_usd_to_ar_rate\">>, Res) -> ar_serialize:jsonify(Res);\nblock_field_to_string(<<\"poa\">>, Res) -> ar_serialize:jsonify(Res);\nblock_field_to_string(_, Res) -> Res.\n\n%% @doc Return true if TXID is a pending tx.\nis_a_pending_tx(TXID) ->\n\tar_mempool:has_tx(TXID).\n\ndecode_block(JSON, json) ->\n\ttry\n\t\t{Struct} = ar_serialize:dejsonify(JSON),\n\t\tJSONB = val_for_key(<<\"new_block\">>, Struct),\n\t\tBShadow = ar_serialize:json_struct_to_block(JSONB),\n\t\t{ok, BShadow}\n\tcatch\n\t\tException:Reason ->\n\t\t\t{error, {Exception, Reason}}\n\tend;\ndecode_block(Bin, binary) ->\n\ttry\n\t\tar_serialize:binary_to_block(Bin)\n\tcatch\n\t\tException:Reason ->\n\t\t\t{error, {Exception, Reason}}\n\tend.\n\n%% @doc Convenience function for lists:keyfind(Key, 1, List). Returns Value, not {Key, Value}.\nval_for_key(K, L) ->\n\tcase lists:keyfind(K, 1, L) of\n\t\tfalse -> false;\n\t\t{K, V} -> V\n\tend.\n\nhandle_block_announcement(#block_announcement{ indep_hash = H, previous_block = PrevH,\n\t\ttx_prefixes = Prefixes, recall_byte2 = RecallByte2 }, Req) ->\n\tcase ar_ignore_registry:member(H) of\n\t\ttrue ->\n\t\t\tcheck_block_receive_timestamp(H),\n\t\t\t{208, #{}, <<>>, Req};\n\t\tfalse ->\n\t\t\tcase ar_node:get_block_shadow_from_cache(PrevH) of\n\t\t\t\tnot_found ->\n\t\t\t\t\t{412, #{}, <<>>, Req};\n\t\t\t\t#block{} ->\n\t\t\t\t\tIndices = collect_missing_tx_indices(Prefixes),\n\t\t\t\t\tprometheus_counter:inc(block_announcement_reported_transactions,\n\t\t\t\t\t\t\tlength(Prefixes)),\n\t\t\t\t\tprometheus_counter:inc(block_announcement_missing_transactions,\n\t\t\t\t\t\t\tlength(Indices)),\n\t\t\t\t\tResponse = #block_announcement_response{ missing_chunk = true,\n\t\t\t\t\t\t\tmissing_tx_indices = Indices },\n\t\t\t\t\tResponse2 =\n\t\t\t\t\t\tcase RecallByte2 == undefined of\n\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\tResponse;\n\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\tResponse#block_announcement_response{ missing_chunk2 = true }\n\t\t\t\t\t\tend,\n\t\t\t\t\t{200, #{}, ar_serialize:block_announcement_response_to_binary(Response2),\n\t\t\t\t\t\t\tReq}\n\t\t\tend\n\tend.\n\ncollect_missing_tx_indices(Prefixes) ->\n\tcollect_missing_tx_indices(Prefixes, [], 0).\n\ncollect_missing_tx_indices([], Indices, _N) ->\n\tlists:reverse(Indices);\ncollect_missing_tx_indices([Prefix | Prefixes], Indices, N) ->\n\tcase ets:member(tx_prefixes, Prefix) of\n\t\tfalse ->\n\t\t\tcollect_missing_tx_indices(Prefixes, [N | Indices], N + 1);\n\t\ttrue ->\n\t\t\tcollect_missing_tx_indices(Prefixes, Indices, N + 1)\n\tend.\n\npost_block(request, {Req, Pid, Encoding}, ReceiveTimestamp) ->\n\tPeer = ar_http_util:arweave_peer(Req),\n\tcase ar_blacklist_middleware:is_peer_banned(Peer) of\n\t\tnot_banned ->\n\t\t\tpost_block(check_joined, Peer, {Req, Pid, Encoding}, ReceiveTimestamp);\n\t\tbanned ->\n\t\t\t{403, #{}, <<\"IP address blocked due to previous request.\">>, Req}\n\tend.\n\npost_block(check_joined, Peer, {Req, Pid, Encoding}, ReceiveTimestamp) ->\n\tcase ar_node:is_joined() of\n\t\ttrue 
->\n\t\t\tConfirmedHeight = ar_node:get_height() - ar_block:get_consensus_window_size(),\n\t\t\tcase {Encoding, ConfirmedHeight >= ar_fork:height_2_6()} of\n\t\t\t\t{json, true} ->\n\t\t\t\t\t%% We state explicitly here that POST /block is not\n\t\t\t\t\t%% supported after the 2.6 fork. However, this check is not strictly\n\t\t\t\t\t%% necessary because ar_serialize:json_struct_to_block/1 fails\n\t\t\t\t\t%% unless the block height is smaller than the fork 2.6 height.\n\t\t\t\t\t{400, #{}, <<>>, Req};\n\t\t\t\t_ ->\n\t\t\t\t\tpost_block(check_block_hash_header, Peer, {Req, Pid, Encoding},\n\t\t\t\t\t\t\tReceiveTimestamp)\n\t\t\tend;\n\t\tfalse ->\n\t\t\t%% The node is not ready to validate and accept blocks.\n\t\t\t%% If the network adopts this block, ar_poller will catch up.\n\t\t\t{503, #{}, <<\"Not joined.\">>, Req}\n\tend;\npost_block(check_block_hash_header, Peer, {Req, Pid, Encoding}, ReceiveTimestamp) ->\n\tcase cowboy_req:header(<<\"arweave-block-hash\">>, Req, not_set) of\n\t\tnot_set ->\n\t\t\tpost_block(read_body, Peer, {Req, Pid, Encoding}, ReceiveTimestamp);\n\t\tEncodedBH ->\n\t\t\tcase ar_util:safe_decode(EncodedBH) of\n\t\t\t\t{ok, BH} when byte_size(BH) =< 48 ->\n\t\t\t\t\tcase ar_ignore_registry:member(BH) of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\tcheck_block_receive_timestamp(BH),\n\t\t\t\t\t\t\t{208, #{}, <<\"Block already processed.\">>, Req};\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tpost_block(read_body, Peer, {Req, Pid, Encoding},\n\t\t\t\t\t\t\t\t\tReceiveTimestamp)\n\t\t\t\t\tend;\n\t\t\t\t_ ->\n\t\t\t\t\tpost_block(read_body, Peer, {Req, Pid, Encoding}, ReceiveTimestamp)\n\t\t\tend\n\tend;\npost_block(read_body, Peer, {Req, Pid, Encoding}, ReceiveTimestamp) ->\n\tcase read_complete_body(Req, Pid) of\n\t\t{ok, Body, Req2} ->\n\t\t\tcase decode_block(Body, Encoding) of\n\t\t\t\t{error, _} ->\n\t\t\t\t\t{400, #{}, <<\"Invalid block.\">>, Req2};\n\t\t\t\t{ok, BShadow} ->\n\t\t\t\t\tpost_block(check_transactions_are_present, {BShadow, Peer}, Req2,\n\t\t\t\t\t\t\tReceiveTimestamp)\n\t\t\tend;\n\t\t{error, body_size_too_large} ->\n\t\t\t{413, #{}, <<\"Payload too large\">>, Req};\n\t\t{error, timeout} ->\n\t\t\t{503, #{}, jiffy:encode(#{ error => timeout }), Req}\n\tend;\npost_block(check_transactions_are_present, {BShadow, Peer}, Req, ReceiveTimestamp) ->\n\tcase erlang:get(post_block2) of\n\t\ttrue ->\n\t\t\tcase get_missing_tx_identifiers(BShadow#block.txs) of\n\t\t\t\t[] ->\n\t\t\t\t\tpost_block(enqueue_block, {BShadow, Peer}, Req, ReceiveTimestamp);\n\t\t\t\t{error, tx_list_too_long} ->\n\t\t\t\t\t{400, #{}, <<>>, Req};\n\t\t\t\tMissingTXIDs ->\n\t\t\t\t\t{418, #{}, encode_txids(MissingTXIDs), Req}\n\t\t\tend;\n\t\t_ -> % POST /block; do not reject for backwards-compatibility\n\t\t\tpost_block(enqueue_block, {BShadow, Peer}, Req, ReceiveTimestamp)\n\tend;\npost_block(enqueue_block, {B, Peer}, Req, ReceiveTimestamp) ->\n\tB2 =\n\t\tcase B#block.height >= ar_fork:height_2_6() of\n\t\t\ttrue ->\n\t\t\t\tB;\n\t\t\tfalse ->\n\t\t\t\tcase cowboy_req:header(<<\"arweave-recall-byte\">>, Req, not_set) of\n\t\t\t\t\tnot_set ->\n\t\t\t\t\t\tB;\n\t\t\t\t\tByteBin ->\n\t\t\t\t\t\tcase catch binary_to_integer(ByteBin) of\n\t\t\t\t\t\t\tRecallByte when is_integer(RecallByte) ->\n\t\t\t\t\t\t\t\tB#block{ recall_byte = RecallByte };\n\t\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t\tB\n\t\t\t\t\t\tend\n\t\t\t\tend\n\t\tend,\n\t?LOG_INFO([{event, received_block}, {block, ar_util:encode(B#block.indep_hash)},\n\t\t{peer, ar_util:format_peer(Peer)}]),\n\tBodyReadTime = 
ar_http_req:body_read_time(Req),\n\tcase ar_block_pre_validator:pre_validate(B2, Peer, ReceiveTimestamp) of\n\t\tok ->\n\t\t\tar_peers:rate_gossiped_data(Peer, block,\n\t\t\t\terlang:convert_time_unit(BodyReadTime, native, microsecond),\n\t\t\t\tbyte_size(term_to_binary(B)));\n\t\t_ ->\n\t\t\tok\n\tend,\n\t{200, #{}, <<\"OK\">>, Req}.\n\ncheck_block_receive_timestamp(H) ->\n\tcase ar_block_cache:get(block_cache, H) of\n\t\tnot_found ->\n\t\t\tnot_found;\n\t\tB ->\n\t\t\tcase B#block.receive_timestamp of\n\t\t\t\tundefined ->\n\t\t\t\t\t%% This node mined block H and this is the first time it's been\n\t\t\t\t\t%% gossipped back to it. Update the node's receive_timestamp.\n\t\t\t\t\tar_events:send(block, {mined_block_received, H, erlang:timestamp()});\n\t\t\t\t_ ->\n\t\t\t\t\tok\n\t\t\tend\n\tend.\n\nhandle_post_partial_solution(Req, Pid) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tCMExitNode = ar_coordination:is_exit_peer() andalso ar_pool:is_client(),\n\tcase {Config#config.is_pool_server, CMExitNode} of\n\t\t{false, false} ->\n\t\t\t{501, #{}, jiffy:encode(#{ error => configuration }), Req};\n\t\t{true, _} ->\n\t\t\tcase check_internal_api_secret(Req) of\n\t\t\t\t{reject, {Status, Headers, Body}} ->\n\t\t\t\t\t{Status, Headers, Body, Req};\n\t\t\t\tpass ->\n\t\t\t\t\thandle_post_partial_solution_pool_server(Req, Pid)\n\t\t\tend;\n\t\t{_, true} ->\n\t\t\tcase check_cm_api_secret(Req) of\n\t\t\t\t{reject, {Status, Headers, Body}} ->\n\t\t\t\t\t{Status, Headers, Body, Req};\n\t\t\t\tpass ->\n\t\t\t\t\thandle_post_partial_solution_cm_exit_peer_pool_client(Req, Pid)\n\t\t\tend\n\tend.\n\nhandle_post_partial_solution_pool_server(Req, Pid) ->\n\tcase read_complete_body(Req, Pid) of\n\t\t{ok, Body, Req2} ->\n\t\t\tcase catch ar_serialize:json_map_to_solution(\n\t\t\t\t\tjiffy:decode(Body, [return_maps])) of\n\t\t\t\t{'EXIT', _} ->\n\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => invalid_json }), Req2};\n\t\t\t\tSolution ->\n\t\t\t\t\tResponse = ar_pool:process_partial_solution(Solution),\n\t\t\t\t\tJSON = ar_serialize:partial_solution_response_to_json_struct(Response),\n\t\t\t\t\t{200, #{}, ar_serialize:jsonify(JSON), Req2}\n\t\t\tend;\n\t\t{error, body_size_too_large} ->\n\t\t\t{413, #{}, <<\"Payload too large\">>, Req};\n\t\t{error, timeout} ->\n\t\t\t{500, #{}, <<\"Handler timeout\">>, Req}\n\tend.\n\nhandle_post_partial_solution_cm_exit_peer_pool_client(Req, Pid) ->\n\tcase read_complete_body(Req, Pid) of\n\t\t{ok, Body, Req2} ->\n\t\t\tar_pool:post_partial_solution(Body),\n\t\t\t{200, #{}, jiffy:encode(#{}), Req2};\n\t\t{error, body_size_too_large} ->\n\t\t\t{413, #{}, <<\"Payload too large\">>, Req};\n\t\t{error, timeout} ->\n\t\t\t{500, #{}, <<\"Handler timeout\">>, Req}\n\tend.\n\nhandle_get_jobs(PrevOutput, Req) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tCMExitNode = ar_coordination:is_exit_peer() andalso ar_pool:is_client(),\n\tcase {Config#config.is_pool_server, CMExitNode} of\n\t\t{false, false} ->\n\t\t\t{501, #{}, jiffy:encode(#{ error => configuration }), Req};\n\t\t{true, _} ->\n\t\t\tcase check_internal_api_secret(Req) of\n\t\t\t\t{reject, {Status, Headers, Body}} ->\n\t\t\t\t\t{Status, Headers, Body, Req};\n\t\t\t\tpass ->\n\t\t\t\t\thandle_get_jobs_pool_server(PrevOutput, Req)\n\t\t\tend;\n\t\t{_, true} ->\n\t\t\tcase check_cm_api_secret(Req) of\n\t\t\t\t{reject, {Status, Headers, Body}} ->\n\t\t\t\t\t{Status, Headers, Body, Req};\n\t\t\t\tpass ->\n\t\t\t\t\thandle_get_jobs_cm_exit_peer_pool_client(PrevOutput, 
Req)\n\t\t\tend\n\tend.\n\nhandle_get_jobs_pool_server(PrevOutput, Req) ->\n\tProps =\n\t\tets:select(\n\t\t\tnode_state,\n\t\t\t[{{'$1', '$2'},\n\t\t\t\t[{'or',\n\t\t\t\t\t{'==', '$1', diff_pair},\n\t\t\t\t\t{'==', '$1', nonce_limiter_info}}], ['$_']}]\n\t\t),\n\tDiffPair = proplists:get_value(diff_pair, Props),\n\tInfo = proplists:get_value(nonce_limiter_info, Props),\n\tResult = ar_util:do_until(\n\t\tfun() ->\n\t\t\tS = ar_nonce_limiter:get_step_triplets(Info, PrevOutput, ?GET_JOBS_COUNT),\n\t\t\tcase S of\n\t\t\t\t[] ->\n\t\t\t\t\tfalse;\n\t\t\t\t_ ->\n\t\t\t\t\t{ok, S}\n\t\t\tend\n\t\tend,\n\t\t200,\n\t\t(?GET_JOBS_TIMEOUT_S) * 1000\n\t),\n\tSteps = case Result of {ok, S} -> S; _ -> [] end,\n\t{NextSeed, IntervalNumber, NextVDFDiff} = ar_nonce_limiter:session_key(Info),\n\tJobList = [#job{ output = O, global_step_number = SN,\n\t\t\tpartition_upper_bound = U } || {O, SN, U} <- Steps],\n\tJobs = #jobs{ jobs = JobList, seed = Info#nonce_limiter_info.seed,\n\t\t\tnext_seed = NextSeed, interval_number = IntervalNumber,\n\t\t\tnext_vdf_difficulty = NextVDFDiff, partial_diff = DiffPair },\n\t{200, #{}, ar_serialize:jsonify(ar_serialize:jobs_to_json_struct(Jobs)), Req}.\n\nhandle_get_jobs_cm_exit_peer_pool_client(PrevOutput, Req) ->\n\t{200, #{}, ar_serialize:jsonify(\n\t\t\tar_serialize:jobs_to_json_struct(ar_pool:get_jobs(PrevOutput))), Req}.\n\n%% Only for cm miners that are NOT exit peers.\nhandle_post_pool_cm_jobs(Req, Pid) ->\n\tPoolCMMiner = (not ar_coordination:is_exit_peer()) andalso ar_pool:is_client(),\n\tcase PoolCMMiner of\n\t\tfalse ->\n\t\t\t{501, #{}, jiffy:encode(#{ error => configuration }), Req};\n\t\ttrue ->\n\t\t\tcase check_cm_api_secret(Req) of\n\t\t\t\t{reject, {Status, Headers, Body}} ->\n\t\t\t\t\t{Status, Headers, Body, Req};\n\t\t\t\tpass ->\n\t\t\t\t\thandle_post_pool_cm_jobs2(Req, Pid)\n\t\t\tend\n\tend.\n\nhandle_post_pool_cm_jobs2(Req, Pid) ->\n\tPeer = ar_http_util:arweave_peer(Req),\n\tcase read_complete_body(Req, Pid) of\n\t\t{ok, Body, Req2} ->\n\t\t\tcase catch ar_serialize:json_map_to_pool_cm_jobs(\n\t\t\t\t\telement(2, ar_serialize:json_decode(Body, [return_maps]))) of\n\t\t\t\t{'EXIT', _} ->\n\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => invalid_json }), Req2};\n\t\t\t\tJobs ->\n\t\t\t\t\tar_pool:process_cm_jobs(Jobs, Peer),\n\t\t\t\t\t{200, #{}, <<>>, Req2}\n\t\t\tend;\n\t\t{error, body_size_too_large} ->\n\t\t\t{413, #{}, <<\"Payload too large\">>, Req};\n\t\t{error, timeout} ->\n\t\t\t{500, #{}, <<\"Handler timeout\">>, Req}\n\tend.\n\nencode_txids([]) ->\n\t<<>>;\nencode_txids([TXID | TXIDs]) ->\n\t<< TXID/binary, (encode_txids(TXIDs))/binary >>.\n\nget_missing_tx_identifiers(TXIDs) ->\n\tget_missing_tx_identifiers(TXIDs, [], 0).\n\nget_missing_tx_identifiers([], MissingTXIDs, _N) ->\n\tMissingTXIDs;\nget_missing_tx_identifiers([_ | _], _, N) when N == ?BLOCK_TX_COUNT_LIMIT ->\n\t{error, tx_list_too_long};\nget_missing_tx_identifiers([#tx{} | TXIDs], MissingTXIDs, N) ->\n\tget_missing_tx_identifiers(TXIDs, MissingTXIDs, N + 1);\nget_missing_tx_identifiers([TXID | TXIDs], MissingTXIDs, N) ->\n\tcase ar_node_worker:is_mempool_or_block_cache_tx(TXID) of\n\t\ttrue ->\n\t\t\tget_missing_tx_identifiers(TXIDs, MissingTXIDs, N + 1);\n\t\tfalse ->\n\t\t\tget_missing_tx_identifiers(TXIDs, [TXID | MissingTXIDs], N + 1)\n\tend.\n\ndecode_recent_hash_list(<<>>) ->\n\t{ok, []};\ndecode_recent_hash_list(<< H:48/binary, Rest/binary >>) ->\n\tcase decode_recent_hash_list(Rest) of\n\t\terror ->\n\t\t\terror;\n\t\t{ok, HL} ->\n\t\t\t{ok, [H | 
HL]}\n\tend;\ndecode_recent_hash_list(_Rest) ->\n\terror.\n\nget_recent_hash_list_diff([H | HL], BlockTXPairs) ->\n\tcase lists:dropwhile(fun({BH, _TXIDs}) -> BH /= H end, BlockTXPairs) of\n\t\t[] ->\n\t\t\tget_recent_hash_list_diff(HL, BlockTXPairs);\n\t\tTail ->\n\t\t\tget_recent_hash_list_diff(HL, tl(Tail), H)\n\tend;\nget_recent_hash_list_diff([], _BlockTXPairs) ->\n\tno_intersection.\n\nget_recent_hash_list_diff([H | HL], [{H, _SizeTaggedTXs} | BlockTXPairs], _PrevH) ->\n\tget_recent_hash_list_diff(HL, BlockTXPairs, H);\nget_recent_hash_list_diff(_HL, BlockTXPairs, PrevH) ->\n\t<< PrevH/binary, (get_recent_hash_list_diff(BlockTXPairs))/binary >>.\n\nget_recent_hash_list_diff([{H, TXIDs} | BlockTXPairs]) ->\n\tLen = length(TXIDs),\n\t<< H:48/binary, Len:16,\n\t\t\t(iolist_to_binary([TXID || TXID <- TXIDs]))/binary,\n\t\t\t(get_recent_hash_list_diff(BlockTXPairs))/binary >>;\nget_recent_hash_list_diff([]) ->\n\t<<>>.\n\nget_total_supply(RootHash, Cursor, Sum, Denomination) ->\n\t{ok, {NextCursor, Range}} = ar_wallets:get_chunk(RootHash, Cursor),\n\tRangeSum = get_balance_sum(Range, Denomination),\n\tcase NextCursor of\n\t\tlast ->\n\t\t\tSum + RangeSum;\n\t\t_ ->\n\t\t\tget_total_supply(RootHash, NextCursor, Sum + RangeSum, Denomination)\n\tend.\n\nget_balance_sum([{_, {Balance, _LastTX}} | Range], BlockDenomination) ->\n\tar_pricing:redenominate(Balance, 1, BlockDenomination)\n\t\t\t+ get_balance_sum(Range, BlockDenomination);\nget_balance_sum([{_, {Balance, _LastTX, Denomination, _MiningPermission}} | Range],\n\t\tBlockDenomination) ->\n\tar_pricing:redenominate(Balance, Denomination, BlockDenomination)\n\t\t\t+ get_balance_sum(Range, BlockDenomination);\nget_balance_sum([], _BlockDenomination) ->\n\t0.\n\n%% Return the block hash list associated with a block.\nprocess_request(get_block, [Type, ID, <<\"hash_list\">>], Req) ->\n\tcase find_block(Type, ID) of\n\t\t{error, height_not_integer} ->\n\t\t\t{400, #{}, jiffy:encode(#{ error => size_must_be_an_integer }), Req};\n\t\tunavailable ->\n\t\t\t{404, #{}, <<\"Not Found.\">>, Req};\n\t\tB ->\n\t\t\tok = ar_semaphore:acquire(get_block_index, ?DEFAULT_CALL_TIMEOUT),\n\t\t\tcase ar_node:get_height() >= ar_fork:height_2_6() of\n\t\t\t\ttrue ->\n\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => not_supported_since_fork_2_6 }), Req};\n\t\t\t\tfalse ->\n\t\t\t\t\tCurrentBI = ar_node:get_block_index(),\n\t\t\t\t\tHL = ar_block:generate_hash_list_for_block(B#block.indep_hash, CurrentBI),\n\t\t\t\t\t{200, #{}, ar_serialize:jsonify(lists:map(fun ar_util:encode/1, HL)), Req}\n\t\t\tend\n\tend;\n%% @doc Return the wallet list associated with a block.\nprocess_request(get_block, [Type, ID, <<\"wallet_list\">>], Req) ->\n\tcase find_block(Type, ID) of\n\t\t{error, height_not_integer} ->\n\t\t\t{400, #{}, jiffy:encode(#{ error => size_must_be_an_integer }), Req};\n\t\tunavailable ->\n\t\t\t{404, #{}, <<\"Not Found.\">>, Req};\n\t\tB ->\n\t\t\t{ok, Config} = arweave_config:get_env(),\n\t\t\tcase {B#block.height >= ar_fork:height_2_2(),\n\t\t\t\t\tlists:member(serve_wallet_lists, Config#config.enable)} of\n\t\t\t\t{true, false} ->\n\t\t\t\t\t{400, #{},\n\t\t\t\t\t\tjiffy:encode(#{ error => does_not_serve_blocks_after_2_2_fork }),\n\t\t\t\t\t\tReq};\n\t\t\t\t{true, _} ->\n\t\t\t\t\tok = ar_semaphore:acquire(get_wallet_list, ?DEFAULT_CALL_TIMEOUT),\n\t\t\t\t\tcase ar_storage:read_wallet_list(B#block.wallet_list) of\n\t\t\t\t\t\t{ok, Tree} ->\n\t\t\t\t\t\t\t{200, #{}, 
ar_serialize:jsonify(\n\t\t\t\t\t\t\t\tar_serialize:wallet_list_to_json_struct(\n\t\t\t\t\t\t\t\t\tB#block.reward_addr, false, Tree\n\t\t\t\t\t\t\t\t)), Req};\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t{404, #{}, <<\"Block not found.\">>, Req}\n\t\t\t\t\tend;\n\t\t\t\t_ ->\n\t\t\t\t\tWLFilepath = ar_storage:wallet_list_filepath(B#block.wallet_list),\n\t\t\t\t\tcase filelib:is_file(WLFilepath) of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t{200, #{}, sendfile(WLFilepath), Req};\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t{404, #{}, <<\"Block not found.\">>, Req}\n\t\t\t\t\tend\n\t\t\tend\n\tend;\n\n%% Return a requested field of a given block.\n%% GET request to endpoint /block/hash/{hash|height}/{field}.\n%%\n%% field :: nonce | previous_block | timestamp | last_retarget | diff | height | hash |\n%%\t\t\tindep_hash | txs | hash_list | wallet_list | reward_addr | tags | reward_pool\nprocess_request(get_block, [Type, ID, Field], Req) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tcase lists:member(subfield_queries, Config#config.enable) of\n\t\ttrue ->\n\t\t\tcase find_block(Type, ID) of\n\t\t\t\t{error, height_not_integer} ->\n\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => size_must_be_an_integer }), Req};\n\t\t\t\tunavailable ->\n\t\t\t\t\t{404, #{}, <<\"Not Found.\">>, Req};\n\t\t\t\tB ->\n\t\t\t\t\t{BLOCKJSON} = ar_serialize:block_to_json_struct(B),\n\t\t\t\t\tcase catch list_to_existing_atom(binary_to_list(Field)) of\n\t\t\t\t\t\t{'EXIT', _} ->\n\t\t\t\t\t\t\t{404, #{}, <<\"Not Found.\">>, Req};\n\t\t\t\t\t\tAtom ->\n\t\t\t\t\t\t\tcase lists:keyfind(Atom, 1, BLOCKJSON) of\n\t\t\t\t\t\t\t\t{_, Res} ->\n\t\t\t\t\t\t\t\t\tResult = block_field_to_string(Field, Res),\n\t\t\t\t\t\t\t\t\t{200, #{}, Result, Req};\n\t\t\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t\t\t{404, #{}, <<\"Not Found.\">>, Req}\n\t\t\t\t\t\t\tend\n\t\t\t\t\tend\n\t\t\tend;\n\t\t_ ->\n\t\t\t{421, #{}, <<\"Subfield block querying is disabled on this node.\">>, Req}\n\tend.\n\nhandle_get_block_wallet_balance(EncodedHeight, EncodedAddr, Req) ->\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\tnot_joined(Req);\n\t\ttrue ->\n\t\t\tCurrentHeight = ar_node:get_height(),\n\t\t\ttry binary_to_integer(EncodedHeight) of\n\t\t\t\tHeight when Height < 0 ->\n\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => invalid_height }), Req};\n\t\t\t\tHeight when Height > CurrentHeight ->\n\t\t\t\t\t{404, #{}, jiffy:encode(#{ error => block_not_found }), Req};\n\t\t\t\tHeight ->\n\t\t\t\t\tcase ar_block_index:get_element_by_height(Height) of\n\t\t\t\t\t\tnot_found ->\n\t\t\t\t\t\t\t{404, #{}, jiffy:encode(#{ error => block_not_found }), Req};\n\t\t\t\t\t\t{H, _, _} ->\n\t\t\t\t\t\t\tB =\n\t\t\t\t\t\t\t\tcase ar_block_cache:get(block_cache, H) of\n\t\t\t\t\t\t\t\t\tnot_found ->\n\t\t\t\t\t\t\t\t\t\tar_storage:read_block(H);\n\t\t\t\t\t\t\t\t\tB2 ->\n\t\t\t\t\t\t\t\t\t\tB2\n\t\t\t\t\t\t\t\tend,\n\t\t\t\t\t\t\tcase B of\n\t\t\t\t\t\t\t\tunavailable ->\n\t\t\t\t\t\t\t\t\t{404, #{}, jiffy:encode(#{ error => block_not_found }),\n\t\t\t\t\t\t\t\t\t\t\tReq};\n\t\t\t\t\t\t\t\t#block{ wallet_list = RootHash } ->\n\t\t\t\t\t\t\t\t\tcase ar_util:safe_decode(EncodedAddr) of\n\t\t\t\t\t\t\t\t\t\t{ok, Addr} ->\n\t\t\t\t\t\t\t\t\t\t\thandle_get_block_wallet_balance2(Addr, RootHash,\n\t\t\t\t\t\t\t\t\t\t\t\t\tReq);\n\t\t\t\t\t\t\t\t\t\t{error, invalid} ->\n\t\t\t\t\t\t\t\t\t\t\t{400, #{}, jiffy:encode(#{\n\t\t\t\t\t\t\t\t\t\t\t\t\terror => invalid_address }), Req}\n\t\t\t\t\t\t\t\t\tend\n\t\t\t\t\t\t\tend\n\t\t\t\t\tend\n\t\t\tcatch _:_ ->\n\t\t\t\t{400, #{}, jiffy:encode(#{ error => invalid_height 
}), Req}\n\t\t\tend\n\tend.\n\nhandle_get_block_wallet_balance2(Addr, RootHash, Req) ->\n\tcase ar_wallets:get_balance(RootHash, Addr) of\n\t\t{error, not_found} ->\n\t\t\thandle_get_block_wallet_balance3(Addr, RootHash, Req);\n\t\tBalance when is_integer(Balance) ->\n\t\t\t{200, #{}, integer_to_binary(Balance), Req};\n\t\t_Error ->\n\t\t\t{500, #{}, <<>>, Req}\n\tend.\n\nhandle_get_block_wallet_balance3(Addr, RootHash, Req) ->\n\tcase ar_storage:read_account(Addr, RootHash) of\n\t\tnot_found ->\n\t\t\t{404, #{}, jiffy:encode(#{ error => account_data_not_found }), Req};\n\t\t{Balance, _LastTX} ->\n\t\t\t{200, #{}, integer_to_binary(Balance), Req};\n\t\t{Balance, _LastTX, _Denomination, _MiningPermission} ->\n\t\t\t{200, #{}, integer_to_binary(Balance), Req}\n\tend.\n\nprocess_get_wallet_list_chunk(EncodedRootHash, EncodedCursor, Req) ->\n\tDecodeCursorResult =\n\t\tcase EncodedCursor of\n\t\t\tfirst ->\n\t\t\t\t{ok, first};\n\t\t\t_ ->\n\t\t\t\tar_util:safe_decode(EncodedCursor)\n\t\tend,\n\tcase {ar_util:safe_decode(EncodedRootHash), DecodeCursorResult} of\n\t\t{{error, invalid}, _} ->\n\t\t\t{400, #{}, <<\"Invalid root hash.\">>, Req};\n\t\t{_, {error, invalid}} ->\n\t\t\t{400, #{}, <<\"Invalid root hash.\">>, Req};\n\t\t{{ok, RootHash}, {ok, Cursor}} ->\n\t\t\tcase ar_wallets:get_chunk(RootHash, Cursor) of\n\t\t\t\t{ok, {NextCursor, Wallets}} ->\n\t\t\t\t\tSerializeFn = case cowboy_req:header(<<\"content-type\">>, Req) of\n\t\t\t\t\t\t<<\"application/json\">> -> fun wallet_list_chunk_to_json/1;\n\t\t\t\t\t\t<<\"application/etf\">> -> fun erlang:term_to_binary/1;\n\t\t\t\t\t\t_ -> fun erlang:term_to_binary/1\n\t\t\t\t\tend,\n\t\t\t\t\tReply = SerializeFn(#{ next_cursor => NextCursor, wallets => Wallets }),\n\t\t\t\t\t{200, #{}, Reply, Req};\n\t\t\t\t{error, root_hash_not_found} ->\n\t\t\t\t\t{404, #{}, <<\"Root hash not found.\">>, Req}\n\t\t\tend\n\tend.\n\nwallet_list_chunk_to_json(#{ next_cursor := NextCursor, wallets := Wallets }) ->\n\tSerializedWallets =\n\t\tlists:map(\n\t\t\tfun({Addr, Value}) ->\n\t\t\t\tar_serialize:wallet_to_json_struct(Addr, Value)\n\t\t\tend,\n\t\t\tWallets\n\t\t),\n\tcase NextCursor of\n\t\tlast ->\n\t\t\tjiffy:encode(#{ wallets => SerializedWallets });\n\t\tCursor when is_binary(Cursor) ->\n\t\t\tjiffy:encode(#{\n\t\t\t\tnext_cursor => ar_util:encode(Cursor),\n\t\t\t\twallets => SerializedWallets\n\t\t\t})\n\tend.\n\n%% @doc Find a block, given a type and a specifier.\nfind_block(<<\"height\">>, RawHeight) ->\n\tcase catch binary_to_integer(RawHeight) of\n\t\t{'EXIT', _} ->\n\t\t\t{error, height_not_integer};\n\t\tHeight ->\n\t\t\tcase ar_block_index:get_element_by_height(Height) of\n\t\t\t\tnot_found ->\n\t\t\t\t\tunavailable;\n\t\t\t\t{H, _, _} ->\n\t\t\t\t\tar_storage:read_block(H)\n\t\t\tend\n\tend;\nfind_block(<<\"hash\">>, ID) ->\n\tcase ar_util:safe_decode(ID) of\n\t\t{ok, H} ->\n\t\t\tar_storage:read_block(H);\n\t\t_ ->\n\t\t\tunavailable\n\tend.\n\npost_tx_parse_id({Req, Pid, Encoding}) ->\n\tpost_tx_parse_id(check_header, {Req, Pid, Encoding}).\n\npost_tx_parse_id(check_header, {Req, Pid, Encoding}) ->\n\tcase cowboy_req:header(<<\"arweave-tx-id\">>, Req, not_set) of\n\t\tnot_set ->\n\t\t\tpost_tx_parse_id(read_body, {not_set, Req, Pid, Encoding});\n\t\tEncodedTXID ->\n\t\t\tcase ar_util:safe_decode(EncodedTXID) of\n\t\t\t\t{ok, TXID} when byte_size(TXID) =< 32 ->\n\t\t\t\t\tpost_tx_parse_id(check_ignore_list, {TXID, Req, Pid, Encoding});\n\t\t\t\t_ ->\n\t\t\t\t\t{error, invalid_hash, Req}\n\t\t\tend\n\tend;\npost_tx_parse_id(check_ignore_list, 
{TXID, Req, Pid, Encoding}) ->\n\tcase ar_mempool:is_known_tx(TXID) of\n\t\ttrue ->\n\t\t\t{error, tx_already_processed, TXID, Req};\n\t\tfalse ->\n\t\t\tRef = make_ref(),\n\t\t\terlang:put(tx_id_ref, Ref),\n\t\t\tar_ignore_registry:add_ref(TXID, Ref, 5000),\n\t\t\tpost_tx_parse_id(read_body, {TXID, Req, Pid, Encoding})\n\tend;\npost_tx_parse_id(read_body, {TXID, Req, Pid, Encoding}) ->\n\tcase read_complete_body(Req, Pid) of\n\t\t{ok, Body, Req2} ->\n\t\t\tcase Encoding of\n\t\t\t\tjson ->\n\t\t\t\t\tpost_tx_parse_id(parse_json, {TXID, Req2, Body});\n\t\t\t\tbinary ->\n\t\t\t\t\tpost_tx_parse_id(parse_binary, {TXID, Req2, Body})\n\t\t\tend;\n\t\t{error, body_size_too_large} ->\n\t\t\t{error, body_size_too_large, Req};\n\t\t{error, timeout} ->\n\t\t\t{error, timeout}\n\tend;\npost_tx_parse_id(parse_json, {TXID, Req, Body}) ->\n\tRef = erlang:get(tx_id_ref),\n\tcase catch ar_serialize:json_struct_to_tx(Body) of\n\t\t{'EXIT', _} ->\n\t\t\tcase TXID of\n\t\t\t\tnot_set ->\n\t\t\t\t\tnoop;\n\t\t\t\t_ ->\n\t\t\t\t\tar_ignore_registry:remove_ref(TXID, Ref)\n\t\t\tend,\n\t\t\t{error, invalid_json, Req};\n\t\t{error, invalid_signature_type} ->\n            case TXID of\n                not_set ->\n                    noop;\n                _ ->\n                    ar_ignore_registry:remove_ref(TXID, Ref),\n\t\t\t\t\tar_tx_db:put_error_codes(TXID, [<<\"invalid_signature_type\">>])\n            end,\n            {error, invalid_signature_type, Req};\n\t\t{error, _} ->\n\t\t\tcase TXID of\n\t\t\t\tnot_set ->\n\t\t\t\t\tnoop;\n\t\t\t\t_ ->\n\t\t\t\t\tar_ignore_registry:remove_ref(TXID, Ref)\n\t\t\tend,\n\t\t\t{error, invalid_json, Req};\n\t\tTX ->\n\t\t\tpost_tx_parse_id(verify_id_match, {TXID, Req, TX})\n\tend;\npost_tx_parse_id(parse_binary, {TXID, Req, Body}) ->\n\tRef = erlang:get(tx_id_ref),\n\tcase catch ar_serialize:binary_to_tx(Body) of\n\t\t{'EXIT', _} ->\n\t\t\tcase TXID of\n\t\t\t\tnot_set ->\n\t\t\t\t\tnoop;\n\t\t\t\t_ ->\n\t\t\t\t\tar_ignore_registry:remove_ref(TXID, Ref)\n\t\t\tend,\n\t\t\t{error, invalid_json, Req};\n\t\t{error, _} ->\n\t\t\tcase TXID of\n\t\t\t\tnot_set ->\n\t\t\t\t\tnoop;\n\t\t\t\t_ ->\n\t\t\t\t\tar_ignore_registry:remove_ref(TXID, Ref)\n\t\t\tend,\n\t\t\t{error, invalid_json, Req};\n\t\t{ok, TX} ->\n\t\t\tpost_tx_parse_id(verify_id_match, {TXID, Req, TX})\n\tend;\npost_tx_parse_id(verify_id_match, {MaybeTXID, Req, TX}) ->\n\tTXID = TX#tx.id,\n\tRef = erlang:get(tx_id_ref),\n\tcase MaybeTXID of\n\t\tTXID ->\n\t\t\t{ok, TX, Req};\n\t\tMaybeNotSet ->\n\t\t\tcase MaybeNotSet of\n\t\t\t\tnot_set ->\n\t\t\t\t\tnoop;\n\t\t\t\tMismatchingTXID ->\n\t\t\t\t\tar_ignore_registry:remove_ref(MismatchingTXID, Ref)\n\t\t\tend,\n\t\t\tcase byte_size(TXID) > 32 of\n\t\t\t\ttrue ->\n\t\t\t\t\t{error, invalid_hash, Req};\n\t\t\t\tfalse ->\n\t\t\t\t\tcase ar_mempool:is_known_tx(TXID) of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t{error, tx_already_processed, TXID, Req};\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tRef2 =\n\t\t\t\t\t\t\t\tcase Ref of\n\t\t\t\t\t\t\t\t\tundefined ->\n\t\t\t\t\t\t\t\t\t\tmake_ref();\n\t\t\t\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t\t\t\tRef\n\t\t\t\t\t\t\t\tend,\n\t\t\t\t\t\t\terlang:put(tx_id_ref, Ref2),\n\t\t\t\t\t\t\tar_ignore_registry:add_ref(TXID, Ref2, 5000),\n\t\t\t\t\t\t\t{ok, TX, Req}\n\t\t\t\t\tend\n\t\t\tend\n\tend.\n\nhandle_post_vdf(Req, Pid) ->\n\tPeer = ar_http_util:arweave_peer(Req),\n\tcase ets:member(ar_peers, {vdf_server_peer, Peer}) of\n\t\tfalse ->\n\t\t\t{400, #{}, <<>>, Req};\n\t\ttrue ->\n\t\t\thandle_post_vdf2(Req, Pid, 
Peer)\n\tend.\n\nhandle_post_vdf2(Req, Pid, Peer) ->\n\tcase ar_config:pull_from_remote_vdf_server() of\n\t\ttrue ->\n\t\t\t%% We are pulling the updates - tell the server not to push them.\n\t\t\tResponse = #nonce_limiter_update_response{ postpone = 120 },\n\t\t\tBin = ar_serialize:nonce_limiter_update_response_to_binary(Response),\n\t\t\t{202, #{}, Bin, Req};\n\t\tfalse ->\n\t\t\thandle_post_vdf3(Req, Pid, Peer)\n\tend.\n\nhandle_post_vdf3(Req, Pid, Peer) ->\n\tcase read_complete_body(Req, Pid) of\n\t\t{ok, Body, Req2} ->\n\t\t\tFormat =\n\t\t\t\tcase ar_config:compute_own_vdf() of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\t%% If we compute our own VDF, we need to know the VDF difficulties\n\t\t\t\t\t\t%% so that we can continue extending the new session.\n\t\t\t\t\t\t%% The VDF difficulties have been introduced in the format number 4.\n\t\t\t\t\t\t4;\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\t2\n\t\t\t\tend,\n\t\t\tcase ar_serialize:binary_to_nonce_limiter_update(Format, Body) of\n\t\t\t\t{ok, Update} ->\n\t\t\t\t\tcase ar_nonce_limiter:apply_external_update(Update, Peer) of\n\t\t\t\t\t\tok ->\n\t\t\t\t\t\t\t{200, #{}, <<>>, Req2};\n\t\t\t\t\t\t#nonce_limiter_update_response{} = Response ->\n\t\t\t\t\t\t\tBin = ar_serialize:nonce_limiter_update_response_to_binary(Response),\n\t\t\t\t\t\t\t{202, #{}, Bin, Req2}\n\t\t\t\t\tend;\n\t\t\t\t{error, _} ->\n\t\t\t\t\t%% We couldn't deserialize the update, ask for a different format\n\t\t\t\t\tResponse = #nonce_limiter_update_response{ format = Format },\n\t\t\t\t\tBin = ar_serialize:nonce_limiter_update_response_to_binary(Response),\n\t\t\t\t\t{202, #{}, Bin, Req}\n\t\t\tend;\n\t\t{error, body_size_too_large} ->\n\t\t\t{413, #{}, <<\"Payload too large\">>, Req};\n\t\t{error, timeout} ->\n\t\t\t{503, #{}, jiffy:encode(#{ error => timeout }), Req}\n\tend.\n\nhandle_get_vdf(Req, Call, Format) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tcase lists:member(public_vdf_server, Config#config.enable) of\n\t\ttrue ->\n\t\t\thandle_get_vdf2(Req, Call, Format);\n\t\tfalse ->\n\t\t\tPeer = ar_http_util:arweave_peer(Req),\n\t\t\tcase ets:lookup(ar_peers, {vdf_client_peer, Peer}) of\n\t\t\t\t[] ->\n\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => not_our_vdf_client }), Req};\n\t\t\t\t[{_, _RawPeer}] ->\n\t\t\t\t\thandle_get_vdf2(Req, Call, Format)\n\t\t\tend\n\tend.\n\nhandle_get_vdf2(Req, Call, Format) ->\n\tUpdate =\n\t\tcase Call of\n\t\t\tget_update ->\n\t\t\t\tar_nonce_limiter_server:get_update(Format);\n\t\t\tget_session ->\n\t\t\t\tar_nonce_limiter_server:get_full_update(Format);\n\t\t\tget_previous_session ->\n\t\t\t\tar_nonce_limiter_server:get_full_prev_update(Format)\n\t\tend,\n\tcase Update of\n\t\tnot_found ->\n\t\t\t{404, #{}, <<>>, Req};\n\t\tUpdate ->\n\t\t\t{200, #{}, Update, Req}\n\tend.\n\nread_complete_body(Req, Pid) ->\n\tread_complete_body(Req, Pid, ?MAX_BODY_SIZE).\n\nread_complete_body(Req, Pid, SizeLimit) ->\n\tPid ! {read_complete_body, self(), Req, SizeLimit},\n\treceive\n\t\t{read_complete_body, {'EXIT', timeout}} ->\n\t\t\t?LOG_WARNING([{event, body_read_cowboy_timeout}, {method, cowboy_req:method(Req)},\n\t\t\t\t\t{path, cowboy_req:path(Req)}]),\n\t\t\t{error, timeout};\n\t\t{read_complete_body, Term} ->\n\t\t\tTerm\n\tend.\n\nread_body_chunk(Req, Pid, Size, Timeout) ->\n\tPid ! 
{read_body_chunk, self(), Req, Size, Timeout},\n\treceive\n\t\t{read_body_chunk, {'EXIT', timeout}} ->\n\t\t\tPeer = ar_http_util:arweave_peer(Req),\n\t\t\t?LOG_DEBUG([{event, body_read_cowboy_timeout}, {method, cowboy_req:method(Req)},\n\t\t\t\t\t{path, cowboy_req:path(Req)}, {peer, ar_util:format_peer(Peer)}]),\n\t\t\t{error, timeout};\n\t\t{read_body_chunk, Term} ->\n\t\t\tTerm\n\tafter Timeout ->\n\t\tPeer = ar_http_util:arweave_peer(Req),\n\t\t?LOG_DEBUG([{event, body_read_timeout}, {method, cowboy_req:method(Req)},\n\t\t\t\t{path, cowboy_req:path(Req)}, {peer, ar_util:format_peer(Peer)}]),\n\t\t{error, timeout}\n\tend.\n\nhandle_mining_h1(Req, Pid) ->\n\tPeer = ar_http_util:arweave_peer(Req),\n\tcase read_complete_body(Req, Pid) of\n\t\t{ok, Body, Req2} ->\n\t\t\tcase ar_serialize:json_decode(Body, [return_maps]) of\n\t\t\t\t{ok, JSON} ->\n\t\t\t\t\tcase catch ar_serialize:json_map_to_candidate(JSON) of\n\t\t\t\t\t\t{'EXIT', _} ->\n\t\t\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => invalid_json }), Req2};\n\t\t\t\t\t\tCandidate ->\n\t\t\t\t\t\t\tcase {ar_pool:is_client(), ar_coordination:is_exit_peer()} of\n\t\t\t\t\t\t\t\t{true, true} ->\n\t\t\t\t\t\t\t\t\tPoolPeer = ar_pool:pool_peer(),\n\t\t\t\t\t\t\t\t\tJobs = #pool_cm_jobs{ h1_to_h2_jobs = [Candidate] },\n\t\t\t\t\t\t\t\t\tPayload = ar_serialize:jsonify(\n\t\t\t\t\t\t\t\t\t\t\tar_serialize:pool_cm_jobs_to_json_struct(Jobs)),\n\t\t\t\t\t\t\t\t\tspawn(fun() ->\n\t\t\t\t\t\t\t\t\t\tar_http_iface_client:post_pool_cm_jobs(PoolPeer,\n\t\t\t\t\t\t\t\t\t\t\t\tPayload) end),\n\t\t\t\t\t\t\t\t\t{200, #{}, <<>>, Req2};\n\t\t\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t\t\tar_coordination:compute_h2_for_peer(Peer, Candidate),\n\t\t\t\t\t\t\t\t\t{200, #{}, <<>>, Req}\n\t\t\t\t\t\t\tend\n\t\t\t\t\tend;\n\t\t\t\t{error, _} ->\n\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => invalid_json }), Req2}\n\t\t\tend;\n\t\t{error, body_size_too_large} ->\n\t\t\t{413, #{}, <<\"Payload too large\">>, Req}\n\tend.\n\nhandle_mining_h2(Req, Pid) ->\n\tPeer = ar_http_util:arweave_peer(Req),\n\tcase read_complete_body(Req, Pid) of\n\t\t{ok, Body, Req2} ->\n\t\t\tcase ar_serialize:json_decode(Body, [return_maps]) of\n\t\t\t\t{ok, JSON} ->\n\t\t\t\t\tcase catch ar_serialize:json_map_to_candidate(JSON) of\n\t\t\t\t\t\t{'EXIT', _} ->\n\t\t\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => invalid_json }), Req2};\n\t\t\t\t\t\tCandidate ->\n\t\t\t\t\t\t\t?LOG_INFO([{event, h2_received},\n\t\t\t\t\t\t\t\t\t{peer, ar_util:format_peer(Peer)}]),\n\t\t\t\t\t\t\tcase {ar_pool:is_client(), ar_coordination:is_exit_peer()} of\n\t\t\t\t\t\t\t\t{true, true} ->\n\t\t\t\t\t\t\t\t\tPoolPeer = ar_pool:pool_peer(),\n\t\t\t\t\t\t\t\t\tJobs = #pool_cm_jobs{ h1_read_jobs = [Candidate] },\n\t\t\t\t\t\t\t\t\tPayload = ar_serialize:jsonify(\n\t\t\t\t\t\t\t\t\t\t\tar_serialize:pool_cm_jobs_to_json_struct(Jobs)),\n\t\t\t\t\t\t\t\t\tspawn(fun() ->\n\t\t\t\t\t\t\t\t\t\tar_http_iface_client:post_pool_cm_jobs(PoolPeer,\n\t\t\t\t\t\t\t\t\t\t\t\tPayload) end),\n\t\t\t\t\t\t\t\t\t{200, #{}, <<>>, Req2};\n\t\t\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t\t\tar_mining_server:prepare_and_post_solution(Candidate),\n\t\t\t\t\t\t\t\t\tar_mining_stats:h2_received_from_peer(Peer),\n\t\t\t\t\t\t\t\t\t{200, #{}, <<>>, Req}\n\t\t\t\t\t\t\tend\n\t\t\t\t\tend;\n\t\t\t\t{error, _} ->\n\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => invalid_json }), Req2}\n\t\t\tend;\n\t\t{error, body_size_too_large} ->\n\t\t\t{413, #{}, <<\"Payload too large\">>, Req}\n\tend.\n\nhandle_mining_cm_publish(Req, Pid) ->\n\tPeer = 
ar_http_util:arweave_peer(Req),\n\tcase read_complete_body(Req, Pid) of\n\t\t{ok, Body, Req2} ->\n\t\t\tcase ar_serialize:json_decode(Body, [return_maps]) of\n\t\t\t\t{ok, JSON} ->\n\t\t\t\t\tcase catch ar_serialize:json_map_to_solution(JSON) of\n\t\t\t\t\t\t{'EXIT', _} ->\n\t\t\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => invalid_json }), Req2};\n\t\t\t\t\t\tSolution ->\n\t\t\t\t\t\t\tar:console(\"Block candidate ~p from ~p ~n\", [\n\t\t\t\t\t\t\t\tar_util:encode(Solution#mining_solution.solution_hash),\n\t\t\t\t\t\t\t\tar_util:format_peer(Peer)]),\n\t\t\t\t\t\t\t?LOG_INFO(\"Block candidate ~p from ~p ~n\", [\n\t\t\t\t\t\t\t\tar_util:encode(Solution#mining_solution.solution_hash),\n\t\t\t\t\t\t\t\tar_util:format_peer(Peer)]),\n\t\t\t\t\t\t\tar_mining_server:prepare_and_post_solution(Solution),\n\t\t\t\t\t\t\t{200, #{}, <<>>, Req}\n\t\t\t\t\tend;\n\t\t\t\t{error, _} ->\n\t\t\t\t\t{400, #{}, jiffy:encode(#{ error => invalid_json }), Req2}\n\t\t\tend;\n\t\t{error, body_size_too_large} ->\n\t\t\t{413, #{}, <<\"Payload too large\">>, Req}\n\tend.\n"
  },
  {
    "path": "apps/arweave/src/ar_http_iface_rate_limiter_middleware.erl",
    "content": "%%%\n%%% @doc Cowboy handler to manage server-side rate limiting.\n%%%\n%%% This module provides a routing layer, mapping incoming requests\n%%% to respective rate limiter groups (RLG). \n%%% The mapping logic can be extended in a quite complex manner if \n%%% required, however it should be  considered that the execute function will be\n%%% called for each HTTP request.\n%%% \n%%% Also, there is nothing limiting the developer from calling multiple RLGs\n%%% for a single request, if necessary.\n%%%\n%%% The LimiterRef reference  in the arweave_limiter:register_or_reject_call/2\n%%% call must match one of the RLGs started by the arweave_limiter application,\n%%% otherwise a noproc error will be raised.\n%%% \n%%% We currency use IP addresses and ports as Keys for the calling peers. \n%%% However, any Erlang term might be used as a key in an RLG.\n%%% \n-module(ar_http_iface_rate_limiter_middleware).\n\n-behaviour(cowboy_middleware).\n\n-export([execute/2]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\nexecute(Req, Env) ->\n\tLimiterRef = get_limiter_ref(Req),\n\tPeerKey = get_peer_key(Req),\n\n\tcase arweave_limiter:register_or_reject_call(LimiterRef, PeerKey) of\n\t\t{reject, Reason, Data} ->\n\t\t\t?LOG_DEBUG([{event, rate_limiter_reject}, {reason, Reason}, {data, Data}]),\n\t\t\t{stop, reject(Req, Reason, Data)};\n\t\t_ ->\n\t\t\t{ok, Req, Env}\n\tend.\n\nget_limiter_ref(Req) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tLocalIPs = [config_peer_to_ip_addr(Peer) || Peer <- Config#config.local_peers],\n\tPeerIP = config_peer_to_ip_addr(get_peer_key(Req)),\n\n\tcase lists:member(PeerIP, LocalIPs) of\n\t\ttrue ->\n\t\t\tlocal_peers;\n\t\t_ ->\n\t\t\tPath = ar_http_iface_server:split_path(cowboy_req:path(Req)),\n\t\t\tpath_to_limiter_ref(Path)\n\tend.\n\nreject(Req, _Reason, _Data) ->\n\tcowboy_req:reply(\n\t\t429,\n\t\t#{},\n\t\t<<\"Too Many Requests\">>,\n\t\tReq\n\t).\n\n-ifdef(AR_TEST).\nget_peer_key(Req) ->\n\t{{A, B, C, D}, _Port} = cowboy_req:peer(Req),\n\tcase cowboy_req:header(<<\"x-p2p-port\">>, Req) of\n\t\tundefined ->\n\t\t\t{A, B, C, D};\n\t\tPortBin ->\n\t\t\tcase catch binary_to_integer(PortBin) of\n\t\t\t\tPort when is_integer(Port) ->\n\t\t\t\t\t{A, B, C, D, Port};\n\t\t\t\t_ ->\n\t\t\t\t\t{A, B, C, D}\n\t\t\tend\n\tend.\n-else.\nget_peer_key(Req) ->\n\t{{A, B, C, D}, _Port} = cowboy_req:peer(Req),\n\t{A, B, C, D}.\n-endif.\n\nconfig_peer_to_ip_addr({{A, B, C, D}, _Port}) -> {A, B, C, D};\nconfig_peer_to_ip_addr({A, B, C, D, _Port}) -> {A, B, C, D};\nconfig_peer_to_ip_addr({A, B, C, D}) -> {A, B, C, D}.\n\npath_to_limiter_ref([<<\"chunk\">> | _]) -> chunk;\npath_to_limiter_ref([<<\"chunk2\">> | _]) -> chunk;\npath_to_limiter_ref([<<\"data_sync_record\">> | _]) -> data_sync_record;\npath_to_limiter_ref([<<\"recent_hash_list_diff\">> | _]) -> recent_hash_list_diff;\npath_to_limiter_ref([<<\"hash_list\">>]) -> block_index;\npath_to_limiter_ref([<<\"hash_list2\">>]) -> block_index;\npath_to_limiter_ref([<<\"block_index\">>]) -> block_index;\npath_to_limiter_ref([<<\"block_index2\">>]) -> block_index;\npath_to_limiter_ref([<<\"block\">>, _Type, _ID, <<\"hash_list\">>]) -> block_index;\npath_to_limiter_ref([<<\"wallet_list\">>]) -> wallet_list;\npath_to_limiter_ref([<<\"block\">>, _Type, _ID, <<\"wallet_list\">>]) -> wallet_list;\npath_to_limiter_ref([<<\"vdf\">>]) -> get_vdf;\npath_to_limiter_ref([<<\"vdf2\">>]) -> get_vdf;\npath_to_limiter_ref([<<\"vdf\">>, <<\"session\">>]) -> 
get_vdf_session;\npath_to_limiter_ref([<<\"vdf2\">>, <<\"session\">>]) -> get_vdf_session;\npath_to_limiter_ref([<<\"vdf3\">>, <<\"session\">>]) -> get_vdf_session;\npath_to_limiter_ref([<<\"vdf4\">>, <<\"session\">>]) -> get_vdf_session;\npath_to_limiter_ref([<<\"vdf\">>, <<\"previous_session\">>]) -> get_previous_vdf_session;\npath_to_limiter_ref([<<\"vdf2\">>, <<\"previous_session\">>]) -> get_previous_vdf_session;\n%% No vdf3 prev_session in ar_blacklist_middleware.hrl ?RPM_BY_PATH\npath_to_limiter_ref([<<\"vdf4\">>, <<\"previous_session\">>]) -> get_previous_vdf_session;\npath_to_limiter_ref([<<\"metrics\">> | _ ])-> metrics;\npath_to_limiter_ref(_) -> general.\n"
  },
  {
    "path": "apps/arweave/src/ar_http_iface_server.erl",
    "content": "%%%===================================================================\n%%% @doc Handle http requests.\n%%%===================================================================\n\n-module(ar_http_iface_server).\n-behavior(gen_server).\n\n-export([start_link/0]).\n-export([init/1]).\n-export([handle_call/3, handle_cast/2, handle_info/2, terminate/2]).\n-export([split_path/1, label_http_path/1, label_req/1]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-define(HTTP_IFACE_MIDDLEWARES, [\n\tar_http_iface_rate_limiter_middleware,\n\tar_network_middleware,\n\tcowboy_router,\n\tar_http_iface_middleware,\n\tcowboy_handler\n]).\n\n-define(HTTP_IFACE_ROUTES, [\n\t{\"/metrics/[:registry]\", ar_prometheus_cowboy_handler, []},\n\t{\"/[...]\", ar_http_iface_handler, []}\n]).\n\n-define(ENDPOINTS, [\"info\", \"block\", \"block_announcement\", \"block2\", \"tx\", \"tx2\",\n\t\t\"queue\", \"recent_hash_list\", \"recent_hash_list_diff\", \"tx_anchor\", \"arql\", \"time\",\n\t\t\"chunk\", \"chunk2\", \"data_sync_record\", \"sync_buckets\", \"footprint_buckets\",\n\t\t\"wallet\", \"unsigned_tx\",\n\t\t\"peers\", \"hash_list\", \"block_index\", \"block_index2\", \"total_supply\", \"wallet_list\",\n\t\t\"height\", \"metrics\", \"vdf\", \"vdf2\", \"partial_solution\", \"pool_cm_jobs\"]).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\ninit(_) ->\n\t?LOG_INFO([{start, ?MODULE}, {pid, self()}]),\n\t% this process needs to be stopped in a clean way,\n\t% if something goes wrong, the connections must\n\t% be cleaned before leaving.\n\terlang:process_flag(trap_exit, true),\n\t{ok, Config} = arweave_config:get_env(),\n\tcase start_http_iface_listener(Config) of\n\t\t{ok, Pid} -> {ok, Pid};\n\t\tElsewise -> {error, Elsewise}\n\tend.\n\nsplit_path(Path) ->\n\tbinary:split(Path, <<\"/\">>, [global, trim_all]).\n\n%% @doc Return the HTTP path label,\n%% Used for cowboy_requests_total and gun_requests_total metrics, as well as P3 handling.\nlabel_http_path(Path) when is_list(Path) ->\n\tname_route(Path);\nlabel_http_path(Path) ->\n\tlabel_http_path(split_path(Path)).\n\nlabel_req(Req) ->\n\tSplitPath = ar_http_iface_server:split_path(cowboy_req:path(Req)),\n\tar_http_iface_server:label_http_path(SplitPath).\n\nhandle_call(Msg, From, State) ->\n\t?LOG_WARNING([{process, ?MODULE}, {received, Msg}, {from, From}]),\n\t{noreply, State}.\n\nhandle_cast(Msg, State) ->\n\t?LOG_WARNING([{process, ?MODULE}, {received, Msg}]),\n\t{noreply, State}.\n\nhandle_info(Msg = {'EXIT', _From, Reason}, _State) ->\n\t?LOG_ERROR([{process, ?MODULE}, {received, Msg}]),\n\t{stop, Reason};\nhandle_info(Msg, State) ->\n\t?LOG_WARNING([{process, ?MODULE}, {received, Msg}]),\n\t{noreply, State}.\n\nterminate(Reason, _State) ->\n\t?LOG_INFO([{module, ?MODULE},{pid, self()},{callback, terminate},{reason, Reason}]),\n\tok.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\nstart_http_iface_listener(Config) ->\n\tDispatch = cowboy_router:compile([{'_', ?HTTP_IFACE_ROUTES}]),\n\tTlsCertfilePath = Config#config.tls_cert_file,\n\tTlsKeyfilePath = Config#config.tls_key_file,\n\tTransportOpts = #{\n\t\t% ranch_tcp 
parameters\n\t\tbacklog => Config#config.'http_api.tcp.backlog',\n\t\tdelay_send => Config#config.'http_api.tcp.delay_send',\n\t\tkeepalive => Config#config.'http_api.tcp.keepalive',\n\t\tlinger => {\n\t\t\t\tConfig#config.'http_api.tcp.linger',\n\t\t\t\tConfig#config.'http_api.tcp.linger_timeout'\n\t\t},\n\t\tmax_connections => Config#config.'http_api.tcp.max_connections',\n\t\tnodelay => Config#config.'http_api.tcp.nodelay',\n\t\tnum_acceptors => Config#config.'http_api.tcp.num_acceptors',\n\t\tsend_timeout_close => Config#config.'http_api.tcp.send_timeout_close',\n\t\tsend_timeout => Config#config.'http_api.tcp.send_timeout',\n\t\tshutdown => Config#config.'http_api.tcp.listener_shutdown',\n\t\tsocket_opts => [\n\t\t\t{port, Config#config.port}\n\t\t]\n\t},\n\tProtocolOpts = #{\n\t\tactive_n => Config#config.'http_api.http.active_n',\n\t\tinactivity_timeout => Config#config.'http_api.http.inactivity_timeout',\n\t\tlinger_timeout => Config#config.'http_api.http.linger_timeout',\n\t\trequest_timeout => Config#config.'http_api.http.request_timeout',\n\t\tidle_timeout => Config#config.http_api_transport_idle_timeout,\n\t\tmiddlewares => ?HTTP_IFACE_MIDDLEWARES,\n\t\tenv => #{\n\t\t\tdispatch => Dispatch\n\t\t},\n\t\tmetrics_callback => fun prometheus_cowboy2_instrumenter:observe/1,\n\t\tstream_handlers => [cowboy_metrics_h, cowboy_stream_h]\n\t},\n\tcase TlsCertfilePath of\n\t\tnot_set ->\n\t\t\tcowboy:start_clear(ar_http_iface_listener, TransportOpts, ProtocolOpts);\n\t\t_ ->\n\t\t\tcowboy:start_tls(ar_http_iface_listener, TransportOpts ++ [\n\t\t\t\t{certfile, TlsCertfilePath},\n\t\t\t\t{keyfile, TlsKeyfilePath}\n\t\t\t], ProtocolOpts)\n\tend.\n\nname_route([]) ->\n\t\"/\";\nname_route([<<\"current_block\">>]) ->\n\t\"/current/block\";\nname_route([<<_Hash:43/binary, _MaybeExt/binary>>]) ->\n\t\"/{hash}[.{ext}]\";\nname_route([Bin]) ->\n\tL = binary_to_list(Bin),\n\tcase lists:member(L, ?ENDPOINTS) of\n\t\ttrue ->\n\t\t\t\"/\" ++ L;\n\t\tfalse ->\n\t\t\tundefined\n\tend;\nname_route([<<\"peer\">> | _]) ->\n\t\"/peer/...\";\n\nname_route([<<\"jobs\">>, _PrevOutput]) ->\n\t\"/jobs/{prev_output}\";\n\nname_route([<<\"vdf\">>, <<\"session\">>]) ->\n\t\"/vdf/session\";\nname_route([<<\"vdf2\">>, <<\"session\">>]) ->\n\t\"/vdf2/session\";\nname_route([<<\"vdf3\">>, <<\"session\">>]) ->\n\t\"/vdf3/session\";\nname_route([<<\"vdf4\">>, <<\"session\">>]) ->\n\t\"/vdf4/session\";\n\nname_route([<<\"vdf\">>, <<\"previous_session\">>]) ->\n\t\"/vdf/previous_session\";\nname_route([<<\"vdf2\">>, <<\"previous_session\">>]) ->\n\t\"/vdf2/previous_session\";\nname_route([<<\"vdf4\">>, <<\"previous_session\">>]) ->\n\t\"/vdf4/previous_session\";\n\nname_route([<<\"tx\">>, <<\"pending\">>]) ->\n\t\"/tx/pending\";\nname_route([<<\"tx\">>, _Hash, <<\"status\">>]) ->\n\t\"/tx/{hash}/status\";\nname_route([<<\"tx\">>, _Hash]) ->\n\t\"/tx/{hash}\";\nname_route([<<\"tx2\">>, _Hash]) ->\n\t\"/tx2/{hash}\";\nname_route([<<\"unconfirmed_tx\">>, _Hash]) ->\n\t\"/unconfirmed_tx/{hash}\";\nname_route([<<\"unconfirmed_tx2\">>, _Hash]) ->\n\t\"/unconfirmed_tx2/{hash}\";\nname_route([<<\"tx\">>, _Hash, << \"data\" >>]) ->\n\t\"/tx/{hash}/data\";\nname_route([<<\"tx\">>, _Hash, << \"data.\", _/binary >>]) ->\n\t\"/tx/{hash}/data.{ext}\";\nname_route([<<\"tx\">>, _Hash, << \"offset\" >>]) ->\n\t\"/tx/{hash}/offset\";\nname_route([<<\"tx\">>, _Hash, _Field]) ->\n\t\"/tx/{hash}/{field}\";\n\nname_route([<<\"chunk\">>, _Offset]) ->\n\t\"/chunk/{offset}\";\nname_route([<<\"chunk2\">>, _Offset]) 
->\n\t\"/chunk2/{offset}\";\n\nname_route([<<\"data_roots\">>, _Offset]) ->\n\t\"/data_roots/{offset}\";\n\nname_route([<<\"chunk_proof\">>, _Offset]) ->\n\t\"/chunk_proof/{offset}\";\nname_route([<<\"chunk_proof2\">>, _Offset]) ->\n\t\"/chunk_proof2/{offset}\";\n\nname_route([<<\"data_sync_record\">>, _Start, _Limit]) ->\n\t\"/data_sync_record/{start}/{limit}\";\nname_route([<<\"data_sync_record\">>, _Start, _End, _Limit]) ->\n\t\"/data_sync_record/{start}/{end}/{limit}\";\n\nname_route([<<\"footprints\">>, _Partition, _Number]) ->\n\t\"/footprints/{partition}/{footprint_number}\";\n\nname_route([<<\"price\">>, _SizeInBytes]) ->\n\t\"/price/{bytes}\";\nname_route([<<\"price\">>, _SizeInBytes, _Addr]) ->\n\t\"/price/{bytes}/{address}\";\n\nname_route([<<\"price2\">>, _SizeInBytes]) ->\n\t\"/price2/{bytes}\";\nname_route([<<\"price2\">>, _SizeInBytes, _Addr]) ->\n\t\"/price2/{bytes}/{address}\";\n\nname_route([<<\"v2price\">>, _SizeInBytes]) ->\n\t\"/v2price/{bytes}\";\nname_route([<<\"v2price\">>, _SizeInBytes, _Addr]) ->\n\t\"/v2price/{bytes}/{address}\";\n\nname_route([<<\"optimistic_price\">>, _SizeInBytes]) ->\n\t\"/optimistic_price/{bytes}\";\nname_route([<<\"optimistic_price\">>, _SizeInBytes, _Addr]) ->\n\t\"/optimistic_price/{bytes}/{address}\";\n\nname_route([<<\"reward_history\">>, _BH]) ->\n\t\"/reward_history/{block_hash}\";\n\nname_route([<<\"block_time_history\">>, _BH]) ->\n\t\"/block_time_history/{block_hash}\";\n\nname_route([<<\"wallet\">>, _Addr, <<\"balance\">>]) ->\n\t\"/wallet/{addr}/balance\";\nname_route([<<\"wallet\">>, _Addr, <<\"last_tx\">>]) ->\n\t\"/wallet/{addr}/last_tx\";\nname_route([<<\"wallet\">>, _Addr, <<\"txs\">>]) ->\n\t\"/wallet/{addr}/txs\";\nname_route([<<\"wallet\">>, _Addr, <<\"txs\">>, _EarliestTX]) ->\n\t\"/wallet/{addr}/txs/{earliest_tx}\";\nname_route([<<\"wallet\">>, _Addr, <<\"deposits\">>]) ->\n\t\"/wallet/{addr}/deposits\";\nname_route([<<\"wallet\">>, _Addr, <<\"deposits\">>, _EarliestDeposit]) ->\n\t\"/wallet/{addr}/deposits/{earliest_deposit}\";\n\nname_route([<<\"wallet_list\">>, _Root]) ->\n\t\"/wallet_list/{root_hash}\";\nname_route([<<\"wallet_list\">>, _Root, _Cursor]) ->\n\t\"/wallet_list/{root_hash}/{cursor}\";\nname_route([<<\"wallet_list\">>, _Root, _Addr, <<\"balance\">>]) ->\n\t\"/wallet_list/{root_hash}/{addr}/balance\";\n\nname_route([<<\"block_index\">>, _From, _To]) ->\n\t\"/block_index/{from}/{to}\";\nname_route([<<\"block_index2\">>, _From, _To]) ->\n\t\"/block_index2/{from}/{to}\";\nname_route([<<\"hash_list\">>, _From, _To]) ->\n\t\"/hash_list/{from}/{to}\";\nname_route([<<\"hash_list2\">>, _From, _To]) ->\n\t\"/hash_list2/{from}/{to}\";\n\nname_route([<<\"block\">>, <<\"hash\">>, _IndepHash]) ->\n\t\"/block/hash/{indep_hash}\";\nname_route([<<\"block\">>, <<\"height\">>, _Height]) ->\n\t\"/block/height/{height}\";\nname_route([<<\"block2\">>, <<\"hash\">>, _IndepHash]) ->\n\t\"/block2/hash/{indep_hash}\";\nname_route([<<\"block2\">>, <<\"height\">>, _Height]) ->\n\t\"/block2/height/{height}\";\nname_route([<<\"block\">>, _Type, _IDBin, _Field]) ->\n\t\"/block/{type}/{id_bin}/{field}\";\nname_route([<<\"block\">>, <<\"height\">>, _Height, <<\"wallet\">>, _Addr, <<\"balance\">>]) ->\n\t\"/block/height/{height}/wallet/{addr}/balance\";\nname_route([<<\"block\">>, <<\"current\">>]) ->\n\t\"/block/current\";\n\nname_route([<<\"coordinated_mining\">>, <<\"h1\">>]) ->\n\t\"/coordinated_mining/h1\";\nname_route([<<\"coordinated_mining\">>, <<\"h2\">>]) 
->\n\t\"/coordinated_mining/h2\";\nname_route([<<\"coordinated_mining\">>, <<\"partition_table\">>]) ->\n\t\"/coordinated_mining/partition_table\";\nname_route([<<\"coordinated_mining\">>, <<\"publish\">>]) ->\n\t\"/coordinated_mining/publish\";\nname_route([<<\"coordinated_mining\">>, <<\"state\">>]) ->\n\t\"/coordinated_mining/state\";\n\n\nname_route(_) ->\n\tundefined.\n"
  },
  {
    "path": "apps/arweave/src/ar_http_req.erl",
    "content": "-module(ar_http_req).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-export([body/2, read_body_chunk/3, body_read_time/1]).\n\n-define(AR_HTTP_REQ_BODY, '_ar_http_req_body').\n-define(AR_HTTP_REQ_BODY_READ_TIME, '_ar_http_req_body_read_time').\n\nbody(Req, SizeLimit) ->\n\tcase maps:get(?AR_HTTP_REQ_BODY, Req, not_set) of\n\t\tnot_set ->\n\t\t\tStartTime = erlang:monotonic_time(),\n\t\t\tread_complete_body(Req, SizeLimit, StartTime);\n\t\tBody ->\n\t\t\t{ok, Body, Req}\n\tend.\n\n%% @doc The elapsed time (in native units) to read the request body via `read_complete_body()`\nbody_read_time(Req) ->\n\tmaps:get(?AR_HTTP_REQ_BODY_READ_TIME, Req, undefined).\n\nread_body_chunk(Req, Size, Timeout) ->\n\tcase cowboy_req:read_body(Req, #{ length => Size, period => Timeout }) of\n\t\t{_, Chunk, Req2} when byte_size(Chunk) >= Size ->\n\t\t\tprometheus_counter:inc(http_server_accepted_bytes_total,\n\t\t\t\t\t[ar_prometheus_cowboy_labels:label_value(route, #{ req => Req2 })], Size),\n\t\t\t{ok, Chunk, Req2};\n\t\t{_, Chunk, Req2} ->\n\t\t\tprometheus_counter:inc(http_server_accepted_bytes_total,\n\t\t\t\t\t[ar_prometheus_cowboy_labels:label_value(route, #{ req => Req2 })],\n\t\t\t\t\tbyte_size(Chunk)),\n\t\t\texit(timeout)\n\tend.\n\nread_complete_body(Req, SizeLimit, StartTime) ->\n\tParent = self(),\n\tRef = make_ref(),\n\t{Pid, MonRef} = spawn_monitor(\n\t\tfun() ->\n\t\t\tdo_read_body(Req, Parent, Ref)\n\t\tend),\n\tTRef = erlang:send_after(?DEFAULT_HTTP_MAX_BODY_READ_TIME_MS, self(),\n\t\t{body_timeout, Ref}),\n\tResult = accumulate_body(MonRef, Pid, Ref, Req, [], 0, SizeLimit, StartTime),\n\terlang:cancel_timer(TRef),\n\tflush_messages(Ref),\n\tdemonitor(MonRef, [flush]),\n\tcase Result of\n\t\texit_timeout ->\n\t\t\texit(timeout);\n\t\t_ ->\n\t\t\tResult\n\tend.\n\naccumulate_body(MonRef, Pid, Ref, Req, Acc, Size, SizeLimit, StartTime) ->\n\treceive\n\t\t{Ref, _OkOrMore, _Data, DataSize} when Size + DataSize > SizeLimit ->\n\t\t\texit(Pid, kill),\n\t\t\t{error, body_size_too_large};\n\t\t{Ref, more, Data, DataSize} ->\n\t\t\tNewSize = Size + DataSize,\n\t\t\taccumulate_body(MonRef, Pid, Ref, Req, [Acc | Data],\n\t\t\t\tNewSize, SizeLimit, StartTime);\n\t\t{Ref, ok, Data, _DataSize} ->\n\t\t\tBody = iolist_to_binary([Acc | Data]),\n\t\t\tBodyReadTime = erlang:monotonic_time() - StartTime,\n\t\t\t{ok, Body, with_body_req_fields(Req, Body, BodyReadTime)};\n\t\t{body_timeout, Ref} ->\n\t\t\texit(Pid, kill),\n\t\t\texit_timeout;\n\t\t{'DOWN', MonRef, process, Pid, timeout} ->\n\t\t\texit_timeout;\n\t\t{'DOWN', MonRef, process, Pid, Reason} ->\n\t\t\t{error, Reason}\n\tafter ?DEFAULT_HTTP_READ_BODY_PERIOD_MS + 2000 ->\n\t\texit(Pid, kill),\n\t\texit_timeout\n\tend.\n\ndo_read_body(Req, Parent, Ref) ->\n\t{MoreOrOk, Data, ReadReq} = cowboy_req:read_body(Req,\n\t\t#{ period => ?DEFAULT_HTTP_READ_BODY_PERIOD_MS,\n\t\t   timeout => ?DEFAULT_HTTP_READ_BODY_PERIOD_MS + 1000 }),\n\tDataSize = byte_size(Data),\n\tprometheus_counter:inc(\n\t\thttp_server_accepted_bytes_total,\n\t\t[ar_prometheus_cowboy_labels:label_value(route, #{ req => Req })],\n\t\tDataSize),\n\tParent ! 
{Ref, MoreOrOk, Data, DataSize},\n\tcase MoreOrOk of\n\t\tok ->\n\t\t\tok;\n\t\tmore ->\n\t\t\tdo_read_body(ReadReq, Parent, Ref)\n\tend.\n\nflush_messages(Ref) ->\n\treceive\n\t\t{body_timeout, Ref} ->\n\t\t\tflush_messages(Ref);\n\t\t{Ref, _, _, _} ->\n\t\t\tflush_messages(Ref)\n\tafter 0 ->\n\t\tok\n\tend.\n\nwith_body_req_fields(Req, Body, BodyReadTime) ->\n\tReq#{\n\t\t?AR_HTTP_REQ_BODY => Body,\n\t\t?AR_HTTP_REQ_BODY_READ_TIME => BodyReadTime }.\n"
  },
  {
    "path": "apps/arweave/src/ar_http_sup.erl",
    "content": "%% This Source Code Form is subject to the terms of the GNU General\n%% Public License, v. 2.0. If a copy of the GPLv2 was not distributed\n%% with this file, You can obtain one at\n%% https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html\n\n-module(ar_http_sup).\n\n-behaviour(supervisor).\n\n%% API\n-export([start_link/0]).\n\n%% Supervisor callbacks\n-export([init/1]).\n\n-include_lib(\"arweave/include/ar_sup.hrl\").\n\n%% ===================================================================\n%% API functions\n%% ===================================================================\n\nstart_link() ->\n\tsupervisor:start_link({local, ?MODULE}, ?MODULE, []).\n\n%% ===================================================================\n%% Supervisor callbacks\n%% ===================================================================\n\ninit([]) ->\n\t{ok, {{one_for_one, 5, 10}, [?CHILD(ar_http, worker)]}}.\n"
  },
  {
    "path": "apps/arweave/src/ar_http_util.erl",
    "content": "-module(ar_http_util).\n\n-export([get_tx_content_type/1, arweave_peer/1]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-define(PRINTABLE_ASCII_REGEX, \"^[ -~]*$\").\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nget_tx_content_type(#tx { tags = Tags }) ->\n\tcase lists:keyfind(<<\"Content-Type\">>, 1, Tags) of\n\t\t{<<\"Content-Type\">>, ContentType} ->\n\t\t\tcase is_valid_content_type(ContentType) of\n\t\t\t\ttrue -> {valid, ContentType};\n\t\t\t\tfalse -> invalid\n\t\t\tend;\n\t\tfalse ->\n\t\t\tnone\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc Check and valid `x-p2p-port' header.\n%% @end\n%%--------------------------------------------------------------------\n-spec arweave_peer(Req) -> Return when\n\tReq :: cowboy:req(),\n\tReturn :: {A, A, A, A, Port},\n\tA :: pos_integer(),\n\tPort :: pos_integer().\n\narweave_peer(Req) ->\n\tP2PPort = cowboy_req:header(<<\"x-p2p-port\">>, Req),\n\t{IP, _Port} = cowboy_req:peer(Req),\n\tarweave_peer2(P2PPort, IP).\n\n%%--------------------------------------------------------------------\n%% @private\n%% @hidden\n%%--------------------------------------------------------------------\n-spec arweave_peer2(Binary, IP) -> Return when\n\tBinary :: binary() | undefined,\n\tIP :: {A, A, A, A},\n\tReturn :: {A, A, A, A, Port},\n\tA :: pos_integer(),\n\tPort :: pos_integer().\n\narweave_peer2(Binary, {A, B, C, D}) when is_binary(Binary) ->\n\ttry\n\t\tbinary_to_integer(Binary)\n\tof\n\t\tP when P >= 1 andalso P =< 65535 ->\n\t\t\t{A, B, C, D, P};\n\t\t_ ->\n\t\t\t{A, B, C, D, ?DEFAULT_HTTP_IFACE_PORT}\n\tcatch\n\t\t_:_ ->\n\t\t\t{A, B, C, D, ?DEFAULT_HTTP_IFACE_PORT}\n\tend;\narweave_peer2(_, {A, B, C, D}) ->\n\t{A, B, C, D, ?DEFAULT_HTTP_IFACE_PORT}.\n\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nis_valid_content_type(ContentType) ->\n\tcase re:run(\n\t\tContentType,\n\t\t?PRINTABLE_ASCII_REGEX,\n\t\t[dollar_endonly, {capture, none}]\n\t) of\n\t\tmatch -> true;\n\t\tnomatch -> false\n\tend.\n\narweave_peer_test() ->\n\t[\n\t\t% an undefined x-p2p-port header should return the\n\t\t% default arweave port\n\t\t?assertEqual(\n\t\t\t{1,2,3,4, ?DEFAULT_HTTP_IFACE_PORT},\n\t\t\tarweave_peer(#{\n\t\t\t\theaders => #{},\n\t\t\t\tpeer => {{1,2,3,4}, 1234}\n\t\t\t})\n\t\t),\n\n\t\t% 1/TCP port is valid\n\t\t?assertEqual(\n\t\t\t{1,2,3,4, 1},\n\t\t\tarweave_peer(#{\n\t\t\t\theaders => #{ <<\"x-p2p-port\">> => <<\"1\">> },\n\t\t\t\tpeer => {{1,2,3,4}, 1234}\n\t\t\t})\n\t\t),\n\n\t\t% 65535/TCP port is valid\n\t\t?assertEqual(\n\t\t\t{1,2,3,4, 65535},\n\t\t\tarweave_peer(#{\n\t\t\t\theaders => #{ <<\"x-p2p-port\">> => <<\"65535\">> },\n\t\t\t\tpeer => {{1,2,3,4}, 1234}\n\t\t\t})\n\t\t),\n\n\t\t% 0/TCP port is invalid\n\t\t?assertEqual(\n\t\t\t{1,2,3,4, ?DEFAULT_HTTP_IFACE_PORT},\n\t\t\tarweave_peer(#{\n\t\t\t\theaders => #{ <<\"x-p2p-port\">> => <<\"0\">> },\n\t\t\t\tpeer => {{1,2,3,4}, 1234}\n\t\t\t})\n\t\t),\n\n\t\t% 65536/TCP port is invalid\n\t\t?assertEqual(\n\t\t\t{1,2,3,4, ?DEFAULT_HTTP_IFACE_PORT},\n\t\t\tarweave_peer(#{\n\t\t\t\theaders => #{ <<\"x-p2p-port\">> => <<\"65536\">> },\n\t\t\t\tpeer => {{1,2,3,4}, 1234}\n\t\t\t})\n\t\t),\n\n\t\t% a TCP port must be an integer, if not, a default\n\t\t% port is 
returned.\n\t\t?assertEqual(\n\t\t\t{1,2,3,4, ?DEFAULT_HTTP_IFACE_PORT},\n\t\t\tarweave_peer(#{\n\t\t\t\theaders => #{ <<\"x-p2p-port\">> => <<\"test\">> },\n\t\t\t\tpeer => {{1,2,3,4}, 1234}\n\t\t\t})\n\t\t)\n\t].\n"
  },
  {
    "path": "apps/arweave/src/ar_ignore_registry.erl",
    "content": "%%% @doc The module offers an interface to the \"ignore registry\" -\n%%% an in-memory storage used for avoiding redundant processing of\n%%% blocks and transactions, in the setting of historically synchronous\n%%% POST /block and POST /tx requests. An incoming block or transaction is\n%%% temporary placed in the registry. Requests with the same identifiers\n%%% are ignored for the time. After a block or a transaction is validated\n%%% a permanent record can be inserted into the registry.\n%%% @end\n-module(ar_ignore_registry).\n\n-export([add/1, add_ref/2, add_ref/3, remove/1, remove_ref/2,\n\t\tadd_temporary/2, remove_temporary/2, member/1,\n\t\tpermanent_member/1]).\n\n%% @doc Put a permanent ID record into the registry.\nadd(ID) ->\n\tets:insert(ignored_ids, {ID, permanent}).\n\n%% @doc Remove a permanent ID record from the registry.\nremove(ID) ->\n\tcatch ets:delete_object(ignored_ids, {ID, permanent}).\n\n%% @doc Put a referenced ID record into the registry.\n%% The record may be removed by ar_ignore_registry:remove_ref/2.\nadd_ref(ID, Ref) ->\n\tadd_ref(ID, Ref, 10000).\n\nadd_ref(ID, Ref, Timeout) ->\n\tets:insert(ignored_ids, {ID, {ref, Ref}}),\n\t{ok, _} = ar_timer:apply_after(\n\t\tTimeout,\n\t\tar_ignore_registry,\n\t\tremove_ref,\n\t\t[ID, Ref],\n\t\t#{ skip_on_shutdown => false }\n\t).\n\n%% @doc Remove a referenced ID record from the registry.\nremove_ref(ID, Ref) ->\n\tcatch ets:delete_object(ignored_ids, {ID, {ref, Ref}}).\n\n%% @doc Put a temporary ID record into the registry.\n%% The record expires after Timeout milliseconds.\nadd_temporary(ID, Timeout) ->\n\tRef = make_ref(),\n\tets:insert(ignored_ids, {ID, {temporary, Ref}}),\n\t{ok, _} = ar_timer:apply_after(\n\t\tTimeout,\n\t\tar_ignore_registry,\n\t\tremove_temporary,\n\t\t[ID, Ref],\n\t\t#{ skip_on_shutdown => false }\n\t).\n\n%% @doc Remove the temporary record from the registry.\nremove_temporary(ID, Ref) ->\n\tcatch ets:delete_object(ignored_ids, {ID, {temporary, Ref}}).\n\n%% @doc Check if there is a temporary or a permanent record in the registry.\nmember(ID) ->\n\tcase ets:lookup(ignored_ids, ID) of\n\t\t[] ->\n\t\t\tfalse;\n\t\t_ ->\n\t\t\ttrue\n\tend.\n\n%% @doc Check if there is a permanent record in the registry.\npermanent_member(ID) ->\n\tEntries = ets:lookup(ignored_ids, ID),\n\tlists:member({ID, permanent}, Entries).\n"
  },
  {
    "path": "apps/arweave/src/ar_inflation.erl",
    "content": "%%% @doc Module responsible for managing and testing the inflation schedule of \n%%% the Arweave main network.\n-module(ar_inflation).\n\n-export([calculate/1, blocks_per_year/1]).\n\n-include_lib(\"arweave/include/ar_inflation.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Calculate the static reward received for mining a given block.\n%% This reward portion depends only on block height, not the number of transactions.\n-ifdef(AR_TEST).\ncalculate(_Height) ->\n\t10.\n-else.\ncalculate(Height) ->\n\tcalculate2(Height).\n-endif.\n\ncalculate2(Height) when Height =< ?FORK_15_HEIGHT ->\n\tpre_15_calculate(Height);\ncalculate2(Height) when Height =< ?PRE_25_BLOCKS_PER_YEAR ->\n    calculate_base(Height) + ?POST_15_Y1_EXTRA;\ncalculate2(Height) ->\n\tcase Height >= ar_fork:height_2_5() of\n\t\ttrue ->\n\t\t\tcalculate_base(Height);\n\t\tfalse ->\n\t\t\tcalculate_base_pre_fork_2_5(Height)\n\tend.\n\n%% @doc An estimation for the number of blocks produced in a year.\n%% Note: I've confirmed that when TARGET_BLOCK_TIME = 120 the following equation is\n%% exactly equal to `30 * 24 * 365` when executed within an Erlang shell (i.e. 262800).\nblocks_per_year(Height) ->\n\t((60 * 60 * 24 * 365) div ar_testnet:target_block_time(Height)).\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\n%% @doc Pre-1.5.0.0 style reward calculation.\npre_15_calculate(Height) ->\n\tRewardDelay = (?PRE_15_BLOCKS_PER_YEAR)/4,\n\tcase Height =< RewardDelay of\n\t\ttrue ->\n\t\t\t1;\n\t\tfalse ->\n\t\t\t?WINSTON_PER_AR\n\t\t\t\t* 0.2\n\t\t\t\t* ?GENESIS_TOKENS\n\t\t\t\t* math:pow(2, -(Height - RewardDelay) / ?PRE_15_BLOCKS_PER_YEAR)\n\t\t\t\t* math:log(2)\n\t\t\t\t/ ?PRE_15_BLOCKS_PER_YEAR\n\tend.\n\ncalculate_base(Height) ->\n\t{Ln2Dividend, Ln2Divisor} = ?LN2,\n\tDividend = Height * Ln2Dividend,\n\tDivisor = blocks_per_year(Height) * Ln2Divisor,\n\tPrecision = ?INFLATION_NATURAL_EXPONENT_DECIMAL_FRACTION_PRECISION,\n\t{EXDividend, EXDivisor} = ar_fraction:natural_exponent({Dividend, Divisor}, Precision),\n\t?GENESIS_TOKENS\n\t\t* ?WINSTON_PER_AR\n\t\t* EXDivisor\n\t\t* 2\n\t\t* Ln2Dividend\n\t\tdiv (\n\t\t\t10\n\t\t\t* blocks_per_year(Height)\n\t\t\t* Ln2Divisor\n\t\t\t* EXDividend\n\t\t).\n\ncalculate_base_pre_fork_2_5(Height) ->\n\t?WINSTON_PER_AR\n\t\t* (\n\t\t\t0.2\n\t\t\t* ?GENESIS_TOKENS\n\t\t\t* math:pow(2, -(Height) / ?PRE_25_BLOCKS_PER_YEAR)\n\t\t\t* math:log(2)\n\t\t)\n\t\t/ ?PRE_25_BLOCKS_PER_YEAR.\n\n%%%===================================================================\n%%% Tests.\n%%%===================================================================\n\n%% Test that the within tolerance helper function works as anticipated.\nis_in_tolerance_test() ->\n    true = is_in_tolerance(100, 100.5, 1),\n    false = is_in_tolerance(100, 101.5, 1),\n    true = is_in_tolerance(100.9, 100, 1),\n    false = is_in_tolerance(101.1, 100, 1),\n    true = is_in_tolerance(100.0001, 100, 0.01),\n    false = is_in_tolerance(100.0001, 100, 0.00009),\n    true = is_in_tolerance(?AR(100 * 1000000), ?AR(100 * 1000000) + 10, 0.01).\n\n%%% Calculate and verify per-year expected and actual inflation.\n\nyear_1_test_() ->\n\t{timeout, 60, fun test_year_1/0}.\n\ntest_year_1() ->\n    true = 
is_in_tolerance(year_sum_rewards(0), ?AR(5500000)).\n\nyear_2_test_() ->\n\t{timeout, 60, fun test_year_2/0}.\n\ntest_year_2() ->\n    true = is_in_tolerance(year_sum_rewards(1), ?AR(2750000)).\n\nyear_3_test_() ->\n\t{timeout, 60, fun test_year_3/0}.\n\ntest_year_3() ->\n    true = is_in_tolerance(year_sum_rewards(2), ?AR(1375000)).\n\nyear_4_test_() ->\n\t{timeout, 60, fun test_year_4/0}.\n\ntest_year_4() ->\n    true = is_in_tolerance(year_sum_rewards(3), ?AR(687500)).\n\nyear_5_test_() ->\n\t{timeout, 60, fun test_year_5/0}.\n\ntest_year_5() ->\n    true = is_in_tolerance(year_sum_rewards(4), ?AR(343750)).\n\nyear_6_test_() ->\n\t{timeout, 60, fun test_year_6/0}.\n\ntest_year_6() ->\n    true = is_in_tolerance(year_sum_rewards(5), ?AR(171875)).\n\nyear_7_test_() ->\n\t{timeout, 60, fun test_year_7/0}.\n\ntest_year_7() ->\n    true = is_in_tolerance(year_sum_rewards(6), ?AR(85937.5)).\n\nyear_8_test_() ->\n\t{timeout, 60, fun test_year_8/0}.\n\ntest_year_8() ->\n    true = is_in_tolerance(year_sum_rewards(7), ?AR(42968.75)).\n\nyear_9_test_() ->\n\t{timeout, 60, fun test_year_9/0}.\n\ntest_year_9() ->\n    true = is_in_tolerance(year_sum_rewards(8), ?AR(21484.375)).\n\nyear_10_test_() ->\n\t{timeout, 60, fun test_year_10/0}.\n\ntest_year_10() ->\n    true = is_in_tolerance(year_sum_rewards(9), ?AR(10742.1875)).\n\n%% @doc Is the value X within TolerancePercent of Y.\nis_in_tolerance(X, Y) ->\n    is_in_tolerance(X, Y, ?DEFAULT_TOLERANCE_PERCENT).\nis_in_tolerance(X, Y, TolerancePercent) ->\n    Tolerance = TolerancePercent / 100,\n    ( X >= ( Y * (1 - Tolerance ) ) ) and\n    ( X =< ( Y + (Y * Tolerance ) ) ).\n\n%% @doc Count the total inflation rewards for a given year.\nyear_sum_rewards(YearNum) ->\n    year_sum_rewards(YearNum, fun calculate2/1).\nyear_sum_rewards(YearNum, Fun) ->\n    sum_rewards(\n        Fun,\n        (YearNum * trunc(?PRE_25_BLOCKS_PER_YEAR)),\n        ((YearNum + 1) * trunc(?PRE_25_BLOCKS_PER_YEAR))\n    ).\n\n%% @doc Calculate the reward sum between two blocks.\nsum_rewards(Fun, Start, End) ->\n    lists:sum(lists:map(Fun, lists:seq(Start, End))).\n"
  },
  {
    "path": "apps/arweave/src/ar_info.erl",
    "content": "%%%\n%%% @doc Gathers the data for the /info and /recent endpoints.\n%%%\n\n-module(ar_info).\n\n-export([get_info/0, get_recent/0]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave/include/ar_chain_stats.hrl\").\n\nget_info() ->\n\t{Time, Current} =\n\t\ttimer:tc(fun() -> ar_node:get_current_block_hash() end),\n\t{Time2, Height} =\n\t\ttimer:tc(fun() -> ar_node:get_height() end),\n\t[{_, BlockCount}] = ets:lookup(ar_header_sync, synced_blocks),\n    #{\n        <<\"network\">> => list_to_binary(?NETWORK_NAME),\n        <<\"version\">> => ?CLIENT_VERSION,\n        <<\"release\">> => ?RELEASE_NUMBER,\n        <<\"height\">> =>\n            case Height of\n                not_joined -> -1;\n                H -> H\n            end,\n        <<\"current\">> =>\n            case is_atom(Current) of\n                true -> atom_to_binary(Current, utf8);\n                false -> ar_util:encode(Current)\n            end,\n        <<\"blocks\">> => BlockCount,\n        <<\"peers\">> => prometheus_gauge:value(arweave_peer_count),\n        <<\"queue_length\">> =>\n            element(\n                2,\n                erlang:process_info(whereis(ar_node_worker), message_queue_len)\n            ),\n        <<\"node_state_latency\">> => (Time + Time2) div 2\n    }.\n\nget_recent() ->\n    #{\n        %% #{\n        %%   \"id\": <indep_hash>,\n        %%   \"received\": <received_timestamp>\",\n        %%   \"height\": <height>\n        %% }\n        <<\"blocks\">> => get_recent_blocks(),\n        %% #{\n        %%   \"id\": <hash_of_block_ids>,\n        %%   \"height\": <height_of_first_orphaned_block>,\n        %%   \"timestamp\": <timestamp_of_when_fork_was_abandoned>\n        %%   \"blocks\": [<block_id>, <block_id>, ...]\n        %% }\n        <<\"forks\">> => get_recent_forks()\n    }.\n\n%% @doc Return the the most recent blocks in reverse chronological order.\n%% \n%% There are a few list reversals that happen here:\n%% 1. get_block_anchors returns the blocks in reverse chronological order (latest block first)\n%% 2. [Element | Acc] reverses the list into chronological order (latest block last)\n%% 3. The final lists:reverse puts the list back into reverse chronological order\n%%    (latest block first)\nget_recent_blocks() ->\n    Anchors = lists:sublist(ar_node:get_block_anchors(), ?CHECKPOINT_DEPTH),\n    Blocks = lists:foldl(\n        fun(H, Acc) ->\n            B = ar_block_cache:get(block_cache, H),\n            [#{\n                <<\"id\">> => ar_util:encode(H),\n                <<\"received\">> => get_block_timestamp(B, length(Acc)),\n                <<\"height\">> => B#block.height\n            } | Acc]\n        end,\n        [],\n        Anchors\n    ),\n    lists:reverse(Blocks).\n\n%% @doc Return the the most recent forks in reverse chronological order.\nget_recent_forks() ->\n    CutOffTime = os:system_time(seconds) - ?RECENT_FORKS_AGE,\n    case ar_chain_stats:get_forks(CutOffTime) of\n        {error, _} -> error;\n        Forks ->\n            %% 1. We receive forks in ascending order (oldest first)\n            %% 2. But since we want to truncate the list to only include the most recent forks,\n            %%    we first reverse...\n            ReversedForks = lists:reverse(Forks),\n            %% 3. Then truncate...\n            TruncatedForks = lists:sublist(ReversedForks, ?RECENT_FORKS_LENGTH),\n            %% 4. 
Then convert to JSON maps\n            %%    (which reverses the list again due to list prepending)\n            RecentForks = lists:foldl(\n                fun(Fork, Acc) ->\n                    #fork{ \n                        id = ID, height = Height, timestamp = Timestamp, \n                        block_ids = BlockIDs} = Fork,\n                    [#{\n                        <<\"id\">> => ar_util:encode(ID),\n                        <<\"height\">> => Height,\n                        <<\"timestamp\">> => Timestamp div 1000,\n                        <<\"blocks\">> => [ ar_util:encode(BlockID) || BlockID <- BlockIDs ]\n                    } | Acc]\n                end,\n                [],\n                TruncatedForks\n            ),\n            %% 5. Then finally reverse the list again so we end up with forks in descending\n            %%    order (newest first)\n            lists:reverse(RecentForks)\n    end.\n\nget_block_timestamp(B, Depth)\n        when Depth < ?RECENT_BLOCKS_WITHOUT_TIMESTAMP orelse\n            B#block.receive_timestamp =:= undefined ->\n    <<\"pending\">>;\nget_block_timestamp(B, _Depth) ->\n    ar_util:timestamp_to_seconds(B#block.receive_timestamp).\n\n"
  },
  {
    "path": "apps/arweave/src/ar_intervals.erl",
    "content": "%%% @doc A set of non-overlapping intervals.\n-module(ar_intervals).\n\n-export([new/0, from_list/1, add/3, delete/3, cut/2, is_inside/2, sum/1, union/2, serialize/2,\n\t\tsafe_from_etf/1, count/1, is_empty/1, take_smallest/1, take_largest/1, largest/1,\n\t\tsmallest/1, to_list/1, iterator_from/2, next/1, fold/3, outerjoin/2, intersection/2]).\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Create an empty set of intervals.\nnew() ->\n\tgb_sets:new().\n\n%% @doc Create a set from a list of {End, Start} pairs.\nfrom_list(L) ->\n\tlists:foldl(fun({End, Start}, Acc) -> add(Acc, End, Start) end, new(), L).\n\n%% @doc Add a new interval. Intervals are compacted - e.g., (2, 1) and (1, 0) are joined\n%% into (2, 0). Also, if two intervals intersect each other, they are joined.\n%% @end\nadd(Intervals, End, Start) when End > Start ->\n\tIter = gb_sets:iterator_from({Start - 1, Start - 1}, Intervals),\n\tadd2(Iter, Intervals, End, Start).\n\n%% @doc Remove the given interval from the set.\ndelete(Intervals, End, Start) ->\n\tIter = gb_sets:iterator_from({Start - 1, Start - 1}, Intervals),\n\tdelete2(Iter, Intervals, End, Start).\n\n%% @doc Remove the interval above the given cut. If there is an interval containing\n%% the cut, replace it with its part up to the cut.\n%% @end\ncut(Intervals, Cut) ->\n\tcase gb_sets:size(Intervals) of\n\t\t0 ->\n\t\t\tIntervals;\n\t\t_ ->\n\t\t\tcase gb_sets:take_largest(Intervals) of\n\t\t\t\t{{_, Start}, UpdatedIntervals} when Start >= Cut ->\n\t\t\t\t\tcut(UpdatedIntervals, Cut);\n\t\t\t\t{{End, Start}, UpdatedIntervals} when End > Cut ->\n\t\t\t\t\tgb_sets:add_element({Cut, Start}, UpdatedIntervals);\n\t\t\t\t_ ->\n\t\t\t\t\tIntervals\n\t\t\tend\n\tend.\n\n%% @doc Return true if the given number is inside one of the intervals, false otherwise.\n%% The left bounds of the intervals are excluded from search, the right bounds are included.\n%% @end\nis_inside(Intervals, Number) ->\n\tIter = gb_sets:iterator_from({Number - 1, Number - 1}, Intervals),\n\tcase gb_sets:next(Iter) of\n\t\tnone ->\n\t\t\tfalse;\n\t\t{{Number, _Start}, _Iter} ->\n\t\t\ttrue;\n\t\t{{_End, Start}, _Iter} when Number > Start ->\n\t\t\ttrue;\n\t\t_ ->\n\t\t\tfalse\n\tend.\n\n%% @doc Return the sum of the lengths of the intervals.\nsum(Intervals) ->\n\tgb_sets:fold(fun({End, Start}, Acc) -> Acc + End - Start end, 0, Intervals).\n\n%% @doc Return the set of intervals consisting of the points of intervals from both sets.\nunion(I1, I2) ->\n\t{Longer, Shorter} =\n\t\tcase gb_sets:size(I1) > gb_sets:size(I2) of\n\t\t\ttrue ->\n\t\t\t\t{I1, I2};\n\t\t\tfalse ->\n\t\t\t\t{I2, I1}\n\t\tend,\n\tgb_sets:fold(\n\t\tfun({End, Start}, Acc) ->\n\t\t\tadd(Acc, End, Start)\n\t\tend,\n\t\tLonger,\n\t\tShorter\n\t).\n\n%% @doc Serialize a subset of the intervals using the requested format, etf | json.\n%% The subset is always smaller than or equal to Limit. If random_subset key is present,\n%% the chosen subset is random. 
Otherwise, the right bound of the first interval is\n%% greater than or equal to start.\nserialize(#{ random_subset := _, limit := Limit, format := Format }, Intervals) ->\n\tserialize_random_subset(Intervals, Limit, Format);\nserialize(#{ start := Start, limit := Limit, format := Format } = Args, Intervals) ->\n\tRightBound = maps:get(right_bound, Args, infinity),\n\tserialize_subset(Intervals, Start, RightBound, Limit, Format).\n\n%% @doc Convert the binary produced by to_etf/2 into the set of intervals.\n%% Return {error, invalid} if the binary is not a valid ETF representation of the\n%% non-overlapping intervals.\n%% @end\nsafe_from_etf(Binary) ->\n\tcase catch from_etf(Binary) of\n\t\t{ok, Intervals} ->\n\t\t\t{ok, Intervals};\n\t\t_ ->\n\t\t\t{error, invalid}\n\tend.\n\n%% @doc Return the number of intervals in the set.\ncount(Intervals) ->\n\tgb_sets:size(Intervals).\n\n%% @doc Return true if the set of intervals is empty, false otherwise.\nis_empty(Intervals) ->\n\tgb_sets:is_empty(Intervals).\n\n%% @doc Return {Interval, Intervals2} when Interval is the interval with the smallest\n%% right bound and Intervals2 is the set of intervals with this interval removed.\n%% @end\ntake_smallest(Intervals) ->\n\tgb_sets:take_smallest(Intervals).\n\n%% @doc Return {Interval, Intervals2} when Interval is the interval with the largest\n%% right bound and Intervals2 is the set of intervals with this interval removed.\n%% @end\ntake_largest(Intervals) ->\n\tgb_sets:take_largest(Intervals).\n\n%% @doc A proxy for gb_sets:smallest/1.\nsmallest(Intervals) ->\n\tgb_sets:smallest(Intervals).\n\n%% @doc A proxy for gb_sets:largest/1.\nlargest(Intervals) ->\n\tgb_sets:largest(Intervals).\n\n%% @doc A proxy for gb_sets:iterator_from/2.\niterator_from(Interval, Intervals) ->\n\tgb_sets:iterator_from(Interval, Intervals).\n\n%% @doc A proxy for gb_sets:next/1.\nnext(Iterator) ->\n\tgb_sets:next(Iterator).\n\n%% @doc A proxy for gb_sets:fold/3.\nfold(Fun, Acc, Intervals) ->\n\tgb_sets:fold(Fun, Acc, Intervals).\n\n%% @doc A proxy for gb_sets:to_list/1.\nto_list(Intervals) ->\n\tgb_sets:to_list(Intervals).\n\n%% @doc Return a set of intervals containing the points from the second given set of\n%% intervals and excluding the points from the first given set of intervals.\nouterjoin(I1, I2) ->\n\t%% intersection(inverse(I1), I2) also works but slower because inverse is relatively\n\t%% expensive. 
intersection(I1, I2) is expected to be relatively small so inverting it\n\t%% is quick.\n\tintersection(inverse(intersection(I1, I2)), I2).\n\n%% @doc Return the set of intervals - the intersection of the two given sets.\nintersection(I1, I2) ->\n\tcase gb_sets:is_empty(I1) orelse gb_sets:is_empty(I2) of\n\t\ttrue ->\n\t\t\tnew();\n\t\tfalse ->\n\t\t\t{_, Start1} = gb_sets:smallest(I1),\n\t\t\t{_, Start2} = gb_sets:smallest(I2),\n\t\t\tStart = min(Start1, Start2),\n\t\t\tintersection(gb_sets:iterator_from({Start, infinity}, I1),\n\t\t\t\t\tgb_sets:iterator_from({Start, infinity}, I2), new())\n\tend.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nadd2(Iter, Intervals, End, Start) ->\n\tcase gb_sets:next(Iter) of\n\t\tnone ->\n\t\t\tgb_sets:add_element({End, Start}, Intervals);\n\t\t{{End2, Start2}, Iter2} when End >= Start2 andalso Start =< End2 ->\n\t\t\tEnd3 = max(End, End2),\n\t\t\tStart3 = min(Start, Start2),\n\t\t\tadd2(Iter2, gb_sets:del_element({End2, Start2}, Intervals), End3, Start3);\n\t\t_ ->\n\t\t\tgb_sets:add_element({End, Start}, Intervals)\n\tend.\n\ndelete2(Iter, Intervals, End, Start) ->\n\tcase gb_sets:next(Iter) of\n\t\tnone ->\n\t\t\tIntervals;\n\t\t{{End2, Start2}, Iter2} when End >= Start2 andalso Start =< End2 ->\n\t\t\tIntervals2 = gb_sets:del_element({End2, Start2}, Intervals),\n\t\t\tIntervals3 =\n\t\t\t\tcase End2 > End of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tgb_sets:insert({End2, End}, Intervals2);\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\tIntervals2\n\t\t\t\tend,\n\t\t\tIntervals4 =\n\t\t\t\tcase Start > Start2 of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tgb_sets:insert({Start, Start2}, Intervals3);\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\tIntervals3\n\t\t\t\tend,\n\t\t\tdelete2(Iter2, Intervals4, End, Start);\n\t\t_ ->\n\t\t\tIntervals\n\tend.\n\nserialize_random_subset(Intervals, Limit, Format) ->\n\tcase gb_sets:is_empty(Intervals) of\n\t\ttrue ->\n\t\t\tserialize_empty(Format);\n\t\tfalse ->\n\t\t\t{Largest, _} = gb_sets:largest(Intervals),\n\t\t\tRandomOffsets = [rand:uniform(Largest) || _ <- lists:seq(1, Limit)],\n\t\t\tserialize_random_subset(Intervals, RandomOffsets, Format, ar_intervals:new(), Limit)\n\tend.\n\nserialize_empty(etf) ->\n\tterm_to_binary([]);\nserialize_empty(json) ->\n\tjiffy:encode([]).\n\nserialize_random_subset(_Intervals, [], Format, PickedIntervals, Limit) ->\n\tserialize_subset(PickedIntervals, 0, infinity, Limit, Format);\nserialize_random_subset(Intervals, [Offset | Offsets], Format, PickedIntervals, Limit) ->\n\tIter = gb_sets:iterator_from({Offset, 0}, Intervals),\n\tcase gb_sets:next(Iter) of\n\t\tnone ->\n\t\t\tserialize_random_subset(Intervals, Offsets, Format, PickedIntervals, Limit);\n\t\t{{End, Start}, _} ->\n\t\t\tserialize_random_subset(Intervals, Offsets, Format,\n\t\t\t\t\tar_intervals:add(PickedIntervals, End, Start), Limit)\n\tend.\n\nserialize_list(L, etf) ->\n\tterm_to_binary(L);\nserialize_list(L, json) ->\n\tjiffy:encode(L).\n\nserialize_item(End, Start, etf) ->\n\t{<< End:256 >>, << Start:256 >>};\nserialize_item(End, Start, json) ->\n\t#{ integer_to_binary(End) => integer_to_binary(Start) }.\n\nserialize_subset(Intervals, Start, End, Limit, Format) ->\n\tcase gb_sets:is_empty(Intervals) of\n\t\ttrue ->\n\t\t\tserialize_empty(Format);\n\t\tfalse ->\n\t\t\tIterator = gb_sets:iterator_from({Start, 0}, Intervals),\n\t\t\tserialize_subset(Iterator, [], 0, End, Limit, Format)\n\tend.\n\nserialize_subset(_Iterator, L, Count, 
_RightBound, Limit, Format) when Count == Limit ->\n\tserialize_list(L, Format);\nserialize_subset(Iterator, L, Count, RightBound, Limit, Format) ->\n\tcase gb_sets:next(Iterator) of\n\t\t{{End, Start}, Iterator2} when Start < RightBound ->\n\t\t\tEnd2 = min(End, RightBound),\n\t\t\tL2 = [serialize_item(End2, Start, Format) | L],\n\t\t\tserialize_subset(Iterator2, L2, Count + 1, RightBound, Limit, Format);\n\t\t_ ->\n\t\t\tserialize_list(L, Format)\n\tend.\n\nfrom_etf(Binary) ->\n\tL = binary_to_term(Binary, [safe]),\n\tfrom_etf(L, infinity, new()).\n\nfrom_etf([], _, Intervals) ->\n\t{ok, Intervals};\nfrom_etf([{<< End:256 >>, << Start:256 >>} | List], R, Intervals)\n\t\twhen End > Start andalso R > End andalso Start >= 0 ->\n\tfrom_etf(List, Start, gb_sets:add_element({End, Start}, Intervals)).\n\ninverse(Intervals) ->\n\tinverse(gb_sets:iterator(Intervals), 0, new()).\n\ninverse(Iterator, L, G) ->\n\tcase gb_sets:next(Iterator) of\n\t\tnone ->\n\t\t\tgb_sets:add_element({infinity, L}, G);\n\t\t{{End1, Start1}, I1} ->\n\t\t\tG2 = case Start1 > L of true -> gb_sets:add_element({Start1, L}, G); _ -> G end,\n\t\t\tL2 = End1,\n\t\t\tcase gb_sets:next(I1) of\n\t\t\t\tnone ->\n\t\t\t\t\tgb_sets:add_element({infinity, L2}, G2);\n\t\t\t\t{{End2, Start2}, I2} ->\n\t\t\t\t\tinverse(I2, End2, gb_sets:add_element({Start2, End1}, G2))\n\t\t\tend\n\tend.\n\nintersection(I1, I2, G) ->\n\tcase {gb_sets:next(I1), gb_sets:next(I2)} of\n\t\t{none, _} ->\n\t\t\tG;\n\t\t{_, none} ->\n\t\t\tG;\n\t\t{{{End1, _Start1}, UpdatedI1}, {{_End2, Start2}, _UpdatedI2}} when Start2 >= End1 ->\n\t\t\tintersection(UpdatedI1, I2, G);\n\t\t{{{_End1, Start1}, _UpdatedI1}, {{End2, _Start2}, UpdatedI2}} when Start1 >= End2 ->\n\t\t\tintersection(I1, UpdatedI2, G);\n\t\t{{{End1, Start1}, UpdatedI1}, {{End2, Start2}, _UpdatedI2}} when End2 >= End1 ->\n\t\t\tintersection(UpdatedI1, I2, gb_sets:add_element({End1, max(Start1, Start2)}, G));\n\t\t{{{End1, Start1}, _UpdatedI1}, {{End2, Start2}, UpdatedI2}} when End1 > End2 ->\n\t\t\tintersection(I1, UpdatedI2, gb_sets:add_element({End2, max(Start1, Start2)}, G))\n\tend.\n\n%%%===================================================================\n%%% Tests.\n%%%===================================================================\n\nintervals_test() ->\n\tI = new(),\n\t?assertEqual(0, count(I)),\n\t?assertEqual(0, sum(I)),\n\t?assert(not is_inside(I, 0)),\n\t?assert(not is_inside(I, 1)),\n\t?assertEqual(<<\"[]\">>, serialize(#{ random_subset => true, format => json, limit => 1 }, I)),\n\t?assertEqual(<<\"[]\">>, serialize(#{ start => 0, format => json, limit => 1 }, I)),\n\t?assertEqual(<<\"[]\">>, serialize(#{ start => 1, format => json, limit => 1 }, I)),\n\t?assertEqual(\n\t\t{ok, new()},\n\t\tsafe_from_etf(serialize(#{ random_subset => true, format => etf, limit => 1 }, I))\n\t),\n\t?assertEqual(new(), outerjoin(I, I)),\n\t?assertEqual(new(), delete(I, 2, 1)),\n\tI2 = add(I, 2, 1),\n\t?assertEqual(1, count(I2)),\n\t?assertEqual(1, sum(I2)),\n\t?assert(not is_inside(I2, 0)),\n\t?assert(not is_inside(I2, 1)),\n\t?assert(is_inside(I2, 2)),\n\t?assert(not is_inside(I2, 3)),\n\t?assertEqual(new(), delete(I2, 2, 1)),\n\t?assertEqual(new(), delete(I2, 2, 0)),\n\t?assertEqual(new(), delete(I2, 3, 1)),\n\t?assertEqual(new(), delete(I2, 3, 0)),\n\t?assertEqual(new(), cut(I2, 1)),\n\t?assertEqual(new(), cut(I2, 0)),\n\tcompare(I2, cut(I2, 2)),\n\tcompare(I2, cut(I2, 3)),\n\t?assertEqual(\n\t\t<<\"[{\\\"2\\\":\\\"1\\\"}]\">>,\n\t\tserialize(#{ random_subset => true, limit => 1, format => json }, 
I2)\n\t),\n\t?assertEqual(\n\t\t<<\"[]\">>,\n\t\tserialize(#{ random_subset => true, limit => 0, format => json }, I2)\n\t),\n\t?assertEqual(\n\t\t<<\"[{\\\"2\\\":\\\"1\\\"}]\">>,\n\t\tserialize(#{ start => 2, limit => 1, format => json }, I2)\n\t),\n\t?assertEqual(\n\t\t<<\"[]\">>,\n\t\tserialize(#{ start => 3, limit => 1, format => json }, I2)\n\t),\n\t?assertEqual(\n\t\t<<\"[]\">>,\n\t\tserialize(#{ start => 2, limit => 0, format => json }, I2)\n\t),\n\t{ok, I2_FromETF} =\n\t\tsafe_from_etf(serialize(#{ format => etf, limit => 1, random_subset => true }, I2)),\n\tcompare(I2, I2_FromETF),\n\t?assertEqual(\n\t\t{ok, new()},\n\t\tsafe_from_etf(serialize(#{ format => etf, limit => 0, random_subset => true }, I2))\n\t),\n\tcompare(I2, add(I2, 2, 1)),\n\tcompare(add(new(), 3, 1), add(I2, 3, 1)),\n\tcompare(add(new(), 2, 0), add(I2, 2, 0)),\n\t?assertEqual(new(), outerjoin(I2, I)),\n\tcompare(add(add(new(), 1, 0), 3, 2), outerjoin(I2, add(new(), 3, 0))),\n\tI3 = add(I2, 6, 3),\n\t?assertEqual(2, count(I3)),\n\t?assertEqual(4, sum(I3)),\n\t?assert(not is_inside(I3, 0)),\n\t?assert(not is_inside(I3, 1)),\n\t?assert(is_inside(I3, 2)),\n\t?assert(not is_inside(I3, 3)),\n\t?assert(is_inside(I3, 4)),\n\t?assert(is_inside(I3, 5)),\n\t?assert(is_inside(I3, 6)),\n\tcompare(add(add(add(new(), 2, 1), 6, 5), 4, 3), delete(I3, 5, 4)),\n\tcompare(add(new(), 6, 5), delete(I3, 5, 1)),\n\tcompare(add(new(), 10, 0), add(I3, 10, 0)),\n\t?assertEqual(new(), cut(I3, 1)),\n\t?assertEqual(new(), cut(I3, 0)),\n\t?assertEqual(I2, cut(I3, 2)),\n\t?assertEqual(I2, cut(I3, 3)),\n\tcompare(add(I2, 4, 3), cut(I3, 4)),\n\tcompare(add(I2, 5, 3), cut(I3, 5)),\n\tcompare(I3, cut(I3, 6)),\n\t?assertEqual(\n\t\t<<\"[{\\\"6\\\":\\\"3\\\"},{\\\"2\\\":\\\"1\\\"}]\">>,\n\t\tserialize(#{ random_subset => true, limit => 1000, format => json }, I3)\n\t),\n\t?assertEqual(\n\t\t<<\"[{\\\"6\\\":\\\"3\\\"},{\\\"2\\\":\\\"1\\\"}]\">>,\n\t\tserialize(#{ start => 1, limit => 10, format => json }, I3)\n\t),\n\t?assertEqual(\n\t\t<<\"[{\\\"2\\\":\\\"1\\\"}]\">>,\n\t\tserialize(#{ start => 1, limit => 1, format => json }, I3)\n\t),\n\t?assertEqual(\n\t\t<<\"[{\\\"6\\\":\\\"3\\\"}]\">>,\n\t\tserialize(#{ start => 3, limit => 10, format => json }, I3)\n\t),\n\t{ok, I3_FromETF} =\n\t\tsafe_from_etf(serialize(#{ format => etf, limit => 1000, random_subset => true }, I3)),\n\tcompare(I3, I3_FromETF),\n\tcompare(I3, add(I3, 4, 3)),\n\tcompare(add(new(), 6, 1), add(I3, 3, 1)),\n\tI3_2 = add(new(), 7, 5),\n\tcompare(add(new(), 7, 5), outerjoin(I2, I3_2)),\n\tcompare(add(new(), 7, 6), outerjoin(I3, I3_2)),\n\tcompare(add(add(add(new(), 1, 0), 3, 2), 8, 6), outerjoin(I3, add(new(), 8, 0))),\n\tI4 = add(I3, 7, 6),\n\t?assertEqual(2, count(I4)),\n\t?assertEqual(5, sum(I4)),\n\t?assert(not is_inside(I4, 0)),\n\t?assert(not is_inside(I4, 1)),\n\t?assert(is_inside(I4, 2)),\n\t?assert(not is_inside(I4, 3)),\n\t?assert(is_inside(I4, 4)),\n\t?assert(is_inside(I4, 5)),\n\t?assert(is_inside(I4, 6)),\n\t?assert(is_inside(I4, 7)),\n\t?assert(not is_inside(I4, 8)),\n\t?assertEqual(new(), cut(I4, 1)),\n\t?assertEqual(new(), cut(I4, 0)),\n\tcompare(add(I2, 5, 3), cut(I4, 5)),\n\tcompare(I4, cut(I4, 7)),\n\t?assertEqual(\n\t\t<<\"[{\\\"7\\\":\\\"3\\\"},{\\\"2\\\":\\\"1\\\"}]\">>,\n\t\tserialize(#{ format => json, limit => 1000, random_subset => true }, I4)\n\t),\n\t{ok, I4_FromETF} = safe_from_etf(serialize(#{ limit => 1000, random_subset => true,\n\t\t\tformat => etf }, I4)),\n\tcompare(I4, I4_FromETF),\n\tI5 = add(I4, 3, 2),\n\t?assertEqual(1, 
count(I5)),\n\t?assertEqual(6, sum(I5)),\n\tcompare(I5, add(I5, 3, 2)),\n\tcompare(I5, add(I5, 2, 1)),\n\tcompare(add(add(new(), 3, 2), 8, 7), delete(add(add(new(), 4, 2), 8, 6), 7, 3)).\n\ncompare(I1, I2) ->\n\t?assertEqual(\n\t\tserialize(#{ format => json, limit => count(I1), start => 0 }, I1),\n\t\tserialize(#{ format => json, limit => count(I2), start => 0 }, I2)\n\t),\n\tFolded1 = gb_sets:fold(fun({K, V}, Acc) -> [{K, V} | Acc] end, [], I1),\n\tFolded2 = gb_sets:fold(fun({K, V}, Acc) -> [{K, V} | Acc] end, [], I2),\n\t?assertEqual(Folded1, Folded2).\n"
  },
  {
    "path": "apps/arweave/src/ar_join.erl",
    "content": "-module(ar_join).\n\n-export([start/1]).\n\n-include(\"ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n%%% Represents a process that handles downloading the block index and the latest\n%%% blocks from the trusted peers, to initialize the node state.\n\n%% The number of block index elements to fetch per request.\n%% Must not exceed ?MAX_BLOCK_INDEX_RANGE_SIZE defined in ar_http_iface_middleware.erl.\n-ifdef(AR_TEST).\n-define(REQUEST_BLOCK_INDEX_RANGE_SIZE, 2).\n-else.\n-define(REQUEST_BLOCK_INDEX_RANGE_SIZE, 10000).\n-endif.\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Start a process that will attempt to download the block index and the latest blocks.\nstart(Peers) ->\n\tspawn(fun() -> process_flag(trap_exit, true), start2(filter_peers(Peers)) end).\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nfilter_peers(Peers) ->\n\tfilter_peers(Peers, []).\n\nfilter_peers([Peer | Peers], Peers2) ->\n\tcase ar_http_iface_client:get_info(Peer, height) of\n\t\tinfo_unavailable ->\n\t\t\t?LOG_WARNING([{event, trusted_peer_unavailable},\n\t\t\t\t\t{peer, ar_util:format_peer(Peer)}]),\n\t\t\tfilter_peers(Peers, Peers2);\n\t\tHeight ->\n\t\t\tfilter_peers(Peers, [{Height, Peer} | Peers2])\n\tend;\nfilter_peers([], []) ->\n\t[];\nfilter_peers([], Peers2) ->\n\tMaxHeight = lists:max([Height || {Height, _Peer} <- Peers2]),\n\tfilter_peers2(Peers2, MaxHeight).\n\nfilter_peers2([], _MaxHeight) ->\n\t[];\nfilter_peers2([{Height, Peer} | Peers], MaxHeight) when MaxHeight - Height >= 5 ->\n\t?LOG_WARNING([{event, trusted_peer_five_or_more_blocks_behind},\n\t\t\t{peer, ar_util:format_peer(Peer)}]),\n\tfilter_peers2(Peers, MaxHeight);\nfilter_peers2([{_Height, Peer} | Peers], MaxHeight) ->\n\t[Peer | filter_peers2(Peers, MaxHeight)].\n\nstart2([]) ->\n\tar:console(\"~nTrusted peers are not available.~n\", []),\n\t?LOG_WARNING([{event, not_joining}, {reason, trusted_peers_not_available}]),\n\ttimer:sleep(1000),\n\tinit:stop(1);\nstart2(Peers) ->\n\tar:console(\"Joining the Arweave network...~n\"),\n\t[{H, _, _} | _] = BI = get_block_index(Peers, ?REJOIN_RETRIES),\n\tar:console(\"Downloaded the block index successfully.~n\", []),\n\tB = get_block(Peers, H),\n\tExpectedBIMerkleH = ar_unbalanced_merkle:block_index_to_merkle_root(tl(BI)),\n\tcase B#block.hash_list_merkle of\n\t\tExpectedBIMerkleH ->\n\t\t\tdo_join(Peers, B, BI);\n\t\t_ ->\n\t\t\t{ok, Config} = arweave_config:get_env(),\n\t\t\tID = binary_to_list(ar_util:encode(crypto:strong_rand_bytes(16))),\n\t\t\tFile = filename:join(Config#config.data_dir,\n\t\t\t\t\t\"inconsistent_joining_data_dump_\" ++ ID),\n\t\t\tfile:write_file(File, term_to_binary({B, Peers, BI})),\n\t\t\tar:console(\"Inconsistent head block and block index. 
Error dump: ~s.\", [File]),\n\t\t\ttimer:sleep(2000),\n\t\t\tinit:stop(1)\n\tend.\n\nget_block_index(Peers, Retries) ->\n\tcase get_block_index(Peers) of\n\t\tunavailable ->\n\t\t\tcase Retries > 0 of\n\t\t\t\ttrue ->\n\t\t\t\t\tar:console(\n\t\t\t\t\t\t\"Failed to fetch the block index from any of the peers.\"\n\t\t\t\t\t\t\" Retrying..~n\"\n\t\t\t\t\t),\n\t\t\t\t\t?LOG_WARNING([{event, failed_to_fetch_block_index}]),\n\t\t\t\t\ttimer:sleep(?REJOIN_TIMEOUT),\n\t\t\t\t\tget_block_index(Peers, Retries - 1);\n\t\t\t\tfalse ->\n\t\t\t\t\tar:console(\n\t\t\t\t\t\t\"Failed to fetch the block index from any of the peers. Giving up..\"\n\t\t\t\t\t\t\" Consider changing the peers.~n\"\n\t\t\t\t\t),\n\t\t\t\t\t?LOG_ERROR([{event, failed_to_fetch_block_index}]),\n\t\t\t\t\ttimer:sleep(1000),\n\t\t\t\t\tinit:stop(1)\n\t\t\tend;\n\t\tBI ->\n\t\t\tBI\n\tend.\n\nget_block_index([]) ->\n\tunavailable;\nget_block_index([Peer | Peers]) ->\n\tcase get_block_index2(Peer) of\n\t\tunavailable ->\n\t\t\tget_block_index(Peers);\n\t\tBI ->\n\t\t\tBI\n\tend.\n\nget_block_index2(Peer) ->\n\tHeight = ar_http_iface_client:get_info(Peer, height),\n\tget_block_index2(Peer, 0, Height, []).\n\nget_block_index2(Peer, Start, Height, BI) ->\n\tN = ?REQUEST_BLOCK_INDEX_RANGE_SIZE,\n\tcase ar_http_iface_client:get_block_index(Peer, min(Start, Height),\n\t\t\tmin(Height, Start + N - 1)) of\n\t\t{ok, Range} when length(Range) < N ->\n\t\t\tcase Start of\n\t\t\t\t0 ->\n\t\t\t\t\tRange;\n\t\t\t\t_ ->\n\t\t\t\t\tcase lists:last(Range) == hd(BI) of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\tRange ++ tl(BI);\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tunavailable\n\t\t\t\t\tend\n\t\t\tend;\n\t\t{ok, Range} when length(Range) == N ->\n\t\t\tcase Start of\n\t\t\t\t0 ->\n\t\t\t\t\tget_block_index2(Peer, Start + N - 1, Height, Range);\n\t\t\t\t_ ->\n\t\t\t\t\tcase lists:last(Range) == hd(BI) of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\tget_block_index2(Peer, Start + N - 1, Height,\n\t\t\t\t\t\t\t\t\tRange ++ tl(BI));\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tunavailable\n\t\t\t\t\tend\n\t\t\tend;\n\t\t_ ->\n\t\t\tunavailable\n\tend.\n\nget_block(Peers, H) ->\n\tcase ar_storage:read_block(H) of\n\t\tunavailable ->\n\t\t\tget_block(Peers, H, 10);\n\t\tBShadow ->\n\t\t\tget_block(Peers, BShadow, BShadow#block.txs, [], 10)\n\tend.\n\nget_block(Peers, H, Retries) ->\n\tar:console(\"Downloading joining block ~s.~n\", [ar_util:encode(H)]),\n\tcase ar_http_iface_client:get_block_shadow(Peers, H) of\n\t\t{_Peer, #block{} = BShadow, _Time, _Size} ->\n\t\t\tget_block(Peers, BShadow, BShadow#block.txs, [], Retries);\n\t\t_ ->\n\t\t\tcase Retries > 0 of\n\t\t\t\ttrue ->\n\t\t\t\t\tar:console(\n\t\t\t\t\t\t\"Failed to fetch a joining block ~s from any of the peers.\"\n\t\t\t\t\t\t\" Retrying..~n\", [ar_util:encode(H)]\n\t\t\t\t\t),\n\t\t\t\t\t?LOG_WARNING([\n\t\t\t\t\t\t{event, failed_to_fetch_joining_block},\n\t\t\t\t\t\t{block, ar_util:encode(H)}\n\t\t\t\t\t]),\n\t\t\t\t\ttimer:sleep(1000),\n\t\t\t\t\tget_block(Peers, H, Retries - 1);\n\t\t\t\tfalse ->\n\t\t\t\t\tar:console(\n\t\t\t\t\t\t\"Failed to fetch a joining block ~s from any of the peers. 
Giving up..\"\n\t\t\t\t\t\t\" Consider changing the peers.~n\", [ar_util:encode(H)]\n\t\t\t\t\t),\n\t\t\t\t\t?LOG_ERROR([\n\t\t\t\t\t\t{event, failed_to_fetch_joining_block},\n\t\t\t\t\t\t{block, ar_util:encode(H)}\n\t\t\t\t\t]),\n\t\t\t\t\ttimer:sleep(1000),\n\t\t\t\t\tinit:stop(1)\n\t\t\tend\n\tend.\n\nget_block(_Peers, BShadow, [], TXs, _Retries) ->\n\tBShadow#block{ txs = lists:reverse(TXs) };\nget_block(Peers, BShadow, [TXID | TXIDs], TXs, Retries) ->\n\tcase ar_http_iface_client:get_tx(Peers, TXID) of\n\t\t#tx{} = TX ->\n\t\t\tget_block(Peers, BShadow, TXIDs, [TX | TXs], Retries);\n\t\t_ ->\n\t\t\tcase Retries > 0 of\n\t\t\t\ttrue ->\n\t\t\t\t\tar:console(\n\t\t\t\t\t\t\"Failed to fetch a joining transaction ~s from any of the peers.\"\n\t\t\t\t\t\t\" Retrying..~n\", [ar_util:encode(TXID)]\n\t\t\t\t\t),\n\t\t\t\t\t?LOG_WARNING([\n\t\t\t\t\t\t{event, failed_to_fetch_joining_tx},\n\t\t\t\t\t\t{tx, ar_util:encode(TXID)}\n\t\t\t\t\t]),\n\t\t\t\t\ttimer:sleep(1000),\n\t\t\t\t\tget_block(Peers, BShadow, [TXID | TXIDs], TXs, Retries - 1);\n\t\t\t\tfalse ->\n\t\t\t\t\tar:console(\n\t\t\t\t\t\t\"Failed to fetch a joining tx ~s from any of the peers. Giving up..\"\n\t\t\t\t\t\t\" Consider changing the peers.~n\", [ar_util:encode(TXID)]\n\t\t\t\t\t),\n\t\t\t\t\t?LOG_ERROR([\n\t\t\t\t\t\t{event, failed_to_fetch_joining_tx},\n\t\t\t\t\t\t{block, ar_util:encode(TXID)}\n\t\t\t\t\t]),\n\t\t\t\t\ttimer:sleep(1000),\n\t\t\t\t\tinit:stop(1)\n\t\t\tend\n\tend.\n\n%% @doc Perform the joining process.\ndo_join(Peers, B, BI) ->\n\tar:console(\"Downloading the block trail.~n\", []),\n\t{ok, Config} = arweave_config:get_env(),\n\tWorkerQ = queue:from_list([spawn(fun() -> worker() end)\n\t\t\t|| _ <- lists:seq(1, Config#config.join_workers)]),\n\tPeerQ = queue:from_list(Peers),\n\tTrail = lists:sublist(tl(BI), 2 * ar_block:get_max_tx_anchor_depth()),\n\tSizeTaggedTXs = ar_block:generate_size_tagged_list_from_txs(B#block.txs, B#block.height),\n\tRetries = lists:foldl(fun(Peer, Acc) -> maps:put(Peer, 5, Acc) end, #{}, Peers),\n\tBlocks = [B#block{ size_tagged_txs = SizeTaggedTXs }\n\t\t\t| get_block_trail(WorkerQ, PeerQ, Trail, Retries)],\n\tar:console(\"Downloaded the block trail successfully.~n\", []),\n\tBlocks2 = maybe_set_reward_history(Blocks, Peers),\n\tBlocks3 = maybe_set_block_time_history(Blocks2, Peers),\n\tar_node_worker ! {join, B#block.height, BI, Blocks3},\n\tjoin_peers(Peers).\n\n%% @doc Get the 2 * ar_block:get_max_tx_anchor_depth() blocks preceding the head block.\n%% If the block list is shorter than 2 * ar_block:get_max_tx_anchor_depth(), simply\n%% get all existing blocks.\n%%\n%% The node needs 2 * ar_block:get_max_tx_anchor_depth() block anchors so that it\n%% can validate transactions even if it enters a ar_block:get_max_tx_anchor_depth()-deep\n%% fork recovery (which is the deepest fork recovery possible) immediately after\n%% joining the network.\nget_block_trail(_WorkerQ, _PeerQ, [], _Retries) ->\n\t[];\nget_block_trail(WorkerQ, PeerQ, Trail, Retries) ->\n\t{WorkerQ2, PeerQ2} = request_blocks(Trail, WorkerQ, PeerQ),\n\tFetchState = #{ awaiting_block_count => length(Trail) },\n\tget_block_trail_loop(WorkerQ2, PeerQ2, Retries, Trail, FetchState).\n\nrequest_blocks([], WorkerQ, PeerQ) ->\n\t{WorkerQ, PeerQ};\nrequest_blocks([{H, _, _} | Trail], WorkerQ, PeerQ) ->\n\t{{value, W}, WorkerQ2} = queue:out(WorkerQ),\n\t{{value, Peer}, PeerQ2} = queue:out(PeerQ),\n\tW ! 
{get_block_shadow, H, Peer, self()},\n\trequest_blocks(Trail, queue:in(W, WorkerQ2), queue:in(Peer, PeerQ2)).\n\nget_block_trail_loop(WorkerQ, PeerQ, Retries, Trail, FetchState) ->\n\treceive\n\t\t{block_response, H, _Peer, #block{} = BShadow, Origin} ->\n\t\t\tcase Origin of\n\t\t\t\tstorage ->\n\t\t\t\t\tok;\n\t\t\t\t_ ->\n\t\t\t\t\tar_disk_cache:write_block_shadow(BShadow)\n\t\t\tend,\n\t\t\tTXCount = length(BShadow#block.txs),\n\t\t\tFetchState2 = maps:put(H, {BShadow, #{}, TXCount}, FetchState),\n\t\t\tAwaitingBlockCount = maps:get(awaiting_block_count, FetchState2),\n\t\t\tAwaitingBlockCount2 =\n\t\t\t\tcase TXCount of\n\t\t\t\t\t0 ->\n\t\t\t\t\t\t?LOG_INFO([{event, join_remaining_blocks_to_fetch},\n\t\t\t\t\t\t\t{remaining_blocks_count, AwaitingBlockCount - 1}]),\n\t\t\t\t\t\tAwaitingBlockCount - 1;\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tAwaitingBlockCount\n\t\t\t\tend,\n\t\t\tFetchState3 = maps:put(awaiting_block_count, AwaitingBlockCount2, FetchState2),\n\t\t\t{WorkerQ2, PeerQ2} = request_txs(H, BShadow#block.txs, WorkerQ, PeerQ),\n\t\t\tcase AwaitingBlockCount2 of\n\t\t\t\t0 ->\n\t\t\t\t\tget_blocks(Trail, FetchState3);\n\t\t\t\t_ ->\n\t\t\t\t\tget_block_trail_loop(WorkerQ2, PeerQ2, Retries, Trail, FetchState3)\n\t\t\tend;\n\t\t{block_response, H, Peer, Response, peer} ->\n\t\t\tPeerRetries = maps:get(Peer, Retries),\n\t\t\tcase PeerRetries > 0 of\n\t\t\t\ttrue ->\n\t\t\t\t\tar:console(\"Failed to fetch a joining block ~s from ~s.\"\n\t\t\t\t\t\t\t\" Retrying..~n\", [ar_util:encode(H), ar_util:format_peer(Peer)]),\n\t\t\t\t\t?LOG_WARNING([\n\t\t\t\t\t\t{event, failed_to_fetch_joining_block},\n\t\t\t\t\t\t{block, ar_util:encode(H)},\n\t\t\t\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t\t\t\t{response, io_lib:format(\"~p\", [Response])}\n\t\t\t\t\t]),\n\t\t\t\t\ttimer:sleep(1000),\n\t\t\t\t\tRetries2 = maps:put(Peer, PeerRetries - 1, Retries),\n\t\t\t\t\t{WorkerQ2, PeerQ2} = request_block(H, WorkerQ, PeerQ),\n\t\t\t\t\tget_block_trail_loop(WorkerQ2, PeerQ2, Retries2, Trail, FetchState);\n\t\t\t\tfalse ->\n\t\t\t\t\tcase queue:to_list(PeerQ) of\n\t\t\t\t\t\t[Peer] -> % The last peer left and it is out of attempts.\n\t\t\t\t\t\t\tar:console(\n\t\t\t\t\t\t\t\t\"Failed to fetch the joining headers from any of the peers, \"\n\t\t\t\t\t\t\t\t\"consider trying some other trusted peers.\", []),\n\t\t\t\t\t\t\t?LOG_ERROR([{event, failed_to_join}]),\n\t\t\t\t\t\t\ttimer:sleep(1000),\n\t\t\t\t\t\t\tinit:stop(1);\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\tcase queue:member(Peer, PeerQ) of\n\t\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t\t{WorkerQ2, PeerQ2} = request_block(H, WorkerQ, PeerQ),\n\t\t\t\t\t\t\t\t\tget_block_trail_loop(WorkerQ2, PeerQ2, Retries, Trail,\n\t\t\t\t\t\t\t\t\t\t\tFetchState);\n\t\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\t\tPeerQ2 = queue:delete(Peer, PeerQ),\n\t\t\t\t\t\t\t\t\tar:console(\"Failed to fetch a joining block ~s from ~s. 
\"\n\t\t\t\t\t\t\t\t\t\t\t\"Removing the peer from the queue..\",\n\t\t\t\t\t\t\t\t\t\t\t[ar_util:encode(H), ar_util:format_peer(Peer)]),\n\t\t\t\t\t\t\t\t\t?LOG_ERROR([\n\t\t\t\t\t\t\t\t\t\t{event, failed_to_fetch_joining_block},\n\t\t\t\t\t\t\t\t\t\t{block, ar_util:encode(H)},\n\t\t\t\t\t\t\t\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t\t\t\t\t\t\t\t{response, io_lib:format(\"~p\", [Response])}\n\t\t\t\t\t\t\t\t\t]),\n\t\t\t\t\t\t\t\t\t{WorkerQ2, PeerQ3} = request_block(H, WorkerQ, PeerQ2),\n\t\t\t\t\t\t\t\t\tget_block_trail_loop(WorkerQ2, PeerQ3, Retries, Trail,\n\t\t\t\t\t\t\t\t\t\t\tFetchState)\n\t\t\t\t\t\t\tend\n\t\t\t\t\tend\n\t\t\tend;\n\t\t{tx_response, H, TXID, _Peer, #tx{} = TX, Origin} ->\n\t\t\tcase Origin of\n\t\t\t\tstorage ->\n\t\t\t\t\tok;\n\t\t\t\t_ ->\n\t\t\t\t\tar_disk_cache:write_tx(TX)\n\t\t\tend,\n\t\t\t{BShadow, TXMap, AwaitingTXCount} = maps:get(H, FetchState),\n\t\t\tTXMap2 = maps:put(TXID, TX, TXMap),\n\t\t\tAwaitingTXCount2 = AwaitingTXCount - 1,\n\t\t\tFetchState2 = maps:put(H, {BShadow, TXMap2, AwaitingTXCount2}, FetchState),\n\t\t\tAwaitingBlockCount = maps:get(awaiting_block_count, FetchState2),\n\t\t\tAwaitingBlockCount2 =\n\t\t\t\tcase AwaitingTXCount2 of\n\t\t\t\t\t0 ->\n\t\t\t\t\t\t?LOG_INFO([{event, join_remaining_blocks_to_fetch},\n\t\t\t\t\t\t\t\t{remaining_blocks_count, AwaitingBlockCount - 1}]),\n\t\t\t\t\t\tAwaitingBlockCount - 1;\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tAwaitingBlockCount\n\t\t\t\tend,\n\t\t\tFetchState3 = maps:put(awaiting_block_count, AwaitingBlockCount2, FetchState2),\n\t\t\tcase AwaitingBlockCount2 of\n\t\t\t\t0 ->\n\t\t\t\t\tget_blocks(Trail, FetchState3);\n\t\t\t\t_ ->\n\t\t\t\t\tget_block_trail_loop(WorkerQ, PeerQ, Retries, Trail, FetchState3)\n\t\t\tend;\n\t\t{tx_response, H, TXID, Peer, Response, peer} ->\n\t\t\tPeerRetries = maps:get(Peer, Retries),\n\t\t\tcase PeerRetries > 0 of\n\t\t\t\ttrue ->\n\t\t\t\t\tar:console(\"Failed to fetch a joining transaction ~s from ~s. \"\n\t\t\t\t\t\t\t\"Retrying..~n\", [ar_util:encode(TXID), ar_util:format_peer(Peer)]),\n\t\t\t\t\t?LOG_WARNING([{event, failed_to_fetch_joining_tx},\n\t\t\t\t\t\t\t{block, ar_util:encode(H)},\n\t\t\t\t\t\t\t{tx, ar_util:encode(TXID)},\n\t\t\t\t\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t\t\t\t\t{response, io_lib:format(\"~p\", [Response])}]),\n\t\t\t\t\ttimer:sleep(1000),\n\t\t\t\t\tRetries2 = maps:put(Peer, PeerRetries - 1, Retries),\n\t\t\t\t\t{WorkerQ2, PeerQ2} = request_tx(H, TXID, WorkerQ, PeerQ),\n\t\t\t\t\tget_block_trail_loop(WorkerQ2, PeerQ2, Retries2, Trail, FetchState);\n\t\t\t\tfalse ->\n\t\t\t\t\tcase queue:to_list(PeerQ) of\n\t\t\t\t\t\t[Peer] -> % The last peer left and it is out of attempts.\n\t\t\t\t\t\t\tar:console(\n\t\t\t\t\t\t\t\t\"Failed to fetch the joining headers from any of the peers, \"\n\t\t\t\t\t\t\t\t\"consider trying some other trusted peers.\", []),\n\t\t\t\t\t\t\t?LOG_ERROR([{event, failed_to_join}]),\n\t\t\t\t\t\t\ttimer:sleep(1000),\n\t\t\t\t\t\t\tinit:stop(1);\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\tcase queue:member(Peer, PeerQ) of\n\t\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t\t{WorkerQ2, PeerQ2} = request_tx(H, TXID, WorkerQ, PeerQ),\n\t\t\t\t\t\t\t\t\tget_block_trail_loop(WorkerQ2, PeerQ2, Retries, Trail,\n\t\t\t\t\t\t\t\t\t\t\tFetchState);\n\t\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\t\tPeerQ2 = queue:delete(Peer, PeerQ),\n\t\t\t\t\t\t\t\t\tar:console(\"Failed to fetch a joining tx ~s from ~s. 
\"\n\t\t\t\t\t\t\t\t\t\t\t\"Removing the peer from the queue..\",\n\t\t\t\t\t\t\t\t\t\t\t[ar_util:encode(TXID), ar_util:format_peer(Peer)]),\n\t\t\t\t\t\t\t\t\t?LOG_ERROR([\n\t\t\t\t\t\t\t\t\t\t{event, failed_to_fetch_joining_tx},\n\t\t\t\t\t\t\t\t\t\t{block, ar_util:encode(H)},\n\t\t\t\t\t\t\t\t\t\t{tx, ar_util:encode(TXID)},\n\t\t\t\t\t\t\t\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t\t\t\t\t\t\t\t{response, io_lib:format(\"~p\", [Response])}\n\t\t\t\t\t\t\t\t\t]),\n\t\t\t\t\t\t\t\t\t{WorkerQ2, PeerQ3} = request_tx(H, TXID, WorkerQ, PeerQ2),\n\t\t\t\t\t\t\t\t\tget_block_trail_loop(WorkerQ2, PeerQ3, Retries, Trail,\n\t\t\t\t\t\t\t\t\t\t\tFetchState)\n\t\t\t\t\t\t\tend\n\t\t\t\t\tend\n\t\t\tend\n\tend.\n\nrequest_txs(_H, [], WorkerQ, PeerQ) ->\n\t{WorkerQ, PeerQ};\nrequest_txs(H, [TXID | TXIDs], WorkerQ, PeerQ) ->\n\t{WorkerQ2, PeerQ2} = request_tx(H, TXID, WorkerQ, PeerQ),\n\trequest_txs(H, TXIDs, WorkerQ2, PeerQ2).\n\nrequest_tx(H, TXID, WorkerQ, PeerQ) ->\n\t{{value, W}, WorkerQ2} = queue:out(WorkerQ),\n\t{{value, Peer}, PeerQ2} = queue:out(PeerQ),\n\tW ! {get_tx, H, TXID, Peer, self()},\n\t{queue:in(W, WorkerQ2), queue:in(Peer, PeerQ2)}.\n\nget_blocks([], _FetchState) ->\n\t[];\nget_blocks([{H, _, _} | Trail], FetchState) ->\n\t{B, TXMap, _} = maps:get(H, FetchState),\n\tTXs = [maps:get(TXID, TXMap) || TXID <- B#block.txs],\n\tSizeTaggedTXs = ar_block:generate_size_tagged_list_from_txs(TXs, B#block.height),\n\t[B#block{ txs = TXs, size_tagged_txs = SizeTaggedTXs } | get_blocks(Trail, FetchState)].\n\nrequest_block(H, WorkerQ, PeerQ) ->\n\t{{value, W}, WorkerQ2} = queue:out(WorkerQ),\n\t{{value, Peer}, PeerQ2} = queue:out(PeerQ),\n\tW ! {get_block_shadow, H, Peer, self()},\n\t{queue:in(W, WorkerQ2), queue:in(Peer, PeerQ2)}.\n\nmaybe_set_reward_history(Blocks, Peers) ->\n\tHeadB = hd(Blocks),\n\tExpectedHashesLen = ar_rewards:expected_hashes_length(HeadB#block.height),\n\tExpectedHashes = [B#block.reward_history_hash\n\t\t\t|| B <- lists:sublist(Blocks, ExpectedHashesLen)],\n\tcase ar_http_iface_client:get_reward_history(Peers, HeadB, ExpectedHashes) of\n\t\t{ok, RewardHistory} ->\n\t\t\tar_rewards:set_reward_history(Blocks, RewardHistory);\n\t\t_ ->\n\t\t\tar:console(\"Failed to fetch the reward history for the block ~s from \"\n\t\t\t\t\t\"any of the peers. Consider changing the peers.~n\",\n\t\t\t\t\t[ar_util:encode((hd(Blocks))#block.indep_hash)]),\n\t\t\t?LOG_WARNING([{event, failed_to_fetch_reward_history}]),\n\t\t\ttimer:sleep(1000),\n\t\t\tinit:stop(1)\n\tend.\n\nmaybe_set_block_time_history([#block{ height = Height } | _] = Blocks, Peers) ->\n\tcase Height >= ar_fork:height_2_7() of\n\t\ttrue ->\n\t\t\tcase ar_http_iface_client:get_block_time_history(\n\t\t\t\t\tPeers, hd(Blocks), ar_block_time_history:get_hashes(Blocks)) of\n\t\t\t\t{ok, BlockTimeHistory} ->\n\t\t\t\t\tar_block_time_history:set_history(Blocks, BlockTimeHistory);\n\t\t\t\t_ ->\n\t\t\t\t\tar:console(\"Failed to fetch the block time history for the block ~s from \"\n\t\t\t\t\t\t\t\"any of the peers. Consider changing the peers.~n\",\n\t\t\t\t\t\t\t[ar_util:encode((hd(Blocks))#block.indep_hash)]),\n\t\t\t\t\ttimer:sleep(1000),\n\t\t\t\t\tinit:stop(1)\n\t\t\tend;\n\t\tfalse ->\n\t\t\tBlocks\n\tend.\n\njoin_peers(Peers) ->\n\tlists:foreach(\n\t\tfun(Peer) ->\n\t\t\tar_http_iface_client:add_peer(Peer)\n\t\tend,\n\t\tPeers\n\t).\n\nworker() ->\n\treceive\n\t\t{get_block_shadow, H, Peer, From} ->\n\t\t\tcase ar_storage:read_block(H) of\n\t\t\t\t#block{} = B ->\n\t\t\t\t\tFrom ! 
{block_response, H, Peer, B, storage};\n\t\t\t\tunavailable ->\n\t\t\t\t\tcase ar_http_iface_client:get_block_shadow([Peer], H) of\n\t\t\t\t\t\t{_, B, _, _} ->\n\t\t\t\t\t\t\tFrom ! {block_response, H, Peer, B, peer};\n\t\t\t\t\t\tError ->\n\t\t\t\t\t\t\tFrom ! {block_response, H, Peer, Error, peer}\n\t\t\t\t\tend\n\t\t\tend,\n\t\t\tworker();\n\t\t{get_tx, H, TXID, Peer, From} ->\n\t\t\tcase ar_storage:read_tx(TXID) of\n\t\t\t\t#tx{} = TX ->\n\t\t\t\t\tFrom ! {tx_response, H, TXID, Peer, TX, storage};\n\t\t\t\tunavailable ->\n\t\t\t\t\tcase ar_http_iface_client:get_tx_from_remote_peer(Peer, TXID, true) of\n\t\t\t\t\t\t{TX, _, _, _} ->\n\t\t\t\t\t\t\tFrom ! {tx_response, H, TXID, Peer, TX, peer};\n\t\t\t\t\t\tError ->\n\t\t\t\t\t\t\tFrom ! {tx_response, H, TXID, Peer, Error, peer}\n\t\t\t\t\tend\n\t\t\tend,\n\t\t\tworker()\n\tend.\n\n%%%===================================================================\n%%% Tests.\n%%%===================================================================\n\n%% @doc Check that nodes can join a running network by using the fork recoverer.\nbasic_node_join_test_() ->\n\t{timeout, ?TEST_NODE_TIMEOUT, fun() ->\n\t\t[B0] = ar_weave:init(),\n\t\tar_test_node:start(B0),\n\t\tar_test_node:mine(),\n\t\tar_test_node:wait_until_height(main, 1),\n\t\tar_test_node:mine(),\n\t\tar_test_node:wait_until_height(main, 2),\n\t\tar_test_node:join_on(#{ node => peer1, join_on => main }),\n\t\tar_test_node:assert_wait_until_height(peer1, 2)\n\tend}.\n\n%% @doc Ensure that both nodes can mine after a join.\nnode_join_test_() ->\n\t{timeout, ?TEST_NODE_TIMEOUT, fun() ->\n\t\t[B0] = ar_weave:init(),\n\t\tar_test_node:start(B0),\n\t\tar_test_node:mine(),\n\t\tar_test_node:wait_until_height(main, 1),\n\t\tar_test_node:mine(),\n\t\tar_test_node:wait_until_height(main, 2),\n\t\tar_test_node:join_on(#{ node => peer1, join_on => main }),\n\t\tar_test_node:assert_wait_until_height(peer1, 2),\n\t\tar_test_node:mine(peer1),\n\t\tar_test_node:wait_until_height(main, 3)\n\tend}.\n\n%% @doc Ensure that get_tx works with a single peer and a list of peers.\nget_tx_test_() ->\n\t[\n\t\tar_test_node:test_with_mocked_functions(\n\t\t\t[{ar_http_iface_client, get_tx_from_remote_peer,\n\t\t\t\tfun(_, _, _) -> {error,{closed,\"The connection was lost.\"}} end}],\n\t\t\tfun test_get_tx/0)\n\t].\n\ntest_get_tx() ->\n\t?assertEqual(ar_http_iface_client:get_tx({127, 0, 0, 1, 1984}, <<\"123\">>), not_found),\n\t?assertEqual(ar_http_iface_client:get_tx([{127, 0, 0, 1, 1984}], <<\"123\">>), not_found),\n\t?assertEqual(ar_http_iface_client:get_tx(\n\t\t[{127, 0, 0, 1, 1984}, {127, 0, 0, 1, 1985}], <<\"123\">>), not_found).\n"
  },
  {
    "path": "apps/arweave/src/ar_kv.erl",
    "content": "-module(ar_kv).\n\n-behaviour(gen_server).\n\n-export([\n\tstart_link/0, create_ets/0, open/1, open_readonly/1, close/1, put/3, get/2,\n\tget_next_by_prefix/4, get_next/2, get_prev/2, get_range/2, get_range/3,\n\tdelete/2, delete_range/3, count/1\n]).\n\n-export([init/1, handle_cast/2, handle_call/3, handle_info/2, terminate/2]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-define(WITH_DB(Name, Callback), with_db(Name, ?FUNCTION_NAME, Callback)).\n-define(WITH_ITERATOR(Name, IteratorOptions, Callback), with_iterator(Name, ?FUNCTION_NAME, IteratorOptions, Callback)).\n\n-define(DEFAULT_ROCKSDB_DATABASE_OPTIONS, #{\n\tcreate_if_missing => true,\n\tcreate_missing_column_families => true,\n\n\t%% these are default values, but they must not be overriden;\n\t%% otherwise the syncWAL will not work.\n\tallow_mmap_reads => false,\n\tallow_mmap_writes => false\n}).\n\n-record(db, {\n\t%% name may be undefined in short intervals before opening the database,\n\t%% or reopening the database (which implies close and open operations).\n\t%% It may happen in case of opening the database with column families.\n\t%% NB: records with undefined db_handle must not be stored in the ETS table.\n\tname :: term() | undefined,\n\tfilepath :: file:filename_all(),\n\tdb_options :: rocksdb:db_options(),\n\t%% db_handle may be undefined in short intervals before opening the database,\n\t%% or reopening the database (which implies close and open operations).\n\t%% NB: records with undefined db_handle must not be stored in the ETS table.\n\tdb_handle :: rocksdb:db_handle() | undefined,\n\n\t%% column families only fields, must be set to undefined for plain databases.\n\tcf_names = undefined :: [term()],\n\tcf_descriptors = undefined :: [rocksdb:cf_descriptor()],\n\tcf_handle = undefined :: rocksdb:cf_handle(),\n\n\treadonly :: boolean()\n}).\n\n-define(msg_trigger_timer(Kind, Secret), {msg_trigger_timer, Kind, Secret}).\n-define(msg_trigger_db_flush(Secret), ?msg_trigger_timer(db_flush, Secret)).\n-define(msg_trigger_wal_sync(Secret), ?msg_trigger_timer(wal_sync, Secret)).\n\n-record(timer, {\n\tinterval_ms :: pos_integer(),\n\tref :: erlang:reference() | undefined,\n\tsecret :: erlang:reference() | undefined\n}).\n\n-record(state, {\n\tdb_flush_timer :: #timer{},\n\twal_sync_timer :: #timer{}\n}).\n\n\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n\n\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\n\n\n%% @doc Creates a named ETS table.\n%% This function is used within `ar_kv_sup` as well as `ar_test_node` modules.\ncreate_ets() ->\n\tets:new(?MODULE, [set, public, named_table, {keypos, #db.name}]).\n\n\n\n%% @doc Open a key-value store.\n%% Args is a map with the following keys:\n%% - path: the filesystem path of the database (required)\n%% - name: the name of the database (required for plain databases, not used for column family databases)\n%% - log_path: the filesystem path of the log file (optional, [data_dir]/logs/rocksdb/ by default)\n%% - options: the options for the database (optional, see ?DEFAULT_ROCKSDB_DATABASE_OPTIONS for the default options)\n%% - cf_descriptors: the column family descriptors (optional, see rocksdb:open/3 for the supported options)\n%% - cf_names: the column family names (required if cf_descriptors is 
provided)\n%% - readonly: whether to open the database in read-only mode (optional, false by default)\nopen(Args) ->\n\tPath = maps:get(path, Args),\n\tCustomLogPath = maps:get(log_path, Args, not_set),\n\tOptions = maps:get(options, Args, []),\n\tReadOnly = maps:get(readonly, Args, false),\n\tcase maps:get(cf_descriptors, Args, undefined) of\n\t\tundefined ->\n\t\t\tName = maps:get(name, Args),\n\t\t\tgen_server:call(?MODULE, {open, {Path, CustomLogPath, Options, Name, ReadOnly}}, ?DEFAULT_CALL_TIMEOUT);\n\t\tCFDescriptors ->\n\t\t\tCFNames = maps:get(cf_names, Args),\n\t\t\tgen_server:call(\n\t\t\t\t?MODULE, {open, {Path, CustomLogPath, CFDescriptors, Options, CFNames, ReadOnly}}, ?DEFAULT_CALL_TIMEOUT\n\t\t\t)\n\tend.\n\n\n\n%% @doc Open a key-value store in read-only mode.\n%% This will not modify any database files (no WAL writes, no compaction, no manifest updates).\n%% Useful for reading snapshot data without altering it.\n%% Args is a map with the same keys as open/1.\nopen_readonly(Args) ->\n\topen(Args#{ readonly => true }).\n\n\n\n%% @doc Store the given value under the given key.\nput(Name, Key, Value) ->\n\t?WITH_DB(Name, fun\n\t\t(#db{db_handle = Db, cf_handle = undefined}) ->\n\t\t\trocksdb:put(Db, Key, Value, []);\n\t\t(#db{db_handle = Db, cf_handle = Cf}) ->\n\t\t\trocksdb:put(Db, Cf, Key, Value, [])\n\tend).\n\n\n\n%% @doc Return the value stored under the given key.\nget(Name, Key) ->\n\t?WITH_DB(Name, fun\n\t\t(#db{db_handle = Db, cf_handle = undefined}) ->\n\t\t\trocksdb:get(Db, Key, []);\n\t\t(#db{db_handle = Db, cf_handle = Cf}) ->\n\t\t\trocksdb:get(Db, Cf, Key, [])\n\tend).\n\n\n\n%% @doc Return the key ({ok, Key, Value}) equal to or bigger than OffsetBinary with\n%% either the matching PrefixBitSize first bits or PrefixBitSize first bits bigger by one.\nget_next_by_prefix(Name, PrefixBitSize, KeyBitSize, OffsetBinary) ->\n\t?WITH_ITERATOR(Name, [{prefix_same_as_start, true}], fun\n\t\t(Iterator) -> get_next_by_prefix2(Iterator, PrefixBitSize, KeyBitSize, OffsetBinary)\n\tend).\n\n\n\nget_next_by_prefix2(Iterator, PrefixBitSize, KeyBitSize, OffsetBinary) ->\n\tcase rocksdb:iterator_move(Iterator, {seek, OffsetBinary}) of\n\t\t{error, invalid_iterator} ->\n\t\t\t%% There is no bigger or equal key sharing the prefix.\n\t\t\t%% Query one more time with prefix + 1.\n\t\t\tSuffixBitSize = KeyBitSize - PrefixBitSize,\n\t\t\t<< Prefix:PrefixBitSize, _:SuffixBitSize >> = OffsetBinary,\n\t\t\tNextPrefixSmallestBytes = << (Prefix + 1):PrefixBitSize, 0:SuffixBitSize >>,\n\t\t\trocksdb:iterator_move(Iterator, {seek, NextPrefixSmallestBytes});\n\t\tReply ->\n\t\t\tReply\n\tend.\n\n\n\n%% @doc Return {ok, Key, Value} where Key is the smallest Key equal to or bigger than Cursor\n%% or none.\nget_next(Name, Cursor) ->\n\t?WITH_ITERATOR(Name, [{total_order_seek, true}], fun\n\t\t(Iterator) -> get_next2(Iterator, Cursor)\n\tend).\n\n\n\nget_next2(Iterator, Cursor) ->\n\tcase rocksdb:iterator_move(Iterator, Cursor) of\n\t\t{error, invalid_iterator} -> none;\n\t\tReply -> Reply\n\tend.\n\n\n\n%% @doc Return {ok, Key, Value} where Key is the largest Key equal to or smaller than Cursor\n%% or none.\nget_prev(Name, Cursor) ->\n\t?WITH_ITERATOR(Name, [{total_order_seek, true}], fun\n\t\t(Iterator) -> get_prev2(Iterator, Cursor)\n\tend).\n\n\n\nget_prev2(Iterator, Cursor) ->\n\tcase rocksdb:iterator_move(Iterator, {seek_for_prev, Cursor}) of\n\t\t{error, invalid_iterator} -> none;\n\t\tReply -> Reply\n\tend.\n\n\n\n%% @doc Return a Key => Value map with all keys equal to or larger than 
Start.\nget_range(Name, Start) ->\n\tget_range2(Name, {Start, undefined}).\n\n\n\n%% @doc Return a Key => Value map with all keys equal to or larger than Start and\n%% equal to or smaller than End.\nget_range(Name, Start, End) ->\n\tget_range2(Name, {Start, End}).\n\n\n\nget_range2(Name, {StartOffsetBinary, MaybeEndOffsetBinary}) ->\n\t?WITH_ITERATOR(Name, [{total_order_seek, true}], fun\n\t\t(Iterator) -> get_range3(Iterator, {StartOffsetBinary, MaybeEndOffsetBinary})\n\tend).\n\n\n\nget_range3(Iterator, {StartOffsetBinary, MaybeEndOffsetBinary}) ->\n\tcase rocksdb:iterator_move(Iterator, {seek, StartOffsetBinary}) of\n\t\t{ok, Key, _Value} when is_binary(MaybeEndOffsetBinary), Key > MaybeEndOffsetBinary ->\n\t\t\t{ok, #{}};\n\t\t{ok, Key, Value} ->\n\t\t\tget_range4(Iterator, #{ Key => Value }, MaybeEndOffsetBinary);\n\t\t{error, invalid_iterator} ->\n\t\t\t{ok, #{}};\n\t\t{error, Reason} ->\n\t\t\t{error, Reason}\n\tend.\n\n\n\nget_range4(Iterator, Map, MaybeEndOffsetBinary) ->\n\tcase rocksdb:iterator_move(Iterator, next) of\n\t\t{ok, Key, _Value} when is_binary(MaybeEndOffsetBinary), Key > MaybeEndOffsetBinary ->\n\t\t\t{ok, Map};\n\t\t{ok, Key, Value} ->\n\t\t\tget_range4(Iterator, Map#{ Key => Value }, MaybeEndOffsetBinary);\n\t\t{error, invalid_iterator} ->\n\t\t\t{ok, Map};\n\t\t{error, Reason} ->\n\t\t\t{error, Reason}\n\tend.\n\n\n\n%% @doc Remove the given key.\ndelete(Name, Key) ->\n\t?WITH_DB(Name, fun\n\t\t(#db{db_handle = Db, cf_handle = undefined}) -> rocksdb:delete(Db, Key, []);\n\t\t(#db{db_handle = Db, cf_handle = Cf}) -> rocksdb:delete(Db, Cf, Key, [])\n\tend).\n\n\n\n%% @doc Remove the keys equal to or larger than Start and smaller than End.\ndelete_range(Name, StartOffsetBinary, EndOffsetBinary) ->\n\t?WITH_DB(Name, fun\n\t\t(#db{db_handle = Db, cf_handle = undefined}) -> rocksdb:delete_range(Db, StartOffsetBinary, EndOffsetBinary, []);\n\t\t(#db{db_handle = Db, cf_handle = Cf}) -> rocksdb:delete_range(Db, Cf, StartOffsetBinary, EndOffsetBinary, [])\n\tend).\n\n\n\n%% @doc Return the number of keys in the table.\ncount(Name) ->\n\t?WITH_DB(Name, fun\n\t\t(#db{db_handle = Db, cf_handle = undefined}) -> rocksdb:count(Db);\n\t\t(#db{db_handle = Db, cf_handle = Cf}) -> rocksdb:count(Db, Cf)\n\tend).\n\n\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\n\n\n\ninit([]) ->\n\tprocess_flag(trap_exit, true),\n\t{ok, Config} = arweave_config:get_env(),\n\tS0 = #state{\n\t\tdb_flush_timer = #timer{interval_ms = Config#config.rocksdb_flush_interval_s * 1000},\n\t\twal_sync_timer = #timer{interval_ms = Config#config.rocksdb_wal_sync_interval_s * 1000}\n\t},\n\tS1 = init_db_flush_timer(S0),\n\tS2 = init_wal_sync_timer(S1),\n\t{ok, S2}.\n\n\n\nhandle_call({open, {Filepath, LogFilepath, UserOptions, Name, ReadOnly}}, _From, State) ->\n\tDbRec0 = new_dbrec(Name, Filepath, LogFilepath, UserOptions, ReadOnly),\n\tcase ets:lookup(?MODULE, DbRec0#db.name) of\n\t\t[] ->\n\t\t\tcase do_open(DbRec0) of\n\t\t\t\tok -> {reply, ok, State};\n\t\t\t\t{error, Reason} -> {reply, {error, Reason}, State}\n\t\t\tend;\n\t\t[#db{filepath = Filepath, db_options = DbOptions}]\n\t\twhen DbRec0#db.filepath == Filepath, DbRec0#db.db_options == DbOptions ->\n\t\t\t{reply, ok, State};\n\t\t[#db{filepath = Filepath, db_options = Options}] ->\n\t\t\t{reply, {error, {already_open, Filepath, Options}}, State}\n\tend;\n\nhandle_call({open, {Filepath, LogFilepath, CfDescriptors, 
UserOptions, CfNames, ReadOnly}}, _From, State) ->\n\tDbRec0 = new_dbrec(CfNames, CfDescriptors, Filepath, LogFilepath, UserOptions, ReadOnly),\n\tcase ets:lookup(?MODULE, hd(CfNames)) of\n\t\t[] ->\n\t\t\tcase do_open(DbRec0) of\n\t\t\t\tok -> {reply, ok, State};\n\t\t\t\t{error, Reason} -> {reply, {error, Reason}, State}\n\t\t\tend;\n\t\t[#db{filepath = Filepath, db_options = DbOptions, cf_descriptors = CfDescriptors, cf_names = CfNames}]\n\t\twhen\n\t\tDbRec0#db.filepath == Filepath, DbRec0#db.db_options == DbOptions,\n\t\tDbRec0#db.cf_descriptors == CfDescriptors, DbRec0#db.cf_names == CfNames ->\n\t\t\t{reply, ok, State};\n\t\t[#db{filepath = Filepath1, db_options = Options1}] ->\n\t\t\t{reply, {error, {already_open, Filepath1, Options1}}, State}\n\tend;\n\nhandle_call({close, Name}, _From, State) ->\n\tcase ets:lookup(?MODULE, Name) of\n\t\t[] ->\n\t\t\t{reply, {error, not_found}, State};\n\t\t[DbRec] ->\n\t\t\t{reply, close(DbRec), State}\n\tend;\n\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {request, Request}]),\n\t{reply, ok, State}.\n\n\n\nhandle_cast(Cast, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {cast, Cast}]),\n\t{noreply, State}.\n\n\n\nhandle_info(\n\t?msg_trigger_db_flush(SameSecret),\n\t#state{db_flush_timer = #timer{secret = SameSecret}} = S0\n) ->\n\twith_each_db(fun(DbRec) ->\n\t\t{ElapsedUs, _} = timer:tc(fun() -> db_flush(DbRec) end),\n\t\t?LOG_DEBUG([\n\t\t\t{event, periodic_timer}, {op, db_flush},\n\t\t\t{name, io_lib:format(\"~p\", [DbRec#db.name])}, {elapsed_us, ElapsedUs}\n\t\t])\n\tend),\n\t{noreply, init_db_flush_timer(S0)};\n\nhandle_info(\n\t?msg_trigger_wal_sync(SameSecret),\n\t#state{wal_sync_timer = #timer{secret = SameSecret}} = S0\n) ->\n\twith_each_db(fun(DbRec) ->\n\t\t{ElapsedUs, _} = timer:tc(fun() -> wal_sync(DbRec) end),\n\t\t?LOG_DEBUG([\n\t\t\t{event, periodic_timer}, {op, wal_sync},\n\t\t\t{name, io_lib:format(\"~p\", [DbRec#db.name])}, {elapsed_us, ElapsedUs}\n\t\t])\n\tend),\n\t{noreply, init_wal_sync_timer(S0)};\n\nhandle_info(Message, State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {module, ?MODULE}, {message, Message}]),\n\t{noreply, State}.\n\n\n\nterminate(Reason, _State) ->\n\tResult = with_each_db(fun(DbRec) ->\n\t\t?LOG_INFO([{event, terminate_db}, {module, ?MODULE}, {db, DbRec#db.name}]),\n\t\t_ = db_flush(DbRec),\n\t\t_ = wal_sync(DbRec),\n\t\t_ = close(DbRec)\n\tend),\n\t?LOG_INFO([{event, terminate_complete}, {module, ?MODULE}, {reason, Reason}]),\n\tResult.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\n\n\nmaybe_cancel_timer(#timer{ref = undefined}) -> ok;\nmaybe_cancel_timer(#timer{ref = TRef}) -> erlang:cancel_timer(TRef).\n\n\n\ninit_timer(Timer0, MsgFun) ->\n\t_ = maybe_cancel_timer(Timer0),\n\tSecret = erlang:make_ref(),\n\tTRef = erlang:send_after(Timer0#timer.interval_ms, self(), apply(MsgFun, [Secret])),\n\tTimer0#timer{ref = TRef, secret = Secret}.\n\n\n\ninit_db_flush_timer(#state{db_flush_timer = Timer0} = S0) ->\n\tS0#state{\n\t\tdb_flush_timer = init_timer(Timer0, fun(Secret) -> ?msg_trigger_db_flush(Secret) end)\n\t}.\n\n\n\ninit_wal_sync_timer(#state{wal_sync_timer = Timer0} = S0) ->\n\tS0#state{\n\t\twal_sync_timer = init_timer(Timer0, fun(Secret) -> ?msg_trigger_wal_sync(Secret) end)\n\t}.\n\n\n\n%% @doc Create a new plain database record.\nnew_dbrec(Name, Filepath, LogFilepath, UserOptions, 
ReadOnly) ->\n\tLogDir = filename:join([get_base_log_dir(LogFilepath), ?ROCKS_DB_DIR, filename:basename(Filepath)]),\n\tok = filelib:ensure_dir(Filepath ++ \"/\"),\n\tok = filelib:ensure_dir(LogDir ++ \"/\"),\n\tDefaultOptionsMap = (?DEFAULT_ROCKSDB_DATABASE_OPTIONS)#{db_log_dir => LogDir},\n\tDbOptions = maps:to_list(maps:merge(maps:from_list(UserOptions), DefaultOptionsMap)),\n\t#db{ name = Name, filepath = Filepath, db_options = DbOptions, readonly = ReadOnly }.\n\n\n\n%% @doc  Create a new 'column-family' database record.\nnew_dbrec(CfNames, CfDescriptors, Filepath, LogFilepath, UserOptions, ReadOnly) ->\n\tLogDir = filename:join([get_base_log_dir(LogFilepath), ?ROCKS_DB_DIR, filename:basename(Filepath)]),\n\tok = filelib:ensure_dir(Filepath ++ \"/\"),\n\tok = filelib:ensure_dir(LogDir ++ \"/\"),\n\tDefaultOptionsMap = (?DEFAULT_ROCKSDB_DATABASE_OPTIONS)#{db_log_dir => LogDir},\n\tDbOptions = maps:to_list(maps:merge(maps:from_list(UserOptions), DefaultOptionsMap)),\n\t#db{\n\t\tname = hd(CfNames), filepath = Filepath,\n\t\tdb_options = DbOptions,\n\t\tcf_descriptors = CfDescriptors, cf_names = CfNames,\n\t\treadonly = ReadOnly\n\t}.\n\n\n\n%% @doc Attempt to open the database.\n%% Both plain and 'column-family' databases are attempted.\n%% When opening the plain database, the record will have `name` set to the given\n%% name parameter.\n%% When opening 'column-family' database, the record will have a column name; several\n%% database records will be inserted during the process.\ndo_open(#db{\n\tdb_handle = undefined, cf_descriptors = undefined,\n\tfilepath = Filepath, db_options = DbOptions,\n\treadonly = ReadOnly\n} = DbRec0) ->\n\tOpen =\n\t\tcase ReadOnly of\n\t\t\ttrue ->\n\t\t\t\trocksdb:open_readonly(Filepath, DbOptions);\n\t\t\tfalse ->\n\t\t\t\trocksdb:open(Filepath, DbOptions)\n\t\tend,\n\tcase Open of\n\t\t{ok, Db} ->\n\t\t\tDbRec1 = DbRec0#db{db_handle = Db},\n\t\t\ttrue = ets:insert(?MODULE, DbRec1),\n\t\t\tok;\n\t\t{error, OpenError} ->\n\t\t\t?LOG_ERROR([{event, db_operation_failed}, {op, open},\n\t\t\t\t{name, io_lib:format(\"~p\", [DbRec0#db.name])},\n\t\t\t\t{reason, io_lib:format(\"~p\", [OpenError])}]),\n\t\t\t{error, failed}\n\tend;\n\ndo_open(#db{\n\tdb_handle = undefined, cf_descriptors = CfDescriptors, cf_names = CfNames,\n\tfilepath = Filepath, db_options = DbOptions,\n\treadonly = ReadOnly\n} = DbRec0) ->\n\tOpen =\n\t\tcase ReadOnly of\n\t\t\ttrue ->\n\t\t\t\trocksdb:open_readonly(Filepath, DbOptions, CfDescriptors);\n\t\t\tfalse ->\n\t\t\t\trocksdb:open(Filepath, DbOptions, CfDescriptors)\n\t\tend,\n\tcase Open of\n\t\t{ok, Db, Cfs} ->\n\t\t\tFirstDbRec = lists:foldr(\n\t\t\t\tfun({Cf, CfName}, _) ->\n\t\t\t\t\tDbRec1 = DbRec0#db{name = CfName, db_handle = Db, cf_handle = Cf},\n\t\t\t\t\ttrue = ets:insert(?MODULE, DbRec1),\n\t\t\t\t\tDbRec1\n\t\t\t\tend,\n\t\t\t\tundefined,\n\t\t\t\tlists:zip(Cfs, CfNames)\n\t\t\t),\n\t\t\t%% flush the cf database (all column families at once)\n\t\t\t_ = db_flush(FirstDbRec),\n\t\t\tok;\n\t\t{error, OpenError} ->\n\t\t\t?LOG_ERROR([{event, db_operation_failed}, {op, open},\n\t\t\t\t{name, io_lib:format(\"~p\", [DbRec0#db.name])},\n\t\t\t\t{reason, io_lib:format(\"~p\", [OpenError])}]),\n\t\t\t{error, failed}\n\tend;\n\ndo_open(#db{} = DbRec0) ->\n\t?LOG_ERROR([\n\t\t{event, db_operation_failed}, {op, open}, {error, already_open},\n\t\t{name, io_lib:format(\"~p\", [DbRec0#db.name])}\n\t]).\n\n\n\n%% Attempt to close the database and remove the ETS entries related to it.\n%% This function WILL NOT perform any actions regarding 
persistence: it is up to\n%% the user to ensure that both db_flush/1 and wal_sync/1 functions are called\n%% prior to calling this function.\n%% Database must be open at the moment of calling the function.\nclose(Name) when not is_record(Name, db) ->\n\tgen_server:call(?MODULE, {close, Name}, ?DEFAULT_CALL_TIMEOUT);\n\nclose(#db{db_handle = undefined}) -> {error, closed};\n\nclose(#db{db_handle = Db, name = Name}) ->\n\ttry\n\t\tcase rocksdb:close(Db) of\n\t\t\tok ->\n\t\t\t\ttrue = ets:match_delete(?MODULE, #db{db_handle = Db, _ = '_'});\n\t\t\t{error, CloseError} ->\n\t\t\t\t?LOG_ERROR([\n\t\t\t\t\t{event, db_operation_failed}, {op, close}, {name, io_lib:format(\"~p\", [Name])},\n\t\t\t\t\t{error, io_lib:format(\"~p\", [CloseError])}\n\t\t\t\t])\n\t\tend\n\tcatch\n\t\tExc ->\n\t\t\t?LOG_ERROR([\n\t\t\t\t{event, ar_kv_failed}, {op, close}, {name, io_lib:format(\"~p\", [Name])},\n\t\t\t\t{reason, io_lib:format(\"~p\", [Exc])}\n\t\t\t])\n\tend.\n\n\n\n%% @doc Attempt to flush the database: persist the memtables contents on disk.\n%% Database must be open at the moment of calling the function.\ndb_flush(#db{name = Name, db_handle = undefined}) ->\n\t?LOG_ERROR([{event, db_operation_failed}, {op, db_flush}, {error, closed}, {name, io_lib:format(\"~p\", [Name])}]),\n\t{error, closed};\n\ndb_flush(#db{readonly = true}) ->\n\tok;\n\ndb_flush(#db{name = Name, db_handle = Db}) ->\n\tcase rocksdb:flush(Db, [{wait, true}, {allow_write_stall, false}]) of\n\t\t{error, FlushError} ->\n\t\t\t?LOG_ERROR([{event, db_operation_failed}, {op, db_flush},\n\t\t\t\t{name, io_lib:format(\"~p\", [Name])},\n\t\t\t\t{reason, io_lib:format(\"~p\", [FlushError])}]),\n\t\t\t{error, failed};\n\t\t_ ->\n\t\t\tok\n\tend.\n\n\n\n%% @doc Attempt to sync Write Ahead Log (WAL): persist WAL contents on disk.\n%% Database must be open at the moment of calling the function.\nwal_sync(#db{name = Name, db_handle = undefined}) ->\n\t?LOG_ERROR([{event, db_operation_failed}, {op, wal_sync}, {error, closed}, {name, io_lib:format(\"~p\", [Name])}]),\n\t{error, closed};\n\nwal_sync(#db{readonly = true}) ->\n\tok;\n\nwal_sync(#db{name = Name, db_handle = Db}) ->\n\tcase rocksdb:sync_wal(Db) of\n\t\t{error, SyncError} ->\n\t\t\t?LOG_ERROR([{event, db_operation_failed}, {op, wal_sync},\n\t\t\t\t{name, io_lib:format(\"~p\", [Name])},\n\t\t\t\t{reason, io_lib:format(\"~p\", [SyncError])}]),\n\t\t\t{error, failed};\n\t\t_ ->\n\t\t\tok\n\tend.\n\n\n\n%% @doc Apply callback if it is possible to obtain the iterator for the database.\n%% The callback will get an iterator as an argument.\nwith_iterator(Name, Op, IteratorOptions, Callback) ->\n\twith_db(Name, Op, fun\n\t\t(#db{db_handle = Db, cf_handle = undefined}) ->\n\t\t\tcase rocksdb:iterator(Db, IteratorOptions) of\n\t\t\t\t{ok, Iterator} -> apply(Callback, [Iterator]);\n\t\t\t\t{error, IteratorError} -> {error, IteratorError}\n\t\t\tend;\n\t\t(#db{db_handle = Db, cf_handle = Cf}) ->\n\t\t\tcase rocksdb:iterator(Db, Cf, IteratorOptions) of\n\t\t\t\t{ok, Iterator} -> apply(Callback, [Iterator]);\n\t\t\t\t{error, IteratorError} -> {error, IteratorError}\n\t\t\tend\n\tend).\n\n\n\n%% @doc Apply callback if the database is available.\n%% The callback will get the database record (#db{}) as an argument.\nwith_db(Name, Op, Callback) ->\n\ttry\n\t\tcase ets:lookup(?MODULE, Name) of\n\t\t\t[] ->\n\t\t\t\t{error, db_not_found};\n\t\t\t[DbRec0] ->\n\t\t\t\tapply(Callback, [DbRec0])\n\t\tend\n\tcatch\n\t\tExc ->\n\t\t\t?LOG_ERROR([{event, db_operation_failed}, {op, Op},\n\t\t\t\t{name, 
io_lib:format(\"~p\", [Name])},\n\t\t\t\t{reason, io_lib:format(\"~p\", [Exc])}]),\n\t\t\t{error, failed}\n\tend.\n\n\n\n%% @doc Apply callback for each unique database found in ETS (column family\n%% databases will be only called once).\n%% The callback will get the database record (#db{}) as an argument.\nwith_each_db(Callback) ->\n\tets:foldl(\n\t\tfun(#db{db_handle = Db} = DbRec0, Acc) ->\n\t\t\t\tcase sets:is_element(Db, Acc) of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tAcc;\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\t_ = apply(Callback, [DbRec0]),\n\t\t\t\t\t\tsets:add_element(Db, Acc)\n\t\t\t\tend\n\t\tend,\n\t\tsets:new(),\n\t\t?MODULE\n\t).\n\n\n\nget_base_log_dir(LogFilepath) ->\n\tcase LogFilepath of\n\t\tnot_set ->\n\t\t\t{ok, Config} = arweave_config:get_env(),\n\t\t\tConfig#config.log_dir;\n\t\t_ ->\n\t\t\tLogFilepath\n\tend.\n\n\n\ntest_get_data_dir() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tConfig#config.data_dir.\n\n\n\n%%%===================================================================\n%%% Tests.\n%%%===================================================================\n\n\n\nrocksdb_iterator_test_() ->\n\t{timeout, 300, fun test_rocksdb_iterator/0}.\n\n\n\ntest_rocksdb_iterator() ->\n\ttest_destroy(\"test_db\"),\n\tDataDir = test_get_data_dir(),\n\t%% Configure the DB similarly to how it used to be configured before the tested change.\n\tOpts = [\n\t\t{prefix_extractor, {capped_prefix_transform, 28}},\n\t\t{optimize_filters_for_hits, true},\n\t\t{max_open_files, 1000000}\n\t],\n\tok = ar_kv:open(#{\n\t\tpath => filename:join([DataDir, ?ROCKS_DB_DIR, \"test_db\"]),\n\t\tcf_descriptors => [{\"default\", Opts}, {\"test\", Opts}],\n\t\tcf_names => [default, test]}),\n\tok = ar_kv:open(#{\n\t\tpath => filename:join([DataDir, ?ROCKS_DB_DIR, \"test_db\"]),\n\t\tcf_descriptors => [{\"default\", Opts}, {\"test\", Opts}],\n\t\tcf_names => [default, test]}),\n\tSmallerPrefix = crypto:strong_rand_bytes(29),\n\t<< O1:232 >> = SmallerPrefix,\n\tBiggerPrefix = << (O1 + 1):232 >>,\n\tSuffixes =\n\t\tsets:to_list(sets:from_list([crypto:strong_rand_bytes(3) || _ <- lists:seq(1, 20)])),\n\t{Suffixes1, Suffixes2} = lists:split(10, Suffixes),\n\tlists:foreach(\n\t\tfun(Suffix) ->\n\t\t\tok = ar_kv:put(\n\t\t\t\ttest,\n\t\t\t\t<< SmallerPrefix/binary, Suffix/binary >>,\n\t\t\t\tcrypto:strong_rand_bytes(40 * ?MiB)\n\t\t\t),\n\t\t\tok = ar_kv:put(\n\t\t\t\ttest,\n\t\t\t\t<< BiggerPrefix/binary, Suffix/binary >>,\n\t\t\t\tcrypto:strong_rand_bytes(40 * ?MiB)\n\t\t\t)\n\t\tend,\n\t\tSuffixes1\n\t),\n\ttest_close(test),\n\t%% Reopen with the new configuration.\n\tOpts2 = [\n\t\t{block_based_table_options, [\n\t\t\t{cache_index_and_filter_blocks, true},\n\t\t\t{bloom_filter_policy, 10}\n\t\t]},\n\t\t{prefix_extractor, {capped_prefix_transform, 29}},\n\t\t{optimize_filters_for_hits, true},\n\t\t{max_open_files, 1000000},\n\t\t{write_buffer_size, 256 * ?MiB},\n\t\t{target_file_size_base, 256 * ?MiB},\n\t\t{max_bytes_for_level_base, 10 * 256 * ?MiB}\n\t],\n\tok = ar_kv:open(#{\n\t\tpath => filename:join([DataDir, ?ROCKS_DB_DIR, \"test_db\"]),\n\t\tcf_descriptors => [{\"default\", Opts2}, {\"test\", Opts2}],\n\t\tcf_names => [default, test]}),\n\t%% Store new data enough for new SST files to be created.\n\tlists:foreach(\n\t\tfun(Suffix) ->\n\t\t\tok = ar_kv:put(\n\t\t\t\ttest,\n\t\t\t\t<< SmallerPrefix/binary, Suffix/binary >>,\n\t\t\t\tcrypto:strong_rand_bytes(40 * ?MiB)\n\t\t\t),\n\t\t\tok = ar_kv:put(\n\t\t\t\ttest,\n\t\t\t\t<< BiggerPrefix/binary, Suffix/binary >>,\n\t\t\t\tcrypto:strong_rand_bytes(50 * 
?MiB)\n\t\t\t)\n\t\tend,\n\t\tSuffixes2\n\t),\n\tassert_iteration(test, SmallerPrefix, BiggerPrefix, Suffixes),\n\t%% Close the database to make sure the new data is flushed.\n\ttest_close(test),\n\tok = ar_kv:open(#{\n\t\tpath => filename:join([DataDir, ?ROCKS_DB_DIR, \"test_db\"]),\n\t\tcf_descriptors => [{\"default\", Opts2}, {\"test\", Opts2}],\n\t\tcf_names => [default1, test1]}),\n\tassert_iteration(test1, SmallerPrefix, BiggerPrefix, Suffixes),\n\ttest_close(test1),\n\ttest_destroy(\"test_db\").\n\n\n\ndelete_range_test_() ->\n\t{timeout, 300, fun test_delete_range/0}.\n\n\n\ntest_delete_range() ->\n\ttest_destroy(\"test_db\"),\n\tDataDir = test_get_data_dir(),\n\tok = ar_kv:open(#{\n\t\tpath => filename:join([DataDir, ?ROCKS_DB_DIR, \"test_db\"]),\n\t\tname => test_db}),\n\tok = ar_kv:put(test_db, << 0:256 >>, << 0:256 >>),\n\tok = ar_kv:put(test_db, << 1:256 >>, << 1:256 >>),\n\tok = ar_kv:put(test_db, << 2:256 >>, << 2:256 >>),\n\tok = ar_kv:put(test_db, << 3:256 >>, << 3:256 >>),\n\tok = ar_kv:put(test_db, << 4:256 >>, << 4:256 >>),\n\t?assertEqual({ok, << 1:256 >>}, ar_kv:get(test_db, << 1:256 >>)),\n\n\t%% Base case\n\t?assertEqual(ok, ar_kv:delete_range(test_db, << 1:256 >>, << 2:256 >>)),\n\t?assertEqual({ok, << 0:256 >>}, ar_kv:get(test_db, << 0:256 >>)),\n\t?assertEqual(not_found, ar_kv:get(test_db, << 1:256 >>)),\n\t?assertEqual({ok, << 2:256 >>}, ar_kv:get(test_db, << 2:256 >>)),\n\n\t%% Missing start and missing end\n\t?assertEqual(ok, ar_kv:delete_range(test_db, << 1:256 >>, << 5:256 >>)),\n\t?assertEqual({ok, << 0:256 >>}, ar_kv:get(test_db, << 0:256 >>)),\n\t?assertEqual(not_found, ar_kv:get(test_db, << 1:256 >>)),\n\t?assertEqual(not_found, ar_kv:get(test_db, << 2:256 >>)),\n\t?assertEqual(not_found, ar_kv:get(test_db, << 3:256 >>)),\n\t?assertEqual(not_found, ar_kv:get(test_db, << 4:256 >>)),\n\n\t%% Empty range\n\t?assertEqual(ok, ar_kv:delete_range(test_db, << 1:256 >>, << 1:256 >>)),\n\t?assertEqual({ok, << 0:256 >>}, ar_kv:get(test_db, << 0:256 >>)),\n\t?assertEqual(not_found, ar_kv:get(test_db, << 1:256 >>)),\n\t?assertEqual(not_found, ar_kv:get(test_db, << 2:256 >>)),\n\t?assertEqual(not_found, ar_kv:get(test_db, << 3:256 >>)),\n\t?assertEqual(not_found, ar_kv:get(test_db, << 4:256 >>)),\n\n\t%% Reversed range\n\t?assertMatch({error, _}, ar_kv:delete_range(test_db, << 1:256 >>, << 0:256 >>)),\n\t?assertEqual({ok, << 0:256 >>}, ar_kv:get(test_db, << 0:256 >>)),\n\t?assertEqual(not_found, ar_kv:get(test_db, << 1:256 >>)),\n\t?assertEqual(not_found, ar_kv:get(test_db, << 2:256 >>)),\n\t?assertEqual(not_found, ar_kv:get(test_db, << 3:256 >>)),\n\t?assertEqual(not_found, ar_kv:get(test_db, << 4:256 >>)),\n\n\ttest_destroy(\"test_db\").\n\n\n\nassert_iteration(Name, SmallerPrefix, BiggerPrefix, Suffixes) ->\n\tSortedSuffixes = lists:sort(Suffixes),\n\tSmallestKey = << SmallerPrefix/binary, (lists:nth(1, SortedSuffixes))/binary >>,\n\tNextSmallestKey = << SmallerPrefix/binary, (lists:nth(2, SortedSuffixes))/binary >>,\n\t<< SmallestOffset:256 >> = SmallestKey,\n\t%% Assert forwards and backwards iteration within the same prefix works.\n\t?assertMatch({ok, SmallestKey, _}, ar_kv:get_next_by_prefix(Name, 232, 256, SmallestKey)),\n\t?assertMatch({ok, SmallestKey, _}, ar_kv:get_prev(Name, SmallestKey)),\n\t?assertMatch({ok, NextSmallestKey, _},\n\t\t\tar_kv:get_next_by_prefix(Name, 232, 256, << (SmallestOffset + 1):256 >>)),\n\t<< NextSmallestOffset:256 >> = NextSmallestKey,\n\t?assertMatch({ok, SmallestKey, _},\n\t\t\tar_kv:get_prev(Name, << (NextSmallestOffset - 
1):256 >>)),\n\t%% Assert forwards and backwards iteration across different prefixes works.\n\tSmallerPrefixBiggestKey = << SmallerPrefix/binary, (lists:last(SortedSuffixes))/binary >>,\n\tBiggerPrefixSmallestKey = << BiggerPrefix/binary, (lists:nth(1, SortedSuffixes))/binary >>,\n\t<< SmallerPrefixBiggestOffset:256 >> = SmallerPrefixBiggestKey,\n\t?assertMatch({ok, BiggerPrefixSmallestKey, _},\n\t\t\tar_kv:get_next_by_prefix(Name, 232, 256,\n\t\t\t<< (SmallerPrefixBiggestOffset + 1):256 >>)),\n\t<< BiggerPrefixSmallestOffset:256 >> = BiggerPrefixSmallestKey,\n\t?assertMatch({ok, SmallerPrefixBiggestKey, _},\n\t\t\tar_kv:get_prev(Name, << (BiggerPrefixSmallestOffset - 1):256 >>)),\n\tBiggerPrefixNextSmallestKey =\n\t\t<< BiggerPrefix/binary, (lists:nth(2, SortedSuffixes))/binary >>,\n\t{ok, Map} = ar_kv:get_range(Name, SmallerPrefixBiggestKey, BiggerPrefixNextSmallestKey),\n\t?assertEqual(3, map_size(Map)),\n\t?assert(maps:is_key(SmallerPrefixBiggestKey, Map)),\n\t?assert(maps:is_key(BiggerPrefixNextSmallestKey, Map)),\n\t?assert(maps:is_key(BiggerPrefixSmallestKey, Map)),\n\tar_kv:delete_range(Name, SmallerPrefixBiggestKey, BiggerPrefixNextSmallestKey),\n\t?assertEqual(not_found, ar_kv:get(Name, SmallerPrefixBiggestKey)),\n\t?assertEqual(not_found, ar_kv:get(Name, BiggerPrefixSmallestKey)),\n\tlists:foreach(\n\t\tfun(Suffix) ->\n\t\t\t?assertMatch({ok, _}, ar_kv:get(Name, << BiggerPrefix/binary, Suffix/binary >>))\n\t\tend,\n\t\tlists:sublist(lists:reverse(SortedSuffixes), length(SortedSuffixes) - 1)\n\t),\n\tlists:foreach(\n\t\tfun(Suffix) ->\n\t\t\t?assertMatch({ok, _},\n\t\t\t\t\tar_kv:get(Name, << SmallerPrefix/binary, Suffix/binary >>))\n\t\tend,\n\t\tlists:sublist(SortedSuffixes, length(SortedSuffixes) - 1)\n\t),\n\tar_kv:put(Name, SmallerPrefixBiggestKey, crypto:strong_rand_bytes(50 * 1024)),\n\tar_kv:put(Name, BiggerPrefixNextSmallestKey, crypto:strong_rand_bytes(50 * 1024)),\n\tar_kv:put(Name, BiggerPrefixSmallestKey, crypto:strong_rand_bytes(50 * 1024)).\n\n\n\ntest_destroy(Name) ->\n\tRocksDBDir = filename:join(test_get_data_dir(), ?ROCKS_DB_DIR),\n\tFilename = filename:join(RocksDBDir, Name),\n\tcase filelib:is_dir(Filename) of\n\t\ttrue ->\n\t\t\trocksdb:destroy(Filename, []);\n\t\tfalse ->\n\t\t\tok\n\tend.\n\n\n\ntest_close(Name) ->\n\t?WITH_DB(Name, fun(Db) ->\n\t\t_ = db_flush(Db),\n\t\t_ = wal_sync(Db),\n\t\t_ = close(Db)\n\tend).\n"
  },
  {
    "path": "apps/arweave/src/ar_kv_sup.erl",
    "content": "-module(ar_kv_sup).\n\n-behaviour(supervisor).\n\n-export([start_link/0]).\n\n-export([init/1]).\n\n-include_lib(\"arweave/include/ar_sup.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nstart_link() ->\n\tsupervisor:start_link({local, ?MODULE}, ?MODULE, []).\n\n%% ===================================================================\n%% Supervisor callbacks.\n%% ===================================================================\n\ninit([]) ->\n\tar_kv:create_ets(),\n\t{ok, {{one_for_one, 5, 10}, [?CHILD(ar_kv, worker)]}}.\n"
  },
  {
    "path": "apps/arweave/src/ar_localnet.erl",
    "content": "-module(ar_localnet).\n\n-export([start/0, start/1, submit_snapshot_data/0,\n\t\tmine_one_block/0, mine_until_height/1, create_snapshot/0]).\n\n-include(\"ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"kernel/include/file.hrl\").\n\n-define(WAIT_UNTIL_JOINED_TIMEOUT, 200_000).\n-define(DEFAULT_SNAPSHOT_DIR, \"localnet_snapshot\").\n\n-ifndef(START_FROM_STATE_SEARCH_DEPTH).\n\t-define(LOCALNET_START_FROM_STATE_SEARCH_DEPTH, 100).\n-else.\n\t-define(LOCALNET_START_FROM_STATE_SEARCH_DEPTH, ?START_FROM_STATE_SEARCH_DEPTH).\n-endif.\n\n-define(LOCALNET_DATA_DIR, \".tmp/data_localnet_main\").\n\n%% @doc Start a node from the localnet_snapshot directory.\n%% Disable mining (can be triggered by request).\n%% Configure a single storage module with the first 21 MiB of partition 0\n%% (enough to cover the seed data with some headroom).\n%% Seed the storage module with data.\nstart() ->\n\tstart(#config{\n\t\tdata_dir = ?LOCALNET_DATA_DIR\n\t}).\n\nstart(SnapshotDir) when is_list(SnapshotDir) ->\n\tstart(#config{\n\t\tdata_dir = ?LOCALNET_DATA_DIR,\n\t\tstart_from_state = SnapshotDir\n\t});\nstart(SnapshotDir) when is_atom(SnapshotDir) ->\n\tstart(#config{\n\t\tdata_dir = ?LOCALNET_DATA_DIR,\n\t\tstart_from_state = atom_to_list(SnapshotDir)\n\t});\nstart(Config) ->\n\tSnapshotDir = snapshot_dir(Config),\n\tDataDir = Config#config.data_dir,\n\tarweave_config:start(),\n\tok = filelib:ensure_dir(DataDir ++ \"/\"),\n\tMiningAddr =\n\t\tcase Config#config.mining_addr of\n\t\t\tnot_set ->\n\t\t\t\tar_wallet:to_address(ar_wallet:new_keyfile({?ECDSA_SIGN_ALG, secp256k1}, wallet_address, DataDir));\n\t\t\tAddr ->\n\t\t\t\tAddr\n\t\tend,\n\tStorageModules =\n\t\tcase Config#config.storage_modules of\n\t\t\t[] ->\n\t\t\t\t[{21 * ?MiB, 0, {replica_2_9, MiningAddr}}];\n\t\t\tConfiguredStorageModules ->\n\t\t\t\tConfiguredStorageModules\n\t\tend,\n\tok = arweave_config:set_env(Config#config{\n\t\tmining_addr = MiningAddr,\n\t\tstorage_modules = StorageModules,\n\t\tstart_from_latest_state = true,\n\t\tstart_from_state = SnapshotDir,\n\t\tdisk_cache_size = 128,\n\t\tmax_disk_pool_buffer_mb = 128,\n\t\tmax_disk_pool_data_root_buffer_mb = 128,\n\t\tauto_join = true,\n\t\tpeers = [],\n\t\tcm_exit_peer = not_set,\n\t\tcm_peers = [],\n\t\tlocal_peers = [],\n\t\tmine = false,\n\t\tdisk_space_check_frequency = 1000,\n\t\tsync_jobs = 0,\n\t\tdisk_pool_jobs = 1,\n\t\theader_sync_jobs = 0,\n\t\tdebug = true\n\t}),\n\tar:start_dependencies(),\n\tcase wait_until_joined() of\n\t\ttrue ->\n\t\t\tsubmit_snapshot_data(),\n\t\t\tio:format(\"~n~nLocalnet node started~n\"),\n\t\t\tio:format(\"  Snapshot: ~s~n\", [SnapshotDir]),\n\t\t\tio:format(\"  Data dir: ~s~n\", [DataDir]),\n\t\t\tio:format(\"  Mining address: ~s~n\", [ar_util:encode(MiningAddr)]),\n\t\t\tio:format(\"  Storage modules:~n\"),\n\t\t\tlists:foreach(fun({Size, Partition, Packing}) ->\n\t\t\t\tio:format(\"    - partition ~B, size ~B MB, packing ~s~n\",\n\t\t\t\t\t[Partition, Size div (1_000_000), ar_serialize:encode_packing(Packing, false)])\n\t\t\tend, StorageModules),\n\t\t\tio:format(\"~nMining is disabled. 
Call ar_localnet:mine_one_block/0 to mine a block.~n\"\n\t\t\t\t\t\"Call ar_localnet:mine_until_height/1 to mine until the given height.~n~n\");\n\t\t{error, _} = Error ->\n\t\t\tio:format(\n\t\t\t\t\"Localnet startup failed while waiting for node to join (~B ms): ~p~n\",\n\t\t\t\t[?WAIT_UNTIL_JOINED_TIMEOUT, Error]\n\t\t\t),\n\t\t\tar:stop_dependencies(),\n\t\t\tError\n\tend.\n\n%% @doc Mine one block.\nmine_one_block() ->\n\tar_node_worker:mine_one_block().\n\n%% @doc Mine blocks until the given height is reached.\nmine_until_height(Height) ->\n\tar_node_worker:mine_until_height(Height).\n\n%% @doc Create a reproducible snapshot in localnet_snapshot_[mainnet_starting_height]_[localnet_end_height].\ncreate_snapshot() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tcase ar_node:is_joined() of\n\t\tfalse ->\n\t\t\t{error, node_not_joined};\n\t\ttrue ->\n\t\t\tSnapshotDir = snapshot_dir(Config),\n\t\t\tcase open_snapshot_databases(SnapshotDir) of\n\t\t\t\t{ok, CloseSnapshotDbs} ->\n\t\t\t\t\tSnapshotResult =\n\t\t\t\t\t\tcase ar_storage:read_block_index(SnapshotDir) of\n\t\t\t\t\t\t\tnot_found ->\n\t\t\t\t\t\t\t\t{error, snapshot_block_index_not_found};\n\t\t\t\t\t\t\tBI ->\n\t\t\t\t\t\t\t\tMainnetStartHeight = length(BI) - 1,\n\t\t\t\t\t\t\t\tLocalnetEndHeight = ar_node:get_height(),\n\t\t\t\t\t\t\t\tNewSnapshotDir = snapshot_dir_name(MainnetStartHeight, LocalnetEndHeight),\n\t\t\t\t\t\t\t\tcreate_snapshot(SnapshotDir, Config#config.data_dir, NewSnapshotDir)\n\t\t\t\t\t\tend,\n\t\t\t\t\tCloseSnapshotDbs(),\n\t\t\t\t\tSnapshotResult;\n\t\t\t\t{error, _} = Error ->\n\t\t\t\t\tError\n\t\t\tend\n\tend.\n\n%% @doc Poll every 100ms until the node has joined the network, or until\n%% WAIT_UNTIL_JOINED_TIMEOUT ms have elapsed.\nwait_until_joined() ->\n\tar_util:do_until(\n\t\tfun() -> ar_node:is_joined() end,\n\t\t100,\n\t\t?WAIT_UNTIL_JOINED_TIMEOUT\n\t ).\n\n%% @doc Read recent block heads, tx headers, block time history, and account tree data\n%% from the snapshot directory and store them in the data directory. Does not do anything\n%% if recent blocks are already available locally.\nstore_snapshot_data(SnapshotDir) ->\n\tcase ar_node:get_block_index() of\n\t\t[] ->\n\t\t\t{error, empty_block_index};\n\t\tBI ->\n\t\t\tHeight = length(BI) - 1,\n\t\t\tSearchDepth = min(Height, ?LOCALNET_START_FROM_STATE_SEARCH_DEPTH),\n\t\t\tio:format(\"Copying snapshot data into data_dir from ~s~n\", [SnapshotDir]),\n\t\t\tio:format(\"  Height: ~B, search depth: ~B~n\", [Height, SearchDepth]),\n\t\t\tcase read_recent_blocks_local(BI, SearchDepth) of\n\t\t\t\t{Skipped, _Blocks} ->\n\t\t\t\t\tio:format(\"  Local blocks available (skipped: ~B). 
Skip backfill.~n\",\n\t\t\t\t\t\t[Skipped]),\n\t\t\t\t\tok;\n\t\t\t\tnot_found ->\n\t\t\t\t\tstore_snapshot_data3(BI, Height, SearchDepth, SnapshotDir)\n\t\t\tend\n\tend.\n\nstore_snapshot_data3(BI, Height, SearchDepth, SnapshotDir) ->\n\tcase read_recent_blocks_from_snapshot(BI, SearchDepth, SnapshotDir) of\n\t\tnot_found ->\n\t\t\t{error, block_headers_not_found};\n\t\t{Skipped, Blocks} ->\n\t\t\tio:format(\"  Recent blocks: ~B (skipped: ~B)~n\",\n\t\t\t\t[length(Blocks), Skipped]),\n\t\t\tBI2 = lists:nthtail(Skipped, BI),\n\t\t\tHeight2 = Height - Skipped,\n\t\t\tRewardHistoryBI = ar_rewards:interim_reward_history_bi(Height2, BI2),\n\t\t\tBlockTimeHistoryBI = block_time_history_bi(BI2),\n\t\t\tio:format(\"  Reward history: ~B entries~n\", [length(RewardHistoryBI)]),\n\t\t\tio:format(\"  Block time history: ~B entries~n\", [length(BlockTimeHistoryBI)]),\n\t\t\tcase store_snapshot_blocks_from_list(Blocks) of\n\t\t\t\t{ok, TxIds} ->\n\t\t\t\t\tio:format(\"  Tx headers to copy: ~B~n\", [length(TxIds)]),\n\t\t\t\t\tcase store_snapshot_tx_headers(TxIds, SnapshotDir) of\n\t\t\t\t\t\tok ->\n\t\t\t\t\t\t\tcase store_snapshot_history_entries(\n\t\t\t\t\t\t\t\tRewardHistoryBI,\n\t\t\t\t\t\t\t\tstart_from_state_reward_history_db,\n\t\t\t\t\t\t\t\treward_history_db,\n\t\t\t\t\t\t\t\treward_history\n\t\t\t\t\t\t\t) of\n\t\t\t\t\t\t\t\tok ->\n\t\t\t\t\t\t\t\t\tcase store_snapshot_history_entries(\n\t\t\t\t\t\t\t\t\t\tBlockTimeHistoryBI,\n\t\t\t\t\t\t\t\t\t\tstart_from_state_block_time_history_db,\n\t\t\t\t\t\t\t\t\t\tblock_time_history_db,\n\t\t\t\t\t\t\t\t\t\tblock_time_history\n\t\t\t\t\t\t\t\t\t) of\n\t\t\t\t\t\t\t\t\t\tok ->\n\t\t\t\t\t\t\t\t\t\t\tstore_snapshot_wallet_list(Blocks, SnapshotDir, SearchDepth);\n\t\t\t\t\t\t\t\t\t\t{error, _} = Error ->\n\t\t\t\t\t\t\t\t\t\t\tError\n\t\t\t\t\t\t\t\t\tend;\n\t\t\t\t\t\t\t\t{error, _} = Error ->\n\t\t\t\t\t\t\t\t\tError\n\t\t\t\t\t\t\tend;\n\t\t\t\t\t\t{error, _} = Error ->\n\t\t\t\t\t\t\tError\n\t\t\t\t\tend;\n\t\t\t\t{error, _} = Error ->\n\t\t\t\t\tError\n\t\t\tend\n\tend.\n\n%% @doc Read special hard-coded seed transactions from the snapshot directory and submit their data\n%% chunks to the node's storage.\n%% Return {TotalBigChunkBytes, TotalSmallChunkBytes}.\nsubmit_snapshot_data() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tSnapshotDir = snapshot_dir(Config),\n\tio:format(\"Seeding data from snapshot...~n\"),\n\tSnapshotTXs = snapshot_txs(),\n\tBlockStarts = lists:sort(maps:keys(SnapshotTXs)),\n\t{TotalBigChunk, TotalSmallChunk} =\n\t\tlists:foldl(\n\t\t\tfun(BlockStart, {AccBigChunk, AccSmallChunk}) ->\n\t\t\t\tTXIDs = maps:get(BlockStart, SnapshotTXs),\n\t\t\t\tTXs = lists:map(fun(TXID) ->\n\t\t\t\t\tFilename = filename:join([SnapshotDir, \"seed_txs\",\n\t\t\t\t\t\tbinary_to_list(TXID) ++ \".json\"]),\n\t\t\t\t\tcase file:read_file(Filename) of\n\t\t\t\t\t\t{ok, JSON} ->\n\t\t\t\t\t\t\tMap = jiffy:decode(JSON, [return_maps]),\n\t\t\t\t\t\t\t#tx{\n\t\t\t\t\t\t\t\tid = ar_util:decode(TXID),\n\t\t\t\t\t\t\t\tdata_size = binary_to_integer(maps:get(<<\"data_size\">>, Map, <<\"0\">>)),\n\t\t\t\t\t\t\t\tdata = ar_util:decode(maps:get(<<\"data\">>, Map, <<>>)),\n\t\t\t\t\t\t\t\tdata_root = ar_util:decode(maps:get(<<\"data_root\">>, Map, <<>>)),\n\t\t\t\t\t\t\t\tformat = maps:get(<<\"format\">>, Map, 1)\n\t\t\t\t\t\t\t};\n\t\t\t\t\t\t{error, Reason} ->\n\t\t\t\t\t\t\tio:format(\"Failed to read ~s: ~p~n\", [Filename, Reason]),\n\t\t\t\t\t\t\terror({missing_tx_file, TXID})\n\t\t\t\t\tend\n\t\t\t\tend, TXIDs),\n\t\t\t\t{BlockBigChunk, 
BlockSmallChunk} = submit_block_data(BlockStart, TXs),\n\t\t\t\t{AccBigChunk + BlockBigChunk, AccSmallChunk + BlockSmallChunk}\n\t\tend,\n\t\t{0, 0},\n\t\tBlockStarts\n\t),\n\tio:format(\"Seeding completed. Total size: ~B bytes (~B big chunks, ~B small chunks).~n\",\n\t\t[TotalBigChunk + TotalSmallChunk, TotalBigChunk, TotalSmallChunk]),\n\t{TotalBigChunk, TotalSmallChunk}.\n\n%% @doc For a given block start offset and its transactions, generate Merkle\n%% paths and data roots, register them with ar_data_root_sync, then write the\n%% raw transaction data to storage.\n%% Return {TotalBigChunkBytes, TotalSmallChunkBytes}.\nsubmit_block_data(BlockStart, TXs) ->\n\t{BlockStart, BlockEnd, TXRoot, Height} = ar_block_index:get_block_bounds_with_height(BlockStart),\n\tSortedTXs = lists:sort(TXs),\n\tSizeTaggedTXs = ar_block:generate_size_tagged_list_from_txs(SortedTXs, Height),\n\tSizeTaggedTXsNoPadding = [T || {{ID, _}, _} = T <- SizeTaggedTXs, ID /= padding],\n\tTXs2 = [TX#tx{ data_root = DR }\n\t\t|| {TX, {{_ID, DR}, _End}} <- lists:zip(SortedTXs, SizeTaggedTXsNoPadding)\n\t],\n\t{_, Tree} = ar_merkle:generate_tree([{DR, End} || {{_, DR}, End} <- SizeTaggedTXs]),\n\tEntries = [\n\t\t{DR, TX#tx.data_size, BlockStart + End - TX#tx.data_size,\n\t\t\tar_merkle:generate_path(TXRoot, End - 1, Tree)}\n\t\t|| {TX, {{_ID, DR}, End}} <- lists:zip(TXs2, SizeTaggedTXsNoPadding),\n\t\t\tTX#tx.data_size > 0\n\t],\n\tar_data_root_sync:store_data_roots_sync(BlockStart, BlockEnd, TXRoot, Entries),\n\tlists:foreach(fun(TX) ->\n\t\tcase TX#tx.data_size > 0 of\n\t\t\ttrue ->\n\t\t\t\tData = TX#tx.data,\n\t\t\t\tTXID = TX#tx.id,\n\t\t\t\tcase ar_storage:write_tx_data(TX#tx.data_root, Data, TXID) of\n\t\t\t\t\tok ->\n\t\t\t\t\t\tok;\n\t\t\t\t\t{error, Errors} ->\n\t\t\t\t\t\tio:format(\"  Failed to write data for tx ~s: ~p~n\",\n\t\t\t\t\t\t\t[binary_to_list(ar_util:encode(TXID)), Errors])\n\t\t\t\tend;\n\t\t\tfalse ->\n\t\t\t\tok\n\t\tend\n\tend, TXs2),\n\tlists:foldl(\n\t\tfun(TX, {AccBigChunk, AccSmallChunk}) ->\n\t\t\tDataSize = TX#tx.data_size,\n\t\t\tBigChunkSize = (DataSize div ?DATA_CHUNK_SIZE) * ?DATA_CHUNK_SIZE,\n\t\t\tSmallChunkSize = DataSize rem ?DATA_CHUNK_SIZE,\n\t\t\t{AccBigChunk + BigChunkSize, AccSmallChunk + SmallChunkSize}\n\t\tend,\n\t\t{0, 0},\n\t\tTXs2\n\t).\n\n%% @doc Return the hardcoded map of {BlockStartOffset => [TXID]} for the seed\n%% data included in the localnet snapshot. 
These are real mainnet transactions\n%% whose data is bundled in the snapshot's seed_txs/ directory so the localnet\n%% node has chunk data to mine with.\nsnapshot_txs() ->\n\t#{\n\t\t0 => [<<\"t81tluHdoePSxjq7qG-6TMqBKmQLYr5gupmfvW25Y_o\">>],\n\t\t599058 => [<<\"QAQ-134At0mSPVrwBzTTUalyL_zqE_dMR_WggkZvF5E\">>],\n\t\t1039029 => [<<\"-B7wF8TF5AodemKM2UjeFySwA_-Q12Ai8z9FSqgIEyA\">>, <<\"vYnzbcbBQbPQB7GKrXzPlz1MuT9cfnNI_NBVajaTnPg\">>, <<\"Lt7WJclVu4iYHqGHIYIBia3ABMnvmd5cW4ELIzUTfPE\">>],\n\t\t1074629 => [<<\"JfTiLBj5Gxr1v7JwoNf9-7sRAiLOrg1AZ6kqwSkEpTc\">>, <<\"YMnQwrWWVRmkMs0B41lz-VdixskatlPcY7j4r0iSLbQ\">>, <<\"dMTZgKHD-NkP3iM5RjFNhppiwfTlYd-Imi9aA6IK0So\">>],\n\t\t1113021 => [<<\"qHvSpQXYh9RZmXIoIOexmDs0iQgjCubl6KSsgg7cDz8\">>],\n\t\t1179126 => [<<\"8TiSScQCv06oS9b8Tt5WBnf7sUVgzPAFGJ3Lq2bt8rY\">>],\n\t\t1621676 => [<<\"U_1PPd40n2grpuhkMJcMXPVuJhtaQoUWei63iN2rS7o\">>],\n\t\t2057001 => [<<\"m1DnUoXf7wMtIGkkDZAALobw0GbGehfEMX_jNLvs3i8\">>],\n\t\t2507233 => [<<\"JUf6alhhrfuL22XuQ0yrZ6_xBFBIQqi85wRxv2nUCMs\">>],\n\t\t3065027 => [<<\"EDt8sO0AWKJyNeUxd-U6ihy0rgRKUPjpfRGarEHlOCs\">>],\n\t\t3103419 => [<<\"UH3C65dDo62rp5ciK3XzyhufE71xorL7r7MWVwdhavk\">>],\n\t\t3551503 => [<<\"FbeSRhJR00VPygimhm47VwirSeBATnlf240hv4a2G4E\">>],\n\t\t3853182 => [<<\"ViCjDXb4IEZcXBtlYvTm3HCB6cf4gDbrXCCdvVVgB1g\">>],\n\t\t3898843 => [<<\"Jo3rf0JPJR2kCHBqZG71xouWzuOSY-MXJufpfzFl7sE\">>, <<\"Tg9QZvUPJoAZKRkPhPgQrgnlTY6s9UxRSQaMw6shhOU\">>],\n\t\t3933160 => [<<\"uNiZ8TfAQ8GWjtbqhVi90qO3U5dl9afmKE1-KbHQYM4\">>],\n\t\t4748637 => [<<\"ujON59jsellR3M8hq9unBPISOwRgEVUogdi3FG_pVMk\">>],\n\t\t4913142 => [<<\"U_UF7e-hOd5uLIj10fYZVxQ5mXyZUxvMxhWWgAMaj0s\">>],\n\t\t4984947 => [<<\"_AiF52l4uqTkKOVpQw9hr6l6FCIdWs8PCFtFxEBOopU\">>],\n\t\t5052944 => [<<\"Ace-njSprwHMwZaW5nuD0y1lKFoaafU3T8d7PLBeEIA\">>],\n\t\t5168327 => [<<\"UbW68tRQtThl9ah8tJb-X_af5M8FHYARiGZFiPGk_90\">>],\n\t\t5339549 => [<<\"JDS1sGkpC0ua7UGfpLEJSF-jXUnjAs2fa5V7y6rccdY\">>, <<\"hMfNPSlINViUDVnor18GgPs0Ut0i9XY7dwM9MVOL-2I\">>],\n\t\t5530545 => [<<\"MCCCpl9AGNAzy3WvM5lniJ88iC3-8NPiiWIsxcLZZxQ\">>],\n\t\t5573275 => [<<\"F3c9tsVvmCiFNxK7hVEzROraVm477QdyQ8t6afBs5E4\">>],\n\t\t5716883 => [<<\"uOqsnEjVGQCbtrKI7QbHYxbbLUdCKC-792SgZr5KUKM\">>],\n\t\t5951779 => [<<\"Ie-fxxzdBweiA0N1ZbzUqXhNI310uDUmaBc3ajlV6YY\">>, <<\"ycjvsn3A9cUMjnbDaSUpf1HRQd4duP9AL1YVwSjwuAQ\">>],\n\t\t6012107 => [<<\"1VknqhhAXRQ6hzeZL-IMVBznTFCdiWcwlXhzpLKS8Zk\">>],\n\t\t6049618 => [<<\"wntmnG9yRP9aoioRDILKkmSZqdemR-XDCIKJS-wpRYw\">>, <<\"WJTACYoRG89VIpjzsIZLIy93U7HoC4OJyLy6WAlqv-0\">>],\n\t\t6095199 => [<<\"EayO1EsmOinnbi-NVa2V7cVraoI0TZ6xE5-sNU7fc94\">>, <<\"8CPVZq-zPdMQ2to1P91vl6XBXyL7sLH8-vNclnOCug8\">>, <<\"QyQL1TYdwmguUIBjTV-shWqrwS6AwxhZ6lf7Rx-vxH0\">>, <<\"vvPtX1U0EZS9PMsQBVk3mjD9yS6EHIt0FXdKf2dOELw\">>],\n\t\t6441166 => [<<\"MklsZ_cDz470C40UGZUJoVfMeVA89-r7SHxuomBeCPM\">>],\n\t\t6441564 => [<<\"Hs-Yj4ZE9ACfQIjzS8E-qvxSkQALsCIDHwcLEMnlz90\">>],\n\t\t6441962 => [<<\"I4ifBnOF6OQFautfisGFTVIn2NsrvqrdnQ-O7JOMouE\">>],\n\t\t6808479 => [<<\"7fat_nqzDJCTfJMqyEpOcavt1cZNM-tfSzASJd0wrHo\">>, <<\"YcTBCg3mLRFByb1cnjrq9DzEBnnOT9jQtfYEE34QZ1M\">>],\n\t\t6820776 => [<<\"Re-7lkSGlYP4SFddz0rrXIF0r4MVYZuagjkVpEm79bY\">>, <<\"hXNDNwQ6zA7aHAqvfBj_az9CovV37bJywdgPdb_ooIA\">>],\n\t\t6906339 => [<<\"qDEFXj8hSgOuuqWM52y6pbUX1cyp7bS4qItfctgtVx8\">>],\n\t\t6992421 => [<<\"9hX3cS3Vjr6vAqJW3WtPN665NpLJegcxyaDZO2esElM\">>],\n\t\t7145431 => [<<\"kVNsLH0kpIkFnBBGWxoIajVLSpvzmsKHpsATPAcR86Y\">>],\n\t\t7177405 => [<<\"hB1Hj0mfuh_x3ijhqkw1s3wdCh8qdPz_IMs0MPraVCk\">>],\n\t\t7193892 => 
[<<\"ivWTdg5M9XqjP-Iu4C97r3qZQhotJgfF17g__7EH7VM\">>],\n\t\t7308719 => [<<\"lxtOUAEj-E1jb6J8uGCRlRgJDHJyFOu0O73jQHnAhpg\">>, <<\"ntnx85KcYZ_ZhR6dL2A_p8foCmStgD-69ODoOUdipiA\">>, <<\"yo8VtPVXWBpTqLbLL-ZeOmZTW2HTqTzsf9RPzgHM-bQ\">>, <<\"_gduN41u7Xxac_Gm3pBQI3icoKhOfiRV2TKhDnlyakU\">>, <<\"5iK4mPnFqGdUxpiZmGtTbj7xoSC2una7sjsbUyZkOmM\">>, <<\"iWUFDucATDE8gjbsL-9KpOIW9l8Ipsh1wliv4e05xhg\">>],\n\t\t7308855 => [<<\"oiYeEvWqOkaHzCSunznZ09U_tuHqP1UyZkRrKYHgNBw\">>],\n\t\t7465544 => [<<\"xK4fFG-PbnQx6EGmmj1A0JVWQ9Bg7q-FncaU7hHk9ds\">>],\n\t\t7470098 => [<<\"eGYHUFl46laNa8v_WjdadvCkIErWqmx0hoia7PCSmSw\">>],\n\t\t7516201 => [<<\"2dxNaIAvkAuL_N2qpTGSl7d7rU3Hu7d4l4IkYb9jgDU\">>],\n\t\t7737695 => [<<\"jStDc8gP5lyHVSFIJiT_2RrXhT26GpAhNItDEje07_Y\">>],\n\t\t8693975 => [<<\"_BN_07s59sawk5e9YcjHTX2qtYX9q7nCBYrlSWXoEsc\">>],\n\t\t8721209 => [<<\"4tlIV1x4YRWtNMut11ox9SS-lWt3xIzcXnrBBbNxGYs\">>],\n\t\t8740888 => [<<\"p4oyXU5C3T0ZycNhEwBZ0MbpV0j3voWV4mr__3fhOek\">>],\n\t\t8770977 => [<<\"cTmKy32Fbmlybl-WtbyuVFNhO11Efr4e_rGbzwAkPbs\">>, <<\"zNae10gPNkFt5aRVaSL2eSgxZiRDG79B9oDIeYqyzDY\">>],\n\t\t9155991 => [<<\"8qtH9T9jgYLHH-xi39w1OCNJykqew1O5qzrDkhAxN0U\">>],\n\t\t9489880 => [<<\"vDtQzZ9jl6r7yzczhoKhvzekCQYx-qskwYdzQO92eWo\">>],\n\t\t9525187 => [<<\"MQD8-8yIZwNC4A006TC1FVZSyCDHeIAN6YpDbTiX2RU\">>],\n\t\t9840597 => [<<\"1QGjyW1AEFlrFAs6VtUcmwOVOEZJjxaBR_z61W9mftI\">>],\n\t\t9870742 => [<<\"K0w8hOO1oCu4sQipWDQGyEFvn6kAXO-M93neMZmRoUc\">>, <<\"HoEZ6sK46bzTg4Jzrfy1kHFzkFQgI2UMm9pm0qJS3as\">>],\n\t\t10104161 => [<<\"rTaanqa6Z5KxtBV4Kj2Fu2KKqAWlstE0JeUbZ3AuN3o\">>],\n\t\t10324897 => [<<\"rAARxLc7tOdjUXEdNmSpOtsJIAw0XS229YHO1KOeUqI\">>],\n\t\t10337196 => [<<\"CCH2h2MzMP7WMh0Xf3GYL7zZDbU7E4CZPJWngp1qmDc\">>],\n\t\t10411070 => [<<\"xaB3eS6qbtKSrfFACMcYpgxWRtaJfT1kmOVpyaE45tI\">>, <<\"Hv0Q5APV6ARfDXDpxI-07R1YFSJAQpxTFh1Z8_nCk3U\">>, <<\"xCUsF5aatMdiiUAkGjg29_TiQGKqXpbzoMsB0yI-Dd8\">>, <<\"TNj-jk-KpKzz84xb1SRiKqyp8LNBnONxA9SIXs3XU7k\">>],\n\t\t10766840 => [<<\"AoCuo7S7ewDIqhYheBX6AjShrbyTgIv6Fp1AwQgmGqg\">>],\n\t\t11188140 => [<<\"sEw-yqeADuF0n_M6jTPLrOgH3coalIQHYPLrwM87nmo\">>],\n\t\t11386662 => [<<\"tn3FQGSVFt_TE5nyQNpuf_gnHdaWF85hZg1iE5hPQSE\">>],\n\t\t11392406 => [<<\"WwgngUwH7mXX15tdbfcjG_9gX2t8N8wbbfW2N34b3dA\">>],\n\t\t11392470 => [<<\"3DSCNJ5H9Hpyy7auT9qG5vom9jHBrCgjs48w_R6iSJg\">>],\n\t\t11490753 => [<<\"Zu9CSLWidXEnbSAQVuXGk62eMrVAGQb4qHmrtQrOQIQ\">>],\n\t\t12665298 => [<<\"Yk_dta-f75GShvyUvXq132pohaNpiQgerfIKJA0vdCw\">>],\n\t\t13324206 => [<<\"goAmthhGPdbYUqbAymyG_MjBUWVdS9OBm78mOoiITHo\">>],\n\t\t13489960 => [<<\"RJzScDd1IYIVaVOMo8zV2sXaGE4ZtKxwO2ONPFK-ou4\">>],\n\t\t13543865 => [<<\"ehTWq16I6ixhFOVkpTKi7s4jgYjNzGJ5CoJW3xjHDTE\">>],\n\t\t13577296 => [<<\"D29DVKVYAe74sAj9NBQ351rI6SseWZ5MMsSedGtydS8\">>, <<\"q8aw85uHTIPxuXcv2Awts4JVVHEMCl7J-61WfnvbYuQ\">>],\n\t\t13583145 => [<<\"4QcodvSlgZnuz5uWGmBARsGUJ5XaYORIO5jYM1dTucI\">>],\n\t\t14080582 => [<<\"clMyhm_qgwUJq68xb8Yf9EEaN3F7jgdqgKnKgjVRom8\">>],\n\t\t14080694 => [<<\"uoTzfoaN81h2_JyFkrvXTLFMnoSlWiuc9Yu1CmsFkH0\">>],\n\t\t14228100 => [<<\"NBjbIMFIdd6jFhSZ20izEke9Ju8jMuvYl8O4bqe4wC4\">>],\n\t\t14262305 => [<<\"vFP1U-4lk3GypDZFceLvRXjoadcB2FRKrcNQf9WjzpQ\">>],\n\t\t14714324 => [<<\"HOMVwtocaJIRPdCeKgzorJZJq1jw_lVGz0pQ3POj7No\">>],\n\t\t14714340 => [<<\"IqJf6iISeiEj3oof9491-jQX4drDZ92VoFuZqNmoixk\">>],\n\t\t15161433 => [<<\"d8CQoDBSrekoGZXqTatc7Y5JkHtNviX1D3JD-fxFDmU\">>],\n\t\t15337962 => [<<\"U2DZlRhnzhZrC7GsVNX0TxnXbHh03P3g-cU4fkHpiXA\">>],\n\t\t15355055 => [<<\"PgxqlgdluUGnmGCal3dgB6PYCd5S7FtBpI0zKDc8-AY\">>],\n\t\t15355071 => 
[<<\"XrtNbxWFUGlP-SYqQm8aYawQJU6H7CSyHpRZM1iLdKg\">>],\n\t\t15355087 => [<<\"rcc-B4OWqf0dbVY7Eq6q3pRDHLUjJ8tix8UeLQ4D68w\">>],\n\t\t15355103 => [<<\"0KMeq830vwvxUUM7RLCwE0ve4i0h_XHugbUTCkPNH-M\">>],\n\t\t15359953 => [<<\"IeEkQUBq3aE2CSbCF2Bk126lLaLZEYjUPJ_IO601tZg\">>],\n\t\t15365216 => [<<\"7M4KyVB4Wr-Le3Knb7JExgnsXTtG7718JIlhVBNstlE\">>],\n\t\t15366225 => [<<\"A5oMEDa7ZEm1kjPlXpwjuZd40rqP6eo3GobNGQY4HlY\">>],\n\t\t15435544 => [<<\"R5utplMYRQsJwA9Y63cL3Na4mXtYzE4gWG6g6zwgEQE\">>],\n\t\t15762143 => [<<\"lF7NSIz6CNf8WsMNQl8It8HbJem3MAllokozblLdU5A\">>],\n\t\t15864737 => [<<\"sMF6pWIkJFygBbR2IS10liEsjsLAMDja_E9_yUvUgeI\">>],\n\t\t15905630 => [<<\"D_3jwPKLfcTpWcrDV1Q7k3D4sMtyfw7vd45D2C9pUNA\">>],\n\t\t15915756 => [<<\"VL10zUkfmLz5eRxQsZi0G5wsfo8mvyN3p82updP17D4\">>],\n\t\t16002235 => [<<\"B_F4zIV1I5DXM-lR-Ko1tVUTTSmLCOYR7PoY8V8wFas\">>],\n\t\t16004299 => [<<\"FIrCkHY8jVkXcIkWYbMpuQSRYxavkOQ3wtUZPwMS1hM\">>],\n\t\t16017728 => [<<\"bhEMgsj4Yf5tdCDlwK9KpHmsgVLAsBDPOLtYeUDLw0M\">>],\n\t\t16106226 => [<<\"oNZMr_dB-L40nSUj6Fc19-FGteHQu7ZaRZu9_mgM1BI\">>],\n\t\t16114723 => [<<\"TMjINkrJIS3kbGu8bmcVt_34TaFN8lINFQPR_YGzHss\">>, <<\"ks0ODNqrNY4CCDxJcrgRY324WykCeTiSH4Tmdi30I2E\">>],\n\t\t16119806 => [<<\"rY4cJeAtYkg3bnTdqk4Vb0ojEcfS76L4B-iqyvQZ2VA\">>],\n\t\t16169319 => [<<\"ldoaD2NbG9VRhLOXddM1ypoAU3W5gR_zabUWZa4r6lM\">>],\n\t\t16197187 => [<<\"hPnpcoVcfRdkyUyhYSFNhsEcz7nQU0UU-fPSiRalDvw\">>],\n\t\t16197716 => [<<\"JeP9HaxmjN-TcbCkhKDIQejkGdKTlOgp68O5cy_2GRc\">>],\n\t\t16199303 => [<<\"JNCYRy7XYR_20vvXEAwpT43ovKB23np9yE9cqQfsIJk\">>, <<\"U4o0STLxwOEf42F4DF22ooOoA5Ykdp5j_D1io-4w1lc\">>],\n\t\t16376488 => [<<\"jOFeroI0Oz4TWcOx8mgv4iOZLv6ncbRXFRtJfqS4Pq0\">>, <<\"o9ArU5IxydvpJo2iiPI-p4EGBwlpBlyFIfbnz8Qrg6c\">>],\n\t\t16455497 => [<<\"y9wJkLq6Q0hKSDD67ilFqtMMatw9qpsKM9W2uy2Rfjc\">>],\n\t\t16560237 => [<<\"DlRct3GdPx7oYi3MSdmv16CgGWqhLJjbrKcIfU0E48I\">>],\n\t\t16985772 => [<<\"MGDpPk3LsexVpFBF43-FIIvc0vyeEDroYcIONJ6abd0\">>],\n\t\t17081857 => [<<\"fAnOUj-jmlzPMtIN90ZvowG9VUmBtD36MZ8-tRP1Ut4\">>],\n\t\t17251267 => [<<\"bcbIZq2gy8ivQiUlEch7tjNoCcUMTTLhInMlj9P2P88\">>, <<\"hxyn3yZ7-LCgKqfkCljyM7Hq7HJnmPnEKaXoybXJjHo\">>],\n\t\t17334749 => [<<\"hQvPcHPcBhyxv7GPx-E3bZWiNBhnCpFIDwWa3XBcYEU\">>],\n\t\t17335147 => [<<\"BFfNP1eCeYIkLiWWAVvHNLzk1N2pxkOChFzQbdv1IiA\">>],\n\t\t17416549 => [<<\"Cdcx7-UZJN324I9L47rrph9dIVy8RwfJa9mY7cJp9gk\">>, <<\"Hiu5cti9FefwcvT6xRCIoADUMkuDEm_6pZo92CK3fiw\">>],\n\t\t17416947 => [<<\"oO7raEVlJC6KhfK-UbNuppzbYPGdKWbh1e6rOymd_-o\">>],\n\t\t17455331 => [<<\"ojgJyXT8qwRXj1hOVx2gbeJDT0xEOIye0o9EbfU2LRM\">>],\n\t\t17670889 => [<<\"JDG-HBsrHGDodot2clC3nNkRKV5cvuhRWZjCwVFHG_Q\">>],\n\t\t17750387 => [<<\"G6JD1n-FXMSyTSryo0HoX7L3i7e4KEFK_ekDMEn9Bcg\">>],\n\t\t17785886 => [<<\"NixeAD5Y_8sQfcrMBWkODQuoXgJouUBmQmQzwTzlaKU\">>],\n\t\t18017285 => [<<\"P5KQo3QSWLzTLWkq3wgJlii11CEUSKMG_O2NMN6y_8c\">>],\n\t\t18199428 => [<<\"zavm_CqSq0KuWfc-E0JccEyrrQzjigxt7yuW1ceYjE0\">>],\n\t\t18248736 => [<<\"OGA55Jyg2c-Jhkx5zDNyiDvbFZiRXF0S_JESMhWAWcs\">>],\n\t\t18834003 => [<<\"kLP-8ILxdLSAQsrC6IwvfqQL6Loq2Q6lqOzwrnb6QoE\">>],\n\t\t19175334 => [<<\"1fzKf0Ygc-z3ejpZ1ZLOiNBYDRzViGRdPLtUqRS1nKY\">>],\n\t\t19774875 => [<<\"fr3nkF8AHXTcq9bT_b7x2X7Mun2A--Ssb7eyoKgQEwI\">>],\n\t\t20029848 => [<<\"_KI9ocPARF5JjaDPIbtpqw2hj_qRonw-AERjWOs5ZYM\">>],\n\t\t20076163 => [<<\"n1GVITzrvCF95Vz7l6hH7fdYzebDDAJav5z4-9C7lB4\">>],\n\t\t20250499 => [<<\"p0MVPvnv_lkWwfhSuSCgQ3NUj83shBffAx1NKPn4oy8\">>],\n\t\t20424523 => [<<\"CQv9OVOCzntq2DRqNJ9j_WnWPcsniyGRXpt4i_a8Iy0\">>],\n\t\t20599699 => 
[<<\"35wYULjhQBiTFh9u-PJz6ki0v7Zi1whk_AhowUt99Ac\">>]\n\t}.\n\n%% @doc Return the configured snapshot directory. Fall back to\n%% DEFAULT_SNAPSHOT_DIR (\"localnet_snapshot\") when start_from_state is not set.\nsnapshot_dir(Config) ->\n\tcase Config#config.start_from_state of\n\t\tnot_set ->\n\t\t\t?DEFAULT_SNAPSHOT_DIR;\n\t\tDir ->\n\t\t\tDir\n\tend.\n\n%% @doc Generate a snapshot directory name encoding both the mainnet starting\n%% height and the localnet ending height, e.g. \"localnet_snapshot_1500000_1500050\".\nsnapshot_dir_name(MainnetStartHeight, LocalnetEndHeight) ->\n\tlists:flatten(\n\t\tio_lib:format(\"localnet_snapshot_~B_~B\", [MainnetStartHeight, LocalnetEndHeight])\n\t).\n\n%% @doc Open the read-only \"start-from-state\" databases for the given snapshot\n%% directory. Return {ok, CloseFun} on success, where CloseFun/0 closes them.\nopen_snapshot_databases(SnapshotDir) ->\n\tcase ar_storage:open_start_from_state_databases(SnapshotDir) of\n\t\tok ->\n\t\t\t{ok, fun ar_storage:close_start_from_state_databases/0};\n\t\t{error, _} = Error ->\n\t\t\tError\n\tend.\n\n%% @doc Create a new snapshot directory.\ncreate_snapshot(SnapshotDir, DataDir, NewSnapshotDir) ->\n\tcase ensure_snapshot_dir(NewSnapshotDir) of\n\t\tok ->\n\t\t\tcase maybe_backfill_snapshot_data(SnapshotDir) of\n\t\t\t\tok ->\n\t\t\t\t\tok = copy_from_dir(DataDir, SnapshotDir, NewSnapshotDir, \"wallets\"),\n\t\t\t\t\tok = copy_from_dir(DataDir, SnapshotDir, NewSnapshotDir, \"ar_tx_blacklist\"),\n\t\t\t\t\tok = copy_from_dir(DataDir, SnapshotDir, NewSnapshotDir, \"data_sync_state\"),\n\t\t\t\t\tok = copy_from_dir(DataDir, SnapshotDir, NewSnapshotDir, \"header_sync_state\"),\n\t\t\t\t\tok = copy_from_dir(DataDir, SnapshotDir, NewSnapshotDir, \"mempool\"),\n\t\t\t\t\tok = copy_from_dir(DataDir, SnapshotDir, NewSnapshotDir, \"peers\"),\n\t\t\t\t\tok = copy_required_dir(SnapshotDir, NewSnapshotDir, \"seed_txs\"),\n\t\t\t\t\tcase create_snapshot_rocksdb(NewSnapshotDir) of\n\t\t\t\t\t\tok ->\n\t\t\t\t\t\t\tio:format(\"Snapshot created: ~s~n\", [NewSnapshotDir]),\n\t\t\t\t\t\t\t{ok, NewSnapshotDir};\n\t\t\t\t\t\t{error, _} = Error ->\n\t\t\t\t\t\t\tError\n\t\t\t\t\tend;\n\t\t\t\t{error, _} = Error ->\n\t\t\t\t\tError\n\t\t\tend;\n\t\t{error, _} = Error ->\n\t\t\tError\n\tend.\n\n%% @doc Open the original snapshot's databases, copy any data that is missing\n%% from the node databases, then close the snapshot dbs.\nmaybe_backfill_snapshot_data(SnapshotDir) ->\n\tcase open_snapshot_databases(SnapshotDir) of\n\t\t{ok, CloseSnapshotDbs} ->\n\t\t\tResult = store_snapshot_data(SnapshotDir),\n\t\t\tCloseSnapshotDbs(),\n\t\t\tResult;\n\t\t{error, _} = Error ->\n\t\t\tError\n\tend.\n\n%% @doc Create the RocksDB databases for a new snapshot: open fresh databases,\n%% populate them with the block index, recent blocks, tx headers, block time history,\n%% and account tree, then verify the result.\ncreate_snapshot_rocksdb(NewSnapshotDir) ->\n\tcase ar_node:get_block_index() of\n\t\t[] ->\n\t\t\t{error, empty_block_index};\n\t\tBI ->\n\t\t\tHeight = length(BI) - 1,\n\t\t\tSearchDepth = min(Height, ?LOCALNET_START_FROM_STATE_SEARCH_DEPTH),\n\t\t\tcase open_snapshot_dbs(NewSnapshotDir) of\n\t\t\t\t{ok, SnapshotDbs} ->\n\t\t\t\t\tResult =\n\t\t\t\t\t\tcreate_snapshot_rocksdb2(\n\t\t\t\t\t\t\tBI,\n\t\t\t\t\t\t\tHeight,\n\t\t\t\t\t\t\tSearchDepth,\n\t\t\t\t\t\t\tSnapshotDbs\n\t\t\t\t\t\t),\n\t\t\t\t\tclose_snapshot_dbs(SnapshotDbs),\n\t\t\t\t\tResult;\n\t\t\t\t{error, _} = Error 
->\n\t\t\t\t\tError\n\t\t\tend\n\tend.\n\ncreate_snapshot_rocksdb2(BI, Height, SearchDepth, SnapshotDbs) ->\n\tcase write_block_index_snapshot(BI, maps:get(block_index, SnapshotDbs)) of\n\t\tok ->\n\t\t\tcreate_snapshot_rocksdb3(\n\t\t\t\tBI,\n\t\t\t\tHeight,\n\t\t\t\tSearchDepth,\n\t\t\t\tSnapshotDbs\n\t\t\t);\n\t\t{error, _} = Error ->\n\t\t\tError\n\tend.\n\ncreate_snapshot_rocksdb3(BI, Height, SearchDepth, SnapshotDbs) ->\n\tcase read_recent_blocks_local(BI, SearchDepth) of\n\t\tnot_found ->\n\t\t\t{error, block_headers_not_found};\n\t\t{Skipped, Blocks} ->\n\t\t\tio:format(\"Snapshot: recent blocks ~B (skipped: ~B)~n\",\n\t\t\t\t[length(Blocks), Skipped]),\n\t\t\tBI2 = lists:nthtail(Skipped, BI),\n\t\t\tHeight2 = Height - Skipped,\n\t\t\tRewardHistoryBI = ar_rewards:interim_reward_history_bi(Height2, BI2),\n\t\t\tBlockTimeHistoryBI = block_time_history_bi(BI2),\n\t\t\tcase store_snapshot_blocks_in_snapshot(Blocks, SnapshotDbs) of\n\t\t\t\t{ok, TxIds} ->\n\t\t\t\t\tcreate_snapshot_rocksdb4(\n\t\t\t\t\t\tBI,\n\t\t\t\t\t\tBlocks,\n\t\t\t\t\t\tRewardHistoryBI,\n\t\t\t\t\t\tBlockTimeHistoryBI,\n\t\t\t\t\t\tSnapshotDbs,\n\t\t\t\t\t\tTxIds,\n\t\t\t\t\t\tSearchDepth\n\t\t\t\t\t);\n\t\t\t\t{error, _} = Error ->\n\t\t\t\t\tError\n\t\t\tend\n\tend.\n\ncreate_snapshot_rocksdb4(BI, Blocks, RewardHistoryBI, BlockTimeHistoryBI, SnapshotDbs, TxIds,\n\t\tSearchDepth) ->\n\tcase store_tx_headers(TxIds, maps:get(tx, SnapshotDbs)) of\n\t\tok ->\n\t\t\tcase copy_history_entries(\n\t\t\t\tRewardHistoryBI,\n\t\t\t\treward_history_db,\n\t\t\t\tmaps:get(reward_history, SnapshotDbs),\n\t\t\t\treward_history\n\t\t\t) of\n\t\t\t\tok ->\n\t\t\t\t\tcase copy_history_entries(\n\t\t\t\t\t\tBlockTimeHistoryBI,\n\t\t\t\t\t\tblock_time_history_db,\n\t\t\t\t\t\tmaps:get(block_time_history, SnapshotDbs),\n\t\t\t\t\t\tblock_time_history\n\t\t\t\t\t) of\n\t\t\t\t\t\tok ->\n\t\t\t\t\t\t\tcase store_latest_wallet_list_from_blocks(Blocks,\n\t\t\t\t\t\t\t\tmaps:get(account_tree, SnapshotDbs), SearchDepth) of\n\t\t\t\t\t\t\t\tok ->\n\t\t\t\t\t\t\t\t\tverify_snapshot_rocksdb(BI, Blocks,\n\t\t\t\t\t\t\t\t\t\tmaps:get(block_index, SnapshotDbs),\n\t\t\t\t\t\t\t\t\t\tmaps:get(block, SnapshotDbs),\n\t\t\t\t\t\t\t\t\t\tmaps:get(account_tree, SnapshotDbs));\n\t\t\t\t\t\t\t\t{error, _} = Error ->\n\t\t\t\t\t\t\t\t\tio:format(\"Snapshot: error storing wallet list: ~p~n\", [Error]),\n\t\t\t\t\t\t\t\t\tError\n\t\t\t\t\t\t\tend;\n\t\t\t\t\t\t{error, _} = Error ->\n\t\t\t\t\t\t\tio:format(\"Snapshot: error copying block time history: ~p~n\", [Error]),\n\t\t\t\t\t\t\tError\n\t\t\t\t\tend;\n\t\t\t\t{error, _} = Error ->\n\t\t\t\t\tio:format(\"Snapshot: error copying reward history: ~p~n\", [Error]),\n\t\t\t\t\tError\n\t\t\tend;\n\t\t{error, _} = Error ->\n\t\t\tio:format(\"Snapshot: error storing tx headers: ~p~n\", [Error]),\n\t\t\tError\n\tend.\n\n%% @doc Open a fresh set of RocksDB databases (tx_confirmation, tx, block,\n%% reward_history, block_time_history, block_index, account_tree) under\n%% NewSnapshotDir for writing. Return {ok, DbMap} where DbMap contains\n%% named handles keyed by atom. 
We cannot use ar_storage:open_databases/0\n%% since it always targets the data_dir and would collide with the\n%% running node's DB names.\nopen_snapshot_dbs(NewSnapshotDir) ->\n\tRocksDir = filename:join([NewSnapshotDir, ?ROCKS_DB_DIR]),\n\tDbs = [\n\t\t{tx_confirmation, snapshot_tx_confirmation_db, \"ar_storage_tx_confirmation_db\"},\n\t\t{tx, snapshot_tx_db, \"ar_storage_tx_db\"},\n\t\t{block, snapshot_block_db, \"ar_storage_block_db\"},\n\t\t{reward_history, snapshot_reward_history_db, \"reward_history_db\"},\n\t\t{block_time_history, snapshot_block_time_history_db, \"block_time_history_db\"},\n\t\t{block_index, snapshot_block_index_db, \"block_index_db\"},\n\t\t{account_tree, snapshot_account_tree_db, \"account_tree_db\"}\n\t],\n\topen_snapshot_write_dbs(Dbs, RocksDir, #{}, []).\n\n%% @doc Recursively open each RocksDB database in the list. On failure, close\n%% any already-opened databases before returning the error.\nopen_snapshot_write_dbs([], _RocksDir, DbMap, Opened) ->\n\t{ok, DbMap#{ opened => Opened }};\nopen_snapshot_write_dbs([{Key, Name, DirName} | Rest], RocksDir, DbMap, Opened) ->\n\tPath = filename:join([RocksDir, DirName]),\n\tcase ar_kv:open(#{ path => Path, name => Name }) of\n\t\tok ->\n\t\t\topen_snapshot_write_dbs(Rest, RocksDir, DbMap#{ Key => Name }, [Name | Opened]);\n\t\t{error, _} = Error ->\n\t\t\tclose_snapshot_dbs(DbMap#{ opened => Opened }),\n\t\t\tError\n\tend.\n\n%% @doc Close all RocksDB databases that were opened for a snapshot.\nclose_snapshot_dbs(SnapshotDbs) ->\n\tOpened = maps:get(opened, SnapshotDbs, []),\n\tlists:foreach(fun ar_kv:close/1, Opened).\n\n%% @doc Extract the portion of the block index needed for the block time history:\n%% history_length + consensus_window_size entries from the tip.\nblock_time_history_bi(BI) ->\n\tlists:sublist(BI,\n\t\tar_block_time_history:history_length() + ar_block:get_consensus_window_size()).\n\n%% @doc Serialize the full block index into a snapshot RocksDB database, keyed\n%% by height. 
Each entry stores {H, WeaveSize, TXRoot, PrevH} so the chain\n%% linkage can be validated on load.\nwrite_block_index_snapshot(BI, SnapshotBlockIndexDb) ->\n\twrite_block_index_snapshot2(0, <<>>, lists:reverse(BI), SnapshotBlockIndexDb).\n\nwrite_block_index_snapshot2(_Height, _PrevH, [], _SnapshotBlockIndexDb) ->\n\tok;\nwrite_block_index_snapshot2(Height, PrevH, [{H, WeaveSize, TXRoot} | BI],\n\t\tSnapshotBlockIndexDb) ->\n\tBin = term_to_binary({H, WeaveSize, TXRoot, PrevH}),\n\tcase ar_kv:put(SnapshotBlockIndexDb, << Height:256 >>, Bin) of\n\t\tok ->\n\t\t\twrite_block_index_snapshot2(Height + 1, H, BI, SnapshotBlockIndexDb);\n\t\tError ->\n\t\t\tError\n\tend.\n\n%% @doc Store blocks and their tx confirmation entries into the snapshot's\n%% RocksDB databases (block_db and tx_confirmation_db).\nstore_snapshot_blocks_in_snapshot(Blocks, SnapshotDbs) ->\n\tBlockDb = maps:get(block, SnapshotDbs),\n\tTxConfirmationDb = maps:get(tx_confirmation, SnapshotDbs),\n\tstore_snapshot_blocks_with_dbs(Blocks, BlockDb, TxConfirmationDb, \"Snapshot\").\n\n%% @doc Serialize a single block (replacing full tx records with just their IDs)\n%% and write it to the given block database, keyed by indep_hash.\nstore_block_snapshot(B, SnapshotBlockDb) ->\n\tTxIds = lists:map(fun tx_id/1, B#block.txs),\n\tBlockBin = ar_serialize:block_to_binary(B#block{ txs = TxIds }),\n\tar_kv:put(SnapshotBlockDb, B#block.indep_hash, BlockBin).\n\n%% @doc Extract the transaction ID from either a #tx{} record or a raw binary.\ntx_id(#tx{ id = TXID }) ->\n\tTXID;\ntx_id(TXID) ->\n\tTXID.\n\n%% @doc Copy transaction confirmation entries from the tx_confirmation_db\n%% to the snapshot database. If an entry is missing from the storage, build\n%% one from the given Height and BlockHash.\ncopy_tx_confirmations([], _Height, _BlockHash, _SnapshotTxConfDb) ->\n\tok;\ncopy_tx_confirmations([TXID | Rest], Height, BlockHash, SnapshotTxConfDb) ->\n\tcase tx_id(TXID) of\n\t\tTXID2 when is_binary(TXID2) ->\n\t\t\tcase ar_kv:get(tx_confirmation_db, TXID2) of\n\t\t\t\t{ok, Bin} ->\n\t\t\t\t\tcase ar_kv:put(SnapshotTxConfDb, TXID2, Bin) of\n\t\t\t\t\t\tok ->\n\t\t\t\t\t\t\tcopy_tx_confirmations(Rest, Height, BlockHash, SnapshotTxConfDb);\n\t\t\t\t\t\tError ->\n\t\t\t\t\t\t\tError\n\t\t\t\t\tend;\n\t\t\t\tnot_found ->\n\t\t\t\t\tcase ar_kv:put(SnapshotTxConfDb, TXID2, term_to_binary({Height, BlockHash})) of\n\t\t\t\t\t\tok ->\n\t\t\t\t\t\t\tcopy_tx_confirmations(Rest, Height, BlockHash, SnapshotTxConfDb);\n\t\t\t\t\t\tError ->\n\t\t\t\t\t\t\tError\n\t\t\t\t\tend;\n\t\t\t\t{error, _} = Error ->\n\t\t\t\t\tError\n\t\t\tend;\n\t\t_ ->\n\t\t\tcopy_tx_confirmations(Rest, Height, BlockHash, SnapshotTxConfDb)\n\tend.\n\n%% @doc Read transactions from the node and write their headers (stripping\n%% data from v2 format) to the given snapshot tx database.\nstore_tx_headers(TxIds, SnapshotTxDb) ->\n\tlists:foldl(\n\t\tfun(TXID, Acc) ->\n\t\t\tcase Acc of\n\t\t\t\tok ->\n\t\t\t\t\tcase read_tx_local(TXID) of\n\t\t\t\t\t\t#tx{} = TX ->\n\t\t\t\t\t\t\tstore_tx_header_snapshot(TX, SnapshotTxDb);\n\t\t\t\t\t\tunavailable ->\n\t\t\t\t\t\t\t{error, {tx_unavailable, tx_id(TXID)}};\n\t\t\t\t\t\tError ->\n\t\t\t\t\t\t\t{error, {tx_unavailable, tx_id(TXID), Error}}\n\t\t\t\t\tend;\n\t\t\t\t{error, _} = Error ->\n\t\t\t\t\tError\n\t\t\tend\n\t\tend,\n\t\tok,\n\t\tTxIds\n\t).\n\n%% @doc Write a single tx header to a database. 
For v2 transactions, strip the\n%% data field (it's stored separately as chunks); v1 transactions keep their data\n%% inline.\nstore_tx_header_snapshot(TX, SnapshotTxDb) ->\n\tTX2 =\n\t\tcase TX#tx.format of\n\t\t\t1 ->\n\t\t\t\tTX;\n\t\t\t_ ->\n\t\t\t\tTX#tx{ data = <<>> }\n\t\tend,\n\tar_kv:put(SnapshotTxDb, TX2#tx.id, ar_serialize:tx_to_binary(TX2)).\n\n%% @doc Copy history entries (reward_history or block_time_history) from SourceDb\n%% to DestDb for each block hash in HistoryBI. Stops on the first error.\ncopy_history_entries(HistoryBI, SourceDb, DestDb, Label) ->\n\tlists:foldl(\n\t\tfun({BH, _, _}, Acc) ->\n\t\t\tcase Acc of\n\t\t\t\tok ->\n\t\t\t\t\tcase ar_kv:get(SourceDb, BH) of\n\t\t\t\t\t\t{ok, Bin} ->\n\t\t\t\t\t\t\tar_kv:put(DestDb, BH, Bin);\n\t\t\t\t\t\tnot_found ->\n\t\t\t\t\t\t\t{error, {Label, not_found, BH}};\n\t\t\t\t\t\t{error, Reason} ->\n\t\t\t\t\t\t\t{error, {Label, Reason, BH}}\n\t\t\t\t\tend;\n\t\t\t\t{error, _} = Error ->\n\t\t\t\t\tError\n\t\t\tend\n\t\tend,\n\t\tok,\n\t\tHistoryBI\n\t).\n\n%% @doc Find the most recent readable wallet tree from the block list, verify\n%% its root hash matches the block's wallet_list field, then write the account\n%% tree nodes to the snapshot account_tree database.\nstore_latest_wallet_list_from_blocks(Blocks, SnapshotAccountTreeDb, SearchDepth) ->\n\tcase find_wallet_tree_with_search(Blocks, SearchDepth, fun ar_storage:read_wallet_list/1) of\n\t\t{ok, {B, Tree}} ->\n\t\t\t{RootHash, _UpdatedTree, UpdateMap} = ar_block:hash_wallet_list(Tree),\n\t\t\tcase RootHash == B#block.wallet_list of\n\t\t\t\ttrue ->\n\t\t\t\t\tstore_account_tree_update_snapshot(\n\t\t\t\t\t\tB#block.height,\n\t\t\t\t\t\tRootHash,\n\t\t\t\t\t\tUpdateMap,\n\t\t\t\t\t\tSnapshotAccountTreeDb\n\t\t\t\t\t);\n\t\t\t\tfalse ->\n\t\t\t\t\t{error, {wallet_list_root_mismatch, RootHash, B#block.wallet_list}}\n\t\t\tend;\n\t\tnot_found ->\n\t\t\t{error, wallet_list_not_found}\n\tend.\n\n%% @doc Search through the block list to find one whose account tree can be\n%% successfully read. Start at the consensus-window-offset block and walks\n%% backward, skipping up to SearchDepth blocks. Return {ok, {Block, Tree}}\n%% or not_found.\nfind_wallet_tree_with_search([], _SearchDepth, _ReadWalletFun) ->\n\tnot_found;\nfind_wallet_tree_with_search(Blocks, SearchDepth, ReadWalletFun) ->\n\tfind_wallet_tree_with_search(Blocks, SearchDepth, 0, ReadWalletFun).\n\nfind_wallet_tree_with_search(_Blocks, Skipped, Skipped, _ReadWalletFun) ->\n\tnot_found;\nfind_wallet_tree_with_search(Blocks, SearchDepth, Skipped, ReadWalletFun) ->\n\t{IsLast, B} =\n\t\tcase length(Blocks) >= ar_block:get_consensus_window_size() of\n\t\t\ttrue ->\n\t\t\t\t{false, lists:nth(ar_block:get_consensus_window_size(), Blocks)};\n\t\t\tfalse ->\n\t\t\t\t{true, lists:last(Blocks)}\n\t\tend,\n\tcase ReadWalletFun(B#block.wallet_list) of\n\t\t{ok, Tree} ->\n\t\t\t{ok, {B, Tree}};\n\t\t_ ->\n\t\t\tcase IsLast of\n\t\t\t\ttrue ->\n\t\t\t\t\tnot_found;\n\t\t\t\tfalse ->\n\t\t\t\t\tfind_wallet_tree_with_search(tl(Blocks), SearchDepth, Skipped + 1, ReadWalletFun)\n\t\t\tend\n\tend.\n\n%% @doc Write account tree nodes from the update map to the snapshot database.\n%% Each key is {Hash, Prefix} and the value is the tree node data. 
Existing\n%% entries are not overwritten.\nstore_account_tree_update_snapshot(_Height, _RootHash, Map, SnapshotAccountTreeDb) ->\n\tmaps:fold(\n\t\tfun({H, Prefix}, Value, Acc) ->\n\t\t\tcase Acc of\n\t\t\t\tok ->\n\t\t\t\t\tPrefix2 =\n\t\t\t\t\t\tcase Prefix of\n\t\t\t\t\t\t\troot ->\n\t\t\t\t\t\t\t\t<<>>;\n\t\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t\tPrefix\n\t\t\t\t\t\tend,\n\t\t\t\t\tDBKey = << H/binary, Prefix2/binary >>,\n\t\t\t\t\tcase ar_kv:get(SnapshotAccountTreeDb, DBKey) of\n\t\t\t\t\t\tnot_found ->\n\t\t\t\t\t\t\tar_kv:put(SnapshotAccountTreeDb, DBKey, term_to_binary(Value));\n\t\t\t\t\t\t{ok, _} ->\n\t\t\t\t\t\t\tok;\n\t\t\t\t\t\t{error, Reason} ->\n\t\t\t\t\t\t\t{error, {account_tree_read_failed, Reason}}\n\t\t\t\t\tend;\n\t\t\t\t{error, _} = Error ->\n\t\t\t\t\tError\n\t\t\tend\n\t\tend,\n\t\tok,\n\t\tMap\n\t).\n\n%% @doc Validate a completed snapshot by checking: (1) the block index length\n%% matches, (2) all recent blocks exist in the snapshot block db, and (3) at\n%% least one account tree root is present in the account tree db.\nverify_snapshot_rocksdb(BI, Blocks, SnapshotBlockIndexDb, SnapshotBlockDb,\n\t\tSnapshotAccountTreeDb) ->\n\tHeight = length(BI) - 1,\n\tcase read_block_index_from_db(SnapshotBlockIndexDb) of\n\t\tnot_found ->\n\t\t\t{error, snapshot_block_index_not_found};\n\t\tSnapshotBI ->\n\t\t\tcase length(SnapshotBI) == length(BI) of\n\t\t\t\ttrue ->\n\t\t\t\t\tcase verify_recent_blocks_from_blocks(Blocks, SnapshotBlockDb) of\n\t\t\t\t\t\tok ->\n\t\t\t\t\t\t\tcase validate_wallet_root_from_blocks(Blocks, SnapshotAccountTreeDb) of\n\t\t\t\t\t\t\t\tok ->\n\t\t\t\t\t\t\t\t\tok;\n\t\t\t\t\t\t\t\t{error, _} = Error ->\n\t\t\t\t\t\t\t\t\tError\n\t\t\t\t\t\t\tend;\n\t\t\t\t\t\t{error, _} = Error ->\n\t\t\t\t\t\t\tError\n\t\t\t\t\tend;\n\t\t\t\tfalse ->\n\t\t\t\t\t{error, {snapshot_block_index_length_mismatch, Height}}\n\t\t\tend\n\tend.\n\n%% @doc Read the complete block index from a RocksDB database and return it as\n%% a list of {H, WeaveSize, TXRoot} tuples (newest-first). 
Validates the PrevH\n%% chain linkage during reconstruction.\nread_block_index_from_db(DbName) ->\n\tcase ar_kv:get_prev(DbName, <<\"a\">>) of\n\t\tnone ->\n\t\t\tnot_found;\n\t\t{ok, << Height:256 >>, _V} ->\n\t\t\t{ok, Map} = ar_kv:get_range(DbName, << 0:256 >>, << Height:256 >>),\n\t\t\tread_block_index_from_map(Map, 0, Height, <<>>, [])\n\tend.\n\nread_block_index_from_map(_Map, Height, End, _PrevH, BI) when Height > End ->\n\tBI;\nread_block_index_from_map(Map, Height, End, PrevH, BI) ->\n\tV = maps:get(<< Height:256 >>, Map, not_found),\n\tcase V of\n\t\tnot_found ->\n\t\t\tnot_found;\n\t\t_ ->\n\t\t\tcase binary_to_term(V) of\n\t\t\t\t{H, WeaveSize, TXRoot, PrevH} ->\n\t\t\t\t\tread_block_index_from_map(Map, Height + 1, End, H,\n\t\t\t\t\t\t[{H, WeaveSize, TXRoot} | BI]);\n\t\t\t\t{_, _, _, _} ->\n\t\t\t\t\tnot_found\n\t\t\tend\n\tend.\n\n%% @doc Verify that every block in the list exists in the snapshot block database.\nverify_recent_blocks_from_blocks([], _SnapshotBlockDb) ->\n\tok;\nverify_recent_blocks_from_blocks([B | Rest], SnapshotBlockDb) ->\n\tBH = B#block.indep_hash,\n\tcase ar_kv:get(SnapshotBlockDb, BH) of\n\t\t{ok, _} ->\n\t\t\tverify_recent_blocks_from_blocks(Rest, SnapshotBlockDb);\n\t\tnot_found ->\n\t\t\t{error, {snapshot_block_missing, BH}};\n\t\t{error, _} = Error ->\n\t\t\tError\n\tend.\n\n%% @doc Check that at least one block's wallet_list root hash is present in the\n%% snapshot account tree database.\nvalidate_wallet_root_from_blocks([], _SnapshotAccountTreeDb) ->\n\t{error, wallet_list_not_found};\nvalidate_wallet_root_from_blocks([B | Rest], SnapshotAccountTreeDb) ->\n\tWalletList = B#block.wallet_list,\n\tcase ar_kv:get_prev(SnapshotAccountTreeDb, << WalletList/binary >>) of\n\t\tnone ->\n\t\t\tvalidate_wallet_root_from_blocks(Rest, SnapshotAccountTreeDb);\n\t\t{ok, _, _} ->\n\t\t\tok\n\tend.\n\n%% @doc If TXID is already a #tx{} record, return it directly; otherwise read\n%% the transaction from local storage.\nread_tx_local(TXID) ->\n\tcase TXID of\n\t\t#tx{} = TX ->\n\t\t\tTX;\n\t\t_ ->\n\t\t\tar_storage:read_tx(TXID)\n\tend.\n\n%% @doc Read recent blocks from the node's local storage.\nread_recent_blocks_local(BI, SearchDepth) ->\n\tar_node:read_recent_blocks(BI, SearchDepth, not_set).\n\n%% @doc Read recent blocks from a snapshot directory's databases.\nread_recent_blocks_from_snapshot(BI, SearchDepth, SnapshotDir) ->\n\tar_node:read_recent_blocks(BI, SearchDepth, SnapshotDir).\n\n\n%% @doc Store blocks into the node's block_db and tx_confirmation_db.\nstore_snapshot_blocks_from_list(Blocks) ->\n\tstore_snapshot_blocks_with_dbs(Blocks, block_db, tx_confirmation_db, \"Startup copy\").\n\n%% @doc Serialize and store a list of blocks and their tx\n%% confirmation entries into the given databases. 
Return {ok, TXIDList} with\n%% the deduplicated set of all tx IDs across the stored blocks.\nstore_snapshot_blocks_with_dbs(Blocks, BlockDb, TxConfirmationDb, LogPrefix) ->\n\tcase lists:foldl(\n\t\tfun(B, Acc) ->\n\t\t\tcase Acc of\n\t\t\t\t{ok, TxIdSet} ->\n\t\t\t\t\tio:format(\"~s: block ~s height ~B~n\",\n\t\t\t\t\t\t[LogPrefix, ar_util:encode(B#block.indep_hash), B#block.height]),\n\t\t\t\t\tcase store_block_snapshot(B, BlockDb) of\n\t\t\t\t\t\tok ->\n\t\t\t\t\t\t\tTxIds = lists:map(fun tx_id/1, B#block.txs),\n\t\t\t\t\t\t\tcase copy_tx_confirmations(TxIds, B#block.height,\n\t\t\t\t\t\t\t\t\tB#block.indep_hash, TxConfirmationDb) of\n\t\t\t\t\t\t\t\tok ->\n\t\t\t\t\t\t\t\t\t{ok, sets:union(TxIdSet, sets:from_list(TxIds))};\n\t\t\t\t\t\t\t\t{error, _} = Error ->\n\t\t\t\t\t\t\t\t\tio:format(\"~s: error confirmations for block ~s: ~p~n\",\n\t\t\t\t\t\t\t\t\t\t[LogPrefix, ar_util:encode(B#block.indep_hash), Error]),\n\t\t\t\t\t\t\t\t\tError\n\t\t\t\t\t\t\tend;\n\t\t\t\t\t\t{error, _} = Error ->\n\t\t\t\t\t\t\tio:format(\"~s: error storing block ~s: ~p~n\",\n\t\t\t\t\t\t\t\t[LogPrefix, ar_util:encode(B#block.indep_hash), Error]),\n\t\t\t\t\t\t\tError\n\t\t\t\t\tend;\n\t\t\t\t{error, _} = Error ->\n\t\t\t\t\tError\n\t\t\tend\n\t\tend,\n\t\t{ok, sets:new()},\n\t\tBlocks\n\t) of\n\t\t{ok, TxIdSet} ->\n\t\t\t{ok, sets:to_list(TxIdSet)};\n\t\t{error, _} = Error ->\n\t\t\tError\n\tend.\n\n\n%% @doc Copy tx headers into tx_db during startup. First check if\n%% the entry already exists; if not, read the tx from the snapshot directory\n%% as a fallback.\nstore_snapshot_tx_headers(TxIds, SnapshotDir) ->\n\tio:format(\"Startup copy: tx headers to copy ~B~n\", [length(TxIds)]),\n\tlists:foldl(\n\t\tfun(TXID, Acc) ->\n\t\t\tcase Acc of\n\t\t\t\tok ->\n\t\t\t\t\tTXID2 = tx_id(TXID),\n\t\t\t\t\tcase ar_kv:get(tx_db, TXID2) of\n\t\t\t\t\t\t{ok, _} ->\n\t\t\t\t\t\t\tok;\n\t\t\t\t\t\tnot_found ->\n\t\t\t\t\t\t\tcase ar_storage:read_tx(TXID2, SnapshotDir) of\n\t\t\t\t\t\t\t\t#tx{} = TX ->\n\t\t\t\t\t\t\t\t\tstore_tx_header_snapshot(TX, tx_db);\n\t\t\t\t\t\t\t\tunavailable ->\n\t\t\t\t\t\t\t\t\tio:format(\"Startup copy: missing tx header ~s~n\",\n\t\t\t\t\t\t\t\t\t\t[ar_util:encode(TXID2)]),\n\t\t\t\t\t\t\t\t\t{error, {tx_unavailable, TXID2}};\n\t\t\t\t\t\t\t\tError ->\n\t\t\t\t\t\t\t\t\tio:format(\"Startup copy: error reading tx ~s: ~p~n\",\n\t\t\t\t\t\t\t\t\t\t[ar_util:encode(TXID2), Error]),\n\t\t\t\t\t\t\t\t\t{error, {tx_unavailable, TXID2, Error}}\n\t\t\t\t\t\t\tend;\n\t\t\t\t\t\t{error, _} = Error ->\n\t\t\t\t\t\t\tio:format(\"Startup copy: error reading tx db ~s: ~p~n\",\n\t\t\t\t\t\t\t\t[ar_util:encode(TXID2), Error]),\n\t\t\t\t\t\t\t{error, {tx_db_read_failed, TXID2, Error}}\n\t\t\t\t\tend;\n\t\t\t\t{error, _} = Error ->\n\t\t\t\t\tError\n\t\t\tend\n\t\tend,\n\t\tok,\n\t\tTxIds\n\t).\n\n%% @doc Copy block time history and reward history entries from SourceDB to DestDB\n%% during startup. 
Unlike copy_history_entries/4, this skips entries that\n%% already exist in the destination.\nstore_snapshot_history_entries(HistoryBI, SourceDb, DestDb, Label) ->\n\tlists:foldl(\n\t\tfun({BH, _, _}, Acc) ->\n\t\t\tcase Acc of\n\t\t\t\tok ->\n\t\t\t\t\tcase ar_kv:get(DestDb, BH) of\n\t\t\t\t\t\t{ok, _} ->\n\t\t\t\t\t\t\tok;\n\t\t\t\t\t\tnot_found ->\n\t\t\t\t\t\t\tcase ar_kv:get(SourceDb, BH) of\n\t\t\t\t\t\t\t\t{ok, Bin} ->\n\t\t\t\t\t\t\t\t\tar_kv:put(DestDb, BH, Bin);\n\t\t\t\t\t\t\t\tnot_found ->\n\t\t\t\t\t\t\t\t\tio:format(\"Startup copy: missing ~p entry ~s~n\",\n\t\t\t\t\t\t\t\t\t\t[Label, ar_util:encode(BH)]),\n\t\t\t\t\t\t\t\t\t{error, {Label, not_found, BH}};\n\t\t\t\t\t\t\t\t{error, Reason} ->\n\t\t\t\t\t\t\t\t\tio:format(\"Startup copy: error ~p entry ~s: ~p~n\",\n\t\t\t\t\t\t\t\t\t\t[Label, ar_util:encode(BH), Reason]),\n\t\t\t\t\t\t\t\t\t{error, {Label, Reason, BH}}\n\t\t\t\t\t\t\tend;\n\t\t\t\t\t\t{error, _} = Error ->\n\t\t\t\t\t\t\tio:format(\"Startup copy: error reading ~p entry ~s: ~p~n\",\n\t\t\t\t\t\t\t\t[Label, ar_util:encode(BH), Error]),\n\t\t\t\t\t\t\t{error, {Label, Error, BH}}\n\t\t\t\t\tend;\n\t\t\t\t{error, _} = Error ->\n\t\t\t\t\tError\n\t\t\tend\n\t\tend,\n\t\tok,\n\t\tHistoryBI\n\t).\n\n%% @doc Read an account tree from the snapshot directory,\n%% verify the root hash, and store the account tree nodes in the local\n%% account_tree_db.\nstore_snapshot_wallet_list(Blocks, SnapshotDir, SearchDepth) ->\n\tcase find_wallet_tree_with_search(Blocks, SearchDepth,\n\t\t\tfun(WalletList) -> read_wallet_list_from_snapshot(WalletList, SnapshotDir) end) of\n\t\t{ok, {B, Tree}} ->\n\t\t\t{RootHash, _UpdatedTree, UpdateMap} = ar_block:hash_wallet_list(Tree),\n\t\t\tio:format(\"Startup copy: wallet list root ~s height ~B~n\",\n\t\t\t\t[ar_util:encode(RootHash), B#block.height]),\n\t\t\tcase RootHash == B#block.wallet_list of\n\t\t\t\ttrue ->\n\t\t\t\t\tstore_account_tree_update_snapshot(\n\t\t\t\t\t\tB#block.height,\n\t\t\t\t\t\tRootHash,\n\t\t\t\t\t\tUpdateMap,\n\t\t\t\t\t\taccount_tree_db\n\t\t\t\t\t);\n\t\t\t\tfalse ->\n\t\t\t\t\t{error, {wallet_list_root_mismatch, RootHash, B#block.wallet_list}}\n\t\t\tend;\n\t\tnot_found ->\n\t\t\t{error, wallet_list_not_found}\n\tend.\n\n\n%% @doc Try to read a wallet list from local storage first; fall back to the\n%% snapshot directory if not available locally.\nread_wallet_list_from_snapshot(WalletList, SnapshotDir) ->\n\tcase ar_storage:read_wallet_list(WalletList) of\n\t\t{ok, Tree} ->\n\t\t\t{ok, Tree};\n\t\t_ ->\n\t\t\tar_storage:read_wallet_list(WalletList, SnapshotDir)\n\tend.\n\n%% @doc Create the snapshot output directory. Fail if the directory already\n%% exists (to avoid accidentally overwriting a previous snapshot).\nensure_snapshot_dir(NewSnapshotDir) ->\n\tcase file:read_file_info(NewSnapshotDir) of\n\t\t{ok, _} ->\n\t\t\t{error, {snapshot_dir_exists, NewSnapshotDir}};\n\t\t{error, enoent} ->\n\t\t\tfilelib:ensure_dir(filename:join(NewSnapshotDir, \"placeholder\") ++ \"/\");\n\t\t{error, Reason} ->\n\t\t\t{error, {snapshot_dir_unavailable, Reason}}\n\tend.\n\n%% @doc Copy a named subdirectory or file into TargetDir. Tries PrimaryDir\n%% first, then FallbackDir. 
Silently succeeds if the name is not found in\n%% either location (the data may simply not exist yet).\ncopy_from_dir(PrimaryDir, FallbackDir, TargetDir, Name) ->\n\tPrimaryPath = filename:join([PrimaryDir, Name]),\n\tFallbackPath = filename:join([FallbackDir, Name]),\n\tTargetPath = filename:join([TargetDir, Name]),\n\tcase exists_on_disk(PrimaryPath) of\n\t\ttrue ->\n\t\t\tcopy_any(PrimaryPath, TargetPath);\n\t\tfalse ->\n\t\t\tcase exists_on_disk(FallbackPath) of\n\t\t\t\ttrue ->\n\t\t\t\t\tcopy_any(FallbackPath, TargetPath);\n\t\t\t\tfalse ->\n\t\t\t\t\tok\n\t\t\tend\n\tend.\n\n%% @doc Copy a named subdirectory or file from SourceDir to TargetDir. Unlike\n%% copy_from_dir/4, this returns an error if the source does not exist.\ncopy_required_dir(SourceDir, TargetDir, Name) ->\n\tSourcePath = filename:join([SourceDir, Name]),\n\tTargetPath = filename:join([TargetDir, Name]),\n\tcase exists_on_disk(SourcePath) of\n\t\ttrue ->\n\t\t\tcopy_any(SourcePath, TargetPath);\n\t\tfalse ->\n\t\t\t{error, {snapshot_path_missing, SourcePath}}\n\tend.\n\n%% @doc Return true if the given path exists on disk (file, directory, or symlink).\nexists_on_disk(Path) ->\n\tcase file:read_file_info(Path) of\n\t\t{ok, _} ->\n\t\t\ttrue;\n\t\t_ ->\n\t\t\tfalse\n\tend.\n\n%% @doc Copy a filesystem entry (regular file, directory, or symlink) from\n%% SourcePath to TargetPath, dispatching to the appropriate copy function.\ncopy_any(SourcePath, TargetPath) ->\n\tcase file:read_file_info(SourcePath) of\n\t\t{ok, #file_info{ type = directory }} ->\n\t\t\tcopy_dir(SourcePath, TargetPath);\n\t\t{ok, #file_info{ type = regular }} ->\n\t\t\tfilelib:ensure_dir(TargetPath),\n\t\t\tcase file:copy(SourcePath, TargetPath) of\n\t\t\t\t{ok, _} ->\n\t\t\t\t\tok;\n\t\t\t\tError ->\n\t\t\t\t\tError\n\t\t\tend;\n\t\t{ok, #file_info{ type = symlink }} ->\n\t\t\tcopy_symlink(SourcePath, TargetPath);\n\t\t{ok, #file_info{ type = other }} ->\n\t\t\t{error, {unsupported_type, SourcePath}};\n\t\t{error, Reason} ->\n\t\t\t{error, {read_file_info_failed, SourcePath, Reason}}\n\tend.\n\n%% @doc Recursively copy a directory and all its contents.\ncopy_dir(SourceDir, TargetDir) ->\n\tcase file:list_dir(SourceDir) of\n\t\t{ok, Entries} ->\n\t\t\tok = filelib:ensure_dir(filename:join([TargetDir, \"placeholder\"]) ++ \"/\"),\n\t\t\tlists:foldl(\n\t\t\t\tfun(Entry, Acc) ->\n\t\t\t\t\tcase Acc of\n\t\t\t\t\t\tok ->\n\t\t\t\t\t\t\tSourcePath = filename:join([SourceDir, Entry]),\n\t\t\t\t\t\t\tTargetPath = filename:join([TargetDir, Entry]),\n\t\t\t\t\t\t\tcopy_any(SourcePath, TargetPath);\n\t\t\t\t\t\t{error, _} = Error ->\n\t\t\t\t\t\t\tError\n\t\t\t\t\tend\n\t\t\t\tend,\n\t\t\t\tok,\n\t\t\t\tEntries\n\t\t\t);\n\t\t{error, Reason} ->\n\t\t\t{error, {list_dir_failed, SourceDir, Reason}}\n\tend.\n\n%% @doc Copy a symbolic link by reading its target and creating a new symlink.\ncopy_symlink(SourcePath, TargetPath) ->\n\tcase file:read_link(SourcePath) of\n\t\t{ok, LinkTarget} ->\n\t\t\tfilelib:ensure_dir(TargetPath),\n\t\t\tfile:make_symlink(LinkTarget, TargetPath);\n\t\t{error, Reason} ->\n\t\t\t{error, {read_link_failed, SourcePath, Reason}}\n\tend.\n\n"
  },
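A minimal usage sketch of the snapshot workflow implemented in ar_localnet above, as run from the Erlang shell of a joined localnet node. The target height is an illustrative placeholder, and the success/error shapes are matched only as loosely as the module's own create_snapshot code returns them.

    %% Mine a few blocks on top of the restored snapshot state, then capture the
    %% result as a new reproducible snapshot directory named
    %% localnet_snapshot_<mainnet_start_height>_<localnet_end_height>.
    ar_localnet:mine_one_block(),
    ar_localnet:mine_until_height(1500050),   %% hypothetical target height
    case ar_localnet:create_snapshot() of
        {ok, NewSnapshotDir} ->
            io:format("Snapshot written to ~s~n", [NewSnapshotDir]);
        {error, Reason} ->
            io:format("Snapshot failed: ~p~n", [Reason])
    end.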
  {
    "path": "apps/arweave/src/ar_localnet_mining_server.erl",
    "content": "-module(ar_localnet_mining_server).\n\n-behaviour(ar_mining_server_behaviour).\n-behaviour(gen_server).\n\n-export([start_link/0, start_mining/1, pause/0, is_paused/0, set_difficulty/1,\n\tset_merkle_rebase_threshold/1, set_height/1]).\n\n-export([init/1, handle_cast/2, handle_call/3, handle_info/2, terminate/2]).\n\n-include(\"ar.hrl\").\n-include(\"ar_consensus.hrl\").\n-include(\"ar_mining.hrl\").\n-include(\"ar_vdf.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-record(state, {\n\tpaused = true,\n\tdifficulty = {infinity, infinity},\n\tmerkle_rebase_threshold = infinity,\n\theight = 0\n}).\n\n-define(RETRY_MINE_DELAY_MS, 1000).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\nstart_mining(Args) ->\n\tgen_server:cast(?MODULE, {start_mining, Args}).\n\npause() ->\n\tgen_server:cast(?MODULE, pause).\n\nis_paused() ->\n\tgen_server:call(?MODULE, is_paused).\n\nset_difficulty(DiffPair) ->\n\tgen_server:cast(?MODULE, {set_difficulty, DiffPair}).\n\nset_merkle_rebase_threshold(Threshold) ->\n\tgen_server:cast(?MODULE, {set_merkle_rebase_threshold, Threshold}).\n\nset_height(Height) ->\n\tgen_server:cast(?MODULE, {set_height, Height}).\n\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\n\ninit([]) ->\n\t{ok, #state{}}.\n\nhandle_call(is_paused, _From, State) ->\n\t{reply, State#state.paused, State};\nhandle_call(_Request, _From, State) ->\n\t{reply, ok, State}.\n\nhandle_cast({start_mining, _Args}, #state{ paused = false } = State) ->\n\t{noreply, State};\nhandle_cast({start_mining, {DiffPair, RebaseThreshold, Height}}, State) ->\n\tgen_server:cast(self(), mine),\n\t{noreply, State#state{\n\t\tpaused = false,\n\t\tdifficulty = DiffPair,\n\t\tmerkle_rebase_threshold = RebaseThreshold,\n\t\theight = Height\n\t}};\n\nhandle_cast(pause, State) ->\n\tar:console(\"Pausing localnet mining.~n\"),\n\t{noreply, State#state{ paused = true }};\n\nhandle_cast({set_difficulty, DiffPair}, State) ->\n\t{noreply, State#state{ difficulty = DiffPair }};\n\nhandle_cast({set_merkle_rebase_threshold, Threshold}, State) ->\n\t{noreply, State#state{ merkle_rebase_threshold = Threshold }};\n\nhandle_cast({set_height, Height}, State) ->\n\t{noreply, State#state{ height = Height }};\n\nhandle_cast(mine, #state{ paused = true } = State) ->\n\t{noreply, State};\nhandle_cast(mine, State) ->\n\tcase mine_block(State) of\n\t\tok ->\n\t\t\t{noreply, State#state{ paused = true }};\n\t\terror ->\n\t\t\terlang:send_after(?RETRY_MINE_DELAY_MS, self(), retry_mine),\n\t\t\t{noreply, State}\n\tend;\n\nhandle_cast(_Msg, State) ->\n\t{noreply, State}.\n\nhandle_info(retry_mine, State) ->\n\tgen_server:cast(self(), mine),\n\t{noreply, State};\nhandle_info(_Info, State) ->\n\t{noreply, State}.\n\nterminate(_Reason, _State) ->\n\tok.\n\n%%%===================================================================\n%%% Internal functions.\n%%%===================================================================\n\nmine_block(State) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tMiningAddr = Config#config.mining_addr,\n\tStorageModules = Config#config.storage_modules,\n\tmine_block2(pick_random_storage_module(StorageModules), State, MiningAddr, 
StorageModules).\n\nmine_block2(error, _State, _MiningAddr, _StorageModules) ->\n\t?LOG_ERROR([{event, failed_to_create_localnet_block}, {step, sample_storage_module}, {reason, all_storage_modules_empty}]),\n\terror;\nmine_block2({StoreID, Intervals}, State, MiningAddr, StorageModules) ->\n\tmine_block3(sample_chunk_with_proof(StoreID, Intervals, MiningAddr), State, MiningAddr, StorageModules).\n\nmine_block3({error, Error}, _State, _MiningAddr, _StorageModules) ->\n\t?LOG_ERROR([{event, failed_to_create_localnet_block}, {step, sample_chunk_with_proof}, {reason, io_lib:format(\"~p\", [Error])}]),\n\terror;\nmine_block3({RecallByte1, _Chunk1, PoA1}, State, MiningAddr, StorageModules) ->\n\tNoncesPerChunk = ar_block:get_nonces_per_chunk(?REPLICA_2_9_PACKING_DIFFICULTY),\n\tNonce = rand:uniform(NoncesPerChunk) - 1,\n\tSubChunk1 = get_sub_chunk(PoA1#poa.chunk, Nonce, ?REPLICA_2_9_PACKING_DIFFICULTY),\n\tStage1Data = #{\n\t\trecall_byte1 => RecallByte1,\n\t\tpoa1 => PoA1#poa{ chunk = SubChunk1 },\n\t\tnonce => Nonce\n\t},\n\tIsTwoChunk = rand:uniform(?POA1_DIFF_MULTIPLIER + 1) > 1,\n\tmine_block4(IsTwoChunk, Stage1Data, State, MiningAddr, StorageModules).\n\nmine_block4(false, Stage1Data, State, MiningAddr, _StorageModules) ->\n\tmine_block7(Stage1Data, one_chunk, State, MiningAddr);\nmine_block4(true, Stage1Data, State, MiningAddr, StorageModules) ->\n\tmine_block5(pick_random_storage_module(StorageModules), Stage1Data, State, MiningAddr, StorageModules).\n\nmine_block5(error, Stage1Data, State, MiningAddr, _StorageModules) ->\n\tmine_block7(Stage1Data, one_chunk, State, MiningAddr);\nmine_block5({StoreID2, Intervals2}, Stage1Data, State, MiningAddr, StorageModules) ->\n\tmine_block6(sample_chunk_with_proof(StoreID2, Intervals2, MiningAddr), Stage1Data, State, MiningAddr, StorageModules).\n\nmine_block6({error, _Error}, Stage1Data, State, MiningAddr, _StorageModules) ->\n\tmine_block7(Stage1Data, one_chunk, State, MiningAddr);\nmine_block6({RecallByte2, _Chunk2, PoA2}, Stage1Data, State, MiningAddr, _StorageModules) ->\n\t#{ nonce := Nonce } = Stage1Data,\n\tSubChunk2 = get_sub_chunk(PoA2#poa.chunk, Nonce, ?REPLICA_2_9_PACKING_DIFFICULTY),\n\tStage2Data = #{\n\t\trecall_byte2 => RecallByte2,\n\t\tpoa2 => PoA2#poa{ chunk = SubChunk2 }\n\t},\n\tmine_block7(Stage1Data, Stage2Data, State, MiningAddr).\n\nmine_block7(Stage1Data, Stage2Data, State, MiningAddr) ->\n\t[{_, TipNonceLimiterInfo}] = ets:lookup(node_state, nonce_limiter_info),\n\tPrevStepNumber = TipNonceLimiterInfo#nonce_limiter_info.global_step_number,\n\tSessionKey = ar_nonce_limiter:session_key(TipNonceLimiterInfo),\n\tcase ar_nonce_limiter:get_session(SessionKey) of\n\t\tnot_found ->\n\t\t\t?LOG_ERROR([\n\t\t\t\t{event, localnet_nonce_limiter_session_not_found},\n\t\t\t\t{session_key, ar_nonce_limiter:encode_session_key(SessionKey)},\n\t\t\t\t{prev_step_number, PrevStepNumber}\n\t\t\t]),\n\t\t\terror;\n\t\t#vdf_session{} = Session ->\n\t\t\tmine_block7_with_session(\n\t\t\t\tSession,\n\t\t\t\tSessionKey,\n\t\t\t\tPrevStepNumber,\n\t\t\t\tTipNonceLimiterInfo,\n\t\t\t\tStage1Data,\n\t\t\t\tStage2Data,\n\t\t\t\tState,\n\t\t\t\tMiningAddr\n\t\t\t)\n\tend.\n\nmine_block7_with_session(\n\tSession,\n\tSessionKey,\n\tPrevStepNumber,\n\tTipNonceLimiterInfo,\n\tStage1Data,\n\tStage2Data,\n\tState,\n\tMiningAddr\n) ->\n\t{NextSeed, StartIntervalNumber, NextVDFDifficulty} = SessionKey,\n\t{StepNumber, Output, Seed, Checkpoints, Steps} =\n\t\tcase Session#vdf_session.step_number == PrevStepNumber of\n\t\t\ttrue 
->\n\t\t\t\t{\n\t\t\t\t\tPrevStepNumber,\n\t\t\t\t\tTipNonceLimiterInfo#nonce_limiter_info.output,\n\t\t\t\t\tTipNonceLimiterInfo#nonce_limiter_info.seed,\n\t\t\t\t\tTipNonceLimiterInfo#nonce_limiter_info.last_step_checkpoints,\n\t\t\t\t\tTipNonceLimiterInfo#nonce_limiter_info.steps\n\t\t\t\t};\n\t\t\tfalse ->\n\t\t\t\tStepNumber0 = Session#vdf_session.step_number,\n\t\t\t\t{\n\t\t\t\t\tStepNumber0,\n\t\t\t\t\thd(Session#vdf_session.steps),\n\t\t\t\t\tSession#vdf_session.seed,\n\t\t\t\t\tmaps:get(StepNumber0, Session#vdf_session.step_checkpoints_map, []),\n\t\t\t\t\tSession#vdf_session.steps\n\t\t\t\t}\n\t\tend,\n\t#{ recall_byte1 := RecallByte1, poa1 := PoA1, nonce := Nonce } = Stage1Data,\n\tH0 = ar_block:compute_h0(\n\t\tOutput,\n\t\tar_node:get_partition_number(RecallByte1),\n\t\tSeed,\n\t\tMiningAddr,\n\t\t?REPLICA_2_9_PACKING_DIFFICULTY\n\t),\n\t{H1, _} = ar_block:compute_h1(H0, Nonce, PoA1#poa.chunk),\n\t{RecallByte2, PoA2, SolutionHash} =\n\t\tcase Stage2Data of\n\t\t\tone_chunk ->\n\t\t\t\t{undefined, #poa{}, H1};\n\t\t\t#{ recall_byte2 := RecallByte2_0, poa2 := PoA2_0 } ->\n\t\t\t\t{H2, _} = ar_block:compute_h2(H1, PoA2_0#poa.chunk, H0),\n\t\t\t\t{RecallByte2_0, PoA2_0, H2}\n\t\tend,\n\tSolution = #mining_solution{\n\t\tmining_address = MiningAddr,\n\t\tmerkle_rebase_threshold = State#state.merkle_rebase_threshold,\n\t\tnext_seed = NextSeed,\n\t\tnext_vdf_difficulty = NextVDFDifficulty,\n\t\tnonce = Nonce,\n\t\tnonce_limiter_output = Output,\n\t\tpartition_number = ar_node:get_partition_number(RecallByte1),\n\t\tpartition_upper_bound = ar_node:get_weave_size(),\n\t\tpoa1 = PoA1,\n\t\tpoa2 = PoA2,\n\t\trecall_byte1 = RecallByte1,\n\t\trecall_byte2 = RecallByte2,\n\t\tseed = Seed,\n\t\tsolution_hash = SolutionHash,\n\t\tstart_interval_number = StartIntervalNumber,\n\t\tstep_number = StepNumber,\n\t\tpacking_difficulty = ?REPLICA_2_9_PACKING_DIFFICULTY,\n\t\treplica_format = 1,\n\t\tlast_step_checkpoints = Checkpoints,\n\t\tsteps = Steps\n\t},\n\tar_node_worker:found_solution(miner, Solution, undefined, undefined),\n\tok.\n\npick_random_storage_module(StorageModules) ->\n\tModulesWithData =\n\t\tlists:filtermap(\n\t\t\tfun(Module) ->\n\t\t\t\tStoreID = ar_storage_module:id(Module),\n\t\t\t\tIntervals = ar_sync_record:get(ar_data_sync, StoreID),\n\t\t\t\tcase ar_intervals:is_empty(Intervals) of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tfalse;\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\t{true, {StoreID, Intervals}}\n\t\t\t\tend\n\t\t\tend,\n\t\t\tStorageModules\n\t\t),\n\tcase ModulesWithData of\n\t\t[] ->\n\t\t\terror;\n\t\t_ ->\n\t\t\tlists:nth(rand:uniform(length(ModulesWithData)), ModulesWithData)\n\tend.\n\nsample_chunk_with_proof(_StoreID, Intervals, MiningAddr) ->\n\tTotalSize = ar_intervals:sum(Intervals),\n\tRandomOffset = rand:uniform(TotalSize) - 1,\n\tList = ar_intervals:to_list(Intervals),\n\tAbsoluteOffset = find_offset_in_intervals(List, RandomOffset),\n\tRecallByte = (AbsoluteOffset div ?DATA_CHUNK_SIZE) * ?DATA_CHUNK_SIZE,\n\tPacking = {replica_2_9, MiningAddr},\n\tOptions = #{ pack => true, packing => Packing, origin => miner },\n\tcase ar_data_sync:get_chunk(RecallByte + 1, Options) of\n\t\t{ok, Proof} ->\n\t\t\t#{ chunk := PackedChunk, tx_path := TXPath, data_path := DataPath } = Proof,\n\t\t\tcase maps:get(unpacked_chunk, Proof, not_found) of\n\t\t\t\tnot_found ->\n\t\t\t\t\t#{ tx_root := TXRoot, absolute_end_offset := AbsoluteEndOffset,\n\t\t\t\t\t\tchunk_size := ChunkSize } = Proof,\n\t\t\t\t\tcase ar_packing_server:unpack(\n\t\t\t\t\t\tPacking, AbsoluteEndOffset, TXRoot, 
PackedChunk, ChunkSize\n\t\t\t\t\t) of\n\t\t\t\t\t\t{ok, UnpackedChunk} ->\n\t\t\t\t\t\t\tPaddedUnpackedChunk = ar_packing_server:pad_chunk(UnpackedChunk),\n\t\t\t\t\t\t\t{RecallByte, PackedChunk, #poa{\n\t\t\t\t\t\t\t\tchunk = PackedChunk,\n\t\t\t\t\t\t\t\tunpacked_chunk = PaddedUnpackedChunk,\n\t\t\t\t\t\t\t\tdata_path = DataPath,\n\t\t\t\t\t\t\t\ttx_path = TXPath\n\t\t\t\t\t\t\t}};\n\t\t\t\t\t\tError ->\n\t\t\t\t\t\t\t{error, Error}\n\t\t\t\t\tend;\n\t\t\t\tUnpackedChunk ->\n\t\t\t\t\tPaddedUnpackedChunk = ar_packing_server:pad_chunk(UnpackedChunk),\n\t\t\t\t\t{RecallByte, PackedChunk, #poa{\n\t\t\t\t\t\tchunk = PackedChunk,\n\t\t\t\t\t\tunpacked_chunk = PaddedUnpackedChunk,\n\t\t\t\t\t\tdata_path = DataPath,\n\t\t\t\t\t\ttx_path = TXPath\n\t\t\t\t\t}}\n\t\t\tend;\n\t\tError ->\n\t\t\t{error, Error}\n\tend.\n\nfind_offset_in_intervals([{End, Start} | Rest], Offset) ->\n\tLen = End - Start,\n\tcase Offset < Len of\n\t\ttrue ->\n\t\t\tStart + Offset;\n\t\tfalse ->\n\t\t\tfind_offset_in_intervals(Rest, Offset - Len)\n\tend.\n\nget_sub_chunk(Chunk, _Nonce, 0) when byte_size(Chunk) == ?DATA_CHUNK_SIZE ->\n\tChunk;\nget_sub_chunk(Chunk, Nonce, _PackingDifficulty) ->\n\tSubChunkSize = ?COMPOSITE_PACKING_SUB_CHUNK_SIZE,\n\tSubChunkStartOffset = SubChunkSize * Nonce,\n\tbinary:part(Chunk, SubChunkStartOffset, SubChunkSize).\n"
  },
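A small sketch, under stated assumptions, of the sub-chunk selection performed by mine_block3/mine_block6 above: a zero-based nonce in [0, NoncesPerChunk) is drawn and get_sub_chunk/3 slices the matching fixed-size window out of the packed chunk with binary:part/3. The sizes below are hypothetical stand-ins for ?DATA_CHUNK_SIZE and ?COMPOSITE_PACKING_SUB_CHUNK_SIZE; they only illustrate the offset arithmetic.

    %% Hypothetical sizes chosen so the arithmetic is easy to follow.
    SubChunkSize = 8,                          %% stand-in for ?COMPOSITE_PACKING_SUB_CHUNK_SIZE
    Chunk = crypto:strong_rand_bytes(64),      %% stand-in for a packed chunk
    NoncesPerChunk = byte_size(Chunk) div SubChunkSize,
    Nonce = rand:uniform(NoncesPerChunk) - 1,  %% same zero-based draw as mine_block3
    SubChunk = binary:part(Chunk, SubChunkSize * Nonce, SubChunkSize),
    io:format("nonce ~B selects bytes ~B..~B (~B-byte sub-chunk)~n",
        [Nonce, SubChunkSize * Nonce,
         SubChunkSize * Nonce + SubChunkSize - 1, byte_size(SubChunk)]).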
  {
    "path": "apps/arweave/src/ar_localnet_mining_sup.erl",
    "content": "-module(ar_localnet_mining_sup).\n\n-behaviour(supervisor).\n\n-export([start_link/0]).\n\n-export([init/1]).\n\n-include_lib(\"arweave/include/ar_sup.hrl\").\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nstart_link() ->\n\tsupervisor:start_link({local, ?MODULE}, ?MODULE, []).\n\n%% ===================================================================\n%% Supervisor callbacks.\n%% ===================================================================\n\ninit([]) ->\n\tChildren = [\n\t\t?CHILD(ar_localnet_mining_server, worker)\n\t],\n\t{ok, {{one_for_one, 5, 10}, Children}}.\n\n"
  },
  {
    "path": "apps/arweave/src/ar_logger.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2025 (c) Arweave\n%%% @author Arweave Team\n%%% @doc Arweave Logging Interface.\n%%%\n%%% This module is in charge of starting, stopping, enabling,\n%%% disabling logging Arweave handlers.\n%%%\n%%% == Logger Primary Configuration ==\n%%%\n%%% Primary logger configuration is defined in `config/sys.config',\n%%% with the help of `logger_level' key.\n%%%\n%%% see: https://www.erlang.org/doc/apps/kernel/logger_chapter\n%%%\n%%% == Logger Default Configuration ==\n%%%\n%%% The default configuration is used to log to the console, and it\n%%% should be not modified by default. To avoid modify this, this\n%%% value is defined outside of this module, in `config/sys.config'\n%%% via the help of `logger' key.  Here the configuration:\n%%%\n%%% ```\n%%% [{handler, default, logger_std_h, #{\n%%%     level => warning,\n%%%     formatter => {\n%%%       logger_formatter, #{\n%%%         legacy_header => false,\n%%%         single_line => true,\n%%%         chars_limit => 16256,\n%%%         max_size => 8128,\n%%%         depth => 256,\n%%%         template => [time,\" [\",level,\"] \",mfa,\":\",line,\" \",msg,\"\\n\"]\n%%%       }\n%%%     }\n%%%   }\n%%% }].\n%%% '''\n%%%\n%%% see: https://www.erlang.org/doc/apps/kernel/logger_chapter\n%%% @end\n%%% @see logger\n%%% @see logger_handler\n%%% @TODO integrate with arweave_config.\n%%% @TODO create domain for different part of the code, but all\n%%%       calls to logger inside arweave should be in the domain\n%%%       [arweave].\n%%% @TODO ensure primary logger configuration is set with the right\n%%%       values (level => all).\n%%%===================================================================\n-module(ar_logger).\n-compile(warnings_as_errors).\n-export([\n\tinit/1,\n\tis_started/1,\n\thandlers/0,\n\tstart_handlers/0,\n\tstart_handler/1,\n\tstarted_handlers/0,\n\tstop_handlers/0,\n\tstop_handler/1,\n\tgen_log/3\n]).\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n%%--------------------------------------------------------------------\n%% @doc legacy compatible interface. 
to be removed.\n%% @end\n%%--------------------------------------------------------------------\ninit(Config = #config{}) ->\n\tstart_handler(default),\n\tstart_handler(arweave_info),\n\tinit_debug(Config).\n\ninit_debug(#config{ debug = true }) ->\n\tstart_handler(arweave_debug);\ninit_debug(_) ->\n\tok.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ntemplate() ->\n\t[time,\" [\",level,\"] \",mfa,\":\",line,\" \",msg,\"\\n\"].\n\n%%--------------------------------------------------------------------\n%% @doc wrapper around `logger:get_handler_config/1'.\n%% @see logger:get_handler_config/1\n%% @end\n%%--------------------------------------------------------------------\nis_started(Handler) ->\n\tcase logger:get_handler_config(Handler) of\n\t\t{ok, _} -> true;\n\t\t_ -> false\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc defined loggers.\n%% @end\n%%--------------------------------------------------------------------\nhandlers() -> #{\n\t% Log every info message.\n\t% This handler can be configured on demand; all messages\n\t% at level info or above are logged.\n\tarweave_info => #{\n\t\tlevel => info,\n\t\tconfig => #{\n\t\t\ttype => file,\n\t\t\tfile => logfile_path(#{\n\t\t\t\tprefix => \"arweave\",\n\t\t\t\tlevel => info\n\t\t\t}),\n\t\t\tcompress_on_rotate => arweave_config:get(\n\t\t\t\t[logging,compress_on_rotate],\n\t\t\t\tfalse\n\t\t\t),\n\t\t\tmax_no_files => arweave_config:get(\n\t\t\t\t[logging,max_no_files],\n\t\t\t\t10\n\t\t\t),\n\t\t\tmax_no_bytes => arweave_config:get(\n\t\t\t\t[logging,max_no_bytes],\n\t\t\t\t51_418_800\n\t\t\t),\n\t\t\tmodes => [raw, append],\n\t\t\tsync_mode_qlen => arweave_config:get(\n\t\t\t\t[logging,sync_mode_qlen],\n\t\t\t\t10\n\t\t\t),\n\t\t\tdrop_mode_qlen => arweave_config:get(\n\t\t\t\t[logging,drop_mode_qlen],\n\t\t\t\t200\n\t\t\t),\n\t\t\tflush_qlen => arweave_config:get(\n\t\t\t\t[logging,flush_qlen],\n\t\t\t\t1000\n\t\t\t),\n\t\t\tburst_limit_enable => arweave_config:get(\n\t\t\t\t[logging,burst_limit_enable],\n\t\t\t\ttrue\n\t\t\t),\n\t\t\tburst_limit_max_count => arweave_config:get(\n\t\t\t\t[logging,burst_limit_max_count],\n\t\t\t\t500\n\t\t\t),\n\t\t\tburst_limit_window_time => arweave_config:get(\n\t\t\t\t[logging,burst_limit_window_time],\n\t\t\t\t1000\n\t\t\t),\n\t\t\toverload_kill_enable => arweave_config:get(\n\t\t\t\t[logging,overload_kill_enable],\n\t\t\t\ttrue\n\t\t\t),\n\t\t\toverload_kill_qlen => arweave_config:get(\n\t\t\t\t[logging,overload_kill_qlen],\n\t\t\t\t20_000\n\t\t\t),\n\t\t\toverload_kill_mem_size => arweave_config:get(\n\t\t\t\t[logging,overload_kill_mem_size],\n\t\t\t\t3_000_000\n\t\t\t),\n\t\t\toverload_kill_restart_after => arweave_config:get(\n\t\t\t\t[logging,overload_kill_restart_after],\n\t\t\t\t5000\n\t\t\t)\n\t\t},\n\t\tformatter => {\n\t\t\tlogger_formatter, #{\n\t\t\t\tchars_limit => arweave_config:get(\n\t\t\t\t\t[logging,formatter,chars_limit],\n\t\t\t\t\t16256\n\t\t\t\t),\n\t\t\t\tdepth => arweave_config:get(\n\t\t\t\t\t[logging,formatter,depth],\n\t\t\t\t\t256\n\t\t\t\t),\n\t\t\t\tlegacy_header => false,\n\t\t\t\tmax_size => arweave_config:get(\n\t\t\t\t\t[logging,formatter,max_size],\n\t\t\t\t\t8128\n\t\t\t\t),\n\t\t\t\tsingle_line => true,\n\t\t\t\ttemplate => arweave_config:get(\n\t\t\t\t\t[logging,formatter,template],\n\t\t\t\t\ttemplate()\n\t\t\t\t),\n\t\t\t\ttime_offset => \"Z\"\n\t\t\t}\n\t\t},\n\t\tfilter_default => 
log,\n\t\tfilters => [\n\t\t\t{n_wildcard, {fun logger_filters:level/2, {stop, lt, info}}},\n\t\t\t{n_http, {fun logger_filters:domain/2, {stop, sub, [arweave,http]}}}\n\t\t]\n\t},\n\n\t% log every debug message.\n\t% Only debug messages are being logged.\n\tarweave_debug => #{\n\t\tlevel => debug,\n\t\tconfig => #{\n\t\t\ttype => file,\n\t\t\tfile => logfile_path(#{\n\t\t\t\tprefix => \"arweave\",\n\t\t\t\tlevel => debug\n\t\t\t}),\n\t\t\tcompress_on_rotate => arweave_config:get(\n\t\t\t\t[logging,handlers,debug,compress_on_rotate],\n\t\t\t\tfalse\n\t\t\t),\n\t\t\tmax_no_files => arweave_config:get(\n\t\t\t\t[logging,handlers,debug,max_no_files],\n\t\t\t\t10\n\t\t\t),\n\t\t\tmax_no_bytes => arweave_config:get(\n\t\t\t\t[logging,handlers,debug,max_no_bytes],\n\t\t\t\t51_418_800\n\t\t\t),\n\t\t\tmodes => [raw, append],\n\t\t\tsync_mode_qlen => arweave_config:get(\n\t\t\t\t[logging,handlers,debug,sync_mode_qlen],\n\t\t\t\t10\n\t\t\t),\n\t\t\tdrop_mode_qlen => arweave_config:get(\n\t\t\t\t[logging,handlers,debug,drop_mode_qlen],\n\t\t\t\t200\n\t\t\t),\n\t\t\tflush_qlen => arweave_config:get(\n\t\t\t\t[logging,handlers,debug,flush_qlen],\n\t\t\t\t1000\n\t\t\t),\n\t\t\tburst_limit_enable => arweave_config:get(\n\t\t\t\t[logging,handlers,debug,burst_limit_enable],\n\t\t\t\ttrue\n\t\t\t),\n\t\t\tburst_limit_max_count => arweave_config:get(\n\t\t\t\t[logging,handlers,debug,burst_limit_max_count],\n\t\t\t\t500\n\t\t\t),\n\t\t\tburst_limit_window_time => arweave_config:get(\n\t\t\t\t[logging,handlers,debug,burst_limit_window_time],\n\t\t\t\t1000\n\t\t\t),\n\t\t\toverload_kill_enable => arweave_config:get(\n\t\t\t\t[logging,handlers,debug,overload_kill_enable],\n\t\t\t\ttrue\n\t\t\t),\n\t\t\toverload_kill_qlen => arweave_config:get(\n\t\t\t\t[logging,handlers,debug,overload_kill_qlen],\n\t\t\t\t20_000\n\t\t\t),\n\t\t\toverload_kill_mem_size => arweave_config:get(\n\t\t\t\t[logging,handlers,debug,overload_kill_mem_size],\n\t\t\t\t3_000_000\n\t\t\t),\n\t\t\toverload_kill_restart_after => arweave_config:get(\n\t\t\t\t[logging,handlers,debug,overload_kill_restart_after],\n\t\t\t\t5000\n\t\t\t)\n\t\t},\n\t\tformatter => {\n\t\t\tlogger_formatter, #{\n\t\t\t\tchars_limit => arweave_config:get(\n\t\t\t\t\t[logging,handlers,debug,formatter,chars_limit],\n\t\t\t\t\t16256\n\t\t\t\t),\n\t\t\t\tdepth => arweave_config:get(\n\t\t\t\t\t[logging,handlers,debug,formatter,depth],\n\t\t\t\t\t256\n\t\t\t\t),\n\t\t\t\tlegacy_header => false,\n\t\t\t\tmax_size => arweave_config:get(\n\t\t\t\t\t[logging,handlers,debug,formatter,max_size],\n\t\t\t\t\t8128\n\t\t\t\t),\n\t\t\t\tsingle_line => true,\n\t\t\t\ttemplate => arweave_config:get(\n\t\t\t\t\t[logging,handlers,debug,formatter,template],\n\t\t\t\t\ttemplate()\n\t\t\t\t),\n\t\t\t\ttime_offset => \"Z\"\n\t\t\t}\n\t\t},\n\t\tfilter_default => log,\n\t\tfilters => [\n\t\t\t{n_wildcard, {fun logger_filters:level/2, {stop, lt, debug}}},\n\t\t\t{n_http, {fun logger_filters:domain/2, {stop, sub, [arweave,http]}}}\n\t\t]\n\t},\n\n\t% this handler will log only log message containing the domain\n\t% [arweave,http,api] in info level.\n\tarweave_http_api => #{\n\t\tlevel => info,\n\t\tconfig => #{\n\t\t\ttype => file,\n\t\t\tfile => logfile_path(#{\n\t\t\t\tprefix => \"arweave-http-api\",\n\t\t\t\tlevel => debug\n\t\t\t}),\n\t\t\tcompress_on_rotate => arweave_config:get(\n\t\t\t\t[logging,handlers,http,api,compress_on_rotate],\n\t\t\t\tfalse\n\t\t\t),\n\t\t\tmax_no_files => 
arweave_config:get(\n\t\t\t\t[logging,handlers,http,api,max_no_files],\n\t\t\t\t10\n\t\t\t),\n\t\t\tmax_no_bytes => arweave_config:get(\n\t\t\t\t[logging,handlers,http,api,max_no_bytes],\n\t\t\t\t51_418_800\n\t\t\t),\n\t\t\tmodes => [raw, append],\n\t\t\tsync_mode_qlen => arweave_config:get(\n\t\t\t\t[logging,handlers,http,api,sync_mode_qlen],\n\t\t\t\t10\n\t\t\t),\n\t\t\tdrop_mode_qlen => arweave_config:get(\n\t\t\t\t[logging,handlers,http,api,drop_mode_qlen],\n\t\t\t\t200\n\t\t\t),\n\t\t\tflush_qlen => arweave_config:get(\n\t\t\t\t[logging,handlers,http,api,flush_qlen],\n\t\t\t\t1000\n\t\t\t),\n\t\t\tburst_limit_enable => arweave_config:get(\n\t\t\t\t[logging,handlers,http,api,burst_limit_enable],\n\t\t\t\ttrue\n\t\t\t),\n\t\t\tburst_limit_max_count => arweave_config:get(\n\t\t\t\t[logging,handlers,http,api,burst_limit_max_count],\n\t\t\t\t500\n\t\t\t),\n\t\t\tburst_limit_window_time => arweave_config:get(\n\t\t\t\t[logging,handlers,http,api,burst_limit_window_time],\n\t\t\t\t1000\n\t\t\t),\n\t\t\toverload_kill_enable => arweave_config:get(\n\t\t\t\t[logging,handlers,http,api,overload_kill_enable],\n\t\t\t\ttrue\n\t\t\t),\n\t\t\toverload_kill_qlen => arweave_config:get(\n\t\t\t\t[logging,handlers,http,api,overload_kill_qlen],\n\t\t\t\t20_000\n\t\t\t),\n\t\t\toverload_kill_mem_size => arweave_config:get(\n\t\t\t\t[logging,handlers,http,api,overload_kill_mem_size],\n\t\t\t\t3_000_000\n\t\t\t),\n\t\t\toverload_kill_restart_after => arweave_config:get(\n\t\t\t\t[logging,handlers,http,api,overload_kill_restart_after],\n\t\t\t\t5000\n\t\t\t)\n\t\t},\n\t\tformatter => {\n\t\t\tlogger_formatter, #{\n\t\t\t\tlegacy_header => false,\n\t\t\t\tsingle_line => true,\n\t\t\t\tchars_limit => arweave_config:get(\n\t\t\t\t\t[logging,handlers,http,api,formatter,chars_limit],\n\t\t\t\t\t16256\n\t\t\t\t),\n\t\t\t\tmax_size => arweave_config:get(\n\t\t\t\t\t[logging,handlers,http,api,formatter,max_size],\n\t\t\t\t\t8128\n\t\t\t\t),\n\t\t\t\tdepth => arweave_config:get(\n\t\t\t\t\t[logging,handlers,http,api,formatter,depth],\n\t\t\t\t\t256\n\t\t\t\t),\n\t\t\t\ttemplate => [\n\t\t\t\t\ttime, \" \",\n\t\t\t\t\t\"ip=\", peer_ip, \" \",\n\t\t\t\t\t\"port=\", peer_port, \" \",\n\t\t\t\t\t\"version=\", version, \" \",\n\t\t\t\t\t\"method=\", method, \" \",\n\t\t\t\t\t\"code=\", code, \" \",\n\t\t\t\t\t\"path=\", path, \" \",\n\t\t\t\t\t\"body_length=\", body_length, \" \",\n\t\t\t\t\t\"duration=\", duration, \" \",\n\t\t\t\t\t\"msg=\", msg, \"\\n\"\n\t\t\t\t],\n\t\t\t\ttime_offset => \"Z\"\n\t\t\t}\n\t\t},\n\t\tfilter_default => stop,\n\t\tfilters => [\n\t\t\t{info, {fun logger_filters:level/2, {stop, lt, info}}},\n\t\t\t{http, {fun logger_filters:domain/2, {log, sub, [arweave,http,api]}}}\n\t\t]\n\t}\n}.\n\n%%--------------------------------------------------------------------\n%% @doc start all defined loggers.\n%% @end\n%%--------------------------------------------------------------------\nstart_handlers() ->\n\tHandlers = maps:keys(handlers()),\n\t[ start_handler(Handler) || Handler <- Handlers ].\n\n%%--------------------------------------------------------------------\n%% @doc start one defined logger.\n%% @end\n%%--------------------------------------------------------------------\nstart_handler(Handler) ->\n\tcase maps:get(Handler, handlers(), undefined) of\n\t\tundefined ->\n\t\t\t{error, not_found};\n\t\tConfig ->\n\t\t\tstart_handler(Handler, Config)\n\tend.\n\nstart_handler(Handler, Config) ->\n\tcase is_started(Handler) of\n\t\ttrue ->\n\t\t\tok;\n\t\tfalse ->\n\t\t\tlogger:add_handler(Handler, 
logger_std_h, Config)\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc stop all loggers set.\n%% @end\n%%--------------------------------------------------------------------\nstop_handlers() ->\n\tHandlers = maps:keys(handlers()),\n\t[ stop_handler(Handler) || Handler <- Handlers ].\n\n%%--------------------------------------------------------------------\n%% @doc stop logger.\n%% @end\n%%--------------------------------------------------------------------\nstop_handler(Handler) -> logger:remove_handler(Handler).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc list started handlers.\n%% @end\n%%--------------------------------------------------------------------\nstarted_handlers() ->\n\tHandlersIds = maps:keys(handlers()),\n\t#{ handlers := HandlersStarted } = logger:get_config(),\n\t[ Id ||\n\t\t#{ id := Id } <- HandlersStarted,\n\t\tId2 <- HandlersIds,\n\t\tId =:= Id2\n\t].\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc function only used to write to logs during test.\n%% @end\n%%--------------------------------------------------------------------\ngen_log(Format, FormatMsg, Meta) ->\n\t?LOG_EMERGENCY(Format, FormatMsg, Meta),\n\t?LOG_ALERT(Format, FormatMsg, Meta),\n\t?LOG_CRITICAL(Format, FormatMsg, Meta),\n\t?LOG_ERROR(Format, FormatMsg, Meta),\n\t?LOG_WARNING(Format, FormatMsg, Meta),\n\t?LOG_NOTICE(Format, FormatMsg, Meta),\n\t?LOG_INFO(Format, FormatMsg, Meta),\n\t?LOG_DEBUG(Format, FormatMsg, Meta).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc returns a log filename.\n%% @end\n%%--------------------------------------------------------------------\nlogfile_path(Opts) ->\n\t% TODO: if arweave_config is not set, even with a default\n\t% value set, this part of the code crashes.\n\tLogDir = arweave_config:get([logging,path], \"./logs\"),\n\tPrefix = maps:get(prefix, Opts),\n\tLevel = maps:get(level, Opts),\n\tNodeName = erlang:node(),\n\tRawFilename = lists:join(\"-\", [Prefix, NodeName, Level]),\n\tFilename = filename:flatten(RawFilename) ++ \".log\",\n\tfilename:flatten(filename:join(LogDir, Filename)).\n"
  },
  {
    "path": "apps/arweave/src/ar_mempool.erl",
    "content": "-module(ar_mempool).\n\n-include(\"ar.hrl\").\n\n-export([reset/0, load_from_disk/0, add_tx/2, drop_txs/1, drop_txs/3,\n\t\tget_map/0, get_all_txids/0, take_chunk/2, get_tx/1, is_known_tx/1, has_tx/1,\n\t\tget_priority_set/0, get_last_tx_map/0, get_origin_tx_map/0,\n\t\tget_propagation_queue/0, del_from_propagation_queue/2]).\n\nreset() ->\n\tets:insert(node_state, [\n\t\t{mempool_size, {0, 0}},\n\t\t{tx_priority_set, gb_sets:new()},\n\t\t{tx_propagation_queue, gb_sets:new()},\n\t\t{last_tx_map, maps:new()},\n\t\t{origin_tx_map, maps:new()},\n\t\t{origin_spent_total_map, maps:new()},\n\t\t{origin_spent_total_denomination, 0}\n\t]).\n\nload_from_disk() ->\n\tcase ar_storage:read_term(mempool) of\n\t\t{ok, {SerializedTXs, _MempoolSize}} ->\n\t\t\tTXs = maps:map(fun(_, {TX, St}) -> {deserialize_tx(TX), St} end, SerializedTXs),\n\n\t\tMaxDenomination = maps:fold(\n\t\t\tfun(_TXID, {TX, _Status}, Acc) ->\n\t\t\t\tmax(Acc, TX#tx.denomination)\n\t\t\tend,\n\t\t\t0,\n\t\t\tTXs\n\t\t),\n\n\t\t{MempoolSize2, PrioritySet2, PropagationQueue2, LastTXMap2, OriginTXMap2,\n\t\t\t\tOriginSpentTotalMap2} =\n\t\t\tmaps:fold(\n\t\t\t\tfun(TXID, {TX, Status}, {MempoolSize, PrioritySet, PropagationQueue,\n\t\t\t\t\t\tLastTXMap, OriginTXMap, OriginSpentTotalMap}) ->\n\t\t\t\t\tMetadata = {_, _, Timestamp} = init_tx_metadata(TX, Status),\n\t\t\t\t\tets:insert(node_state, {{tx, TXID}, Metadata}),\n\t\t\t\t\tets:insert(tx_prefixes, {ar_node_worker:tx_id_prefix(TXID), TXID}),\n\t\t\t\t\tQ = case Status of\n\t\t\t\t\t\tready_for_mining ->\n\t\t\t\t\t\t\tPropagationQueue;\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\tadd_to_propagation_queue(PropagationQueue, TX, Timestamp)\n\t\t\t\t\tend,\n\t\t\t\t\t{\n\t\t\t\t\t\tincrease_mempool_size(MempoolSize, TX),\n\t\t\t\t\t\tadd_to_priority_set(PrioritySet, TX, Status, Timestamp),\n\t\t\t\t\t\tQ,\n\t\t\t\t\t\tadd_to_last_tx_map(LastTXMap, TX),\n\t\t\t\t\t\tadd_to_origin_tx_map(OriginTXMap, TX),\n\t\t\t\t\t\tadd_to_origin_spent_total_map(OriginSpentTotalMap, TX, MaxDenomination)\n\t\t\t\t\t}\n\t\t\t\tend,\n\t\t\t\t{\n\t\t\t\t\t{0, 0},\n\t\t\t\t\tgb_sets:new(),\n\t\t\t\t\tgb_sets:new(),\n\t\t\t\t\tmaps:new(),\n\t\t\t\t\tmaps:new(),\n\t\t\t\t\tmaps:new()\n\t\t\t\t},\n\t\t\t\tTXs\n\t\t\t),\n\n\t\tets:insert(node_state, [\n\t\t\t{mempool_size, MempoolSize2},\n\t\t\t{tx_priority_set, PrioritySet2},\n\t\t\t{tx_propagation_queue, PropagationQueue2},\n\t\t\t{last_tx_map, LastTXMap2},\n\t\t\t{origin_tx_map, OriginTXMap2},\n\t\t\t{origin_spent_total_map, OriginSpentTotalMap2},\n\t\t\t{origin_spent_total_denomination, MaxDenomination}\n\t\t]);\n\t\tnot_found ->\n\t\t\treset();\n\t\t{error, Error} ->\n\t\t\t?LOG_ERROR([{event, failed_to_load_mempool}, {reason, Error}]),\n\t\t\treset()\n\tend.\n\nadd_tx(TX, Status) ->\n\tprometheus_histogram:observe_duration(ar_mempool_add_tx_duration_milliseconds,\n\t\tfun() ->\n\t\t\tadd_tx2(TX, Status)\n\t\tend).\n\nadd_tx2(#tx{ id = TXID } = TX, Status) ->\n\tDenomination = max(get_current_denomination(), get_origin_spent_total_denomination()),\n\tCheckRequiresUpdate =\n\t\tcase get_tx_metadata(TXID) of\n\t\t\tnot_found ->\n\t\t\t\t{_, _, Timestamp} = init_tx_metadata(TX, Status),\n\t\t\t\tets:insert(tx_prefixes, {ar_node_worker:tx_id_prefix(TXID), TXID}),\n\t\t\t\tOriginSpentTotalMap = get_redenominated_origin_spent_total_map(Denomination),\n\t\t\t\t{\n\t\t\t\t\t{TX, Status, Timestamp},\n\t\t\t\t\tincrease_mempool_size(get_mempool_size(), TX),\n\t\t\t\t\tadd_to_priority_set(get_priority_set(), TX, Status, 
Timestamp),\n\t\t\t\t\tadd_to_propagation_queue(get_propagation_queue(), TX, Timestamp),\n\t\t\t\t\tadd_to_last_tx_map(get_last_tx_map(), TX),\n\t\t\t\t\tadd_to_origin_tx_map(get_origin_tx_map(), TX),\n\t\t\t\t\tadd_to_origin_spent_total_map(OriginSpentTotalMap, TX, Denomination)\n\t\t\t\t};\n\t\t\t{KnownTX, PrevStatus, Timestamp} ->\n\t\t\t\t{TX2, IsDataUpdatedRequired} = assert_same_tx(TX, KnownTX),\n\t\t\t\tcase {Status == PrevStatus, IsDataUpdatedRequired} of\n\t\t\t\t\t{true, false} ->\n\t\t\t\t\t\tdoes_not_require_update;\n\t\t\t\t\t_ ->\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t{TX2, Status, Timestamp},\n\t\t\t\t\t\t\tget_mempool_size(),\n\t\t\t\t\t\t\tadd_to_priority_set(get_priority_set(), TX2, PrevStatus,\n\t\t\t\t\t\t\t\t\tStatus, Timestamp),\n\t\t\t\t\t\t\tget_propagation_queue(),\n\t\t\t\t\t\t\tget_last_tx_map(),\n\t\t\t\t\t\t\tget_origin_tx_map(),\n\t\t\t\t\t\t\tget_redenominated_origin_spent_total_map(Denomination)\n\t\t\t\t\t\t}\n\t\t\t\tend\n\t\tend,\n\tcase CheckRequiresUpdate of\n\t\tdoes_not_require_update ->\n\t\t\tok;\n\t\t{Metadata, MempoolSize, PrioritySet, PropagationQueue, LastTXMap,\n\t\t\t\tOriginTXMap, OriginSpentTotalMap2} ->\n\t\t\t%% Insert all data at the same time to ensure atomicity\n\t\t\tets:insert(node_state, [\n\t\t\t\t{{tx, TXID}, Metadata},\n\t\t\t\t{mempool_size, MempoolSize},\n\t\t\t\t{tx_priority_set, PrioritySet},\n\t\t\t\t{tx_propagation_queue, PropagationQueue},\n\t\t\t\t{last_tx_map, LastTXMap},\n\t\t\t\t{origin_tx_map, OriginTXMap},\n\t\t\t\t{origin_spent_total_map, OriginSpentTotalMap2},\n\t\t\t\t{origin_spent_total_denomination, Denomination}\n\t\t\t]),\n\t\t\tcase ar_node:is_joined() of\n\t\t\t\ttrue ->\n\t\t\t\t\t%% 1. Drop unconfirmable transactions:\n\t\t\t\t\t%%    - those with clashing last_tx\n\t\t\t\t\t%%    - those which overspend an account\n\t\t\t\t\t%% 2. 
If the mempool is too large, drop low priority transactions\n\t\t\t\t\t%%    until the mempool is small enough\n\t\t\t\t\t%% To limit revalidation work, all of these checks assume\n\t\t\t\t\t%% every TX in the mempool has previously been validated.\n\t\t\t\t\tdrop_txs(find_clashing_txs(TX)),\n\t\t\t\t\tdrop_txs(find_overspent_txs(TX, Denomination)),\n\t\t\t\t\tdrop_txs(find_low_priority_txs());\n\t\t\t\tfalse ->\n\t\t\t\t\tnoop\n\t\t\tend\n\tend.\n\nassert_same_tx(#tx{ format = 1 } = TX, #tx{ format = 1 } = TX) ->\n\t{TX, false};\nassert_same_tx(#tx{ format = 2, data = Data } = TX, #tx{ format = 2 } = TX2) ->\n\ttrue = TX#tx{ data = <<>> } == TX2#tx{ data = <<>> },\n\tcase byte_size(Data) == 0 of\n\t\ttrue ->\n\t\t\t{TX2, false};\n\t\tfalse ->\n\t\t\t{TX, true}\n\tend.\n\ndrop_txs(DroppedTXs) ->\n\tdrop_txs(DroppedTXs, true, true).\ndrop_txs([], _RemoveTXPrefixes, _DropFromDiskPool) ->\n\tok;\ndrop_txs(DroppedTXs, RemoveTXPrefixes, DropFromDiskPool) ->\n\tprometheus_histogram:observe_duration(drop_txs_duration_milliseconds,\n\t\tfun() ->\n\t\t\tdrop_txs2(DroppedTXs, RemoveTXPrefixes, DropFromDiskPool)\n\t\tend).\n\ndrop_txs2(DroppedTXs, RemoveTXPrefixes, DropFromDiskPool) ->\n\tDenomination = max(get_current_denomination(), get_origin_spent_total_denomination()),\n\tOriginSpentTotalMap0 = get_redenominated_origin_spent_total_map(Denomination),\n\t{MempoolSize2, PrioritySet2, PropagationQueue2, LastTXMap2, OriginTXMap2,\n\t\t\tOriginSpentTotalMap2} =\n\t\tlists:foldl(\n\t\t\tfun(TX, {MempoolSize, PrioritySet, PropagationQueue, LastTXMap,\n\t\t\t\t\tOriginTXMap, OriginSpentTotalMap}) ->\n\t\t\t\tTXID = TX#tx.id,\n\t\t\t\tcase get_tx_metadata(TXID) of\n\t\t\t\t\tnot_found ->\n\t\t\t\t\t\t{MempoolSize, PrioritySet, PropagationQueue, LastTXMap,\n\t\t\t\t\t\t\t\tOriginTXMap, OriginSpentTotalMap};\n\t\t\t\t\t{_, Status, Timestamp} ->\n\t\t\t\t\t\tets:delete(node_state, {tx, TXID}),\n\t\t\t\t\t\tcase RemoveTXPrefixes of\n\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\tets:delete_object(tx_prefixes,\n\t\t\t\t\t\t\t\t\t\t{ar_node_worker:tx_id_prefix(TXID), TXID});\n\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\tok\n\t\t\t\t\t\tend,\n\t\t\t\t\t\tcase DropFromDiskPool of\n\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\tmay_be_drop_from_disk_pool(TX);\n\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\tok\n\t\t\t\t\t\tend,\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tdecrease_mempool_size(MempoolSize, TX),\n\t\t\t\t\t\t\tdel_from_priority_set(PrioritySet, TX, Status, Timestamp),\n\t\t\t\t\t\t\tdel_from_propagation_queue(PropagationQueue, TX, Timestamp),\n\t\t\t\t\t\t\tdel_from_last_tx_map(LastTXMap, TX),\n\t\t\t\t\t\t\tdel_from_origin_tx_map(OriginTXMap, TX),\n\t\t\t\t\t\t\tdel_from_origin_spent_total_map(OriginSpentTotalMap, TX, Denomination)\n\t\t\t\t\t\t}\n\t\t\t\tend\n\t\t\tend,\n\t\t\t{\n\t\t\t\tget_mempool_size(),\n\t\t\t\tget_priority_set(),\n\t\t\t\tget_propagation_queue(),\n\t\t\t\tget_last_tx_map(),\n\t\t\t\tget_origin_tx_map(),\n\t\t\t\tOriginSpentTotalMap0\n\t\t\t},\n\t\t\tDroppedTXs\n\t\t),\n\tets:insert(node_state, [\n\t\t{mempool_size, MempoolSize2},\n\t\t{tx_priority_set, PrioritySet2},\n\t\t{tx_propagation_queue, PropagationQueue2},\n\t\t{last_tx_map, LastTXMap2},\n\t\t{origin_tx_map, OriginTXMap2},\n\t\t{origin_spent_total_map, OriginSpentTotalMap2},\n\t\t{origin_spent_total_denomination, Denomination}\n\t]).\n\nget_map() ->\n\tgb_sets:fold(\n\t\tfun({_Utility, TXID, Status}, Acc) ->\n\t\t\tAcc#{TXID => Status}\n\t\tend,\n\t\t#{},\n\t\tget_priority_set()\n\t).\n\nget_all_txids() ->\n\tgb_sets:fold(\n\t\tfun({_Utility, TXID, _Status}, 
Acc) ->\n\t\t\t[TXID | Acc]\n\t\tend,\n\t\t[],\n\t\tget_priority_set()\n\t).\n\ntake_chunk(Mempool, Size) ->\n\ttake_chunk(Mempool, Size, []).\ntake_chunk(Mempool, 0, Taken) ->\n\t{ok, Taken, Mempool};\ntake_chunk([], _Size, Taken) ->\n\t{ok, Taken, []};\ntake_chunk(Mempool, Size, Taken) ->\n\tTXID = lists:last(Mempool),\n\tRemainingMempool = lists:droplast(Mempool),\n\tcase get_tx(TXID) of\n\t\tnot_found ->\n\t\t\ttake_chunk(RemainingMempool, Size, Taken);\n\t\tTX ->\n\t\t\ttake_chunk(RemainingMempool, Size - 1, [TX | Taken])\n\tend.\n\nget_tx_metadata(TXID) ->\n\tcase ets:lookup(node_state, {tx, TXID}) of\n\t\t[{_, {TX, Status, Timestamp}}] ->\n\t\t\t{TX, Status, Timestamp};\n\t\t_ ->\n\t\t\tnot_found\n\tend.\n\nget_tx(TXID) ->\n\tcase get_tx_metadata(TXID) of\n\t\tnot_found ->\n\t\t\tnot_found;\n\t\t{TX, _Status, _Timestamp} ->\n\t\t\tTX\n\tend.\n\nis_known_tx(TXID) ->\n\tcase ar_ignore_registry:member(TXID) of\n\t\ttrue ->\n\t\t\ttrue;\n\t\tfalse ->\n\t\t\thas_tx(TXID)\n\tend.\n\nhas_tx(TXID) ->\n\tets:member(node_state, {tx, TXID}).\n\nget_priority_set() ->\n\tcase ets:lookup(node_state, tx_priority_set) of\n\t\t[{tx_priority_set, Set}] ->\n\t\t\tSet;\n\t\t_ ->\n\t\t\tgb_sets:new()\n\tend.\n\nget_propagation_queue() ->\n\tcase ets:lookup(node_state, tx_propagation_queue) of\n\t\t[{tx_propagation_queue, Q}] ->\n\t\t\tQ;\n\t\t_ ->\n\t\t\tgb_sets:new()\n\tend.\n\nget_last_tx_map() ->\n\tcase ets:lookup(node_state, last_tx_map) of\n\t\t[{last_tx_map, Map}] ->\n\t\t\tMap;\n\t\t_ ->\n\t\t\tmaps:new()\n\tend.\n\nget_origin_tx_map() ->\n\tcase ets:lookup(node_state, origin_tx_map) of\n\t\t[{origin_tx_map, Map}] ->\n\t\t\tMap;\n\t\t_ ->\n\t\t\tmaps:new()\n\tend.\n\n\ndel_from_propagation_queue(Priority, TXID) ->\n\tets:insert(node_state, {\n\t\ttx_propagation_queue,\n\t\tdel_from_propagation_queue(ar_mempool:get_propagation_queue(), Priority, TXID)\n\t}).\ndel_from_propagation_queue(PropagationQueue, TX = #tx{}, Timestamp) ->\n\tPriority = {ar_tx:utility(TX), Timestamp},\n\tdel_from_propagation_queue(PropagationQueue, Priority, TX#tx.id);\ndel_from_propagation_queue(PropagationQueue, Priority, TXID) when is_bitstring(TXID) ->\n\tprometheus_histogram:observe_duration(del_from_propagation_queue_duration_milliseconds,\n\t\tfun() ->\n\t\t\tgb_sets:del_element({Priority, TXID}, PropagationQueue)\n\t\tend).\n\n%% ------------------------------------------------------------------\n%% Private Functions\n%% ------------------------------------------------------------------\n\nget_mempool_size() ->\n\tcase ets:lookup(node_state, mempool_size) of\n\t\t[{mempool_size, MempoolSize}] ->\n\t\t\tMempoolSize;\n\t\t_ ->\n\t\t\t{0, 0}\n\tend.\n\ninit_tx_metadata(TX, Status) ->\n\t{TX, Status, -os:system_time(microsecond)}.\n\nadd_to_priority_set(PrioritySet, TX, Status, Timestamp) ->\n\tPriority = {ar_tx:utility(TX), Timestamp},\n\tgb_sets:add_element({Priority, TX#tx.id, Status}, PrioritySet).\n\nadd_to_priority_set(PrioritySet, TX, PrevStatus, Status, Timestamp) ->\n\tPriority = {ar_tx:utility(TX), Timestamp},\n\tgb_sets:add_element({Priority, TX#tx.id, Status},\n\t\tgb_sets:del_element({Priority, TX#tx.id, PrevStatus},\n\t\t\tPrioritySet\n\t\t)\n\t).\n\ndel_from_priority_set(PrioritySet, TX, Status, Timestamp) ->\n\tPriority = {ar_tx:utility(TX), Timestamp},\n\tgb_sets:del_element({Priority, TX#tx.id, Status}, PrioritySet).\n\nadd_to_propagation_queue(PropagationQueue, TX, Timestamp) ->\n\tPriority = {ar_tx:utility(TX), Timestamp},\n\tgb_sets:add_element({Priority, TX#tx.id}, PropagationQueue).\n\n%% @doc 
Store a map of last_tx TXIDs to a priority set of TXs that use\n%% that last_tx. We actually store the TXIDs of the TXs to avoid bloating\n%% the ets table. The trade off is that we have to do a TXID to TX lookup\n%% when resolving last_tx clashes.\nadd_to_last_tx_map(LastTXMap, TX) ->\n\tElement = unconfirmed_tx(TX),\n\tSet2 = case maps:get(TX#tx.last_tx, LastTXMap, not_found) of\n\t\tnot_found ->\n\t\t\tgb_sets:from_list([Element]);\n\t\tSet ->\n\t\t\tgb_sets:add_element(Element, Set)\n\tend,\n\tmaps:put(TX#tx.last_tx, Set2, LastTXMap).\n\ndel_from_last_tx_map(LastTXMap, TX) ->\n\tElement = unconfirmed_tx(TX),\n\tcase maps:get(TX#tx.last_tx, LastTXMap, not_found) of\n\t\tnot_found ->\n\t\t\tLastTXMap;\n\t\tSet ->\n\t\t\tmaps:put(TX#tx.last_tx, gb_sets:del_element(Element, Set), LastTXMap)\n\tend.\n\n%% @doc Store a map of addresses to a priority set of TXs that spend\n%% from that address. We actually store the TXIDs of the TXs to avoid bloating\n%% the ets table. The trade off is that we have to do a TXID to TX lookup\n%% when resolving overspends.\nadd_to_origin_tx_map(OriginTXMap, TX) ->\n\tElement = unconfirmed_tx(TX),\n\tOrigin = ar_tx:get_owner_address(TX),\n\tSet2 = case maps:get(Origin, OriginTXMap, not_found) of\n\t\tnot_found ->\n\t\t\tgb_sets:from_list([Element]);\n\t\tSet ->\n\t\t\tgb_sets:add_element(Element, Set)\n\tend,\n\tmaps:put(Origin, Set2, OriginTXMap).\n\ndel_from_origin_tx_map(OriginTXMap, TX) ->\n\tElement = unconfirmed_tx(TX),\n\tOrigin = ar_tx:get_owner_address(TX),\n\tcase maps:get(Origin, OriginTXMap, not_found) of\n\t\tnot_found ->\n\t\t\tOriginTXMap;\n\t\tSet ->\n\t\t\tmaps:put(Origin, gb_sets:del_element(Element, Set), OriginTXMap)\n\tend.\n\nunconfirmed_tx(TX = #tx{}) ->\n\t{ar_tx:utility(TX), TX#tx.id}.\n\n\nincrease_mempool_size(\n\t_MempoolSize = {MempoolHeaderSize, MempoolDataSize}, TX = #tx{}) ->\n\t{HeaderSize, DataSize} = tx_mempool_size(TX),\n\t{MempoolHeaderSize + HeaderSize, MempoolDataSize + DataSize}.\n\ndecrease_mempool_size(\n\t_MempoolSize = {MempoolHeaderSize, MempoolDataSize}, TX = #tx{}) ->\n\t{HeaderSize, DataSize} = tx_mempool_size(TX),\n\t{MempoolHeaderSize - HeaderSize, MempoolDataSize - DataSize}.\n\ntx_mempool_size(#tx{ format = 1, data = Data }) ->\n\t{?TX_SIZE_BASE + byte_size(Data), 0};\ntx_mempool_size(#tx{ format = 2, data = Data }) ->\n\t{?TX_SIZE_BASE, byte_size(Data)}.\n\ndeserialize_tx(Bin) when is_binary(Bin) ->\n\t{ok, TX} = ar_serialize:binary_to_tx(Bin),\n\tTX;\ndeserialize_tx(TX) ->\n\tar_storage:migrate_tx_record(TX).\n\n\n\nmay_be_drop_from_disk_pool(#tx{ format = 1 }) ->\n\tok;\nmay_be_drop_from_disk_pool(TX) ->\n\tar_data_sync:maybe_drop_data_root_from_disk_pool(TX#tx.data_root, TX#tx.data_size,\n\t\t\tTX#tx.id).\n\nfind_low_priority_txs() ->\n\tfind_low_priority_txs(gb_sets:iterator(get_priority_set()), get_mempool_size()).\nfind_low_priority_txs(Iterator, {MempoolHeaderSize, MempoolDataSize})\n\t\twhen\n\t\t\tMempoolHeaderSize > ?MEMPOOL_HEADER_SIZE_LIMIT;\n\t\t\tMempoolDataSize > ?MEMPOOL_DATA_SIZE_LIMIT ->\n\t{{_Utility, TXID, _Status} = _Element, Iterator2} = gb_sets:next(Iterator),\n\tTX = get_tx(TXID),\n\tcase should_drop_low_priority_tx(TX, {MempoolHeaderSize, MempoolDataSize}) of\n\t\ttrue ->\n\t\t\tMempoolSize2 = decrease_mempool_size({MempoolHeaderSize, MempoolDataSize}, TX),\n\t\t\t[TX | find_low_priority_txs(Iterator2, MempoolSize2)];\n\t\tfalse ->\n\t\t\tfind_low_priority_txs(Iterator2, {MempoolHeaderSize, MempoolDataSize})\n\tend;\nfind_low_priority_txs(_Iterator, {_MempoolHeaderSize, 
_MempoolDataSize}) ->\n\t[].\n\nshould_drop_low_priority_tx(_TX, {MempoolHeaderSize, _MempoolDataSize})\n\t\twhen MempoolHeaderSize > ?MEMPOOL_HEADER_SIZE_LIMIT ->\n\ttrue;\nshould_drop_low_priority_tx(TX, {_MempoolHeaderSize, MempoolDataSize})\n\t\twhen MempoolDataSize > ?MEMPOOL_DATA_SIZE_LIMIT ->\n\tTX#tx.format == 2 andalso byte_size(TX#tx.data) > 0;\nshould_drop_low_priority_tx(_TX, {_MempoolHeaderSize, _MempoolDataSize}) ->\n\tfalse.\n\n\n%% @doc Identify any transactions that refer to the same last_tx\n%% (where last_tx is the last confirmed transaction in the wallet).\n%% Only 1 of these transactions will confirm, so we want to drop\n%% all the others to prevent a mempool spam attack.\nfind_clashing_txs(#tx{ last_tx = <<>> }) ->\n\t[];\nfind_clashing_txs(TX = #tx{}) ->\n\tWallets = ar_wallets:get(ar_tx:get_addresses([TX])),\n\tfind_clashing_txs(TX, Wallets).\n\nfind_clashing_txs(TX = #tx{}, Wallets) when is_map(Wallets) ->\n\tcase ar_tx:check_last_tx(Wallets, TX) of\n\t\ttrue ->\n\t\t\tClashingTXIDs = maps:get(TX#tx.last_tx, get_last_tx_map(), gb_sets:new()),\n\t\t\tfilter_clashing_txs(ClashingTXIDs);\n\t\t_ ->\n\t\t\t[]\n\tend;\nfind_clashing_txs(_TX, _Wallets) ->\n\t[].\n\n%% @doc Only the highest priority TX will be kept, others will be dropped.\n%% Priority is defined as:\n%% 1. ar_tx:utility\n%% 2. alphanumeric order of TXID (z is higher priority than a)\n%%\n%% Adding the TXID term to the priority calculation (rather than a local\n%% timestamp) ensures that the sorting will be stable and deterministic\n%% across peers and so all peers will drop the same clashing TXs regardless\n%% of the order in which the transactions are received.\nfilter_clashing_txs(ClashingTXIDs) ->\n\tcase gb_sets:is_empty(ClashingTXIDs) of\n\t\ttrue ->\n\t\t\t[];\n\t\tfalse ->\n\t\t\t% Exclude the highest priority TX from the list of TXs to be dropped\n\t\t\t{_, UnconfirmableTXIDs} = gb_sets:take_largest(ClashingTXIDs),\n\t\t\tto_txs(UnconfirmableTXIDs)\n\tend.\n\n%% @doc Identify any transactions that would overspend an account if\n%% they were to be confirmed. Since those transactions won't confirm,\n%% we want to drop them to prevent a mempool spam attack (e.g.\n%% an attacker posts hundreds or thousands of overspend transactions\n%% which saturate the mempool, but for which only 1 will ever be\n%% confirmed).\n%%\n%% Note: when doing the overspend calculation any unconfirmed deposit\n%% transactions are ignored. This is to prevent a second potentially\n%% malicious scenario like the following:\n%%\n%% Peer A: receives deposit TX and several spend TXs,\n%%         all TXs are added to the mempool\n%% Peer B: receives only the spend TXs, and all are dropped from the mempool\n%% Peer A: publishes block\n%% Peer B: needs to request potentially many TXs from peer A since their\n%%         mempools differ\n%% A malicious attacker could exploit this to greatly increase the overall\n%% network traffic, slow down block propagation, and increase fork incidence.\n%%\n%% By ignoring unconfirmed deposit TXs, and ensuring a globally consistent\n%% sort order (e.g. 
(format, reward, TXID)) this malicious scenario is\n%% prevented.\nfind_overspent_txs(<<>>, _Denomination) ->\n\t[];\nfind_overspent_txs(TX, Denomination)\n\t\twhen TX#tx.reward > 0 orelse TX#tx.quantity > 0 ->\n\tOrigin = ar_tx:get_owner_address(TX),\n\tSpentTotal = get_origin_spent_total(Origin),\n\tBalance = get_confirmed_balance(Origin, Denomination),\n\tcase SpentTotal =< Balance of\n\t\ttrue ->\n\t\t\t[];\n\t\tfalse ->\n\t\t\tSpentTXIDs = maps:get(Origin, get_origin_tx_map(), gb_sets:new()),\n\t\t\tdrop_lowest_until_solvent(SpentTXIDs, SpentTotal, Balance, Denomination)\n\tend;\nfind_overspent_txs(_TX, _Denomination) ->\n\t[].\n\n%% @doc Walk from lowest priority, dropping TXs until spent total =< balance.\n%% Only iterates over the TXs that need to be dropped (amortized O(1) per add_tx).\ndrop_lowest_until_solvent(SpentTXIDs, SpentTotal, Balance, Denomination) ->\n\tcase SpentTotal =< Balance of\n\t\ttrue ->\n\t\t\t[];\n\t\tfalse ->\n\t\t\tcase gb_sets:is_empty(SpentTXIDs) of\n\t\t\t\ttrue ->\n\t\t\t\t\t[];\n\t\t\t\tfalse ->\n\t\t\t\t\t{{_, TXID}, SpentTXIDs2} = gb_sets:take_smallest(SpentTXIDs),\n\t\t\t\t\tTX = get_tx(TXID),\n\t\t\t\t\tAmount = tx_spent_amount(TX, Denomination),\n\t\t\t\t\t[TX | drop_lowest_until_solvent(\n\t\t\t\t\t\t\tSpentTXIDs2, SpentTotal - Amount, Balance, Denomination)]\n\t\t\tend\n\tend.\n\nto_txs(TXIDs) when is_list(TXIDs) ->\n\t[get_tx(TXID) || TXID <- TXIDs];\nto_txs(TXIDs) ->\n\tto_txs([TXID || {_, TXID} <- gb_sets:to_list(TXIDs)]).\n\ntx_spent_amount(#tx{ reward = Reward, quantity = Quantity }, 0) ->\n\tReward + Quantity;\ntx_spent_amount(#tx{ reward = Reward, quantity = Quantity, denomination = TXDenom }, Denomination) ->\n\tar_pricing:redenominate(Reward + Quantity, TXDenom, Denomination).\n\nget_current_denomination() ->\n\tcase ar_node:get_current_block() of\n\t\tnot_joined ->\n\t\t\t0;\n\t\tB ->\n\t\t\tB#block.denomination\n\tend.\n\nget_origin_spent_total_map() ->\n\tcase ets:lookup(node_state, origin_spent_total_map) of\n\t\t[{origin_spent_total_map, Map}] ->\n\t\t\tMap;\n\t\t_ ->\n\t\t\tmaps:new()\n\tend.\n\nget_origin_spent_total_denomination() ->\n\tcase ets:lookup(node_state, origin_spent_total_denomination) of\n\t\t[{origin_spent_total_denomination, D}] ->\n\t\t\tD;\n\t\t_ ->\n\t\t\t0\n\tend.\n\nadd_to_origin_spent_total_map(SpentTotalMap, TX, Denomination) ->\n\tOrigin = ar_tx:get_owner_address(TX),\n\tAmount = tx_spent_amount(TX, Denomination),\n\tOldAmount = maps:get(Origin, SpentTotalMap, 0),\n\tmaps:put(Origin, OldAmount + Amount, SpentTotalMap).\n\ndel_from_origin_spent_total_map(SpentTotalMap, TX, Denomination) ->\n\tOrigin = ar_tx:get_owner_address(TX),\n\tAmount = tx_spent_amount(TX, Denomination),\n\tOldAmount = maps:get(Origin, SpentTotalMap, 0),\n\tmaps:put(Origin, max(0, OldAmount - Amount), SpentTotalMap).\n\nget_origin_spent_total(Origin) ->\n\tmaps:get(Origin, get_origin_spent_total_map(), 0).\n\n%% @doc Return the origin address => spent total map in the given denomination. 
If the\n%% denomination has increased, redenominate all stored totals directly.\n%% This is O(number of origins) and only triggers on denomination change.\nget_redenominated_origin_spent_total_map(Denomination) ->\n\tcase get_origin_spent_total_denomination() of\n\t\tDenomination ->\n\t\t\tget_origin_spent_total_map();\n\t\tOldDenomination ->\n\t\t\tredenominate_origin_spent_total(get_origin_spent_total_map(), OldDenomination, Denomination)\n\tend.\n\nredenominate_origin_spent_total(SpentTotalMap, OldDenomination, NewDenomination) ->\n\tmaps:map(\n\t\tfun(_Origin, Total) ->\n\t\t\tar_pricing:redenominate(Total, OldDenomination, NewDenomination)\n\t\tend,\n\t\tSpentTotalMap\n\t).\n\nget_confirmed_balance(Origin, Denomination) ->\n\tWallet = ar_wallets:get(Origin),\n\tcase maps:get(Origin, Wallet, not_found) of\n\t\tnot_found ->\n\t\t\t0;\n\t\t{Balance, _LastTX} ->\n\t\t\tar_pricing:redenominate(Balance, 1, Denomination);\n\t\t{Balance, _LastTX, AccountDenomination, _MiningPermission} ->\n\t\t\tar_pricing:redenominate(Balance, AccountDenomination, Denomination)\n\tend.\n"
  },
  {
    "path": "apps/arweave/src/ar_merkle.erl",
    "content": "%%% @doc Generates annotated merkle trees, paths inside those trees, as well\n%%% as verification of those proofs.\n-module(ar_merkle).\n\n-export([generate_tree/1, generate_path/3, validate_path/4, validate_path/5,\n\t\textract_note/1, extract_root/1]).\n\n-export([get/2, hash/1, note_to_binary/1]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave/include/ar_consensus.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n%%% @doc Generates annotated merkle trees, paths inside those trees, as well\n%%% as verification of those proofs.\n-record(node, {\n\tid,\n\ttype = branch,\t% root | branch | leaf\n\tdata,\t\t\t% The value (for leaves).\n\tnote,\t\t\t% The offset, a number less than 2^256.\n\tleft,\t\t\t% The (optional) ID of a node to the left.\n\tright,\t\t\t% The (optional) ID of a node to the right.\n\tmax,\t\t\t% The maximum observed note at this point.\n\tis_rebased = false\n}).\n\n-define(HASH_SIZE, ?CHUNK_ID_HASH_SIZE).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Generate a Merkle tree from a list of pairs of IDs (of length 32 bytes)\n%% and labels -- offsets. The list may be arbitrarily nested - the inner lists then\n%% contain the leaves of the sub trees with the rebased (on 0) starting offsets.\ngenerate_tree(Elements) ->\n\tgenerate_tree(Elements, queue:new(), []).\n\n%% @doc Generate a Merkle path for the given offset Dest from the tree Tree\n%% with the root ID.\ngenerate_path(ID, Dest, Tree) ->\n\tbinary:list_to_bin(generate_path_parts(ID, Dest, Tree, 0)).\n\n%% @doc Validate the given merkle path.\nvalidate_path(ID, Dest, RightBound, Path) ->\n\tvalidate_path(ID, Dest, RightBound, Path, basic_ruleset).\n\n%% @doc Validate the given merkle path using the given set of rules.\nvalidate_path(ID, Dest, RightBound, _Path, _Ruleset) when RightBound =< 0 ->\n\t?LOG_ERROR([{event, validate_path_called_with_non_positive_right_bound},\n\t\t\t{root, ar_util:encode(ID)}, {dest, Dest}, {right_bound, RightBound}]),\n\tthrow(invalid_right_bound);\nvalidate_path(ID, Dest, RightBound, Path, Ruleset) when Dest >= RightBound ->\n\tvalidate_path(ID, RightBound - 1, RightBound, Path, Ruleset);\nvalidate_path(ID, Dest, RightBound, Path, Ruleset) when Dest < 0 ->\n\tvalidate_path(ID, 0, RightBound, Path, Ruleset);\nvalidate_path(ID, Dest, RightBound, Path, Ruleset) ->\n\tvalidate_path(ID, Dest, 0, RightBound, Path, Ruleset).\n\nvalidate_path(ID, Dest, LeftBound, RightBound, Path, basic_ruleset) ->\n\tCheckBorders = false,\n\tCheckSplit = false,\n\tAllowRebase = false,\n\tvalidate_path(ID, Dest, LeftBound, RightBound, Path, CheckBorders, CheckSplit, AllowRebase);\n\nvalidate_path(ID, Dest, LeftBound, RightBound, Path, strict_borders_ruleset) ->\n\tCheckBorders = true,\n\tCheckSplit = false,\n\tAllowRebase = false,\n\tvalidate_path(ID, Dest, LeftBound, RightBound, Path, CheckBorders, CheckSplit, AllowRebase);\n\nvalidate_path(ID, Dest, LeftBound, RightBound, Path, strict_data_split_ruleset) ->\n\tCheckBorders = true,\n\tCheckSplit = strict,\n\tAllowRebase = false,\n\tvalidate_path(ID, Dest, LeftBound, RightBound, Path, CheckBorders, CheckSplit, AllowRebase);\n\nvalidate_path(ID, Dest, LeftBound, RightBound, Path, offset_rebase_support_ruleset) ->\n\tCheckBorders = true,\n\tCheckSplit = relaxed,\n\tAllowRebase = true,\n\tvalidate_path(ID, Dest, LeftBound, RightBound, Path, CheckBorders, CheckSplit, 
AllowRebase).\n\n\nvalidate_path(ID, Dest, LeftBound, RightBound, Path, CheckBorders, CheckSplit, AllowRebase) ->\n\tDataSize = RightBound,\n\t%% Will be set to true only if we only take right branches from the root to the leaf. In this\n\t%% case we know the leaf chunk is the final chunk in the range represented by the merkle tree.\n\tIsRightMostInItsSubTree = undefined, \n\t%% Set to non-zero when AllowRebase is true and we begin processing a subtree.\n\tLeftBoundShift = 0,\n\tvalidate_path(ID, Dest, LeftBound, RightBound, Path,\n\t\tDataSize, IsRightMostInItsSubTree, LeftBoundShift,\n\t\tCheckBorders, CheckSplit, AllowRebase).\n\n%% Validate the leaf of the merkle path (i.e. the data chunk)\nvalidate_path(ID, _Dest, LeftBound, RightBound,\n\t\t<< Data:?HASH_SIZE/binary, EndOffset:(?NOTE_SIZE*8) >>,\n\t\tDataSize, IsRightMostInItsSubTree, LeftBoundShift,\n\t\tCheckBorders, CheckSplit, _AllowRebase) ->\n\tAreBordersValid = case CheckBorders of\n\t\ttrue ->\n\t\t\t%% Borders are only valid if every offset does not exceed the previous offset\n\t\t\t%% by more than ?DATA_CHUNK_SIZE\n\t\t\tEndOffset - LeftBound =< ?DATA_CHUNK_SIZE andalso\n\t\t\t\tRightBound - LeftBound =< ?DATA_CHUNK_SIZE;\n\t\tfalse ->\n\t\t\t%% Borders are always valid if we don't need to check them\n\t\t\ttrue\n\tend,\n\tIsSplitValid = case CheckSplit of\n\t\tstrict ->\n\t\t\tChunkSize = EndOffset - LeftBound,\n\t\t\tcase validate_strict_split of\n\t\t\t\t_ when ChunkSize == (?DATA_CHUNK_SIZE) ->\n\t\t\t\t\tLeftBound rem (?DATA_CHUNK_SIZE) == 0;\n\t\t\t\t_ when EndOffset == DataSize ->\n\t\t\t\t\tBorder = ar_util:floor_int(RightBound, \t?DATA_CHUNK_SIZE),\n\t\t\t\t\tRightBound rem (?DATA_CHUNK_SIZE) > 0\n\t\t\t\t\t\t\tandalso LeftBound =< Border;\n\t\t\t\t_ ->\n\t\t\t\t\tLeftBound rem (?DATA_CHUNK_SIZE) == 0\n\t\t\t\t\t\t\tandalso DataSize - LeftBound > (?DATA_CHUNK_SIZE)\n\t\t\t\t\t\t\tandalso DataSize - LeftBound < 2 * (?DATA_CHUNK_SIZE)\n\t\t\tend;\n\t\trelaxed ->\n\t\t\t%% Reject chunks smaller than 256 KiB unless they are the last or the only chunks\n\t\t\t%% of their datasets or the second last chunks which do not exceed 256 KiB when\n\t\t\t%% combined with the following (last) chunks. 
Finally, reject chunks smaller than\n\t\t\t%% their Merkle proofs unless they are the last chunks of their datasets.\n\t\t\tShiftedLeftBound = LeftBoundShift + LeftBound,\n\t\t\tShiftedEndOffset = LeftBoundShift + EndOffset,\n\t\t\tcase IsRightMostInItsSubTree of\n\t\t\t\ttrue ->\n\t\t\t\t\t%% The last chunk may either start at the bucket start or\n\t\t\t\t\t%% span two buckets.\n\t\t\t\t\tBucket0 = ShiftedLeftBound div (?DATA_CHUNK_SIZE),\n\t\t\t\t\tBucket1 = ShiftedEndOffset div (?DATA_CHUNK_SIZE),\n\t\t\t\t\t(ShiftedLeftBound rem (?DATA_CHUNK_SIZE) == 0)\n\t\t\t\t\t\t\t%% Make sure each chunk \"steps\" at least 1 byte into\n\t\t\t\t\t\t\t%% its own bucket, which is to the right from the right border\n\t\t\t\t\t\t\t%% cause since this chunk does not start at the left border,\n\t\t\t\t\t\t\t%% the bucket on the left from the right border belongs to\n\t\t\t\t\t\t\t%% the preceding chunk.\n\t\t\t\t\t\t\torelse (Bucket0 + 1 == Bucket1\n\t\t\t\t\t\t\t\t\tandalso ShiftedEndOffset rem ?DATA_CHUNK_SIZE /= 0);\n\t\t\t\t_ ->\n\t\t\t\t\t%% May also be the only chunk of a single-chunk subtree.\n\t\t\t\t\tShiftedLeftBound rem (?DATA_CHUNK_SIZE) == 0\n\t\t\tend;\n\t\t_ ->\n\t\t\t%% Split is always valid if we don't need to check it\n\t\t\ttrue\n\tend,\n\tcase AreBordersValid andalso IsSplitValid of\n\t\ttrue ->\n\t\t\tvalidate_leaf(ID, Data, EndOffset, LeftBound, RightBound, LeftBoundShift);\n\t\tfalse ->\n\t\t\tfalse\n\tend;\n\n%% Validate the given merkle path where any subtrees may have 0-based offset.\nvalidate_path(ID, Dest, LeftBound, RightBound,\n\t\t<< 0:(?HASH_SIZE*8), L:?HASH_SIZE/binary, R:?HASH_SIZE/binary,\n\t\t\tNote:(?NOTE_SIZE*8), Rest/binary >>,\n\t\tDataSize, _IsRightMostInItsSubTree, LeftBoundShift,\n\t\tCheckBorders, CheckSplit, true) ->\n\tcase hash([hash(L), hash(R), hash(note_to_binary(Note))]) of\n\t\tID ->\n\t\t\t{Path, NextLeftBound, NextRightBound, Dest2, NextLeftBoundShift} =\n\t\t\t\tcase Dest < Note of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tNote2 = min(RightBound, Note),\n\t\t\t\t\t\t{L, 0, Note2 - LeftBound, Dest - LeftBound,\n\t\t\t\t\t\t\t\tLeftBoundShift + LeftBound};\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\tNote2 = max(LeftBound, Note),\n\t\t\t\t\t\t{R, 0, RightBound - Note2,\n\t\t\t\t\t\t\t\tDest - Note2,\n\t\t\t\t\t\t\t\tLeftBoundShift + Note2}\n\t\t\t\tend,\n\t\t\tvalidate_path(Path, Dest2, NextLeftBound, NextRightBound, Rest, DataSize,\n\t\t\t\tundefined, NextLeftBoundShift, CheckBorders, CheckSplit, true);\n\t\t_ ->\n\t\t\tfalse\n\tend;\n\n%% Validate a non-leaf node in the merkle path\nvalidate_path(ID, Dest, LeftBound, RightBound,\n\t\t<< L:?HASH_SIZE/binary, R:?HASH_SIZE/binary, Note:(?NOTE_SIZE*8), Rest/binary >>,\n\t\tDataSize, IsRightMostInItsSubTree, LeftBoundShift,\n\t\tCheckBorders, CheckSplit, AllowRebase) ->\n\tvalidate_node(ID, Dest, LeftBound, RightBound, L, R, Note, Rest,\n\t\tDataSize, IsRightMostInItsSubTree, LeftBoundShift,\n\t\tCheckBorders, CheckSplit, AllowRebase);\n\n%% Invalid merkle path\nvalidate_path(_, _, _, _, _, _, _, _, _, _, _) ->\n\tfalse.\n\nvalidate_node(ID, Dest, LeftBound, RightBound, L, R, Note, RemainingPath,\n\t\tDataSize, IsRightMostInItsSubTree, LeftBoundShift,\n\t\tCheckBorders, CheckSplit, AllowRebase) ->\n\tcase hash([hash(L), hash(R), hash(note_to_binary(Note))]) of\n\t\tID ->\n\t\t\t{BranchID, NextLeftBound, NextRightBound, IsRightMostInItsSubTree2} =\n\t\t\t\tcase Dest < Note of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\t%% Traverse left branch (at this point we know the leaf chunk will never\n\t\t\t\t\t\t%% be the right most in the 
subtree)\n\t\t\t\t\t\t{L, LeftBound, min(RightBound, Note), false};\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\t%% Traverse right branch\n\t\t\t\t\t\t{R, max(LeftBound, Note), RightBound,\n\t\t\t\t\t\t\t\tcase IsRightMostInItsSubTree of undefined -> true;\n\t\t\t\t\t\t\t\t\t\t_ -> IsRightMostInItsSubTree end}\n\t\t\t\tend,\n\t\t\tvalidate_path(BranchID, Dest, NextLeftBound, NextRightBound, RemainingPath,\n\t\t\t\tDataSize, IsRightMostInItsSubTree2, LeftBoundShift,\n\t\t\t\tCheckBorders, CheckSplit, AllowRebase);\n\t\t_ ->\n\t\t\tfalse\n\tend.\n\nvalidate_leaf(ID, Data, EndOffset, LeftBound, RightBound, LeftBoundShift) ->\n\tcase hash([hash(Data), hash(note_to_binary(EndOffset))]) of\n\t\tID ->\n\t\t\t{Data, LeftBoundShift + LeftBound,\n\t\t\t\tLeftBoundShift + max(min(RightBound, EndOffset), LeftBound + 1)};\n\t\t_ ->\n\t\t\tfalse\n\tend.\n\n%% @doc Get the note (offset) attached to the leaf from a path.\nextract_note(Path) ->\n\tbinary:decode_unsigned(\n\t\tbinary:part(Path, byte_size(Path) - ?NOTE_SIZE, ?NOTE_SIZE)\n\t).\n\n%% @doc Get the Merkle root from a path.\nextract_root(<< Data:?HASH_SIZE/binary, EndOffset:(?NOTE_SIZE*8) >>) ->\n\t{ok, hash([hash(Data), hash(note_to_binary(EndOffset))])};\nextract_root(<< L:?HASH_SIZE/binary, R:?HASH_SIZE/binary, Note:(?NOTE_SIZE*8), _/binary >>) ->\n\t{ok, hash([hash(L), hash(R), hash(note_to_binary(Note))])};\nextract_root(_) ->\n\t{error, invalid_proof}.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\ngenerate_tree([Element | Elements], Stack, Tree) when is_list(Element) ->\n\t{SubRoot, SubTree} = generate_tree(Element),\n\tSubTree2 = [mark_rebased(Node, SubRoot) || Node <- SubTree],\n\tSubRootN = get(SubRoot, SubTree2),\n\tgenerate_tree(Elements, queue:in(SubRootN, Stack), Tree ++ SubTree2);\ngenerate_tree([Element | Elements], Stack, Tree) ->\n\tLeaf = generate_leaf(Element),\n\tgenerate_tree(Elements, queue:in(Leaf, Stack), [Leaf | Tree]);\ngenerate_tree([], Stack, Tree) ->\n\tcase queue:to_list(Stack) of\n\t\t[] ->\n\t\t\t{<<>>, []};\n\t\t_ ->\n\t\t\tgenerate_all_rows(queue:to_list(Stack), Tree)\n\tend.\n\nmark_rebased(#node{ id = RootID } = Node, RootID) ->\n\tNode#node{ is_rebased = true };\nmark_rebased(Node, _RootID) ->\n\tNode.\n\ngenerate_leaf({Data, Note}) ->\n\tHash = hash([hash(Data), hash(note_to_binary(Note))]),\n\t#node{\n\t\tid = Hash,\n\t\ttype = leaf,\n\t\tdata = Data,\n\t\tnote = Note,\n\t\tmax = Note\n\t}.\n\n%% Note: This implementation leaves some duplicates in the tree structure.\n%% The produced trees could be a little smaller if these duplicates were \n%% not present, but removing them with ar_util:unique takes far too long.\ngenerate_all_rows([RootN], Tree) ->\n\tRootID = RootN#node.id,\n\t{RootID, Tree};\ngenerate_all_rows(Row, Tree) ->\n\tNewRow = generate_row(Row, 0),\n\tgenerate_all_rows(NewRow, NewRow ++ Tree).\n\ngenerate_row([], _Shift) -> [];\ngenerate_row([Left], _Shift) -> [Left];\ngenerate_row([L, R | Rest], Shift) ->\n\t{N, Shift2} = generate_node(L, R, Shift),\n\t[N | generate_row(Rest, Shift2)].\n\ngenerate_node(Left, empty, Shift) ->\n\t{Left, Shift};\ngenerate_node(L, R, Shift) ->\n\tLMax = L#node.max,\n\tLMax2 = case L#node.is_rebased of true -> Shift + LMax; _ -> LMax end,\n\tRMax = R#node.max,\n\tRMax2 = case R#node.is_rebased of true -> LMax2 + RMax; _ -> RMax end,\n\t{#node{\n\t\tid = hash([hash(L#node.id), hash(R#node.id), hash(note_to_binary(LMax2))]),\n\t\ttype = branch,\n\t\tleft 
= L#node.id,\n\t\tright = R#node.id,\n\t\tnote = LMax2,\n\t\tmax = RMax2\n\t}, RMax2}.\n\ngenerate_path_parts(ID, Dest, Tree, PrevNote) ->\n\tcase get(ID, Tree) of\n\t\tN when N#node.type == leaf ->\n\t\t\t[N#node.data, note_to_binary(N#node.note)];\n\t\tN when N#node.type == branch ->\n\t\t\tNote = N#node.note,\n\t\t\t{Direction, NextID} =\n\t\t\t\tcase Dest < Note of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\t{left, N#node.left};\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\t{right, N#node.right}\n\t\t\t\tend,\n\t\t\tNextN = get(NextID, Tree),\n\t\t\t{RebaseMark, Dest2} =\n\t\t\t\tcase {NextN#node.is_rebased, Direction} of\n\t\t\t\t\t{false, _} ->\n\t\t\t\t\t\t{<<>>, Dest};\n\t\t\t\t\t{true, right} ->\n\t\t\t\t\t\t{<< 0:(?HASH_SIZE * 8) >>, Dest - Note};\n\t\t\t\t\t{true, left} ->\n\t\t\t\t\t\t{<< 0:(?HASH_SIZE * 8) >>, Dest - PrevNote}\n\t\t\t\tend,\n\t\t\t[RebaseMark, N#node.left, N#node.right, note_to_binary(Note)\n\t\t\t\t\t| generate_path_parts(NextID, Dest2, Tree, Note)]\n\tend.\n\nget(ID, Map) ->\n\tcase lists:keyfind(ID, #node.id, Map) of\n\t\tfalse -> false;\n\t\tNode -> Node\n\tend.\n\nnote_to_binary(Note) ->\n\t<< Note:(?NOTE_SIZE * 8) >>.\n\nhash(Parts) when is_list(Parts) ->\n\tcrypto:hash(sha256, binary:list_to_bin(Parts));\nhash(Binary) ->\n\tcrypto:hash(sha256, Binary).\n\nmake_tags_cumulative(L) ->\n\tlists:reverse(\n\t\telement(2,\n\t\t\tlists:foldl(\n\t\t\t\tfun({X, Tag}, {AccTag, AccL}) ->\n\t\t\t\t\tCurr = AccTag + Tag,\n\t\t\t\t\t{Curr, [{X, Curr} | AccL]}\n\t\t\t\tend,\n\t\t\t\t{0, []},\n\t\t\t\tL\n\t\t\t)\n\t\t)\n\t).\n\n%%%===================================================================\n%%% Tests.\n%%%===================================================================\n\n-define(TEST_SIZE, 64 * 1024).\n-define(UNEVEN_TEST_SIZE, 35643).\n-define(UNEVEN_TEST_TARGET, 33271).\n\ngenerate_and_validate_balanced_tree_path_test_() ->\n\t{timeout, 30, fun test_generate_and_validate_balanced_tree_path/0}.\n\ntest_generate_and_validate_balanced_tree_path() ->\n\tTags = make_tags_cumulative([{<< N:256 >>, 1} || N <- lists:seq(0, ?TEST_SIZE - 1)]),\n\t{MR, Tree} = ar_merkle:generate_tree(Tags),\n\t?assertEqual(length(Tree), (?TEST_SIZE * 2) - 1),\n\tlists:foreach(\n\t\tfun(_TestCase) ->\n\t\t\tRandomTarget = rand:uniform(?TEST_SIZE) - 1,\n\t\t\tPath = ar_merkle:generate_path(MR, RandomTarget, Tree),\n\t\t\t{Leaf, StartOffset, EndOffset} =\n\t\t\t\tar_merkle:validate_path(MR, RandomTarget, ?TEST_SIZE, Path),\n\t\t\t{Leaf, StartOffset, EndOffset} =\n\t\t\t\tar_merkle:validate_path(MR, RandomTarget, ?TEST_SIZE, Path,\n\t\t\t\t\t\tstrict_borders_ruleset),\n\t\t\t?assertEqual(RandomTarget, binary:decode_unsigned(Leaf)),\n\t\t\t?assert(RandomTarget < EndOffset),\n\t\t\t?assert(RandomTarget >= StartOffset)\n\t\tend,\n\t\tlists:seq(1, 100)\n\t).\n\ngenerate_and_validate_tree_with_rebase_test_() ->\n\t[\n\t\t{timeout, 30, fun test_tree_with_rebase_shallow/0},\n\t\t{timeout, 30, fun test_tree_with_rebase_nested/0},\n\t\t{timeout, 30, fun test_tree_with_rebase_bad_paths/0},\n\t\t{timeout, 30, fun test_tree_with_rebase_partial_chunk/0},\n\t\t{timeout, 30, fun test_tree_with_rebase_subtree_ids/0}\n\t].\n\ntest_tree_with_rebase_shallow() ->\n\tLeaf1 = crypto:strong_rand_bytes(?HASH_SIZE),\n\tLeaf2 = crypto:strong_rand_bytes(?HASH_SIZE),\n\n\t%%    Root1\n\t%%    /   \\\n\t%% Leaf1  Leaf2 (with offset reset)\n\tTags0 = [\n\t\t{Leaf1, ?DATA_CHUNK_SIZE},\n\t\t{Leaf2, 2 * ?DATA_CHUNK_SIZE}\n\t],\n\t{Root0, Tree0} = ar_merkle:generate_tree(Tags0),\n\tassert_tree([\n\t\t\t{branch, undefined, ?DATA_CHUNK_SIZE, 
false},\n\t\t\t{leaf, Leaf2, 2*?DATA_CHUNK_SIZE, false},\n\t\t\t{leaf, Leaf1, ?DATA_CHUNK_SIZE, false}\n\t\t], Tree0),\n\n\tTags1 = [{Leaf1, ?DATA_CHUNK_SIZE}, [{Leaf2, ?DATA_CHUNK_SIZE}]],\n\t{Root1, Tree1} = ar_merkle:generate_tree(Tags1),\n\tassert_tree([\n\t\t\t{branch, undefined, ?DATA_CHUNK_SIZE, false},\n\t\t\t{leaf, Leaf1, ?DATA_CHUNK_SIZE, false},\n\t\t\t{leaf, Leaf2, ?DATA_CHUNK_SIZE, true}\n\t\t], Tree1),\n\t?assertNotEqual(Root1, Root0),\n\n\tPath0_1 = ar_merkle:generate_path(Root0, 0, Tree0),\n\tPath1_1 = ar_merkle:generate_path(Root1, 0, Tree1),\n\t?assertNotEqual(Path0_1, Path1_1),\n\t{Leaf1, 0, ?DATA_CHUNK_SIZE} = ar_merkle:validate_path(Root0, 0, 2 * ?DATA_CHUNK_SIZE,\n\t\t\tPath0_1, offset_rebase_support_ruleset),\t\n\t{Leaf1, 0, ?DATA_CHUNK_SIZE} = ar_merkle:validate_path(Root1, 0, 2 * ?DATA_CHUNK_SIZE,\n\t\t\tPath1_1, offset_rebase_support_ruleset),\n\t?assertEqual(false, ar_merkle:validate_path(Root1, 0, 2 * ?DATA_CHUNK_SIZE,\n\t\t\tPath0_1, offset_rebase_support_ruleset)),\n\t?assertEqual(false, ar_merkle:validate_path(Root0, 0, 2 * ?DATA_CHUNK_SIZE,\n\t\t\tPath1_1, offset_rebase_support_ruleset)),\n\n\tPath0_2 = ar_merkle:generate_path(Root0, ?DATA_CHUNK_SIZE, Tree0),\n\tPath1_2 = ar_merkle:generate_path(Root1, ?DATA_CHUNK_SIZE, Tree1),\n\t?assertNotEqual(Path1_2, Path0_2),\n\t{Leaf2, ?DATA_CHUNK_SIZE, 2 * ?DATA_CHUNK_SIZE} = ar_merkle:validate_path(\n\t\t\tRoot0, ?DATA_CHUNK_SIZE, 2 * ?DATA_CHUNK_SIZE, Path0_2, offset_rebase_support_ruleset),\n\t{Leaf2, ?DATA_CHUNK_SIZE, 2 * ?DATA_CHUNK_SIZE} = ar_merkle:validate_path(\n\t\t\tRoot1, ?DATA_CHUNK_SIZE, 2 * ?DATA_CHUNK_SIZE, Path1_2, offset_rebase_support_ruleset),\n\t{Leaf2, ?DATA_CHUNK_SIZE, 2 * ?DATA_CHUNK_SIZE} = ar_merkle:validate_path(\n\t\t\tRoot1, 2 * ?DATA_CHUNK_SIZE - 1, 2 * ?DATA_CHUNK_SIZE, Path1_2, \n\t\t\toffset_rebase_support_ruleset),\n\t?assertEqual(false, ar_merkle:validate_path(\n\t\t\tRoot1, ?DATA_CHUNK_SIZE, 2 * ?DATA_CHUNK_SIZE, Path0_2, offset_rebase_support_ruleset)),\n\t?assertEqual(false, ar_merkle:validate_path(\n\t\t\tRoot0, ?DATA_CHUNK_SIZE, 2 * ?DATA_CHUNK_SIZE, Path1_2, offset_rebase_support_ruleset)),\n\t?assertEqual(false, ar_merkle:validate_path(\n\t\t\tRoot1, ?DATA_CHUNK_SIZE, 2 * ?DATA_CHUNK_SIZE, Path1_1, offset_rebase_support_ruleset)),\n\t?assertEqual(false, ar_merkle:validate_path(\n\t\t\tRoot1, 0, 2 * ?DATA_CHUNK_SIZE, Path1_2, offset_rebase_support_ruleset)),\n\n\t%%     ________Root2_________\n\t%%    /                      \\\n\t%% Leaf1 (with offset reset)  Leaf2 (with offset reset)\n\tTags2 = [\n\t\t[\n\t\t\t{Leaf1, ?DATA_CHUNK_SIZE}\n\t\t],\n\t\t[\n\t\t\t{Leaf2, ?DATA_CHUNK_SIZE}\n\t\t]\n\t],\n\t{Root2, Tree2} = ar_merkle:generate_tree(Tags2),\n\tassert_tree([\n\t\t\t{branch, undefined, ?DATA_CHUNK_SIZE, false},\n\t\t\t{leaf, Leaf1, ?DATA_CHUNK_SIZE, true},\n\t\t\t{leaf, Leaf2, ?DATA_CHUNK_SIZE, true}\n\t\t], Tree2),\n\n\tPath2_1 = ar_merkle:generate_path(Root2, 0, Tree2),\n\t{Leaf1, 0, ?DATA_CHUNK_SIZE} = ar_merkle:validate_path(Root2, 0,\n\t\t\t2 * ?DATA_CHUNK_SIZE, Path2_1, offset_rebase_support_ruleset),\n\n\tPath2_2 = ar_merkle:generate_path(Root2, ?DATA_CHUNK_SIZE, Tree2),\n\t{Leaf2, ?DATA_CHUNK_SIZE, 2*?DATA_CHUNK_SIZE} = ar_merkle:validate_path(Root2,\n\t\t\t?DATA_CHUNK_SIZE, 2 * ?DATA_CHUNK_SIZE, Path2_2, offset_rebase_support_ruleset),\n\t\n\t{Leaf2, ?DATA_CHUNK_SIZE, 2*?DATA_CHUNK_SIZE} = ar_merkle:validate_path(Root2,\n\t\t\t2*?DATA_CHUNK_SIZE - 1, 2*?DATA_CHUNK_SIZE, Path2_2, offset_rebase_support_ruleset),\n\n\t?assertEqual(false, 
ar_merkle:validate_path(Root2, ?DATA_CHUNK_SIZE,\n\t\t\t2*?DATA_CHUNK_SIZE, Path2_1, offset_rebase_support_ruleset)),\n\t?assertEqual(false, ar_merkle:validate_path(Root2, 0,\n\t\t\t2*?DATA_CHUNK_SIZE, Path2_2, offset_rebase_support_ruleset)).\n\ntest_tree_with_rebase_nested() ->\n\t%%                       _________________Root3________________\n\t%%                      /                                      \\\n\t%%             _____SubTree1______________                    Leaf6\n\t%%            /                           \\           \n\t%%       SubTree2              ________SubTree3_________\n\t%%       /       \\           /                           \\  \n\t%%    Leaf1    Leaf2     SubTree4 (with offset reset)   Leaf5\n\t%%                       /       \\\n\t%%                    Leaf3    Leaf4 (with offset reset)\n\tLeaf1 = crypto:strong_rand_bytes(?HASH_SIZE),\n\tLeaf2 = crypto:strong_rand_bytes(?HASH_SIZE),\n\tLeaf3 = crypto:strong_rand_bytes(?HASH_SIZE),\n\tLeaf4 = crypto:strong_rand_bytes(?HASH_SIZE),\n\tLeaf5 = crypto:strong_rand_bytes(?HASH_SIZE),\n\tLeaf6 = crypto:strong_rand_bytes(?HASH_SIZE),\n\tTags3 = [\n\t\t{Leaf1, ?DATA_CHUNK_SIZE},\n\t\t{Leaf2, 2*?DATA_CHUNK_SIZE},\n\t\t[\n\t\t\t{Leaf3, ?DATA_CHUNK_SIZE},\n\t\t\t[\n\t\t\t\t{Leaf4, ?DATA_CHUNK_SIZE}\n\t\t\t]\n\t\t],\n\t\t{Leaf5, 5*?DATA_CHUNK_SIZE},\n\t\t{Leaf6, 6*?DATA_CHUNK_SIZE}\n\t],\n\t{Root3, Tree3} = ar_merkle:generate_tree(Tags3),\n\tassert_tree([\n\t\t\t{branch, undefined, 5*?DATA_CHUNK_SIZE, false},  %% Root\n\t\t\t{branch, undefined, 2*?DATA_CHUNK_SIZE, false},  %% SubTree1\n\t\t\t{leaf, Leaf6, 6*?DATA_CHUNK_SIZE, false},\n\t\t\t{branch, undefined, ?DATA_CHUNK_SIZE, false},    %% SubTree2\n\t\t\t{branch, undefined, 4*?DATA_CHUNK_SIZE, false},  %% SubTree3\n\t\t\t{leaf, Leaf6, 6*?DATA_CHUNK_SIZE, false},        %% duplicates are safe and expected\n\t\t\t{leaf, Leaf6, 6*?DATA_CHUNK_SIZE, false},        %% duplicates are safe and expected\n\t\t\t{leaf, Leaf5, 5*?DATA_CHUNK_SIZE, false},\n\t\t\t{leaf, Leaf2, 2*?DATA_CHUNK_SIZE, false},\n\t\t\t{leaf, Leaf1, ?DATA_CHUNK_SIZE, false},\n\t\t\t{branch, undefined, ?DATA_CHUNK_SIZE, true},     %% SubTree4\n\t\t\t{leaf, Leaf3, ?DATA_CHUNK_SIZE, false},\n\t\t\t{leaf, Leaf4, ?DATA_CHUNK_SIZE, true}\n\t\t], Tree3),\n\n\tBadRoot = crypto:strong_rand_bytes(32),\n\tPath3_1 = ar_merkle:generate_path(Root3, 0, Tree3),\n\t{Leaf1, 0, ?DATA_CHUNK_SIZE} = ar_merkle:validate_path(\n\t\tRoot3, 0, 6*?DATA_CHUNK_SIZE, Path3_1, offset_rebase_support_ruleset),\n\t?assertEqual(false, ar_merkle:validate_path(\n\t\tBadRoot, 0, 6*?DATA_CHUNK_SIZE, Path3_1, offset_rebase_support_ruleset)),\n\t\n\tPath3_2 = ar_merkle:generate_path(Root3, ?DATA_CHUNK_SIZE, Tree3),\n\t{Leaf2, ?DATA_CHUNK_SIZE, 2*?DATA_CHUNK_SIZE} = ar_merkle:validate_path(\n\t\tRoot3, ?DATA_CHUNK_SIZE, 6*?DATA_CHUNK_SIZE, Path3_2, offset_rebase_support_ruleset),\n\t?assertEqual(false, ar_merkle:validate_path(\n\t\tBadRoot, ?DATA_CHUNK_SIZE, 6*?DATA_CHUNK_SIZE, Path3_2, offset_rebase_support_ruleset)),\n\t\n\tPath3_3 = ar_merkle:generate_path(Root3, ?DATA_CHUNK_SIZE * 2, Tree3),\n\t{Leaf3, 2*?DATA_CHUNK_SIZE, 3*?DATA_CHUNK_SIZE} = ar_merkle:validate_path(\n\t\tRoot3, 2*?DATA_CHUNK_SIZE, 6*?DATA_CHUNK_SIZE, Path3_3, offset_rebase_support_ruleset),\n\t?assertEqual(false, ar_merkle:validate_path(\n\t\tBadRoot, 2*?DATA_CHUNK_SIZE, 6*?DATA_CHUNK_SIZE, Path3_3, offset_rebase_support_ruleset)),\n\t\n\tPath3_4 = ar_merkle:generate_path(Root3, ?DATA_CHUNK_SIZE * 3, Tree3),\n\t{Leaf4, 3*?DATA_CHUNK_SIZE, 4*?DATA_CHUNK_SIZE} = 
ar_merkle:validate_path(\n\t\tRoot3, 3*?DATA_CHUNK_SIZE, 6*?DATA_CHUNK_SIZE, Path3_4, offset_rebase_support_ruleset),\n\t?assertEqual(false, ar_merkle:validate_path(\n\t\tBadRoot, 3*?DATA_CHUNK_SIZE, 6*?DATA_CHUNK_SIZE, Path3_4, offset_rebase_support_ruleset)),\n\t\n\tPath3_5 = ar_merkle:generate_path(Root3, ?DATA_CHUNK_SIZE * 4, Tree3),\n\t{Leaf5, 4*?DATA_CHUNK_SIZE, 5*?DATA_CHUNK_SIZE} = ar_merkle:validate_path(\n\t\tRoot3, 4*?DATA_CHUNK_SIZE, 6*?DATA_CHUNK_SIZE, Path3_5, offset_rebase_support_ruleset),\n\t?assertEqual(false, ar_merkle:validate_path(\n\t\tBadRoot, 4*?DATA_CHUNK_SIZE, 6*?DATA_CHUNK_SIZE, Path3_5, offset_rebase_support_ruleset)),\n\n\tPath3_6 = ar_merkle:generate_path(Root3, ?DATA_CHUNK_SIZE * 5, Tree3),\n\t{Leaf6, 5*?DATA_CHUNK_SIZE, 6*?DATA_CHUNK_SIZE} = ar_merkle:validate_path(\n\t\tRoot3, 5*?DATA_CHUNK_SIZE, 6*?DATA_CHUNK_SIZE, Path3_6, offset_rebase_support_ruleset),\n\t?assertEqual(false, ar_merkle:validate_path(\n\t\tBadRoot, 5*?DATA_CHUNK_SIZE, 6*?DATA_CHUNK_SIZE, Path3_6, offset_rebase_support_ruleset)),\n\n\t%%           ________Root4_________\n\t%%          /                      \\\n\t%%      SubTree1         _______SubTree2____________\n\t%%     /    \\           /                           \\\n\t%%   Leaf1 Leaf2    SubTree3 (with offset reset)  SubTree4 (with offset reset)\n\t%%                  /   \\                          /     \\\n\t%%               Leaf3  Leaf4                   Leaf5   Leaf6\n\tTags4 = [\n\t\t\t\t{Leaf1, ?DATA_CHUNK_SIZE},\n\t\t\t\t{Leaf2, 2*?DATA_CHUNK_SIZE},\n\t\t\t\t[\n\t\t\t\t\t{Leaf3, ?DATA_CHUNK_SIZE},\n\t\t\t\t\t{Leaf4, 2*?DATA_CHUNK_SIZE}\n\t\t\t\t],\n\t\t\t\t[\n\t\t\t\t\t{Leaf5, ?DATA_CHUNK_SIZE},\n\t\t\t\t\t{Leaf6, 2*?DATA_CHUNK_SIZE}\n\t\t\t\t]\n\t\t\t],\n\t{Root4, Tree4} = ar_merkle:generate_tree(Tags4),\n\tassert_tree([\n\t\t\t{branch, undefined, 2*?DATA_CHUNK_SIZE, false},  %% Root\n\t\t\t{branch, undefined, ?DATA_CHUNK_SIZE, false},    %% SubTree1\n\t\t\t{branch, undefined, 4*?DATA_CHUNK_SIZE, false},  %% SubTree2\n\t\t\t{leaf, Leaf2, 2*?DATA_CHUNK_SIZE, false},\n\t\t\t{leaf, Leaf1, ?DATA_CHUNK_SIZE, false},\n\t\t\t{branch, undefined, ?DATA_CHUNK_SIZE, true},     %% SubTree3\n\t\t\t{leaf, Leaf4, 2*?DATA_CHUNK_SIZE, false},\n\t\t\t{leaf, Leaf3, ?DATA_CHUNK_SIZE, false},\n\t\t\t{branch, undefined, ?DATA_CHUNK_SIZE, true},     %% SubTree4\n\t\t\t{leaf, Leaf6, 2*?DATA_CHUNK_SIZE, false},\n\t\t\t{leaf, Leaf5, ?DATA_CHUNK_SIZE, false}\n\t\t], Tree4),\n\n\tPath4_1 = ar_merkle:generate_path(Root4, 0, Tree4),\n\t{Leaf1, 0, ?DATA_CHUNK_SIZE} = ar_merkle:validate_path(Root4, 0, 6 * ?DATA_CHUNK_SIZE,\n\t\t\tPath4_1, offset_rebase_support_ruleset),\n\n\tPath4_2 = ar_merkle:generate_path(Root4, ?DATA_CHUNK_SIZE, Tree4),\n\t{Leaf2, ?DATA_CHUNK_SIZE, Right4_2} = ar_merkle:validate_path(Root4, ?DATA_CHUNK_SIZE,\n\t\t\t6 * ?DATA_CHUNK_SIZE, Path4_2, offset_rebase_support_ruleset),\n\t?assertEqual(2 * ?DATA_CHUNK_SIZE, Right4_2),\n\n\tPath4_3 = ar_merkle:generate_path(Root4, ?DATA_CHUNK_SIZE * 2, Tree4),\n\t{Leaf3, Left4_3, Right4_3} = ar_merkle:validate_path(Root4, 2 * ?DATA_CHUNK_SIZE,\n\t\t\t6 * ?DATA_CHUNK_SIZE, Path4_3, offset_rebase_support_ruleset),\n\t?assertEqual(2 * ?DATA_CHUNK_SIZE, Left4_3),\n\t?assertEqual(3 * ?DATA_CHUNK_SIZE, Right4_3),\n\n\tPath4_4 = ar_merkle:generate_path(Root4, ?DATA_CHUNK_SIZE * 3, Tree4),\n\t{Leaf4, Left4_4, Right4_4} = ar_merkle:validate_path(Root4, 3 * ?DATA_CHUNK_SIZE,\n\t\t\t6 * ?DATA_CHUNK_SIZE, Path4_4, offset_rebase_support_ruleset),\n\t?assertEqual(3 * ?DATA_CHUNK_SIZE, 
Left4_4),\n\t?assertEqual(4 * ?DATA_CHUNK_SIZE, Right4_4),\n\n\tPath4_5 = ar_merkle:generate_path(Root4, ?DATA_CHUNK_SIZE * 4, Tree4),\n\t{Leaf5, Left4_5, Right4_5} = ar_merkle:validate_path(Root4, 4 * ?DATA_CHUNK_SIZE,\n\t\t\t6 * ?DATA_CHUNK_SIZE, Path4_5, offset_rebase_support_ruleset),\n\t?assertEqual(4 * ?DATA_CHUNK_SIZE, Left4_5),\n\t?assertEqual(5 * ?DATA_CHUNK_SIZE, Right4_5),\n\n\tPath4_6 = ar_merkle:generate_path(Root4, ?DATA_CHUNK_SIZE * 5, Tree4),\n\t{Leaf6, Left4_6, Right4_6} = ar_merkle:validate_path(Root4, 5 * ?DATA_CHUNK_SIZE,\n\t\t\t6 * ?DATA_CHUNK_SIZE, Path4_6, offset_rebase_support_ruleset),\n\t?assertEqual(5 * ?DATA_CHUNK_SIZE, Left4_6),\n\t?assertEqual(6 * ?DATA_CHUNK_SIZE, Right4_6),\n\n\t%%             ______________Root__________________\n\t%%            /                                    \\\n\t%%     ____SubTree1                               Leaf5\n\t%%    /            \\\n\t%% Leaf1         SubTree2 (with offset reset)\n\t%%                /                   \\\n\t%%           SubTree3               Leaf4\n\t%%           /      \\\n\t%%        Leaf2    Leaf3 \n\tTags5 = [\n\t\t\t\t{Leaf1, ?DATA_CHUNK_SIZE},\n\t\t\t\t[\n\t\t\t\t\t{Leaf2, ?DATA_CHUNK_SIZE},\n\t\t\t\t\t{Leaf3, 2*?DATA_CHUNK_SIZE},\n\t\t\t\t\t{Leaf4, 3*?DATA_CHUNK_SIZE}\n\t\t\t\t],\n\t\t\t\t{Leaf5, 5*?DATA_CHUNK_SIZE}\n\t\t\t],\n\t{Root5, Tree5} = ar_merkle:generate_tree(Tags5),\n\tassert_tree([\n\t\t\t{branch, undefined, 4*?DATA_CHUNK_SIZE, false}, %% Root\n\t\t\t{branch, undefined, ?DATA_CHUNK_SIZE, false},   %% SubTree1\n\t\t\t{leaf, Leaf5, 5*?DATA_CHUNK_SIZE, false},\n\t\t\t{leaf, Leaf5, 5*?DATA_CHUNK_SIZE, false},       %% Duplicates are safe and expected\n\t\t\t{leaf, Leaf1, ?DATA_CHUNK_SIZE, false},\n\t\t\t{branch, undefined, 2*?DATA_CHUNK_SIZE, true},   %% SubTree2\n\t\t\t{branch, undefined, ?DATA_CHUNK_SIZE, false},    %% SubTree3\n\t\t\t{leaf, Leaf4, 3*?DATA_CHUNK_SIZE, false},\n\t\t\t{leaf, Leaf4, 3*?DATA_CHUNK_SIZE, false},\n\t\t\t{leaf, Leaf3, 2*?DATA_CHUNK_SIZE, false},\n\t\t\t{leaf, Leaf2, ?DATA_CHUNK_SIZE, false}\n\t\t], Tree5),\n\n\tPath5_1 = ar_merkle:generate_path(Root5, 0, Tree5),\n\t{Leaf1, 0, ?DATA_CHUNK_SIZE} = ar_merkle:validate_path(Root5, 0, 5*?DATA_CHUNK_SIZE,\n\t\t\tPath5_1, offset_rebase_support_ruleset),\n\n\tPath5_2 = ar_merkle:generate_path(Root5, ?DATA_CHUNK_SIZE, Tree5),\n\t{Leaf2, ?DATA_CHUNK_SIZE, 2*?DATA_CHUNK_SIZE} = ar_merkle:validate_path(\n\t\t\tRoot5, ?DATA_CHUNK_SIZE, 5*?DATA_CHUNK_SIZE, Path5_2, offset_rebase_support_ruleset),\n\n\tPath5_3 = ar_merkle:generate_path(Root5, 2*?DATA_CHUNK_SIZE, Tree5),\n\t{Leaf3, 2*?DATA_CHUNK_SIZE, 3*?DATA_CHUNK_SIZE} = ar_merkle:validate_path(\n\t\t\tRoot5, 2*?DATA_CHUNK_SIZE, 5*?DATA_CHUNK_SIZE, Path5_3, offset_rebase_support_ruleset),\n\n\tPath5_4 = ar_merkle:generate_path(Root5, 3*?DATA_CHUNK_SIZE, Tree5),\n\t{Leaf4, 3*?DATA_CHUNK_SIZE, 4*?DATA_CHUNK_SIZE} = ar_merkle:validate_path(\n\t\t\tRoot5, 3*?DATA_CHUNK_SIZE, 5*?DATA_CHUNK_SIZE, Path5_4, offset_rebase_support_ruleset),\n\n\tPath5_5 = ar_merkle:generate_path(Root5, 4*?DATA_CHUNK_SIZE, Tree5),\n\t{Leaf5, 4*?DATA_CHUNK_SIZE, 5*?DATA_CHUNK_SIZE} = ar_merkle:validate_path(\n\t\t\tRoot5, 4*?DATA_CHUNK_SIZE, 5*?DATA_CHUNK_SIZE, Path5_5, offset_rebase_support_ruleset),\n\n\t%%             ______________Root__________________\n\t%%            /                                    \\\n\t%%     ____SubTree1                               Leaf5\n\t%%    /            \\\n\t%% Leaf1         SubTree2 (with offset reset)\n\t%%                /                   \\\n\t%%           
Leaf2               SubTree3 (with offset reset)\n\t%%                                     /      \\\n\t%%                                  Leaf3    Leaf4 \n\tTags6 = [\n\t\t\t\t{Leaf1, ?DATA_CHUNK_SIZE},\n\t\t\t\t[\n\t\t\t\t\t{Leaf2, ?DATA_CHUNK_SIZE},\n\t\t\t\t\t[\n\t\t\t\t\t\t{Leaf3, ?DATA_CHUNK_SIZE},\n\t\t\t\t\t\t{Leaf4, 2*?DATA_CHUNK_SIZE}\n\t\t\t\t\t]\n\t\t\t\t],\n\t\t\t\t{Leaf5, 5*?DATA_CHUNK_SIZE}\n\t\t\t],\n\t{Root6, Tree6} = ar_merkle:generate_tree(Tags6),\n\tassert_tree([\n\t\t\t{branch, undefined, 4*?DATA_CHUNK_SIZE, false}, %% Root\n\t\t\t{branch, undefined, ?DATA_CHUNK_SIZE, false},   %% SubTree1\n\t\t\t{leaf, Leaf5, 5*?DATA_CHUNK_SIZE, false},\n\t\t\t{leaf, Leaf5, 5*?DATA_CHUNK_SIZE, false},\n\t\t\t{leaf, Leaf1, ?DATA_CHUNK_SIZE, false},\n\t\t\t{branch, undefined, ?DATA_CHUNK_SIZE, true},    %% SubTree2\n\t\t\t{leaf, Leaf2, ?DATA_CHUNK_SIZE, false},\n\t\t\t{branch, undefined, ?DATA_CHUNK_SIZE, true},    %% SubTree3\n\t\t\t{leaf, Leaf4, 2*?DATA_CHUNK_SIZE, false},\n\t\t\t{leaf, Leaf3, ?DATA_CHUNK_SIZE, false}\n\t\t], Tree6),\n\n\tPath6_1 = ar_merkle:generate_path(Root6, 0, Tree6),\n\t{Leaf1, 0, ?DATA_CHUNK_SIZE} = ar_merkle:validate_path(Root6, 0, 5*?DATA_CHUNK_SIZE,\n\t\t\tPath6_1, offset_rebase_support_ruleset),\n\n\tPath6_2 = ar_merkle:generate_path(Root6, ?DATA_CHUNK_SIZE, Tree6),\n\t{Leaf2, ?DATA_CHUNK_SIZE, 2*?DATA_CHUNK_SIZE} = ar_merkle:validate_path(\n\t\t\tRoot6, ?DATA_CHUNK_SIZE, 5*?DATA_CHUNK_SIZE, Path6_2, offset_rebase_support_ruleset),\n\n\tPath6_3 = ar_merkle:generate_path(Root6, 2*?DATA_CHUNK_SIZE, Tree6),\n\t{Leaf3, 2*?DATA_CHUNK_SIZE, 3*?DATA_CHUNK_SIZE} = ar_merkle:validate_path(\n\t\t\tRoot6, 2*?DATA_CHUNK_SIZE, 5*?DATA_CHUNK_SIZE, Path6_3, offset_rebase_support_ruleset),\n\n\tPath6_4 = ar_merkle:generate_path(Root6, 3*?DATA_CHUNK_SIZE, Tree6),\n\t{Leaf4, 3*?DATA_CHUNK_SIZE, 4*?DATA_CHUNK_SIZE} = ar_merkle:validate_path(\n\t\t\tRoot6, 3*?DATA_CHUNK_SIZE, 5*?DATA_CHUNK_SIZE, Path6_4, offset_rebase_support_ruleset),\n\n\tPath6_5 = ar_merkle:generate_path(Root6, 4*?DATA_CHUNK_SIZE, Tree6),\n\t{Leaf5, 4*?DATA_CHUNK_SIZE, 5*?DATA_CHUNK_SIZE} = ar_merkle:validate_path(\n\t\t\tRoot6, 4*?DATA_CHUNK_SIZE, 5*?DATA_CHUNK_SIZE, Path6_5, offset_rebase_support_ruleset).\n\ntest_tree_with_rebase_bad_paths() ->\n\t%%             ______________Root__________________\n\t%%            /                                    \\\n\t%%     ____SubTree1                               Leaf5\n\t%%    /            \\\n\t%% Leaf1         SubTree2 (with offset reset)\n\t%%                /                   \\\n\t%%           Leaf2               SubTree3 (with offset reset)\n\t%%                                     /      \\\n\t%%                                  Leaf3    Leaf4 \n\tLeaf1 = crypto:strong_rand_bytes(?HASH_SIZE),\n\tLeaf2 = crypto:strong_rand_bytes(?HASH_SIZE),\n\tLeaf3 = crypto:strong_rand_bytes(?HASH_SIZE),\n\tLeaf4 = crypto:strong_rand_bytes(?HASH_SIZE),\n\tLeaf5 = crypto:strong_rand_bytes(?HASH_SIZE),\n\tTags = [\n\t\t\t\t{Leaf1, ?DATA_CHUNK_SIZE},\n\t\t\t\t[\n\t\t\t\t\t{Leaf2, ?DATA_CHUNK_SIZE},\n\t\t\t\t\t[\n\t\t\t\t\t\t{Leaf3, ?DATA_CHUNK_SIZE},\n\t\t\t\t\t\t{Leaf4, 2*?DATA_CHUNK_SIZE}\n\t\t\t\t\t]\n\t\t\t\t],\n\t\t\t\t{Leaf5, 5*?DATA_CHUNK_SIZE}\n\t\t\t],\n\t{Root, Tree} = ar_merkle:generate_tree(Tags),\n\tGoodPath = ar_merkle:generate_path(Root, 3*?DATA_CHUNK_SIZE, Tree),\n\t{Leaf4, 3*?DATA_CHUNK_SIZE, 4*?DATA_CHUNK_SIZE} = ar_merkle:validate_path(\n\t\t\tRoot, 3*?DATA_CHUNK_SIZE, 5*?DATA_CHUNK_SIZE, GoodPath, offset_rebase_support_ruleset),\n\n\tBadPath1 = 
change_path(GoodPath, 0), %% Change L\n\t?assertEqual(false, ar_merkle:validate_path(\n\t\tRoot, 3*?DATA_CHUNK_SIZE, 5*?DATA_CHUNK_SIZE, BadPath1, offset_rebase_support_ruleset)),\n\n\tBadPath2 = change_path(GoodPath, 2*?HASH_SIZE + 1), %% Change note\n\t?assertEqual(false, ar_merkle:validate_path(\n\t\tRoot, 3*?DATA_CHUNK_SIZE, 5*?DATA_CHUNK_SIZE, BadPath2, offset_rebase_support_ruleset)),\n\n\tBadPath3 = change_path(GoodPath, 2*?HASH_SIZE + ?NOTE_SIZE + 1), %% Change offset rebase zeros\n\t?assertEqual(false, ar_merkle:validate_path(\n\t\tRoot, 3*?DATA_CHUNK_SIZE, 5*?DATA_CHUNK_SIZE, BadPath3, offset_rebase_support_ruleset)),\n\n\tBadPath4 = change_path(GoodPath, byte_size(GoodPath) - ?NOTE_SIZE - 1), %% Change leaf data hash\n\t?assertEqual(false, ar_merkle:validate_path(\n\t\tRoot, 3*?DATA_CHUNK_SIZE, 5*?DATA_CHUNK_SIZE, BadPath4, offset_rebase_support_ruleset)),\n\n\tBadPath5 = change_path(GoodPath, byte_size(GoodPath) - 1), %% Change leaf note\n\t?assertEqual(false, ar_merkle:validate_path(\n\t\tRoot, 3*?DATA_CHUNK_SIZE, 5*?DATA_CHUNK_SIZE, BadPath5, offset_rebase_support_ruleset)).\n\ntest_tree_with_rebase_partial_chunk() ->\n\tLeaf1 = crypto:strong_rand_bytes(?HASH_SIZE),\n\tLeaf2 = crypto:strong_rand_bytes(?HASH_SIZE),\n\tLeaf3 = crypto:strong_rand_bytes(?HASH_SIZE),\n\n\t%%    Root5\n\t%%    /   \\\n\t%% Leaf1  Leaf2 (with offset reset, < 256 KiB)\n\tTags5 = [\n\t\t\t\t{Leaf1, ?DATA_CHUNK_SIZE},\n\t\t\t\t[\n\t\t\t\t\t{Leaf2, 100}\n\t\t\t\t]\n\t\t\t],\n\t{Root5, Tree5} = ar_merkle:generate_tree(Tags5),\n\tassert_tree([\n\t\t\t{branch, undefined, ?DATA_CHUNK_SIZE, false},  %% Root\n\t\t\t{leaf, Leaf1, ?DATA_CHUNK_SIZE, false},\n\t\t\t{leaf, Leaf2, 100, true}\n\t\t], Tree5),\n\n\tPath5_1 = ar_merkle:generate_path(Root5, 0, Tree5),\n\t{Leaf1, 0, ?DATA_CHUNK_SIZE} = ar_merkle:validate_path(Root5, 0,\n\t\t\t?DATA_CHUNK_SIZE + 100, Path5_1, offset_rebase_support_ruleset),\n\n\tPath5_2 = ar_merkle:generate_path(Root5, ?DATA_CHUNK_SIZE, Tree5),\n\t{Leaf2, ?DATA_CHUNK_SIZE, ?DATA_CHUNK_SIZE+100} = ar_merkle:validate_path(Root5,\n\t\t\t?DATA_CHUNK_SIZE, ?DATA_CHUNK_SIZE+100, Path5_2, offset_rebase_support_ruleset),\n\n\t%%              Root6__________________\n\t%%             /                       \\\n\t%%     SubTree1 (with offset reset)   Leaf3\n\t%%         /               \\\n\t%% Leaf1 (< 256 KiB)  Leaf2 (< 256 KiB, spans two buckets)\n\tTags6 = [\n\t\t\t\t[\n\t\t\t\t\t{Leaf1, 131070},\n\t\t\t\t\t{Leaf2, 393213}\n\t\t\t\t],\n\t\t\t\t{Leaf3, 655355}\n\t\t\t],\n\t{Root6, Tree6} = ar_merkle:generate_tree(Tags6),\n\tassert_tree([\n\t\t\t{branch, undefined, 393213, false},  %% Root\n\t\t\t{leaf, Leaf3, 655355, false},\n\t\t\t{branch, undefined, 131070, true},  %% SubTree1\n\t\t\t{leaf, Leaf2, 393213, false},\n\t\t\t{leaf, Leaf1, 131070, false}\n\t\t], Tree6),\n\n\tPath6_1 = ar_merkle:generate_path(Root6, 0, Tree6),\n\t{Leaf1, 0, 131070} = ar_merkle:validate_path(Root6, 0,\n\t\t\t1000000, % an arbitrary bound > 655355\n\t\t\tPath6_1, offset_rebase_support_ruleset),\n\n\tPath6_2 = ar_merkle:generate_path(Root6, 131070, Tree6),\n\t{Leaf2, 131070, 393213} = ar_merkle:validate_path(Root6, 131070 + 5,\n\t\t\t655355, Path6_2, offset_rebase_support_ruleset),\n\n\tPath6_3 = ar_merkle:generate_path(Root6, 393213 + 1, Tree6),\n\t{Leaf3, 393213, 655355} = ar_merkle:validate_path(Root6, 393213 + 2, 655355, Path6_3,\n\t\t\toffset_rebase_support_ruleset),\n\n    %%                Root6  (with offset reset)\n\t%%                  /                     \\\n\t%%          ____SubTree1___              
Leaf3\n\t%%         /               \\\n\t%% Leaf1 (< 256 KiB)  Leaf2 (< 256 KiB, spans two buckets)\n\tTags8 = [\n\t\t\t\t[\n\t\t\t\t\t{Leaf1, 131070},\n\t\t\t\t\t{Leaf2, 393213},\n\t\t\t\t\t{Leaf3, 655355}\n\t\t\t\t]\n\t\t\t],\n\t{Root8, Tree8} = ar_merkle:generate_tree(Tags8),\n\tassert_tree([\n\t\t\t{branch, undefined, 393213, true},  %% Root\n\t\t\t{branch, undefined, 131070, false},  %% SubTree1\n\t\t\t{leaf, Leaf3, 655355, false},\n\t\t\t{leaf, Leaf3, 655355, false},\n\t\t\t{leaf, Leaf2, 393213, false},\n\t\t\t{leaf, Leaf1, 131070, false}\n\t\t], Tree8),\n\n\t%% Path to first chunk in data set (even if it's a small chunk) will validate\n\tPath8_1 = ar_merkle:generate_path(Root8, 0, Tree8),\n\t{Leaf1, 0, 131070} = ar_merkle:validate_path(Root8, 0,\n\t\t\t1000000, % an arbitrary bound > 655355\n\t\t\tPath8_1, offset_rebase_support_ruleset),\n\n\tPath8_2 = ar_merkle:generate_path(Root8, 131070, Tree8),\n\t?assertEqual(false,\n\t\tar_merkle:validate_path(Root8, 131070+5, 655355, Path8_2, offset_rebase_support_ruleset)),\n\n\tPath8_3 = ar_merkle:generate_path(Root8, 393213 + 1, Tree8),\n\t{Leaf3, 393213, 655355} = ar_merkle:validate_path(Root8, 393213 + 2, 655355, Path8_3,\n\t\t\toffset_rebase_support_ruleset),\n\n    %%                           Root9\n\t%%                  /                     \\\n\t%%              SubTree1             Leaf3 (1 B)\n\t%%         /               \\\n\t%% Leaf1 (1 B)         Leaf2 (1 B)\n\tTags9 = [\n\t\t\t\t[\n\t\t\t\t\t{Leaf1, 1}\n\t\t\t\t],\n\t\t\t\t[\n\t\t\t\t\t{Leaf2, 1}\n\t\t\t\t],\n\t\t\t\t[\n\t\t\t\t\t{Leaf3, 1}\n\t\t\t\t]\n\t\t\t],\n\t{Root9, Tree9} = ar_merkle:generate_tree(Tags9),\n\tassert_tree([\n\t\t\t{branch, undefined, 2, false},  %% Root\n\t\t\t{branch, undefined, 1, false},  %% SubTree1\n\t\t\t{leaf, Leaf3, 1, true},\n\t\t\t{leaf, Leaf1, 1, true},\n\t\t\t{leaf, Leaf2, 1, true},\n\t\t\t{leaf, Leaf3, 1, true}          %% Duplicates are safe and expected\n\t\t], Tree9),\n\n\t%% Path to first chunk in data set (even if it's a small chunk) will validate\n\tPath9_1 = ar_merkle:generate_path(Root9, 0, Tree9),\n\t{Leaf1, 0, 1} = ar_merkle:validate_path(Root9, 0,\n\t\t\t1,\n\t\t\tPath9_1, offset_rebase_support_ruleset),\n\n\tPath9_2 = ar_merkle:generate_path(Root9, 1, Tree9),\n\t?assertEqual(false,\n\t\tar_merkle:validate_path(Root9, 1, 2, Path9_2, offset_rebase_support_ruleset)),\n\n\tPath9_3 = ar_merkle:generate_path(Root9, 2, Tree9),\n\t?assertEqual(false,\n\t\tar_merkle:validate_path(Root9, 2, 3, Path9_3, offset_rebase_support_ruleset)),\n\n\t%%                           Root10\n\t%%                  /                     \\\n\t%%              SubTree1             Leaf3 (256 KiB)\n\t%%         /               \\\n\t%% Leaf1 (256 KiB)         Leaf2 (1 B)\n\t%%\n\t%% Every chunk in a subtree following a small-chunk subtree should fail to validate. 
When\n\t%% bundling, bundlers are required to pad small chunks out to a chunk boundary.\n\tTags10 = [\n\t\t\t\t[\n\t\t\t\t\t{Leaf1, ?DATA_CHUNK_SIZE}\n\t\t\t\t],\n\t\t\t\t[\n\t\t\t\t\t{Leaf2, 1}\n\t\t\t\t],\n\t\t\t\t[\n\t\t\t\t\t{Leaf3, ?DATA_CHUNK_SIZE}\n\t\t\t\t]\n\t\t\t],\n\t{Root10, Tree10} = ar_merkle:generate_tree(Tags10),\n\tassert_tree([\n\t\t\t{branch, undefined, ?DATA_CHUNK_SIZE+1, false},  %% Root\n\t\t\t{branch, undefined, ?DATA_CHUNK_SIZE, false},  %% SubTree1\n\t\t\t{leaf, Leaf3, ?DATA_CHUNK_SIZE, true},\n\t\t\t{leaf, Leaf1, ?DATA_CHUNK_SIZE, true},\n\t\t\t{leaf, Leaf2, 1, true},\n\t\t\t{leaf, Leaf3, ?DATA_CHUNK_SIZE, true}          %% Duplicates are safe and expected\n\t\t], Tree10),\n\n\tPath10_1 = ar_merkle:generate_path(Root10, 0, Tree10),\n\t{Leaf1, 0, ?DATA_CHUNK_SIZE} = ar_merkle:validate_path(Root10, 0,\n\t\t\t?DATA_CHUNK_SIZE,\n\t\t\tPath10_1, offset_rebase_support_ruleset),\n\n\tPath10_2 = ar_merkle:generate_path(Root10, ?DATA_CHUNK_SIZE, Tree10),\n\t{Leaf2, ?DATA_CHUNK_SIZE, ?DATA_CHUNK_SIZE+1} = ar_merkle:validate_path(Root10,\n\t\t\t?DATA_CHUNK_SIZE, ?DATA_CHUNK_SIZE+1,\n\t\t\tPath10_2, offset_rebase_support_ruleset),\n\n\tPath10_3 = ar_merkle:generate_path(Root10, ?DATA_CHUNK_SIZE+1, Tree10),\n\t?assertEqual(false,\n\t\tar_merkle:validate_path(Root10, ?DATA_CHUNK_SIZE+1, (2*?DATA_CHUNK_SIZE)+1,\n\t\t\tPath10_3, offset_rebase_support_ruleset)),\n\tok.\n\ntest_tree_with_rebase_subtree_ids() ->\n\t%% Assert that all the tree IDs are preserved when the tree is added as a subtree within\n\t%% a larger tree\n\tLeaf1 = crypto:strong_rand_bytes(?HASH_SIZE),\n\tLeaf2 = crypto:strong_rand_bytes(?HASH_SIZE),\n\tLeaf3 = crypto:strong_rand_bytes(?HASH_SIZE),\n\tSubTreeTags = [\n\t\t\t\t{Leaf1, ?DATA_CHUNK_SIZE},\n\t\t\t\t{Leaf2, 2 * ?DATA_CHUNK_SIZE}\n\t\t\t],\n\t{SubTreeRoot, SubTree} = ar_merkle:generate_tree(SubTreeTags),\n\n\tTreeTags = [\n\t\t{Leaf3, ?DATA_CHUNK_SIZE},\n\t\t[\n\t\t\t{Leaf1, ?DATA_CHUNK_SIZE},\n\t\t\t{Leaf2, 2 * ?DATA_CHUNK_SIZE}\n\t\t]\n\t],\n\n\t{_TreeRoot, Tree} = ar_merkle:generate_tree(TreeTags),\n\n\tTreeNodes = lists:nthtail(length(Tree) - length(SubTree), Tree),\n\tTreeSubTreeRoot = lists:nth(1, TreeNodes),\n\tTreeLeaf1 = lists:nth(2, TreeNodes),\n\tSubTreeLeaf1 = lists:nth(2, SubTree),\n\tTreeLeaf2 = lists:nth(3, TreeNodes),\n\tSubTreeLeaf2 = lists:nth(3, SubTree),\n\t?assertEqual(SubTreeRoot, TreeSubTreeRoot#node.id),\n\t?assertEqual(SubTreeLeaf1#node.id, TreeLeaf1#node.id),\n\t?assertEqual(SubTreeLeaf2#node.id, TreeLeaf2#node.id).\n\ngenerate_and_validate_uneven_tree_path_test() ->\n\tTags = make_tags_cumulative([{<<N:256>>, 1}\n\t\t\t|| N <- lists:seq(0, ?UNEVEN_TEST_SIZE - 1)]),\n\t{MR, Tree} = ar_merkle:generate_tree(Tags),\n\t%% Make sure the target is in the 'uneven' ending of the tree.\n\tPath = ar_merkle:generate_path(MR, ?UNEVEN_TEST_TARGET, Tree),\n\t{Leaf, StartOffset, EndOffset} =\n\t\tar_merkle:validate_path(MR, ?UNEVEN_TEST_TARGET, ?UNEVEN_TEST_SIZE, Path),\n\t{Leaf, StartOffset, EndOffset} =\n\t\tar_merkle:validate_path(MR, ?UNEVEN_TEST_TARGET, ?UNEVEN_TEST_SIZE,\n\t\t\t\tPath, strict_borders_ruleset),\n\t?assertEqual(?UNEVEN_TEST_TARGET, binary:decode_unsigned(Leaf)),\n\t?assert(?UNEVEN_TEST_TARGET < EndOffset),\n\t?assert(?UNEVEN_TEST_TARGET >= StartOffset).\n\nreject_invalid_tree_path_test_() ->\n\t{timeout, 30, fun test_reject_invalid_tree_path/0}.\n\ntest_reject_invalid_tree_path() ->\n\tTags = make_tags_cumulative([{<<N:256>>, 1} || N <- lists:seq(0, ?TEST_SIZE - 1)]),\n\t{MR, Tree} 
=\n\t\tar_merkle:generate_tree(Tags),\n\tRandomTarget = rand:uniform(?TEST_SIZE) - 2,\n\t?assertEqual(\n\t\tfalse,\n\t\tar_merkle:validate_path(\n\t\t\tMR, RandomTarget,\n\t\t\t?TEST_SIZE,\n\t\t\tar_merkle:generate_path(MR, RandomTarget+1, Tree)\n\t\t)\n\t).\n\nassert_node({Id, Type, Data, Note, IsRebased}, Node) ->\n\t?assertEqual(Id, Node#node.id),\n\tassert_node({Type, Data, Note, IsRebased}, Node);\nassert_node({Type, Data, Note, IsRebased}, Node) ->\n\t?assertEqual(Type, Node#node.type),\n\t?assertEqual(Data, Node#node.data),\n\t?assertEqual(Note, Node#node.note),\n\t?assertEqual(IsRebased, Node#node.is_rebased).\n\nassert_tree([], []) ->\n\tok;\nassert_tree([], _RestOfTree) ->\n\t?assert(false);\nassert_tree(_RestOfValues, []) ->\n\t?assert(false);\nassert_tree([ExpectedValues | RestOfValues], [Node | RestOfTree]) ->\n\tassert_node(ExpectedValues, Node),\n\tassert_tree(RestOfValues, RestOfTree).\n\nchange_path(Path, Index) ->\n\tNewByte = (binary:at(Path, Index) + 1) rem 256,\n\tList = binary_to_list(Path),\n\tUpdatedList = lists:sublist(List, Index) ++ [NewByte] ++ lists:nthtail(Index+1, List),\n\tlist_to_binary(UpdatedList).\n"
  },
  {
    "path": "apps/arweave/src/ar_metrics.erl",
    "content": "-module(ar_metrics).\n\n-include(\"ar.hrl\").\n\n-export([register/0, get_status_class/1, record_rate_metric/4]).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Declare Arweave metrics.\nregister() ->\n\t%% App info\n\tprometheus_gauge:new([\n\t\t{name, arweave_release},\n\t\t{help, \"Arweave release number\"}\n\t]),\n\t%% Release number never changes so just set it here.\n\tprometheus_gauge:set(arweave_release, ?RELEASE_NUMBER),\n\n\t%% Networking.\n\tprometheus_counter:new([\n\t\t{name, http_server_accepted_bytes_total},\n\t\t{help, \"The total amount of bytes accepted by the HTTP server, per endpoint\"},\n\t\t{labels, [route]}\n\t]),\n\tprometheus_counter:new([\n\t\t{name, http_server_served_bytes_total},\n\t\t{help, \"The total amount of bytes served by the HTTP server, per endpoint\"},\n\t\t{labels, [route]}\n\t]),\n\tprometheus_counter:new([\n\t\t{name, http_client_downloaded_bytes_total},\n\t\t{help, \"The total amount of bytes requested via HTTP, per remote endpoint\"},\n\t\t{labels, [route]}\n\t]),\n\tprometheus_counter:new([\n\t\t{name, http_client_uploaded_bytes_total},\n\t\t{help, \"The total amount of bytes posted via HTTP, per remote endpoint\"},\n\t\t{labels, [route]}\n\t]),\n\tprometheus_gauge:new([\n\t\t{name, arweave_peer_count},\n\t\t{help, \"peer count\"}\n\t]),\n\tprometheus_counter:new([\n\t\t{name, gun_requests_total},\n\t\t{labels, [http_method, route, status_class]},\n\t\t{\n\t\t\thelp,\n\t\t\t\"The total number of GUN requests.\"\n\t\t}\n\t]),\n\t%% NOTE: the erlang prometheus client looks at the metric name to determine units.\n\t%%       If it sees <name>_duration_<unit> it assumes the observed value is in\n\t%%       native units and it converts it to <unit>. To query native units, use:\n\t%%       erlang:monotonic_time() without any arguments.\n\t%%       See: https://github.com/deadtrickster/prometheus.erl/blob/6dd56bf321e99688108bb976283a80e4d82b3d30/src/prometheus_time.erl#L2-L84\n\tprometheus_histogram:new([\n\t\t{name, ar_http_request_duration_seconds},\n\t\t{buckets, [infinity]}, %% we don't care about the histogram portion\n        {labels, [http_method, route, status_class]},\n\t\t{\n\t\t\thelp,\n\t\t\t\"The total duration of an ar_http:req call. This includes more than just the GUN \"\n\t\t\t\"request itself (e.g. establishing a connection, throttling, etc...)\"\n\t\t}\n\t]),\n\tprometheus_histogram:new([\n\t\t{name, http_client_get_chunk_duration_seconds},\n\t\t{buckets, [infinity]}, %% we don't care about the histogram portion\n        {labels, [status_class, peer]},\n\t\t{\n\t\t\thelp,\n\t\t\t\"The total duration of an HTTP GET chunk request made to a peer.\"\n\t\t}\n\t]),\n\n\tprometheus_gauge:new([\n\t\t{name, downloader_queue_size},\n\t\t{help, \"The size of the back-off queue for the block and transaction headers \"\n\t\t\t\t\"the node failed to sync and will retry later.\"}\n\t]),\n\tprometheus_gauge:new([{name, outbound_connections},\n\t\t\t{help, \"The current number of the open outbound network connections\"}]),\n\n\t%% Transaction and block propagation.\n\tprometheus_gauge:new([\n\t\t{name, tx_queue_size},\n\t\t{help, \"The size of the transaction propagation queue\"}\n\t]),\n\tprometheus_counter:new([\n\t\t{name, propagated_transactions_total},\n\t\t{labels, [status_class]},\n\t\t{\n\t\t\thelp,\n\t\t\t\"The total number of propagated transactions. 
Increases \"\n\t\t\t\"with the number of peers the node propagates transactions to.\"\n\t\t}\n\t]),\n\tprometheus_histogram:declare([\n\t\t{name, tx_propagation_bits_per_second},\n\t\t{buckets, [infinity]}, %% we don't care about the histogram portion\n\t\t{help, \"The throughput (in bits/s) of transaction propagation.\"}\n\t]),\n\tprometheus_gauge:new([\n\t\t{name, mempool_header_size_bytes},\n\t\t{\n\t\t\thelp,\n\t\t\t\"The size (in bytes) of the memory pool of transaction headers. \"\n\t\t\t\"The data fields of format=1 transactions are considered to be \"\n\t\t\t\"parts of transaction headers.\"\n\t\t}\n\t]),\n\tprometheus_gauge:new([\n\t\t{name, mempool_data_size_bytes},\n\t\t{\n\t\t\thelp,\n\t\t\t\"The size (in bytes) of the memory pool of transaction data. \"\n\t\t\t\"The data fields of format=1 transactions are NOT considered \"\n\t\t\t\"to be transaction data.\"\n\t\t}\n\t]),\n\tprometheus_counter:new([{name, block_announcement_missing_transactions},\n\t\t\t{help, \"The total number of tx prefixes reported to us via \"\n\t\t\t\t\t\"POST /block_announcement and not found in the mempool or block cache.\"}]),\n\tprometheus_counter:new([{name, block_announcement_reported_transactions},\n\t\t\t{help, \"The total number of tx prefixes reported to us via \"\n\t\t\t\t\t\"POST /block_announcement.\"}]),\n\tprometheus_counter:new([{name, block2_received_transactions},\n\t\t\t{help, \"The total number of transactions received via POST /block2.\"}]),\n\tprometheus_counter:new([{name, block_announcement_missing_chunks},\n\t\t\t{help, \"The total number of chunks reported to us via \"\n\t\t\t\t\t\"POST /block_announcement and not found locally.\"}]),\n\tprometheus_counter:new([{name, block_announcement_reported_chunks},\n\t\t\t{help, \"The total number of chunks reported to us via \"\n\t\t\t\t\t\"POST /block_announcement.\"}]),\n\tprometheus_counter:new([{name, block2_fetched_chunks},\n\t\t\t{help, \"The total number of chunks fetched locally during the successful\"\n\t\t\t\t\t\" processing of POST /block2.\"}]),\n\tprometheus_histogram:new([\n\t\t{name, ar_mempool_add_tx_duration_milliseconds},\n\t\t{buckets, [infinity]}, %% we don't care about the histogram portion\n\t\t{help, \"The duration in milliseconds it took to add a transaction to the mempool.\"}\n\t]),\n\tprometheus_histogram:new([\n\t\t{name, reverify_mempool_chunk_duration_milliseconds},\n\t\t{buckets, [infinity]}, %% we don't care about the histogram portion\n\t\t{help, \"The duration in milliseconds it took to reverify a chunk of transactions \"\n\t\t\t\t\"in the mempool.\"}\n\t]),\n\tprometheus_histogram:new([\n\t\t{name, drop_txs_duration_milliseconds},\n\t\t{buckets, [infinity]}, %% we don't care about the histogram portion\n\t\t{help, \"The duration in milliseconds it took to drop a chunk of transactions \"\n\t\t\t\t\"from the mempool.\"}\n\t]),\n\tprometheus_histogram:new([\n\t\t{name, del_from_propagation_queue_duration_milliseconds},\n\t\t{buckets, [infinity]}, %% we don't care about the histogram portion\n\t\t{help, \"The duration in milliseconds it took to remove a transaction from the \"\n\t\t\t\t\"propagation queue after it was emitted to peers.\"}\n\t]),\n\n\t%% Data seeding.\n\tprometheus_gauge:new([\n\t\t{name, weave_size},\n\t\t{help, \"The size of the weave (in bytes).\"}\n\t]),\n\tprometheus_gauge:new([\n\t\t{name, v2_index_data_size},\n\t\t{help, \"The size (in bytes) of the data stored and indexed. 
Note: if \"\n\t\t\t\t\"multiple storage modules cover the same range of data, that \"\n\t\t\t\t\"range will be counted multiple times.\"}\n\t]),\n\tprometheus_gauge:new([\n\t\t{name, v2_index_data_size_by_packing},\n\t\t{labels, [store_id, packing, partition_number, storage_module_size, storage_module_index,\n\t\t\t  packing_difficulty]},\n\t\t{help, \"The size (in bytes) of the data stored and indexed. Grouped by the \"\n\t\t\t\t\"store ID, packing, partition number, storage module size, \"\n\t\t\t\t\"storage module index, and packing difficulty.\"}\n\t]),\n\n\t%% Disk pool.\n\tprometheus_gauge:new([\n\t\t{name, pending_chunks_size},\n\t\t{\n\t\t\thelp,\n\t\t\t\"The total size in bytes of stored pending and seeded chunks.\"\n\t\t}\n\t]),\n\tprometheus_gauge:new([\n\t\t{name, disk_pool_chunks_count},\n\t\t{\n\t\t\thelp,\n\t\t\t\"The approximate number of chunks in the disk pool.\"\n\t\t\t\"The disk pool includes pending, recent, and orphaned chunks.\"\n\t\t}\n\t]),\n\tprometheus_counter:new([\n\t\t{name, disk_pool_processed_chunks},\n\t\t{\n\t\t\thelp,\n\t\t\t\"The counter is incremented every time the periodic process\"\n\t\t\t\" looks up a chunk from the disk pool and decides whether to\"\n\t\t\t\" remove it, include it in the weave, or keep in the disk pool.\"\n\t\t}\n\t]),\n\n\t%% Consensus.\n\tprometheus_gauge:new([\n\t\t{name, arweave_block_height},\n\t\t{help, \"The block height.\"}\n\t]),\n\tprometheus_gauge:new([{name, block_time},\n\t\t\t{help, \"The time in seconds between two blocks as recorded by the miners.\"}]),\n\tprometheus_gauge:new([\n\t\t{name, block_vdf_time},\n\t\t{help, \"The number of the VDF steps between two consequent blocks.\"}\n\t]),\n\tprometheus_gauge:new([\n\t\t{name, block_vdf_advance},\n\t\t{help, \"The number of the VDF steps a received block is ahead of our current step.\"}\n\t]),\n\n\tprometheus_counter:new([\n\t\t{name, wallet_list_size},\n\t\t{\n\t\t\thelp,\n\t\t\t\"The total number of wallets in the system.\"\n\t\t}\n\t]),\n\tprometheus_histogram:new([\n\t\t{name, block_pre_validation_time},\n\t\t{buckets, [infinity]}, %% we don't care about the histogram portion\n\t\t{help,\n\t\t\t\"The time in milliseconds taken to parse the POST /block input and perform a \"\n\t\t\t\"preliminary validation before relaying the block to peers.\"}\n\t]),\n\tprometheus_histogram:new([\n\t\t{name, block_processing_time},\n\t\t{buckets, [infinity]}, %% we don't care about the histogram portion\n\t\t{help,\n\t\t\t\"The time in seconds taken to validate the block and apply it on top of \"\n\t\t\t\"the current state, possibly involving a chain reorganisation.\"}\n\t]),\n\tprometheus_gauge:new([\n\t\t{name, synced_blocks},\n\t\t{\n\t\t\thelp,\n\t\t\t\"The total number of synced block headers.\"\n\t\t}\n\t]),\n\n\t%% Mining.\n\tprometheus_gauge:new([\n\t\t{name, mining_rate},\n\t\t{labels, [type, partition]},\n\t\t{help, \"Tracks 3 different mining rate metrics, each with a different type label. \"\n\t\t\t\t\"The type label can be 'read', 'raw_read', 'hash', or 'ideal'. \"\n\t\t\t\t\"'read' tracks the number of chunks read per second - recorded in MiB per second. \"\n\t\t\t\t\"This is the effective mining read rate as it considers all limiting factors like \"\n\t\t\t\t\"nonce limiter, hashing speed, etc...\"\n\t\t\t\t\"'raw_read' tracks the average read rate of the partition ignoring any other \"\n\t\t\t\t\"limiting factors - recorded in MiB per second.\"\n\t\t\t\t\"'hash' tracks the number of solutions candidates generated per second. 
\"\n\t\t\t\t\"'ideal' tracks the ideal read rate given the current VDF step time and amount of \"\n\t\t\t\t\"data synced. The partition label breaks the mining rate down by partition. \"\n\t\t\t\t\"The overall mining rate is indicated by 'total'.\"}\n\t]),\n\tprometheus_gauge:new([\n\t\t{name, cm_h1_rate},\n\t\t{labels, [peer, direction]},\n\t\t{help, \"The number of H1 hashes exchanged with a coordinated mining peer per second. \"\n\t\t\t\t\"The peer label indicates the peer that the value is exchanged with, and the \"\n\t\t\t\t\"direction label can be 'to' or 'from'.\"}\n\t]),\n\tprometheus_gauge:new([\n\t\t{name, cm_h2_count},\n\t\t{labels, [peer, direction]},\n\t\t{help, \"The total number of H2 hashes exchanged with a coordinated mining peer. \"\n\t\t\t\t\"The peer label indicates the peer that the value is exchanged with, and the \"\n\t\t\t\t\"direction label can be 'to' or 'from'.\"}\n\t]),\n\tprometheus_gauge:new([\n\t\t{name, mining_server_chunk_cache_size},\n\t\t{labels, [partition, type]},\n\t\t{help, \"The amount of data (measured in bytes) \"\n\t\t\t\"fetched during mining and not processed yet. \"\n\t\t  \"The type label can be 'total', 'reserved'.\"}\n\t]),\n\tprometheus_gauge:new([\n\t\t{name, mining_server_task_queue_len},\n\t\t{labels, [task]},\n\t\t{help, \"The number of items in the mining server task queue.\"}\n\t]),\n\tprometheus_gauge:new([\n\t\t{name, mining_solution},\n\t\t{labels, [reason]},\n\t\t{help, \"Incremented whenever the miner generates a solution. The 'reason' label \"\n\t\t\t\t\"will be 'success' if a block was successfully prepared from the solution, \"\n\t\t\t\t\"and will list a failure reason otherwise. Note: even if a block is \"\n\t\t\t\t\"successfully prepared from a solution, it does not necessarily mean \"\n\t\t\t\t\"the block ended up in the blockchain.\"}\n\t]),\n\tprometheus_histogram:new([\n\t\t{name, chunk_storage_sync_record_check_duration_milliseconds},\n\t\t{labels, [requested_chunk_count]},\n\t\t{buckets, [infinity]}, %% we don't care about the histogram portion\n\t\t{help, \"The time in milliseconds it took to check the fetched chunk range \"\n\t\t\t\t\"is actually registered by the chunk storage.\"}\n\t]),\n\tprometheus_gauge:new([\n\t\t{name, mining_server_tasks},\n\t\t{labels, [task]},\n\t\t{help, \"Incremented each time the mining server adds a task to the task queue.\"}\n\t]),\n\tprometheus_gauge:new([\n\t\t{name, mining_vdf_step},\n\t\t{help, \"Incremented each time the mining server processes a VDF step.\"}\n\t]),\n\t%% VDF.\n\tprometheus_histogram:new([\n\t\t{name, vdf_step_time_milliseconds},\n\t\t{buckets, [infinity]}, %% we don't care about the histogram portion\n\t\t{labels, []},\n\t\t{help, \"The time in milliseconds it took to compute a VDF step.\"}\n\t]),\n\tprometheus_gauge:new([\n\t\t{name, vdf_step},\n\t\t{help, \"The current VDF step.\"}\n\t]),\n\tprometheus_gauge:new([\n\t\t{name, vdf_difficulty},\n\t\t{labels, [type]},\n\t\t{help, \"The cached VDF difficulty. 
'type' can be either 'current' or 'next'.\"}\n\t]),\n\n\t%% Economic metrics.\n\tprometheus_gauge:new([\n\t\t{name, average_network_hash_rate},\n\t\t{help, \"The average network hash rate measured over the last ~30 days of blocks\"}\n\t]),\n\tprometheus_gauge:new([\n\t\t{name, average_block_reward},\n\t\t{help, \"The average block reward in Winston computed from the last ~30 days of blocks\"}\n\t]),\n\tprometheus_gauge:new([\n\t\t{name, expected_block_reward},\n\t\t{help, \"The block reward required to sustain 20 replicas of the present weave\"\n\t\t\t\t\" as currently estimated by the protocol.\"}\n\t]),\n\tprometheus_gauge:new([\n\t\t{name, network_data_size},\n\t\t{help, \"Total size of the network data in bytes.\"}\n\t]),\n\tprometheus_gauge:new([\n\t\t{name, v2_price_per_gibibyte_minute},\n\t\t{help, \"The price of storing 1 GiB for one minute as it will be calculated once the\"\n\t\t\t\t\" transition to the new pricing protocol is complete.\"}\n\t]),\n\tprometheus_gauge:new([\n\t\t{name, price_per_gibibyte_minute},\n\t\t{help, \"The price of storing 1 GiB for one minute as currently estimated by \"\n\t\t\t\t\"the protocol.\"}\n\t]),\n\tprometheus_gauge:new([\n\t\t{name, legacy_price_per_gibibyte_minute},\n\t\t{help, \"The price of storing 1 GiB for one minute as estimated by the previous (\"\n\t\t\t\t\"USD to AR benchmark-based) version of the protocol.\"}\n\t]),\n\tprometheus_gauge:new([\n\t\t{name, endowment_pool},\n\t\t{help, \"The amount of Winston in the endowment pool.\"}\n\t]),\n\tprometheus_gauge:new([\n\t\t{name, kryder_plus_rate_multiplier},\n\t\t{help, \"Kryder+ rate multiplier.\"}\n\t]),\n\tprometheus_gauge:new([\n\t\t{name, endowment_pool_take},\n\t\t{help, \"Value we take from endowment pool to miner to compensate difference between expected and real reward.\"}\n\t]),\n\tprometheus_gauge:new([\n\t\t{name, endowment_pool_give},\n\t\t{help, \"Value we give to endowment pool from transaction fees.\"}\n\t]),\n\tprometheus_gauge:new([\n\t\t{name, available_supply},\n\t\t{help, \"The total supply minus the endowment, in Winston.\"}\n\t]),\n\tprometheus_gauge:new([\n\t\t{name, debt_supply},\n\t\t{help, \"The amount of Winston emitted when the endowment pool was not sufficiently\"\n\t\t\t\t\" large to compensate mining.\"}\n\t]),\n\tprometheus_gauge:new([\n\t\t{name, poa_count},\n\t\t{labels, [chunks]},\n\t\t{help, \"A count of the number of 1-chunk and 2-chunk blocks in the last 21,600 blocks. \"\n\t\t\t\t\"The 'chunks' label is 1 for the count of 1-chunk blocks, and 2 for the count of \"\n\t\t\t\t\"2-chunk blocks.\"}\n\t]),\n\tprometheus_gauge:new([\n\t\t{name, log_diff},\n\t\t{labels, [chunk]},\n\t\t{help, \"The current linear difficulty converted to log scale. 
The chunk label \"\n\t\t\t\t\"is either 'poa1' or 'poa2'.\"}\n\t]),\n\tprometheus_gauge:new([\n\t\t{name, network_hashrate},\n\t\t{help, \"An estimation of the network hash rate based on the mining difficulty \"\n\t\t\t\t\"of the latest block.\"}\n\t]),\n\tprometheus_gauge:new([\n\t\t{name, expected_minimum_200_years_storage_costs_decline_rate},\n\t\t{help, \"The expected minimum decline rate sufficient to subsidize storage of \"\n\t\t\t\t\"the current weave for 200 years according to the legacy (2.5) estimations.\"}\n\t]),\n\tprometheus_gauge:new([\n\t\t{name, expected_minimum_200_years_storage_costs_decline_rate_10_usd_ar},\n\t\t{help, \"The expected minimum decline rate sufficient to subsidize storage of \"\n\t\t\t\t\"the current weave for 200 years according to the legacy (2.6) estimations\"\n\t\t\t\t\"and assuming 10 $/AR.\"}\n\t]),\n\n\t%% Packing.\n\tprometheus_histogram:new([\n\t\t{name, packing_duration_milliseconds},\n\t\t{labels, [type, packing, trigger]},\n\t\t{buckets, [infinity]}, %% we don't care about the histogram portion\n\t\t{help, \"The packing/unpacking time in milliseconds. The type label indicates what \"\n\t\t\t\t\"type of operation was requested either: 'pack', 'unpack',\"\n\t\t\t\t\"'unpack_sub_chunk', or 'pack_sub_chunk'. The packing \"\n\t\t\t\t\"label differs based on the type. If type is 'unpack' then the packing label \"\n\t\t\t\t\"indicates the format of the chunk before being unpacked. If type is 'pack' \"\n\t\t\t\t\"then the packing label indicates the format that the chunk will be packed \"\n\t\t\t\t\"to. In all cases its value can be 'spora_2_5', 'spora_2_6', 'composite', \"\n\t\t\t\t\"or 'replica_2_9'. The trigger label shows where the request was triggered: \"\n\t\t\t\t\"'external' (e.g. an HTTP request) or 'internal' (e.g. during syncing or \"\n\t\t\t\t\"repacking).\"}\n\t]),\n\tprometheus_counter:new([\n\t\t{name, packing_requests},\n\t\t{labels, [type, packing]},\n\t\t{help, \"The number of packing requests received. The type label indicates what \"\n\t\t\t\t\"type of operation was requested either: 'pack', 'unpack', or \"\n\t\t\t\t\"'unpack_sub_chunk'. The packing \"\n\t\t\t\t\"label differs based on the type. If type is 'unpack' then the packing label \"\n\t\t\t\t\"indicates the format of the chunk before being unpacked. If type is 'pack' \"\n\t\t\t\t\"then the packing label indicates the format that the chunk will be packed \"\n\t\t\t\t\"to. In all cases its value can be 'unpacked', 'unpacked_padded', \"\n\t\t\t\t\"'spora_2_5', 'spora_2_6', 'composite', or 'replica_2_9'.\"}\n\t]),\n\tprometheus_counter:new([\n\t\t{name, validating_packed_spora},\n\t\t{labels, [packing]},\n\t\t{help, \"The number of SPoRA solutions based on packed chunks entered validation. 
\"\n\t\t\t\t\"The packing label can be 'spora_2_5', 'spora_2_6', 'composite', \"\n\t\t\t\t\" or replica_2_9.\"}\n\t]),\n\n\tprometheus_gauge:new([{name, packing_buffer_size},\n\t\t{help, \"The number of chunks in the packing server queue.\"}]),\n\tprometheus_gauge:new([{name, chunk_cache_size},\n\t\t\t{help, \"The number of chunks scheduled for downloading.\"}]),\n\tprometheus_counter:new([{name, chunks_stored},\n\t\t{labels, [packing, store_id]},\n\t\t{help, \"The counter is incremented every time a chunk is written to \"\n\t\t\t\t\"chunk_storage.\"}]),\n\tprometheus_counter:new([{name, chunks_read},\n\t\t{labels, [store_id]},\n\t\t{help, \"The counter is incremented every time a chunk is read from \"\n\t\t\t\t\"chunk_storage.\"}]),\n\tprometheus_histogram:new([\n\t\t{name, chunk_read_rate_bytes_per_second},\n\t\t{labels, [store_id, type]},\n\t\t{buckets, [infinity]}, %% we don't care about the histogram portion\n\t\t{help, \"The rate, in bytes per second, at which chunks are read from storage. \"\n\t\t\t\t\"The type label can be 'raw' or 'repack'.\"}\n\t]),\n\tprometheus_histogram:new([\n\t\t{name, chunk_write_rate_bytes_per_second},\n\t\t{labels, [store_id, type]},\n\t\t{buckets, [infinity]}, %% we don't care about the histogram portion\n\t\t{help, \"The rate, in bytes per second, at which chunks are written to storage.\"}\n\t]),\n\n\tprometheus_gauge:new([{name, data_discovery},\n\t\t{labels, [type, store_id, stat]},\n\t\t{help, \"Tracks peer availability statistics from data discovery across buckets. \"\n\t\t\t\t\"'type' is 'normal' or 'footprint'. \"\n\t\t\t\t\"'stat' is 'num_peers', 'total_buckets', 'zero_peer_count', or 'healthy_peer_count'.\"}]),\n\n\tprometheus_counter:new([{name, sync_tasks},\n\t\t{labels, [state, peer]},\n\t\t{help, \"The number of syncing tasks. 'state' can be 'waiting_in', 'waiting_out', \"\n\t\t\t\t\"'queued_in', 'queued_out', 'dispatched', 'completed', \"\n\t\t\t\t\"'activate_footprint', or 'deactivate_footprint'. \"\n\t\t\t\t\" 'peer' is the peer the task is intended for.\"}]),\n\n\tprometheus_counter:new([{name, sync_chunks_skipped},\n\t\t{labels, [reason]},\n\t\t{help, \"The number of chunks skipped during syncing.\"}]),\n\n\tprometheus_gauge:new([{name, device_lock_status},\n\t\t{labels, [store_id, mode]},\n\t\t{help, \"The device lock status of the storage module. \"\n\t\t\t\t\"-1: off, 0: paused, 1: active, 2: complete -2: unknown\"}]),\n\tprometheus_gauge:new([{name, sync_intervals_queue_size},\n\t\t{labels, [store_id]},\n\t\t{help, \"The size of the syncing intervals queue.\"}]),\n\n\tprometheus_gauge:new([{name, repack_chunk_states},\n\t\t{labels, [store_id, type, state]},\n\t\t{help, \"The count of chunks in each state. 
'type' can be 'cache' or 'queue'.\"}]),\n\n\n\t%% ---------------------------------------------------------------------------------------\n\t%% Replica 2.9 metrics\n\t%% ---------------------------------------------------------------------------------------\n\tprometheus_counter:new([{name, replica_2_9_entropy_stored},\n\t\t{labels, [store_id]},\n\t\t{help, \"The number of bytes of replica.2.9 entropy written to chunk storage.\"}]),\n\tprometheus_counter:new([{name, replica_2_9_entropy_generated},\n\t\t{help, \"The number of bytes of replica.2.9 entropy generated.\"}]),\n\tprometheus_gauge:new([{name, replica_2_9_entropy_cache},\n\t\t{help, \"The size (in bytes) of the replica.2.9 entropy cache.\"}]),\n\tprometheus_counter:new([{name, replica_2_9_entropy_stats},\n\t\t{labels, [partition, stat]},\n\t\t{help, \"Count of different replica_2_9 entropy events: 'cache_hit', 'cache_miss', \"\n\t\t\t   \"'redundant'.\"}]),\n\tprometheus_histogram:new([\n\t\t{name, replica_2_9_entropy_duration_milliseconds},\n\t\t{buckets, [infinity]}, %% we don't care about the histogram portion\n\t\t{help, \"The time, in milliseconds, to generate 256 MiB of replica.2.9 entropy.\"}\n\t]),\n\n\t%% ---------------------------------------------------------------------------------------\n\t%% Pool related metrics\n\t%% ---------------------------------------------------------------------------------------\n\tprometheus_counter:new([\n\t\t{name, pool_job_request_count},\n\t\t{help, \"The number of requests to pool /job from start of arweave node\"}\n\t]),\n\n\tprometheus_counter:new([\n\t\t{name, pool_total_job_got_count},\n\t\t{help, \"The number of jobs received from /job requests.\"}\n\t]),\n\n\t%% ---------------------------------------------------------------------------------------\n\t%% Debug-only metrics\n\t%% ---------------------------------------------------------------------------------------\n\tprometheus_counter:new([{name, process_functions},\n\t\t\t{labels, [process]},\n\t\t\t{help, \"Sampling active functions. The 'process' label is a fully qualified \"\n\t\t\t\t\t\"function name with the format 'process~module:function/arith'. \"\n\t\t\t\t\t\"Only set when debug=true.\"}]),\n\t%% process_info gets unregistered and re-registered in ar_process_sampler.erl\n\tprometheus_gauge:new([{name, process_info},\n\t\t\t{labels, [process, type]},\n\t\t\t{help, \"Sampling info about active processes. Only set when debug=true.\"}]),\n\tprometheus_gauge:new([{name, scheduler_utilization},\n\t\t\t{labels, [type]},\n\t\t\t{help, \"Average scheduler utilization. `type` maps to the sched_type as defined here: \"\n\t\t\t\t\"https://www.erlang.org/doc/man/scheduler#type-sched_util_result. \"\n\t\t\t\t\"Only set when debug=true.\"}]),\n\tprometheus_gauge:new([{name, allocator},\n\t\t\t{labels, [type, instance, section, metric]},\n\t\t\t{help, \"Erlang VM memory allocator metrics. 
Only set when debug=true.\"}]).\n\nrecord_rate_metric(StartTime, Bytes, Metric, Labels) ->\n\tEndTime = erlang:monotonic_time(),\n\tElapsedTime =\n\t\terlang:convert_time_unit(EndTime - StartTime,\n\t\t\t\t\t\t\t\tnative,\n\t\t\t\t\t\t\t\tmicrosecond),\n\t%% bytes per second\n\tRate =\n\t\tcase ElapsedTime > 0 of\n\t\t\ttrue -> 1_000_000 * Bytes / ElapsedTime;\n\t\t\tfalse -> 0\n\t\tend,\n\tprometheus_histogram:observe(Metric, Labels, Rate).\n\n\n%% @doc Return the HTTP status class label for cowboy_requests_total and gun_requests_total\n%% metrics.\nget_status_class({ok, {{Status, _}, _, _, _, _}}) ->\n\tget_status_class(Status);\nget_status_class({error, connection_closed}) ->\n\t\"connection_closed\";\nget_status_class({error, connect_timeout}) ->\n\t\"connect_timeout\";\nget_status_class({error, timeout}) ->\n\t\"timeout\";\nget_status_class({error,{shutdown,timeout}}) ->\n\t\"shutdown_timeout\";\nget_status_class({error, econnrefused}) ->\n\t\"econnrefused\";\nget_status_class({error, {shutdown,econnrefused}}) ->\n\t\"shutdown_econnrefused\";\nget_status_class({error, {shutdown,ehostunreach}}) ->\n\t\"shutdown_ehostunreach\";\nget_status_class({error, {shutdown,normal}}) ->\n\t\"shutdown_normal\";\nget_status_class({error, {closed,_}}) ->\n\t\"closed\";\nget_status_class({error, noproc}) ->\n\t\"noproc\";\nget_status_class({error, {down,_}}) ->\n\t\"down\";\nget_status_class({error, {stream_error,_}}) ->\n\t\"stream_error\";\nget_status_class(Data) when is_integer(Data), Data > 0 ->\n\tinteger_to_list(Data);\nget_status_class(Data) when is_binary(Data) ->\n\tcase catch binary_to_integer(Data) of\n\t\t{_, _} ->\n\t\t\t?LOG_DEBUG([{event, unknown_status}, {status, Data}]),\n\t\t\t\"unknown\";\n\t\tStatus ->\n\t\t\tget_status_class(Status)\n\tend;\nget_status_class(Data) when is_atom(Data) ->\n\tatom_to_list(Data);\nget_status_class(Data) ->\n\t?LOG_DEBUG([{event, unknown_status}, {status, Data}]),\n\t\"unknown\".\n"
  },
  {
    "path": "apps/arweave/src/ar_metrics_collector.erl",
    "content": "-module(ar_metrics_collector).\n\n-behaviour(prometheus_collector).\n\n-export([\n\tderegister_cleanup/1,\n\tcollect_mf/2\n]).\n\n-import(prometheus_model_helpers, [create_mf/4]).\n\n-include_lib(\"prometheus/include/prometheus.hrl\").\n-define(METRIC_NAME_PREFIX, \"arweave_\").\n\n%% ===================================================================\n%% API\n%% ===================================================================\n\n%% called to collect Metric Families\n-spec collect_mf(_Registry, Callback) -> ok when\n\t_Registry :: prometheus_registry:registry(),\n\tCallback :: prometheus_collector:callback().\ncollect_mf(_Registry, Callback) ->\n\tMetrics = metrics(),\n\t[add_metric_family(Metric, Callback) || Metric <- Metrics],\n\tok.\n\n%% called when collector deregistered\nderegister_cleanup(_Registry) -> ok.\n\n%% ===================================================================\n%% Private functions\n%% ===================================================================\n\nadd_metric_family({Name, Type, Help, Metrics}, Callback) ->\n\tCallback(create_mf(?METRIC_NAME(Name), Help, Type, Metrics)).\n\nmetrics() ->\n\tRanchInfo = ranch:info(),\n\t[\n\t {storage_blocks_stored, gauge,\n\t\t\"Blocks stored\",\n\t\tcase ets:lookup(ar_header_sync, synced_blocks) of [] -> 0; [{_, N}] -> N end},\n\t {arnode_queue_len, gauge,\n\t\t\"Size of message queue on ar_node_worker\",\n\t\telement(2, erlang:process_info(whereis(ar_node_worker), message_queue_len))},\n\t {arbridge_queue_len, gauge,\n\t\t\"Size of message queue on ar_bridge\",\n\t\telement(2, erlang:process_info(whereis(ar_bridge), message_queue_len))},\n\t {ignored_ids_len, gauge,\n\t\t\"Size of table of Ignored/already seen IDs:\",\n\t\tets:info(ignored_ids, size)},\n\t {ar_data_discovery_bytes_total, gauge, \"ar_data_discovery process memory\",\n\t\tget_process_memory(ar_data_discovery)},\n\t {ar_node_worker_bytes_total, gauge, \"ar_node_worker process memory\",\n\t\tget_process_memory(ar_node_worker)},\n\t {ar_header_sync_bytes_total, gauge, \"ar_header_sync process memory\",\n\t\tget_process_memory(ar_header_sync)},\n\t {ar_wallets_bytes_total, gauge, \"ar_wallets process memory\",\n\t\tget_process_memory(ar_wallets)},\n         {ar_http_iface_listener_ranch_max_connections, gauge, \"Maximum number of Ranch connections\",\n          get_ranch_max_connections(RanchInfo, ar_http_iface_listener)},\n         {ar_http_iface_listener_ranch_active_connections, gauge, \"Currently active Ranch connections\",\n          get_ranch_active_connections(RanchInfo, ar_http_iface_listener)}\n\t].\n\nget_process_memory(Name) ->\n\tcase whereis(Name) of\n\t\tundefined ->\n\t\t\t0;\n\t\tPID ->\n\t\t\t{memory, Memory} = erlang:process_info(PID, memory),\n\t\t\tMemory\n\tend.\n\nget_ranch_max_connections(RInfo, Name) ->\n    get_ranch_info_value(RInfo, Name, max_connections).\n\nget_ranch_active_connections(RInfo, Name) ->\n    get_ranch_info_value(RInfo, Name, active_connections).\n\nget_ranch_info_value(RInfo, Name, Key) ->\n    PoolDetails = proplists:get_value(Name, RInfo, []),\n    %% Signal error condition with -1\n    proplists:get_value(Key, PoolDetails, -1).\n"
  },
  {
    "path": "apps/arweave/src/ar_mine_randomx.erl",
    "content": "-module(ar_mine_randomx).\n\n-export([init_fast/3, init_light/2, info/1, hash/2, hash/5,\n\t\trandomx_encrypt_chunk/4,\n\t\trandomx_decrypt_chunk/5,\n\t\trandomx_decrypt_sub_chunk/5,\n\t\trandomx_reencrypt_chunk/7,\n\t\trandomx_generate_replica_2_9_entropy/2,\n\t\trandomx_encrypt_replica_2_9_sub_chunk/1,\n\t\trandomx_decrypt_replica_2_9_sub_chunk/1,\n\t\texor_sub_chunk/2]).\n\n%% These exports are required for the STUB mode, where these functions are unused.\n%% Also, some of these functions are used in ar_mine_randomx_tests.\n-export([jit/0, large_pages/0, hardware_aes/0, init_fast2/5, init_light2/4]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave/include/ar_consensus.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n-ifdef(STUB_RANDOMX).\ninit_fast(RxMode, Key, _Threads) ->\n\t{RxMode, {stub_state, Key}}.\ninit_light(RxMode, Key) ->\n\t{RxMode, {stub_state, Key}}.\n-else.\ninit_fast(RxMode, Key, Threads) ->\n\tinit_fast2(RxMode, Key, jit(), large_pages(), Threads).\ninit_light(RxMode, Key) ->\n\tinit_light2(RxMode, jit(), large_pages(), Key).\n-endif.\n\ninfo(State) ->\n\tinfo2(State).\n\nhash(State, Data) ->\n\thash(State, Data, jit(), large_pages(), hardware_aes()).\n\nhash(State, Data, JIT, LargePages, HardwareAES) ->\n\thash2(State, Data, JIT, LargePages, HardwareAES).\n\nrandomx_encrypt_chunk(Packing, RandomxState, Key, Chunk) ->\n\tcase randomx_encrypt_chunk2(Packing, RandomxState, Key, Chunk) of\n\t\t{error, invalid_randomx_mode} ->\n\t\t\t{error, invalid_randomx_mode};\n\t\t{error, Error} ->\n\t\t\t%% All other errors are from the NIF, so we treat as an exception\n\t\t\t{exception, Error};\n\t\tReply ->\n\t\t\tReply\n\tend.\n\nrandomx_decrypt_chunk(Packing, RandomxState, Key, Chunk, ChunkSize) ->\n\tPackedSize = byte_size(Chunk),\n\t%% For the spora_2_6 and composite packing schemes we want to confirm\n\t%% the padding in the unpacked chunk is all zeros.\n\t%% To do that we pass in the maximum chunk size (?DATA_CHUNK_SIZE) to prevent the NIF\n\t%% from removing the padding. 
We can then validate the padding and remove it in\n\t%% ar_packing_server:unpad_chunk/4.\n\tSize = case Packing of\n\t\t{spora_2_6, _Addr} ->\n\t\t\t?DATA_CHUNK_SIZE;\n\t\t{composite, _Addr, _PackingDifficulty} ->\n\t\t\t?DATA_CHUNK_SIZE;\n\t\t_ ->\n\t\t\tChunkSize\n\tend,\n\tcase randomx_decrypt_chunk2(RandomxState, Key, Chunk, Size, Packing) of\n\t\t{error, invalid_randomx_mode} ->\n\t\t\t{error, invalid_randomx_mode};\n\t\t{error, Error} ->\n\t\t\t%% All other errors are from the NIF, so we treat as an exception\n\t\t\t{exception, Error};\n\t\t{ok, Unpacked} ->\n\t\t\t%% Validating the padding (for spora_2_6 and composite) and then remove it.\n\t\t\tcase ar_packing_server:unpad_chunk(Packing, Unpacked, ChunkSize, PackedSize) of\n\t\t\t\terror ->\n\t\t\t\t\t?LOG_WARNING([{event, unpad_chunk_error},\n\t\t\t\t\t\t\t{packed_size, PackedSize},\n\t\t\t\t\t\t\t{chunk_size, ChunkSize}]),\n\t\t\t\t\t{error, invalid_padding};\n\t\t\t\tUnpackedChunk ->\n\t\t\t\t\t{ok, UnpackedChunk}\n\t\t\tend\n\tend.\n\nrandomx_decrypt_sub_chunk(Packing, RandomxState, Key, Chunk, SubChunkStartOffset) ->\n\tcase randomx_decrypt_sub_chunk2(Packing, RandomxState, Key, Chunk, SubChunkStartOffset) of\n\t\t{error, invalid_randomx_mode} ->\n\t\t\t{error, invalid_randomx_mode};\n\t\t{error, Error} ->\n\t\t\t%% All other errors are from the NIF, so we treat as an exception\n\t\t\t{exception, Error};\n\t\tReply ->\n\t\t\tReply\n\tend.\n\nrandomx_reencrypt_chunk(SourcePacking, TargetPacking,\n\t\tRandomxState, UnpackKey, PackKey, Chunk, ChunkSize) ->\n\trandomx_reencrypt_chunk2(SourcePacking, TargetPacking, \n\t\tRandomxState, UnpackKey, PackKey, Chunk, ChunkSize).\n\n%%% AR_TEST implementation\nrandomx_generate_replica_2_9_entropy({_, {stub_state, _}}, Key) ->\n\t%% Make it fast, deterministic, and scoped by Key.\n\t%% Note that ?REPLICA_2_9_ENTROPY_SIZE is\n\t%% reduced significantly in the AR_TEST mode.\n\tSubChunkCount = ar_block:get_sub_chunks_per_replica_2_9_entropy(),\n\tlists:foldl(\n\t\tfun(N1, Acc) ->\n\t\t\tlists:foldl(\n\t\t\t\tfun(N2, Acc2) ->\n\t\t\t\t\t<< (crypto:hash(sha256, << N1:16, N2:16, Key/binary >>))/binary,\n\t\t\t\t\t\tAcc2/binary >>\n\t\t\t\tend,\n\t\t\t\tAcc,\n\t\t\t\tlists:seq(1, ?COMPOSITE_PACKING_SUB_CHUNK_SIZE div 32)\n\t\t\t)\n\t\tend,\n\t\t<<>>,\n\t\tlists:seq(1, SubChunkCount)\n\t);\n\n%% Non-AR_TEST implementation\nrandomx_generate_replica_2_9_entropy({rxsquared, RandomxState}, Key) ->\n\t{ok, EntropyFused} = ar_rxsquared_nif:rsp_fused_entropy_nif(\n\t\tRandomxState,\n\t\t?COMPOSITE_PACKING_SUB_CHUNK_COUNT,\n\t\t?COMPOSITE_PACKING_SUB_CHUNK_SIZE,\n\t\t?REPLICA_2_9_RANDOMX_LANE_COUNT,\n\t\t?REPLICA_2_9_RANDOMX_DEPTH,\n\t\tjit(),\n\t\tlarge_pages(),\n\t\thardware_aes(),\n\t\t?REPLICA_2_9_RANDOMX_PROGRAM_COUNT,\n\t\tKey\n\t),\n\tEntropyFused.\n\nrandomx_decrypt_replica_2_9_sub_chunk(\n\t\t{_PackingState, Entropy, SubChunk, EntropySubChunkIndex}) ->\n\tSubChunkSize = ?COMPOSITE_PACKING_SUB_CHUNK_SIZE,\n\tEntropyPart = binary:part(Entropy, EntropySubChunkIndex * SubChunkSize, SubChunkSize),\n\t{ok, exor_sub_chunk(SubChunk, EntropyPart)}.\n\nrandomx_encrypt_replica_2_9_sub_chunk(\n\t\t{_PackingState, Entropy, SubChunk, EntropySubChunkIndex}) ->\n\tSubChunkSize = ?COMPOSITE_PACKING_SUB_CHUNK_SIZE,\n\tEntropyPart = binary:part(Entropy, EntropySubChunkIndex * SubChunkSize, SubChunkSize),\n\t{ok, exor_sub_chunk(SubChunk, EntropyPart)}.\n\n%% @doc Encipher/decipher the given sub-chunk using the given 2.9 entropy.\n-spec exor_sub_chunk(\n\t\tSubChunk :: binary(),\n\t\tEntropyPart :: binary()\n) -> 
binary().\nexor_sub_chunk(SubChunk, EntropyPart) ->\n\tcrypto:exor(SubChunk, EntropyPart).\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\n%% -------------------------------------------------------------------------------------------\n%% Helper functions\n%% -------------------------------------------------------------------------------------------\npacking_rounds(spora_2_5) ->\n\t?RANDOMX_PACKING_ROUNDS;\npacking_rounds({spora_2_6, _Addr}) ->\n\t?RANDOMX_PACKING_ROUNDS_2_6.\n\njit() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tcase lists:member(randomx_jit, Config#config.disable) of\n\t\ttrue ->\n\t\t\t0;\n\t\t_ ->\n\t\t\t1\n\tend.\n\nlarge_pages() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tcase lists:member(randomx_large_pages, Config#config.enable) of\n\t\ttrue ->\n\t\t\t1;\n\t\t_ ->\n\t\t\t0\n\tend.\n\nhardware_aes() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tcase lists:member(randomx_hardware_aes, Config#config.disable) of\n\t\ttrue ->\n\t\t\t0;\n\t\t_ ->\n\t\t\t1\n\tend.\n\nsplit_into_sub_chunks(Chunk) ->\n\tsplit_into_sub_chunks(Chunk, 0).\n\nsplit_into_sub_chunks(<<>>, _StartOffset) ->\n\t[];\nsplit_into_sub_chunks(<< SubChunk:8192/binary, Rest/binary >>, StartOffset) ->\n\t[{StartOffset, SubChunk} | split_into_sub_chunks(Rest, StartOffset + 8192)].\n\n\ninit_fast2(rx512, Key, JIT, LargePages, Threads) ->\n\t{ok, FastState} = ar_rx512_nif:rx512_init_nif(Key, ?RANDOMX_HASHING_MODE_FAST, JIT, LargePages, Threads),\n\t{rx512, FastState};\ninit_fast2(rx4096, Key, JIT, LargePages, Threads) ->\n\t{ok, FastState} = ar_rx4096_nif:rx4096_init_nif(Key, ?RANDOMX_HASHING_MODE_FAST, JIT, LargePages, Threads),\n\t{rx4096, FastState};\ninit_fast2(rxsquared, Key, JIT, LargePages, Threads) ->\n\t{ok, FastState} = ar_rxsquared_nif:rxsquared_init_nif(Key, ?RANDOMX_HASHING_MODE_FAST, JIT, LargePages, Threads),\n\t{rxsquared, FastState};\ninit_fast2(RxMode, _Key, _JIT, _LargePages, _Threads) ->\n\t?LOG_ERROR([{event, invalid_randomx_mode}, {mode, RxMode}]),\n\t{error, invalid_randomx_mode}.\ninit_light2(rx512, Key, JIT, LargePages) ->\n\t{ok, LightState} = ar_rx512_nif:rx512_init_nif(Key, ?RANDOMX_HASHING_MODE_LIGHT, JIT, LargePages, 0),\n\t{rx512, LightState};\ninit_light2(rx4096, Key, JIT, LargePages) ->\n\t{ok, LightState} = ar_rx4096_nif:rx4096_init_nif(Key, ?RANDOMX_HASHING_MODE_LIGHT, JIT, LargePages, 0),\n\t{rx4096, LightState};\ninit_light2(rxsquared, Key, JIT, LargePages) ->\n\t{ok, LightState} = ar_rxsquared_nif:rxsquared_init_nif(Key, ?RANDOMX_HASHING_MODE_LIGHT, JIT, LargePages, 0),\n\t{rxsquared, LightState};\ninit_light2(RxMode, _Key, _JIT, _LargePages) ->\n\t?LOG_ERROR([{event, invalid_randomx_mode}, {mode, RxMode}]),\n\t{error, invalid_randomx_mode}.\n\ninfo2({rx512, State}) ->\n\tar_rx512_nif:rx512_info_nif(State);\ninfo2({rx4096, State}) ->\n\tar_rx4096_nif:rx4096_info_nif(State);\ninfo2({rxsquared, State}) ->\n\tar_rxsquared_nif:rxsquared_info_nif(State);\ninfo2(_) ->\n\t{error, invalid_randomx_mode}.\n\n%% -------------------------------------------------------------------------------------------\n%% hash2 and randomx_[encrypt|decrypt|reencrypt]_chunk2\n%% STUB implementation, used in tests, is called when State is {stub_state, Key}\n%% Otherwise, NIF implementation is used\n%% We set it up this way so that we can have some tests trigger the NIF implementation\n%% 
-------------------------------------------------------------------------------------------\n%% STUB implementation\nhash2({_, {stub_state, Key}}, Data, _JIT, _LargePages, _HardwareAES) ->\n\tcrypto:hash(sha256, << Key/binary, Data/binary >>);\n%% Non-STUB implementation\nhash2({rx512, State}, Data, JIT, LargePages, HardwareAES) ->\n\t{ok, Hash} = ar_rx512_nif:rx512_hash_nif(State, Data, JIT, LargePages, HardwareAES),\n\tHash;\nhash2({rx4096, State}, Data, JIT, LargePages, HardwareAES) ->\n\t{ok, Hash} = ar_rx4096_nif:rx4096_hash_nif(State, Data, JIT, LargePages, HardwareAES),\n\tHash;\nhash2({rxsquared, State}, Data, JIT, LargePages, HardwareAES) ->\n\t{ok, Hash} = ar_rxsquared_nif:rxsquared_hash_nif(State, Data, JIT, LargePages, HardwareAES),\n\tHash;\nhash2(_BadState, _Data, _JIT, _LargePages, _HardwareAES) ->\n\t{error, invalid_randomx_mode}.\n\n%% STUB implementation\nrandomx_decrypt_chunk2({_, {stub_state, _}}, Key, Chunk, _ChunkSize,\n\t\t{composite, _, PackingDifficulty} = _Packing) ->\n\tOptions = [{encrypt, false}],\n\tIV = binary:part(Key, {0, 16}),\n\tSubChunks = split_into_sub_chunks(Chunk),\n\t{ok, iolist_to_binary(lists:map(\n\t\tfun({SubChunkStartOffset, SubChunk}) ->\n\t\t\tKey2 = crypto:hash(sha256, << Key/binary, SubChunkStartOffset:24 >>),\n\t\t\tlists:foldl(\n\t\t\t\tfun(_, Acc) ->\n\t\t\t\t\tcrypto:crypto_one_time(aes_256_cbc, Key2, IV, Acc, Options)\n\t\t\t\tend,\n\t\t\t\tSubChunk,\n\t\t\t\tlists:seq(1, PackingDifficulty)\n\t\t\t)\n\t\tend,\n\t\tSubChunks))};\nrandomx_decrypt_chunk2({_, {stub_state, _}}, Key, Chunk, _ChunkSize, _Packing) ->\n\tOptions = [{encrypt, false}],\n\tIV = binary:part(Key, {0, 16}),\n\t{ok, crypto:crypto_one_time(aes_256_cbc, Key, IV, Chunk, Options)};\n%% Non-STUB implementation\nrandomx_decrypt_chunk2({rx512, RandomxState}, Key, Chunk, ChunkSize, spora_2_5) ->\n\tar_rx512_nif:rx512_decrypt_chunk_nif(RandomxState, Key, Chunk, ChunkSize, ?RANDOMX_PACKING_ROUNDS,\n\t\t\tjit(), large_pages(), hardware_aes());\nrandomx_decrypt_chunk2({rx512, RandomxState}, Key, Chunk, ChunkSize, {spora_2_6, _Addr}) ->\n\tar_rx512_nif:rx512_decrypt_chunk_nif(RandomxState, Key, Chunk, ChunkSize, ?RANDOMX_PACKING_ROUNDS_2_6,\n\t\t\tjit(), large_pages(), hardware_aes());\nrandomx_decrypt_chunk2({rx4096, RandomxState}, Key, Chunk, ChunkSize,\n\t\t{composite, _Addr, PackingDifficulty}) ->\n\tar_rx4096_nif:rx4096_decrypt_composite_chunk_nif(RandomxState, Key, Chunk, ChunkSize,\n\t\t\tjit(), large_pages(), hardware_aes(), ?COMPOSITE_PACKING_ROUND_COUNT,\n\t\t\tPackingDifficulty, ?COMPOSITE_PACKING_SUB_CHUNK_COUNT);\nrandomx_decrypt_chunk2(_BadState, _Key, _Chunk, _ChunkSize, _Packing) ->\n\t{error, invalid_randomx_mode}.\n\n%% STUB implementation\nrandomx_decrypt_sub_chunk2(Packing, {_, {stub_state, _}}, Key, Chunk, SubChunkStartOffset) ->\n\t{_, _, Iterations} = Packing,\n\tOptions = [{encrypt, false}],\n\tKey2 = crypto:hash(sha256, << Key/binary, SubChunkStartOffset:24 >>),\n\tIV = binary:part(Key, {0, 16}),\n\t{ok, lists:foldl(fun(_, Acc) ->\n\t\t\tcrypto:crypto_one_time(aes_256_cbc, Key2, IV, Acc, Options)\n\t\tend, Chunk, lists:seq(1, Iterations))};\n%% Non-STUB implementation\nrandomx_decrypt_sub_chunk2(Packing, {rx4096, RandomxState}, Key, Chunk, SubChunkStartOffset) ->\n\t{_, _, IterationCount} = Packing,\n\tRoundCount = ?COMPOSITE_PACKING_ROUND_COUNT,\n\tOutSize = ?COMPOSITE_PACKING_SUB_CHUNK_SIZE,\n\tar_rx4096_nif:rx4096_decrypt_composite_sub_chunk_nif(RandomxState, Key, Chunk, OutSize,\n\t\tjit(), large_pages(), hardware_aes(), RoundCount, IterationCount, 
SubChunkStartOffset);\nrandomx_decrypt_sub_chunk2(_Packing, _BadState, _Key, _Chunk, _SubChunkStartOffset) ->\n\t{error, invalid_randomx_mode}.\n\n%% STUB implementation\nrandomx_encrypt_chunk2({composite, _, PackingDifficulty} = _Packing, {_, {stub_state, _}}, Key, Chunk) ->\n\tOptions = [{encrypt, true}, {padding, zero}],\n\tIV = binary:part(Key, {0, 16}),\n\tSubChunks = split_into_sub_chunks(ar_packing_server:pad_chunk(Chunk)),\n\t{ok, iolist_to_binary(lists:map(\n\t\t\tfun({SubChunkStartOffset, SubChunk}) ->\n\t\t\t\tKey2 = crypto:hash(sha256, << Key/binary, SubChunkStartOffset:24 >>),\n\t\t\t\tlists:foldl(\n\t\t\t\t\tfun(_, Acc) ->\n\t\t\t\t\t\tcrypto:crypto_one_time(aes_256_cbc, Key2, IV, Acc, Options)\n\t\t\t\t\tend,\n\t\t\t\t\tSubChunk,\n\t\t\t\t\tlists:seq(1, PackingDifficulty)\n\t\t\t\t)\n\t\t\tend,\n\t\t\tSubChunks))};\nrandomx_encrypt_chunk2(_Packing, {_, {stub_state, _}}, Key, Chunk) ->\n\tOptions = [{encrypt, true}, {padding, zero}],\n\tIV = binary:part(Key, {0, 16}),\n\t{ok, crypto:crypto_one_time(aes_256_cbc, Key, IV,\n\t\t\tar_packing_server:pad_chunk(Chunk), Options)};\n%% Non-STUB implementation\nrandomx_encrypt_chunk2(spora_2_5, {rx512, RandomxState}, Key, Chunk) ->\n\tar_rx512_nif:rx512_encrypt_chunk_nif(RandomxState, Key, Chunk, ?RANDOMX_PACKING_ROUNDS,\n\t\t\tjit(), large_pages(), hardware_aes());\nrandomx_encrypt_chunk2({spora_2_6, _Addr}, {rx512, RandomxState}, Key, Chunk) ->\n\tar_rx512_nif:rx512_encrypt_chunk_nif(RandomxState, Key, Chunk, ?RANDOMX_PACKING_ROUNDS_2_6,\n\t\t\tjit(), large_pages(), hardware_aes());\nrandomx_encrypt_chunk2({composite, _Addr, PackingDifficulty}, {rx4096, RandomxState}, Key, Chunk) ->\n\tar_rx4096_nif:rx4096_encrypt_composite_chunk_nif(RandomxState, Key, Chunk,\n\t\t\tjit(), large_pages(), hardware_aes(), ?COMPOSITE_PACKING_ROUND_COUNT,\n\t\t\tPackingDifficulty, ?COMPOSITE_PACKING_SUB_CHUNK_COUNT);\nrandomx_encrypt_chunk2(_Packing, _BadState, _Key, _Chunk) ->\n\t{error, invalid_randomx_mode}.\n\n%% STUB implementation\nrandomx_reencrypt_chunk2(SourcePacking, TargetPacking,\n\t\t{_, {stub_state, _}} = State, UnpackKey, PackKey, Chunk, ChunkSize) ->\n\tcase randomx_decrypt_chunk(SourcePacking, State, UnpackKey, Chunk, ChunkSize) of\n\t\t{ok, UnpackedChunk} ->\n\t\t\t{ok, RepackedChunk} = randomx_encrypt_chunk2(TargetPacking, State, PackKey,\n\t\t\t\t\tar_packing_server:pad_chunk(UnpackedChunk)),\n\t\t\tcase {SourcePacking, TargetPacking} of\n\t\t\t\t{{composite, Addr, _}, {composite, Addr, _}} ->\n\t\t\t\t\t%% See the same function defined for the non-STUB mode.\n\t\t\t\t\t{ok, RepackedChunk, none};\n\t\t\t\t_ ->\n\t\t\t\t\t{ok, RepackedChunk, UnpackedChunk}\n\t\t\tend;\n\t\tError ->\n\t\t\tError\n\tend;\n%% Non-STUB implementation\nrandomx_reencrypt_chunk2({composite, Addr1, PackingDifficulty1},\n\t\t{composite, Addr2, PackingDifficulty2},\n\t\t{rx4096, RandomxState}, UnpackKey, PackKey, Chunk, ChunkSize) ->\n\tcase ar_rx4096_nif:rx4096_reencrypt_composite_chunk_nif(RandomxState, UnpackKey,\n\t\t\tPackKey, Chunk, jit(), large_pages(), hardware_aes(),\n\t\t\t?COMPOSITE_PACKING_ROUND_COUNT, ?COMPOSITE_PACKING_ROUND_COUNT,\n\t\t\tPackingDifficulty1, PackingDifficulty2,\n\t\t\t?COMPOSITE_PACKING_SUB_CHUNK_COUNT, ?COMPOSITE_PACKING_SUB_CHUNK_COUNT) of\n\t\t{ok, Repacked, RepackInput} ->\n\t\t\tcase Addr1 == Addr2 of\n\t\t\t\ttrue ->\n\t\t\t\t\t%% When the addresses match, we do not have to unpack the chunk - we may\n\t\t\t\t\t%% simply pack the missing iterations so RepackInput is not the unpacked\n\t\t\t\t\t%% chunk and we return none 
instead. If the caller needs the unpacked\n\t\t\t\t\t%% chunk as well, they need to make an extra call.\n\t\t\t\t\t{ok, Repacked, none};\n\t\t\t\tfalse ->\n\t\t\t\t\t%% RepackInput is the unpacked chunk - return it.\n\t\t\t\t\tUnpadded = ar_packing_server:unpad_chunk(RepackInput, ChunkSize,\n\t\t\t\t\t\t\t?DATA_CHUNK_SIZE),\n\t\t\t\t\t{ok, Repacked, Unpadded}\n\t\t\tend;\n\t\t{error, Error} ->\n\t\t\t{exception, Error};\n\t\tReply ->\n\t\t\tReply\n\tend;\nrandomx_reencrypt_chunk2(_SourcePacking, {composite, _Addr, _PackingDifficulty},\n\t\t_RandomxState, _UnpackKey, _PackKey, _Chunk, _ChunkSize) ->\n\t{error, invalid_reencrypt_packing};\nrandomx_reencrypt_chunk2(SourcePacking, TargetPacking,\n\t\t{rx512, RandomxState}, UnpackKey, PackKey, Chunk, ChunkSize) ->\n\tUnpackRounds = packing_rounds(SourcePacking),\n\tPackRounds = packing_rounds(TargetPacking),\n\tcase ar_rx512_nif:rx512_reencrypt_chunk_nif(RandomxState, UnpackKey, PackKey, Chunk,\n\t\t\tChunkSize, UnpackRounds, PackRounds, jit(), large_pages(), hardware_aes()) of\n\t\t{error, Error} ->\n\t\t\t{exception, Error};\n\t\tReply ->\n\t\t\tReply\n\tend;\nrandomx_reencrypt_chunk2(\n\t\t_SourcePacking, _TargetPacking, _BadState, _UnpackKey, _PackKey, _Chunk, _ChunkSize) ->\n\t{error, invalid_randomx_mode}.\n\n"
  },
  {
    "path": "apps/arweave/src/ar_mining_cache.erl",
    "content": "-module(ar_mining_cache).\n-include_lib(\"arweave/include/ar_mining_cache.hrl\").\n-include_lib(\"arweave/include/ar.hrl\").\n\n-export([\n\tnew/1, new/2, set_limit/2, get_limit/1,\n\tcache_size/1, actual_cache_size/1, available_size/1, reserved_size/1, reserved_size/2,\n\tadd_session/2, reserve_for_session/3, release_for_session/3, drop_session/2,\n\tsession_exists/2, get_sessions/1, with_cached_value/4\n]).\n\n-define(CACHE_SESSIONS_LIMIT, 4).\n\n%%%===================================================================\n%%% Public API.\n%%%===================================================================\n\n%% @doc Creates a new mining cache with a default limit of 0.\n-spec new(Name :: term()) ->\n\tCache :: #ar_mining_cache{}.\nnew(Name) -> #ar_mining_cache{name = Name}.\n\n%% @doc Creates a new mining cache with a given limit.\n-spec new(Name :: term(), Limit :: pos_integer()) ->\n\tCache :: #ar_mining_cache{}.\nnew(Name, Limit) -> #ar_mining_cache{name = Name, mining_cache_limit_bytes = Limit}.\n\n%% @doc Sets the limit for the mining cache.\n-spec set_limit(Limit :: pos_integer(), Cache :: #ar_mining_cache{}) ->\n\tCache :: #ar_mining_cache{}.\nset_limit(Limit, Cache) ->\n\tCache#ar_mining_cache{mining_cache_limit_bytes = Limit}.\n\n%% @doc Returns the limit for the mining cache.\n-spec get_limit(Cache :: #ar_mining_cache{}) ->\n\tLimit :: non_neg_integer().\nget_limit(Cache) ->\n\tCache#ar_mining_cache.mining_cache_limit_bytes.\n\n%% @doc Returns the size of the cached data in bytes.\n%% Note, that cache size includes both the cached data and the reserved space for sessions.\n-spec cache_size(Cache :: #ar_mining_cache{}) ->\n\tSize :: non_neg_integer().\ncache_size(Cache) ->\n\tmaps:fold(\n\t\tfun(_, #ar_mining_cache_session{mining_cache_size_bytes = Size, reserved_mining_cache_bytes = ReservedSize}, Acc) ->\n\t\t\tAcc + Size + ReservedSize\n\t\tend,\n\t\t0,\n\t\tCache#ar_mining_cache.mining_cache_sessions\n\t).\n\n%% @doc Returns the size of the cached data in bytes.\n%% Note, that cache size includes both the cached data and the reserved space for sessions.\n-spec actual_cache_size(Cache :: #ar_mining_cache{}) ->\n\tSize :: non_neg_integer().\nactual_cache_size(Cache) ->\n\tmaps:fold(\n\t\tfun(_, #ar_mining_cache_session{mining_cache = MiningCache}, Acc) ->\n\t\t\tAcc + maps:fold(fun(_, CacheValue, Acc0) ->\n\t\t\t\tAcc0 + cached_value_size(CacheValue)\n\t\t\tend, 0, MiningCache)\n\t\tend,\n\t\t0,\n\t\tCache#ar_mining_cache.mining_cache_sessions\n\t).\n\n%% @doc Returns the available size for the mining cache.\n%% Note, that this value does not include the reserved space for sessions,\n%% as this space is considered already used.\n%% @see reserved_size/1,2\n%% @see cache_size/1\n-spec available_size(Cache :: #ar_mining_cache{}) ->\n\tSize :: non_neg_integer().\navailable_size(Cache) ->\n\tCache#ar_mining_cache.mining_cache_limit_bytes - cache_size(Cache).\n\n%% @doc Returns the reserved size for a cache.\n-spec reserved_size(Cache0 :: #ar_mining_cache{}) ->\n\t{ok, Size :: non_neg_integer()} | {error, Reason :: term()}.\nreserved_size(Cache0) ->\n\t{ok, lists:sum([\n\t\tbegin\n\t\t\t{ok, Size} = reserved_size(SessionKey, Cache0),\n\t\t\tSize\n\t\tend || SessionKey <- get_sessions(Cache0)\n\t])}.\n\n%% @doc Returns the reserved size for a session.\n-spec reserved_size(SessionKey :: term(), Cache0 :: #ar_mining_cache{}) ->\n\t{ok, Size :: non_neg_integer()} | {error, Reason :: term()}.\nreserved_size(SessionKey, Cache0) ->\n\tcase 
with_mining_cache_session(SessionKey, fun(Session) ->\n\t\t{ok, Session#ar_mining_cache_session.reserved_mining_cache_bytes, Session}\n\tend, Cache0) of\n\t\t{ok, Size, _Cache1} -> {ok, Size};\n\t\t{error, Reason} -> {error, Reason}\n\tend.\n\n%% @doc Adds a new mining cache session to the cache.\n%% If the cache limit is exceeded, the oldest session is dropped.\n-spec add_session(SessionKey :: term(), Cache0 :: #ar_mining_cache{}) ->\n\tCache1 :: #ar_mining_cache{}.\nadd_session(SessionKey, Cache0) ->\n\tcase maps:is_key(SessionKey, Cache0#ar_mining_cache.mining_cache_sessions) of\n\t\ttrue -> Cache0;\n\t\tfalse ->\n\t\t\tCache1 = Cache0#ar_mining_cache{\n\t\t\t\tmining_cache_sessions = maps:put(SessionKey, #ar_mining_cache_session{}, Cache0#ar_mining_cache.mining_cache_sessions),\n\t\t\t\tmining_cache_sessions_queue = queue:in(SessionKey, Cache0#ar_mining_cache.mining_cache_sessions_queue)\n\t\t\t},\n\t\t\tcase queue:len(Cache1#ar_mining_cache.mining_cache_sessions_queue) > ?CACHE_SESSIONS_LIMIT of\n\t\t\t\ttrue ->\n\t\t\t\t\t{{value, LastSessionKey}, Queue1} = queue:out(Cache1#ar_mining_cache.mining_cache_sessions_queue),\n\t\t\t\t\tCache2 = drop_session(LastSessionKey, Cache1),\n\t\t\t\t\t?LOG_DEBUG([\n\t\t\t\t\t\t{event, mining_cache_add_drop_session},\n\t\t\t\t\t\t{cache_name, Cache1#ar_mining_cache.name},\n\t\t\t\t\t\t{added_session_key, ar_nonce_limiter:encode_session_key(SessionKey)},\n\t\t\t\t\t\t{dropped_session_key, ar_nonce_limiter:encode_session_key(LastSessionKey)},\n\t\t\t\t\t\t{num_sessions, queue:len(Queue1)}]),\n\t\t\t\t\tCache2#ar_mining_cache{mining_cache_sessions_queue = Queue1};\n\t\t\t\tfalse ->\n\t\t\t\t\t?LOG_DEBUG([\n\t\t\t\t\t\t{event, mining_cache_add_session},\n\t\t\t\t\t\t{cache_name, Cache1#ar_mining_cache.name},\n\t\t\t\t\t\t{session_key, ar_nonce_limiter:encode_session_key(SessionKey)},\n\t\t\t\t\t\t{num_sessions,\n\t\t\t\t\t\t\tqueue:len(Cache1#ar_mining_cache.mining_cache_sessions_queue)}]),\n\t\t\t\t\tCache1\n\t\t\tend\n\tend.\n\n%% @doc Reserves a certain amount of space for a session.\n%% Note, that if the session already has a reserved amount of space, it will be\n%% added to the existing reserved space.\n-spec reserve_for_session(SessionKey :: term(), Size :: non_neg_integer(), Cache0 :: #ar_mining_cache{}) ->\n\t{ok, Cache1 :: #ar_mining_cache{}} | {error, Reason :: term()}.\nreserve_for_session(SessionKey, Size, Cache0) ->\n\tcase available_size(Cache0) < Size of\n\t\ttrue -> {error, cache_limit_exceeded};\n\t\tfalse ->\n\t\t\twith_mining_cache_session(SessionKey, fun(#ar_mining_cache_session{reserved_mining_cache_bytes = ReservedSize} = Session) ->\n\t\t\t\t{ok, Session#ar_mining_cache_session{reserved_mining_cache_bytes = ReservedSize + Size}}\n\t\t\tend, Cache0)\n\tend.\n\n%% @doc Releases the reserved space for a session.\n%% If the reserved space is less than the released size, the reserved space will be set to 0.\n-spec release_for_session(SessionKey :: term(), Size :: non_neg_integer(), Cache0 :: #ar_mining_cache{}) ->\n\t{ok, Cache1 :: #ar_mining_cache{}} | {error, Reason :: term()}.\nrelease_for_session(SessionKey, Size, Cache0) ->\n\twith_mining_cache_session(SessionKey, fun(#ar_mining_cache_session{reserved_mining_cache_bytes = ReservedSize} = Session) ->\n\t\t{ok, Session#ar_mining_cache_session{reserved_mining_cache_bytes = max(0, ReservedSize - Size)}}\n\tend, Cache0).\n\n%% @doc Drops a mining cache session from the cache.\n-spec drop_session(SessionKey :: term(), Cache0 :: #ar_mining_cache{}) ->\n\tCache1 :: 
#ar_mining_cache{}.\ndrop_session(SessionKey, Cache0) ->\n\tcase maps:take(SessionKey, Cache0#ar_mining_cache.mining_cache_sessions) of\n\t\t{Session, Sessions} ->\n\t\t\tmaybe_search_for_anomalies(SessionKey, Session),\n\t\t\tQueue0 = queue:filter(\n\t\t\t\tfun(SessionKey0) -> SessionKey0 =/= SessionKey end,\n\t\t\t\tCache0#ar_mining_cache.mining_cache_sessions_queue\n\t\t\t),\n\t\t\t?LOG_DEBUG([\n\t\t\t\t{event, mining_cache_drop_session},\n\t\t\t\t{cache_name, Cache0#ar_mining_cache.name},\n\t\t\t\t{session_key, ar_nonce_limiter:encode_session_key(SessionKey)},\n\t\t\t\t{num_sessions, queue:len(Queue0)}]),\n\t\t\tCache0#ar_mining_cache{\n\t\t\t\tmining_cache_sessions = Sessions,\n\t\t\t\tmining_cache_sessions_queue = Queue0\n\t\t\t};\n\t\t_ -> Cache0\n\tend.\n\n%% @doc Checks if a session exists in the cache.\n-spec session_exists(SessionKey :: term(), Cache0 :: #ar_mining_cache{}) ->\n\tExists :: boolean().\nsession_exists(SessionKey, Cache0) ->\n\tmaps:is_key(SessionKey, Cache0#ar_mining_cache.mining_cache_sessions).\n\n%% @doc Returns the list of sessions in the cache.\n%% The sessions are returned in the order they were added to the cache (oldest first).\n-spec get_sessions(Cache0 :: #ar_mining_cache{}) ->\n\tSessions :: [term()].\nget_sessions(Cache0) ->\n\tqueue:to_list(Cache0#ar_mining_cache.mining_cache_sessions_queue).\n\n%% @doc Maps a cached value for a session into a new value.\n%%\n%% This function will take care of the cache size and reserved space for the session.\n%% If the session does not contain a cached value for the given key, a fresh value is\n%% created, e.g. when the very first event for the `Key` is processed.\n%%\n%% The `Fun` must return one of the following:\n%% - `{ok, drop}`: drops the cached value\n%% - `{ok, drop, Size}`: drops the cached value and\n%%   additionally releases the reserved space (`Size` bytes)\n%% - `{ok, Value1}`: replaces the cached value\n%% - `{ok, Value1, Size}`: replaces the cached value and\n%%   additionally releases the reserved space (`Size` bytes)\n%% - `{error, Reason}`: returns an error\n%%\n%% If the returned value equals the argument passed into the `Fun`, the cache\n%% will not be changed. 
This implies that cache will not store the empty value.\n-spec with_cached_value(\n\tKey :: term(),\n\tSessionKey :: term(),\n\tCache0 :: #ar_mining_cache{},\n\tFun :: fun(\n\t\t(Value :: #ar_mining_cache_value{}) ->\n\t\t\t{ok, drop} |\n\t\t\t{ok, drop, Size :: non_neg_integer()} |\n\t\t\t{ok, Value1 :: #ar_mining_cache_value{}} |\n\t\t\t{ok, Value1 :: #ar_mining_cache_value{}, Size :: non_neg_integer()} |\n\t\t\t{error, Reason :: term()}\n\t)\n) ->\n\tResult :: {ok, Cache1 :: #ar_mining_cache{}} | {error, Reason :: term()}.\nwith_cached_value(Key, SessionKey, Cache0, Fun) ->\n\twith_mining_cache_session(SessionKey, fun(Session) ->\n\t\tValue0 = maps:get(Key, Session#ar_mining_cache_session.mining_cache, #ar_mining_cache_value{}),\n\t\tcase Fun(Value0) of\n\t\t\t{error, Reason} -> {error, Reason};\n\t\t\t{ok, drop} ->\n\t\t\t\t{ok, Session#ar_mining_cache_session{\n\t\t\t\t\tmining_cache = maps:remove(Key, Session#ar_mining_cache_session.mining_cache),\n\t\t\t\t\tmining_cache_size_bytes = max(0, Session#ar_mining_cache_session.mining_cache_size_bytes - cached_value_size(Value0))\n\t\t\t\t}};\n\t\t\t{ok, drop, ReservationSizeAdjustment} when ReservationSizeAdjustment < 0 ->\n\t\t\t\t{ok, Session#ar_mining_cache_session{\n\t\t\t\t\tmining_cache = maps:remove(Key, Session#ar_mining_cache_session.mining_cache),\n\t\t\t\t\treserved_mining_cache_bytes = max(0, Session#ar_mining_cache_session.reserved_mining_cache_bytes + ReservationSizeAdjustment),\n\t\t\t\t\tmining_cache_size_bytes = max(0, Session#ar_mining_cache_session.mining_cache_size_bytes - cached_value_size(Value0))\n\t\t\t\t}};\n\t\t\t{ok, Value0} -> {ok, Session};\n\t\t\t{ok, Value0, ReservationSizeAdjustment} when ReservationSizeAdjustment < 0 ->\n\t\t\t\t{ok, Session#ar_mining_cache_session{\n\t\t\t\t\treserved_mining_cache_bytes = max(0, Session#ar_mining_cache_session.reserved_mining_cache_bytes + ReservationSizeAdjustment)\n\t\t\t\t}};\n\t\t\t{ok, Value1} ->\n\t\t\t\tSizeDiff = cached_value_size(Value1) - cached_value_size(Value0),\n\t\t\t\tSessionAvailableSize = available_size(Cache0) + Session#ar_mining_cache_session.reserved_mining_cache_bytes,\n\t\t\t\tCacheLimit = get_limit(Cache0),\n\t\t\t\tcase SizeDiff > SessionAvailableSize of\n\t\t\t\t\ttrue when CacheLimit =/= 0 -> {error, cache_limit_exceeded};\n\t\t\t\t\t_ ->\n\t\t\t\t\t\t{ok, Session#ar_mining_cache_session{\n\t\t\t\t\t\t\tmining_cache = maps:put(Key, Value1, Session#ar_mining_cache_session.mining_cache),\n\t\t\t\t\t\t\treserved_mining_cache_bytes = max(0, Session#ar_mining_cache_session.reserved_mining_cache_bytes - SizeDiff),\n\t\t\t\t\t\t\tmining_cache_size_bytes = Session#ar_mining_cache_session.mining_cache_size_bytes + SizeDiff\n\t\t\t\t\t\t}}\n\t\t\t\tend;\n\t\t\t{ok, Value1, ReservationSizeAdjustment} when ReservationSizeAdjustment < 0 ->\n\t\t\t\tSizeDiff = cached_value_size(Value1) - cached_value_size(Value0),\n\t\t\t\tSessionAvailableSize = available_size(Cache0) + Session#ar_mining_cache_session.reserved_mining_cache_bytes,\n\t\t\t\tCacheLimit = get_limit(Cache0),\n\t\t\t\tcase SizeDiff > SessionAvailableSize of\n\t\t\t\t\ttrue when CacheLimit =/= 0 -> {error, cache_limit_exceeded};\n\t\t\t\t\t_ ->\n\t\t\t\t\t\t{ok, Session#ar_mining_cache_session{\n\t\t\t\t\t\t\tmining_cache = maps:put(Key, Value1, Session#ar_mining_cache_session.mining_cache),\n\t\t\t\t\t\t\treserved_mining_cache_bytes = max(0, Session#ar_mining_cache_session.reserved_mining_cache_bytes - SizeDiff + ReservationSizeAdjustment),\n\t\t\t\t\t\t\tmining_cache_size_bytes = 
Session#ar_mining_cache_session.mining_cache_size_bytes + SizeDiff\n\t\t\t\t\t\t}}\n\t\t\t\tend;\n\t\t\tOther ->\n\t\t\t\t?LOG_WARNING([\n\t\t\t\t\t{event, unexpected_return_value_from_with_cached_value},\n\t\t\t\t\t{cache_name, Cache0#ar_mining_cache.name}, {value, Other}]),\n\t\t\t\t{error, unexpected_return_value_from_with_cached_value}\n\t\tend\n\tend, Cache0).\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\n%% Returns the size of the cached data in bytes.\ncached_value_size(#ar_mining_cache_value{ chunk1 = Chunk1, chunk2 = Chunk2 }) ->\n  MaybeBinarySize = fun\n\t\t(undefined) -> 0;\n\t\t(Binary) -> byte_size(Binary)\n  end,\n  MaybeBinarySize(Chunk1) + MaybeBinarySize(Chunk2).\n\n%% Executes the `Fun` function with the chunk cache session as argument.\n%% If the session does not exist, it returns an error without executing the `Fun`.\n%% The `Fun` function should return either:\n%% - a new chunk cache session `{ok, Session}`, which will be used to replace the old one.\n%% - a new chunk cache session with return value `{ok, Return, Session}`, which will\n%%   be used to replace the old cache session and return a value to the caller.\n%% - an error `{error, Reason}` to report back to the caller.\nwith_mining_cache_session(SessionKey, Fun, Cache0) ->\n\tcase maps:is_key(SessionKey, Cache0#ar_mining_cache.mining_cache_sessions) of\n\t\ttrue ->\n\t\t\tcase Fun(maps:get(SessionKey, Cache0#ar_mining_cache.mining_cache_sessions)) of\n\t\t\t\t{ok, Return, Session1} -> {ok, Return, Cache0#ar_mining_cache{\n\t\t\t\t\tmining_cache_sessions = maps:put(SessionKey, Session1, Cache0#ar_mining_cache.mining_cache_sessions)\n\t\t\t\t}};\n\t\t\t\t{ok, Session1} -> {ok, Cache0#ar_mining_cache{\n\t\t\t\t\tmining_cache_sessions = maps:put(SessionKey, Session1, Cache0#ar_mining_cache.mining_cache_sessions)\n\t\t\t\t}};\n\t\t\t\t{error, Reason} -> {error, Reason}\n\t\t\tend;\n\t\tfalse ->\n\t\t\t{error, session_not_found}\n\tend.\n\n%% Searches for anomalies in the mining cache session.\n%% If the actual cache size is different from the expected cache size,\n%% it will log a warning.\n%% If the reserved cache size is different from 0, it will log a warning.\n%% It will also search for invalid cache values, e.g. 
missing chunks, or failed\n%% invariants.\n%%\n%% Perhaps it is a good idea to put this under a config flag, disabled by default.\nmaybe_search_for_anomalies(SessionKey, #ar_mining_cache_session{\n  mining_cache = MiningCache,\n  mining_cache_size_bytes = MiningCacheSize,\n  reserved_mining_cache_bytes = ReservedMiningCacheBytes\n}) ->\n\tActualCacheSize = maybe_search_for_anomalies_cache_values(SessionKey, MiningCache),\n\tcase {ActualCacheSize, MiningCacheSize} of\n\t\t{0, 0} -> ok;\n\t\t{EqualSize, EqualSize} -> ?LOG_WARNING([\n\t\t\t{event, mining_cache_anomaly}, {anomaly, cache_size_non_zero},\n\t\t\t{session_key, ar_nonce_limiter:encode_session_key(SessionKey)},\n\t\t\t{actual_size, ActualCacheSize}, {reported_size, MiningCacheSize}]);\n\t\t{_, _} -> ?LOG_WARNING([\n\t\t\t{event, mining_cache_anomaly}, {anomaly, cache_size_mismatch},\n\t\t\t{session_key, ar_nonce_limiter:encode_session_key(SessionKey)},\n\t\t\t{actual_size, ActualCacheSize}, {reported_size, MiningCacheSize}])\n\tend,\n\tcase ReservedMiningCacheBytes of\n\t\t0 -> ok;\n\t\t_ -> ?LOG_WARNING([\n\t\t\t{event, mining_cache_anomaly}, {anomaly, reserved_size_non_zero},\n\t\t\t{session_key, ar_nonce_limiter:encode_session_key(SessionKey)},\n\t\t\t{actual_size, ReservedMiningCacheBytes}, {expected_size, 0}])\n\tend;\nmaybe_search_for_anomalies(SessionKey, _InvalidSession) ->\n\t?LOG_ERROR([{event, mining_cache_anomaly}, {anomaly, invalid_session_type},\n\t\t{session_key, ar_nonce_limiter:encode_session_key(SessionKey)}]),\n\tok.\n\nmaybe_search_for_anomalies_cache_values(SessionKey, MiningCache) when is_map(MiningCache) ->\n\tOuterAcc0 = {_Anomalies = #{}, _ActualSize = 0},\n\t{Anomalies, ActualSize} = maps:fold(fun(Key, Value, {Anomalies0, ActualSize0}) ->\n\t\tAnomalies1 = lists:foldl(fun(Check, Anomalies) -> Check({Key, Value}, Anomalies) end, Anomalies0, [\n\t\t\tfun maybe_search_for_anomalies_cache_values_chunk1_failed/2,\n\t\t\tfun maybe_search_for_anomalies_cache_values_chunk1_stale/2,\n\t\t\tfun maybe_search_for_anomalies_cache_values_chunk2_failed/2,\n\t\t\tfun maybe_search_for_anomalies_cache_values_chunk2_stale/2,\n\t\t\tfun maybe_search_for_anomalies_cache_values_h1_missing/2,\n\t\t\tfun maybe_search_for_anomalies_cache_values_h2_missing/2,\n\t\t\tfun maybe_search_for_anomalies_cache_values_h1_passes_diff_checks_present/2\n\t\t]),\n\t\t{Anomalies1, ActualSize0 + cached_value_size(Value)}\n\tend, OuterAcc0, MiningCache),\n\tcase maps:size(Anomalies) > 0 of\n\t\ttrue -> ?LOG_WARNING([\n\t\t\t{event, mining_cache_anomaly}, {anomaly, cached_values_anomalies},\n\t\t\t{anomalies, Anomalies},\n\t\t\t{session_key, ar_nonce_limiter:encode_session_key(SessionKey)}]);\n\t\tfalse -> ok\n\tend,\n\tActualSize;\nmaybe_search_for_anomalies_cache_values(SessionKey, _InvalidCache) ->\n\t?LOG_ERROR([{event, mining_cache_anomaly}, {anomaly, invalid_cache_type},\n\t\t{session_key, ar_nonce_limiter:encode_session_key(SessionKey)}]),\n\t0.\n\nmaybe_search_for_anomalies_cache_values_chunk1_failed({\n\tKey,\n\t#ar_mining_cache_value{ chunk1 = undefined, chunk1_failed = false } = Value\n}, Anomalies) ->\n\tmaps:update_with(chunk1_failed, fun(V) -> V + 1 end, 1,\n\t\tmaps:update_with(chunk1_failed_sample, fun(V) -> V end, {Key, Value}, Anomalies));\nmaybe_search_for_anomalies_cache_values_chunk1_failed({_, _}, Anomalies) ->\n\tAnomalies.\n\nmaybe_search_for_anomalies_cache_values_chunk1_stale({\n\tKey,\n\t#ar_mining_cache_value{ chunk1 = Chunk1, chunk1_failed = true } = Value\n}, Anomalies) when undefined =/= Chunk1 
->\n\tmaps:update_with(chunk1_stale, fun(V) -> V + 1 end, 1,\n\t\tmaps:update_with(chunk1_stale_sample, fun(V) -> V end, {Key, Value}, Anomalies));\nmaybe_search_for_anomalies_cache_values_chunk1_stale({_, _}, Anomalies) ->\n\tAnomalies.\n\nmaybe_search_for_anomalies_cache_values_chunk2_failed({\n\tKey,\n\t#ar_mining_cache_value{ chunk2 = undefined, chunk2_failed = false } = Value\n}, Anomalies) ->\n\tmaps:update_with(chunk2_failed, fun(V) -> V + 1 end, 1,\n\t\tmaps:update_with(chunk2_failed_sample, fun(V) -> V end, {Key, Value}, Anomalies));\nmaybe_search_for_anomalies_cache_values_chunk2_failed({_, _}, Anomalies) ->\n\tAnomalies.\n\nmaybe_search_for_anomalies_cache_values_chunk2_stale({\n\tKey,\n\t#ar_mining_cache_value{ chunk2 = Chunk2, chunk2_failed = true } = Value\n}, Anomalies) when undefined =/= Chunk2 ->\n\tmaps:update_with(chunk2_stale, fun(V) -> V + 1 end, 1,\n\t\tmaps:update_with(chunk2_stale_sample, fun(V) -> V end, {Key, Value}, Anomalies));\nmaybe_search_for_anomalies_cache_values_chunk2_stale({_, _}, Anomalies) ->\n\tAnomalies.\n\nmaybe_search_for_anomalies_cache_values_h1_missing({\n\tKey,\n\t#ar_mining_cache_value{ h1 = undefined, chunk1 = Chunk1 } = Value\n}, Anomalies)\nwhen undefined =/= Chunk1 ->\n\tmaps:update_with(h1_missing, fun(V) -> V + 1 end, 1,\n\t\tmaps:update_with(h1_missing_sample, fun(V) -> V end, {Key, Value}, Anomalies));\nmaybe_search_for_anomalies_cache_values_h1_missing({_, _}, Anomalies) ->\n\tAnomalies.\n\nmaybe_search_for_anomalies_cache_values_h2_missing({\n\tKey,\n\t#ar_mining_cache_value{ h2 = undefined, chunk2 = Chunk2 } = Value\n}, Anomalies)\nwhen undefined =/= Chunk2 ->\n\tmaps:update_with(h2_missing, fun(V) -> V + 1 end, 1,\n\t\tmaps:update_with(h2_missing_sample, fun(V) -> V end, {Key, Value}, Anomalies));\nmaybe_search_for_anomalies_cache_values_h2_missing({_, _}, Anomalies) ->\n\tAnomalies.\n\nmaybe_search_for_anomalies_cache_values_h1_passes_diff_checks_present({\n\tKey,\n\t#ar_mining_cache_value{ h1_passes_diff_checks = true } = Value\n}, Anomalies) ->\n\tmaps:update_with(h1_passes_diff_checks_present, fun(V) -> V + 1 end, 1,\n\t\tmaps:update_with(h1_passes_diff_checks_present_sample, fun(V) -> V end, {Key, Value}, Anomalies));\nmaybe_search_for_anomalies_cache_values_h1_passes_diff_checks_present({_, _}, Anomalies) ->\n\tAnomalies.\n\n\n\n%%%===================================================================\n%%% Tests.\n%%%===================================================================\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\ncache_size_test() ->\n\tCache = new(test_cache),\n\t?assertEqual(0, cache_size(Cache)).\n\nadd_session_test() ->\n\tCache0 = new(test_cache),\n\tSessionKey0 = session0,\n\tCache1 = add_session(SessionKey0, Cache0),\n\t?assert(session_exists(SessionKey0, Cache1)),\n\t?assertEqual(0, cache_size(Cache1)),\n\tCache1 = add_session(SessionKey0, Cache1),\n\t?assertEqual([SessionKey0], get_sessions(Cache1)).\n\nadd_session_limit_test() ->\n\tCache0 = new(test_cache),\n\tCache1 = add_session(session0, Cache0),\n\tCache2 = add_session(session1, Cache1),\n\tCache3 = add_session(session2, Cache2),\n\tCache4 = add_session(session3, Cache3),\n\t?assertEqual([session0, session1, session2, session3], get_sessions(Cache4)),\n\t?assertEqual(0, cache_size(Cache4)),\n\tCache5 = add_session(session4, Cache4),\n\t?assertEqual([session1, session2, session3, session4], get_sessions(Cache5)),\n\t?assertEqual(0, cache_size(Cache5)).\n\nreserve_test() ->\n\tCache0 = new(test_cache, 1024),\n\tSessionKey0 = session0,\n\tChunkId = 
chunk0,\n\tData = <<\"chunk_data\">>,\n\tReservedSize = 100,\n\t%% Add session\n\tCache1 = add_session(SessionKey0, Cache0),\n\t%% Reserve space\n\t{ok, Cache2} = reserve_for_session(SessionKey0, ReservedSize, Cache1),\n\t?assertEqual(ReservedSize, cache_size(Cache2)),\n\t?assertMatch({ok, ReservedSize}, reserved_size(SessionKey0, Cache2)),\n\t%% Add chunk1\n\t{ok, Cache3} = with_cached_value(ChunkId, SessionKey0, Cache2, fun(Value) ->\n\t\t{ok, Value#ar_mining_cache_value{ chunk1 = Data }}\n\tend),\n\t?assertEqual(ReservedSize, cache_size(Cache3)),\n\t?assertEqual(byte_size(Data), actual_cache_size(Cache3)),\n\tExpectedReservedSize = ReservedSize - byte_size(Data),\n\t?assertMatch({ok, ExpectedReservedSize}, reserved_size(SessionKey0, Cache3)),\n\t%% Reserve more space\n\t?assertMatch({error, cache_limit_exceeded}, reserve_for_session(SessionKey0, 1024 + ReservedSize, Cache3)),\n\t%% Drop session\n\tCache4 = drop_session(SessionKey0, Cache3),\n\t?assertEqual(0, cache_size(Cache4)).\n\nrelease_test() ->\n\tCache0 = new(test_cache, 1024),\n\tSessionKey0 = session0,\n\tChunkId = chunk0,\n\tData = <<\"chunk_data\">>,\n\tReservedSize = 100,\n\t%% Add session\n\tCache1 = add_session(SessionKey0, Cache0),\n\t%% Reserve space\n\t{ok, Cache2} = reserve_for_session(SessionKey0, ReservedSize, Cache1),\n\t?assertEqual(ReservedSize, cache_size(Cache2)),\n\t?assertMatch({ok, ReservedSize}, reserved_size(SessionKey0, Cache2)),\n\t%% Add chunk1\n\t{ok, Cache3} = with_cached_value(ChunkId, SessionKey0, Cache2, fun(Value) ->\n\t\t{ok, Value#ar_mining_cache_value{ chunk1 = Data }}\n\tend),\n\tExpectedReservedSize = ReservedSize - byte_size(Data),\n\t?assertMatch({ok, ExpectedReservedSize}, reserved_size(SessionKey0, Cache3)),\n\t?assertEqual(byte_size(Data), actual_cache_size(Cache3)),\n\t%% Release space\n\t{ok, Cache4} = release_for_session(SessionKey0, 10, Cache3),\n\tExpectedReleasedReserveSize = ExpectedReservedSize - 10,\n\t?assertMatch({ok, ExpectedReleasedReserveSize}, reserved_size(SessionKey0, Cache4)),\n\t?assertEqual(byte_size(Data), actual_cache_size(Cache4)),\n\t%% Drop session\n\tCache5 = drop_session(SessionKey0, Cache4),\n\t?assertEqual(0, cache_size(Cache5)).\n\nwith_cached_value_add_chunk_test() ->\n\tCache0 = new(test_cache, 1024),\n\tChunkId = chunk0,\n\tData = <<\"chunk_data\">>,\n\tSessionKey0 = session0,\n\t%% Add session\n\tCache1 = add_session(SessionKey0, Cache0),\n\t%% Add chunk1\n\t{ok, Cache2} = with_cached_value(ChunkId, SessionKey0, Cache1, fun(Value) ->\n\t\t{ok, Value#ar_mining_cache_value{ chunk1 = Data }}\n\tend),\n\t?assertEqual(byte_size(Data), cache_size(Cache2)),\n\t%% Add chunk2\n\t{ok, Cache3} = with_cached_value(ChunkId, SessionKey0, Cache2, fun(Value) ->\n\t\t{ok, Value#ar_mining_cache_value{ chunk2 = Data }}\n\tend),\n\t?assertEqual(byte_size(Data) * 2, cache_size(Cache3)).\n\nwith_cached_value_add_hash_test() ->\n\tCache0 = new(test_cache),\n\tChunkId = chunk0,\n\tHash = <<\"hash\">>,\n\tSessionKey0 = session0,\n\t%% Add session\n\tCache1 = add_session(SessionKey0, Cache0),\n\t%% Add h1\n\t{ok, Cache2} = with_cached_value(ChunkId, SessionKey0, Cache1, fun(Value) ->\n\t\t{ok, Value#ar_mining_cache_value{ h1 = Hash }}\n\tend),\n\t?assertEqual(0, cache_size(Cache2)),\n\t%% Add h2\n\t{ok, Cache3} = with_cached_value(ChunkId, SessionKey0, Cache2, fun(Value) ->\n\t\t{ok, Value#ar_mining_cache_value{ h2 = Hash }}\n\tend),\n\t?assertEqual(0, cache_size(Cache3)).\n\nwith_cached_value_drop_test() ->\n\tCache0 = new(test_cache, 1024),\n\tChunkId = chunk0,\n\tData = 
<<\"chunk_data\">>,\n\tSessionKey0 = session0,\n\t%% Add session\n\tCache1 = add_session(SessionKey0, Cache0),\n\t%% Add chunk1\n\t{ok, Cache2} = with_cached_value(ChunkId, SessionKey0, Cache1, fun(Value) ->\n\t\t{ok, Value#ar_mining_cache_value{ chunk1 = Data }}\n\tend),\n\t?assertEqual(byte_size(Data), cache_size(Cache2)),\n\t%% Drop\n\t{ok, Cache3} = with_cached_value(ChunkId, SessionKey0, Cache2, fun(_Value) ->\n\t\t{ok, drop}\n\tend),\n\t?assertEqual(0, cache_size(Cache3)),\n\t?assertEqual(0, actual_cache_size(Cache3)).\n\nset_limit_test() ->\n\tCache0 = new(test_cache),\n\tData = <<\"chunk_data\">>,\n\tSessionKey0 = session0,\n\t%% Add session\n\tCache1 = add_session(SessionKey0, Cache0),\n\t%% Add chunk1\n\tChunkId0 = chunk0,\n\t{ok, Cache2} = with_cached_value(ChunkId0, SessionKey0, Cache1, fun(Value) ->\n\t\t{ok, Value#ar_mining_cache_value{ chunk1 = Data }}\n\tend),\n\t?assertEqual(byte_size(Data), cache_size(Cache2)),\n\t%% Set limit\n\tChunkId1 = chunk1,\n\tCache3 = set_limit(5, Cache2),\n\t%% Try to add chunk2\n\t{error, cache_limit_exceeded} = with_cached_value(ChunkId1, SessionKey0, Cache3, fun(Value) ->\n\t\t{ok, Value#ar_mining_cache_value{ chunk1 = Data }}\n\tend),\n\t?assertEqual(byte_size(Data), cache_size(Cache3)).\n\ndrop_session_test() ->\n\tCache0 = new(test_cache, 1024),\n\tChunkId = chunk0,\n\tData = <<\"chunk_data\">>,\n\tSessionKey0 = session0,\n\t%% Add session\n\tCache1 = add_session(SessionKey0, Cache0),\n\t%% Add chunk1\n\t{ok, Cache2} = with_cached_value(ChunkId, SessionKey0, Cache1, fun(Value) ->\n\t\t{ok, Value#ar_mining_cache_value{ chunk1 = Data }}\n\tend),\n\t?assertEqual(byte_size(Data), cache_size(Cache2)),\n\t%% Drop session\n\tCache3 = drop_session(SessionKey0, Cache2),\n\t?assertNot(session_exists(SessionKey0, Cache3)),\n\t?assertEqual(0, cache_size(Cache3)).\n"
  },
  {
    "path": "apps/arweave/src/ar_mining_hash.erl",
    "content": "-module(ar_mining_hash).\n\n-behaviour(gen_server).\n\n-export([start_link/0, compute_h0/2, compute_h1/2, compute_h2/2,\n\t\tgarbage_collect/0]).\n\n-export([init/1, handle_cast/2, handle_call/3, handle_info/2, terminate/2]).\n\n-include(\"ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include(\"ar_consensus.hrl\").\n-include(\"ar_mining.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-record(state, {\n\thashing_threads\t\t\t\t= queue:new(),\n  \thashing_thread_monitor_refs = #{}\n}).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Start the gen_server.\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\ncompute_h0(Worker, Candidate) ->\n\tgen_server:cast(?MODULE, {compute, h0, Worker, Candidate}).\n\ncompute_h1(Worker, Candidate) ->\n\tgen_server:cast(?MODULE, {compute, h1, Worker, Candidate}).\n\ncompute_h2(Worker, Candidate) ->\n\tgen_server:cast(?MODULE, {compute, h2, Worker, Candidate}).\n\ngarbage_collect() ->\n\tgen_server:cast(?MODULE, garbage_collect).\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\n\ninit([]) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tState = lists:foldl(\n\t\tfun(_, Acc) -> start_hashing_thread(Acc) end,\n\t\t#state{},\n\t\tlists:seq(1, Config#config.hashing_threads)\n\t),\n\t{ok, State}.\n\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {request, Request}]),\n\t{reply, ok, State}.\n\nhandle_cast({compute, HashType, Worker, Candidate},\n\t\t#state{ hashing_threads = Threads } = State) ->\n\t{Thread, Threads2} = pick_hashing_thread(Threads),\n\tThread ! 
{compute, HashType, Worker, Candidate},\n\t{noreply, State#state{ hashing_threads = Threads2 }};\n\nhandle_cast(garbage_collect, State) ->\n\terlang:garbage_collect(self(),\n\t\t[{async, {ar_mining_hash, self(), erlang:monotonic_time()}}]),\n\tqueue:fold(\n\t\tfun(Thread, _) ->\n\t\t\terlang:garbage_collect(Thread,\n\t\t\t\t[{async, {ar_mining_hash_worker, Thread, erlang:monotonic_time()}}])\n\t\tend,\n\t\tok,\n\t\tState#state.hashing_threads\n\t),\n\t{noreply, State};\n\nhandle_cast(Cast, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {cast, Cast}]),\n\t{noreply, State}.\n\nhandle_info({garbage_collect, {Name, Pid, StartTime}, GCResult}, State) ->\n\tEndTime = erlang:monotonic_time(),\n\tElapsedTime = erlang:convert_time_unit(EndTime-StartTime, native, millisecond),\n\tcase GCResult == false orelse ElapsedTime > ?GC_LOG_THRESHOLD of\n\t\ttrue ->\n\t\t\t?LOG_DEBUG([\n\t\t\t\t{event, mining_debug_garbage_collect}, {process, Name}, {pid, Pid},\n\t\t\t\t{gc_time, ElapsedTime}, {gc_result, GCResult}]);\n\t\tfalse ->\n\t\t\tok\n\tend,\n\t{noreply, State};\n\nhandle_info({'DOWN', Ref, process, _, Reason},\n\t\t#state{ hashing_thread_monitor_refs = HashingThreadRefs } = State) ->\n\tcase maps:is_key(Ref, HashingThreadRefs) of\n\t\ttrue ->\n\t\t\t{noreply, handle_hashing_thread_down(Ref, Reason, State)};\n\t\t_ ->\n\t\t\t{noreply, State}\n\tend;\n\nhandle_info(Message, State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {module, ?MODULE}, {message, Message}]),\n\t{noreply, State}.\n\nterminate(Reason, _State) ->\n\t?LOG_INFO([{module, ?MODULE},{pid, self()},{callback, terminate},{reason, Reason}]),\n\tok.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nstart_hashing_thread(State) ->\n\t#state{ hashing_threads = Threads, hashing_thread_monitor_refs = Refs } = State,\n\tThread = spawn_link(\n\t\tfun() ->\n\t\t\thashing_thread(ar_packing_server:get_packing_state())\n\t\tend\n\t),\n\tRef = monitor(process, Thread),\n\tThreads2 = queue:in(Thread, Threads),\n\tRefs2 = maps:put(Ref, Thread, Refs),\n\tState#state{ hashing_threads = Threads2, hashing_thread_monitor_refs = Refs2 }.\n\nhandle_hashing_thread_down(Ref, Reason,\n\t\t#state{ hashing_threads = Threads, hashing_thread_monitor_refs = Refs } = State) ->\n\t?LOG_WARNING([{event, mining_hashing_thread_down},\n\t\t\t{reason, io_lib:format(\"~p\", [Reason])}]),\n\tThread = maps:get(Ref, Refs),\n\tRefs2 = maps:remove(Ref, Refs),\n\tThreads2 = queue:delete(Thread, Threads),\n\tstart_hashing_thread(State#state{ hashing_threads = Threads2,\n\t\t\thashing_thread_monitor_refs = Refs2 }).\n\nhashing_thread(PackingState) ->\n\treceive\n\t\t{compute, h0, Worker, Candidate} ->\n\t\t\t#mining_candidate{\n\t\t\t\tmining_address = MiningAddress, nonce_limiter_output = Output,\n\t\t\t\tpartition_number = PartitionNumber, seed = Seed,\n\t\t\t\tpacking_difficulty = PackingDifficulty } = Candidate,\n\t\t\tH0 = ar_block:compute_h0(Output, PartitionNumber, Seed, MiningAddress,\n\t\t\t\t\tPackingDifficulty, PackingState),\n\t\t\tar_mining_worker:computed_hash(Worker, computed_h0, H0, undefined, Candidate),\n\t\t\thashing_thread(PackingState);\n\t\t{compute, h1, Worker, Candidate} ->\n\t\t\t#mining_candidate{ h0 = H0, nonce = Nonce, chunk1 = Chunk1 } = Candidate,\n\t\t\t{H1, Preimage} = ar_block:compute_h1(H0, Nonce, Chunk1),\n\t\t\tar_mining_worker:computed_hash(Worker, computed_h1, H1, Preimage, 
Candidate),\n\t\t\thashing_thread(PackingState);\n\t\t{compute, h2, Worker, Candidate} ->\n\t\t\t#mining_candidate{ h0 = H0, h1 = H1, chunk2 = Chunk2 } = Candidate,\n\t\t\t{H2, Preimage} = ar_block:compute_h2(H1, Chunk2, H0),\n\t\t\tar_mining_worker:computed_hash(Worker, computed_h2, H2, Preimage, Candidate),\n\t\t\thashing_thread(PackingState)\n\tend.\n\npick_hashing_thread(Threads) ->\n\t{{value, Thread}, Threads2} = queue:out(Threads),\n\t{Thread, queue:in(Thread, Threads2)}.\n"
  },
  {
    "path": "apps/arweave/src/ar_mining_io.erl",
    "content": "-module(ar_mining_io).\n\n-behaviour(gen_server).\n\n-export([start_link/0, start_link/1, set_largest_seen_upper_bound/1,\n\t\t\tget_packing/0, get_partitions/0, get_partitions/1, \n\t\t\tget_minable_storage_modules/0, read_recall_range/4,\n\t\t\tis_recall_range_readable/2, garbage_collect/0,\n\t\t\tget_replica_format_from_packing_difficulty/1]).\n\n-export([init/1, handle_cast/2, handle_call/3, handle_info/2, terminate/2]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"arweave/include/ar_consensus.hrl\").\n-include_lib(\"arweave/include/ar_mining.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-define(CACHE_TTL_MS, 2000).\n\n-record(state, {\n\tmode = miner,\n\tpartition_upper_bound = 0,\n\tio_threads = #{},\n\tio_thread_monitor_refs = #{},\n\tstore_id_to_device = #{},\n\tpartition_to_store_ids = #{}\n}).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Start the gen_server.\nstart_link() ->\n\tstart_link(miner).\n\nstart_link(Mode) ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, Mode, []).\n\nset_largest_seen_upper_bound(PartitionUpperBound) ->\n\tgen_server:call(?MODULE, {set_largest_seen_upper_bound, PartitionUpperBound}, 60000).\n\nget_partitions() ->\n\tgen_server:call(?MODULE, get_partitions, 60000).\n\nread_recall_range(WhichChunk, Worker, Candidate, RecallRangeStart) ->\n\tgen_server:call(?MODULE,\n\t\t\t{read_recall_range, WhichChunk, Worker, Candidate, RecallRangeStart}, 60000).\n\nis_recall_range_readable(Candidate, RecallRangeStart) ->\n\tgen_server:call(?MODULE,\n\t\t\t{is_recall_range_readable, Candidate, RecallRangeStart}, 60000).\n\nget_packing() ->\n\t%% ar_config:validate_storage_modules/1 ensures that we only mine against a single\n\t%% packing format. 
So we can grab any partition.\n\tcase get_minable_storage_modules() of\n\t\t[] -> undefined;\n        [{_, _, Packing} | _Rest] -> Packing\n    end.\n\nget_partitions(PartitionUpperBound) when PartitionUpperBound =< 0 ->\n\t[];\nget_partitions(PartitionUpperBound) ->\n\tMax = ar_node:get_max_partition_number(PartitionUpperBound),\n\tAllPartitions = lists:foldl(\n\t\tfun\t(Module, Acc) ->\n\t\t\t\tAddr = ar_storage_module:module_address(Module),\n\t\t\t\tPackingDifficulty =\n\t\t\t\t\tar_storage_module:module_packing_difficulty(Module),\n\t\t\t\t{Start, End} = ar_storage_module:module_range(Module, 0),\n\t\t\t\tPartitions = get_store_id_partitions({Start, End}, []),\n\t\t\t\tlists:foldl(\n\t\t\t\t\tfun(PartitionNumber, AccInner) ->\n\t\t\t\t\t\tsets:add_element({PartitionNumber, Addr, PackingDifficulty}, AccInner)\n\t\t\t\t\tend,\n\t\t\t\t\tAcc,\n\t\t\t\t\tPartitions\n\t\t\t\t)\n\t\tend,\n\t\tsets:new(),\n\t\tget_minable_storage_modules()\n\t),\n\tFilteredPartitions = sets:filter(\n        fun ({PartitionNumber, _Addr, _PackingDifficulty}) ->\n            PartitionNumber =< Max\n        end,\n        AllPartitions\n    ),\n    lists:sort(sets:to_list(FilteredPartitions)).\n\nget_minable_storage_modules() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tlists:filter(\n\t\tfun\t(Module) ->\n\t\t\t\tar_storage_module:module_address(Module) == Config#config.mining_addr\n\t\tend,\n\t\tConfig#config.storage_modules\n\t).\n\n\ngarbage_collect() ->\n\tgen_server:cast(?MODULE, garbage_collect).\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\n\ninit(Mode) ->\n\t?LOG_INFO([{event, mining_io_init}, {mode, Mode}]),\n\tgen_server:cast(self(), initialize_state),\n\t{ok, #state{ mode = Mode }}.\n\nhandle_call({set_largest_seen_upper_bound, PartitionUpperBound}, _From, State) ->\n\t#state{ partition_upper_bound = CurrentUpperBound } = State,\n\tcase PartitionUpperBound > CurrentUpperBound of\n\t\ttrue ->\n\t\t\t{reply, true, State#state{ partition_upper_bound = PartitionUpperBound }};\n\t\tfalse ->\n\t\t\t{reply, false, State}\n\tend;\n\nhandle_call(get_partitions, _From, #state{ partition_upper_bound = PartitionUpperBound } = State) ->\n\t{reply, get_partitions(PartitionUpperBound), State};\n\nhandle_call({read_recall_range, WhichChunk, Worker, Candidate, RecallRangeStart},\n\t\t_From, State) ->\n\t#mining_candidate{ packing_difficulty = PackingDifficulty } = Candidate,\n\tRangeEnd = RecallRangeStart + ar_block:get_recall_range_size(PackingDifficulty),\n\tThreadFound = case find_thread(RecallRangeStart, RangeEnd, State) of\n\t\tnot_found ->\n\t\t\tfalse;\n\t\t{Thread, StoreID} ->\n\t\t\tThread ! 
{WhichChunk, {Worker, Candidate, RecallRangeStart, StoreID}},\n\t\t\ttrue\n\tend,\n\t{reply, ThreadFound, State};\n\nhandle_call({is_recall_range_readable, Candidate, RecallRangeStart}, _From, State) ->\n\t#mining_candidate{ packing_difficulty = PackingDifficulty } = Candidate,\n\tRangeEnd = RecallRangeStart + ar_block:get_recall_range_size(PackingDifficulty),\n\tThreadFound = case find_thread(RecallRangeStart, RangeEnd, State) of\n\t\tnot_found -> false;\n\t\t{_, _} -> true\n\tend,\n\t{reply, ThreadFound, State};\n\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {request, Request}]),\n\t{reply, ok, State}.\n\nhandle_cast(initialize_state, State) ->\n\tState3 = case ar_device_lock:is_ready() of\n\t\tfalse ->\n\t\t\tar_util:cast_after(1000, self(), initialize_state),\n\t\t\tState;\n\t\ttrue ->\n\t\t\tcase start_io_threads(State) of\n\t\t\t\t{error, _} ->\n\t\t\t\t\tar_util:cast_after(1000, self(), initialize_state),\n\t\t\t\t\tState;\n\t\t\t\tState2 ->\n\t\t\t\t\tState2\n\t\t\tend\n\tend,\n\t{noreply, State3};\n\nhandle_cast(garbage_collect, State) ->\n\terlang:garbage_collect(self(),\n\t\t[{async, {ar_mining_io, self(), erlang:monotonic_time()}}]),\n\tmaps:fold(\n\t\tfun(_Key, Thread, _) ->\n\t\t\terlang:garbage_collect(Thread,\n\t\t\t\t[{async, {ar_mining_io_worker, Thread, erlang:monotonic_time()}}])\n\t\tend,\n\t\tok,\n\t\tState#state.io_threads\n\t),\n\t{noreply, State};\n\nhandle_cast(Cast, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {cast, Cast}]),\n\t{noreply, State}.\n\nhandle_info({garbage_collect, {Name, Pid, StartTime}, GCResult}, State) ->\n\tEndTime = erlang:monotonic_time(),\n\tElapsedTime = erlang:convert_time_unit(EndTime-StartTime, native, millisecond),\n\tcase GCResult == false orelse ElapsedTime > ?GC_LOG_THRESHOLD of\n\t\ttrue ->\n\t\t\t?LOG_DEBUG([\n\t\t\t\t{event, mining_debug_garbage_collect}, {process, Name}, {pid, Pid},\n\t\t\t\t{gc_time, ElapsedTime}, {gc_result, GCResult}]);\n\t\tfalse ->\n\t\t\tok\n\tend,\n\t{noreply, State};\n\nhandle_info({'DOWN', Ref, process, _, Reason},\n\t\t#state{ io_thread_monitor_refs = IOThreadRefs } = State) ->\n\tcase maps:is_key(Ref, IOThreadRefs) of\n\t\ttrue ->\n\t\t\t{noreply, handle_io_thread_down(Ref, Reason, State)};\n\t\t_ ->\n\t\t\t{noreply, State}\n\tend;\n\nhandle_info(Message, State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {module, ?MODULE}, {message, Message}]),\n\t{noreply, State}.\n\nterminate(Reason, _State) ->\n\t?LOG_INFO([{module, ?MODULE},{pid, self()},{callback, terminate},{reason, Reason}]),\n\tok.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nstart_io_threads(State) ->\n\t#state{ mode = Mode } = State,\n\n    % Step 1: Group StoreIDs by their system device\n\tcase ar_device_lock:get_store_id_to_device_map() of\n\t\t{error, Reason} ->\n\t\t\t?LOG_ERROR([{event, error_initializing_mining_io_state}, {module, ?MODULE},\n\t\t\t\t{reason, io_lib:format(\"~p\", [Reason])}]),\n\t\t\t{error, Reason};\n\t\tStoreIDToDevice ->\n\t\t\t?LOG_INFO([{event, starting_mining_io_threads}, {store_id_to_device, StoreIDToDevice}]),\n\t\t\tDeviceToStoreIDs = ar_util:invert_map(StoreIDToDevice),\n\t\t\t% Step 2: Start IO threads for each device and populate map indices\n\t\t\tState2 = maps:fold(\n\t\t\t\tfun(Device, StoreIDs, StateAcc) ->\n\t\t\t\t\t#state{ io_threads = Threads, io_thread_monitor_refs = 
Refs,\n\t\t\t\t\t\tpartition_to_store_ids = PartitionToStoreIDs } = StateAcc,\n\n\t\t\t\t\tStoreIDs2 = sets:to_list(StoreIDs),\n\n\t\t\t\t\tThread = start_io_thread(Mode, StoreIDs2),\n\t\t\t\t\tThreadRef = monitor(process, Thread),\n\n\t\t\t\t\tPartitionToStoreIDs2 = map_partition_to_store_ids(StoreIDs2, PartitionToStoreIDs),\n\t\t\t\t\tStateAcc#state{\n\t\t\t\t\t\tio_threads = maps:put(Device, Thread, Threads),\n\t\t\t\t\t\tio_thread_monitor_refs = maps:put(ThreadRef, Device, Refs),\n\t\t\t\t\t\tpartition_to_store_ids = PartitionToStoreIDs2\n\t\t\t\t\t}\n\t\t\t\tend,\n\t\t\t\tState,\n\t\t\t\tDeviceToStoreIDs\n\t\t\t),\n\n\t\t\tState2#state{ store_id_to_device = StoreIDToDevice }\n\tend.\n\nstart_io_thread(Mode, StoreIDs) ->\n\tNow = os:system_time(millisecond),\n\tspawn(\n\t\tfun() ->\n\t\t\topen_files(StoreIDs),\n\t\t\tio_thread(Mode, #{}, Now)\n\t\tend\n\t).\n\nmap_partition_to_store_ids([], PartitionToStoreIDs) ->\n\tPartitionToStoreIDs;\nmap_partition_to_store_ids([StoreID | StoreIDs], PartitionToStoreIDs) ->\n\tcase ar_storage_module:get_by_id(StoreID) of\n\t\tnot_found ->\n\t\t\t%% Occasionally happens in tests.\n\t\t\t?LOG_ERROR([{event, mining_storage_module_not_found}, {store_id, StoreID}]),\n\t\t\tmap_partition_to_store_ids(StoreIDs, PartitionToStoreIDs);\n\t\tStorageModule ->\n\t\t\t{Start, End} = ar_storage_module:module_range(StorageModule, 0),\n\t\t\tPartitions = get_store_id_partitions({Start, End}, []),\n\t\t\tPartitionToStoreIDs2 = lists:foldl(\n\t\t\t\tfun(Partition, Acc) ->\n\t\t\t\t\tmaps:update_with(Partition,\n\t\t\t\t\t\tfun(PartitionStoreIDs) -> [StoreID | PartitionStoreIDs] end,\n\t\t\t\t\t\t[StoreID], Acc)\n\t\t\t\tend,\n\t\t\t\tPartitionToStoreIDs, Partitions),\n\t\t\tmap_partition_to_store_ids(StoreIDs, PartitionToStoreIDs2)\n\tend.\n\nget_store_id_partitions({Start, End}, Partitions) when Start >= End ->\n\tPartitions;\nget_store_id_partitions({Start, End}, Partitions) ->\n\tPartitionNumber = ar_node:get_partition_number(Start),\n\tget_store_id_partitions({Start + ar_block:partition_size(), End}, [PartitionNumber | Partitions]).\n\nopen_files(StoreIDs) ->\n\tlists:foreach(\n\t\tfun(StoreID) ->\n\t\t\tcase StoreID of\n\t\t\t\t?DEFAULT_MODULE ->\n\t\t\t\t\tok;\n\t\t\t\t_ ->\n\t\t\t\t\tar_chunk_storage:open_files(StoreID)\n\t\t\tend\n\t\tend,\n\t\tStoreIDs).\n\nhandle_io_thread_down(Ref, Reason, State) ->\n\t#state{ mode = Mode, io_threads = Threads, io_thread_monitor_refs = Refs,\n\t\tstore_id_to_device = StoreIDToDevice } = State,\n\t?LOG_WARNING([{event, mining_io_thread_down}, {reason, io_lib:format(\"~p\", [Reason])}]),\n\tDevice = maps:get(Ref, Refs),\n\tRefs2 = maps:remove(Ref, Refs),\n\tThreads2 = maps:remove(Device, Threads),\n\n\tDeviceToStoreIDs = ar_util:invert_map(StoreIDToDevice),\n\tStoreIDs = maps:get(Device, DeviceToStoreIDs, sets:new()),\n\tThread = start_io_thread(Mode, sets:to_list(StoreIDs)),\n\tThreadRef = monitor(process, Thread),\n\tState#state{ io_threads = maps:put(Device, Thread, Threads2),\n\t\tio_thread_monitor_refs = maps:put(ThreadRef, Device, Refs2) }.\n\nio_thread(Mode, Cache, LastClearTime) ->\n\treceive\n\t\t{WhichChunk, {Worker, Candidate, RecallRangeStart, StoreID}} ->\n\t\t\t{ChunkOffsets, Cache2} =\n\t\t\t\tget_chunks(Mode, WhichChunk, Candidate, RecallRangeStart, StoreID, Cache),\n\t\t\tchunks_read(Mode, Worker, WhichChunk, Candidate, RecallRangeStart, ChunkOffsets),\n\t\t\t{Cache3, LastClearTime2} = maybe_clear_cached_chunks(Cache2, LastClearTime),\n\t\t\tio_thread(Mode, Cache3, 
LastClearTime2)\n\tend.\n\nchunks_read(miner, Worker, WhichChunk, Candidate, RecallRangeStart, ChunkOffsets) ->\n\tar_mining_worker:chunks_read(\n\t\tWorker, WhichChunk, Candidate, RecallRangeStart, ChunkOffsets);\nchunks_read(standalone, Worker, WhichChunk, Candidate, RecallRangeStart, ChunkOffsets) ->\n\tWorker ! {chunks_read, WhichChunk, Candidate, RecallRangeStart, ChunkOffsets}.\n\nget_packed_intervals(Start, End, MiningAddress, PackingDifficulty, ?DEFAULT_MODULE, Intervals) ->\n\tReplicaFormat = get_replica_format_from_packing_difficulty(PackingDifficulty),\n\tPacking = ar_block:get_packing(PackingDifficulty, MiningAddress, ReplicaFormat),\n\tcase ar_sync_record:get_next_synced_interval(Start, End, Packing, ar_data_sync, ?DEFAULT_MODULE) of\n\t\tnot_found ->\n\t\t\tIntervals;\n\t\t{Right, Left} ->\n\t\t\tget_packed_intervals(Right, End, MiningAddress, PackingDifficulty, ?DEFAULT_MODULE,\n\t\t\t\t\tar_intervals:add(Intervals, Right, Left))\n\tend;\nget_packed_intervals(_Start, _End, _MiningAddr, _PackingDifficulty, _StoreID, _Intervals) ->\n\tno_interval_check_implemented_for_non_default_store.\n\n%% The protocol allows composite packing with the packing difficulty 25 for now,\n%% but it is not practical and it is convenient to exclude it from the range of\n%% supported storage module configurations and treat it as the 2.9 replication format\n%% in the mining process.\nget_replica_format_from_packing_difficulty(?REPLICA_2_9_PACKING_DIFFICULTY) ->\n\t1;\nget_replica_format_from_packing_difficulty(_PackingDifficulty) ->\n\t0.\n\nmaybe_clear_cached_chunks(Cache, LastClearTime) ->\n\tNow = os:system_time(millisecond),\n\tcase (Now - LastClearTime) > (?CACHE_TTL_MS div 2) of\n\t\ttrue ->\n\t\t\tCutoffTime = Now - ?CACHE_TTL_MS,\n\t\t\tCache2 = maps:filter(\n\t\t\t\tfun(_CachedRangeStart, {CachedTime, _ChunkOffsets}) ->\n\t\t\t\t\t%% Keep only the ranges that were cached after the CutoffTime.\n\t\t\t\t\tCachedTime > CutoffTime\n\t\t\t\tend,\n\t\t\t\tCache),\n\t\t\t{Cache2, Now};\n\t\tfalse ->\n\t\t\t{Cache, LastClearTime}\n\tend.\n\n%% @doc When we're reading a range for a CM peer we'll cache it temporarily in case\n%% that peer has broken up the batch of H1s into multiple requests. 
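The cache is keyed by the recall range start offset and each entry is\n%% timestamped; maybe_clear_cached_chunks/2 periodically evicts entries that are older\n%% than ?CACHE_TTL_MS. 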
The temporary cache\n%% prevents us from reading the same range from disk multiple times.\n%%\n%% However if the request is from our local miner there's no need to cache since the H1\n%% batch is always handled all at once.\nget_chunks(Mode, WhichChunk, Candidate, RangeStart, StoreID, Cache) ->\n\tcase Candidate#mining_candidate.cm_lead_peer of\n\t\tnot_set ->\n\t\t\tChunkOffsets = read_range(Mode, WhichChunk, Candidate, RangeStart, StoreID),\n\t\t\t{ChunkOffsets, Cache};\n\t\t_ ->\n\t\t\tcached_read_range(Mode, WhichChunk, Candidate, RangeStart, StoreID, Cache)\n\tend.\n\ncached_read_range(Mode, WhichChunk, Candidate, RangeStart, StoreID, Cache) ->\n\tNow = os:system_time(millisecond),\n\tcase maps:get(RangeStart, Cache, not_found) of\n\t\tnot_found ->\n\t\t\tChunkOffsets = read_range(Mode, WhichChunk, Candidate, RangeStart, StoreID),\n\t\t\tCache2 = maps:put(RangeStart, {Now, ChunkOffsets}, Cache),\n\t\t\t{ChunkOffsets, Cache2};\n\t\t{_CachedTime, ChunkOffsets} ->\n\t\t\t?LOG_DEBUG([{event, mining_debug_read_cached_recall_range},\n\t\t\t\t{pid, self()}, {range_start, RangeStart},\n\t\t\t\t{store_id, StoreID},\n\t\t\t\t{partition_number, Candidate#mining_candidate.partition_number},\n\t\t\t\t{partition_number2, Candidate#mining_candidate.partition_number2},\n\t\t\t\t{cm_peer, ar_util:format_peer(Candidate#mining_candidate.cm_lead_peer)},\n\t\t\t\t{cache_ref, Candidate#mining_candidate.cache_ref},\n\t\t\t\t{session,\n\t\t\t\tar_nonce_limiter:encode_session_key(Candidate#mining_candidate.session_key)}]),\n\t\t\t{ChunkOffsets, Cache}\n\tend.\n\nread_range(Mode, WhichChunk, Candidate, RangeStart, StoreID) ->\n\tStartTime = erlang:monotonic_time(),\n\t#mining_candidate{ mining_address = MiningAddress,\n\t\t\tpacking_difficulty = PackingDifficulty } = Candidate,\n\tRecallRangeSize = ar_block:get_recall_range_size(PackingDifficulty),\n\tIntervals = get_packed_intervals(RangeStart, RangeStart + RecallRangeSize,\n\t\t\tMiningAddress, PackingDifficulty, StoreID, ar_intervals:new()),\n\tChunkOffsets = ar_chunk_storage:get_range(RangeStart, RecallRangeSize, StoreID),\n\tChunkOffsets2 = filter_by_packing(ChunkOffsets, Intervals, StoreID),\n\tlog_read_range(Mode, Candidate, WhichChunk, length(ChunkOffsets), StartTime),\n\tChunkOffsets2.\n\nfilter_by_packing([], _Intervals, _StoreID) ->\n\t[];\nfilter_by_packing([{EndOffset, Chunk} | ChunkOffsets], Intervals, ?DEFAULT_MODULE = StoreID) ->\n\tcase ar_intervals:is_inside(Intervals, EndOffset) of\n\t\tfalse ->\n\t\t\tfilter_by_packing(ChunkOffsets, Intervals, StoreID);\n\t\ttrue ->\n\t\t\t[{EndOffset, Chunk} | filter_by_packing(ChunkOffsets, Intervals, StoreID)]\n\tend;\nfilter_by_packing(ChunkOffsets, _Intervals, _StoreID) ->\n\tChunkOffsets.\n\nlog_read_range(standalone, _Candidate, _WhichChunk, _FoundChunks, _StartTime) ->\n\tok;\nlog_read_range(_Mode, Candidate, WhichChunk, FoundChunks, StartTime) ->\n\tEndTime = erlang:monotonic_time(),\n\tElapsedTime = erlang:convert_time_unit(EndTime-StartTime, native, millisecond),\n\tReadRate = case ElapsedTime > 0 of\n\t\ttrue -> (FoundChunks * 1000 div 4) div ElapsedTime; %% MiB per second\n\t\tfalse -> 0\n\tend,\n\n\tPartitionNumber = case WhichChunk of\n\t\tchunk1 -> Candidate#mining_candidate.partition_number;\n\t\tchunk2 -> Candidate#mining_candidate.partition_number2\n\tend,\n\n\tar_mining_stats:raw_read_rate(PartitionNumber, ReadRate),\n\n\t% ?LOG_DEBUG([{event, mining_debug_read_recall_range},\n\t% \t\t{thread, self()},\n\t% \t\t{elapsed_time_ms, ElapsedTime},\n\t% \t\t{chunks_read, FoundChunks},\n\t% 
\t\t{mib_read, FoundChunks / 4},\n\t% \t\t{read_rate_mibps, ReadRate},\n\t% \t\t{chunk, WhichChunk},\n\t% \t\t{partition_number, PartitionNumber}]),\n\tok.\n\nfind_thread(RangeStart, RangeEnd, State) ->\n\tPartitionNumber = ar_node:get_partition_number(RangeStart),\n\tStoreIDs = maps:get(PartitionNumber, State#state.partition_to_store_ids, not_found),\n\tStoreID = find_largest_intersection(StoreIDs, RangeStart, RangeEnd, 0, not_found),\n\tDevice = maps:get(StoreID, State#state.store_id_to_device, not_found),\n\tThread = maps:get(Device, State#state.io_threads, not_found),\n\tcase Thread of\n\t\tnot_found ->\n\t\t\tnot_found;\n\t\t_ ->\n\t\t\t{Thread, StoreID}\n\tend.\n\nfind_largest_intersection(not_found, _RangeStart, _RangeEnd, _Max, _MaxKey) ->\n\tnot_found;\nfind_largest_intersection([StoreID | StoreIDs], RangeStart, RangeEnd, Max, MaxKey) ->\n\tI = ar_sync_record:get_intersection_size(RangeEnd, RangeStart, ar_chunk_storage, StoreID),\n\tcase I > Max of\n\t\ttrue ->\n\t\t\tfind_largest_intersection(StoreIDs, RangeStart, RangeEnd, I, StoreID);\n\t\tfalse ->\n\t\t\tfind_largest_intersection(StoreIDs, RangeStart, RangeEnd, Max, MaxKey)\n\tend;\nfind_largest_intersection([], _RangeStart, _RangeEnd, _Max, MaxKey) ->\n\tMaxKey.\n"
  },
  {
    "path": "apps/arweave/src/ar_mining_server.erl",
    "content": "%%% @doc The 2.6 mining server.\n-module(ar_mining_server).\n\n-behaviour(ar_mining_server_behaviour).\n-behaviour(gen_server).\n\n-export([start_link/0,\n\t\tstart_mining/1, is_paused/0, set_difficulty/1, set_merkle_rebase_threshold/1, set_height/1,\n\t\tcompute_h2_for_peer/1, prepare_and_post_solution/1, prepare_poa/3,\n\t\tget_recall_bytes/5, get_recall_range/3, get_recall_range/5,\n\t\tactive_sessions/0, encode_sessions/1, add_pool_job/6,\n\t\tis_one_chunk_solution/1, fetch_poa_from_peers/2, log_prepare_solution_failure/5,\n\t\tget_packing_difficulty/1, get_packing_type/1]).\n-export([pause/0]).\n\n-export([init/1, handle_cast/2, handle_call/3, handle_info/2, terminate/2]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave/include/ar_consensus.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"arweave/include/ar_data_discovery.hrl\").\n-include_lib(\"arweave/include/ar_mining.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n-include_lib(\"stdlib/include/ms_transform.hrl\").\n\n-record(state, {\n\tpaused \t\t\t\t\t\t= true,\n\tworkers\t\t\t\t\t\t= #{},\n\tactive_sessions\t\t\t\t= sets:new(),\n\tseeds\t\t\t\t\t\t= #{},\n\tdiff_pair\t\t\t\t\t= not_set,\n\tchunk_cache_limit \t\t\t= 0,\n\tgc_frequency_ms\t\t\t\t= undefined,\n\tgc_process_ref\t\t\t\t= undefined,\n\tmerkle_rebase_threshold\t\t= infinity,\n\tis_pool_client\t\t\t\t= false,\n\tallow_composite_packing\t\t= false,\n\tallow_replica_2_9_mining\t= false,\n\tpacking_difficulty\t\t\t= 0\n}).\n\n-ifdef(AR_TEST).\n-define(POST_2_8_COMPOSITE_PACKING_DELAY_BLOCKS, 0).\n-define(MINIMUM_CACHE_LIMIT_BYTES, 100 * ?MiB).\n-else.\n-define(POST_2_8_COMPOSITE_PACKING_DELAY_BLOCKS, 10).\n-define(MINIMUM_CACHE_LIMIT_BYTES, 1).\n-endif.\n\n%% The number of concurrent VDF steps per partition that will fit in the cache. The higher this\n%% number the more memory the cache can use (roughly ?IDEAL_STEPS_PER_PARTITION * 5 MiB per\n%% partition). Also the higher the number the more the miner is able to respond to temporary\n%% hashrate slowdowns (e.g. a system process temporarily consumes all CPU) or temporary VDF\n%% step spikes (e.g. the node validates an block with an advanced VDF step and unlocks many\n%% VDF steps at once) without losing hashrate.\n-define(IDEAL_STEPS_PER_PARTITION, 20).\n\n-define(FETCH_POA_FROM_PEERS_TIMEOUT_MS, 10000).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Start the gen_server.\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\n%% @doc Start mining. Does nothing if the mining server is already running.\nstart_mining(Args) ->\n\tgen_server:cast(?MODULE, {start_mining, Args}).\n\n%% @doc Return true if the mining server is paused.\nis_paused() ->\n\tgen_server:call(?MODULE, is_paused, 60_000).\n\n%% @doc Compute H2 for a remote peer (used in coordinated mining).\ncompute_h2_for_peer(Candidate) ->\n\tgen_server:cast(?MODULE, {compute_h2_for_peer, Candidate}).\n\n%% @doc Set the new mining difficulty. We do not recalculate it inside the mining\n%% server because we want to completely detach the mining server from the block\n%% ordering. 
The previous block is chosen only after the mining solution is found (if\n%% we choose it in advance we may miss a better option arriving in the process).\n%% Also, a mining session may (in practice, almost always will) span several blocks.\nset_difficulty(DiffPair) ->\n\tgen_server:cast(?MODULE, {set_difficulty, DiffPair}).\n\nset_merkle_rebase_threshold(Threshold) ->\n\tgen_server:cast(?MODULE, {set_merkle_rebase_threshold, Threshold}).\n\nset_height(Height) ->\n\tgen_server:cast(?MODULE, {set_height, Height}).\n\n%% @doc Add a pool job to the mining queue.\nadd_pool_job(SessionKey, StepNumber, Output, PartitionUpperBound, Seed, PartialDiff) ->\n\tArgs = {SessionKey, StepNumber, Output, PartitionUpperBound, Seed, PartialDiff},\n\tgen_server:cast(?MODULE, {add_pool_job, Args}).\n\nprepare_and_post_solution(CandidateOrSolution) ->\n\tgen_server:cast(?MODULE, {prepare_and_post_solution, CandidateOrSolution}).\n\nactive_sessions() ->\n\tgen_server:call(?MODULE, active_sessions).\n\nencode_sessions(Sessions) ->\n\tlists:map(fun(SessionKey) ->\n\t\tar_nonce_limiter:encode_session_key(SessionKey)\n\tend, Sessions).\n\nis_one_chunk_solution(Solution) ->\n\tSolution#mining_solution.recall_byte2 == undefined.\n\n%% @doc Use this function every time the miner finds a solution but fails to prepare a block.\n%% It may happen to a standalone miner, a worker in a coordinated mining setup, the exit node,\n%% a pool worker in a pool or the pool server.\n\n-spec log_prepare_solution_failure(\n\tSolution :: #mining_solution{},\n\tFailureType :: stale | rejected,\n\tFailureReason :: atom(),\n\tSource :: atom(),\n\tAdditionalLogData :: list({atom(), term()})\n) ->\n\tRet :: ok.\n\n-ifdef(AR_TEST).\nlog_prepare_solution_failure(_Solution, stale, _FailureReason, _Source, _AdditionalLogData) ->\n\tok;\nlog_prepare_solution_failure(Solution, FailureType, FailureReason, Source, AdditionalLogData) ->\n\tlog_prepare_solution_failure2(Solution, FailureType, FailureReason, Source, AdditionalLogData).\n-else.\nlog_prepare_solution_failure(Solution, FailureType, FailureReason, Source, AdditionalLogData) ->\n\tlog_prepare_solution_failure2(Solution, FailureType, FailureReason, Source, AdditionalLogData).\n-endif.\n\nlog_prepare_solution_failure2(Solution, FailureType, FailureReason, Source, AdditionalLogData) ->\n\t#mining_solution{\n\t\tsolution_hash = SolutionH,\n\t\tpacking_difficulty = PackingDifficulty } = Solution,\n\n\tar_events:send(solution, {FailureType,\n\t\t#{ solution_hash => SolutionH,\n\t\t\treason => FailureReason,\n\t\t\tsource => Source }}),\n\n\tar:console(\"~nFailed to prepare block from the mining solution.. 
Reason: ~p~n\",\n\t\t\t[FailureReason]),\n\t?LOG_ERROR([{event, failed_to_prepare_block_from_mining_solution},\n\t\t\t{reason, FailureReason},\n\t\t\t{solution_hash, ar_util:safe_encode(SolutionH)},\n\t\t\t{packing_difficulty, PackingDifficulty} | AdditionalLogData]),\n\tprometheus_gauge:inc(mining_solution, [FailureReason]).\n\n-spec get_packing_difficulty(Packing :: ar_storage_module:packing()) ->\n\tPackingDifficulty :: non_neg_integer().\nget_packing_difficulty({composite, _, Difficulty}) ->\n\tDifficulty;\nget_packing_difficulty({replica_2_9, _}) ->\n\t?REPLICA_2_9_PACKING_DIFFICULTY;\nget_packing_difficulty(_) ->\n\t0.\n\n-spec get_packing_type(Packing :: ar_storage_module:packing()) ->\n\tPackingType :: atom().\nget_packing_type({composite, _, _}) ->\n\tcomposite;\nget_packing_type({replica_2_9, _}) ->\n\treplica_2_9;\nget_packing_type({spora_2_6, _}) ->\n\tspora_2_6;\nget_packing_type(Packing) ->\n\tPacking.\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\n\ninit([]) ->\n\tok = ar_events:subscribe(nonce_limiter),\n\n\tPartitions = ar_mining_io:get_partitions(infinity),\n\tPacking = ar_mining_io:get_packing(),\n\tPackingDifficulty = get_packing_difficulty(Packing),\n\n\tWorkers = lists:foldl(\n\t\tfun({Partition, _Addr, Difficulty}, Acc) ->\n\t\t\tmaps:put({Partition, Difficulty},\n\t\t\t\t\tar_mining_worker:name(Partition, Difficulty), Acc)\n\t\tend,\n\t\t#{},\n\t\tPartitions\n\t),\n\n\t?LOG_INFO([{event, mining_server_init},\n\t\t\t{packing, ar_serialize:encode_packing(Packing, false)},\n\t\t\t{partitions, [ Partition || {Partition, _, _} <- Partitions]}]),\n\n\t{ok, #state{\n\t\tworkers = Workers,\n\t\tis_pool_client = ar_pool:is_client(),\n\t\tpacking_difficulty = PackingDifficulty\n\t}}.\n\nhandle_call(active_sessions, _From, State) ->\n\t{reply, State#state.active_sessions, State};\n\nhandle_call(is_paused, _From, State) ->\n\t{reply, State#state.paused, State};\n\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {request, Request}]),\n\t{reply, ok, State}.\n\nhandle_cast(pause, State) ->\n\tar:console(\"Pausing mining.~n\"),\n\t?LOG_INFO([{event, pause_mining}]),\n\tar_mining_stats:mining_paused(),\n\t%% Setting paused to true allows all pending tasks to complete, but prevents new output to be\n\t%% distributed. 
Setting diff to infinity ensures that no solutions are found.\n\tState2 = set_difficulty({infinity, infinity}, State),\n\t{noreply, State2#state{ paused = true }};\n\nhandle_cast({start_mining, _Args}, #state{ paused = false } = State) ->\n\t{noreply, State};\nhandle_cast({start_mining, Args}, State) ->\n\t{DiffPair, RebaseThreshold, Height} = Args,\n\tar:console(\"Starting mining.~n\"),\n\t?LOG_INFO([{event, start_mining}, {difficulty, DiffPair},\n\t\t\t{rebase_threshold, RebaseThreshold}, {height, Height}]),\n\tar_mining_stats:start_performance_reports(),\n\n\tmaps:foreach(\n\t\tfun(_Partition, Worker) ->\n\t\t\tar_mining_worker:reset_mining_session(Worker, DiffPair)\n\t\tend,\n\t\tState#state.workers\n\t),\n\n\t{noreply, State#state{\n\t\tpaused = false,\n\t\tactive_sessions\t= sets:new(),\n\t\tdiff_pair = DiffPair,\n\t\tmerkle_rebase_threshold = RebaseThreshold,\n\t\tallow_composite_packing = allow_composite_packing(Height),\n\t\tallow_replica_2_9_mining = allow_replica_2_9_mining(Height) }};\n\nhandle_cast({set_difficulty, DiffPair}, State) ->\n\tState2 = set_difficulty(DiffPair, State),\n\t{noreply, State2};\n\nhandle_cast({set_merkle_rebase_threshold, Threshold}, State) ->\n\t{noreply, State#state{ merkle_rebase_threshold = Threshold }};\n\nhandle_cast({set_height, Height}, State) ->\n\t{noreply, State#state{ allow_composite_packing = allow_composite_packing(Height),\n\t\t\tallow_replica_2_9_mining = allow_replica_2_9_mining(Height) }};\n\nhandle_cast({add_pool_job, Args}, State) ->\n\t{SessionKey, StepNumber, Output, PartitionUpperBound, Seed, PartialDiff} = Args,\n\tState2 = set_seed(SessionKey, Seed, State),\n\thandle_computed_output(\n\t\tSessionKey, StepNumber, Output, PartitionUpperBound, PartialDiff, State2);\n\nhandle_cast({compute_h2_for_peer, Candidate}, State) ->\n\t#mining_candidate{ partition_number2 = Partition2,\n\t\t\tpacking_difficulty = PackingDifficulty } = Candidate,\n\tcase get_worker({Partition2, PackingDifficulty}, State) of\n\t\tnot_found ->\n\t\t\tok;\n\t\tWorker ->\n\t\t\tar_mining_worker:add_task(Worker, compute_h2_for_peer, Candidate)\n\tend,\n\t{noreply, State};\n\nhandle_cast({prepare_and_post_solution, _}, #state{ paused = true } = State) ->\n\t%% Ignore solutions when the server is paused. Should only happen in tests.\n\t{noreply, State};\nhandle_cast({prepare_and_post_solution, CandidateOrSolution}, State) ->\n\tprepare_and_post_solution(CandidateOrSolution, State),\n\t{noreply, State};\n\nhandle_cast({manual_garbage_collect, Ref}, #state{ gc_process_ref = Ref } = State) ->\n\t%% Reading recall ranges from disk causes a large amount of binary data to be allocated and\n\t%% references to that data are spread among all the different mining processes. Because of\n\t%% this it can take the default garbage collection a long time to clean up all references\n\t%% and deallocate the memory - which in turn can cause memory to be exhausted.\n\t%%\n\t%% To address this the mining server will force a garbage collection on all mining\n\t%% processes every time we process a few VDF steps. 
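(See reset_gc_timer/2 and calculate_cache_limits/2.) 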
The exact number of VDF steps is\n\t%% determined by the chunk cache size limit in order to roughly align garbage collection\n\t%% with when we expect all references to a recall range's chunks to be evicted from\n\t%% the cache.\n\t?LOG_DEBUG([{event, mining_debug_garbage_collect_start},\n\t\t{frequency, State#state.gc_frequency_ms}]),\n\tar_mining_io:garbage_collect(),\n\tar_mining_hash:garbage_collect(),\n\terlang:garbage_collect(self(), [{async, erlang:monotonic_time()}]),\n\tmaps:foreach(\n\t\tfun(_Partition, Worker) ->\n\t\t\tar_mining_worker:garbage_collect(Worker)\n\t\tend,\n\t\tState#state.workers\n\t),\n\tar_coordination:garbage_collect(),\n\tar_util:cast_after(State#state.gc_frequency_ms, ?MODULE, {manual_garbage_collect, Ref}),\n\t{noreply, State};\nhandle_cast({manual_garbage_collect, _}, State) ->\n\t%% Does not originate from the running instance of the server; happens in tests.\n\t{noreply, State};\n\nhandle_cast(Cast, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {cast, Cast}]),\n\t{noreply, State}.\n\nhandle_info({event, nonce_limiter, {computed_output, _Args}}, #state{ paused = true } = State) ->\n\t{noreply, State};\nhandle_info({event, nonce_limiter, {computed_output, Args}}, State) ->\n\tcase ar_pool:is_client() of\n\t\ttrue ->\n\t\t\t%% Ignore VDF events because we are receiving jobs from the pool.\n\t\t\t{noreply, State};\n\t\tfalse ->\n\t\t\t{SessionKey, StepNumber, Output, PartitionUpperBound} = Args,\n\t\t\thandle_computed_output(\n\t\t\t\tSessionKey, StepNumber, Output, PartitionUpperBound, not_set, State)\n\tend;\n\nhandle_info({event, nonce_limiter, {valid, _}}, State) ->\n\t%% Silently ignore validation messages\n\t{noreply, State};\n\nhandle_info({event, nonce_limiter, Message}, State) ->\n\t?LOG_DEBUG([{event, mining_debug_skipping_nonce_limiter}, {message, Message}]),\n\t{noreply, State};\n\nhandle_info({garbage_collect, StartTime, GCResult}, State) ->\n\tEndTime = erlang:monotonic_time(),\n\tElapsedTime = erlang:convert_time_unit(EndTime-StartTime, native, millisecond),\n\tcase GCResult == false orelse ElapsedTime > ?GC_LOG_THRESHOLD of\n\t\ttrue ->\n\t\t\t?LOG_DEBUG([\n\t\t\t\t{event, mining_debug_garbage_collect}, {process, ar_mining_server},\n\t\t\t\t{pid, self()}, {gc_time, ElapsedTime}, {gc_result, GCResult}]);\n\t\tfalse ->\n\t\t\tok\n\tend,\n\t{noreply, State};\n\nhandle_info({fetched_last_moment_proof, _}, State) ->\n    %% This is a no-op to handle \"slow\" response from peers that were queried by\n\t%% `fetch_poa_from_peers`. 
Only the first peer to respond with a PoA will be handled,\n\t%% all other responses will fall through to here an be ignored.\n\t{noreply, State};\n\nhandle_info(Message, State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {module, ?MODULE}, {message, Message}]),\n\t{noreply, State}.\n\nterminate(Reason, _State) ->\n\t?LOG_INFO([{module, ?MODULE},{pid, self()},{callback, terminate},{reason, Reason}]),\n\tok.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\n\nallow_composite_packing(Height) ->\n\tHeight - ?POST_2_8_COMPOSITE_PACKING_DELAY_BLOCKS >= ar_fork:height_2_8()\n\t\tandalso Height - ?COMPOSITE_PACKING_EXPIRATION_PERIOD_BLOCKS < ar_fork:height_2_9().\n\nallow_replica_2_9_mining(Height) ->\n\tHeight >= ar_fork:height_2_9().\n\nget_worker(Key, State) ->\n\tmaps:get(Key, State#state.workers, not_found).\n\nset_difficulty(DiffPair, State) ->\n\tmaps:foreach(\n\t\tfun(_Partition, Worker) ->\n\t\t\tar_mining_worker:set_difficulty(Worker, DiffPair)\n\t\tend,\n\t\tState#state.workers\n\t),\n\tState#state{ diff_pair = DiffPair }.\n\nmaybe_update_sessions(SessionKey, State) ->\n\tCurrentActiveSessions = State#state.active_sessions,\n\tcase sets:is_element(SessionKey, CurrentActiveSessions) of\n\t\ttrue ->\n\t\t\tState;\n\t\tfalse ->\n\t\t\tNewActiveSessions = build_active_session_set(SessionKey, CurrentActiveSessions),\n\t\t\tcase sets:to_list(sets:subtract(NewActiveSessions, CurrentActiveSessions)) of\n\t\t\t\t[] ->\n\t\t\t\t\tState;\n\t\t\t\t_ ->\n\t\t\t\t\tupdate_sessions(NewActiveSessions, CurrentActiveSessions, State)\n\t\t\tend\n\tend.\n\nbuild_active_session_set(SessionKey, CurrentActiveSessions) ->\n\tCandidateSessions = [SessionKey | sets:to_list(CurrentActiveSessions)],\n\tSortedSessions = lists:sort(\n\t\tfun({_, StartIntervalA, _}, {_, StartIntervalB, _}) ->\n\t\t\tStartIntervalA > StartIntervalB\n\t\tend, CandidateSessions),\n\tbuild_active_session_set(SortedSessions).\n\nbuild_active_session_set([A, B | _]) ->\n\tsets:from_list([A, B]);\nbuild_active_session_set([A]) ->\n\tsets:from_list([A]);\nbuild_active_session_set([]) ->\n\tsets:new().\n\nupdate_sessions(NewActiveSessions, CurrentActiveSessions, State) ->\n\tAddedSessions = sets:to_list(sets:subtract(NewActiveSessions, CurrentActiveSessions)),\n\tRemovedSessions = sets:to_list(sets:subtract(CurrentActiveSessions, NewActiveSessions)),\n\n\tmaps:foreach(\n\t\tfun(_Partition, Worker) ->\n\t\t\tar_mining_worker:set_sessions(Worker, sets:to_list(NewActiveSessions))\n\t\tend,\n\t\tState#state.workers\n\t),\n\n\tState2 = add_sessions(AddedSessions, State),\n\tState3 = remove_sessions(RemovedSessions, State2),\n\n\tState3#state{ active_sessions = NewActiveSessions }.\n\nadd_sessions([], State) ->\n\tState;\nadd_sessions([SessionKey | AddedSessions], State) ->\n\t{NextSeed, StartIntervalNumber, NextVDFDifficulty} = SessionKey,\n\tar:console(\"Starting new mining session: \"\n\t\t\"next entropy nonce: ~s, interval number: ~B, next vdf difficulty: ~B.~n\",\n\t\t[ar_util:safe_encode(NextSeed), StartIntervalNumber, NextVDFDifficulty]),\n\t?LOG_INFO([{event, new_mining_session},\n\t\t{session_key, ar_nonce_limiter:encode_session_key(SessionKey)}]),\n\tadd_sessions(AddedSessions, add_seed(SessionKey, State)).\n\nremove_sessions([], State) ->\n\tState;\nremove_sessions([SessionKey | RemovedSessions], State) ->\n\tremove_sessions(RemovedSessions, remove_seed(SessionKey, State)).\n\nget_seed(SessionKey, State) 
->\n\tmaps:get(SessionKey, State#state.seeds, not_found).\n\nset_seed(SessionKey, Seed, State) ->\n\tState#state{ seeds = maps:put(SessionKey, Seed, State#state.seeds) }.\n\nremove_seed(SessionKey, State) ->\n\tState#state{ seeds = maps:remove(SessionKey, State#state.seeds) }.\n\nadd_seed(SessionKey, State) ->\n\tcase get_seed(SessionKey, State) of\n\t\tnot_found ->\n\t\t\tSession = ar_nonce_limiter:get_session(SessionKey),\n\t\t\tcase Session of\n\t\t\t\tnot_found ->\n\t\t\t\t\t?LOG_ERROR([{event, mining_session_not_found},\n\t\t\t\t\t\t{session_key, ar_nonce_limiter:encode_session_key(SessionKey)}]),\n\t\t\t\t\tState;\n\t\t\t\t_ ->\n\t\t\t\t\tset_seed(SessionKey, Session#vdf_session.seed, State)\n\t\t\tend;\n\t\t_ ->\n\t\t\tState\n\tend.\n\nupdate_cache_limits(State) ->\n\tNumActivePartitions = length(ar_mining_io:get_partitions()),\n\tupdate_cache_limits(NumActivePartitions, State).\n\nupdate_cache_limits(0, State) ->\n\tState;\nupdate_cache_limits(NumActivePartitions, State) ->\n\tLimits = calculate_cache_limits(NumActivePartitions, State#state.packing_difficulty),\n\tmaybe_update_cache_limits(Limits, State).\n\ncalculate_cache_limits(NumActivePartitions, PackingDifficulty) ->\n\tIdealRangesPerStep = 2,\n\tRecallRangeSize = ar_block:get_recall_range_size(PackingDifficulty),\n\n\tMinimumCacheLimitBytes = max(\n\t\t?MINIMUM_CACHE_LIMIT_BYTES,\n\t\t(?IDEAL_STEPS_PER_PARTITION * IdealRangesPerStep * RecallRangeSize * NumActivePartitions)\n\t),\n\n\t{ok, Config} = arweave_config:get_env(),\n\tOverallCacheLimitBytes = case Config#config.mining_cache_size_mb of\n\t\tundefined ->\n\t\t\tMinimumCacheLimitBytes;\n\t\tN ->\n\t\t\tN * ?MiB\n\tend,\n\n\t%% We shard the chunk cache across every active worker. Only workers that mine a partition\n\t%% included in the current weave are active.\n\tPartitionCacheLimitBytes = OverallCacheLimitBytes div NumActivePartitions,\n\n\t%% Allow enough compute_h0 tasks to be queued to completely refill the chunk cache.\n\tVDFQueueLimit = max(\n\t\t1,\n\t\tPartitionCacheLimitBytes div (2 * ar_block:get_recall_range_size(PackingDifficulty))\n\t),\n\n\tGarbageCollectionFrequency = 4 * VDFQueueLimit * 1000,\n\n\t{MinimumCacheLimitBytes, OverallCacheLimitBytes, PartitionCacheLimitBytes, VDFQueueLimit,\n\t\tGarbageCollectionFrequency}.\n\nmaybe_update_cache_limits({_, _, PartitionCacheLimit, _, _},\n\t\t#state{chunk_cache_limit = PartitionCacheLimit} = State) ->\n\tState;\nmaybe_update_cache_limits(Limits, State) ->\n\t{MinimumCacheLimitBytes, OverallCacheLimitBytes, PartitionCacheLimitBytes, VDFQueueLimit,\n\t\tGarbageCollectionFrequency} = Limits,\n\tmaps:foreach(\n\t\tfun(_Partition, Worker) ->\n\t\t\tar_mining_worker:set_cache_limits(\n\t\t\t\tWorker, PartitionCacheLimitBytes, VDFQueueLimit)\n\t\tend,\n\t\tState#state.workers\n\t),\n\n\tar:console(\n\t\t\"~nSetting the mining chunk cache size limit to ~B MiB \"\n\t\t\"(~B MiB per partition).~n\",\n\t\t\t[OverallCacheLimitBytes div ?MiB, PartitionCacheLimitBytes div ?MiB]),\n\t?LOG_INFO([{event, update_mining_cache_limits},\n\t\t{overall_limit_mb, OverallCacheLimitBytes div ?MiB},\n\t\t{per_partition_limit_mb, PartitionCacheLimitBytes div ?MiB},\n\t\t{vdf_queue_limit_steps, VDFQueueLimit}]),\n\t\tcase OverallCacheLimitBytes < MinimumCacheLimitBytes of\n\t\ttrue ->\n\t\t\tar:console(\"~nChunk cache size limit (~p MiB) is below minimum limit of \"\n\t\t\t\t\"~p MiB. 
Mining performance may be impacted.~n\"\n\t\t\t\t\"Consider changing the 'mining_cache_size_mb' option.\",\n\t\t\t\t[OverallCacheLimitBytes div ?MiB, MinimumCacheLimitBytes div ?MiB]);\n\t\tfalse -> ok\n\tend,\n\n\tState2 = reset_gc_timer(GarbageCollectionFrequency, State),\n\tState2#state{\n\t\tchunk_cache_limit = PartitionCacheLimitBytes\n\t}.\n\ndistribute_output(Candidate, State) ->\n\tdistribute_output(ar_mining_io:get_partitions(), Candidate, State).\n\ndistribute_output([], _Candidate, _State) ->\n\tok;\ndistribute_output([{_Partition, _MiningAddress, PackingDifficulty} | _Partitions],\n\t\t_Candidate, #state{ allow_composite_packing = false })\n\t\twhen PackingDifficulty >= 1, PackingDifficulty /= ?REPLICA_2_9_PACKING_DIFFICULTY ->\n\t%% Only mine with composite packing until some time after the fork 2.9.\n\tok;\ndistribute_output([{Partition, MiningAddress, PackingDifficulty} | Partitions],\n\t\tCandidate, State) ->\n\tcase get_worker({Partition, PackingDifficulty}, State) of\n\t\tnot_found ->\n\t\t\t?LOG_ERROR([{event, worker_not_found}, {partition, Partition}]),\n\t\t\tok;\n\t\tWorker ->\n\t\t\tar_mining_worker:add_task(\n\t\t\t\tWorker, compute_h0,\n\t\t\t\tCandidate#mining_candidate{\n\t\t\t\t\tpartition_number = Partition,\n\t\t\t\t\tmining_address = MiningAddress,\n\t\t\t\t\tpacking_difficulty = PackingDifficulty,\n\t\t\t\t\treplica_format\n\t\t\t\t\t\t= ar_mining_io:get_replica_format_from_packing_difficulty(PackingDifficulty)\n\t\t\t\t})\n\tend,\n\tdistribute_output(Partitions, Candidate, State).\n\nget_recall_bytes(H0, PartitionNumber, Nonce, PartitionUpperBound, PackingDifficulty) ->\n\t{RecallRange1Start, RecallRange2Start} = get_recall_range(H0,\n\t\t\tPartitionNumber, PartitionUpperBound),\n\tRecallByte1 = ar_block:get_recall_byte(RecallRange1Start, Nonce, PackingDifficulty),\n\tRecallByte2 = ar_block:get_recall_byte(RecallRange2Start, Nonce, PackingDifficulty),\n\t{RecallByte1, RecallByte2}.\n\nget_recall_range(H0, PartitionNumber, PartitionUpperBound) ->\n\tar_block:get_recall_range(H0, PartitionNumber, PartitionUpperBound).\n\nget_recall_range(H0, PartitionNumber, PartitionUpperBound, RecallRange1, RecallRange2) ->\n\tar_block:get_recall_range(H0, PartitionNumber, PartitionUpperBound,\n\t\t\tRecallRange1, RecallRange2).\n\nprepare_and_post_solution(#mining_candidate{} = Candidate, State) ->\n\t%% A solo miner builds a solution from a candidate here or a CM miner who received\n\t%% H2 from another peer reads chunk1 and sends it to the exit peer.\n\tSolution = prepare_solution_from_candidate(Candidate, State),\n\tRet = post_solution(Solution, State),\n\tmaybe_flush_solution_queue(Ret),\n\tRet;\nprepare_and_post_solution(#mining_solution{} = Solution, State) ->\n\t%% An exit peer receives a mining solution, possibly without the VDF data and chunk proofs.\n\tSolution2 = prepare_solution(Solution, State),\n\tRet = post_solution(Solution2, State),\n\tmaybe_flush_solution_queue(Ret),\n\tRet.\n\nprepare_solution(Solution, State) ->\n\t#state{ merkle_rebase_threshold = RebaseThreshold, is_pool_client = IsPoolClient } = State,\n\t#mining_solution{\n\t\tmining_address = MiningAddress, next_seed = NextSeed,\n\t\tnext_vdf_difficulty = NextVDFDifficulty, nonce = Nonce,\n\t\tnonce_limiter_output = NonceLimiterOutput, partition_number = PartitionNumber,\n\t\tpartition_upper_bound = PartitionUpperBound, poa1 = PoA1, poa2 = PoA2,\n\t\tpreimage = Preimage, seed = Seed, start_interval_number = StartIntervalNumber,\n\t\tstep_number = StepNumber, packing_difficulty = 
PackingDifficulty,\n\t\treplica_format = ReplicaFormat\n\t} = Solution,\n\tCandidate = #mining_candidate{\n\t\tmining_address = MiningAddress, next_seed = NextSeed,\n\t\tnext_vdf_difficulty = NextVDFDifficulty, nonce = Nonce,\n\t\tnonce_limiter_output = NonceLimiterOutput, partition_number = PartitionNumber,\n\t\tpartition_upper_bound = PartitionUpperBound, poa2 = PoA2,\n\t\tpreimage = Preimage, seed = Seed, start_interval_number = StartIntervalNumber,\n\t\tstep_number = StepNumber, packing_difficulty = PackingDifficulty,\n\t\treplica_format = ReplicaFormat\n\t},\n\tH0 = ar_block:compute_h0(NonceLimiterOutput, PartitionNumber,\n\t\t\tSeed, MiningAddress, PackingDifficulty),\n\tChunk1 = PoA1#poa.chunk,\n\t{H1, Preimage1} = ar_block:compute_h1(H0, Nonce, Chunk1),\n\tCandidate2 = Candidate#mining_candidate{\n\t\th0 = H0,\n\t\th1 = H1,\n\t\tchunk1 = Chunk1\n\t},\n\tCandidate3 =\n\t\tcase PoA2#poa.chunk of\n\t\t\t<<>> ->\n\t\t\t\tPreimage = Preimage1,\n\t\t\t\tCandidate2;\n\t\t\tChunk2 ->\n\t\t\t\t{H2, Preimage} = ar_block:compute_h2(H1, Chunk2, H0),\n\t\t\t\tCandidate2#mining_candidate{ h2 = H2, chunk2 = Chunk2 }\n\t\tend,\n\tSolution2 = Solution#mining_solution{ merkle_rebase_threshold = RebaseThreshold },\n\t%% A pool client does not validate VDF before sharing a solution.\n\tcase IsPoolClient of\n\t\ttrue ->\n\t\t\tprepare_solution(proofs, Candidate3, Solution2);\n\t\tfalse ->\n\t\t\tprepare_solution(last_step_checkpoints, Candidate3, Solution2)\n\tend.\n\nprepare_solution_from_candidate(Candidate, State) ->\n\t#state{ merkle_rebase_threshold = RebaseThreshold, is_pool_client = IsPoolClient } = State,\n\t#mining_candidate{\n\t\tmining_address = MiningAddress, next_seed = NextSeed,\n\t\tnext_vdf_difficulty = NextVDFDifficulty, nonce = Nonce,\n\t\tnonce_limiter_output = NonceLimiterOutput, partition_number = PartitionNumber,\n\t\tpartition_upper_bound = PartitionUpperBound, poa2 = PoA2, preimage = Preimage,\n\t\tseed = Seed, start_interval_number = StartIntervalNumber, step_number = StepNumber,\n\t\tpacking_difficulty = PackingDifficulty, replica_format = ReplicaFormat\n\t} = Candidate,\n\n\tSolution = #mining_solution{\n\t\tmining_address = MiningAddress,\n\t\tmerkle_rebase_threshold = RebaseThreshold,\n\t\tnext_seed = NextSeed,\n\t\tnext_vdf_difficulty = NextVDFDifficulty,\n\t\tnonce = Nonce,\n\t\tnonce_limiter_output = NonceLimiterOutput,\n\t\tpartition_number = PartitionNumber,\n\t\tpartition_upper_bound = PartitionUpperBound,\n\t\tpoa2 = PoA2,\n\t\tpreimage = Preimage,\n\t\tseed = Seed,\n\t\tstart_interval_number = StartIntervalNumber,\n\t\tstep_number = StepNumber,\n\t\tpacking_difficulty = PackingDifficulty,\n\t\treplica_format = ReplicaFormat\n\t},\n\t%% A pool client does not validate VDF before sharing a solution.\n\tcase IsPoolClient of\n\t\ttrue ->\n\t\t\tprepare_solution(proofs, Candidate, Solution);\n\t\tfalse ->\n\t\t\tprepare_solution(last_step_checkpoints, Candidate, Solution)\n\tend.\n\nprepare_solution(last_step_checkpoints, Candidate, Solution) ->\n\t#mining_candidate{\n\t\tnext_seed = NextSeed, next_vdf_difficulty = NextVDFDifficulty,\n\t\tstart_interval_number = StartIntervalNumber, step_number = StepNumber } = Candidate,\n\tLastStepCheckpoints = ar_nonce_limiter:get_step_checkpoints(\n\t\t\tStepNumber, NextSeed, StartIntervalNumber, NextVDFDifficulty),\n\tLastStepCheckpoints2 =\n\t\tcase LastStepCheckpoints of\n\t\t\tnot_found ->\n\t\t\t\t?LOG_WARNING([{event,\n\t\t\t\t\t\tfound_solution_but_failed_to_find_last_step_checkpoints}]),\n\t\t\t\t[];\n\t\t\t_ 
->\n\t\t\t\tLastStepCheckpoints\n\t\tend,\n\tprepare_solution(steps, Candidate, Solution#mining_solution{\n\t\t\tlast_step_checkpoints = LastStepCheckpoints2 });\n\nprepare_solution(steps, Candidate, Solution) ->\n\t#mining_candidate{ step_number = StepNumber } = Candidate,\n\t[{_, TipNonceLimiterInfo}] = ets:lookup(node_state, nonce_limiter_info),\n\t#nonce_limiter_info{ global_step_number = PrevStepNumber, seed = PrevSeed,\n\t\t\tnext_seed = PrevNextSeed,\n\t\t\tnext_vdf_difficulty = PrevNextVDFDifficulty } = TipNonceLimiterInfo,\n\tcase StepNumber > PrevStepNumber of\n\t\ttrue ->\n\t\t\tSteps = ar_nonce_limiter:get_steps(\n\t\t\t\t\tPrevStepNumber, StepNumber, PrevNextSeed, PrevNextVDFDifficulty),\n\t\t\tcase Steps of\n\t\t\t\tnot_found ->\n\t\t\t\t\tCurrentSessionKey = ar_nonce_limiter:session_key(TipNonceLimiterInfo),\n\t\t\t\t\tSolutionSessionKey = Candidate#mining_candidate.session_key,\n\t\t\t\t\tLogData = [\n\t\t\t\t\t\t{current_session_key,\n\t\t\t\t\t\t\tar_nonce_limiter:encode_session_key(CurrentSessionKey)},\n\t\t\t\t\t\t{solution_session_key,\n\t\t\t\t\t\t\tar_nonce_limiter:encode_session_key(SolutionSessionKey)},\n\t\t\t\t\t\t{start_step_number, PrevStepNumber},\n\t\t\t\t\t\t{next_step_number, StepNumber},\n\t\t\t\t\t\t{seed, ar_util:safe_encode(PrevSeed)},\n\t\t\t\t\t\t{next_seed, ar_util:safe_encode(PrevNextSeed)},\n\t\t\t\t\t\t{next_vdf_difficulty, PrevNextVDFDifficulty},\n\t\t\t\t\t\t{h1, ar_util:safe_encode(Candidate#mining_candidate.h1)},\n\t\t\t\t\t\t{h2, ar_util:safe_encode(Candidate#mining_candidate.h2)}],\n\t\t\t\t\t?LOG_INFO([{event, found_solution_but_failed_to_find_checkpoints}\n\t\t\t\t\t\t| LogData]),\n\t\t\t\t\tmay_be_leave_it_to_exit_peer(\n\t\t\t\t\t\t\tprepare_solution(proofs, Candidate,\n\t\t\t\t\t\t\t\t\tSolution#mining_solution{ steps = [] }),\n\t\t\t\t\t\t\tstep_checkpoints_not_found, LogData);\n\t\t\t\t_ ->\n\t\t\t\t\tprepare_solution(proofs, Candidate,\n\t\t\t\t\t\t\tSolution#mining_solution{ steps = Steps })\n\t\t\tend;\n\t\tfalse ->\n\t\t\tlog_prepare_solution_failure(Solution, stale, stale_step_number, miner, [\n\t\t\t\t\t{start_step_number, PrevStepNumber},\n\t\t\t\t\t{next_step_number, StepNumber},\n\t\t\t\t\t{next_seed, ar_util:safe_encode(PrevNextSeed)},\n\t\t\t\t\t{next_vdf_difficulty, PrevNextVDFDifficulty},\n\t\t\t\t\t{h1, ar_util:safe_encode(Candidate#mining_candidate.h1)},\n\t\t\t\t\t{h2, ar_util:safe_encode(Candidate#mining_candidate.h2)}\n\t\t\t\t\t]),\n\t\t\terror\n\tend;\n\nprepare_solution(proofs, Candidate, Solution) ->\n\t#mining_candidate{\n\t\th0 = H0, h1 = H1, h2 = H2, partition_number = PartitionNumber,\n\t\tpartition_upper_bound = PartitionUpperBound,\n\t\tpacking_difficulty = PackingDifficulty,\n\t\tnonce = Nonce,\n\t\tseed = Seed,\n\t\tmining_address = MiningAddress,\n\t\tnonce_limiter_output = NonceLimiterOutput,\n\t\tchunk2 = Chunk2\n\t} = Candidate,\n\t#mining_solution{ poa1 = PoA1, poa2 = PoA2 } = Solution,\n\t{RecallByte1, RecallByte2} = get_recall_bytes(H0, PartitionNumber, Nonce,\n\t\t\tPartitionUpperBound, PackingDifficulty),\n\tExpectedH0 = ar_block:compute_h0(NonceLimiterOutput,\n\t\t\tPartitionNumber, Seed, MiningAddress,\n\t\t\tPackingDifficulty),\n\tcase {H0, H1, H2} of\n\t\t{_, not_set, not_set} ->\n\t\t\t%% We should never end up here..\n\t\t\tlog_prepare_solution_failure(Solution, rejected, h1_h2_not_set, miner, []),\n\t\t\terror;\n\t\t{ExpectedH0, _H1, not_set} ->\n\t\t\tprepare_solution(poa1, Candidate, Solution#mining_solution{\n\t\t\t\tsolution_hash = H1, recall_byte1 = RecallByte1,\n\t\t\t\tpoa1 = 
may_be_empty_poa(PoA1), poa2 = #poa{} });\n\t\t{_H0, _H1, not_set} ->\n\t\t\tlog_prepare_solution_failure(Solution, rejected, incorrect_h0, miner, []),\n\t\t\terror;\n\t\t{ExpectedH0, _H1, _H2} ->\n\t\t\tcase is_h2_valid(Chunk2, H0, H1, H2) of\n\t\t\t\ttrue ->\n\t\t\t\t\tprepare_solution(poa2, Candidate, Solution#mining_solution{\n\t\t\t\t\t\tsolution_hash = H2, recall_byte1 = RecallByte1, recall_byte2 = RecallByte2,\n\t\t\t\t\t\tpoa1 = may_be_empty_poa(PoA1), poa2 = may_be_empty_poa(PoA2) });\n\t\t\t\tfalse ->\n\t\t\t\t\tlog_prepare_solution_failure(Solution, rejected, incorrect_h2, miner, []),\n\t\t\t\t\terror\n\t\t\tend;\n\t\t_ ->\n\t\t\tlog_prepare_solution_failure(Solution, rejected, incorrect_h0, miner, []),\n\t\t\terror\n\tend;\n\nprepare_solution(poa1, Candidate, Solution) ->\n\t#mining_solution{\n\t\tpoa1 = CurrentPoA1, recall_byte1 = RecallByte1,\n\t\tmining_address = MiningAddress, packing_difficulty = PackingDifficulty,\n\t\treplica_format = ReplicaFormat\n\t} = Solution,\n\t#mining_candidate{ h0 = H0, h1 = H1, chunk1 = Chunk1, nonce = Nonce,\n\t\t\tpartition_number = PartitionNumber } = Candidate,\n\n\tcase prepare_poa(poa1, Candidate, CurrentPoA1) of\n\t\t{ok, PoA1} ->\n\t\t\tcase is_h1_valid(Chunk1, PoA1, H0, H1, Nonce) of\n\t\t\t\ttrue ->\n\t\t\t\t\tSolution#mining_solution{ poa1 = PoA1 };\n\t\t\t\tfalse ->\n\t\t\t\t\tlog_prepare_solution_failure(Solution, rejected, incorrect_h1, miner, []),\n\t\t\t\t\terror\n\t\t\tend;\n\t\t{error, Error} ->\n\t\t\tModules = ar_storage_module:get_all(RecallByte1 + 1),\n\t\t\tModuleIDs = [ar_storage_module:id(Module) || Module <- Modules],\n\t\t\tLogData = [{recall_byte, RecallByte1},\n\t\t\t\t\t{modules_covering_recall_byte, ModuleIDs},\n\t\t\t\t\t{fetch_proofs_error, io_lib:format(\"~p\", [Error])},\n\t\t\t\t\t{nonce, Nonce},\n\t\t\t\t\t{partition_number, PartitionNumber}],\n\t\t\tcase Chunk1 of\n\t\t\t\tnot_set ->\n\t\t\t\t\tPacking = ar_block:get_packing(PackingDifficulty, MiningAddress,\n\t\t\t\t\t\t\tReplicaFormat),\n\t\t\t\t\t?LOG_WARNING([{event, failed_to_find_poa1_proofs_for_h2_solution},\n\t\t\t\t\t\t\t{error, io_lib:format(\"~p\", [Error])},\n\t\t\t\t\t\t\t{tags, [solution_proofs]} | LogData]),\n\t\t\t\t\tcase ar_storage_module:get(RecallByte1 + 1, Packing) of\n\t\t\t\t\t\t{_BucketSize, _Bucket, Packing} = StorageModule ->\n\t\t\t\t\t\t\tStoreID = ar_storage_module:id(StorageModule),\n\t\t\t\t\t\t\tcase ar_chunk_storage:get(RecallByte1, StoreID) of\n\t\t\t\t\t\t\t\tnot_found ->\n\t\t\t\t\t\t\t\t\tlog_prepare_solution_failure(Solution,\n\t\t\t\t\t\t\t\t\t\t\trejected, chunk1_for_h2_solution_not_found, miner,\n\t\t\t\t\t\t\t\t\t\t\tLogData),\n\t\t\t\t\t\t\t\t\terror;\n\t\t\t\t\t\t\t\t{_EndOffset, Chunk} ->\n\t\t\t\t\t\t\t\t\tSubChunk = get_sub_chunk(Chunk, PackingDifficulty, Nonce),\n\t\t\t\t\t\t\t\t\t%% If we are a coordinated miner and not an exit node -\n\t\t\t\t\t\t\t\t\t%% the exit node will fetch the proofs.\n\t\t\t\t\t\t\t\t\tmay_be_leave_it_to_exit_peer(\n\t\t\t\t\t\t\t\t\t\t\tSolution#mining_solution{\n\t\t\t\t\t\t\t\t\t\t\t\tpoa1 = #poa{ chunk = SubChunk } },\n\t\t\t\t\t\t\t\t\t\t\tchunk1_proofs_for_h2_solution_not_found,\n\t\t\t\t\t\t\t\t\t\t\tLogData)\n\t\t\t\t\t\t\tend;\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\tlog_prepare_solution_failure(Solution, rejected,\n\t\t\t\t\t\t\t\t\tstorage_module_for_chunk1_for_h2_solution_not_found, miner,\n\t\t\t\t\t\t\t\t\tLogData),\n\t\t\t\t\t\t\terror\n\t\t\t\t\tend;\n\t\t\t\t_ ->\n\t\t\t\t\t%% If we are a coordinated miner and not an exit node - the exit\n\t\t\t\t\t%% node will fetch the 
proofs.\n\t\t\t\t\tmay_be_leave_it_to_exit_peer(\n\t\t\t\t\t\tSolution#mining_solution{ poa1 = #poa{ chunk = Chunk1 } },\n\t\t\t\t\t\tchunk1_proofs_not_found,\n\t\t\t\t\t\tLogData)\n\t\t\tend\n\tend;\n\nprepare_solution(poa2, Candidate, Solution) ->\n\t#mining_solution{ poa2 = CurrentPoA2, recall_byte2 = RecallByte2 } = Solution,\n\t#mining_candidate{ chunk2 = Chunk2 } = Candidate,\n\n\tcase prepare_poa(poa2, Candidate, CurrentPoA2) of\n\t\t{ok, PoA2} ->\n\t\t\tprepare_solution(poa1, Candidate, Solution#mining_solution{ poa2 = PoA2 });\n\t\t{error, _Error} ->\n\t\t\tModules = ar_storage_module:get_all(RecallByte2 + 1),\n\t\t\tModuleIDs = [ar_storage_module:id(Module) || Module <- Modules],\n\t\t\tLogData = [{recall_byte2, RecallByte2}, {modules_covering_recall_byte, ModuleIDs}],\n\t\t\t%% If we are a coordinated miner and not an exit node - the exit\n\t\t\t%% node will fetch the proofs.\n\t\t\tmay_be_leave_it_to_exit_peer(\n\t\t\t\tprepare_solution(poa1, Candidate,\n\t\t\t\t\t\tSolution#mining_solution{\n\t\t\t\t\t\t\t\tpoa2 = #poa{ chunk = Chunk2 } }),\n\t\t\t\tchunk2_proofs_not_found, LogData)\n\tend.\n\nprepare_poa(PoAType, Candidate, CurrentPoA) ->\n\t#mining_candidate{\n\t\tpacking_difficulty = PackingDifficulty,\n\t\treplica_format = ReplicaFormat,\n\t\tmining_address = MiningAddress,\n\t\tnonce = Nonce, partition_number = PartitionNumber,\n\t\tpartition_upper_bound = PartitionUpperBound,\n\t\th0 = H0,\n\t\tchunk1 = Chunk1, chunk2 = Chunk2\n\t} = Candidate,\n\t{RecallByte1, RecallByte2} = get_recall_bytes(H0, PartitionNumber, Nonce,\n\t\t\tPartitionUpperBound, PackingDifficulty),\n\n\t{RecallByte, Chunk} = case PoAType of\n\t\tpoa1 -> {RecallByte1, Chunk1};\n\t\tpoa2 -> {RecallByte2, Chunk2}\n\tend,\n\n\tPacking = ar_block:get_packing(PackingDifficulty, MiningAddress, ReplicaFormat),\n\tcase is_poa_complete(CurrentPoA, PackingDifficulty) of\n\t\ttrue ->\n\t\t\t{ok, CurrentPoA};\n\t\tfalse ->\n\t\t\tcase read_poa(RecallByte, Chunk, Packing, Nonce) of\n\t\t\t\t{ok, PoA} ->\n\t\t\t\t\t{ok, PoA};\n\t\t\t\t{error, Error} ->\n\t\t\t\t\tModules = ar_storage_module:get_all(RecallByte + 1),\n\t\t\t\t\tModuleIDs = [ar_storage_module:id(Module) || Module <- Modules],\n\t\t\t\t\t?LOG_INFO([{event, failed_to_find_poa_proofs_locally},\n\t\t\t\t\t\t\t{poa, PoAType},\n\t\t\t\t\t\t\t{error, io_lib:format(\"~p\", [Error])},\n\t\t\t\t\t\t\t{tags, [solution_proofs]},\n\t\t\t\t\t\t\t{recall_byte, RecallByte},\n\t\t\t\t\t\t\t{packing, ar_serialize:encode_packing(Packing, true)},\n\t\t\t\t\t\t\t{packing_difficulty, PackingDifficulty},\n\t\t\t\t\t\t\t{modules_covering_recall_byte, ModuleIDs}]),\n\t\t\t\t\tChunkBinary = case Chunk of\n\t\t\t\t\t\tnot_set ->\n\t\t\t\t\t\t\t<<>>;\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\tChunk\n\t\t\t\t\tend,\n\t\t\t\t\tcase fetch_poa_from_peers(RecallByte, PackingDifficulty) of\n\t\t\t\t\t\tnot_found ->\n\t\t\t\t\t\t\t?LOG_INFO([{event, failed_to_fetch_proofs_from_peers},\n\t\t\t\t\t\t\t\t\t{tags, [solution_proofs]},\n\t\t\t\t\t\t\t\t\t{poa, PoAType},\n\t\t\t\t\t\t\t\t\t{recall_byte, RecallByte},\n\t\t\t\t\t\t\t\t\t{nonce, Nonce},\n\t\t\t\t\t\t\t\t\t{partition, PartitionNumber},\n\t\t\t\t\t\t\t\t\t{mining_address, ar_util:safe_encode(MiningAddress)},\n\t\t\t\t\t\t\t\t\t{packing, ar_serialize:encode_packing(Packing, true)},\n\t\t\t\t\t\t\t\t\t{packing_difficulty, PackingDifficulty}]),\n\t\t\t\t\t\t\t{error, Error};\n\t\t\t\t\t\tPoA ->\n\t\t\t\t\t\t\t{ok, PoA#poa{ chunk = ChunkBinary }}\n\t\t\t\t\tend\n\t\t\tend\n\tend.\n\n%% Check if the chunk proof has been assembed already by the 
mining nodes.\n%% If not, we are the exit node and we will fetch the missing data.\nis_poa_complete(#poa{ chunk = <<>> }, _) ->\n\tfalse;\nis_poa_complete(#poa{ data_path = <<>> }, _) ->\n\tfalse;\nis_poa_complete(#poa{ tx_path = <<>> }, _) ->\n\tfalse;\nis_poa_complete(#poa{ unpacked_chunk = <<>> }, PackingDifficulty)\n\t\twhen PackingDifficulty >= 1 ->\n\tfalse;\nis_poa_complete(_, _) ->\n\ttrue.\n\nmay_be_leave_it_to_exit_peer(error, _FailureReason, _AdditionalLogData) ->\n\terror;\nmay_be_leave_it_to_exit_peer(Solution, FailureReason, AdditionalLogData) ->\n\tcase ar_coordination:is_coordinated_miner() andalso\n\t\t\tnot ar_coordination:is_exit_peer() of\n\t\ttrue ->\n\t\t\tSolution;\n\t\tfalse ->\n\t\t\tlog_prepare_solution_failure(\n\t\t\t\tSolution, rejected, FailureReason, miner, AdditionalLogData),\n\t\t\terror\n\tend.\n\nis_h1_valid(Chunk, PoA, H0, H1, Nonce) ->\n\tChunk1 = case Chunk of\n\t\tnot_set ->\n\t\t\tPoA#poa.chunk;\n\t\t_ ->\n\t\t\tChunk\n\tend,\n\t{ExpectedH1, _} = ar_block:compute_h1(H0, Nonce, Chunk1),\n\tH1 == ExpectedH1.\n\nis_h2_valid(Chunk, H0, H1, H2) ->\n\t{ExpectedH2, _} =\n\t\tcase Chunk of\n\t\t\tnot_set ->\n\t\t\t\t{H2, not_set};\n\t\t\t_ ->\n\t\t\t\tar_block:compute_h2(H1, Chunk, H0)\n\t\tend,\n\tH2 == ExpectedH2.\n\npost_solution(error, _State) ->\n\t?LOG_WARNING([{event, found_solution_but_could_not_build_a_block}]),\n\terror;\npost_solution(Solution, State) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tpost_solution(Config#config.cm_exit_peer, Solution, State).\n\npost_solution(not_set, Solution, #state{ is_pool_client = true }) ->\n\t%% When posting a partial solution the pool client will skip many of the validation steps\n\t%% that are normally performed before sharing a solution.\n\tar_pool:post_partial_solution(Solution),\n\tok;\npost_solution(not_set, Solution, State) ->\n\t#state{ diff_pair = DiffPair } = State,\n\t#mining_solution{\n\t\tmining_address = MiningAddress, nonce_limiter_output = NonceLimiterOutput,\n\t\tpartition_number = PartitionNumber, recall_byte1 = RecallByte1,\n\t\trecall_byte2 = RecallByte2,\n\t\tsolution_hash = H, step_number = StepNumber } = Solution,\n\tcase validate_solution(Solution, DiffPair) of\n\t\terror ->\n\t\t\t?LOG_WARNING([{event, failed_to_validate_solution},\n\t\t\t\t\t{partition, PartitionNumber},\n\t\t\t\t\t{step_number, StepNumber},\n\t\t\t\t\t{mining_address, ar_util:safe_encode(MiningAddress)},\n\t\t\t\t\t{recall_byte1, RecallByte1},\n\t\t\t\t\t{recall_byte2, RecallByte2},\n\t\t\t\t\t{solution_h, ar_util:safe_encode(H)},\n\t\t\t\t\t{nonce_limiter_output, ar_util:safe_encode(NonceLimiterOutput)},\n\t\t\t\t\t{diff_pair, DiffPair}]),\n\t\t\tar:console(\"WARNING: we failed to validate our solution. Check logs for more \"\n\t\t\t\t\t\"details~n\"),\n\t\t\terror;\n\t\t{false, Reason} ->\n\t\t\tar_events:send(solution, {rejected,\n\t\t\t\t\t#{ reason => Reason, source => miner }}),\n\t\t\t?LOG_WARNING([{event, found_invalid_solution},\n\t\t\t\t\t{reason, Reason},\n\t\t\t\t\t{partition, PartitionNumber},\n\t\t\t\t\t{step_number, StepNumber},\n\t\t\t\t\t{mining_address, ar_util:safe_encode(MiningAddress)},\n\t\t\t\t\t{recall_byte1, RecallByte1},\n\t\t\t\t\t{recall_byte2, RecallByte2},\n\t\t\t\t\t{solution_h, ar_util:safe_encode(H)},\n\t\t\t\t\t{nonce_limiter_output, ar_util:safe_encode(NonceLimiterOutput)},\n\t\t\t\t\t{diff_pair, DiffPair}]),\n\t\t\tar:console(\"WARNING: the solution we found is invalid. 
Check logs for more \"\n\t\t\t\t\t\"details~n\"),\n\t\t\terror;\n\t\t{true, PoACache, PoA2Cache} ->\n\t\t\tar_node_worker:found_solution(miner, Solution, PoACache, PoA2Cache),\n\t\t\tok\n\tend;\npost_solution(ExitPeer, Solution, #state{ is_pool_client = true }) ->\n\tcase ar_http_iface_client:post_partial_solution(ExitPeer, Solution) of\n\t\t{ok, _} ->\n\t\t\tok;\n\t\t{error, Reason} ->\n\t\t\t?LOG_WARNING([{event, found_partial_solution_but_failed_to_reach_exit_node},\n\t\t\t\t\t{reason, io_lib:format(\"~p\", [Reason])}]),\n\t\t\tar:console(\"We found a partial solution but failed to reach the exit node, \"\n\t\t\t\t\t\"error: ~p.\", [io_lib:format(\"~p\", [Reason])]),\n\t\t\terror\n\tend;\npost_solution(ExitPeer, Solution, _State) ->\n\tcase ar_http_iface_client:cm_publish_send(ExitPeer, Solution) of\n\t\t{ok, _} ->\n\t\t\tok;\n\t\t{error, Reason} ->\n\t\t\t?LOG_WARNING([{event, found_solution_but_failed_to_reach_exit_node},\n\t\t\t\t\t{reason, io_lib:format(\"~p\", [Reason])}]),\n\t\t\tar:console(\"We found a solution but failed to reach the exit node, \"\n\t\t\t\t\t\"error: ~p.\", [io_lib:format(\"~p\", [Reason])]),\n\t\t\terror\n\tend.\n\n-ifdef(AR_TEST).\n%% @doc During tests the miner can mine so many solutions in parallel\n%% that it fills up the ar_mining_server queue and can cause flaky timeouts.\n%% To avoid this we'll flush the queue after a successful solution is posted.\nmaybe_flush_solution_queue(ok) ->\n\tflush_solution_queue(),\n\tok;\nmaybe_flush_solution_queue(_Other) ->\n\tok.\n\nflush_solution_queue() ->\n\treceive\n\t\t{'$gen_cast', {prepare_and_post_solution, _}} ->\n\t\t\tflush_solution_queue()\n\tafter 0 ->\n\t\tok\n\tend.\n-else.\nmaybe_flush_solution_queue(_Ret) ->\n\tok.\n-endif.\n\nmay_be_empty_poa(not_set) ->\n\t#poa{};\nmay_be_empty_poa(#poa{} = PoA) ->\n\tPoA.\n\nfetch_poa_from_peers(_RecallByte, PackingDifficulty) when PackingDifficulty >= 1 ->\n\tnot_found;\nfetch_poa_from_peers(RecallByte, _PackingDifficulty) ->\n\tBucketPeers = ar_data_discovery:get_bucket_peers(RecallByte div ?NETWORK_DATA_BUCKET_SIZE),\n\tPeers = ar_data_discovery:pick_peers(BucketPeers, ?QUERY_BEST_PEERS_COUNT),\n\tFrom = self(),\n\tlists:foreach(\n\t\tfun(Peer) ->\n\t\t\tspawn(\n\t\t\t\tfun() ->\n\t\t\t\t\t?LOG_INFO([{event, last_moment_proof_search},\n\t\t\t\t\t\t\t{peer, ar_util:format_peer(Peer)}, {recall_byte, RecallByte}]),\n\t\t\t\t\tcase fetch_poa_from_peer(Peer, RecallByte) of\n\t\t\t\t\t\tnot_found ->\n\t\t\t\t\t\t\tok;\n\t\t\t\t\t\tPoA ->\n\t\t\t\t\t\t\tFrom ! {fetched_last_moment_proof, PoA}\n\t\t\t\t\tend\n\t\t\t\tend)\n\t\tend,\n\t\tPeers\n\t),\n\treceive\n         %% The first spawned process to fetch a PoA from a peer will trigger this `receive`\n\t\t %% and allow `fetch_poa_from_peers` to exit. 
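If no peer responds within\n\t\t %% ?FETCH_POA_FROM_PEERS_TIMEOUT_MS the function returns not_found. 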
All other processes that complete later\n\t\t %% will trigger the\n\t\t %% `handle_info({fetched_last_moment_proof, _}, State) ->` above (which is a no-op).\n\t\t{fetched_last_moment_proof, PoA} ->\n\t\t\tPoA\n\t\tafter ?FETCH_POA_FROM_PEERS_TIMEOUT_MS ->\n\t\t\tnot_found\n\tend.\n\nfetch_poa_from_peer(Peer, RecallByte) ->\n\tcase ar_http_iface_client:get_chunk_binary(Peer, RecallByte + 1, any) of\n\t\t{ok, #{ data_path := DataPath, tx_path := TXPath }, _, _} ->\n\t\t\t#poa{ data_path = DataPath, tx_path = TXPath };\n\t\t_ ->\n\t\t\tnot_found\n\tend.\n\nhandle_computed_output(SessionKey, StepNumber, Output, PartitionUpperBound,\n\t\tPartialDiff, State) ->\n\ttrue = is_integer(StepNumber),\n\tar_mining_stats:vdf_computed(),\n\n\tState2 = case ar_mining_io:set_largest_seen_upper_bound(PartitionUpperBound) of\n\t\ttrue ->\n\t\t\t%% If the largest seen upper bound changed, a new partition may have been added\n\t\t\t%% to the mining set, so we may need to update the chunk cache size limit.\n\t\t\tupdate_cache_limits(State);\n\t\tfalse ->\n\t\t\tState\n\tend,\n\n\tState3 = maybe_update_sessions(SessionKey, State2),\n\n\tcase sets:is_element(SessionKey, State3#state.active_sessions) of\n\t\tfalse ->\n\t\t\t?LOG_DEBUG([{event, mining_debug_skipping_vdf_output}, {reason, stale_session},\n\t\t\t\t{step_number, StepNumber},\n\t\t\t\t{session_key, ar_nonce_limiter:encode_session_key(SessionKey)},\n\t\t\t\t{active_sessions, encode_sessions(sets:to_list(State#state.active_sessions))}]);\n\t\ttrue ->\n\t\t\t{NextSeed, StartIntervalNumber, NextVDFDifficulty} = SessionKey,\n\t\t\tCandidate = #mining_candidate{\n\t\t\t\tsession_key = SessionKey,\n\t\t\t\tseed = get_seed(SessionKey, State3),\n\t\t\t\tnext_seed = NextSeed,\n\t\t\t\tnext_vdf_difficulty = NextVDFDifficulty,\n\t\t\t\tstart_interval_number = StartIntervalNumber,\n\t\t\t\tstep_number = StepNumber,\n\t\t\t\tnonce_limiter_output = Output,\n\t\t\t\tpartition_upper_bound = PartitionUpperBound,\n\t\t\t\tcm_diff = PartialDiff\n\t\t\t},\n\t\t\tprometheus_gauge:inc(mining_vdf_step),\n\t\t\tdistribute_output(Candidate, State3),\n\t\t\t?LOG_DEBUG([{event, mining_debug_processing_vdf_output},\n\t\t\t\t{step_number, StepNumber}, {output, ar_util:safe_encode(Output)},\n\t\t\t\t{start_interval_number, StartIntervalNumber},\n\t\t\t\t{session_key, ar_nonce_limiter:encode_session_key(SessionKey)},\n\t\t\t\t{partition_upper_bound, PartitionUpperBound}])\n\tend,\n\t{noreply, State3}.\n\nread_poa(RecallByte, ChunkOrSubChunk, Packing, Nonce) ->\n\tPoAReply = read_poa(RecallByte, Packing),\n\tcase {ChunkOrSubChunk, PoAReply, Packing} of\n\t\t{not_set, {ok, #poa{ chunk = Chunk } = PoA}, {replica_2_9, _}} ->\n\t\t\tPackingDifficulty = ?REPLICA_2_9_PACKING_DIFFICULTY,\n\t\t\tSubChunk = get_sub_chunk(Chunk, PackingDifficulty, Nonce),\n\t\t\t{ok, PoA#poa{ chunk = SubChunk }};\n\t\t{not_set, {ok, #poa{ chunk = Chunk } = PoA}, {composite, _, PackingDifficulty}} ->\n\t\t\tSubChunk = get_sub_chunk(Chunk, PackingDifficulty, Nonce),\n\t\t\t{ok, PoA#poa{ chunk = SubChunk }};\n\t\t{_ChunkOrSubChunk, {ok, #poa{ chunk = Chunk } = PoA}, {replica_2_9, _}} ->\n\t\t\tcase sub_chunk_belongs_to_chunk(ChunkOrSubChunk, Chunk) of\n\t\t\t\ttrue ->\n\t\t\t\t\t{ok, PoA#poa{ chunk = ChunkOrSubChunk }};\n\t\t\t\tfalse ->\n\t\t\t\t\tdump_invalid_solution_data({sub_chunk_mismatch, RecallByte,\n\t\t\t\t\t\t\tChunkOrSubChunk, PoA, Packing, PoAReply, Nonce}),\n\t\t\t\t\t{error, sub_chunk_mismatch};\n\t\t\t\tError2 ->\n\t\t\t\t\tError2\n\t\t\tend;\n\t\t{_ChunkOrSubChunk, {ok, #poa{ chunk = Chunk } = 
PoA}, {composite, _, _}} ->\n\t\t\tcase sub_chunk_belongs_to_chunk(ChunkOrSubChunk, Chunk) of\n\t\t\t\ttrue ->\n\t\t\t\t\t{ok, PoA#poa{ chunk = ChunkOrSubChunk }};\n\t\t\t\tfalse ->\n\t\t\t\t\tdump_invalid_solution_data({sub_chunk_mismatch, RecallByte,\n\t\t\t\t\t\t\tChunkOrSubChunk, PoA, Packing, PoAReply, Nonce}),\n\t\t\t\t\t{error, sub_chunk_mismatch};\n\t\t\t\tError2 ->\n\t\t\t\t\tError2\n\t\t\tend;\n\t\t{not_set, {ok, #poa{} = PoA}, _Packing} ->\n\t\t\t{ok, PoA};\n\t\t{_ChunkOrSubChunk, {ok, #poa{ chunk = ChunkOrSubChunk } = PoA}, _Packing} ->\n\t\t\t{ok, PoA};\n\t\t{_ChunkOrSubChunk, {ok, #poa{} = PoA}, _Packing} ->\n\t\t\tdump_invalid_solution_data({chunk_mismatch, RecallByte,\n\t\t\t\t\tChunkOrSubChunk, PoA, Packing, PoAReply, Nonce}),\n\t\t\t{error, chunk_mismatch};\n\t\t{_ChunkOrSubChunk, Error, _Packing} ->\n\t\t\tError\n\tend.\n\ndump_invalid_solution_data(Data) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tID = binary_to_list(ar_util:encode(crypto:strong_rand_bytes(16))),\n\tFile = filename:join(Config#config.data_dir, \"invalid_solution_data_dump_\" ++ ID),\n\tfile:write_file(File, term_to_binary(Data)).\n\nget_sub_chunk(Chunk, 0, _Nonce) ->\n\tChunk;\nget_sub_chunk(Chunk, PackingDifficulty, Nonce) ->\n\tSubChunkSize = ?COMPOSITE_PACKING_SUB_CHUNK_SIZE,\n\tSubChunkIndex = ar_block:get_sub_chunk_index(PackingDifficulty, Nonce),\n\tSubChunkStartOffset = SubChunkSize * SubChunkIndex,\n\tbinary:part(Chunk, SubChunkStartOffset, SubChunkSize).\n\nsub_chunk_belongs_to_chunk(SubChunk,\n\t\t<< SubChunk:?COMPOSITE_PACKING_SUB_CHUNK_SIZE/binary, _Rest/binary >>) ->\n\ttrue;\nsub_chunk_belongs_to_chunk(SubChunk,\n\t\t<< _SubChunk:?COMPOSITE_PACKING_SUB_CHUNK_SIZE/binary, Rest/binary >>) ->\n\tsub_chunk_belongs_to_chunk(SubChunk, Rest);\nsub_chunk_belongs_to_chunk(_SubChunk, <<>>) ->\n\tfalse;\nsub_chunk_belongs_to_chunk(_SubChunk, _Chunk) ->\n\t{error, uneven_chunk}.\n\nread_poa(RecallByte, Packing) ->\n\tOptions = #{ pack => true, packing => Packing, origin => miner },\n\tcase ar_data_sync:get_chunk(RecallByte + 1, Options) of\n\t\t{ok, Proof} ->\n\t\t\t#{ chunk := Chunk, tx_path := TXPath, data_path := DataPath } = Proof,\n\t\t\tcase get_packing_type(Packing) of\n\t\t\t\tType when Type == replica_2_9; Type == composite ->\n\t\t\t\t\tcase maps:get(unpacked_chunk, Proof, not_found) of\n\t\t\t\t\t\tnot_found ->\n\t\t\t\t\t\t\tread_unpacked_chunk(RecallByte, Proof);\n\t\t\t\t\t\tUnpackedChunk ->\n\t\t\t\t\t\t\t{ok, #poa{ option = 1, chunk = Chunk,\n\t\t\t\t\t\t\t\tunpacked_chunk = ar_packing_server:pad_chunk(UnpackedChunk),\n\t\t\t\t\t\t\t\ttx_path = TXPath, data_path = DataPath }}\n\t\t\t\t\tend;\n\t\t\t\t_ ->\n\t\t\t\t\t{ok, #poa{ option = 1, chunk = Chunk,\n\t\t\t\t\t\t\ttx_path = TXPath, data_path = DataPath }}\n\t\t\tend;\n\t\tError ->\n\t\t\tError\n\tend.\n\nread_unpacked_chunk(RecallByte, Proof) ->\n\tOptions = #{ pack => true, packing => unpacked, origin => miner },\n\tcase ar_data_sync:get_chunk(RecallByte + 1, Options) of\n\t\t{ok, #{ chunk := UnpackedChunk, tx_path := TXPath, data_path := DataPath }} ->\n\t\t\t{ok, #poa{ option = 1, chunk = maps:get(chunk, Proof),\n\t\t\t\tunpacked_chunk = ar_packing_server:pad_chunk(UnpackedChunk),\n\t\t\t\ttx_path = TXPath, data_path = DataPath }};\n\t\tError ->\n\t\t\tError\n\tend.\n\nvalidate_solution(Solution, DiffPair) ->\n\t#mining_solution{\n\t\tmining_address = MiningAddress,\n\t\tnonce = Nonce, nonce_limiter_output = NonceLimiterOutput,\n\t\tpartition_number = PartitionNumber, partition_upper_bound = PartitionUpperBound,\n\t\tpoa1 
= PoA1, recall_byte1 = RecallByte1, seed = Seed,\n\t\tsolution_hash = SolutionHash,\n\t\tpacking_difficulty = PackingDifficulty, replica_format = ReplicaFormat } = Solution,\n\tH0 = ar_block:compute_h0(NonceLimiterOutput, PartitionNumber, Seed, MiningAddress,\n\t\t\tPackingDifficulty),\n\t{H1, _Preimage1} = ar_block:compute_h1(H0, Nonce, PoA1#poa.chunk),\n\t{RecallRange1Start, RecallRange2Start} = get_recall_range(H0,\n\t\t\tPartitionNumber, PartitionUpperBound),\n\t%% Assert recall_byte1 is computed correctly.\n\tRecallByte1 = ar_block:get_recall_byte(RecallRange1Start, Nonce, PackingDifficulty),\n\t{BlockStart1, BlockEnd1, TXRoot1} = ar_block_index:get_block_bounds(RecallByte1),\n\tBlockSize1 = BlockEnd1 - BlockStart1,\n\tPacking = ar_block:get_packing(PackingDifficulty, MiningAddress, ReplicaFormat),\n\tSubChunkIndex = ar_block:get_sub_chunk_index(PackingDifficulty, Nonce),\n\tcase ar_poa:validate({BlockStart1, RecallByte1, TXRoot1, BlockSize1, PoA1,\n\t\t\tPacking, SubChunkIndex, not_set}) of\n\t\t{true, ChunkID} ->\n\t\t\tPoACache = {{BlockStart1, RecallByte1, TXRoot1, BlockSize1, Packing,\n\t\t\t\t\tSubChunkIndex}, ChunkID},\n\t\t\tcase ar_node_utils:h1_passes_diff_check(H1, DiffPair, PackingDifficulty) of\n\t\t\t\ttrue ->\n\t\t\t\t\tcase SolutionHash of\n\t\t\t\t\t\tH1 ->\n\t\t\t\t\t\t\t{true, PoACache, undefined};\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t?LOG_ERROR([{event, invalid_solution_hash},\n\t\t\t\t\t\t\t\t\t{solution_hash, ar_util:encode(SolutionHash)},\n\t\t\t\t\t\t\t\t\t{h1, ar_util:encode(H1)}]),\n\t\t\t\t\t\t\terror\n\t\t\t\t\tend;\n\t\t\t\tfalse ->\n\t\t\t\t\tcase is_one_chunk_solution(Solution) of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t%% This can happen if the difficulty has increased between the\n\t\t\t\t\t\t\t%% time the H1 solution was found and now. 
In this case,\n\t\t\t\t\t\t\t%% there is no H2 solution, so we flag the solution invalid.\n\t\t\t\t\t\t\t{Diff1, _} = DiffPair,\n\t\t\t\t\t\t\t{false, {h1_diff_check,\n\t\t\t\t\t\t\t\tar_util:safe_encode(H0),\n\t\t\t\t\t\t\t\tar_util:safe_encode(H1),\n\t\t\t\t\t\t\t\tbinary:decode_unsigned(H1),\n\t\t\t\t\t\t\t\tar_node_utils:scaled_diff(Diff1, PackingDifficulty)\n\t\t\t\t\t\t\t}};\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t#mining_solution{\n\t\t\t\t\t\t\t\trecall_byte2 = RecallByte2, poa2 = PoA2 } = Solution,\n\t\t\t\t\t\t\t{H2, _Preimage2} = ar_block:compute_h2(H1, PoA2#poa.chunk, H0),\n\t\t\t\t\t\t\tcase ar_node_utils:h2_passes_diff_check(H2, DiffPair,\n\t\t\t\t\t\t\t\t\tPackingDifficulty) of\n\t\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t\t{_, Diff2} = DiffPair,\n\t\t\t\t\t\t\t\t\t{false, {h2_diff_check,\n\t\t\t\t\t\t\t\t\t\tar_util:safe_encode(H0),\n\t\t\t\t\t\t\t\t\t\tar_util:safe_encode(H1),\n\t\t\t\t\t\t\t\t\t\tar_util:safe_encode(H2),\n\t\t\t\t\t\t\t\t\t\tbinary:decode_unsigned(H2),\n\t\t\t\t\t\t\t\t\t\tar_node_utils:scaled_diff(Diff2, PackingDifficulty)\n\t\t\t\t\t\t\t\t\t}};\n\t\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\t\tSolutionHash = H2,\n\t\t\t\t\t\t\t\t\tRecallByte2 = ar_block:get_recall_byte(RecallRange2Start,\n\t\t\t\t\t\t\t\t\t\t\tNonce, PackingDifficulty),\n\t\t\t\t\t\t\t\t\t{BlockStart2, BlockEnd2, TXRoot2} =\n\t\t\t\t\t\t\t\t\t\t\tar_block_index:get_block_bounds(RecallByte2),\n\t\t\t\t\t\t\t\t\tBlockSize2 = BlockEnd2 - BlockStart2,\n\t\t\t\t\t\t\t\t\tcase ar_poa:validate({BlockStart2, RecallByte2, TXRoot2,\n\t\t\t\t\t\t\t\t\t\t\tBlockSize2, PoA2,\n\t\t\t\t\t\t\t\t\t\t\tPacking, SubChunkIndex, not_set}) of\n\t\t\t\t\t\t\t\t\t\t{true, Chunk2ID} ->\n\t\t\t\t\t\t\t\t\t\t\tPoA2Cache = {{BlockStart2, RecallByte2, TXRoot2,\n\t\t\t\t\t\t\t\t\t\t\t\t\tBlockSize2, Packing, SubChunkIndex},\n\t\t\t\t\t\t\t\t\t\t\t\t\tChunk2ID},\n\t\t\t\t\t\t\t\t\t\t\t{true, PoACache, PoA2Cache};\n\t\t\t\t\t\t\t\t\t\terror ->\n\t\t\t\t\t\t\t\t\t\t\tlog_prepare_solution_failure(Solution,\n\t\t\t\t\t\t\t\t\t\t\t\trejected, poa2_validation_error, miner, []),\n\t\t\t\t\t\t\t\t\t\t\terror;\n\t\t\t\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t\t\t\t{false, poa2}\n\t\t\t\t\t\t\t\t\tend\n\t\t\t\t\t\t\tend\n\t\t\t\t\tend\n\t\t\tend;\n\t\terror ->\n\t\t\tlog_prepare_solution_failure(Solution, rejected, poa1_validation_error, miner, []),\n\t\t\terror;\n\t\tfalse ->\n\t\t\t{false, poa1}\n\tend.\n\nreset_gc_timer(GarbageCollectionFrequency, State) ->\n\tState2 = maybe_cancel_gc_timer(State),\n\tRef = erlang:make_ref(),\n\tar_util:cast_after(GarbageCollectionFrequency, ?MODULE,\n\t\t\t{manual_garbage_collect, Ref}),\n\tState2#state{ gc_process_ref = Ref, gc_frequency_ms = GarbageCollectionFrequency }.\n\nmaybe_cancel_gc_timer(#state{gc_process_ref = undefined} = State) ->\n\tState;\nmaybe_cancel_gc_timer(State) ->\n\terlang:cancel_timer(State#state.gc_process_ref),\n\tState#state{ gc_process_ref = undefined }.\n\n%%%===================================================================\n%%% Public Test interface.\n%%%===================================================================\n\n%% @doc Pause the mining server. 
Only used in tests.\npause() ->\n\tgen_server:cast(?MODULE, pause).\n\nsetup() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tConfig.\n\ncleanup(Config) ->\n\tarweave_config:set_env(Config).\n\ncalculate_cache_limits_test_() ->\n\t{setup, fun setup/0, fun cleanup/1,\n\t\t[\n\t\t\t{timeout, 30, fun test_calculate_cache_limits_default/0},\n\t\t\t{timeout, 30, fun test_calculate_cache_limits_custom_low/0},\n\t\t\t{timeout, 30, fun test_calculate_cache_limits_custom_high/0}\n\t\t]\n\t}.\n\ntest_calculate_cache_limits_default() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tarweave_config:set_env(Config#config{\n\t\tmining_cache_size_mb = undefined\n\t}),\n\t?assertEqual(\n\t\t{\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 100 * ?MiB,\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 100 * ?MiB,\n\t\t\t?IDEAL_STEPS_PER_PARTITION * ?MiB,\n\t\t\t?IDEAL_STEPS_PER_PARTITION,\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 4000},\n\t\tcalculate_cache_limits(100, 0)\n\t),\n\t?assertEqual(\n\t\t{\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 200 * ?MiB,\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 200 * ?MiB,\n\t\t\t?IDEAL_STEPS_PER_PARTITION * ?MiB,\n\t\t\t?IDEAL_STEPS_PER_PARTITION,\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 4000},\n\t\tcalculate_cache_limits(200, 0)\n\t),\n\t?assertEqual(\n\t\t{\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 1000 * ?MiB,\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 1000 * ?MiB,\n\t\t\t?IDEAL_STEPS_PER_PARTITION * ?MiB,\n\t\t\t?IDEAL_STEPS_PER_PARTITION,\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 4000},\n\t\tcalculate_cache_limits(1000, 0)\n\t),\n\t?assertEqual(\n\t\t{\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 25 * ?MiB,\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 25 * ?MiB,\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 256 * ?KiB,\n\t\t\t?IDEAL_STEPS_PER_PARTITION,\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 4000},\n\t\tcalculate_cache_limits(100, 1)\n\t),\n\t?assertEqual(\n\t\t{\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 50 * ?MiB,\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 50 * ?MiB,\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 256 * ?KiB,\n\t\t\t?IDEAL_STEPS_PER_PARTITION,\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 4000},\n\t\tcalculate_cache_limits(200, 1)\n\t),\n\t?assertEqual(\n\t\t{\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 250 * ?MiB,\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 250 * ?MiB,\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 256 * ?KiB,\n\t\t\t?IDEAL_STEPS_PER_PARTITION,\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 4000},\n\t\tcalculate_cache_limits(1000, 1)\n\t),\n\t?assertEqual(\n\t\t{\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 25 * ?MiB,\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 25 * ?MiB,\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 128 * ?KiB,\n\t\t\t?IDEAL_STEPS_PER_PARTITION,\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 4000},\n\t\tcalculate_cache_limits(200, 2)\n\t),\n\t?assertEqual(\n\t\t{\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 50 * ?MiB,\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 50 * ?MiB,\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 128 * ?KiB,\n\t\t\t?IDEAL_STEPS_PER_PARTITION,\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 4000},\n\t\tcalculate_cache_limits(400, 2)\n\t),\n\t?assertEqual(\n\t\t{\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 125 * ?MiB,\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 125 * ?MiB,\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 128 * ?KiB,\n\t\t\t?IDEAL_STEPS_PER_PARTITION,\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 4000},\n\t\tcalculate_cache_limits(1000, 2)\n\t),\n\t?assertEqual(\n\t\t{\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 50 * ?MiB,\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 50 * ?MiB,\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 8 * ?KiB,\n\t\t\t?IDEAL_STEPS_PER_PARTITION,\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 4000},\n\t\tcalculate_cache_limits(6_400, 
32)\n\t),\n\t?assertEqual(\n\t\t{\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 100 * ?MiB,\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 100 * ?MiB,\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 8 * ?KiB,\n\t\t\t?IDEAL_STEPS_PER_PARTITION,\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 4000},\n\t\tcalculate_cache_limits(12_800, 32)\n\t),\n\t?assertEqual(\n\t\t{\n\t\t\ttrunc(?IDEAL_STEPS_PER_PARTITION * 156.25 * ?MiB),\n\t\t\ttrunc(?IDEAL_STEPS_PER_PARTITION * 156.25 * ?MiB),\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 8 * ?KiB,\n\t\t\t?IDEAL_STEPS_PER_PARTITION,\n\t\t\t?IDEAL_STEPS_PER_PARTITION * 4000},\n\t\tcalculate_cache_limits(20_000, 32)\n\t).\n\ntest_calculate_cache_limits_custom_low() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tarweave_config:set_env(Config#config{\n\t\tmining_cache_size_mb = 1\n\t}),\n\t?assertEqual(\n\t\t{?MINIMUM_CACHE_LIMIT_BYTES, 1 * ?MiB, 1 * ?MiB, 1, 4_000},\n\t\tcalculate_cache_limits(1, 0)\n\t),\n\t?assertEqual(\n\t\t{?MINIMUM_CACHE_LIMIT_BYTES, 1 * ?MiB, 512 * ?KiB, 1, 4_000},\n\t\tcalculate_cache_limits(2, 0)\n\t),\n\t?assertEqual(\n\t\t{?IDEAL_STEPS_PER_PARTITION * 1000 * ?MiB, 1 * ?MiB, (1 * ?MiB) div 1_000, 1, 4_000},\n\t\tcalculate_cache_limits(1000, 0)\n\t),\n\t?assertEqual(\n\t\t{?MINIMUM_CACHE_LIMIT_BYTES, 1 * ?MiB, 1 * ?MiB, 4, 16_000},\n\t\tcalculate_cache_limits(1, 1)\n\t),\n\t?assertEqual(\n\t\t{?MINIMUM_CACHE_LIMIT_BYTES, 1 * ?MiB, 512 * ?KiB, 2, 8_000},\n\t\tcalculate_cache_limits(2, 1)\n\t),\n\t?assertEqual(\n\t\t{?IDEAL_STEPS_PER_PARTITION * 250 * ?MiB, 1 * ?MiB, (1 * ?MiB) div 1_000, 1, 4_000},\n\t\tcalculate_cache_limits(1000, 1)\n\t),\n\t?assertEqual(\n\t\t{?MINIMUM_CACHE_LIMIT_BYTES, 1 * ?MiB, 1 * ?MiB, 8, 32_000},\n\t\tcalculate_cache_limits(1, 2)\n\t),\n\t?assertEqual(\n\t\t{?MINIMUM_CACHE_LIMIT_BYTES, 1 * ?MiB, 512 * ?KiB, 4, 16_000},\n\t\tcalculate_cache_limits(2, 2)\n\t),\n\t?assertEqual(\n\t\t{?IDEAL_STEPS_PER_PARTITION * 128_000 * ?KiB, 1 * ?MiB, (1 * ?MiB) div 1_000, 1, 4_000},\n\t\tcalculate_cache_limits(1000, 2)\n\t),\n\t?assertEqual(\n\t\t{?MINIMUM_CACHE_LIMIT_BYTES, 1 * ?MiB, 1 * ?MiB, 128, 512_000},\n\t\tcalculate_cache_limits(1, 32)\n\t),\n\t?assertEqual(\n\t\t{?MINIMUM_CACHE_LIMIT_BYTES, 1 * ?MiB, 512 * ?KiB, 64, 256_000},\n\t\tcalculate_cache_limits(2, 32)\n\t),\n\t?assertEqual(\n\t\t{?IDEAL_STEPS_PER_PARTITION * 500 * ?MiB, 1 * ?MiB, (1 * ?MiB) div 64_000, 1, 4_000},\n\t\tcalculate_cache_limits(64_000, 32)\n\t).\n\ntest_calculate_cache_limits_custom_high() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tarweave_config:set_env(Config#config{\n\t\tmining_cache_size_mb = 500_000\n\t}),\n\t?assertEqual(\n\t\t{?MINIMUM_CACHE_LIMIT_BYTES, 512_000_000 * ?KiB, 512_000_000 * ?KiB, 500_000, 2_000_000_000},\n\t\tcalculate_cache_limits(1, 0)\n\t),\n\t?assertEqual(\n\t\t{?MINIMUM_CACHE_LIMIT_BYTES, 512_000_000 * ?KiB, 256_000_000 * ?KiB, 250_000, 1_000_000_000},\n\t\tcalculate_cache_limits(2, 0)\n\t),\n\t?assertEqual(\n\t\t{?IDEAL_STEPS_PER_PARTITION * 1000 * ?MiB, 512_000_000 * ?KiB, 512_000 * ?KiB, 500, 2_000_000},\n\t\tcalculate_cache_limits(1000, 0)\n\t),\n\t?assertEqual(\n\t\t{?MINIMUM_CACHE_LIMIT_BYTES, 512_000_000 * ?KiB, 512_000_000 * ?KiB, 2_000_000, 8_000_000_000},\n\t\tcalculate_cache_limits(1, 1)\n\t),\n\t?assertEqual(\n\t\t{?MINIMUM_CACHE_LIMIT_BYTES, 512_000_000 * ?KiB, 256_000_000 * ?KiB, 1_000_000, 4_000_000_000},\n\t\tcalculate_cache_limits(2, 1)\n\t),\n\t?assertEqual(\n\t\t{?IDEAL_STEPS_PER_PARTITION * 250 * ?MiB, 512_000_000 * ?KiB, 512_000 * ?KiB, 2_000, 8_000_000},\n\t\tcalculate_cache_limits(1000, 
1)\n\t),\n\t?assertEqual(\n\t\t{?MINIMUM_CACHE_LIMIT_BYTES, 512_000_000 * ?KiB, 512_000_000 * ?KiB, 4_000_000, 16_000_000_000},\n\t\tcalculate_cache_limits(1, 2)\n\t),\n\t?assertEqual(\n\t\t{?MINIMUM_CACHE_LIMIT_BYTES, 512_000_000 * ?KiB, 256_000_000 * ?KiB, 2_000_000, 8_000_000_000},\n\t\tcalculate_cache_limits(2, 2)\n\t),\n\t?assertEqual(\n\t\t{?IDEAL_STEPS_PER_PARTITION * 128_000 * ?KiB, 512_000_000 * ?KiB, 512_000 * ?KiB, 4_000, 16_000_000},\n\t\tcalculate_cache_limits(1000, 2)\n\t),\n\t?assertEqual(\n\t\t{?MINIMUM_CACHE_LIMIT_BYTES, 512_000_000 * ?KiB, 512_000_000 * ?KiB, 64_000_000, 256_000_000_000},\n\t\tcalculate_cache_limits(1, 32)\n\t),\n\t?assertEqual(\n\t\t{?MINIMUM_CACHE_LIMIT_BYTES, 512_000_000 * ?KiB, 256_000_000 * ?KiB, 32_000_000, 128_000_000_000},\n\t\tcalculate_cache_limits(2, 32)\n\t),\n\t?assertEqual(\n\t\t{(?IDEAL_STEPS_PER_PARTITION * 2 * (?RECALL_RANGE_SIZE div 32) * 1000), 512_000_000 * ?KiB, 512_000 * ?KiB, 64_000, 256_000_000},\n\t\tcalculate_cache_limits(1000, 32)\n\t).\n"
  },
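A minimal, self-contained sketch of the sub-chunk handling used in `ar_mining_server.erl` above: it extracts the sub-chunk selected by a nonce with `binary:part/3` and confirms membership by scanning the chunk in sub-chunk-sized strides, mirroring the shape of `get_sub_chunk/3` and `sub_chunk_belongs_to_chunk/2`. The module name, the 8 KiB sub-chunk size, the 32 sub-chunks per chunk, and the `Nonce rem SubChunkCount` index calculation are illustrative assumptions standing in for `?COMPOSITE_PACKING_SUB_CHUNK_SIZE` and `ar_block:get_sub_chunk_index/2`, not values read from the codebase.

```erlang
%% Illustrative sketch only; sizes and index selection are assumptions.
-module(sub_chunk_example).
-export([demo/0]).

-define(SUB_CHUNK_SIZE, 8192).
-define(SUB_CHUNK_COUNT, 32).

demo() ->
	Chunk = crypto:strong_rand_bytes(?SUB_CHUNK_COUNT * ?SUB_CHUNK_SIZE),
	Nonce = 37,
	%% Assumed index selection: the nonce modulo the number of sub-chunks.
	Index = Nonce rem ?SUB_CHUNK_COUNT,
	%% Same extraction shape as get_sub_chunk/3.
	SubChunk = binary:part(Chunk, Index * ?SUB_CHUNK_SIZE, ?SUB_CHUNK_SIZE),
	%% Same membership scan shape as sub_chunk_belongs_to_chunk/2.
	true = belongs_to(SubChunk, Chunk),
	{Index, byte_size(SubChunk)}.

belongs_to(SubChunk, << SubChunk:?SUB_CHUNK_SIZE/binary, _Rest/binary >>) ->
	true;
belongs_to(SubChunk, << _Skip:?SUB_CHUNK_SIZE/binary, Rest/binary >>) ->
	belongs_to(SubChunk, Rest);
belongs_to(_SubChunk, <<>>) ->
	false.
```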
  {
    "path": "apps/arweave/src/ar_mining_server_behaviour.erl",
    "content": "-module(ar_mining_server_behaviour).\n\n-include(\"ar.hrl\").\n-include(\"ar_mining.hrl\").\n\n-callback start_mining({DiffPair, MerkleRebaseThreshold, Height}) -> ok when\n\tDiffPair :: {non_neg_integer() | infinity, non_neg_integer() | infinity},\n\tMerkleRebaseThreshold :: non_neg_integer() | infinity,\n\tHeight :: non_neg_integer().\n\n-callback pause() -> ok.\n\n-callback is_paused() -> boolean().\n\n-callback set_difficulty(DiffPair :: {non_neg_integer() | infinity, non_neg_integer() | infinity}) -> ok.\n\n-callback set_merkle_rebase_threshold(Threshold :: non_neg_integer() | infinity) -> ok.\n\n-callback set_height(Height :: integer()) -> ok.\n"
  },
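A minimal sketch of a module that satisfies `ar_mining_server_behaviour`, with one clause per `-callback` declared above. The module name `ar_mining_server_noop` is hypothetical and the bodies are placeholder no-ops; it is shown only to illustrate the callback shape.

```erlang
%% Hypothetical no-op implementation of ar_mining_server_behaviour.
-module(ar_mining_server_noop).
-behaviour(ar_mining_server_behaviour).

-export([start_mining/1, pause/0, is_paused/0, set_difficulty/1,
		set_merkle_rebase_threshold/1, set_height/1]).

%% {DiffPair, MerkleRebaseThreshold, Height} arrives as a single tuple argument.
start_mining({_DiffPair, _MerkleRebaseThreshold, _Height}) ->
	ok.

pause() ->
	ok.

%% Report the server as paused, since this stub never mines.
is_paused() ->
	true.

set_difficulty(_DiffPair) ->
	ok.

set_merkle_rebase_threshold(_Threshold) ->
	ok.

set_height(_Height) ->
	ok.
```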
  {
    "path": "apps/arweave/src/ar_mining_stats.erl",
    "content": "-module(ar_mining_stats).\n-behaviour(gen_server).\n\n-export([start_link/0, start_performance_reports/0, pause_performance_reports/1, mining_paused/0,\n\t\tset_storage_module_data_size/6,\n\t\tvdf_computed/0, raw_read_rate/2, chunks_read/2, h1_computed/2, h2_computed/2,\n\t\th1_solution/0, h2_solution/0, block_found/0, block_mined_but_orphaned/0,\n\t\th1_sent_to_peer/2, h1_received_from_peer/2, h2_sent_to_peer/1, h2_received_from_peer/1,\n\t\tget_partition_data_size/2]).\n\n-export([init/1, handle_cast/2, handle_call/3, handle_info/2, terminate/2]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"arweave/include/ar_consensus.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-record(state, {\n\tpause_performance_reports = false,\n\tpause_performance_reports_timeout\n}).\n\n-record(report, {\n\tnow,\n\tvdf_speed = undefined,\n\th1_solution = 0,\n\th2_solution = 0,\n\tconfirmed_block = 0,\n\ttotal_data_size = 0,\n\toptimal_overall_read_mibps = 0.0,\n\toptimal_overall_hash_hps = 0.0,\n\taverage_read_mibps = 0.0,\n\tcurrent_read_mibps = 0.0,\n\taverage_hash_hps = 0.0,\n\tcurrent_hash_hps = 0.0,\n\taverage_h1_to_peer_hps = 0.0,\n\tcurrent_h1_to_peer_hps = 0.0,\n\taverage_h1_from_peer_hps = 0.0,\n\tcurrent_h1_from_peer_hps = 0.0,\n\ttotal_h2_to_peer = 0,\n\ttotal_h2_from_peer = 0,\n\t\n\tpartitions = [],\n\tpeers = []\n}).\n\n-record(partition_report, {\n\tpartition_number,\n\tdata_size,\n\toptimal_read_mibps,\n\toptimal_hash_hps,\n\taverage_read_mibps,\n\tcurrent_read_mibps,\n\taverage_hash_hps,\n\tcurrent_hash_hps\n}).\n\n-record(peer_report, {\n\tpeer,\n\taverage_h1_to_peer_hps,\n\tcurrent_h1_to_peer_hps,\n\taverage_h1_from_peer_hps,\n\tcurrent_h1_from_peer_hps,\n\ttotal_h2_to_peer,\n\ttotal_h2_from_peer\n}).\n\n%% ETS table structure:\n%%\n%% {vdf, \t\t\t\t\t\t\t\t\t\t\t\t\tStartTime, Samples, VDFStepCount}\n%% {h1_solution, \t\t\t\t\t\t\t\t\t\t\tStartTime, Samples, TotalH1SolutionsFound}\n%% {h2_solution, \t\t\t\t\t\t\t\t\t\t\tStartTime, Samples, TotalH2SolutionsFound}\n%% {confirmed_block, \t\t\t\t\t\t\t\t\t\tStartTime, Samples, TotalConfirmedBlocksMined}\n%% {{partition, PartitionNumber, read, total}, \t\t\t\tStartTime, Samples, TotalChunksRead}\n%% {{partition, PartitionNumber, read, current}, \t\t\tStartTime, Samples, CurrentChunksRead}\n%% {{partition, PartitionNumber, h1, total}, \t\t\t\tStartTime, Samples, TotalH1}\n%% {{partition, PartitionNumber, h1, current}, \t\t\t\tStartTime, Samples, CurrentH1}\n%% {{partition, PartitionNumber, h2, total}, \t\t\t\tStartTime, Samples, TotalH2}\n%% {{partition, PartitionNumber, h2, current}, \t\t\t\tStartTime, Samples, CurrentH2}\n%% {total_data_size, \t\t\t\t\t\t\t\t\t\tTotalBytesPacked}\n%% {{partition, PartitionNumber, storage_module, StoreID}, \tBytesPacked}\n%% {{peer, Peer, h1_to_peer, total}, \t\t\t\t\t\tStartTime, Samples, TotalH1sSentToPeer}\n%% {{peer, Peer, h1_to_peer, current}, \t\t\t\t\t\tStartTime, Samples, CurrentH1sSentToPeer}\n%% {{peer, Peer, h1_from_peer, total}, \t\t\t\t\t\tStartTime, Samples, TotalH1sReceivedFromPeer}\n%% {{peer, Peer, h1_from_peer, current}, \t\t\t\t\tStartTime, Samples, CurrentH1sReceivedFromPeer}\n%% {{peer, Peer, h2_to_peer, total}, \t\t\t\t\t\tStartTime, Samples, TotalH2sSentToPeer}\n%% {{peer, Peer, h2_from_peer, total}, \t\t\t\t\t\tStartTime, Samples, TotalH2sReceivedFromPeer}\n\n-define(PERFORMANCE_REPORT_FREQUENCY_MS, 10000).\n\n%%%===================================================================\n%%% 
Public interface.\n%%%===================================================================\n\n%% @doc Start the gen_server.\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\nstart_performance_reports() ->\n\treset_all_stats(),\n\tar_util:cast_after(?PERFORMANCE_REPORT_FREQUENCY_MS, ?MODULE, report_performance).\n\n%% @doc Stop logging performance reports for the given number of milliseconds.\npause_performance_reports(Time) ->\n\tgen_server:call(?MODULE, {pause_performance_reports, Time}).\n\nvdf_computed() ->\n\tvdf_computed(erlang:monotonic_time(millisecond)).\n\nvdf_computed(Now) ->\n\tincrement_count(vdf, 1, Now).\n\nraw_read_rate(PartitionNumber, ReadRate) ->\n\tprometheus_gauge:set(mining_rate, [raw_read, PartitionNumber], ReadRate).\n\nchunks_read(PartitionNumber, Count) ->\n\tchunks_read(PartitionNumber, Count, erlang:monotonic_time(millisecond)).\n\nchunks_read(PartitionNumber, Count, Now) ->\n\tincrement_count({partition, PartitionNumber, read, total}, Count, Now),\n\tincrement_count({partition, PartitionNumber, read, current}, Count, Now).\n\nh1_computed(PartitionNumber, Count) ->\n\th1_computed(PartitionNumber, Count, erlang:monotonic_time(millisecond)).\n\nh1_computed(PartitionNumber, Count, Now) ->\n\tincrement_count({partition, PartitionNumber, h1, total}, Count, Now),\n\tincrement_count({partition, PartitionNumber, h1, current}, Count, Now).\n\nh2_computed(PartitionNumber, Count) ->\n\th2_computed(PartitionNumber, Count, erlang:monotonic_time(millisecond)).\n\nh2_computed(PartitionNumber, Count, Now) ->\n\tincrement_count({partition, PartitionNumber, h2, total}, Count, Now),\n\tincrement_count({partition, PartitionNumber, h2, current}, Count, Now).\n\nh1_sent_to_peer(Peer, H1Count) ->\n\th1_sent_to_peer(Peer, H1Count, erlang:monotonic_time(millisecond)).\n\nh1_sent_to_peer(Peer, H1Count, Now) ->\n\tincrement_count({peer, Peer, h1_to_peer, total}, H1Count, Now),\n\tincrement_count({peer, Peer, h1_to_peer, current}, H1Count, Now).\n\nh1_received_from_peer(Peer, H1Count) ->\n\th1_received_from_peer(Peer, H1Count, erlang:monotonic_time(millisecond)).\n\nh1_received_from_peer(Peer, H1Count, Now) ->\n\tincrement_count({peer, Peer, h1_from_peer, total}, H1Count, Now),\n\tincrement_count({peer, Peer, h1_from_peer, current}, H1Count, Now).\n\nh2_sent_to_peer(Peer) ->\n\th2_sent_to_peer(Peer, erlang:monotonic_time(millisecond)).\n\nh2_sent_to_peer(Peer, Now) ->\n\tincrement_count({peer, Peer, h2_to_peer, total}, 1, Now).\n\nh2_received_from_peer(Peer) ->\n\th2_received_from_peer(Peer, erlang:monotonic_time(millisecond)).\n\nh2_received_from_peer(Peer, Now) ->\n\tincrement_count({peer, Peer, h2_from_peer, total}, 1, Now).\n\nh1_solution() ->\n\th1_solution(erlang:monotonic_time(millisecond)).\n\nh1_solution(Now) ->\n\tincrement_count(h1_solution, 1, Now).\n\nh2_solution() ->\n\th2_solution(erlang:monotonic_time(millisecond)).\n\nh2_solution(Now) ->\n\tincrement_count(h2_solution, 1, Now).\n\nblock_found() ->\n\tblock_found(erlang:monotonic_time(millisecond)).\n\nblock_found(Now) ->\n\tincrement_count(confirmed_block, 1, Now).\n\nblock_mined_but_orphaned() ->\n\tblock_mined_but_orphaned(erlang:monotonic_time(millisecond)).\n\nblock_mined_but_orphaned(Now) ->\n\tincrement_count(block_mined_but_orphaned, 1, Now).\n\nupdate_total_data_size() ->\n\tPattern = {\n\t\t{partition, '_', storage_module, '_', packing, '_'}, '$1'\n\t},\n\tSizes = [Size || [Size] <- ets:match(?MODULE, Pattern)],\n\tTotalDataSize = 
lists:sum(Sizes),\n\ttry\n\t\tprometheus_gauge:set(v2_index_data_size, TotalDataSize),\n\t\tets:insert(?MODULE, {total_data_size, TotalDataSize})\n\tcatch\n\t\terror:badarg ->\n\t\t\t?LOG_WARNING([{event, set_total_data_size_failed},\n\t\t\t\t{reason, prometheus_not_started}, {data_size, TotalDataSize}]);\n\t\tType:Reason ->\n\t\t\t?LOG_ERROR([{event, set_total_data_size_failed},\n\t\t\t\t{type, Type}, {reason, Reason}, {data_size, TotalDataSize}])\n\tend.\n\nset_storage_module_data_size(\n\t\tStoreID, Packing, PartitionNumber, StorageModuleSize, StorageModuleIndex, DataSize) ->\n\tStoreIDLabel = ar_storage_module:label(StoreID),\n\tPackingLabel = ar_storage_module:packing_label(Packing),\n\ttry\t\n\t\tPackingDifficulty = ar_mining_server:get_packing_difficulty(Packing),\n\t\tprometheus_gauge:set(v2_index_data_size_by_packing,\n\t\t\t[StoreIDLabel, PackingLabel, PartitionNumber,\n\t\t\t StorageModuleSize, StorageModuleIndex,\n\t\t\t PackingDifficulty],\n\t\t\tDataSize),\n\t\tets:insert(?MODULE, {\n\t\t\t{partition, PartitionNumber, storage_module, StoreID, packing, Packing}, DataSize}),\n\t\tupdate_total_data_size()\n\tcatch\n\t\terror:badarg ->\n\t\t\t?LOG_WARNING([{event, set_storage_module_data_size_failed},\n\t\t\t\t{reason, prometheus_not_started},\n\t\t\t\t{store_id, StoreID}, {store_id_label, StoreIDLabel},\n\t\t\t\t{packing, ar_serialize:encode_packing(Packing, true)},\n\t\t\t\t{packing_label, PackingLabel},\n\t\t\t\t{partition_number, PartitionNumber}, {storage_module_size, StorageModuleSize},\n\t\t\t\t{storage_module_index, StorageModuleIndex}, {data_size, DataSize}]);\n\t\terror:{unknown_metric,default,v2_index_data_size_by_packing} ->\n\t\t\t?LOG_WARNING([{event, set_storage_module_data_size_failed},\n\t\t\t\t{reason, prometheus_not_started},\n\t\t\t\t{store_id, StoreID}, {store_id_label, StoreIDLabel},\n\t\t\t\t{packing, ar_serialize:encode_packing(Packing, true)},\n\t\t\t\t{packing_label, PackingLabel},\n\t\t\t\t{partition_number, PartitionNumber}, {storage_module_size, StorageModuleSize},\n\t\t\t\t{storage_module_index, StorageModuleIndex}, {data_size, DataSize}]);\n\t\tType:Reason ->\n\t\t\t?LOG_ERROR([{event, set_storage_module_data_size_failed},\n\t\t\t\t{type, Type}, {reason, Reason},\n\t\t\t\t{store_id, StoreID}, {store_id_label, StoreIDLabel},\n\t\t\t\t{packing, ar_serialize:encode_packing(Packing, true)},\n\t\t\t\t{packing_label, PackingLabel},\n\t\t\t\t{partition_number, PartitionNumber}, {storage_module_size, StorageModuleSize},\n\t\t\t\t{storage_module_index, StorageModuleIndex}, {data_size, DataSize} ])\n\tend.\n\nmining_paused() ->\n\tclear_metrics().\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\n\ninit([]) ->\n\t{ok, #state{}}.\n\nhandle_call({pause_performance_reports, Time}, _From, State) ->\n\tNow = os:system_time(millisecond),\n\tTimeout = Now + Time,\n\t{reply, ok, State#state{ pause_performance_reports = true,\n\t\t\tpause_performance_reports_timeout = Timeout }};\n\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {request, Request}]),\n\t{reply, ok, State}.\n\nhandle_cast(report_performance, #state{ pause_performance_reports = true,\n\t\t\tpause_performance_reports_timeout = Timeout } = State) ->\n\tNow = os:system_time(millisecond),\n\tcase Now > Timeout of\n\t\ttrue ->\n\t\t\tgen_server:cast(?MODULE, report_performance),\n\t\t\t{noreply, State#state{ 
pause_performance_reports = false }};\n\t\tfalse ->\n\t\t\tar_util:cast_after(?PERFORMANCE_REPORT_FREQUENCY_MS, ?MODULE, report_performance),\n\t\t\t{noreply, State}\n\tend;\nhandle_cast(report_performance, State) ->\n\treport_performance(),\n\tar_util:cast_after(?PERFORMANCE_REPORT_FREQUENCY_MS, ?MODULE, report_performance),\n\t{noreply, State};\n\n\n\nhandle_cast(Cast, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {cast, Cast}]),\n\t{noreply, State}.\n\nhandle_info(Message, State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {module, ?MODULE}, {message, Message}]),\n\t{noreply, State}.\n\nterminate(Reason, _State) ->\n\t?LOG_INFO([{module, ?MODULE},{pid, self()},{callback, terminate},{reason, Reason}]),\n\tok.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nreset_all_stats() ->\n\tets:delete_all_objects(?MODULE).\n\n%% @doc Atomically increments the count for ETS records stored in the format:\n%% {Key, StartTimestamp, Count}\n%% If the Key doesn't exist, it is initialized with the current timestamp and a count of Amount\nincrement_count(_Key, 0, _Now) ->\n\tok;\nincrement_count(Key, Amount, Now) ->\n\tets:update_counter(?MODULE, Key,\n\t\t[{3, 1}, {4, Amount}], \t\t\t\t\t\t%% increment samples by 1, count by Amount\n\t\t{Key, Now, 0, 0} %% initialize timestamp, samples, count\n\t).\n\nreset_count(Key, Now) ->\n\tets:insert(?MODULE, [{Key, Now, 0, 0}]).\n\nget_average_count_by_time(Key, Now) ->\n\t{_AvgSamples, AvgCount} = get_average_by_time(Key, Now),\n\tAvgCount.\n\nget_average_samples_by_time(Key, Now) ->\n\t{AvgSamples, _AvgCount} = get_average_by_time(Key, Now),\n\tAvgSamples.\n\nget_average_by_time(Key, Now) ->\n\tcase ets:lookup(?MODULE, Key) of \n\t\t[] ->\n\t\t\t{0.0, 0.0};\n\t\t[{_, Start, _Samples, _Count}] when Now - Start =:= 0 ->\n\t\t\t{0.0, 0.0};\n\t\t[{_, Start, Samples, Count}] ->\n\t\t\tElapsed = (Now - Start) / 1000,\n\t\t\t{Samples / Elapsed, Count / Elapsed}\n\tend.\n\nget_average_by_samples(Key) ->\n\tcase ets:lookup(?MODULE, Key) of \n\t\t[] ->\n\t\t\t0.0;\n\t\t[{_, _Start, Samples, _Count}] when Samples == 0 ->\n\t\t\t0.0;\n\t\t[{_, _Start, Samples, Count}] ->\n\t\t\tCount / Samples\n\tend.\n\n\nget_count(Key) ->\n\tcase ets:lookup(?MODULE, Key) of \n\t\t[] ->\n\t\t\t0;\n\t\t[{_, _Start, _Samples, Count}] ->\n\t\t\tCount\n\tend.\n\nget_start(Key) ->\n\tcase ets:lookup(?MODULE, Key) of \n\t\t[] ->\n\t\t\tundefined;\n\t\t[{_, Start, _Samples, _Count}] ->\n\t\t\tStart\n\tend.\n\nget_hashrate_divisor(PackingDifficulty) ->\n\t%% Raw hashrate varies based on packing difficulty. 
Assuming a spora_2_6 base hashrate\n\t%% of 404, the raw hashrate at different packing difficulties is:\n\t%% spora_2_6: 404\n\t%% composite, 1: 404 * 32 / 4 / 1 = 3232\n\t%% composite, 2: 404 * 32 / 4 / 2 = 1616\n\t%% composite, 32: 404 * 32 / 4 / 32 = 101\n\t%%\n\t%% Basically:\n\t%% - composite packing generates 32x the number of hashes, but they are compared against\n\t%%   a higher solution difficulty\n\t%% - composite uses a 4x lower read recall range which *reduces* the number of hashes\n\t%%   4-fold, and increases the solution difficulty\n\t%% - finally as the difficulty increases, the number of hashes generated decreases as does\n\t%%   the solution difficulty\n\t%%\n\t%% This function returns a divisor we can use to normalize the hashrate to 404.\n\tcase PackingDifficulty of\n\t\t0 ->\n\t\t\t1.0;\n\t\t_ ->\n\t\t\t(32.0 / 4.0) / PackingDifficulty\n\tend.\n\nget_total_minable_data_size(Packing) ->\n\tPattern = {\n\t\t{partition, '_', storage_module, '_', packing, Packing}, '$1'\n\t},\n\tSizes = [Size || [Size] <- ets:match(?MODULE, Pattern)],\n\tTotalDataSize = lists:sum(Sizes),\n\n\tWeaveSize = ar_node:get_weave_size(),\n\tTipPartition = ar_node:get_max_partition_number(WeaveSize) + 1,\n\tTipPartitionSize = get_partition_data_size(TipPartition, Packing),\n\t?LOG_DEBUG([{event, get_total_minable_data_size},\n\t\t{total_data_size, TotalDataSize}, {weave_size, WeaveSize},\n\t\t{tip_partition, TipPartition}, {tip_partition_size, TipPartitionSize},\n\t\t{total_minable_data_size, TotalDataSize - TipPartitionSize}\t]),\n\tTotalDataSize - TipPartitionSize.\n\nget_overall_total(PartitionPeer, Stat, TotalCurrent) ->\n\tPattern = {{PartitionPeer, '_', Stat, TotalCurrent}, '_', '_', '$1'},\n    Matches = ets:match(?MODULE, Pattern),\n\tCounts = [Count || [Count] <- Matches],\n\tlists:sum(Counts).\n\nget_partition_data_size(PartitionNumber, Packing) ->\n\tPattern = {{partition, PartitionNumber, storage_module, '_', packing, Packing }, '$1'},\n\tSizes = [Size || [Size] <- ets:match(?MODULE, Pattern)],\n    lists:sum(Sizes).\n\nvdf_speed(Now) ->\n\tcase get_average_count_by_time(vdf, Now) of\n\t\t0.0 ->\n\t\t\tundefined;\n\t\tStepsPerSecond ->\n\t\t\treset_count(vdf, Now),\n\t\t\t1.0 / StepsPerSecond\n\tend.\n\nget_hash_hps(PoA1Multiplier, Packing, PartitionNumber, TotalCurrent, Now) ->\n\tH1 = get_average_count_by_time({partition, PartitionNumber, h1, TotalCurrent}, Now),\n\tH2 = get_average_count_by_time({partition, PartitionNumber, h2, TotalCurrent}, Now),\n\tPackingDifficulty = ar_mining_server:get_packing_difficulty(Packing),\n\t((H1 / PoA1Multiplier) + H2) / get_hashrate_divisor(PackingDifficulty).\n\n%% @doc calculate the maximum read rate (in MiB per second read from disk) for the given VDF\n%% speed at the current weave size.\noptimal_partition_read_mibps(_Packing, undefined, _PartitionDataSize, _TotalDataSize, _WeaveSize) ->\n\t0.0;\t\noptimal_partition_read_mibps(Packing, VDFSpeed, PartitionDataSize, TotalDataSize, WeaveSize) ->\n\tPackingDifficulty = ar_mining_server:get_packing_difficulty(Packing),\n\tRecallRangeSize = ar_block:get_recall_range_size(PackingDifficulty) / ?MiB,\n\t(RecallRangeSize / VDFSpeed) *\n\tmin(1.0, (PartitionDataSize / ar_block:partition_size())) *\n\t(1 + min(1.0, (TotalDataSize / WeaveSize))).\n\n%% @doc calculate the maximum hash rate (in hashes per second) for the given VDF speed\n%% at the current weave size.\noptimal_partition_hash_hps(_PoA1Multiplier, undefined, _PartitionDataSize, _TotalDataSize, _WeaveSize) 
->\n\t0.0;\t\noptimal_partition_hash_hps(PoA1Multiplier, VDFSpeed, PartitionDataSize, TotalDataSize, WeaveSize) ->\n\tBasePartitionHashes = (400.0 / VDFSpeed) * min(1.0, (PartitionDataSize / ar_block:partition_size())),\n\tH1Optimal = BasePartitionHashes / PoA1Multiplier,\n\tH2Optimal = BasePartitionHashes * min(1.0, (TotalDataSize / WeaveSize)),\n\tH1Optimal + H2Optimal.\n\ngenerate_report() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tHeight = ar_node:get_height(),\n\tPacking = ar_mining_io:get_packing(),\n\tPartitions = ar_mining_io:get_partitions(),\n\tgenerate_report(\n\t\tHeight,\n\t\tPacking,\n\t\tPartitions,\n\t\tConfig#config.cm_peers,\n\t\tar_node:get_weave_size(),\n\t\terlang:monotonic_time(millisecond)\n\t).\n\ngenerate_report(_Height, _Packing, [], _Peers, _WeaveSize, Now) ->\n\t#report{\n\t\tnow = Now\n\t};\ngenerate_report(Height, Packing, Partitions, Peers, WeaveSize, Now) ->\n\tPoA1Multiplier = ar_difficulty:poa1_diff_multiplier(Height),\n\tVDFSpeed = vdf_speed(Now),\n\tTotalDataSize = get_total_minable_data_size(Packing),\n\tReport = #report{\n\t\tnow = Now,\n\t\tvdf_speed = VDFSpeed,\n\t\th1_solution = get_count(h1_solution),\n\t\th2_solution = get_count(h2_solution),\n\t\tconfirmed_block = get_count(confirmed_block),\n\t\ttotal_data_size = TotalDataSize,\n\t\ttotal_h2_to_peer = get_overall_total(peer, h2_to_peer, total),\n\t\ttotal_h2_from_peer = get_overall_total(peer, h2_from_peer, total)\n\t},\n\n\tReport2 = generate_partition_reports(\n\t\tPoA1Multiplier, Partitions, Packing, Report, WeaveSize),\n\tReport3 = generate_peer_reports(Peers, Report2),\n\tReport3.\n\ngenerate_partition_reports(PoA1Multiplier, Partitions, Packing, Report, WeaveSize) ->\n\tlists:foldr(\n\t\tfun({PartitionNumber, _MiningAddr, _PackingDifficulty}, Acc) ->\n\t\t\tgenerate_partition_report(\n\t\t\t\tPoA1Multiplier, PartitionNumber, Packing, Acc, WeaveSize)\n\t\tend,\n\t\tReport,\n\t\tPartitions\n\t).\n\ngenerate_partition_report(\n\t\tPoA1Multiplier, PartitionNumber, Packing, Report, WeaveSize) ->\n\t#report{\n\t\tnow = Now,\n\t\tvdf_speed = VDFSpeed,\n\t\ttotal_data_size = TotalDataSize,\n\t\tpartitions = Partitions,\n\t\toptimal_overall_read_mibps = OptimalOverallRead,\n\t\toptimal_overall_hash_hps = OptimalOverallHash,\n\t\taverage_read_mibps = AverageRead,\n\t\tcurrent_read_mibps = CurrentRead,\n\t\taverage_hash_hps = AverageHash,\n\t\tcurrent_hash_hps = CurrentHash } = Report,\n\tDataSize = get_partition_data_size(PartitionNumber, Packing),\n\tPartitionReport = #partition_report{\n\t\tpartition_number = PartitionNumber,\n\t\tdata_size = DataSize,\n\t\taverage_read_mibps = get_average_count_by_time(\n\t\t\t\t{partition, PartitionNumber, read, total}, Now) / 4,\n\t\tcurrent_read_mibps = get_average_count_by_time(\n\t\t\t\t{partition, PartitionNumber, read, current}, Now) / 4,\n\t\taverage_hash_hps = get_hash_hps(\n\t\t\tPoA1Multiplier, Packing, PartitionNumber, total, Now),\n\t\tcurrent_hash_hps = get_hash_hps(\n\t\t\tPoA1Multiplier, Packing, PartitionNumber, current, Now),\n\t\toptimal_read_mibps = optimal_partition_read_mibps(\n\t\t\tPacking, VDFSpeed, DataSize, TotalDataSize, WeaveSize),\n\t\toptimal_hash_hps = optimal_partition_hash_hps(\n\t\t\tPoA1Multiplier, VDFSpeed, DataSize, TotalDataSize, WeaveSize)\n\t},\n\n\treset_count({partition, PartitionNumber, read, current}, Now),\n\treset_count({partition, PartitionNumber, h1, current}, Now),\n\treset_count({partition, PartitionNumber, h2, current}, Now),\n\n\tReport#report{ \n\t\toptimal_overall_read_mibps = 
\n\t\t\tOptimalOverallRead + PartitionReport#partition_report.optimal_read_mibps,\n\t\toptimal_overall_hash_hps = \n\t\t\tOptimalOverallHash + PartitionReport#partition_report.optimal_hash_hps,\n\t\taverage_read_mibps = AverageRead + PartitionReport#partition_report.average_read_mibps,\n\t\tcurrent_read_mibps = CurrentRead + PartitionReport#partition_report.current_read_mibps,\n\t\taverage_hash_hps = AverageHash + PartitionReport#partition_report.average_hash_hps,\n\t\tcurrent_hash_hps = CurrentHash + PartitionReport#partition_report.current_hash_hps,\n\t\tpartitions = Partitions ++ [PartitionReport] }.\n\ngenerate_peer_reports(Peers, Report) ->\n\tlists:foldr(\n\t\tfun(Peer, Acc) ->\n\t\t\tgenerate_peer_report(Peer, Acc)\n\t\tend,\n\t\tReport,\n\t\tPeers\n\t).\n\ngenerate_peer_report(Peer, Report) ->\n\t#report{\n\t\tnow = Now,\n\t\tpeers = Peers,\n\t\taverage_h1_to_peer_hps = AverageH1ToPeer,\n\t\tcurrent_h1_to_peer_hps = CurrentH1ToPeer,\n\t\taverage_h1_from_peer_hps = AverageH1FromPeer,\n\t\tcurrent_h1_from_peer_hps = CurrentH1FromPeer } = Report,\n\tPeerReport = #peer_report{\n\t\tpeer = Peer,\n\t\taverage_h1_to_peer_hps =\n\t\t\tget_average_count_by_time({peer, Peer, h1_to_peer, total}, Now),\n\t\tcurrent_h1_to_peer_hps =\n\t\t\tget_average_count_by_time({peer, Peer, h1_to_peer, current}, Now),\n\t\taverage_h1_from_peer_hps =\n\t\t\tget_average_count_by_time({peer, Peer, h1_from_peer, total}, Now),\n\t\tcurrent_h1_from_peer_hps =\n\t\t\tget_average_count_by_time({peer, Peer, h1_from_peer, current}, Now),\n\t\ttotal_h2_to_peer = get_count({peer, Peer, h2_to_peer, total}),\n\t\ttotal_h2_from_peer = get_count({peer, Peer, h2_from_peer, total})\n\t},\n\n\treset_count({peer, Peer, h1_to_peer, current}, Now),\n\treset_count({peer, Peer, h1_from_peer, current}, Now),\n\n\tReport#report{\n\t\tpeers = Peers ++ [PeerReport],\n\t\taverage_h1_to_peer_hps =\n\t\t\tAverageH1ToPeer + PeerReport#peer_report.average_h1_to_peer_hps,\n\t\tcurrent_h1_to_peer_hps =\n\t\t\tCurrentH1ToPeer + PeerReport#peer_report.current_h1_to_peer_hps,\n\t\taverage_h1_from_peer_hps =\n\t\t\tAverageH1FromPeer + PeerReport#peer_report.average_h1_from_peer_hps,\n\t\tcurrent_h1_from_peer_hps =\n\t\t\tCurrentH1FromPeer + PeerReport#peer_report.current_h1_from_peer_hps\n\t}.\n\nreport_performance() ->\n\tReport = generate_report(),\n\tcase Report#report.partitions of\n\t\t[] ->\n\t\t\tok;\n\t\t_ ->\n\t\t\tset_metrics(Report),\n\t\t\tReportString = format_report(Report),\n\t\t\tar:console(\"~s\", [ReportString]),\n\t\t\tlog_report(ReportString)\n\tend.\n\nlog_report(ReportString) ->\n\tLines = string:tokens(lists:flatten(ReportString), \"\\n\"),\n\tlog_report_lines(Lines).\n\nlog_report_lines([]) ->\n\tok;\nlog_report_lines([Line | Lines]) ->\n\t?LOG_INFO(Line),\n\tlog_report_lines(Lines).\n\nset_metrics(Report) ->\n\tprometheus_gauge:set(mining_rate, [read, total], Report#report.current_read_mibps),\n\tprometheus_gauge:set(mining_rate, [hash, total],  Report#report.current_hash_hps),\n\tprometheus_gauge:set(mining_rate, [ideal_read, total],  Report#report.optimal_overall_read_mibps),\n\tprometheus_gauge:set(mining_rate, [ideal_hash, total],  Report#report.optimal_overall_hash_hps),\n\tprometheus_gauge:set(cm_h1_rate, [total, to], Report#report.current_h1_to_peer_hps),\n\tprometheus_gauge:set(cm_h1_rate, [total, from], Report#report.current_h1_from_peer_hps),\n\tprometheus_gauge:set(cm_h2_count, [total, to], Report#report.total_h2_to_peer),\n\tprometheus_gauge:set(cm_h2_count, [total, from], 
Report#report.total_h2_from_peer),\n\tset_partition_metrics(Report#report.partitions),\n\tset_peer_metrics(Report#report.peers).\n\nset_partition_metrics([]) ->\n\tok;\nset_partition_metrics([PartitionReport | PartitionReports]) ->\n\tPartitionNumber = PartitionReport#partition_report.partition_number,\n\tprometheus_gauge:set(mining_rate, [read, PartitionNumber],\n\t\tPartitionReport#partition_report.current_read_mibps),\n\tprometheus_gauge:set(mining_rate, [hash, PartitionNumber],\n\t\tPartitionReport#partition_report.current_hash_hps),\n\tprometheus_gauge:set(mining_rate, [ideal_read, PartitionNumber],\n\t\tPartitionReport#partition_report.optimal_read_mibps),\n\tprometheus_gauge:set(mining_rate, [ideal_hash, PartitionNumber],\n\t\tPartitionReport#partition_report.optimal_hash_hps),\n\tset_partition_metrics(PartitionReports).\n\nset_peer_metrics([]) ->\n\tok;\nset_peer_metrics([PeerReport | PeerReports]) ->\n\tPeer = ar_util:format_peer(PeerReport#peer_report.peer),\n\tprometheus_gauge:set(cm_h1_rate, [Peer, to],\n\t\tPeerReport#peer_report.current_h1_to_peer_hps),\n\tprometheus_gauge:set(cm_h1_rate, [Peer, from],\n\t\tPeerReport#peer_report.current_h1_from_peer_hps),\n\tprometheus_gauge:set(cm_h2_count, [Peer, to],\n\t\tPeerReport#peer_report.total_h2_to_peer),\n\tprometheus_gauge:set(cm_h2_count, [Peer, from],\n\t\tPeerReport#peer_report.total_h2_from_peer),\n\tset_peer_metrics(PeerReports).\n\nclear_metrics() ->\n\tReport = generate_report(),\n\tprometheus_gauge:set(mining_rate, [read, total], 0),\n\tprometheus_gauge:set(mining_rate, [hash, total],  0),\n\tprometheus_gauge:set(mining_rate, [ideal, total],  0),\n\tprometheus_gauge:set(cm_h1_rate, [total, to], 0),\n\tprometheus_gauge:set(cm_h1_rate, [total, from], 0),\n\tprometheus_gauge:set(cm_h2_count, [total, to], 0),\n\tprometheus_gauge:set(cm_h2_count, [total, from], 0),\n\tclear_partition_metrics(Report#report.partitions),\n\tclear_peer_metrics(Report#report.peers).\n\nclear_partition_metrics([]) ->\n\tok;\nclear_partition_metrics([PartitionReport | PartitionReports]) ->\n\tPartitionNumber = PartitionReport#partition_report.partition_number,\n\tprometheus_gauge:set(mining_rate, [read, PartitionNumber], 0),\n\tprometheus_gauge:set(mining_rate, [hash, PartitionNumber], 0),\n\tprometheus_gauge:set(mining_rate, [ideal, PartitionNumber], 0),\n\tclear_partition_metrics(PartitionReports).\n\nclear_peer_metrics([]) ->\n\tok;\nclear_peer_metrics([PeerReport | PeerReports]) ->\n\tPeer = ar_util:format_peer(PeerReport#peer_report.peer),\n\tprometheus_gauge:set(cm_h1_rate, [Peer, to], 0),\n\tprometheus_gauge:set(cm_h1_rate, [Peer, from], 0),\n\tprometheus_gauge:set(cm_h2_count, [Peer, to], 0),\n\tprometheus_gauge:set(cm_h2_count, [Peer, from], 0),\n\tclear_peer_metrics(PeerReports).\n\nformat_report(Report) ->\n\tformat_report(Report, ar_node:get_weave_size()).\nformat_report(Report, WeaveSize) ->\n\tPreamble = io_lib:format(\n\t\t\"================================================= Mining Performance Report =================================================\\n\"\n\t\t\"\\n\"\n\t\t\"VDF Speed:       ~s\\n\"\n\t\t\"H1 Solutions:     ~B\\n\"\n\t\t\"H2 Solutions:     ~B\\n\"\n\t\t\"Confirmed Blocks: ~B\\n\"\n\t\t\"\\n\",\n\t\t[format_vdf_speed(Report#report.vdf_speed), Report#report.h1_solution,\n\t\t\tReport#report.h2_solution, Report#report.confirmed_block]\n\t),\n\tPartitionTable = format_partition_report(Report, WeaveSize),\n\tPeerTable = format_peer_report(Report),\n    \n    io_lib:format(\"\\n~s~s~s\", [Preamble, PartitionTable, 
PeerTable]).\n\nformat_partition_report(Report, WeaveSize) ->\n\tHeader = \n\t\t\"Local mining stats:\\n\"\n\t\t\"+-----------+-----------+----------+---------------+---------------+---------------+------------+------------+--------------+\\n\"\n        \"| Partition | Data Size | % of Max |   Read (Cur)  |   Read (Avg)  |  Read (Ideal) | Hash (Cur) | Hash (Avg) | Hash (Ideal) |\\n\"\n\t\t\"+-----------+-----------+----------+---------------+---------------+---------------+------------+------------+--------------+\\n\",\n\tTotalRow = format_partition_total_row(Report, WeaveSize),\n\tPartitionRows = format_partition_rows(Report#report.partitions),\n    Footer =\n\t\t\"+-----------+-----------+----------+---------------+---------------+---------------+------------+------------+--------------+\\n\",\n\tio_lib:format(\"~s~s~s~s\", [Header, TotalRow, PartitionRows, Footer]).\n\nformat_partition_total_row(Report, WeaveSize) ->\n\t#report{\n\t\ttotal_data_size = TotalDataSize,\n\t\toptimal_overall_read_mibps = OptimalOverallRead,\n\t\toptimal_overall_hash_hps = OptimalOverallHash,\n\t\taverage_read_mibps = AverageRead,\n\t\tcurrent_read_mibps = CurrentRead,\n\t\taverage_hash_hps = AverageHash,\n\t\tcurrent_hash_hps = CurrentHash } = Report,\n\tTotalTiB = TotalDataSize / ?TiB,\n\tPctOfWeave = floor((TotalDataSize / WeaveSize) * 100),\n    io_lib:format(\n\t\t\"|     Total | ~5.1f TiB | ~6.B % \"\n\t\t\"| ~7.1f MiB/s | ~7.1f MiB/s | ~7.1f MiB/s \"\n\t\t\"| ~6B h/s | ~6B h/s | ~8B h/s |\\n\",\n\t\t[\n\t\t\tTotalTiB, PctOfWeave,\n\t\t\tCurrentRead, AverageRead, OptimalOverallRead,\n\t\t\tfloor(CurrentHash), floor(AverageHash), floor(OptimalOverallHash)]).\n\nformat_partition_rows([]) ->\n\t\"\";\nformat_partition_rows([PartitionReport | PartitionReports]) ->\n\tformat_partition_rows(PartitionReports) ++\n\t[format_partition_row(PartitionReport)].\n\nformat_partition_row(PartitionReport) ->\n\t#partition_report{\n\t\tpartition_number = PartitionNumber,\n\t\tdata_size = DataSize,\n\t\toptimal_read_mibps = OptimalRead,\n\t\taverage_read_mibps = AverageRead,\n\t\tcurrent_read_mibps = CurrentRead,\n\t\toptimal_hash_hps = OptimalHash,\n\t\taverage_hash_hps = AverageHash,\n\t\tcurrent_hash_hps = CurrentHash } = PartitionReport,\n\tTiB = DataSize / ?TiB,\n\tPctOfPartition = floor((DataSize / ar_block:partition_size()) * 100),\n    io_lib:format(\n\t\t\"| ~9.B | ~5.1f TiB | ~6.B % \"\n\t\t\"| ~7.1f MiB/s | ~7.1f MiB/s | ~7.1f MiB/s \"\n\t\t\"| ~6B h/s | ~6B h/s | ~8B h/s |\\n\",\n\t\t[\n\t\t\tPartitionNumber, TiB, PctOfPartition,\n\t\t\tCurrentRead, AverageRead, OptimalRead,\n\t\t\tfloor(CurrentHash), floor(AverageHash), floor(OptimalHash)]).\n\nformat_peer_report(#report{ peers = [] }) ->\n\t\"\";\nformat_peer_report(Report) ->\n\tHeader = \n\t\t\"\\n\"\n\t\t\"Coordinated mining cluster stats:\\n\"\n\t\t\"+----------------------+--------------+--------------+-------------+-------------+--------+--------+\\n\"\n        \"|                 Peer | H1 Out (Cur) | H1 Out (Avg) | H1 In (Cur) | H1 In (Avg) | H2 Out |  H2 In |\\n\"\n\t\t\"+----------------------+--------------+--------------+-------------+-------------+--------+--------+\\n\",\n\tTotalRow = format_peer_total_row(Report),\n\tPartitionRows = format_peer_rows(Report#report.peers),\n    Footer =\n\t\t\"+----------------------+--------------+--------------+-------------+-------------+--------+--------+\\n\",\n\tio_lib:format(\"~s~s~s~s\", [Header, TotalRow, PartitionRows, Footer]).\n\nformat_peer_total_row(Report) 
->\n\t#report{\n\t\taverage_h1_to_peer_hps = AverageH1To,\n\t\tcurrent_h1_to_peer_hps = CurrentH1To,\n\t\taverage_h1_from_peer_hps = AverageH1From,\n\t\tcurrent_h1_from_peer_hps = CurrentH1From,\n\t\ttotal_h2_to_peer = TotalH2To,\n\t\ttotal_h2_from_peer = TotalH2From } = Report,\n    io_lib:format(\n\t\t\"|                  All | ~8B h/s | ~8B h/s | ~7B h/s | ~7B h/s | ~6B | ~6B |\\n\",\n\t\t[\n\t\t\tfloor(CurrentH1To), floor(AverageH1To),\n\t\t\tfloor(CurrentH1From), floor(AverageH1From),\n\t\t\tTotalH2To, TotalH2From\n\t\t]).\n\nformat_peer_rows([]) ->\n\t\"\";\nformat_peer_rows([PeerReport | PeerReports]) ->\n\tformat_peer_rows(PeerReports) ++\n\t[format_peer_row(PeerReport)].\n\nformat_peer_row(PeerReport) ->\n\t#peer_report{\n\t\tpeer = Peer,\n\t\taverage_h1_to_peer_hps = AverageH1To,\n\t\tcurrent_h1_to_peer_hps = CurrentH1To,\n\t\taverage_h1_from_peer_hps = AverageH1From,\n\t\tcurrent_h1_from_peer_hps = CurrentH1From,\n\t\ttotal_h2_to_peer = TotalH2To,\n\t\ttotal_h2_from_peer = TotalH2From } = PeerReport,\n    io_lib:format(\n\t\t\"| ~20s | ~8B h/s | ~8B h/s | ~7B h/s | ~7B h/s | ~6B | ~6B |\\n\",\n\t\t[\n\t\t\tar_util:format_peer(Peer),\n\t\t\tfloor(CurrentH1To), floor(AverageH1To), \n\t\t\tfloor(CurrentH1From), floor(AverageH1From), \n\t\t\tTotalH2To, TotalH2From\n\t\t]).\n\nformat_vdf_speed(undefined) ->\n\t\" undefined\";\nformat_vdf_speed(VDFSpeed) ->\n\tio_lib:format(\"~5.2f s\", [VDFSpeed]).\n\n%%%===================================================================\n%%% Tests\n%%%===================================================================\n\nmining_stats_test_() ->\n\t[ar_test_node:test_with_mocked_functions(\n\t\t[\n\t\t\t{ar_block, partition_size, fun() -> 2097152 end}\n\t\t],\n\t\tfun test_read_stats/0),\n\tar_test_node:test_with_mocked_functions(\n\t\t[\n\t\t\t{ar_block, partition_size, fun() -> 2097152 end}\n\t\t],\n\t\tfun test_h1_stats/0),\n\tar_test_node:test_with_mocked_functions(\n\t\t[\n\t\t\t{ar_block, partition_size, fun() -> 2097152 end}\n\t\t],\n\t\tfun test_h2_stats/0),\n\tar_test_node:test_with_mocked_functions(\n\t\t[\n\t\t\t{ar_block, partition_size, fun() -> 2097152 end}\n\t\t],\n\t\tfun test_vdf_stats/0),\n\tar_test_node:test_with_mocked_functions(\n\t\t[\n\t\t\t{ar_block, partition_size, fun() -> 2097152 end}\n\t\t],\n\t\tfun test_data_size_stats/0),\n\tar_test_node:test_with_mocked_functions(\n\t\t[\n\t\t\t{ar_block, partition_size, fun() -> 2097152 end}\n\t\t],\n\t\tfun test_h1_sent_to_peer_stats/0),\n\tar_test_node:test_with_mocked_functions(\n\t\t[\n\t\t\t{ar_block, partition_size, fun() -> 2097152 end}\n\t\t],\n\t\tfun test_h1_received_from_peer_stats/0),\n\tar_test_node:test_with_mocked_functions(\n\t\t[\n\t\t\t{ar_block, partition_size, fun() -> 2097152 end}\n\t\t],\n\t\tfun test_h2_peer_stats/0),\n\tar_test_node:test_with_mocked_functions(\n\t\t[\n\t\t\t{ar_block, partition_size, fun() -> 2097152 end}\n\t\t],\n\t\tfun test_optimal_stats_poa1_multiple_1/0),\n\tar_test_node:test_with_mocked_functions(\n\t\t[\n\t\t\t{ar_block, partition_size, fun() -> 2097152 end}\n\t\t],\n\t\tfun test_optimal_stats_poa1_multiple_2/0),\n\tar_test_node:test_with_mocked_functions(\n\t\t[\n\t\t\t{ar_block, partition_size, fun() -> 2097152 end}\n\t\t],\n\t\tfun test_report_poa1_multiple_1/0),\n\tar_test_node:test_with_mocked_functions(\n\t\t\t[\n\t\t\t\t{ar_block, partition_size, fun() -> 2097152 end},\n\t\t\t\t{ar_difficulty, poa1_diff_multiplier, fun(_) -> 2 end}\n\t\t\t],\n\t\t\tfun test_report_poa1_multiple_2/0\n\t\t)\n\t].\n\ntest_read_stats() 
->\n\ttest_local_stats(fun chunks_read/2, read).\n\ntest_h1_stats() ->\n\ttest_local_stats(fun h1_computed/2, h1).\n\ntest_h2_stats() ->\n\ttest_local_stats(fun h2_computed/2, h2).\n\ntest_local_stats(Fun, Stat) ->\n\tar_mining_stats:pause_performance_reports(120000),\n\treset_all_stats(),\n\tFun(1, 1),\n\tTotalStart1 = get_start({partition, 1, Stat, total}),\n\tCurrentStart1 = get_start({partition, 1, Stat, current}),\n\ttimer:sleep(1000),\n\tFun(1, 1),\n\tFun(1, 1),\n\t\n\tFun(2, 1),\n\tTotalStart2 = get_start({partition, 2, Stat, total}),\n\tCurrentStart2 = get_start({partition, 2, Stat, current}),\n\tFun(2, 1),\n\t\n\t?assert(TotalStart1 /= TotalStart2),\n\t?assert(CurrentStart1 /= CurrentStart2),\n\t?assertEqual(0.0, get_average_count_by_time({partition, 1, Stat, total}, TotalStart1)),\n\t?assertEqual(0.0, get_average_count_by_time({partition, 1, Stat, current}, CurrentStart1)),\n\t?assertEqual(0.0, get_average_count_by_time({partition, 2, Stat, total}, TotalStart2)),\n\t?assertEqual(0.0, get_average_count_by_time({partition, 2, Stat, current}, CurrentStart2)),\n\n\t?assertEqual(6.0, get_average_count_by_time({partition, 1, Stat, total}, TotalStart1 + 500)),\n\t?assertEqual(0.25, get_average_count_by_time({partition, 1, Stat, current}, CurrentStart1 + 12000)),\n\t?assertEqual(0.5, get_average_count_by_time({partition, 2, Stat, total}, TotalStart2 + 4000)),\n\t?assertEqual(8.0, get_average_count_by_time({partition, 2, Stat, current}, CurrentStart2 + 250)),\n\t?assertEqual(0.0, get_average_count_by_time({partition, 3, Stat, total}, TotalStart1 + 4000)),\n\t?assertEqual(0.0, get_average_count_by_time({partition, 3, Stat, current}, TotalStart1 + 250)),\n\n\t?assertEqual(6.0, get_average_samples_by_time({partition, 1, Stat, total}, TotalStart1 + 500)),\n\t?assertEqual(0.25, get_average_samples_by_time({partition, 1, Stat, current}, CurrentStart1 + 12000)),\n\t?assertEqual(0.5, get_average_samples_by_time({partition, 2, Stat, total}, TotalStart2 + 4000)),\n\t?assertEqual(8.0, get_average_samples_by_time({partition, 2, Stat, current}, CurrentStart2 + 250)),\n\t?assertEqual(0.0, get_average_samples_by_time({partition, 3, Stat, total}, TotalStart1 + 4000)),\n\t?assertEqual(0.0, get_average_samples_by_time({partition, 3, Stat, current}, TotalStart1 + 250)),\n\n\t?assertEqual(1.0, get_average_by_samples({partition, 1, Stat, total})),\n\t?assertEqual(1.0, get_average_by_samples({partition, 1, Stat, current})),\n\t?assertEqual(1.0, get_average_by_samples({partition, 2, Stat, total})),\n\t?assertEqual(1.0, get_average_by_samples({partition, 2, Stat, current})),\n\t?assertEqual(0.0, get_average_by_samples({partition, 3, Stat, total})),\n\t?assertEqual(0.0, get_average_by_samples({partition, 3, Stat, current})),\n\n\tNow = CurrentStart2 + 1000,\n\treset_count({partition, 1, Stat, current}, Now),\n\t?assertEqual(Now, get_start({partition, 1, Stat, current})),\n\t?assertEqual(6.0, get_average_count_by_time({partition, 1, Stat, total}, TotalStart1 + 500)),\n\t?assertEqual(0.0, get_average_count_by_time({partition, 1, Stat, current}, Now + 12000)),\n\t?assertEqual(0.5, get_average_count_by_time({partition, 2, Stat, total}, TotalStart2 + 4000)),\n\t?assertEqual(8.0, get_average_count_by_time({partition, 2, Stat, current}, CurrentStart2 + 250)),\n\t?assertEqual(0.0, get_average_count_by_time({partition, 3, Stat, total}, TotalStart1 + 4000)),\n\t?assertEqual(0.0, get_average_count_by_time({partition, 3, Stat, current}, CurrentStart1 + 250)),\n\n\t?assertEqual(6.0, get_average_samples_by_time({partition, 1, 
Stat, total}, TotalStart1 + 500)),\n\t?assertEqual(0.0, get_average_samples_by_time({partition, 1, Stat, current}, Now + 12000)),\n\t?assertEqual(0.5, get_average_samples_by_time({partition, 2, Stat, total}, TotalStart2 + 4000)),\n\t?assertEqual(8.0, get_average_samples_by_time({partition, 2, Stat, current}, CurrentStart2 + 250)),\n\t?assertEqual(0.0, get_average_samples_by_time({partition, 3, Stat, total}, TotalStart1 + 4000)),\n\t?assertEqual(0.0, get_average_samples_by_time({partition, 3, Stat, current}, CurrentStart1 + 250)),\n\n\t?assertEqual(1.0, get_average_by_samples({partition, 1, Stat, total})),\n\t?assertEqual(0.0, get_average_by_samples({partition, 1, Stat, current})),\n\t?assertEqual(1.0, get_average_by_samples({partition, 2, Stat, total})),\n\t?assertEqual(1.0, get_average_by_samples({partition, 2, Stat, current})),\n\t?assertEqual(0.0, get_average_by_samples({partition, 3, Stat, total})),\n\t?assertEqual(0.0, get_average_by_samples({partition, 3, Stat, current})),\n\n\treset_all_stats(),\n\t?assertEqual(0.0, get_average_count_by_time({partition, 1, Stat, total}, Now + 500)),\n\t?assertEqual(0.0, get_average_count_by_time({partition, 1, Stat, current}, Now + 12000)),\n\t?assertEqual(0.0, get_average_count_by_time({partition, 2, Stat, total}, TotalStart2 + 4000)),\n\t?assertEqual(0.0, get_average_count_by_time({partition, 2, Stat, current}, CurrentStart2 + 250)),\n\t?assertEqual(0.0, get_average_count_by_time({partition, 3, Stat, total}, TotalStart1 + 4000)),\n\t?assertEqual(0.0, get_average_count_by_time({partition, 3, Stat, current}, TotalStart1 + 250)),\n\n\t?assertEqual(0.0, get_average_samples_by_time({partition, 1, Stat, total}, Now + 500)),\n\t?assertEqual(0.0, get_average_samples_by_time({partition, 1, Stat, current}, Now + 12000)),\n\t?assertEqual(0.0, get_average_samples_by_time({partition, 2, Stat, total}, TotalStart2 + 4000)),\n\t?assertEqual(0.0, get_average_samples_by_time({partition, 2, Stat, current}, CurrentStart2 + 250)),\n\t?assertEqual(0.0, get_average_samples_by_time({partition, 3, Stat, total}, TotalStart1 + 4000)),\n\t?assertEqual(0.0, get_average_samples_by_time({partition, 3, Stat, current}, TotalStart1 + 250)),\n\n\t?assertEqual(0.0, get_average_by_samples({partition, 1, Stat, total})),\n\t?assertEqual(0.0, get_average_by_samples({partition, 1, Stat, current})),\n\t?assertEqual(0.0, get_average_by_samples({partition, 2, Stat, total})),\n\t?assertEqual(0.0, get_average_by_samples({partition, 2, Stat, current})),\n\t?assertEqual(0.0, get_average_by_samples({partition, 3, Stat, total})),\n\t?assertEqual(0.0, get_average_by_samples({partition, 3, Stat, current})).\n\ntest_vdf_stats() ->\n\tar_mining_stats:pause_performance_reports(120000),\n\treset_all_stats(),\n\tar_mining_stats:vdf_computed(),\n\tStart = get_start(vdf),\n\tar_mining_stats:vdf_computed(),\n\tar_mining_stats:vdf_computed(),\n\n\t?assertEqual(0.0, get_average_count_by_time(vdf, Start)),\n\t?assertEqual(6.0, get_average_count_by_time(vdf, Start + 500)),\n\t?assertEqual(0.0, get_average_samples_by_time(vdf, Start)),\n\t?assertEqual(6.0, get_average_samples_by_time(vdf, Start + 500)),\n\t?assertEqual(1.0, get_average_by_samples(vdf)),\n\n\tNow = Start + 1000,\n\t?assertEqual(1.0/3.0, vdf_speed(Now)),\n\t?assertEqual(Now, get_start(vdf)),\n\t?assertEqual(undefined, vdf_speed(Now)),\n\t?assertEqual(0.0, get_average_count_by_time(vdf, Now + 500)),\n\t?assertEqual(0.0, get_average_samples_by_time(vdf, Now + 500)),\n\t?assertEqual(0.0, get_average_by_samples(vdf)),\n\t?assertEqual(undefined, 
vdf_speed(Now + 500)),\n\n\tar_mining_stats:vdf_computed(),\n\tStart2 = get_start(vdf),\n\t?assertEqual(0.5, vdf_speed(Start2 + 500)),\n\n\tar_mining_stats:vdf_computed(),\n\treset_all_stats(),\n\t?assertEqual(undefined, get_start(vdf)),\n\t?assertEqual(0.0, get_average_count_by_time(vdf, 1000)),\n\t?assertEqual(0.0, get_average_samples_by_time(vdf, 1000)),\n\t?assertEqual(0.0, get_average_by_samples(vdf)),\n\t?assertEqual(undefined, vdf_speed(1000)).\n\ntest_data_size_stats() ->\n\t{ok, Config} = arweave_config:get_env(),\n\ttry\n\t\tarweave_config:set_env(Config#config{\n\t\t\tmining_addr = <<\"MINING\">>\n\t\t}),\n\n\t\tWeaveSize = floor(2 * ar_block:partition_size()),\n\t\tets:insert(node_state, [{weave_size, WeaveSize}]),\n\n\t\tar_mining_stats:pause_performance_reports(120000),\n\t\tdo_test_data_size_stats(Config, {spora_2_6, <<\"MINING\">>}, {spora_2_6, <<\"PACKING\">>}),\n\t\tdo_test_data_size_stats(Config, {composite, <<\"MINING\">>, 1}, {composite, <<\"PACKING\">>, 1}),\n\t\tdo_test_data_size_stats(Config, {composite, <<\"MINING\">>, 2}, {composite, <<\"PACKING\">>, 2})\n\tafter\n\t\tarweave_config:set_env(Config)\n\tend.\n\ndo_test_data_size_stats(Config, Mining, Packing) ->\n\tarweave_config:set_env(Config#config{ \n\t\tstorage_modules = [\n\t\t\t{floor(0.1 * ar_block:partition_size()), 10, unpacked},\n\t\t\t{floor(0.1 * ar_block:partition_size()), 10, Mining},\n\t\t\t{floor(0.1 * ar_block:partition_size()), 10, Packing},\n\t\t\t{floor(0.3 * ar_block:partition_size()), 4, unpacked},\n\t\t\t{floor(0.3 * ar_block:partition_size()), 4, Mining},\n\t\t\t{floor(0.3 * ar_block:partition_size()), 4, Packing},\n\t\t\t{floor(0.2 * ar_block:partition_size()), 8, unpacked},\n\t\t\t{floor(0.2 * ar_block:partition_size()), 8, Mining},\n\t\t\t{floor(0.2 * ar_block:partition_size()), 8, Packing},\n\t\t\t{ar_block:partition_size(), 2, unpacked},\n\t\t\t{ar_block:partition_size(), 2, Mining},\n\t\t\t{ar_block:partition_size(), 2, Packing}\n\t\t]\n\t}),\n\n\treset_all_stats(),\n\t?assertEqual(0, get_total_minable_data_size(Mining)),\n\t?assertEqual(0, get_partition_data_size(1, Mining)),\n\t?assertEqual(0, get_partition_data_size(2, Mining)),\n\n\tar_mining_stats:set_storage_module_data_size(\n\t\tar_storage_module:id({floor(0.1 * ar_block:partition_size()), 10, unpacked}),\n\t\tunpacked, 1, floor(0.1 * ar_block:partition_size()), 10, 101),\n\tar_mining_stats:set_storage_module_data_size(\n\t\tar_storage_module:id({floor(0.1 * ar_block:partition_size()), 10, Mining}),\n\t\tMining, 1, floor(0.1 * ar_block:partition_size()), 10, 102),\n\tar_mining_stats:set_storage_module_data_size(\n\t\tar_storage_module:id({floor(0.1 * ar_block:partition_size()), 10, Packing}),\n\t\tPacking, 1, floor(0.1 * ar_block:partition_size()), 10, 103),\n\n\tar_mining_stats:set_storage_module_data_size(\n\t\tar_storage_module:id({floor(0.3 * ar_block:partition_size()), 4, unpacked}),\n\t\tunpacked, 1, floor(0.3 * ar_block:partition_size()), 4, 111),\n\tar_mining_stats:set_storage_module_data_size(\n\t\tar_storage_module:id({floor(0.3 * ar_block:partition_size()), 4, Mining}),\n\t\tMining, 1, floor(0.3 * ar_block:partition_size()), 4, 112),\n\tar_mining_stats:set_storage_module_data_size(\n\t\tar_storage_module:id({floor(0.3 * ar_block:partition_size()), 4, Packing}),\n\t\tPacking, 1, floor(0.3 * ar_block:partition_size()), 4, 113),\n\n\tar_mining_stats:set_storage_module_data_size(\n\t\tar_storage_module:id({ar_block:partition_size(), 2, unpacked}),\n\t\tunpacked, 2, ar_block:partition_size(), 2, 
201),\n\tar_mining_stats:set_storage_module_data_size(\n\t\tar_storage_module:id({ar_block:partition_size(), 2, Mining}),\n\t\tMining, 2, ar_block:partition_size(), 2, 202),\n\tar_mining_stats:set_storage_module_data_size(\n\t\tar_storage_module:id({ar_block:partition_size(), 2, Packing}),\n\t\tPacking, 2, ar_block:partition_size(), 2, 203),\n\n\t?assertEqual(214, get_partition_data_size(1, Mining)),\n\t?assertEqual(202, get_partition_data_size(2, Mining)),\n\t?assertEqual(214, get_total_minable_data_size(Mining)),\n\n\tar_mining_stats:set_storage_module_data_size(\n\t\tar_storage_module:id({floor(0.2 * ar_block:partition_size()), 8, unpacked}),\n\t\tunpacked, 1, floor(0.2 * ar_block:partition_size()), 8, 121),\n\tar_mining_stats:set_storage_module_data_size(\n\t\tar_storage_module:id({floor(0.2 * ar_block:partition_size()), 8, Mining}),\n\t\tMining, 1, floor(0.2 * ar_block:partition_size()), 8, 122),\n\tar_mining_stats:set_storage_module_data_size(\n\t\tar_storage_module:id({floor(0.2 * ar_block:partition_size()), 8, Packing}),\n\t\tPacking, 1, floor(0.2 * ar_block:partition_size()), 8, 123),\n\n\tar_mining_stats:set_storage_module_data_size(\n\t\tar_storage_module:id({ar_block:partition_size(), 2, unpacked}),\n\t\tunpacked, 2, ar_block:partition_size(), 2, 51),\n\tar_mining_stats:set_storage_module_data_size(\n\t\tar_storage_module:id({ar_block:partition_size(), 2, Mining}),\n\t\tMining, 2, ar_block:partition_size(), 2, 52),\n\tar_mining_stats:set_storage_module_data_size(\n\t\tar_storage_module:id({ar_block:partition_size(), 2, Packing}),\n\t\tPacking, 2, ar_block:partition_size(), 2, 53),\n\t\n\t?assertEqual(336, get_partition_data_size(1, Mining)),\n\t?assertEqual(52, get_partition_data_size(2, Mining)),\n\t?assertEqual(336, get_total_minable_data_size(Mining)),\n\n\treset_all_stats(),\n\t?assertEqual(0, get_total_minable_data_size(Mining)),\n\t?assertEqual(0, get_partition_data_size(1, Mining)),\n\t?assertEqual(0, get_partition_data_size(2, Mining)).\n\ntest_h1_sent_to_peer_stats() ->\n\ttest_peer_stats(fun h1_sent_to_peer/2, h1_to_peer).\n\ntest_h1_received_from_peer_stats() ->\n\ttest_peer_stats(fun h1_received_from_peer/2, h1_from_peer).\n\ntest_peer_stats(Fun, Stat) ->\n\tar_mining_stats:pause_performance_reports(120000),\n\treset_all_stats(),\n\n\tPeer1 = ar_test_node:peer_ip(peer1),\n\tPeer2 = ar_test_node:peer_ip(peer2),\n\tPeer3 = ar_test_node:peer_ip(peer3),\n\n\tFun(Peer1, 10),\n\tTotalStart1 = get_start({peer, Peer1, Stat, total}),\n\tCurrentStart1 = get_start({peer, Peer1, Stat, current}),\n\ttimer:sleep(1000),\n\tFun(Peer1, 5),\n\tFun(Peer1, 15),\n\t\n\tFun(Peer2, 1),\n\tTotalStart2 = get_start({peer, Peer2, Stat, total}),\n\tCurrentStart2 = get_start({peer, Peer2, Stat, current}),\n\tFun(Peer2, 19),\n\t\n\t?assert(TotalStart1 /= TotalStart2),\n\t?assert(CurrentStart1 /= CurrentStart2),\n\t?assertEqual(0.0, get_average_count_by_time({peer, Peer1, Stat, total}, TotalStart1)),\n\t?assertEqual(0.0, get_average_count_by_time({peer, Peer1, Stat, current}, CurrentStart1)),\n\t?assertEqual(0.0, get_average_count_by_time({peer, Peer2, Stat, total}, TotalStart2)),\n\t?assertEqual(0.0, get_average_count_by_time({peer, Peer2, Stat, current}, CurrentStart2)),\n\t?assertEqual(60.0, get_average_count_by_time({peer, Peer1, Stat, total}, TotalStart1 + 500)),\n\t?assertEqual(2.5, get_average_count_by_time({peer, Peer1, Stat, current}, CurrentStart1 + 12000)),\n\t?assertEqual(5.0, get_average_count_by_time({peer, Peer2, Stat, total}, TotalStart2 + 4000)),\n\t?assertEqual(80.0, 
get_average_count_by_time({peer, Peer2, Stat, current}, CurrentStart2 + 250)),\n\t?assertEqual(0.0, get_average_count_by_time({peer, Peer3, Stat, total}, TotalStart1 + 4000)),\n\t?assertEqual(0.0, get_average_count_by_time({peer, Peer3, Stat, current}, TotalStart1 + 250)),\n\n\t?assertEqual(0.0, get_average_samples_by_time({peer, Peer1, Stat, total}, TotalStart1)),\n\t?assertEqual(0.0, get_average_samples_by_time({peer, Peer1, Stat, current}, CurrentStart1)),\n\t?assertEqual(0.0, get_average_samples_by_time({peer, Peer2, Stat, total}, TotalStart2)),\n\t?assertEqual(0.0, get_average_samples_by_time({peer, Peer2, Stat, current}, CurrentStart2)),\n\t?assertEqual(6.0, get_average_samples_by_time({peer, Peer1, Stat, total}, TotalStart1 + 500)),\n\t?assertEqual(0.25, get_average_samples_by_time({peer, Peer1, Stat, current}, CurrentStart1 + 12000)),\n\t?assertEqual(0.5, get_average_samples_by_time({peer, Peer2, Stat, total}, TotalStart2 + 4000)),\n\t?assertEqual(8.0, get_average_samples_by_time({peer, Peer2, Stat, current}, CurrentStart2 + 250)),\n\t?assertEqual(0.0, get_average_samples_by_time({peer, Peer3, Stat, total}, TotalStart1 + 4000)),\n\t?assertEqual(0.0, get_average_samples_by_time({peer, Peer3, Stat, current}, TotalStart1 + 250)),\n\n\t?assertEqual(10.0, get_average_by_samples({peer, Peer1, Stat, total})),\n\t?assertEqual(10.0, get_average_by_samples({peer, Peer1, Stat, current})),\n\t?assertEqual(10.0, get_average_by_samples({peer, Peer2, Stat, total})),\n\t?assertEqual(10.0, get_average_by_samples({peer, Peer2, Stat, current})),\n\t?assertEqual(0.0, get_average_by_samples({peer, Peer3, Stat, total})),\n\t?assertEqual(0.0, get_average_by_samples({peer, Peer3, Stat, current})),\n\n\tNow = CurrentStart2 + 1000,\n\treset_count({peer, Peer1, Stat, current}, Now),\n\t?assertEqual(Now, get_start({peer, Peer1, Stat, current})),\n\t?assertEqual(60.0, get_average_count_by_time({peer, Peer1, Stat, total}, TotalStart1 + 500)),\n\t?assertEqual(0.0, get_average_count_by_time({peer, Peer1, Stat, current}, Now + 12000)),\n\t?assertEqual(5.0, get_average_count_by_time({peer, Peer2, Stat, total}, TotalStart2 + 4000)),\n\t?assertEqual(80.0, get_average_count_by_time({peer, Peer2, Stat, current}, CurrentStart2 + 250)),\n\t?assertEqual(0.0, get_average_count_by_time({peer, Peer3, Stat, total}, TotalStart1 + 4000)),\n\t?assertEqual(0.0, get_average_count_by_time({peer, Peer3, Stat, current}, CurrentStart1 + 250)),\n\n\t?assertEqual(6.0, get_average_samples_by_time({peer, Peer1, Stat, total}, TotalStart1 + 500)),\n\t?assertEqual(0.0, get_average_samples_by_time({peer, Peer1, Stat, current}, Now + 12000)),\n\t?assertEqual(0.5, get_average_samples_by_time({peer, Peer2, Stat, total}, TotalStart2 + 4000)),\n\t?assertEqual(8.0, get_average_samples_by_time({peer, Peer2, Stat, current}, CurrentStart2 + 250)),\n\t?assertEqual(0.0, get_average_samples_by_time({peer, Peer3, Stat, total}, TotalStart1 + 4000)),\n\t?assertEqual(0.0, get_average_samples_by_time({peer, Peer3, Stat, current}, CurrentStart1 + 250)),\n\n\t?assertEqual(10.0, get_average_by_samples({peer, Peer1, Stat, total})),\n\t?assertEqual(0.0, get_average_by_samples({peer, Peer1, Stat, current})),\n\t?assertEqual(10.0, get_average_by_samples({peer, Peer2, Stat, total})),\n\t?assertEqual(10.0, get_average_by_samples({peer, Peer2, Stat, current})),\n\t?assertEqual(0.0, get_average_by_samples({peer, Peer3, Stat, total})),\n\t?assertEqual(0.0, get_average_by_samples({peer, Peer3, Stat, current})),\n\n\treset_all_stats(),\n\t?assertEqual(0.0, 
get_average_count_by_time({peer, Peer1, Stat, total}, TotalStart1 + 500)),\n\t?assertEqual(0.0, get_average_count_by_time({peer, Peer1, Stat, current}, Now + 12000)),\n\t?assertEqual(0.0, get_average_count_by_time({peer, Peer2, Stat, total}, TotalStart2 + 4000)),\n\t?assertEqual(0.0, get_average_count_by_time({peer, Peer2, Stat, current}, CurrentStart2 + 250)),\n\t?assertEqual(0.0, get_average_count_by_time({peer, Peer3, Stat, total}, TotalStart1 + 4000)),\n\t?assertEqual(0.0, get_average_count_by_time({peer, Peer3, Stat, current}, CurrentStart1 + 250)),\n\n\t?assertEqual(0.0, get_average_samples_by_time({peer, Peer1, Stat, total}, TotalStart1 + 500)),\n\t?assertEqual(0.0, get_average_samples_by_time({peer, Peer1, Stat, current}, Now + 12000)),\n\t?assertEqual(0.0, get_average_samples_by_time({peer, Peer2, Stat, total}, TotalStart2 + 4000)),\n\t?assertEqual(0.0, get_average_samples_by_time({peer, Peer2, Stat, current}, CurrentStart2 + 250)),\n\t?assertEqual(0.0, get_average_samples_by_time({peer, Peer3, Stat, total}, TotalStart1 + 4000)),\n\t?assertEqual(0.0, get_average_samples_by_time({peer, Peer3, Stat, current}, CurrentStart1 + 250)),\n\n\t?assertEqual(0.0, get_average_by_samples({peer, Peer1, Stat, total})),\n\t?assertEqual(0.0, get_average_by_samples({peer, Peer1, Stat, current})),\n\t?assertEqual(0.0, get_average_by_samples({peer, Peer2, Stat, total})),\n\t?assertEqual(0.0, get_average_by_samples({peer, Peer2, Stat, current})),\n\t?assertEqual(0.0, get_average_by_samples({peer, Peer3, Stat, total})),\n\t?assertEqual(0.0, get_average_by_samples({peer, Peer3, Stat, current})).\n\ntest_h2_peer_stats() ->\n\tar_mining_stats:pause_performance_reports(120000),\n\treset_all_stats(),\n\n\tPeer1 = ar_test_node:peer_ip(peer1),\n\tPeer2 = ar_test_node:peer_ip(peer2),\n\tPeer3 = ar_test_node:peer_ip(peer3),\n\n\tar_mining_stats:h2_sent_to_peer(Peer1),\n\tar_mining_stats:h2_sent_to_peer(Peer1),\n\tar_mining_stats:h2_sent_to_peer(Peer1),\n\tar_mining_stats:h2_sent_to_peer(Peer2),\n\tar_mining_stats:h2_sent_to_peer(Peer2),\n\n\t?assertEqual(3, get_count({peer, Peer1, h2_to_peer, total})),\n\t?assertEqual(2, get_count({peer, Peer2, h2_to_peer, total})),\n\t?assertEqual(0, get_count({peer, Peer3, h2_to_peer, total})),\n\n\t?assertEqual(5, get_overall_total(peer, h2_to_peer, total)),\n\n\tar_mining_stats:h2_received_from_peer(Peer1),\n\tar_mining_stats:h2_received_from_peer(Peer1),\n\tar_mining_stats:h2_received_from_peer(Peer1),\n\tar_mining_stats:h2_received_from_peer(Peer2),\n\tar_mining_stats:h2_received_from_peer(Peer2),\n\n\t?assertEqual(3, get_count({peer, Peer1, h2_from_peer, total})),\n\t?assertEqual(2, get_count({peer, Peer2, h2_from_peer, total})),\n\t?assertEqual(0, get_count({peer, Peer3, h2_from_peer, total})),\n\n\t?assertEqual(5, get_overall_total(peer, h2_from_peer, total)),\n\n\treset_count({peer, Peer1, h2_to_peer, total}, 1000),\n\treset_count({peer, Peer2, h2_from_peer, total}, 1000),\n\n\t?assertEqual(0, get_count({peer, Peer1, h2_to_peer, total})),\n\t?assertEqual(2, get_count({peer, Peer2, h2_to_peer, total})),\n\t?assertEqual(0, get_count({peer, Peer3, h2_to_peer, total})),\n\t?assertEqual(3, get_count({peer, Peer1, h2_from_peer, total})),\n\t?assertEqual(0, get_count({peer, Peer2, h2_from_peer, total})),\n\t?assertEqual(0, get_count({peer, Peer3, h2_from_peer, total})),\n\n\t?assertEqual(2, get_overall_total(peer, h2_to_peer, total)),\n\t?assertEqual(3, get_overall_total(peer, h2_from_peer, total)),\n\n\treset_all_stats(),\n\n\t?assertEqual(0, get_count({peer, Peer1, 
h2_to_peer, total})),\n\t?assertEqual(0, get_count({peer, Peer2, h2_to_peer, total})),\n\t?assertEqual(0, get_count({peer, Peer3, h2_to_peer, total})),\n\t?assertEqual(0, get_count({peer, Peer1, h2_from_peer, total})),\n\t?assertEqual(0, get_count({peer, Peer2, h2_from_peer, total})),\n\t?assertEqual(0, get_count({peer, Peer3, h2_from_peer, total})),\n\n\t?assertEqual(0, get_overall_total(peer, h2_to_peer, total)),\n\t?assertEqual(0, get_overall_total(peer, h2_from_peer, total)).\n\ntest_optimal_stats_poa1_multiple_1() ->\n\ttest_optimal_stats({spora_2_6, <<\"MINING\">>}, 1),\n\ttest_optimal_stats({composite, <<\"MINING\">>, 1}, 1),\n\ttest_optimal_stats({composite, <<\"MINING\">>, 2}, 1).\n\ntest_optimal_stats_poa1_multiple_2() ->\n\ttest_optimal_stats({spora_2_6, <<\"MINING\">>}, 2),\n\ttest_optimal_stats({composite, <<\"MINING\">>, 1}, 2),\n\ttest_optimal_stats({composite, <<\"MINING\">>, 2}, 2).\n\ntest_optimal_stats(Packing, PoA1Multiplier) ->\n\tPackingDifficulty = ar_mining_server:get_packing_difficulty(Packing),\n\tRecallRangeSize = case PackingDifficulty of\n\t\t0 ->\n\t\t\t0.5;\n\t\t1 ->\n\t\t\t0.125;\n\t\t2 ->\n\t\t\t0.0625\n\tend,\n\t?assertEqual(0.0, \n\t\toptimal_partition_read_mibps(\n\t\t\tPacking, undefined, ar_block:partition_size(),\n\t\t\tfloor(10 * ar_block:partition_size()), floor(10 * ar_block:partition_size()))),\n\t?assertEqual(RecallRangeSize * 2, \n\t\toptimal_partition_read_mibps(\n\t\t\tPacking, 1.0, ar_block:partition_size(),\n\t\t\tfloor(10 * ar_block:partition_size()), floor(10 * ar_block:partition_size()))),\n\t?assertEqual(RecallRangeSize, \n\t\toptimal_partition_read_mibps(\n\t\t\tPacking, 2.0, ar_block:partition_size(),\n\t\t\tfloor(10 * ar_block:partition_size()), floor(10 * ar_block:partition_size()))),\n\t?assertEqual(RecallRangeSize / 2, \n\t\toptimal_partition_read_mibps(\n\t\t\tPacking, 1.0, floor(0.25 * ar_block:partition_size()),\n\t\t\tfloor(10 * ar_block:partition_size()), floor(10 * ar_block:partition_size()))),\n\t?assertEqual(RecallRangeSize * 1.6, \n\t\toptimal_partition_read_mibps(\n\t\t\tPacking, 1.0, ar_block:partition_size(),\n\t\t\tfloor(6 * ar_block:partition_size()), floor(10 * ar_block:partition_size()))),\n\n\t{FullWeave, SlowVDF, SmallPartition, SmallWeave} = case PoA1Multiplier of\n\t\t1 -> {800.0, 400.0, 200.0, 640.0};\n\t\t2 -> {600.0, 300.0, 150.0, 440.0}\n\tend,\n\n\t?assertEqual(0.0, \n\t\toptimal_partition_hash_hps(\n\t\t\tPoA1Multiplier, undefined, ar_block:partition_size(),\n\t\t\tfloor(10 * ar_block:partition_size()), floor(10 * ar_block:partition_size()))),\n\t?assertEqual(FullWeave, \n\t\toptimal_partition_hash_hps(\n\t\t\tPoA1Multiplier, 1.0, ar_block:partition_size(),\n\t\t\tfloor(10 * ar_block:partition_size()), floor(10 * ar_block:partition_size()))),\n\t?assertEqual(SlowVDF, \n\t\toptimal_partition_hash_hps(\n\t\t\tPoA1Multiplier, 2.0, ar_block:partition_size(),\n\t\t\tfloor(10 * ar_block:partition_size()), floor(10 * ar_block:partition_size()))),\n\t?assertEqual(SmallPartition, \n\t\toptimal_partition_hash_hps(\n\t\t\tPoA1Multiplier, 1.0, floor(0.25 * ar_block:partition_size()),\n\t\t\tfloor(10 * ar_block:partition_size()), floor(10 * ar_block:partition_size()))),\n\t?assertEqual(SmallWeave, \n\t\toptimal_partition_hash_hps(\n\t\t\tPoA1Multiplier, 1.0, ar_block:partition_size(),\n\t\t\tfloor(6 * ar_block:partition_size()), floor(10 * ar_block:partition_size()))).\n\ntest_report_poa1_multiple_1() ->\n\ttest_report({spora_2_6, <<\"MINING\">>}, {spora_2_6, <<\"PACKING\">>}, 1),\n\ttest_report({composite, 
<<\"MINING\">>, 1}, {composite, <<\"PACKING\">>, 1}, 1),\n\ttest_report({composite, <<\"MINING\">>, 2}, {composite, <<\"PACKING\">>, 2}, 1).\n\ntest_report_poa1_multiple_2() ->\n\ttest_report({spora_2_6, <<\"MINING\">>}, {spora_2_6, <<\"PACKING\">>}, 2),\n\ttest_report({composite, <<\"MINING\">>, 1}, {composite, <<\"PACKING\">>, 1}, 2),\n\ttest_report({composite, <<\"MINING\">>, 2}, {composite, <<\"PACKING\">>, 2}, 2).\n\ntest_report(Mining, Packing, PoA1Multiplier) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tMiningAddress = case Mining of\n\t\t{spora_2_6, Addr} ->\n\t\t\tAddr;\n\t\t{composite, Addr, _} ->\n\t\t\tAddr\n\tend,\n\tPackingDifficulty = ar_mining_server:get_packing_difficulty(Mining),\n\tDifficultyDivisor = case PackingDifficulty of\n\t\t0 ->\n\t\t\t1.0;\n\t\t1 ->\n\t\t\t8.0;\n\t\t2 ->\n\t\t\t4.0\n\tend,\n\tRecallRangeSize = case PackingDifficulty of\n\t\t0 ->\n\t\t\t0.5;\n\t\t1 ->\n\t\t\t0.125;\n\t\t2 ->\n\t\t\t0.0625\n\tend,\n\tStorageModules = [\n\t\t%% partition 1\n\t\t{floor(0.1 * ar_block:partition_size()), 10, unpacked},\n\t\t{floor(0.1 * ar_block:partition_size()), 10, Mining},\n\t\t{floor(0.1 * ar_block:partition_size()), 10, Packing},\n\t\t{floor(0.3 * ar_block:partition_size()), 4, unpacked},\n\t\t{floor(0.3 * ar_block:partition_size()), 4, Mining},\n\t\t{floor(0.3 * ar_block:partition_size()), 4, Packing},\n\t\t{floor(0.2 * ar_block:partition_size()), 8, unpacked},\n\t\t{floor(0.2 * ar_block:partition_size()), 8, Mining},\n\t\t{floor(0.2 * ar_block:partition_size()), 8, Packing},\n\t\t%% partition 2\n\t\t{ar_block:partition_size(), 2, unpacked},\n\t\t{ar_block:partition_size(), 2, Mining},\n\t\t{ar_block:partition_size(), 2, Packing}\n\t],\n\t\n\ttry\t\n\t\tarweave_config:set_env(Config#config{\n\t\t\tstorage_modules = StorageModules,\n\t\t\tmining_addr = MiningAddress\n\t\t}),\n\t\tar_mining_stats:pause_performance_reports(120000),\n\t\treset_all_stats(),\n\t\tPartitions = [\n\t\t\t{1, MiningAddress, 0},\n\t\t\t{2, MiningAddress, 0},\n\t\t\t{3, MiningAddress, 0}\n\t\t],\n\t\tPeer1 = ar_test_node:peer_ip(peer1),\n\t\tPeer2 = ar_test_node:peer_ip(peer2),\n\t\tPeer3 = ar_test_node:peer_ip(peer3),\n\t\tPeers = [Peer1, Peer2, Peer3],\n\n\t\tNow = erlang:monotonic_time(millisecond),\n\t\tWeaveSize = floor(10 * ar_block:partition_size()),\n\t\tets:insert(node_state, [{weave_size, WeaveSize}]),\n\t\tar_mining_stats:set_storage_module_data_size(\n\t\t\tar_storage_module:id({floor(0.1 * ar_block:partition_size()), 10, Mining}),\n\t\t\tMining, 1, floor(0.1 * ar_block:partition_size()), 10,\n\t\t\tfloor(0.1 * ar_block:partition_size())),\n\t\tar_mining_stats:set_storage_module_data_size(\n\t\t\tar_storage_module:id({floor(0.3 * ar_block:partition_size()), 4, Mining}),\n\t\t\tMining, 1, floor(0.3 * ar_block:partition_size()), 4,\n\t\t\tfloor(0.2 * ar_block:partition_size())),\n\t\tar_mining_stats:set_storage_module_data_size(\n\t\t\tar_storage_module:id({floor(0.2 * ar_block:partition_size()), 8, Mining}),\n\t\t\tMining, 1, floor(0.2 * ar_block:partition_size()), 8,\n\t\t\tfloor(0.05 * ar_block:partition_size())),\t\n\t\tar_mining_stats:set_storage_module_data_size(\n\t\t\tar_storage_module:id({ar_block:partition_size(), 2, Mining}),\n\t\t\tMining, 2, ar_block:partition_size(), 2, floor(0.25 * ar_block:partition_size())),\n\t\tvdf_computed(Now),\n\t\tvdf_computed(Now),\n\t\tvdf_computed(Now),\n\t\th1_solution(Now),\n\t\th2_solution(Now),\n\t\th2_solution(Now),\n\t\tblock_found(Now),\n\t\tchunks_read(1, 1, Now),\n\t\tchunks_read(1, 2, Now),\n\t\tchunks_read(2, 2, 
Now),\n\t\th1_computed(1, 1, Now),\n\t\th1_computed(1, 2, Now),\n\t\th2_computed(1, 2, Now),\n\t\th1_computed(2, 4, Now),\n\t\th1_sent_to_peer(Peer1, 10, Now),\n\t\th1_sent_to_peer(Peer1, 5, Now),\n\t\th1_sent_to_peer(Peer1, 15, Now),\n\t\th1_sent_to_peer(Peer2, 1, Now),\n\t\th1_sent_to_peer(Peer2, 19, Now),\n\t\th1_received_from_peer(Peer2, 10, Now),\n\t\th1_received_from_peer(Peer2, 5, Now),\n\t\th1_received_from_peer(Peer2, 15, Now),\n\t\th1_received_from_peer(Peer1, 1, Now),\n\t\th1_received_from_peer(Peer1, 19, Now),\n\t\th2_sent_to_peer(Peer1, Now),\n\t\th2_sent_to_peer(Peer1, Now),\n\t\th2_sent_to_peer(Peer1, Now),\n\t\th2_sent_to_peer(Peer2, Now),\n\t\th2_sent_to_peer(Peer2, Now),\n\t\th2_received_from_peer(Peer1, Now),\n\t\th2_received_from_peer(Peer1, Now),\n\t\th2_received_from_peer(Peer2, Now),\n\t\th2_received_from_peer(Peer2, Now),\n\t\th2_received_from_peer(Peer2, Now),\n\t\t\n\t\tReport1 = generate_report(0, Mining, [], [], WeaveSize, Now+1000),\n\t\t?assertEqual(#report{ now = Now+1000 }, Report1),\n\t\tlog_report(format_report(Report1, WeaveSize)),\n\n\t\tReport2 = generate_report(0, Mining, Partitions, Peers, WeaveSize, Now+1000),\n\t\tReportString = format_report(Report2, WeaveSize),\n\t\tlog_report(ReportString),\n\n\t\t{\n\t\t\tTotalHash, Partition1Hash, Partition2Hash,\n\t\t\tTotalOptimal, Partition1Optimal, Partition2Optimal\n\t\t} = case PoA1Multiplier of\n\t\t\t1 -> {9.0, 5.0, 4.0, 763.1992309570705, 445.19924812320824, 317.9999828338623};\n\t\t\t2 -> {5.5, 3.5, 2.0, 403.19957427982445, 235.19959144596214, 167.9999828338623}\n\t\tend,\n\n\t\t?assertEqual(#report{ \n\t\t\tnow = Now+1000,\n\t\t\tvdf_speed = 1.0 / 3.0,\n\t\t\th1_solution = 1,\n\t\t\th2_solution = 2,\n\t\t\tconfirmed_block = 1,\n\t\t\ttotal_data_size = \n\t\t\t\tfloor(0.1 * ar_block:partition_size()) + floor(0.2 * ar_block:partition_size()) +\n\t\t\t\tfloor(0.05 * ar_block:partition_size()) + floor(0.25 * ar_block:partition_size()),\n\t\t\toptimal_overall_read_mibps = 0.9539990386963382 * 2 * RecallRangeSize,\n\t\t\toptimal_overall_hash_hps = TotalOptimal,\n\t\t\taverage_read_mibps = 1.25,\n\t\t\tcurrent_read_mibps = 1.25,\n\t\t\taverage_hash_hps = TotalHash / DifficultyDivisor,\n\t\t\tcurrent_hash_hps = TotalHash / DifficultyDivisor,\n\t\t\taverage_h1_to_peer_hps = 50.0,\n\t\t\tcurrent_h1_to_peer_hps = 50.0,\n\t\t\taverage_h1_from_peer_hps = 50.0,\n\t\t\tcurrent_h1_from_peer_hps = 50.0,\n\t\t\ttotal_h2_to_peer = 5,\n\t\t\ttotal_h2_from_peer = 5,\n\t\t\tpartitions = [\n\t\t\t\t#partition_report{\n\t\t\t\t\tpartition_number = 3,\n\t\t\t\t\tdata_size = 0,\n\t\t\t\t\toptimal_read_mibps = 0.0,\n\t\t\t\t\taverage_read_mibps = 0.0,\n\t\t\t\t\tcurrent_read_mibps = 0.0,\n\t\t\t\t\toptimal_hash_hps = 0.0,\n\t\t\t\t\taverage_hash_hps = 0.0,\n\t\t\t\t\tcurrent_hash_hps = 0.0\n\t\t\t\t},\n\t\t\t\t#partition_report{\n\t\t\t\t\tpartition_number = 2,\n\t\t\t\t\tdata_size = floor(0.25 * ar_block:partition_size()),\n\t\t\t\t\toptimal_read_mibps = 0.3974999785423279 * 2 * RecallRangeSize,\n\t\t\t\t\taverage_read_mibps = 0.5,\n\t\t\t\t\tcurrent_read_mibps = 0.5,\n\t\t\t\t\toptimal_hash_hps = Partition2Optimal,\n\t\t\t\t\taverage_hash_hps = Partition2Hash / DifficultyDivisor,\n\t\t\t\t\tcurrent_hash_hps = Partition2Hash / DifficultyDivisor\n\t\t\t\t},\n\t\t\t\t#partition_report{\n\t\t\t\t\tpartition_number = 1,\n\t\t\t\t\tdata_size = 734002,\n\t\t\t\t\toptimal_read_mibps = 0.5564990601540103 * 2 * RecallRangeSize,\n\t\t\t\t\taverage_read_mibps = 0.75,\n\t\t\t\t\tcurrent_read_mibps = 0.75,\n\t\t\t\t\toptimal_hash_hps = 
Partition1Optimal,\n\t\t\t\t\taverage_hash_hps = Partition1Hash / DifficultyDivisor,\n\t\t\t\t\tcurrent_hash_hps = Partition1Hash / DifficultyDivisor\n\t\t\t\t}\n\t\t\t],\n\t\t\tpeers = [\n\t\t\t\t#peer_report{\n\t\t\t\t\tpeer = Peer3,\n\t\t\t\t\taverage_h1_to_peer_hps = 0.0,\n\t\t\t\t\tcurrent_h1_to_peer_hps = 0.0,\n\t\t\t\t\taverage_h1_from_peer_hps = 0.0,\n\t\t\t\t\tcurrent_h1_from_peer_hps = 0.0,\n\t\t\t\t\ttotal_h2_to_peer = 0,\n\t\t\t\t\ttotal_h2_from_peer = 0\n\t\t\t\t},\n\t\t\t\t#peer_report{\n\t\t\t\t\tpeer = Peer2,\n\t\t\t\t\taverage_h1_to_peer_hps = 20.0,\n\t\t\t\t\tcurrent_h1_to_peer_hps = 20.0,\n\t\t\t\t\taverage_h1_from_peer_hps = 30.0,\n\t\t\t\t\tcurrent_h1_from_peer_hps = 30.0,\n\t\t\t\t\ttotal_h2_to_peer = 2,\n\t\t\t\t\ttotal_h2_from_peer = 3\n\t\t\t\t},\n\t\t\t\t#peer_report{\n\t\t\t\t\tpeer = Peer1,\n\t\t\t\t\taverage_h1_to_peer_hps = 30.0,\n\t\t\t\t\tcurrent_h1_to_peer_hps = 30.0,\n\t\t\t\t\taverage_h1_from_peer_hps = 20.0,\n\t\t\t\t\tcurrent_h1_from_peer_hps = 20.0,\n\t\t\t\t\ttotal_h2_to_peer = 3,\n\t\t\t\t\ttotal_h2_from_peer = 2\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\tReport2)\n\tafter\n\t\tarweave_config:set_env(Config)\n\tend.\n"
  },
  {
    "path": "apps/arweave/src/ar_mining_sup.erl",
    "content": "-module(ar_mining_sup).\n\n-behaviour(supervisor).\n\n-export([start_link/0]).\n\n-export([init/1]).\n\n-include_lib(\"arweave/include/ar_sup.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nstart_link() ->\n\tsupervisor:start_link({local, ?MODULE}, ?MODULE, []).\n\n%% ===================================================================\n%% Supervisor callbacks.\n%% ===================================================================\n\ninit([]) ->\n\t%% We'll create workers for all configured parititions - even those partitions that\n\t%% currently exceed the weave size. Those workers will just lie dormant until the\n\t%% weave size grows to meet them.\n\tMiningWorkers = lists:map(\n\t\tfun({Partition, _MiningAddr, PackingDifficulty}) ->\n\t\t\t?CHILD_WITH_ARGS(\n\t\t\t\tar_mining_worker, worker, ar_mining_worker:name(Partition, PackingDifficulty),\n\t\t\t\t\t[Partition, PackingDifficulty])\n\t\tend,\n\t\tar_mining_io:get_partitions(infinity)\n\t),\n\tChildren = [\n\t\t?CHILD(ar_mining_stats, worker),\n\t\t?CHILD(ar_mining_hash, worker),\n\t\t?CHILD(ar_mining_io, worker)\n\t] ++ MiningWorkers ++ [?CHILD(ar_mining_server, worker)],\n\t{ok, {{one_for_one, 5, 10}, Children}}.\n"
  },
  {
    "path": "apps/arweave/src/ar_mining_worker.erl",
    "content": "-module(ar_mining_worker).\n\n-behaviour(gen_server).\n\n-export([start_link/2, name/2, reset_mining_session/2, set_sessions/2, chunks_read/5, computed_hash/5,\n\t\tset_difficulty/2, set_cache_limits/3, add_task/3, garbage_collect/1]).\n\n-export([init/1, handle_cast/2, handle_call/3, handle_info/2, terminate/2]).\n\n-include(\"ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include(\"ar_mining.hrl\").\n-include(\"ar_mining_cache.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-record(state, {\n\tname = not_set,\n\tpartition_number = not_set,\n\tdiff_pair = not_set,\n\tpacking_difficulty = 0,\n\ttask_queue = gb_sets:new(),\n\tchunk_cache = undefined,\n\tvdf_queue_limit = 0,\n\tlatest_vdf_step_number = 0,\n\tis_pool_client = false,\n\th1_hashes = #{},\n\th2_hashes = #{}\n}).\n\n-define(TASK_CHECK_INTERVAL_MS, 200).\n-define(STATUS_CHECK_INTERVAL_MS, 5000).\n-define(REPORT_CHUNK_CACHE_METRICS_INTERVAL_MS, 30000).\n-define(CACHE_KEY(CacheRef, Nonce), {CacheRef, Nonce}).\n\n%%%===================================================================\n%%% Messages\n%%%===================================================================\n\n-define(MSG_RESET_MINING_SESSION(DiffPair), {reset_mining_session, DiffPair}).\n-define(MSG_SET_SESSIONS(ActiveSessions), {set_sessions, ActiveSessions}).\n-define(MSG_ADD_TASK(Task), {add_task, Task}).\n-define(MSG_SET_DIFFICULTY(DiffPair), {set_difficulty, DiffPair}).\n-define(MSG_SET_CACHE_LIMITS(CacheLimitBytes, VDFQueueLimit), {set_cache_limits, CacheLimitBytes, VDFQueueLimit}).\n-define(MSG_CHECK_WORKER_STATUS, {check_worker_status}).\n-define(MSG_HANDLE_TASK, {handle_task}).\n-define(MSG_GARBAGE_COLLECT(StartTime, GCResult), {garbage_collect, StartTime, GCResult}).\n-define(MSG_GARBAGE_COLLECT, {garbage_collect}).\n-define(MSG_FETCHED_LAST_MOMENT_PROOF(Any), {fetched_last_moment_proof, Any}).\n-define(MSG_REPORT_CHUNK_CACHE_METRICS, {report_chunk_cache_metrics}).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Start the gen_server.\nstart_link(Partition, PackingDifficulty) ->\n\tName = name(Partition, PackingDifficulty),\n\tgen_server:start_link({local, Name}, ?MODULE, {Partition, PackingDifficulty}, []).\n\n-spec name(Partition :: non_neg_integer(), PackingDifficulty :: non_neg_integer()) -> atom().\nname(Partition, PackingDifficulty) ->\n\tlist_to_atom(lists:flatten([\"ar_mining_worker_\", integer_to_list(Partition), \"_\", integer_to_list(PackingDifficulty)])).\n\n-spec reset_mining_session(Worker :: pid(), DiffPair :: {non_neg_integer(), non_neg_integer()}) -> ok.\nreset_mining_session(Worker, DiffPair) ->\n\tgen_server:cast(Worker, ?MSG_RESET_MINING_SESSION(DiffPair)).\n\n-spec set_sessions(Worker :: pid(), ActiveSessions :: [ar_nonce_limiter:session_key()]) -> ok.\nset_sessions(Worker, ActiveSessions) ->\n\tgen_server:cast(Worker, ?MSG_SET_SESSIONS(ActiveSessions)).\n\n-spec add_task(Worker :: pid(), TaskType :: atom(), Candidate :: #mining_candidate{}) -> ok.\nadd_task(Worker, TaskType, Candidate) ->\n\tadd_task(Worker, TaskType, Candidate, []).\n\n-spec add_task(Worker :: pid(), TaskType :: atom(), Candidate :: #mining_candidate{}, ExtraArgs :: [term()]) -> ok.\nadd_task(Worker, TaskType, Candidate, ExtraArgs) ->\n\tgen_server:cast(Worker, ?MSG_ADD_TASK({TaskType, Candidate, ExtraArgs})).\n\n-spec add_delayed_task(Worker :: pid(), TaskType :: atom(), Candidate :: 
#mining_candidate{}) -> ok.\nadd_delayed_task(Worker, TaskType, Candidate) ->\n\t%% Delay task by random amount between ?TASK_CHECK_INTERVAL_MS and 2*?TASK_CHECK_INTERVAL_MS\n\t%% The reason for the randomization to avoid a glut tasks to all get added at the same time -\n\t%% in particular when the chunk cache fills up it's possible for all queued compute_h0 tasks\n\t%% to be delayed at about the same time.\n\tDelay = rand:uniform(?TASK_CHECK_INTERVAL_MS) + ?TASK_CHECK_INTERVAL_MS,\n\tar_util:cast_after(Delay, Worker, ?MSG_ADD_TASK({TaskType, Candidate, []})).\n\n-spec chunks_read(\n\tWorker :: pid(),\n\tWhichChunk :: atom(),\n\tCandidate :: #mining_candidate{},\n\tRangeStart :: non_neg_integer(),\n\tChunkOffsets :: [non_neg_integer()]\n) -> ok.\nchunks_read(Worker, WhichChunk, Candidate, RangeStart, ChunkOffsets) ->\n\tadd_task(Worker, WhichChunk, Candidate, [RangeStart, ChunkOffsets]).\n\n%% @doc Callback from the hashing threads when a hash is computed\n-spec computed_hash(\n\tWorker :: pid(),\n\tTaskType :: atom(),\n\tHash :: binary(),\n\tPreimage :: binary(),\n\tCandidate :: #mining_candidate{}\n) -> ok.\ncomputed_hash(Worker, computed_h0, H0, undefined, Candidate) ->\n\tadd_task(Worker, computed_h0, Candidate#mining_candidate{ h0 = H0 });\ncomputed_hash(Worker, computed_h1, H1, Preimage, Candidate) ->\n\tadd_task(Worker, computed_h1, Candidate#mining_candidate{ h1 = H1, preimage = Preimage });\ncomputed_hash(Worker, computed_h2, H2, Preimage, Candidate) ->\n\tadd_task(Worker, computed_h2, Candidate#mining_candidate{ h2 = H2, preimage = Preimage }).\n\n%% @doc Set the new mining difficulty. We do not recalculate it inside the mining\n%% server or worker because we want to completely detach the mining server from the block\n%% ordering. The previous block is chosen only after the mining solution is found (if\n%% we choose it in advance we may miss a better option arriving in the process).\n%% Also, a mining session may (in practice, almost always will) span several blocks.\n-spec set_difficulty(Worker :: pid(), DiffPair :: {non_neg_integer(), non_neg_integer()}) -> ok.\nset_difficulty(Worker, DiffPair) ->\n\tgen_server:cast(Worker, ?MSG_SET_DIFFICULTY(DiffPair)).\n\n-spec set_cache_limits(Worker :: pid(), CacheLimitBytes :: non_neg_integer(), VDFQueueLimit :: non_neg_integer()) -> ok.\nset_cache_limits(Worker, CacheLimitBytes, VDFQueueLimit) ->\n\tgen_server:cast(Worker, ?MSG_SET_CACHE_LIMITS(CacheLimitBytes, VDFQueueLimit)).\n\n-spec garbage_collect(Worker :: pid()) -> ok.\ngarbage_collect(Worker) ->\n\tgen_server:cast(Worker, ?MSG_GARBAGE_COLLECT).\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\n\ninit({Partition, PackingDifficulty}) ->\n\tName = name(Partition, PackingDifficulty),\n\t?LOG_INFO([{event, mining_worker_started},\n\t\t{worker, Name}, {pid, self()}, {partition, Partition}]),\n\tChunkCache = ar_mining_cache:new(Name),\n\tState0 = #state{\n\t\tname = Name,\n\t\tchunk_cache = ChunkCache,\n\t\tpartition_number = Partition,\n\t\tis_pool_client = ar_pool:is_client(),\n\t\tpacking_difficulty = PackingDifficulty\n\t},\n\tgen_server:cast(self(), ?MSG_HANDLE_TASK),\n\tgen_server:cast(self(), ?MSG_CHECK_WORKER_STATUS),\n\tgen_server:cast(self(), ?MSG_REPORT_CHUNK_CACHE_METRICS),\n\t{ok, report_chunk_cache_metrics(State0)}.\n\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {request, 
Request}]),\n\t{reply, ok, State}.\n\nhandle_cast(?MSG_SET_CACHE_LIMITS(CacheLimitBytes, VDFQueueLimit), State) ->\n\tState1 = State#state{\n\t\tchunk_cache = ar_mining_cache:set_limit(CacheLimitBytes, State#state.chunk_cache),\n\t\tvdf_queue_limit = VDFQueueLimit\n\t},\n\t{noreply, State1};\n\nhandle_cast(?MSG_SET_DIFFICULTY(DiffPair), State) ->\n\tState1 = State#state{ diff_pair = DiffPair },\n\t{noreply, State1};\n\nhandle_cast(?MSG_RESET_MINING_SESSION(DiffPair), State) ->\n\tState1 = update_sessions([], State),\n\tState2 = State1#state{ diff_pair = DiffPair },\n\t{noreply, State2};\n\nhandle_cast(?MSG_SET_SESSIONS(ActiveSessions), State) ->\n\tState1 = update_sessions(ActiveSessions, State),\n\t{noreply, State1};\n\nhandle_cast(?MSG_ADD_TASK({TaskType, Candidate, _ExtraArgs} = Task), State) ->\n\tcase is_session_valid(State, Candidate) of\n\t\ttrue ->\n\t\t\tState1 = add_task(Task, State),\n\t\t\t{noreply, State1};\n\t\tfalse ->\n\t\t\tlog_debug(mining_debug_add_stale_task, Candidate, State, [\n\t\t\t\t{task, TaskType},\n\t\t\t\t{active_sessions,\n\t\t\t\t\tar_mining_server:encode_sessions(ar_mining_cache:get_sessions(State#state.chunk_cache))}]),\n\t\t\t{noreply, State}\n\tend;\n\nhandle_cast(?MSG_HANDLE_TASK, #state{ task_queue = Q } = State) ->\n\tcase gb_sets:is_empty(Q) of\n\t\ttrue ->\n\t\t\tar_util:cast_after(?TASK_CHECK_INTERVAL_MS, self(), ?MSG_HANDLE_TASK),\n\t\t\t{noreply, State};\n\t\t_ ->\n\t\t\tgen_server:cast(self(), ?MSG_HANDLE_TASK),\n\t\t\t{{_Priority, _ID, {TaskType, Candidate, _ExtraArgs} = Task}, Q2} = gb_sets:take_smallest(Q),\n\t\t\tprometheus_gauge:dec(mining_server_task_queue_len, [TaskType]),\n\t\t\tcase is_session_valid(State, Candidate) of\n\t\t\t\ttrue ->\n\t\t\t\t\tState1 = handle_task(Task, State#state{ task_queue = Q2 }),\n\t\t\t\t\t{noreply, State1};\n\t\t\t\tfalse ->\n\t\t\t\t\tlog_debug(mining_debug_handle_stale_task, Candidate, State, [\n\t\t\t\t\t\t{task, TaskType},\n\t\t\t\t\t\t{active_sessions,\n\t\t\t\t\t\t\tar_mining_server:encode_sessions(ar_mining_cache:get_sessions(State#state.chunk_cache))}]),\n\t\t\t\t\t{noreply, State}\n\t\t\tend\n\tend;\n\nhandle_cast(?MSG_CHECK_WORKER_STATUS, State) ->\n\tmaybe_warn_about_lag(State#state.task_queue, State#state.name),\n\tar_util:cast_after(?STATUS_CHECK_INTERVAL_MS, self(), ?MSG_CHECK_WORKER_STATUS),\n\t{noreply, State};\n\nhandle_cast(?MSG_GARBAGE_COLLECT, State) ->\n\terlang:garbage_collect(self(), [{async, erlang:monotonic_time()}]),\n\t{noreply, State};\n\nhandle_cast(?MSG_REPORT_CHUNK_CACHE_METRICS, State) ->\n\tar_util:cast_after(?REPORT_CHUNK_CACHE_METRICS_INTERVAL_MS, self(), ?MSG_REPORT_CHUNK_CACHE_METRICS),\n\t{noreply, report_chunk_cache_metrics(State)};\n\nhandle_cast(Cast, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {cast, Cast}]),\n\t{noreply, State}.\n\nhandle_info(?MSG_GARBAGE_COLLECT(StartTime, GCResult), State) ->\n\tEndTime = erlang:monotonic_time(),\n\tElapsedTime = erlang:convert_time_unit(EndTime-StartTime, native, millisecond),\n\tcase GCResult == false orelse ElapsedTime > ?GC_LOG_THRESHOLD of\n\t\ttrue ->\n\t\t\tlog_debug(mining_debug_garbage_collect, State, \n\t\t\t\t[{pid, self()}, {gc_time, ElapsedTime}, {gc_result, GCResult}]);\n\t\tfalse ->\n\t\t\tok\n\tend,\n\t{noreply, State};\n\nhandle_info(?MSG_FETCHED_LAST_MOMENT_PROOF(_), State) ->\n\t%% This is a no-op to handle \"slow\" response from peers that were queried by `fetch_poa_from_peers`\n\t%% Only the first peer to respond with a PoA will be handled, all other responses will fall through to 
here\n\t%% an be ignored.\n\t{noreply, State};\n\nhandle_info(Message, State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {module, ?MODULE}, {message, Message}]),\n\t{noreply, State}.\n\nterminate(Reason, _State) ->\n\t?LOG_INFO([{module, ?MODULE},{pid, self()},{callback, terminate},{reason, Reason}]),\n\tok.\n\n%%%===================================================================\n%%% Mining tasks.\n%%%===================================================================\n\nadd_task({TaskType, Candidate, _ExtraArgs} = Task, State) ->\n\t#state{ task_queue = Q } = State,\n\tStepNumber = Candidate#mining_candidate.step_number,\n\tQ2 = gb_sets:insert({priority(TaskType, StepNumber), make_ref(), Task}, Q),\n\tprometheus_gauge:inc(mining_server_task_queue_len, [TaskType]),\n\tprometheus_gauge:inc(mining_server_tasks, [TaskType]),\n\tState#state{ task_queue = Q2 }.\n\n-spec handle_task(\n\tTask :: {\n\t\tEventType :: compute_h0 | computed_h0 | chunk1 | chunk2 | computed_h1 | computed_h2 | compute_h2_for_peer,\n\t\tCandidate :: #mining_candidate{},\n\t\tExtraArgs :: term()\n\t},\n\tState :: #state{}\n) -> State :: #state{}.\n\n%% @doc Handle the `compute_h0` task.\n%% Indicates that the VDF step has been computed.\nhandle_task({compute_h0, Candidate, _ExtraArgs}, State) ->\n\t#state{\n\t\tlatest_vdf_step_number = LatestVDFStepNumber,\n\t\tvdf_queue_limit = VDFQueueLimit\n\t} = State,\n\t#mining_candidate{ step_number = StepNumber } = Candidate,\n\tState1 = report_and_reset_hashes(State),\n\t% Only mine recent VDF Steps\n\tcase StepNumber >= LatestVDFStepNumber - VDFQueueLimit of\n\t\ttrue ->\n\t\t\t%% Try to reserve the cache space for both partitions, as we do not know if we have both, one, or none.\n\t\t\tcase try_to_reserve_cache_range_space(2, Candidate#mining_candidate.session_key, State1) of\n\t\t\t\t{true, State2} ->\n\t\t\t\t\t%% Cache space reserved, compute h0.\n\t\t\t\t\tar_mining_hash:compute_h0(self(), Candidate),\n\t\t\t\t\tState2#state{ latest_vdf_step_number = max(StepNumber, LatestVDFStepNumber) };\n\t\t\t\tfalse ->\n\t\t\t\t\t%% We don't have enough cache space to read the recall ranges, so we'll try again later.\n\t\t\t\t\tadd_delayed_task(self(), compute_h0, Candidate),\n\t\t\t\t\tState1#state{ latest_vdf_step_number = max(StepNumber, LatestVDFStepNumber) }\n\t\t\tend;\n\t\tfalse ->\n\t\t\tState1\n\tend;\n\n%% @doc Handle the `computed_h0` task.\n%% Indicates that the hash for the VDF step has been computed.\nhandle_task({computed_h0, Candidate, _ExtraArgs}, State) ->\n\t#mining_candidate{\n\t\th0 = H0, partition_number = Partition1, partition_upper_bound = PartitionUpperBound\n\t} = Candidate,\n\t{RecallRange1Start, RecallRange2Start} = ar_mining_server:get_recall_range(H0, Partition1, PartitionUpperBound),\n\tPartition2 = ar_node:get_partition_number(RecallRange2Start),\n\tCandidate1 = generate_cache_ref(Candidate#mining_candidate{ partition_number2 = Partition2 }),\n\t%% Check if the recall ranges are readable to avoid reserving cache space for non-existent data.\n\tRange1Exists = ar_mining_io:is_recall_range_readable(Candidate1, RecallRange1Start),\n\tRange2Exists = ar_mining_io:is_recall_range_readable(Candidate1, RecallRange2Start),\n\n\t%% We already reserved the cache space for both partitions, so we need to release the reserved space\n\t%% if we're missing one or both of the recall ranges.\n\tcase {Range1Exists, Range2Exists} of\n\t\t{true, true} ->\n\t\t\t%% Both recall ranges are readable, no release needed.\n\t\t\t%% Read the recall ranges; the result of the read 
will be reported by the `chunk1` and `chunk2` tasks.\n\t\t\tar_mining_io:read_recall_range(chunk1, self(), Candidate1, RecallRange1Start),\n\t\t\tar_mining_io:read_recall_range(chunk2, self(), Candidate1, RecallRange2Start),\n\t\t\tState;\n\t\t{true, false} ->\n\t\t\t%% Only the first recall range is readable, so we need to release the reserved space for the second\n\t\t\t%% recall range.\n\t\t\tState1 = release_cache_range_space(1, Candidate1#mining_candidate.session_key, State),\n\t\t\t%% Mark second recall range as failed, not to wait for it to arrive.\n\t\t\tState2 = mark_recall_range_failed(chunk2, Candidate1, State1),\n\t\t\t%% Read the recall range; the result of the read will be reported by the `chunk1` task.\n\t\t\tar_mining_io:read_recall_range(chunk1, self(), Candidate1, RecallRange1Start),\n\t\t\tState2;\n\t\t{false, _} ->\n\t\t\t%% We don't have the recall ranges, so we need to release the reserved space for both partitions.\n\t\t\tState1 = release_cache_range_space(2, Candidate1#mining_candidate.session_key, State),\n\t\t\tState1\n\tend;\n\n%% @doc Handle the `chunk1` task.\n%% Indicates that the first recall range has been read.\nhandle_task({chunk1, Candidate, [RangeStart, ChunkOffsets]}, State) ->\n\tState1 = process_chunks(chunk1, Candidate, RangeStart, ChunkOffsets, State),\n\tState1;\n\n%% @doc Handle the `chunk2` task.\n%% Indicates that the second recall range has been read.\nhandle_task({chunk2, Candidate, [RangeStart, ChunkOffsets]}, State) ->\n\tState1 = process_chunks(chunk2, Candidate, RangeStart, ChunkOffsets, State),\n\tState1;\n\n%% @doc Handle the `computed_h1` task.\n%% Indicates that the single hash for the first recall range has been computed.\nhandle_task({computed_h1, Candidate, _ExtraArgs}, State) ->\n\t#mining_candidate{ h1 = H1 } = Candidate,\n\tState1 = hash_computed(h1, Candidate, State),\n\tH1PassesDiffChecks = h1_passes_diff_checks(H1, Candidate, State1),\n\tcase H1PassesDiffChecks of\n\t\tfalse ->\n\t\t\tok;\n\t\tpartial ->\n\t\t\tar_mining_server:prepare_and_post_solution(Candidate);\n\t\ttrue ->\n\t\t\tlog_info(found_h1_solution, Candidate, State1, [\n\t\t\t\t{h1, ar_util:encode(H1)},\n\t\t\t\t{difficulty, get_difficulty(State1, Candidate)}]),\n\t\t\tar_mining_server:prepare_and_post_solution(Candidate),\n\t\t\tar_mining_stats:h1_solution()\n\tend,\n\t%% Check if we need to compute H2.\n\t%% Also store H1 in the cache if needed.\n\tcase ar_mining_cache:with_cached_value(\n\t\t?CACHE_KEY(Candidate#mining_candidate.cache_ref, Candidate#mining_candidate.nonce),\n\t\tCandidate#mining_candidate.session_key,\n\t\tState1#state.chunk_cache,\n\t\tfun\n\t\t\t(#ar_mining_cache_value{ chunk2_failed = true }) ->\n\t\t\t\t%% This node does not store chunk2. 
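The second recall range was not readable here, or caching\n\t\t\t\t%% the chunk failed, so H2 cannot be computed on this node. 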
If we're part of a coordinated\n\t\t\t\t%% mining set, we can try one of our peers, but this node is done with\n\t\t\t\t%% this VDF step.\n\t\t\t\t{ok, Config} = arweave_config:get_env(),\n\t\t\t\tcase Config#config.coordinated_mining of\n\t\t\t\t\tfalse -> ok;\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tDiffPair = case get_partial_difficulty(State1, Candidate) of\n\t\t\t\t\t\t\t\tnot_set -> get_difficulty(State1, Candidate);\n\t\t\t\t\t\t\t\tPartialDiffPair -> PartialDiffPair\n\t\t\t\t\t\t\tend,\n\t\t\t\t\t\tar_coordination:computed_h1(Candidate, DiffPair)\n\t\t\t\tend,\n\t\t\t\t%% Remove the cached value from the cache.\n\t\t\t\t{ok, drop};\n\t\t\t(#ar_mining_cache_value{ chunk2 = undefined } = CachedValue) ->\n\t\t\t\t%% chunk2 hasn't been read yet, so we cache H1 and wait for it.\n\t\t\t\t%% If H1 passes diff checks, we will skip H2 for this nonce.\n\t\t\t\t{ok, CachedValue#ar_mining_cache_value{ h1 = H1, h1_passes_diff_checks = H1PassesDiffChecks }};\n\t\t\t(#ar_mining_cache_value{ chunk2 = Chunk2 } = CachedValue) when not H1PassesDiffChecks ->\n\t\t\t\t%% chunk2 has already been read, so we can compute H2 now.\n\t\t\t\tar_mining_hash:compute_h2(self(), Candidate#mining_candidate{ chunk2 = Chunk2 }),\n\t\t\t\t{ok, CachedValue#ar_mining_cache_value{ h1 = H1 }};\n\t\t\t(#ar_mining_cache_value{chunk2 = _Chunk2} = _CachedValue) when H1PassesDiffChecks ->\n\t\t\t\t%% H1 passes diff checks, so we skip H2 for this nonce.\n\t\t\t\t%% Might as well drop the cached data, we don't need it anymore.\n\t\t\t\t{ok, drop}\n\t\tend\n\t) of\n\t\t{ok, ChunkCache2} -> State1#state{ chunk_cache = ChunkCache2 };\n\t\t{error, Reason} ->\n\t\t\tlog_error(mining_worker_failed_to_process_h1, Candidate, State1, [{reason, Reason}]),\n\t\t\tState1\n\tend;\n\n%% @doc Handle the `computed_h2` task.\n%% Indicates that the single hash for the second recall range has been computed.\nhandle_task({computed_h2, Candidate, _ExtraArgs}, State) ->\n\t#mining_candidate{ h2 = H2, cm_lead_peer = Peer } = Candidate,\n\tState1 = hash_computed(h2, Candidate, State),\n\tPassesDiffChecks = h2_passes_diff_checks(H2, Candidate, State1),\n\tcase PassesDiffChecks of\n\t\tfalse -> ok;\n\t\tpartial ->\n\t\t\tlog_info(found_h2_partial_solution, Candidate, State1, [\n\t\t\t\t{h0, ar_util:safe_encode(Candidate#mining_candidate.h0)},\n\t\t\t\t{h2, ar_util:safe_encode(H2)},\n\t\t\t\t{partial_difficulty, get_partial_difficulty(State1, Candidate)}]);\n\t\ttrue ->\n\t\t\tlog_info(found_h2_solution, Candidate, State1, [\n\t\t\t\t{h0, ar_util:safe_encode(Candidate#mining_candidate.h0)},\n\t\t\t\t{h2, ar_util:safe_encode(H2)},\n\t\t\t\t{difficulty, get_difficulty(State1, Candidate)},\n\t\t\t\t{partial_difficulty, get_partial_difficulty(State1, Candidate)}]),\n\t\t\tar_mining_stats:h2_solution()\n\tend,\n\tcase {PassesDiffChecks, Peer} of\n\t\t{false, _} ->\n\t\t\t%% H2 does not pass diff checks, do nothing.\n\t\t\tok;\n\t\t{Check, not_set} when partial == Check orelse true == Check ->\n\t\t\t%% This branch only handles the case where we're not part of a coordinated mining set.\n\t\t\t%% This includes the solo mining setup, and pool mining setup.\n\t\t\t%% In case of solo mining, the `Check` will always be `true`.\n\t\t\t%% In case of pool mining, the `Check` will be `partial` or `true`.\n\t\t\t%% In either case, we prepare and post the solution.\n\t\t\tar_mining_server:prepare_and_post_solution(Candidate);\n\t\t{Check, _} when partial == Check orelse true == Check ->\n\t\t\t%% This branch only handles the case where we're part of a coordinated mining 
set.\n\t\t\t%% In this case, we prepare the PoA2 and send it to the lead peer.\n\t\t\tar_coordination:computed_h2_for_peer(Candidate)\n\tend,\n\t%% Remove the cached value from the cache.\n\tcase ar_mining_cache:with_cached_value(\n\t\t?CACHE_KEY(Candidate#mining_candidate.cache_ref, Candidate#mining_candidate.nonce),\n\t\tCandidate#mining_candidate.session_key,\n\t\tState1#state.chunk_cache,\n\t\tfun(_) -> {ok, drop} end\n\t) of\n\t\t{ok, ChunkCache2} -> State1#state{ chunk_cache = ChunkCache2 };\n\t\t{error, Reason} ->\n\t\t\tlog_error(mining_worker_failed_to_process_computed_h2, Candidate, State1, [{reason, Reason}]),\n\t\t\tState1\n\tend;\n\n%% @doc Handle the `compute_h2_for_peer` task.\n%% Indicates that we got a request to compute H2 for a peer.\nhandle_task({compute_h2_for_peer, Candidate, _ExtraArgs}, State) ->\n\t#mining_candidate{\n\t\th0 = H0,\n\t\tpartition_number = Partition1,\n\t\tpartition_upper_bound = PartitionUpperBound,\n\t\tcm_h1_list = H1List,\n\t\tcm_lead_peer = Peer\n\t} = Candidate,\n\t{_, RecallRange2Start} = ar_mining_server:get_recall_range(H0, Partition1, PartitionUpperBound),\n\tCandidate1 = generate_cache_ref(Candidate),\n\t%% Clear the list so we aren't copying it around all over the place.\n\tCandidate3 = Candidate1#mining_candidate{ cm_h1_list = [] },\n\tRange2Exists = ar_mining_io:read_recall_range(chunk2, self(), Candidate3, RecallRange2Start),\n\tcase Range2Exists of\n\t\ttrue ->\n\t\t\tar_mining_stats:h1_received_from_peer(Peer, length(H1List)),\n\n\t\t\t%% Add the candidate session to the cache. This is only needed during rare occasions\n\t\t\t%% where a CM peer has added a new session a few seconds before this node does.\n\t\t\t%% Typically all CM peers will be on the same VDF sessions within a few seconds, but\n\t\t\t%% this step prevents the H1s shared during those few seconds from being rejected.\n\t\t\tChunkCache = ar_mining_cache:add_session(\n\t\t\t\tCandidate3#mining_candidate.session_key, State#state.chunk_cache),\n\t\t\tState1 = State#state{ chunk_cache = ChunkCache },\t\n\t\t\t%% First we mark the whole first recall range as failed\n\t\t\t%% Then we can cache the H1 list. During this process, we also reset the chunk1_failed\n\t\t\t%% flag to false for the entries we have H1 for.\n\t\t\t%% After these manipulations we will only handle the second recall range nonces that\n\t\t\t%% have corresponding H1s.\n\t\t\tState2 = mark_recall_range_failed(chunk1, Candidate3, State1),\n\t\t\tcache_h1_list(Candidate3, H1List, State2);\n\t\tfalse ->\n\t\t\t%% This can happen for two reasons:\n\t\t\t%% 1. (most common) Remote peer has requested a range we don't have from a\n\t\t\t%%    partition that we do have.\n\t\t\t%% 2. 
(rare, but possible) Remote peer has an outdated partition table and we\n\t\t\t%%    don't even have the requested partition.\n\t\t\t%% In both cases, we don't even need to cache the H1 list as we cannot\n\t\t\t%% find a valid H2.\n\t\t\tState\n\tend.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nprocess_chunks(WhichChunk, Candidate, RangeStart, ChunkOffsets, State) ->\n\tPackingDifficulty = Candidate#mining_candidate.packing_difficulty,\n\tNoncesPerRecallRange = ar_block:get_max_nonce(PackingDifficulty),\n\tNoncesPerChunk = ar_block:get_nonces_per_chunk(PackingDifficulty),\n\tSubChunkSize = ar_block:get_sub_chunk_size(PackingDifficulty),\n\tprocess_chunks(\n\t\tWhichChunk, Candidate, RangeStart, 0, NoncesPerChunk,\n\t\tNoncesPerRecallRange, ChunkOffsets, SubChunkSize, 0, State\n\t).\n\n%% Processing chunks for a recall range.\n%%\n%% Recall range offset is not aligned to chunk size.\n%% When reading data from disk, we always read the entire chunk.\n%% This means that the amount of data read from disk is always bigger than the\n%% recall range size:\n%%\n%%         |<-      recall range       ->|\n%% [    ][ 1  ][ 2  ] .... [n-2 ][n-1 ][ n  ]\n%%         ^\n%%         recall range start offset\n%%         falls into chunk 1\n%%\n%% When determining which chunks to process, we find the first chunk that\n%% contains the first nonce of the recall range, and start processing from this\n%% chunk. This effectively shifts the recall range to the left:\n%%\n%%         |<-      recall range       ->|\n%% [    ][ 1  ][ 2  ] .... [n-2 ][n-1 ][ n  ]\n%%       |<- effective recall range ->|\n%%\n%% If the recall range start offset aligns with the chunk size accidentally,\n%% current implementation skips the first chunk completely. Fixing this\n%% inconsistency will require a hard fork:\n%%\n%%         |<-    recall range      ->|\n%% [    ][ 1  ][ 2  ] .... [n-2 ][n-1 ][ n  ]\n%%             |<- effective recall range ->|\n%%\n%% The ultimate goal is to process all the sub-chunks in the recall range.\n%% The count of subchunks in the recall range is `NoncesPerRecallRange`.\n%% replica packing: 10 chunks, 32 nonces per chunk, 320 nonces per recall range.\n%% spora 2.6: 200 chunks, 1 nonce per chunk, 200 nonces per recall range.\n%%\n%% Some of the chunks inside (including first and last) might be missing.\n%% This cases must be handled correctly to avoid keeping not needed chunks in\n%% the cache.\nprocess_chunks(\n\tWhichChunk, Candidate, _RangeStart, Nonce, _NoncesPerChunk,\n\tNoncesPerRecallRange, _ChunkOffsets, _SubChunkSize, Count, State\n) when Nonce > NoncesPerRecallRange ->\n\t%% We've processed all the sub_chunks in the recall range.\n\tar_mining_stats:chunks_read(case WhichChunk of\n\t\tchunk1 -> Candidate#mining_candidate.partition_number;\n\t\tchunk2 -> Candidate#mining_candidate.partition_number2\n\tend, Count),\n\tState;\nprocess_chunks(\n\tWhichChunk, Candidate, RangeStart, Nonce, NoncesPerChunk,\n\tNoncesPerRecallRange, [], SubChunkSize, Count, State\n) ->\n\t%% No more ChunkOffsets means no more chunks have been read. 
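The chunks covering the\n\t%% remaining nonces were not read (typically they are not stored locally). 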
Iterate through all the\n\t%% remaining nonces and remove the full chunks from the cache.\n\t%% mark_single_chunk*_failed_or_drop releases SubChunkSize per nonce via\n\t%% ReservationSizeAdjustment, which sums to NoncesPerChunk * SubChunkSize =\n\t%% DATA_CHUNK_SIZE per chunk — matching the reservation made in compute_h0.\n\tState1 = case WhichChunk of\n\t\tchunk1 -> mark_single_chunk1_failed_or_drop(Nonce, Candidate, State);\n\t\tchunk2 -> mark_single_chunk2_failed_or_drop(Nonce, Candidate, State)\n\tend,\n\t%% Process the next chunk.\n\tprocess_chunks(\n\t\tWhichChunk, Candidate, RangeStart, Nonce + NoncesPerChunk,\n\t\tNoncesPerChunk, NoncesPerRecallRange, [], SubChunkSize, Count, State1\n\t);\nprocess_chunks(\n\tWhichChunk, Candidate, RangeStart, Nonce, NoncesPerChunk,\n\tNoncesPerRecallRange, [{ChunkEndOffset, Chunk} | ChunkOffsets], SubChunkSize, Count, State\n) ->\n\tNonceOffset = RangeStart + Nonce * SubChunkSize,\n\tChunkStartOffset = ChunkEndOffset - ?DATA_CHUNK_SIZE,\n\tcase {NonceOffset < ChunkStartOffset, NonceOffset >= ChunkEndOffset, WhichChunk} of\n\t\t{true, _, chunk1} ->\n\t\t\t%% Skip these nonces (starting from Nonce to Nonce + NoncesPerChunk - 1).\n\t\t\t%% Nonce falls in a chunk which wasn't read from disk (for example, because there are holes\n\t\t\t%% in the recall range), e.g. the nonce is in the middle of a non-existent chunk.\n\t\t\t%% Mark single chunk1 as failed or remove it if the corresponding chunk is already read or marked as failed.\n\t\t\tState1 = mark_single_chunk1_failed_or_drop(Nonce, Candidate, State),\n\t\t\tprocess_chunks(\n\t\t\t\tWhichChunk, Candidate, RangeStart, Nonce + NoncesPerChunk, NoncesPerChunk,\n\t\t\t\tNoncesPerRecallRange, [{ChunkEndOffset, Chunk} | ChunkOffsets], SubChunkSize, Count, State1\n\t\t\t);\n\t\t{true, _, chunk2} ->\n\t\t\t%% Skip these nonces (starting from Nonce to Nonce + NoncesPerChunk - 1).\n\t\t\t%% Nonce falls in a chunk which wasn't read from disk (for example, because there are holes\n\t\t\t%% in the recall range), e.g. the nonce is in the middle of a non-existent chunk.\n\t\t\t%% Mark single chunk2 as failed or remove it if the corresponding chunk is already read and H1 is calculated.\n\t\t\tState1 = mark_single_chunk2_failed_or_drop(Nonce, Candidate, State),\n\t\t\tprocess_chunks(\n\t\t\t\tWhichChunk, Candidate, RangeStart, Nonce + NoncesPerChunk, NoncesPerChunk,\n\t\t\t\tNoncesPerRecallRange, [{ChunkEndOffset, Chunk} | ChunkOffsets], SubChunkSize, Count, State1\n\t\t\t);\n\t\t{_, true, _} ->\n\t\t\t%% Skip this chunk.\n\t\t\t%% Nonce falls in a chunk beyond the current chunk offset, (for example, because we\n\t\t\t%% read extra chunk in the beginning of recall range). 
Move ahead to the next\n\t\t\t%% chunk offset.\n\t\t\tprocess_chunks(\n\t\t\t\tWhichChunk, Candidate, RangeStart, Nonce, NoncesPerChunk,\n\t\t\t\tNoncesPerRecallRange, ChunkOffsets, SubChunkSize, Count, State\n\t\t\t);\n\t\t{false, false, _} ->\n\t\t\t%% Process all sub-chunks in Chunk, and then advance to the next chunk.\n\t\t\tState1 = process_all_sub_chunks(WhichChunk, Chunk, Candidate, Nonce, State),\n\t\t\tprocess_chunks(\n\t\t\t\tWhichChunk, Candidate, RangeStart, Nonce + NoncesPerChunk, NoncesPerChunk,\n\t\t\t\tNoncesPerRecallRange, ChunkOffsets, SubChunkSize, Count + 1, State1\n\t\t\t)\n\tend.\n\nprocess_all_sub_chunks(_WhichChunk, <<>>, _Candidate, _Nonce, State) -> State;\nprocess_all_sub_chunks(WhichChunk, Chunk, Candidate, Nonce, State)\n\t\twhen Candidate#mining_candidate.packing_difficulty == 0 ->\n\t?LOG_ERROR([{event, process_all_sub_chunks}, {packing_difficulty, 0}]),\n\t%% Spora 2.6 packing (aka difficulty 0).\n\tCandidate1 = Candidate#mining_candidate{ nonce = Nonce },\n\tprocess_sub_chunk(WhichChunk, Candidate1, Chunk, State);\nprocess_all_sub_chunks(\n\tWhichChunk,\n\t<< SubChunk:?COMPOSITE_PACKING_SUB_CHUNK_SIZE/binary, Rest/binary >>,\n\tCandidate, Nonce, State\n) ->\n\t%% Composite packing / replica packing (aka difficulty 1+).\n\tCandidate1 = Candidate#mining_candidate{ nonce = Nonce },\n\tState1 = process_sub_chunk(WhichChunk, Candidate1, SubChunk, State),\n\tprocess_all_sub_chunks(WhichChunk, Rest, Candidate1, Nonce + 1, State1);\nprocess_all_sub_chunks(WhichChunk, Rest, Candidate, Nonce, State) ->\n\t%% The chunk is not a multiple of the subchunk size.\n\tlog_error(failed_to_split_chunk_into_sub_chunks, Candidate, State, [\n\t\t\t{remaining_size, byte_size(Rest)},\n\t\t\t{nonce, Nonce},\n\t\t\t{chunk, WhichChunk}]),\n\tState.\n\nprocess_sub_chunk(chunk1, Candidate, SubChunk, State) ->\n\t%% Store the chunk1 in the cache first; only compute h1 if the store succeeds.\n\tcase ar_mining_cache:with_cached_value(\n\t\t?CACHE_KEY(Candidate#mining_candidate.cache_ref, Candidate#mining_candidate.nonce),\n\t\tCandidate#mining_candidate.session_key,\n\t\tState#state.chunk_cache,\n\t\tfun(CachedValue) -> {ok, CachedValue#ar_mining_cache_value{ chunk1 = SubChunk }} end\n\t) of\n\t\t{ok, ChunkCache2} ->\n\t\t\tar_mining_hash:compute_h1(self(), Candidate#mining_candidate{ chunk1 = SubChunk }),\n\t\t\tState#state{ chunk_cache = ChunkCache2 };\n\t\t{error, Reason} ->\n\t\t\tlog_error(mining_worker_failed_to_process_chunk1, Candidate, State, [{reason, Reason}]),\n\t\t\t%% Mark chunk1 as failed for this nonce. 
This releases the sub-chunk reservation\n\t\t\t%% (pre-allocated in compute_h0) and ensures that when chunk2 arrives, the entry\n\t\t\t%% will be dropped immediately — preventing orphaned entries from filling the cache.\n\t\t\t%% Marking as failed adds no binary data (size difference = 0), so it succeeds even\n\t\t\t%% when the cache is full.\n\t\t\tSubChunkSize = ar_block:get_sub_chunk_size(Candidate#mining_candidate.packing_difficulty),\n\t\t\tcase ar_mining_cache:with_cached_value(\n\t\t\t\t?CACHE_KEY(Candidate#mining_candidate.cache_ref, Candidate#mining_candidate.nonce),\n\t\t\t\tCandidate#mining_candidate.session_key,\n\t\t\t\tState#state.chunk_cache,\n\t\t\t\tfun\n\t\t\t\t\t(#ar_mining_cache_value{ chunk2_failed = true }) ->\n\t\t\t\t\t\t{ok, drop, -SubChunkSize};\n\t\t\t\t\t(#ar_mining_cache_value{ chunk2 = Chunk2 }) when is_binary(Chunk2) ->\n\t\t\t\t\t\t{ok, drop, -SubChunkSize};\n\t\t\t\t\t(#ar_mining_cache_value{ chunk2 = undefined } = CachedValue) ->\n\t\t\t\t\t\t{ok, CachedValue#ar_mining_cache_value{ chunk1_failed = true },\n\t\t\t\t\t\t\t-SubChunkSize}\n\t\t\t\tend\n\t\t\t) of\n\t\t\t\t{ok, ChunkCache3} ->\n\t\t\t\t\tState#state{ chunk_cache = ChunkCache3 };\n\t\t\t\t{error, Reason2} ->\n\t\t\t\t\tlog_error(mining_worker_failed_to_release_reservation_for_session,\n\t\t\t\t\t\tCandidate, State, [{reason, Reason2}]),\n\t\t\t\t\tState\n\t\t\tend\n\tend;\nprocess_sub_chunk(chunk2, Candidate, SubChunk, State) ->\n\tCandidate1 = Candidate#mining_candidate{ chunk2 = SubChunk },\n\tcase ar_mining_cache:with_cached_value(\n\t\t?CACHE_KEY(Candidate1#mining_candidate.cache_ref, Candidate1#mining_candidate.nonce),\n\t\tCandidate#mining_candidate.session_key,\n\t\tState#state.chunk_cache,\n\t\tfun\n\t\t\t(#ar_mining_cache_value{ chunk1_failed = true }) ->\n\t\t\t\t%% chunk1 already failed, so there was no reservation for it.\n\t\t\t\t%% Since there is no need to calculate H2, we can just drop the cached value.\n\t\t\t\t{ok, drop, -ar_block:get_sub_chunk_size(Candidate#mining_candidate.packing_difficulty)};\n\t\t\t(#ar_mining_cache_value{ h1_passes_diff_checks = true } = _CachedValue) ->\n\t\t\t\t%% H1 passes diff checks, so we skip H2 for this nonce.\n\t\t\t\t%% Drop the cached data, we don't need it anymore.\n\t\t\t\t%% Since we already reserved the cache size for chunk2, but we never store it,\n\t\t\t\t%% we need to drop the reservation here.\n\t\t\t\t{ok, drop, -ar_block:get_sub_chunk_size(Candidate#mining_candidate.packing_difficulty)};\n\t\t\t(#ar_mining_cache_value{ h1 = undefined } = CachedValue) ->\n\t\t\t\t%% H1 is not yet calculated, cache the chunk2 for this nonce.\n\t\t\t\t{ok, CachedValue#ar_mining_cache_value{ chunk2 = SubChunk }};\n\t\t\t(#ar_mining_cache_value{ h1 = H1, chunk1 = Chunk1 } = CachedValue) ->\n\t\t\t\t%% H1 is already calculated, compute H2 and cache the chunk2 for this nonce.\n\t\t\t\tar_mining_hash:compute_h2(self(), Candidate1#mining_candidate{ h1 = H1, chunk1 = Chunk1 }),\n\t\t\t\t{ok, CachedValue#ar_mining_cache_value{ chunk2 = SubChunk }}\n\t\tend\n\t) of\n\t\t{ok, ChunkCache2} -> State#state{ chunk_cache = ChunkCache2 };\n\t\t{error, Reason} ->\n\t\t\tlog_error(mining_worker_failed_to_process_chunk2, Candidate1, State, [{reason, Reason}]),\n\t\t\t%% Mark chunk2 as failed for this nonce. 
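As in the chunk1 failure path above, marking adds\n\t\t\t%% no binary data, so it succeeds even when the cache is full.\n\t\t\t%% 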
This releases the sub-chunk reservation\n\t\t\t%% and prevents orphaned cache entries from accumulating.\n\t\t\tSubChunkSize = ar_block:get_sub_chunk_size(Candidate#mining_candidate.packing_difficulty),\n\t\t\tcase ar_mining_cache:with_cached_value(\n\t\t\t\t?CACHE_KEY(Candidate1#mining_candidate.cache_ref, Candidate1#mining_candidate.nonce),\n\t\t\t\tCandidate#mining_candidate.session_key,\n\t\t\t\tState#state.chunk_cache,\n\t\t\t\tfun\n\t\t\t\t\t(#ar_mining_cache_value{ chunk1_failed = true }) ->\n\t\t\t\t\t\t{ok, drop, -SubChunkSize};\n\t\t\t\t\t(#ar_mining_cache_value{ chunk1 = Chunk1, h1 = undefined } = CachedValue)\n\t\t\t\t\t\t\twhen is_binary(Chunk1) ->\n\t\t\t\t\t\t{ok, CachedValue#ar_mining_cache_value{ chunk2_failed = true },\n\t\t\t\t\t\t\t-SubChunkSize};\n\t\t\t\t\t(#ar_mining_cache_value{ h1 = H1 }) when is_binary(H1) ->\n\t\t\t\t\t\t{ok, drop, -SubChunkSize};\n\t\t\t\t\t(CachedValue) ->\n\t\t\t\t\t\t{ok, CachedValue#ar_mining_cache_value{ chunk2_failed = true },\n\t\t\t\t\t\t\t-SubChunkSize}\n\t\t\t\tend\n\t\t\t) of\n\t\t\t\t{ok, ChunkCache3} ->\n\t\t\t\t\tState#state{ chunk_cache = ChunkCache3 };\n\t\t\t\t{error, Reason2} ->\n\t\t\t\t\tlog_error(mining_worker_failed_to_release_reservation_for_session,\n\t\t\t\t\t\tCandidate1, State, [{reason, Reason2}]),\n\t\t\t\t\tState\n\t\t\tend\n\tend.\n\npriority(computed_h2, StepNumber) -> {1, -StepNumber};\npriority(computed_h1, StepNumber) -> {2, -StepNumber};\npriority(compute_h2_for_peer, StepNumber) -> {2, -StepNumber};\npriority(chunk2, StepNumber) -> {3, -StepNumber};\npriority(chunk1, StepNumber) -> {4, -StepNumber};\npriority(computed_h0, StepNumber) -> {5, -StepNumber};\npriority(compute_h0, StepNumber) -> {6, -StepNumber}.\n\n%% @doc Returns true if the mining candidate belongs to a valid mining session. Always assume\n%% that a coordinated mining candidate is valid (its cm_lead_peer is set).\nis_session_valid(_State, #mining_candidate{ cm_lead_peer = Peer })\n\t\twhen Peer /= not_set ->\n\ttrue;\nis_session_valid(State, #mining_candidate{ session_key = SessionKey }) ->\n\tar_mining_cache:session_exists(SessionKey, State#state.chunk_cache).\n\nh1_passes_diff_checks(H1, Candidate, State) ->\n\tpasses_diff_checks(H1, true, Candidate, State).\n\nh2_passes_diff_checks(H2, Candidate, State) ->\n\tpasses_diff_checks(H2, false, Candidate, State).\n\npasses_diff_checks(SolutionHash, IsPoA1, Candidate, State) ->\n\tDiffPair = get_difficulty(State, Candidate),\n\t#mining_candidate{ packing_difficulty = PackingDifficulty } = Candidate,\n\tcase ar_node_utils:passes_diff_check(SolutionHash, IsPoA1, DiffPair, PackingDifficulty) of\n\t\ttrue ->\n\t\t\ttrue;\n\t\tfalse ->\n\t\t\tcase get_partial_difficulty(State, Candidate) of\n\t\t\t\tnot_set -> false;\n\t\t\t\tPartialDiffPair ->\n\t\t\t\t\tcase ar_node_utils:passes_diff_check(\n\t\t\t\t\t\tSolutionHash, IsPoA1, PartialDiffPair, PackingDifficulty\n\t\t\t\t\t) of\n\t\t\t\t\t\ttrue -> partial;\n\t\t\t\t\t\tfalse -> false\n\t\t\t\t\tend\n\t\t\tend\n\tend.\n\nmaybe_warn_about_lag(Q, Name) ->\n\tcase gb_sets:is_empty(Q) of\n\t\ttrue -> ok;\n\t\tfalse ->\n\t\t\tcase gb_sets:take_smallest(Q) of\n\t\t\t\t{{_Priority, _ID, {compute_h0, _}}, Q3} ->\n\t\t\t\t\t%% Since we sample the queue asynchronously, we expect there to regularly\n\t\t\t\t\t%% be a queue of length 1 (i.e. a task may have just been added to the\n\t\t\t\t\t%% queue when we run this check).\n\t\t\t\t\t%%\n\t\t\t\t\t%% To further reduce log spam, we'll only warn if the queue is greater\n\t\t\t\t\t%% than 2. 
We really only care if a queue is consistently long or if\n\t\t\t\t\t%% it's getting longer. Temporary blips are fine. We may increase\n\t\t\t\t\t%% the threshold in the future.\n\t\t\t\t\tN = count_h0_tasks(Q3) + 1,\n\t\t\t\t\tcase N > 2 of\n\t\t\t\t\t\tfalse -> ok;\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t?LOG_WARNING([\n\t\t\t\t\t\t\t\t{event, mining_worker_lags_behind_the_nonce_limiter},\n\t\t\t\t\t\t\t\t{worker, Name},\n\t\t\t\t\t\t\t\t{step_count, N}])\n\t\t\t\t\tend;\n\t\t\t\t_ -> ok\n\t\t\tend\n\tend.\n\ncount_h0_tasks(Q) ->\n\tcase gb_sets:is_empty(Q) of\n\t\ttrue -> 0;\n\t\tfalse ->\n\t\t\tcase gb_sets:take_smallest(Q) of\n\t\t\t\t{{_Priority, _ID, {compute_h0, _Args}}, Q2} ->\n\t\t\t\t\t1 + count_h0_tasks(Q2);\n\t\t\t\t_ -> 0\n\t\t\tend\n\tend.\n\nupdate_sessions(ActiveSessions, State) ->\n\tCurrentSessions = ar_mining_cache:get_sessions(State#state.chunk_cache),\n\tAddedSessions = lists:subtract(ActiveSessions, CurrentSessions),\n\tRemovedSessions = lists:subtract(CurrentSessions, ActiveSessions),\n\tadd_sessions(AddedSessions, remove_sessions(RemovedSessions, State)).\n\nadd_sessions([], State) -> State;\nadd_sessions([SessionKey | AddedSessions], State) ->\n\tChunkCache = ar_mining_cache:add_session(SessionKey, State#state.chunk_cache),\n\tlog_debug(mining_debug_add_session, State, [\n\t\t{session_key, ar_nonce_limiter:encode_session_key(SessionKey)}]),\n\tadd_sessions(AddedSessions, State#state{chunk_cache = ChunkCache}).\n\nremove_sessions([], State) -> State;\nremove_sessions([SessionKey | RemovedSessions], State) ->\n\tChunkCache = ar_mining_cache:drop_session(SessionKey, State#state.chunk_cache),\n\tTaskQueue = remove_tasks(SessionKey, State#state.task_queue),\n\tlog_debug(mining_debug_remove_session, State, [\n\t\t{session_key, ar_nonce_limiter:encode_session_key(SessionKey)}]),\n\tremove_sessions(RemovedSessions, State#state{\n\t\ttask_queue = TaskQueue,\n\t\tchunk_cache = ChunkCache\n\t}).\n\nremove_tasks(SessionKey, TaskQueue) ->\n\tgb_sets:filter(\n\t\tfun({_Priority, _ID, {TaskType, Candidate, _ExtraArgs}}) ->\n\t\t\tcase Candidate#mining_candidate.session_key == SessionKey of\n\t\t\t\ttrue ->\n\t\t\t\t\tprometheus_gauge:dec(mining_server_task_queue_len, [TaskType]),\n\t\t\t\t\tfalse;\n\t\t\t\tfalse ->\n\t\t\t\t\ttrue\n\t\t\tend\n\t\tend,\n\t\tTaskQueue\n\t).\n\ntry_to_reserve_cache_range_space(Multiplier, SessionKey, #state{\n\tpacking_difficulty = PackingDifficulty,\n\tchunk_cache = ChunkCache0\n} = State) ->\n\tReserveSize = Multiplier * ar_block:get_recall_range_size(PackingDifficulty),\n\tcase ar_mining_cache:reserve_for_session(SessionKey, ReserveSize, ChunkCache0) of\n\t\t{ok, ChunkCache1} ->\n\t\t\tState1 = State#state{ chunk_cache = ChunkCache1 },\n\t\t\t{true, State1};\n\t\t{error, Reason} ->\n\t\t\tlog_warning(mining_worker_failed_to_reserve_cache_space, State, [\n\t\t\t\t{session_key, ar_nonce_limiter:encode_session_key(SessionKey)},\n\t\t\t\t{cache_size, ar_mining_cache:cache_size(ChunkCache0)},\n\t\t\t\t{cache_limit, ar_mining_cache:get_limit(ChunkCache0)},\n\t\t\t\t{reserved_size, ar_mining_cache:reserved_size(ChunkCache0)},\n\t\t\t\t{reserve_size, ReserveSize},\n\t\t\t\t{reason, Reason}]),\n\t\t\tfalse\n\tend.\n\nrelease_cache_range_space(Multiplier, SessionKey, #state{\n\tpacking_difficulty = PackingDifficulty,\n\tchunk_cache = ChunkCache0\n} = State) ->\n\tReleaseSize = Multiplier * ar_block:get_recall_range_size(PackingDifficulty),\n\tcase ar_mining_cache:release_for_session(SessionKey, ReleaseSize, ChunkCache0) of\n\t\t{ok, ChunkCache1} -> 
State#state{ chunk_cache = ChunkCache1 };\n\t\t{error, Reason} ->\n\t\t\tlog_error(mining_worker_failed_to_release_cache_space, State, [\n\t\t\t\t{session_key, ar_nonce_limiter:encode_session_key(SessionKey)},\n\t\t\t\t{cache_size, ar_mining_cache:cache_size(ChunkCache0)},\n\t\t\t\t{cache_limit, ar_mining_cache:get_limit(ChunkCache0)},\n\t\t\t\t{reserved_size, ar_mining_cache:reserved_size(ChunkCache0)},\n\t\t\t\t{release_size, ReleaseSize},\n\t\t\t\t{reason, Reason}]),\n\t\t\tState\n\tend.\n\n%% @doc Mark the chunk1 as failed or drop the cache and reservation for this chunk.\n%% This function is called for one chunk1.\nmark_single_chunk1_failed_or_drop(Nonce, Candidate, State) ->\n\t#mining_candidate{ packing_difficulty = PackingDifficulty } = Candidate,\n\tSubChunksPerChunk = ar_block:get_nonces_per_chunk(PackingDifficulty),\n\tmark_single_chunk1_failed_or_drop(Nonce, SubChunksPerChunk, Candidate, State).\n\nmark_single_chunk1_failed_or_drop(_Nonce, 0, _Candidate, State) -> State;\nmark_single_chunk1_failed_or_drop(Nonce, NoncesLeft, Candidate, State) ->\n\t%% Mark the chunk1 as failed.\n\t%% The cache reservation for this chunk1 will be dropped in the final (first) clause of the function.\n\tcase ar_mining_cache:with_cached_value(\n\t\t?CACHE_KEY(Candidate#mining_candidate.cache_ref, Nonce),\n\t\tCandidate#mining_candidate.session_key,\n\t\tState#state.chunk_cache,\n\t\tfun\n\t\t\t(#ar_mining_cache_value{ chunk2_failed = true }) ->\n\t\t\t\t%% chunk2 already failed, so there was no reservation for it.\n\t\t\t\t%% We can just drop the cached value.\n\t\t\t\t{ok, drop, -ar_block:get_sub_chunk_size(Candidate#mining_candidate.packing_difficulty)};\n\t\t\t(#ar_mining_cache_value{chunk2 = Chunk2}) when is_binary(Chunk2) ->\n\t\t\t\t%% We've already read the chunk2 from disk, so we can just drop the cached value.\n\t\t\t\t%% The cache reservation for corresponding chunk2 was already consumed.\n\t\t\t\t{ok, drop, -ar_block:get_sub_chunk_size(Candidate#mining_candidate.packing_difficulty)};\n\t\t\t(#ar_mining_cache_value{chunk2 = undefined} = CachedValue) ->\n\t\t\t\t%% Mark the chunk1 as failed.\n\t\t\t\t%% When the corresponding chunk2 will be read from disk, it will be dropped immediately.\n\t\t\t\t{ok, CachedValue#ar_mining_cache_value{ chunk1_failed = true }, -ar_block:get_sub_chunk_size(Candidate#mining_candidate.packing_difficulty)}\n\t\tend\n\t) of\n\t\t{ok, ChunkCache1} ->\n\t\t\tmark_single_chunk1_failed_or_drop(Nonce + 1, NoncesLeft - 1, Candidate, State#state{ chunk_cache = ChunkCache1 });\n\t\t{error, Reason} ->\n\t\t\tlog_error(mining_worker_failed_to_mark_chunk1_failed,\n\t\t\t\tCandidate, State, [{reason, Reason}]),\n\t\t\tmark_single_chunk1_failed_or_drop(Nonce + 1, NoncesLeft - 1, Candidate, State)\n\tend.\n\n%% @doc Mark the chunk2 as failed for a single chunk.\nmark_single_chunk2_failed_or_drop(Nonce, Candidate, State) ->\n\t#mining_candidate{ packing_difficulty = PackingDifficulty } = Candidate,\n\tSubChunksPerChunk = ar_block:get_nonces_per_chunk(PackingDifficulty),\n\tmark_single_chunk2_failed_or_drop(Nonce, SubChunksPerChunk, Candidate, State).\n\nmark_single_chunk2_failed_or_drop(_Nonce, 0, _Candidate, State) -> State;\nmark_single_chunk2_failed_or_drop(Nonce, NoncesLeft, Candidate, State) ->\n\tcase ar_mining_cache:with_cached_value(\n\t\t?CACHE_KEY(Candidate#mining_candidate.cache_ref, Nonce),\n\t\tCandidate#mining_candidate.session_key,\n\t\tState#state.chunk_cache,\n\t\tfun\n\t\t\t(#ar_mining_cache_value{ chunk1_failed = true }) ->\n\t\t\t\t%% chunk1 already failed, 
so the reservation for it was released.\n\t\t\t\t%% We can just drop the cached value and release the reservation for a single subchunk.\n\t\t\t\t{ok, drop, -ar_block:get_sub_chunk_size(Candidate#mining_candidate.packing_difficulty)};\n\t\t\t(#ar_mining_cache_value{chunk1 = Chunk1, h1 = undefined} = CachedValue) when is_binary(Chunk1) ->\n\t\t\t\t%% We have the corresponding chunk1, but we didn't calculate H1 yet.\n\t\t\t\t%% Mark chunk2 as failed to drop the cached value after we calculate H1.\n\t\t\t\t%% Drop the reservation for a single subchunk.\n\t\t\t\t{ok, CachedValue#ar_mining_cache_value{ chunk2_failed = true }, -ar_block:get_sub_chunk_size(Candidate#mining_candidate.packing_difficulty)};\n\t\t\t(#ar_mining_cache_value{h1 = H1}) when is_binary(H1) ->\n\t\t\t\t%% We've already calculated H1, so we can drop the cached value.\n\t\t\t\t%% Drop the reservation for a single subchunk.\n\t\t\t\t{ok, drop, -ar_block:get_sub_chunk_size(Candidate#mining_candidate.packing_difficulty)};\n\t\t\t(CachedValue) ->\n\t\t\t\t%% chunk1 hasn't arrived yet, so\n\t\t\t\t%% we just mark the chunk2 as failed and continue.\n\t\t\t\t%% Drop the reservation for a single subchunk.\n\t\t\t\t{ok, CachedValue#ar_mining_cache_value{ chunk2_failed = true }, -ar_block:get_sub_chunk_size(Candidate#mining_candidate.packing_difficulty)}\n\t\tend\n\t) of\n\t\t{ok, ChunkCache1} ->\n\t\t\tmark_single_chunk2_failed_or_drop(Nonce + 1, NoncesLeft - 1, Candidate, State#state{ chunk_cache = ChunkCache1 });\n\t\t{error, Reason} ->\n\t\t\t%% NB: this clause may cause a memory leak, because mining worker will wait for\n\t\t\t%% chunk2 to arrive.\n\t\t\tlog_error(mining_worker_failed_to_mark_chunk2_failed,\n\t\t\t\tCandidate, State, [{reason, Reason}]),\n\t\t\tmark_single_chunk2_failed_or_drop(Nonce + 1, NoncesLeft - 1, Candidate, State)\n\tend.\n\n%% @doc Mark the chunk1 or chunk2 as failed for the whole recall range.\nmark_recall_range_failed(WhichChunk, Candidate, State) ->\n\t#mining_candidate{ packing_difficulty = PackingDifficulty } = Candidate,\n\tmark_recall_range_failed(WhichChunk, 0, ar_block:get_nonces_per_recall_range(PackingDifficulty), Candidate, State).\n\nmark_recall_range_failed(_WhichChunk, _Nonce, 0, _Candidate, State) -> State;\nmark_recall_range_failed(WhichChunk, Nonce, NoncesLeft, Candidate, State) ->\n\tcase ar_mining_cache:with_cached_value(\n\t\t?CACHE_KEY(Candidate#mining_candidate.cache_ref, Nonce),\n\t\tCandidate#mining_candidate.session_key,\n\t\tState#state.chunk_cache,\n\t\tfun(CachedValue) ->\n\t\t\tcase WhichChunk of\n\t\t\t\tchunk1 -> {ok, CachedValue#ar_mining_cache_value{ chunk1_failed = true }};\n\t\t\t\tchunk2 -> {ok, CachedValue#ar_mining_cache_value{ chunk2_failed = true }}\n\t\t\tend\n\t\tend\n\t\t) of\n\t\t{ok, ChunkCache1} ->\n\t\t\tmark_recall_range_failed(WhichChunk, Nonce + 1, NoncesLeft - 1, Candidate, State#state{ chunk_cache = ChunkCache1 });\n\t\t{error, Reason} ->\n\t\t\t%% NB: this clause may cause a memory leak, because mining worker will wait for\n\t\t\t%% WhichChunk to arrive.\n\t\t\tlog_error(mining_worker_failed_to_add_chunk_to_cache, Candidate, State, [{reason, Reason}]),\n\t\t\tmark_recall_range_failed(WhichChunk, Nonce + 1, NoncesLeft - 1, Candidate, State)\n\tend.\n\ncache_h1_list(_Candidate, [], State) -> State;\ncache_h1_list(#mining_candidate{ cache_ref = not_set } = _Candidate, [], State) -> State;\ncache_h1_list(Candidate, [ {H1, Nonce} | H1List ], State) ->\n\tcase ar_mining_cache:with_cached_value(\n\t\t?CACHE_KEY(Candidate#mining_candidate.cache_ref, 
Nonce),\n\t\tCandidate#mining_candidate.session_key,\n\t\tState#state.chunk_cache,\n\t\tfun(CachedValue) ->\n\t\t\t%% Store the H1 received from peer, and set chunk1_failed to false,\n\t\t\t%% marking that we have a recall range for this H1 list.\n\t\t\t{ok, CachedValue#ar_mining_cache_value{ h1 = H1, chunk1_failed = false }}\n\t\tend\n\t) of\n\t\t{ok, ChunkCache1} ->\n\t\t\tcache_h1_list(Candidate, H1List, State#state{ chunk_cache = ChunkCache1 });\n\t\t{error, Reason} ->\n\t\t\tlog_error(mining_worker_failed_to_cache_h1, Candidate, State, [{reason, Reason}]),\n\t\t\tcache_h1_list(Candidate, H1List, State)\n\tend.\n\nget_difficulty(State, #mining_candidate{ cm_diff = not_set }) ->\n\tState#state.diff_pair;\nget_difficulty(_State, #mining_candidate{ cm_diff = DiffPair }) ->\n\tDiffPair.\n\nget_partial_difficulty(#state{ is_pool_client = false }, _Candidate) ->\n\tnot_set;\nget_partial_difficulty(_State, #mining_candidate{ cm_diff = DiffPair }) ->\n\tDiffPair.\n\ngenerate_cache_ref(Candidate) ->\n\t#mining_candidate{\n\t\tpartition_number = Partition1, partition_number2 = Partition2,\n\t\tpartition_upper_bound = PartitionUpperBound } = Candidate,\n\tCacheRef = {Partition1, Partition2, PartitionUpperBound, make_ref()},\n\tCandidate#mining_candidate{ cache_ref = CacheRef }.\n\nhash_computed(WhichHash, Candidate, State) ->\n\tcase WhichHash of\n\t\th1 ->\n\t\t\tPartitionNumber = Candidate#mining_candidate.partition_number,\n\t\t\tHashes = maps:get(PartitionNumber, State#state.h1_hashes, 0),\n\t\t\tState#state{ h1_hashes = maps:put(PartitionNumber, Hashes+1, State#state.h1_hashes) };\n\t\th2 ->\n\t\t\tPartitionNumber = Candidate#mining_candidate.partition_number2,\n\t\t\tHashes = maps:get(PartitionNumber, State#state.h2_hashes, 0),\n\t\t\tState#state{ h2_hashes = maps:put(PartitionNumber, Hashes+1, State#state.h2_hashes) }\n\tend.\n\nreport_and_reset_hashes(State) ->\n\tmaps:foreach(\n        fun(PartitionNumber, Count) ->\n            ar_mining_stats:h1_computed(PartitionNumber, Count)\n        end,\n        State#state.h1_hashes\n    ),\n\tmaps:foreach(\n        fun(PartitionNumber, Count) ->\n            ar_mining_stats:h2_computed(PartitionNumber, Count)\n        end,\n        State#state.h2_hashes\n    ),\n\tState#state{ h1_hashes = #{}, h2_hashes = #{} }.\n\nreport_chunk_cache_metrics(#state{chunk_cache = ChunkCache, partition_number = Partition} = State) ->\n\tprometheus_gauge:set(mining_server_chunk_cache_size, [Partition, \"total\"], ar_mining_cache:cache_size(ChunkCache)),\n\tcase ar_mining_cache:reserved_size(ChunkCache) of\n\t\t{ok, ReservedSize} -> prometheus_gauge:set(mining_server_chunk_cache_size, [Partition, \"reserved\"], ReservedSize);\n\t\t{error, Reason} ->\n\t\t\tlog_error(mining_worker_failed_to_report_chunk_cache_metrics, State, [{reason, Reason}])\n\tend,\n\tState.\n\nformat_logs(State = #state{}) ->\n\t#state{\n\t\tname = Name,\n\t\tpartition_number = PartitionNumber,\n\t\tlatest_vdf_step_number = LatestStepNumber\n\t} = State,\n\t[\n\t\t{worker, Name}, {state_partition, PartitionNumber}, {latest_vdf_step_number, LatestStepNumber}\n\t];\nformat_logs(Candidate = #mining_candidate{}) ->\n\t#mining_candidate{\n\t\tcm_lead_peer = Peer,\n\t\tsession_key = SessionKey,\n\t\tstep_number = StepNumber,\n\t\tpartition_number = Partition,\n\t\tpartition_number2 = Partition2,\n\t\tnonce = Nonce\n\t} = Candidate,\n\t[{cm_peer, Peer}, {candidate_session, ar_nonce_limiter:encode_session_key(SessionKey)},\n\t\t{candidate_step_number, StepNumber}, {candidate_nonce, 
Nonce},\n\t\t{candidate_partition, Partition}, {candidate_partition2, Partition2}];\nformat_logs(undefined) ->\n\t[].\n\nformat_logs(Event, Candidate, State, ExtraLogs) ->\n\t[{event, Event}] ++ format_logs(State) ++ format_logs(Candidate) ++ ExtraLogs.\n\nlog_debug(Event, Candidate, State, ExtraLogs) ->\n\t?LOG_DEBUG(format_logs(Event, Candidate, State, ExtraLogs)).\n\nlog_debug(Event, State, ExtraLogs) ->\n\tlog_debug(Event, undefined, State, ExtraLogs).\t\n\nlog_error(Event, Candidate, State, ExtraLogs) ->\n\t?LOG_ERROR(format_logs(Event, Candidate, State, ExtraLogs)).\n\nlog_error(Event, State, ExtraLogs) ->\n\tlog_error(Event, undefined, State, ExtraLogs).\n\nlog_info(Event, Candidate, State, ExtraLogs) ->\n\t?LOG_INFO(format_logs(Event, Candidate, State, ExtraLogs)).\n\nlog_info(Event, State, ExtraLogs) ->\n\tlog_info(Event, undefined, State, ExtraLogs).\n\nlog_warning(Event, Candidate, State, ExtraLogs) ->\n\t?LOG_WARNING(format_logs(Event, Candidate, State, ExtraLogs)).\n\nlog_warning(Event, State, ExtraLogs) ->\n\tlog_warning(Event, undefined, State, ExtraLogs).\n\n\n%%%===================================================================\n%%% Public Test interface.\n%%%===================================================================\n"
  },
  {
    "path": "apps/arweave/src/ar_network_middleware.erl",
    "content": "-module(ar_network_middleware).\n\n-behaviour(cowboy_middleware).\n\n-export([execute/2]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n\nexecute(Req, Env) ->\n\tcase cowboy_req:header(<<\"x-network\">>, Req, <<?DEFAULT_NETWORK_NAME>>) of\n\t\t<<?NETWORK_NAME>> ->\n\t\t\tmaybe_add_peer(ar_http_util:arweave_peer(Req), Req),\n\t\t\t{ok, Req, Env};\n\t\t_ ->\n\t\t\tcase cowboy_req:method(Req) of\n\t\t\t\t<<\"GET\">> ->\n\t\t\t\t\t{ok, Req, Env};\n\t\t\t\t<<\"HEAD\">> ->\n\t\t\t\t\t{ok, Req, Env};\n\t\t\t\t<<\"OPTIONS\">> ->\n\t\t\t\t\t{ok, Req, Env};\n\t\t\t\t_ ->\n\t\t\t\t\twrong_network(Req)\n\t\t\tend\n\tend.\n\n%% @doc When a node receives a request that includes the x-p2p-port header, it will attempt to\n%% add the requesting node to its peer list.\nmaybe_add_peer(Peer, Req) ->\n\tcase cowboy_req:header(<<\"x-p2p-port\">>, Req, not_set) of\n\t\tnot_set ->\n\t\t\tok;\n\t\t_ ->\n\t\t\tar_peers:add_peer(Peer, get_release(Req))\n\tend.\n\nwrong_network(Req) ->\n\t{stop, cowboy_req:reply(412, #{}, jiffy:encode(#{ error => wrong_network }), Req)}.\n\nget_release(Req) ->\n\tcase cowboy_req:header(<<\"x-release\">>, Req, -1) of\n\t\t-1 ->\n\t\t\t-1;\n\t\tReleaseBin ->\n\t\t\tcase catch binary_to_integer(ReleaseBin) of\n\t\t\t\t{'EXIT', _} ->\n\t\t\t\t\t-1;\n\t\t\t\tRelease ->\n\t\t\t\t\tRelease\n\t\t\tend\n\tend.\n"
  },
  {
    "path": "apps/arweave/src/ar_node.erl",
    "content": "%% This Source Code Form is subject to the terms of the GNU General\n%% Public License, v. 2.0. If a copy of the GPLv2 was not distributed\n%% with this file, You can obtain one at\n%% https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html\n\n-module(ar_node).\n\n-export([get_recent_block_hash_by_height/1, get_blocks/0, get_block_index/0,\n\t\tget_current_block/0, get_current_diff/0,\n\t\tis_in_block_index/1, get_block_index_and_height/0,\n\t\tget_height/0, get_weave_size/0, get_balance/1, get_last_tx/1, get_ready_for_mining_txs/0,\n\t\tget_current_usd_to_ar_rate/0, get_current_block_hash/0,\n\t\tget_block_index_entry/1, get_2_0_hash_of_1_0_block/1, is_joined/0, get_block_anchors/0,\n\t\tget_recent_txs_map/0, get_mempool_size/0,\n\t\tget_block_shadow_from_cache/1, get_recent_partition_upper_bound_by_prev_h/1,\n\t\tget_block_txs_pairs/0, get_partition_upper_bound/1, get_nth_or_last/2,\n\t\tget_partition_number/1, get_max_partition_number/1,\n\t\tget_current_weave_size/0, get_recent_max_block_size/0,\n\t\tread_recent_blocks/3]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"arweave/include/ar_consensus.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n%%%===================================================================\n%%% API\n%%%===================================================================\n\n%% @doc Return the hash of the block of the given Height. Return not_found\n%% if Height is bigger than the current height or too small.\nget_recent_block_hash_by_height(Height) ->\n\tProps =\n\t\tets:select(\n\t\t\tnode_state,\n\t\t\t[{{'$1', '$2'},\n\t\t\t\t[{'or',\n\t\t\t\t\t{'==', '$1', height},\n\t\t\t\t\t{'==', '$1', block_anchors}}], ['$_']}]\n\t\t),\n\tCurrentHeight = proplists:get_value(height, Props),\n\tAnchors = proplists:get_value(block_anchors, Props),\n\tcase Height > CurrentHeight orelse Height =< CurrentHeight - length(Anchors) of\n\t\ttrue ->\n\t\t\tnot_found;\n\t\tfalse ->\n\t\t\tlists:nth(CurrentHeight - Height + 1, Anchors)\n\tend.\n\n%% @doc Get the current block index (the list of {block hash, weave size, tx root} triplets).\nget_blocks() ->\n\tget_block_index().\n\n%% @doc Get the current block index (the list of {block hash, weave size, tx root} triplets).\nget_block_index() ->\n\tcase ar_util:safe_ets_lookup(node_state, is_joined) of\n\t\t[{_, true}] ->\n\t\t\telement(2, get_block_index_and_height());\n\t\t_ ->\n\t\t\t[]\n\tend.\n\n%% @doc Return the current tip block. Assume the node has joined the network and\n%% initialized the state.\nget_current_block() ->\n\tcase ar_util:safe_ets_lookup(node_state, current) of\n\t\t[{_, Current}] ->\n\t\t\tar_block_cache:get(block_cache, Current);\n\t\t_ ->\n\t\t\tnot_joined\n\tend.\n\n%% @doc Return the current network difficulty. 
Assume the node has joined the network and\n%% initialized the state.\nget_current_diff() ->\n\tcase ar_util:safe_ets_lookup(node_state, diff_pair) of\n\t\t[{_, DiffPair}] ->\n\t\t\tDiffPair;\n\t\t_ ->\n\t\t\tnot_joined\n\tend.\n\nget_block_index_and_height() ->\n\tProps =\n\t\tets:select(\n\t\t\tnode_state,\n\t\t\t[{{'$1', '$2'},\n\t\t\t\t[{'or',\n\t\t\t\t\t{'==', '$1', height},\n\t\t\t\t\t{'==', '$1', recent_block_index}}], ['$_']}]\n\t\t),\n\tCurrentHeight = proplists:get_value(height, Props),\n\tRecentBI = proplists:get_value(recent_block_index, Props),\n\t{CurrentHeight, merge(RecentBI,\n\t\t\tar_block_index:get_list(CurrentHeight - length(RecentBI)))}.\n\nmerge([Elem | BI], BI2) ->\n\t[Elem | merge(BI, BI2)];\nmerge([], BI) ->\n\tBI.\n\n%% @doc Get the list of being mined or ready to be mined transactions.\n%% The list does _not_ include transactions waiting for network propagation.\nget_ready_for_mining_txs() ->\n\tgb_sets:fold(\n\t\tfun\n\t\t\t({_Utility, TXID, ready_for_mining}, Acc) ->\n\t\t\t\t[TXID | Acc];\n\t\t\t(_, Acc) ->\n\t\t\t\tAcc\n\t\tend,\n\t\t[],\n\t\tar_mempool:get_priority_set()\n\t).\n\n%% @doc Return true if the given block hash is found in the block index.\nis_in_block_index(H) ->\n\tar_block_index:member(H).\n\n%% @doc Get the current block hash.\nget_current_block_hash() ->\n\tcase ar_util:safe_ets_lookup(node_state, current) of\n\t\t[{current, H}] ->\n\t\t\tH;\n\t\t[] ->\n\t\t\tnot_joined\n\tend.\n\nread_recent_blocks(BI, SearchDepth, CustomDir) ->\n\tread_recent_blocks2(lists:sublist(BI, 2 * ar_block:get_max_tx_anchor_depth() + SearchDepth),\n\t\t\tSearchDepth, 0, CustomDir).\n\nread_recent_blocks2(_BI, Depth, Skipped, _CustomDir) when Skipped > Depth orelse\n\t\t(Skipped > 0 andalso Depth == Skipped) ->\n\tnot_found;\nread_recent_blocks2([], _SearchDepth, Skipped, _CustomDir) ->\n\t{Skipped, []};\nread_recent_blocks2([{BH, _, _} | BI], SearchDepth, Skipped, CustomDir) ->\n\tcase ar_storage:read_block(BH, CustomDir) of\n\t\tB = #block{} ->\n\t\t\tTXs = ar_storage:read_tx(B#block.txs, CustomDir),\n\t\t\tcase lists:any(fun(TX) -> TX == unavailable end, TXs) of\n\t\t\t\ttrue ->\n\t\t\t\t\tread_recent_blocks2(BI, SearchDepth, Skipped + 1, CustomDir);\n\t\t\t\tfalse ->\n\t\t\t\t\tSizeTaggedTXs = ar_block:generate_size_tagged_list_from_txs(TXs,\n\t\t\t\t\t\t\tB#block.height),\n\t\t\t\t\tcase read_recent_blocks3(BI, 2 * ar_block:get_max_tx_anchor_depth() - 1,\n\t\t\t\t\t\t\t[B#block{ size_tagged_txs = SizeTaggedTXs, txs = TXs }], CustomDir) of\n\t\t\t\t\t\tnot_found ->\n\t\t\t\t\t\t\tnot_found;\n\t\t\t\t\t\tBlocks ->\n\t\t\t\t\t\t\t{Skipped, Blocks}\n\t\t\t\t\tend\n\t\t\tend;\n\t\tError ->\n\t\t\tar:console(\"Skipping the block ~s, reason: ~p.~n\", [ar_util:encode(BH),\n\t\t\t\t\tio_lib:format(\"~p\", [Error])]),\n\t\t\tread_recent_blocks2(BI, SearchDepth, Skipped + 1, CustomDir)\n\tend.\n\nread_recent_blocks3([], _BlocksToRead, Blocks, _CustomDir) ->\n\tlists:reverse(Blocks);\nread_recent_blocks3(_BI, 0, Blocks, _CustomDir) ->\n\tlists:reverse(Blocks);\nread_recent_blocks3([{BH, _, _} | BI], BlocksToRead, Blocks, CustomDir) ->\n\tcase ar_storage:read_block(BH, CustomDir) of\n\t\tB = #block{} ->\n\t\t\tTXs = ar_storage:read_tx(B#block.txs, CustomDir),\n\t\t\tcase lists:any(fun(TX) -> TX == unavailable end, TXs) of\n\t\t\t\ttrue ->\n\t\t\t\t\tar:console(\"Failed to find all transaction headers for the block ~s.~n\",\n\t\t\t\t\t\t\t[ar_util:encode(BH)]),\n\t\t\t\t\tnot_found;\n\t\t\t\tfalse ->\n\t\t\t\t\tSizeTaggedTXs = 
ar_block:generate_size_tagged_list_from_txs(TXs,\n\t\t\t\t\t\t\tB#block.height),\n\t\t\t\t\tread_recent_blocks3(BI, BlocksToRead - 1,\n\t\t\t\t\t\t\t[B#block{ size_tagged_txs = SizeTaggedTXs, txs = TXs } | Blocks], CustomDir)\n\t\t\tend;\n\t\tError ->\n\t\t\tar:console(\"Failed to read block header ~s, reason: ~p.~n\",\n\t\t\t\t\t[ar_util:encode(BH), io_lib:format(\"~p\", [Error])]),\n\t\t\tnot_found\n\tend.\n\n%% @doc Get the block index entry by height.\nget_block_index_entry(Height) ->\n\tcase ar_util:safe_ets_lookup(node_state, is_joined) of\n\t\t[] ->\n\t\t\tnot_joined;\n\t\t[{_, false}] ->\n\t\t\tnot_joined;\n\t\t[{_, true}] ->\n\t\t\tar_block_index:get_element_by_height(Height)\n\tend.\n\n%% @doc Get the 2.0 hash for a 1.0 block.\n%% Before 2.0, to compute a block hash, the complete wallet list\n%% and all the preceding hashes were required. Getting a wallet list\n%% and a hash list for every historical block to verify it belongs to\n%% the weave is very costly. Therefore, a list of 2.0 hashes for 1.0\n%% blocks was computed and stored along with the network client.\n%% @end\nget_2_0_hash_of_1_0_block(Height) ->\n\t[{hash_list_2_0_for_1_0_blocks, HL}] = ar_util:safe_ets_lookup(node_state, hash_list_2_0_for_1_0_blocks),\n\tFork_2_0 = ar_fork:height_2_0(),\n\tcase Height > Fork_2_0 of\n\t\ttrue ->\n\t\t\tinvalid_height;\n\t\tfalse ->\n\t\t\tlists:nth(Fork_2_0 - Height, HL)\n\tend.\n\n%% @doc Return the current height of the blockweave.\nget_height() ->\n\tcase ar_util:safe_ets_lookup(node_state, height) of\n\t\t[{height, Height}] ->\n\t\t\tHeight;\n\t\t[] ->\n\t\t\t-1\n\tend.\n\nget_weave_size() ->\n\tcase ar_util:safe_ets_lookup(node_state, weave_size) of\n\t\t[{weave_size, WeaveSize}] ->\n\t\t\tWeaveSize;\n\t\t[] ->\n\t\t\t-1\n\tend.\n\n%% @doc Check whether the node has joined the network.\nis_joined() ->\n\tcase ar_util:safe_ets_lookup(node_state, is_joined) of\n\t\t[{is_joined, IsJoined}] ->\n\t\t\tIsJoined;\n\t\t[] ->\n\t\t\tfalse\n\tend.\n\n%% @doc Get the currently estimated USD to AR exchange rate.\nget_current_usd_to_ar_rate() ->\n\t[{_, Rate}] = ar_util:safe_ets_lookup(node_state, usd_to_ar_rate),\n\tRate.\n\n%% @doc Returns a list of block anchors corresponding to the current state -\n%% the hashes of the recent blocks that can be used in transactions as anchors.\n%% @end\nget_block_anchors() ->\n\tcase ar_util:safe_ets_lookup(node_state, block_anchors) of\n\t\t[{block_anchors, BlockAnchors}] ->\n\t\t\tBlockAnchors;\n\t\t[] ->\n\t\t\tnot_joined\n\tend.\n\n%% @doc Return a map TXID -> ok containing all the recent transaction identifiers.\n%% Used for preventing replay attacks.\n%% @end\nget_recent_txs_map() ->\n\t[{recent_txs_map, RecentTXMap}] = ar_util:safe_ets_lookup(node_state, recent_txs_map),\n\tRecentTXMap.\n\n%% @doc Return the memory pool size.\nget_mempool_size() ->\n\t[{mempool_size, MempoolSize}] = ar_util:safe_ets_lookup(node_state, mempool_size),\n\tMempoolSize.\n\n%% @doc Get the block shadow from the block cache.\nget_block_shadow_from_cache(H) ->\n\tar_block_cache:get(block_cache, H).\n\n%% @doc Get the current balance of a given wallet address.\n%% The balance returned is in relation to the node's current wallet list.\nget_balance({SigType, PubKey}) ->\n\tget_balance(ar_wallet:to_address(PubKey, SigType));\nget_balance(MaybeRSAPub) when byte_size(MaybeRSAPub) == 512 ->\n\t%% A legacy feature where we may search the public key instead of address.\n\tar_wallets:get_balance(ar_wallet:hash_pub_key(MaybeRSAPub));\nget_balance(Addr) 
->\n\tar_wallets:get_balance(Addr).\n\n%% @doc Get the last tx id associated with a given wallet address.\n%% Should the wallet not have made a tx the empty binary will be returned.\nget_last_tx({SigType, PubKey}) ->\n\tget_last_tx(ar_wallet:to_address(PubKey, SigType));\nget_last_tx(MaybeRSAPub) when byte_size(MaybeRSAPub) == 512 ->\n\t%% A legacy feature where we may search the public key instead of address.\n\tget_last_tx(ar_wallet:hash_pub_key(MaybeRSAPub));\nget_last_tx(Addr) ->\n\t{ok, ar_wallets:get_last_tx(Addr)}.\n\nget_recent_partition_upper_bound_by_prev_h(H) ->\n\tget_recent_partition_upper_bound_by_prev_h(H, 0).\n\n%% @doc Get the list of the recent {H, TXIDs} pairs sorted from latest to earliest.\nget_block_txs_pairs() ->\n\t[{_, BlockTXPairs}] = ar_util:safe_ets_lookup(node_state, block_txs_pairs),\n\tBlockTXPairs.\n\nget_nth_or_last(N, BI) ->\n\tcase length(BI) < N of\n\t\ttrue ->\n\t\t\tlists:last(BI);\n\t\tfalse ->\n\t\t\tlists:nth(N, BI)\n\tend.\n\nget_partition_upper_bound(BI) ->\n\telement(2, get_nth_or_last(?SEARCH_SPACE_UPPER_BOUND_DEPTH, BI)).\n\nget_recent_partition_upper_bound_by_prev_h(H, Diff) ->\n\tcase ar_block_cache:get_block_and_status(block_cache, H) of\n\t\t{_B, {on_chain, _}} ->\n\t\t\t[{_, BI}] = ar_util:safe_ets_lookup(node_state, recent_block_index),\n\t\t\tGenesis = length(BI) =< ?SEARCH_SPACE_UPPER_BOUND_DEPTH,\n\t\t\tget_recent_partition_upper_bound_by_prev_h(H, Diff, BI, Genesis);\n\t\t{#block{ indep_hash = H2, previous_block = PrevH, weave_size = WeaveSize }, _} ->\n\t\t\tcase Diff == ?SEARCH_SPACE_UPPER_BOUND_DEPTH - 1 of\n\t\t\t\ttrue ->\n\t\t\t\t\t{H2, WeaveSize};\n\t\t\t\tfalse ->\n\t\t\t\t\tget_recent_partition_upper_bound_by_prev_h(PrevH, Diff + 1)\n\t\t\tend;\n\t\tnot_found ->\n\t\t\t?LOG_INFO([{event, prev_block_not_found}, {h, ar_util:encode(H)}, {depth, Diff}]),\n\t\t\tnot_found\n\tend.\n\nget_recent_partition_upper_bound_by_prev_h(H, Diff, [{H, _, _} | _] = BI, Genesis) ->\n\tPartitionUpperBoundDepth = ?SEARCH_SPACE_UPPER_BOUND_DEPTH,\n\tDepth = PartitionUpperBoundDepth - Diff,\n\tcase length(BI) < Depth of\n\t\ttrue ->\n\t\t\tcase Genesis of\n\t\t\t\ttrue ->\n\t\t\t\t\t{H2, PartitionUpperBound, _TXRoot} = lists:last(BI),\n\t\t\t\t\t{H2, PartitionUpperBound};\n\t\t\t\tfalse ->\n\t\t\t\t\tnot_found\n\t\t\tend;\n\t\tfalse ->\n\t\t\t{H2, PartitionUpperBound, _TXRoot} = lists:nth(Depth, BI),\n\t\t\t{H2, PartitionUpperBound}\n\tend;\nget_recent_partition_upper_bound_by_prev_h(H, Diff, [_ | BI], Genesis) ->\n\tget_recent_partition_upper_bound_by_prev_h(H, Diff, BI, Genesis);\nget_recent_partition_upper_bound_by_prev_h(H, Diff, [], _Genesis) ->\n\t?LOG_INFO([{event, prev_block_not_found_when_scanning_recent_block_index},\n\t\t\t{h, ar_util:encode(H)}, {depth, Diff}]),\n\tnot_found.\n\nget_partition_number(undefined) ->\n\tundefined;\nget_partition_number(infinity) ->\n\tinfinity;\nget_partition_number(Offset) ->\n\tOffset div ar_block:partition_size().\n\n%% @doc Excludes the last partition as it may be incomplete and therefore provides\n%% a mining advantage (e.g. it can fit in RAM)\nget_max_partition_number(infinity) ->\n\tinfinity;\nget_max_partition_number(PartitionUpperBound) ->\n\tmax(0, PartitionUpperBound div ar_block:partition_size() - 1).\n\n%% @doc Return the current weave size. 
Assume the node has joined the network and\n%% initialized the state.\nget_current_weave_size() ->\n\t[{_, WeaveSize}] = ar_util:safe_ets_lookup(node_state, weave_size),\n\tWeaveSize.\n\n%% @doc Return the maximum block size among the latest ?BLOCK_INDEX_HEAD_LEN blocks.\n%% Assume the node has joined the network and initialized the state.\nget_recent_max_block_size() ->\n\t[{_, MaxBlockSize}] = ar_util:safe_ets_lookup(node_state, recent_max_block_size),\n\tMaxBlockSize.\n\n%%%===================================================================\n%%% Tests.\n%%%===================================================================\n\nget_recent_partition_upper_bound_by_prev_h_short_cache_test() ->\n\tar_block_cache:new(block_cache, B0 = test_block(1, 1, <<>>)),\n\tH0 = B0#block.indep_hash,\n\tBI = lists:reverse([{H0, 20, <<>>}\n\t\t\t| [{crypto:strong_rand_bytes(48), 20, <<>>} || _ <- lists:seq(1, 99)]]),\n\tets:insert(node_state, {recent_block_index, BI}),\n\t?assertEqual(not_found, get_recent_partition_upper_bound_by_prev_h(B0#block.indep_hash)),\n\t?assertEqual(not_found,\n\t\t\tget_recent_partition_upper_bound_by_prev_h(crypto:strong_rand_bytes(48))),\n\t{HPrev, _, _} = lists:nth(length(BI) - ?SEARCH_SPACE_UPPER_BOUND_DEPTH + 2, BI),\n\t?assertEqual(not_found, get_recent_partition_upper_bound_by_prev_h(HPrev)),\n\t{H, _, _} = lists:nth(length(BI) - ?SEARCH_SPACE_UPPER_BOUND_DEPTH + 1, BI),\n\t?assertEqual(not_found, get_recent_partition_upper_bound_by_prev_h(H)),\n\tadd_blocks(tl(lists:reverse(BI)), 2, 2, H0),\n\t?assertEqual(not_found, get_recent_partition_upper_bound_by_prev_h(HPrev)),\n\t?assertEqual({H0, 20}, get_recent_partition_upper_bound_by_prev_h(H)),\n\t{HNext, _, _} = lists:nth(length(BI) - ?SEARCH_SPACE_UPPER_BOUND_DEPTH, BI),\n\t{H1, _, _} = lists:nth(99, BI),\n\t?assertEqual({H1, 20}, get_recent_partition_upper_bound_by_prev_h(HNext)).\n\nget_recent_partition_upper_bound_by_prev_h_genesis_test() ->\n\tar_block_cache:new(block_cache, B0 = test_block(0, 1, <<>>)),\n\tH0 = B0#block.indep_hash,\n\tets:insert(node_state, {recent_block_index, [{H0, 20, <<>>}]}),\n\t?assertEqual({H0, 20}, get_recent_partition_upper_bound_by_prev_h(H0)).\n\ntest_block(Height, CDiff, PrevH) ->\n\ttest_block(crypto:strong_rand_bytes(48), Height, CDiff, PrevH).\n\ntest_block(H, Height, CDiff, PrevH) ->\n\t#block{ indep_hash = H, height = Height, cumulative_diff = CDiff, previous_block = PrevH }.\n\nadd_blocks([{H, _, _} | BI], Height, CDiff, PrevH) ->\n\tar_block_cache:add_validated(block_cache, test_block(H, Height, CDiff, PrevH)),\n\tar_block_cache:mark_tip(block_cache, H),\n\tadd_blocks(BI, Height + 1, CDiff + 1, H);\nadd_blocks([], _Height, _CDiff, _PrevH) ->\n\tok.\n"
  },
  {
    "path": "apps/arweave/src/ar_node_sup.erl",
    "content": "%% This Source Code Form is subject to the terms of the GNU General\n%% Public License, v. 2.0. If a copy of the GPLv2 was not distributed\n%% with this file, You can obtain one at\n%% https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html\n\n-module(ar_node_sup).\n-behaviour(supervisor).\n\n%% API\n-export([start_link/0]).\n\n%% Supervisor callbacks\n-export([init/1]).\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"arweave/include/ar_sup.hrl\").\n\n%% ===================================================================\n%% API functions\n%% ===================================================================\nstart_link() ->\n\tsupervisor:start_link({local, ?MODULE}, ?MODULE, []).\n\n%% ===================================================================\n%% Supervisor callbacks\n%% ===================================================================\ninit([]) ->\n\t{ok, {supervisor_spec(), children_spec()}}.\n\nsupervisor_spec() ->\n\t#{ strategy => one_for_all\n\t , intensity => 5\n\t , period => 10\n\t }.\n\n%%--------------------------------------------------------------------\n%% the order is important. the first process to be started is\n%% ar_node_worker, then other processes in order. The shutdown\n%% is in reverse, the last process to be stopped is ar_node_worker.\n%%--------------------------------------------------------------------\nchildren_spec() ->\n\tlists:flatten([\n\t\tar_node_worker_spec(),\n\t\tar_semaphores_spec(),\n\t\tar_blacklist_middleware_spec(),\n\t\tar_http_iface_server_spec()\n\t]).\n\n%%--------------------------------------------------------------------\n%% ar_node_worker is the main process, must be started before others,\n%% and should be stopped at last. This process should not be restarted.\n%%--------------------------------------------------------------------\nar_node_worker_spec() ->\n\t#{ id => ar_node_worker\n\t , start => {ar_node_worker, start_link, []}\n\t , type => worker\n\t , shutdown => ?SHUTDOWN_TIMEOUT\n\t , restart => temporary\n\t }.\n\n%%--------------------------------------------------------------------\n%% ar_http_iface_server process is a frontend to cowboy:start*/3,\n%% and will return a worker. This worker is protecting the\n%% cowboy listener (stored in its state). The timeout should be\n%% greater or equal to the TCP_MAX_CONNECTION to avoid killing the\n%% child too early during the shutdown procedure.\n%%--------------------------------------------------------------------\nar_http_iface_server_spec() ->\n\t#{ id => ar_http_iface_server\n\t , start => {ar_http_iface_server, start_link, []}\n\t , type => worker\n\t , shutdown => ?SHUTDOWN_TCP_CONNECTION_TIMEOUT*2*1000\n\t }.\n\n%%--------------------------------------------------------------------\n%% ar_blacklist_middle process is transient, it will configure\n%% a timer and then return. 
In case of error, it should be\n%% restarted.\n%%--------------------------------------------------------------------\nar_blacklist_middleware_spec() ->\n\t#{ id => ar_blacklist_middleware\n\t , start => {ar_blacklist_middleware, start_link, []}\n\t , type => worker\n\t , restart => transient\n\t , shutdown => ?SHUTDOWN_TIMEOUT\n\t }.\n\n%%--------------------------------------------------------------------\n%% ar_semaphores are processes started based on arweave\n%% configuration.\n%%--------------------------------------------------------------------\nar_semaphores_spec() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tSemaphores = Config#config.semaphores,\n\t[ ar_semaphore_spec(Name, N) || {Name, N} <- maps:to_list(Semaphores) ].\n\nar_semaphore_spec(Name, N) ->\n\t#{ id => Name\n\t , start => {ar_semaphore, start_link, [Name, N]}\n\t , type => worker\n\t , shutdown => ?SHUTDOWN_TIMEOUT\n\t }.\n"
  },
  {
    "path": "apps/arweave/src/ar_node_utils.erl",
    "content": "%%% @doc Different utility functions for node and node worker.\n-module(ar_node_utils).\n\n-export([apply_tx/3, apply_txs/3, update_accounts/3, validate/6,\n\th1_passes_diff_check/3, h2_passes_diff_check/3, solution_passes_diff_check/2,\n\tblock_passes_diff_check/1, block_passes_diff_check/2, passes_diff_check/4,\n\tscaled_diff/2, update_account/6, is_account_banned/2]).\n\n-include(\"ar.hrl\").\n-include(\"ar_pricing.hrl\").\n-include(\"ar_consensus.hrl\").\n-include(\"ar_mining.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n\n%% @doc Update the given accounts by applying a transaction.\napply_tx(Accounts, Denomination, TX) ->\n\tAddr = ar_tx:get_owner_address(TX),\n\tcase maps:get(Addr, Accounts, not_found) of\n\t\tnot_found ->\n\t\t\tAccounts;\n\t\t_ ->\n\t\t\tapply_tx2(Accounts, Denomination, Addr, TX)\n\tend.\n\n%% @doc Update the given accounts by applying the given transactions.\napply_txs(Accounts, Denomination, TXs) ->\n\tlists:foldl(fun(TX, Acc) -> apply_tx(Acc, Denomination, TX) end, Accounts, TXs).\n\n%% @doc Distribute transaction fees across accounts and the endowment pool,\n%% reserve a reward for the current miner, release the reserved reward to the corresponding\n%% miner. If a double-signing proof is provided, ban the account and assign a\n%% reward to the prover.\nupdate_accounts(B, PrevB, Accounts) ->\n\tEndowmentPool = PrevB#block.reward_pool,\n\tRate = ar_pricing:usd_to_ar_rate(PrevB),\n\tPricePerGiBMinute = PrevB#block.price_per_gib_minute,\n\tKryderPlusRateMultiplierLatch = PrevB#block.kryder_plus_rate_multiplier_latch,\n\tKryderPlusRateMultiplier = PrevB#block.kryder_plus_rate_multiplier,\n\tDenomination = PrevB#block.denomination,\n\tDebtSupply = PrevB#block.debt_supply,\n\tTXs = B#block.txs,\n\tBlockInterval = ar_block_time_history:compute_block_interval(PrevB),\n\tArgs =\n\t\tget_miner_reward_and_endowment_pool({EndowmentPool, DebtSupply, TXs,\n\t\t\t\tB#block.reward_addr, B#block.weave_size, B#block.height, B#block.timestamp,\n\t\t\t\tRate, PricePerGiBMinute, KryderPlusRateMultiplierLatch,\n\t\t\t\tKryderPlusRateMultiplier, Denomination, BlockInterval}),\n\tAccounts2 = apply_txs(Accounts, Denomination, TXs),\n\ttrue = B#block.height >= ar_fork:height_2_6(),\n\tupdate_accounts2(B, PrevB, Accounts2, Args).\n\n%%--------------------------------------------------------------------\n%% @doc Perform the last stage of block validation. 
The majority of\n%% the checks are made in `ar_block_pre_validator.erl',\n%% `ar_nonce_limiter.erl', and `ar_node_utils:update_accounts/3'.\n%% @end\n%%--------------------------------------------------------------------\n-spec validate(NewB, B, Wallets, BlocksAnchors, RecentTXMap, PartitionUpperBound) -> Return when\n\tNewB :: #block{},\n\tB :: #block{},\n\tWallets :: term(),\n\tBlocksAnchors :: term(),\n\tRecentTXMap :: term(),\n\tPartitionUpperBound :: term(),\n\tReturn :: valid | {invalid, Reason},\n\tReason :: term().\n\nvalidate(NewB, B, Wallets, BlockAnchors, RecentTXMap, PartitionUpperBound) ->\n\t?LOG_INFO([{event, validating_block}, {hash, ar_util:encode(NewB#block.indep_hash)}]),\n\tcase timer:tc(\n\t\tfun() ->\n\t\t\ttry\n\t\t\t\tdo_validate(NewB, B, Wallets, BlockAnchors, RecentTXMap, PartitionUpperBound)\n\t\t\tcatch\n\t\t\t\tC:R:S ->\n\t\t\t\t\t?LOG_ERROR([\n\t\t\t\t\t\t{event, block_validation_exception},\n\t\t\t\t\t\t{class, C},\n\t\t\t\t\t\t{reason, R},\n\t\t\t\t\t\t{stacktrace, S},\n\t\t\t\t\t\t{hash, ar_util:encode(NewB#block.indep_hash)},\n\t\t\t\t\t\t{height, NewB#block.height}\n\t\t\t\t\t]),\n\t\t\t\t\t{invalid, validation_exception}\n\t\t\tend\n\n\t\tend\n\t) of\n\t\t{TimeTaken, valid} ->\n\t\t\t?LOG_INFO([{event, block_validation_successful},\n\t\t\t\t\t{hash, ar_util:encode(NewB#block.indep_hash)},\n\t\t\t\t\t{time_taken_us, TimeTaken}]),\n\t\t\tvalid;\n\t\t{TimeTaken, {invalid, Reason}} ->\n\t\t\t?LOG_INFO([{event, block_validation_failed}, {reason, Reason},\n\t\t\t\t\t{hash, ar_util:encode(NewB#block.indep_hash)},\n\t\t\t\t\t{time_taken_us, TimeTaken}]),\n\t\t\t{invalid, Reason};\n\t\t{TimeTaken, {error, Reason}} ->\n\t\t\t?LOG_INFO([{event, block_validation_failed}, {reason, Reason},\n\t\t\t\t\t{hash, ar_util:encode(NewB#block.indep_hash)},\n\t\t\t\t\t{time_taken_us, TimeTaken}]),\n\t\t\t{invalid, Reason};\n\t\t{TimeTaken, Else} ->\n\t\t\t?LOG_ERROR([{event, block_validation_failed}, {reason, Else},\n\t\t\t\t\t{hash, ar_util:encode(NewB#block.indep_hash)},\n\t\t\t\t\t{time_taken_us, TimeTaken}]),\n\t\t\t{invalid, Else}\n\n\tend.\n\nh1_passes_diff_check(H1, DiffPair, PackingDifficulty) ->\n\tpasses_diff_check(H1, true, DiffPair, PackingDifficulty).\n\nh2_passes_diff_check(H2, DiffPair, PackingDifficulty) ->\n\tpasses_diff_check(H2, false, DiffPair, PackingDifficulty).\n\nsolution_passes_diff_check(Solution, DiffPair) ->\n\tSolutionHash = Solution#mining_solution.solution_hash,\n\tPackingDifficulty = Solution#mining_solution.packing_difficulty,\n\tIsPoA1 = ar_mining_server:is_one_chunk_solution(Solution),\n\tpasses_diff_check(SolutionHash, IsPoA1, DiffPair, PackingDifficulty).\n\nblock_passes_diff_check(Block) ->\n\tSolutionHash = Block#block.hash,\n\tblock_passes_diff_check(SolutionHash, Block).\n\nblock_passes_diff_check(SolutionHash, Block) ->\n\tIsPoA1 = (Block#block.recall_byte2 == undefined),\n\tPackingDifficulty = Block#block.packing_difficulty,\n\tDiffPair = ar_difficulty:diff_pair(Block),\n\tpasses_diff_check(SolutionHash, IsPoA1, DiffPair, PackingDifficulty).\n\n-ifdef(LOCALNET).\n%% We skip difficulty checks on localnet for faster block production.\npasses_diff_check(_SolutionHash, _IsPoA1, _DiffPair, _PackingDifficulty) ->\n\ttrue.\n\n-else.\npasses_diff_check(SolutionHash, IsPoA1, not_set, _PackingDifficulty) ->\n\t?LOG_ERROR([{event, diff_check_not_set}, {solution_hash, SolutionHash}, {is_poa1, IsPoA1}]),\n\tfalse;\npasses_diff_check(SolutionHash, IsPoA1, {PoA1Diff, Diff}, PackingDifficulty) ->\n\tDiff2 =\n\t\tcase IsPoA1 of\n\t\t\ttrue 
->\n\t\t\t\tPoA1Diff;\n\t\t\tfalse ->\n\t\t\t\tDiff\n\t\tend,\n\tbinary:decode_unsigned(SolutionHash) > scaled_diff(Diff2, PackingDifficulty).\n-endif.\n\nscaled_diff(RawDiff, PackingDifficulty) ->\n\tcase PackingDifficulty of\n\t\t0 ->\n\t\t\tRawDiff;\n\t\t_ ->\n\t\t\tSubDiff = ar_difficulty:sub_diff(RawDiff, ?COMPOSITE_PACKING_SUB_CHUNK_COUNT),\n\t\t\t%% We are introducing composite packing along with reducing the recall range\n\t\t\t%% from 200 MiB to 50 MiB while keeping the total worth of the nonces constant\n\t\t\t%% so we want the mining difficulty to be 1 / (PackingDifficulty * 4)\n\t\t\t%% of the difficulty applied to the 200 MiB-range nonces.\n\t\t\tar_difficulty:scale_diff(SubDiff, {1, PackingDifficulty * 4},\n\t\t\t\t\t%% The minimal difficulty height. It does not change at the\n\t\t\t\t\t%% packing difficulty fork.\n\t\t\t\t\tar_fork:height_2_8())\n\tend.\n\nupdate_account(Addr, Balance, LastTX, 1, true, Accounts) ->\n\tmaps:put(Addr, {Balance, LastTX}, Accounts);\nupdate_account(Addr, Balance, LastTX, Denomination, MiningPermission, Accounts) ->\n\tmaps:put(Addr, {Balance, LastTX, Denomination, MiningPermission}, Accounts).\n\nis_account_banned(Addr, Accounts) ->\n\tcase maps:get(Addr, Accounts, not_found) of\n\t\tnot_found ->\n\t\t\tfalse;\n\t\t{_, _} ->\n\t\t\tfalse;\n\t\t{_, _, _, MiningPermission} ->\n\t\t\tnot MiningPermission\n\tend.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\napply_tx2(Accounts, Denomination, Addr, TX) ->\n\tupdate_recipient_balance(\n\t\t\tupdate_sender_balance(Accounts, Denomination, Addr, TX), Denomination, TX).\n\nupdate_sender_balance(Accounts, Denomination, Addr,\n\t\t#tx{\n\t\t\tid = ID,\n\t\t\tquantity = Qty,\n\t\t\treward = Reward,\n\t\t\tdenomination = TXDenomination\n\t\t}) ->\n\tcase maps:get(Addr, Accounts, not_found) of\n\t\t{Balance, _LastTX} ->\n\t\t\tBalance2 = ar_pricing:redenominate(Balance, 1, Denomination),\n\t\t\tSpent = ar_pricing:redenominate(Qty + Reward, TXDenomination, Denomination),\n\t\t\tupdate_account(Addr, Balance2 - Spent, ID, Denomination, true, Accounts);\n\t\t{Balance, _LastTX, AccountDenomination, MiningPermission} ->\n\t\t\tBalance2 = ar_pricing:redenominate(Balance, AccountDenomination, Denomination),\n\t\t\tSpent = ar_pricing:redenominate(Qty + Reward, TXDenomination, Denomination),\n\t\t\tupdate_account(Addr, Balance2 - Spent, ID, Denomination, MiningPermission,\n\t\t\t\t\tAccounts);\n\t\t_ ->\n\t\t\tAccounts\n\tend.\n\nupdate_recipient_balance(Accounts, _Denomination, #tx{ quantity = 0 }) ->\n\tAccounts;\nupdate_recipient_balance(Accounts, Denomination,\n\t\t#tx{\n\t\t\ttarget = To,\n\t\t\tquantity = Qty,\n\t\t\tdenomination = TXDenomination\n\t\t}) ->\n\tcase maps:get(To, Accounts, not_found) of\n\t\tnot_found ->\n\t\t\tQty2 = ar_pricing:redenominate(Qty, TXDenomination, Denomination),\n\t\t\tupdate_account(To, Qty2, <<>>, Denomination, true, Accounts);\n\t\t{Balance, LastTX} ->\n\t\t\tQty2 = ar_pricing:redenominate(Qty, TXDenomination, Denomination),\n\t\t\tBalance2 = ar_pricing:redenominate(Balance, 1, Denomination),\n\t\t\tupdate_account(To, Balance2 + Qty2, LastTX, Denomination, true, Accounts);\n\t\t{Balance, LastTX, AccountDenomination, MiningPermission} ->\n\t\t\tQty2 = ar_pricing:redenominate(Qty, TXDenomination, Denomination),\n\t\t\tBalance2 = ar_pricing:redenominate(Balance, AccountDenomination, Denomination),\n\t\t\tupdate_account(To, Balance2 + Qty2, LastTX, 
Denomination, MiningPermission,\n\t\t\t\t\tAccounts)\n\tend.\n\nget_miner_reward_and_endowment_pool(Args) ->\n\t{EndowmentPool, DebtSupply, TXs, RewardAddr, WeaveSize, Height, Timestamp, Rate,\n\t\t\tPricePerGiBMinute, KryderPlusRateMultiplierLatch, KryderPlusRateMultiplier,\n\t\t\tDenomination, BlockInterval} = Args,\n\ttrue = Height >= ar_fork:height_2_4(),\n\tcase ar_pricing_transition:is_v2_pricing_height(Height) of\n\t\ttrue ->\n\t\t\t{MinerReward, EndowmentPool2, DebtSupply2, KryderPlusRateMultiplierLatch2,\n\t\t\t\t\tKryderPlusRateMultiplier2, _, _} = ar_pricing:get_miner_reward_endowment_pool_debt_supply({EndowmentPool, DebtSupply,\n\t\t\t\t\tTXs, WeaveSize, Height, PricePerGiBMinute, KryderPlusRateMultiplierLatch,\n\t\t\t\t\tKryderPlusRateMultiplier, Denomination, BlockInterval}),\n\t\t\t{MinerReward, EndowmentPool2, DebtSupply2, KryderPlusRateMultiplierLatch2, KryderPlusRateMultiplier2};\n\t\tfalse ->\n\t\t\t{MinerReward, EndowmentPool2} = ar_pricing:get_miner_reward_and_endowment_pool({\n\t\t\t\t\tEndowmentPool, TXs, RewardAddr, WeaveSize, Height, Timestamp, Rate}),\n\t\t\t{MinerReward, EndowmentPool2, 0, 0, 1}\n\tend.\n\nvalidate_account_anchors(Accounts, TXs) ->\n\tnot lists:any(fun(TX) -> is_wallet_invalid(TX, Accounts) end, TXs).\n\nupdate_accounts2(B, PrevB, Accounts, Args) ->\n\tcase is_account_banned(B#block.reward_addr, Accounts) of\n\t\ttrue ->\n\t\t\t{error, mining_address_banned};\n\t\tfalse ->\n\t\t\tupdate_accounts3(B, PrevB, Accounts, Args)\n\tend.\n\nupdate_accounts3(B, PrevB, Accounts, Args) ->\n\tcase may_be_apply_double_signing_proof(B, PrevB, Accounts) of\n\t\t{ok, Accounts2} ->\n\t\t\tAccounts3 = ar_rewards:apply_rewards(PrevB, Accounts2),\n\t\t\tupdate_accounts4(B, PrevB, Accounts3, Args);\n\t\tError ->\n\t\t\tError\n\tend.\n\nmay_be_apply_double_signing_proof(#block{ double_signing_proof = undefined }, _PrevB, Accounts) ->\n\t{ok, Accounts};\nmay_be_apply_double_signing_proof(#block{\n\t\tdouble_signing_proof = {_Pub, Sig, _, _, _, Sig, _, _, _} }, _PrevB, _Accounts) ->\n\t{error, invalid_double_signing_proof_same_signature};\nmay_be_apply_double_signing_proof(B, PrevB, Accounts) ->\n\t{_Pub, _Signature1, CDiff1, PrevCDiff1, _Preimage1, _Signature2, CDiff2, PrevCDiff2,\n\t\t\t_Preimage2} = B#block.double_signing_proof,\n\tcase ar_block:get_double_signing_condition(CDiff1, PrevCDiff1, CDiff2, PrevCDiff2) of\n\t\tfalse ->\n\t\t\t{error, invalid_double_signing_proof_cdiff};\n\t\ttrue ->\n\t\t\tmay_be_apply_double_signing_proof2(B, PrevB, Accounts)\n\tend.\n\nmay_be_apply_double_signing_proof2(B, PrevB, Accounts) ->\n\t{Pub, _Signature1, _CDiff1, _PrevCDiff1, _Preimage1, _Signature2, _CDiff2, _PrevCDiff2,\n\t\t\t_Preimage2} = B#block.double_signing_proof,\n\tKey = ar_block:get_reward_key(Pub, B#block.height),\n\tcase B#block.reward_key == Key of\n\t\ttrue ->\n\t\t\t{error, invalid_double_signing_proof_same_address};\n\t\tfalse ->\n\t\t\tAddr = ar_wallet:to_address(Key),\n\t\t\tcase is_account_banned(Addr, Accounts) of\n\t\t\t\ttrue ->\n\t\t\t\t\t{error, invalid_double_signing_proof_already_banned};\n\t\t\t\tfalse ->\n\t\t\t\t\tLockedRewards = ar_rewards:get_locked_rewards(PrevB),\n\t\t\t\t\tcase ar_rewards:has_locked_reward(Addr, LockedRewards) of\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t{error, invalid_double_signing_proof_not_in_reward_history};\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\tmay_be_apply_double_signing_proof3(B, PrevB, Accounts)\n\t\t\t\t\tend\n\t\t\tend\n\tend.\n\nmay_be_apply_double_signing_proof3(B, PrevB, Accounts) ->\n\t#block{ height = Height } = 
B,\n\t{Pub, Signature1, CDiff1, PrevCDiff1, Preimage1, Signature2, CDiff2, PrevCDiff2,\n\t\t\tPreimage2} = B#block.double_signing_proof,\n\tSignaturePreimage1 = ar_block:get_block_signature_preimage(CDiff1, PrevCDiff1,\n\t\t\tPreimage1, Height),\n\tKey = ar_block:get_reward_key(Pub, B#block.height),\n\tAddr = ar_wallet:to_address(Key),\n\tcase ar_wallet:verify(Key, SignaturePreimage1, Signature1) of\n\t\tfalse ->\n\t\t\t{error, invalid_double_signing_proof_invalid_signature};\n\t\ttrue ->\n\t\t\tSignaturePreimage2 = ar_block:get_block_signature_preimage(CDiff2, PrevCDiff2,\n\t\t\t\t\tPreimage2, Height),\n\t\t\tcase ar_wallet:verify(Key, SignaturePreimage2, Signature2) of\n\t\t\t\tfalse ->\n\t\t\t\t\t{error, invalid_double_signing_proof_invalid_signature};\n\t\t\t\ttrue ->\n\t\t\t\t\t?LOG_INFO([{event, banning_account},\n\t\t\t\t\t\t\t{address, ar_util:encode(Addr)},\n\t\t\t\t\t\t\t{previous_block, ar_util:encode(B#block.previous_block)},\n\t\t\t\t\t\t\t{height, Height}]),\n\t\t\t\t\t{ok, ban_account(Addr, Accounts, PrevB#block.denomination)}\n\t\t\tend\n\tend.\n\nban_account(Addr, Accounts, Denomination) ->\n\tcase maps:get(Addr, Accounts, not_found) of\n\t\tnot_found ->\n\t\t\tmaps:put(Addr, {1, <<>>, Denomination, false}, Accounts);\n\t\t{Balance, LastTX} ->\n\t\t\tBalance2 = ar_pricing:redenominate(Balance, 1, Denomination),\n\t\t\tmaps:put(Addr, {Balance2 + 1, LastTX, Denomination, false}, Accounts);\n\t\t{Balance, LastTX, AccountDenomination, _MiningPermission} ->\n\t\t\tBalance2 = ar_pricing:redenominate(Balance, AccountDenomination, Denomination),\n\t\t\tmaps:put(Addr, {Balance2 + 1, LastTX, Denomination, false}, Accounts)\n\tend.\n\nupdate_accounts4(B, PrevB, Accounts, Args) ->\n\t{MinerReward, EndowmentPool, DebtSupply, KryderPlusRateMultiplierLatch,\n\t\t\tKryderPlusRateMultiplier} = Args,\n\tcase B#block.double_signing_proof of\n\t\tundefined ->\n\t\t\tupdate_accounts5(B, Accounts, Args);\n\t\tProof ->\n\t\t\tDenomination = PrevB#block.denomination,\n\t\t\tBannedAddr = ar_wallet:hash_pub_key(element(1, Proof)),\n\t\t\tSum = ar_rewards:get_total_reward_for_address(BannedAddr, PrevB) - 1,\n\t\t\t{Dividend, Divisor} = ?DOUBLE_SIGNING_PROVER_REWARD_SHARE,\n\t\t\tLockedRewards = ar_rewards:get_locked_rewards(PrevB),\n\t\t\tSample = lists:sublist(LockedRewards, ?DOUBLE_SIGNING_REWARD_SAMPLE_SIZE),\n\t\t\t{Min, MinDenomination} = get_minimal_reward(Sample),\n\t\t\tMin2 = ar_pricing:redenominate(Min, MinDenomination, Denomination),\n\t\t\tProverReward = min(Min2 * Dividend div Divisor, Sum),\n\t\t\t{MinerReward, EndowmentPool, DebtSupply, KryderPlusRateMultiplierLatch,\n\t\t\t\t\tKryderPlusRateMultiplier} = Args,\n\t\t\tEndowmentPool2 = EndowmentPool + Sum - ProverReward,\n\t\t\tAccounts2 = ar_rewards:apply_reward(Accounts, B#block.reward_addr, ProverReward,\n\t\t\t\t\tDenomination),\n\t\t\tArgs2 = {MinerReward, EndowmentPool2, DebtSupply, KryderPlusRateMultiplierLatch,\n\t\t\t\t\tKryderPlusRateMultiplier},\n\t\t\tupdate_accounts5(B, Accounts2, Args2)\n\tend.\n\nget_minimal_reward(RewardHistory) ->\n\tget_minimal_reward(\n\t\t\t%% Make sure to traverse in the order of not decreasing denomination.\n\t\t\tlists:reverse(RewardHistory), infinity, 1).\n\nget_minimal_reward([], Min, Denomination) ->\n\t{Min, Denomination};\nget_minimal_reward([{_Addr, _HashRate, Reward, RewardDenomination} | RewardHistory], Min,\n\t\tDenomination) ->\n\tMin2 =\n\t\tcase Min of\n\t\t\tinfinity ->\n\t\t\t\tinfinity;\n\t\t\t_ ->\n\t\t\t\tar_pricing:redenominate(Min, Denomination, 
RewardDenomination)\n\t\tend,\n\tcase Reward < Min2 of\n\t\ttrue ->\n\t\t\tget_minimal_reward(RewardHistory, Reward, RewardDenomination);\n\t\tfalse ->\n\t\t\tget_minimal_reward(RewardHistory, Min, Denomination)\n\tend.\n\nupdate_accounts5(B, Accounts, Args) ->\n\t{MinerReward, EndowmentPool, DebtSupply, KryderPlusRateMultiplierLatch,\n\t\t\tKryderPlusRateMultiplier} = Args,\n\tcase validate_account_anchors(Accounts, B#block.txs) of\n\t\ttrue ->\n\t\t\tAccounts2 = ar_testnet:top_up_test_wallet(Accounts, B#block.height),\n\t\t\t{ok, {EndowmentPool, MinerReward, DebtSupply, KryderPlusRateMultiplierLatch,\n\t\t\t\t\tKryderPlusRateMultiplier, Accounts2}};\n\t\tfalse ->\n\t\t\t{error, invalid_account_anchors}\n\tend.\n\ndo_validate(NewB, OldB, Wallets, BlockAnchors, RecentTXMap, PartitionUpperBound) ->\n\tvalidate_block(weave_size, {NewB, OldB, Wallets, BlockAnchors, RecentTXMap,\n\t\t\tPartitionUpperBound}).\n\nvalidate_block(weave_size, {#block{ txs = TXs } = NewB, OldB, Wallets, BlockAnchors,\n\t\tRecentTXMap, PartitionUpperBound}) ->\n\tcase ar_block:verify_weave_size(NewB, OldB, TXs) of\n\t\tfalse ->\n\t\t\t{invalid, invalid_weave_size};\n\t\ttrue ->\n\t\t\tvalidate_block(previous_block, {NewB, OldB, Wallets, BlockAnchors, RecentTXMap,\n\t\t\t\t\tPartitionUpperBound})\n\tend;\n\nvalidate_block(previous_block, {NewB, OldB, Wallets, BlockAnchors, RecentTXMap,\n\t\tPartitionUpperBound}) ->\n\tcase OldB#block.indep_hash == NewB#block.previous_block of\n\t\tfalse ->\n\t\t\t{invalid, invalid_previous_block};\n\t\ttrue ->\n\t\t\tvalidate_block(previous_solution_hash,\n\t\t\t\t{NewB, OldB, Wallets, BlockAnchors, RecentTXMap, PartitionUpperBound})\n\tend;\n\nvalidate_block(previous_solution_hash, {NewB, OldB, Wallets, BlockAnchors, RecentTXMap,\n\t\tPartitionUpperBound}) ->\n\ttrue = NewB#block.height >= ar_fork:height_2_6(),\n\tcase NewB#block.previous_solution_hash == OldB#block.hash of\n\t\tfalse ->\n\t\t\t{invalid, invalid_previous_solution_hash};\n\t\ttrue ->\n\t\t\tvalidate_block(packing_2_5_threshold, {NewB, OldB, Wallets, BlockAnchors,\n\t\t\t\t\tRecentTXMap, PartitionUpperBound})\n\tend;\n\nvalidate_block(packing_2_5_threshold, {NewB, OldB, Wallets, BlockAnchors, RecentTXMap,\n\t\tPartitionUpperBound}) ->\n\tExpectedPackingThreshold = ar_block:get_packing_threshold(OldB, PartitionUpperBound),\n\tValid =\n\t\tcase ExpectedPackingThreshold of\n\t\t\tundefined ->\n\t\t\t\ttrue;\n\t\t\t_ ->\n\t\t\t\tNewB#block.packing_2_5_threshold == ExpectedPackingThreshold\n\t\tend,\n\tcase Valid of\n\t\ttrue ->\n\t\t\tvalidate_block(strict_data_split_threshold,\n\t\t\t\t\t{NewB, OldB, Wallets, BlockAnchors, RecentTXMap});\n\t\tfalse ->\n\t\t\t{error, invalid_packing_2_5_threshold}\n\tend;\n\nvalidate_block(strict_data_split_threshold,\n\t\t{NewB, OldB, Wallets, BlockAnchors, RecentTXMap}) ->\n\tHeight = NewB#block.height,\n\tFork_2_5 = ar_fork:height_2_5(),\n\tValid =\n\t\tcase Height == Fork_2_5 of\n\t\t\ttrue ->\n\t\t\t\tNewB#block.strict_data_split_threshold == OldB#block.weave_size;\n\t\t\tfalse ->\n\t\t\t\tcase Height > Fork_2_5 of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tNewB#block.strict_data_split_threshold ==\n\t\t\t\t\t\t\t\tOldB#block.strict_data_split_threshold;\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\ttrue\n\t\t\t\tend\n\t\tend,\n\tcase Valid of\n\t\ttrue ->\n\t\t\ttrue = NewB#block.height >= ar_fork:height_2_6(),\n\t\t\tvalidate_block(usd_to_ar_rate, {NewB, OldB, Wallets, BlockAnchors, RecentTXMap});\n\t\tfalse ->\n\t\t\t{error, invalid_strict_data_split_threshold}\n\tend;\n\nvalidate_block(difficulty, {NewB, 
OldB, Wallets, BlockAnchors, RecentTXMap}) ->\n\tcase ar_retarget:validate_difficulty(NewB, OldB) of\n\t\tfalse ->\n\t\t\t{invalid, invalid_difficulty};\n\t\ttrue ->\n\t\t\tvalidate_block(usd_to_ar_rate, {NewB, OldB, Wallets, BlockAnchors, RecentTXMap})\n\tend;\n\nvalidate_block(usd_to_ar_rate, {NewB, OldB, Wallets, BlockAnchors, RecentTXMap}) ->\n\t{USDToARRate, ScheduledUSDToARRate} = ar_pricing:recalculate_usd_to_ar_rate(OldB),\n\tcase NewB#block.usd_to_ar_rate == USDToARRate\n\t\t\tandalso NewB#block.scheduled_usd_to_ar_rate == ScheduledUSDToARRate of\n\t\tfalse ->\n\t\t\t{invalid, invalid_usd_to_ar_rate};\n\t\ttrue ->\n\t\t\tvalidate_block(denomination, {NewB, OldB, Wallets, BlockAnchors, RecentTXMap})\n\tend;\n\nvalidate_block(denomination, {NewB, OldB, Wallets, BlockAnchors, RecentTXMap}) ->\n\t#block{ height = Height, denomination = Denomination,\n\t\t\tredenomination_height = RedenominationHeight } = NewB,\n\ttrue = Height >= ar_fork:height_2_6(),\n\tcase ar_pricing:may_be_redenominate(OldB) of\n\t\t{Denomination, RedenominationHeight} ->\n\t\t\tvalidate_block(reward_history_hash, {NewB, OldB, Wallets, BlockAnchors,\n\t\t\t\t\tRecentTXMap});\n\t\t_ ->\n\t\t\t{invalid, invalid_denomination}\n\tend;\n\nvalidate_block(reward_history_hash, {NewB, OldB, Wallets, BlockAnchors, RecentTXMap}) ->\n\t#block{ reward = Reward, reward_history_hash = RewardHistoryHash,\n\t\t\tdenomination = Denomination, height = Height } = NewB,\n\t#block{ reward_history = RewardHistory,\n\t\t\treward_history_hash = PreviousRewardHistoryHash } = OldB,\n\tHashRate = ar_difficulty:get_hash_rate_fixed_ratio(NewB),\n\tRewardAddr = NewB#block.reward_addr,\n\t%% Pre-2.8: slice the reward history to compute the hash\n\t%% Post-2.8: use the previous reward history hash and the head of the history to compute\n\t%% the new hash.\n\tLockedRewards = ar_rewards:trim_locked_rewards(Height,\n\t\t[{RewardAddr, HashRate, Reward, Denomination} | RewardHistory]),\n\tcase ar_rewards:reward_history_hash(Height, PreviousRewardHistoryHash, LockedRewards) of\n\t\tRewardHistoryHash ->\n\t\t\tvalidate_block(block_time_history_hash, {NewB, OldB, Wallets, BlockAnchors,\n\t\t\t\t\tRecentTXMap});\n\t\t_ ->\n\t\t\t{invalid, invalid_reward_history_hash}\n\tend;\n\nvalidate_block(block_time_history_hash, {NewB, OldB, Wallets, BlockAnchors, RecentTXMap}) ->\n\tcase NewB#block.height >= ar_fork:height_2_7() of\n\t\tfalse ->\n\t\t\tvalidate_block(next_vdf_difficulty, {NewB, OldB, Wallets, BlockAnchors,\n\t\t\t\t\tRecentTXMap});\n\t\ttrue ->\n\t\t\t#block{ block_time_history_hash = HistoryHash } = NewB,\n\t\t\tHistory = ar_block_time_history:update_history(NewB, OldB),\n\t\t\tcase ar_block_time_history:hash(History) of\n\t\t\t\tHistoryHash ->\n\t\t\t\t\tvalidate_block(next_vdf_difficulty, {NewB, OldB, Wallets, BlockAnchors,\n\t\t\t\t\t\t\tRecentTXMap});\n\t\t\t\t_ ->\n\t\t\t\t\t{invalid, invalid_block_time_history_hash}\n\t\t\tend\n\tend;\n\nvalidate_block(next_vdf_difficulty, {NewB, OldB, Wallets, BlockAnchors, RecentTXMap}) ->\n\tcase NewB#block.height >= ar_fork:height_2_7() of\n\t\tfalse ->\n\t\t\tvalidate_block(price_per_gib_minute, {NewB, OldB, Wallets, BlockAnchors,\n\t\t\t\t\tRecentTXMap});\n\t\ttrue ->\n\t\t\tExpectedNextVDFDifficulty = ar_block:compute_next_vdf_difficulty(OldB),\n\t\t\t#nonce_limiter_info{ next_vdf_difficulty = NextVDFDifficulty } =\n\t\t\t\tNewB#block.nonce_limiter_info,\n\t\t\tcase ExpectedNextVDFDifficulty == NextVDFDifficulty of\n\t\t\t\tfalse ->\n\t\t\t\t\t{invalid, invalid_next_vdf_difficulty};\n\t\t\t\ttrue 
->\n\t\t\t\t\tvalidate_block(price_per_gib_minute, {NewB, OldB, Wallets, BlockAnchors,\n\t\t\t\t\t\t\tRecentTXMap})\n\t\t\tend\n\tend;\n\nvalidate_block(price_per_gib_minute, {NewB, OldB, Wallets, BlockAnchors, RecentTXMap}) ->\n\t#block{ denomination = Denomination } = NewB,\n\t#block{ denomination = PrevDenomination } = OldB,\n\t{Price, ScheduledPrice} = ar_pricing:recalculate_price_per_gib_minute(OldB),\n\tPrice2 = ar_pricing:redenominate(Price, PrevDenomination, Denomination),\n\tScheduledPrice2 = ar_pricing:redenominate(ScheduledPrice, PrevDenomination, Denomination),\n\tcase NewB#block.price_per_gib_minute == Price2\n\t\t\tandalso NewB#block.scheduled_price_per_gib_minute == ScheduledPrice2 of\n\t\tfalse ->\n\t\t\t{invalid, invalid_price_per_gib_minute};\n\t\ttrue ->\n\t\t\tvalidate_block(txs, {NewB, OldB, Wallets, BlockAnchors, RecentTXMap})\n\tend;\n\nvalidate_block(txs, {NewB = #block{ timestamp = Timestamp, height = Height, txs = TXs },\n\t\tOldB, Wallets, BlockAnchors, RecentTXMap}) ->\n\tRate = ar_pricing:usd_to_ar_rate(OldB),\n\tPricePerGiBMinute = OldB#block.price_per_gib_minute,\n\tKryderPlusRateMultiplier = OldB#block.kryder_plus_rate_multiplier,\n\tDenomination = OldB#block.denomination,\n\tRedenominationHeight = OldB#block.redenomination_height,\n\tArgs = {TXs, Rate, PricePerGiBMinute, KryderPlusRateMultiplier, Denomination, Height - 1,\n\t\t\tRedenominationHeight, Timestamp, Wallets, BlockAnchors, RecentTXMap},\n\tcase ar_tx_replay_pool:verify_block_txs(Args) of\n\t\tinvalid ->\n\t\t\t{invalid, invalid_txs};\n\t\tvalid ->\n\t\t\ttrue = Height >= ar_fork:height_2_6(),\n\t\t\t%% The field size limits in 2.6 are naturally asserted in\n\t\t\t%% ar_serialize:binary_to_block/1.\n\t\t\tvalidate_block(tx_root, {NewB, OldB})\n\tend;\n\nvalidate_block(block_field_sizes, {NewB, OldB, _Wallets, _BlockAnchors, _RecentTXMap}) ->\n\tcase ar_block:block_field_size_limit(NewB) of\n\t\tfalse ->\n\t\t\t{invalid, invalid_field_size};\n\t\ttrue ->\n\t\t\tvalidate_block(tx_root, {NewB, OldB})\n\tend;\n\nvalidate_block(tx_root, {NewB, OldB}) ->\n\tcase ar_block:verify_tx_root(NewB) of\n\t\tfalse ->\n\t\t\t{invalid, invalid_tx_root};\n\t\ttrue ->\n\t\t\tvalidate_block(block_index_root, {NewB, OldB})\n\tend;\n\nvalidate_block(block_index_root, {NewB, OldB}) ->\n\tcase ar_block:verify_block_hash_list_merkle(NewB, OldB) of\n\t\tfalse ->\n\t\t\t{invalid, invalid_block_index_root};\n\t\ttrue ->\n\t\t\tvalidate_block(last_retarget, {NewB, OldB})\n\tend;\n\nvalidate_block(last_retarget, {NewB, OldB}) ->\n\tcase ar_block:verify_last_retarget(NewB, OldB) of\n\t\tfalse ->\n\t\t\t{invalid, invalid_last_retarget};\n\t\ttrue ->\n\t\t\tvalidate_block(cumulative_diff, {NewB, OldB})\n\tend;\n\nvalidate_block(cumulative_diff, {NewB, OldB}) ->\n\tcase ar_block:verify_cumulative_diff(NewB, OldB) of\n\t\tfalse ->\n\t\t\t{invalid, invalid_cumulative_difficulty};\n\t\ttrue ->\n\t\t\tvalidate_block(merkle_rebase_support_threshold, {NewB, OldB})\n\tend;\n\nvalidate_block(merkle_rebase_support_threshold, {NewB, OldB}) ->\n\t#block{ height = Height } = NewB,\n\tcase Height > ar_fork:height_2_7() of\n\t\ttrue ->\n\t\t\tcase NewB#block.merkle_rebase_support_threshold\n\t\t\t\t\t== OldB#block.merkle_rebase_support_threshold of\n\t\t\t\tfalse ->\n\t\t\t\t\t{error, invalid_merkle_rebase_support_threshold};\n\t\t\t\ttrue ->\n\t\t\t\t\tvalid\n\t\t\tend;\n\t\tfalse ->\n\t\t\tcase Height == ar_fork:height_2_7() of\n\t\t\t\ttrue ->\n\t\t\t\t\tcase NewB#block.merkle_rebase_support_threshold\n\t\t\t\t\t\t\t== OldB#block.weave_size 
of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\tvalid;\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t{error, invalid_merkle_rebase_support_threshold}\n\t\t\t\t\tend;\n\t\t\t\tfalse ->\n\t\t\t\t\tvalid\n\t\t\tend\n\tend.\n\n-ifdef(AR_TEST).\nis_wallet_invalid(#tx{ signature = <<>> }, _Wallets) ->\n\tfalse;\nis_wallet_invalid(TX, Wallets) ->\n\tOwnerAddress = ar_tx:get_owner_address(TX),\n\tcase maps:get(OwnerAddress, Wallets, not_found) of\n\t\t{Balance, LastTX} when Balance >= 0 ->\n\t\t\tcase Balance of\n\t\t\t\t0 ->\n\t\t\t\t\tbyte_size(LastTX) == 0;\n\t\t\t\t_ ->\n\t\t\t\t\tfalse\n\t\t\tend;\n\t\t{Balance, LastTX, _Denomination, _MiningPermission} when Balance >= 0 ->\n\t\t\tcase Balance of\n\t\t\t\t0 ->\n\t\t\t\t\tbyte_size(LastTX) == 0;\n\t\t\t\t_ ->\n\t\t\t\t\tfalse\n\t\t\tend;\n\t\t_ ->\n\t\t\ttrue\n\tend.\n-else.\nis_wallet_invalid(TX, Wallets) ->\n\tOwnerAddress = ar_tx:get_owner_address(TX),\n\tcase maps:get(OwnerAddress, Wallets, not_found) of\n\t\t{Balance, LastTX} when Balance >= 0 ->\n\t\t\tcase Balance of\n\t\t\t\t0 ->\n\t\t\t\t\tbyte_size(LastTX) == 0;\n\t\t\t\t_ ->\n\t\t\t\t\tfalse\n\t\t\tend;\n\t\t{Balance, LastTX, _Denomination, _MiningPermission} when Balance >= 0 ->\n\t\t\tcase Balance of\n\t\t\t\t0 ->\n\t\t\t\t\tbyte_size(LastTX) == 0;\n\t\t\t\t_ ->\n\t\t\t\t\tfalse\n\t\t\tend;\n\t\t_ ->\n\t\t\ttrue\n\tend.\n-endif.\n\n%%%===================================================================\n%%% Tests.\n%%%===================================================================\n\nblock_validation_test_() ->\n\t{timeout, 90, fun test_block_validation/0}.\n\ntest_block_validation() ->\n\tWallet = {_, Pub} = ar_wallet:new(),\n\t[B0] = ar_weave:init([{ar_wallet:to_address(Pub), ?AR(200), <<>>}]),\n\tar_test_node:start(B0),\n\t%% Add at least 10 KiB of data to the weave and mine a block on top,\n\t%% to make sure SPoRA mining activates.\n\tPrevTX = ar_test_node:sign_tx(main, Wallet, #{ reward => ?AR(10),\n\t\t\tdata => crypto:strong_rand_bytes(10 * ?MiB) }),\n\tar_test_node:assert_post_tx_to_peer(main, PrevTX),\n\tar_test_node:mine(),\n\t[_ | _] = ar_test_node:wait_until_height(main, 1),\n\tar_test_node:mine(),\n\t[{PrevH, _, _} | _ ] = ar_test_node:wait_until_height(main, 2),\n\tPrevB = ar_node:get_block_shadow_from_cache(PrevH),\n\tBI = ar_node:get_block_index(),\n\tPartitionUpperBound = ar_node:get_partition_upper_bound(BI),\n\tBlockAnchors = ar_node:get_block_anchors(),\n\tRecentTXMap = ar_node:get_recent_txs_map(),\n\tTX = ar_test_node:sign_tx(main, Wallet, #{ reward => ?AR(10),\n\t\t\tdata => crypto:strong_rand_bytes(7 * ?MiB), last_tx => PrevH }),\n\tar_test_node:assert_post_tx_to_peer(main, TX),\n\tar_test_node:mine(),\n\t[{H, _, _} | _] = ar_test_node:wait_until_height(main, 3),\n\tB = ar_node:get_block_shadow_from_cache(H),\n\tWallets = #{ ar_wallet:to_address(Pub) => {?AR(200), <<>>} },\n\t?assertEqual(valid, validate(B, PrevB, Wallets, BlockAnchors, RecentTXMap,\n\t\t\tPartitionUpperBound)),\n\t?assertMatch({ok, _}, update_accounts(B, PrevB, Wallets)),\n\t?assertEqual({invalid, invalid_weave_size},\n\t\t\tvalidate(B#block{ weave_size = PrevB#block.weave_size + 1 }, PrevB, Wallets,\n\t\t\t\t\tBlockAnchors, RecentTXMap, PartitionUpperBound)),\n\t?assertEqual({invalid, invalid_previous_block},\n\t\t\tvalidate(B#block{ previous_block = B#block.indep_hash }, PrevB, Wallets,\n\t\t\t\t\tBlockAnchors, RecentTXMap, PartitionUpperBound)),\n\n\t% AVDE-2026-4: invalid block\n\t?assertMatch(\n\t\t{invalid, _},\n\t\tvalidate(\n\t\t\tB,\n\t\t\tPrevB#block{\n\t\t\t\tstrict_data_split_threshold 
=\n\t\t\t\t\tPrevB#block.strict_data_split_threshold + 1\n\t\t\t},\n\t\t\tWallets,\n\t\t\tBlockAnchors,\n\t\t\tRecentTXMap,\n\t\t\tPartitionUpperBound\n\t\t)\n\t),\n\n\tInvLastRetargetB = B#block{ last_retarget = B#block.timestamp },\n\tInvDataRootB = B#block{ tx_root = crypto:strong_rand_bytes(32) },\n\tInvBlockIndexRootB = B#block{ hash_list_merkle = crypto:strong_rand_bytes(32) },\n\tInvCDiffB = B#block{ cumulative_diff = PrevB#block.cumulative_diff * 1000 },\n\t?assertEqual({invalid, invalid_difficulty},\n\t\t\tvalidate_block(difficulty, {\n\t\t\t\t\tB#block{ diff = PrevB#block.diff - 1 }, PrevB, Wallets, BlockAnchors,\n\t\t\t\t\tRecentTXMap})),\n\t?assertEqual({invalid, invalid_usd_to_ar_rate},\n\t\t\tvalidate_block(usd_to_ar_rate, {\n\t\t\t\t\tB#block{ usd_to_ar_rate = {0, 0} }, PrevB, Wallets, BlockAnchors,\n\t\t\t\t\tRecentTXMap})),\n\t?assertEqual({invalid, invalid_usd_to_ar_rate},\n\t\t\tvalidate_block(usd_to_ar_rate, {\n\t\t\t\t\tB#block{ scheduled_usd_to_ar_rate = {0, 0} }, PrevB, Wallets, BlockAnchors,\n\t\t\t\t\tRecentTXMap})),\n\t?assertEqual({invalid, invalid_txs},\n\t\t\tvalidate_block(txs, {B#block{ txs = [#tx{ signature = <<1>> }] }, PrevB, Wallets,\n\t\t\t\t\tBlockAnchors, RecentTXMap})),\n\t?assertEqual({invalid, invalid_txs},\n\t\t\tvalidate(B#block{ txs = [TX#tx{ reward = ?AR(201) }] }, PrevB, Wallets,\n\t\t\t\t\tBlockAnchors, RecentTXMap, PartitionUpperBound)),\n\t?assertEqual({error, invalid_account_anchors},\n\t\t\tupdate_accounts(B#block{ txs = [TX#tx{ reward = ?AR(201) }] }, PrevB, Wallets)),\n\t?assertEqual({invalid, invalid_tx_root},\n\t\t\tvalidate_block(tx_root, {\n\t\t\t\tInvDataRootB#block{ indep_hash = ar_block:indep_hash(InvDataRootB) }, PrevB})),\n\t?assertEqual({invalid, invalid_difficulty},\n\t\t\tvalidate_block(difficulty, {\n\t\t\t\t\tInvLastRetargetB#block{\n\t\t\t\t\t\tindep_hash = ar_block:indep_hash(InvLastRetargetB) },\n\t\t\t\t\tPrevB, Wallets, BlockAnchors, RecentTXMap})),\n\t?assertEqual({invalid, invalid_block_index_root},\n\t\t\tvalidate_block(block_index_root, {\n\t\t\t\tInvBlockIndexRootB#block{\n\t\t\t\t\tindep_hash = ar_block:indep_hash(InvBlockIndexRootB) }, PrevB})),\n\t?assertEqual({invalid, invalid_last_retarget},\n\t\t\tvalidate_block(last_retarget, {B#block{ last_retarget = 0 }, PrevB})),\n\t?assertEqual(\n\t\t{invalid, invalid_cumulative_difficulty},\n\t\tvalidate_block(cumulative_diff, {\n\t\t\t\tInvCDiffB#block{ indep_hash = ar_block:indep_hash(InvCDiffB) }, PrevB})),\n\n\tBI2 = ar_node:get_block_index(),\n\tPartitionUpperBound2 = ar_node:get_partition_upper_bound(BI2),\n\tBlockAnchors2 = ar_node:get_block_anchors(),\n\tRecentTXMap2 = ar_node:get_recent_txs_map(),\n\tar_test_node:mine(),\n\t[{H2, _, _} | _ ] = ar_test_node:wait_until_height(main, 4),\n\tB2 = ar_node:get_block_shadow_from_cache(H2),\n\t?assertEqual(valid, validate(B2, B, Wallets, BlockAnchors2, RecentTXMap2,\n\t\t\tPartitionUpperBound2)).\n\nupdate_accounts_rejects_same_signature_in_double_signing_proof_test_() ->\n\t{timeout, 30, fun test_update_accounts_rejects_same_signature_in_double_signing_proof/0}.\n\ntest_update_accounts_rejects_same_signature_in_double_signing_proof() ->\n\tAccounts = #{},\n\tKey = ar_wallet:new(),\n\tPub = element(2, element(2, Key)),\n\tRandom = crypto:strong_rand_bytes(64),\n\tPreimage = << (ar_serialize:encode_int(1, 16))/binary,\n\t\t\t(ar_serialize:encode_int(1, 16))/binary, Random/binary >>,\n\tSig1 = ar_wallet:sign(element(1, Key), Preimage),\n\tDoubleSigningProof = {Pub, Sig1, 1, 1, Random, Sig1, 1, 1, Random},\n\tBannedAddr = 
ar_wallet:to_address(Key),\n\tProverKey = ar_wallet:new(),\n\tRewardAddr = ar_wallet:to_address(ProverKey),\n\tB = #block{ timestamp = os:system_time(second), reward_addr = RewardAddr, weave_size = 1,\n\t\t\tdouble_signing_proof = DoubleSigningProof },\n\tReward = 12,\n\tPrevB = #block{ reward_history = [{RewardAddr, 0, Reward, 1}, {BannedAddr, 0, 10, 1}],\n\t\t\tusd_to_ar_rate = {1, 5}, reward_pool = 0 },\n\t?assertEqual({error, invalid_double_signing_proof_same_signature},\n\t\t\tupdate_accounts(B, PrevB, Accounts)).\n\nupdate_accounts_receives_released_reward_and_prover_reward_test_() ->\n\t{timeout, 30, fun test_update_accounts_receives_released_reward_and_prover_reward/0}.\n\n% this function will prepend reward_history up to ?REWARD_HISTORY_BLOCKS\n% elements will keep pattern of changed values\naugment_reward_history(PrevB = #block{ reward_history = RewardHistory }) ->\n\t[First, Second | _] = RewardHistory,\n\tNewRewardHistory = case length(RewardHistory) >= ?REWARD_HISTORY_BLOCKS of\n\t\ttrue -> RewardHistory;\n\t\t_ ->\n\t\t\tPairsToAdd = (?REWARD_HISTORY_BLOCKS - length(RewardHistory)) div 2,\n\t\t\tAdditionalElements = lists:flatten(lists:duplicate(PairsToAdd, [First, Second])),\n\t\t\tcase (?REWARD_HISTORY_BLOCKS - length(RewardHistory)) rem 2 of\n\t\t\t\t1 -> [Second | AdditionalElements];\n\t\t\t\t0 -> AdditionalElements\n\t\t\tend ++ RewardHistory\n\tend,\n\tPrevB#block{ reward_history = NewRewardHistory }.\n\ntest_update_accounts_receives_released_reward_and_prover_reward() ->\n\t?assert(?DOUBLE_SIGNING_REWARD_SAMPLE_SIZE == 2),\n\t?assert(?LOCKED_REWARDS_BLOCKS >= 3),\n\t?assert(?DOUBLE_SIGNING_PROVER_REWARD_SHARE == {1, 2}),\n\tAccounts = #{},\n\tKey = ar_wallet:new(),\n\tPub = element(2, element(2, Key)),\n\tRandom = crypto:strong_rand_bytes(64),\n\tPreimage = << 0:256, (ar_serialize:encode_int(1, 16))/binary,\n\t\t\t(ar_serialize:encode_int(1, 16))/binary, Random/binary >>,\n\tSig1 = ar_wallet:sign(element(1, Key), Preimage),\n\tSig2 = ar_wallet:sign(element(1, Key), Preimage),\n\tDoubleSigningProof = {Pub, Sig1, 1, 1, Random, Sig2, 1, 1, Random},\n\tBannedAddr = ar_wallet:to_address(Key),\n\tProverKey = ar_wallet:new(),\n\tRewardAddr = ar_wallet:to_address(ProverKey),\n\tB = #block{ timestamp = os:system_time(second), reward_addr = RewardAddr, weave_size = 1,\n\t\t\tdouble_signing_proof = DoubleSigningProof },\n\tReward = 13,\n\tProverReward = 5, % 1/2 of min(10, 12)\n\tPrevB = #block{ reward_history = [{stub, stub, 12, 1}, {BannedAddr, 0, 10, 1},\n\t\t\t{RewardAddr, 0, Reward, 1}], usd_to_ar_rate = {1, 5}, reward_pool = 0 },\n\tPrevB1 = augment_reward_history(PrevB),\n\t{ok, {_EndowmentPool2, _MinerReward, _DebtSupply2,\n\t\t\t_KryderPlusRateMultiplierLatch2, _KryderPlusRateMultiplier2, Accounts2}} =\n\t\t\tupdate_accounts(B, PrevB1, Accounts),\n\t?assertEqual({ProverReward + Reward, <<>>}, maps:get(RewardAddr, Accounts2)),\n\t?assertEqual({1, <<>>, 1, false}, maps:get(BannedAddr, Accounts2)).\n\nupdate_accounts_does_not_let_banned_account_take_reward_test_() ->\n\t{timeout, 30, fun test_update_accounts_does_not_let_banned_account_take_reward/0}.\n\ntest_update_accounts_does_not_let_banned_account_take_reward() ->\n\t?assert(?DOUBLE_SIGNING_REWARD_SAMPLE_SIZE == 2),\n\t?assert(?LOCKED_REWARDS_BLOCKS >= 3),\n\t?assert(?DOUBLE_SIGNING_PROVER_REWARD_SHARE == {1, 2}),\n\tAccounts = #{},\n\tKey = ar_wallet:new(),\n\tPub = element(2, element(2, Key)),\n\tRandom = crypto:strong_rand_bytes(64),\n\tPreimage = << 0:256, (ar_serialize:encode_int(1, 
16))/binary,\n\t\t\t(ar_serialize:encode_int(1, 16))/binary, Random/binary >>,\n\tSig1 = ar_wallet:sign(element(1, Key), Preimage),\n\tSig2 = ar_wallet:sign(element(1, Key), Preimage),\n\tDoubleSigningProof = {Pub, Sig1, 1, 1, Random, Sig2, 1, 1, Random},\n\tBannedAddr = ar_wallet:to_address(Key),\n\tProverKey = ar_wallet:new(),\n\tRewardAddr = ar_wallet:to_address(ProverKey),\n\tB = #block{ timestamp = os:system_time(second), reward_addr = RewardAddr, weave_size = 1,\n\t\t\tdouble_signing_proof = DoubleSigningProof },\n\tReward = 12,\n\tProverReward = 3, % 1/2 of min(7, 8)\n\tPrevB = #block{ reward_history = [{stub, stub, 7, 1}, {stub, stub, 8, 1},\n\t\t\t{BannedAddr, 0, Reward, 1}, {BannedAddr, 0, 10, 1}],\n\t\t\t\t\tusd_to_ar_rate = {1, 5}, reward_pool = 0 },\n\tPrevB1 = augment_reward_history(PrevB),\n\t{ok, {_EndowmentPool2, _MinerReward, _DebtSupply2,\n\t\t\t_KryderPlusRateMultiplierLatch2, _KryderPlusRateMultiplier2, Accounts2}} =\n\t\t\tupdate_accounts(B, PrevB1, Accounts),\n\t?assertEqual({ProverReward, <<>>}, maps:get(RewardAddr, Accounts2)),\n\t?assertEqual({1, <<>>, 1, false}, maps:get(BannedAddr, Accounts2)).\n"
  },
  {
    "path": "apps/arweave/src/ar_node_worker.erl",
    "content": "%% This Source Code Form is subject to the terms of the GNU General\n%% Public License, v. 2.0. If a copy of the GPLv2 was not distributed\n%% with this file, You can obtain one at\n%% https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html\n%%\n\n%%% @doc The server responsible for processing blocks and transactions and\n%%% maintaining the node state. Blocks are prioritized over transactions.\n-module(ar_node_worker).\n\n-export([start_link/0, calculate_delay/1, is_mempool_or_block_cache_tx/1,\n\t\ttx_id_prefix/1, found_solution/4, pause/0,\n\t\tstart_mining/0, mine_one_block/0, mine_until_height/1]).\n\n-export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2]).\n-export([set_reward_addr/1]).\n\n-include(\"ar.hrl\").\n-include(\"ar_consensus.hrl\").\n-include(\"ar_pricing.hrl\").\n-include(\"ar_data_sync.hrl\").\n-include(\"ar_vdf.hrl\").\n-include(\"ar_mining.hrl\").\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-ifdef(LOCALNET).\n-define(MINING_SERVER, ar_localnet_mining_server).\n-else.\n-define(MINING_SERVER, ar_mining_server).\n-endif.\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-ifdef(AR_TEST).\n\t-define(PROCESS_TASK_QUEUE_FREQUENCY_MS, 10).\n-else.\n\t-ifdef(LOCALNET).\n\t\t-define(PROCESS_TASK_QUEUE_FREQUENCY_MS, 10).\n\t-else.\n\t\t-define(PROCESS_TASK_QUEUE_FREQUENCY_MS, 200).\n\t-endif.\n-endif.\n\n-define(FILTER_MEMPOOL_CHUNK_SIZE, 100).\n\n-ifdef(AR_TEST).\n-define(BLOCK_INDEX_HEAD_LEN, (?STORE_BLOCKS_BEHIND_CURRENT * 2)).\n-else.\n-define(BLOCK_INDEX_HEAD_LEN, 10000).\n-endif.\n\n%% How deep into the past do we search for the state data starting from the tip of\n%% the extracted block index. Normally, the very recent block and transaction headers\n%% would be found, but in case something goes wrong we may skip up to this many missing\n%% records and start from a slightly older state. Also very helpful for testing, e.g., when\n%% we want to restart a testnet from a certain point in the past.\n-ifndef(START_FROM_STATE_SEARCH_DEPTH).\n\t-define(START_FROM_STATE_SEARCH_DEPTH, 100).\n-endif.\n\n%% How frequently (in seconds) to recompute the mining difficulty at the retarget blocks.\n-ifdef(AR_TEST).\n-define(COMPUTE_MINING_DIFFICULTY_INTERVAL, 1).\n-else.\n-define(COMPUTE_MINING_DIFFICULTY_INTERVAL, 10).\n-endif.\n\n-ifndef(LOCALNET_BALANCE).\n-define(LOCALNET_BALANCE, 1000000000000).\n-endif.\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Start the server.\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\n%% @doc Return the prefix used to inform block receivers about the block's transactions\n%% via POST /block_announcement.\ntx_id_prefix(TXID) ->\n\tbinary:part(TXID, 0, 8).\n\n%% @doc Return true if the given transaction identifier is found in the mempool or\n%% block cache (the last ar_block:get_consensus_window_size() blocks).\nis_mempool_or_block_cache_tx(TXID) ->\n\tets:match_object(tx_prefixes, {tx_id_prefix(TXID), TXID}) /= [].\n\nset_reward_addr(Addr) ->\n\tgen_server:call(?MODULE, {set_reward_addr, Addr}).\n\nfound_solution(Source, Solution, PoACache, PoA2Cache) ->\n\tgen_server:cast(?MODULE, {found_solution, Source, Solution, PoACache, PoA2Cache}).\n\n%% @doc Start the mining server. It will be running indefinitely until paused.\nstart_mining() ->\n\tgen_server:cast(?MODULE, start_mining).\n\n%% @doc Mine until a block is found. 
The default server may produce several block\n%% candidates (happens often in tests). The localnet mining server only produces\n%% one candidate and one block.\nmine_one_block() ->\n\tgen_server:cast(?MODULE, mine_one_block).\n\n%% @doc Mine blocks until the given height is reached.\nmine_until_height(Height) ->\n\tgen_server:cast(?MODULE, {mine_until_height, Height}).\n\n%% @doc Pause the mining server.\npause() ->\n\tgen_server:cast(?MODULE, pause).\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\n\ninit([]) ->\n\t?LOG_INFO([{start, ?MODULE}, {pid,self()}]),\n\t%% Trap exit to avoid corrupting any open files on quit.\n\tprocess_flag(trap_exit, true),\n\t[ok, ok, ok, ok] = ar_events:subscribe([tx, block, nonce_limiter, node_state]),\n\t%% Read persisted mempool.\n\tar_mempool:load_from_disk(),\n\t%% Join the network.\n\t{ok, Config} = arweave_config:get_env(),\n\tvalidate_trusted_peers(Config),\n\tStartFromLocalState = Config#config.start_from_latest_state orelse\n\t\t\tConfig#config.start_from_block /= not_set,\n\tcase {StartFromLocalState, Config#config.init, Config#config.auto_join} of\n\t\t{false, false, true} ->\n\t\t\tar_join:start(ar_peers:get_trusted_peers());\n\t\t{true, _, _} ->\n\t\t\tcase ar_storage:read_block_index(Config#config.start_from_state) of\n\t\t\t\tnot_found ->\n\t\t\t\t\tblock_index_not_found([]);\n\t\t\t\tBI ->\n\t\t\t\t\tcase get_block_index_at_state(BI, Config) of\n\t\t\t\t\t\tnot_found ->\n\t\t\t\t\t\t\tblock_index_not_found(BI);\n\t\t\t\t\t\tBI2 ->\n\t\t\t\t\t\t\tHeight = length(BI2) - 1,\n\t\t\t\t\t\t\tcase start_from_state(BI2, Height) of\n\t\t\t\t\t\t\t\tok ->\n\t\t\t\t\t\t\t\t\tok;\n\t\t\t\t\t\t\t\tError ->\n\t\t\t\t\t\t\t\t\tar:console(\"~n~n\\tFailed to read the local state: ~p.~n\",\n\t\t\t\t\t\t\t\t\t\t\t[Error]),\n\t\t\t\t\t\t\t\t\t?LOG_INFO([{event, failed_to_read_local_state},\n\t\t\t\t\t\t\t\t\t\t\t{reason, io_lib:format(\"~p\", [Error])}]),\n\t\t\t\t\t\t\t\t\ttimer:sleep(1000),\n\t\t\t\t\t\t\t\t\tinit:stop(1)\n\t\t\t\t\t\t\tend\n\t\t\t\t\tend\n\t\t\tend;\n\t\t{false, true, _} ->\n\t\t\tConfig2 = Config#config{ init = false },\n\t\t\tarweave_config:set_env(Config2),\n\t\t\tInitialBalance = ?AR(?LOCALNET_BALANCE),\n\t\t\t[B0] = ar_weave:init([{Config#config.mining_addr, InitialBalance, <<>>}],\n\t\t\t\t\tar_retarget:switch_to_linear_diff(Config#config.diff)),\n\t\t\tRootHash0 = B0#block.wallet_list,\n\t\t\tRootHash0 = ar_storage:write_wallet_list(0, B0#block.account_tree),\n\t\t\tstart_from_state([B0]);\n\t\t_ ->\n\t\t\tok\n\tend,\n\t%% Add pending transactions from the persisted mempool to the propagation queue.\n\tgb_sets:filter(\n\t\tfun ({_Utility, _TXID, ready_for_mining}) ->\n\t\t\t\tfalse;\n\t\t\t({_Utility, TXID, waiting}) ->\n\t\t\t\tstart_tx_mining_timer(ar_mempool:get_tx(TXID)),\n\t\t\t\ttrue\n\t\tend,\n\t\tar_mempool:get_priority_set()\n\t),\n\t%% May be start mining.\n\tcase Config#config.mine of\n\t\ttrue ->\n\t\t\tgen_server:cast(?MODULE, start_mining);\n\t\t_ ->\n\t\t\tok\n\tend,\n\tgen_server:cast(?MODULE, process_task_queue),\n\tets:insert(node_state, [\n\t\t{is_joined,\t\t\t\t\t\tfalse},\n\t\t{hash_list_2_0_for_1_0_blocks,\tread_hash_list_2_0_for_1_0_blocks()}\n\t]),\n\tgen_server:cast(?MODULE, compute_mining_difficulty),\n\t{ok, #{\n\t\tminer_state => undefined,\n\t\tio_threads => [],\n\t\tautomine => false,\n\t\tmine_until_height => undefined,\n\t\ttags => [],\n\t\tblocks_missing_txs => 
sets:new(),\n\t\tmissing_txs_lookup_processes => #{},\n\t\ttask_queue => gb_sets:new(),\n\t\tsolution_cache => #{},\n\t\tsolution_cache_records => queue:new()\n\t}}.\n\nget_block_index_at_state(BI, Config) ->\n\tcase Config#config.start_from_latest_state of\n\t\ttrue ->\n\t\t\tBI;\n\t\tfalse ->\n\t\t\tH = Config#config.start_from_block,\n\t\t\tget_block_index_at_state2(BI, H)\n\tend.\n\nget_block_index_at_state2([], _H) ->\n\tnot_found;\nget_block_index_at_state2([{H, _, _} | _] = BI, H) ->\n\tBI;\nget_block_index_at_state2([_ | BI], H) ->\n\tget_block_index_at_state2(BI, H).\n\nblock_index_not_found([]) ->\n\tar:console(\"~n~n\\tThe local state is empty, consider joining the network \"\n\t\t\t\"via the trusted peers.~n\"),\n\t?LOG_INFO([{event, local_state_empty}]),\n\ttimer:sleep(1000),\n\tinit:stop(1);\nblock_index_not_found(BI) ->\n\t{Last, _, _} = hd(BI),\n\t{First, _, _} = lists:last(BI),\n\tar:console(\"~n~n\\tThe local state is missing the target block. Available height range: ~p to ~p.~n\",\n\t\t\t[ar_util:encode(First), ar_util:encode(Last)]),\n\t?LOG_INFO([{event, local_state_missing_target},\n\t\t\t{first, ar_util:encode(First)}, {last, ar_util:encode(Last)}]),\n\ttimer:sleep(1000),\n\tinit:stop(1).\n\n\nvalidate_trusted_peers(#config{ peers = [] }) ->\n\tok;\nvalidate_trusted_peers(Config) ->\n\tPeers = Config#config.peers,\n\tValidPeers = filter_valid_peers(Peers),\n\tcase ValidPeers of\n\t\t[] ->\n\t\t\tar:console(\"The specified trusted peers are not valid.~n\", []),\n\t\t\t?LOG_INFO([{event, no_valid_trusted_peers}]),\n\t\t\ttimer:sleep(2000),\n\t\t\tinit:stop(1);\n\t\t_ ->\n\t\t\tarweave_config:set_env(Config#config{ peers = ValidPeers }),\n\t\t\tcase lists:member(time_syncing, Config#config.disable) of\n\t\t\t\tfalse ->\n\t\t\t\t\tvalidate_clock_sync(ValidPeers);\n\t\t\t\ttrue ->\n\t\t\t\t\tok\n\t\t\tend\n\tend.\n\n%% @doc Verify peers are on the same network as us.\nfilter_valid_peers(Peers) ->\n\tlists:filter(\n\t\tfun(Peer) ->\n\t\t\tcase ar_http_iface_client:get_info(Peer, network) of\n\t\t\t\tinfo_unavailable ->\n\t\t\t\t\tio:format(\"~n\\tPeer ~s is not available.~n~n\",\n\t\t\t\t\t\t\t[ar_util:format_peer(Peer)]),\n\t\t\t\t\tfalse;\n\t\t\t\t<<?NETWORK_NAME>> ->\n\t\t\t\t\ttrue;\n\t\t\t\t_ ->\n\t\t\t\t\tio:format(\n\t\t\t\t\t\t\"~n\\tPeer ~s does not belong to the network ~s.~n~n\",\n\t\t\t\t\t\t[ar_util:format_peer(Peer), ?NETWORK_NAME]\n\t\t\t\t\t),\n\t\t\t\t\tfalse\n\t\t\tend\n\t\tend,\n\t\tPeers\n\t).\n\n%% @doc Validate our clocks are in sync with the trusted peers' clocks.\nvalidate_clock_sync(Peers) ->\n\tValidatePeerClock = fun(Peer) ->\n\t\tcase ar_http_iface_client:get_time(Peer, 5 * 1000) of\n\t\t\t{ok, {RemoteTMin, RemoteTMax}} ->\n\t\t\t\tLocalT = os:system_time(second),\n\t\t\t\tTolerance = ?JOIN_CLOCK_TOLERANCE,\n\t\t\t\tcase LocalT of\n\t\t\t\t\tT when T < RemoteTMin - Tolerance ->\n\t\t\t\t\t\tlog_peer_clock_diff(Peer, RemoteTMin - Tolerance - T),\n\t\t\t\t\t\tfalse;\n\t\t\t\t\tT when T < RemoteTMin - Tolerance div 2 ->\n\t\t\t\t\t\tlog_peer_clock_diff(Peer, RemoteTMin - T),\n\t\t\t\t\t\ttrue;\n\t\t\t\t\tT when T > RemoteTMax + Tolerance ->\n\t\t\t\t\t\tlog_peer_clock_diff(Peer, T - RemoteTMax - Tolerance),\n\t\t\t\t\t\tfalse;\n\t\t\t\t\tT when T > RemoteTMax + Tolerance div 2 ->\n\t\t\t\t\t\tlog_peer_clock_diff(Peer, T - RemoteTMax),\n\t\t\t\t\t\ttrue;\n\t\t\t\t\t_ ->\n\t\t\t\t\t\ttrue\n\t\t\t\tend;\n\t\t\t{error, Err} ->\n\t\t\t\tar:console(\n\t\t\t\t\t\"Failed to get time from peer ~s: ~p.\",\n\t\t\t\t\t[ar_util:format_peer(Peer), 
Err]\n\t\t\t\t),\n\t\t\t\tfalse\n\t\tend\n\tend,\n\tResponses = ar_util:pmap(ValidatePeerClock, [P || P <- Peers, not is_pid(P)]),\n\tcase checker(Responses) of\n\t\t% If more valid nodes are present than invalid nodes, it should be\n\t\t% good\n\t\t{X, #{true := True, false := False}}\n\t\t\twhen X>0, True>False ->\n\t\t\t\tok;\n\n\t\t% If all nodes are valid, then its good\n\t\t{_, #{ true := _ }} ->\n\t\t\tok;\n\n\t\t% Else there is a problem somewhere. Too many peers\n\t\t% with clock issues will only cause problems.\n\t\t_ ->\n\t\t\tar:console(\n\t\t\t\t\"~n\\tInvalid peers. A valid peer must be part of the\"\n\t\t\t\t\" network ~s and its clock must deviate from ours by no\"\n\t\t\t\t\" more than ~B seconds.~n\", [?NETWORK_NAME, ?JOIN_CLOCK_TOLERANCE]\n\t\t\t),\n\t\t\t?LOG_INFO([{event, invalid_peer}]),\n\t\t\ttimer:sleep(1000),\n\t\t\tinit:stop(1)\n\tend.\n\nlog_peer_clock_diff(Peer, Delta) ->\n\tWarning = \"Your local clock deviates from peer ~s by ~B seconds or more.\",\n\tWarningArgs = [ar_util:format_peer(Peer), Delta],\n\tio:format(Warning, WarningArgs),\n\t?LOG_WARNING(Warning, WarningArgs).\n\nstart_tx_mining_timer(TX) ->\n\t%% Calling with ar_node_worker: allows to mock calculate_delay/1 in tests.\n\terlang:send_after(ar_node_worker:calculate_delay(tx_propagated_size(TX)), ?MODULE,\n\t\t\t{tx_ready_for_mining, TX}).\n\ntx_propagated_size(#tx{ format = 2 }) ->\n\t?TX_SIZE_BASE;\ntx_propagated_size(#tx{ format = 1, data = Data }) ->\n\t?TX_SIZE_BASE + byte_size(Data).\n\n%% @doc Return a delay in milliseconds to wait before including a transaction\n%% into a block. The delay is computed as base delay + a function of data size with\n%% a conservative estimation of the network speed.\ncalculate_delay(Bytes) ->\n\tBaseDelay = (?BASE_TX_PROPAGATION_DELAY) * 1000,\n\tNetworkDelay = Bytes * 8 div (?TX_PROPAGATION_BITS_PER_SECOND) * 1000,\n\tBaseDelay + NetworkDelay.\n\nhandle_call({set_reward_addr, Addr}, _From, State) ->\n\t{reply, ok, State#{ reward_addr => Addr }}.\n\nhandle_cast({found_solution, miner, _Solution, _PoACache, _PoA2Cache},\n\t\t#{ automine := false, miner_state := undefined } = State) ->\n\t{noreply, State};\nhandle_cast({found_solution, Source, Solution, PoACache, PoA2Cache}, State) ->\n\t[{_, PrevH}] = ets:lookup(node_state, current),\n\tPrevB = ar_block_cache:get(block_cache, PrevH),\n\thandle_found_solution({Source, Solution, PoACache, PoA2Cache}, PrevB, State, false);\n\nhandle_cast(process_task_queue, #{ task_queue := TaskQueue } = State) ->\n\tRunTask =\n\t\tcase gb_sets:is_empty(TaskQueue) of\n\t\t\ttrue ->\n\t\t\t\tfalse;\n\t\t\tfalse ->\n\t\t\t\tcase ets:lookup(node_state, is_joined) of\n\t\t\t\t\t[{_, true}] ->\n\t\t\t\t\t\ttrue;\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tfalse\n\t\t\t\tend\n\t\tend,\n\tcase RunTask of\n\t\ttrue ->\n\t\t\trecord_metrics(),\n\t\t\t{{_Priority, Task}, TaskQueue2} = gb_sets:take_smallest(TaskQueue),\n\t\t\tgen_server:cast(self(), process_task_queue),\n\t\t\thandle_task(Task, State#{ task_queue => TaskQueue2 });\n\t\tfalse ->\n\t\t\tar_util:cast_after(?PROCESS_TASK_QUEUE_FREQUENCY_MS, ?MODULE, process_task_queue),\n\t\t\t{noreply, State}\n\tend;\n\nhandle_cast(Message, #{ task_queue := TaskQueue } = State) ->\n\tTask = {priority(Message), Message},\n\tcase gb_sets:is_element(Task, TaskQueue) of\n\t\ttrue ->\n\t\t\t{noreply, State};\n\t\tfalse ->\n\t\t\t{noreply, State#{ task_queue => gb_sets:insert(Task, TaskQueue) }}\n\tend.\n\nhandle_info({join_from_state, Height, BI, Blocks, CustomDir}, State) ->\n\t{ok, _} = 
ar_wallets:start_link([{blocks, Blocks},\n\t\t\t{from_state, ?START_FROM_STATE_SEARCH_DEPTH},\n\t\t\t{custom_dir, CustomDir}]),\n\tets:insert(node_state, {join_state, {Height, Blocks, BI, CustomDir}}),\n\t{noreply, State};\n\nhandle_info({join, Height, BI, Blocks}, State) ->\n\tPeers = ar_peers:get_trusted_peers(),\n\t{ok, _} = ar_wallets:start_link([{blocks, Blocks}, {from_peers, Peers}]),\n\tets:insert(node_state, {join_state, {Height, Blocks, BI, not_set}}),\n\t{noreply, State};\n\nhandle_info({event, node_state, {account_tree_initialized, Height}}, State) ->\n\t[{_, {Height2, Blocks, BI, CustomDir}}] = ets:lookup(node_state, join_state),\n\t?LOG_INFO([{event, account_tree_initialized}, {height, Height}]),\n\tar:console(\"The account tree has been initialized at the block height ~B.~n\", [Height]),\n\tcase CustomDir of\n\t\tnot_set ->\n\t\t\tok;\n\t\t_ ->\n\t\t\tar_storage:close_start_from_state_databases()\n\tend,\n\t%% Take the latest block the account tree is stored for.\n\tBlocks2 = lists:nthtail(Height2 - Height, Blocks),\n\tBI2 = lists:nthtail(Height2 - Height, BI),\n\tar_block_index:init(BI2),\n\tBlocks3 = lists:sublist(Blocks2, ?SEARCH_SPACE_UPPER_BOUND_DEPTH),\n\tBlocks4 = may_be_initialize_nonce_limiter(Blocks3, BI2),\n\tBlocks5 = Blocks4 ++ lists:nthtail(length(Blocks3), Blocks2),\n\tets:insert(node_state, {join_state, {Height, Blocks5, BI2, CustomDir}}),\n\tar_nonce_limiter:account_tree_initialized(Blocks5),\n\t{noreply, State};\n\nhandle_info({event, node_state, _Event}, State) ->\n\t{noreply, State};\n\nhandle_info({event, nonce_limiter, initialized}, State) ->\n\t[{_, {Height, Blocks, BI, _CustomDir}}] = ets:lookup(node_state, join_state),\n\tar_storage:store_block_index(BI),\n\tRecentBI = lists:sublist(BI, ?BLOCK_INDEX_HEAD_LEN),\n\tCurrent = element(1, hd(RecentBI)),\n\tRecentBlocks = lists:sublist(Blocks, ar_block:get_consensus_window_size()),\n\tRecentBlocks2 = set_poa_caches(RecentBlocks),\n\tar_block_cache:initialize_from_list(block_cache, RecentBlocks2),\n\tB = hd(RecentBlocks2),\n\tRewardHistory = [{H, {Addr, HashRate, Reward, Denomination}}\n\t\t\t|| {{Addr, HashRate, Reward, Denomination}, {H, _, _}}\n\t\t\t<- lists:zip(B#block.reward_history,\n\t\t\t\t\tlists:sublist(BI, length(B#block.reward_history)))],\n\tar_storage:store_reward_history_part2(RewardHistory),\n\tBlockTimeHistory = [{H, {BlockInterval, VDFInterval, ChunkCount}}\n\t\t\t|| {{BlockInterval, VDFInterval, ChunkCount}, {H, _, _}}\n\t\t\t<- lists:zip(B#block.block_time_history,\n\t\t\t\t\tlists:sublist(BI, length(B#block.block_time_history)))],\n\tar_storage:store_block_time_history_part2(BlockTimeHistory),\n\tHeight = B#block.height,\n\tar_disk_cache:write_block(B),\n\tar_data_sync:join(RecentBI),\n\tar_header_sync:join(Height, RecentBI, Blocks),\n\tar_tx_blacklist:start_taking_down(),\n\tBlockTXPairs = [block_txs_pair(Block) || Block <- Blocks],\n\t{BlockAnchors, RecentTXMap} = get_block_anchors_and_recent_txs_map(BlockTXPairs),\n\t{Rate, ScheduledRate} = {B#block.usd_to_ar_rate, B#block.scheduled_usd_to_ar_rate},\n\tRecentBI2 = lists:sublist(BI, ?BLOCK_INDEX_HEAD_LEN),\n\tets:insert(node_state, [\n\t\t{recent_block_index,\tRecentBI2},\n\t\t{recent_max_block_size, 
get_max_block_size(RecentBI2)},\n\t\t{is_joined,\t\t\t\ttrue},\n\t\t{current,\t\t\t\tCurrent},\n\t\t{timestamp,\t\t\t\tB#block.timestamp},\n\t\t{nonce_limiter_info,\tB#block.nonce_limiter_info},\n\t\t{wallet_list,\t\t\tB#block.wallet_list},\n\t\t{height,\t\t\t\tHeight},\n\t\t{hash,\t\t\t\t\tB#block.hash},\n\t\t{reward_pool,\t\t\tB#block.reward_pool},\n\t\t{diff_pair,\t\t\t\tar_difficulty:diff_pair(B)},\n\t\t{cumulative_diff,\t\tB#block.cumulative_diff},\n\t\t{last_retarget,\t\t\tB#block.last_retarget},\n\t\t{weave_size,\t\t\tB#block.weave_size},\n\t\t{block_txs_pairs,\t\tBlockTXPairs},\n\t\t{block_anchors,\t\t\tBlockAnchors},\n\t\t{recent_txs_map,\t\tRecentTXMap},\n\t\t{usd_to_ar_rate,\t\tRate},\n\t\t{scheduled_usd_to_ar_rate, ScheduledRate},\n\t\t{price_per_gib_minute, B#block.price_per_gib_minute},\n\t\t{kryder_plus_rate_multiplier, B#block.kryder_plus_rate_multiplier},\n\t\t{denomination, B#block.denomination},\n\t\t{redenomination_height, B#block.redenomination_height},\n\t\t{scheduled_price_per_gib_minute, B#block.scheduled_price_per_gib_minute},\n\t\t{merkle_rebase_support_threshold, get_merkle_rebase_threshold(B)}\n\t]),\n\tSearchSpaceUpperBound = ar_node:get_partition_upper_bound(RecentBI),\n\tar_events:send(node_state, {search_space_upper_bound, SearchSpaceUpperBound}),\n\tar_events:send(node_state, {initialized, B}),\n\tar_events:send(node_state, {checkpoint_block,\n\t\tar_block_cache:get_checkpoint_block(RecentBI)}),\n\tar:console(\"Joined the Arweave network successfully at the block ~s, height ~B.~n\",\n\t\t\t[ar_util:encode(Current), Height]),\n\t?LOG_INFO([{event, joined_the_network}, {block, ar_util:encode(Current)},\n\t\t\t{height, Height}]),\n\tets:delete(node_state, join_state),\n\t{noreply, maybe_reset_miner(State)};\n\nhandle_info({event, nonce_limiter, {invalid, H, Code}}, State) ->\n\t?LOG_WARNING([{event, received_block_with_invalid_nonce_limiter_chain},\n\t\t\t{block, ar_util:encode(H)}, {code, Code}]),\n\tar_block_cache:remove(block_cache, H),\n\tar_ignore_registry:add(H),\n\tgen_server:cast(?MODULE, apply_block),\n\t{noreply, maps:remove({nonce_limiter_validation_scheduled, H}, State)};\n\nhandle_info({event, nonce_limiter, {valid, H}}, State) ->\n\t?LOG_INFO([{event, vdf_validation_successful}, {block, ar_util:encode(H)}]),\n\tar_block_cache:mark_nonce_limiter_validated(block_cache, H),\n\tgen_server:cast(?MODULE, apply_block),\n\t{noreply, maps:remove({nonce_limiter_validation_scheduled, H}, State)};\n\nhandle_info({event, nonce_limiter, {validation_error, H}}, State) ->\n\t?LOG_WARNING([{event, vdf_validation_error}, {block, ar_util:encode(H)}]),\n\tar_block_cache:remove(block_cache, H),\n\tgen_server:cast(?MODULE, apply_block),\n\t{noreply, maps:remove({nonce_limiter_validation_scheduled, H}, State)};\n\nhandle_info({event, nonce_limiter, {refuse_validation, H}}, State) ->\n\tar_util:cast_after(500, ?MODULE, apply_block),\n\t{noreply, maps:remove({nonce_limiter_validation_scheduled, H}, State)};\n\nhandle_info({event, nonce_limiter, _}, State) ->\n\t{noreply, State};\n\nhandle_info({tx_ready_for_mining, TX}, State) ->\n\tar_mempool:add_tx(TX, ready_for_mining),\n\tar_events:send(tx, {ready_for_mining, TX}),\n\t{noreply, State};\n\nhandle_info({event, block, {new, Block, _Source}}, State)\n\t\twhen length(Block#block.txs) > ?BLOCK_TX_COUNT_LIMIT ->\n\t?LOG_WARNING([{event, received_block_with_too_many_txs},\n\t\t\t{block, ar_util:encode(Block#block.indep_hash)}, {txs, length(Block#block.txs)}]),\n\t{noreply, State};\n\nhandle_info({event, block, {new, B, 
_Source}}, State) ->\n\tH = B#block.indep_hash,\n\t%% Record the block in the block cache. Schedule an application of the\n\t%% earliest not validated block from the longest chain, if any.\n\tcase ar_block_cache:get(block_cache, H) of\n\t\tnot_found ->\n\t\t\tcase ar_block_cache:get(block_cache, B#block.previous_block) of\n\t\t\t\tnot_found ->\n\t\t\t\t\t%% The cache should have been just pruned and this block is old.\n\t\t\t\t\t?LOG_WARNING([{event, block_cache_missing_block},\n\t\t\t\t\t\t\t{previous_block, ar_util:encode(B#block.previous_block)},\n\t\t\t\t\t\t\t{previous_height, B#block.height - 1},\n\t\t\t\t\t\t\t{block, ar_util:encode(H)}]),\n\t\t\t\t\tar_ignore_registry:remove(H),\n\t\t\t\t\t{noreply, State};\n\t\t\t\t_PrevB ->\n\t\t\t\t\tState2 = may_be_report_double_signing(B, State),\n\t\t\t\t\tar_block_cache:add(block_cache, B),\n\t\t\t\t\tgen_server:cast(?MODULE, apply_block),\n\t\t\t\t\t{noreply, State2}\n\t\t\tend;\n\t\t_ ->\n\t\t\t%% The block's already received from a different peer or\n\t\t\t%% fetched by ar_poller.\n\t\t\t{noreply, State}\n\tend;\n\nhandle_info({event, block, {mined_block_received, H, ReceiveTimestamp}}, State) ->\n\tar_block_cache:update_timestamp(block_cache, H, ReceiveTimestamp),\n\t{noreply, State};\n\nhandle_info({event, block, _}, State) ->\n\t{noreply, State};\n\n%% Add the new waiting transaction to the server state.\nhandle_info({event, tx, {new, TX, _Source}}, State) ->\n\tTXID = TX#tx.id,\n\tcase ar_mempool:has_tx(TXID) of\n\t\tfalse ->\n\t\t\tInitialStatus =\n\t\t\t\tcase maps:get(automine, State) of\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\tready_for_mining;\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\twaiting\n\t\t\t\tend,\n\t\t\tar_mempool:add_tx(TX, InitialStatus),\n\t\t\tcase ar_mempool:has_tx(TXID) of\n\t\t\t\ttrue ->\n\t\t\t\t\tcase maps:get(automine, State) of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t%% Do not include transactions into blocks until\n\t\t\t\t\t\t\t%% they had time to propagate around the network.\n\t\t\t\t\t\t\tstart_tx_mining_timer(TX);\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tok\n\t\t\t\t\tend;\n\t\t\t\tfalse ->\n\t\t\t\t\t%% The transaction has been dropped because more valuable transactions\n\t\t\t\t\t%% exceed the mempool limit.\n\t\t\t\t\tok\n\t\t\tend,\n\t\t\t{noreply, State};\n\t\ttrue ->\n\t\t\t{noreply, State}\n\tend;\n\nhandle_info({event, tx, {emitting_scheduled, Utility, TXID}}, State) ->\n\tar_mempool:del_from_propagation_queue(Utility, TXID),\n\t{noreply, State};\n\n%% Add the transaction to the mining pool, to be included in the mined block.\nhandle_info({event, tx, {ready_for_mining, TX}}, State) ->\n\tar_mempool:add_tx(TX, ready_for_mining),\n\t{noreply, State};\n\nhandle_info({event, tx, _}, State) ->\n\t{noreply, State};\n\nhandle_info({'DOWN', _Ref, process, PID, _Info}, State) ->\n\t#{\n\t\tblocks_missing_txs := Set,\n\t\tmissing_txs_lookup_processes := Map\n\t} = State,\n\tBH = maps:get(PID, Map),\n\t{noreply, State#{\n\t\tmissing_txs_lookup_processes => maps:remove(PID, Map),\n\t\tblocks_missing_txs => sets:del_element(BH, Set)\n\t}};\n\nhandle_info({'EXIT', _PID, normal}, State) ->\n\t{noreply, State};\n\nhandle_info(shutdown, State) ->\n\t{stop, shutdown, State};\n\nhandle_info(Info, State) ->\n\t?LOG_ERROR([{event, unhandled_info}, {module, ?MODULE}, {message, Info}]),\n\t{noreply, State}.\n\nterminate(Reason, _State) ->\n\tcase ets:lookup(node_state, is_joined) of\n\t\t[{_, true}] ->\n\t\t\t[{mempool_size, MempoolSize}] = ets:lookup(node_state, mempool_size),\n\t\t\tMempool =\n\t\t\t\tgb_sets:fold(\n\t\t\t\t\tfun({_Utility, 
TXID, Status}, Acc) ->\n\t\t\t\t\t\tmaps:put(TXID, {ar_mempool:get_tx(TXID), Status}, Acc)\n\t\t\t\t\tend,\n\t\t\t\t\t#{},\n\t\t\t\t\tar_mempool:get_priority_set()\n\t\t\t\t),\n\t\t\tdump_mempool(Mempool, MempoolSize);\n\t\t_ ->\n\t\t\tok\n\tend,\n\t?LOG_INFO([{event, ar_node_worker_terminated}, {reason, Reason}]).\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nrecord_metrics() ->\n\t[{mempool_size, MempoolSize}] = ets:lookup(node_state, mempool_size),\n\tprometheus_gauge:set(arweave_block_height, ar_node:get_height()),\n\trecord_mempool_size_metrics(MempoolSize),\n\tprometheus_gauge:set(weave_size, ar_node:get_weave_size()).\n\nrecord_mempool_size_metrics({HeaderSize, DataSize}) ->\n\tprometheus_gauge:set(mempool_header_size_bytes, HeaderSize),\n\tprometheus_gauge:set(mempool_data_size_bytes, DataSize).\n\nmay_be_initialize_nonce_limiter([#block{ height = Height } = B | Blocks], BI) ->\n\tcase Height + 1 == ar_fork:height_2_6() of\n\t\ttrue ->\n\t\t\t{Seed, PartitionUpperBound, _TXRoot} = ar_node:get_nth_or_last(\n\t\t\t\t\t?SEARCH_SPACE_UPPER_BOUND_DEPTH, BI),\n\t\t\tOutput = crypto:hash(sha256, Seed),\n\t\t\tNextSeed = B#block.indep_hash,\n\t\t\tNextPartitionUpperBound = B#block.weave_size,\n\t\t\tInfo = #nonce_limiter_info{ output = Output, seed = Seed, next_seed = NextSeed,\n\t\t\t\t\tpartition_upper_bound = PartitionUpperBound,\n\t\t\t\t\tnext_partition_upper_bound = NextPartitionUpperBound },\n\t\t\t[B#block{ nonce_limiter_info = Info } | Blocks];\n\t\tfalse ->\n\t\t\t[B | may_be_initialize_nonce_limiter(Blocks, tl(BI))]\n\tend;\nmay_be_initialize_nonce_limiter([], _BI) ->\n\t[].\n\nhandle_task(apply_block, State) ->\n\tapply_block(State);\n\nhandle_task({cache_missing_txs, BH, TXs}, State) ->\n\tcase ar_block_cache:get_block_and_status(block_cache, BH) of\n\t\tnot_found ->\n\t\t\t%% The block should have been pruned while we were fetching the missing txs.\n\t\t\t{noreply, State};\n\t\t{B, {{not_validated, _}, _}} ->\n\t\t\tcase ar_block_cache:get(block_cache, B#block.previous_block) of\n\t\t\t\tnot_found ->\n\t\t\t\t\tok;\n\t\t\t\t_ ->\n\t\t\t\t\tar_block_cache:add(block_cache, B#block{ txs = TXs })\n\t\t\tend,\n\t\t\tgen_server:cast(?MODULE, apply_block),\n\t\t\t{noreply, State};\n\t\t{_B, _AnotherStatus} ->\n\t\t\t%% The transactions should have been received and the block validated while\n\t\t\t%% we were looking for previously missing transactions.\n\t\t\t{noreply, State}\n\tend;\n\nhandle_task(start_mining, State) ->\n\t{noreply, start_mining(State#{ automine => true })};\n\nhandle_task(mine_one_block, State) ->\n\tcase maps:get(miner_state, State) of\n\t\tundefined ->\n\t\t\t{noreply, start_mining(State)};\n\t\t_ ->\n\t\t\t{noreply, State}\n\tend;\n\nhandle_task({mine_until_height, Height}, State) ->\n\t{noreply, start_mining(State#{ mine_until_height => {height, Height}, automine => true })};\n\nhandle_task(pause, State) ->\n\tcase maps:get(miner_state, State) of\n\t\tundefined ->\n\t\t\tok;\n\t\t_ ->\n\t\t\t?MINING_SERVER:pause()\n\tend,\n\t{noreply, State#{ miner_state => undefined, automine => false,\n\t\t\tmine_until_height => undefined }};\n\nhandle_task({filter_mempool, Mempool}, State) ->\n\t{ok, List, RemainingMempool} = ar_mempool:take_chunk(Mempool, ?FILTER_MEMPOOL_CHUNK_SIZE),\n\tcase List of\n\t\t[] ->\n\t\t\t{noreply, State};\n\t\t_ ->\n\t\t\t[{wallet_list, WalletList}] = ets:lookup(node_state, wallet_list),\n\t\t\tHeight = 
ar_node:get_height(),\n\t\t\t[{usd_to_ar_rate, Rate}] = ets:lookup(node_state, usd_to_ar_rate),\n\t\t\t[{price_per_gib_minute, Price}] = ets:lookup(node_state, price_per_gib_minute),\n\t\t\t[{kryder_plus_rate_multiplier, KryderPlusRateMultiplier}] = ets:lookup(node_state,\n\t\t\t\t\tkryder_plus_rate_multiplier),\n\t\t\t[{denomination, Denomination}] = ets:lookup(node_state, denomination),\n\t\t\t[{redenomination_height, RedenominationHeight}] = ets:lookup(node_state,\n\t\t\t\t\tredenomination_height),\n\t\t\t[{block_anchors, BlockAnchors}] = ets:lookup(node_state, block_anchors),\n\t\t\t[{recent_txs_map, RecentTXMap}] = ets:lookup(node_state, recent_txs_map),\n\t\t\tWallets = ar_wallets:get(WalletList, ar_tx:get_addresses(List)),\n\t\t\tInvalidTXs =\n\t\t\t\tprometheus_histogram:observe_duration(\n\t\t\t\t\treverify_mempool_chunk_duration_milliseconds,\n\t\t\t\t\tfun() ->\n\t\t\t\t\t\tlists:foldl(\n\t\t\t\t\t\t\tfun(TX, Acc) ->\n\t\t\t\t\t\t\t\tcase ar_tx_replay_pool:verify_tx({TX, Rate, Price,\n\t\t\t\t\t\t\t\t\t\tKryderPlusRateMultiplier, Denomination, Height,\n\t\t\t\t\t\t\t\t\t\tRedenominationHeight, BlockAnchors, RecentTXMap,\n\t\t\t\t\t\t\t\t\t\t#{}, Wallets}, do_not_verify_signature) of\n\t\t\t\t\t\t\t\t\tvalid ->\n\t\t\t\t\t\t\t\t\t\tAcc;\n\t\t\t\t\t\t\t\t\t{invalid, _Reason} ->\n\t\t\t\t\t\t\t\t\t\t[TX | Acc]\n\t\t\t\t\t\t\t\tend\n\t\t\t\t\t\t\tend,\n\t\t\t\t\t\t\t[],\n\t\t\t\t\t\t\tList\n\t\t\t\t\t\t)\n\t\t\t\t\tend\n\t\t\t\t),\n\t\t\tar_mempool:drop_txs(InvalidTXs),\n\t\t\tcase RemainingMempool of\n\t\t\t\t[] ->\n\t\t\t\t\tscan_complete;\n\t\t\t\t_ ->\n\t\t\t\t\tgen_server:cast(self(), {filter_mempool, RemainingMempool})\n\t\t\tend,\n\t\t\t{noreply, State}\n\tend;\n\nhandle_task(compute_mining_difficulty, State) ->\n\tDiff = get_current_diff(),\n\tcase ar_node:get_height() of\n\t\tHeight when (Height + 1) rem 10 == 0 ->\n\t\t\t?LOG_INFO([{event, current_mining_difficulty},\n\t\t\t\t\t{height, Height}, {difficulty, Diff}]);\n\t\t_ ->\n\t\t\tok\n\tend,\n\tcase maps:get(miner_state, State) of\n\t\tundefined ->\n\t\t\tok;\n\t\t_ ->\n\t\t\t?MINING_SERVER:set_difficulty(Diff)\n\tend,\n\tar_util:cast_after((?COMPUTE_MINING_DIFFICULTY_INTERVAL) * 1000, ?MODULE,\n\t\t\tcompute_mining_difficulty),\n\t{noreply, State};\n\nhandle_task(Msg, State) ->\n\t?LOG_ERROR([\n\t\t{event, ar_node_worker_received_unknown_message},\n\t\t{message, Msg}\n\t]),\n\t{noreply, State}.\n\nget_block_anchors_and_recent_txs_map(BlockTXPairs) ->\n\tlists:foldr(\n\t\tfun({BH, L}, {Acc1, Acc2}) ->\n\t\t\tAcc3 =\n\t\t\t\tlists:foldl(\n\t\t\t\t\tfun({{TXID, _}, _}, Acc4) ->\n\t\t\t\t\t\t%% We use a map instead of a set here because it is faster.\n\t\t\t\t\t\tmaps:put(TXID, ok, Acc4)\n\t\t\t\t\tend,\n\t\t\t\t\tAcc2,\n\t\t\t\t\tL\n\t\t\t\t),\n\t\t\t{[BH | Acc1], Acc3}\n\t\tend,\n\t\t{[], #{}},\n\t\tlists:sublist(BlockTXPairs, ar_block:get_max_tx_anchor_depth())\n\t).\n\nget_max_block_size([_SingleElement]) ->\n\t0;\nget_max_block_size([{_BH, WeaveSize, _TXRoot} | BI]) ->\n\tget_max_block_size(BI, WeaveSize, 0).\n\nget_max_block_size([], _WeaveSize, Max) ->\n\tMax;\nget_max_block_size([{_BH, PrevWeaveSize, _TXRoot} | BI], WeaveSize, Max) ->\n\tMax2 = max(Max, WeaveSize - PrevWeaveSize),\n\tget_max_block_size(BI, PrevWeaveSize, Max2).\n\napply_block(State) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tAllowRebase = Config#config.allow_rebase,\n\tcase ar_block_cache:get_earliest_not_validated_from_longest_chain(block_cache) of\n\t\tnot_found when AllowRebase == true ->\n\t\t\tmaybe_rebase(State);\n\t\tnot_found when 
AllowRebase == false ->\n\t\t\t{noreply, State};\n\t\tArgs ->\n\t\t\t%% Cancel the pending rebase, if there is one.\n\t\t\tState2 = State#{ pending_rebase => false },\n\t\t\tapply_block(Args, State2)\n\tend.\n\napply_block({B, [PrevB | _PrevBlocks], {{not_validated, awaiting_nonce_limiter_validation},\n\t\t_Timestamp}}, State) ->\n\tH = B#block.indep_hash,\n\tcase maps:get({nonce_limiter_validation_scheduled, H}, State, false) of\n\t\ttrue ->\n\t\t\t%% Waiting until the nonce limiter chain is validated.\n\t\t\t{noreply, State};\n\t\tfalse ->\n\t\t\t?LOG_DEBUG([{event, schedule_nonce_limiter_validation},\n\t\t\t\t{block, ar_util:encode(B#block.indep_hash)}]),\n\t\t\trequest_nonce_limiter_validation(B, PrevB),\n\t\t\t{noreply, State#{ {nonce_limiter_validation_scheduled, H} => true }}\n\tend;\napply_block({B, PrevBlocks, {{not_validated, nonce_limiter_validated}, Timestamp}}, State) ->\n\tapply_block(B, PrevBlocks, Timestamp, State).\n\nmaybe_rebase(#{ pending_rebase := {PrevH, H} } = State) ->\n\tcase ar_block_cache:get_block_and_status(block_cache, PrevH) of\n\t\tnot_found ->\n\t\t\t{noreply, State};\n\t\t{PrevB, {validated, _}} ->\n\t\t\tcase get_cached_solution(H, State) of\n\t\t\t\tnot_found ->\n\t\t\t\t\t?LOG_WARNING([{event, failed_to_find_cached_solution_for_rebasing},\n\t\t\t\t\t\t\t{h, ar_util:encode(H)},\n\t\t\t\t\t\t\t{prev_h, ar_util:encode(PrevH)}]),\n\t\t\t\t\t{noreply, State};\n\t\t\t\tArgs ->\n\t\t\t\t\tSolutionH = (element(2, Args))#mining_solution.solution_hash,\n\t\t\t\t\t?LOG_INFO([{event, rebasing_block},\n\t\t\t\t\t\t\t{h, ar_util:encode(H)},\n\t\t\t\t\t\t\t{prev_h, ar_util:encode(PrevH)},\n\t\t\t\t\t\t\t{solution_h, ar_util:encode(SolutionH)},\n\t\t\t\t\t\t\t{expected_new_height, PrevB#block.height + 1}]),\n\t\t\t\t\tar:console(\"Rebasing block ~s (solution ~s, previous block ~s, height ~B).\",\n\t\t\t\t\t\t\t[\n\t\t\t\t\t\t\t\tar_util:encode(H), ar_util:encode(SolutionH),\n\t\t\t\t\t\t\t\tar_util:encode(PrevH), PrevB#block.height + 1\n\t\t\t\t\t\t\t]),\n\t\t\t\t\thandle_found_solution(Args, PrevB, State, true)\n\t\t\t\tend;\n\t\t{B, {Status, Timestamp}} ->\n\t\t\tPrevBlocks = ar_block_cache:get_fork_blocks(block_cache, B),\n\t\t\tArgs = {B, PrevBlocks, {Status, Timestamp}},\n\t\t\tapply_block(Args, State)\n\tend;\nmaybe_rebase(State) ->\n\t[{_, H}] = ets:lookup(node_state, current),\n\tB = ar_block_cache:get(block_cache, H),\n\t{ok, Config} = arweave_config:get_env(),\n\tcase B#block.reward_addr == Config#config.mining_addr of\n\t\tfalse ->\n\t\t\t{noreply, State};\n\t\ttrue ->\n\t\t\tcase ar_block_cache:get_siblings(block_cache, B) of\n\t\t\t\t[] ->\n\t\t\t\t\t{noreply, State};\n\t\t\t\tSiblings ->\n\t\t\t\t\tmaybe_rebase(B, Siblings, State)\n\t\t\tend\n\tend.\n\nmaybe_rebase(_B, [], State) ->\n\t{noreply, State};\nmaybe_rebase(B, [Sib | Siblings], State) ->\n\t#block{ nonce_limiter_info = Info, cumulative_diff = CDiff } = B,\n\t#block{ nonce_limiter_info = SibInfo, cumulative_diff = SibCDiff } = Sib,\n\tStepNumber = Info#nonce_limiter_info.global_step_number,\n\tSibStepNumber = SibInfo#nonce_limiter_info.global_step_number,\n\tcase {CDiff == SibCDiff, StepNumber > SibStepNumber,\n\t\t\tSib#block.reward_addr == B#block.reward_addr} of\n\t\t{true, true, false} ->\n\t\t\t%% See if the solution is cached to avoid wasting time.\n\t\t\tcase get_cached_solution(B#block.indep_hash, State) of\n\t\t\t\tnot_found ->\n\t\t\t\t\tmaybe_rebase(B, Siblings, State);\n\t\t\t\t_Args ->\n\t\t\t\t\trebase(B, Sib, State)\n\t\t\tend;\n\t\t_ ->\n\t\t\tmaybe_rebase(B, Siblings, 
State)\n\tend.\n\nrebase(B, PrevB, State) ->\n\tH = B#block.indep_hash,\n\tPrevH = PrevB#block.indep_hash,\n\tgen_server:cast(?MODULE, apply_block),\n\tPrevBlocks = ar_block_cache:get_fork_blocks(block_cache, PrevB),\n\t{_, {Status, Timestamp}} = ar_block_cache:get_block_and_status(block_cache, PrevH),\n\tState2 = State#{ pending_rebase => {PrevH, H} },\n\tcase Status of\n\t\tvalidated ->\n\t\t\t{noreply, State2};\n\t\t_ ->\n\t\t\tapply_block({PrevB, PrevBlocks, {Status, Timestamp}}, State2)\n\tend.\n\nget_cached_solution(H, State) ->\n\tmaps:get(H, maps:get(solution_cache, State), not_found).\n\napply_block(B, PrevBlocks, Timestamp, State) ->\n\t#{ blocks_missing_txs := BlocksMissingTXs } = State,\n\tcase sets:is_element(B#block.indep_hash, BlocksMissingTXs) of\n\t\ttrue ->\n\t\t\t?LOG_DEBUG([{event, block_is_missing_txs},\n\t\t\t\t\t{block, ar_util:encode(B#block.indep_hash)}]),\n\t\t\t%% We do not have some of the transactions from this block,\n\t\t\t%% searching for them at the moment.\n\t\t\t{noreply, State};\n\t\tfalse ->\n\t\t\tapply_block2(B, PrevBlocks, Timestamp, State)\n\tend.\n\napply_block2(BShadow, PrevBlocks, Timestamp, State) ->\n\t#{ blocks_missing_txs := BlocksMissingTXs,\n\t\t\tmissing_txs_lookup_processes := MissingTXsLookupProcesses } = State,\n\t{TXs, MissingTXIDs} = pick_txs(BShadow#block.txs),\n\tcase MissingTXIDs of\n\t\t[] ->\n\t\t\tHeight = BShadow#block.height,\n\t\t\tSizeTaggedTXs = ar_block:generate_size_tagged_list_from_txs(TXs, Height),\n\t\t\tB = BShadow#block{ txs = TXs, size_tagged_txs = SizeTaggedTXs },\n\t\t\tapply_block3(B, PrevBlocks, Timestamp, State);\n\t\t_ ->\n\t\t\t?LOG_INFO([{event, missing_txs_for_block}, {count, length(MissingTXIDs)}]),\n\t\t\tSelf = self(),\n\t\t\tmonitor(\n\t\t\t\tprocess,\n\t\t\t\tPID = spawn(fun() -> get_missing_txs_and_retry(BShadow, Self) end)\n\t\t\t),\n\t\t\tBH = BShadow#block.indep_hash,\n\t\t\t{noreply, State#{\n\t\t\t\tblocks_missing_txs => sets:add_element(BH, BlocksMissingTXs),\n\t\t\t\tmissing_txs_lookup_processes => maps:put(PID, BH, MissingTXsLookupProcesses)\n\t\t\t}}\n\tend.\n\napply_block3(B, [PrevB | _] = PrevBlocks, Timestamp, State) ->\n\t[{block_txs_pairs, BlockTXPairs}] = ets:lookup(node_state, block_txs_pairs),\n\t[{recent_block_index, RecentBI}] = ets:lookup(node_state, recent_block_index),\n\tRootHash = PrevB#block.wallet_list,\n\tTXs = B#block.txs,\n\tAccounts = ar_wallets:get(RootHash, [B#block.reward_addr | ar_tx:get_addresses(TXs)]),\n\t{Orphans, RecentBI2} = update_block_index(B, PrevBlocks, RecentBI),\n\tBlockTXPairs2 = update_block_txs_pairs(B, PrevBlocks, BlockTXPairs),\n\tBlockTXPairs3 = tl(BlockTXPairs2),\n\t{BlockAnchors, RecentTXMap} = get_block_anchors_and_recent_txs_map(BlockTXPairs3),\n\tRecentBI3 = tl(RecentBI2),\n\tPartitionUpperBound = ar_node:get_partition_upper_bound(RecentBI3),\n\tcase ar_node_utils:validate(B, PrevB, Accounts, BlockAnchors, RecentTXMap,\n\t\t\tPartitionUpperBound) of\n\t\terror ->\n\t\t\t?LOG_WARNING([{event, failed_to_validate_block},\n\t\t\t\t\t{h, ar_util:encode(B#block.indep_hash)}]),\n\t\t\tgen_server:cast(?MODULE, apply_block),\n\t\t\t{noreply, State};\n\t\t{invalid, Reason} ->\n\t\t\t?LOG_WARNING([{event, received_invalid_block},\n\t\t\t\t\t{validation_error, Reason},\n\t\t\t\t\t{h, ar_util:encode(B#block.indep_hash)}]),\n\t\t\tar_events:send(block, {rejected, Reason, B#block.indep_hash, no_peer}),\n\t\t\tBH = B#block.indep_hash,\n\t\t\tar_block_cache:remove(block_cache, BH),\n\t\t\tar_ignore_registry:add(BH),\n\t\t\tgen_server:cast(?MODULE, 
apply_block),\n\t\t\t{noreply, State};\n\t\tvalid ->\n\t\t\tcase validate_wallet_list(B, PrevB) of\n\t\t\t\terror ->\n\t\t\t\t\tBH = B#block.indep_hash,\n\t\t\t\t\t?LOG_WARNING([{event, failed_to_validate_wallet_list},\n\t\t\t\t\t\t\t{h, ar_util:encode(BH)}]),\n\t\t\t\t\tar_block_cache:remove(block_cache, BH),\n\t\t\t\t\tar_ignore_registry:add(BH),\n\t\t\t\t\tgen_server:cast(?MODULE, apply_block),\n\t\t\t\t\t{noreply, State};\n\t\t\t\tok ->\n\t\t\t\t\tB2 =\n\t\t\t\t\t\tcase B#block.height >= ar_fork:height_2_6() of\n\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\tB#block{\n\t\t\t\t\t\t\t\t\treward_history =\n\t\t\t\t\t\t\t\t\t\tar_rewards:add_element(B, PrevB#block.reward_history)\n\t\t\t\t\t\t\t\t};\n\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\tB\n\t\t\t\t\t\tend,\n\t\t\t\t\tB3 =\n\t\t\t\t\t\tcase B#block.height >= ar_fork:height_2_7() of\n\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\tBlockTimeHistory2 = ar_block_time_history:update_history(B, PrevB),\n\t\t\t\t\t\t\t\tLen2 = ar_block_time_history:history_length()\n\t\t\t\t\t\t\t\t\t\t+ ar_block:get_consensus_window_size(),\n\t\t\t\t\t\t\t\tBlockTimeHistory3 = lists:sublist(BlockTimeHistory2, Len2),\n\t\t\t\t\t\t\t\tB2#block{ block_time_history = BlockTimeHistory3 };\n\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\tB2\n\t\t\t\t\t\tend,\n\t\t\t\t\tState2 = apply_validated_block(State, B3, PrevBlocks, Orphans, RecentBI2,\n\t\t\t\t\t\t\tBlockTXPairs2),\n\t\t\t\t\trecord_processing_time(Timestamp),\n\t\t\t\t\t{noreply, State2}\n\t\t\tend\n\tend.\n\nrequest_nonce_limiter_validation(#block{ indep_hash = H } = B, PrevB) ->\n\tInfo = B#block.nonce_limiter_info,\n\tPrevInfo = ar_nonce_limiter:get_or_init_nonce_limiter_info(PrevB),\n\tar_nonce_limiter:request_validation(H, Info, PrevInfo).\n\npick_txs(TXIDs) ->\n\tMempool = ar_mempool:get_map(),\n\tlists:foldr(\n\t\tfun (TX, {Found, Missing}) when is_record(TX, tx) ->\n\t\t\t\t{[TX | Found], Missing};\n\t\t\t(TXID, {Found, Missing}) ->\n\t\t\t\tcase maps:get(TXID, Mempool, tx_not_in_mempool) of\n\t\t\t\t\ttx_not_in_mempool ->\n\t\t\t\t\t\t%% This disk read should almost never be useful. 
Presumably,\n\t\t\t\t\t\t%% the only reason to find some of these transactions on disk\n\t\t\t\t\t\t%% is that they had been written prior to the call, which means they are\n\t\t\t\t\t\t%% from an orphaned fork, more than one block behind.\n\t\t\t\t\t\tcase ar_storage:read_tx(TXID) of\n\t\t\t\t\t\t\tunavailable ->\n\t\t\t\t\t\t\t\t{Found, [TXID | Missing]};\n\t\t\t\t\t\t\tTX ->\n\t\t\t\t\t\t\t\t{[TX | Found], Missing}\n\t\t\t\t\t\tend;\n\t\t\t\t\t_Status ->\n\t\t\t\t\t\t{[ar_mempool:get_tx(TXID) | Found], Missing}\n\t\t\t\tend\n\t\tend,\n\t\t{[], []},\n\t\tTXIDs\n\t).\n\nmay_be_get_double_signing_proof(PrevB, State) ->\n\tLockedRewards = ar_rewards:get_locked_rewards(PrevB),\n\tProofs = maps:get(double_signing_proofs, State, #{}),\n\tRootHash = PrevB#block.wallet_list,\n\tHeight = PrevB#block.height + 1,\n\tmay_be_get_double_signing_proof2(maps:iterator(Proofs), RootHash, LockedRewards, Height).\n\nmay_be_get_double_signing_proof2(Iterator, RootHash, LockedRewards, Height) ->\n\tcase maps:next(Iterator) of\n\t\tnone ->\n\t\t\tundefined;\n\t\t{Addr, {_Timestamp, Proof2}, Iterator2} ->\n\t\t\t{Pub, Sig1, CDiff1, PrevCDiff1, Preimage1,\n\t\t\t\t\tSig2, CDiff2, PrevCDiff2, Preimage2} = Proof2,\n\t\t\t?LOG_INFO([{event, evaluating_double_signing_proof},\n\t\t\t\t{key_size, byte_size(Pub)},\n\t\t\t\t{sig1_size, byte_size(Sig1)},\n\t\t\t\t{sig2_size, byte_size(Sig2)},\n\t\t\t\t{height, Height}]),\n\t\t\tCheckKeyType =\n\t\t\t\tcase {byte_size(Pub) == ?ECDSA_PUB_KEY_SIZE, Height >= ar_fork:height_2_9()} of\n\t\t\t\t\t{true, false} ->\n\t\t\t\t\t\tfalse;\n\t\t\t\t\t{true, true} ->\n\t\t\t\t\t\tbyte_size(Sig1) == ?ECDSA_SIG_SIZE\n\t\t\t\t\t\t\tandalso byte_size(Sig2) == ?ECDSA_SIG_SIZE;\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tbyte_size(Pub) == ?RSA_BLOCK_SIG_SIZE\n\t\t\t\t\t\t\tandalso byte_size(Sig1) == ?RSA_BLOCK_SIG_SIZE\n\t\t\t\t\t\t\tandalso byte_size(Sig2) == ?RSA_BLOCK_SIG_SIZE\n\t\t\t\tend,\n\t\t\tCheckDifferentSignatures =\n\t\t\t\tcase CheckKeyType of\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\tfalse;\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tSig1 /= Sig2\n\t\t\t\tend,\n\t\t\tHasLockedReward =\n\t\t\t\tcase CheckDifferentSignatures of\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\tfalse;\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tar_rewards:has_locked_reward(Addr, LockedRewards)\n\t\t\t\tend,\n\t\t\tValidSignatures =\n\t\t\t\tcase HasLockedReward of\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\tfalse;\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tSignaturePreimage1 = ar_block:get_block_signature_preimage(\n\t\t\t\t\t\t\t\tCDiff1, PrevCDiff1, Preimage1, Height),\n\t\t\t\t\t\tSignaturePreimage2 = ar_block:get_block_signature_preimage(\n\t\t\t\t\t\t\t\tCDiff2, PrevCDiff2, Preimage2, Height),\n\t\t\t\t\t\tKey = ar_block:get_reward_key(Pub, Height),\n\t\t\t\t\t\tar_wallet:verify(Key, SignaturePreimage1, Sig1)\n\t\t\t\t\t\t\t\tandalso ar_wallet:verify(Key, SignaturePreimage2, Sig2)\n\t\t\t\tend,\n\t\t\tValidCDiffs =\n\t\t\t\tcase ValidSignatures of\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\tfalse;\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tar_block:get_double_signing_condition(CDiff1, PrevCDiff1, CDiff2, PrevCDiff2)\n\t\t\t\tend,\n\t\t\tcase ValidCDiffs of\n\t\t\t\tfalse ->\n\t\t\t\t\tmay_be_get_double_signing_proof2(Iterator2,\n\t\t\t\t\t\t\tRootHash, LockedRewards, Height);\n\t\t\t\ttrue ->\n\t\t\t\t\tAccounts = ar_wallets:get(RootHash, [Addr]),\n\t\t\t\t\tcase ar_node_utils:is_account_banned(Addr, Accounts) of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\tmay_be_get_double_signing_proof2(Iterator2,\n\t\t\t\t\t\t\t\t\tRootHash, LockedRewards, Height);\n\t\t\t\t\t\tfalse 
->\n\t\t\t\t\t\t\tProof2\n\t\t\t\t\tend\n\t\t\tend\n\tend.\n\nget_chunk_hash(#poa{ chunk = Chunk }, Height) ->\n\tcase Height >= ar_fork:height_2_7() of\n\t\tfalse ->\n\t\t\tundefined;\n\t\ttrue ->\n\t\t\tcase Chunk of\n\t\t\t\t<<>> ->\n\t\t\t\t\tundefined;\n\t\t\t\t_ ->\n\t\t\t\t\tcrypto:hash(sha256, Chunk)\n\t\t\tend\n\tend.\n\nget_unpacked_chunk_hash(PoA, PackingDifficulty, RecallByte) ->\n\tcase PackingDifficulty >= 1 of\n\t\tfalse ->\n\t\t\tundefined;\n\t\ttrue ->\n\t\t\tcase RecallByte of\n\t\t\t\tundefined ->\n\t\t\t\t\tundefined;\n\t\t\t\t_ ->\n\t\t\t\t\tcrypto:hash(sha256, PoA#poa.unpacked_chunk)\n\t\t\tend\n\tend.\n\npack_block_with_transactions(B, PrevB) ->\n\t#block{ reward_history = RewardHistory,\n\t\t\treward_history_hash = PreviousRewardHistoryHash } = PrevB,\n\tTXs = collect_mining_transactions(?BLOCK_TX_COUNT_LIMIT),\n\tRate = ar_pricing:usd_to_ar_rate(PrevB),\n\tPricePerGiBMinute = PrevB#block.price_per_gib_minute,\n\tPrevDenomination = PrevB#block.denomination,\n\tHeight = B#block.height,\n\tDenomination = B#block.denomination,\n\tKryderPlusRateMultiplier = PrevB#block.kryder_plus_rate_multiplier,\n\tRedenominationHeight = PrevB#block.redenomination_height,\n\tAddresses = [B#block.reward_addr | ar_tx:get_addresses(TXs)],\n\tAddresses2 = [ar_rewards:get_oldest_locked_address(PrevB) | Addresses],\n\tAddresses3 =\n\t\tcase B#block.double_signing_proof of\n\t\t\tundefined ->\n\t\t\t\tAddresses2;\n\t\t\tProof ->\n\t\t\t\t[ar_wallet:hash_pub_key(element(1, Proof)) | Addresses2]\n\t\tend,\n\tAccounts = ar_wallets:get(PrevB#block.wallet_list, Addresses3),\n\t[{block_txs_pairs, BlockTXPairs}] = ets:lookup(node_state, block_txs_pairs),\n\tPrevBlocks = ar_block_cache:get_fork_blocks(block_cache, B),\n\tBlockTXPairs2 = update_block_txs_pairs(B, PrevBlocks, BlockTXPairs),\n\tBlockTXPairs3 = tl(BlockTXPairs2),\n\t{BlockAnchors, RecentTXMap} = get_block_anchors_and_recent_txs_map(BlockTXPairs3),\n\tValidTXs = ar_tx_replay_pool:pick_txs_to_mine({BlockAnchors, RecentTXMap, Height - 1,\n\t\t\tRedenominationHeight, Rate, PricePerGiBMinute, KryderPlusRateMultiplier,\n\t\t\tPrevDenomination, B#block.timestamp, Accounts, TXs}),\n\tBlockSize =\n\t\tlists:foldl(\n\t\t\tfun(TX, Acc) ->\n\t\t\t\tAcc + ar_tx:get_weave_size_increase(TX, Height)\n\t\t\tend,\n\t\t\t0,\n\t\t\tValidTXs\n\t\t),\n\tWeaveSize = PrevB#block.weave_size + BlockSize,\n\tB2 = B#block{ txs = ValidTXs, block_size = BlockSize, weave_size = WeaveSize,\n\t\t\ttx_root = ar_block:generate_tx_root_for_block(ValidTXs, Height),\n\t\t\tsize_tagged_txs = ar_block:generate_size_tagged_list_from_txs(ValidTXs, Height) },\n\t{ok, {EndowmentPool, Reward, DebtSupply, KryderPlusRateMultiplierLatch,\n\t\t\tKryderPlusRateMultiplier2, Accounts2}} = ar_node_utils:update_accounts(B2, PrevB,\n\t\t\t\t\tAccounts),\n\tReward2 = ar_pricing:redenominate(Reward, PrevDenomination, Denomination),\n\tEndowmentPool2 = ar_pricing:redenominate(EndowmentPool, PrevDenomination, Denomination),\n\tDebtSupply2 = ar_pricing:redenominate(DebtSupply, PrevDenomination, Denomination),\n\t{ok, RootHash} = ar_wallets:add_wallets(PrevB#block.wallet_list, Accounts2, Height,\n\t\t\tDenomination),\n\tRewardHistory2 = ar_rewards:add_element(B2#block{ reward = Reward2 }, RewardHistory),\n\t%% Pre-2.8: slice the reward history to compute the hash\n\t%% Post-2.8: use the previous reward history hash and the head of the history to compute\n\t%% the new hash.\n\tLockedRewards = ar_rewards:trim_locked_rewards(Height, RewardHistory2),\n\tB2#block{\n\t\twallet_list = 
RootHash,\n\t\treward_pool = EndowmentPool2,\n\t\treward = Reward2,\n\t\treward_history = RewardHistory2,\n\t\treward_history_hash = ar_rewards:reward_history_hash(Height, PreviousRewardHistoryHash,\n\t\t\tLockedRewards),\n\t\tdebt_supply = DebtSupply2,\n\t\tkryder_plus_rate_multiplier_latch = KryderPlusRateMultiplierLatch,\n\t\tkryder_plus_rate_multiplier = KryderPlusRateMultiplier2\n\t}.\n\nupdate_block_index(B, PrevBlocks, BI) ->\n\t#block{ indep_hash = H } = lists:last(PrevBlocks),\n\t{Orphans, Base} = get_orphans(BI, H),\n\t{Orphans, [block_index_entry(B) |\n\t\t[block_index_entry(PrevB) || PrevB <- PrevBlocks] ++ Base]}.\n\nget_orphans(BI, H) ->\n\tget_orphans(BI, H, []).\n\nget_orphans([{H, _, _} | BI], H, Orphans) ->\n\t{Orphans, BI};\nget_orphans([{OrphanH, _, _} | BI], H, Orphans) ->\n\tget_orphans(BI, H, [OrphanH | Orphans]).\n\nblock_index_entry(B) ->\n\t{B#block.indep_hash, B#block.weave_size, B#block.tx_root}.\n\nupdate_block_txs_pairs(B, PrevBlocks, BlockTXPairs) ->\n\tlists:sublist(update_block_txs_pairs2(B, PrevBlocks, BlockTXPairs),\n\t\t\t2 * ar_block:get_max_tx_anchor_depth()).\n\nupdate_block_txs_pairs2(B, [PrevB, PrevPrevB | PrevBlocks], BP) ->\n\t[block_txs_pair(B) | update_block_txs_pairs2(PrevB, [PrevPrevB | PrevBlocks], BP)];\nupdate_block_txs_pairs2(B, [#block{ indep_hash = H }], BP) ->\n\t[block_txs_pair(B) | lists:dropwhile(fun({Hash, _}) -> Hash /= H end, BP)].\n\nblock_txs_pair(B) ->\n\t{B#block.indep_hash, B#block.size_tagged_txs}.\n\nvalidate_wallet_list(#block{ indep_hash = H } = B, PrevB) ->\n\tcase ar_wallets:apply_block(B, PrevB) of\n\t\t{error, invalid_denomination} ->\n\t\t\t?LOG_WARNING([{event, received_invalid_block},\n\t\t\t\t\t{validation_error, invalid_denomination}, {h, ar_util:encode(H)}]),\n\t\t\tar_events:send(block, {rejected, invalid_denomination, H, no_peer}),\n\t\t\terror;\n\t\t{error, mining_address_banned} ->\n\t\t\t?LOG_WARNING([{event, received_invalid_block},\n\t\t\t\t\t{validation_error, mining_address_banned}, {h, ar_util:encode(H)},\n\t\t\t\t\t{mining_address, ar_util:encode(B#block.reward_addr)}]),\n\t\t\tar_events:send(block, {rejected, mining_address_banned, H, no_peer}),\n\t\t\terror;\n\t\t{error, invalid_double_signing_proof_same_signature} ->\n\t\t\t?LOG_WARNING([{event, received_invalid_block},\n\t\t\t\t\t{validation_error, invalid_double_signing_proof_same_signature},\n\t\t\t\t\t{h, ar_util:encode(H)}]),\n\t\t\tar_events:send(block, {rejected, invalid_double_signing_proof_same_signature, H,\n\t\t\t\t\tno_peer}),\n\t\t\terror;\n\t\t{error, invalid_double_signing_proof_cdiff} ->\n\t\t\t?LOG_WARNING([{event, received_invalid_block},\n\t\t\t\t\t{validation_error, invalid_double_signing_proof_cdiff},\n\t\t\t\t\t{h, ar_util:encode(H)}]),\n\t\t\tar_events:send(block, {rejected, invalid_double_signing_proof_cdiff, H, no_peer}),\n\t\t\terror;\n\t\t{error, invalid_double_signing_proof_same_address} ->\n\t\t\t?LOG_WARNING([{event, received_invalid_block},\n\t\t\t\t\t{validation_error, invalid_double_signing_proof_same_address},\n\t\t\t\t\t{h, ar_util:encode(H)}]),\n\t\t\tar_events:send(block, {rejected, invalid_double_signing_proof_same_address, H,\n\t\t\t\t\tno_peer}),\n\t\t\terror;\n\t\t{error, invalid_double_signing_proof_not_in_reward_history} ->\n\t\t\t?LOG_WARNING([{event, received_invalid_block},\n\t\t\t\t\t{validation_error, invalid_double_signing_proof_not_in_reward_history},\n\t\t\t\t\t{h, ar_util:encode(H)}]),\n\t\t\tar_events:send(block, {rejected,\n\t\t\t\t\tinvalid_double_signing_proof_not_in_reward_history, H, 
no_peer}),\n\t\t\terror;\n\t\t{error, invalid_double_signing_proof_already_banned} ->\n\t\t\t?LOG_WARNING([{event, received_invalid_block},\n\t\t\t\t\t{validation_error, invalid_double_signing_proof_already_banned},\n\t\t\t\t\t{h, ar_util:encode(H)}]),\n\t\t\tar_events:send(block, {rejected,\n\t\t\t\t\tinvalid_double_signing_proof_already_banned, H, no_peer}),\n\t\t\terror;\n\t\t{error, invalid_double_signing_proof_invalid_signature} ->\n\t\t\t?LOG_WARNING([{event, received_invalid_block},\n\t\t\t\t\t{validation_error, invalid_double_signing_proof_invalid_signature},\n\t\t\t\t\t{h, ar_util:encode(H)}]),\n\t\t\tar_events:send(block, {rejected,\n\t\t\t\t\tinvalid_double_signing_proof_invalid_signature, H, no_peer}),\n\t\t\terror;\n\t\t{error, invalid_account_anchors} ->\n\t\t\t?LOG_WARNING([{event, received_invalid_block},\n\t\t\t\t\t{validation_error, invalid_account_anchors}, {h, ar_util:encode(H)}]),\n\t\t\tar_events:send(block, {rejected, invalid_account_anchors, H, no_peer}),\n\t\t\terror;\n\t\t{error, invalid_reward_pool} ->\n\t\t\t?LOG_WARNING([{event, received_invalid_block},\n\t\t\t\t\t{validation_error, invalid_reward_pool}, {h, ar_util:encode(H)}]),\n\t\t\tar_events:send(block, {rejected, invalid_reward_pool, H, no_peer}),\n\t\t\terror;\n\t\t{error, invalid_miner_reward} ->\n\t\t\t?LOG_WARNING([{event, received_invalid_block},\n\t\t\t\t\t{validation_error, invalid_miner_reward}, {h, ar_util:encode(H)}]),\n\t\t\tar_events:send(block, {rejected, invalid_miner_reward, H, no_peer}),\n\t\t\terror;\n\t\t{error, invalid_debt_supply} ->\n\t\t\t?LOG_WARNING([{event, received_invalid_block},\n\t\t\t\t\t{validation_error, invalid_debt_supply}, {h, ar_util:encode(H)}]),\n\t\t\tar_events:send(block, {rejected, invalid_debt_supply, H, no_peer}),\n\t\t\terror;\n\t\t{error, invalid_kryder_plus_rate_multiplier_latch} ->\n\t\t\t?LOG_WARNING([{event, received_invalid_block},\n\t\t\t\t\t{validation_error, invalid_kryder_plus_rate_multiplier_latch},\n\t\t\t\t\t{h, ar_util:encode(H)}]),\n\t\t\tar_events:send(block, {rejected, invalid_kryder_plus_rate_multiplier_latch, H,\n\t\t\t\t\tno_peer}),\n\t\t\terror;\n\t\t{error, invalid_kryder_plus_rate_multiplier} ->\n\t\t\t?LOG_WARNING([{event, received_invalid_block},\n\t\t\t\t\t{validation_error, invalid_kryder_plus_rate_multiplier},\n\t\t\t\t\t{h, ar_util:encode(H)}]),\n\t\t\tar_events:send(block, {rejected, invalid_kryder_plus_rate_multiplier, H, no_peer}),\n\t\t\terror;\n\t\t{error, invalid_wallet_list} ->\n\t\t\t?LOG_WARNING([{event, received_invalid_block},\n\t\t\t\t\t{validation_error, invalid_wallet_list}, {h, ar_util:encode(H)}]),\n\t\t\tar_events:send(block, {rejected, invalid_wallet_list, H, no_peer}),\n\t\t\terror;\n\t\t{ok, _RootHash2} ->\n\t\t\tok\n\tend.\n\nget_missing_txs_and_retry(#block{ txs = TXIDs }, _Worker)\n\t\twhen length(TXIDs) > 1000 ->\n\t?LOG_WARNING([{event, ar_node_worker_downloaded_txs_count_exceeds_limit}]),\n\tok;\nget_missing_txs_and_retry(BShadow, Worker) ->\n\tget_missing_txs_and_retry(BShadow#block.indep_hash, BShadow#block.txs,\n\t\t\tWorker, ar_peers:get_peers(current), [], 0).\n\nget_missing_txs_and_retry(_H, _TXIDs, _Worker, _Peers, _TXs, TotalSize)\n\t\twhen TotalSize > ?BLOCK_TX_DATA_SIZE_LIMIT ->\n\t?LOG_WARNING([{event, ar_node_worker_downloaded_txs_exceed_block_size_limit}]),\n\tok;\nget_missing_txs_and_retry(H, [], Worker, _Peers, TXs, _TotalSize) ->\n\tgen_server:cast(Worker, {cache_missing_txs, H, lists:reverse(TXs)});\nget_missing_txs_and_retry(H, TXIDs, Worker, Peers, TXs, TotalSize) ->\n\tSplit = min(5, 
length(TXIDs)),\n\t{Bulk, Rest} = lists:split(Split, TXIDs),\n\tFetch =\n\t\tlists:foldl(\n\t\t\tfun\t(TX = #tx{ format = 1, data_size = DataSize }, {Acc1, Acc2}) ->\n\t\t\t\t\t{[TX | Acc1], Acc2 + DataSize};\n\t\t\t\t(TX = #tx{}, {Acc1, Acc2}) ->\n\t\t\t\t\t{[TX | Acc1], Acc2};\n\t\t\t\t(_, failed_to_fetch_tx) ->\n\t\t\t\t\tfailed_to_fetch_tx;\n\t\t\t\t(_, _) ->\n\t\t\t\t\tfailed_to_fetch_tx\n\t\t\tend,\n\t\t\t{TXs, TotalSize},\n\t\t\tar_util:pmap(\n\t\t\t\tfun(TXID) ->\n\t\t\t\t\tar_http_iface_client:get_tx(Peers, TXID)\n\t\t\t\tend,\n\t\t\t\tBulk\n\t\t\t)\n\t\t),\n\tcase Fetch of\n\t\tfailed_to_fetch_tx ->\n\t\t\t?LOG_WARNING([{event, ar_node_worker_failed_to_fetch_missing_tx}]),\n\t\t\tok;\n\t\t{TXs2, TotalSize2} ->\n\t\t\tget_missing_txs_and_retry(H, Rest, Worker, Peers, TXs2, TotalSize2)\n\tend.\n\napply_validated_block(State, B, PrevBlocks, Orphans, RecentBI, BlockTXPairs) ->\n\t?LOG_DEBUG([{event, apply_validated_block}, {block, ar_util:encode(B#block.indep_hash)}]),\n\tcase ar_watchdog:is_mined_block(B) of\n\t\ttrue ->\n\t\t\tar_events:send(block, {new, B, #{ source => miner }});\n\t\tfalse ->\n\t\t\tok\n\tend,\n\t[{_, CDiff}] = ets:lookup(node_state, cumulative_diff),\n\tcase B#block.cumulative_diff =< CDiff of\n\t\ttrue ->\n\t\t\t%% The block is from the longest fork, but not the latest known block from there.\n\t\t\tar_block_cache:add_validated(block_cache, B),\n\t\t\tgen_server:cast(?MODULE, apply_block),\n\t\t\tlog_applied_block(B),\n\t\t\tState;\n\t\tfalse ->\n\t\t\tapply_validated_block2(State, B, PrevBlocks, Orphans, RecentBI, BlockTXPairs)\n\tend.\n\napply_validated_block2(State, B, PrevBlocks, Orphans, RecentBI, BlockTXPairs) ->\n\t[{current, CurrentH}] = ets:lookup(node_state, current),\n\tBH = B#block.indep_hash,\n\t%% Overwrite the block to store computed size tagged txs - they\n\t%% may be needed for reconstructing block_txs_pairs if there is a reorg\n\t%% off and then back on this fork.\n\tar_block_cache:add(block_cache, B),\n\tar_block_cache:mark_tip(block_cache, BH),\n\tar_block_cache:prune(block_cache, ar_block:get_consensus_window_size()),\n\t%% We could have missed a few blocks due to networking issues, which would then\n\t%% be picked up by ar_poller and end up waiting for missing transactions to be fetched.\n\t%% Therefore, it is possible (although not likely) that there are blocks above the new tip,\n\t%% for which we trigger a block application here, in order not to wait for the next\n\t%% arriving or fetched block to trigger it.\n\tgen_server:cast(?MODULE, apply_block),\n\tlog_applied_block(B),\n\tlog_tip(B),\n\tmaybe_report_n_confirmations(B, RecentBI),\n\tPrevB = hd(PrevBlocks),\n\tForkRootB = lists:last(PrevBlocks), %% The root of any detected fork\n\tprometheus_gauge:set(block_time, B#block.timestamp - PrevB#block.timestamp),\n\trecord_economic_metrics(B, PrevB),\n\tlists:foldl(\n\t\tfun(OrphanH, OrphanHeight) ->\n\t\t\tar_watchdog:block_orphaned(OrphanH, OrphanHeight),\n\t\t\tOrphanHeight + 1\n\t\tend,\n\t\tForkRootB#block.height + 1,\n\t\tOrphans\n\t),\n\tar_chain_stats:log_fork(Orphans, ForkRootB),\n\trecord_vdf_metrics(B, PrevB),\n\treturn_orphaned_txs_to_mempool(CurrentH, ForkRootB#block.indep_hash),\n\tlists:foldl(\n\t\tfun (CurrentB, start) ->\n\t\t\t\tCurrentB;\n\t\t\t(CurrentB, _CurrentPrevB) ->\n\t\t\t\tWallets = CurrentB#block.wallet_list,\n\t\t\t\t%% Use twice the depth requested on join so that we can serve\n\t\t\t\t%% the wallet trees to the joining nodes.\n\t\t\t\tok = ar_wallets:set_current(\n\t\t\t\t\tWallets, CurrentB#block.height, 
ar_block:get_consensus_window_size() * 2),\n\t\t\t\tCurrentB\n\t\tend,\n\t\tstart,\n\t\tlists:reverse([B | PrevBlocks])\n\t),\n\tar_disk_cache:write_block(B),\n\tBlockTXs = B#block.txs,\n\tar_mempool:drop_txs(BlockTXs, false, false),\n\tgen_server:cast(self(), {filter_mempool, ar_mempool:get_all_txids()}),\n\t{BlockAnchors, RecentTXMap} = get_block_anchors_and_recent_txs_map(BlockTXPairs),\n\tHeight = B#block.height,\n\t{Rate, ScheduledRate} =\n\t\tcase Height >= ar_fork:height_2_5() of\n\t\t\ttrue ->\n\t\t\t\t{B#block.usd_to_ar_rate, B#block.scheduled_usd_to_ar_rate};\n\t\t\tfalse ->\n\t\t\t\t{?INITIAL_USD_TO_AR((Height + 1))(), ?INITIAL_USD_TO_AR((Height + 1))()}\n\t\tend,\n\tAddedBlocks = tl(lists:reverse([B | [PrevB2 || PrevB2 <- PrevBlocks]])),\n\tAddedBIElements = [block_index_entry(Blck) || Blck <- AddedBlocks],\n\tOrphanCount = length(Orphans),\n\tar_block_index:update(AddedBIElements, OrphanCount),\n\tRecentBI2 = lists:sublist(RecentBI, ?BLOCK_INDEX_HEAD_LEN),\n\tar_data_sync:add_tip_block(BlockTXPairs, RecentBI2),\n\tar_header_sync:add_tip_block(B, RecentBI2),\n\tlists:foreach(\n\t\tfun(PrevB3) ->\n\t\t\tar_header_sync:add_block(PrevB3),\n\t\t\tar_disk_cache:write_block(PrevB3)\n\t\tend,\n\t\ttl(lists:reverse(PrevBlocks))\n\t),\n\n\tar_storage:update_block_index(B#block.height, OrphanCount, AddedBIElements),\n\tar_storage:store_reward_history_part(AddedBlocks),\n\tar_storage:store_block_time_history_part(AddedBlocks, ForkRootB),\n\tets:insert(node_state, [\n\t\t{recent_block_index,\tRecentBI2},\n\t\t{recent_max_block_size, get_max_block_size(RecentBI2)},\n\t\t{current,\t\t\t\tB#block.indep_hash},\n\t\t{timestamp,\t\t\t\tB#block.timestamp},\n\t\t{wallet_list,\t\t\tB#block.wallet_list},\n\t\t{height,\t\t\t\tB#block.height},\n\t\t{hash,\t\t\t\t\tB#block.hash},\n\t\t{reward_pool,\t\t\tB#block.reward_pool},\n\t\t{diff_pair,\t\t\t\tar_difficulty:diff_pair(B)},\n\t\t{cumulative_diff,\t\tB#block.cumulative_diff},\n\t\t{last_retarget,\t\t\tB#block.last_retarget},\n\t\t{weave_size,\t\t\tB#block.weave_size},\n\t\t{nonce_limiter_info,\tB#block.nonce_limiter_info},\n\t\t{block_txs_pairs,\t\tBlockTXPairs},\n\t\t{block_anchors,\t\t\tBlockAnchors},\n\t\t{recent_txs_map,\t\tRecentTXMap},\n\t\t{usd_to_ar_rate,\t\tRate},\n\t\t{scheduled_usd_to_ar_rate, ScheduledRate},\n\t\t{price_per_gib_minute, B#block.price_per_gib_minute},\n\t\t{kryder_plus_rate_multiplier, B#block.kryder_plus_rate_multiplier},\n\t\t{denomination, B#block.denomination},\n\t\t{redenomination_height, B#block.redenomination_height},\n\t\t{scheduled_price_per_gib_minute, B#block.scheduled_price_per_gib_minute},\n\t\t{merkle_rebase_support_threshold, get_merkle_rebase_threshold(B)}\n\t]),\n\tSearchSpaceUpperBound = ar_node:get_partition_upper_bound(RecentBI),\n\tar_events:send(node_state, {search_space_upper_bound, SearchSpaceUpperBound}),\n\tar_events:send(node_state, {new_tip, B, PrevB}),\n\tar_events:send(node_state, {checkpoint_block,\n\t\tar_block_cache:get_checkpoint_block(RecentBI)}),\n\tmaybe_reset_miner(State).\n\nlog_applied_block(B) ->\n\tPartition1 = ar_node:get_partition_number(B#block.recall_byte),\n\tPartition2 = ar_node:get_partition_number(B#block.recall_byte2),\n\tNumChunks = case {Partition1, Partition2} of\n\t\t{undefined, undefined} ->\n\t\t\t0;\n\t\t{undefined, _} ->\n\t\t\t1;\n\t\t{_, undefined} ->\n\t\t\t1;\n\t\t_ ->\n\t\t\t2\n\tend,\n\t?LOG_INFO([\n\t\t{event, applied_block},\n\t\t{indep_hash, ar_util:encode(B#block.indep_hash)},\n\t\t{height, B#block.height}, {partition1, Partition1}, {partition2, 
Partition2},\n\t\t{num_chunks, NumChunks}\n\t]).\n\nlog_tip(B) ->\n\t?LOG_INFO([{event, new_tip_block}, {indep_hash, ar_util:encode(B#block.indep_hash)},\n\t\t\t{height, B#block.height}, {weave_size, B#block.weave_size},\n\t\t\t{reward_addr, ar_util:encode(B#block.reward_addr)}]).\n\nmaybe_report_n_confirmations(B, BI) ->\n\tN = 10,\n\tLastNBlocks = lists:sublist(BI, N),\n\tcase length(LastNBlocks) == N of\n\t\ttrue ->\n\t\t\t{H, _, _} = lists:last(LastNBlocks),\n\t\t\tar_watchdog:block_received_n_confirmations(H, B#block.height - N + 1);\n\t\tfalse ->\n\t\t\tdo_nothing\n\tend.\n\nrecord_economic_metrics(B, PrevB) ->\n\tcase B#block.height >= ar_fork:height_2_5() of\n\t\tfalse ->\n\t\t\tok;\n\t\ttrue ->\n\t\t\trecord_economic_metrics2(B, PrevB)\n\tend.\n\nrecord_economic_metrics2(B, PrevB) ->\n\t{PoA1Diff, Diff} = ar_difficulty:diff_pair(B),\n\tprometheus_gauge:set(log_diff, [poa1], ar_retarget:switch_to_log_diff(PoA1Diff)),\n\tprometheus_gauge:set(log_diff, [poa2], ar_retarget:switch_to_log_diff(Diff)),\n\tprometheus_gauge:set(network_hashrate, ar_difficulty:get_hash_rate_fixed_ratio(B)),\n\tprometheus_gauge:set(endowment_pool, B#block.reward_pool),\n\tprometheus_gauge:set(kryder_plus_rate_multiplier, B#block.kryder_plus_rate_multiplier),\n\tPeriod_200_Years = 200 * 365 * 24 * 60 * 60,\n\tcase B#block.height >= ar_fork:height_2_6() of\n\t\ttrue ->\n\t\t\t#block{ reward_history = RewardHistory } = B,\n\t\t\tRewardHistorySize = length(RewardHistory),\n\t\t\tAverageHashRate = ar_util:safe_divide(lists:sum([HR\n\t\t\t\t\t|| {_, HR, _, _} <- RewardHistory]), RewardHistorySize),\n\t\t\tprometheus_gauge:set(average_network_hash_rate, AverageHashRate),\n\t\t\tAverageBlockReward = ar_util:safe_divide(lists:sum([R\n\t\t\t\t\t|| {_, _, R, _} <- RewardHistory]), RewardHistorySize),\n\t\t\tprometheus_gauge:set(average_block_reward, AverageBlockReward),\n\t\t\tprometheus_gauge:set(price_per_gibibyte_minute, B#block.price_per_gib_minute),\n\t\t\tBlockInterval = ar_block_time_history:compute_block_interval(PrevB),\n\t\t\tArgs = {PrevB#block.reward_pool, PrevB#block.debt_supply, B#block.txs,\n\t\t\t\t\tB#block.weave_size, B#block.height, PrevB#block.price_per_gib_minute,\n\t\t\t\t\tPrevB#block.kryder_plus_rate_multiplier_latch,\n\t\t\t\t\tPrevB#block.kryder_plus_rate_multiplier, PrevB#block.denomination,\n\t\t\t\t\tBlockInterval},\n\t\t\t{ExpectedBlockReward,\n\t\t\t\t\t_, _, _, _, Give, Take} = ar_pricing:get_miner_reward_endowment_pool_debt_supply(Args),\n\t\t\tprometheus_gauge:set(endowment_pool_take, Take),\n\t\t\tprometheus_gauge:set(endowment_pool_give, Give),\n\t\t\tprometheus_gauge:set(expected_block_reward, ExpectedBlockReward),\n\t\t\tLegacyPricePerGibibyte = ar_pricing:get_storage_cost(?MiB * 1024,\n\t\t\t\t\tos:system_time(second), PrevB#block.usd_to_ar_rate, B#block.height),\n\t\t\tprometheus_gauge:set(legacy_price_per_gibibyte_minute, LegacyPricePerGibibyte),\n\t\t\tprometheus_gauge:set(available_supply,\n\t\t\t\t\t?TOTAL_SUPPLY - B#block.reward_pool + B#block.debt_supply),\n\t\t\tprometheus_gauge:set(debt_supply, B#block.debt_supply);\n\t\tfalse ->\n\t\t\tok\n\tend,\n\tcase catch ar_pricing:get_expected_min_decline_rate(B#block.timestamp,\n\t\t\tPeriod_200_Years, B#block.reward_pool, B#block.weave_size, B#block.usd_to_ar_rate,\n\t\t\tB#block.height) of\n\t\t{'EXIT', _} ->\n\t\t\t?LOG_ERROR([{event, failed_to_compute_expected_min_decline_rate}]);\n\t\t{RateDivisor, RateDividend} 
->\n\t\t\tprometheus_gauge:set(expected_minimum_200_years_storage_costs_decline_rate,\n\t\t\t\t\tar_util:safe_divide(RateDivisor, RateDividend))\n\tend,\n\tcase catch ar_pricing:get_expected_min_decline_rate(B#block.timestamp,\n\t\t\tPeriod_200_Years, B#block.reward_pool, B#block.weave_size, {1, 10},\n\t\t\tB#block.height) of\n\t\t{'EXIT', _} ->\n\t\t\t?LOG_ERROR([{event, failed_to_compute_expected_min_decline_rate2}]);\n\t\t{RateDivisor2, RateDividend2} ->\n\t\t\tprometheus_gauge:set(\n\t\t\t\t\texpected_minimum_200_years_storage_costs_decline_rate_10_usd_ar,\n\t\t\t\t\tar_util:safe_divide(RateDivisor2, RateDividend2))\n\tend.\n\nrecord_vdf_metrics(#block{ height = Height } = B, PrevB) ->\n\tcase Height >= ar_fork:height_2_6() of\n\t\ttrue ->\n\t\t\tStepNumber = ar_block:vdf_step_number(B),\n\t\t\tPrevBStepNumber = ar_block:vdf_step_number(PrevB),\n\t\t\tprometheus_gauge:set(block_vdf_time, StepNumber - PrevBStepNumber);\n\t\tfalse ->\n\t\t\tok\n\tend.\n\nreturn_orphaned_txs_to_mempool(H, H) ->\n\tok;\nreturn_orphaned_txs_to_mempool(H, BaseH) ->\n\t#block{ txs = TXs, previous_block = PrevH } = ar_block_cache:get(block_cache, H),\n\tlists:foreach(fun(TX) ->\n\t\tar_events:send(tx, {orphaned, TX}),\n\t\tar_events:send(tx, {ready_for_mining, TX}),\n\t\t%% Add it to the mempool here even though we have triggered an event - processes\n\t\t%% do not handle their own events.\n\t\tar_mempool:add_tx(TX, ready_for_mining)\n\tend, TXs),\n\treturn_orphaned_txs_to_mempool(PrevH, BaseH).\n\n%% @doc Stop the current mining session and optionally start a new one,\n%% depending on the automine setting.\nmaybe_reset_miner(#{ mine_until_height := {height, TargetHeight} } = State) ->\n\tcase ar_node:get_height() >= TargetHeight of\n\t\ttrue ->\n\t\t\tmaybe_reset_miner(State#{ mine_until_height => undefined, automine => false });\n\t\tfalse ->\n\t\t\tstart_mining(State)\n\tend;\nmaybe_reset_miner(#{ miner_state := MinerState, automine := false } = State) ->\n\tcase MinerState of\n\t\tundefined ->\n\t\t\tok;\n\t\t_ ->\n\t\t\t?MINING_SERVER:pause()\n\tend,\n\tState#{ miner_state => undefined };\nmaybe_reset_miner(State) ->\n\tstart_mining(State).\n\nstart_mining(State) ->\n\tDiffPair = get_current_diff(),\n\t[{_, MerkleRebaseThreshold}] = ets:lookup(node_state,\n\t\t\tmerkle_rebase_support_threshold),\n\t[{_, Height}] = ets:lookup(node_state, height),\n\t?MINING_SERVER:start_mining({DiffPair, MerkleRebaseThreshold, Height}),\n\tcase maps:get(miner_state, State) of\n\t\tundefined ->\n\t\t\tState#{ miner_state => running };\n\t\trunning ->\n\t\t\t?MINING_SERVER:set_difficulty(DiffPair),\n\t\t\t?MINING_SERVER:set_merkle_rebase_threshold(MerkleRebaseThreshold),\n\t\t\t?MINING_SERVER:set_height(Height),\n\t\t\tState\n\tend.\n\nget_current_diff() ->\n\tget_current_diff(os:system_time(second)).\n\nget_current_diff(TS) ->\n\tProps =\n\t\tets:select(\n\t\t\tnode_state,\n\t\t\t[{{'$1', '$2'},\n\t\t\t\t[{'or',\n\t\t\t\t\t{'==', '$1', height},\n\t\t\t\t\t{'==', '$1', diff_pair},\n\t\t\t\t\t{'==', '$1', last_retarget},\n\t\t\t\t\t{'==', '$1', timestamp}}], ['$_']}]\n\t\t),\n\tHeight = proplists:get_value(height, Props),\n\tDiffPair = proplists:get_value(diff_pair, Props),\n\tLastRetarget = proplists:get_value(last_retarget, Props),\n\tPrevTS = proplists:get_value(timestamp, Props),\n\tar_retarget:maybe_retarget(Height + 1, DiffPair, TS, LastRetarget, PrevTS).\n\nget_merkle_rebase_threshold(PrevB) ->\n\tcase PrevB#block.height + 1 == ar_fork:height_2_7() of\n\t\ttrue ->\n\t\t\tPrevB#block.weave_size;\n\t\t_ 
->\n\t\t\tPrevB#block.merkle_rebase_support_threshold\n\tend.\n\ncollect_mining_transactions(Limit) ->\n\tcollect_mining_transactions(Limit, ar_mempool:get_priority_set(), []).\n\ncollect_mining_transactions(0, _Set, TXs) ->\n\tTXs;\ncollect_mining_transactions(Limit, Set, TXs) ->\n\tcase gb_sets:is_empty(Set) of\n\t\ttrue ->\n\t\t\tTXs;\n\t\tfalse ->\n\t\t\t{{_Utility, TXID, Status}, Set2} = gb_sets:take_largest(Set),\n\t\t\tcase Status of\n\t\t\t\tready_for_mining ->\n\t\t\t\t\tTX = ar_mempool:get_tx(TXID),\n\t\t\t\t\tcollect_mining_transactions(Limit - 1, Set2, [TX | TXs]);\n\t\t\t\t_ ->\n\t\t\t\t\tcollect_mining_transactions(Limit, Set2, TXs)\n\t\t\tend\n\tend.\n\nrecord_processing_time(StartTimestamp) ->\n\tProcessingTime = timer:now_diff(erlang:timestamp(), StartTimestamp) / 1000000,\n\tprometheus_histogram:observe(block_processing_time, ProcessingTime).\n\npriority(apply_block) ->\n\t{1, 1};\npriority({work_complete, _, _, _, _, _}) ->\n\t{2, 1};\npriority({cache_missing_txs, _, _}) ->\n\t{3, 1};\npriority(_) ->\n\t{os:system_time(second), 1}.\n\nread_hash_list_2_0_for_1_0_blocks() ->\n\tFork_2_0 = ar_fork:height_2_0(),\n\tcase Fork_2_0 > 0 of\n\t\ttrue ->\n\t\t\tFile = filename:join([\"genesis_data\", \"hash_list_1_0\"]),\n\t\t\t{ok, Binary} = file:read_file(File),\n\t\t\tHL = lists:map(fun ar_util:decode/1, jiffy:decode(Binary)),\n\t\t\tFork_2_0 = length(HL),\n\t\t\tHL;\n\t\tfalse ->\n\t\t\t[]\n\tend.\n\nstart_from_state([#block{} = GenesisB]) ->\n\tRewardHistory = GenesisB#block.reward_history,\n\tBlockTimeHistory = GenesisB#block.block_time_history,\n\tBI = [ar_util:block_index_entry_from_block(GenesisB)],\n\tself() ! {join_from_state, 0, BI, [GenesisB#block{\n\t\treward_history = RewardHistory,\n\t\tblock_time_history = BlockTimeHistory\n\t}], not_set}.\nstart_from_state(BI, Height) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tstart_from_state(BI, Height, Config#config.start_from_state).\n\nstart_from_state(BI, Height, CustomDir) ->\n\tcase ar_node:read_recent_blocks(BI,\n\t\t\tmin(length(BI) - 1, ?START_FROM_STATE_SEARCH_DEPTH), CustomDir) of\n\t\tnot_found ->\n\t\t\t?LOG_ERROR([{event, start_from_state}, {reason, block_headers_not_found}]),\n\t\t\tblock_headers_not_found;\n\t\t{Skipped, Blocks} ->\n\t\t\tBI2 = lists:nthtail(Skipped, BI),\n\t\t\tHeight2 = Height - Skipped,\n\n\t\t\tRewardHistoryBI = ar_rewards:interim_reward_history_bi(Height, BI2),\n\n\t\t\tBlockTimeHistoryBI = lists:sublist(BI2,\n\t\t\t\t\tar_block_time_history:history_length() + ar_block:get_consensus_window_size()),\n\t\t\tcase {ar_storage:read_reward_history(RewardHistoryBI, CustomDir),\n\t\t\t\t\tar_storage:read_block_time_history(Height2, BlockTimeHistoryBI, CustomDir)} of\n\t\t\t\t{not_found, _} ->\n\t\t\t\t\t?LOG_ERROR([{event, start_from_state_error},\n\t\t\t\t\t\t\t{reason, reward_history_not_found},\n\t\t\t\t\t\t\t{height, Height2},\n\t\t\t\t\t\t\t{block_index, length(BI2)},\n\t\t\t\t\t\t\t{reward_history, length(RewardHistoryBI)}]),\n\t\t\t\t\treward_history_not_found;\n\t\t\t\t{_, not_found} ->\n\t\t\t\t\t?LOG_ERROR([{event, start_from_state_error},\n\t\t\t\t\t\t\t{reason, block_time_history_not_found},\n\t\t\t\t\t\t\t{height, Height2},\n\t\t\t\t\t\t\t{block_index, length(BI2)},\n\t\t\t\t\t\t\t{block_time_history, length(BlockTimeHistoryBI)}]),\n\t\t\t\t\tblock_time_history_not_found;\n\t\t\t\t{RewardHistory, BlockTimeHistory} ->\n\t\t\t\t\tBlocks2 = ar_rewards:set_reward_history(Blocks, RewardHistory),\n\t\t\t\t\tBlocks3 = ar_block_time_history:set_history(Blocks2, 
BlockTimeHistory),\n\t\t\t\t\tself() ! {join_from_state, Height2, BI2, Blocks3, CustomDir},\n\t\t\t\t\tok\n\t\t\tend\n\tend.\n\nset_poa_caches([]) ->\n\t[];\nset_poa_caches([B | Blocks]) ->\n\t[set_poa_cache(B) | set_poa_caches(Blocks)].\n\nset_poa_cache(B) ->\n\tPoA1 = B#block.poa,\n\tPoA2 = B#block.poa2,\n\tMiningAddress = B#block.reward_addr,\n\tPackingDifficulty = B#block.packing_difficulty,\n\tReplicaFormat = B#block.replica_format,\n\tNonce = B#block.nonce,\n\tRecallByte1 = B#block.recall_byte,\n\tRecallByte2 = B#block.recall_byte2,\n\tPacking = ar_block:get_packing(PackingDifficulty, MiningAddress, ReplicaFormat),\n\tPoACache = compute_poa_cache(B, PoA1, RecallByte1, Nonce, Packing),\n\tB2 = B#block{ poa_cache = PoACache },\n\t%% Compute PoA2 cache if PoA2 is present.\n\tcase RecallByte2 of\n\t\tundefined ->\n\t\t\tB2;\n\t\t_ ->\n\t\t\tPoA2Cache = compute_poa_cache(B, PoA2, RecallByte2, Nonce, Packing),\n\t\t\tB2#block{ poa2_cache = PoA2Cache }\n\tend.\n\ncompute_poa_cache(#block{ height = 0 }, _PoA, _RecallByte, _Nonce, _Packing) ->\n\tundefined;\ncompute_poa_cache(B, PoA, RecallByte, Nonce, Packing) ->\n\tPackingDifficulty = B#block.packing_difficulty,\n\tSubChunkIndex = ar_block:get_sub_chunk_index(PackingDifficulty, Nonce),\n\t{BlockStart, BlockEnd, TXRoot} = ar_block_index:get_block_bounds(RecallByte),\n\tBlockSize = BlockEnd - BlockStart,\n\tChunkID =\n\t\tcase PoA#poa.unpacked_chunk of\n\t\t\t<<>> ->\n\t\t\t\tArgs = {BlockStart, RecallByte, TXRoot, BlockSize, PoA, Packing,\n\t\t\t\t\tSubChunkIndex, not_set},\n\t\t\t\t{true, ComputedChunkID} = ar_poa:validate(Args),\n\t\t\t\tComputedChunkID;\n\t\t\t_ ->\n\t\t\t\tar_tx:generate_chunk_id(PoA#poa.unpacked_chunk)\n\t\tend,\n\t{{BlockStart, RecallByte, TXRoot, BlockSize, Packing, SubChunkIndex}, ChunkID}.\n\ndump_mempool(TXs, MempoolSize) ->\n\tSerializedTXs = maps:map(fun(_, {TX, St}) -> {ar_serialize:tx_to_binary(TX), St} end, TXs),\n\tcase ar_storage:write_term(mempool, {SerializedTXs, MempoolSize}) of\n\t\tok ->\n\t\t\tok;\n\t\t{error, Reason} ->\n\t\t\t?LOG_ERROR([{event, failed_to_dump_mempool}, {reason, Reason}])\n\tend.\n\nhandle_found_solution(Args, PrevB, State, IsRebase) ->\n\t{Source, Solution, PoACache, PoA2Cache} = Args,\n\t#mining_solution{\n\t\tlast_step_checkpoints = LastStepCheckpoints,\n\t\tmining_address = MiningAddress,\n\t\tseed = NonceLimiterSeed,\n\t\tnext_seed = NonceLimiterNextSeed,\n\t\tnext_vdf_difficulty = NonceLimiterNextVDFDifficulty,\n\t\tnonce = Nonce,\n\t\tnonce_limiter_output = NonceLimiterOutput,\n\t\tpartition_number = PartitionNumber,\n\t\tpoa1 = PoA1,\n\t\tpoa2 = PoA2,\n\t\tpreimage = SolutionPreimage,\n\t\trecall_byte1 = RecallByte1,\n\t\trecall_byte2 = RecallByte2,\n\t\tsolution_hash = SolutionH,\n\t\tstart_interval_number = IntervalNumber,\n\t\tstep_number = StepNumber,\n\t\tsteps = SuppliedSteps,\n\t\tpacking_difficulty = PackingDifficulty,\n\t\treplica_format = ReplicaFormat\n\t} = Solution,\n\t?LOG_INFO([{event, handle_found_solution}, {solution, ar_util:encode(SolutionH)}]),\n\tMerkleRebaseThreshold = ?MERKLE_REBASE_SUPPORT_THRESHOLD,\n\n\t#block{ indep_hash = PrevH, timestamp = PrevTimestamp,\n\t\t\twallet_list = WalletList,\n\t\t\tnonce_limiter_info = PrevNonceLimiterInfo,\n\t\t\theight = PrevHeight } = PrevB,\n\tHeight = PrevHeight + 1,\n\tNow = os:system_time(second),\n\tMaxDeviation = ar_block:get_max_timestamp_deviation(),\n\tTimestamp =\n\t\tcase Now < PrevTimestamp - MaxDeviation of\n\t\t\ttrue ->\n\t\t\t\t?LOG_WARNING([{event, clock_out_of_sync},\n\t\t\t\t\t\t{previous_block, 
ar_util:encode(PrevH)},\n\t\t\t\t\t\t{previous_block_timestamp, PrevTimestamp},\n\t\t\t\t\t\t{our_time, Now},\n\t\t\t\t\t\t{max_allowed_deviation, MaxDeviation}]),\n\t\t\t\tPrevTimestamp - MaxDeviation;\n\t\t\tfalse ->\n\t\t\t\tNow\n\t\tend,\n\tIsBanned = ar_node_utils:is_account_banned(MiningAddress,\n\t\t\tar_wallets:get(WalletList, MiningAddress)),\n\t%% Check the solution is ahead of the previous solution on the timeline.\n\tNonceLimiterInfo = #nonce_limiter_info{ global_step_number = StepNumber,\n\t\t\toutput = NonceLimiterOutput,\n\t\t\tprev_output = PrevNonceLimiterInfo#nonce_limiter_info.output },\n\tPassesTimelineCheck =\n\t\tcase IsBanned of\n\t\t\ttrue ->\n\t\t\t\tar_mining_server:log_prepare_solution_failure(Solution, rejected,\n\t\t\t\t\t\tmining_address_banned, Source, []),\n\t\t\t\t{false, address_banned};\n\t\t\tfalse ->\n\t\t\t\tcase ar_block:validate_replica_format(Height, PackingDifficulty, ReplicaFormat) of\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\tar_mining_server:log_prepare_solution_failure(Solution,\n\t\t\t\t\t\t\t\trejected, invalid_packing_difficulty, Source, []),\n\t\t\t\t\t\t{false, invalid_packing_difficulty};\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tcase ar_nonce_limiter:is_ahead_on_the_timeline(NonceLimiterInfo,\n\t\t\t\t\t\t\t\tPrevNonceLimiterInfo) of\n\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\tSolutionVDF =\n\t\t\t\t\t\t\t\t\tNonceLimiterInfo#nonce_limiter_info.global_step_number,\n\t\t\t\t\t\t\t\tPrevBlockVDF =\n\t\t\t\t\t\t\t\t\tPrevNonceLimiterInfo#nonce_limiter_info.global_step_number,\n\t\t\t\t\t\t\t\tar_mining_server:log_prepare_solution_failure(Solution,\n\t\t\t\t\t\t\t\t\tstale, stale_solution, Source, [\n\t\t\t\t\t\t\t\t\t\t{solution_vdf, SolutionVDF},\n\t\t\t\t\t\t\t\t\t\t{prev_block_vdf, PrevBlockVDF}\n\t\t\t\t\t\t\t\t\t]),\n\t\t\t\t\t\t\t\t{false, timeline};\n\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\ttrue\n\t\t\t\t\t\tend\n\t\t\t\tend\n\t\tend,\n\n\t%% Check solution seed.\n\t#nonce_limiter_info{ next_seed = PrevNextSeed,\n\t\t\tnext_vdf_difficulty = PrevNextVDFDifficulty,\n\t\t\tglobal_step_number = PrevStepNumber,\n\t\t\tseed = PrevSeed } = PrevNonceLimiterInfo,\n\tPrevIntervalNumber = PrevStepNumber div ar_nonce_limiter:get_reset_frequency(),\n\tPassesSeedCheck =\n\t\tcase PassesTimelineCheck of\n\t\t\t{false, Reason} ->\n\t\t\t\t{false, Reason};\n\t\t\ttrue ->\n\t\t\t\tcase {IntervalNumber, NonceLimiterNextSeed, NonceLimiterNextVDFDifficulty, NonceLimiterSeed}\n\t\t\t\t\t\t== {PrevIntervalNumber, PrevNextSeed, PrevNextVDFDifficulty, PrevSeed} of\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\tar_mining_server:log_prepare_solution_failure(Solution, stale,\n\t\t\t\t\t\t\tvdf_seed_data_does_not_match_current_block, Source, [\n\t\t\t\t\t\t\t\t{output, ar_util:encode(NonceLimiterOutput)},\n\t\t\t\t\t\t\t\t{interval_number, IntervalNumber},\n\t\t\t\t\t\t\t\t{prev_interval_number, PrevIntervalNumber},\n\t\t\t\t\t\t\t\t{nonce_limiter_next_seed, ar_util:encode(NonceLimiterNextSeed)},\n\t\t\t\t\t\t\t\t{nonce_limiter_seed, ar_util:encode(NonceLimiterSeed)},\n\t\t\t\t\t\t\t\t{prev_nonce_limiter_next_seed, ar_util:encode(PrevNextSeed)},\n\t\t\t\t\t\t\t\t{prev_nonce_limiter_seed, ar_util:encode(PrevSeed)},\n\t\t\t\t\t\t\t\t{nonce_limiter_next_vdf_difficulty, NonceLimiterNextVDFDifficulty},\n\t\t\t\t\t\t\t\t{prev_nonce_limiter_next_vdf_difficulty, PrevNextVDFDifficulty}\n\t\t\t\t\t\t\t]),\n\t\t\t\t\t\t{false, seed_data};\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\ttrue\n\t\t\t\tend\n\t\tend,\n\n\t%% Check solution difficulty\n\tPrevDiffPair = ar_difficulty:diff_pair(PrevB),\n\tLastRetarget = 
PrevB#block.last_retarget,\n\tPrevTS = PrevB#block.timestamp,\n\tDiffPair = {_PoA1Diff, Diff} = ar_retarget:maybe_retarget(PrevB#block.height + 1,\n\t\t\tPrevDiffPair, Timestamp, LastRetarget, PrevTS),\n\tPassesDiffCheck =\n\t\tcase PassesSeedCheck of\n\t\t\t{false, Reason2} ->\n\t\t\t\t{false, Reason2};\n\t\t\ttrue ->\n\t\t\t\tcase ar_node_utils:solution_passes_diff_check(Solution, DiffPair) of\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\tar_mining_server:log_prepare_solution_failure(Solution, partial,\n\t\t\t\t\t\t\t\tdoes_not_pass_diff_check, Source, []),\n\t\t\t\t\t\t{false, diff};\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\ttrue\n\t\t\t\tend\n\t\tend,\n\n\tRewardKey = case ar_wallet:load_key(MiningAddress) of\n\t\tnot_found ->\n\t\t\t?LOG_WARNING([{event, mined_block_but_no_mining_key_found}, {node, node()},\n\t\t\t\t\t{mining_address, ar_util:encode(MiningAddress)}]),\n\t\t\tar:console(\"WARNING. Can't find key ~s~n\", [ar_util:encode(MiningAddress)]),\n\t\t\tnot_found;\n\t\tKey ->\n\t\t\tKey\n\tend,\n\tPassesKeyCheck =\n\t\tcase PassesDiffCheck of\n\t\t\t{false, Reason3} ->\n\t\t\t\t{false, Reason3};\n\t\t\ttrue ->\n\t\t\t\tcase RewardKey of\n\t\t\t\t\tnot_found ->\n\t\t\t\t\t\tar_mining_server:log_prepare_solution_failure(Solution, rejected,\n\t\t\t\t\t\t\t\tmissing_key_file, Source, []),\n\t\t\t\t\t\t{false, wallet_not_found};\n\t\t\t\t\t_ ->\n\t\t\t\t\t\ttrue\n\t\t\t\tend\n\t\tend,\n\n\tCorrectRebaseThreshold =\n\t\tcase PassesKeyCheck of\n\t\t\t{false, Reason4} ->\n\t\t\t\t{false, Reason4};\n\t\t\ttrue ->\n\t\t\t\tcase get_merkle_rebase_threshold(PrevB) of\n\t\t\t\t\tMerkleRebaseThreshold ->\n\t\t\t\t\t\ttrue;\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tar_mining_server:log_prepare_solution_failure(Solution, rejected,\n\t\t\t\t\t\t\t\tinvalid_merkle_rebase_threshold, Source, []),\n\t\t\t\t\t\t{false, rebase_threshold}\n\t\t\t\tend\n\t\tend,\n\n\tPrevCDiff = PrevB#block.cumulative_diff,\n\tCDiff = ar_difficulty:next_cumulative_diff(PrevCDiff, Diff, Height),\n\tNoDoubleSigning =\n\t\tcase CorrectRebaseThreshold of\n\t\t\t{false, Reason5} ->\n\t\t\t\t{false, Reason5};\n\t\t\ttrue ->\n\t\t\t\tcase check_no_double_signing(CDiff, PrevCDiff, MiningAddress, Height) of\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\t{false, double_signing};\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\ttrue\n\t\t\t\tend\n\t\tend,\n\n\t%% Check steps and step checkpoints.\n\tHaveSteps =\n\t\tcase NoDoubleSigning of\n\t\t\t{false, Reason6} ->\n\t\t\t\t?LOG_WARNING([{event, ignore_mining_solution},\n\t\t\t\t\t\t{reason, Reason6},\n\t\t\t\t\t\t{solution, ar_util:encode(SolutionH)}]),\n\t\t\t\tfalse;\n\t\t\ttrue ->\n\t\t\t\tar_nonce_limiter:get_steps(PrevStepNumber, StepNumber, PrevNextSeed,\n\t\t\t\t\t\tPrevNextVDFDifficulty)\n\t\tend,\n\tHaveSteps2 =\n\t\tcase HaveSteps of\n\t\t\tnot_found ->\n\t\t\t\t% TODO verify\n\t\t\t\tSuppliedSteps;\n\t\t\t_ ->\n\t\t\t\tHaveSteps\n\t\tend,\n\n\t%% Pack, build, and sign block.\n\tcase HaveSteps2 of\n\t\tfalse ->\n\t\t\t{noreply, State};\n\t\tnot_found ->\n\t\t\t?LOG_WARNING([{event, did_not_find_steps_for_mined_block},\n\t\t\t\t\t{seed, ar_util:encode(PrevNextSeed)}, {prev_step_number, PrevStepNumber},\n\t\t\t\t\t{step_number, StepNumber}]),\n\t\t\tar_mining_server:log_prepare_solution_failure(Solution, rejected,\n\t\t\t\t\tvdf_steps_not_found, Source, []),\n\t\t\t{noreply, State};\n\t\t[NonceLimiterOutput | _] = Steps ->\n\t\t\t{Seed, NextSeed, PartitionUpperBound, NextPartitionUpperBound, VDFDifficulty}\n\t\t\t\t= ar_nonce_limiter:get_seed_data(StepNumber, PrevB),\n\t\t\tLastStepCheckpoints2 =\n\t\t\t\tcase LastStepCheckpoints 
of\n\t\t\t\t\tEmpty when Empty == not_found orelse Empty == [] ->\n\t\t\t\t\t\tPrevOutput =\n\t\t\t\t\t\t\tcase Steps of\n\t\t\t\t\t\t\t\t[_, PrevStepOutput | _] ->\n\t\t\t\t\t\t\t\t\tPrevStepOutput;\n\t\t\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t\t\tPrevNonceLimiterInfo#nonce_limiter_info.output\n\t\t\t\t\t\t\tend,\n\t\t\t\t\t\tPrevOutput2 = ar_nonce_limiter:maybe_add_entropy(\n\t\t\t\t\t\t\t\tPrevOutput, PrevStepNumber, StepNumber, PrevNextSeed),\n\t\t\t\t\t\t{ok, NonceLimiterOutput, Checkpoints} = ar_nonce_limiter:compute(\n\t\t\t\t\t\t\t\tStepNumber, PrevOutput2, VDFDifficulty),\n\t\t\t\t\t\tCheckpoints;\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tLastStepCheckpoints\n\t\t\t\tend,\n\t\t\tNextVDFDifficulty = ar_block:compute_next_vdf_difficulty(PrevB),\n\t\t\tNonceLimiterInfo2 = NonceLimiterInfo#nonce_limiter_info{ seed = Seed,\n\t\t\t\t\tnext_seed = NextSeed, partition_upper_bound = PartitionUpperBound,\n\t\t\t\t\tnext_partition_upper_bound = NextPartitionUpperBound,\n\t\t\t\t\tvdf_difficulty = VDFDifficulty,\n\t\t\t\t\tnext_vdf_difficulty = NextVDFDifficulty,\n\t\t\t\t\tlast_step_checkpoints = LastStepCheckpoints2,\n\t\t\t\t\tsteps = Steps },\n\t\t\t{Rate, ScheduledRate} = ar_pricing:recalculate_usd_to_ar_rate(PrevB),\n\t\t\t{PricePerGiBMinute, ScheduledPricePerGiBMinute} =\n\t\t\t\t\tar_pricing:recalculate_price_per_gib_minute(PrevB),\n\t\t\tDenomination = PrevB#block.denomination,\n\t\t\t{Denomination2, RedenominationHeight2} = ar_pricing:may_be_redenominate(PrevB),\n\t\t\tPricePerGiBMinute2 = ar_pricing:redenominate(PricePerGiBMinute, Denomination,\n\t\t\t\t\tDenomination2),\n\t\t\tScheduledPricePerGiBMinute2 = ar_pricing:redenominate(ScheduledPricePerGiBMinute,\n\t\t\t\t\tDenomination, Denomination2),\n\t\t\tUnsignedB = pack_block_with_transactions(#block{\n\t\t\t\tnonce = Nonce,\n\t\t\t\tprevious_block = PrevH,\n\t\t\t\ttimestamp = Timestamp,\n\t\t\t\tlast_retarget =\n\t\t\t\t\tcase ar_retarget:is_retarget_height(Height) of\n\t\t\t\t\t\ttrue -> Timestamp;\n\t\t\t\t\t\tfalse -> PrevB#block.last_retarget\n\t\t\t\t\tend,\n\t\t\t\tdiff = Diff,\n\t\t\t\theight = Height,\n\t\t\t\thash = SolutionH,\n\t\t\t\thash_list_merkle = ar_block:compute_hash_list_merkle(PrevB),\n\t\t\t\treward_addr = ar_wallet:to_address(RewardKey),\n\t\t\t\ttags = [],\n\t\t\t\tcumulative_diff = CDiff,\n\t\t\t\tprevious_cumulative_diff = PrevB#block.cumulative_diff,\n\t\t\t\tpoa = PoA1,\n\t\t\t\tpoa_cache = PoACache,\n\t\t\t\tusd_to_ar_rate = Rate,\n\t\t\t\tscheduled_usd_to_ar_rate = ScheduledRate,\n\t\t\t\tpacking_2_5_threshold = 0,\n\t\t\t\tstrict_data_split_threshold = PrevB#block.strict_data_split_threshold,\n\t\t\t\thash_preimage = SolutionPreimage,\n\t\t\t\trecall_byte = RecallByte1,\n\t\t\t\tprevious_solution_hash = PrevB#block.hash,\n\t\t\t\tpartition_number = PartitionNumber,\n\t\t\t\tnonce_limiter_info = NonceLimiterInfo2,\n\t\t\t\tpoa2 = case PoA2 of not_set -> #poa{}; _ -> PoA2 end,\n\t\t\t\tpoa2_cache = PoA2Cache,\n\t\t\t\trecall_byte2 = RecallByte2,\n\t\t\t\treward_key = element(2, RewardKey),\n\t\t\t\tprice_per_gib_minute = PricePerGiBMinute2,\n\t\t\t\tscheduled_price_per_gib_minute = ScheduledPricePerGiBMinute2,\n\t\t\t\tdenomination = Denomination2,\n\t\t\t\tredenomination_height = RedenominationHeight2,\n\t\t\t\tdouble_signing_proof = may_be_get_double_signing_proof(PrevB, State),\n\t\t\t\tmerkle_rebase_support_threshold = MerkleRebaseThreshold,\n\t\t\t\tchunk_hash = get_chunk_hash(PoA1, Height),\n\t\t\t\tchunk2_hash = get_chunk_hash(PoA2, Height),\n\t\t\t\tpacking_difficulty = PackingDifficulty,\n\t\t\t\treplica_format 
= ReplicaFormat,\n\t\t\t\tunpacked_chunk_hash = get_unpacked_chunk_hash(\n\t\t\t\t\t\tPoA1, PackingDifficulty, RecallByte1),\n\t\t\t\tunpacked_chunk2_hash = get_unpacked_chunk_hash(\n\t\t\t\t\t\tPoA2, PackingDifficulty, RecallByte2)\n\t\t\t}, PrevB),\n\n\t\t\tBlockTimeHistory2 = lists:sublist(\n\t\t\t\tar_block_time_history:update_history(UnsignedB, PrevB),\n\t\t\t\tar_block_time_history:history_length() + ar_block:get_consensus_window_size()),\n\t\t\tUnsignedB2 = UnsignedB#block{\n\t\t\t\tblock_time_history = BlockTimeHistory2,\n\t\t\t\tblock_time_history_hash = ar_block_time_history:hash(BlockTimeHistory2)\n\t\t\t},\n\t\t\tSignedH = ar_block:generate_signed_hash(UnsignedB2),\n\t\t\tPrevCDiff = PrevB#block.cumulative_diff,\n\t\t\tSignaturePreimage = ar_block:get_block_signature_preimage(CDiff, PrevCDiff,\n\t\t\t\t\t<< (PrevB#block.hash)/binary, SignedH/binary >>, Height),\n\t\t\tassert_key_type(RewardKey, Height),\n\t\t\tSignature = ar_wallet:sign(element(1, RewardKey), SignaturePreimage),\n\t\t\tH = ar_block:indep_hash2(SignedH, Signature),\n\t\t\tB = UnsignedB2#block{ indep_hash = H, signature = Signature },\n\t\t\tar_watchdog:mined_block(H, Height, PrevH),\n\t\t\t?LOG_INFO([{event, mined_block}, {indep_hash, ar_util:encode(H)},\n\t\t\t\t\t{solution, ar_util:encode(SolutionH)}, {height, Height},\n\t\t\t\t\t{step_number, StepNumber}, {steps, length(Steps)},\n\t\t\t\t\t{txs, length(B#block.txs)},\n\t\t\t\t\t{recall_byte1, B#block.recall_byte},\n\t\t\t\t\t{recall_byte2, B#block.recall_byte2},\n\t\t\t\t\t{chunks,\n\t\t\t\t\t\tcase B#block.recall_byte2 of\n\t\t\t\t\t\t\tundefined -> 1;\n\t\t\t\t\t\t\t_ -> 2\n\t\t\t\t\t\tend}]),\n\t\t\tprometheus_gauge:inc(mining_solution, [success]),\n\t\t\tar_block_cache:add(block_cache, B),\n\t\t\tar_events:send(solution, {accepted,\n\t\t\t\t\t#{ indep_hash => H, source => Source, is_rebase => IsRebase }}),\n\t\t\tapply_block(update_solution_cache(H, Args, State));\n\t\t_Steps ->\n\t\t\tar_mining_server:log_prepare_solution_failure(\n\t\t\t\tSolution, rejected, bad_vdf, Source, [\n\t\t\t\t\t{event, bad_steps},\n\t\t\t\t\t{prev_block, ar_util:encode(PrevH)},\n\t\t\t\t\t{step_number, StepNumber},\n\t\t\t\t\t{prev_step_number, PrevStepNumber},\n\t\t\t\t\t{prev_next_seed, ar_util:encode(PrevNextSeed)},\n\t\t\t\t\t{output, ar_util:encode(NonceLimiterOutput)}\n\t\t\t\t]),\n\t\t\t{noreply, State}\n\tend.\n\nassert_key_type(RewardKey, Height) ->\n\tcase Height >= ar_fork:height_2_9() of\n\t\tfalse ->\n\t\t\tcase RewardKey of\n\t\t\t\t{{?RSA_KEY_TYPE, _, _}, {?RSA_KEY_TYPE, Pub}} ->\n\t\t\t\t\ttrue = byte_size(Pub) == 512,\n\t\t\t\t\tok;\n\t\t\t\t_ ->\n\t\t\t\t\texit(invalid_reward_key)\n\t\t\tend;\n\t\ttrue ->\n\t\t\tcase RewardKey of\n\t\t\t\t{{?RSA_KEY_TYPE, _, _}, {?RSA_KEY_TYPE, Pub}} ->\n\t\t\t\t\ttrue = byte_size(Pub) == 512,\n\t\t\t\t\tok;\n\t\t\t\t{{?ECDSA_KEY_TYPE, _, _}, {?ECDSA_KEY_TYPE, Pub}} ->\n\t\t\t\t\ttrue = byte_size(Pub) == ?ECDSA_PUB_KEY_SIZE,\n\t\t\t\t\tok;\n\t\t\t\t_ ->\n\t\t\t\t\texit(invalid_reward_key)\n\t\t\tend\n\tend.\n\ncheck_no_double_signing(CDiff, PrevCDiff, MiningAddress, Height) ->\n\tBlocks = ar_block_cache:get_blocks_by_miner(block_cache, MiningAddress),\n\tnot lists:any(\n\t\tfun(B) ->\n\t\t\tcase ar_block:get_double_signing_condition(\n\t\t\t\t\tB#block.cumulative_diff,\n\t\t\t\t\tB#block.previous_cumulative_diff,\n\t\t\t\t\tCDiff,\n\t\t\t\t\tPrevCDiff) of\n\t\t\t\ttrue ->\n\t\t\t\t\t?LOG_WARNING([{event, avoiding_double_signing},\n\t\t\t\t\t\t\t{block, ar_util:encode(B#block.indep_hash)},\n\t\t\t\t\t\t\t{height, 
B#block.height},\n\t\t\t\t\t\t\t{new_height, Height},\n\t\t\t\t\t\t\t{cdiff, B#block.cumulative_diff},\n\t\t\t\t\t\t\t{prev_cdiff, B#block.previous_cumulative_diff},\n\t\t\t\t\t\t\t{new_cdiff, CDiff},\n\t\t\t\t\t\t\t{new_prev_cdiff, PrevCDiff}]),\n\t\t\t\t\ttrue;\n\t\t\t\tfalse ->\n\t\t\t\t\tfalse\n\t\t\tend\n\t\tend,\n\t\tBlocks).\n\nupdate_solution_cache(H, Args, State) ->\n\t%% Maintain a cache of mining solutions for potential reuse in rebasing.\n\t%%\n\t%% - We only want to cache 5 solutions at max.\n\t%% - If we exceed 5, we remove the oldest one from the solution_cache.\n\t%% - solution_cache_records is only used to track which solution is oldest.\n\t#{ solution_cache := Map, solution_cache_records := Q } = State,\n\tcase maps:is_key(H, Map) of\n\t\ttrue ->\n\t\t\tState;\n\t\tfalse ->\n\t\t\tQ2 = queue:in(H, Q),\n\t\t\tMap2 = maps:put(H, Args, Map),\n\t\t\t{Map3, Q3} =\n\t\t\t\tcase queue:len(Q2) > 5 of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\t{{value, H2}, Q4} = queue:out(Q2),\n\t\t\t\t\t\t{maps:remove(H2, Map2), Q4};\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\t{Map2, Q2}\n\t\t\t\tend,\n\t\t\tState#{ solution_cache => Map3, solution_cache_records => Q3 }\n\tend.\n\nmay_be_report_double_signing(B, State) ->\n\t#block{ indep_hash = H, hash = SolutionH, cumulative_diff = CDiff1,\n\t\t\tprevious_cumulative_diff = PrevCDiff1,\n\t\t\tprevious_solution_hash = PrevSolutionH1,\n\t\t\treward_key = {_, Key},\n\t\t\tsignature = Signature1 } = B,\n\tcase ar_block_cache:get_by_solution_hash(block_cache,\n\t\t\tSolutionH, H, CDiff1, PrevCDiff1) of\n\t\tnot_found ->\n\t\t\tState;\n\t\tCacheB ->\n\t\t\t#block{\n\t\t\t\t\thash = SolutionH,\n\t\t\t\t\tcumulative_diff = CDiff2,\n\t\t\t\t\tprevious_cumulative_diff = PrevCDiff2,\n\t\t\t\t\tprevious_solution_hash = PrevSolutionH2,\n\t\t\t\t\treward_key = {_, Key},\n\t\t\t\t\tsignature = Signature2 } = CacheB,\n\t\t\tcase ar_block:get_double_signing_condition(CDiff1, PrevCDiff1, CDiff2, PrevCDiff2) of\n\t\t\t\ttrue ->\n\t\t\t\t\tPreimage1 = << PrevSolutionH1/binary,\n\t\t\t\t\t\t\t(ar_block:generate_signed_hash(B))/binary >>,\n\t\t\t\t\tPreimage2 = << PrevSolutionH2/binary,\n\t\t\t\t\t\t\t(ar_block:generate_signed_hash(CacheB))/binary >>,\n\t\t\t\t\tProof = {Key, Signature1, CDiff1, PrevCDiff1, Preimage1,\n\t\t\t\t\t\t\tSignature2, CDiff2, PrevCDiff2, Preimage2},\n\t\t\t\t\t?LOG_INFO([{event, report_double_signing},\n\t\t\t\t\t\t\t{key, ar_util:encode(Key)},\n\t\t\t\t\t\t\t{block1, ar_util:encode(H)},\n\t\t\t\t\t\t\t{block2, ar_util:encode(CacheB#block.indep_hash)},\n\t\t\t\t\t\t\t{height1, B#block.height},\n\t\t\t\t\t\t\t{height2, CacheB#block.height}]),\n\t\t\t\t\tcache_double_signing_proof(Proof, State);\n\t\t\t\tfalse ->\n\t\t\t\t\tState\n\t\t\tend\n\tend.\n\ncache_double_signing_proof(Proof, State) ->\n\tMap = maps:get(double_signing_proofs, State, #{}),\n\tKey = element(1, Proof),\n\tAddr = ar_wallet:hash_pub_key(Key),\n\tcase is_map_key(Addr, Map) of\n\t\ttrue ->\n\t\t\tState;\n\t\tfalse ->\n\t\t\tMap2 = maps:put(Addr, {os:system_time(second), Proof}, Map),\n\t\t\tState#{ double_signing_proofs => Map2 }\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc A simple list term checker. The idea is to get some information\n%% regarding the content of a list (e.g. 
the number of occurrences of each item).\n%% @end\n%%--------------------------------------------------------------------\n-spec checker(List) -> Return when\n\tList :: [term()],\n\tReturn :: {Length, Counter},\n\tLength :: non_neg_integer(),\n\tCounter :: #{ term() => pos_integer() }.\n\nchecker(List) ->\n\tchecker(List, length(List), #{}).\n\nchecker([], Length, Buffer) ->\n\t{Length, Buffer};\nchecker([H|T], Length, Buffer) ->\n\tV = maps:get(H, Buffer, 0),\n\tchecker(T, Length, Buffer#{ H => V+1 }).\n\nchecker_test() ->\n\t?assertEqual({0, #{}}, checker([])),\n\t?assertEqual({3, #{ true => 3 }}, checker([true, true, true])),\n\t?assertEqual({3, #{ true => 2, false => 1}}, checker([true, true, false])),\n\t?assertEqual({3, #{ true => 1, false => 2}}, checker([true, false, false])).\n"
  },
  {
    "path": "apps/arweave/src/ar_nonce_limiter.erl",
    "content": "-module(ar_nonce_limiter).\n\n-behaviour(gen_server).\n\n-export([start_link/0, account_tree_initialized/1, encode_session_key/1, session_key/1,\n\t\tis_ahead_on_the_timeline/2,\n\t\tget_current_step_number/0, get_current_step_number/1, get_step_triplets/3,\n\t\tget_seed_data/2, get_step_checkpoints/2, get_step_checkpoints/4, get_steps/4,\n\t\tget_seed/1, get_active_partition_upper_bound/2,\n\t\tget_reset_frequency/0, get_entropy_reset_point/2,\n\t\tvalidate_last_step_checkpoints/3, request_validation/3,\n\t\tget_or_init_nonce_limiter_info/1, get_or_init_nonce_limiter_info/2,\n\t\tapply_external_update/2, get_session/1, get_current_session/0,\n\t\tget_current_sessions/0,\n\t\tcompute/3,\n\t\tmaybe_add_entropy/4, mix_seed/2]).\n\n-export([init/1, handle_cast/2, handle_call/3, handle_info/2, terminate/2]).\n\n-include(\"ar.hrl\").\n-include(\"ar_vdf.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include(\"ar_consensus.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-record(state, {\n\tcurrent_session_key,\n\tsessions = gb_sets:new(),\n\tsession_by_key = #{}, % {NextSeed, StartIntervalNumber, NextVDFDifficulty} => #vdf_session\n\tworker,\n\tworker_monitor_ref,\n\tautocompute = true,\n\tcomputing = false,\n\tlast_external_update = {not_set, 0},\n\temit_initialized_event = true\n}).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Start the server.\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\naccount_tree_initialized(Blocks) ->\n\tgen_server:cast(?MODULE, {account_tree_initialized, Blocks}).\n\nencode_session_key({NextSeed, StartIntervalNumber, NextVDFDifficulty}) ->\n\t{ar_util:safe_encode(NextSeed), StartIntervalNumber, NextVDFDifficulty};\nencode_session_key(SessionKey) ->\n\tSessionKey.\n\n%% @doc Return true if the first solution is above the second one according\n%% to the protocol ordering.\n-ifdef(LOCALNET).\nis_ahead_on_the_timeline(NonceLimiterInfo1, NonceLimiterInfo2) ->\n\t#nonce_limiter_info{ global_step_number = N1 } = NonceLimiterInfo1,\n\t#nonce_limiter_info{ global_step_number = N2 } = NonceLimiterInfo2,\n\tN1 >= N2.\n-else.\nis_ahead_on_the_timeline(NonceLimiterInfo1, NonceLimiterInfo2) ->\n\t#nonce_limiter_info{ global_step_number = N1 } = NonceLimiterInfo1,\n\t#nonce_limiter_info{ global_step_number = N2 } = NonceLimiterInfo2,\n\tN1 > N2.\n-endif.\n\nsession_key(#nonce_limiter_info{ \n\t\tnext_seed = NextSeed, global_step_number = StepNumber,\n\t\tnext_vdf_difficulty = NextVDFDifficulty }) ->\n\tsession_key(NextSeed, StepNumber, NextVDFDifficulty).\n\n%% @doc Return the nonce limiter session with the given key.\nget_session(SessionKey) ->\n\tgen_server:call(?MODULE, {get_session, SessionKey}, ?DEFAULT_CALL_TIMEOUT).\n\n%% @doc Return {SessionKey, Session} for the current VDF session.\nget_current_session() ->\n\tgen_server:call(?MODULE, get_current_session, ?DEFAULT_CALL_TIMEOUT).\n\n%% @doc Return a list of up to two {SessionKey, Session} pairs\n%% where the first pair corresponds to the current VDF session\n%% and the second pair is its previous session, if any.\nget_current_sessions() ->\n\tgen_server:call(?MODULE, get_current_sessions, ?DEFAULT_CALL_TIMEOUT).\n\n%% @doc Return the latest known step number.\nget_current_step_number() ->\n\tgen_server:call(?MODULE, get_current_step_number, ?DEFAULT_CALL_TIMEOUT).\n\n%% @doc Return the latest known step number 
in the session of the given (previous) block.\n%% Return not_found if the session is not found.\nget_current_step_number(B) ->\n\tSessionKey = session_key(B#block.nonce_limiter_info),\n\tgen_server:call(?MODULE, {get_current_step_number, SessionKey}, ?DEFAULT_CALL_TIMEOUT).\n\n%% @doc Return {Output, StepNumber, PartitionUpperBound} for up to N latest steps\n%% from the VDF session of Info, if any. If PrevOutput is among the N latest steps,\n%% return only the steps strictly above PrevOutput.\nget_step_triplets(Info, PrevOutput, N) ->\n\tSessionKey = session_key(Info),\n\tSteps = gen_server:call(?MODULE, {get_latest_step_triplets, SessionKey, N}, ?DEFAULT_CALL_TIMEOUT),\n\tfilter_step_triplets(Steps, [PrevOutput, Info#nonce_limiter_info.output]).\n\n-ifdef(LOCALNET).\nassert_step_number_is_ahead(StepNumber, PrevStepNumber) ->\n\ttrue = StepNumber >= PrevStepNumber.\n-else.\nassert_step_number_is_ahead(StepNumber, PrevStepNumber) ->\n\ttrue = StepNumber > PrevStepNumber.\n-endif.\n\n%% @doc Return {Seed, NextSeed, PartitionUpperBound, NextPartitionUpperBound, VDFDifficulty}\n%% for the block mined at StepNumber considering its previous block PrevB.\n%% The previous block's independent hash, weave size, and VDF difficulty\n%% become the new NextSeed, NextPartitionUpperBound, and NextVDFDifficulty\n%% accordingly when we cross the next reset line.\n%% Note: next_vdf_difficulty is not part of the seed data as it is computed using the\n%% block_time_history - which is a heavier operation handled separate from the (quick) seed data\n%% retrieval\nget_seed_data(StepNumber, PrevB) ->\n\tNonceLimiterInfo = PrevB#block.nonce_limiter_info,\n\t#nonce_limiter_info{\n\t\tglobal_step_number = PrevStepNumber,\n\t\tseed = Seed, next_seed = NextSeed,\n\t\tpartition_upper_bound = PartitionUpperBound,\n\t\tnext_partition_upper_bound = NextPartitionUpperBound,\n\t\t%% VDF difficulty in use at the previous block\n\t\tvdf_difficulty = VDFDifficulty,\n\t\t%% Next VDF difficulty scheduled at the previous block\n\t\tnext_vdf_difficulty = PrevNextVDFDifficulty\n\t} = NonceLimiterInfo,\n\tassert_step_number_is_ahead(StepNumber, PrevStepNumber),\n\tcase get_entropy_reset_point(PrevStepNumber, StepNumber) of\n\t\tnone ->\n\t\t\t%% Entropy reset line was not crossed between previous and current block\n\t\t\t{ Seed, NextSeed, PartitionUpperBound, NextPartitionUpperBound, VDFDifficulty };\n\t\t_ ->\n\t\t\t%% Entropy reset line was crossed between previous and current block\n\t\t\t{\n\t\t\t\tNextSeed, PrevB#block.indep_hash,\n\t\t\t\tNextPartitionUpperBound, PrevB#block.weave_size,\n\t\t\t\t%% The next VDF difficulty that was scheduled at the previous block\n\t\t\t\t%% (PrevNextVDFDifficulty) was applied when we crossed the entropy reset line and\n\t\t\t\t%% is now the current VDF difficulty.\n\t\t\t\tPrevNextVDFDifficulty\n\t\t\t}\n\tend.\n\n%% @doc Return the cached checkpoints for the given step. 
Return not_found if\n%% none found.\nget_step_checkpoints(StepNumber, NextSeed, StartIntervalNumber, NextVDFDifficulty) ->\n\tSessionKey = {NextSeed, StartIntervalNumber, NextVDFDifficulty},\n\tget_step_checkpoints(StepNumber, SessionKey).\nget_step_checkpoints(StepNumber, SessionKey) ->\n\tgen_server:call(?MODULE, {get_step_checkpoints, StepNumber, SessionKey}, ?DEFAULT_CALL_TIMEOUT).\n\n%% @doc Return the entropy seed of the given session.\n%% Return not_found if the VDF session is not found.\nget_seed(SessionKey) ->\n\tgen_server:call(?MODULE, {get_seed, SessionKey}, ?DEFAULT_CALL_TIMEOUT).\n\n%% @doc Return the active partition upper bound for the given step (chosen among\n%% session's upper_bound and next_upper_bound depending on whether the step number has\n%% reached the entropy reset point).\n%% Return not_found if the VDF session is not found.\nget_active_partition_upper_bound(StepNumber, SessionKey) ->\n\tgen_server:call(?MODULE, {get_active_partition_upper_bound, StepNumber, SessionKey},\n\t\t\t?DEFAULT_CALL_TIMEOUT).\n\n%% @doc Return the steps of the given interval. The steps are chosen\n%% according to the protocol. Return not_found if the corresponding hash chain is not\n%% computed yet.\n-ifdef(LOCALNET).\n%% On localnet two blocks may be mined on the same step so this function may be called\n%% with the same StepNumber for both the StartStepNumber and EndStepNumber arguments.\nget_steps(StepNumber, StepNumber, _NextSeed, _NextVDFDifficulty) ->\n\tnot_found;\nget_steps(StartStepNumber, EndStepNumber, NextSeed, NextVDFDifficulty)\n\t\twhen EndStepNumber > StartStepNumber ->\n\tSessionKey = session_key(NextSeed, StartStepNumber, NextVDFDifficulty),\n\tgen_server:call(?MODULE, {get_steps, StartStepNumber, EndStepNumber, SessionKey},\n\t\t\t?DEFAULT_CALL_TIMEOUT).\n-else.\nget_steps(StartStepNumber, EndStepNumber, NextSeed, NextVDFDifficulty)\n\t\twhen EndStepNumber > StartStepNumber ->\n\tSessionKey = session_key(NextSeed, StartStepNumber, NextVDFDifficulty),\n\tgen_server:call(?MODULE, {get_steps, StartStepNumber, EndStepNumber, SessionKey},\n\t\t\t?DEFAULT_CALL_TIMEOUT).\n-endif.\n\n%% @doc Quickly validate the checkpoints of the latest step.\nvalidate_last_step_checkpoints(B = #block{ nonce_limiter_info = #nonce_limiter_info{\n\t\tglobal_step_number = StepNumber } },\n\t\tPrevB = #block{ nonce_limiter_info = #nonce_limiter_info{\n\t\t\t\tglobal_step_number = StepNumber } }, _PrevOutput) ->\n\tvalidate_last_step_checkpoints_same_step_number(B, PrevB);\nvalidate_last_step_checkpoints(#block{\n\t\tnonce_limiter_info = #nonce_limiter_info{ output = Output,\n\t\t\t\tglobal_step_number = StepNumber, seed = Seed,\n\t\t\t\tvdf_difficulty = VDFDifficulty,\n\t\t\t\tlast_step_checkpoints = [Output | _] = LastStepCheckpoints } }, PrevB,\n\t\t\t\tPrevOutput)\n\t\twhen length(LastStepCheckpoints) == ?VDF_CHECKPOINT_COUNT_IN_STEP ->\n\tPrevInfo = get_or_init_nonce_limiter_info(PrevB),\n\t#nonce_limiter_info{ global_step_number = PrevBStepNumber } = PrevInfo,\n\tSessionKey = session_key(PrevInfo),\n\tcase get_step_checkpoints(StepNumber, SessionKey) of\n\t\tLastStepCheckpoints ->\n\t\t\t{true, cache_match};\n\t\tnot_found ->\n\t\t\tPrevOutput2 = ar_nonce_limiter:maybe_add_entropy(\n\t\t\t\tPrevOutput, PrevBStepNumber, StepNumber, Seed),\n\t\t\tPrevStepNumber = StepNumber - 1,\n\t\t\t{ok, Config} = arweave_config:get_env(),\n\t\t\tThreadCount = Config#config.max_nonce_limiter_last_step_validation_thread_count,\n\t\t\tcase verify_no_reset(PrevStepNumber, PrevOutput2, 
1,\n\t\t\t\t\tlists:reverse(LastStepCheckpoints), ThreadCount, VDFDifficulty) of\n\t\t\t\t{true, _Steps} ->\n\t\t\t\t\ttrue;\n\t\t\t\tfalse ->\n\t\t\t\t\tfalse\n\t\t\tend;\n\t\tCachedSteps ->\n\t\t\t{false, cache_mismatch, CachedSteps}\n\tend;\nvalidate_last_step_checkpoints(_B, _PrevB, _PrevOutput) ->\n\tfalse.\n\n-ifdef(LOCALNET).\n%% On localnet two blocks may be mined on the same step. In this case\n%% we validate the last step checkpoints come from the previous block.\nvalidate_last_step_checkpoints_same_step_number(\n\t\t#block{\n\t\t\tnonce_limiter_info = #nonce_limiter_info{\n\t\t\t\toutput = Output,\n\t\t\t\tlast_step_checkpoints = [Output | _] = LastStepCheckpoints\n\t\t\t}\n\t\t},\n\t\tPrevB\n\t)\n\t\twhen length(LastStepCheckpoints) == ?VDF_CHECKPOINT_COUNT_IN_STEP ->\n\tPrevInfo = get_or_init_nonce_limiter_info(PrevB),\n\tPrevInfo#nonce_limiter_info.last_step_checkpoints == LastStepCheckpoints;\nvalidate_last_step_checkpoints_same_step_number(_B, _PrevB) ->\n\tfalse.\n-else.\nvalidate_last_step_checkpoints_same_step_number(_B, _PrevB) ->\n\tfalse.\n-endif.\n\nget_reset_frequency() ->\n\t?NONCE_LIMITER_RESET_FREQUENCY.\n\n%% @doc Determine whether StepNumber has passed the entropy reset line. If it has return the\n%% reset line, otherwise return none.\nget_entropy_reset_point(PrevStepNumber, StepNumber) ->\n\tResetLine = (PrevStepNumber div ar_nonce_limiter:get_reset_frequency() + 1)\n\t\t\t* ar_nonce_limiter:get_reset_frequency(),\n\tcase ResetLine > StepNumber of\n\t\ttrue ->\n\t\t\tnone;\n\t\tfalse ->\n\t\t\tResetLine\n\tend.\n\n%% @doc Conditionally add entropy to PrevOutput if the configured number of steps have\n%% passed. See ar_nonce_limiter:get_reset_frequency() for more details.\nmaybe_add_entropy(PrevOutput, PrevStepNumber, StepNumber, Seed) ->\n\tcase get_entropy_reset_point(PrevStepNumber, StepNumber) of\n\t\tStepNumber ->\n\t\t\tmix_seed(PrevOutput, Seed);\n\t\t_ ->\n\t\t\tPrevOutput\n\tend.\n\n%% @doc Add entropy to an earlier VDF output to mitigate the impact of a miner with a\n%% fast VDF compute. See ar_nonce_limiter:get_reset_frequency() for more details.\nmix_seed(PrevOutput, Seed) ->\n\tSeedH = crypto:hash(sha256, Seed),\n\tmix_seed2(PrevOutput, SeedH).\n\nmix_seed2(PrevOutput, SeedH) ->\n\tcrypto:hash(sha256, << PrevOutput/binary, SeedH/binary >>).\n\n%% @doc Validate the nonce limiter chain between two blocks in the background.\n%% Assume the seeds are correct and the first block is above the second one\n%% according to the protocol.\n%% Emit {nonce_limiter, {invalid, H, ErrorCode}} or {nonce_limiter, {valid, H}}.\n-ifdef(LOCALNET).\n%% On localnet two blocks may be mined on the same step. 
In this case\n%% we validate steps come from the previous block.\nrequest_validation_same_step_number(H, #nonce_limiter_info{ steps = Steps },\n\t\t#nonce_limiter_info{ steps = Steps }) ->\n\tspawn(fun() -> ar_events:send(nonce_limiter, {valid, H}) end);\nrequest_validation_same_step_number(H, _Info, _PrevInfo) ->\n\tspawn(fun() -> ar_events:send(nonce_limiter, {invalid, H, 1}) end).\n-else.\nrequest_validation_same_step_number(H, _Info, _PrevInfo) ->\n\tspawn(fun() -> ar_events:send(nonce_limiter, {invalid, H, 1}) end).\n-endif.\n\nrequest_validation(H, #nonce_limiter_info{ global_step_number = N } = Info,\n\t\t#nonce_limiter_info{ global_step_number = N } = PrevInfo) ->\n\trequest_validation_same_step_number(H, Info, PrevInfo);\nrequest_validation(H, #nonce_limiter_info{ output = Output,\n\t\tsteps = [Output | _] = StepsToValidate } = Info, PrevInfo) ->\n\t#nonce_limiter_info{ output = PrevOutput,\n\t\t\tglobal_step_number = PrevStepNumber, vdf_difficulty = PrevVDFDifficulty } = PrevInfo,\n\t#nonce_limiter_info{ output = Output, seed = Seed,\n\t\t\tvdf_difficulty = VDFDifficulty, next_vdf_difficulty = NextVDFDifficulty,\n\t\t\tpartition_upper_bound = UpperBound,\n\t\t\tnext_partition_upper_bound = NextUpperBound, global_step_number = StepNumber,\n\t\t\tsteps = StepsToValidate } = Info,\n\tEntropyResetPoint = get_entropy_reset_point(PrevStepNumber, StepNumber),\n\tSessionKey = session_key(PrevInfo),\n\t%% The steps that fall at the intersection of the PrevStepNumber to StepNumber range\n\t%% and the SessionKey session.\n\tSessionSteps = gen_server:call(?MODULE, {get_session_steps, PrevStepNumber, StepNumber,\n\t\t\tSessionKey}, ?DEFAULT_CALL_TIMEOUT),\n\tNextSessionKey = session_key(Info),\n\n\t%% We need to validate all the steps from PrevStepNumber to StepNumber:\n\t%% PrevStepNumber <--------------------------------------------> StepNumber\n\t%%     PrevOutput x\n\t%%                                      |----------------------| StepsToValidate\n\t%%                |-----------------------------------| SessionSteps\n\t%%                      StartStepNumber x\n\t%%                          StartOutput x\n\t%%                                      |-------------| ComputedSteps\n\t%%                                      --------------> NumAlreadyComputed\n\t%%                                   StartStepNumber2 x\n\t%%                                       StartOutput2 x\n\t%%                                                    |--------| RemainingStepsToValidate\n\t%%\n\t{StartStepNumber, StartOutput, ComputedSteps} =\n\t\tskip_already_computed_steps(PrevStepNumber, StepNumber, PrevOutput,\n\t\t\tStepsToValidate, SessionSteps),\n\t?LOG_INFO([{event, vdf_validation_start}, {block, ar_util:encode(H)},\n\t\t\t{session_key, encode_session_key(SessionKey)},\n\t\t\t{next_session_key, encode_session_key(NextSessionKey)},\n\t\t\t{prev_step_number, PrevStepNumber}, {step_number, StepNumber},\n\t\t\t{start_step_number, StartStepNumber},\n\t\t\t{step_count, StepNumber - PrevStepNumber}, {steps, length(StepsToValidate)},\n\t\t\t{session_steps, length(SessionSteps)}, {prev_vdf_difficulty, PrevVDFDifficulty},\n\t\t\t{vdf_difficulty, VDFDifficulty}, {next_vdf_difficulty, NextVDFDifficulty},\n\t\t\t{pid, self()}]),\n\tcase exclude_computed_steps_from_steps_to_validate(\n\t\t\tlists:reverse(StepsToValidate), ComputedSteps) of\n\t\tinvalid ->\n\t\t\tErrorID = dump_error({PrevStepNumber, StepNumber, StepsToValidate, SessionSteps}),\n\t\t\t?LOG_WARNING([{event, 
nonce_limiter_validation_failed},\n\t\t\t\t\t{step, exclude_computed_steps_from_steps_to_validate},\n\t\t\t\t\t{error_dump, ErrorID}]),\n\t\t\tspawn(fun() -> ar_events:send(nonce_limiter, {invalid, H, 2}) end);\n\n\t\t{[], NumAlreadyComputed} when StartStepNumber + NumAlreadyComputed == StepNumber ->\n\t\t\t%% We've already computed up to StepNumber, so we can use the checkpoints from the\n\t\t\t%% current session\n\t\t\tLastStepCheckpoints = get_step_checkpoints(StepNumber, SessionKey),\n\t\t\tArgs = {StepNumber, SessionKey, NextSessionKey, Seed, UpperBound, NextUpperBound,\n\t\t\t\t\tVDFDifficulty, NextVDFDifficulty, SessionSteps, LastStepCheckpoints},\n\t\t\tgen_server:cast(?MODULE, {validated_steps, Args}),\n\t\t\tspawn(fun() -> ar_events:send(nonce_limiter, {valid, H}) end);\n\n\t\t{_, NumAlreadyComputed} when StartStepNumber + NumAlreadyComputed >= StepNumber ->\n\t\t\tErrorID = dump_error({PrevStepNumber, StepNumber, StepsToValidate, SessionSteps,\n\t\t\t\t\tStartStepNumber, NumAlreadyComputed}),\n\t\t\t?LOG_WARNING([{event, nonce_limiter_validation_failed},\n\t\t\t\t\t{step, exclude_computed_steps_from_steps_to_validate_shift},\n\t\t\t\t\t{start_step_number, StartStepNumber}, {shift2, NumAlreadyComputed},\n\t\t\t\t\t{error_dump, ErrorID}]),\n\t\t\tspawn(fun() -> ar_events:send(nonce_limiter, {invalid, H, 2}) end);\n\t\t{RemainingStepsToValidate, NumAlreadyComputed}\n\t\t  \t\twhen StartStepNumber + NumAlreadyComputed < StepNumber ->\n\t\t\tcase ar_config:use_remote_vdf_server() and not ar_config:compute_own_vdf() of\n\t\t\t\ttrue ->\n\t\t\t\t\t%% Wait for our VDF server(s) to validate the remaining steps.\n\t\t\t\t\t%% Alternatively, the network may abandon this block.\n\t\t\t\t\tar_nonce_limiter_client:maybe_request_sessions(SessionKey),\n\t\t\t\t\tspawn(fun() -> ar_events:send(nonce_limiter, {refuse_validation, H}) end);\n\t\t\t\tfalse ->\n\t\t\t\t\t%% Validate the remaining steps.\n\t\t\t\t\tStartOutput2 = case NumAlreadyComputed of\n\t\t\t\t\t\t\t0 -> StartOutput;\n\t\t\t\t\t\t\t_ -> lists:nth(NumAlreadyComputed, ComputedSteps)\n\t\t\t\t\tend,\n\t\t\t\t\tspawn(fun() ->\n\t\t\t\t\t\tStartStepNumber2 = StartStepNumber + NumAlreadyComputed,\n\t\t\t\t\t\t{ok, Config} = arweave_config:get_env(),\n\t\t\t\t\t\tThreadCount = Config#config.max_nonce_limiter_validation_thread_count,\n\t\t\t\t\t\tResult =\n\t\t\t\t\t\t\tcase is_integer(EntropyResetPoint) andalso\n\t\t\t\t\t\t\t\t\tEntropyResetPoint > StartStepNumber2 of\n\t\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\t\tcatch verify(StartStepNumber2, StartOutput2,\n\t\t\t\t\t\t\t\t\t\t\t?VDF_CHECKPOINT_COUNT_IN_STEP,\n\t\t\t\t\t\t\t\t\t\t\tRemainingStepsToValidate, EntropyResetPoint,\n\t\t\t\t\t\t\t\t\t\t\tcrypto:hash(sha256, Seed), ThreadCount,\n\t\t\t\t\t\t\t\t\t\t\tPrevVDFDifficulty, VDFDifficulty);\n\t\t\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t\t\tcatch verify_no_reset(StartStepNumber2, StartOutput2,\n\t\t\t\t\t\t\t\t\t\t\t?VDF_CHECKPOINT_COUNT_IN_STEP,\n\t\t\t\t\t\t\t\t\t\t\tRemainingStepsToValidate, ThreadCount,\n\t\t\t\t\t\t\t\t\t\t\tVDFDifficulty)\n\t\t\t\t\t\t\tend,\n\t\t\t\t\t\tcase Result of\n\t\t\t\t\t\t\t{'EXIT', Exc} ->\n\t\t\t\t\t\t\t\tErrorID = dump_error(\n\t\t\t\t\t\t\t\t\t{StartStepNumber2, StartOutput2,\n\t\t\t\t\t\t\t\t\t?VDF_CHECKPOINT_COUNT_IN_STEP,\n\t\t\t\t\t\t\t\t\tRemainingStepsToValidate,\n\t\t\t\t\t\t\t\t\tEntropyResetPoint, crypto:hash(sha256, Seed),\n\t\t\t\t\t\t\t\t\tThreadCount, VDFDifficulty}),\n\t\t\t\t\t\t\t\t?LOG_ERROR([{event, nonce_limiter_validation_failed},\n\t\t\t\t\t\t\t\t\t\t{block, 
ar_util:encode(H)},\n\t\t\t\t\t\t\t\t\t\t{start_step_number, StartStepNumber2},\n\t\t\t\t\t\t\t\t\t\t{error_id, ErrorID},\n\t\t\t\t\t\t\t\t\t\t{prev_output, ar_util:encode(StartOutput2)},\n\t\t\t\t\t\t\t\t\t\t{exception, io_lib:format(\"~p\", [Exc])}]),\n\t\t\t\t\t\t\t\tar_events:send(nonce_limiter, {validation_error, H});\n\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\tar_events:send(nonce_limiter, {invalid, H, 3});\n\t\t\t\t\t\t\t{true, ValidatedSteps} ->\n\t\t\t\t\t\t\t\tAllValidatedSteps = ValidatedSteps ++ SessionSteps,\n\t\t\t\t\t\t\t\t%% The last_step_checkpoints in Info were validated as part\n\t\t\t\t\t\t\t\t%% of an earlier call to\n\t\t\t\t\t\t\t\t%% ar_block_pre_validator:pre_validate_nonce_limiter, so\n\t\t\t\t\t\t\t\t%% we can trust them here.\n\t\t\t\t\t\t\t\tLastStepCheckpoints = get_last_step_checkpoints(Info),\n\t\t\t\t\t\t\t\tArgs = {StepNumber, SessionKey, NextSessionKey,\n\t\t\t\t\t\t\t\t\t\tSeed, UpperBound, NextUpperBound,\n\t\t\t\t\t\t\t\t\t\tVDFDifficulty, NextVDFDifficulty,\n\t\t\t\t\t\t\t\t\t\tAllValidatedSteps, LastStepCheckpoints},\n\t\t\t\t\t\t\t\tgen_server:cast(?MODULE, {validated_steps, Args}),\n\t\t\t\t\t\t\t\tar_events:send(nonce_limiter, {valid, H})\n\t\t\t\t\t\tend\n\t\t\t\t\tend)\n\t\t\tend;\n\t\tData ->\n\t\t\tErrorID = dump_error(Data),\n\t\t\tar_events:send(nonce_limiter, {validation_error, H}),\n\t\t\t?LOG_ERROR([{event, unexpected_error_during_nonce_limiter_validation},\n\t\t\t\t\t{error_id, ErrorID}])\n\tend;\nrequest_validation(H, _Info, _PrevInfo) ->\n\tspawn(fun() -> ar_events:send(nonce_limiter, {invalid, H, 4}) end).\n\nget_last_step_checkpoints(Info) ->\n\tInfo#nonce_limiter_info.last_step_checkpoints.\n\nget_or_init_nonce_limiter_info(#block{ height = Height, indep_hash = H } = B) ->\n\tcase Height >= ar_fork:height_2_6() of\n\t\ttrue ->\n\t\t\tB#block.nonce_limiter_info;\n\t\tfalse ->\n\t\t\t{Seed, PartitionUpperBound} =\n\t\t\t\t\tar_node:get_recent_partition_upper_bound_by_prev_h(H),\n\t\t\tget_or_init_nonce_limiter_info(B, Seed, PartitionUpperBound)\n\tend.\n\nget_or_init_nonce_limiter_info(#block{ height = Height } = B, RecentBI) ->\n\tcase Height >= ar_fork:height_2_6() of\n\t\ttrue ->\n\t\t\tB#block.nonce_limiter_info;\n\t\tfalse ->\n\t\t\t{Seed, PartitionUpperBound, _TXRoot}\n\t\t\t\t\t= lists:last(lists:sublist(RecentBI, ?SEARCH_SPACE_UPPER_BOUND_DEPTH)),\n\t\t\tget_or_init_nonce_limiter_info(B, Seed, PartitionUpperBound)\n\tend.\n\n%% @doc Apply the nonce limiter update provided by the configured trusted peer.\napply_external_update(Update, Peer) ->\n\tgen_server:call(?MODULE, {apply_external_update, Update, Peer}, ?DEFAULT_CALL_TIMEOUT).\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\n\ninit([]) ->\n\t?LOG_INFO([{event, nonce_limiter_init}]),\n\tok = ar_events:subscribe(node_state),\n\tState =\n\t\tcase ar_node:is_joined() of\n\t\t\ttrue ->\n\t\t\t\tBlocks = get_blocks(),\n\t\t\t\thandle_initialized(Blocks, #state{});\n\t\t\t_ ->\n\t\t\t\t#state{}\n\t\tend,\n\tcase ar_config:use_remote_vdf_server() and not ar_config:compute_own_vdf() of\n\t\ttrue ->\n\t\t\tgen_server:cast(?MODULE, check_external_vdf_server_input);\n\t\tfalse ->\n\t\t\tok\n\tend,\n\t{ok, start_worker(State#state{ autocompute = ar_config:compute_own_vdf() })}.\n\nget_blocks() ->\n\tB = ar_node:get_current_block(),\n\t[B | get_blocks(B#block.previous_block, 1)].\n\nget_blocks(H, N) ->\n\tcase N >= ar_block:get_consensus_window_size() 
of\n\t\ttrue ->\n\t\t\t[];\n\t\tfalse ->\n\t\t\t#block{} = B = ar_block_cache:get(block_cache, H),\n\t\t\t[B | get_blocks(B#block.previous_block, N + 1)]\n\tend.\n\nhandle_call(get_current_step_number, _From,\n\t\t#state{ current_session_key = undefined } = State) ->\n\t{reply, 0, State};\nhandle_call(get_current_step_number, _From, State) ->\n\t#state{ current_session_key = Key } = State,\n\t#vdf_session{ step_number = StepNumber } = get_session(Key, State),\n\t{reply, StepNumber, State};\n\nhandle_call({get_current_step_number, SessionKey}, _From, State) ->\n\tcase get_session(SessionKey, State) of\n\t\tnot_found ->\n\t\t\t{reply, not_found, State};\n\t\t#vdf_session{ step_number = StepNumber } ->\n\t\t\t{reply, StepNumber, State}\n\tend;\n\nhandle_call({get_latest_step_triplets, SessionKey, N}, _From, State) ->\n\tcase get_session(SessionKey, State) of\n\t\tnot_found ->\n\t\t\t{reply, [], State};\n\t\t#vdf_session{ step_number = StepNumber, steps = Steps,\n\t\t\t\tstep_checkpoints_map = Map,\n\t\t\t\tupper_bound = UpperBound, next_upper_bound = NextUpperBound } ->\n\t\t\t{_, IntervalNumber, _} = SessionKey,\n\t\t\tIntervalStart = IntervalNumber * ar_nonce_limiter:get_reset_frequency(),\n\t\t\tResetPoint = get_entropy_reset_point(IntervalStart, StepNumber),\n\t\t\tTriplets = get_triplets(StepNumber, Steps, ResetPoint, UpperBound,\n\t\t\t\t\tNextUpperBound, N),\n\t\t\t{Triplets2, NSkipped} = filter_step_triplets_with_checkpoints(Triplets, Map),\n\t\t\tcase NSkipped > 0 of\n\t\t\t\ttrue ->\n\t\t\t\t\t?LOG_INFO([{event, missing_step_checkpoints},\n\t\t\t\t\t\t\t{count, NSkipped}]);\n\t\t\t\tfalse ->\n\t\t\t\t\tok\n\t\t\tend,\n\t\t\t{reply, Triplets2, State}\n\tend;\n\nhandle_call({get_step_checkpoints, StepNumber, SessionKey}, _From, State) ->\n\tcase get_session(SessionKey, State) of\n\t\tnot_found ->\n\t\t\t{reply, not_found, State};\n\t\t#vdf_session{ step_checkpoints_map = Map } ->\n\t\t\t{reply, maps:get(StepNumber, Map, not_found), State}\n\tend;\n\nhandle_call({get_seed, SessionKey}, _From, State) ->\n\tcase get_session(SessionKey, State) of\n\t\tnot_found ->\n\t\t\t{reply, not_found, State};\n\t\t#vdf_session{ seed = Seed } ->\n\t\t\t{reply, Seed, State}\n\tend;\n\nhandle_call({get_active_partition_upper_bound, StepNumber, SessionKey}, _From, State) ->\n\tcase get_session(SessionKey, State) of\n\t\tnot_found ->\n\t\t\t{reply, not_found, State};\n\t\t#vdf_session{ upper_bound = UpperBound, next_upper_bound = NextUpperBound } ->\n\t\t\t{_NextSeed, IntervalNumber, _NextVDFDifficulty} = SessionKey,\n\t\t\tIntervalStart = IntervalNumber * ar_nonce_limiter:get_reset_frequency(),\n\t\t\tUpperBound2 =\n\t\t\t\tcase get_entropy_reset_point(IntervalStart, StepNumber) of\n\t\t\t\t\tnone ->\n\t\t\t\t\t\tUpperBound;\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tNextUpperBound\n\t\t\t\tend,\n\t\t\t{reply, UpperBound2, State}\n\tend;\n\nhandle_call({get_steps, StartStepNumber, EndStepNumber, SessionKey}, _From, State) ->\n\tcase get_steps2(StartStepNumber, EndStepNumber, SessionKey, State) of\n\t\tnot_found ->\n\t\t\t{reply, not_found, State};\n\t\tSteps ->\n\t\t\tTakeN = min(?NONCE_LIMITER_MAX_CHECKPOINTS_COUNT, EndStepNumber - StartStepNumber),\n\t\t\t{reply, lists:sublist(Steps, TakeN), State}\n\tend;\n\n%% @doc Get all the steps in the current session that fall between\n%% StartStepNumber+1 and EndStepNumber (inclusive)\nhandle_call({get_session_steps, StartStepNumber, EndStepNumber, SessionKey}, _From, State) ->\n\tSession = get_session(SessionKey, State),\n\t{_, Steps} = get_step_range(Session, 
StartStepNumber + 1, EndStepNumber),\n\t{reply, Steps, State};\n\nhandle_call(get_steps, _From, #state{ current_session_key = undefined } = State) ->\n\t{reply, [], State};\nhandle_call(get_steps, _From, State) ->\n\t#state{ current_session_key = SessionKey } = State,\n\t#vdf_session{ step_number = StepNumber } = get_session(SessionKey, State),\n\t{reply, get_steps2(1, StepNumber, SessionKey, State), State};\n\nhandle_call({apply_external_update, Update, Peer}, _From, State) ->\n\tNow = os:system_time(millisecond),\n\t#nonce_limiter_update{ session_key = SessionKey } = Update,\n\t%% The client consults the latest session key by peer to decide whether to request the\n\t%% missing VDF session when we call ar_nonce_limiter_client:maybe_request_sessions/1\n\t%% during VDF validation.\n\tgen_server:cast(ar_nonce_limiter_client,\n\t\t\t{update_latest_session_key, Peer, SessionKey}),\n\tapply_external_update2(Update, State#state{ last_external_update = {Peer, Now} });\n\nhandle_call({get_session, SessionKey}, _From, State) ->\n\t{reply, get_session(SessionKey, State), State};\n\nhandle_call(get_current_session, _From, State) ->\n\t#state{ current_session_key = CurrentSessionKey } = State,\n\t{reply, {CurrentSessionKey, get_session(CurrentSessionKey, State)}, State};\n\nhandle_call(get_current_sessions, _From, State) ->\n\t#state{ current_session_key = CurrentSessionKey } = State,\n\tSession = get_session(CurrentSessionKey, State),\n\tPreviousSessionKey = Session#vdf_session.prev_session_key,\n\tcase get_session(PreviousSessionKey, State) of\n\t\tnot_found ->\n\t\t\t?LOG_DEBUG([{event, request_current_sessions_missing_previous_session},\n\t\t\t\t\t{current_session_key, encode_session_key(CurrentSessionKey)},\n\t\t\t\t\t{previous_session_key, encode_session_key(PreviousSessionKey)}]),\n\t\t\t{reply, [{CurrentSessionKey, Session}], State};\n\t\tPrevSession ->\n\t\t\t{reply, [{CurrentSessionKey, Session}, {PreviousSessionKey, PrevSession}], State}\n\tend;\n\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {request, Request}]),\n\t{reply, ok, State}.\n\nhandle_cast(check_external_vdf_server_input,\n\t\t#state{ last_external_update = {_, 0} } = State) ->\n\tar_util:cast_after(1000, ?MODULE, check_external_vdf_server_input),\n\t{noreply, State};\nhandle_cast(check_external_vdf_server_input,\n\t\t#state{ last_external_update = {_, Time} } = State) ->\n\tNow = os:system_time(millisecond),\n\tcase Now - Time > 2000 of\n\t\ttrue ->\n\t\t\t?LOG_WARNING([{event, no_message_from_any_vdf_servers},\n\t\t\t\t\t{last_message_seconds_ago, (Now - Time) div 1000}]),\n\t\t\tar_util:cast_after(30000, ?MODULE, check_external_vdf_server_input);\n\t\tfalse ->\n\t\t\tar_util:cast_after(1000, ?MODULE, check_external_vdf_server_input)\n\tend,\n\t{noreply, State};\n\nhandle_cast(initialized, State) ->\n\tgen_server:cast(?MODULE, schedule_step),\n\tcase State#state.emit_initialized_event of\n\t\ttrue ->\n\t\t\tar_events:send(nonce_limiter, initialized);\n\t\tfalse ->\n\t\t\tok\n\tend,\n\t{noreply, State};\n\nhandle_cast({initialize, [PrevB, B | Blocks]}, State) ->\n\tapply_chain(B#block.nonce_limiter_info, PrevB#block.nonce_limiter_info),\n\tgen_server:cast(?MODULE, {apply_tip, B, PrevB}),\n\tgen_server:cast(?MODULE, {initialize, [B | Blocks]}),\n\t{noreply, State};\nhandle_cast({initialize, _}, State) ->\n\tgen_server:cast(?MODULE, initialized),\n\t{noreply, State};\n\nhandle_cast({account_tree_initialized, Blocks}, State) ->\n\t{noreply, 
handle_initialized(lists:sublist(Blocks, ar_block:get_consensus_window_size()), State)};\n\nhandle_cast({apply_tip, B, PrevB}, State) ->\n\t{noreply, apply_tip2(B, PrevB, State)};\n\nhandle_cast({validated_steps, Args}, State) ->\n\t{StepNumber, SessionKey, NextSessionKey, Seed, UpperBound, NextUpperBound,\n\t\t\tVDFDifficulty, NextVDFDifficulty, Steps, LastStepCheckpoints} = Args,\n\tcase get_session(SessionKey, State) of\n\t\tnot_found ->\n\t\t\t%% The corresponding fork origin should have just dropped below the\n\t\t\t%% checkpoint height.\n\t\t\t?LOG_WARNING([{event, session_not_found_for_validated_steps},\n\t\t\t\t\t{session_key, encode_session_key(SessionKey)},\n\t\t\t\t\t{interval, element(2, SessionKey)},\n\t\t\t\t\t{vdf_difficulty, element(3, SessionKey)}]),\n\t\t\t{noreply, State};\n\t\tSession ->\n\t\t\t#vdf_session{ step_number = CurrentStepNumber } = Session,\n\t\t\tSession2 =\n\t\t\t\tcase CurrentStepNumber < StepNumber of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\t%% Update the current Session with all the newly validated steps and\n\t\t\t\t\t\t%% as well as the checkpoints associated with step StepNumber.\n\t\t\t\t\t\t%% This branch occurs when a block is received that is ahead of us\n\t\t\t\t\t\t%% in the VDF chain.\n\t\t\t\t\t\t?LOG_DEBUG([{event, new_vdf_step}, {source, validated_steps},\n\t\t\t\t\t\t\t{session_key, encode_session_key(SessionKey)},\n\t\t\t\t\t\t\t{step_number, StepNumber}]),\n\t\t\t\t\t\t{_, Steps2} =\n\t\t\t\t\t\t\tget_step_range(Steps, StepNumber, CurrentStepNumber + 1, StepNumber),\n\t\t\t\t\t\tupdate_session(Session, StepNumber,\n\t\t\t\t\t\t\t\t#{ StepNumber => LastStepCheckpoints }, Steps2,\n\t\t\t\t\t\t\t\tvalidated_steps);\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\tSession\n\t\t\t\tend,\n\t\t\tState2 = cache_session(State, SessionKey, Session2),\n\t\t\tState3 = cache_block_session(State2, NextSessionKey, SessionKey,\n\t\t\t\t\t#{}, Seed, UpperBound, NextUpperBound, VDFDifficulty, NextVDFDifficulty),\n\t\t\t{noreply, State3}\n\tend;\n\nhandle_cast(schedule_step, #state{ autocompute = false } = State) ->\n\t{noreply, State#state{ computing = false }};\nhandle_cast(schedule_step, State) ->\n\t{noreply, schedule_step(State#state{ computing = true })};\n\nhandle_cast(compute_step, State) ->\n\t{noreply, schedule_step(State)};\n\nhandle_cast(reset_and_pause, State) ->\n\t{noreply, State#state{ autocompute = false, computing = false,\n\t\t\tcurrent_session_key = undefined, sessions = gb_sets:new(), session_by_key = #{} }};\n\nhandle_cast(turn_off_initialized_event, State) ->\n\t{noreply, State#state{ emit_initialized_event = false }};\n\nhandle_cast(Cast, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {cast, Cast}]),\n\t{noreply, State}.\n\nhandle_info({event, node_state, {new_tip, B, PrevB}}, State) ->\n\t{noreply, apply_tip(B, PrevB, State)};\n\nhandle_info({event, node_state, {checkpoint_block, _B}},\n\t\t#state{ current_session_key = undefined } = State) ->\n\t%% The server has been restarted after a crash and a base block has not been\n\t%% applied yet.\n\t{noreply, State};\nhandle_info({event, node_state, {checkpoint_block, B}}, State) ->\n\tcase B#block.height < ar_fork:height_2_6() of\n\t\ttrue ->\n\t\t\t{noreply, State};\n\t\tfalse ->\n\t\t\t#state{ sessions = Sessions, session_by_key = SessionByKey,\n\t\t\t\t\tcurrent_session_key = CurrentSessionKey } = State,\n\t\t\tStepNumber = ar_block:vdf_step_number(B),\n\t\t\tBaseInterval = StepNumber div ar_nonce_limiter:get_reset_frequency(),\n\t\t\t{Sessions2, SessionByKey2} = 
prune_old_sessions(Sessions, SessionByKey,\n\t\t\t\t\tBaseInterval),\n\t\t\ttrue = maps:is_key(CurrentSessionKey, SessionByKey2),\n\t\t\t{noreply, State#state{ sessions = Sessions2, session_by_key = SessionByKey2 }}\n\tend;\n\nhandle_info({event, node_state, _}, State) ->\n\t{noreply, State};\n\nhandle_info({'DOWN', Ref, process, _, Reason}, #state{ worker_monitor_ref = Ref } = State) ->\n\t?LOG_WARNING([{event, nonce_limiter_worker_down},\n\t\t\t{reason, io_lib:format(\"~p\", [Reason])}]),\n\t{noreply, start_worker(State)};\n\nhandle_info({computed, _Args}, #state{ current_session_key = undefined } = State) ->\n\t%% Practically, only happens in tests.\n\t?LOG_WARNING([{event, computed_without_current_session}]),\n\t{noreply, State};\nhandle_info({computed, Args}, State) ->\n\t#state{ current_session_key = CurrentSessionKey } = State,\n\t{StepNumber, PrevOutput, Output, Checkpoints, SessionKey} = Args,\n\tSession = get_session(CurrentSessionKey, State),\n\t#vdf_session{ next_vdf_difficulty = NextVDFDifficulty, steps = [SessionOutput | _] } = Session,\n\t{NextSeed, IntervalNumber, NextVDFDifficulty} = CurrentSessionKey,\n\tIntervalStart = IntervalNumber * ar_nonce_limiter:get_reset_frequency(),\n\tSessionOutput2 = ar_nonce_limiter:maybe_add_entropy(\n\t\t\tSessionOutput, IntervalStart, StepNumber, NextSeed),\n\tgen_server:cast(?MODULE, schedule_step),\n\tcase {PrevOutput == SessionOutput2, SessionKey == CurrentSessionKey} of\n\t\t{true, false} ->\n\t\t\t?LOG_INFO([{event, received_computed_output_for_different_session_key}]),\n\t\t\t{noreply, State};\n\t\t{false, _} ->\n\t\t\tcase ar_config:use_remote_vdf_server() of\n\t\t\t\ttrue ->\n\t\t\t\t\tok;\n\t\t\t\tfalse ->\n\t\t\t\t\t?LOG_WARNING([{event, computed_for_outdated_key}, {step_number, StepNumber},\n\t\t\t\t\t\t{output, ar_util:encode(Output)},\n\t\t\t\t\t\t{prev_output, ar_util:encode(PrevOutput)},\n\t\t\t\t\t\t{session_output, ar_util:encode(SessionOutput2)},\n\t\t\t\t\t\t{current_session_key, encode_session_key(CurrentSessionKey)},\n\t\t\t\t\t\t{session_key, encode_session_key(SessionKey)}])\n\t\t\tend,\n\t\t\t{noreply, State};\n\t\t{true, true} ->\n\t\t\tSession2 = update_session(Session, StepNumber,\n\t\t\t\t\t#{ StepNumber => Checkpoints }, [Output], computed_step),\n\t\t\tState2 = cache_session(State, CurrentSessionKey, Session2),\n\t\t\t?LOG_DEBUG([{event, new_vdf_step}, {source, computed},\n\t\t\t\t{session_key, encode_session_key(CurrentSessionKey)}, {step_number, StepNumber}]),\n\t\t\tsend_output(CurrentSessionKey, Session2),\n\t\t\t{noreply, State2}\n\tend;\n\nhandle_info(Message, State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {module, ?MODULE}, {message, Message}]),\n\t{noreply, State}.\n\nterminate(Reason, #state{ worker = W }) ->\n\tW ! 
stop,\n\t?LOG_INFO([{module, ?MODULE},{pid, self()},{callback, terminate},{reason, Reason}]),\n\tok.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nsession_key(NextSeed, StepNumber, NextVDFDifficulty) ->\n\t{NextSeed, StepNumber div ar_nonce_limiter:get_reset_frequency(), NextVDFDifficulty}.\n\nget_session(SessionKey, #state{ session_by_key = SessionByKey }) ->\n\tmaps:get(SessionKey, SessionByKey, not_found).\n\nupdate_session(Session, StepNumber, StepCheckpointsMap, Steps, Source) ->\n\t#vdf_session{ step_checkpoints_map = Map } = Session,\n\tcase find_step_checkpoints_mismatch(StepCheckpointsMap, Map) of\n\t\t{true, MismatchStepNumber} ->\n\t\t\t?LOG_ERROR([{event, step_checkpoints_mismatch},\n\t\t\t\t\t{step_number, StepNumber},\n\t\t\t\t\t{mismatch_step_number, MismatchStepNumber},\n\t\t\t\t\t{source, Source}]);\n\t\tfalse ->\n\t\t\tfalse\n\tend,\n\tMap2 = maps:merge(StepCheckpointsMap, Map),\n\tupdate_session(Session#vdf_session{ step_checkpoints_map = Map2 }, StepNumber, Steps).\n\nfind_step_checkpoints_mismatch(StepCheckpointsMap, Map) ->\n\tmaps:fold(fun(StepNumber, Checkpoints, Acc) ->\n\t\tcase maps:get(StepNumber, Map, not_found) of\n\t\t\tnot_found ->\n\t\t\t\tAcc;\n\t\t\tCheckpoints ->\n\t\t\t\tAcc;\n\t\t\t_Checkpoints2 ->\n\t\t\t\t{true, StepNumber}\n\t\tend\n\tend, false, StepCheckpointsMap).\n\nupdate_session(Session, StepNumber, Steps) ->\n\t#vdf_session{ steps = CurrentSteps } = Session,\n\tSession#vdf_session{ step_number = StepNumber, steps = Steps ++ CurrentSteps }.\n\nsend_output(SessionKey, Session) ->\n\t{_, IntervalNumber, _} = SessionKey,\n\t#vdf_session{ step_number = StepNumber, steps = [Output | _] } = Session,\n\tIntervalStart = IntervalNumber * ar_nonce_limiter:get_reset_frequency(),\n\tUpperBound =\n\t\tcase get_entropy_reset_point(IntervalStart, StepNumber) of\n\t\t\tnone ->\n\t\t\t\tSession#vdf_session.upper_bound;\n\t\t\t_ ->\n\t\t\t\tSession#vdf_session.next_upper_bound\n\t\tend,\n\tar_events:send(nonce_limiter, {computed_output, {SessionKey, StepNumber, Output, UpperBound}}).\n\ndump_error(Data) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tErrorID = binary_to_list(ar_util:encode(crypto:strong_rand_bytes(8))),\n\tErrorDumpFile = filename:join(Config#config.data_dir, \"error_dump_\" ++ ErrorID),\n\tfile:write_file(ErrorDumpFile, term_to_binary(Data)),\n\tErrorID.\n\n%% @doc\n%% PrevStepNumber <------------------------------------------------------> StepNumber\n%%     PrevOutput x\n%%                                                |----------------------| StepsToValidate\n%%                |-------------------------------| NumStepsBefore\n%%                |---------------------------------------------| SessionSteps\n%%\nskip_already_computed_steps(PrevStepNumber, StepNumber, PrevOutput, StepsToValidate,\n\t\tSessionSteps) ->\n\tComputedSteps = lists:reverse(SessionSteps),\n\t%% Number of steps in the PrevStepNumber to StepNumber range that fall before the\n\t%% beginning of the StepsToValidate list. To avoid computing these steps we will look for\n\t%% them in the current VDF session (i.e. 
in the SessionSteps list)\n\tNumStepsBefore = StepNumber - PrevStepNumber - length(StepsToValidate),\n\tcase NumStepsBefore > 0 andalso length(ComputedSteps) >= NumStepsBefore of\n\t\tfalse ->\n\t\t\t{PrevStepNumber, PrevOutput, ComputedSteps};\n\t\ttrue ->\n\t\t\t{\n\t\t\t\tPrevStepNumber + NumStepsBefore,\n\t\t\t\tlists:nth(NumStepsBefore, ComputedSteps),\n\t\t\t\tlists:nthtail(NumStepsBefore, ComputedSteps)\n\t\t\t}\n\tend.\n\nexclude_computed_steps_from_steps_to_validate(StepsToValidate, ComputedSteps) ->\n\texclude_computed_steps_from_steps_to_validate(StepsToValidate, ComputedSteps, 1, 0).\n\nexclude_computed_steps_from_steps_to_validate(StepsToValidate, [], _I, NumAlreadyComputed) ->\n\t{StepsToValidate, NumAlreadyComputed};\nexclude_computed_steps_from_steps_to_validate(StepsToValidate, [_Step | ComputedSteps], I,\n\t\tNumAlreadyComputed) when I /= 1 ->\n\texclude_computed_steps_from_steps_to_validate(StepsToValidate, ComputedSteps, I + 1,\n\t\t\tNumAlreadyComputed);\nexclude_computed_steps_from_steps_to_validate([Step], [Step | _ComputedSteps], _I,\n\t\t\tNumAlreadyComputed) ->\n\t{[], NumAlreadyComputed + 1};\nexclude_computed_steps_from_steps_to_validate([Step | StepsToValidate], [Step | ComputedSteps],\n\t\t_I, NumAlreadyComputed) ->\n\texclude_computed_steps_from_steps_to_validate(StepsToValidate, ComputedSteps, 1,\n\t\tNumAlreadyComputed + 1);\nexclude_computed_steps_from_steps_to_validate(_StepsToValidate, _ComputedSteps, _I,\n\t\t_NumAlreadyComputed) ->\n\tinvalid.\n\nhandle_initialized([B | Blocks], State) ->\n\t?LOG_INFO([{event, handle_initialized},\n\t\t{module, ar_nonce_limiter}, {blocks, length([B | Blocks])}]),\n\tBlocks2 = take_blocks_after_fork([B | Blocks]),\n\thandle_initialized2(lists:reverse(Blocks2), State).\n\ntake_blocks_after_fork([#block{ height = Height } = B | Blocks]) ->\n\tcase Height + 1 >= ar_fork:height_2_6() of\n\t\ttrue ->\n\t\t\t[B | take_blocks_after_fork(Blocks)];\n\t\tfalse ->\n\t\t\t[]\n\tend;\ntake_blocks_after_fork([]) ->\n\t[].\n\nhandle_initialized2([B | Blocks], State) ->\n\tState2 = apply_base_block(B, State),\n\tgen_server:cast(?MODULE, {initialize, [B | Blocks]}),\n\tState2.\n\napply_base_block(B, State) ->\n\t#nonce_limiter_info{ seed = Seed, output = Output,\n\t\t\tpartition_upper_bound = UpperBound,\n\t\t\tnext_partition_upper_bound = NextUpperBound,\n\t\t\tglobal_step_number = StepNumber,\n\t\t\tlast_step_checkpoints = LastStepCheckpoints,\n\t\t\tvdf_difficulty = VDFDifficulty,\n\t\t\tnext_vdf_difficulty = NextVDFDifficulty } = B#block.nonce_limiter_info,\n\tSession = #vdf_session{\n\t\t\tseed = Seed, step_number = StepNumber,\n\t\t\tupper_bound = UpperBound, next_upper_bound = NextUpperBound,\n\t\t\tvdf_difficulty = VDFDifficulty, next_vdf_difficulty = NextVDFDifficulty ,\n\t\t\tstep_checkpoints_map = #{ StepNumber => LastStepCheckpoints },\n\t\t\tsteps = [Output] },\n\tSessionKey = session_key(B#block.nonce_limiter_info),\n\t?LOG_DEBUG([{event, new_vdf_step}, {source, base_block},\n\t\t{session_key, encode_session_key(SessionKey)}, {step_number, StepNumber}]),\n\tState2 = set_current_session(State, SessionKey),\n\tcache_session(State2, SessionKey, Session).\n\napply_chain(#nonce_limiter_info{ global_step_number = StepNumber },\n\t\t#nonce_limiter_info{ global_step_number = PrevStepNumber })\n\t\twhen StepNumber - PrevStepNumber > ?NONCE_LIMITER_MAX_CHECKPOINTS_COUNT ->\n\tar:console(\"Cannot do a trusted join - there are not enough checkpoints\"\n\t\t\t\" to apply quickly; step number: ~B, previous step number: 
~B.\",\n\t\t\t[StepNumber, PrevStepNumber]),\n\ttimer:sleep(1000),\n\tinit:stop(1);\n%% @doc Apply the pre-validated / trusted nonce_limiter_info. Since the info is trusted\n%% we don't validate it here.\napply_chain(Info, PrevInfo) ->\n\t#nonce_limiter_info{ global_step_number = PrevStepNumber } = PrevInfo,\n\t#nonce_limiter_info{ output = Output, seed = Seed,\n\t\t\tvdf_difficulty = VDFDifficulty,\n\t\t\tnext_vdf_difficulty = NextVDFDifficulty,\n\t\t\tpartition_upper_bound = UpperBound,\n\t\t\tnext_partition_upper_bound = NextUpperBound, global_step_number = StepNumber,\n\t\t\tsteps = Steps, last_step_checkpoints = LastStepCheckpoints } = Info,\n\tOutput = hd(Steps),\n\tassert_step_count(StepNumber, PrevStepNumber, Steps),\n\tSessionKey = session_key(PrevInfo),\n\tNextSessionKey = session_key(Info),\n\tArgs = {StepNumber, SessionKey, NextSessionKey, Seed, UpperBound, NextUpperBound,\n\t\t\tVDFDifficulty, NextVDFDifficulty, Steps, LastStepCheckpoints},\n\tgen_server:cast(?MODULE, {validated_steps, Args}).\n\n-ifdef(LOCALNET).\nassert_step_count(StepNumber, PrevStepNumber, Steps) ->\n\tcase StepNumber == PrevStepNumber of\n\t\ttrue ->\n\t\t\ttrue = length(Steps) > 0;\n\t\tfalse ->\n\t\t\ttrue = StepNumber - PrevStepNumber == length(Steps)\n\tend.\n-else.\nassert_step_count(StepNumber, PrevStepNumber, Steps) ->\n\ttrue = StepNumber - PrevStepNumber == length(Steps).\n-endif.\n\napply_tip(#block{ height = Height } = B, PrevB, #state{ sessions = Sessions } = State) ->\n\tcase Height + 1 < ar_fork:height_2_6() of\n\t\ttrue ->\n\t\t\tState;\n\t\tfalse ->\n\t\t\tState2 =\n\t\t\t\tcase State#state.computing of\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\tgen_server:cast(?MODULE, schedule_step),\n\t\t\t\t\t\tState#state{ computing = true };\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tState\n\t\t\t\tend,\n\t\t\tcase gb_sets:is_empty(Sessions) of\n\t\t\t\ttrue ->\n\t\t\t\t\ttrue = (Height + 1) == ar_fork:height_2_6(),\n\t\t\t\t\tState3 = apply_base_block(B, State2),\n\t\t\t\t\tState3;\n\t\t\t\tfalse ->\n\t\t\t\t\tapply_tip2(B, PrevB, State2)\n\t\t\tend\n\tend.\n\napply_tip2(B, PrevB, State) ->\n\t#nonce_limiter_info{ seed = Seed, partition_upper_bound = UpperBound,\n\t\t\tnext_partition_upper_bound = NextUpperBound, global_step_number = StepNumber,\n\t\t\tlast_step_checkpoints = LastStepCheckpoints,\n\t\t\tvdf_difficulty = VDFDifficulty,\n\t\t\tnext_vdf_difficulty = NextVDFDifficulty } = B#block.nonce_limiter_info,\n\tSessionKey = session_key(B#block.nonce_limiter_info),\n\tPrevSessionKey = session_key(PrevB#block.nonce_limiter_info),\n\tState2 = set_current_session(State, SessionKey),\n\tState3 = cache_block_session(State2, SessionKey, PrevSessionKey,\n\t\t\t#{ StepNumber => LastStepCheckpoints }, Seed, UpperBound, NextUpperBound,\n\t\t\tVDFDifficulty, NextVDFDifficulty),\n\tState3.\n\nprune_old_sessions(Sessions, SessionByKey, BaseInterval) ->\n\t{{Interval, NextSeed, NextVdfDifficulty}, Sessions2} = gb_sets:take_smallest(Sessions),\n\tSessionKey = {NextSeed, Interval, NextVdfDifficulty},\n\tcase BaseInterval > Interval + 10 of\n\t\ttrue ->\n\t\t\t?LOG_DEBUG([{event, prune_old_vdf_session},\n\t\t\t\t{session_key, encode_session_key(SessionKey)}]),\n\t\t\tSessionByKey2 = maps:remove(SessionKey, SessionByKey),\n\t\t\tprune_old_sessions(Sessions2, SessionByKey2, BaseInterval);\n\t\tfalse ->\n\t\t\t{Sessions, SessionByKey}\n\tend.\n\nstart_worker(State) ->\n\tWorker = spawn(fun() -> process_flag(priority, high), worker() end),\n\tRef = monitor(process, Worker),\n\tState#state{ worker = Worker, worker_monitor_ref = Ref 
}.\n\ncompute(StepNumber, PrevOutput, VDFDifficulty) ->\n\t{ok, Output, Checkpoints} = ar_vdf:compute2(StepNumber, PrevOutput, VDFDifficulty),\n\tdebug_double_check(\n\t\t\"compute\",\n\t\t{ok, Output, Checkpoints},\n\t\tfun ar_vdf:compute_legacy/3,\n\t\t[StepNumber, PrevOutput, VDFDifficulty]).\n\nverify(StartStepNumber, PrevOutput, NumCheckpointsBetweenHashes, Hashes, ResetStepNumber,\n\t\tResetSeed, ThreadCount, VDFDifficulty, NextVDFDifficulty) ->\n\t{Result1, PrevOutput2, ValidatedSteps1} =\n\t\tcase lists:sublist(Hashes, ResetStepNumber - StartStepNumber - 1) of\n\t\t\t[] ->\n\t\t\t\t{true, mix_seed2(PrevOutput, ResetSeed), []};\n\t\t\tHashes1 ->\n\t\t\t\tcase verify_no_reset(StartStepNumber, PrevOutput,\n\t\t\t\t\t\tNumCheckpointsBetweenHashes, Hashes1, ThreadCount, VDFDifficulty) of\n\t\t\t\t\t{true, ValidatedSteps} ->\n\t\t\t\t\t\t{true, mix_seed2(hd(ValidatedSteps), ResetSeed), ValidatedSteps};\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\t{false, undefined, undefined}\n\t\t\t\tend\n\t\tend,\n\tcase Result1 of\n\t\tfalse ->\n\t\t\tfalse;\n\t\ttrue ->\n\t\t\tHashes2 = lists:nthtail(ResetStepNumber - StartStepNumber - 1, Hashes),\n\t\t\tcase verify_no_reset(ResetStepNumber - 1, PrevOutput2, NumCheckpointsBetweenHashes,\n\t\t\t\t\tHashes2, ThreadCount, NextVDFDifficulty) of\n\t\t\t\t{true, ValidatedSteps2} ->\n\t\t\t\t\t{true, ValidatedSteps2 ++ ValidatedSteps1};\n\t\t\t\tfalse ->\n\t\t\t\t\tfalse\n\t\t\tend\n\tend.\n\nverify_no_reset(StartStepNumber, PrevOutput, NumCheckpointsBetweenHashes, Hashes, ThreadCount,\n\t\tVDFDifficulty) ->\n\tGarbage = crypto:strong_rand_bytes(32),\n\tResult = ar_vdf:verify2(StartStepNumber, PrevOutput, NumCheckpointsBetweenHashes, Hashes,\n\t\t\t0, Garbage, ThreadCount, VDFDifficulty),\n\tdebug_double_check(\n\t\t\"verify_no_reset\",\n\t\tResult,\n\t\tfun ar_vdf:debug_sha_verify_no_reset/6,\n\t\t[StartStepNumber, PrevOutput, NumCheckpointsBetweenHashes, Hashes, ThreadCount,\n\t\t\t\tVDFDifficulty]).\n\nworker() ->\n\treceive\n\t\t{compute, {StepNumber, PrevOutput, VDFDifficulty, SessionKey}, From} ->\n\t\t\t{ok, Output, Checkpoints} = prometheus_histogram:observe_duration(\n\t\t\t\t\tvdf_step_time_milliseconds, [], fun() -> compute(StepNumber, PrevOutput,\n\t\t\t\t\t\t\tVDFDifficulty) end),\n\t\t\tFrom ! 
{computed, {StepNumber, PrevOutput, Output, Checkpoints, SessionKey}},\n\t\t\tworker();\n\t\tstop ->\n\t\t\tok\n\tend.\n\n%% @doc Get all the steps that fall between StartStepNumber and EndStepNumber, traversing\n%% multiple sessions if needed.\nget_steps2(StartStepNumber, EndStepNumber, SessionKey, State) ->\n\tcase get_session(SessionKey, State) of\n\t\t#vdf_session{ step_number = StepNumber, prev_session_key = PrevSessionKey } = Session\n\t\t\t\twhen StepNumber >= EndStepNumber ->\n\t\t\t%% Get the steps within the current session that fall within the StartStepNumber+1\n\t\t\t%% and EndStepNumber (inclusive) range.\n\t\t\t{_, Steps} = get_step_range(Session, StartStepNumber + 1, EndStepNumber),\n\t\t\tTotalCount = EndStepNumber - StartStepNumber - 1,\n\t\t\tCount = length(Steps),\n\t\t\t%% If we haven't found all the steps, recurse into the previous session.\n\t\t\tcase TotalCount > Count of\n\t\t\t\ttrue ->\n\t\t\t\t\tcase get_steps2(StartStepNumber, EndStepNumber - Count,\n\t\t\t\t\t\t\tPrevSessionKey, State) of\n\t\t\t\t\t\tnot_found ->\n\t\t\t\t\t\t\tnot_found;\n\t\t\t\t\t\tPrevSteps ->\n\t\t\t\t\t\t\tSteps ++ PrevSteps\n\t\t\t\t\tend;\n\t\t\t\tfalse ->\n\t\t\t\t\tSteps\n\t\t\tend;\n\t\t_ ->\n\t\t\tnot_found\n\tend.\n\nschedule_step(State) ->\n\t#state{ current_session_key = {NextSeed, IntervalNumber, NextVDFDifficulty} = Key,\n\t\t\tworker = Worker } = State,\n\t#vdf_session{ step_number = PrevStepNumber,\n\t\tvdf_difficulty = VDFDifficulty, next_vdf_difficulty = NextVDFDifficulty,\n\t\tsteps = Steps } = get_session(Key, State),\n\tPrevOutput = hd(Steps),\n\tStepNumber = PrevStepNumber + 1,\n\tIntervalStart = IntervalNumber * ar_nonce_limiter:get_reset_frequency(),\n\tPrevOutput2 = ar_nonce_limiter:maybe_add_entropy(\n\t\tPrevOutput, IntervalStart, StepNumber, NextSeed),\n\tVDFDifficulty2 =\n\t\tcase get_entropy_reset_point(IntervalStart, StepNumber) of\n\t\t\tnone ->\n\t\t\t\tVDFDifficulty;\n\t\t\t_ ->\n\t\t\t\t?LOG_DEBUG([{event, entropy_reset_point_found}, {step_number, StepNumber},\n\t\t\t\t\t{interval_start, IntervalStart}, {vdf_difficulty, VDFDifficulty},\n\t\t\t\t\t{next_vdf_difficulty, NextVDFDifficulty},\n\t\t\t\t\t{session_key, encode_session_key(Key)}]),\n\t\t\t\tNextVDFDifficulty\n\t\tend,\n\tWorker ! 
{compute, {StepNumber, PrevOutput2, VDFDifficulty2, Key}, self()},\n\tState.\n\nget_or_init_nonce_limiter_info(#block{ height = Height } = B, Seed, PartitionUpperBound) ->\n\tNextSeed = B#block.indep_hash,\n\tNextPartitionUpperBound = B#block.weave_size,\n\tcase Height + 1 == ar_fork:height_2_6() of\n\t\ttrue ->\n\t\t\tOutput = crypto:hash(sha256, Seed),\n\t\t\t#nonce_limiter_info{ output = Output, seed = Seed, next_seed = NextSeed,\n\t\t\t\t\tpartition_upper_bound = PartitionUpperBound,\n\t\t\t\t\tnext_partition_upper_bound = NextPartitionUpperBound };\n\t\tfalse ->\n\t\t\tundefined\n\tend.\n\napply_external_update2(Update, State) ->\n\t#state{ last_external_update = {Peer, _} } = State,\n\t#nonce_limiter_update{ session_key = SessionKey,\n\t\t\tsession = #vdf_session{\n\t\t\t\tstep_number = StepNumber }\n\t\t\t} = Update,\n\tcase get_session(SessionKey, State) of\n\t\tnot_found ->\n\t\t\tapply_external_update_session_not_found(Update, State);\n\t\t#vdf_session{ step_number = CurrentStepNumber } = CurrentSession ->\n\t\t\tcase CurrentStepNumber >= StepNumber of\n\t\t\t\ttrue ->\n\t\t\t\t\t%% Inform the peer we are ahead.\n\t\t\t\t\tcase CurrentStepNumber > StepNumber of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t?LOG_DEBUG([{event, apply_external_vdf},\n\t\t\t\t\t\t\t\t\t{result, ahead_of_server},\n\t\t\t\t\t\t\t\t\t{vdf_server, ar_util:format_peer(Peer)},\n\t\t\t\t\t\t\t\t\t{session_key, encode_session_key(SessionKey)},\n\t\t\t\t\t\t\t\t\t{client_step_number, CurrentStepNumber},\n\t\t\t\t\t\t\t\t\t{server_step_number, StepNumber}]);\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tok\n\t\t\t\t\tend,\n\t\t\t\t\t{reply, #nonce_limiter_update_response{\n\t\t\t\t\t\t\tstep_number = CurrentStepNumber }, State};\n\t\t\t\tfalse ->\n\t\t\t\t\tapply_external_update3(Update, CurrentSession, State)\n\t\t\tend\n\tend.\n\napply_external_update_session_not_found(Update, State) ->\n\t#state{ last_external_update = {Peer, _} } = State,\n\t#nonce_limiter_update{ session_key = SessionKey,\n\t\t\tsession = #vdf_session{\n\t\t\t\tprev_session_key = PrevSessionKey, step_number = StepNumber } = Session,\n\t\t\tis_partial = IsPartial } = Update,\n\t{_SessionSeed, SessionInterval, _SessionVDFDifficulty} = SessionKey,\n\tcase IsPartial of\n\t\ttrue ->\n\t\t\t%% Inform the peer we have not initialized the corresponding session yet.\n\t\t\t?LOG_DEBUG([{event, apply_external_vdf},\n\t\t\t\t{result, session_not_found},\n\t\t\t\t{vdf_server, ar_util:format_peer(Peer)},\n\t\t\t\t{is_partial, IsPartial},\n\t\t\t\t{session_key, encode_session_key(SessionKey)},\n\t\t\t\t{server_step_number, StepNumber}]),\n\t\t\t{reply, #nonce_limiter_update_response{ session_found = false }, State};\n\t\tfalse ->\n\t\t\t%% Handle the case where The VDF server has processed a block and re-allocated\n\t\t\t%% steps from the previous session to the new session. 
In this case we only\n\t\t\t%% want to apply new steps - steps in Session that weren't already applied as\n\t\t\t%% part of PrevSession.\n\n\t\t\t%% Start after the last step of the previous session\n\t\t\tRangeStart = case get_session(PrevSessionKey, State) of\n\t\t\t\tnot_found -> 0;\n\t\t\t\tPrevSession -> PrevSession#vdf_session.step_number + 1\n\t\t\tend,\n\t\t\t%% But start no later than the beginning of the session 2 after PrevSession.\n\t\t\t%% This is because the steps in that session - which may have been previously\n\t\t\t%% computed - have now been invalidated.\n\t\t\tNextSessionStart = (SessionInterval + 1) * ar_nonce_limiter:get_reset_frequency(),\n\t\t\t{_, Steps} = get_step_range(Session,\n\t\t\t\t\tmin(RangeStart, NextSessionStart), StepNumber),\n\t\t\tState2 = apply_external_update4(State, SessionKey, Session, Steps),\n\t\t\t{reply, ok, State2}\n\tend.\n\napply_external_update3(Update, CurrentSession, State) ->\n\t#state{ last_external_update = {Peer, _} } = State,\n\t#nonce_limiter_update{ session_key = SessionKey,\n\t\t\tsession = #vdf_session{\n\t\t\t\tstep_checkpoints_map = StepCheckpointsMap,\n\t\t\t\tstep_number = StepNumber,\n\t\t\t\tsteps = Steps } = Session,\n\t\t\tis_partial = IsPartial } = Update,\n\t#vdf_session{ step_number = CurrentStepNumber } = CurrentSession,\n\t%% CurrentStepNumber < StepNumber by construction.\n\tStepCount = length(Steps),\n\tStartStepNumber = StepNumber - StepCount,\n\tcase CurrentStepNumber >= StartStepNumber of\n\t\ttrue ->\n\t\t\tSteps2 = lists:sublist(Steps,\n\t\t\t\t\tStepNumber - max(CurrentStepNumber, StartStepNumber)),\n\t\t\tCurrentSession2 = update_session(CurrentSession, StepNumber,\n\t\t\t\t\tStepCheckpointsMap, Steps2, apply_external_update),\n\t\t\tState2 = apply_external_update4(State, SessionKey, CurrentSession2, Steps2),\n\t\t\t{reply, ok, State2};\n\t\tfalse ->\n\t\t\tcase IsPartial of\n\t\t\t\ttrue ->\n\t\t\t\t\t%% Inform the peer that we are missing some steps.\n\t\t\t\t\t?LOG_DEBUG([{event, apply_external_vdf},\n\t\t\t\t\t\t\t{result, missing_steps},\n\t\t\t\t\t\t\t{vdf_server, ar_util:format_peer(Peer)},\n\t\t\t\t\t\t\t{is_partial, IsPartial},\n\t\t\t\t\t\t\t{session_key, encode_session_key(SessionKey)},\n\t\t\t\t\t\t\t{client_step_number, CurrentStepNumber},\n\t\t\t\t\t\t\t{server_step_number, StepNumber}]),\n\t\t\t\t\t{reply, #nonce_limiter_update_response{\n\t\t\t\t\t\t\tstep_number = CurrentStepNumber }, State};\n\t\t\t\tfalse ->\n\t\t\t\t\t%% Handle the case where the VDF client has dropped off the\n\t\t\t\t\t%% network briefly and the VDF server has advanced several\n\t\t\t\t\t%% steps within the same session. 
In this case the client has\n\t\t\t\t\t%% noticed the gap and requested the full VDF session be sent -\n\t\t\t\t\t%% which may contain previously processed steps in addition to\n\t\t\t\t\t%% the missing ones.\n\t\t\t\t\t%%\n\t\t\t\t\t%% To avoid processing those steps twice, the client grabs\n\t\t\t\t\t%% CurrentStepNumber (our most recently processed step number)\n\t\t\t\t\t%% and ignores it and any lower steps found in Session.\n\t\t\t\t\t{_, Steps} = get_step_range(Session, CurrentStepNumber + 1, StepNumber),\n\t\t\t\t\tState2 = apply_external_update4(State, SessionKey, Session, Steps),\n\t\t\t\t\t{reply, ok, State2}\n\t\t\tend\n\tend.\n\n%% Note: we do not take the VDF steps from Session but accept them separately in Steps,\n%% where only unique steps are included to ensure that VDF steps are only processed once.\n%% First and foremost, this is important for avoiding extra mining work.\napply_external_update4(State, SessionKey, Session, Steps) ->\n\t#state{ last_external_update = {Peer, _} } = State,\n\n\t?LOG_DEBUG([{event, new_vdf_step}, {source, apply_external_vdf},\n\t\t\t{vdf_server, ar_util:format_peer(Peer)},\n\t\t\t{session_key, encode_session_key(SessionKey)},\n\t\t\t{step_number, Session#vdf_session.step_number},\n\t\t\t{length, length(Steps)}]),\n\n\tState2 = cache_session(State, SessionKey, Session),\n\tsend_events_for_external_update(SessionKey, Session#vdf_session{ steps = Steps }),\n\tState2.\n\n%% @doc Returns a sub-range of steps out of a larger list of steps. This is\n%% primarily used to manage \"overflow\" steps.\n%%\n%% Between blocks, nodes will add all computed VDF steps to the same session -\n%% *even if* the new steps have crossed the entropy reset line and therefore\n%% could be added to a new session (i.e. \"overflow steps\"). Once a block is\n%% processed the node will open a new session and re-allocate all the steps past\n%% the entropy reset line to that new session. However, any steps that have crossed\n%% *TWO* entropy reset lines are no longer valid (the seed they were generated with\n%% has changed with the arrival of a new block).\n%%\n%% Note: This overlap in session caching is intentional. The intention is to\n%% quickly access the steps when validating B1 -> reset line -> B2 given the\n%% current fork of B1 -> B2' -> reset line -> B3 i.e. 
we can query all steps by\n%% B1.next_seed even though on our fork the reset line determined a different\n%% next_seed for the latest session.\nget_step_range_from_interval(Session, SessionInterval, ResetFrequency) ->\n\tSessionStart = SessionInterval * ResetFrequency,\n\tSessionEnd = (SessionInterval + 1) * ResetFrequency - 1,\n\tget_step_range(Session, SessionStart, SessionEnd).\n\nget_step_range(not_found, _RangeStart, _RangeEnd) ->\n\t{0, []};\nget_step_range(Session, RangeStart, RangeEnd) ->\n\t#vdf_session{ step_number = StepNumber, steps = Steps } = Session,\n\tget_step_range(Steps, StepNumber, RangeStart, RangeEnd).\n\nget_step_range([], _StepNumber, _RangeStart, _RangeEnd) ->\n\t{0, []};\nget_step_range(_Steps, _StepNumber, RangeStart, RangeEnd)\n\t\twhen RangeStart > RangeEnd ->\n\t{0, []};\nget_step_range(_Steps, StepNumber, RangeStart, _RangeEnd)\n\t\twhen StepNumber < RangeStart ->\n\t{0, []};\nget_step_range(Steps, StepNumber, _RangeStart, RangeEnd)\n\t\twhen StepNumber - length(Steps) + 1 > RangeEnd ->\n\t{0, []};\nget_step_range(Steps, StepNumber, RangeStart, RangeEnd) ->\n\t%% Clip RangeStart to the earliest step number in Steps\n\tRangeStart2 = max(RangeStart, StepNumber - length(Steps) + 1),\n\tRangeSteps =\n\t\tcase StepNumber > RangeEnd of\n\t\t\ttrue ->\n\t\t\t\t%% Exclude steps beyond the end of the session\n\t\t\t\tlists:nthtail(StepNumber - RangeEnd, Steps);\n\t\t\tfalse ->\n\t\t\t\tSteps\n\t\tend,\n\t%% The highest step number in the range\n\tRangeEnd2 = min(StepNumber, RangeEnd),\n\t%% Exclude the steps before the start of the session\n\tRangeSteps2 = lists:sublist(RangeSteps, RangeEnd2 - RangeStart2 + 1),\n\t{RangeEnd2, RangeSteps2}.\n\nset_current_session(State, SessionKey) ->\n\t?LOG_DEBUG([{event, set_current_session},\n\t\t{new_session_key, encode_session_key(SessionKey)},\n\t\t{old_session_key, encode_session_key(State#state.current_session_key)}]),\n\tState#state{ current_session_key = SessionKey }.\n\n%% @doc Update the VDF session cache based on new info from a validated block.\ncache_block_session(State, SessionKey, PrevSessionKey, StepCheckpointsMap, Seed,\n\t\tUpperBound, NextUpperBound, VDFDifficulty, NextVDFDifficulty) ->\n\tSession =\n\t\tcase get_session(SessionKey, State) of\n\t\t\tnot_found ->\n\t\t\t\t{_, Interval, NextVDFDifficulty} = SessionKey,\n\t\t\t\tPrevSession = get_session(PrevSessionKey, State),\n\t\t\t\t{StepNumber, Steps} = get_step_range_from_interval(\n\t\t\t\t\tPrevSession, Interval, ar_nonce_limiter:get_reset_frequency()),\n\t\t\t\t?LOG_DEBUG([{event, new_vdf_step}, {source, block},\n\t\t\t\t\t{session_key, encode_session_key(SessionKey)}, {step_number, StepNumber}]),\n\t\t\t\t#vdf_session{ step_number = StepNumber, seed = Seed,\n\t\t\t\t\t\tupper_bound = UpperBound, next_upper_bound = NextUpperBound,\n\t\t\t\t\t\tprev_session_key = PrevSessionKey,\n\t\t\t\t\t\tvdf_difficulty = VDFDifficulty, next_vdf_difficulty = NextVDFDifficulty,\n\t\t\t\t\t\tstep_checkpoints_map = StepCheckpointsMap,\n\t\t\t\t\t\tsteps = Steps };\n\t\t\tExistingSession ->\n\t\t\t\tExistingSession\n\t\tend,\n\tcache_session(State, SessionKey, Session).\n\ncache_session(State, SessionKey, Session) ->\n\t#state{ current_session_key = CurrentSessionKey, session_by_key = SessionByKey,\n\t\tsessions = Sessions } = State,\n\t{NextSeed, Interval, NextVDFDifficulty} = SessionKey,\n\tmaybe_set_vdf_metrics(SessionKey, CurrentSessionKey, Session),\n\tSessionByKey2 = maps:put(SessionKey, Session, SessionByKey),\n\t%% If Session exists, then {Interval, NextSeed} will 
already exist in the Sessions set and\n\t%% gb_sets:add_element will not cause a change.\n\tSessions2 = gb_sets:add_element({Interval, NextSeed, NextVDFDifficulty}, Sessions),\n\tState#state{ sessions = Sessions2, session_by_key = SessionByKey2 }.\n\nmaybe_set_vdf_metrics(SessionKey, CurrentSessionKey, Session) ->\n\tcase SessionKey == CurrentSessionKey of\n\t\ttrue ->\n\t\t\t#vdf_session{\n\t\t\t\tstep_number = StepNumber,\n\t\t\t\tvdf_difficulty = VDFDifficulty,\n\t\t\t\tnext_vdf_difficulty = NextVDFDifficulty } = Session,\n\t\t\tprometheus_gauge:set(vdf_step, StepNumber),\n\t\t\tprometheus_gauge:set(vdf_difficulty, [current], VDFDifficulty),\n\t\t\tprometheus_gauge:set(vdf_difficulty, [next], NextVDFDifficulty);\n\t\tfalse ->\n\t\t\tok\n\tend.\n\nsend_events_for_external_update(_SessionKey, #vdf_session{ steps = [] }) ->\n\tok;\nsend_events_for_external_update(SessionKey, Session) ->\n\tsend_output(SessionKey, Session),\n\t#vdf_session{ step_number = StepNumber, steps = [_ | RemainingSteps] } = Session,\n\tsend_events_for_external_update(SessionKey,\n\t\tSession#vdf_session{ step_number = StepNumber-1, steps = RemainingSteps }).\n\ndebug_double_check(Label, Result, Func, Args) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tcase lists:member(double_check_nonce_limiter, Config#config.enable) of\n\t\tfalse ->\n\t\t\tResult;\n\t\ttrue ->\n\t\t\tCheck = apply(Func, Args),\n\t\t\tcase Result == Check of\n\t\t\t\ttrue ->\n\t\t\t\t\tResult;\n\t\t\t\tfalse ->\n\t\t\t\t\tID = ar_util:encode(crypto:strong_rand_bytes(16)),\n\t\t\t\t\tfile:write_file(Label ++ \"_\" ++ binary_to_list(ID),\n\t\t\t\t\t\t\tterm_to_binary(Args)),\n\t\t\t\t\tEvent = \"nonce_limiter_\" ++ Label ++ \"_mismatch\",\n\t\t\t\t\t?LOG_ERROR([{event, list_to_atom(Event)}, {report_id, ID}]),\n\t\t\t\t\tResult\n\t\t\tend\n\tend.\n\nfilter_step_triplets([], _LowerBounds) ->\n\t[];\nfilter_step_triplets([{O, _, _} = Triplet | Triplets], LowerBounds) ->\n\tcase lists:member(O, LowerBounds) of\n\t\ttrue ->\n\t\t\t[];\n\t\tfalse ->\n\t\t\t[Triplet | filter_step_triplets(Triplets, LowerBounds)]\n\tend.\n\nget_triplets(_StepNumber, _Steps, _ResetPoint, _UpperBound, _NextUpperBound, 0) ->\n\t[];\nget_triplets(_StepNumber, [], _ResetPoint, _UpperBound, _NextUpperBound, _N) ->\n\t[];\nget_triplets(StepNumber, [Step | Steps], ResetPoint, UpperBound, NextUpperBound, N) ->\n\tU =\n\t\tcase ResetPoint of\n\t\t\tnone ->\n\t\t\t\tUpperBound;\n\t\t\t_ when StepNumber >= ResetPoint ->\n\t\t\t\tNextUpperBound;\n\t\t\t_ ->\n\t\t\t\tUpperBound\n\t\tend,\n\t[{Step, StepNumber, U}\n\t\t| get_triplets(StepNumber - 1, Steps, ResetPoint, UpperBound, NextUpperBound, N - 1)].\n\nfilter_step_triplets_with_checkpoints([], _Map) ->\n\t{[], 0};\nfilter_step_triplets_with_checkpoints([{_, StepNumber, _} = Triplet | Triplets], Map) ->\n\t{List, NSkipped} = filter_step_triplets_with_checkpoints(Triplets, Map),\n\tcase maps:is_key(StepNumber, Map) of\n\t\ttrue ->\n\t\t\t{[Triplet | List], NSkipped};\n\t\tfalse ->\n\t\t\t{List, NSkipped + 1}\n\tend.\n\n%%%===================================================================\n%%% Tests.\n%%%===================================================================\n\n\nexclude_computed_steps_from_steps_to_validate_test() ->\n\tC1 = crypto:strong_rand_bytes(32),\n\tC2 = crypto:strong_rand_bytes(32),\n\tC3 = crypto:strong_rand_bytes(32),\n\tC4 = crypto:strong_rand_bytes(32),\n\tC5 = crypto:strong_rand_bytes(32),\n\tCases = [\n\t\t{{[], []}, {[], 0}, \"Case 1\"},\n\t\t{{[C1], []}, {[C1], 0}, \"Case 2\"},\n\t\t{{[C1], [C1]}, {[], 
1}, \"Case 3\"},\n\t\t{{[C1, C2], []}, {[C1, C2], 0}, \"Case 4\"},\n\t\t{{[C1, C2], [C2]}, invalid, \"Case 5\"},\n\t\t{{[C1, C2], [C1]}, {[C2], 1}, \"Case 6\"},\n\t\t{{[C1, C2], [C2, C1]}, invalid, \"Case 7\"},\n\t\t{{[C1, C2], [C1, C2]}, {[], 2}, \"Case 8\"},\n\t\t{{[C1, C2], [C1, C2, C3, C4, C5]}, {[], 2}, \"Case 9\"}\n\t],\n\ttest_exclude_computed_steps_from_steps_to_validate(Cases).\n\ntest_exclude_computed_steps_from_steps_to_validate([Case | Cases]) ->\n\t{Input, Expected, Title} = Case,\n\t{StepsToValidate, ComputedSteps} = Input,\n\tGot = exclude_computed_steps_from_steps_to_validate(StepsToValidate, ComputedSteps),\n\t?assertEqual(Expected, Got, Title),\n\ttest_exclude_computed_steps_from_steps_to_validate(Cases);\ntest_exclude_computed_steps_from_steps_to_validate([]) ->\n\tok.\n\nget_entropy_reset_point_test() ->\n\tResetFreq = ar_nonce_limiter:get_reset_frequency(),\n\t?assertEqual(none, get_entropy_reset_point(1, ResetFreq - 1)),\n\t?assertEqual(ResetFreq, get_entropy_reset_point(1, ResetFreq)),\n\t?assertEqual(none, get_entropy_reset_point(ResetFreq, ResetFreq + 1)),\n\t?assertEqual(2 * ResetFreq, get_entropy_reset_point(ResetFreq, ResetFreq * 2)),\n\t?assertEqual(ResetFreq * 3, get_entropy_reset_point(ResetFreq * 3 - 1, ResetFreq * 3 + 2)),\n\t?assertEqual(ResetFreq * 4, get_entropy_reset_point(ResetFreq * 3, ResetFreq * 4 + 1)).\n\nreorg_after_join_test_() ->\n\t{timeout, ?TEST_NODE_TIMEOUT, fun test_reorg_after_join/0}.\n\ntest_reorg_after_join() ->\n\t[B0] = ar_weave:init(),\n\tar_test_node:start(B0),\n\tar_test_node:start_peer(peer1, B0),\n\tar_test_node:connect_to_peer(peer1),\n\tar_test_node:mine(),\n\tar_test_node:assert_wait_until_height(peer1, 1),\n\tar_test_node:disconnect_from(peer1),\n\tar_test_node:start_peer(peer1, B0),\n\tar_test_node:join_on(#{ node => main, join_on => peer1 }),\n\tar_test_node:mine(peer1),\n\tar_test_node:assert_wait_until_height(peer1, 1),\n\tar_test_node:mine(peer1),\n\tar_test_node:wait_until_height(main, 2).\n\nreorg_after_join2_test_() ->\n\t{timeout, ?TEST_NODE_TIMEOUT, fun test_reorg_after_join2/0}.\n\ntest_reorg_after_join2() ->\n\t[B0] = ar_weave:init(),\n\tar_test_node:start(B0),\n\tar_test_node:start_peer(peer1, B0),\n\tar_test_node:connect_to_peer(peer1),\n\tar_test_node:mine(),\n\tar_test_node:assert_wait_until_height(peer1, 1),\n\tar_test_node:join_on(#{ node => main, join_on => peer1 }),\n\tar_test_node:mine(),\n\tar_test_node:wait_until_height(main, 2),\n\tar_test_node:disconnect_from(peer1),\n\tar_test_node:start_peer(peer1, B0),\n\tar_test_node:mine(peer1),\n\tar_test_node:assert_wait_until_height(peer1, 1),\n\tar_test_node:mine(peer1),\n\tar_test_node:assert_wait_until_height(peer1, 2),\n\tar_test_node:connect_to_peer(peer1),\n\tar_test_node:mine(peer1),\n\tar_test_node:wait_until_height(main, 3).\n\nget_step_range_test() ->\n\t?assertEqual(\n\t\t{0, []},\n\t\tget_step_range(lists:seq(9, 5, -1), 9, 0, 4),\n\t\t\"Disjoint range A\"\n\t),\n\t?assertEqual(\n\t\t{0, []},\n\t\tget_step_range(lists:seq(9, 5, -1), 9 , 10, 14),\n\t\t\"Disjoint range B\"\n\t),\n\t?assertEqual(\n\t\t{0, []},\n\t\tget_step_range([], 9, 0, 4),\n\t\t\"Empty steps\"\n\t),\n\t?assertEqual(\n\t\t{0, []},\n\t\tget_step_range(lists:seq(9, 5, -1), 9, 9, 5),\n\t\t\"Invalid range\"\n\t),\n\t?assertEqual(\n\t\t{9, [9, 8, 7, 6, 5]},\n\t\tget_step_range(lists:seq(9, 5, -1), 9, 5, 9),\n\t\t\"Full intersection\"\n\t),\n\t?assertEqual(\n\t\t{9, [9, 8, 7, 6, 5]},\n\t\tget_step_range(lists:seq(9, 5, -1), 9, 3, 9),\n\t\t\"Clipped 
RangeStart\"\n\t),\n\t?assertEqual(\n\t\t{9, [9, 8, 7, 6]},\n\t\tget_step_range(lists:seq(9, 5, -1), 9, 6, 12),\n\t\t\"Clipped RangeEnd\"\n\t),\n\t?assertEqual(\n\t\t{8, [8, 7]},\n\t\tget_step_range(lists:seq(20, 5, -1), 20, 7, 8),\n\t\t\"Clipped Steps above\"\n\t),\n\t?assertEqual(\n\t\t{9, [9, 8, 7, 6, 5]},\n\t\tget_step_range(lists:seq(9, 0, -1), 9, 5, 9),\n\t\t\"Clipped Steps below\"\n\t),\n\t?assertEqual(\n\t\t{6, [6]},\n\t\tget_step_range(lists:seq(9, 5, -1), 9, 6, 6),\n\t\t\"Range length 1\"\n\t),\n\t?assertEqual(\n\t\t{8, [8]},\n\t\tget_step_range([8], 8, 8, 8),\n\t\t\"Steps length 1\"\n\t),\n\tResetFrequency = 5,\n\t?assertEqual(\n\t\t{9, [9, 8, 7, 6, 5]},\n\t\tget_step_range_from_interval(\n\t\t\t#vdf_session{ step_number = 12, steps = lists:seq(12, 0, -1) }, 1, ResetFrequency),\n\t\t\"Session and Interval\"\n\t),\n\t?assertEqual(\n\t\t{0, []},\n\t\tget_step_range_from_interval(not_found, 1, ResetFrequency),\n\t\t\"not_found and Interval\"\n\t),\n\t?assertEqual(\n\t\t{9, [9, 8, 7]},\n\t\tget_step_range(\n\t\t\t#vdf_session{ step_number = 12, steps = lists:seq(12, 0, -1) }, 7, 9),\n\t\t\"Session and Range\"\n\t),\n\t?assertEqual(\n\t\t{0, []},\n\t\tget_step_range(not_found, 7, 9),\n\t\t\"not_found and Range\"\n\t),\n\tok.\n\nfilter_step_triplets_test() ->\n\t?assertEqual([], filter_step_triplets([], [a, b])),\n\t?assertEqual([], filter_step_triplets([{a, 1, s}], [a, b])),\n\t?assertEqual([], filter_step_triplets([{b, 1, s}], [a, b])),\n\t?assertEqual([], filter_step_triplets([{b, 1, s}, {x, 1, s}], [a, b])),\n\t?assertEqual([{y, 1, s}], filter_step_triplets([{y, 1, s}, {a, 1, s}, {x, 1, s}], [a, b])),\n\t?assertEqual([{y, 1, s}, {x, 1, s}], filter_step_triplets([{y, 1, s}, {x, 1, s}], [a, b])).\n\nget_triplets_test() ->\n\t?assertEqual([], get_triplets(1, [a], none, 2, 3, 0)),\n\t?assertEqual([], get_triplets(2, [], 2, 4, 5, 2)),\n\t?assertEqual([{a, 1, 2}], get_triplets(1, [a], none, 2, 3, 2)),\n\t?assertEqual([{a, 1, 2}], get_triplets(1, [a], 2, 2, 3, 2)),\n\t?assertEqual([{a, 1, 3}], get_triplets(1, [a], 1, 2, 3, 2)),\n\t?assertEqual([{a, 2, 3}, {b, 1, 2}], get_triplets(2, [a, b], 2, 2, 3, 2)),\n\t?assertEqual([{a, 2, 3}, {b, 1, 2}], get_triplets(2, [a, b, c], 2, 2, 3, 2)),\n\t?assertEqual([{a, 3, 3}, {b, 2, 3}, {c, 1, 3}], get_triplets(3, [a, b, c], 0, 2, 3, 3)),\n\t?assertEqual([{a, 3, 2}, {b, 2, 2}, {c, 1, 2}], get_triplets(3, [a, b, c], none, 2, 3, 4)).\n"
  },
  {
    "path": "apps/arweave/src/ar_nonce_limiter_client.erl",
    "content": "-module(ar_nonce_limiter_client).\n\n-behaviour(gen_server).\n\n-export([start_link/0, maybe_request_sessions/1]).\n\n-export([init/1, handle_cast/2, handle_call/3, handle_info/2, terminate/2]).\n\n-include(\"ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-record(state, {\n\tremote_servers,\n\tlatest_session_keys = #{},\n\t%% request_sessions is set to true when the node is unable to validate a block due\n\t%% to a gap in its cached step numbers. When true, the node will query the full\n\t%% session and previous session from a VDF server.\n\trequest_sessions = false,\n\tlatest_remote_server_rotation_timestamp = erlang:system_time(millisecond)\n}).\n\n-define(PULL_FREQUENCY_MS, 800).\n-define(NO_UPDATE_PULL_FREQUENCY_MS, 200).\n-define(PULL_THROTTLE_MS, 200).\n\n-ifdef(AR_TEST).\n-define(ROTATE_REMOTE_SERVERS_MS, 2_000).\n-else.\n-define(ROTATE_REMOTE_SERVERS_MS, 30_000).\n-endif.\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Start the server.\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\n%% @doc Look at the session key of the last update from the VDF server we are currently\n%% working with and in case it does not match the given session key, request complete\n%% sessions from this VDF server.\n%%\n%% The client may need this additional request around a VDF reset when a new session is\n%% created but the previous session is not completed because the VDF server instantiated\n%% the new session before sending the last computed output(s) of the previous session to\n%% the client.\nmaybe_request_sessions(SessionKey) ->\n\tgen_server:cast(?MODULE, {maybe_request_sessions, SessionKey}).\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\n\ninit([]) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tPeers = Config#config.nonce_limiter_server_trusted_peers,\n\tcase ar_config:use_remote_vdf_server() of\n\t\tfalse ->\n\t\t\tok;\n\t\ttrue ->\n\t\t\t%% Resolve and cache all VDF server peers upfront so that pushes\n\t\t\t%% (POST /vdf) may be accepted even when pulling is disabled.\n\t\t\tresolve_server_peers(Peers),\n\t\t\tgen_server:cast(?MODULE, pull)\n\tend,\n\t{ok, #state{ remote_servers = queue:from_list(Peers) }}.\n\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {request, Request}]),\n\t{reply, ok, State}.\n\nhandle_cast(pull, State = #state{ request_sessions = RequestSessions }) ->\n\tDoPull = (\n\t\tar_config:pull_from_remote_vdf_server() orelse\n\t\tRequestSessions == true\n\t),\n\tcase DoPull of\n\t\ttrue ->\n\t\t\t{Delay, State1} = do_pull(State),\n\t\t\tar_util:cast_after(Delay, ?MODULE, pull),\n\t\t\t{noreply, State1};\n\t\tfalse ->\n\t\t\t%% Even when pulling is disabled, periodically re-resolve VDF server peers\n\t\t\t%% so that pushes (POST /vdf) continue to work (e.g., after DNS changes).\n\t\t\t{ok, Config} = arweave_config:get_env(),\n\t\t\tresolve_server_peers(Config#config.nonce_limiter_server_trusted_peers),\n\t\t\tar_util:cast_after(?PULL_FREQUENCY_MS, ?MODULE, pull),\n\t\t\t{noreply, State}\n\tend;\n\nhandle_cast({maybe_request_sessions, SessionKey}, State) ->\n\t#state{ remote_servers = Q } = State,\n\t{{value, RawPeer}, _Q2} = queue:out(Q),\n\tRotatedServers = rotate_servers(Q),\n\tcase 
ar_peers:resolve_and_cache_peer(RawPeer, vdf_server_peer) of\n\t\t{error, _} ->\n\t\t\t%% Push the peer to the back of the queue. We'll also wait and see if another\n\t\t\t%% `maybe_request_sessions` message comes in before we fetch the full session.\n\t\t\t{noreply, State#state{ remote_servers = RotatedServers }};\n\t\t{ok, Peer} ->\n\t\t\tcase get_latest_session_key(Peer, State) of\n\t\t\t\tSessionKey ->\n\t\t\t\t\t%% No reason to make extra requests. And don't rotate the peers.\n\t\t\t\t\t{noreply, State};\n\t\t\t\t_ ->\n\t\t\t\t\t%% Ensure the current and previous sessions are fetched and applied on\n\t\t\t\t\t%% the next `pull` message.\n\t\t\t\t\t?LOG_DEBUG([{event, vdf_request_sessions},\n\t\t\t\t\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t\t\t\t\t{session_key, ar_nonce_limiter:encode_session_key(SessionKey)}]),\n\t\t\t\t\t{noreply, State#state{ request_sessions = true }}\n\t\t\tend\n\tend;\n\nhandle_cast({update_latest_session_key, Peer, SessionKey}, State) ->\n\t{noreply, update_latest_session_key(Peer, SessionKey, State)};\n\nhandle_cast(Cast, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {cast, Cast}]),\n\t{noreply, State}.\n\nhandle_info(Message, State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {module, ?MODULE}, {message, Message}]),\n\t{noreply, State}.\n\nterminate(Reason, _State) ->\n\t?LOG_INFO([{module, ?MODULE},{pid, self()},{callback, terminate},{reason, Reason}]),\n\tok.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nresolve_server_peers(RawPeers) ->\n\tlists:foreach(\n\t\tfun(RawPeer) ->\n\t\t\tar_peers:resolve_and_cache_peer(RawPeer, vdf_server_peer)\n\t\tend,\n\t\tRawPeers\n\t).\n\ndo_pull(State) ->\n\t#state{ remote_servers = Q } = State,\n\tRotatedServers = rotate_servers(Q),\n\t{RawPeer, State2} = get_raw_peer_and_update_remote_servers(State),\n\tcase ar_peers:resolve_and_cache_peer(RawPeer, vdf_server_peer) of\n\t\t{error, _} ->\n\t\t\t?LOG_WARNING([{event, failed_to_resolve_peer},\n\t\t\t\t\t{raw_peer, io_lib:format(\"~p\", [RawPeer])}]),\n\t\t\t%% Push the peer to the back of the queue.\n\t\t\t{?PULL_THROTTLE_MS, State#state{ remote_servers = RotatedServers }};\n\t\t{ok, Peer} ->\n\t\t\tcase ar_http_iface_client:get_vdf_update(Peer) of\n\t\t\t\t{ok, Update} ->\n\t\t\t\t\t#nonce_limiter_update{ session_key = SessionKey,\n\t\t\t\t\t\t\tsession = #vdf_session{\n\t\t\t\t\t\t\t\t\tstep_number = SessionStepNumber } } = Update,\n\t\t\t\t\tState3 = update_latest_session_key(Peer, SessionKey, State2),\n\t\t\t\t\tUpdateResponse =\n\t\t\t\t\t\tar_nonce_limiter:apply_external_update(Update, Peer),\n\n\t\t\t\t\tSessionFound = case UpdateResponse of\n\t\t\t\t\t\t#nonce_limiter_update_response{ session_found = false } ->\n\t\t\t\t\t\t\tfalse;\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\ttrue\n\t\t\t\t\tend,\n\t\t\t\t\tRequestSessions = (\n\t\t\t\t\t\tState3#state.request_sessions == true orelse\n\t\t\t\t\t\tnot SessionFound\n\t\t\t\t\t),\n\n\t\t\t\t\tcase RequestSessions of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\tcase fetch_and_apply_session_and_previous_session(Peer) of\n\t\t\t\t\t\t\t\t{error, _} ->\n\t\t\t\t\t\t\t\t\t{?PULL_THROTTLE_MS, State3#state{\n\t\t\t\t\t\t\t\t\t\tremote_servers = RotatedServers }};\n\t\t\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t\t\t{?PULL_FREQUENCY_MS, State3#state{\n\t\t\t\t\t\t\t\t\t\trequest_sessions = false }}\n\t\t\t\t\t\t\tend;\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tcase UpdateResponse of\n\t\t\t\t\t\t\t\tok 
->\n\t\t\t\t\t\t\t\t\t{?PULL_FREQUENCY_MS, State3};\n\t\t\t\t\t\t\t#nonce_limiter_update_response{ step_number = StepNumber }\n\t\t\t\t\t\t\t\t\t\twhen StepNumber > SessionStepNumber ->\n\t\t\t\t\t\t\t\t%% We are ahead of the server - maybe it is not\n\t\t\t\t\t\t\t\t%% the fastest server in the list, so try another one,\n\t\t\t\t\t\t\t\t%% if there are more servers in the configuration\n\t\t\t\t\t\t\t\t%% and they are not on timeout.\n\t\t\t\t\t\t\t\t{0, State3#state{\n\t\t\t\t\t\t\t\t\t\tremote_servers = RotatedServers }};\n\t\t\t\t\t\t\t\t#nonce_limiter_update_response{ step_number = StepNumber }\n\t\t\t\t\t\t\t\t\t\twhen StepNumber == SessionStepNumber ->\n\t\t\t\t\t\t\t\t\t%% We are in sync with the server. Re-try soon.\n\t\t\t\t\t\t\t\t\t{?NO_UPDATE_PULL_FREQUENCY_MS, State3};\n\t\t\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t\t\t%% We have received a partial session, but there's a gap\n\t\t\t\t\t\t\t\t\t%% in the step numbers, e.g., the update we received is at\n\t\t\t\t\t\t\t\t\t%% step 100, but our last seen step was 90.\n\t\t\t\t\t\t\t\t\tcase fetch_and_apply_session(Peer) of\n\t\t\t\t\t\t\t\t\t\t{error, _} ->\n\t\t\t\t\t\t\t\t\t\t\t{?PULL_THROTTLE_MS, State3#state{\n\t\t\t\t\t\t\t\t\t\t\t\t\tremote_servers = RotatedServers }};\n\t\t\t\t\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t\t\t\t\t{?PULL_FREQUENCY_MS, State3}\n\t\t\t\t\t\t\t\t\tend\n\t\t\t\t\t\t\tend\n\t\t\t\tend;\n\t\t\t{error, not_found} ->\n\t\t\t\t?LOG_WARNING([{event, failed_to_fetch_vdf_update},\n\t\t\t\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t\t\t\t{error, not_found}]),\n\t\t\t\t%% The server might be restarting.\n\t\t\t\t%% Try another one, if there are any.\n\t\t\t\t{?PULL_THROTTLE_MS, State#state{ remote_servers = RotatedServers }};\n\t\t\t{error, Reason} ->\n\t\t\t\t?LOG_WARNING([{event, failed_to_fetch_vdf_update},\n\t\t\t\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t\t\t\t{error, io_lib:format(\"~p\", [Reason])}]),\n\t\t\t\t%% Try another server, if there are any.\n\t\t\t\t{?PULL_THROTTLE_MS, State#state{ remote_servers = RotatedServers }}\n\t\t\tend\n\tend.\n\nget_raw_peer_and_update_remote_servers(State) ->\n\t#state{ remote_servers = Q,\n\t\t\tlatest_remote_server_rotation_timestamp = Timestamp } = State,\n\t{{value, RawPeer}, Q2} = queue:out(Q),\n\tNow = erlang:system_time(millisecond),\n\tcase Now < Timestamp + ?ROTATE_REMOTE_SERVERS_MS of\n\t\ttrue ->\n\t\t\t{RawPeer, State};\n\t\tfalse ->\n\t\t\t{RawPeer, State#state{\n\t\t\t\t\tlatest_remote_server_rotation_timestamp = Now,\n\t\t\t\t\tremote_servers = queue:in(RawPeer, Q2) }}\n\tend.\n\nrotate_servers(Q) ->\n\t{{value, RawPeer}, Q2} = queue:out(Q),\n\tqueue:in(RawPeer, Q2).\n\nfetch_and_apply_session_and_previous_session(Peer) ->\n\tcase ar_http_iface_client:get_vdf_session(Peer) of\n\t\t{ok, #nonce_limiter_update{ session = #vdf_session{\n\t\t\t\tprev_session_key = PrevSessionKey } } = Update} ->\n\t\t\tcase ar_http_iface_client:get_previous_vdf_session(Peer) of\n\t\t\t\t{ok, #nonce_limiter_update{ session_key = PrevSessionKey } = Update2} ->\n\t\t\t\t\tar_nonce_limiter:apply_external_update(Update2, Peer),\n\t\t\t\t\tar_nonce_limiter:apply_external_update(Update, Peer);\n\t\t\t\t{ok, _} ->\n\t\t\t\t\t%% The session should have just changed, retry.\n\t\t\t\t\tfetch_and_apply_session_and_previous_session(Peer);\n\t\t\t\t{error, Reason} = Error ->\n\t\t\t\t\t?LOG_WARNING([{event, failed_to_fetch_previous_vdf_session},\n\t\t\t\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t\t\t\t{error, io_lib:format(\"~p\", [Reason])}]),\n\t\t\t\t\tError\n\t\t\tend;\n\t\t{error, 
Reason2} = Error2 ->\n\t\t\t?LOG_WARNING([{event, failed_to_fetch_vdf_session},\n\t\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t\t{error, io_lib:format(\"~p\", [Reason2])}]),\n\t\t\tError2\n\tend.\n\nfetch_and_apply_session(Peer) ->\n\tcase ar_http_iface_client:get_vdf_session(Peer) of\n\t\t{ok, Update} ->\n\t\t\tar_nonce_limiter:apply_external_update(Update, Peer);\n\t\t{error, Reason} = Error ->\n\t\t\t?LOG_WARNING([{event, failed_to_fetch_vdf_session},\n\t\t\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t\t\t{error, io_lib:format(\"~p\", [Reason])}]),\n\t\t\tError\n\tend.\n\nget_latest_session_key(Peer, State) ->\n\t#state{ latest_session_keys = Map } = State,\n\tmaps:get(Peer, Map, not_found).\n\nupdate_latest_session_key(Peer, SessionKey, State) ->\n\t#state{ latest_session_keys = Map } = State,\n\tState#state{ latest_session_keys = maps:put(Peer, SessionKey, Map) }.\n"
  },
  {
    "path": "apps/arweave/src/ar_nonce_limiter_server.erl",
    "content": "-module(ar_nonce_limiter_server).\n\n-behaviour(gen_server).\n\n-export([start_link/0, make_full_nonce_limiter_update/2, make_partial_nonce_limiter_update/4,\n\t\tget_update/1, get_full_update/1, get_full_prev_update/1]).\n\n-export([init/1, handle_cast/2, handle_call/3, handle_info/2, terminate/2]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-record(state, {\n\tsession_key,\n\tstep_number\n}).\n\n%% @doc The number of steps and the corresponding step checkpoints to include in every\n%% regular update.\n-define(REGULAR_UPDATE_INCLUDE_STEPS_COUNT, 2).\n\n%% @doc The number of steps for which we include step checkpoints in the full session update.\n%% Does not apply to previous session updates.\n-define(SESSION_UPDATE_INCLUDE_STEP_CHECKPOINTS_COUNT, 20).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\nmake_partial_nonce_limiter_update(SessionKey, Session, StepNumber, Output) ->\n\t#vdf_session{ steps = Steps, step_number = SessionStepNumber } = Session,\n\tStepNumberMinusOne = StepNumber - 1,\n\tSteps2 =\n\t\tcase SessionStepNumber of\n\t\t\tStepNumber ->\n\t\t\t\tSteps;\n\t\t\tStepNumberMinusOne ->\n\t\t\t\t[Output | Steps];\n\t\t\t_ ->\n\t\t\t\t?LOG_WARNING([{event, vdf_gap},\n\t\t\t\t\t\t{session_step_number, SessionStepNumber},\n\t\t\t\t\t\t{computed_output, StepNumber}]),\n\t\t\t\t[Output]\n\t\tend,\n\tmake_nonce_limiter_update(\n\t\tSessionKey,\n\t\tSession#vdf_session{\n\t\t\tstep_number = StepNumber,\n\t\t\tsteps = lists:sublist(Steps2, ?REGULAR_UPDATE_INCLUDE_STEPS_COUNT)\n\t\t},\n\t\ttrue).\n\nmake_full_nonce_limiter_update(SessionKey, Session) ->\n\tmake_nonce_limiter_update(SessionKey, Session, false).\n\n%% @doc Return the minimal VDF update, i.e., the latest computed output.\nget_update(Format) ->\n\tcase ets:lookup(?MODULE, {partial_update, Format}) of\n\t\t[] ->\n\t\t\tnot_found;\n\t\t[{_, PartialUpdate}] ->\n\t\t\tPartialUpdate\n\tend.\n\n%% @doc Return the \"full update\" including the latest VDF session.\nget_full_update(Format) ->\n\tcase ets:lookup(?MODULE, {full_update, Format}) of\n\t\t[] ->\n\t\t\tnot_found;\n\t\t[{_, Session}] ->\n\t\t\tSession\n\tend.\n\n%% @doc Return the \"full previous update\" including the latest but one VDF session.\nget_full_prev_update(Format) ->\n\tcase ets:lookup(?MODULE, {full_prev_update, Format}) of\n\t\t[] ->\n\t\t\tnot_found;\n\t\t[{_, PrevSession}] ->\n\t\t\tPrevSession\n\tend.\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\n\ninit([]) ->\n\t?LOG_INFO([{event, nonce_limiter_server_init}]),\n\tok = ar_events:subscribe(nonce_limiter),\n\t{ok, #state{}}.\n\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {request, Request}]),\n\t{reply, ok, State}.\n\nhandle_cast(Cast, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {cast, Cast}]),\n\t{noreply, State}.\n\nhandle_info({event, nonce_limiter, {computed_output, Args}}, State) ->\n\thandle_computed_output(Args, State);\n\nhandle_info({event, nonce_limiter, _Args}, State) ->\n\t{noreply, State};\n\nhandle_info(Message, State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {module, ?MODULE}, {message, 
Message}]),\n\t{noreply, State}.\n\nterminate(Reason, _State) ->\n\t?LOG_INFO([{module, ?MODULE},{pid, self()},{callback, terminate},{reason, Reason}]),\n\tok.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nmake_nonce_limiter_update(_SessionKey, not_found, _IsPartial) ->\n\tnot_found;\nmake_nonce_limiter_update(SessionKey, Session, IsPartial) ->\n\t#vdf_session{ step_number = StepNumber, steps = Steps,\n\t\t\tstep_checkpoints_map = StepCheckpointsMap } = Session,\n\t%% Clear the step_checkpoints_map to cut down on the amount of data pushed to each client.\n\tRecentStepNumbers =\n\t\tcase IsPartial of\n\t\t\tfalse ->\n\t\t\t\t%% There is an upper bound on the number of steps with step checkpoints\n\t\t\t\t%% because the total number of steps in the session updates is often large.\n\t\t\t\tget_recent_step_numbers(StepNumber);\n\t\t\ttrue ->\n\t\t\t\t%% Include step checkpoints for every step included in the regular\n\t\t\t\t%% update.\n\t\t\t\tget_recent_step_numbers_from_steps(StepNumber, Steps)\n\t\tend,\n\tStepCheckpointsMap2 = maps:with(RecentStepNumbers, StepCheckpointsMap),\n\t#nonce_limiter_update{ session_key = SessionKey,\n\t\t\tis_partial = IsPartial,\n\t\t\tsession = Session#vdf_session{ step_checkpoints_map = StepCheckpointsMap2 } }.\n\nhandle_computed_output({SessionKey, StepNumber, _, _},\n\t\t#state{ session_key = SessionKey, step_number = CurrentStepNumber } = State)\n\t\t\twhen CurrentStepNumber >= StepNumber ->\n\t{noreply, State};\nhandle_computed_output(Args, State) ->\n\t{SessionKey, StepNumber, Output, _PartitionUpperBound} = Args,\n\tcase ar_nonce_limiter:get_session(SessionKey) of\n\t\tnot_found ->\n\t\t\t?LOG_WARNING([{event, computed_output_session_not_found},\n\t\t\t\t\t{session_key, ar_nonce_limiter:encode_session_key(SessionKey)},\n\t\t\t\t\t{step_number, StepNumber}]),\n\t\t\t{noreply, State};\n\t\tSession ->\n\t\t\tPrevSessionKey = Session#vdf_session.prev_session_key,\n\t\t\tPrevSession = ar_nonce_limiter:get_session(PrevSessionKey),\n\t\t\tPartialUpdate = make_partial_nonce_limiter_update(SessionKey, Session,\n\t\t\t\t\tStepNumber, Output),\n\t\t\tFullUpdate = make_full_nonce_limiter_update(SessionKey, Session),\n\n\t\t\tPartialUpdateBin2 = ar_serialize:nonce_limiter_update_to_binary(2, PartialUpdate),\n\t\t\tPartialUpdateBin3 = ar_serialize:nonce_limiter_update_to_binary(3, PartialUpdate),\n\t\t\tFullUpdateBin2 = ar_serialize:nonce_limiter_update_to_binary(2, FullUpdate),\n\t\t\tFullUpdateBin3 = ar_serialize:nonce_limiter_update_to_binary(3, FullUpdate),\n\t\t\tFullUpdateBin4 = ar_serialize:nonce_limiter_update_to_binary(4, FullUpdate),\n\t\t\tKeys = [\n\t\t\t\t{{partial_update, 2}, PartialUpdateBin2},\n\t\t\t\t{{partial_update, 3}, PartialUpdateBin3},\n\t\t\t\t{{full_update, 2}, FullUpdateBin2},\n\t\t\t\t{{full_update, 3}, FullUpdateBin3},\n\t\t\t\t{{full_update, 4}, FullUpdateBin4}\n\t\t\t],\n\t\t\tKeys2 =\n\t\t\t\tcase PrevSession of\n\t\t\t\t\tnot_found ->\n\t\t\t\t\t\tKeys;\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tFullPrevUpdate = make_full_nonce_limiter_update(\n\t\t\t\t\t\t\t\tPrevSessionKey, PrevSession),\n\t\t\t\t\t\tFullPrevUpdateBin2 = ar_serialize:nonce_limiter_update_to_binary(\n\t\t\t\t\t\t\t\t2, FullPrevUpdate),\n\t\t\t\t\t\tFullPrevUpdateBin3 = ar_serialize:nonce_limiter_update_to_binary(\n\t\t\t\t\t\t\t\t3, FullPrevUpdate),\n\t\t\t\t\t\tFullPrevUpdateBin4 = ar_serialize:nonce_limiter_update_to_binary(\n\t\t\t\t\t\t\t\t4, 
FullPrevUpdate),\n\t\t\t\t\t\tKeys ++ [\n\t\t\t\t\t\t\t{{full_prev_update, 2}, FullPrevUpdateBin2},\n\t\t\t\t\t\t\t{{full_prev_update, 3}, FullPrevUpdateBin3},\n\t\t\t\t\t\t\t{{full_prev_update, 4}, FullPrevUpdateBin4}]\n\t\t\t\tend,\n\t\t\tets:insert(?MODULE, Keys2),\n\t\t\t{noreply, State#state{ session_key = SessionKey, step_number = StepNumber }}\n\tend.\n\nget_recent_step_numbers(StepNumber) ->\n\tget_recent_step_numbers(StepNumber, 0).\n\nget_recent_step_numbers(_, Taken)\n\t\twhen Taken == ?SESSION_UPDATE_INCLUDE_STEP_CHECKPOINTS_COUNT ->\n\t[];\nget_recent_step_numbers(-1, _Taken) ->\n\t[];\nget_recent_step_numbers(StepNumber, Taken) ->\n\t[StepNumber | get_recent_step_numbers(StepNumber - 1, Taken + 1)].\n\nget_recent_step_numbers_from_steps(_StepNumber, []) ->\n\t[];\nget_recent_step_numbers_from_steps(StepNumber, [_Step | Steps]) ->\n\t[StepNumber | get_recent_step_numbers_from_steps(StepNumber - 1, Steps)].\n"
  },
  {
    "path": "apps/arweave/src/ar_nonce_limiter_server_worker.erl",
    "content": "-module(ar_nonce_limiter_server_worker).\n\n-behaviour(gen_server).\n\n-export([start_link/2]).\n\n-export([init/1, handle_cast/2, handle_call/3, handle_info/2, terminate/2]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n\n-record(state, {\n\traw_peer,\n\tpause_until = 0,\n\tformat = 2\n}).\n\n%% The frequency in milliseconds of re-resolving the domain name of the client,\n%% if the client is configured via the domain name.\n%%\n%% ar_nonce_limiter_server_worker periodically re-resolves and caches the address\n%% of the corresponding client such that they can be identified upon request,\n%% unless we are configured as a public VDF server.\n-define(RE_RESOLVE_PEER_DOMAIN_MS, (30 * 1000)).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Start the server.\nstart_link(Name, RawPeer) ->\n\tgen_server:start_link({local, Name}, ?MODULE, RawPeer, []).\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\n\ninit(RawPeer) ->\n\t?LOG_INFO([{event, nonce_limiter_server_worker_init}, {raw_peer, RawPeer}]),\n\tok = ar_events:subscribe(nonce_limiter),\n\tcase ar_config:is_public_vdf_server() of\n\t\tfalse ->\n\t\t\tgen_server:cast(self(), re_resolve_peer_domain);\n\t\ttrue ->\n\t\t\tok\n\tend,\n\t{ok, #state{ raw_peer = RawPeer }}.\n\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {request, Request}]),\n\t{reply, ok, State}.\n\nhandle_cast(re_resolve_peer_domain, #state{ raw_peer = RawPeer } = State) ->\n\tcase ar_peers:resolve_and_cache_peer(RawPeer, vdf_client_peer) of\n\t\t{ok, _} ->\n\t\t\tok;\n\t\tError ->\n\t\t\t?LOG_WARNING([{event, failed_to_re_resolve_peer_domain},\n\t\t\t\t\t{error, io_lib:format(\"~p\", [Error])},\n\t\t\t\t\t{peer, io_lib:format(\"~p\", [RawPeer])}])\n\tend,\n\tar_util:cast_after(?RE_RESOLVE_PEER_DOMAIN_MS, ?MODULE, re_resolve_peer_domain),\n\t{noreply, State};\n\nhandle_cast(Cast, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {cast, Cast}]),\n\t{noreply, State}.\n\nhandle_info({event, nonce_limiter, {computed_output, Args}}, State) ->\n\t#state{ raw_peer = RawPeer } = State,\n\tcase ar_peers:resolve_and_cache_peer(RawPeer, vdf_client_peer) of\n\t\t{error, _} ->\n\t\t\t?LOG_WARNING([{event, failed_to_resolve_vdf_client_peer_before_push},\n\t\t\t\t\t{raw_peer, io_lib:format(\"~p\", [RawPeer])}]),\n\t\t\t{noreply, State};\n\t\t{ok, Peer} ->\n\t\t\thandle_computed_output(Peer, Args, State)\n\tend;\n\nhandle_info({event, nonce_limiter, _Args}, State) ->\n\t{noreply, State};\n\nhandle_info(Message, State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {module, ?MODULE}, {message, Message}]),\n\t{noreply, State}.\n\nterminate(Reason, _State) ->\n\t?LOG_INFO([{module, ?MODULE},{pid, self()},{callback, terminate},{reason, Reason}]),\n\tok.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nhandle_computed_output(Peer, Args, State) ->\n\t#state{ pause_until = Timestamp, format = Format } = State,\n\t{SessionKey, StepNumber, Output, _PartitionUpperBound} = Args,\n\tCurrentStepNumber = ar_nonce_limiter:get_current_step_number(),\n\t?LOG_DEBUG([{event, handle_computed_output}, {peer, 
ar_util:format_peer(Peer)},\n\t\t{session_key, ar_nonce_limiter:encode_session_key(SessionKey)}, {step_number, StepNumber},\n\t\t {current_step_number, CurrentStepNumber},\n\t\t{timestamp, Timestamp}, {format, Format}]),\n\tcase os:system_time(second) < Timestamp of\n\t\ttrue ->\n\t\t\t{noreply, State};\n\t\tfalse ->\n\t\t\tcase StepNumber < CurrentStepNumber of\n\t\t\t\ttrue ->\n\t\t\t\t\t{noreply, State};\n\t\t\t\tfalse ->\n\t\t\t\t\t{noreply, push_update(SessionKey, StepNumber, Output, Peer, Format, State)}\n\t\t\tend\n\tend.\n\npush_update(SessionKey, StepNumber, Output, Peer, Format, State) ->\n\tSession = ar_nonce_limiter:get_session(SessionKey),\n\tUpdate = ar_nonce_limiter_server:make_partial_nonce_limiter_update(\n\t\tSessionKey, Session, StepNumber, Output),\n\tcase Update of\n\t\tnot_found -> \n\t\t\tState;\n\t\t_ ->\n\t\t\tcase ar_http_iface_client:push_nonce_limiter_update(Peer, Update, Format) of\n\t\t\t\tok -> State;\n\t\t\t\t{ok, Response} ->\n\t\t\t\t\tRequestedFormat = Response#nonce_limiter_update_response.format,\n\t\t\t\t\tPostpone = Response#nonce_limiter_update_response.postpone,\n\t\t\t\t\tSessionFound = Response#nonce_limiter_update_response.session_found,\n\t\t\t\t\tRequestedStepNumber = Response#nonce_limiter_update_response.step_number,\n\n\t\t\t\t\tcase { \n\t\t\t\t\t\t\tRequestedFormat == Format,\n\t\t\t\t\t\t\tPostpone == 0,\n\t\t\t\t\t\t\tSessionFound,\n\t\t\t\t\t\t\tRequestedStepNumber >= StepNumber - 1\n\t\t\t\t\t} of\n\t\t\t\t\t\t{false, _, _, _} ->\n\t\t\t\t\t\t\t%% Client requested a different payload format\n\t\t\t\t\t\t\t?LOG_DEBUG([{event, vdf_client_requested_different_format},\n\t\t\t\t\t\t\t\t{peer, ar_util:format_peer(Peer)}, {step_number, StepNumber},\n\t\t\t\t\t\t\t\t{format, Format}, {requested_format, RequestedFormat}]),\n\t\t\t\t\t\t\tpush_update(SessionKey, StepNumber, Output, Peer, RequestedFormat,\n\t\t\t\t\t\t\t\t\tState#state{ format = RequestedFormat });\n\t\t\t\t\t\t{true, false, _, _} ->\n\t\t\t\t\t\t\t%% Client requested we pause updates\n\t\t\t\t\t\t\tNow = os:system_time(second),\n\t\t\t\t\t\t\tState#state{ pause_until = Now + Postpone };\n\t\t\t\t\t\t{true, true, false, _} ->\n\t\t\t\t\t\t\t%% Client requested the full session\n\t\t\t\t\t\t\tPrevSessionKey = Session#vdf_session.prev_session_key,\n\t\t\t\t\t\t\tPrevSession = ar_nonce_limiter:get_session(PrevSessionKey),\n\t\t\t\t\t\t\tcase push_session(PrevSessionKey, PrevSession, Peer, Format) of\n\t\t\t\t\t\t\t\tok ->\n\t\t\t\t\t\t\t\t\t%% Do not push the new session until the previous\n\t\t\t\t\t\t\t\t\t%% session is in line with our view (i.e., has steps\n\t\t\t\t\t\t\t\t\t%% at least up to StepNumber where the new session begins).\n\t\t\t\t\t\t\t\t\tpush_session(SessionKey, Session, Peer, Format);\n\t\t\t\t\t\t\t\tfail ->\n\t\t\t\t\t\t\t\t\tok\n\t\t\t\t\t\t\tend,\n\t\t\t\t\t\t\tState;\n\t\t\t\t\t\t{true, true, true, false} ->\n\t\t\t\t\t\t\t%% Client requested missing steps\n\t\t\t\t\t\t\tpush_session(SessionKey, Session, Peer, Format),\n\t\t\t\t\t\t\tState;\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t%% Client is ahead of the server\n\t\t\t\t\t\t\tState\n\t\t\t\t\tend;\n\t\t\t\t{error, Error} ->\n\t\t\t\t\tlog_failure(Peer, SessionKey, Update, Error, []),\n\t\t\t\t\tState\n\t\t\tend\n\tend.\n\npush_session(SessionKey, Session, Peer, Format) ->\n\tUpdate = ar_nonce_limiter_server:make_full_nonce_limiter_update(SessionKey, Session),\n\tcase Update of\n\t\tnot_found -> ok;\n\t\t_ ->\n\t\t\tcase ar_http_iface_client:push_nonce_limiter_update(Peer, Update, Format) of\n\t\t\t\tok 
->\n\t\t\t\t\tok;\n\t\t\t\t{ok, #nonce_limiter_update_response{ step_number = ClientStepNumber,\n\t\t\t\t\t\tsession_found = ReportedSessionFound }} ->\n\t\t\t\t\tlog_failure(Peer, SessionKey, Update, behind_client,\n\t\t\t\t\t\t[{client_step_number, ClientStepNumber},\n\t\t\t\t\t\t{session_found, ReportedSessionFound}]),\n\t\t\t\t\tfail;\n\t\t\t\t{error, Error} ->\n\t\t\t\t\tlog_failure(Peer, SessionKey, Update, Error, []),\n\t\t\t\t\tfail\n\t\t\tend\n\tend.\n\nlog_failure(Peer, SessionKey, Update, Error, Extra) ->\n\t{SessionSeed, SessionInterval, NextVDFDifficulty} = SessionKey,\n\tStepNumber = Update#nonce_limiter_update.session#vdf_session.step_number,\n\tLog = [{event, failed_to_push_nonce_limiter_update_to_peer},\n\t\t\t{reason, io_lib:format(\"~p\", [Error])},\n\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t{session_seed, ar_util:encode(SessionSeed)},\n\t\t\t{session_interval, SessionInterval},\n\t\t\t{session_difficulty, NextVDFDifficulty},\n\t\t\t{server_step_number, StepNumber}] ++ Extra,\n\n\tcase Error of\n\t\tbehind_client -> ?LOG_DEBUG(Log);\n\t\t{shutdown, econnrefused} -> ?LOG_DEBUG(Log);\n\t\t{shutdown, timeout} -> ?LOG_DEBUG(Log);\n\t\t{shutdown, ehostunreach} -> ?LOG_DEBUG(Log);\n\t\t{closed, \"The connection was lost.\"} -> ?LOG_DEBUG(Log);\n\t\ttimeout -> ?LOG_DEBUG(Log);\n\t\t{<<\"400\">>, <<>>} -> ?LOG_DEBUG(Log);\n\t\t{<<\"503\">>, <<\"{\\\"error\\\":\\\"not_joined\\\"}\">>} -> ?LOG_DEBUG(Log);\n\t\t_ -> ?LOG_WARNING(Log)\n\tend.\n"
  },
  {
    "path": "apps/arweave/src/ar_nonce_limiter_sup.erl",
    "content": "-module(ar_nonce_limiter_sup).\n\n-behaviour(supervisor).\n\n-export([start_link/0]).\n\n-export([init/1]).\n\n-include_lib(\"arweave/include/ar_sup.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nstart_link() ->\n\tsupervisor:start_link({local, ?MODULE}, ?MODULE, []).\n\n%% ===================================================================\n%% Supervisor callbacks.\n%% ===================================================================\n\ninit([]) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tServerWorkers = lists:map(\n\t\tfun(Peer) ->\n\t\t\tName = list_to_atom(\"ar_nonce_limiter_server_worker_\"\n\t\t\t\t\t++ ar_util:peer_to_str(Peer)),\n\t\t\t?CHILD_WITH_ARGS(ar_nonce_limiter_server_worker,\n\t\t\t\t\tworker, Name, [Name, Peer])\n\t\tend,\n\t\tConfig#config.nonce_limiter_client_peers\n\t),\n\tClient = ?CHILD(ar_nonce_limiter_client, worker),\n\tServer = ?CHILD(ar_nonce_limiter_server, worker),\n\tNonceLimiter = ?CHILD(ar_nonce_limiter, worker),\n\n\tWorkers = case ar_config:is_vdf_server() of\n\t\ttrue ->\n\t\t\t[NonceLimiter, Server, Client | ServerWorkers];\n\t\tfalse ->\n\t\t\t[NonceLimiter, Client]\n\tend,\n\t?LOG_INFO([{event, nonce_limiter_sup_init}, {workers, Workers}]),\n\t{ok, {{one_for_one, 5, 10}, Workers}}.\n"
  },
  {
    "path": "apps/arweave/src/ar_packing_server.erl",
    "content": "-module(ar_packing_server).\n\n-behaviour(gen_server).\n\n-export([start_link/0, packing_atom/1, get_packing_state/0, get_randomx_state_for_h0/2,\n\t\trequest_unpack/2, request_unpack/3, request_repack/2, request_repack/3,\n\t\trequest_encipher/3, request_decipher/3,\n\t\tpack/4, unpack/5, repack/6, unpack_sub_chunk/5,\n\t\tis_buffer_full/0, record_buffer_size_metric/0,\n\t\tpad_chunk/1, unpad_chunk/3, unpad_chunk/4,\n\t\tencipher_replica_2_9_chunk/2, decipher_replica_2_9_chunk/2, \n\t\texor_replica_2_9_chunk/2, pack_replica_2_9_chunk/3, request_entropy_generation/3]).\n\n-export([init/1, handle_cast/2, handle_call/3, handle_info/2, terminate/2]).\n\n-include(\"ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include(\"ar_consensus.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-record(state, {\n\tworkers,\n\tnum_workers\n}).\n\n%% We remember the earliest entropy generation per mining address\n%% until it falls out of this window. Used to track the amount of\n%% redundant entropy generation.\n-define(ENTROPY_GENERATION_STATS_WINDOW_MS, 1000 * 60 * 30). % 30 minutes\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\npacking_atom(Packing) when is_atom(Packing) ->\n\tPacking;\npacking_atom({spora_2_6, _Addr}) ->\n\tspora_2_6;\npacking_atom({composite, _Addr, _Diff}) ->\n\tcomposite;\npacking_atom({replica_2_9, _Addr}) ->\n\treplica_2_9.\n\nrequest_unpack(Ref, Args) ->\n\trequest_unpack(Ref, self(), Args).\n\nrequest_unpack(Ref, ReplyTo, Args) ->\n\tar_util:cast_after(600000, ReplyTo, {expire_unpack_request, Ref}),\n\tgen_server:cast(?MODULE, {unpack_request, ReplyTo, Ref, Args}).\n\nrequest_repack(Ref, Args) ->\n\trequest_repack(Ref, self(), Args).\n\nrequest_repack(Ref, ReplyTo, Args) ->\n\tar_util:cast_after(600000, ReplyTo, {expire_repack_request, Ref}),\n\tgen_server:cast(?MODULE, {repack_request, ReplyTo, Ref, Args}).\n\nrequest_encipher(Ref, ReplyTo, {Chunk, Entropy}) ->\n\tar_util:cast_after(600000, ReplyTo, {expire_encipher_request, Ref}),\n\tgen_server:cast(?MODULE, {encipher_request, ReplyTo, Ref, {Chunk, Entropy}}).\n\nrequest_decipher(Ref, ReplyTo, {Chunk, Entropy}) ->\n\tar_util:cast_after(600000, ReplyTo, {expire_decipher_request, Ref}),\n\tgen_server:cast(?MODULE, {decipher_request, ReplyTo, Ref, {Chunk, Entropy}}).\n\nrequest_entropy_generation(\n\t\tRef, ReplyTo, {RewardAddr, BucketEndOffset, SubChunkStart, CacheEntropy}) ->\n\tgen_server:cast(?MODULE,\n\t\t{generate_entropy, ReplyTo, Ref,\n\t\t\t{RewardAddr, BucketEndOffset, SubChunkStart, CacheEntropy}}).\n\n%% @doc Pack the chunk for mining. 
Packing ensures every mined chunk of data is globally\n%% unique and cannot be easily inferred during mining from any metadata stored in RAM.\npack(Packing, ChunkOffset, TXRoot, Chunk) ->\n\tPackingState = get_packing_state(),\n\trecord_packing_request(pack, Packing, unpacked),\n\tcase pack(Packing, ChunkOffset, TXRoot, Chunk, PackingState, external) of\n\t\t{ok, Packed, _} ->\n\t\t\t{ok, Packed};\n\t\tReply ->\n\t\t\tReply\n\tend.\n\n%% @doc Unpack the chunk packed for mining.\n%%\n%% Return {ok, UnpackedChunk} or {error, invalid_packed_size} or {error, invalid_chunk_size}\n%% or {error, invalid_padding}.\nunpack(Packing, ChunkOffset, TXRoot, Chunk, ChunkSize) ->\n\tPackingState = get_packing_state(),\n\trecord_packing_request(unpack, unpacked, Packing),\n\tcase unpack(Packing, ChunkOffset, TXRoot, Chunk, ChunkSize, PackingState, external) of\n\t\t{ok, Unpacked, _WasAlreadyUnpacked} ->\n\t\t\t{ok, Unpacked};\n\t\tReply ->\n\t\t\tReply\n\tend.\n\n%% @doc Unpack the packed sub-chunk of a composite packing or shared entropy replica.\n%%\n%% Return {ok, UnpackedSubChunk} or {error, invalid_packed_size}.\nunpack_sub_chunk({composite, _, _} = Packing,\n\t\tAbsoluteEndOffset, TXRoot, Chunk, SubChunkStartOffset) ->\n\tcase byte_size(Chunk) == ?COMPOSITE_PACKING_SUB_CHUNK_SIZE of\n\t\tfalse ->\n\t\t\t{error, invalid_packed_size};\n\t\ttrue ->\n\t\t\tPackingState = get_packing_state(),\n\t\t\trecord_packing_request(unpack_sub_chunk, not_set, Packing),\n\t\t\t{PackingAtom, Key} = chunk_key(Packing, AbsoluteEndOffset, TXRoot),\n\t\t\tRandomXState = get_randomx_state_by_packing(Packing, PackingState),\n\t\t\tcase prometheus_histogram:observe_duration(packing_duration_milliseconds,\n\t\t\t\t\t[unpack_sub_chunk, PackingAtom, external], fun() ->\n\t\t\t\t\t\tar_mine_randomx:randomx_decrypt_sub_chunk(Packing, RandomXState,\n\t\t\t\t\t\t\t\t\tKey, Chunk, SubChunkStartOffset) end) of\n\t\t\t\t{ok, UnpackedSubChunk} ->\n\t\t\t\t\t{ok, UnpackedSubChunk};\n\t\t\t\tError ->\n\t\t\t\t\tError\n\t\t\tend\n\tend;\nunpack_sub_chunk({replica_2_9, RewardAddr} = Packing,\n\t\tAbsoluteEndOffset, _TXRoot, Chunk, SubChunkStartOffset) ->\n\tcase byte_size(Chunk) == ?COMPOSITE_PACKING_SUB_CHUNK_SIZE of\n\t\tfalse ->\n\t\t\t{error, invalid_packed_size};\n\t\ttrue ->\n\t\t\tPackingState = get_packing_state(),\n\t\t\trecord_packing_request(unpack_sub_chunk, not_set, Packing),\n\t\t\tEntropy = generate_replica_2_9_entropy(\n\t\t\t\tRewardAddr, AbsoluteEndOffset, SubChunkStartOffset),\n\t\t\tRandomXState = get_randomx_state_by_packing(Packing, PackingState),\n\t\t\tEntropySubChunkIndex = ar_replica_2_9:get_slice_index(AbsoluteEndOffset),\n\t\t\tcase prometheus_histogram:observe_duration(packing_duration_milliseconds,\n\t\t\t\t\t[unpack_sub_chunk, replica_2_9, external], fun() ->\n\t\t\t\t\t\tar_mine_randomx:randomx_decrypt_replica_2_9_sub_chunk({RandomXState,\n\t\t\t\t\t\t\tEntropy, Chunk, EntropySubChunkIndex}) end) of\n\t\t\t\t{ok, UnpackedSubChunk} ->\n\t\t\t\t\t{ok, UnpackedSubChunk};\n\t\t\t\tError ->\n\t\t\t\t\tError\n\t\t\tend\n\tend.\n\nrepack(RequestedPacking, StoredPacking, ChunkOffset, TXRoot, Chunk, ChunkSize) ->\n\tPackingState = get_packing_state(),\n\trecord_packing_request(repack, RequestedPacking, StoredPacking),\n\trepack(\n\t\tRequestedPacking, StoredPacking, ChunkOffset, TXRoot,\n\t\tChunk, ChunkSize, PackingState, external).\n\n%% @doc Return true if the packing server buffer is considered full, to apply\n%% some back-pressure on the pack/4 and unpack/5 callers.\nis_buffer_full() ->\n\t[{_, Limit}] = 
ets:lookup(?MODULE, buffer_size_limit),\n\tcase ets:lookup(?MODULE, buffer_size) of\n\t\t[{_, Size}] when Size > Limit ->\n\t\t\ttrue;\n\t\t_ ->\n\t\t\tfalse\n\tend.\n\npad_chunk(Chunk) ->\n\tpad_chunk(Chunk, byte_size(Chunk)).\npad_chunk(Chunk, ChunkSize) when ChunkSize == (?DATA_CHUNK_SIZE) ->\n\tChunk;\npad_chunk(Chunk, ChunkSize) ->\n\tZeros =\n\t\tcase erlang:get(zero_chunk) of\n\t\t\tundefined ->\n\t\t\t\tZeroChunk = << <<0>> || _ <- lists:seq(1, ?DATA_CHUNK_SIZE) >>,\n\t\t\t\t%% Cache the zero chunk in the process memory, constructing\n\t\t\t\t%% it is expensive.\n\t\t\t\terlang:put(zero_chunk, ZeroChunk),\n\t\t\t\tZeroChunk;\n\t\t\tZeroChunk ->\n\t\t\t\tZeroChunk\n\t\tend,\n\tPaddingSize = (?DATA_CHUNK_SIZE) - ChunkSize,\n\t<< Chunk/binary, (binary:part(Zeros, 0, PaddingSize))/binary >>.\n\nunpad_chunk(spora_2_5, Unpacked, ChunkSize, _PackedSize) ->\n\tbinary:part(Unpacked, 0, ChunkSize);\nunpad_chunk({spora_2_6, _Addr}, Unpacked, ChunkSize, PackedSize) ->\n\tunpad_chunk(Unpacked, ChunkSize, PackedSize);\nunpad_chunk({composite, _Addr, _PackingDifficulty}, Unpacked, ChunkSize, PackedSize) ->\n\tunpad_chunk(Unpacked, ChunkSize, PackedSize);\nunpad_chunk({replica_2_9, _Addr}, Unpacked, ChunkSize, PackedSize) ->\n\tunpad_chunk(Unpacked, ChunkSize, PackedSize);\nunpad_chunk(unpacked, Unpacked, ChunkSize, _PackedSize) ->\n\tbinary:part(Unpacked, 0, ChunkSize).\n\nunpad_chunk(Unpacked, ChunkSize, PackedSize) ->\n\tPadding = binary:part(Unpacked, ChunkSize, PackedSize - ChunkSize),\n\tcase Padding of\n\t\t<<>> ->\n\t\t\tUnpacked;\n\t\t_ ->\n\t\t\tcase is_zero(Padding) of\n\t\t\t\tfalse ->\n\t\t\t\t\terror;\n\t\t\t\ttrue ->\n\t\t\t\t\tbinary:part(Unpacked, 0, ChunkSize)\n\t\t\tend\n\tend.\n\nis_zero(<< 0:8, Rest/binary >>) ->\n\tis_zero(Rest);\nis_zero(<<>>) ->\n\ttrue;\nis_zero(_Rest) ->\n\tfalse.\n\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\nget_packing_state() ->\n\t[{_, PackingState}] = ets:lookup(?MODULE, randomx_packing_state),\n\tPackingState.\n\nget_randomx_state_for_h0(PackingDifficulty, PackingState) ->\n\t{RandomXState512, RandomXState4096, _} = PackingState,\n\tcase PackingDifficulty of\n\t\t0 ->\n\t\t\tRandomXState512;\n\t\t_ ->\n\t\t\tRandomXState4096\n\tend.\n\n%% @doc Encipher the given chunk with the given 2.9 entropy assembled for this chunk.\n%% Encipher and decipher are the same operation, only difference is how we record the operation.\n-spec encipher_replica_2_9_chunk(\n\t\tChunk :: binary(),\n\t\tEntropy :: binary()\n) -> binary().\nencipher_replica_2_9_chunk(Chunk, Entropy) ->\n\trecord_packing_request(encipher, {replica_2_9, <<>>}, unpacked_padded),\n\texor_replica_2_9_chunk(Chunk, Entropy).\n\n%% @doc Decipher the given chunk with the given 2.9 entropy assembled for this chunk.\n%% Encipher and decipher are the same operation, only difference is how we record the operation.\n-spec decipher_replica_2_9_chunk(\n\t\tChunk :: binary(),\n\t\tEntropy :: binary()\n) -> binary().\ndecipher_replica_2_9_chunk(Chunk, Entropy) ->\n\trecord_packing_request(decipher, unpacked_padded, {replica_2_9, <<>>}),\n\texor_replica_2_9_chunk(Chunk, Entropy).\n\n%% @doc Generate the 2.9 entropy.\n-spec generate_replica_2_9_entropy(\n\t\tRewardAddr :: binary(),\n\t\tBucketEndOffset :: non_neg_integer(),\n\t\tSubChunkStartOffset :: non_neg_integer()\n) -> binary().\ngenerate_replica_2_9_entropy(RewardAddr, BucketEndOffset, SubChunkStartOffset) ->\n\tgenerate_replica_2_9_entropy(RewardAddr, BucketEndOffset, SubChunkStartOffset, true).\n\n-spec 
generate_replica_2_9_entropy(\n\t\tRewardAddr :: binary(),\n\t\tBucketEndOffset :: non_neg_integer(),\n\t\tSubChunkStartOffset :: non_neg_integer(),\n\t\tCacheEntropy :: boolean()\n) -> binary().\ngenerate_replica_2_9_entropy(RewardAddr, BucketEndOffset, SubChunkStartOffset, false) ->\n\tKey = ar_replica_2_9:get_entropy_key(RewardAddr, BucketEndOffset, SubChunkStartOffset),\n\tdo_generate_entropy(RewardAddr, Key);\ngenerate_replica_2_9_entropy(RewardAddr, BucketEndOffset, SubChunkStartOffset, true) ->\n\tKey = ar_replica_2_9:get_entropy_key(RewardAddr, BucketEndOffset, SubChunkStartOffset),\n\tPartition = ar_node:get_partition_number(BucketEndOffset),\n\n\tentropy_generation_lock(Key, RewardAddr, BucketEndOffset, SubChunkStartOffset),\n\tcase ar_entropy_cache:get(Key) of\n\t\t{ok, Entropy} ->\n\t\t\tprometheus_counter:inc(replica_2_9_entropy_stats, [Partition, cache_hit]),\n\t\t\tentropy_generation_release(Key),\n\t\t\tEntropy;\n\t\tnot_found ->\n\t\t\tprometheus_counter:inc(replica_2_9_entropy_stats, [Partition, cache_miss]),\n\t\t\tEntropy = do_generate_entropy(RewardAddr, Key),\n\t\t\tupdate_entropy_generation_stats(Key, RewardAddr, BucketEndOffset, SubChunkStartOffset),\n\t\t\t{ok, Config} = arweave_config:get_env(),\n\t\t\tMaxSize = Config#config.replica_2_9_entropy_cache_size_mb * ?MiB,\n\t\t\tar_entropy_cache:clean_up_space(?REPLICA_2_9_ENTROPY_SIZE, MaxSize),\n\t\t\tar_entropy_cache:put(Key, Entropy, ?REPLICA_2_9_ENTROPY_SIZE),\n\t\t\tentropy_generation_release(Key),\n\t\t\tEntropy\n\tend.\n\ndo_generate_entropy(RewardAddr, Key) ->\n\tPackingState = get_packing_state(),\n\tRandomXState = get_randomx_state_by_packing({replica_2_9, RewardAddr}, PackingState),\n\tEntropy = ar_mine_randomx:randomx_generate_replica_2_9_entropy(RandomXState, Key),\n\t%% Primarily needed for testing where the entropy generated exceeds the entropy\n\t%% needed for tests.\n\tbinary_part(Entropy, 0, ?REPLICA_2_9_ENTROPY_SIZE).\n\n%% @doc Pad (to ?DATA_CHUNK_SIZE) and pack the chunk according to the 2.9 replication format.\n%% Return the chunk and the combined entropy used on that chunk.\n-spec pack_replica_2_9_chunk(\n\t\tRewardAddr :: binary(),\n\t\tAbsoluteEndOffset :: non_neg_integer(),\n\t\tChunk :: binary()\n) -> {ok, binary(), binary()}.\npack_replica_2_9_chunk(RewardAddr, AbsoluteEndOffset, Chunk) ->\n\tPackingState = get_packing_state(),\n\tRandomXState = get_randomx_state_by_packing({replica_2_9, RewardAddr}, PackingState),\n\tPaddedChunk = pad_chunk(Chunk),\n\tSubChunks = get_sub_chunks(PaddedChunk),\n\tpack_replica_2_9_sub_chunks(RewardAddr, AbsoluteEndOffset, RandomXState, SubChunks).\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\n\ninit([]) ->\n\t{ok, Config} = arweave_config:get_env(),\n\n\tar:console(\"~nInitialising RandomX datasets. Keys: ~p, ~p. 
\"\n\t\t\t\"The process may take several minutes.~n\",\n\t\t\t[ar_util:encode(?RANDOMX_PACKING_KEY),\n\t\t\t\tar_util:encode(?RANDOMX_PACKING_KEY)]),\n\t{RandomXState512, _RandomXState4096, _RandomXStateSharedEntropy}\n\t\t\t= PackingState = init_packing_state(),\n\tar:console(\"RandomX dataset initialisation complete.~n\", []),\n\t{H0, H1} = ar_bench_hash:run_benchmark(RandomXState512),\n\tH0String = io_lib:format(\"~.3f\", [H0 / 1000]),\n\tH1String = io_lib:format(\"~.3f\", [H1 / 1000]),\n\tar:console(\"Hashing benchmark~nH0: ~s ms~nH1/H2: ~s ms~n\", [H0String, H1String]),\n\t?LOG_INFO([{event, hash_benchmark}, {h0_ms, H0String}, {h1_ms, H1String}]),\n\tNumWorkers = Config#config.packing_workers,\n\tar:console(\"~nStarting ~B packing threads.~n\", [NumWorkers]),\n\t?LOG_INFO([{event, starting_packing_threads}, {num_threads, NumWorkers}]),\n\tWorkers = queue:from_list(\n\t\t[spawn_link(fun() -> worker(PackingState) end) || _ <- lists:seq(1, NumWorkers)]),\n\tets:insert(?MODULE, {buffer_size, 0}),\n\n\tMaxSize =\n\t\tcase Config#config.packing_cache_size_limit of\n\t\t\tundefined ->\n\t\t\t\tFree = proplists:get_value(free_memory, memsup:get_system_memory_data(),\n\t\t\t\t\t\t2000000000),\n\t\t\t\tLimit2 = min(1200, erlang:ceil(Free * 0.9 / 3 / ?DATA_CHUNK_SIZE)),\n\t\t\t\tLimit3 = ar_util:ceil_int(Limit2, 100),\n\t\t\t\tLimit3;\n\t\t\tLimit ->\n\t\t\t\tLimit\n\t\tend,\n\tar:console(\"~nSetting the packing chunk cache size limit to ~B chunks.~n\", [MaxSize]),\n\t?LOG_INFO([{event, packing_chunk_cache_size_limit}, {max_size, MaxSize}]),\n\tets:insert(?MODULE, {buffer_size_limit, MaxSize}),\n\t{ok, _} = ar_timer:apply_interval(\n\t\t200,\n\t\t?MODULE,\n\t\trecord_buffer_size_metric,\n\t\t[],\n\t\t#{ skip_on_shutdown => false }\n\t),\n\t{ok, #state{\n\t\tworkers = Workers, num_workers = NumWorkers }}.\n\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {request, Request}]),\n\t{reply, ok, State}.\n\nhandle_cast({unpack_request, _, _, _}, #state{ num_workers = 0 } = State) ->\n\t?LOG_WARNING([{event, got_unpack_request_while_packing_is_disabled}]),\n\t{noreply, State};\nhandle_cast({unpack_request, From, Ref, Args}, State) ->\n\t#state{ workers = Workers } = State,\n\t{Packing, _Chunk, _AbsoluteOffset, _TXRoot, _ChunkSize} = Args,\n\t{{value, Worker}, Workers2} = queue:out(Workers),\n\tincrement_buffer_size(),\n\trecord_packing_request(unpack, unpacked, Packing),\n\tWorker ! {unpack, Ref, From, Args},\n\t{noreply, State#state{ workers = queue:in(Worker, Workers2) }};\nhandle_cast({repack_request, _, _, _}, #state{ num_workers = 0 } = State) ->\n\t?LOG_WARNING([{event, got_repack_request_while_packing_is_disabled}]),\n\t{noreply, State};\nhandle_cast({repack_request, From, Ref, Args}, State) ->\n\t#state{ workers = Workers } = State,\n\t{RequestedPacking, Packing, Chunk, AbsoluteOffset, TXRoot, ChunkSize} = Args,\n\t{{value, Worker}, Workers2} = queue:out(Workers),\n\tcase {RequestedPacking, Packing} of\n\t\t{unpacked, unpacked} ->\n\t\t\tFrom ! {chunk, {packed, Ref, {unpacked, Chunk, AbsoluteOffset, TXRoot, ChunkSize}}},\n\t\t\t{noreply, State};\n\t\t{_, unpacked} ->\n\t\t\tincrement_buffer_size(),\n\t\t\trecord_packing_request(pack, RequestedPacking, unpacked),\n\t\t\tWorker ! 
{pack, Ref, From, {RequestedPacking, Chunk, AbsoluteOffset, TXRoot,\n\t\t\t\t\tChunkSize}},\n\t\t\t{noreply, State#state{ workers = queue:in(Worker, Workers2) }};\n\t\t_ ->\n\t\t\tincrement_buffer_size(),\n\t\t\trecord_packing_request(repack, RequestedPacking, Packing),\n\t\t\tWorker ! {\n\t\t\t\trepack, Ref, From,\n\t\t\t\t{RequestedPacking, Packing, Chunk, AbsoluteOffset, TXRoot, ChunkSize}\n\t\t\t},\n\t\t\t{noreply, State#state{ workers = queue:in(Worker, Workers2) }}\n\tend;\nhandle_cast({encipher_request, From, Ref, {Chunk, Entropy}}, State) ->\n\t#state{ workers = Workers } = State,\n\t{{value, Worker}, Workers2} = queue:out(Workers),\n\tWorker ! {encipher, Ref, From, {Chunk, Entropy}},\n\t{noreply, State#state{ workers = queue:in(Worker, Workers2) }};\nhandle_cast({decipher_request, From, Ref, {Chunk, Entropy}}, State) ->\n\t#state{ workers = Workers } = State,\n\t{{value, Worker}, Workers2} = queue:out(Workers),\n\tWorker ! {decipher, Ref, From, {Chunk, Entropy}},\n\t{noreply, State#state{ workers = queue:in(Worker, Workers2) }};\nhandle_cast({generate_entropy, From, Ref,\n\t\t{RewardAddr, BucketEndOffset, SubChunkStart, CacheEntropy}}, State) ->\n\t#state{ workers = Workers } = State,\n\t{{value, Worker}, Workers2} = queue:out(Workers),\n\tWorker ! {generate_entropy, Ref, From,\n\t\t{RewardAddr, BucketEndOffset, SubChunkStart, CacheEntropy}},\n\t{noreply, State#state{ workers = queue:in(Worker, Workers2) }};\nhandle_cast(Cast, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {cast, Cast}]),\n\t{noreply, State}.\n\nhandle_info(Message, State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {module, ?MODULE}, {message, Message}]),\n\t{noreply, State}.\n\nterminate(Reason, _State) ->\n\t?LOG_INFO([{module, ?MODULE},{pid, self()},{callback, terminate},{reason, Reason}]),\n\tok.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\ninit_packing_state() ->\n\tSchedulers = erlang:system_info(dirty_cpu_schedulers_online),\n\tRandomXState512 = ar_mine_randomx:init_fast(rx512, ?RANDOMX_PACKING_KEY, Schedulers),\n\tRandomXState4096 = ar_mine_randomx:init_fast(rx4096, ?RANDOMX_PACKING_KEY, Schedulers),\n\tRandomXStateSharedEntropy = ar_mine_randomx:init_fast(rxsquared,\n\t\t\t?RANDOMX_PACKING_KEY, Schedulers),\n\tPackingState = {RandomXState512, RandomXState4096, RandomXStateSharedEntropy},\n\tets:insert(?MODULE, {randomx_packing_state, PackingState}),\n\tPackingState.\n\nget_randomx_state_by_packing({composite, _, _}, {_, RandomXState, _}) ->\n\tRandomXState;\nget_randomx_state_by_packing({replica_2_9, _}, {_, _, RandomXState}) ->\n\tRandomXState;\nget_randomx_state_by_packing({spora_2_6, _}, {RandomXState, _, _}) ->\n\tRandomXState;\nget_randomx_state_by_packing(spora_2_5, {RandomXState, _, _}) ->\n\tRandomXState.\n\nworker(PackingState) ->\n\treceive\n\t\t{unpack, Ref, From, Args} ->\n\t\t\t{Packing, Chunk, AbsoluteOffset, TXRoot, ChunkSize} = Args,\n\t\t\tcase unpack(Packing, AbsoluteOffset, TXRoot, Chunk, ChunkSize,\n\t\t\t\t\tPackingState, internal) of\n\t\t\t\t{ok, U, _AlreadyUnpacked} ->\n\t\t\t\t\tFrom ! {chunk, {unpacked, Ref, {Packing, U, AbsoluteOffset, TXRoot,\n\t\t\t\t\t\t\tChunkSize}}};\n\t\t\t\t{error, invalid_packed_size} ->\n\t\t\t\t\tFrom ! {chunk, {unpack_error, Ref, Args, invalid_packed_size}};\n\t\t\t\t{error, invalid_chunk_size} ->\n\t\t\t\t\tFrom ! 
{chunk, {unpack_error, Ref, Args, invalid_chunk_size}};\n\t\t\t\t{error, invalid_padding} ->\n\t\t\t\t\tFrom ! {chunk, {unpack_error, Ref, Args, invalid_padding}};\n\t\t\t\t{exception, Error} ->\n\t\t\t\t\t?LOG_ERROR([{event, failed_to_unpack_chunk},\n\t\t\t\t\t\t\t{absolute_end_offset, AbsoluteOffset},\n\t\t\t\t\t\t\t{error, io_lib:format(\"~p\", [Error])}])\n\t\t\tend,\n\t\t\tdecrement_buffer_size(),\n\t\t\tworker(PackingState);\n\t\t{pack, Ref, From, Args} ->\n\t\t\t{Packing, Chunk, AbsoluteOffset, TXRoot, ChunkSize} = Args,\n\t\t\tcase pack(Packing, AbsoluteOffset, TXRoot, Chunk, PackingState, internal) of\n\t\t\t\t{ok, Packed, _AlreadyPacked} ->\n\t\t\t\t\tFrom ! {chunk, {packed, Ref, {Packing, Packed, AbsoluteOffset, TXRoot,\n\t\t\t\t\t\t\tChunkSize}}};\n\t\t\t\t{error, invalid_unpacked_size} ->\n\t\t\t\t\t?LOG_WARNING([{event, got_unpacked_chunk_of_invalid_size}]);\n\t\t\t\t{exception, Error} ->\n\t\t\t\t\t?LOG_ERROR([{event, failed_to_pack_chunk},\n\t\t\t\t\t\t\t{absolute_end_offset, AbsoluteOffset},\n\t\t\t\t\t\t\t{error, io_lib:format(\"~p\", [Error])}])\n\t\t\tend,\n\t\t\tdecrement_buffer_size(),\n\t\t\tworker(PackingState);\n\t\t{repack, Ref, From, Args} ->\n\t\t\t{RequestedPacking, Packing, Chunk, AbsoluteOffset, TXRoot, ChunkSize} = Args,\n\t\t\tcase repack(RequestedPacking, Packing,\n\t\t\t\t\tAbsoluteOffset, TXRoot, Chunk, ChunkSize, PackingState, internal) of\n\t\t\t\t{ok, Packed, _RepackInput} ->\n\t\t\t\t\tFrom ! {chunk, {packed, Ref,\n\t\t\t\t\t\t\t{RequestedPacking, Packed, AbsoluteOffset, TXRoot, ChunkSize}}};\n\t\t\t\t{error, invalid_packed_size} ->\n\t\t\t\t\t?LOG_WARNING([{event, got_packed_chunk_of_invalid_size}]);\n\t\t\t\t{error, invalid_chunk_size} ->\n\t\t\t\t\t?LOG_WARNING([{event, got_packed_chunk_with_invalid_chunk_size}]);\n\t\t\t\t{error, invalid_padding} ->\n\t\t\t\t\t?LOG_WARNING([{event, got_packed_chunk_with_invalid_padding},\n\t\t\t\t\t\t{absolute_end_offset, AbsoluteOffset}]);\n\t\t\t\t{error, invalid_unpacked_size} ->\n\t\t\t\t\t?LOG_WARNING([{event, got_unpacked_chunk_of_invalid_size}]);\n\t\t\t\t{exception, Error} ->\n\t\t\t\t\t?LOG_ERROR([{event, failed_to_repack_chunk},\n\t\t\t\t\t\t\t{absolute_end_offset, AbsoluteOffset},\n\t\t\t\t\t\t\t{error, io_lib:format(\"~p\", [Error])}])\n\t\t\tend,\n\t\t\tdecrement_buffer_size(),\n\t\t\tworker(PackingState);\n\t\t{encipher, Ref, From, {Chunk, Entropy}} ->\n\t\t\tPackedChunk = encipher_replica_2_9_chunk(Chunk, Entropy),\n\t\t\tFrom ! {chunk, {enciphered, Ref, PackedChunk}},\n\t\t\tworker(PackingState);\n\t\t{decipher, Ref, From, {Chunk, Entropy}} ->\n\t\t\tUnpackedChunk = decipher_replica_2_9_chunk(Chunk, Entropy),\n\t\t\tFrom ! {chunk, {deciphered, Ref, UnpackedChunk}},\n\t\t\tworker(PackingState);\n\t\t{generate_entropy, Ref, From, {RewardAddr, BucketEndOffset, SubChunkStart, CacheEntropy}} ->\n\t\t\tEntropy = generate_replica_2_9_entropy(\n\t\t\t\tRewardAddr, BucketEndOffset, SubChunkStart, CacheEntropy),\n\t\t\tFrom ! {entropy_generated, Ref, Entropy},\n\t\t\tworker(PackingState)\n\tend.\n\nchunk_key(spora_2_5, ChunkOffset, TXRoot) ->\n\t%% The presence of the absolute end offset in the key makes sure\n\t%% packing of every chunk is unique, even when the same chunk is\n\t%% present in the same transaction or across multiple transactions\n\t%% or blocks. 
The presence of the transaction root in the key\n\t%% ensures one cannot find data that has certain patterns after\n\t%% packing.\n\t{spora_2_5, crypto:hash(sha256, << ChunkOffset:256, TXRoot/binary >>)};\nchunk_key({spora_2_6, RewardAddr}, ChunkOffset, TXRoot) ->\n\t%% The presence of the absolute end offset in the key makes sure\n\t%% packing of every chunk is unique, even when the same chunk is\n\t%% present in the same transaction or across multiple transactions\n\t%% or blocks. The presence of the transaction root in the key\n\t%% ensures one cannot find data that has certain patterns after\n\t%% packing. The presence of the reward address, combined with\n\t%% the 2.6 mining mechanics, puts a relatively low cap on the performance\n\t%% of a single dataset replica, essentially incentivizing miners to create\n\t%% more weave replicas per invested dollar.\n\t{\n\t\tspora_2_6,\n\t\tcrypto:hash(sha256, << ChunkOffset:256, TXRoot:32/binary, RewardAddr/binary >>)\n\t};\nchunk_key({composite, RewardAddr, PackingDiff}, ChunkOffset, TXRoot) ->\n\t%% This is only a part of the packing key. Each sub-chunk is packed using a different\n\t%% key composed from the key returned by this function and the relative sub-chunk offset.\n\t{\n\t\tcomposite,\n\t\tcrypto:hash(sha256, << ChunkOffset:256, TXRoot:32/binary, PackingDiff:8,\n\t\t\t\tRewardAddr/binary >>)\n\t}.\n\npack(unpacked, _ChunkOffset, _TXRoot, Chunk, _PackingState, _External) ->\n\t%% Allows to reuse the same interface for unpacking and repacking.\n\t{ok, Chunk, already_packed};\npack(unpacked_padded, _ChunkOffset, _TXRoot, Chunk, _PackingState, _External) ->\n\t%% Allows to reuse the same interface for unpacking and repacking.\n\t{ok, pad_chunk(Chunk), was_not_already_packed};\npack({replica_2_9, RewardAddr} = Packing, AbsoluteEndOffset, _TXRoot, Chunk, PackingState,\n\t\t_External) ->\n\tcase byte_size(Chunk) > ?DATA_CHUNK_SIZE of\n\t\ttrue ->\n\t\t\t{error, invalid_unpacked_size};\n\t\tfalse ->\n\t\t\tRandomXState = get_randomx_state_by_packing(Packing, PackingState),\n\t\t\tPaddedChunk = pad_chunk(Chunk),\n\t\t\tSubChunks = get_sub_chunks(PaddedChunk),\n\t\t\tcase pack_replica_2_9_sub_chunks(RewardAddr, AbsoluteEndOffset,\n\t\t\t\t\tRandomXState, SubChunks) of\n\t\t\t\t{ok, Packed, _Entropy} ->\n\t\t\t\t\t{ok, Packed, was_not_already_packed};\n\t\t\t\tError ->\n\t\t\t\t\tError\n\t\t\tend\n\tend;\npack(Packing, ChunkOffset, TXRoot, Chunk, PackingState, External) ->\n\tcase byte_size(Chunk) > ?DATA_CHUNK_SIZE of\n\t\ttrue ->\n\t\t\t{error, invalid_unpacked_size};\n\t\tfalse ->\n\t\t\t{PackingAtom, Key} = chunk_key(Packing, ChunkOffset, TXRoot),\n\t\t\tRandomXState = get_randomx_state_by_packing(Packing, PackingState),\n\t\t\tcase prometheus_histogram:observe_duration(packing_duration_milliseconds,\n\t\t\t\t\t[pack, PackingAtom, External], fun() ->\n\t\t\t\t\t\t\tar_mine_randomx:randomx_encrypt_chunk(Packing, RandomXState,\n\t\t\t\t\t\t\t\t\tKey, Chunk) end) of\n\t\t\t\t{ok, Packed} ->\n\t\t\t\t\t{ok, Packed, was_not_already_packed};\n\t\t\t\tError ->\n\t\t\t\t\tError\n\t\t\tend\n\tend.\n\nget_sub_chunks(<< SubChunk:(?COMPOSITE_PACKING_SUB_CHUNK_SIZE)/binary, Rest/binary >>) ->\n\t[SubChunk | get_sub_chunks(Rest)];\nget_sub_chunks(<<>>) ->\n\t[].\n\npack_replica_2_9_sub_chunks(RewardAddr, AbsoluteEndOffset, RandomXState, SubChunks) ->\n\tpack_replica_2_9_sub_chunks(RewardAddr, AbsoluteEndOffset, RandomXState,\n\t\t\t0, SubChunks, [], []).\n\npack_replica_2_9_sub_chunks(_RewardAddr, _AbsoluteEndOffset, 
_RandomXState,\n\t\t_SubChunkStartOffset, [], PackedSubChunks, EntropyParts) ->\n\t{ok, iolist_to_binary(lists:reverse(PackedSubChunks)),\n\t\t\tiolist_to_binary(lists:reverse(EntropyParts))};\npack_replica_2_9_sub_chunks(RewardAddr, AbsoluteEndOffset, RandomXState,\n\t\tSubChunkStartOffset, [SubChunk | SubChunks], PackedSubChunks, EntropyParts) ->\n\tEntropySubChunkIndex = ar_replica_2_9:get_slice_index(AbsoluteEndOffset),\n\tEntropy = generate_replica_2_9_entropy(RewardAddr, AbsoluteEndOffset, SubChunkStartOffset),\n\tcase prometheus_histogram:observe_duration(packing_duration_milliseconds,\n\t\t\t[pack_sub_chunk, replica_2_9, internal], fun() ->\n\t\t\t\t\tar_mine_randomx:randomx_encrypt_replica_2_9_sub_chunk({RandomXState,\n\t\t\t\t\t\t\tEntropy, SubChunk, EntropySubChunkIndex}) end) of\n\t\t{ok, PackedSubChunk} ->\n\t\t\tSubChunkSize = ?COMPOSITE_PACKING_SUB_CHUNK_SIZE,\n\t\t\tEntropyPart = binary:part(Entropy,\n\t\t\t\t\tEntropySubChunkIndex * ?COMPOSITE_PACKING_SUB_CHUNK_SIZE,\n\t\t\t\t\t?COMPOSITE_PACKING_SUB_CHUNK_SIZE),\n\t\t\tpack_replica_2_9_sub_chunks(RewardAddr, AbsoluteEndOffset, RandomXState,\n\t\t\t\tSubChunkStartOffset + SubChunkSize, SubChunks,\n\t\t\t\t[PackedSubChunk | PackedSubChunks], [EntropyPart | EntropyParts]);\n\t\tError ->\n\t\t\tError\n\tend.\n\nunpack_replica_2_9_sub_chunks(RewardAddr, AbsoluteEndOffset, RandomXState, SubChunks) ->\n\tunpack_replica_2_9_sub_chunks(\n\t\tRewardAddr, AbsoluteEndOffset, RandomXState, 0, SubChunks, []).\n\nunpack_replica_2_9_sub_chunks(_RewardAddr, _AbsoluteEndOffset, _RandomXState,\n\t\t_SubChunkStartOffset, [], UnpackedSubChunks) ->\n\t{ok, iolist_to_binary(lists:reverse(UnpackedSubChunks))};\nunpack_replica_2_9_sub_chunks(RewardAddr, AbsoluteEndOffset, RandomXState,\n\t\tSubChunkStartOffset, [SubChunk | SubChunks], UnpackedSubChunks) ->\n\tEntropySubChunkIndex = ar_replica_2_9:get_slice_index(AbsoluteEndOffset),\n\tEntropy = generate_replica_2_9_entropy(RewardAddr, AbsoluteEndOffset, SubChunkStartOffset),\n\tcase prometheus_histogram:observe_duration(packing_duration_milliseconds,\n\t\t\t[unpack_sub_chunk, replica_2_9, internal], fun() ->\n\t\t\t\t\tar_mine_randomx:randomx_decrypt_replica_2_9_sub_chunk({RandomXState,\n\t\t\t\t\t\t\tEntropy, SubChunk, EntropySubChunkIndex}) end) of\n\t\t{ok, UnpackedSubChunk} ->\n\t\t\tSubChunkSize = ?COMPOSITE_PACKING_SUB_CHUNK_SIZE,\n\t\t\tunpack_replica_2_9_sub_chunks(RewardAddr, AbsoluteEndOffset, RandomXState,\n\t\t\t\t\tSubChunkStartOffset + SubChunkSize, SubChunks,\n\t\t\t\t\t[UnpackedSubChunk | UnpackedSubChunks]);\n\t\tError ->\n\t\t\tError\n\tend.\n\nunpack({replica_2_9, RewardAddr} = Packing, AbsoluteEndOffset,\n\t\t_TXRoot, Chunk, ChunkSize, PackingState, _External) ->\n\tcase validate_chunk_size(Packing, Chunk, ChunkSize) of\n\t\t{error, Reason} ->\n\t\t\t?LOG_ERROR([{event, unpack_chunk_size_error}, {error, Reason},\n\t\t\t\t\t{chunk_offset, AbsoluteEndOffset},\n\t\t\t\t\t{packing, ar_serialize:encode_packing(Packing, true)},\n\t\t\t\t\t{expected_chunk_size, ChunkSize},\n\t\t\t\t\t{actual_chunk_size, byte_size(Chunk)}]),\n\t\t\t{error, Reason};\n\t\t{ok, PackedSize} ->\n\t\t\tSubChunks = get_sub_chunks(Chunk),\n\t\t\tRandomXState = get_randomx_state_by_packing(Packing, PackingState),\n\t\t\tcase unpack_replica_2_9_sub_chunks(RewardAddr, AbsoluteEndOffset,\n\t\t\t\t\tRandomXState, SubChunks) of\n\t\t\t\t{ok, Unpacked} ->\n\t\t\t\t\tcase ar_packing_server:unpad_chunk(Packing, Unpacked,\n\t\t\t\t\t\t\tChunkSize, PackedSize) of\n\t\t\t\t\t\terror 
->\n\t\t\t\t\t\t\t?LOG_WARNING([{event, unpad_chunk_error},\n\t\t\t\t\t\t\t\t\t{packed_size, PackedSize},\n\t\t\t\t\t\t\t\t\t{chunk_size, ChunkSize},\n\t\t\t\t\t\t\t\t\t{absolute_end_offset, AbsoluteEndOffset}]),\n\t\t\t\t\t\t\t{error, invalid_padding};\n\t\t\t\t\t\tUnpackedChunk ->\n\t\t\t\t\t\t\t{ok, UnpackedChunk, was_not_already_unpacked}\n\t\t\t\t\tend;\n\t\t\t\tError ->\n\t\t\t\t\t?LOG_ERROR([{event, unpack_replica_2_9_sub_chunks_error}, {error, Error}]),\n\t\t\t\t\tError\n\t\t\tend\n\tend;\nunpack(unpacked, _ChunkOffset, _TXRoot, Chunk, _ChunkSize, _PackingState, _External) ->\n\t%% Allows to reuse the same interface for unpacking and repacking.\n\t{ok, Chunk, already_unpacked};\nunpack(unpacked_padded, _ChunkOffset, _TXRoot, Chunk, ChunkSize, _PackingState, _External) ->\n\t{ok, binary:part(Chunk, 0, ChunkSize), was_not_already_unpacked};\nunpack(Packing, ChunkOffset, TXRoot, Chunk, ChunkSize, PackingState, External) ->\n\tcase validate_chunk_size(Packing, Chunk, ChunkSize) of\n\t\t{error, Reason} ->\n\t\t\t?LOG_ERROR([{event, unpack_chunk_size_error}, {error, Reason},\n\t\t\t\t\t{chunk_offset, ChunkOffset},\n\t\t\t\t\t{packing, ar_serialize:encode_packing(Packing, true)},\n\t\t\t\t\t{expected_chunk_size, ChunkSize},\n\t\t\t\t\t{actual_chunk_size, byte_size(Chunk)}]),\n\t\t\t{error, Reason};\n\t\t{ok, _PackedSize} ->\n\t\t\t{PackingAtom, Key} = chunk_key(Packing, ChunkOffset, TXRoot),\n\t\t\tRandomXState = get_randomx_state_by_packing(Packing, PackingState),\n\t\t\tcase prometheus_histogram:observe_duration(packing_duration_milliseconds,\n\t\t\t\t\t[unpack, PackingAtom, External], fun() ->\n\t\t\t\t\t\t\tar_mine_randomx:randomx_decrypt_chunk(Packing, RandomXState,\n\t\t\t\t\t\t\t\t\tKey, Chunk, ChunkSize) end) of\n\t\t\t\t{ok, Unpacked} ->\n\t\t\t\t\t{ok, Unpacked, was_not_already_unpacked};\n\t\t\t\tError ->\n\t\t\t\t\tError\n\t\t\tend\n\tend.\n\nrepack(unpacked, unpacked,\n\t\t_ChunkOffset, _TXRoot, Chunk, _ChunkSize, _PackingState, _External) ->\n\t%% The difference with the next clause is that here we know the unpacked chunk\n\t%% and can explicitly return it as unpacked.\n\t{ok, Chunk, Chunk};\nrepack(RequestedPacking, StoredPacking,\n\t\t_ChunkOffset, _TXRoot, Chunk, _ChunkSize, _PackingState, _External)\n\t\twhen StoredPacking == RequestedPacking ->\n\t%% StoredPacking and Packing are in the same format and neither is unpacked. To\n\t%% avoid uneccessary unpacking we'll return none for the UnpackedChunk. 
If a caller\n\t%% needs the UnpackedChunk they should call unpack explicity.\n\t{ok, Chunk, none};\n\nrepack(RequestedPacking, unpacked_padded,\n\t\tChunkOffset, TXRoot, Chunk, ChunkSize, PackingState, External) ->\n\tUnpacked = binary:part(Chunk, 0, ChunkSize),\n\trepack(RequestedPacking, unpacked,\n\t\t\tChunkOffset, TXRoot, Unpacked, ChunkSize, PackingState, External);\nrepack(RequestedPacking, unpacked,\n\t\tChunkOffset, TXRoot, Chunk, _ChunkSize, PackingState, External) ->\n\tcase pack(RequestedPacking, ChunkOffset, TXRoot, Chunk, PackingState, External) of\n\t\t{ok, Packed, _WasAlreadyPacked} ->\n\t\t\t{ok, Packed, Chunk};\n\t\tError ->\n\t\t\tError\n\tend;\n\nrepack(unpacked_padded, StoredPacking,\n\t\tChunkOffset, TXRoot, Chunk, ChunkSize, PackingState, External) ->\n\tcase unpack(StoredPacking, ChunkOffset, TXRoot, Chunk, ChunkSize, PackingState, External) of\n\t\t{ok, Unpacked, _WasAlreadyUnpacked} ->\n\t\t\t{ok, pad_chunk(Unpacked), Unpacked};\n\t\tError ->\n\t\t\tError\n\tend;\nrepack(unpacked, StoredPacking,\n\t\tChunkOffset, TXRoot, Chunk, ChunkSize, PackingState, External) ->\n\tcase unpack(StoredPacking, ChunkOffset, TXRoot, Chunk, ChunkSize, PackingState, External) of\n\t\t{ok, Unpacked, _WasAlreadyUnpacked} ->\n\t\t\t{ok, Unpacked, Unpacked};\n\t\tError ->\n\t\t\tError\n\tend;\n\nrepack({replica_2_9, _} = RequestedPacking, StoredPacking,\n\t\tChunkOffset, TXRoot, Chunk, ChunkSize, PackingState, External) ->\n\trepack_no_nif({RequestedPacking, StoredPacking, ChunkOffset, TXRoot, Chunk,\n\t\t\tChunkSize, PackingState, External});\n\nrepack(RequestedPacking, {replica_2_9, _} = StoredPacking,\n\t\tChunkOffset, TXRoot, Chunk, ChunkSize, PackingState, External) ->\n\trepack_no_nif({RequestedPacking, StoredPacking, ChunkOffset, TXRoot, Chunk,\n\t\t\tChunkSize, PackingState, External});\n\nrepack({composite, RequestedAddr, RequestedPackingDifficulty} = RequestedPacking,\n\t\t{composite, StoredAddr, StoredPackingDifficulty} = StoredPacking,\n\t\t\tChunkOffset, TXRoot, Chunk, ChunkSize, PackingState, External)\n\t\twhen RequestedAddr == StoredAddr,\n\t\t\tStoredPackingDifficulty > RequestedPackingDifficulty ->\n\trepack_no_nif({RequestedPacking, StoredPacking, ChunkOffset, TXRoot, Chunk,\n\t\t\tChunkSize, PackingState, External});\n\nrepack({composite, _Addr, _PackingDifficulty} = RequestedPacking,\n\t\t{spora_2_6, _StoredAddr} = StoredPacking,\n\t\t\tChunkOffset, TXRoot, Chunk, ChunkSize, PackingState, External) ->\n\trepack_no_nif({RequestedPacking, StoredPacking, ChunkOffset, TXRoot, Chunk,\n\t\t\tChunkSize, PackingState, External});\n\nrepack({spora_2_6, _StoredAddr} = RequestedPacking,\n\t\t{composite, _Addr, _PackingDifficulty} = StoredPacking,\n\t\t\tChunkOffset, TXRoot, Chunk, ChunkSize, PackingState, External) ->\n\trepack_no_nif({RequestedPacking, StoredPacking, ChunkOffset, TXRoot, Chunk,\n\t\t\tChunkSize, PackingState, External});\n\nrepack({composite, _Addr, _PackingDifficulty} = RequestedPacking,\n\t\tspora_2_5 = StoredPacking,\n\t\t\tChunkOffset, TXRoot, Chunk, ChunkSize, PackingState, External) ->\n\trepack_no_nif({RequestedPacking, StoredPacking, ChunkOffset, TXRoot, Chunk,\n\t\t\tChunkSize, PackingState, External});\n\nrepack(RequestedPacking, StoredPacking,\n\t\tChunkOffset, TXRoot, Chunk, ChunkSize, PackingState, External) ->\n\t{SourcePackingAtom, UnpackKey} = chunk_key(StoredPacking, ChunkOffset, TXRoot),\n\t{TargetPackingAtom, PackKey} = chunk_key(RequestedPacking, ChunkOffset, TXRoot),\n\tcase validate_chunk_size(StoredPacking, Chunk, ChunkSize) of\n\t\t{ok, 
_} ->\n\t\t\tPrometheusLabel = atom_to_list(SourcePackingAtom) ++ \"_to_\"\n\t\t\t\t\t++ atom_to_list(TargetPackingAtom),\n\t\t\t%% By the time we hit this branch both RequestedPacking and StoredPacking should\n\t\t\t%% use the same RandomX state (i.e. both are either spora_2_5/spora_2_6 or both\n\t\t\t%% composite).\n\t\t\tRandomXState = get_randomx_state_by_packing(RequestedPacking, PackingState),\n\t\t\tprometheus_histogram:observe_duration(packing_duration_milliseconds,\n\t\t\t\t[repack, PrometheusLabel, External], fun() ->\n\t\t\t\t\tar_mine_randomx:randomx_reencrypt_chunk(StoredPacking, RequestedPacking,\n\t\t\t\t\t\t\tRandomXState, UnpackKey, PackKey, Chunk, ChunkSize) end);\n\t\tError ->\n\t\t\t?LOG_ERROR([{event, repack_chunk_size_error}, {error, Error},\n\t\t\t\t\t{chunk_offset, ChunkOffset},\n\t\t\t\t\t{requested_packing, ar_serialize:encode_packing(RequestedPacking, true)},\n\t\t\t\t\t{stored_packing, ar_serialize:encode_packing(StoredPacking, true)},\n\t\t\t\t\t{expected_chunk_size, ChunkSize},\n\t\t\t\t\t{actual_chunk_size, byte_size(Chunk)}]),\n\t\t\tError\n\tend.\n\nrepack_no_nif(Args) ->\n\t{RequestedPacking, StoredPacking, ChunkOffset, TXRoot, Chunk,\n\t\t\tChunkSize, PackingState, External} = Args,\n\tcase unpack(StoredPacking, ChunkOffset, TXRoot,\n\t\t\tChunk, ChunkSize, PackingState, External) of\n\t\t{ok, Unpacked, _WasAlreadyUnpacked} ->\n\t\t\tcase pack(RequestedPacking, ChunkOffset, TXRoot, Unpacked, PackingState, External) of\n\t\t\t\t{ok, Packed, _WasAlreadyPacked} ->\n\t\t\t\t\t{ok, Packed, Unpacked};\n\t\t\t\tError2 ->\n\t\t\t\t\tError2\n\t\t\tend;\n\t\tError ->\n\t\t\tError\n\tend.\n\nvalidate_chunk_size(spora_2_5, Chunk, ChunkSize) ->\n\tPackedSize = byte_size(Chunk),\n\tcase PackedSize ==\n\t\t\t(((ChunkSize - 1) div (?DATA_CHUNK_SIZE)) + 1) * (?DATA_CHUNK_SIZE) of\n\t\tfalse ->\n\t\t\t{error, invalid_packed_size};\n\t\ttrue ->\n\t\t\t{ok, PackedSize}\n\tend;\nvalidate_chunk_size({spora_2_6, _Addr}, Chunk, ChunkSize) ->\n\tvalidate_chunk_size(Chunk, ChunkSize);\nvalidate_chunk_size({composite, _Addr, _PackingDifficulty}, Chunk, ChunkSize) ->\n\tvalidate_chunk_size(Chunk, ChunkSize);\nvalidate_chunk_size({replica_2_9, _Addr}, Chunk, ChunkSize) ->\n\tvalidate_chunk_size(Chunk, ChunkSize).\n\nvalidate_chunk_size(Chunk, ChunkSize) ->\n\tPackedSize = byte_size(Chunk),\n\tcase {PackedSize == ?DATA_CHUNK_SIZE, ChunkSize =< PackedSize andalso ChunkSize > 0} of\n\t\t{false, _} ->\n\t\t\t{error, invalid_packed_size};\n\t\t{true, false} ->\n\t\t\t%% In practice, we would never get here because the merkle proof\n\t\t\t%% validation does not allow ChunkSize to exceed ?DATA_CHUNK_SIZE.\n\t\t\t{error, invalid_chunk_size};\n\t\t_ ->\n\t\t\t{ok, PackedSize}\n\tend.\n\nincrement_buffer_size() ->\n\tets:update_counter(?MODULE, buffer_size, {2, 1}, {buffer_size, 1}).\n\ndecrement_buffer_size() ->\n\tets:update_counter(?MODULE, buffer_size, {2, -1}, {buffer_size, 0}).\n\n%%%===================================================================\n%%% Prometheus metrics\n%%%===================================================================\n\nrecord_buffer_size_metric() ->\n\tcase ets:lookup(?MODULE, buffer_size) of\n\t\t[{_, Size}] ->\n\t\t\tprometheus_gauge:set(packing_buffer_size, Size);\n\t\t_ ->\n\t\t\tok\n\tend.\n\n%% @doc Log actual packings and unpackings\n%% where the StoredPacking does not match the RequestedPacking.\nrecord_packing_request(_Type, RequestedPacking, StoredPacking)\n\t\twhen RequestedPacking == StoredPacking ->\n\tok;\nrecord_packing_request(Type, 
RequestedPacking, StoredPacking) ->\n\tPacking = case Type of\n\t\tunpack -> StoredPacking;\n\t\tunpack_sub_chunk -> StoredPacking;\n\t\tdecipher -> StoredPacking;\n\t\tpack -> RequestedPacking;\n\t\trepack -> RequestedPacking;\n\t\tencipher -> RequestedPacking\n\tend,\n\tprometheus_counter:inc(packing_requests, [Type, packing_atom(Packing)]).\n\t\nexor_replica_2_9_chunk(Chunk, Entropy) ->\n\tiolist_to_binary(exor_replica_2_9_sub_chunks(Chunk, Entropy)).\n\nexor_replica_2_9_sub_chunks(<<>>, <<>>) ->\n\t[];\nexor_replica_2_9_sub_chunks(\n\t\t<< SubChunk:(?COMPOSITE_PACKING_SUB_CHUNK_SIZE)/binary, ChunkRest/binary >>,\n\t\t<< EntropyPart:(?COMPOSITE_PACKING_SUB_CHUNK_SIZE)/binary, EntropyRest/binary >>) ->\n\t[ar_mine_randomx:exor_sub_chunk(SubChunk, EntropyPart)\n\t\t\t| exor_replica_2_9_sub_chunks(ChunkRest, EntropyRest)].\n\nentropy_generation_lock(Key, RewardAddr, BucketEndOffset, SubChunkStartOffset) ->\n\tcase ets:insert_new(?MODULE, {{entropy_generation_lock, Key}}) of\n\t\ttrue ->\n\t\t\tok;\n\t\tfalse ->\n\t\t\ttimer:sleep(100),\n\t\t\tentropy_generation_lock(Key, RewardAddr, BucketEndOffset, SubChunkStartOffset)\n\tend.\n\nentropy_generation_release(Key) ->\n\tets:delete(?MODULE, {entropy_generation_lock, Key}).\n\nupdate_entropy_generation_stats(Key, RewardAddr, BucketEndOffset, SubChunkStartOffset) ->\n\tTab = entropy_generation_stats,\n\tTime = erlang:monotonic_time(millisecond),\n\tets:update_counter(Tab, Key, {2, 1}, {Key, 0, Time}),\n\tprometheus_counter:inc(replica_2_9_entropy_generated, ?REPLICA_2_9_ENTROPY_SIZE),\n\tmaybe_report_redundant_entropy_generation(Key, RewardAddr, BucketEndOffset, SubChunkStartOffset),\n\tremove_outdated_entropy_generation_stats().\n\nmaybe_report_redundant_entropy_generation(Key, RewardAddr, BucketEndOffset, SubChunkStartOffset) ->\n\tTab = entropy_generation_stats,\n\tNow = erlang:monotonic_time(millisecond),\n\t[{_, Count, Time}] = ets:lookup(Tab, Key),\n\tcase Count > 1 of\n\t\ttrue ->\n\t\t\tPartition = ar_node:get_partition_number(BucketEndOffset),\n\t\t\tprometheus_counter:inc(replica_2_9_entropy_stats, [Partition, redundant]),\n\t\t\t?LOG_DEBUG([{event, possibly_redundant_entropy_generation},\n\t\t\t\t\t{reward_addr, ar_util:encode(RewardAddr)},\n\t\t\t\t\t{key, ar_util:encode(Key)},\n\t\t\t\t\t{bucket_end_offset, BucketEndOffset},\n\t\t\t\t\t{sub_chunk_start_offset, SubChunkStartOffset},\n\t\t\t\t\t{count, Count},\n\t\t\t\t\t{seconds_since_first_generation, (Now - Time) / 1_000},\n\t\t\t\t\t{avg_per_second, Count / ((Now - Time) / 1_000)}]);\n\t\tfalse ->\n\t\t\tok\n\tend.\n\nremove_outdated_entropy_generation_stats() ->\n\tTab = entropy_generation_stats,\n\tCursor = ets:first(Tab),\n\tNow = erlang:monotonic_time(millisecond),\n\tcase ets:lookup(Tab, Cursor) of\n\t\t[{_, _, Time}] when Time < Now - ?ENTROPY_GENERATION_STATS_WINDOW_MS ->\n\t\t\tets:delete(Tab, Cursor),\n\t\t\tremove_outdated_entropy_generation_stats();\n\t\t_ ->\n\t\t\tok\n\tend.\n\n%%%===================================================================\n%%% Tests.\n%%%===================================================================\n\npack_test() ->\n\tRoot = crypto:strong_rand_bytes(32),\n\tCases = [\n\t\t{<<1>>, 1, Root},\n\t\t{<<1>>, 2, Root},\n\t\t{<<0>>, 1, crypto:strong_rand_bytes(32)},\n\t\t{<<0>>, 2, crypto:strong_rand_bytes(32)},\n\t\t{<<0>>, 1234234534535, crypto:strong_rand_bytes(32)},\n\t\t{crypto:strong_rand_bytes(2), 234134234, crypto:strong_rand_bytes(32)},\n\t\t{crypto:strong_rand_bytes(3), 333, 
crypto:strong_rand_bytes(32)},\n\t\t{crypto:strong_rand_bytes(15), 9999999999999999999999999999,\n\t\t\t\tcrypto:strong_rand_bytes(32)},\n\t\t{crypto:strong_rand_bytes(16), 16, crypto:strong_rand_bytes(32)},\n\t\t{crypto:strong_rand_bytes(256 * 1024), 100000000000000, crypto:strong_rand_bytes(32)},\n\t\t{crypto:strong_rand_bytes(256 * 1024 - 1), 100000000000000,\n\t\t\t\tcrypto:strong_rand_bytes(32)}\n\t],\n\tPackingState = init_packing_state(),\n\tPackedList = lists:flatten(lists:map(\n\t\tfun({Chunk, Offset, TXRoot}) ->\n\t\t\tECDSA = ar_wallet:to_address(ar_wallet:new({ecdsa, secp256k1})),\n\t\t\tEDDSA = ar_wallet:to_address(ar_wallet:new({eddsa, ed25519})),\n\t\t\t{ok, Chunk, already_packed} = pack(unpacked, Offset, TXRoot, Chunk,\n\t\t\t\t\t\tPackingState, external),\n\t\t\t{ok, Packed, was_not_already_packed} = pack(spora_2_5, Offset, TXRoot, Chunk,\n\t\t\t\t\t\tPackingState, external),\n\t\t\t{ok, Packed2, was_not_already_packed} = pack({spora_2_6, ECDSA}, Offset, TXRoot,\n\t\t\t\t\tChunk, PackingState, external),\n\t\t\t{ok, Packed3, was_not_already_packed} = pack({spora_2_6, EDDSA}, Offset, TXRoot,\n\t\t\t\t\tChunk, PackingState, external),\n\t\t\t{ok, Packed4, was_not_already_packed} = pack({composite, ECDSA, 1}, Offset, TXRoot,\n\t\t\t\t\tChunk, PackingState, external),\n\t\t\t{ok, Packed5, was_not_already_packed} = pack({composite, EDDSA, 1}, Offset, TXRoot,\n\t\t\t\t\tChunk, PackingState, external),\n\t\t\t{ok, Packed6, was_not_already_packed} = pack({composite, ECDSA, 2}, Offset, TXRoot,\n\t\t\t\t\tChunk, PackingState, external),\n\t\t\t{ok, Packed7, was_not_already_packed} = pack({composite, EDDSA, 2}, Offset, TXRoot,\n\t\t\t\t\tChunk, PackingState, external),\n\t\t\t?assertNotEqual(Packed, Chunk),\n\t\t\t?assertNotEqual(Packed2, Chunk),\n\t\t\t?assertNotEqual(Packed3, Chunk),\n\t\t\t?assertNotEqual(Packed4, Chunk),\n\t\t\t?assertNotEqual(Packed5, Chunk),\n\t\t\t?assertNotEqual(Packed6, Chunk),\n\t\t\t?assertNotEqual(Packed7, Chunk),\n\t\t\t?assertEqual({ok, Packed, already_unpacked},\n\t\t\t\t\tunpack(unpacked, Offset, TXRoot, Packed, byte_size(Chunk), PackingState,\n\t\t\t\t\t\t\tinternal)),\n\t\t\t?assertEqual({ok, Chunk, was_not_already_unpacked},\n\t\t\t\t\tunpack(spora_2_5, Offset, TXRoot, Packed, byte_size(Chunk), PackingState,\n\t\t\t\t\t\t\tinternal)),\n\t\t\t?assertEqual({ok, Chunk, was_not_already_unpacked},\n\t\t\t\t\tunpack({spora_2_6, ECDSA}, Offset, TXRoot, Packed2, byte_size(Chunk),\n\t\t\t\t\t\t\tPackingState, internal)),\n\t\t\t?assertEqual({ok, Chunk, was_not_already_unpacked},\n\t\t\t\t\tunpack({spora_2_6, EDDSA}, Offset, TXRoot, Packed3, byte_size(Chunk),\n\t\t\t\t\t\t\tPackingState, internal)),\n\t\t\t?assertEqual({ok, Chunk, was_not_already_unpacked},\n\t\t\t\t\tunpack({composite, ECDSA, 1}, Offset, TXRoot, Packed4, byte_size(Chunk),\n\t\t\t\t\t\t\tPackingState, internal)),\n\t\t\t?assertEqual({ok, Chunk, was_not_already_unpacked},\n\t\t\t\t\tunpack({composite, EDDSA, 1}, Offset, TXRoot, Packed5, byte_size(Chunk),\n\t\t\t\t\t\t\tPackingState, internal)),\n\t\t\t?assertEqual({ok, Chunk, was_not_already_unpacked},\n\t\t\t\t\tunpack({composite, ECDSA, 2}, Offset, TXRoot, Packed6, byte_size(Chunk),\n\t\t\t\t\t\t\tPackingState, internal)),\n\t\t\t?assertEqual({ok, Chunk, was_not_already_unpacked},\n\t\t\t\t\tunpack({composite, EDDSA, 2}, Offset, TXRoot, Packed7, byte_size(Chunk),\n\t\t\t\t\t\t\tPackingState, internal)),\n\t\t\t[Packed, Packed2, Packed3, Packed4, Packed5, Packed6, Packed7]\n\t\tend,\n\t\tCases\n\t)),\n\t?assertEqual(length(PackedList), 
sets:size(sets:from_list(PackedList))).\n"
  },
  {
    "path": "apps/arweave/src/ar_packing_sup.erl",
    "content": "-module(ar_packing_sup).\n\n-behaviour(supervisor).\n\n-export([start_link/0]).\n\n-export([init/1]).\n\n-include_lib(\"arweave/include/ar_sup.hrl\").\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nstart_link() ->\n\tsupervisor:start_link({local, ?MODULE}, ?MODULE, []).\n\n%% ===================================================================\n%% Supervisor callbacks.\n%% ===================================================================\n\ninit([]) ->\n    ets:new(ar_packing_server, [set, public, named_table]),\n\tets:new(ar_entropy_cache, [set, public, named_table]),\n\tets:new(ar_entropy_cache_ordered_keys, [ordered_set, public, named_table]),\n\n\t{ok, {{one_for_one, 5, 10}, [\n\t\t?CHILD(ar_packing_server, worker)\n\t]}}.\n"
  },
  {
    "path": "apps/arweave/src/ar_patricia_tree.erl",
    "content": "%%% @doc An implementation of a tree closely resembling a merkle patricia tree.\n-module(ar_patricia_tree).\n\n-export([new/0, insert/3, get/2, size/1, compute_hash/2, foldr/3, is_empty/1, from_proplist/1,\n\t\tdelete/2, get_range/2, get_range/3]).\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Return a new tree.\nnew() ->\n\t#{ root => {no_parent, gb_sets:new(), no_hash, no_prefix, no_value}, size => 0 }.\n\n%% @doc Insert the given value under the given binary key.\ninsert(Key, Value, Tree) when is_binary(Key) ->\n\tinsert(Key, Value, Tree, 1, root).\n\n%% @doc Get the value stored under the given key or not_found.\nget(Key, Tree) when is_binary(Key) ->\n\tcase get(Key, Tree, 1) of\n\t\t{_, {_, _, _, _, {v, Value}}} ->\n\t\t\tValue;\n\t\tnot_found ->\n\t\t\tnot_found\n\tend;\nget(_Key, _Tree) ->\n\tnot_found.\n\n%% @doc Return the number of values in the tree.\nsize(Tree) ->\n\tmaps:get(size, Tree).\n\n%% @doc Compute the root hash by recursively hashing the tree values.\n%% Each key value pair is hashed via the provided hash function. The hashes of the siblings\n%% are combined using ar_deep_hash:hash/1. The keys are traversed in the alphabetical order.\ncompute_hash(#{ size := 0 } = Tree, _HashFun) ->\n\t{<<>>, Tree, #{}};\ncompute_hash(Tree, HashFun) ->\n\tcompute_hash(Tree, HashFun, root, #{}).\n\n%% @doc Traverse the keys in the reversed alphabetical order iteratively applying\n%% the given function of a key, a value, and an accumulator.\nfoldr(Fun, Acc, Tree) ->\n\tcase is_empty(Tree) of\n\t\ttrue ->\n\t\t\tAcc;\n\t\tfalse ->\n\t\t\tfoldr(Fun, Acc, Tree, root)\n\tend.\n\n%% @doc Return true if the tree stores no values.\nis_empty(Tree) ->\n\tmaps:get(size, Tree) == 0.\n\n%% @doc Create a tree from the given list of {Key, Value} pairs.\nfrom_proplist(Proplist) ->\n\tlists:foldl(\n\t\tfun({Key, Value}, Acc) -> ar_patricia_tree:insert(Key, Value, Acc) end,\n\t\tnew(),\n\t\tProplist\n\t).\n\n%% @doc Delete the given key.\ndelete(Key, Tree) ->\n\tdelete(Key, Tree, 1).\n\n%% @doc Return the list of up to Count key-value tuples collected by traversing the keys\n%% in the alphabetical order. The keys in the returned list are sorted in the descending order.\nget_range(Count, Tree) ->\n\tIterator = iterator(Tree),\n\tget_range(Iterator, Count, 0, []).\n\n%% @doc Return the list of up to Count key-value tuples collected by traversing the keys\n%% in the alphabetical order starting from the Start key. The keys in the returned list\n%% are sorted in the descending order. 
If Start is not a key or Count is not positive,\n%% return an empty list.\nget_range(Start, Count, Tree) when is_binary(Start) ->\n\tIterator = iterator_from(Start, Tree),\n\tget_range(Iterator, Count, 0, []);\nget_range(_, _, _) ->\n\t[].\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\ninsert(Key, Value, Tree, Level, Parent) ->\n\t{KeyPrefix, KeySuffix} = split_by_pos(Key, Level),\n\tcase maps:get(KeyPrefix, Tree, not_found) of\n\t\t{NodeParent, NodeChildren, NodeHash, NodeSuffix, NodeValue} ->\n\t\t\t{Common, KeySuffix2, NodeSuffix2} = join(KeySuffix, NodeSuffix),\n\t\t\tcase {KeySuffix == NodeSuffix, Common == KeySuffix, Common == NodeSuffix} of\n\t\t\t\t{true, _, _} ->\n\t\t\t\t\tSize = maps:get(size, Tree),\n\t\t\t\t\tSize2 =\n\t\t\t\t\t\tcase NodeValue of\n\t\t\t\t\t\t\tno_value ->\n\t\t\t\t\t\t\t\tSize + 1;\n\t\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t\tSize\n\t\t\t\t\t\tend,\n\t\t\t\t\tUpdatedNode = {NodeParent, NodeChildren, no_hash, NodeSuffix, {v, Value}},\n\t\t\t\t\tinvalidate_hash(NodeParent,\n\t\t\t\t\t\t\tTree#{ KeyPrefix => UpdatedNode, size => Size2 });\n\t\t\t\t{_, _, true} when KeySuffix > NodeSuffix ->\n\t\t\t\t\tinsert(Key, Value, Tree, Level + byte_size(NodeSuffix) + 1, KeyPrefix);\n\t\t\t\t{_, true, _} when KeySuffix < NodeSuffix ->\n\t\t\t\t\t{Head, NodeSuffix3} = strip_head(NodeSuffix2),\n\t\t\t\t\tUpdatedNodeKey = << KeyPrefix/binary, Common/binary, Head/binary >>,\n\t\t\t\t\tPivotChildren = gb_sets:from_list([UpdatedNodeKey]),\n\t\t\t\t\tPivotNode = {NodeParent, PivotChildren, no_hash, KeySuffix, {v, Value}},\n\t\t\t\t\tUpdatedNode = {KeyPrefix, NodeChildren, NodeHash, NodeSuffix3, NodeValue},\n\t\t\t\t\tSize = maps:get(size, Tree),\n\t\t\t\t\tTree2 = Tree#{\n\t\t\t\t\t\tKeyPrefix => PivotNode,\n\t\t\t\t\t\tUpdatedNodeKey => UpdatedNode,\n\t\t\t\t\t\tsize => Size + 1\n\t\t\t\t\t},\n\t\t\t\t\tTree3 = update_children_parent(UpdatedNodeKey, NodeChildren, Tree2),\n\t\t\t\t\tinvalidate_hash(NodeParent, Tree3);\n\t\t\t\t{false, false, false} ->\n\t\t\t\t\t{KeyHead, KeySuffix3} = strip_head(KeySuffix2),\n\t\t\t\t\tNewNodeKey = << KeyPrefix/binary, Common/binary, KeyHead/binary >>,\n\t\t\t\t\tNewNode = {KeyPrefix, gb_sets:new(), no_hash, KeySuffix3, {v, Value}},\n\t\t\t\t\t{NodeKeyHead, NodeSuffix3} = strip_head(NodeSuffix2),\n\t\t\t\t\tUpdatedNodeKey = << KeyPrefix/binary, Common/binary, NodeKeyHead/binary >>,\n\t\t\t\t\tUpdatedNode = {KeyPrefix, NodeChildren, NodeHash, NodeSuffix3, NodeValue},\n\t\t\t\t\tPivotChildren = gb_sets:from_list([NewNodeKey, UpdatedNodeKey]),\n\t\t\t\t\tPivotNode = {NodeParent, PivotChildren, no_hash, Common, no_value},\n\t\t\t\t\tSize = maps:get(size, Tree),\n\t\t\t\t\tTree2 = Tree#{\n\t\t\t\t\t\tNewNodeKey => NewNode,\n\t\t\t\t\t\tUpdatedNodeKey => UpdatedNode,\n\t\t\t\t\t\tKeyPrefix => PivotNode,\n\t\t\t\t\t\tsize => Size + 1\n\t\t\t\t\t},\n\t\t\t\t\tTree3 = update_children_parent(UpdatedNodeKey, NodeChildren, Tree2),\n\t\t\t\t\tinvalidate_hash(NodeParent, Tree3)\n\t\t\tend;\n\t\tnot_found ->\n\t\t\tNewNode = {Parent, gb_sets:new(), no_hash, KeySuffix, {v, Value}},\n\t\t\t{NextParent, Children, _Hash, NextSuffix, ParentValue} = maps:get(Parent, Tree),\n\t\t\tUpdatedChildren = gb_sets:insert(KeyPrefix, Children),\n\t\t\tSize = maps:get(size, Tree),\n\t\t\tTree2 = Tree#{\n\t\t\t\tKeyPrefix => NewNode,\n\t\t\t\tParent => {NextParent, UpdatedChildren, no_hash, NextSuffix, ParentValue},\n\t\t\t\tsize => Size + 
1\n\t\t\t},\n\t\t\tinvalidate_hash(NextParent, Tree2)\n\tend.\n\nsplit_by_pos(<<>>, _Pos) ->\n\t{<<>>, <<>>};\nsplit_by_pos(Binary, Pos) ->\n\t{binary:part(Binary, {0, Pos}), binary:part(Binary, {Pos, byte_size(Binary) - Pos})}.\n\njoin(Binary1, Binary2) ->\n\t%% Return the longest common prefix and the diverged suffixes of the two binaries.\n\tPrefixLen = binary:longest_common_prefix([Binary1, Binary2]),\n\tPrefix = binary:part(Binary1, {0, PrefixLen}),\n\tSuffix1 = binary:part(Binary1, {PrefixLen, byte_size(Binary1) - PrefixLen}),\n\tSuffix2 = binary:part(Binary2, {PrefixLen, byte_size(Binary2) - PrefixLen}),\n\t{Prefix, Suffix1, Suffix2}.\n\nupdate_children_parent(Key, Children, Tree) ->\n\tgb_sets:fold(\n\t\tfun(ChildKey, Acc) ->\n\t\t\t{_, C, H, S, V} = maps:get(ChildKey, Acc),\n\t\t\tChildNode2 = {Key, C, H, S, V},\n\t\t\tAcc#{ ChildKey => ChildNode2 }\n\t\tend,\n\t\tTree,\n\t\tChildren\n\t).\n\ninvalidate_hash(no_parent, Tree) ->\n\tTree;\ninvalidate_hash(Key, Tree) ->\n\t{Parent, Children, _Hash, Suffix, Value} = maps:get(Key, Tree),\n\tInvalidatedHashNode = {Parent, Children, no_hash, Suffix, Value},\n\tinvalidate_hash(Parent, Tree#{ Key => InvalidatedHashNode }).\n\nstrip_head(Binary) ->\n\t{binary:part(Binary, {0, 1}), binary:part(Binary, {1, byte_size(Binary) - 1})}.\n\nget(Key, Tree, Level) ->\n\t{KeyPrefix, KeySuffix} = split_by_pos(Key, Level),\n\tcase maps:get(KeyPrefix, Tree, not_found) of\n\t\tnot_found ->\n\t\t\tnot_found;\n\t\t{_, _, _, Suffix, MaybeValue} = NodeData ->\n\t\t\tLen = binary:longest_common_prefix([KeySuffix, Suffix]),\n\t\t\tSuffixSize = byte_size(Suffix),\n\t\t\tcase Len < SuffixSize of\n\t\t\t\ttrue ->\n\t\t\t\t\tnot_found;\n\t\t\t\tfalse ->\n\t\t\t\t\tcase KeySuffix == Suffix of\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tget(Key, Tree, Level + SuffixSize + 1);\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\tcase MaybeValue of\n\t\t\t\t\t\t\t\tno_value ->\n\t\t\t\t\t\t\t\t\tnot_found;\n\t\t\t\t\t\t\t\t{v, _Value} ->\n\t\t\t\t\t\t\t\t\t{KeyPrefix, NodeData}\n\t\t\t\t\t\t\tend\n\t\t\t\t\tend\n\t\t\tend\n\tend.\n\ncompute_hash(Tree, HashFun, KeyPrefix, UpdateMap) ->\n\t{Parent, Children, Hash, Suffix, MaybeValue} = maps:get(KeyPrefix, Tree),\n\tcase Hash of\n\t\tno_hash ->\n\t\t\tcase gb_sets:is_empty(Children) of\n\t\t\t\ttrue ->\n\t\t\t\t\t{v, Value} = MaybeValue,\n\t\t\t\t\tKey = << KeyPrefix/binary, Suffix/binary >>,\n\t\t\t\t\tNewHash = HashFun(Key, Value),\n\t\t\t\t\tNewTree = Tree#{\n\t\t\t\t\t\tKeyPrefix => {Parent, gb_sets:new(), NewHash, Suffix, {v, Value}}\n\t\t\t\t\t},\n\t\t\t\t\tUpdateMap2 = maps:put({NewHash, KeyPrefix}, {Key, Value}, UpdateMap),\n\t\t\t\t\t{NewHash, NewTree, UpdateMap2};\n\t\t\t\tfalse ->\n\t\t\t\t\t{Hashes, UpdatedTree, UpdateMap2} = gb_sets_foldr(\n\t\t\t\t\t\tfun(Child, {HashesAcc, TreeAcc, UpdateMapAcc}) ->\n\t\t\t\t\t\t\t{ChildHash, TreeAcc2, UpdateMapAcc2} = compute_hash(TreeAcc,\n\t\t\t\t\t\t\t\t\tHashFun, Child, UpdateMapAcc),\n\t\t\t\t\t\t\t{[{ChildHash, Child} | HashesAcc], TreeAcc2, UpdateMapAcc2}\n\t\t\t\t\t\tend,\n\t\t\t\t\t\t{[], Tree, UpdateMap},\n\t\t\t\t\t\tChildren\n\t\t\t\t\t),\n\t\t\t\t\t{NewHash, UpdateMap3} =\n\t\t\t\t\t\tcase MaybeValue of\n\t\t\t\t\t\t\t{v, Value} ->\n\t\t\t\t\t\t\t\tKey = << KeyPrefix/binary, Suffix/binary >>,\n\t\t\t\t\t\t\t\tNewHash2 = HashFun(Key, Value),\n\t\t\t\t\t\t\t\tHashes2 = [H || {H, _} <- Hashes],\n\t\t\t\t\t\t\t\tNewHash3 = ar_deep_hash:hash([NewHash2 | Hashes2]),\n\t\t\t\t\t\t\t\t{NewHash3, UpdateMap2#{ {NewHash2, KeyPrefix} => {Key, Value},\n\t\t\t\t\t\t\t\t\t\t{NewHash3, KeyPrefix} => 
[{NewHash2, KeyPrefix}\n\t\t\t\t\t\t\t\t\t\t\t\t| Hashes] }};\n\t\t\t\t\t\t\tno_value ->\n\t\t\t\t\t\t\t\tcase Hashes of\n\t\t\t\t\t\t\t\t\t[{SingleHash, _}] ->\n\t\t\t\t\t\t\t\t\t\t{SingleHash, UpdateMap2#{\n\t\t\t\t\t\t\t\t\t\t\t\t{SingleHash, KeyPrefix} => Hashes }};\n\t\t\t\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t\t\t\tHashes2 = [H || {H, _} <- Hashes],\n\t\t\t\t\t\t\t\t\t\tNewHash2 = ar_deep_hash:hash(Hashes2),\n\t\t\t\t\t\t\t\t\t\t{NewHash2, UpdateMap2#{\n\t\t\t\t\t\t\t\t\t\t\t\t{NewHash2, KeyPrefix} => Hashes }}\n\t\t\t\t\t\t\t\tend\n\t\t\t\t\t\tend,\n\t\t\t\t\t{NewHash, UpdatedTree#{\n\t\t\t\t\t\tKeyPrefix => {Parent, Children, NewHash, Suffix, MaybeValue}\n\t\t\t\t\t}, UpdateMap3}\n\t\t\tend;\n\t\t_ ->\n\t\t\t{Hash, Tree, UpdateMap}\n\tend.\n\nfoldr(Fun, Acc, Tree, KeyPrefix) ->\n\t{_, Children, _, Suffix, MaybeValue} = maps:get(KeyPrefix, Tree),\n\tcase gb_sets:is_empty(Children) of\n\t\ttrue ->\n\t\t\t{v, Value} = MaybeValue,\n\t\t\tKey = << KeyPrefix/binary, Suffix/binary >>,\n\t\t\tFun(Key, Value, Acc);\n\t\tfalse ->\n\t\t\tAcc2 = gb_sets_foldr(\n\t\t\t\tfun(Child, ChildrenAcc) ->\n\t\t\t\t\tfoldr(Fun, ChildrenAcc, Tree, Child)\n\t\t\t\tend,\n\t\t\t\tAcc,\n\t\t\t\tChildren\n\t\t\t),\n\t\t\tcase MaybeValue of\n\t\t\t\t{v, Value} ->\n\t\t\t\t\tKey = << KeyPrefix/binary, Suffix/binary >>,\n\t\t\t\t\tFun(Key, Value, Acc2);\n\t\t\t\t_ ->\n\t\t\t\t\tAcc2\n\t\t\tend\n\tend.\n\ngb_sets_foldr(Fun, Acc, G) ->\n\tcase gb_sets:is_empty(G) of\n\t\ttrue ->\n\t\t\tAcc;\n\t\tfalse ->\n\t\t\t{Largest, G2} = gb_sets:take_largest(G),\n\t\t\tgb_sets_foldr(Fun, Fun(Largest, Acc), G2)\n\tend.\n\ndelete(Key, Tree, Level) ->\n\t{KeyPrefix, KeySuffix} = split_by_pos(Key, Level),\n\tcase maps:get(KeyPrefix, Tree, not_found) of\n\t\tnot_found ->\n\t\t\tTree;\n\t\t{Parent, Children, _Hash, Suffix, MaybeValue} ->\n\t\t\tLen = binary:longest_common_prefix([KeySuffix, Suffix]),\n\t\t\tSuffixSize = byte_size(Suffix),\n\t\t\tcase Len < SuffixSize of\n\t\t\t\ttrue ->\n\t\t\t\t\tTree;\n\t\t\t\tfalse ->\n\t\t\t\t\tcase KeySuffix == Suffix of\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tdelete(Key, Tree, Level + SuffixSize + 1);\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\tcase MaybeValue of\n\t\t\t\t\t\t\t\tno_value ->\n\t\t\t\t\t\t\t\t\tTree;\n\t\t\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t\t\tSize = maps:get(size, Tree),\n\t\t\t\t\t\t\t\t\tTree2 = Tree#{ size => Size - 1 },\n\t\t\t\t\t\t\t\t\tcase gb_sets:is_empty(Children) of\n\t\t\t\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\t\t\t\tdelete2(KeyPrefix, Parent, Tree2);\n\t\t\t\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t\t\t\tNode2 = {Parent, Children, no_hash, Suffix,\n\t\t\t\t\t\t\t\t\t\t\t\t\tno_value},\n\t\t\t\t\t\t\t\t\t\t\tinvalidate_hash(Parent,\n\t\t\t\t\t\t\t\t\t\t\t\t\tTree2#{ KeyPrefix => Node2 })\n\t\t\t\t\t\t\t\t\tend\n\t\t\t\t\t\t\tend\n\t\t\t\t\tend\n\t\t\tend\n\tend.\n\ndelete2(Key, Parent, Tree) ->\n\tTree2 = maps:remove(Key, Tree),\n\t{ParentParent, ParentChildren, _Hash, Suffix, ParentValue} = maps:get(Parent, Tree),\n\tParentChildren2 = gb_sets:del_element(Key, ParentChildren),\n\tTree3 = Tree2#{ Parent => {ParentParent, ParentChildren2, no_hash, Suffix, ParentValue} },\n\tcase {Parent == root, gb_sets:is_empty(ParentChildren2), ParentValue} of\n\t\t{false, true, no_value} ->\n\t\t\tdelete2(Parent, ParentParent, Tree3);\n\t\t_ ->\n\t\t\tinvalidate_hash(ParentParent, Tree3)\n\tend.\n\niterator(Tree) ->\n\titerator(Tree, root).\n\niterator(Tree, Key) ->\n\t{_, Children, _, _, MaybeValue} = NodeData = maps:get(Key, Tree),\n\tcase MaybeValue of\n\t\t{v, _Value} ->\n\t\t\t{{Key, NodeData}, 
Tree};\n\t\tno_value ->\n\t\t\tcase gb_sets:is_empty(Children) of\n\t\t\t\ttrue ->\n\t\t\t\t\tnone;\n\t\t\t\tfalse ->\n\t\t\t\t\titerator(Tree, gb_sets:smallest(Children))\n\t\t\tend\n\tend.\n\niterator_from(Start, Tree) ->\n\tcase get(Start, Tree, 1) of\n\t\tnot_found ->\n\t\t\tnone;\n\t\t{Prefix, NodeData} ->\n\t\t\t{{Prefix, NodeData}, Tree}\n\tend.\n\nget_range(_Iterator, Count, Count, List) ->\n\tList;\nget_range(Iterator, Count, Got, List) ->\n\tcase next(Iterator) of\n\t\tnone ->\n\t\t\tList;\n\t\t{{Key, Value}, UpdatedIterator} ->\n\t\t\tget_range(UpdatedIterator, Count, Got + 1, [{Key, Value} | List])\n\tend.\n\nnext({{Prefix, {Parent, Children, _Hash, Suffix, {v, Value}}}, Tree}) ->\n\tKey = << Prefix/binary, Suffix/binary >>,\n\t{{Key, Value}, get_next_start_from_children(Prefix, Parent, Children, Tree)};\nnext(none) ->\n\tnone.\n\nget_next_start_from_children(Key, Parent, Children, Tree) ->\n\tNextChild =\n\t\tcase gb_sets:is_empty(Children) of\n\t\t\ttrue ->\n\t\t\t\tnone;\n\t\t\tfalse ->\n\t\t\t\tChild = gb_sets:smallest(Children),\n\t\t\t\t{Child, maps:get(Child, Tree)}\n\t\tend,\n\tcase NextChild of\n\t\tnone ->\n\t\t\tget_next_start_from_sibling(Key, Parent, Tree);\n\t\t_ ->\n\t\t\t{ChildKey, {_, ChildChildren, _, _, MaybeValue}} = NextChild,\n\t\t\tcase MaybeValue of\n\t\t\t\tno_value ->\n\t\t\t\t\tget_next_start_from_children(ChildKey, Key, ChildChildren, Tree);\n\t\t\t\t{v, _} ->\n\t\t\t\t\t{NextChild, Tree}\n\t\t\tend\n\tend.\n\nget_next_start_from_sibling(root, no_parent, _Tree) ->\n\tnone;\nget_next_start_from_sibling(Key, Parent, Tree) ->\n\t{ParentParent, Children, _, _, _} = maps:get(Parent, Tree),\n\tIterator = gb_sets:iterator_from(Key, Children),\n\tStart =\n\t\tcase gb_sets:next(Iterator) of\n\t\t\tnone ->\n\t\t\t\tnone;\n\t\t\t{Key, UpdatedIterator} ->\n\t\t\t\tgb_sets:next(UpdatedIterator);\n\t\t\tNext ->\n\t\t\t\tNext\n\t\tend,\n\tcase Start of\n\t\tnone ->\n\t\t\tget_next_start_from_sibling(Parent, ParentParent, Tree);\n\t\t{NextSiblingKey, _} ->\n\t\t\tNextSibling = maps:get(NextSiblingKey, Tree),\n\t\t\t{_, NextSiblingChildren, _, _, MaybeValue} = NextSibling,\n\t\t\tcase MaybeValue of\n\t\t\t\tno_value ->\n\t\t\t\t\tget_next_start_from_children(NextSiblingKey, Key, NextSiblingChildren,\n\t\t\t\t\t\t\tTree);\n\t\t\t\t{v, _} ->\n\t\t\t\t\t{{NextSiblingKey, NextSibling}, Tree}\n\t\t\tend\n\tend.\n\n%%%===================================================================\n%%% Tests.\n%%%===================================================================\n\ntrie_test() ->\n\tT1 = new(),\n\t?assertEqual(not_found, get(<<\"aaa\">>, T1)),\n\t?assertEqual(true, is_empty(T1)),\n\tHashFun = fun(K, V) -> crypto:hash(sha256, << K/binary, (term_to_binary(V))/binary >>) end,\n\t?assertEqual(<<>>, element(1, compute_hash(T1, HashFun))),\n\t?assertEqual(true, is_empty(delete(<<\"a\">>, T1))),\n\t?assertEqual(0, ar_patricia_tree:size(T1)),\n\t?assertEqual([], get_range(1, T1)),\n\t?assertEqual([], get_range(<<>>, 1, T1)),\n\t?assertEqual([], get_range(0, T1)),\n\t?assertEqual([], get_range(<<>>, 0, T1)),\n\t?assertEqual([], get_range(<<\"aaa\">>, 10, T1)),\n\t%% a -> a -> 1\n\t%%      b -> 1\n\tT1_2 = insert(<<\"ab\">>, 1, insert(<<\"aa\">>, 1, T1)),\n\t?assertEqual(not_found, get(<<\"a\">>, T1_2)),\n\tT1_3 = delete(<<\"ab\">>, delete(<<\"aa\">>, T1_2)),\n\t?assertEqual(true, is_empty(T1_3)),\n\t?assertEqual(<<>>, element(1, compute_hash(T1_3, HashFun))),\n\t?assertEqual(true, is_empty(delete(<<\"a\">>, T1_3))),\n\t?assertEqual(not_found, get(<<\"a\">>, T1_3)),\n\t%% aaa 
-> 1\n\tT2 = insert(<<\"aaa\">>, 1, T1_3),\n\t?assertEqual(false, is_empty(T2)),\n\t?assertEqual(1, ar_patricia_tree:size(T2)),\n\t{H2, T2_2, _} = compute_hash(T2, HashFun),\n\t{H2_2, _, _} = compute_hash(T2_2, HashFun),\n\t?assertEqual(H2, H2_2),\n\t?assertEqual(1, get(<<\"aaa\">>, T2)),\n\t?assertEqual([], get_range(<<>>, 1, T2)),\n\t?assertEqual([{<<\"aaa\">>, 1}], get_range(1, T2)),\n\t?assertEqual([{<<\"aaa\">>, 1}], get_range(<<\"aaa\">>, 1, T2)),\n\t%% aa -> a -> 1\n\t%%       b -> 2\n\tT3 = insert(<<\"aab\">>, 2, T2),\n\t?assertEqual(2, ar_patricia_tree:size(T3)),\n\t{H3, _, _} = compute_hash(T3, HashFun),\n\t?assertNotEqual(H2, H3),\n\t{H3_2, _, _} = compute_hash(insert(<<\"aaa\">>, 1, insert(<<\"aab\">>, 2, new())), HashFun),\n\t?assertEqual(H3, H3_2),\n\t{H3_3, _, _} =\n\t\tcompute_hash(\n\t\t\tinsert(<<\"aaa\">>, 1, insert(<<\"aab\">>, 2, insert(<<\"a\">>, 3, new()))),\n\t\t\tHashFun\n\t\t),\n\t{H3_4, _, _} = compute_hash(insert(<<\"a\">>, 3, T3), HashFun),\n\t?assertEqual(H3_3, H3_4),\n\t?assertEqual(1, get(<<\"aaa\">>, T3)),\n\t?assertEqual(2, get(<<\"aab\">>, T3)),\n\t?assertEqual([{<<\"aaa\">>, 1}], get_range(<<\"aaa\">>, 1, T3)),\n\t?assertEqual([{<<\"aaa\">>, 1}], get_range(1, T3)),\n\t?assertEqual([{<<\"aab\">>, 2}, {<<\"aaa\">>, 1}], get_range(<<\"aaa\">>, 2, T3)),\n\t?assertEqual([{<<\"aab\">>, 2}, {<<\"aaa\">>, 1}], get_range(2, T3)),\n\t?assertEqual([{<<\"aab\">>, 2}, {<<\"aaa\">>, 1}], get_range(<<\"aaa\">>, 20, T3)),\n\t?assertEqual([{<<\"aab\">>, 2}, {<<\"aaa\">>, 1}], get_range(20, T3)),\n\t?assertEqual([], get_range(<<\"a\">>, 2, T3)),\n\t?assertEqual([], get_range(<<\"aa\">>, 2, T3)),\n\t?assertEqual([{<<\"aab\">>, 2}], get_range(<<\"aab\">>, 2, T3)),\n\t?assertEqual([], get_range(<<\"aac\">>, 2, T3)),\n\t?assertEqual([], get_range(<<\"b\">>, 2, T3)),\n\tT4 = insert(<<\"aab\">>, 3, T3),\n\t?assertEqual(2, ar_patricia_tree:size(T4)),\n\t{H4, _, _} = compute_hash(T4, HashFun),\n\t?assertNotEqual(H3, H4),\n\t?assertEqual(1, get(<<\"aaa\">>, T4)),\n\t?assertEqual(3, get(<<\"aab\">>, T4)),\n\t%% a -> a -> a -> 1\n\t%%           b -> 3\n\t%%      b -> 2\n\tT5 = insert(<<\"ab\">>, 2, T4),\n\t?assertEqual(3, ar_patricia_tree:size(T5)),\n\t?assertEqual(1, gb_sets:size(element(2, maps:get(root, T5)))),\n\t{H5, _, _} = compute_hash(T5, HashFun),\n\t?assertNotEqual(H4, H5),\n\t{H5_2, _, _} =\n\t\tcompute_hash(\n\t\t\tinsert(<<\"aab\">>, 3, insert(<<\"aaa\">>, 1, insert(<<\"ab\">>, 2, new()))),\n\t\t\tHashFun\n\t\t),\n\t?assertEqual(H5, H5_2),\n\t{_H5_3, T5_2, _} = compute_hash(insert(<<\"aaa\">>, 1, new()), HashFun),\n\t{_H5_4, T5_3, _} = compute_hash(insert(<<\"ab\">>, 2, T5_2), HashFun),\n\t{H5_5, _T5_4, _} = compute_hash(insert(<<\"aab\">>, 3, T5_3), HashFun),\n\t?assertEqual(H5, H5_5),\n\t?assertEqual(1, get(<<\"aaa\">>, T5)),\n\t?assertEqual(3, get(<<\"aab\">>, T5)),\n\t?assertEqual(2, get(<<\"ab\">>, T5)),\n\t?assertEqual([{<<\"ab\">>, 2}, {<<\"aab\">>, 3}], get_range(<<\"aab\">>, 20, T5)),\n\t?assertEqual([{<<\"aab\">>, 3}, {<<\"aaa\">>, 1}], get_range(2, T5)),\n\t%% a -> a -> a -> 1\n\t%%           b -> 3\n\t%%      b -> 2\n\t%%           c -> 4\n\tT6 = insert(<<\"abc\">>, 4, T5),\n\t?assertEqual(4, ar_patricia_tree:size(T6)),\n\t?assertEqual(1, gb_sets:size(element(2, maps:get(root, T6)))),\n\t?assertEqual(1, get(<<\"aaa\">>, T6)),\n\t?assertEqual(3, get(<<\"aab\">>, T6)),\n\t?assertEqual(2, get(<<\"ab\">>, T6)),\n\t?assertEqual(4, get(<<\"abc\">>, T6)),\n\t?assertEqual([{<<\"abc\">>, 4}, {<<\"ab\">>, 2}, {<<\"aab\">>, 3}],\n\t\t\tget_range(<<\"aab\">>, 20, 
T6)),\n\t?assertEqual([{<<\"abc\">>, 4}], get_range(<<\"abc\">>, 20, T6)),\n\t?assertEqual([{<<\"abc\">>, 4}, {<<\"ab\">>, 2}], get_range(<<\"ab\">>, 20, T6)),\n\t?assertEqual(\n\t\t[{<<\"abc\">>, 4}, {<<\"ab\">>, 2}, {<<\"aab\">>, 3}, {<<\"aaa\">>, 1}],\n\t\tget_range(20, T6)\n\t),\n\t%% a -> a -> a -> 1\n\t%%           b -> 3\n\t%%      b -> 2\n\t%%           c -> 4\n\t%% bcdefj -> 4\n\tT7 = insert(<<\"bcdefj\">>, 4, T6),\n\t?assertEqual(5, ar_patricia_tree:size(T7)),\n\t?assertEqual(2, gb_sets:size(element(2, maps:get(root, T7)))),\n\t?assertEqual(1, get(<<\"aaa\">>, T7)),\n\t?assertEqual(3, get(<<\"aab\">>, T7)),\n\t?assertEqual(4, get(<<\"abc\">>, T7)),\n\t?assertEqual(4, get(<<\"bcdefj\">>, T7)),\n\t?assertEqual([{<<\"bcdefj\">>, 4}, {<<\"abc\">>, 4}, {<<\"ab\">>, 2}],\n\t\t\tget_range(<<\"ab\">>, 3, T7)),\n\t?assertEqual([], get_range(0, T7)),\n\t%% a -> a -> a -> 1\n\t%%           b -> 3\n\t%%      b -> 2\n\t%%           c -> 4\n\t%% bcd -> efj -> 4\n\t%%        bcd -> 5\n\tT8 = insert(<<\"bcdbcd\">>, 5, T7),\n\t?assertEqual(6, ar_patricia_tree:size(T8)),\n\t?assertEqual(4, get(<<\"bcdefj\">>, T8)),\n\t?assertEqual(5, get(<<\"bcdbcd\">>, T8)),\n\tT9 = insert(<<\"bcdbcd\">>, 6, T8),\n\t?assertEqual(6, ar_patricia_tree:size(T9)),\n\t?assertEqual(4, get(<<\"bcdefj\">>, T9)),\n\t?assertEqual(6, get(<<\"bcdbcd\">>, T9)),\n\t%% a -> a -> a -> 1\n\t%%           b -> 3\n\t%%      b -> 2\n\t%%           c -> 4\n\t%% bab -> 7\n\t%% bcd -> efj -> 4\n\t%%        bcd -> 6\n\tT10 = insert(<<\"bab\">>, 7, T9),\n\t?assertEqual(7, ar_patricia_tree:size(T10)),\n\t?assertEqual(1, get(<<\"aaa\">>, T10)),\n\t?assertEqual(3, get(<<\"aab\">>, T10)),\n\t?assertEqual(4, get(<<\"abc\">>, T10)),\n\t?assertEqual(4, get(<<\"bcdefj\">>, T10)),\n\t?assertEqual(6, get(<<\"bcdbcd\">>, T10)),\n\t?assertEqual(7, get(<<\"bab\">>, T10)),\n\t?assertEqual(\n\t\t[\n\t\t\t{<<\"aaa\">>, 1}, {<<\"aab\">>, 3}, {<<\"ab\">>, 2}, {<<\"abc\">>, 4}, {<<\"bab\">>, 7},\n\t\t\t{<<\"bcdbcd\">>, 6}, {<<\"bcdefj\">>, 4}\n\t\t],\n\t\tfoldr(fun(K, V, Acc) -> [{K, V} | Acc] end, [], T10)\n\t),\n\t?assertEqual(\n\t\t[\n\t\t\t{<<\"bcdefj\">>, 4},\n\t\t\t{<<\"bcdbcd\">>, 6},\n\t\t\t{<<\"bab\">>, 7},\n\t\t\t{<<\"abc\">>, 4},\n\t\t\t{<<\"ab\">>, 2},\n\t\t\t{<<\"aab\">>, 3},\n\t\t\t{<<\"aaa\">>, 1}\n\t\t],\n\t\tget_range(<<\"aaa\">>, 20, T10)\n\t),\n\t?assertEqual(\n\t\t[\n\t\t\t{<<\"bcdefj\">>, 4},\n\t\t\t{<<\"bcdbcd\">>, 6},\n\t\t\t{<<\"bab\">>, 7},\n\t\t\t{<<\"abc\">>, 4},\n\t\t\t{<<\"ab\">>, 2},\n\t\t\t{<<\"aab\">>, 3},\n\t\t\t{<<\"aaa\">>, 1}\n\t\t],\n\t\tget_range(7, T10)\n\t),\n\t{H10, _, _} = compute_hash(T10, HashFun),\n\t{H10_1, _, _} = compute_hash(\n\t\tinsert(\n\t\t\t<<\"ab\">>, 2,\n\t\t\t\tinsert(<<\"abc\">>, 4, insert(<<\"aab\">>, 3, insert(<<\"aaa\">>, 1, new())))),\n\t\tHashFun\n\t),\n\t{H10_2, _, _} = compute_hash(\n\t\tinsert(<<\"bcdefj\">>, 4, insert(<<\"bab\">>, 7, insert(<<\"bcdbcd\">>, 6, new()))),\n\t\tHashFun\n\t),\n\t?assertEqual(H10, ar_deep_hash:hash([H10_1, H10_2])),\n\t{H10_2_1, _, _} = compute_hash(insert(<<\"bab\">>, 7, new()), HashFun),\n\t{H10_2_2, _, _} = compute_hash(insert(<<\"bcdbcd\">>, 6,\n\t\t\tinsert(<<\"bcdefj\">>, 4, new())), HashFun),\n\t?assertEqual(H10_2, ar_deep_hash:hash([H10_2_1, H10_2_2])),\n\t?assertNotEqual(H10, element(1, compute_hash(delete(<<\"ab\">>, T10), HashFun))),\n\t%% a -> a -> a -> 1\n\t%%           b -> 3\n\t%%      b -> 2\n\t%%           c -> 4\n\t%% b -> a -> b -> 7\n\t%%      a -> a -> 8\n\t%% bcd -> efj -> 4\n\t%%        bcd -> 6\n\tT11 = insert(<<\"baa\">>, 8, 
T10),\n\t?assertEqual(8, ar_patricia_tree:size(T11)),\n\t?assertEqual(1, get(<<\"aaa\">>, T11)),\n\t?assertEqual(3, get(<<\"aab\">>, T11)),\n\t?assertEqual(4, get(<<\"abc\">>, T11)),\n\t?assertEqual(4, get(<<\"bcdefj\">>, T11)),\n\t?assertEqual(6, get(<<\"bcdbcd\">>, T11)),\n\t?assertEqual(7, get(<<\"bab\">>, T11)),\n\t?assertEqual(8, get(<<\"baa\">>, T11)),\n\t%% a -> a -> a -> 1\n\t%%           b -> 3\n\t%%      b -> 2\n\t%%           c -> 4\n\t%% b -> a -> b -> 7\n\t%%      a -> a -> 8\n\t%% bcd -> efj -> 4\n\t%%        bcd -> 6\n\t%% <<>> -> empty\n\tT12 = insert(<<>>, empty, T11),\n\t?assertEqual(9, ar_patricia_tree:size(T12)),\n\t?assertEqual(1, get(<<\"aaa\">>, T12)),\n\t?assertEqual(3, get(<<\"aab\">>, T12)),\n\t?assertEqual(4, get(<<\"abc\">>, T12)),\n\t?assertEqual(4, get(<<\"bcdefj\">>, T12)),\n\t?assertEqual(6, get(<<\"bcdbcd\">>, T12)),\n\t?assertEqual(7, get(<<\"bab\">>, T12)),\n\t?assertEqual(8, get(<<\"baa\">>, T12)),\n\t?assertEqual(empty, get(<<>>, T12)),\n\t?assertEqual(\n\t\t[\n\t\t\t{<<>>, empty}, {<<\"aaa\">>, 1}, {<<\"aab\">>, 3}, {<<\"ab\">>, 2}, {<<\"abc\">>, 4},\n\t\t\t{<<\"baa\">>, 8}, {<<\"bab\">>, 7}, {<<\"bcdbcd\">>, 6}, {<<\"bcdefj\">>, 4}\n\t\t],\n\t\tfoldr(fun(K, V, Acc) -> [{K, V} | Acc] end, [], T12)\n\t),\n\t{H12, _, _} = compute_hash(T12, HashFun),\n\tT13 = from_proplist([\n\t\t{<<\"bcdbcd\">>, 6}, {<<>>, empty}, {<<\"ab\">>, 2}, {<<\"baa\">>, 8}, {<<\"aab\">>, 3},\n\t\t{<<\"bab\">>, 7}, {<<\"aaa\">>, 1}, {<<\"abc\">>, 4}, {<<\"bcdefj\">>, 4}\n\t]),\n\t{H13, _, _} = compute_hash(T13, HashFun),\n\t?assertEqual(H12, H13),\n\t?assertEqual(1, get(<<\"aaa\">>, T13)),\n\t?assertEqual(3, get(<<\"aab\">>, T13)),\n\t?assertEqual(4, get(<<\"abc\">>, T13)),\n\t?assertEqual(4, get(<<\"bcdefj\">>, T13)),\n\t?assertEqual(6, get(<<\"bcdbcd\">>, T13)),\n\t?assertEqual(7, get(<<\"bab\">>, T13)),\n\t?assertEqual(8, get(<<\"baa\">>, T13)),\n\t?assertEqual(empty, get(<<>>, T13)),\n\t%% a -> a -> a -> 1\n\t%%           b -> 3\n\t%%      b -> 2\n\t%%           c -> 4\n\t%% b -> a -> b -> 7\n\t%%      a -> a -> 8\n\t%% bcd -> efj -> 4\n\t%%        bc -> 9\n\t%%              d -> 6\n\t%% <<>> -> empty\n\tT14 = insert(<<\"bcdbc\">>, 9, T13),\n\t?assertEqual(10, ar_patricia_tree:size(T14)),\n\t?assertEqual(1, get(<<\"aaa\">>, T14)),\n\t?assertEqual(3, get(<<\"aab\">>, T14)),\n\t?assertEqual(4, get(<<\"abc\">>, T14)),\n\t?assertEqual(4, get(<<\"bcdefj\">>, T14)),\n\t?assertEqual(6, get(<<\"bcdbcd\">>, T14)),\n\t?assertEqual(7, get(<<\"bab\">>, T14)),\n\t?assertEqual(8, get(<<\"baa\">>, T14)),\n\t?assertEqual(9, get(<<\"bcdbc\">>, T14)),\n\t?assertEqual(empty, get(<<>>, T14)),\n\tT15 = insert(<<\"bcdbc\">>, 10, T14),\n\t?assertEqual(10, ar_patricia_tree:size(T15)),\n\t?assertEqual(10, get(<<\"bcdbc\">>, T15)),\n\t?assertEqual(6, get(<<\"bcdbcd\">>, T15)),\n\t%% a -> a -> a -> 1\n\t%%           b -> 3\n\t%%      b -> 2\n\t%%           c -> 4\n\t%% b -> a -> b -> 7\n\t%% bcd -> efj -> 4\n\t%%        bc -> 10\n\t%%              d -> 6\n\t%% <<>> -> empty\n\t{H15, T15_2, _} = compute_hash(T15, HashFun),\n\tT16 = delete(<<\"baa\">>, T15_2),\n\t?assertEqual(1, get(<<\"aaa\">>, T16)),\n\t?assertEqual(3, get(<<\"aab\">>, T16)),\n\t?assertEqual(4, get(<<\"abc\">>, T16)),\n\t?assertEqual(4, get(<<\"bcdefj\">>, T16)),\n\t?assertEqual(6, get(<<\"bcdbcd\">>, T16)),\n\t?assertEqual(7, get(<<\"bab\">>, T16)),\n\t?assertEqual(not_found, get(<<\"baa\">>, T16)),\n\t?assertEqual(10, get(<<\"bcdbc\">>, T16)),\n\t?assertEqual(empty, get(<<>>, T16)),\n\t{H16, T16_2, _} = compute_hash(T16, 
HashFun),\n\t?assertNotEqual(H16, H15),\n\t%% a -> a -> a -> 1\n\t%%           b -> 3\n\t%%      b -> 2\n\t%%           c -> 4\n\t%% b -> a -> b -> 7\n\t%% bcd -> efj -> 4\n\t%%        bc -> 10\n\t%% <<>> -> empty\n\tT17 = delete(<<\"bcdbcd\">>, T16_2),\n\t?assertEqual(1, get(<<\"aaa\">>, T17)),\n\t?assertEqual(3, get(<<\"aab\">>, T17)),\n\t?assertEqual(4, get(<<\"abc\">>, T17)),\n\t?assertEqual(4, get(<<\"bcdefj\">>, T17)),\n\t?assertEqual(not_found, get(<<\"bcdbcd\">>, T17)),\n\t?assertEqual(7, get(<<\"bab\">>, T17)),\n\t?assertEqual(10, get(<<\"bcdbc\">>, T17)),\n\t?assertEqual(empty, get(<<>>, T17)),\n\t{H17, T17_2, _} = compute_hash(T17, HashFun),\n\t?assertNotEqual(H17, H16),\n\t%% a -> a -> b -> 3\n\t%%      b -> 2\n\t%%           c -> 4\n\t%% b -> a -> b -> 7\n\t%%      a -> a -> 9\n\t%% bcd -> efj -> 4\n\t%%        bc -> 10\n\t%% <<>> -> empty\n\tT18 = insert(<<\"baa\">>, 9, delete(<<\"aaa\">>, T17_2)),\n\t{H18, _, _} = compute_hash(T18, HashFun),\n\t?assertEqual(not_found, get(<<\"aaa\">>, T18)),\n\t?assertEqual(3, get(<<\"aab\">>, T18)),\n\t?assertEqual(4, get(<<\"abc\">>, T18)),\n\t?assertEqual(4, get(<<\"bcdefj\">>, T18)),\n\t?assertEqual(9, get(<<\"baa\">>, T18)),\n\t?assertEqual(7, get(<<\"bab\">>, T18)),\n\t?assertEqual(10, get(<<\"bcdbc\">>, T18)),\n\t?assertEqual(empty, get(<<>>, T18)),\n\t?assertNotEqual(H18, H17),\n\t?assertEqual([{<<>>, empty}], get_range(<<>>, 1, T18)),\n\t?assertEqual([{<<>>, empty}], get_range(1, T18)),\n\t?assertEqual([{<<\"bcdefj\">>, 4}, {<<\"bcdbc\">>, 10}, {<<\"bab\">>, 7}, {<<\"baa\">>, 9},\n\t\t\t{<<\"abc\">>, 4}, {<<\"ab\">>, 2}, {<<\"aab\">>, 3}, {<<>>, empty}],\n\t\t\tget_range(<<>>, 20, T18)),\n\t?assertEqual(\n\t\t[\n\t\t\t{<<\"bcdefj\">>, 4},\n\t\t\t{<<\"bcdbc\">>, 10},\n\t\t\t{<<\"bab\">>, 7},\n\t\t\t{<<\"baa\">>, 9},\n\t\t\t{<<\"abc\">>, 4},\n\t\t\t{<<\"ab\">>, 2},\n\t\t\t{<<\"aab\">>, 3},\n\t\t\t{<<>>, empty}\n\t\t],\n\t\tget_range(8, T18)\n\t),\n\tT19 = insert(<<\"a\">>, 11, T18),\n\t?assertEqual(11, get(<<\"a\">>, T19)),\n\t%% a -> 11\n\t%%      a -> b -> 3\n\t%%      b -> c -> 4\n\t%% b -> a -> b -> 7\n\t%%      a -> a -> 9\n\t%% bcd -> efj -> 4\n\t%%        bc -> 10\n\t%% <<>> -> empty\n\tT20 = delete(<<\"ab\">>, T19),\n\t?assertEqual(not_found, get(<<\"ab\">>, T20)),\n\t?assertEqual(11, get(<<\"a\">>, T20)),\n\t?assertEqual(3, get(<<\"aab\">>, T20)),\n\t?assertEqual(4, get(<<\"abc\">>, T20)),\n\t?assertEqual(4, get(<<\"bcdefj\">>, T20)),\n\t?assertEqual(9, get(<<\"baa\">>, T20)),\n\t?assertEqual(7, get(<<\"bab\">>, T20)),\n\t?assertEqual(10, get(<<\"bcdbc\">>, T20)),\n\t?assertEqual(empty, get(<<>>, T20)),\n\t?assertEqual(8, ar_patricia_tree:size(T20)),\n\t%% abc -> 1\n\t%% def -> 1\n\tT21 = delete(<<\"def\">>, insert(<<\"def\">>, 1, insert(<<\"abc\">>, 1, new()))),\n\t?assertEqual(not_found, get(<<\"def\">>, T21)),\n\t?assertNotEqual(\n\t\telement(1,\n\t\t\tcompute_hash(insert(<<\"aab\">>, 1, insert(<<\"aaa\">>, 1, insert(<<\"a\">>, 2, new()))),\n\t\t\t\t\tHashFun)),\n\t\telement(1,\n\t\t\tcompute_hash(insert(<<\"aab\">>, 1, insert(<<\"aaa\">>, 1,\n\t\t\t\t\tinsert(<<\"aa\">>, 2, new()))), HashFun))\n\t).\n\nstochastic_test() ->\n\tlists:foreach(\n\t\tfun(_Case) ->\n\t\t\tKeyValues = random_key_values(3),\n\t\t\tlists:foldl(\n\t\t\t\t%% Assert all the permutations of the order of insertion of elements\n\t\t\t\t%% produce the tree with the same root hash. 
Assert that each of the\n\t\t\t\t%% elements removed from the tree after each permutation produces the tree\n\t\t\t\t%% with the same root hash as the trees produced by building up the tree\n\t\t\t\t%% without this element.\n\t\t\t\tfun(Permutation, Acc) ->\n\t\t\t\t\tTree = from_proplist(Permutation),\n\t\t\t\t\tMap = maps:from_list(Permutation),\n\t\t\t\t\tcompare_with_map(Tree, Map),\n\t\t\t\t\tSHA256Fun =\n\t\t\t\t\t\tfun(K, V) ->\n\t\t\t\t\t\t\tcrypto:hash(sha256, << K/binary, (term_to_binary(V))/binary >>)\n\t\t\t\t\t\tend,\n\t\t\t\t\tlists:foreach(\n\t\t\t\t\t\tfun({K, V}) ->\n\t\t\t\t\t\t\tTree1 = delete(K, Tree),\n\t\t\t\t\t\t\tM = maps:remove(K, Map),\n\t\t\t\t\t\t\tcompare_with_map(Tree1, M),\n\t\t\t\t\t\t\t{H1, _, _} = compute_hash(Tree1, SHA256Fun),\n\t\t\t\t\t\t\tTree2 = from_proplist(Permutation -- [{K, V}]),\n\t\t\t\t\t\t\t{H2, _, _} = compute_hash(Tree2, SHA256Fun),\n\t\t\t\t\t\t\t?assertEqual(H1, H2, [{tree1, Tree1}, {tree2, Tree2}])\n\t\t\t\t\t\tend,\n\t\t\t\t\t\tPermutation\n\t\t\t\t\t),\n\t\t\t\t\t{H, _, _} = compute_hash(Tree, SHA256Fun),\n\t\t\t\t\tcase Acc of\n\t\t\t\t\t\tstart ->\n\t\t\t\t\t\t\tdo_not_assert;\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t?assertEqual(H, Acc)\n\t\t\t\t\tend,\n\t\t\t\t\tAcc\n\t\t\t\tend,\n\t\t\t\tstart,\n\t\t\t\tpermutations(KeyValues)\n\t\t\t)\n\t\tend,\n\t\tlists:seq(1, 1000)\n\t).\n\nrandom_key_values(N) ->\n\tlists:foldl(\n\t\tfun(_, Acc) ->\n\t\t\t[{crypto:strong_rand_bytes(5), crypto:strong_rand_bytes(30)} | Acc]\n\t\tend,\n\t\t[],\n\t\tlists:seq(1, N)\n\t).\n\ncompare_with_map(Tree, Map) ->\n\t?assertEqual(map_size(Map), ar_patricia_tree:size(Tree)),\n\tmaps:map(\n\t\tfun(Key, Value) ->\n\t\t\t?assertEqual(Value, get(Key, Tree))\n\t\tend,\n\t\tMap\n\t).\n\npermutations([]) ->\n\t[[]];\npermutations(L) ->\n\t[[KV | T] || KV <- L, T <- permutations(L -- [KV])].\n"
  },
  {
    "path": "apps/arweave/src/ar_peer_intervals.erl",
    "content": "-module(ar_peer_intervals).\n\n-export([fetch/5]).\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-include(\"ar.hrl\").\n-include(\"ar_data_discovery.hrl\").\n\n-ifdef(AR_TEST).\n-include_lib(\"eunit/include/eunit.hrl\").\n-endif.\n\n%% The size of the span of the weave we search at a time.\n%% By searching we mean asking peers about the intervals they have in the given span\n%% and finding the intersection with the unsynced intervals.\n-ifdef(AR_TEST).\n-define(QUERY_RANGE_STEP_SIZE, 10_000_000). % 10 MB\n-else.\n-define(QUERY_RANGE_STEP_SIZE, 1_000_000_000). % 1 GB\n-endif.\n\n%% Fetch at most this many sync intervals from a peer at a time.\n-ifdef(AR_TEST).\n-define(QUERY_SYNC_INTERVALS_COUNT_LIMIT, 10).\n-else.\n-define(QUERY_SYNC_INTERVALS_COUNT_LIMIT, 1000).\n-endif.\n\n%% The number of peers to fetch sync intervals from in parallel at a time.\n-define(GET_SYNC_RECORD_BATCH_SIZE, 2).\n-define(GET_SYNC_RECORD_COOLDOWN_MS, 60 * 1000).\n-define(GET_SYNC_RECORD_RPM_KEY, data_sync_record).\n-define(GET_FOOTPRINT_RECORD_RPM_KEY, footprints).\n-define(GET_FOOTPRINT_RECORD_COOLDOWN_MS, 60 * 1000).\n-define(GET_SYNC_RECORD_PATH, [<<\"data_sync_record\">>]).\n-define(GET_FOOTPRINT_RECORD_PATH, [<<\"footprints\">>]).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nfetch(Offset, Start, End, StoreID, Type) when Offset >= End ->\n\t?LOG_DEBUG([{event, fetch_peer_intervals_end},\n\t\t\t{store_id, StoreID},\n\t\t\t{offset, Offset},\n\t\t\t{range_start, Start},\n\t\t\t{range_end, End},\n\t\t\t{type, Type}]),\n\tgen_server:cast(ar_data_sync:name(StoreID),\n\t\t{collect_peer_intervals, Offset, Start, End, Type});\nfetch(Offset, Start, End, StoreID, Type) ->\n\tParent = ar_data_sync:name(StoreID),\n\tspawn_link(fun() ->\n\t\tcase do_fetch(Offset, Start, End, StoreID, Type) of\n\t\t\t{End2, EnqueueIntervals} ->\n\t\t\t\tgen_server:cast(Parent, {enqueue_intervals, EnqueueIntervals}),\n\t\t\t\tgen_server:cast(Parent, {collect_peer_intervals, End2, Start, End, Type});\n\t\t\twait ->\n\t\t\t\tar_util:cast_after(1000, Parent,\n\t\t\t\t\t{collect_peer_intervals, Offset, Start, End, Type})\n\t\tend\n\tend).\n\ndo_fetch(Offset, Start, End, StoreID, normal) ->\n\tParent = ar_data_sync:name(StoreID),\n\ttry\n\t\tcase get_peers(Offset, normal) of\n\t\t\twait ->\n\t\t\t\twait;\n\t\t\tPeers ->\n\t\t\t\tEnd2 = min(Offset + ?QUERY_RANGE_STEP_SIZE, End),\n\t\t\t\tUnsyncedIntervals = get_unsynced_intervals(Offset, End2, StoreID),\n\t\t\t\t%% Schedule the next sync bucket. 
The cast handler logic will pause collection\n\t\t\t\t%% if needed.\n\t\t\t\tcase ar_intervals:is_empty(UnsyncedIntervals) of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\t{End2, []};\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\t{End3, EnqueueIntervals2} =\n\t\t\t\t\t\t\tfetch_peer_intervals(Parent, Offset, Peers, UnsyncedIntervals),\n\t\t\t\t\t\t{min(End2, End3), EnqueueIntervals2}\n\t\t\t\tend\n\t\tend\n\tcatch\n\t\tClass:Reason:Stacktrace ->\n\t\t\t?LOG_WARNING([{event, fetch_peers_process_exit},\n\t\t\t\t\t{store_id, StoreID},\n\t\t\t\t\t{offset, Offset},\n\t\t\t\t\t{range_start, Start},\n\t\t\t\t\t{range_end, End},\n\t\t\t\t\t{type, normal},\n\t\t\t\t\t{class, Class},\n\t\t\t\t\t{reason, Reason},\n\t\t\t\t\t{stacktrace, Stacktrace}]),\n\t\t\t{Offset, []}\n\tend;\n\ndo_fetch(Offset, Start, End, StoreID, footprint) ->\n\tParent = ar_data_sync:name(StoreID),\n\ttry\n\t\tcase get_peers(Offset, footprint) of\n\t\t\twait ->\n\t\t\t\twait;\n\t\t\tPeers ->\n\t\t\t\tPartition = ar_replica_2_9:get_entropy_partition(Offset + ?DATA_CHUNK_SIZE),\n\t\t\t\tFootprint = ar_footprint_record:get_footprint(Offset + ?DATA_CHUNK_SIZE),\n\t\t\t\tUnsyncedIntervals =\n\t\t\t\t\tar_footprint_record:get_unsynced_intervals(Partition, Footprint, StoreID),\n\n\t\t\t\tEnqueueIntervals =\n\t\t\t\t\tcase ar_intervals:is_empty(UnsyncedIntervals) of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t[];\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tfetch_peer_footprint_intervals(\n\t\t\t\t\t\t\t\tParent, Partition, Footprint, Offset, End, Peers, UnsyncedIntervals)\n\t\t\t\t\tend,\n\t\t\t\tOffset2 = get_next_fetch_offset(Offset, Start, End),\n\t\t\t\t%% Schedule the next sync bucket. The cast handler logic will pause collection if needed.\n\t\t\t\t{Offset2, EnqueueIntervals}\n\t\tend\n\tcatch\n\t\tClass:Reason:Stacktrace ->\n\t\t\t?LOG_WARNING([{event, fetch_footprint_intervals_process_exit},\n\t\t\t\t\t{store_id, StoreID},\n\t\t\t\t\t{offset, Offset},\n\t\t\t\t\t{range_start, Start},\n\t\t\t\t\t{range_end, End},\n\t\t\t\t\t{type, footprint},\n\t\t\t\t\t{class, Class},\n\t\t\t\t\t{reason, Reason},\n\t\t\t\t\t{stacktrace, Stacktrace}]),\n\t\t\t{Offset, []}\n\tend.\n\n%% @doc Calculate the next fetch start position after processing a sector.\n%% Advances by one chunk within a sector, or jumps to the next partition boundary\n%% when near the sector end.\nget_next_fetch_offset(Offset, Start, End) ->\n\tSectorSize = ar_block:get_replica_2_9_entropy_sector_size(),\n\tPartition = ar_replica_2_9:get_entropy_partition(Offset + ?DATA_CHUNK_SIZE),\n\t{PartitionStart, PartitionEnd} = ar_replica_2_9:get_entropy_partition_range(Partition),\n\tSectorStart = max(Start, PartitionStart),\n\tSectorEnd = min(PartitionEnd, SectorStart + SectorSize),\n\tOffset2 =\n\t\tcase Offset + 2 * ?DATA_CHUNK_SIZE > SectorEnd of\n\t\t\ttrue ->\n\t\t\t\tPartitionEnd;\n\t\t\tfalse ->\n\t\t\t\tOffset + ?DATA_CHUNK_SIZE\n\t\tend,\n\tmin(Offset2, End).\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nget_peers(Offset, normal) ->\n\tBucket = Offset div ?NETWORK_DATA_BUCKET_SIZE,\n\tget_peers2(Bucket,\n\t\tfun(B) -> ar_data_discovery:get_bucket_peers(B) end,\n\t\t?GET_SYNC_RECORD_RPM_KEY,\n\t\t?GET_SYNC_RECORD_PATH);\nget_peers(Offset, footprint) ->\n\tFootprintBucket = ar_footprint_record:get_footprint_bucket(Offset + ?DATA_CHUNK_SIZE),\n\tget_peers2(FootprintBucket,\n\t\tfun(B) -> ar_data_discovery:get_footprint_bucket_peers(B) 
end,\n\t\t?GET_FOOTPRINT_RECORD_RPM_KEY,\n\t\t?GET_FOOTPRINT_RECORD_PATH).\n\nget_peers2(Bucket, GetPeersFun, RPMKey, Path) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tAllPeers =\n\t\tcase Config#config.sync_from_local_peers_only of\n\t\t\ttrue ->\n\t\t\t\tConfig#config.local_peers;\n\t\t\tfalse ->\n\t\t\t\tGetPeersFun(Bucket)\n\t\tend,\n\tHotPeers = [\n\t\tPeer || Peer <- AllPeers,\n\t\tnot ar_rate_limiter:is_on_cooldown(Peer, RPMKey) andalso\n\t\tnot ar_rate_limiter:is_throttled(Peer, Path)\n\t],\n\tcase length(AllPeers) > 0 andalso length(HotPeers) == 0 of\n\t\ttrue ->\n\t\t\t% There are peers for this Offset, but they are all on cooldown/throttled, so\n\t\t\t% we'll give them time to recover.\n\t\t\twait;\n\t\tfalse ->\n\t\t\tar_data_discovery:pick_peers(HotPeers, ?QUERY_BEST_PEERS_COUNT)\n\tend.\n\n%% @doc Collect the unsynced intervals between Start and End excluding the blocklisted\n%% intervals.\nget_unsynced_intervals(Start, End, StoreID) ->\n\tUnsyncedIntervals = get_unsynced_intervals(Start, End, ar_intervals:new(), StoreID),\n\tBlacklistedIntervals = ar_tx_blacklist:get_blacklisted_intervals(Start, End),\n\tar_intervals:outerjoin(BlacklistedIntervals, UnsyncedIntervals).\n\nget_unsynced_intervals(Start, End, Intervals, _StoreID) when Start >= End ->\n\tIntervals;\nget_unsynced_intervals(Start, End, Intervals, StoreID) ->\n\tcase ar_sync_record:get_next_synced_interval(Start, End, ar_data_sync, StoreID) of\n\t\tnot_found ->\n\t\t\tar_intervals:add(Intervals, End, Start);\n\t\t{End2, Start2} ->\n\t\t\tcase Start2 > Start of\n\t\t\t\ttrue ->\n\t\t\t\t\tEnd3 = min(Start2, End),\n\t\t\t\t\tget_unsynced_intervals(End2, End,\n\t\t\t\t\t\t\tar_intervals:add(Intervals, End3, Start), StoreID);\n\t\t\t\t_ ->\n\t\t\t\t\tget_unsynced_intervals(End2, End, Intervals, StoreID)\n\t\t\tend\n\tend.\n\nfetch_peer_intervals(Parent, Start, Peers, UnsyncedIntervals) ->\n\tIntervals =\n\t\tar_util:batch_pmap(\n\t\t\tfun(Peer) ->\n\t\t\t\tcase maybe_get_peer_intervals(Peer, Start, UnsyncedIntervals) of\n\t\t\t\t\t{ok, SoughtIntervals, PeerRightBound} ->\n\t\t\t\t\t\t{Peer, SoughtIntervals, PeerRightBound};\n\t\t\t\t\t{error, cooldown} ->\n\t\t\t\t\t\t%% Skipping peer because we hit a 429 and put it on cooldown.\n\t\t\t\t\t\tok;\n\t\t\t\t\t{error, Reason} ->\n\t\t\t\t\t\tar_http_iface_client:log_failed_request(Reason, [{event, failed_to_fetch_peer_intervals},\n\t\t\t\t\t\t\t{parent, Parent},\n\t\t\t\t\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t\t\t\t\t{reason, io_lib:format(\"~p\", [Reason])}]),\n\t\t\t\t\t\tok\n\t\t\t\tend\n\t\t\tend,\n\t\t\tPeers,\n\t\t\t?GET_SYNC_RECORD_BATCH_SIZE, % fetch sync intervals from so many peers at a time\n\t\t\t%% We'll rely on the timeout to also flag when we are approaching a peer's RPM\n\t\t\t%% limit. As we approach the limit we will self-throttle the requests. 
Eventually this\n\t\t\t%% throttling will exceed 60s and we'll timeout the batch_pmap and flag the peer for\n\t\t\t%% cooldown.\n\t\t\t60 * 1000 \n\t\t),\n\t{EnqueueIntervals, MinRightBound} =\n\t\tlists:foldl(\n\t\t\tfun\t({error, batch_pmap_timeout, Peer}, Acc) ->\n\t\t\t\t\t?LOG_DEBUG([{event, failed_to_fetch_peer_intervals},\n\t\t\t\t\t\t{parent, Parent},\n\t\t\t\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t\t\t\t{reason, batch_pmap_timeout}]),\n\t\t\t\t\tar_rate_limiter:set_cooldown(\n\t\t\t\t\t\tPeer, ?GET_SYNC_RECORD_RPM_KEY, ?GET_SYNC_RECORD_COOLDOWN_MS),\n\t\t\t\t\tAcc;\n\t\t\t\t({Peer, SoughtIntervals, RightBound}, {IntervalsAcc, RightBoundAcc}) ->\n\t\t\t\t\tcase ar_intervals:is_empty(SoughtIntervals) of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t{IntervalsAcc, RightBoundAcc};\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t%% FootprintKey = none for normal syncing\n\t\t\t\t\t\t\t{[{Peer, SoughtIntervals, none} | IntervalsAcc],\n\t\t\t\t\t\t\t\tmin(RightBound, RightBoundAcc)}\n\t\t\t\t\tend;\n\t\t\t\t(ok, Acc) ->\n\t\t\t\t\tAcc;\n\t\t\t\t(Error, Acc) ->\n\t\t\t\t\tar_http_iface_client:log_failed_request(Error, [{event, failed_to_fetch_peer_intervals},\n\t\t\t\t\t\t{parent, Parent},\n\t\t\t\t\t\t{peer, unknown},\n\t\t\t\t\t\t{reason, io_lib:format(\"~p\", [Error])}]),\n\t\t\t\t\tAcc\n\t\t\tend,\n\t\t\t{[], infinity},\n\t\t\tIntervals\n\t\t),\n\t{MinRightBound, EnqueueIntervals}.\n\n%% @doc\n%% @return {ok, Intervals, PeerRightBound} | Error\n%% Intervals: the intersection of the intervals we are looking for and the intervals that\n%%\t\t\t\tthe peer advertised inside the recently queried range\n%% PeerRightBound: the right bound of the intervals the peer advertised; for example,\n%%\t\t\t\twe may ask for at most 100 continuous intervals inside the given gigabyte,\n%%\t\t\t\tbut the peer may have this region very fractured and 100 intervals will\n%%\t\t\t\tnot be all intervals covering this gigabyte, so we take the right bound\n%%\t\t\t\tto know where to query next\nmaybe_get_peer_intervals(Peer, Left, SoughtIntervals) ->\n\tcase ar_rate_limiter:is_on_cooldown(Peer, ?GET_SYNC_RECORD_RPM_KEY) of\n\t\ttrue ->\n\t\t\t{error, cooldown};\n\t\tfalse ->\n\t\t\tget_peer_intervals(Peer, Left, SoughtIntervals)\n\tend.\n\nget_peer_intervals(Peer, Left, SoughtIntervals) ->\n\tLimit = ?QUERY_SYNC_INTERVALS_COUNT_LIMIT,\n\tRight = element(1, ar_intervals:largest(SoughtIntervals)),\n\tPeerReply =\n\t\tcase ar_peers:get_peer_release(Peer) >= ?GET_SYNC_RECORD_RIGHT_BOUND_SUPPORT_RELEASE of\n\t\t\ttrue ->\n\t\t\t\tar_http_iface_client:get_sync_record(Peer, Left + 1, Right, Limit);\n\t\t\tfalse ->\n\t\t\t\tar_http_iface_client:get_sync_record(Peer, Left + 1, Limit)\n\t\tend,\n\tcase PeerReply of\n\t\t{ok, PeerIntervals2} ->\n\t\t\tPeerRightBound =\n\t\t\t\tcase ar_intervals:is_empty(PeerIntervals2) of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tinfinity;\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\telement(1, ar_intervals:largest(PeerIntervals2))\n\t\t\t\tend,\n\t\t\t{ok, ar_intervals:intersection(PeerIntervals2, SoughtIntervals), PeerRightBound};\n\t\t{error, too_many_requests} = Error ->\n\t\t\tar_rate_limiter:set_cooldown(Peer,\n\t\t\t\t?GET_SYNC_RECORD_RPM_KEY, ?GET_SYNC_RECORD_COOLDOWN_MS),\n\t\t\tError;\n\t\tError ->\n\t\t\tError\n\tend.\n\nfetch_peer_footprint_intervals(Parent, Partition, Footprint, Start, End, Peers, UnsyncedIntervals) ->\n\tIntervals =\n\t\tar_util:batch_pmap(\n\t\t\tfun(Peer) ->\n\t\t\t\tcase maybe_get_peer_footprint_intervals(\n\t\t\t\t\t\tPeer, Partition, Footprint, UnsyncedIntervals) of\n\t\t\t\t\t{ok, 
SoughtIntervals} ->\n\t\t\t\t\t\t{Peer, SoughtIntervals};\n\t\t\t\t\t{error, cooldown} ->\n\t\t\t\t\t\t%% Skipping peer because we hit a 429 and put it on cooldown.\n\t\t\t\t\t\tok;\n\t\t\t\t\t{error, Reason} ->\n\t\t\t\t\t\t?LOG_DEBUG([{event, failed_to_fetch_peer_footprint_intervals},\n\t\t\t\t\t\t\t{parent, Parent},\n\t\t\t\t\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t\t\t\t\t{reason, io_lib:format(\"~p\", [Reason])}]),\n\t\t\t\t\t\tok\n\t\t\t\tend\n\t\t\tend,\n\t\t\tPeers,\n\t\t\t?GET_SYNC_RECORD_BATCH_SIZE, % fetch sync intervals from so many peers at a time\n\t\t\t%% We'll rely on the timeout to also flag when we are approaching a peer's RPM\n\t\t\t%% limit. As we approach the limit we will self-throttle the requests. Eventually this\n\t\t\t%% throttling will exceed 60s and we'll timeout the batch_pmap and flag the peer for\n\t\t\t%% cooldown.\n\t\t\t60 * 1000\n\t\t),\n\tEnqueueIntervals =\n\t\tlists:foldl(\n\t\t\tfun\t({error, batch_pmap_timeout, Peer}, Acc) ->\n\t\t\t\t\t?LOG_DEBUG([{event, failed_to_fetch_peer_footprint_intervals},\n\t\t\t\t\t\t{parent, Parent},\n\t\t\t\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t\t\t\t{reason, batch_pmap_timeout}]),\n\t\t\t\t\tar_rate_limiter:set_cooldown(\n\t\t\t\t\t\tPeer, ?GET_FOOTPRINT_RECORD_RPM_KEY, ?GET_FOOTPRINT_RECORD_COOLDOWN_MS),\n\t\t\t\t\tAcc;\n\t\t\t\t({Peer, SoughtIntervals}, IntervalsAcc) ->\n\t\t\t\t\tcase ar_intervals:is_empty(SoughtIntervals) of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\tIntervalsAcc;\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tByteIntervals = \n\t\t\t\t\t\t\t\tcut_peer_footprint_intervals(SoughtIntervals, Start, End),\n\t\t\t\t\t\t\t?LOG_DEBUG([{event, fetch_peer_intervals},\n\t\t\t\t\t\t\t\t{function, fetch_peer_footprint_intervals},\n\t\t\t\t\t\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t\t\t\t\t\t{partition, Partition},\n\t\t\t\t\t\t\t\t{footprint, Footprint},\n\t\t\t\t\t\t\t\t{unsynced_intervals, ar_intervals:sum(UnsyncedIntervals)},\n\t\t\t\t\t\t\t\t{sought_intervals, ar_intervals:sum(SoughtIntervals)},\n\t\t\t\t\t\t\t\t{intervals, length(Intervals)},\n\t\t\t\t\t\t\t\t{byte_intervals, ar_intervals:sum(ByteIntervals)}]),\n\t\t\t\t\t\t\tFootprintKey = {Partition, Footprint, Peer},\n\t\t\t\t\t\t\t[{Peer, ByteIntervals, FootprintKey} | IntervalsAcc]\n\t\t\t\t\tend;\n\t\t\t\t(ok, Acc) ->\n\t\t\t\t\tAcc;\n\t\t\t\t(Error, Acc) ->\n\t\t\t\t\t?LOG_DEBUG([{event, failed_to_fetch_peer_footprint_intervals},\n\t\t\t\t\t\t{parent, Parent},\n\t\t\t\t\t\t{peer, unknown},\n\t\t\t\t\t\t{reason, io_lib:format(\"~p\", [Error])}]),\n\t\t\t\t\tAcc\n\t\t\tend,\n\t\t\t[],\n\t\t\tIntervals\n\t\t),\n\tEnqueueIntervals.\n\nmaybe_get_peer_footprint_intervals(Peer, Partition, Footprint, SoughtIntervals) ->\n\tcase ar_rate_limiter:is_on_cooldown(Peer, ?GET_FOOTPRINT_RECORD_RPM_KEY) of\n\t\ttrue ->\n\t\t\t{error, cooldown};\n\t\tfalse ->\n\t\t\tget_peer_footprint_intervals(Peer, Partition, Footprint, SoughtIntervals)\n\tend.\n\nget_peer_footprint_intervals(Peer, Partition, Footprint, SoughtIntervals) ->\n\tPeerReply =\n\t\tcase ar_peers:get_peer_release(Peer) >= ?GET_FOOTPRINT_SUPPORT_RELEASE of\n\t\t\ttrue ->\n\t\t\t\tar_http_iface_client:get_footprints(Peer, Partition, Footprint);\n\t\t\tfalse ->\n\t\t\t\t%% We expect to get here only if the peer is upgraded and then downgraded again,\n\t\t\t\t%% because we check the peer release at the bucket collection stage.\n\t\t\t\tnot_found\n\t\tend,\n\tcase PeerReply of\n\t\t{ok, Intervals} ->\n\t\t\t{ok, ar_intervals:intersection(Intervals, SoughtIntervals)};\n\t\tnot_found ->\n\t\t\t{ok, 
ar_intervals:new()};\n\t\t{error, too_many_requests} = Error ->\n\t\t\tar_rate_limiter:set_cooldown(Peer,\n\t\t\t\t?GET_FOOTPRINT_RECORD_RPM_KEY, ?GET_FOOTPRINT_RECORD_COOLDOWN_MS),\n\t\t\tError;\n\t\tError ->\n\t\t\tError\n\tend.\n\n%% @doc The intervals returned by a peer may include intervals beyond the\n%% storage module boundaries. This is because we end up querying all advertised \n%% intervals belonging to a footprint that intersects this node's unsynced\n%% intervals. This can cause this node to try to store a chunk that lies beyond\n%% its configured storage module range. To avoid this we explicitly remove all\n%% intervals beyond the provided boundaries.\ncut_peer_footprint_intervals(FootprintIntervals, Start, End) -> \n\tByteIntervals =\n\t\tar_footprint_record:get_intervals_from_footprint_intervals(FootprintIntervals),\n\tByteIntervals2 = ar_intervals:cut(ByteIntervals, End),\n\tPaddedStart =\n\t\tcase ar_block:get_chunk_padded_offset(Start) of\n\t\t\tStart ->\n\t\t\t\tStart;\n\t\t\tOffset ->\n\t\t\t\tOffset - ?DATA_CHUNK_SIZE\n\t\tend,\n\tar_intervals:outerjoin(\n\t\tar_intervals:from_list([{PaddedStart, -1}]), ByteIntervals2).\n\n%%%===================================================================\n%%% Tests\n%%%===================================================================\n\n-ifdef(AR_TEST).\n\ncut_peer_footprint_intervals_test() ->\n\n\t?assertEqual(\n\t\tar_intervals:from_list([{786432, 524288}, {1310720, 1048576}]),\n\t\tcut_peer_footprint_intervals(\n\t\t\tar_intervals:from_list([{4, 0}]), 262144, 1572864),\n\t\t\"Full Footprint 0, aligned boundaries\"),\n\n\t?assertEqual(\n\t\tar_intervals:from_list([{524288,262144}, {1048576,786432}, {1572864,1310720}]),\n\t\tcut_peer_footprint_intervals(\n\t\t\tar_intervals:from_list([{8, 4}]), 262144, 1572864),\n\t\t\"Full Footprint 1 cut to aligned boundaries\"),\n\n\t?assertEqual(\n\t\tar_intervals:from_list([\n\t\t\t{262144,200000}, {786432, 524288}, {1310720, 1048576}, {1600000,1572864}]),\n\t\tcut_peer_footprint_intervals(\n\t\t\tar_intervals:from_list([{4, 0}]), 200000, 1600000),\n\t\t\"Full Footprint 0, unaligned boundaries, pre-strict\"),\n\n\t?assertEqual(\n\t\tar_intervals:from_list([{524288,262144}, {1048576, 786432}, {1572864, 1310720}]),\n\t\tcut_peer_footprint_intervals(\n\t\t\tar_intervals:from_list([{8, 4}]), 200000, 1600000),\n\t\t\"Full Footprint 1, unaligned boundaries, pre-strict\"),\n\n\t?assertEqual(\n\t\tar_intervals:from_list([{2883584,2621440}, {3407872,3145728}]),\n\t\tcut_peer_footprint_intervals(\n\t\t\tar_intervals:from_list([{12, 8}]), 2400000, 3500000),\n\t\t\"Full Footprint 2, unaligned boundaries, post-strict\"),\n\t\n\t?assertEqual(\n\t\tar_intervals:from_list([{2621440,2359296}, {3145728,2883584}, {3500000,3407872}]),\n\t\tcut_peer_footprint_intervals(\n\t\t\tar_intervals:from_list([{16, 12}]), 2400000, 3500000),\n\t\t\"Full Footprint 3, unaligned boundaries, post-strict\"),\n\n\t?assertEqual(\n\t\tar_intervals:from_list([{2621440,2359296}, {3500000,3407872}]),\n\t\tcut_peer_footprint_intervals(\n\t\t\tar_intervals:from_list([{16, 14}, {13, 12}]), 2400000, 3500000),\n\t\t\"Partial Footprint 3, unaligned boundaries, post-strict\"),\n\n\tok.\n\n%% Tests for get_next_fetch_offset/4\n%% 4 binary conditions (shown as debug output 0/1 for each):\n%%   1. Start > PartitionStart\n%%   2. PartitionEnd > SectorStart + SectorSize\n%%   3. Offset + 2*CHUNK > SectorEnd\n%%   4. 
Offset2 > End\n%% Pattern labeled 0-F in hex (e.g., 0101 = 5)\n%% Note: 0xxx (cond1=0, cond2=0) requires partition < SectorSize, impossible in tests\nget_next_fetch_offset_test() ->\n\tSectorSize = ar_block:get_replica_2_9_entropy_sector_size(),\n\t{P0Start, P0End} = ar_replica_2_9:get_entropy_partition_range(0),\n\tChunk = ?DATA_CHUNK_SIZE,\n\n\t?assertEqual(P0Start + Chunk,\n\t\tget_next_fetch_offset(P0Start, P0Start, P0End),\n\t\t\"simple advance\"),\n\n\t?assertEqual(P0Start + 1000,\n\t\tget_next_fetch_offset(P0Start, P0Start, P0Start + 1000),\n\t\t\"simple advance, limited by End\"),\n\n\t?assertEqual(P0End,\n\t\tget_next_fetch_offset(P0Start + SectorSize - 1, P0Start, P0End),\n\t\t\"jump to PartitionEnd\"),\n\n\t?assertEqual(P0Start + SectorSize,\n\t\tget_next_fetch_offset(P0Start + SectorSize - 1, P0Start, P0Start + SectorSize),\n\t\t\"jump to PartitionEnd, limited by End\"),\n\n\tStart8 = P0End - SectorSize,\n\t?assertEqual(Start8 + Chunk,\n\t\tget_next_fetch_offset(Start8, Start8, P0End),\n\t\t\"simple advance, mid-partition Start\"),\n\n\tStart9 = P0End - SectorSize,\n\t?assertEqual(Start9 + 1000,\n\t\tget_next_fetch_offset(Start9, Start9, Start9 + 1000),\n\t\t\"simple advance, mid-partition Start, limited by End\"),\n\n\tStartA = P0End - SectorSize,\n\t?assertEqual(P0End,\n\t\tget_next_fetch_offset(StartA + Chunk, StartA, P0End),\n\t\t\"jump to PartitionEnd, mid-partition Start\"),\n\n\tStartB = P0End - SectorSize,\n\tSmallEndB = P0End - Chunk,\n\t?assertEqual(SmallEndB,\n\t\tget_next_fetch_offset(StartB + Chunk, StartB, SmallEndB),\n\t\t\"jump to PartitionEnd, mid-partition Start, limited by End\"),\n\n\tMidStart = P0Start + SectorSize,\n\t?assertEqual(MidStart + Chunk,\n\t\tget_next_fetch_offset(MidStart, MidStart, P0End),\n\t\t\"simple advance, mid-partition Start, SectorEnd past PartitionEnd\"),\n\n\t?assertEqual(MidStart + 1000,\n\t\tget_next_fetch_offset(MidStart, MidStart, MidStart + 1000),\n\t\t\"simple advance, mid-partition Start, SectorEnd past PartitionEnd, limited by End\"),\n\n\t?assertEqual(P0End,\n\t\tget_next_fetch_offset(MidStart + Chunk, MidStart, P0End),\n\t\t\"jump to PartitionEnd, mid-partition Start, SectorEnd past PartitionEnd\"),\n\n\tSmallEndF = MidStart + SectorSize,\n\t?assertEqual(SmallEndF,\n\t\tget_next_fetch_offset(MidStart + Chunk, MidStart, SmallEndF),\n\t\t\"jump to PartitionEnd, mid-partition Start, SectorEnd past PartitionEnd, limited by End\"),\n\n\tok.\n\n-endif."
  },
  {
    "path": "apps/arweave/src/ar_peer_worker.erl",
    "content": "%%% @doc Per-peer process managing sync task queue, dispatch state, and footprints.\n%%%\n%%% Each peer gets its own worker process to isolate state and prevent\n%%% stale-state bugs from interleaved updates. This process manages:\n%%%\n%%% Peer Queue Management:\n%%% - Maintains a queue of tasks ready to be dispatched for this peer\n%%% - Tracks dispatched_count (number of tasks currently being processed)\n%%% - Maintains max_dispatched limit that controls how many tasks can be\n%%%   concurrently dispatched for this peer (adjusted via rebalancing)\n%%% - If too many requests are made to a peer, it can be blocked or throttled\n%%% - Peer performance varies over time; dispatch limits adapt to avoid getting\n%%%   \"stuck\" syncing many chunks from a slow peer\n%%%\n%%% Footprint Management:\n%%% - Tasks with a FootprintKey are grouped by footprint to limit concurrent\n%%%   processing and avoid overloading the entropy cache\n%%% - Footprints can be active (tasks being processed) or waiting (task_queue for later)\n%%% - Each peer has a max_footprints limit to prevent overloading entropy cache\n%%% - When a footprint becomes active, waiting tasks are moved to the peer queue\n%%% - When all tasks in a footprint complete, it's deactivated and the next\n%%%   waiting footprint may be activated\n%%% - Long-running footprints are detected and logged per-peer\n%%%\n%%% Performance Tracking:\n%%% - Tracks task completion times and data sizes for performance metrics\n%%% - Integrates with ar_peers module for peer rating and performance tracking\n%%%\n%%% Rebalancing:\n%%% - Responds to rebalance requests from the coordinator\n%%% - Adjusts max_dispatched based on peer performance vs target latency\n%%% - Cuts queue if it exceeds the calculated max_queue size\n-module(ar_peer_worker).\n\n-behaviour(gen_server).\n\n%% Lifecycle\n-export([start_link/1, get_or_start/1, stop/1]).\n%% Operations (all take Pid as first arg - coordinator caches Peer->Pid mapping)\n-export([enqueue_task/5, task_completed/5, process_queue/2,\n         get_max_dispatched/1, rebalance/3, try_activate_footprint/1]).\n\n-export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave/include/ar_peers.hrl\").\n\n-define(MIN_MAX_DISPATCHED, 8).\n-define(MIN_PEER_QUEUE, 20).\n-define(CHECK_LONG_RUNNING_FOOTPRINTS_MS, 60000).  %% Check every 60 seconds\n-define(LONG_RUNNING_FOOTPRINT_THRESHOLD_S, 120).\n-define(IDLE_SHUTDOWN_THRESHOLD_S, 300).  
%% Shutdown after 5 minutes of no tasks\n-define(CALL_TIMEOUT_MS, 30000).\n\n-record(footprint, {\n\twaiting = queue:new(),   %% queue of waiting tasks\n\tactive_count = 0,        %% count of active tasks (0 = inactive)\n\tactivation_time          %% monotonic time when activated (undefined if inactive)\n}).\n\n-record(state, {\n\tpeer,\n\tpeer_formatted,                %% cached ar_util:format_peer(Peer) for metrics\n\ttask_queue = queue:new(),\n\tdispatched_count = 0,\n\twaiting_count = 0,\n\tmax_dispatched = ?MIN_MAX_DISPATCHED,\n\tlast_task_time,                %% monotonic time when last task was received\n\t%% Footprint management (coordinator tracks global limits)\n\tfootprints = #{},              %% FootprintKey => #footprint{}\n\tactive_footprints = sets:new() %% set of active FootprintKeys\n}).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nstart_link(Peer) ->\n\tcase gen_server:start_link(?MODULE, [Peer], []) of\n\t\t{ok, Pid} ->\n\t\t\tets:insert(?MODULE, {Peer, Pid}),\n\t\t\t{ok, Pid};\n\t\tError ->\n\t\t\tError\n\tend.\n\n%% @doc Lookup a peer worker pid by ETS registry.\nlookup(Peer) ->\n\tcase ets:lookup(?MODULE, Peer) of\n\t\t[] -> undefined;\n\t\t[{_, Pid}] -> {ok, Pid}\n\tend.\n\n%% @doc Get the pid of an existing peer worker or start a new one.\nget_or_start(Peer) ->\n\tcase lookup(Peer) of\n\t\t{ok, Pid} ->\n\t\t\t{ok, Pid};\n\t\tundefined ->\n\t\t\tcase supervisor:start_child(ar_peer_worker_sup, [Peer]) of\n\t\t\t\t{ok, Pid} -> {ok, Pid};\n\t\t\t\t{error, {already_started, Pid}} -> {ok, Pid};\n\t\t\t\tError -> Error\n\t\t\tend\n\tend.\n\nstop(Pid) ->\n\tgen_server:stop(Pid).\n\n%%%===================================================================\n%%% Operations (all take Pid as first argument).\n%%% Coordinator caches Peer->Pid mapping to avoid lookup overhead.\n%%%===================================================================\n\n%% @doc Enqueue a task and process the queue synchronously.\n%% Returns {WasActivated, TasksToDispatch} where:\n%% - WasActivated: true if a new footprint was just activated, false otherwise\n%% - TasksToDispatch: list of tasks ready to dispatch\n%% HasCapacity indicates whether the global footprint limit allows activating new footprints.\n%% WorkerCount is used to calculate available dispatch slots.\nenqueue_task(Pid, FootprintKey, Args, HasCapacity, WorkerCount) ->\n\ttry\n\t\tgen_server:call(Pid,\n\t\t\t{enqueue_task, FootprintKey, Args, HasCapacity, WorkerCount},\n\t\t\t?CALL_TIMEOUT_MS)\n\tcatch\n\t\texit:{timeout, _} -> {false, []};\n\t\t_:_ -> {false, []}\n\tend.\n\n%% @doc Try to activate a waiting footprint (called when global capacity becomes available).\n%% Returns true if a footprint was activated, false otherwise.\ntry_activate_footprint(Pid) ->\n\ttry\n\t\tgen_server:call(Pid, try_activate_footprint, ?CALL_TIMEOUT_MS)\n\tcatch\n\t\texit:{timeout, _} -> false;\n\t\t_:_ -> false\n\tend.\n\n%% @doc Process the queue and return tasks ready for dispatch.\n%% Used to drain queued tasks without enqueuing new ones (e.g., after task completion).\nprocess_queue(Pid, WorkerCount) ->\n\ttry\n\t\tgen_server:call(Pid, {process_queue, WorkerCount}, ?CALL_TIMEOUT_MS)\n\tcatch\n\t\texit:{timeout, _} -> [];\n\t\t_:_ -> []\n\tend.\n\n%% @doc Notify task completed, update footprint accounting and rate data fetched.\ntask_completed(Pid, FootprintKey, Result, ElapsedNative, DataSize) ->\n\tgen_server:cast(Pid, 
{task_completed, FootprintKey, Result, ElapsedNative, DataSize}).\n\n%% @doc Get max_dispatched for this peer.\nget_max_dispatched(Pid) ->\n\ttry\n\t\tgen_server:call(Pid, get_max_dispatched, ?CALL_TIMEOUT_MS)\n\tcatch\n\t\texit:{timeout, _} -> {error, timeout};\n\t\t_:_ -> {error, error}\n\tend.\n\n%% @doc Rebalance based on performance and targets.\n%% Returns RemovedCount (number of tasks cut from queue).\nrebalance(Pid, Performance, RebalanceParams) ->\n\ttry\n\t\tgen_server:call(Pid, {rebalance, Performance, RebalanceParams}, ?CALL_TIMEOUT_MS)\n\tcatch\n\t\texit:{timeout, _} -> {error, timeout};\n\t\t_:_ -> {error, timeout}\n\tend.\n\n%%%===================================================================\n%%% gen_server callbacks.\n%%%===================================================================\n\ninit([Peer]) ->\n\t%% Schedule periodic check for long-running footprints\n\terlang:send_after(?CHECK_LONG_RUNNING_FOOTPRINTS_MS, self(), check_long_running_footprints),\n\tPeerFormatted = ar_util:format_peer(Peer),\n\t%% Notify coordinator of our PID (handles restarts updating stale cached PIDs)\n\tgen_server:cast(ar_data_sync_coordinator, {peer_worker_started, Peer, self()}),\n\t?LOG_INFO([{event, init}, {module, ?MODULE}, {peer, PeerFormatted}]),\n\t{ok, #state{ peer = Peer, peer_formatted = PeerFormatted, \n\t\t\t\t last_task_time = erlang:monotonic_time() }}.\n\nhandle_call(get_max_dispatched, _From, State) ->\n\t{reply, State#state.max_dispatched, State};\n\nhandle_call(get_state, _From, State) ->\n\t%% Test-only: keep for tests to access raw state\n\t{reply, {ok, State}, State};\n\nhandle_call({enqueue_task, FootprintKey, Args, HasCapacity, WorkerCount}, _From, State) ->\n\tState1 = State#state{ last_task_time = erlang:monotonic_time() },\n\t{WasActivated, State2} = do_enqueue_task(FootprintKey, Args, HasCapacity, State1),\n\t{TasksToDispatch, State3} = do_process_queue(State2, WorkerCount),\n\t{reply, {WasActivated, TasksToDispatch}, State3};\n\nhandle_call({process_queue, WorkerCount}, _From, State) ->\n\t{TasksToDispatch, State2} = do_process_queue(State, WorkerCount),\n\t{reply, TasksToDispatch, State2};\n\nhandle_call(try_activate_footprint, _From, State) ->\n\t%% Global capacity became available - try to activate a waiting footprint\n\t{Activated, State2} = try_activate_waiting_footprint(State),\n\t{reply, Activated, State2};\n\nhandle_call({rebalance, Performance, RebalanceParams}, _From, State) ->\n\t{QueueScalingFactor, TargetLatency, WorkersStarved} = RebalanceParams,\n\t#state{ task_queue = Queue, max_dispatched = MaxDispatched,\n\t\t\tdispatched_count = Dispatched, waiting_count = Waiting,\n\t\t\tpeer_formatted = PeerFormatted, last_task_time = LastTaskTime } = State,\n\t\n\t%% 1. Cut queue if needed\n\tMaxQueueLen = max_queue_length(Performance, QueueScalingFactor),\n\tQueueLen = queue:len(Queue),\n\t{State2, RemovedCount} = case MaxQueueLen =/= infinity andalso QueueLen > MaxQueueLen of\n\t\ttrue ->\n\t\t\t{NewQueued, RemovedQueue} = queue:split(MaxQueueLen, Queue),\n\t\t\tRemoved = queue:len(RemovedQueue),\n\t\t\tRemovedTasks = queue:to_list(RemovedQueue),\n\t\t\tincrement_metrics(queued_out, State, Removed),\n\t\t\tS2 = cut_footprint_task_counts(RemovedTasks, State#state{ task_queue = NewQueued }),\n\t\t\t{S2, Removed};\n\t\tfalse ->\n\t\t\t{State, 0}\n\tend,\n\t\n\t%% 2. 
Update max_dispatched\n\tFasterThanTarget = Performance#performance.average_latency < TargetLatency,\n\tTargetMax = case FasterThanTarget orelse WorkersStarved of\n\t\ttrue -> MaxDispatched + 1;\n\t\tfalse -> MaxDispatched - 1\n\tend,\n\tMaxTasks = max(Dispatched, Waiting + queue:len(State2#state.task_queue)),\n\tNewMaxDispatched = ar_util:between(TargetMax, ?MIN_MAX_DISPATCHED, max(MaxTasks, ?MIN_MAX_DISPATCHED)),\n\tState3 = State2#state{ max_dispatched = NewMaxDispatched },\n\t\n\t%% 3. Check if we should shutdown (idle worker)\n\tNewQueueLen = queue:len(State3#state.task_queue),\n\tNewWaiting = State3#state.waiting_count,\n\tNewDispatched = State3#state.dispatched_count,\n\tIdleSeconds = erlang:convert_time_unit(\n\t\terlang:monotonic_time() - LastTaskTime, native, second),\n\tShouldShutdown = (NewDispatched == 0) andalso (NewQueueLen == 0) andalso \n\t\t\t\t\t (NewWaiting == 0) andalso (IdleSeconds >= ?IDLE_SHUTDOWN_THRESHOLD_S),\n\t\n\t%% 4. Log rebalance\n\t?LOG_DEBUG([{event, rebalance_peer},\n\t\t{peer, PeerFormatted},\n\t\t{dispatched_count, NewDispatched},\n\t\t{queued_count, NewQueueLen},\n\t\t{waiting_count, NewWaiting},\n\t\t{max_queue_len, MaxQueueLen},\n\t\t{faster_than_target, FasterThanTarget},\n\t\t{workers_starved, WorkersStarved},\n\t\t{max_dispatched, NewMaxDispatched},\n\t\t{removed_count, RemovedCount},\n\t\t{idle_seconds, IdleSeconds},\n\t\t{should_shutdown, ShouldShutdown}]),\n\t\n\tResult = case ShouldShutdown of\n\t\ttrue -> {shutdown, RemovedCount};\n\t\tfalse -> {ok, RemovedCount}\n\tend,\n\t{reply, Result, State3};\n\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {request, Request}]),\n\t{reply, {error, unhandled}, State}.\n\nhandle_cast({task_completed, FootprintKey, Result, ElapsedNative, DataSize}, State) ->\n\t#state{ dispatched_count = DispatchedCount, max_dispatched = MaxDispatched, \n\t\t\tpeer = Peer } = State,\n\tNewDispatchedCount = max(0, DispatchedCount - 1),\n\tincrement_metrics(completed, State, 1),\n\t%% Rate the fetched data with ar_peers\n\tElapsedMicroseconds = erlang:convert_time_unit(ElapsedNative, native, microsecond),\n\tar_peers:rate_fetched_data(Peer, chunk, Result, ElapsedMicroseconds, DataSize, MaxDispatched),\n\t%% Complete footprint task\n\tState2 = do_complete_footprint_task(\n\t\tFootprintKey, State#state{ dispatched_count = NewDispatchedCount }),\n\t{noreply, State2};\n\nhandle_cast(Cast, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {cast, Cast}]),\n\t{noreply, State}.\n\nhandle_info(check_long_running_footprints, State) ->\n\t%% Schedule next check\n\terlang:send_after(?CHECK_LONG_RUNNING_FOOTPRINTS_MS, self(), check_long_running_footprints),\n\tlog_long_running_footprints(State),\n\t{noreply, State};\n\nhandle_info(Info, State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {module, ?MODULE}, {info, Info}]),\n\t{noreply, State}.\n\nterminate(_Reason, State) ->\n\t%% Clean up ETS entry when process terminates\n\tets:delete(?MODULE, State#state.peer),\n\tok.\n\n%%%===================================================================\n%%% Private functions - Task management\n%%%===================================================================\n\ndequeue_tasks(Queue, 0, Acc) ->\n\t{lists:reverse(Acc), Queue};\ndequeue_tasks(Queue, N, Acc) ->\n\tcase queue:out(Queue) of\n\t\t{empty, _} ->\n\t\t\t{lists:reverse(Acc), Queue};\n\t\t{{value, Args}, NewQueued} ->\n\t\t\tdequeue_tasks(NewQueued, N - 1, [Args | Acc])\n\tend.\n\n%% @doc Calculate max queue size 
for this peer.\n%% MaxQueue = max(PeerThroughput * ScalingFactor, MIN_PEER_QUEUE)\nmax_queue_length(#performance{ current_rating = 0 }, _ScalingFactor) -> infinity;\nmax_queue_length(#performance{ current_rating = 0.0 }, _ScalingFactor) -> infinity;\nmax_queue_length(_Performance, infinity) -> infinity;\nmax_queue_length(Performance, ScalingFactor) ->\n\tPeerThroughput = Performance#performance.current_rating,\n\tmax(trunc(PeerThroughput * ScalingFactor), ?MIN_PEER_QUEUE).\n\n%%%===================================================================\n%%% Private functions - Task management\n%%%===================================================================\n\n%% @doc Process tasks from queue based on sync workers.\ndo_process_queue(State, WorkerCount) ->\n\t#state{ \n\t\tdispatched_count = Dispatched,\n\t\tmax_dispatched = MaxDispatched,\n\t\ttask_queue = Queue } = State,\n\n\tAvailableSlots = min(WorkerCount, MaxDispatched - Dispatched),\n\t{Tasks, RQ} = dequeue_tasks(Queue, AvailableSlots, []),\n\tTaskCount = length(Tasks),\n\tcase TaskCount > 0 of\n\t\ttrue ->\n\t\t\tincrement_metrics(dispatched, State, TaskCount),\n\t\t\tincrement_metrics(queued_out, State, TaskCount);\n\t\tfalse ->\n\t\t\tok\n\tend,\n\tNewDispatched = Dispatched + TaskCount,\n\tState2 = State#state{ task_queue = RQ, dispatched_count = NewDispatched },\n\t{Tasks, State2}.\n\n%% @doc Enqueue a 'normal' task. Footprint limits can be ignored.\ndo_enqueue_task(none, Args, _HasCapacity, State) ->\n\t%% No footprint key - enqueue directly to peer queue\n\t#state{ task_queue = Queue } = State,\n\tNewQueue = queue:in(Args, Queue),\n\tincrement_metrics(queued_in, State, 1),\n\t{false, State#state{ task_queue = NewQueue }};\n\n%% @doc Enqueue a 'footprint' task respecting footprint limits.\ndo_enqueue_task(FootprintKey, Args, HasCapacity, State) ->\n\t#state{ footprints = Footprints, task_queue = Queue, \n\t\t\tactive_footprints = ActiveFootprints } = State,\n\tFootprint = maps:get(FootprintKey, Footprints, #footprint{}),\n\tIsActive = sets:is_element(FootprintKey, ActiveFootprints),\n\tcase IsActive of\n\t\ttrue ->\n\t\t\t%% Footprint is already active, add task to it (not a new activation)\n\t\t\tNewQueue = queue:in(Args, Queue),\n\t\t\tFootprint2 = Footprint#footprint{ active_count = Footprint#footprint.active_count + 1 },\n\t\t\tincrement_metrics(queued_in, State, 1),\n\t\t\tincrement_metrics(activate_footprint_task, State, 1),\n\t\t\t{false, State#state{ \n\t\t\t\ttask_queue = NewQueue,\n\t\t\t\tfootprints = maps:put(FootprintKey, Footprint2, Footprints)\n\t\t\t}};\n\t\tfalse when HasCapacity ->\n\t\t\t%% New footprint and global capacity available - activate it (new activation)\n\t\t\tNewQueue = queue:in(Args, Queue),\n\t\t\tFootprint2 = Footprint#footprint{ active_count = 1 },\n\t\t\tincrement_metrics(queued_in, State, 1),\n\t\t\tincrement_metrics(activate_footprint_task, State, 1),\n\t\t\tState2 = State#state{ \n\t\t\t\ttask_queue = NewQueue,\n\t\t\t\tfootprints = maps:put(FootprintKey, Footprint2, Footprints)\n\t\t\t},\n\t\t\tState3 = activate_footprint(FootprintKey, State2),\n\t\t\t{true, State3};\n\t\tfalse ->\n\t\t\t%% No global capacity - queue task for later (no activation)\n\t\t\tFootprint2 = Footprint#footprint{ \n\t\t\t\twaiting = queue:in(Args, Footprint#footprint.waiting) \n\t\t\t},\n\t\t\tincrement_metrics(waiting_in, State, 1),\n\t\t\t{false, State#state{ \n\t\t\t\tfootprints = maps:put(FootprintKey, Footprint2, Footprints),\n\t\t\t\twaiting_count = State#state.waiting_count + 1\n\t\t\t}}\n\tend.\n\n%% @doc 
Handle completion of a footprint task.\ndo_complete_footprint_task(none, State) ->\n\tState;\ndo_complete_footprint_task(FootprintKey, State) ->\n\t#state{ footprints = Footprints, peer_formatted = PeerFormatted } = State,\n\tcase Footprints of\n\t\t#{ FootprintKey := Footprint } ->\n\t\t\tincrement_metrics(deactivate_footprint_task, State, 1),\n\t\t\tNewActiveCount = Footprint#footprint.active_count - 1,\n\t\t\tcase NewActiveCount =< 0 of\n\t\t\t\ttrue ->\n\t\t\t\t\t%% Footprint has no more active tasks\n\t\t\t\t\tcase queue:is_empty(Footprint#footprint.waiting) of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t%% No waiting tasks - deactivate footprint\n\t\t\t\t\t\t\tdeactivate_footprint(FootprintKey, Footprint, State);\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t%% Has waiting tasks - activate them\n\t\t\t\t\t\t\tactivate_waiting_tasks(FootprintKey, Footprint, State)\n\t\t\t\t\tend;\n\t\t\t\tfalse ->\n\t\t\t\t\t%% Still has active tasks\n\t\t\t\t\tFootprint2 = Footprint#footprint{ active_count = NewActiveCount },\n\t\t\t\t\tState#state{ footprints = maps:put(FootprintKey, Footprint2, Footprints) }\n\t\t\tend;\n\t\t_ ->\n\t\t\t?LOG_WARNING([{event, complete_footprint_task_not_found},\n\t\t\t\t{footprint_key, FootprintKey},\n\t\t\t\t{peer, PeerFormatted}]),\n\t\t\tState\n\tend.\n\n%% @doc Deactivate a footprint.\ndeactivate_footprint(FootprintKey, Footprint, State) ->\n\t#state{ footprints = Footprints, peer_formatted = PeerFormatted, \n\t\t\tactive_footprints = ActiveFootprints } = State,\n\t%% Log deactivation with duration\n\tcase Footprint#footprint.activation_time of\n\t\tundefined -> ok;\n\t\tActivationTime ->\n\t\t\tDurationMs = erlang:convert_time_unit(\n\t\t\t\terlang:monotonic_time() - ActivationTime, native, millisecond),\n\t\t\t?LOG_DEBUG([{event, footprint_deactivated},\n\t\t\t\t{peer, PeerFormatted},\n\t\t\t\t{footprint_key, FootprintKey},\n\t\t\t\t{duration_ms, DurationMs}])\n\tend,\n\tincrement_metrics(deactivate_footprint, State, 1),\n\tnotify_footprint_deactivated(State#state.peer),\n\tState#state{ \n\t\tfootprints = maps:remove(FootprintKey, Footprints),\n\t\tactive_footprints = sets:del_element(FootprintKey, ActiveFootprints)\n\t}.\n\n%% @doc Activate waiting tasks from a footprint that was already active.\n%% Called when active_count reaches 0 but footprint has waiting tasks - just cycles tasks.\nactivate_waiting_tasks(FootprintKey, Footprint, State) ->\n\t#state{ task_queue = Queue, footprints = Footprints } = State,\n\tWaitingQueue = Footprint#footprint.waiting,\n\tWaitingCount = queue:len(WaitingQueue),\n\tNewQueue = queue:join(Queue, WaitingQueue),\n\tincrement_metrics(waiting_out, State, WaitingCount),\n\tincrement_metrics(queued_in, State, WaitingCount),\n\tincrement_metrics(activate_footprint_task, State, WaitingCount),\n\tFootprint2 = Footprint#footprint{ \n\t\twaiting = queue:new(), \n\t\tactive_count = WaitingCount \n\t},\n\tState#state{ \n\t\ttask_queue = NewQueue, \n\t\tfootprints = maps:put(FootprintKey, Footprint2, Footprints),\n\t\twaiting_count = State#state.waiting_count - WaitingCount\n\t}.\n\n%% @doc Activate a footprint - common logic for new activations.\n%% Sets activation_time, adds to active set, notifies coordinator, logs.\nactivate_footprint(FootprintKey, State) ->\n\t#state{ footprints = Footprints,\n\t\t\tactive_footprints = ActiveFootprints } = State,\n\tFootprint = maps:get(FootprintKey, Footprints),\n\tFootprint2 = Footprint#footprint{ activation_time = erlang:monotonic_time() },\n\tincrement_metrics(activate_footprint, State, 
1),\n\tState#state{\n\t\tfootprints = maps:put(FootprintKey, Footprint2, Footprints),\n\t\tactive_footprints = sets:add_element(FootprintKey, ActiveFootprints)\n\t}.\n\n%% @doc Try to activate the next waiting footprint if any.\n%% Returns {Activated, NewState} where Activated is true if a footprint was activated.\ntry_activate_waiting_footprint(State) ->\n\t#state{ footprints = Footprints, active_footprints = ActiveFootprints } = State,\n\tcase find_waiting_footprint(Footprints, ActiveFootprints) of\n\t\tnone ->\n\t\t\t{false, State};\n\t\t{FootprintKey, Footprint} ->\n\t\t\t%% Move waiting tasks to queue, then activate the footprint\n\t\t\tState2 = activate_waiting_tasks(FootprintKey, Footprint, State),\n\t\t\tState3 = activate_footprint(FootprintKey, State2),\n\t\t\t{true, State3}\n\tend.\n\n%% @doc Find the inactive footprint with the most waiting tasks.\nfind_waiting_footprint(Footprints, ActiveFootprints) ->\n\tmaps:fold(\n\t\tfun(Key, Footprint, Acc) ->\n\t\t\tIsActive = sets:is_element(Key, ActiveFootprints),\n\t\t\tHasWaiting = not queue:is_empty(Footprint#footprint.waiting),\n\t\t\tcase not IsActive andalso HasWaiting of\n\t\t\t\ttrue ->\n\t\t\t\t\tWaitingCount = queue:len(Footprint#footprint.waiting),\n\t\t\t\t\tcase Acc of\n\t\t\t\t\t\tnone ->\n\t\t\t\t\t\t\t{Key, Footprint};\n\t\t\t\t\t\t{_AccKey, AccFP} ->\n\t\t\t\t\t\t\tAccWaitingCount = queue:len(AccFP#footprint.waiting),\n\t\t\t\t\t\t\tcase WaitingCount > AccWaitingCount of\n\t\t\t\t\t\t\t\ttrue -> {Key, Footprint};\n\t\t\t\t\t\t\t\tfalse -> Acc\n\t\t\t\t\t\t\tend\n\t\t\t\t\tend;\n\t\t\t\tfalse ->\n\t\t\t\t\tAcc\n\t\t\tend\n\t\tend, none, Footprints).\n\n%% @doc Decrement footprint counts for tasks removed from queue (e.g., during cut).\ncut_footprint_task_counts([], State) ->\n\tState;\ncut_footprint_task_counts([Args | Rest], State) ->\n\tFootprintKey = element(5, Args),\n\tState2 = case FootprintKey of\n\t\tnone -> State;\n\t\t_ -> do_complete_footprint_task(FootprintKey, State)\n\tend,\n\tcut_footprint_task_counts(Rest, State2).\n\n%%%===================================================================\n%%% Private functions - Long-running footprint detection (debugging)\n%%%===================================================================\n\nlog_long_running_footprints(State) ->\n\t#state{ peer_formatted = PeerFormatted, footprints = Footprints } = State,\n\tNow = erlang:monotonic_time(),\n\tThresholdNative = erlang:convert_time_unit(?LONG_RUNNING_FOOTPRINT_THRESHOLD_S, second, native),\n\tLongRunning = maps:fold(\n\t\tfun(FootprintKey, Footprint, Acc) ->\n\t\t\tcase Footprint#footprint.activation_time of\n\t\t\t\tundefined -> Acc;\n\t\t\t\tActivationTime ->\n\t\t\t\t\tDuration = Now - ActivationTime,\n\t\t\t\t\tcase Duration > ThresholdNative of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\tDurationS = erlang:convert_time_unit(Duration, native, second),\n\t\t\t\t\t\t\t[#{key => FootprintKey,\n\t\t\t\t\t\t\t   duration_s => DurationS,\n\t\t\t\t\t\t\t   active_count => Footprint#footprint.active_count,\n\t\t\t\t\t\t\t   waiting_count => queue:len(Footprint#footprint.waiting)} | Acc];\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tAcc\n\t\t\t\t\tend\n\t\t\tend\n\t\tend, [], Footprints),\n\tcase LongRunning of\n\t\t[] -> ok;\n\t\t_ ->\n\t\t\t?LOG_WARNING([{event, long_running_footprints},\n\t\t\t\t{peer, PeerFormatted},\n\t\t\t\t{count, length(LongRunning)},\n\t\t\t\t{footprints, LongRunning}])\n\tend.\n\n%%%===================================================================\n%%% Private functions - Coordinator 
notifications\n%%%===================================================================\n\n%% Notify coordinator that a footprint was deactivated (for global tracking)\nnotify_footprint_deactivated(Peer) ->\n\tgen_server:cast(ar_data_sync_coordinator, {footprint_deactivated, Peer}).\n\n%%%===================================================================\n%%% Private functions - Metrics\n%%%===================================================================\n\n%% Increment prometheus counter - catches errors when prometheus isn't initialized (e.g. in tests)\n%% Metric is an atom (e.g., dispatched, queued_in)\nincrement_metrics(Metric, #state{ peer_formatted = PeerFormatted }, Value) ->\n\ttry\n\t\tprometheus_counter:inc(sync_tasks, [Metric, PeerFormatted], Value)\n\tcatch\n\t\t_:_ -> ok\n\tend.\n\n%%%===================================================================\n%%% Tests.\n%%%===================================================================\n\n-ifdef(AR_TEST).\n-include_lib(\"eunit/include/eunit.hrl\").\n\n%% @doc Test-only helper to get footprint stats by calling get_state.\nget_footprint_stats(Pid) ->\n\tcase gen_server:call(Pid, get_state) of\n\t\t{ok, State} ->\n\t\t\t#state{ footprints = Footprints } = State,\n\t\t\tTotalActive = maps:fold(fun(_, Footprint, Acc) -> \n\t\t\t\tAcc + Footprint#footprint.active_count \n\t\t\tend, 0, Footprints),\n\t\t\t#{\n\t\t\t\tfootprint_count => maps:size(Footprints),\n\t\t\t\tactive_footprint_count => maps:size(Footprints),  %% All footprints are now active\n\t\t\t\ttotal_active_tasks => TotalActive\n\t\t\t};\n\t\tError ->\n\t\t\tError\n\tend.\n\nlookup_test() ->\n\tPeer1 = {10, 20, 30, 40, 9999},\n\t?assertEqual(undefined, lookup(Peer1)),\n\t\n\tPeer2 = {50, 60, 70, 80, 1234},\n\tTestPid = spawn(fun() -> receive stop -> ok end end),\n\tets:insert(?MODULE, {Peer2, TestPid}),\n\t?assertEqual({ok, TestPid}, lookup(Peer2)),\n\t\n\t%% Cleanup\n\tTestPid ! 
stop,\n\tets:delete(?MODULE, Peer2).\n\n%% Tests that require setup/cleanup\npeer_worker_test_() ->\n\t{foreach,\n\t\tfun setup/0,\n\t\tfun cleanup/1,\n\t\t[\n\t\t\tfun test_enqueue_and_process/1,\n\t\t\tfun test_task_completed/1,\n\t\t\tfun test_cut_queue/1,\n\t\t\tfun test_footprint_basic/1,\n\t\t\tfun test_multiple_footprints/1,\n\t\t\tfun test_footprint_completion/1,\n\t\t\tfun test_footprint_waiting_queue/1,\n\t\t\tfun test_try_activate_footprint/1,\n\t\t\tfun test_footprint_task_cycling/1,\n\t\t\tfun test_active_footprints_set/1,\n\t\t\tfun test_add_task_to_active_footprint/1,\n\t\t\tfun test_footprint_deactivation_removes_from_map/1\n\t\t]\n\t}.\n\nsetup() ->\n\tPeer = {1, 2, 3, 4, 1984},\n\t%% Start peer worker directly (not via supervisor, unnamed for test isolation)\n\t{ok, Pid} = gen_server:start(?MODULE, [Peer], []),\n\t{Peer, Pid}.\n\ncleanup({Peer, Pid}) ->\n\t%% Clean up ETS entry when removing the worker\n\tets:delete(?MODULE, Peer),\n\tgen_server:stop(Pid),\n\tok.\n\ntest_enqueue_and_process({Peer, Pid}) ->\n\tfun() ->\n\t\t%% Enqueue some tasks (no footprint)\n\t\t{false, []} = enqueue_task(Pid, none, {0, 100, Peer, store1, none}, true, 0),\n\t\t{false, []} = enqueue_task(Pid, none, {100, 200, Peer, store1, none}, true, 0),\n\t\t{false, []} = enqueue_task(Pid, none, {200, 300, Peer, store1, none}, true, 0),\n\n\t\t%% Sync to ensure casts processed\n\t\t{ok, _} = gen_server:call(Pid, get_state),\n\n\t\t%% Process queue - should get up to 2 tasks\n\t\tTasks = process_queue(Pid, 2),\n\t\t?assertEqual(2, length(Tasks)),\n\n\t\t{false, Tasks2} = enqueue_task(Pid, none, {300, 400, Peer, store1, none}, true, 1),\n\t\t?assertEqual(1, length(Tasks2)),\n\n\t\t%% Check state\n\t\t{ok, State} = gen_server:call(Pid, get_state),\n\t\t?assertEqual(1, queue:len(State#state.task_queue)),\n\t\t?assertEqual(3, State#state.dispatched_count)\n\tend.\n\ntest_task_completed({Peer, Pid}) ->\n\tfun() ->\n\t\t%% Enqueue and dispatch a task (no footprint)\n\t\tenqueue_task(Pid, none, {0, 100, Peer, store1, none}, true, 0),\n\t\t{ok, _} = gen_server:call(Pid, get_state), %% sync\n\t\t[_Task] = process_queue(Pid, 1),\n\n\t\t%% Complete the task (FootprintKey = none)\n\t\ttask_completed(Pid, none, ok, 0, 100),\n\t\t\n\t\t%% Wait for cast to be processed by doing a sync call\n\t\t{ok, State} = gen_server:call(Pid, get_state),\n\t\t?assertEqual(0, State#state.dispatched_count)\n\tend.\n\ntest_cut_queue({Peer, Pid}) ->\n\tfun() ->\n\t\t%% Enqueue 25 tasks (more than MIN_PEER_QUEUE = 20)\n\t\tlists:foreach(fun(I) ->\n\t\t\tenqueue_task(Pid, none, {I * 100, (I + 1) * 100, Peer, store1, none}, true, 0)\n\t\tend, lists:seq(0, 24)),\n\n\t\t%% Sync and check we have 25 tasks\n\t\t{ok, State1} = gen_server:call(Pid, get_state),\n\t\t?assertEqual(25, queue:len(State1#state.task_queue)),\n\n\t\t%% Rebalance with scaling factor that gives MaxQueue = 20 (MIN_PEER_QUEUE)\n\t\t%% MaxQueue = max(PeerThroughput * ScalingFactor, MIN_PEER_QUEUE)\n\t\t%% With PeerThroughput = 100, ScalingFactor = 0.1 => MaxQueue = max(10, 20) = 20\n\t\tPerformance = #performance{ current_rating = 100.0, average_latency = 50.0 },\n\t\t%% RebalanceParams = {QueueScalingFactor, TargetLatency, WorkersStarved}\n\t\t%% FasterThanTarget = (50.0 < 100.0) = true\n\t\tRebalanceParams = {0.1, 100.0, false},\n\t\tResult = rebalance(Pid, Performance, RebalanceParams),\n\t\t?assertEqual({ok, 5}, Result),\n\n\t\t%% Check we have 20 tasks left (MIN_PEER_QUEUE)\n\t\t{ok, State2} = gen_server:call(Pid, get_state),\n\t\t?assertEqual(20, 
queue:len(State2#state.task_queue))\n\tend.\n\ntest_footprint_basic({Peer, Pid}) ->\n\tfun() ->\n\t\tFootprintKey = {store1, 1000, Peer},\n\t\t%% Enqueue task with footprint - should activate it (HasCapacity=true)\n\t\tenqueue_task(Pid, FootprintKey, {0, 100, Peer, store1, FootprintKey}, true, 0),\n\t\t\n\t\t%% Check footprint stats (get_footprint_stats syncs via call)\n\t\tStats = get_footprint_stats(Pid),\n\t\t?assertEqual(1, maps:get(active_footprint_count, Stats)),\n\t\t?assertEqual(1, maps:get(total_active_tasks, Stats)),\n\n\t\t%% Complete the task\n\t\ttask_completed(Pid, FootprintKey, ok, 0, 100),\n\t\t\n\t\t%% Footprint should be deactivated (get_footprint_stats will wait for cast to process)\n\t\tStats2 = get_footprint_stats(Pid),\n\t\t?assertEqual(0, maps:get(active_footprint_count, Stats2))\n\tend.\n\ntest_multiple_footprints({Peer, Pid}) ->\n\tfun() ->\n\t\t%% Verify state is accessible\n\t\t{ok, _State0} = gen_server:call(Pid, get_state),\n\t\t\n\t\t%% Test that multiple footprints can be activated (HasCapacity=true)\n\t\tFootprintKey1 = {store1, 1000, Peer},\n\t\tFootprintKey2 = {store1, 2000, Peer},\n\t\t\n\t\t%% Enqueue tasks for two footprints\n\t\tenqueue_task(Pid, FootprintKey1, {0, 100, Peer, store1, FootprintKey1}, true, 0),\n\t\tenqueue_task(Pid, FootprintKey2, {100, 200, Peer, store1, FootprintKey2}, true, 0),\n\t\t\n\t\tStats = get_footprint_stats(Pid),\n\t\t?assertEqual(2, maps:get(active_footprint_count, Stats))\n\tend.\n\ntest_footprint_completion({Peer, Pid}) ->\n\tfun() ->\n\t\tFootprintKey = {store1, 1000, Peer},\n\t\t%% Enqueue multiple tasks for same footprint (HasCapacity=true)\n\t\tenqueue_task(Pid, FootprintKey, {0, 100, Peer, store1, FootprintKey}, true, 0),\n\t\tenqueue_task(Pid, FootprintKey, {100, 200, Peer, store1, FootprintKey}, true, 0),\n\t\tenqueue_task(Pid, FootprintKey, {200, 300, Peer, store1, FootprintKey}, true, 0),\n\t\t\n\t\tStats1 = get_footprint_stats(Pid),  %% Syncs via call\n\t\t?assertEqual(3, maps:get(total_active_tasks, Stats1)),\n\n\t\t%% Complete tasks one by one\n\t\ttask_completed(Pid, FootprintKey, ok, 0, 100),\n\t\tStats2 = get_footprint_stats(Pid),  %% Wait for cast to process\n\t\t?assertEqual(2, maps:get(total_active_tasks, Stats2)),\n\t\t?assertEqual(1, maps:get(active_footprint_count, Stats2)),  %% Still active\n\n\t\ttask_completed(Pid, FootprintKey, ok, 0, 100),\n\t\ttask_completed(Pid, FootprintKey, ok, 0, 100),\n\t\t\n\t\tStats3 = get_footprint_stats(Pid),  %% Wait for casts to process\n\t\t?assertEqual(0, maps:get(total_active_tasks, Stats3)),\n\t\t?assertEqual(0, maps:get(active_footprint_count, Stats3))  %% Deactivated\n\tend.\n\ntest_footprint_waiting_queue({Peer, Pid}) ->\n\tfun() ->\n\t\tFootprintKey = {store1, 1000, Peer},\n\t\t%% Enqueue task with HasCapacity=false - should go to waiting queue\n\t\tenqueue_task(Pid, FootprintKey, {0, 100, Peer, store1, FootprintKey}, false, 0),\n\t\tenqueue_task(Pid, FootprintKey, {100, 200, Peer, store1, FootprintKey}, false, 0),\n\t\t\n\t\t%% Sync and check state\n\t\t{ok, State} = gen_server:call(Pid, get_state),\n\t\t\n\t\t%% Task queue should be empty (tasks went to waiting)\n\t\t?assertEqual(0, queue:len(State#state.task_queue)),\n\t\t%% waiting_count should be 2\n\t\t?assertEqual(2, State#state.waiting_count),\n\t\t%% Footprint should NOT be in active_footprints set\n\t\t?assertEqual(false, sets:is_element(FootprintKey, State#state.active_footprints)),\n\t\t%% Footprint should have 2 waiting tasks\n\t\tFootprint = maps:get(FootprintKey, 
State#state.footprints),\n\t\t?assertEqual(2, queue:len(Footprint#footprint.waiting)),\n\t\t?assertEqual(0, Footprint#footprint.active_count)\n\tend.\n\ntest_try_activate_footprint({Peer, Pid}) ->\n\tfun() ->\n\t\tFootprintKey = {store1, 1000, Peer},\n\t\t%% First, no waiting footprints - should return false\n\t\tResult1 = try_activate_footprint(Pid),\n\t\t?assertEqual(false, Result1),\n\t\t\n\t\t%% Enqueue task with HasCapacity=false - goes to waiting\n\t\tenqueue_task(Pid, FootprintKey, {0, 100, Peer, store1, FootprintKey}, false, 0),\n\t\t{ok, _} = gen_server:call(Pid, get_state), %% sync\n\t\t\n\t\t%% Now try_activate_footprint should return true\n\t\tResult2 = try_activate_footprint(Pid),\n\t\t?assertEqual(true, Result2),\n\t\t\n\t\t%% Check state - task should now be in task_queue\n\t\t{ok, State} = gen_server:call(Pid, get_state),\n\t\t?assertEqual(1, queue:len(State#state.task_queue)),\n\t\t?assertEqual(0, State#state.waiting_count),\n\t\t%% Footprint should be in active_footprints set\n\t\t?assertEqual(true, sets:is_element(FootprintKey, State#state.active_footprints)),\n\t\t%% Footprint should have active_count = 1\n\t\tFootprint = maps:get(FootprintKey, State#state.footprints),\n\t\t?assertEqual(1, Footprint#footprint.active_count),\n\t\t?assertEqual(0, queue:len(Footprint#footprint.waiting))\n\tend.\n\ntest_footprint_task_cycling({Peer, Pid}) ->\n\tfun() ->\n\t\tFootprintKey = {store1, 1000, Peer},\n\t\t%% Enqueue one task with HasCapacity=true (activates footprint)\n\t\tenqueue_task(Pid, FootprintKey, {0, 100, Peer, store1, FootprintKey}, true, 0),\n\t\t{ok, _} = gen_server:call(Pid, get_state), %% sync\n\t\t\n\t\t%% Dispatch the task\n\t\t[_Task] = process_queue(Pid, 1),\n\t\t\n\t\t%% Verify the basic behavior that when a footprint completes\n\t\t%% and has no waiting tasks, it deactivates.\n\t\t\n\t\t%% Complete the task\n\t\ttask_completed(Pid, FootprintKey, ok, 0, 100),\n\t\t{ok, State} = gen_server:call(Pid, get_state),\n\t\t\n\t\t%% Footprint should be removed (no waiting tasks)\n\t\t?assertEqual(false, maps:is_key(FootprintKey, State#state.footprints)),\n\t\t?assertEqual(false, sets:is_element(FootprintKey, State#state.active_footprints))\n\tend.\n\ntest_active_footprints_set({Peer, Pid}) ->\n\tfun() ->\n\t\tFootprintKey1 = {store1, 1000, Peer},\n\t\tFootprintKey2 = {store1, 2000, Peer},\n\t\tFootprintKey3 = {store1, 3000, Peer},\n\t\t\n\t\t%% Initially set should be empty\n\t\t{ok, State0} = gen_server:call(Pid, get_state),\n\t\t?assertEqual(0, sets:size(State0#state.active_footprints)),\n\t\t\n\t\t%% Activate two footprints\n\t\tenqueue_task(Pid, FootprintKey1, {0, 100, Peer, store1, FootprintKey1}, true, 0),\n\t\tenqueue_task(Pid, FootprintKey2, {100, 200, Peer, store1, FootprintKey2}, true, 0),\n\t\t%% Third one goes to waiting\n\t\tenqueue_task(Pid, FootprintKey3, {200, 300, Peer, store1, FootprintKey3}, false, 0),\n\t\t\n\t\t{ok, State1} = gen_server:call(Pid, get_state),\n\t\t?assertEqual(2, sets:size(State1#state.active_footprints)),\n\t\t?assertEqual(true, sets:is_element(FootprintKey1, State1#state.active_footprints)),\n\t\t?assertEqual(true, sets:is_element(FootprintKey2, State1#state.active_footprints)),\n\t\t?assertEqual(false, sets:is_element(FootprintKey3, State1#state.active_footprints)),\n\t\t\n\t\t%% Deactivate one footprint\n\t\ttask_completed(Pid, FootprintKey1, ok, 0, 100),\n\t\t{ok, State2} = gen_server:call(Pid, get_state),\n\t\t?assertEqual(1, sets:size(State2#state.active_footprints)),\n\t\t?assertEqual(false, sets:is_element(FootprintKey1, 
State2#state.active_footprints)),\n\t\t?assertEqual(true, sets:is_element(FootprintKey2, State2#state.active_footprints))\n\tend.\n\ntest_add_task_to_active_footprint({Peer, Pid}) ->\n\tfun() ->\n\t\tFootprintKey = {store1, 1000, Peer},\n\t\t\n\t\t%% Activate footprint with one task\n\t\tenqueue_task(Pid, FootprintKey, {0, 100, Peer, store1, FootprintKey}, true, 0),\n\t\t{ok, State1} = gen_server:call(Pid, get_state),\n\t\tFootprint1 = maps:get(FootprintKey, State1#state.footprints),\n\t\t?assertEqual(1, Footprint1#footprint.active_count),\n\t\t?assertEqual(1, queue:len(State1#state.task_queue)),\n\t\t\n\t\t%% Add more tasks to same footprint (regardless of HasCapacity, goes to task_queue since active)\n\t\tenqueue_task(Pid, FootprintKey, {100, 200, Peer, store1, FootprintKey}, true, 0),\n\t\tenqueue_task(Pid, FootprintKey, {200, 300, Peer, store1, FootprintKey}, false, 0),\n\t\t\n\t\t{ok, State2} = gen_server:call(Pid, get_state),\n\t\tFootprint2 = maps:get(FootprintKey, State2#state.footprints),\n\t\t%% active_count should be 3 (all tasks added to active footprint)\n\t\t?assertEqual(3, Footprint2#footprint.active_count),\n\t\t%% All 3 tasks should be in task_queue\n\t\t?assertEqual(3, queue:len(State2#state.task_queue)),\n\t\t%% waiting should still be empty\n\t\t?assertEqual(0, queue:len(Footprint2#footprint.waiting)),\n\t\t?assertEqual(0, State2#state.waiting_count)\n\tend.\n\ntest_footprint_deactivation_removes_from_map({Peer, Pid}) ->\n\tfun() ->\n\t\tFootprintKey = {store1, 1000, Peer},\n\t\t\n\t\t%% Activate footprint\n\t\tenqueue_task(Pid, FootprintKey, {0, 100, Peer, store1, FootprintKey}, true, 0),\n\t\t{ok, State1} = gen_server:call(Pid, get_state),\n\t\t\n\t\t%% Footprint should exist in map\n\t\t?assertEqual(true, maps:is_key(FootprintKey, State1#state.footprints)),\n\t\t?assertEqual(true, sets:is_element(FootprintKey, State1#state.active_footprints)),\n\t\t\n\t\t%% Complete the task (footprint has no waiting tasks)\n\t\ttask_completed(Pid, FootprintKey, ok, 0, 100),\n\t\t{ok, State2} = gen_server:call(Pid, get_state),\n\t\t\n\t\t%% Footprint should be completely removed from both structures\n\t\t?assertEqual(false, maps:is_key(FootprintKey, State2#state.footprints)),\n\t\t?assertEqual(false, sets:is_element(FootprintKey, State2#state.active_footprints)),\n\t\t?assertEqual(0, State2#state.waiting_count)\n\tend.\n\n-endif.\n"
  },
  {
    "path": "apps/arweave/src/ar_peer_worker_sup.erl",
    "content": "%%% @doc Supervisor for ar_peer_worker processes.\n%%% Uses simple_one_for_one to dynamically spawn peer workers on demand.\n-module(ar_peer_worker_sup).\n\n-behaviour(supervisor).\n\n-export([start_link/0]).\n-export([init/1]).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nstart_link() ->\n\tsupervisor:start_link({local, ?MODULE}, ?MODULE, []).\n\n%%%===================================================================\n%%% Supervisor callbacks.\n%%%===================================================================\n\ninit([]) ->\n\tets:new(ar_peer_worker, [set, public, named_table, {read_concurrency, true}]),\n\tChildSpec = #{\n\t\tid => ar_peer_worker,\n\t\tstart => {ar_peer_worker, start_link, []},\n\t\trestart => temporary,  %% Don't restart - will be recreated on demand\n\t\tshutdown => 5000,\n\t\ttype => worker,\n\t\tmodules => [ar_peer_worker]\n\t},\n\t{ok, {#{strategy => simple_one_for_one, intensity => 10, period => 60}, [ChildSpec]}}.\n\n"
  },
  {
    "path": "apps/arweave/src/ar_peers.erl",
    "content": "%%% @doc Tracks the availability and performance of the network peers.\n-module(ar_peers).\n-behaviour(gen_server).\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"arweave/include/ar_peers.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n-export([\n\tadd_peer/2,\n\tconnected_peer/1,\n\tdisconnected_peer/1,\n\tdiscover_peers/0,\n\tfilter_peers/2,\n\tget_connection_timestamp_peer/1,\n\tget_peer_performances/1,\n\tget_peer_release/1,\n\tget_peers/1,\n\tget_tag/2,\n\tget_trusted_peers/0,\n\tis_connected_peer/1,\n\tis_public_peer/1,\n\tissue_warning/3,\n\trate_fetched_data/4,\n\trate_fetched_data/6,\n\trate_gossiped_data/4,\n\tresolve_and_cache_peer/2,\n\tresolve_and_cache_peer/3,\n\tset_tag/3,\n\tstart_link/0,\n\tstats/1\n]).\n-export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2]).\n\n%% The frequency in seconds of re-resolving DNS of peers configured by domain names.\n-define(STORE_RESOLVED_DOMAIN_S, 60).\n\n%% The frequency in milliseconds of ranking the known peers.\n-ifdef(AR_TEST).\n-define(RANK_PEERS_FREQUENCY_MS, 2 * 1000).\n-else.\n-define(RANK_PEERS_FREQUENCY_MS, 2 * 60 * 1000).\n-endif.\n\n%% The frequency in milliseconds of asking some peers for their peers.\n-ifdef(AR_TEST).\n-define(GET_MORE_PEERS_FREQUENCY_MS, 5000).\n-else.\n-define(GET_MORE_PEERS_FREQUENCY_MS, 240 * 1000).\n-endif.\n\n%% Peers to never add to the peer list.\n-define(PEER_PERMANENT_BLACKLIST, []).\n\n%% The maximum number of peers to return from get_peers/0.\n-define(MAX_PEERS, 1000).\n\n%% Minimum average_success we'll tolerate before dropping a peer.\n-define(MINIMUM_SUCCESS, 0.8).\n%% The alpha value in an EMA calculation is somewhat unintuitive:\n%%\n%% NewEma = (1 - Alpha) * OldEma + Alpha * NewValue\n%%\n%% When calculating the SuccessEma the NewValue is always either 1 or 0. So if we want to see how\n%% many consecutive failures it will take to drop the SuccessEma from 1 to 0.5 (i.e. 50% failure\n%% rate), a number of terms in the equation drop out and we're left with:\n%%\n%% 0.5 = (1 - Alpha) ^ N\n%%\n%% Where N is the number of consecutive failures.\n%%\n%% Setting Alpha to 0.1 we can determine the number of consecutive failures:\n%% 0.5 = 0.9 ^ N\n%% log(0.5) = N * log(0.9)\n%% N = log(0.5) / log(0.9)\n%% N = 6.58\n%%\n%% And if we want to set the Alpha such that it takes 20 consecutive failures to go from 1 to 0.5:\n%% 0.5 = (1 - Alpha) ^ 20\n%% log(0.5) = 20 * log(1 - Alpha)\n%% 1 - Alpha = 10 ^ (log(0.5) / 20)\n%% Alpha = 1 - 10 ^ (log(0.5) / 20)\n%% Alpha = 0.035\n-define(SUCCESS_ALPHA, 0.035).\n%% The THROUGHPUT_ALPHA is even harder to intuit since the values being averaged can be any\n%% positive number and are not just limited to 0 or 1. Perhaps one way to think about it is:\n%% When a datapoint is first added to the average it is scaled by Alpha, and then every time\n%% another datapoint is added, the contribution of all prior datapoints are scaled by (1-Alpha).\n%% So how many new datapoints will it take to reduce the contribution of an earlier datapoint\n%% to \"virtually\" 0?\n%%\n%% If we assume \"virtually 0\" is the same as 1% of its true value (i.e. 
if the datapoint was\n%% originaly 100, it now contributes 1 to the average), then we can use a similar equation as\n%% the SUCCESS_ALPHA equation to determine how many datapoints materially contribute to the average:\n%%\n%% 0.01 = (1 - Alpha) ^ N) * Alpha\n%%\n%% The additional \"* Alpha\" term is to account for the scaling that happens when a datapoint is\n%% first added.\n%%\n%% With an Alpha of 0.05 we're essentially saying that the last ~31 datapoints contribute 99% of\n%% the average:\n%%\n%% 0.01 = ((1 - 0.05) ^ N) * 0.05\n%% 0.01 / 0.05 = (1 - 0.05) ^ N\n%% log(0.2) = N * log(0.95)\n%% N = log(0.2) / log(0.95)\n%% N = 31.38\n-define(THROUGHPUT_ALPHA, 0.05).\n\n%% When processing block rejected events for blocks received from a peer, we handle rejections\n%% differently based on the rejection reason.\n-define(BLOCK_REJECTION_WARNING, [\n\tfailed_to_fetch_first_chunk,\n\tfailed_to_fetch_second_chunk,\n\tfailed_to_fetch_chunk\n]).\n-define(BLOCK_REJECTION_BAN, [\n\tinvalid_previous_solution_hash,\n\tinvalid_last_retarget,\n\tinvalid_difficulty,\n\tinvalid_cumulative_difficulty,\n\tinvalid_hash_preimage,\n\tinvalid_nonce_limiter_seed_data,\n\tinvalid_partition_number,\n\tinvalid_nonce,\n\tinvalid_pow,\n\tinvalid_poa,\n\tinvalid_poa2,\n\tinvalid_nonce_limiter,\n\tinvalid_nonce_limiter_cache_mismatch,\n\tinvalid_packing_difficulty\n]).\n\n-define(BLOCK_REJECTION_IGNORE, [\n\tinvalid_signature,\n\tinvalid_proof_size,\n\tinvalid_first_chunk,\n\tinvalid_second_chunk,\n\tinvalid_poa2_recall_byte2_undefined,\n\tinvalid_hash,\n\tinvalid_timestamp,\n\tinvalid_resigned_solution_hash,\n\tinvalid_nonce_limiter_global_step_number,\n\tinvalid_first_unpacked_chunk,\n\tinvalid_second_unpacked_chunk,\n\tinvalid_first_unpacked_chunk_hash,\n\tinvalid_second_unpacked_chunk_hash\n]).\n\n%% We only do scoring of this many TCP ports per IP address. When there are not enough slots,\n%% we remove the peer from the first slot.\n-define(DEFAULT_PEER_PORT_MAP, {empty_slot, empty_slot, empty_slot, empty_slot, empty_slot,\n\t\tempty_slot, empty_slot, empty_slot, empty_slot, empty_slot}).\n\n-record(state, {}).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Start the server.\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\n%%--------------------------------------------------------------------\n%% @doc Return the list of peers in the given ranking order.\n%%      Rating is an estimate of the peer's effective throughput in\n%%      bytes per millisecond.\n%%\n%%      `lifetime' considers all data ever received from this peer and\n%%      is most useful when we care more about identifying \"good\n%%      samaritans\" rather than maximizing throughput (e.g. when\n%%      polling for new blocks are determing which peer's blocks to\n%%      validated first).\n%%\n%%      `current' weights recently received data higher than old data\n%%      and is most useful when we care more about maximizing throughput\n%%      (e.g. 
when syncing chunks).\n%%\n%% @end\n%%--------------------------------------------------------------------\nget_peers(Ranking) ->\n\tcase catch ets:lookup(?MODULE, {peers, Ranking}) of\n\t\t{'EXIT', _} ->\n\t\t\t[];\n\t\t[] ->\n\t\t\t[];\n\t\t[{{peers, lifetime}, Peers}] ->\n\t\t\tPeers;\n\t\t[{{peers, current}, Peers}] ->\n\t\t\tfilter_peers(Peers, {timestamp, ?CURRENT_PEERS_LIST_FILTER});\n\t\t[{_, Peers}] ->\n\t\t\tPeers\n\tend.\n\nfilter_peers(Peers, {timestamp, Seconds})\n\twhen is_integer(Seconds) ->\n\t\tTimefilter = erlang:system_time(seconds) - Seconds,\n\t\tTag = {connection, last},\n\t\tPattern = {{ar_tags, ?MODULE, '$1', Tag}, '$3'},\n\t\tGuard = [{'>=', '$3', Timefilter}],\n\t\tSelect = ['$1'],\n\t\tTaggedPeers = ets:select(?MODULE, [{Pattern, Guard, Select}]),\n\t\t[ P || T <- TaggedPeers, P <- Peers, T =:= P ].\n\nget_peer_performances(Peers) ->\n\tlists:foldl(\n\t\tfun(Peer, Map) ->\n\t\t\tPerformance = get_or_init_performance(Peer),\n\t\t\tmaps:put(Peer, Performance, Map)\n\t\tend,\n\t\t#{},\n\t\tPeers).\n\n-if(?NETWORK_NAME == \"arweave.N.1\").\nresolve_peers([]) ->\n\t[];\nresolve_peers([RawPeer | Peers]) ->\n\tcase ar_util:safe_parse_peer(RawPeer) of\n\t\t{ok, Peer} ->\n\t\t\tPeer ++ resolve_peers(Peers);\n\t\t{error, invalid} ->\n\t\t\t?LOG_WARNING([{event, failed_to_resolve_trusted_peer},\n\t\t\t\t\t{peer, RawPeer}]),\n\t\t\tresolve_peers(Peers)\n\tend.\n\nget_trusted_peers() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tcase Config#config.peers of\n\t\t[] ->\n\t\t\tArweavePeers = [\n\t\t\t\t\"asia.peers.arweave.xyz\",\n\t\t\t\t\"europe.peers.arweave.xyz\",\n\t\t\t\t\"india.peers.arweave.xyz\",\n\t\t\t\t\"north-america.peers.arweave.xyz\",\n\t\t\t\t\"oceania.peers.arweave.xyz\"\n\t\t\t],\n\t\t\tresolve_peers(ArweavePeers);\n\t\tPeers ->\n\t\t\tPeers\n\tend.\n-else.\nget_trusted_peers() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tConfig#config.peers.\n-endif.\n\n%% @doc Return true if the given peer has a public IPv4 address.\n%% https://en.wikipedia.org/wiki/Reserved_IP_addresses.\nis_public_peer({Oct1, Oct2, Oct3, Oct4, _Port}) ->\n\tis_public_peer({Oct1, Oct2, Oct3, Oct4});\nis_public_peer({0, _, _, _}) ->\n\tfalse;\nis_public_peer({10, _, _, _}) ->\n\tfalse;\nis_public_peer({127, _, _, _}) ->\n\tfalse;\nis_public_peer({100, Oct2, _, _}) when Oct2 >= 64 andalso Oct2 =< 127 ->\n\tfalse;\nis_public_peer({169, 254, _, _}) ->\n\tfalse;\nis_public_peer({172, Oct2, _, _}) when Oct2 >= 16 andalso Oct2 =< 31 ->\n\tfalse;\nis_public_peer({192, 0, 0, _}) ->\n\tfalse;\nis_public_peer({192, 0, 2, _}) ->\n\tfalse;\nis_public_peer({192, 88, 99, _}) ->\n\tfalse;\nis_public_peer({192, 168, _, _}) ->\n\tfalse;\nis_public_peer({198, 18, _, _}) ->\n\tfalse;\nis_public_peer({198, 19, _, _}) ->\n\tfalse;\nis_public_peer({198, 51, 100, _}) ->\n\tfalse;\nis_public_peer({203, 0, 113, _}) ->\n\tfalse;\nis_public_peer({Oct1, _, _, _}) when Oct1 >= 224 ->\n\tfalse;\nis_public_peer(_) ->\n\ttrue.\n\n%% @doc Return the release nubmer reported by the peer.\n%% Return -1 if the release is not known.\nget_peer_release(Peer) ->\n\tcase catch ets:lookup(?MODULE, {peer, Peer}) of\n\t\t[{_, #performance{ release = Release }}] ->\n\t\t\tRelease;\n\t\t_ ->\n\t\t\t-1\n\tend.\n\nrate_fetched_data(Peer, DataType, LatencyMicroseconds, DataSize) ->\n\trate_fetched_data(Peer, DataType, ok, LatencyMicroseconds, DataSize, 1).\nrate_fetched_data(Peer, DataType, ok, LatencyMicroseconds, DataSize, Concurrency) ->\n\ttry\n\t\tgen_server:cast(?MODULE,\n\t\t\t{valid_data, Peer, DataType, LatencyMicroseconds 
/ 1000, DataSize, Concurrency})\n\tcatch\n\t\t_:_ -> ok\n\tend;\nrate_fetched_data(Peer, DataType, _, _LatencyMicroseconds, _DataSize, _Concurrency) ->\n\ttry\n\t\tgen_server:cast(?MODULE, {invalid_data, Peer, DataType})\n\tcatch\n\t\t_:_ -> ok\n\tend.\n\nrate_gossiped_data(Peer, DataType, LatencyMicroseconds, DataSize) ->\n\tcase check_peer(Peer) of\n\t\tok ->\n\t\t\tgen_server:cast(?MODULE,\n\t\t\t\t{valid_data, Peer, DataType,  LatencyMicroseconds / 1000, DataSize, 1});\n\t\t_ ->\n\t\t\tok\n\tend.\n\nissue_warning(Peer, _Type, _Reason) ->\n\tgen_server:cast(?MODULE, {warning, Peer}).\n\nadd_peer(Peer, Release) ->\n\tgen_server:cast(?MODULE, {add_peer, Peer, Release}).\n\n%% @doc Print statistics about the current peers.\nstats(Ranking) ->\n\tConnected = get_peers(Ranking),\n\tio:format(\"Connected peers, in ~s order:~n\", [Ranking]),\n\tstats(Ranking, Connected),\n\tio:format(\"Other known peers:~n\"),\n\tAll = ets:foldl(\n\t\tfun\n\t\t\t({{peer, Peer}, _}, Acc) -> [Peer | Acc];\n\t\t\t(_, Acc) -> Acc\n\t\tend,\n\t\t[],\n\t\t?MODULE\n\t),\n\tstats(All -- Connected).\nstats(Ranking, Peers) ->\n\tlists:foreach(\n\t\tfun(Peer) -> format_stats(Ranking, Peer, get_or_init_performance(Peer)) end,\n\t\tPeers\n\t).\n\ndiscover_peers() ->\n\tcase get_peers(current) of\n\t\t[] ->\n\t\t\tok;\n\t\tPeers ->\n\t\t\tPeer = ar_util:pick_random(Peers),\n\t\t\tdiscover_peers(get_peer_peers(Peer))\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc\n%% @see resolve_and_cache_peer/3\n%% @end\n%%--------------------------------------------------------------------\n-spec resolve_and_cache_peer(RawPeer, Type) -> Return when\n\tRawPeer :: string(),\n\tType :: term(),\n\tReturn :: {ok, {A,A,A,A,Port}} | {error, term()},\n\tA :: pos_integer(),\n\tPort :: pos_integer().\n\nresolve_and_cache_peer(RawPeer, Type) ->\n\tresolve_and_cache_peer(RawPeer, Type, #{}).\n\n%%--------------------------------------------------------------------\n%% @doc Resolve the  domain name of the given peer  (if the given peer\n%% is  an  IP  address)  and  cache it.  Invalidate  the  cache  after\n%% `?STORE_RESOLVED_DOMAIN_S seconds.'  Return {ok, Peer} | {error,\n%% Reason}.\n%% @end\n%%--------------------------------------------------------------------\n-spec resolve_and_cache_peer(RawPeer, Type, Opts) -> Return when\n\tRawPeer :: string(),\n\tType :: term(),\n\tOpts :: map(),\n\tReturn :: {ok, {A,A,A,A,Port}} | {error, term()},\n\tA :: pos_integer(),\n\tPort :: pos_integer().\n\nresolve_and_cache_peer(RawPeer, Type, Opts) ->\n\tNow = maps:get(now, Opts, erlang:system_time(second)),\n\tCacheTTL = maps:get(cache_ttl, Opts,\n\t\t\t    ?STORE_RESOLVED_DOMAIN_S),\n\tState = #{\n\t\traw_peer => RawPeer,\n\t\ttype => Type,\n\t\tnow => Now,\n\t\topts => Opts,\n\t\tcache_ttl => CacheTTL\n\t},\n\n\t% first check if the peer as string (so using a name\n\t% record) is present in ets table. If the peer is not present\n\t% in cache, then it will be updated. 
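(Updating means the raw peer is parsed with\n\t% ar_util:safe_parse_peer/1 and the resolved addresses are written to the ETS\n\t% cache, see resolve_and_cache_peer_empty/1.) 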
Else, the timestamp needs\n\t% to be checked.\n\tcase ets:lookup(?MODULE, {raw_peer, RawPeer}) of\n\t\t[] ->\n\t\t\tresolve_and_cache_peer_empty(State);\n\t\t[{_, {CachedPeer, CachedTimestamp}}] ->\n\t\t\tNewState = State#{\n\t\t\t\tcache_timestamp => CachedTimestamp,\n\t\t\t\tcache_peer => CachedPeer\n\t\t\t},\n\t\t\tresolve_and_cache_peer2(CachedPeer, NewState)\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc check if peer cache did not expired.\n%% @end\n%%--------------------------------------------------------------------\nresolve_and_cache_peer2(CachedPeer, State) ->\n\tNow = maps:get(now, State),\n\tCachedTimestamp = maps:get(cache_timestamp, State),\n\tCacheTTL = maps:get(cache_ttl, State),\n\n\t% if the peer present in cache expired, it needs to be\n\t% refreshed, else it can be returned.\n\tcase CachedTimestamp + CacheTTL < Now of\n\t\ttrue ->\n\t\t\tresolve_and_cache_peer_refresh(CachedPeer, State);\n\t\tfalse ->\n\t\t\t{ok, CachedPeer}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc the cache expired.\n%% @end\n%%--------------------------------------------------------------------\nresolve_and_cache_peer_refresh(_CachedPeer, State) ->\n\tRawPeer = maps:get(raw_peer, State),\n\tOpts = maps:get(opts, State, #{}),\n\n\t% the cache entry expired, in this case, raw peer needs to be\n\t% reparsed and checked. It will return a list of peers.\n\tcase ar_util:safe_parse_peer(RawPeer, Opts) of\n\t\t{ok, NewPeers} when is_list(NewPeers) ->\n\t\t\t%% The cache entry has expired.\n\t\t\tcache_update_peers(NewPeers, State);\n\t\t{error, Error} ->\n\t\t\t{error, Error}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc No peer cached available, we need to update it.\n%% @end\n%%--------------------------------------------------------------------\nresolve_and_cache_peer_empty(State) ->\n\tRawPeer = maps:get(raw_peer, State),\n\tcase ar_util:safe_parse_peer(RawPeer) of\n\t\t{ok, Peers} when is_list(Peers) ->\n\t\t\tcache_insert_peers(Peers, State);\n\t\t{error, Error} ->\n\t\t\t{error, Error}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc insert peers in the cache, when the peer is a DNS containing\n%% more than one entry.\n%% @end\n%%--------------------------------------------------------------------\ncache_insert_peers(Peers, State) ->\n\tcache_insert_peers(Peers, [], State).\n\ncache_insert_peers([], Buffer, _State) ->\n\t[Peer] = ar_util:pick_random(Buffer, 1),\n\t{ok, Peer};\ncache_insert_peers([Peer|Rest], Buffer, State) ->\n\tRawPeer = maps:get(raw_peer, State),\n\tType = maps:get(type, State),\n\tNow = maps:get(now, State),\n\t_ = ets:insert(?MODULE, {{raw_peer, RawPeer}, {Peer, Now}}),\n\t_ = ets:insert(?MODULE, {{Type, Peer}, RawPeer}),\n\tcache_insert_peers(Rest, [Peer|Buffer], State).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc Update a list of peers.\n%% @end\n%%--------------------------------------------------------------------\ncache_update_peers(Peers, State) ->\n\tcache_update_peers(Peers, [], State).\n\ncache_update_peers([], Buffer, _State) ->\n\t[Peer] = ar_util:pick_random(Buffer, 1),\n\t{ok, Peer};\ncache_update_peers([Peer|Rest], Buffer, State) ->\n\tRawPeer = maps:get(raw_peer, State),\n\tCacheTimestamp = maps:get(cache_timestamp, 
State),\n\tType = maps:get(type, State),\n\tNow = maps:get(now, State),\n\tets:delete(?MODULE, {Type, {Peer, CacheTimestamp}}),\n\tets:insert(?MODULE, {{raw_peer, RawPeer}, {Peer, Now}}),\n\tets:insert(?MODULE, {{Type, Peer}, RawPeer}),\n\tcache_update_peers(Rest, [Peer|Buffer], State).\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\n\ninit([]) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tcase Config#config.verify of\n\t\tfalse ->\n\t\t\t%% Trap exit to avoid corrupting any open files on quit.\n\t\t\tprocess_flag(trap_exit, true),\n\t\t\tok = ar_events:subscribe(block),\n\t\t\tload_peers(),\n\t\t\tgen_server:cast(?MODULE, rank_peers),\n\t\t\tgen_server:cast(?MODULE, ping_peers),\n\t\t\t_ = ar_timer:apply_interval(\n\t\t\t\t?GET_MORE_PEERS_FREQUENCY_MS,\n\t\t\t\t?MODULE,\n\t\t\t\tdiscover_peers,\n\t\t\t\t[],\n\t\t\t\t#{ skip_on_shutdown => true }\n\t\t\t);\n\t\t_ ->\n\t\t\tok\n\tend,\n\n\t{ok, #state{}}.\n\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {request, Request}]),\n\t{reply, ok, State}.\n\nhandle_cast({add_peer, Peer, Release}, State) ->\n\tmaybe_add_peer(Peer, Release),\n\t{noreply, State};\n\nhandle_cast(rank_peers, State) ->\n\tLifetimePeers = score_peers(lifetime),\n\tCurrentPeers = score_peers(current),\n\tprometheus_gauge:set(arweave_peer_count, length(LifetimePeers)),\n\tset_ranked_peers(lifetime, rank_peers(LifetimePeers)),\n\tset_ranked_peers(current, rank_peers(CurrentPeers)),\n\tar_util:cast_after(?RANK_PEERS_FREQUENCY_MS, ?MODULE, rank_peers),\n\t{noreply, State};\n\nhandle_cast(ping_peers, State) ->\n\tPeers = get_peers(lifetime),\n\tping_peers(lists:sublist(Peers, 100)),\n\t{noreply, State};\n\nhandle_cast({valid_data, Peer, _DataType, LatencyMilliseconds, DataSize, Concurrency},\n\t\tState) ->\n\tupdate_rating(Peer, LatencyMilliseconds, DataSize, Concurrency, true),\n\t{noreply, State};\n\nhandle_cast({invalid_data, Peer, _DataType}, State) ->\n\tupdate_rating(Peer, false),\n\t{noreply, State};\n\nhandle_cast({warning, Peer}, State) ->\n\tPerformance = update_rating(Peer, false),\n\tcase Performance#performance.average_success < ?MINIMUM_SUCCESS of\n\t\ttrue ->\n\t\t\tremove_peer(low_success, Peer);\n\t\tfalse ->\n\t\t\tok\n\tend,\n\t{noreply, State};\n\nhandle_cast(Cast, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {cast, Cast}]),\n\t{noreply, State}.\n\nhandle_info({event, block, {rejected, Reason, _H, Peer}}, State) when Peer /= no_peer ->\n\tIssueBan = lists:member(Reason, ?BLOCK_REJECTION_BAN),\n\tIssueWarning = lists:member(Reason, ?BLOCK_REJECTION_WARNING),\n\tIgnore = lists:member(Reason, ?BLOCK_REJECTION_IGNORE),\n\n\tcase {IssueBan, IssueWarning, Ignore} of\n\t\t{true, false, false} ->\n\t\t\tar_blacklist_middleware:ban_peer(Peer, ?BAD_BLOCK_BAN_TIME),\n\t\t\tremove_peer(banned, Peer);\n\t\t{false, true, false} ->\n\t\t\tissue_warning(Peer, block_rejected, Reason);\n\t\t{false, false, true} ->\n\t\t\t%% ignore\n\t\t\tok;\n\t\t_ ->\n\t\t\t%% Ever reason should be in exactly 1 list.\n\t\t\terror(\"invalid block rejection reason\")\n\tend,\n\t{noreply, State};\n\nhandle_info({event, block, _}, State) ->\n\t{noreply, State};\n\nhandle_info({'EXIT', _, normal}, State) ->\n\t{noreply, State};\n\nhandle_info(Message, State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {module, ?MODULE}, {message, Message}]),\n\t{noreply, State}.\n\nterminate(Reason, 
_State) ->\n\tstore_peers(),\n\t?LOG_INFO([{module, ?MODULE},{pid, self()},{callback, terminate},{reason, Reason}]).\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nget_peer_peers(Peer) ->\n\tcase ar_http_iface_client:get_peers(Peer) of\n\t\tunavailable -> [];\n\t\tPeers -> Peers\n\tend.\n\nget_or_init_performance(Peer) ->\n\tcase ets:lookup(?MODULE, {peer, Peer}) of\n\t\t[] ->\n\t\t\t#performance{};\n\t\t[{_, Performance}] ->\n\t\t\tPerformance\n\tend.\n\nset_performance(Peer, Performance) ->\n\tets:insert(?MODULE, [{{peer, Peer}, Performance}]).\n\nget_total_rating(Rating) ->\n\tcase ets:lookup(?MODULE, {rating_total, Rating}) of\n\t\t[] ->\n\t\t\t0;\n\t\t[{_, Total}] ->\n\t\t\tTotal\n\tend.\n\nset_total_rating(Rating, Total) ->\n\tets:insert(?MODULE, {{rating_total, Rating}, Total}).\n\nrecalculate_total_rating(Rating) ->\n\tTotalRating = ets:foldl(\n\t\tfun\t({{peer, _Peer}, Performance}, Acc) ->\n\t\t\t\tAcc + get_peer_rating(Rating, Performance);\n\t\t\t(_, Acc) ->\n\t\t\t\tAcc\n\t\tend,\n\t\t0,\n\t\t?MODULE\n\t),\n\tset_total_rating(Rating, TotalRating).\n\nget_peer_rating(Rating, Performance) ->\n\tcase Rating of\n\t\tlifetime ->\n\t\t\tPerformance#performance.lifetime_rating;\n\t\tcurrent ->\n\t\t\tPerformance#performance.current_rating\n\tend.\n\ndiscover_peers([]) ->\n\tok;\ndiscover_peers([Peer | Peers]) ->\n\tcase ets:member(?MODULE, {peer, Peer}) of\n\t\ttrue ->\n\t\t\tok;\n\t\tfalse ->\n\t\t\tcase check_peer(Peer, is_public_peer(Peer)) of\n\t\t\t\tok ->\n\t\t\t\t\tcase ar_http_iface_client:get_info(Peer) of\n\t\t\t\t\t\tinfo_unavailable ->\n\t\t\t\t\t\t\tok;\n\t\t\t\t\t\tInfo ->\n\t\t\t\t\t\t\tcase maps:get(atom_to_binary(network), Info, no_key) of\n\t\t\t\t\t\t\t\t<<?NETWORK_NAME>> ->\n\t\t\t\t\t\t\t\t\tcase maps:get(atom_to_binary(release), Info, no_key) of\n\t\t\t\t\t\t\t\t\t\tRelease when is_integer(Release) ->\n\t\t\t\t\t\t\t\t\t\t\tmaybe_add_peer(Peer, Release);\n\t\t\t\t\t\t\t\t\t\tno_key ->\n\t\t\t\t\t\t\t\t\t\t\tmaybe_add_peer(Peer, 0)\n\t\t\t\t\t\t\t\t\tend;\n\t\t\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t\t\tok\n\t\t\t\t\t\t\tend\n\t\t\t\t\tend;\n\t\t\t\t_ ->\n\t\t\t\t\tok\n\t\t\tend\n\tend,\n\tdiscover_peers(Peers).\n\nformat_stats(lifetime, Peer, Perf) ->\n\tKB = Perf#performance.total_bytes / 1024,\n\tio:format(\n\t\t\"\\t~s ~.2f kB/s (~.2f kB, ~.2f success, ~p transfers)~n\",\n\t\t[string:pad(ar_util:format_peer(Peer), 21, trailing, $\\s),\n\t\t\tfloat(Perf#performance.lifetime_rating), KB,\n\t\t\tPerf#performance.average_success, Perf#performance.total_transfers]);\nformat_stats(current, Peer, Perf) ->\n\tio:format(\n\t\t\"\\t~s ~.2f kB/s (~.2f success)~n\",\n\t\t[string:pad(ar_util:format_peer(Peer), 21, trailing, $\\s),\n\t\t\tfloat(Perf#performance.current_rating),\n\t\t\tPerf#performance.average_success]).\n\nload_peers() ->\n\tcase ar_storage:read_term(peers) of\n\t\tnot_found ->\n\t\t\tok;\n\t\t{ok, {_TotalRating, Records}} ->\n\t\t\t?LOG_INFO([{event, polling_saved_peers}, {records, length(Records)}]),\n\t\t\tar:console(\"Polling saved peers...~n\"),\n\t\t\tload_peers(Records),\n\t\t\trecalculate_total_rating(lifetime),\n\t\t\trecalculate_total_rating(current),\n\t\t\t?LOG_INFO([{event, polled_saved_peers}]),\n\t\t\tar:console(\"Polled saved peers.~n\");\n\t\t{ok, {_TotalRating, Records, Tags}} ->\n\t\t\t?LOG_INFO([{event, polling_saved_peers}, {records, length(Records)}]),\n\t\t\tar:console(\"Polling saved 
peers...~n\"),\n\t\t\tload_peers(Records),\n\t\t\trecalculate_total_rating(lifetime),\n\t\t\trecalculate_total_rating(current),\n\t\t\t[ ets:insert(?MODULE, {K, V}) || {K, V} <- Tags ],\n\t\t\t?LOG_INFO([{event, polled_saved_peers}]),\n\t\t\tar:console(\"Polled saved peers.~n\")\n\tend.\n\nload_peers(Peers) when length(Peers) < 20 ->\n\tar_util:pmap(fun load_peer/1, Peers);\nload_peers(Peers) ->\n\t{Peers2, Peers3} = lists:split(20, Peers),\n\tar_util:pmap(fun load_peer/1, Peers2),\n\tload_peers(Peers3).\n\nload_peer({Peer, Performance}) ->\n\tcase ar_http_iface_client:get_info(Peer, network) of\n\t\tinfo_unavailable ->\n\t\t\t?LOG_DEBUG([{event, peer_unavailable}, {peer, ar_util:format_peer(Peer)}]),\n\t\t\tok;\n\t\t<<?NETWORK_NAME>> ->\n\t\t\tmaybe_rotate_peer_ports(Peer),\n\t\t\tcase Performance of\n\t\t\t\t{performance, TotalBytes, _TotalLatency, Transfers, _Failures, Rating} ->\n\t\t\t\t\t%% For backwards compatibility.\n\t\t\t\t\tset_performance(Peer, #performance{\n\t\t\t\t\t\ttotal_bytes = TotalBytes,\n\t\t\t\t\t\ttotal_throughput = Rating,\n\t\t\t\t\t\ttotal_transfers = Transfers,\n\t\t\t\t\t\taverage_throughput = Rating,\n\t\t\t\t\t\tlifetime_rating = Rating,\n\t\t\t\t\t\tcurrent_rating = Rating\n\t\t\t\t\t});\n\t\t\t\t{performance, TotalBytes, _TotalLatency, Transfers, _Failures, Rating, Release} ->\n\t\t\t\t\t%% For backwards compatibility.\n\t\t\t\t\tset_performance(Peer, #performance{\n\t\t\t\t\t\trelease = Release,\n\t\t\t\t\t\ttotal_bytes = TotalBytes,\n\t\t\t\t\t\ttotal_throughput = Rating,\n\t\t\t\t\t\ttotal_transfers = Transfers,\n\t\t\t\t\t\taverage_throughput = Rating,\n\t\t\t\t\t\tlifetime_rating = Rating,\n\t\t\t\t\t\tcurrent_rating = Rating\n\t\t\t\t\t});\n\t\t\t\t{performance, 3,\n\t\t\t\t\t\t_Release, _TotalBytes, _TotalThroughput, _TotalTransfers,\n\t\t\t\t\t\t_AverageLatency, _AverageThroughput, _AverageSuccess, _LifetimeRating,\n\t\t\t\t\t\t_CurrentRating} ->\n\t\t\t\t\t%% Going forward whenever we change the #performance record we should increment the\n\t\t\t\t\t%% version field so we can match on it when doing a load. 
Here we're handling the\n\t\t\t\t\t%% version 3 format.\n\t\t\t\t\tset_performance(Peer, Performance)\n\t\t\tend,\n\t\t\tok;\n\t\tNetwork ->\n\t\t\t?LOG_DEBUG([{event, peer_from_the_wrong_network},\n\t\t\t\t\t{peer, ar_util:format_peer(Peer)}, {network, Network}]),\n\t\t\tok\n\tend.\n\nmaybe_rotate_peer_ports(Peer) ->\n\t{IP, Port} = get_ip_port(Peer),\n\tcase ets:lookup(?MODULE, {peer_ip, IP}) of\n\t\t[] ->\n\t\t\tets:insert(?MODULE, {{peer_ip, IP},\n\t\t\t\t\t{erlang:setelement(1, ?DEFAULT_PEER_PORT_MAP, Port), 1}});\n\t\t[{_, {PortMap, Position}}] ->\n\t\t\tcase is_in_port_map(Port, PortMap) of\n\t\t\t\t{true, _} ->\n\t\t\t\t\tok;\n\t\t\t\tfalse ->\n\t\t\t\t\tMaxSize = erlang:size(?DEFAULT_PEER_PORT_MAP),\n\t\t\t\t\tcase Position < MaxSize of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\tets:insert(?MODULE, {{peer_ip, IP},\n\t\t\t\t\t\t\t\t\t{erlang:setelement(Position + 1, PortMap, Port),\n\t\t\t\t\t\t\t\t\tPosition + 1}});\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tRemovedPeer = construct_peer(IP, element(1, PortMap)),\n\t\t\t\t\t\t\tPortMap2 = shift_port_map_left(PortMap),\n\t\t\t\t\t\t\tPortMap3 = erlang:setelement(MaxSize, PortMap2, Port),\n\t\t\t\t\t\t\tets:insert(?MODULE, {{peer_ip, IP}, {PortMap3, MaxSize}}),\n\t\t\t\t\t\t\tremove_peer(rotated, RemovedPeer)\n\t\t\t\t\tend\n\t\t\tend\n\tend.\n\nget_ip_port({A, B, C, D, Port}) ->\n\t{{A, B, C, D}, Port}.\n\nconstruct_peer({A, B, C, D}, Port) ->\n\t{A, B, C, D, Port}.\n\nis_in_port_map(Port, PortMap) ->\n\tis_in_port_map(Port, PortMap, erlang:size(PortMap), 1).\n\nis_in_port_map(_Port, _PortMap, Max, N) when N > Max ->\n\tfalse;\nis_in_port_map(Port, PortMap, Max, N) ->\n\tcase element(N, PortMap) == Port of\n\t\ttrue ->\n\t\t\t{true, N};\n\t\tfalse ->\n\t\t\tis_in_port_map(Port, PortMap, Max, N + 1)\n\tend.\n\nshift_port_map_left(PortMap) ->\n\tshift_port_map_left(PortMap, erlang:size(PortMap), 1).\n\nshift_port_map_left(PortMap, Max, N) when N == Max ->\n\terlang:setelement(N, PortMap, empty_slot);\nshift_port_map_left(PortMap, Max, N) ->\n\tPortMap2 = erlang:setelement(N, PortMap, element(N + 1, PortMap)),\n\tshift_port_map_left(PortMap2, Max, N + 1).\n\nping_peers(Peers) when length(Peers) < 100 ->\n\tar_util:pmap(fun ar_http_iface_client:add_peer/1, Peers);\nping_peers(Peers) ->\n\t{Send, Rest} = lists:split(100, Peers),\n\tar_util:pmap(fun ar_http_iface_client:add_peer/1, Send),\n\tping_peers(Rest).\n\n-ifdef(AR_TEST).\n%% Do not filter out loopback IP addresses with custom port in the debug mode\n%% to allow multiple local VMs to peer with each other.\nis_loopback_ip({127, _, _, _, Port}) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tPort == Config#config.port;\nis_loopback_ip({_, _, _, _, _}) ->\n\tfalse.\n-else.\n%% @doc Is the IP address in question a loopback ('us') address?\nis_loopback_ip({A, B, C, D, _Port}) -> is_loopback_ip({A, B, C, D});\nis_loopback_ip({127, _, _, _}) -> true;\nis_loopback_ip({0, _, _, _}) -> true;\nis_loopback_ip({169, 254, _, _}) -> true;\nis_loopback_ip({255, 255, 255, 255}) -> true;\nis_loopback_ip({_, _, _, _}) -> false.\n-endif.\n\nscore_peers(Rating) ->\n\tTotal = get_total_rating(Rating),\n\tets:foldl(\n\t\tfun ({{peer, Peer}, Performance}, Acc) ->\n\t\t\t\t%% Bigger score increases the chances to end up on the top\n\t\t\t\t%% of the peer list, but at the same time the ranking is\n\t\t\t\t%% probabilistic to always give everyone a chance to improve\n\t\t\t\t%% in the competition (i.e., reduce the advantage gained by\n\t\t\t\t%% being the first to earn a reputation).\n\t\t\t\tScore = rand:uniform() * 
get_peer_rating(Rating, Performance)\n\t\t\t\t\t\t/ (Total + 0.0001),\n\t\t\t\t[{Peer, Score} | Acc];\n\t\t\t(_, Acc) ->\n\t\t\t\tAcc\n\t\tend,\n\t\t[],\n\t\t?MODULE\n\t).\n\n%% @doc Return a ranked list of peers.\nrank_peers(ScoredPeers) ->\n\tSortedReversed = lists:reverse(\n\t\tlists:sort(fun({_, S1}, {_, S2}) -> S1 >= S2 end, ScoredPeers)),\n\tGroupedBySubnet =\n\t\tlists:foldl(\n\t\t\tfun({{A, B, _C, _D, _Port}, _Score} = Peer, Acc) ->\n\t\t\t\tmaps:update_with({A, B}, fun(L) -> [Peer | L] end, [Peer], Acc)\n\t\t\tend,\n\t\t\t#{},\n\t\t\tSortedReversed\n\t\t),\n\tScoredSubnetPeers =\n\t\tmaps:fold(\n\t\t\tfun(_Subnet, SubnetPeers, Acc) ->\n\t\t\t\telement(2, lists:foldl(\n\t\t\t\t\tfun({Peer, Score}, {N, Acc2}) ->\n\t\t\t\t\t\t%% At first we take the best peer from every subnet,\n\t\t\t\t\t\t%% then take the second best from every subnet, etc.\n\t\t\t\t\t\t{N + 1, [{Peer, {-N, Score}} | Acc2]}\n\t\t\t\t\tend,\n\t\t\t\t\t{0, Acc},\n\t\t\t\t\tSubnetPeers\n\t\t\t\t))\n\t\t\tend,\n\t\t\t[],\n\t\t\tGroupedBySubnet\n\t\t),\n\t[Peer || {Peer, _} <- lists:sort(\n\t\tfun({_, S1}, {_, S2}) -> S1 >= S2 end,\n\t\tScoredSubnetPeers\n\t)].\n\nset_ranked_peers(Rating, Peers) ->\n\tets:insert(?MODULE, {{peers, Rating}, lists:sublist(Peers, ?MAX_PEERS)}).\n\ncheck_peer(Peer) ->\n\tcheck_peer(Peer, not is_loopback_ip(Peer)).\ncheck_peer(Peer, IsPeerScopeValid) ->\n\tIsBlacklisted = lists:member(Peer, ?PEER_PERMANENT_BLACKLIST),\n\tIsBanned = ar_blacklist_middleware:is_peer_banned(Peer) == banned,\n\tcase IsPeerScopeValid andalso not IsBlacklisted andalso not IsBanned of\n\t\ttrue ->\n\t\t\tok;\n\t\tfalse ->\n\t\t\treject\n\tend.\n\nupdate_rating(Peer, IsSuccess) ->\n\tupdate_rating(Peer, undefined, undefined, 1, IsSuccess).\nupdate_rating(Peer, LatencyMilliseconds, DataSize, Concurrency, false)\n\t\twhen LatencyMilliseconds =/= undefined; DataSize =/= undefined ->\n\t%% Don't credit peers for failed requests.\n\tupdate_rating(Peer, undefined, undefined, Concurrency, false);\nupdate_rating(Peer, 0, _DataSize, Concurrency, IsSuccess) ->\n\tupdate_rating(Peer, undefined, undefined, Concurrency, IsSuccess);\nupdate_rating(Peer, 0.0, _DataSize, Concurrency, IsSuccess) ->\n\tupdate_rating(Peer, undefined, undefined, Concurrency, IsSuccess);\nupdate_rating(Peer, LatencyMilliseconds, DataSize, Concurrency, IsSuccess) ->\n\tPerformance = get_or_init_performance(Peer),\n\n\t#performance{\n\t\ttotal_bytes = TotalBytes,\n\t\ttotal_throughput = TotalThroughput,\n\t\ttotal_transfers = TotalTransfers,\n\t\taverage_latency = AverageLatency,\n\t\taverage_throughput = AverageThroughput,\n\t\taverage_success = AverageSuccess,\n\t\tlifetime_rating = LifetimeRating,\n\t\tcurrent_rating = CurrentRating\n\t} = Performance,\n\tTotalBytes2 = case DataSize of\n\t\tundefined -> TotalBytes;\n\t\t_ -> TotalBytes + DataSize\n\tend,\n\tAverageLatency2 = case LatencyMilliseconds of\n\t\tundefined -> AverageLatency;\n\t\t_ -> calculate_ema(AverageLatency, LatencyMilliseconds, ?THROUGHPUT_ALPHA)\n\tend,\n\t%% In order to approximate the impact of multiple concurrent requests we multiply\n\t%% DataSize by the Concurrency value. 
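For example, with DataSize = 50, Concurrency = 10\n\t%% and LatencyMilliseconds = 1000 the sample fed into the AverageThroughput EMA is\n\t%% (50 * 10) / 1000 = 0.5 bytes per millisecond, while TotalThroughput only accrues\n\t%% 50 / 1000 = 0.05. 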
We do this *only* when updating the AverageThroughput\n\t%% value so that it doesn't distort the TotalThroughput.\n\tAverageThroughput2 = case LatencyMilliseconds of\n\t\tundefined -> AverageThroughput;\n\t\t_ -> calculate_ema(\n\t\t\tAverageThroughput, (DataSize * Concurrency) / LatencyMilliseconds, ?THROUGHPUT_ALPHA)\n\tend,\n\tTotalThroughput2 = case LatencyMilliseconds of\n\t\tundefined -> TotalThroughput;\n\t\t_ -> TotalThroughput + (DataSize / LatencyMilliseconds)\n\tend,\n\tTotalTransfers2 = case DataSize of\n\t\tundefined -> TotalTransfers;\n\t\t_ -> TotalTransfers + 1\n\tend,\n\tAverageSuccess2 = calculate_ema(AverageSuccess, ar_util:bool_to_int(IsSuccess), ?SUCCESS_ALPHA),\n\t%% Rating is an estimate of the peer's effective throughput in bytes per millisecond.\n\t%% 'lifetime' considers all data ever received from this peer\n\t%% 'current' considers recently received data\n\tLifetimeRating2 = case TotalTransfers2 > 0 of\n\t\ttrue -> (TotalThroughput2 / TotalTransfers2) * AverageSuccess2;\n\t\t_ -> LifetimeRating\n\tend,\n\tCurrentRating2 = case AverageThroughput2 > 0 of\n\t\ttrue -> AverageThroughput2 * AverageSuccess2;\n\t\t_ -> CurrentRating\n\tend,\n\tPerformance2 = Performance#performance{\n\t\ttotal_bytes = TotalBytes2,\n\t\ttotal_throughput = TotalThroughput2,\n\t\ttotal_transfers = TotalTransfers2,\n\t\taverage_latency = AverageLatency2,\n\t\taverage_throughput = AverageThroughput2,\n\t\taverage_success = AverageSuccess2,\n\t\tlifetime_rating = LifetimeRating2,\n\t\tcurrent_rating = CurrentRating2\n\t},\n\tTotalLifetimeRating = get_total_rating(lifetime),\n\tTotalLifetimeRating2 = TotalLifetimeRating - LifetimeRating + LifetimeRating2,\n\tTotalCurrentRating = get_total_rating(current),\n\tTotalCurrentRating2 = TotalCurrentRating - CurrentRating + CurrentRating2,\n\n\tmaybe_rotate_peer_ports(Peer),\n\tset_performance(Peer, Performance2),\n\tset_total_rating(lifetime, TotalLifetimeRating2),\n\tset_total_rating(current, TotalCurrentRating2),\n\tPerformance2.\n\ncalculate_ema(OldEMA, Value, Alpha) ->\n\tAlpha * Value + (1 - Alpha) * OldEMA.\n\nmaybe_add_peer(Peer, Release) ->\n\tmaybe_rotate_peer_ports(Peer),\n\t%% If we've just added his peer, flag it as active and connected.\n\tconnected_peer(Peer),\n\tcase ets:lookup(?MODULE, {peer, Peer}) of\n\t\t[{_, #performance{ release = Release }}] ->\n\t\t\tok;\n\t\t[{_, Performance}] ->\n\t\t\tset_performance(Peer, Performance#performance{ release = Release });\n\t\t[] ->\n\t\t\tcase check_peer(Peer) of\n\t\t\t\tok ->\n\t\t\t\t\tset_performance(Peer, #performance{ release = Release });\n\t\t\t\t_ ->\n\t\t\t\t\tok\n\t\t\tend\n\tend.\n\nremove_peer(Reason, RemovedPeer) ->\n\tcase Reason of\n\t\trotated ->\n\t\t\tok;\n\t\t_ ->\n\t\t\t?LOG_DEBUG([\n\t\t\t\t{event, remove_peer},\n\t\t\t\t{peer, ar_util:format_peer(RemovedPeer)},\n\t\t\t\t{reason, Reason}\n\t\t\t])\n\tend,\n\tPerformance = get_or_init_performance(RemovedPeer),\n\tTotalLifetimeRating = get_total_rating(lifetime),\n\tTotalCurrentRating = get_total_rating(current),\n\tset_total_rating(lifetime, TotalLifetimeRating - get_peer_rating(lifetime, Performance)),\n\tset_total_rating(current, TotalCurrentRating - get_peer_rating(current, Performance)),\n\tets:delete(?MODULE, {peer, RemovedPeer}),\n\tremove_peer_port(RemovedPeer),\n\tar_events:send(peer, {removed, RemovedPeer}).\n\nremove_peer_port(Peer) ->\n\t{IP, Port} = get_ip_port(Peer),\n\tcase ets:lookup(?MODULE, {peer_ip, IP}) of\n\t\t[] ->\n\t\t\tok;\n\t\t[{_, {PortMap, Position}}] ->\n\t\t\tcase is_in_port_map(Port, PortMap) 
of\n\t\t\t\tfalse ->\n\t\t\t\t\tok;\n\t\t\t\t{true, N} ->\n\t\t\t\t\tPortMap2 = erlang:setelement(N, PortMap, empty_slot),\n\t\t\t\t\tcase is_port_map_empty(PortMap2) of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\tets:delete(?MODULE, {peer_ip, IP});\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tets:insert(?MODULE, {{peer_ip, IP}, {PortMap2, Position}})\n\t\t\t\t\tend\n\t\t\tend\n\tend.\n\nis_port_map_empty(PortMap) ->\n\tis_port_map_empty(PortMap, erlang:size(PortMap), 1).\n\nis_port_map_empty(_PortMap, Max, N) when N > Max ->\n\ttrue;\nis_port_map_empty(PortMap, Max, N) ->\n\tcase element(N, PortMap) of\n\t\tempty_slot ->\n\t\t\tis_port_map_empty(PortMap, Max, N + 1);\n\t\t_ ->\n\t\t\tfalse\n\tend.\n\nstore_peers() ->\n\tcase get_total_rating(lifetime) of\n\t\t0 ->\n\t\t\tok;\n\t\tTotal ->\n\t\t\tRecords =\n\t\t\t\tets:foldl(\n\t\t\t\t\tfun\t({{peer, Peer}, Performance}, Acc) ->\n\t\t\t\t\t\t\t[{Peer, Performance} | Acc];\n\t\t\t\t\t\t(_, Acc) ->\n\t\t\t\t\t\t\tAcc\n\t\t\t\t\tend,\n\t\t\t\t\t[],\n\t\t\t\t\t?MODULE\n\t\t\t\t),\n\t\t\tTags = ets:foldl(fun ({{ar_tags, _, _, _}, _} = Tag, Acc) ->\n\t\t\t\t\t\t[Tag|Acc];\n\t\t\t\t\t     (_, Acc) -> Acc\n\t\t\t\t\tend, [], ?MODULE),\n\t\t\t?LOG_INFO([{event, store_peers}\n\t\t\t\t   , {total, Total}\n\t\t\t\t   , {records, length(Records)}\n\t\t\t\t   , {tags, length(Tags)}]),\n\t\t\tar_storage:write_term(peers, {Total, Records, Tags})\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc internal function to tag a peer.\n%% @end\n%%--------------------------------------------------------------------\nset_tag(Peer, Tag, Value) ->\n\tets:insert(?MODULE, {{ar_tags, ?MODULE, Peer, Tag}, Value}).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc internal function to get tag value set on a peer.\n%% @end\n%%--------------------------------------------------------------------\nget_tag(Peer, Tag) ->\n\tPattern = {{ar_tags, ?MODULE, Peer, Tag}, '$1'},\n\tGuard = [],\n\tSelect = ['$1'],\n\tcase ets:select(?MODULE, [{Pattern, Guard, Select}]) of\n\t\t[] -> {error, not_found};\n\t\t[V] -> {ok, V}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc defined a peer as connected (in HTTP sense).\n%% @end\n%%--------------------------------------------------------------------\nconnected_peer(Peer) ->\n\tset_tag(Peer, {connection, last}, erlang:system_time(second)),\n\tset_tag(Peer, {connection, active}, true).\n\n%%--------------------------------------------------------------------\n%% @doc defined a peer as disconnected (in HTTP sense).\n%% @end\n%%--------------------------------------------------------------------\ndisconnected_peer(Peer) ->\n\tset_tag(Peer, {connection, active}, false).\n\n%%--------------------------------------------------------------------\n%% @doc returns peer's timestamp.\n%% @end\n%%--------------------------------------------------------------------\nget_connection_timestamp_peer(Peer) ->\n\tcase get_tag(Peer, {connection, last}) of\n\t\t{ok, V} -> V;\n\t\t_ -> undefined\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc returns the HTTP connection state of a peer.\n%% @end\n%%--------------------------------------------------------------------\nis_connected_peer(Peer) ->\n\tcase get_tag(Peer, {connection, active}) of\n\t\t{ok, V} -> V;\n\t\t{error, _} -> false\n\tend.\n\n%%%===================================================================\n%%% 
Tests.\n%%%===================================================================\nconnected_peer_test() ->\n\tPeer = {100, 117, 109, 98, 1234},\n\n\t% drop all objects from the table to start with a clean state\n\tets:delete_all_objects(?MODULE),\n\n\t% get all peers connected, it should returns nothing by\n\t% default because the table is empty.\n\t?assertEqual(undefined, get_connection_timestamp_peer(Peer)),\n\n\t% manually add a new peer using set_ranked_peers function.\n\t% the node is not connected because gun did not manage the\n\t% connection in this test.\n\tset_ranked_peers(lifetime, [Peer]),\n\tset_ranked_peers(current, [Peer]),\n\t?assertEqual(false, is_connected_peer(Peer)),\n\t?assertEqual(undefined, get_connection_timestamp_peer(Peer)),\n\n\t% force this peer to be connected using connected_peer\n\t% function. A timestamp is created.\n\tconnected_peer(Peer),\n\tTimestamp = get_connection_timestamp_peer(Peer),\n\t?assertEqual(true, is_connected_peer(Peer)),\n\t?assertEqual(Timestamp, get_connection_timestamp_peer(Peer)),\n\t?assertNotEqual(undefined, get_connection_timestamp_peer(Peer)),\n\t?assertEqual([Peer], get_peers(lifetime)),\n\t?assertEqual([Peer], get_peers(current)),\n\n\t% Now remove the connection to the peer. A timestamp must\n\t% still be there.\n\tdisconnected_peer(Peer),\n\t?assertEqual(false, is_connected_peer(Peer)),\n\t?assertNotEqual(undefined, get_connection_timestamp_peer(Peer)),\n\t?assertEqual([Peer], get_peers(lifetime)),\n\t?assertEqual([Peer], get_peers(current)),\n\n\t% let modify manually the timestamp to check get_peers/1\n\t% function, and overwrite Peer timestamp with some defined\n\t% values.\n\tTime = erlang:system_time(second),\n\t% Go above the limit\n\tLimit = Time-((?CURRENT_PEERS_LIST_FILTER+10)*60*60*24),\n\tset_tag(Peer, {connection, last}, Limit),\n\t?assertEqual([], get_peers(current)),\n\t?assertEqual([Peer], get_peers(lifetime)).\n\nrotate_peer_ports_test() ->\n\tPeer = {2, 2, 2, 2, 1},\n\tmaybe_rotate_peer_ports(Peer),\n\t[{_, {PortMap, 1}}] = ets:lookup(?MODULE, {peer_ip, {2, 2, 2, 2}}),\n\t?assertEqual(1, element(1, PortMap)),\n\tremove_peer(test, Peer),\n\t?assertEqual([], ets:lookup(?MODULE, {peer_ip, {2, 2, 2, 2}})),\n\tmaybe_rotate_peer_ports(Peer),\n\tPeer2 = {2, 2, 2, 2, 2},\n\tmaybe_rotate_peer_ports(Peer2),\n\t[{_, {PortMap2, 2}}] = ets:lookup(?MODULE, {peer_ip, {2, 2, 2, 2}}),\n\t?assertEqual(1, element(1, PortMap2)),\n\t?assertEqual(2, element(2, PortMap2)),\n\tremove_peer(test, Peer),\n\t[{_, {PortMap3, 2}}] = ets:lookup(?MODULE, {peer_ip, {2, 2, 2, 2}}),\n\t?assertEqual(empty_slot, element(1, PortMap3)),\n\t?assertEqual(2, element(2, PortMap3)),\n\tPeer3 = {2, 2, 2, 2, 3},\n\tPeer4 = {2, 2, 2, 2, 4},\n\tPeer5 = {2, 2, 2, 2, 5},\n\tPeer6 = {2, 2, 2, 2, 6},\n\tPeer7 = {2, 2, 2, 2, 7},\n\tPeer8 = {2, 2, 2, 2, 8},\n\tPeer9 = {2, 2, 2, 2, 9},\n\tPeer10 = {2, 2, 2, 2, 10},\n\tPeer11 = {2, 2, 2, 2, 11},\n\tmaybe_rotate_peer_ports(Peer3),\n\tmaybe_rotate_peer_ports(Peer4),\n\tmaybe_rotate_peer_ports(Peer5),\n\tmaybe_rotate_peer_ports(Peer6),\n\tmaybe_rotate_peer_ports(Peer7),\n\tmaybe_rotate_peer_ports(Peer8),\n\tmaybe_rotate_peer_ports(Peer9),\n\tmaybe_rotate_peer_ports(Peer10),\n\t[{_, {PortMap4, 10}}] = ets:lookup(?MODULE, {peer_ip, {2, 2, 2, 2}}),\n\t?assertEqual(empty_slot, element(1, PortMap4)),\n\t?assertEqual(2, element(2, PortMap4)),\n\t?assertEqual(10, element(10, PortMap4)),\n\tmaybe_rotate_peer_ports(Peer8),\n\tmaybe_rotate_peer_ports(Peer9),\n\tmaybe_rotate_peer_ports(Peer10),\n\t[{_, {PortMap5, 10}}] = 
ets:lookup(?MODULE, {peer_ip, {2, 2, 2, 2}}),\n\t?assertEqual(empty_slot, element(1, PortMap5)),\n\t?assertEqual(2, element(2, PortMap5)),\n\t?assertEqual(3, element(3, PortMap5)),\n\t?assertEqual(9, element(9, PortMap5)),\n\t?assertEqual(10, element(10, PortMap5)),\n\tmaybe_rotate_peer_ports(Peer11),\n\t[{_, {PortMap6, 10}}] = ets:lookup(?MODULE, {peer_ip, {2, 2, 2, 2}}),\n\t?assertEqual(element(2, PortMap5), element(1, PortMap6)),\n\t?assertEqual(3, element(2, PortMap6)),\n\t?assertEqual(4, element(3, PortMap6)),\n\t?assertEqual(5, element(4, PortMap6)),\n\t?assertEqual(11, element(10, PortMap6)),\n\tmaybe_rotate_peer_ports(Peer11),\n\t[{_, {PortMap7, 10}}] = ets:lookup(?MODULE, {peer_ip, {2, 2, 2, 2}}),\n\t?assertEqual(element(2, PortMap5), element(1, PortMap7)),\n\t?assertEqual(3, element(2, PortMap7)),\n\t?assertEqual(4, element(3, PortMap7)),\n\t?assertEqual(5, element(4, PortMap7)),\n\t?assertEqual(11, element(10, PortMap7)),\n\tremove_peer(test, Peer4),\n\t[{_, {PortMap8, 10}}] = ets:lookup(?MODULE, {peer_ip, {2, 2, 2, 2}}),\n\t?assertEqual(empty_slot, element(3, PortMap8)),\n\t?assertEqual(3, element(2, PortMap8)),\n\t?assertEqual(5, element(4, PortMap8)),\n\tremove_peer(test, Peer2),\n\tremove_peer(test, Peer3),\n\tremove_peer(test, Peer5),\n\tremove_peer(test, Peer6),\n\tremove_peer(test, Peer7),\n\tremove_peer(test, Peer8),\n\tremove_peer(test, Peer9),\n\tremove_peer(test, Peer10),\n\t[{_, {PortMap9, 10}}] = ets:lookup(?MODULE, {peer_ip, {2, 2, 2, 2}}),\n\t?assertEqual(11, element(10, PortMap9)),\n\tremove_peer(test, Peer11),\n\t?assertEqual([], ets:lookup(?MODULE, {peer_ip, {2, 2, 2, 2}})).\n\nupdate_rating_test() ->\n\tets:delete_all_objects(?MODULE),\n\tPeer1 = {1, 2, 3, 4, 1984},\n\tPeer2 = {5, 6, 7, 8, 1984},\n\n\t?assertEqual(#performance{}, get_or_init_performance(Peer1)),\n\t?assertEqual(0, get_total_rating(lifetime)),\n\t?assertEqual(0, get_total_rating(current)),\n\n\tupdate_rating(Peer1, true),\n\t?assertEqual(#performance{}, get_or_init_performance(Peer1)),\n\t?assertEqual(0, get_total_rating(lifetime)),\n\t?assertEqual(0, get_total_rating(current)),\n\n\n\tupdate_rating(Peer1, false),\n\tassert_performance(#performance{ average_success = 0.965 }, get_or_init_performance(Peer1)),\n\t?assertEqual(0, get_total_rating(lifetime)),\n\t?assertEqual(0, get_total_rating(current)),\n\n\n\t%% Failed transfer should impact bytes or latency\n\tupdate_rating(Peer1, 1000, 100, 1, false),\n\tassert_performance(#performance{\n\t\t\taverage_success = 0.9312 },\n\t\tget_or_init_performance(Peer1)),\n\t?assertEqual(0, get_total_rating(lifetime)),\n\t?assertEqual(0, get_total_rating(current)),\n\n\n\t%% Test successful transfer\n\tupdate_rating(Peer1, 1000, 100, 1, true),\n\tassert_performance(#performance{\n\t\t\ttotal_bytes = 100,\n\t\t\ttotal_throughput = 0.1,\n\t\t\ttotal_transfers = 1,\n\t\t\taverage_latency = 50,\n\t\t\taverage_throughput = 0.005,\n\t\t\taverage_success = 0.9336,\n\t\t\tlifetime_rating = 0.0934,\n\t\t\tcurrent_rating = 0.0047 },\n\t\tget_or_init_performance(Peer1)),\n\t?assertEqual(0.0934, round(get_total_rating(lifetime), 4)),\n\t?assertEqual(0.0047, round(get_total_rating(current), 4)),\n\n\t%% Test concurrency\n\tupdate_rating(Peer1, 1000, 50, 10, true),\n\tassert_performance(#performance{\n\t\t\ttotal_bytes = 150,\n\t\t\ttotal_throughput = 0.15,\n\t\t\ttotal_transfers = 2,\n\t\t\taverage_latency = 97.5,\n\t\t\taverage_throughput = 0.0298,\n\t\t\taverage_success = 0.936,\n\t\t\tlifetime_rating = 0.0702,\n\t\t\tcurrent_rating = 0.0278 
},\n\t\tget_or_init_performance(Peer1)),\n\t?assertEqual(0.0702, round(get_total_rating(lifetime), 4)),\n\t?assertEqual(0.0278, round(get_total_rating(current), 4)),\n\n\t%% With 2 peers total rating should be the sum of both\n\tupdate_rating(Peer2, 1000, 100, 1, true),\n\tassert_performance(#performance{\n\t\t\ttotal_bytes = 100,\n\t\t\ttotal_throughput = 0.1,\n\t\t\ttotal_transfers = 1,\n\t\t\taverage_latency = 50,\n\t\t\taverage_throughput = 0.005,\n\t\t\taverage_success = 1,\n\t\t\tlifetime_rating = 0.1,\n\t\t\tcurrent_rating = 0.005 },\n\t\tget_or_init_performance(Peer2)),\n\t?assertEqual(0.1702, round(get_total_rating(lifetime), 4)),\n\t?assertEqual(0.0328, round(get_total_rating(current), 4)).\n\nblock_rejected_test_() ->\n\t[\n\t\t{timeout, 30, fun test_block_rejected/0}\n\t].\n\ntest_block_rejected() ->\n\tar_blacklist_middleware:cleanup_ban(whereis(ar_blacklist_middleware)),\n\tPeer = {127, 0, 0, 1, ar_test_node:get_unused_port()},\n\tar_peers:add_peer(Peer, -1),\n\n\tar_events:send(block, {rejected, invalid_signature, <<>>, Peer}),\n\ttimer:sleep(5000),\n\n\t?assertEqual(#{Peer => #performance{}}, ar_peers:get_peer_performances([Peer])),\n\t?assertEqual(not_banned, ar_blacklist_middleware:is_peer_banned(Peer)),\n\n\tar_events:send(block, {rejected, failed_to_fetch_first_chunk, <<>>, Peer}),\n\ttimer:sleep(5000),\n\n\t?assertEqual(\n\t\t#{Peer => #performance{ average_success = 0.965 }},\n\t\tar_peers:get_peer_performances([Peer])),\n\t?assertEqual(not_banned, ar_blacklist_middleware:is_peer_banned(Peer)),\n\n\tar_events:send(block, {rejected, invalid_previous_solution_hash, <<>>, Peer}),\n\ttimer:sleep(5000),\n\n\t?assertEqual(#{Peer => #performance{}}, ar_peers:get_peer_performances([Peer])),\n\t?assertEqual(banned, ar_blacklist_middleware:is_peer_banned(Peer)).\n\nrate_data_test() ->\n\tets:delete_all_objects(?MODULE),\n\tPeer1 = {1, 2, 3, 4, 1984},\n\n\t?assertEqual(#performance{}, get_or_init_performance(Peer1)),\n\t?assertEqual(0, get_total_rating(lifetime)),\n\t?assertEqual(0, get_total_rating(current)),\n\n\tar_peers:rate_fetched_data(Peer1, chunk, {error, timeout}, 1000000, 100, 10),\n\ttimer:sleep(500),\n\tassert_performance(#performance{ average_success = 0.965 }, get_or_init_performance(Peer1)),\n\t?assertEqual(0, get_total_rating(lifetime)),\n\t?assertEqual(0, get_total_rating(current)),\n\n\tar_peers:rate_fetched_data(Peer1, block, 1000000, 100),\n\ttimer:sleep(500),\n\tassert_performance(#performance{\n\t\t\ttotal_bytes = 100,\n\t\t\ttotal_throughput = 0.1,\n\t\t\ttotal_transfers = 1,\n\t\t\taverage_latency = 50,\n\t\t\taverage_throughput = 0.005,\n\t\t\taverage_success = 0.9662,\n\t\t\tlifetime_rating = 0.0966,\n\t\t\tcurrent_rating = 0.0048 },\n\t\tget_or_init_performance(Peer1)),\n\t?assertEqual(0.0966, round(get_total_rating(lifetime), 4)),\n\t?assertEqual(0.0048, round(get_total_rating(current), 4)),\n\n\tar_peers:rate_fetched_data(Peer1, tx, ok, 1000000, 100, 2),\n\ttimer:sleep(500),\n\tassert_performance(#performance{\n\t\t\ttotal_bytes = 200,\n\t\t\ttotal_throughput = 0.2,\n\t\t\ttotal_transfers = 2,\n\t\t\taverage_latency = 97.5,\n\t\t\taverage_throughput = 0.0148,\n\t\t\taverage_success = 0.9674,\n\t\t\tlifetime_rating = 0.0967,\n\t\t\tcurrent_rating = 0.0143 },\n\t\tget_or_init_performance(Peer1)),\n\t?assertEqual(0.0967, round(get_total_rating(lifetime), 4)),\n\t?assertEqual(0.0143, round(get_total_rating(current), 4)),\n\n\tar_peers:rate_gossiped_data(Peer1, block, 1000000, 
100),\n\ttimer:sleep(500),\n\tassert_performance(#performance{\n\t\t\ttotal_bytes = 300,\n\t\t\ttotal_throughput = 0.3,\n\t\t\ttotal_transfers = 3,\n\t\t\taverage_latency = 142.625,\n\t\t\taverage_throughput = 0.019,\n\t\t\taverage_success = 0.9685,\n\t\t\tlifetime_rating = 0.0969,\n\t\t\tcurrent_rating = 0.0184 },\n\t\tget_or_init_performance(Peer1)),\n\t?assertEqual(0.0969, round(get_total_rating(lifetime), 4)),\n\t?assertEqual(0.0184, round(get_total_rating(current), 4)).\n\nassert_performance(Expected, Actual) ->\n\t?assertEqual(Expected#performance.total_bytes, Actual#performance.total_bytes),\n\t?assertEqual(\n\t\tround(Expected#performance.total_throughput, 4),\n\t\tround(Actual#performance.total_throughput, 4)),\n\t?assertEqual(Expected#performance.total_transfers, Actual#performance.total_transfers),\n\t?assertEqual(\n\t\tround(Expected#performance.average_latency, 4),\n\t\tround(Actual#performance.average_latency, 4)),\n\t?assertEqual(\n\t\tround(Expected#performance.average_throughput, 4),\n\t\tround(Actual#performance.average_throughput, 4)),\n\t?assertEqual(\n\t\tround(Expected#performance.average_success, 4),\n\t\tround(Actual#performance.average_success, 4)),\n\t?assertEqual(\n\t\tround(Expected#performance.lifetime_rating, 4),\n\t\tround(Actual#performance.lifetime_rating, 4)),\n\t?assertEqual(\n\t\tround(Expected#performance.current_rating, 4),\n\t\tround(Actual#performance.current_rating, 4)).\n\nround(Float, N) ->\n    Multiplier = math:pow(10, N),\n    round(Float * Multiplier) / Multiplier.\n"
  },
  {
    "path": "apps/arweave/src/ar_poa.erl",
    "content": "%%% @doc This module implements all mechanisms required to validate a proof of access\n%%% for a chunk of data received from the network.\n-module(ar_poa).\n\n-export([get_data_path_validation_ruleset/2, get_data_path_validation_ruleset/3,\n\t\t validate_pre_fork_2_5/4, validate/1, chunk_proof/2, chunk_proof/3, chunk_proof/5,\n\t\t validate_paths/1, get_padded_offset/1, get_padded_offset/2]).\n\n-include_lib(\"arweave/include/ar_poa.hrl\").\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave/include/ar_consensus.hrl\").\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Return the merkle proof validation ruleset code depending on the block start\n%% offset, the threshold where the offset rebases were allowed (and the validation\n%% changed in some other ways on top of that). The threshold where the specific\n%% requirements were imposed on data splits to make each chunk belong to its own\n%% 256 KiB bucket is set to ar_block:strict_data_split_threshold(). The code is then passed to\n%% ar_merkle:validate_path/5.\nget_data_path_validation_ruleset(BlockStartOffset, MerkleRebaseSupportThreshold) ->\n\tget_data_path_validation_ruleset(BlockStartOffset, MerkleRebaseSupportThreshold,\n\t\t\tar_block:strict_data_split_threshold()).\n\n%% @doc Return the merkle proof validation ruleset code depending on the block start\n%% offset, the threshold where the offset rebases were allowed (and the validation\n%% changed in some other ways on top of that), and the threshold where the specific\n%% requirements were imposed on data splits to make each chunk belong to its own\n%% 256 KiB bucket. The code is then passed to ar_merkle:validate_path/5.\nget_data_path_validation_ruleset(BlockStartOffset, MerkleRebaseSupportThreshold,\n\t\tStrictDataSplitThreshold) ->\n\tcase BlockStartOffset >= MerkleRebaseSupportThreshold of\n\t\ttrue ->\n\t\t\toffset_rebase_support_ruleset;\n\t\tfalse ->\n\t\t\tcase BlockStartOffset >= StrictDataSplitThreshold of\n\t\t\t\ttrue ->\n\t\t\t\t\tstrict_data_split_ruleset;\n\t\t\t\tfalse ->\n\t\t\t\t\tstrict_borders_ruleset\n\t\t\tend\n\tend.\n\nget_data_path_validation_ruleset(BlockStartOffset) ->\n\tget_data_path_validation_ruleset(BlockStartOffset, ?MERKLE_REBASE_SUPPORT_THRESHOLD,\n\t\t\tar_block:strict_data_split_threshold()).\n\n%% @doc Validate a proof of access.\nvalidate(Args) ->\n\t{BlockStartOffset, RecallOffset, TXRoot, BlockSize, SPoA, Packing, SubChunkIndex,\n\t\t\tExpectedChunkID} = Args,\n\t#poa{ chunk = Chunk, unpacked_chunk = UnpackedChunk } = SPoA,\n\n\tChunkMetadata = #chunk_metadata{\n\t\ttx_root = TXRoot,\n\t\ttx_path = SPoA#poa.tx_path,\n\t\tdata_path = SPoA#poa.data_path\n\t},\n\tChunkProof = chunk_proof(ChunkMetadata, RecallOffset, BlockStartOffset, BlockSize),\n\n\tcase validate_paths(ChunkProof) of\n\t\t{false, _} ->\n\t\t\tfalse;\n\t\t{true, ChunkProof2} ->\n\t\t\t#chunk_proof{\n\t\t\t\tchunk_id = ChunkID,\n\t\t\t\tchunk_start_offset = ChunkStartOffset,\n\t\t\t\tchunk_end_offset = ChunkEndOffset,\n\t\t\t\ttx_start_offset = TXStartOffset\n\t\t\t} = ChunkProof2,\n\t\t\tcase ExpectedChunkID of\n\t\t\t\tnot_set ->\n\t\t\t\t\tvalidate2(Packing, {ChunkID, ChunkStartOffset,\n\t\t\t\t\t\t\tChunkEndOffset, BlockStartOffset, TXStartOffset,\n\t\t\t\t\t\t\tTXRoot, Chunk, UnpackedChunk, SubChunkIndex});\n\t\t\t\t_ ->\n\t\t\t\t\tcase ChunkID == ExpectedChunkID of\n\t\t\t\t\t\tfalse 
->\n\t\t\t\t\t\t\tfalse;\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t{true, ChunkID}\n\t\t\t\t\tend\n\t\t\tend\n\tend.\n\nchunk_proof(#chunk_metadata{} = ChunkMetadata, SeekByte) ->\n\tchunk_proof(ChunkMetadata, SeekByte, ?MERKLE_REBASE_SUPPORT_THRESHOLD).\n\nchunk_proof(#chunk_metadata{} = ChunkMetadata, SeekByte, MerkleRebaseSupportThreshold) ->\n\t{BlockStartOffset, BlockEndOffset, TXRoot} = ar_block_index:get_block_bounds(SeekByte),\n\n\tChunkMetadata2 = case ChunkMetadata#chunk_metadata.tx_root of\n\t\tnot_set ->\n\t\t\tChunkMetadata#chunk_metadata{ tx_root = TXRoot };\n\t\tTXRoot ->\n\t\t\tChunkMetadata\n\tend,\n\n\tValidateDataPathRuleset = get_data_path_validation_ruleset(\n\t\tBlockStartOffset, MerkleRebaseSupportThreshold, ar_block:strict_data_split_threshold()),\n\tchunk_proof(\n\t\tChunkMetadata2,\n\t\tBlockStartOffset,\n\t\tBlockEndOffset,\n\t\tSeekByte,\n\t\tValidateDataPathRuleset\n\t).\n\nchunk_proof(#chunk_metadata{} = ChunkMetadata, RecallOffset, BlockStartOffset, BlockSize) ->\n\tBlockRelativeOffset = get_recall_bucket_offset(RecallOffset, BlockStartOffset),\n\tValidateDataPathRuleset = get_data_path_validation_ruleset(BlockStartOffset),\n\n\tBlockEndOffset = BlockStartOffset + BlockSize,\n\tSeekByte = BlockStartOffset + BlockRelativeOffset,\n\tchunk_proof(\n\t\tChunkMetadata,\n\t\tBlockStartOffset,\n\t\tBlockEndOffset,\n\t\tSeekByte,\n\t\tValidateDataPathRuleset\n\t).\n\nchunk_proof(#chunk_metadata{} = ChunkMetadata,\n\tBlockStartOffset, BlockEndOffset, SeekByte, ValidateDataPathRuleset) ->\n\n\t#chunk_proof{\n\t\tseek_byte = SeekByte,\n\t\tmetadata = ChunkMetadata,\n\t\tblock_start_offset = BlockStartOffset,\n\t\tblock_end_offset = BlockEndOffset,\n\t\tvalidate_data_path_ruleset = ValidateDataPathRuleset\n\t}.\n\n%% @doc Validate the TXPath and DataPath for a chunk. 
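The TXPath is checked against the\n%% TXRoot to recover the DataRoot and the transaction offsets; the DataPath is then checked\n%% against that DataRoot to recover the chunk offsets. 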
This will return the ChunkID but won't\n%% validate that the ChunkID is correct.\n-spec validate_paths(#chunk_proof{}) -> {boolean(), #chunk_proof{}}.\nvalidate_paths(Proof) ->\n\t#chunk_proof{\n\t\tseek_byte = SeekByte,\n\t\tmetadata = #chunk_metadata{\n\t\t\ttx_root = TXRoot,\n\t\t\ttx_path = TXPath,\n\t\t\tdata_path = DataPath\n\t\t},\n\t\tblock_start_offset = BlockStartOffset,\n\t\tblock_end_offset = BlockEndOffset,\n\t\tvalidate_data_path_ruleset = ValidateDataPathRuleset\n\t} = Proof,\n\n\tBlockRelativeOffset = SeekByte - BlockStartOffset,\n\tBlockSize = BlockEndOffset - BlockStartOffset,\n\n\tcase ar_merkle:validate_path(TXRoot, BlockRelativeOffset, BlockSize, TXPath) of\n\t\tfalse ->\n\t\t\t{false, Proof#chunk_proof{ tx_path_is_valid = invalid }};\n\t\t{DataRoot, TXStartOffset, TXEndOffset} ->\n\t\t\tProof2 = Proof#chunk_proof{\n\t\t\t\tmetadata = Proof#chunk_proof.metadata#chunk_metadata{\n\t\t\t\t\tdata_root = DataRoot\n\t\t\t\t},\n\t\t\t\ttx_start_offset = TXStartOffset,\n\t\t\t\ttx_end_offset = TXEndOffset,\n\t\t\t\ttx_path_is_valid = valid\n\t\t\t},\n\t\t\tTXSize = TXEndOffset - TXStartOffset,\n\t\t\tTXRelativeOffset = BlockRelativeOffset - TXStartOffset,\n\t\t\tcase ar_merkle:validate_path(\n\t\t\t\t\tDataRoot, TXRelativeOffset, TXSize, DataPath, ValidateDataPathRuleset) of\n\t\t\t\tfalse ->\n\t\t\t\t\t{false, Proof2#chunk_proof{ data_path_is_valid = invalid }};\n\t\t\t\t{ChunkID, ChunkStartOffset, ChunkEndOffset} ->\n\t\t\t\t\tProof3 = Proof2#chunk_proof{\n\t\t\t\t\t\tchunk_id = ChunkID,\n\t\t\t\t\t\tchunk_start_offset = ChunkStartOffset,\n\t\t\t\t\t\tchunk_end_offset = ChunkEndOffset,\n\t\t\t\t\t\tmetadata = Proof2#chunk_proof.metadata#chunk_metadata{\n\t\t\t\t\t\t\tchunk_size = ChunkEndOffset - ChunkStartOffset\n\t\t\t\t\t\t},\n\t\t\t\t\t\tdata_path_is_valid = valid\n\t\t\t\t\t},\n\t\t\t\t\t{true, Proof3}\n\t\t\tend\n\tend.\n\nget_recall_bucket_offset(RecallOffset, BlockStartOffset) ->\n\tcase RecallOffset >= ar_block:strict_data_split_threshold() of\n\t\ttrue ->\n\t\t\tget_padded_offset(RecallOffset + 1, ar_block:strict_data_split_threshold())\n\t\t\t\t\t- (?DATA_CHUNK_SIZE) - BlockStartOffset;\n\t\tfalse ->\n\t\t\tRecallOffset - BlockStartOffset\n\tend.\n\nvalidate2({spora_2_6, _} = Packing, Args) ->\n\t{ChunkID, ChunkStartOffset, ChunkEndOffset, BlockStartOffset, TXStartOffset,\n\t\t\tTXRoot, Chunk, _UnpackedChunk, _SubChunkIndex} = Args,\n\tChunkSize = ChunkEndOffset - ChunkStartOffset,\n\tAbsoluteEndOffset = BlockStartOffset + TXStartOffset + ChunkEndOffset,\n\tprometheus_counter:inc(validating_packed_spora, [ar_packing_server:packing_atom(Packing)]),\n\tcase ar_packing_server:unpack(Packing, AbsoluteEndOffset, TXRoot, Chunk, ChunkSize) of\n\t\t{error, _} ->\n\t\t\tfalse;\n\t\t{exception, Exception} ->\n\t\t\t?LOG_WARNING([{event, validate_unpack_exception},\n\t\t\t\t{packing, ar_serialize:encode_packing(Packing, false)},\n\t\t\t\t{exception, Exception}]),\n\t\t\terror;\n\t\t{ok, Unpacked} ->\n\t\t\tcase ChunkID == ar_tx:generate_chunk_id(Unpacked) of\n\t\t\t\tfalse ->\n\t\t\t\t\tfalse;\n\t\t\t\ttrue ->\n\t\t\t\t\t{true, ChunkID}\n\t\t\tend\n\tend;\nvalidate2(Packing, Args) ->\n\t{_ChunkID, ChunkStartOffset, ChunkEndOffset, _BlockStartOffset, _TXStartOffset,\n\t\t\t_TXRoot, _Chunk, UnpackedChunk, _SubChunkIndex} = Args,\n\tChunkSize = ChunkEndOffset - ChunkStartOffset,\n\tcase ChunkSize > ?DATA_CHUNK_SIZE of\n\t\ttrue ->\n\t\t\tfalse;\n\t\tfalse ->\n\t\t\tPaddingSize = ?DATA_CHUNK_SIZE - ChunkSize,\n\t\t\tcase binary:part(UnpackedChunk, ChunkSize, PaddingSize) 
of\n\t\t\t\t<< 0:(PaddingSize * 8) >> ->\n\t\t\t\t\tvalidate3(Packing, Args);\n\t\t\t\t_ ->\n\t\t\t\t\tfalse\n\t\t\tend\n\tend.\n\nvalidate3(Packing, Args) ->\n\t{ChunkID, ChunkStartOffset, ChunkEndOffset, BlockStartOffset, TXStartOffset,\n\t\t\tTXRoot, Chunk, UnpackedChunk, SubChunkIndex} = Args,\n\tAbsoluteEndOffset = BlockStartOffset + TXStartOffset + ChunkEndOffset,\n\tSubChunkSize = ?COMPOSITE_PACKING_SUB_CHUNK_SIZE,\n\tSubChunkStartOffset = SubChunkIndex * SubChunkSize,\n\t%% We always expect the provided unpacked chunks to be padded (if necessary)\n\t%% to 256 KiB.\n\tUnpackedSubChunk = binary:part(UnpackedChunk, SubChunkStartOffset, SubChunkSize),\n\tPackingAtom = ar_packing_server:packing_atom(Packing),\n\tprometheus_counter:inc(validating_packed_spora, [PackingAtom]),\n\tcase ar_packing_server:unpack_sub_chunk(Packing, AbsoluteEndOffset,\n\t\t\tTXRoot, Chunk, SubChunkStartOffset) of\n\t\t{error, _} ->\n\t\t\tfalse;\n\t\t{exception, Exception} ->\n\t\t\t?LOG_WARNING([{event, validate_unpack_exception},\n\t\t\t\t{packing, ar_serialize:encode_packing(Packing, false)},\n\t\t\t\t{exception, Exception}]),\n\t\t\terror;\n\t\t{ok, UnpackedSubChunk} ->\n\t\t\tChunkSize = ChunkEndOffset - ChunkStartOffset,\n\t\t\tUnpackedChunkNoPadding = binary:part(UnpackedChunk, 0, ChunkSize),\n\t\t\tcase ChunkID == ar_tx:generate_chunk_id(UnpackedChunkNoPadding) of\n\t\t\t\tfalse ->\n\t\t\t\t\tfalse;\n\t\t\t\ttrue ->\n\t\t\t\t\t{true, ChunkID}\n\t\t\tend;\n\t\t{ok, _UnexpectedSubChunk} ->\n\t\t\tfalse\n\tend.\n\n%% @doc Return the smallest multiple of 256 KiB >= Offset\n%% counting from ar_block:strict_data_split_threshold().\nget_padded_offset(Offset) ->\n\tget_padded_offset(Offset, ar_block:strict_data_split_threshold()).\n\n%% @doc Return the smallest multiple of 256 KiB >= Offset\n%% counting from StrictDataSplitThreshold.\nget_padded_offset(Offset, StrictDataSplitThreshold) ->\n\tDiff = Offset - StrictDataSplitThreshold,\n\tStrictDataSplitThreshold + ((Diff - 1) div (?DATA_CHUNK_SIZE) + 1) * (?DATA_CHUNK_SIZE).\n\n%% @doc Validate a proof of access.\nvalidate_pre_fork_2_5(BlockOffset, TXRoot, BlockEndOffset, POA) ->\n\tValidation =\n\t\tar_merkle:validate_path(\n\t\t\tTXRoot,\n\t\t\tBlockOffset,\n\t\t\tBlockEndOffset,\n\t\t\tPOA#poa.tx_path\n\t\t),\n\tcase Validation of\n\t\tfalse -> false;\n\t\t{DataRoot, StartOffset, EndOffset} ->\n\t\t\tTXOffset = BlockOffset - StartOffset,\n\t\t\tvalidate_data_path_pre_fork_2_5(DataRoot, TXOffset, EndOffset - StartOffset, POA)\n\tend.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nvalidate_data_path_pre_fork_2_5(DataRoot, TXOffset, EndOffset, POA) ->\n\tValidation =\n\t\tar_merkle:validate_path(\n\t\t\tDataRoot,\n\t\t\tTXOffset,\n\t\t\tEndOffset,\n\t\t\tPOA#poa.data_path\n\t\t),\n\tcase Validation of\n\t\tfalse -> false;\n\t\t{ChunkID, _, _} ->\n\t\t\tvalidate_chunk_pre_fork_2_5(ChunkID, POA)\n\tend.\n\nvalidate_chunk_pre_fork_2_5(ChunkID, POA) ->\n\tChunkID == ar_tx:generate_chunk_id(POA#poa.chunk).\n"
  },
  {
    "path": "apps/arweave/src/ar_poller.erl",
    "content": "%% This Source Code Form is subject to the terms of the GNU General\n%% Public License, v. 2.0. If a copy of the GPLv2 was not distributed\n%% with this file, You can obtain one at\n%% https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html\n\n%%% @doc The module periodically asks peers about their recent blocks and downloads\n%%% the missing ones. It serves the following purposes:\n%%%\n%%% - allows following the network in the absence of a public IP;\n%%% - protects the node from lagging behind when there are networking issues.\n\n-module(ar_poller).\n\n-behaviour(gen_server).\n\n-export([start_link/2, pause/0, resume/0]).\n\n-export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n%% The frequency of choosing the peers to poll.\n-ifdef(AR_TEST).\n-define(COLLECT_PEERS_FREQUENCY_MS, 2000).\n-else.\n-define(COLLECT_PEERS_FREQUENCY_MS, 1000 * 15).\n-endif.\n\n-record(state, {\n\tworkers,\n\tworker_count,\n\tpause = false,\n\tin_sync_trusted_peers = sets:new()\n}).\n\n%%%===================================================================\n%%% Public API.\n%%%===================================================================\n\nstart_link(Name, Workers) ->\n\tgen_server:start_link({local, Name}, ?MODULE, Workers, []).\n\n%% @doc Put polling on pause.\npause() ->\n\tgen_server:cast(?MODULE, pause).\n\n%% @doc Resume paused polling.\nresume() ->\n\tgen_server:cast(?MODULE, resume).\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\n\ninit(Workers) ->\n\tok = ar_events:subscribe(node_state),\n\tcase ar_node:is_joined() of\n\t\ttrue ->\n\t\t\thandle_node_state_initialized();\n\t\tfalse ->\n\t\t\tok\n\tend,\n\t{ok, Config} = arweave_config:get_env(),\n\t{ok, #state{ \n\t\tworkers = Workers,\n\t\tworker_count = length(Workers),\n\t\tin_sync_trusted_peers = sets:from_list(Config#config.peers) \n\t}}.\n\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {request, Request}]),\n\t{reply, ok, State}.\n\nhandle_cast(pause, #state{ workers = Workers } = State) ->\n\t[gen_server:cast(W, pause) || W <- Workers],\n\t{noreply, State#state{ pause = true }};\n\nhandle_cast(resume, #state{ pause = false } = State) ->\n\t{noreply, State};\nhandle_cast(resume, #state{ workers = Workers } = State) ->\n\t[gen_server:cast(W, resume) || W <- Workers],\n\tgen_server:cast(?MODULE, collect_peers),\n\t{noreply, State#state{ pause = false }};\n\nhandle_cast(collect_peers, #state{ pause = true } = State) ->\n\t{noreply, State};\nhandle_cast(collect_peers, State) ->\n\t#state{ worker_count = N, workers = Workers } = State,\n\tTrustedPeers = ar_util:pick_random(ar_peers:get_trusted_peers(), N div 3),\n\tPeers = ar_peers:get_peers(current),\n\tOtherPeers =  ar_data_discovery:pick_peers(Peers -- TrustedPeers, N - length(TrustedPeers)),\n\tPickedPeers = TrustedPeers ++ OtherPeers,\n\tstart_polling_peers(Workers, PickedPeers),\n\tar_util:cast_after(?COLLECT_PEERS_FREQUENCY_MS, ?MODULE, collect_peers),\n\t{noreply, State};\n\nhandle_cast({peer_out_of_sync_timeout, Peer}, State) ->\n\t#state{ in_sync_trusted_peers = Set } = State,\n\t{ok, Config} = arweave_config:get_env(),\n\tcase lists:member(Peer, Config#config.peers) of\n\t\tfalse ->\n\t\t\t{noreply, State};\n\t\ttrue ->\n\t\t\t{noreply, State#state{ 
in_sync_trusted_peers = sets:add_element(Peer, Set) }}\n\tend;\n\nhandle_cast({peer_out_of_sync, Peer}, State) ->\n\t#state{ in_sync_trusted_peers = Set } = State,\n\t{ok, Config} = arweave_config:get_env(),\n\tcase lists:member(Peer, Config#config.peers) of\n\t\tfalse ->\n\t\t\t{noreply, State};\n\t\ttrue ->\n\t\t\tSet2 = sets:del_element(Peer, Set),\n\t\t\tar_util:cast_after(300000, ?MODULE, {peer_out_of_sync_timeout, Peer}),\n\t\t\tcase {sets:is_empty(Set), sets:is_empty(Set2)} of\n\t\t\t\t{false, true} ->\n\t\t\t\t\tar_mining_stats:pause_performance_reports(60000),\n\t\t\t\t\tar_util:terminal_clear(),\n\t\t\t\t\tTrustedPeersStr = string:join([ar_util:format_peer(Peer2)\n\t\t\t\t\t\t\t|| Peer2 <- Config#config.peers], \", \"),\n\t\t\t\t\t?LOG_INFO([{event, node_out_of_sync}, {peer, ar_util:format_peer(Peer)},\n\t\t\t\t\t\t{trusted_peers, TrustedPeersStr}]),\n\t\t\t\t\tar:console(\"WARNING: The node is out of sync with all of the specified \"\n\t\t\t\t\t\t\t\"trusted peers: ~s.~n~n\"\n\t\t\t\t\t\t\t\"Please, check whether you are in sync with the network and \"\n\t\t\t\t\t\t\t\"make sure your CPU computes VDF fast enough or you are connected \"\n\t\t\t\t\t\t\t\"to a VDF server.~nThe node may be still mining, but console \"\n\t\t\t\t\t\t\t\"performance reports are temporarily paused.~n~n\",\n\t\t\t\t\t\t\t[TrustedPeersStr]);\n\t\t\t\t_ ->\n\t\t\t\t\tok\n\t\t\tend,\n\t\t\t{noreply, State#state{ in_sync_trusted_peers = Set2 }}\n\tend;\n\nhandle_cast({block, Peer, B, BlockQueryTime}, State) ->\n\tcase ar_ignore_registry:member(B#block.indep_hash) of\n\t\tfalse ->\n\t\t\t?LOG_INFO([{event, fetched_block_for_validation},\n\t\t\t\t\t{block, ar_util:encode(B#block.indep_hash)},\n\t\t\t\t\t{peer, ar_util:format_peer(Peer)}]);\n\t\ttrue ->\n\t\t\tok\n\tend,\n\tcase ar_block_pre_validator:pre_validate(B, Peer, erlang:timestamp()) of\n\t\tok ->\n\t\t\tar_peers:rate_fetched_data(Peer, block, BlockQueryTime, byte_size(term_to_binary(B)));\n\t\t_ ->\n\t\t\tok\n\tend,\n\t{noreply, State};\n\nhandle_cast(Msg, State) ->\n\t?LOG_ERROR([{event, unhandled_cast}, {module, ?MODULE}, {message, Msg}]),\n\t{noreply, State}.\n\nhandle_info({event, node_state, {initialized, _B}}, State) ->\n\thandle_node_state_initialized(),\n\t{noreply, State};\n\nhandle_info({event, node_state, _}, State) ->\n\t{noreply, State};\n\nhandle_info(Info, State) ->\n\t?LOG_ERROR([{event, unhandled_info}, {module, ?MODULE}, {info, Info}]),\n\t{noreply, State}.\n\nterminate(Reason, _State) ->\n\t?LOG_INFO([{module, ?MODULE},{pid, self()},{callback, terminate},{reason, Reason}]),\n\tok.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nhandle_node_state_initialized() ->\n\tgen_server:cast(?MODULE, collect_peers).\n\nstart_polling_peers([W | Workers], [Peer | Peers]) ->\n\tgen_server:cast(W, {set_peer, Peer}),\n\tstart_polling_peers(Workers, Peers);\nstart_polling_peers(_Workers, []) ->\n\tok.\n"
  },
  {
    "path": "apps/arweave/src/ar_poller_sup.erl",
    "content": "-module(ar_poller_sup).\n\n-behaviour(supervisor).\n\n-export([start_link/0]).\n\n-export([init/1]).\n\n-include_lib(\"arweave/include/ar_sup.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n%%%===================================================================\n%%% Public API.\n%%%===================================================================\n\nstart_link() ->\n\tsupervisor:start_link({local, ?MODULE}, ?MODULE, []).\n\n%% ===================================================================\n%% Supervisor callbacks.\n%% ===================================================================\n\ninit([]) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tChildren = lists:map(\n\t\tfun(Num) ->\n\t\t\tName = list_to_atom(\"ar_poller_worker_\" ++ integer_to_list(Num)),\n\t\t\t{Name, {ar_poller_worker, start_link, [Name]}, permanent, ?SHUTDOWN_TIMEOUT,\n\t\t\t\t\tworker, [ar_poller_worker]}\n\t\tend,\n\t\tlists:seq(1, Config#config.block_pollers)\n\t),\n\tWorkers = [element(1, El) || El <- Children],\n\tChildren2 = [?CHILD_WITH_ARGS(ar_poller, worker, ar_poller, [ar_poller, Workers]) | Children],\n\t{ok, {{one_for_one, 5, 10}, Children2}}.\n"
  },
  {
    "path": "apps/arweave/src/ar_poller_worker.erl",
    "content": "-module(ar_poller_worker).\n\n-behaviour(gen_server).\n\n-export([start_link/1]).\n\n-export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-record(state, {\n\tpeer,\n\tpolling_frequency_ms,\n\tref,\n\tpause = false\n}).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nstart_link(Name) ->\n\tgen_server:start_link({local, Name}, ?MODULE, [], []).\n\n%%%===================================================================\n%%% gen_server callbacks.\n%%%===================================================================\n\ninit([]) ->\n\t{ok, Config} = arweave_config:get_env(),\n\t[ok] = ar_events:subscribe([node_state]),\n\tState = #state{ polling_frequency_ms = Config#config.polling * 1000 },\n\tcase ar_node:is_joined() of\n\t\ttrue ->\n\t\t\t{ok, handle_node_state_initialized(State)};\n\t\tfalse ->\n\t\t\t{ok, State}\n\tend.\n\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {request, Request}]),\n\t{reply, ok, State}.\n\nhandle_cast(pause, State) ->\n\t{noreply, State#state{ pause = true }};\n\nhandle_cast(resume, #state{ pause = false } = State) ->\n\t{noreply, State};\nhandle_cast(resume, State) ->\n\tRef = make_ref(),\n\tgen_server:cast(self(), {poll, Ref}),\n\t{noreply, State#state{ pause = false, ref = Ref }};\n\nhandle_cast({poll, _Ref}, #state{ pause = true } = State) ->\n\t{noreply, State};\nhandle_cast({poll, _Ref}, #state{ peer = undefined } = State) ->\n\t{noreply, State#state{ pause = true }};\nhandle_cast({poll, Ref}, #state{ ref = Ref, peer = Peer,\n\t\tpolling_frequency_ms = FrequencyMs } = State) ->\n\tCurrentHeight = ar_node:get_height(),\n\t{L, NotOnChain} = ar_block_cache:get_longest_chain_cache(block_cache),\n\tHL = [H || {H, _TXIDs} <- L],\n\tcase NotOnChain >= 5 of\n\t\ttrue ->\n\t\t\tslow_block_application_warning(NotOnChain);\n\t\tfalse ->\n\t\t\tok\n\tend,\n\tcase ar_http_iface_client:get_recent_hash_list_diff(Peer, HL) of\n\t\t{ok, in_sync} ->\n\t\t\tar_util:cast_after(FrequencyMs, self(), {poll, Ref}),\n\t\t\t{noreply, State};\n\t\t{ok, {H, TXIDs, BlocksOnTop}} ->\n\t\t\tcase ar_ignore_registry:member({poller_worker, H})\n\t\t\t\t\torelse ar_ignore_registry:permanent_member(H) of\n\t\t\t\ttrue ->\n\t\t\t\t\tok;\n\t\t\t\tfalse ->\n\t\t\t\t\tcase BlocksOnTop >= 5 of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\twarning(Peer, behind);\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tok\n\t\t\t\t\tend,\n\t\t\t\t\tIgnoreRef = make_ref(),\n\t\t\t\t\tar_ignore_registry:add_ref({poller_worker, H}, IgnoreRef, 1000),\n\t\t\t\t\tIndices = get_missing_tx_indices(TXIDs),\n\t\t\t\t\tcase ar_http_iface_client:get_block(Peer, H, Indices) of\n\t\t\t\t\t\t{#block{ height = Height } = B, TimeMicroseconds, _Size} ->\n\t\t\t\t\t\t\tcase Height =< CurrentHeight - 5 of\n\t\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\t\twarning(Peer, fork);\n\t\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t\tok\n\t\t\t\t\t\t\tend,\n\t\t\t\t\t\t\tcase collect_missing_transactions(B#block.txs) of\n\t\t\t\t\t\t\t\t{ok, TXs} ->\n\t\t\t\t\t\t\t\t\tB2 = B#block{ txs = TXs },\n\t\t\t\t\t\t\t\t\tar_ignore_registry:remove_ref({poller_worker, H}, IgnoreRef),\n\t\t\t\t\t\t\t\t\tgen_server:cast(ar_poller, {block, Peer, B2, TimeMicroseconds}),\n\t\t\t\t\t\t\t\t\tok;\n\t\t\t\t\t\t\t\tfailed ->\n\t\t\t\t\t\t\t\t\t?LOG_WARNING([{event, 
failed_to_get_block_txs_from_peer},\n\t\t\t\t\t\t\t\t\t\t\t{block, ar_util:encode(H)},\n\t\t\t\t\t\t\t\t\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t\t\t\t\t\t\t\t\t{tx_count, length(B#block.txs)}]),\n\t\t\t\t\t\t\t\t\tok\n\t\t\t\t\t\t\tend;\n\t\t\t\t\t\tError ->\n\t\t\t\t\t\t\tar_ignore_registry:remove_ref({poller_worker, H}, IgnoreRef),\n\t\t\t\t\t\t\t?LOG_DEBUG([{event, failed_to_fetch_block},\n\t\t\t\t\t\t\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t\t\t\t\t\t\t{block, ar_util:encode(H)},\n\t\t\t\t\t\t\t\t\t{error, io_lib:format(\"~p\", [Error])}]),\n\t\t\t\t\t\t\tok\n\t\t\t\t\tend\n\t\t\tend,\n\t\t\tar_util:cast_after(FrequencyMs, self(), {poll, Ref}),\n\t\t\t{noreply, State};\n\t\t{error, not_found} ->\n\t\t\t?LOG_DEBUG([{event, peer_out_of_sync}, {peer, ar_util:format_peer(Peer)}]),\n\t\t\tgen_server:cast(ar_poller, {peer_out_of_sync, Peer}),\n\t\t\t{noreply, State#state{ pause = true }};\n\t\t{error, Reason} ->\n\t\t\tar_http_iface_client:log_failed_request(Reason,\n\t\t\t\t[{event, failed_to_get_recent_hash_list_diff},\n\t\t\t\t{peer, ar_util:format_peer(Peer)},\n\t\t\t\t{reason, io_lib:format(\"~p\", [Reason])}]),\n\t\t\t{noreply, State#state{ pause = true }}\n\tend;\nhandle_cast({poll, _Ref}, State) ->\n\t{noreply, State};\n\nhandle_cast({set_peer, Peer}, #state{ ref = Ref, pause = Pause } = State) ->\n\tRef2 =\n\t\tcase Pause of\n\t\t\ttrue ->\n\t\t\t\tRef3 = make_ref(),\n\t\t\t\tgen_server:cast(self(), {poll, Ref3}),\n\t\t\t\tRef3;\n\t\t\tfalse ->\n\t\t\t\tRef\n\t\tend,\n\t{noreply, State#state{ peer = Peer, pause = false, ref = Ref2 }};\n\nhandle_cast(Msg, State) ->\n\t?LOG_ERROR([{event, unhandled_cast}, {message, Msg}]),\n\t{noreply, State}.\n\nhandle_info({event, node_state, {initialized, _B}}, State) ->\n\t{noreply, handle_node_state_initialized(State)};\n\nhandle_info({event, node_state, _}, State) ->\n\t{noreply, State};\n\nhandle_info({gun_down, _, http, normal, _, _}, State) ->\n\t{noreply, State};\nhandle_info({gun_down, _, http, closed, _, _}, State) ->\n\t{noreply, State};\nhandle_info({gun_up, _, http}, State) ->\n\t{noreply, State};\n\nhandle_info(Info, State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {info, Info}]),\n\t{noreply, State}.\n\nterminate(Reason, _State) ->\n\t?LOG_INFO([{module, ?MODULE},{pid, self()},{callback, terminate},{reason, Reason}]),\n\tok.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nhandle_node_state_initialized(State) ->\n\tRef = make_ref(),\n\tgen_server:cast(self(), {poll, Ref}),\n\tState#state{ ref = Ref }.\n\nget_missing_tx_indices(TXIDs) ->\n\tget_missing_tx_indices(TXIDs, 0).\n\nget_missing_tx_indices([], _N) ->\n\t[];\nget_missing_tx_indices([TXID | TXIDs], N) ->\n\tcase ar_mempool:has_tx(TXID) of\n\t\ttrue ->\n\t\t\tget_missing_tx_indices(TXIDs, N + 1);\n\t\tfalse ->\n\t\t\t[N | get_missing_tx_indices(TXIDs, N + 1)]\n\tend.\n\nslow_block_application_warning(N) ->\n\tar_mining_stats:pause_performance_reports(60000),\n\tar_util:terminal_clear(),\n\tar:console(\"WARNING: there are more than ~B not yet validated blocks on the longest chain.\"\n\t\t\t\" Please, double-check if you are in sync with the network and make sure your \"\n\t\t\t\"CPU computes VDF fast enough or you are connected to a VDF server.\"\n\t\t\t\"~nThe node may be still mining, but console performance reports are temporarily \"\n\t\t\t\"paused.~n~n\", [N]).\n\nwarning(Peer, Event) ->\n\t{ok, Config} = 
arweave_config:get_env(),\n\tcase lists:member(Peer, Config#config.peers) of\n\t\tfalse ->\n\t\t\tok;\n\t\ttrue ->\n\t\t\tar_mining_stats:pause_performance_reports(60000),\n\t\t\tEventMessage =\n\t\t\t\tcase Event of\n\t\t\t\t\tbehind ->\n\t\t\t\t\t\t\"is 5 or more blocks ahead of us\";\n\t\t\t\t\tfork ->\n\t\t\t\t\t\t\"is on a fork branching off of our fork 5 or more blocks behind\"\n\t\t\t\tend,\n\t\t\tar_util:terminal_clear(),\n\t\t\tar:console(\"WARNING: peer ~s ~s. \"\n\t\t\t\t\t\"Please, double-check if you are in sync with the network and \"\n\t\t\t\t\t\"make sure your CPU computes VDF fast enough or you are connected \"\n\t\t\t\t\t\"to a VDF server.~nThe node may be still mining, but console performance \"\n\t\t\t\t\t\"reports are temporarily paused.~n~n\",\n\t\t\t\t\t[ar_util:format_peer(Peer), EventMessage])\n\tend.\n\ncollect_missing_transactions([#tx{} = TX | TXs]) ->\n\tcase collect_missing_transactions(TXs) of\n\t\tfailed ->\n\t\t\tfailed;\n\t\t{ok, TXs2} ->\n\t\t\t{ok, [TX | TXs2]}\n\tend;\ncollect_missing_transactions([TXID | TXs]) ->\n\tcase ar_mempool:get_tx(TXID) of\n\t\tnot_found ->\n\t\t\tfailed;\n\t\tTX ->\n\t\t\tcollect_missing_transactions([TX | TXs])\n\tend;\ncollect_missing_transactions([]) ->\n\t{ok, []}.\n"
  },
  {
    "path": "apps/arweave/src/ar_pool.erl",
    "content": "%%% @doc The module defines the core pool mining functionality.\n%%%\n%%% The key actors are a pool client, a pool proxy, and a pool server. The pool client may be\n%%% a standalone mining node or an exit peer in a coordinated mining setup. The other CM peers\n%%% communicate with the pool via the exit peer. The proxy is NOT an Arweave node.\n%%%\n%%% Communication Scheme\n%%%\n%%%                                 +---> Standalone Pool Client\n%%%                                 |\n%%% Pool Server <--> Pool Proxy <---+\n%%%                                 |\n%%%                                 +---> CM Exit Node Pool Client <--> CM Miner Pool Client\n%%%\n%%% Job Assignment\n%%%\n%%% 1. Solo Mining\n%%%\n%%%   Pool Server -> Pool Proxy -> Standalone Pool Client\n%%%\n%%% 2. Coordinated Mining\n%%%\n%%%   Pool Server -> Pool Proxy -> CM Exit Node Pool Client -> CM Miner Pool Client\n%%%\n%%% Partial Solution Lifecycle\n%%%\n%%% 1. Solo Mining\n%%%\n%%%   Standalone Pool Client -> Pool Proxy -> Pool Sever\n%%%\n%%% 2. Coordinated Mining\n%%%\n%%%   CM Miner Pool Client -> CM Exit Node Pool Client -> Pool Proxy -> Pool Server\n-module(ar_pool).\n\n-behaviour(gen_server).\n\n-export([start_link/0, is_client/0, get_current_session_key_seed_pairs/0, get_jobs/1,\n\t\tget_latest_job/0, cache_jobs/1, process_partial_solution/1,\n\t\tpost_partial_solution/1, pool_peer/0, process_cm_jobs/2]).\n\n-export([init/1, handle_cast/2, handle_call/3, handle_info/2, terminate/2]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"arweave/include/ar_consensus.hrl\").\n-include_lib(\"arweave/include/ar_mining.hrl\").\n-include_lib(\"arweave/include/ar_pool.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-record(state, {\n\t%% The most recent keys come first.\n\tsession_keys = [],\n\t%% Key => [{Output, StepNumber, PartitionUpperBound, Seed, Diff}, ...]\n\tjobs_by_session_key = maps:new(),\n\trequest_pid_by_ref = maps:new()\n}).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Start the server.\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\n%% @doc Return true if we are a pool client.\nis_client() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tConfig#config.is_pool_client == true.\n\n%% @doc Return a list of up to two most recently cached VDF session key, seed pairs.\nget_current_session_key_seed_pairs() ->\n\tgen_server:call(?MODULE, get_current_session_key_seed_pairs, ?DEFAULT_CALL_TIMEOUT).\n\n%% @doc Return a set of the most recent cached jobs.\nget_jobs(PrevOutput) ->\n\tgen_server:call(?MODULE, {get_jobs, PrevOutput}, ?DEFAULT_CALL_TIMEOUT).\n\n%% @doc Return the most recent cached #job{}. Return an empty record if the\n%% cache is empty.\nget_latest_job() ->\n\tgen_server:call(?MODULE, get_latest_job, ?DEFAULT_CALL_TIMEOUT).\n\n%% @doc Cache the given jobs.\ncache_jobs(Jobs) ->\n\tgen_server:cast(?MODULE, {cache_jobs, Jobs}).\n\n%% @doc Validate the given (partial) solution. 
If the solution is eligible for\n%% producing a block, produce and publish a block.\nprocess_partial_solution(Solution) ->\n\tgen_server:call(?MODULE, {process_partial_solution, Solution}, ?DEFAULT_CALL_TIMEOUT).\n\n%% @doc Send the partial solution to the pool.\npost_partial_solution(Solution) ->\n\tgen_server:cast(?MODULE, {post_partial_solution, Solution}).\n\n%% @doc Return the pool server as a \"peer\" recognized by ar_http_iface_client.\npool_peer() ->\n\t{ok, Config} = arweave_config:get_env(),\n\t{pool, Config#config.pool_server_address}.\n\n%% @doc Process the set of coordinated mining jobs received from the pool.\nprocess_cm_jobs(Jobs, Peer) ->\n\t#pool_cm_jobs{ h1_to_h2_jobs = H1ToH2Jobs, h1_read_jobs = H1ReadJobs } = Jobs,\n\t{ok, Config} = arweave_config:get_env(),\n\tPartitions = ar_mining_io:get_partitions(infinity),\n\tcase Config#config.mine of\n\t\ttrue ->\n\t\t\tprocess_h1_to_h2_jobs(H1ToH2Jobs, Peer, Partitions);\n\t\t_ ->\n\t\t\tok\n\tend,\n\tprocess_h1_read_jobs(H1ReadJobs, Partitions).\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\n\ninit([]) ->\n\tok = ar_events:subscribe(solution),\n\t{ok, #state{}}.\n\nhandle_call(get_current_session_key_seed_pairs, _From, State) ->\n\tJobsBySessionKey = State#state.jobs_by_session_key,\n\tKeys = lists:sublist(State#state.session_keys, 2),\n\tKeySeedPairs = [{Key, element(4, hd(maps:get(Key, JobsBySessionKey)))} || Key <- Keys],\n\t{reply, KeySeedPairs, State};\n\nhandle_call({get_jobs, PrevOutput}, _From, State) ->\n\tSessionKeys = State#state.session_keys,\n\tJobCache = State#state.jobs_by_session_key,\n\t{reply, get_jobs(PrevOutput, SessionKeys, JobCache), State};\n\nhandle_call(get_latest_job, _From, State) ->\n\tcase State#state.session_keys of\n\t\t[] ->\n\t\t\t{reply, #job{}, State};\n\t\t[Key | _] ->\n\t\t\t{O, SN, U, _S, _Diff} = hd(maps:get(Key, State#state.jobs_by_session_key)),\n\t\t\t{reply, #job{ output = O, global_step_number = SN,\n\t\t\t\t\tpartition_upper_bound = U }, State}\n\tend;\n\nhandle_call({process_partial_solution, Solution}, From, State) ->\n\t#state{ request_pid_by_ref = Map } = State,\n\tRef = make_ref(),\n\tcase process_partial_solution(Solution, Ref) of\n\t\tnoreply ->\n\t\t\t{noreply, State#state{ request_pid_by_ref = maps:put(Ref, From, Map) }};\n\t\tReply ->\n\t\t\t{reply, Reply, State}\n\tend;\n\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {request, Request}]),\n\t{reply, ok, State}.\n\nhandle_cast({cache_jobs, #jobs{ jobs = [] }}, State) ->\n\t{noreply, State};\nhandle_cast({cache_jobs, Jobs}, State) ->\n\t#jobs{ jobs = JobList, partial_diff = PartialDiff,\n\t\t\tnext_seed = NextSeed, seed = Seed,\n\t\t\tinterval_number = IntervalNumber,\n\t\t\tnext_vdf_difficulty = NextVDFDifficulty } = Jobs,\n\tSessionKey = {NextSeed, IntervalNumber, NextVDFDifficulty},\n\tSessionKeys = State#state.session_keys,\n\tSessionKeys2 =\n\t\tcase lists:member(SessionKey, SessionKeys) of\n\t\t\ttrue ->\n\t\t\t\tSessionKeys;\n\t\t\tfalse ->\n\t\t\t\t[SessionKey | SessionKeys]\n\t\tend,\n\tJobList2 = [{Job#job.output, Job#job.global_step_number,\n\t\t\tJob#job.partition_upper_bound, Seed, PartialDiff} || Job <- JobList],\n\tPrevJobList = maps:get(SessionKey, State#state.jobs_by_session_key, []),\n\tJobList3 = JobList2 ++ PrevJobList,\n\tJobsBySessionKey = maps:put(SessionKey, JobList3, State#state.jobs_by_session_key),\n\t{SessionKeys3, 
JobsBySessionKey2} =\n\t\tcase length(SessionKeys2) == 3 of\n\t\t\ttrue ->\n\t\t\t\t[SK1, SK2, RemoveKey] = SessionKeys2,\n\t\t\t\t{[SK1, SK2], maps:remove(RemoveKey, JobsBySessionKey)};\n\t\t\tfalse ->\n\t\t\t\t{SessionKeys2, JobsBySessionKey}\n\t\tend,\n\t{noreply, State#state{ session_keys = SessionKeys3,\n\t\t\tjobs_by_session_key = JobsBySessionKey2 }};\n\nhandle_cast({post_partial_solution, Solution}, State) ->\n\tcase ar_http_iface_client:post_partial_solution(pool_peer(), Solution) of\n\t\t{ok, _Response} ->\n\t\t\tok;\n\t\t{error, Error} ->\n\t\t\t?LOG_WARNING([{event, failed_to_submit_partial_solution},\n\t\t\t\t\t{reason, io_lib:format(\"~p\", [Error])}])\n\tend,\n\t{noreply, State};\n\nhandle_cast(Cast, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {cast, Cast}]),\n\t{noreply, State}.\n\nhandle_info({event, solution,\n\t\t{rejected, #{ reason := mining_address_banned, source := {pool, Ref} }}}, State) ->\n\t#state{ request_pid_by_ref = Map } = State,\n\tPID = maps:get(Ref, Map),\n\tgen_server:reply(PID,\n\t\t\t#partial_solution_response{ status = <<\"rejected_mining_address_banned\">> }),\n\t{noreply, State#state{ request_pid_by_ref = maps:remove(Ref, Map) }};\n\nhandle_info({event, solution,\n\t\t{rejected, #{ reason := missing_key_file, source := {pool, Ref} }}}, State) ->\n\t#state{ request_pid_by_ref = Map } = State,\n\tPID = maps:get(Ref, Map),\n\tgen_server:reply(PID,\n\t\t\t#partial_solution_response{ status = <<\"rejected_missing_key_file\">> }),\n\t{noreply, State#state{ request_pid_by_ref = maps:remove(Ref, Map) }};\n\nhandle_info({event, solution,\n\t\t{rejected, #{ reason := vdf_not_found, source := {pool, Ref} }}}, State) ->\n\t#state{ request_pid_by_ref = Map } = State,\n\tPID = maps:get(Ref, Map),\n\tgen_server:reply(PID, #partial_solution_response{ status = <<\"rejected_vdf_not_found\">> }),\n\t{noreply, State#state{ request_pid_by_ref = maps:remove(Ref, Map) }};\n\nhandle_info({event, solution,\n\t\t{rejected, #{ reason := bad_vdf, source := {pool, Ref} }}}, State) ->\n\t#state{ request_pid_by_ref = Map } = State,\n\tPID = maps:get(Ref, Map),\n\tgen_server:reply(PID, #partial_solution_response{ status = <<\"rejected_bad_vdf\">> }),\n\t{noreply, State#state{ request_pid_by_ref = maps:remove(Ref, Map) }};\n\nhandle_info({event, solution,\n\t\t{rejected, #{ reason := invalid_packing_difficulty, source := {pool, Ref} }}}, State) ->\n\t#state{ request_pid_by_ref = Map } = State,\n\tPID = maps:get(Ref, Map),\n\tgen_server:reply(PID,\n\t\t\t#partial_solution_response{ status = <<\"rejected_invalid_packing_difficulty\">> }),\n\t{noreply, State#state{ request_pid_by_ref = maps:remove(Ref, Map) }};\n\nhandle_info({event, solution, {partial, #{ source := {pool, Ref} }}}, State) ->\n\t#state{ request_pid_by_ref = Map } = State,\n\tPID = maps:get(Ref, Map),\n\tgen_server:reply(PID, #partial_solution_response{ status = <<\"accepted\">> }),\n\t{noreply, State#state{ request_pid_by_ref = maps:remove(Ref, Map) }};\n\nhandle_info({event, solution, {stale, #{ source := {pool, Ref} }}}, State) ->\n\t#state{ request_pid_by_ref = Map } = State,\n\tPID = maps:get(Ref, Map),\n\tgen_server:reply(PID, #partial_solution_response{ status = <<\"stale\">> }),\n\t{noreply, State#state{ request_pid_by_ref = maps:remove(Ref, Map) }};\n\nhandle_info({event, solution,\n\t\t{accepted, #{ indep_hash := H, source := {pool, Ref} }}}, State) ->\n\t#state{ request_pid_by_ref = Map } = State,\n\tPID = maps:get(Ref, 
Map),\n\tgen_server:reply(PID,\n\t\t\t#partial_solution_response{ indep_hash = H, status = <<\"accepted_block\">> }),\n\t{noreply, State#state{ request_pid_by_ref = maps:remove(Ref, Map) }};\n\nhandle_info({event, solution, _Event}, State) ->\n\t{noreply, State};\n\nhandle_info(Message, State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {module, ?MODULE}, {message, Message}]),\n\t{noreply, State}.\n\nterminate(Reason, _State) ->\n\t?LOG_INFO([{module, ?MODULE},{pid, self()},{callback, terminate},{reason, Reason}]),\n\tok.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nget_jobs(PrevOutput, SessionKeys, JobCache) ->\n\tcase SessionKeys of\n\t\t[] ->\n\t\t\t#jobs{};\n\t\t[{NextSeed, Interval, NextVDFDifficulty} = SessionKey | _] ->\n\t\t\tJobs = maps:get(SessionKey, JobCache),\n\t\t\t{Seed, PartialDiff, Jobs2} = collect_jobs(Jobs, PrevOutput, ?GET_JOBS_COUNT),\n\t\t\tJobs3 = [#job{ output = O, global_step_number = SN,\n\t\t\t\t\tpartition_upper_bound = U } || {O, SN, U} <- Jobs2],\n\t\t\t#jobs{ jobs = Jobs3, seed = Seed, partial_diff = PartialDiff,\n\t\t\t\t\tnext_seed = NextSeed,\n\t\t\t\t\tinterval_number = Interval, next_vdf_difficulty = NextVDFDifficulty }\n\tend.\n\ncollect_jobs([], _PrevO, _N) ->\n\t{<<>>, {0, 0}, []};\ncollect_jobs(_Jobs, _PrevO, 0) ->\n\t{<<>>, {0, 0}, []};\ncollect_jobs([{O, _SN, _U, _S, _PartialDiff} | _Jobs], O, _N) ->\n\t{<<>>, {0, 0}, []};\ncollect_jobs([{O, SN, U, S, PartialDiff} | Jobs], PrevO, N) ->\n\t{S, PartialDiff, [{O, SN, U} | collect_jobs(Jobs, PrevO, N - 1, PartialDiff)]}.\n\ncollect_jobs([], _PrevO, _N, _PartialDiff) ->\n\t[];\ncollect_jobs(_Jobs, _PrevO, 0, _PartialDiff) ->\n\t[];\ncollect_jobs([{O, _SN, _U, _S, _PartialDiff} | _Jobs], O, _N, _PartialDiff2) ->\n\t[];\ncollect_jobs([{O, SN, U, _S, PartialDiff} | Jobs], PrevO, N, PartialDiff) ->\n\t[{O, SN, U} | collect_jobs(Jobs, PrevO, N - 1, PartialDiff)];\ncollect_jobs(_Jobs, _PrevO, _N, _PartialDiff) ->\n\t%% PartialDiff mismatch.\n\t[].\n\nprocess_partial_solution(Solution, Ref) ->\n\tPoA1 = Solution#mining_solution.poa1,\n\tPoA2 = Solution#mining_solution.poa2,\n\tcase ar_block:validate_proof_size(PoA1) andalso ar_block:validate_proof_size(PoA2) of\n\t\ttrue ->\n\t\t\tprocess_partial_solution_field_size(Solution, Ref);\n\t\tfalse ->\n\t\t\t#partial_solution_response{ status = <<\"rejected_bad_poa\">> }\n\tend.\n\nprocess_partial_solution_field_size(Solution, Ref) ->\n\t#mining_solution{\n\t\tnonce_limiter_output = Output,\n\t\tseed = Seed,\n\t\tnext_seed = NextSeed,\n\t\tmining_address = MiningAddress,\n\t\tpreimage = Preimage,\n\t\tsolution_hash = SolutionH\n\t} = Solution,\n\t%% We have less strict deserialization in the pool pipeline to simplify\n\t%% the pool \"proxy\" implementation. 
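The expected sizes are 32 bytes for the nonce limiter\n\t%% output, mining address, preimage and solution hash, and 48 bytes for the seed and next seed. 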
Therefore, we validate the field sizes here\n\t%% and return the \"rejected_bad_poa\" status in case of a failure.\n\tcase {byte_size(Output), byte_size(Seed), byte_size(NextSeed), byte_size(MiningAddress),\n\t\t\tbyte_size(Preimage), byte_size(SolutionH)} of\n\t\t{32, 48, 48, 32, 32, 32} ->\n\t\t\tcase assert_chunk_sizes(Solution) of\n\t\t\t\t{true, Solution2} ->\n\t\t\t\t\tprocess_partial_solution_poa2_size(Solution2, Ref);\n\t\t\t\t{false, _} ->\n\t\t\t\t\t#partial_solution_response{ status = <<\"rejected_bad_poa\">> }\n\t\t\tend;\n\t\t_ ->\n\t\t\t#partial_solution_response{ status = <<\"rejected_bad_poa\">> }\n\tend.\n\nassert_chunk_sizes(Solution) ->\n\t#mining_solution{\n\t\tpacking_difficulty = PackingDifficulty,\n\t\trecall_byte2 = RecallByte2,\n\t\tpoa1 = #poa{ chunk = C1, unpacked_chunk = U1 } = PoA1,\n\t\tpoa2 = #poa{ chunk = C2, unpacked_chunk = U2 } = PoA2\n\t} = Solution,\n\tSolutionResetUnpackedChunk2 = Solution#mining_solution{\n\t\t\tpoa2 = PoA2#poa{ unpacked_chunk = <<>> }\n\t},\n\tSolutionResetUnpackedChunks = SolutionResetUnpackedChunk2#mining_solution{\n\t\t\tpoa1 = PoA1#poa{ unpacked_chunk = <<>> }\n\t},\n\tC1Size = byte_size(C1),\n\tC2Size = byte_size(C2),\n\tU1Size = byte_size(U1),\n\tU2Size = byte_size(U2),\n\tIsC1FullSize = C1Size == ?DATA_CHUNK_SIZE,\n\tIsC1SubChunkSize = C1Size == ?COMPOSITE_PACKING_SUB_CHUNK_SIZE,\n\tIsC2Empty = C2Size == 0,\n\tIsC2FullSize = C2Size == ?DATA_CHUNK_SIZE,\n\tIsC2SubChunkSize = C2Size == ?COMPOSITE_PACKING_SUB_CHUNK_SIZE,\n\t%% When the packing is composite (packing_difficulty >= 1), The unpacked chunk is\n\t%% expected to be 0-padded when smaller than ?DATA_CHUNK_SIZE.\n\tIsU1FullSize = U1Size == ?DATA_CHUNK_SIZE,\n\tIsU2FullSize = U2Size == ?DATA_CHUNK_SIZE,\n\tcase {PackingDifficulty >= 1, RecallByte2} of\n\t\t{false, undefined} ->\n\t\t\t{IsC1FullSize andalso IsC2Empty, SolutionResetUnpackedChunks};\n\t\t{true, undefined} ->\n\t\t\t{IsC1SubChunkSize andalso IsC2Empty andalso IsU1FullSize,\n\t\t\t\t\tSolutionResetUnpackedChunk2};\n\t\t{false, _} ->\n\t\t\t{IsC1FullSize andalso IsC2FullSize, SolutionResetUnpackedChunks};\n\t\t{true, _} ->\n\t\t\t{IsC1SubChunkSize andalso IsC2SubChunkSize\n\t\t\t\t\tandalso IsU1FullSize andalso IsU2FullSize, Solution}\n\tend.\n\nprocess_partial_solution_poa2_size(Solution, Ref) ->\n\t#mining_solution{\n\t\tpoa2 = #poa{ chunk = C, data_path = DP, tx_path = TP, unpacked_chunk = U }\n\t} = Solution,\n\tcase ar_mining_server:is_one_chunk_solution(Solution) of\n\t\ttrue ->\n\t\t\tcase {C, DP, TP, U} of\n\t\t\t\t{<<>>, <<>>, <<>>, <<>>} ->\n\t\t\t\t\tprocess_partial_solution_partition_number(Solution, Ref);\n\t\t\t\t_ ->\n\t\t\t\t\t#partial_solution_response{ status = <<\"rejected_bad_poa\">> }\n\t\t\tend;\n\t\tfalse ->\n\t\t\tprocess_partial_solution_partition_number(Solution, Ref)\n\tend.\n\nprocess_partial_solution_partition_number(Solution, Ref) ->\n\tPartitionNumber = Solution#mining_solution.partition_number,\n\tPartitionUpperBound = Solution#mining_solution.partition_upper_bound,\n\tMax = ar_node:get_max_partition_number(PartitionUpperBound),\n\tcase PartitionNumber > Max of\n\t\tfalse ->\n\t\t\tprocess_partial_solution_packing_difficulty(Solution, Ref);\n\t\ttrue ->\n\t\t\t#partial_solution_response{ status = <<\"rejected_bad_poa\">> }\n\tend.\n\nprocess_partial_solution_packing_difficulty(Solution, Ref) ->\n\t#mining_solution{ packing_difficulty = PackingDifficulty, replica_format = ReplicaFormat } = Solution,\n\tHeight = ar_node:get_height(),\n\tcase ar_block:validate_replica_format(Height, 
PackingDifficulty, ReplicaFormat) of\n\t\ttrue ->\n\t\t\tprocess_partial_solution_nonce(Solution, Ref);\n\t\tfalse ->\n\t\t\t#partial_solution_response{ status = <<\"rejected_bad_poa\">> }\n\tend.\n\nprocess_partial_solution_nonce(Solution, Ref) ->\n\tMax = ar_block:get_max_nonce(Solution#mining_solution.packing_difficulty),\n\tcase Solution#mining_solution.nonce > Max of\n\t\tfalse ->\n\t\t\tprocess_partial_solution_quick_pow(Solution, Ref);\n\t\ttrue ->\n\t\t\t#partial_solution_response{ status = <<\"rejected_bad_poa\">> }\n\tend.\n\nprocess_partial_solution_quick_pow(Solution, Ref) ->\n\t#mining_solution{\n\t\tnonce_limiter_output = NonceLimiterOutput,\n\t\tpartition_number = PartitionNumber,\n\t\tseed = Seed,\n\t\tmining_address = MiningAddress,\n\t\tpreimage = Preimage,\n\t\tsolution_hash = SolutionH,\n\t\tpacking_difficulty = PackingDifficulty\n\t} = Solution,\n\tH0 = ar_block:compute_h0(NonceLimiterOutput, PartitionNumber, Seed, MiningAddress,\n\t\t\tPackingDifficulty),\n\tcase ar_block:compute_solution_h(H0, Preimage) of\n\t\tSolutionH ->\n\t\t\tprocess_partial_solution_pow(Solution, Ref, H0);\n\t\t_ ->\n\t\t\t%% Solution hash mismatch (pattern matching against solution_hash = SolutionH).\n\t\t\t#partial_solution_response{ status = <<\"rejected_wrong_hash\">> }\n\tend.\n\nprocess_partial_solution_pow(Solution, Ref, H0) ->\n\t#mining_solution{\n\t\tnonce = Nonce,\n\t\tpoa1 = #poa{ chunk = Chunk1 },\n\t\tsolution_hash = SolutionH,\n\t\tpreimage = Preimage,\n\t\tpoa2 = #poa{ chunk = Chunk2 }\n\t} = Solution,\n\t{H1, Preimage1} = ar_block:compute_h1(H0, Nonce, Chunk1),\n\n\tcase {H1 == SolutionH andalso Preimage1 == Preimage,\n\t\t\tar_mining_server:is_one_chunk_solution(Solution)} of\n\t\t{true, false} ->\n\t\t\t#partial_solution_response{ status = <<\"rejected_bad_poa\">> };\n\t\t{true, true} ->\n\t\t\tprocess_partial_solution_partition_upper_bound(Solution, Ref, H0, H1);\n\t\t{false, true} ->\n\t\t\t#partial_solution_response{ status = <<\"rejected_bad_poa\">> };\n\t\t{false, false} ->\n\t\t\t{H2, Preimage2} = ar_block:compute_h2(H1, Chunk2, H0),\n\t\t\tcase H2 == SolutionH andalso Preimage2 == Preimage of\n\t\t\t\tfalse ->\n\t\t\t\t\t#partial_solution_response{ status = <<\"rejected_wrong_hash\">> };\n\t\t\t\ttrue ->\n\t\t\t\t\tprocess_partial_solution_partition_upper_bound(Solution, Ref, H0, H1)\n\t\t\tend\n\tend.\n\nprocess_partial_solution_partition_upper_bound(Solution, Ref, H0, H1) ->\n\t#mining_solution{ partition_upper_bound = PartitionUpperBound } = Solution,\n\t%% We are going to validate the VDF data later anyways; here we simply want to\n\t%% make sure the upper bound is positive so that the recall byte calculation\n\t%% does not fail as it takes a remainder of the division by partition upper bound.\n\tcase PartitionUpperBound > 0 of\n\t\ttrue ->\n\t\t\tprocess_partial_solution_poa(Solution, Ref, H0, H1);\n\t\t_ ->\n\t\t\t#partial_solution_response{ status = <<\"rejected_bad_poa\">> }\n\tend.\n\nprocess_partial_solution_poa(Solution, Ref, H0, H1) ->\n\t#mining_solution{\n\t\tpartition_number = PartitionNumber,\n\t\tpartition_upper_bound = PartitionUpperBound,\n\t\tnonce = Nonce,\n\t\trecall_byte1 = RecallByte1,\n\t\tpoa1 = PoA1,\n\t\tmining_address = MiningAddress,\n\t\tsolution_hash = SolutionH,\n\t\trecall_byte2 = RecallByte2,\n\t\tpoa2 = PoA2,\n\t\tpacking_difficulty = PackingDifficulty,\n\t\treplica_format = ReplicaFormat\n\t} = Solution,\n\t{RecallRange1Start, RecallRange2Start} = ar_block:get_recall_range(H0,\n\t\t\tPartitionNumber, 
PartitionUpperBound),\n\tComputedRecallByte1 = ar_block:get_recall_byte(RecallRange1Start, Nonce,\n\t\t\tPackingDifficulty),\n\t{BlockStart1, BlockEnd1, TXRoot1} = ar_block_index:get_block_bounds(ComputedRecallByte1),\n\tBlockSize1 = BlockEnd1 - BlockStart1,\n\tPacking = ar_block:get_packing(PackingDifficulty, MiningAddress, ReplicaFormat),\n\tSubChunkIndex = ar_block:get_sub_chunk_index(PackingDifficulty, Nonce),\n\tcase RecallByte1 == ComputedRecallByte1 andalso\n\t\t\tar_poa:validate({BlockStart1, RecallByte1, TXRoot1, BlockSize1, PoA1,\n\t\t\t\t\tPacking, SubChunkIndex, not_set}) of\n\t\terror ->\n\t\t\t?LOG_ERROR([{event, pool_failed_to_validate_proof_of_access}]),\n\t\t\t#partial_solution_response{ status = <<\"rejected_bad_poa\">> };\n\t\tfalse ->\n\t\t\t#partial_solution_response{ status = <<\"rejected_bad_poa\">> };\n\t\t{true, ChunkID} when H1 == SolutionH ->\n\t\t\tPoACache = {{BlockStart1, RecallByte1, TXRoot1, BlockSize1, Packing}, ChunkID},\n\t\t\tPoA2Cache = undefined,\n\t\t\tprocess_partial_solution_difficulty(Solution, Ref, PoACache, PoA2Cache);\n\t\t{true, ChunkID} ->\n\t\t\tComputedRecallByte2 = ar_block:get_recall_byte(RecallRange2Start, Nonce,\n\t\t\t\t\tPackingDifficulty),\n\t\t\t{BlockStart2, BlockEnd2, TXRoot2} = ar_block_index:get_block_bounds(\n\t\t\t\t\tComputedRecallByte2),\n\t\t\tBlockSize2 = BlockEnd2 - BlockStart2,\n\t\t\tcase RecallByte2 == ComputedRecallByte2 andalso\n\t\t\t\t\tar_poa:validate({BlockStart2, RecallByte2, TXRoot2, BlockSize2,\n\t\t\t\t\t\t\t\t\tPoA2, Packing, SubChunkIndex, not_set}) of\n\t\t\t\terror ->\n\t\t\t\t\t?LOG_ERROR([{event, pool_failed_to_validate_proof_of_access}]),\n\t\t\t\t\t#partial_solution_response{ status = <<\"rejected_bad_poa\">> };\n\t\t\t\tfalse ->\n\t\t\t\t\t#partial_solution_response{ status = <<\"rejected_bad_poa\">> };\n\t\t\t\t{true, Chunk2ID} ->\n\t\t\t\t\tPoA2Cache = {{BlockStart2, RecallByte2, TXRoot2, BlockSize2,\n\t\t\t\t\t\t\tPacking}, Chunk2ID},\n\t\t\t\t\tPoACache = {{BlockStart1, RecallByte1, TXRoot1, BlockSize1,\n\t\t\t\t\t\t\tPacking}, ChunkID},\n\t\t\t\t\tprocess_partial_solution_difficulty(Solution, Ref, PoACache, PoA2Cache)\n\t\t\tend\n\tend.\n\nprocess_partial_solution_difficulty(Solution, Ref, PoACache, PoA2Cache) ->\n\t#mining_solution{ solution_hash = SolutionH, recall_byte2 = RecallByte2,\n\t\t\tpacking_difficulty = PackingDifficulty } = Solution,\n\tIsPoA1 = (RecallByte2 == undefined),\n\tcase ar_node_utils:passes_diff_check(SolutionH, IsPoA1, ar_node:get_current_diff(),\n\t\t\tPackingDifficulty) of\n\t\tfalse ->\n\t\t\t#partial_solution_response{ status = <<\"accepted\">> };\n\t\ttrue ->\n\t\t\tprocess_partial_solution_vdf(Solution, Ref, PoACache, PoA2Cache)\n\tend.\n\nprocess_partial_solution_vdf(Solution, Ref, PoACache, PoA2Cache) ->\n\t#mining_solution{\n\t\tstep_number = StepNumber,\n\t\tnext_seed = NextSeed,\n\t\tstart_interval_number = StartIntervalNumber,\n\t\tnext_vdf_difficulty = NextVDFDifficulty,\n\t\tnonce_limiter_output = Output,\n\t\tseed = Seed,\n\t\tpartition_upper_bound = PartitionUpperBound\n\t} = Solution,\n\tSessionKey = {NextSeed, StartIntervalNumber, NextVDFDifficulty},\n\tMayBeLastStepCheckpoints = ar_nonce_limiter:get_step_checkpoints(StepNumber, SessionKey),\n\tMayBeSeed = ar_nonce_limiter:get_seed(SessionKey),\n\tMayBeUpperBound = ar_nonce_limiter:get_active_partition_upper_bound(StepNumber, SessionKey),\n\tcase {MayBeLastStepCheckpoints, MayBeSeed, MayBeUpperBound} of\n\t\t{not_found, _, _} ->\n\t\t\t#partial_solution_response{ status = <<\"rejected_vdf_not_found\">> 
};\n\t\t{_, not_found, _} ->\n\t\t\t#partial_solution_response{ status = <<\"rejected_vdf_not_found\">> };\n\t\t{_, _, not_found} ->\n\t\t\t#partial_solution_response{ status = <<\"rejected_vdf_not_found\">> };\n\t\t{[Output | _] = LastStepCheckpoints, Seed, PartitionUpperBound} ->\n\t\t\tSolution2 =\n\t\t\t\tSolution#mining_solution{\n\t\t\t\t\tlast_step_checkpoints = LastStepCheckpoints,\n\t\t\t\t\t%% ar_node_worker will fetch the required steps based on the prev block.\n\t\t\t\t\tsteps = not_found\n\t\t\t\t},\n\t\t\tar_node_worker:found_solution({pool, Ref}, Solution2, PoACache, PoA2Cache),\n\t\t\tnoreply;\n\t\t_ ->\n\t\t\t%% {Output, Seed, PartitionUpperBound} mismatch (pattern matching against\n\t\t\t%% the solution fields deconstructed above).\n\t\t\t#partial_solution_response{ status = <<\"rejected_bad_vdf\">> }\n\tend.\n\nprocess_h1_to_h2_jobs([], _Peer, _Partitions) ->\n\tok;\nprocess_h1_to_h2_jobs([Candidate | Jobs], Peer, Partitions) ->\n\tcase we_have_partition_for_the_second_recall_byte(Candidate, Partitions) of\n\t\ttrue ->\n\t\t\tar_coordination:compute_h2_for_peer(Peer, Candidate);\n\t\tfalse ->\n\t\t\tok\n\tend,\n\tprocess_h1_to_h2_jobs(Jobs, Peer, Partitions).\n\nprocess_h1_read_jobs([], _Partitions) ->\n\tok;\nprocess_h1_read_jobs([Candidate | Jobs], Partitions) ->\n\tcase we_have_partition_for_the_first_recall_byte(Candidate, Partitions) of\n\t\ttrue ->\n\t\t\tar_mining_server:prepare_and_post_solution(Candidate),\n\t\t\tar_mining_stats:h2_received_from_peer(pool);\n\t\tfalse ->\n\t\t\tok\n\tend,\n\tprocess_h1_read_jobs(Jobs, Partitions).\n\nwe_have_partition_for_the_first_recall_byte(_Candidate, []) ->\n\tfalse;\nwe_have_partition_for_the_first_recall_byte(\n\t\t#mining_candidate{ mining_address = Addr, partition_number = PartitionID,\n\t\t\t\tpacking_difficulty = PackingDifficulty },\n\t\t[{PartitionID, Addr, PackingDifficulty} | _Partitions]) ->\n\ttrue;\nwe_have_partition_for_the_first_recall_byte(Candidate, [_Partition | Partitions]) ->\n\t%% Mining address or partition number mismatch.\n\twe_have_partition_for_the_first_recall_byte(Candidate, Partitions).\n\nwe_have_partition_for_the_second_recall_byte(_Candidate, []) ->\n\tfalse;\nwe_have_partition_for_the_second_recall_byte(\n\t\t#mining_candidate{ mining_address = Addr, partition_number2 = PartitionID,\n\t\t\t\tpacking_difficulty = PackingDifficulty },\n\t\t[{PartitionID, Addr, PackingDifficulty} | _Partitions]) ->\n\ttrue;\nwe_have_partition_for_the_second_recall_byte(Candidate, [_Partition | Partitions]) ->\n\t%% Mining address or partition number mismatch.\n\twe_have_partition_for_the_second_recall_byte(Candidate, Partitions).\n\n%%%===================================================================\n%%% Tests.\n%%%===================================================================\n\nget_jobs_test() ->\n\t?assertEqual(#jobs{}, get_jobs(<<>>, [], maps:new())),\n\n\t?assertEqual(#jobs{ next_seed = ns, interval_number = in, next_vdf_difficulty = nvd },\n\t\t\tget_jobs(o, [{ns, in, nvd}],\n\t\t\t\t\t\t#{ {ns, in, nvd} => [{o, gsn, u, s, d}] })),\n\n\t?assertEqual(#jobs{ jobs = [#job{ output = o, global_step_number = gsn,\n\t\t\t\t\t\t\tpartition_upper_bound = u }],\n\t\t\t\t\t\tpartial_diff = d,\n\t\t\t\t\t\tseed = s,\n\t\t\t\t\t\tnext_seed = ns,\n\t\t\t\t\t\tinterval_number = in,\n\t\t\t\t\t\tnext_vdf_difficulty = nvd },\n\t\t\tget_jobs(a, [{ns, in, nvd}],\n\t\t\t\t\t\t#{ {ns, in, nvd} => [{o, gsn, u, s, d}] })),\n\n\t%% d2 /= d (the difficulties are different) => only take the latest 
job.\n\t?assertEqual(#jobs{ jobs = [#job{ output = o, global_step_number = gsn,\n\t\t\t\t\t\t\tpartition_upper_bound = u }],\n\t\t\t\t\t\tpartial_diff = d,\n\t\t\t\t\t\tseed = s,\n\t\t\t\t\t\tnext_seed = ns,\n\t\t\t\t\t\tinterval_number = in,\n\t\t\t\t\t\tnext_vdf_difficulty = nvd },\n\t\t\tget_jobs(a, [{ns, in, nvd}, {ns2, in2, nvd2}],\n\t\t\t\t\t\t#{ {ns, in, nvd} => [{o, gsn, u, s, d}, {o2, gsn2, u2, s, d2}],\n\t\t\t\t\t\t\t%% Same difficulty, but a different VDF session => not picked.\n\t\t\t\t\t\t\t{ns2, in2, nvd2} => [{o3, gsn3, u3, s3, d}] })),\n\n\t%% d2 == d => take both.\n\t?assertEqual(#jobs{ jobs = [#job{ output = o, global_step_number = gsn,\n\t\t\t\t\t\t\tpartition_upper_bound = u }, #job{ output = o2,\n\t\t\t\t\t\t\t\t\tglobal_step_number = gsn2, partition_upper_bound = u2 }],\n\t\t\t\t\t\tpartial_diff = d,\n\t\t\t\t\t\tseed = s,\n\t\t\t\t\t\tnext_seed = ns,\n\t\t\t\t\t\tinterval_number = in,\n\t\t\t\t\t\tnext_vdf_difficulty = nvd },\n\t\t\tget_jobs(a, [{ns, in, nvd}, {ns2, in2, nvd2}],\n\t\t\t\t\t\t#{ {ns, in, nvd} => [{o, gsn, u, s, d}, {o2, gsn2, u2, s, d}],\n\t\t\t\t\t\t\t{ns2, in2, nvd2} => [{o2, gsn2, u2, s2, d2}] })),\n\n\t%% Take strictly above the previous output.\n\t?assertEqual(#jobs{ jobs = [#job{ output = o, global_step_number = gsn,\n\t\t\t\t\t\t\t\tpartition_upper_bound = u }],\n\t\t\t\t\t\tpartial_diff = d,\n\t\t\t\t\t\tseed = s,\n\t\t\t\t\t\tnext_seed = ns,\n\t\t\t\t\t\tinterval_number = in,\n\t\t\t\t\t\tnext_vdf_difficulty = nvd },\n\t\t\tget_jobs(o2, [{ns, in, nvd}, {ns2, in2, nvd2}],\n\t\t\t\t\t\t#{ {ns, in, nvd} => [{o, gsn, u, s, d}, {o2, gsn2, u2, s, d}],\n\t\t\t\t\t\t\t{ns2, in2, nvd2} => [{o2, gsn2, u2, s2, d2}] })).\n\nprocess_partial_solution_test_() ->\n\tar_test_node:test_with_mocked_functions([\n\t\t{ar_block, compute_h0,\n\t\t\tfun(O, P, S, M, PD) ->\n\t\t\t\t\tcrypto:hash(sha256, << O/binary, P:256, S/binary, M/binary, PD:8 >>) end},\n\t\t{ar_block_index, get_block_bounds,\n\t\t\tfun(_Byte) ->\n\t\t\t\t{10, 110, << 1:256 >>}\n\t\t\tend},\n\t\t{ar_poa, validate,\n\t\t\tfun(Args) ->\n\t\t\t\tPoA = #poa{ tx_path = << 0:(2176 * 8) >>, data_path = << 0:(349504 * 8) >> },\n\t\t\t\tPoA2 = PoA#poa{ chunk = << 0:(262144 * 8) >> },\n\t\t\t\tCPoA = PoA#poa{ chunk = << 0:(8192 * 8) >>,\n\t\t\t\t\t\tunpacked_chunk = << 1:(262144 * 8) >> },\n\t\t\t\tcase Args of\n\t\t\t\t\t{10, _, << 1:256 >>, 100, PoA2, {spora_2_6, << 0:256 >>}, -1, not_set} ->\n\t\t\t\t\t\t{true, << 2:256 >>};\n\t\t\t\t\t{10, _, << 1:256 >>, 100, CPoA, {composite, << 0:256 >>, 1}, 30, not_set} ->\n\t\t\t\t\t\t{true, << 2:256 >>};\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tfalse\n\t\t\t\tend\n\t\t\tend},\n\t\t{ar_node, get_current_diff, fun() -> {?MAX_DIFF, ?MAX_DIFF} end},\n\t\t{ar_node, get_height, fun() -> 0 end}],\n\t\tfun test_process_partial_solution/0\n\t).\n\ntest_process_partial_solution() ->\n\tZero = << 0:256 >>,\n\tZero48 = << 0:(8*48) >>,\n\tH0 = ar_block:compute_h0(Zero, 0, Zero48, Zero, 0),\n\tSolutionHQuick = ar_block:compute_solution_h(H0, Zero),\n\tC = << 0:(262144 * 8) >>,\n\t{H1, Preimage1} = ar_block:compute_h1(H0, 1, C),\n\tSolutionH = ar_block:compute_solution_h(H0, Preimage1),\n\t{RecallRange1Start, RecallRange2Start} = ar_block:get_recall_range(H0, 0, 1),\n\tRecallByte1 = RecallRange1Start + 1 * ?DATA_CHUNK_SIZE,\n\t{H2, Preimage2} = ar_block:compute_h2(H1, C, H0),\n\tRecallByte2 = RecallRange2Start + 1 * ?DATA_CHUNK_SIZE,\n\tPoA = #poa{ chunk = C },\n\tCompositeSubChunk = << 0:(8192 * 8) >>,\n\tCPoA = #poa{ chunk = CompositeSubChunk },\n\tCH0 = ar_block:compute_h0(Zero, 0, 
Zero48, Zero, 1),\n\t{CH1, CPreimage1} = ar_block:compute_h1(CH0, 30, CompositeSubChunk),\n\tCSolutionH = ar_block:compute_solution_h(CH0, CPreimage1),\n\t{CRecallRange1Start, CRecallRange2Start} = ar_block:get_recall_range(CH0, 0, 1),\n\tCRecallByte1 = CRecallRange1Start,\n\t{CH2, CPreimage2} = ar_block:compute_h2(CH1, CompositeSubChunk, CH0),\n\tCRecallByte2 = CRecallRange2Start,\n\tTestCases = [\n\t\t{\"Bad proof size 0\",\n\t\t\t#mining_solution{ poa1 = #poa{} }, % Empty chunk.\n\t\t\t#partial_solution_response{ status = <<\"rejected_bad_poa\">> }},\n\t\t{\"Bad proof size 1\",\n\t\t\t#mining_solution{ poa1 = #poa{ chunk = C, tx_path = << 0:(2177 * 8) >> } },\n\t\t\t#partial_solution_response{ status = <<\"rejected_bad_poa\">> }},\n\t\t{\"Bad proof size 2\",\n\t\t\t#mining_solution{ poa1 = PoA,\n\t\t\t\t\tpoa2 = #poa{ chunk = C, tx_path = << 0:(2177 * 8) >> } },\n\t\t\t#partial_solution_response{ status = <<\"rejected_bad_poa\">> }},\n\t\t{\"Bad proof size 3\",\n\t\t\t#mining_solution{ poa1 = #poa{ chunk = C, data_path = << 0:(349505 * 8) >> } },\n\t\t\t#partial_solution_response{ status = <<\"rejected_bad_poa\">> }},\n\t\t{\"Bad proof size 4\",\n\t\t\t#mining_solution{ poa1 = PoA,\n\t\t\t\t\tpoa2 = #poa{ chunk = C, data_path = << 0:(349505 * 8) >> } },\n\t\t\t#partial_solution_response{ status = <<\"rejected_bad_poa\">> }},\n\t\t{\"Bad field size 1\",\n\t\t\t#mining_solution{ next_seed = <<>>, poa1 = PoA },\n\t\t\t#partial_solution_response{ status = <<\"rejected_bad_poa\">> }},\n\t\t{\"Bad field size 2\",\n\t\t\t#mining_solution{ seed = <<>>, poa1 = PoA },\n\t\t\t#partial_solution_response{ status = <<\"rejected_bad_poa\">> }},\n\t\t{\"Bad field size 3\",\n\t\t\t#mining_solution{ preimage = <<>>, poa1 = PoA },\n\t\t\t#partial_solution_response{ status = <<\"rejected_bad_poa\">> }},\n\t\t{\"Bad field size 4\",\n\t\t\t#mining_solution{ mining_address = <<>>, poa1 = PoA },\n\t\t\t#partial_solution_response{ status = <<\"rejected_bad_poa\">> }},\n\t\t{\"Bad field size 5\",\n\t\t\t#mining_solution{ nonce_limiter_output = <<>>, poa1 = PoA },\n\t\t\t#partial_solution_response{ status = <<\"rejected_bad_poa\">> }},\n\t\t{\"Bad field size 6\",\n\t\t\t#mining_solution{ solution_hash = <<>>, poa1 = PoA },\n\t\t\t#partial_solution_response{ status = <<\"rejected_bad_poa\">> }},\n\t\t{\"Bad field size 7\",\n\t\t\t#mining_solution{ poa1 = #poa{ chunk = << 0:((?DATA_CHUNK_SIZE + 1) * 8) >> }},\n\t\t\t#partial_solution_response{ status = <<\"rejected_bad_poa\">> }},\n\t\t{\"Bad field size 8\",\n\t\t\t#mining_solution{ poa1 = PoA,\n\t\t\t\t\tpoa2 = #poa{ chunk = << 0:((?DATA_CHUNK_SIZE + 1) * 8) >> }},\n\t\t\t#partial_solution_response{ status = <<\"rejected_bad_poa\">> }},\n\n\t\t{\"Bad partition number\",\n\t\t\t#mining_solution{ partition_number = 1, poa1 = PoA },\n\t\t\t#partial_solution_response{ status = <<\"rejected_bad_poa\">> }},\n\t\t{\"Bad nonce\",\n\t\t\t#mining_solution{ poa1 = PoA,\n\t\t\t\t\t%% We have 2 nonces per recall range (packing diff = 0) in debug mode.\n\t\t\t\t\tnonce = 2 },\n\t\t\t#partial_solution_response{ status = <<\"rejected_bad_poa\">> }},\n\t\t{\"Bad quick pow\",\n\t\t\t#mining_solution{ poa1 = PoA },\n\t\t\t#partial_solution_response{ status = <<\"rejected_wrong_hash\">> }},\n\t\t{\"Bad pow\",\n\t\t\t#mining_solution{ nonce = 1, solution_hash = SolutionHQuick,\n\t\t\t\t\tpreimage = Preimage1, partition_upper_bound = 1,\n\t\t\t\t\trecall_byte1 = RecallByte1,\n\t\t\t\t\tpoa1 = PoA#poa{ tx_path = << 0:(2176 * 8) >>,\n\t\t\t\t\t\tdata_path = << 0:(349504 * 8) >> 
}},\n\t\t\t#partial_solution_response{ status = <<\"rejected_wrong_hash\">> }},\n\t\t{\"Bad partition upper bound\",\n\t\t\t#mining_solution{ nonce = 1, solution_hash = SolutionH,\n\t\t\t\t\tpreimage = Preimage1, partition_upper_bound = 0,\n\t\t\t\t\trecall_byte1 = RecallByte1,\n\t\t\t\t\tpoa1 = PoA#poa{ tx_path = << 0:(2176 * 8) >>,\n\t\t\t\t\t\tdata_path = << 0:(349504 * 8) >> }},\n\t\t\t#partial_solution_response{ status = <<\"rejected_bad_poa\">> }},\n\t\t{\"Bad poa 1\",\n\t\t\t#mining_solution{ nonce = 1, solution_hash = SolutionH, preimage = Preimage1,\n\t\t\t\t\tpartition_upper_bound = 1, poa1 = PoA },\n\t\t\t#partial_solution_response{ status = <<\"rejected_bad_poa\">> }},\n\t\t{\"Bad poa 2\",\n\t\t\t#mining_solution{ nonce = 1, solution_hash = SolutionH,\n\t\t\t\t\tpreimage = Preimage1, partition_upper_bound = 1,\n\t\t\t\t\tpoa1 = PoA#poa{ tx_path = << 0:(2176 * 8) >>,\n\t\t\t\t\t\tdata_path = << 0:(349504 * 8) >> }},\n\t\t\t#partial_solution_response{ status = <<\"rejected_bad_poa\">> }},\n\t\t{\"Bad poa 3\",\n\t\t\t#mining_solution{ nonce = 1, solution_hash = SolutionH,\n\t\t\t\t\tpreimage = Preimage1, partition_upper_bound = 1,\n\t\t\t\t\trecall_byte1 = RecallByte1,\n\t\t\t\t\tpoa2 = #poa{ tx_path = << 0:(2176 * 8) >>,\n\t\t\t\t\t\tdata_path = << 0:(349504 * 8) >> },\n\t\t\t\t\tpoa1 = PoA#poa{ tx_path = << 0:(2176 * 8) >>,\n\t\t\t\t\t\tdata_path = << 0:(349504 * 8) >> }},\n\t\t\t#partial_solution_response{ status = <<\"rejected_bad_poa\">> }},\n\n\t\t{\"Two-chunk bad poa 1\",\n\t\t\t#mining_solution{ nonce = 1, solution_hash = SolutionH,\n\t\t\t\t\tpreimage = Preimage1, partition_upper_bound = 1,\n\t\t\t\t\trecall_byte1 = RecallByte1, recall_byte2 = 0,\n\t\t\t\t\tpoa2 = PoA#poa{ tx_path = << 0:(2176 * 8) >>,\n\t\t\t\t\t\tdata_path = << 0:(349504 * 8) >> },\n\t\t\t\t\tpoa1 = PoA#poa{ tx_path = << 0:(2176 * 8) >>,\n\t\t\t\t\t\tdata_path = << 0:(349504 * 8) >> }},\n\t\t\t#partial_solution_response{ status = <<\"rejected_bad_poa\">> }},\n\t\t{\"Two-chunk bad poa 2\",\n\t\t\t#mining_solution{ nonce = 1, solution_hash = SolutionH,\n\t\t\t\t\tpreimage = Preimage2, partition_upper_bound = 1,\n\t\t\t\t\trecall_byte1 = RecallByte1, recall_byte2 = 0,\n\t\t\t\t\tpoa2 = PoA#poa{ tx_path = << 0:(2176 * 8) >>,\n\t\t\t\t\t\tdata_path = << 0:(349504 * 8) >> },\n\t\t\t\t\tpoa1 = PoA#poa{ tx_path = << 0:(2176 * 8) >>,\n\t\t\t\t\t\tdata_path = << 0:(349504 * 8) >> }},\n\t\t\t#partial_solution_response{ status = <<\"rejected_wrong_hash\">> }},\n\t\t{\"Two-chunk bad poa 3\",\n\t\t\t#mining_solution{ nonce = 1, solution_hash = H2,\n\t\t\t\t\tpreimage = Preimage2, partition_upper_bound = 1,\n\t\t\t\t\trecall_byte1 = RecallByte1, recall_byte2 = 0,\n\t\t\t\t\tpoa2 = PoA#poa{ tx_path = << 0:(2176 * 8) >>,\n\t\t\t\t\t\tdata_path = << 0:(349504 * 8) >> },\n\t\t\t\t\tpoa1 = PoA#poa{ tx_path = << 0:(2176 * 8) >>,\n\t\t\t\t\t\tdata_path = << 0:(349504 * 8) >> }},\n\t\t\t#partial_solution_response{ status = <<\"rejected_bad_poa\">> }},\n\n\t\t{\"Accepted\",\n\t\t\t#mining_solution{ nonce = 1, solution_hash = SolutionH,\n\t\t\t\t\tpreimage = Preimage1, partition_upper_bound = 1,\n\t\t\t\t\trecall_byte1 = RecallByte1,\n\t\t\t\t\tpoa1 = PoA#poa{ tx_path = << 0:(2176 * 8) >>,\n\t\t\t\t\t\tdata_path = << 0:(349504 * 8) >> }},\n\t\t\t#partial_solution_response{ status = <<\"accepted\">> }},\n\t\t{\"Accepted 2\",\n\t\t\t#mining_solution{ nonce = 1, solution_hash = H2,\n\t\t\t\t\tpreimage = Preimage2, partition_upper_bound = 1,\n\t\t\t\t\trecall_byte1 = RecallByte1, recall_byte2 = RecallByte2,\n\t\t\t\t\tpoa2 = 
PoA#poa{ tx_path = << 0:(2176 * 8) >>,\n\t\t\t\t\t\tdata_path = << 0:(349504 * 8) >> },\n\t\t\t\t\tpoa1 = PoA#poa{ tx_path = << 0:(2176 * 8) >>,\n\t\t\t\t\t\tdata_path = << 0:(349504 * 8) >> }},\n\t\t\t#partial_solution_response{ status = <<\"accepted\">> }},\n\n\t\t{\"No unpacked chunk\",\n\t\t\t#mining_solution{ nonce = 30, solution_hash = CSolutionH,\n\t\t\t\t\tpreimage = CPreimage1, partition_upper_bound = 1,\n\t\t\t\t\trecall_byte1 = CRecallByte1,\n\t\t\t\t\tpacking_difficulty = 1,\n\t\t\t\t\tpoa1 = CPoA#poa{ tx_path = << 0:(2176 * 8) >>,\n\t\t\t\t\t\tdata_path = << 0:(349504 * 8) >> }},\n\t\t\t#partial_solution_response{ status = <<\"rejected_bad_poa\">> }},\n\t\t{\"Accepted packing difficulty=1\",\n\t\t\t#mining_solution{ nonce = 30, solution_hash = CSolutionH,\n\t\t\t\t\tpreimage = CPreimage1, partition_upper_bound = 1,\n\t\t\t\t\trecall_byte1 = CRecallByte1,\n\t\t\t\t\tpacking_difficulty = 1,\n\t\t\t\t\tpoa1 = CPoA#poa{ tx_path = << 0:(2176 * 8) >>,\n\t\t\t\t\t\tdata_path = << 0:(349504 * 8) >>,\n\t\t\t\t\t\tunpacked_chunk = << 1:(262144 * 8) >> }},\n\t\t\t#partial_solution_response{ status = <<\"accepted\">> }},\n\t\t{\"No second unpacked chunk\",\n\t\t\t#mining_solution{ nonce = 30, solution_hash = CH2,\n\t\t\t\t\tpreimage = CPreimage2, partition_upper_bound = 1,\n\t\t\t\t\trecall_byte1 = CRecallByte1, recall_byte2 = CRecallByte2,\n\t\t\t\t\tpacking_difficulty = 1,\n\t\t\t\t\tpoa2 = CPoA#poa{ tx_path = << 0:(2176 * 8) >>,\n\t\t\t\t\t\tdata_path = << 0:(349504 * 8) >> },\n\t\t\t\t\tpoa1 = CPoA#poa{ tx_path = << 0:(2176 * 8) >>,\n\t\t\t\t\t\tdata_path = << 0:(349504 * 8) >> }},\n\t\t\t#partial_solution_response{ status = <<\"rejected_bad_poa\">> }},\n\t\t{\"Accepted two-chunk packing difficulty=1\",\n\t\t\t#mining_solution{ nonce = 30, solution_hash = CH2,\n\t\t\t\t\tpreimage = CPreimage2, partition_upper_bound = 1,\n\t\t\t\t\trecall_byte1 = CRecallByte1, recall_byte2 = CRecallByte2,\n\t\t\t\t\tpacking_difficulty = 1,\n\t\t\t\t\tpoa2 = CPoA#poa{ tx_path = << 0:(2176 * 8) >>,\n\t\t\t\t\t\tdata_path = << 0:(349504 * 8) >>,\n\t\t\t\t\t\tunpacked_chunk = << 1:(262144 * 8) >> },\n\t\t\t\t\tpoa1 = CPoA#poa{ tx_path = << 0:(2176 * 8) >>,\n\t\t\t\t\t\tdata_path = << 0:(349504 * 8) >>,\n\t\t\t\t\t\tunpacked_chunk = << 1:(262144 * 8) >>}},\n\t\t\t#partial_solution_response{ status = <<\"accepted\">> }}\n\t],\n\tlists:foreach(\n\t\tfun({Title, Solution, ExpectedReply}) ->\n\t\t\tRef = make_ref(),\n\t\t\t?assertEqual(ExpectedReply, process_partial_solution(Solution, Ref), Title)\n\t\tend,\n\t\tTestCases\n\t).\n\nprocess_solution_test_() ->\n\tar_test_node:test_with_mocked_functions([\n\t\t{ar_block, compute_h0,\n\t\t\tfun(O, P, S, M, PD) ->\n\t\t\t\tcrypto:hash(sha256, << O/binary, P:256, S/binary, M/binary, PD:8 >>) end},\n\t\t{ar_block_index, get_block_bounds,\n\t\t\tfun(_Byte) ->\n\t\t\t\t{10, 110, << 1:256 >>}\n\t\t\tend},\n\t\t{ar_poa, validate,\n\t\t\tfun(Args) ->\n\t\t\t\tPoA = #poa{ tx_path = << 0:(2176 * 8) >>, data_path = << 0:(349504 * 8) >> },\n\t\t\t\tPoA2 = PoA#poa{ chunk = << 0:(262144 * 8) >> },\n\t\t\t\tCPoA = PoA#poa{ chunk = << 0:(8192 * 8) >>,\n\t\t\t\t\t\tunpacked_chunk = << 1:(262144 * 8) >> },\n\t\t\t\tcase Args of\n\t\t\t\t\t{10, _, << 1:256 >>, 100, PoA2, {spora_2_6, << 0:256 >>}, -1, not_set} ->\n\t\t\t\t\t\t{true, << 2:256 >>};\n\t\t\t\t\t{10, _, << 1:256 >>, 100, CPoA, {composite, << 0:256 >>, 2}, 31, not_set} ->\n\t\t\t\t\t\t{true, << 2:256 >>};\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tfalse\n\t\t\t\tend\n\t\t\tend},\n\t\t{ar_node, get_current_diff, fun() -> {0, 0} 
end},\n\t\t{ar_node, get_height, fun() -> 0 end},\n\t\t{ar_nonce_limiter, get_step_checkpoints,\n\t\t\tfun(S, {N, SIN, D}) ->\n\t\t\t\tcase {S, N, SIN, D} of\n\t\t\t\t\t{0, << 10:(48*8) >>, 0, 0} ->\n\t\t\t\t\t\t%% Test not found.\n\t\t\t\t\t\tnot_found;\n\t\t\t\t\t{0, << 3:(48*8) >>, 0, 0} ->\n\t\t\t\t\t\t%% Test output mismatch (<< 1:256 >> /= << 0:256 >>).\n\t\t\t\t\t\t[<< 1:256 >>];\n\t\t\t\t\t_ ->\n\t\t\t\t\t\t[<< 0:256 >>]\n\t\t\t\tend\n\t\t\tend},\n\t\t{ar_nonce_limiter, get_seed,\n\t\t\tfun({N, SIN, D}) ->\n\t\t\t\tcase {N, SIN, D} of\n\t\t\t\t\t{<< 11:(48*8) >>, 0, 0} ->\n\t\t\t\t\t\t%% Test not_found.\n\t\t\t\t\t\tnot_found;\n\t\t\t\t\t{<< 2:(48*8) >>, 0, 0} ->\n\t\t\t\t\t\t%% Test seed mismatch (<< 3:(48*8) >> /= << 0:(48*8) >>).\n\t\t\t\t\t\t<< 3:(48*8) >>;\n\t\t\t\t\t_ ->\n\t\t\t\t\t\t<< 0:(48*8) >>\n\t\t\t\tend\n\t\t\tend},\n\t\t{ar_nonce_limiter, get_active_partition_upper_bound,\n\t\t\tfun(S, {N, SIN, D}) ->\n\t\t\t\tcase {S, N, SIN, D} of\n\t\t\t\t\t{0, << 12:(48*8) >>, 0, 0} ->\n\t\t\t\t\t\t%% Test not_found.\n\t\t\t\t\t\tnot_found;\n\t\t\t\t\t{0, << 1:(48*8) >>, 0, 0} ->\n\t\t\t\t\t\t%% Test partition upper bound mismatch (2 /= 1).\n\t\t\t\t\t\t2;\n\t\t\t\t\t_ ->\n\t\t\t\t\t\t1\n\t\t\t\tend\n\t\t\tend},\n\t\t{ar_events, send, fun(_Type, _Payload) -> ok end},\n\t\t{ar_node_worker, found_solution, fun(_, _, _, _) -> ok end}],\n\t\tfun test_process_solution/0\n\t).\n\ntest_process_solution() ->\n\tZero = << 0:256 >>,\n\tZero48 = << 0:(48*8) >>,\n\tC = << 0:(262144 * 8) >>,\n\tH0 = ar_block:compute_h0(Zero, 0, Zero48, Zero, 0),\n\t{_H1, Preimage1} = ar_block:compute_h1(H0, 1, C),\n\tSolutionH = ar_block:compute_solution_h(H0, Preimage1),\n\t{RecallRange1Start, _RecallRange2Start} = ar_block:get_recall_range(H0, 0, 1),\n\tRecallByte1 = RecallRange1Start + 1 * ?DATA_CHUNK_SIZE,\n\tPoA = #poa{ chunk = C },\n\tCompositeSubChunk = << 0:(8192 * 8) >>,\n\tCPoA = #poa{ chunk = CompositeSubChunk },\n\tCH0 = ar_block:compute_h0(Zero, 0, Zero48, Zero, 2),\n\t{_CH1, CPreimage1} = ar_block:compute_h1(CH0, 31, CompositeSubChunk),\n\tCSolutionH = ar_block:compute_solution_h(CH0, CPreimage1),\n\t{CRecallRange1Start, _CRecallRange2Start} = ar_block:get_recall_range(CH0, 0, 1),\n\tCRecallByte1 = CRecallRange1Start,\n\tTestCases = [\n\t\t{\"VDF not found\",\n\t\t\t#mining_solution{ next_seed = << 10:(48*8) >>, nonce = 1, solution_hash = SolutionH,\n\t\t\t\t\tpreimage = Preimage1, partition_upper_bound = 1,\n\t\t\t\t\trecall_byte1 = RecallByte1,\n\t\t\t\t\tpoa1 = PoA#poa{ tx_path = << 0:(2176 * 8) >>,\n\t\t\t\t\t\tdata_path = << 0:(349504 * 8) >> }},\n\t\t\t#partial_solution_response{ status = <<\"rejected_vdf_not_found\">> }},\n\t\t{\"VDF not found 2\",\n\t\t\t#mining_solution{ next_seed = << 11:(48*8) >>, nonce = 1, solution_hash = SolutionH,\n\t\t\t\t\tpreimage = Preimage1, partition_upper_bound = 1,\n\t\t\t\t\trecall_byte1 = RecallByte1,\n\t\t\t\t\tpoa1 = PoA#poa{ tx_path = << 0:(2176 * 8) >>,\n\t\t\t\t\t\tdata_path = << 0:(349504 * 8) >> }},\n\t\t\t#partial_solution_response{ status = <<\"rejected_vdf_not_found\">> }},\n\t\t{\"VDF not found 3\",\n\t\t\t#mining_solution{ next_seed = << 12:(48*8) >>, nonce = 1, solution_hash = SolutionH,\n\t\t\t\t\tpreimage = Preimage1, partition_upper_bound = 1,\n\t\t\t\t\trecall_byte1 = RecallByte1,\n\t\t\t\t\tpoa1 = PoA#poa{ tx_path = << 0:(2176 * 8) >>,\n\t\t\t\t\t\tdata_path = << 0:(349504 * 8) >> }},\n\t\t\t#partial_solution_response{ status = <<\"rejected_vdf_not_found\">> }},\n\t\t{\"Bad VDF 1\",\n\t\t\t#mining_solution{ next_seed = << 1:(48*8) >>, 
nonce = 1, solution_hash = SolutionH,\n\t\t\t\t\tpreimage = Preimage1, partition_upper_bound = 1,\n\t\t\t\t\trecall_byte1 = RecallByte1,\n\t\t\t\t\tpoa1 = PoA#poa{ tx_path = << 0:(2176 * 8) >>,\n\t\t\t\t\t\tdata_path = << 0:(349504 * 8) >> }},\n\t\t\t#partial_solution_response{ status = <<\"rejected_bad_vdf\">> }},\n\t\t{\"Bad VDF 2\",\n\t\t\t#mining_solution{ next_seed = << 2:(48*8) >>, nonce = 1, solution_hash = SolutionH,\n\t\t\t\t\tpreimage = Preimage1, partition_upper_bound = 1,\n\t\t\t\t\trecall_byte1 = RecallByte1,\n\t\t\t\t\tpoa1 = PoA#poa{ tx_path = << 0:(2176 * 8) >>,\n\t\t\t\t\t\tdata_path = << 0:(349504 * 8) >> }},\n\t\t\t#partial_solution_response{ status = <<\"rejected_bad_vdf\">> }},\n\t\t{\"Bad VDF 3\",\n\t\t\t#mining_solution{ next_seed = << 3:(48*8) >>, nonce = 1, solution_hash = SolutionH,\n\t\t\t\t\tpreimage = Preimage1, partition_upper_bound = 1,\n\t\t\t\t\trecall_byte1 = RecallByte1,\n\t\t\t\t\tpoa1 = PoA#poa{ tx_path = << 0:(2176 * 8) >>,\n\t\t\t\t\t\tdata_path = << 0:(349504 * 8) >> }},\n\t\t\t#partial_solution_response{ status = <<\"rejected_bad_vdf\">> }},\n\t\t{\"Accepted\",\n\t\t\t#mining_solution{ next_seed = << 4:(48*8) >>, nonce = 1, solution_hash = SolutionH,\n\t\t\t\t\tpreimage = Preimage1, partition_upper_bound = 1,\n\t\t\t\t\trecall_byte1 = RecallByte1,\n\t\t\t\t\tpoa1 = PoA#poa{ tx_path = << 0:(2176 * 8) >>,\n\t\t\t\t\t\tdata_path = << 0:(349504 * 8) >> }},\n\t\t\tnoreply},\n\t\t{\"Accepted packing diff=2\",\n\t\t\t#mining_solution{ next_seed = << 4:(48*8) >>, nonce = 31,\n\t\t\t\t\tsolution_hash = CSolutionH,\n\t\t\t\t\tpreimage = CPreimage1, partition_upper_bound = 1,\n\t\t\t\t\trecall_byte1 = CRecallByte1,\n\t\t\t\t\tpacking_difficulty = 2,\n\t\t\t\t\tpoa1 = CPoA#poa{ tx_path = << 0:(2176 * 8) >>,\n\t\t\t\t\t\tdata_path = << 0:(349504 * 8) >>,\n\t\t\t\t\t\tunpacked_chunk = << 1:(262144 * 8) >> }},\n\t\t\t%% The difficulty is about 32 times higher now (because we can try 32x nonces).\n\t\t\t%% However, the recall range reduction (1 / (4 (base) * 2 (packing diff)))\n\t\t\t%% make it only about 4 times higher.\n\t\t\t%% The inputs are deterministic.\n\t\t\tnoreply}\n\t],\n\tlists:foreach(\n\t\tfun({Title, Solution, ExpectedReply}) ->\n\t\t\tRef = make_ref(),\n\t\t\t?assertEqual(ExpectedReply, process_partial_solution(Solution, Ref), Title)\n\t\tend,\n\t\tTestCases\n\t).\n"
  },
  {
    "path": "apps/arweave/src/ar_pool_cm_job_poller.erl",
    "content": "-module(ar_pool_cm_job_poller).\n\n-behaviour(gen_server).\n\n-export([start_link/0]).\n\n-export([init/1, handle_cast/2, handle_call/3, handle_info/2, terminate/2]).\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"arweave/include/ar_pool.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-record(state, {}).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Start the server.\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\n\ninit([]) ->\n\tcase {ar_pool:is_client(), ar_coordination:is_exit_peer()} of\n\t\t{true, true} ->\n\t\t\tgen_server:cast(self(), fetch_cm_jobs);\n\t\t_ ->\n\t\t\t%% If we are a CM miner and not an exit peer, our exit peer will push\n\t\t\t%% the pool CM jobs to us.\n\t\t\tok\n\tend,\n\t{ok, #state{}}.\n\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {request, Request}]),\n\t{reply, ok, State}.\n\nhandle_cast(fetch_cm_jobs, State) ->\n\tPeer = ar_pool:pool_peer(),\n\tPartitions = ar_coordination:get_cluster_partitions_list(),\n\tPartitionJobs = #pool_cm_jobs{ partitions = Partitions },\n\tcase ar_http_iface_client:get_pool_cm_jobs(Peer, PartitionJobs) of\n\t\t{ok, Jobs} ->\n\t\t\tpush_cm_jobs_to_cm_peers(Jobs),\n\t\t\tar_pool:process_cm_jobs(Jobs, Peer),\n\t\t\tar_util:cast_after(?FETCH_CM_JOBS_FREQUENCY_MS, self(), fetch_cm_jobs);\n\t\t{error, Error} ->\n\t\t\t?LOG_WARNING([{event, failed_to_fetch_pool_cm_jobs},\n\t\t\t\t\t{error, io_lib:format(\"~p\", [Error])}]),\n\t\t\tar_util:cast_after(?FETCH_CM_JOBS_RETRY_MS, self(), fetch_cm_jobs)\n\tend,\n\t{noreply, State};\n\nhandle_cast(Cast, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {cast, Cast}]),\n\t{noreply, State}.\n\nhandle_info(Message, State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {module, ?MODULE}, {message, Message}]),\n\t{noreply, State}.\n\nterminate(Reason, _State) ->\n\t?LOG_INFO([ {module, ?MODULE}, {pid, self()}, {callback, terminate}, {reason, Reason} ]),\n\tok.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\npush_cm_jobs_to_cm_peers(Jobs) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tPeers = Config#config.cm_peers,\n\tPayload = ar_serialize:jsonify(ar_serialize:pool_cm_jobs_to_json_struct(Jobs)),\n\tpush_cm_jobs_to_cm_peers(Payload, Peers).\n\npush_cm_jobs_to_cm_peers(_Payload, []) ->\n\tok;\npush_cm_jobs_to_cm_peers(Payload, [Peer | Peers]) ->\n\tspawn(fun() -> ar_http_iface_client:post_pool_cm_jobs(Peer, Payload) end),\n\tpush_cm_jobs_to_cm_peers(Payload, Peers).\n"
  },
  {
    "path": "apps/arweave/src/ar_pool_job_poller.erl",
    "content": "-module(ar_pool_job_poller).\n\n-behaviour(gen_server).\n\n-export([start_link/0]).\n\n-export([init/1, handle_cast/2, handle_call/3, handle_info/2, terminate/2]).\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"arweave/include/ar_pool.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-record(state, {}).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Start the server.\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\n\ninit([]) ->\n\tcase ar_pool:is_client() of\n\t\ttrue ->\n\t\t\tgen_server:cast(self(), fetch_jobs);\n\t\tfalse ->\n\t\t\tok\n\tend,\n\t{ok, #state{}}.\n\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {request, Request}]),\n\t{reply, ok, State}.\n\nhandle_cast(fetch_jobs, State) ->\n\tPrevOutput = (ar_pool:get_latest_job())#job.output,\n\t{ok, Config} = arweave_config:get_env(),\n\tPeer =\n\t\tcase {Config#config.coordinated_mining, Config#config.cm_exit_peer} of\n\t\t\t{true, not_set} ->\n\t\t\t\t%% We are a CM exit node.\n\t\t\t\tar_pool:pool_peer();\n\t\t\t{true, ExitPeer} ->\n\t\t\t\t%% We are a CM miner.\n\t\t\t\tExitPeer;\n\t\t\t_ ->\n\t\t\t\t%% We are a standalone pool client (a non-CM miner and a pool client).\n\t\t\t\tar_pool:pool_peer()\n\t\tend,\n\tcase ar_http_iface_client:get_jobs(Peer, PrevOutput) of\n\t\t{ok, Jobs} ->\n\t\t\temit_pool_jobs(Jobs),\n\t\t\tar_pool:cache_jobs(Jobs),\n\t\t\tar_util:cast_after(?FETCH_JOBS_FREQUENCY_MS, self(), fetch_jobs);\n\t\t{error, Error} ->\n\t\t\t?LOG_WARNING([{event, failed_to_fetch_pool_jobs},\n\t\t\t\t\t{error, io_lib:format(\"~p\", [Error])}]),\n\t\t\tar_util:cast_after(?FETCH_JOBS_RETRY_MS, self(), fetch_jobs)\n\tend,\n\t{noreply, State};\n\nhandle_cast(Cast, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {cast, Cast}]),\n\t{noreply, State}.\n\nhandle_info(Message, State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {module, ?MODULE}, {message, Message}]),\n\t{noreply, State}.\n\nterminate(Reason, _State) ->\n\t?LOG_INFO([ {module, ?MODULE}, {pid, self()}, {callback, terminate}, {reason, Reason} ]),\n\tok.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nemit_pool_jobs(Jobs) ->\n\tSessionKey = {Jobs#jobs.next_seed, Jobs#jobs.interval_number,\n\t\t\tJobs#jobs.next_vdf_difficulty},\n\temit_pool_jobs(Jobs#jobs.jobs, SessionKey, Jobs#jobs.partial_diff, Jobs#jobs.seed).\n\nemit_pool_jobs([], _SessionKey, _PartialDiff, _Seed) ->\n\tok;\nemit_pool_jobs([Job | Jobs], SessionKey, PartialDiff, Seed) ->\n\t#job{\n\t\toutput = Output, global_step_number = StepNumber,\n\t\tpartition_upper_bound = PartitionUpperBound } = Job,\n\tar_mining_server:add_pool_job(\n\t\tSessionKey, StepNumber, Output, PartitionUpperBound, Seed, PartialDiff),\n\temit_pool_jobs(Jobs, SessionKey, PartialDiff, Seed).\n"
  },
  {
    "path": "apps/arweave/src/ar_pricing.erl",
    "content": "-module(ar_pricing).\n\n%% 2.6 exports.\n-export([get_price_per_gib_minute/2, get_tx_fee/1,\n\t\tget_miner_reward_endowment_pool_debt_supply/1, recalculate_price_per_gib_minute/1,\n\t\tredenominate/3, may_be_redenominate/1,\n\t\tget_redenomination_threshold/0, get_redenomination_delay_blocks/0]).\n\n%% 2.5 exports.\n-export([get_tx_fee/4, get_miner_reward_and_endowment_pool/1,\n\t\tusd_to_ar_rate/1, usd_to_ar/3, recalculate_usd_to_ar_rate/1,\n\t\tget_storage_cost/4, get_expected_min_decline_rate/6]).\n\n%% For tests.\n-export([get_v2_price_per_gib_minute/2]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave/include/ar_inflation.hrl\").\n-include_lib(\"arweave/include/ar_pricing.hrl\").\n-include_lib(\"arweave/include/ar_consensus.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n%%%===================================================================\n%%% Types.\n%%%===================================================================\n\n-type nonegint() :: non_neg_integer().\n-type fraction() :: {integer(), integer()}.\n-type usd() :: float() | fraction().\n-type date() :: {nonegint(), nonegint(), nonegint()}.\n-type time() :: {nonegint(), nonegint(), nonegint()}.\n-type datetime() :: {date(), time()}.\n\n%%%===================================================================\n%%% Public interface 2.6+.\n%%%===================================================================\n\n%% @doc Return the price per gibibyte minute estimated from the given history of\n%% network hash rates and block rewards. The total reward used in calculations\n%% is at least 1 Winston, even if all block rewards from the given history are 0.\n%% Also, the returned price is always at least 1 Winston.\nget_price_per_gib_minute(Height, B) ->\n\tV2Price = get_v2_price_per_gib_minute(Height, B),\n\tar_pricing_transition:get_transition_price(Height, V2Price).\n\nget_v2_price_per_gib_minute(Height, B) ->\n\tOneDifficultyHeight = ar_fork:height_2_7() + ar_block_time_history:history_length(),\n\tTwoDifficultyHeight = ar_fork:height_2_7_2() + ar_block_time_history:history_length(),\n\n\tcase Height of\n\t\t_ when Height >= TwoDifficultyHeight ->\n\t\t\tget_v2_price_per_gib_minute_two_difficulty(Height, B);\n\t\t_ when Height >= OneDifficultyHeight ->\n\t\t\tget_v2_price_per_gib_minute_one_difficulty(Height, B);\n\t\t_ ->\n\t\t\tget_v2_price_per_gib_minute_simple(B)\n\tend.\n\nget_v2_price_per_gib_minute_two_difficulty(Height, B) ->\n\t{HashRateTotal, RewardTotal, History} = ar_rewards:get_reward_history_totals(B),\n\t{IntervalTotal, VDFIntervalTotal, OneChunkCount, TwoChunkCount} =\n\t\tar_block_time_history:sum_history(B),\n\t%% The intent of the SolutionsPerPartitionPerVDFStep is to estimate network replica\n\t%% count (how many copies of the weave are stored across the network).\n\t%% The logic behind this is complex - an explanation from @vird:\n\t%%\n\t%% 1. Naive solution: If we assume that each miner stores 1 replica, then we\n\t%%    can trivially calculate the network replica count using the network hashrate\n\t%%    (which we have) and the weave size (which we also have). However what if on\n\t%%    average each miner only stores 50% of the weave? In that case each miner will\n\t%%    get fewer hashes per partition (because they will miss out on 2-chunk solutions\n\t%%    that fall on the partitions they don't store), and that will push *up* the\n\t%%    replica count for a given network hashrate. 
How much to scale up our replica\n\t%%    count is based on the average replica count per miner.\n\t%%\n\t%% 2. Estimate average replica count per miner. Start with this basic assumption:\n\t%%    the higher the percentage of the weave a miner stores, the more likely they are\n\t%%    to mine a 2-chunk solution. If a miner has 100% of the weave and if the PoA1 and\n\t%%    PoA2 difficulties are the same, then, on average, 50% of their solutions will be\n\t%%    1-chunk, and 50% will be 2-chunk.\n\t%%\n\t%%    With this we can use the ratio of observed 2-chunk to 1-chunk solutions to\n\t%%    estimate the average percentage of the weave each miner stores.\n\t%%\n\t%% 3. However, what happens if the PoA1 difficulty is higher than the PoA2 difficulty?\n\t%%    In that case, we'd expect a miner with 100% of the weave to have fewer 1-chunk\n\t%%    solutions than 2-chunk solutions. If the PoA1 difficulty is PoA1Mult times higher\n\t%%    than the PoA2 difficulty, we'd expect the maximum number of solutions to be:\n\t%%    \n\t%%    (PoA1Mult + 1) * RecallRangeSize div (?DATA_CHUNK_SIZE * PoA1Mult)\n\t%%    \n\t%%    Or basically one 1-chunk solution for every PoA1Mult 2-chunk solutions in the\n\t%%    full-replica case.\n\t%%\n\t%% 4. Finally, what if the average miner is not mining a full replica? In that case we\n\t%%    need to arrive at an equation that weights the 1-chunk and 2-chunk solutions\n\t%%    differently - and use that to estimate the expected number of solutions per\n\t%%    partition:\n\t%%\n\t%%    EstimatedSolutionsPerPartition = \n\t%%    (\n\t%%      RecallRangeSize div PoA1Mult +\n\t%%\t\tRecallRangeSize * TwoChunkCount div (OneChunkCount * PoA1Mult)\n\t%%    ) div (?DATA_CHUNK_SIZE) \n\t%%\n\t%% The SolutionsPerPartitionPerVDFStep combines that average weave calculation\n\t%% with the expected number of solutions per partition per VDF step to arrive at a single\n\t%% number that can be used in the PricePerGiBPerMinute calculation.\n\tPoA1Mult = ar_difficulty:poa1_diff_multiplier(Height),\n\tRecallRangeSize = ar_block:get_recall_range_size(0),\n\tMaxSolutionsPerPartition =\n\t\t(PoA1Mult + 1) * RecallRangeSize div (?DATA_CHUNK_SIZE * PoA1Mult),\n\tSolutionsPerPartitionPerVDFStep =\n\t\tcase OneChunkCount of\n\t\t\t0 ->\n\t\t\t\tMaxSolutionsPerPartition;\n\t\t\t_ ->\n\t\t\t\t%% The following is a version of the EstimatedSolutionsPerPartition\n\t\t\t\t%% equation mentioned above that has been simplified to limit rounding\n\t\t\t\t%% errors:\n\t\t\t\tEstimatedSolutionsPerPartition =\n\t\t\t\t\t(OneChunkCount + TwoChunkCount) * RecallRangeSize\n\t\t\t\t\tdiv (?DATA_CHUNK_SIZE * OneChunkCount * PoA1Mult),\n\t\t\t\tmin(MaxSolutionsPerPartition, EstimatedSolutionsPerPartition)\n\t\tend,\n\t%% The following walks through the math of calculating the price per GiB per minute.\n\t%% However to reduce rounding errors due to divs, the uncommented equation at the\n\t%% end is used instead. Logically they should be the same. 
Notably the '* ?TARGET_BLOCK_TIME' in\n\t%% SolutionsPerPartitionPerBlock and the 'div ?TARGET_BLOCK_TIME' in PricePerGiBPerSecond cancel\n\t%% each other out.\n\t%%\n\t%% SolutionsPerPartitionPerSecond =\n\t%%          (SolutionsPerPartitionPerVDFStep * VDFIntervalTotal) div IntervalTotal\n\t%% SolutionsPerPartitionPerBlock = SolutionsPerPartitionPerSecond * ?TARGET_BLOCK_TIME,\n\t%% EstimatedPartitionCount = max(1, HashRateTotal) div SolutionsPerPartitionPerBlock,\n\t%% EstimatedDataSizeInGiB = EstimatedPartitionCount * (ar_block:partition_size()) div (?GiB),\n\t%% PricePerGiBPerBlock = max(1, RewardTotal) div EstimatedDataSizeInGiB,\n\t%% PricePerGiBPerSecond = PricePerGibPerBlock div ?TARGET_BLOCK_TIME\n\t%% PricePerGiBPerMinute = PricePerGiBPerSecond * 60,\n\tPricePerGiBPerMinute = \n\t\t(\n\t\t\t(SolutionsPerPartitionPerVDFStep * VDFIntervalTotal) *\n\t\t\tmax(1, RewardTotal) * (?GiB) * 60\n\t\t)\n\t\tdiv\n\t\t(\n\t\t\tIntervalTotal * max(1, HashRateTotal) * (ar_block:partition_size())\n\t\t),\n\tlog_price_metrics(get_v2_price_per_gib_minute_two_difficulty,\n\t\tHeight, History, HashRateTotal, RewardTotal, \n\t\tIntervalTotal, VDFIntervalTotal, OneChunkCount, TwoChunkCount,\n\t\tSolutionsPerPartitionPerVDFStep, PricePerGiBPerMinute),\n\tPricePerGiBPerMinute.\n\nget_v2_price_per_gib_minute_one_difficulty(Height, B) ->\n\t{HashRateTotal, RewardTotal, History} = ar_rewards:get_reward_history_totals(B),\n\t{IntervalTotal, VDFIntervalTotal, OneChunkCount, TwoChunkCount} =\n\t\tar_block_time_history:sum_history(B),\n\t%% The intent of the SolutionsPerPartitionPerVDFStep is to estimate network replica\n\t%% count (how many copies of the weave are stored across the network).\n\t%% The logic behind this is complex - an explanation from @vird:\n\t%%\n\t%% 1. Naive solution: If we assume that each miner stores 1 replica, then we\n\t%%    can trivially calculate the network replica count using the network hashrate\n\t%%    (which we have) and the weave size (which we also have). However what if on\n\t%%    average each miner only stores 50% of the weave? In that case each miner will\n\t%%    get fewer hashes per partition (because they will miss out on 2-chunk solutions\n\t%%    that fall on the partitions they don't store), and that will push *up* the\n\t%%    replica count for a given network hashrate. How much to scale up our replica\n\t%%    count is based on the average replica count per miner.\n\t%% 2. Estimate average replica count per miner: Start with this basic assumption:\n\t%%    the higher the percentage of the weave a miner stores, the more likely they are\n\t%%    to mine a 2-chunk solution. 
If a miner has 100% of the weave, then, on average,\n\t%%    50% of their solutions will be 1-chunk, and 50% will be 2-chunk.\n\t%%\n\t%%    With this we can use the ratio of observed 2-chunk to 1-chunk solutions to\n\t%%    estimate the average percentage of the weave each miner stores.\n\t%%\n\t%% The SolutionsPerPartitionPerVDFStep combines that average weave % calculation\n\t%% with the expected number of solutions per partition per VDF step to arrive at a single\n\t%% number that can be used in the PricePerGiBPerMinute calculation.\n\tRecallRangeSize = ?LEGACY_RECALL_RANGE_SIZE,\n\tSolutionsPerPartitionPerVDFStep =\n\t\tcase OneChunkCount of\n\t\t\t0 ->\n\t\t\t\t2 * RecallRangeSize div (?DATA_CHUNK_SIZE);\n\t\t\t_ ->\n\t\t\t\tmin(2 * RecallRangeSize,\n\t\t\t\t\t\tRecallRangeSize\n\t\t\t\t\t\t\t+ RecallRangeSize * TwoChunkCount div OneChunkCount)\n\t\t\t\t\tdiv ?DATA_CHUNK_SIZE\n\t\tend,\n\t%% The following walks through the math of calculating the price per GiB per minute.\n\t%% However to reduce rounding errors due to divs, the uncommented equation at the\n\t%% end is used instead. Logically they should be the same. Notably the '* ?TARGET_BLOCK_TIME' in\n\t%% SolutionsPerPartitionPerBlock and the 'div ?TARGET_BLOCK_TIME' in PricePerGiBPerSecond cancel\n\t%% each other out.\n\t%%\n\t%% SolutionsPerPartitionPerSecond =\n\t%%          (SolutionsPerPartitionPerVDFStep * VDFIntervalTotal) div IntervalTotal\n\t%% SolutionsPerPartitionPerBlock = SolutionsPerPartitionPerSecond * ?TARGET_BLOCK_TIME,\n\t%% EstimatedPartitionCount = max(1, HashRateTotal) div SolutionsPerPartitionPerBlock,\n\t%% EstimatedDataSizeInGiB = EstimatedPartitionCount * (ar_block:partition_size()) div (?GiB),\n\t%% PricePerGiBPerBlock = max(1, RewardTotal) div EstimatedDataSizeInGiB,\n\t%% PricePerGiBPerSecond = PricePerGibPerBlock div ?TARGET_BLOCK_TIME\n\t%% PricePerGiBPerMinute = PricePerGiBPerSecond * 60,\n\tPricePerGiBPerMinute = \n\t\t(\n\t\t\t(SolutionsPerPartitionPerVDFStep * VDFIntervalTotal) *\n\t\t\tmax(1, RewardTotal) * (?GiB) * 60\n\t\t)\n\t\tdiv\n\t\t(\n\t\t\tIntervalTotal * max(1, HashRateTotal) * (ar_block:partition_size())\n\t\t),\n\tlog_price_metrics(get_v2_price_per_gib_minute_one_difficulty,\n\t\tHeight, History, HashRateTotal, RewardTotal, \n\t\tIntervalTotal, VDFIntervalTotal, OneChunkCount, TwoChunkCount,\n\t\tSolutionsPerPartitionPerVDFStep, PricePerGiBPerMinute),\n\tPricePerGiBPerMinute.\n\nget_v2_price_per_gib_minute_simple(B) ->\n\t{HashRateTotal, RewardTotal, _History} = ar_rewards:get_reward_history_totals(B),\n\t%% 2 recall ranges per partition per second.\n\tSolutionsPerPartitionPerSecond = 2 * (?LEGACY_RECALL_RANGE_SIZE) div (?DATA_CHUNK_SIZE),\n\tSolutionsPerPartitionPerMinute = SolutionsPerPartitionPerSecond * 60,\n\tSolutionsPerPartitionPerBlock = SolutionsPerPartitionPerMinute * 2,\n\t%% Estimated partition count = hash rate / 2 / solutions per partition per minute.\n\t%% 2 minutes is the average block time.\n\t%% Estimated data size = estimated partition count * partition size.\n\t%% Estimated price per gib minute = total block reward / estimated data size\n\t%% in gibibytes.\n\t(max(1, RewardTotal) * (?GiB) * SolutionsPerPartitionPerBlock)\n\t\tdiv (max(1, HashRateTotal)\n\t\t\t\t* (ar_block:partition_size())\n\t\t\t\t* 2\t% The reward is paid every two minutes whereas we are calculating\n\t\t\t\t\t% the minute rate here.\n\t\t\t).\n\n%% @doc Return the minimum required transaction fee for the given number of\n%% total bytes stored and gibibyte minute price.\nget_tx_fee(Args) 
->\n\t{DataSize, GiBMinutePrice, KryderPlusRateMultiplier, Height} = Args,\n\tFirstYearPrice = DataSize * GiBMinutePrice * 60 * 24 * 365,\n\t{LnDecayDividend, LnDecayDivisor} = ?LN_PRICE_DECAY_ANNUAL,\n\tPerpetualPrice = {-FirstYearPrice * LnDecayDivisor * KryderPlusRateMultiplier\n\t\t\t* (?N_REPLICATIONS(Height)), LnDecayDividend * (?GiB)},\n\tMinerShare = ar_fraction:multiply(PerpetualPrice,\n\t\t\t?MINER_MINIMUM_ENDOWMENT_CONTRIBUTION_SHARE),\n\t{Dividend, Divisor} = ar_fraction:add(PerpetualPrice, MinerShare),\n\tDividend div Divisor.\n\n%% @doc Return the block reward, the new endowment pool, and the new debt supply.\nget_miner_reward_endowment_pool_debt_supply(Args) ->\n\t{EndowmentPool, DebtSupply, TXs, WeaveSize, Height, GiBMinutePrice,\n\t\t\tKryderPlusRateMultiplierLatch, KryderPlusRateMultiplier, Denomination,\n\t\t\tBlockInterval} = Args,\n\tInflation = redenominate(ar_inflation:calculate(Height), 1, Denomination),\n\tExpectedReward = (?N_REPLICATIONS(Height)) * WeaveSize * GiBMinutePrice\n\t\t\t* BlockInterval div (60 * ?GiB),\n\t{EndowmentPoolFeeShare, MinerFeeShare} = distribute_transaction_fees2(TXs, Denomination),\n\tBaseReward = Inflation + MinerFeeShare,\n\tEndowmentPool2 = EndowmentPool + EndowmentPoolFeeShare,\n\tcase BaseReward >= ExpectedReward of\n\t\ttrue ->\n\t\t\t{BaseReward, EndowmentPool2, DebtSupply, KryderPlusRateMultiplierLatch,\n\t\t\t\t\tKryderPlusRateMultiplier, EndowmentPoolFeeShare, 0};\n\t\tfalse ->\n\t\t\tTake = ExpectedReward - BaseReward,\n\t\t\t{EndowmentPool3, DebtSupply2} =\n\t\t\t\tcase Take > EndowmentPool2 of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\t{0, DebtSupply + Take - EndowmentPool2};\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\t{EndowmentPool2 - Take, DebtSupply}\n\t\t\t\tend,\n\t\t\t{KryderPlusRateMultiplierLatch2, KryderPlusRateMultiplier2} =\n\t\t\t\tcase {Take > EndowmentPool2, KryderPlusRateMultiplierLatch} of\n\t\t\t\t\t{true, 0} ->\n\t\t\t\t\t\t{1, KryderPlusRateMultiplier * 2};\n\t\t\t\t\t{false, 1} ->\n\t\t\t\t\t\tThreshold = redenominate(?RESET_KRYDER_PLUS_LATCH_THRESHOLD, 1,\n\t\t\t\t\t\t\t\tDenomination),\n\t\t\t\t\t\tcase EndowmentPool3 > Threshold of\n\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\t{0, KryderPlusRateMultiplier};\n\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t{1, KryderPlusRateMultiplier}\n\t\t\t\t\t\tend;\n\t\t\t\t\t_ ->\n\t\t\t\t\t\t{KryderPlusRateMultiplierLatch, KryderPlusRateMultiplier}\n\t\t\t\tend,\n\t\t\t{BaseReward + Take, EndowmentPool3, DebtSupply2, KryderPlusRateMultiplierLatch2,\n\t\t\t\t\tKryderPlusRateMultiplier2, EndowmentPoolFeeShare, Take}\n\tend.\n\n%% @doc Return the denominated amount.\nredenominate(Amount, 0, _Denomination) ->\n\tAmount;\nredenominate(Amount, BaseDenomination, BaseDenomination) ->\n\tAmount;\nredenominate(Amount, BaseDenomination, Denomination) when Denomination > BaseDenomination ->\n\tredenominate(Amount * 1000, BaseDenomination, Denomination - 1).\n\n%% @doc Return the threshold for scheduling redenomination.\n-ifdef(LOCALNET).\nget_redenomination_threshold() ->\n\tcase application:get_env(arweave, redenomination_threshold) of\n\t\t{ok, Value} when is_integer(Value), Value > 0 ->\n\t\t\tValue;\n\t\t_ ->\n\t\t\t?REDENOMINATION_THRESHOLD\n\tend.\n-else.\nget_redenomination_threshold() ->\n\t?REDENOMINATION_THRESHOLD.\n-endif.\n\n%% @doc Return the delay (in blocks) before redenomination takes effect.\n-ifdef(LOCALNET).\nget_redenomination_delay_blocks() ->\n\tcase application:get_env(arweave, redenomination_delay_blocks) of\n\t\t{ok, Value} when is_integer(Value), Value > 0 ->\n\t\t\tValue;\n\t\t_ 
->\n\t\t\t?REDENOMINATION_DELAY_BLOCKS\n\tend.\n-else.\nget_redenomination_delay_blocks() ->\n\t?REDENOMINATION_DELAY_BLOCKS.\n-endif.\n\n%% @doc\tIncrease the amount of base currency units in the system if\n%% the available supply is too low.\nmay_be_redenominate(B) ->\n\t#block{ height = Height, denomination = Denomination,\n\t\t\tredenomination_height = RedenominationHeight } = B,\n\tcase ar_pricing_transition:is_v2_pricing_height(Height + 1) of\n\t\tfalse ->\n\t\t\t{Denomination, RedenominationHeight};\n\t\ttrue ->\n\t\t\tmay_be_redenominate2(B)\n\tend.\n\nmay_be_redenominate2(B) ->\n\t#block{ height = Height, denomination = Denomination,\n\t\t\tredenomination_height = RedenominationHeight } = B,\n\tcase Height == RedenominationHeight of\n\t\ttrue ->\n\t\t\t{Denomination + 1, RedenominationHeight};\n\t\tfalse ->\n\t\t\tcase Height < RedenominationHeight of\n\t\t\t\ttrue ->\n\t\t\t\t\t{Denomination, RedenominationHeight};\n\t\t\t\tfalse ->\n\t\t\t\t\tmay_be_redenominate3(B)\n\t\t\tend\n\tend.\n\nmay_be_redenominate3(B) ->\n\t#block{ height = Height, debt_supply = DebtSupply, reward_pool = EndowmentPool,\n\t\t\tdenomination = Denomination, redenomination_height = RedenominationHeight } = B,\n\tTotalSupply = get_total_supply(Denomination),\n\tThreshold = get_redenomination_threshold(),\n\tcase TotalSupply + DebtSupply - EndowmentPool < Threshold of\n\t\ttrue ->\n\t\t\t{Denomination, Height + get_redenomination_delay_blocks()};\n\t\tfalse ->\n\t\t\t{Denomination, RedenominationHeight}\n\tend.\n\n%% @doc Return the new current and scheduled prices per byte minute.\nrecalculate_price_per_gib_minute(B) ->\n\t#block{ height = PrevHeight,\n\t\t\tprice_per_gib_minute = Price,\n\t\t\tscheduled_price_per_gib_minute = ScheduledPrice } = B,\n\tHeight = PrevHeight + 1,\n\tFork_2_7 = ar_fork:height_2_7(),\n\tFork_2_7_1 = ar_fork:height_2_7_1(),\n\tcase Height of\n\t\tFork_2_7 ->\n\t\t\t{ar_pricing_transition:static_price(), ar_pricing_transition:static_price()};\n\t\tHeight when Height < Fork_2_7_1 ->\n\t\t\tcase is_price_adjustment_height(Height) of\n\t\t\t\tfalse ->\n\t\t\t\t\t{Price, ScheduledPrice};\n\t\t\t\ttrue ->\n\t\t\t\t\t%% price_per_gib_minute = scheduled_price_per_gib_minute\n\t\t\t\t\t%% scheduled_price_per_gib_minute = get_price_per_gib_minute() capped to\n\t\t\t\t\t%%                                  0.5x to 2x of old price_per_gib_minute\n\t\t\t\t\tPrice2 = min(Price * 2, get_price_per_gib_minute(Height, B)),\n\t\t\t\t\tPrice3 = max(Price div 2, Price2),\n\t\t\t\t\t{ScheduledPrice, Price3}\n\t\t\tend;\n\t\t_ ->\n\t\t\tcase is_price_adjustment_height(Height) of\n\t\t\t\tfalse ->\n\t\t\t\t\t{Price, ScheduledPrice};\n\t\t\t\ttrue ->\n\t\t\t\t\t%% price_per_gib_minute = scheduled_price_per_gib_minute\n\t\t\t\t\t%% scheduled_price_per_gib_minute =\n\t\t\t\t\t%% \t\tget_price_per_gib_minute() \n\t\t\t\t\t%%\t\tEMA'ed with scheduled_price_per_gib_minute at 0.1 alpha\n\t\t\t\t\t%%\t\tand then capped to 0.5x to 2x of scheduled_price_per_gib_minute\n\t\t\t\t\tTargetPrice = get_price_per_gib_minute(Height, B),\n\t\t\t\t\tEMAPrice = (9 * ScheduledPrice + TargetPrice) div 10,\n\t\t\t\t\tPrice2 = min(ScheduledPrice * 2, EMAPrice),\n\t\t\t\t\tPrice3 = max(ScheduledPrice div 2, Price2),\n\t\t\t\t\t?LOG_DEBUG([{event, recalculate_price_per_gib_minute},\n\t\t\t\t\t\t{height, Height},\n\t\t\t\t\t\t{old_price, Price},\n\t\t\t\t\t\t{scheduled_price, ScheduledPrice},\n\t\t\t\t\t\t{target_price, TargetPrice},\n\t\t\t\t\t\t{ema_price, EMAPrice},\n\t\t\t\t\t\t{capped_price, 
Price3}]),\n\t\t\t\t\t{ScheduledPrice, Price3}\n\t\t\tend\n\tend.\n\nis_price_adjustment_height(Height) ->\n\tHeight rem ?PRICE_ADJUSTMENT_FREQUENCY == 0.\n\ndistribute_transaction_fees2(TXs, Denomination) ->\n\tdistribute_transaction_fees2(TXs, 0, 0, Denomination).\n\ndistribute_transaction_fees2([], EndowmentPoolTotal, MinerTotal, _Denomination) ->\n\t{EndowmentPoolTotal, MinerTotal};\ndistribute_transaction_fees2([TX | TXs], EndowmentPoolTotal, MinerTotal, Denomination) ->\n\tTXFee = redenominate(TX#tx.reward, TX#tx.denomination, Denomination),\n\t{Dividend, Divisor} = ?MINER_FEE_SHARE,\n\tMinerFee = TXFee * Dividend div Divisor,\n\tEndowmentPoolTotal2 = EndowmentPoolTotal + TXFee - MinerFee,\n\tMinerTotal2 = MinerTotal + MinerFee,\n\tdistribute_transaction_fees2(TXs, EndowmentPoolTotal2, MinerTotal2, Denomination).\n\nget_total_supply(Denomination) ->\n\tredenominate(?TOTAL_SUPPLY, 1, Denomination).\n\n%%%===================================================================\n%%% Public interface 2.5.\n%%%===================================================================\n\n%% @doc Return the perpetual cost of storing the given amount of data.\nget_storage_cost(DataSize, Timestamp, Rate, Height) ->\n\tSize = ?TX_SIZE_BASE + DataSize,\n\tPerpetualGBStorageCost =\n\t\tusd_to_ar(\n\t\t\tget_perpetual_gb_cost_at_timestamp(Timestamp, Height),\n\t\t\tRate,\n\t\t\tHeight\n\t\t),\n\tStorageCost = max(1, PerpetualGBStorageCost div (?MiB * 1024)) * Size,\n\tHashingCost = StorageCost,\n\tStorageCost + HashingCost.\n\n%% @doc Calculate the transaction fee.\nget_tx_fee(DataSize, Timestamp, Rate, Height) ->\n\tMaintenanceCost = get_storage_cost(DataSize, Timestamp, Rate, Height),\n\tMinerFeeShare = get_miner_fee_share(MaintenanceCost, Height),\n\tMaintenanceCost + MinerFeeShare.\n\n%% @doc Return the miner reward and the new endowment pool.\nget_miner_reward_and_endowment_pool({Pool, TXs, unclaimed, _, _, _, _}) ->\n\t{0, Pool + lists:sum([TX#tx.reward || TX <- TXs])};\nget_miner_reward_and_endowment_pool(Args) ->\n\t{Pool, TXs, _Addr, WeaveSize, Height, Timestamp, Rate} = Args,\n\tInflation = trunc(ar_inflation:calculate(Height)),\n\t{PoolFeeShare, MinerFeeShare} = distribute_transaction_fees(TXs, Height),\n\tBaseReward = Inflation + MinerFeeShare,\n\tStorageCostPerGBPerBlock =\n\t\tusd_to_ar(\n\t\t\tget_gb_cost_per_block_at_timestamp(Timestamp, Height),\n\t\t\tRate,\n\t\t\tHeight\n\t\t),\n\tBurden = WeaveSize * StorageCostPerGBPerBlock div (?MiB * 1024),\n\tPool2 = Pool + PoolFeeShare,\n\tcase BaseReward >= Burden of\n\t\ttrue ->\n\t\t\t{BaseReward, Pool2};\n\t\tfalse ->\n\t\t\tTake = min(Pool2, Burden - BaseReward),\n\t\t\t{BaseReward + Take, Pool2 - Take}\n\tend.\n\n%% @doc Return the effective USD to AR rate corresponding to the given block\n%% considering its previous block.\nusd_to_ar_rate(#block{ height = PrevHeight } = PrevB) ->\n\tHeight_2_5 = ar_fork:height_2_5(),\n\tHeight = PrevHeight + 1,\n\tcase PrevHeight < Height_2_5 of\n\t\ttrue ->\n\t\t\t?INITIAL_USD_TO_AR(Height)();\n\t\tfalse ->\n\t\t\tPrevB#block.usd_to_ar_rate\n\tend.\n\n%% @doc Return the amount of AR the given number of USD is worth.\nusd_to_ar(USD, Rate, Height) when is_number(USD) ->\n\tusd_to_ar({USD, 1}, Rate, Height);\nusd_to_ar({Dividend, Divisor}, Rate, Height) ->\n\tInitialInflation = trunc(ar_inflation:calculate(?INITIAL_USD_TO_AR_HEIGHT(Height)())),\n\tCurrentInflation = trunc(ar_inflation:calculate(Height)),\n\t{InitialRateDividend, InitialRateDivisor} = Rate,\n\ttrunc(\tDividend\n\t\t\t* ?WINSTON_PER_AR\n\t\t\t* 
CurrentInflation\n\t\t\t* InitialRateDividend\t)\n\t\tdiv Divisor\n\t\tdiv InitialInflation\n\t\tdiv InitialRateDivisor.\n\nrecalculate_usd_to_ar_rate(#block{ height = PrevHeight } = B) ->\n\tHeight = PrevHeight + 1,\n\tFork_2_5 = ar_fork:height_2_5(),\n\ttrue = Height >= Fork_2_5,\n\tcase Height > Fork_2_5 of\n\t\tfalse ->\n\t\t\tRate = ?INITIAL_USD_TO_AR(Height)(),\n\t\t\t{Rate, Rate};\n\t\ttrue ->\n\t\t\tFork_2_6 = ar_fork:height_2_6(),\n\t\t\tcase Height == Fork_2_6 of\n\t\t\t\ttrue ->\n\t\t\t\t\t{B#block.usd_to_ar_rate, ?FORK_2_6_PRE_TRANSITION_USD_TO_AR_RATE};\n\t\t\t\tfalse ->\n\t\t\t\t\trecalculate_usd_to_ar_rate2(B)\n\t\t\tend\n\tend.\n\n%% @doc Return an estimation for the minimum required decline rate making the given\n%% Amount (in Winston) sufficient to subsidize storage for Period seconds starting from\n%% Timestamp and assuming the given USD to AR rate.\n%% When computing the exponent, the function accounts for the first 16 summands in\n%% the Taylor series. The fraction is reduced to the 1/1000000 precision.\nget_expected_min_decline_rate(Timestamp, Period, Amount, Size, Rate, Height) ->\n\t{USDDiv1, USDDivisor1} = get_gb_cost_per_year_at_timestamp(Timestamp, Height),\n\t%% Multiply by 2 to account for hashing costs.\n\tSum1 = 2 * usd_to_ar({USDDiv1, USDDivisor1}, Rate, Height),\n\t{USDDiv2, USDDivisor2} = get_gb_cost_per_year_at_timestamp(Timestamp + Period, Height),\n\tSum2 = 2 * usd_to_ar({USDDiv2, USDDivisor2}, Rate, Height),\n\t%% Sum1 / -logRate - Sum2 / -logRate = Amount\n\t%% => -logRate = (Sum1 - Sum2) / Amount\n\t%% => 1 / Rate = exp((Sum1 - Sum2) / Amount)\n\t%% => Rate = 1 / exp((Sum1 - Sum2) / Amount)\n\t{ExpDiv, ExpDivisor} = ar_fraction:natural_exponent(\n\t\t\t{(Sum1 - Sum2) * Size, Amount * (?MiB * 1024)}, 16),\n\tar_fraction:reduce({ExpDivisor, ExpDiv}, 1000000).\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\n%% @doc Get the share of the maintenance cost the miner receives for a transation.\nget_miner_fee_share(MaintenanceCost, Height) ->\n\t{Dividend, Divisor} = ?MINING_REWARD_MULTIPLIER,\n\tcase Height >= ar_fork:height_2_5() of\n\t\tfalse ->\n\t\t\terlang:trunc(MaintenanceCost * (Dividend / Divisor));\n\t\ttrue ->\n\t\t\tMaintenanceCost * Dividend div Divisor\n\tend.\n\ndistribute_transaction_fees(TXs, Height) ->\n\tdistribute_transaction_fees(TXs, 0, 0, Height).\n\ndistribute_transaction_fees([], EndowmentPool, Miner, _Height) ->\n\t{EndowmentPool, Miner};\ndistribute_transaction_fees([TX | TXs], EndowmentPool, Miner, Height) ->\n\tTXFee = TX#tx.reward,\n\t{Dividend, Divisor} = ?MINING_REWARD_MULTIPLIER,\n\tMinerFee =\n\t\tcase Height >= ar_fork:height_2_5() of\n\t\t\tfalse ->\n\t\t\t\terlang:trunc((Dividend / Divisor) * TXFee / ((Dividend / Divisor) + 1));\n\t\t\ttrue ->\n\t\t\t\tTXFee * Dividend div (Dividend + Divisor)\n\t\tend,\n\tdistribute_transaction_fees(TXs, EndowmentPool + TXFee - MinerFee, Miner + MinerFee,\n\t\t\tHeight).\n\n%% @doc Return the cost of storing 1 GB in the network perpetually.\n%% Integral of the exponential decay curve k*e^(-at), i.e. 
k/a.\n%% @end\n-spec get_perpetual_gb_cost_at_timestamp(Timestamp::integer(), Height::nonegint()) -> usd().\nget_perpetual_gb_cost_at_timestamp(Timestamp, Height) ->\n\tK = get_gb_cost_per_year_at_timestamp(Timestamp, Height),\n\tget_perpetual_gb_cost(K, Height).\n\n-spec get_perpetual_gb_cost(Init::usd(), Height::nonegint()) -> usd().\nget_perpetual_gb_cost(Init, Height) ->\n\tcase Height >= ar_fork:height_2_5() of\n\t\ttrue ->\n\t\t\t{LnDecayDividend, LnDecayDivisor} = ?LN_PRICE_DECAY_ANNUAL,\n\t\t\t{InitDividend, InitDivisor} = Init,\n\t\t\t{-InitDividend * LnDecayDivisor, InitDivisor * LnDecayDividend};\n\t\tfalse ->\n\t\t\t{Dividend, Divisor} = ?PRICE_DECAY_ANNUAL,\n\t\t\tInit / -math:log(Dividend / Divisor)\n\tend.\n\n%% @doc Return the cost in USD of storing 1 GB per year at the given time.\n-spec get_gb_cost_per_year_at_timestamp(Timestamp::integer(), Height::nonegint()) -> usd().\nget_gb_cost_per_year_at_timestamp(Timestamp, Height) ->\n\tDatetime = system_time_to_universal_time(Timestamp, seconds),\n\tget_gb_cost_per_year_at_datetime(Datetime, Height).\n\n%% @doc Return the cost in USD of storing 1 GB per average block time at the given time.\n-spec get_gb_cost_per_block_at_timestamp(integer(), nonegint()) -> usd().\nget_gb_cost_per_block_at_timestamp(Timestamp, Height) ->\n\tDatetime = system_time_to_universal_time(Timestamp, seconds),\n\tget_gb_cost_per_block_at_datetime(Datetime, Height).\n\n%% @doc Return the cost in USD of storing 1 GB per year.\n-spec get_gb_cost_per_year_at_datetime(DT::datetime(), Height::nonegint()) -> usd().\nget_gb_cost_per_year_at_datetime({{Y, M, _}, _} = DT, Height) ->\n\tPrevY = prev_jun_30_year(Y, M),\n\tNextY = next_jun_30_year(Y, M),\n\tFracY = fraction_of_year(PrevY, NextY, DT, Height),\n\tPrevYCost = usd_p_gby(PrevY, Height),\n\tNextYCost = usd_p_gby(NextY, Height),\n\tcase Height >= ar_fork:height_2_5() of\n\t\ttrue ->\n\t\t\t{FracYDividend, FracYDivisor} = FracY,\n\t\t\t{PrevYCostDividend, PrevYCostDivisor} = PrevYCost,\n\t\t\t{NextYCostDividend, NextYCostDivisor} = NextYCost,\n\t\t\tDividend =\n\t\t\t\t(?N_REPLICATIONS(Height))\n\t\t\t\t* (\n\t\t\t\t\tPrevYCostDividend * NextYCostDivisor * FracYDivisor\n\t\t\t\t\t- FracYDividend\n\t\t\t\t\t\t* (\n\t\t\t\t\t\t\tPrevYCostDividend\n\t\t\t\t\t\t\t\t* NextYCostDivisor\n\t\t\t\t\t\t\t- NextYCostDividend\n\t\t\t\t\t\t\t\t* PrevYCostDivisor\n\t\t\t\t\t\t)\n\t\t\t\t),\n\t\t\tDivisor =\n\t\t\t\tPrevYCostDivisor\n\t\t\t\t* NextYCostDivisor\n\t\t\t\t* FracYDivisor,\n\t\t\t{Dividend, Divisor};\n\t\tfalse ->\n\t\t\tCY = PrevYCost - (FracY * (PrevYCost - NextYCost)),\n\t\t\tCY * (?N_REPLICATIONS(Height))\n\tend.\n\nprev_jun_30_year(Y, M) when M < 7 ->\n\tY - 1;\nprev_jun_30_year(Y, _M) ->\n\tY.\n\nnext_jun_30_year(Y, M) when M < 7 ->\n\tY;\nnext_jun_30_year(Y, _M) ->\n\tY + 1.\n\n%% @doc Return the cost in USD of storing 1 GB per average block time.\n-spec get_gb_cost_per_block_at_datetime(DT::datetime(), Height::nonegint()) -> usd().\nget_gb_cost_per_block_at_datetime(DT, Height) ->\n\tcase Height >= ar_fork:height_2_5() of\n\t\ttrue ->\n\t\t\t{Dividend, Divisor} = get_gb_cost_per_year_at_datetime(DT, Height),\n\t\t\t{Dividend, Divisor * ar_inflation:blocks_per_year(Height)};\n\t\tfalse ->\n\t\t\tget_gb_cost_per_year_at_datetime(DT, Height) / ar_inflation:blocks_per_year(Height)\n\tend.\n\n%% @doc Return the cost in USD of storing 1 GB per year. Estmimated from empirical data.\n%% Assumes a year after 2019 inclusive. 
Uses data figures for 2018 and 2019.\n%% Extrapolates the exponential decay curve k*e^(-at) to future years.\n%% @end\n-spec usd_p_gby(nonegint(), nonegint()) -> usd().\nusd_p_gby(2018, Height) ->\n\t{Dividend, Divisor} = ?USD_PER_GBY_2018,\n\tcase Height >= ar_fork:height_2_5() of\n\t\ttrue ->\n\t\t\t{Dividend, Divisor};\n\t\tfalse ->\n\t\t\tDividend / Divisor\n\tend;\nusd_p_gby(2019, Height) ->\n\t{Dividend, Divisor} = ?USD_PER_GBY_2019,\n\tcase Height >= ar_fork:height_2_5() of\n\t\ttrue ->\n\t\t\t{Dividend, Divisor};\n\t\tfalse ->\n\t\t\tDividend / Divisor\n\tend;\nusd_p_gby(Y, Height) ->\n\tcase Height >= ar_fork:height_2_5() of\n\t\ttrue ->\n\t\t\t{KDividend, KDivisor} = ?USD_PER_GBY_2019,\n\t\t\t{ADividend, ADivisor} = ?LN_PRICE_DECAY_ANNUAL,\n\t\t\tT = Y - 2019,\n\t\t\tP = ?TX_PRICE_NATURAL_EXPONENT_DECIMAL_FRACTION_PRECISION,\n\t\t\t{EDividend, EDivisor} = ar_fraction:natural_exponent({ADividend * T, ADivisor}, P),\n\t\t\t{EDividend * KDividend, EDivisor * KDivisor};\t\n\t\tfalse ->\n\t\t\t{Dividend, Divisor} = ?USD_PER_GBY_2019,\n\t\t\tK = Dividend / Divisor,\n\t\t\t{DecayDividend, DecayDivisor} = ?PRICE_DECAY_ANNUAL,\n\t\t\tA = math:log(DecayDividend / DecayDivisor),\n\t\t\tT = Y - 2019,\n\t\t\tK * math:exp(A * T)\n\tend.\n\n%% @doc Return elapsed time as the fraction of the year\n%% between Jun 30th of PrevY and Jun 30th of NextY.\n%% @end\n-spec fraction_of_year(nonegint(), nonegint(), datetime(), nonegint()) -> float() | fraction().\nfraction_of_year(PrevY, NextY, {{Y, Mo, D}, {H, Mi, S}}, Height) ->\n\tStart = calendar:datetime_to_gregorian_seconds({{PrevY, 6, 30}, {23, 59, 59}}),\n\tNow = calendar:datetime_to_gregorian_seconds({{Y, Mo, D}, {H, Mi, S}}),\n\tEnd = calendar:datetime_to_gregorian_seconds({{NextY, 6, 30}, {23, 59, 59}}),\n\tcase Height >= ar_fork:height_2_5() of\n\t\ttrue ->\n\t\t\t{Now - Start, End - Start};\n\t\tfalse ->\n\t\t\t(Now - Start) / (End - Start)\n\tend.\n\n%% TODO Use calendar:system_time_to_universal_time/2 in Erlang OTP-21.\nsystem_time_to_universal_time(Time, TimeUnit) ->\n\tSeconds = erlang:convert_time_unit(Time, TimeUnit, seconds),\n\tDaysFrom0To1970 = 719528,\n\tSecondsPerDay = 86400,\n\tcalendar:gregorian_seconds_to_datetime(Seconds + (DaysFrom0To1970 * SecondsPerDay)).\n\nrecalculate_usd_to_ar_rate2(#block{ height = PrevHeight } = B) ->\n\tcase is_price_adjustment_height(PrevHeight + 1) of\n\t\tfalse ->\n\t\t\t{B#block.usd_to_ar_rate, B#block.scheduled_usd_to_ar_rate};\n\t\ttrue ->\n\t\t\tFork_2_6 = ar_fork:height_2_6(),\n\t\t\ttrue = PrevHeight + 1 /= Fork_2_6,\n\t\t\tcase PrevHeight + 1 > Fork_2_6 of\n\t\t\t\ttrue ->\n\t\t\t\t\t%% Keep the rate fixed after the 2.6 fork till the transition to the\n\t\t\t\t\t%% new pricing scheme ends. 
Then it won't be used any longer.\n\t\t\t\t\t{B#block.scheduled_usd_to_ar_rate, B#block.scheduled_usd_to_ar_rate};\n\t\t\t\tfalse ->\n\t\t\t\t\trecalculate_usd_to_ar_rate3(B)\n\t\t\tend\n\tend.\n\nrecalculate_usd_to_ar_rate3(#block{ height = PrevHeight, diff = Diff } = B) ->\n\tHeight = PrevHeight + 1,\n\tInitialDiff = ar_retarget:switch_to_linear_diff(?INITIAL_USD_TO_AR_DIFF(Height)()),\n\tMaxDiff = ?MAX_DIFF,\n\tInitialRate = ?INITIAL_USD_TO_AR(Height)(),\n\t{Dividend, Divisor} = InitialRate,\n\tScheduledRate = {Dividend * (MaxDiff - Diff), Divisor * (MaxDiff - InitialDiff)},\n\tRate = B#block.scheduled_usd_to_ar_rate,\n\tMaxAdjustmentUp = ar_fraction:multiply(Rate, ?USD_TO_AR_MAX_ADJUSTMENT_UP_MULTIPLIER),\n\tMaxAdjustmentDown = ar_fraction:multiply(Rate, ?USD_TO_AR_MAX_ADJUSTMENT_DOWN_MULTIPLIER),\n\tCappedScheduledRate = ar_fraction:reduce(ar_fraction:maximum(\n\t\t\tar_fraction:minimum(ScheduledRate, MaxAdjustmentUp), MaxAdjustmentDown),\n\t\t\t?USD_TO_AR_FRACTION_REDUCTION_LIMIT),\n\t?LOG_DEBUG([{event, recalculated_rate},\n\t\t\t{new_rate, ar_util:safe_divide(element(1, Rate), element(2, Rate))},\n\t\t\t{new_scheduled_rate, ar_util:safe_divide(element(1, CappedScheduledRate),\n\t\t\t\t\telement(2, CappedScheduledRate))},\n\t\t\t{new_scheduled_rate_without_capping,\n\t\t\t\t\tar_util:safe_divide(element(1, ScheduledRate), element(2, ScheduledRate))},\n\t\t{max_adjustment_up, ar_util:safe_divide(element(1, MaxAdjustmentUp),\n\t\t\t\telement(2,MaxAdjustmentUp))},\n\t\t{max_adjustment_down, ar_util:safe_divide(element(1, MaxAdjustmentDown),\n\t\t\t\telement(2,MaxAdjustmentDown))}]),\n\t{Rate, CappedScheduledRate}.\n\nlog_price_metrics(Event,\n\t\tHeight, History, HashRateTotal, RewardTotal, IntervalTotal, VDFIntervalTotal,\n\t\tOneChunkCount, TwoChunkCount, SolutionsPerPartitionPerVDFStep, PricePerGiBPerMinute) ->\n\n\tRewardHistoryLength = length(History),\n\tAverageHashRate = HashRateTotal div RewardHistoryLength,\n\tEstimatedDataSizeInBytes = network_data_size(Height,\n\t\t\tAverageHashRate, IntervalTotal, VDFIntervalTotal, SolutionsPerPartitionPerVDFStep),\n\n\tprometheus_gauge:set(poa_count, [1], OneChunkCount),\n\tprometheus_gauge:set(poa_count, [2], TwoChunkCount),\n\tprometheus_gauge:set(v2_price_per_gibibyte_minute, PricePerGiBPerMinute),\n\tprometheus_gauge:set(network_data_size, EstimatedDataSizeInBytes),\n\n\t?LOG_DEBUG([{event, Event}, {height, Height}, {reward_history_length, RewardHistoryLength},\n\t\t{hash_rate_total, HashRateTotal}, {average_hash_rate, AverageHashRate},\n\t\t{reward_total, RewardTotal},\n\t\t{interval_total, IntervalTotal}, {vdf_interval_total, VDFIntervalTotal},\n\t\t{one_chunk_count, OneChunkCount}, {two_chunk_count, TwoChunkCount},\n\t\t{solutions_per_partition_per_vdf_step, SolutionsPerPartitionPerVDFStep},\n\t\t{data_size, EstimatedDataSizeInBytes}, {price, PricePerGiBPerMinute}]).\n\nnetwork_data_size(Height,\n\t\tAverageHashRate, IntervalTotal, VDFIntervalTotal, SolutionsPerPartitionPerVDFStep) ->\n\tTargetTime = ar_testnet:target_block_time(Height),\n\tSolutionsPerPartitionPerBlock =\n\t\t(SolutionsPerPartitionPerVDFStep * VDFIntervalTotal * TargetTime) div IntervalTotal,\n\t?LOG_DEBUG([{event, network_data_size},\n\t\t{solutions_per_partition_per_vdf_step, SolutionsPerPartitionPerVDFStep},\n\t\t{vdf_interval_total, VDFIntervalTotal}, {target_time, TargetTime},\n\t\t{interval_total, IntervalTotal}, {solutions_per_partition_per_block,\n\t\t\t\tSolutionsPerPartitionPerBlock}]),\n\tcase SolutionsPerPartitionPerBlock of\n\t\t0 -> 0;\n\t\t_ 
->\n\t\t\tEstimatedPartitionCount = AverageHashRate div SolutionsPerPartitionPerBlock,\n\t\t\tEstimatedPartitionCount * (ar_block:partition_size())\n\tend.\n\n%%%===================================================================\n%%% Tests.\n%%%===================================================================\n\nget_gb_cost_per_year_at_datetime_is_monotone_test_() ->\n\t[\n\t\tar_test_node:test_with_mocked_functions([{ar_fork, height_2_5, fun() -> infinity end}],\n\t\t\tfun test_get_gb_cost_per_year_at_datetime_is_monotone/0, 120)\n\t\t| \n\t\t[\n\t\t\tar_test_node:test_with_mocked_functions([{ar_fork, height_2_5, fun() -> Height end}],\n\t\t\t\tfun test_get_gb_cost_per_year_at_datetime_is_monotone/0, 120)\n\t\t\t|| Height <- lists:seq(0, 20)\n\t\t]\n\t].\n\ntest_get_gb_cost_per_year_at_datetime_is_monotone() ->\n\tInitialDT = {{2019, 1, 1}, {0, 0, 0}},\n\tFollowingDTs = [\n\t\t{{2019, 1, 1}, {10, 0, 0}},\n\t\t{{2019, 6, 15}, {0, 0, 0}},\n\t\t{{2019, 6, 29}, {23, 59, 59}},\n\t\t{{2019, 6, 30}, {0, 0, 0}},\n\t\t{{2019, 6, 30}, {23, 59, 59}},\n\t\t{{2019, 7, 1}, {0, 0, 0}},\n\t\t{{2019, 12, 31}, {23, 59, 59}},\n\t\t{{2020, 1, 1}, {0, 0, 0}},\n\t\t{{2020, 1, 2}, {0, 0, 0}},\n\t\t{{2020, 10, 1}, {0, 0, 0}},\n\t\t{{2020, 12, 31}, {23, 59, 59}},\n\t\t{{2021, 1, 1}, {0, 0, 0}},\n\t\t{{2021, 2, 1}, {0, 0, 0}},\n\t\t{{2021, 12, 31}, {23, 59, 59}},\n\t\t{{2022, 1, 1}, {0, 0, 0}},\n\t\t{{2022, 6, 29}, {23, 59, 59}},\n\t\t{{2022, 6, 30}, {0, 0, 0}},\n\t\t{{2050, 3, 1}, {10, 10, 10}},\n\t\t{{2100, 2, 1}, {0, 0, 0}}\n\t],\n\tlists:foldl(\n\t\tfun(CurrDT, {PrevDT, PrevHeight}) ->\n\t\t\tCurrCost = get_gb_cost_per_year_at_datetime(CurrDT, PrevHeight + 1),\n\t\t\tPrevCost = get_gb_cost_per_year_at_datetime(PrevDT, PrevHeight),\n\t\t\tassert_less_than_or_equal_to(CurrCost, PrevCost),\n\t\t\t{CurrDT, PrevHeight + 1}\n\t\tend,\n\t\t{InitialDT, 0},\n\t\tFollowingDTs\n\t).\n\nassert_less_than_or_equal_to(X1, X2) when is_number(X1), is_number(X2) ->\n\t?assert(X1 =< X2, io_lib:format(\"~p is bigger than ~p\", [X1, X2]));\nassert_less_than_or_equal_to({Dividend1, Divisor1} = X1, X2) when is_number(X2) ->\n\t?assert((Dividend1 div Divisor1) =< X2, io_lib:format(\"~p is bigger than ~p\", [X1, X2]));\nassert_less_than_or_equal_to({Dividend1, Divisor1} = X1, {Dividend2, Divisor2} = X2) ->\n\t?assert(Dividend1 * Divisor2 =< Dividend2 * Divisor1,\n\t\tio_lib:format(\"~p is bigger than ~p\", [X1, X2])).\n"
  },
  {
    "path": "apps/arweave/src/ar_pricing_transition.erl",
    "content": "-module(ar_pricing_transition).\n\n-export([get_transition_price/2, static_price/0, static_pricing_height/0, is_v2_pricing_height/1,\n\ttransition_start_2_6_8/0, transition_start_2_7_2/0, transition_length_2_6_8/0,\n\ttransition_length_2_7_2/0, transition_length/1\n\t]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave/include/ar_inflation.hrl\").\n-include_lib(\"arweave/include/ar_pricing.hrl\").\n-include_lib(\"arweave/include/ar_consensus.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n%% @doc This module encapsulates most of the complexity of our multi-phased pricing transition.\n%%                                                     __\n%%                                                   /    \\__\n%% V2 Pricing..................................__   /\n%%                                           /|  \\_/  \n%%                                         /  |\n%%                                       /    |\n%%                                     /      |\n%%                                   /        |\n%%                                 /          |\n%% 520 (cap).............________/            |\n%%                     / |       |            |\n%% 400 ______________/   |       |            |\n%%       |           |   |       |            |\n%%     today   2/20/24   3/7/24  11/20/24     11/20/26\n%%               2.6.8   2.7.2   2.7.2        2.7.2\n%%          Transition   HF      Transition   Transition\n%%               Start           Start        End\n\n%%%===================================================================\n%%% Constants\n%%%===================================================================\n\n%% The number of blocks which have to pass since the 2.6.8 fork before we\n%% start mixing in the new fee calculation method.\n-ifdef(AR_TEST).\n\t-define(PRICE_2_6_8_TRANSITION_START, 2).\n-else.\n\t-ifndef(PRICE_2_6_8_TRANSITION_START).\n\t\t-ifdef(FORKS_RESET).\n\t\t\t-define(PRICE_2_6_8_TRANSITION_START, 0).\n\t\t-else.\n\t\t\t%% Target: February 20, 2024 at 2p UTC\n\t\t\t%% Fork 2.6.8 was published at: May 30, 2023 at 3:35p UTC\n\t\t\t%% Time between dates: 265 days, 22 hours, 25 minutes\n\t\t\t%% https://www.timeanddate.com/date/durationresult.html?m1=05&d1=30&y1=2023&m2=02&d2=20&y2=2024&h1=15&i1=35&s1=&h2=14&i2=&s2=\n\t\t\t%% In seconds: 22,976,700\n\t\t\t%% In blocks: 22,976,700 / 128s average block time = 179505\n\t\t\t%% Target block: 1189560 + 179505 = 1369065\n\t\t\t-define(PRICE_2_6_8_TRANSITION_START, 179505).\n\t\t-endif.\n\t-endif.\n-endif.\n\n%% The number of blocks following the 2.6.8 + ?PRICE_2_6_8_TRANSITION_START block\n%% where the tx fee computation is transitioned to the new calculation method.\n%% Let TransitionStart = fork 2.6.8 height + ?PRICE_2_6_8_TRANSITION_START.\n%% Let A = height - TransitionStart + 1.\n%% Let B = TransitionStart + ?PRICE_2_6_8_TRANSITION_BLOCKS - (height + 1).\n%% Then price per GiB-minute = price old * B / (A + B) + price new * A / (A + B).\n-ifdef(AR_TEST).\n\t-define(PRICE_2_6_8_TRANSITION_BLOCKS, 2).\n-else.\n\t-ifndef(PRICE_2_6_8_TRANSITION_BLOCKS).\n\t\t-ifdef(FORKS_RESET).\n\t\t\t-define(PRICE_2_6_8_TRANSITION_BLOCKS, 0).\n\t\t-else.\n\t\t\t-ifndef(PRICE_2_6_8_TRANSITION_BLOCKS).\n\t\t\t\t-define(PRICE_2_6_8_TRANSITION_BLOCKS, (30 * 24 * 30 * 18)). 
% ~18 months.\n\t\t\t-endif.\n\t\t-endif.\n\t-endif.\n-endif.\n\n%% The number of blocks which have to pass since the 2.6.8 fork before we\n%% remove the price transition cap.\n%%\n%% Note: Even though this constant is related to the *2.7.2* fork we count the blocks\n%% since the *2.6.8* fork for easier comparison with ?PRICE_2_6_8_TRANSITION_START\n-ifdef(AR_TEST).\n\t-define(PRICE_2_7_2_TRANSITION_START, 4).\n-else.\n\t-ifndef(PRICE_2_7_2_TRANSITION_START).\n\t\t-ifdef(FORKS_RESET).\n\t\t\t-define(PRICE_2_7_2_TRANSITION_START, 0).\n\t\t-else.\n\t\t\t%% Target: November 20, 2024 at 2p UTC\n\t\t\t%% Fork 2.6.8 was published at: May 30, 2023 at 3:35p UTC\n\t\t\t%% Time between dates: 539 days, 22 hours, 25 minutes\n\t\t\t%% https://www.timeanddate.com/date/durationresult.html?m1=5&d1=30&y1=2023&m2=11&d2=20&y2=2024&h1=15&i1=35&s1=0&h2=14&i2=0&s2=0\n\t\t\t%% In seconds: 46,650,300\n\t\t\t%% In blocks: 46,650,300 / 128.9s average block time = 361910\n\t\t\t%% Target block: 1189560 + 361910 = 1551470\n\t\t\t-define(PRICE_2_7_2_TRANSITION_START, 361910).\n\t\t-endif.\n\t-endif.\n-endif.\n\n%% The number of blocks following the 2.6.8 + ?PRICE_2_7_2_TRANSITION_START block\n%% where the tx fee computation is transitioned to the new calculation method.\n%% Let TransitionStart = fork 2.6.8 height + ?PRICE_2_7_2_TRANSITION_START.\n%% Let A = height - TransitionStart + 1.\n%% Let B = TransitionStart + ?PRICE_2_7_2_TRANSITION_BLOCKS - (height + 1).\n%% Then price per GiB-minute = price cap * B / (A + B) + price new * A / (A + B).\n-ifdef(AR_TEST).\n\t-define(PRICE_2_7_2_TRANSITION_BLOCKS, 2).\n-else.\n\t-ifndef(PRICE_2_7_2_TRANSITION_BLOCKS).\n\t\t-ifdef(FORKS_RESET).\n\t\t\t-define(PRICE_2_7_2_TRANSITION_BLOCKS, 0).\n\t\t-else.\n\t\t\t-ifndef(PRICE_2_7_2_TRANSITION_BLOCKS).\n\t\t\t\t-define(PRICE_2_7_2_TRANSITION_BLOCKS, (30 * 24 * 30 * 24)). % ~24 months.\n\t\t\t-endif.\n\t\t-endif.\n\t-endif.\n-endif.\n\n-ifdef(AR_TEST).\n\t-define(PRICE_PER_GIB_MINUTE_PRE_TRANSITION, 8162).\n-else.\n\t%% STATIC_2_6_8_FEE_WINSTON / (200 (years) * 365 (days) * 24 * 60) / 20 (replicas)\n\t%% = ~400 Winston per GiB per minute.\n\t-define(PRICE_PER_GIB_MINUTE_PRE_TRANSITION, 400).\n-endif.\n\n-ifdef(AR_TEST).\n\t-define(PRICE_2_7_2_PER_GIB_MINUTE_UPPER_BOUND, 30000).\n-else.\n\t-ifndef(PRICE_2_7_2_PER_GIB_MINUTE_UPPER_BOUND).\n\t\t%% 714_000_000_000 / (200 (years) * 365 (days) * 24 * 60) / 20 (replicas)\n\t\t%% = ~340 Winston per GiB per minute.\n\t\t-define(PRICE_2_7_2_PER_GIB_MINUTE_UPPER_BOUND, 340).\n\t-endif.\n-endif.\n\n-ifdef(AR_TEST).\n\t-define(PRICE_2_7_2_PER_GIB_MINUTE_LOWER_BOUND, 0).\n-else.\n\t-ifndef(PRICE_2_7_2_PER_GIB_MINUTE_LOWER_BOUND).\n\t\t%% 357_000_000_000 / (200 (years) * 365 (days) * 24 * 60) / 20 (replicas)\n\t\t%% = ~170 Winston per GiB per minute.\n\t\t-define(PRICE_2_7_2_PER_GIB_MINUTE_LOWER_BOUND, 170).\n\t-endif.\n-endif.\n\n\n%%%===================================================================\n%%% Public Interface\n%%%===================================================================\n\n%% @doc There's a complex series of transition phases that we pass through as we move from\n%% static pricing to dynamic pricing (aka v2 pricing). 
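For example (with illustrative\n%% numbers, not protocol constants): if the start price is 400 and the V2 price is 800\n%% Winston per GiB per minute, and the transition starts at height 1000 and lasts 4000\n%% blocks, then at height 2000 Interval1 = 1000 and Interval2 = 3000, so the interpolated\n%% price is (400 * 3000 + 800 * 1000) div 4000 = 500, a quarter of the way from the start\n%% price to the V2 price, before clamping to the lower and upper bounds.\n%% 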
This function handles those phases.\nget_transition_price(Height, V2Price) ->\n\tStaticPricingHeight = ar_pricing_transition:static_pricing_height(),\n\tPriceTransitionStart = transition_start(Height),\n\tPriceTransitionEnd = PriceTransitionStart + transition_length(Height),\n\n\tStartPrice = transition_start_price(Height),\n\tUpperBound = transition_upper_bound(Height),\n\tLowerBound = transition_lower_bound(Height),\n\n\tcase Height of\n\t\t_ when Height < StaticPricingHeight ->\n\t\t\tar_pricing_transition:static_price();\n\t\t_ when Height < PriceTransitionEnd ->\n\t\t\t%% Interpolate between the pre-transition price and the new price.\n\t\t\tInterval1 = Height - PriceTransitionStart,\n\t\t\tInterval2 = PriceTransitionEnd - Height,\n\t\t\tInterpolatedPrice =\n\t\t\t\t(StartPrice * Interval2 + V2Price * Interval1) div (Interval1 + Interval2),\n\t\t\tPricePerGiBPerMinute = ar_util:between(InterpolatedPrice, LowerBound, UpperBound),\n\t\t\t?LOG_DEBUG([{event, get_price_per_gib_minute},\n\t\t\t\t{height, Height}, {price1, StartPrice}, {price2, V2Price},\n\t\t\t\t{lower_bound, LowerBound}, {upper_bound, UpperBound},\n\t\t\t\t{transition_start, PriceTransitionStart}, {transition_end, PriceTransitionEnd},\n\t\t\t\t{interval1, Interval1}, {interval2, Interval2},\n\t\t\t\t{interpolated_price, InterpolatedPrice}, {price, PricePerGiBPerMinute}]),\n\t\t\tPricePerGiBPerMinute;\n\t\t_ ->\n\t\t\tV2Price\n\tend.\n\nstatic_price() ->\n\t?PRICE_PER_GIB_MINUTE_PRE_TRANSITION.\n\n%% @doc Height before which we use the hardcoded static price - no phase\n%% of the pricing transition has started.\nstatic_pricing_height() ->\n\tar_pricing_transition:transition_start_2_6_8().\n\n%% @doc Return true if the given height is a height where the transition to the\n%% new pricing algorithm is complete.\nis_v2_pricing_height(Height) ->\n\tHeight >=\n\t\tar_pricing_transition:transition_start_2_7_2() +\n\t\t\tar_pricing_transition:transition_length_2_7_2().\n\ntransition_start_2_6_8() ->\n\tar_fork:height_2_6_8() + ?PRICE_2_6_8_TRANSITION_START.\n\ntransition_start_2_7_2() ->\n\t%% Note: Even though this constant is related to the *2.7.2* fork we count the blocks\n\t%% since the *2.6.8* fork for easier comparison with ?PRICE_2_6_8_TRANSITION_START\n\tar_fork:height_2_6_8() + ?PRICE_2_7_2_TRANSITION_START.\n\ntransition_length_2_6_8() ->\n\t?PRICE_2_6_8_TRANSITION_BLOCKS.\n\ntransition_length_2_7_2() ->\n\t?PRICE_2_7_2_TRANSITION_BLOCKS.\n\ntransition_length(Height) ->\n\tTransitionStart_2_7_2 = ar_pricing_transition:transition_start_2_7_2(),\n\n\tcase Height of\n\t\t_ when Height >= TransitionStart_2_7_2 ->\n\t\t\tar_pricing_transition:transition_length_2_7_2();\n\t\t_ ->\n\t\t\tar_pricing_transition:transition_length_2_6_8()\n\tend.\n\n\n%%%===================================================================\n%%% Private functions\n%%%===================================================================\n\ntransition_start(Height) ->\n\tTransitionStart_2_6_8 = ar_pricing_transition:transition_start_2_6_8(),\n\tTransitionStart_2_7_2 = ar_pricing_transition:transition_start_2_7_2(),\n\t\n\t%% There are 2 overlapping transition periods:\n\t%% 2.6.8 Transition Period:\n\t%% - Start: 2.6.8 + ?PRICE_2_6_8_TRANSITION_START\n\t%% - Length: 18 months\n\t%%\n\t%% 2.7.2 Transition Period:\n\t%% - Start: 2.6.8 + ?PRICE_2_7_2_TRANSITION_START\n\t%% - Length: 24 months\n\t%%\n\t%% The 2.7.2 transition period starts in the middle of the 2.6.8 transition period and \n\t%% replaces it.\n\tcase Height of\n\t\t_ when Height >= 
TransitionStart_2_7_2 ->\n\t\t\tTransitionStart_2_7_2;\n\t\t_ ->\n\t\t\tTransitionStart_2_6_8\n\tend.\n\ntransition_start_price(Height) ->\n\tTransitionStart_2_7_2 = ar_pricing_transition:transition_start_2_7_2(),\n\n\tcase Height of\n\t\t_ when Height >= TransitionStart_2_7_2 ->\n\t\t\t?PRICE_2_7_2_PER_GIB_MINUTE_UPPER_BOUND;\n\t\t_ ->\n\t\t\t?PRICE_PER_GIB_MINUTE_PRE_TRANSITION\n\tend.\n\ntransition_upper_bound(Height) ->\n\tTransitionStart_2_7_2 = ar_pricing_transition:transition_start_2_7_2(),\n\tFork_2_7_2 = ar_fork:height_2_7_2(),\n\t\n\tcase Height of\n\t\t_ when Height >= TransitionStart_2_7_2 ->\n\t\t\tinfinity;\n\t\t_ when Height >= Fork_2_7_2 ->\n\t\t\t?PRICE_2_7_2_PER_GIB_MINUTE_UPPER_BOUND;\n\t\t_ ->\n\t\t\tinfinity\n\tend.\n\ntransition_lower_bound(Height) ->\n\tTransitionStart_2_7_2 = ar_pricing_transition:transition_start_2_7_2(),\n\tFork_2_7_2 = ar_fork:height_2_7_2(),\n\t\n\tcase Height of\n\t\t_ when Height >= TransitionStart_2_7_2 ->\n\t\t\t0;\n\t\t_ when Height >= Fork_2_7_2 ->\n\t\t\t?PRICE_2_7_2_PER_GIB_MINUTE_LOWER_BOUND;\n\t\t_ ->\n\t\t\t0\n\tend."
  },
  {
    "path": "apps/arweave/src/ar_process_sampler.erl",
    "content": "-module(ar_process_sampler).\n-behaviour(gen_server).\n\n-include_lib(\"arweave/include/ar.hrl\").\n\n-export([start_link/0]).\n-export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2]).\n\n-define(SAMPLE_PROCESSES_INTERVAL, 15000).\n-define(SAMPLE_SCHEDULERS_INTERVAL, 30000).\n-define(SAMPLE_SCHEDULERS_DURATION, 5000).\n\n-record(state, {\n\tscheduler_samples = undefined\n}).\n\n%% API\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\n%% gen_server callbacks\ninit([]) ->\n\t{ok, _} = ar_timer:send_interval(\n\t\t?SAMPLE_PROCESSES_INTERVAL,\n\t\tself(),\n\t\tsample_processes,\n\t\t#{ skip_on_shutdown => false }\n\t),\n\tar_util:cast_after(?SAMPLE_SCHEDULERS_INTERVAL, ?MODULE, sample_schedulers),\n\t{ok, #state{}}.\n\nhandle_call(_Request, _From, State) ->\n\t{reply, ok, State}.\n\nhandle_cast(sample_schedulers, State) ->\n\tState2 = sample_schedulers(State),\n\t{noreply, State2};\n\nhandle_cast(_Msg, State) ->\n\t{noreply, State}.\n\nhandle_info(sample_processes, State) ->\n\tStartTime = erlang:monotonic_time(),\n\tProcesses = erlang:processes(),\n\tProcessData = lists:filtermap(fun(Pid) -> process_function(Pid) end, Processes),\n\n\tProcessMetrics =\n\t\tlists:foldl(fun({_Status, ProcessName, Memory, Reductions, MsgQueueLen}, Acc) ->\n\t\t\t%% Sum the data for each process. This is a compromise for handling unregistered\n\t\t\t%% processes. It has the effect of summing the memory, reductions, and message\n\t\t\t%% queue length across all unregistered processes running off the same function.\n\t\t\t%% In general this is what we want (e.g. for the io threads within ar_mining_io\n\t\t\t%% and the hashing threads within ar_mining_hashing, we want to see if, in\n\t\t\t%% aggregate, their memory or message queue length has spiked).\n\t\t\t{MemoryTotal, ReductionsTotal, MsgQueueLenTotal} =\n\t\t\t\tmaps:get(ProcessName, Acc, {0, 0, 0}),\n\t\t\tMetrics = {\n\t\t\t\tMemoryTotal + Memory, ReductionsTotal + Reductions, MsgQueueLenTotal + MsgQueueLen},\n\t\t\tmaps:put(ProcessName, Metrics, Acc)\n\t\tend,\n\t\t#{},\n\t\tProcessData),\n\n\t%% Clear out the process_info metric so that we don't persist data about processes that\n\t%% have exited. We have to deregister and re-register the metric because we don't track\n\t%% all the label values used.\n\tprometheus_gauge:deregister(process_info),\n\tprometheus_gauge:new([{name, process_info},\n\t\t{labels, [process, type]},\n\t\t{help, \"Sampling info about active processes. 
Only set when debug=true.\"}]),\n\n\tmaps:foreach(fun(ProcessName, Metrics) ->\n\t\t{Memory, Reductions, MsgQueueLen} = Metrics,\n\t\tprometheus_gauge:set(process_info, [ProcessName, memory], Memory),\n\t\tprometheus_gauge:set(process_info, [ProcessName, reductions], Reductions),\n\t\tprometheus_gauge:set(process_info, [ProcessName, message_queue], MsgQueueLen)\n\tend, ProcessMetrics),\n\n\tprometheus_gauge:set(process_info, [total, memory], erlang:memory(total)),\n\tprometheus_gauge:set(process_info, [processes, memory], erlang:memory(processes)),\n\tprometheus_gauge:set(process_info, [processes_used, memory], erlang:memory(processes_used)),\n\tprometheus_gauge:set(process_info, [system, memory], erlang:memory(system)),\n\tprometheus_gauge:set(process_info, [atom, memory], erlang:memory(atom)),\n\tprometheus_gauge:set(process_info, [atom_used, memory], erlang:memory(atom_used)),\n\tprometheus_gauge:set(process_info, [binary, memory], erlang:memory(binary)),\n\tprometheus_gauge:set(process_info, [code, memory], erlang:memory(code)),\n\tprometheus_gauge:set(process_info, [ets, memory], erlang:memory(ets)),\n\n\tlog_binary_alloc(),\n\n\tEndTime = erlang:monotonic_time(),\n\tElapsedTime = erlang:convert_time_unit(EndTime-StartTime, native, microsecond),\n\t?LOG_DEBUG([{event, sample_processes}, {elapsed_ms, ElapsedTime / 1000}]),\n\t{noreply, State};\n\nhandle_info(_Info, State) ->\n\t{noreply, State}.\n\nterminate(Reason, _State) ->\n\t?LOG_INFO([{module, ?MODULE},{pid, self()},{callback, terminate},{reason, Reason}]),\n\tok.\n\n%% Internal functions\nsample_schedulers(#state{ scheduler_samples = undefined } = State) ->\n\t%% Start sampling\n\terlang:system_flag(scheduler_wall_time,true),\n\tSamples = scheduler:sample_all(),\n\t%% Every ?SAMPLE_SCHEDULERS_INTERVAL ms, we'll sample the schedulers for\n\t%% ?SAMPLE_SCHEDULERS_DURATION ms.\n\tar_util:cast_after(?SAMPLE_SCHEDULERS_INTERVAL, ?MODULE, sample_schedulers),\n\tar_util:cast_after(?SAMPLE_SCHEDULERS_DURATION, ?MODULE, sample_schedulers),\n\tState#state{ scheduler_samples = Samples };\nsample_schedulers(#state{ scheduler_samples = Samples1 } = State) ->\n\t%% Finish sampling\n\tSamples2 = scheduler:sample_all(),\n\tUtil = scheduler:utilization(Samples1, Samples2),\n\terlang:system_flag(scheduler_wall_time,false),\n\taverage_utilization(Util),\n\tState#state{ scheduler_samples = undefined }.\n\naverage_utilization(Util) ->\n\tAverages = lists:foldl(\n\t\tfun\n\t\t({Type, Value, _}, Acc) ->\n\t\t\tmaps:put(Type, {Value, 1}, Acc);\n\t\t({Type, _, Value, _}, Acc) ->\n\t\t\tcase (Type == io andalso Value > 0) orelse (Type /= io) of\n\t\t\t\ttrue ->\n\t\t\t\t\t{Sum, Count} = maps:get(Type, Acc, {0, 0}),\n\t\t\t\t\tmaps:put(Type, {Sum+Value, Count+1}, Acc);\n\t\t\t\tfalse ->\n\t\t\t\t\tAcc\n\t\t\tend\n\t\tend,\n\t\t#{},\n\t\tUtil),\n\tmaps:foreach(\n\t\tfun(Type, {Sum, Count}) ->\n\t\t\tprometheus_gauge:set(scheduler_utilization, [Type], Sum / Count)\n\t\tend,\n\t\tAverages).\n\nprocess_function(Pid) ->\n\tcase process_info(Pid, [current_function, current_stacktrace, registered_name,\n\t\tstatus, memory, reductions, message_queue_len, messages]) of\n\t[{current_function, {erlang, process_info, _A}}, _, _, _, _, _, _, _] ->\n\t\tfalse;\n\t[{current_function, CurrentFunction}, {current_stacktrace, Stack},\n\t\t\t{registered_name, Name}, {status, Status},\n\t\t\t{memory, Memory}, {reductions, Reductions},\n\t\t\t{message_queue_len, MsgQueueLen}, {messages, Messages}] ->\n\t\tProcessName = process_name(Name, Stack),\n\t\tcase MsgQueueLen > 1000 
of\n\t\t\ttrue ->\n\t\t\t\tFormattedMessages =\n\t\t\t\t\t[format_message(Msg) || Msg <- lists:sublist(Messages, 10)],\n\t\t\t\t?LOG_DEBUG([{event, process_long_message_queue}, {pid, Pid},\n\t\t\t\t\t{process_name, ProcessName}, {current_function, CurrentFunction},\n\t\t\t\t\t{current_stacktrace, Stack}, {memory, Memory},\n\t\t\t\t\t{reductions, Reductions}, {message_queue_len, MsgQueueLen},\n\t\t\t\t\t{head_messages, FormattedMessages}]);\n\t\t\tfalse ->\n\t\t\t\tok\n\t\tend,\n\t\t{true, {Status, ProcessName, Memory, Reductions, MsgQueueLen}};\n\t_ ->\n\t\tfalse\n\tend.\n\nlog_binary_alloc() ->\n\t[Instance0 | _Rest] = erlang:system_info({allocator, binary_alloc}),\n\tlog_binary_alloc_instances([Instance0]).\n\nlog_binary_alloc_instances([]) ->\n\tok;\nlog_binary_alloc_instances([Instance | _Rest]) ->\n\t{instance, Id, [\n\t\t_Versions,\n\t\t_Options,\n\t\tMBCS,\n\t\tSBCS,\n\t\tCalls\n\t]} = Instance,\n\t{calls, [\n\t\t{binary_alloc, AllocGigaCount, AllocCount},\n\t\t{binary_free, FreeGigaCount, FreeCount},\n\t\t{binary_realloc, ReallocGigaCount, ReallocCount},\n\t\t_MsegAllocCount, _MsegDeallocCount, _MsegReallocCount,\n\t\t_SysAllocCount, _SysDeallocCount, _SysReallocCount\n\t]} = Calls,\n\n\tlog_binary_alloc_carrier(Id, MBCS),\n\tlog_binary_alloc_carrier(Id, SBCS),\n\n\tprometheus_gauge:set(allocator, [binary, Id, calls, binary_alloc_count],\n\t\t(AllocGigaCount * 1000000000) + AllocCount),\n\tprometheus_gauge:set(allocator, [binary, Id, calls, binary_free_count],\n\t\t(FreeGigaCount * 1000000000) + FreeCount),\n\tprometheus_gauge:set(allocator, [binary, Id, calls, binary_realloc_count],\n\t\t(ReallocGigaCount * 1000000000) + ReallocCount).\n\nlog_binary_alloc_carrier(Id, Carrier) ->\n\t{CarrierType, [\n\t\t{blocks, Blocks},\n\t\t{carriers, _, CarrierCount, _},\n\t\t_MsegCount, _SysCount,\n\t\t{carriers_size, _, CarrierSize, _},\n\t\t_MsegSize, _SysSize\n\t]} = Carrier,\n\n\tcase Blocks of\n\t\t[{binary_alloc, [{count, _, BlockCount, _}, {size, _, BlockSize, _}]}] ->\n\t\t\tprometheus_gauge:set(allocator, [binary, Id, CarrierType, binary_block_count],\n\t\t\t\tBlockCount),\n\t\t\tprometheus_gauge:set(allocator, [binary, Id, CarrierType, binary_block_size],\n\t\t\t\tBlockSize);\n\t\t_ ->\n\t\t\tprometheus_gauge:set(allocator, [binary, Id, CarrierType, binary_block_count],\n\t\t\t\t0),\n\t\t\tprometheus_gauge:set(allocator, [binary, Id, CarrierType, binary_block_size],\n\t\t\t\t0)\n\tend,\n\n\tprometheus_gauge:set(allocator, [binary, Id, CarrierType, binary_carrier_count],\n\t\tCarrierCount),\n\tprometheus_gauge:set(allocator, [binary, Id, CarrierType, binary_carrier_size],\n\t\tCarrierSize).\n\n\n%% @doc Anonymous processes don't have a registered name. 
So we'll name them after their\n%% module, function and arity.\nprocess_name([], []) ->\n\t\"unknown\";\nprocess_name([], Stack) ->\n\tInitialCall = initial_call(lists:reverse(Stack)),\n\tM = element(1, InitialCall),\n\tF = element(2, InitialCall),\n\tA = element(3, InitialCall),\n\tatom_to_list(M) ++ \":\" ++ atom_to_list(F) ++ \"/\" ++ integer_to_list(A);\nprocess_name(Name, _Stack) ->\n\tatom_to_list(Name).\n\ninitial_call([]) ->\n\t\"unknown\";\ninitial_call([{proc_lib, init_p_do_apply, _A, _Location} | Stack]) ->\n\tinitial_call(Stack);\ninitial_call([InitialCall | _Stack]) ->\n\tInitialCall.\n\n\nformat_message(Msg) ->\n    TruncatedMsg = truncate_term(Msg),\n    Formatted = io_lib:format(\"~p\", [TruncatedMsg]),\n    OutputStr = lists:flatten(Formatted),\n    LimitedOutput = limit_output(OutputStr, 1000),\n    io_lib:format(\"~s~n\", [LimitedOutput]).\n\nlimit_output(Str, Limit) ->\n    if\n        length(Str) > Limit -> lists:sublist(Str, Limit);\n        true -> Str\n    end.\n\ntruncate_term(Term) when is_binary(Term) ->\n    if\n        byte_size(Term) > 8 ->\n            <<Head:8/binary, _/binary>> = Term,\n            %% Append ellipsis (three periods) to indicate truncation.\n            <<Head/binary, 46,46,46>>;\n        true ->\n            Term\n    end;\ntruncate_term([]) ->\n    [];\ntruncate_term([Head | Tail]) ->\n    [truncate_term(Head) | truncate_term(Tail)];\ntruncate_term(Term) when is_tuple(Term) ->\n    List = tuple_to_list(Term),\n    TruncatedList = [truncate_term(Elem) || Elem <- List],\n    list_to_tuple(TruncatedList);\ntruncate_term(Term) when is_map(Term) ->\n    maps:map(fun(_Key, Value) -> truncate_term(Value) end, Term);\ntruncate_term(Term) ->\n    Term.\n"
  },
  {
    "path": "apps/arweave/src/ar_prometheus_cowboy_handler.erl",
    "content": "%% @doc\n%% Cowboy2 handler for exporting prometheus metrics.\n%% @end\n-module(ar_prometheus_cowboy_handler).\n\n%% -behaviour(cowboy_handler).\n\n-export([init/2, terminate/3]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n\n%% ===================================================================\n%% cowboy_handler callbacks\n%% ===================================================================\n\ninit(Req, _Opts) ->\n\thandle(Req).\n\nterminate(_Reason, _Req, _State) ->\n\tok.\n\n%% ===================================================================\n%% Private functions\n%% ===================================================================\n\nhandle(Request) ->\n\tMethod = cowboy_req:method(Request),\n\tRequest1 = gen_response(Method, Request),\n\t{ok, Request1, undefined}.\n\ngen_response(<<\"HEAD\">>, Request) ->\n\tRegistry0 = cowboy_req:binding(registry, Request, <<\"default\">>),\n\tcase prometheus_registry:exists(Registry0) of\n\t\tfalse ->\n\t\t\tcowboy_req:reply(404, #{}, <<\"Unknown Registry\">>, Request);\n\t\tRegistry ->\n\t\t\tgen_metrics_response(Registry, Request)\n\tend;\ngen_response(<<\"GET\">>, Request) ->\n\tRegistry0 = cowboy_req:binding(registry, Request, <<\"default\">>),\n\tcase prometheus_registry:exists(Registry0) of\n\t\tfalse ->\n\t\t\tcowboy_req:reply(404, #{}, <<\"Unknown Registry\">>, Request);\n\t\tRegistry ->\n\t\t\tgen_metrics_response(Registry, Request)\n\tend;\ngen_response(_, Request) ->\n\tRequest.\n\ngen_metrics_response(Registry, Request) ->\n\tURI = true,\n\tGetHeader =\n\t\tfun(Name, Default) ->\n\t\t\tcowboy_req:header(iolist_to_binary(Name),\n\t\t\t\t\tRequest, Default)\n        end,\n\t{Code, RespHeaders, Body} = prometheus_http_impl:reply(\n\t\t\t#{ path => URI, headers => GetHeader, registry => Registry,\n\t\t\t\tstandalone => false}),\n\tHeaders = prometheus_cowboy:to_cowboy_headers(RespHeaders),\n\tHeaders2 = maps:merge(?CORS_HEADERS, maps:from_list(Headers)),\n\tcowboy_req:reply(Code, Headers2, Body, Request).\n"
  },
  {
    "path": "apps/arweave/src/ar_prometheus_cowboy_labels.erl",
    "content": "-module(ar_prometheus_cowboy_labels).\n\n-export([label_value/2]).\n\n%%%===================================================================\n%%% Prometheus cowboy labels module callback (no behaviour)\n%%%===================================================================\n\nlabel_value(http_method, #{req:=Req}) ->\n\tnormalize_method(cowboy_req:method(Req));\nlabel_value(route, #{req:=Req}) ->\n\tar_http_iface_server:label_http_path(cowboy_req:path(Req));\nlabel_value(_, _) ->\n\tundefined.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nnormalize_method(<<\"GET\">>) -> 'GET';\nnormalize_method(<<\"HEAD\">>) -> 'HEAD';\nnormalize_method(<<\"POST\">>) -> 'POST';\nnormalize_method(<<\"PUT\">>) -> 'PUT';\nnormalize_method(<<\"DELETE\">>) -> 'DELETE';\nnormalize_method(<<\"CONNECT\">>) -> 'CONNECT';\nnormalize_method(<<\"OPTIONS\">>) -> 'OPTIONS';\nnormalize_method(<<\"TRACE\">>) -> 'TRACE';\nnormalize_method(<<\"PATCH\">>) -> 'PATCH';\nnormalize_method(_) -> undefined.\n"
  },
  {
    "path": "apps/arweave/src/ar_rate_limiter.erl",
    "content": "-module(ar_rate_limiter).\n\n-behaviour(gen_server).\n\n-export([start_link/0, throttle/2, off/0, on/0, is_on_cooldown/2, set_cooldown/3]).\n-export([is_throttled/2]).\n-export([init/1, handle_cast/2, handle_call/3, handle_info/2, terminate/2]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\"). % Used in ?RPM_BY_PATH.\n-include_lib(\"arweave/include/ar_blacklist_middleware.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-record(state, {\n\ttraces,\n\toff\n}).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\n%% @doc Hang until it is safe to make another request to the given Peer with the given Path.\n%% The limits are configured in include/ar_blacklist_middleware.hrl.\nthrottle(Peer, Path) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tcase lists:member(Peer, Config#config.local_peers) of\n\t\ttrue ->\n\t\t\tok;\n\t\tfalse ->\n\t\t\tthrottle2(Peer, Path)\n\tend.\n\nthrottle2(Peer, Path) ->\n\tP = ar_http_iface_server:split_path(iolist_to_binary(Path)),\n\tcase P of\n\t\t[<<\"tx\">>] ->\n\t\t\t%% Do not throttle transaction gossip.\n\t\t\tok;\n\t\t_ ->\n\t\t\tcase catch gen_server:call(?MODULE, {throttle, Peer, P}, infinity) of\n\t\t\t\t{'EXIT', {noproc, {gen_server, call, _}}} ->\n\t\t\t\t\tok;\n\t\t\t\t{'EXIT', Reason} ->\n\t\t\t\t\texit(Reason);\n\t\t\t\t_ ->\n\t\t\t\t\tok\n\t\t\tend\n\tend.\n\n%% @doc Turn rate limiting off.\noff() ->\n\tgen_server:cast(?MODULE, turn_off).\n\n%% @doc Turn rate limiting on.\non() ->\n\tgen_server:cast(?MODULE, turn_on).\n\n%% @doc Return true if Peer should be throttled for the given RPMKey.\nis_throttled(Peer, Path) ->\n\tcase catch gen_server:call(?MODULE, {is_throttled, Peer, Path}, infinity) of\n\t\t{'EXIT', {noproc, {gen_server, call, _}}} -> false;\n\t\t{'EXIT', Reason} -> exit(Reason);\n\t\tBool when is_boolean(Bool) -> Bool\n\tend.\n\n%% @doc Return true if Peer is on cooldown for the given Path.\nis_on_cooldown(Peer, RPMKey) ->\n\tNow = os:system_time(millisecond),\n\tcase ets:lookup(?MODULE, {cooldown, Peer, RPMKey}) of\n\t\t[{_, Until}] when Until > Now -> true;\n\t\t_ -> false\n\tend.\n\n%% @doc Put Peer on cooldown for the given RPMKey for Milliseconds.\nset_cooldown(Peer, RPMKey, Milliseconds) when Milliseconds > 0 ->\n\t?LOG_DEBUG([{event, set_cooldown}, {peer, ar_util:format_peer(Peer)}, {rpm_key, RPMKey}, {milliseconds, Milliseconds}]),\n\tUntil = os:system_time(millisecond) + Milliseconds,\n\tets:insert(?MODULE, {{cooldown, Peer, RPMKey}, Until}),\n\tok;\nset_cooldown(_Peer, _RPMKey, _Milliseconds) ->\n\tok.\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\n\ninit([]) ->\n\t{ok, #state{ traces = #{}, off = false }}.\n\nhandle_call({throttle, _Peer, _Path}, _From, #state{ off = true } = State) ->\n\t{reply, ok, State};\n\nhandle_call({is_throttled, Peer, Path}, _From, State) ->\n\t{Throttle, _} = is_throttled(Peer, Path, State),\n\t{reply, Throttle, State};\nhandle_call({throttle, Peer, Path}, From, State) ->\n\tgen_server:cast(?MODULE, {throttle, Peer, Path, From}),\n\t{noreply, State};\n\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {request, Request}]),\n\t{reply, ok, 
State}.\n\n\nhandle_cast({throttle, Peer, Path, From}, State) ->\n\t#state{ traces = Traces } = State,\n\t{RPMKey, Limit} = ?RPM_BY_PATH(Path)(),\n\t{Throttle, {N, Trace}} = is_throttled(Peer, Path, State),\n\tcase Throttle of\n\t\ttrue ->\n\t\t\t?LOG_DEBUG([{event, approaching_peer_rpm_limit},\n\t\t\t\t\t{path, Path}, {minute_limit, Limit},\n\t\t\t\t\t{peer, ar_util:format_peer(Peer)}, {caller, From}]),\n\t\t\tar_util:cast_after(1000, ?MODULE, {throttle, Peer, Path, From}),\n\t\t\t{noreply, State};\n\t\tfalse ->\n\t\t\tgen_server:reply(From, ok),\n\t\t\tTraces2 = maps:put({Peer, RPMKey}, {N, Trace}, Traces),\n\t\t\t{noreply, State#state{ traces = Traces2 }}\n\tend;\n\nhandle_cast(turn_off, State) ->\n\t{noreply, State#state{ off = true }};\n\nhandle_cast(turn_on, State) ->\n\t{noreply, State#state{ off = false }};\n\nhandle_cast(Cast, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {cast, Cast}]),\n\t{noreply, State}.\n\nhandle_info(Message, State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {module, ?MODULE}, {message, Message}]),\n\t{noreply, State}.\n\nterminate(Reason, _State) ->\n\t?LOG_INFO([{module, ?MODULE},{pid, self()},{callback, terminate},{reason, Reason}]),\n\tok.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\ncut_trace(N, Trace, Now) ->\n\t{{value, Timestamp}, Trace2} = queue:out(Trace),\n\tcase Timestamp < Now - ?THROTTLE_PERIOD of\n\t\ttrue ->\n\t\t\tcut_trace(N - 1, Trace2, Now);\n\t\tfalse ->\n\t\t\t{N, Trace}\n\tend.\n\n%% @doc Internal predicate used by both server and tests. Returns {Throttle, {NewN, NewTrace}}.\nis_throttled(Peer, Path, #state{ traces = Traces } = _State) ->\n\t{RPMKey, Limit} = ?RPM_BY_PATH(Path)(),\n\tNow = os:system_time(millisecond),\n\tcase maps:get({Peer, RPMKey}, Traces, not_found) of\n\t\tnot_found ->\n\t\t\t{false, {1, queue:from_list([Now])}};\n\t\t{N, Trace} ->\n\t\t\t{N2, Trace2} = cut_trace(N, queue:in(Now, Trace), Now),\n\t\t\t%% The macro specifies requests per minute while the throttling window\n\t\t\t%% is 30 seconds.\n\t\t\tHalfLimit = Limit div 2,\n\t\t\t%% Try to approach but not hit the limit.\n\t\t\tThrottle = N2 + 1 > max(1, HalfLimit * 80 div 100),\n\t\t\t{Throttle, {N2 + 1, Trace2}}\n\tend.\n\n\n%%--------------------------------------------------------------------\n%% Tests\n%%--------------------------------------------------------------------\n\nis_throttled_server_down_test() ->\n\t%% When the server is not running, we should not crash and return false.\n\tPeer = {127,0,0,1},\n\t?assertEqual(false, is_throttled(Peer, [<<\"hash_list\">>]) ).\n\nis_throttled_test() ->\n\tPeer = {127,0,0,1},\n\tRPMKey = data_sync_record,\n\tPath = [<<\"data_sync_record\">>],\n\tNow = os:system_time(millisecond),\n\tThrottleLimit = (?DEFAULT_REQUESTS_PER_MINUTE_LIMIT div 2) * 80 div 100,\n\n\t%% Build a trace representing ThrottleLimit - 1 requests\n\tTrace = queue:from_list(lists:duplicate(ThrottleLimit - 1, Now + 2000 - ?THROTTLE_PERIOD)),\n\n\tState = #state{ traces = #{ {Peer, RPMKey} => {ThrottleLimit - 1, Trace} }, off = false },\n\t{Throttle1, {N1, Trace1}} = is_throttled(Peer, Path, State),\n\t?assertEqual(false, Throttle1),\n\n\t%% Add one more implied request (same inputs) should be throttled next time\n\tState2 = #state{ traces = #{ {Peer, RPMKey} => {N1, Trace1} }, off = false },\n\t{Throttle2, {_N2, _Trace2}} = is_throttled(Peer, Path, State2),\n\t?assertEqual(true, 
Throttle2),\n\n\t%% Sleep to let most of the requests age out. Note: ar_rate_limiter only updates the traces\n\t%% state when there is no throttle, so we won't use {N2, Trace2}\n\ttimer:sleep(3000),\n\t{Throttle3, {N3, _Trace3}} = is_throttled(Peer, Path, State2),\n\t?assertEqual(false, Throttle3),\n\t?assertEqual(2, N3),\n\n\t%% Not found path should not throttle and should suggest initial trace\n\tState4 = #state{ traces = #{}, off = false },\n\t{Throttle4, {N4, Trace4}} = is_throttled(Peer, Path, State4),\n\t?assertEqual(false, Throttle4),\n\t?assertEqual(1, N4),\n\t?assertEqual(1, queue:len(Trace4)),\n\tok."
  },
  {
    "path": "apps/arweave/src/ar_repack.erl",
    "content": "-module(ar_repack).\n\n-behaviour(gen_server).\n\n-export([name/1, register_workers/0, get_read_range/3, chunk_range_read/4]).\n\n-export([start_link/2, init/1, handle_cast/2, handle_call/3, handle_info/2, terminate/2]).\n\n-include(\"ar.hrl\").\n-include(\"ar_sup.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include(\"ar_repack.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-moduledoc \"\"\"\n\tThis module handles the repack-in-place logic.\n\"\"\".\n\n-define(REPACK_WRITE_BATCH_SIZE, 1024).\n\n-record(state, {\n\tstore_id = undefined,\n\tread_batch_size = ?DEFAULT_REPACK_BATCH_SIZE,\n\twrite_batch_size = ?REPACK_WRITE_BATCH_SIZE,\n\tnum_entropy_offsets,\n\tmodule_start = 0,\n\tmodule_end = 0,\n\tfootprint_start = 0,\n\t%% The highest chunk offset that can be read for this repack footprint.\n\tfootprint_end = 0,\n\t%% The highest bucket end offset to generate entropy for. Generating entropy for this\n\t%% bucket may yield entropy offsets higher than this because entropy is generated in\n\t%% 256 MiB batches.\n\tentropy_end = 0,\n\tnext_cursor = 0, \n\tconfigured_packing = undefined,\n\ttarget_packing = undefined,\n\trepack_status = undefined,\n\trepack_chunk_map = #{},\n\twrite_queue = gb_sets:new()\n}).\n\n-ifdef(AR_TEST).\n-define(DEVICE_LOCK_WAIT, 100).\n-else.\n-define(DEVICE_LOCK_WAIT, 5_000).\n-endif.\n\n-define(STATE_COUNT_INTERVAL, 10_000).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Start the server.\nstart_link(Name, {StoreID, Packing}) ->\n\tgen_server:start_link({local, Name}, ?MODULE,  {StoreID, Packing}, []).\n\n%% @doc Return the name of the server serving the given StoreID.\nname(StoreID) ->\n\tlist_to_atom(\"ar_repack_\" ++ ar_storage_module:label(StoreID)).\n\nregister_workers() ->\n    {ok, Config} = arweave_config:get_env(),\n    \n    RepackInPlaceWorkers = lists:flatmap(\n        fun({StorageModule, Packing}) ->\n            StoreID = ar_storage_module:id(StorageModule),\n            %% Note: the config validation will prevent a StoreID from being used in both\n            %% `storage_modules` and `repack_in_place_storage_modules`, so there's\n            %% no risk of a `Name` clash with the workers spawned above.\n\t\t\tRepackWorker = ?CHILD_WITH_ARGS(\n\t\t\t\tar_repack, worker, name(StoreID),\n\t\t\t\t[name(StoreID), {StoreID, Packing}]),\n\n\t\t\tRepackIOWorker = ?CHILD_WITH_ARGS(\n\t\t\t\tar_repack_io, worker, ar_repack_io:name(StoreID),\n\t\t\t\t[ar_repack_io:name(StoreID), StoreID]),\n\n\t\t\t[RepackWorker, RepackIOWorker]\n        end,\n        Config#config.repack_in_place_storage_modules\n    ),\n\n    RepackInPlaceWorkers.\n\ninit({StoreID, ToPacking}) ->\n\tFromPacking = ar_storage_module:get_packing(StoreID),\n\t?LOG_INFO([{event, ar_repack_init},\n        {name, name(StoreID)}, {store_id, StoreID},\n\t\t{from_packing, ar_serialize:encode_packing(FromPacking, false)},\n        {to_packing, ar_serialize:encode_packing(ToPacking, false)}]),\n\t\n\t%% ModuleStart to PaddedModuleEnd is the *chunk* range that will be repacked. 
Chunk\n\t%% offsets will later be converted to bucket offsets and entropy offsets - and the \n\t%% bucket and entropy ranges may differ from this chunk range.\n\tModule = ar_storage_module:get_by_id(StoreID),\n    {ModuleStart, ModuleEnd} = ar_storage_module:module_range(Module),\n\tPaddedModuleEnd = ar_block:get_chunk_padded_offset(ModuleEnd),\n    Cursor = read_cursor(StoreID, ToPacking, ModuleStart),\n\n\t{ok, Config} = arweave_config:get_env(),\n\tBatchSize = Config#config.repack_batch_size,\n\tCacheSize = Config#config.repack_cache_size_mb,\n\tNumEntropyOffsets = calculate_num_entropy_offsets(CacheSize, BatchSize),\n\tgen_server:cast(self(), repack),\n\tgen_server:cast(self(), count_states),\n\tar_device_lock:set_device_lock_metric(StoreID, repack, paused),\n\tState = #state{ \n\t\tstore_id = StoreID,\n\t\tread_batch_size = BatchSize,\n\t\tnum_entropy_offsets = NumEntropyOffsets,\n\t\tmodule_start = ModuleStart,\n\t\tmodule_end = PaddedModuleEnd,\n\t\tnext_cursor = Cursor, \n\t\tconfigured_packing = FromPacking,\n\t\ttarget_packing = ToPacking,\n\t\trepack_status = paused\n\t},\n\tlog_info(starting_repack_in_place, State, [\n\t\t{name, name(StoreID)},\n\t\t{read_batch_size, BatchSize},\n\t\t{write_batch_size, State#state.write_batch_size},\n\t\t{num_entropy_offsets, State#state.num_entropy_offsets},\n\t\t{from_packing, ar_serialize:encode_packing(FromPacking, false)},\n        {to_packing, ar_serialize:encode_packing(ToPacking, false)},\n\t\t{raw_module_end, ModuleEnd},\n\t\t{next_cursor, Cursor}]),\n    {ok, State}.\n\n%% @doc Gets the start and end offset of the range of chunks to read starting from\n%% BucketEndOffset. Also includes the BucketEndOffsets covered by that range.\nget_read_range(BucketEndOffset, #state{} = State) ->\n\t#state{\n\t\tmodule_end = ModuleEnd,\n\t\tfootprint_end = FootprintEnd,\n\t\tread_batch_size = BatchSize\n\t} = State,\n\tget_read_range(BucketEndOffset, min(ModuleEnd, FootprintEnd), BatchSize).\n\n-spec get_read_range(\n\t\tnon_neg_integer(), non_neg_integer(), non_neg_integer()) ->\n\t{non_neg_integer(), non_neg_integer(), [non_neg_integer()]}.\nget_read_range(BucketEndOffset, RangeEnd, BatchSize) ->\n\tReadRangeStart = ar_chunk_storage:get_chunk_byte_from_bucket_end(BucketEndOffset),\n\n\tPartition = ar_node:get_partition_number(BucketEndOffset),\n\t{EntropyPartitionStart, EntropyPartitionEnd} =\n\t\tar_replica_2_9:get_entropy_partition_range(Partition),\n\tSectorSize = ar_block:get_replica_2_9_entropy_sector_size(),\n\tEntropyPartitionStartBucket = ar_chunk_storage:get_chunk_bucket_start(EntropyPartitionStart),\n\tSector = (BucketEndOffset - EntropyPartitionStartBucket) div SectorSize,\n\tSectorBucketEnd = EntropyPartitionStartBucket + (Sector + 1) * SectorSize,\n\tSectorChunkEnd =\n\t\tar_chunk_storage:get_chunk_byte_from_bucket_end(SectorBucketEnd) + ?DATA_CHUNK_SIZE,\n\t\n\tFullRangeSize = ?DATA_CHUNK_SIZE * BatchSize,\n\tReadRangeEnd = lists:min([\n\t\tReadRangeStart + FullRangeSize,\n\t\tEntropyPartitionEnd,\n\t\tSectorChunkEnd,\n\t\tRangeEnd]),\n\n\tBucketEndOffsets = [BucketEndOffset + (N * ?DATA_CHUNK_SIZE) || \n\t\tN <- lists:seq(0, BatchSize-1),\n\t\tBucketEndOffset + (N * ?DATA_CHUNK_SIZE) =< ReadRangeEnd],\n\n\t{ReadRangeStart, ReadRangeEnd, BucketEndOffsets}.\n\nchunk_range_read(BucketEndOffset, OffsetChunkMap, OffsetMetadataMap, StoreID) ->\n\tgen_server:cast(name(StoreID),\n\t\t{chunk_range_read, BucketEndOffset, OffsetChunkMap, OffsetMetadataMap}).\n\n%%%===================================================================\n%%% Gen 
server callbacks.\n%%%===================================================================\n\nhandle_call(Request, _From, #state{} = State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {request, Request}]),\n\t{reply, ok, State}.\n\nhandle_cast(repack, #state{} = State) ->\n\t#state{ store_id = StoreID } = State,\n\tstore_cursor(State),\n\tNewStatus = ar_device_lock:acquire_lock(repack, StoreID, State#state.repack_status),\n\tState2 = State#state{ repack_status = NewStatus },\n\tState3 = case NewStatus of\n\t\tactive ->\n\t\t\trepack(State2);\n\t\tpaused ->\n\t\t\tar_util:cast_after(?DEVICE_LOCK_WAIT, self(), repack),\n\t\t\tState2;\n\t\t_ ->\n\t\t\tState2\n\tend,\n\t{noreply, State3};\n\nhandle_cast({chunk_range_read, BucketEndOffset, OffsetChunkMap, OffsetMetadataMap}, #state{} = State) ->\n\t{_, _, ReadRangeOffsets} = get_read_range(BucketEndOffset, State),\n\tState2 = add_range_to_repack_chunk_map(OffsetChunkMap, OffsetMetadataMap, State),\n\tState3 = mark_missing_chunks(ReadRangeOffsets, State2),\n\t{noreply, State3};\n\nhandle_cast({expire_repack_request, {BucketEndOffset, FootprintID}},\n\t\t#state{footprint_start = FootprintStart} = State) \n\t\twhen FootprintID == FootprintStart ->\n\t#state{\n\t\trepack_chunk_map = Map\n\t} = State,\n\tState2 = case maps:get(BucketEndOffset, Map, not_found) of\n\t\tnot_found ->\n\t\t\t%% Chunk has already been repacked and processed.\n\t\t\tState;\n\t\tRepackChunk ->\n\t\t\tlog_debug(repack_request_expired, RepackChunk, State, []),\n\t\t\tremove_repack_chunk(BucketEndOffset, State)\n\tend,\n\t{noreply, State2};\nhandle_cast({expire_repack_request, _Ref}, #state{} = State) ->\n\t%% Request is from an old batch, ignore.\n\t{noreply, State};\n\nhandle_cast({expire_encipher_request, {BucketEndOffset, FootprintID}},\n\t\t#state{footprint_start = FootprintStart} = State) \n\t\twhen FootprintID == FootprintStart ->\n\t{noreply, expire_exor_request(BucketEndOffset, State)};\nhandle_cast({expire_encipher_request, _Ref}, #state{} = State) ->\n\t%% Request is from an old batch, ignore.\n\t{noreply, State};\n\nhandle_cast({expire_decipher_request, {BucketEndOffset, FootprintID}},\n\t\t#state{footprint_start = FootprintStart} = State) \n\t\twhen FootprintID == FootprintStart ->\n\t{noreply, expire_exor_request(BucketEndOffset, State)};\nhandle_cast({expire_decipher_request, _Ref}, #state{} = State) ->\n\t%% Request is from an old batch, ignore.\n\t{noreply, State};\n\nhandle_cast(count_states, #state{} = State) ->\n\tcount_states(cache, State),\n\tar_util:cast_after(?STATE_COUNT_INTERVAL, self(), count_states),\n\t{noreply, State};\n\nhandle_cast(Request, #state{} = State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {request, Request}]),\n\t{noreply, State}.\n\nhandle_info({entropy, BucketEndOffset, RewardAddr, Entropies}, #state{} = State) ->\n\t#state{ \n\t\tfootprint_start = FootprintStart,\n\t\tfootprint_end = FootprintEnd\n\t} = State,\n\n\tgenerate_repack_entropy(\n\t\tBucketEndOffset + ?DATA_CHUNK_SIZE,\n\t\t{replica_2_9, RewardAddr},\n\t\tState),\n\n\tEntropyKeys = ar_entropy_gen:generate_entropy_keys(RewardAddr, BucketEndOffset),\n\tEntropyOffsets = ar_entropy_gen:entropy_offsets(BucketEndOffset, FootprintEnd),\n\n\tState2 = ar_entropy_gen:map_entropies(\n\t\tEntropies, \n\t\tEntropyOffsets,\n\t\tFootprintStart,\n\t\tEntropyKeys, \n\t\tRewardAddr,\n\t\tfun entropy_generated/4, [], State),\n\t{noreply, State2};\n\nhandle_info({chunk, {packed, {BucketEndOffset, _}, ChunkArgs}}, #state{} = State) 
->\n\t#state{\n\t\trepack_chunk_map = Map\n\t} = State,\n\n\tState2 = case maps:get(BucketEndOffset, Map, not_found) of\n\t\tnot_found ->\n\t\t\t{Packing, _, AbsoluteOffset, _, ChunkSize} = ChunkArgs,\n\t\t\tlog_warning(chunk_repack_request_not_found, State, [\n\t\t\t\t{bucket_end_offset, BucketEndOffset},\n\t\t\t\t{absolute_offset, AbsoluteOffset},\n\t\t\t\t{chunk_size, ChunkSize},\n\t\t\t\t{packing, ar_serialize:encode_packing(Packing, false)},\n\t\t\t\t{repack_chunk_map, maps:size(Map)}\n\t\t\t]),\n\t\t\tState;\n\t\tRepackChunk ->\n\t\t\t{Packing, Chunk, _, _, _} = ChunkArgs,\n\t\t\t%% sanity checks\n\t\t\ttrue = RepackChunk#repack_chunk.state == needs_repack,\n\t\t\t%% end sanity checks\n\n\t\t\tRepackChunk2 = RepackChunk#repack_chunk{\n\t\t\t\tchunk = Chunk,\n\t\t\t\tsource_packing = Packing\n\t\t\t},\n\t\t\tupdate_chunk_state(RepackChunk2, State)\n\tend,\n\t{noreply, State2};\n\nhandle_info({chunk, {deciphered, {BucketEndOffset, _}, UnpackedChunk}}, #state{} = State) ->\n\t#state{\n\t\trepack_chunk_map = Map\n\t} = State,\n\n\tState2 = case maps:get(BucketEndOffset, Map, not_found) of\n\t\tnot_found ->\n\t\t\tlog_warning(chunk_decipher_request_not_found, State, [\n\t\t\t\t{bucket_end_offset, BucketEndOffset},\n\t\t\t\t{repack_chunk_map, maps:size(Map)}\n\t\t\t]),\n\t\t\tState;\n\t\tRepackChunk ->\n\t\t\t%% sanity checks\n\t\t\ttrue = RepackChunk#repack_chunk.state == needs_decipher,\n\t\t\ttrue = byte_size(UnpackedChunk) == ?DATA_CHUNK_SIZE,\n\t\t\t%% end sanity checks\n\n\t\t\tRepackChunk2 = RepackChunk#repack_chunk{\n\t\t\t\tchunk = UnpackedChunk,\n\t\t\t\tsource_entropy = <<>>,\n\t\t\t\tsource_packing = unpacked_padded\n\t\t\t},\n\t\t\tupdate_chunk_state(RepackChunk2, State)\n\tend,\n\t{noreply, State2};\n\nhandle_info({chunk, {enciphered, {BucketEndOffset, _}, PackedChunk}}, #state{} = State) ->\n\t#state{ repack_chunk_map = Map } = State,\n\tState2 = case maps:get(BucketEndOffset, Map, not_found) of\n\t\tnot_found ->\n\t\t\tlog_warning(chunk_encipher_request_not_found, State, [\n\t\t\t\t{bucket_end_offset, BucketEndOffset},\n\t\t\t\t{repack_chunk_map, maps:size(Map)}\n\t\t\t]),\n\t\t\tState;\n\t\tRepackChunk ->\n\t\t\t%% sanity checks\n\t\t\ttrue = RepackChunk#repack_chunk.state == needs_encipher,\n\t\t\t%% end sanity checks\n\n\t\t\tRepackChunk2 = RepackChunk#repack_chunk{\n\t\t\t\tchunk = PackedChunk,\n\t\t\t\ttarget_entropy = <<>>,\n\t\t\t\tsource_packing = RepackChunk#repack_chunk.target_packing\n\t\t\t},\n\t\t\tupdate_chunk_state(RepackChunk2, State)\n\tend,\n\t{noreply, State2};\n\nhandle_info({entropy_generated, _Ref, _Entropy}, State) ->\n\t?LOG_WARNING([{event, entropy_generation_timed_out}]),\n\t{noreply, State};\n\nhandle_info(Request, #state{} = State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {module, ?MODULE}, {request, Request}]),\n\t{noreply, State}.\n\nterminate(Reason, #state{} = State) ->\n\tlog_debug(terminate, State, [{reason, ar_util:safe_format(Reason)}]),\n\tstore_cursor(State),\n\t?LOG_INFO([{module, ?MODULE},{pid, self()},{callback, terminate},{reason, Reason}]),\n\tok.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\ncalculate_num_entropy_offsets(CacheSize, BatchSize) ->\n\tmin(ar_block:get_sub_chunks_per_replica_2_9_entropy(), (CacheSize * 4) div BatchSize).\n\n%% @doc Outer repack loop. Called via `gen_server:cast(self(), repack)`. Each call\n%% repacks another footprint of chunks. 
A repack footprint is N entropy footprints where N\n%% is the repack batch size.\nrepack(#state{ next_cursor = Cursor, module_end = ModuleEnd } = State)\n\t\twhen Cursor > ModuleEnd ->\n\t#state{ repack_chunk_map = Map, store_id = StoreID, target_packing = TargetPacking } = State,\n\n\tcase maps:size(Map) of\n\t\t0 ->\n\t\t\tar_device_lock:release_lock(repack, StoreID),\n\t\t\tar_device_lock:set_device_lock_metric(StoreID, repack, complete),\n\t\t\tState2 = State#state{ repack_status = complete },\n\t\t\tar:console(\"~n~nRepacking of ~s is complete! \"\n\t\t\t\t\t\"We suggest you stop the node, rename \"\n\t\t\t\t\t\"the storage module folder to reflect \"\n\t\t\t\t\t\"the new packing, and start the \"\n\t\t\t\t\t\"node with the new storage module.~n\", [StoreID]),\n\t\t\t?LOG_INFO([{event, repacking_complete},\n\t\t\t\t\t{store_id, StoreID},\n\t\t\t\t\t{target_packing, ar_serialize:encode_packing(TargetPacking, false)}]),\n\t\t\tState2;\n\t\t_ ->\n\t\t\tlog_debug(repacking_complete_but_waiting, State, [\n\t\t\t\t{target_packing, ar_serialize:encode_packing(TargetPacking, false)}]),\n\t\t\tar_util:cast_after(5000, self(), repack),\n\t\t\tState\n\tend;\n\nrepack(#state{} = State) ->\n\t#state{ next_cursor = Cursor, target_packing = TargetPacking } = State,\n\n\tcase ar_packing_server:is_buffer_full() of\n\t\ttrue ->\n\t\t\tlog_debug(waiting_for_repack_buffer, State, [\n\t\t\t\t{target_packing, ar_serialize:encode_packing(TargetPacking, false)}]),\n\t\t\tar_util:cast_after(200, self(), repack),\n\t\t\tState;\n\t\tfalse ->\n\t\t\trepack_footprint(Cursor, State)\n\tend.\n\nrepack_footprint(Cursor, #state{} = State) ->\n\t#state{ module_end = ModuleEnd,\n\t\tnum_entropy_offsets = NumEntropyOffsets,\n\t\tconfigured_packing = SourcePacking,\n\t\ttarget_packing = TargetPacking,\n\t\tstore_id = StoreID,\n\t\tread_batch_size = BatchSize } = State,\n\n\tBucketEndOffset = ar_chunk_storage:get_chunk_bucket_end(Cursor),\n\tBucketStartOffset = ar_chunk_storage:get_chunk_bucket_start(Cursor),\n\tFootprintOffsets = footprint_offsets(BucketEndOffset, NumEntropyOffsets, ModuleEnd),\n\tFootprintStart = BucketStartOffset+1,\n\tFootprintEnd = footprint_end(FootprintOffsets, ModuleEnd, BatchSize),\n\tcase should_repack(Cursor, FootprintStart, FootprintEnd, State) of\n\t\t{false, Logs} ->\n\t\t\t%% Skip this Cursor for one of these reasons:\n\t\t\t%% 1. Cursor has already been repacked.\n\t\t\t%%    Note: we expect this to happen a lot since we iterate through all\n\t\t\t%%    chunks in the partition, but for each chunk we will repack N\n\t\t\t%%    entropy footprints.\n\t\t\t%% 2. 
The iteration range of this batch starts after the end of the\n\t\t\t%%    storage module.\n\t\t\tgen_server:cast(self(), repack),\n\t\t\tInterval = ar_sync_record:get_next_unsynced_interval(\n\t\t\t\tCursor, infinity, TargetPacking, ar_data_sync, StoreID),\n\t\t\tNextCursor = case Interval of\n\t\t\t\tnot_found ->\n\t\t\t\t\tCursor + ?DATA_CHUNK_SIZE;\n\t\t\t\t{_, Start} ->\n\t\t\t\t\tStart\n\t\t\tend,\n\t\t\tNextCursor2 = max(NextCursor, Cursor + ?DATA_CHUNK_SIZE),\n\t\t\tlog_debug(skipping_cursor, State, [\n\t\t\t\t{next_cursor, NextCursor2},\n\t\t\t\t{cursor, Cursor},\n\t\t\t\t{footprint_start, FootprintStart},\n\t\t\t\t{footprint_end, FootprintEnd},\n\t\t\t\t{footprint_offsets, length(FootprintOffsets)}\n\t\t\t] ++ Logs),\n\t\t\tState#state{ next_cursor = NextCursor2 };\n\t\ttrue ->\n\t\t\tState2 = State#state{ \n\t\t\t\tfootprint_start = FootprintStart,\n\t\t\t\tfootprint_end = FootprintEnd,\n\t\t\t\tnext_cursor = Cursor + ?DATA_CHUNK_SIZE\n\t\t\t},\n\n\t\t\t{_, EntropyEnd, _} = get_read_range(BucketEndOffset, State2),\n\t\t\tState3 = State2#state{\n\t\t\t\tentropy_end = ar_chunk_storage:get_chunk_bucket_end(EntropyEnd)\n\t\t\t},\n\t\t\tState4 = init_repack_chunk_map(FootprintOffsets, State3),\n\n\t\t\tMaxChunkMapOffset = lists:max(maps:keys(State4#state.repack_chunk_map)),\n\t\t\t\n\t\t\tlog_info(repack_footprint_start, State4, [\n\t\t\t\t{cursor, Cursor},\n\t\t\t\t{bucket_end_offset, BucketEndOffset},\n\t\t\t\t{source_packing, ar_serialize:encode_packing(SourcePacking, false)},\n\t\t\t\t{target_packing, ar_serialize:encode_packing(TargetPacking, false)},\n\t\t\t\t{entropy_end, EntropyEnd},\n\t\t\t\t{read_batch_size, BatchSize},\n\t\t\t\t{write_batch_size, State4#state.write_batch_size},\n\t\t\t\t{num_entropy_offsets, NumEntropyOffsets},\n\t\t\t\t{footprint_offsets, length(FootprintOffsets)},\n\t\t\t\t{max_chunk_map_offset, MaxChunkMapOffset}\n\t\t\t]),\n\n\t\t\t%% sanity checks\n\t\t\ttrue = MaxChunkMapOffset =< FootprintEnd,\n\t\t\ttrue = EntropyEnd =< FootprintEnd,\n\t\t\ttrue = FootprintEnd =< ModuleEnd,\n\t\t\t%% end sanity checks\n\n\t\t\t%% We'll generate BatchSize entropy footprints, one for each bucket end offset\n\t\t\t%% starting at BucketEndOffset and ending at EntropyEnd.\n\t\t\tgenerate_repack_entropy(BucketEndOffset, SourcePacking, State4),\n\t\t\tgenerate_repack_entropy(BucketEndOffset, TargetPacking, State4),\n\n\t\t\tar_repack_io:read_footprint(\n\t\t\t\tFootprintOffsets, FootprintStart, FootprintEnd, StoreID),\n\n\t\t\tState4\n\tend.\n\nshould_repack(Cursor, FootprintStart, FootprintEnd, State) ->\n\t#state{ module_start = ModuleStart, module_end = ModuleEnd,\n\t\ttarget_packing = TargetPacking, store_id = StoreID } = State,\n\tPaddedEndOffset = ar_block:get_chunk_padded_offset(Cursor),\n\tIsChunkRecorded = ar_sync_record:is_recorded(PaddedEndOffset, ar_data_sync, StoreID),\n\tIsEntropyRecorded = ar_entropy_storage:is_entropy_recorded(\n\t\tPaddedEndOffset, TargetPacking, StoreID),\n\t%% Skip this offset if it's already packed to TargetPacking, or if it's not recorded\n\t%% at all.\n\tSkip = case {IsChunkRecorded, IsEntropyRecorded} of\n\t\t%% Chunk is missing and we haven't written entropy yet, so we still want to process\n\t\t%% the bucket and write entropy to it.\n\t\t{false, false} -> false;\n\t\t%% Chunk is missing but entropy has already been written, so we can skip.\n\t\t{false, true} -> true;\n\t\t%% Skip if chunk is recorded and already packed to TargetPacking\n\t\t{{true, TargetPacking}, _} -> true;\n\t\t%% Skip if entropy exists for an unpacked chunk as 
this indicates the chunks either:\n\t\t%% 1. are small and therefore can't be packed, or\n\t\t%% 2. have already been processed and classified as `entropy_only`\n\t\t{{true, unpacked}, true} -> true;\n\t\t_ -> false\n\tend,\n\n\tShouldRepack = (\n\t\tnot Skip \n\t\tandalso FootprintStart =< ModuleEnd\n\t\tandalso FootprintEnd >= ModuleStart\n\t),\n\tcase ShouldRepack of\n\t\tfalse ->\n\t\t\tLogs = [\n\t\t\t\t{cursor, Cursor},\n\t\t\t\t{padded_end_offset, PaddedEndOffset},\n\t\t\t\t{is_chunk_recorded, IsChunkRecorded},\n\t\t\t\t{is_entropy_recorded, IsEntropyRecorded},\n\t\t\t\t{skip, Skip}\n\t\t\t],\n\t\t\t{false, Logs};\n\t\t_ ->\n\t\t\ttrue\n\tend.\n\n%% @doc Generates the set of entropy offsets that will be used during one iteration of\n%% repack_footprint. Expects to be called with a BucketEndOffset. This is to avoid\n%% unexpected filtering results when a BucketEndOffset is lower than a PickOffset or\n%% an AbsoluteEndOffset.\n%% \n%% One footprint of entropy offsets is generated and then filtered such that:\n%% - no offset is less than BucketEndOffset\n%% - no offset is greater than ModuleEnd\n%% - at most NumEntropyOffsets offsets are returned\nfootprint_offsets(BucketEndOffset, NumEntropyOffsets, ModuleEnd) ->\n\t%% sanity checks\n\tBucketEndOffset = ar_chunk_storage:get_chunk_bucket_end(BucketEndOffset),\n\t%% end sanity checks\n\t\n\tEntropyOffsets = ar_entropy_gen:entropy_offsets(BucketEndOffset, ModuleEnd),\n\n\tFilteredOffsets = lists:filter(\n\t\tfun(Offset) -> Offset >= BucketEndOffset end,\n\t\tEntropyOffsets),\n\t\n\tlists:sublist(FilteredOffsets, NumEntropyOffsets).\n\n%% @doc Calculates and returns the highest chunk offset that can be read for this \n%% repack footprint. This is the highest chunk offset that maps to the highest bucket in\n%% the footprint.\nfootprint_end(FootprintOffsets, ModuleEnd, BatchSize) ->\n\tFirstOffset = lists:min(FootprintOffsets),\n\tLastOffset = lists:max(FootprintOffsets),\n\t%% The final read range of the footprint starts at the last entropy offset\n\t{_, LastOffsetRangeEnd, _} = get_read_range(LastOffset, ModuleEnd, BatchSize),\n\n\t%% makes sure all offsets are in the same entropy partition\n\tPartition = ar_replica_2_9:get_entropy_partition(FirstOffset),\n\t{_, EntropyPartitionEnd} = ar_replica_2_9:get_entropy_partition_range(Partition),\n\n\tmin(LastOffsetRangeEnd, EntropyPartitionEnd).\n\ngenerate_repack_entropy(BucketEndOffset, {replica_2_9, _}, #state{ entropy_end = EntropyEnd })\n\t\twhen BucketEndOffset > EntropyEnd ->\n\tok;\ngenerate_repack_entropy(BucketEndOffset, {replica_2_9, RewardAddr}, #state{} = State) ->\n\t#state{ \n\t\tstore_id = StoreID\n\t} = State,\n\n\tar_entropy_gen:generate_entropies(StoreID, RewardAddr, BucketEndOffset, self());\ngenerate_repack_entropy(_BucketEndOffset, _Packing, #state{}) ->\n\t%% Only generate entropy for the replica.2.9 packing format.\n\tok.\n\ninit_repack_chunk_map([], #state{} = State) ->\n\tState;\ninit_repack_chunk_map([EntropyOffset | EntropyOffsets], #state{} = State) ->\n\t#state{ \n\t\trepack_chunk_map = Map,\n\t\tconfigured_packing = SourcePacking,\n\t\ttarget_packing = TargetPacking\n\t} = State,\n\n\t{_ReadRangeStart, _ReadRangeEnd, ReadRangeOffsets} = get_read_range(\n\t\tEntropyOffset, State),\n\n\tMap2 = lists:foldl(\n\t\tfun(BucketEndOffset, Acc) ->\n\t\t\tfalse = maps:is_key(BucketEndOffset, Acc),\n\t\t\tSourceEntropy = case SourcePacking of\n\t\t\t\t{replica_2_9, _} ->\n\t\t\t\t\tnot_set;\n\t\t\t\t_ ->\n\t\t\t\t\t%% Setting to <<>> indicates that source entropy is not 
needed.\n\t\t\t\t\t<<>>\n\t\t\tend,\n\t\t\tTargetEntropy = case TargetPacking of\n\t\t\t\t{replica_2_9, _} ->\n\t\t\t\t\tnot_set;\n\t\t\t\t_ ->\n\t\t\t\t\t%% Setting to <<>> indicates that target entropy is not needed.\n\t\t\t\t\t<<>>\n\t\t\tend,\n\n\t\t\tRepackChunk = #repack_chunk{\n\t\t\t\toffsets = #chunk_offsets{\n\t\t\t\t\tbucket_end_offset = BucketEndOffset\n\t\t\t\t},\n\t\t\t\ttarget_packing = TargetPacking,\n\t\t\t\tsource_entropy = SourceEntropy,\n\t\t\t\ttarget_entropy = TargetEntropy\n\t\t\t},\n\t\t\tmaps:put(BucketEndOffset, RepackChunk, Acc)\n\t\tend,\n\tMap, ReadRangeOffsets),\n\n\t%% sanity checks\n\ttrue = maps:size(Map2) == maps:size(Map) + length(ReadRangeOffsets),\n\t%% end sanity checks\n\n\tinit_repack_chunk_map(EntropyOffsets, State#state{ repack_chunk_map = Map2 }).\n\nadd_range_to_repack_chunk_map(OffsetChunkMap, OffsetMetadataMap, #state{} = State) ->\n\t#state{\n\t\tstore_id = StoreID,\n\t\tconfigured_packing = ConfiguredPacking,\n\t\ttarget_packing = TargetPacking\n\t} = State,\n\t\n\tmaps:fold(\n\t\tfun(AbsoluteEndOffset, Metadata, Acc) ->\n\t\t\t#state{\n\t\t\t\trepack_chunk_map = RepackChunkMap\n\t\t\t} = Acc,\n\n\t\t\tBucketEndOffset = ar_chunk_storage:get_chunk_bucket_end(AbsoluteEndOffset),\n\t\t\tRepackChunk =  maps:get(BucketEndOffset, RepackChunkMap, not_found),\n\t\t\tRepackChunk2 = assemble_repack_chunk(\n\t\t\t\tRepackChunk, AbsoluteEndOffset, TargetPacking, \n\t\t\t\tMetadata, OffsetChunkMap, ConfiguredPacking, StoreID),\n\n\t\t\tcase RepackChunk2 of\n\t\t\t\tnot_found ->\n\t\t\t\t\tAcc;\n\t\t\t\t_ ->\n\t\t\t\t\tupdate_chunk_state(RepackChunk2, Acc)\n\t\t\tend\n\t\tend,\n\t\tState, OffsetMetadataMap).\n\nassemble_repack_chunk(\n\t\tRepackChunk, AbsoluteEndOffset, TargetPacking, Metadata, OffsetChunkMap,\n\t\tConfiguredPacking, StoreID) ->\n\t{ChunkDataKey, TXRoot, DataRoot, TXPath, RelativeOffset, ChunkSize} = Metadata,\n\n\tBucketEndOffset = ar_chunk_storage:get_chunk_bucket_end(AbsoluteEndOffset),\n\tPaddedEndOffset = ar_block:get_chunk_padded_offset(AbsoluteEndOffset),\n\n\tSourcePacking = get_chunk_packing(PaddedEndOffset, ConfiguredPacking, StoreID),\n\n\tShouldRepack = (\n\t\tar_chunk_storage:is_storage_supported(\n\t\t\tPaddedEndOffset, ChunkSize, SourcePacking)\n\t\torelse\n\t\tar_chunk_storage:is_storage_supported(\n\t\t\tPaddedEndOffset, ChunkSize, TargetPacking)\n\t),\n\n\tcase {ShouldRepack, RepackChunk} of\n\t\t{true, not_found} ->\n\t\t\tlog_error(chunk_not_found_in_map, [\n\t\t\t\t{bucket_end_offset, ar_chunk_storage:get_chunk_bucket_end(AbsoluteEndOffset)},\n\t\t\t\t{absolute_end_offset, AbsoluteEndOffset},\n\t\t\t\t{padded_end_offset, ar_block:get_chunk_padded_offset(AbsoluteEndOffset)},\n\t\t\t\t{chunk_size, ChunkSize}\n\t\t\t]),\n\t\t\tnot_found;\n\t\t{true, _} ->\n\t\t\tRepackChunk#repack_chunk{\n\t\t\t\tsource_packing = SourcePacking,\n\t\t\t\toffsets = #chunk_offsets{\n\t\t\t\t\tabsolute_offset = AbsoluteEndOffset,\n\t\t\t\t\tbucket_end_offset = BucketEndOffset,\n\t\t\t\t\tpadded_end_offset = PaddedEndOffset,\n\t\t\t\t\trelative_offset = RelativeOffset\n\t\t\t\t},\n\t\t\t\tmetadata = #chunk_metadata{\n\t\t\t\t\tchunk_data_key = ChunkDataKey,\n\t\t\t\t\ttx_root = TXRoot,\n\t\t\t\t\tdata_root = DataRoot,\n\t\t\t\t\ttx_path = TXPath,\n\t\t\t\t\tchunk_size = ChunkSize\n\t\t\t\t},\n\t\t\t\tchunk = maps:get(PaddedEndOffset, OffsetChunkMap, not_found)\n\t\t\t};\n\t\t{false, _} ->\n\t\t\tnot_found\n\tend.\n\nget_chunk_packing(PaddedEndOffset, ConfiguredPacking, StoreID) ->\n\tHasConfiguredPacking = 
ar_sync_record:is_recorded(\n\t\tPaddedEndOffset, ConfiguredPacking, ar_data_sync, StoreID),\n\tcase HasConfiguredPacking of\n\t\ttrue -> ConfiguredPacking;\n\t\t_ ->\n\t\t\tcase ar_sync_record:is_recorded(PaddedEndOffset, ar_data_sync, StoreID) of\n\t\t\t\t{true, Packing} -> Packing;\n\t\t\t\t_ -> not_found\n\t\t\tend\n\tend.\n\n%% @doc Mark any chunks that weren't found in either chunk_storage or the chunks_index.\nmark_missing_chunks([], #state{} = State) ->\n\tState;\nmark_missing_chunks([BucketEndOffset | ReadRangeOffsets], #state{} = State) ->\n\t#state{ \n\t\trepack_chunk_map = Map\n\t} = State,\n\t\n\tRepackChunk = maps:get(BucketEndOffset, Map, not_found),\n\n\tState2 = case RepackChunk of\n\t\tnot_found ->\n\t\t\tState;\n\t\t#repack_chunk{state = needs_chunk} ->\n\t\t\t%% If we're here and still in the needs_chunk state it means we weren't able\n\t\t\t%% to find the chunk in chunk_storage or the chunks_index.\n\t\t\tRepackChunk2 = RepackChunk#repack_chunk{\n\t\t\t\tchunk = not_found,\n\t\t\t\tmetadata = not_found\n\t\t\t},\n\t\t\tupdate_chunk_state(RepackChunk2, State);\n\t\t_ ->\n\t\t\tState\n\tend,\n\n\tmark_missing_chunks(ReadRangeOffsets, State2).\n\ncache_repack_chunk(RepackChunk, #state{} = State) ->\n\t#repack_chunk{\n\t\toffsets = #chunk_offsets{\n\t\t\tbucket_end_offset = BucketEndOffset\n\t\t}\n\t} = RepackChunk,\n\tState#state{ repack_chunk_map =\n\t\tmaps:put(BucketEndOffset, RepackChunk, State#state.repack_chunk_map)\n\t}.\n\nremove_repack_chunk(BucketEndOffset, #state{} = State) ->\n\tState2 = State#state{ repack_chunk_map =\n\t\tmaps:remove(BucketEndOffset, State#state.repack_chunk_map)\n\t},\n\tmaybe_repack_next_footprint(State2).\n\nenqueue_chunk_for_writing(RepackChunk, #state{} = State) ->\n\t#state{\n\t\ttarget_packing = TargetPacking,\n\t\tstore_id = StoreID\n\t} = State,\n\t#repack_chunk{\n\t\toffsets = #chunk_offsets{\n\t\t\tbucket_end_offset = BucketEndOffset\n\t\t}\n\t} = RepackChunk,\n\tState2 = State#state{\n\t\twrite_queue = gb_sets:add_element(\n\t\t\t{BucketEndOffset, RepackChunk}, State#state.write_queue)\n\t},\n\n\tcase gb_sets:size(State2#state.write_queue) >= State2#state.write_batch_size of\n\t\ttrue ->\n\t\t\tcount_states(queue, State2),\n\t\t\tar_repack_io:write_queue(State2#state.write_queue, TargetPacking, StoreID),\n\t\t\tState2#state{ write_queue = gb_sets:new() };\n\t\tfalse ->\n\t\t\tState2\n\tend.\n\nentropy_generated(Entropy, BucketEndOffset, RewardAddr, #state{} = State) ->\n\t#state{\n\t\trepack_chunk_map = Map,\n\t\tconfigured_packing = SourcePacking,\n\t\ttarget_packing = TargetPacking\n\t} = State,\n\n\tcase maps:get(BucketEndOffset, Map, not_found) of\n\t\tnot_found ->\n\t\t\t%% This should never happen.\n\t\t\tlog_error(entropy_generated_chunk_not_found, State, [\n\t\t\t\t{bucket_end_offset, BucketEndOffset}\n\t\t\t]),\n\t\t\tState;\n\t\tRepackChunk ->\n\t\t\tRepackChunk2 = case {replica_2_9, RewardAddr} of\n\t\t\t\tTargetPacking ->\n\t\t\t\t\tRepackChunk#repack_chunk{\n\t\t\t\t\t\ttarget_entropy = Entropy\n\t\t\t\t\t};\n\t\t\t\tSourcePacking ->\n\t\t\t\t\tRepackChunk#repack_chunk{\n\t\t\t\t\t\tsource_entropy = Entropy\n\t\t\t\t\t}\n\t\t\tend,\n\t\t\tupdate_chunk_state(RepackChunk2, State)\n\tend.\n\nmaybe_repack_next_footprint(#state{} = State) ->\n\t#state{ \n\t\trepack_chunk_map = Map,\n\t\twrite_queue = WriteQueue,\n\t\ttarget_packing = TargetPacking,\n\t\tstore_id = StoreID\n\t} = State,\n\tcase maps:size(Map) of\n\t\t0 ->\n\t\t\tcount_states(queue, State),\n\t\t\tar_repack_io:write_queue(WriteQueue, TargetPacking, 
StoreID),\n\t\t\tState2 = State#state{ write_queue = gb_sets:new() },\n\t\t\tgen_server:cast(self(), repack),\n\t\t\tState2;\n\t\t_ ->\n\t\t\tState\n\tend.\n\nread_chunk_and_data_path(RepackChunk, #state{} = State) ->\n\t#state{\n\t\tstore_id = StoreID\n\t} = State,\n\t#repack_chunk{\n\t\tmetadata = Metadata,\n\t\tchunk = MaybeChunk\n\t} = RepackChunk,\n\t#chunk_metadata{\n\t\tchunk_data_key = ChunkDataKey\n\t} = Metadata,\n\tcase ar_data_sync:get_chunk_data(ChunkDataKey, StoreID) of\n\t\tnot_found ->\n\t\t\tlog_warning(chunk_not_found_in_chunk_data_db, RepackChunk, State, []),\n\t\t\tRepackChunk#repack_chunk{ \n\t\t\t\tmetadata = Metadata#chunk_metadata{ data_path = not_found } };\n\t\t{ok, V} ->\n\t\t\tcase binary_to_term(V, [safe]) of\n\t\t\t\t{Chunk, DataPath} ->\n\t\t\t\t\tRepackChunk#repack_chunk{ \n\t\t\t\t\t\tmetadata = Metadata#chunk_metadata{ data_path = DataPath },\n\t\t\t\t\t\tchunk = Chunk\n\t\t\t\t\t};\n\t\t\t\tDataPath when MaybeChunk /= not_found ->\n\t\t\t\t\tRepackChunk#repack_chunk{ \n\t\t\t\t\t\tmetadata = Metadata#chunk_metadata{ data_path = DataPath },\n\t\t\t\t\t\tchunk = MaybeChunk\n\t\t\t\t\t};\n\t\t\t\t_ ->\n\t\t\t\t\tlog_warning(chunk_not_found, RepackChunk, State, []),\n\t\t\t\t\tRepackChunk#repack_chunk{ \n\t\t\t\t\t\tmetadata = Metadata#chunk_metadata{ data_path = not_found }\n\t\t\t\t\t}\n\t\t\tend\n\tend.\n\nupdate_chunk_state(RepackChunk, #state{} = State) ->\n\tRepackChunk2 = ar_repack_fsm:crank_state(RepackChunk),\n\n\tcase RepackChunk == RepackChunk2 of\n\t\ttrue ->\n\t\t\t%% Cache it anyways, just in case.\n\t\t\tcache_repack_chunk(RepackChunk2, State);\n\t\tfalse ->\n\t\t\tprocess_state_change(RepackChunk2, State)\n\tend.\n\nprocess_state_change(RepackChunk, #state{} = State) ->\n\t#state{\n\t\tstore_id = StoreID,\n\t\tfootprint_start = FootprintStart\n\t} = State,\n\t#repack_chunk{\n\t\toffsets = #chunk_offsets{\n\t\t\tbucket_end_offset = BucketEndOffset,\n\t\t\tabsolute_offset = AbsoluteEndOffset\n\t\t},\n\t\tchunk = Chunk\n\t} = RepackChunk,\n\n\tcase RepackChunk#repack_chunk.state of\n\t\tinvalid ->\n\t\t\tChunkSize = RepackChunk#repack_chunk.metadata#chunk_metadata.chunk_size,\n\t\t\tar_data_sync:invalidate_bad_data_record(\n\t\t\t\tAbsoluteEndOffset, ChunkSize, StoreID, repack_found_stale_indices),\n\t\t\tRepackChunk2 = RepackChunk#repack_chunk{ chunk = invalid },\n\t\t\tState2 = cache_repack_chunk(RepackChunk2, State),\n\t\t\tupdate_chunk_state(RepackChunk2, State2);\n\t\talready_repacked ->\n\t\t\t%% Remove the chunk to free up memory. If we're in the already_repacked state\n\t\t\t%% it means the entropy hasn't been set yet. Once it's set we'll transition to\n\t\t\t%% the ignore state and the RepackChunk will be removed from the cache.\n\t\t\tRepackChunk2 = RepackChunk#repack_chunk{ chunk = <<>> },\n\t\t\tcache_repack_chunk(RepackChunk2, State);\n\t\tneeds_data_path ->\n\t\t\tRepackChunk2 = read_chunk_and_data_path(RepackChunk, State),\n\t\t\tState2 = cache_repack_chunk(RepackChunk2, State),\n\t\t\tupdate_chunk_state(RepackChunk2, State2);\n\t\tneeds_repack ->\n\t\t\t%% Include BatchStart so that we don't accidentally expire a chunk from some\n\t\t\t%% future batch. 
Unlikely, but not impossible.\n\t\t\tChunkSize = RepackChunk#repack_chunk.metadata#chunk_metadata.chunk_size,\n\t\t\tTXRoot = RepackChunk#repack_chunk.metadata#chunk_metadata.tx_root,\n\t\t\tSourcePacking = RepackChunk#repack_chunk.source_packing,\n\t\t\tTargetPacking = RepackChunk#repack_chunk.target_packing,\n\n\t\t\tPacking = case TargetPacking of\n\t\t\t\t{replica_2_9, _} -> unpacked_padded;\n\t\t\t\t_ -> TargetPacking\n\t\t\tend,\n\n\t\t\tar_packing_server:request_repack({BucketEndOffset, FootprintStart}, self(),\n\t\t\t\t{Packing, SourcePacking, Chunk, AbsoluteEndOffset, TXRoot, ChunkSize}),\n\t\t\tcache_repack_chunk(RepackChunk, State);\n\t\tneeds_decipher ->\n\t\t\t%% We now have the unpacked_padded chunk and the entropy, proceed\n\t\t\t%% with enciphering and storing the chunk.\n\t\t\tSourceEntropy = RepackChunk#repack_chunk.source_entropy,\n\t\t\tar_packing_server:request_decipher(\n\t\t\t\t{BucketEndOffset, FootprintStart}, self(), {Chunk, SourceEntropy}),\n\t\t\tcache_repack_chunk(RepackChunk, State);\n\t\tneeds_encipher ->\n\t\t\t%% We now have the unpacked_padded chunk and the entropy, proceed\n\t\t\t%% with enciphering and storing the chunk.\n\t\t\tTargetEntropy = RepackChunk#repack_chunk.target_entropy,\n\t\t\tar_packing_server:request_encipher(\n\t\t\t\t{BucketEndOffset, FootprintStart}, self(), {Chunk, TargetEntropy}),\n\t\t\tcache_repack_chunk(RepackChunk, State);\n\t\twrite_entropy ->\n\t\t\tState2 = enqueue_chunk_for_writing(RepackChunk, State),\n\t\t\tremove_repack_chunk(BucketEndOffset, State2);\n\t\twrite_chunk ->\n\t\t\tState2 = enqueue_chunk_for_writing(RepackChunk, State),\n\t\t\tremove_repack_chunk(BucketEndOffset, State2);\n\t\tignore ->\n\t\t\t%% Chunk was already_repacked.\n\t\t\tremove_repack_chunk(BucketEndOffset, State);\n\t\terror ->\n\t\t\t%% This should never happen.\n\t\t\tlog_error(invalid_repack_chunk_state, RepackChunk, State, []),\n\t\t\tremove_repack_chunk(BucketEndOffset, State);\n\t\t_ ->\n\t\t\t%% No action to take now, but since the chunk state changed, we need to update\n\t\t\t%% the cache.\n\t\t\tcache_repack_chunk(RepackChunk, State)\n\tend.\n\nexpire_exor_request(BucketEndOffset, State) ->\n\t#state{\n\t\trepack_chunk_map = Map\n\t} = State,\n\tcase maps:get(BucketEndOffset, Map, not_found) of\n\t\tnot_found ->\n\t\t\t%% Chunk has already been processed.\n\t\t\tState;\n\t\tRepackChunk ->\n\t\t\tlog_debug(exor_request_expired, RepackChunk, State, []),\n\t\t\tremove_repack_chunk(BucketEndOffset, State)\n\tend.\n\nread_cursor(StoreID, TargetPacking, ModuleStart) ->\n\tFilepath = ar_chunk_storage:get_filepath(\"repack_in_place_cursor2\", StoreID),\n\tDefaultCursor = case ModuleStart of\n\t\t0 -> 0;\n\t\t_ -> ModuleStart + 1\n\tend,\n\tcase file:read_file(Filepath) of\n\t\t{ok, Bin} ->\n\t\tcase catch binary_to_term(Bin, [safe]) of\n\t\t\t{Cursor, TargetPacking} when is_integer(Cursor) ->\n\t\t\t\tCursor;\n\t\t\t\t_ ->\n\t\t\t\t\tDefaultCursor\n\t\t\tend;\n\t\t_ ->\n\t\t\tDefaultCursor\n\tend.\n\nstore_cursor(#state{} = State) ->\n\tstore_cursor(State#state.next_cursor, State#state.store_id, State#state.target_packing).\nstore_cursor(none, _StoreID, _TargetPacking) ->\n\tok;\nstore_cursor(Cursor, StoreID, TargetPacking) ->\n\tFilepath = ar_chunk_storage:get_filepath(\"repack_in_place_cursor2\", StoreID),\n\tfile:write_file(Filepath, term_to_binary({Cursor, TargetPacking})).\n\nlog_error(Event, #repack_chunk{} = RepackChunk, #state{} = State, ExtraLogs) ->\n\t?LOG_ERROR(format_logs(Event, RepackChunk, State, ExtraLogs)).\n\nlog_error(Event, 
#state{} = State, ExtraLogs) ->\n\t?LOG_ERROR(format_logs(Event, State, ExtraLogs)).\n\nlog_error(Event, ExtraLogs) ->\n\t?LOG_ERROR(format_logs(Event, ExtraLogs)).\n\nlog_warning(Event, #repack_chunk{} = RepackChunk, #state{} = State, ExtraLogs) ->\n\t?LOG_WARNING(format_logs(Event, RepackChunk, State, ExtraLogs)).\n\nlog_warning(Event, #state{} = State, ExtraLogs) ->\n\t?LOG_WARNING(format_logs(Event, State, ExtraLogs)).\n\nlog_info(Event, #state{} = State, ExtraLogs) ->\n\t?LOG_INFO(format_logs(Event, State, ExtraLogs)).\n\nlog_debug(Event, #repack_chunk{} = RepackChunk, #state{} = State, ExtraLogs) ->\n\t?LOG_DEBUG(format_logs(Event, RepackChunk, State, ExtraLogs)).\n\t\nlog_debug(Event, #state{} = State, ExtraLogs) ->\n\t?LOG_DEBUG(format_logs(Event, State, ExtraLogs)).\n\nformat_logs(Event, ExtraLogs) ->\n\t[\n\t\t{event, Event},\n\t\t{tags, [repack_in_place]},\n\t\t{pid, self()}\n\t\t| ExtraLogs\n\t].\n\nformat_logs(Event, #state{} = State, ExtraLogs) ->\n\tformat_logs(Event, [\n\t\t{store_id, State#state.store_id},\n\t\t{next_cursor, State#state.next_cursor},\n\t\t{footprint_start, State#state.footprint_start},\n\t\t{footprint_end, State#state.footprint_end},\n\t\t{module_start, State#state.module_start},\n\t\t{module_end, State#state.module_end},\n\t\t{repack_chunk_map, maps:size(State#state.repack_chunk_map)},\n\t\t{write_queue, gb_sets:size(State#state.write_queue)}\n\t\t| ExtraLogs\n\t]).\n\nformat_logs(Event, #repack_chunk{} = RepackChunk, #state{} = State, ExtraLogs) ->\n\t#repack_chunk{\n\t\tstate = ChunkState,\n\t\toffsets = Offsets,\n\t\tmetadata = Metadata,\n\t\tchunk = Chunk,\n\t\ttarget_entropy = TargetEntropy,\n\t\tsource_entropy = SourceEntropy,\n\t\tsource_packing = SourcePacking,\n\t\ttarget_packing = TargetPacking\n\t} = RepackChunk,\n\t#chunk_offsets{\t\n\t\tabsolute_offset = AbsoluteOffset,\n\t\tbucket_end_offset = BucketEndOffset,\n\t\tpadded_end_offset = PaddedEndOffset\n\t} = Offsets,\n\tChunkSize = case Metadata of\n\t\t#chunk_metadata{chunk_size = Size} -> Size;\n\t\t_ -> Metadata\n\tend,\n\tformat_logs(Event, State, [\n\t\t{state, ChunkState},\n\t\t{bucket_end_offset, BucketEndOffset},\n\t\t{absolute_offset, AbsoluteOffset},\n\t\t{padded_end_offset, PaddedEndOffset},\n\t\t{chunk_size, ChunkSize},\n\t\t{chunk, atom_or_binary(Chunk)},\n\t\t{source_packing, ar_serialize:encode_packing(SourcePacking, false)},\n\t\t{target_packing, ar_serialize:encode_packing(TargetPacking, false)},\n\t\t{source_entropy, atom_or_binary(SourceEntropy)},\n\t\t{target_entropy, atom_or_binary(TargetEntropy)} | ExtraLogs\n\t]).\n\ncount_states(cache, #state{} = State) ->\n\t#state{\n\t\tstore_id = StoreID,\n\t\trepack_chunk_map = Map\n\t} = State,\n\tMapCount = maps:fold(\n\t\tfun(_BucketEndOffset, RepackChunk, Acc) ->\n\t\t\tmaps:update_with(RepackChunk#repack_chunk.state, fun(Count) -> Count + 1 end, 1, Acc)\n\t\tend,\n\t\t#{},\n\t\tMap\n\t),\n\tlog_debug(count_cache_states, State, [\n\t\t{cache_size, maps:size(Map)},\n\t\t{states, maps:to_list(MapCount)}\n\t]),\n\tStoreIDLabel = ar_storage_module:label(StoreID),\n\tmaps:fold(\n\t\tfun(ChunkState, Count, Acc) ->\n\t\t\tprometheus_gauge:set(repack_chunk_states, [StoreIDLabel, cache, ChunkState], Count),\n\t\t\tAcc\n\t\tend,\n\t\tok,\n\t\tMapCount\n\t);\ncount_states(queue, #state{} = State) ->\n\t#state{\n\t\tstore_id = StoreID,\n\t\twrite_queue = WriteQueue\n\t} = State,\n\tWriteQueueCount = gb_sets:fold(\n\t\tfun({_BucketEndOffset, RepackChunk}, Acc) ->\n\t\t\tmaps:update_with(RepackChunk#repack_chunk.state, fun(Count) -> Count + 
1 end, 1, Acc)\n\t\tend,\n\t\t#{},\n\t\tWriteQueue\n\t),\n\tlog_debug(count_write_queue_states, State, [\n\t\t{queue_size, gb_sets:size(WriteQueue)},\n\t\t{states, maps:to_list(WriteQueueCount)}\n\t]),\t\n\tStoreIDLabel = ar_storage_module:label(StoreID),\n\tmaps:fold(\n\t\tfun(ChunkState, Count, Acc) ->\n\t\t\tprometheus_gauge:set(repack_chunk_states, [StoreIDLabel, queue, ChunkState], Count),\n\t\t\tAcc\n\t\tend,\n\t\tok,\n\t\tWriteQueueCount\n\t).\n\natom_or_binary(Atom) when is_atom(Atom) -> Atom;\natom_or_binary(Bin) when is_binary(Bin) -> binary:part(Bin, {0, min(10, byte_size(Bin))}).\t\n\n%%%===================================================================\n%%% Tests.\n%%%===================================================================\n\ncache_size_test_() ->\n\tar_test_node:test_with_mocked_functions([\n\t\t{ar_block, get_sub_chunks_per_replica_2_9_entropy, fun() -> 3 end}\n\t],\n\tfun test_cache_size/0, 30).\n\ntest_cache_size() ->\n\t?assertEqual(1, calculate_num_entropy_offsets(100, 400)),\n\t?assertEqual(2, calculate_num_entropy_offsets(100, 200)),\n\t?assertEqual(3, calculate_num_entropy_offsets(300, 400)),\n\t?assertEqual(3, calculate_num_entropy_offsets(3000, 400)),\n\t?assertEqual(3, calculate_num_entropy_offsets(3, 4)),\n\t?assertEqual(3, calculate_num_entropy_offsets(3, 1)),\n\t?assertEqual(2, calculate_num_entropy_offsets(5, 10)).\n\t\nfootprint_offsets_test_() ->\n\t[\n\t\tar_test_node:test_with_mocked_functions([\n\t\t\t{ar_block, get_replica_2_9_entropy_sector_size, fun() -> 786432 end},\n\t\t\t{ar_block, get_replica_2_9_entropy_partition_size, fun() -> 2359296 end},\n\t\t\t{ar_block, get_sub_chunks_per_replica_2_9_entropy, fun() -> 3 end},\n\t\t\t{ar_block, strict_data_split_threshold, fun() -> 700_000 end}\n\t\t],\n\t\tfun test_footprint_offsets_small/0, 30),\n\n\t\t%% Run footprint_offsets tests using the production constant values.\n\t\tar_test_node:test_with_mocked_functions([\n\t\t\t{ar_block, partition_size, fun() -> 3_600_000_000_000 end},\n\t\t\t{ar_block, strict_data_split_threshold, fun() -> 30_607_159_107_830 end},\n\t\t\t{ar_storage_module, get_overlap, fun(_) -> 104_857_600 end},\n\t\t\t{ar_block, get_sub_chunks_per_replica_2_9_entropy, fun() -> 1024 end},\n\t\t\t{ar_block, get_replica_2_9_entropy_sector_size, fun() -> 3_515_875_328 end}\n\t\t],\n\t\tfun test_footprint_offsets_large/0, 30)\n\t].\n\ntest_footprint_offsets_small() ->\n    {Start0, End0} = ar_storage_module:module_range({ar_block:partition_size(), 0, unpacked}),\n\t{Start1, End1} = ar_storage_module:module_range({ar_block:partition_size(), 1, unpacked}),\n\tPaddedEnd0 = ar_block:get_chunk_padded_offset(End0),\n\tPaddedEnd1 = ar_block:get_chunk_padded_offset(End1),\n\n\t?assertEqual(3, ar_block:get_sub_chunks_per_replica_2_9_entropy()),\n\t?assertEqual({0, 2262144}, {Start0, End0}),\n\t?assertEqual({2000000, 4262144}, {Start1, End1}),\n\t?assertEqual(2272864, PaddedEnd0),\n\t?assertEqual(4370016, PaddedEnd1),\n\n\t?assertEqual([262144, 1048576, 1835008], footprint_offsets(262144, 3, PaddedEnd0)),\n\t?assertEqual([262144], footprint_offsets(262144, 3, 1_000_000)),\n\t?assertEqual([262144, 1048576], footprint_offsets(262144, 3, 1_500_000)),\n\t?assertEqual([262144, 1048576], footprint_offsets(262144, 2, PaddedEnd0)),\n\t?assertEqual([262144], footprint_offsets(262144, 1, PaddedEnd0)),\n\n\t?assertEqual([786432, 1572864], footprint_offsets(786432, 3, PaddedEnd0)),\n\t?assertEqual([786432, 1572864], footprint_offsets(786432, 2, PaddedEnd0)),\n\t?assertEqual([786432], 
footprint_offsets(786432, 1, PaddedEnd0)),\n\n\t?assertEqual([1048576, 1835008], footprint_offsets(1048576, 3, PaddedEnd0)),\n\n\t?assertEqual([1572864], footprint_offsets(1572864, 3, PaddedEnd0)),\n\t?assertEqual([1572864], footprint_offsets(1572864, 2, PaddedEnd0)),\n\t?assertEqual([1572864], footprint_offsets(1572864, 1, PaddedEnd0)),\n\t\n\t?assertEqual([1835008], footprint_offsets(1835008, 3, PaddedEnd0)),\n\t?assertEqual([2097152], footprint_offsets(2097152, 3, PaddedEnd0)),\n\t\n\t%% all offsets should be limited to a single entropy partition\n\t?assertEqual([2097152], footprint_offsets(2097152, 3, PaddedEnd1)),\n\n\t?assertEqual([2359296, 3145728, 3932160], footprint_offsets(2359296, 3, PaddedEnd1)),\n\t?assertEqual([2621440, 3407872, 4194304], footprint_offsets(2621440, 3, PaddedEnd1)),\n\t?assertEqual([2883584, 3670016], footprint_offsets(2883584, 3, PaddedEnd1)),\n\t?assertEqual([3145728, 3932160], footprint_offsets(3145728, 3, PaddedEnd1)),\n\t?assertEqual([4194304], footprint_offsets(4194304, 3, PaddedEnd1)).\n\n%% @doc run a series of footprint_offsets tests using the production constant values.\ntest_footprint_offsets_large() ->\n\t{Start0, End0} = ar_storage_module:module_range({ar_block:partition_size(), 0, unpacked}),\n\t{Start1, End1} = ar_storage_module:module_range({ar_block:partition_size(), 1, unpacked}),\n\t{Start30, End30} = ar_storage_module:module_range({ar_block:partition_size(), 30, unpacked}),\n\tPaddedEnd0 = ar_block:get_chunk_padded_offset(End0),\n\tPaddedEnd1 = ar_block:get_chunk_padded_offset(End1),\n\tPaddedEnd30 = ar_block:get_chunk_padded_offset(End30),\n\n\t?assertEqual(1024, ar_block:get_sub_chunks_per_replica_2_9_entropy()),\n\t?assertEqual(3515875328, ar_block:get_replica_2_9_entropy_sector_size()),\n\t?assertEqual({0, 3600104857600}, {Start0, End0}),\n\t?assertEqual({3600000000000, 7200104857600}, {Start1, End1}),\n\t?assertEqual({108000000000000, 111600104857600}, {Start30, End30}),\n\t?assertEqual(3600104857600, PaddedEnd0),\n\t?assertEqual(7200104857600, PaddedEnd1),\n\t?assertEqual(111600104939766, PaddedEnd30),\n\n\tTestCases = [\n\t\t%% {ExpectedFootprintOffsetsLength, End, BucketEndOffset}\n\t\t%% Partition 0 - special case as there is no lower partition\n\t\t{1024, PaddedEnd0, ar_chunk_storage:get_chunk_bucket_end(Start0)},\n\t\t{1024, PaddedEnd0, ar_chunk_storage:get_chunk_bucket_end(Start0 + ?DATA_CHUNK_SIZE)},\n\t\t{1024, PaddedEnd0, ar_chunk_storage:get_chunk_bucket_end(Start0 + (2 * ?DATA_CHUNK_SIZE))},\n\t\t{1023, PaddedEnd0, ar_chunk_storage:get_chunk_bucket_end(Start0 + (ar_block:get_replica_2_9_entropy_sector_size()))},\n\t\t{1023, PaddedEnd0, ar_chunk_storage:get_chunk_bucket_end(Start0 + (ar_block:get_replica_2_9_entropy_sector_size() + ?DATA_CHUNK_SIZE))},\n\t\t{1022, PaddedEnd0, ar_chunk_storage:get_chunk_bucket_end(Start0 + (2 * ar_block:get_replica_2_9_entropy_sector_size()))},\n\t\t{1022, PaddedEnd0, ar_chunk_storage:get_chunk_bucket_end(Start0 + (2 * ar_block:get_replica_2_9_entropy_sector_size() + ?DATA_CHUNK_SIZE))},\n\t\t%% Partition 1 - before the strict data split threshold\n\t\t{1, PaddedEnd1, ar_chunk_storage:get_chunk_bucket_end(Start1)},\n\t\t{1, PaddedEnd1, ar_chunk_storage:get_chunk_bucket_end(Start1 + ?DATA_CHUNK_SIZE)},\n\t\t{1024, PaddedEnd1, ar_chunk_storage:get_chunk_bucket_end(Start1 + (2 * ?DATA_CHUNK_SIZE))},\n\t\t{1023, PaddedEnd1, ar_chunk_storage:get_chunk_bucket_end(Start1 + (ar_block:get_replica_2_9_entropy_sector_size()))},\n\t\t{1023, PaddedEnd1, ar_chunk_storage:get_chunk_bucket_end(Start1 + 
(ar_block:get_replica_2_9_entropy_sector_size() + ?DATA_CHUNK_SIZE))},\n\t\t{1022, PaddedEnd1, ar_chunk_storage:get_chunk_bucket_end(Start1 + (2 * ar_block:get_replica_2_9_entropy_sector_size()))},\n\t\t{1022, PaddedEnd1, ar_chunk_storage:get_chunk_bucket_end(Start1 + (2 * ar_block:get_replica_2_9_entropy_sector_size() + ?DATA_CHUNK_SIZE))},\n\t\t%% Partition 30 - after the strict data split threshold\n\t\t{1, PaddedEnd30, ar_chunk_storage:get_chunk_bucket_end(Start30)},\n\t\t{1024, PaddedEnd30, ar_chunk_storage:get_chunk_bucket_end(Start30 + ?DATA_CHUNK_SIZE)},\n\t\t{1024, PaddedEnd30, ar_chunk_storage:get_chunk_bucket_end(Start30 + (2 * ?DATA_CHUNK_SIZE))},\n\t\t{1023, PaddedEnd30, ar_chunk_storage:get_chunk_bucket_end(Start30 + (ar_block:get_replica_2_9_entropy_sector_size()))},\n\t\t{1023, PaddedEnd30, ar_chunk_storage:get_chunk_bucket_end(Start30 + (ar_block:get_replica_2_9_entropy_sector_size() + ?DATA_CHUNK_SIZE))},\n\t\t{1022, PaddedEnd30, ar_chunk_storage:get_chunk_bucket_end(Start30 + (2 * ar_block:get_replica_2_9_entropy_sector_size()))},\n\t\t{1022, PaddedEnd30, ar_chunk_storage:get_chunk_bucket_end(Start30 + (2 * ar_block:get_replica_2_9_entropy_sector_size() + ?DATA_CHUNK_SIZE))}\n\t],\n\n\tlists:foreach(\n\t\tfun({ExpectedLength, End, BucketEndOffset}) ->\n\t\t\t?assertEqual(ExpectedLength, length(footprint_offsets(BucketEndOffset, 1024, End)),\n\t\t\t\tlists:flatten(io_lib:format(\n\t\t\t\t\t\"Offset: ~p, Expected Length: ~p\", [BucketEndOffset, ExpectedLength])))\n\t\tend,\n\t\tTestCases\n\t),\n\n\tok.\n\nfootprint_end_test_() ->\n\t[\n\t\tar_test_node:test_with_mocked_functions([\n\t\t\t{ar_block, get_replica_2_9_entropy_sector_size, fun() -> 786432 end},\n\t\t\t{ar_block, get_replica_2_9_entropy_partition_size, fun() -> 2359296 end},\n\t\t\t{ar_block, get_sub_chunks_per_replica_2_9_entropy, fun() -> 3 end},\n\t\t\t{ar_block, strict_data_split_threshold, fun() -> 700_000 end}\n\t\t],\n\t\tfun test_footprint_end_small/0, 30)\n\t].\n\ntest_footprint_end_small() ->\n\t{Start0, End0} = ar_storage_module:module_range({ar_block:partition_size(), 0, unpacked}),\n\t{Start1, End1} = ar_storage_module:module_range({ar_block:partition_size(), 1, unpacked}),\n\tPaddedEnd0 = ar_block:get_chunk_padded_offset(End0),\n\tPaddedEnd1 = ar_block:get_chunk_padded_offset(End1),\n\n\t?assertEqual(3, ar_block:get_sub_chunks_per_replica_2_9_entropy()),\n\t?assertEqual({0, 2262144}, {Start0, End0}),\n\t?assertEqual({2000000, 4262144}, {Start1, End1}),\n\t?assertEqual(2272864, PaddedEnd0),\n\t?assertEqual(4370016, PaddedEnd1),\n\t?assertEqual({0, 2272864}, ar_replica_2_9:get_entropy_partition_range(0)),\n\n\t?assertEqual(2010720,\n\t\tfootprint_end([262144, 1048576, 1835008], PaddedEnd0, 1)),\n\t?assertEqual(2272864,\n\t\tfootprint_end([262144, 1048576, 1835008], PaddedEnd0, 2)),\n\t?assertEqual(2272864,\n\t\tfootprint_end([262144, 1048576, 1835008], PaddedEnd0, 3)),\n\t?assertEqual(2272864,\n\t\tfootprint_end([262144, 1048576, 1835008], PaddedEnd1, 4)),\n\tok.\n\nassemble_repack_chunk_test_() ->\n[\n\tar_test_node:test_with_mocked_functions([\n\t\t\t{ar_sync_record, is_recorded, fun(_, _, _) -> {true, unpacked} end}\n\t\t],\n\t\tfun test_assemble_repack_chunk/0, 30),\n\tar_test_node:test_with_mocked_functions([\n\t\t\t{ar_sync_record, is_recorded, fun(_, _, _) -> {true, unpacked} end}\n\t\t],\n\t\tfun test_assemble_repack_chunk_too_small_unpacked/0, 30),\n\tar_test_node:test_with_mocked_functions([\n\t\t\t{ar_sync_record, is_recorded, fun(_, _, _) -> {true, {spora_2_6, <<\"addr\">>}} 
end}\n\t\t],\n\t\tfun test_assemble_repack_chunk_too_small_packed/0, 30)\n].\n\ntest_assemble_repack_chunk() ->\n\tAddr = <<\"addr\">>,\n\tStoreID = \"storage_module_100_unpacked\",\n\tChunkDataKey = <<\"chunk_data_key\">>,\n\tTXRoot = <<\"tx_root\">>,\n\tDataRoot = <<\"data_root\">>,\n\tTXPath = <<\"tx_path\">>,\n\tRelativeOffset = 1000,\n\tChunkSize = ?DATA_CHUNK_SIZE,\n\tChunk = crypto:strong_rand_bytes(ChunkSize),\n\n\tMetadata = {ChunkDataKey, TXRoot, DataRoot, TXPath, RelativeOffset, ChunkSize},\n\n\t% %% Error - BucketEndOffset hasn't been initialized\n\t?assertEqual(not_found,\n\t\tassemble_repack_chunk(not_found, 100, {replica_2_9, Addr}, Metadata, #{}, \n\t\tunpacked, StoreID)),\n\n\tExpectedRepackedChunk = #repack_chunk{\n\t\tsource_packing = unpacked,\n\t\tmetadata = #chunk_metadata{\n\t\t\tchunk_data_key = ChunkDataKey,\n\t\t\ttx_root = TXRoot,\n\t\t\tdata_root = DataRoot,\n\t\t\ttx_path = TXPath,\n\t\t\tchunk_size = ChunkSize\n\t\t},\n\t\tchunk = Chunk\n\t},\n\n\t%% Chunk before the strict data split threshold\n\t%% unpacked -> unpacked\n\tExpectedOffsets1 = #chunk_offsets{\n\t\tabsolute_offset = 100,\n\t\tbucket_end_offset = 262144,\n\t\tpadded_end_offset = 100,\n\t\trelative_offset = RelativeOffset\n\t},\n\t?assertEqual(\n\t\tExpectedRepackedChunk#repack_chunk{\n\t\t\toffsets = ExpectedOffsets1,\n\t\t\ttarget_packing = unpacked\n\t\t},\n\t\tassemble_repack_chunk(\n\t\t\t#repack_chunk{\n\t\t\t\ttarget_packing = unpacked\n\t\t\t}, 100, unpacked, Metadata, #{ 100 => Chunk }, unpacked, StoreID)\n\t),\n\t%% unpacked -> packed\n\t?assertEqual(\n\t\tExpectedRepackedChunk#repack_chunk{\n\t\t\toffsets = ExpectedOffsets1,\n\t\t\ttarget_packing = {replica_2_9, Addr}\n\t\t},\n\t\tassemble_repack_chunk(\n\t\t\t#repack_chunk{\n\t\t\t\ttarget_packing = {replica_2_9, Addr}\n\t\t\t}, 100, {replica_2_9, Addr}, Metadata, #{ 100 => Chunk }, unpacked, StoreID)\n\t),\n\n\t%% Chunk after the strict data split threshold\n\t%% unpacked -> unpacked\n\tExpectedOffsets2 = #chunk_offsets{\n\t\tabsolute_offset = 10_000_000,\n\t\tbucket_end_offset = 10_223_616,\n\t\tpadded_end_offset = 10_223_616,\n\t\trelative_offset = RelativeOffset\n\t},\n\t?assertEqual(\n\t\tExpectedRepackedChunk#repack_chunk{\n\t\t\toffsets = ExpectedOffsets2,\n\t\t\ttarget_packing = unpacked\n\t\t},\n\t\tassemble_repack_chunk(\n\t\t\t#repack_chunk{\n\t\t\t\ttarget_packing = unpacked\n\t\t\t}, 10_000_000, unpacked, Metadata, #{ 10_223_616 => Chunk }, unpacked, StoreID)\n\t),\n\t%% unpacked -> packed\n\t?assertEqual(\n\t\tExpectedRepackedChunk#repack_chunk{\n\t\t\toffsets = ExpectedOffsets2,\n\t\t\ttarget_packing = {replica_2_9, Addr}\n\t\t},\n\t\tassemble_repack_chunk(\n\t\t\t#repack_chunk{\n\t\t\t\ttarget_packing = {replica_2_9, Addr}\n\t\t\t}, 10_000_000, {replica_2_9, Addr}, Metadata, #{ 10_223_616 => Chunk }, \n\t\t\tunpacked, StoreID)\n\t),\n\tok.\n\ntest_assemble_repack_chunk_too_small_unpacked() ->\n\tAddr = <<\"addr\">>,\n\tStoreID = \"storage_module_100_unpacked\",\n\tChunkDataKey = <<\"chunk_data_key\">>,\n\tTXRoot = <<\"tx_root\">>,\n\tDataRoot = <<\"data_root\">>,\n\tTXPath = <<\"tx_path\">>,\n\tRelativeOffset = 1000,\n\tChunkSize = 100,\n\n\tMetadata = {ChunkDataKey, TXRoot, DataRoot, TXPath, RelativeOffset, ChunkSize},\n\n\t%% Small chunk before the strict data split threshold\n\t%% unpacked -> unpacked\n\t?assertEqual(not_found,\n\t\tassemble_repack_chunk(\n\t\t\t#repack_chunk{\n\t\t\t\ttarget_packing = unpacked\n\t\t\t}, 100, unpacked, Metadata, #{}, unpacked, StoreID)),\n\t%% unpacked -> 
packed\n\t?assertEqual(not_found,\n\t\tassemble_repack_chunk(\n\t\t\t#repack_chunk{\n\t\t\t\ttarget_packing = {replica_2_9, Addr}\n\t\t\t}, 100, {replica_2_9, Addr}, Metadata, #{}, unpacked, StoreID)),\n\n\t%% Small chunk after the strict data split threshold\n\t%% unpacked -> unpacked\n\t?assertEqual(not_found,\n\t\tassemble_repack_chunk(\n\t\t\t#repack_chunk{\n\t\t\t\ttarget_packing = unpacked\n\t\t\t}, 10_000_000, unpacked, Metadata, #{}, unpacked, StoreID)),\n\t%% unpacked -> packed\n\tExpectedRepackedChunk = #repack_chunk{\n\t\tsource_packing = unpacked,\n\t\ttarget_packing = {replica_2_9, Addr},\n\t\tmetadata = #chunk_metadata{\n\t\t\tchunk_data_key = ChunkDataKey,\n\t\t\ttx_root = TXRoot,\n\t\t\tdata_root = DataRoot,\n\t\t\ttx_path = TXPath,\n\t\t\tchunk_size = ChunkSize\n\t\t},\n\t\toffsets = #chunk_offsets{\n\t\t\tabsolute_offset = 10_000_000,\n\t\t\tbucket_end_offset = 10_223_616,\n\t\t\tpadded_end_offset = 10_223_616,\n\t\t\trelative_offset = RelativeOffset\n\t\t},\n\t\tchunk = not_found\n\t},\n\t?assertEqual(ExpectedRepackedChunk,\n\t\tassemble_repack_chunk(\n\t\t\t#repack_chunk{\n\t\t\t\ttarget_packing = {replica_2_9, Addr}\n\t\t\t}, 10_000_000, {replica_2_9, Addr}, Metadata, #{}, unpacked, StoreID)),\n\tok.\n\ntest_assemble_repack_chunk_too_small_packed() ->\n\tAddr = <<\"addr\">>,\n\tStoreID = \"storage_module_100_unpacked\",\n\tChunkDataKey = <<\"chunk_data_key\">>,\n\tTXRoot = <<\"tx_root\">>,\n\tDataRoot = <<\"data_root\">>,\n\tTXPath = <<\"tx_path\">>,\n\tRelativeOffset = 1000,\n\tChunkSize = 100,\n\n\tMetadata = {ChunkDataKey, TXRoot, DataRoot, TXPath, RelativeOffset, ChunkSize},\n\n\t%% Small chunk before the strict data split threshold\n\t%% packed -> unpacked\n\t?assertEqual(not_found,\n\t\tassemble_repack_chunk(\n\t\t\t#repack_chunk{\n\t\t\t\ttarget_packing = unpacked\n\t\t\t}, 100, unpacked, Metadata, #{}, {spora_2_6, <<\"addr\">>}, StoreID)),\n\t%% packed -> packed\n\t?assertEqual(not_found,\n\t\tassemble_repack_chunk(\n\t\t\t#repack_chunk{\n\t\t\t\ttarget_packing = {replica_2_9, Addr}\n\t\t\t}, 100, {replica_2_9, Addr}, Metadata, #{}, {spora_2_6, <<\"addr\">>}, StoreID)),\n\n\t%% Small chunk after the strict data split threshold\n\tExpectedRepackedChunk = #repack_chunk{\n\t\tsource_packing = {spora_2_6, Addr},\n\t\tmetadata = #chunk_metadata{\n\t\t\tchunk_data_key = ChunkDataKey,\n\t\t\ttx_root = TXRoot,\n\t\t\tdata_root = DataRoot,\n\t\t\ttx_path = TXPath,\n\t\t\tchunk_size = ChunkSize\n\t\t},\n\t\toffsets = #chunk_offsets{\n\t\t\tabsolute_offset = 10_000_000,\n\t\t\tbucket_end_offset = 10_223_616,\n\t\t\tpadded_end_offset = 10_223_616,\n\t\t\trelative_offset = RelativeOffset\n\t\t},\n\t\tchunk = not_found\n\t},\n\t%% packed -> unpacked\n\t?assertEqual(ExpectedRepackedChunk#repack_chunk{target_packing = unpacked},\n\t\tassemble_repack_chunk(\n\t\t\t#repack_chunk{\n\t\t\t\ttarget_packing = unpacked\n\t\t\t}, 10_000_000, unpacked, Metadata, #{}, {spora_2_6, <<\"addr\">>}, StoreID)),\n\t%% packed -> packed\n\t?assertEqual(ExpectedRepackedChunk#repack_chunk{target_packing = {replica_2_9, Addr}},\n\t\tassemble_repack_chunk(\n\t\t\t#repack_chunk{\n\t\t\t\ttarget_packing = {replica_2_9, Addr}\n\t\t\t}, 10_000_000, {replica_2_9, Addr}, Metadata, #{},\n\t\t\t{spora_2_6, <<\"addr\">>}, StoreID)),\n\tok.\n\nshould_repack_test_() ->\n\t[\n\t\tar_test_node:test_with_mocked_functions([\n\t\t\t{ar_block, strict_data_split_threshold, fun() -> 700_000 end},\n\t\t\t{ar_sync_record, is_recorded, fun(_, _, _) -> false end},\n\t\t\t{ar_entropy_storage, is_entropy_recorded, fun(_, _, 
_) -> false end}\n\t\t],\n\t\tfun test_should_repack_no_chunk_no_entropy/0, 30),\n\t\tar_test_node:test_with_mocked_functions([\n\t\t\t{ar_block, strict_data_split_threshold, fun() -> 700_000 end},\n\t\t\t{ar_sync_record, is_recorded, fun(_, _, _) -> {true, {replica_2_9, <<\"addr\">>}} end},\n\t\t\t{ar_entropy_storage, is_entropy_recorded, fun(_, _, _) -> true end}\n\t\t],\n\t\tfun test_should_repack_chunk_and_entropy/0, 30),\n\t\tar_test_node:test_with_mocked_functions([\n\t\t\t{ar_block, strict_data_split_threshold, fun() -> 700_000 end},\n\t\t\t{ar_sync_record, is_recorded, fun(_, _, _) -> false end},\n\t\t\t{ar_entropy_storage, is_entropy_recorded, fun(_, _, _) -> true end}\n\t\t],\n\t\tfun test_should_repack_entropy_but_no_chunk/0, 30),\n\t\tar_test_node:test_with_mocked_functions([\n\t\t\t{ar_block, strict_data_split_threshold, fun() -> 700_000 end},\n\t\t\t{ar_sync_record, is_recorded, fun(_, _, _) -> {true, unpacked} end},\n\t\t\t{ar_entropy_storage, is_entropy_recorded, fun(_, _, _) -> true end}\n\t\t],\n\t\tfun test_should_repack_unpacked_chunk_and_entropy/0, 30),\n\t\tar_test_node:test_with_mocked_functions([\n\t\t\t{ar_block, strict_data_split_threshold, fun() -> 700_000 end},\n\t\t\t{ar_sync_record, is_recorded, fun(_, _, _) -> {true, unpacked} end},\n\t\t\t{ar_entropy_storage, is_entropy_recorded, fun(_, _, _) -> false end}\n\t\t],\n\t\tfun test_should_repack_unpacked_chunk_no_entropy/0, 30)\n\t].\n\ntest_should_repack_no_chunk_no_entropy() ->\n\t%% No chunk exists to repack however we still want to process the bucket and write\n\t%% entropy to it.\n\t?assertEqual(true,\n\t\tshould_repack(600_000, 200_000, 300_000, #state{\n\t\t\tmodule_start = 100_000,\n\t\t\tmodule_end = 2_000_000\n\t\t})),\n\t?assertEqual({false, [\n\t\t\t{cursor, 600_000},\n\t\t\t{padded_end_offset, 600_000},\n\t\t\t{is_chunk_recorded, false},\n\t\t\t{is_entropy_recorded, false},\n\t\t\t{skip, false}\n\t\t]},\n\t\tshould_repack(600_000, 0, 50_000, #state{\n\t\t\tmodule_start = 100_000,\n\t\t\tmodule_end = 2_000_000,\n\t\t\ttarget_packing = {replica_2_9, <<\"addr\">>}\n\t\t})),\n\t?assertEqual({false, [\n\t\t\t{cursor, 600_000},\n\t\t\t{padded_end_offset, 600_000},\n\t\t\t{is_chunk_recorded, false},\n\t\t\t{is_entropy_recorded, false},\n\t\t\t{skip, false}\n\t\t]},\n\t\tshould_repack(600_000, 2_000_001, 3_000_000, #state{\n\t\t\tmodule_start = 100_000,\n\t\t\tmodule_end = 2_000_000,\n\t\t\ttarget_packing = {replica_2_9, <<\"addr\">>}\n\t\t})),\n\t?assertEqual(true,\n\t\tshould_repack(750_000, 200_000, 300_000, #state{\n\t\t\tmodule_start = 100_000,\n\t\t\tmodule_end = 2_000_000\n\t\t})).\n\ntest_should_repack_chunk_and_entropy() ->\n\t%% Chunk is already packed to the target packing\n\t?assertEqual({false, [\n\t\t\t{cursor, 600_000},\n\t\t\t{padded_end_offset, 600_000},\n\t\t\t{is_chunk_recorded, {true, {replica_2_9, <<\"addr\">>}}},\n\t\t\t{is_entropy_recorded, true},\n\t\t\t{skip, true}\n\t\t]},\n\t\tshould_repack(600_000, 200_000, 300_000, #state{\n\t\t\tmodule_start = 100_000,\n\t\t\tmodule_end = 2_000_000,\n\t\t\ttarget_packing = {replica_2_9, <<\"addr\">>}\n\t\t})),\n\t%% Chunk exists and needs repacking - but footprint start is beyond the end of the module\n\t?assertEqual({false, [\n\t\t\t{cursor, 600_000},\n\t\t\t{padded_end_offset, 600_000},\n\t\t\t{is_chunk_recorded, {true, {replica_2_9, <<\"addr\">>}}},\n\t\t\t{is_entropy_recorded, true},\n\t\t\t{skip, false}\n\t\t]},\n\t\tshould_repack(600_000, 2_000_001, 3_000_000, #state{\n\t\t\tmodule_start = 100_000,\n\t\t\tmodule_end = 
2_000_000,\n\t\t\ttarget_packing = {replica_2_9, <<\"addr2\">>}\n\t\t})),\n\t%% Chunk exists, needs repacking and falls within the module.\n\t?assertEqual(\n\t\ttrue, \n\t\tshould_repack(600_000, 200_000, 300_000, #state{\n\t\t\tmodule_start = 100_000,\n\t\t\tmodule_end = 2_000_000,\n\t\t\ttarget_packing = {replica_2_9, <<\"addr2\">>}\n\t\t})),\n\t?assertEqual({false, [\n\t\t\t{cursor, 600_000},\n\t\t\t{padded_end_offset, 600_000},\n\t\t\t{is_chunk_recorded, {true, {replica_2_9, <<\"addr\">>}}},\n\t\t\t{is_entropy_recorded, true},\n\t\t\t{skip, false}\n\t\t]},\n\t\tshould_repack(600_000, 0, 50_000, #state{\n\t\t\tmodule_start = 100_000,\n\t\t\tmodule_end = 2_000_000,\n\t\t\ttarget_packing = {replica_2_9, <<\"addr2\">>}\n\t\t})),\n\t?assertEqual({false, [\n\t\t\t{cursor, 600_000},\n\t\t\t{padded_end_offset, 600_000},\n\t\t\t{is_chunk_recorded, {true, {replica_2_9, <<\"addr\">>}}},\n\t\t\t{is_entropy_recorded, true},\n\t\t\t{skip, false}\n\t\t]},\n\t\tshould_repack(600_000, 2_000_001, 3_000_000, #state{\n\t\t\tmodule_start = 100_000,\n\t\t\tmodule_end = 2_000_000,\n\t\t\ttarget_packing = {replica_2_9, <<\"addr2\">>}\n\t\t})).\n\t\ntest_should_repack_entropy_but_no_chunk() ->\n\t%% Entropy exists which means this bucket has been processed, but there is no chunk\n\t%% to repack.\n\t?assertEqual({false, [\n\t\t{cursor, 600_000},\n\t\t{padded_end_offset, 600_000},\n\t\t{is_chunk_recorded, false},\n\t\t{is_entropy_recorded, true},\n\t\t{skip, true}\n\t]},\n\tshould_repack(600_000, 200_000, 300_000, #state{\n\t\tmodule_start = 100_000,\n\t\tmodule_end = 2_000_000,\n\t\ttarget_packing = {replica_2_9, <<\"addr\">>}\n\t})).\n\ntest_should_repack_unpacked_chunk_and_entropy() ->\n\t%% Unpacked chunk and entropy exist, which means:\n\t%% 1. this bucket has small chunks which can not be written to chunk storage.\n\t%% 2. 
this bucket has already been processed so we can skip\n\t?assertEqual({false, [\n\t\t{cursor, 600_000},\n\t\t{padded_end_offset, 600_000},\n\t\t{is_chunk_recorded, {true, unpacked}},\n\t\t{is_entropy_recorded, true},\n\t\t{skip, true}\n\t]},\n\tshould_repack(600_000, 200_000, 300_000, #state{\n\t\tmodule_start = 100_000,\n\t\tmodule_end = 2_000_000,\n\t\ttarget_packing = {replica_2_9, <<\"addr\">>}\n\t})).\n\ntest_should_repack_unpacked_chunk_no_entropy() ->\n\t%% Chunk is already packed to the target packing\n\t?assertEqual({false, [\n\t\t{cursor, 600_000},\n\t\t{padded_end_offset, 600_000},\n\t\t{is_chunk_recorded, {true, unpacked}},\n\t\t{is_entropy_recorded, false},\n\t\t{skip, true}\n\t]},\n\tshould_repack(600_000, 200_000, 300_000, #state{\n\t\tmodule_start = 100_000,\n\t\tmodule_end = 2_000_000,\n\t\ttarget_packing = unpacked\n\t})),\n\t%% Chunk exists, needs repacking and falls within the module.\n\t?assertEqual(\n\t\ttrue, \n\t\tshould_repack(600_000, 200_000, 300_000, #state{\n\t\t\tmodule_start = 100_000,\n\t\t\tmodule_end = 2_000_000,\n\t\t\ttarget_packing = {replica_2_9, <<\"addr\">>}\n\t\t})),\n\t?assertEqual({false, [\n\t\t\t{cursor, 600_000},\n\t\t\t{padded_end_offset, 600_000},\n\t\t\t{is_chunk_recorded, {true, unpacked}},\n\t\t\t{is_entropy_recorded, false},\n\t\t\t{skip, false}\n\t\t]},\n\t\tshould_repack(600_000, 0, 50_000, #state{\n\t\t\tmodule_start = 100_000,\n\t\t\tmodule_end = 2_000_000,\n\t\t\ttarget_packing = {replica_2_9, <<\"addr\">>}\n\t\t})),\n\t?assertEqual({false, [\n\t\t\t{cursor, 600_000},\n\t\t\t{padded_end_offset, 600_000},\n\t\t\t{is_chunk_recorded, {true, unpacked}},\n\t\t\t{is_entropy_recorded, false},\n\t\t\t{skip, false}\n\t\t]},\n\t\tshould_repack(600_000, 2_000_001, 3_000_000, #state{\n\t\t\tmodule_start = 100_000,\n\t\t\tmodule_end = 2_000_000,\n\t\t\ttarget_packing = {replica_2_9, <<\"addr\">>}\n\t\t})).\n\n\ninit_repack_chunk_map_test_() ->\n\t[\n\t\tar_test_node:test_with_mocked_functions(ar_test_node:mainnet_packing_mocks(),\n\t\t\tfun test_init_repack_chunk_map_a/0, 30),\n\t\tar_test_node:test_with_mocked_functions(ar_test_node:mainnet_packing_mocks(),\n\t\t\tfun test_init_repack_chunk_map_b/0, 30)\n\t].\n\n%% @doc This tests a specific off-by-one error that occurred in the footprint_end calculation.\n%% Previously there was an ar_entropy_gen:footprint_end function which was incorrect. 
The\n%% fix removes the ar_entropy_gen:footprint_end function and has everyone use \n%% ar_replica_2_9:get_entropy_partition_range instead, as that one does the correct end of\n%% range calculation.\n%% \n%% Keeping this test as it's an easy way to assert no future regressions in this logic.\ntest_init_repack_chunk_map_a() ->\n\tCursor = 18003250911837,\n\tModuleStart = 18000000000000,\n\tModuleEnd = 21600104857600,\n\tBatchSize = 100,\n\tBucketEndOffset = ar_chunk_storage:get_chunk_bucket_end(Cursor),\n\tBucketStartOffset = ar_chunk_storage:get_chunk_bucket_start(Cursor),\n\tFootprintOffsets = footprint_offsets(BucketEndOffset, 1024, ModuleEnd),\n\tFootprintStart = BucketStartOffset+1,\n\tFootprintEnd = footprint_end(FootprintOffsets, ModuleEnd, BatchSize),\n\n\tState = #state{\n\t\tmodule_start = ModuleStart,\n\t\tmodule_end = ModuleEnd,\n\t\tfootprint_start = FootprintStart,\n\t\tfootprint_end = FootprintEnd,\n\t\tread_batch_size = BatchSize,\n\t\trepack_chunk_map = #{},\n\t\ttarget_packing = {replica_2_9, <<\"addr\">>}\n\t},\n\n\tState2 = init_repack_chunk_map(FootprintOffsets, State),\n\t\n\t?assertEqual(102334, maps:size(State2#state.repack_chunk_map)),\n\tok.\n\ntest_init_repack_chunk_map_b() ->\n\tCursor = 21564833002875,\n\tNumEntropyOffsets = 1024,\n\tModuleStart = 18000000000000,\n\tModuleEnd = 21600104857600,\n\tBatchSize = 100,\n\tBucketEndOffset = ar_chunk_storage:get_chunk_bucket_end(Cursor),\n\tBucketStartOffset = ar_chunk_storage:get_chunk_bucket_start(Cursor),\n\tFootprintOffsets = footprint_offsets(BucketEndOffset, NumEntropyOffsets, ModuleEnd),\n\tFootprintStart = BucketStartOffset+1,\n\tFootprintEnd = footprint_end(FootprintOffsets, ModuleEnd, BatchSize),\n\t{_, EntropyEnd, _} = get_read_range(BucketEndOffset, FootprintEnd, BatchSize),\n\tEntropyEnd2 = ar_chunk_storage:get_chunk_bucket_end(EntropyEnd),\n\t{_ReadRangeStart, _ReadRangeEnd, ReadRangeOffsets} = get_read_range(\n\t\tBucketEndOffset, FootprintEnd, BatchSize),\n\tState = #state{\n\t\tmodule_start = ModuleStart,\n\t\tmodule_end = ModuleEnd,\n\t\tfootprint_start = FootprintStart,\n\t\tfootprint_end = FootprintEnd,\n\t\tread_batch_size = BatchSize,\n\t\trepack_chunk_map = #{},\n\t\ttarget_packing = {replica_2_9, <<\"addr\">>}\n\t},\n\tState2 = init_repack_chunk_map(FootprintOffsets, State),\n\n\tMaxChunkMap = lists:max(maps:keys(State2#state.repack_chunk_map)),\n\n\t?assertEqual(ar_chunk_storage:get_chunk_bucket_end(FootprintEnd), MaxChunkMap),\n\t?assertEqual(EntropyEnd2, lists:max(ReadRangeOffsets)),\n\tok.\n\t\nget_read_range_test_() ->\n\t[\n\t\tar_test_node:test_with_mocked_functions([\n\t\t\t{ar_block, get_replica_2_9_entropy_sector_size, fun() -> 786432 end},\n\t\t\t{ar_block, get_replica_2_9_entropy_partition_size, fun() -> 2359296 end},\n\t\t\t{ar_block, strict_data_split_threshold, fun() -> 5_000_000 end}\n\t\t],\n\t\t\tfun test_get_read_range_before_strict/0, 30),\n\t\tar_test_node:test_with_mocked_functions([\n\t\t\t{ar_block, get_replica_2_9_entropy_sector_size, fun() -> 786432 end},\n\t\t\t{ar_block, get_replica_2_9_entropy_partition_size, fun() -> 2359296 end},\n\t\t\t{ar_block, strict_data_split_threshold, fun() -> 700_000 end}\n\t\t],\n\t\t\tfun test_get_read_range_after_strict/0, 30)\n\t].\n\ntest_get_read_range_before_strict() ->\n\t?assertEqual({2359296, 4456447}, ar_replica_2_9:get_entropy_partition_range(1)),\n\t?assertEqual(786432, ar_block:get_replica_2_9_entropy_sector_size()),\n\t%% no limit\n\t?assertEqual(\n\t\t{2097151, 2883583, [2097152, 2359296, 
2621440]},\n\t\tget_read_range(2097152, 4_000_000, 3)\n\t),\n\t%% sector limit\n\t?assertEqual(\n\t\t{2359295, 3145727, [2359296, 2621440, 2883584]},\n\t\tget_read_range(2359296, 4_000_000, 4)\n\t),\n\t?assertEqual(\n\t\t{3407871, 3932159, [3407872, 3670016]},\n\t\tget_read_range(3407872, 4_000_000, 3)\n\t),\n\t%% range end limit\n\t?assertEqual(\n\t\t{2359295, 2700000, [2359296, 2621440]},\n\t\tget_read_range(2359296, 2_700_000, 4)\n\t),\n\t%% partition end limit\n\t?assertEqual(\n\t\t{3932159, 4456447, [3932160, 4194304]},\n\t\tget_read_range(3932160, 6_000_000, 3)\n\t),\n\tok.\n\ntest_get_read_range_after_strict() ->\n\t?assertEqual({2272865, 4370016}, ar_replica_2_9:get_entropy_partition_range(1)),\n\t?assertEqual(786432, ar_block:get_replica_2_9_entropy_sector_size()),\n\t%% no limit\n\t?assertEqual(\n\t\t{2272864, 3059296, [2359296, 2621440, 2883584]},\n\t\tget_read_range(2359296, 4_000_000, 3)\n\t),\n\t%% sector limit\n\t?assertEqual(\n\t\t{2272864, 3059296, [2359296, 2621440, 2883584]},\n\t\tget_read_range(2359296, 4_000_000, 4)\n\t),\n\t?assertEqual(\n\t\t{3321440, 3845728, [3407872, 3670016]},\n\t\tget_read_range(3407872, 4_000_000, 3)\n\t),\n\t%% range end limit\n\t?assertEqual(\n\t\t{2272864, 2700000, [2359296, 2621440]},\n\t\tget_read_range(2359296, 2_700_000, 4)\n\t),\n\t%% partition end limit\n\t?assertEqual(\n\t\t{3845728, 4370016, [3932160, 4194304]},\n\t\tget_read_range(3932160, 6_000_000, 3)\n\t),\n\tok.\n"
  },
  {
    "path": "apps/arweave/src/ar_repack_fsm.erl",
    "content": "-module(ar_repack_fsm).\n\n-export([crank_state/1]).\n\n-include(\"ar.hrl\").\n-include(\"ar_repack.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-moduledoc \"\"\"\n\tMaintain a finite state machine (FSM) to track the state each chunk passes through as\n\tit is repacked.\n\n\tState Transition Diagram:\n\n\tneeds_chunk\n\t\t|\n\t\t+----> invalid ----------> entropy_only \n\t\t|\n\t\t+----> entropy_only\n\t\t|\t\t|\n\t\t|\t\t+----> write_entropy (terminal)\n\t\t|\t\t|\n\t\t|\t\t+----> ignore\n\t\t|\n\t\t+----> already_repacked -> ignore (terminal)\n\t\t|\n\t\t+----> needs_data_path --> has_chunk\n\t\t|\n\t\t+----> has_chunk\n\t\t\t\t|\n\t\t\t\t+----> write_chunk (terminal)\n\t\t\t\t|\n\t\t\t\t+----> needs_repack ---------------------------> has_chunk\n\t\t\t\t|\n\t\t\t\t+----> needs_source_entropy -> needs_decipher -> has_chunk\n\t\t\t\t|\n\t\t\t\t+----> needs_target_entropy -> needs_encipher -> has_chunk\n\n\tStart State: needs_chunk\n\tTerminal States: write_chunk, write_entropy, ignore\n\n\tState Descriptions:\n\t- needs_chunk: Initial state, waiting to read chunk data and metadata\n\t- entropy_only: Chunk is too small or not found, only entropy will be recorded\n\t- invalid: Chunk is corrupt or inconsistent, will be invalidated\n\t- already_repacked: Chunk is already in target format\n\t- needs_data_path: Chunk not found on disk, checking chunk data db\n\t- has_chunk: Chunk has been read, decide what to do next\n\t- needs_repack: Repack between non-replica_2_9 formats\n\t- needs_source_entropy: Waiting for source entropy to be calculated\n\t- needs_decipher: Waiting for chunk to be deciphered from replica_2_9 to unpacked_padded\n\t- needs_target_entropy: Waiting for target entropy to be calculated\n\t- needs_encipher: Waiting for chunk to be enciphered from unpacked_padded to replica_2_9\n\t- write_chunk: Terminal state, chunk will be written\n\t- write_entropy: Terminal state, only entropy will be written\n\t- ignore: Terminal state, no action needed\n\"\"\".\n\n%% @doc: Repeatedly call next_state until the state no longer changes.\n-spec crank_state(#repack_chunk{}) -> #repack_chunk{}.\ncrank_state(RepackChunk) ->\n\tcrank_state(RepackChunk, next_state(RepackChunk)).\n\ncrank_state(RepackChunk, RepackChunk) ->\n\t%% State did not change, return the final state\n\tRepackChunk;\ncrank_state(_OldRepackChunk, NewRepackChunk) ->\n\t%% State has changed, continue cranking\n\tcrank_state(NewRepackChunk, next_state(NewRepackChunk)).\n\n%% ---------------------------------------------------------------------------\n%% State: needs_chunk\n%% ---------------------------------------------------------------------------\nnext_state(\n\t\t#repack_chunk{\n\t\t\tstate = needs_chunk, chunk = not_set, metadata = not_set\n\t\t} = RepackChunk) ->\n\tRepackChunk;\nnext_state(\n\t\t#repack_chunk{\n\t\t\tstate = needs_chunk, chunk = not_found, metadata = not_found\n\t\t} = RepackChunk) ->\n\t%% Chunk is not recorded in any index.\n\tNextState = entropy_only,\n\tRepackChunk#repack_chunk{state = NextState};\nnext_state(#repack_chunk{ state = needs_chunk, metadata = Metadata } = RepackChunk) \n\t\twhen Metadata == not_set orelse Metadata == not_found ->\n\t%% Metadata can not be empty unless chunk is also empty.\n\tlog_error(invalid_repack_fsm_transition, RepackChunk, []),\n\tRepackChunk#repack_chunk{state = error};\nnext_state(#repack_chunk{state = needs_chunk} = RepackChunk) ->\n\t#repack_chunk{\n\t\toffsets = Offsets,\n\t\tmetadata = Metadata,\n\t\tchunk = Chunk,\n\t\tsource_packing = 
SourcePacking,\n\t\ttarget_packing = TargetPacking\n\t} = RepackChunk,\n\t#chunk_metadata{\n\t\tchunk_size = ChunkSize\n\t} = Metadata,\n\t#chunk_offsets{\n\t\tabsolute_offset = AbsoluteEndOffset\n\t} = Offsets,\n\n\tIsTooSmall = (\n\t\tChunkSize /= ?DATA_CHUNK_SIZE andalso\n\t\tAbsoluteEndOffset =< ar_block:strict_data_split_threshold()\n\t),\n\n\tIsStorageSupported = ar_chunk_storage:is_storage_supported(\n\t\tAbsoluteEndOffset, ChunkSize, TargetPacking),\n\n\tNextState = case {IsTooSmall, SourcePacking, Chunk, IsStorageSupported} of\n\t\t{true, _, _, _} -> entropy_only;\n\t\t{_, not_found, _, _} ->\n\t\t\t%% This offset exists in some of the chunk indices, the chunk is not recorded\n\t\t\t%% in the sync record. This can happen if there was some corruption at some\n\t\t\t%% point in the past. We'll clean out the bad indices, and then record\n\t\t\t%% the entropy.\n\t\t\tinvalid;\n\t\t{_, TargetPacking, _, _} -> already_repacked;\n\t\t{_, _, not_found, _} ->\n\t\t\t%% Chunk doesn't exist on disk, try chunk data db.\n\t\t\tneeds_data_path;\n\t\t{_, _, _, false} -> \n\t\t\t%% We are going to move this chunk to RocksDB after repacking so\n\t\t\t%% we read its DataPath here to pass it later on to store_chunk.\n\t\t\tneeds_data_path;\n\t\t_ -> has_chunk\n\tend,\n\tRepackChunk#repack_chunk{state = NextState};\n\n%% ---------------------------------------------------------------------------\n%% State: needs_data_path\n%% ---------------------------------------------------------------------------\nnext_state(#repack_chunk{\n\t\tstate = needs_data_path,\n\t\tmetadata = #chunk_metadata{data_path = not_set}} = RepackChunk) ->\n\t%% Still waiting on data path.\n\tRepackChunk;\nnext_state(#repack_chunk{state = needs_data_path} = RepackChunk) ->\n\t#repack_chunk{\n\t\tchunk = Chunk,\n\t\tmetadata = Metadata\n\t} = RepackChunk,\n\t#chunk_metadata{\n\t\tdata_path = DataPath\n\t} = Metadata,\n\n\tIsInvalid = (\n\t\tChunk == not_found orelse\n\t\tDataPath == not_found\n\t),\n\n\tNextState = case IsInvalid of\n\t\ttrue -> \n\t\t\t%% This offset exists in some of the chunk indices and sync records, but there's\n\t\t\t%% either no chunk data or no data_path. This can happen if there was some\n\t\t\t%% corruption at some point in the past. We'll clean out the bad indices, and\n\t\t\t%% then record the entropy.\n\t\t\tinvalid;\n\t\t_ -> has_chunk\n\tend,\n\tRepackChunk#repack_chunk{state = NextState};\n\n%% ---------------------------------------------------------------------------\n%% State: has_chunk\n%% \n%% has_chunk is an intermediate state to avoid duplicating state transition\n%% logic across both the needs_chunk and needs_data_path states. 
Once a chunk\n%% and optionally data_path have been read, we'll transition to has_chunk and\n%% then from there enter the repack logic.\n%% ---------------------------------------------------------------------------\nnext_state(#repack_chunk{state = has_chunk} = RepackChunk) ->\n\t#repack_chunk{\n\t\tsource_packing = SourcePacking,\n\t\ttarget_packing = TargetPacking\n\t} = RepackChunk,\n\n\tNextState = case {SourcePacking, TargetPacking} of\n\t\t_ when SourcePacking == TargetPacking ->\n\t\t\twrite_chunk;\n\t\t{{replica_2_9, _}, _} ->\n\t\t\t%% Source is replica_2_9, so we need its entropy first before we can unpack it.\n\t\t\tneeds_source_entropy;\n\t\t{unpacked_padded, {replica_2_9, _}} ->\n\t\t\t%% When source_packing is unpacked_padded it means that the chunk was originally\n\t\t\t%% some other format, but has now been repacked to unpacked_padded and is ready\n\t\t\t%% to be enciphered to the target_packing replica_2_9 format. Before we can do\n\t\t\t%% that we need to wait for the target entropy to be generated.\n\t\t\tneeds_target_entropy;\n\t\t_ -> \n\t\t\t%% Source packing is either unpacked or spora_2_6, so the next step is to repack\n\t\t\t%% it. Whether we repack to unpacked_padded or to some other format depends on\n\t\t\t%% the current source and target packing. The logic to determine what to repack\n\t\t\t%% to is handled by ar_repack.erl.\n\t\t\tneeds_repack\n\tend,\n\tRepackChunk#repack_chunk{state = NextState};\n\n%% ---------------------------------------------------------------------------\n%% State: invalid\n%% ---------------------------------------------------------------------------\nnext_state(#repack_chunk{state = invalid} = RepackChunk) ->\n\t#repack_chunk{\n\t\tchunk = Chunk\n\t} = RepackChunk,\n\n\tNextState = case Chunk of\n\t\tinvalid ->\n\t\t\t%% Chunk is already invalid, ready to write entropy.\n\t\t\tentropy_only;\n\t\t_ ->\n\t\t\t%% Offset has not yet been invalidated.\n\t\t\tinvalid\n\tend,\n\tRepackChunk#repack_chunk{state = NextState};\n\n%% ---------------------------------------------------------------------------\n%% State: entropy_only\n%% ---------------------------------------------------------------------------\nnext_state(#repack_chunk{state = entropy_only} = RepackChunk) ->\n\t#repack_chunk{\n\t\ttarget_packing = TargetPacking\n\t} = RepackChunk,\n\n\tNextState = case {has_all_entropy(RepackChunk), TargetPacking} of\n\t\t{false, _} -> \n\t\t\t%% Still waiting on entropy.\n\t\t\tentropy_only;\n\t\t{true, {replica_2_9, _}} ->\n\t\t\t%% We don't have a record of this chunk anywhere, so we'll record and\n\t\t\t%% index the entropy \n\t\t\twrite_entropy;\n\t\t{true, _} ->\n\t\t\t%% We have a record of this chunk, so we'll ignore it.\n\t\t\tignore\n\tend,\n\tRepackChunk#repack_chunk{state = NextState};\n\n%% ---------------------------------------------------------------------------\n%% State: already_repacked\n%% ---------------------------------------------------------------------------\nnext_state(#repack_chunk{state = already_repacked} = RepackChunk) ->\n\tNextState = case has_all_entropy(RepackChunk) of\n\t\tfalse -> \n\t\t\t%% Still waiting on entropy.\n\t\t\talready_repacked;\n\t\ttrue ->\n\t\t\t%% Repacked chunk already exists on disk so don't write anything\n\t\t\t%% (neither entropy nor chunk)\n\t\t\tignore\n\tend,\n\tRepackChunk#repack_chunk{state = NextState};\n\n%% ---------------------------------------------------------------------------\n%% State: needs_repack\n%% \n%% Source chunk will be repacked directly to:\n%% - TargetPacking if 
TargetPacking is not replica_2_9\n%% - unpacked_padded if TargetPacking is replica_2_9\n%% \n%% Note when TargetPacking is replica_2_9 we need to generate entropy and then\n%% encipher the chunk rather than doing a direct repack.\n%% \n%% When we detect one of those condition, transition to has_chunk which will\n%% determine what state to transition to next.\n%% ---------------------------------------------------------------------------\nnext_state(#repack_chunk{state = needs_repack} = RepackChunk) ->\n\t#repack_chunk{\n\t\tsource_packing = SourcePacking,\n\t\ttarget_packing = TargetPacking\n\t} = RepackChunk,\n\n\tNextState = case {SourcePacking, TargetPacking} of\n\t\t{TargetPacking, _} -> has_chunk;\n\t\t{unpacked_padded, {replica_2_9, _}} -> has_chunk;\n\t\t_ ->\n\t\t\t%% Still waiting on repacking.\n\t\t\tneeds_repack\n\tend,\n\tRepackChunk#repack_chunk{state = NextState};\n\n%% ---------------------------------------------------------------------------\n%% State: needs_source_entropy\n%% ---------------------------------------------------------------------------\nnext_state(#repack_chunk{state = needs_source_entropy} = RepackChunk) ->\n\t#repack_chunk{\n\t\tsource_entropy = SourceEntropy\n\t} = RepackChunk,\n\n\tNextState = case SourceEntropy of\n\t\tnot_set -> \n\t\t\t%% Still waiting on entropy.\n\t\t\tneeds_source_entropy;\n\t\t_ ->\n\t\t\t%% sanity checks\n\t\t\ttrue = SourceEntropy /= <<>>,\n\t\t\t%% end sanity checks\n\t\t\tneeds_decipher\n\tend,\n\tRepackChunk#repack_chunk{state = NextState};\n\n%% ---------------------------------------------------------------------------\n%% State: needs_target_entropy\n%% ---------------------------------------------------------------------------\nnext_state(#repack_chunk{state = needs_target_entropy} = RepackChunk) ->\n\t#repack_chunk{\n\t\ttarget_entropy = TargetEntropy\n\t} = RepackChunk,\n\n\tNextState = case TargetEntropy of\n\t\tnot_set -> \n\t\t\t%% Still waiting on entropy.\n\t\t\tneeds_target_entropy;\n\t\t_ ->\n\t\t\t%% We now have the unpacked_padded chunk and the entropy, proceed\n\t\t\t%% with enciphering and storing the chunk.\n\n\t\t\t%% sanity checks\n\t\t\ttrue = TargetEntropy /= <<>>,\n\t\t\ttrue = RepackChunk#repack_chunk.chunk /= not_found,\n\t\t\ttrue = RepackChunk#repack_chunk.chunk /= not_set,\n\t\t\ttrue = RepackChunk#repack_chunk.source_packing == unpacked_padded,\n\t\t\t%% end sanity checks\n\t\t\tneeds_encipher\n\tend,\n\tRepackChunk#repack_chunk{state = NextState};\n\n%% ---------------------------------------------------------------------------\n%% State: needs_decipher\n%% ---------------------------------------------------------------------------\nnext_state(#repack_chunk{state = needs_decipher} = RepackChunk) ->\n\t#repack_chunk{\n\t\tsource_packing = SourcePacking\n\t} = RepackChunk,\n\n\tNextState = case SourcePacking of\n\t\tunpacked_padded -> has_chunk;\n\t\t_ ->\n\t\t\t%% Still waiting on deciphering.\n\t\t\tneeds_decipher\n\tend,\n\tRepackChunk#repack_chunk{state = NextState};\n\n%% ---------------------------------------------------------------------------\n%% State: needs_encipher\n%% ---------------------------------------------------------------------------\nnext_state(#repack_chunk{state = needs_encipher} = RepackChunk) ->\n\t#repack_chunk{\n\t\tsource_packing = SourcePacking,\n\t\ttarget_packing = TargetPacking\n\t} = RepackChunk,\n\n\t%% sanity checks\n\ttrue = RepackChunk#repack_chunk.chunk /= not_found,\n\ttrue = RepackChunk#repack_chunk.chunk /= not_set,\n\t%% end sanity 
checks\n\t\n\tIsRepacked = SourcePacking == TargetPacking,\n\n\tNextState = case IsRepacked of\n\t\ttrue -> has_chunk;\n\t\t_  ->\n\t\t\t%% Still waiting on enciphering.\n\t\t\tneeds_encipher\n\tend,\n\tRepackChunk#repack_chunk{state = NextState};\n\n%% ---------------------------------------------------------------------------\n%% State: write_chunk\n%% ---------------------------------------------------------------------------\nnext_state(#repack_chunk{state = write_chunk} = RepackChunk) ->\n\t%% write_chunk is a terminal state.\n\tRepackChunk;\n%% ---------------------------------------------------------------------------\n%% State: write_entropy\n%% ---------------------------------------------------------------------------\nnext_state(#repack_chunk{state = write_entropy} = RepackChunk) ->\n\t%% write_entropy is a terminal state.\n\tRepackChunk;\n%% ---------------------------------------------------------------------------\n%% State: ignore\n%% ---------------------------------------------------------------------------\nnext_state(#repack_chunk{state = ignore} = RepackChunk) ->\n\t%% ignore is a terminal state.\n\tRepackChunk;\nnext_state(RepackChunk) ->\n\tlog_error(invalid_repack_fsm_transition, RepackChunk, []),\n\tRepackChunk.\n\nhas_all_entropy(RepackChunk) ->\n\t#repack_chunk{\n\t\tsource_entropy = SourceEntropy,\n\t\ttarget_entropy = TargetEntropy\n\t} = RepackChunk,\n\tSourceEntropy /= not_set andalso TargetEntropy /= not_set.\n\nlog_error(Event, #repack_chunk{} = RepackChunk, ExtraLogs) ->\n\t?LOG_ERROR(format_logs(Event, RepackChunk, ExtraLogs)).\n\nlog_debug(Event, #repack_chunk{} = RepackChunk, ExtraLogs) ->\n\t?LOG_DEBUG(format_logs(Event, RepackChunk, ExtraLogs)).\n\nformat_logs(Event, #repack_chunk{} = RepackChunk, ExtraLogs) ->\n\t#repack_chunk{\n\t\toffsets = Offsets,\n\t\tmetadata = Metadata,\n\t\tstate = ChunkState,\n\t\tsource_entropy = SourceEntropy,\n\t\ttarget_entropy = TargetEntropy,\n\t\tsource_packing = SourcePacking,\n\t\ttarget_packing = TargetPacking,\n\t\tchunk = Chunk\n\t} = RepackChunk,\n\t#chunk_offsets{\n\t\tabsolute_offset = AbsoluteOffset,\n\t\tpadded_end_offset = PaddedEndOffset,\n\t\tbucket_end_offset = BucketEndOffset\n\t} = Offsets,\n\t{ChunkSize, DataPath} = case Metadata of\n\t\t#chunk_metadata{chunk_size = Size, data_path = Path} -> {Size, Path};\n\t\t_ -> {not_set, not_set}\n\tend,\n\t[\n\t\t{event, Event},\n\t\t{state, ChunkState},\n\t\t{bucket_end_offset, BucketEndOffset},\n\t\t{absolute_offset, AbsoluteOffset},\n\t\t{padded_end_offset, PaddedEndOffset},\n\t\t{chunk_size, ChunkSize},\n\t\t{source_packing, ar_serialize:encode_packing(SourcePacking, false)},\n\t\t{target_packing, ar_serialize:encode_packing(TargetPacking, false)},\n\t\t{chunk, atom_or_binary(Chunk)},\n\t\t{source_entropy, atom_or_binary(SourceEntropy)},\n\t\t{target_entropy, atom_or_binary(TargetEntropy)},\n\t\t{data_path, atom_or_binary(DataPath)}\n\t\t| ExtraLogs\n\t].\n\n\natom_or_binary(Atom) when is_atom(Atom) -> Atom;\natom_or_binary(Bin) when is_binary(Bin) -> binary:part(Bin, {0, min(10, byte_size(Bin))}).\n\n%%%===================================================================\n%%% Tests.\n%%%===================================================================\n\nstate_transition_test_() ->\n\t[\n\t\tar_test_node:test_with_mocked_functions([\n\t\t\t{ar_block, strict_data_split_threshold, fun() -> 700_000 end}\n\t\t],\n\t\tfun test_state_transitions/0, 30)\n\t].\n\ntest_state_transitions() ->\n\tAddr1 = crypto:strong_rand_bytes(32),\n\tAddr2 = 
crypto:strong_rand_bytes(32),\n\tChunk = crypto:strong_rand_bytes(?DATA_CHUNK_SIZE),\t\n\tEntropy1 = crypto:strong_rand_bytes(?DATA_CHUNK_SIZE),\n\tEntropy2 = crypto:strong_rand_bytes(?DATA_CHUNK_SIZE),\n\n\t%% ---------------------------------------------------------------------------\n\t%% needs_chunk\n\t%% ---------------------------------------------------------------------------\n\t?assertEqual(needs_chunk, (next_state(#repack_chunk{\n\t\tstate = needs_chunk,\n\t\tchunk = not_set,\n\t\tmetadata = not_set\n\t}))#repack_chunk.state),\n\n\t?assertEqual(entropy_only, (next_state(#repack_chunk{\n\t\tstate = needs_chunk,\n\t\tchunk = not_found,\n\t\tmetadata = not_found\n\t}))#repack_chunk.state),\n\n\t?assertEqual(error, (next_state(#repack_chunk{\n\t\tstate = needs_chunk,\n\t\tchunk = Chunk,\n\t\tmetadata = not_set,\n\t\toffsets = #chunk_offsets{}\n\t}))#repack_chunk.state),\n\n\t?assertEqual(entropy_only, (next_state(#repack_chunk{\n\t\tstate = needs_chunk,\n\t\tchunk = Chunk,\n\t\tmetadata = #chunk_metadata{chunk_size = 100},\n\t\toffsets = #chunk_offsets{absolute_offset = 100},\n\t\tsource_packing = unpacked,\n\t\ttarget_packing = {replica_2_9, Addr1}\n\t}))#repack_chunk.state),\n\n\t?assertEqual(invalid, (next_state(#repack_chunk{\n\t\tstate = needs_chunk,\n\t\tchunk = Chunk,\n\t\tmetadata = #chunk_metadata{chunk_size = ?DATA_CHUNK_SIZE},\n\t\toffsets = #chunk_offsets{absolute_offset = 1000000},\n\t\tsource_packing = not_found,\n\t\ttarget_packing = {replica_2_9, Addr1}\n\t}))#repack_chunk.state),\n\n\t?assertEqual(already_repacked, (next_state(#repack_chunk{\n\t\tstate = needs_chunk,\n\t\tchunk = Chunk,\n\t\tmetadata = #chunk_metadata{chunk_size = ?DATA_CHUNK_SIZE},\n\t\toffsets = #chunk_offsets{absolute_offset = 1000000},\n\t\tsource_packing = {replica_2_9, Addr1},\n\t\ttarget_packing = {replica_2_9, Addr1}\n\t}))#repack_chunk.state),\n\n\t?assertEqual(needs_data_path, (next_state(#repack_chunk{\n\t\tstate = needs_chunk,\n\t\tchunk = not_found,\n\t\tmetadata = #chunk_metadata{chunk_size = ?DATA_CHUNK_SIZE},\n\t\toffsets = #chunk_offsets{absolute_offset = 1000000},\n\t\tsource_packing = unpacked,\n\t\ttarget_packing = {replica_2_9, Addr1}\n\t}))#repack_chunk.state),\n\n\t?assertEqual(needs_data_path, (next_state(#repack_chunk{\n\t\tstate = needs_chunk,\n\t\tchunk = Chunk,\n\t\tmetadata = #chunk_metadata{chunk_size = 100},\n\t\toffsets = #chunk_offsets{absolute_offset = 1000000},\n\t\tsource_packing = {replica_2_9, Addr1},\n\t\ttarget_packing = unpacked\n\t}))#repack_chunk.state),\n\n\t?assertEqual(has_chunk, (next_state(#repack_chunk{\n\t\tstate = needs_chunk,\n\t\tchunk = Chunk,\n\t\tmetadata = #chunk_metadata{chunk_size = ?DATA_CHUNK_SIZE},\n\t\toffsets = #chunk_offsets{absolute_offset = 1000000},\n\t\tsource_packing = {replica_2_9, Addr2},\n\t\ttarget_packing = {replica_2_9, Addr1}\n\t}))#repack_chunk.state),\n\n\t?assertEqual(has_chunk, (next_state(#repack_chunk{\n\t\tstate = needs_chunk,\n\t\tchunk = Chunk,\n\t\tmetadata = #chunk_metadata{chunk_size = ?DATA_CHUNK_SIZE},\n\t\toffsets = #chunk_offsets{absolute_offset = 1000000},\n\t\tsource_packing = unpacked,\n\t\ttarget_packing = {replica_2_9, Addr1}\n\t}))#repack_chunk.state),\n\n\t?assertEqual(has_chunk, (next_state(#repack_chunk{\n\t\tstate = needs_chunk,\n\t\tchunk = Chunk,\n\t\tmetadata = #chunk_metadata{chunk_size = ?DATA_CHUNK_SIZE},\n\t\toffsets = #chunk_offsets{absolute_offset = 1000000},\n\t\tsource_packing = {replica_2_9, Addr1},\n\t\ttarget_packing = unpacked\n\t}))#repack_chunk.state),\n\n\t%% 
---------------------------------------------------------------------------\n\t%% needs_data_path\n\t%% ---------------------------------------------------------------------------\n\t?assertEqual(needs_data_path, (next_state(#repack_chunk{\n\t\tstate = needs_data_path,\n\t\tmetadata = #chunk_metadata{data_path = not_set}\n\t}))#repack_chunk.state),\n\n\t?assertEqual(invalid, (next_state(#repack_chunk{\n\t\tstate = needs_data_path,\n\t\tchunk = not_found,\n\t\tmetadata = #chunk_metadata{data_path = <<\"path\">>}\n\t}))#repack_chunk.state),\n\n\t?assertEqual(invalid, (next_state(#repack_chunk{\n\t\tstate = needs_data_path,\n\t\tchunk = Chunk,\n\t\tmetadata = #chunk_metadata{data_path = not_found}\n\t}))#repack_chunk.state),\n\n\t?assertEqual(has_chunk, (next_state(#repack_chunk{\n\t\tstate = needs_data_path,\n\t\tchunk = Chunk,\n\t\tmetadata = #chunk_metadata{data_path = <<\"path\">>},\n\t\tsource_packing = {replica_2_9, Addr1},\n\t\ttarget_packing = {replica_2_9, Addr1}\n\t}))#repack_chunk.state),\n\n\t?assertEqual(has_chunk, (next_state(#repack_chunk{\n\t\tstate = needs_data_path,\n\t\tchunk = Chunk,\n\t\tmetadata = #chunk_metadata{data_path = <<\"path\">>},\n\t\tsource_packing = {replica_2_9, Addr1},\n\t\ttarget_packing = {replica_2_9, Addr2}\n\t}))#repack_chunk.state),\n\n\t?assertEqual(has_chunk, (next_state(#repack_chunk{\n\t\tstate = needs_data_path,\n\t\tchunk = Chunk,\n\t\tmetadata = #chunk_metadata{data_path = <<\"path\">>},\n\t\tsource_packing = unpacked,\n\t\ttarget_packing = {replica_2_9, Addr1}\n\t}))#repack_chunk.state),\n\n\t%% ---------------------------------------------------------------------------\n\t%% has_chunk\n\t%% ---------------------------------------------------------------------------\n\t?assertEqual(write_chunk, (next_state(#repack_chunk{\n\t\tstate = has_chunk,\n\t\tsource_packing = {replica_2_9, Addr1},\n\t\ttarget_packing = {replica_2_9, Addr1}\n\t}))#repack_chunk.state),\n\n\t?assertEqual(needs_source_entropy, (next_state(#repack_chunk{\n\t\tstate = has_chunk,\n\t\tsource_packing = {replica_2_9, Addr1},\n\t\ttarget_packing = {replica_2_9, Addr2}\n\t}))#repack_chunk.state),\n\n\t?assertEqual(needs_source_entropy, (next_state(#repack_chunk{\n\t\tstate = has_chunk,\n\t\tsource_packing = {replica_2_9, Addr1},\n\t\ttarget_packing = unpacked\n\t}))#repack_chunk.state),\n\n\t?assertEqual(needs_target_entropy, (next_state(#repack_chunk{\n\t\tstate = has_chunk,\n\t\tsource_packing = unpacked_padded,\n\t\ttarget_packing = {replica_2_9, Addr1}\n\t}))#repack_chunk.state),\n\n\t?assertEqual(needs_repack, (next_state(#repack_chunk{\n\t\tstate = has_chunk,\n\t\tsource_packing = unpacked,\n\t\ttarget_packing = {replica_2_9, Addr1}\n\t}))#repack_chunk.state),\n\n\t?assertEqual(needs_repack, (next_state(#repack_chunk{\n\t\tstate = has_chunk,\n\t\tsource_packing = unpacked_padded,\n\t\ttarget_packing = unpacked\n\t}))#repack_chunk.state),\n\n\t?assertEqual(needs_repack, (next_state(#repack_chunk{\n\t\tstate = has_chunk,\n\t\tsource_packing = {spora_2_6, Addr1},\n\t\ttarget_packing = {replica_2_9, Addr1}\n\t}))#repack_chunk.state),\n\n\t?assertEqual(needs_repack, (next_state(#repack_chunk{\n\t\tstate = has_chunk,\n\t\tsource_packing = {spora_2_6, Addr1},\n\t\ttarget_packing = unpacked\n\t}))#repack_chunk.state),\n\t\n\t%% ---------------------------------------------------------------------------\n\t%% invalid\n\t%% ---------------------------------------------------------------------------\n\t?assertEqual(entropy_only, (next_state(#repack_chunk{\n\t\tstate = 
invalid,\n\t\tchunk = invalid\n\t}))#repack_chunk.state),\n\n\t?assertEqual(invalid, (next_state(#repack_chunk{\n\t\tstate = invalid,\n\t\tchunk = Chunk\n\t}))#repack_chunk.state),\n\n\t%% ---------------------------------------------------------------------------\n\t%% entropy_only\n\t%% ---------------------------------------------------------------------------\n\t?assertEqual(entropy_only, (next_state(#repack_chunk{\n\t\tstate = entropy_only\n\t}))#repack_chunk.state),\n\n\t?assertEqual(entropy_only, (next_state(#repack_chunk{\n\t\tstate = entropy_only,\n\t\ttarget_entropy = Entropy1\n\t}))#repack_chunk.state),\n\n\t?assertEqual(entropy_only, (next_state(#repack_chunk{\n\t\tstate = entropy_only,\n\t\tsource_entropy = Entropy1\n\t}))#repack_chunk.state),\n\n\t?assertEqual(write_entropy, (next_state(#repack_chunk{\n\t\tstate = entropy_only,\n\t\tsource_entropy = <<>>,\n\t\ttarget_entropy = Entropy1,\n\t\ttarget_packing = {replica_2_9, Addr1}\n\t}))#repack_chunk.state),\n\n\t?assertEqual(ignore, (next_state(#repack_chunk{\n\t\tstate = entropy_only,\n\t\tsource_entropy = Entropy1,\n\t\ttarget_entropy = <<>>,\n\t\ttarget_packing = unpacked\n\t}))#repack_chunk.state),\n\n\t%% ---------------------------------------------------------------------------\n\t%% already_repacked\n\t%% ---------------------------------------------------------------------------\n\t?assertEqual(already_repacked, (next_state(#repack_chunk{\n\t\tstate = already_repacked,\n\t\ttarget_entropy = Entropy1\n\t}))#repack_chunk.state),\n\n\t?assertEqual(already_repacked, (next_state(#repack_chunk{\n\t\tstate = already_repacked,\n\t\tsource_entropy = Entropy1\n\t}))#repack_chunk.state),\n\n\t?assertEqual(ignore, (next_state(#repack_chunk{\n\t\tstate = already_repacked,\n\t\tsource_entropy = <<>>,\n\t\ttarget_entropy = Entropy2\n\t}))#repack_chunk.state),\n\n\t%% ---------------------------------------------------------------------------\n\t%% needs_repack\n\t%% ---------------------------------------------------------------------------\n\t?assertEqual(has_chunk, (next_state(#repack_chunk{\n\t\tstate = needs_repack,\n\t\tsource_packing = {replica_2_9, Addr1},\n\t\ttarget_packing = {replica_2_9, Addr1},\n\t\tchunk = Chunk\n\t}))#repack_chunk.state),\n\n\t?assertEqual(has_chunk, (next_state(#repack_chunk{\n\t\tstate = needs_repack,\n\t\tsource_packing = unpacked,\n\t\ttarget_packing = unpacked,\n\t\tchunk = Chunk\n\t}))#repack_chunk.state),\n\n\t?assertEqual(has_chunk, (next_state(#repack_chunk{\n\t\tstate = needs_repack,\n\t\tsource_packing = unpacked_padded,\n\t\ttarget_packing = {replica_2_9, Addr1},\n\t\tchunk = Chunk\n\t}))#repack_chunk.state),\n\n\t?assertEqual(needs_repack, (next_state(#repack_chunk{\n\t\tstate = needs_repack,\n\t\tsource_packing = unpacked,\n\t\ttarget_packing = {replica_2_9, Addr1},\n\t\tchunk = Chunk\n\t}))#repack_chunk.state),\n\n\t?assertEqual(needs_repack, (next_state(#repack_chunk{\n\t\tstate = needs_repack,\n\t\tsource_packing = {spora_2_6, Addr1},\n\t\ttarget_packing = unpacked,\n\t\tchunk = Chunk\n\t}))#repack_chunk.state),\n\n\t%% ---------------------------------------------------------------------------\n\t%% needs_source_entropy\n\t%% ---------------------------------------------------------------------------\n\t?assertEqual(needs_source_entropy, (next_state(#repack_chunk{\n\t\tstate = needs_source_entropy,\n\t\tchunk = Chunk,\n\t\tsource_packing = {replica_2_9, Addr1}\n\t}))#repack_chunk.state),\n\n\t?assertEqual(needs_decipher, (next_state(#repack_chunk{\n\t\tstate = 
needs_source_entropy,\n\t\tchunk = Chunk,\n\t\tsource_packing = {replica_2_9, Addr1},\n\t\tsource_entropy = Entropy1\n\t}))#repack_chunk.state),\n\n\t%% ---------------------------------------------------------------------------\n\t%% needs_target_entropy\n\t%% ---------------------------------------------------------------------------\n\t?assertEqual(needs_target_entropy, (next_state(#repack_chunk{\n\t\tstate = needs_target_entropy,\n\t\ttarget_entropy = not_set,\n\t\tchunk = Chunk,\n\t\tsource_packing = unpacked_padded\n\t}))#repack_chunk.state),\n\n\t?assertEqual(needs_encipher, (next_state(#repack_chunk{\n\t\tstate = needs_target_entropy,\n\t\ttarget_entropy = Entropy1,\n\t\tchunk = Chunk,\n\t\tsource_packing = unpacked_padded\n\t}))#repack_chunk.state),\n\n\t%% ---------------------------------------------------------------------------\n\t%% needs_decipher\n\t%% ---------------------------------------------------------------------------\n\t?assertEqual(has_chunk, (next_state(#repack_chunk{\n\t\tstate = needs_decipher,\n\t\tsource_packing = unpacked_padded,\n\t\ttarget_packing = {replica_2_9, Addr1},\n\t\tchunk = Chunk\n\t}))#repack_chunk.state),\n\n\t?assertEqual(needs_decipher, (next_state(#repack_chunk{\n\t\tstate = needs_decipher,\n\t\tsource_packing = {replica_2_9, Addr1},\n\t\ttarget_packing = {replica_2_9, Addr2},\n\t\tchunk = Chunk\n\t}))#repack_chunk.state),\n\n\t%% ---------------------------------------------------------------------------\n\t%% needs_encipher\n\t%% ---------------------------------------------------------------------------\n\t?assertEqual(has_chunk, (next_state(#repack_chunk{\n\t\tstate = needs_encipher,\n\t\tsource_packing = {replica_2_9, Addr1},\n\t\ttarget_packing = {replica_2_9, Addr1},\n\t\tchunk = Chunk\n\t}))#repack_chunk.state),\n\n\t?assertEqual(needs_encipher, (next_state(#repack_chunk{\n\t\tstate = needs_encipher,\n\t\tsource_packing = unpacked_padded,\n\t\ttarget_packing = {replica_2_9, Addr1},\n\t\tchunk = Chunk\n\t}))#repack_chunk.state),\n\n\t%% ---------------------------------------------------------------------------\n\t%% write_chunk\n\t%% ---------------------------------------------------------------------------\n\t?assertEqual(write_chunk, (next_state(#repack_chunk{\n\t\tstate = write_chunk,\n\t\tchunk = Chunk\n\t}))#repack_chunk.state),\n\n\t%% ---------------------------------------------------------------------------\n\t%% write_entropy\n\t%% ---------------------------------------------------------------------------\n\t?assertEqual(write_entropy, (next_state(#repack_chunk{\n\t\tstate = write_entropy,\n\t\ttarget_entropy = Entropy1\n\t}))#repack_chunk.state),\n\n\t%% ---------------------------------------------------------------------------\n\t%% ignore\n\t%% ---------------------------------------------------------------------------\n\t?assertEqual(ignore, (next_state(#repack_chunk{\n\t\tstate = ignore,\n\t\tchunk = Chunk\n\t}))#repack_chunk.state),\n\n\tok.\n\n"
  },
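A minimal sketch of how a caller might drive the FSM defined in ar_repack_fsm.erl above: build a #repack_chunk{} record (the record fields and the #chunk_metadata{}/#chunk_offsets{} records are the ones used throughout the module; the concrete values and the helper name crank_example/3 are hypothetical) and call the exported crank_state/1, which keeps applying next_state/1 until the state stops changing. It assumes a module that includes ar.hrl and ar_repack.hrl.

```erlang
%% Illustrative sketch only; values and the helper name are hypothetical.
crank_example(ChunkBinary, AbsoluteEndOffset, RewardAddr) ->
	RepackChunk0 = #repack_chunk{
		state = needs_chunk,
		chunk = ChunkBinary,
		metadata = #chunk_metadata{ chunk_size = ?DATA_CHUNK_SIZE },
		offsets = #chunk_offsets{ absolute_offset = AbsoluteEndOffset },
		source_packing = unpacked,
		target_packing = {replica_2_9, RewardAddr}
	},
	%% crank_state/1 applies next_state/1 until the state stops changing.
	%% Assuming the offset is past the strict data split threshold and chunk
	%% storage supports the target packing, the FSM settles at needs_repack,
	%% where it waits for ar_repack.erl to repack the chunk before advancing.
	ar_repack_fsm:crank_state(RepackChunk0).
```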
  {
    "path": "apps/arweave/src/ar_repack_io.erl",
    "content": "-module(ar_repack_io).\n\n-behaviour(gen_server).\n\n-export([name/1, read_footprint/4, write_queue/3]).\n\n-export([start_link/2, init/1, handle_cast/2, handle_call/3, handle_info/2, terminate/2]).\n\n-include(\"ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include(\"ar_repack.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-moduledoc \"\"\"\n\tThis module handles disk IO for the repack-in-place process.\n\"\"\".\n\n-record(state, {\n\tstore_id = undefined,\n\tread_batch_size = ?DEFAULT_REPACK_BATCH_SIZE\n}).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Start the server.\nstart_link(Name, StoreID) ->\n\tgen_server:start_link({local, Name}, ?MODULE,  StoreID, []).\n\n%% @doc Return the name of the server serving the given StoreID.\nname(StoreID) ->\n\tlist_to_atom(\"ar_repack_io_\" ++ ar_storage_module:label(StoreID)).\n\ninit(StoreID) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tReadBatchSize = Config#config.repack_batch_size,\n\tState = #state{ \n\t\tstore_id = StoreID,\n\t\tread_batch_size = ReadBatchSize\n\t},\n\tlog_info(ar_repack_io_init, State, [\n\t\t{name, name(StoreID)},\n\t\t{read_batch_size, ReadBatchSize}\n\t]),\n\t\n    {ok, State}.\n\n%% @doc Read all the chunks covered by the given footprint.\n%% The footprint covers:\n%% - A list of offsets determined by the replica.2.9 entropy footprint pattern.\n%% - A set of consecutive chunks following each offset. The number of consecutive chunks\n%%   read for each footprint offset is determined by the repack_batch_size config.\n-spec read_footprint(\n\t[non_neg_integer()], non_neg_integer(), non_neg_integer(), ar_storage_module:store_id()) ->\n\tok.\nread_footprint(FootprintOffsets, FootprintStart, FootprintEnd, StoreID) ->\n\tgen_server:cast(name(StoreID),\n\t\t{read_footprint, FootprintOffsets, FootprintStart, FootprintEnd}).\n\nwrite_queue(WriteQueue, Packing, StoreID) ->\n\tgen_server:cast(name(StoreID), {write_queue, WriteQueue, Packing}).\n\t\n\n%%%===================================================================\n%%% Gen server callbacks.\n%%%===================================================================\n\nhandle_call(Request, _From, #state{} = State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {request, Request}]),\n\t{reply, ok, State}.\n\nhandle_cast(\n\t\t{read_footprint, FootprintOffsets, FootprintStart, FootprintEnd}, #state{} = State) ->\n\tdo_read_footprint(FootprintOffsets, FootprintStart, FootprintEnd, State),\n\t{noreply, State};\n\nhandle_cast({write_queue, WriteQueue, Packing}, #state{} = State) ->\n\tprocess_write_queue(WriteQueue, Packing, State),\n\t{noreply, State};\n\nhandle_cast(Request, #state{} = State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {request, Request}]),\n\t{noreply, State}.\n\nhandle_info(Request, #state{} = State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {module, ?MODULE}, {request, Request}]),\n\t{noreply, State}.\n\nterminate(Reason, #state{} = State) ->\n\tlog_debug(terminate, State, [\n\t\t{module, ?MODULE},\n\t\t{reason, ar_util:safe_format(Reason)}\n\t]).\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\ndo_read_footprint([], _FootprintStart, _FootprintEnd, #state{}) ->\n\tok;\ndo_read_footprint([\n\tBucketEndOffset | 
FootprintOffsets], FootprintStart, FootprintEnd, #state{} = State) \n\t\twhen BucketEndOffset < FootprintStart ->\n\t%% Advance until we hit a chunk covered by the current storage module\n\tdo_read_footprint(FootprintOffsets, FootprintStart, FootprintEnd, State);\ndo_read_footprint(\n\t[BucketEndOffset | _FootprintOffsets], _FootprintStart, FootprintEnd, #state{} ) \n\t\twhen BucketEndOffset > FootprintEnd ->\n\tok;\ndo_read_footprint(\n\t[BucketEndOffset | FootprintOffsets], FootprintStart, FootprintEnd, #state{} = State) ->\n\t#state{ \n\t\tstore_id = StoreID,\n\t\tread_batch_size = ReadBatchSize\n\t} = State,\n\n\tStartTime = erlang:monotonic_time(),\n\t{ReadRangeStart, ReadRangeEnd, _ReadRangeOffsets} = ar_repack:get_read_range(\n\t\tBucketEndOffset, FootprintEnd, ReadBatchSize),\n\tReadRangeSizeInBytes = ReadRangeEnd - ReadRangeStart,\n\tOffsetChunkMap = \n\t\tcase catch ar_chunk_storage:get_range(ReadRangeStart, ReadRangeSizeInBytes, StoreID) of\n\t\t\t[] ->\n\t\t\t\t#{};\n\t\t\t{'EXIT', _Exc} ->\n\t\t\t\tlog_error(failed_to_read_chunk_range, State, [\n\t\t\t\t\t{read_range_start, ReadRangeStart},\n\t\t\t\t\t{read_range_end, ReadRangeEnd},\n\t\t\t\t\t{read_range_size_bytes, ReadRangeSizeInBytes}\n\t\t\t\t]),\n\t\t\t\t#{};\n\t\t\tRange ->\n\t\t\t\tmaps:from_list(Range)\n\t\tend,\n\n\t\n\tOffsetMetadataMap =\n\t\tcase ar_data_sync:get_chunk_metadata_range(ReadRangeStart+1, ReadRangeEnd, StoreID) of\n\t\t\t{ok, MetadataMap} ->\n\t\t\t\tMetadataMap;\n\t\t\t{error, invalid_iterator} ->\n\t\t\t\t#{};\n\t\t\t{error, Reason} ->\n\t\t\t\tlog_warning(failed_to_read_chunk_metadata, State, [\n\t\t\t\t\t{read_range_start, ReadRangeStart},\n\t\t\t\t\t{read_range_end, ReadRangeEnd},\n\t\t\t\t\t{reason, Reason}\n\t\t\t\t]),\n\t\t\t\t#{}\n\t\tend,\n\n\tChunkReadSizeInBytes = maps:fold(\n\t\tfun(_Key, Value, Acc) -> Acc + byte_size(Value) end, \n\t\t0, \n\t\tOffsetChunkMap\n\t),\n\tar_metrics:record_rate_metric(\n\t\tStartTime, ChunkReadSizeInBytes,\n\t\tchunk_read_rate_bytes_per_second, [ar_storage_module:label(StoreID), repack]),\n\n\tEndTime = erlang:monotonic_time(),\n\tElapsedTime =  max(1, erlang:convert_time_unit(EndTime - StartTime, native, millisecond)),\n\tlog_debug(read_footprint, State, [\n\t\t{bucket_end_offset, BucketEndOffset},\n\t\t{read_range_start, ReadRangeStart},\n\t\t{read_range_end, ReadRangeEnd},\n\t\t{read_range_size_bytes, ReadRangeSizeInBytes},\n\t\t{chunk_read_size_bytes, ChunkReadSizeInBytes},\n\t\t{chunks_read, maps:size(OffsetChunkMap)},\n\t\t{metadata_read, maps:size(OffsetMetadataMap)},\n\t\t{footprint_start, FootprintStart},\n\t\t{footprint_end, FootprintEnd},\n\t\t{remaining_offsets, length(FootprintOffsets)},\n\t\t{time_taken, ElapsedTime},\n\t\t{rate, (ChunkReadSizeInBytes / ?MiB / ElapsedTime) * 1000}\n\t]),\n\n\tar_repack:chunk_range_read(\n\t\tBucketEndOffset, OffsetChunkMap, OffsetMetadataMap, State#state.store_id),\n\tread_footprint(FootprintOffsets, FootprintStart, FootprintEnd, StoreID).\n\nprocess_write_queue(WriteQueue, Packing, #state{} = State) ->\n\t#state{\n\t\tstore_id = StoreID\n\t} = State,\n\tStartTime = erlang:monotonic_time(),\n    gb_sets:fold(\n        fun({_BucketEndOffset, RepackChunk}, _) ->\n\t\t\twrite_repack_chunk(RepackChunk, Packing, State)\n        end,\n        ok,\n        WriteQueue\n    ),\n\tar_metrics:record_rate_metric(\n\t\tStartTime, gb_sets:size(WriteQueue) * ?DATA_CHUNK_SIZE,\n\t\tchunk_write_rate_bytes_per_second, [ar_storage_module:label(StoreID), repack]),\n\tEndTime = erlang:monotonic_time(),\n\tElapsedTime =  max(1, 
erlang:convert_time_unit(EndTime - StartTime, native, millisecond)),\n\tlog_debug(process_write_queue, State, [\n\t\t{write_queue_size, gb_sets:size(WriteQueue)},\n\t\t{time_taken, ElapsedTime},\n\t\t{rate, (gb_sets:size(WriteQueue) / 4 / ElapsedTime) * 1000}\n\t]).\n\nwrite_repack_chunk(RepackChunk, Packing, #state{} = State) ->\n\t#state{ \n\t\tstore_id = StoreID\n\t} = State,\n\t\n\tcase RepackChunk#repack_chunk.state of\n\t\twrite_entropy ->\n\t\t\t{replica_2_9, RewardAddr} = Packing,\n\t\t\tEntropy = RepackChunk#repack_chunk.target_entropy,\n\t\t\tBucketEndOffset = RepackChunk#repack_chunk.offsets#chunk_offsets.bucket_end_offset,\n\t\t\tar_entropy_storage:store_entropy(Entropy, BucketEndOffset, StoreID, RewardAddr);\n\t\twrite_chunk ->\n\t\t\twrite_chunk(RepackChunk, Packing, State);\n\t\t_ ->\n\t\t\tlog_error(unexpected_chunk_state, State, format_logs(RepackChunk))\n\tend.\n\nwrite_chunk(RepackChunk, TargetPacking, #state{} = State) ->\n\t#state{\n\t\tstore_id = StoreID\n\t} = State,\n\t#repack_chunk{\n\t\toffsets = Offsets,\n\t\tmetadata = Metadata,\n\t\tchunk = Chunk\n\t} = RepackChunk,\n\t#chunk_offsets{\n\t\tabsolute_offset = AbsoluteOffset\n\t} = Offsets,\n\n\tIsBlacklisted = ar_tx_blacklist:is_byte_blacklisted(AbsoluteOffset),\n\t\n\tcase remove_from_sync_record(Offsets, StoreID) of\n\t\tok when IsBlacklisted == true ->\n\t\t\tok;\n\t\tok when IsBlacklisted == false ->\n\t\t\tWriteResult = \n\t\t\t\tar_data_sync:write_chunk(\n\t\t\t\t\tAbsoluteOffset, Metadata, Chunk, TargetPacking, StoreID),\n\t\t\tcase WriteResult of\n\t\t\t\t{ok, TargetPacking} ->\n\t\t\t\t\tadd_to_sync_record(Offsets, Metadata, TargetPacking, StoreID);\n\t\t\t\t{ok, WrongPacking} ->\n\t\t\t\t\t%% This shouldn't ever happen - the only time write_chunk should change\n\t\t\t\t\t%% the packing is when writing to unpacked_padded.\n\t\t\t\t\tlog_error(repacked_chunk_stored_with_wrong_packing, State, [\n\t\t\t\t\t\t{requested_packing, ar_serialize:encode_packing(TargetPacking, true)},\n\t\t\t\t\t\t{stored_packing, ar_serialize:encode_packing(WrongPacking, true)}\n\t\t\t\t\t]);\n\t\t\t\tError ->\n\t\t\t\t\tlog_error(failed_to_store_repacked_chunk, State, [\n\t\t\t\t\t\t{requested_packing, ar_serialize:encode_packing(TargetPacking, true)},\n\t\t\t\t\t\t{error, io_lib:format(\"~p\", [Error])}\n\t\t\t\t\t])\n\t\t\tend;\n\t\tError ->\n\t\t\tlog_error(failed_to_remove_from_sync_record, State, [\n\t\t\t\t{error, io_lib:format(\"~p\", [Error])}\n\t\t\t])\n\tend.\n\nremove_from_sync_record(Offsets, StoreID) ->\n\t#chunk_offsets{\n\t\tpadded_end_offset = PaddedEndOffset\n\t} = Offsets,\n\n\tStartOffset = PaddedEndOffset - ?DATA_CHUNK_SIZE,\n\n\tDeleteEntropyRecord = ar_entropy_storage:delete_record(PaddedEndOffset, StoreID),\n\tDeleteFootprint =\n\t\tcase DeleteEntropyRecord of\n\t\t\tok ->\n\t\t\t\tar_footprint_record:delete(PaddedEndOffset, StoreID);\n\t\t\tError ->\n\t\t\t\tError\n\t\tend,\n\tDeleteSyncRecord =\n\t\tcase DeleteFootprint of\n\t\t\tok ->\n\t\t\t\tar_sync_record:delete(PaddedEndOffset, StartOffset, ar_data_sync, StoreID);\n\t\t\tError2 ->\n\t\t\t\tError2\n\t\tend,\n\tcase DeleteSyncRecord of\n\t\tok ->\n\t\t\tar_sync_record:delete(PaddedEndOffset, StartOffset, ar_chunk_storage, StoreID);\n\t\tError3 ->\n\t\t\tError3\n\tend.\n\nadd_to_sync_record(Offsets, Metadata, Packing, StoreID) ->\n\t#chunk_offsets{\n\t\tpadded_end_offset = PaddedEndOffset,\n\t\tbucket_end_offset = BucketEndOffset\n\t} = Offsets,\n\t#chunk_metadata{\n\t\tchunk_size = ChunkSize\n\t} = Metadata,\n\t\n\tStartOffset = PaddedEndOffset - 
?DATA_CHUNK_SIZE,\n\tar_sync_record:add(PaddedEndOffset, StartOffset, Packing, ar_data_sync, StoreID),\n\tcase ar_data_sync:is_footprint_record_supported(PaddedEndOffset, ChunkSize, Packing) of\n\t\ttrue ->\n\t\t\tar_footprint_record:add(PaddedEndOffset, Packing, StoreID);\n\t\tfalse ->\n\t\t\tok\n\tend,\n\n\tIsStorageSupported =\n\t\tar_chunk_storage:is_storage_supported(PaddedEndOffset, ChunkSize, Packing),\n\tIsReplica29 = case Packing of\n\t\t{replica_2_9, _} -> true;\n\t\t_ -> false\n\tend,\n\n\tcase IsStorageSupported andalso IsReplica29 of\n\t\ttrue ->\n\t\t\tar_entropy_storage:add_record(BucketEndOffset, Packing, StoreID);\n\t\t_ -> ok\n\tend.\n\nlog_error(Event, #state{} = State, ExtraLogs) ->\n\t?LOG_ERROR(format_logs(Event, State, ExtraLogs)).\n\nlog_warning(Event, #state{} = State, ExtraLogs) ->\n\t?LOG_WARNING(format_logs(Event, State, ExtraLogs)).\n\nlog_info(Event, #state{} = State, ExtraLogs) ->\n\t?LOG_INFO(format_logs(Event, State, ExtraLogs)).\n\t\nlog_debug(Event, #state{} = State, ExtraLogs) ->\n\t?LOG_DEBUG(format_logs(Event, State, ExtraLogs)).\n\nformat_logs(Event, #state{} = State, ExtraLogs) ->\n\t[\n\t\t{event, Event},\n\t\t{tags, [repack_in_place, ar_repack_io]},\n\t\t{pid, self()},\n\t\t{store_id, State#state.store_id}\n\t\t| ExtraLogs\n\t].\n\nformat_logs(#repack_chunk{} = RepackChunk) ->\n\t#repack_chunk{\n\t\tstate = ChunkState,\n\t\toffsets = Offsets,\n\t\tmetadata = Metadata,\n\t\tchunk = Chunk,\n\t\tsource_packing = SourcePacking,\n\t\ttarget_packing = TargetPacking,\n\t\ttarget_entropy = TargetEntropy,\n\t\tsource_entropy = SourceEntropy\n\t} = RepackChunk,\n\t#chunk_offsets{\t\n\t\tabsolute_offset = AbsoluteOffset,\n\t\tbucket_end_offset = BucketEndOffset,\n\t\tpadded_end_offset = PaddedEndOffset\n\t} = Offsets,\n\tChunkSize = case Metadata of\n\t\t#chunk_metadata{chunk_size = Size} -> Size;\n\t\t_ -> Metadata\n\tend,\n\t[\n\t\t{state, ChunkState},\n\t\t{bucket_end_offset, BucketEndOffset},\n\t\t{absolute_offset, AbsoluteOffset},\n\t\t{padded_end_offset, PaddedEndOffset},\n\t\t{chunk_size, ChunkSize},\n\t\t{chunk, atom_or_binary(Chunk)},\n\t\t{source_packing, ar_serialize:encode_packing(SourcePacking, false)},\n\t\t{target_packing, ar_serialize:encode_packing(TargetPacking, false)},\n\t\t{source_entropy, atom_or_binary(SourceEntropy)},\n\t\t{target_entropy, atom_or_binary(TargetEntropy)}\n\t].\n\natom_or_binary(Atom) when is_atom(Atom) -> Atom;\natom_or_binary(Bin) when is_binary(Bin) -> binary:part(Bin, {0, min(10, byte_size(Bin))}).\t\n"
  },
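A small usage sketch for ar_repack_io above: read_footprint/4 is an asynchronous cast to the per-StoreID gen_server, so the caller returns immediately and the chunks and metadata come back later via ar_repack:chunk_range_read/4. The wrapper name queue_footprint_read/4 and the argument values are hypothetical; the call itself is the exported API shown in the module.

```erlang
%% Illustrative sketch only; the footprint offsets, bounds, and StoreID are
%% hypothetical values supplied by the repack-in-place process.
queue_footprint_read(FootprintOffsets, FootprintStart, FootprintEnd, StoreID) ->
	%% Casts {read_footprint, ...} to the server registered under
	%% ar_repack_io:name(StoreID); the server reads each batch with
	%% ar_chunk_storage:get_range/3 and ar_data_sync:get_chunk_metadata_range/3
	%% and reports the results through ar_repack:chunk_range_read/4.
	ok = ar_repack_io:read_footprint(
		FootprintOffsets, FootprintStart, FootprintEnd, StoreID).
```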
  {
    "path": "apps/arweave/src/ar_replica_2_9.erl",
    "content": "-module(ar_replica_2_9).\n\n-export([get_entropy_partition/1, get_entropy_partition_range/1, get_entropy_key/3,\n    get_slice_index/1, get_partition_offset/1, get_entropy_index/2]).\n\n-include(\"ar.hrl\").\n-include(\"ar_consensus.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-moduledoc \"\"\"\n    This module handles mapping the 2.9 replica entropy to chunks and sub-chunks.\n\n    Here's a break down of how entropy is mapped to sub-chunks.\n\n    1. Iterate through each chunk's (e.g. chunk0) sub-chunks (e.g. s0, s1) assigning each one\n       to a different entropy. This ensures that all contiguous sub-chunks are assigned to\n       different entropies, maximizing the amount of work that an on-demand miner needs to do\n       to pack and mine a contiguous recall range.\n\n                   chunk0                          chunk1\n                   +-----------------------------+ +-----------------------------+\n                   |  s0 |  s1 |  s2 | ... | s31 | |  s0 |  s1 |  s2 | ... | s31 |\n                   +-----------------------------+ +-----------------------------+\n                      v     v     v           v       v      v     v          v \n    entropy index:   e0    e1    e2          e31     e32    e33   e33        e63\n\n    2. Each 8 MiB entropy contains 1024 8 KiB slices. To finish packing the sub-chunks we\n       will encipher them with the appropriate slice. A sub-chunk's slice index is\n       determined by its *chunk* - each sub-chunk in a chunk is assigned to a different\n       *entropy* but has the same *slice index*. A slice index and sector index are the same\n       but are just used in difference contexts (e.g. slices divide up entropy, sectors\n       divide up the partition). A chunk in sector 0 of the partition is enciphered with\n       slice index 0 from its entropies.\n\n         sector0   sector1  sector2           sector1023        sector0  sector1\n         chunk0    c12413   c26825            cXXXXXX           chunk1   c12414\n         +-------++-------++-------+         +-------+          +-------++-------+\n         | | | | || | | | || | | | |   ...   | | | | |          | | | | || | | | |\n         +-------++-------++-------+         +-------+          +-------++-------+\n             |        |        |                 |                  |        |\n         +-----------------------------------------------+      +--------------------------+\n     e0: | slice0 | slice1 | slice2 | ...... | slice1023 | e32: | slice0 | slice1 | ...... \n         +-----------------------------------------------+      +--------------------------+\n             |        |        |                 |                  |        |      \n         +-----------------------------------------------+      +--------------------------+\n     e1: | slice0 | slice1 | slice2 | ...... | slice1023 | e33: | slice0 | slice1 | ...... \n         +-----------------------------------------------+      +--------------------------+\n             |        |        |                 |                  |        |      \n         +-----------------------------------------------+      +--------------------------+\n     e2: | slice0 | slice1 | slice2 | ...... | slice1023 | e34: | slice0 | slice1 | ...... \n         +-----------------------------------------------+      +--------------------------+\n     ...     
|        |        |                 |                  |        |      \n         +-----------------------------------------------+      +--------------------------+\n    e31: | slice0 | slice1 | slice2 | ...... | slice1023 | e63: | slice0 | slice1 | ...... \n         +-----------------------------------------------+      +--------------------------+\n             |        |        |                 |                  |        |      \n             v        v        v                 v                  v        v\n\n    Glossary:\n\n    entropy: An 8 MiB (?REPLICA_2_9_ENTROPY_SIZE) block of entropy that contains the entropy\n             for 1024 sub-chunks (?REPLICA_2_9_ENTROPY_SIZE div \n             ?COMPOSITE_PACKING_SUB_CHUNK_SIZE.\n\n    slice: The 8192 byte (?COMPOSITE_PACKING_SUB_CHUNK_SIZE) range of an 'entropy' that will\n           be enciphered with a sub-chunk when packing to the replica_2_9 format.\n\n    entropy partition: contains all the entropies needed to encipher all the chunks in a\n                       recall partition. A recall partition is 3.6 TB (ar_block:partition_size()),\n                       but an entropy partition is slightly larger since enciphering a chunk\n                       (256 KiB) requires slices from 32 different entropies (256 MiB).\n                       Some of the entropies in a partition can be reused by neighboring\n                       recall partitions.\n\n    entropy index: The index of an entropy within an entropy partition. All of a chunk's\n                   sub-chunks have a different entropy index.\n\n    slice index: the index of a slice within an entropy. All of a chunk's sub-chunks have\n                 the same slice index.\n\n    sector: Each slice of an entropy is distributed to a different sector such that consecutive\n            slices map to chunks that are as far as possible from each other within a\n            partition. With an entropy size of 8_388_608 bytes and a slice size of 8_192 bytes,\n            there are 1024 slices per entropy, which yields 1024 sectors per partition.\n\"\"\".\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Return the 2.9 partition number the chunk with the given absolute end offset is\n%% mapped to. This partition number is a part of the 2.9 replication key. It is NOT\n%% the same as the ar_block:partition_size() (3.6 TB) recall partition.\n-spec get_entropy_partition(\n\t\tAbsoluteChunkEndOffset :: non_neg_integer()\n) -> non_neg_integer().\nget_entropy_partition(AbsoluteChunkEndOffset) ->\n    BucketStart = get_entropy_bucket_start(AbsoluteChunkEndOffset),\n    ar_node:get_partition_number(BucketStart).\n\nget_entropy_partition_range(PartitionNumber) ->\n    %% The goal of this function is to return the minimum and maximum byte offsets that, when\n    %% fed to ar_replica_2_9:get_entropy_partition/1 will yield the provided PartitinNumber.\n    %% \n    %% To do this we do a rough reversal of the steps taken by\n    %% ar_replica_2_9:get_entropy_partition/1:\n    %% \n    %% get_entropy_partition(AbsoluteChunkEndOffset) ->\n    %%    BucketStart = get_entropy_bucket_start(AbsoluteChunkEndOffset),\n    %%    ar_node:get_partition_number(BucketStart).\n    %% \n    %% I say \"rough reverseal\" because several of the steps are not reversible (e.g. \n    %% ar_util:floor_int/2 discards data and so it not perfectly reversible). \n    %% \n    %% 1. 
Reverse ar_node:get_partition_number(BucketStart) to get the pick offsets\n    %%    representing the byte boundaries of the recall partition.\n    StartRecall = PartitionNumber * ar_block:partition_size(),\n    EndRecall = (PartitionNumber + 1) * ar_block:partition_size(),\n    %% 2. The next 3 steps reverse ar_replica_2_9:get_entropy_bucket_start/1 to yield the\n    %%    first and last bytes of the entropy partition.\n    %% \n    %%    Get the first bucket boundary greater than the recall boundaries. This represents\n    %%    the bucket end offset of the bucket which contains the first/last byte of the\n    %%    recall partition. \n    %% \n    %%    Note: by passing 0 into get_padded_offset/2 we ignore the strict data split\n    %%    threshold and focus on just finding the nearest 256 KiB aligned boundary greater\n    %%    than the recall boundaries.\n    StartBucket1 = ar_poa:get_padded_offset(StartRecall, 0),\n    EndBucket1 = ar_poa:get_padded_offset(EndRecall, 0),\n    %% 3. ar_replica_2_9:get_entropy_partition/1 allocates this straddling bucket to the \n    %%    previous partition. So the start of the entropy partition is the first byte which\n    %%    falls in the *next* bucket, and the end of the entropy partition is the last byte\n    %%    which falls in *this* bucket. To get those bytes we'll advance to the next bucket...\n    StartBucket2 = StartBucket1 + ?DATA_CHUNK_SIZE,\n    EndBucket2 = EndBucket1 + ?DATA_CHUNK_SIZE,\n    %% 4. ... and then get the first byte which falls in that bucket\n    StartByte1 = ar_chunk_storage:get_chunk_byte_from_bucket_end(StartBucket2) + 1,\n    EndByte1 = ar_chunk_storage:get_chunk_byte_from_bucket_end(EndBucket2),\n\n    %% 5. Handle the special case of partition 0. Since it has no preceding partition its\n    %%    byte start is 0.\n    StartByte2 = case PartitionNumber of\n        0 ->\n            0;\n        _ ->\n            StartByte1\n    end,\n\n    {StartByte2, EndByte1}.\n\n%% @doc Return the key used to generate the entropy for the 2.9 replication format.\n%% RewardAddr: The address of the miner that mined the chunk.\n%% AbsoluteEndOffset: The absolute end offset of the chunk.\n%% SubChunkStartOffset: The start offset of the sub-chunk within the chunk. 0 is the first\n%% sub-chunk of the chunk, (?DATA_CHUNK_SIZE - ?COMPOSITE_PACKING_SUB_CHUNK_SIZE) is the\n%% last sub-chunk of the chunk.\n-spec get_entropy_key(\n\t\tRewardAddr :: binary(),\n\t\tAbsoluteEndOffset :: non_neg_integer(),\n\t\tSubChunkStartOffset :: non_neg_integer()\n) -> binary().\nget_entropy_key(RewardAddr, AbsoluteEndOffset, SubChunkStartOffset) ->\n\tPartition = get_entropy_partition(AbsoluteEndOffset),\n\t%% We use the key to generate a large entropy shared by many chunks.\n\tEntropyIndex = get_entropy_index(AbsoluteEndOffset, SubChunkStartOffset),\n\tcrypto:hash(sha256, << Partition:256, EntropyIndex:256, RewardAddr/binary >>).\n\n%% @doc Return the 0-based index indicating which area within a 2.9 entropy the\n%% given sub-chunk is mapped to (aka slice index). 
Sub-chunks of the same chunk are mapped to\n%% different entropies but all use the same slice index.\n-spec get_slice_index(\n\t\tAbsoluteChunkEndOffset :: non_neg_integer()\n) -> non_neg_integer().\nget_slice_index(AbsoluteChunkEndOffset) ->\n    PartitionRelativeOffset = get_partition_offset(AbsoluteChunkEndOffset),\n\tSectorSize = ar_block:get_replica_2_9_entropy_sector_size(),\n\t(PartitionRelativeOffset div SectorSize) rem ar_block:get_sub_chunks_per_replica_2_9_entropy().\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\n%% @doc Return the start offset of the bucket containing the given chunk offset.\n%% A chunk bucket is a 0-based, 256-KiB wide, 256-KiB aligned range. A chunk belongs to\n%% the bucket that contains the first byte of the chunk.\n-spec get_entropy_bucket_start(non_neg_integer()) -> non_neg_integer().\nget_entropy_bucket_start(AbsoluteChunkEndOffset) ->\n\tPaddedEndOffset = ar_block:get_chunk_padded_offset(AbsoluteChunkEndOffset),\n\tPickOffset = max(0, PaddedEndOffset - ?DATA_CHUNK_SIZE),\n\tBucketStart = ar_util:floor_int(PickOffset, ?DATA_CHUNK_SIZE),\n\n    true = BucketStart == ar_chunk_storage:get_chunk_bucket_start(PaddedEndOffset),\n    \n\tBucketStart.\n\n%% @doc Return the offset of the chunk within its partition.\n-spec get_partition_offset(AbsoluteChunkEndOffset :: non_neg_integer()) -> non_neg_integer().\nget_partition_offset(AbsoluteChunkEndOffset) ->\n    BucketStart = get_entropy_bucket_start(AbsoluteChunkEndOffset),\n    Partition = get_entropy_partition(AbsoluteChunkEndOffset),\n    PartitionStart = Partition * ar_block:partition_size(),\n    BucketStart - PartitionStart.\n\n%% @doc Returns the index of the entropy containing the slice for specified chunk's sub-chunk. \n%% An entropy index is 0-based index used to identify a specific entropy within an entropy\n%% partition. It is not unique - the same index will refer to different entropies in different\n%% partitions and for different mining addresses. For a unique entropy identifier see\n%% get_entropy_key/3.\n%% \n%% The entropy index is for the 2.9 replication format.\n-spec get_entropy_index(\n    AbsoluteChunkEndOffset :: non_neg_integer(),\n    SubChunkStartOffset :: non_neg_integer()\n) -> non_neg_integer().\nget_entropy_index(AbsoluteChunkEndOffset, SubChunkStartOffset) ->\n    %% Assert that SubChunkStartOffset is less than ?DATA_CHUNK_SIZE\n    true = SubChunkStartOffset < ?DATA_CHUNK_SIZE,\n    PartitionRelativeOffset = get_partition_offset(AbsoluteChunkEndOffset),\n    SectorSize = ar_block:get_replica_2_9_entropy_sector_size(),\n    %% Index of this chunk into the sector (i.e. how many chunks into the sector it falls)\n    ChunkBucket = (PartitionRelativeOffset rem SectorSize) div ?DATA_CHUNK_SIZE,\n    %% Index of this sub-chunk into the chunk (i.e. 
how many sub-chunks into the chunk it\n    %% falls)\n    SubChunkBucket = SubChunkStartOffset div ?COMPOSITE_PACKING_SUB_CHUNK_SIZE,\n    ChunkBucket * ?COMPOSITE_PACKING_SUB_CHUNK_COUNT + SubChunkBucket.\n\n%%%===================================================================\n%%% Tests.\n%%%===================================================================\n\nget_entropy_key_test_() ->\n    ar_test_node:test_with_mocked_functions([\n        {ar_block, partition_size, fun() -> 2_000_000 end},\n        {ar_block, get_replica_2_9_entropy_sector_size, fun() -> 786432 end},\n        {ar_block, get_replica_2_9_entropy_partition_size, fun() -> 2359296 end},\n        {ar_block, get_sub_chunks_per_replica_2_9_entropy, fun() -> 3 end}\n    ],\n    fun test_get_entropy_key/0, 30).\n\ntest_get_entropy_key() ->\n    SubChunkSize = ?COMPOSITE_PACKING_SUB_CHUNK_SIZE,\n    SectorSize = ar_block:get_replica_2_9_entropy_sector_size(),\n    EntropyPartitionSize = ar_block:get_replica_2_9_entropy_partition_size(),\n    Addr = << 0:256 >>,\n    ?assertEqual(32, ?COMPOSITE_PACKING_SUB_CHUNK_COUNT),\n    ?assertEqual(0, get_entropy_index(1, 0)),\n    EntropyKey = ar_util:encode(get_entropy_key(Addr, 1, 0)),\n    ?assertEqual(EntropyKey,\n            ar_util:encode(get_entropy_key(Addr, 1, 0))),\n    ?assertEqual(EntropyKey,\n            ar_util:encode(get_entropy_key(Addr, 262144, 0))),\n    %% The strict data split threshold in tests is 262144 * 3. Before the strict data\n    %% split threshold, the mapping works such that the chunk end offset up to but excluding\n    %% the bucket border is mapped to the previous bucket.\n    ?assertEqual(EntropyKey,\n            ar_util:encode(get_entropy_key(Addr, 262144 * 2 - 1, 0))),\n    EntropyKey2 = ar_util:encode(get_entropy_key(Addr, 262144 * 2, 0)),\n    ?assertNotEqual(EntropyKey, EntropyKey2),\n    ?assertEqual(EntropyKey2,\n            ar_util:encode(get_entropy_key(Addr, 262144 * 3 - 1, 0))),\n    EntropyKey3 = ar_util:encode(get_entropy_key(Addr, 262144 * 3, 0)),\n    ?assertNotEqual(EntropyKey2, EntropyKey3),\n    EntropyKey4 = ar_util:encode(get_entropy_key(Addr, 262144 * 3 + 1, 0)),\n    %% 262144 * 3 is the strict data split threshold so chunks ending after it are mapped\n    %% to the first bucket after the threshold so the key does not equal the one of the\n    %% chunk ending exactly at the threshold which is still mapped to the previous bucket.\n    ?assertNotEqual(EntropyKey3, EntropyKey4),\n    ?assertEqual(EntropyKey4,\n            ar_util:encode(get_entropy_key(Addr, 262144 * 4 - 1, 0))),\n    ?assertEqual(EntropyKey4,\n            ar_util:encode(get_entropy_key(Addr, 262144 * 4, 0))),\n    %% The mapping then goes this way indefinitely.\n    EntropyKey5 = ar_util:encode(get_entropy_key(Addr, 262144 * 5, 0)),\n    ?assertNotEqual(EntropyKey4, EntropyKey5),\n    %% Shift by sector size.\n    ?assertEqual(EntropyKey4,\n            ar_util:encode(get_entropy_key(Addr, 262144 * 3 + 1 + SectorSize, 0))),\n    ?assertEqual(EntropyKey4,\n            ar_util:encode(get_entropy_key(Addr, 262144 * 4 + SectorSize, 0))),\n    ?assertEqual(EntropyKey5,\n            ar_util:encode(get_entropy_key(Addr, 262144 * 4 + 1 + SectorSize, 0))),\n    ?assertEqual(EntropyKey5,\n            ar_util:encode(get_entropy_key(Addr, 262144 * 5 + SectorSize, 0))),\n\n    %% Exactly equal to the recall partition size:\n    ?assertEqual(0, get_entropy_partition(262144 * 5 + SectorSize)),\n    %% One greater than the recall partition size:\n    ?assertEqual(1, 
get_entropy_partition(262144 * 5 + SectorSize + 1)),\n    %% Greater than the entropy partition size (shouldn't matter since we map chunks\n    %% based on recall partition size)\n    ?assertEqual(1, get_entropy_partition(262144 * 6 + SectorSize + 1)),\n    %% The new partition => the new entropy.\n    EntropyKey6 =\n            ar_util:encode(get_entropy_key(Addr, 262144 * 5 + 2 * SectorSize, 0)),\n    ?assertNotEqual(EntropyKey6, EntropyKey5),\n    %% There is, of course, regularity within every partition.\n    ?assertEqual(EntropyKey6,\n            ar_util:encode(get_entropy_key(Addr, 262144 * 5 + 3 * SectorSize, 0))),\n\n    %% Test the edges of recall partition vs. entropy partition.\n    ?assertEqual(0, get_entropy_partition(ar_block:partition_size())),    \n    ?assertEqual(1, get_entropy_partition(EntropyPartitionSize)),\n    ?assertEqual(1, get_entropy_partition(2 * ar_block:partition_size())),\n    ?assertEqual(2, get_entropy_partition(ar_block:partition_size() + EntropyPartitionSize)),\n    ?assertEqual(2, get_entropy_partition(3 * ar_block:partition_size())),\n    ?assertEqual(3, get_entropy_partition(2 * ar_block:partition_size() + EntropyPartitionSize)),\n    ?assertEqual(10, get_entropy_partition(11 * ar_block:partition_size())),\n    ?assertEqual(11, get_entropy_partition(10 * ar_block:partition_size() + EntropyPartitionSize)),\n    %% This sub-chunk offset isn't used in practice, just adding a bounds check.\n    ?assertMatch(\n        {'EXIT', {{badmatch, false}, _}},  catch get_entropy_index(0, 32 * SubChunkSize)).\n\nget_entropy_partition_range_test_() ->\n    [\n        ar_test_node:test_with_mocked_functions([\n                {ar_block, strict_data_split_threshold, fun() -> 700_000 end}\n            ],\n            fun test_get_entropy_partition_range_after_strict/0, 30),\n        ar_test_node:test_with_mocked_functions([\n                {ar_block, strict_data_split_threshold, fun() -> 5_000_000 end}\n            ],\n            fun test_get_entropy_partition_range_before_strict/0, 30)\n    ].\n\ntest_get_entropy_partition_range_after_strict() ->\n    Start0 = 0,\n    End0 = 2272864,\n    ?assertEqual(0, get_entropy_partition(Start0)),\n    ?assertEqual(0, get_entropy_partition(End0)),\n\t?assertEqual({Start0, End0}, get_entropy_partition_range(0)),\n\n    Start1 = 2272865,\n    End1 = 4370016,\n    ?assertEqual(1, get_entropy_partition(Start1)),\n    ?assertEqual(1, get_entropy_partition(End1)),\n\t?assertEqual({Start1, End1}, get_entropy_partition_range(1)),\n\n    Start2 = 4370017,\n    End2 = 6205024,\n    ?assertEqual(2, get_entropy_partition(Start2)),\n    ?assertEqual(2, get_entropy_partition(End2)),\n\t?assertEqual({Start2, End2}, get_entropy_partition_range(2)),\n\tok.\n\ntest_get_entropy_partition_range_before_strict() ->\n    Start0 = 0,\n    End0 = 2359295,\n    ?assertEqual(0, get_entropy_partition(Start0)),\n    ?assertEqual(0, get_entropy_partition(End0)),\n\t?assertEqual({Start0, End0}, get_entropy_partition_range(0)),\n    \n    Start1 = 2359296,\n    End1 = 4456447,\n    ?assertEqual(1, get_entropy_partition(Start1)),\n    ?assertEqual(1, get_entropy_partition(End1)),\n\t?assertEqual({Start1, End1}, get_entropy_partition_range(1)),\n    \n    Start2 = 4456448,\n    End2 = 6048576,\n    ?assertEqual(2, get_entropy_partition(Start2)),\n    ?assertEqual(2, get_entropy_partition(End2)),\n\t?assertEqual({Start2, End2}, get_entropy_partition_range(2)),\n\tok.\n\n\n%% @doc Walk sequentially through all chunks in a couple partitions and verify their 
slice\n%% indices\nslice_index_walk_test_() ->\n    ar_test_node:test_with_mocked_functions([\n        {ar_block, partition_size, fun() -> 8 * 262144 end},\n        {ar_block, get_replica_2_9_entropy_sector_size, fun() -> 786432 end},\n        {ar_block, get_replica_2_9_entropy_partition_size, fun() -> 2359296 end},\n        {ar_block, get_sub_chunks_per_replica_2_9_entropy, fun() -> 3 end},\n        {ar_block, strict_data_split_threshold, fun() -> 3 * 262144 end}\n    ],\n    fun test_slice_index_walk/0, 30).\n\ntest_slice_index_walk() ->\n    %% --------------------------------------------------------------------------\n    %% Before the strict data split threshold:\n    %% --------------------------------------------------------------------------\n    \n    %% Partition start\n    %% Sector start\n    %% All sub-chunks in a chunk have the same slice index\n    assert_slice_index(0, [\n        0\n    ]),\n    assert_slice_index(0, [\n        1, 262144-1, 262144\n    ]),\n    assert_slice_index(0, [\n        262144+1, 2*262144-1\n    ]),\n    assert_slice_index(0, [\n        2*262144, 2*262144+1, 3*262144-1\n    ]),\n\n    %% The strict data split threshold:\n    %% The end offset exactly at the strict data split threshold is mapped to the\n    %% second bucket, therefore it is still the same sector size.\n    assert_slice_index(0, [\n        3*262144\n    ]),\n\n    %% --------------------------------------------------------------------------\n    %% After the strict data split threshold, all end offsets are padded to a multiple of\n    %% ?DATA_CHUNK_SIZE (i.e. 262144).\n    %% --------------------------------------------------------------------------\n    \n    %% Sector start\n    assert_slice_index(1, [\n        3*262144+1, 4*262144-1, 4*262144\n    ]),\n    assert_slice_index(1, [\n        4*262144+1, 5*262144-1, 5*262144\n    ]),\n    assert_slice_index(1, [\n        5*262144+1 , 6*262144-1, 6*262144\n    ]),\n\n    %% Sector start\n    assert_slice_index(2, [\n        6*262144+1, 7*262144-1, 7*262144\n    ]),\n    assert_slice_index(2, [\n        7*262144+1, 8*262144-1, 8*262144\n    ]),\n\n    %% Recall partition start\n    %% Sector start\n    assert_slice_index(0, [\n        8*262144+1, 9*262144-1, 9*262144\n    ]),\n    assert_slice_index(0, [\n        9*262144+1, 10*262144-1, 10*262144\n    ]),\n    assert_slice_index(0, [\n        10*262144+1, 11*262144-1, 11*262144\n    ]),\n\n    %% Sector start\n    assert_slice_index(1, [\n        11*262144+1, 12*262144-1, 12*262144\n    ]),\n    assert_slice_index(1, [\n        12*262144+1, 13*262144-1, 13*262144\n    ]),\n    assert_slice_index(1, [\n        13*262144+1, 14*262144-1, 14*262144\n    ]),\n    \n    %% Sector start\n    assert_slice_index(2, [\n        14*262144+1, 15*262144-1, 15*262144\n    ]),\n    assert_slice_index(2, [\n        15*262144+1, 16*262144-1, 16*262144\n    ]),\n\n    %% Recall partition start\n    %% Sector start\n    assert_slice_index(0, [\n        16*262144+1, 17*262144-1, 17*262144\n    ]),\n\n    ?assertEqual(ar_block:get_sub_chunks_per_replica_2_9_entropy() - 1,\n            get_slice_index(ar_block:partition_size())),\n    ?assertEqual(0,\n            get_slice_index(ar_block:partition_size() + 1)),\n\n    ok.\n\nassert_slice_index(_ExpectedIndex, []) ->\n    ok; \nassert_slice_index(ExpectedIndex, [AbsoluteChunkByteOffset | Rest]) ->\n    ?assertEqual(\n        ExpectedIndex, get_slice_index(AbsoluteChunkByteOffset),\n        lists:flatten(io_lib:format(\"get_slice_index(~p)\", \n            
[AbsoluteChunkByteOffset]))\n    ),\n    assert_slice_index(ExpectedIndex, Rest).\n\n\n%% @doc Walk through every sub-chunk of each chunk and verify its entropy index and\n%% entropy sub-chunk index.\nentropy_index_walk_test_() ->\n    ar_test_node:test_with_mocked_functions([\n        {ar_block, get_replica_2_9_entropy_sector_size, fun() -> 786432 end},\n        {ar_block, get_replica_2_9_entropy_partition_size, fun() -> 2359296 end},\n        {ar_block, get_sub_chunks_per_replica_2_9_entropy, fun() -> 3 end}\n    ],\n    fun test_entropy_index_walk/0, 30).\n\ntest_entropy_index_walk() ->\n    %% assert_entropy_index takes a list of chunk end offsets and verifies the entropy\n    %% index for each sub-chunk in the chunk. The first argument is the expected entropy\n    %% index for the first sub-chunk in the chunk, for each subsequent sub-chunk the\n    %% expected index is incremented by 1.\n    %% \n    %% The sector size determines the number of entropy indices. During tests the sector\n    %% size is 3*262144, so the total number of entropy indices is 3*262144 / 8192 = 96 (one\n    %% for each sub-chunk in each sector).\n    \n    %% In tests the strict data split threshold is 262144 * 3, before that offset chunks\n    %% were not padded. So each provided end offset is taken as is. After the threshold each\n    %% offset is padded to a multiple of ?DATA_CHUNK_SIZE (i.e. 262144) off of the threshold\n    %% value.\n\n    %% --------------------------------------------------------------------------\n    %% Before the strict data split threshold:\n    %% --------------------------------------------------------------------------\n    \n    %% Partition start\n    %% Sector start\n    assert_entropy_index(0, [\n        0\n    ]),\n    assert_entropy_index(0, [\n        1, 262144-1, 262144\n    ]),\n    assert_entropy_index(0, [\n        262144+1, 2*262144-1\n    ]),\n    assert_entropy_index(32, [\n        2*262144, 2*262144+1, 3*262144-1\n    ]),\n\n    %% The strict data split threshold:\n    assert_entropy_index(64, [\n        3*262144\n    ]),\n\n    %% --------------------------------------------------------------------------\n    %% After the strict data split threshold, all end offsets are padded to a multiple of\n    %% ?DATA_CHUNK_SIZE (i.e. 
262144).\n    %% --------------------------------------------------------------------------\n    \n    %% Sector start\n    assert_entropy_index(0, [\n        3*262144+1, 4*262144-1, 4*262144\n    ]),\n    assert_entropy_index(32, [\n        4*262144+1, 5*262144-1, 5*262144\n    ]),\n    assert_entropy_index(64, [\n        5*262144+1 , 6*262144-1, 6*262144\n    ]),\n\n    %% Sector start\n    assert_entropy_index(0, [\n        6*262144+1, 7*262144-1, 7*262144\n    ]),\n    assert_entropy_index(32, [\n        7*262144+1, 8*262144-1, 8*262144\n    ]),\n\n    %% Partition start\n    %% Sector start\n    assert_entropy_index(0, [\n        8*262144+1, 9*262144-1, 9*262144\n    ]),\n    assert_entropy_index(32, [\n        9*262144+1, 10*262144-1, 10*262144\n    ]),\n    assert_entropy_index(64, [\n        10*262144+1, 11*262144-1, 11*262144\n    ]),\n\n    %% Sector start\n    assert_entropy_index(0, [\n        11*262144+1, 12*262144-1, 12*262144\n    ]),\n    assert_entropy_index(32, [\n        12*262144+1, 13*262144-1, 13*262144\n    ]),\n    assert_entropy_index(64, [\n        13*262144+1, 14*262144-1, 14*262144\n    ]),\n\n    %% Sector start\n    assert_entropy_index(0, [\n        14*262144+1, 15*262144-1, 15*262144\n    ]),\n    assert_entropy_index(32, [\n        15*262144+1, 16*262144-1, 16*262144\n    ]),\n\n    %% Partition start\n    %% Sector start\n    assert_entropy_index(0, [\n        16*262144+1, 17*262144-1, 17*262144\n    ]),\n\n\n    ok.\n\nassert_entropy_index(_ExpectedIndex, []) ->\n    ok; \nassert_entropy_index(ExpectedIndex, [AbsoluteChunkByteOffset | Rest]) ->\n    walk_sub_chunks(ExpectedIndex, AbsoluteChunkByteOffset, 0),\n    assert_entropy_index(ExpectedIndex, Rest).\n\nwalk_sub_chunks(_ExpectedIndex, _AbsoluteChunkByteOffset, SubChunkStartOffset)\n    when SubChunkStartOffset >= ?DATA_CHUNK_SIZE ->\n        ok;\nwalk_sub_chunks(ExpectedIndex, AbsoluteChunkByteOffset, SubChunkStartOffset) ->\n    ?assertEqual(\n        ExpectedIndex, get_entropy_index(AbsoluteChunkByteOffset, SubChunkStartOffset),\n        lists:flatten(io_lib:format(\"get_entropy_index(~p, ~p)\", \n            [AbsoluteChunkByteOffset, SubChunkStartOffset]))\n    ),\n    ?assertEqual(\n        ExpectedIndex, get_entropy_index(AbsoluteChunkByteOffset, SubChunkStartOffset+1),\n        lists:flatten(io_lib:format(\"get_entropy_index(~p, ~p)\", \n            [AbsoluteChunkByteOffset, SubChunkStartOffset+1]))\n    ),\n    ?assertEqual(\n        ExpectedIndex, get_entropy_index(AbsoluteChunkByteOffset,  SubChunkStartOffset+8192-1),\n        lists:flatten(io_lib:format(\"get_entropy_index(~p, ~p)\", \n            [AbsoluteChunkByteOffset,  SubChunkStartOffset+8192-1]))\n    ),\n    walk_sub_chunks(ExpectedIndex+1, AbsoluteChunkByteOffset, SubChunkStartOffset+8192)."
  },
  {
    "path": "apps/arweave/src/ar_retarget.erl",
    "content": "%%% @doc A helper module for deciding when and which blocks will be retarget\n%%% blocks, that is those in which change the current mining difficulty\n%%% on the weave to maintain a constant block time.\n%%% @end\n-module(ar_retarget).\n\n-export([is_retarget_height/1, is_retarget_block/1, maybe_retarget/5,\n\t\tcalculate_difficulty/5, validate_difficulty/2,\n\t\tswitch_to_linear_diff/1, switch_to_linear_diff_pre_fork_2_5/1, switch_to_log_diff/1]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave/include/ar_consensus.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n%% A macro for checking if the given block is a retarget block.\n%% Returns true if so, otherwise returns false.\n-define(IS_RETARGET_BLOCK(X),\n\t\t(\n\t\t\t((X#block.height rem ?RETARGET_BLOCKS) == 0) and\n\t\t\t(X#block.height =/= 0)\n\t\t)\n\t).\n\n%% A macro for checking if the given height is a retarget height.\n%% Returns true if so, otherwise returns false.\n-define(IS_RETARGET_HEIGHT(Height),\n\t\t(\n\t\t\t((Height rem ?RETARGET_BLOCKS) == 0) and\n\t\t\t(Height =/= 0)\n\t\t)\n\t).\n\n%% @doc The unconditional difficulty reduction coefficient applied at the\n%% first 2.5 block.\n-define(DIFF_DROP_2_5, 2).\n\n%% @doc The unconditional difficulty reduction coefficient applied at the\n%% first 2.6 block.\n-define(INITIAL_DIFF_DROP_2_6, 100).\n\n%% @doc The additional difficulty reduction coefficient applied every 10 minutes at the\n%% first 2.6 block.\n-define(DIFF_DROP_2_6, 2).\n\n%% @doc The unconditional difficulty reduction coefficient applied at the\n%% first 2.7.2 block.\n-define(INITIAL_DIFF_DROP_2_7_2, 10).\n\n%% @doc The additional difficulty reduction coefficient applied every 10 minutes at the\n%% first 2.7.2 block.\n-define(DIFF_DROP_2_7_2, 2).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Return true if the given height is a retarget height.\nis_retarget_height(Height) ->\n\t?IS_RETARGET_HEIGHT(Height).\n\n%% @doc Return true if the given block is a retarget block.\nis_retarget_block(Block) ->\n\t?IS_RETARGET_BLOCK(Block).\n\nmaybe_retarget(Height, {CurPoA1Diff, CurDiff}, TS, LastRetargetTS, PrevTS) ->\n\tcase ar_retarget:is_retarget_height(Height) of\n\t\ttrue ->\n\t\t\tNewDiff = calculate_difficulty(CurDiff, TS, LastRetargetTS, Height, PrevTS),\n\t\t\t{ar_difficulty:poa1_diff(NewDiff, Height), NewDiff};\n\t\tfalse ->\n\t\t\t{CurPoA1Diff, CurDiff}\n\tend.\n\n-ifdef(LOCALNET).\ncalculate_difficulty(OldDiff, _TS, _Last, _Height, _PrevTS) ->\n\tOldDiff.\n-else.\ncalculate_difficulty(OldDiff, TS, Last, Height, PrevTS) ->\n\tFork_1_7 = ar_fork:height_1_7(),\n\tFork_1_8 = ar_fork:height_1_8(),\n\tFork_1_9 = ar_fork:height_1_9(),\n\tFork_2_4 = ar_fork:height_2_4(),\n\tFork_2_5 = ar_fork:height_2_5(),\n\tFork_2_6 = ar_fork:height_2_6(),\n\tFork_2_7_2 = ar_fork:height_2_7_2(),\n\tFork_Testnet = ar_testnet:height_testnet_fork(),\n\tcase Height of\n\t\t_ when Height == Fork_Testnet ->\n\t\t\tcalculate_difficulty_with_drop(OldDiff, TS, Last, Height, PrevTS, 100, 2);\n\t\t_ when Height == Fork_2_7_2 ->\n\t\t\tcalculate_difficulty_with_drop(OldDiff, TS, Last, Height, PrevTS,\n\t\t\t\t\t?INITIAL_DIFF_DROP_2_7_2, ?DIFF_DROP_2_7_2);\n\t\t_ when Height == Fork_2_6 ->\n\t\t\tcalculate_difficulty_with_drop(OldDiff, TS, Last, Height, PrevTS,\n\t\t\t\t\t?INITIAL_DIFF_DROP_2_6, ?DIFF_DROP_2_6);\n\t\t_ when Height > Fork_2_5 
->\n\t\t\tcalculate_difficulty(OldDiff, TS, Last, Height);\n\t\t_ when Height == Fork_2_5 ->\n\t\t\tcalculate_difficulty_at_2_5(OldDiff, TS, Last, Height, PrevTS);\n\t\t_ when Height > Fork_2_4 ->\n\t\t\tcalculate_difficulty_after_2_4_before_2_5(OldDiff, TS, Last, Height);\n\t\t_ when Height == Fork_2_4 ->\n\t\t\tcalculate_difficulty_at_2_4(OldDiff, TS, Last, Height);\n\t\t_ when Height >= Fork_1_9 ->\n\t\t\tcalculate_difficulty_at_and_after_1_9_before_2_4(OldDiff, TS, Last, Height);\n\t\t_ when Height > Fork_1_8 ->\n\t\t\tcalculate_difficulty_after_1_8_before_1_9(OldDiff, TS, Last, Height);\n\t\t_ when Height == Fork_1_8 ->\n\t\t\tswitch_to_linear_diff_pre_fork_2_5(OldDiff);\n\t\t_ when Height == Fork_1_7 ->\n\t\t\tar_difficulty:switch_to_randomx_fork_diff(OldDiff);\n\t\t_ ->\n\t\t\tcalculate_difficulty_before_1_8(OldDiff, TS, Last, Height)\n\tend.\n-endif.\n\n%% @doc Assert the new block has an appropriate difficulty.\n-ifdef(LOCALNET).\nvalidate_difficulty(_NewB, _OldB) ->\n\ttrue.\n-else.\nvalidate_difficulty(NewB, OldB) ->\n\tcase ar_retarget:is_retarget_block(NewB) of\n\t\ttrue ->\n\t\t\t(NewB#block.diff ==\n\t\t\t\tcalculate_difficulty(\n\t\t\t\t\tOldB#block.diff, NewB#block.timestamp, OldB#block.last_retarget,\n\t\t\t\t\tNewB#block.height, OldB#block.timestamp));\n\t\tfalse ->\n\t\t\t(NewB#block.diff == OldB#block.diff) and\n\t\t\t\t(NewB#block.last_retarget == OldB#block.last_retarget)\n\tend.\n-endif.\n\n%% @doc The number a hash must be greater than, to give the same odds of success\n%% as the old-style Diff (number of leading zeros in the bitstring).\nswitch_to_linear_diff(LogDiff) ->\n\t?MAX_DIFF - ar_fraction:pow(2, 256 - LogDiff).\n\nswitch_to_linear_diff_pre_fork_2_5(Diff) ->\n\terlang:trunc(math:pow(2, 256)) - erlang:trunc(math:pow(2, 256 - Diff)).\n\n%% @doc only used for logging/metrics as the log diff is easier to understand than the linear diff\nswitch_to_log_diff(LinearDiff) ->\n\t256 - math:log2(?MAX_DIFF - LinearDiff).\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\ncalculate_difficulty(OldDiff, TS, Last, Height) ->\n\t%% We only do retarget if the time it took to mine ?RETARGET_BLOCKS is bigger than\n\t%% or equal to RetargetToleranceUpperBound or smaller than or equal to\n\t%% RetargetToleranceLowerBound.\n\tTargetTime = ?RETARGET_BLOCKS * ar_testnet:target_block_time(Height),\n\tTargetTimeUpperBound = TargetTime + ar_testnet:target_block_time(Height),\n\tTargetTimeLowerBound = TargetTime - ar_testnet:target_block_time(Height),\n\tActualTime = max(TS - Last, ar_block:get_max_timestamp_deviation()),\n\n\tcase ActualTime < TargetTimeUpperBound andalso ActualTime > TargetTimeLowerBound of\n\t\ttrue ->\n\t\t\tOldDiff;\n\t\tfalse ->\n\t\t\t%% Scale difficulty by TargetTime / ActualTime\n\t\t\t%% If ActualTime is less than TargetTime it means we need to *increase* the difficulty,\n\t\t\t%% and vice versa.\n\t\t\tar_difficulty:scale_diff(OldDiff, {TargetTime, ActualTime}, Height)\n\tend.\n\ncalculate_difficulty_at_2_5(OldDiff, TS, Last, Height, PrevTS) ->\n\tcalculate_difficulty_with_drop(OldDiff, TS, Last, Height, PrevTS, ?DIFF_DROP_2_5,\n\t\t\t?DIFF_DROP_2_5).\n\ncalculate_difficulty_with_drop(OldDiff, TS, Last, Height, PrevTS, InitialCoeff, Coeff) ->\n\tTargetTime = ?RETARGET_BLOCKS * ar_testnet:target_block_time(Height),\n\tActualTime = max(TS - Last, ar_block:get_max_timestamp_deviation()),\n\tStep = 10 * 60,\n\t%% Drop the difficulty 
InitialCoeff times right away, then drop extra Coeff times\n\t%% for every 10 minutes passed.\n\tActualTime2 = ActualTime * InitialCoeff\n\t\t\t* ar_fraction:pow(Coeff, max(TS - PrevTS, 0) div Step),\n\t%% Scale difficulty by TargetTime / ActualTime2\n\t%% If ActualTime2 is less than TargetTime it means we need to *increase* the difficulty,\n\t%% and vice versa.\n\tar_difficulty:scale_diff(OldDiff, {TargetTime, ActualTime2}, Height).\n\ncalculate_difficulty_after_2_4_before_2_5(OldDiff, TS, Last, Height) ->\n\tTargetTime = ?RETARGET_BLOCKS * ar_testnet:target_block_time(Height),\n\tActualTime = TS - Last,\n\tTimeDelta = ActualTime / TargetTime,\n\tcase abs(1 - TimeDelta) < ?RETARGET_TOLERANCE of\n\t\ttrue ->\n\t\t\tOldDiff;\n\t\tfalse ->\n\t\t\tMaxDiff = ?MAX_DIFF,\n\t\t\tMinDiff = ar_difficulty:min_difficulty(Height),\n\t\t\tDiffInverse = erlang:trunc((MaxDiff - OldDiff) * TimeDelta),\n\t\t\tar_util:between(\n\t\t\t\tMaxDiff - DiffInverse,\n\t\t\t\tMinDiff,\n\t\t\t\tMaxDiff\n\t\t\t)\n\tend.\n\ncalculate_difficulty_at_2_4(OldDiff, TS, Last, Height) ->\n\tTargetTime = ?RETARGET_BLOCKS * ar_testnet:target_block_time(Height),\n\tActualTime = TS - Last,\n\t%% Make the difficulty drop 10 times faster than usual. The difficulty\n\t%% after SPoRA is estimated to be around 10-100 times lower. In the worst\n\t%% case, the 10x adjustment leads to a block per 12 seconds on average,\n\t%% what is a reasonable lower bound on the block time. In case of the 100x\n\t%% reduction in difficulty, it would only take 100 minutes to adjust.\n\tTimeDelta = 10 * ActualTime / TargetTime,\n\tMaxDiff = ?MAX_DIFF,\n\tMinDiff = ar_difficulty:min_difficulty(Height),\n\tDiffInverse = erlang:trunc((MaxDiff - OldDiff) * TimeDelta),\n\tar_util:between(\n\t\tMaxDiff - DiffInverse,\n\t\tMinDiff,\n\t\tMaxDiff\n\t).\n\ncalculate_difficulty_at_and_after_1_9_before_2_4(OldDiff, TS, Last, Height) ->\n\tTargetTime = ?RETARGET_BLOCKS * ar_testnet:target_block_time(Height),\n\tActualTime = TS - Last,\n\tTimeDelta = ActualTime / TargetTime,\n\tcase abs(1 - TimeDelta) < ?RETARGET_TOLERANCE of\n\t\ttrue ->\n\t\t\tOldDiff;\n\t\tfalse ->\n\t\t\tMaxDiff = ?MAX_DIFF,\n\t\t\tMinDiff = ar_difficulty:min_difficulty(Height),\n\t\t\tEffectiveTimeDelta = ar_util:between(\n\t\t\t\tActualTime / TargetTime,\n\t\t\t\t1 / ?DIFF_ADJUSTMENT_UP_LIMIT,\n\t\t\t\t?DIFF_ADJUSTMENT_DOWN_LIMIT\n\t\t\t),\n\t\t\tDiffInverse = erlang:trunc((MaxDiff - OldDiff) * EffectiveTimeDelta),\n\t\t\tar_util:between(\n\t\t\t\tMaxDiff - DiffInverse,\n\t\t\t\tMinDiff,\n\t\t\t\tMaxDiff\n\t\t\t)\n\tend.\n\ncalculate_difficulty_after_1_8_before_1_9(OldDiff, TS, Last, Height) ->\n\tTargetTime = ?RETARGET_BLOCKS * ar_testnet:target_block_time(Height),\n\tActualTime = TS - Last,\n\tTimeDelta = ActualTime / TargetTime,\n\tcase abs(1 - TimeDelta) < ?RETARGET_TOLERANCE of\n\t\ttrue ->\n\t\t\tOldDiff;\n\t\tfalse ->\n\t\t\tMaxDiff = ?MAX_DIFF,\n\t\t\tMinDiff = ar_difficulty:min_difficulty(Height),\n\t\t\tar_util:between(\n\t\t\t\tMaxDiff - (MaxDiff - OldDiff) * ActualTime div TargetTime,\n\t\t\t\tmax(MinDiff, OldDiff div 2),\n\t\t\t\tmin(MaxDiff, OldDiff * 4)\n\t\t\t)\n\tend.\n\ncalculate_difficulty_before_1_8(OldDiff, TS, Last, Height) ->\n\tTargetTime = ?RETARGET_BLOCKS * ar_testnet:target_block_time(Height),\n\tActualTime = TS - Last,\n\tTimeError = abs(ActualTime - TargetTime),\n\tDiff = erlang:max(\n\t\tif\n\t\t\tTimeError < (TargetTime * ?RETARGET_TOLERANCE) -> OldDiff;\n\t\t\tTargetTime > ActualTime                        -> OldDiff + 1;\n\t\t\ttrue                                
           -> OldDiff - 1\n\t\tend,\n\t\tar_difficulty:min_difficulty(Height)\n\t),\n\tDiff.\n\n%%%===================================================================\n%%% Tests.\n%%%===================================================================\n\n%% Ensure that after a series of very fast mines, the diff increases.\nsimple_retarget_test_() ->\n\t{timeout, 300, fun() ->\n\t\t[B0] = ar_weave:init(),\n\t\tar_test_node:start(B0),\n\t\tlists:foreach(\n\t\t\tfun(Height) ->\n\t\t\t\tar_test_node:mine(),\n\t\t\t\tar_test_node:wait_until_height(main, Height)\n\t\t\tend,\n\t\t\tlists:seq(1, ?RETARGET_BLOCKS + 1)\n\t\t),\n\t\ttrue = ar_util:do_until(\n\t\t\tfun() ->\n\t\t\t\t[BH | _] = ar_node:get_blocks(),\n\t\t\t\tB = ar_storage:read_block(BH),\n\t\t\t\tB#block.diff > B0#block.diff\n\t\t\tend,\n\t\t\t1000,\n\t\t\t5 * 60 * 1000\n\t\t)\n\tend}.\n\ncalculate_difficulty_linear_test_() ->\n\tar_test_node:test_with_mocked_functions([{ar_fork, height_2_5, fun() -> 0 end}],\n\t\tfun test_calculate_difficulty_linear/0, 120).\n\ntest_calculate_difficulty_linear() ->\n\tDiff = switch_to_linear_diff(27),\n\tTargetTime = ?RETARGET_BLOCKS * ?TARGET_BLOCK_TIME,\n\tTimestamp = os:system_time(seconds),\n\t%% The change is smaller than retarget tolerance.\n\tRetarget1 = Timestamp - TargetTime - ?TARGET_BLOCK_TIME + 1,\n\t?assertEqual(\n\t\tDiff,\n\t\tcalculate_difficulty(Diff, Timestamp, Retarget1, 1)\n\t),\n\tRetarget2 = Timestamp - TargetTime + ?TARGET_BLOCK_TIME - 1,\n\t?assertEqual(\n\t\tDiff,\n\t\tcalculate_difficulty(Diff, Timestamp, Retarget2, 1)\n\t),\n\t%% The change is not capped by ?DIFF_ADJUSTMENT_UP_LIMIT anymore.\n\tRetarget3 = Timestamp - TargetTime div (?DIFF_ADJUSTMENT_UP_LIMIT + 1),\n\t?assertEqual(\n\t\t(?DIFF_ADJUSTMENT_UP_LIMIT + 1) * hashes(Diff),\n\t\thashes(\n\t\t\tcalculate_difficulty(Diff, Timestamp, Retarget3, 1)\n\t\t)\n\t),\n\t%% The change is not capped by ?DIFF_ADJUSTMENT_DOWN_LIMIT anymore.\n\tRetarget4 = Timestamp - (?DIFF_ADJUSTMENT_DOWN_LIMIT + 2) * TargetTime,\n\t?assertEqual(\n\t\thashes(Diff),\n\t\t(?DIFF_ADJUSTMENT_DOWN_LIMIT + 2) * hashes(\n\t\t\tcalculate_difficulty(Diff, Timestamp, Retarget4, 1)\n\t\t)\n\t),\n\t%% The actual time is three times smaller.\n\tRetarget5 = Timestamp - TargetTime div 3,\n\t?assert(\n\t\t3.001 * hashes(Diff)\n\t\t\t> hashes(\n\t\t\t\tcalculate_difficulty(Diff, Timestamp, Retarget5, 1)\n\t\t\t)\n\t),\n\t?assert(\n\t\t3.001 / 2 * hashes(Diff)\n\t\t\t> hashes( % Expect 2x drop at 2.5.\n\t\t\t\tcalculate_difficulty_at_2_5(Diff, Timestamp, Retarget5, 0, Timestamp - 1)\n\t\t\t)\n\t),\n\t?assert(\n\t\t2.999 * hashes(Diff)\n\t\t\t< hashes(\n\t\t\t\tcalculate_difficulty(Diff, Timestamp, Retarget5, 1)\n\t\t\t)\n\t),\n\t?assert(\n\t\t2.999 / 2 * hashes(Diff)\n\t\t\t< hashes( % Expect 2x drop at 2.5.\n\t\t\t\tcalculate_difficulty_at_2_5(Diff, Timestamp, Retarget5, 0, Timestamp - 1)\n\t\t\t)\n\t),\n\t%% The actual time is two times bigger.\n\tRetarget6 = Timestamp - 2 * TargetTime,\n\t?assert(\n\t\thashes(Diff)\n\t\t\t> 1.999 * hashes(\n\t\t\t\tcalculate_difficulty(Diff, Timestamp, Retarget6, 1)\n\t\t\t)\n\t),\n\t?assert(\n\t\thashes(Diff)\n\t\t\t> 3.999 * hashes( % Expect 2x drop at 2.5.\n\t\t\t\tcalculate_difficulty_at_2_5(Diff, Timestamp, Retarget6, 0, Timestamp - 1)\n\t\t\t)\n\t),\n\t?assert(\n\t\thashes(Diff)\n\t\t\t> 7.999 * hashes( % Expect extra 2x after 10 minutes.\n\t\t\t\tcalculate_difficulty_at_2_5(Diff, Timestamp, Retarget6, 0, Timestamp - 600)\n\t\t\t)\n\t),\n\t?assert(\n\t\thashes(Diff)\n\t\t\t< 2.001 * 
hashes(\n\t\t\t\tcalculate_difficulty(Diff, Timestamp, Retarget6, 1)\n\t\t\t)\n\t),\n\t?assert(\n\t\thashes(Diff)\n\t\t\t< 4.001 * hashes( % Expect 2x drop at 2.5.\n\t\t\t\tcalculate_difficulty_at_2_5(Diff, Timestamp, Retarget6, 0, Timestamp - 1)\n\t\t\t)\n\t),\n\t?assert(\n\t\thashes(Diff)\n\t\t\t< 8.001 * hashes( % Expect extra 2x after 10 minutes.\n\t\t\t\tcalculate_difficulty_at_2_5(Diff, Timestamp, Retarget6, 0, Timestamp - 600)\n\t\t\t)\n\t).\n\nhashes(Diff) ->\n\tMaxDiff = ?MAX_DIFF,\n\tMaxDiff div (MaxDiff - Diff).\n"
  },
  {
    "path": "apps/arweave/src/ar_rewards.erl",
    "content": "-module(ar_rewards).\n\n-export([reward_history_length/1, expected_hashes_length/1, buffered_reward_history_length/1, \n\t\tset_reward_history/2, get_locked_rewards/1,\n\t\ttrim_locked_rewards/2, trim_reward_history/2,\n\t\ttrim_buffered_reward_history/2, interim_reward_history_bi/2,\n\t\tget_oldest_locked_address/1,\n\t\tadd_element/2, has_locked_reward/2,\n\t\treward_history_hash/3, validate_reward_history_hashes/3,\n\t\tget_total_reward_for_address/2, get_reward_history_totals/1,\n\t\tapply_rewards/2, apply_reward/4, log_reward_history/3]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n\nreward_history_length(Height) ->\n\tmin(\n\t\tHeight - ar_fork:height_2_6() + 1, %% included for compatibility with unit tests\n\t\tcase Height >= ar_fork:height_2_8() of\n\t\t\ttrue ->\n\t\t\t\tar_testnet:reward_history_blocks(Height) + ar_block:get_consensus_window_size();\n\t\t\tfalse ->\n\t\t\t\tar_testnet:legacy_reward_history_blocks(Height) + ar_block:get_consensus_window_size()\n\t\tend\n\t).\n\nexpected_hashes_length(Height) ->\n\tcase Height >= ar_fork:height_2_8() of\n\t\ttrue ->\n\t\t\t%% Take one more block.reward_history_hash because after 2.8 we use\n\t\t\t%% the previous reward history hash to compute the new one.\n\t\t\tar_block:get_consensus_window_size() + 1;\n\t\tfalse ->\n\t\t\tar_block:get_consensus_window_size()\n\t\tend.\n\n%% @doc The reward history that gets cached in #block and returned by /reward_history has\n%% to be long enough to include:\n%% 1. The current reward history (i.e. reward_history_length(Height))\n%% 2. The reward history that was in use recently\n%%    (i.e. reward_history_length(Height - expected_hashes_length(Height)))\n%% 3. The current locked rewards (i.e. locked_rewards_blocks(Height))\nbuffered_reward_history_length(Height) ->\n\tmax(\n\t\tmax(\n\t\t\treward_history_length(Height - expected_hashes_length(Height)),\n\t\t\treward_history_length(Height)\n\t\t),\n\t\tar_testnet:locked_rewards_blocks(Height)\n\t).\n\n%% @doc Add the corresponding reward history to every block record. We keep\n%% the reward histories in the block cache and use them to validate blocks applied on top.\n%%\n%% The expectation is that RewardHistory is at least\n%% reward_history_length/1 long, and that Blocks is no longer than\n%% ar_block:get_consensus_window_size(). If so then each block.reward_history value will be at least\n%% ?REWARD_HISTORY_BLOCKS long.\nset_reward_history([], _RewardHistory) ->\n\t[];\nset_reward_history(Blocks, []) ->\n\tBlocks;\nset_reward_history([B | Blocks], RewardHistory) ->\n\t[B#block{ reward_history = RewardHistory } | set_reward_history(Blocks, tl(RewardHistory))].\n\n%% @doc Return the most recent part of the reward history including the locked rewards.\nget_locked_rewards(B) ->\n\ttrim_locked_rewards(B#block.height, B#block.reward_history).\n\n%% @doc Trim RewardHistory to just the locked rewards.\ntrim_locked_rewards(Height, RewardHistory) ->\n\tLockRewardsLength = ar_testnet:locked_rewards_blocks(Height),\n\tlists:sublist(RewardHistory, LockRewardsLength).\n\n%% @doc Trim RewardHistory to the values that will be stored in the block. 
This is the\n%% sliding window plus a buffer of ar_block:get_consensus_window_size() values.\ntrim_reward_history(Height, RewardHistory) ->\n\tlists:sublist(RewardHistory, reward_history_length(Height)).\n\n%% @doc See the buffered_reward_history_length/1 function for the distinction between\n%% reward_history_length/1 and buffered_reward_history_length/1.\ntrim_buffered_reward_history(Height, RewardHistory) ->\n\tlists:sublist(RewardHistory, buffered_reward_history_length(Height)).\n\n%% @doc Return the portion of the block index needed for reading the reward history\n%% during startup. Until ~2 months post 2.8 hardfork, the reward history accumulated\n%% by any node will be shorter than the full expected length — specifically 21,600\n%% blocks plus the number of blocks elapsed since the 2.8 activation.\ninterim_reward_history_bi(Height, BI) ->\n\tInterimRewardHistoryLength = (Height - ar_fork:height_2_8()) + 21600,\n\tlists:sublist(trim_buffered_reward_history(Height, BI), InterimRewardHistoryLength).\n\nget_oldest_locked_address(B) ->\n\tLockedRewards = get_locked_rewards(B),\n\t{Addr, _HashRate, _Reward, _Denomination} = lists:last(LockedRewards),\n\tAddr.\n\n%% @doc Add a new {Addr, HashRate, Reward, Denomination} tuple to the reward history.\nadd_element(B, RewardHistory) ->\n\tHeight = B#block.height,\n\tReward = B#block.reward,\n\tHashRate = ar_difficulty:get_hash_rate_fixed_ratio(B),\n\tDenomination = B#block.denomination,\n\tRewardAddr = B#block.reward_addr,\n\ttrim_buffered_reward_history(Height, \n\t\t[{RewardAddr, HashRate, Reward, Denomination} | RewardHistory]).\n\nhas_locked_reward(_Addr, []) ->\n\tfalse;\nhas_locked_reward(Addr, [{Addr, _, _, _} | _]) ->\n\ttrue;\nhas_locked_reward(Addr, [_ | RewardHistory]) ->\n\thas_locked_reward(Addr, RewardHistory).\n\nvalidate_reward_history_hashes(_Height, _RewardHistory, []) ->\n\ttrue;\nvalidate_reward_history_hashes(0, [_Element] = History, [H]) ->\n\t%% This clause is not applicable in mainnet but reflects how we initialize\n\t%% the reward history hash in the new weaves, even if the 2.8 height is not\n\t%% set from the genesis.\n\tH == reward_history_hash(0, <<>>, History);\nvalidate_reward_history_hashes(Height, RewardHistory, [H, PrevH | ExpectedHashes]) ->\n\tcase validate_reward_history_hash(Height, PrevH, H, RewardHistory) of\n\t\ttrue ->\n\t\t\tcase ExpectedHashes of\n\t\t\t\t[] ->\n\t\t\t\t\ttrue;\n\t\t\t\t_ ->\n\t\t\t\t\tvalidate_reward_history_hashes(Height - 1,\n\t\t\t\t\t\t\ttl(RewardHistory), [PrevH | ExpectedHashes])\n\t\t\tend;\n\t\tfalse ->\n\t\t\tfalse\n\tend;\nvalidate_reward_history_hashes(Height, RewardHistory, [H]) ->\n\t%% After 2.8 we always include one extra hash to the list so we cannot end up here.\n\ttrue = Height < ar_fork:height_2_8(),\n\tvalidate_reward_history_hash(Height, not_set, H, RewardHistory).\n\nvalidate_reward_history_hash(Height, PreviousRewardHistoryHash, H, RewardHistory) ->\n\tH == reward_history_hash(Height, PreviousRewardHistoryHash,\n\t\t\t%% Pre-2.8: slice the reward history to compute the hash\n\t\t\t%% Post-2.8: use the previous reward history hash and the head of the history to compute\n\t\t\t%% the new hash.\n\t\t\ttrim_locked_rewards(Height, RewardHistory)).\n\nreward_history_hash(Height, PreviousRewardHistoryHash, History) ->\n\tcase Height >= ar_fork:height_2_8() of\n\t\ttrue ->\n\t\t\tElement = encode_reward_history_element(hd(History)),\n\t\t\tPreimage = << Element/binary, PreviousRewardHistoryHash/binary >>,\n\t\t\tcrypto:hash(sha256, Preimage);\n\t\tfalse 
->\n\t\t\treward_history_hash(History, [ar_serialize:encode_int(length(History), 8)])\n\tend.\n\nencode_reward_history_element({Addr, HashRate, Reward, Denomination}) ->\n\tHashRateBin = ar_serialize:encode_int(HashRate, 8),\n\tRewardBin = ar_serialize:encode_int(Reward, 8),\n\tDenominationBin = << Denomination:24 >>,\n\tcrypto:hash(sha256, << Addr/binary, HashRateBin/binary,\n\t\t\tRewardBin/binary, DenominationBin/binary >>).\n\nreward_history_hash([], IOList) ->\n\tcrypto:hash(sha256, iolist_to_binary(IOList));\nreward_history_hash([{Addr, HashRate, Reward, Denomination} | History], IOList) ->\n\tHashRateBin = ar_serialize:encode_int(HashRate, 8),\n\tRewardBin = ar_serialize:encode_int(Reward, 8),\n\tDenominationBin = << Denomination:24 >>,\n\treward_history_hash(History, [Addr, HashRateBin, RewardBin, DenominationBin | IOList]).\n\nget_total_reward_for_address(Addr, B) ->\n\tget_total_reward_for_address(Addr, get_locked_rewards(B), B#block.denomination, 0).\n\nget_total_reward_for_address(_Addr, [], _Denomination, Total) ->\n\tTotal;\nget_total_reward_for_address(Addr, [{Addr, _, Reward, RewardDenomination} | LockedRewards], Denomination, Total) ->\n\tReward2 = ar_pricing:redenominate(Reward, RewardDenomination, Denomination),\n\tget_total_reward_for_address(Addr, LockedRewards, Denomination, Total + Reward2);\nget_total_reward_for_address(Addr, [_ | LockedRewards], Denomination, Total) ->\n\tget_total_reward_for_address(Addr, LockedRewards, Denomination, Total).\n\n%% @doc Return {HashRateTotal, RewardTotal} summed up over the entire\n%% sliding window of the history of rewards for the given block.\nget_reward_history_totals(B) ->\n\tDenomination = B#block.denomination,\n\tHistory = trim_reward_history(B#block.height, B#block.reward_history),\n\tlog_reward_history(\"get_reward_history_totals\", History, 200),\n\t{HashRateTotal, RewardTotal} = get_totals(History, Denomination, 0, 0),\n\t{HashRateTotal, RewardTotal, History}.\n\nget_totals([], _Denomination, HashRateTotal, RewardTotal) ->\n\t{HashRateTotal, RewardTotal};\nget_totals([{_Addr, HashRate, Reward, RewardDenomination} | History],\n\t\tDenomination, HashRateTotal, RewardTotal) ->\n\tHashRateTotal2 = HashRateTotal + HashRate,\n\tReward2 = ar_pricing:redenominate(Reward, RewardDenomination, Denomination),\n\tRewardTotal2 = RewardTotal + Reward2,\n\tget_totals(History, Denomination, HashRateTotal2, RewardTotal2).\n\napply_rewards(PrevB, Accounts) ->\n\t%% The only time we won't have only a single reward to apply is if the\n\t%% ?LOCKED_REWARDS_BLOCKS has changed between blocks. 
And currently that can only\n\t%% happen on testnet.\n\tHeight = PrevB#block.height,\n\tNumRewardsToApply = max(0,\n\t\t\tar_testnet:locked_rewards_blocks(Height) -\n\t\t\tar_testnet:locked_rewards_blocks(Height + 1) + 1),\n\ttrue = NumRewardsToApply == 1 orelse ar_testnet:is_testnet(),\n\n\t%% Get the last NumRewardsToApply elements of the LockedRewards list in reverse order.\n\t%% Normally this will be a list with a single element: the last element in the\n\t%% LockedRewards list.\n\t%% When forking testnet off of mainnet this may be a list of more than 1 element.\n\tRewardsToApply = lists:sublist(lists:reverse(get_locked_rewards(PrevB)), NumRewardsToApply), \n\n\tapply_rewards2(RewardsToApply, PrevB#block.denomination, Accounts).\n\napply_rewards2([], _Denomination, Accounts) ->\n\tAccounts;\napply_rewards2([{Addr, _HashRate, Reward, RewardDenomination} | RewardsToApply],\n\t\tDenomination, Accounts) ->\n\tcase ar_node_utils:is_account_banned(Addr, Accounts) of\n\t\ttrue ->\n\t\t\tapply_rewards2(RewardsToApply, Denomination, Accounts);\n\t\tfalse ->\n\t\t\tReward2 = ar_pricing:redenominate(Reward, RewardDenomination,\n\t\t\t\t\tDenomination),\n\t\t\tAccounts2 = apply_reward(Accounts, Addr, Reward2, Denomination),\n\t\t\tapply_rewards2(RewardsToApply, Denomination, Accounts2)\n\tend.\n\n%% @doc Add the mining reward to the corresponding account.\napply_reward(Accounts, unclaimed, _Quantity, _Denomination) ->\n\tAccounts;\napply_reward(Accounts, RewardAddr, Amount, Denomination) ->\n\tcase maps:get(RewardAddr, Accounts, not_found) of\n\t\tnot_found ->\n\t\t\tar_node_utils:update_account(RewardAddr, Amount, <<>>, Denomination, true, Accounts);\n\t\t{Balance, LastTX} ->\n\t\t\tBalance2 = ar_pricing:redenominate(Balance, 1, Denomination),\n\t\t\tar_node_utils:update_account(RewardAddr, Balance2 + Amount, LastTX,\n\t\t\t\tDenomination, true, Accounts);\n\t\t{Balance, LastTX, AccountDenomination, MiningPermission} ->\n\t\t\tBalance2 = ar_pricing:redenominate(Balance, AccountDenomination, Denomination),\n\t\t\tar_node_utils:update_account(RewardAddr, Balance2 + Amount, LastTX,\n\t\t\t\tDenomination, MiningPermission, Accounts)\n\tend.\n\nlog_reward_history(Message, RewardHistory, N) ->\n\tLength = length(RewardHistory),\n\tLimitedRewardHistory = lists:sublist(RewardHistory, N),\n\tLogEntries = lists:map(fun({Addr, HashRate, Reward, Denomination}) ->\n\t\tEncodedAddr = ar_util:encode(Addr),\n\t\tLogHashRate = math:log10(HashRate),\n\t\tio_lib:format(\"{~s, ~p, ~p, ~p}\", \n\t\t\t\t\t\t[EncodedAddr, LogHashRate, Reward, Denomination])\n\tend, LimitedRewardHistory),\n\tLogString = string:join(LogEntries, \"; \"),\n\t?LOG_INFO(\"~s Length: ~p, Entries: ~s\", [Message, Length, LogString]).\t"
  },
  {
    "path": "apps/arweave/src/ar_rx4096_nif.erl",
    "content": "-module(ar_rx4096_nif).\n\n-include_lib(\"arweave/include/ar.hrl\").\n\n-on_load(init_nif/0).\n\n-export([rx4096_hash_nif/5, rx4096_info_nif/1, rx4096_init_nif/5,\n\t\trx4096_encrypt_composite_chunk_nif/9,\n\t\trx4096_decrypt_composite_chunk_nif/10,\n\t\trx4096_decrypt_composite_sub_chunk_nif/10,\n\t\trx4096_reencrypt_composite_chunk_nif/13\n]).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nrx4096_info_nif(_State) ->\n\t?LOG_ERROR(\"rx4096_info_nif\"),\n\terlang:nif_error(nif_not_loaded).\n\nrx4096_init_nif(_Key, _HashingMode, _JIT, _LargePages, _Threads) ->\n\t?LOG_ERROR(\"rx4096_init_nif\"),\n\terlang:nif_error(nif_not_loaded).\n\nrx4096_hash_nif(_State, _Data, _JIT, _LargePages, _HardwareAES) ->\n\t?LOG_ERROR(\"rx4096_hash_nif\"),\n\terlang:nif_error(nif_not_loaded).\n\nrx4096_encrypt_composite_chunk_nif(_State, _Key, _Chunk, _JIT, _LargePages, _HardwareAES,\n\t\t_RoundCount, _IterationCount, _SubChunkCount) ->\n\t?LOG_ERROR(\"rx4096_encrypt_composite_chunk_nif\"),\n\terlang:nif_error(nif_not_loaded).\n\nrx4096_decrypt_composite_chunk_nif(_State, _Data, _Chunk, _OutSize,\n\t\t_JIT, _LargePages, _HardwareAES, _RoundCount, _IterationCount, _SubChunkCount) ->\n\t?LOG_ERROR(\"rx4096_decrypt_composite_chunk_nif\"),\n\terlang:nif_error(nif_not_loaded).\n\nrx4096_decrypt_composite_sub_chunk_nif(_State, _Data, _Chunk, _OutSize,\n\t\t_JIT, _LargePages, _HardwareAES, _RoundCount, _IterationCount, _Offset) ->\n\t?LOG_ERROR(\"rx4096_decrypt_composite_sub_chunk_nif\"),\n\terlang:nif_error(nif_not_loaded).\n\nrx4096_reencrypt_composite_chunk_nif(_State,\n\t\t_DecryptKey, _EncryptKey, _Chunk,\n\t\t_JIT, _LargePages, _HardwareAES,\n\t\t_DecryptRoundCount, _EncryptRoundCount,\n\t\t_DecryptIterationCount, _EncryptIterationCount,\n\t\t_DecryptSubChunkCount, _EncryptSubChunkCount) ->\n\t?LOG_ERROR(\"rx4096_reencrypt_composite_chunk_nif\"),\n\terlang:nif_error(nif_not_loaded).\n\ninit_nif() ->\n\tPrivDir = code:priv_dir(arweave),\n\tok = erlang:load_nif(filename:join([PrivDir, \"rx4096_arweave\"]), 0).\n"
  },
  {
    "path": "apps/arweave/src/ar_rx512_nif.erl",
    "content": "-module(ar_rx512_nif).\n\n-include_lib(\"arweave/include/ar.hrl\").\n\n-on_load(init_nif/0).\n\n-export([rx512_hash_nif/5, rx512_info_nif/1, rx512_init_nif/5,\n\t\trx512_encrypt_chunk_nif/7, rx512_decrypt_chunk_nif/8,\n\t\trx512_reencrypt_chunk_nif/10\n]).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nrx512_info_nif(_State) ->\n\t?LOG_ERROR(\"rx512_info_nif\"),\n\terlang:nif_error(nif_not_loaded).\n\nrx512_init_nif(_Key, _HashingMode, _JIT, _LargePages, _Threads) ->\n\t?LOG_ERROR(\"rx512_init_nif\"),\n\terlang:nif_error(nif_not_loaded).\n\nrx512_hash_nif(_State, _Data, _JIT, _LargePages, _HardwareAES) ->\n\t?LOG_ERROR(\"rx512_hash_nif\"),\n\terlang:nif_error(nif_not_loaded).\n\nrx512_encrypt_chunk_nif(_State, _Data, _Chunk, _RoundCount, _JIT, _LargePages, _HardwareAES) ->\n\t?LOG_ERROR(\"rx512_encrypt_chunk_nif\"),\n\terlang:nif_error(nif_not_loaded).\n\nrx512_decrypt_chunk_nif(_State, _Data, _Chunk, _OutSize, _RoundCount, _JIT, _LargePages,\n\t\t_HardwareAES) ->\n\t?LOG_ERROR(\"rx512_decrypt_chunk_nif\"),\n\terlang:nif_error(nif_not_loaded).\n\nrx512_reencrypt_chunk_nif(_State, _DecryptKey, _EncryptKey, _Chunk, _ChunkSize,\n\t\t_DecryptRoundCount, _EncryptRoundCount, _JIT, _LargePages, _HardwareAES) ->\n\t?LOG_ERROR(\"rx512_reencrypt_chunk_nif\"),\n\terlang:nif_error(nif_not_loaded).\n\ninit_nif() ->\n\tPrivDir = code:priv_dir(arweave),\n\tok = erlang:load_nif(filename:join([PrivDir, \"rx512_arweave\"]), 0).\n"
  },
  {
    "path": "apps/arweave/src/ar_rxsquared_nif.erl",
    "content": "-module(ar_rxsquared_nif).\n\n-include_lib(\"arweave/include/ar.hrl\").\n\n-on_load(init_nif/0).\n\n-export([rxsquared_hash_nif/5, rxsquared_info_nif/1, rxsquared_init_nif/5,\n\t\trsp_fused_entropy_nif/10,\n\t\trsp_feistel_encrypt_nif/2,\n\t\trsp_feistel_decrypt_nif/2]).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nrxsquared_info_nif(_State) ->\n\t?LOG_ERROR(\"rxsquared_info_nif\"),\n\terlang:nif_error(nif_not_loaded).\n\nrxsquared_init_nif(_Key, _HashingMode, _JIT, _LargePages, _Threads) ->\n\t?LOG_ERROR(\"rxsquared_init_nif\"),\n\terlang:nif_error(nif_not_loaded).\n\nrxsquared_hash_nif(_State, _Data, _JIT, _LargePages, _HardwareAES) ->\n\t?LOG_ERROR(\"rxsquared_hash_nif\"),\n\terlang:nif_error(nif_not_loaded).\n\ninit_nif() ->\n\tPrivDir = code:priv_dir(arweave),\n\tok = erlang:load_nif(filename:join([PrivDir, \"rxsquared_arweave\"]), 0).\n\n%%%===================================================================\n%%% Randomx square packing\n%%%===================================================================\n\nrsp_fused_entropy_nif(\n\t_RandomxState,\n\t_ReplicaEntropySubChunkCount,\n\t_CompositePackingSubChunkSize,\n\t_LaneCount,\n\t_RxDepth,\n\t_JitEnabled,\n\t_LargePagesEnabled,\n\t_HardwareAESEnabled,\n\t_RandomxProgramCount,\n\t_Key\n) ->\n\t?LOG_ERROR(\"rsp_fused_entropy_nif\"),\n\terlang:nif_error(nif_not_loaded).\n\nrsp_feistel_encrypt_nif(_InMsg, _Key) ->\n\t?LOG_ERROR(\"rsp_feistel_encrypt_nif\"),\n\terlang:nif_error(nif_not_loaded).\n\nrsp_feistel_decrypt_nif(_InMsg, _Key) ->\n\t?LOG_ERROR(\"rsp_feistel_decrypt_nif\"),\n\terlang:nif_error(nif_not_loaded).\n"
  },
  {
    "path": "apps/arweave/src/ar_semaphore.erl",
    "content": "-module(ar_semaphore).\n-behaviour(gen_server).\n\n-export([start_link/2, acquire/2, stop/1]).\n-export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2]).\n-include_lib(\"kernel/include/logger.hrl\").\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Open a semaphore registered with Name, with the specified\n%% Capacity.\nstart_link(Name, InitCapacity) ->\n\tprometheus_gauge:new([\n\t\t{name, Name},\n\t\t{help, \"The size of the corresponding semaphore queue.\"}\n\t]),\n\tgen_server:start_link({local, Name}, ?MODULE, [InitCapacity], []).\n\n%% @doc Acquire the semaphore, willing to wait for the provided\n%% Timeout.\nacquire(Name, Timeout) ->\n\ttry\n\t\tgen_server:call(Name, acquire, Timeout)\n\tcatch\n\t\texit:{timeout, _} -> {error, timeout}\n\tend.\n\n%% @doc Close the semaphore and stop the process registered under the\n%% given name.\nstop(Name) ->\n\tgen_server:stop(Name).\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\n\ninit([InitCapacity]) when is_integer(InitCapacity) ->\n\t{ok, {InitCapacity, #{}, queue:new()}};\ninit([infinity]) ->\n\t{ok, {infinity, undefined, undefined}}.\n\nhandle_call(acquire, {FromPid, FromRef}, {Capacity, WaitingPids, Queue}) when is_integer(Capacity) ->\n\tcase maps:is_key(FromPid, WaitingPids) of\n\t\ttrue ->\n\t\t\t{reply, {error, process_already_waiting}, {Capacity, WaitingPids, Queue}};\n\t\tfalse ->\n\t\t\tcase Capacity > 0 of\n\t\t\t\ttrue ->\n\t\t\t\t\tmonitor(process, FromPid),\n\t\t\t\t\t{reply, ok, {Capacity - 1, WaitingPids#{ FromPid => {} }, Queue}};\n\t\t\t\tfalse ->\n\t\t\t\t\tQueue1 = queue:in({FromPid, FromRef}, Queue),\n\t\t\t\t\tprometheus_gauge:inc(element(2, process_info(self(), registered_name))),\n\t\t\t\t\t{noreply, {Capacity, WaitingPids, Queue1}}\n\t\t\tend\n\tend;\nhandle_call(acquire, _, {infinity, _, _} = State) ->\n\t{reply, ok, State}.\n\nhandle_cast(_, State) ->\n\t{stop, {error, handle_cast_unsupported}, State}.\n\nhandle_info({'DOWN', _,  process, Pid, _}, {Capacity, WaitingPids, Queue}) ->\n\tcase maps:take(Pid, WaitingPids) of\n\t\t{{}, WaitingPids1} ->\n\t\t\tdequeue({Capacity + 1, WaitingPids1, Queue});\n\t\terror ->\n\t\t\t{noreply, {Capacity, WaitingPids, Queue}}\n\tend.\n\nterminate(Reason, _State) ->\n\t?LOG_INFO([{module, ?MODULE},{pid, self()},{callback, terminate},{reason, Reason}]),\n\tok.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\ndequeue({Capacity, WaitingPids, Queue}) ->\n\tcase Capacity > 0 of\n\t\tfalse ->\n\t\t\t{noreply, {Capacity, WaitingPids, Queue}};\n\t\ttrue ->\n\t\t\tcase queue:out(Queue) of\n\t\t\t\t{empty, Queue} ->\n\t\t\t\t\tprometheus_gauge:set(element(2, process_info(self(), registered_name)), 0),\n\t\t\t\t\t{noreply, {Capacity, WaitingPids, Queue}};\n\t\t\t\t{{value, {FromPid, FromRef}}, NewQueue} ->\n\t\t\t\t\tmonitor(process, FromPid),\n\t\t\t\t\tgen_server:reply({FromPid, FromRef}, ok),\n\t\t\t\t\tprometheus_gauge:dec(element(2, process_info(self(), registered_name))),\n\t\t\t\t\t{noreply, {Capacity - 1, WaitingPids#{ FromPid => {} }, NewQueue}}\n\t\t\tend\n\tend.\n"
  },
  {
    "path": "apps/arweave/src/ar_serialize.erl",
    "content": "%%% @doc The module contains the serialization and deserialization utilities for the\n%%% various protocol entitities - transactions, blocks, proofs, etc\n-module(ar_serialize).\n\n-export([block_to_binary/1, binary_to_block/1, json_struct_to_block/1,\n\t\tblock_to_json_struct/1,\n\t\tblock_announcement_to_binary/1, binary_to_block_announcement/1,\n\t\tbinary_to_block_announcement_response/1, block_announcement_response_to_binary/1,\n\t\ttx_to_binary/1, binary_to_tx/1,\n\n\t\tpoa_map_to_binary/1, binary_to_poa/1,\n\t\tpoa_no_chunk_map_to_binary/1, binary_to_no_chunk_map/1,\n\t\tpoa_map_to_json_map/1, poa_no_chunk_map_to_json_map/1, json_map_to_poa_map/1,\n\n\t\tblock_index_to_binary/1, binary_to_block_index/1, encode_double_signing_proof/2,\n\t\tjson_struct_to_poa/1, poa_to_json_struct/1,\n\t\ttx_to_json_struct/1, json_struct_to_tx/1, json_struct_to_v1_tx/1,\n\t\tetf_to_wallet_chunk_response/1, wallet_list_to_json_struct/3,\n\t\twallet_to_json_struct/2, json_struct_to_wallet_list/1,\n\t\tblock_index_to_json_struct/1, json_struct_to_block_index/1,\n\t\tjsonify/1, dejsonify/1, json_decode/1, json_decode/2,\n\t\tquery_to_json_struct/1, json_struct_to_query/1,\n\t\tencode_int/2, encode_bin/2,\n\t\tencode_bin_list/3, signature_type_to_binary/1, binary_to_signature_type/1,\n\t\treward_history_to_binary/1, binary_to_reward_history/1,\n\t\tblock_time_history_to_binary/1, binary_to_block_time_history/1, parse_32b_list/1,\n\t\tnonce_limiter_update_to_binary/2, binary_to_nonce_limiter_update/2,\n\t\tnonce_limiter_update_response_to_binary/1, binary_to_nonce_limiter_update_response/1,\n\t\tpartition_to_json_struct/4,\n\t\tcandidate_to_json_struct/1, solution_to_json_struct/1, json_map_to_solution/1,\n\t\tjson_map_to_candidate/1, encode_packing/2, decode_packing/2,\n\t\tjobs_to_json_struct/1, json_struct_to_jobs/1,\n\t\tpartial_solution_response_to_json_struct/1,\n\t\tpool_cm_jobs_to_json_struct/1, json_map_to_pool_cm_jobs/1,\n\t\tfootprint_to_json_map/1, json_map_to_footprint/1,\n\t\tdata_roots_to_binary/1, binary_to_data_roots/1]).\n\n-include(\"ar.hrl\").\n-include(\"ar_consensus.hrl\").\n-include(\"ar_vdf.hrl\").\n-include(\"ar_mining.hrl\").\n-include(\"ar_pool.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Serialize the block.\nblock_to_binary(#block{ indep_hash = H, previous_block = PrevH, timestamp = TS,\n\t\tnonce = Nonce, height = Height, diff = Diff, cumulative_diff = CDiff,\n\t\tlast_retarget = LastRetarget, hash = Hash, block_size = BlockSize,\n\t\tweave_size = WeaveSize, reward_addr = Addr, tx_root = TXRoot,\n\t\twallet_list = WalletList, hash_list_merkle = HashListMerkle,\n\t\treward_pool = RewardPool, packing_2_5_threshold = Threshold,\n\t\tstrict_data_split_threshold = StrictChunkThreshold,\n\t\tusd_to_ar_rate = Rate, scheduled_usd_to_ar_rate = ScheduledRate,\n\t\tpoa = #poa{ option = Option, chunk = Chunk, data_path = DataPath,\n\t\t\t\ttx_path = TXPath }, tags = Tags, txs = TXs } = B) ->\n\tAddr2 = case Addr of unclaimed -> <<>>; _ -> Addr end,\n\t{RateDividend, RateDivisor} = case Rate of undefined -> {undefined, undefined};\n\t\t\t_ -> Rate end,\n\t{ScheduledRateDividend, ScheduledRateDivisor} =\n\t\t\tcase ScheduledRate of\n\t\t\t\tundefined ->\n\t\t\t\t\t{undefined, undefined};\n\t\t\t\t_ ->\n\t\t\t\t\tScheduledRate\n\t\t\tend,\n\tNonce2 = case B#block.height >= 
ar_fork:height_2_6() of\n\t\t\ttrue -> binary:encode_unsigned(Nonce, big); false -> Nonce end,\n\t<< H:48/binary, (encode_bin(PrevH, 8))/binary, (encode_int(TS, 8))/binary,\n\t\t\t(encode_bin(Nonce2, 16))/binary, (encode_int(Height, 8))/binary,\n\t\t\t(encode_int(Diff, 16))/binary, (encode_int(CDiff, 16))/binary,\n\t\t\t(encode_int(LastRetarget, 8))/binary, (encode_bin(Hash, 8))/binary,\n\t\t\t(encode_int(BlockSize, 16))/binary, (encode_int(WeaveSize, 16))/binary,\n\t\t\t(encode_bin(Addr2, 8))/binary, (encode_bin(TXRoot, 8))/binary,\n\t\t\t(encode_bin(WalletList, 8))/binary, (encode_bin(HashListMerkle, 8))/binary,\n\t\t\t(encode_int(RewardPool, 8))/binary, (encode_int(Threshold, 8))/binary,\n\t\t\t(encode_int(StrictChunkThreshold, 8))/binary,\n\t\t\t(encode_int(RateDividend, 8))/binary,\n\t\t\t(encode_int(RateDivisor, 8))/binary,\n\t\t\t(encode_int(ScheduledRateDividend, 8))/binary,\n\t\t\t(encode_int(ScheduledRateDivisor, 8))/binary, (encode_int(Option, 8))/binary,\n\t\t\t(encode_bin(Chunk, 24))/binary, (encode_bin(TXPath, 24))/binary,\n\t\t\t(encode_bin(DataPath, 24))/binary, (encode_bin_list(Tags, 16, 16))/binary,\n\t\t\t(encode_transactions(TXs))/binary, (encode_post_2_6_fields(B))/binary >>.\n\n%% @doc Deserialize the block.\nbinary_to_block(<< H:48/binary, PrevHSize:8, PrevH:PrevHSize/binary,\n\t\tTSSize:8, TS:(TSSize * 8),\n\t\tNonceSize:16, Nonce:NonceSize/binary,\n\t\tHeightSize:8, Height:(HeightSize * 8),\n\t\tDiffSize:16, Diff:(DiffSize * 8),\n\t\tCDiffSize:16, CDiff:(CDiffSize * 8),\n\t\tLastRetargetSize:8, LastRetarget:(LastRetargetSize * 8),\n\t\tHashSize:8, Hash:HashSize/binary,\n\t\tBlockSizeSize:16, BlockSize:(BlockSizeSize * 8),\n\t\tWeaveSizeSize:16, WeaveSize:(WeaveSizeSize * 8),\n\t\tAddrSize:8, Addr:AddrSize/binary,\n\t\tTXRootSize:8, TXRoot:TXRootSize/binary, % 0 or 32\n\t\tWalletListSize:8, WalletList:WalletListSize/binary,\n\t\tHashListMerkleSize:8, HashListMerkle:HashListMerkleSize/binary,\n\t\tRewardPoolSize:8, RewardPool:(RewardPoolSize * 8),\n\t\tPackingThresholdSize:8, Threshold:(PackingThresholdSize * 8),\n\t\tStrictChunkThresholdSize:8, StrictChunkThreshold:(StrictChunkThresholdSize * 8),\n\t\tRateDividendSize:8, RateDividend:(RateDividendSize * 8),\n\t\tRateDivisorSize:8, RateDivisor:(RateDivisorSize * 8),\n\t\tSchedRateDividendSize:8, SchedRateDividend:(SchedRateDividendSize * 8),\n\t\tSchedRateDivisorSize:8, SchedRateDivisor:(SchedRateDivisorSize * 8),\n\t\tPoAOptionSize:8, PoAOption:(PoAOptionSize * 8),\n\t\tChunkSize:24, Chunk:ChunkSize/binary,\n\t\tTXPathSize:24, TXPath:TXPathSize/binary,\n\t\tDataPathSize:24, DataPath:DataPathSize/binary,\n\t\tRest/binary >>) when NonceSize =< 512 ->\n\tThreshold2 = case PackingThresholdSize of 0 -> undefined; _ -> Threshold end,\n\tStrictChunkThreshold2 = case StrictChunkThresholdSize of 0 -> undefined;\n\t\t\t_ -> StrictChunkThreshold end,\n\tRate = case RateDivisorSize of 0 -> undefined;\n\t\t\t_ -> {RateDividend, RateDivisor} end,\n\tScheduledRate = case SchedRateDivisorSize of 0 -> undefined;\n\t\t\t_ -> {SchedRateDividend, SchedRateDivisor} end,\n\tAddr2 = case {AddrSize, Height >= ar_fork:height_2_6()} of\n\t\t\t{0, false} -> unclaimed; _ -> Addr end,\n\tB = #block{ indep_hash = H, previous_block = PrevH, timestamp = TS,\n\t\t\tnonce = Nonce, height = Height, diff = Diff, cumulative_diff = CDiff,\n\t\t\tlast_retarget = LastRetarget, hash = Hash, block_size = BlockSize,\n\t\t\tweave_size = WeaveSize, reward_addr = Addr2, tx_root = TXRoot,\n\t\t\twallet_list = WalletList, hash_list_merkle = 
HashListMerkle,\n\t\t\treward_pool = RewardPool, packing_2_5_threshold = Threshold2,\n\t\t\tstrict_data_split_threshold = StrictChunkThreshold2,\n\t\t\tusd_to_ar_rate = Rate, scheduled_usd_to_ar_rate = ScheduledRate,\n\t\t\tpoa = #poa{ option = PoAOption, chunk = Chunk, data_path = DataPath,\n\t\t\t\t\ttx_path = TXPath }},\n\tparse_block_tags_transactions(Rest, B);\nbinary_to_block(_Bin) ->\n\t{error, invalid_block_input}.\n\n%% @doc Convert a block record into a JSON struct.\nblock_to_json_struct(\n\t#block{ nonce = Nonce, previous_block = PrevHash, timestamp = TimeStamp,\n\t\t\tlast_retarget = LastRetarget, diff = Diff, height = Height, hash = Hash,\n\t\t\tindep_hash = IndepHash, txs = TXs, tx_root = TXRoot, wallet_list = WalletList,\n\t\t\treward_addr = RewardAddr, tags = Tags, reward_pool = RewardPool,\n\t\t\tweave_size = WeaveSize, block_size = BlockSize, cumulative_diff = CDiff,\n\t\t\thash_list_merkle = MR, poa = POA,\n\t\t\tprevious_cumulative_diff = PrevCDiff,\n\t\t\tmerkle_rebase_support_threshold = RebaseThreshold,\n\t\t\trecall_byte2 = RecallByte2,\n\t\t\tpacking_difficulty = PackingDifficulty,\n\t\t\tunpacked_chunk_hash = UnpackedChunkHash,\n\t\t\tunpacked_chunk2_hash = UnpackedChunk2Hash,\n\t\t\treplica_format = ReplicaFormat } = B) ->\n\t{JSONDiff, JSONCDiff} =\n\t\tcase Height >= ar_fork:height_1_8() of\n\t\t\ttrue ->\n\t\t\t\t{integer_to_binary(Diff), integer_to_binary(CDiff)};\n\t\t\tfalse ->\n\t\t\t\t{Diff, CDiff}\n\tend,\n\t{JSONRewardPool, JSONBlockSize, JSONWeaveSize} =\n\t\tcase Height >= ar_fork:height_2_4() of\n\t\t\ttrue ->\n\t\t\t\t{integer_to_binary(RewardPool), integer_to_binary(BlockSize),\n\t\t\t\t\tinteger_to_binary(WeaveSize)};\n\t\t\tfalse ->\n\t\t\t\t{RewardPool, BlockSize, WeaveSize}\n\tend,\n\tTags2 =\n\t\tcase Height >= ar_fork:height_2_5() of\n\t\t\ttrue ->\n\t\t\t\t[ar_util:encode(Tag) || Tag <- Tags];\n\t\t\tfalse ->\n\t\t\t\tTags\n\t\tend,\n\tNonce2 = case B#block.height >= ar_fork:height_2_6() of\n\t\t\ttrue -> binary:encode_unsigned(Nonce); false -> Nonce end,\n\tJSONElements =\n\t\t[{nonce, ar_util:encode(Nonce2)}, {previous_block, ar_util:encode(PrevHash)},\n\t\t\t\t{timestamp, TimeStamp}, {last_retarget, LastRetarget}, {diff, JSONDiff},\n\t\t\t\t{height, Height}, {hash, ar_util:encode(Hash)},\n\t\t\t\t{indep_hash, ar_util:encode(IndepHash)},\n\t\t\t\t{txs,\n\t\t\t\t\tlists:map(\n\t\t\t\t\t\tfun(TXID) when is_binary(TXID) ->\n\t\t\t\t\t\t\tar_util:encode(TXID);\n\t\t\t\t\t\t(TX) ->\n\t\t\t\t\t\t\tar_util:encode(TX#tx.id)\n\t\t\t\t\t\tend,\n\t\t\t\t\t\tTXs\n\t\t\t\t\t)\n\t\t\t\t}, {tx_root, ar_util:encode(TXRoot)}, {tx_tree, []},\n\t\t\t\t{wallet_list, ar_util:encode(WalletList)},\n\t\t\t\t{reward_addr,\n\t\t\t\t\tcase RewardAddr of unclaimed -> list_to_binary(\"unclaimed\");\n\t\t\t\t\t\t\t_ -> ar_util:encode(RewardAddr) end}, {tags, Tags2},\n\t\t\t\t{reward_pool, JSONRewardPool}, {weave_size, JSONWeaveSize},\n\t\t\t\t{block_size, JSONBlockSize}, {cumulative_diff, JSONCDiff},\n\t\t\t\t{hash_list_merkle, ar_util:encode(MR)}, {poa, poa_to_json_struct(POA)}],\n\tJSONElements2 =\n\t\tcase Height < ar_fork:height_1_6() of\n\t\t\ttrue ->\n\t\t\t\tKeysToDelete = [cumulative_diff, hash_list_merkle],\n\t\t\t\tdelete_keys(KeysToDelete, JSONElements);\n\t\t\tfalse ->\n\t\t\t\tJSONElements\n\t\tend,\n\tJSONElements3 =\n\t\tcase Height >= ar_fork:height_2_4() of\n\t\t\ttrue ->\n\t\t\t\tdelete_keys([tx_tree], JSONElements2);\n\t\t\tfalse ->\n\t\t\t\tJSONElements2\n\t\tend,\n\tJSONElements4 =\n\t\tcase Height >= ar_fork:height_2_5() of\n\t\t\ttrue 
->\n\t\t\t\t{RateDividend, RateDivisor} = B#block.usd_to_ar_rate,\n\t\t\t\t{ScheduledRateDividend,\n\t\t\t\t\t\tScheduledRateDivisor} = B#block.scheduled_usd_to_ar_rate,\n\t\t\t\t[\n\t\t\t\t\t{usd_to_ar_rate,\n\t\t\t\t\t\t[integer_to_binary(RateDividend), integer_to_binary(RateDivisor)]},\n\t\t\t\t\t{scheduled_usd_to_ar_rate,\n\t\t\t\t\t\t[integer_to_binary(ScheduledRateDividend),\n\t\t\t\t\t\t\tinteger_to_binary(ScheduledRateDivisor)]},\n\t\t\t\t\t{packing_2_5_threshold,\n\t\t\t\t\t\t\tinteger_to_binary(B#block.packing_2_5_threshold)},\n\t\t\t\t\t{strict_data_split_threshold,\n\t\t\t\t\t\t\tinteger_to_binary(B#block.strict_data_split_threshold)}\n\t\t\t\t\t| JSONElements3\n\t\t\t\t];\n\t\t\tfalse ->\n\t\t\t\tJSONElements3\n\t\tend,\n\tJSONElements5 =\n\t\tcase Height >= ar_fork:height_2_6() of\n\t\t\ttrue ->\n\t\t\t\tPricePerGiBMinute = B#block.price_per_gib_minute,\n\t\t\t\tScheduledPricePerGiBMinute = B#block.scheduled_price_per_gib_minute,\n\t\t\t\tDebtSupply = B#block.debt_supply,\n\t\t\t\tKryderPlusRateMultiplier = B#block.kryder_plus_rate_multiplier,\n\t\t\t\tKryderPlusRateMultiplierLatch = B#block.kryder_plus_rate_multiplier_latch,\n\t\t\t\tDenomination = B#block.denomination,\n\t\t\t\tRedenominationHeight = B#block.redenomination_height,\n\t\t\t\tDoubleSigningProof =\n\t\t\t\t\tcase B#block.double_signing_proof of\n\t\t\t\t\t\tundefined ->\n\t\t\t\t\t\t\t{[]};\n\t\t\t\t\t\t{Key, Sig1, CDiff1, PrevCDiff1, Preimage1, Sig2, CDiff2,\n\t\t\t\t\t\t\t\tPrevCDiff2, Preimage2} ->\n\t\t\t\t\t\t\t{[{pub_key, ar_util:encode(Key)}, {sig1, ar_util:encode(Sig1)},\n\t\t\t\t\t\t\t\t\t{cdiff1, integer_to_binary(CDiff1)},\n\t\t\t\t\t\t\t\t\t{prev_cdiff1, integer_to_binary(PrevCDiff1)},\n\t\t\t\t\t\t\t\t\t{preimage1, ar_util:encode(Preimage1)},\n\t\t\t\t\t\t\t\t\t{sig2, ar_util:encode(Sig2)},\n\t\t\t\t\t\t\t\t\t{cdiff2, integer_to_binary(CDiff2)},\n\t\t\t\t\t\t\t\t\t{prev_cdiff2, integer_to_binary(PrevCDiff2)},\n\t\t\t\t\t\t\t\t\t{preimage2, ar_util:encode(Preimage2)}]}\n\t\t\t\t\tend,\n\t\t\t\tJSONElements6 =\n\t\t\t\t\t[{hash_preimage, ar_util:encode(B#block.hash_preimage)},\n\t\t\t\t\t\t\t{recall_byte, integer_to_binary(B#block.recall_byte)},\n\t\t\t\t\t\t\t{reward, integer_to_binary(B#block.reward)},\n\t\t\t\t\t\t\t{previous_solution_hash,\n\t\t\t\t\t\t\t\t\tar_util:encode(B#block.previous_solution_hash)},\n\t\t\t\t\t\t\t{partition_number, B#block.partition_number},\n\t\t\t\t\t\t\t{nonce_limiter_info, nonce_limiter_info_to_json_struct(\n\t\t\t\t\t\t\t\t\tB#block.height, B#block.nonce_limiter_info)},\n\t\t\t\t\t\t\t{poa2, poa_to_json_struct(B#block.poa2)},\n\t\t\t\t\t\t\t{signature, ar_util:encode(B#block.signature)},\n\t\t\t\t\t\t\t{reward_key, ar_util:encode(element(2, B#block.reward_key))},\n\t\t\t\t\t\t\t{price_per_gib_minute, integer_to_binary(PricePerGiBMinute)},\n\t\t\t\t\t\t\t{scheduled_price_per_gib_minute,\n\t\t\t\t\t\t\t\t\tinteger_to_binary(ScheduledPricePerGiBMinute)},\n\t\t\t\t\t\t\t{reward_history_hash,\n\t\t\t\t\t\t\t\t\tar_util:encode(B#block.reward_history_hash)},\n\t\t\t\t\t\t\t{debt_supply, integer_to_binary(DebtSupply)},\n\t\t\t\t\t\t\t{kryder_plus_rate_multiplier,\n\t\t\t\t\t\t\t\t\tinteger_to_binary(KryderPlusRateMultiplier)},\n\t\t\t\t\t\t\t{kryder_plus_rate_multiplier_latch,\n\t\t\t\t\t\t\t\t\tinteger_to_binary(KryderPlusRateMultiplierLatch)},\n\t\t\t\t\t\t\t{denomination, integer_to_binary(Denomination)},\n\t\t\t\t\t\t\t{redenomination_height, RedenominationHeight},\n\t\t\t\t\t\t\t{double_signing_proof, DoubleSigningProof},\n\t\t\t\t\t\t\t{previous_cumulative_diff, 
integer_to_binary(PrevCDiff)}\n\t\t\t\t\t\t\t| JSONElements4],\n\t\t\t\tcase RecallByte2 of\n\t\t\t\t\tundefined ->\n\t\t\t\t\t\tJSONElements6;\n\t\t\t\t\t_ ->\n\t\t\t\t\t\t[{recall_byte2, integer_to_binary(RecallByte2)} | JSONElements6]\n\t\t\t\tend;\n\t\t\tfalse ->\n\t\t\t\tJSONElements4\n\t\tend,\n\tJSONElements8 =\n\t\tcase Height >= ar_fork:height_2_7() of\n\t\t\ttrue ->\n\t\t\t\tJSONElements7 = [\n\t\t\t\t\t\t{merkle_rebase_support_threshold, integer_to_binary(RebaseThreshold)},\n\t\t\t\t\t\t{chunk_hash, ar_util:encode(B#block.chunk_hash)},\n\t\t\t\t\t\t{block_time_history_hash,\n\t\t\t\t\t\t\tar_util:encode(B#block.block_time_history_hash)}\n\t\t\t\t\t\t| JSONElements5],\n\t\t\t\tcase B#block.chunk2_hash of\n\t\t\t\t\tundefined ->\n\t\t\t\t\t\tJSONElements7;\n\t\t\t\t\t_ ->\n\t\t\t\t\t\t[{chunk2_hash, ar_util:encode(B#block.chunk2_hash)} | JSONElements7]\n\t\t\t\tend;\n\t\t\tfalse ->\n\t\t\t\tJSONElements5\n\t\tend,\n\tJSONElements9 =\n\t\tcase Height >= ar_fork:height_2_8() of\n\t\t\tfalse ->\n\t\t\t\tJSONElements8;\n\t\t\ttrue ->\n\t\t\t\tcase {PackingDifficulty >= 1, RecallByte2} of\n\t\t\t\t\t{false, _} ->\n\t\t\t\t\t\t[{packing_difficulty, PackingDifficulty} | JSONElements8];\n\t\t\t\t\t{true, undefined} ->\n\t\t\t\t\t\t[{packing_difficulty, PackingDifficulty},\n\t\t\t\t\t\t\t{unpacked_chunk_hash, ar_util:encode(UnpackedChunkHash)}\n\t\t\t\t\t\t\t| JSONElements8];\n\t\t\t\t\t_ ->\n\t\t\t\t\t\t[{packing_difficulty, PackingDifficulty},\n\t\t\t\t\t\t\t{unpacked_chunk_hash, ar_util:encode(UnpackedChunkHash)},\n\t\t\t\t\t\t\t{unpacked_chunk2_hash, ar_util:encode(UnpackedChunk2Hash)}\n\t\t\t\t\t\t\t| JSONElements8]\n\t\t\t\tend\n\t\tend,\n\tJSONElements10 =\n\t\tcase Height >= ar_fork:height_2_9() of\n\t\t\tfalse ->\n\t\t\t\tJSONElements9;\n\t\t\ttrue ->\n\t\t\t\t[{replica_format, ReplicaFormat} | JSONElements9]\n\t\tend,\n\t{JSONElements10}.\n\nreward_history_to_binary(RewardHistory) ->\n\treward_history_to_binary(RewardHistory, []).\n\nreward_history_to_binary([], IOList) ->\n\tiolist_to_binary(IOList);\nreward_history_to_binary([{Addr, HashRate, Reward, Denomination} | RewardHistory], IOList) ->\n\treward_history_to_binary(RewardHistory, [Addr, ar_serialize:encode_int(HashRate, 8),\n\t\t\tar_serialize:encode_int(Reward, 8), << Denomination:24 >> | IOList]).\n\nbinary_to_reward_history(Bin) ->\n\tbinary_to_reward_history(Bin, []).\n\nbinary_to_reward_history(<< Addr:32/binary, HashRateSize:8, HashRate:(HashRateSize * 8),\n\t\tRewardSize:8, Reward:(RewardSize * 8), Denomination:24, Rest/binary >>,\n\t\tRewardHistory) ->\n\tbinary_to_reward_history(Rest, [{Addr, HashRate, Reward, Denomination} | RewardHistory]);\nbinary_to_reward_history(<<>>, RewardHistory) ->\n\t{ok, RewardHistory};\nbinary_to_reward_history(_Rest, _RewardHistory) ->\n\t{error, invalid_reward_history}.\n\nblock_time_history_to_binary(BlockTimeHistory) ->\n\tblock_time_history_to_binary(BlockTimeHistory, []).\n\nblock_time_history_to_binary([], IOList) ->\n\tiolist_to_binary(IOList);\nblock_time_history_to_binary([{BlockInterval, VDFInterval, ChunkCount} | BlockTimeHistory],\n\t\tIOList) ->\n\tblock_time_history_to_binary(BlockTimeHistory, [\n\t\t\tar_serialize:encode_int(BlockInterval, 8),\n\t\t\tar_serialize:encode_int(VDFInterval, 8),\n\t\t\tar_serialize:encode_int(ChunkCount, 8)\n\t| IOList]).\n\nbinary_to_block_time_history(Bin) ->\n\tbinary_to_block_time_history(Bin, []).\n\nbinary_to_block_time_history(<< BlockIntervalSize:8,\n\t\t\tBlockInterval:(BlockIntervalSize * 8),\n\t\t\tVDFIntervalSize:8, 
VDFInterval:(VDFIntervalSize * 8),\n\t\t\tChunkCountSize:8, ChunkCount:(ChunkCountSize * 8), Rest/binary >>,\n\t\tBlockTimeHistory) ->\n\tbinary_to_block_time_history(Rest,\n\t\t\t[{BlockInterval, VDFInterval, ChunkCount} | BlockTimeHistory]);\nbinary_to_block_time_history(<<>>, BlockTimeHistory) ->\n\t{ok, BlockTimeHistory};\nbinary_to_block_time_history(_Rest, _BlockTimeHistory) ->\n\t{error, invalid_block_time_history}.\n\n%% Note: the #nonce_limiter_update and #vdf_session records are only serialized for communication\n%% between a VDF server and VDF client. Only fields that are required for this communication are\n%% serialized.\n%% \n%% For example, the vdf_difficulty and next_vdf_difficulty fields are omitted as they are only used\n%% by nodes that compute their own VDF and never need to be shared from VDF server to VDF client.\nnonce_limiter_update_to_binary(2 = _Format, #nonce_limiter_update{\n\t\t\tsession_key = {NextSeed, Interval, NextVDFDifficulty},\n\t\tsession = Session, is_partial = IsPartial }) ->\n\t#vdf_session{ step_number = StepNumber, step_checkpoints_map = Map } = Session,\n\tCheckpoints = maps:get(StepNumber, Map, []),\n\tIsPartialBin = case IsPartial of true -> << 1:8 >>; _ -> << 0:8 >> end,\n\tCheckpointLen = length(Checkpoints),\n\t<< NextSeed:48/binary, (ar_serialize:encode_int(NextVDFDifficulty, 8))/binary,\n\t\t\tInterval:64, IsPartialBin/binary, CheckpointLen:16,\n\t\t\t(iolist_to_binary(Checkpoints))/binary, (encode_vdf_session(2, Session))/binary >>;\n\nnonce_limiter_update_to_binary(3 = _Format, #nonce_limiter_update{\n\t\t\tsession_key = {NextSeed, Interval, NextVDFDifficulty},\n\t\t\tsession = Session, is_partial = IsPartial }) ->\n\t#vdf_session{ step_checkpoints_map = Map } = Session,\n\tCheckpointsMapBin = encode_step_checkpoints_map(Map),\n\tCheckpointsMapSize = byte_size(CheckpointsMapBin),\n\tIsPartialBin = case IsPartial of true -> << 1:8 >>; _ -> << 0:8 >> end,\n\t<< NextSeed:48/binary, (ar_serialize:encode_int(NextVDFDifficulty, 8))/binary,\n\t\t\tInterval:64, IsPartialBin/binary, CheckpointsMapSize:24,\n\t\t\tCheckpointsMapBin:CheckpointsMapSize/binary,\n\t\t\t(encode_vdf_session(2, Session))/binary >>;\n\nnonce_limiter_update_to_binary(4 = _Format, #nonce_limiter_update{\n\t\t\tsession_key = {NextSeed, Interval, NextVDFDifficulty},\n\t\t\tsession = Session, is_partial = IsPartial }) ->\n\t#vdf_session{ step_checkpoints_map = Map } = Session,\n\tCheckpointsMapBin = encode_step_checkpoints_map(Map),\n\tCheckpointsMapSize = byte_size(CheckpointsMapBin),\n\tIsPartialBin = case IsPartial of true -> << 1:8 >>; _ -> << 0:8 >> end,\n\t<< NextSeed:48/binary, (ar_serialize:encode_int(NextVDFDifficulty, 8))/binary,\n\t\t\tInterval:64, IsPartialBin/binary, CheckpointsMapSize:24,\n\t\t\tCheckpointsMapBin:CheckpointsMapSize/binary,\n\t\t\t(encode_vdf_session(4, Session))/binary >>.\n\nencode_step_checkpoints_map(Map) ->\n\tencode_step_checkpoints_map(maps:keys(Map), Map, <<>>).\n\nencode_step_checkpoints_map([], _Map, Bin) ->\n\tBin;\nencode_step_checkpoints_map([Key | Keys], Map, Bin) ->\n\tCheckpoints = maps:get(Key, Map),\n\tCheckpointLen = length(Checkpoints),\n\tencode_step_checkpoints_map(Keys, Map,\n\t\t<< Key:64, CheckpointLen:16, (iolist_to_binary(Checkpoints))/binary, Bin/binary >>).\n\nencode_vdf_session(2 = _Format, #vdf_session{ step_number = StepNumber, seed = Seed, steps = Steps,\n\t\tprev_session_key = PrevSessionKey, upper_bound = UpperBound,\n\t\tnext_upper_bound = NextUpperBound }) ->\n\tStepsLen = length(Steps),\n\t<< StepNumber:64, 
Seed:48/binary, (encode_int(UpperBound, 8))/binary,\n\t\t\t(encode_int(NextUpperBound, 8))/binary, StepsLen:16,\n\t\t\t(iolist_to_binary(Steps))/binary,\n\t\t\t(encode_session_key(2, PrevSessionKey))/binary >>;\n\nencode_vdf_session(4 = _Format, #vdf_session{\n\t\tstep_number = StepNumber, seed = Seed, steps = Steps,\n\t\tprev_session_key = PrevSessionKey,\n\t\tupper_bound = UpperBound, next_upper_bound = NextUpperBound,\n\t\tvdf_difficulty = VDFDifficulty }) ->\n\tStepsLen = length(Steps),\n\t<< StepNumber:64, Seed:48/binary, (encode_int(UpperBound, 8))/binary,\n\t\t\t(encode_int(NextUpperBound, 8))/binary, StepsLen:16,\n\t\t\t(iolist_to_binary(Steps))/binary,\n\t\t\t(encode_int(VDFDifficulty, 8))/binary,\n\t\t\t(encode_session_key(2, PrevSessionKey))/binary >>.\n\nencode_session_key(undefined) ->\n\t<<>>;\nencode_session_key({NextSeed, Interval, NextDifficulty}) ->\n\t<< NextSeed:48/binary, (ar_serialize:encode_int(NextDifficulty, 8))/binary, Interval:64 >>.\n\nencode_session_key(2 = _Format, SessionKey) ->\n\tencode_session_key(SessionKey).\n\ndecode_session_key(<<>>) ->\n\tundefined;\ndecode_session_key(<<\n\t\tNextSeed:48/binary,\n\t\tNextVDFDifficultySize:8, NextVDFDifficulty:(NextVDFDifficultySize * 8),\n\t\tInterval:64 >>) ->\n\t{NextSeed, Interval, NextVDFDifficulty};\ndecode_session_key(_) ->\n\terror.\n\nbinary_to_nonce_limiter_update(2, % Format\n\t\t\t<< NextSeed:48/binary,\n\t\t\tNextVDFDifficultySize:8, NextVDFDifficulty:(NextVDFDifficultySize * 8),\n\t\t\tInterval:64, IsPartial:8,\n\t\t\tCheckpointLen:16, Checkpoints:(CheckpointLen * 32)/binary,\n\t\t\tStepNumber:64, Seed:48/binary, UpperBoundSize:8, UpperBound:(UpperBoundSize * 8),\n\t\t\tNextUpperBoundSize:8, NextUpperBound:(NextUpperBoundSize * 8),\n\t\t\tStepsLen:16, Steps:(StepsLen * 32)/binary,\n\t\t\tPrevSessionKeyBin/binary >>)\n\t\twhen UpperBoundSize > 0, StepsLen > 0, CheckpointLen == ?VDF_CHECKPOINT_COUNT_IN_STEP ->\n\tNextUpperBound2 = case NextUpperBoundSize of 0 -> undefined; _ -> NextUpperBound end,\n\tUpdate = #nonce_limiter_update{ session_key = {NextSeed, Interval, NextVDFDifficulty},\n\t\t\tis_partial = case IsPartial of 0 -> false; _ -> true end,\n\t\t\tsession = Session = #vdf_session{ step_number = StepNumber, seed = Seed,\n\t\t\t\t\tstep_checkpoints_map = #{ StepNumber => parse_32b_list(Checkpoints) },\n\t\t\t\t\tupper_bound = UpperBound, next_upper_bound = NextUpperBound2,\n\t\t\t\t\tsteps = parse_32b_list(Steps) } },\n\tcase decode_session_key(PrevSessionKeyBin) of\n\t\tundefined ->\n\t\t\t{ok, Update};\n\t\terror ->\n\t\t\t{error, invalid1};\n\t\tSessionKey ->\n\t\t\tSession2 = Session#vdf_session{ prev_session_key = SessionKey },\n\t\t\t{ok, Update#nonce_limiter_update{ session = Session2 }}\n\tend;\nbinary_to_nonce_limiter_update(2, _Bin) ->\n\t{error, invalid2};\n\nbinary_to_nonce_limiter_update(3, % Format = 3.\n\t\t\t<< NextSeed:48/binary,\n\t\t\tNextVDFDifficultySize:8, NextVDFDifficulty:(NextVDFDifficultySize * 8),\n\t\t\tInterval:64, IsPartial:8,\n\t\t\tCheckpointsMapSize:24, CheckpointsMapBin:CheckpointsMapSize/binary,\n\t\t\tStepNumber:64, Seed:48/binary, UpperBoundSize:8, UpperBound:(UpperBoundSize * 8),\n\t\t\tNextUpperBoundSize:8, NextUpperBound:(NextUpperBoundSize * 8),\n\t\t\tStepsLen:16, Steps:(StepsLen * 32)/binary,\n\t\t\tPrevSessionKeyBin/binary >>)\n\t\twhen UpperBoundSize > 0, StepsLen > 0 ->\n\tNextUpperBound2 = case NextUpperBoundSize of 0 -> undefined; _ -> NextUpperBound end,\n\tcase decode_step_checkpoints_map(CheckpointsMapBin, #{}) of\n\t\t{error, _} = Error 
->\n\t\t\tError;\n\t\t{ok, StepCheckpointsMap} ->\n\t\t\tUpdate = #nonce_limiter_update{\n\t\t\t\t\tsession_key = {NextSeed, Interval, NextVDFDifficulty},\n\t\t\t\t\tis_partial = case IsPartial of 0 -> false; _ -> true end,\n\t\t\t\t\tsession = Session = #vdf_session{ step_number = StepNumber, seed = Seed,\n\t\t\t\t\t\t\tupper_bound = UpperBound, next_upper_bound = NextUpperBound2,\n\t\t\t\t\t\t\tsteps = parse_32b_list(Steps),\n\t\t\t\t\t\t\tstep_checkpoints_map = StepCheckpointsMap } },\n\t\t\tcase decode_session_key(PrevSessionKeyBin) of\n\t\t\t\tundefined ->\n\t\t\t\t\t{ok, Update};\n\t\t\t\terror ->\n\t\t\t\t\t{error, invalid1};\n\t\t\t\tPrevSessionKey ->\n\t\t\t\t\tSession2 = Session#vdf_session{ prev_session_key = PrevSessionKey },\n\t\t\t\t\t{ok, Update#nonce_limiter_update{ session = Session2 }}\n\t\t\tend\n\tend;\nbinary_to_nonce_limiter_update(3, _Bin) ->\n\t{error, invalid2};\n\nbinary_to_nonce_limiter_update(4, % Format = 4.\n\t\t\t<< NextSeed:48/binary,\n\t\t\tNextVDFDifficultySize:8, NextVDFDifficulty:(NextVDFDifficultySize * 8),\n\t\t\tInterval:64, IsPartial:8,\n\t\t\tCheckpointsMapSize:24, CheckpointsMapBin:CheckpointsMapSize/binary,\n\t\t\tStepNumber:64, Seed:48/binary, UpperBoundSize:8, UpperBound:(UpperBoundSize * 8),\n\t\t\tNextUpperBoundSize:8, NextUpperBound:(NextUpperBoundSize * 8),\n\t\t\tStepsLen:16, Steps:(StepsLen * 32)/binary,\n\t\t\tVDFDifficultySize:8, VDFDifficulty:(VDFDifficultySize * 8),\n\t\t\tPrevSessionKeyBin/binary >>)\n\t\twhen UpperBoundSize > 0, StepsLen > 0 ->\n\tNextUpperBound2 = case NextUpperBoundSize of 0 -> undefined; _ -> NextUpperBound end,\n\tcase decode_step_checkpoints_map(CheckpointsMapBin, #{}) of\n\t\t{error, _} = Error ->\n\t\t\tError;\n\t\t{ok, StepCheckpointsMap} ->\n\t\t\tUpdate = #nonce_limiter_update{\n\t\t\t\t\tsession_key = {NextSeed, Interval, NextVDFDifficulty},\n\t\t\t\t\tis_partial = case IsPartial of 0 -> false; _ -> true end,\n\t\t\t\t\tsession = Session = #vdf_session{ step_number = StepNumber, seed = Seed,\n\t\t\t\t\t\t\tupper_bound = UpperBound, next_upper_bound = NextUpperBound2,\n\t\t\t\t\t\t\tvdf_difficulty = VDFDifficulty,\n\t\t\t\t\t\t\tnext_vdf_difficulty = NextVDFDifficulty,\n\t\t\t\t\t\t\tsteps = parse_32b_list(Steps),\n\t\t\t\t\t\t\tstep_checkpoints_map = StepCheckpointsMap } },\n\t\t\tcase decode_session_key(PrevSessionKeyBin) of\n\t\t\t\tundefined ->\n\t\t\t\t\t{ok, Update};\n\t\t\t\terror ->\n\t\t\t\t\t{error, invalid1};\n\t\t\t\tPrevSessionKey ->\n\t\t\t\t\tSession2 = Session#vdf_session{ prev_session_key = PrevSessionKey },\n\t\t\t\t\t{ok, Update#nonce_limiter_update{ session = Session2 }}\n\t\t\tend\n\tend;\nbinary_to_nonce_limiter_update(4, _Bin) ->\n\t{error, invalid2};\n\nbinary_to_nonce_limiter_update(_, _Bin) ->\n\t{error, invalid_format}.\n\ndecode_step_checkpoints_map(<<>>, Map) ->\n\t{ok, Map};\ndecode_step_checkpoints_map(<< StepNumber:64,\n\t\tCheckpointLen:16, Checkpoints:(CheckpointLen * 32)/binary, Rest/binary >>, Map)\n\t\t\twhen CheckpointLen == ?VDF_CHECKPOINT_COUNT_IN_STEP ->\n\tdecode_step_checkpoints_map(Rest, maps:put(StepNumber, parse_32b_list(Checkpoints), Map));\ndecode_step_checkpoints_map(_Bin, _Map) ->\n\t{error, invalid_checkpoints_map}.\n\nparse_32b_list(<<>>) ->\n\t[];\nparse_32b_list(<< El:32/binary, Rest/binary >>) ->\n\t[El | parse_32b_list(Rest)].\n\nnonce_limiter_update_response_to_binary(#nonce_limiter_update_response{\n\t\tsession_found = SessionFound, step_number = StepNumber, postpone = Postpone,\n\t\tformat = Format }) ->\n\tSessionFoundBin = case SessionFound of 
false -> << 0:8 >>; _ -> << 1:8 >> end,\n\t<< SessionFoundBin/binary, (encode_int(StepNumber, 8))/binary, Postpone:8, Format:8 >>.\n\nbinary_to_nonce_limiter_update_response(<< SessionFoundBin:8, StepNumberSize:8,\n\t\tStepNumber:(StepNumberSize * 8) >>) ->\n\tbinary_to_nonce_limiter_update_response(\n\t\tSessionFoundBin, StepNumberSize, StepNumber, 0, 1);\nbinary_to_nonce_limiter_update_response(<< SessionFoundBin:8, StepNumberSize:8,\n\t\tStepNumber:(StepNumberSize * 8), Postpone:8 >>) ->\n\tbinary_to_nonce_limiter_update_response(\n\t\tSessionFoundBin, StepNumberSize, StepNumber, Postpone, 1);\nbinary_to_nonce_limiter_update_response(<< SessionFoundBin:8, StepNumberSize:8,\n\t\tStepNumber:(StepNumberSize * 8), Postpone:8, Format:8 >>) ->\n\tbinary_to_nonce_limiter_update_response(\n\t\tSessionFoundBin, StepNumberSize, StepNumber, Postpone, Format);\nbinary_to_nonce_limiter_update_response(_Bin) ->\n\t{error, invalid2}.\n\nbinary_to_nonce_limiter_update_response(\n\tSessionFoundBin, StepNumberSize, StepNumber, Postpone, Format) \n\t\twhen SessionFoundBin == 0; SessionFoundBin == 1 ->\n\tSessionFound = case SessionFoundBin of 0 -> false; 1 -> true end,\n\tStepNumber2 = case StepNumberSize of 0 -> undefined; _ -> StepNumber end,\n\t{ok, #nonce_limiter_update_response{ session_found = SessionFound,\n\t\t\tstep_number = StepNumber2, postpone = Postpone, format = Format }};\nbinary_to_nonce_limiter_update_response(\n\t\t_SessionFoundBin, _StepNumberSize, _StepNumber, _Postpone, _Format) ->\n\t{error, invalid1}.\n\nencode_double_signing_proof(undefined, _Height) ->\n\t<< 0:8 >>;\nencode_double_signing_proof(Proof, Height) ->\n\t{Key, Sig1, CDiff1, PrevCDiff1, Preimage1,\n\t\t\tSig2, CDiff2, PrevCDiff2, Preimage2} = Proof,\n\tcase Height >= ar_fork:height_2_9() of\n\t\tfalse ->\n\t\t\t<< 1:8, Key:512/binary, Sig1:512/binary,\n\t\t\t\t(ar_serialize:encode_int(CDiff1, 16))/binary,\n\t\t\t\t(ar_serialize:encode_int(PrevCDiff1, 16))/binary, Preimage1:64/binary,\n\t\t\t\tSig2:512/binary, (ar_serialize:encode_int(CDiff2, 16))/binary,\n\t\t\t\t(ar_serialize:encode_int(PrevCDiff2, 16))/binary, Preimage2:64/binary >>;\n\t\ttrue ->\n\t\t\t<< 1:8, (ar_serialize:encode_bin(Key, 16))/binary,\n\t\t\t\t(ar_serialize:encode_bin(Sig1, 16))/binary,\n\t\t\t\t(ar_serialize:encode_int(CDiff1, 16))/binary,\n\t\t\t\t(ar_serialize:encode_int(PrevCDiff1, 16))/binary, Preimage1:64/binary,\n\t\t\t\t(ar_serialize:encode_bin(Sig2, 16))/binary,\n\t\t\t\t(ar_serialize:encode_int(CDiff2, 16))/binary,\n\t\t\t\t(ar_serialize:encode_int(PrevCDiff2, 16))/binary, Preimage2:64/binary >>\n\tend.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nencode_post_2_6_fields(#block{ height = Height, hash_preimage = HashPreimage,\n\t\t\trecall_byte = RecallByte, reward = Reward,\n\t\t\tprevious_solution_hash = PreviousSolutionHash,\n\t\t\tpartition_number = PartitionNumber,\n\t\t\tsignature = Sig, nonce_limiter_info = NonceLimiterInfo,\n\t\t\tpoa2 = #poa{ chunk = Chunk, data_path = DataPath, tx_path = TXPath },\n\t\t\trecall_byte2 = RecallByte2, price_per_gib_minute = PricePerGiBMinute,\n\t\t\tscheduled_price_per_gib_minute = ScheduledPricePerGiBMinute,\n\t\t\treward_history_hash = RewardHistoryHash, debt_supply = DebtSupply,\n\t\t\tkryder_plus_rate_multiplier = KryderPlusRateMultiplier,\n\t\t\tkryder_plus_rate_multiplier_latch = KryderPlusRateMultiplierLatch,\n\t\t\tdenomination = Denomination, redenomination_height = 
RedenominationHeight,\n\t\t\tdouble_signing_proof = DoubleSigningProof,\n\t\t\tprevious_cumulative_diff = PrevCDiff } = B) ->\n\tRewardKey = case B#block.reward_key of undefined -> <<>>; {_Type, Key} -> Key end,\n\tcase Height >= ar_fork:height_2_6() of\n\t\tfalse ->\n\t\t\t<<>>;\n\t\ttrue ->\n\t\t\t<< (encode_bin(HashPreimage, 8))/binary, (encode_int(RecallByte, 16))/binary,\n\t\t\t\t(encode_int(Reward, 8))/binary, (encode_bin(Sig, 16))/binary,\n\t\t\t\t(encode_int(RecallByte2, 16))/binary,\n\t\t\t\t(encode_bin(PreviousSolutionHash, 8))/binary, PartitionNumber:256,\n\t\t\t\t(encode_nonce_limiter_info(NonceLimiterInfo))/binary,\n\t\t\t\t(encode_bin(Chunk, 24))/binary, (encode_bin(RewardKey, 16))/binary,\n\t\t\t\t(encode_bin(TXPath, 24))/binary, (encode_bin(DataPath, 24))/binary,\n\t\t\t\t(encode_int(PricePerGiBMinute, 8))/binary,\n\t\t\t\t(encode_int(ScheduledPricePerGiBMinute, 8))/binary,\n\t\t\t\tRewardHistoryHash:32/binary, (encode_int(DebtSupply, 8))/binary,\n\t\t\t\tKryderPlusRateMultiplier:24, KryderPlusRateMultiplierLatch:8,\n\t\t\t\tDenomination:24, (encode_int(RedenominationHeight, 8))/binary,\n\t\t\t\t(encode_int(PrevCDiff, 16))/binary,\n\t\t\t\t(encode_double_signing_proof(DoubleSigningProof, Height))/binary,\n\t\t\t\t(encode_post_2_7_fields(B))/binary >>\n\tend.\n\nencode_post_2_7_fields(#block{ height = Height,\n\t\tmerkle_rebase_support_threshold = Threshold, chunk_hash = ChunkHash,\n\t\tchunk2_hash = Chunk2Hash,\n\t\tblock_time_history_hash = BlockTimeHistoryHash,\n\t\tnonce_limiter_info = #nonce_limiter_info{ vdf_difficulty = VDFDifficulty,\n\t\t\t\tnext_vdf_difficulty = NextVDFDifficulty } } = B) ->\n\tcase Height >= ar_fork:height_2_7() of\n\t\ttrue ->\n\t\t\t<< (encode_int(Threshold, 16))/binary, ChunkHash:32/binary,\n\t\t\t\t\t(encode_bin(Chunk2Hash, 8))/binary,\n\t\t\t\t\tBlockTimeHistoryHash:32/binary,\n\t\t\t\t\t(encode_int(VDFDifficulty, 8))/binary,\n\t\t\t\t\t(encode_int(NextVDFDifficulty, 8))/binary,\n\t\t\t\t\t(encode_post_2_8_fields(B))/binary >>;\n\t\tfalse ->\n\t\t\t<<>>\n\tend.\n\nencode_post_2_8_fields(#block{ height = Height,\n\t\tpacking_difficulty = PackingDifficulty,\n\t\tunpacked_chunk_hash = UnpackedChunkHash, unpacked_chunk2_hash = UnpackedChunk2Hash,\n\t\tpoa = #poa{ unpacked_chunk = UnpackedChunk },\n\t\tpoa2 = #poa{ unpacked_chunk = UnpackedChunk2 }} = B) ->\n\tcase Height >= ar_fork:height_2_8() of\n\t\tfalse ->\n\t\t\t<<>>;\n\t\ttrue ->\n\t\t\t<< PackingDifficulty:8,\n\t\t\t\t(ar_serialize:encode_bin(UnpackedChunkHash, 8))/binary,\n\t\t\t\t(ar_serialize:encode_bin(UnpackedChunk2Hash, 8))/binary,\n\t\t\t\t(ar_serialize:encode_bin(UnpackedChunk, 24))/binary,\n\t\t\t\t(ar_serialize:encode_bin(UnpackedChunk2, 24))/binary,\n\t\t\t\t(encode_post_2_9_fields(B))/binary >>\n\tend.\n\nencode_post_2_9_fields(#block{ height = Height, replica_format = ReplicaFormat }) ->\n\tcase Height >= ar_fork:height_2_9() of\n\t\tfalse ->\n\t\t\t<<>>;\n\t\ttrue ->\n\t\t\t<< ReplicaFormat:8 >>\n\tend.\n\nencode_nonce_limiter_info(#nonce_limiter_info{ output = Output, global_step_number = N,\n\t\tseed = Seed, next_seed = NextSeed, partition_upper_bound = PartitionUpperBound,\n\t\tnext_partition_upper_bound = NextPartitionUpperBound, prev_output = PrevOutput,\n\t\tlast_step_checkpoints = Checkpoints, steps = Steps }) ->\n\tCheckpointsLen = length(Checkpoints),\n\tStepsLen = length(Steps),\n\t<< Output:32/binary, N:64, Seed:48/binary, NextSeed:48/binary,\n\t\t\t(encode_bin(PrevOutput, 8))/binary,\n\t\t\tPartitionUpperBound:256, 
NextPartitionUpperBound:256,\n\t\t\tCheckpointsLen:16, (iolist_to_binary(Checkpoints))/binary,\n\t\t\tStepsLen:16, (iolist_to_binary(Steps))/binary >>.\n\nencode_int(undefined, SizeBits) ->\n\t<< 0:SizeBits >>;\nencode_int(N, SizeBits) ->\n\tBin = binary:encode_unsigned(N, big),\n\t<< (byte_size(Bin)):SizeBits, Bin/binary >>.\n\nencode_bin(undefined, SizeBits) ->\n\t<< 0:SizeBits >>;\nencode_bin(Bin, SizeBits) ->\n\t<< (byte_size(Bin)):SizeBits, Bin/binary >>.\n\nencode_bin_list(Bins, LenBits, ElemSizeBits) ->\n\tencode_bin_list(Bins, [], 0, LenBits, ElemSizeBits).\n\nencode_bin_list([], Encoded, N, LenBits, _ElemSizeBits) ->\n\t<< N:LenBits, (iolist_to_binary(Encoded))/binary >>;\nencode_bin_list([Bin | Bins], Encoded, N, LenBits, ElemSizeBits) ->\n\tElem = encode_bin(Bin, ElemSizeBits),\n\tencode_bin_list(Bins, [Elem | Encoded], N + 1, LenBits, ElemSizeBits).\n\nencode_transactions(TXs) ->\n\tencode_transactions(TXs, [], 0).\n\nencode_transactions([], Encoded, N) ->\n\t<< N:16, (iolist_to_binary(Encoded))/binary >>;\nencode_transactions([<< TXID:32/binary >> | TXs], Encoded, N) ->\n\tencode_transactions(TXs, [<< 32:24, TXID:32/binary >> | Encoded], N + 1);\nencode_transactions([TX | TXs], Encoded, N) ->\n\tBin = encode_tx(TX),\n\tTXSize = byte_size(Bin),\n\tencode_transactions(TXs, [<< TXSize:24, Bin/binary >> | Encoded], N + 1).\n\nencode_tx(#tx{ format = Format, id = TXID, last_tx = LastTX, owner = Owner,\n\t\ttags = Tags, target = Target, quantity = Quantity, data = Data,\n\t\tdata_size = DataSize, data_root = DataRoot, signature = Signature,\n\t\treward = Reward, signature_type = SignatureType } = TX) ->\n\tOwner2 =\n\t\tcase SignatureType of\n\t\t\t?ECDSA_KEY_TYPE ->\n\t\t\t\t<<>>;\n\t\t\t_ ->\n\t\t\t\tOwner\n\t\tend,\n\t<< Format:8, TXID:32/binary,\n\t\t\t(encode_bin(LastTX, 8))/binary, (encode_bin(Owner2, 16))/binary,\n\t\t\t(encode_bin(Target, 8))/binary, (encode_int(Quantity, 8))/binary,\n\t\t\t(encode_int(DataSize, 16))/binary, (encode_bin(DataRoot, 8))/binary,\n\t\t\t(encode_bin(Signature, 16))/binary, (encode_int(Reward, 8))/binary,\n\t\t\t(encode_bin(Data, 24))/binary, (encode_tx_tags(Tags))/binary,\n\t\t\t(may_be_encode_tx_denomination(TX))/binary >>.\n\nencode_tx_tags(Tags) ->\n\tencode_tx_tags(Tags, [], 0).\n\nencode_tx_tags([], Encoded, N) ->\n\t<< N:16, (iolist_to_binary(Encoded))/binary >>;\nencode_tx_tags([{Name, Value} | Tags], Encoded, N) ->\n\tTagNameSize = byte_size(Name),\n\tTagValueSize = byte_size(Value),\n\tTag = << TagNameSize:16, TagValueSize:16, Name/binary, Value/binary >>,\n\tencode_tx_tags(Tags, [Tag | Encoded], N + 1).\n\nmay_be_encode_tx_denomination(#tx{ denomination = 0 }) ->\n\t<<>>;\nmay_be_encode_tx_denomination(#tx{ denomination = Denomination }) ->\n\t<< Denomination:24 >>.\n\nparse_block_tags_transactions(Bin, B) ->\n\tcase parse_block_tags(Bin) of\n\t\t{error, Reason} ->\n\t\t\t{error, Reason};\n\t\t{ok, Tags, Rest} ->\n\t\t\tparse_block_transactions(Rest, B#block{ tags = Tags })\n\tend.\n\nparse_block_transactions(Bin, B) ->\n\tcase {parse_block_transactions(Bin), B#block.height < ar_fork:height_2_6()} of\n\t\t{{error, Reason}, _} ->\n\t\t\t{error, Reason};\n\t\t{{ok, TXs, <<>>}, true} ->\n\t\t\t{ok, B#block{ txs = TXs }};\n\t\t{{ok, TXs, Rest}, false} ->\n\t\t\tparse_block_post_2_6_fields(B#block{ txs = TXs }, Rest);\n\t\t_ ->\n\t\t\t{error, invalid_input1}\n\tend.\n\nparse_block_post_2_6_fields(B, << HashPreimageSize:8, HashPreimage:HashPreimageSize/binary,\n\t\tRecallByteSize:16, RecallByte:(RecallByteSize * 8), 
RewardSize:8,\n\t\tReward:(RewardSize * 8), SigSize:16, Sig:SigSize/binary,\n\t\tRecallByte2Size:16, RecallByte2:(RecallByte2Size * 8), PreviousSolutionHashSize:8,\n\t\tPreviousSolutionHash:PreviousSolutionHashSize/binary,\n\t\tPartitionNumber:256, NonceLimiterOutput:32/binary,\n\t\tGlobalStepNumber:64, Seed:48/binary, NextSeed:48/binary,\n\t\tPrevOutputSize:8, PrevOutput:PrevOutputSize/binary,\n\t\tPartitionUpperBound:256, NextPartitionUpperBound:256,\n\t\tLastCheckpointsLen:16, LastCheckpoints:(LastCheckpointsLen * 32)/binary,\n\t\tStepsLen:16, Steps:(StepsLen * 32)/binary,\n\t\tChunkSize:24, Chunk:ChunkSize/binary, RewardKeySize:16,\n\t\tRewardKey:RewardKeySize/binary, TXPathSize:24, TXPath:TXPathSize/binary,\n\t\tDataPathSize:24, DataPath:DataPathSize/binary,\n\t\tPricePerGiBMinuteSize:8, PricePerGiBMinute:(PricePerGiBMinuteSize * 8),\n\t\tScheduledPricePerGiBMinuteSize:8,\n\t\tScheduledPricePerGiBMinute:(ScheduledPricePerGiBMinuteSize * 8),\n\t\tRewardHistoryHash:32/binary, DebtSupplySize:8, DebtSupply:(DebtSupplySize * 8),\n\t\tKryderPlusRateMultiplier:24, KryderPlusRateMultiplierLatch:8,\n\t\tDenomination:24, RedenominationHeightSize:8,\n\t\tRedenominationHeight:(RedenominationHeightSize * 8),\n\t\tPrevCDiffSize:16, PrevCDiff:(PrevCDiffSize * 8),\n\t\tRest/binary >>) ->\n\t%% The only block where recall_byte may be undefined is the genesis block\n\t%% of a new weave.\n\tRecallByte_2 = case RecallByteSize of 0 -> undefined; _ -> RecallByte end,\n\tHeight = B#block.height,\n\tNonce = binary:decode_unsigned(B#block.nonce, big),\n\tNonceLimiterInfo = #nonce_limiter_info{ output = NonceLimiterOutput,\n\t\t\tprev_output = PrevOutput, global_step_number = GlobalStepNumber,\n\t\t\tseed = Seed, next_seed = NextSeed,\n\t\t\tpartition_upper_bound = PartitionUpperBound,\n\t\t\tnext_partition_upper_bound = NextPartitionUpperBound,\n\t\t\tlast_step_checkpoints = parse_checkpoints(LastCheckpoints, Height),\n\t\t\tsteps = parse_checkpoints(Steps, Height) },\n\tRecallByte2_2 = case RecallByte2Size of 0 -> undefined; _ -> RecallByte2 end,\n\tSigType =\n\t\tcase {RewardKeySize, Height >= ar_fork:height_2_9()} of\n\t\t\t{?ECDSA_PUB_KEY_SIZE, true} ->\n\t\t\t\t?ECDSA_KEY_TYPE;\n\t\t\t_ ->\n\t\t\t\t?RSA_KEY_TYPE\n\t\tend,\n\tB2 = B#block{ hash_preimage = HashPreimage, recall_byte = RecallByte_2,\n\t\t\treward = Reward, nonce = Nonce, recall_byte2 = RecallByte2_2,\n\t\t\tprevious_solution_hash = PreviousSolutionHash,\n\t\t\tsignature = Sig, partition_number = PartitionNumber,\n\t\t\treward_key = {SigType, RewardKey},\n\t\t\tnonce_limiter_info = NonceLimiterInfo,\n\t\t\tpoa2 = #poa{ chunk = Chunk, data_path = DataPath, tx_path = TXPath },\n\t\t\tprice_per_gib_minute = PricePerGiBMinute,\n\t\t\tscheduled_price_per_gib_minute = ScheduledPricePerGiBMinute,\n\t\t\treward_history_hash = RewardHistoryHash, debt_supply = DebtSupply,\n\t\t\tkryder_plus_rate_multiplier = KryderPlusRateMultiplier,\n\t\t\tkryder_plus_rate_multiplier_latch = KryderPlusRateMultiplierLatch,\n\t\t\tdenomination = Denomination, redenomination_height = RedenominationHeight,\n\t\t\tprevious_cumulative_diff = PrevCDiff },\n\tparse_double_signing_proof(Rest, B2);\nparse_block_post_2_6_fields(_B, _Rest) ->\n\t{error, invalid_input4}.\n\nparse_checkpoints(<<>>, 0) ->\n\t[];\nparse_checkpoints(_, 0) ->\n\t{error, invalid_checkpoints};\nparse_checkpoints(<< Checkpoint:32/binary >>, _Height) ->\n\t%% The block must have at least one checkpoint (the last nonce limiter output).\n\t[Checkpoint];\nparse_checkpoints(<< Checkpoint:32/binary, Rest/binary 
>>, Height) ->\n\t[Checkpoint | parse_checkpoints(Rest, Height)].\n\nparse_block_tags(<< TagsLen:16, Rest/binary >>) when TagsLen =< 2048 ->\n\tparse_block_tags(TagsLen, Rest, [], 0);\nparse_block_tags(_Bin) ->\n\t{error, invalid_tags_input}.\n\nparse_block_tags(0, Rest, Tags, _TotalSize) ->\n\t{ok, Tags, Rest};\nparse_block_tags(N, << TagSize:16, Tag:TagSize/binary, Rest/binary >>, Tags, TotalSize)\n\t\twhen TotalSize + TagSize =< 2048 ->\n\tparse_block_tags(N - 1, << Rest/binary >>, [Tag | Tags], TotalSize + TagSize);\nparse_block_tags(_N, _Bin, _Tags, _TotalSize) ->\n\t{error, invalid_tag_input}.\n\nparse_block_transactions(<< Count:16, Rest/binary >>) when Count =< 1000 ->\n\tparse_block_transactions(Count, Rest, []);\nparse_block_transactions(_Bin) ->\n\t{error, invalid_transactions_input}.\n\nparse_block_transactions(0, Rest, TXs) ->\n\t{ok, TXs, Rest};\nparse_block_transactions(N, << Size:24, Bin:Size/binary, Rest/binary >>, TXs)\n\t\twhen N > 0 ->\n\tcase parse_tx(Bin) of\n\t\t{error, Reason} ->\n\t\t\t{error, Reason};\n\t\t{ok, TX} ->\n\t\t\tparse_block_transactions(N - 1, Rest, [TX | TXs])\n\tend;\nparse_block_transactions(_N, _Rest, _TXs) ->\n\t{error, invalid_transactions2_input}.\n\nparse_double_signing_proof(<< 0:8, Rest/binary >>, B) ->\n\tparse_post_2_7_fields(Rest, B);\nparse_double_signing_proof(Bin, #block{ height = Height } = B) ->\n\tcase {Bin, Height >= ar_fork:height_2_9()} of\n\t\t{<< 1:8, Key:512/binary, Sig1:512/binary,\n\t\t\t\tCDiff1Size:16, CDiff1:(CDiff1Size * 8),\n\t\t\t\tPrevCDiff1Size:16, PrevCDiff1:(PrevCDiff1Size * 8),\n\t\t\t\tPreimage1:64/binary, Sig2:512/binary,\n\t\t\t\tCDiff2Size:16, CDiff2:(CDiff2Size * 8),\n\t\t\t\tPrevCDiff2Size:16, PrevCDiff2:(PrevCDiff2Size * 8),\n\t\t\t\tPreimage2:64/binary, Rest/binary >>, false} ->\n\t\t\tProof = {Key, Sig1, CDiff1, PrevCDiff1, Preimage1,\n\t\t\t\t\tSig2, CDiff2, PrevCDiff2, Preimage2},\n\t\t\tB2 = B#block{ double_signing_proof = Proof },\n\t\t\tparse_post_2_7_fields(Rest, B2);\n\t\t{_Bin, false} ->\n\t\t\t{error, invalid_double_signing_proof_input};\n\t\t{<< 1:8, KeySize:16, Key:KeySize/binary, Sig1Size:16, Sig1:Sig1Size/binary,\n\t\t\t\tCDiff1Size:16, CDiff1:(CDiff1Size * 8),\n\t\t\t\tPrevCDiff1Size:16, PrevCDiff1:(PrevCDiff1Size * 8),\n\t\t\t\tPreimage1:64/binary, Sig2Size:16, Sig2:Sig2Size/binary,\n\t\t\t\tCDiff2Size:16, CDiff2:(CDiff2Size * 8),\n\t\t\t\tPrevCDiff2Size:16, PrevCDiff2:(PrevCDiff2Size * 8),\n\t\t\t\tPreimage2:64/binary, Rest/binary >>, true}\n\t\t\t\t\twhen (KeySize == ?RSA_BLOCK_SIG_SIZE andalso\n\t\t\t\t\t\t\tSig1Size == ?RSA_BLOCK_SIG_SIZE andalso\n\t\t\t\t\t\t\tSig2Size == ?RSA_BLOCK_SIG_SIZE) orelse\n\t\t\t\t\t\t(KeySize == ?ECDSA_PUB_KEY_SIZE andalso\n\t\t\t\t\t\t\tSig1Size == ?ECDSA_SIG_SIZE andalso\n\t\t\t\t\t\t\tSig2Size == ?ECDSA_SIG_SIZE) ->\n\t\t\tProof = {Key, Sig1, CDiff1, PrevCDiff1, Preimage1,\n\t\t\t\t\tSig2, CDiff2, PrevCDiff2, Preimage2},\n\t\t\tB2 = B#block{ double_signing_proof = Proof },\n\t\t\tparse_post_2_7_fields(Rest, B2);\n\t\t{_Bin, true} ->\n\t\t\t{error, invalid_double_signing_proof_input2}\nend.\n\nparse_post_2_7_fields(Rest, #block{ height = Height } = B) ->\n\tcase {Rest, Height >= ar_fork:height_2_7()} of\n\t\t{<<>>, false} ->\n\t\t\t{ok, B};\n\t\t{<< ThresholdSize:16, Threshold:(ThresholdSize*8), ChunkHash:32/binary,\n\t\t\t\tChunk2HashSize:8, Chunk2Hash:Chunk2HashSize/binary,\n\t\t\t\tBlockTimeHistoryHash:32/binary,\n\t\t\t\tVDFDifficultySize:8, VDFDifficulty:(VDFDifficultySize * 8),\n\t\t\t\tNextVDFDifficultySize:8, 
NextVDFDifficulty:(NextVDFDifficultySize * 8),\n\t\t\t\tRest2/binary >>, true} ->\n\t\t\tChunk2Hash2 = case Chunk2HashSize of 0 -> undefined; _ -> Chunk2Hash end,\n\t\t\tB2 = B#block{ merkle_rebase_support_threshold = Threshold,\n\t\t\t\t\tchunk_hash = ChunkHash, chunk2_hash = Chunk2Hash2,\n\t\t\t\t\tblock_time_history_hash = BlockTimeHistoryHash,\n\t\t\t\t\tnonce_limiter_info = (B#block.nonce_limiter_info)#nonce_limiter_info{\n\t\t\t\t\t\t\tvdf_difficulty = VDFDifficulty,\n\t\t\t\t\t\t\tnext_vdf_difficulty = NextVDFDifficulty } },\n\t\t\tparse_post_2_8_fields(Rest2, B2);\n\t\t_ ->\n\t\t\t{error, invalid_merkle_rebase_support_threshold}\n\tend.\n\nparse_post_2_8_fields(Rest, #block{ height = Height, poa = PoA, poa2 = PoA2 } = B) ->\n\tcase {Rest, Height >= ar_fork:height_2_8()} of\n\t\t{<<>>, false} ->\n\t\t\t{ok, B};\n\t\t{<< PackingDifficulty:8, UnpackedChunkHashSize:8,\n\t\t\t\tUnpackedChunkHash:UnpackedChunkHashSize/binary,\n\t\t\t\tUnpackedChunk2HashSize:8,\n\t\t\t\tUnpackedChunk2Hash:UnpackedChunk2HashSize/binary,\n\t\t\t\tUnpackedChunkSize:24,\n\t\t\t\tUnpackedChunk:UnpackedChunkSize/binary,\n\t\t\t\tUnpackedChunk2Size:24,\n\t\t\t\tUnpackedChunk2:UnpackedChunk2Size/binary, Rest2/binary >>, true} ->\n\t\t\tUnpackedChunkHash_2 =\n\t\t\t\tcase UnpackedChunkHash of\n\t\t\t\t\t<<>> -> undefined;\n\t\t\t\t\t_ -> UnpackedChunkHash\n\t\t\t\tend,\n\t\t\tUnpackedChunk2Hash_2 =\n\t\t\t\tcase UnpackedChunk2Hash of\n\t\t\t\t\t<<>> -> undefined;\n\t\t\t\t\t_ -> UnpackedChunk2Hash\n\t\t\t\tend,\n\t\t\tparse_post_2_9_fields(Rest2, B#block{ packing_difficulty = PackingDifficulty,\n\t\t\t\t\tunpacked_chunk_hash = UnpackedChunkHash_2,\n\t\t\t\t\tunpacked_chunk2_hash = UnpackedChunk2Hash_2,\n\t\t\t\t\tpoa = PoA#poa{ unpacked_chunk = UnpackedChunk },\n\t\t\t\t\tpoa2 = PoA2#poa{ unpacked_chunk = UnpackedChunk2 } });\n\t\t_ ->\n\t\t\t{error, invalid_packing_difficulty}\n\tend.\n\nparse_post_2_9_fields(Rest, #block{ height = Height } = B) ->\n\tcase {Rest, Height >= ar_fork:height_2_9()} of\n\t\t{<<>>, false} ->\n\t\t\t{ok, B};\n\t\t{<< ReplicaFormat:8 >>, true} ->\n\t\t\t{ok, B#block{ replica_format = ReplicaFormat }};\n\t\t_ ->\n\t\t\t{error, invalid_replica_format}\n\tend.\n\nparse_tx(<< TXID:32/binary >>) ->\n\t{ok, TXID};\nparse_tx(<< Format:8, TXID:32/binary,\n\t\tLastTXSize:8, LastTX:LastTXSize/binary,\n\t\tOwnerSize:16, Owner:OwnerSize/binary,\n\t\tTargetSize:8, Target:TargetSize/binary,\n\t\tQuantitySize:8, Quantity:(QuantitySize * 8),\n\t\tDataSizeSize:16, DataSize:(DataSizeSize * 8),\n\t\tDataRootSize:8, DataRoot:DataRootSize/binary,\n\t\tSignatureSize:16, Signature:SignatureSize/binary,\n\t\tRewardSize:8, Reward:(RewardSize * 8),\n\t\tDataEncodingSize:24, Data:DataEncodingSize/binary,\n\t\tRest/binary >>) when Format == 1 orelse Format == 2 ->\n\tcase parse_tx_tags(Rest) of\n\t\t{error, Reason} ->\n\t\t\t{error, Reason};\n\t\t{ok, Tags, Rest2} ->\n\t\t\tSigType = set_sig_type_from_pub_key(Owner, Signature),\n\t\t\tcase parse_tx_denomination(Rest2) of\n\t\t\t\t{ok, Denomination} ->\n\t\t\t\t\tDataSize2 = case Format of 1 -> byte_size(Data); _ -> DataSize end,\n\t\t\t\t\tTX = #tx{ format = Format, id = TXID, last_tx = LastTX, owner = Owner,\n\t\t\t\t\t\t\ttarget = Target, quantity = Quantity, data_size = DataSize2,\n\t\t\t\t\t\t\tdata_root = DataRoot, signature = Signature, reward = Reward,\n\t\t\t\t\t\t\tdata = Data, tags = Tags, denomination = Denomination,\n\t\t\t\t\t\t\tsignature_type = SigType },\n\t\t\t\t\tcase SigType of\n\t\t\t\t\t\t{?ECDSA_SIGN_ALG, secp256k1} 
->\n\t\t\t\t\t\t\tDataSegment = ar_tx:generate_signature_data_segment(TX),\n\t\t\t\t\t\t\tOwner2 = ar_wallet:recover_key(DataSegment, Signature, SigType),\n\t\t\t\t\t\t\t{ok, TX#tx{ owner = Owner2,\n\t\t\t\t\t\t\t\t\towner_address = ar_wallet:to_address(Owner2, SigType) }};\n\t\t\t\t\t\t{?RSA_SIGN_ALG, 65537} ->\n\t\t\t\t\t\t\t{ok, TX#tx{ owner_address = ar_wallet:to_address(Owner, SigType) }}\n\t\t\t\t\tend;\n\t\t\t\t{error, Reason} ->\n\t\t\t\t\t{error, Reason}\n\t\t\tend\n\tend;\nparse_tx(_Bin) ->\n\t{error, invalid_tx_input}.\n\nparse_tx_tags(<< TagsLen:16, Rest/binary >>) when TagsLen =< 2048 ->\n\tparse_tx_tags(TagsLen, Rest, []);\nparse_tx_tags(_Bin) ->\n\t{error, invalid_tx_tags_input}.\n\nparse_tx_tags(0, Rest, Tags) ->\n\t{ok, Tags, Rest};\nparse_tx_tags(N, << TagNameSize:16, TagValueSize:16,\n\t\tTagName:TagNameSize/binary, TagValue:TagValueSize/binary, Rest/binary >>, Tags)\n\t\twhen N > 0 ->\n\tparse_tx_tags(N - 1, Rest, [{TagName, TagValue} | Tags]);\nparse_tx_tags(_N, _Bin, _Tags) ->\n\t{error, invalid_tx_tag_input}.\n\nparse_tx_denomination(<<>>) ->\n\t{ok, 0};\nparse_tx_denomination(<< Denomination:24 >>) when Denomination > 0 ->\n\t{ok, Denomination};\nparse_tx_denomination(_Rest) ->\n\t{error, invalid_denomination}.\n\ntx_to_binary(TX) ->\n\tBin = encode_tx(TX),\n\tTXSize = byte_size(Bin),\n\t<< TXSize:24, Bin/binary >>.\n\nbinary_to_tx(<< Size:24, Bin:Size/binary >>) ->\n\tparse_tx(Bin);\nbinary_to_tx(_Rest) ->\n\t{error, invalid_input7}.\n\nblock_announcement_to_binary(#block_announcement{ indep_hash = H,\n\t\tprevious_block = PrevH, tx_prefixes = L, recall_byte = O, recall_byte2 = O2,\n\t\tsolution_hash = SolutionH }) ->\n\t<< H:48/binary, PrevH:48/binary, (encode_int(O, 8))/binary,\n\t\t\t(encode_tx_prefixes(L))/binary, (case O2 of undefined -> <<>>;\n\t\t\t\t\t_ -> encode_int(O2, 8) end)/binary,\n\t\t\t(encode_solution_hash(SolutionH))/binary >>.\n\nencode_tx_prefixes(L) ->\n\t<< (length(L)):16, (encode_tx_prefixes(L, []))/binary >>.\n\nencode_tx_prefixes([], Encoded) ->\n\tiolist_to_binary(Encoded);\nencode_tx_prefixes([Prefix | Prefixes], Encoded) ->\n\tencode_tx_prefixes(Prefixes, [<< Prefix:8/binary >> | Encoded]).\n\nencode_solution_hash(undefined) ->\n\t<<>>;\nencode_solution_hash(H) ->\n\t<< H:32/binary >>.\n\nbinary_to_block_announcement(<< H:48/binary, PrevH:48/binary,\n\t\tRecallByteSize:8, RecallByte:(RecallByteSize * 8), N:16, Rest/binary >>) ->\n\tRecallByte2 = case RecallByteSize of 0 -> undefined; _ -> RecallByte end,\n\tcase parse_tx_prefixes_and_recall_byte2_and_solution_hash(N, Rest) of\n\t\t{error, Reason} ->\n\t\t\t{error, Reason};\n\t\t{ok, {Prefixes, RecallByte3, SolutionH}} ->\n\t\t\t{ok, #block_announcement{ indep_hash = H, previous_block = PrevH,\n\t\t\t\t\trecall_byte = RecallByte2, tx_prefixes = Prefixes,\n\t\t\t\t\trecall_byte2 = RecallByte3, solution_hash = SolutionH }}\n\tend;\nbinary_to_block_announcement(_Rest) ->\n\t{error, invalid_input}.\n\nparse_tx_prefixes_and_recall_byte2_and_solution_hash(N, Bin) ->\n\tparse_tx_prefixes_and_recall_byte2_and_solution_hash(N, Bin, []).\n\nparse_tx_prefixes_and_recall_byte2_and_solution_hash(0, Rest, Prefixes) ->\n\tcase Rest of\n\t\t<<>> ->\n\t\t\t{ok, {Prefixes, undefined, undefined}};\n\t\t<< RecallByte2Size:8, RecallByte2:(RecallByte2Size * 8), SolutionH:32/binary >> ->\n\t\t\t{ok, {Prefixes, RecallByte2, SolutionH}};\n\t\t<< SolutionH:32/binary >> ->\n\t\t\t{ok, {Prefixes, undefined, SolutionH}};\n\t\t_ ->\n\t\t\t{error, 
invalid_recall_byte2_and_solution_hash_input}\n\tend;\nparse_tx_prefixes_and_recall_byte2_and_solution_hash(N, << Prefix:8/binary, Rest/binary >>,\n\t\tPrefixes) when N > 0 ->\n\tparse_tx_prefixes_and_recall_byte2_and_solution_hash(N - 1, Rest, [Prefix | Prefixes]);\nparse_tx_prefixes_and_recall_byte2_and_solution_hash(_N, _Rest, _Prefixes) ->\n\t{error, invalid_tx_prefixes_input}.\n\nbinary_to_block_announcement_response(<< ChunkMissing:8, Rest/binary >>)\n\t\twhen ChunkMissing == 1 orelse ChunkMissing == 0 ->\n\tcase parse_missing_tx_indices_and_missing_chunk2(Rest) of\n\t\t{ok, {Indices, MissingChunk2}} ->\n\t\t\t{ok, #block_announcement_response{ missing_chunk = ar_util:int_to_bool(ChunkMissing),\n\t\t\t\t\tmissing_tx_indices = Indices, missing_chunk2 = MissingChunk2 }};\n\t\t{error, Reason} ->\n\t\t\t{error, Reason}\n\tend;\nbinary_to_block_announcement_response(_Rest) ->\n\t{error, invalid_block_announcement_response_input}.\n\nparse_missing_tx_indices_and_missing_chunk2(Bin) ->\n\tparse_missing_tx_indices_and_missing_chunk2(Bin, []).\n\nparse_missing_tx_indices_and_missing_chunk2(<<>>, Indices) ->\n\t{ok, {Indices, undefined}};\nparse_missing_tx_indices_and_missing_chunk2(<< MissingChunk2:8 >>, Indices) ->\n\tcase MissingChunk2 of\n\t\t0 ->\n\t\t\t{ok, {Indices, false}};\n\t\t1 ->\n\t\t\t{ok, {Indices, true}};\n\t\t_ ->\n\t\t\t{error, invalid_missing_chunk2_input}\n\tend;\nparse_missing_tx_indices_and_missing_chunk2(<< Index:16, Rest/binary >>, Indices) ->\n\tparse_missing_tx_indices_and_missing_chunk2(Rest, [Index | Indices]);\nparse_missing_tx_indices_and_missing_chunk2(_Rest, _Indices) ->\n\t{error, invalid_missing_tx_indices_input}.\n\nblock_announcement_response_to_binary(#block_announcement_response{\n\t\tmissing_tx_indices = L, missing_chunk = Reply, missing_chunk2 = Reply2 }) ->\n\t<< (ar_util:bool_to_int(Reply)):8, (encode_missing_tx_indices(L))/binary,\n\t\t\t(case Reply2 of undefined -> <<>>; false -> << 0:8 >>;\n\t\t\t\t\ttrue -> << 1:8 >> end)/binary >>.\n\nencode_missing_tx_indices(L) ->\n\tencode_missing_tx_indices(L, []).\n\nencode_missing_tx_indices([], Encoded) ->\n\tiolist_to_binary(Encoded);\nencode_missing_tx_indices([Index | Indices], Encoded) ->\n\tencode_missing_tx_indices(Indices, [<< Index:16 >> | Encoded]).\n\npoa_map_to_binary(#{ chunk := Chunk, tx_path := TXPath, data_path := DataPath,\n\t\tpacking := Packing }) ->\n\tBinaryPacking = packing_to_binary(Packing),\n\t<< (encode_bin(Chunk, 24))/binary, (encode_bin(TXPath, 24))/binary,\n\t\t\t(encode_bin(DataPath, 24))/binary, (encode_bin(BinaryPacking, 8))/binary >>.\n\npoa_no_chunk_map_to_binary(#{ tx_path := TXPath, data_path := DataPath }) ->\n\t<< (encode_bin(TXPath, 24))/binary, (encode_bin(DataPath, 24))/binary >>.\n\nbinary_to_poa(<< ChunkSize:24, Chunk:ChunkSize/binary,\n\t\tTXPathSize:24, TXPath:TXPathSize/binary,\n\t\tDataPathSize:24, DataPath:DataPathSize/binary,\n\t\tPackingSize:8, PackingBinary:PackingSize/binary >>) ->\n\tPacking = binary_to_packing(PackingBinary, error),\n\tcase Packing of\n\t\terror ->\n\t\t\t{error, invalid_packing};\n\t\t_ ->\n\t\t\t{ok, #{ chunk => Chunk, data_path => DataPath, tx_path => TXPath,\n\t\t\t\t\tpacking => Packing }}\n\tend;\nbinary_to_poa(_Rest) ->\n\t{error, invalid_input}.\n\nbinary_to_no_chunk_map(<< TXPathSize:24, TXPath:TXPathSize/binary,\n\t\tDataPathSize:24, DataPath:DataPathSize/binary >>) ->\n\t{ok, #{ data_path => DataPath, tx_path => TXPath }};\nbinary_to_no_chunk_map(_Rest) ->\n\t{error, invalid_input}.\n\nblock_index_to_binary(BI) 
->\n\tblock_index_to_binary(BI, []).\n\nblock_index_to_binary([], Encoded) ->\n\tiolist_to_binary(Encoded);\nblock_index_to_binary([{BH, WeaveSize, TXRoot} | BI], Encoded) ->\n\tblock_index_to_binary(BI,\n\t\t\t[<< BH:48/binary, (encode_int(WeaveSize, 16))/binary,\n\t\t\t\t(encode_bin(TXRoot, 8))/binary >> | Encoded]).\n\nbinary_to_block_index(Bin) ->\n\tbinary_to_block_index(Bin, []).\n\nbinary_to_block_index(<<>>, BI) ->\n\t{ok, BI};\nbinary_to_block_index(<< BH:48/binary, WeaveSizeSize:16, WeaveSize:(WeaveSizeSize * 8),\n\t\tTXRootSize:8, TXRoot:TXRootSize/binary, Rest/binary >>, BI) ->\n\tbinary_to_block_index(Rest, [{BH, WeaveSize, TXRoot} | BI]);\nbinary_to_block_index(_Rest, _BI) ->\n\t{error, invalid_input}.\n\ndata_roots_to_binary({TXRoot, BlockSize, Entries}) when is_binary(TXRoot) ->\n\tEncodedEntries = lists:map(\n\t\tfun({DataRoot, TXSize, TXStartOffset, TXPath}) ->\n\t\t\t<< DataRoot:32/binary,\n\t\t\t\t(encode_int(TXSize, 8))/binary,\n\t\t\t\t(encode_int(TXStartOffset, 8))/binary,\n                (encode_bin(TXPath, 24))/binary >>\n\t\tend,\n\t\tEntries),\n\t<< (encode_bin(TXRoot, 8))/binary,\n\t\t(encode_int(BlockSize, 16))/binary,\n\t\t(length(Entries)):32,\n\t\t(iolist_to_binary(EncodedEntries))/binary >>.\n\n%% @doc Decode data_roots_to_binary/1 payload.\nbinary_to_data_roots(<< TXRootSize:8, TXRoot:TXRootSize/binary,\n\t\tBlockSizeSize:16, BlockSize:(BlockSizeSize*8),\n\t\tCount:32, Rest/binary >>) when TXRootSize == 0; TXRootSize == 32; Count =< ?BLOCK_TX_COUNT_LIMIT ->\n\tcase catch binary_to_data_root_entries(Count, Rest, []) of\n\t\t{ok, Entries, <<>>} ->\n\t\t\t{ok, {TXRoot, BlockSize, lists:reverse(Entries)}};\n\t\t{ok, _Entries, _Tail} ->\n\t\t\t{error, invalid_input3};\n\t\t{'EXIT', _} ->\n\t\t\t{error, exception};\n\t\tError ->\n\t\t\tError\n\tend;\nbinary_to_data_roots(_Other) ->\n\t{error, invalid_input1}.\n\nbinary_to_data_root_entries(0, Bin, Acc) ->\n\t{ok, Acc, Bin};\nbinary_to_data_root_entries(N, << DataRoot:32/binary,\n\t\tTXSizeSize:8, TXSize:(TXSizeSize*8),\n\t\tTXStartSize:8, TXStartOffset:(TXStartSize*8),\n        TXPathSize:24, TXPath:TXPathSize/binary, Rest/binary >>, Acc) when N > 0 ->\n\tbinary_to_data_root_entries(N - 1, Rest,\n\t\t[{DataRoot, TXSize, TXStartOffset, TXPath} | Acc]);\nbinary_to_data_root_entries(_N, _Bin, _Acc) ->\n\t{error, invalid_input2}.\n\n%% @doc Take a JSON struct and produce JSON string.\njsonify(JSONStruct) ->\n\tiolist_to_binary(jiffy:encode(JSONStruct)).\n\n%% @doc Decode JSON string into a JSON struct.\n%% @deprecated In favor of json_decode/1\ndejsonify(JSON) ->\n\tcase json_decode(JSON) of\n\t\t{ok, V} -> V;\n\t\t{error, Reason} -> throw({error, Reason})\n\tend.\n\njson_decode(JSON) ->\n\tjson_decode(JSON, []).\n\njson_decode(JSON, JiffyOpts) ->\n\tcase catch jiffy:decode(JSON, JiffyOpts) of\n\t\t{'EXIT', {Reason, _Stacktrace}} ->\n\t\t\t{error, Reason};\n\t\tDecodedJSON ->\n\t\t\t{ok, DecodedJSON}\n\tend.\n\ndelete_keys([], Proplist) ->\n\tProplist;\ndelete_keys([Key | Keys], Proplist) ->\n\tdelete_keys(\n\t\tKeys,\n\t\tlists:keydelete(Key, 1, Proplist)\n\t).\n\n%% @doc Convert parsed JSON blocks fields from a HTTP request into a block.\njson_struct_to_block(JSONBlock) when is_binary(JSONBlock) ->\n\tjson_struct_to_block(dejsonify(JSONBlock));\njson_struct_to_block({BlockStruct}) ->\n\tHeight = find_value(<<\"height\">>, BlockStruct),\n\ttrue = is_integer(Height) andalso Height < ar_fork:height_2_6(),\n\tFork_2_5 = ar_fork:height_2_5(),\n\tTXIDs = find_value(<<\"txs\">>, BlockStruct),\n\tWalletList = 
find_value(<<\"wallet_list\">>, BlockStruct),\n\tHashList = find_value(<<\"hash_list\">>, BlockStruct),\n\tTagsValue = find_value(<<\"tags\">>, BlockStruct),\n\tTags =\n\t\tcase Height >= Fork_2_5 of\n\t\t\ttrue ->\n\t\t\t\t[ar_util:decode(Tag) || Tag <- TagsValue];\n\t\t\tfalse ->\n\t\t\t\ttrue = (byte_size(list_to_binary(TagsValue)) =< 2048),\n\t\t\t\tTagsValue\n\t\tend,\n\tFork_1_8 = ar_fork:height_1_8(),\n\tFork_1_6 = ar_fork:height_1_6(),\n\tCDiff =\n\t\tcase find_value(<<\"cumulative_diff\">>, BlockStruct) of\n\t\t\t_ when Height < Fork_1_6 -> 0;\n\t\t\tundefined -> 0; % In case it's an invalid block (in the pre-fork format).\n\t\t\tBinaryCDiff when Height >= Fork_1_8 -> binary_to_integer(BinaryCDiff);\n\t\t\tCD -> CD\n\t\tend,\n\tDiff =\n\t\tcase find_value(<<\"diff\">>, BlockStruct) of\n\t\t\tBinaryDiff when Height >= Fork_1_8 -> binary_to_integer(BinaryDiff);\n\t\t\tD -> D\n\t\tend,\n\tMR =\n\t\tcase find_value(<<\"hash_list_merkle\">>, BlockStruct) of\n\t\t\t_ when Height < Fork_1_6 -> <<>>;\n\t\t\tundefined -> <<>>; % In case it's an invalid block (in the pre-fork format).\n\t\t\tR -> ar_util:decode(R)\n\t\tend,\n\tRewardAddr =\n\t\tcase find_value(<<\"reward_addr\">>, BlockStruct) of\n\t\t\t<<\"unclaimed\">> -> unclaimed;\n\t\t\tAddrBinary -> AddrBinary\n\t\tend,\n\tRewardAddr2 =\n\t\tcase RewardAddr of\n\t\t\tunclaimed ->\n\t\t\t\tunclaimed;\n\t\t\t_ ->\n\t\t\t\tar_wallet:base64_address_with_optional_checksum_to_decoded_address(RewardAddr)\n\t\tend,\n\t{RewardPool, BlockSize, WeaveSize} =\n\t\tcase Height >= ar_fork:height_2_4() of\n\t\t\ttrue ->\n\t\t\t\t{\n\t\t\t\t\tbinary_to_integer(find_value(<<\"reward_pool\">>, BlockStruct)),\n\t\t\t\t\tbinary_to_integer(find_value(<<\"block_size\">>, BlockStruct)),\n\t\t\t\t\tbinary_to_integer(find_value(<<\"weave_size\">>, BlockStruct))\n\t\t\t\t};\n\t\t\tfalse ->\n\t\t\t\t{\n\t\t\t\t\tfind_value(<<\"reward_pool\">>, BlockStruct),\n\t\t\t\t\tfind_value(<<\"block_size\">>, BlockStruct),\n\t\t\t\t\tfind_value(<<\"weave_size\">>, BlockStruct)\n\t\t\t\t}\n\t\tend,\n\t{Rate, ScheduledRate, Packing_2_5_Threshold, StrictDataSplitThreshold} =\n\t\tcase Height >= Fork_2_5 of\n\t\t\ttrue ->\n\t\t\t\t[RateDividendBinary, RateDivisorBinary] =\n\t\t\t\t\tfind_value(<<\"usd_to_ar_rate\">>, BlockStruct),\n\t\t\t\t[ScheduledRateDividendBinary, ScheduledRateDivisorBinary] =\n\t\t\t\t\tfind_value(<<\"scheduled_usd_to_ar_rate\">>, BlockStruct),\n\t\t\t\t{{binary_to_integer(RateDividendBinary),\n\t\t\t\t\t\tbinary_to_integer(RateDivisorBinary)},\n\t\t\t\t\t{binary_to_integer(ScheduledRateDividendBinary),\n\t\t\t\t\t\tbinary_to_integer(ScheduledRateDivisorBinary)},\n\t\t\t\t\t\t\tbinary_to_integer(find_value(<<\"packing_2_5_threshold\">>,\n\t\t\t\t\t\t\t\tBlockStruct)),\n\t\t\t\t\t\t\tbinary_to_integer(find_value(<<\"strict_data_split_threshold\">>,\n\t\t\t\t\t\t\t\tBlockStruct))};\n\t\t\tfalse ->\n\t\t\t\t{undefined, undefined, undefined, undefined}\n\t\tend,\n\tTimestamp = find_value(<<\"timestamp\">>, BlockStruct),\n\ttrue = is_integer(Timestamp),\n\tLastRetarget = find_value(<<\"last_retarget\">>, BlockStruct),\n\ttrue = is_integer(LastRetarget),\n\tDecodedTXIDs = [ar_util:decode(TXID) || TXID <- TXIDs],\n\t[] = [TXID || TXID <- DecodedTXIDs, byte_size(TXID) /= 32],\n\t#block{\n\t\tnonce = ar_util:decode(find_value(<<\"nonce\">>, BlockStruct)),\n\t\tprevious_block = ar_util:decode(find_value(<<\"previous_block\">>, BlockStruct)),\n\t\ttimestamp = Timestamp,\n\t\tlast_retarget = LastRetarget,\n\t\tdiff = Diff,\n\t\theight = Height,\n\t\thash = 
ar_util:decode(find_value(<<\"hash\">>, BlockStruct)),\n\t\tindep_hash = ar_util:decode(find_value(<<\"indep_hash\">>, BlockStruct)),\n\t\ttxs = DecodedTXIDs,\n\t\thash_list =\n\t\t\tcase HashList of\n\t\t\t\tundefined -> unset;\n\t\t\t\t_\t\t  -> [ar_util:decode(Hash) || Hash <- HashList]\n\t\t\tend,\n\t\twallet_list = ar_util:decode(WalletList),\n\t\treward_addr = RewardAddr2,\n\t\ttags = Tags,\n\t\treward_pool = RewardPool,\n\t\tweave_size = WeaveSize,\n\t\tblock_size = BlockSize,\n\t\tcumulative_diff = CDiff,\n\t\thash_list_merkle = MR,\n\t\ttx_root =\n\t\t\tcase find_value(<<\"tx_root\">>, BlockStruct) of\n\t\t\t\tundefined -> <<>>;\n\t\t\t\tRoot -> ar_util:decode(Root)\n\t\t\tend,\n\t\tpoa =\n\t\t\tcase find_value(<<\"poa\">>, BlockStruct) of\n\t\t\t\tundefined -> #poa{};\n\t\t\t\tPOAStruct -> json_struct_to_poa(POAStruct)\n\t\t\tend,\n\t\tusd_to_ar_rate = Rate,\n\t\tscheduled_usd_to_ar_rate = ScheduledRate,\n\t\tpacking_2_5_threshold = Packing_2_5_Threshold,\n\t\tstrict_data_split_threshold = StrictDataSplitThreshold\n\t}.\n\n%% @doc Convert a transaction record into a JSON struct.\ntx_to_json_struct(\n\t#tx{\n\t\tid = ID,\n\t\tformat = Format,\n\t\tlast_tx = Last,\n\t\towner = Owner,\n\t\ttags = Tags,\n\t\ttarget = Target,\n\t\tquantity = Quantity,\n\t\tdata = Data,\n\t\treward = Reward,\n\t\tsignature = Sig,\n\t\tsignature_type = SigType,\n\t\tdata_size = DataSize,\n\t\tdata_root = DataRoot,\n\t\tdenomination = Denomination\n\t}) ->\n\tOwner2 =\n\t\tcase SigType of\n\t\t\t?ECDSA_KEY_TYPE ->\n\t\t\t\t<<>>;\n\t\t\t_ ->\n\t\t\t\tOwner\n\t\tend,\n\tFields = [\n\t\t{format,\n\t\t\tcase Format of\n\t\t\t\tundefined ->\n\t\t\t\t\t1;\n\t\t\t\t_ ->\n\t\t\t\t\tFormat\n\t\t\tend},\n\t\t{id, ar_util:encode(ID)},\n\t\t{last_tx, ar_util:encode(Last)},\n\t\t{owner, ar_util:encode(Owner2)},\n\t\t{tags,\n\t\t\tlists:map(\n\t\t\t\tfun({Name, Value}) ->\n\t\t\t\t\t{\n\t\t\t\t\t\t[\n\t\t\t\t\t\t\t{name, ar_util:encode(Name)},\n\t\t\t\t\t\t\t{value, ar_util:encode(Value)}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\tend,\n\t\t\t\tTags\n\t\t\t)\n\t\t},\n\t\t{target, ar_util:encode(Target)},\n\t\t{quantity, integer_to_binary(Quantity)},\n\t\t{data, ar_util:encode(Data)},\n\t\t{data_size, integer_to_binary(DataSize)},\n\t\t{data_tree, []},\n\t\t{data_root, ar_util:encode(DataRoot)},\n\t\t{reward, integer_to_binary(Reward)},\n\t\t{signature, ar_util:encode(Sig)}\n\t],\n\tFields2 =\n\t\tcase Denomination > 0 of\n\t\t\ttrue ->\n\t\t\t\tFields ++ [{denomination, integer_to_binary(Denomination)}];\n\t\t\tfalse ->\n\t\t\t\tFields\n\t\tend,\n\t{Fields2}.\n\npoa_to_json_struct(POA) ->\n\tFields = [\n\t\t{option, integer_to_binary(POA#poa.option)},\n\t\t{tx_path, ar_util:encode(POA#poa.tx_path)},\n\t\t{data_path, ar_util:encode(POA#poa.data_path)},\n\t\t{chunk, ar_util:encode(POA#poa.chunk)}\n\t],\n\tFields2 =\n\t\tcase POA#poa.unpacked_chunk of\n\t\t\t<<>> ->\n\t\t\t\tFields;\n\t\t\tUnpackedChunk ->\n\t\t\t\tFields ++ [{unpacked_chunk, ar_util:encode(UnpackedChunk)}]\n\t\tend,\n\t{Fields2}.\n\nnonce_limiter_info_to_json_struct(Height,\n\t\t#nonce_limiter_info{ output = Output, global_step_number = N,\n\t\tseed = Seed, next_seed = NextSeed, partition_upper_bound = ZoneUpperBound,\n\t\tnext_partition_upper_bound = NextZoneUpperBound, last_step_checkpoints = Checkpoints,\n\t\tsteps = Steps, prev_output = PrevOutput,\n\t\tvdf_difficulty = VDFDifficulty, next_vdf_difficulty = NextVDFDifficulty }) ->\n\tFields = [{output, ar_util:encode(Output)}, {global_step_number, N},\n\t\t\t{seed, ar_util:encode(Seed)},\n\t\t\t{next_seed, 
ar_util:encode(NextSeed)}, {zone_upper_bound, ZoneUpperBound},\n\t\t\t{next_zone_upper_bound, NextZoneUpperBound},\n\t\t\t{prev_output, ar_util:encode(PrevOutput)},\n\t\t\t{last_step_checkpoints, [ar_util:encode(Elem) || Elem <- Checkpoints]},\n\t\t\t%% Keeping  'checkpoints' as JSON key (rather than 'steps') for backwards\n\t\t\t%% compatibility.\n\t\t\t{checkpoints, [ar_util:encode(Elem) || Elem <- Steps]}],\n\tFields2 =\n\t\tcase Height >= ar_fork:height_2_7() of\n\t\t\tfalse ->\n\t\t\t\tFields;\n\t\t\ttrue ->\n\t\t\t\tFields ++ [{vdf_difficulty, integer_to_binary(VDFDifficulty)},\n\t\t\t\t\t\t{next_vdf_difficulty, integer_to_binary(NextVDFDifficulty)}]\n\t\tend,\n\t{Fields2}.\n\ndiff_pair_to_json_list(DiffPair) ->\n\t{PoA1Diff, Diff} = DiffPair,\n\t[\n\t\tar_util:integer_to_binary(PoA1Diff),\n\t\tar_util:integer_to_binary(Diff)\n\t].\n\njson_struct_to_poa({JSONStruct}) ->\n\tUnpackedChunk =\n\t\tcase find_value(<<\"unpacked_chunk\">>, JSONStruct) of\n\t\t\tundefined ->\n\t\t\t\t<<>>;\n\t\t\tU ->\n\t\t\t\tU\n\t\tend,\n\t#poa{\n\t\toption = binary_to_integer(find_value(<<\"option\">>, JSONStruct)),\n\t\ttx_path = ar_util:decode(find_value(<<\"tx_path\">>, JSONStruct)),\n\t\tdata_path = ar_util:decode(find_value(<<\"data_path\">>, JSONStruct)),\n\t\tchunk = ar_util:decode(find_value(<<\"chunk\">>, JSONStruct)),\n\t\tunpacked_chunk = ar_util:decode(UnpackedChunk)\n\t}.\n\njson_struct_to_poa_from_map(JSONStruct) ->\n\t#poa{\n\t\toption = binary_to_integer(maps:get(<<\"option\">>, JSONStruct)),\n\t\ttx_path = ar_util:decode(maps:get(<<\"tx_path\">>, JSONStruct)),\n\t\tdata_path = ar_util:decode(maps:get(<<\"data_path\">>, JSONStruct)),\n\t\tchunk = ar_util:decode(maps:get(<<\"chunk\">>, JSONStruct)),\n\t\tunpacked_chunk = ar_util:decode(maps:get(<<\"unpacked_chunk\">>, JSONStruct, <<>>))\n\t}.\n\n%% @doc Convert parsed JSON tx fields from a HTTP request into a\n%% transaction record.\njson_struct_to_tx(JSONTX) when is_binary(JSONTX) ->\n\tjson_struct_to_tx(dejsonify(JSONTX));\njson_struct_to_tx({TXStruct}) ->\n\tjson_struct_to_tx(TXStruct, true).\n\njson_struct_to_v1_tx(JSONTX) when is_binary(JSONTX) ->\n\t{TXStruct} = dejsonify(JSONTX),\n\tjson_struct_to_tx(TXStruct, false).\n\njson_struct_to_tx(TXStruct, ComputeDataSize) ->\n\tTags =\n\t\tcase find_value(<<\"tags\">>, TXStruct) of\n\t\t\tundefined ->\n\t\t\t\t[];\n\t\t\tXs ->\n\t\t\t\tXs\n\t\tend,\n\tData = ar_util:decode(find_value(<<\"data\">>, TXStruct)),\n\tFormat =\n\t\tcase find_value(<<\"format\">>, TXStruct) of\n\t\t\tundefined ->\n\t\t\t\t1;\n\t\t\tN when is_integer(N) ->\n\t\t\t\tN;\n\t\t\tN when is_binary(N) ->\n\t\t\t\tbinary_to_integer(N)\n\t\tend,\n\tDenomination =\n\t\tcase find_value(<<\"denomination\">>, TXStruct) of\n\t\t\tundefined ->\n\t\t\t\t0;\n\t\t\tEncodedDenomination ->\n\t\t\t\tMaybeDenomination = binary_to_integer(EncodedDenomination),\n\t\t\t\ttrue = MaybeDenomination > 0,\n\t\t\t\tMaybeDenomination\n\t\tend,\n\tTXID = ar_util:decode(find_value(<<\"id\">>, TXStruct)),\n\t32 = byte_size(TXID),\n\tOwner = ar_util:decode(find_value(<<\"owner\">>, TXStruct)),\n\tSig = ar_util:decode(find_value(<<\"signature\">>, TXStruct)),\n\tSigType = set_sig_type_from_pub_key(Owner, Sig),\n\tTX = #tx{\n\t\tformat = Format,\n\t\tid = TXID,\n\t\tlast_tx = ar_util:decode(find_value(<<\"last_tx\">>, TXStruct)),\n\t\towner = Owner,\n\t\ttags = [{ar_util:decode(Name), ar_util:decode(Value)}\n\t\t\t\t%% Only the elements matching this pattern are included in the list.\n\t\t\t\t|| {[{<<\"name\">>, Name}, {<<\"value\">>, Value}]} <- 
Tags],\n\t\ttarget = ar_wallet:base64_address_with_optional_checksum_to_decoded_address(\n\t\t\t\tfind_value(<<\"target\">>, TXStruct)),\n\t\tquantity = binary_to_integer(find_value(<<\"quantity\">>, TXStruct)),\n\t\tdata = Data,\n\t\treward = binary_to_integer(find_value(<<\"reward\">>, TXStruct)),\n\t\tsignature = Sig,\n\t\tsignature_type = SigType,\n\t\tdata_size = parse_data_size(Format, TXStruct, Data, ComputeDataSize),\n\t\tdata_root =\n\t\t\tcase find_value(<<\"data_root\">>, TXStruct) of\n\t\t\t\tundefined -> <<>>;\n\t\t\t\tDR -> ar_util:decode(DR)\n\t\t\tend,\n\t\tdenomination = Denomination\n\t},\n\tcase SigType of\n\t\t?ECDSA_KEY_TYPE ->\n\t\t\tDataSegment = ar_tx:generate_signature_data_segment(TX),\n\t\t\tOwner2 = ar_wallet:recover_key(DataSegment, Sig, SigType),\n\t\t\tTX#tx{ owner = Owner2, owner_address = ar_wallet:to_address(Owner2, SigType) };\n\t\t?RSA_KEY_TYPE ->\n\t\t\tTX#tx{ owner_address = ar_wallet:to_address(Owner, SigType) }\n\tend.\n\nset_sig_type_from_pub_key(_Owner, <<>>) ->\n\t%% Transactions with empty signatures are used in some old tests,\n\t%% e.g., ar_http_iface_tests.erl.\n\t?RSA_KEY_TYPE;\nset_sig_type_from_pub_key(Owner, _Sig) ->\n\tcase Owner of\n\t\t<<>> ->\n\t\t\t?ECDSA_KEY_TYPE;\n\t\t_ ->\n\t\t\t?RSA_KEY_TYPE\n\tend.\n\njson_list_to_diff_pair(List) ->\n\t[PoA1DiffBin, DiffBin] =\n\t\tcase List of\n\t\t\tundefined -> [<<\"0\">>, <<\"0\">>];\n\t\t\t_ -> List\n\t\tend,\n\tPoA1Diff = ar_util:binary_to_integer(PoA1DiffBin),\n\tDiff = ar_util:binary_to_integer(DiffBin),\n\t{PoA1Diff, Diff}.\n\nparse_data_size(1, _TXStruct, Data, true) ->\n\tbyte_size(Data);\nparse_data_size(_Format, TXStruct, _Data, _ComputeDataSize) ->\n\tbinary_to_integer(find_value(<<\"data_size\">>, TXStruct)).\n\netf_to_wallet_chunk_response(ETF) ->\n\tcatch etf_to_wallet_chunk_response_unsafe(ETF).\n\netf_to_wallet_chunk_response_unsafe(ETF) ->\n\t#{ next_cursor := NextCursor, wallets := Wallets } = binary_to_term(ETF, [safe]),\n\ttrue = is_binary(NextCursor) orelse NextCursor == last,\n\tlists:foreach(\n\t\tfun\t({Addr, {Balance, LastTX}})\n\t\t\t\t\t\twhen is_binary(Addr), is_binary(LastTX), is_integer(Balance),\n\t\t\t\t\t\t\tBalance >= 0 ->\n\t\t\t\tok;\n\t\t\t({Addr, {Balance, LastTX, Denomination, MiningPermission}})\n\t\t\t\t\t\twhen is_binary(Addr), is_binary(LastTX), is_integer(Balance),\n\t\t\t\t\t\t\t\tBalance >= 0,\n\t\t\t\t\t\t\t\tis_integer(Denomination),\n\t\t\t\t\t\t\t\tDenomination > 0,\n\t\t\t\t\t\t\t\tis_boolean(MiningPermission) ->\n\t\t\t\tok\n\t\tend,\n\t\tWallets\n\t),\n\t{ok, #{ next_cursor => NextCursor, wallets => Wallets }}.\n\n%% @doc Convert a wallet list into a JSON struct.\n%% The order of the wallets is somewhat weird for historical reasons. If the reward address\n%% appears in the list for the first time, it is placed in the first position. 
Except for that,\n%% wallets are sorted in the alphabetical order.\nwallet_list_to_json_struct(RewardAddr, IsRewardAddrNew, WL) ->\n\tList = ar_patricia_tree:foldr(\n\t\tfun(Addr, Value, Acc) ->\n\t\t\tcase Addr == RewardAddr andalso IsRewardAddrNew of\n\t\t\t\ttrue ->\n\t\t\t\t\tAcc;\n\t\t\t\tfalse ->\n\t\t\t\t\t[wallet_to_json_struct(Addr, Value) | Acc]\n\t\t\tend\n\t\tend,\n\t\t[],\n\t\tWL\n\t),\n\tcase {ar_patricia_tree:get(RewardAddr, WL), IsRewardAddrNew} of\n\t\t{not_found, _} ->\n\t\t\tList;\n\t\t{_, false} ->\n\t\t\tList;\n\t\t{Value, true} ->\n\t\t\t%% Place the reward wallet first, for backwards-compatibility.\n\t\t\t[wallet_to_json_struct(RewardAddr, Value) | List]\n\tend.\n\nwallet_to_json_struct(Address, {Balance, LastTX}) ->\n\t{[{address, ar_util:encode(Address)}, {balance, list_to_binary(integer_to_list(Balance))},\n\t\t\t{last_tx, ar_util:encode(LastTX)}]};\nwallet_to_json_struct(Address, {Balance, LastTX, Denomination, MiningPermission}) ->\n\t{[{address, ar_util:encode(Address)}, {balance, list_to_binary(integer_to_list(Balance))},\n\t\t\t{last_tx, ar_util:encode(LastTX)}, {denomination, Denomination},\n\t\t\t{mining_permission, MiningPermission}]}.\n\n%% @doc Convert parsed JSON from fields into a valid wallet list.\njson_struct_to_wallet_list(JSON) when is_binary(JSON) ->\n\tjson_struct_to_wallet_list(dejsonify(JSON));\njson_struct_to_wallet_list(WalletsStruct) ->\n\tlists:foldl(\n\t\tfun(WalletStruct, Acc) ->\n\t\t\t{Address, Value} = json_struct_to_wallet(WalletStruct),\n\t\t\tar_patricia_tree:insert(Address, Value, Acc)\n\t\tend,\n\t\tar_patricia_tree:new(),\n\t\tWalletsStruct\n\t).\n\njson_struct_to_wallet({Wallet}) ->\n\tAddress = ar_util:decode(find_value(<<\"address\">>, Wallet)),\n\tBalance = binary_to_integer(find_value(<<\"balance\">>, Wallet)),\n\ttrue = Balance >= 0,\n\tLastTX = ar_util:decode(find_value(<<\"last_tx\">>, Wallet)),\n\tcase find_value(<<\"denomination\">>, Wallet) of\n\t\tundefined ->\n\t\t\t{Address, {Balance, LastTX}};\n\t\tDenomination when is_integer(Denomination), Denomination > 0 ->\n\t\t\tMiningPermission = find_value(<<\"mining_permission\">>, Wallet),\n\t\t\ttrue = is_boolean(MiningPermission),\n\t\t\t{Address, {Balance, LastTX, Denomination, MiningPermission}}\n\tend.\n\n%% @doc Find the value associated with a key in parsed a JSON structure list.\nfind_value(Key, List) ->\n\tcase lists:keyfind(Key, 1, List) of\n\t\t{Key, Val} -> Val;\n\t\tfalse -> undefined\n\tend.\n\n%% @doc Convert an ARQL query into a JSON struct\nquery_to_json_struct({Op, Expr1, Expr2}) ->\n\t{\n\t\t[\n\t\t\t{op, list_to_binary(atom_to_list(Op))},\n\t\t\t{expr1, query_to_json_struct(Expr1)},\n\t\t\t{expr2, query_to_json_struct(Expr2)}\n\t\t]\n\t};\nquery_to_json_struct(Expr) ->\n\tExpr.\n\n%% @doc Convert parsed JSON from fields into an internal ARQL query.\njson_struct_to_query(QueryJSON) ->\n\tcase json_decode(QueryJSON) of\n\t\t{ok, Decoded} ->\n\t\t\t{ok, do_json_struct_to_query(Decoded)};\n\t\t{error, _} ->\n\t\t\t{error, invalid_json}\n\tend.\n\ndo_json_struct_to_query({Query}) ->\n\t{\n\t\tlist_to_existing_atom(binary_to_list(find_value(<<\"op\">>, Query))),\n\t\tdo_json_struct_to_query(find_value(<<\"expr1\">>, Query)),\n\t\tdo_json_struct_to_query(find_value(<<\"expr2\">>, Query))\n\t};\ndo_json_struct_to_query(Query) ->\n\tQuery.\n\n%% @doc Generate a JSON structure representing a block index.\nblock_index_to_json_struct(BI) ->\n\tlists:map(\n\t\tfun\n\t\t\t({BH, WeaveSize, TXRoot}) ->\n\t\t\t\tKeys1 = [{<<\"hash\">>, 
ar_util:encode(BH)}],\n\t\t\t\tKeys2 =\n\t\t\t\t\tcase WeaveSize of\n\t\t\t\t\t\tnot_set ->\n\t\t\t\t\t\t\tKeys1;\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t[{<<\"weave_size\">>, integer_to_binary(WeaveSize)} | Keys1]\n\t\t\t\t\tend,\n\t\t\t\tKeys3 =\n\t\t\t\t\tcase TXRoot of\n\t\t\t\t\t\tnot_set ->\n\t\t\t\t\t\t\tKeys2;\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t[{<<\"tx_root\">>, ar_util:encode(TXRoot)} | Keys2]\n\t\t\t\t\tend,\n\t\t\t\t{Keys3};\n\t\t\t(BH) ->\n\t\t\t\tar_util:encode(BH)\n\t\tend,\n\t\tBI\n\t).\n\n%% @doc Convert a JSON structure into a block index.\njson_struct_to_block_index(JSONStruct) ->\n\tlists:map(\n\t\tfun\n\t\t\t(Hash) when is_binary(Hash) ->\n\t\t\t\tar_util:decode(Hash);\n\t\t\t({JSON}) ->\n\t\t\t\tHash = ar_util:decode(find_value(<<\"hash\">>, JSON)),\n\t\t\t\tWeaveSize =\n\t\t\t\t\tcase find_value(<<\"weave_size\">>, JSON) of\n\t\t\t\t\t\tundefined ->\n\t\t\t\t\t\t\tnot_set;\n\t\t\t\t\t\tWS ->\n\t\t\t\t\t\t\tbinary_to_integer(WS)\n\t\t\t\t\tend,\n\t\t\t\tTXRoot =\n\t\t\t\t\tcase find_value(<<\"tx_root\">>, JSON) of\n\t\t\t\t\t\tundefined ->\n\t\t\t\t\t\t\tnot_set;\n\t\t\t\t\t\tR ->\n\t\t\t\t\t\t\tar_util:decode(R)\n\t\t\t\t\tend,\n\t\t\t\t{Hash, WeaveSize, TXRoot}\n\t\tend,\n\t\tJSONStruct\n\t).\n\npoa_map_to_json_map(Map) ->\n\t#{ chunk := Chunk, tx_path := TXPath, data_path := DataPath, packing := Packing } = Map,\n\tBinaryPacking = iolist_to_binary(encode_packing(Packing, true)),\n\tMap2 = #{\n\t\tchunk => ar_util:encode(Chunk),\n\t\ttx_path => ar_util:encode(TXPath),\n\t\tdata_path => ar_util:encode(DataPath),\n\t\tpacking => BinaryPacking\n\t},\n\tMap3 =\n\t\tcase maps:get(absolute_end_offset, Map, not_found) of\n\t\t\tnot_found ->\n\t\t\t\tMap2;\n\t\t\tEndOffset ->\n\t\t\t\tMap2#{ absolute_end_offset => integer_to_binary(EndOffset) }\n\t\tend,\n\tcase maps:get(chunk_size, Map, not_found) of\n\t\tnot_found ->\n\t\t\tMap3;\n\t\tChunkSize ->\n\t\t\tMap3#{ chunk_size => integer_to_binary(ChunkSize) }\n\tend.\n\npoa_no_chunk_map_to_json_map(Map) ->\n\t#{ tx_path := TXPath, data_path := DataPath } = Map,\n\tMap2 = #{\n\t\ttx_path => ar_util:encode(TXPath),\n\t\tdata_path => ar_util:encode(DataPath)\n\t},\n\tcase maps:get(absolute_end_offset, Map, not_found) of\n\t\tnot_found ->\n\t\t\tMap2;\n\t\tEndOffset ->\n\t\t\tMap2#{ absolute_end_offset => integer_to_binary(EndOffset) }\n\tend.\n\njson_map_to_poa_map(JSON) ->\n\tMap = #{\n\t\tdata_root => ar_util:decode(maps:get(<<\"data_root\">>, JSON, <<>>)),\n\t\tchunk => ar_util:decode(maps:get(<<\"chunk\">>, JSON)),\n\t\tdata_path => ar_util:decode(maps:get(<<\"data_path\">>, JSON)),\n\t\ttx_path => ar_util:decode(maps:get(<<\"tx_path\">>, JSON, <<>>)),\n\t\tdata_size => binary_to_integer(maps:get(<<\"data_size\">>, JSON, <<\"0\">>))\n\t},\n\tPackingJSON = maps:get(<<\"packing\">>, JSON, <<\"unpacked\">>),\n\tPacking = decode_packing(PackingJSON, error),\n\tMap2 = case Packing of\n\t\terror ->\n\t\t\terror({unsupported_packing, PackingJSON});\n\t\tPacking ->\n\t\t\tmaps:put(packing, Packing, Map)\n\tend,\n\tcase maps:get(<<\"offset\">>, JSON, none) of\n\t\tnone ->\n\t\t\tMap2;\n\t\tOffset ->\n\t\t\tMap2#{ offset => binary_to_integer(Offset) }\n\tend.\n\nsignature_type_to_binary(SigType) ->\n\tcase SigType of\n\t\t{?RSA_SIGN_ALG, 65537} -> <<\"PS256_65537\">>;\n\t\t{?ECDSA_SIGN_ALG, secp256k1} -> <<\"ES256K\">>;\n\t\t{?EDDSA_SIGN_ALG, ed25519} -> <<\"Ed25519\">>\n\tend.\n\nbinary_to_signature_type(List) ->\n\tcase List of\n\t\tundefined -> {?RSA_SIGN_ALG, 65537};\n\t\t<<\"PS256_65537\">> -> {?RSA_SIGN_ALG, 65537};\n\t\t<<\"ES256K\">> 
-> {?ECDSA_SIGN_ALG, secp256k1};\n\t\t<<\"Ed25519\">> -> {?EDDSA_SIGN_ALG, ed25519};\n\t\t%% For backwards-compatibility.\n\t\t_ -> {?RSA_SIGN_ALG, 65537}\n\tend.\n\ncandidate_to_json_struct(\n\t#mining_candidate{\n\t\tcm_diff = DiffPair,\n\t\tcm_h1_list = H1List,\n\t\th0 = H0,\n\t\th1 = H1,\n\t\th2 = H2,\n\t\tmining_address = MiningAddress,\n\t\tnonce = Nonce,\n\t\tnext_seed = NextSeed,\n\t\tnext_vdf_difficulty = NextVDFDifficulty,\n\t\tnonce_limiter_output = NonceLimiterOutput,\n\t\tpartition_number = PartitionNumber,\n\t\tpartition_number2 = PartitionNumber2,\n\t\tpartition_upper_bound = PartitionUpperBound,\n\t\tpoa2 = PoA2,\n\t\tpreimage = Preimage,\n\t\tseed = Seed,\n\t\tsession_key = SessionKey,\n\t\tstart_interval_number = StartIntervalNumber,\n\t\tstep_number = StepNumber,\n\t\tlabel = Label,\n\t\tpacking_difficulty = PackingDifficulty,\n\t\treplica_format = ReplicaFormat\n\t}) ->\n\tJSON = [\n\t\t{cm_diff, diff_pair_to_json_list(DiffPair)},\n\t\t{cm_h1_list, h1_list_to_json_struct(H1List)},\n\t\t{mining_address, ar_util:encode(MiningAddress)},\n\t\t{h0, ar_util:encode(H0)},\n\t\t{partition_number, integer_to_binary(PartitionNumber)},\n\t\t{partition_number2, integer_to_binary(PartitionNumber2)},\n\t\t{partition_upper_bound, integer_to_binary(PartitionUpperBound)},\n\t\t{seed, ar_util:encode(Seed)},\n\t\t{next_seed, ar_util:encode(NextSeed)},\n\t\t{next_vdf_difficulty, integer_to_binary(NextVDFDifficulty)},\n\t\t{session_key, session_key_json_struct(SessionKey)},\n\t\t{start_interval_number, integer_to_binary(StartIntervalNumber)},\n\t\t{step_number, integer_to_binary(StepNumber)},\n\t\t{nonce_limiter_output, ar_util:encode(NonceLimiterOutput)},\n\t\t{label, Label},\n\t\t{packing_difficulty, PackingDifficulty},\n\t\t{replica_format, ReplicaFormat}\n\t],\n\n\tJSON2 = encode_if_set(JSON, h1, H1, fun ar_util:encode/1),\n\tJSON3 = encode_if_set(JSON2, h2, H2, fun ar_util:encode/1),\n\tJSON4 = encode_if_set(JSON3, nonce, Nonce, fun integer_to_binary/1),\n\tJSON5 = encode_if_set(JSON4, poa2, PoA2, fun poa_to_json_struct/1),\n\t{encode_if_set(JSON5, preimage, Preimage, fun ar_util:encode/1)}.\n\nh1_list_to_json_struct(H1List) ->\n\tlists:map(fun ({H1, Nonce}) ->\n\t\t{[\n\t\t\t{h1, ar_util:encode(H1)},\n\t\t\t{nonce, integer_to_binary(Nonce)}\n\t\t]}\n\tend,\n\tH1List).\n\nsession_key_json_struct({NextSeed, Interval, NextDifficulty}) ->\n\t{[\n\t\t{next_seed, ar_util:encode(NextSeed)},\n\t\t{interval, integer_to_binary(Interval)},\n\t\t{next_difficulty, integer_to_binary(NextDifficulty)}\n\t]}.\n\njson_map_to_candidate(JSON) ->\n\tDiffPair = json_list_to_diff_pair(maps:get(<<\"cm_diff\">>, JSON)),\n\tH1List = json_struct_to_h1_list(maps:get(<<\"cm_h1_list\">>, JSON)),\n\tH0 = ar_util:decode(maps:get(<<\"h0\">>, JSON)),\n\tH1 = decode_if_set(JSON, <<\"h1\">>, fun ar_util:decode/1, not_set),\n\tH2 = decode_if_set(JSON, <<\"h2\">>, fun ar_util:decode/1, not_set),\n\tMiningAddress = ar_util:decode(maps:get(<<\"mining_address\">>, JSON)),\n\tNextSeed = ar_util:decode(maps:get(<<\"next_seed\">>, JSON)),\n\tNextVDFDifficulty = binary_to_integer(maps:get(<<\"next_vdf_difficulty\">>, JSON)),\n\tNonce = decode_if_set(JSON, <<\"nonce\">>, fun binary_to_integer/1, not_set),\n\tNonceLimiterOutput = ar_util:decode(maps:get(<<\"nonce_limiter_output\">>, JSON)),\n\tPartitionNumber = binary_to_integer(maps:get(<<\"partition_number\">>, JSON)),\n\tPartitionNumber2 = binary_to_integer(maps:get(<<\"partition_number2\">>, JSON)),\n\tPartitionUpperBound = 
binary_to_integer(maps:get(<<\"partition_upper_bound\">>, JSON)),\n\tPoA2 = decode_if_set(JSON, <<\"poa2\">>, fun json_struct_to_poa_from_map/1, not_set),\n\tPreimage = decode_if_set(JSON, <<\"preimage\">>, fun ar_util:decode/1, not_set),\n\tSeed = ar_util:decode(maps:get(<<\"seed\">>, JSON)),\n\tSessionKey = json_struct_to_session_key(maps:get(<<\"session_key\">>, JSON)),\n\tStartIntervalNumber = binary_to_integer(maps:get(<<\"start_interval_number\">>, JSON)),\n\tStepNumber = binary_to_integer(maps:get(<<\"step_number\">>, JSON)),\n\tLabel = maps:get(<<\"label\">>, JSON, <<\"not_set\">>),\n\tPackingDifficulty = maps:get(<<\"packing_difficulty\">>, JSON, 0),\n\tReplicaFormat = maps:get(<<\"replica_format\">>, JSON, 0),\n\ttrue = (PackingDifficulty >= 0 andalso PackingDifficulty =< ?MAX_PACKING_DIFFICULTY\n\t\t\tandalso ReplicaFormat == 0)\n\t\t\torelse (ReplicaFormat == 1\n\t\t\t\t\tandalso PackingDifficulty == ?REPLICA_2_9_PACKING_DIFFICULTY),\n\t#mining_candidate{\n\t\tcm_diff = DiffPair,\n\t\tcm_h1_list = H1List,\n\t\th0 = H0,\n\t\th1 = H1,\n\t\th2 = H2,\n\t\tmining_address = MiningAddress,\n\t\tnext_seed = NextSeed,\n\t\tnext_vdf_difficulty = NextVDFDifficulty,\n\t\tnonce = Nonce,\n\t\tnonce_limiter_output = NonceLimiterOutput,\n\t\tpartition_number = PartitionNumber,\n\t\tpartition_number2 = PartitionNumber2,\n\t\tpartition_upper_bound = PartitionUpperBound,\n\t\tpoa2 = PoA2,\n\t\tpreimage = Preimage,\n\t\tseed = Seed,\n\t\tsession_key = SessionKey,\n\t\tstart_interval_number = StartIntervalNumber,\n\t\tstep_number = StepNumber,\n\t\tlabel = Label,\n\t\tpacking_difficulty = PackingDifficulty,\n\t\treplica_format = ReplicaFormat\n\t}.\n\njson_struct_to_h1_list(JSON) ->\n\tlists:map(fun (JSONElement) ->\n\t\tH1 = ar_util:decode(maps:get(<<\"h1\">>, JSONElement)),\n\t\tNonce = binary_to_integer(maps:get(<<\"nonce\">>, JSONElement)),\n\t\t{H1, Nonce}\n\tend, JSON).\n\njson_struct_to_session_key(JSON) ->\n\t{\n\t\tar_util:decode(maps:get(<<\"next_seed\">>, JSON)),\n\t\tbinary_to_integer(maps:get(<<\"interval\">>, JSON)),\n\t\tbinary_to_integer(maps:get(<<\"next_difficulty\">>, JSON))\n\t}.\n\nsolution_to_json_struct(\n\t#mining_solution{\n\t\tlast_step_checkpoints = LastStepCheckpoints,\n\t\tmining_address = MiningAddress,\n\t\tnext_seed = NextSeed,\n\t\tnext_vdf_difficulty = NextVDFDifficulty,\n\t\tnonce = Nonce,\n\t\tnonce_limiter_output = NonceLimiterOutput,\n\t\tpartition_number = PartitionNumber,\n\t\tpartition_upper_bound = PartitionUpperBound,\n\t\tpoa1 = PoA1,\n\t\tpoa2 = PoA2,\n\t\tpreimage = Preimage,\n\t\trecall_byte1 = RecallByte1,\n\t\trecall_byte2 = RecallByte2,\n\t\tseed = Seed,\n\t\tsolution_hash = SolutionHash,\n\t\tstart_interval_number = StartIntervalNumber,\n\t\tstep_number = StepNumber,\n\t\tsteps = Steps,\n\t\tpacking_difficulty = PackingDifficulty,\n\t\treplica_format = ReplicaFormat\n\t}) ->\n\tJSON = [\n\t\t{last_step_checkpoints, ar_util:encode(iolist_to_binary(LastStepCheckpoints))},\n\t\t{mining_address, ar_util:encode(MiningAddress)},\n\t\t{nonce, Nonce},\n\t\t{nonce_limiter_output, ar_util:encode(NonceLimiterOutput)},\n\t\t{next_seed, ar_util:encode(NextSeed)},\n\t\t{next_vdf_difficulty, integer_to_binary(NextVDFDifficulty)},\n\t\t{partition_number, integer_to_binary(PartitionNumber)},\n\t\t{partition_upper_bound, integer_to_binary(PartitionUpperBound)},\n\t\t{poa1, poa_to_json_struct(PoA1)},\n\t\t{poa2, poa_to_json_struct(PoA2)},\n\t\t{preimage, ar_util:encode(Preimage)},\n\t\t{recall_byte1, integer_to_binary(RecallByte1)},\n\t\t{seed, 
ar_util:encode(Seed)},\n\t\t{solution_hash, ar_util:encode(SolutionHash)},\n\t\t{start_interval_number, integer_to_binary(StartIntervalNumber)},\n\t\t{step_number, integer_to_binary(StepNumber)},\n\t\t{steps, ar_util:encode(iolist_to_binary(Steps))},\n\t\t{packing_difficulty, PackingDifficulty},\n\t\t{replica_format, ReplicaFormat}\n\t],\n\t{encode_if_set(JSON, recall_byte2, RecallByte2, fun integer_to_binary/1)}.\n\njson_map_to_solution(JSON) ->\n\tLastStepCheckpoints = parse_json_checkpoints(\n\t\t\tar_util:decode(maps:get(<<\"last_step_checkpoints\">>, JSON, <<>>))),\n\tMiningAddress = ar_util:decode(maps:get(<<\"mining_address\">>, JSON)),\n\tNextSeed = ar_util:decode(maps:get(<<\"next_seed\">>, JSON)),\n\tNextVDFDifficulty = maps:get(<<\"next_vdf_difficulty\">>, JSON),\n\tNextVDFDifficulty2 =\n\t\tcase is_binary(NextVDFDifficulty) of\n\t\t\ttrue ->\n\t\t\t\tbinary_to_integer(NextVDFDifficulty);\n\t\t\tfalse ->\n\t\t\t\tNextVDFDifficulty\n\t\tend,\n\tNonce = maps:get(<<\"nonce\">>, JSON),\n\tNonceLimiterOutput = ar_util:decode(maps:get(<<\"nonce_limiter_output\">>, JSON)),\n\tPartitionNumber = binary_to_integer(maps:get(<<\"partition_number\">>, JSON)),\n\tPartitionUpperBound = binary_to_integer(maps:get(<<\"partition_upper_bound\">>, JSON)),\n\tPoA1 = json_struct_to_poa_from_map(maps:get(<<\"poa1\">>, JSON)),\n\tPoA2 = json_struct_to_poa_from_map(maps:get(<<\"poa2\">>, JSON)),\n\tPreimage = ar_util:decode(maps:get(<<\"preimage\">>, JSON)),\n\tRecallByte1 = binary_to_integer(maps:get(<<\"recall_byte1\">>, JSON)),\n\tRecallByte2 = decode_if_set(JSON, <<\"recall_byte2\">>, fun binary_to_integer/1, undefined),\n\tSeed = ar_util:decode(maps:get(<<\"seed\">>, JSON)),\n\tSolutionHash = ar_util:decode(maps:get(<<\"solution_hash\">>, JSON)),\n\tStartIntervalNumber = binary_to_integer(maps:get(<<\"start_interval_number\">>, JSON)),\n\tStepNumber = binary_to_integer(maps:get(<<\"step_number\">>, JSON)),\n\tSteps = parse_json_checkpoints(ar_util:decode(maps:get(<<\"steps\">>, JSON, <<>>))),\n\tPackingDifficulty = maps:get(<<\"packing_difficulty\">>, JSON, 0),\n\tReplicaFormat = maps:get(<<\"replica_format\">>, JSON, 0),\n\ttrue = (PackingDifficulty >= 0 andalso PackingDifficulty =< ?MAX_PACKING_DIFFICULTY\n\t\t\tandalso ReplicaFormat == 0)\n\t\t\torelse (ReplicaFormat == 1\n\t\t\t\t\tandalso PackingDifficulty == ?REPLICA_2_9_PACKING_DIFFICULTY),\n\t#mining_solution{\n\t\tlast_step_checkpoints = LastStepCheckpoints,\n\t\tmining_address = MiningAddress,\n\t\tnext_seed = NextSeed,\n\t\tnext_vdf_difficulty = NextVDFDifficulty2,\n\t\tnonce = Nonce,\n\t\tnonce_limiter_output = NonceLimiterOutput,\n\t\tpartition_number = PartitionNumber,\n\t\tpartition_upper_bound = PartitionUpperBound,\n\t\tpoa1 = PoA1,\n\t\tpoa2 = PoA2,\n\t\tpreimage = Preimage,\n\t\trecall_byte1 = RecallByte1,\n\t\trecall_byte2 = RecallByte2,\n\t\tseed = Seed,\n\t\tsolution_hash = SolutionHash,\n\t\tstart_interval_number = StartIntervalNumber,\n\t\tstep_number = StepNumber,\n\t\tsteps = Steps,\n\t\tpacking_difficulty = PackingDifficulty,\n\t\treplica_format = ReplicaFormat\n\t}.\n\nencode_if_set(JSON, _JSONProperty, not_set, _Encoder) ->\n\tJSON;\nencode_if_set(JSON, _JSONProperty, undefined, _Encoder) ->\n\tJSON;\nencode_if_set(JSON, JSONProperty, Value, Encoder) ->\n\t[{JSONProperty, Encoder(Value)} | JSON].\n\ndecode_if_set(JSON, JSONProperty, Decoder, Default) ->\n\tcase maps:get(JSONProperty, JSON, not_found) of\n\t\tnot_found ->\n\t\t\tDefault;\n\t\tEncodedValue 
->\n\t\t\tDecoder(EncodedValue)\n\tend.\n\nparse_json_checkpoints(<<>>) ->\n\t[];\nparse_json_checkpoints(<< Checkpoint:32/binary, Rest/binary >>) ->\n\t[Checkpoint | parse_json_checkpoints(Rest)].\n\njobs_to_json_struct(Jobs) ->\n\t#jobs{ jobs = JobList, partial_diff = PartialDiff,\n\t\t\tseed = Seed, next_seed = NextSeed, interval_number = IntervalNumber,\n\t\t\tnext_vdf_difficulty = NextVDFDiff } = Jobs,\n\t\n\t{[{jobs, [job_to_json_struct(Job) || Job <- JobList]},\n\t\t{partial_diff, diff_pair_to_json_list(PartialDiff)},\n\t\t{seed, ar_util:encode(Seed)},\n\t\t{next_seed, ar_util:encode(NextSeed)},\n\t\t{interval_number, integer_to_binary(IntervalNumber)},\n\t\t{next_vdf_difficulty, integer_to_binary(NextVDFDiff)}\n\t]}.\n\njob_to_json_struct(Job) ->\n\t#job{ output = Output, global_step_number = StepNumber,\n\t\t\tpartition_upper_bound = PartitionUpperBound } = Job,\n\t{[{nonce_limiter_output, ar_util:encode(Output)},\n\t\t\t{step_number, integer_to_binary(StepNumber)},\n\t\t\t{partition_upper_bound, integer_to_binary(PartitionUpperBound)}]}.\n\njson_struct_to_jobs(Struct) ->\n\t{Keys} = Struct,\n\tPartialDiff = json_list_to_diff_pair(proplists:get_value(<<\"partial_diff\">>, Keys)),\n\tSeed = ar_util:decode(proplists:get_value(<<\"seed\">>, Keys, <<>>)),\n\tNextSeed = ar_util:decode(proplists:get_value(<<\"next_seed\">>, Keys, <<>>)),\n\tNextVDFDiff = binary_to_integer(proplists:get_value(<<\"next_vdf_difficulty\">>, Keys,\n\t\t\t<<\"0\">>)),\n\tIntervalNumber = binary_to_integer(proplists:get_value(<<\"interval_number\">>, Keys,\n\t\t\t<<\"0\">>)),\n\tJobs = [json_struct_to_job(Job) || Job <- proplists:get_value(<<\"jobs\">>, Keys, [])],\n\t#jobs{ jobs = Jobs, seed = Seed, next_seed = NextSeed,\n\t\t\tinterval_number = IntervalNumber,\n\t\t\tnext_vdf_difficulty = NextVDFDiff, partial_diff = PartialDiff }.\n\njson_struct_to_job(Struct) ->\n\t{Keys} = Struct,\n\tOutput = ar_util:decode(proplists:get_value(<<\"nonce_limiter_output\">>, Keys, <<>>)),\n\tStepNumber = binary_to_integer(proplists:get_value(<<\"step_number\">>, Keys,\n\t\t\t<<\"0\">>)),\n\tPartitionUpperBound = binary_to_integer(proplists:get_value(<<\"partition_upper_bound\">>,\n\t\t\tKeys, <<\"0\">>)),\n\t#job{ output = Output, global_step_number = StepNumber,\n\t\t\tpartition_upper_bound = PartitionUpperBound }.\n\npartial_solution_response_to_json_struct(Response) ->\n\t#partial_solution_response{ indep_hash = H, status = S } = Response,\n\t{[{<<\"indep_hash\">>, ar_util:encode(H)}, {<<\"status\">>, S}]}.\n\npartition_to_json_struct(Bucket, BucketSize, Addr, PackingDifficulty) ->\n\tFields = [\n\t\t{bucket, Bucket},\n\t\t{bucketsize, BucketSize},\n\t\t{addr, ar_util:encode(Addr)}\n\t],\n\tFields2 =\n\t\tcase PackingDifficulty >= 1 of\n\t\t\ttrue ->\n\t\t\t\tFields ++ [{pdiff, PackingDifficulty}];\n\t\t\tfalse ->\n\t\t\t\tFields\n\t\tend,\n\t{Fields2}.\n\n%% Used in logging among other things, therefore we have more\n%% possible values here than in decode_packing/2.\nencode_packing(undefined, false) ->\n\t\"undefined\";\nencode_packing({spora_2_6, Addr}, _Strict) ->\n\t\"spora_2_6_\" ++ binary_to_list(ar_util:encode(Addr));\nencode_packing({composite, Addr, PackingDifficulty}, _Strict) ->\n\t\"composite_\" ++ binary_to_list(ar_util:encode(Addr)) ++ \".\"\n\t\t\t++ integer_to_list(PackingDifficulty);\nencode_packing(spora_2_5, _Strict) ->\n\t\"spora_2_5\";\nencode_packing(unpacked, _Strict) ->\n\t\"unpacked\";\nencode_packing(unpacked_padded, _Strict) ->\n\t\"unpacked_padded\";\nencode_packing({replica_2_9, Addr}, 
_Strict) ->\n\t\"replica_2_9_\" ++ binary_to_list(ar_util:encode(Addr));\nencode_packing(Packing, false) when is_atom(Packing) ->\n\tatom_to_list(Packing).\n\ndecode_packing(<<\"unpacked\">>, _Error) ->\n\tunpacked;\ndecode_packing(<<\"spora_2_5\">>, _Error) ->\n\tspora_2_5;\ndecode_packing(<< \"spora_2_6_\", Addr/binary >>, Error) ->\n\t\tcase ar_util:safe_decode(Addr) of\n\t\t\t{ok, DecodedAddr} ->\n\t\t\t\t{spora_2_6, DecodedAddr};\n\t\t\t_ ->\n\t\t\t\tError\n\t\tend;\ndecode_packing(<<\"composite_\", Rest/binary>>, Error) ->\n\tcase binary:split(Rest, <<\".\">>, [global]) of\n\t\t[AddrBin, PackingDifficultyBin] ->\n\t\t\tcase catch binary_to_integer(PackingDifficultyBin) of\n\t\t\t\tPackingDifficulty when is_integer(PackingDifficulty),\n\t\t\t\t\t\tPackingDifficulty >= 0,\n\t\t\t\t\t\tPackingDifficulty =< ?MAX_PACKING_DIFFICULTY ->\n\t\t\t\t\tcase ar_util:safe_decode(AddrBin) of\n\t\t\t\t\t\t{ok, DecodedAddr} ->\n\t\t\t\t\t\t\t{composite, DecodedAddr, PackingDifficulty};\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\tError\n\t\t\t\t\tend;\n\t\t\t\t_ ->\n\t\t\t\t\tError\n\t\t\tend;\n\t\t_ ->\n\t\t\tError\n\tend;\ndecode_packing(<< \"replica_2_9_\", Addr/binary >>, Error) ->\n\tcase ar_util:safe_decode(Addr) of\n\t\t{ok, DecodedAddr} ->\n\t\t\t{replica_2_9, DecodedAddr};\n\t\t_ ->\n\t\t\tError\n\tend;\ndecode_packing(<<\"unpacked_padded\">>, _Error) ->\n\tunpacked_padded;\ndecode_packing(_, Error) ->\n\tError.\n\nbinary_to_packing(<<\"unpacked\">>, _Error) ->\n\tunpacked;\nbinary_to_packing(<<\"spora_2_5\">>, _Error) ->\n\tspora_2_5;\nbinary_to_packing(<< \"spora_2_6_\", Addr/binary >>, Error) when byte_size(Addr) =< 64 ->\n\tcase ar_util:safe_decode(Addr) of\n\t\t{ok, DecodedAddr} ->\n\t\t\t{spora_2_6, DecodedAddr};\n\t\t_ ->\n\t\t\tError\n\tend;\nbinary_to_packing(<< \"composite_\", PackingDifficulty:8, Addr/binary >>, Error)\n\t\twhen byte_size(Addr) =< 64,\n\t\tPackingDifficulty =< ?MAX_PACKING_DIFFICULTY ->\n\tcase ar_util:safe_decode(Addr) of\n\t\t{ok, DecodedAddr} ->\n\t\t\t{composite, DecodedAddr, PackingDifficulty};\n\t\t_ ->\n\t\t\tError\n\tend;\nbinary_to_packing(<< \"replica_2_9_\", Addr/binary >>, Error) when byte_size(Addr) =< 64 ->\n\tcase ar_util:safe_decode(Addr) of\n\t\t{ok, DecodedAddr} ->\n\t\t\t{replica_2_9, DecodedAddr};\n\t\t_ ->\n\t\t\tError\n\tend;\nbinary_to_packing(<<\"unpacked_padded\">>, _Error) ->\n\tunpacked_padded.\n\npacking_to_binary(unpacked) ->\n\t<<\"unpacked\">>;\npacking_to_binary(spora_2_5) ->\n\t<<\"spora_2_5\">>;\npacking_to_binary({spora_2_6, Addr}) ->\n\tiolist_to_binary([<<\"spora_2_6_\">>, ar_util:encode(Addr)]);\npacking_to_binary({composite, Addr, PackingDifficulty}) ->\n\tiolist_to_binary([<<\"composite_\">>, << PackingDifficulty:8 >>,\n\t\t\t\t\t\tar_util:encode(Addr)]);\npacking_to_binary({replica_2_9, Addr}) ->\n\tiolist_to_binary([<<\"replica_2_9_\">>, ar_util:encode(Addr)]);\npacking_to_binary(unpacked_padded) ->\n\t<<\"unpacked_padded\">>.\n\npool_cm_jobs_to_json_struct(Jobs) ->\n\t#pool_cm_jobs{ h1_to_h2_jobs = H1ToH2Jobs, h1_read_jobs = H1ReadJobs,\n\t\t\tpartitions = Partitions } = Jobs,\n\t{[\n\t\t{h1_to_h2_jobs, [pool_cm_h1_to_h2_job_to_json_struct(Job) || Job <- H1ToH2Jobs]},\n\t\t{h1_read_jobs, [pool_cm_h1_read_job_to_json_struct(Job) || Job <- H1ReadJobs]},\n\t\t{partitions, Partitions}\n\t]}.\n\npool_cm_h1_to_h2_job_to_json_struct(Job) ->\n\tcandidate_to_json_struct(Job).\n\npool_cm_h1_read_job_to_json_struct(Job) ->\n\tcandidate_to_json_struct(Job).\n\njson_map_to_pool_cm_jobs(Map) ->\n\tH1ToH2Jobs = [json_map_to_candidate(Job)\n\t\t\t|| 
Job <- maps:get(<<\"h1_to_h2_jobs\">>, Map, [])],\n\tH1ReadJobs = [json_map_to_candidate(Job)\n\t\t\t|| Job <- maps:get(<<\"h1_read_jobs\">>, Map, [])],\n\t#pool_cm_jobs{ h1_to_h2_jobs = H1ToH2Jobs, h1_read_jobs = H1ReadJobs }.\n\nfootprint_to_json_map(Intervals) ->\n\tIntervals2 = ar_intervals:to_list(Intervals),\n\tIntervals3 = [[integer_to_binary(Start), integer_to_binary(End)]\n\t\t\t|| {End, Start} <- Intervals2],\n\t#{\n\t\tintervals => Intervals3\n\t}.\n\njson_map_to_footprint(Map) ->\n\tIntervals = maps:get(<<\"intervals\">>, Map),\n\tIntervals2 = [{binary_to_integer(End), binary_to_integer(Start)}\n\t\t\t|| [Start, End] <- Intervals],\n\tar_intervals:from_list(Intervals2)."
  },
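The serialization helpers above pair each `*_to_json_*` encoder with a matching decoder; for example, the coordinated-mining difficulty pair travels as a two-element JSON list of decimal strings. Below is a minimal round-trip sketch, assuming these functions live in the serializer module shown above (`ar_serialize`) and are exported (the export list is not part of this excerpt), and that `ar_util:binary_to_integer/1` parses plain decimal binaries like `erlang:binary_to_integer/1`; the example module and test names are hypothetical.

```erlang
%% Hypothetical EUnit sketch: round-trip a cm_diff pair through its JSON list form.
-module(ar_serialize_diff_pair_example).
-include_lib("eunit/include/eunit.hrl").

diff_pair_roundtrip_test() ->
	DiffPair = {12345, 67890},
	%% Encodes to [<<"12345">>, <<"67890">>] (decimal binaries, PoA1 difficulty first).
	JSONList = ar_serialize:diff_pair_to_json_list(DiffPair),
	?assertEqual(DiffPair, ar_serialize:json_list_to_diff_pair(JSONList)),
	%% A missing cm_diff field (undefined) decodes to a zero pair.
	?assertEqual({0, 0}, ar_serialize:json_list_to_diff_pair(undefined)).
```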
  {
    "path": "apps/arweave/src/ar_shutdown_manager.erl",
    "content": "%%%===================================================================\n%%% This Source Code Form is subject to the terms of the GNU General\n%%% Public License, v. 2.0. If a copy of the GPLv2 was not distributed\n%%% with this file, You can obtain one at\n%%% https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html\n%%%\n%%% @doc Arweave Shutdown Manager.\n%%%\n%%% This module was created to ensure all remaining connections or\n%%% processes related to arweave have been correctly terminated. This\n%%% process should be started first and stopped last.\n%%%\n%%% The module export few functions to help diagnose arweave network\n%%% connections.\n%%%\n%%% When arweave application is stopped, this application should\n%%% receive `shutdown' due to trap exist. In this situation, terminate\n%%% functon is then called.\n%%%\n%%% @end\n%%%===================================================================\n-module(ar_shutdown_manager).\n-export([start_link/0]).\n-export([init/1, terminate/2]).\n-export([handle_call/3, handle_cast/2, handle_info/2]).\n-export([\n\tapply/3,\n\tapply/4,\n\tconnections/0,\n\tconnections/1,\n\tlist_connections/0,\n\tlist_connections/1,\n\tshutdown/0,\n\tsocket_info/1,\n\tsocket_info/2,\n\tstart_killer/1,\n\tstate/0,\n\tterminate_connections/0\n]).\n-include_lib(\"eunit/include/eunit.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nstart_link() ->\n\tstart_link(#{}).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nstart_link(Args) ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, Args, []).\n\n%%--------------------------------------------------------------------\n%% @doc returns service state.\n%% @end\n%%--------------------------------------------------------------------\n-spec state() -> shutdown | running.\n\nstate() ->\n\tcase ets:lookup(?MODULE, state) of\n\t\t[{state, shutdown}] -> shutdown;\n\t\t_ -> running\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc set state value to shutdown.\n%% @end\n%%--------------------------------------------------------------------\n-spec shutdown() -> boolean().\n\nshutdown() ->\n\tets:insert(?MODULE, {state, shutdown}).\n\n%%--------------------------------------------------------------------\n%% @doc apply a function only if the service is running.\n%% @see apply/4\n%% @end\n%%--------------------------------------------------------------------\n-spec apply(Module, Function, Arguments) -> Return when\n\tModule :: atom(),\n\tFunction :: atom(),\n\tArguments :: [term()],\n\tReturn :: any() | {error, shutdown}.\n\napply(Module, Function, Arguments) ->\n\tapply(Module, Function, Arguments, #{}).\n\n%%--------------------------------------------------------------------\n%% @doc execute a MFA with extra option for filtering.\n%% @end\n%%--------------------------------------------------------------------\n-spec apply(Module, Function, Arguments, Opts) -> Return when\n\tModule :: atom(),\n\tFunction :: atom(),\n\tArguments :: [term()],\n\tOpts :: #{ skip_on_shutdown => boolean() },\n\tReturn :: any() | {error, shutdown}.\n\napply(Module, Function, Arguments, #{ skip_on_shutdown := false }) ->\n\terlang:apply(Module, Function, Arguments);\napply(Module, Function, Arguments, Opts) 
->\n\tcase state() of\n\t\trunning ->\n\t\t\terlang:apply(Module, Function, Arguments);\n\t\tshutdown ->\n\t\t\t?LOG_WARNING([\n\t\t\t\t{state, shutdown},\n\t\t\t\t{module, Module},\n\t\t\t\t{function, Function},\n\t\t\t\t{function, Arguments}\n\t\t\t]),\n\t\t\t{error, shutdown}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit(_Args) ->\n\terlang:process_flag(trap_exit, true),\n\tStartedAt = erlang:system_time(),\n\t?LOG_INFO([{start, ?MODULE}, {pid, self()}, {started_at, StartedAt}]),\n\tets:insert(?MODULE, {state, running}),\n\t{ok, #{ started_at => StartedAt }}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nhandle_call(uptime, _From, State = #{ started_at := StartedAt }) ->\n\tNow = erlang:system_time(),\n\t{reply, Now-StartedAt, State};\nhandle_call(Msg, From, State) ->\n\t?LOG_WARNING([{message, Msg}, {from, From}]),\n\t{noreply, State}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nhandle_cast(Msg, State) ->\n\t?LOG_WARNING([{message, Msg}]),\n\t{noreply, State}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nhandle_info(Msg, State) ->\n\t?LOG_WARNING([{message, Msg}]),\n\t{noreply, State}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc this function is called when ar_sup is being stopped. If it\n%% was correctly configured, this should be the last function to be\n%% executed in the supervision tree.\n%% @end\n%%--------------------------------------------------------------------\nterminate(_Reason = shutdown, _State = #{ started_at := StartedAt }) ->\n\tNow = erlang:system_time(),\n\tterminate_connections(),\n\t?LOG_INFO([\n\t\t{stop, ?MODULE},\n\t\t{started_at, StartedAt},\n\t\t{stopped_at, Now},\n\t\t{uptime, Now-StartedAt}\n\t]),\n\tok.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc terminate all active http connections from ranch/cowboy. This\n%% @end\n%%--------------------------------------------------------------------\nterminate_connections() ->\n\t% this process should not exit, it will be linked to another\n\t% process in charge to kill all connections.\n\terlang:process_flag(trap_exit, true),\n\n\t% list the connections/sockets by target, where target can\n\t% be gun or cowboy.\n\tConnections = list_connections(),\n\t?LOG_DEBUG([{connections, length(Connections)}]),\n\tcase Connections of\n\t\t[] ->\n\t\t\tok;\n\t\tSockets ->\n\t\t\t% this call is blocking until all killer\n\t\t\t% processes are dead. 
When done, the code\n\t\t\t% will loop to check if some connections\n\t\t\t% are still active.\n\t\t\t_ = killers_connections_init(Sockets),\n\t\t\tterminate_connections()\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nlist_connections() ->\n\tTargets = [cowboy, gun],\n\tlists:flatten([ list_connections(T) || T <- Targets ]).\n\nlist_connections(gun) ->\n\tlists:flatten([\n\t\tbegin\n\t\t\tProcessInfo = process_info(P),\n\t\t\tLinks = proplists:get_value(links, ProcessInfo, []),\n\t\t\t[ L || L <- Links, is_port(L) ]\n\t\tend ||\n\t\t{_, P, _, _} <- supervisor:which_children(gun_sup)\n\t]);\nlist_connections(cowboy) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tFilters = [{'=:=', peer_port, Config#config.port}],\n\tSocketsInfo = connections(#{ filters => Filters }),\n\t[ S || #{ socket := S } <- SocketsInfo ].\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nkillers_connections_init([]) -> ok;\nkillers_connections_init(Sockets) ->\n\t% start killers processes to stop those sockets.\n\tKillers = lists:foldr(\n\t\tfun(S, Acc) ->\n\t\t\tcase start_killer(S) of\n\t\t\t\t{ok, K} -> [K|Acc];\n\t\t\t\t_ -> Acc\n\t\t\tend\n\t\tend, [], Sockets),\n\t% wait until all connection killers are done with their job.\n\tkillers_connections_loop(Killers).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc main terminate connections loop. This functions waits for all\n%% killer process to be stopped.\n%% @end\n%%--------------------------------------------------------------------\n-spec killers_connections_loop([pid()]) -> ok.\n\nkillers_connections_loop([]) -> ok;\nkillers_connections_loop(Killers) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tTcpTimeout = 1000*Config#config.shutdown_tcp_connection_timeout,\n\treceive\n\t\t{'EXIT', Pid, _} ->\n\t\t\tFilter = fun\n\t\t\t\t(P) when P =:= Pid -> false;\n\t\t\t\t(_) -> true\n\t\t\tend,\n\t\t\tNewKillers = lists:filter(Filter, Killers),\n\t\t\tkillers_connections_loop(NewKillers);\n\t\tMsg ->\n\t\t\t?LOG_WARNING([{received, Msg}]),\n\t\t\tkillers_connections_loop(Killers)\n\tafter\n\t\tTcpTimeout ->\n\t\t\t?LOG_WARNING([{error, timeout}]),\n\t\t\t{error, timeout}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc start a connection killer process. This function starts a new\n%% killer job using a socket. A killer will be spawned and linked to\n%% the caller process.\n%% @end\n%%--------------------------------------------------------------------\n-spec start_killer(port()) -> {ok, pid()} | {error, term()}.\n\nstart_killer(Socket)\n\twhen is_port(Socket) ->\n\t\tFun = fun() -> killer_init(Socket) end,\n\t\t{ok, spawn_link(Fun)};\nstart_killer(Term) ->\n\t{error, Term}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc killer process initialization function. A killer connection\n%% process requires some information about a socket, but it also\n%% need to wait until the socket is terminated. So, the first step\n%% is to trap exit then links it to the socket. 
The extended information\n%% about the socket is also collected before entering into the loop.\n%% @see killer_loop/1\n%% @end\n%%--------------------------------------------------------------------\nkiller_init(Socket) ->\n\t?LOG_DEBUG([{socket, Socket}, {pid, self()}, {action, started}]),\n\terlang:process_flag(trap_exit, true),\n\terlang:link(Socket),\n\t{ok, Config} = arweave_config:get_env(),\n\tMode = Config#config.shutdown_tcp_mode,\n\tState = socket_info(Socket),\n\tNewState = killer_loop(State#{\n\t\tcounter => 0,\n\t\tmode => Mode\n\t}),\n\t?LOG_DEBUG([{state, NewState}, {pid, self()}, {action, done}]).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc killer connection loop. this loop is checking the state of\n%% an active socket. The goal is to have all socket in closed state.\n%% if its not the case, the killer connection will loop over until\n%% its done.\n%% @end\n%%--------------------------------------------------------------------\n-spec killer_loop(State) -> Return when\n\tState :: map(),\n\tReturn :: ok | {error, term()}.\n\nkiller_loop(_State = #{ info := #{ states := [closed] }}) -> ok;\nkiller_loop(State = #{ socket := Socket, counter := Counter }) ->\n\tDelay = maps:get(delay, State, 1000),\n\treceive\n\t\t{'EXIT', Socket, _} ->\n\t\t\t?LOG_DEBUG([{state, State}, {pid, self()},\n\t\t\t\t{action, exited}]),\n\t\t\tok\n\tafter\n\t\tDelay ->\n\t\t\ttry stop_connection(State) of\n\t\t\t\tstop -> ok;\n\t\t\t\tcontinue ->\n\t\t\t\t\tNewState = socket_info(Socket, State),\n\t\t\t\t\tkiller_loop(NewState#{ counter => Counter + 1})\n\t\t\tcatch\n\t\t\t\tE:R ->\n\t\t\t\t\t?LOG_DEBUG([{state, State}, {pid, self()},\n\t\t\t\t\t\t{action, error}, {error, E}, {reason, R}]),\n\t\t\t\t\t{E, {R, State}}\n\t\t\tend\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc stop an active connection using a socket. This function is\n%% mainly used to kill a connection based on socket state.\n%%\n%% Two modes are currently available: shutdown or close. The shutdown mode\n%% shutdowns the socket and follows the safest procedure for the\n%% client: shutdown the socket and then close it when the remote\n%% side of the connection is okay.\n%%\n%% The close mode is setting linger and close the socket. 
This is not\n%% a clean way but in some situation, it can be useful.\n%% @end\n%%--------------------------------------------------------------------\n-spec stop_connection(State) -> Return when\n\tState :: map(),\n\tReturn :: continue | stop.\n\nstop_connection(State = #{ socket := Socket, mode := shutdown }) ->\n\t?LOG_DEBUG([{state, State}, {pid, self()}]),\n\tcase State of\n\t\t#{ info := #{ states := [closed] }} ->\n\t\t\tstop;\n\t\t#{ counter := 0 } ->\n\t\t\tranch_tcp:setopts(Socket, [{delay_send, false}]),\n\t\t\tranch_tcp:setopts(Socket, [{nodelay, true}]),\n\t\t\tranch_tcp:setopts(Socket, [{send_timeout_close, true}]),\n\t\t\tranch_tcp:setopts(Socket, [{send_timeout, 10}]),\n\t\t\tranch_tcp:setopts(Socket, [{keepalive, false}]),\n\t\t\tranch_tcp:setopts(Socket, [{exit_on_close, false}]),\n\t\t\tranch_tcp:shutdown(Socket, write),\n\t\t\tcontinue;\n\t\t#{ counter := Counter } when Counter < 10 ->\n\t\t\tranch_tcp:shutdown(Socket, write),\n\t\t\tcontinue;\n\t\t#{ counter := Counter } when Counter > 10 ->\n\t\t\tranch_tcp:shutdown(Socket, read),\n\t\t\tranch_tcp:close(Socket),\n\t\t\tcontinue;\n\t\t_ ->\n\t\t\tcontinue\n\tend;\nstop_connection(State = #{ socket := Socket, mode := close }) ->\n\t?LOG_DEBUG([{state, State}, {pid, self()}]),\n\tcase State of\n\t\t#{ info := #{ states := [closed] }} ->\n\t\t\tstop;\n\t\t_ ->\n\t\t\tranch_tcp:setopts(Socket, [{delay_send, false}]),\n\t\t\tranch_tcp:setopts(Socket, [{nodelay, true}]),\n\t\t\tranch_tcp:setopts(Socket, [{exit_on_close, false}]),\n\t\t\tranch_tcp:setopts(Socket, [{send_timeout_close, true}]),\n\t\t\tranch_tcp:setopts(Socket, [{send_timeout, 10}]),\n\t\t\tranch_tcp:setopts(Socket, [{keepalive, false}]),\n\t\t\tranch_tcp:setopts(Socket, [{linger, {true, 0}}]),\n\t\t\tranch_tcp:shutdown(Socket, read_write),\n\t\t\tranch_tcp:close(Socket),\n\t\t\tcontinue\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc list all active connections.\n%% @see connections/1\n%% @end\n%%--------------------------------------------------------------------\n-spec connections() -> Return when\n\tReturn :: [map()].\n\nconnections() ->\n\tconnections(#{}).\n\n%%--------------------------------------------------------------------\n%% @doc `connections/1' is based on inet module private function.\n%% The goal of this function is to return only active http connections.\n%% By using this method, we are avoiding checking the links from\n%% `ranch:procs/2' and deal directly with the sockets from `inet'.\n%% @see connections/1\n%% @end\n%%--------------------------------------------------------------------\n-spec connections(Opts) -> Return when\n\tOpts :: #{\n\t\tfilters => [tuple()]\n\t},\n\tReturn :: [map()].\n\nconnections(Opts) ->\n\tFilters = maps:get(filters, Opts, []),\n\tSockets = erlang:ports(),\n\tFoldr = fun network_connection_foldr/2,\n\tNetworkSockets = lists:foldr(Foldr, [], Sockets),\n\tDefaultFilters = [\n\t\t{'=:=', name, \"tcp_inet\"},\n\t\t{'=/=', [info, states], [closed]},\n\t\t{'=/=', [info, states], [listen, open]},\n\t\t{'=/=', [info, states], [acception, listen, open]}\n\t],\n\tFinalFilters = DefaultFilters ++ Filters,\n\tdata_filters(FinalFilters, NetworkSockets, #{}).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc filters only ports defined as network sockets, ignore all\n%% other ports/sockets, and collect extended socket information.\n%% @end\n%%--------------------------------------------------------------------\n-spec 
network_connection_foldr(Port, Acc) -> Return when\n\tPort :: port(),\n\tAcc :: list(),\n\tReturn :: [map()].\n\nnetwork_connection_foldr(Port, Acc) ->\n\ttry erlang:port_info(Port, name) of\n\t\t{name, N = \"tcp_inet\"} ->\n\t\t\tI = socket_info(Port),\n\t\t\t[I#{ name => N }|Acc];\n\t\t{name, N = \"udp_inet\"} ->\n\t\t\tI = socket_info(Port),\n\t\t\t[I#{ name => N }|Acc];\n\t\t{name, N = \"sctp_inet\"} ->\n\t\t\tI = socket_info(Port),\n\t\t\t[I#{ name => N }|Acc];\n\t\t_ -> Acc\n\tcatch\n\t\t_:_ -> Acc\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc Returns extended information about an active socket (port).\n%% it includes a formatted version of the peer/sock name and all\n%% information from `inet:info/1' function.\n%% @end\n%%--------------------------------------------------------------------\n-spec socket_info(Socket) -> Return when\n\tSocket :: port(),\n\tReturn :: #{\n\t\tsocket => port(),\n\t\tsock_name => {tuple(), pos_integer()},\n\t\tsock_address => tuple(),\n\t\tsock_port => pos_integer(),\n\t\tsock => string(),\n\t\tpeer_name => {tuple(), pos_integer()},\n\t\tpeer_address => tuple(),\n\t\tpeer_port => pos_integer(),\n\t\tpeer => string(),\n\t\tinfo => map()\n\t}.\n\nsocket_info(Socket)\n\twhen is_port (Socket) ->\n\t\tsocket_info(Socket, #{}).\n\nsocket_info(Socket, State) ->\n\t{SockAddress, SockPort, Sock} =\n\t\tsock_wrapper(Socket, sockname),\n\n\t{PeerAddress, PeerPort, Peer} =\n\t\tsock_wrapper(Socket, peername),\n\n\t% On R24, inet:info/1 can raise an exception\n\t% because some functions used to generate the\n\t% final map results are not returning correct\n\t% data. See inet:port_info/1, line 714\n\tInfo =\n\t\ttry inet:info(Socket) of\n\t\t\tResult when is_map(Result) -> Result;\n\t\t\t_ -> #{}\n\t\tcatch\n\t\t\t_:_ -> #{}\n\t\tend,\n\n\tNewState= #{\n\t\tsocket => Socket,\n\t\tsock_name => {SockAddress, SockPort},\n\t\tsock_address => SockAddress,\n\t\tsock_port => SockPort,\n\t\tsock => Sock,\n\t\tpeer_name => {PeerAddress, SockPort},\n\t\tpeer_address => PeerAddress,\n\t\tpeer_port => PeerPort,\n\t\tpeer => Peer,\n\t\tinfo => Info\n\t},\n\tmaps:merge(State, NewState).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc wrapper around `inet:peername/1' and `inet:sockname/1'.\n%% @end\n%%---------------------------------------------------------------------\n-spec sock_wrapper(Socket, Info) -> Return when\n\tSocket :: port(),\n\tInfo :: peername | sockname,\n\tReturn :: {Address, Port, Peer},\n\tAddress :: tuple() | undefined,\n\tPort :: pos_integer() | undefined,\n\tPeer :: string().\n\nsock_wrapper(Socket, Info) ->\n\ttry inet:Info(Socket) of\n\t\t{ok, {Address, Port}} ->\n\t\t\tAddressList = inet:ntoa(Address),\n\t\t\tPortList = integer_to_list(Port),\n\t\t\tPeer = string:join([AddressList, PortList], \":\"),\n\t\t\t{Address, Port, Peer};\n\t\t_ ->\n\t\t\t{undefined, undefined, \"unknown\"}\n\tcatch\n\t\t_:_ ->\n\t\t\t{undefined, undefined, \"unknown\"}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc a simple data filter function. Filters Data using a list of\n%% filters. 
Only the data matching all filters are returned.\n%% @end\n%%--------------------------------------------------------------------\n-spec data_filters(Filters, Datas, Opts) -> Return when\n\tFilters :: [\n\t\t{Operator, CheckKey} |\n\t\t{Operator, CheckKey, CheckValue}\n\t],\n\tOperator :: atom(),\n\tCheckKey :: [term()] | term(),\n\tCheckValue :: term(),\n\tDatas :: [map()],\n\tOpts :: #{\n\t\treverse => boolean()\n\t},\n\tReturn :: [map()].\n\ndata_filters(_, [], _) -> [];\ndata_filters([], Datas, _) -> Datas;\ndata_filters(Filters, Datas, Opts) ->\n\tdata_filters(Filters, Datas, [], Opts).\n\ndata_filters_test() ->\n\t?assertEqual(\n\t\t[],\n\t\tdata_filters([], [], #{})\n\t),\n\t?assertEqual(\n\t\t[#{}],\n\t\tdata_filters([], [#{}], #{})\n\t),\n\t?assertEqual(\n\t\t[],\n\t\tdata_filters([{'is_integer', a}], [], #{})\n\t),\n\t?assertEqual(\n\t\t[],\n\t\tdata_filters([{'=:=', a, 1}], [#{ a => 2 }], #{})\n\t),\n\t?assertEqual([\n\t\t#{ a => 1 }],\n\t\tdata_filters([{'=:=', a, 1}], [#{ a => 1 }], #{})\n\t).\n\ndata_filters(_Filters, [], Buffer, _Opts) -> lists:reverse(Buffer);\ndata_filters(Filters, [H|T], Buffer, Opts) ->\n\tcase filters(Filters, H, Opts) of\n\t\t{false, _} -> data_filters(Filters, T, Buffer, Opts);\n\t\t{true, _} -> data_filters(Filters, T, [H|Buffer], Opts)\n\tend.\n\nfilters([], Data, _Opts) ->\n\t{true, Data};\nfilters([Filter|Rest], Data, Opts) ->\n\tcase filter(Filter, Data, Opts) of\n\t\ttrue ->\n\t\t\tfilters(Rest, Data, Opts);\n\t\tfalse ->\n\t\t\t{false, Data}\n\tend.\n\nfilter(Filter, Data, Opts) ->\n\tCheckKey = erlang:element(2, Filter),\n\tcase get(CheckKey, Data) of\n\t\t{ok, Value} ->\n\t\t\tfilter2(Filter, [Value], Data, Opts);\n\t\t{error, _} ->\n\t\t\tfalse\n\tend.\n\nfilter2(Filter = {Operator, _CheckKey}, [Value], Data, Opts) ->\n\tResult = erlang:apply(erlang, Operator, [Value]),\n\tfilter3(Filter, [Result, Value], Data, Opts);\nfilter2(Filter = {Operator, _CheckKey, CheckValue}, [Value], Data, Opts) ->\n\tResult = erlang:apply(erlang, Operator, [CheckValue, Value]),\n\tfilter3(Filter, [Result, Value], Data, Opts).\n\nfilter3(_Filter, [Result|_], _Data, #{ reverse := true }) ->\n\tnot Result;\nfilter3(_Filter, [Result|_], _Data, _Opts) ->\n\tResult.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc recursive maps value extractor.\n%% @end\n%%--------------------------------------------------------------------\n-spec get(Key, Map) -> Return when\n\tKey :: [term()] | term(),\n\tMap :: map(),\n\tReturn :: {ok, term()} | {error, term()}.\n\nget(Key, Map)\n\twhen not is_list(Key), is_map(Map) ->\n\t\t{ok, maps:get(Key, Map)};\nget([Key], Map)\n\twhen is_map(Map) ->\n\t\t{ok, maps:get(Key, Map)};\nget([Key|Rest], Map)\n\twhen is_map(Map) ->\n\t\tValue = maps:get(Key, Map),\n\t\tget(Rest, Value);\nget(_, _) ->\n\t{error, not_found}.\n\nget_test() ->\n\t?assertEqual(\n\t\t{error, not_found},\n\t\tget(1, [])\n\t),\n\t?assertEqual(\n\t\t{ok, 1},\n\t\tget(a, #{ a => 1 })\n\t),\n\t?assertEqual(\n\t\t{ok, 1},\n\t\tget([a,b], #{ a => #{ b => 1 } })\n\t),\n\t?assertEqual(\n\t\t{error, not_found},\n\t\tget([a,b,c], #{ a => #{ b => 1 } })\n\t).\n"
  },
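`ar_shutdown_manager:connections/1` narrows the socket-info maps with a small filter spec: each filter is `{Op, Key}` or `{Op, Key, Value}`, where `Op` names a function in the `erlang` module, `Key` is either a top-level map key or a path (list of keys) into the nested socket-info map, and `Value` is passed as the first argument to `Op`. The sketch below shows the intended call shape; the port number is only an illustrative assumption, and the nested-path filter duplicates one of the defaults that `connections/1` always applies, purely to demonstrate the path form.

```erlang
%% Hypothetical shell sketch: list established TCP sockets whose local port is
%% 1984 (an assumed HTTP port), then print their formatted peer "addr:port" strings.
Filters = [
	{'=:=', sock_port, 1984},          %% top-level key of the socket-info map
	{'=/=', [info, states], [closed]}  %% nested path into the inet:info/1 map
],
Connections = ar_shutdown_manager:connections(#{ filters => Filters }),
[maps:get(peer, C) || C <- Connections].
```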
  {
    "path": "apps/arweave/src/ar_storage.erl",
    "content": "-module(ar_storage).\n\n-behaviour(gen_server).\n\n-export([start_link/0, read_block_index/0, read_block_index/1, read_reward_history/1, read_reward_history/2,\n\t\tread_block_time_history/2, read_block_time_history/3,\n\t\tstore_block_index/1, update_block_index/3,\n\t\tstore_reward_history_part/1, store_reward_history_part2/1,\n\t\tstore_block_time_history_part/2, store_block_time_history_part2/1,\n\t\twrite_full_block/2, read_block/1, read_block/2, read_block/3, write_tx/1,\n\t\tread_tx/1, read_tx/2, read_tx_data/1, read_tx_data/2, update_confirmation_index/1, get_tx_confirmation_data/1,\n\t\tread_wallet_list/1, read_wallet_list/2, write_wallet_list/2,\n\t\tdelete_blacklisted_tx/1, lookup_tx_filename/1, lookup_tx_filename/2, open_databases/0,\n\t\topen_start_from_state_databases/1, close_start_from_state_databases/0,\n\t\twallet_list_filepath/1, wallet_list_filepath/2, tx_filepath/1, tx_filepath/2,\n\t\ttx_data_filepath/1, tx_data_filepath/2, read_tx_file/1,\n\t\tread_migrated_v1_tx_file/1, read_migrated_v1_tx_file/2, ensure_directories/1, write_file_atomic/2,\n\t\twrite_tx_data/3,\n\t\twrite_term/2, write_term/3, read_term/1, read_term/2, delete_term/1, is_file/1,\n\t\tmigrate_tx_record/1, migrate_block_record/1, read_account/2, read_account/4, read_block_from_file/2]).\n\n-export([init/1, handle_cast/2, handle_call/3, handle_info/2, terminate/2]).\n\n-include(\"ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include(\"ar_wallets.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n-include_lib(\"kernel/include/file.hrl\").\n\n-record(state, {}).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\n%% @doc Read the entire stored block index.\nread_block_index() ->\n\tread_block_index(not_set).\n\nread_block_index(CustomDir) ->\n\tcase block_index_tip(CustomDir) of\n\t\tnot_found ->\n\t\t\tnot_found;\n\t\t{Height, {_H, _, _, _PrevH}} ->\n\t\t\t{ok, Map} = ar_kv:get_range(get_db_name(block_index_db, CustomDir),\n\t\t\t\t\t<< 0:256 >>, << Height:256 >>),\n\t\t\tread_block_index_from_map(Map, 0, Height, <<>>, [])\n\tend.\n\nread_block_index_from_map(_Map, Height, End, _PrevH, BI) when Height > End ->\n\tBI;\nread_block_index_from_map(Map, Height, End, PrevH, BI) ->\n\tV = maps:get(<< Height:256 >>, Map, not_found),\n\tcase V of\n\t\tnot_found ->\n\t\t\tar:console(\"The stored block index is invalid. Height ~B not found.~n\", [Height]),\n\t\t\tnot_found;\n\t\t_ ->\n\t\t\tcase binary_to_term(V, [safe]) of\n\t\t\t\t{H, WeaveSize, TXRoot, PrevH} ->\n\t\t\t\t\tread_block_index_from_map(Map, Height + 1, End, H, [{H, WeaveSize, TXRoot} | BI]);\n\t\t\t\t{_, _, _, PrevH2} ->\n\t\t\t\t\tar:console(\"The stored block index is invalid. 
Height: ~B, \"\n\t\t\t\t\t\t\t\"stored previous hash: ~s, expected previous hash: ~s.~n\",\n\t\t\t\t\t\t\t[Height, ar_util:encode(PrevH2), ar_util:encode(PrevH)]),\n\t\t\t\t\tnot_found\n\t\t\tend\n\tend.\n\n%% @doc Return the reward history for the given block index part or not_found.\nread_reward_history(BI) ->\n\tread_reward_history(BI, not_set).\n\nread_reward_history([], _CustomDir) ->\n\t[];\nread_reward_history([{H, _WeaveSize, _TXRoot} | BI], CustomDir) ->\n\tcase read_reward_history(BI, CustomDir) of\n\t\tnot_found ->\n\t\t\tnot_found;\n\t\tHistory ->\n\t\t\tcase ar_kv:get(get_db_name(reward_history_db, CustomDir), H) of\n\t\t\t\tnot_found ->\n\t\t\t\t\t?LOG_DEBUG([{event, read_reward_history_not_found},\n\t\t\t\t\t\t\t{reason, missing_block},\n\t\t\t\t\t\t\t{block, ar_util:encode(H)}]),\n\t\t\t\t\tnot_found;\n\t\t\t\t{ok, Bin} ->\n\t\t\t\t\tElement = binary_to_term(Bin, [safe]),\n\t\t\t\t\t[Element | History]\n\t\t\tend\n\tend.\n\n%% @doc Return the block time history for the given block index part or not_found.\nread_block_time_history(Height, BI) ->\n\tread_block_time_history(Height, BI, not_set).\n\nread_block_time_history(_Height, [], _CustomDir) ->\n\t[];\nread_block_time_history(Height, [{H, _WeaveSize, _TXRoot} | BI], CustomDir) ->\n\tcase Height < ar_fork:height_2_7() of\n\ttrue ->\n\t\t\t[];\n\t\tfalse ->\n\t\t\tcase read_block_time_history(Height - 1, BI, CustomDir) of\n\t\t\t\tnot_found ->\n\t\t\t\t\tnot_found;\n\t\t\t\tHistory ->\n\t\t\t\t\tcase ar_kv:get(get_db_name(block_time_history_db, CustomDir), H) of\n\t\t\t\t\t\tnot_found ->\n\t\t\t\t\t\t\tnot_found;\n\t\t\t\t\t\t{ok, Bin} ->\n\t\t\t\t\t\t\tElement = binary_to_term(Bin, [safe]),\n\t\t\t\t\t\t\t[Element | History]\n\t\t\t\t\tend\n\t\t\tend\n\tend.\n\n%% @doc Record the entire block index on disk.\n%% Return {error, block_index_no_recent_intersection} if the local state forks away\n%% at more than ar_block:get_consensus_window_size() blocks ago.\nstore_block_index(BI) ->\n\t%% Use a key that is bigger than any << Height:256 >> (<<\"a\">> > << Height:256 >>)\n\t%% to retrieve the largest stored Height.\n\tNewHeight = length(BI) - 1,\n\tcase ar_kv:get_prev(block_index_db, <<\"a\">>) of\n\t\tnone ->\n\t\t\tupdate_block_index(NewHeight, 0, lists:reverse(BI));\n\t\t{ok, << StoredHeight:256 >>, _V} ->\n\t\t\t%% RootHeight should a historical height shared by both the stored BI and the\n\t\t\t%% new BI\n\t\t\tRootHeight = max(0, min(StoredHeight, NewHeight) - ar_block:get_consensus_window_size()),\n\t\t\t{ok, V} = ar_kv:get(block_index_db, << RootHeight:256 >>),\n\t\t\t{H, WeaveSize, TXRoot} = lists:nth(NewHeight - RootHeight + 1, BI),\n\t\tcase binary_to_term(V, [safe]) of\n\t\t\t{H, WeaveSize, TXRoot, _PrevH} ->\n\t\t\t\tBI2 = lists:reverse(lists:sublist(BI, NewHeight - RootHeight)),\n\t\t\t\tupdate_block_index(NewHeight, StoredHeight - RootHeight, BI2);\n\t\t\t{H2, _, _, _} ->\n\t\t\t\t\t?LOG_ERROR([{event, failed_to_store_block_index},\n\t\t\t\t\t\t\t{reason, no_intersection},\n\t\t\t\t\t\t\t{height, RootHeight},\n\t\t\t\t\t\t\t{stored_hash, ar_util:encode(H2)},\n\t\t\t\t\t\t\t{expected_hash, ar_util:encode(H)}]),\n\t\t\t\t\t{error, block_index_no_recent_intersection}\n\t\t\tend;\n\t\tError ->\n\t\t\tError\n\tend.\n\n%% @doc Record the block index update on disk. 
Remove the orphans, if any.\n\nupdate_block_index(NewTipHeight, OrphanCount, _BI)\n\t\twhen NewTipHeight < 0 orelse OrphanCount < 0 ->\n\t{error, badarg};\nupdate_block_index(NewTipHeight, OrphanCount, BI) ->\n\t%% Record the contents of BI starting at this height - the entry at IndexHeight will\n\t%% be the first entry written (perhaps replacing an existing entry at that height)\n\tCurTipHeight = case block_index_tip() of\n\t\tnot_found ->\n\t\t\t-1;\n\t\t{Height, _} ->\n\t\t\tHeight\n\tend,\n\t%% IndexHeight is by default one beyond the current tip, this only changes if we have\n\t%% orphans (which will be deleted).\n\tIndexHeight = (CurTipHeight + 1) - OrphanCount,\n\n\tcase IndexHeight + length(BI) - 1 == NewTipHeight of\n\t\ttrue ->\n\t\t\tupdate_block_index2(IndexHeight, OrphanCount, BI);\n\t\tfalse ->\n\t\t\t?LOG_ERROR([{event, failed_to_update_block_index},\n\t\t\t\t{reason, block_index_gap},\n\t\t\t\t{cur_tip_height, CurTipHeight},\n\t\t\t\t{new_tip_height, NewTipHeight},\n\t\t\t\t{index_height, IndexHeight},\n\t\t\t\t{orphan_count, OrphanCount},\n\t\t\t\t{block_count, length(BI)}]),\n\t\t\t{error, not_found}\n\tend.\n\nupdate_block_index2(IndexHeight, OrphanCount, BI) ->\n\t%% 1. Delete all the orphaned blocks from the block index\n\tcase ar_kv:delete_range(block_index_db,\n\t\t\t<< IndexHeight:256 >>, << (IndexHeight + OrphanCount):256 >>) of\n\t\tok ->\n\t\t\tcase IndexHeight of\n\t\t\t\t0 ->\n\t\t\t\t\tupdate_block_index3(0, <<>>, BI);\n\t\t\t\t_ ->\n\t\t\t\t\t%% 2. Add all the entries in BI to the block index\n\t\t\t\t\t%% BI will include the new tip block at the current height, as well as any new\n\t\t\t\t\t%% history blocks if the tip is on a new branch.\n\t\t\t\t\tcase ar_kv:get(block_index_db, << (IndexHeight - 1):256 >>) of\n\t\t\t\t\t\tnot_found ->\n\t\t\t\t\t\t\t?LOG_ERROR([{event, failed_to_update_block_index},\n\t\t\t\t\t\t\t\t\t{reason, prev_element_not_found},\n\t\t\t\t\t\t\t\t\t{prev_height, IndexHeight - 1}]),\n\t\t\t\t\t\t\t{error, not_found};\n\t\t\t\t\t\t{ok, Bin} ->\n\t\t\t\t\t\t\t{PrevH, _, _, _} = binary_to_term(Bin, [safe]),\n\t\t\t\t\t\t\tupdate_block_index3(IndexHeight, PrevH, BI)\n\t\t\t\t\tend\n\t\t\tend;\n\t\t{error, Error} ->\n\t\t\t?LOG_ERROR([{event, failed_to_update_block_index},\n\t\t\t\t\t{reason, failed_to_remove_orphaned_range},\n\t\t\t\t\t{range_start, IndexHeight},\n\t\t\t\t\t{range_end, IndexHeight + OrphanCount},\n\t\t\t\t\t{reason, io_lib:format(\"~p\", [Error])}]),\n\t\t\t{error, Error}\n\tend.\n\nupdate_block_index3(_Height, _PrevH, []) ->\n\tok;\nupdate_block_index3(Height, PrevH, [{H, WeaveSize, TXRoot} | BI]) ->\n\tBin = term_to_binary({H, WeaveSize, TXRoot, PrevH}),\n\tcase ar_kv:put(block_index_db, << Height:256 >>, Bin) of\n\t\tok ->\n\t\t\tupdate_block_index3(Height + 1, H, BI);\n\t\tError ->\n\t\t\t?LOG_ERROR([{event, failed_to_update_block_index},\n\t\t\t\t\t{height, Height},\n\t\t\t\t\t{reason, io_lib:format(\"~p\", [Error])}]),\n\t\t\t{error, Error}\n\tend.\n\nstore_reward_history_part([]) ->\n\tok;\nstore_reward_history_part(Blocks) ->\n\tstore_reward_history_part2([{B#block.indep_hash, {B#block.reward_addr,\n\t\t\tar_difficulty:get_hash_rate_fixed_ratio(B), B#block.reward,\n\t\t\tB#block.denomination}} || B <- Blocks]).\n\nstore_reward_history_part2([]) ->\n\tok;\nstore_reward_history_part2([{H, El} | History]) ->\n\tBin = term_to_binary(El),\n\tcase ar_kv:put(reward_history_db, H, Bin) of\n\t\tok ->\n\t\t\tstore_reward_history_part2(History);\n\t\tError ->\n\t\t\t?LOG_ERROR([{event, 
failed_to_update_reward_history},\n\t\t\t\t\t{reason, io_lib:format(\"~p\", [Error])},\n\t\t\t\t\t{block, ar_util:encode(H)}]),\n\t\t\t{error, not_found}\n\tend.\n\nstore_block_time_history_part([], _PrevB) ->\n\tok;\nstore_block_time_history_part(Blocks, PrevB) ->\n\tHistory = ar_block_time_history:get_history_from_blocks(Blocks, PrevB),\n\tstore_block_time_history_part2(History).\n\nstore_block_time_history_part2([]) ->\n\tok;\nstore_block_time_history_part2([{H, El} | History]) ->\n\tBin = term_to_binary(El),\n\tcase ar_kv:put(block_time_history_db, H, Bin) of\n\t\tok ->\n\t\t\tstore_block_time_history_part2(History);\n\t\tError ->\n\t\t\t?LOG_ERROR([{event, failed_to_update_block_time_history},\n\t\t\t\t\t{reason, io_lib:format(\"~p\", [Error])},\n\t\t\t\t\t{block, ar_util:encode(H)}]),\n\t\t\t{error, not_found}\n\tend.\n\n-if(?NETWORK_NAME == \"arweave.N.1\").\nwrite_full_block(#block{ height = 0 } = BShadow, TXs) ->\n\t%% Genesis transactions are stored in data/genesis_txs; they are part of the repository.\n\twrite_full_block2(BShadow, TXs);\nwrite_full_block(BShadow, TXs) ->\n\tcase update_confirmation_index(BShadow#block{ txs = TXs }) of\n\t\tok ->\n\t\t\tcase write_tx([TX || TX <- TXs, not is_blacklisted(TX)]) of\n\t\t\t\tok ->\n\t\t\t\t\twrite_full_block2(BShadow, TXs);\n\t\t\t\tError ->\n\t\t\t\t\tError\n\t\t\tend;\n\t\tError ->\n\t\t\tError\n\tend.\n-else.\nwrite_full_block(BShadow, TXs) ->\n\tcase update_confirmation_index(BShadow#block{ txs = TXs }) of\n\t\tok ->\n\t\t\tcase write_tx([TX || TX <- TXs, not is_blacklisted(TX)]) of\n\t\t\t\tok ->\n\t\t\t\t\twrite_full_block2(BShadow, TXs);\n\t\t\t\tError ->\n\t\t\t\t\tError\n\t\t\tend;\n\t\tError ->\n\t\t\tError\n\tend.\n-endif.\n\nis_blacklisted(#tx{ format = 2 }) ->\n\tfalse;\nis_blacklisted(#tx{ id = TXID }) ->\n\tar_tx_blacklist:is_tx_blacklisted(TXID).\n\nupdate_confirmation_index(B) ->\n\tput_tx_confirmation_data(B).\n\nput_tx_confirmation_data(B) ->\n\tData = term_to_binary({B#block.height, B#block.indep_hash}),\n\tlists:foldl(\n\t\tfun\t(TX, ok) ->\n\t\t\t\tar_kv:put(tx_confirmation_db, TX#tx.id, Data);\n\t\t\t(_TX, Acc) ->\n\t\t\t\tAcc\n\t\tend,\n\t\tok,\n\t\tB#block.txs\n\t).\n\n%% @doc Return {BlockHeight, BlockHash} belonging to the block where\n%% the given transaction was included.\nget_tx_confirmation_data(TXID) ->\n\tcase ar_kv:get(tx_confirmation_db, TXID) of\n\t\t{ok, Binary} ->\n\t\t\t{ok, binary_to_term(Binary, [safe])};\n\t\tnot_found ->\n\t\t\tnot_found\n\tend.\n\n%% @doc Read a block from disk, given a height\n%% and a block index (used to determine the hash by height).\nread_block(Height, BI) when is_integer(Height) ->\n\tread_block(Height, BI, not_set);\nread_block(B, _CustomDir) when is_record(B, block) ->\n\tB;\nread_block(unavailable, _CustomDir) ->\n\tunavailable;\nread_block(Blocks, CustomDir) when is_list(Blocks) ->\n\tlists:map(fun(B) -> read_block(B, CustomDir) end, Blocks);\nread_block({H, _, _}, CustomDir) ->\n\tread_block(H, CustomDir);\nread_block(BH, CustomDir) ->\n\tcase ar_disk_cache:lookup_block_filename(BH, CustomDir) of\n\t\t{ok, {Filename, Encoding}} ->\n\t\t\t%% The cache keeps a rotated number of recent headers when the\n\t\t\t%% node is out of disk space.\n\t\t\tread_block_from_file(Filename, Encoding);\n\t\t_ ->\n\t\t\tcase ar_kv:get(get_db_name(block_db, CustomDir), BH) of\n\t\t\t\tnot_found ->\n\t\t\t\t\tcase lookup_block_filename(BH, CustomDir) of\n\t\t\t\t\t\tunavailable ->\n\t\t\t\t\t\t\tunavailable;\n\t\t\t\t\t\t{Filename, Encoding} 
->\n\t\t\t\t\t\t\tread_block_from_file(Filename, Encoding)\n\t\t\t\t\tend;\n\t\t\t\t{ok, V} ->\n\t\t\t\t\tparse_block_kv_binary(V);\n\t\t\t\t{error, Reason} ->\n\t\t\t\t\t?LOG_WARNING([{event, error_reading_block_from_kv_storage},\n\t\t\t\t\t\t\t{block, ar_util:encode(BH)},\n\t\t\t\t\t\t\t{error, io_lib:format(\"~p\", [Reason])}]),\n\t\t\t\t\tunavailable\n\t\t\tend\n\tend.\n\nread_block(Height, BI, CustomDir) when is_integer(Height) ->\n\tcase Height of\n\t\t_ when Height < 0 ->\n\t\t\tunavailable;\n\t\t_ when Height > length(BI) - 1 ->\n\t\t\tunavailable;\n\t\t_ ->\n\t\t\t{H, _, _} = lists:nth(length(BI) - Height, BI),\n\t\t\tread_block(H, CustomDir)\n\tend.\n\nread_block(B) ->\n\tread_block(B, not_set).\n\n%% @doc Read the account information for the given address and\n%% root hash of the account tree. Return {0, <<>>} if the given address does not belong\n%% to the tree. The balance may be also 0 when the address exists in the tree. Return\n%% not_found if some of the files with the account data are missing.\nread_account(Addr, Key) ->\n\tread_account(Addr, Key, <<>>, not_set).\n\nread_account(Addr, Key, Prefix, CustomDir) ->\n\tcase get_account_tree_value(Key, Prefix, CustomDir) of\n\t\t{ok, << Key:48/binary, _/binary >>, V} ->\n\t\t\tcase binary_to_term(V, [safe]) of\n\t\t\t\t{K, Val} when K == Addr ->\n\t\t\t\t\tVal;\n\t\t\t\t{_, _} ->\n\t\t\t\t\t{0, <<>>};\n\t\t\t\t[_ | _] = SubTrees ->\n\t\t\t\t\tcase find_key_by_matching_longest_prefix(Addr, SubTrees) of\n\t\t\t\t\t\tnot_found ->\n\t\t\t\t\t\t\t{0, <<>>};\n\t\t\t\t\t\t{H, Prefix2} ->\n\t\t\t\t\t\t\tread_account(Addr, H, Prefix2, CustomDir)\n\t\t\t\t\tend\n\t\t\tend;\n\t\t_ ->\n\t\t\tread_account2(Addr, Key, CustomDir)\n\tend.\n\nfind_key_by_matching_longest_prefix(Addr, Keys) ->\n\tfind_key_by_matching_longest_prefix(Addr, Keys, {<<>>, -1}).\n\nfind_key_by_matching_longest_prefix(_Addr, [], {Key, Prefix}) ->\n\tcase Key of\n\t\t<<>> ->\n\t\t\tnot_found;\n\t\t_ ->\n\t\t\t{Key, Prefix}\n\tend;\nfind_key_by_matching_longest_prefix(Addr, [{_, Prefix} | Keys], {Key, KeyPrefix})\n\t\twhen Prefix == <<>> orelse byte_size(Prefix) =< byte_size(KeyPrefix) ->\n\tfind_key_by_matching_longest_prefix(Addr, Keys, {Key, KeyPrefix});\nfind_key_by_matching_longest_prefix(Addr, [{H, Prefix} | Keys], {Key, KeyPrefix}) ->\n\tcase binary:match(Addr, Prefix) of\n\t\t{0, _} ->\n\t\t\tfind_key_by_matching_longest_prefix(Addr, Keys, {H, Prefix});\n\t\t_ ->\n\t\t\tfind_key_by_matching_longest_prefix(Addr, Keys, {Key, KeyPrefix})\n\tend.\n\nread_account2(Addr, RootHash, CustomDir) ->\n\t%% Unfortunately, we do not have an easy access to the information about how many\n\t%% accounts there were in the given tree so we perform the binary search starting\n\t%% from the number in the latest block.\n\tSize = ar_wallets:get_size(),\n\tMaxFileCount = Size div ?WALLET_LIST_CHUNK_SIZE + 1,\n\tDir =\n\t\tcase CustomDir of\n\t\t\tnot_set ->\n\t\t\t\tget_data_dir();\n\t\t\t_ ->\n\t\t\t\tCustomDir\n\t\tend,\n\tread_account(Addr, RootHash, 0, MaxFileCount, Dir, false).\n\nread_account(_Addr, _RootHash, Left, Right, _DataDir, _RightFileFound) when Left == Right ->\n\tnot_found;\nread_account(Addr, RootHash, Left, Right, DataDir, RightFileFound) ->\n\tPos = Left + (Right - Left) div 2,\n\tFilepath = wallet_list_chunk_relative_filepath(Pos * ?WALLET_LIST_CHUNK_SIZE, RootHash),\n\tcase filelib:is_file(filename:join(DataDir, Filepath)) of\n\t\tfalse ->\n\t\t\tread_account(Addr, RootHash, Left, Pos, DataDir, false);\n\t\ttrue ->\n\t\t\t{ok, L} = 
ar_storage:read_term(Filepath),\n\t\t\tread_account2(Addr, RootHash, Pos, Left, Right, DataDir, L, RightFileFound)\n\tend.\n\nwallet_list_chunk_relative_filepath(Position, RootHash) ->\n\tbinary_to_list(iolist_to_binary([\n\t\t?WALLET_LIST_DIR,\n\t\t\"/\",\n\t\tar_util:encode(RootHash),\n\t\t\"-\",\n\t\tinteger_to_binary(Position),\n\t\t\"-\",\n\t\tinteger_to_binary(?WALLET_LIST_CHUNK_SIZE)\n\t])).\n\nread_account2(Addr, _RootHash, _Pos, _Left, _Right, _DataDir, [last, {LargestAddr, _} | _L],\n\t\t_RightFileFound) when Addr > LargestAddr ->\n\t{0, <<>>};\nread_account2(Addr, RootHash, Pos, Left, Right, DataDir, [last | L], RightFileFound) ->\n\tread_account2(Addr, RootHash, Pos, Left, Right, DataDir, L, RightFileFound);\nread_account2(Addr, RootHash, Pos, _Left, Right, DataDir, [{LargestAddr, _} | _L],\n\t\tRightFileFound) when Addr > LargestAddr ->\n\tcase Pos + 1 == Right of\n\t\ttrue ->\n\t\t\tcase RightFileFound of\n\t\t\t\ttrue ->\n\t\t\t\t\t{0, <<>>};\n\t\t\t\tfalse ->\n\t\t\t\t\tnot_found\n\t\t\tend;\n\t\tfalse ->\n\t\t\tread_account(Addr, RootHash, Pos, Right, DataDir, RightFileFound)\n\tend;\nread_account2(Addr, RootHash, Pos, Left, _Right, DataDir, L, _RightFileFound) ->\n\tcase Addr < element(1, lists:last(L)) of\n\t\ttrue ->\n\t\t\tcase Pos == Left of\n\t\t\t\ttrue ->\n\t\t\t\t\t{0, <<>>};\n\t\t\t\tfalse ->\n\t\t\t\t\tread_account(Addr, RootHash, Left, Pos, DataDir, true)\n\t\t\tend;\n\t\tfalse ->\n\t\t\tcase lists:search(fun({Addr2, _}) -> Addr2 == Addr end, L) of\n\t\t\t\t{value, {Addr, Data}} ->\n\t\t\t\t\tData;\n\t\t\t\tfalse ->\n\t\t\t\t\t{0, <<>>}\n\t\t\tend\n\tend.\n\nlookup_block_filename(H) ->\n\tlookup_block_filename(H, not_set).\n\nlookup_block_filename(H, CustomDir) ->\n\tDir =\n\t\tcase CustomDir of\n\t\t\tnot_set ->\n\t\t\t\tget_data_dir();\n\t\t\t_ ->\n\t\t\t\tCustomDir\n\t\tend,\n\tName = filename:join([Dir, ?BLOCK_DIR, binary_to_list(ar_util:encode(H))]),\n\tNameJSON = iolist_to_binary([Name, \".json\"]),\n\tcase is_file(NameJSON) of\n\t\ttrue ->\n\t\t\t{NameJSON, json};\n\t\tfalse ->\n\t\t\tNameBin = iolist_to_binary([Name, \".bin\"]),\n\t\t\tcase is_file(NameBin) of\n\t\t\t\ttrue ->\n\t\t\t\t\t{NameBin, binary};\n\t\t\t\tfalse ->\n\t\t\t\t\tunavailable\n\t\t\tend\n\tend.\n\n%% @doc Delete the blacklisted tx with the given hash from disk. Return {ok, BytesRemoved} if\n%% the removal is successful or the file does not exist. The reported number of removed\n%% bytes does not include the migrated v1 data. The removal of migrated v1 data is requested\n%% from ar_data_sync asynchronously. 
The v2 headers are not removed.\ndelete_blacklisted_tx(Hash) ->\n\tcase ar_kv:get(tx_db, Hash) of\n\t\t{ok, V} ->\n\t\t\tTX = parse_tx_kv_binary(V),\n\t\t\tcase TX#tx.format == 1 andalso TX#tx.data_size > 0 of\n\t\t\t\ttrue ->\n\t\t\t\t\tcase ar_kv:delete(tx_db, Hash) of\n\t\t\t\t\t\tok ->\n\t\t\t\t\t\t\t{ok, byte_size(V)};\n\t\t\t\t\t\tError ->\n\t\t\t\t\t\t\tError\n\t\t\t\t\tend;\n\t\t\t\t_ ->\n\t\t\t\t\t{ok, 0}\n\t\t\tend;\n\t\t{error, _} = DBError ->\n\t\t\tDBError;\n\t\tnot_found ->\n\t\t\tcase lookup_tx_filename(Hash) of\n\t\t\t\t{Status, Filename} ->\n\t\t\t\t\tcase Status of\n\t\t\t\t\t\tmigrated_v1 ->\n\t\t\t\t\t\t\tcase file:read_file_info(Filename) of\n\t\t\t\t\t\t\t\t{ok, FileInfo} ->\n\t\t\t\t\t\t\t\t\tcase file:delete(Filename) of\n\t\t\t\t\t\t\t\t\t\tok ->\n\t\t\t\t\t\t\t\t\t\t\t{ok, FileInfo#file_info.size};\n\t\t\t\t\t\t\t\t\t\tError ->\n\t\t\t\t\t\t\t\t\t\t\tError\n\t\t\t\t\t\t\t\t\tend;\n\t\t\t\t\t\t\t\tError ->\n\t\t\t\t\t\t\t\t\tError\n\t\t\t\t\t\t\tend;\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t{ok, 0}\n\t\t\t\t\tend;\n\t\t\t\tunavailable ->\n\t\t\t\t\t{ok, 0}\n\t\t\tend\n\tend.\n\nparse_tx_kv_binary(Bin) ->\n\tcase catch ar_serialize:binary_to_tx(Bin) of\n\t\t{ok, TX} ->\n\t\t\tTX;\n\t\t_ ->\n\t\t\tmigrate_tx_record(binary_to_term(Bin, [safe]))\n\tend.\n\n%% Convert the stored tx record to its latest state in the code\n%% (assign the default values to all missing fields). Since the version introducing\n%% the fork 2.6, the transactions are serialized via ar_serialize:tx_to_binary/1, which\n%% is maintained compatible with all past versions, so this code is only used\n%% on the nodes synced before the corresponding release.\nmigrate_tx_record(#tx{} = TX) ->\n\tTX;\nmigrate_tx_record({tx, Format, ID, LastTX, Owner, Tags, Target, Quantity, Data,\n\t\tDataSize, DataTree, DataRoot, Signature, Reward}) ->\n\t#tx{ format = Format, id = ID, last_tx = LastTX,\n\t\t\towner = Owner, tags = Tags, target = Target, quantity = Quantity,\n\t\t\tdata = Data, data_size = DataSize, data_root = DataRoot,\n\t\t\tsignature = Signature, signature_type = ?DEFAULT_KEY_TYPE,\n\t\t\treward = Reward, data_tree = DataTree,\n\t\t\towner_address = ar_wallet:to_address(Owner, ?DEFAULT_KEY_TYPE) }.\n\nparse_block_kv_binary(Bin) ->\n\tcase catch ar_serialize:binary_to_block(Bin) of\n\t\t{ok, B} ->\n\t\t\tB;\n\t\t_ ->\n\t\t\tmigrate_block_record(binary_to_term(Bin, [safe]))\n\tend.\n\n%% Convert the stored block record to its latest state in the code\n%% (assign the default values to all missing fields). 
Since the version introducing\n%% the fork 2.6, the blocks are serialized via ar_serialize:block_to_binary/1, which\n%% is maintained compatible with all past block versions, so this code is only used\n%% on the nodes synced before the corresponding release.\nmigrate_block_record(#block{} = B) ->\n\tB;\nmigrate_block_record({block, Nonce, PrevH, TS, Last, Diff, Height, Hash, H,\n\t\tTXs, TXRoot, TXTree, HL, HLMerkle, WL, RewardAddr, Tags, RewardPool,\n\t\tWeaveSize, BlockSize, CDiff, SizeTaggedTXs, PoA, Rate, ScheduledRate,\n\t\tPacking_2_5_Threshold, StrictDataSplitThreshold}) ->\n\tPoA_2 =\n\t\tcase PoA of\n\t\t\t{poa, O, TXPath, DataPath, Chunk} ->\n\t\t\t\t#poa{ option = O, tx_path = TXPath, data_path = DataPath,\n\t\t\t\t\t\tchunk = Chunk };\n\t\t\t#poa{} ->\n\t\t\t\tPoA\n\t\tend,\n\t#block{ nonce = Nonce, previous_block = PrevH, timestamp = TS,\n\t\t\tlast_retarget = Last, diff = Diff, height = Height, hash = Hash,\n\t\t\tindep_hash = H, txs = TXs, tx_root = TXRoot, tx_tree = TXTree,\n\t\t\thash_list = HL, hash_list_merkle = HLMerkle, wallet_list = WL,\n\t\t\treward_addr = RewardAddr, tags = Tags, reward_pool = RewardPool,\n\t\t\tweave_size = WeaveSize, block_size = BlockSize, cumulative_diff = CDiff,\n\t\t\tsize_tagged_txs = SizeTaggedTXs, poa = PoA_2, usd_to_ar_rate = Rate,\n\t\t\tscheduled_usd_to_ar_rate = ScheduledRate,\n\t\t\tpacking_2_5_threshold = Packing_2_5_Threshold,\n\t\t\tstrict_data_split_threshold = StrictDataSplitThreshold }.\n\nwrite_tx(TXs) when is_list(TXs) ->\n\tlists:foldl(\n\t\tfun (TX, ok) ->\n\t\t\t\twrite_tx(TX);\n\t\t\t(_TX, Acc) ->\n\t\t\t\tAcc\n\t\tend,\n\t\tok,\n\t\tTXs\n\t);\nwrite_tx(#tx{ format = Format, id = TXID } = TX) ->\n\tcase write_tx_header(TX) of\n\t\tok ->\n\t\t\tDataSize = byte_size(TX#tx.data),\n\t\t\tcase DataSize > 0 of\n\t\t\t\ttrue ->\n\t\t\t\t\tcase {DataSize == TX#tx.data_size, Format} of\n\t\t\t\t\t\t{false, 2} ->\n\t\t\t\t\t\t\t?LOG_ERROR([{event, failed_to_store_tx_data},\n\t\t\t\t\t\t\t\t\t{reason, size_mismatch}, {tx, ar_util:encode(TX#tx.id)}]),\n\t\t\t\t\t\t\tok;\n\t\t\t\t\t\t{true, 1} ->\n\t\t\t\t\t\t\tcase write_tx_data(no_expected_data_root, TX#tx.data, TXID) of\n\t\t\t\t\t\t\t\tok ->\n\t\t\t\t\t\t\t\t\tok;\n\t\t\t\t\t\t\t\t{error, Reason} ->\n\t\t\t\t\t\t\t\t\t?LOG_WARNING([{event, failed_to_store_tx_data},\n\t\t\t\t\t\t\t\t\t\t\t{reason, Reason}, {tx, ar_util:encode(TX#tx.id)}]),\n\t\t\t\t\t\t\t\t\t%% We have stored the data in the tx_db table\n\t\t\t\t\t\t\t\t\t%% so we return ok here.\n\t\t\t\t\t\t\t\t\tok\n\t\t\t\t\t\t\tend;\n\t\t\t\t\t\t{true, 2} ->\n\t\t\t\t\t\t\tcase ar_tx_blacklist:is_tx_blacklisted(TX#tx.id) of\n\t\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\t\tok;\n\t\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t\tcase write_tx_data(TX#tx.data_root, TX#tx.data, TXID) of\n\t\t\t\t\t\t\t\t\t\tok ->\n\t\t\t\t\t\t\t\t\t\t\tok;\n\t\t\t\t\t\t\t\t\t\t{error, Reason} ->\n\t\t\t\t\t\t\t\t\t\t\t%% v2 data is not part of the header. 
We have to\n\t\t\t\t\t\t\t\t\t\t\t%% report success here even if we failed to store\n\t\t\t\t\t\t\t\t\t\t\t%% the attached data.\n\t\t\t\t\t\t\t\t\t\t\t?LOG_WARNING([{event, failed_to_store_tx_data},\n\t\t\t\t\t\t\t\t\t\t\t\t\t{reason, Reason},\n\t\t\t\t\t\t\t\t\t\t\t\t\t{tx, ar_util:encode(TX#tx.id)}]),\n\t\t\t\t\t\t\t\t\t\t\tok\n\t\t\t\t\t\t\t\t\tend\n\t\t\t\t\t\t\tend\n\t\t\t\t\tend;\n\t\t\t\tfalse ->\n\t\t\t\t\tok\n\t\t\tend;\n\t\tNotOk ->\n\t\t\tNotOk\n\tend.\n\nwrite_tx_header(TX) ->\n\tTX2 =\n\t\tcase TX#tx.format of\n\t\t\t1 ->\n\t\t\t\tTX;\n\t\t\t_ ->\n\t\t\t\tTX#tx{ data = <<>> }\n\t\tend,\n\tar_kv:put(tx_db, TX#tx.id, ar_serialize:tx_to_binary(TX2)).\n\nwrite_tx_data(ExpectedDataRoot, Data, TXID) ->\n\tChunks = ar_tx:chunk_binary(?DATA_CHUNK_SIZE, Data),\n\tSizeTaggedChunks = ar_tx:chunks_to_size_tagged_chunks(Chunks),\n\tSizeTaggedChunkIDs = ar_tx:sized_chunks_to_sized_chunk_ids(SizeTaggedChunks),\n\tcase {ExpectedDataRoot, ar_merkle:generate_tree(SizeTaggedChunkIDs)} of\n\t\t{no_expected_data_root, {DataRoot, DataTree}} ->\n\t\t\twrite_tx_data(DataRoot, DataTree, Data, SizeTaggedChunks, TXID);\n\t\t{_, {ExpectedDataRoot, DataTree}} ->\n\t\t\twrite_tx_data(ExpectedDataRoot, DataTree, Data, SizeTaggedChunks, TXID);\n\t\t_ ->\n\t\t\t{error, [invalid_data_root]}\n\tend.\n\nwrite_tx_data(DataRoot, DataTree, Data, SizeTaggedChunks, TXID) ->\n\tErrors = lists:foldl(\n\t\tfun\n\t\t\t({<<>>, _}, Acc) ->\n\t\t\t\t%% Empty chunks are produced by ar_tx:chunk_binary/2, when\n\t\t\t\t%% the data is evenly split by the given chunk size. They are\n\t\t\t\t%% the last chunks of the corresponding transactions and have\n\t\t\t\t%% the same end offsets as their preceding chunks. They are never\n\t\t\t\t%% picked as recall chunks because recall byte has to be strictly\n\t\t\t\t%% smaller than the end offset. They are an artifact of the original\n\t\t\t\t%% chunking implementation. 
There is no value in storing them.\n\t\t\t\tAcc;\n\t\t\t({Chunk, Offset}, Acc) ->\n\t\t\t\tDataPath = ar_merkle:generate_path(DataRoot, Offset - 1, DataTree),\n\t\t\t\tTXSize = byte_size(Data),\n\t\t\t\tDiskPoolResult = ar_data_sync:add_chunk_to_disk_pool(\n\t\t\t\t\tDataRoot, DataPath, Chunk, Offset - 1, TXSize),\n\t\t\t\tcase DiskPoolResult of\n\t\t\t\t\tok ->\n\t\t\t\t\t\tAcc;\n\t\t\t\t\ttemporary ->\n\t\t\t\t\t\tAcc;\n\t\t\t\t\t{error, Reason} ->\n\t\t\t\t\t\t?LOG_WARNING([{event, failed_to_write_tx_chunk},\n\t\t\t\t\t\t\t\t{tx, ar_util:encode(TXID)},\n\t\t\t\t\t\t\t\t{reason, io_lib:format(\"~p\", [Reason])}]),\n\t\t\t\t\t\t[Reason | Acc]\n\t\t\t\tend\n\t\tend,\n\t\t[],\n\t\tSizeTaggedChunks\n\t),\n\tcase Errors of\n\t\t[] ->\n\t\t\tok;\n\t\t_ ->\n\t\t\t{error, Errors}\n\tend.\n\n%% @doc Read a tx from disk, given a hash.\n-spec read_tx(\n\tbinary() | #tx{} | [#tx{}]\n) -> unavailable | #tx{} | [#tx{} | unavailable].\nread_tx(TX) ->\n\tread_tx(TX, not_set).\n\nread_tx(unavailable, _CustomDir) ->\n\tunavailable;\nread_tx(TX, _CustomDir) when is_record(TX, tx) ->\n\tTX;\nread_tx(TXs, CustomDir) when is_list(TXs) ->\n\tlists:map(fun(TX) -> read_tx(TX, CustomDir) end, TXs);\nread_tx(ID, CustomDir) ->\n\tcase read_tx_from_disk_cache(ID, CustomDir) of\n\t\tunavailable ->\n\t\t\tread_tx2(ID, CustomDir);\n\t\tTX ->\n\t\t\tTX\n\tend.\n\nread_tx2(ID, CustomDir) ->\n\tcase ar_kv:get(get_db_name(tx_db, CustomDir), ID) of\n\t\tnot_found ->\n\t\t\tread_tx_from_file(ID, CustomDir);\n\t\t{ok, Binary} ->\n\t\t\tTX = parse_tx_kv_binary(Binary),\n\t\t\tTX2 = TX#tx{ owner_address = ar_tx:get_owner_address(TX) },\n\t\t\tcase TX2#tx.format == 1 andalso TX2#tx.data_size > 0\n\t\t\t\t\tandalso byte_size(TX2#tx.data) == 0 of\n\t\t\t\ttrue ->\n\t\t\t\t\tcase read_tx_data_from_kv_storage(TX2#tx.id, CustomDir) of\n\t\t\t\t\t\t{ok, Data} ->\n\t\t\t\t\t\t\tTX2#tx{ data = Data };\n\t\t\t\t\t\tError ->\n\t\t\t\t\t\t\t?LOG_WARNING([{event, error_reading_tx_from_kv_storage},\n\t\t\t\t\t\t\t\t\t{tx, ar_util:encode(ID)},\n\t\t\t\t\t\t\t\t\t{error, io_lib:format(\"~p\", [Error])}]),\n\t\t\t\t\t\t\tunavailable\n\t\t\t\t\tend;\n\t\t\t\t_ ->\n\t\t\t\t\tTX2\n\t\t\tend\n\tend.\n\nread_tx_from_disk_cache(ID, CustomDir) ->\n\tcase ar_disk_cache:lookup_tx_filename(ID, CustomDir) of\n\t\tunavailable ->\n\t\t\tunavailable;\n\t\t{ok, Filename} ->\n\t\t\tcase read_tx_file(Filename) of\n\t\t\t\t{ok, TX} ->\n\t\t\t\t\tTX;\n\t\t\t\t_Error ->\n\t\t\t\t\tunavailable\n\t\t\tend\n\tend.\n\nread_tx_from_file(ID, CustomDir) ->\n\tcase lookup_tx_filename(ID, CustomDir) of\n\t\t{ok, Filename} ->\n\t\t\tcase read_tx_file(Filename) of\n\t\t\t\t{ok, TX} ->\n\t\t\t\t\tTX;\n\t\t\t\t_Error ->\n\t\t\t\t\tunavailable\n\t\t\tend;\n\t\t{migrated_v1, Filename} ->\n\t\t\tcase read_migrated_v1_tx_file(Filename, CustomDir) of\n\t\t\t\t{ok, TX} ->\n\t\t\t\t\tTX;\n\t\t\t\t_Error ->\n\t\t\t\t\tunavailable\n\t\t\tend;\n\t\tunavailable ->\n\t\t\tunavailable\n\tend.\n\nread_tx_file(Filename) ->\n\tcase read_file_raw(Filename) of\n\t\t{ok, <<>>} ->\n\t\t\tfile:delete(Filename),\n\t\t\t?LOG_WARNING([{event, empty_tx_file},\n\t\t\t\t\t{filename, Filename}]),\n\t\t\t{error, tx_file_empty};\n\t\t{ok, Binary} ->\n\t\t\tcase catch ar_serialize:json_struct_to_tx(Binary) of\n\t\t\t\tTX when is_record(TX, tx) ->\n\t\t\t\t\t{ok, TX};\n\t\t\t\t_ ->\n\t\t\t\t\tfile:delete(Filename),\n\t\t\t\t\t?LOG_WARNING([{event, failed_to_parse_tx},\n\t\t\t\t\t\t\t{filename, Filename}]),\n\t\t\t\t\t{error, failed_to_parse_tx}\n\t\t\tend;\n\t\tError 
->\n\t\t\tError\n\tend.\n\nread_file_raw(Filename) ->\n\tcase file:open(Filename, [read, raw, binary]) of\n\t\t{ok, File} ->\n\t\t\tcase file:read(File, 20000000) of\n\t\t\t\t{ok, Bin} ->\n\t\t\t\t\tfile:close(File),\n\t\t\t\t\t{ok, Bin};\n\t\t\t\tError ->\n\t\t\t\t\tError\n\t\t\tend;\n\t\tError ->\n\t\t\tError\n\tend.\n\nread_migrated_v1_tx_file(Filename) ->\n\tread_migrated_v1_tx_file(Filename, not_set).\n\nread_migrated_v1_tx_file(Filename, CustomDir) ->\n\tcase read_file_raw(Filename) of\n\t\t{ok, Binary} ->\n\t\t\tcase catch ar_serialize:json_struct_to_v1_tx(Binary) of\n\t\t\t\t#tx{ id = ID } = TX ->\n\t\t\t\t\tcase read_tx_data_from_kv_storage(ID, CustomDir) of\n\t\t\t\t\t\t{ok, Data} ->\n\t\t\t\t\t\t\t{ok, TX#tx{ data = Data }};\n\t\t\t\t\t\tError ->\n\t\t\t\t\t\t\tError\n\t\t\t\t\tend\n\t\t\tend;\n\t\tError ->\n\t\t\tError\n\tend.\n\nread_tx_data_from_kv_storage(ID) ->\n\tread_tx_data_from_kv_storage(ID, not_set).\n\nread_tx_data_from_kv_storage(ID, _CustomDir) ->\n\tcase ar_data_sync:get_tx_data(ID) of\n\t\t{ok, Data} ->\n\t\t\t{ok, Data};\n\t\t{error, not_found} ->\n\t\t\t{error, data_unavailable};\n\t\t{error, timeout} ->\n\t\t\t{error, data_fetch_timeout};\n\t\tError ->\n\t\t\tError\n\tend.\n\nread_tx_data(TX) ->\n\tread_tx_data(TX, not_set).\n\nread_tx_data(TX, CustomDir) ->\n\tcase read_file_raw(tx_data_filepath(TX, CustomDir)) of\n\t\t{ok, Data} ->\n\t\t\t{ok, ar_util:decode(Data)};\n\t\tError ->\n\t\t\tError\n\tend.\n\nwrite_wallet_list(Height, Tree) ->\n\t{RootHash, _UpdatedTree, UpdateMap} = ar_block:hash_wallet_list(Tree),\n\tstore_account_tree_update(Height, RootHash, UpdateMap),\n\terlang:garbage_collect(),\n\tRootHash.\n\n%% @doc Read a given wallet list (by hash) from the disk.\nread_wallet_list(WalletListHash) ->\n\tread_wallet_list(WalletListHash, not_set).\n\nread_wallet_list(<<>>, _CustomDir) ->\n\t{ok, ar_patricia_tree:new()};\nread_wallet_list(WalletListHash, CustomDir) when is_binary(WalletListHash) ->\n\tKey = WalletListHash,\n\tread_wallet_list(get_account_tree_value(Key, <<>>, CustomDir), ar_patricia_tree:new(), [],\n\t\t\tWalletListHash, WalletListHash, CustomDir).\n\nread_wallet_list({ok, << K:48/binary, _/binary >>, Bin}, Tree, Keys, RootHash, K, CustomDir) ->\n\tcase binary_to_term(Bin, [safe]) of\n\t\t{Key, Value} ->\n\t\t\tTree2 = ar_patricia_tree:insert(Key, Value, Tree),\n\t\t\tcase Keys of\n\t\t\t\t[] ->\n\t\t\t\t\t{ok, Tree2};\n\t\t\t\t[{H, Prefix} | Keys2] ->\n\t\t\t\t\tread_wallet_list(get_account_tree_value(H, Prefix, CustomDir), Tree2, Keys2,\n\t\t\t\t\t\t\tRootHash, H, CustomDir)\n\t\t\tend;\n\t\t[{H, Prefix} | Hs] ->\n\t\t\tread_wallet_list(get_account_tree_value(H, Prefix, CustomDir), Tree, Hs ++ Keys, RootHash,\n\t\t\t\t\tH, CustomDir)\n\tend;\nread_wallet_list({ok, _, _}, _Tree, _Keys, RootHash, _K, CustomDir) ->\n\tread_wallet_list_from_chunk_files(RootHash, CustomDir);\nread_wallet_list(none, _Tree, _Keys, RootHash, _K, CustomDir) ->\n\tread_wallet_list_from_chunk_files(RootHash, CustomDir);\nread_wallet_list(Error, _Tree, _Keys, _RootHash, _K, _CustomDir) ->\n\tError.\n\nread_wallet_list_from_chunk_files(WalletListHash, CustomDir) when is_binary(WalletListHash) ->\n\tcase read_wallet_list_chunk(WalletListHash, CustomDir) of\n\t\tnot_found ->\n\t\t\tFilename = wallet_list_filepath(WalletListHash, CustomDir),\n\t\t\tcase file:read_file(Filename) of\n\t\t\t\t{ok, JSON} ->\n\t\t\t\t\tparse_wallet_list_json(JSON);\n\t\t\t\t{error, enoent} ->\n\t\t\t\t\tnot_found;\n\t\t\t\tError ->\n\t\t\t\t\tError\n\t\t\tend;\n\t\t{ok, Tree} ->\n\t\t\t{ok, 
Tree};\n\t\t{error, _Reason} = Error ->\n\t\t\tError\n\tend;\nread_wallet_list_from_chunk_files(WL, _CustomDir) when is_list(WL) ->\n\t{ok, ar_patricia_tree:from_proplist([{get_wallet_key(T), get_wallet_value(T)}\n\t\t\t|| T <- WL])}.\n\nget_wallet_key(T) ->\n\telement(1, T).\n\nget_wallet_value({_, Balance, LastTX}) ->\n\t{Balance, LastTX};\nget_wallet_value({_, Balance, LastTX, Denomination, MiningPermission}) ->\n\t{Balance, LastTX, Denomination, MiningPermission}.\n\nread_wallet_list_chunk(RootHash, CustomDir) ->\n\tread_wallet_list_chunk(RootHash, 0, ar_patricia_tree:new(), CustomDir).\n\nread_wallet_list_chunk(RootHash, Position, Tree, CustomDir) ->\n\tDir =\n\t\tcase CustomDir of\n\t\t\tnot_set ->\n\t\t\t\tget_data_dir();\n\t\t\t_ ->\n\t\t\t\tCustomDir\n\t\tend,\n\tFilename =\n\t\tbinary_to_list(iolist_to_binary([\n\t\t\tDir,\n\t\t\t\"/\",\n\t\t\t?WALLET_LIST_DIR,\n\t\t\t\"/\",\n\t\t\tar_util:encode(RootHash),\n\t\t\t\"-\",\n\t\t\tinteger_to_binary(Position),\n\t\t\t\"-\",\n\t\t\tinteger_to_binary(?WALLET_LIST_CHUNK_SIZE)\n\t\t])),\n\tcase read_term(\".\", Filename) of\n\t\t{ok, Chunk} ->\n\t\t\t{NextPosition, Wallets} =\n\t\t\t\tcase Chunk of\n\t\t\t\t\t[last | Tail] ->\n\t\t\t\t\t\t{last, Tail};\n\t\t\t\t\t_ ->\n\t\t\t\t\t\t{Position + ?WALLET_LIST_CHUNK_SIZE, Chunk}\n\t\t\t\tend,\n\t\t\tTree2 =\n\t\t\t\tlists:foldl(\n\t\t\t\t\tfun({K, V}, Acc) -> ar_patricia_tree:insert(K, V, Acc) end,\n\t\t\t\t\tTree,\n\t\t\t\t\tWallets\n\t\t\t\t),\n\t\t\tcase NextPosition of\n\t\t\t\tlast ->\n\t\t\t\t\t{ok, Tree2};\n\t\t\t\t_ ->\n\t\t\t\t\tread_wallet_list_chunk(RootHash, NextPosition, Tree2, CustomDir)\n\t\t\tend;\n\t\t{error, Reason} = Error ->\n\t\t\t?LOG_ERROR([\n\t\t\t\t{event, failed_to_read_wallet_list_chunk},\n\t\t\t\t{reason, Reason}\n\t\t\t]),\n\t\t\tError;\n\t\tnot_found ->\n\t\t\tnot_found\n\tend.\n\nparse_wallet_list_json(JSON) ->\n\tcase ar_serialize:json_decode(JSON) of\n\t\t{ok, JiffyStruct} ->\n\t\t\t{ok, ar_serialize:json_struct_to_wallet_list(JiffyStruct)};\n\t\t{error, Reason} ->\n\t\t\t{error, {invalid_json, Reason}}\n\tend.\n\nlookup_tx_filename(ID) ->\n\tlookup_tx_filename(ID, not_set).\n\nlookup_tx_filename(ID, CustomDir) ->\n\tFilepath = tx_filepath(ID, CustomDir),\n\tcase is_file(Filepath) of\n\t\ttrue ->\n\t\t\t{ok, Filepath};\n\t\tfalse ->\n\t\t\tMigratedV1Path = filepath([?TX_DIR, \"migrated_v1\", tx_filename(ID)], CustomDir),\n\t\t\tcase is_file(MigratedV1Path) of\n\t\t\t\ttrue ->\n\t\t\t\t\t{migrated_v1, MigratedV1Path};\n\t\t\t\tfalse ->\n\t\t\t\t\tunavailable\n\t\t\tend\n\tend.\n\n%% @doc A quick way to lookup the file without using the Erlang file server.\n%% Helps take off some IO load during the busy times.\nis_file(Filepath) ->\n\tcase file:read_file_info(Filepath, [raw]) of\n\t\t{ok, #file_info{ type = Type }} when Type == regular orelse Type == symlink ->\n\t\t\ttrue;\n\t\t_ ->\n\t\t\tfalse\n\tend.\n\nread_block_from_file(Filename, Encoding) ->\n\tcase read_file_raw(Filename) of\n\t\t{ok, Bin} ->\n\t\t\tcase Encoding of\n\t\t\t\tjson ->\n\t\t\t\t\tparse_block_json(Bin);\n\t\t\t\tbinary ->\n\t\t\t\t\tparse_block_binary(Bin)\n\t\t\tend;\n\t\t{error, Reason} ->\n\t\t\t?LOG_WARNING([{event, error_reading_block},\n\t\t\t\t\t{error, io_lib:format(\"~p\", [Reason])}]),\n\t\t\tunavailable\n\tend.\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\n\ninit([]) ->\n\tprocess_flag(trap_exit, true),\n\t{ok, Config} = 
arweave_config:get_env(),\n\tensure_directories(Config#config.data_dir),\n\t%% Copy genesis transactions (snapshotted in the repo) into data_dir/txs\n\tar_weave:add_mainnet_v1_genesis_txs(),\n\topen_databases(),\n\tcase Config#config.start_from_state of\n\t\tnot_set ->\n\t\t\tok;\n\t\tCustomDir ->\n\t\t\topen_start_from_state_databases(CustomDir)\n\tend,\n\tets:insert(?MODULE, [{same_disk_storage_modules_total_size,\n\t\t\tget_same_disk_storage_modules_total_size()}]),\n\t{ok, #state{}}.\n\nopen_databases() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tDataDir = Config#config.data_dir,\n\tok = ar_kv:open(#{\n\t\tpath => filename:join([DataDir, ?ROCKS_DB_DIR, \"ar_storage_tx_confirmation_db\"]),\n\t\tname => tx_confirmation_db}),\n\tok = ar_kv:open(#{\n\t\tpath => filename:join([DataDir, ?ROCKS_DB_DIR, \"ar_storage_tx_db\"]),\n\t\tname => tx_db}),\n\tok = ar_kv:open(#{\n\t\tpath => filename:join([DataDir, ?ROCKS_DB_DIR, \"ar_storage_block_db\"]),\n\t\tname => block_db}),\n\tok = ar_kv:open(#{\n\t\tpath => filename:join([DataDir, ?ROCKS_DB_DIR, \"reward_history_db\"]),\n\t\tname => reward_history_db}),\n\tok = ar_kv:open(#{\n\t\tpath => filename:join([DataDir, ?ROCKS_DB_DIR, \"block_time_history_db\"]),\n\t\tname => block_time_history_db}),\n\tok = ar_kv:open(#{\n\t\tpath => filename:join([DataDir, ?ROCKS_DB_DIR, \"block_index_db\"]),\n\t\tname => block_index_db}),\n\tok = ar_kv:open(#{\n\t\tpath => filename:join([DataDir, ?ROCKS_DB_DIR, \"account_tree_db\"]),\n\t\tname => account_tree_db}).\n\nopen_start_from_state_databases(CustomDir) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tLogFilepath = filename:join([Config#config.log_dir, \"rocksdb\", \"start_from_state\"]),\n\tok = ar_kv:open_readonly(#{\n\t\tpath => filename:join([CustomDir, ?ROCKS_DB_DIR, \"ar_storage_tx_confirmation_db\"]),\n\t\tname => start_from_state_tx_confirmation_db,\n\t\tlog_path => LogFilepath}),\n\tok = ar_kv:open_readonly(#{\n\t\tpath => filename:join([CustomDir, ?ROCKS_DB_DIR, \"ar_storage_tx_db\"]),\n\t\tname => start_from_state_tx_db,\n\t\tlog_path => LogFilepath}),\n\tok = ar_kv:open_readonly(#{\n\t\tpath => filename:join([CustomDir, ?ROCKS_DB_DIR, \"ar_storage_block_db\"]),\n\t\tname => start_from_state_block_db,\n\t\tlog_path => LogFilepath}),\n\tok = ar_kv:open_readonly(#{\n\t\tpath => filename:join([CustomDir, ?ROCKS_DB_DIR, \"reward_history_db\"]),\n\t\tname => start_from_state_reward_history_db,\n\t\tlog_path => LogFilepath}),\n\tok = ar_kv:open_readonly(#{\n\t\tpath => filename:join([CustomDir, ?ROCKS_DB_DIR, \"block_time_history_db\"]),\n\t\tname => start_from_state_block_time_history_db,\n\t\tlog_path => LogFilepath}),\n\tok = ar_kv:open_readonly(#{\n\t\tpath => filename:join([CustomDir, ?ROCKS_DB_DIR, \"block_index_db\"]),\n\t\tname => start_from_state_block_index_db,\n\t\tlog_path => LogFilepath}),\n\tok = ar_kv:open_readonly(#{\n\t\tpath => filename:join([CustomDir, ?ROCKS_DB_DIR, \"account_tree_db\"]),\n\t\tname => start_from_state_account_tree_db,\n\t\tlog_path => LogFilepath}).\n\nclose_start_from_state_databases() ->\n\tar_kv:close(start_from_state_tx_confirmation_db),\n\tar_kv:close(start_from_state_tx_db),\n\tar_kv:close(start_from_state_block_db),\n\tar_kv:close(start_from_state_reward_history_db),\n\tar_kv:close(start_from_state_block_time_history_db),\n\tar_kv:close(start_from_state_block_index_db),\n\tar_kv:close(start_from_state_account_tree_db).\n\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {request, Request}]),\n\t{reply, ok, 
State}.\n\nhandle_cast({store_account_tree_update, Height, RootHash, Map}, State) ->\n\tstore_account_tree_update(Height, RootHash, Map),\n\terlang:garbage_collect(),\n\t{noreply, State};\n\nhandle_cast(Cast, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {cast, Cast}]),\n\t{noreply, State}.\n\nhandle_info(Message, State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {module, ?MODULE}, {message, Message}]),\n\t{noreply, State}.\n\nterminate(Reason, _State) ->\n\t?LOG_INFO([{module, ?MODULE},{pid, self()},{callback, terminate},{reason, Reason}]).\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nget_data_dir() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tConfig#config.data_dir.\n\nblock_index_tip() ->\n\tblock_index_tip(not_set).\n\nblock_index_tip(CustomDir) ->\n\t%% Use a key that is bigger than any << Height:256 >> (<<\"a\">> > << Height:256 >>)\n\t%% to retrieve the largest stored Height.\n\tcase ar_kv:get_prev(get_db_name(block_index_db, CustomDir), <<\"a\">>) of\n\t\tnone ->\n\t\t\tnot_found;\n\t\t{ok, << Height:256 >>, V} ->\n\t\t\t{Height, binary_to_term(V, [safe])}\n\tend.\n\nwrite_block(B) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tcase lists:member(disk_logging, Config#config.enable) of\n\t\ttrue ->\n\t\t\t?LOG_INFO([{event, writing_block_to_disk},\n\t\t\t\t\t{block, ar_util:encode(B#block.indep_hash)}]);\n\t\t_ ->\n\t\t\tdo_nothing\n\tend,\n\tTXIDs = lists:map(fun(TXID) when is_binary(TXID) -> TXID;\n\t\t\t(#tx{ id = TXID }) -> TXID end, B#block.txs),\n\tar_kv:put(block_db, B#block.indep_hash, ar_serialize:block_to_binary(B#block{\n\t\t\ttxs = TXIDs })).\n\nwrite_full_block2(BShadow, _) ->\n\tcase write_block(BShadow) of\n\t\tok ->\n\t\t\tok;\n\t\tError ->\n\t\t\tError\n\tend.\n\nparse_block_json(JSON) ->\n\tcase catch ar_serialize:json_decode(JSON) of\n\t\t{ok, JiffyStruct} ->\n\t\t\tcase catch ar_serialize:json_struct_to_block(JiffyStruct) of\n\t\t\t\tB when is_record(B, block) ->\n\t\t\t\t\tB;\n\t\t\t\tError ->\n\t\t\t\t\t?LOG_WARNING([{event, error_parsing_block_json},\n\t\t\t\t\t\t\t{error, io_lib:format(\"~p\", [Error])}]),\n\t\t\t\t\tunavailable\n\t\t\tend;\n\t\tError ->\n\t\t\t?LOG_WARNING([{event, error_parsing_block_json},\n\t\t\t\t\t{error, io_lib:format(\"~p\", [Error])}]),\n\t\t\tunavailable\n\tend.\n\nparse_block_binary(Bin) ->\n\tcase catch ar_serialize:binary_to_block(Bin) of\n\t\t{ok, B} ->\n\t\t\tB;\n\t\tError ->\n\t\t\t?LOG_WARNING([{event, error_parsing_block_bin},\n\t\t\t\t\t{error, io_lib:format(\"~p\", [Error])}]),\n\t\t\tunavailable\n\tend.\n\nfilepath(PathComponents) ->\n\tfilepath(PathComponents, not_set).\n\nfilepath(PathComponents, CustomDir) ->\n\tDataDir =\n\t\tcase CustomDir of\n\t\t\tnot_set ->\n\t\t\t\tget_data_dir();\n\t\t\t_ ->\n\t\t\t\tCustomDir\n\t\tend,\n\tto_string(filename:join([DataDir | PathComponents])).\n\nto_string(Bin) when is_binary(Bin) ->\n\tbinary_to_list(Bin);\nto_string(String) ->\n\tString.\n\n%% @doc Ensure that all of the relevant storage directories exist.\nensure_directories(DataDir) ->\n\t%% Append \"/\" to every path so that filelib:ensure_dir/1 creates a directory\n\t%% if it does not exist.\n\tfilelib:ensure_dir(filename:join(DataDir, ?TX_DIR) ++ \"/\"),\n\tfilelib:ensure_dir(filename:join(DataDir, ?BLOCK_DIR) ++ \"/\"),\n\tfilelib:ensure_dir(filename:join(DataDir, ?WALLET_LIST_DIR) ++ \"/\"),\n\tfilelib:ensure_dir(filename:join(DataDir, ?HASH_LIST_DIR) ++ 
\"/\"),\n\tfilelib:ensure_dir(filename:join([DataDir, ?TX_DIR, \"migrated_v1\"]) ++ \"/\").\n\nget_db_name(DBName, not_set) ->\n\tDBName;\nget_db_name(DBName, _CustomDir) ->\n\tlist_to_atom(\"start_from_state_\" ++ atom_to_list(DBName)).\n\nget_same_disk_storage_modules_total_size() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tDataDir = Config#config.data_dir,\n\t{ok, Info} = file:read_file_info(DataDir),\n\tDevice = Info#file_info.major_device,\n\tget_same_disk_storage_modules_total_size(0, Config#config.storage_modules, DataDir,\n\t\t\tDevice).\n\nget_same_disk_storage_modules_total_size(TotalSize, [], _DataDir, _Device) ->\n\tTotalSize;\nget_same_disk_storage_modules_total_size(TotalSize,\n\t\t[{Size, _Bucket, _Packing} = Module | StorageModules], DataDir, Device) ->\n\tPath = filename:join([DataDir, \"storage_modules\", ar_storage_module:id(Module)]),\n\tfilelib:ensure_dir(Path ++ \"/\"),\n\t{ok, Info} = file:read_file_info(Path),\n\tTotalSize2 =\n\t\tcase Info#file_info.major_device == Device of\n\t\t\ttrue ->\n\t\t\t\tTotalSize + Size;\n\t\t\tfalse ->\n\t\t\t\tTotalSize\n\t\tend,\n\tget_same_disk_storage_modules_total_size(TotalSize2, StorageModules, DataDir, Device).\n\ntx_filepath(TX) ->\n\ttx_filepath(TX, not_set).\n\ntx_filepath(TX, CustomDir) ->\n\tfilepath([?TX_DIR, tx_filename(TX)], CustomDir).\n\ntx_data_filepath(TX) ->\n\ttx_data_filepath(TX, not_set).\n\ntx_data_filepath(TX, CustomDir) when is_record(TX, tx) ->\n\ttx_data_filepath(TX#tx.id, CustomDir);\ntx_data_filepath(ID, CustomDir) ->\n\tfilepath([?TX_DIR, tx_data_filename(ID)], CustomDir).\n\ntx_filename(TX) when is_record(TX, tx) ->\n\ttx_filename(TX#tx.id);\ntx_filename(TXID) when is_binary(TXID) ->\n\tiolist_to_binary([ar_util:encode(TXID), \".json\"]).\n\ntx_data_filename(TXID) ->\n\tiolist_to_binary([ar_util:encode(TXID), \"_data.json\"]).\n\nwallet_list_filepath(Hash) ->\n\twallet_list_filepath(Hash, not_set).\n\nwallet_list_filepath(Hash, CustomDir) when is_binary(Hash) ->\n\tfilepath([?WALLET_LIST_DIR, iolist_to_binary([ar_util:encode(Hash), \".json\"])], CustomDir).\n\nwrite_file_atomic(Filename, Data) ->\n\tSwapFilename = Filename ++ \".swp\",\n\tcase file:open(SwapFilename, [write, raw]) of\n\t\t{ok, F} ->\n\t\t\tcase file:write(F, Data) of\n\t\t\t\tok ->\n\t\t\t\t\tcase file:close(F) of\n\t\t\t\t\t\tok ->\n\t\t\t\t\t\t\tfile:rename(SwapFilename, Filename);\n\t\t\t\t\t\tError ->\n\t\t\t\t\t\t\tError\n\t\t\t\t\tend;\n\t\t\t\tError ->\n\t\t\t\t\tError\n\t\t\tend;\n\t\tError ->\n\t\t\tError\n\tend.\n\nwrite_term(Name, Term) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tDataDir = Config#config.data_dir,\n\twrite_term(DataDir, Name, Term, override).\n\nwrite_term(Dir, Name, Term) when is_atom(Name) ->\n\twrite_term(Dir, atom_to_list(Name), Term, override);\nwrite_term(Dir, Name, Term) ->\n\twrite_term(Dir, Name, Term, override).\n\nwrite_term(Dir, Name, Term, Override) ->\n\tFilepath = filename:join(Dir, Name),\n\tcase Override == do_not_override andalso filelib:is_file(Filepath) of\n\t\ttrue ->\n\t\t\tok;\n\t\tfalse ->\n\t\t\tcase write_file_atomic(Filepath, term_to_binary(Term)) of\n\t\t\t\tok ->\n\t\t\t\t\tok;\n\t\t\t\t{error, Reason} = Error ->\n\t\t\t\t\t?LOG_ERROR([{event, failed_to_write_term}, {name, Name},\n\t\t\t\t\t\t\t{reason, Reason}]),\n\t\t\t\t\tError\n\t\t\tend\n\tend.\n\nread_term(Name) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tDataDir = Config#config.data_dir,\n\tread_term(DataDir, Name).\n\nread_term(Dir, Name) when is_atom(Name) ->\n\tread_term(Dir, 
atom_to_list(Name));\nread_term(Dir, Name) ->\n\tcase file:read_file(filename:join(Dir, Name)) of\n\t\t{ok, Binary} ->\n\t\t\t{ok, binary_to_term(Binary, [safe])};\n\t\t{error, enoent} ->\n\t\t\tnot_found;\n\t\t{error, Reason} = Error ->\n\t\t\t?LOG_ERROR([{event, failed_to_read_term}, {name, Name}, {reason, Reason}]),\n\t\t\tError\n\tend.\n\ndelete_term(Name) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tDataDir = Config#config.data_dir,\n\tfile:delete(filename:join(DataDir, atom_to_list(Name))).\n\nstore_account_tree_update(Height, RootHash, Map) ->\n\t?LOG_INFO([{event, storing_account_tree_update}, {updated_key_count, map_size(Map)},\n\t\t\t{height, Height}, {root_hash, ar_util:encode(RootHash)}]),\n\tmaps:map(\n\t\tfun({H, Prefix} = Key, Value) ->\n\t\t\tPrefix2 = case Prefix of root -> <<>>; _ -> Prefix end,\n\t\t\tDBKey = << H/binary, Prefix2/binary >>,\n\t\t\tcase ar_kv:get(account_tree_db, DBKey) of\n\t\t\t\tnot_found ->\n\t\t\t\t\tcase ar_kv:put(account_tree_db, DBKey, term_to_binary(Value)) of\n\t\t\t\t\t\tok ->\n\t\t\t\t\t\t\tok;\n\t\t\t\t\t\t{error, Reason} ->\n\t\t\t\t\t\t\t?LOG_ERROR([{event, failed_to_store_account_tree_key},\n\t\t\t\t\t\t\t\t\t{key_hash, ar_util:encode(element(1, Key))},\n\t\t\t\t\t\t\t\t\t{key_prefix, case element(2, Key) of root -> root;\n\t\t\t\t\t\t\t\t\t\t\tPrefix -> ar_util:encode(Prefix) end},\n\t\t\t\t\t\t\t\t\t{height, Height},\n\t\t\t\t\t\t\t\t\t{root_hash, ar_util:encode(RootHash)},\n\t\t\t\t\t\t\t\t\t{reason, io_lib:format(\"~p\", [Reason])}])\n\t\t\t\t\tend;\n\t\t\t\t{ok, _} ->\n\t\t\t\t\tok;\n\t\t\t\t{error, Reason} ->\n\t\t\t\t\t?LOG_ERROR([{event, failed_to_read_account_tree_key},\n\t\t\t\t\t\t\t{key_hash, ar_util:encode(element(1, Key))},\n\t\t\t\t\t\t\t{key_prefix, case element(2, Key) of root -> root;\n\t\t\t\t\t\t\t\t\tPrefix -> ar_util:encode(Prefix) end},\n\t\t\t\t\t\t\t{height, Height},\n\t\t\t\t\t\t\t{root_hash, ar_util:encode(RootHash)},\n\t\t\t\t\t\t\t{reason, io_lib:format(\"~p\", [Reason])}])\n\t\t\tend\n\t\tend,\n\t\tMap\n\t),\n\t?LOG_INFO([{event, stored_account_tree}]).\n\n%% @doc Ignore the prefix when querying a key since the prefix might depend on the order of\n%% insertions and is only used to optimize certain lookups.\nget_account_tree_value(Key, Prefix, CustomDir) ->\n\tar_kv:get_prev(get_db_name(account_tree_db, CustomDir), << Key/binary, Prefix/binary >>).\n\t% does not work:\n\t%ar_kv:get_next(account_tree_db, << Key/binary, Prefix/binary >>).\n\t% works:\n\t%<< N:(48 * 8) >> = Key,\n\t%Key2 = << (N + 1):(48 * 8) >>,\n\t%ar_kv:get_prev(account_tree_db, Key2).\n\n%%%===================================================================\n%%% Tests\n%%%===================================================================\n\n%% @doc Test block storage.\nstore_and_retrieve_block_test_() ->\n\t{timeout, 60, fun test_store_and_retrieve_block/0}.\n\ntest_store_and_retrieve_block() ->\n\t[B0] = ar_weave:init(),\n\tar_test_node:start(B0),\n\tTXIDs = [TX#tx.id || TX <- B0#block.txs],\n\tFetchedB0 = read_block(B0#block.indep_hash),\n\tFetchedB01 = FetchedB0#block{ txs = [tx_id(TX) || TX <- FetchedB0#block.txs] },\n\tFetchedB02 = read_block(B0#block.height, [{B0#block.indep_hash, B0#block.weave_size,\n\t\t\tB0#block.tx_root}]),\n\tFetchedB03 = FetchedB02#block{ txs = [tx_id(TX) || TX <- FetchedB02#block.txs] },\n\t?assertEqual(B0#block{ size_tagged_txs = unset, txs = TXIDs, reward_history = [],\n\t\t\tblock_time_history = [], account_tree = undefined }, FetchedB01),\n\t?assertEqual(B0#block{ size_tagged_txs = unset, txs = TXIDs, 
reward_history = [],\n\t\t\tblock_time_history = [], account_tree = undefined }, FetchedB03),\n\tar_test_node:mine(),\n\tar_test_node:wait_until_height(main, 1),\n\tar_test_node:mine(),\n\tBI1 = ar_test_node:wait_until_height(main, 2),\n\t[{_, BlockCount}] = ets:lookup(ar_header_sync, synced_blocks),\n\tar_util:do_until(\n\t\tfun() ->\n\t\t\t3 == BlockCount\n\t\tend,\n\t\t100,\n\t\t2000\n\t),\n\tBH1 = element(1, hd(BI1)),\n\t?assertMatch(#block{ height = 2, indep_hash = BH1 }, read_block(BH1)),\n\t?assertMatch(#block{ height = 2, indep_hash = BH1 }, read_block(2, BI1)).\n\ntx_id(#tx{ id = TXID }) ->\n\tTXID;\ntx_id(TXID) ->\n\tTXID.\n\nstore_and_retrieve_wallet_list_test_() ->\n\t[\n\t\t{timeout, 20, fun test_store_and_retrieve_wallet_list/0},\n\t\t{timeout, 240, fun test_store_and_retrieve_wallet_list_permutations/0}\n\t].\n\ntest_store_and_retrieve_wallet_list() ->\n\t[B0] = ar_weave:init(),\n\t[TX] = B0#block.txs,\n\tAddr = ar_wallet:to_address(TX#tx.owner, {?RSA_SIGN_ALG, 65537}),\n\twrite_block(B0),\n\tTXID = TX#tx.id,\n\tExpectedWL = ar_patricia_tree:from_proplist([{Addr, {0, TXID}}]),\n\tWalletListHash = write_wallet_list(0, ExpectedWL),\n\t{ok, ActualWL} = read_wallet_list(WalletListHash),\n\tassert_wallet_trees_equal(ExpectedWL, ActualWL),\n\tAddr2 = binary:part(Addr, 0, 16),\n\tTXID2 = crypto:strong_rand_bytes(32),\n\tExpectedWL2 = ar_patricia_tree:from_proplist([{Addr, {0, TXID}}, {Addr2, {0, TXID2}}]),\n\tWalletListHash2 = write_wallet_list(0, ExpectedWL2),\n\t{ok, ActualWL2} = read_wallet_list(WalletListHash2),\n\t?assertEqual({0, TXID}, read_account(Addr, WalletListHash2)),\n\t?assertEqual({0, TXID2}, read_account(Addr2, WalletListHash2)),\n\tassert_wallet_trees_equal(ExpectedWL2, ActualWL2),\n\t{WalletListHash, ActualWL3, _UpdateMap} = ar_block:hash_wallet_list(ActualWL),\n\tAddr3 = << (binary:part(Addr, 0, 3))/binary, (crypto:strong_rand_bytes(29))/binary >>,\n\tTXID3 = crypto:strong_rand_bytes(32),\n\tTXID4 = crypto:strong_rand_bytes(32),\n\tActualWL4 = ar_patricia_tree:insert(Addr3, {100, TXID3},\n\t\t\tar_patricia_tree:insert(Addr2, {0, TXID4}, ActualWL3)),\n\t{WalletListHash3, ActualWL5, UpdateMap2} = ar_block:hash_wallet_list(ActualWL4),\n\tstore_account_tree_update(1, WalletListHash3, UpdateMap2),\n\t?assertEqual({100, TXID3}, read_account(Addr3, WalletListHash3)),\n\t?assertEqual({0, TXID4}, read_account(Addr2, WalletListHash3)),\n\t?assertEqual({0, TXID}, read_account(Addr, WalletListHash3)),\n\t{ok, ActualWL6} = read_wallet_list(WalletListHash3),\n\tassert_wallet_trees_equal(ActualWL5, ActualWL6).\n\ntest_store_and_retrieve_wallet_list_permutations() ->\n\tlists:foreach(\n\t\tfun(Permutation) ->\n\t\t\tstore_and_retrieve_wallet_list(Permutation)\n\t\tend,\n\t\tpermutations([ <<\"a\">>, <<\"aa\">>, <<\"ab\">>, <<\"bb\">>, <<\"b\">>, <<\"aaa\">> ])),\n\tlists:foreach(\n\t\tfun(Permutation) ->\n\t\t\tstore_and_retrieve_wallet_list(Permutation)\n\t\tend,\n\t\tpermutations([ <<\"a\">>, <<\"aa\">>, <<\"aaa\">>, <<\"aaaa\">>, <<\"aaaaa\">> ])),\n\tstore_and_retrieve_wallet_list([ <<\"a\">>, <<\"aa\">>, <<\"ab\">>, <<\"b\">> ]),\n\tstore_and_retrieve_wallet_list([ <<\"aa\">>, <<\"a\">>, <<\"ab\">> ]),\n\tstore_and_retrieve_wallet_list([ <<\"aaa\">>, <<\"bbb\">>, <<\"aab\">>, <<\"ab\">>, <<\"a\">> ]),\n\tstore_and_retrieve_wallet_list([\n\t\t<<\"aaaa\">>, <<\"aaab\">>, <<\"aaac\">>,\n\t\t<<\"aaa\">>, <<\"aab\">>, <<\"aac\">>,\n\t\t<<\"aa\">>, <<\"ab\">>, <<\"ac\">>,\n\t\t<<\"a\">>, <<\"b\">>, <<\"c\">>\n\t]),\n\tstore_and_retrieve_wallet_list([\n\t\t<<\"a\">>, <<\"b\">>, 
<<\"c\">>,\n\t\t<<\"aa\">>, <<\"ab\">>, <<\"ac\">>,\n\t\t<<\"aaa\">>, <<\"aab\">>, <<\"aac\">>,\n\t\t<<\"aaaa\">>, <<\"aaab\">>, <<\"aaac\">>,\n\t\t<<\"a\">>, <<\"b\">>, <<\"c\">>,\n\t\t<<\"aa\">>, <<\"ab\">>, <<\"ac\">>,\n\t\t<<\"aaa\">>, <<\"aab\">>, <<\"aac\">>,\n\t\t<<\"aaaa\">>, <<\"aaab\">>, <<\"aaac\">>\n\t]),\n\tstore_and_retrieve_wallet_list([\n\t\t<<\"aaaa\">>, <<\"aaa\">>, <<\"aa\">>, <<\"a\">>,\n\t\t<<\"aaab\">>, <<\"aab\">>, <<\"ab\">>, <<\"b\">>,\n\t\t<<\"aaac\">>, <<\"aac\">>, <<\"ac\">>, <<\"c\">>,\n\t\t<<\"aaaa\">>, <<\"aaa\">>, <<\"aa\">>, <<\"a\">>,\n\t\t<<\"aaab\">>, <<\"aab\">>, <<\"ab\">>, <<\"b\">>,\n\t\t<<\"aaac\">>, <<\"aac\">>, <<\"ac\">>, <<\"c\">>\n\t]),\n\tstore_and_retrieve_wallet_list([\n\t\t<<\"aaaa\">>, <<\"aaab\">>, <<\"aaac\">>,\n\t\t<<\"a\">>, <<\"aa\">>, <<\"aaa\">>,\n\t\t<<\"aaaa\">>, <<\"aaab\">>, <<\"aaac\">>\n\t]),\n\tok.\n\nstore_and_retrieve_wallet_list(Keys) ->\n\tMinBinary = <<>>,\n\tMaxBinary = << <<1:1>> || _ <- lists:seq(1, 512) >>,\n\tar_kv:delete_range(account_tree_db, MinBinary, MaxBinary),\n\tstore_and_retrieve_wallet_list(Keys, ar_patricia_tree:new(), maps:new(), false).\n\nstore_and_retrieve_wallet_list([], Tree, InsertedKeys, IsUpdate) ->\n\tstore_and_retrieve_wallet_list2(Tree, InsertedKeys, IsUpdate);\nstore_and_retrieve_wallet_list([Key | Keys], Tree, InsertedKeys, IsUpdate) ->\n\tTXID = crypto:strong_rand_bytes(32),\n\tBalance = rand:uniform(1000000000),\n\tTree2 = ar_patricia_tree:insert(Key, {Balance, TXID}, Tree),\n\tInsertedKeys2 = maps:put(Key, {Balance, TXID}, InsertedKeys),\n\tcase rand:uniform(2) of\n\t\t1 ->\n\t\t\tTree3 = store_and_retrieve_wallet_list2(Tree2, InsertedKeys2, IsUpdate),\n\t\t\tstore_and_retrieve_wallet_list(Keys, Tree3, InsertedKeys2, true);\n\t\t_ ->\n\t\t\tstore_and_retrieve_wallet_list(Keys, Tree2, InsertedKeys2, IsUpdate)\n\tend.\n\nstore_and_retrieve_wallet_list2(Tree, InsertedKeys, IsUpdate) ->\n\t{WalletListHash, Tree2} =\n\t\tcase IsUpdate of\n\t\t\tfalse ->\n\t\t\t\t{write_wallet_list(0, Tree), Tree};\n\t\t\t_ ->\n\t\t\t\t{R, T, Map} = ar_block:hash_wallet_list(Tree),\n\t\t\t\tstore_account_tree_update(0, R, Map),\n\t\t\t\t{R, T}\n\t\tend,\n\t{ok, ActualTree} = read_wallet_list(WalletListHash),\n\tmaps:foreach(\n\t\tfun(Key, {Balance, TXID}) ->\n\t\t\t?assertEqual({Balance, TXID}, read_account(Key, WalletListHash))\n\t\tend,\n\t\tInsertedKeys\n\t),\n\tassert_wallet_trees_equal(Tree, ActualTree),\n\tassert_wallet_trees_equal(Tree2, ActualTree),\n\tTree2.\n\n%% From: https://www.erlang.org/doc/programming_examples/list_comprehensions.html#permutations\npermutations([]) -> [[]];\npermutations(L)  -> [[H|T] || H <- L, T <- permutations(L--[H])].\n\nassert_wallet_trees_equal(Expected, Actual) ->\n\t?assertEqual(\n\t\tar_patricia_tree:foldr(fun(K, V, Acc) -> [{K, V} | Acc] end, [], Expected),\n\t\tar_patricia_tree:foldr(fun(K, V, Acc) -> [{K, V} | Acc] end, [], Actual)\n\t).\n\nread_wallet_list_chunks_test() ->\n\tTestCases = [\n\t\t[random_wallet()], % < chunk size\n\t\t[random_wallet() || _ <- lists:seq(1, ?WALLET_LIST_CHUNK_SIZE)], % == chunk size\n\t\t[random_wallet() || _ <- lists:seq(1, ?WALLET_LIST_CHUNK_SIZE + 1)], % > chunk size\n\t\t[random_wallet() || _ <- lists:seq(1, 10 * ?WALLET_LIST_CHUNK_SIZE)],\n\t\t[random_wallet() || _ <- lists:seq(1, 10 * ?WALLET_LIST_CHUNK_SIZE + 1)]\n\t],\n\tlists:foreach(\n\t\tfun(TestCase) ->\n\t\t\tTree = ar_patricia_tree:from_proplist(TestCase),\n\t\t\tRootHash = write_wallet_list(0, Tree),\n\t\t\t{ok, ReadTree} = 
read_wallet_list(RootHash),\n\t\t\tassert_wallet_trees_equal(Tree, ReadTree)\n\t\tend,\n\t\tTestCases\n\t).\n\nrandom_wallet() ->\n\t{crypto:strong_rand_bytes(32), {rand:uniform(1000000000), crypto:strong_rand_bytes(32)}}.\n\nupdate_block_index_test_() ->\n\t[\n\t\t{timeout, 20, fun test_update_block_index/0}\n\t].\n\ntest_update_block_index() ->\n\tar_kv:delete_range(block_index_db, <<0:256>>, <<\"a\">>),\n\n\t?assertEqual(\n\t\t{error, not_found},\n\t\tupdate_block_index(2, 0, [\n\t\t\t{<<\"hash_a\">>, 0, <<\"root_a\">>}\n\t\t]),\n\t\t\"Gap on empty index\"\n\t),\n\n\t?assertEqual(\n\t\t{error, badarg},\n\t\tupdate_block_index(-1, 0, [\n\t\t\t{<<\"hash_a\">>, 0, <<\"root_a\">>}\n\t\t]),\n\t\t\"Negative tip\"\n\t),\n\n\t?assertEqual(\n\t\t{error, badarg},\n\t\tupdate_block_index(0, -1, [\n\t\t\t{<<\"hash_a\">>, 0, <<\"root_a\">>}\n\t\t]),\n\t\t\"Negative orphan count\"\n\t),\n\n\t?assertEqual(\n\t\t{error, not_found},\n\t\tupdate_block_index(0, 1, [\n\t\t\t{<<\"hash_a\">>, 0, <<\"root_a\">>}\n\t\t]),\n\t\t\"Orphan on empty index\"\n\t),\n\t?assertEqual(\n\t\tok,\n\t\tupdate_block_index(0, 0, [\n\t\t\t{<<\"hash_a\">>, 0, <<\"root_a\">>}\n\t\t])\n\t),\n\t?assertEqual([\n\t\t{<<\"hash_a\">>, 0, <<\"root_a\">>}\n\t], read_block_index()),\n\n\t?assertEqual(\n\t\t{error, not_found},\n\t\tupdate_block_index(2, 0, [\n\t\t\t{<<\"hash_b\">>, 0, <<\"root_b\">>}\n\t\t]),\n\t\t\"Gap on non-empty index\"\n\t),\n\n\t?assertEqual(\n\t\tok,\n\t\tupdate_block_index(1, 0, [\n\t\t\t{<<\"hash_b\">>, 0, <<\"root_b\">>}\n\t\t])\n\t),\n\t?assertEqual([\n\t\t{<<\"hash_b\">>, 0, <<\"root_b\">>},\n\t\t{<<\"hash_a\">>, 0, <<\"root_a\">>}\n\t], read_block_index()),\n\n\t?assertEqual(\n\t\t{error, not_found},\n\t\tupdate_block_index(0, 3, [\n\t\t\t{<<\"hash_c\">>, 0, <<\"root_c\">>}\n\t\t]),\n\t\t\"Too many orphans\"\n\t),\n\n\t?assertEqual(\n\t\tok,\n\t\tupdate_block_index(2, 0, [\n\t\t\t{<<\"hash_c\">>, 0, <<\"root_c\">>}\n\t\t])\n\t),\n\t?assertEqual(\n\t\tok,\n\t\tupdate_block_index(3, 0, [\n\t\t\t{<<\"hash_d\">>, 0, <<\"root_d\">>}\n\t\t])\n\t),\n\t?assertEqual([\n\t\t{<<\"hash_d\">>, 0, <<\"root_d\">>},\n\t\t{<<\"hash_c\">>, 0, <<\"root_c\">>},\n\t\t{<<\"hash_b\">>, 0, <<\"root_b\">>},\n\t\t{<<\"hash_a\">>, 0, <<\"root_a\">>}\n\t], read_block_index()),\n\n\t%% Orphan: 2 for 1\n\t?assertEqual(\n\t\tok,\n\t\t\tupdate_block_index(4, 1, [\n\t\t\t\t{<<\"hash_e\">>, 0, <<\"root_e\">>},\n\t\t\t\t{<<\"hash_f\">>, 0, <<\"root_f\">>}\n\t\t])\n\t),\n\t?assertEqual([\n\t\t{<<\"hash_f\">>, 0, <<\"root_f\">>},\n\t\t{<<\"hash_e\">>, 0, <<\"root_e\">>},\n\t\t{<<\"hash_c\">>, 0, <<\"root_c\">>},\n\t\t{<<\"hash_b\">>, 0, <<\"root_b\">>},\n\t\t{<<\"hash_a\">>, 0, <<\"root_a\">>}\n\t], read_block_index()),\n\n\t%% Orphan: 1 for 1\n\t?assertEqual(\n\t\tok,\n\t\t\tupdate_block_index(4, 1, [\n\t\t\t\t{<<\"hash_g\">>, 0, <<\"root_g\">>}\n\t\t])\n\t),\n\t?assertEqual([\n\t\t{<<\"hash_g\">>, 0, <<\"root_g\">>},\n\t\t{<<\"hash_e\">>, 0, <<\"root_e\">>},\n\t\t{<<\"hash_c\">>, 0, <<\"root_c\">>},\n\t\t{<<\"hash_b\">>, 0, <<\"root_b\">>},\n\t\t{<<\"hash_a\">>, 0, <<\"root_a\">>}\n\t], read_block_index()),\n\n\t%% Orphan: 1 for 2\n\t?assertEqual(\n\t\tok,\n\t\t\tupdate_block_index(3, 2, [\n\t\t\t\t{<<\"hash_h\">>, 0, <<\"root_h\">>}\n\t\t])\n\t),\n\t?assertEqual([\n\t\t{<<\"hash_h\">>, 0, <<\"root_h\">>},\n\t\t{<<\"hash_c\">>, 0, <<\"root_c\">>},\n\t\t{<<\"hash_b\">>, 0, <<\"root_b\">>},\n\t\t{<<\"hash_a\">>, 0, <<\"root_a\">>}\n\t], read_block_index()),\n\n\t%% Orphan: 1 for 3\n\t?assertEqual(\n\t\tok,\n\t\t\tupdate_block_index(1, 3, 
[\n\t\t\t\t{<<\"hash_i\">>, 0, <<\"root_i\">>}\n\t\t])\n\t),\n\t?assertEqual([\n\t\t{<<\"hash_i\">>, 0, <<\"root_i\">>},\n\t\t{<<\"hash_a\">>, 0, <<\"root_a\">>}\n\t], read_block_index()),\n\n\t%% Orphan: 2 for 2\n\t?assertEqual(\n\t\tok,\n\t\t\tupdate_block_index(1, 2, [\n\t\t\t\t{<<\"hash_j\">>, 0, <<\"root_j\">>},\n\t\t\t\t{<<\"hash_k\">>, 0, <<\"root_k\">>}\n\t\t])\n\t),\n\t?assertEqual([\n\t\t{<<\"hash_k\">>, 0, <<\"root_k\">>},\n\t\t{<<\"hash_j\">>, 0, <<\"root_j\">>}\n\t], read_block_index()),\n\n\n\t%% Orphan: 3 for 1\n\t?assertEqual(\n\t\tok,\n\t\t\tupdate_block_index(3, 1, [\n\t\t\t\t{<<\"hash_l\">>, 0, <<\"root_l\">>},\n\t\t\t\t{<<\"hash_m\">>, 0, <<\"root_m\">>},\n\t\t\t\t{<<\"hash_n\">>, 0, <<\"root_n\">>}\n\t\t])\n\t),\n\t?assertEqual([\n\t\t{<<\"hash_n\">>, 0, <<\"root_n\">>},\n\t\t{<<\"hash_m\">>, 0, <<\"root_m\">>},\n\t\t{<<\"hash_l\">>, 0, <<\"root_l\">>},\n\t\t{<<\"hash_j\">>, 0, <<\"root_j\">>}\n\t], read_block_index()),\n\n\n\t%% Replace all but genesis\n\t?assertEqual(\n\t\tok,\n\t\t\tupdate_block_index(2, 3, [\n\t\t\t\t{<<\"hash_o\">>, 0, <<\"root_o\">>},\n\t\t\t\t{<<\"hash_p\">>, 0, <<\"root_p\">>}\n\t\t])\n\t),\n\t?assertEqual([\n\t\t{<<\"hash_p\">>, 0, <<\"root_p\">>},\n\t\t{<<\"hash_o\">>, 0, <<\"root_o\">>},\n\t\t{<<\"hash_j\">>, 0, <<\"root_j\">>}\n\t], read_block_index()).\n"
  },
  {
    "path": "apps/arweave/src/ar_storage_module.erl",
    "content": "-module(ar_storage_module).\n\n-export([get_overlap/1, id/1, label/1, address_label/2, module_address/1,\n\t\tmodule_packing_difficulty/1, packing_label/1, get_by_id/1,\n\t\tget_range/1, module_range/1, module_range/2, get_packing/1,\n\t\tget/2, get_all/1, get_all/2, get_all/3,\n\t\thas_any/1, has_range/2, get_cover/3, is_repack_in_place/1]).\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-include(\"ar.hrl\").\n-include(\"ar_consensus.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n%% The overlap makes sure a 100 MiB recall range can always be fetched\n%% from a single storage module.\n-ifdef(AR_TEST).\n-define(OVERLAP, 262144).\n-else.\n-define(OVERLAP, (?LEGACY_RECALL_RANGE_SIZE)).\n-endif.\n\n-ifdef(AR_TEST).\n-define(REPLICA_2_9_OVERLAP, 262144).\n-else.\n-define(REPLICA_2_9_OVERLAP, (262144 * 10)).\n-endif.\n\n-type storage_module() :: {integer(), integer(), {atom(), binary()}}\n\t\t\t\t\t\t| {integer(), integer(), {atom(), binary(), integer()}}.\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nget_overlap({replica_2_9, _Addr}) ->\n\t?REPLICA_2_9_OVERLAP;\nget_overlap(_Packing) ->\n\t?OVERLAP.\n\n%% @doc Return the storage module identifier.\nid(?DEFAULT_MODULE) -> ?DEFAULT_MODULE;\nid({BucketSize, Bucket, Packing}) ->\n\tPackingString =\n\t\tcase Packing of\n\t\t\t{spora_2_6, Addr} ->\n\t\t\t\tar_util:encode(Addr);\n\t\t\t{composite, Addr, PackingDiff} ->\n\t\t\t\t<< (ar_util:encode(Addr))/binary, \".\",\n\t\t\t\t\t\t(integer_to_binary(PackingDiff))/binary >>;\n\t\t\t{replica_2_9, Addr} ->\n\t\t\t\t<< (ar_util:encode(Addr))/binary, \".replica.2.9\" >>;\n\t\t\t_ ->\n\t\t\t\tatom_to_list(Packing)\n\t\tend,\n\tid(BucketSize, Bucket, PackingString).\n\n%% @doc Return the obscure unique label for the given storage module.\nlabel(?DEFAULT_MODULE) ->\n\t?DEFAULT_MODULE;\nlabel(StoreID) ->\n\tcase ets:lookup(?MODULE, {label, StoreID}) of\n\t\t[] ->\n\t\t\tStorageModule = get_by_id(StoreID),\n\t\t\t{BucketSize, Bucket, Packing} = StorageModule,\n\t\t\tPackingLabel = packing_label(Packing),\n\t\t\tLabel = id(BucketSize, Bucket, PackingLabel),\n\t\t\tets:insert(?MODULE, {{label, StoreID}, Label}),\n\t\t\tLabel;\n\t\t[{_, Label}] ->\n\t\t\tLabel\n\tend.\n\n%% @doc Return the obscure unique label for the given\n%% replica owner address + replica type pair.\naddress_label(Addr, ReplicaType) ->\n\tKey = {Addr, ReplicaType},\n\tcase ets:lookup(?MODULE, {address_label, Key}) of\n\t\t[] ->\n\t\t\tLabel =\n\t\t\t\tcase ets:lookup(?MODULE, last_address_label) of\n\t\t\t\t\t[] ->\n\t\t\t\t\t\t1;\n\t\t\t\t\t[{_, Counter}] ->\n\t\t\t\t\t\tCounter + 1\n\t\t\t\tend,\n\t\t\tets:insert(?MODULE, {{address_label, Key}, Label}),\n\t\t\tets:insert(?MODULE, {last_address_label, Label}),\n\t\t\tinteger_to_list(Label);\n\t\t[{_, Label}] ->\n\t\t\tinteger_to_list(Label)\n\tend.\n\n-spec module_address(ar_storage_module:storage_module()) -> binary() | undefined.\nmodule_address({_, _, {spora_2_6, Addr}}) ->\n\tAddr;\nmodule_address({_, _, {composite, Addr, _PackingDifficulty}}) ->\n\tAddr;\nmodule_address({_, _, {replica_2_9, Addr}}) ->\n\tAddr;\nmodule_address(_StorageModule) ->\n\tundefined.\n\n-spec module_packing_difficulty(ar_storage_module:storage_module()) -> integer().\nmodule_packing_difficulty({_, _, {composite, _Addr, PackingDifficulty}}) ->\n\ttrue = PackingDifficulty /= 
?REPLICA_2_9_PACKING_DIFFICULTY,\n\tPackingDifficulty;\nmodule_packing_difficulty({_, _, {replica_2_9, _Addr}}) ->\n\t?REPLICA_2_9_PACKING_DIFFICULTY;\nmodule_packing_difficulty(_StorageModule) ->\n\t0.\n\npacking_label({spora_2_6, Addr}) ->\n\tAddrLabel = ar_storage_module:address_label(Addr, spora_2_6),\n\tlist_to_atom(\"spora_2_6_\" ++ AddrLabel);\npacking_label({composite, Addr, PackingDifficulty}) ->\n\tAddrLabel = ar_storage_module:address_label(Addr, {composite, PackingDifficulty}),\n\tlist_to_atom(\"composite_\" ++ AddrLabel);\npacking_label({replica_2_9, Addr}) ->\n\tAddrLabel = ar_storage_module:address_label(Addr, replica_2_9),\n\tlist_to_atom(\"replica_2_9_\" ++ AddrLabel);\npacking_label(Packing) ->\n\tPacking.\n\n%% @doc Return the storage module with the given identifier or not_found.\n%% Search across both attached modules and repacked in-place modules.\nget_by_id(?DEFAULT_MODULE) ->\n\t?DEFAULT_MODULE;\nget_by_id(Atom) when is_atom(Atom) ->\n\t%% May be 'default' or an atom from the unit tests.\n\tAtom;\nget_by_id(ID) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tRepackInPlaceModules = [element(1, El)\n\t\t\t|| El <- Config#config.repack_in_place_storage_modules],\n\tget_by_id(ID, Config#config.storage_modules ++ RepackInPlaceModules).\n\nget_by_id(_ID, []) ->\n\tnot_found;\nget_by_id(ID, [Module | Modules]) ->\n\tcase ar_storage_module:id(Module) == ID of\n\t\ttrue ->\n\t\t\tModule;\n\t\tfalse ->\n\t\t\tget_by_id(ID, Modules)\n\tend.\n\n%% @doc Return {StartOffset, EndOffset} the given module is responsible for.\nget_range(?DEFAULT_MODULE) ->\n\t{0, infinity};\nget_range(ID) ->\n\tModule = get_by_id(ID),\n\tcase Module of\n\t\tnot_found ->\n\t\t\tnot_found;\n\t\t_ ->\n\t\t\tmodule_range(Module)\n\tend.\n\n-spec module_range(ar_storage_module:storage_module()) ->\n\t{non_neg_integer(), non_neg_integer()}.\nmodule_range(Module) ->\n\t{_BucketSize, _Bucket, Packing} = Module,\n\tmodule_range(Module, ar_storage_module:get_overlap(Packing)).\n\nmodule_range(Module, Overlap) ->\n\t{BucketSize, Bucket, _Packing} = Module,\n\t{BucketSize * Bucket, (Bucket + 1) * BucketSize + Overlap}.\n\n%% @doc Return the packing configured for the given module.\nget_packing(?DEFAULT_MODULE) ->\n\tunpacked;\nget_packing({_BucketSize, _Bucket, Packing}) ->\n\tPacking;\nget_packing(ID) ->\n\tModule = get_by_id(ID),\n\tcase Module of\n\t\tnot_found ->\n\t\t\tnot_found;\n\t\t_ ->\n\t\t\tget_packing(Module)\n\tend.\n\n%% @doc Return a configured storage module covering the given Offset, preferably\n%% with the given Packing. 
Return not_found if none is found.\nget(Offset, Packing) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tget(Offset, Packing, Config#config.storage_modules, not_found).\n\n%% @doc Return the list of all configured storage modules covering the given Offset.\nget_all(Offset) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tget_all2(Offset, Config#config.storage_modules, []).\n\n%% @doc Return the list of configured storage modules whose ranges intersect\n%% the given interval.\nget_all(Start, End) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tget_all(Start, End, Config#config.storage_modules).\n\n%% @doc Return the list of storage modules chosen from the given list\n%% whose ranges intersect the given interval.\nget_all(Start, End, StorageModules) ->\n\tget_all2(Start, End, StorageModules, []).\n\n%% @doc Return true if the given Offset belongs to at least one storage module.\nhas_any(Offset) ->\n\t{ok, Config} = arweave_config:get_env(),\n\thas_any(Offset, Config#config.storage_modules).\n\n%% @doc Return true if the given range is covered by the configured storage modules.\nhas_range(Start, End) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tcase ets:lookup(?MODULE, unique_sorted_intervals) of\n\t\t[] ->\n\t\t\tIntervals = get_unique_sorted_intervals(Config#config.storage_modules),\n\t\t\tets:insert(?MODULE, {unique_sorted_intervals, Intervals}),\n\t\t\thas_range(Start, End, Intervals);\n\t\t[{_, Intervals}] ->\n\t\t\thas_range(Start, End, Intervals)\n\tend.\n\n%% @doc Return the list of at least one {Start, End, StoreID} covering the given range\n%% or not_found. The given StoreID (may be none) has a higher chance to be picked in case\n%% there are several storage modules covering the same range.\n%%\n%%                            0     6     10    14      20          30\n%%                            |--- sm_1 ---|--- sm_2 ---|--- sm_3 ---|\n%%                                         |----sm_4----|\n%%\n%% 1. get_cover(2, 8, none):       2<--->8\n%% 2. get_cover(7, 13, none):          7<--------->13\n%% 3. get_cover(7, 25, none):          7<-------------------->25\n%% 4. get_cover(7, 25, sm4):           7<-------------------->25\n%%\n%% 1. returns [{2, 8, sm_1}]\n%% 2. returns [{7, 10, sm1}, {10, 13, sm_2}]\n%% 3. returns [{7, 10, sm1}, {10, 20, sm_2}, {20, 25, sm_3}]\n%% 4. 
returns [{7, 10, sm1}, {10, 20, sm_4}, {20, 25, sm_3}]\nget_cover(Start, End, MaybeModule) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tSortedStorageModules = sort_storage_modules_by_left_bound(\n\t\t\tConfig#config.storage_modules, MaybeModule),\n\tcase get_cover2(Start, End, SortedStorageModules) of\n\t\t[] ->\n\t\t\tnot_found;\n\t\tnot_found ->\n\t\t\tnot_found;\n\t\tCover ->\n\t\t\tCover\n\tend.\n\nis_repack_in_place(ID) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tlists:any(\n\t\tfun({Module, _TargetPacking}) ->\n\t\t\tar_storage_module:id(Module) == ID\n\t\tend,\n\t\tConfig#config.repack_in_place_storage_modules).\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nid(BucketSize, Bucket, PackingString) ->\n\tcase BucketSize == ar_block:partition_size() of\n\t\ttrue ->\n\t\t\tbinary_to_list(iolist_to_binary(io_lib:format(\"storage_module_~B_~s\",\n\t\t\t\t\t[Bucket, PackingString])));\n\t\tfalse ->\n\t\t\tbinary_to_list(iolist_to_binary(io_lib:format(\"storage_module_~B_~B_~s\",\n\t\t\t\t\t[BucketSize, Bucket, PackingString])))\n\tend.\n\nget(Offset, Packing, [{BucketSize, Bucket, Packing2} | StorageModules], StorageModule) ->\n\tcase Offset =< BucketSize * Bucket\n\t\t\torelse Offset > BucketSize * (Bucket + 1) + ar_storage_module:get_overlap(Packing2) of\n\t\ttrue ->\n\t\t\tget(Offset, Packing, StorageModules, StorageModule);\n\t\tfalse ->\n\t\t\tcase Packing == Packing2 of\n\t\t\t\ttrue ->\n\t\t\t\t\t{BucketSize, Bucket, Packing};\n\t\t\t\tfalse ->\n\t\t\t\t\tget(Offset, Packing, StorageModules, {BucketSize, Bucket, Packing})\n\t\t\tend\n\tend;\nget(_Offset, _Packing, [], StorageModule) ->\n\tStorageModule.\n\nget_all2(Offset, [{BucketSize, Bucket, Packing} = StorageModule | StorageModules], FoundModules) ->\n\tcase Offset =< BucketSize * Bucket\n\t\t\torelse Offset > BucketSize * (Bucket + 1) + ar_storage_module:get_overlap(Packing) of\n\t\ttrue ->\n\t\t\tget_all2(Offset, StorageModules, FoundModules);\n\t\tfalse ->\n\t\t\tget_all2(Offset, StorageModules, [StorageModule | FoundModules])\n\tend;\nget_all2(_Offset, [], FoundModules) ->\n\tFoundModules.\n\nget_all2(Start, End, [{BucketSize, Bucket, Packing} = StorageModule | StorageModules], FoundModules) ->\n\tcase End =< BucketSize * Bucket\n\t\t\torelse Start >= BucketSize * (Bucket + 1) + ar_storage_module:get_overlap(Packing) of\n\t\ttrue ->\n\t\t\tget_all2(Start, End, StorageModules, FoundModules);\n\t\tfalse ->\n\t\t\tget_all2(Start, End, StorageModules, [StorageModule | FoundModules])\n\tend;\nget_all2(_Start, _End, [], FoundModules) ->\n\tFoundModules.\n\nhas_any(_Offset, []) ->\n\tfalse;\nhas_any(Offset, [{BucketSize, Bucket, Packing} | StorageModules]) ->\n\tcase Offset > Bucket * BucketSize\n\t\t\tandalso Offset =< (Bucket + 1) * BucketSize + ar_storage_module:get_overlap(Packing) of\n\t\ttrue ->\n\t\t\ttrue;\n\t\tfalse ->\n\t\t\thas_any(Offset, StorageModules)\n\tend.\n\nget_unique_sorted_intervals(StorageModules) ->\n\tget_unique_sorted_intervals(StorageModules, ar_intervals:new()).\n\nget_unique_sorted_intervals([], Intervals) ->\n\t[{Start, End} || {End, Start} <- ar_intervals:to_list(Intervals)];\nget_unique_sorted_intervals([{BucketSize, Bucket, _Packing} | StorageModules], Intervals) ->\n\tEnd = (Bucket + 1) * BucketSize,\n\tStart = Bucket * BucketSize,\n\tget_unique_sorted_intervals(StorageModules, ar_intervals:add(Intervals, End, Start)).\n\nhas_range(PartitionStart, PartitionEnd, 
_Intervals)\n\t\twhen PartitionStart >= PartitionEnd ->\n\ttrue;\nhas_range(_PartitionStart, _PartitionEnd, []) ->\n\tfalse;\nhas_range(PartitionStart, _PartitionEnd, [{Start, _End} | _Intervals])\n\t\twhen PartitionStart < Start ->\n\t%% The given intervals are unique and sorted.\n\tfalse;\nhas_range(PartitionStart, PartitionEnd, [{_Start, End} | Intervals])\n\t\twhen PartitionStart >= End ->\n\thas_range(PartitionStart, PartitionEnd, Intervals);\nhas_range(_PartitionStart, PartitionEnd, [{_Start, End} | Intervals]) ->\n\thas_range(End, PartitionEnd, Intervals).\n\nsort_storage_modules_by_left_bound(StorageModules, MaybeModule) ->\n\tlists:sort(\n\t\tfun({BucketSize1, Bucket1, _} = M1, {BucketSize2, Bucket2, _} = M2) ->\n\t\t\tStart1 = BucketSize1 * Bucket1,\n\t\t\tStart2 = BucketSize2 * Bucket2,\n\t\t\tcase Start1 =< Start2 of\n\t\t\t\tfalse ->\n\t\t\t\t\tfalse;\n\t\t\t\ttrue ->\n\t\t\t\t\tcase Start1 == Start2 of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\tM1 == MaybeModule orelse M2 /= MaybeModule;\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\ttrue\n\t\t\t\t\tend\n\t\t\tend\n\t\tend,\n\t\tStorageModules\n\t).\n\nget_cover2(Start, End, _StorageModules)\n\t\twhen Start >= End ->\n\t[];\nget_cover2(_Start, _End, []) ->\n\tnot_found;\nget_cover2(Start, _End, [{BucketSize, Bucket, _Packing} | _StorageModules])\n\t\twhen BucketSize * Bucket > Start ->\n\tnot_found;\nget_cover2(Start, End, [{BucketSize, Bucket, _Packing} | StorageModules])\n\t\twhen BucketSize * Bucket + BucketSize =< Start ->\n\tget_cover2(Start, End, StorageModules);\nget_cover2(Start, End, [{BucketSize, Bucket, _Packing} = StorageModule | StorageModules]) ->\n\tStart2 = BucketSize * Bucket,\n\tEnd2 = Start2 + BucketSize,\n\tEnd3 = min(End, End2),\n\tStoreID = ar_storage_module:id(StorageModule),\n\tcase get_cover2(End3, End, StorageModules) of\n\t\tnot_found ->\n\t\t\tnot_found;\n\t\tList ->\n\t\t\t[{Start, End3, StoreID} | List]\n\tend.\n\n%%%===================================================================\n%%% Tests.\n%%%===================================================================\n\nlabel_test() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tOldLabels = ets:match_object(?MODULE, {{label, '_'}, '_'}),\n\tOldAddrLabels = ets:match_object(?MODULE, {{address_label, '_'}, '_'}),\n\tOldLastLabel = ets:lookup(?MODULE, last_address_label),\n\tets:match_delete(?MODULE, {{label, '_'}, '_'}),\n\tets:match_delete(?MODULE, {{address_label, '_'}, '_'}),\n\tets:delete(?MODULE, last_address_label),\n\ttry\n\t\tarweave_config:set_env(Config#config{storage_modules = [\n\t\t\t{ar_block:partition_size(), 0, {spora_2_6, <<\"a\">>}},\n\t\t\t{ar_block:partition_size(), 2, {spora_2_6, <<\"a\">>}},\n\t\t\t{ar_block:partition_size(), 0, {spora_2_6, <<\"b\">>}},\n\t\t\t{524288, 3, {spora_2_6, <<\"b\">>}},\n\t\t\t{ar_block:partition_size(), 2, unpacked},\n\t\t\t{ar_block:partition_size(), 2, {spora_2_6, <<\"s÷\">>}},\n\t\t\t{524288, 2, {spora_2_6, <<\"s÷\">>}},\n\t\t\t{524288, 3, {composite, <<\"b\">>, 1}},\n\t\t\t{524288, 3, {composite, <<\"b\">>, 1}},\n\t\t\t{524288, 3, {composite, <<\"b\">>, 2}}\n\t\t]}),\n\t\t?assertEqual(\"storage_module_0_spora_2_6_1\",\n\t\t\tlabel(id({ar_block:partition_size(), 0, {spora_2_6, <<\"a\">>}}))),\n\t\t?assertEqual(\"storage_module_2_spora_2_6_1\",\n\t\t\tlabel(id({ar_block:partition_size(), 2, {spora_2_6, <<\"a\">>}}))),\n\t\t?assertEqual(\"storage_module_0_spora_2_6_2\",\n\t\t\tlabel(id({ar_block:partition_size(), 0, {spora_2_6, 
<<\"b\">>}}))),\n\t\t?assertEqual(\"storage_module_524288_3_spora_2_6_2\",\n\t\t\tlabel(id({524288, 3, {spora_2_6, <<\"b\">>}}))),\n\t\t?assertEqual(\"storage_module_2_unpacked\",\n\t\t\tlabel(id({ar_block:partition_size(), 2, unpacked}))),\n\t\t%% force a _ in the encoded address\n\t\t?assertEqual(\"storage_module_2_spora_2_6_3\",\n\t\t\tlabel(id({ar_block:partition_size(), 2, {spora_2_6, <<\"s÷\">>}}))),\n\t\t?assertEqual(\"storage_module_524288_2_spora_2_6_3\",\n\t\t\tlabel(id({524288, 2, {spora_2_6, <<\"s÷\">>}}))),\n\t\t?assertEqual(\"storage_module_524288_3_composite_4\",\n\t\t\tlabel(id({524288, 3, {composite, <<\"b\">>, 1}}))),\n\t\t?assertEqual(\"storage_module_524288_3_composite_4\",\n\t\t\tlabel(id({524288, 3, {composite, <<\"b\">>, 1}}))),\n\t\t?assertEqual(\"storage_module_524288_3_composite_5\",\n\t\t\tlabel(id({524288, 3, {composite, <<\"b\">>, 2}})))\n\tafter\n\t\tets:match_delete(?MODULE, {{label, '_'}, '_'}),\n\t\tets:match_delete(?MODULE, {{address_label, '_'}, '_'}),\n\t\tets:delete(?MODULE, last_address_label),\n\t\tets:insert(?MODULE, OldLabels),\n\t\tets:insert(?MODULE, OldAddrLabels),\n\t\tets:insert(?MODULE, OldLastLabel),\n\t\tarweave_config:set_env(Config)\n\tend.\n\nhas_any_test() ->\n\t?assertEqual(false, has_any(0, [])),\n\t?assertEqual(false, has_any(0, [{10, 1, p}])),\n\t?assertEqual(false, has_any(10, [{10, 1, p}])),\n\t?assertEqual(true, has_any(11, [{10, 1, p}])),\n\t?assertEqual(true, has_any(11, [{10, 1, {replica_2_9, a}}])),\n\t?assertEqual(true, has_any(20 + ?OVERLAP, [{10, 1, p}])),\n\t?assertEqual(true, has_any(20 + ?OVERLAP, [{10, 1, {replica_2_9, a}}])).\n\nget_unique_sorted_intervals_test() ->\n\t?assertEqual([{0, 24}, {90, 120}],\n\t\t\tget_unique_sorted_intervals([{10, 0, p}, {30, 3, p}, {20, 0, p}, {12, 1, p}])).\n\nhas_range_test() ->\n\t?assertEqual(false, has_range(0, 10, [])),\n\t?assertEqual(false, has_range(0, 10, [{0, 9}])),\n\t?assertEqual(true, has_range(0, 10, [{0, 10}])),\n\t?assertEqual(true, has_range(0, 10, [{0, 11}])),\n\t?assertEqual(true, has_range(0, 10, [{0, 9}, {9, 10}])),\n\t?assertEqual(true, has_range(5, 10, [{0, 9}, {9, 10}])),\n\t?assertEqual(true, has_range(5, 10, [{0, 2}, {2, 9}, {9, 10}])).\n\nsort_storage_modules_by_left_bound_test() ->\n\t?assertEqual([], sort_storage_modules_by_left_bound([], none)),\n\t?assertEqual([{1, 0, p}], sort_storage_modules_by_left_bound([{1, 0, p}], none)),\n\t?assertEqual([{10, 0, p}, {10, 1, p}, {10, 2, p}],\n\t\t\tsort_storage_modules_by_left_bound([{10, 1, p}, {10, 0, p}, {10, 2, p}], none)),\n\t?assertEqual([{10, 0, p}, {7, 1, p}, {10, 1, p}, {10, 2, p}],\n\t\t\tsort_storage_modules_by_left_bound([{10, 1, p}, {10, 0, p}, {10, 2, p},\n\t\t\t\t\t{7, 1, p}], none)),\n\t?assertEqual([{10, 0, p}, {10, 1, p}, {10, 1, p2}],\n\t\t\tsort_storage_modules_by_left_bound([{10, 1, p}, {10, 0, p}, {10, 1, p2}], none)),\n\t?assertEqual([{10, 0, p}, {10, 1, p2}, {10, 1, p}],\n\t\t\tsort_storage_modules_by_left_bound([{10, 1, p}, {10, 0, p}, {10, 1, p2}],\n\t\t\t\t\t{10, 1, p2})).\n\nget_cover2_test() ->\n\t?assertEqual(not_found, get_cover2(0, 1, [])),\n\t?assertEqual([{0, 1, \"storage_module_1_0_p\"}], get_cover2(0, 1, [{1, 0, p}])),\n\t?assertEqual([{0, 1, \"storage_module_1_0_p\"}, {1, 2, \"storage_module_1_1_p\"}],\n\t\t\tget_cover2(0, 2, [{1, 0, p}, {1, 1, p}])),\n\t?assertEqual(not_found, get_cover2(0, 2, [{1, 0, p}, {1, 2, p}])),\n\t?assertEqual([{0, 2, \"storage_module_2_0_p\"}],\n\t\t\tget_cover2(0, 2, [{2, 0, p}, {1, 0, p}])),\n\t?assertEqual([{0, 2, \"storage_module_2_0_p\"}, {2, 3, 
\"storage_module_3_0_p\"}],\n\t\t\tget_cover2(0, 3, [{2, 0, p}, {3, 0, p}])).\n"
  },
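A hedged usage sketch for ar_storage_module:module_range/2 above. The bucket size, bucket index, and overlap values are made up for illustration; the point is only the arithmetic visible in the module: a storage module {BucketSize, Bucket, Packing} starts at BucketSize * Bucket and ends one overlap past (Bucket + 1) * BucketSize, so a full recall range never has to straddle two modules.

	%% Sketch only, not part of the repository; hypothetical sizes.
	module_range_example() ->
		Module = {1000, 2, unpacked},   %% BucketSize = 1000, Bucket = 2 (hypothetical)
		Overlap = 100,                  %% hypothetical overlap
		%% {Start, End} = {BucketSize * Bucket, (Bucket + 1) * BucketSize + Overlap}.
		{2000, 3100} = ar_storage_module:module_range(Module, Overlap),
		ok.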
  {
    "path": "apps/arweave/src/ar_storage_sup.erl",
    "content": "-module(ar_storage_sup).\n\n-behaviour(supervisor).\n\n-export([start_link/0]).\n\n-export([init/1]).\n\n-include_lib(\"arweave/include/ar_sup.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nstart_link() ->\n\tsupervisor:start_link({local, ?MODULE}, ?MODULE, []).\n\n%% ===================================================================\n%% Supervisor callbacks.\n%% ===================================================================\n\ninit([]) ->\n    ets:new(ar_storage, [set, public, named_table, {read_concurrency, true}]),\n\tets:new(ar_storage_module, [set, public, named_table]),\n\t{ok, {{one_for_one, 5, 10}, [\n\t\t?CHILD(ar_storage, worker),\n\t\t?CHILD(ar_device_lock, worker)\n\t]}}.\n"
  },
  {
    "path": "apps/arweave/src/ar_sup.erl",
    "content": "%% This Source Code Form is subject to the terms of the GNU General\n%% Public License, v. 2.0. If a copy of the GPLv2 was not distributed\n%% with this file, You can obtain one at\n%% https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html\n\n-module(ar_sup).\n\n-behaviour(supervisor).\n\n%% API\n-export([start_link/0]).\n\n%% Supervisor callbacks\n-export([init/1]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave/include/ar_sup.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n%% ===================================================================\n%% API functions\n%% ===================================================================\n\nstart_link() ->\n\tsupervisor:start_link({local, ?MODULE}, ?MODULE, []).\n\n%% ===================================================================\n%% Supervisor callbacks\n%% ===================================================================\n\ninit([]) ->\n\t%% These ETS tables should belong to the supervisor.\n\tets:new(ar_shutdown_manager, [set, public, named_table, {read_concurrency, true}]),\n\tets:new(ar_timer, [set, public, named_table, {read_concurrency, true}]),\n\tets:new(ar_peers, [set, public, named_table, {read_concurrency, true}]),\n\tets:new(ar_http, [set, public, named_table]),\n\tets:new(ar_rate_limiter, [set, public, named_table, {read_concurrency, true}]),\n\tets:new(ar_blacklist_middleware, [set, public, named_table]),\n\tets:new(blacklist, [set, public, named_table]),\n\tets:new(ignored_ids, [bag, public, named_table]),\n\tets:new(ar_tx_emitter_recently_emitted, [set, public, named_table]),\n\tets:new(ar_tx_db, [set, public, named_table]),\n\tets:new(ar_nonce_limiter, [set, public, named_table]),\n\tets:new(ar_nonce_limiter_server, [set, public, named_table]),\n\tets:new(ar_header_sync, [set, public, named_table, {read_concurrency, true}]),\n\tets:new(ar_data_discovery, [ordered_set, public, named_table, {read_concurrency, true}]),\n\tets:new(ar_data_discovery_footprint_buckets, [ordered_set, public, named_table, {read_concurrency, true}]),\n\tets:new(ar_data_sync_coordinator, [set, public, named_table]),\n\tets:new(ar_data_sync_state, [set, public, named_table, {read_concurrency, true}]),\n\tets:new(ar_chunk_storage, [set, public, named_table]),\n\tets:new(ar_entropy_storage, [set, public, named_table]),\n\tets:new(ar_mining_stats, [set, public, named_table]),\n\tets:new(entropy_generation_stats, [ordered_set, public, named_table]),\n\tets:new(ar_global_sync_record, [set, public, named_table]),\n\tets:new(ar_disk_pool_data_roots, [set, public, named_table, {read_concurrency, true}]),\n\tets:new(ar_tx_blacklist, [set, public, named_table, {read_concurrency, true}]),\n\tets:new(ar_tx_blacklist_pending_headers,\n\t\t\t[set, public, named_table, {read_concurrency, true}]),\n\tets:new(ar_tx_blacklist_pending_data,\n\t\t\t[set, public, named_table, {read_concurrency, true}]),\n\tets:new(ar_tx_blacklist_offsets,\n\t\t\t[ordered_set, public, named_table, {read_concurrency, true}]),\n\tets:new(ar_tx_blacklist_pending_restore_headers,\n\t\t\t[ordered_set, public, named_table, {read_concurrency, true}]),\n\tets:new(block_cache, [set, public, named_table]),\n\tets:new(tx_prefixes, [bag, public, named_table]),\n\tets:new(block_index, [ordered_set, public, named_table]),\n\tets:new(node_state, [set, public, named_table]),\n\tets:new(mining_state, [set, public, named_table, {read_concurrency, true}]),\n\tChildren = [\n\t\t?CHILD(ar_shutdown_manager, 
worker),\n\t\t?CHILD(ar_rate_limiter, worker),\n\t\t?CHILD(ar_disksup, worker),\n\t\t?CHILD_SUP(ar_events_sup, supervisor),\n\t\t?CHILD_SUP(ar_http_sup, supervisor),\n\t\t?CHILD_SUP(ar_kv_sup, supervisor),\n\t\t?CHILD_SUP(ar_storage_sup, supervisor),\n\t\t?CHILD(ar_peers, worker),\n\t\t?CHILD(ar_disk_cache, worker),\n\t\t?CHILD(ar_watchdog, worker),\n\t\t?CHILD(ar_tx_blacklist, worker),\n\t\t?CHILD_SUP(ar_bridge_sup, supervisor),\n\t\t?CHILD_SUP(ar_packing_sup, supervisor),\n\t\t?CHILD_SUP(ar_sync_record_sup, supervisor),\n\t\t?CHILD(ar_data_discovery, worker),\n\t\t?CHILD(ar_header_sync, worker),\n\t\t?CHILD_SUP(ar_chunk_storage_sup, supervisor),\n\t\t?CHILD_SUP(ar_data_sync_sup, supervisor),\n\t\t?CHILD_SUP(ar_data_root_sync_sup, supervisor),\n\t\t?CHILD_SUP(ar_verify_chunks_sup, supervisor),\n\t\t?CHILD(ar_global_sync_record, worker),\n\t\t?CHILD_SUP(ar_nonce_limiter_sup, supervisor),\n\t\tmining_sup(),\n\t\t?CHILD(ar_coordination, worker),\n\t\t?CHILD_SUP(ar_tx_emitter_sup, supervisor),\n\t\t?CHILD(ar_tx_poller, worker),\n\t\t?CHILD_SUP(ar_block_pre_validator_sup, supervisor),\n\t\t?CHILD_SUP(ar_poller_sup, supervisor),\n\t\t?CHILD_SUP(ar_webhook_sup, supervisor),\n\t\t?CHILD(ar_pool, worker),\n\t\t?CHILD(ar_pool_job_poller, worker),\n\t\t?CHILD(ar_pool_cm_job_poller, worker),\n\t\t?CHILD(ar_chain_stats, worker),\n\t\t?CHILD_SUP(ar_node_sup, supervisor)\n\t],\n\t{ok, Config} = arweave_config:get_env(),\n\tDebugChildren = case Config#config.debug of\n\t\ttrue -> [?CHILD(ar_process_sampler, worker)];\n\t\tfalse -> []\n\tend,\n\t{ok, {{one_for_one, 5, 10}, Children ++ DebugChildren}}.\n\n-ifdef(LOCALNET).\nmining_sup() ->\n\t?CHILD_SUP(ar_localnet_mining_sup, supervisor).\n-else.\nmining_sup() ->\n\t?CHILD_SUP(ar_mining_sup, supervisor).\n-endif.\n"
  },
  {
    "path": "apps/arweave/src/ar_sync_buckets.erl",
    "content": "-module(ar_sync_buckets).\n\n-export([new/0, new/1, from_intervals/1, from_intervals/2,\n\t\tadd/3, delete/3, cut/2, get/3, serialize/2,\n\t\tdeserialize/2, foreach/3]).\n\n-include_lib(\"arweave/include/ar_sync_buckets.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Return an empty set of buckets.\nnew() ->\n\t{?DEFAULT_SYNC_BUCKET_SIZE, #{}}.\n\nnew(Size) ->\n\t{Size, #{}}.\n\n%% @doc Initialize buckets from a set of intervals (see ar_intervals).\n%% The bucket size is ?DEFAULT_SYNC_BUCKET_SIZE.\nfrom_intervals(Intervals) ->\n\tfrom_intervals(Intervals, new()).\n\n%% @doc Add the data from a set of intervals (see ar_intervals) to the given buckets.\nfrom_intervals(Intervals, SyncBuckets) ->\n\t{Size, Map} = SyncBuckets,\n\t{Size, ar_intervals:fold(\n\t\tfun({End, Start}, Acc) ->\n\t\t\tadd(Start, End, Size, Acc)\n\t\tend,\n\t\tMap,\n\t\tIntervals\n\t)}.\n\n%% @doc Add the interval with the end offset End and start offset Start to the buckets.\nadd(End, Start, Buckets) ->\n\t{Size, Map} = Buckets,\n\t{Size, add(Start, End, Size, Map)}.\n\n%% @doc Remove the interval with the end offset End and start offset Start from the buckets.\ndelete(End, Start, Buckets) ->\n\t{Size, Map} = Buckets,\n\t{Size, delete(Start, End, Size, Map)}.\n\n%% @doc Remove the intervals strictly above Offset from the buckets.\ncut(_Offset, {_Size, Map} = Buckets) when map_size(Map) == 0 ->\n\tBuckets;\ncut(Offset, Buckets) ->\n\t{Size, Map} = Buckets,\n\tLast = lists:last(lists:sort(maps:keys(Map))),\n\tEnd = (Last + 1) * Size,\n\t{Size, delete(Offset, End, Size, Map)}.\n\n%% @doc Return the percentage of data synced in the given bucket of size BucketSize.\n%% If the recorded bucket size is bigger than the given bucket size, return the share\n%% corresponding to the bigger bucket (essentially assuming the uniform distribution of data).\n%% If the given bucket crosses the border between the two recorded buckets, return\n%% the sum of their shares.\nget(Bucket, BucketSize, Buckets) ->\n\t{Size, Map} = Buckets,\n\tFirst = Bucket * BucketSize div Size,\n\tLast = (Bucket * BucketSize + BucketSize - 1) div Size,\n\tlists:sum([maps:get(Key, Map, 0) || Key <- lists:seq(First, Last)]).\n\n%% @doc Serialize buckets into Erlang Term Format. 
If the size of the serialized structure\n%% exceeds MaxSize, double the bucket size and restructure the buckets.\n%% Throw uncompressable_buckets if MaxSize is too low.\nserialize(Buckets, MaxSize) ->\n\tserialize(Buckets, MaxSize, infinity).\n\nserialize(Buckets, MaxSize, PrevSerializedSize) ->\n\tS = term_to_binary(Buckets),\n\tSerializedSize = byte_size(S),\n\tcase SerializedSize > MaxSize of\n\t\ttrue ->\n\t\t\tcase SerializedSize > PrevSerializedSize of\n\t\t\t\ttrue ->\n\t\t\t\t\tthrow(uncompressable_buckets);\n\t\t\t\tfalse ->\n\t\t\t\t\tok\n\t\t\tend,\n\t\t\t{Size, Map} = Buckets,\n\t\t\t%% Double the bucket size until the serialized buckets are smaller than MaxSize.\n\t\t\tserialize({Size * 2, maps:fold(\n\t\t\t\tfun(Bucket, Share, Acc) ->\n\t\t\t\t\tmaps:update_with(Bucket div 2, fun(Sh) -> (Sh + Share) / 2 end, Share, Acc)\n\t\t\t\tend,\n\t\t\t\tmaps:new(),\n\t\t\t\tMap\n\t\t\t)}, MaxSize, SerializedSize);\n\t\tfalse ->\n\t\t\t{Buckets, S}\n\tend.\n\n%% @doc Deserialize the buckets from Erlang Term Format.\n%% The bucket size must be bigger than or equal to ExpectedBucketSize.\ndeserialize(SerializedBuckets, ExpectedBucketSize) ->\n\tcase catch binary_to_term(SerializedBuckets, [safe]) of\n\t\t{BucketSize, Map} when is_map(Map), is_integer(BucketSize),\n\t\t\t\tBucketSize >= ExpectedBucketSize,\n\t\t\t\tBucketSize =< ExpectedBucketSize * ?MAX_SYNC_BUCKET_SIZE_RATIO ->\n\t\t\t{ok, {BucketSize, maps:filter(\n\t\t\t\tfun\t(Bucket, Share) when\n\t\t\t\t\t\t\tis_integer(Bucket), Bucket >= 0,\n\t\t\t\t\t\t\tis_number(Share), Share > 0, Share =< 1 ->\n\t\t\t\t\t\ttrue;\n\t\t\t\t\t(_, _) ->\n\t\t\t\t\t\tfalse\n\t\t\t\tend,\n\t\t\t\tMap\n\t\t\t)}};\n\t\t{'EXIT', Reason} ->\n\t\t\t{error, Reason};\n\t\t_ ->\n\t\t\t{error, invalid_format}\n\tend.\n\n%% @doc Apply the given function of two arguments (Bucket, Share) to each\n%% of the given buckets breaking them down according to the given size.\nforeach(Fun, BucketSize, {Size, Map}) when Size >= BucketSize, Size rem BucketSize == 0 ->\n\tRatio = Size div BucketSize,\n\tmaps:fold(\n\t\tfun(Bucket, Share, ok) ->\n\t\t\tforeach_range(Fun, Share, Bucket * Ratio, (Bucket + 1) * Ratio)\n\t\tend,\n\t\tok,\n\t\tMap\n\t);\nforeach(_Fun, _BucketSize, _Buckets) ->\n\tok.\n\nforeach_range(_Fun, _Share, SubBucket, End) when SubBucket >= End ->\n\tok;\nforeach_range(Fun, Share, SubBucket, End) ->\n\tFun(SubBucket, Share),\n\tforeach_range(Fun, Share, SubBucket + 1, End).\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nadd(Start, End, _Size, Map) when Start >= End ->\n\tMap;\nadd(Start, End, Size, Map) ->\n\tBucket = Start div Size,\n\tShare = maps:get(Bucket, Map, 0),\n\tBucketUpperBound = bucket_upper_bound(Start, Size),\n\tIncrease = min(BucketUpperBound, End) - Start,\n\tadd(BucketUpperBound, End, Size,\n\t\t\tmaps:put(Bucket, min(1, (Share * Size + Increase) / Size), Map)).\n\nbucket_upper_bound(Offset, Size) ->\n\tar_util:ceil_int(Offset, Size).\n\ndelete(Start, End, _Size, Map) when Start >= End ->\n\tMap;\ndelete(Start, End, Size, Map) ->\n\tBucket = Start div Size,\n\tShare = maps:get(Bucket, Map, 0),\n\tBucketUpperBound = bucket_upper_bound(Start, Size),\n\tDecrease = min(BucketUpperBound, End) - Start,\n\tdelete(BucketUpperBound, End, Size,\n\t\t\tmaps:put(Bucket, max(0, Share * (1 - Decrease / Size)), Map)).\n\n%%%===================================================================\n%%% 
Tests.\n%%%===================================================================\n\nbuckets_test() ->\n\tSize = 10000000000,\n\tB1 = {10000000000, #{}},\n\t?assertException(throw, uncompressable_buckets, serialize(B1, 10)),\n\t{B1, S1} = serialize(B1, 20),\n\t{ok, B1} = deserialize(S1, ?DEFAULT_SYNC_BUCKET_SIZE),\n\tB2 = add(5, 0, B1),\n\t?assertEqual(5 / Size, get(0, 10, B2)),\n\tB3 = add(Size * 2, Size, B2),\n\t?assertEqual({Size, #{ 0 => 5 / Size, 1 => 1 }}, B3),\n\t{B3, S3} = serialize(B3, 40),\n\t{ok, B3} = deserialize(S3, ?DEFAULT_SYNC_BUCKET_SIZE),\n\t%% The size of the serialized buckets is 31 bytes.\n\tDoubleSize = 2 * Size,\n\t?assertEqual({DoubleSize, #{ 0 => 0.5 + 5 / Size / 2 }}, element(1, serialize(B3, 30))),\n\t{_, S3_1} = serialize(B3, 30),\n\t?assertEqual({ok, {DoubleSize, #{ 0 => 0.5 + 5 / Size / 2 }}}, deserialize(S3_1, ?DEFAULT_SYNC_BUCKET_SIZE)),\n\t?assertEqual({Size, #{ 0 => 5 / Size, 1 => 0.5 }}, cut(Size + Size div 2, B3)),\n\t?assertEqual({Size, #{ 0 => (1 - (Size - 4) / Size) * (5 / Size), 1 => 0 }},\n\t\t\tdelete(Size * 2, 4, B3)),\n\tB4 = from_intervals(gb_sets:from_list([{5, 0}, {2 * Size, Size}]), {10000000000, #{}}),\n\t?assertEqual(B3, B4),\n\tB5 = from_intervals(gb_sets:from_list([{2 * Size, Size}]), B2),\n\t?assertEqual(B4, B5).\n"
  },
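A hedged example of how ar_sync_buckets records the synced share of each bucket, mirroring the module's own buckets_test above. The bucket size is hypothetical; note that add/3 takes the interval end offset first, following the ar_intervals convention.

	%% Sketch only, not part of the repository.
	sync_buckets_example() ->
		Size = 1000,                          %% hypothetical bucket size
		B0 = ar_sync_buckets:new(Size),
		%% Mark bytes (0, 250] as synced; the end offset comes first.
		B1 = ar_sync_buckets:add(250, 0, B0),
		%% A quarter of bucket 0 is now recorded as synced.
		0.25 = ar_sync_buckets:get(0, Size, B1),
		ok.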
  {
    "path": "apps/arweave/src/ar_sync_record.erl",
    "content": "-module(ar_sync_record).\n\n-behaviour(gen_server).\n\n-export([start_link/2, get/2, get/3, add/4, add/5, add_async/5, add_async/6, delete/4, cut/3,\n\t\tis_recorded/2, is_recorded/3, is_recorded/4, is_recorded_any/3,\n\t\tget_next_synced_interval/4, get_next_synced_interval/5,\n\t\tget_next_unsynced_interval/4, get_next_unsynced_interval/5,\n\t\tget_interval/3, get_intersection_size/4, name/1]).\n\n-export([init/1, handle_cast/2, handle_call/3, handle_info/2, terminate/2]).\n\n-include(\"ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n%% The kv storage key to the sync records.\n-define(SYNC_RECORDS_KEY, <<\"sync_records\">>).\n\n%% The kv key of the write ahead log counter.\n-define(WAL_COUNT_KEY, <<\"wal\">>).\n\n%% The frequency of dumping sync records on disk.\n-ifdef(AR_TEST).\n-define(STORE_SYNC_RECORD_FREQUENCY_MS, 1000).\n-else.\n-define(STORE_SYNC_RECORD_FREQUENCY_MS, 60 * 1000).\n-endif.\n\n-record(state, {\n\t%% A map ID => Intervals\n\t%% where Intervals is a set of non-overlapping intervals\n\t%% of global byte offsets {End, Start} denoting some synced\n\t%% data. End offsets are defined on [1, WeaveSize], start\n\t%% offsets are defined on [0, WeaveSize).\n\t%%\n\t%% Each set serves as a compact map of what is synced by the node.\n\t%% No matter how big the weave is or how much of it the node stores,\n\t%% this record can remain very small compared to the space taken by\n\t%% chunk identifiers, whose number grows unlimited with time.\n\tsync_record_by_id,\n\t%% A map {ID, Packing} => Intervals.\n\tsync_record_by_id_type,\n\t%% The name of the WAL store.\n\tstate_db,\n\t%% The identifier of the storage module.\n\tstore_id,\n\t%% The storage module.\n\tstorage_module,\n\t%% The partition covered by the storage module.\n\tpartition_number,\n\t%% The size in bytes of the storage module; undefined for the \"default\" storage.\n\tstorage_module_size,\n\t%% The index of the storage module; undefined for the \"default\" storage.\n\tstorage_module_index,\n\t%% The number of entries in the write-ahead log.\n\twal,\n\t%% Whether the sync record is in memory only.\n\tin_memory = false\n}).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Start the server.\nstart_link(Name, StoreID) ->\n\tgen_server:start_link({local, Name}, ?MODULE, StoreID, []).\n\n%% @doc Return the set of intervals.\nget(ID, StoreID) ->\n\tGenServerID = name(StoreID),\n\tcase catch gen_server:call(GenServerID, {get, ID}, 20000) of\n\t\t{'EXIT', {timeout, {gen_server, call, _}}} ->\n\t\t\t{error, timeout};\n\t\tReply ->\n\t\t\tReply\n\tend.\n\n%% @doc Return the set of intervals.\nget(ID, Packing, StoreID) ->\n\tGenServerID = name(StoreID),\n\tcase catch gen_server:call(GenServerID, {get, Packing, ID}, 20000) of\n\t\t{'EXIT', {timeout, {gen_server, call, _}}} ->\n\t\t\t{error, timeout};\n\t\tReply ->\n\t\t\tReply\n\tend.\n\n%% @doc Add the given interval to the record with the\n%% given ID. Store the changes on disk before returning ok.\nadd(End, Start, ID, StoreID) ->\n\tGenServerID = name(StoreID),\n\tcase catch gen_server:call(GenServerID, {add, End, Start, ID}, 120000) of\n\t\t{'EXIT', {timeout, {gen_server, call, _}}} ->\n\t\t\t{error, timeout};\n\t\tReply ->\n\t\t\tReply\n\tend.\n\n%% @doc Add the given interval to the record with the\n%% given ID and Packing. 
Store the changes on disk before\n%% returning ok.\nadd(End, Start, Packing, ID, StoreID) ->\n\tGenServerID = name(StoreID),\n\tcase catch gen_server:call(GenServerID, {add, End, Start, Packing, ID}, 120000) of\n\t\t{'EXIT', {timeout, {gen_server, call, _}}} ->\n\t\t\t{error, timeout};\n\t\tReply ->\n\t\t\tReply\n\tend.\n\n%% @doc Special case of add/4.\nadd_async(Event, End, Start, ID, StoreID) ->\n\tGenServerID = name(StoreID),\n\tgen_server:cast(GenServerID, {add_async, Event, End, Start, ID}).\n\n%% @doc Special case of add/5 for repacked chunks. When repacking the ar_sync_record add\n%% happens at the end so we don't need to block on it to complete.\nadd_async(Event, End, Start, Packing, ID, StoreID) ->\n\tGenServerID = name(StoreID),\n\tgen_server:cast(GenServerID, {add_async, Event, End, Start, Packing, ID}).\n\n%% @doc Remove the given interval from the record\n%% with the given ID. Store the changes on disk before\n%% returning ok.\ndelete(End, Start, ID, StoreID) ->\n\tGenServerID = name(StoreID),\n\tcase catch gen_server:call(GenServerID, {delete, End, Start, ID}, 120000) of\n\t\t{'EXIT', {timeout, {gen_server, call, _}}} ->\n\t\t\t{error, timeout};\n\t\tReply ->\n\t\t\tReply\n\tend.\n\n%% @doc Remove everything strictly above the given\n%% Offset from the record. Store the changes on disk\n%% before returning ok.\ncut(Offset, ID, StoreID) ->\n\tGenServerID = name(StoreID),\n\tcase catch gen_server:call(GenServerID, {cut, Offset, ID}, 120000) of\n\t\t{'EXIT', {timeout, {gen_server, call, _}}} ->\n\t\t\t{error, timeout};\n\t\tReply ->\n\t\t\tReply\n\tend.\n\n%% @doc Return {true, StoreID} or {{true, Packing}, StoreID} if a chunk containing\n%% the given Offset is found in the record with the given ID, false otherwise.\n%% If several types are recorded for the chunk, only one of them is returned,\n%% the choice is not defined. If the chunk is stored in the default storage module,\n%% return the type found there. If not, search for a configured storage\n%% module covering the given Offset. 
If there are multiple\n%% storage modules with the chunk, the choice is not defined.\n%% The offset is 1-based - if a chunk consists of a single\n%% byte that is the first byte of the weave, is_recorded(0, ID)\n%% returns false and is_recorded(1, ID) returns true.\nis_recorded(Offset, {ID, Packing}) ->\n\tcase is_recorded(Offset, Packing, ID, ?DEFAULT_MODULE) of\n\t\ttrue ->\n\t\t\t{{true, Packing}, ?DEFAULT_MODULE};\n\t\tfalse ->\n\t\t\tModuleOffset =\n\t\t\t\tcase ID of\n\t\t\t\t\tar_data_sync_footprints ->\n\t\t\t\t\t\tar_footprint:get_padded_offset_from_footprint_offset(Offset);\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tOffset\n\t\t\t\tend,\n\t\t\tStorageModules = [Module\n\t\t\t\t\t|| {_, _, ModulePacking} = Module <- ar_storage_module:get_all(ModuleOffset),\n\t\t\t\t\t\tModulePacking == Packing],\n\t\t\tis_recorded_any_by_type(Offset, ID, StorageModules)\n\tend;\nis_recorded(Offset, ID) ->\n\tcase is_recorded(Offset, ID, ?DEFAULT_MODULE) of\n\t\tfalse ->\n\t\t\tModuleOffset =\n\t\t\t\tcase ID of\n\t\t\t\t\tar_data_sync_footprints ->\n\t\t\t\t\t\tar_footprint:get_padded_offset_from_footprint_offset(Offset);\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tOffset\n\t\t\t\tend,\n\t\t\tStorageModules = ar_storage_module:get_all(ModuleOffset),\n\t\t\tis_recorded_any(Offset, ID, StorageModules);\n\t\tReply ->\n\t\t\t{Reply, ?DEFAULT_MODULE}\n\tend.\n\n%% @doc Return true or {true, Packing} if a chunk containing\n%% the given Offset is found in the record with the given ID\n%% in the storage module identified by StoreID, false otherwise.\nis_recorded(Offset, ID, StoreID) ->\n\tcase ets:lookup(sync_records, {ID, StoreID}) of\n\t\t[] ->\n\t\t\tfalse;\n\t\t[{_, TID}] ->\n\t\t\tcase ar_ets_intervals:is_inside(TID, Offset) of\n\t\t\t\tfalse ->\n\t\t\t\t\tfalse;\n\t\t\t\ttrue ->\n\t\t\t\t\tcase is_recorded2(Offset, ets:first(sync_records), ID, StoreID) of\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\ttrue;\n\t\t\t\t\t\t{true, Packing} ->\n\t\t\t\t\t\t\t{true, Packing}\n\t\t\t\t\tend\n\t\t\tend\n\tend.\n\n%% @doc Return true if a chunk containing the given Offset and Packing\n%% is found in the record in the storage module identified by StoreID,\n%% false otherwise.\nis_recorded(Offset, Packing, ID, StoreID) ->\n\tcase ets:lookup(sync_records, {ID, Packing, StoreID}) of\n\t\t[] ->\n\t\t\tfalse;\n\t\t[{_, TID}] ->\n\t\t\tar_ets_intervals:is_inside(TID, Offset)\n\tend.\n\n%% @doc Return the lowest synced interval with the end offset strictly above the given Offset\n%% and at most EndOffsetUpperBound.\n%% Return not_found if there are no such intervals.\nget_next_synced_interval(Offset, EndOffsetUpperBound, ID, StoreID) ->\n\tcase ets:lookup(sync_records, {ID, StoreID}) of\n\t\t[] ->\n\t\t\tnot_found;\n\t\t[{_, TID}] ->\n\t\t\tar_ets_intervals:get_next_interval(TID, Offset, EndOffsetUpperBound)\n\tend.\n\n%% @doc Return the lowest unsynced interval with the end offset strictly above the given Offset\n%% and at most EndOffsetUpperBound.\n%% Return not_found when Offset >= EndOffsetUpperBound.\n%% Return {EndOffsetUpperBound, Offset} when no records are found.\nget_next_unsynced_interval(Offset, EndOffsetUpperBound, _ID, _StoreID)\n\t\twhen Offset >= EndOffsetUpperBound ->\n\tnot_found;\nget_next_unsynced_interval(Offset, EndOffsetUpperBound, ID, StoreID) ->\n\tcase ets:lookup(sync_records, {ID, StoreID}) of\n\t\t[] ->\n\t\t\t{EndOffsetUpperBound, Offset};\n\t\t[{_, TID}] ->\n\t\t\tar_ets_intervals:get_next_interval_outside(TID, Offset, EndOffsetUpperBound)\n\tend.\n\n%% @doc Return the lowest synced interval with the end offset strictly above 
the given Offset\n%% and at most EndOffsetUpperBound.\n%% Return not_found if there are no such intervals.\nget_next_synced_interval(Offset, EndOffsetUpperBound, Packing, ID, StoreID) ->\n\tcase ets:lookup(sync_records, {ID, Packing, StoreID}) of\n\t\t[] ->\n\t\t\tnot_found;\n\t\t[{_, TID}] ->\n\t\t\tar_ets_intervals:get_next_interval(TID, Offset, EndOffsetUpperBound)\n\tend.\n\n%% @doc Return the lowest unsynced interval with the end offset strictly above the given Offset\n%% and at most EndOffsetUpperBound.\n%% Return not_found when Offset >= EndOffsetUpperBound.\n%% Return {EndOffsetUpperBound, Offset} when no records are found.\nget_next_unsynced_interval(Offset, EndOffsetUpperBound, _Packing, _ID, _StoreID)\n\t\twhen Offset >= EndOffsetUpperBound ->\n\tnot_found;\nget_next_unsynced_interval(Offset, EndOffsetUpperBound, Packing, ID, StoreID) ->\n\tcase ets:lookup(sync_records, {ID, Packing, StoreID}) of\n\t\t[] ->\n\t\t\t{EndOffsetUpperBound, Offset};\n\t\t[{_, TID}] ->\n\t\t\tar_ets_intervals:get_next_interval_outside(TID, Offset, EndOffsetUpperBound)\n\tend.\n\n%% @doc Return the interval containing the given Offset, including the right bound,\n%% excluding the left bound. Return not_found if the given offset does not belong to\n%% any interval.\nget_interval(Offset, ID, StoreID) ->\n\tcase ets:lookup(sync_records, {ID, StoreID}) of\n\t\t[] ->\n\t\t\tnot_found;\n\t\t[{_, TID}] ->\n\t\t\tar_ets_intervals:get_interval_with_byte(TID, Offset)\n\tend.\n\n%% @doc Return the size of the intersection between the intervals and the given range.\n%% Return 0 if the given ID and StoreID are not found.\nget_intersection_size(End, Start, ID, StoreID) ->\n\tcase ets:lookup(sync_records, {ID, StoreID}) of\n\t\t[] ->\n\t\t\t0;\n\t\t[{_, TID}] ->\n\t\t\tar_ets_intervals:get_intersection_size(TID, End, Start)\n\tend.\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\n\ninit(StoreID) ->\n\t%% Trap exit to avoid corrupting any open files on quit.\n\tprocess_flag(trap_exit, true),\n\tStorageModule = ar_storage_module:get_by_id(StoreID),\n\t{ok, Config} = arweave_config:get_env(),\n\tDataDir = Config#config.data_dir,\n\t{Dir, StorageModuleSize, StorageModuleIndex, PartitionNumber} =\n\t\tcase StorageModule of\n\t\t\t?DEFAULT_MODULE ->\n\t\t\t\t{filename:join([DataDir, ?ROCKS_DB_DIR, \"ar_sync_record_db\"]),\n\t\t\t\t\tundefined, undefined, undefined};\n\t\t\tAtom when is_atom(Atom) ->\n\t\t\t\t%% A module without a storage, to use in tests.\n\t\t\t\t{undefined, undefined, undefined, undefined};\n\t\t\t{Size, Index, _Packing} ->\n\t\t\t\t{filename:join([DataDir, \"storage_modules\", StoreID, ?ROCKS_DB_DIR,\n\t\t\t\t\t\t\"ar_sync_record_db\"]), Size, Index,\n\t\t\t\t\t\t\tar_node:get_partition_number(Size * Index)}\n\t\tend,\n\tStateDB = {sync_record, StoreID},\n\t{SyncRecordByID, SyncRecordByIDType, WAL} =\n\t\tcase Dir of\n\t\t\tundefined ->\n\t\t\t\t{#{}, #{}, undefined};\n\t\t\t_ ->\n\t\t\t\tok = ar_kv:open(#{ path => Dir, name => StateDB }),\n\t\t\t\tgen_server:cast(self(), store_state),\n\t\t\t\tread_sync_records(StateDB, StoreID)\n\t\tend,\n\tinitialize_sync_record_by_id_type_ets(SyncRecordByIDType, StoreID),\n\tinitialize_sync_record_by_id_ets(SyncRecordByID, StoreID),\n\t{ok, #state{\n\t\tstate_db = StateDB,\n\t\tstore_id = StoreID,\n\t\tstorage_module = StorageModule,\n\t\tpartition_number = PartitionNumber,\n\t\tstorage_module_size = 
StorageModuleSize,\n\t\tstorage_module_index = StorageModuleIndex,\n\t\tsync_record_by_id = SyncRecordByID,\n\t\tsync_record_by_id_type = SyncRecordByIDType,\n\t\twal = WAL,\n\t\tin_memory = Dir == undefined\n\t}}.\n\nhandle_call({get, ID}, _From, State) ->\n\t#state{ sync_record_by_id = SyncRecordByID } = State,\n\t{reply, maps:get(ID, SyncRecordByID, ar_intervals:new()), State};\n\nhandle_call({get, Packing, ID}, _From, State) ->\n\t#state{ sync_record_by_id_type = SyncRecordByIDType } = State,\n\t{reply, maps:get({ID, Packing}, SyncRecordByIDType, ar_intervals:new()), State};\n\nhandle_call({add, End, Start, ID}, _From, State) ->\n\t{Reply, State2} = add2(End, Start, ID, State),\n\t{reply, Reply, State2};\n\nhandle_call({add, End, Start, Packing, ID}, _From, State) ->\n\t{Reply, State2} = add2(End, Start, Packing, ID, State),\n\t{reply, Reply, State2};\n\nhandle_call({delete, End, Start, ID}, _From, State) ->\n\t{Reply, State2} = delete2(End, Start, ID, State),\n\t{reply, Reply, State2};\n\nhandle_call({cut, Offset, ID}, _From, State) ->\n\t#state{ sync_record_by_id = SyncRecordByID, sync_record_by_id_type = SyncRecordByIDType,\n\t\t\tstate_db = StateDB, store_id = StoreID } = State,\n\tSyncRecord = maps:get(ID, SyncRecordByID, ar_intervals:new()),\n\tSyncRecord2 = ar_intervals:cut(SyncRecord, Offset),\n\tSyncRecordByID2 = maps:put(ID, SyncRecord2, SyncRecordByID),\n\tTID = get_or_create_type_tid({ID, StoreID}),\n\tar_ets_intervals:cut(TID, Offset),\n\tSyncRecordByIDType2 =\n\t\tmaps:map(\n\t\t\tfun\n\t\t\t\t({ID2, _}, ByType) when ID2 == ID ->\n\t\t\t\t\tar_intervals:cut(ByType, Offset);\n\t\t\t\t(_, ByType) ->\n\t\t\t\t\tByType\n\t\t\tend,\n\t\t\tSyncRecordByIDType\n\t\t),\n\tets:foldl(\n\t\tfun\n\t\t\t({{ID2, _, SID}, TypeTID}, _) when ID2 == ID, SID == StoreID ->\n\t\t\t\tar_ets_intervals:cut(TypeTID, Offset);\n\t\t\t(_, _) ->\n\t\t\t\tok\n\t\tend,\n\t\tok,\n\t\tsync_records\n\t),\n\tState2 = State#state{ sync_record_by_id = SyncRecordByID2,\n\t\t\tsync_record_by_id_type = SyncRecordByIDType2 },\n\t{Reply, State3} = update_write_ahead_log({cut, {Offset, ID}}, StateDB, State2),\n\tcase Reply of\n\t\tok ->\n\t\t\temit_cut(Offset, StoreID);\n\t\t_ ->\n\t\t\tok\n\tend,\n\t{reply, Reply, State3};\n\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {request, Request}]),\n\t{reply, ok, State}.\n\nhandle_cast(store_state, State) ->\n\t{_, State2} = store_state(State),\n\t{ok, _} = ar_timer:apply_after(\n\t\t?STORE_SYNC_RECORD_FREQUENCY_MS,\n\t\tgen_server,\n\t\tcast,\n\t\t[self(), store_state],\n\t\t#{ skip_on_shutdown => false }\n\t),\n\t{noreply, State2};\n\nhandle_cast({add_async, Event, End, Start, ID}, State) ->\n\t{Reply, State2} = add2(End, Start, ID, State),\n\tcase Reply of\n\t\tok ->\n\t\t\tok;\n\t\tError ->\n\t\t\t?LOG_ERROR([{event, Event},\n\t\t\t\t\t{operation, add_async},\n\t\t\t\t\t{status, failed},\n\t\t\t\t\t{sync_record_id, ID},\n\t\t\t\t\t{offset, End},\n\t\t\t\t\t{error, io_lib:format(\"~p\", [Error])}])\n\tend,\n\t{noreply, State2};\n\nhandle_cast({add_async, Event, End, Start, Packing, ID}, State) ->\n\t{Reply, State2} = add2(End, Start, Packing, ID, State),\n\tcase Reply of\n\t\tok ->\n\t\t\tok;\n\t\tError ->\n\t\t\t?LOG_ERROR([{event, Event},\n\t\t\t\t\t{operation, add_async},\n\t\t\t\t\t{status, failed},\n\t\t\t\t\t{sync_record_id, ID},\n\t\t\t\t\t{offset, End},\n\t\t\t\t\t{packing, ar_serialize:encode_packing(Packing, true)},\n\t\t\t\t\t{error, io_lib:format(\"~p\", [Error])}])\n\tend,\n\t{noreply, 
State2};\n\nhandle_cast(Cast, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {cast, Cast}]),\n\t{noreply, State}.\n\nhandle_info(Message, State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {module, ?MODULE}, {message, Message}]),\n\t{noreply, State}.\n\nterminate(Reason, State) ->\n\t?LOG_INFO([{event, terminate}, {module, ?MODULE}, {reason, io_lib:format(\"~p\", [Reason])}]),\n\tstore_state(State).\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nname(StoreID) when is_atom(StoreID) ->\n\tlist_to_atom(\"ar_sync_record_\" ++ atom_to_list(StoreID));\nname(StoreID) ->\n\tlist_to_atom(\"ar_sync_record_\" ++ ar_storage_module:label(StoreID)).\n\nadd2(End, Start, ID, State) ->\n\t#state{ sync_record_by_id = SyncRecordByID, state_db = StateDB,\n\t\t\tstore_id = StoreID, storage_module = Module } = State,\n\tSyncRecord = maps:get(ID, SyncRecordByID, ar_intervals:new()),\n\tSyncRecord2 = ar_intervals:add(SyncRecord, End, Start),\n\tSyncRecordByID2 = maps:put(ID, SyncRecord2, SyncRecordByID),\n\tTID = get_or_create_type_tid({ID, StoreID}),\n\tar_ets_intervals:add(TID, End, Start),\n\tState2 = State#state{ sync_record_by_id = SyncRecordByID2 },\n\t{Reply, State3} = update_write_ahead_log({add, {End, Start, ID}}, StateDB, State2),\n\tcase Reply of\n\t\tok ->\n\t\t\temit_add_range(Start, End, ID, #{ module => Module });\n\t\t_ ->\n\t\t\tok\n\tend,\n\t{Reply, State3}.\n\nadd2(End, Start, Packing, ID, State) ->\n\t#state{ sync_record_by_id = SyncRecordByID, sync_record_by_id_type = SyncRecordByIDType,\n\t\t\tstate_db = StateDB, store_id = StoreID, storage_module = Module } = State,\n\tByType = maps:get({ID, Packing}, SyncRecordByIDType, ar_intervals:new()),\n\tByType2 = ar_intervals:add(ByType, End, Start),\n\tSyncRecordByIDType2 = maps:put({ID, Packing}, ByType2, SyncRecordByIDType),\n\tTypeTID = get_or_create_type_tid({ID, Packing, StoreID}),\n\tar_ets_intervals:add(TypeTID, End, Start),\n\tSyncRecord = maps:get(ID, SyncRecordByID, ar_intervals:new()),\n\tSyncRecord2 = ar_intervals:add(SyncRecord, End, Start),\n\tSyncRecordByID2 = maps:put(ID, SyncRecord2, SyncRecordByID),\n\tTID = get_or_create_type_tid({ID, StoreID}),\n\tar_ets_intervals:add(TID, End, Start),\n\tState2 = State#state{ sync_record_by_id = SyncRecordByID2,\n\t\tsync_record_by_id_type = SyncRecordByIDType2 },\n\t{Reply, State3} = update_write_ahead_log({{add, Packing}, {End, Start, ID}}, StateDB, State2),\n\tcase Reply of\n\t\tok ->\n\t\t\temit_add_range(Start, End, ID, #{ module => Module, packing => Packing });\n\t\t_ ->\n\t\t\tok\n\tend,\n\t{Reply, State3}.\n\ndelete2(End, Start, ID, State) ->\n\t#state{ sync_record_by_id = SyncRecordByID, sync_record_by_id_type = SyncRecordByIDType,\n\t\t\tstate_db = StateDB, store_id = StoreID } = State,\n\tSyncRecord = maps:get(ID, SyncRecordByID, ar_intervals:new()),\n\tSyncRecord2 = ar_intervals:delete(SyncRecord, End, Start),\n\tSyncRecordByID2 = maps:put(ID, SyncRecord2, SyncRecordByID),\n\tTID = get_or_create_type_tid({ID, StoreID}),\n\tar_ets_intervals:delete(TID, End, Start),\n\tSyncRecordByIDType2 =\n\t\tmaps:map(\n\t\t\tfun\n\t\t\t\t({ID2, _}, ByType) when ID2 == ID ->\n\t\t\t\t\tar_intervals:delete(ByType, End, Start);\n\t\t\t\t(_, ByType) ->\n\t\t\t\t\tByType\n\t\t\tend,\n\t\t\tSyncRecordByIDType\n\t\t),\n\tets:foldl(\n\t\tfun\n\t\t\t({{ID2, _, SID}, TypeTID}, _) when ID2 == ID, SID == StoreID ->\n\t\t\t\tar_ets_intervals:delete(TypeTID, 
End, Start);\n\t\t\t(_, _) ->\n\t\t\t\tok\n\t\tend,\n\t\tok,\n\t\tsync_records\n\t),\n\tState2 = State#state{ sync_record_by_id = SyncRecordByID2,\n\t\t\tsync_record_by_id_type = SyncRecordByIDType2 },\n\t{Reply, State3} = update_write_ahead_log({delete, {End, Start, ID}}, StateDB, State2),\n\tcase Reply of\n\t\tok ->\n\t\t\temit_remove_range(Start, End, StoreID);\n\t\t_ ->\n\t\t\tok\n\tend,\n\t{Reply, State3}.\n\nis_recorded_any_by_type(Offset, ID, [StorageModule | StorageModules]) ->\n\tStoreID = ar_storage_module:id(StorageModule),\n\t{_, _, Packing} = StorageModule,\n\tcase is_recorded(Offset, Packing, ID, StoreID) of\n\t\ttrue ->\n\t\t\t{{true, Packing}, StoreID};\n\t\tfalse ->\n\t\t\tis_recorded_any_by_type(Offset, ID, StorageModules)\n\tend;\nis_recorded_any_by_type(_Offset, _ID, []) ->\n\tfalse.\n\nis_recorded_any(Offset, ID, [StorageModule | StorageModules]) ->\n\tStoreID = ar_storage_module:id(StorageModule),\n\tcase is_recorded(Offset, ID, StoreID) of\n\t\tfalse ->\n\t\t\tis_recorded_any(Offset, ID, StorageModules);\n\t\tReply ->\n\t\t\t{Reply, StoreID}\n\tend;\nis_recorded_any(_Offset, _ID, []) ->\n\tfalse.\n\nis_recorded2(_Offset, '$end_of_table', _ID, _StoreID) ->\n\tfalse;\nis_recorded2(Offset, {ID, Packing, StoreID}, ID, StoreID) ->\n\tcase ets:lookup(sync_records, {ID, Packing, StoreID}) of\n\t\t[{_, TID}] ->\n\t\t\tcase ar_ets_intervals:is_inside(TID, Offset) of\n\t\t\t\ttrue ->\n\t\t\t\t\t{true, Packing};\n\t\t\t\tfalse ->\n\t\t\t\t\tis_recorded2(Offset, ets:next(sync_records, {ID, Packing, StoreID}), ID,\n\t\t\t\t\t\t\tStoreID)\n\t\t\tend;\n\t\t[] ->\n\t\t\t%% Very unlucky timing.\n\t\t\tfalse\n\tend;\nis_recorded2(Offset, Key, ID, StoreID) ->\n\tis_recorded2(Offset, ets:next(sync_records, Key), ID, StoreID).\n\nread_sync_records(StateDB, StoreID) ->\n\t{SyncRecordByID, SyncRecordByIDType} =\n\t\tcase ar_kv:get(StateDB, ?SYNC_RECORDS_KEY) of\n\t\t\tnot_found ->\n\t\t\t\t{#{}, #{}};\n\t\t{ok, V} ->\n\t\t\tbinary_to_term(V, [safe])\n\tend,\n\t{SyncRecordByID2, SyncRecordByIDType2, WAL} =\n\t\treplay_write_ahead_log(SyncRecordByID, SyncRecordByIDType, StateDB, StoreID),\n\t{SyncRecordByID2, SyncRecordByIDType2, WAL}.\n\nreplay_write_ahead_log(SyncRecordByID, SyncRecordByIDType, StateDB, StoreID) ->\n\tWAL =\n\t\tcase ar_kv:get(StateDB, ?WAL_COUNT_KEY) of\n\t\t\tnot_found ->\n\t\t\t\t0;\n\t\t\t{ok, V} ->\n\t\t\t\tbinary:decode_unsigned(V)\n\t\tend,\n\treplay_write_ahead_log(SyncRecordByID, SyncRecordByIDType, WAL, StateDB, StoreID).\n\nreplay_write_ahead_log(SyncRecordByID, SyncRecordByIDType, WAL, StateDB, StoreID) ->\n\tModule = ar_storage_module:get_by_id(StoreID),\n\treplay_write_ahead_log(\n\t\tSyncRecordByID, SyncRecordByIDType, 1, WAL, StateDB, StoreID, Module).\n\nreplay_write_ahead_log(SyncRecordByID, SyncRecordByIDType, N, WAL, _StateDB, _StoreID, _Module)\n\t\twhen N > WAL ->\n\t{SyncRecordByID, SyncRecordByIDType, WAL};\nreplay_write_ahead_log(SyncRecordByID, SyncRecordByIDType, N, WAL, StateDB, StoreID, Module) ->\n\tcase ar_kv:get(StateDB, binary:encode_unsigned(N)) of\n\t\tnot_found ->\n\t\t\t%% The VM crashed after recording the number.\n\t\t\t{SyncRecordByID, SyncRecordByIDType, WAL};\n\t\t{ok, V} ->\n\t\t\t{Op, Params} = binary_to_term(V, [safe]),\n\t\t\tcase Op of\n\t\t\t\tadd ->\n\t\t\t\t\t{End, Start, ID} = Params,\n\t\t\t\t\tSyncRecord = maps:get(ID, SyncRecordByID, ar_intervals:new()),\n\t\t\t\t\tSyncRecord2 = ar_intervals:add(SyncRecord, End, Start),\n\t\t\t\t\temit_add_range(Start, End, ID, #{ module => Module }),\n\t\t\t\t\tSyncRecordByID2 = 
maps:put(ID, SyncRecord2, SyncRecordByID),\n\t\t\t\t\treplay_write_ahead_log(\n\t\t\t\t\t\tSyncRecordByID2, SyncRecordByIDType, N + 1,\n\t\t\t\t\t\tWAL, StateDB, StoreID, Module);\n\t\t\t\t{add, Packing} ->\n\t\t\t\t\t{End, Start, ID} = Params,\n\t\t\t\t\tSyncRecord = maps:get(ID, SyncRecordByID, ar_intervals:new()),\n\t\t\t\t\tSyncRecord2 = ar_intervals:add(SyncRecord, End, Start),\n\t\t\t\t\tSyncRecordByID2 = maps:put(ID, SyncRecord2, SyncRecordByID),\n\t\t\t\t\tByType = maps:get({ID, Packing}, SyncRecordByIDType, ar_intervals:new()),\n\t\t\t\t\tByType2 = ar_intervals:add(ByType, End, Start),\n\t\t\t\t\temit_add_range(Start, End, ID, #{ module => Module, packing => Packing }),\n\t\t\t\t\tSyncRecordByIDType2 = maps:put({ID, Packing}, ByType2, SyncRecordByIDType),\n\t\t\t\t\treplay_write_ahead_log(\n\t\t\t\t\t\tSyncRecordByID2, SyncRecordByIDType2, N + 1,\n\t\t\t\t\t\tWAL, StateDB, StoreID, Module);\n\t\t\t\tdelete ->\n\t\t\t\t\t{End, Start, ID} = Params,\n\t\t\t\t\tSyncRecord = maps:get(ID, SyncRecordByID, ar_intervals:new()),\n\t\t\t\t\tSyncRecord2 = ar_intervals:delete(SyncRecord, End, Start),\n\t\t\t\t\temit_remove_range(Start, End, Module),\n\t\t\t\t\tSyncRecordByID2 = maps:put(ID, SyncRecord2, SyncRecordByID),\n\t\t\t\t\tSyncRecordByIDType2 =\n\t\t\t\t\t\tmaps:map(\n\t\t\t\t\t\t\tfun\n\t\t\t\t\t\t\t\t({ID2, _}, ByType) when ID2 == ID ->\n\t\t\t\t\t\t\t\t\tar_intervals:delete(ByType, End, Start);\n\t\t\t\t\t\t\t\t(_, ByType) ->\n\t\t\t\t\t\t\t\t\tByType\n\t\t\t\t\t\t\tend,\n\t\t\t\t\t\t\tSyncRecordByIDType\n\t\t\t\t\t\t),\n\t\t\t\t\treplay_write_ahead_log(\n\t\t\t\t\t\tSyncRecordByID2, SyncRecordByIDType2, N + 1,\n\t\t\t\t\t\tWAL, StateDB, StoreID, Module);\n\t\t\t\tcut ->\n\t\t\t\t\t{Offset, ID} = Params,\n\t\t\t\t\tSyncRecord = maps:get(ID, SyncRecordByID, ar_intervals:new()),\n\t\t\t\t\tSyncRecord2 = ar_intervals:cut(SyncRecord, Offset),\n\t\t\t\t\temit_cut(Offset, Module),\n\t\t\t\t\tSyncRecordByID2 = maps:put(ID, SyncRecord2, SyncRecordByID),\n\t\t\t\t\tSyncRecordByIDType2 =\n\t\t\t\t\t\tmaps:map(\n\t\t\t\t\t\t\tfun\n\t\t\t\t\t\t\t\t({ID2, _}, ByType) when ID2 == ID ->\n\t\t\t\t\t\t\t\t\tar_intervals:cut(ByType, Offset);\n\t\t\t\t\t\t\t\t(_, ByType) ->\n\t\t\t\t\t\t\t\t\tByType\n\t\t\t\t\t\t\tend,\n\t\t\t\t\t\t\tSyncRecordByIDType\n\t\t\t\t\t\t),\n\t\t\t\t\treplay_write_ahead_log(\n\t\t\t\t\t\tSyncRecordByID2, SyncRecordByIDType2, N + 1,\n\t\t\t\t\t\tWAL, StateDB, StoreID, Module)\n\t\t\tend\n\tend.\n\nemit_add_range(Start, End, ar_data_sync, Options) ->\n\tar_events:send(sync_record, {add_range, Start, End, ar_data_sync, Options});\nemit_add_range(Start, End, ar_data_sync_footprints, Options) ->\n\tar_events:send(sync_record, {add_range, Start, End, ar_data_sync_footprints, Options});\nemit_add_range(_Start, _End, _ID, _Options) ->\n\tok.\n\nemit_remove_range(Start, End, Module) ->\n\tar_events:send(sync_record, {remove_range, Start, End, Module}).\n\nemit_cut(Offset, Module) ->\n\tar_events:send(sync_record, {cut, Offset, Module}).\n\ninitialize_sync_record_by_id_ets(SyncRecordByID, StoreID) ->\n\tIterator = maps:iterator(SyncRecordByID),\n\tinitialize_sync_record_by_id_ets2(maps:next(Iterator), StoreID).\n\ninitialize_sync_record_by_id_ets2(none, _StoreID) ->\n\tok;\ninitialize_sync_record_by_id_ets2({ID, SyncRecord, Iterator}, StoreID) ->\n\tTID = ets:new(sync_record_type, [ordered_set, public, {read_concurrency, true}]),\n\tar_ets_intervals:init_from_gb_set(TID, SyncRecord),\n\tets:insert(sync_records, {{ID, StoreID}, 
TID}),\n\tinitialize_sync_record_by_id_ets2(maps:next(Iterator), StoreID).\n\ninitialize_sync_record_by_id_type_ets(SyncRecordByIDType, StoreID) ->\n\tIterator = maps:iterator(SyncRecordByIDType),\n\tinitialize_sync_record_by_id_type_ets2(maps:next(Iterator), StoreID).\n\ninitialize_sync_record_by_id_type_ets2(none, _StoreID) ->\n\tok;\ninitialize_sync_record_by_id_type_ets2({{ID, Packing}, SyncRecord, Iterator}, StoreID) ->\n\tTID = ets:new(sync_record_type, [ordered_set, public, {read_concurrency, true}]),\n\tar_ets_intervals:init_from_gb_set(TID, SyncRecord),\n\tets:insert(sync_records, {{ID, Packing, StoreID}, TID}),\n\tinitialize_sync_record_by_id_type_ets2(maps:next(Iterator), StoreID).\n\nstore_state(#state{ in_memory = true }) ->\n\tok;\nstore_state(State) ->\n\t#state{ state_db = StateDB, sync_record_by_id = SyncRecordByID,\n\t\t\tsync_record_by_id_type = SyncRecordByIDType, store_id = StoreID,\n\t\t\tpartition_number = PartitionNumber,\n\t\t\tstorage_module_size = StorageModuleSize,\n\t\t\tstorage_module_index = StorageModuleIndex } = State,\n\tStoreSyncRecords =\n\t\tar_kv:put(\n\t\t\tStateDB,\n\t\t\t?SYNC_RECORDS_KEY,\n\t\t\tterm_to_binary({SyncRecordByID, SyncRecordByIDType})\n\t\t),\n\tResetWAL =\n\t\tcase StoreSyncRecords of\n\t\t\t{error, _} = Error ->\n\t\t\t\tError;\n\t\t\tok ->\n\t\t\t\tar_kv:put(StateDB, ?WAL_COUNT_KEY, binary:encode_unsigned(0))\n\t\tend,\n\tcase ResetWAL of\n\t\t{error, Reason} = Error2 ->\n\t\t\t?LOG_WARNING([\n\t\t\t\t{event, failed_to_store_state},\n\t\t\t\t{reason, io_lib:format(\"~p\", [Reason])}\n\t\t\t]),\n\t\t\t{Error2, State};\n\t\tok ->\n\t\t\tmaps:map(\n\t\t\t\tfun\t({ar_data_sync, Packing}, TypeRecord) ->\n\t\t\t\t\t\tar_mining_stats:set_storage_module_data_size(\n\t\t\t\t\t\t\tStoreID, Packing, PartitionNumber, StorageModuleSize,\n\t\t\t\t\t\t\tStorageModuleIndex,\n\t\t\t\t\t\t\tar_intervals:sum(TypeRecord));\n\t\t\t\t\t(_, _) ->\n\t\t\t\t\t\tok\n\t\t\t\tend,\n\t\t\t\tSyncRecordByIDType\n\t\t\t),\n\t\t\t{ok, State#state{ wal = 0 }}\n\tend.\n\nget_or_create_type_tid(IDType) ->\n\tcase ets:lookup(sync_records, IDType) of\n\t\t[] ->\n\t\t\tTID = ets:new(sync_record_type, [ordered_set, public, {read_concurrency, true}]),\n\t\t\tets:insert(sync_records, {IDType, TID}),\n\t\t\tTID;\n\t\t[{_, TID2}] ->\n\t\t\tTID2\n\tend.\n\nupdate_write_ahead_log(_OpParams, _StateDB, #state{ in_memory = true } = State) ->\n\t{ok, State};\nupdate_write_ahead_log(OpParams, StateDB, State) ->\n\t#state{\n\t\twal = WAL\n\t} = State,\n\tcase ar_kv:put(StateDB, binary:encode_unsigned(WAL + 1), term_to_binary(OpParams)) of\n\t\t{error, _Reason} = Error ->\n\t\t\t{Error, State};\n\t\tok ->\n\t\t\tcase ar_kv:put(StateDB, ?WAL_COUNT_KEY, binary:encode_unsigned(WAL + 1)) of\n\t\t\t\tok ->\n\t\t\t\t\t{ok, State#state{ wal = WAL + 1 }};\n\t\t\t\tError2 ->\n\t\t\t\t\t{Error2, State}\n\t\t\tend\n\tend.\n"
  },
  {
    "path": "apps/arweave/src/ar_sync_record_sup.erl",
    "content": "-module(ar_sync_record_sup).\n\n-behaviour(supervisor).\n\n-export([start_link/0]).\n\n-export([init/1]).\n\n-include_lib(\"arweave/include/ar_sup.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nstart_link() ->\n\tsupervisor:start_link({local, ?MODULE}, ?MODULE, []).\n\n%% ===================================================================\n%% Supervisor callbacks.\n%% ===================================================================\n\ninit([]) ->\n\tets:new(sync_records, [set, public, named_table, {read_concurrency, true}]),\n\t{ok, Config} = arweave_config:get_env(),\n\tConfiguredWorkers = lists:map(\n\t\tfun(StorageModule) ->\n\t\t\tStoreID = ar_storage_module:id(StorageModule),\n\t\t\tLabel = ar_storage_module:label(StoreID),\n\t\t\tName = list_to_atom(\"ar_sync_record_\" ++ Label),\n\t\t\t?CHILD_WITH_ARGS(ar_sync_record, worker, Name, [Name, StoreID])\n\t\tend,\n\t\tConfig#config.storage_modules\n\t),\n\tDefaultSyncRecordWorker = ?CHILD_WITH_ARGS(ar_sync_record, worker, ar_sync_record_default,\n\t\t[ar_sync_record_default, ?DEFAULT_MODULE]),\n\tRepackInPlaceWorkers = lists:map(\n\t\tfun({StorageModule, _Packing}) ->\n\t\t\tStoreID = ar_storage_module:id(StorageModule),\n\t\t\tLabel = ar_storage_module:label(StoreID),\n\t\t\tName = list_to_atom(\"ar_sync_record_\" ++ Label),\n\t\t\t?CHILD_WITH_ARGS(ar_sync_record, worker, Name, [Name, StoreID])\n\t\tend,\n\t\tConfig#config.repack_in_place_storage_modules\n\t),\n\tWorkers = [DefaultSyncRecordWorker] ++ ConfiguredWorkers ++ RepackInPlaceWorkers,\n\t{ok, {{one_for_one, 5, 10}, Workers}}.\n"
  },
  {
    "path": "apps/arweave/src/ar_testnet.erl",
    "content": "-module(ar_testnet).\n\n-export([is_testnet/0, height_testnet_fork/0, top_up_test_wallet/2,\n\t\tlocked_rewards_blocks/1, reward_history_blocks/1, target_block_time/1,\n\t\tlegacy_reward_history_blocks/1]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave/include/ar_pricing.hrl\").\n\n-ifndef(TESTNET_REWARD_HISTORY_BLOCKS).\n-define(TESTNET_REWARD_HISTORY_BLOCKS, ?REWARD_HISTORY_BLOCKS).\n-endif.\n\n-ifndef(TESTNET_LEGACY_REWARD_HISTORY_BLOCKS).\n-define(TESTNET_LEGACY_REWARD_HISTORY_BLOCKS, ?LEGACY_REWARD_HISTORY_BLOCKS).\n-endif.\n\n-ifndef(TESTNET_LOCKED_REWARDS_BLOCKS).\n-define(TESTNET_LOCKED_REWARDS_BLOCKS, ?LOCKED_REWARDS_BLOCKS).\n-endif.\n\n-ifndef(TESTNET_TARGET_BLOCK_TIME).\n-define(TESTNET_TARGET_BLOCK_TIME, ?TARGET_BLOCK_TIME).\n-endif.\n\n-ifndef(TESTNET_FORK_HEIGHT).\n-define(TESTNET_FORK_HEIGHT, infinity).\n-endif.\n\n-ifdef(TESTNET).\nis_testnet() -> true.\n-else.\nis_testnet() -> false.\n-endif.\n\n-ifdef(TESTNET).\nheight_testnet_fork() ->\n\t?TESTNET_FORK_HEIGHT.\n-else.\nheight_testnet_fork() ->\n\tinfinity.\n-endif.\n\n-ifdef(TESTNET).\ntop_up_test_wallet(Accounts, Height) ->\n\tcase Height == height_testnet_fork() of\n\t\ttrue ->\n\t\t\tAddr = ar_util:decode(<<?TEST_WALLET_ADDRESS>>),\n\t\t\tmaps:put(Addr, {?AR(?TOP_UP_TEST_WALLET_AR), <<>>, 1, true}, Accounts);\n\t\tfalse ->\n\t\t\tAccounts\n\tend.\n-else.\ntop_up_test_wallet(Accounts, _Height) ->\n\tAccounts.\n-endif.\n\nlocked_rewards_blocks(Height) ->\n\tcase application:get_env(arweave, locked_rewards_blocks) of\n\t\t{ok, Value} when is_integer(Value), Value > 0 ->\n\t\t\tValue;\n\t\t_ ->\n\t\t\tlocked_rewards_blocks2(Height)\n\tend.\n\n-ifdef(TESTNET).\nlocked_rewards_blocks2(Height) ->\n\tcase Height >= height_testnet_fork() of\n\t\ttrue ->\n\t\t\t?TESTNET_LOCKED_REWARDS_BLOCKS;\n\t\tfalse ->\n\t\t\t?LOCKED_REWARDS_BLOCKS\n\tend.\n-else.\nlocked_rewards_blocks2(_Height) ->\n\t?LOCKED_REWARDS_BLOCKS.\n-endif.\n\n-ifdef(TESTNET).\nreward_history_blocks(Height) ->\n\tcase Height >= height_testnet_fork() of\n\t\ttrue ->\n\t\t\t?TESTNET_REWARD_HISTORY_BLOCKS;\n\t\tfalse ->\n\t\t\t?REWARD_HISTORY_BLOCKS\n\tend.\n-else.\nreward_history_blocks(_Height) ->\n\t?REWARD_HISTORY_BLOCKS.\n-endif.\n\n-ifdef(TESTNET).\nlegacy_reward_history_blocks(Height) ->\n\tcase Height >= height_testnet_fork() of\n\t\ttrue ->\n\t\t\t?TESTNET_LEGACY_REWARD_HISTORY_BLOCKS;\n\t\tfalse ->\n\t\t\t?LEGACY_REWARD_HISTORY_BLOCKS\n\tend.\n-else.\nlegacy_reward_history_blocks(_Height) ->\n\t?LEGACY_REWARD_HISTORY_BLOCKS.\n-endif.\n\n-ifdef(TESTNET).\ntarget_block_time(Height) ->\n\tcase Height >= height_testnet_fork() of\n\t\ttrue ->\n\t\t\t?TESTNET_TARGET_BLOCK_TIME;\n\t\tfalse ->\n\t\t\t?TARGET_BLOCK_TIME\n\tend.\n-else.\ntarget_block_time(_Height) ->\n\t?TARGET_BLOCK_TIME.\n-endif.\n"
  },
  {
    "path": "apps/arweave/src/ar_timer.erl",
    "content": "%%%===================================================================\n%%% This Source Code Form is subject to the terms of the GNU General\n%%% Public License, v. 2.0. If a copy of the GPLv2 was not distributed\n%%% with this file, You can obtain one at\n%%% https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html\n%%%\n%%% @doc A timer wrapper/manager for Arweave.\n%%%\n%%% This module has been created to deal with all timers started by\n%%% Arweave. Those timers must be managed, in particular during\n%%% shutdown, when no new connections or other actions are required.\n%%%\n%%% Not all timers need to use this module, only the ones needing\n%%% to use a timer to connect to remote peers.\n%%%\n%%% Only intervals are currently managed, other functions are simple\n%%% wrappers.\n%%%\n%%% This module is tightly coupled with `ar_shutdown_manager' and\n%%% uses `ar_shutdown_manager:apply/4' to know if the application is\n%%% in running mode or if the application is being stopped.\n%%%\n%%% @see ar_shutdown_manager\n%%% @see ar_shutdown_manager:apply/4\n%%%\n%%% == Examples ==\n%%%\n%%% When the application is running normally, this module behave\n%%% exactly like the functions exported by timers:\n%%%\n%%% ```\n%%% {ok, Ref1} =\n%%%   ar_timer:apply_after(\n%%%   \t10_000,\n%%%   \tio,\n%%%   \tformat,\n%%%   \t[\"hello\"],\n%%%   \t#{}\n%%% ).\n%%% '''\n%%%\n%%% If the application is stopped, for example when executing the\n%%% `./bin/stop' script or `erlang:halt/1' or `init:stop/1' functions,\n%%% then those functions will return `shutdown'. This is the default\n%%% behavior when no specific options is passed in the last argument.\n%%%\n%%% ```\n%%% shutdown =\n%%%   ar_timer:apply_after(\n%%%     10_000,\n%%%     io,\n%%%     format,\n%%%     [\"hello\"],\n%%%     #{}\n%%% ).\n%%% '''\n%%%\n%%% This behavior can be disabled by setting the key `skip_on_shutdown'\n%%% to false when needed. 
In this case, these functions will simply\n%%% act as wrappers around the `timer' module functions.\n%%%\n%%% ```\n%%% {ok, Ref1} =\n%%%   ar_timer:apply_after(\n%%%     10_000,\n%%%     io,\n%%%     format,\n%%%     [\"hello\"],\n%%%     #{ skip_on_shutdown => false }\n%%%   ).\n%%% '''\n%%%\n%%% @end\n%%%===================================================================\n-module(ar_timer).\n-export([\n\tapply_after/4,\n\tapply_after/5,\n\tapply_interval/4,\n\tapply_interval/5,\n\tcancel/1,\n\tinsert_timer/2,\n\tlist_timers/0,\n\tterminate_timers/0,\n\tsend_after/2,\n\tsend_after/3,\n\tsend_after/4,\n\tsend_interval/2,\n\tsend_interval/3,\n\tsend_interval/4\n]).\n-include_lib(\"kernel/include/logger.hrl\").\n-type ar_timer_opts() :: #{ skip_on_shutdown => boolean() }.\n\n%%--------------------------------------------------------------------\n%% @doc wrapper around timer:apply_after/4.\n%% @see apply_after/5\n%% @end\n%%--------------------------------------------------------------------\n-spec apply_after(Time, Module, Function, Arguments) -> Return when\n\tTime :: pos_integer(),\n\tModule :: atom(),\n\tFunction :: atom(),\n\tArguments :: [term()],\n\tReturn :: shutdown | {ok, reference()}.\n\napply_after(Time, Module, Function, Arguments) ->\n\tapply_after(Time, Module, Function, Arguments, #{}).\n\n%%--------------------------------------------------------------------\n%% @doc wrapper around timer:apply_after/4.\n%%\n%% @see timer:apply_after/4\n%% @end\n%%--------------------------------------------------------------------\n-spec apply_after(Time, Module, Function, Arguments, Opts) -> Return when\n\tTime :: pos_integer(),\n\tModule :: atom(),\n\tFunction :: atom(),\n\tArguments :: [term()],\n\tOpts :: ar_timer_opts(),\n\tReturn :: shutdown\n\t\t| {ok, reference()}.\n\napply_after(Time, Module, Function, Arguments, Opts) ->\n\tM = timer,\n\tF = apply_after,\n\tA = [Time, Module, Function, Arguments],\n\tcase ar_shutdown_manager:apply(M, F, A, Opts) of\n\t\t{ok, TimerRef} -> {ok, TimerRef};\n\t\tElsewise -> Elsewise\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc wrapper around timer:apply_interval/4.\n%% @see timer:apply_interval/4\n%% @end\n%%--------------------------------------------------------------------\n-spec apply_interval(Time, Module, Function, Arguments) -> Return when\n\tTime :: pos_integer(),\n\tModule :: atom(),\n\tFunction :: atom(),\n\tArguments :: [term()],\n\tReturn :: shutdown\n\t\t| {ok, reference()}.\n\napply_interval(Time, Module, Function, Arguments) ->\n\tapply_interval(Time, Module, Function, Arguments, #{}).\n\n%%--------------------------------------------------------------------\n%% @doc wrapper around timer:apply_interval/4.\n%% @end\n%%--------------------------------------------------------------------\n-spec apply_interval(Time, Module, Function, Arguments, Opts) -> Return when\n\tTime :: pos_integer(),\n\tModule :: atom(),\n\tFunction :: atom(),\n\tArguments :: [term()],\n\tOpts :: ar_timer_opts(),\n\tReturn :: shutdown\n\t\t| {ok, reference()}.\n\napply_interval(Time, Module, Function, Arguments, Opts) ->\n\tM = timer,\n\tF = apply_interval,\n\tA = [Time, Module, Function, Arguments],\n\tcase ar_shutdown_manager:apply(M, F, A, Opts) of\n\t\t{ok, TimerRef} ->\n\t\t\tinsert_timer(TimerRef, #{\n\t\t\t\tpid => self(),\n\t\t\t\tmodule => Module,\n\t\t\t\tfunction => Function,\n\t\t\t\targuments => Arguments,\n\t\t\t\ttime => Time\n\t\t\t}),\n\t\t\t{ok, TimerRef};\n\t\tElsewise 
->\n\t\t\tElsewise\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc wrapper around timer:send_after/2.\n%% @see send_after/3\n%% @end\n%%--------------------------------------------------------------------\n-spec send_after(Time, Message) -> Return when\n\tTime :: pos_integer(),\n\tMessage :: term(),\n\tReturn :: shutdown | {ok, reference()}.\n\nsend_after(Time, Message) ->\n\tsend_after(Time, self(), Message).\n\n%%--------------------------------------------------------------------\n%% @doc wrapper around timer:send_after/3.\n%% @see send_after/4\n%% @end\n%%--------------------------------------------------------------------\n-spec send_after(Time, Pid, Message) -> Return when\n\tTime :: pos_integer(),\n\tPid :: pid() | atom(),\n\tMessage :: term(),\n\tReturn :: shutdown | {ok, reference()}.\n\nsend_after(Time, Pid, Message) ->\n\tsend_after(Time, Pid, Message, #{}).\n\n%%--------------------------------------------------------------------\n%% @doc wrapper around timer:send_after/3.\n%% @see timer:send_after/3\n%% @end\n%%--------------------------------------------------------------------\n-spec send_after(Time, Pid, Message, Opts) -> Return when\n\tTime :: pos_integer(),\n\tPid :: pid() | atom(),\n\tMessage :: term(),\n\tOpts :: ar_timer_opts(),\n\tReturn :: shutdown | {ok, reference()}.\n\nsend_after(Time, Pid, Message, Opts) ->\n\tM = timer,\n\tF = send_after,\n\tA = [Time, Pid, Message],\n\tcase ar_shutdown_manager:apply(M, F, A, Opts) of\n\t\t{ok, TimerRef} -> {ok, TimerRef};\n\t\tElsewise -> Elsewise\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc wrapper around timer:send_interval/2.\n%% @see send_interval/3\n%% @end\n%%--------------------------------------------------------------------\n-spec send_interval(Time, Message) -> Return when\n\tTime :: pos_integer(),\n\tMessage :: term(),\n\tReturn :: shutdown | {ok, reference()}.\n\nsend_interval(Time, Message) ->\n\tsend_interval(Time, self(), Message).\n\n%%--------------------------------------------------------------------\n%% @doc wrapper around timer:send_interval/3.\n%% @see send_interval/4\n%% @end\n%%--------------------------------------------------------------------\n-spec send_interval(Time, Pid, Message) -> Return when\n\tTime :: pos_integer(),\n\tPid :: atom() | pid(),\n\tMessage :: term(),\n\tReturn :: shutdown | {ok, reference()}.\n\nsend_interval(Time, Pid, Message) ->\n\tsend_interval(Time, Pid, Message, #{}).\n\n%%--------------------------------------------------------------------\n%% @doc wrapper around timer:send_interval/3.\n%% @see timer:send_interval/3\n%% @end\n%%--------------------------------------------------------------------\n-spec send_interval(Time, Pid, Message, Opts) -> Return when\n\tTime :: pos_integer(),\n\tPid :: atom() | pid(),\n\tMessage :: term(),\n\tOpts :: ar_timer_opts(),\n\tReturn :: shutdown | {ok, reference()}.\n\nsend_interval(Time, Pid, Message, Opts) ->\n\tM = timer,\n\tF = send_interval,\n\tA = [Time, Pid, Message],\n\tcase ar_shutdown_manager:apply(M, F, A, Opts) of\n\t\t{ok, TimerRef} ->\n\t\t\tinsert_timer(TimerRef, #{\n\t\t\t\tpid => self(),\n\t\t\t\ttime => Time\n\t\t\t}),\n\t\t\t{ok, TimerRef};\n\t\tElsewise ->\n\t\t\tElsewise\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc wrapper around timer:cancel/1.\n%% @see timer:cancel/1\n%% @end\n%%--------------------------------------------------------------------\ncancel(TimerRef) ->\n\tcase timer:cancel(TimerRef) 
of\n\t\t{ok, _} = Reply ->\n\t\t\tets:delete(?MODULE, {timer, TimerRef}),\n\t\t\t?LOG_DEBUG([\n\t\t\t\t{module, ?MODULE},\n\t\t\t\t{reference, TimerRef},\n\t\t\t\t{action, cancel}\n\t\t\t]),\n\t\t\tReply;\n\t\tElsewise ->\n\t\t\tElsewise\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninsert_timer(TimerRef, Meta) ->\n\tCreatedAt = erlang:system_time(),\n\tNewMeta = Meta#{\n\t\tcreated_at => CreatedAt\n\t},\n\t?LOG_DEBUG([\n\t\t{module, ?MODULE},\n\t\t{pid, self()},\n\t\t{meta, NewMeta},\n\t\t{reference, TimerRef}\n\t]),\n\tets:insert(?MODULE, {{timer, TimerRef}, NewMeta}).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nlist_timers() ->\n\t[ Ref || [Ref] <- ets:match(?MODULE, {{timer, '$1'}, '_'}) ].\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc terminate all timers. This function also lists the timers\n%% from the `timer_tab' ETS table and cancels all of them.\n%% @end\n%%--------------------------------------------------------------------\nterminate_timers() ->\n\t% cancel all intervals first\n\t[ cancel(Ref) || Ref <- list_timers() ],\n\n\t% then cancel all other timers from timer_tab.\n\tcase ets:whereis(timer_tab) of\n\t\tundefined ->\n\t\t\tok;\n\t\t_ ->\n\t\t\t[ timer:cancel(Ref) || {Ref, _, _} <- ets:tab2list(timer_tab) ]\n\tend.\n"
  },
  {
    "path": "apps/arweave/src/ar_tx.erl",
    "content": "%%% @doc The module with utilities for transaction creation, signing and verification.\n-module(ar_tx).\n\n-export([new/0, new/1, new/2, new/3, new/4, sign/2, sign/3, sign_v1/2, sign_v1/3, verify/2,\n\t\tverify/3, verify_tx_id/2, generate_signature_data_segment/1,\n\t\ttags_to_list/1, get_tx_fee/1, get_tx_fee2/1, check_last_tx/2,\n\t\tgenerate_chunk_tree/1, generate_chunk_tree/2, generate_chunk_id/1,\n\t\tchunk_binary/2, chunks_to_size_tagged_chunks/1, sized_chunks_to_sized_chunk_ids/1,\n\t\tget_addresses/1, get_weave_size_increase/2, utility/1, get_owner_address/1]).\n\n-include(\"ar.hrl\").\n-include(\"ar_pricing.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n%% Prioritize format=1 transactions with data size bigger than this\n%% value (in bytes) lower than every other transaction. The motivation\n%% is to encourage people uploading data to use the new v2 transaction\n%% format. Large v1 transactions may significantly slow down the rate\n%% of acceptance of transactions into the weave.\n-define(DEPRIORITIZE_V1_TX_SIZE_THRESHOLD, 100).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc A helper for preparing transactions for signing. Used in tests.\n%% Should be moved to a testing module.\nnew() ->\n\t#tx{ id = crypto:strong_rand_bytes(32) }.\nnew(Data) ->\n\t#tx{ id = crypto:strong_rand_bytes(32), data = Data, data_size = byte_size(Data) }.\nnew(Data, Reward) ->\n\t#tx{\n\t\tid = crypto:strong_rand_bytes(32),\n\t\tdata = Data,\n\t\treward = Reward,\n\t\tdata_size = byte_size(Data)\n\t}.\nnew(Data, Reward, Last) ->\n\t#tx{\n\t\tid = crypto:strong_rand_bytes(32),\n\t\tlast_tx = Last,\n\t\tdata = Data,\n\t\tdata_size = byte_size(Data),\n\t\treward = Reward\n\t}.\nnew({SigType, PubKey}, Reward, Qty, Last) ->\n\tnew(ar_wallet:to_address(PubKey, SigType), Reward, Qty, Last, SigType);\nnew(Dest, Reward, Qty, Last) ->\n\t#tx{\n\t\tid = crypto:strong_rand_bytes(32),\n\t\tlast_tx = Last,\n\t\tquantity = Qty,\n\t\ttarget = Dest,\n\t\tdata = <<>>,\n\t\tdata_size = 0,\n\t\treward = Reward\n\t}.\nnew(Dest, Reward, Qty, Last, SigType) ->\n\t#tx{\n\t\tid = crypto:strong_rand_bytes(32),\n\t\tlast_tx = Last,\n\t\tquantity = Qty,\n\t\ttarget = Dest,\n\t\tdata = <<>>,\n\t\tdata_size = 0,\n\t\treward = Reward,\n\t\tsignature_type = SigType\n\t}.\n\n%% @doc Cryptographically sign (claim ownership of) a v2 transaction.\n%% Used in tests and by the handler of the POST /unsigned_tx endpoint, which is\n%% disabled by default.\nsign(TX, {PrivKey, PubKey = {KeyType, Owner}}) ->\n\tTX2 = TX#tx{ owner = Owner, signature_type = KeyType,\n\t\t\towner_address = ar_wallet:to_address(Owner, KeyType) },\n\tSignatureDataSegment = generate_signature_data_segment(TX2),\n\tsign(TX2, PrivKey, PubKey, SignatureDataSegment).\n\nsign(TX, PrivKey, PubKey = {KeyType, Owner}) ->\n\tTX2 = TX#tx{ owner = Owner, signature_type = KeyType,\n\t\t\towner_address = ar_wallet:to_address(Owner, KeyType) },\n\tSignatureDataSegment = generate_signature_data_segment(TX2),\n\tsign(TX2, PrivKey, PubKey, SignatureDataSegment).\n\n%% @doc Cryptographically sign (claim ownership of) a v1 transaction.\n%% Used in tests and by the handler of the POST /unsigned_tx endpoint, which is\n%% disabled by default.\nsign_v1(TX, {PrivKey, PubKey = {_, Owner}}) ->\n\tsign(TX, PrivKey, PubKey, signature_data_segment_v1(TX#tx{ owner = Owner })).\n\nsign_v1(TX, PrivKey, PubKey = {_, Owner}) ->\n\tsign(TX, PrivKey, 
PubKey, signature_data_segment_v1(TX#tx{ owner = Owner })).\n\n%% @doc Verify whether a transaction is valid.\n%% Signature verification can be optionally skipped, useful for\n%% repeatedly checking mempool transactions' validity.\nverify(TX, Args) ->\n\tverify(TX, Args, verify_signature).\n\n-ifdef(AR_TEST).\nverify(#tx{ signature = <<>> }, _Args, _VerifySignature) ->\n\ttrue;\nverify(TX, Args, VerifySignature) ->\n\tdo_verify(TX, Args, VerifySignature).\n-else.\nverify(TX, Args, VerifySignature) ->\n\tdo_verify(TX, Args, VerifySignature).\n-endif.\n\n%% @doc Verify the given transaction actually has the given identifier.\n%% Compute the signature data segment, verify the signature, and check\n%% whether its SHA2-256 hash equals the expected identifier.\nverify_tx_id(ExpectedID, #tx{ format = 1, id = ID } = TX) ->\n\tExpectedID == ID andalso verify_signature_v1(TX, verify_signature) andalso verify_hash(TX);\nverify_tx_id(ExpectedID, #tx{ format = 2, id = ID } = TX) ->\n\tExpectedID == ID andalso verify_signature_v2(TX, verify_signature) andalso verify_hash(TX).\n\n%% @doc Generate the data segment to be signed for a given TX.\ngenerate_signature_data_segment(#tx{ format = 2 } = TX) ->\n\tcase TX#tx.signature_type of\n\t\t{?ECDSA_SIGN_ALG, secp256k1} ->\n\t\t\tsignature_data_segment_v2_no_public_key(TX);\n\t\t{?RSA_SIGN_ALG, 65537} ->\n\t\t\tsignature_data_segment_v2(TX)\n\tend;\ngenerate_signature_data_segment(#tx{ format = 1 } = TX) ->\n\tsignature_data_segment_v1(TX).\n\ntags_to_list(Tags) ->\n\t[[Name, Value] || {Name, Value} <- Tags].\n\n-ifdef(AR_TEST).\ncheck_last_tx(_WalletList, TX) when TX#tx.owner == <<>> ->\n\ttrue;\ncheck_last_tx(WalletList, _TX) when map_size(WalletList) == 0 ->\n\ttrue;\ncheck_last_tx(WalletList, TX) ->\n\tAddr = get_owner_address(TX),\n\tcase maps:get(Addr, WalletList, not_found) of\n\t\tnot_found ->\n\t\t\tfalse;\n\t\t{_Balance, LastTX} ->\n\t\t\tLastTX == TX#tx.last_tx;\n\t\t{_Balance, LastTX, _Denomination, _MiningPermission} ->\n\t\t\tLastTX == TX#tx.last_tx\n\tend.\n-else.\n%% @doc Check if the given transaction anchors one of the wallets - its last_tx\n%% matches the last transaction made from the wallet.\ncheck_last_tx(WalletList, _TX) when map_size(WalletList) == 0 ->\n\ttrue;\ncheck_last_tx(WalletList, TX) ->\n\tAddr = get_owner_address(TX),\n\tcase maps:get(Addr, WalletList, not_found) of\n\t\tnot_found ->\n\t\t\tfalse;\n\t\t{_Balance, LastTX} ->\n\t\t\tLastTX == TX#tx.last_tx;\n\t\t{_Balance, LastTX, _Denomination, _MiningPermission} ->\n\t\t\tLastTX == TX#tx.last_tx\n\tend.\n-endif.\n\n%% @doc Split the tx data into chunks and compute the Merkle tree from them.\n%% Used to compute the Merkle roots of v1 transactions' data and to compute\n%% Merkle proofs for v2 transactions when their data is uploaded without proofs.\ngenerate_chunk_tree(TX) ->\n\tgenerate_chunk_tree(TX,\n\t\tsized_chunks_to_sized_chunk_ids(\n\t\t\tchunks_to_size_tagged_chunks(\n\t\t\t\tchunk_binary(?DATA_CHUNK_SIZE, TX#tx.data)\n\t\t\t)\n\t\t)\n\t).\n\ngenerate_chunk_tree(TX, ChunkIDSizes) ->\n\t{Root, Tree} = ar_merkle:generate_tree(ChunkIDSizes),\n\tTX#tx{ data_tree = Tree, data_root = Root }.\n\n%% @doc Generate a chunk ID used to construct the Merkle tree from the tx data chunks.\ngenerate_chunk_id(Chunk) ->\n\tcrypto:hash(sha256, Chunk).\n\n%% @doc Split the binary into chunks. 
Used for computing the Merkle roots of\n%% v1 transactions' data and computing Merkle proofs for v2 transactions when\n%% their data is uploaded without proofs.\nchunk_binary(ChunkSize, Bin) when byte_size(Bin) < ChunkSize ->\n\t[Bin];\nchunk_binary(ChunkSize, Bin) ->\n\t<<ChunkBin:ChunkSize/binary, Rest/binary>> = Bin,\n\t[ChunkBin | chunk_binary(ChunkSize, Rest)].\n\n%% @doc Assign an end byte offset to every chunk in the list.\nchunks_to_size_tagged_chunks(Chunks) ->\n\tlists:reverse(\n\t\telement(\n\t\t\t2,\n\t\t\tlists:foldl(\n\t\t\t\tfun(Chunk, {Pos, List}) ->\n\t\t\t\t\tEnd = Pos + byte_size(Chunk),\n\t\t\t\t\t{End, [{Chunk, End} | List]}\n\t\t\t\tend,\n\t\t\t\t{0, []},\n\t\t\t\tChunks\n\t\t\t)\n\t\t)\n\t).\n\n%% @doc Convert a list of chunk, byte offset tuples to\n%% the list of chunk ID, byte offset tuples.\nsized_chunks_to_sized_chunk_ids(SizedChunks) ->\n\t[{ar_tx:generate_chunk_id(Chunk), Size} || {Chunk, Size} <- SizedChunks].\n\n%% @doc Get a list of unique source and destination addresses from the given list of txs.\nget_addresses(TXs) ->\n\tget_addresses(TXs, sets:new()).\n\n%% @doc Return the number of bytes the weave is increased by when the given transaction\n%% is included.\nget_weave_size_increase(#tx{ data_size = DataSize }, Height) ->\n\tget_weave_size_increase(DataSize, Height);\n\nget_weave_size_increase(0, _Height) ->\n\t0;\nget_weave_size_increase(DataSize, Height) ->\n\tcase Height >= ar_fork:height_2_5() of\n\t\ttrue ->\n\t\t\t%% The smallest multiple of ?DATA_CHUNK_SIZE larger than or equal to data_size.\n\t\t\tar_poa:get_padded_offset(DataSize, 0);\n\t\tfalse ->\n\t\t\tDataSize\n\tend.\n\n%% @doc Return the transaction's utility for the miner. Transactions with higher utility\n%% are more attractive and therefore preferred when assembling blocks.\nutility(TX = #tx{ data_size = DataSize }) ->\n\tutility(TX, ?TX_SIZE_BASE + DataSize).\n\nutility(#tx{ format = 1, reward = Reward, data_size = DataSize,\n\t\tdenomination = Denomination }, _Size)\n\t\twhen DataSize > ?DEPRIORITIZE_V1_TX_SIZE_THRESHOLD ->\n\t%% For convenience, value higher denominations more.\n\t%% If we normalize by dividing by denomination, higher-denomination amounts\n\t%% may stop being distinguishable.\n\t%% To use the current block denomination, we would need to update\n\t%% comparators, which is somewhat cumbersome.\n\t%% Therefore, we simply choose to prefer higher denominations.\n\t{1, Denomination, Reward};\nutility(#tx{ reward = Reward, denomination = Denomination }, _Size) ->\n\t{2, Denomination, Reward}.\n\n%% @doc Return the transaction's owner address. 
Take the cached value if available.\nget_owner_address(#tx{ owner = Owner, signature_type = KeyType, owner_address = not_set }) ->\n\tar_wallet:to_address(Owner, KeyType);\nget_owner_address(#tx{ owner_address = OwnerAddress }) ->\n\tOwnerAddress.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\n%% @doc Generate the data segment to be signed for a given v2 TX.\nsignature_data_segment_v2(TX) ->\n\tList = [\n\t\t<< (integer_to_binary(TX#tx.format))/binary >>,\n\t\t<< (TX#tx.owner)/binary >>,\n\t\t<< (TX#tx.target)/binary >>,\n\t\t<< (list_to_binary(integer_to_list(TX#tx.quantity)))/binary >>,\n\t\t<< (list_to_binary(integer_to_list(TX#tx.reward)))/binary >>,\n\t\t<< (TX#tx.last_tx)/binary >>,\n\t\ttags_to_list(TX#tx.tags),\n\t\t<< (integer_to_binary(TX#tx.data_size))/binary >>,\n\t\t<< (TX#tx.data_root)/binary >>\n\t],\n\tList2 =\n\t\tcase TX#tx.denomination > 0 of\n\t\t\ttrue ->\n\t\t\t\t[<< (integer_to_binary(TX#tx.denomination))/binary >> | List];\n\t\t\tfalse ->\n\t\t\t\tList\n\t\tend,\n\tar_deep_hash:hash(List2).\n\nsignature_data_segment_v2_no_public_key(TX) ->\n\tList = [\n\t\t<< (integer_to_binary(TX#tx.format))/binary >>,\n\t\t<< (TX#tx.target)/binary >>,\n\t\t<< (list_to_binary(integer_to_list(TX#tx.quantity)))/binary >>,\n\t\t<< (list_to_binary(integer_to_list(TX#tx.reward)))/binary >>,\n\t\t<< (TX#tx.last_tx)/binary >>,\n\t\ttags_to_list(TX#tx.tags),\n\t\t<< (integer_to_binary(TX#tx.data_size))/binary >>,\n\t\t<< (TX#tx.data_root)/binary >>\n\t],\n\tList2 =\n\t\tcase TX#tx.denomination > 0 of\n\t\t\ttrue ->\n\t\t\t\t[<< (integer_to_binary(TX#tx.denomination))/binary >> | List];\n\t\t\tfalse ->\n\t\t\t\tList\n\t\tend,\n\tar_deep_hash:hash(List2).\n\n%% @doc Generate the data segment to be signed for a given v1 TX.\nsignature_data_segment_v1(TX) ->\n\tcase TX#tx.denomination > 0 of\n\t\ttrue ->\n\t\t\tar_deep_hash:hash([\n\t\t\t\t<< (integer_to_binary(TX#tx.denomination))/binary >>,\n\t\t\t\t<< (TX#tx.owner)/binary >>,\n\t\t\t\t<< (TX#tx.target)/binary >>,\n\t\t\t\t<< (TX#tx.data)/binary >>,\n\t\t\t\t<< (list_to_binary(integer_to_list(TX#tx.quantity)))/binary >>,\n\t\t\t\t<< (list_to_binary(integer_to_list(TX#tx.reward)))/binary >>,\n\t\t\t\t<< (TX#tx.last_tx)/binary >>,\n\t\t\t\ttags_to_list(TX#tx.tags)\n\t\t\t]);\n\t\tfalse ->\n\t\t\t<<\n\t\t\t\t(TX#tx.owner)/binary,\n\t\t\t\t(TX#tx.target)/binary,\n\t\t\t\t(TX#tx.data)/binary,\n\t\t\t\t(list_to_binary(integer_to_list(TX#tx.quantity)))/binary,\n\t\t\t\t(list_to_binary(integer_to_list(TX#tx.reward)))/binary,\n\t\t\t\t(TX#tx.last_tx)/binary,\n\t\t\t\t(tags_to_binary(TX#tx.tags))/binary\n\t\t\t>>\n\tend.\n\nsign(TX, PrivKey, {KeyType, Owner}, SignatureDataSegment) ->\n\tNewTX = TX#tx{ owner = Owner, signature_type = KeyType,\n\t\t\towner_address = ar_wallet:to_address(Owner, KeyType) },\n\tSig = ar_wallet:sign(PrivKey, SignatureDataSegment),\n\tID = crypto:hash(?HASH_ALG, <<Sig/binary>>),\n\tNewTX#tx{ id = ID, signature = Sig }.\n\nverify_signature_type(#tx{ format = 1 } = TX, _Height) ->\n\tcase TX#tx.signature_type of\n\t\t{?RSA_SIGN_ALG, 65537} ->\n\t\t\ttrue;\n\t\t_ ->\n\t\t\tfalse\n\tend;\nverify_signature_type(#tx{ format = 2 } = TX, Height) ->\n\tcase TX#tx.signature_type of\n\t\t{?RSA_SIGN_ALG, 65537} ->\n\t\t\ttrue;\n\t\t{?ECDSA_SIGN_ALG, secp256k1} ->\n\t\t\tHeight >= ar_fork:height_2_9();\n\t\t_ ->\n\t\t\tfalse\n\tend.\n\ndo_verify(#tx{ format = 1 } = TX, Args, VerifySignature) ->\n\t{_Rate, 
_PricePerGiBMinute, _KryderPlusRateMultiplier, _Denomination,\n\t\t\t_RedenominationHeight, Height, _Accounts, _Timestamp} = Args,\n\tcase verify_signature_type(TX, Height) of\n\t\ttrue ->\n\t\t\tdo_verify_v1(TX, Args, VerifySignature);\n\t\tfalse ->\n\t\t\tcollect_validation_results(TX#tx.id,\n\t\t\t\t\t[{\"tx_signature_type_not_supported\", false}])\n\tend;\ndo_verify(#tx{ format = 2 } = TX, Args, VerifySignature) ->\n\t{_Rate, _PricePerGiBMinute, _KryderPlusRateMultiplier, _Denomination,\n\t\t\t_RedenominationHeight, Height, _Accounts, _Timestamp} = Args,\n\tcase Height < ar_fork:height_2_0() of\n\t\ttrue ->\n\t\t\tcollect_validation_results(TX#tx.id, [{\"tx_format_not_supported\", false}]);\n\t\tfalse ->\n\t\t\tcase verify_signature_type(TX, Height) of\n\t\t\t\ttrue ->\n\t\t\t\t\tdo_verify_v2(TX, Args, VerifySignature);\n\t\t\t\tfalse ->\n\t\t\t\t\tcollect_validation_results(TX#tx.id,\n\t\t\t\t\t\t\t[{\"tx_signature_type_not_supported\", false}])\n\t\t\tend\n\tend;\ndo_verify(TX, _Args, _VerifySignature) ->\n\tcollect_validation_results(TX#tx.id, [{\"tx_format_not_supported\", false}]).\n\nget_addresses([], Addresses) ->\n\tsets:to_list(Addresses);\nget_addresses([TX | TXs], Addresses) ->\n\tSource = get_owner_address(TX),\n\tWithSource = sets:add_element(Source, Addresses),\n\tWithDest = sets:add_element(TX#tx.target, WithSource),\n\tget_addresses(TXs, WithDest).\n\ndo_verify_v1(TX, Args, VerifySignature) ->\n\t{_Rate, PricePerGiBMinute, KryderPlusRateMultiplier, Denomination, RedenominationHeight,\n\t\t\tHeight, Accounts, _Timestamp} = Args,\n\tFork_1_8 = ar_fork:height_1_8(),\n\tLastTXCheck = case Height of\n\t\tH when H >= Fork_1_8 ->\n\t\t\ttrue;\n\t\t_ ->\n\t\t\tcheck_last_tx(Accounts, TX)\n\tend,\n\tcase verify_denomination(TX, Denomination, Height, RedenominationHeight) of\n\t\tfalse ->\n\t\t\tcollect_validation_results(TX#tx.id, [{\"invalid_denomination\", false}]);\n\t\ttrue ->\n\t\t\tFrom = get_owner_address(TX),\n\t\t\tFeeArgs = {TX, PricePerGiBMinute, KryderPlusRateMultiplier, Denomination,\n\t\t\t\t\tHeight, Accounts, TX#tx.target},\n\t\t\tChecks = [\n\t\t\t\t{\"quantity_negative\", TX#tx.quantity >= 0},\n\t\t\t\t{\"same_owner_as_target\", (From =/= TX#tx.target)},\n\t\t\t\t{\"tx_too_cheap\", is_tx_fee_sufficient(FeeArgs)},\n\t\t\t\t{\"tx_fields_too_large\", tx_field_size_limit_v1(TX, Height, Denomination)},\n\t\t\t\t{\"last_tx_not_valid\", LastTXCheck},\n\t\t\t\t{\"tx_id_not_valid\", verify_hash(TX)},\n\t\t\t\t{\"overspend\", validate_overspend(TX,\n\t\t\t\t\t\tar_node_utils:apply_tx(Accounts, Denomination, TX))},\n\t\t\t\t{\"tx_signature_not_valid\", verify_signature_v1(TX, VerifySignature, Height)},\n\t\t\t\t{\"tx_malleable\", verify_malleability({TX, PricePerGiBMinute,\n\t\t\t\t\t\tKryderPlusRateMultiplier, Denomination, Height, Accounts})},\n\t\t\t\t{\"invalid_target_length\", verify_target_length(TX, Height)}\n\t\t\t],\n\t\t\tcollect_validation_results(TX#tx.id, Checks)\n\tend.\n\ncollect_validation_results(TXID, Checks) ->\n\tKeepFailed = fun\n\t\t({_, true}) ->\n\t\t\tfalse;\n\t\t({ErrorCode, false}) ->\n\t\t\t{true, ErrorCode}\n\tend,\n\tcase lists:filtermap(KeepFailed, Checks) of\n\t\t[] ->\n\t\t\ttrue;\n\t\tErrorCodes ->\n\t\t\tar_tx_db:put_error_codes(TXID, ErrorCodes),\n\t\t\tfalse\n\tend.\n\ndo_verify_v2(TX, Args, VerifySignature) ->\n\t{_Rate, PricePerGiBMinute, KryderPlusRateMultiplier, Denomination, RedenominationHeight,\n\t\t\tHeight, Accounts, _Timestamp} = Args,\n\tcase verify_denomination(TX, Denomination, Height, RedenominationHeight) of\n\t\tfalse 
->\n\t\t\tcollect_validation_results(TX#tx.id, [{\"invalid_denomination\", false}]);\n\t\ttrue ->\n\t\t\tFrom = get_owner_address(TX),\n\t\t\tFeeArgs = {TX, PricePerGiBMinute, KryderPlusRateMultiplier, Denomination,\n\t\t\t\t\tHeight, Accounts, TX#tx.target},\n\t\t\tChecks = [\n\t\t\t\t{\"quantity_negative\", TX#tx.quantity >= 0},\n\t\t\t\t{\"same_owner_as_target\", (From =/= TX#tx.target)},\n\t\t\t\t{\"tx_too_cheap\", is_tx_fee_sufficient(FeeArgs)},\n\t\t\t\t{\"tx_fields_too_large\", tx_field_size_limit_v2(TX, Height, Denomination)},\n\t\t\t\t{\"tx_id_not_valid\", verify_hash(TX)},\n\t\t\t\t{\"overspend\", validate_overspend(TX,\n\t\t\t\t\t\tar_node_utils:apply_tx(Accounts, Denomination, TX))},\n\t\t\t\t{\"tx_signature_not_valid\", verify_signature_v2(TX, VerifySignature, Height)},\n\t\t\t\t{\"tx_data_size_negative\", TX#tx.data_size >= 0},\n\t\t\t\t{\"tx_data_size_data_root_mismatch\",\n\t\t\t\t\t\t(TX#tx.data_size == 0) == (TX#tx.data_root == <<>>)},\n\t\t\t\t{\"invalid_target_length\", verify_target_length(TX, Height)}\n\t\t\t],\n\t\t\tcollect_validation_results(TX#tx.id, Checks)\n\tend.\n\n%% @doc Check whether each field in a transaction is within the given byte size limits.\ntx_field_size_limit_v1(TX, Height, Denomination) ->\n\tLastTXLimit =\n\t\tcase Height >= ar_fork:height_1_8() of\n\t\t\ttrue ->\n\t\t\t\t48;\n\t\t\tfalse ->\n\t\t\t\t32\n\t\tend,\n\tMaxDigits =\n\t\tcase Height + 1 >= ar_fork:height_2_6() of\n\t\t\ttrue ->\n\t\t\t\t30 + (Denomination - 1) * 3;\n\t\t\tfalse ->\n\t\t\t\t21\n\t\tend,\n\t(byte_size(TX#tx.id) =< 32) andalso\n\t(byte_size(TX#tx.last_tx) =< LastTXLimit) andalso\n\t(byte_size(TX#tx.owner) =< 512) andalso\n\tvalidate_tags_size(TX, Height) andalso\n\t(byte_size(integer_to_binary(TX#tx.quantity)) =< MaxDigits) andalso\n\t(byte_size(TX#tx.data) =< (?TX_DATA_SIZE_LIMIT)) andalso\n\t(byte_size(TX#tx.signature) =< 512) andalso\n\t(byte_size(integer_to_binary(TX#tx.reward)) =< MaxDigits).\n\n%% @doc Verify that the transactions ID is a hash of its signature.\nverify_hash(#tx{ signature = Sig, id = ID }) ->\n\tID == crypto:hash(?HASH_ALG, << Sig/binary >>).\n\nverify_signature_v1(_TX, do_not_verify_signature) ->\n\ttrue;\nverify_signature_v1(TX, verify_signature) ->\n\tSignatureDataSegment = generate_signature_data_segment(TX),\n\tar_wallet:verify({?DEFAULT_KEY_TYPE, TX#tx.owner}, SignatureDataSegment, TX#tx.signature).\n\nverify_signature_v1(_TX, do_not_verify_signature, _Height) ->\n\ttrue;\nverify_signature_v1(TX, verify_signature, Height) ->\n\tSignatureDataSegment = generate_signature_data_segment(TX),\n\tcase Height >= ar_fork:height_2_4() of\n\t\ttrue ->\n\t\t\tar_wallet:verify({?DEFAULT_KEY_TYPE, TX#tx.owner}, SignatureDataSegment,\n\t\t\t\t\tTX#tx.signature);\n\t\tfalse ->\n\t\t\tar_wallet:verify_pre_fork_2_4({?DEFAULT_KEY_TYPE, TX#tx.owner}, SignatureDataSegment,\n\t\t\t\t\tTX#tx.signature)\n\tend.\n\nverify_malleability(Args) ->\n\t{TX, _PricePerGiBMinute, _KryderMultiplier, _Denomination, Height, _Accounts} = Args,\n\tcase Height + 1 >= ar_fork:height_2_4() of\n\t\tfalse ->\n\t\t\ttrue;\n\t\ttrue ->\n\t\t\tcase TX#tx.denomination > 0 of\n\t\t\t\ttrue ->\n\t\t\t\t\t%% The signtaure preimage is constructed differently for v1 transactions\n\t\t\t\t\t%% with the explicitly set denomination.\n\t\t\t\t\ttrue;\n\t\t\t\tfalse ->\n\t\t\t\t\tverify_malleability2(Args)\n\t\t\tend\n\tend.\n\nverify_malleability2(Args) ->\n\t{TX, PricePerGiBMinute, KryderPlusRateMultiplier, Denomination, Height, Accounts} = Args,\n\tTarget = TX#tx.target,\n\tcase 
{byte_size(Target), TX#tx.quantity > 0} of\n\t\t{TargetSize, true} when TargetSize /= 32 ->\n\t\t\tfalse;\n\t\t{TargetSize, false} when TargetSize > 0 ->\n\t\t\tfalse;\n\t\t_ ->\n\t\t\tcase ends_with_digit(TX#tx.data) of\n\t\t\t\ttrue ->\n\t\t\t\t\tfalse;\n\t\t\t\tfalse ->\n\t\t\t\t\tFee = TX#tx.reward,\n\t\t\t\t\tcase Fee < 10 of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\ttrue;\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tTruncatedReward = ar_pricing:redenominate(list_to_integer(\n\t\t\t\t\t\t\t\t\ttl(integer_to_list(TX#tx.reward))),\n\t\t\t\t\t\t\t\t\tTX#tx.denomination,\n\t\t\t\t\t\t\t\t\tDenomination),\n\t\t\t\t\t\t\tnot is_tx_fee_sufficient({TX#tx{ reward = TruncatedReward },\n\t\t\t\t\t\t\t\t\tPricePerGiBMinute, KryderPlusRateMultiplier,\n\t\t\t\t\t\t\t\t\tDenomination, Height, Accounts, Target})\n\t\t\t\t\tend\n\t\t\tend\n\tend.\n\nends_with_digit(<<>>) ->\n\tfalse;\nends_with_digit(Data) ->\n\tLastByte = binary:last(Data),\n\tLastByte >= 48 andalso LastByte =< 57.\n\nverify_signature_v2(_TX, do_not_verify_signature) ->\n\ttrue;\nverify_signature_v2(TX = #tx{ signature_type = SigType }, verify_signature) ->\n\tSignatureDataSegment = generate_signature_data_segment(TX),\n\tar_wallet:verify({SigType, TX#tx.owner}, SignatureDataSegment, TX#tx.signature).\n\nverify_signature_v2(_TX, do_not_verify_signature, _Height) ->\n\ttrue;\nverify_signature_v2(TX, verify_signature, Height) ->\n\tSignatureDataSegment = generate_signature_data_segment(TX),\n\tWallet =\n\t\tcase TX#tx.signature_type of\n\t\t\t?RSA_KEY_TYPE ->\n\t\t\t\t{{?RSA_SIGN_ALG, 65537}, TX#tx.owner};\n\t\t\t?ECDSA_KEY_TYPE ->\n\t\t\t\t{?ECDSA_KEY_TYPE, TX#tx.owner}\n\t\tend,\n\tcase Height >= ar_fork:height_2_4() of\n\t\ttrue ->\n\t\t\tar_wallet:verify(Wallet, SignatureDataSegment, TX#tx.signature);\n\t\tfalse ->\n\t\t\tar_wallet:verify_pre_fork_2_4({{?RSA_SIGN_ALG, 65537}, TX#tx.owner},\n\t\t\t\t\tSignatureDataSegment, TX#tx.signature)\n\tend.\n\nvalidate_overspend(TX, Accounts) ->\n\tFrom = get_owner_address(TX),\n\tAddresses = case TX#tx.target of\n\t\t<<>> ->\n\t\t\t[From];\n\t\tTo ->\n\t\t\t[From, To]\n\tend,\n\tlists:all(\n\t\tfun(Addr) ->\n\t\t\tcase maps:get(Addr, Accounts, not_found) of\n\t\t\t\t{0, LastTX} when byte_size(LastTX) == 0 ->\n\t\t\t\t\tfalse;\n\t\t\t\t{0, LastTX, _Denomination, _MiningPermission} when byte_size(LastTX) == 0 ->\n\t\t\t\t\tfalse;\n\t\t\t\t{Quantity, _} when Quantity < 0 ->\n\t\t\t\t\tfalse;\n\t\t\t\t{Quantity, _, _Denomination, _MiningPermission} when Quantity < 0 ->\n\t\t\t\t\tfalse;\n\t\t\t\tnot_found ->\n\t\t\t\t\tfalse;\n\t\t\t\t_ ->\n\t\t\t\t\ttrue\n\t\t\tend\n\t\tend,\n\t\tAddresses\n\t).\n\nis_tx_fee_sufficient(Args) ->\n\t{TX, PricePerGiBMinute, KryderPlusRateMultiplier, Denomination, Height, Accounts,\n\t\t\tAddr} = Args,\n\tDataSize = get_weave_size_increase(TX, Height + 1),\n\tMinimumRequiredFee = ar_tx:get_tx_fee({DataSize, PricePerGiBMinute,\n\t\t\tKryderPlusRateMultiplier, Addr, Accounts, Height + 1}),\n\tFee = TX#tx.reward,\n\tar_pricing:redenominate(Fee, TX#tx.denomination, Denomination) >= MinimumRequiredFee.\n\nget_tx_fee(Args) ->\n\t{DataSize, PricePerGiBMinute, KryderPlusRateMultiplier, Addr, Accounts, Height} = Args,\n\tFork_2_6_8 = ar_fork:height_2_6_8(),\n\tArgs2 = {DataSize, PricePerGiBMinute, KryderPlusRateMultiplier, Addr, Accounts, Height},\n\ttrue = Height >= Fork_2_6_8,\n\tcase Height < ar_pricing_transition:static_pricing_height() of\n\t\ttrue ->\n\t\t\t%% Pre-2.6.8 transition period. 
Use static fee-based pricing + the new account fee.\n\t\t\tget_static_2_6_8_tx_fee(DataSize, Addr, Accounts);\n\t\tfalse ->\n\t\t\tget_tx_fee2(Args2)\n\tend.\n\nget_static_2_6_8_tx_fee(DataSize, Addr, Accounts) ->\n\tUploadFee = (?STATIC_2_6_8_FEE_WINSTON div ?GiB) * (DataSize + ?TX_SIZE_BASE),\n\tcase Addr == <<>> orelse maps:is_key(Addr, Accounts) of\n\t\ttrue ->\n\t\t\tUploadFee;\n\t\tfalse ->\n\t\t\tNewAccountFee = (?STATIC_2_6_8_FEE_WINSTON div ?GiB) *\n\t\t\t\t\t?NEW_ACCOUNT_FEE_DATA_SIZE_EQUIVALENT,\n\t\t\tUploadFee + NewAccountFee\n\tend.\n\nget_tx_fee2(Args) ->\n\t{DataSize, PricePerGiBMinute, KryderPlusRateMultiplier, Addr, Accounts, Height} = Args,\n\tArgs2 = {DataSize + ?TX_SIZE_BASE, PricePerGiBMinute, KryderPlusRateMultiplier, Height},\n\tUploadFee = ar_pricing:get_tx_fee(Args2),\n\tcase Addr == <<>> orelse maps:is_key(Addr, Accounts) of\n\t\ttrue ->\n\t\t\tUploadFee;\n\t\tfalse ->\n\t\t\tNewAccountFee = get_new_account_fee(PricePerGiBMinute, KryderPlusRateMultiplier,\n\t\t\t\t\tHeight),\n\t\t\tUploadFee + NewAccountFee\n\tend.\n\nget_new_account_fee(BytePerMinutePrice, KryderPlusRateMultiplier, Height) ->\n\tArgs = {?NEW_ACCOUNT_FEE_DATA_SIZE_EQUIVALENT, BytePerMinutePrice,\n\t\t\tKryderPlusRateMultiplier, Height},\n\tar_pricing:get_tx_fee(Args).\n\nverify_target_length(TX, Height) ->\n\tcase Height >= ar_fork:height_2_4() of\n\t\ttrue ->\n\t\t\t(TX#tx.quantity == 0 andalso byte_size(TX#tx.target) =< 32)\n\t\t\t\torelse byte_size(TX#tx.target) == 32;\n\t\tfalse ->\n\t\t\tbyte_size(TX#tx.target) =< 32\n\tend.\n\nverify_denomination(TX, Denomination, Height, RedenominationHeight) ->\n\tcase Height + 1 >= ar_fork:height_2_6() of\n\t\tfalse ->\n\t\t\tTX#tx.denomination == 0;\n\t\ttrue ->\n\t\t\tcase TX#tx.denomination of\n\t\t\t\t0 ->\n\t\t\t\t\tHeight == 0 orelse Height > RedenominationHeight;\n\t\t\t\t_ ->\n\t\t\t\t\tTX#tx.denomination > 0 andalso TX#tx.denomination =< Denomination\n\t\t\tend\n\tend.\n\ntx_field_size_limit_v2(TX, Height, Denomination) ->\n\tMaxDigits =\n\t\tcase Height + 1 >= ar_fork:height_2_6() of\n\t\t\ttrue ->\n\t\t\t\t30 + (Denomination - 1) * 3;\n\t\t\tfalse ->\n\t\t\t\t21\n\t\tend,\n\t(byte_size(TX#tx.id) =< 32) andalso\n\t\t\t(byte_size(TX#tx.last_tx) =< 48) andalso\n\t\t\t(byte_size(TX#tx.owner) =< 512) andalso\n\t\t\tvalidate_tags_size(TX, Height) andalso\n\t\t\t(byte_size(integer_to_binary(TX#tx.quantity)) =< MaxDigits) andalso\n\t\t\t(byte_size(integer_to_binary(TX#tx.data_size)) =< 21) andalso\n\t\t\t(byte_size(TX#tx.signature) =< 512) andalso\n\t\t\t(byte_size(integer_to_binary(TX#tx.reward)) =< MaxDigits) andalso\n\t\t\t(byte_size(TX#tx.data_root) =< 32).\n\nvalidate_tags_size(TX, Height) ->\n\tcase Height >= ar_fork:height_2_5() of\n\t\ttrue ->\n\t\t\tTags = TX#tx.tags,\n\t\t\tvalidate_tags_length(Tags, 0) andalso byte_size(tags_to_binary(Tags)) =< 2048;\n\t\tfalse ->\n\t\t\tbyte_size(tags_to_binary(TX#tx.tags)) =< 2048\n\tend.\n\nvalidate_tags_length(_, N) when N > 2048 ->\n\tfalse;\nvalidate_tags_length([_ | Tags], N) ->\n\tvalidate_tags_length(Tags, N + 1);\nvalidate_tags_length([], _) ->\n\ttrue.\n\n%% @doc Convert a transaction's key-value tags to a binary format.\ntags_to_binary(Tags) ->\n\tlist_to_binary(\n\t\tlists:foldr(\n\t\t\tfun({Name, Value}, Acc) ->\n\t\t\t\t[Name, Value | Acc]\n\t\t\tend,\n\t\t\t[],\n\t\t\tTags\n\t\t)\n\t).\n\n%%%===================================================================\n%%% Tests.\n%%%===================================================================\n\nsign_tx_test_() ->\n\t{timeout, 30, fun 
test_sign_tx/0}.\ntest_sign_tx() ->\n\tNewTX = new(<<\"TEST DATA\">>, ?AR(1)),\n\t{Priv, Pub} = ar_wallet:new(),\n\tRate = ?INITIAL_USD_TO_AR_PRE_FORK_2_5,\n\tPricePerGiBMinute = 1,\n\tTimestamp = os:system_time(seconds),\n\tValidTXs = [\n\t\tsign_v1(NewTX, Priv, Pub),\n\t\tsign(generate_chunk_tree(NewTX#tx{ format = 2 }), Priv, Pub)\n\t],\n\tlists:foreach(\n\t\tfun(TX) ->\n\t\t\tAccounts =\n\t\t\t\tlists:foldl(\n\t\t\t\t\tfun(Addr, Acc) ->\n\t\t\t\t\t\tmaps:put(Addr, {?AR(10), <<>>}, Acc)\n\t\t\t\t\tend,\n\t\t\t\t\t#{},\n\t\t\t\t\tar_tx:get_addresses([TX])\n\t\t\t\t),\n\t\t\tArgs1 = {Rate, PricePerGiBMinute, 1, 1, 0, 0, Accounts, Timestamp},\n\t\t\t?assert(verify(TX, Args1), ar_util:encode(TX#tx.id)),\n\t\t\tArgs2 = {Rate, PricePerGiBMinute, 1, 1, 0, 1, Accounts, Timestamp},\n\t\t\t?assert(verify(TX, Args2), ar_util:encode(TX#tx.id))\n\t\tend,\n\t\tValidTXs\n\t),\n\tInvalidTXs = [\n\t\tsign(\n\t\t\tgenerate_chunk_tree( % a quantity with empty target\n\t\t\t\tNewTX#tx{ format = 2, quantity = 1 }\n\t\t\t),\n\t\t\tPriv,\n\t\t\tPub\n\t\t),\n\t\tsign_v1(\n\t\t\tgenerate_chunk_tree( % a target without quantity\n\t\t\t\tNewTX#tx{ format = 1, target = crypto:strong_rand_bytes(32) }\n\t\t\t),\n\t\t\tPriv,\n\t\t\tPub\n\t\t)\n\t],\n\tlists:foreach(\n\t\tfun(TX) ->\n\t\t\tAccounts =\n\t\t\t\tlists:foldl(\n\t\t\t\t\tfun(Addr, Acc) ->\n\t\t\t\t\t\tmaps:put(Addr, {?AR(10), <<>>}, Acc)\n\t\t\t\t\tend,\n\t\t\t\t\t#{},\n\t\t\t\t\tar_tx:get_addresses([TX])\n\t\t\t\t),\n\t\t\tArgs3 = {Rate, PricePerGiBMinute, 1, 1, 0, 0, Accounts, Timestamp},\n\t\t\t?assert(not verify(TX, Args3), ar_util:encode(TX#tx.id)),\n\t\t\tArgs4 = {Rate, PricePerGiBMinute, 1, 1, 0, 1, Accounts, Timestamp},\n\t\t\t?assert(not verify(TX, Args4), ar_util:encode(TX#tx.id))\n\t\tend,\n\t\tInvalidTXs\n\t).\n\nsign_and_verify_chunked_test_() ->\n\t{timeout, 60, fun test_sign_and_verify_chunked/0}.\n\nsign_and_verify_chunked_pre_fork_2_5_test_() ->\n\tar_test_node:test_with_mocked_functions([{ar_fork, height_2_5, fun() -> infinity end}],\n\t\tfun test_sign_and_verify_chunked/0, 120).\n\ntest_sign_and_verify_chunked() ->\n\tTXData = crypto:strong_rand_bytes(trunc(?DATA_CHUNK_SIZE * 5.5)),\n\t{Priv, Pub} = ar_wallet:new(),\n\tUnsignedTX =\n\t\tgenerate_chunk_tree(\n\t\t\t#tx{\n\t\t\t\tformat = 2,\n\t\t\t\tdata = TXData,\n\t\t\t\tdata_size = byte_size(TXData),\n\t\t\t\treward = ?AR(100)\n\t\t\t}\n\t\t),\n\tSignedTX = sign(UnsignedTX#tx{ data = <<>> }, Priv, Pub),\n\tHeight = 0,\n\tRate = {1, 3},\n\tPricePerGiBMinute = 200,\n\tTimestamp = os:system_time(seconds),\n\tAddress = ar_wallet:to_address(Pub),\n\tArgs = {Rate, PricePerGiBMinute, 1, 1, 0, Height,\n\t\t\tmaps:from_list([{Address, {?AR(100), <<>>}}]), Timestamp},\n\t?assert(verify(SignedTX, Args)).\n\n%% Ensure that a forged transaction does not pass verification.\n\nforge_test_() ->\n\t{timeout, 30, fun test_forge/0}.\n\ntest_forge() ->\n\tNewTX = new(<<\"TEST DATA\">>, ?AR(10)),\n\t{Priv, Pub} = ar_wallet:new(),\n\tRate = ?INITIAL_USD_TO_AR_PRE_FORK_2_5,\n\tPricePerGiBMinute = 400,\n\tHeight = 0,\n\tInvalidSignTX = (sign_v1(NewTX, Priv, Pub))#tx{\n\t\tdata = <<\"FAKE DATA\">>\n\t},\n\tTimestamp = os:system_time(seconds),\n\tArgs = {Rate, PricePerGiBMinute, 1, 1, 0, Height, #{}, Timestamp},\n\t?assert(not verify(InvalidSignTX, Args)).\n\n%% Ensure that transactions above the minimum tx cost are accepted.\nis_tx_fee_sufficient_test() ->\n\tValidTX = new(<<\"TEST DATA\">>, ?AR(10)),\n\tInvalidTX = new(<<\"TEST DATA\">>, 1),\n\tPricePerGiBMinute = 2,\n\tHeight = 
2,\n\t?assert(is_tx_fee_sufficient({ValidTX, PricePerGiBMinute, 1, 1, Height, #{},\n\t\t\t<<\"non-existing-addr\">>})),\n\t?assert(\n\t\tnot is_tx_fee_sufficient({InvalidTX, PricePerGiBMinute, 1, 1, Height, #{},\n\t\t\t\t<<\"non-existing-addr\">>})).\n\n%% Ensure that the check_last_tx function only validates transactions in which\n%% last tx field matches that expected within the wallet list.\ncheck_last_tx_test_() ->\n\t{timeout, 60, fun test_check_last_tx/0}.\n\ncheck_last_tx_pre_fork_2_5_test_() ->\n\tar_test_node:test_with_mocked_functions([{ar_fork, height_2_4, fun() -> infinity end}],\n\t\tfun test_sign_and_verify_chunked/0, 120).\n\ntest_check_last_tx() ->\n\t{_Priv1, Pub1} = ar_wallet:new(),\n\t{Priv2, Pub2} = ar_wallet:new(),\n\t{Priv3, Pub3} = ar_wallet:new(),\n\tTX = ar_tx:new(Pub2, ?AR(1), ?AR(500), <<>>),\n\tTX2 = ar_tx:new(Pub3, ?AR(1), ?AR(400), TX#tx.id),\n\tTX3 = ar_tx:new(Pub1, ?AR(1), ?AR(300), TX#tx.id),\n\tSignedTX2 = sign_v1(TX2, Priv2, Pub2),\n\tSignedTX3 = sign_v1(TX3, Priv3, Pub3),\n\tWalletList =\n\t\tmaps:from_list(\n\t\t\t[\n\t\t\t\t{ar_wallet:to_address(Pub1), {1000, <<>>}},\n\t\t\t\t{ar_wallet:to_address(Pub2), {2000, TX#tx.id}},\n\t\t\t\t{ar_wallet:to_address(Pub3), {3000, <<>>}}\n\t\t\t]\n\t\t),\n\tfalse = check_last_tx(WalletList, SignedTX3),\n\ttrue = check_last_tx(WalletList, SignedTX2).\n\ngenerate_and_validate_even_chunk_tree_test() ->\n\tData = crypto:strong_rand_bytes(?DATA_CHUNK_SIZE * 7),\n\tlists:map(\n\t\tfun(ChallengeLocation) ->\n\t\t\ttest_generate_chunk_tree_and_validate_path(Data, ChallengeLocation)\n\t\tend,\n\t\t[\n\t\t\t0, 1, 10, ?DATA_CHUNK_SIZE, ?DATA_CHUNK_SIZE + 1, 2 * ?DATA_CHUNK_SIZE - 1,\n\t\t\t7 * ?DATA_CHUNK_SIZE - 1\n\t\t]\n\t).\n\ngenerate_and_validate_uneven_chunk_tree_test() ->\n\tData = crypto:strong_rand_bytes(?DATA_CHUNK_SIZE * 4 + 10),\n\tlists:map(\n\t\tfun(ChallengeLocation) ->\n\t\t\ttest_generate_chunk_tree_and_validate_path(Data, ChallengeLocation)\n\t\tend,\n\t\t[\n\t\t\t0, 1, 10, ?DATA_CHUNK_SIZE, ?DATA_CHUNK_SIZE + 1, 2 * ?DATA_CHUNK_SIZE - 1,\n\t\t\t4 * ?DATA_CHUNK_SIZE + 9\n\t\t]\n\t).\n\ntest_generate_chunk_tree_and_validate_path(Data, ChallengeLocation) ->\n\tChunkStart = ar_util:floor_int(ChallengeLocation, ?DATA_CHUNK_SIZE),\n\tChunk = binary:part(Data, ChunkStart, min(?DATA_CHUNK_SIZE, byte_size(Data) - ChunkStart)),\n\t#tx{ data_root = DataRoot, data_tree = DataTree } =\n\t\tar_tx:generate_chunk_tree(\n\t\t\t#tx{\n\t\t\t\tdata = Data,\n\t\t\t\tdata_size = byte_size(Data)\n\t\t\t}\n\t\t),\n\tDataPath =\n\t\tar_merkle:generate_path(\n\t\t\tDataRoot,\n\t\t\tChallengeLocation,\n\t\t\tDataTree\n\t\t),\n\tRealChunkID = ar_tx:generate_chunk_id(Chunk),\n\t{PathChunkID, StartOffset, EndOffset} =\n\t\tar_merkle:validate_path(DataRoot, ChallengeLocation, byte_size(Data), DataPath),\n\t{PathChunkID, StartOffset, EndOffset} =\n\t\tar_merkle:validate_path(DataRoot, ChallengeLocation, byte_size(Data),\n\t\t\t\tDataPath, strict_data_split_ruleset),\n\t{PathChunkID, StartOffset, EndOffset} =\n\t\tar_merkle:validate_path(DataRoot, ChallengeLocation, byte_size(Data),\n\t\t\t\tDataPath, strict_borders_ruleset),\n\t?assertEqual(RealChunkID, PathChunkID),\n\t?assert(ChallengeLocation >= StartOffset),\n\t?assert(ChallengeLocation < EndOffset).\n\nget_weave_size_increase_test() ->\n\t?assertEqual(0, get_weave_size_increase(#tx{}, ar_fork:height_2_5())),\n\t?assertEqual(262144,\n\t\t\tget_weave_size_increase(#tx{ data_size = 1 }, ar_fork:height_2_5())),\n\t?assertEqual(262144,\n\t\t\tget_weave_size_increase(#tx{ data_size = 256 }, 
ar_fork:height_2_5())),\n\t?assertEqual(262144,\n\t\t\tget_weave_size_increase(#tx{ data_size = 256 * 1024 - 1 }, ar_fork:height_2_5())),\n\t?assertEqual(262144,\n\t\t\tget_weave_size_increase(#tx{ data_size = 256 * 1024 }, ar_fork:height_2_5())),\n\t?assertEqual(2 * 262144,\n\t\t\tget_weave_size_increase(#tx{ data_size = 256 * 1024 + 1}, ar_fork:height_2_5())),\n\t?assertEqual(0,\n\t\t\tget_weave_size_increase(#tx{ data_size = 0 }, ar_fork:height_2_5() - 1)),\n\t?assertEqual(1,\n\t\t\tget_weave_size_increase(#tx{ data_size = 1 }, ar_fork:height_2_5() - 1)),\n\t?assertEqual(262144,\n\t\t\tget_weave_size_increase(#tx{ data_size = 256 * 1024 }, ar_fork:height_2_5() - 1)).\n"
  },
  {
    "path": "apps/arweave/src/ar_tx_blacklist.erl",
    "content": "%%% @doc The module manages a transaction blacklist. The blacklisted identifiers\n%%% are read from the configured files or downloaded from the configured HTTP endpoints.\n%%% The server coordinates the removal of the transaction headers and data and answers\n%%% queries about the currently blacklisted transactions and the corresponding global\n%%% byte offsets.\n%%% @end\n-module(ar_tx_blacklist).\n\n-behaviour(gen_server).\n\n-export([start_link/0, start_taking_down/0, is_tx_blacklisted/1, is_byte_blacklisted/1,\n\t\tget_blacklisted_intervals/2, get_next_not_blacklisted_byte/1,\n\t\tnotify_about_removed_tx/1, norify_about_orphaned_tx/1, notify_about_added_tx/3,\n\t\tstore_state/0]).\n\n-export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n%% The frequency of refreshing the blacklist.\n-ifdef(AR_TEST).\n-define(REFRESH_BLACKLISTS_FREQUENCY_MS, 2000).\n-else.\n-define(REFRESH_BLACKLISTS_FREQUENCY_MS, 10 * 60 * 1000).\n-endif.\n\n%% How long to wait before retrying to compose a blacklist from local and external\n%% sources after a failed attempt.\n-define(REFRESH_BLACKLISTS_RETRY_DELAY_MS, 10000).\n\n%% How long to wait for the response to the previously requested\n%% header or data removal (takedown) before requesting it for a new tx.\n-ifdef(AR_TEST).\n-define(REQUEST_TAKEDOWN_DELAY_MS, 1000).\n-else.\n-define(REQUEST_TAKEDOWN_DELAY_MS, 30000).\n-endif.\n\n%% The frequency of checking whether the time for the response to\n%% the previously requested takedown is due.\n-define(CHECK_PENDING_ITEMS_INTERVAL_MS, 1000).\n\n%% The frequency of persisting the server state.\n-ifdef(AR_TEST).\n-define(STORE_STATE_FREQUENCY_MS, 20000).\n-else.\n-define(STORE_STATE_FREQUENCY_MS, 10 * 60 * 1000).\n-endif.\n\n%% @doc The server state.\n-record(ar_tx_blacklist_state, {\n\t%% The timestamp of the last requested transaction header takedown.\n\t%% It is used to throttle the takedown requests.\n\theader_takedown_request_timestamp = os:system_time(millisecond),\n\t%% The timestamp of the last requested transaction data takedown.\n\t%% It is used to throttle the takedown requests.\n\tdata_takedown_request_timestamp = os:system_time(millisecond),\n\t%% A cursor pointing to a TXID in the list of pending unblacklisted transactions.\n\t%% Some of them might be orphaned or simply non-existent.\n\tpending_restore_cursor = first,\n\tunblacklist_timeout = os:system_time(second)\n}).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\n%% @doc Start removing blacklisted headers and data, if any.\nstart_taking_down() ->\n\tgen_server:cast(?MODULE, start_taking_down).\n\n%% @doc Check whether the given transaction is blacklisted.\nis_tx_blacklisted(TXID) ->\n\tets:member(ar_tx_blacklist, TXID).\n\n%% @doc Check whether the byte with the given global offset is blacklisted.\nis_byte_blacklisted(Offset) ->\n\tar_ets_intervals:is_inside(ar_tx_blacklist_offsets, Offset).\n\n%% @doc Return the smallest not blacklisted byte bigger than or equal to\n%% the byte at the given global offset.\nget_next_not_blacklisted_byte(Offset) ->\n\tcase ets:next(ar_tx_blacklist_offsets, Offset - 1) of\n\t\t'$end_of_table' ->\n\t\t\tOffset;\n\t\tNextOffset ->\n\t\t\tcase 
ets:lookup(ar_tx_blacklist_offsets, NextOffset) of\n\t\t\t\t[{NextOffset, Start}] ->\n\t\t\t\t\tcase Start >= Offset of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\tOffset;\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tNextOffset + 1\n\t\t\t\t\tend;\n\t\t\t\t[] ->\n\t\t\t\t\t%% The key should have been just removed, unlucky timing.\n\t\t\t\t\tget_next_not_blacklisted_byte(Offset)\n\t\t\tend\n\tend.\n\n%% @doc Return the blacklisted intervals intersecting the given range.\nget_blacklisted_intervals(Start, End) ->\n\tget_blacklisted_intervals(Start, End, ar_intervals:new()).\n\nget_blacklisted_intervals(Start, End, Intervals) ->\n\tcase ets:next(ar_tx_blacklist_offsets, Start) of\n\t\t'$end_of_table' ->\n\t\t\tIntervals;\n\t\tOffset ->\n\t\t\tcase ets:lookup(ar_tx_blacklist_offsets, Offset) of\n\t\t\t\t[{Offset, Start2}] when Start2 >= End ->\n\t\t\t\t\tIntervals;\n\t\t\t\t[{Offset, Start2}] when Offset >= End ->\n\t\t\t\t\tar_intervals:add(Intervals, End, max(Start2, Start));\n\t\t\t\t[{Offset, Start2}] ->\n\t\t\t\t\tget_blacklisted_intervals(Offset, End,\n\t\t\t\t\t\t\tar_intervals:add(Intervals, Offset, max(Start2, Start)));\n\t\t\t\t[] ->\n\t\t\t\t\t%% The key should have been just removed, unlucky timing.\n\t\t\t\t\tget_blacklisted_intervals(Start, End, Intervals)\n\t\t\tend\n\tend.\n\n%% @doc Notify the server about the removed transaction header.\nnotify_about_removed_tx(TXID) ->\n\tgen_server:cast(?MODULE, {removed_tx, TXID}).\n\n%% @doc Notify the server about the orphaned tx caused by the fork.\nnorify_about_orphaned_tx(TXID) ->\n\tgen_server:cast(?MODULE, {orphaned_tx, TXID}).\n\n%% @doc Notify the server about the added transaction.\nnotify_about_added_tx(TXID, End, Start) ->\n\tgen_server:cast(?MODULE, {added_tx, TXID, End, Start}).\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\n\ninit([]) ->\n\t?LOG_DEBUG([{event, initializing_tx_blacklist}, {tags, [tx_blacklist]}]),\n\tok = initialize_state(),\n\t%% Trap exit to avoid corrupting any open files on quit.\n\tprocess_flag(trap_exit, true),\n\tok = ar_events:subscribe(tx),\n\tgen_server:cast(?MODULE, refresh_blacklist),\n\t{ok, _} = ar_timer:apply_interval(\n\t\t?STORE_STATE_FREQUENCY_MS,\n\t\t?MODULE,\n\t\tstore_state,\n\t\t[],\n\t\t#{ skip_on_shutdown => false }\n\t),\n\t{ok, #ar_tx_blacklist_state{}}.\n\nhandle_call(Request, _From, State) ->\n\t?LOG_ERROR([{event, unhandled_call}, {module, ?MODULE}, {request, Request}]),\n\t{reply, ok, State}.\n\nhandle_cast(start_taking_down, State) ->\n\t?LOG_DEBUG([{event, start_taking_down}, {tags, [tx_blacklist]}]),\n\tgen_server:cast(?MODULE, maybe_restore),\n\tgen_server:cast(?MODULE, maybe_request_takedown),\n\t{noreply, State};\n\nhandle_cast(refresh_blacklist, State) ->\n\tcase refresh_blacklist() of\n\t\terror ->\n\t\t\t_ = ar_timer:apply_after(\n\t\t\t\t?REFRESH_BLACKLISTS_RETRY_DELAY_MS,\n\t\t\t\tgen_server,\n\t\t\t\tcast,\n\t\t\t\t[self(), refresh_blacklist],\n\t\t\t\t#{ skip_on_shutdown => true }\n\t\t\t);\n\t\tok ->\n\t\t\t_ = ar_timer:apply_after(\n\t\t\t\t?REFRESH_BLACKLISTS_FREQUENCY_MS,\n\t\t\t\tgen_server,\n\t\t\t\tcast,\n\t\t\t\t[self(), refresh_blacklist],\n\t\t\t\t#{ skip_on_shutdown => true }\n\t\t\t)\n\tend,\n\t{noreply, State};\n\nhandle_cast(maybe_request_takedown, State) ->\n\t#ar_tx_blacklist_state{\n\t\theader_takedown_request_timestamp = HTS,\n\t\tdata_takedown_request_timestamp = DTS\n\t} = State,\n\tNow = os:system_time(millisecond),\n\tState2 
=\n\t\tcase HTS + ?REQUEST_TAKEDOWN_DELAY_MS < Now of\n\t\t\ttrue ->\n\t\t\t\trequest_header_takedown(State);\n\t\t\tfalse ->\n\t\t\t\tState\n\t\tend,\n\tState3 =\n\t\tcase DTS + ?REQUEST_TAKEDOWN_DELAY_MS < Now of\n\t\t\ttrue ->\n\t\t\t\trequest_data_takedown(State2);\n\t\t\tfalse ->\n\t\t\t\tState2\n\t\tend,\n\t_ = ar_timer:apply_after(\n\t\t?CHECK_PENDING_ITEMS_INTERVAL_MS,\n\t\tgen_server,\n\t\tcast,\n\t\t[self(), maybe_request_takedown],\n\t\t#{ skip_on_shutdown => true }\n\t),\n\t{noreply, State3};\n\nhandle_cast(maybe_restore, #ar_tx_blacklist_state{ pending_restore_cursor = Cursor,\n\t\tunblacklist_timeout = UnblacklistTimeout } = State) ->\n\tNow = os:system_time(second),\n\tar_util:cast_after(200, ?MODULE, maybe_restore),\n\tcase UnblacklistTimeout + 30000 < Now of\n\t\ttrue ->\n\t\t\tRead =\n\t\t\t\tcase Cursor of\n\t\t\t\t\tfirst ->\n\t\t\t\t\t\tets:first(ar_tx_blacklist_pending_restore_headers);\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tets:next(ar_tx_blacklist_pending_restore_headers, Cursor)\n\t\t\t\tend,\n\t\t\tcase Read of\n\t\t\t\t'$end_of_table' ->\n\t\t\t\t\t{noreply, State#ar_tx_blacklist_state{ pending_restore_cursor = first,\n\t\t\t\t\t\t\tunblacklist_timeout = Now }};\n\t\t\t\tTXID ->\n\t\t\t\t\t?LOG_DEBUG([{event, preparing_transaction_unblacklisting},\n\t\t\t\t\t\t\t{tags, [tx_blacklist]},\n\t\t\t\t\t\t\t{tx, ar_util:encode(TXID)}]),\n\t\t\t\t\tar_events:send(tx, {preparing_unblacklisting, TXID}),\n\t\t\t\t\t{noreply, State#ar_tx_blacklist_state{ pending_restore_cursor = TXID,\n\t\t\t\t\t\t\tunblacklist_timeout = Now }}\n\t\t\tend;\n\t\tfalse ->\n\t\t\t{noreply, State}\n\tend;\n\nhandle_cast({removed_tx, TXID}, State) ->\n\tcase ets:member(ar_tx_blacklist_pending_headers, TXID) of\n\t\tfalse ->\n\t\t\t{noreply, State};\n\t\ttrue ->\n\t\t\tets:delete(ar_tx_blacklist_pending_headers, TXID),\n\t\t\t{noreply, request_header_takedown(State)}\n\tend;\n\nhandle_cast({orphaned_tx, TXID}, State) ->\n\tcase ets:lookup(ar_tx_blacklist, TXID) of\n\t\t[{TXID, End, Start}] ->\n\t\t\trestore_offsets(End, Start),\n\t\t\tets:insert(ar_tx_blacklist, [{TXID}]);\n\t\t_ ->\n\t\t\tok\n\tend,\n\t{noreply, State};\n\nhandle_cast({added_tx, TXID, End, Start}, State) ->\n\tcase ets:lookup(ar_tx_blacklist, TXID) of\n\t\t[{TXID}] ->\n\t\t\tets:insert(ar_tx_blacklist, [{TXID, End, Start}]),\n\t\t\tets:insert(ar_tx_blacklist_pending_data, [{TXID}]),\n\t\t\t{noreply, request_data_takedown(State)};\n\t\t[{TXID, CurrentEnd, CurrentStart}] ->\n\t\t\trestore_offsets(CurrentEnd, CurrentStart),\n\t\t\tets:insert(ar_tx_blacklist, [{TXID, End, Start}]),\n\t\t\tets:insert(ar_tx_blacklist_pending_data, [{TXID}]),\n\t\t\t{noreply, request_data_takedown(State)};\n\t\t_ ->\n\t\t\t{noreply, State}\n\tend;\n\nhandle_cast(Msg, State) ->\n\t?LOG_ERROR([{event, unhandled_cast}, {module, ?MODULE}, {message, Msg}]),\n\t{noreply, State}.\n\nhandle_info({removed_range, Ref}, State) ->\n\tcase erlang:get(Ref) of\n\t\tundefined ->\n\t\t\t{noreply, State};\n\t\t{range, {Start, End}} ->\n\t\t\terlang:erase(Ref),\n\t\t\tcase ets:lookup(ar_tx_blacklist, {End, Start}) of\n\t\t\t\t[{{End, Start}}] ->\n\t\t\t\t\tets:delete(ar_tx_blacklist_pending_data, {End, Start}),\n\t\t\t\t\t{noreply, request_data_takedown(State)};\n\t\t\t\t_ ->\n\t\t\t\t\t{noreply, State}\n\t\t\tend;\n\t\t{tx, {TXID, Start, End}} ->\n\t\t\terlang:erase(Ref),\n\t\t\tcase ets:lookup(ar_tx_blacklist, TXID) of\n\t\t\t\t[{TXID, End, Start}] ->\n\t\t\t\t\tets:delete(ar_tx_blacklist_pending_data, TXID),\n\t\t\t\t\t{noreply, request_data_takedown(State)};\n\t\t\t\t_ 
->\n\t\t\t\t\t{noreply, State}\n\t\t\tend\n\tend;\n\nhandle_info({event, tx, {ready_for_unblacklisting, TXID}}, State) ->\n\t?LOG_DEBUG([{event, unblacklisting_transaction},\n\t\t{tags, [tx_blacklist]},\n\t\t{tx, ar_util:encode(TXID)}]),\n\tets:delete(ar_tx_blacklist_pending_restore_headers, TXID),\n\t{noreply, State#ar_tx_blacklist_state{ unblacklist_timeout = os:system_time(second) }};\n\nhandle_info({event, tx, _}, State) ->\n\t{noreply, State};\n\nhandle_info(Info, State) ->\n\t?LOG_ERROR([{event, unhandled_info}, {module, ?MODULE}, {message, Info}]),\n\t{noreply, State}.\n\nterminate(Reason, _State) ->\n\tstore_state(),\n\tclose_dets(),\n\t?LOG_INFO([{event, terminate}, {module, ?MODULE}, {reason, Reason}]).\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\ninitialize_state() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tDataDir = Config#config.data_dir,\n\tDir = filename:join(DataDir, \"ar_tx_blacklist\"),\n\tok = filelib:ensure_dir(Dir ++ \"/\"),\n\tNames = [\n\t\tar_tx_blacklist,\n\t\tar_tx_blacklist_pending_headers,\n\t\tar_tx_blacklist_pending_data,\n\t\tar_tx_blacklist_offsets,\n\t\tar_tx_blacklist_pending_restore_headers\n\t],\n\tlists:foreach(\n\t\tfun\n\t\t\t(Name) ->\n\t\t\t\t{ok, _} = dets:open_file(Name, [{file, filename:join(Dir, Name)}]),\n\t\t\t\ttrue = ets:from_dets(Name, Name)\n\t\tend,\n\t\tNames\n\t).\n\nrefresh_blacklist() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tWhitelistFiles = Config#config.transaction_whitelist_files,\n\tcase load_from_files(WhitelistFiles) of\n\t\terror ->\n\t\t\terror;\n\t\t{ok, Whitelist} ->\n\t\t\tWhitelistURLs = Config#config.transaction_whitelist_urls,\n\t\t\tcase load_from_urls(WhitelistURLs) of\n\t\t\t\terror ->\n\t\t\t\t\terror;\n\t\t\t\t{ok, Whitelist2} ->\n\t\t\t\t\trefresh_blacklist(sets:union(Whitelist, Whitelist2))\n\t\t\tend\n\tend.\n\nrefresh_blacklist(Whitelist) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tBlacklistFiles = Config#config.transaction_blacklist_files,\n\tcase load_from_files(BlacklistFiles) of\n\t\terror ->\n\t\t\terror;\n\t\t{ok, Blacklist} ->\n\t\t\tBlacklistURLs = Config#config.transaction_blacklist_urls,\n\t\t\tcase load_from_urls(BlacklistURLs) of\n\t\t\t\terror ->\n\t\t\t\t\terror;\n\t\t\t\t{ok, Blacklist2} ->\n\t\t\t\t\trefresh_blacklist(Whitelist, sets:union(Blacklist, Blacklist2))\n\t\t\tend\n\tend.\n\nrefresh_blacklist(Whitelist, Blacklist) ->\n\tRemoved =\n\t\tsets:fold(\n\t\t\tfun\t(TXID, Acc) when is_binary(TXID) ->\n\t\t\t\t\tcase not sets:is_element(TXID, Whitelist)\n\t\t\t\t\t\t\tandalso not ets:member(ar_tx_blacklist, TXID) of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t[TXID | Acc];\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tAcc\n\t\t\t\t\tend;\n\t\t\t\t({End, Start}, Acc) ->\n\t\t\t\t\tcase ets:member(ar_tx_blacklist, {End, Start}) of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\tAcc;\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t[{End, Start} | Acc]\n\t\t\t\t\tend\n\t\t\tend,\n\t\t\t[],\n\t\t\tBlacklist\n\t\t),\n\tRestored =\n\t\tets:foldl(\n\t\t\tfun\t({End, Start}, Acc) ->\n\t\t\t\t\tcase sets:is_element({End, Start}, Blacklist) of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\tAcc;\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t[{End, Start} | Acc]\n\t\t\t\t\tend;\n\t\t\t\t(Entry, Acc) ->\n\t\t\t\t\tTXID = element(1, Entry),\n\t\t\t\t\tcase sets:is_element(TXID, Whitelist)\n\t\t\t\t\t\t\torelse not sets:is_element(TXID, Blacklist) of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t[TXID | Acc];\n\t\t\t\t\t\tfalse 
->\n\t\t\t\t\t\t\tAcc\n\t\t\t\t\tend\n\t\t\tend,\n\t\t\t[],\n\t\t\tar_tx_blacklist\n\t\t),\n\tlists:foreach(\n\t\tfun\t(TXID) when is_binary(TXID) ->\n\t\t\t\tets:insert(ar_tx_blacklist, [{TXID}]),\n\t\t\t\tets:insert(ar_tx_blacklist_pending_headers, [{TXID}]),\n\t\t\t\tets:insert(ar_tx_blacklist_pending_data, [{TXID}]),\n\t\t\t\tets:delete(ar_tx_blacklist_pending_restore_headers, TXID);\n\t\t\t({End, Start}) ->\n\t\t\t\tets:insert(ar_tx_blacklist, [{{End, Start}}]),\n\t\t\t\tets:insert(ar_tx_blacklist_pending_data, [{{End, Start}}])\n\t\tend,\n\t\tRemoved\n\t),\n\tlists:foreach(\n\t\tfun\t(TXID) when is_binary(TXID) ->\n\t\t\t\tets:insert(ar_tx_blacklist_pending_restore_headers, [{TXID}]),\n\t\t\t\tcase ets:lookup(ar_tx_blacklist, TXID) of\n\t\t\t\t\t[{TXID}] ->\n\t\t\t\t\t\tok;\n\t\t\t\t\t[{TXID, End, Start}] ->\n\t\t\t\t\t\trestore_offsets(End, Start)\n\t\t\t\tend,\n\t\t\t\tets:delete(ar_tx_blacklist, TXID),\n\t\t\t\tets:delete(ar_tx_blacklist_pending_data, TXID),\n\t\t\t\tets:delete(ar_tx_blacklist_pending_headers, TXID);\n\t\t\t({End, Start}) ->\n\t\t\t\trestore_offsets(End, Start),\n\t\t\t\tets:delete(ar_tx_blacklist, {End, Start}),\n\t\t\t\tets:delete(ar_tx_blacklist_pending_data, {End, Start})\n\t\tend,\n\t\tRestored\n\t),\n\t?LOG_DEBUG([{event, refreshed_blacklist},\n\t\t{tags, [tx_blacklist]},\n\t\t{whitelist, sets:size(Whitelist)},\n\t\t{blacklist, sets:size(Blacklist)},\n\t\t{removed, length(Removed)},\n\t\t{restored, length(Restored)},\n\t\t{ar_tx_blacklist, ets:info(ar_tx_blacklist, size)},\n\t\t{ar_tx_blacklist_pending_headers, ets:info(ar_tx_blacklist_pending_headers, size)},\n\t\t{ar_tx_blacklist_pending_data, ets:info(ar_tx_blacklist_pending_data, size)},\n\t\t{ar_tx_blacklist_pending_restore_headers,\n\t\t\tets:info(ar_tx_blacklist_pending_restore_headers, size)}\n\t]),\n\tok.\n\nload_from_files(Files) ->\n\tLists = lists:map(fun load_from_file/1, Files),\n\tcase lists:all(fun(error) -> false; (_) -> true end, Lists) of\n\t\ttrue ->\n\t\t\t{ok, sets:from_list(lists:flatten(Lists))};\n\t\tfalse ->\n\t\t\terror\n\tend.\n\nload_from_file(File) ->\n\ttry\n\t\t{ok, Binary} = file:read_file(File),\n\t\tparse_binary(Binary)\n\tcatch Type:Pattern ->\n\t\tWarning = [\n\t\t\t{event, failed_to_load_and_parse_file},\n\t\t\t{tags, [tx_blacklist]},\n\t\t\t{file, File},\n\t\t\t{exception, {Type, Pattern}}\n\t\t],\n\t\t?LOG_WARNING(Warning),\n\t\terror\n\tend.\n\nparse_binary(Binary) ->\n\tlists:filtermap(\n\t\tfun(Line) ->\n\t\t\tcase Line of\n\t\t\t\t<<>> ->\n\t\t\t\t\tfalse;\n\t\t\t\tTXIDOrRange ->\n\t\t\t\t\tcase binary:split(TXIDOrRange, <<\",\">>, [global]) of\n\t\t\t\t\t\t[StartBin, EndBin] ->\n\t\t\t\t\t\t\tcase {catch binary_to_integer(StartBin),\n\t\t\t\t\t\t\t\t\tcatch binary_to_integer(EndBin)} of\n\t\t\t\t\t\t\t\t{Start, End} when is_integer(Start),\n\t\t\t\t\t\t\t\t\tis_integer(End), End > Start, Start >= 0 ->\n\t\t\t\t\t\t\t\t\t{true, {End, Start}};\n\t\t\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t\t\t?LOG_WARNING([{event, failed_to_parse_line},\n\t\t\t\t\t\t\t\t\t\t\t{tags, [tx_blacklist]},\n\t\t\t\t\t\t\t\t\t\t\t{line, Line}]),\n\t\t\t\t\t\t\t\t\tfalse\n\t\t\t\t\t\t\tend;\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\tcase ar_util:safe_decode(TXIDOrRange) of\n\t\t\t\t\t\t\t\t{error, invalid} ->\n\t\t\t\t\t\t\t\t\t?LOG_WARNING([{event, failed_to_parse_line},\n\t\t\t\t\t\t\t\t\t\t\t{tags, [tx_blacklist]},\n\t\t\t\t\t\t\t\t\t\t\t{line, Line}]),\n\t\t\t\t\t\t\t\t\tfalse;\n\t\t\t\t\t\t\t\t{ok, TXID} ->\n\t\t\t\t\t\t\t\t\t{true, 
TXID}\n\t\t\t\t\t\t\tend\n\t\t\t\t\tend\n\t\t\tend\n\t\tend,\n\t\tbinary:split(Binary, <<\"\\n\">>, [global])\n\t).\n\nload_from_urls(URLs) ->\n\tLists = lists:map(fun load_from_url/1, URLs),\n\tcase lists:all(fun(error) -> false; (_) -> true end, Lists) of\n\t\ttrue ->\n\t\t\t{ok, sets:from_list(lists:flatten(Lists))};\n\t\tfalse ->\n\t\t\terror\n\tend.\n\nload_from_url(URL) ->\n\ttry\n\t\t#{ host := Host, path := RawPath, scheme := Scheme } = M = uri_string:parse(URL),\n\t\tPath = case RawPath of \"\" -> \"/\"; Elsewise -> Elsewise end,\n\t\tQuery = case maps:get(query, M, not_found) of not_found -> <<>>; Q -> [<<\"?\">>, Q] end,\n\t\tPort = maps:get(port, M, case Scheme of \"http\" -> 80; \"https\" -> 443 end),\n\t\tReply =\n\t\t\tar_http:req(#{\n\t\t\t\tmethod => get,\n\t\t\t\tpeer => {Host, Port},\n\t\t\t\tpath => binary_to_list(iolist_to_binary([Path, Query])),\n\t\t\t\tis_peer_request => false,\n\t\t\t\ttimeout => 20000,\n\t\t\t\tconnect_timeout => 1000\n\t\t\t}),\n\t\tcase Reply of\n\t\t\t{ok, {{<<\"200\">>, _}, _, Body, _, _}} ->\n\t\t\t\tparse_binary(Body);\n\t\t\t_ ->\n\t\t\t\t?LOG_WARNING([\n\t\t\t\t\t{event, failed_to_download_tx_blacklist},\n\t\t\t\t\t{tags, [tx_blacklist]},\n\t\t\t\t\t{url, URL},\n\t\t\t\t\t{reply, Reply}\n\t\t\t\t]),\n\t\t\t\terror\n\t\tend\n\tcatch Type:Pattern ->\n\t\t?LOG_WARNING([\n\t\t\t{event, failed_to_load_and_parse_tx_blacklist},\n\t\t\t{tags, [tx_blacklist]},\n\t\t\t{url, URL},\n\t\t\t{exception, {Type, Pattern}}\n\t\t]),\n\t\terror\n\tend.\n\nrequest_header_takedown(State) ->\n\tcase ets:first(ar_tx_blacklist_pending_headers) of\n\t\t'$end_of_table' ->\n\t\t\tState;\n\t\tTXID ->\n\t\t\tar_header_sync:request_tx_removal(TXID),\n\t\t\tState#ar_tx_blacklist_state{\n\t\t\t\theader_takedown_request_timestamp = os:system_time(millisecond)\n\t\t\t}\n\tend.\n\nrequest_data_takedown(State) ->\n\tcase ets:first(ar_tx_blacklist_pending_data) of\n\t\t'$end_of_table' ->\n\t\t\tState;\n\t\tTXID when is_binary(TXID)  ->\n\t\t\tcase ets:lookup(ar_tx_blacklist, TXID) of\n\t\t\t\t[{TXID}] ->\n\t\t\t\t\tcase ar_data_sync:get_tx_offset(TXID) of\n\t\t\t\t\t\t{ok, {End, Size}} ->\n\t\t\t\t\t\t\tStart = End - Size,\n\t\t\t\t\t\t\tets:insert(ar_tx_blacklist, [{TXID, End, Start}]),\n\t\t\t\t\t\t\tblacklist_offsets(TXID, End, Start, State);\n\t\t\t\t\t\t{error, Reason} ->\n\t\t\t\t\t\t\t?LOG_WARNING([{event, failed_to_find_blocklisted_tx_in_the_index},\n\t\t\t\t\t\t\t\t\t{tags, [tx_blacklist]},\n\t\t\t\t\t\t\t\t\t{tx, ar_util:encode(TXID)},\n\t\t\t\t\t\t\t\t\t{reason, io_lib:format(\"~p\", [Reason])}]),\n\t\t\t\t\t\t\tets:delete(ar_tx_blacklist_pending_data, TXID),\n\t\t\t\t\t\t\tets:delete(ar_tx_blacklist, TXID),\n\t\t\t\t\t\t\tState\n\t\t\t\t\tend;\n\t\t\t\t[{TXID, End, Start}] ->\n\t\t\t\t\tblacklist_offsets(TXID, End, Start, State)\n\t\t\tend;\n\t\t{End, Start} ->\n\t\t\tblacklist_offsets(End, Start, State)\n\tend.\n\nstore_state() ->\n\tNames = [\n\t\tar_tx_blacklist,\n\t\tar_tx_blacklist_pending_headers,\n\t\tar_tx_blacklist_pending_data,\n\t\tar_tx_blacklist_offsets,\n\t\tar_tx_blacklist_pending_restore_headers\n\t],\n\tlists:foreach(\n\t\tfun\n\t\t\t(Name) ->\n\t\t\t\tets:to_dets(Name, Name)\n\t\tend,\n\t\tNames\n\t),\n\t?LOG_DEBUG([{event, stored_state},\n\t\t{tags, [tx_blacklist]},\n\t\t{ar_tx_blacklist, ets:info(ar_tx_blacklist, size)},\n\t\t{ar_tx_blacklist_pending_headers, ets:info(ar_tx_blacklist_pending_headers, size)},\n\t\t{ar_tx_blacklist_pending_data, ets:info(ar_tx_blacklist_pending_data, size)},\n\t\t{ar_tx_blacklist_offsets, 
ets:info(ar_tx_blacklist_offsets, size)},\n\t\t{ar_tx_blacklist_pending_restore_headers,\n\t\t\tets:info(ar_tx_blacklist_pending_restore_headers, size)}\n\t]).\n\nrestore_offsets(End, Start) ->\n\tar_ets_intervals:delete(ar_tx_blacklist_offsets, End, Start).\n\nblacklist_offsets(End, Start, State) ->\n\tar_ets_intervals:add(ar_tx_blacklist_offsets, End, Start),\n\tRef = make_ref(),\n\terlang:put(Ref, {range, {Start, End}}),\n\t?LOG_DEBUG([{event, requesting_data_removal},\n\t\t{tags, [tx_blacklist]},\n\t\t{s, Start},\n\t\t{e, End}]),\n\tar_data_sync:request_data_removal(Start, End, Ref, self()),\n\tState#ar_tx_blacklist_state{\n\t\tdata_takedown_request_timestamp = os:system_time(millisecond)\n\t}.\n\nblacklist_offsets(TXID, End, Start, State) ->\n\tar_ets_intervals:add(ar_tx_blacklist_offsets, End, Start),\n\tRef = make_ref(),\n\terlang:put(Ref, {tx, {TXID, Start, End}}),\n\t?LOG_DEBUG([{event, requesting_tx_data_removal},\n\t\t{tags, [tx_blacklist]},\n\t\t{tx, ar_util:encode(TXID)},\n\t\t{s, Start},\n\t\t{e, End}]),\n\tar_data_sync:request_tx_data_removal(TXID, Ref, self()),\n\tState#ar_tx_blacklist_state{\n\t\tdata_takedown_request_timestamp = os:system_time(millisecond)\n\t}.\n\nclose_dets() ->\n\tNames = [\n\t\tar_tx_blacklist,\n\t\tar_tx_blacklist_pending_headers,\n\t\tar_tx_blacklist_pending_data,\n\t\tar_tx_blacklist_offsets,\n\t\tar_tx_blacklist_pending_restore_headers\n\t],\n\tlists:foreach(\n\t\tfun\n\t\t\t(Name) ->\n\t\t\t\tcase dets:close(Name) of\n\t\t\t\t\tok ->\n\t\t\t\t\t\tok;\n\t\t\t\t\t{error, Reason} ->\n\t\t\t\t\t\t?LOG_ERROR([\n\t\t\t\t\t\t\t{event, failed_to_close_dets_table},\n\t\t\t\t\t\t\t{tags, [tx_blacklist]},\n\t\t\t\t\t\t\t{name, Name},\n\t\t\t\t\t\t\t{reason, Reason}\n\t\t\t\t\t\t])\n\t\t\t\tend\n\t\tend,\n\t\tNames\n\t).\n"
  },
  {
    "path": "apps/arweave/src/ar_tx_db.erl",
    "content": "%%% @doc Database for storing error codes for failed transactions, so that a user\n%%% can get the error reason when polling the status of a transaction. The entries\n%%% have a TTL. The DB is a singleton.\n-module(ar_tx_db).\n\n-export([get_error_codes/1, put_error_codes/2, ensure_error/1, clear_error_codes/1]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n%% @doc Put an Erlang term into the meta DB. Typically these are\n%% write-once values.\nput_error_codes(TXID, ErrorCodes) ->\n\tets:insert(?MODULE, {TXID, ErrorCodes}),\n\t{ok, _} = ar_timer:apply_after(\n\t\t1800*1000,\n\t\t?MODULE,\n\t\tclear_error_codes,\n\t\t[TXID],\n\t\t#{ skip_on_shutdown => false }\n\t),\n\tok.\n\n%% @doc Retreive a term from the meta db.\nget_error_codes(TXID) ->\n\tcase ets:lookup(?MODULE, TXID) of\n\t\t[{_, ErrorCodes}] -> {ok, ErrorCodes};\n\t\t[] -> not_found\n\tend.\n\n%% @doc Writes an unknown error code if there are not already any error codes\n%% for this TX.\nensure_error(TXID) ->\n\tcase ets:lookup(?MODULE, TXID) of\n\t\t[_] -> ok;\n\t\t[] -> put_error_codes(TXID, [\"unknown_error\"])\n\tend.\n\n%% @doc Removes all error codes for this TX.\nclear_error_codes(TXID) ->\n\tets:delete(?MODULE, TXID).\n\n%%%===================================================================\n%%% Tests.\n%%%===================================================================\n\nsetup_ets() ->\n\tcase ets:info(?MODULE) of\n\t\tundefined ->\n\t\t\tets:new(?MODULE, [set, public, named_table]),\n\t\t\tfun() -> ets:delete(?MODULE) end;\n\t\t_ ->\n\t\t\tfun() -> ok end\n\tend.\n\nread_write_test_() ->\n\t{setup, fun setup_ets/0, fun(Cleanup) -> Cleanup() end,\n\t\tfun(_) -> [fun() ->\n\t\t\tput_error_codes(mocked_txid1, mocked_error),\n\t\t\tput_error_codes(mocked_txid2, mocked_error),\n\t\t\tensure_error(mocked_txid3),\n\t\t\tassert_clear_error_codes(mocked_txid1),\n\t\t\tassert_clear_error_codes(mocked_txid2),\n\t\t\tassert_clear_error_codes(mocked_txid3)\n\t\tend] end}.\n\nassert_clear_error_codes(TXID) ->\n\tFetched = get_error_codes(TXID),\n\t?assertMatch({ok, _}, Fetched),\n\tclear_error_codes(TXID),\n\t?assert(not_found == get_error_codes(TXID)),\n\tok.\n\ntx_db_test_() ->\n\t{setup, fun setup_ets/0, fun(Cleanup) -> Cleanup() end,\n\t\tfun(_) -> [{timeout, 30, fun test_tx_db/0}] end}.\n\ntest_tx_db() ->\n\t{_, Pub1 = {_, Owner1}} = ar_wallet:new(),\n\t{Priv2, Pub2} = ar_wallet:new(),\n\tWallets = [\n\t\t{ar_wallet:to_address(Pub1), ?AR(10000), <<>>},\n\t\t{ar_wallet:to_address(Pub2), ?AR(10000), <<>>}\n\t],\n\tWL = maps:from_list([{A, {B, LTX}} || {A, B, LTX} <- Wallets]),\n\tOrphanedTX1 = ar_tx:new(Pub1, ?AR(1), ?AR(5000), <<>>),\n\tBadTX = OrphanedTX1#tx{ owner = Owner1, signature = <<\"BAD\">> },\n\tTimestamp = os:system_time(seconds),\n\t?assert(not ar_tx:verify(BadTX, {{1, 4}, 1, 1, 1, 0, 1, WL, Timestamp})),\n\tExpected = {ok, [\"same_owner_as_target\", \"tx_id_not_valid\", \"tx_signature_not_valid\"]},\n\t?assertEqual(Expected, get_error_codes(BadTX#tx.id)),\n\tOrphanedTX2 = ar_tx:new(Pub1, ?AR(1), ?AR(5000), <<>>),\n\tSignedTX = ar_tx:sign_v1(OrphanedTX2, Priv2, Pub2),\n\t?assert(ar_tx:verify(SignedTX, {{1, 4}, 1, 1, 1, 0, 1, WL, Timestamp})),\n\tclear_error_codes(BadTX#tx.id),\n\tclear_error_codes(SignedTX#tx.id),\n\tok.\n"
  },
  {
    "path": "apps/arweave/src/ar_tx_emitter.erl",
    "content": "-module(ar_tx_emitter).\n\n-behaviour(gen_server).\n\n-export([start_link/2]).\n\n-export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2]).\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-include(\"ar.hrl\").\n\n%% Remove identifiers of recently emitted transactions from the cache after this long.\n-define(CLEANUP_RECENTLY_EMITTED_TIMEOUT, 60 * 60 * 1000).\n\n-define(WORKER_CONNECT_TIMEOUT, 1 * 1000).\n-define(WORKER_REQUEST_TIMEOUT, 5 * 1000).\n\n%% How frequently to check whether new transactions are appeared for distribution.\n-define(CHECK_MEMPOOL_FREQUENCY, 1000).\n\n-record(state, {\n\tcurrently_emitting,\n\tworkers,\n\t%% How long to wait for a reply from the emitter worker before considering it failed.\n\tworker_failed_timeout\n\n}).\n\n%% How many transactions to send to emitters at one go. With CHUNK_SIZE=1, the propagation\n%% speed is determined by the slowest peer among those chosen for the given transaction.\n%% Increasing CHUNK_SIZE reduces the influence of slow peers at the cost of RAM (message\n%% queues for transaction emitter workers.\n-define(CHUNK_SIZE, 100).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nstart_link(Name, Workers) ->\n\tgen_server:start_link({local, Name}, ?MODULE, Workers, []).\n\n%%%===================================================================\n%%% gen_server callbacks.\n%%%===================================================================\n\ninit(Workers) ->\n\tgen_server:cast(?MODULE, process_chunk),\n\n\tNumWorkers = length(Workers),\n\tNumPeers = max_propagation_peers(),\n\tJobsPerWorker = (?CHUNK_SIZE * NumPeers) div NumWorkers,\n\t%% Only time out a worker after we've given enough time for *all* workers to complete\n\t%% their tasks (including a small 1000 ms buffer). This should prevent a cascade where\n\t%% worker queues keep growing.\n\tWorkerFailedTimeout = JobsPerWorker * \n\t\t(?WORKER_CONNECT_TIMEOUT + ?WORKER_REQUEST_TIMEOUT + 1000),\n\n\tState = #state{ workers = queue:from_list(Workers)\n\t\t      , currently_emitting = sets:new()\n\t\t      , worker_failed_timeout = WorkerFailedTimeout\n\t\t      },\n\t{ok, State}.\n\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {request, Request}]),\n\t{reply, ok, State}.\n\nhandle_cast(process_chunk, State) ->\n\t% only current (active) peers should be used, using lifetime\n\t% peers will create unecessary timeouts. 
The first to\n\t% contact are the trusted peers.\n\tTrustedPeers = ar_peers:get_trusted_peers(),\n\tCurrentPeers = ar_peers:get_peers(current),\n\tFilteredPeers = ar_peers:filter_peers(CurrentPeers, {timestamp, 60*60*24}),\n\tCleanedPeers = FilteredPeers -- TrustedPeers,\n\tPeers = TrustedPeers ++ CleanedPeers,\n\n\t% prepare to emit chunk(s)\n\tPropagationQueue = ar_mempool:get_propagation_queue(),\n\tPropagationMax = max_propagation_peers(),\n\tState2 = emit(\n\t\tPropagationQueue, Peers, PropagationMax, ?CHUNK_SIZE, State),\n\n\t% check later if emit/6 returns an empty set\n\tcase sets:is_empty(State2#state.currently_emitting) of\n\t\ttrue ->\n\t\t\tar_util:cast_after(?CHECK_MEMPOOL_FREQUENCY, ?MODULE, process_chunk);\n\t\tfalse ->\n\t\t\tok\n\tend,\n\n\t{noreply, State2};\n\nhandle_cast(Msg, State) ->\n\t?LOG_ERROR([{event, unhandled_cast}, {module, ?MODULE}, {message, Msg}]),\n\t{noreply, State}.\n\nhandle_info({emitted, TXID, Peer}, State) ->\n\t#state{ currently_emitting = Emitting } = State,\n\tcase sets:is_element({TXID, Peer}, Emitting) of\n\t\tfalse ->\n\t\t\t%% Should have been cleaned up by timeout.\n\t\t\t{noreply, State};\n\t\ttrue ->\n\t\t\tEmitting2 = sets:del_element({TXID, Peer}, Emitting),\n\t\t\tcase sets:is_empty(Emitting2) of\n\t\t\t\ttrue ->\n\t\t\t\t\tgen_server:cast(?MODULE, process_chunk);\n\t\t\t\tfalse ->\n\t\t\t\t\tok\n\t\t\tend,\n\t\t\t{noreply, State#state{ currently_emitting = Emitting2 }}\n\tend;\n\nhandle_info({timeout, TXID, Peer}, State) ->\n\t#state{ currently_emitting = Emitting } = State,\n\tcase sets:is_element({TXID, Peer}, Emitting) of\n\t\tfalse ->\n\t\t\t%% Should have been emitted.\n\t\t\t{noreply, State};\n\t\ttrue ->\n\t\t\t?LOG_DEBUG([{event, tx_propagation_timeout}, {txid, ar_util:encode(TXID)},\n\t\t\t\t\t{peer, ar_util:format_peer(Peer)}]),\n\t\t\tEmitting2 = sets:del_element({TXID, Peer}, Emitting),\n\t\t\tcase sets:is_empty(Emitting2) of\n\t\t\t\ttrue ->\n\t\t\t\t\tgen_server:cast(?MODULE, process_chunk);\n\t\t\t\tfalse ->\n\t\t\t\t\tok\n\t\t\tend,\n\t\t\t{noreply, State#state{ currently_emitting = Emitting2 }}\n\tend;\n\nhandle_info({remove_from_recently_emitted, TXID}, State) ->\n\tets:delete(ar_tx_emitter_recently_emitted, TXID),\n\t{noreply, State};\n\nhandle_info(Info, State) ->\n\t?LOG_ERROR([{event, unhandled_info}, {module, ?MODULE}, {info, Info}]),\n\t{noreply, State}.\n\nterminate(Reason, _State) ->\n\t?LOG_INFO([{module, ?MODULE},{pid, self()},{callback, terminate},{reason, Reason}]),\n\tok.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nmax_propagation_peers() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tConfig#config.max_propagation_peers.\n\nemit(_Set, _Peers, _MaxPeers, N, State) when N =< 0 ->\n\tState;\nemit(Set, Peers, MaxPeers, N, State) ->\n\tcase gb_sets:is_empty(Set) of\n\t\ttrue ->\n\t\t\tState;\n\t\tfalse ->\n\t\t\temit_set_not_empty(Set, Peers, MaxPeers, N, State)\n\tend.\n\nemit_set_not_empty(Set, Peers, MaxPeers, N, State) ->\n\t{{Utility, TXID}, Set2} = gb_sets:take_largest(Set),\n\tcase ets:member(ar_tx_emitter_recently_emitted, TXID) of\n\t\ttrue ->\n\t\t\temit(Set2, Peers, MaxPeers, N, State);\n\t\tfalse ->\n\t\t\t#state{ \n\t\t\t\tworkers = Q,\n\t\t\t\tcurrently_emitting = Emitting,\n\t\t\t\tworker_failed_timeout = WorkerFailedTimeout\n\t\t\t} = State,\n\t\t\t% only a subset of the whole peers list is\n\t\t\t% taken using max_propagation_peers value.\n\t\t\t% the first N peers 
will be used instead of\n\t\t\t% the whole list. unfortunately, this list can\n\t\t\t% also have not connected peers.\n\t\t\tPeersToSync = lists:sublist(Peers, MaxPeers),\n\n\t\t\t% for each peers in the sublist, a chunk is\n\t\t\t% sent. The workers are taken one by one from\n\t\t\t% a FIFO, mainly used to distribute the\n\t\t\t% messages across all available workers.\n\t\t\tFoldl = fun(Peer, {Acc, Workers}) ->\n\t\t\t\t{{value, W}, Workers2} = queue:out(Workers),\n\t\t\t\tgen_server:cast(W, \n\t\t\t\t\t{emit, TXID, Peer, \n\t\t\t\t\t\t?WORKER_CONNECT_TIMEOUT, ?WORKER_REQUEST_TIMEOUT, self()}),\n\t\t\t\terlang:send_after(WorkerFailedTimeout, ?MODULE, {timeout, TXID, Peer}),\n\t\t\t\t{sets:add_element({TXID, Peer}, Acc), queue:in(W, Workers2)}\n\t\t\tend,\n\t\t\t{Emitting2, Q2} = lists:foldl(Foldl, {Emitting, Q}, PeersToSync),\n\t\t\tState2 = State#state{\n\t\t\t\tworkers = Q2,\n\t\t\t\tcurrently_emitting = Emitting2\n\t\t\t},\n\n\t\t\t%% The cache storing recently emitted transactions is used instead\n\t\t\t%% of an explicit synchronization of the propagation queue updates\n\t\t\t%% with ar_node_worker - we do not rely on ar_node_worker removing\n\t\t\t%% emitted transactions from the queue on time.\n\t\t\tets:insert(ar_tx_emitter_recently_emitted, {TXID}),\n\t\t\terlang:send_after(?CLEANUP_RECENTLY_EMITTED_TIMEOUT, ?MODULE,\n\t\t\t\t{remove_from_recently_emitted, TXID}),\n\t\t\tar_events:send(tx, {emitting_scheduled, Utility, TXID}),\n\t\t\temit(Set2, Peers, MaxPeers, N - 1, State2)\n\tend.\n"
  },
  {
    "path": "apps/arweave/src/ar_tx_emitter_sup.erl",
    "content": "-module(ar_tx_emitter_sup).\n\n-behaviour(supervisor).\n\n-export([start_link/0]).\n\n-export([init/1]).\n\n-include_lib(\"arweave/include/ar_sup.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nstart_link() ->\n\tsupervisor:start_link({local, ?MODULE}, ?MODULE, []).\n\n%% ===================================================================\n%% Supervisor callbacks.\n%% ===================================================================\n\ninit([]) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tMaxEmitters = Config#config.max_emitters,\n\tWorkers = lists:map(fun tx_workers/1, lists:seq(1, MaxEmitters)),\n\tWorkerNames = [ Name || #{ id := Name } <- Workers],\n\tEmitter = tx_emitter([ar_tx_emitter, WorkerNames]),\n\tChildrenSpec = [Emitter|Workers],\n\t{ok, {supervisor_spec(), ChildrenSpec}}.\n\nsupervisor_spec() ->\n\t#{ strategy => one_for_one\n\t , intensity => 5\n\t , period => 10\n\t }.\n\n% helper to create ar_tx_emitter process, in charge\n% of sending chunk to propagate to ar_tx_emitter_worker.\ntx_emitter(Args) ->\n\t#{ id => ar_tx_emitter\n\t , type => worker\n\t , start => {ar_tx_emitter, start_link, Args}\n\t , shutdown => ?SHUTDOWN_TIMEOUT\n\t , modules => [ar_tx_emitter]\n\t , restart => permanent\n\t }.\n\n% helper function to create ar_tx_workers processes.\ntx_workers(Num) ->\n\tName = \"ar_tx_emitter_worker_\" ++ integer_to_list(Num),\n\tAtom = list_to_atom(Name),\n\t#{ id => Atom\n\t , start => {ar_tx_emitter_worker, start_link, [Atom]}\n\t , restart => permanent\n\t , type => worker\n\t , timeout => ?SHUTDOWN_TIMEOUT\n\t , modules => [ar_tx_emitter_worker]\n\t }.\n"
  },
  {
    "path": "apps/arweave/src/ar_tx_emitter_worker.erl",
    "content": "-module(ar_tx_emitter_worker).\n\n-behaviour(gen_server).\n\n-export([start_link/1]).\n\n-export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-record(state, {}).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nstart_link(Name) ->\n\tgen_server:start_link({local, Name}, ?MODULE, [], []).\n\n%%%===================================================================\n%%% gen_server callbacks.\n%%%===================================================================\n\ninit(_) ->\n\t{ok, #state{}}.\n\nhandle_call(Request, _From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {request, Request}]),\n\t{reply, ok, State}.\n\nhandle_cast({emit, TXID, Peer, ConnectTimeout, Timeout, ReplyTo}, State) ->\n\tcase ar_mempool:get_tx(TXID) of\n\t\tnot_found ->\n\t\t\tok;\n\t\tTX ->\n\t\t\tStartedAt = erlang:timestamp(),\n\t\t\tOpts = #{ connect_timeout => ConnectTimeout div 1000\n\t\t\t\t, timeout => Timeout div 1000\n\t\t\t\t},\n\t\t\temit(#{ tx_id => TXID\n\t\t\t      , peer => Peer\n\t\t\t      , tx => TX\n\t\t\t      , started_at => StartedAt\n\t\t\t      , opts => Opts\n\t\t\t      })\n\tend,\n\tReplyTo ! {emitted, TXID, Peer},\n\t{noreply, State};\n\nhandle_cast(Msg, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {message, Msg}]),\n\t{noreply, State}.\n\nhandle_info({event, tx, _}, State) ->\n\t{noreply, State};\n\nhandle_info(Info, State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {module, ?MODULE}, {info, Info}]),\n\t{noreply, State}.\n\nterminate(Reason, _State) ->\n\t?LOG_INFO([{event, terminate}, {module, ?MODULE}, {reason, Reason}]),\n\tok.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\ntx_propagated_size(#tx{ format = 2 }) ->\n\t?TX_SIZE_BASE;\ntx_propagated_size(#tx{ format = 1, data = Data }) ->\n\t?TX_SIZE_BASE + byte_size(Data).\n\ntx_to_propagated_tx(#tx{ format = 1 } = TX, _Peer, _TrustedPeers) ->\n\tTX;\ntx_to_propagated_tx(#tx{ format = 2 } = TX, Peer, TrustedPeers) ->\n\tcase lists:member(Peer, TrustedPeers) of\n\t\ttrue ->\n\t\t\tTX;\n\t\tfalse ->\n\t\t\tTX#tx{ data = <<>> }\n\tend.\n\nrecord_propagation_status(not_sent) ->\n\tok;\nrecord_propagation_status(Data) ->\n\tStatusClass = ar_metrics:get_status_class(Data),\n\tprometheus_counter:inc(propagated_transactions_total, [StatusClass]),\n\tStatusClass.\n\nrecord_propagation_rate(PropagatedSize, PropagationTimeUs) ->\n\tBitsPerSecond = PropagatedSize * 1000000 / PropagationTimeUs * 8,\n\tprometheus_histogram:observe(tx_propagation_bits_per_second, BitsPerSecond),\n\tBitsPerSecond.\n\n% retrieve information about peer(s)\nemit(#{ tx := TX, peer := Peer } = Data) ->\n\tTrustedPeers = ar_peers:get_trusted_peers(),\n\tPropagatedTX = tx_to_propagated_tx(TX, Peer, TrustedPeers),\n\tRelease = ar_peers:get_peer_release(Peer),\n\tNewData = Data#{ propagated_tx => PropagatedTX\n\t\t       , trusted_peers => TrustedPeers\n\t\t       , release => Release\n\t\t       },\n\temit2(NewData).\n\n% depending on the version of the peer, different kind of payload\n% is being used, one in binary, another one in JSON.\nemit2(#{ release := Release, peer := Peer, propagated_tx := PropagatedTX,\n\t tx_id := TXID, opts := 
Opts } = Data)\n\twhen Release >= 42 ->\n\t\tBin = ar_serialize:tx_to_binary(PropagatedTX),\n\t\tReply = ar_http_iface_client:send_tx_binary(Peer, TXID, Bin, Opts),\n\t\tNewData = Data#{ reply => Reply },\n\t\temit3(NewData);\nemit2(#{ peer := Peer, propagated_tx := PropagatedTX, tx_id := TXID,\n\t opts := Opts } = Data) ->\n\tSerialize = ar_serialize:tx_to_json_struct(PropagatedTX),\n\tJSON = ar_serialize:jsonify(Serialize),\n\tReply = ar_http_iface_client:send_tx_json(Peer, TXID, JSON, Opts),\n\tNewData = Data#{ reply => Reply },\n\temit3(NewData).\n\n% deal with the reply and update propagation statistics.\nemit3(#{ started_at := StartedAt, reply := Reply, tx := TX } = Data) ->\n\tTimestamp = erlang:timestamp(),\n\tPropagationTimeUs = timer:now_diff(Timestamp, StartedAt),\n\tPropagationStatus = record_propagation_status(Reply),\n\tPropagatedSize = tx_propagated_size(TX),\n\tPropagationRate = record_propagation_rate(PropagatedSize, PropagationTimeUs),\n\tData#{ propagation_time_us => PropagationTimeUs\n\t       , propagation_status => PropagationStatus\n\t       , propagated_size => PropagatedSize\n\t       , propagation_rate => PropagationRate\n\t       }.\n"
  },
  {
    "path": "apps/arweave/src/ar_tx_poller.erl",
    "content": "-module(ar_tx_poller).\n-behaviour(gen_server).\n\n-export([\n\tstart_link/0\n]).\n\n-export([\n\tinit/1,\n\thandle_call/3,\n\thandle_cast/2,\n\thandle_info/2,\n\tterminate/2\n]).\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-include(\"ar.hrl\").\n\n-record(state, {\n\tlast_seen_tx_timestamp = 0,\n\tpending_txids = [],\n\tlatest_txid_source_peer = none\n}).\n\n%% Number of peers to query for a transaction.\n-define(QUERY_PEERS_COUNT, 5).\n\n%% Check interval in milliseconds - how long to wait before polling\n%% since the last transaction push. If the node is not public (so it\n%% never receives transactions by push), we wait this long starting from\n%% the moment we join the network only once and then keep polling\n%% for transactions more frequently.\n-ifdef(AR_TEST).\n-define(CHECK_INTERVAL_MS, 5_000).\n-else.\n-define(CHECK_INTERVAL_MS, 30_000).\n-endif.\n\n%% Poll interval in milliseconds - how long we wait before downloading a new\n%% transaction or polling the mempools for new transactions.\n-ifdef(AR_TEST).\n-define(POLL_INTERVAL_MS, 500).\n-else.\n-define(POLL_INTERVAL_MS, 200).\n-endif.\n\n%%% Public API.\n\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\n%%% Gen server callbacks.\n\ninit([]) ->\n    [ok, ok] = ar_events:subscribe([tx, node_state]),\n\t{ok, #state{}}.\n\nhandle_call(Request, From, State) ->\n\t?LOG_WARNING(\"Unexpected call: ~p from ~p\", [Request, From]),\n\t{reply, ignored, State}.\n\nhandle_cast(check_for_received_txs, State) ->\n\t%% Check if there have been any transactions received in the last\n\t%% ?CHECK_INTERVAL_MS milliseconds.\n\tTimestampDiff = erlang:system_time(microsecond) - State#state.last_seen_tx_timestamp,\n\tState3 =\n\t\tcase TimestampDiff > 0 andalso TimestampDiff > (?CHECK_INTERVAL_MS * 1000) of\n\t\t\ttrue ->\n\t\t\t\tcheck_for_received_txs(State);\n\t\t\tfalse ->\n\t\t\t\tar_util:cast_after(?CHECK_INTERVAL_MS, self(), check_for_received_txs),\n\t\t\t\tState\n\t\tend,\n\t{noreply, State3};\n\nhandle_cast(Request, State) ->\n\t?LOG_WARNING(\"Unexpected cast: ~p\", [Request]),\n\t{noreply, State}.\n\nhandle_info({event, node_state, {initialized, _}}, State) ->\n\t%% Send a check_for_received_txs cast periodically to check for externally submitted\n\t%% transactions. 
If there have not been any for longer than 30 seconds, request the\n\t%% mempool from a peer and download the transactions.\n    {ok, Config} = arweave_config:get_env(),\n    case lists:member(tx_poller, Config#config.disable) of\n        true ->\n            ok;\n        false ->\n            gen_server:cast(self(), check_for_received_txs)\n    end,\n    {noreply, State};\n\nhandle_info({event, node_state, _}, State) ->\n\t{noreply, State};\n\nhandle_info({event, tx, {new, _TX, {pushed, _Peer}}}, State) ->\n\t{noreply, State#state{\n\t\tpending_txids = [],\n\t\tlast_seen_tx_timestamp = erlang:system_time(microsecond)\n\t}};\n\nhandle_info({event, tx, _}, State) ->\n\t{noreply, State};\n\nhandle_info(Info, State) ->\n\t?LOG_WARNING(\"event: unhandled_info, info: ~p\", [Info]),\n\t{noreply, State}.\n\nterminate(Reason, _State) ->\n\t?LOG_INFO([{module, ?MODULE},{pid, self()},{callback, terminate},{reason, Reason}]),\n\tok.\n\n%%% Internal functions.\n\ncheck_for_received_txs(#state{ pending_txids = [TXID | PendingTXIDs] } = State) ->\n\tcase ar_mempool:is_known_tx(TXID) of\n\t\ttrue ->\n\t\t\tok;\n\t\tfalse ->\n\t\t\tdownload_and_verify_tx(TXID, State#state.latest_txid_source_peer)\n\tend,\n\tgen_server:cast(self(), check_for_received_txs),\n\tState#state{ pending_txids = PendingTXIDs };\n\ncheck_for_received_txs(#state{ pending_txids = [] } = State) ->\n\tPeers = lists:sublist(ar_peers:get_peers(current), ?QUERY_PEERS_COUNT),\n\tReply = ar_http_iface_client:get_mempool(Peers),\n\tar_util:cast_after(?POLL_INTERVAL_MS, self(), check_for_received_txs),\n\tcase Reply of\n\t\t{{ok, TXIDs}, TXIDPeer} ->\n\t\t\tState#state{ pending_txids = TXIDs,\n\t\t\t\tlatest_txid_source_peer = TXIDPeer };\n\t\t{error, Error} ->\n\t\t\t?LOG_DEBUG([{event, failed_to_get_mempool_txids_from_peers},\n\t\t\t\t\t{peers, [ar_util:format_peer(Peer) || Peer <- Peers]},\n\t\t\t\t\t{error, io_lib:format(\"~p\", [Error])}\n\t\t\t]),\n\t\t\tState\n\tend.\n\ndownload_and_verify_tx(TXID, TXIDPeer) ->\n\tRef = make_ref(),\n\tar_ignore_registry:add_ref(TXID, Ref, 10_000),\n\tPeers = lists:sublist(ar_peers:get_peers(current), ?QUERY_PEERS_COUNT),\n\tcase ar_http_iface_client:get_tx_from_remote_peers(Peers, TXID, false) of\n\t\tnot_found ->\n\t\t\tar_ignore_registry:remove_ref(TXID, Ref),\n\t\t\t?LOG_DEBUG([{event, failed_to_get_tx_from_peers},\n\t\t\t\t\t{peers, [ar_util:format_peer(Peer) || Peer <- Peers]},\n\t\t\t\t\t{txid, ar_util:encode(TXID)},\n\t\t\t\t\t{txid_peer, ar_util:format_peer(TXIDPeer)}\n\t\t\t]);\n\t\t{TX, Peer, Time, Size} ->\n\t\t\tcase ar_tx_validator:validate(TX) of\n\t\t\t\t{invalid, Code} ->\n\t\t\t\t\tlog_invalid_tx(Code, TXID, TX, Peer, TXIDPeer);\n\t\t\t\t{valid, TX2} ->\n\t\t\t\t\tar_peers:rate_fetched_data(Peer, tx, Time, Size),\n\t\t\t\t\tar_data_sync:add_data_root_to_disk_pool(TX2#tx.data_root,\n\t\t\t\t\t\t\tTX2#tx.data_size, TX#tx.id),\n\t\t\t\t\tar_events:send(tx, {new, TX2, {pulled, Peer}}),\n\t\t\t\t\tTXID = TX2#tx.id,\n\t\t\t\t\tar_ignore_registry:remove_ref(TXID, Ref),\n\t\t\t\t\tar_ignore_registry:add_temporary(TXID, 10 * 60 * 1000)\n\t\t\tend\n\tend.\n\nlog_invalid_tx(tx_bad_anchor, TXID, TX, Peer, TXIDPeer) ->\n\tLastTX = ar_util:encode(TX#tx.last_tx),\n\tCurrentHeight = ar_node:get_height(),\n\tCurrentBlockHash = ar_util:encode(ar_node:get_current_block_hash()),\n\t?LOG_INFO(format_invalid_tx_message(tx_bad_anchor,\n\t\tTXID, Peer, TXIDPeer, [\n\t\t\t{last_tx, LastTX},\n\t\t\t{current_height, CurrentHeight},\n\t\t\t{current_block_hash, 
CurrentBlockHash}\n\t\t]));\nlog_invalid_tx(tx_verification_failed, TXID, TX, Peer, TXIDPeer) ->\n\tLastTX = ar_util:encode(TX#tx.last_tx),\n\tCurrentHeight = ar_node:get_height(),\n\tCurrentBlockHash = ar_util:encode(ar_node:get_current_block_hash()),\n\tErrorCodes = ar_tx_db:get_error_codes(TXID),\n\t?LOG_INFO(format_invalid_tx_message(tx_verification_failed,\n\t\tTXID, Peer, TXIDPeer, [\n\t\t\t{last_tx, LastTX},\n\t\t\t{current_height, CurrentHeight},\n\t\t\t{current_block_hash, CurrentBlockHash},\n\t\t\t{error_codes, ErrorCodes}\n\t\t]));\nlog_invalid_tx(Code, TXID, _TX, Peer, TXIDPeer) ->\n\t?LOG_INFO(format_invalid_tx_message(Code, TXID, Peer, TXIDPeer, [])).\n\nformat_invalid_tx_message(Code, TXID, Peer, TXIDPeer, ExtraLogs) ->\n\t[\n\t\t{event, fetched_already_included_or_invalid_tx},\n\t\t{txid, ar_util:encode(TXID)},\n\t\t{code, Code},\n\t\t{peer, ar_util:format_peer(Peer)},\n\t\t{txid_peer, ar_util:format_peer(TXIDPeer)}\n\t\t| ExtraLogs\n\t].\n"
  },
  {
    "path": "apps/arweave/src/ar_tx_replay_pool.erl",
    "content": "%%% @doc This module contains functions for transaction verification. It relies on\n%%% some verification helpers from the ar_tx and ar_node_utils modules.\n%%% The module should be used to verify transactions on-edge, validate\n%%% new blocks' transactions, pick transactions to include into a block, and\n%%% remove no longer valid transactions from the mempool after accepting a new block.\n-module(ar_tx_replay_pool).\n\n-export([verify_tx/1, verify_tx/2, verify_block_txs/1, pick_txs_to_mine/1]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n\n%% @doc Verify that a transaction against the given mempool, wallet list, recent\n%% weave txs, current block height, current difficulty, and current time.\n%% The mempool is used to look for the same transaction there and to make sure\n%% the transaction does not reference another transaction from the mempool.\n%% The mempool is NOT used to verify shared resources like funds,\n%% wallet list references, and data size. Therefore, the function is suitable\n%% for on-edge verification where we want to accept potentially conflicting\n%% transactions to avoid consensus issues later.\nverify_tx(Args) ->\n\tverify_tx(Args, verify_signature).\n\nverify_tx({TX, Rate, PricePerGiBMinute, KryderPlusRateMultiplier, Denomination, Height,\n\t\t   RedenominationHeight, BlockAnchors, RecentTXMap, Mempool, Wallets},\n\t\t   VerifySignature) when\n\t\tis_record(TX, tx),\n\t\tis_list(BlockAnchors),\n\t\tis_map(RecentTXMap),\n\t\tis_map(Wallets),\n\t\tis_map(Mempool) ->\n\tverify_tx2({TX, Rate, PricePerGiBMinute, KryderPlusRateMultiplier, Denomination, Height,\n\t\t\tRedenominationHeight, os:system_time(seconds), Wallets, BlockAnchors,\n\t\t\tRecentTXMap, Mempool, VerifySignature}).\n\n%% @doc Verify the transactions are valid for the block taken into account\n%% the given current difficulty and height, the previous blocks' wallet list,\n%% and recent weave transactions.\nverify_block_txs(\n\t\t\t{TXs, Rate, PricePerGiBMinute, KryderPlusRateMultiplier, Denomination, Height,\n\t\t\t RedenominationHeight, Timestamp, Wallets, BlockAnchors, RecentTXMap}) ->\n\tverify_block_txs(TXs,\n\t\t\t{Rate, PricePerGiBMinute, KryderPlusRateMultiplier, Denomination,\n\t\t\t Height, RedenominationHeight, Timestamp, Wallets, BlockAnchors, RecentTXMap,\n\t\t\t maps:new(), 0, 0}).\n\nverify_block_txs([], _Args) ->\n\tvalid;\nverify_block_txs([TX | TXs],\n\t\t\t{Rate, PricePerGiBMinute, KryderPlusRateMultiplier, Denomination, Height,\n\t\t\t RedenominationHeight, Timestamp, Wallets, BlockAnchors, RecentTXMap,\n\t\t\t Mempool, C, Size}) when\n\t\tis_record(TX, tx),\n\t\tis_map(Wallets),\n\t\tis_list(BlockAnchors),\n\t\tis_map(RecentTXMap),\n\t\tis_map(Mempool) ->\n\tcase verify_tx2({TX, Rate, PricePerGiBMinute, KryderPlusRateMultiplier, Denomination,\n\t\t\tHeight, RedenominationHeight, Timestamp, Wallets, BlockAnchors, RecentTXMap,\n\t\t\tMempool, verify_signature}) of\n\t\tvalid ->\n\t\t\tNewMempool = maps:put(TX#tx.id, no_tx, Mempool),\n\t\t\tNewWallets = ar_node_utils:apply_tx(Wallets, Denomination, TX),\n\t\t\tNewSize =\n\t\t\t\tcase TX of\n\t\t\t\t\t#tx{ format = 1 } ->\n\t\t\t\t\t\tSize + TX#tx.data_size;\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tSize\n\t\t\t\tend,\n\t\t\tNewCount = C + 1,\n\t\t\tAboveFork1_8 = Height >= ar_fork:height_1_8(),\n\t\t\tCountExceedsLimit = NewCount > ?BLOCK_TX_COUNT_LIMIT,\n\t\t\tSizeExceedsLimit = NewSize > ?BLOCK_TX_DATA_SIZE_LIMIT,\n\t\t\tcase {AboveFork1_8, CountExceedsLimit, SizeExceedsLimit} of\n\t\t\t\t{true, true, _} 
->\n\t\t\t\t\tinvalid;\n\t\t\t\t{true, _, true} ->\n\t\t\t\t\tinvalid;\n\t\t\t\t_ ->\n\t\t\t\t\tverify_block_txs(TXs, \n\t\t\t\t\t\t\t{Rate, PricePerGiBMinute, KryderPlusRateMultiplier,\n\t\t\t\t\t\t\t Denomination, Height, RedenominationHeight, Timestamp, NewWallets,\n\t\t\t\t\t\t\t BlockAnchors, RecentTXMap, NewMempool, NewCount, NewSize})\n\t\t\tend;\n\t\t{invalid, _} ->\n\t\t\tinvalid\n\tend.\n\n%% @doc Pick a list of transactions from the mempool to mine on.\n%% Transactions have to be valid when applied on top of each other taken\n%% into account the current height and diff, recent weave transactions, and\n%% the wallet list. The total data size of chosen transactions does not\n%% exceed the block size limit. Before a valid subset of transactions is chosen,\n%% transactions are sorted from highest to lowest utility and then from oldest\n%% block anchors to newest.\npick_txs_to_mine(Args) ->\n\t{BlockAnchors, RecentTXMap, Height, RedenominationHeight, Rate, PricePerGiBMinute,\n\t\t\tKryderPlusRateMultiplier, Denomination, Timestamp, Wallets, TXs} = Args,\n\tpick_txs_under_size_limit(sort_txs_by_utility_and_anchor(TXs, BlockAnchors),\n\t\t\t{Rate, PricePerGiBMinute, KryderPlusRateMultiplier, Denomination, Height,\n\t\t\tRedenominationHeight, Timestamp, Wallets, BlockAnchors, RecentTXMap,\n\t\t\tmaps:new(), 0, 0}).\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nverify_tx2(Args) ->\n\t{TX, Rate, PricePerGiBMinute, KryderPlusRateMultiplier, Denomination, Height,\n\t\t\tRedenominationHeight, Timestamp, FloatingWallets, BlockAnchors, RecentTXMap,\n\t\t\tMempool, VerifySignature} = Args,\n\t\n\tcase ar_tx:verify(TX, {Rate, PricePerGiBMinute, KryderPlusRateMultiplier, Denomination,\n\t\t\tRedenominationHeight, Height, FloatingWallets, Timestamp}, VerifySignature) of\n\t\ttrue ->\n\t\t\tverify_anchor(TX, Height, FloatingWallets, BlockAnchors, RecentTXMap, Mempool);\n\t\tfalse ->\n\t\t\t{invalid, tx_verification_failed}\n\tend.\n\nverify_anchor(TX, Height, FloatingWallets, BlockAnchors, RecentTXMap, Mempool) when\n\t\tis_record(TX, tx),\n\t\tis_map(FloatingWallets),\n\t\tis_list(BlockAnchors),\n\t\tis_map(RecentTXMap),\n\t\tis_map(Mempool) ->\n\tShouldContinue = case ar_fork:height_1_8() of\n\t\tH when Height >= H ->\n\t\t\t%% Only verify after fork 1.8 otherwise it causes a soft fork\n\t\t\t%% since current nodes can accept blocks with a chain of last_tx\n\t\t\t%% references. 
The check would still fail on edge pre 1.8 since\n\t\t\t%% TX is validated against a previous blocks' wallet list then.\n\t\t\tcase maps:is_key(TX#tx.last_tx, Mempool) of\n\t\t\t\ttrue ->\n\t\t\t\t\t{invalid, last_tx_in_mempool};\n\t\t\t\tfalse ->\n\t\t\t\t\tcontinue\n\t\t\tend;\n\t\t_ ->\n\t\t\tcontinue\n\tend,\n\tcase ShouldContinue of\n\t\tcontinue ->\n\t\t\tverify_last_tx(TX, FloatingWallets, BlockAnchors, RecentTXMap, Mempool);\n\t\t{invalid, Reason} ->\n\t\t\t{invalid, Reason}\n\tend.\n\nverify_last_tx(TX, FloatingWallets, BlockAnchors, RecentTXMap, Mempool) ->\n\tcase ar_tx:check_last_tx(FloatingWallets, TX) of\n\t\ttrue ->\n\t\t\tvalid;\n\t\tfalse ->\n\t\t\tverify_block_anchor(TX, BlockAnchors, RecentTXMap, Mempool)\n\tend.\n\nverify_block_anchor(TX, BlockAnchors, RecentTXMap, Mempool) ->\n\tcase lists:member(TX#tx.last_tx, BlockAnchors) of\n\t\tfalse ->\n\t\t\t{invalid, tx_bad_anchor};\n\t\ttrue ->\n\t\t\tverify_tx_in_weave(TX, RecentTXMap, Mempool)\n\tend.\n\nverify_tx_in_weave(TX, RecentTXMap, Mempool) ->\n\tcase maps:is_key(TX#tx.id, RecentTXMap) of\n\t\ttrue ->\n\t\t\t{invalid, tx_already_in_weave};\n\t\tfalse ->\n\t\t\tverify_tx_in_mempool(TX, Mempool)\n\tend.\n\nverify_tx_in_mempool(TX, Mempool) ->\n\tcase maps:is_key(TX#tx.id, Mempool) of\n\t\ttrue ->\n\t\t\t{invalid, tx_already_in_mempool};\n\t\tfalse ->\n\t\t\tvalid\n\tend.\n\npick_txs_under_size_limit([], _Args) ->\n\t[];\npick_txs_under_size_limit(\n\t\t\t[TX | TXs],\n\t\t\t{Rate, PricePerGiBMinute, KryderPlusRateMultiplier, Denomination, Height,\n\t\t\t RedenominationHeight, Timestamp, Wallets, BlockAnchors, RecentTXMap,\n\t\t\t Mempool, Size, Count}) when\n\t\tis_record(TX, tx),\n\t\tis_map(Wallets),\n\t\tis_list(BlockAnchors),\n\t\tis_map(RecentTXMap),\n\t\tis_map(Mempool) ->\n\tcase verify_tx2({TX, Rate, PricePerGiBMinute, KryderPlusRateMultiplier, Denomination,\n\t\t\tHeight, RedenominationHeight, Timestamp, Wallets, BlockAnchors, RecentTXMap,\n\t\t\tMempool, verify_signature}) of\n\t\tvalid ->\n\t\t\tNewMempool = maps:put(TX#tx.id, no_tx, Mempool),\n\t\t\tNewWallets = ar_node_utils:apply_tx(Wallets, Denomination, TX),\n\t\t\tNewSize =\n\t\t\t\tcase TX of\n\t\t\t\t\t#tx{ format = 1 } ->\n\t\t\t\t\t\tSize + TX#tx.data_size;\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tSize\n\t\t\t\tend,\n\t\t\tNewCount = Count + 1,\n\t\t\tCountExceedsLimit = NewCount > ?BLOCK_TX_COUNT_LIMIT,\n\t\t\tSizeExceedsLimit = NewSize > ?BLOCK_TX_DATA_SIZE_LIMIT,\n\t\t\tcase CountExceedsLimit orelse SizeExceedsLimit of\n\t\t\t\ttrue ->\n\t\t\t\t\tpick_txs_under_size_limit(TXs, {Rate, PricePerGiBMinute,\n\t\t\t\t\t\t\tKryderPlusRateMultiplier, Denomination, Height,\n\t\t\t\t\t\t\tRedenominationHeight, Timestamp, Wallets, BlockAnchors,\n\t\t\t\t\t\t\tRecentTXMap, Mempool, Size, Count});\n\t\t\t\tfalse ->\n\t\t\t\t\t[TX | pick_txs_under_size_limit(TXs, {Rate, PricePerGiBMinute,\n\t\t\t\t\t\t\tKryderPlusRateMultiplier, Denomination, Height,\n\t\t\t\t\t\t\tRedenominationHeight, Timestamp, NewWallets, BlockAnchors,\n\t\t\t\t\t\t\tRecentTXMap, NewMempool, NewSize, NewCount})]\n\t\t\tend;\n\t\t{invalid, _} ->\n\t\t\tpick_txs_under_size_limit(TXs, {Rate, PricePerGiBMinute, KryderPlusRateMultiplier,\n\t\t\t\t\tDenomination, Height, RedenominationHeight, Timestamp, Wallets,\n\t\t\t\t\tBlockAnchors, RecentTXMap, Mempool, Size, Count})\n\tend.\n\nsort_txs_by_utility_and_anchor(TXs, BHL) ->\n\tlists:sort(fun(TX1, TX2) -> compare_txs(TX1, TX2, BHL) end, TXs).\n\ncompare_txs(TX1, TX2, BHL) ->\n\tcase {lists:member(TX1#tx.last_tx, BHL), lists:member(TX2#tx.last_tx, BHL)} 
of\n\t\t{false, _} ->\n\t\t\ttrue;\n\t\t{true, false} ->\n\t\t\tfalse;\n\t\t{true, true} ->\n\t\t\tcompare_txs_by_utility(TX1, TX2, BHL)\n\tend.\n\ncompare_txs_by_utility(TX1, TX2, BHL) ->\n\tU1 = ar_tx:utility(TX1),\n\tU2 = ar_tx:utility(TX2),\n\tcase U1 == U2 of\n\t\ttrue ->\n\t\t\tcompare_anchors(TX1#tx.last_tx, TX2#tx.last_tx, BHL);\n\t\tfalse ->\n\t\t\tU1 > U2\n\tend.\n\ncompare_anchors(_Anchor1, _Anchor2, []) ->\n\ttrue;\ncompare_anchors(Anchor, Anchor, _) ->\n\ttrue;\ncompare_anchors(Anchor1, _Anchor2, [Anchor1 | _]) ->\n\tfalse;\ncompare_anchors(_Anchor1, Anchor2, [Anchor2 | _]) ->\n\ttrue;\ncompare_anchors(Anchor1, Anchor2, [_ | Anchors]) ->\n\tcompare_anchors(Anchor1, Anchor2, Anchors).\n"
  },
  {
    "path": "apps/arweave/src/ar_tx_validator.erl",
    "content": "-module(ar_tx_validator).\n\n-export([validate/1]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nvalidate(TX) ->\n\tProps =\n\t\tets:select(\n\t\t\tnode_state,\n\t\t\t[{{'$1', '$2'},\n\t\t\t\t[{'or',\n\t\t\t\t\t{'==', '$1', height},\n\t\t\t\t\t{'==', '$1', wallet_list},\n\t\t\t\t\t{'==', '$1', recent_txs_map},\n\t\t\t\t\t{'==', '$1', block_anchors},\n\t\t\t\t\t{'==', '$1', usd_to_ar_rate},\n\t\t\t\t\t{'==', '$1', price_per_gib_minute},\n\t\t\t\t\t{'==', '$1', kryder_plus_rate_multiplier},\n\t\t\t\t\t{'==', '$1', denomination},\n\t\t\t\t\t{'==', '$1', redenomination_height}}], ['$_']}]\n\t\t),\n\tHeight = proplists:get_value(height, Props),\n\tWL = proplists:get_value(wallet_list, Props),\n\tRecentTXMap = proplists:get_value(recent_txs_map, Props),\n\tBlockAnchors = proplists:get_value(block_anchors, Props),\n\tUSDToARRate = proplists:get_value(usd_to_ar_rate, Props),\n\tPricePerGiBMinute = proplists:get_value(price_per_gib_minute, Props),\n\tKryderPlusRateMultiplier = proplists:get_value(kryder_plus_rate_multiplier, Props),\n\tDenomination = proplists:get_value(denomination, Props),\n\tRedenominationHeight = proplists:get_value(redenomination_height, Props),\n\tWallets = ar_wallets:get(WL, ar_tx:get_addresses([TX])),\n\tMempool = ar_mempool:get_map(),\n\tResult = ar_tx_replay_pool:verify_tx({TX, USDToARRate, PricePerGiBMinute,\n\t\t\tKryderPlusRateMultiplier, Denomination, Height, RedenominationHeight, BlockAnchors,\n\t\t\tRecentTXMap, Mempool, Wallets}),\n\tResult2 =\n\t\tcase {Result, TX#tx.format == 2 andalso byte_size(TX#tx.data) /= 0} of\n\t\t\t{valid, true} ->\n\t\t\t\tChunks = ar_tx:chunk_binary(?DATA_CHUNK_SIZE, TX#tx.data),\n\t\t\t\tSizeTaggedChunks = ar_tx:chunks_to_size_tagged_chunks(Chunks),\n\t\t\t\tSizeTaggedChunkIDs = ar_tx:sized_chunks_to_sized_chunk_ids(SizeTaggedChunks),\n\t\t\t\t{Root, _} = ar_merkle:generate_tree(SizeTaggedChunkIDs),\n\t\t\t\tSize = byte_size(TX#tx.data),\n\t\t\t\tcase {Root, Size} == {TX#tx.data_root, TX#tx.data_size} of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tvalid;\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\t{invalid, invalid_data_root_size}\n\t\t\t\tend;\n\t\t\t_ ->\n\t\t\t\tResult\n\t\tend,\n\tcase Result2 of\n\t\tvalid ->\n\t\t\tcase TX#tx.format of\n\t\t\t\t2 ->\n\t\t\t\t\t{valid, TX};\n\t\t\t\t1 ->\n\t\t\t\t\tcase TX#tx.data_size > 0 of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t%% Compute the data root so that we can inform ar_data_sync about\n\t\t\t\t\t\t\t%% it so that it can accept the chunks. One may notice here that\n\t\t\t\t\t\t\t%% in case of v1 transactions, chunks arrive together with the tx\n\t\t\t\t\t\t\t%% header. 
However, we send the data root to ar_data_sync in\n\t\t\t\t\t\t\t%% advance, otherwise ar_header_sync may fail to store the chunks\n\t\t\t\t\t\t\t%% when persisting the transaction as registering the data roots of\n\t\t\t\t\t\t\t%% a confirmed block is an asynchronous procedure\n\t\t\t\t\t\t\t%% (see ar_data_sync:add_tip_block called in ar_node_worker) which\n\t\t\t\t\t\t\t%% does not always complete before ar_header_sync attempts the\n\t\t\t\t\t\t\t%% insertion.\n\t\t\t\t\t\t\tV1Chunks = ar_tx:chunk_binary(?DATA_CHUNK_SIZE, TX#tx.data),\n\t\t\t\t\t\t\tSizeTaggedV1Chunks = ar_tx:chunks_to_size_tagged_chunks(V1Chunks),\n\t\t\t\t\t\t\tSizeTaggedV1ChunkIDs = ar_tx:sized_chunks_to_sized_chunk_ids(\n\t\t\t\t\t\t\t\t\tSizeTaggedV1Chunks),\n\t\t\t\t\t\t\t{DataRoot, _} = ar_merkle:generate_tree(SizeTaggedV1ChunkIDs),\n\t\t\t\t\t\t\t{valid, TX#tx{ data_root = DataRoot }};\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t{valid, TX}\n\t\t\t\t\tend\n\t\t\tend;\n\t\t_ ->\n\t\t\tResult2\n\tend.\n"
  },
  {
    "path": "apps/arweave/src/ar_unbalanced_merkle.erl",
    "content": "-module(ar_unbalanced_merkle).\n\n-export([\n\troot/2, root/3,\n\thash_list_to_merkle_root/1,\n\tblock_index_to_merkle_root/1,\n\thash_block_index_entry/1\n]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n%%% Module for building and manipulating generic and specific unbalanced merkle trees.\n\n%% @doc Take a prior merkle root and add a new piece of data to it, optionally\n%% providing a conversion function prior to hashing.\nroot(OldRoot, Data, Fun) -> root(OldRoot, Fun(Data)).\nroot(OldRoot, Data) ->\n\tcrypto:hash(?MERKLE_HASH_ALG, << OldRoot/binary, Data/binary >>).\n\n%% @doc Generate a new entire merkle tree from a hash list.\nhash_list_to_merkle_root(HL) ->\n\tlists:foldl(\n\t\tfun(BH, MR) -> root(MR, BH) end,\n\t\t<<>>,\n\t\tlists:reverse(HL)\n\t).\n\n%% @doc Generate a new entire merkle tree from a block index.\nblock_index_to_merkle_root(HL) ->\n\tlists:foldl(\n\t\tfun(BIEntry, MR) -> root(MR, BIEntry, fun hash_block_index_entry/1) end,\n\t\t<<>>,\n\t\tlists:reverse(HL)\n\t).\n\nhash_block_index_entry({BH, WeaveSize, TXRoot}) ->\n\tar_deep_hash:hash([BH, integer_to_binary(WeaveSize), TXRoot]).\n\n%%% TESTS\n\nbasic_hash_root_generation_test() ->\n\tBH0 = crypto:strong_rand_bytes(32),\n\tBH1 = crypto:strong_rand_bytes(32),\n\tBH2 = crypto:strong_rand_bytes(32),\n\tMR0 = test_hash(BH0),\n\tMR1 = test_hash(<<MR0/binary, BH1/binary>>),\n\tMR2 = test_hash(<<MR1/binary, BH2/binary>>),\n\t?assertEqual(MR2, hash_list_to_merkle_root([BH2, BH1, BH0])).\n\ntest_hash(Bin) -> crypto:hash(?MERKLE_HASH_ALG, Bin).\n\nroot_update_test() ->\n\tBH0 = crypto:strong_rand_bytes(32),\n\tBH1 = crypto:strong_rand_bytes(32),\n\tBH2 = crypto:strong_rand_bytes(32),\n\tBH3 = crypto:strong_rand_bytes(32),\n\tRoot =\n\t\troot(\n\t\t\troot(\n\t\t\t\thash_list_to_merkle_root([BH1, BH0]),\n\t\t\t\tBH2\n\t\t\t),\n\t\t\tBH3\n\t\t),\n\t?assertEqual(\n\t\thash_list_to_merkle_root([BH3, BH2, BH1, BH0]),\n\t\tRoot\n\t).\n"
  },
  {
    "path": "apps/arweave/src/ar_util.erl",
    "content": "-module(ar_util).\n\n-export([\n\tassert_file_exists_and_readable/1,\n\tbatch_pmap/3,\n\tbatch_pmap/4,\n\tbetween/3,\n\tbinary_to_integer/1,\n\tblock_index_entry_from_block/1,\n\tbool_to_int/1,\n\tbytes_to_mb_string/1,\n\tcast_after/3,\n\tceil_int/2,\n\tcount/2,\n\tdecode/1,\n\tdo_until/3,\n\tencode/1,\n\tencode_list_indices/1,\n\tfloor_int/2,\n\tformat_peer/1,\n\tgenesis_wallets/0,\n\tget_system_device/1,\n\tinteger_to_binary/1,\n\tint_to_bool/1,\n\tparse_list_indices/1,\n\tparse_peer/1,\n\tparse_peer/2,\n\tparse_port/1,\n\tpeer_to_str/1,\n\tpfilter/2,\n\tpick_random/1,\n\tpick_random/2,\n\tpmap/2, pmap/3,\n\tprint_stacktrace/0,\n\tsafe_decode/1,\n\tsafe_divide/2,\n\tsafe_encode/1,\n\tsafe_ets_lookup/2,\n\tsafe_format/1,\n\tsafe_format/3,\n\tsafe_parse_peer/1,\n\tsafe_parse_peer/2,\n\tshuffle_list/1,\n\ttake_every_nth/2,\n\tterminal_clear/0,\n\ttimestamp_to_seconds/1,invert_map/1,\n\tunique/1,\n\tpad_to_closest_multiple_equal_or_above/2\n]).\n\n-include(\"ar.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-define(DEFAULT_PMAP_TIMEOUT, 60_000).\n\nbool_to_int(true) -> 1;\nbool_to_int(_) -> 0.\n\nint_to_bool(1) -> true;\nint_to_bool(0) -> false.\n\n%% @doc Implementations of integer_to_binary and binary_to_integer that can handle infinity.\ninteger_to_binary(infinity) ->\n\t<<\"infinity\">>;\ninteger_to_binary(N) ->\n\terlang:integer_to_binary(N).\n\nbinary_to_integer(<<\"infinity\">>) ->\n\tinfinity;\nbinary_to_integer(N) ->\n\terlang:binary_to_integer(N).\n\n%% @doc: rounds IntValue up to the nearest multiple of Nearest.\n%% Rounds up even if IntValue is already a multiple of Nearest.\nceil_int(IntValue, Nearest) ->\n\tIntValue - (IntValue rem Nearest) + Nearest.\n\n%% @doc: rounds IntValue down to the nearest multiple of Nearest.\n%% Doesn't change IntValue if it's already a multiple of Nearest.\nfloor_int(IntValue, Nearest) ->\n\tIntValue - (IntValue rem Nearest).\n\n%% @doc: clamp N to be between Min and Max.\nbetween(N, Min, _) when N < Min -> Min;\nbetween(N, _, Max) when N > Max -> Max;\nbetween(N, _, _) -> N.\n\n%% @doc Pick a list of random elements from a given list.\npick_random(_, 0) -> [];\npick_random([], _) -> [];\npick_random(List, N) ->\n\tElem = pick_random(List),\n\t[Elem|pick_random(List -- [Elem], N - 1)].\n\n%% @doc Select a random element from a list.\npick_random(Xs) ->\n\tlists:nth(rand:uniform(length(Xs)), Xs).\n\n%% @doc Encode a binary to URL safe base64 binary string.\nencode(Bin) ->\n\tb64fast:encode(Bin).\n\n%% @doc Try to decode a URL safe base64 into a binary or throw an error when\n%% invalid.\ndecode(Input) ->\n\tb64fast:decode(Input).\n\nsafe_encode(Bin) when is_binary(Bin) ->\n\tencode(Bin);\nsafe_encode(Bin) ->\n\tBin.\n\n%% @doc Safely decode a URL safe base64 into a binary returning an ok or error\n%% tuple.\nsafe_decode(E) ->\n\ttry\n\t\tD = decode(E),\n\t\t{ok, D}\n\tcatch\n\t\t_:_ ->\n\t\t\t{error, invalid}\n\tend.\n\n%% @doc Safely lookup a key in an ETS table.\n%% Returns [] if the table doesn't exist - this can happen when running some of the helper\n%% utilities like data_doctor\nsafe_ets_lookup(Table, Key) ->\n\ttry\n\t\tets:lookup(Table, Key)\n\tcatch\n\t\tType:Reason ->\n\t\t\t?LOG_WARNING([{event, ets_table_not_found}, {table, Table}, {key, Key},\n\t\t\t\t{type, Type}, {reason, Reason}]),\n\t\t\t[]\n\tend.\n\n%% @doc Convert an erlang:timestamp() to seconds since the Unix Epoch.\ntimestamp_to_seconds({MegaSecs, Secs, _MicroSecs}) ->\n\tMegaSecs * 1000000 + Secs.\n\n%% @doc Convert a map from Key => Value, to Value => 
set(Keys)\n-spec invert_map(map()) -> map().\ninvert_map(Map) ->\n    maps:fold(\n\tfun(Key, Value, Acc) ->\n\t    CurrentSet = maps:get(Value, Acc, sets:new()),\n\t    UpdatedSet = sets:add_element(Key, CurrentSet),\n\t    maps:put(Value, UpdatedSet, Acc)\n\tend,\n\t#{},\n\tMap\n    ).\n\n\n%%--------------------------------------------------------------------\n%% @doc Parse a string representing a remote host into our internal\n%%      format.\n%% @end\n%%--------------------------------------------------------------------\n-spec parse_peer(Hostname) -> Return when\n\tHostname :: string() | binary(),\n\tReturn :: [IpWithPort] | no_return(),\n\tIpWithPort :: {A, A, A, A, Port},\n\tA :: pos_integer(),\n\tPort :: pos_integer().\n\nparse_peer(Hostname) ->\n\tparse_peer(Hostname, #{}).\n\n%%--------------------------------------------------------------------\n%% @doc Parse a string representing a remote host into our internal\n%%      format.\n%% @end\n%%--------------------------------------------------------------------\n-spec parse_peer(Hostname, Opts) -> Return when\n\tHostname :: string() | binary(),\n\tOpts :: #{\n\t\t  module_resolve => atom()\n\t},\n\tReturn :: [IpWithPort] | no_return(),\n\tIpWithPort :: {A, A, A, A, Port},\n\tA :: pos_integer(),\n\tPort :: pos_integer().\n\nparse_peer(\"\", _Opts) ->\n\tthrow(empty_peer_string);\nparse_peer(BitStr, Opts) when is_binary(BitStr) ->\n\tparse_peer(binary_to_list(BitStr), Opts);\nparse_peer([{A,B,C,D,P}], _Opts) ->\n\t[{A, B, C, D, parse_port(P)}];\nparse_peer(Str, Opts) when is_list(Str) ->\n\t% useful to mock the resolver, instead of using\n\t% inet, any other custom module can be used.\n\tResolveModule = maps:get(module_resolve, Opts, inet),\n\t[Addr, PortStr] = parse_port_split(Str),\n\tcase ResolveModule:getaddrs(Addr, inet) of\n\t\t{ok, [{A, B, C, D}]} ->\n\t\t\t[{A, B, C, D, parse_port(PortStr)}];\n\t\t{ok, AddrsList} when is_list(AddrsList) ->\n\t\t\t[{A, B, C, D, parse_port(PortStr)} || {A, B, C, D} <- AddrsList];\n\t\t{error, Reason} ->\n\t\t\tthrow({invalid_peer_string, Str, Reason})\n\tend;\nparse_peer({{A,B,C,D},P}, _Opts) ->\n\t[{A, B, C, D, parse_port(P)}];\nparse_peer({IP, Port}, Opts) ->\n\t{A, B, C, D} = parse_peer(IP, Opts),\n\t[{A, B, C, D, parse_port(Port)}];\nparse_peer(_Peer, _) ->\n\tthrow(invalid_peer).\n\nparse_peer_test() ->\n\t?assertThrow(\n\t\tempty_peer_string,\n\t\tparse_peer(\"\")\n\t),\n\t?assertThrow(\n\t\tinvalid_peer,\n\t\tparse_peer(1)\n\t),\n\t?assertEqual(\n\t\t[{127,0,0,1,1985}],\n\t\tparse_peer({{127,0,0,1}, 1985})\n\t),\n\n\tOpts = #{ module_resolve => ar_test_inet_mock },\n\t?assertEqual(\n\t\t[{127,0,0,1,1984}],\n\t\tparse_peer(\"single.record.local\", Opts)\n\t),\n\t?assertEqual(\n\t\t[\n\t\t\t{127,0,0,2,1984},\n\t\t\t{127,0,0,3,1984},\n\t\t\t{127,0,0,4,1984},\n\t\t\t{127,0,0,5,1984}\n\t\t],\n\t\tparse_peer(\"multi.record.local\", Opts)\n\t),\n\t?assertThrow(\n\t\t{invalid_peer_string,_,_},\n\t\tparse_peer(\"error.test.local\", Opts)\n\t).\n\n\n%%--------------------------------------------------------------------\n%% @doc convert a peer as tuple or binary to string.\n%% @end\n%%--------------------------------------------------------------------\n-spec peer_to_str(Peer) -> Return when\n\tPeer :: binary() | string() | tuple(),\n\tReturn :: string().\n\npeer_to_str(Bin) when is_binary(Bin) ->\n\tbinary_to_list(Bin);\npeer_to_str(Str) when is_list(Str) ->\n\tStr;\npeer_to_str({A, B, C, D, Port}) ->\n\tinteger_to_list(A) ++ \"_\" ++\n\tinteger_to_list(B) ++ \"_\" ++\n\tinteger_to_list(C) ++ 
\"_\" ++\n\tinteger_to_list(D) ++ \"_\" ++\n\tinteger_to_list(Port).\n\n%% @doc Parses a port string into an integer.\nparse_port(Int) when is_integer(Int) -> Int;\nparse_port(\"\") -> ?DEFAULT_HTTP_IFACE_PORT;\nparse_port(PortStr) ->\n\t{ok, [Port], \"\"} = io_lib:fread(\"~d\", PortStr),\n\tPort.\n\nparse_port_split(Str) ->\n    case string:tokens(Str, \":\") of\n\t[Addr] -> [Addr, ?DEFAULT_HTTP_IFACE_PORT];\n\t[Addr, Port] -> [Addr, Port];\n\t_ -> throw({invalid_peer_string, Str})\n    end.\n\n%%--------------------------------------------------------------------\n%% @doc wrapper for parse_peer/1\n%% @see safe_parse_peer/2\n%% @end\n%%--------------------------------------------------------------------\nsafe_parse_peer(Peer) ->\n\tsafe_parse_peer(Peer, #{}).\n\n%%--------------------------------------------------------------------\n%% @doc wrapper for parse_peer/1\n%% @end\n%%--------------------------------------------------------------------\n-spec safe_parse_peer(Hostname, Opts) -> Return when\n\tHostname :: string() | binary(),\n\tOpts :: map(),\n\tReturn :: {ok, ReturnOk} | {error, invalid},\n\tReturnOk ::[IpWithPort] | no_return(),\n\tIpWithPort :: {A, A, A, A, Port},\n\tA :: pos_integer(),\n\tPort :: pos_integer().\n\nsafe_parse_peer(Peer, Opts) ->\n\ttry\n\t\t{ok, parse_peer(Peer, Opts)}\n\tcatch\n\t\t_:_ -> {error, invalid}\n\tend.\n\n%% @doc Take a remote host ID in various formats, return a HTTP-friendly string.\nformat_peer([{Host, Port}|_]) ->\n\tformat_peer({Host, Port});\nformat_peer([{A, B, C, D, Port}|_]) ->\n\tformat_peer({A, B, C, D, Port});\nformat_peer(Host) when is_list(Host) ->\n\tformat_peer({Host, ?DEFAULT_HTTP_IFACE_PORT});\nformat_peer({A, B, C, D}) ->\n\tformat_peer({A, B, C, D, ?DEFAULT_HTTP_IFACE_PORT});\nformat_peer({A, B, C, D, Port}) ->\n\tlists:flatten(\n\t\tio_lib:format(\"~w.~w.~w.~w:~w\", [A, B, C, D, Port])\n\t);\nformat_peer({Host, Port}) ->\n\tlists:flatten(\n\t\tio_lib:format(\"~s:~w\", [Host, Port])\n\t);\nformat_peer(Peer) ->\n\tPeer.\n\n%% @doc Count occurences of element within list.\ncount(A, List) ->\n\tlength([ B || B <- List, A == B ]).\n\n%% @doc Takes a list and returns the unique values in it (preserving the order of the first\n%% occurence of each value).\nunique(Xs) when not is_list(Xs) ->\n[Xs];\nunique(Xs) -> unique([], Xs).\nunique(Res, []) -> lists:reverse(Res);\nunique(Res, [X|Xs]) ->\n\tcase lists:member(X, Res) of\n\t\tfalse -> unique([X|Res], Xs);\n\t\ttrue -> unique(Res, Xs)\n\tend.\n\n%% @doc Pad the given value to the closest multiple equal or above the given value.\n-spec pad_to_closest_multiple_equal_or_above(Value :: non_neg_integer(), Multiple :: non_neg_integer()) -> non_neg_integer().\npad_to_closest_multiple_equal_or_above(Value, Multiple) ->\n\t(Value + Multiple - 1) div Multiple * Multiple.\n\n%% @doc Run a map in parallel, throw {pmap_timeout, ?DEFAULT_PMAP_TIMEOUT}\n%% if a worker takes longer than ?DEFAULT_PMAP_TIMEOUT milliseconds.\npmap(Mapper, List) ->\n\tpmap(Mapper, List, ?DEFAULT_PMAP_TIMEOUT).\n\n%% @doc Run a map in parallel, throw {pmap_timeout, Timeout} if a worker\n%% takes longer than Timeout milliseconds.\npmap(Mapper, List, Timeout) ->\n\tMaster = self(),\n\tListWithRefs = [{Elem, make_ref()} || Elem <- List],\n\tlists:foreach(fun({Elem, Ref}) ->\n\t\tspawn_link(fun() ->\n\t\t\tMaster ! 
{pmap_work, Ref, Mapper(Elem)}\n\t\tend)\n\tend, ListWithRefs),\n\tlists:map(\n\t\tfun({_, Ref}) ->\n\t\t\treceive\n\t\t\t\t{pmap_work, Ref, Mapped} -> Mapped\n\t\t\tafter Timeout ->\n\t\t\t\tthrow({pmap_timeout, Timeout})\n\t\t\tend\n\t\tend,\n\t\tListWithRefs\n\t).\n\n%% @doc Run a map in parallel, one batch at a time. If a worker does not\n%% finish within Timeout milliseconds, return {error, timeout, Elem} for that element\n%% instead of throwing.\nbatch_pmap(Mapper, List, BatchSize) ->\n\tbatch_pmap(Mapper, List, BatchSize, ?DEFAULT_PMAP_TIMEOUT).\n\n%% @doc Run a map in parallel, one batch at a time. If a worker takes\n%% longer than Timeout milliseconds, return {error, timeout, Elem}.\nbatch_pmap(_Mapper, [], _BatchSize, _Timeout) ->\n\t[];\nbatch_pmap(Mapper, List, BatchSize, Timeout)\n\t\twhen BatchSize > 0 ->\n\tSelf = self(),\n\t{Batch, Rest} =\n\t\tcase length(List) >= BatchSize of\n\t\t\ttrue ->\n\t\t\t\tlists:split(BatchSize, List);\n\t\t\tfalse ->\n\t\t\t\t{List, []}\n\t\tend,\n\tListWithRefs = [{Elem, make_ref()} || Elem <- Batch],\n\tlists:foreach(fun({Elem, Ref}) ->\n\t\tspawn_link(fun() ->\n\t\t\tSelf ! {pmap_work, Ref, Mapper(Elem)}\n\t\tend)\n\tend, ListWithRefs),\n\tlists:map(\n\t\tfun({Elem, Ref}) ->\n\t\t\treceive\n\t\t\t\t{pmap_work, Ref, Mapped} -> Mapped\n\t\t\tafter Timeout ->\n\t\t\t\t{error, batch_pmap_timeout, Elem}\n\t\t\tend\n\t\tend,\n\t\tListWithRefs\n\t) ++ batch_pmap(Mapper, Rest, BatchSize, Timeout).\n\n%% @doc Filter the list in parallel.\npfilter(Fun, List) ->\n\tMaster = self(),\n\tListWithRefs = [{Elem, make_ref()} || Elem <- List],\n\tlists:foreach(fun({Elem, Ref}) ->\n\t\tspawn_link(fun() ->\n\t\t\tMaster ! {pmap_work, Ref, Fun(Elem)}\n\t\tend)\n\tend, ListWithRefs),\n\tlists:filtermap(\n\t\tfun({Elem, Ref}) ->\n\t\t\treceive\n\t\t\t\t{pmap_work, Ref, false} -> false;\n\t\t\t\t{pmap_work, Ref, true} -> {true, Elem};\n\t\t\t\t{pmap_work, Ref, {true, Result}} -> {true, Result}\n\t\t\tend\n\t\tend,\n\t\tListWithRefs\n\t).\n\n%% @doc Generate a list of GENESIS wallets, from the CSV file.\ngenesis_wallets() ->\n\t{ok, Bin} = file:read_file(\"genesis_data/genesis_wallets.csv\"),\n\tlists:map(\n\t\tfun(Line) ->\n\t\t\t[Addr, RawQty] = string:tokens(Line, \",\"),\n\t\t\t{\n\t\t\t\tar_util:decode(Addr),\n\t\t\t\terlang:trunc(math:ceil(list_to_integer(RawQty))) * ?WINSTON_PER_AR,\n\t\t\t\t<<>>\n\t\t\t}\n\t\tend,\n\t\tstring:tokens(binary_to_list(Bin), [10])\n\t).\n\n%% @doc Perform a function until it returns {ok, Value} | ok | true | {error, Error}.\n%% That term will be returned, others will be ignored. 
Interval and timeout have to\n%% be passed in milliseconds.\ndo_until(_DoFun, _Interval, Timeout) when Timeout =< 0 ->\n\t{error, timeout};\ndo_until(DoFun, Interval, Timeout) ->\n\tStart = erlang:system_time(millisecond),\n\tcase DoFun() of\n\t\t{ok, Value} ->\n\t\t\t{ok, Value};\n\t\tok ->\n\t\t\tok;\n\t\ttrue ->\n\t\t\ttrue;\n\t\t{error, Error} ->\n\t\t\t{error, Error};\n\t\t_ ->\n\t\t\ttimer:sleep(Interval),\n\t\t\tNow = erlang:system_time(millisecond),\n\t\t\tdo_until(DoFun, Interval, Timeout - (Now - Start))\n\tend.\n\nblock_index_entry_from_block(B) ->\n\t{B#block.indep_hash, B#block.weave_size, B#block.tx_root}.\n\n%% @doc Convert the given number of bytes into the \"%s MiB\" string.\nbytes_to_mb_string(Bytes) ->\n\tinteger_to_list(Bytes div 1024 div 1024) ++ \" MiB\".\n\n%% @doc Encode the given list of sorted numbers into a binary where the nth bit\n%% is 1 the corresponding number is present in the given list; 0 otherwise.\nencode_list_indices(Indices) ->\n\tencode_list_indices(Indices, 0).\n\nencode_list_indices([Index | Indices], N) ->\n\t<< 0:(Index - N), 1:1, (encode_list_indices(Indices, Index + 1))/bitstring >>;\nencode_list_indices([], N) when N rem 8 == 0 ->\n\t<<>>;\nencode_list_indices([], N) ->\n\t<< 0:(8 - N rem 8) >>.\n\n%% @doc Return a list of position numbers corresponding to 1 bits of the given binary.\nparse_list_indices(Input) ->\n\tparse_list_indices(Input, 0).\n\nparse_list_indices(<< 0:1, Rest/bitstring >>, N) ->\n\tparse_list_indices(Rest, N + 1);\nparse_list_indices(<< 1:1, Rest/bitstring >>, N) ->\n\tcase parse_list_indices(Rest, N + 1) of\n\t\terror ->\n\t\t\terror;\n\t\tIndices ->\n\t\t\t[N | Indices]\n\tend;\nparse_list_indices(<<>>, _N) ->\n\t[];\nparse_list_indices(_BadInput, _N) ->\n\terror.\n\nshuffle_list(List) ->\n\tlists:sort(fun(_,_) -> rand:uniform() < 0.5 end, List).\n\n%% @doc Format a value and truncate it if it's too long - this can help avoid the node\n%% locking up when accidentally trying to log a large/complex datatype (e.g. a map of chunks).\n\n-spec safe_format(term(), non_neg_integer(), non_neg_integer()) -> string().\nsafe_format(Value) ->\n\tsafe_format(Value, 5, 2000).\n\nsafe_format(Value, Depth, Limit) ->\n\tValueStr = io_lib:format(\"~P\", [Value, Depth]),  % Depth limited to 5\n\tcase length(ValueStr) > Limit of\n\t\ttrue ->\n\t\t\tstring:slice(ValueStr, 0, Limit) ++ \"... 
(truncated)\";\n\t\tfalse ->\n\t\t\tValueStr\n\tend.\n\n%%%\n%%% Tests.\n%%%\n\n%% @doc Test that unique functions correctly.\nbasic_unique_test() ->\n\t[a, b, c] = unique([a, a, b, b, b, c, c]),\n\t[a, b, c] = unique([a, b, c, c, b, a]).\n\n%% @doc Ensure that hosts are formatted as lists correctly.\nbasic_peer_format_test() ->\n\t\"127.0.0.1:9001\" = format_peer({127,0,0,1,9001}).\n\n%% @doc Ensure that pick_random's are actually in the starting list.\npick_random_test() ->\n\tList = [a, b, c, d, e],\n\ttrue = lists:member(pick_random(List), List).\n\n%% @doc Test that binaries of different lengths can be encoded and decoded\n%% correctly.\nround_trip_encode_test() ->\n\tlists:map(\n\t\tfun(Bytes) ->\n\t\t\tBin = crypto:strong_rand_bytes(Bytes),\n\t\t\tBin = decode(encode(Bin))\n\t\tend,\n\t\tlists:seq(1, 64)\n\t).\n\n%% Test the paralell mapping functionality.\npmap_test() ->\n\tMapper = fun(X) ->\n\t\ttimer:sleep(100 * X),\n\t\tX * 2\n\tend,\n\t?assertEqual([6, 2, 4], pmap(Mapper, [3, 1, 2])).\n\ncast_after(0, Module, Message) ->\n\tgen_server:cast(Module, Message);\ncast_after(Delay, Module, Message) ->\n\t%% Not using timer:apply_after here because send_after is more efficient:\n\t%% http://erlang.org/doc/efficiency_guide/commoncaveats.html#timer-module.\n\terlang:send_after(Delay, Module, {'$gen_cast', Message}).\n\ntake_every_nth(N, L) ->\n\ttake_every_nth(N, L, 0).\n\ntake_every_nth(_N, [], _I) ->\n\t[];\ntake_every_nth(N, [El | L], I) when I rem N == 0 ->\n\t[El | take_every_nth(N, L, I + 1)];\ntake_every_nth(N, [_El | L], I) ->\n\ttake_every_nth(N, L, I + 1).\n\nsafe_divide(A, B) ->\n\tcase catch A / B of\n\t\t{'EXIT', _} ->\n\t\t\tA div B;\n\t\tResult ->\n\t\t\tResult\n\tend.\n\nencode_list_indices_test() ->\n\tlists:foldl(\n\t\tfun(Input, N) ->\n\t\t\t?assertEqual(Input, lists:sort(Input)),\n\t\t\tEncoded = encode_list_indices(Input),\n\t\t\t?assert(byte_size(Encoded) =< 125),\n\t\t\tIndices = parse_list_indices(Encoded),\n\t\t\t?assertEqual(Input, Indices, io_lib:format(\"Case ~B\", [N])),\n\t\t\tN + 1\n\t\tend,\n\t\t0,\n\t\t[[], [0], [1], [999], [0, 1], lists:seq(0, 999), lists:seq(0, 999, 2),\n\t\t\tlists:seq(1, 999, 3)]\n\t).\n\n%% @doc os aware way of clearing a terminal\nterminal_clear() ->\n\tio:format(\n\t\tcase os:type() == \"darwin\" of\n\t\t\ttrue -> \"\\e[H\\e[J\";\n\t\t\tfalse ->  os:cmd(clear)\n\t\tend\n\t).\n\n-spec get_system_device(string()) -> string().\nget_system_device(Path) ->\n\tCommand = \"df -P \" ++ Path ++ \" | awk 'NR==2 {print $1}'\",\n\tDevice = os:cmd(Command),\n\tstring:trim(Device).\n\nprint_stacktrace() ->\n    try\n\tthrow(dummy) %% In OTP21+ try/catch is the recommended way to get the stacktrace\n    catch\n\t_: _Exception:Stacktrace ->\n\t    %% Remove the first element (print_stacktrace call)\n\t    TrimmedStacktrace = lists:nthtail(1, Stacktrace),\n\t\t\tStacktraceString = lists:foldl(\n\t\t\t\tfun(StackTraceEntry, Acc) ->\n\t\t\tAcc ++ io_lib:format(\"  ~p~n\", [StackTraceEntry])\n\t\tend, \"Stack trace:~n\", TrimmedStacktrace),\n\t\t\t?LOG_INFO(StacktraceString)\n    end.\n\n% Function to assert that a file exists and is readable\nassert_file_exists_and_readable(FilePath) ->\n\tcase file:read_file(FilePath) of\n\t\t{ok, _} ->\n\t\t\tok;\n\t\t{error, _} ->\n\t\t\tio:format(\"~nThe filepath ~p doesn't exist or isn't readable.~n~n\", [FilePath]),\n\t\t\tinit:stop(1)\n\tend.\n\n"
  },
  {
    "path": "apps/arweave/src/ar_vdf.erl",
    "content": "-module(ar_vdf).\n\n-export([compute/3, compute_legacy/3, compute2/3, verify/8, verify2/8,\n\t\tdebug_sha_verify_no_reset/6, debug_sha_verify/8, debug_sha2/3,\n\t\tstep_number_to_salt_number/1, checkpoint_buffer_to_checkpoints/1]).\n\n-include(\"ar_vdf.hrl\").\n-include(\"ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\nstep_number_to_salt_number(0) ->\n\t0;\nstep_number_to_salt_number(StepNumber) ->\n\t(StepNumber - 1) * ?VDF_CHECKPOINT_COUNT_IN_STEP + 1.\n\n%% default IterationCount = ?VDF_DIFFICULTY\ncompute(StartStepNumber, PrevOutput, IterationCount) ->\n\tSalt = step_number_to_salt_number(StartStepNumber - 1),\n\tSaltBinary = << Salt:256 >>,\n\t{ok, Config} = arweave_config:get_env(),\n\tcase Config#config.vdf of\n\t\topenssl ->\n\t\t\tar_vdf_nif:vdf_sha2_nif(SaltBinary, PrevOutput, ?VDF_CHECKPOINT_COUNT_IN_STEP - 1, 0,\n\t\t\t\t\tIterationCount);\n\t\tfused ->\n\t\t\tar_vdf_nif:vdf_sha2_fused_nif(SaltBinary, PrevOutput, ?VDF_CHECKPOINT_COUNT_IN_STEP - 1, 0,\n\t\t\t\t\tIterationCount);\n\t\thiopt_m4 ->\n\t\t\tar_vdf_nif:vdf_sha2_hiopt_nif(SaltBinary, PrevOutput, ?VDF_CHECKPOINT_COUNT_IN_STEP - 1, 0,\n\t\t\t\t\tIterationCount);\n\t\t_ ->\n\t\t\tar_vdf_nif:vdf_sha2_nif(SaltBinary, PrevOutput, ?VDF_CHECKPOINT_COUNT_IN_STEP - 1, 0,\n\t\t\t\t\tIterationCount)\n\tend.\n\ncompute_legacy(StartStepNumber, PrevOutput, IterationCount) ->\n\tSalt = step_number_to_salt_number(StartStepNumber - 1),\n\tSaltBinary = << Salt:256 >>,\n\t{ok, Output, CheckpointBuffer} = ar_vdf_nif:vdf_sha2_nif(\n\t\t\tSaltBinary,\n\t\t\tPrevOutput,\n\t\t\t?VDF_CHECKPOINT_COUNT_IN_STEP - 1,\n\t\t\t0,\n\t\t\tIterationCount),\n\tCheckpoints = [Output | checkpoint_buffer_to_checkpoints(CheckpointBuffer)],\n\t{ok, Output, Checkpoints}.\n\n-ifdef(AR_TEST).\n%% Slow down VDF calculation on tests since it will complete too fast otherwise.\ncompute2(StartStepNumber, PrevOutput, IterationCount) ->\n\t{ok, Output, CheckpointBuffer} = compute(StartStepNumber, PrevOutput, IterationCount),\n\tCheckpoints = [Output | checkpoint_buffer_to_checkpoints(CheckpointBuffer)],\n\ttimer:sleep(50),\n\t{ok, Output, Checkpoints}.\n-else.\ncompute2(StartStepNumber, PrevOutput, IterationCount) ->\n\t{ok, Output, CheckpointBuffer} = compute(StartStepNumber, PrevOutput, IterationCount),\n\tCheckpoints = [Output | checkpoint_buffer_to_checkpoints(CheckpointBuffer)],\n\t{ok, Output, Checkpoints}.\n-endif.\n\n%% no reset in CheckpointGroups, then ResetStepNumber < StartSalt\n%%   any number out of bounds of\n%%   [StartSalt, StartSalt+group_list_to_sum_step(CheckpointGroups)]\nverify(StartSalt, PrevOutput, NumCheckpointsBetweenHashes, Hashes,\n\t\tResetSalt, ResetSeed, ThreadCount, IterationCount) ->\n\tStartSaltBinary = << StartSalt:256 >>,\n\tResetSaltBinary = << ResetSalt:256 >>,\n\tNumHashes = length(Hashes),\n\tHashBuffer = iolist_to_binary(Hashes),\n\tRestStepsSize = ?VDF_BYTE_SIZE * (NumHashes - 1),\n\tcase HashBuffer of\n\t\t<< RestSteps:RestStepsSize/binary, LastStep:?VDF_BYTE_SIZE/binary >> ->\n\t\t\tcase ar_vdf_nif:vdf_parallel_sha_verify_with_reset_nif(StartSaltBinary,\n\t\t\t\t\tPrevOutput, NumHashes - 1, NumCheckpointsBetweenHashes - 1,\n\t\t\t\t\tIterationCount, RestSteps, LastStep, ResetSaltBinary, ResetSeed,\n\t\t\t\t\tThreadCount) of\n\t\t\t\t{ok, Steps} ->\n\t\t\t\t\t{true, Steps};\n\t\t\t\t_ ->\n\t\t\t\t\tfalse\n\t\t\tend;\n\t\t_ ->\n\t\t\tfalse\n\tend.\n\nverify2(StartStepNumber, PrevOutput, NumCheckpointsBetweenHashes, Hashes,\n\t\tResetStepNumber, ResetSeed, ThreadCount, IterationCount) 
->\n\tStartSalt = step_number_to_salt_number(StartStepNumber),\n\tResetSalt = step_number_to_salt_number(ResetStepNumber - 1),\n\tcase verify(StartSalt, PrevOutput, NumCheckpointsBetweenHashes, Hashes,\n\t\t\tResetSalt, ResetSeed, ThreadCount, IterationCount) of\n\t\tfalse ->\n\t\t\tfalse;\n\t\t{true, CheckpointBuffer} ->\n\t\t\t{true, ar_util:take_every_nth(?VDF_CHECKPOINT_COUNT_IN_STEP,\n\t\t\t\t\tcheckpoint_buffer_to_checkpoints(CheckpointBuffer))}\n\tend.\n\ncheckpoint_buffer_to_checkpoints(Buffer) ->\n\tcheckpoint_buffer_to_checkpoints(Buffer, []).\n\ncheckpoint_buffer_to_checkpoints(<<>>, Checkpoints) ->\n\tCheckpoints;\ncheckpoint_buffer_to_checkpoints(<< Checkpoint:32/binary, Rest/binary >>, Checkpoints) ->\n\tcheckpoint_buffer_to_checkpoints(Rest, [Checkpoint | Checkpoints]).\n\n%%%===================================================================\n%%% Debug implementations.\n%%% Erlang implementations of of NIFs. Usee in tests.\n%%%===================================================================\n\n\nhash(0, _Salt, Input) ->\n\tInput;\nhash(N, Salt, Input) ->\n\thash(N - 1, Salt, crypto:hash(sha256, << Salt/binary, Input/binary >>)).\n\n%% @doc An Erlang implementation of ar_vdf:compute2/3. Used in tests.\ndebug_sha2(StepNumber, Output, IterationCount) ->\n\tSalt = step_number_to_salt_number(StepNumber - 1),\n\t{Output2, Checkpoints} =\n\t\tlists:foldl(\n\t\t\tfun(I, {Acc, L}) ->\n\t\t\t\tSaltBinary = << (Salt + I):256 >>,\n\t\t\t\tH = hash(IterationCount, SaltBinary, Acc),\n\t\t\t\t{H, [H | L]}\n\t\t\tend,\n\t\t\t{Output, []},\n\t\t\tlists:seq(0, ?VDF_CHECKPOINT_COUNT_IN_STEP - 1)\n\t\t),\n\ttimer:sleep(500),\n\t{ok, Output2, Checkpoints}.\n\n%% @doc An Erlang implementation of ar_vdf:verify/7. Used in tests.\ndebug_sha_verify_no_reset(StepNumber, Output, NumCheckpointsBetweenHashes, Hashes, _ThreadCount, IterationCount) ->\n\tSalt = step_number_to_salt_number(StepNumber),\n\tdebug_verify_no_reset(Salt, Output, NumCheckpointsBetweenHashes, Hashes, [], IterationCount).\n\ndebug_verify_no_reset(Salt, Output, Size, Hashes, Steps, IterationCount) ->\n\ttrue = Size == 1 orelse Size rem ?VDF_CHECKPOINT_COUNT_IN_STEP == 0,\n\t{NextOutput, Steps2} =\n\t\tlists:foldl(\n\t\t\tfun(I, {Acc, S}) ->\n\t\t\t\tSaltBinary = << (Salt + I):256 >>,\n\t\t\t\tO2 = hash(IterationCount, SaltBinary, Acc),\n\t\t\t\tS2 = case (Salt + I) rem ?VDF_CHECKPOINT_COUNT_IN_STEP of 0 -> [O2 | S]; _ -> S end,\n\t\t\t\t{O2, S2}\n\t\t\tend,\n\t\t\t{Output, []},\n\t\t\tlists:seq(0, Size - 1)\n\t\t),\n\tSalt2 = Salt + Size,\n\tcase Hashes of\n\t\t[ NextOutput ] ->\n\t\t\t{true, Steps2 ++ Steps};\n\t\t[ NextOutput | Rest ] ->\n\t\t\tdebug_verify_no_reset(Salt2, NextOutput, Size, Rest, Steps2 ++ Steps, IterationCount);\n\t\t_ ->\n\t\t\tfalse\n\tend.\n\n%% @doc An Erlang implementation of ar_vdf:verify/7. 
Used in tests.\ndebug_sha_verify(StepNumber, Output, NumCheckpointsBetweenHashes, Hashes, ResetStepNumber, ResetSeed, _ThreadCount, IterationCount) ->\n\tStartSalt = step_number_to_salt_number(StepNumber),\n\tResetSalt = step_number_to_salt_number(ResetStepNumber - 1),\n\tdebug_verify(StartSalt, Output, NumCheckpointsBetweenHashes, Hashes, ResetSalt, ResetSeed,\n\t\t\t[], IterationCount).\n\ndebug_verify(StartSalt, Output, Size, Hashes, ResetSalt,\n\t\tResetSeed, Steps, IterationCount) ->\n\ttrue = Size rem ?VDF_CHECKPOINT_COUNT_IN_STEP == 0,\n\t{NextOutput, Steps2} =\n\t\tlists:foldl(\n\t\t\tfun(I, {Acc, S}) ->\n\t\t\t\tSaltBinary = << (StartSalt + I):256 >>,\n\t\t\t\tcase I rem ?VDF_CHECKPOINT_COUNT_IN_STEP /= 0 of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tH = hash(IterationCount, SaltBinary, Acc),\n\t\t\t\t\t\tcase (StartSalt + I) rem ?VDF_CHECKPOINT_COUNT_IN_STEP of\n\t\t\t\t\t\t\t0 ->\n\t\t\t\t\t\t\t\t{H, [H | S]};\n\t\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t\t{H, S}\n\t\t\t\t\t\tend;\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\tAcc2 =\n\t\t\t\t\t\t\tcase StartSalt + I == ResetSalt of\n\t\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\t\tcrypto:hash(sha256, << Acc/binary, ResetSeed/binary >>);\n\t\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t\tAcc\n\t\t\t\t\t\t\tend,\n\t\t\t\t\t\tH = hash(IterationCount, SaltBinary, Acc2),\n\t\t\t\t\t\tcase (StartSalt + I) rem ?VDF_CHECKPOINT_COUNT_IN_STEP of\n\t\t\t\t\t\t\t0 ->\n\t\t\t\t\t\t\t\t{H, [H | S]};\n\t\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t\t{H, S}\n\t\t\t\t\t\tend\n\t\t\t\tend\n\t\t\tend,\n\t\t\t{Output, []},\n\t\t\tlists:seq(0, Size - 1)\n\t\t),\n\tcase Hashes of\n\t\t[ NextOutput ] ->\n\t\t\t{true, Steps2 ++ Steps};\n\t\t[ NextOutput | Rest ] ->\n\t\t\tdebug_verify(StartSalt + Size, NextOutput,\n\t\t\t\t\tSize, Rest, ResetSalt, ResetSeed,\n\t\t\t\t\tSteps2 ++ Steps, IterationCount);\n\t\t_ ->\n\t\t\tfalse\n\tend.\n"
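%% Editor's note: an illustrative sketch, not part of ar_vdf.erl or the repository.
%% It only exercises the two pure helpers above. The salt arithmetic is kept symbolic
%% against ?VDF_CHECKPOINT_COUNT_IN_STEP rather than hard-coding its value, and the
%% module name example_ar_vdf_usage plus the include path are assumptions.
-module(example_ar_vdf_usage).
-export([run/0]).
-include("ar_vdf.hrl").

run() ->
	%% Step 0 maps to salt 0; step N (N > 0) starts right after the salts of step N - 1.
	0 = ar_vdf:step_number_to_salt_number(0),
	1 = ar_vdf:step_number_to_salt_number(1),
	Salt2 = ar_vdf:step_number_to_salt_number(2),
	Salt2 = ?VDF_CHECKPOINT_COUNT_IN_STEP + 1,
	%% The checkpoint buffer is a flat binary of 32-byte hashes; the helper splits it
	%% and returns the checkpoints in reverse buffer order (latest first).
	A = binary:copy(<<1>>, 32),
	B = binary:copy(<<2>>, 32),
	[B, A] = ar_vdf:checkpoint_buffer_to_checkpoints(<<A/binary, B/binary>>),
	ok.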
  },
  {
    "path": "apps/arweave/src/ar_vdf_nif.erl",
    "content": "-module(ar_vdf_nif).\n\n-on_load(init_nif/0).\n\n-export([vdf_sha2_nif/5, vdf_sha2_fused_nif/5, vdf_sha2_hiopt_nif/5, vdf_parallel_sha_verify_with_reset_nif/10]).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\nvdf_sha2_nif(_Salt, _PrevState, _CheckpointCount, _skipCheckpointCount, _Iterations) ->\n\terlang:nif_error(nif_not_loaded).\nvdf_sha2_fused_nif(_Salt, _PrevState, _CheckpointCount, _skipCheckpointCount, _Iterations) ->\n\terlang:nif_error(nif_not_loaded).\nvdf_sha2_hiopt_nif(_Salt, _PrevState, _CheckpointCount, _skipCheckpointCount, _Iterations) ->\n\terlang:nif_error(nif_not_loaded).\nvdf_parallel_sha_verify_with_reset_nif(_Salt, _PrevState, _CheckpointCount,\n\t\t_skipCheckpointCount, _Iterations, _InCheckpoint, _InRes, _ResetSalt, _ResetSeed,\n\t\t_MaxThreadCount) ->\n\terlang:nif_error(nif_not_loaded).\n\ninit_nif() ->\n\tPrivDir = code:priv_dir(arweave),\n\tok = erlang:load_nif(filename:join([PrivDir, \"vdf_arweave\"]), 0).\n"
  },
  {
    "path": "apps/arweave/src/ar_verify_chunks.erl",
    "content": "-module(ar_verify_chunks).\n\n-behaviour(gen_server).\n\n-export([start_link/2, name/1]).\n-export([init/1, handle_cast/2, handle_call/3, handle_info/2, terminate/2]).\n\n-include(\"ar.hrl\").\n-include(\"ar_poa.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include(\"ar_consensus.hrl\").\n-include(\"ar_chunk_storage.hrl\").\n-include(\"ar_verify_chunks.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-record(state, {\n\tmode = log :: purge | log,\n\tstore_id :: string(),\n\tpacking :: term(),\n\tstart_offset :: non_neg_integer(),\n\tend_offset :: non_neg_integer(),\n\tcursor :: non_neg_integer(),\n\tready = false :: boolean(),\n\tchunk_samples = ?SAMPLE_CHUNK_COUNT :: non_neg_integer(),\n\tverify_report = #verify_report{} :: #verify_report{}\n}).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Start the server.\nstart_link(Name, StorageModule) ->\n\tgen_server:start_link({local, Name}, ?MODULE, StorageModule, []).\n\n-spec name(binary()) -> atom().\nname(StoreID) ->\n\tlist_to_atom(\"ar_verify_chunks_\" ++ ar_storage_module:label(StoreID)).\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\n\ninit(StoreID) ->\n\t{ok, Config} = arweave_config:get_env(),\n\t?LOG_INFO([{event, verify_chunk_storage_started},\n\t\t{store_id, StoreID}, {mode, Config#config.verify},\n\t\t{chunk_samples, Config#config.verify_samples}]),\n\t{StartOffset, EndOffset} = ar_storage_module:get_range(StoreID),\n\tgen_server:cast(self(), sample),\n\t{ok, #state{\n\t\tmode = Config#config.verify,\n\t\tstore_id = StoreID,\n\t\tpacking = ar_storage_module:get_packing(StoreID),\n\t\tstart_offset = StartOffset,\n\t\tend_offset = EndOffset,\n\t\tcursor = StartOffset,\n\t\tready = is_ready(EndOffset),\n\t\tchunk_samples = Config#config.verify_samples,\n\t\tverify_report = #verify_report{\n\t\t\tstart_time = erlang:system_time(millisecond)\n\t\t}\n\t}}.\n\nhandle_cast(sample, #state{ready = false, end_offset = EndOffset} = State) ->\n\tar_util:cast_after(1000, self(), sample),\n\t{noreply, State#state{ready = is_ready(EndOffset)}};\nhandle_cast(sample,\n\t\t#state{cursor = Cursor, end_offset = EndOffset} = State) when Cursor >= EndOffset ->\n\tar:console(\"Done!~n\"),\n\t{noreply, State};\nhandle_cast(sample, State) ->\n\t%% Sample ?SAMPLE_CHUNK_COUNT random chunks, read them, unpack them and verify them.\n\t%% Report the collected statistics and continue with the \"verify\" procedure.\n\tio:format(\"Sampling ~p chunks from ~p to ~p~n\",\n\t\t[State#state.chunk_samples, State#state.start_offset, State#state.end_offset]),\n\tMaxSamples = case State#state.chunk_samples of\n\t\tall ->\n\t\t\t(State#state.end_offset - State#state.start_offset) div ?DATA_CHUNK_SIZE;\n\t\tCount ->\n\t\t\tCount\n\tend,\n\t\n\tsample_chunks(\n\t\tState#state.chunk_samples, sets:new(), #sample_report{samples = MaxSamples}, State),\n\tgen_server:cast(self(), verify),\n\t{noreply, State};\n\nhandle_cast(verify, #state{ready = false, end_offset = EndOffset} = State) ->\n\tar_util:cast_after(1000, self(), verify),\n\t{noreply, State#state{ready = is_ready(EndOffset)}};\nhandle_cast(verify,\n\t\t#state{cursor = Cursor, end_offset = EndOffset} = State) when Cursor >= EndOffset ->\n\tar:console(\"Done!~n\"),\n\t{noreply, State};\nhandle_cast(verify, State) 
->\n\tState2 = verify(State),\n\tState3 = report_progress(State2),\n\t{noreply, State3};\n\nhandle_cast(Cast, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {cast, Cast}]),\n\t{noreply, State}.\n\nhandle_call(Call, From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {call, Call}, {from, From}]),\n\t{reply, ok, State}.\n\nhandle_info(Info, State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {module, ?MODULE}, {info, Info}]),\n\t{noreply, State}.\n\nterminate(Reason, _State) ->\n\t?LOG_INFO([{module, ?MODULE},{pid, self()},{callback, terminate},{reason, Reason}]),\n\tok.\n\nis_ready(EndOffset) ->\n\tcase ar_block_index:get_last() of\n\t\t'$end_of_table' ->\n\t\t\tfalse;\n\t\t{WeaveSize, _Height, _H, _TXRoot}  ->\n\t\t\tWeaveSize >= EndOffset\n\tend.\n\n%% @doc verify runs through a series of checks:\n%% 1. All chunks covered by the ar_chunk_storage or ar_data_sync sync records exist.\n%% 2. All chunks in the ar_data_sync sync record are also in the ar_chunk_storage sync record.\n%% 3. All chunks have valid proofs.\n%% 4. The ar_data_sync record has the expected packing format.\n%% 5. All chunks in the ar_chunk_storage sync record are also in the ar_data_sync sync record.\n%% \n%% For any chunk that fails one of the above checks: invalidate it so that it can be resynced.\nverify(State) ->\n\t#state{store_id = StoreID} = State,\n\t{UnionInterval, Intervals} = query_intervals(State),\n\tState2 = verify_chunks(UnionInterval, Intervals, State),\n\tcase State2#state.cursor >= State2#state.end_offset of\n\t\ttrue ->\n\t\t\tar:console(\"Done verifying ~s!~n\", [StoreID]),\n\t\t\t?LOG_INFO([{event, verify_chunk_storage_verify_chunks_done}, {store_id, StoreID}]);\n\t\tfalse ->\n\t\t\tgen_server:cast(self(), verify)\n\tend,\n\tState2.\n\nverify_chunks(not_found, _Intervals, State) ->\n\tState#state{ cursor = State#state.end_offset };\nverify_chunks({End, _Start}, _Intervals, #state{cursor = Cursor} = State)\n\t\twhen Cursor >= End ->\n\tState;\nverify_chunks({IntervalEnd, IntervalStart}, Intervals, State) ->\n\t#state{ cursor = Cursor } = State,\n\tCursor2 = max(IntervalStart, Cursor),\n\tState3 = case verify_chunks_index(State#state{ cursor = Cursor2 }) of\n\t\t{error, State2} ->\n\t\t\tState2;\n\t\t{ChunkData, State2} ->\n\t\t\tverify_chunk(ChunkData, Intervals, State2)\n\tend,\n\tverify_chunks({IntervalEnd, IntervalStart}, Intervals, State3).\n\nverify_chunks_index(State) ->\n\t#state{ cursor = Cursor, store_id = StoreID } = State,\n\tChunkData = ar_data_sync:get_chunk_by_byte(Cursor, StoreID),\n\tverify_chunks_index2(ChunkData, State).\n\nverify_chunks_index2({error, Reason}, State) ->\n\t#state{ cursor = Cursor } = State,\n\tNextCursor = ar_data_sync:advance_chunks_index_cursor(Cursor),\n\tState2 = invalidate_sync_record(\n\t\tchunks_index_error, Cursor, NextCursor, [{reason, Reason}], State),\n\t{error, State2#state{ cursor = NextCursor }};\nverify_chunks_index2(\n\t{AbsoluteOffset, _, _, _, _, _, ChunkSize}, #state{cursor = Cursor} = State)\n\t\twhen AbsoluteOffset - Cursor >= ChunkSize ->\n\tNextCursor = AbsoluteOffset - ChunkSize,\n\tState2 = invalidate_sync_record(chunks_index_gap, Cursor, NextCursor, [], State),\n\t{error, State2#state{ cursor = NextCursor + 1 }};\nverify_chunks_index2(ChunkData, State) ->\t\n\t{ChunkData, State}.\n\nverify_chunk({ok, _Key, Metadata}, Intervals, State) ->\n\t{AbsoluteOffset, _ChunkDataKey, _TXRoot, _DataRoot, _TXPath,\n\t\t_TXRelativeOffset, _ChunkSize} = Metadata,\n\t{ChunkStorageInterval, _DataSyncInterval} 
= Intervals,\n\n\tPaddedOffset = ar_block:get_chunk_padded_offset(AbsoluteOffset),\n\t\n\tState2 = verify_chunk_storage(PaddedOffset, Metadata, ChunkStorageInterval, State),\n\n\tState3 = verify_proof(Metadata, State2),\n\n\tState4 = verify_packing(Metadata, State3),\n\n\tState4#state{ cursor = PaddedOffset + 1 };\nverify_chunk(_ChunkData, _Intervals, State) ->\n\tState.\n\nverify_proof(Metadata, State) ->\n\t#state{ store_id = StoreID } = State,\n\t{AbsoluteOffset, ChunkDataKey, TXRoot, _DataRoot, TXPath,\n\t\t_TXRelativeOffset, ChunkSize} = Metadata,\n\n\tcase ar_data_sync:read_data_path(ChunkDataKey, StoreID) of\n\t\t{ok, DataPath} ->\n\t\t\tChunkMetadata = #chunk_metadata{\n\t\t\t\ttx_root = TXRoot,\n\t\t\t\ttx_path = TXPath,\n\t\t\t\tdata_path = DataPath\n\t\t\t},\n\t\t\tChunkProof = ar_poa:chunk_proof(ChunkMetadata, AbsoluteOffset - 1),\n\t\t\tcase ar_poa:validate_paths(ChunkProof) of\n\t\t\t\t{false, _} ->\n\t\t\t\t\tinvalidate_chunk(validate_paths_error, AbsoluteOffset, ChunkSize, State);\n\t\t\t\t{true, _} ->\n\t\t\t\t\tState\n\t\t\tend;\n\t\tError ->\n\t\t\tinvalidate_chunk(\n\t\t\t\tread_data_path_error, AbsoluteOffset, ChunkSize, [{reason, Error}], State)\n\tend.\n\n%% @doc Verify that the ar_data_sync record is configured correctly - namely that it has\n%% entry in the expected packing format. This also indirectly detects the case where an\n%% interval exists in the ar_chunk_storage record, but not the ar_data_sync record.\nverify_packing(Metadata, State) ->\n\t#state{packing = Packing, store_id = StoreID} = State,\n\t{AbsoluteOffset, _ChunkDataKey, _TXRoot, _DataRoot, _TXPath,\n\t\t\t_TXRelativeOffset, ChunkSize} = Metadata,\n\tPaddedOffset = ar_block:get_chunk_padded_offset(AbsoluteOffset),\n\tStoredPackingCheck = ar_sync_record:is_recorded(AbsoluteOffset, ar_data_sync, StoreID),\n\tExpectedPacking =\n\t\tcase ar_chunk_storage:is_storage_supported(PaddedOffset, ChunkSize, Packing) of\n\t\t\ttrue ->\n\t\t\t\tPacking;\n\t\t\tfalse ->\n\t\t\t\tunpacked\n\t\tend,\n\tcase {StoredPackingCheck, ExpectedPacking} of\n\t\t{{true, ExpectedPacking}, _} ->\n\t\t\t%% Chunk is recorded in ar_sync_record under the expected Packing.\n\t\t\tState;\n\t\t{{true, StoredPacking}, _} ->\n\t\t\t%% This check will invalidate chunks that are not packed to the expected\n\t\t\t%% *final* packing format. A storage module that is in the process of being\n\t\t\t%% packed to replica_2_8 may have chunks that are stored in the intermediate\n\t\t\t%% unpacked_padded format. This check will invalidate those chunks as well.\n\t\t\t%% Miners should make sure to only run `verify` in the `purge` mode after they\n\t\t\t%% have completed packing.\n\t\t\tinvalidate_chunk(unexpected_packing, AbsoluteOffset, ChunkSize, \n\t\t\t\t[{stored_packing, ar_serialize:encode_packing(StoredPacking, true)}], State);\n\t\t{Reply, _} ->\n\t\t\tinvalidate_chunk(missing_packing_info, AbsoluteOffset, ChunkSize,\n\t\t\t\t[{packing_reply, io_lib:format(\"~p\", [Reply])}], State)\n\tend.\n\n%% @doc Verify that chunk exists on disk or in chunk_data_db. 
This also indirectly detects the\n%% case where an interval exists in the ar_data_sync record, but not the ar_chunk_storage\n%% record.\nverify_chunk_storage(PaddedOffset, Metadata, {End, Start}, State)\n\t\twhen PaddedOffset - ?DATA_CHUNK_SIZE >= Start andalso PaddedOffset =< End ->\n\t#state{store_id = StoreID} = State,\n\t{AbsoluteOffset, ChunkDataKey, _TXRoot, _DataRoot, _TXPath,\n\t\t_TXRelativeOffset, ChunkSize} = Metadata,\n\t{_ChunkFileStart, _Filepath, _Position, ExpectedChunkOffset} =\n\t\t\t\tar_chunk_storage:locate_chunk_on_disk(PaddedOffset, StoreID),\n\tcase ar_chunk_storage:read_offset(PaddedOffset, StoreID) of\n\t\t{ok, << ExpectedChunkOffset:?OFFSET_BIT_SIZE >>} ->\n\t\t\tState;\n\t\t{ok, << ActualChunkOffset:?OFFSET_BIT_SIZE >>} ->\n\t\t\t%% The chunk is recorded in the ar_chunk_storage sync record, but not stored.\n\t\t\tinvalidate_chunk(\n\t\t\t\tinvalid_chunk_offset, AbsoluteOffset, ChunkSize, [\n\t\t\t\t\t{expected_chunk_offset, ExpectedChunkOffset}, \n\t\t\t\t\t{actual_chunk_offset, ActualChunkOffset}\n\t\t\t\t], State);\n\t\tError ->\n\t\t\tIsChunkStoredInRocksDB =\n\t\t\t\tcase ar_data_sync:get_chunk_data(ChunkDataKey, StoreID) of\n\t\t\t\t\tnot_found ->\n\t\t\t\t\t\tfalse;\n\t\t\t\t\t{ok, Value} ->\n\t\t\t\t\t\tcase binary_to_term(Value, [safe]) of\n\t\t\t\t\t\t\t{_Chunk, _DataPath} ->\n\t\t\t\t\t\t\t\ttrue;\n\t\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t\tfalse\n\t\t\t\t\t\tend\n\t\t\t\tend,\n\t\t\tinvalidate_chunk(\n\t\t\t\tinvalid_chunk_offset, AbsoluteOffset, ChunkSize, [\n\t\t\t\t\t{expected_chunk_offset, ExpectedChunkOffset}, \n\t\t\t\t\t{error, Error},\n\t\t\t\t\t{is_chunk_stored_in_rocksdb, IsChunkStoredInRocksDB}\n\t\t\t\t], State)\n\tend;\nverify_chunk_storage(PaddedOffset, Metadata, Interval, State) ->\n\t#state{ packing = Packing, store_id = StoreID } = State,\n\t{AbsoluteOffset, _ChunkDataKey, _TXRoot, _DataRoot, _TXPath,\n\t\t_TXRelativeOffset, ChunkSize} = Metadata,\n\tcase ar_chunk_storage:is_storage_supported(PaddedOffset, ChunkSize, Packing) of\n\t\ttrue ->\n\t\t\tLogs = [\n\t\t\t\t{ar_data_sync,\n\t\t\t\t\tar_sync_record:is_recorded(AbsoluteOffset, ar_data_sync, StoreID)},\n\t\t\t\t{ar_chunk_storage,\n\t\t\t\t\tar_sync_record:is_recorded(AbsoluteOffset, ar_chunk_storage, StoreID)},\n\t\t\t\t{ar_chunk_storage_replica_2_9_1_unpacked,\n\t\t\t\t\tar_sync_record:is_recorded(AbsoluteOffset, ar_chunk_storage_replica_2_9_1_unpacked, StoreID)},\n\t\t\t\t{unpacked_padded,\n\t\t\t\t\tar_sync_record:is_recorded(AbsoluteOffset, unpacked_padded, StoreID)},\n\t\t\t\t{is_entropy_recorded, ar_entropy_storage:is_entropy_recorded(\n\t\t\t\t\tAbsoluteOffset, Packing, StoreID)},\n\t\t\t\t{is_blacklisted, ar_tx_blacklist:is_byte_blacklisted(AbsoluteOffset)},\n\t\t\t\t{interval, Interval},\n\t\t\t\t{padded_offset, PaddedOffset}\n\t\t\t],\n\t\t\tinvalidate_chunk(chunk_storage_gap, AbsoluteOffset, ChunkSize, Logs, State);\n\t\tfalse ->\n\t\t\tverify_chunk_data(Metadata, State)\n\tend.\n\nverify_chunk_data(Metadata, State) ->\n\t#state{ store_id = StoreID } = State,\n\t{AbsoluteOffset, ChunkDataKey, _TXRoot, _DataRoot, _TXPath,\n\t\t_TXRelativeOffset, ChunkSize} = Metadata,\n\tcase ar_data_sync:get_chunk_data(ChunkDataKey, StoreID) of\n\t\tnot_found ->\n\t\t\tinvalidate_chunk(chunk_data_not_found, AbsoluteOffset, ChunkSize, [], State);\n\t\t{ok, Value} ->\n\t\t\tcase binary_to_term(Value, [safe]) of\n\t\t\t\t{_Chunk, _DataPath} ->\n\t\t\t\t\tState;\n\t\t\t\t_DataPath ->\n\t\t\t\t\tinvalidate_chunk(\n\t\t\t\t\t\tchunk_data_no_chunk, AbsoluteOffset, ChunkSize, [], 
State)\n\t\t\tend;\n\t\tError ->\n\t\t\tinvalidate_chunk(\n\t\t\t\tchunk_data_error, AbsoluteOffset, ChunkSize, [{reason, Error}], State)\n\tend.\n\ninvalidate_chunk(Type, AbsoluteOffset, ChunkSize, State) ->\n\tinvalidate_chunk(Type, AbsoluteOffset, ChunkSize, [], State).\n\ninvalidate_chunk(Type, AbsoluteOffset, ChunkSize, Logs, State) ->\n\t#state{ mode = Mode, store_id = StoreID } = State,\n\tcase Mode of\n\t\tpurge ->\n\t\t\tar_data_sync:invalidate_bad_data_record(AbsoluteOffset, ChunkSize, StoreID, Type);\n\t\tlog ->\n\t\t\tok\n\tend,\n\tlog_error(Type, AbsoluteOffset, ChunkSize, Logs, State).\n\ninvalidate_sync_record(Type, Cursor, NextCursor, Logs, State) ->\n\t#state{ mode = Mode, store_id = StoreID } = State,\n\tcase Mode of\n\t\tpurge ->\n\t\t\tar_footprint_record:delete(NextCursor, StoreID),\n\t\t\tar_sync_record:delete(NextCursor, Cursor, ar_data_sync, StoreID),\n\t\t\tar_sync_record:delete(NextCursor, Cursor, ar_chunk_storage, StoreID);\n\t\tlog ->\n\t\t\tok\n\tend,\n\tRange = NextCursor - Cursor,\n\tlog_error(Type, Cursor, Range, Logs, State).\n\nlog_error(Type, AbsoluteOffset, ChunkSize, Logs, State) ->\n\t#state{ \n\t\tverify_report = Report, store_id = StoreID, cursor = Cursor, packing = Packing \n\t} = State,\n\n\tLogMessage = [{event, verify_chunk_error},\n\t\t{type, Type}, {store_id, StoreID},\n\t\t{expected_packing, ar_serialize:encode_packing(Packing, true)},\n\t\t{absolute_end_offset, AbsoluteOffset}, {cursor, Cursor}, {chunk_size, ChunkSize}]\n\t\t++ Logs,\n\t?LOG_INFO(LogMessage),\n\tNewBytes = maps:get(Type, Report#verify_report.error_bytes, 0) + ChunkSize,\n\tNewChunks = maps:get(Type, Report#verify_report.error_chunks, 0) + 1,\n\n\tReport2 = Report#verify_report{\n\t\ttotal_error_bytes = Report#verify_report.total_error_bytes + ChunkSize,\n\t\ttotal_error_chunks = Report#verify_report.total_error_chunks + 1,\n\t\terror_bytes = maps:put(Type, NewBytes, Report#verify_report.error_bytes),\n\t\terror_chunks = maps:put(Type, NewChunks, Report#verify_report.error_chunks)\n\t},\n\tState#state{ verify_report = Report2 }.\n\n%% @doc Returns 3 sets of intervals:\n%% 1. ar_chunk_storage: should cover all chunks that have been stored on disk.\n%% 2. ar_data_sync, Packing: should cover all chunks of the specified packing that have been\n%%                           synced\n%% 3. The union of the above two intervals.\n%% \n%% We will use these intervals to determine errors in the node state (e.g. 
a chunk that\n%% exists in ar_chunk_storage but not ar_data_sync - or vice versa).\nquery_intervals(State) ->\n\t#state{cursor = Cursor, store_id = StoreID} = State,\n\tChunkStorageInterval = ar_sync_record:get_next_synced_interval(\n\t\tCursor, infinity, ar_chunk_storage, StoreID),\n\tDataSyncInterval = ar_sync_record:get_next_synced_interval(\n\t\tCursor, infinity, ar_data_sync, StoreID),\n\t{ChunkStorageInterval2, DataSyncInterval2} = align_intervals(\n\t\tCursor, ChunkStorageInterval, DataSyncInterval),\n\tUnionInterval = union_intervals(ChunkStorageInterval2, DataSyncInterval2),\n\t{UnionInterval, {ChunkStorageInterval2, DataSyncInterval2}}.\n\nalign_intervals(_Cursor, not_found, not_found) ->\n\t{not_found, not_found};\nalign_intervals(Cursor, not_found, DataSyncInterval) ->\n\t{not_found, clamp_interval(Cursor, infinity, DataSyncInterval)};\nalign_intervals(Cursor, ChunkStorageInterval, not_found) ->\n\t{clamp_interval(Cursor, infinity, ChunkStorageInterval), not_found};\nalign_intervals(Cursor, ChunkStorageInterval, DataSyncInterval) ->\n\t{ChunkStorageEnd, _} = ChunkStorageInterval,\n\t{DataSyncEnd, _} = DataSyncInterval,\n\n\t{\n\t\tclamp_interval(Cursor, DataSyncEnd, ChunkStorageInterval),\n\t\tclamp_interval(Cursor, ChunkStorageEnd, DataSyncInterval)\n\t}.\n\nunion_intervals(not_found, not_found) ->\n\tnot_found;\nunion_intervals(not_found, B) ->\n\tB;\nunion_intervals(A, not_found) ->\n\tA;\nunion_intervals({End1, Start1}, {End2, Start2}) ->\n\t{max(End1, End2), min(Start1, Start2)}.\n\nclamp_interval(ClampMin, ClampMax, {End, Start}) ->\n\tcheck_interval({min(End, ClampMax), max(Start, ClampMin)}).\n\ncheck_interval({End, Start}) when Start > End ->\n\tnot_found;\ncheck_interval(Interval) ->\n\tInterval.\n\nreport_progress(State) ->\n\t#state{ \n\t\tstore_id = StoreID, verify_report = Report, cursor = Cursor,\n\t\tstart_offset = StartOffset, end_offset = EndOffset\n\t} = State,\n\n\tStatus = case Cursor >= EndOffset of\n\t\ttrue -> done;\n\t\tfalse -> running\n\tend,\n\n\tBytesProcessed = Cursor - StartOffset,\n\tProgress = BytesProcessed * 100 div (EndOffset - StartOffset),\n\tReport2 = Report#verify_report{\n\t\tbytes_processed = BytesProcessed,\n\t\tprogress = Progress,\n\t\tstatus = Status\n\t},\n\tar_verify_chunks_reporter:update(StoreID, Report2),\n\tState#state{ verify_report = Report2 }.\n\n%% Generate offset in the range [Start, End]\n%% (i.e. 
offsets greater than or equal to Start and less than or equal to End)\n%% Offsets are normalized to a bucket boundary such that if that bucket boundary has\n%% been sampled before, it won't be sampled again.\ngenerate_sample_offset(Start, End, SampledOffsets, Retry) when Retry > 0 ->\n\tRange = End - Start,\n\tOffset = Start + rand:uniform(Range),\n\tBucketStartOffset = ar_chunk_storage:get_chunk_bucket_start(Offset),\n\tSampleOffset = BucketStartOffset + 1,\n\tcase sets:is_element(SampleOffset, SampledOffsets) of\n\t\ttrue ->\n\t\t\tgenerate_sample_offset(Start, End, SampledOffsets, Retry - 1);\n\t\tfalse ->\n\t\t\tSampleOffset\n\tend.\n\nsample_chunks(0, _SampledOffsets, SampleReport, _State) ->\n\tSampleReport;\nsample_chunks(all, _SampledOffsets, SampleReport, State) ->\n\t#state{ store_id = StoreID, start_offset = Start, end_offset = End } = State,\n\tSampleOffset =  ar_chunk_storage:get_chunk_bucket_start(Start) + 1,\n\n\tlists:foldl(\n        fun(Offset, Report) ->\n            {_IsRecorded, NewReport} = sample_offset(Offset, StoreID, Report),\n            ar_verify_chunks_reporter:update(StoreID, NewReport),\n            NewReport\n        end,\n        SampleReport,\n        lists:seq(SampleOffset, End, ?DATA_CHUNK_SIZE)\n    );\nsample_chunks(Count, SampledOffsets, SampleReport, State) ->\n\t#state{ store_id = StoreID, start_offset = Start, end_offset = End } = State,\n\n\tSampleOffset = generate_sample_offset(Start+1, End, SampledOffsets, 100),\n\tSampledOffsets2 = sets:add_element(SampleOffset, SampledOffsets),\n\n\t{IsRecorded, SampleReport2} = sample_offset(SampleOffset, StoreID, SampleReport),\n\tcase IsRecorded of\n\t\ttrue ->\n\t\t\tar_verify_chunks_reporter:update(StoreID, SampleReport2),\n\t\t\tsample_chunks(Count - 1, SampledOffsets2, SampleReport2, State);\n\t\tfalse ->\n\t\t\tsample_chunks(Count, SampledOffsets2, SampleReport2, State)\n\tend.\n\nsample_offset(Offset, StoreID, SampleReport) ->\n\tIsRecorded = case ar_sync_record:is_recorded(Offset, ar_data_sync, StoreID) of\n\t\t{true, _} ->\n\t\t\ttrue;\n\t\ttrue ->\n\t\t\ttrue;\n\t\tfalse ->\n\t\t\tfalse\n\tend,\n\n\tSampleReport2 = case IsRecorded of\n\t\ttrue ->\n\t\t\tcase ar_data_sync:get_chunk(\n\t\t\t\tOffset, #{pack => true, packing => unpacked, origin => verify}) of\n\t\t\t\t{ok, _Proof} ->\n\t\t\t\t\tSampleReport#sample_report{\n\t\t\t\t\t\ttotal = SampleReport#sample_report.total + 1,\n\t\t\t\t\t\tsuccess = SampleReport#sample_report.success + 1\n\t\t\t\t\t};\n\t\t\t\t{error, Reason} ->\n\t\t\t\t\t?LOG_INFO([{event, sample_chunk_error}, {offset, Offset}, {status, Reason}]),\n\t\t\t\t\tSampleReport#sample_report{\n\t\t\t\t\t\ttotal = SampleReport#sample_report.total + 1,\n\t\t\t\t\t\tfailure = SampleReport#sample_report.failure + 1\n\t\t\t\t\t}\n\t\t\tend;\n\t\tfalse ->\n\t\t\tSampleReport\n\tend,\n\t{IsRecorded, SampleReport2}.\n\n%%%===================================================================\n%%% Tests.\n%%%===================================================================\n\nintervals_test_() ->\n    [\n        {timeout, 30, fun test_align_intervals/0},\n\t\t{timeout, 30, fun test_union_intervals/0}\n\t].\n\nverify_chunk_storage_test_() ->\n\t[\n\t\tar_test_node:test_with_mocked_functions(\n\t\t\t[{ar_chunk_storage, read_offset,\n\t\t\t\tfun(_Offset, _StoreID) -> {ok, << ?DATA_CHUNK_SIZE:24 >>} end}],\n\t\t\tfun test_verify_chunk_storage_in_interval/0),\n\t\tar_test_node:test_with_mocked_functions(\n\t\t\t[{ar_chunk_storage, read_offset,\n\t\t\t\tfun(_Offset, _StoreID) -> {ok, << 
?DATA_CHUNK_SIZE:24 >>} end},\n\t\t\t{ar_sync_record, is_recorded,\n\t\t\t\tfun(_, _, _) -> false end},\n\t\t\t{ar_entropy_storage, is_entropy_recorded,\n\t\t\t\tfun(_, _, _) -> false end},\n\t\t\t{ar_tx_blacklist, is_byte_blacklisted,\n\t\t\t\tfun(_) -> false end}],\n\t\t\tfun test_verify_chunk_storage_should_store/0),\n\t\tar_test_node:test_with_mocked_functions(\n\t\t\t[{ar_chunk_storage, read_offset,\n\t\t\t\tfun(_Offset, _StoreID) -> {ok, << ?DATA_CHUNK_SIZE:24 >>} end},\n\t\t\t{ar_data_sync, get_chunk_data,\n\t\t\t\tfun(_, _) -> {ok, term_to_binary({<<>>, <<>>})} end}],\n\t\t\tfun test_verify_chunk_storage_should_not_store/0)\n\t].\n\nverify_proof_test_() ->\n\t[\n\t\tar_test_node:test_with_mocked_functions([\n\t\t\t{ar_data_sync, read_data_path, fun(_, _) -> not_found end}],\n\t\t\tfun test_verify_proof_no_datapath/0\n\t\t),\n\t\tar_test_node:test_with_mocked_functions([\n\t\t\t{ar_data_sync, read_data_path, fun(_, _) -> {ok, <<>>} end},\n\t\t\t{ar_poa, chunk_proof, fun(_, _) -> #chunk_proof{} end},\n\t\t\t{ar_poa, validate_paths, fun(_) -> {true, <<>>} end}\n\t\t],\n\t\t\tfun test_verify_proof_valid_paths/0\n\t\t),\n\t\tar_test_node:test_with_mocked_functions([\n\t\t\t{ar_data_sync, read_data_path, fun(_, _) -> {ok, <<>>} end},\n\t\t\t{ar_poa, chunk_proof, fun(_, _) -> #chunk_proof{} end},\n\t\t\t{ar_poa, validate_paths, fun(_) -> {false, <<>>} end}\n\t\t],\n\t\t\tfun test_verify_proof_invalid_paths/0\n\t\t)\n\t].\n\nverify_chunk_test_() ->\n\t[\n\t\tar_test_node:test_with_mocked_functions([\n\t\t\t{ar_data_sync, read_data_path, fun(_, _) -> {ok, <<>>} end},\n\t\t\t{ar_poa, validate_paths, fun(_) -> {true, <<>>} end},\n\t\t\t{ar_poa, chunk_proof, fun(_, _) -> #chunk_proof{} end},\n\t\t\t{ar_chunk_storage, read_offset,\n\t\t\t\tfun(_Offset, _StoreID) -> {ok, << ?DATA_CHUNK_SIZE:24 >>} end},\n\t\t\t{ar_data_sync, get_chunk_data,\n\t\t\t\tfun(_, _) -> {ok, term_to_binary({<<>>, <<>>})} end},\n\t\t\t{ar_sync_record, is_recorded,\n\t\t\t\tfun(_, _, _) -> false end}\n\t\t],\n\t\t\tfun test_verify_chunk/0\n\t\t)\n\t].\n\ntest_align_intervals() ->\n\t?assertEqual(\n\t\t{not_found, not_found},\n\t\talign_intervals(0, not_found, not_found)),\n\t?assertEqual(\n\t\t{{10, 5}, not_found},\n\t\talign_intervals(0, {10, 5}, not_found)),\n\t?assertEqual(\n\t\t{{10, 7}, not_found},\n\t\talign_intervals(7, {10, 5}, not_found)),\n\t?assertEqual(\n\t\t{not_found, not_found},\n\t\talign_intervals(12, {10, 5}, not_found)),\n\t?assertEqual(\n\t\t{not_found, {10, 5}},\n\t\talign_intervals(0, not_found, {10, 5})),\n\t?assertEqual(\n\t\t{not_found, {10, 7}},\n\t\talign_intervals(7, not_found, {10, 5})),\n\t?assertEqual(\n\t\t{not_found, not_found},\n\t\talign_intervals(12, not_found, {10, 5})),\n\n\t?assertEqual(\n\t\t{{9, 4}, {9, 5}},\n\t\talign_intervals(0, {9, 4}, {10, 5})),\n\t?assertEqual(\n\t\t{{9, 7}, {9, 7}},\n\t\talign_intervals(7, {9, 4}, {10, 5})),\n\t?assertEqual(\n\t\t{not_found, not_found},\n\t\talign_intervals(12, {9, 4}, {10, 5})),\n\t?assertEqual(\n\t\t{{9, 5}, {9, 4}},\n\t\talign_intervals(0, {10, 5}, {9, 4})),\n\t?assertEqual(\n\t\t{{9, 7}, {9, 7}},\n\t\talign_intervals(7, {10, 5}, {9, 4})),\n\t?assertEqual(\n\t\t{not_found, not_found},\n\t\talign_intervals(12, {10, 5}, {9, 4})),\n\tok.\n\t\t\ntest_union_intervals() ->\n\t?assertEqual(\n\t\tnot_found,\n\t\tunion_intervals(not_found, not_found)),\n\t?assertEqual(\n\t\t{10, 5},\n\t\tunion_intervals(not_found, {10, 5})),\n\t?assertEqual(\n\t\t{10, 5},\n\t\tunion_intervals({10, 5}, not_found)),\n\t?assertEqual(\n\t\t{10, 
3},\n\t\tunion_intervals({10, 7}, {5, 3})),\n\tok.\n\n\ntest_verify_chunk_storage_in_interval() ->\n\t?assertEqual(\n\t\t#state{ packing = unpacked },\n\t\tverify_chunk_storage(\n\t\t\t10*?DATA_CHUNK_SIZE,\n\t\t\t{10*?DATA_CHUNK_SIZE, <<>>, <<>>, <<>>, <<>>, <<>>, ?DATA_CHUNK_SIZE},\n\t\t\t{20*?DATA_CHUNK_SIZE, 5*?DATA_CHUNK_SIZE},\n\t\t\t#state{ packing = unpacked })),\n\t?assertEqual(\n\t\t#state{ packing = unpacked },\n\t\tverify_chunk_storage(\n\t\t\t6*?DATA_CHUNK_SIZE,\n\t\t\t{6*?DATA_CHUNK_SIZE - 1, <<>>, <<>>, <<>>, <<>>, <<>>, ?DATA_CHUNK_SIZE div 2},\n\t\t\t{20*?DATA_CHUNK_SIZE, 5*?DATA_CHUNK_SIZE},\n\t\t\t#state{ packing = unpacked })),\n\t?assertEqual(\n\t\t#state{ packing = unpacked },\n\t\tverify_chunk_storage(\n\t\t\t20*?DATA_CHUNK_SIZE,\n\t\t\t{20*?DATA_CHUNK_SIZE - ?DATA_CHUNK_SIZE div 2, <<>>, <<>>, <<>>, <<>>, <<>>, ?DATA_CHUNK_SIZE div 2},\n\t\t\t{20*?DATA_CHUNK_SIZE, 5*?DATA_CHUNK_SIZE},\n\t\t\t#state{ packing = unpacked })),\n\tok.\n\ntest_verify_chunk_storage_should_store() ->\n\tAddr = crypto:strong_rand_bytes(32),\n\tExpectedState = #state{ \n\t\tpacking = unpacked,\n\t\tverify_report = #verify_report{\n\t\t\ttotal_error_bytes = ?DATA_CHUNK_SIZE,\n\t\t\ttotal_error_chunks = 1,\n\t\t\terror_bytes = #{chunk_storage_gap => ?DATA_CHUNK_SIZE},\n\t\t\terror_chunks = #{chunk_storage_gap => 1}\n\t\t} \n\t},\n\t?assertEqual(\n\t\tExpectedState,\n\t\tverify_chunk_storage(\n\t\t\t0,\n\t\t\t{0, <<>>, <<>>, <<>>, <<>>, <<>>, ?DATA_CHUNK_SIZE},\n\t\t\t{20*?DATA_CHUNK_SIZE, 5*?DATA_CHUNK_SIZE},\n\t\t\t#state{ packing = unpacked })),\n\t?assertEqual(\n\t\tExpectedState,\n\t\tverify_chunk_storage(\n\t\t\tar_block:strict_data_split_threshold() + 1,\n\t\t\t{ar_block:strict_data_split_threshold() + 1, <<>>, <<>>, <<>>, <<>>, <<>>, ?DATA_CHUNK_SIZE},\n\t\t\t{20*?DATA_CHUNK_SIZE, 5*?DATA_CHUNK_SIZE},\n\t\t\t#state{ packing = unpacked })),\n\t?assertEqual(\n\t\t#state{\n\t\t\tpacking = {composite, Addr, 1},\n\t\t\tverify_report = #verify_report{\n\t\t\t\ttotal_error_bytes = ?DATA_CHUNK_SIZE div 2,\n\t\t\t\ttotal_error_chunks = 1,\n\t\t\t\terror_bytes = #{chunk_storage_gap => ?DATA_CHUNK_SIZE div 2},\n\t\t\t\terror_chunks = #{chunk_storage_gap => 1}\n\t\t\t} \n\t\t},\n\t\tverify_chunk_storage(\n\t\t\tar_block:strict_data_split_threshold() + 1,\n\t\t\t{ar_block:strict_data_split_threshold() + 1, <<>>, <<>>, <<>>, <<>>, <<>>, ?DATA_CHUNK_SIZE div 2},\n\t\t\t{20*?DATA_CHUNK_SIZE, 5*?DATA_CHUNK_SIZE},\n\t\t\t#state{ packing = {composite, Addr, 1} })),\n\tok.\n\ntest_verify_chunk_storage_should_not_store() ->\n\tExpectedState = #state{ \n\t\tpacking = unpacked\n\t},\n\t?assertEqual(\n\t\tExpectedState,\n\t\tverify_chunk_storage(\n\t\t\t0,\n\t\t\t{0, <<>>, <<>>, <<>>, <<>>, <<>>, ?DATA_CHUNK_SIZE div 2},\n\t\t\t{20*?DATA_CHUNK_SIZE, 5*?DATA_CHUNK_SIZE},\n\t\t\t#state{ packing = unpacked })),\n\t?assertEqual(\n\t\tExpectedState,\n\t\tverify_chunk_storage(\n\t\t\tar_block:strict_data_split_threshold() + 1,\n\t\t\t{ar_block:strict_data_split_threshold() + 1, <<>>, <<>>, <<>>, <<>>, <<>>, ?DATA_CHUNK_SIZE div 2},\n\t\t\t{20*?DATA_CHUNK_SIZE, 5*?DATA_CHUNK_SIZE},\n\t\t\t#state{ packing = unpacked })),\n\tok.\n\ntest_verify_proof_no_datapath() ->\n\tExpectedState1 = #state{ \n\t\tpacking = unpacked,\n\t\tverify_report = #verify_report{\n\t\t\ttotal_error_bytes = ?DATA_CHUNK_SIZE,\n\t\t\ttotal_error_chunks = 1,\n\t\t\terror_bytes = #{read_data_path_error => ?DATA_CHUNK_SIZE},\n\t\t\terror_chunks = #{read_data_path_error => 1}\n\t\t} \n\t},\n\tExpectedState2 = #state{ \n\t\tpacking = 
unpacked,\n\t\tverify_report = #verify_report{\n\t\t\ttotal_error_bytes = ?DATA_CHUNK_SIZE div 2,\n\t\t\ttotal_error_chunks = 1,\n\t\t\terror_bytes = #{read_data_path_error => ?DATA_CHUNK_SIZE div 2},\n\t\t\terror_chunks = #{read_data_path_error => 1}\n\t\t} \n\t},\n\t?assertEqual(\n\t\tExpectedState1,\n\t\tverify_proof(\n\t\t\t{10, <<>>, <<>>, <<>>, <<>>, <<>>, ?DATA_CHUNK_SIZE},\n\t\t\t#state{ packing = unpacked })),\n\t?assertEqual(\n\t\tExpectedState2,\n\t\tverify_proof(\n\t\t\t{10, <<>>, <<>>, <<>>, <<>>, <<>>, ?DATA_CHUNK_SIZE div 2},\n\t\t\t#state{ packing = unpacked })),\n\tok.\n\ntest_verify_proof_valid_paths() ->\n\t?assertEqual(\n\t\t#state{},\n\t\tverify_proof(\n\t\t\t{10, <<>>, <<>>, <<>>, <<>>, <<>>, ?DATA_CHUNK_SIZE},\n\t\t\t#state{})),\n\tok.\n\t\ntest_verify_proof_invalid_paths() ->\n\tExpectedState1 = #state{ \n\t\tpacking = unpacked,\n\t\tverify_report = #verify_report{\n\t\t\ttotal_error_bytes = ?DATA_CHUNK_SIZE,\n\t\t\ttotal_error_chunks = 1,\n\t\t\terror_bytes = #{validate_paths_error => ?DATA_CHUNK_SIZE},\n\t\t\terror_chunks = #{validate_paths_error => 1}\n\t\t} \n\t},\n\tExpectedState2 = #state{ \n\t\tpacking = unpacked,\n\t\tverify_report = #verify_report{\n\t\t\ttotal_error_bytes = ?DATA_CHUNK_SIZE div 2,\n\t\t\ttotal_error_chunks = 1,\n\t\t\terror_bytes = #{validate_paths_error => ?DATA_CHUNK_SIZE div 2},\n\t\t\terror_chunks = #{validate_paths_error => 1}\n\t\t} \n\t},\n\t?assertEqual(\n\t\tExpectedState1,\n\t\tverify_proof(\n\t\t\t{10, <<>>, <<>>, <<>>, <<>>, <<>>, ?DATA_CHUNK_SIZE},\n\t\t\t#state{ packing = unpacked })),\n\t?assertEqual(\n\t\tExpectedState2,\n\t\tverify_proof(\n\t\t\t{10, <<>>, <<>>, <<>>, <<>>, <<>>, ?DATA_CHUNK_SIZE div 2},\n\t\t\t#state{ packing = unpacked })),\n\tok.\n\ntest_verify_chunk() ->\n\tPreSplitOffset = ar_block:strict_data_split_threshold() - (?DATA_CHUNK_SIZE div 2),\n\tPostSplitOffset = ar_block:strict_data_split_threshold() + (?DATA_CHUNK_SIZE div 2),\n\tIntervalStart = ar_block:strict_data_split_threshold() - ?DATA_CHUNK_SIZE,\n\tIntervalEnd = ar_block:strict_data_split_threshold() + ?DATA_CHUNK_SIZE,\n\tInterval = {IntervalEnd, IntervalStart},\n\t?assertEqual(\n\t\t#state{ \n\t\t\tcursor = PreSplitOffset + 1,\n\t\t\tpacking = unpacked,\n\t\t\tverify_report = #verify_report{\n\t\t\t\ttotal_error_bytes = ?DATA_CHUNK_SIZE div 2,\n\t\t\t\ttotal_error_chunks = 1,\n\t\t\t\terror_bytes = #{missing_packing_info => ?DATA_CHUNK_SIZE div 2},\n\t\t\t\terror_chunks = #{missing_packing_info => 1}\n\t\t\t}\n\t\t},\n\t\tverify_chunk(\n\t\t\t{ok, <<>>, {PreSplitOffset, <<>>, <<>>, <<>>, <<>>, <<>>, ?DATA_CHUNK_SIZE div 2}},\n\t\t\t{Interval, not_found},\n\t\t\t#state{packing=unpacked})),\n\t?assertEqual(\n\t\t#state{ \n\t\t\tcursor = ar_block:strict_data_split_threshold() + ?DATA_CHUNK_SIZE + 1,\n\t\t\tpacking = unpacked,\n\t\t\tverify_report = #verify_report{\n\t\t\t\ttotal_error_bytes = ?DATA_CHUNK_SIZE div 2,\n\t\t\t\ttotal_error_chunks = 1,\n\t\t\t\terror_bytes = #{missing_packing_info => ?DATA_CHUNK_SIZE div 2},\n\t\t\t\terror_chunks = #{missing_packing_info => 1}\n\t\t\t}\n\t\t},\n\t\tverify_chunk(\n\t\t\t{ok, <<>>, {PostSplitOffset, <<>>, <<>>, <<>>, <<>>, <<>>, ?DATA_CHUNK_SIZE div 2}},\n\t\t\t{Interval, not_found},\n\t\t\t#state{packing=unpacked})),\n\tExpectedState = #state{ \n\t\tcursor = 33554432, %% = 2 * 2^24. 
From ar_data_sync:advance_chunks_index_cursor/1\n\t\tpacking = unpacked,\n\t\tverify_report = #verify_report{\n\t\t\ttotal_error_bytes = 33554432,\n\t\t\ttotal_error_chunks = 1,\n\t\t\terror_bytes = #{chunks_index_error => 33554432},\n\t\t\terror_chunks = #{chunks_index_error => 1}\n\t\t}\n\t},\n\t?assertEqual(\n\t\t{error, ExpectedState},\n\t\tverify_chunks_index2(\n\t\t\t{error, some_error},\n\t\t\t#state{ cursor = 0, packing = unpacked })),\n\tok.\n\n%% Verify that generate_sample_offsets/3 samples without replacement.\nsample_offsets_loop(Start, End, Count) ->\n    %% Compute the number of available unique candidates.\n    Candidates = lists:seq(Start + 1, End, ?DATA_CHUNK_SIZE),\n    ActualCount = erlang:min(Count, length(Candidates)),\n    sample_offsets_loop(Start, End, ActualCount, sets:new()).\n\nsample_offsets_loop(_Start, _End, 0, _SampledSet) ->\n    [];\nsample_offsets_loop(Start, End, Count, SampledSet) ->\n    Offset = generate_sample_offset(Start, End, SampledSet, 100),\n    NewSet = sets:add_element(Offset, SampledSet),\n    [Offset | sample_offsets_loop(Start, End, Count - 1, NewSet)].\n\nsample_offsets_without_replacement_test() ->\n    ChunkSize = ?DATA_CHUNK_SIZE,\n    Count = 5,\n    %% Use the helper function to generate a list of offsets.\n    Offsets = sample_offsets_loop(ChunkSize * 10, ChunkSize * 1000, Count),\n    %% Check that exactly Count unique offsets are produced.\n    ?assertEqual(Count, length(Offsets)),\n    %% For every pair, ensure the absolute difference is at least ?DATA_CHUNK_SIZE.\n    lists:foreach(fun(A) ->\n        lists:foreach(fun(B) ->\n            case {A == B, abs(A - B) < ?DATA_CHUNK_SIZE} of\n                {true, _} -> ok;\n                {false, true} -> ?assert(false);\n                _ -> ok\n            end\n        end, Offsets)\n    end, Offsets),\n    %% When the available candidates are fewer than Count,\n    %% only one unique offset should be returned.\n    Offsets2 = sample_offsets_loop(0, ChunkSize, Count),\n    ?assertEqual(1, length(Offsets2)).\n\n%% Verify sample_random_chunks/4 aggregates outcomes correctly.\n%%\n%% We mock ar_data_sync:get_chunk/2 such that:\n%%   - The first call returns {error, chunk_not_found},\n%%   - The second call returns {ok, <<\"valid_proof\">>},\n%%   - The third call returns {error, invalid_chunk}.\n%% Note: Using atoms for partition borders triggers the fallback in generate_sample_offsets/3.\nsample_random_chunks_test_() ->\n\t[\n\t\tar_test_node:test_with_mocked_functions(\n\t\t\t[{ar_data_sync, get_chunk, fun(_Offset, _Opts) ->\n\t\t\t\t%% Use process dictionary to simulate sequential responses.\n\t\t\t\tCounter = case erlang:get(sample_counter) of\n\t\t\t\t\tundefined -> 0;\n\t\t\t\t\tC -> C\n\t\t\t\tend,\n\t\t\t\terlang:put(sample_counter, Counter + 1),\n\t\t\t\tcase Counter of\n\t\t\t\t\t0 -> {error, chunk_not_found};\n\t\t\t\t\t1 -> {ok, <<\"valid_proof\">>};\n\t\t\t\t\t2 -> {error, invalid_chunk}\n\t\t\t\tend\n\t\t\tend},\n\t\t\t{ar_sync_record, is_recorded,\n\t\t\t\tfun(_, _, _) -> true end}\n\t\t\t],\n\t\t\tfun test_sample_random_chunks/0)\n\t].\n\ntest_sample_random_chunks() ->\n\t%% Initialize counter.\n\terlang:put(sample_counter, 0),\n\tState = #state{\n\t\tpacking = unpacked,\n\t\tstart_offset = 0,\n\t\tend_offset = ?DATA_CHUNK_SIZE * 10 ,\n\t\tstore_id = \"test\"\n\t},\n\tReport = sample_chunks(3, sets:new(), #sample_report{}, State),\n\tExpectedReport = #sample_report{total = 3, success = 1, failure = 2},\n\t?assertEqual(ExpectedReport, Report).\n"
  },
  {
    "path": "apps/arweave/src/ar_verify_chunks_reporter.erl",
    "content": "%%% The blob storage optimized for fast reads.\n-module(ar_verify_chunks_reporter).\n\n-behaviour(gen_server).\n\n-export([start_link/0, update/2]).\n-export([init/1, handle_cast/2, handle_call/3, handle_info/2, terminate/2]).\n\n-include(\"ar.hrl\").\n-include(\"ar_verify_chunks.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-record(state, {\n\tverify_reports = #{} :: #{string() => #verify_report{}},\n\tsample_reports = #{} :: #{string() => #sample_report{}}\n}).\n\n-define(REPORT_PROGRESS_INTERVAL, 10000).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Start the server.\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\n-spec update(string(), #verify_report{} | #sample_report{}) -> ok.\nupdate(StoreID, Report) ->\n\tgen_server:cast(?MODULE, {update, StoreID, Report}).\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\n\ninit([]) ->\n\tar_util:cast_after(?REPORT_PROGRESS_INTERVAL, self(), report_progress),\n\t{ok, #state{}}.\n\n\nhandle_cast({update, StoreID, #verify_report{} = Report}, State) ->\n\t{noreply, State#state{ verify_reports = maps:put(StoreID, Report, State#state.verify_reports) }};\n\nhandle_cast({update, StoreID, #sample_report{} = Report}, State) ->\n\t{noreply, State#state{ sample_reports = maps:put(StoreID, Report, State#state.sample_reports) }};\n\nhandle_cast(report_progress, State) ->\n\t#state{\n\t\tverify_reports = VerifyReports,\n\t\tsample_reports = SampleReports\n\t} = State,\n\n\tprint_sample_reports(SampleReports),\n\tprint_verify_reports(VerifyReports),\n\tar_util:cast_after(?REPORT_PROGRESS_INTERVAL, self(), report_progress),\n\t{noreply, State};\n\n% handle_cast({sample_update, StoreID, SampleReport}, State) ->\n% \tNewSampleReports = maps:put(StoreID, SampleReport, State#state.sample_reports),\n% \tprint_sampling_header(),\n% \tprint_sample_report(StoreID, SampleReport),\n% \t{noreply, State#state{sample_reports = NewSampleReports}};\n\nhandle_cast(Cast, State) ->\n\t?LOG_WARNING([{event, unhandled_cast}, {module, ?MODULE}, {cast, Cast}]),\n\t{noreply, State}.\n\nhandle_call(Call, From, State) ->\n\t?LOG_WARNING([{event, unhandled_call}, {module, ?MODULE}, {call, Call}, {from, From}]),\n\t{reply, ok, State}.\n\nhandle_info(Info, State) ->\n\t?LOG_WARNING([{event, unhandled_info}, {module, ?MODULE}, {info, Info}]),\n\t{noreply, State}.\n\nterminate(Reason, _State) ->\n\t?LOG_INFO([{module, ?MODULE},{pid, self()},{callback, terminate},{reason, Reason}]),\n\tok.\n\nprint_verify_reports(Reports) when map_size(Reports) == 0 ->\n\tok;\nprint_verify_reports(Reports) ->\n\tprint_verify_header(),\n\tmaps:foreach(\n\t\tfun(StoreID, Report) ->\n\t\t\tprint_verify_report(StoreID, Report)\n\t\tend,\n\t\tReports\n\t),\n\tprint_verify_footer(),\n\tok.\n\nprint_verify_header() ->\n\tar:console(\"Verification Report~n\", []),\n\tar:console(\"+-------------------------------------------------------------------+-----------+-------+------------+------------+-------------+---------+~n\", []),\n\tar:console(\"|                                                    Storage Module | Processed |     % | Errors (#) | Errors (%) | Verify Rate |  Status |~n\", 
[]),\n\tar:console(\"+-------------------------------------------------------------------+-----------+-------+------------+------------+-------------+---------+~n\", []).\n\nprint_verify_footer() ->\n\tar:console(\"+-------------------------------------------------------------------+-----------+-------+------------+------------+-------------+---------+~n~n\", []).\n\nprint_verify_report(StoreID, Report) ->\n\t#verify_report{\n\t\ttotal_error_chunks = TotalErrorChunks,\n\t\ttotal_error_bytes = TotalErrorBytes,\n\t\tbytes_processed = BytesProcessed,\n\t\tprogress = Progress,\n\t\tstart_time = StartTime,\n\t\tstatus = Status\n\t} = Report,\n\tDuration = erlang:system_time(millisecond) - StartTime,\n\tRate = 1000 * BytesProcessed / Duration,\n\tar:console(\"| ~65s |   ~4B GB | ~4B% | ~10B | ~9.2f% | ~6.1f MB/s | ~7s |~n\", \n\t\t[\n\t\t\tStoreID, BytesProcessed div 1000000000, Progress,\n\t\t\tTotalErrorChunks, (TotalErrorBytes * 100) / BytesProcessed, Rate / 1000000,\n\t\t\tStatus\n\t\t]\n\t).\n\nprint_sample_reports(Reports) when map_size(Reports) == 0 ->\n\tok;\nprint_sample_reports(Reports) ->\n\tprint_sample_header(),\n\tmaps:foreach(\n\t\tfun(StoreID, Report) ->\n\t\t\tprint_sample_report(StoreID, Report)\n\t\tend,\n\t\tReports\n\t),\n\tprint_sample_footer(),\n\tok.\n\nprint_sample_report(StoreID, Report) ->\n\t#sample_report{\n\t\tsamples = MaxSamples,\n\t\ttotal = Total,\n\t\tsuccess = Success,\n\t\tfailure = Failure\n\t} = Report,\n\tar:console(\"| ~65s | ~9B | ~4B% | ~6.1f% | ~6.1f% |~n\",\n\t\t[\n\t\t\tStoreID,\n\t\t\tTotal,\n\t\t\t(Total * 100) div MaxSamples,\n\t\t\t(Success * 100) / Total,\n\t\t\t(Failure * 100) / Total\n\t\t]).\n\nprint_sample_header() ->\n\tar:console(\"Chunk Sample Report~n\", []),\n\tar:console(\"+-------------------------------------------------------------------+-----------+-------+---------+---------+~n\", []),\n\tar:console(\"|                                                    Storage Module | Processed |     % | Success |   Error |~n\", []),\n\tar:console(\"+-------------------------------------------------------------------+-----------+-------+---------+---------+~n\", []).\n\nprint_sample_footer() ->\n\tar:console(\"+-------------------------------------------------------------------+-----------+-------+---------+---------+~n~n\", []).\n"
  },
  {
    "path": "apps/arweave/src/ar_verify_chunks_sup.erl",
    "content": "-module(ar_verify_chunks_sup).\n\n-behaviour(supervisor).\n\n-export([start_link/0]).\n\n-export([init/1]).\n\n-include_lib(\"arweave/include/ar_sup.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nstart_link() ->\n\tsupervisor:start_link({local, ?MODULE}, ?MODULE, []).\n\n%% ===================================================================\n%% Supervisor callbacks.\n%% ===================================================================\n\ninit([]) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tcase Config#config.verify of\n\t\tfalse ->\n\t\t\tignore;\n\t\t_ ->\n\t\t\tWorkers = lists:map(\n\t\t\t\tfun(StorageModule) ->\n\t\t\t\t\tStoreID = ar_storage_module:id(StorageModule),\n\t\t\t\t\tName = ar_verify_chunks:name(StoreID),\n\t\t\t\t\t?CHILD_WITH_ARGS(ar_verify_chunks, worker, Name, [Name, StoreID])\n\t\t\t\tend,\n\t\t\t\tConfig#config.storage_modules\n\t\t\t),\n\t\t\tReporter = ?CHILD(ar_verify_chunks_reporter, worker),\n\t\t\t{ok, {{one_for_one, 5, 10}, [Reporter | Workers]}}\n\tend.\n\t\n"
  },
  {
    "path": "apps/arweave/src/ar_wallet.erl",
    "content": "%%% @doc Utilities for manipulating wallets.\n-module(ar_wallet).\n\n-export([new/0, new_ecdsa/0, new/1, sign/2, verify/3, verify_pre_fork_2_4/3,\n\t\tto_address/1, to_address/2, hash_pub_key/1,\n\t\tload_key/1, load_keyfile/1, new_keyfile/0, new_keyfile/1, new_keyfile/2, new_keyfile/3,\n\t\tbase64_address_with_optional_checksum_to_decoded_address/1,\n\t\tbase64_address_with_optional_checksum_to_decoded_address_safe/1,\n\t\twallet_filepath/1, wallet_filepath/3,\n\t\tget_or_create_wallet/1, recover_key/3]).\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-include(\"ar.hrl\").\n\n-include_lib(\"public_key/include/public_key.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Generate a new wallet public key and private key.\nnew() ->\n\tnew(?DEFAULT_KEY_TYPE).\nnew(KeyType = {KeyAlg, PublicExpnt}) when KeyType =:= {?RSA_SIGN_ALG, 65537} ->\n    {[_, Pub], [_, Pub, Priv|_]} = {[_, Pub], [_, Pub, Priv|_]}\n\t\t= crypto:generate_key(KeyAlg, {?RSA_PRIV_KEY_SZ, PublicExpnt}),\n    {{KeyType, Priv, Pub}, {KeyType, Pub}};\nnew(KeyType = {KeyAlg, KeyCrv}) when KeyAlg =:= ?ECDSA_SIGN_ALG andalso KeyCrv =:= secp256k1 ->\n    {OrigPub, Priv} = crypto:generate_key(ecdh, KeyCrv),\n\tPub = compress_ecdsa_pubkey(OrigPub),\n    {{KeyType, Priv, Pub}, {KeyType, Pub}};\nnew(KeyType = {KeyAlg, KeyCrv}) when KeyAlg =:= ?EDDSA_SIGN_ALG andalso KeyCrv =:= ed25519 ->\n    {Pub, Priv} = crypto:generate_key(KeyAlg, KeyCrv),\n    {{KeyType, Priv, Pub}, {KeyType, Pub}}.\n\n%% @doc Generate a new ECDSA key, store it in a keyfile.\nnew_ecdsa() ->\n\tnew_keyfile({?ECDSA_SIGN_ALG, secp256k1}).\n\n%% @doc Generate a new wallet public and private key, with a corresponding keyfile.\nnew_keyfile() ->\n    new_keyfile(?DEFAULT_KEY_TYPE, wallet_address).\n\nnew_keyfile(KeyType) ->\n    new_keyfile(KeyType, wallet_address).\n\n%% @doc Generate a new wallet public and private key, with a corresponding keyfile.\n%% The provided key is used as part of the file name.\nnew_keyfile(KeyType, WalletName) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tnew_keyfile(KeyType, WalletName, Config#config.data_dir).\n\nnew_keyfile(KeyType, WalletName, DataDir) ->\n\t{Pub, Priv, Key} =\n\t\tcase KeyType of\n\t\t\t{?RSA_SIGN_ALG, PublicExpnt} ->\n\t\t\t\t{[Expnt, Pb], [Expnt, Pb, Prv, P1, P2, E1, E2, C]} =\n\t\t\t\t\tcrypto:generate_key(rsa, {?RSA_PRIV_KEY_SZ, PublicExpnt}),\n\t\t\t\tKy =\n\t\t\t\t\tar_serialize:jsonify(\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t[\n\t\t\t\t\t\t\t\t{kty, <<\"RSA\">>},\n\t\t\t\t\t\t\t\t{ext, true},\n\t\t\t\t\t\t\t\t{e, ar_util:encode(Expnt)},\n\t\t\t\t\t\t\t\t{n, ar_util:encode(Pb)},\n\t\t\t\t\t\t\t\t{d, ar_util:encode(Prv)},\n\t\t\t\t\t\t\t\t{p, ar_util:encode(P1)},\n\t\t\t\t\t\t\t\t{q, ar_util:encode(P2)},\n\t\t\t\t\t\t\t\t{dp, ar_util:encode(E1)},\n\t\t\t\t\t\t\t\t{dq, ar_util:encode(E2)},\n\t\t\t\t\t\t\t\t{qi, ar_util:encode(C)}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t),\n\t\t\t\t{Pb, Prv, Ky};\n\t\t\t{?ECDSA_SIGN_ALG, secp256k1} ->\n\t\t\t\t{OrigPub, Prv} = crypto:generate_key(ecdh, secp256k1),\n\t\t\t\t<<4:8, PubPoint/binary>> = OrigPub,\n\t\t\t\tPubPointMid = byte_size(PubPoint) div 2,\n\t\t\t\t<<X:PubPointMid/binary, Y:PubPointMid/binary>> = PubPoint,\n\t\t\t\tKy =\n\t\t\t\t\tar_serialize:jsonify(\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t[\n\t\t\t\t\t\t\t\t{kty, <<\"EC\">>},\n\t\t\t\t\t\t\t\t{crv, 
<<\"secp256k1\">>},\n\t\t\t\t\t\t\t\t{x, ar_util:encode(X)},\n\t\t\t\t\t\t\t\t{y, ar_util:encode(Y)},\n\t\t\t\t\t\t\t\t{d, ar_util:encode(Prv)}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t),\n\t\t\t\t{compress_ecdsa_pubkey(OrigPub), Prv, Ky};\n\t\t\t{?EDDSA_SIGN_ALG, ed25519} ->\n\t\t\t\t{{_, Prv, Pb}, _} = new(KeyType),\n\t\t\t\tKy =\n\t\t\t\t\tar_serialize:jsonify(\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t[\n\t\t\t\t\t\t\t\t{kty, <<\"OKP\">>},\n\t\t\t\t\t\t\t\t{alg, <<\"EdDSA\">>},\n\t\t\t\t\t\t\t\t{crv, <<\"Ed25519\">>},\n\t\t\t\t\t\t\t\t{x, ar_util:encode(Pb)},\n\t\t\t\t\t\t\t\t{d, ar_util:encode(Prv)}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t),\n\t\t\t\t{Pb, Prv, Ky}\n\t\tend,\n\tFilename = wallet_filepath(WalletName, Pub, KeyType, DataDir),\n\tcase filelib:ensure_dir(Filename) of\n\t\tok ->\n\t\t\tcase ar_storage:write_file_atomic(Filename, Key) of\n\t\t\t\tok ->\n\t\t\t\t\t{{KeyType, Priv, Pub}, {KeyType, Pub}};\n\t\t\t\tError2 ->\n\t\t\t\t\tError2\n\t\t\tend;\n\t\tError ->\n\t\t\tError\n\tend.\n\nwallet_filepath(Wallet) ->\n\t{ok, Config} = arweave_config:get_env(),\n\twallet_filepath(Wallet, Config#config.data_dir).\n\nwallet_filepath(Wallet, DataDir) ->\n\tFilename = lists:flatten([\"arweave_keyfile_\", binary_to_list(Wallet), \".json\"]),\n\tfilename:join([DataDir, ?WALLET_DIR, Filename]).\n\nwallet_filepath2(Wallet) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tFilename = lists:flatten([binary_to_list(Wallet), \".json\"]),\n\tfilename:join([Config#config.data_dir, ?WALLET_DIR, Filename]).\n\n%% @doc Read the keyfile for the key with the given address from disk.\n%% Return not_found if arweave_keyfile_[addr].json or [addr].json is not found\n%% in [data_dir]/?WALLET_DIR.\nload_key(Addr) ->\n\tPath = wallet_filepath(ar_util:encode(Addr)),\n\tcase filelib:is_file(Path) of\n\t\tfalse ->\n\t\t\tPath2 = wallet_filepath2(ar_util:encode(Addr)),\n\t\t\tcase filelib:is_file(Path2) of\n\t\t\t\tfalse ->\n\t\t\t\t\tnot_found;\n\t\t\t\ttrue ->\n\t\t\t\t\tload_keyfile(Path2)\n\t\t\tend;\n\t\ttrue ->\n\t\t\tload_keyfile(Path)\n\tend.\n\n%% @doc Extract the public and private key from a keyfile.\nload_keyfile(File) ->\n\t{ok, Body} = file:read_file(File),\n\t{Key} = ar_serialize:dejsonify(Body),\n\t{Pub, Priv, KeyType} =\n\t\tcase lists:keyfind(<<\"kty\">>, 1, Key) of\n\t\t\t{<<\"kty\">>, <<\"EC\">>} ->\n\t\t\t\t{<<\"x\">>, XEncoded} = lists:keyfind(<<\"x\">>, 1, Key),\n\t\t\t\t{<<\"y\">>, YEncoded} = lists:keyfind(<<\"y\">>, 1, Key),\n\t\t\t\t{<<\"d\">>, PrivEncoded} = lists:keyfind(<<\"d\">>, 1, Key),\n\t\t\t\tOrigPub = iolist_to_binary([<<4:8>>, ar_util:decode(XEncoded),\n\t\t\t\t\t\tar_util:decode(YEncoded)]),\n\t\t\t\tPb = compress_ecdsa_pubkey(OrigPub),\n\t\t\t\tPrv = ar_util:decode(PrivEncoded),\n\t\t\t\tKyType = {?ECDSA_SIGN_ALG, secp256k1},\n\t\t\t\t{Pb, Prv, KyType};\n\t\t\t{<<\"kty\">>, <<\"OKP\">>} ->\n\t\t\t\t{<<\"x\">>, PubEncoded} = lists:keyfind(<<\"x\">>, 1, Key),\n\t\t\t\t{<<\"d\">>, PrivEncoded} = lists:keyfind(<<\"d\">>, 1, Key),\n\t\t\t\tPb = ar_util:decode(PubEncoded),\n\t\t\t\tPrv = ar_util:decode(PrivEncoded),\n\t\t\t\tKyType = {?EDDSA_SIGN_ALG, ed25519},\n\t\t\t\t{Pb, Prv, KyType};\n\t\t\t_ ->\n\t\t\t\t{<<\"n\">>, PubEncoded} = lists:keyfind(<<\"n\">>, 1, Key),\n\t\t\t\t{<<\"d\">>, PrivEncoded} = lists:keyfind(<<\"d\">>, 1, Key),\n\t\t\t\tPb = ar_util:decode(PubEncoded),\n\t\t\t\tPrv = ar_util:decode(PrivEncoded),\n\t\t\t\tKyType = {?RSA_SIGN_ALG, 65537},\n\t\t\t\t{Pb, Prv, KyType}\n\t\tend,\n\t{{KeyType, Priv, Pub}, {KeyType, Pub}}.\n\n%% @doc Sign some data with a private 
key.\nsign({{KeyAlg, PublicExpnt}, Priv, Pub}, Data)\n\t\twhen KeyAlg =:= ?RSA_SIGN_ALG andalso PublicExpnt =:= 65537 ->\n\trsa_pss:sign(\n\t\tData,\n\t\tsha256,\n\t\t#'RSAPrivateKey'{\n\t\t\tpublicExponent = PublicExpnt,\n\t\t\tmodulus = binary:decode_unsigned(Pub),\n\t\t\tprivateExponent = binary:decode_unsigned(Priv)\n\t\t}\n\t);\nsign({{KeyAlg, KeyCrv}, Priv, _}, Data)\n\t\twhen KeyAlg =:= ?ECDSA_SIGN_ALG andalso KeyCrv =:= secp256k1 ->\n\tsecp256k1_nif:sign(Data, Priv);\nsign({{KeyAlg, KeyCrv}, Priv, _}, Data)\n\t\twhen KeyAlg =:= ?EDDSA_SIGN_ALG andalso KeyCrv =:= ed25519 ->\n\tcrypto:sign(\n\t\tKeyAlg,\n\t\tsha512,\n\t\tData,\n\t\t[Priv, KeyCrv]\n\t).\n\n%%--------------------------------------------------------------------\n%% @doc Verify that a signature is correct.\n%% @end\n%%--------------------------------------------------------------------\n-spec verify(PublicKeyInfo, Data, Signature) -> Return when\n\tPublicKeyInfo :: {{KeyAlgorithm, PublicExponent}, PublicKey},\n\tKeyAlgorithm :: atom(),\n\tPublicExponent :: pos_integer() | secp256k1 | ed25519,\n\tPublicKey :: binary(),\n\tSignature :: binary(),\n\tData :: binary(),\n\tReturn :: boolean().\n\nverify({{KeyAlg, PublicExpnt}, Pub}, Data, Sig)\n\t\twhen KeyAlg =:= ?RSA_SIGN_ALG andalso PublicExpnt =:= 65537 ->\n\ttry\n\t\trsa_pss:verify(\n\t\t\tData,\n\t\t\tsha256,\n\t\t\tSig,\n\t\t\t#'RSAPublicKey'{\n\t\t\t\tpublicExponent = PublicExpnt,\n\t\t\t\tmodulus = binary:decode_unsigned(Pub)\n\t\t\t}\n\t\t)\n\tcatch\n\t\tC:R:S ->\n\t\t\t?LOG_ERROR([\n\t\t\t\t{event, rsa_pss_verify_failed},\n\t\t\t\t{class, C},\n\t\t\t\t{reason, R},\n\t\t\t\t{stacktrace, S},\n\t\t\t\t{pub_size, byte_size(Pub)},\n\t\t\t\t{signature_size, byte_size(Sig)}\n\t\t\t]),\n\t\tfalse\n\tend;\n\n% NOTE. We will not write pubkey for ECDSA signature. So don't use verify function for ECDSA, use ecrecover\n% So this function will return always false if called with no Pub\nverify({{KeyAlg, KeyCrv}, Pub}, Data, Sig)\n\t\twhen KeyAlg =:= ?ECDSA_SIGN_ALG andalso KeyCrv =:= secp256k1 ->\n\t{Pass, PubExtracted} = secp256k1_nif:ecrecover(Data, Sig),\n\tPass andalso PubExtracted =:= Pub;\nverify({{KeyAlg, KeyCrv}, Pub}, Data, Sig)\n\t\twhen KeyAlg =:= ?EDDSA_SIGN_ALG andalso KeyCrv =:= ed25519 ->\n\tcrypto:verify(\n\t\tKeyAlg,\n\t\tsha512,\n\t\tData,\n\t\tSig,\n\t\t[Pub, KeyCrv]\n\t).\n\n%%--------------------------------------------------------------------\n%% @doc Verify that  a signature is correct. The function  was used to\n%% verify  transactions  until  the  fork  2.4.  It  rejects  a  valid\n%% transaction when  the key modulus bit  size is less than  4096. 
The\n%% new  method (verify/3)  successfully  verifies  all the  historical\n%% transactions so this  function is not used anywhere  after the fork\n%% 2.4.\n%% @end\n%%--------------------------------------------------------------------\n-spec verify_pre_fork_2_4(PublicKeyInfo, Data, Signature) -> Return when\n\tPublicKeyInfo :: {{KeyAlgorithm, PublicExponent}, PublicKey},\n\tKeyAlgorithm :: atom(),\n\tPublicExponent :: pos_integer(),\n\tPublicKey :: binary(),\n\tSignature :: binary(),\n\tData :: binary(),\n\tReturn :: boolean().\n\nverify_pre_fork_2_4({{KeyAlg, PublicExpnt}, Pub}, Data, Sig)\n\t\twhen KeyAlg =:= ?RSA_SIGN_ALG andalso PublicExpnt =:= 65537 ->\n\ttry\n\t\trsa_pss:verify_legacy(\n\t\t\tData,\n\t\t\tsha256,\n\t\t\tSig,\n\t\t\t#'RSAPublicKey'{\n\t\t\t\tpublicExponent = PublicExpnt,\n\t\t\t\tmodulus = binary:decode_unsigned(Pub)\n\t\t\t}\n\t\t)\n\tcatch\n\t\tC:R:S ->\n\t\t\t?LOG_ERROR([\n\t\t\t\t{event, rsa_pss_verify_legacy_failed},\n\t\t\t\t{class, C},\n\t\t\t\t{reason, R},\n\t\t\t\t{stacktrace, S},\n\t\t\t\t{pub_size, byte_size(Pub)},\n\t\t\t\t{signature_size, byte_size(Sig)}\n\t\t\t]),\n\t\tfalse\n\tend.\n\n%% @doc Generate an address from a public key.\nto_address({{SigType, _Priv, Pub}, {SigType, Pub}}) ->\n\tto_address(Pub, SigType);\nto_address({SigType, Pub}) ->\n\tto_address(Pub, SigType);\nto_address({SigType, _Priv, Pub}) ->\n\tto_address(Pub, SigType).\n\n%% @doc Generate an address from a public key.\nto_address(PubKey, {?RSA_SIGN_ALG, 65537}) when bit_size(PubKey) == 256 ->\n\t%% Small keys are not secure, nobody is using them, the clause\n\t%% is for backwards-compatibility.\n\tPubKey;\nto_address(PubKey, _SigType) ->\n\thash_pub_key(PubKey).\n\nhash_pub_key(PubKey) ->\n\tcrypto:hash(?HASH_ALG, PubKey).\n\nbase64_address_with_optional_checksum_to_decoded_address(AddrBase64) ->\n\tSize = byte_size(AddrBase64),\n\tcase Size > 7 of\n\t\tfalse ->\n\t\t\tar_util:decode(AddrBase64);\n\t\ttrue ->\n\t\t\tcase AddrBase64 of\n\t\t\t\t<< MainBase64url:(Size - 7)/binary, \":\", ChecksumBase64url:6/binary >> ->\n\t\t\t\t\tAddrDecoded = ar_util:decode(MainBase64url),\n\t\t\t\t\tcase byte_size(AddrDecoded) < 20 of\n\t\t\t\t\t\ttrue -> throw({error, invalid_address});\n\t\t\t\t\t\tfalse -> ok\n\t\t\t\t\tend,\n\t\t\t\t\tcase byte_size(AddrDecoded) > 64 of\n\t\t\t\t\t\ttrue -> throw({error, invalid_address});\n\t\t\t\t\t\tfalse -> ok\n\t\t\t\t\tend,\n\t\t\t\t\tChecksum = ar_util:decode(ChecksumBase64url),\n\t\t\t\t\tcase decoded_address_to_checksum(AddrDecoded) =:= Checksum of\n\t\t\t\t\t\ttrue -> AddrDecoded;\n\t\t\t\t\t\tfalse -> throw({error, invalid_address_checksum})\n\t\t\t\t\tend;\n\t\t\t\t_ ->\n\t\t\t\t\tar_util:decode(AddrBase64)\n\t\t\tend\n\tend.\n\nbase64_address_with_optional_checksum_to_decoded_address_safe(AddrBase64)->\n\ttry\n\t\tD = base64_address_with_optional_checksum_to_decoded_address(AddrBase64),\n\t\t{ok, D}\n\tcatch\n\t\t_:_ ->\n\t\t\t{error, invalid}\n\tend.\n\n%% @doc Read a wallet of one of the given types from disk. 
Files modified later are prefered.\n%% If no file is found, create one of the type standing first in the list.\nget_or_create_wallet(Types) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tWalletDir = filename:join(Config#config.data_dir, ?WALLET_DIR),\n\tEntries =\n\t\tlists:reverse(lists:sort(filelib:fold_files(\n\t\t\tWalletDir,\n\t\t\t\"(.*\\\\.json$)\",\n\t\t\tfalse,\n\t\t\tfun(F, Acc) ->\n\t\t\t\t [{filelib:last_modified(F), F} | Acc]\n\t\t\tend,\n\t\t\t[])\n\t\t)),\n\tget_or_create_wallet(Entries, Types).\n\nget_or_create_wallet([], [Type | _]) ->\n\tar_wallet:new_keyfile(Type);\nget_or_create_wallet([{_LastModified, F} | Entries], Types) ->\n\t{{Type, _, _}, _} = W = load_keyfile(F),\n\tcase lists:member(Type, Types) of\n\t\ttrue ->\n\t\t\tW;\n\t\tfalse ->\n\t\t\tget_or_create_wallet(Entries, Types)\n\tend.\n\nrecover_key(_Data, <<>>, ?ECDSA_KEY_TYPE) ->\n\t<<>>;\nrecover_key(Data, Signature, ?ECDSA_KEY_TYPE) ->\n\t{_Pass, PubKey} = secp256k1_nif:ecrecover(Data, Signature),\n\t% Note. if Pass = false, then PubKey will be <<>>\n\tPubKey.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nwallet_filepath(WalletName, PubKey, KeyType) ->\n\t{ok, Config} = arweave_config:get_env(),\n\twallet_filepath(WalletName, PubKey, KeyType, Config#config.data_dir).\n\nwallet_filepath(WalletName, PubKey, KeyType, DataDir) ->\n\twallet_filepath(wallet_name(WalletName, PubKey, KeyType), DataDir).\n\nwallet_name(wallet_address, PubKey, KeyType) ->\n\tar_util:encode(to_address(PubKey, KeyType));\nwallet_name(WalletName, _, _) ->\n\tWalletName.\n\ndecoded_address_to_checksum(AddrDecoded) ->\n\tCrc = erlang:crc32(AddrDecoded),\n\t<< Crc:32 >>.\n\ndecoded_address_to_base64_address_with_checksum(AddrDecoded) ->\n\tChecksum = decoded_address_to_checksum(AddrDecoded),\n\tAddrBase64 = ar_util:encode(AddrDecoded),\n\tChecksumBase64 = ar_util:encode(Checksum),\n\t<< AddrBase64/binary, \":\", ChecksumBase64/binary >>.\n\ncompress_ecdsa_pubkey(<<4:8, PubPoint/binary>>) ->\n\tPubPointMid = byte_size(PubPoint) div 2,\n\t<<X:PubPointMid/binary, Y:PubPointMid/integer-unit:8>> = PubPoint,\n\tPubKeyHeader =\n\t\tcase Y rem 2 of\n\t\t\t0 -> <<2:8>>;\n\t\t\t1 -> <<3:8>>\n\t\tend,\n\tiolist_to_binary([PubKeyHeader, X]).\n\n%%%===================================================================\n%%% Tests.\n%%%===================================================================\n\nwallet_sign_verify_test() ->\n\tTestData = <<\"TEST DATA\">>,\n\t{Priv, Pub} = new(),\n\tSignature = sign(Priv, TestData),\n\ttrue = verify(Pub, TestData, Signature).\n\ninvalid_signature_test() ->\n\tTestData = <<\"TEST DATA\">>,\n\t{Priv, Pub} = new(),\n\t<< _:32, Signature/binary >> = sign(Priv, TestData),\n\tfalse = verify(Pub, TestData, << 0:32, Signature/binary >>).\n\n%% @doc Check generated keyfiles can be retrieved.\ngenerate_keyfile_test() ->\n\t{Priv, Pub} = new_keyfile(),\n\tFileName = wallet_filepath(ar_util:encode(to_address(Pub))),\n\t{Priv, Pub} = load_keyfile(FileName).\n\nchecksum_test() ->\n\t{_, Pub} = new(),\n\tAddr = to_address(Pub),\n\tAddrBase64 = ar_util:encode(Addr),\n\tAddrBase64Wide = decoded_address_to_base64_address_with_checksum(Addr),\n\tAddr = base64_address_with_optional_checksum_to_decoded_address(AddrBase64Wide),\n\tAddr = base64_address_with_optional_checksum_to_decoded_address(AddrBase64),\n\t%% 64 bytes, for future.\n\tCorrectLongAddress = 
<<\"0123456789012345678901234567890123456789012345678901234567890123\">>,\n\tCorrectCheckSum = decoded_address_to_checksum(CorrectLongAddress),\n\tCorrectLongAddressBase64 = ar_util:encode(CorrectLongAddress),\n\tCorrectCheckSumBase64 = ar_util:encode(CorrectCheckSum),\n\tCorrectLongAddressWithChecksumBase64 = <<CorrectLongAddressBase64/binary, \":\", CorrectCheckSumBase64/binary>>,\n\tcase catch base64_address_with_optional_checksum_to_decoded_address(CorrectLongAddressWithChecksumBase64) of\n\t\t{error, _} -> throw({error, correct_long_address_should_bypass});\n\t\t_ -> ok\n\tend,\n\t%% 65 bytes.\n\tInvalidLongAddress = <<\"01234567890123456789012345678901234567890123456789012345678901234\">>,\n\tInvalidLongAddressBase64 = ar_util:encode(InvalidLongAddress),\n\tcase catch base64_address_with_optional_checksum_to_decoded_address(<<InvalidLongAddressBase64/binary, \":MDA\">>) of\n\t\t{'EXIT', _} -> ok\n\tend,\n\t%% 100 bytes.\n\tInvalidLongAddress2 = <<\"0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789\">>,\n\tInvalidLongAddress2Base64 = ar_util:encode(InvalidLongAddress2),\n\tcase catch base64_address_with_optional_checksum_to_decoded_address(<<InvalidLongAddress2Base64/binary, \":MDA\">>) of\n\t\t{'EXIT', _} -> ok\n\tend,\n\t%% 10 bytes\n\tInvalidShortAddress = <<\"0123456789\">>,\n\tInvalidShortAddressBase64 = ar_util:encode(InvalidShortAddress),\n\tcase catch base64_address_with_optional_checksum_to_decoded_address(<<InvalidShortAddressBase64/binary, \":MDA\">>) of\n\t\t{'EXIT', _} -> ok\n\tend,\n\tInvalidChecksum = ar_util:encode(<< 0:32 >>),\n\tcase catch base64_address_with_optional_checksum_to_decoded_address(\n\t\t\t<< AddrBase64/binary, \":\", InvalidChecksum/binary >>) of\n\t\t{error, invalid_address_checksum} -> ok\n\tend,\n\tcase catch base64_address_with_optional_checksum_to_decoded_address(<<\":MDA\">>) of\n\t\t{'EXIT', _} -> ok\n\tend.\n"
  },
  {
    "path": "apps/arweave/src/ar_wallets.erl",
    "content": "%%% @doc The module manages the states of wallets (their balances and last transactions)\n%%% in different blocks. Since wallet lists are huge, only one copy is stored at any time,\n%%% along with the small \"diffs\", which allow to reconstruct the wallet lists of the previous,\n%%% following, and uncle blocks.\n-module(ar_wallets).\n\n-export([start_link/1, get/1, get/2, get_chunk/2, get_balance/1, get_balance/2, get_last_tx/1,\n\t\tapply_block/2, add_wallets/4, set_current/3, get_size/0]).\n\n-export([init/1, handle_call/3, handle_cast/2, terminate/2]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave/include/ar_wallets.hrl\").\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nstart_link(Args) ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, Args, []).\n\n%% @doc Return the map mapping the given addresses to the corresponding wallets\n%% from the latest wallet tree.\nget(Address) when is_binary(Address) ->\n\tar_wallets:get([Address]);\nget(Addresses) ->\n\tgen_server:call(?MODULE, {get, Addresses}, ?DEFAULT_CALL_TIMEOUT).\n\n%% @doc Return the map mapping the given addresses to the corresponding wallets\n%% from the wallet tree with the given root hash.\nget(RootHash, Address) when is_binary(Address) ->\n\tget(RootHash, [Address]);\nget(RootHash, Addresses) ->\n\tgen_server:call(?MODULE, {get, RootHash, Addresses}, ?DEFAULT_CALL_TIMEOUT).\n\n%% @doc Return the map containing the wallets, up to ?WALLET_LIST_CHUNK_SIZE, starting\n%% from the given cursor (first or an address). The wallets are picked in the ascending\n%% alphabetical order, from the tree with the given root hash.\nget_chunk(RootHash, Cursor) ->\n\tgen_server:call(?MODULE, {get_chunk, RootHash, Cursor}, ?DEFAULT_CALL_TIMEOUT).\n\n%% @doc Return balance of the given wallet in the latest wallet tree.\nget_balance(Address) ->\n\tgen_server:call(?MODULE, {get_balance, Address}, ?DEFAULT_CALL_TIMEOUT).\n\n%% @doc Return balance of the given wallet in the given wallet tree.\nget_balance(RootHash, Address) ->\n\tgen_server:call(?MODULE, {get_balance, RootHash, Address}, ?DEFAULT_CALL_TIMEOUT).\n\n%% @doc Return the anchor (last_tx) of the given wallet in the latest wallet tree.\nget_last_tx(Address) ->\n\tgen_server:call(?MODULE, {get_last_tx, Address}, ?DEFAULT_CALL_TIMEOUT).\n\n%% @doc Compute and cache the account tree for the given new block and its previous block.\napply_block(B, PrevB) ->\n\tgen_server:call(?MODULE, {apply_block, B, PrevB}, ?DEFAULT_CALL_TIMEOUT).\n\n%% @doc Cache the wallets to be upserted into the tree with the given root hash. Return\n%% the root hash of the new wallet tree.\nadd_wallets(RootHash, Wallets, Height, Denomination) ->\n\tgen_server:call(?MODULE, {add_wallets, RootHash, Wallets, Height, Denomination}, ?DEFAULT_CALL_TIMEOUT).\n\n%% @doc Make the wallet tree with the given root hash \"the current tree\". 
The current tree\n%% is used by get/1, get_balance/1, and get_last_tx/1.\nset_current(RootHash, Height, PruneDepth) when is_binary(RootHash) ->\n\tCall = {set_current, RootHash, Height, PruneDepth},\n\tgen_server:call(?MODULE, Call, ?DEFAULT_CALL_TIMEOUT).\n\n%% @doc Return the number of accounts in the latest state.\nget_size() ->\n\tgen_server:call(?MODULE, get_size, ?DEFAULT_CALL_TIMEOUT).\n\n%%%===================================================================\n%%% Generic server callbacks.\n%%%===================================================================\n\ninit([{blocks, []} | _]) ->\n\t%% Trap exit to avoid corrupting any open files on quit.\n\tprocess_flag(trap_exit, true),\n\tDAG = ar_diff_dag:new(<<>>, ar_patricia_tree:new(), not_set),\n\tar_node_worker ! wallets_ready,\n\t{ok, DAG};\ninit([{blocks, Blocks} | Args]) ->\n\t%% Trap exit to avoid corrupting any open files on quit.\n\tprocess_flag(trap_exit, true),\n\tgen_server:cast(?MODULE, {init, Blocks, Args}),\n\tDAG = ar_diff_dag:new(<<>>, ar_patricia_tree:new(), not_set),\n\t{ok, DAG}.\n\nhandle_call({get, Addresses}, _From, DAG) ->\n\t{reply, get_map(ar_diff_dag:get_sink(DAG), Addresses), DAG};\n\nhandle_call({get, RootHash, Addresses}, _From, DAG) ->\n\tcase ar_diff_dag:reconstruct(DAG, RootHash, fun apply_diff/2) of\n\t\t{error, _} = Error ->\n\t\t\t{reply, Error, DAG};\n\t\tTree ->\n\t\t\t{reply, get_map(Tree, Addresses), DAG}\n\tend;\n\nhandle_call({get_chunk, RootHash, Cursor}, _From, DAG) ->\n\tcase ar_diff_dag:reconstruct(DAG, RootHash, fun apply_diff/2) of\n\t\t{error, not_found} ->\n\t\t\t{reply, {error, root_hash_not_found}, DAG};\n\t\tTree ->\n\t\t\t{NextCursor, Range} = get_account_tree_range(Tree, Cursor),\n\t\t\t{reply, {ok, {NextCursor, Range}}, DAG}\n\tend;\n\nhandle_call(get_size, _From, DAG) ->\n\t{reply, ar_patricia_tree:size(ar_diff_dag:get_sink(DAG)), DAG};\n\nhandle_call({get_balance, Address}, _From, DAG) ->\n\tcase ar_patricia_tree:get(Address, ar_diff_dag:get_sink(DAG)) of\n\t\tnot_found ->\n\t\t\t{reply, 0, DAG};\n\t\tEntry ->\n\t\t\tDenomination = ar_diff_dag:get_sink_metadata(DAG),\n\t\t\tcase Entry of\n\t\t\t\t{Balance, _LastTX} ->\n\t\t\t\t\t{reply, ar_pricing:redenominate(Balance, 1, Denomination), DAG};\n\t\t\t\t{Balance, _LastTX, BaseDenomination, _MiningPermission} ->\n\t\t\t\t\t{reply, ar_pricing:redenominate(Balance, BaseDenomination, Denomination),\n\t\t\t\t\t\t\tDAG}\n\t\t\tend\n\tend;\n\nhandle_call({get_balance, RootHash, Address}, _From, DAG) ->\n\tcase ar_diff_dag:reconstruct(DAG, RootHash, fun apply_diff/2) of\n\t\t{error, _} = Error ->\n\t\t\t{reply, Error, DAG};\n\t\tTree ->\n\t\t\tcase ar_patricia_tree:get(Address, Tree) of\n\t\t\t\tnot_found ->\n\t\t\t\t\t{reply, 0, DAG};\n\t\t\t\tEntry ->\n\t\t\t\t\tDenomination = ar_diff_dag:get_metadata(DAG, RootHash),\n\t\t\t\t\tcase Entry of\n\t\t\t\t\t\t{Balance, _LastTX} ->\n\t\t\t\t\t\t\t{reply, ar_pricing:redenominate(Balance, 1, Denomination), DAG};\n\t\t\t\t\t\t{Balance, _LastTX, BaseDenomination, _MiningPermission} ->\n\t\t\t\t\t\t\t{reply, ar_pricing:redenominate(Balance, BaseDenomination,\n\t\t\t\t\t\t\t\t\tDenomination), DAG}\n\t\t\t\t\tend\n\t\t\tend\n\tend;\n\nhandle_call({get_last_tx, Address}, _From, DAG) ->\n\t{reply,\n\t\tcase ar_patricia_tree:get(Address, ar_diff_dag:get_sink(DAG)) of\n\t\t\tnot_found ->\n\t\t\t\t<<>>;\n\t\t\t{_Balance, LastTX} ->\n\t\t\t\tLastTX;\n\t\t\t{_Balance, LastTX, _Denomination, _MiningPermission} ->\n\t\t\t\tLastTX\n\t\tend,\n\tDAG};\n\nhandle_call({apply_block, B, PrevB}, _From, DAG) 
->\n\t{Reply, DAG2} = apply_block(B, PrevB, DAG),\n\t{reply, Reply, DAG2};\n\nhandle_call({add_wallets, RootHash, Wallets, Height, Denomination}, _From, DAG) ->\n\tTree = ar_diff_dag:reconstruct(DAG, RootHash, fun apply_diff/2),\n\tRootHash2 = compute_hash(Tree, Wallets, Height),\n\tDAG2 = maybe_add_node(DAG, RootHash2, RootHash, Wallets, Denomination),\n\t{reply, {ok, RootHash2}, DAG2};\n\nhandle_call({set_current, RootHash, Height, PruneDepth}, _, DAG) ->\n\t{reply, ok, set_current(DAG, RootHash, Height, PruneDepth)}.\n\nhandle_cast({init, Blocks, Args}, _) ->\n\tcase proplists:get_value(from_state, Args) of\n\t\tundefined ->\n\t\t\tPeers = proplists:get_value(from_peers, Args),\n\tB =\n\t\tcase length(Blocks) >= ar_block:get_consensus_window_size() of\n\t\t\ttrue ->\n\t\t\t\tlists:nth(ar_block:get_consensus_window_size(), Blocks);\n\t\t\tfalse ->\n\t\t\t\tlists:last(Blocks)\n\t\tend,\n\tTree = get_tree_from_peers(B, Peers),\n\tinitialize_state(Blocks, Tree);\n\t\tSearchDepth ->\n\t\t\t?LOG_DEBUG([{event, init_from_state}, {block_count, length(Blocks)}]),\n\t\t\tCustomDir = proplists:get_value(custom_dir, Args, not_set),\n\t\t\tcase find_local_account_tree(Blocks, SearchDepth, CustomDir) of\n\t\t\t\tnot_found ->\n\t\t\t\t\tar:console(\"~n~n\\tThe local state is missing an account tree, consider joining \"\n\t\t\t\t\t\t\t\"the network via the trusted peers.~n\"),\n\t\t\t\t\ttimer:sleep(1000),\n\t\t\t\t\tinit:stop(1);\n\t\t\t\t{Skipped, Tree} ->\n\t\t\t\t\tBlocks2 = lists:nthtail(Skipped, Blocks),\n\t\t\t\t\tinitialize_state(Blocks2, Tree)\n\t\t\tend\n\tend;\n\nhandle_cast(Msg, DAG) ->\n\t?LOG_ERROR([{event, unhandled_cast}, {module, ?MODULE}, {message, Msg}]),\n\t{noreply, DAG}.\n\nterminate(Reason, _State) ->\n\t?LOG_INFO([{event, ar_wallets_terminated}, {reason, Reason}]).\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nfind_local_account_tree(Blocks, SearchDepth, CustomDir) ->\n\tfind_local_account_tree(Blocks, SearchDepth, 0, CustomDir).\n\nfind_local_account_tree(_Blocks, Skipped, Skipped, _CustomDir) ->\n\tnot_found;\nfind_local_account_tree(Blocks, SearchDepth, Skipped, CustomDir) ->\n\t{IsLast, B} =\n\t\tcase length(Blocks) >= ar_block:get_consensus_window_size() of\n\t\t\ttrue ->\n\t\t\t\t{false, lists:nth(ar_block:get_consensus_window_size(), Blocks)};\n\t\t\tfalse ->\n\t\t\t\t{true, lists:last(Blocks)}\n\t\tend,\n\tID = B#block.wallet_list,\n\tcase ar_storage:read_wallet_list(ID, CustomDir) of\n\t\t{ok, Tree} ->\n\t\t\t{Skipped, Tree};\n\t\t_ ->\n\t\t\tcase IsLast of\n\t\t\t\ttrue ->\n\t\t\t\t\tnot_found;\n\t\t\t\tfalse ->\n\t\t\t\t\tfind_local_account_tree(tl(Blocks), SearchDepth, Skipped + 1, CustomDir)\n\t\t\tend\n\tend.\n\ninitialize_state(Blocks, Tree) ->\n\tInitialDepth = ar_block:get_consensus_window_size(),\n\t{DAG3, LastB} = lists:foldl(\n\t\tfun (B, start) ->\n\t\t\t\t{RootHash, UpdatedTree, UpdateMap} = ar_block:hash_wallet_list(Tree),\n\t\t\t\tgen_server:cast(ar_storage, {store_account_tree_update, B#block.height,\n\t\t\t\t\t\tRootHash, UpdateMap}),\n\t\t\t\tRootHash = B#block.wallet_list,\n\t\t\t\tDAG = ar_diff_dag:new(RootHash, UpdatedTree, B#block.denomination),\n\t\t\t\t{DAG, B};\n\t\t\t(B, {DAG, PrevB}) ->\n\t\t\t\tExpectedRootHash = B#block.wallet_list,\n\t\t\t\t{{ok, ExpectedRootHash}, DAG2} = apply_block(B, PrevB, DAG),\n\t\t\t\t{DAG2, B}\n\t\tend,\n\t\tstart,\n\t\tlists:reverse(lists:sublist(Blocks, InitialDepth))\n\t),\n\tWalletList 
= LastB#block.wallet_list,\n\tLastHeight = LastB#block.height,\n\tDAG4 = set_current(DAG3, WalletList, LastHeight, InitialDepth),\n\tar_events:send(node_state, {account_tree_initialized, LastB#block.height}),\n\t{noreply, DAG4}.\n\nget_tree_from_peers(B, Peers) ->\n\tID = B#block.wallet_list,\n\tar:console(\"Downloading the wallet tree, chunk 1.~n\", []),\n\tcase ar_http_iface_client:get_wallet_list_chunk(Peers, ID) of\n\t\t{ok, {Cursor, Chunk}} ->\n\t\t\t{ok, Tree} = load_wallet_tree_from_peers(\n\t\t\t\tID,\n\t\t\t\tPeers,\n\t\t\t\tar_patricia_tree:from_proplist(Chunk),\n\t\t\t\tCursor,\n\t\t\t\t2\n\t\t\t),\n\t\t\tar:console(\"Downloaded the wallet tree successfully.~n\", []),\n\t\t\tTree;\n\t\t_ ->\n\t\t\tar:console(\"Failed to download wallet tree chunk, retrying...~n\", []),\n\t\t\ttimer:sleep(1000),\n\t\t\tget_tree_from_peers(B, Peers)\n\tend.\n\nload_wallet_tree_from_peers(_ID, _Peers, Acc, last, _) ->\n\t{ok, Acc};\nload_wallet_tree_from_peers(ID, Peers, Acc, Cursor, N) ->\n\tar_util:terminal_clear(),\n\tar:console(\"Downloading the wallet tree, chunk ~B.~n\", [N]),\n\tcase ar_http_iface_client:get_wallet_list_chunk(Peers, ID, Cursor) of\n\t\t{ok, {NextCursor, Chunk}} ->\n\t\t\tAcc3 =\n\t\t\t\tlists:foldl(\n\t\t\t\t\tfun({K, V}, Acc2) -> ar_patricia_tree:insert(K, V, Acc2)\n\t\t\t\t\tend,\n\t\t\t\t\tAcc,\n\t\t\t\t\tChunk\n\t\t\t\t),\n\t\t\tload_wallet_tree_from_peers(ID, Peers, Acc3, NextCursor, N + 1);\n\t\t_ ->\n\t\t\tar:console(\"Failed to download wallet tree chunk, retrying...~n\", []),\n\t\t\ttimer:sleep(1000),\n\t\t\tload_wallet_tree_from_peers(ID, Peers, Acc, Cursor, N)\n\tend.\n\napply_block(B, PrevB, DAG) ->\n\tDenomination2 = B#block.denomination,\n\tRedenominationHeight2 = B#block.redenomination_height,\n\tcase ar_pricing:may_be_redenominate(PrevB) of\n\t\t{Denomination2, RedenominationHeight2} ->\n\t\t\tapply_block2(B, PrevB, DAG);\n\t\t_ ->\n\t\t\t{{error, invalid_denomination}, DAG}\n\tend.\n\napply_block2(B, PrevB, DAG) ->\n\tRootHash = PrevB#block.wallet_list,\n\tTree = ar_diff_dag:reconstruct(DAG, RootHash, fun apply_diff/2),\n\tTXs = B#block.txs,\n\tRewardAddr = B#block.reward_addr,\n\tAddresses = [RewardAddr | ar_tx:get_addresses(TXs)],\n\tAddresses2 = [ar_rewards:get_oldest_locked_address(PrevB) | Addresses],\n\tAddresses3 =\n\t\tcase B#block.double_signing_proof of\n\t\t\tundefined ->\n\t\t\t\tAddresses2;\n\t\t\tProof ->\n\t\t\t\t[ar_wallet:hash_pub_key(element(1, Proof)) | Addresses2]\n\t\tend,\n\tAccounts = get_map(Tree, Addresses3),\n\tcase ar_node_utils:update_accounts(B, PrevB, Accounts) of\n\t\t{ok, Args} ->\n\t\t\tapply_block2(B, PrevB, Args, Tree, DAG);\n\t\tError ->\n\t\t\t{Error, DAG}\n\tend.\n\napply_block2(B, PrevB, Args, Tree, DAG) ->\n\t{EndowmentPool, MinerReward, DebtSupply, KryderPlusRateMultiplierLatch,\n\t\t\tKryderPlusRateMultiplier, Accounts} = Args,\n\tDenomination = PrevB#block.denomination,\n\tDenomination2 = B#block.denomination,\n\tEndowmentPool2 = ar_pricing:redenominate(EndowmentPool, Denomination, Denomination2),\n\tMinerReward2 = ar_pricing:redenominate(MinerReward, Denomination, Denomination2),\n\tDebtSupply2 = ar_pricing:redenominate(DebtSupply, Denomination, Denomination2),\n\tcase {B#block.reward_pool == EndowmentPool2, B#block.reward == MinerReward2,\n\t\t\tB#block.debt_supply == DebtSupply2,\n\t\t\tB#block.kryder_plus_rate_multiplier_latch == KryderPlusRateMultiplierLatch,\n\t\t\tB#block.kryder_plus_rate_multiplier == KryderPlusRateMultiplier,\n\t\t\tB#block.height >= ar_fork:height_2_6()} of\n\t\t{false, _, _, _, _, _} 
->\n\t\t\t{{error, invalid_reward_pool}, DAG};\n\t\t{true, false, _, _, _, true} ->\n\t\t\t{{error, invalid_miner_reward}, DAG};\n\t\t{true, true, false, _, _, true} ->\n\t\t\t{{error, invalid_debt_supply}, DAG};\n\t\t{true, true, true, false, _, true} ->\n\t\t\t{{error, invalid_kryder_plus_rate_multiplier_latch}, DAG};\n\t\t{true, true, true, true, false, true} ->\n\t\t\t{{error, invalid_kryder_plus_rate_multiplier}, DAG};\n\t\t_ ->\n\t\t\tTree2 = apply_diff(Accounts, Tree),\n\t\t\t{RootHash2, _, UpdateMap} = ar_block:hash_wallet_list(Tree2),\n\t\t\tcase B#block.wallet_list == RootHash2 of\n\t\t\t\ttrue ->\n\t\t\t\t\tRootHash = PrevB#block.wallet_list,\n\t\t\t\t\tDAG2 = maybe_add_node(DAG, RootHash2, RootHash, Accounts, Denomination2),\n\t\t\t\t\tgen_server:cast(ar_storage, {store_account_tree_update, B#block.height,\n\t\t\t\t\t\t\tRootHash2, UpdateMap}),\n\t\t\t\t\t{{ok, RootHash2}, DAG2};\n\t\t\t\tfalse ->\n\t\t\t\t\t{{error, invalid_wallet_list}, DAG}\n\t\t\tend\n\tend.\n\nset_current(DAG, RootHash, Height, PruneDepth) ->\n\tUpdatedDAG = ar_diff_dag:update_sink(\n\t\tar_diff_dag:move_sink(DAG, RootHash, fun apply_diff/2, fun reverse_diff/2),\n\t\tRootHash,\n\t\tfun(Tree, Meta) ->\n\t\t\t{RootHash, UpdatedTree, UpdateMap} = ar_block:hash_wallet_list(Tree),\n\t\t\tgen_server:cast(ar_storage, {store_account_tree_update, Height, RootHash,\n\t\t\t\t\tUpdateMap}),\n\t\t\t{RootHash, UpdatedTree, Meta}\n\t\tend\n\t),\n\tTree = ar_diff_dag:get_sink(UpdatedDAG),\n\ttrue = Height >= ar_fork:height_2_2(),\n\tprometheus_counter:inc(wallet_list_size, ar_patricia_tree:size(Tree)),\n\tar_diff_dag:filter(UpdatedDAG, PruneDepth).\n\napply_diff(Diff, Tree) ->\n\tmaps:fold(\n\t\tfun (Addr, remove, Acc) ->\n\t\t\t\tar_patricia_tree:delete(Addr, Acc);\n\t\t\t(Addr, {Balance, LastTX}, Acc) ->\n\t\t\t\tar_patricia_tree:insert(Addr, {Balance, LastTX}, Acc);\n\t\t\t(Addr, {Balance, LastTX, Denomination, MiningPermission}, Acc) ->\n\t\t\t\tar_patricia_tree:insert(Addr,\n\t\t\t\t\t\t{Balance, LastTX, Denomination, MiningPermission}, Acc)\n\t\tend,\n\t\tTree,\n\t\tDiff\n\t).\n\nreverse_diff(Diff, Tree) ->\n\tmaps:map(\n\t\tfun(Addr, _Value) ->\n\t\t\tcase ar_patricia_tree:get(Addr, Tree) of\n\t\t\t\tnot_found ->\n\t\t\t\t\tremove;\n\t\t\t\tValue ->\n\t\t\t\t\tValue\n\t\t\tend\n\t\tend,\n\t\tDiff\n\t).\n\nget_map(Tree, Addresses) ->\n\tlists:foldl(\n\t\tfun(Addr, Acc) ->\n\t\t\tcase ar_patricia_tree:get(Addr, Tree) of\n\t\t\t\tnot_found ->\n\t\t\t\t\tAcc;\n\t\t\t\tValue ->\n\t\t\t\t\tmaps:put(Addr, Value, Acc)\n\t\t\tend\n\t\tend,\n\t\t#{},\n\t\tAddresses\n\t).\n\nget_account_tree_range(Tree, Cursor) ->\n\tRange =\n\t\tcase Cursor of\n\t\t\tfirst ->\n\t\t\t\tar_patricia_tree:get_range(?WALLET_LIST_CHUNK_SIZE + 1, Tree);\n\t\t\t_ ->\n\t\t\t\tar_patricia_tree:get_range(Cursor, ?WALLET_LIST_CHUNK_SIZE + 1, Tree)\n\t\tend,\n\tcase length(Range) of\n\t\t?WALLET_LIST_CHUNK_SIZE + 1 ->\n\t\t\t{element(1, hd(Range)), tl(Range)};\n\t\t_ ->\n\t\t\t{last, Range}\n\tend.\n\ncompute_hash(Tree, Diff, Height) ->\n\tTree2 = apply_diff(Diff, Tree),\n\ttrue = Height >= ar_fork:height_2_2(),\n\telement(1, ar_block:hash_wallet_list(Tree2)).\n\nmaybe_add_node(DAG, RootHash, RootHash, _Wallets, _Metadata) ->\n\t%% The wallet list has not changed - there are no transactions\n\t%% and the miner did not claim the reward.\n\tDAG;\nmaybe_add_node(DAG, UpdatedRootHash, RootHash, Wallets, Metadata) ->\n\tcase ar_diff_dag:is_node(DAG, UpdatedRootHash) of\n\t\ttrue ->\n\t\t\t%% The new wallet list is already known from a different 
fork.\n\t\t\tDAG;\n\t\tfalse ->\n\t\t\tar_diff_dag:add_node(DAG, UpdatedRootHash, RootHash, Wallets, Metadata)\n\tend.\n"
  },
  {
    "path": "apps/arweave/src/ar_watchdog.erl",
    "content": "%% This Source Code Form is subject to the terms of the GNU General\n%% Public License, v. 2.0. If a copy of the GPLv2 was not distributed\n%% with this file, You can obtain one at\n%% https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html\n\n%%% @doc Watchdog process. Logs the information about mined blocks or missing external blocks.\n-module(ar_watchdog).\n\n-behaviour(gen_server).\n\n-export([start_link/0, started_hashing/0, block_received_n_confirmations/2, mined_block/3,\n\t\t\tis_mined_block/1, block_orphaned/2]).\n\n-export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-record(state, {\n\tmined_blocks,\n\tminer_logging = false\n}).\n\n%%%===================================================================\n%%% API\n%%%===================================================================\n\nstarted_hashing() ->\n\tgen_server:cast(?MODULE, started_hashing).\n\nblock_received_n_confirmations(BH, Height) ->\n\tgen_server:cast(?MODULE, {block_received_n_confirmations, BH, Height}).\n\nblock_orphaned(BH, Height) ->\n\tgen_server:cast(?MODULE, {block_orphaned, BH, Height}).\n\nmined_block(BH, Height, PrevH) ->\n\tgen_server:cast(?MODULE, {mined_block, BH, Height, PrevH}).\n\nis_mined_block(Block) ->\n\tgen_server:call(?MODULE, {is_mined_block, Block#block.indep_hash}, ?DEFAULT_CALL_TIMEOUT).\n\n%%--------------------------------------------------------------------\n%% @doc\n%% Starts the server\n%%\n%% @spec start_link() -> {ok, Pid} | ignore | {error, Error}\n%% @end\n%%--------------------------------------------------------------------\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\n%%% gen_server callbacks\n%%%===================================================================\n\n%%--------------------------------------------------------------------\n%% @private\n%% @doc\n%% Initializes the server\n%%\n%% @spec init(Args) -> {ok, State} |\n%%\t\t\t\t\t   {ok, State, Timeout} |\n%%\t\t\t\t\t   ignore |\n%%\t\t\t\t\t   {stop, Reason}\n%% @end\n%%--------------------------------------------------------------------\ninit([]) ->\n\tprocess_flag(trap_exit, true),\n\t{ok, Config} = arweave_config:get_env(),\n\tMinerLogging = not lists:member(miner_logging, Config#config.disable),\n\tState = #state{ mined_blocks = maps:new(), miner_logging = MinerLogging },\n\t{ok, State}.\n\n%%--------------------------------------------------------------------\n%% @private\n%% @doc\n%% Handling call messages\n%%\n%% @spec handle_call(Request, From, State) ->\n%%\t\t\t\t\t\t\t\t\t {reply, Reply, State} |\n%%\t\t\t\t\t\t\t\t\t {reply, Reply, State, Timeout} |\n%%\t\t\t\t\t\t\t\t\t {noreply, State} |\n%%\t\t\t\t\t\t\t\t\t {noreply, State, Timeout} |\n%%\t\t\t\t\t\t\t\t\t {stop, Reason, Reply, State} |\n%%\t\t\t\t\t\t\t\t\t {stop, Reason, State}\n%% @end\n%%--------------------------------------------------------------------\nhandle_call({is_mined_block, H}, _From, State) ->\n\t{reply, lists:member(H, maps:values(State#state.mined_blocks)), State};\n\nhandle_call(Request, _From, State) ->\n\t?LOG_ERROR([{event, unhandled_call}, {request, Request}]),\n\t{reply, ok, State}.\n\n%%--------------------------------------------------------------------\n%% @private\n%% @doc\n%% Handling cast messages\n%%\n%% @spec handle_cast(Msg, State) -> {noreply, State} |\n%%\t\t\t\t\t\t\t\t\t{noreply, State, Timeout} |\n%%\t\t\t\t\t\t\t\t\t{stop, 
Reason, State}\n%% @end\n%%--------------------------------------------------------------------\nhandle_cast(started_hashing, State) when State#state.miner_logging == true ->\n\tMessage = \"Starting to hash.\",\n\t?LOG_INFO([{event, starting_to_hash}]),\n\tar:console(\"~s~n\", [Message]),\n\t{noreply, State};\n\nhandle_cast(started_hashing, State) ->\n\t{noreply, State};\n\nhandle_cast({block_received_n_confirmations, BH, Height}, State) ->\n\tMinedBlocks = State#state.mined_blocks,\n\tUpdatedMinedBlocks = case maps:take(Height, MinedBlocks) of\n\t\t{BH, Map} ->\n\t\t\tar_events:send(solution, {confirmed, #{ indep_hash => BH, confirmations => 10 }}),\n\t\t\tar_mining_stats:block_found(),\n\t\t\tcase State#state.miner_logging of\n\t\t\t\ttrue ->\n\t\t\t\t\tMessage = io_lib:format(\"Your block ~s was accepted by the network!\",\n\t\t\t\t\t\t\t[ar_util:encode(BH)]),\n\t\t\t\t\t?LOG_INFO([{event, block_got_10_confirmations}, {block, ar_util:encode(BH)}]),\n\t\t\t\t\tar:console(\"~s~n\", [Message]),\n\t\t\t\t\tar_mining_stats:block_found(),\n\t\t\t\t\tMap;\n\t\t\t\t_ ->\n\t\t\t\t\tMap\n\t\t\tend;\n\t\t{_BH, _Map} ->\n\t\t\t%% The mined block was orphaned.\n\t\t\tar_mining_stats:block_mined_but_orphaned(),\n\t\t\tMinedBlocks;\n\t\terror ->\n\t\t\tMinedBlocks\n\tend,\n\t{noreply, State#state{ mined_blocks = UpdatedMinedBlocks }};\n\nhandle_cast({block_orphaned, BH, Height}, State) ->\n\tMinedBlocks = State#state.mined_blocks,\n\tUpdatedMinedBlocks = case maps:take(Height, MinedBlocks) of\n\t\t{BH, Map} ->\n\t\t\tar_events:send(solution, {orphaned, #{ indep_hash => BH }}),\n\t\t\tcase State#state.miner_logging of\n\t\t\t\ttrue ->\n\t\t\t\t\tMessage = io_lib:format(\"Your block ~s was orphaned.\",\n\t\t\t\t\t\t\t[ar_util:encode(BH)]),\n\t\t\t\t\t?LOG_INFO([{event, mined_block_orphaned}, {block, ar_util:encode(BH)}]),\n\t\t\t\t\tar:console(\"~s~n\", [Message]),\n\t\t\t\t\tMap;\n\t\t\t\t_ ->\n\t\t\t\t\tMap\n\t\t\tend;\n\t\terror ->\n\t\t\tMinedBlocks\n\tend,\n\t{noreply, State#state{ mined_blocks = UpdatedMinedBlocks }};\n\nhandle_cast({mined_block, BH, Height, PrevH}, State) ->\n\tcase State#state.miner_logging of\n\t\ttrue ->\n\t\t\tMessage = io_lib:format(\"Produced candidate block ~s (height ~B, previous block ~s).\",\n\t\t\t\t\t[ar_util:encode(BH), Height, ar_util:encode(PrevH)]),\n\t\t\t?LOG_INFO([{event, mined_block}, {block, ar_util:encode(BH)}, {height, Height},\n\t\t\t\t\t{previous_block, ar_util:encode(PrevH)}]),\n\t\t\tar:console(\"~s~n\", [Message]);\n\t\t_ ->\n\t\t\tok\n\tend,\n\tMinedBlocks = case maps:is_key(Height, State#state.mined_blocks) of\n\t\tfalse ->\n\t\t\tmaps:put(Height, BH, State#state.mined_blocks);\n\t\t_ ->\n\t\t\tState#state.mined_blocks\n\tend,\n\t{noreply, State#state{ mined_blocks = MinedBlocks } };\n\nhandle_cast(Msg, State) ->\n\t?LOG_ERROR([{event, unhandled_cast}, {message, Msg}]),\n\t{noreply, State}.\n\nhandle_info({'EXIT', _Pid, normal}, State) ->\n\t%% Gun sets monitors on the spawned processes, so thats the reason why we\n\t%% catch them here.\n\t{noreply, State};\n\nhandle_info(Info, State) ->\n\t?LOG_ERROR([{event, unhandled_info}, {module, ?MODULE}, {message, Info}]),\n\t{noreply, State}.\n\n%%--------------------------------------------------------------------\n%% @private\n%% @doc\n%% This function is called by a gen_server when it is about to\n%% terminate. It should be the opposite of Module:init/1 and do any\n%% necessary cleaning up. When it returns, the gen_server terminates\n%% with Reason. 
The return value is ignored.\n%%\n%% @spec terminate(Reason, State) -> void()\n%% @end\n%%--------------------------------------------------------------------\nterminate(Reason, _State) ->\n\t?LOG_INFO([{module, ?MODULE},{pid, self()},{callback, terminate},{reason, Reason}]),\n\tok.\n"
  },
  {
    "path": "apps/arweave/src/ar_weave.erl",
    "content": "-module(ar_weave).\n\n-export([init/0, init/1, init/2, init/3, create_mainnet_genesis_txs/0, generate_data/3,\n\t\tadd_mainnet_v1_genesis_txs/0]).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave/include/ar_consensus.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"arweave/include/ar_pricing.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Create a genesis block. The genesis block includes one transaction with\n%% at least one small chunk and the total data size equal to ar_block:strict_data_split_threshold(),\n%% to test the code branches dealing with small chunks placed before the threshold.\ninit() ->\n\tinit([]).\n\n%% @doc Create a genesis block with the given accounts. One system account is added to the\n%% list - we use it to sign a transaction included in the genesis block.\ninit(WalletList) ->\n\tinit(WalletList, 1).\n\ninit(WalletList, Diff) ->\n\tSize = 3 * ?DATA_CHUNK_SIZE, % Matches ?STRICT_DATA_SPLIT_THRESHOLD in tests.\n\tinit(WalletList, Diff, Size).\n\ninit(_WalletList, _Diff, GenesisDataSize) when GenesisDataSize > (4 * ?GiB) ->\n\terlang:error({size_exceeds_limit, \"GenesisDataSize exceeds 4 GiB\"});\n\n%% @doc Create a genesis block with the given accounts and difficulty.\ninit(WalletList, Diff, GenesisDataSize) ->\n\t{{_, _, _}, {_, _}} = Key = ar_wallet:new_keyfile(),\n\tTX = create_genesis_tx(Key, GenesisDataSize),\n\tWalletList2 = WalletList ++ [{ar_wallet:to_address(Key), 0, TX#tx.id}],\n\tTXs = [TX],\n\tAccountTree = ar_patricia_tree:from_proplist([{A, {B, LTX}}\n\t\t\t|| {A, B, LTX} <- WalletList2]),\n\tWLH = element(1, ar_block:hash_wallet_list(AccountTree)),\n\tSizeTaggedTXs = ar_block:generate_size_tagged_list_from_txs(TXs, 0),\n\tBlockSize = case SizeTaggedTXs of [] -> 0; _ -> element(2, lists:last(SizeTaggedTXs)) end,\n\tSizeTaggedDataRoots = [{Root, Offset} || {{_, Root}, Offset} <- SizeTaggedTXs],\n\t{TXRoot, _Tree} = ar_merkle:generate_tree(SizeTaggedDataRoots),\n\tTimestamp = os:system_time(second),\n\tB0 =\n\t\t#block{\n\t\t\tnonce = <<>>,\n\t\t\ttxs = TXs,\n\t\t\ttx_root = TXRoot,\n\t\t\twallet_list = WLH,\n\t\t\tdiff = Diff,\n\t\t\tcumulative_diff = ar_difficulty:next_cumulative_diff(0, Diff, 0),\n\t\t\tweave_size = BlockSize,\n\t\t\tblock_size = BlockSize,\n\t\t\treward_pool = 0,\n\t\t\ttimestamp = Timestamp,\n\t\t\tlast_retarget = Timestamp,\n\t\t\tsize_tagged_txs = SizeTaggedTXs,\n\t\t\tusd_to_ar_rate = ?NEW_WEAVE_USD_TO_AR_RATE,\n\t\t\tscheduled_usd_to_ar_rate = ?NEW_WEAVE_USD_TO_AR_RATE,\n\t\t\tpacking_2_5_threshold = 0,\n\t\t\tstrict_data_split_threshold = BlockSize,\n\t\t\taccount_tree = AccountTree\n\t\t},\n\tB1 =\n\t\tcase ar_fork:height_2_6() > 0 of\n\t\t\tfalse ->\n\t\t\t\tRewardKey = element(2, ar_wallet:new()),\n\t\t\t\tRewardAddr = ar_wallet:to_address(RewardKey),\n\t\t\t\tHashRate = ar_difficulty:get_hash_rate_fixed_ratio(B0),\n\t\t\t\tRewardHistory = [{RewardAddr, HashRate, 10, 1}],\n\t\t\t\tPricePerGiBMinute = ar_pricing:get_price_per_gib_minute(0, \n\t\t\t\t\t\tB0#block{ reward_history = RewardHistory, denomination = 1 }),\n\t\t\t\tB0#block{ hash = crypto:strong_rand_bytes(32),\n\t\t\t\t\t\tnonce = 0, recall_byte = 0, partition_number = 0,\n\t\t\t\t\t\treward_key = RewardKey, reward_addr = RewardAddr,\n\t\t\t\t\t\treward = 10,\n\t\t\t\t\t\trecall_byte2 = 0, nonce_limiter_info = 
#nonce_limiter_info{\n\t\t\t\t\t\t\t\toutput = crypto:strong_rand_bytes(32),\n\t\t\t\t\t\t\t\tseed = crypto:strong_rand_bytes(48),\n\t\t\t\t\t\t\t\tpartition_upper_bound = BlockSize,\n\t\t\t\t\t\t\t\tnext_seed = crypto:strong_rand_bytes(48),\n\t\t\t\t\t\t\t\tnext_partition_upper_bound = BlockSize },\n\t\t\t\t\t\t\tprice_per_gib_minute = PricePerGiBMinute,\n\t\t\t\t\t\t\tscheduled_price_per_gib_minute = PricePerGiBMinute,\n\t\t\t\t\t\t\treward_history = RewardHistory,\n\t\t\t\t\t\t\treward_history_hash = ar_rewards:reward_history_hash(0, <<>>,\n\t\t\t\t\t\t\t\t\tRewardHistory)\n\t\t\t\t\t\t};\n\t\t\ttrue ->\n\t\t\t\tB0\n\t\tend,\n\tB2 =\n\t\tcase ar_fork:height_2_7() > 0 of\n\t\t\tfalse ->\n\t\t\t\tInitialHistory = get_initial_block_time_history(),\n\t\t\t\tB1#block{\n\t\t\t\t\tmerkle_rebase_support_threshold = ar_block:strict_data_split_threshold() * 2,\n\t\t\t\t\tchunk_hash = crypto:strong_rand_bytes(32),\n\t\t\t\t\tblock_time_history = InitialHistory,\n\t\t\t\t\tblock_time_history_hash = ar_block_time_history:hash(InitialHistory)\n\t\t\t\t};\n\t\t\ttrue ->\n\t\t\t\tB1\n\t\tend,\n\t[B2#block{ indep_hash = ar_block:indep_hash(B2) }].\n\n-ifdef(AR_TEST).\nget_initial_block_time_history() ->\n\t[{1, 1, 1}].\n-else.\nget_initial_block_time_history() ->\n\t[{120, 1, 1}].\n-endif.\n\n%% @doc: create a genesis transaction with the given key and data size. This is only used\n%% in tests and when launching a localnet node.\ncreate_genesis_tx(Key, Size) ->\n\t{_, {_, Pk}} = Key,\n\tUnsignedTX =\n\t\t(ar_tx:new())#tx{\n\t\t\towner = Pk,\n\t\t\treward = 0,\n\t\t\tdata = generate_genesis_data(Size),\n\t\t\tdata_size = Size,\n\t\t\ttarget = <<>>,\n\t\t\tquantity = 0,\n\t\t\ttags = [],\n\t\t\tlast_tx = <<>>,\n\t\t\tformat = 1\n\t\t},\n\tar_tx:sign_v1(UnsignedTX, Key).\n\n%% @doc: generate binary data to be used as genesis data in tests. That data is incrementing\n%% integer data in 4 byte chunks. e.g.\n%% <<0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 2, ...>>\n%% This makes it easier to assert correct chunk data in tests.\n-spec generate_genesis_data(integer()) -> binary().\ngenerate_genesis_data(DataSize) ->\n    FullChunks = DataSize div 4,\n    LeftoverBytes = DataSize rem 4,\n    IncrementingData = generate_data(0, FullChunks * 4, <<>>),\n    add_padding(IncrementingData, LeftoverBytes).\n\ngenerate_data(CurrentValue, RemainingBytes, Acc) when RemainingBytes >= 4 ->\n\tChunk = <<CurrentValue:32/integer>>,\n\tgenerate_data(CurrentValue + 1, RemainingBytes - 4, <<Acc/binary, Chunk/binary>>);\ngenerate_data(_, RemainingBytes, Acc) ->\n\tadd_padding(Acc, RemainingBytes).\n\nadd_padding(Data, 0) ->\n    Data;\nadd_padding(Data, LeftoverBytes) ->\n    %% Append LeftoverBytes bytes of 0xFF padding. The segment must be typed as\n    %% binary; the default integer type would fail since Padding is a binary.\n    Padding = <<16#FF:8, 16#FF:8, 16#FF:8, 16#FF:8>>,\n    <<Data/binary, Padding:LeftoverBytes/binary>>.\n\nadd_mainnet_v1_genesis_txs() ->\n\tcase filelib:is_dir(\"genesis_data/genesis_txs\") of\n\t\ttrue ->\n\t\t\t{ok, Files} = file:list_dir(\"genesis_data/genesis_txs\"),\n\t\t\t{ok, Config} = arweave_config:get_env(),\n\t\t\tlists:foldl(\n\t\t\t\tfun(F, Acc) ->\n\t\t\t\t\tSourcePath = \"genesis_data/genesis_txs/\" ++ F,\n\t\t\t\t\tTargetPath = Config#config.data_dir ++ \"/\" ++ ?TX_DIR ++ \"/\" ++ F,\n\t\t\t\t\tfile:copy(SourcePath, TargetPath),\n\t\t\t\t\t[ar_util:decode(hd(string:split(F, \".\")))|Acc]\n\t\t\t\tend,\n\t\t\t\t[],\n\t\t\t\tFiles\n\t\t\t);\n\t\tfalse ->\n\t\t\t?LOG_WARNING(\"genesis_data/genesis_txs directory not found. 
Node might not index the genesis \"\n\t\t\t\t\t\t \"block transactions.\"),\n\t\t\t[]\n\tend.\n\n\n%% @doc Return the mainnet genesis transactions.\ncreate_mainnet_genesis_txs() ->\n\tTXs = lists:map(\n\t\tfun({M}) ->\n\t\t\t{Priv, Pub} = ar_wallet:new(),\n\t\t\tLastTx = <<>>,\n\t\t\tData = unicode:characters_to_binary(M),\n\t\t\tTX = ar_tx:new(Data, 0, LastTx),\n\t\t\tReward = 0,\n\t\t\tSignedTX = ar_tx:sign_v1(TX#tx{reward = Reward}, Priv, Pub),\n\t\t\tar_storage:write_tx(SignedTX),\n\t\t\tSignedTX\n\t\tend,\n\t\t?GENESIS_BLOCK_MESSAGES\n\t),\n\tar_storage:write_file_atomic(\n\t\t\"genesis_wallets.csv\",\n\t\tlists:map(fun(T) -> binary_to_list(ar_util:encode(T#tx.id)) ++ \",\" end, TXs)\n\t),\n\t[T#tx.id || T <- TXs].\n"
  },
  {
    "path": "apps/arweave/src/ar_webhook.erl",
    "content": "%% This Source Code Form is subject to the terms of the GNU General\n%% Public License, v. 2.0. If a copy of the GPLv2 was not distributed\n%% with this file, You can obtain one at\n%% https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html\n%%\n-module(ar_webhook).\n\n-behaviour(gen_server).\n\n-export([start_link/1]).\n\n-export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, code_change/3]).\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-define(NUMBER_OF_TRIES, 10).\n-define(WAIT_BETWEEN_TRIES, 30 * 1000).\n\n-define(BASE_HEADERS, [\n\t{<<\"content-type\">>, <<\"application/json\">>}\n]).\n\n-define(MAX_TX_OFFSET_CACHE_SIZE, 100_000). %% Max number of transactions cached\n\n%% Functions which update or query the #tx_offset_cache are named with the `cache_` prefix\n%% e.g. `cache_is_tx_data_marked_synced/2`.\n-record(tx_offset_cache, {\n\ttxid_to_timestamp = #{}, % Map TXID => Timestamp\n\ttimestamp_txid = gb_sets:new(), % Ordered set of {Timestatmp, TXID}\n\ttxid_to_start_offset_end_offset = #{}, % Map TXID => {Start, End}\n\tend_offset_txid_start_offset = gb_sets:new(), % Ordered set of {End, TXID, Start}\n\tsize = 0,\n\ttxid_to_status = #{} % Map TXID => synced | unsynced\n}).\n\n%% Internal state definition.\n-record(state, {\n\turl,\n\theaders,\n\n\tlisten_to_block_stream = false,\n\tlisten_to_transaction_stream = false,\n\tlisten_to_transaction_data_stream = false,\n\n\ttx_offset_cache = #tx_offset_cache{}\n}).\n\n%%%===================================================================\n%%% API\n%%%===================================================================\n\n%%--------------------------------------------------------------------\n%% @doc\n%% Starts the server\n%%\n%% @spec start_link() -> {ok, Pid} | ignore | {error, Error}\n%% @end\n%%--------------------------------------------------------------------\nstart_link(Args) ->\n\tgen_server:start_link(?MODULE, Args, []).\n\n%%%===================================================================\n%%% gen_server callbacks\n%%%===================================================================\n\n%%--------------------------------------------------------------------\n%% @private\n%% @doc\n%% Initializes the server\n%%\n%% @spec init(Args) -> {ok, State} |\n%%\t\t\t\t\t   {ok, State, Timeout} |\n%%\t\t\t\t\t   ignore |\n%%\t\t\t\t\t   {stop, Reason}\n%% @end\n%%--------------------------------------------------------------------\ninit(Hook) ->\n\t?LOG_DEBUG(\"Started web hook for ~p\", [Hook]),\n\tState = lists:foldl(\n\t\tfun (transaction, Acc) ->\n\t\t\t\tar_events:subscribe(tx),\n\t\t\t\tAcc#state{ listen_to_transaction_stream = true };\n\t\t\t(block, Acc) ->\n\t\t\t\tok = ar_events:subscribe(block),\n\t\t\t\tAcc#state{ listen_to_block_stream = true };\n\t\t\t(transaction_data, Acc) ->\n\t\t\t\tar_events:subscribe(tx),\n\t\t\t\tok = ar_events:subscribe(sync_record),\n\t\t\t\tAcc#state{ listen_to_transaction_data_stream = true };\n\t\t\t(solution, Acc) ->\n\t\t\t\tok = ar_events:subscribe(solution),\n\t\t\t\tAcc;\n\t\t\t(_, Acc) ->\n\t\t\t\t?LOG_WARNING(\"Wrong event name in webhook ~p\", [Hook]),\n\t\t\t\tAcc\n\t\tend,\n\t\t#state{},\n\t\tHook#config_webhook.events\n\t),\n\tState2 = State#state{\n\t\turl = Hook#config_webhook.url,\n\t\theaders = Hook#config_webhook.headers\n\t},\n\t{ok, State2}.\n\n%%--------------------------------------------------------------------\n%% @private\n%% @doc\n%% Handling call messages\n%%\n%% @spec handle_call(Request, From, State) 
->\n%%\t\t\t\t\t\t\t\t\t {reply, Reply, State} |\n%%\t\t\t\t\t\t\t\t\t {reply, Reply, State, Timeout} |\n%%\t\t\t\t\t\t\t\t\t {noreply, State} |\n%%\t\t\t\t\t\t\t\t\t {noreply, State, Timeout} |\n%%\t\t\t\t\t\t\t\t\t {stop, Reason, Reply, State} |\n%%\t\t\t\t\t\t\t\t\t {stop, Reason, State}\n%% @end\n%%--------------------------------------------------------------------\nhandle_call(Request, _From, State) ->\n\t?LOG_ERROR(\"unhandled call: ~p\", [Request]),\n\t{reply, ok, State}.\n\n%%--------------------------------------------------------------------\n%% @private\n%% @doc\n%% Handling cast messages\n%%\n%% @spec handle_cast(Msg, State) -> {noreply, State} |\n%%\t\t\t\t\t\t\t\t\t{noreply, State, Timeout} |\n%%\t\t\t\t\t\t\t\t\t{stop, Reason, State}\n%% @end\n%%--------------------------------------------------------------------\nhandle_cast(Msg, State) ->\n\t?LOG_ERROR([{event, unhandled_cast}, {module, ?MODULE}, {message, Msg}]),\n\t{noreply, State}.\n\n%%--------------------------------------------------------------------\n%% @private\n%% @doc\n%% Handling all non call/cast messages\n%%\n%% @spec handle_info(Info, State) -> {noreply, State} |\n%%\t\t\t\t\t\t\t\t\t {noreply, State, Timeout} |\n%%\t\t\t\t\t\t\t\t\t {stop, Reason, State}\n%% @end\n%%--------------------------------------------------------------------\nhandle_info({event, block, {new, Block, _Source}},\n\t\t#state{ listen_to_block_stream = true } = State) ->\n\tURL = State#state.url,\n\tHeaders = State#state.headers,\n\tcall_webhook(URL, Headers, Block, block),\n\t{noreply, State};\n\nhandle_info({event, block, _}, State) ->\n\t{noreply, State};\n\nhandle_info({event, tx, {new, TX, _Source}},\n\t\t#state{ listen_to_transaction_stream = true } = State) ->\n\tURL = State#state.url,\n\tHeaders = State#state.headers,\n\tcall_webhook(URL, Headers, TX, transaction),\n\t{noreply, State};\n\nhandle_info({event, tx, {registered_offset, TXID, EndOffset, Size}},\n\t\t#state{ listen_to_transaction_data_stream = true } = State) ->\n\tStart = EndOffset - Size,\n\tCache = State#state.tx_offset_cache,\n\tcase cache_is_tx_data_marked_synced(TXID, Cache) of\n\t\ttrue ->\n\t\t\t%% This event has already been processed for TXID, no need to process again\n\t\t\t{noreply, State};\n\t\tfalse ->\n\t\t\tCache2 = cache_add_tx_offset_data([{TXID, Start, EndOffset}], Cache),\n\t\t\tState2 = State#state{ tx_offset_cache = Cache2 },\n\t\t\t{noreply,\n\t\t\t\tmaybe_call_transaction_data_synced_webhook(Start, EndOffset, TXID,\n\t\t\t\t\t\tno_store_id, State2)}\n\tend;\n\nhandle_info({event, tx, {orphaned, TX}},\n\t\t#state{ listen_to_transaction_data_stream = true } = State) ->\n\tURL = State#state.url,\n\tHeaders = State#state.headers,\n\tPayload = #{ event => transaction_orphaned, txid => ar_util:encode(TX#tx.id) },\n\tcall_webhook(URL, Headers, Payload, transaction_orphaned),\n\tCache = State#state.tx_offset_cache,\n\tCache2 = cache_remove_tx_offset_data(TX#tx.id, Cache),\n\t{noreply, State#state{ tx_offset_cache = Cache2 }};\n\nhandle_info({event, tx, _}, State) ->\n\t{noreply, State};\n\nhandle_info({event, sync_record, {add_range, Start, End, ar_data_sync,\n\t\t\t#{ module := Module }}},\n\t\t#state{ listen_to_transaction_data_stream = true } = State) ->\n\tcase Module of\n\t\t?DEFAULT_MODULE ->\n\t\t\t%% The disk pool data is not guaranteed to stay forever so we\n\t\t\t%% do not want to push it here.\n\t\t\t{noreply, State};\n\t\t_ ->\n\t\t\t{noreply, handle_sync_record_add_range(Start, End, Module, State)}\n\tend;\n\nhandle_info({event, 
sync_record, {global_remove_range, Start, End}},\n\t\t#state{ listen_to_transaction_data_stream = true } = State) ->\n\t{noreply, handle_sync_record_remove_range(Start, End, State)};\n\nhandle_info({event, sync_record, _}, State) ->\n\t{noreply, State};\n\nhandle_info({event, solution, {rejected, #{ solution_hash := SolutionH, reason := Reason, source := Source }}}, State) ->\n\tURL = State#state.url,\n\tHeaders = State#state.headers,\n\tPayload = #{ event => solution_rejected,\n\t\tsolution_hash => ar_util:encode(SolutionH),\n\t\treason => Reason,\n\t\tsource => Source },\n\tcall_webhook(URL, Headers, Payload, solution_rejected),\n\t{noreply, State};\nhandle_info({event, solution, {stale, #{ solution_hash := SolutionH, source := Source }}}, State) ->\n\tURL = State#state.url,\n\tHeaders = State#state.headers,\n\tPayload = #{ event => solution_stale,\n\t\tsolution_hash => ar_util:encode(SolutionH),\n\t\tsource => Source },\n\tcall_webhook(URL, Headers, Payload, solution_stale),\n\t{noreply, State};\nhandle_info({event, solution, {partial, #{ solution_hash := SolutionH, source := Source }}}, State) ->\n\tURL = State#state.url,\n\tHeaders = State#state.headers,\n\tPayload = #{ event => solution_partial,\n\t\tsolution_hash => ar_util:encode(SolutionH),\n\t\tsource => Source },\n\tcall_webhook(URL, Headers, Payload, solution_partial),\n\t{noreply, State};\nhandle_info({event, solution, {accepted, #{ indep_hash := H, source := Source, is_rebase := IsRebase }}}, State) ->\n\tURL = State#state.url,\n\tHeaders = State#state.headers,\n\tPayload = #{ event => solution_accepted,\n\t\tindep_hash => ar_util:encode(H),\n\t\tsource => Source,\n\t\tis_rebase => IsRebase },\n\tcall_webhook(URL, Headers, Payload, solution_accepted),\n\t{noreply, State};\nhandle_info({event, solution, {confirmed, #{ indep_hash := BH, confirmations := N }}}, State) ->\n\tURL = State#state.url,\n\tHeaders = State#state.headers,\n\tPayload = #{ event => solution_confirmed,\n\t\tindep_hash => ar_util:encode(BH),\n\t\tconfirmations => N },\n\tcall_webhook(URL, Headers, Payload, solution_confirmed),\n\t{noreply, State};\nhandle_info({event, solution, {orphaned, #{ indep_hash := BH }}}, State) ->\n\tURL = State#state.url,\n\tHeaders = State#state.headers,\n\tPayload = #{ event => solution_orphaned,\n\t\tindep_hash => ar_util:encode(BH) },\n\tcall_webhook(URL, Headers, Payload, solution_orphaned),\n\t{noreply, State};\nhandle_info({event, solution, _}, State) ->\n\t{noreply, State};\n\nhandle_info(Info, State) ->\n\t?LOG_ERROR([{event, unhandled_info}, {module, ?MODULE}, {info, Info}]),\n\t{noreply, State}.\n\n%%--------------------------------------------------------------------\n%% @private\n%% @doc\n%% This function is called by a gen_server when it is about to\n%% terminate. It should be the opposite of Module:init/1 and do any\n%% necessary cleaning up. When it returns, the gen_server terminates\n%% with Reason. 
The return value is ignored.\n%%\n%% @spec terminate(Reason, State) -> void()\n%% @end\n%%--------------------------------------------------------------------\nterminate(Reason, _State) ->\n\t?LOG_INFO([{module, ?MODULE},{pid, self()},{callback, terminate},{reason, Reason}]),\n\tok.\n\n%%--------------------------------------------------------------------\n%% @private\n%% @doc\n%% Convert process state when code is changed\n%%\n%% @spec code_change(OldVsn, State, Extra) -> {ok, NewState}\n%% @end\n%%--------------------------------------------------------------------\ncode_change(_OldVsn, State, _Extra) ->\n\t{ok, State}.\n\n%%%===================================================================\n%%% Internal functions\n%%%===================================================================\n\ncall_webhook(URL, Headers, Entity, Event) ->\n\tdo_call_webhook(URL, Headers, Entity, Event, 0).\n\ndo_call_webhook(URL, Headers, Entity, Event, N) when N < ?NUMBER_OF_TRIES ->\n\t#{ host := Host, path := Path } = Map = uri_string:parse(URL),\n\tQuery = maps:get(query, Map, <<>>),\n\tPeer = case maps:get(port, Map, undefined) of\n\t\tundefined ->\n\t\t\tcase maps:get(scheme, Map, undefined) of\n\t\t\t\t\"https\" ->\n\t\t\t\t\t{binary_to_list(Host), 443};\n\t\t\t\t_ ->\n\t\t\t\t\t{binary_to_list(Host), 1984}\n\t\t\tend;\n\t\tPort ->\n\t\t\t{binary_to_list(Host), Port}\n\tend,\n\tcase catch\n\t\tar_http:req(#{\n\t\t\tmethod => post,\n\t\t\tpeer => Peer,\n\t\t\tpath => binary_to_list(<<Path/binary, Query/binary>>),\n\t\t\theaders => ?BASE_HEADERS ++ Headers,\n\t\t\tbody => to_json(Entity),\n\t\t\ttimeout => 10000,\n\t\t\tis_peer_request => false\n\t\t})\n\tof\n\t\t{ok, {{<<\"200\">>, _}, _, _, _, _}} = Result ->\n\t\t\t?LOG_INFO([\n\t\t\t\t{event, webhook_call_success},\n\t\t\t\t{webhook_event, Event},\n\t\t\t\t{id, entity_id(Entity)},\n\t\t\t\t{url, URL},\n\t\t\t\t{headers, Headers},\n\t\t\t\t{response, Result}\n\t\t\t]),\n\t\t\tok;\n\t\tError ->\n\t\t\t?LOG_ERROR([\n\t\t\t\t{event, webhook_call_failure},\n\t\t\t\t{webhook_event, Event},\n\t\t\t\t{id, entity_id(Entity)},\n\t\t\t\t{url, URL},\n\t\t\t\t{headers, Headers},\n\t\t\t\t{response, Error},\n\t\t\t\t{retry_in, ?WAIT_BETWEEN_TRIES}\n\t\t\t]),\n\t\t\ttimer:sleep(?WAIT_BETWEEN_TRIES),\n\t\t\tdo_call_webhook(URL, Headers, Entity, Event, N + 1)\n\tend;\ndo_call_webhook(URL, Headers, Entity, Event, _N) ->\n\t?LOG_WARNING([{event, gave_up_webhook_call},\n\t\t{webhook_event, Event},\n\t\t{id, entity_id(Entity)},\n\t\t{url, URL},\n\t\t{headers, Headers},\n\t\t{number_of_tries, ?NUMBER_OF_TRIES},\n\t\t{wait_between_tries, ?WAIT_BETWEEN_TRIES}\n\t]),\n\tok.\n\nentity_id(#block{ indep_hash = ID }) -> ar_util:encode(ID);\nentity_id(#tx{ id = ID }) -> ar_util:encode(ID);\nentity_id(#{ txid := TXID }) -> ar_util:encode(TXID);\nentity_id(#{ indep_hash := H }) -> ar_util:encode(H);\nentity_id(#{ solution_hash := H }) -> ar_util:encode(H).\n\nto_json(#block{} = Block) ->\n\t{JSONKVPairs} = ar_serialize:block_to_json_struct(Block),\n\tJSONStruct = {lists:keydelete(wallet_list, 1, JSONKVPairs)},\n\tar_serialize:jsonify({[{block, JSONStruct}]});\nto_json(#tx{} = TX) ->\n\t{JSONKVPairs1} = ar_serialize:tx_to_json_struct(TX),\n\tJSONKVPairs2 = lists:keydelete(data, 1, JSONKVPairs1),\n\tJSONKVPairs3 = [{data_size, TX#tx.data_size} | JSONKVPairs2],\n\tJSONStruct = {JSONKVPairs3},\n\tar_serialize:jsonify({[{transaction, JSONStruct}]});\nto_json(Map) when is_map(Map) ->\n\tjiffy:encode(Map).\n\nhandle_sync_record_add_range(Start, End, Module, State) 
->\n\thandle_sync_record_add_range(Start, End, Module, State, 0).\n\nhandle_sync_record_add_range(Start, End, Module, State, N) when N < ?NUMBER_OF_TRIES ->\n\tcase get_tx_offset_data(Start, End, State#state.tx_offset_cache) of\n\t\t{ok, Data, Cache} ->\n\t\t\tprocess_updated_tx_data(Data, Module, State#state{ tx_offset_cache = Cache });\n\t\tnot_found ->\n\t\t\tState;\n\t\tError ->\n\t\t\t?LOG_WARNING([{event, failed_to_process_webhook_sync_record_add_range},\n\t\t\t\t\t{error, io_lib:format(\"~p\", [Error])},\n\t\t\t\t\t{range_start, Start},\n\t\t\t\t\t{range_end, End}]),\n\t\t\ttimer:sleep(?WAIT_BETWEEN_TRIES),\n\t\t\t%% Retry with the same Module and the incremented attempt counter.\n\t\t\thandle_sync_record_add_range(Start, End, Module, State, N + 1)\n\tend;\nhandle_sync_record_add_range(Start, End, _Module, State, _N) ->\n\t?LOG_WARNING([{event, gave_up_webhook_tx_offset_data_fetch},\n\t\t{range_start, Start},\n\t\t{range_end, End},\n\t\t{number_of_tries, ?NUMBER_OF_TRIES},\n\t\t{wait_between_tries, ?WAIT_BETWEEN_TRIES}\n\t]),\n\tState.\n\nprocess_updated_tx_data([], _Module, State) ->\n\tState;\nprocess_updated_tx_data([{TXID, Start, End} | Data], Module, State) ->\n\tCache = State#state.tx_offset_cache,\n\tcase cache_is_tx_data_marked_synced(TXID, Cache) of\n\t\ttrue ->\n\t\t\t%% transaction_data_synced webhook has already been sent, advance to next range\n\t\t\tprocess_updated_tx_data(Data, Module, State);\n\t\tfalse ->\n\t\t\t%% Maybe send the transaction_data_synced webhook before advancing to the next range\n\t\t\tprocess_updated_tx_data(Data, Module,\n\t\t\t\tmaybe_call_transaction_data_synced_webhook(Start, End, TXID, Module, State))\n\tend.\n\n%% @doc Return a list of {TXID, Start2, End2} for recorded transactions from the given range.\nget_tx_offset_data(Start, End, Cache) ->\n\tcase cache_get_tx_offset_data(Start, End, Cache) of\n\t\t[] ->\n\t\t\tcase ar_data_sync:get_tx_offset_data_in_range(Start, End) of\n\t\t\t\t{ok, []} ->\n\t\t\t\t\tnot_found;\n\t\t\t\t{ok, Data} ->\n\t\t\t\t\tCache2 = cache_add_tx_offset_data(Data, Cache),\n\t\t\t\t\t{ok, Data, Cache2};\n\t\t\t\tError ->\n\t\t\t\t\tError\n\t\t\tend;\n\t\tList ->\n\t\t\t{ok, List, Cache}\n\tend.\n\ncache_get_tx_offset_data(Start, End, Cache) ->\n\t#tx_offset_cache{\n\t\tend_offset_txid_start_offset = Set\n\t} = Cache,\n\t%% 32-byte TXID > atom n => we choose any element with End at least Start + 1.\n\tIterator = gb_sets:iterator_from({Start + 1, n, n}, Set),\n\tcache_get_tx_offset_data_from_iterator(End, Iterator).\n\ncache_get_tx_offset_data_from_iterator(End, Iterator) ->\n\tcase gb_sets:next(Iterator) of\n\t\tnone ->\n\t\t\t[];\n\t\t{{End2, TXID, Start2}, Iterator2} when Start2 < End ->\n\t\t\t[{TXID, Start2, End2} | cache_get_tx_offset_data_from_iterator(End, Iterator2)];\n\t\t_ ->\n\t\t\t[]\n\tend.\n\ncache_add_tx_offset_data([], Cache) ->\n\tCache;\ncache_add_tx_offset_data([{TXID, Start, End} | Data], Cache) ->\n\t#tx_offset_cache{\n\t\ttxid_to_timestamp = Map,\n\t\ttimestamp_txid = Set,\n\t\tend_offset_txid_start_offset = OffsetSet,\n\t\ttxid_to_start_offset_end_offset = OffsetMap,\n\t\ttxid_to_status = StatusMap,\n\t\tsize = Size\n\t} = Cache,\n\tTimestamp = erlang:monotonic_time(microsecond),\n\tSet2 =\n\t\tcase maps:get(TXID, Map, not_found) of\n\t\t\tnot_found ->\n\t\t\t\tSet;\n\t\t\tPrevTimestamp ->\n\t\t\t\tgb_sets:del_element({PrevTimestamp, TXID}, Set)\n\t\tend,\n\tMap2 = maps:put(TXID, Timestamp, Map),\n\tcheck_offset_set_consistency(Start, End, TXID, OffsetSet),\n\tSet3 = gb_sets:add_element({Timestamp, TXID}, Set2),\n\tOffsetSet2 = gb_sets:add_element({End, TXID, Start}, 
OffsetSet),\n\tOffsetMap2 =\n\t\tcase maps:get(TXID, OffsetMap, not_found) of\n\t\t\tnot_found ->\n\t\t\t\tmaps:put(TXID, {Start, End}, OffsetMap);\n\t\t\t{Start, End} ->\n\t\t\t\tOffsetMap;\n\t\t\t{Start2, End2} ->\n\t\t\t\t?LOG_WARNING([{event, inconsistent_tx_offset_cache},\n\t\t\t\t\t\t{check, txid_to_start_offset_end_offset},\n\t\t\t\t\t\t{cached_tx_start_offset, Start2},\n\t\t\t\t\t\t{cached_tx_end_offset, End2},\n\t\t\t\t\t\t{tx_start_offset, Start},\n\t\t\t\t\t\t{tx_end_offset, End},\n\t\t\t\t\t\t{txid, ar_util:encode(TXID)}]),\n\t\t\t\tmaps:put(TXID, {Start, End}, OffsetMap)\n\t\tend,\n\tCache2 = Cache#tx_offset_cache{\n\t\ttxid_to_timestamp = Map2,\n\t\ttimestamp_txid = Set3,\n\t\tend_offset_txid_start_offset = OffsetSet2,\n\t\ttxid_to_start_offset_end_offset = OffsetMap2,\n\t\ttxid_to_status = maps:remove(TXID, StatusMap),\n\t\tsize = Size + 1\n\t},\n\tCache3 = maybe_remove_oldest(Cache2),\n\tcache_add_tx_offset_data(Data, Cache3).\n\ncheck_offset_set_consistency(Start, End, TXID, OffsetSet) ->\n\t%% 32-byte TXID > atom n => we choose any element with End at least Start + 1.\n\tIterator = gb_sets:iterator_from({Start + 1, n, n}, OffsetSet),\n\tcheck_offset_set_consistency2(Start, End, TXID, Iterator).\n\ncheck_offset_set_consistency2(Start, End, TXID, Iterator) ->\n\tcase gb_sets:next(Iterator) of\n\t\tnone ->\n\t\t\tok;\n\t\t{{End2, TXID2, Start2}, _Iterator2} when Start2 < End ->\n\t\t\t?LOG_WARNING([{event, inconsistent_tx_offset_cache},\n\t\t\t\t\t{check, end_offset_txid_start_offset},\n\t\t\t\t\t{cached_tx_start_offset, Start2},\n\t\t\t\t\t{cached_tx_end_offset, End2},\n\t\t\t\t\t{cached_txid, ar_util:encode(TXID2)},\n\t\t\t\t\t{tx_start_offset, Start},\n\t\t\t\t\t{tx_end_offset, End},\n\t\t\t\t\t{txid, ar_util:encode(TXID)}]);\n\t\t{_, Iterator2} ->\n\t\t\tcheck_offset_set_consistency2(Start, End, TXID, Iterator2)\n\tend.\n\nmaybe_remove_oldest(Cache) ->\n\t#tx_offset_cache{\n\t\ttxid_to_timestamp = Map,\n\t\ttimestamp_txid = Set,\n\t\tend_offset_txid_start_offset = OffsetSet,\n\t\ttxid_to_start_offset_end_offset = OffsetMap,\n\t\ttxid_to_status = StatusMap,\n\t\tsize = Size\n\t} = Cache,\n\tcase Size > ?MAX_TX_OFFSET_CACHE_SIZE of\n\t\tfalse ->\n\t\t\tCache;\n\t\ttrue ->\n\t\t\t{{_Timestamp, TXID}, Set2} = gb_sets:take_smallest(Set),\n\t\t\t{Start, End} = maps:get(TXID, OffsetMap),\n\t\t\tOffsetSet2 = gb_sets:del_element({End, TXID, Start}, OffsetSet),\n\t\t\tCache#tx_offset_cache{\n\t\t\t\ttxid_to_timestamp = maps:remove(TXID, Map),\n\t\t\t\ttimestamp_txid = Set2,\n\t\t\t\tend_offset_txid_start_offset = OffsetSet2,\n\t\t\t\ttxid_to_start_offset_end_offset = maps:remove(TXID, OffsetMap),\n\t\t\t\ttxid_to_status = maps:remove(TXID, StatusMap),\n\t\t\t\tsize = Size - 1\n\t\t\t}\n\tend.\n\ncache_remove_tx_offset_data(TXID, Cache) ->\n\t#tx_offset_cache{\n\t   txid_to_timestamp = Map,\n\t   timestamp_txid = Set,\n\t   end_offset_txid_start_offset = OffsetSet,\n\t   txid_to_start_offset_end_offset = OffsetMap,\n\t   size = Size,\n\t   txid_to_status = StatusMap\n\t} = Cache,\n\tcase maps:get(TXID, Map, not_found) of\n\t\tnot_found ->\n\t\t\tCache;\n\t\tTimestamp ->\n\t\t\tSet2 = gb_sets:del_element({Timestamp, TXID}, Set),\n\t\t\tMap2 = maps:remove(TXID, Map),\n\t\t\tSize2 = Size - 1,\n\t\t\t{Start, End} = maps:get(TXID, OffsetMap),\n\t\t\tOffsetSet2 = gb_sets:del_element({End, TXID, Start}, OffsetSet),\n\t\t\tOffsetMap2 = maps:remove(TXID, OffsetMap),\n\t\t\tStatusMap2 = maps:remove(TXID, StatusMap),\n\t\t\tCache#tx_offset_cache{\n\t\t\t\ttxid_to_timestamp = 
Map2,\n\t\t\t\ttimestamp_txid = Set2,\n\t\t\t\tend_offset_txid_start_offset = OffsetSet2,\n\t\t\t\ttxid_to_start_offset_end_offset = OffsetMap2,\n\t\t\t\tsize = Size2,\n\t\t\t\ttxid_to_status = StatusMap2\n\t\t\t}\n\tend.\n\ncache_mark_tx_data_synced(TXID, Cache) ->\n\t#tx_offset_cache{ txid_to_status = Map } = Cache,\n\tCache#tx_offset_cache{ txid_to_status = maps:put(TXID, synced, Map) }.\n\ncache_is_tx_data_marked_synced(TXID, Cache) ->\n\tmaps:get(TXID, Cache#tx_offset_cache.txid_to_status, false) == synced.\n\ncache_mark_tx_data_unsynced(TXID, Cache) ->\n\t#tx_offset_cache{ txid_to_status = Map } = Cache,\n\tCache#tx_offset_cache{ txid_to_status = maps:put(TXID, unsynced, Map) }.\n\ncache_is_tx_data_marked_unsynced(TXID, Cache) ->\n\tmaps:get(TXID, Cache#tx_offset_cache.txid_to_status, false) == unsynced.\n\nmaybe_call_transaction_data_synced_webhook(Start, End, TXID, MaybeModule, State) ->\n\tCache = State#state.tx_offset_cache,\n\tCache2 =\n\t\tcase is_synced_by_storage_modules(Start, End, MaybeModule) of\n\t\t\ttrue ->\n\t\t\t\tURL = State#state.url,\n\t\t\t\tHeaders = State#state.headers,\n\t\t\t\tPayload = #{ event => transaction_data_synced,\n\t\t\t\t\t\ttxid => ar_util:encode(TXID) },\n\t\t\t\tcall_webhook(URL, Headers, Payload, transaction_data_synced),\n\t\t\t\tcache_mark_tx_data_synced(TXID, Cache);\n\t\t\tfalse ->\n\t\t\t\tCache\n\t\tend,\n\tState#state{ tx_offset_cache = Cache2 }.\n\nis_synced_by_storage_modules(Start, End, Module) ->\n\tcase ar_storage_module:get_cover(Start, End, Module) of\n\t\tnot_found ->\n\t\t\tfalse;\n\t\tIntervals ->\n\t\t\tis_synced_by_storage_modules(Intervals)\n\tend.\n\nis_synced_by_storage_modules([]) ->\n\ttrue;\nis_synced_by_storage_modules([{Start, End, StoreID} | Intervals]) ->\n\tcase ar_sync_record:get_next_unsynced_interval(Start, End, ar_data_sync, StoreID) of\n\t\tnot_found ->\n\t\t\tis_synced_by_storage_modules(Intervals);\n\t\t_I ->\n\t\t\tfalse\n\tend.\n\nhandle_sync_record_remove_range(Start, End, State) ->\n\thandle_sync_record_remove_range(Start, End, State, 0).\n\nhandle_sync_record_remove_range(Start, End, State, N) when N < ?NUMBER_OF_TRIES ->\n\tcase get_tx_offset_data(Start, End, State#state.tx_offset_cache) of\n\t\t{ok, Data, Cache} ->\n\t\t\tprocess_removed_tx_data(Data, State#state{ tx_offset_cache = Cache });\n\t\tnot_found ->\n\t\t\tState;\n\t\tError ->\n\t\t\t?LOG_WARNING([{event, failed_to_process_webhook_sync_record_remove_range},\n\t\t\t\t\t{error, io_lib:format(\"~p\", [Error])},\n\t\t\t\t\t{range_start, Start},\n\t\t\t\t\t{range_end, End}]),\n\t\t\ttimer:sleep(?WAIT_BETWEEN_TRIES),\n\t\t\thandle_sync_record_remove_range(Start, End, State, N + 1)\n\tend;\nhandle_sync_record_remove_range(Start, End, State, _N) ->\n\t?LOG_WARNING([{event, gave_up_webhook_tx_offset_data_fetch},\n\t\t{range_start, Start},\n\t\t{range_end, End},\n\t\t{number_of_tries, ?NUMBER_OF_TRIES},\n\t\t{wait_between_tries, ?WAIT_BETWEEN_TRIES}\n\t]),\n\tState.\n\nprocess_removed_tx_data([], State) ->\n\tState;\nprocess_removed_tx_data([{TXID, _Start, _End} | Data], State) ->\n\tCache = State#state.tx_offset_cache,\n\tcase cache_is_tx_data_marked_unsynced(TXID, Cache) of\n\t\ttrue ->\n\t\t\tprocess_removed_tx_data(Data, State);\n\t\tfalse ->\n\t\t\tURL = State#state.url,\n\t\t\tHeaders = State#state.headers,\n\t\t\tPayload = #{ event => transaction_data_removed, txid => ar_util:encode(TXID) },\n\t\t\tcall_webhook(URL, Headers, Payload, transaction_data_removed),\n\t\t\tCache2 = cache_mark_tx_data_unsynced(TXID, Cache),\n\t\t\tState2 = State#state{ 
tx_offset_cache = Cache2 },\n\t\t\tprocess_removed_tx_data(Data, State2)\n\tend.\n"
  },
  {
    "path": "apps/arweave/src/ar_webhook_sup.erl",
    "content": "%% This Source Code Form is subject to the terms of the GNU General\n%% Public License, v. 2.0. If a copy of the GPLv2 was not distributed\n%% with this file, You can obtain one at\n%% https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html\n\n-module(ar_webhook_sup).\n\n-behaviour(supervisor).\n\n%% API\n-export([start_link/0]).\n\n%% Supervisor callbacks\n-export([init/1]).\n\n-include_lib(\"arweave/include/ar_sup.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n%% ===================================================================\n%% API functions\n%% ===================================================================\n\nstart_link() ->\n\tsupervisor:start_link({local, ?MODULE}, ?MODULE, []).\n\n%% ===================================================================\n%% Supervisor callbacks\n%% ===================================================================\n\ninit([]) ->\n\t{ok, Config} = arweave_config:get_env(),\n\tChildren = lists:map(\n\t\tfun\n\t\t\t(Hook) when is_record(Hook, config_webhook) ->\n\t\t\t\tHandler = {ar_webhook, Hook#config_webhook.url},\n\t\t\t\t{Handler, {ar_webhook, start_link, [Hook]},\n\t\t\t\t\tpermanent, ?SHUTDOWN_TIMEOUT, worker, [ar_webhook]};\n\t\t\t(Hook) ->\n\t\t\t\t?LOG_ERROR([{event, failed_to_parse_webhook_config},\n\t\t\t\t\t{webhook_config, io_lib:format(\"~p\", [Hook])}])\n\t\tend,\n\t\tConfig#config.webhooks\n\t),\n\t{ok, {{one_for_one, 5, 10}, Children}}.\n"
  },
  {
    "path": "apps/arweave/src/arweave.app.src",
    "content": "{application, arweave, [\n\t{description, \"Arweave\"},\n\t{vsn, \"2.9.6-alpha1\"},\n\t{mod, {ar, []}},\n\t{applications, [\n\t\tkernel,\n\t\tstdlib,\n\t\tsasl,\n\t\tos_mon,\n\t\tgun,\n\t\tcowlib,\n\t\tranch,\n\t\tcowboy,\n\t\tprometheus,\n\t\tprometheus_cowboy,\n\t\tprometheus_process_collector,\n\t\tprometheus_httpd,\n\t\truntime_tools\n\t]},\n\t{extra_applications, [recon]}\n]}.\n"
  },
  {
    "path": "apps/arweave/src/e2e/ar_e2e.erl",
    "content": "-module(ar_e2e).\n\n-export([fixture_dir/1, fixture_dir/2, install_fixture/3, load_wallet_fixture/1,\n\twrite_chunk_fixture/3, load_chunk_fixture/2]).\n\n-export([delayed_print/2, packing_type_to_packing/2,\n\tstart_source_node/3, start_source_node/4, \n\tsource_node_storage_modules/3, source_node_storage_modules/4,\n\tmax_chunk_offset/1, aligned_partition_size/3,\n\tassert_recall_byte/3,\n\tassert_block/2, assert_syncs_range/3, assert_syncs_range/4, assert_does_not_sync_range/3,\n\tassert_has_entropy/4, assert_no_entropy/4,\n\tassert_chunks/3, assert_chunks/4, assert_no_chunks/2,\n\tassert_partition_size/3, assert_partition_size/4, assert_empty_partition/3,\n\tassert_mine_and_validate/3]).\n\n-include_lib(\"ar.hrl\").\n-include_lib(\"ar_consensus.hrl\").\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n%% Set to true to update the chunk fixtures.\n%% WARNING: ONLY SET TO true IF YOU KNOW WHAT YOU ARE DOING!\n-define(UPDATE_CHUNK_FIXTURES, false).\n\n%% Partition size is 2,000,000 bytes, so round up to a full chunk\n-define(ALIGNED_PARTITION_SIZE, 2_097_152).\n\n-spec fixture_dir(atom()) -> binary().\nfixture_dir(FixtureType) ->\n\tDir = filename:dirname(?FILE),\n\tfilename:join([Dir, \"fixtures\", atom_to_list(FixtureType)]).\n\n-spec fixture_dir(atom(), [binary()]) -> binary().\nfixture_dir(FixtureType, SubDirs) ->\n\tFixtureDir = fixture_dir(FixtureType),\n\tfilename:join([FixtureDir] ++ SubDirs).\n\n-spec install_fixture(binary(), atom(), string()) -> binary().\ninstall_fixture(FilePath, FixtureType, FixtureName) ->\n\tFixtureDir = fixture_dir(FixtureType),\n\tok = filelib:ensure_dir(FixtureDir ++ \"/\"),\n\tFixturePath = filename:join([FixtureDir, FixtureName]),\n\tfile:copy(FilePath, FixturePath),\n\tFixturePath.\n\n-spec load_wallet_fixture(atom()) -> tuple().\nload_wallet_fixture(WalletFixture) ->\n\tWalletName = atom_to_list(WalletFixture),\n\tFixtureDir = fixture_dir(wallets),\n\tFixturePath = filename:join([FixtureDir, WalletName ++ \".json\"]),\n\tWallet = ar_wallet:load_keyfile(FixturePath),\n\tAddress = ar_wallet:to_address(Wallet),\n\tWalletPath = ar_wallet:wallet_filepath(ar_util:encode(Address)),\n\tfile:copy(FixturePath, WalletPath),\n\tar_wallet:load_keyfile(WalletPath).\n\n-spec write_chunk_fixture(binary(), non_neg_integer(), binary()) -> ok.\nwrite_chunk_fixture(Packing, EndOffset, Chunk) ->\n\tFixtureDir = fixture_dir(chunks, [ar_serialize:encode_packing(Packing, true)]),\n\tok = filelib:ensure_dir(FixtureDir ++ \"/\"),\n\tFixturePath = filename:join([FixtureDir, integer_to_list(EndOffset) ++ \".bin\"]),\n\tfile:write_file(FixturePath, Chunk).\n\n-spec load_chunk_fixture(binary(), non_neg_integer()) -> binary().\nload_chunk_fixture(Packing, EndOffset) ->\n\tFixtureDir = fixture_dir(chunks, [ar_serialize:encode_packing(Packing, true)]),\n\tFixturePath = filename:join([FixtureDir, integer_to_list(EndOffset) ++ \".bin\"]),\n\tfile:read_file(FixturePath).\n\npacking_type_to_packing(PackingType, Address) ->\n\tcase PackingType of\n\t\treplica_2_9 -> {replica_2_9, Address};\n\t\tspora_2_6 -> {spora_2_6, Address};\n\t\tcomposite_1 -> {composite, Address, 1};\n\t\tcomposite_2 -> {composite, Address, 2};\n\t\tunpacked -> unpacked\n\tend.\n\nstart_source_node(Node, PackingType, WalletFixture) ->\n\tstart_source_node(Node, PackingType, WalletFixture, default).\n\nstart_source_node(Node, unpacked, _WalletFixture, ModuleSize) ->\n\t?LOG_INFO(\"Starting source node ~p with packing type ~p and wallet 
fixture ~p\",\n\t\t[Node, unpacked, _WalletFixture]),\n\tTempNode = case Node of\n\t\tpeer1 -> peer2;\n\t\tpeer2 -> peer1\n\tend,\n\t{Blocks, _SourceAddr, Chunks} = start_source_node(TempNode, spora_2_6, wallet_a),\n\t{_, StorageModules} = source_node_storage_modules(Node, unpacked, wallet_a, ModuleSize),\n\t[B0, _, {TX2, _} | _] = Blocks,\n\t{ok, Config} = ar_test_node:get_config(Node),\n\tar_test_node:start_other_node(Node, B0, Config#config{\n\t\tpeers = [ar_test_node:peer_ip(TempNode)],\n\t\tstorage_modules = StorageModules,\n\t\tauto_join = true\n\t}, true),\n\n\t?LOG_INFO(\"Source node ~p started.\", [Node]),\n\t\n\tassert_syncs_range(Node, 0, 4*?ALIGNED_PARTITION_SIZE),\n\t\n\tassert_chunks(Node, unpacked, Chunks),\n\n\t?LOG_INFO(\"Source node ~p assertions passed.\", [Node]),\n\n\tar_test_node:stop(TempNode),\n\n\tar_test_node:restart_with_config(Node, Config#config{\n\t\tpeers = [],\n\t\tstart_from_latest_state = true,\n\t\tstorage_modules = StorageModules,\n\t\tauto_join = true\n\t}),\n\n\t%% pack_served_chunks is not enabled but the data is stored unpacked, so we should\n\t%% return it\n\t{ok, {{<<\"200\">>, _}, _, Data, _, _}} =\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => ar_test_node:peer_ip(Node),\n\t\t\tpath => \"/tx/\" ++ binary_to_list(ar_util:encode(TX2#tx.id)) ++ \"/data\"\n\t\t}),\n\t{ok, ExpectedData} = load_chunk_fixture(\n\t\tunpacked, ?ALIGNED_PARTITION_SIZE + floor(3.75 * ?DATA_CHUNK_SIZE)),\n\t?assertEqual(ExpectedData, ar_util:decode(Data)),\n\n\t?LOG_INFO(\"Source node ~p restarted.\", [Node]),\n\n\t{Blocks, undefined, Chunks};\nstart_source_node(Node, PackingType, WalletFixture, ModuleSize) ->\n\t?LOG_INFO(\"Starting source node ~p with packing type ~p and wallet fixture ~p\",\n\t\t[Node, PackingType, WalletFixture]),\n\t{Wallet, StorageModules} = source_node_storage_modules(\n\t\tNode, PackingType, WalletFixture, ModuleSize),\n\tRewardAddr = ar_wallet:to_address(Wallet),\n\n\t[B0] = ar_weave:init([{RewardAddr, ?AR(200), <<>>}], 0, ?ALIGNED_PARTITION_SIZE),\n\n\t{ok, Config} = ar_test_node:remote_call(Node, arweave_config, get_env, []),\n\t\n\t?assertEqual(ar_test_node:peer_name(Node),\n\t\tar_test_node:start_other_node(Node, B0, Config#config{\n\t\t\tpeers = [],\n\t\t\tstart_from_latest_state = true,\n\t\t\tstorage_modules = StorageModules,\n\t\t\tauto_join = true,\n\t\t\tmining_addr = RewardAddr\n\t\t}, true)\n\t),\n\n\t?LOG_INFO(\"Source node ~p started.\", [Node]),\n\n\t%% Note: small chunks will be padded to 256 KiB. 
So B1 actually contains 3 chunks of data\n\t%% and B2 starts at a chunk boundary and contains 1 chunk of data.\n\t%% \n\t%% p1, 2097152 to 2883584\n\t{TX1, B1} = mine_block(Node, Wallet, floor(2.5 * ?DATA_CHUNK_SIZE), infinity),\n\t%% p1, 2883584 to 3145728\n\t{TX2, B2} = mine_block(Node, Wallet, floor(0.75 * ?DATA_CHUNK_SIZE), infinity),\n\t%% p1 to p2, 3145728 to 5242880\n\t{TX3, B3} = mine_block(Node, Wallet, ?ALIGNED_PARTITION_SIZE, infinity),\n\t%% p2, 5242880 to 6291456 (disk pool threshold falls in the middle of p2)\n\t{TX4, B4} = mine_block(Node, Wallet, floor(0.5 * ?ALIGNED_PARTITION_SIZE), 2 * ?DATA_CHUNK_SIZE),\n\t%% p3, 6291456 to 8388608 (chunks are stored in disk pool)\n\t{TX5, B5} = mine_block(Node, Wallet, ?ALIGNED_PARTITION_SIZE, 0),\n\n\t%% List of {Block, EndOffset, ChunkSize}\n\tChunks = [\n\t\t%% PaddedEndOffset: 2359296\n\t\t{B1, ?ALIGNED_PARTITION_SIZE + ?DATA_CHUNK_SIZE, ?DATA_CHUNK_SIZE}, \n\t\t%% PaddedEndOffset: 2621440\n\t\t{B1, ?ALIGNED_PARTITION_SIZE + (2*?DATA_CHUNK_SIZE), ?DATA_CHUNK_SIZE}, \n\t\t%% PaddedEndOffset: 2883584\n\t\t{B1, ?ALIGNED_PARTITION_SIZE + floor(2.5 * ?DATA_CHUNK_SIZE), floor(0.5 * ?DATA_CHUNK_SIZE)},\n\t\t%% PaddedEndOffset: 3145728\n\t\t{B2, ?ALIGNED_PARTITION_SIZE + floor(3.75 * ?DATA_CHUNK_SIZE), floor(0.75 * ?DATA_CHUNK_SIZE)},\n\t\t%% PaddedEndOffset: 3407872\n\t\t{B3, ?ALIGNED_PARTITION_SIZE + (5*?DATA_CHUNK_SIZE), ?DATA_CHUNK_SIZE},\n\t\t%% PaddedEndOffset: 3670016\n\t\t{B3, ?ALIGNED_PARTITION_SIZE + (6*?DATA_CHUNK_SIZE), ?DATA_CHUNK_SIZE},\n\t\t%% PaddedEndOffset: 3932160\n\t\t{B3, ?ALIGNED_PARTITION_SIZE + (7*?DATA_CHUNK_SIZE), ?DATA_CHUNK_SIZE},\n\t\t%% PaddedEndOffset: 4194304\n\t\t{B3, ?ALIGNED_PARTITION_SIZE + (8*?DATA_CHUNK_SIZE), ?DATA_CHUNK_SIZE}\n\t],\n\n\t?LOG_INFO(\"Source node ~p blocks mined.\", [Node]),\n\n\tSourcePacking = packing_type_to_packing(PackingType, RewardAddr),\n\n\tassert_syncs_range(Node, SourcePacking, 0, 4*?ALIGNED_PARTITION_SIZE),\n\tassert_chunks(Node, SourcePacking, Chunks),\n\n\t%% Restart the node to allow it to copy chunks between storage modules.\n\tar_test_node:restart(Node),\n\t?LOG_INFO(\"Source node ~p restarted.\", [Node]),\n\n\tassert_syncs_range(Node, SourcePacking, 0, 4*?ALIGNED_PARTITION_SIZE),\n\tassert_chunks(Node, SourcePacking, Chunks),\n\tassert_partition_size(Node, 0, SourcePacking),\n\tassert_partition_size(Node, 1, SourcePacking),\n\n\t%% pack_served_chunks is not enabled so we shouldn't return unpacked data\n\t?assertMatch({ok, {{<<\"404\">>, _}, _, _, _, _}},\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => ar_test_node:peer_ip(Node),\n\t\t\tpath => \"/tx/\" ++ binary_to_list(ar_util:encode(TX1#tx.id)) ++ \"/data\"\n\t\t})),\n\n\t?LOG_INFO(\"Source node ~p assertions passed.\", [Node]),\n\n\t{[B0, {TX1, B1}, {TX2, B2}, {TX3, B3}, {TX4, B4}, {TX5, B5}], RewardAddr, Chunks}.\n\nmax_chunk_offset(Chunks) ->\n\tlists:foldl(fun({_, EndOffset, _}, Acc) -> max(Acc, EndOffset) end, 0, Chunks).\n\naligned_partition_size(Node, Partition, Packing) ->\n\t{ok, Config} = ar_test_node:get_config(Node),\n\t%% Include both regular storage modules and repack_in_place modules.\n\t%% For repack_in_place modules, use the target packing.\n\tRepackInPlaceModules = [{BucketSize, Bucket, TargetPacking}\n\t\t|| {{BucketSize, Bucket, _FromPacking}, TargetPacking} <- Config#config.repack_in_place_storage_modules],\n\tAllStorageModules = Config#config.storage_modules ++ RepackInPlaceModules,\n\tPartitionStart = Partition * ar_block:partition_size(),\n\tPartitionEnd = (Partition + 1) * 
ar_block:partition_size(),\n\tStorageModules = filter_storage_modules_by_partition(\n\t\tPartitionStart, PartitionEnd, AllStorageModules),\n\tStorageModules2 = filter_storage_modules_by_packing(StorageModules, Packing),\n\taligned_partition_size2(StorageModules2, PartitionStart, PartitionEnd, 0).\n\nfilter_storage_modules_by_partition(PartitionStart, PartitionEnd, Modules) ->\n\tlists:filter(fun({ModuleSize, Bucket, _Packing}) ->\n\t\tModuleStart = Bucket * ModuleSize,\n\t\tModuleEnd = ModuleStart + ModuleSize,\n\t\tModuleStart < PartitionEnd andalso ModuleEnd > PartitionStart\n\tend, Modules).\n\nfilter_storage_modules_by_packing([{_, _, {replica_2_9, _} = Packing} = Module | Modules], unpacked_padded) ->\n\t[Module | filter_storage_modules_by_packing(Modules, Packing)];\nfilter_storage_modules_by_packing([{_, _, Packing} = Module | Modules], Packing) ->\n\t[Module | filter_storage_modules_by_packing(Modules, Packing)];\nfilter_storage_modules_by_packing([_Module | Modules], Packing) ->\n\tfilter_storage_modules_by_packing(Modules, Packing);\nfilter_storage_modules_by_packing([], _Packing) ->\n\t[].\n\naligned_partition_size2([{ModuleSize, Bucket, Packing} | Modules], PartitionStart, PartitionEnd, Acc) ->\n\tOverlap = ar_storage_module:get_overlap(Packing),\n\tModuleStart = Bucket * ModuleSize,\n\tModuleEnd = ModuleStart + ModuleSize,\n\tClippedStart = max(ModuleStart, PartitionStart),\n\tClippedEnd = min(ModuleEnd, PartitionEnd),\n\tAlignedModuleStart = max(0, ar_block:get_chunk_padded_offset(ClippedStart) - ?DATA_CHUNK_SIZE),\n\tAlignedModuleEnd = ar_block:get_chunk_padded_offset(ClippedEnd + Overlap),\n\tAlignedModuleSize = AlignedModuleEnd - AlignedModuleStart,\n\taligned_partition_size2(Modules, PartitionStart, PartitionEnd, Acc + AlignedModuleSize);\naligned_partition_size2([], _PartitionStart, _PartitionEnd, Acc) ->\n\tAcc.\n\nsource_node_storage_modules(Node, PackingType, WalletFixture) ->\n\tsource_node_storage_modules(Node, PackingType, WalletFixture, default).\n\nsource_node_storage_modules(_Node, unpacked, _WalletFixture, ModuleSize) ->\n\t{undefined, source_node_storage_modules(unpacked, ModuleSize)};\nsource_node_storage_modules(Node, PackingType, WalletFixture, ModuleSize) ->\n\tWallet = ar_test_node:remote_call(Node, ar_e2e, load_wallet_fixture, [WalletFixture]),\n\tRewardAddr = ar_wallet:to_address(Wallet),\n\tSourcePacking = packing_type_to_packing(PackingType, RewardAddr),\n\t{Wallet, source_node_storage_modules(SourcePacking, ModuleSize)}.\n\nsource_node_storage_modules(SourcePacking, default) ->\n\tSize = ar_block:partition_size(),\n\tlists:map(fun(I) -> {Size, I, SourcePacking} end, lists:seq(0, 4));\n\nsource_node_storage_modules(SourcePacking, small) ->\n\tSize = ar_block:partition_size() div 4,\n\t%% Put strict data split threshold inside the first storage module.\n\t[{Size * 2, 0, SourcePacking}\n\t\t\t| lists:map(fun(I) -> {Size, I, SourcePacking} end, lists:seq(2, 19))].\n\t\nmine_block(Node, Wallet, DataSize, IsTemporary) ->\n\tWeaveSize = ar_test_node:remote_call(Node, ar_node, get_current_weave_size, []),\n\tAddr = ar_wallet:to_address(Wallet),\n\t{TX, Chunks} = generate_tx(Node, Wallet, WeaveSize, DataSize),\n\tB = ar_test_node:post_and_mine(#{ miner => Node, await_on => Node }, [TX]),\n\n\t?assertEqual(Addr, B#block.reward_addr),\n\n\tProofs = ar_test_data_sync:post_proofs(Node, B, TX, Chunks, IsTemporary),\n\t\n\tar_test_data_sync:wait_until_syncs_chunks(Node, Proofs, infinity),\n\t{TX, B}.\n\ngenerate_tx(Node, Wallet, WeaveSize, DataSize) ->\n\tChunks = 
generate_chunks(Node, WeaveSize, DataSize, []),\n\t{DataRoot, _DataTree} = ar_merkle:generate_tree(\n\t\t[{ar_tx:generate_chunk_id(Chunk), Offset} || {Chunk, Offset} <- Chunks]\n\t),\n\tTX = ar_test_node:sign_tx(Node, Wallet, #{\n\t\tdata_size => DataSize,\n\t\tdata_root => DataRoot\n\t}),\n\t{TX, [Chunk || {Chunk, _} <- Chunks]}.\n\ngenerate_chunks(Node, WeaveSize, DataSize, Acc) when DataSize > 0 ->\n\tChunkSize = min(DataSize, ?DATA_CHUNK_SIZE),\n\tEndOffset = (length(Acc) * ?DATA_CHUNK_SIZE) + ChunkSize,\n\tChunk = ar_test_node:get_genesis_chunk(WeaveSize + EndOffset),\n\tgenerate_chunks(Node, WeaveSize, DataSize - ChunkSize, Acc ++ [{Chunk, EndOffset}]);\ngenerate_chunks(_, _, _, Acc) ->\n\tAcc.\n\nassert_recall_byte(Node, RangeStart, RangeEnd) when RangeStart > RangeEnd ->\n\tok;\nassert_recall_byte(Node, RangeStart, RangeEnd) ->\n\tOptions = #{ pack => true, packing => unpacked, origin => miner },\n\tResult = ar_test_node:remote_call(\n\t\tNode, ar_data_sync, get_chunk, [RangeStart + 1, Options]),\n\tcase Result of\n\t\t{ok, _} ->\n\t\t\t?LOG_INFO(\"Recall byte found at ~p\", [RangeStart + 1]),\n\t\t\tassert_recall_byte(Node, RangeStart + 1, RangeEnd);\n\t\tError ->\n\t\t\t?LOG_ERROR([{event, recall_byte_not_found}, \n\t\t\t\t\t\t{recall_byte, RangeStart}, \n\t\t\t\t\t\t{error, Error}])\n\tend.\nassert_block({spora_2_6, Address}, MinedBlock) ->\n\t?assertEqual(Address, MinedBlock#block.reward_addr),\n\t?assertEqual(0, MinedBlock#block.packing_difficulty);\nassert_block({composite, Address, PackingDifficulty}, MinedBlock) ->\n\t?assertEqual(Address, MinedBlock#block.reward_addr),\n\t?assertEqual(PackingDifficulty, MinedBlock#block.packing_difficulty);\nassert_block({replica_2_9, Address}, MinedBlock) ->\n\t?assertEqual(Address, MinedBlock#block.reward_addr),\n\t?assertEqual(?REPLICA_2_9_PACKING_DIFFICULTY, MinedBlock#block.packing_difficulty).\n\t\nassert_has_entropy(Node, StartOffset, EndOffset, StoreID) ->\n\tRangeSize = EndOffset - StartOffset,\n\tHasEntropy = ar_util:do_until(\n\t\tfun() -> \n\t\t\tIntersection = ar_test_node:remote_call(\n\t\t\t\tNode, ar_sync_record, get_intersection_size,\n\t\t\t\t[EndOffset, StartOffset, ar_entropy_storage:sync_record_id(), StoreID]),\n\t\t\tIntersection >= RangeSize\n\t\tend,\n\t\t100,\n\t\t60_000\n\t),\n\tcase HasEntropy of\n\t\ttrue ->\n\t\t\tok;\n\t\t_ ->\n\t\t\tIntersection = ar_test_node:remote_call(\n\t\t\t\tNode, ar_sync_record, get_intersection_size,\n\t\t\t\t[EndOffset, StartOffset, ar_entropy_storage:sync_record_id(), StoreID]),\n\t\t\tIntervals = ar_test_node:remote_call(\n\t\t\t\t\tNode, ar_sync_record, get,\n\t\t\t\t\t[ar_entropy_storage:sync_record_id(), StoreID]),\n\t\t\t?assert(false, \n\t\t\t\tiolist_to_binary(io_lib:format(\n\t\t\t\t\t\"~s failed to prepare entropy range ~p - ~p. \"\n\t\t\t\t\t\"Intersection size: ~p. 
Intervals: ~p\", \n\t\t\t\t\t[Node, StartOffset, EndOffset, Intersection,\n\t\t\t\t\tar_intervals:to_list(Intervals)])))\n\tend.\n\nassert_no_entropy(Node, StartOffset, EndOffset, StoreID) ->\n\tHasEntropy = ar_util:do_until(\n\t\tfun() -> \n\t\t\tIntersection = ar_test_node:remote_call(\n\t\t\t\tNode, ar_sync_record, get_intersection_size,\n\t\t\t\t[EndOffset, StartOffset, ar_entropy_storage:sync_record_id(), StoreID]),\n\t\t\tIntersection > 0\n\t\tend,\n\t\t100,\n\t\t15_000\n\t),\n\tcase HasEntropy of\n\t\ttrue ->\n\t\t\tIntersection = ar_test_node:remote_call(\n\t\t\t\tNode, ar_sync_record, get_intersection_size,\n\t\t\t\t[EndOffset, StartOffset, ar_entropy_storage:sync_record_id(), StoreID]),\n\t\t\tIntervals = ar_test_node:remote_call(\n\t\t\t\tNode, ar_sync_record, get,\n\t\t\t\t[ar_entropy_storage:sync_record_id(), StoreID]),\n\t\t\t?assert(false, \n\t\t\t\tiolist_to_binary(io_lib:format(\n\t\t\t\t\t\"~s found entropy when it should not have. Range: ~p - ~p. \"\n\t\t\t\t\t\"Intersection size: ~p. Intervals: ~p\", \n\t\t\t\t\t[Node, StartOffset, EndOffset, Intersection,\n\t\t\t\t\tar_intervals:to_list(Intervals)])));\n\t\t_ ->\n\t\t\tok\n\tend.\n\nassert_syncs_range(Node, _Packing, StartOffset, EndOffset) ->\n\tassert_syncs_range(Node, StartOffset, EndOffset).\n\nassert_syncs_range(Node, StartOffset, EndOffset) ->\n\tHasRange = ar_util:do_until(\n\t\tfun() -> has_range(Node, StartOffset, EndOffset) end,\n\t\t100,\n\t\t300_000\n\t),\n\tcase HasRange of\n\t\ttrue ->\n\t\t\tok;\n\t\t_ ->\n\t\t\t{ok, SyncRecord} = ar_http_iface_client:get_sync_record(\n\t\t\t\tar_test_node:peer_ip(Node)),\n\t\t\t?assert(false, \n\t\t\t\tiolist_to_binary(io_lib:format(\n\t\t\t\t\t\"~s failed to sync range ~p - ~p. Sync record: ~p\", \n\t\t\t\t\t[Node, StartOffset, EndOffset, ar_intervals:to_list(SyncRecord)])))\n\tend.\n\nassert_does_not_sync_range(Node, StartOffset, EndOffset) ->\n\tar_util:do_until(\n\t\tfun() -> has_range(Node, StartOffset, EndOffset) end,\n\t\t1000,\n\t\t15_000\n\t),\n\t?assertEqual(false, has_range(Node, StartOffset, EndOffset),\n\t\tiolist_to_binary(io_lib:format(\n\t\t\t\"~s synced range when it should not have: ~p - ~p\", \n\t\t\t[Node, StartOffset, EndOffset]))).\n\nassert_partition_size(Node, PartitionNumber, Packing) ->\n\tPartitionSize = aligned_partition_size(Node, PartitionNumber, Packing),\n\tassert_partition_size(Node, PartitionNumber, Packing, PartitionSize).\nassert_partition_size(Node, PartitionNumber, Packing, Size) ->\n\t?LOG_INFO(\"~p: Asserting partition ~p,~p is size ~p\",\n\t\t[Node, PartitionNumber, ar_serialize:encode_packing(Packing, true), Size]),\n\tar_util:do_until(\n\t\tfun() -> \n\t\t\tar_test_node:remote_call(Node, ar_mining_stats, get_partition_data_size, \n\t\t\t\t[PartitionNumber, Packing]) >= Size\n\t\tend,\n\t\t100,\n\t\t120_000\n\t),\n\t?assertEqual(\n\t\tSize,\n\t\tar_test_node:remote_call(Node, ar_mining_stats, get_partition_data_size, \n\t\t\t[PartitionNumber, Packing]),\n\t\tiolist_to_binary(io_lib:format(\n\t\t\t\"~s partition ~p,~p was not the expected size.\", \n\t\t\t[Node, PartitionNumber, ar_serialize:encode_packing(Packing, true)]))).\n\nassert_empty_partition(Node, PartitionNumber, Packing) ->\n\tar_util:do_until(\n\t\tfun() -> \n\t\t\tar_test_node:remote_call(Node, ar_mining_stats, get_partition_data_size, \n\t\t\t\t[PartitionNumber, Packing]) > 0\n\t\tend,\n\t\t100,\n\t\t15_000\n\t),\n\t?assertEqual(\n\t\t0,\n\t\tar_test_node:remote_call(Node, ar_mining_stats, get_partition_data_size, \n\t\t\t[PartitionNumber, 
Packing]),\n\t\tiolist_to_binary(io_lib:format(\n\t\t\t\"~s partition ~p,~p is not empty\", [Node, PartitionNumber, \n\t\t\t\tar_serialize:encode_packing(Packing, true)]))).\n\nassert_mine_and_validate(MinerNode, ValidatorNode, MinerPacking) ->\n\tCurrentHeight = max(\n\t\tar_test_node:remote_call(ValidatorNode, ar_node, get_height, []),\n\t\tar_test_node:remote_call(MinerNode, ar_node, get_height, [])\n\t),\n\tar_test_node:wait_until_height(ValidatorNode, CurrentHeight),\n\tar_test_node:wait_until_height(MinerNode, CurrentHeight),\n\tar_test_node:mine(MinerNode),\n\n\tMinerBI = ar_test_node:wait_until_height(MinerNode, CurrentHeight + 1),\n\t{ok, MinerBlock} = ar_test_node:http_get_block(element(1, hd(MinerBI)), MinerNode),\n\tassert_block(MinerPacking, MinerBlock),\n\n\tValidatorBI = ar_test_node:wait_until_height(ValidatorNode, MinerBlock#block.height),\n\t{ok, ValidatorBlock} = ar_test_node:http_get_block(element(1, hd(ValidatorBI)), ValidatorNode),\n\t?assertEqual(MinerBlock, ValidatorBlock).\n\nget_intervals(NodeIP, StartOffset, EndOffset) ->\n\tcase ar_http_iface_client:get_sync_record(NodeIP) of\n\t\t{ok, RegularIntervals} ->\n\t\t\tFootprintIntervals = collect_footprint_intervals(NodeIP, StartOffset, EndOffset),\n\t\t\tAllIntervals = ar_intervals:union(RegularIntervals, FootprintIntervals),\n\t\t\tAllIntervals;\n\t\t_Error ->\n\t\t\tFootprintIntervals = collect_footprint_intervals(NodeIP, StartOffset, EndOffset),\n\t\t\tFootprintIntervals\n\tend.\n\nhas_range(Node, StartOffset, EndOffset) ->\n\tNodeIP = ar_test_node:peer_ip(Node),\n\tcase ar_http_iface_client:get_sync_record(NodeIP) of\n\t\t{ok, RegularIntervals} ->\n\t\t\tFootprintIntervals = collect_footprint_intervals(NodeIP, StartOffset, EndOffset),\n\t\t\tAllIntervals = ar_intervals:union(RegularIntervals, FootprintIntervals),\n\t\t\tinterval_contains(AllIntervals, StartOffset, EndOffset);\n\t\tError ->\n\t\t\tIntervals = get_intervals(NodeIP, StartOffset, EndOffset),\n\t\t\t?assert(false, \n\t\t\t\tiolist_to_binary(io_lib:format(\n\t\t\t\t\t\"Failed to get sync record from ~p: ~p; range: ~p - ~p; intervals managed to collect: ~p\",\n\t\t\t\t\t\t[Node, Error, StartOffset, EndOffset, ar_intervals:to_list(Intervals)]))),\n\t\t\tfalse\n\tend.\n\ncollect_footprint_intervals(NodeIP, StartOffset, EndOffset) ->\n\tStartPartition = ar_replica_2_9:get_entropy_partition(StartOffset + 1),\n\tLastPartition = ar_replica_2_9:get_entropy_partition(EndOffset + 1),\n\tFootprintsPerPartition = ar_footprint_record:get_footprints_per_partition(),\n\tcollect_footprint_intervals(NodeIP, StartPartition, LastPartition, 0, FootprintsPerPartition - 1, ar_intervals:new()).\n\ncollect_footprint_intervals(_NodeIP, Partition, LastPartition, _Footprint, _MaxFootprint, Acc)\n\t\twhen Partition > LastPartition ->\n\tAcc;\ncollect_footprint_intervals(NodeIP, Partition, LastPartition, Footprint, MaxFootprint, Acc)\n\t\twhen Footprint > MaxFootprint ->\n\tcollect_footprint_intervals(NodeIP, Partition + 1, LastPartition, 0, MaxFootprint, Acc);\ncollect_footprint_intervals(NodeIP, Partition, LastPartition, Footprint, MaxFootprint, Acc) ->\n\tFootprintByteIntervals =\n\t\tcase ar_http_iface_client:get_footprints(NodeIP, Partition, Footprint) of\n\t\t\t{ok, FootprintIntervals} ->\n\t\t\t\tar_footprint_record:get_intervals_from_footprint_intervals(FootprintIntervals);\n\t\t\tnot_found ->\n\t\t\t\t?debugFmt(\"No footprint record found on ~p for partition ~B, footprint ~B\",\n\t\t\t\t\t[NodeIP, Partition, Footprint]),\n\t\t\t\tar_intervals:new();\n\t\t\tError 
->\n\t\t\t\t?assert(false,\n\t\t\t\t\tiolist_to_binary(io_lib:format(\n\t\t\t\t\t\"Failed to get footprint record from ~p: ~p, partition: ~B, footprint: ~B\",\n\t\t\t\t\t[NodeIP, Error, Partition, Footprint])))\n\t\tend,\n\tNewAcc = ar_intervals:union(Acc, FootprintByteIntervals),\n\tcollect_footprint_intervals(NodeIP, Partition, LastPartition, Footprint + 1, MaxFootprint, NewAcc).\n\ninterval_contains(Intervals, Start, End) when End > Start ->\n\tcase gb_sets:iterator_from({Start, Start}, Intervals) of\n\t\tIter ->\n\t\t\tinterval_contains2(Iter, Start, End)\n\tend.\n\ninterval_contains2(Iter, Start, End) ->\n\tcase gb_sets:next(Iter) of\n\t\tnone ->\n\t\t\tfalse;\n\t\t{{IntervalEnd, IntervalStart}, _} when IntervalStart =< Start andalso IntervalEnd >= End ->\n\t\t\ttrue;\n\t\t_ ->\n\t\t\tfalse\n\tend.\n\nassert_chunks(Node, Packing, Chunks) ->\n\tassert_chunks(Node, any, Packing, Chunks).\n\nassert_chunks(Node, RequestPacking, Packing, Chunks) ->\n\tlists:foreach(fun({Block, EndOffset, ChunkSize}) ->\n\t\tassert_chunk(Node, RequestPacking, Packing, Block, EndOffset, ChunkSize)\n\tend, Chunks).\n\nassert_chunk(Node, RequestPacking, Packing, Block, EndOffset, ChunkSize) ->\n\t?LOG_INFO(\"Asserting chunk at offset ~p, size ~p\", [EndOffset, ChunkSize]),\n\n\tResult = ar_test_node:get_chunk(Node, EndOffset, RequestPacking),\n\t{ok, {{StatusCode, _}, _, EncodedProof, _, _}} = Result,\n\t?assertEqual(<<\"200\">>, StatusCode, iolist_to_binary(io_lib:format(\n\t\t\"Chunk not found. Node: ~p, Offset: ~p\",\n\t\t[Node, EndOffset]))),\n\tProof = ar_serialize:json_map_to_poa_map(\n\t\tjiffy:decode(EncodedProof, [return_maps])\n\t),\n\tProof = ar_serialize:json_map_to_poa_map(\n\t\tjiffy:decode(EncodedProof, [return_maps])\n\t),\n\tChunkMetadata = #chunk_metadata{\n\t\ttx_root = Block#block.tx_root,\n\t\ttx_path = maps:get(tx_path, Proof),\n\t\tdata_path = maps:get(data_path, Proof)\n\t},\n\tChunkProof = ar_test_node:remote_call(Node, ar_poa, chunk_proof, [ChunkMetadata, EndOffset - 1]),\n\t?LOG_INFO([{chunk_proof, ChunkProof}]),\n\t{true, _} = ar_test_node:remote_call(Node, ar_poa, validate_paths, [ChunkProof]),\n\tChunk = maps:get(chunk, Proof),\n\n\tmaybe_write_chunk_fixture(Packing, EndOffset, Chunk),\n\n\t{ok, ExpectedPackedChunk} = load_chunk_fixture(Packing, EndOffset),\n\t?assertEqual(byte_size(ExpectedPackedChunk), byte_size(Chunk),\n\t\tiolist_to_binary(io_lib:format(\n\t\t\t\"~p: Chunk at offset ~p size mismatch expected ~p, got ~p\",\n\t\t\t[Node, EndOffset, byte_size(ExpectedPackedChunk), byte_size(Chunk)]))),\n\t?assertEqual(ExpectedPackedChunk, Chunk,\n\t\tiolist_to_binary(io_lib:format(\n\t\t\t\"~p: Chunk at offset ~p, size ~p, packing ~p does not match packed chunk\",\n\t\t\t[Node, EndOffset, ChunkSize, ar_serialize:encode_packing(Packing, true)]))),\n\n\t{ok, UnpackedChunk} = ar_packing_server:unpack(\n\t\tPacking, EndOffset, Block#block.tx_root, Chunk, ?DATA_CHUNK_SIZE),\n\tUnpaddedChunk = ar_packing_server:unpad_chunk(\n\t\tPacking, UnpackedChunk, ChunkSize, byte_size(Chunk)),\n\tExpectedUnpackedChunk = ar_test_node:get_genesis_chunk(EndOffset),\n\t?assertEqual(ExpectedUnpackedChunk, UnpaddedChunk,\n\t\tiolist_to_binary(io_lib:format(\n\t\t\t\"~p: Chunk at offset ~p, size ~p does not match unpacked chunk\",\n\t\t\t[Node, EndOffset, ChunkSize]))).\n\nassert_no_chunks(Node, Chunks) ->\n\tlists:foreach(fun({_Block, EndOffset, _ChunkSize}) ->\n\t\tassert_no_chunk(Node, EndOffset)\n\tend, Chunks).\n\nassert_no_chunk(Node, EndOffset) ->\n\tResult = ar_test_node:get_chunk(Node, 
EndOffset, any),\n\t{ok, {{StatusCode, _}, _, _, _, _}} = Result,\n\t?assertEqual(<<\"404\">>, StatusCode, iolist_to_binary(io_lib:format(\n\t\t\"Chunk found when it should not have been. Node: ~p, Offset: ~p\",\n\t\t[Node, EndOffset]))).\n\ndelayed_print(Format, Args) ->\n\t%% Print the specific flavor of this test since it isn't captured in the test name.\n\t%% Delay the print by 1 second to allow the eunit output to be flushed.\n\tspawn(fun() ->\n\t\ttimer:sleep(1000),\n\t\tio:fwrite(user, Format, Args)\n\tend).\n\n%% --------------------------------------------------------------------------------------------\n%% Test Data Generation\n%% --------------------------------------------------------------------------------------------\t\nwrite_wallet_fixtures() ->\n\tWallets = [wallet_a, wallet_b, wallet_c, wallet_d],\n\tlists:foreach(fun(Wallet) ->\n\t\tWalletName = atom_to_list(Wallet),\n\t\tar_wallet:new_keyfile(?DEFAULT_KEY_TYPE, WalletName),\n\t\tinstall_fixture(\n\t\t\tar_wallet:wallet_filepath(Wallet), wallets, WalletName ++ \".json\")\n\tend, Wallets),\n\tok.\n\nmaybe_write_chunk_fixture(Packing, EndOffset, Chunk) when ?UPDATE_CHUNK_FIXTURES =:= true ->\n\t?LOG_ERROR(\"WARNING: Updating chunk fixture! EndOffset: ~p, Packing: ~p\", \n\t\t[EndOffset, ar_serialize:encode_packing(Packing, true)]),\n\twrite_chunk_fixture(Packing, EndOffset, Chunk);\nmaybe_write_chunk_fixture(_, _, _) ->\n\tok.\n"
  },
  {
    "path": "apps/arweave/src/e2e/ar_repack_in_place_mine_tests.erl",
    "content": "-module(ar_repack_in_place_mine_tests).\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-define(REPACK_IN_PLACE_MINE_TEST_TIMEOUT, 600).\n\n%% --------------------------------------------------------------------------------------------\n%% Test Registration\n%% --------------------------------------------------------------------------------------------\n%%\n%% Note:\n%% Repacking in place *from* replica_2_9 to any format is not currently supported.\nrepack_in_place_mine_test_() ->\n\tTimeout = ?REPACK_IN_PLACE_MINE_TEST_TIMEOUT,\n\t[\n\t\t{timeout, Timeout, {with, {unpacked, replica_2_9, default}, [fun test_repack_in_place_mine/1]}},\n\t\t{timeout, Timeout, {with, {spora_2_6, replica_2_9, default}, [fun test_repack_in_place_mine/1]}},\n\t\t{timeout, Timeout, {with, {replica_2_9, replica_2_9, default}, [fun test_repack_in_place_mine/1]}},\n\t\t{timeout, Timeout, {with, {replica_2_9, unpacked, default}, [fun test_repack_in_place_mine/1]}},\n\t\t{timeout, Timeout, {with, {spora_2_6, unpacked, default}, [fun test_repack_in_place_mine/1]}},\n\t\t{timeout, Timeout, {with, {unpacked, replica_2_9, small}, [fun test_repack_in_place_mine/1]}},\n\t\t{timeout, Timeout, {with, {spora_2_6, replica_2_9, small}, [fun test_repack_in_place_mine/1]}},\n\t\t{timeout, Timeout, {with, {replica_2_9, replica_2_9, small}, [fun test_repack_in_place_mine/1]}},\n\t\t{timeout, Timeout, {with, {replica_2_9, unpacked, small}, [fun test_repack_in_place_mine/1]}},\n\t\t{timeout, Timeout, {with, {spora_2_6, unpacked, small}, [fun test_repack_in_place_mine/1]}}\n\t].\n\n%% --------------------------------------------------------------------------------------------\n%% test_repack_in_place_mine\n%% --------------------------------------------------------------------------------------------\ntest_repack_in_place_mine({FromPackingType, ToPackingType, ModuleSize}) ->\n\tar_e2e:delayed_print(<<\" ~p -> ~p (~p) \">>, [FromPackingType, ToPackingType, ModuleSize]),\n\t?LOG_INFO([{event, test_repack_in_place_mine}, {module, ?MODULE},\n\t\t{from_packing_type, FromPackingType}, {to_packing_type, ToPackingType},\n\t\t{module_size, ModuleSize}]),\n\tValidatorNode = peer1,\n\tRepackerNode = peer2,\n\tar_test_node:stop(ValidatorNode),\n\tar_test_node:stop(RepackerNode),\n\t{Blocks, _AddrA, Chunks} = ar_e2e:start_source_node(\n\t\tRepackerNode, FromPackingType, wallet_a, ModuleSize),\n\n\t[B0 | _] = Blocks,\n\tstart_validator_node(ValidatorNode, RepackerNode, B0),\n\n\tNumModules = case ModuleSize of\n\t\tdefault -> 2;\n\t\tsmall -> 8\n\tend,\n\n\t{WalletB, SourceStorageModules} = ar_e2e:source_node_storage_modules(\n\t\tRepackerNode, ToPackingType, wallet_b, ModuleSize),\n\tAddrB = case WalletB of\n\t\tundefined -> undefined;\n\t\t_ -> ar_wallet:to_address(WalletB)\n\tend,\n\tFinalStorageModules = lists:sublist(SourceStorageModules, NumModules),\n\tToPacking = ar_e2e:packing_type_to_packing(ToPackingType, AddrB),\n\t{ok, Config} = ar_test_node:get_config(RepackerNode),\n\n\tRepackInPlaceStorageModules = lists:sublist([ \n\t\t{Module, ToPacking} || Module <- Config#config.storage_modules ], NumModules),\n\t\n\tar_test_node:restart_with_config(RepackerNode, Config#config{\n\t\tstorage_modules = [],\n\t\trepack_in_place_storage_modules = RepackInPlaceStorageModules,\n\t\tmining_addr = undefined\n\t}),\n\n\tExpectedSize0 = ar_e2e:aligned_partition_size(RepackerNode, 0, ToPacking),\n\tar_e2e:assert_partition_size(RepackerNode, 0, ToPacking, 
ExpectedSize0),\n\tExpectedSize1 = ar_e2e:aligned_partition_size(RepackerNode, 1, ToPacking),\n\tar_e2e:assert_partition_size(RepackerNode, 1, ToPacking, ExpectedSize1),\n\n\tar_test_node:stop(RepackerNode),\n\n\t%% Rename storage_modules\n\tDataDir = Config#config.data_dir,\n\tlists:foreach(fun({SourceModule, Packing}) ->\n\t\t{BucketSize, Bucket, _Packing} = SourceModule,\n\t\tSourceID = ar_storage_module:id(SourceModule),\n\t\tSourcePath = ar_chunk_storage:get_storage_module_path(DataDir, SourceID),\n\n\t\tTargetModule = {BucketSize, Bucket, Packing},\n\t\tTargetID = ar_storage_module:id(TargetModule),\n\t\tTargetPath = ar_chunk_storage:get_storage_module_path(DataDir, TargetID),\n\t\tfile:rename(SourcePath, TargetPath)\n\tend, RepackInPlaceStorageModules),\n\n\tar_test_node:restart_with_config(RepackerNode, Config#config{\n\t\tstorage_modules = FinalStorageModules,\n\t\trepack_in_place_storage_modules = [],\n\t\tmining_addr = AddrB\n\t}),\n\t\n\tar_e2e:assert_chunks(RepackerNode, ToPacking, Chunks),\n\n\tcase ToPackingType of\n\t\tunpacked ->\n\t\t\tok;\n\t\t_ ->\n\t\t\tar_e2e:assert_mine_and_validate(RepackerNode, ValidatorNode, ToPacking)\n\tend.\n\nstart_validator_node(ValidatorNode, RepackerNode, B0) ->\n\t{ok, Config} = ar_test_node:get_config(ValidatorNode),\n\t?assertEqual(ar_test_node:peer_name(ValidatorNode),\n\t\tar_test_node:start_other_node(ValidatorNode, B0, Config#config{\n\t\t\tpeers = [ar_test_node:peer_ip(RepackerNode)],\n\t\t\tstart_from_latest_state = true,\n\t\t\tauto_join = true\n\t\t}, true)\n\t),\n\tok."
  },
  {
    "path": "apps/arweave/src/e2e/ar_repack_mine_tests.erl",
    "content": "-module(ar_repack_mine_tests).\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"arweave/include/ar_consensus.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-define(REPACK_MINE_TEST_TIMEOUT, 600).\n\n%% --------------------------------------------------------------------------------------------\n%% Test Registration\n%% --------------------------------------------------------------------------------------------\nrepack_mine_test_() ->\n\tTimeout = ?REPACK_MINE_TEST_TIMEOUT,\n\t[\n\t\t{timeout, Timeout, {with, {replica_2_9, replica_2_9}, [fun test_repack_mine/1]}},\n\t\t{timeout, Timeout, {with, {replica_2_9, spora_2_6}, [fun test_repack_mine/1]}},\n\t\t{timeout, Timeout, {with, {replica_2_9, unpacked}, [fun test_repack_mine/1]}},\n\n\t\t{timeout, Timeout, {with, {unpacked, replica_2_9}, [fun test_repack_mine/1]}},\n\t\t{timeout, Timeout, {with, {unpacked, spora_2_6}, [fun test_repack_mine/1]}},\n\t\t{timeout, Timeout, {with, {spora_2_6, replica_2_9}, [fun test_repack_mine/1]}},\n\t\t{timeout, Timeout, {with, {spora_2_6, spora_2_6}, [fun test_repack_mine/1]}},\n\t\t{timeout, Timeout, {with, {spora_2_6, unpacked}, [fun test_repack_mine/1]}}\n\t].\n\n%% --------------------------------------------------------------------------------------------\n%% test_repack_mine\n%% --------------------------------------------------------------------------------------------\ntest_repack_mine({FromPackingType, ToPackingType}) ->\n\tar_e2e:delayed_print(<<\" ~p -> ~p \">>, [FromPackingType, ToPackingType]),\n\t?LOG_INFO([{event, test_repack_mine}, {module, ?MODULE},\n\t\t{from_packing_type, FromPackingType}, {to_packing_type, ToPackingType}]),\n\tValidatorNode = peer1,\n\tRepackerNode = peer2,\n\tar_test_node:stop(ValidatorNode),\n\tar_test_node:stop(RepackerNode),\n\t{Blocks, _AddrA, Chunks} = ar_e2e:start_source_node(\n\t\tRepackerNode, FromPackingType, wallet_a),\n\n\t[B0 | _] = Blocks,\n\tstart_validator_node(ValidatorNode, RepackerNode, B0),\n\n\t{WalletB, StorageModules} = ar_e2e:source_node_storage_modules(\n\t\tRepackerNode, ToPackingType, wallet_b),\n\tAddrB = case WalletB of\n\t\tundefined -> undefined;\n\t\t_ -> ar_wallet:to_address(WalletB)\n\tend,\n\tToPacking = ar_e2e:packing_type_to_packing(ToPackingType, AddrB),\n\t{ok, Config} = ar_test_node:get_config(RepackerNode),\n\tar_test_node:restart_with_config(RepackerNode, Config#config{\n\t\tstorage_modules = Config#config.storage_modules ++ StorageModules,\n\t\tmining_addr = AddrB\n\t}),\n\n\tar_e2e:assert_syncs_range(RepackerNode, 0, 4*ar_block:partition_size()),\n\tar_e2e:assert_partition_size(RepackerNode, 0, ToPacking),\n\tar_e2e:assert_partition_size(RepackerNode, 1, ToPacking),\n\tRangeStart2 = 2 * ar_block:partition_size(),\n\tRangeEnd2 = RangeStart2 + floor(0.5 * ar_block:partition_size()),\n\tRangeSize2 = ar_util:ceil_int(RangeEnd2, ?DATA_CHUNK_SIZE)\n\t\t- ar_util:floor_int(RangeStart2, ?DATA_CHUNK_SIZE),\n\tar_e2e:assert_partition_size(\n\t\tRepackerNode, 2, ToPacking, RangeSize2),\n\t%% Don't assert chunks here. Since we have two storage modules defined we won't know\n\t%% which packing format will be found - which complicates the assertion. 
We'll rely\n\t%% on the assert_chunks later (after we restart with only a single set of storage modules)\n\t%% to verify that the chunks are present.\n\t%% ar_e2e:assert_chunks(RepackerNode, ToPacking, Chunks),\n\tar_e2e:assert_empty_partition(RepackerNode, 3, ToPacking),\n\n\tar_test_node:restart_with_config(RepackerNode, Config#config{\n\t\tstorage_modules = StorageModules,\n\t\tmining_addr = AddrB\n\t}),\n\tar_e2e:assert_syncs_range(RepackerNode, ToPacking, 0, 4*ar_block:partition_size()),\n\tar_e2e:assert_partition_size(RepackerNode, 0, ToPacking),\n\tar_e2e:assert_partition_size(RepackerNode, 1, ToPacking),\n\tRangeStart4 = 2 * ar_block:partition_size(),\n\tRangeEnd4 = RangeStart4 + floor(0.5 * ar_block:partition_size()),\n\tRangeSize4 = ar_util:ceil_int(RangeEnd4, ?DATA_CHUNK_SIZE)\n\t\t- ar_util:floor_int(RangeStart4, ?DATA_CHUNK_SIZE),\n\tar_e2e:assert_partition_size(\n\t\tRepackerNode, 2, ToPacking, RangeSize4),\n\tar_e2e:assert_chunks(RepackerNode, ToPacking, Chunks),\n\tar_e2e:assert_empty_partition(RepackerNode, 3, ToPacking),\n\n\tcase ToPackingType of\n\t\tunpacked ->\n\t\t\tok;\n\t\t_ ->\n\t\t\tar_e2e:assert_mine_and_validate(RepackerNode, ValidatorNode, ToPacking),\n\n\t\t\t%% Now that we mined a block, the rest of partition 2 is below the disk pool\n\t\t\t%% threshold\n\t\t\tar_e2e:assert_syncs_range(RepackerNode, ToPacking, 0, 4*ar_block:partition_size()),\n\t\t\tar_e2e:assert_partition_size(RepackerNode, 0, ToPacking),\n\t\t\tar_e2e:assert_partition_size(RepackerNode, 1, ToPacking),\n\t\t\tar_e2e:assert_partition_size(RepackerNode, 2, ToPacking),\n\t\t\t%% All of partition 3 is still above the disk pool threshold,\n\t\t\t%% except for two chunks, both of which are below the disk pool threshold of 6291456.\n\t\t\t%% The chunk ending at 6029312 crosses the beginning of partition 3, so\n\t\t\t%% it is also synced into this partition.\n\t\t\tar_e2e:assert_partition_size(RepackerNode, 3, ToPacking, 2 * ?DATA_CHUNK_SIZE)\n\tend.\n\nstart_validator_node(ValidatorNode, RepackerNode, B0) ->\n\t{ok, Config} = ar_test_node:get_config(ValidatorNode),\n\t?assertEqual(ar_test_node:peer_name(ValidatorNode),\n\t\tar_test_node:start_other_node(ValidatorNode, B0, Config#config{\n\t\t\tpeers = [ar_test_node:peer_ip(RepackerNode)],\n\t\t\tstart_from_latest_state = true,\n\t\t\tauto_join = true,\n\t\t\tstorage_modules = []\n\t\t}, true)\n\t),\n\tok.\n"
  },
  {
    "path": "apps/arweave/src/e2e/ar_sync_pack_mine_tests.erl",
    "content": "-module(ar_sync_pack_mine_tests).\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n%% --------------------------------------------------------------------------------------------\n%% Fixtures\n%% --------------------------------------------------------------------------------------------\nsetup_source_node(PackingType) ->\n\tSourceNode = peer1,\n\tSinkNode = peer2,\n\tar_test_node:stop(SinkNode),\n\tar_test_node:stop(SourceNode),\n\t{Blocks, _SourceAddr, Chunks} = ar_e2e:start_source_node(SourceNode, PackingType, wallet_a),\n\n\t{Blocks, Chunks, PackingType}.\n\ninstantiator(GenesisData, SinkPackingType, TestFun) ->\n\t{timeout, 600, {with, {GenesisData, SinkPackingType}, [TestFun]}}.\n\t\n%% --------------------------------------------------------------------------------------------\n%% Test Registration\n%% --------------------------------------------------------------------------------------------\n\nreplica_2_9_syncing_test_() ->\n\t{setup, fun () -> setup_source_node(replica_2_9) end, \n\t\tfun (GenesisData) ->\n\t\t\t\t[\n\t\t\t\t\tinstantiator(GenesisData, replica_2_9, fun test_sync_pack_mine/1),\n\t\t\t\t\tinstantiator(GenesisData, spora_2_6, fun test_sync_pack_mine/1),\n\t\t\t\t\tinstantiator(GenesisData, unpacked, fun test_sync_pack_mine/1)\n\t\t\t\t]\n\t\tend}.\n\nspora_2_6_sync_pack_mine_test_() ->\n\t{setup, fun () -> setup_source_node(spora_2_6) end, \n\t\tfun (GenesisData) ->\n\t\t\t\t[\n\t\t\t\t\tinstantiator(GenesisData, replica_2_9, fun test_sync_pack_mine/1),\n\t\t\t\t\tinstantiator(GenesisData, spora_2_6, fun test_sync_pack_mine/1),\n\t\t\t\t\tinstantiator(GenesisData, unpacked, fun test_sync_pack_mine/1)\n\t\t\t\t]\n\t\tend}.\n\nunpacked_sync_pack_mine_test_() ->\n\t{setup, fun () -> setup_source_node(unpacked) end, \n\t\tfun (GenesisData) ->\n\t\t\t\t[\n\t\t\t\t\tinstantiator(GenesisData, replica_2_9, fun test_sync_pack_mine/1),\n\t\t\t\t\tinstantiator(GenesisData, spora_2_6, fun test_sync_pack_mine/1),\n\t\t\t\t\tinstantiator(GenesisData, unpacked, fun test_sync_pack_mine/1)\n\t\t\t\t]\n\t\tend}.\n\n% Note: we should limit the number of tests run per setup_source_node to 5, if it gets\n% too long then the source node may hit a difficulty adjustment, which can impact the\n% results.\nunpacked_edge_case_test_() ->\n\t{setup, fun () -> setup_source_node(unpacked) end, \n\t\tfun (GenesisData) ->\n\t\t\t\t[\n\t\t\t\t\tinstantiator(GenesisData, {replica_2_9, unpacked}, \n\t\t\t\t\t\tfun test_unpacked_and_packed_sync_pack_mine/1),\n\t\t\t\t\tinstantiator(GenesisData, {unpacked, replica_2_9}, \n\t\t\t\t\t\tfun test_unpacked_and_packed_sync_pack_mine/1),\n\t\t\t\t\tinstantiator(GenesisData, replica_2_9, \n\t\t\t\t\t\tfun test_entropy_first_sync_pack_mine/1),\n\t\t\t\t\tinstantiator(GenesisData, replica_2_9, \n\t\t\t\t\t\tfun test_entropy_last_sync_pack_mine/1)\n\t\t\t\t]\n\t\tend}.\n\nspora_2_6_edge_case_test_() ->\n\t{setup, fun () -> setup_source_node(spora_2_6) end, \n\t\tfun (GenesisData) ->\n\t\t\t\t[\n\t\t\t\t\tinstantiator(GenesisData, {replica_2_9, unpacked}, \n\t\t\t\t\t\tfun test_unpacked_and_packed_sync_pack_mine/1),\n\t\t\t\t\tinstantiator(GenesisData, {unpacked, replica_2_9}, \n\t\t\t\t\t\tfun test_unpacked_and_packed_sync_pack_mine/1),\n\t\t\t\t\tinstantiator(GenesisData, replica_2_9, \n\t\t\t\t\t\tfun test_entropy_first_sync_pack_mine/1),\n\t\t\t\t\tinstantiator(GenesisData, replica_2_9, \n\t\t\t\t\t\tfun 
test_entropy_last_sync_pack_mine/1)\n\t\t\t\t]\n\t\tend}.\n\nunpacked_small_module_test_() ->\n\t{setup, fun () -> setup_source_node(unpacked) end, \n\t\tfun (GenesisData) ->\n\t\t\t\t[\n\t\t\t\t\tinstantiator(GenesisData, replica_2_9, \n\t\t\t\t\t\tfun test_small_module_aligned_sync_pack_mine/1),\n\t\t\t\t\tinstantiator(GenesisData, replica_2_9, \n\t\t\t\t\t\tfun test_small_module_unaligned_sync_pack_mine/1)\n\t\t\t\t]\n\tend}.\n\t\nreplica_2_9_small_module_test_() ->\n\t{setup, fun () -> setup_source_node(replica_2_9) end, \n\t\tfun (GenesisData) ->\n\t\t\t\t[\n\t\t\t\t\tinstantiator(GenesisData, replica_2_9, \n\t\t\t\t\t\tfun test_small_module_aligned_sync_pack_mine/1),\n\t\t\t\t\tinstantiator(GenesisData, replica_2_9, \n\t\t\t\t\t\tfun test_small_module_unaligned_sync_pack_mine/1)\n\t\t\t\t]\n\t\tend}.\n\nspora_2_6_small_module_test_() ->\n\t{setup, fun () -> setup_source_node(spora_2_6) end, \n\t\tfun (GenesisData) ->\n\t\t\t\t[\n\t\t\t\t\tinstantiator(GenesisData, replica_2_9, \n\t\t\t\t\t\tfun test_small_module_aligned_sync_pack_mine/1),\n\t\t\t\t\tinstantiator(GenesisData, replica_2_9, \n\t\t\t\t\t\tfun test_small_module_unaligned_sync_pack_mine/1)\n\t\t\t\t]\n\t\tend}.\n\n%% NOTE: the test_large_module_unaligned_sync_pack_mine test should always be \n%% run after the test_large_module_aligned_sync_pack_mine test. This is because\n%% once a block is mined it shifts the disk pool threshold and changes the\n%% expected syncable chunks. The test_large_module_unaligned_sync_pack_mine test\n%% assumes the higher disk pool threshold in its assertions.\n\nunpacked_large_module_test_() ->\n\t{setup, fun () -> setup_source_node(unpacked) end, \n\t\tfun (GenesisData) ->\n\t\t\t\t[\n\t\t\t\t\tinstantiator(GenesisData, replica_2_9, \n\t\t\t\t\t\tfun test_large_module_aligned_sync_pack_mine/1),\n\t\t\t\t\tinstantiator(GenesisData, replica_2_9, \n\t\t\t\t\t\tfun test_large_module_unaligned_sync_pack_mine/1)\n\t\t\t\t]\n\tend}.\n\t\nreplica_2_9_large_module_test_() ->\n\t{setup, fun () -> setup_source_node(replica_2_9) end, \n\t\tfun (GenesisData) ->\n\t\t\t\t[\n\t\t\t\t\tinstantiator(GenesisData, replica_2_9, \n\t\t\t\t\t\tfun test_large_module_aligned_sync_pack_mine/1),\n\t\t\t\t\tinstantiator(GenesisData, replica_2_9, \n\t\t\t\t\t\tfun test_large_module_unaligned_sync_pack_mine/1)\n\t\t\t\t]\n\t\tend}.\n\nspora_2_6_large_module_test_() ->\n\t{setup, fun () -> setup_source_node(spora_2_6) end, \n\t\tfun (GenesisData) ->\n\t\t\t\t[\n\t\t\t\t\tinstantiator(GenesisData, replica_2_9, \n\t\t\t\t\t\tfun test_large_module_aligned_sync_pack_mine/1),\n\t\t\t\t\tinstantiator(GenesisData, replica_2_9, \n\t\t\t\t\t\tfun test_large_module_unaligned_sync_pack_mine/1)\n\t\t\t\t]\n\t\tend}.\n\ndisk_pool_threshold_test_() ->\n\t[\n\t\tinstantiator(unpacked, replica_2_9, fun test_disk_pool_threshold/1),\n\t\tinstantiator(unpacked, spora_2_6, fun test_disk_pool_threshold/1),\n\t\tinstantiator(spora_2_6, replica_2_9, fun test_disk_pool_threshold/1),\n\t\tinstantiator(spora_2_6, spora_2_6, fun test_disk_pool_threshold/1),\n\t\tinstantiator(spora_2_6, unpacked, fun test_disk_pool_threshold/1)\n\t].\n\n%% --------------------------------------------------------------------------------------------\n%% test_sync_pack_mine\n%% --------------------------------------------------------------------------------------------\ntest_sync_pack_mine({{Blocks, Chunks, SourcePackingType}, SinkPackingType}) ->\n\tar_e2e:delayed_print(<<\" ~p -> ~p \">>, [SourcePackingType, SinkPackingType]),\n\t?LOG_INFO([{event, 
test_sync_pack_mine}, {module, ?MODULE},\n\t\t{from_packing_type, SourcePackingType}, {to_packing_type, SinkPackingType}]),\n\t[B0 | _] = Blocks,\n\tSourceNode = peer1,\n\tSinkNode = peer2,\n\n\tSinkPacking = start_sink_node(SinkNode, SourceNode, B0, SinkPackingType),\n\n\tRangeStart = ar_block:partition_size(),\n\tRangeEnd = 2*ar_block:partition_size() + ar_storage_module:get_overlap(SinkPacking),\n\n\t%% Partition 1 and half of partition 2 are below the disk pool threshold\n\tar_e2e:assert_syncs_range(SinkNode, SinkPacking, RangeStart, RangeEnd),\n\tar_e2e:assert_partition_size(SinkNode, 1, SinkPacking),\n\tar_e2e:assert_chunks(SinkNode, SinkPacking, Chunks),\n\n\tcase SinkPackingType of\n\t\tunpacked ->\n\t\t\tok;\n\t\t_ ->\n\t\t\tar_e2e:assert_mine_and_validate(SinkNode, SourceNode, SinkPacking),\n\t\t\tok\n\tend.\n\ntest_unpacked_and_packed_sync_pack_mine(\n\t\t{{Blocks, _Chunks, SourcePackingType}, {PackingType1, PackingType2}}) ->\n\tar_e2e:delayed_print(<<\" ~p -> {~p, ~p} \">>, [SourcePackingType, PackingType1, PackingType2]),\n\t?LOG_INFO([{event, test_unpacked_and_packed_sync_pack_mine}, {module, ?MODULE},\n\t\t{from_packing_type, SourcePackingType}, {to_packing_type, {PackingType1, PackingType2}}]),\n\t[B0 | _] = Blocks,\n\tSourceNode = peer1,\n\tSinkNode = peer2,\n\n\t{SinkPacking1, SinkPacking2} = start_sink_node(\n\t\tSinkNode, SourceNode, B0, PackingType1, PackingType2),\n\n\tRangeStart1 = ar_block:partition_size(),\n\tRangeEnd1 = 2*ar_block:partition_size() + ar_storage_module:get_overlap(SinkPacking1),\n\n\t%% Data exists as both packed and unpacked, so it will exist in the global sync record\n\t%% even though replica_2_9 data is filtered out.\n\tar_e2e:assert_syncs_range(SinkNode, RangeStart1, RangeEnd1),\n\tar_e2e:assert_partition_size(SinkNode, 1, SinkPacking1),\n\tar_e2e:assert_partition_size(SinkNode, 1, SinkPacking2),\n\t%% XXX: we should be able to assert the chunks here, but since we have two\n\t%% storage modules configured and are querying the replica_2_9 chunk, GET /chunk gets\n\t%% confused and tries to load the unpacked chunk, which then fails within the middleware\n\t%% handler and 404s. To fix this, we'd need to update GET /chunk to query all matching\n\t%% storage modules and then find the best one to return. But since this is a rare edge\n\t%% case, we'll just disable the assertion for now.\n\t%% ar_e2e:assert_chunks(SinkNode, SinkPacking, Chunks),\n\t\n\tMinablePacking = case PackingType1 of\n\t\tunpacked -> SinkPacking2;\n\t\t_ -> SinkPacking1\n\tend,\n\tar_e2e:assert_mine_and_validate(SinkNode, SourceNode, MinablePacking),\n\tok.\n\t\n\ntest_entropy_first_sync_pack_mine({{Blocks, Chunks, SourcePackingType}, SinkPackingType}) ->\n\tar_e2e:delayed_print(<<\" ~p -> ~p \">>, [SourcePackingType, SinkPackingType]),\n\t?LOG_INFO([{event, test_entropy_first_sync_pack_mine}, {module, ?MODULE},\n\t\t{from_packing_type, SourcePackingType}, {to_packing_type, SinkPackingType}]),\n\t[B0 | _] = Blocks,\n\tSourceNode = peer1,\n\tSinkNode = peer2,\n\n\tWallet = ar_test_node:remote_call(SinkNode, ar_e2e, load_wallet_fixture, [wallet_b]),\n\tSinkAddr = ar_wallet:to_address(Wallet),\n\tSinkPacking = ar_e2e:packing_type_to_packing(SinkPackingType, SinkAddr),\n\t{ok, Config} = ar_test_node:get_config(SinkNode),\n\t\n\tModule = {ar_block:partition_size(), 1, SinkPacking},\n\tStoreID = ar_storage_module:id(Module),\n\tStorageModules = [ Module ],\n\n\n\t%% 1. 
Run node with no sync jobs so that it only prepares entropy\n\tConfig2 = Config#config{\n\t\tpeers = [ar_test_node:peer_ip(SourceNode)],\n\t\tstart_from_latest_state = true,\n\t\tstorage_modules = StorageModules,\n\t\tauto_join = true,\n\t\tmining_addr = SinkAddr,\n\t\tsync_jobs = 0\n\t},\n\t?assertEqual(ar_test_node:peer_name(SinkNode),\n\t\tar_test_node:start_other_node(SinkNode, B0, Config2, true)\n\t),\n\n\tRangeStart = ar_block:partition_size(),\n\tRangeEnd = 2*ar_block:partition_size() + ar_storage_module:get_overlap(SinkPacking),\n\n\tar_e2e:assert_has_entropy(SinkNode, RangeStart, RangeEnd, StoreID),\n\tar_e2e:assert_empty_partition(SinkNode, 1, unpacked),\n\tar_e2e:assert_empty_partition(SinkNode, 1, unpacked_padded),\n\n\t%% Delete two chunks of entropy from storage to test that the node will heal itself.\n\t%% 1. Delete the chunk from disk as well as all sync records.\n\t%% 2. Delete the chunk only from disk, but keep it in the sync records.\n\tDeleteOffset1 = RangeStart + ?DATA_CHUNK_SIZE,\n\tar_test_node:remote_call(SinkNode, ar_chunk_storage, delete,\n\t\t[DeleteOffset1, StoreID]),\n\tDeleteOffset2 = DeleteOffset1 + ?DATA_CHUNK_SIZE,\n\tar_test_node:remote_call(SinkNode, ar_chunk_storage, delete_chunk,\n\t\t[DeleteOffset2, StoreID]),\n\n\t%% 2. Run node with sync jobs so that it syncs and packs data\n\tar_test_node:restart_with_config(SinkNode, Config2#config{\n\t\tsync_jobs = 100\n\t}),\n\n\tar_e2e:assert_syncs_range(SinkNode, SinkPacking, RangeStart, RangeEnd),\n\tar_e2e:assert_partition_size(SinkNode, 1, SinkPacking),\n\tar_e2e:assert_empty_partition(SinkNode, 1, unpacked),\n\tar_e2e:assert_empty_partition(SinkNode, 1, unpacked_padded),\n\tar_e2e:assert_chunks(SinkNode, SinkPacking, Chunks),\n\n\t%% 3. Make sure the data is minable\n\tar_e2e:assert_mine_and_validate(SinkNode, SourceNode, SinkPacking),\n\tok.\n\ntest_entropy_last_sync_pack_mine({{Blocks, Chunks, SourcePackingType}, SinkPackingType}) ->\n\tar_e2e:delayed_print(<<\" ~p -> ~p \">>, [SourcePackingType, SinkPackingType]),\n\t?LOG_INFO([{event, test_entropy_last_sync_pack_mine}, {module, ?MODULE},\n\t\t{from_packing_type, SourcePackingType}, {to_packing_type, SinkPackingType}]),\n\t[B0 | _] = Blocks,\n\tSourceNode = peer1,\n\tSinkNode = peer2,\n\n\tWallet = ar_test_node:remote_call(SinkNode, ar_e2e, load_wallet_fixture, [wallet_b]),\n\tSinkAddr = ar_wallet:to_address(Wallet),\n\tSinkPacking = ar_e2e:packing_type_to_packing(SinkPackingType, SinkAddr),\n\t{ok, Config} = ar_test_node:get_config(SinkNode),\n\t\n\tModule = {ar_block:partition_size(), 1, SinkPacking},\n\tStoreID = ar_storage_module:id(Module),\n\tStorageModules = [ Module ],\n\n\t%% 1. Run node with no replica_2_9 workers so that it only syncs chunks\n\tConfig2 = Config#config{\n\t\tpeers = [ar_test_node:peer_ip(SourceNode)],\n\t\tstart_from_latest_state = true,\n\t\tstorage_modules = StorageModules,\n\t\tauto_join = true,\n\t\tmining_addr = SinkAddr,\n\t\treplica_2_9_workers = 0\n\t},\n\t?assertEqual(ar_test_node:peer_name(SinkNode),\n\t\tar_test_node:start_other_node(SinkNode, B0, Config2, true)\n\t),\n\n\tRangeStart = ar_block:partition_size(),\n\tRangeEnd = 2*ar_block:partition_size() + ar_storage_module:get_overlap(SinkPacking),\n\n\tar_e2e:assert_syncs_range(SinkNode, SinkPacking, RangeStart, RangeEnd),\n\tar_e2e:assert_partition_size(SinkNode, 1, unpacked_padded),\n\tar_e2e:assert_empty_partition(SinkNode, 1, unpacked),\n\n\t%% 2. 
Run node with replica_2_9 workers so that it packs the synced data\n\tar_test_node:restart_with_config(SinkNode, Config2#config{\n\t\treplica_2_9_workers = 8\n\t}),\n\n\tar_e2e:assert_has_entropy(SinkNode, RangeStart, RangeEnd, StoreID),\n\tar_e2e:assert_syncs_range(SinkNode, SinkPacking, RangeStart, RangeEnd),\n\tar_e2e:assert_partition_size(SinkNode, 1, SinkPacking),\n\tar_e2e:assert_empty_partition(SinkNode, 1, unpacked_padded),\n\tar_e2e:assert_empty_partition(SinkNode, 1, unpacked),\n\tar_e2e:assert_chunks(SinkNode, SinkPacking, Chunks),\n\n\t%% 3. Make sure the data is minable\n\tar_e2e:assert_mine_and_validate(SinkNode, SourceNode, SinkPacking),\n\tok.\n\ntest_small_module_aligned_sync_pack_mine({{Blocks, Chunks, SourcePackingType}, SinkPackingType}) ->\n\tar_e2e:delayed_print(<<\" ~p -> ~p \">>, [SourcePackingType, SinkPackingType]),\n\t?LOG_INFO([{event, test_small_module_aligned_sync_pack_mine}, {module, ?MODULE},\n\t\t{from_packing_type, SourcePackingType}, {to_packing_type, SinkPackingType}]),\n\t[B0 | _] = Blocks,\n\tSourceNode = peer1,\n\tSinkNode = peer2,\n\n\tWallet = ar_test_node:remote_call(SinkNode, ar_e2e, load_wallet_fixture, [wallet_b]),\n\tSinkAddr = ar_wallet:to_address(Wallet),\n\tSinkPacking = ar_e2e:packing_type_to_packing(SinkPackingType, SinkAddr),\n\t{ok, Config} = ar_test_node:get_config(SinkNode),\n\n\tModule = {floor(0.5 * ar_block:partition_size()), 2, SinkPacking},\n\tStoreID = ar_storage_module:id(Module),\n\tStorageModules = [ Module ],\n\n\t%% Sync the second half of partition 1\n\tConfig2 = Config#config{\n\t\tpeers = [ar_test_node:peer_ip(SourceNode)],\n\t\tstart_from_latest_state = true,\n\t\tstorage_modules = StorageModules,\n\t\tauto_join = true,\n\t\tmining_addr = SinkAddr\n\t},\n\t?assertEqual(ar_test_node:peer_name(SinkNode),\n\t\tar_test_node:start_other_node(SinkNode, B0, Config2, true)\n\t),\n\n\tRangeStart = floor(ar_block:partition_size()),\n\tRangeEnd = floor(1.5 * ar_block:partition_size()),\n\tPartition = ar_node:get_partition_number(RangeStart),\n\tRangeSize = ar_e2e:aligned_partition_size(SinkNode, Partition, SinkPacking),\n\n\t%% Make sure the expected data was synced\n\tar_e2e:assert_partition_size(SinkNode, 1, SinkPacking, RangeSize),\n\tar_e2e:assert_empty_partition(SinkNode, 1, unpacked_padded),\n\tar_e2e:assert_empty_partition(SinkNode, 1, unpacked),\n\tar_e2e:assert_chunks(SinkNode, SinkPacking, lists:sublist(Chunks, 1, 4)),\n\tar_e2e:assert_syncs_range(SinkNode, SinkPacking, RangeStart, RangeEnd),\n\n\t%% Make sure no extra entropy was generated\n\tAlignedStart = ar_util:floor_int(RangeStart, ?DATA_CHUNK_SIZE),\n\tAlignedEnd = ar_util:ceil_int(RangeEnd, ?DATA_CHUNK_SIZE) + ?DATA_CHUNK_SIZE,\n\tar_e2e:assert_has_entropy(SinkNode, AlignedStart, AlignedEnd, StoreID),\n\tar_e2e:assert_no_entropy(SinkNode, AlignedEnd, AlignedEnd + ar_block:partition_size(), StoreID),\n\n\t%% Make sure the data is minable\n\tar_e2e:assert_mine_and_validate(SinkNode, SourceNode, SinkPacking),\n\tok.\n\ntest_small_module_unaligned_sync_pack_mine({{Blocks, Chunks, SourcePackingType}, SinkPackingType}) ->\n\tar_e2e:delayed_print(<<\" ~p -> ~p \">>, [SourcePackingType, SinkPackingType]),\n\t?LOG_INFO([{event, test_small_module_unaligned_sync_pack_mine}, {module, ?MODULE},\n\t\t{from_packing_type, SourcePackingType}, {to_packing_type, SinkPackingType}]),\n\t[B0 | _] = Blocks,\n\tSourceNode = peer1,\n\tSinkNode = peer2,\n\n\tWallet = ar_test_node:remote_call(SinkNode, ar_e2e, load_wallet_fixture, [wallet_b]),\n\tSinkAddr = ar_wallet:to_address(Wallet),\n\tSinkPacking 
= ar_e2e:packing_type_to_packing(SinkPackingType, SinkAddr),\n\t{ok, Config} = ar_test_node:get_config(SinkNode),\n\n\tModule = {floor(0.5 * ar_block:partition_size()), 3, SinkPacking},\n\tStoreID = ar_storage_module:id(Module),\n\tStorageModules = [ Module ],\n\n\t%% Sync the second half of partition 1\n\tConfig2 = Config#config{\n\t\tpeers = [ar_test_node:peer_ip(SourceNode)],\n\t\tstart_from_latest_state = true,\n\t\tstorage_modules = StorageModules,\n\t\tauto_join = true,\n\t\tmining_addr = SinkAddr\n\t},\n\t?assertEqual(ar_test_node:peer_name(SinkNode),\n\t\tar_test_node:start_other_node(SinkNode, B0, Config2, true)\n\t),\n\n\tRangeStart = floor(1.5 * ar_block:partition_size()),\n\tRangeEnd = floor(2 * ar_block:partition_size()),\n\tPartition = ar_node:get_partition_number(RangeStart),\n\tRangeSize = ar_e2e:aligned_partition_size(SinkNode, Partition, SinkPacking),\n\n\t%% Make sure the expected data was synced\t\n\tar_e2e:assert_partition_size(SinkNode, 1, SinkPacking, RangeSize),\n\tar_e2e:assert_empty_partition(SinkNode, 1, unpacked_padded),\n\tar_e2e:assert_empty_partition(SinkNode, 1, unpacked),\n\tar_e2e:assert_chunks(SinkNode, SinkPacking, lists:sublist(Chunks, 5, 4)),\n\t%% Even though the packing type is replica_2_9, the data will still exist in the\n\t%% default partition as unpacked - and so will exist in the global sync record.\n\tar_e2e:assert_syncs_range(SinkNode, RangeStart, RangeEnd),\n\n\t%% Make sure no extra entropy was generated\n\tAlignedStart = ar_util:floor_int(RangeStart, ?DATA_CHUNK_SIZE),\n\tAlignedEnd = ar_util:ceil_int(RangeEnd, ?DATA_CHUNK_SIZE) + ?DATA_CHUNK_SIZE,\n\tar_e2e:assert_has_entropy(SinkNode, AlignedStart, AlignedEnd, StoreID),\n\tar_e2e:assert_no_entropy(SinkNode, 0, AlignedStart, StoreID),\n\n\t%% Make sure the data is minable\n\tar_e2e:assert_mine_and_validate(SinkNode, SourceNode, SinkPacking),\n\tok.\n\n\ntest_large_module_aligned_sync_pack_mine({{Blocks, Chunks, SourcePackingType}, SinkPackingType}) ->\n\tar_e2e:delayed_print(<<\" ~p -> ~p \">>, [SourcePackingType, SinkPackingType]),\n\t?LOG_INFO([{event, test_large_module_aligned_sync_pack_mine}, {module, ?MODULE},\n\t\t{from_packing_type, SourcePackingType}, {to_packing_type, SinkPackingType}]),\n\t[B0 | _] = Blocks,\n\tSourceNode = peer1,\n\tSinkNode = peer2,\n\n\tWallet = ar_test_node:remote_call(SinkNode, ar_e2e, load_wallet_fixture, [wallet_b]),\n\tSinkAddr = ar_wallet:to_address(Wallet),\n\tSinkPacking = ar_e2e:packing_type_to_packing(SinkPackingType, SinkAddr),\n\t{ok, Config} = ar_test_node:get_config(SinkNode),\n\n\tModuleSize = floor(2 * ar_block:partition_size()),\n\tModule = {ModuleSize, 0, SinkPacking},\n\tStoreID = ar_storage_module:id(Module),\n\tStorageModules = [ Module ],\n\n\tConfig2 = Config#config{\n\t\tpeers = [ar_test_node:peer_ip(SourceNode)],\n\t\tstart_from_latest_state = true,\n\t\tstorage_modules = StorageModules,\n\t\tauto_join = true,\n\t\tmining_addr = SinkAddr\n\t},\n\t?assertEqual(ar_test_node:peer_name(SinkNode),\n\t\tar_test_node:start_other_node(SinkNode, B0, Config2, true)\n\t),\n\n\tRangeStart = 0,\n\tRangeEnd = ModuleSize,\n\n\t%% Make sure the expected data was synced\n\tar_e2e:assert_syncs_range(SinkNode, SinkPacking, RangeStart, RangeEnd),\n\t%% The assert_partition_size logic uses ar_mining_stats as the data source.\n\t%% It does not currently handle large storage modules well - it attributes all\n\t%% chunks in a large storage module to the first partition covered. So we'll\n\t%% just assert on partition 0 and ignore partition 1. 
It is worth keeping\n\t%% assert_partition_size in addition to  assert_syncs_range despite this limitation\n\t%% as it provides some coverage of the v2_index_data_size_by_packing metric (which\n\t%% is relied on by miners).\n\tar_e2e:assert_partition_size(SinkNode, 0, SinkPacking, 4456448),\n\tar_e2e:assert_empty_partition(SinkNode, 1, unpacked_padded),\n\tar_e2e:assert_empty_partition(SinkNode, 1, unpacked),\n\tar_e2e:assert_chunks(SinkNode, SinkPacking, lists:sublist(Chunks, 7, 2)),\n\n\t%% Make sure entropy was generated for the module range\n\tar_e2e:assert_has_entropy(SinkNode, RangeStart, RangeEnd, StoreID),\n\n\t%% Make sure the data is minable\n\tar_e2e:assert_mine_and_validate(SinkNode, SourceNode, SinkPacking),\n\tok.\n\n%% @doc Test a large storage module that is unaligned with the partition (it starts in the\n%% middle of partition 1 and covers through the end of partition 2).\ntest_large_module_unaligned_sync_pack_mine({{Blocks, Chunks, SourcePackingType}, SinkPackingType}) ->\n\tar_e2e:delayed_print(<<\" ~p -> ~p \">>, [SourcePackingType, SinkPackingType]),\n\t?LOG_INFO([{event, test_large_module_unaligned_sync_pack_mine}, {module, ?MODULE},\n\t\t{from_packing_type, SourcePackingType}, {to_packing_type, SinkPackingType}]),\n\t[B0 | _] = Blocks,\n\tSourceNode = peer1,\n\tSinkNode = peer2,\n\n\tWallet = ar_test_node:remote_call(SinkNode, ar_e2e, load_wallet_fixture, [wallet_b]),\n\tSinkAddr = ar_wallet:to_address(Wallet),\n\tSinkPacking = ar_e2e:packing_type_to_packing(SinkPackingType, SinkAddr),\n\t{ok, Config} = ar_test_node:get_config(SinkNode),\n\n\tModuleSize = floor(1.5 * ar_block:partition_size()),\n\tModule = {ModuleSize, 1, SinkPacking},\n\tStoreID = ar_storage_module:id(Module),\n\tStorageModules = [ Module ],\n\n\tConfig2 = Config#config{\n\t\tpeers = [ar_test_node:peer_ip(SourceNode)],\n\t\tstart_from_latest_state = true,\n\t\tstorage_modules = StorageModules,\n\t\tauto_join = true,\n\t\tmining_addr = SinkAddr\n\t},\n\t?assertEqual(ar_test_node:peer_name(SinkNode),\n\t\tar_test_node:start_other_node(SinkNode, B0, Config2, true)\n\t),\n\n\tRangeStart = ModuleSize,\n\tRangeEnd = 2 * ModuleSize,\n\n\t%% Make sure the expected data was synced\n\tar_e2e:assert_syncs_range(SinkNode, SinkPacking, RangeStart, RangeEnd),\n\t%% The assert_partition_size logic uses ar_mining_stats as the data source.\n\t%% It does not currently handle large storage modules well - it attributes all\n\t%% chunks in a large storage module to the first partition covered. So we'll\n\t%% just assert on partition 1 and ignore partition 2. It is worth keeping\n\t%% assert_partition_size in addition to  assert_syncs_range despite this limitation\n\t%% as it provides some coverage of the v2_index_data_size_by_packing metric (which\n\t%% is relied on by miners).\n\t%% \n\t%% Also note: this test assumes that it runs after the\n\t%% test_large_module_aligned_sync_pack_mine test - and therefore the blockchain is\n\t%% at height 6 rather than the default 5. 
This shifts the disk pool threshold and\n\t%% allows peer2 to fully sync its large storage module.\n\tar_e2e:assert_partition_size(SinkNode, 1, SinkPacking, 3407872),\n\tar_e2e:assert_empty_partition(SinkNode, 0, SinkPacking),\n\tar_e2e:assert_empty_partition(SinkNode, 1, unpacked_padded),\n\tar_e2e:assert_empty_partition(SinkNode, 1, unpacked),\n\tar_e2e:assert_chunks(SinkNode, SinkPacking, lists:sublist(Chunks, 7, 2)),\n\n\t%% Make sure entropy was generated for the module range\n\tar_e2e:assert_has_entropy(SinkNode, RangeStart, RangeEnd, StoreID),\n\n\t%% Make sure the data is minable\n\tar_e2e:assert_mine_and_validate(SinkNode, SourceNode, SinkPacking),\n\tok.\n\ntest_disk_pool_threshold({SourcePackingType, SinkPackingType}) ->\n\tar_e2e:delayed_print(<<\" ~p -> ~p \">>, [SourcePackingType, SinkPackingType]),\n\t?LOG_INFO([{event, test_disk_pool_threshold}, {module, ?MODULE},\n\t\t{from_packing_type, SourcePackingType}, {to_packing_type, SinkPackingType}]),\n\n\tSourceNode = peer1,\n\tSinkNode = peer2,\n\n\t%% When the source packing type is unpacked, this setup process performs some\n\t%% extra disk pool checks:\n\t%% 1. spin up a spora_2_6 node and mine some blocks\n\t%% 2. some chunks are below the disk pool threshold and some above\n\t%% 3. spin up an unpacked node and sync from spora_2_6\n\t%% 4. shut down the spora_2_6 node\n\t%% 5. now the unpacked node should have synced all of the chunks, both above and below\n\t%%    the disk pool threshold\n\t%% 6. proceed with the test and spin up the sink node and confirm it too can sync all chunks\n\t%%    from the unpacked source node - both above and below the disk pool threshold\n\t{Blocks, Chunks, SourcePackingType} = setup_source_node(SourcePackingType),\n\t[B0 | _] = Blocks,\n\n\tSinkPacking = start_sink_node(SinkNode, SourceNode, B0, SinkPackingType),\n\tRangeStart = floor(2 * ar_block:partition_size()),\n\tRangeEnd = floor(2.5 * ar_block:partition_size()),\n\tRangeSize = ar_util:ceil_int(RangeEnd, ?DATA_CHUNK_SIZE) - ar_util:floor_int(RangeStart, ?DATA_CHUNK_SIZE),\n\n\t%% Partition 1 and half of partition 2 are below the disk pool threshold\n\tar_e2e:assert_syncs_range(SinkNode, SinkPacking, ar_block:partition_size(), 4*ar_block:partition_size()),\n\tar_e2e:assert_partition_size(SinkNode, 1, SinkPacking),\n\tar_e2e:assert_partition_size(SinkNode, 2, SinkPacking, RangeSize),\n\tar_e2e:assert_empty_partition(SinkNode, 3, SinkPacking),\n\tar_e2e:assert_does_not_sync_range(SinkNode, 0, ar_block:partition_size()),\n\tar_e2e:assert_chunks(SinkNode, SinkPacking, Chunks),\n\n\tcase SinkPackingType of\n\t\tunpacked ->\n\t\t\tok;\n\t\t_ ->\n\t\t\tar_e2e:assert_mine_and_validate(SinkNode, SourceNode, SinkPacking),\n\n\t\t\t%% Now that we mined a block, the rest of partition 2 is below the disk pool\n\t\t\t%% threshold\n\t\t\tar_e2e:assert_syncs_range(SinkNode, SinkPacking, ar_block:partition_size(), 4*ar_block:partition_size()),\n\t\t\tar_e2e:assert_partition_size(SinkNode, 2, SinkPacking),\n\t\t\t%% All of partition 3 is still above the disk pool threshold,\n\t\t\t%% except for two chunks, both of which are below the disk pool threshold of 6291456.\n\t\t\t%% The chunk ending at 6029312 crosses the beginning of partition 3, so\n\t\t\t%% it is also synced into this partition.\n\t\t\tar_e2e:assert_partition_size(SinkNode, 3, SinkPacking, 2 * ?DATA_CHUNK_SIZE),\n\t\t\tar_e2e:assert_does_not_sync_range(SinkNode, 0, ar_block:partition_size()),\n\t\t\tok\n\tend.\n\nstart_sink_node(Node, SourceNode, B0, PackingType) ->\n\tWallet = ar_test_node:remote_call(Node, 
ar_e2e, load_wallet_fixture, [wallet_b]),\n\tSinkAddr = ar_wallet:to_address(Wallet),\n\tSinkPacking = ar_e2e:packing_type_to_packing(PackingType, SinkAddr),\n\t{ok, Config} = ar_test_node:get_config(Node),\n\t\n\tStorageModules = [\n\t\t{ar_block:partition_size(), 1, SinkPacking},\n\t\t{ar_block:partition_size(), 2, SinkPacking},\n\t\t{ar_block:partition_size(), 3, SinkPacking},\n\t\t{ar_block:partition_size(), 4, SinkPacking},\n\t\t{ar_block:partition_size(), 5, SinkPacking},\n\t\t{ar_block:partition_size(), 6, SinkPacking},\n\t\t{ar_block:partition_size(), 10, SinkPacking}\n\t],\n\t?assertEqual(ar_test_node:peer_name(Node),\n\t\tar_test_node:start_other_node(Node, B0, Config#config{\n\t\t\tpeers = [ar_test_node:peer_ip(SourceNode)],\n\t\t\tstart_from_latest_state = true,\n\t\t\tstorage_modules = StorageModules,\n\t\t\tauto_join = true,\n\t\t\tmining_addr = SinkAddr\n\t\t}, true)\n\t),\n\n\tSinkPacking.\n\nstart_sink_node(Node, SourceNode, B0, PackingType1, PackingType2) ->\n\tWallet = ar_test_node:remote_call(Node, ar_e2e, load_wallet_fixture, [wallet_b]),\n\tSinkAddr = ar_wallet:to_address(Wallet),\n\tSinkPacking1 = ar_e2e:packing_type_to_packing(PackingType1, SinkAddr),\n\tSinkPacking2 = ar_e2e:packing_type_to_packing(PackingType2, SinkAddr),\n\t{ok, Config} = ar_test_node:get_config(Node),\n\t\n\tStorageModules = [\n\t\t{ar_block:partition_size(), 1, SinkPacking1},\n\t\t{ar_block:partition_size(), 1, SinkPacking2}\n\t],\n\n\t?assertEqual(ar_test_node:peer_name(Node),\n\t\tar_test_node:start_other_node(Node, B0, Config#config{\n\t\t\tpeers = [ar_test_node:peer_ip(SourceNode)],\n\t\t\tstart_from_latest_state = true,\n\t\t\tstorage_modules = StorageModules,\n\t\t\tauto_join = true,\n\t\t\tmining_addr = SinkAddr\n\t\t}, true)\n\t),\n\t{SinkPacking1, SinkPacking2}.\n"
  },
  {
    "path": "apps/arweave/src/e2e/fixtures/wallets/wallet_a.json",
    "content": "{\"kty\":\"RSA\",\"ext\":true,\"e\":\"AQAB\",\"n\":\"qi9ZdgEE_uoA804tHIeyiHdCpjZ608K3qX8cU-SrVMmyegXn3rTjGf29rwb2YqQuIohBVv--FSTDf4tQykCuJpKO5EHKH7qi7Hy1sxrJkHR13YbqX99vx4qAIQ-H8Zik4KdD8lKeaAmaOZ0lUt4Y48dxU51HHn06Bxg1HD7SRNnsFDt6juIPREn5pCzFd54braeIxexOM-0DekLR2dh8TjNAYfHy3tkDy64oSt_T5e4HzRLSGccMiGOo-6HFKj7ChxPiuFkDKBsBWcr9opK9TRaTzfQbCWWUasD9Hs5EtFPrS1HYTGpD4SuM33RPNXQZSTJp6BDRPP7bm-xH012uXoQNLe0Y9LUSphNrJZ1eDJ_pyXkS0NXgY9ggYIoJkjLxUCnAV_DO0e7w5BX1dpQv1N8gst-RvfDAYmIUYiDcMRMbmc8NqcJNITW81HqbSWhCLQ87w1UwwdLnB3vQ6dkcFSbzJjI2afqH-DieWvA1Pc9j8A9dQO3ac_4ZyvLo6EW_xAleM37x9jl7x3eSamGubi5IIYvaEVpyxqAHTxifVms-y8P9YiJEQbScdXtHKDEEFe8fZorPQppFscO-BSIzY6lrh_DbOdzTuytPOgMELeQ3bgAL4CVsizFoIMGo-61sEQwVbpCw3XHy-_TLfrn4crVJYn7WxX4SHNwpdfljeu8\",\"d\":\"G18PL-P9FjSzn24w4jhO9hTcUthLS_iyyl-HwlRyW-ouuuJtRwvnxLvjQJ3JjdbjFqm8fI4YV9U4XjCdd1IM0GZc9ghAxnahkpCCNsK1rXaVqGH1GyNYGotDjU2uqyRGTF2Kl5RDJu94bxC_uoK_FQ90QiL3F8fDR_XUQO03q1wzVJO2Y_mmw_Bz5rxOrCzxPa5G2LJnZ4GUwBq0HqnrYDZtAfPEgKP9sMobb-Ns9LuiZJDE2uGBOgRxXrtHd0JtzgTcP5MNZ2tkfbkgrv-T06ywa_z5RjsgskTE0SoSscAXhV8t_yhOL45uE1hlDu9Ty8qAbxMZXAqPbpYDfVLBYu3EiHduGoyrGhxElprxt7BCJ_CzyobZfGQGwelvPL_zYhu2EwzFQGKgrhPgLiW5mru6Jlq6CK86J7KNuFFRXzhjqjNcfupC1XWI9X7SFVKCi4mNf_ZM4ZmagJUTP-hyPowrnw4FnKOjJBadWaAdCVC5uWydHYQPFToraatJXWA5QUl2tktOaK8DPkd1KhIak5pxNaLhihqCdJurLrpTQ638r_jA5wBcLmt6SoYpWm3b7JngiuemlNk-G7B2f5y79_GT-t3jKbyaPClet6fOa2FnqDOT9eqR_ZU6h2h-kWlVSadXzSGGCmqfru57rAxpmJZ5SbuKSBEKnjx8Z-6HbOE\",\"p\":\"5McQNMYpbXEoaid0vFdWl6HOkOn0bSd1XeNDIxISkOsCO2G4kR5faxfA4rEZneXcQ4C_PhilRlgpjtkEG-5B9hgtWeeUWkoTTgBbuHqzv52R3Xc5L_VYtayHGjPi6wiJ5YYaDy4rVJv65gWGpO-6vj__DSnN4tvBcRryogzNH0BDvFyFKIbkqzp0a_psOXG3StxHv5Rue0ySAzvVHsgRxM0UJG06sH9KDstrhQRzQ6UGRWhV9uBPz-xMZYmFQC603kgWUEYRfF10eN7xdD9kM-NW_h4vzDqT6hdfOueZ7I0EUaLTqdb3QuX2UzvsZWCya37m4qW3a6t7zXky6nHaDw\",\"q\":\"vm91oPH5LRficcNfSP4AN-mL3nBn3Pkd8lkL9ZP6G-tR4CIB3-T6FusOeD-v8RwRtoBjbZYDM5XlYZHI1XF1-WYc4TXQp6mvF0S13tU62KyesHxVTRp6v0IyL8F1WOmQReDqoQrsr0ccIxF7jJmxgo-djKAVTU8U1dFGGgSvpxuuI0G7gDYyB5ieL2s38TX1H8FjfhAPEf5DIhzdmiYzoX4qXrT93tCYHCB7h8ffJT9tj_eTuXUitqc9A1FKT8hcbA4-DnUhxr-hI-lkK_dcjuVpAf9VbaKbSQNJ03SG7W8ALkZXmAXMw17t8mw7nFV4nWQIvEly7dgUk8eBwe6xIQ\",\"dp\":\"Du_sX_W8SLgFsoCm_5EYR0g6S33rBqF36UxoWsbYTXv6plPoEBmSk1R2tJZpnMSgUAv88Jn9WI1zES-cNBKnXeEQPPmA1zBU-FfPpUjlqZIpLvOU2UvEogAExjIzE7N4BXNvCiSykZCpnhEoTGaWo8tb5Mkg9znv9GmVA_2f-vVgNtE3pIDCN2fWqCIupMWG-S1OxfR0DjreobVrYdogRuA4-3PiTBnThQnFGGdE-1qwASIh0r-sll_QUSTcfWdPSeAdDNq2U49qhmXQEA3_hd_HE0p3Rndgpv0lq5vpkedXK9lcxo8Rj92h6qdT9P6OR7R-cLfvNOl6aN0L9QDAAw\",\"dq\":\"LOuXwJ4zW7qtlI40VMBthsLVVmQHa-1rbfYpRwVf0uQgTRFYhdq6T1uk7yJ-uw4W84i3a2seWDW8hNZhnE-GN40ptMn_7Pyuq3tutyBvIBsf15uMd4KOf7z6n58vsghuGr2iOtib2gCZF4CRNyot4BFGZZyBSdoknQcfVRXT5UQ3QGPJ-cVO6dHLRn4xFPnYV2RDtsHM_D6Q0WQjta_bL_XVwr9Ivx1PNBtJaE7ySRP8ISCSPQXvaUxrrPOo5sbpXifB5aEllX8wYIs2MNTJhX-B1JHJMfJQVNmsuW9cQHeVgFThZp-_nDoxQKTdLtROfjnRgbCFpqr4t58w8XD_YQ\",\"qi\":\"ImkdM0kwMvIQynehAma2kpjq72Wn54nr1TspB1CTrCTu9swcrgxhHbrd0CELrB5L1xhSBUzzRS05_Jx23fnKjVohV_Zm86QoaY8OAPKwVqVzA77PFfNChBoQ68NQhpXsFYKPU_5bhohhGMytkFKOqA3V1YDhyr8hStlcWLmIkt7MQMzPRK5s2z8uFmZ2KdSynfwr53pqr0UJwq9eCdzKGfL4aLiCEpL60yOk4ZGNCmKSl6pwG-cJQ9s-3cak5aiR54PxdjE0uJ3VUKQDq0lRAETM5lk8cEdINLwVlQ6g3Laxbh-BZIQl2WepmddH8ufAbZMaUgtYq_rzThM68qHLYw\"}"
  },
  {
    "path": "apps/arweave/src/e2e/fixtures/wallets/wallet_b.json",
    "content": "{\"kty\":\"RSA\",\"ext\":true,\"e\":\"AQAB\",\"n\":\"2xxp5KmsGIvR8Y6oSQJJNPgblXNX9vqKnE9P3CoujCnXdaYWsfbpBczZrSnaNV5w3_28Ph4fQHx4JWg6IqegSCB0tpZdnApRvLg406Ho8SWsc0d5QNtEZCvDb4RqWFwVmb7s6cXKtLw41Mq0rk_wI5pKurKpc5Kg_etw3K7TaSWOcsOH6Q42tiKAohN1tkwYnD0-nfRms6pZMVbLsgAMrA5-hojyPF3hSbtYEDYkMk9LOaXh2CXihlRKmpvWlZUo4jjJLUwVwB4k1YiMbLbnBcXpWKcCZtyU3_B_Q5-V1cPBHTMlG1AHrZGtkchRPTyozWvAPGzuyOpbD48k-RjrWkfzXOgsLfCYq5nCBWmOxoP0EgWm7fcgkKK0Q6LJ1CpIa-TtYzFn1Fph6ZrbYG9vxbkI5DP294txGxxjxK-f6gStQj3BG444q_NKfwfpf-sc2v--x50DG_LH3RnGshlcEdHml-107U8M4nHUBjEovtV92MPX0L9j0xLFYA27nWAHyNQhcuWkQbtXYxxk9WoSHM65o-_heDw7BAXc5NxVWXESy-Y_Zpm7i_fyAPiAiHMorqnR_x1ZaXbr7YnSD3e3eEWU0IomCjqcrEyF28kFhWTrlN0UbbCUMSRMTHq8jvTDL3aTggVWABw53DuneCoy87X06x5nBfHQet5DVeCqRdM\",\"d\":\"Led5Nt2qjka8vcUt8_ugOJJKzrfsnFGt2poYfu4TJ8G-e6uuFYmtnW_ebxwZ1uFQKvKOylKnW5-E4aa4qWfbV0EPEIcJrydHGrF9vtjPDj8A-65bjjJh1N7v6EJ3wOkjez0Vy8ITaOwoLxc3oSsvPd_ZlO5QxSGEIYxLh85L4sQEC22FJWf_luSjluqv-laQjRoDYjZi53RZ9xXaMJs4ctOFe0OtFdZzIMOfHnEGJFDv8aPoxnmP0QwtHZ13cQFO7UlNqUlhAOdx8jl-qWowyR5xzTqFX-VOGCUOyM9IajOyW0EPBRjHtjeCLiI4fIJBY7bzl9dct_9QNraK3ccwD7SkMJwMlkycIR0V3rvn1nvg7MrJOaDZCiKOod_6M1NIYuKHTFcnvQu67E147YGnWVX2BVMlIiB6M9IUymjxfpxhdR-iu_t-i94NoLcxaaK8BaaVV9N2qbZhCmEXjQGF8qo-lEPRpMHlhJKN7I68L30kMkGexz9OjiOT0WAgpmxOxRhvRsl0bI8A26jvg4amkY_nc48lmRF9xdhYeArBZm0S3NFO2qVcyc4CaF1XAx6GkAd7k4MQi1TqEMz5hENEzFMQ516CIJAN3IzqEh0R-1B66uWm3Lm-XYsUUssKJ2BUPWm1toplfEguXVSk9eYhy_MG03LHDej-YUeLAa9X0Q\",\"p\":\"7Vu-VFXsiTjBG9rxWWnriVDA65fWq3TSa3-x7DCaKpAOOV7yl5Mw0t5Jnp37qJJCPTU_-F_inST5a7Uhp07i_StQNoL9WrIQh7io8zk2GuvuvJx5h85zq6VYq9pC-jbf3ymO-H05jsjKL_ixX0UxaH4ng8wtUgsVyGhWF5CFk-AHiyugsIbMzoG_jVVPMkLiT0gd8yjI1fwVGFp2ZY-_KHiVT5pF33uRu3_DP_i3isMwUMFlnSb7sfZutCG0riVgZveyiKyj-PNcVpZ9kXm82ZVw5bA_pm_FG05kvHU5XT28iQLdIm09zMedrrdhJ7CGMebUtBC0pbApDXa222e5sQ\",\"q\":\"7FHLM7XQXgB5u7lVhYU_lV5hydFGo_G-TDkFY3_jHWTHyd_7fRiEqHCFk5INNgJSDVwzp3eesNT7MkWS2JTxnBIWH--4myAqUfSp1ttFYWNm2AfDndht14-R1YE2fXWasgTmEXTgkpID4WmWeSTl7tv28vWs-CgzSWl--C3Vr7fHUOTFiPkg2pDTdznHVgzT-SFFFqnvIMS0C1VWf5C85cOigNQy7RWmtBqslPtv48LQ0hYkbMAVAZ3XYnatBmsj1bc8EVO1eFLNeP5hvkV7SuwGKdCQyvMkgBtW7gb9sJUfhAANfvIuSWaREegtPJ5KwspEhBhpsv9F2hOmDAYUww\",\"dp\":\"qryBrlyYZyTCE-1sCqtcWEwUWePA8Vh5PAaAz6suWkuBT9dynYGtbyGix0xRCDMdHrY9K8adVfiQyd9jM9xU_1O2wV98K09HALneHgcbWkY4Vsgfy4bAQcoQfJ3l6-KpKvfT9f7t9j2M4vD7ddJp9gY5Gl82gnui0aPruculqndONdfOIOz2Sd2fEmU5MKhX7jur_4to3DQWYIxB-lBqawxCKx6IAHf8nmkK4-te65v4Fz7mfyLZjmv7ues88r_EFo06iYHV-W_lDgv2izyMkd8jdLVRM8HWgQvk_oM8HkwYYF4E_4yhFbrJPDKA2nHqNd8bReN2bnDHNv4cDrsQIQ\",\"dq\":\"lymm4nfdRhPlyme9xb-7MU-DG7ZLCll7EYSz5raKT2YEyiQE2TsSuC_pscCNxMttMvCUdf31O0WxPLH2QaXceqmzD1Cm9Et55pyq-y2dTrNnuK4WueQUNvu2HC0f7taIUnEBvY7Wi8rswoZo4yrwDX8UkssFjmMgk0fxGM0wz8qtqxf7Jye8lTJooe4KjQd9m_FlIR8oP_yy8kDvKIAr5IjkbKXPwYnE7ZXWaSIAq18Vdh0Fxa6EgVk2ydwBx4ZHENC5kpfKD6JfnpKRcUU-nWkmdB7eT4OCCJP0YiOEqSxqUWQ7PcWqR_dcumiabxkN11XMx_ZZvk69nsZMw4osQw\",\"qi\":\"FI3Yl18HjFsJhUuxh-WynnSIK9gSegwVK_EEdtYIvOOpKkjJDMQlnRYzNDnVbkFk6LB_dCZfmahQh5z193zRzAXRklE03d_WECc_UrCXhqjrwXkF33ZMQVZtXpjvB3krCXhnOmSNFwQZQDlJcUcPcqbTLrUukMm1_kgJxfch5PFANLUPKL-2_A8W4llpKvTeRZlED0BIKeJOMgDSsZvqbzHZif6ahSDOFO6D_SwkTjZ_qZ0IU2gU5wXNa6QhjcZX80Y05ROJrrT1O-K6rc6SJMYL7TW2xHZ2YmC2GX54rLUtuMiXWxllP1dhMDHIoZv3K8yDKi6-0ciOmWiuQJKNbA\"}"
  },
  {
    "path": "apps/arweave/src/e2e/fixtures/wallets/wallet_c.json",
    "content": "{\"kty\":\"RSA\",\"ext\":true,\"e\":\"AQAB\",\"n\":\"qmUNnwxgCkuOlHxaDy89YKmh1tUPg3yID0O0ZSGGJJz-E_gQkLQuigq5zZZCLKnL5KGekpPeq66SNQHZmu2x2O_VbsMJ2rH2WlPqBU9NOgSYh-4MCm56sDeTPzYwGcN5RkTJqvrfvy3E0Ej1V2tPCITJYUINmMoaY9zUuhFQ2Az0sCLRnpyRu8pLmn5ccwmLEYXl3ncvzvgZT6wm-uhcVFRfCIBMe_3ccy7-i3uhS_3R1usq1CoDkyULKYSBeD3UcQwcmSyDaP8kU5bG0NQIzP10cX2_qwsUbm9f27e-Mxs7OPYLWZFW1Xl4WOkXEWYrmtSxJue7eDEEp_gBaSGSYy4_XtjPhM6LsOSs59NR-qVRchoT8q4w8PUfYwW1wc6ntgP2B931Z_oTK1Xzg6A8yIs8knJDCw0o78i6B6ersMoOHuPVt-NroxAIb3B_Ui97MUPb2UwGj1Lw4ige1R6XZy3rOL0IHaaVnaJQObXBCjDiurYqna7tHACANb7xlVXsVvhi9EliLBe5JCDphpmh2dfDj3nyMI0Ab7d5_BuefchnQwkw_kna3iykxnH5poftKEBFnF7EmjzABm4Xvh8YEVloYO18e19B9VUYJQcZbCOUtq9oD5xBeVt0X1u9waeAQUWLr9HIBg4CkgGv5BdQ5A50mUfVO8uinSh5O0VKau8\",\"d\":\"p86ow9GbgRRYSiYJ6hRwAO4Ro5fm16pXSgHK56PJ5lZv6dFlIEqtrbGqWF8Jt5dy2-02cGNPlTR4XAVHDrRqO3v9jZBSPmfTuUf44Zl4h-ugE9TfPuh5r6hVZ08-PU7E1SAohicNEB_zKXoPNhhhZ0ow0OLtSPLnIwUowOheOToY7V3bR0mINUd6LWfQlimLdfrE0fH4Mxz_n0fGUFqlJme424wf_vHQxUq8nFCr3tTp8ONmBg3Kk4Z5T12vgm_1LAE6ixLV_KSuhcazGw56rd3qwxXaXe5forACHuPXfGS5BQnlFZFKM9Yr5bgRVanQK57z23qXqQ56KnpKJPrCsc6oroV3U-RvM9Nj_yLStwyeZCsGt0bd_TOxuTkwG9M04KRNM2qx4RuFhBQHiXH7yBe_QZHlxY1ou7K8jBmkNpp73gMOA01szjkOO9KAQI1WbA4BPyIzW428a9Or0H0W3x99Q6u2uMMt9mld3DK_vkRHc8JAXfTEfo43CeS4Gr7RtFel8WZ2n8tXKaAYcM8sGUoifiL-iI1dGKGysBsR8uMafOl4pOJLy7LouzNIvhBu0n-hDxOLbXETQ3y_GCZiuMWKdwqXqJ_BMCOQA1hcr9_sxcpalCy48dtEiQXtKDLt7qFtR8LXgBAiW4yjlkdkwGMx1rkwJMfwPcXNzCqWLQ\",\"p\":\"vM5SLTPsnVilEwgXs5YUhHGd2fOb8xdQIHPl2_3mn2YntxenS1qDoCYs6FO8hEhE_OEVCf7xaJ0TAEfX_0WgX_-Tptb3XW2GBX6vHTLs-zyO-XulKKyDpKxNTT4W7f4QW486rdDsDG40xxNZkv-lDShFe8h2WuIZ9_EhzHZSrbDOBBg4yM2ERR1h0wv8yM2JjTB5G-tpuhrwdJ_6jeiXsuuO85rVuXuveiSAMIH2AYz1Z0STIGhqgL93uxw29JHg63Q1PaqpyCo9Td0kxyqQX3tN4ZRDZAKj4nO5j_u1Sx2n_TLBzpgkH_Xy1F9y9_IcXbm2AiqnJWnJuVS5viQenQ\",\"q\":\"5wlUW8FjI3RvwL1QedfFp15ZVPaYc8N8sWH9U5tITK6G6IQ8UWtKphPO_gzxJFtP77sB_70tcf71u3UWY85vzuyQiQAIyjhJAWMupgYxvXn6Up8zUqgIMEUJxzhgVQ5319sWgpAL47YE7zKcy8BHdhesdaPvzIRQxlTHA57kXWur6X6f7QZiP9IySJwaWaXtjPxI07xx1l_D4dbPK6EFSrmW0Q92kC6koqNHiuNKQVhSpSnZ7EkT4VqNjsIvojPEBmTj-4bxEEObV-b-DGDppNXBnTBm2-gKpwsV4GdyGdK8wP1gZTIQtRBm_1GHBe9v8O-ZIGOaQ_NPiYZoQanT-w\",\"dp\":\"IKEPdpxoofC15ooZfoHLXfA8tXPyWZqH0HP3H4PLnXSMHIpL8SvdX4n5bNU72SicM4-6kRWsJsYuiHfiDk28H5sNq2GvMkhBRyXToZoxdmHK27bQniziO01DtruqPssPjKM-ItfeU2-gU182tb7UiWeSSogkXCSDFGRp0OoJ89aAZBjDh4BtAXzIcS67KwDKasobxAV1KiKJt74GEQxHWzZ2aAc0NG_5rYQtWzS6jR4NMyGYw5sH_OQaDw4bOT0Uv9w_bz7VRLB4E8LKHlluxfGLThbPZrNGG1aglQ-ND0Q6yflBoTCN3bAlnSo5tjvzRwdXOxyf8klMAWlxCDk5yQ\",\"dq\":\"zgHGo65TvPiE8UKdcJeSmcOKOjVMCOVF2VE7toIevKleiBPpSNw3itDc4DEgEEAPjf6dMLE5xY0HBijIVyRrFAJiepZ6P_5iMoeCv-2ECqSqLWPhOpG0A3570pUVaKJnACVN9AuHXnsd-T-TCicgUU-YqqkMGLve3ooXjsXucNKiTqhm582qa6f8yDvRTyCiKfWG5q4Af5uSqVyGDCwe8Nt9fFqiaLv-dzrKfzBeNNgRkU45D_S1clrxIFtMaABqiR0LIGvZpZvy9zV0UAtWKnGjm4reHLXSUdKTpi33UslTH26Oto0m0pyWipDiqcsvcJHkYzoNAwwAXutnKS3KYw\",\"qi\":\"F_9EOpmQ3q4-NliFmm0XFUNSp4nNeEJOwWcfCTrlF2ktMtojlt7__7PuBN6uILWQwqfgDsaDjSYQxiCSICMYQve3TqJxOToMv10EAjcQgyZkNiOmuJKYuG8GhRkB9PWRPeATCSuUJrL2D7zFm-ROcfEo6TblktmyEt1bRMIVqaFDuu8tFA31HKvKQxr3N8umkOHBa_RSxhn0dRmT3horCWwBKbPy7RV1JM7h5qSKbOJxZf2PgzaTZ3KTkoPRov4_Po-anTg2yOsIdqh92PrbN82WO_IJNhhlXWsWCYvNBlOTPjmkCQhUQy77qtMwpVOi60XoITIyq0b9D8LO1c6d7Q\"}"
  },
  {
    "path": "apps/arweave/src/e2e/fixtures/wallets/wallet_d.json",
    "content": "{\"kty\":\"RSA\",\"ext\":true,\"e\":\"AQAB\",\"n\":\"y3dvwHO8Bclzz58uwX5BMRjbB3i9S3ODEGW8b8mrsnWjm0m104ymNVXEtfgcaRfxit4Odt4MtQd5TrNzVQdmYruOZ_P7frXXCc8kqjGfjNLm96RqTDqVqpl5s8SuTlNOjnDwEdNrEewq2kCWR4roUxHTCc3DUOM6PXyZhlmr2zmdjCmn8L0eTxije7p5QdObW_64kycDbwXcyhOQrZ_588aXFFfTMogXCXk0lmRu_YvLieRmT-nIw7uLKhSjsMwCkEO4DD5lGb9_BE2kl8BGp9kb_Sqebh75kE9IL3b6bU5iABqDRwcPyiIzjKYrSDYpm_8dq6rT2ylw8TMombh9HlLSb-nijFftxW34nC5ue-g5pien-Cwk9Bfwsd-4Gj1nx03sBMbOwdPUmZ6gY0vqW65TOJVhSrD2GO9fIX8b9KdoU15rybCKsiDEWBiaq045vtyv1W78JYonZ5p7nmPzCRieyJuuHRCT3TnLRyobG6eguVQbEdhCp5FWmITT4lDLl5TqHPoqJPxINGHwFtZvqmKw5p1whM4hWCs3A7QMUMaJqZ0vAEOUBTEIph1c17tVuJYADNgKRwZGkXz-JvDH5o8vB01P-BRU5CWlGjzsCfHUvND-zuvlhkoCWwaUrlLSwQZ2ZqIKuAFs3hA-dRYQRcx7OMSJUYiACtXeSfaQAxs\",\"d\":\"Em3PAW96KEwG4VdZtMzqure1nwegnaToyiNs3fM2SgO9veL_RRoIM-yA1LqUWDCDAED8rmeOXxc-NZKrb5gr_eVfEKtYrDFsOMc6WvADs42mved2eVEVHU6pZ075Or7w7pXsKLEtkYICn6IZ-oDqahvDMbAhcMIkFE2k2jZlCoY9buSXAYcfp6pjpGFPelbgS4TW0v1Foli1ltgO0qsayKnEJWOPDZSmAYWo7bZLF0wCM4sseTCDrrbd9AHKkcjosohvsywznBFsP8eIkPYpcCqKDnQ9xVuo3xlPQH1WUXA4ECpWmahaFcTjRmoGoZPGUQradSIT7lXilPY9Ry8epecbaMdsu4yKcxLl5hF3_zTogtTi1VUnVaDHsDkxu3l3HBvly0OrchVPt44vmyvmsrhHrUUQCvqrKE3CV_00PzRBTzsUSuJLCQeDchTmh2azKoerojk4dqYA0I276mo8B19kEI1G8LZVp1sCaSWAtHzOyQn_bdZnl2o7tjTNzflqmzFtcvl-p8ocvvLCjfMXzY7aC8lNgqNhvTH-lcgLdSIj2x2lnJ2ThgHwjILSSk_R6WC6t1CE96l2akwCkIWsfLnCoxt_XpPw14OxEB4efD_dVI-KuzS-l5CfHAe225GgHFVAB2n-Zetwbr6bL3fflt5rZMf3VWQKdWiwcPuijYE\",\"p\":\"7L8bLfK8h0ZbJA0Q2A5AltWj5cMbYf5r9AYAiMs8-bo7RnPMT02dKB0KKmCbMJFe8uXTZ6d24-bKBYSjHFJAoONkbe35IdKG_6mUrYRN__N9-__p08YoEE2FdiSDgG6XqlK7DzpPfhPS5a4fM5g8V-hLFXqJb69LfPPARmlaaQoHJEZ05XDaTQwxjtuWFUJ8HASiwvtdFE_uU1i3gfNJswKgfKTbB614-DpNRHtdDMeuExM6_WMmqnwzUnvgPvMWbojHcXpkd0XrCh8Wdz-TIaprvQtHO1NO-WdNPL3wGvceC_6aoNRHdAP5XJIL2wibgBzdmCNCuMq2RRCp6xsNBw\",\"q\":\"3AN3HP4skZpSQJAS9Hm7y-HzMGYllorPjZtL3hYQiJG0nG7w_GPp_F9fPh5Bvs2pSG852Pftj2H7uchOGn9V0KzN-U9TYOwfSffZI6SrZfg6Mzl_wLOUqHN6Aq5jE4qVgoEv_fvTNcMSlI4kK-iQwW8JHzXuR-kEXhWK8EUmTNHtExzpVYQascyGpBfsJ74O8BdzUh8TrTVxRcO9w6PYQg_Hgn347001CvtEAEOWMSIADBBar1dro6LfyVm20FJLX3gnGSPOH3-kuozAL07dAh_GhHTKpRuwyySDfR8N5T21YL-0HJAlmu-zD7V25jKASuQVVffEfOldZnWTGnwoTQ\",\"dp\":\"zADtkc1-SW8F8G3V2ueFHrSf08gpW2raaV-WrEm9lE-27kGwh5GQ39UOQnAWqmZKFDKY1dQHbeEcql6eEzSJfloT22pZ6Jw6OipN9KtybyDJqhHe0t8I_OtgGurh6hTiWiGKEVgk0baRX9uIBXSkYvfHY43Ayl2aReThBYuZHbRHbSnNZzy0z_m25qwvishMm_QesLfbgDpUWruy_abAFiIoWt_P4bDI8dWDaYSILRAP314N0fTTh8sYinY2SOg9pyfz_MQDuIemPoWFXWKKDVOGHVOPoP5rqhwrATGGqiXRXXKamgXyQHWANhWfY7HqFR5KkOOphgUfxSnT0cTwlw\",\"dq\":\"CYRw26U3IllNo5NX7pFxiUFN9tMEXz3D-rk0D_heYLoE2RuHezOLRKqPgS1n5Kwa3ZJKK1OWSDSR4hiDIGxPtwYyps1CqxerxtRc5UjTTUbupZagKyLZlGviZElM6eR90TZrcA47tcCphhmcAPY_hM6b02jO1PeEg9lkuD4ViQ8vtTrz8QoU6YoSbPjH83QqS0KIb43-mOiN7Nmp1NO6oCj0lXWDlj59w-rYpzZFQfzZiawPcDRU6LA8BAbIfLyCnC-jaVf-K6im5JcAHUvJDbV4LfSra3cGL9N1iK0WOctwlC3WycGGjuw9j7lm2lBm8lZpgd2E925U5wDBC01BpQ\",\"qi\":\"ImTCUAjhh2zfP8JGgRmcftSQ-sqn1jchpNpT548HBO4Bcm4rAlRyOQDCV3Jjqyb3UnbHfdx278f0g4jWHOeHsfw5VZ1BKyJ0uDLnYlUuXQ9WO4Rig0j7iUJjR7xV5y6tQFXbZTqSHnEPp9UQJFKhjoBdNmO7IC-24RfA2aahonsRFbvqDOLdGqYKHJhajCa0IFPqBSkkOhPshMHDdKn5iX0AyT2-YTeYXL1u17H_57Hbry3xjxQsdnAcB8uZy4vwMLiac2-Yy9GmSKP-PoP1O2jCeLbBHDbHQW_MdVw0dpbqwLHDwMpU66e8TJeU5-uzn33qZSNOc0nZCYeoAGoxUw\"}"
  },
  {
    "path": "apps/arweave/src/rsa_pss.erl",
    "content": "%%%-------------------------------------------------------------------\n%%% @author Andrew Bennett <andrew@pixid.com>\n%%% @copyright 2014-2015, Andrew Bennett\n%%% @doc\n%%% Distributed under the Mozilla Public License v2.0.\n%%% Original available at:\n%%% https://github.com/potatosalad/erlang-crypto_rsassa_pss\n%%% @end\n%%% Created :  20 Jul 2015 by Andrew Bennett <andrew@pixid.com>\n%%% Modified:  17 Nov 2017 by The Arweave Team <team@arweave.org>\n%%%-------------------------------------------------------------------\n-module(rsa_pss).\n\n-include_lib(\"public_key/include/public_key.hrl\").\n\n%% API\n-export([sign/3]).\n-export([sign/4]).\n-export([verify/4]).\n\n%% Types\n-type rsa_public_key()  :: #'RSAPublicKey'{}.\n-type rsa_private_key() :: #'RSAPrivateKey'{}.\n-type rsa_digest_type() :: 'md5' | 'sha' | 'sha224' | 'sha256' | 'sha384' | 'sha512'.\n\n-define(PSS_TRAILER_FIELD, 16#BC).\n\n-export([verify_legacy/4]).\n\n%%====================================================================\n%% API functions\n%%====================================================================\n\n-spec sign(Message, DigestType, PrivateKey) -> Signature\n\twhen\n\t\tMessage    :: binary() | {digest, binary()},\n\t\tDigestType :: rsa_digest_type() | atom(),\n\t\tPrivateKey :: rsa_private_key(),\n\t\tSignature  :: binary().\nsign(Message, DigestType, PrivateKey) when is_binary(Message) ->\n\tsign({digest, crypto:hash(DigestType, Message)}, DigestType, PrivateKey);\nsign(Message={digest, _}, DigestType, PrivateKey) ->\n\tSaltLen = byte_size(crypto:hash(DigestType, <<>>)),\n\tSalt = crypto:strong_rand_bytes(SaltLen),\n\tsign(Message, DigestType, Salt, PrivateKey).\n\n-spec sign(Message, DigestType, Salt, PrivateKey) -> Signature\n\twhen\n\t\tMessage    :: binary() | {digest, binary()},\n\t\tDigestType :: rsa_digest_type() | atom(),\n\t\tSalt       :: binary(),\n\t\tPrivateKey :: rsa_private_key(),\n\t\tSignature  :: binary().\nsign(Message, DigestType, Salt, PrivateKey) when is_binary(Message) ->\n\tsign({digest, crypto:hash(DigestType, Message)}, DigestType, Salt, PrivateKey);\nsign({digest, Digest}, DigestType, Salt, PrivateKey=#'RSAPrivateKey'{modulus=N}) ->\n\tDigestLen = byte_size(Digest),\n\tSaltLen = byte_size(Salt),\n\tPublicBitSize = int_to_bit_size(N),\n\tPrivateByteSize = (PublicBitSize + 7) div 8,\n\tPublicByteSize = int_to_byte_size(N),\n\tcase PublicByteSize < (DigestLen + SaltLen + 2) of\n\t\tfalse ->\n\t\t\tDBLen = PrivateByteSize - DigestLen - 1,\n\t\t\tM = << 0:64, Digest/binary, Salt/binary >>,\n\t\t\tH = crypto:hash(DigestType, M),\n\t\t\tDB = << 0:((DBLen - SaltLen - 1) * 8), 1, Salt/binary >>,\n\t\t\tDBMask = mgf1(DigestType, H, DBLen),\n\t\t\tMaskedDB = normalize_to_key_size(PublicBitSize, crypto:exor(DB, DBMask)),\n\t\t\tEM = << MaskedDB/binary, H/binary, ?PSS_TRAILER_FIELD >>,\n\t\t\tDM = pad_to_key_size(PublicByteSize, dp(EM, PrivateKey)),\n\t\t\tDM;\n\t\ttrue ->\n\t\t\terlang:error(badarg, [{digest, Digest}, DigestType, Salt, PrivateKey])\n\tend.\n\n-spec verify(Message, DigestType, Signature, PublicKey) -> boolean()\n\twhen\n\t\tMessage    :: binary() | {digest, binary()},\n\t\tDigestType :: rsa_digest_type() | atom(),\n\t\tSignature  :: binary(),\n\t\tPublicKey  :: rsa_public_key().\nverify(Message, DigestType, Signature, PublicKey) when is_binary(Message) ->\n\tverify({digest, crypto:hash(DigestType, Message)}, DigestType, Signature, PublicKey);\nverify({digest, Digest}, DigestType, Signature, PublicKey=#'RSAPublicKey'{modulus=N}) ->\n\tDigestLen = 
byte_size(Digest),\n\tPublicBitSize = int_to_bit_size(N),\n\tPrivateByteSize = (PublicBitSize + 7) div 8,\n\tPublicByteSize = int_to_byte_size(N),\n\tSignatureSize = byte_size(Signature),\n\tcase PublicByteSize < DigestLen + 2 of true -> false; false ->\n\tcase PublicByteSize =:= SignatureSize of\n\t\ttrue ->\n\t\t\tSignatureNumber = binary:decode_unsigned(Signature, big),\n\t\t\tcase SignatureNumber >= 0 andalso SignatureNumber < N of\n\t\t\t\ttrue ->\n\t\t\t\t\tDBLen = PrivateByteSize - DigestLen - 1,\n\t\t\t\t\tEM = pad_to_key_size(PrivateByteSize, ep(Signature, PublicKey)),\n\t\t\t\t\tcase binary:last(EM) of\n\t\t\t\t\t\t?PSS_TRAILER_FIELD ->\n\t\t\t\t\t\t\tMaskedDB = binary:part(EM, 0, byte_size(EM) - DigestLen - 1),\n\t\t\t\t\t\t\tH = binary:part(EM, byte_size(MaskedDB), DigestLen),\n\t\t\t\t\t\t\tDBMask = mgf1(DigestType, H, DBLen),\n\t\t\t\t\t\t\tDB = normalize_to_key_size(PublicBitSize, crypto:exor(MaskedDB, DBMask)),\n\t\t\t\t\t\t\tcase binary:match(DB, << 1 >>) of\n\t\t\t\t\t\t\t\t{Pos, Len} ->\n\t\t\t\t\t\t\t\t\tPS = binary:decode_unsigned(binary:part(DB, 0, Pos)),\n\t\t\t\t\t\t\t\t\tcase PS =:= 0 of\n\t\t\t\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\t\t\t\tSalt = binary:part(DB, Pos + Len, byte_size(DB) - Pos - Len),\n\t\t\t\t\t\t\t\t\t\t\tM = << 0:64, Digest/binary, Salt/binary >>,\n\t\t\t\t\t\t\t\t\t\t\tHOther = crypto:hash(DigestType, M),\n\t\t\t\t\t\t\t\t\t\t\tH =:= HOther;\n\t\t\t\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t\t\t\tfalse\n\t\t\t\t\t\t\t\t\tend;\n\t\t\t\t\t\t\t\tnomatch ->\n\t\t\t\t\t\t\t\t\tfalse\n\t\t\t\t\t\t\tend;\n\t\t\t\t\t\t_BadTrailer ->\n\t\t\t\t\t\t\tfalse\n\t\t\t\t\tend;\n\t\t\t\t_ ->\n\t\t\t\t\tfalse\n\t\t\tend;\n\t\tfalse ->\n\t\t\tfalse\n\tend end.\n\nverify_legacy(Message, DigestType, Signature, PublicKey) when is_binary(Message) ->\n\tverify_legacy({digest, crypto:hash(DigestType, Message)}, DigestType, Signature, PublicKey);\nverify_legacy({digest, Digest}, DigestType, Signature, PublicKey=#'RSAPublicKey'{modulus=N}) ->\n\tDigestLen = byte_size(Digest),\n\tPublicBitSize = int_to_bit_size(N),\n\tPrivateByteSize = PublicBitSize div 8,\n\tPublicByteSize = int_to_byte_size(N),\n\tSignatureSize = byte_size(Signature),\n\tcase PublicByteSize < DigestLen + 2 of true -> false; false ->\n\tcase PublicByteSize =:= SignatureSize of\n\t\ttrue ->\n\t\t\tSignatureNumber = binary:decode_unsigned(Signature, big),\n\t\t\tcase SignatureNumber >= 0 andalso SignatureNumber < N of\n\t\t\t\ttrue ->\n\t\t\t\t\tDBLen = PrivateByteSize - DigestLen - 1,\n\t\t\t\t\tEM = pad_to_key_size(PrivateByteSize, ep(Signature, PublicKey)),\n\t\t\t\t\tcase binary:last(EM) of\n\t\t\t\t\t\t?PSS_TRAILER_FIELD ->\n\t\t\t\t\t\t\tMaskedDB = binary:part(EM, 0, byte_size(EM) - DigestLen - 1),\n\t\t\t\t\t\t\tH = binary:part(EM, byte_size(MaskedDB), DigestLen),\n\t\t\t\t\t\t\tDBMask = mgf1(DigestType, H, DBLen),\n\t\t\t\t\t\t\tDB = normalize_to_key_size(PublicBitSize, crypto:exor(MaskedDB, DBMask)),\n\t\t\t\t\t\t\tcase binary:match(DB, << 1 >>) of\n\t\t\t\t\t\t\t\t{Pos, Len} ->\n\t\t\t\t\t\t\t\t\tPS = binary:decode_unsigned(binary:part(DB, 0, Pos)),\n\t\t\t\t\t\t\t\t\tcase PS =:= 0 of\n\t\t\t\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\t\t\t\tSalt = binary:part(DB, Pos + Len, byte_size(DB) - Pos - Len),\n\t\t\t\t\t\t\t\t\t\t\tM = << 0:64, Digest/binary, Salt/binary >>,\n\t\t\t\t\t\t\t\t\t\t\tHOther = crypto:hash(DigestType, M),\n\t\t\t\t\t\t\t\t\t\t\tH =:= HOther;\n\t\t\t\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t\t\t\tfalse\n\t\t\t\t\t\t\t\t\tend;\n\t\t\t\t\t\t\t\tnomatch 
->\n\t\t\t\t\t\t\t\t\tfalse\n\t\t\t\t\t\t\tend;\n\t\t\t\t\t\t_BadTrailer ->\n\t\t\t\t\t\t\tfalse\n\t\t\t\t\tend;\n\t\t\t\t_ ->\n\t\t\t\t\tfalse\n\t\t\tend;\n\t\tfalse ->\n\t\t\tfalse\n\tend end.\n\n%%%-------------------------------------------------------------------\n%%% Internal functions\n%%%-------------------------------------------------------------------\n\n%% @private\ndp(B, #'RSAPrivateKey'{modulus=N, privateExponent=E}) ->\n\tcrypto:mod_pow(B, E, N).\n\n%% @private\nep(B, #'RSAPublicKey'{modulus=N, publicExponent=E}) ->\n\tcrypto:mod_pow(B, E, N).\n\n%% @private\nint_to_bit_size(I) ->\n\tint_to_bit_size(I, 0).\n\n%% @private\nint_to_bit_size(0, B) ->\n\tB;\nint_to_bit_size(I, B) ->\n\tint_to_bit_size(I bsr 1, B + 1).\n\n%% @private\nint_to_byte_size(I) ->\n\tint_to_byte_size(I, 0).\n\n%% @private\nint_to_byte_size(0, B) ->\n\tB;\nint_to_byte_size(I, B) ->\n\tint_to_byte_size(I bsr 8, B + 1).\n\n%% @private\nmgf1(DigestType, Seed, Len) ->\n\tmgf1(DigestType, Seed, Len, <<>>, 0).\n\n%% @private\nmgf1(_DigestType, _Seed, Len, T, _Counter) when byte_size(T) >= Len ->\n\tbinary:part(T, 0, Len);\nmgf1(DigestType, Seed, Len, T, Counter) ->\n\tCounterBin = << Counter:8/unsigned-big-integer-unit:4 >>,\n\tNewT = << T/binary, (crypto:hash(DigestType, << Seed/binary, CounterBin/binary >>))/binary >>,\n\tmgf1(DigestType, Seed, Len, NewT, Counter + 1).\n\n%% @private\nnormalize_to_key_size(_, <<>>) ->\n\t<<>>;\nnormalize_to_key_size(Bits, _A = << C, Rest/binary >>) ->\n\tSH = (Bits - 1) band 16#7,\n\tMask = case SH > 0 of\n\t\tfalse ->\n\t\t\t16#FF;\n\t\ttrue ->\n\t\t\t16#FF bsr (8 - SH)\n\tend,\n\tB = << (C band Mask), Rest/binary >>,\n\tB.\n\n%% @private\npad_to_key_size(Bytes, Data) when byte_size(Data) < Bytes ->\n\tpad_to_key_size(Bytes, << 0, Data/binary >>);\npad_to_key_size(_Bytes, Data) ->\n\tData.\n"
  },
  {
    "path": "apps/arweave/src/secp256k1_nif.erl",
    "content": "-module(secp256k1_nif).\n-export([sign/2, ecrecover/2]).\n\n-on_load(init/0).\n\n-define(SigUpperBound, binary:decode_unsigned(<<16#7FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF5D576E7357A4501DDFE92F46681B20A0:256>>)).\n-define(SigDiv, binary:decode_unsigned(<<16#FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141:256>>)).\n\ninit() ->\n\tPrivDir = code:priv_dir(arweave),\n\tok = erlang:load_nif(filename:join([PrivDir, \"secp256k1_arweave\"]), 0).\n\nsign_recoverable(_Digest, _PrivateBytes) ->\n\terlang:nif_error(nif_not_loaded).\n\nrecover_pk_and_verify(_Digest, _Signature) ->\n\terlang:nif_error(nif_not_loaded).\n\nsign(Msg, PrivBytes) ->\n\tDigest = crypto:hash(sha256, Msg),\n\t{ok, Signature} = sign_recoverable(Digest, PrivBytes),\n\tSignature.\n\necrecover(Msg, Signature) ->\n\tDigest = crypto:hash(sha256, Msg),\n\tcase recover_pk_and_verify(Digest, Signature) of\n\t\t{ok, true, PubKey} -> {true, PubKey};\n\t\t{ok, false, _PubKey} -> {false, <<>>};\n\t\t{error, _Reason} -> {false, <<>>}\n\tend.\n"
  },
  {
    "path": "apps/arweave/src/user_default.erl",
    "content": "%%\n%% This file is loaded upon starting Erlang REPL, and loads all the records\n%% from user_default.hrl file.\n%% Another possibility is to add some broadly-user functions here: these\n%% functions will be useable from the REPL as first-class commands. As an\n%% example, running the `config().` in the REPL will return current node config.\n%%\n\n-module(user_default).\n-include_lib(\"arweave/include/user_default.hrl\").\n-compile([export_all, nowarn_export_all]).\n\n\n\nconfig() ->\n  {ok, Config} = arweave_config:get_env(),\n  Config.\n"
  },
  {
    "path": "apps/arweave/test/ar_base64_compatibility_tests.erl",
    "content": "-module(ar_base64_compatibility_tests).\n\n%%% The compatibility tests to assert the used\n%%% Base64URL encoding and decoding functions are\n%%% compatible with base64url 1.0.1.\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\nback_to_back_encode_test() ->\n\tInputs = [\n\t\t42,\n\t\tfoo,\n\t\t\"foo\",\n\t\t{foo},\n\t\t<< \"zany\" >>,\n\t\t<< \"zan\"  >>,\n\t\t<< \"za\"   >>,\n\t\t<< \"z\"\t>>,\n\t\t<<\t\t>>,\n\t\tbinary:copy(<<\"0123456789\">>, 100000)\n\t],\n\tlists:foreach(\n\t\tfun(Input) ->\n\t\t\tio:format(\"Running input: ~p...~n\", [Input]),\n\t\t\tassert_encode(Input)\n\t\tend,\n\t\tInputs\n\t).\n\n\nback_to_back_decode_test_() ->\n\t{timeout, 10, fun test_back_to_back_decode/0}.\n\ntest_back_to_back_decode() ->\n\tInputs = [\n\t\t42,\n\t\tfoo,\n\t\t\"foo\",\n\t\t{foo},\n\t\t<< \".\" >>,\n\t\t<< \"+\" >>,\n\t\t<< \"!\" >>,\n\t\t<< \"/\" >>,\n\t\t<< \"Σ\">>,\n\t\t<< \"a\" >>,\n\t\t<< \"aa\" >>,\n\t\t<< \"aaa\" >>,\n\t\t<< \"aaaa\" >>,\n\t\t<< \"aaaaa\" >>,\n\t\t<< \"aaaaaa\" >>,\n\t\t<< \"aaaaaaa\" >>,\n\t\t<< \"aaaaaaaa\" >>,\n\t\t<< \"aaaaaaaaa\" >>,\n\t\t<< \"aaaaaaaaaa\" >>,\n\t\t<< \"!~[]\" >>,\n\t\t<< \"emFueQ==\" >>,\n\t\t<< \"emFu\"\t >>,\n\t\t<< \"emE=\"\t >>,\n\t\t<< \"eg==\"\t >>,\n\t\t<<\t\t\t>>,\n\t\t<< \" emFu\" >>,\n\t\t<< \"em Fu\" >>,\n\t\t<< \"emFu \" >>,\n\t\t<< \"\t\"  >>,\n\t\t<< \"   =\"  >>,\n\t\t<< \"  ==\"  >>,\n\t\t<< \"=   \"  >>,\n\t\t<< \"==  \"  >>,\n\t\t<< \"\\temFu\">>,\n\t\t<< \"\\tem  F  u\">>,\n\t\t<< \"em  F  \\t u\">>,\n\t\t<< \"em  F  \\tu\">>,\n\t\t<< \"e\\nm\\nF\\nu\\n\" >>,\n\t\t<< \"e\\nm\\nF\\nu\" >>,\n\t\t<< \"e\\nm\\nF\\nu \" >>,\n\t\t<< \"AAAA\" >>,\n\t\t<< \"AAA=\" >>,\n\t\t<< \"AAAA=\" >>,\n\t\t<< \"AAA\"  >>,\n\t\t<< \"AA==\" >>,\n\t\t<< \"AA=\"  >>,\n\t\t<< \"AA\"   >>,\n\t\t<< \"A==\"  >>,\n\t\t<< \"A=\"   >>,\n\t\t<< \"A\"\t>>,\n\t\t<< \"==\"   >>,\n\t\t<< \"=\"\t>>,\n\t\t<< \"=a\"   >>,\n\t\t<< \"==a\"  >>,\n\t\t<< \"===a\" >>,\n\t\t<< \"====a\" >>,\n\t\t<< \"=====a\" >>,\n\t\t<< \"=======a\" >>,\n\t\t<< \"========a\" >>,\n\t\t<<\t\t>>,\n\t\t<<\"PDw/Pz8+Pg==\">>,\n\t\t<<\"PDw:Pz8.Pg==\">>,\n\t\tbinary:copy(<<\"a\">>, 1000000),\n\t\tbinary:copy(<<\"a\">>, 1000001),\n\t\tbinary:copy(<<\"a\">>, 1000002),\n\t\tbinary:copy(<<\"a\">>, 1000003),\n\t\tbinary:copy(<<\"a\">>, 1000004),\n\t\tbinary:copy(<<\"a\">>, 1000005),\n\t\tbinary:copy(<<\"a\">>, 1000006),\n\t\tbinary:copy(<<\"a\">>, 1000007),\n\t\tbinary:copy(<<\"a\">>, 1000008),\n\t\tbinary:copy(<<\"a\">>, 1000009),\n\t\tbinary:copy(<<\"0123456789\">>, 100000),\n\t\tbinary:copy(<<\"0123456789_-\">>, 100000)\n\t],\n\tlists:foreach(\n\t\tfun(Input) ->\n\t\t\tio:format(\"Running input: ~p...~n\", [Input]),\n\t\t\tassert_decode(Input)\n\t\tend,\n\t\tInputs\n\t).\n\nassert_encode(Input) ->\n\tcase catch encode(Input) of\n\t\t{'EXIT', {badarg, _}} ->\n\t\t\t?assertException(error, badarg, ar_util:encode(Input));\n\t\t{'EXIT', {function_clause, _}} ->\n\t\t\t?assertException(error, badarg, ar_util:encode(Input));\n\t\t{'EXIT', {badarith, _}} ->\n\t\t\t?assertException(error, badarg, ar_util:encode(Input));\n\t\t{'EXIT', {missing_padding, _}} ->\n\t\t\t?assertException(error, badarg, ar_util:encode(Input));\n\t\tOutput ->\n\t\t\t?assertEqual(Output, ar_util:encode(Input))\n\tend.\n\nassert_decode(Input) ->\n\tcase catch decode(Input) of\n\t\t{'EXIT', {badarg, _}} ->\n\t\t\t?assertException(error, badarg, ar_util:decode(Input));\n\t\t{'EXIT', {function_clause, _}} ->\n\t\t\t?assertException(error, badarg, ar_util:decode(Input));\n\t\t{'EXIT', {badarith, _}} 
->\n\t\t\t?assertException(error, badarg, ar_util:decode(Input));\n\t\t{'EXIT', {missing_padding, _}} ->\n\t\t\t?assertException(error, badarg, ar_util:decode(Input));\n\t\tOutput ->\n\t\t\t?assertEqual(Output, ar_util:decode(Input))\n\tend.\n\nencode(Bin) when is_binary(Bin) ->\n    << << (urlencode_digit(D)) >> || <<D>> <= base64:encode(Bin), D =/= $= >>;\nencode(L) when is_list(L) ->\n    encode(iolist_to_binary(L));\nencode(_) ->\n    error(badarg).\n\ndecode(Bin) when is_binary(Bin) ->\n    Bin2 = case byte_size(Bin) rem 4 of\n        2 -> << Bin/binary, \"==\" >>;\n        3 -> << Bin/binary, \"=\" >>;\n        _ -> Bin\n    end,\n    base64:decode(<< << (urldecode_digit(D)) >> || <<D>> <= Bin2 >>);\ndecode(L) when is_list(L) ->\n    decode(iolist_to_binary(L));\ndecode(_) ->\n    error(badarg).\n\nurlencode_digit($/) -> $_;\nurlencode_digit($+) -> $-;\nurlencode_digit(D)  -> D.\n\nurldecode_digit($_) -> $/;\nurldecode_digit($-) -> $+;\nurldecode_digit(D)  -> D.\n"
  },
  {
    "path": "apps/arweave/test/ar_canary.erl",
    "content": "%%%===================================================================\n%%% @doc A test that always fail.\n%%% @end\n%%%===================================================================\n-module(ar_canary).\n-include_lib(\"eunit/include/eunit.hrl\").\n\ncanary_test_() ->\n    ?assert(4 =:= 5).\n"
  },
  {
    "path": "apps/arweave/test/ar_config_tests.erl",
    "content": "-module(ar_config_tests).\n\n-include_lib(\"ar_consensus.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\nparse_test_() ->\n\t{timeout, 60, fun test_parse_config/0}.\n\nvalidate_test_() ->\n\t[\n\t\t{timeout, 60, fun test_validate_repack_in_place/0},\n\t\t{timeout, 60, fun test_validate_cm_pool/0},\n\t\t{timeout, 60, fun test_validate_storage_modules/0},\n\t\t{timeout, 60, fun test_validate_cm/0}\n\t].\n\ntest_parse_config() ->\n\tExpectedMiningAddr = ar_util:decode(<<\"LKC84RnISouGUw4uMQGCpPS9yDC-tIoqM2UVbUIt-Sw\">>),\n\t{ok, ParsedConfig} = ar_config:parse(config_fixture()),\n\tExpectedBlockHash = ar_util:decode(\n\t\t\t<<\"lfoR_PyKV6t7Z6Xi2QJZlZ0JWThh0Ke7Zc5Q82CSshUhFGcjiYufP234ph1mVofX\">>),\n\tPartitionSize = ar_block:partition_size(),\n\t?assertMatch(#config{\n\t\tinit = true,\n\t\tport = 1985,\n\t\tmine = true,\n\t\tpeers = [\n\t\t\t{188,166,200,45,1984},\n\t\t\t{188,166,192,169,1984},\n\t\t\t{163,47,11,64,1984},\n\t\t\t{159,203,158,108,1984},\n\t\t\t{159,203,49,13,1984},\n\t\t\t{139,59,51,59,1984},\n\t\t\t{138,197,232,192,1984},\n\t\t\t{46,101,67,172,1984}\n\t\t],\n\t\tlocal_peers = [\n\t\t\t{192, 168, 2, 3, 1984},\n\t\t\t{172, 16, 10, 11, 1985}\n\t\t],\n\t\tsync_from_local_peers_only = true,\n\t\tblock_gossip_peers = [{159,203,158,108,1984}, {150,150,150,150, 1983}],\n\t\tdata_dir = \"some_data_dir\",\n\t\tlog_dir = \"log_dir\",\n\t\tstorage_modules = [{PartitionSize, 0, unpacked},\n\t\t\t\t{PartitionSize, 2, {spora_2_6, ExpectedMiningAddr}},\n\t\t\t\t{PartitionSize, 100, unpacked},\n\t\t\t\t{1, 0, unpacked},\n\t\t\t\t{1000000000000, 14, {spora_2_6, ExpectedMiningAddr}},\n\t\t\t\t{PartitionSize, 0, {replica_2_9, ExpectedMiningAddr}}],\n\t\trepack_in_place_storage_modules = [\n\t\t\t\t{{PartitionSize, 1, unpacked}, {spora_2_6, ExpectedMiningAddr}},\n\t\t\t\t{{1, 1, {spora_2_6, ExpectedMiningAddr}}, unpacked},\n\t\t\t\t{{PartitionSize,8, {replica_2_9, ExpectedMiningAddr}}, unpacked}],\n\t\trepack_batch_size = 200,\n\t\tpolling = 10,\n\t\tblock_pollers = 100,\n\t\tauto_join = false,\n\t\tjoin_workers = 9,\n\t\tdiff = 42,\n\t\tmining_addr = ExpectedMiningAddr,\n\t\thashing_threads = 17,\n\t\tdata_cache_size_limit = 10000,\n\t\tpacking_cache_size_limit = 20000,\n\t\tmining_cache_size_mb = 3,\n\t\tmax_propagation_peers = 8,\n\t\tmax_block_propagation_peers = 60,\n\t\tpost_tx_timeout = 50,\n\t\tmax_emitters = 4,\n\t\treplica_2_9_workers = 16,\n\t\tdisable_replica_2_9_device_limit = true,\n\t\treplica_2_9_entropy_cache_size_mb = 2000,\n\t\tpacking_workers = 25,\n\t\tsync_jobs = 10,\n\t\theader_sync_jobs = 1,\n\t\tdisk_pool_jobs = 2,\n\t\trequests_per_minute_limit = 2500,\n\t\trequests_per_minute_limit_by_ip = #{\n\t\t\t{127, 0, 0, 1} := #{\n\t\t\t\tchunk := 100000,\n\t\t\t\tdata_sync_record := 1,\n\t\t\t\trecent_hash_list_diff := 200000,\n\t\t\t\tdefault := 100\n\t\t\t}\n\t\t},\n\t\tdisk_space_check_frequency = 10 * 1000,\n\t\tstart_from_latest_state = true,\n\t\tstart_from_block = ExpectedBlockHash,\n\t\tinternal_api_secret = <<\"some_very_very_long_secret\">>,\n\t\tenable = [feature_1, feature_2],\n\t\tdisable = [feature_3, feature_4],\n\t\ttransaction_blacklist_files = [\"some_blacklist_1\", \"some_blacklist_2\"],\n\t\ttransaction_blacklist_urls = [\"http://some_blacklist_1\", \"http://some_blacklist_2/x\"],\n\t\ttransaction_whitelist_files = [\"some_whitelist_1\", \"some_whitelist_2\"],\n\t\ttransaction_whitelist_urls = [\"http://some_whitelist\"],\n\t\twebhooks = 
[\n\t\t\t#config_webhook{\n\t\t\t\tevents = [transaction, block],\n\t\t\t\turl = <<\"https://example.com/hook\">>,\n\t\t\t\theaders = [{<<\"Authorization\">>, <<\"Bearer 123456\">>}]\n\t\t\t}\n\t\t],\n\t\t'http_api.tcp.max_connections' = 512,\n\t\tdisk_pool_data_root_expiration_time = 10000,\n\t\tmax_disk_pool_buffer_mb = 100000,\n\t\tmax_disk_pool_data_root_buffer_mb = 100000000,\n\t\tmax_duplicate_data_roots = 7,\n\t\tdisk_cache_size = 1024,\n\t\tsemaphores = #{\n\t\t\tget_chunk := 1,\n\t\t\tget_and_pack_chunk := 2,\n\t\t\tget_tx_data := 3,\n\t\t\tpost_chunk := 999,\n\t\t\tget_block_index := 1,\n\t\t\tget_wallet_list := 2,\n\t\t\tarql := 3,\n\t\t\tgateway_arql := 3,\n\t\t\tget_sync_record := 10\n\t\t},\n\t\tvdf = hiopt_m4,\n\t\tmax_nonce_limiter_validation_thread_count = 2,\n\t\tmax_nonce_limiter_last_step_validation_thread_count = 3,\n\t\tnonce_limiter_server_trusted_peers = [\"127.0.0.1\", \"2.3.4.5\", \"6.7.8.9:1982\"],\n\t\tnonce_limiter_client_peers = [<<\"2.3.6.7:1984\">>, <<\"4.7.3.1:1983\">>, <<\"3.3.3.3\">>],\n\t\trun_defragmentation = true,\n\t\tdefragmentation_trigger_threshold = 1_000,\n\t\tdefragmentation_modules = [\n\t\t\t{PartitionSize, 0, unpacked},\n\t\t\t{PartitionSize, 2, {spora_2_6, ExpectedMiningAddr}},\n\t\t\t{PartitionSize, 100, unpacked},\n\t\t\t{1, 0, unpacked},\n\t\t\t{1000000000000, 14, {spora_2_6, ExpectedMiningAddr}}\n\t\t],\n\t\tblock_throttle_by_ip_interval = 5_000,\n\t\tblock_throttle_by_solution_interval = 12_000,\n\t\thttp_api_transport_idle_timeout = 15_000\n\t}, ParsedConfig).\n\nconfig_fixture() ->\n\t{ok, Cwd} = file:get_cwd(),\n\tPath = filename:join(Cwd, \"./apps/arweave/test/ar_config_tests_config_fixture.json\"),\n\t{ok, FileData} = file:read_file(Path),\n\tFileData.\n\ntest_validate_repack_in_place() ->\n\tAddr1 = crypto:strong_rand_bytes(32),\n\tAddr2 = crypto:strong_rand_bytes(32),\n\tPartitionSize = ar_block:partition_size(),\n\t?assertEqual(true,\n\t\tar_config:validate_config(#config{\n\t\t\tstorage_modules = [],\n\t\t\trepack_in_place_storage_modules = []})),\n\t?assertEqual(true,\n\t\t\tar_config:validate_config(#config{\n\t\t\t\tstorage_modules = [{PartitionSize, 0, {spora_2_6, Addr1}}],\n\t\t\t\trepack_in_place_storage_modules = []})),\n\t?assertEqual(true,\n\t\tar_config:validate_config(#config{\n\t\t\tstorage_modules = [{PartitionSize, 0, {spora_2_6, Addr1}}],\n\t\t\trepack_in_place_storage_modules = [\n\t\t\t\t{{PartitionSize, 1, {spora_2_6, Addr1}}, {replica_2_9, Addr2}}]})),\n\t?assertEqual(false,\n\t\tar_config:validate_config(#config{\n\t\t\tstorage_modules = [{PartitionSize, 0, {spora_2_6, Addr1}}],\n\t\t\trepack_in_place_storage_modules = [\n\t\t\t\t{{PartitionSize, 0, {spora_2_6, Addr1}}, {replica_2_9, Addr2}}]})),\n\t?assertEqual(true,\n\t\tar_config:validate_config(#config{\n\t\t\tstorage_modules = [],\n\t\t\trepack_in_place_storage_modules = [\n\t\t\t\t{{PartitionSize, 0, {replica_2_9, Addr1}}, {replica_2_9, Addr2}}]})),\n\t?assertEqual(true,\n\t\tar_config:validate_config(#config{\n\t\t\tstorage_modules = [],\n\t\t\trepack_in_place_storage_modules = [\n\t\t\t\t{{PartitionSize, 0, {replica_2_9, Addr1}}, {spora_2_6, Addr2}}]})),\n\t?assertEqual(true,\n\t\tar_config:validate_config(#config{\n\t\t\tstorage_modules = [],\n\t\t\trepack_in_place_storage_modules = [\n\t\t\t\t{{PartitionSize, 0, {replica_2_9, Addr2}}, unpacked}]})),\n\t?assertEqual(true,\n\t\tar_config:validate_config(#config{\n\t\t\tstorage_modules = [],\n\t\t\trepack_in_place_storage_modules = [\n\t\t\t\t{{PartitionSize, 0, unpacked}, {replica_2_9, 
Addr2}}]})),\n\t?assertEqual(true,\n\t\tar_config:validate_config(#config{\n\t\t\tstorage_modules = [],\n\t\t\trepack_in_place_storage_modules = [\n\t\t\t\t{{PartitionSize, 0, {spora_2_6, Addr1}}, {replica_2_9, Addr2}}]})),\n\t?assertEqual(true,\n\t\tar_config:validate_config(#config{\n\t\t\tstorage_modules = [],\n\t\t\trepack_in_place_storage_modules = [\n\t\t\t\t{{PartitionSize, 0, unpacked}, {spora_2_6, Addr2}}]})).\n\n\ntest_validate_cm_pool() ->\n\t?assertEqual(false,\n\t\tar_config:validate_config(\n\t\t\t#config{\n\t\t\t\tcoordinated_mining = true, is_pool_server = true,\n\t\t\t\tmine = true, cm_api_secret = <<\"secret\">>})),\n\t?assertEqual(true,\n\t\tar_config:validate_config(\n\t\t\t#config{\n\t\t\t\tcoordinated_mining = true, is_pool_server = false,\n\t\t\t\tmine = true, cm_api_secret = <<\"secret\">>})),\n\t?assertEqual(true,\n\t\tar_config:validate_config(\n\t\t\t#config{\n\t\t\t\tcoordinated_mining = false, is_pool_server = true,\n\t\t\t\tmine = true, cm_api_secret = <<\"secret\">>})),\n\t?assertEqual(true,\n\t\tar_config:validate_config(\n\t\t\t#config{\n\t\t\t\tcoordinated_mining = false, is_pool_server = false,\n\t\t\t\tmine = true, cm_api_secret = <<\"secret\">>})),\n\t?assertEqual(false,\n\t\tar_config:validate_config(\n\t\t\t#config{is_pool_server = true, is_pool_client = true, mine = true})),\n\t?assertEqual(true,\n\t\tar_config:validate_config(\n\t\t\t#config{is_pool_server = true, is_pool_client = false})),\n\t?assertEqual(true,\n\t\tar_config:validate_config(\n\t\t\t#config{is_pool_server = false, is_pool_client = true, mine = true})),\n\t?assertEqual(true,\n\t\tar_config:validate_config(\n\t\t\t#config{is_pool_server = false, is_pool_client = false, mine = true})),\n\t?assertEqual(false,\n\t\tar_config:validate_config(\n\t\t\t#config{is_pool_client = true, mine = false})),\n\t?assertEqual(true,\n\t\tar_config:validate_config(\n\t\t\t#config{is_pool_client = true, mine = true})),\n\t?assertEqual(true,\n\t\tar_config:validate_config(\n\t\t\t#config{is_pool_client = false, mine = true})),\n\t?assertEqual(true,\n\t\tar_config:validate_config(\n\t\t\t#config{is_pool_client = false, mine = false})).\n\ntest_validate_cm() ->\n\t?assertEqual(true,\n\t\tar_config:validate_config(\n\t\t\t#config{coordinated_mining = true, mine = true, cm_api_secret = <<\"secret\">>})),\n\t?assertEqual(true,\n\t\tar_config:validate_config(\n\t\t\t#config{coordinated_mining = false, mine = false, cm_api_secret = not_set})),\n\t?assertEqual(false,\n\t\tar_config:validate_config(\n\t\t\t#config{coordinated_mining = true, mine = false, cm_api_secret = <<\"secret\">>})),\n\t?assertEqual(false,\n\t\tar_config:validate_config(\n\t\t\t#config{coordinated_mining = true, mine = true, cm_api_secret = not_set})).\n\n\t\t\ntest_validate_storage_modules() ->\n\tAddr1 = crypto:strong_rand_bytes(32),\n\tAddr2 = crypto:strong_rand_bytes(32),\n\tLegacyPacking = {spora_2_6, Addr1},\n\tPartitionSize = ar_block:partition_size(),\n\n\tUnpacked = {PartitionSize, 0, unpacked},\n\tLegacy = {PartitionSize, 1, LegacyPacking},\n\tReplica29 = {PartitionSize, 2, {replica_2_9, Addr1}},\n\n\t?assertEqual(true,\n\t\tar_config:validate_config(\n\t\t\t#config{\n\t\t\t\tstorage_modules = [Unpacked, Legacy, Replica29],\n\t\t\t\tmining_addr = Addr1,\n\t\t\t\tmine = false})),\n\t?assertEqual(true,\n\t\tar_config:validate_config(\n\t\t\t#config{\n\t\t\t\tstorage_modules = [Unpacked, Legacy, Replica29],\n\t\t\t\tmining_addr = Addr2,\n\t\t\t\tmine = 
true})),\n\t?assertEqual(true,\n\t\tar_config:validate_config(\n\t\t\t#config{\n\t\t\t\tstorage_modules = [Unpacked, Legacy],\n\t\t\t\tmining_addr = Addr1,\n\t\t\t\tmine = true})),\n\t?assertEqual(true,\n\t\tar_config:validate_config(\n\t\t\t#config{\n\t\t\t\tstorage_modules = [Unpacked, Replica29],\n\t\t\t\tmining_addr = Addr1,\n\t\t\t\tmine = true})),\n\t?assertEqual(false,\n\t\tar_config:validate_config(\n\t\t\t#config{\n\t\t\t\tstorage_modules = [Legacy, Replica29],\n\t\t\t\tmining_addr = Addr1,\n\t\t\t\tmine = true})).\n"
  },
  {
    "path": "apps/arweave/test/ar_config_tests_config_fixture.json",
    "content": "{\n  \"peers\": [\n    \"188.166.200.45\",\n    \"188.166.192.169\",\n    \"163.47.11.64\",\n    \"159.203.158.108\",\n    \"159.203.49.13\",\n    \"139.59.51.59\",\n    \"138.197.232.192\",\n    \"46.101.67.172\"\n  ],\n  \"sync_from_local_peers_only\": true,\n  \"block_gossip_peers\": [\"159.203.158.108\", \"150.150.150.150:1983\"],\n  \"local_peers\": [\"192.168.2.3\", \"172.16.10.11:1985\"],\n  \"start_from_latest_state\": true,\n  \"start_from_block\": \"lfoR_PyKV6t7Z6Xi2QJZlZ0JWThh0Ke7Zc5Q82CSshUhFGcjiYufP234ph1mVofX\",\n  \"mine\": true,\n  \"port\": 1985,\n  \"data_dir\": \"some_data_dir\",\n  \"log_dir\": \"log_dir\",\n  \"storage_modules\": [\n    \"0,unpacked\",\n    \"2,LKC84RnISouGUw4uMQGCpPS9yDC-tIoqM2UVbUIt-Sw\",\n    \"100,unpacked\",\n    \"1,unpacked,repack_in_place,LKC84RnISouGUw4uMQGCpPS9yDC-tIoqM2UVbUIt-Sw\",\n    \"0,1,unpacked\",\n    \"14,1000000000000,LKC84RnISouGUw4uMQGCpPS9yDC-tIoqM2UVbUIt-Sw\",\n    \"1,1,LKC84RnISouGUw4uMQGCpPS9yDC-tIoqM2UVbUIt-Sw,repack_in_place,unpacked\",\n    \"0,LKC84RnISouGUw4uMQGCpPS9yDC-tIoqM2UVbUIt-Sw.replica.2.9\",\n    \"8,LKC84RnISouGUw4uMQGCpPS9yDC-tIoqM2UVbUIt-Sw.replica.2.9,repack_in_place,unpacked\"\n  ],\n  \"repack_batch_size\": 200,\n  \"polling\": 10,\n  \"block_pollers\": 100,\n  \"no_auto_join\": true,\n  \"join_workers\": 9,\n  \"diff\": 42,\n  \"mining_addr\": \"LKC84RnISouGUw4uMQGCpPS9yDC-tIoqM2UVbUIt-Sw\",\n  \"hashing_threads\": 17,\n  \"data_cache_size_limit\": 10000,\n  \"packing_cache_size_limit\": 20000,\n  \"mining_cache_size_mb\": 3,\n  \"max_propagation_peers\": 8,\n  \"max_block_propagation_peers\": 60,\n  \"post_tx_timeout\": 50,\n  \"max_emitters\": 4,\n  \"sync_jobs\": 10,\n  \"header_sync_jobs\": 1,\n  \"disk_pool_jobs\": 2,\n  \"requests_per_minute_limit\": 2500,\n  \"requests_per_minute_limit_by_ip\": {\n    \"127.0.0.1\": {\n      \"chunk\": 100000,\n      \"data_sync_record\": 1,\n      \"recent_hash_list_diff\": 200000,\n      \"default\": 100\n    }\n  },\n  \"transaction_blacklists\": [\"some_blacklist_1\", \"some_blacklist_2\"],\n  \"transaction_blacklist_urls\": [\n    \"http://some_blacklist_1\",\n    \"http://some_blacklist_2/x\"\n  ],\n  \"transaction_whitelists\": [\"some_whitelist_1\", \"some_whitelist_2\"],\n  \"transaction_whitelist_urls\": [\"http://some_whitelist\"],\n  \"disk_space_check_frequency\": 10,\n  \"init\": true,\n  \"internal_api_secret\": \"some_very_very_long_secret\",\n  \"enable\": [\"feature_1\", \"feature_2\"],\n  \"disable\": [\"feature_3\", \"feature_4\"],\n  \"webhooks\": [\n    {\n      \"events\": [\"transaction\", \"block\"],\n      \"url\": \"https://example.com/hook\",\n      \"headers\": {\n        \"Authorization\": \"Bearer 123456\"\n      }\n    }\n  ],\n  \"max_connections\": 512,\n  \"disk_pool_data_root_expiration_time\": 10000,\n  \"max_disk_pool_buffer_mb\": 100000,\n  \"max_disk_pool_data_root_buffer_mb\": 100000000,\n  \"max_duplicate_data_roots\": 7,\n  \"disk_cache_size_mb\": 1024,\n  \"semaphores\": {\n    \"get_chunk\": 1,\n    \"get_and_pack_chunk\": 2,\n    \"get_tx_data\": 3,\n    \"post_chunk\": 999,\n    \"get_block_index\": 1,\n    \"get_wallet_list\": 2,\n    \"arql\": 3,\n    \"gateway_arql\": 3\n  },\n  \"replica_2_9_workers\": 16,\n  \"disable_replica_2_9_device_limit\": true,\n  \"replica_2_9_entropy_cache_size_mb\": 2000,\n  \"packing_workers\": 25,\n  \"max_nonce_limiter_validation_thread_count\": 2,\n  \"max_nonce_limiter_last_step_validation_thread_count\": 3,\n  \"vdf\": \"hiopt_m4\",\n  \"vdf_server_trusted_peer\": 
\"127.0.0.1\",\n  \"vdf_server_trusted_peers\": [\"2.3.4.5\", \"6.7.8.9:1982\"],\n  \"vdf_client_peers\": [\"2.3.6.7:1984\", \"4.7.3.1:1983\", \"3.3.3.3\"],\n  \"run_defragmentation\": true,\n  \"defragmentation_trigger_threshold\": 1000,\n  \"defragment_modules\": [\n    \"0,unpacked\",\n    \"2,LKC84RnISouGUw4uMQGCpPS9yDC-tIoqM2UVbUIt-Sw\",\n    \"100,unpacked\",\n    \"0,1,unpacked\",\n    \"14,1000000000000,LKC84RnISouGUw4uMQGCpPS9yDC-tIoqM2UVbUIt-Sw\"\n  ],\n  \"block_throttle_by_ip_interval\": 5000,\n  \"block_throttle_by_solution_interval\": 12000,\n  \"http_client.http.closing_timeout\": 5000,\n  \"http_client.http.keepalive\": \"infinity\",\n  \"http_client.tcp.delay_send\": true,\n  \"http_client.tcp.keepalive\": false,\n  \"http_client.tcp.linger\": true,\n  \"http_client.tcp.linger_timeout\": 0,\n  \"http_client.tcp.nodelay\": true,\n  \"http_client.tcp.send_timeout_close\": true,\n  \"http_client.tcp.send_timeout\": 5000,\n  \"http_api.http.active_n\": 2,\n  \"http_api.http.inactivity_timeout\": 15000,\n  \"http_api.http.linger_timeout\": 0,\n  \"http_api.http.request_timeout\": 5000,\n  \"http_api.tcp.delay_send\": true,\n  \"http_api.tcp.idle_timeout_seconds\": 15,\n  \"http_api.tcp.keepalive\": false,\n  \"http_api.tcp.linger\": true,\n  \"http_api.tcp.linger_timeout\": 0,\n  \"http_api.tcp.listener_shutdown\": 5000,\n  \"http_api.tcp.nodelay\": true,\n  \"http_api.tcp.num_acceptors\": 10,\n  \"http_api.tcp.send_timeout_close\": true,\n  \"http_api.tcp.send_timeout\": 5000,\n  \"network.socket.backend\": \"socket\",\n  \"network.tcp.shutdown.mode\": \"shutdown\"\n}\n"
  },
  {
    "path": "apps/arweave/test/ar_coordinated_mining_tests.erl",
    "content": "-module(ar_coordinated_mining_tests).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"arweave/include/ar_mining.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-import(ar_test_node, [http_get_block/2]).\n\n-define(COORDINATED_MINING_WAIT_TIMEOUT, 900_000).\n\n%% --------------------------------------------------------------------\n%% Test registration\n%% --------------------------------------------------------------------\nmining_test_() ->\n\t[\n\t\t{timeout, ?TEST_NODE_TIMEOUT, fun test_single_node_one_chunk/0},\n\t\tar_test_node:test_with_mocked_functions(\n\t\t\t[\n\t\t\t\tar_test_node:mock_to_force_invalid_h1(),\n\t\t\t\t{ar_retarget, is_retarget_height, fun(_Height) -> false end},\n\t\t\t\t{ar_retarget, is_retarget_block, fun(_Block) -> false end}\n\t\t\t],\n\t\t\tfun test_single_node_two_chunk/0, ?TEST_NODE_TIMEOUT),\n\t\tar_test_node:test_with_mocked_functions(\n\t\t\t[\n\t\t\t\tar_test_node:mock_to_force_invalid_h1(),\n\t\t\t\t{ar_retarget, is_retarget_height, fun(_Height) -> false end},\n\t\t\t\t{ar_retarget, is_retarget_block, fun(_Block) -> false end}\n\t\t\t],\n\t\t\tfun test_cross_node/0, ?TEST_NODE_TIMEOUT),\n\t\tar_test_node:test_with_mocked_functions(\n\t\t\t[\n\t\t\t\tar_test_node:mock_to_force_invalid_h1(),\n\t\t\t\tmock_for_single_difficulty_adjustment_height(),\n\t\t\t\tmock_for_single_difficulty_adjustment_block()\n\t\t\t],\n\t\t\tfun test_cross_node_retarget/0, 2 * ?TEST_NODE_TIMEOUT),\n\t\t{timeout, ?TEST_NODE_TIMEOUT, fun test_two_node_retarget/0},\n\t\tar_test_node:test_with_mocked_functions(\n\t\t\t[\n\t\t\t\t{ar_retarget, is_retarget_height, fun(_Height) -> false end},\n\t\t\t\t{ar_retarget, is_retarget_block, fun(_Block) -> false end}\n\t\t\t],\n\t\t\tfun test_three_node/0, 2 * ?TEST_NODE_TIMEOUT),\n\t\t{timeout, ?TEST_NODE_TIMEOUT, fun test_no_exit_node/0}\n\t].\n\napi_test_() ->\n\t[\n\t\t{timeout, ?TEST_NODE_TIMEOUT, fun test_no_secret/0},\n\t\t{timeout, ?TEST_NODE_TIMEOUT, fun test_bad_secret/0},\n\t\t{timeout, ?TEST_NODE_TIMEOUT, fun test_partition_table/0}\n\t].\n\nrefetch_partitions_test_() ->\n\t[\n\t\t{timeout, ?TEST_NODE_TIMEOUT, fun test_peers_by_partition/0}\n\t].\n\n%% --------------------------------------------------------------------\n%% Tests\n%% --------------------------------------------------------------------\n\n%% @doc One-node coordinated mining cluster mining a block with one\n%% or two chunks.\ntest_single_node_one_chunk() ->\n\t[Node, _ExitNode, ValidatorNode] = ar_test_node:start_coordinated(1),\n\tar_test_node:mine(Node),\n\tBI = ar_test_node:wait_until_height(ValidatorNode, 1, false),\n\t{ok, B} = http_get_block(element(1, hd(BI)), ValidatorNode),\n\t?assert(byte_size((B#block.poa)#poa.data_path) > 0),\n\tassert_empty_cache(Node).\n\t\n%% @doc One-node coordinated mining cluster mining a block with two chunks.\ntest_single_node_two_chunk() ->\n\t[Node, _ExitNode, ValidatorNode] = ar_test_node:start_coordinated(1),\n\tar_test_node:mine(Node),\n\tBI = ar_test_node:wait_until_height(ValidatorNode, 1, false),\n\t{ok, B} = http_get_block(element(1, hd(BI)), ValidatorNode),\n\t?assert(byte_size((B#block.poa2)#poa.data_path) > 0),\n\tassert_empty_cache(Node).\n\n%% @doc Two-node coordinated mining cluster mining until a difficulty retarget.\ntest_two_node_retarget() ->\n\t[Node1, Node2, _ExitNode, ValidatorNode] = ar_test_node:start_coordinated(2),\n\tlists:foreach(\n\t\tfun(Height) ->\n\t\t\tmine_in_parallel([Node1, Node2], ValidatorNode, 
Height)\n\t\tend,\n\t\tlists:seq(0, ?RETARGET_BLOCKS)),\n\tassert_empty_cache(Node1),\n\tassert_empty_cache(Node2).\n\n%% @doc Three-node coordinated mining cluster mining until all nodes have contributed\n%% to a solution. This test does not force cross-node solutions.\ntest_three_node() ->\n\t[Node1, Node2, Node3, _ExitNode, ValidatorNode] = ar_test_node:start_coordinated(3),\t\n\twait_for_each_node([Node1, Node2, Node3], ValidatorNode, 0, [0, 2, 4]),\n\tassert_empty_cache(Node1),\n\tassert_empty_cache(Node2),\n\tassert_empty_cache(Node3).\n\n%% @doc Two-node, mine until a block is found that incorporates hashes from each node.\ntest_cross_node() ->\n\t[Node1, Node2, _ExitNode, ValidatorNode] = ar_test_node:start_coordinated(2),\n\twait_for_cross_node([Node1, Node2], ValidatorNode, 0, [0, 2]),\n\tassert_empty_cache(Node1),\n\tassert_empty_cache(Node2).\n\n%% @doc Two-node, mine through difficulty retarget, then mine until a block is found that\n%% incorporates hashes from each node.\ntest_cross_node_retarget() ->\n\t[Node1, Node2, _ExitNode, ValidatorNode] = ar_test_node:start_coordinated(2),\n\tlists:foreach(\n\t\tfun(H) ->\n\t\t\tmine_in_parallel([Node1, Node2], ValidatorNode, H)\n\t\tend,\n\t\tlists:seq(0, ?RETARGET_BLOCKS)),\n\twait_for_cross_node([Node1, Node2], ValidatorNode, ?RETARGET_BLOCKS, [0, 2]),\n\tassert_empty_cache(Node1),\n\tassert_empty_cache(Node2).\n\ntest_no_exit_node() ->\n\t%% Assert that when the exit node is down, CM miners don't share their solution with any\n\t%% other peers.\n\t[Node, ExitNode, ValidatorNode] = ar_test_node:start_coordinated(1),\n\tar_test_node:stop(ExitNode),\n\tar_test_node:mine(Node),\n\ttimer:sleep(5000),\n\tBI = ar_test_node:get_blocks(ValidatorNode),\n\t?assertEqual(1, length(BI)).\n\ntest_no_secret() ->\n\t[Node, _ExitNode, _ValidatorNode] = ar_test_node:start_coordinated(1),\n\tPeer = ar_test_node:peer_ip(Node),\n\t?assertMatch(\n\t\t{error, {ok, {{<<\"421\">>, _}, _, \n\t\t\t<<\"CM API disabled or invalid CM API secret in request.\">>, _, _}}},\n\t\tar_http_iface_client:get_cm_partition_table(Peer)),\n\t?assertMatch(\n\t\t{error, {ok, {{<<\"421\">>, _}, _, \n\t\t\t<<\"CM API disabled or invalid CM API secret in request.\">>, _, _}}},\n\t\tar_http_iface_client:cm_h1_send(Peer, dummy_candidate())),\n\t?assertMatch(\n\t\t{error, {ok, {{<<\"421\">>, _}, _, \n\t\t\t<<\"CM API disabled or invalid CM API secret in request.\">>, _, _}}},\n\t\tar_http_iface_client:cm_h2_send(Peer, dummy_candidate())),\n\t?assertMatch(\n\t\t{error, {ok, {{<<\"421\">>, _}, _, \n\t\t\t<<\"CM API disabled or invalid CM API secret in request.\">>, _, _}}},\n\t\tar_http_iface_client:cm_publish_send(Peer, dummy_solution())).\n\ntest_bad_secret() ->\n\t[Node, _ExitNode, _ValidatorNode] = ar_test_node:start_coordinated(1),\n\tPeer = ar_test_node:peer_ip(Node),\n\t{ok, Config} = arweave_config:get_env(),\n\ttry\n\t\tok = arweave_config:set_env(Config#config{\n\t\t\tcm_api_secret = <<\"this_is_not_the_actual_secret\">>\n\t\t}),\n\t\t?assertMatch(\n\t\t\t{error, {ok, {{<<\"421\">>, _}, _, \n\t\t\t\t<<\"CM API disabled or invalid CM API secret in request.\">>, _, _}}},\n\t\t\tar_http_iface_client:get_cm_partition_table(Peer)),\n\t\t?assertMatch(\n\t\t\t{error, {ok, {{<<\"421\">>, _}, _, \n\t\t\t\t<<\"CM API disabled or invalid CM API secret in request.\">>, _, _}}},\n\t\t\tar_http_iface_client:cm_h1_send(Peer, dummy_candidate())),\n\t\t?assertMatch(\n\t\t\t{error, {ok, {{<<\"421\">>, _}, _, \n\t\t\t\t<<\"CM API disabled or invalid CM API secret in request.\">>, _, 
_}}},\n\t\t\tar_http_iface_client:cm_h2_send(Peer, dummy_candidate())),\n\t\t?assertMatch(\n\t\t\t{error, {ok, {{<<\"421\">>, _}, _, \n\t\t\t\t<<\"CM API disabled or invalid CM API secret in request.\">>, _, _}}},\n\t\t\tar_http_iface_client:cm_publish_send(Peer, dummy_solution()))\n\tafter\n\t\tok = arweave_config:set_env(Config)\n\tend.\n\ntest_partition_table() ->\n\t[B0] = ar_weave:init([], ar_test_node:get_difficulty_for_invalid_hash(), 5 * ar_block:partition_size()),\n\tConfig = ar_test_node:base_cm_config([]),\n\t\n\tMiningAddr = Config#config.mining_addr,\n\tRandomAddress = crypto:strong_rand_bytes(32),\n\tPeer = ar_test_node:peer_ip(main),\n\n\t%% No partitions\n\tar_test_node:start_node(B0, Config, false),\n\n\t?assertEqual(\n\t\t{ok, []},\n\t\tar_http_iface_client:get_cm_partition_table(Peer)\n\t),\n\n\t%% Partition jumble with 2 addresses\n\tar_test_node:start_node(B0, Config#config{ \n\t\tstorage_modules = [\n\t\t\t{ar_block:partition_size(), 0, {spora_2_6, MiningAddr}},\n\t\t\t{ar_block:partition_size(), 0, {spora_2_6, RandomAddress}},\n\t\t\t{1000, 2, {spora_2_6, MiningAddr}},\n\t\t\t{1000, 2, {spora_2_6, RandomAddress}},\n\t\t\t{1000, 10, {spora_2_6, MiningAddr}},\n\t\t\t{1000, 10, {spora_2_6, RandomAddress}},\n\t\t\t{ar_block:partition_size() * 2, 4, {spora_2_6, MiningAddr}},\n\t\t\t{ar_block:partition_size() * 2, 4, {spora_2_6, RandomAddress}},\n\t\t\t{(ar_block:partition_size() div 10), 18, {spora_2_6, MiningAddr}},\n\t\t\t{(ar_block:partition_size() div 10), 18, {spora_2_6, RandomAddress}},\n\t\t\t{(ar_block:partition_size() div 10), 19, {spora_2_6, MiningAddr}},\n\t\t\t{(ar_block:partition_size() div 10), 19, {spora_2_6, RandomAddress}},\n\t\t\t{(ar_block:partition_size() div 10), 20, {spora_2_6, MiningAddr}},\n\t\t\t{(ar_block:partition_size() div 10), 20, {spora_2_6, RandomAddress}},\n\t\t\t{(ar_block:partition_size() div 10), 21, {spora_2_6, MiningAddr}},\n\t\t\t{(ar_block:partition_size() div 10), 21, {spora_2_6, RandomAddress}},\n\t\t\t{ar_block:partition_size()+1, 30, {spora_2_6, MiningAddr}},\n\t\t\t{ar_block:partition_size()+1, 30, {spora_2_6, RandomAddress}},\n\t\t\t{ar_block:partition_size(), 40, {spora_2_6, MiningAddr}},\n\t\t\t{ar_block:partition_size(), 40, {spora_2_6, RandomAddress}}\n\t\t]}, false),\n\t%% get_cm_partition_table returns the currently minable partitions - which is [] if the\n\t%% node is not mining.\n\t?assertEqual(\n\t\t{ok, []},\n\t\tar_http_iface_client:get_cm_partition_table(Peer)\n\t),\n\n\t%% Simulate mining start\n\tPartitionUpperBound = 35 * ar_block:partition_size(), %% less than the highest configured partition\n\tar_mining_io:set_largest_seen_upper_bound(PartitionUpperBound),\n\t\n\t?assertEqual(\n\t\t{ok, [\n\t\t\t{0, ar_block:partition_size(), MiningAddr, 0},\n\t\t\t{1, ar_block:partition_size(), MiningAddr, 0},\n\t\t\t{2, ar_block:partition_size(), MiningAddr, 0},\n\t\t\t{8, ar_block:partition_size(), MiningAddr, 0},\n\t\t\t{9, ar_block:partition_size(), MiningAddr, 0},\n\t\t\t{30, ar_block:partition_size(), MiningAddr, 0},\n\t\t\t{31, ar_block:partition_size(), MiningAddr, 0}\n\t\t]},\n\t\tar_http_iface_client:get_cm_partition_table(Peer)\n\t).\n\ntest_peers_by_partition() ->\n\tPartitionUpperBound = 6 * ar_block:partition_size(),\n\t[B0] = ar_weave:init([], ar_test_node:get_difficulty_for_invalid_hash(),\n\t\t\tPartitionUpperBound),\n\n\tPeer1 = ar_test_node:peer_ip(peer1),\n\tPeer2 = ar_test_node:peer_ip(peer2),\n\tPeer3 = ar_test_node:peer_ip(peer3),\n\n\tBaseConfig = ar_test_node:base_cm_config([]),\n\tConfig = 
BaseConfig#config{ cm_exit_peer = Peer1 },\n\tMiningAddr = Config#config.mining_addr,\n\t\n\tar_test_node:remote_call(peer1, ar_test_node, start_node, [B0, Config#config{\n\t\tcm_exit_peer = not_set,\n\t\tcm_peers = [Peer2, Peer3],\n\t\tlocal_peers = [Peer2, Peer3],\n\t\tstorage_modules = [\n\t\t\t{ar_block:partition_size(), 0, {spora_2_6, MiningAddr}},\n\t\t\t{ar_block:partition_size(), 1, {spora_2_6, MiningAddr}},\n\t\t\t{ar_block:partition_size(), 2, {spora_2_6, MiningAddr}}\n\t\t]}, false]),\n\tar_test_node:remote_call(peer2, ar_test_node, start_node, [B0, Config#config{\n\t\tcm_peers = [Peer1, Peer3],\n\t\tlocal_peers = [Peer1, Peer3],\n\t\tstorage_modules = [\n\t\t\t{ar_block:partition_size(), 1, {spora_2_6, MiningAddr}},\n\t\t\t{ar_block:partition_size(), 2, {spora_2_6, MiningAddr}},\n\t\t\t{ar_block:partition_size(), 3, {spora_2_6, MiningAddr}}\n\t\t]}, false]),\n\tar_test_node:remote_call(peer3, ar_test_node, start_node, [B0, Config#config{\n\t\tcm_peers = [Peer1, Peer2],\n\t\tlocal_peers = [Peer1, Peer2],\n\t\tstorage_modules = [\n\t\t\t{ar_block:partition_size(), 2, {spora_2_6, MiningAddr}},\n\t\t\t{ar_block:partition_size(), 3, {spora_2_6, MiningAddr}},\n\t\t\t{ar_block:partition_size(), 4, {spora_2_6, MiningAddr}}\n\t\t]}, false]),\n\n\tar_test_node:remote_call(peer1, ar_mining_io, set_largest_seen_upper_bound,\n\t\t[PartitionUpperBound]),\n\tar_test_node:remote_call(peer2, ar_mining_io, set_largest_seen_upper_bound,\n\t\t[PartitionUpperBound]),\n\tar_test_node:remote_call(peer3, ar_mining_io, set_largest_seen_upper_bound,\n\t\t[PartitionUpperBound]),\n\n\ttimer:sleep(3000),\n\tassert_peers([], peer1, 0),\n\tassert_peers([Peer2], peer1, 1),\n\tassert_peers([Peer2, Peer3], peer1, 2),\n\tassert_peers([Peer2, Peer3], peer1, 3),\n\tassert_peers([Peer3], peer1, 4),\n\tassert_peers([], peer1, 5),\n\n\tassert_peers([Peer1], peer2, 0),\n\tassert_peers([Peer1], peer2, 1),\n\tassert_peers([Peer1, Peer3], peer2, 2),\n\tassert_peers([Peer3], peer2, 3),\n\tassert_peers([Peer3], peer2, 4),\n\tassert_peers([], peer2, 5),\n\n\tassert_peers([Peer1], peer3, 0),\n\tassert_peers([Peer1, Peer2], peer3, 1),\n\tassert_peers([Peer1, Peer2], peer3, 2),\n\tassert_peers([Peer2], peer3, 3),\n\tassert_peers([], peer3, 4),\n\tassert_peers([], peer3, 5),\n\n\tar_test_node:remote_call(peer1, ar_test_node, stop, []),\n\ttimer:sleep(3000),\n\n\tassert_peers([Peer1], peer2, 0),\n\tassert_peers([Peer1], peer2, 1),\n\tassert_peers([Peer1, Peer3], peer2, 2),\n\tassert_peers([Peer3], peer2, 3),\n\tassert_peers([Peer3], peer2, 4),\n\n\tassert_peers([Peer1], peer3, 0),\n\tassert_peers([Peer1, Peer2], peer3, 1),\n\tassert_peers([Peer1, Peer2], peer3, 2),\n\tassert_peers([Peer2], peer3, 3),\n\tassert_peers([], peer3, 4),\n\n\tar_test_node:remote_call(peer1, ar_test_node, start_node, [B0, Config#config{\n\t\tcm_exit_peer = not_set,\n\t\tcm_peers = [Peer2, Peer3],\n\t\tlocal_peers = [Peer2, Peer3],\n\t\tstorage_modules = [\n\t\t\t{ar_block:partition_size(), 0, {spora_2_6, MiningAddr}},\n\t\t\t{ar_block:partition_size(), 4, {spora_2_6, MiningAddr}},\n\t\t\t{ar_block:partition_size(), 5, {spora_2_6, MiningAddr}}\n\t\t]}, false]),\n\tar_test_node:remote_call(peer1, ar_mining_io, set_largest_seen_upper_bound,\n\t\t[PartitionUpperBound]),\n\ttimer:sleep(3000),\n\n\tassert_peers([], peer1, 0),\n\tassert_peers([Peer2], peer1, 1),\n\tassert_peers([Peer2, Peer3], peer1, 2),\n\tassert_peers([Peer2, Peer3], peer1, 3),\n\tassert_peers([Peer3], peer1, 4),\n\tassert_peers([], peer1, 5),\n\n\tassert_peers([Peer1], peer2, 
0),\n\tassert_peers([], peer2, 1),\n\tassert_peers([Peer3], peer2, 2),\n\tassert_peers([Peer3], peer2, 3),\n\tassert_peers([Peer1, Peer3], peer2, 4),\n\tassert_peers([Peer1], peer2, 5),\n\n\tassert_peers([Peer1], peer3, 0),\n\tassert_peers([Peer2], peer3, 1),\n\tassert_peers([Peer2], peer3, 2),\n\tassert_peers([Peer2], peer3, 3),\n\tassert_peers([Peer1], peer3, 4),\n\tassert_peers([Peer1], peer3, 5),\n\tok.\t\n\n%% --------------------------------------------------------------------\n%% Helpers\n%% --------------------------------------------------------------------\n\nassert_peers(ExpectedPeers, Node, Partition) ->\n\t?assert(ar_util:do_until(\n\t\tfun() ->\n\t\t\tPeers = ar_test_node:remote_call(Node, ar_coordination, get_peers, [Partition]),\n\t\t\tlists:sort(ExpectedPeers) == lists:sort(Peers)\n\t\tend,\n\t\t200,\n\t\t5000\n\t)).\n\nwait_for_each_node(Miners, ValidatorNode, CurrentHeight, ExpectedPartitions) ->\n\twait_for_each_node(\n\t\tMiners, ValidatorNode, CurrentHeight, sets:from_list(ExpectedPartitions), 40).\n\nwait_for_each_node(\n\t\t_Miners, _ValidatorNode, _CurrentHeight, _ExpectedPartitions, 0) ->\n\t?assert(false, \"Timed out waiting for all mining nodes to win a solution\");\nwait_for_each_node(\n\t\tMiners, ValidatorNode, CurrentHeight, ExpectedPartitions, RetryCount) ->\n\tPartitions = mine_in_parallel(Miners, ValidatorNode, CurrentHeight),\n\tExpectedPartitions2 = sets:subtract(ExpectedPartitions, sets:from_list(Partitions)),\n\tcase sets:is_empty(ExpectedPartitions2) of\n\t\ttrue ->\n\t\t\tCurrentHeight+1;\n\t\tfalse ->\n\t\t\twait_for_each_node(\n\t\t\t\tMiners, ValidatorNode, CurrentHeight+1, ExpectedPartitions2, RetryCount-1)\n\tend.\n\nwait_for_cross_node(Miners, ValidatorNode, CurrentHeight, ExpectedPartitions) ->\n\twait_for_cross_node(\n\t\tMiners, ValidatorNode, CurrentHeight, sets:from_list(ExpectedPartitions), 20).\n\nwait_for_cross_node(_Miners, _ValidatorNode, _CurrentHeight, _ExpectedPartitions, 0) ->\n\t?assert(false, \"Timed out waiting for a cross-node solution\");\nwait_for_cross_node(_Miners, _ValidatorNode, _CurrentHeight, ExpectedPartitions, _RetryCount)\n\t\twhen length(ExpectedPartitions) /= 2 ->\n\t?assert(false, \"Cross-node solutions can only have 2 partitions.\");\nwait_for_cross_node(Miners, ValidatorNode, CurrentHeight, ExpectedPartitions, RetryCount) ->\n\tA = mine_in_parallel(Miners, ValidatorNode, CurrentHeight),\n\tPartitions = sets:from_list(A),\n\tMinedCrossNodeBlock = \n\t\tsets:is_subset(Partitions, ExpectedPartitions) andalso \n\t\tsets:is_subset(ExpectedPartitions, Partitions),\n\tcase MinedCrossNodeBlock of\n\t\ttrue ->\n\t\t\tCurrentHeight+1;\n\t\tfalse ->\n\t\t\twait_for_cross_node(\n\t\t\t\tMiners, ValidatorNode, CurrentHeight+1, ExpectedPartitions, RetryCount-1)\n\tend.\n\t\nmine_in_parallel(Miners, ValidatorNode, CurrentHeight) ->\n\treport_miners(Miners),\n\tCurrentB = ar_test_node:remote_call(ValidatorNode, ar_node, get_current_block, []),\n\tar_util:pmap(fun(Node) -> ar_test_node:mine(Node) end, Miners),\n\t?debugFmt(\n\t\t\"Waiting until the validator node (port ~B) advances to height ~B. 
\"\n\t\t\"Current block hash: ~s, solution hash: ~s.\",\n\t\t[\n\t\t\tar_test_node:peer_port(ValidatorNode),\n\t\t\tCurrentHeight + 1,\n\t\t\tar_util:encode(CurrentB#block.indep_hash),\n\t\t\tar_util:encode(CurrentB#block.hash)\n\t\t]\n\t),\n\tBIValidator = ar_test_node:wait_until_height(\n\t\tValidatorNode, CurrentHeight + 1, false, ?COORDINATED_MINING_WAIT_TIMEOUT),\n\t%% Since multiple nodes are mining in parallel it's possible that multiple blocks\n\t%% were mined. Get the Validator's current height in cas it's more than CurrentHeight+1.\n\tNewHeight = ar_test_node:remote_call(ValidatorNode, ar_node, get_height, []),\n\n\tHashes = [Hash || {Hash, _, _} <- lists:sublist(BIValidator, NewHeight - CurrentHeight)],\n\t\n\tlists:foreach(\n\t\tfun(Node) ->\n\t\t\t?LOG_DEBUG([{test, ar_coordinated_mining_tests},\n\t\t\t\t{waiting_for_height, NewHeight}, {node, Node}]),\n\t\t\t%% Make sure the miner contains all of the new validator hashes, it's okay if\n\t\t\t%% the miner contains *more* hashes since it's possible concurrent blocks were\n\t\t\t%% mined between when the Validator checked and now.\n\t\t\tBIMiner = ar_test_node:wait_until_height(\n\t\t\t\tNode, NewHeight, false, ?COORDINATED_MINING_WAIT_TIMEOUT),\n\t\t\tMinerHashes = [Hash || {Hash, _, _} <- BIMiner],\n\t\t\tMessage = lists:flatten(io_lib:format(\n\t\t\t\t\t\"Node ~p did not mine the same block as the validator node\", [Node])),\n\t\t\t?assert(lists:all(fun(Hash) -> lists:member(Hash, MinerHashes) end, Hashes), Message)\n\t\tend,\n\t\tMiners\n\t),\n\tLatestHash = lists:last(Hashes),\n\t{ok, Block} = ar_test_node:http_get_block(LatestHash, ValidatorNode),\n\n\tcase Block#block.recall_byte2 of\n\t\tundefined -> \n\t\t\t[\n\t\t\t\tar_node:get_partition_number(Block#block.recall_byte)\n\t\t\t];\n\t\tRecallByte2 ->\n\t\t\t[\n\t\t\t\tar_node:get_partition_number(Block#block.recall_byte), \n\t\t\t\tar_node:get_partition_number(RecallByte2)\n\t\t\t]\n\tend.\n\nreport_miners(Miners) ->\n\treport_miners(Miners, 1).\n\nreport_miners([], _I) ->\n\tok;\nreport_miners([Miner | Miners], I) ->\n\t?debugFmt(\"Miner ~B: ~p, port: ~B.\", [I, Miner, ar_test_node:peer_port(Miner)]),\n\treport_miners(Miners, I + 1).\n\nassert_empty_cache(_Node) ->\n\t%% wait until the mining has stopped, then assert that the cache is empty\n\ttimer:sleep(10000),\n\tok.\n\t% [{_, Size}] = ar_test_node:remote_call(Node, ets, lookup, [ar_mining_server, chunk_cache_size]),\n\t%% We should assert that the size is 0, but there is a lot of concurrency in these tests\n\t%% so it's been hard to guarantee the cache is always empty by the time this check runs.\n\t%% It's possible there is a bug in the cache management code, but that code is pretty complex.\n\t%% In the future, if cache size ends up being a problem we can revisit - but for now, not\n\t%% worth the time for a test failure that may not have any realworld implications.\n\t% ?assertEqual(0, Size, Node).\n\ndummy_candidate() ->\n\t#mining_candidate{\n\t\tcm_diff = {rand:uniform(1024), rand:uniform(1024)},\n\t\th0 = crypto:strong_rand_bytes(32),\n\t\th1 = crypto:strong_rand_bytes(32),\n\t\tmining_address = crypto:strong_rand_bytes(32),\n\t\tnext_seed = crypto:strong_rand_bytes(32),\n\t\tnext_vdf_difficulty = rand:uniform(1024),\n\t\tnonce_limiter_output = crypto:strong_rand_bytes(32),\n\t\tpartition_number = rand:uniform(1024),\n\t\tpartition_number2 = rand:uniform(1024),\n\t\tpartition_upper_bound = rand:uniform(1024),\n\t\tseed = crypto:strong_rand_bytes(32),\n\t\tsession_key = 
dummy_session_key(),\n\t\tstart_interval_number = rand:uniform(1024),\n\t\tstep_number = rand:uniform(1024)\n\t}.\n\ndummy_solution() ->\n\t#mining_solution{\n\t\tlast_step_checkpoints = [],\n\t\tmerkle_rebase_threshold = rand:uniform(1024),\n\t\tmining_address = crypto:strong_rand_bytes(32),\n\t\tnext_seed = crypto:strong_rand_bytes(32),\n\t\tnext_vdf_difficulty = rand:uniform(1024),\n\t\tnonce = rand:uniform(1024),\n\t\tnonce_limiter_output = crypto:strong_rand_bytes(32),\n\t\tpartition_number = rand:uniform(1024),\n\t\tpartition_upper_bound = rand:uniform(1024),\n\t\tpoa1 = dummy_poa(),\n\t\tpoa2 = dummy_poa(),\n\t\tpreimage = crypto:strong_rand_bytes(32),\n\t\trecall_byte1 = rand:uniform(1024),\n\t\tseed = crypto:strong_rand_bytes(32),\n\t\tsolution_hash = crypto:strong_rand_bytes(32),\n\t\tstart_interval_number = rand:uniform(1024),\n\t\tstep_number = rand:uniform(1024),\n\t\tsteps = []\n\t}.\n\ndummy_poa() ->\n\t#poa{\n\t\toption = rand:uniform(1024),\n\t\ttx_path = crypto:strong_rand_bytes(32),\n\t\tdata_path = crypto:strong_rand_bytes(32),\n\t\tchunk = crypto:strong_rand_bytes(?DATA_CHUNK_SIZE)\n\t}.\n\ndummy_session_key() ->\n\t{crypto:strong_rand_bytes(32), rand:uniform(100), rand:uniform(10000)}.\n\nmock_for_single_difficulty_adjustment_height() ->\n\t{ar_retarget, is_retarget_height, fun(Height) ->\n\t\tcase Height of\n\t\t\t?RETARGET_BLOCKS -> true;\n\t\t\t_ -> false\n\t\tend\n\tend}.\n\nmock_for_single_difficulty_adjustment_block() ->\n\t{ar_retarget, is_retarget_block, fun(Block) ->\n\t\tcase Block#block.height of\n\t\t\t?RETARGET_BLOCKS -> true;\n\t\t\t_ -> false\n\t\tend\n\tend}.\n"
  },
  {
    "path": "apps/arweave/test/ar_data_roots_sync_tests.erl",
    "content": "-module(ar_data_roots_sync_tests).\n\n-include(\"ar.hrl\").\n-include(\"ar_data_sync.hrl\").\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n%%% Group: data roots metadata only (GET/POST /data_roots, sync from chain — no /chunk roundtrip).\n%%% ---------------------------------------------------------------------------\n\n%% Data roots sync from a peer via ar_data_root_sync when main joins with partial storage (not header sync).\ndata_roots_sync_from_peer_test_() ->\n\tar_test_node:test_with_mocked_functions([\n\t\t\t{ar_block, get_consensus_window_size, fun() -> 5 end},\n\t\t\t{ar_block, get_max_tx_anchor_depth, fun() -> 5 end},\n\t\t\t{ar_storage_module, get_overlap, fun(_Packing) -> 0 end}],\n\t\tfun test_data_roots_sync_from_peer/0).\n\n%% Data roots pushed with HTTP: GET /data_roots from miner, POST /data_roots to peer; assert metadata via GET.\ndata_roots_http_post_test_() ->\n\tar_test_node:test_with_mocked_functions([\n\t\t\t{ar_block, get_consensus_window_size, fun() -> 5 end},\n\t\t\t{ar_block, get_max_tx_anchor_depth, fun() -> 5 end},\n\t\t\t{ar_storage_module, get_overlap, fun(_Packing) -> 0 end}],\n\t\tfun test_data_roots_http_post/0).\n\n%%% Group: chunk POST/GET after data roots are available on the peer.\n%%% ---------------------------------------------------------------------------\n\n%% HTTP share of roots then chunk roundtrip: per block, GET roots from miner, POST to main, POST/GET /chunk.\nchunk_after_data_roots_http_post_test_() ->\n\tar_test_node:test_with_mocked_functions([\n\t\t\t{ar_block, get_consensus_window_size, fun() -> 5 end},\n\t\t\t{ar_block, get_max_tx_anchor_depth, fun() -> 5 end},\n\t\t\t{ar_storage_module, get_overlap, fun(_Packing) -> 0 end}],\n\t\tfun test_chunk_after_data_roots_http_post/0).\n\n%% Background ar_data_root_sync + header_sync_jobs > 0; then POST/GET /chunk (regression: POST 200, GET 404).\nchunk_after_data_roots_background_sync_test_() ->\n\tar_test_node:test_with_mocked_functions([\n\t\t\t{ar_block, get_consensus_window_size, fun() -> 5 end},\n\t\t\t{ar_block, get_max_tx_anchor_depth, fun() -> 5 end},\n\t\t\t{ar_storage_module, get_overlap, fun(_Packing) -> 0 end}],\n\t\tfun test_chunk_after_data_roots_background_sync/0).\n\nchunk_skipped_with_duplicate_data_root_test_() ->\n\tar_test_node:test_with_mocked_functions([\n\t\t\t{ar_block, get_consensus_window_size, fun() -> 5 end},\n\t\t\t{ar_block, get_max_tx_anchor_depth, fun() -> 5 end},\n\t\t\t{ar_storage_module, get_overlap, fun(_Packing) -> 0 end}],\n\t\tfun test_chunk_skipped_with_duplicate_data_root/0).\n\nchunk_skipped_with_depth_exhaustion_test_() ->\n\tar_test_node:test_with_mocked_functions([\n\t\t\t{ar_block, get_consensus_window_size, fun() -> 5 end},\n\t\t\t{ar_block, get_max_tx_anchor_depth, fun() -> 5 end},\n\t\t\t{ar_storage_module, get_overlap, fun(_Packing) -> 0 end}],\n\t\tfun test_chunk_skipped_with_depth_exhaustion/0).\n\ntest_data_roots_sync_from_peer() ->\n\tWallet = {_, Pub} = ar_wallet:new(),\n\t[B0] = ar_weave:init([{ar_wallet:to_address(Pub), ?AR(2_000_000_000_000_000), <<>>}]),\n\n\t%% Peer1 will mine while main stays disconnected until join_on later.\n\tstart_peers_then_disconnect(main, peer1, B0),\n\n\t%% Mine blocks with transactions with data on peer1 BEFORE main joins.\n\tBlocksBeforeJoin = lists:map(\n\t\tfun(_) ->\n\t\t\tData = generate_random_txs(Wallet),\n\t\t\tTXs = [TX || {TX, _} <- Data],\n\t\t\tar_test_node:post_and_mine(#{ miner => peer1, await_on => peer1 }, 
TXs)\n\t\tend,\n\t\tlists:seq(1, 3)\n\t),\n\t%% The node fetches this many latest blocks after joining the network.\n\t%% We want all our data blocks be older so that the node has to use\n\t%% the data root syncing mechanism to fetch data roots (we explicitly\n\t%% assert the unexpected data roots are not synced further down here).\n\t?assertEqual(10, 2 * ar_block:get_max_tx_anchor_depth()),\n\tBlocks = BlocksBeforeJoin ++ lists:map(\n\t\t\tfun(_) ->\n\t\t\t\tData = generate_random_txs(Wallet),\n\t\t\t\tTXs = [TX || {TX, _} <- Data],\n\t\t\t\tar_test_node:post_and_mine(#{ miner => peer1, await_on => peer1 }, TXs)\n\t\t\tend,\n\t\t\tlists:seq(1, 11)\n\t\t),\n\n\t%% Now start main (node A) with header syncing disabled and storage modules covering PART of the range.\n\t{ok, BaseConfig} = arweave_config:get_env(),\n\tMainRewardAddr = ar_wallet:to_address(ar_wallet:new_keyfile()),\n\t%% Cover only the first partition and half of the second one to ensure partial coverage.\n\tMainConfig = BaseConfig#config{\n\t\tmine = false,\n\t\theader_sync_jobs = 0,\n\t\tstorage_modules = [\n\t\t\t%% The first MB of the weave.\n\t\t\t{?MiB, 0, {replica_2_9, MainRewardAddr}},\n\t\t\t%% The second 3 MB of the weave (skipping 1-2 MB).\n\t\t\t{3 * ?MiB, 1, {replica_2_9, MainRewardAddr}}\n\t\t]\n\t},\n    ConfiguredRanges = ar_intervals:from_list([{?MiB, 0}, {6 * ?MiB, 3 * ?MiB}]),\n\n\tar_test_node:join_on(#{ node => main, join_on => peer1, config => MainConfig }, true),\n\tar_test_node:connect_to_peer(peer1),\n\tar_test_node:wait_until_joined(main),\n\n\tLastConsensusWindowHeight = 4,\n\tlists:foreach(\n\t\tfun\t(#block{ block_size = 0 }) ->\n\t\t\t\tok;\n\t\t\t(B) ->\n\t\t\t\tBlockStart = block_start(B),\n\t\t\t\tcase BlockStart >= ar_data_sync:get_disk_pool_threshold() of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tok;\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\tBlockEnd = B#block.weave_size,\n\t\t\t\t\t\tBlockRange = ar_intervals:from_list([{BlockEnd, BlockStart}]),\n\t\t\t\t\t\tIntersection = ar_intervals:intersection(BlockRange, ConfiguredRanges),\n\t\t\t\t\t\tcase B#block.height >= LastConsensusWindowHeight of\n\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\t?debugFmt(\"Asserting data roots synced during consensus \"\n\t\t\t\t\t\t\t\t\t\"are stored, even outside the configured storage modules, \"\n\t\t\t\t\t\t\t\t\t\"height: ~B, configured ranges: ~0p, intersection: ~0p\",\n\t\t\t\t\t\t\t\t\t[B#block.height, ConfiguredRanges, Intersection]),\n\t\t\t\t\t\t\t\twait_for_data_roots(main, B);\n\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\tcase ar_intervals:is_empty(Intersection) of\n\t\t\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t\t\t?debugFmt(\"Asserting data roots synced for partitions \"\n\t\t\t\t\t\t\t\t\t\t\t\"we configured, range intersection: ~0p\", [Intersection]),\n\t\t\t\t\t\t\t\t\t\twait_for_data_roots(main, B);\n\t\t\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\t\t\t?debugFmt(\"Asserting no data roots for partitions \"\n\t\t\t\t\t\t\t\t\t\t\t\"we did not configure, block range: ~0p\", [BlockRange]),\n\t\t\t\t\t\t\t\t\t\tassert_no_data_roots(main, B)\n\t\t\t\t\t\t\t\tend\n\t\t\t\t\t\tend\n\t\t\t\tend\n\t\tend,\n\t\tBlocks\n\t).\n\ntest_data_roots_http_post() ->\n\tWallet = {_, Pub} = ar_wallet:new(),\n\t[B0] = ar_weave:init([{ar_wallet:to_address(Pub), ?AR(2_000_000_000_000_000), <<>>}]),\n\tstart_peers_then_disconnect(peer1, main, B0),\n\t{B, _} = mine_block_with_small_fixed_data_tx(peer1, Wallet),\n\t%% Mine some empty blocks to push the data block out of the recent window.\n\tmine_empty_blocks_on_peer_after(peer1, B, 
11),\n\tjoin_main_on_peer1(B#block.height + 11, false),\n\tar_test_node:disconnect_from(peer1),\n\tassert_no_data_roots(main, B),\n\t{ok, Body} = get_data_roots(peer1, B),\n\tpost_data_roots(main, B, Body),\n\twait_for_data_roots(main, B),\n\tok.\n\ntest_chunk_after_data_roots_http_post() ->\n\tWallet = {_, Pub} = ar_wallet:new(),\n\t[B0] = ar_weave:init([{ar_wallet:to_address(Pub), ?AR(2_000_000_000_000_000), <<>>}]),\n\n\t%% Main must not sync while peer1 builds the chain we will POST later.\n\tstart_peers_then_disconnect(main, peer1, B0),\n\n\t%% Mine blocks with data on peer1. \"Guaranteed\" is a block with a fixed small data tx that\n\t%% is guaranteed to trigger a POST /data_roots in the test loop. The remaining random data\n\t%% provides some good additional coverage, but aren't guaranteed to always\n\t%% trigger a POST /data_roots.\n\tGuaranteed = mine_block_with_small_fixed_data_tx(peer1, Wallet),\n\tRandom = lists:map(\n\t\tfun(_) ->\n\t\t\tTXData0 = generate_random_txs(Wallet),\n\t\t\tTXs = [TX || {TX, _} <- TXData0],\n\t\t\tB = ar_test_node:post_and_mine(#{ miner => peer1, await_on => peer1 }, TXs),\n\t\t\t{B, filter_mined_tx_data(B, TXData0)}\n\t\tend,\n\t\tlists:seq(1, 4)\n\t),\n\tData = [Guaranteed | Random],\n\n\t%% Mine some empty blocks to push the data blocks out of the recent window.\n\t{LastB, _} = lists:last(Data),\n\tmine_empty_blocks_on_peer_after(peer1, LastB, 11),\n\n\tjoin_main_on_peer1(LastB#block.height + 11, false),\n\tar_test_node:disconnect_from(peer1),\n\n\t%% GET/POST /data_roots only apply below each peer's disk pool bound (see\n\t%% ar_data_sync:get_data_roots_for_offset/1 and POST handler in ar_http_iface_middleware).\n\tPeer1PoolBound = ar_test_node:remote_call(peer1, ar_data_sync, get_disk_pool_threshold, []),\n\tMainPoolBound = ar_test_node:remote_call(main, ar_data_sync, get_disk_pool_threshold, []),\n\n\t%% Verify POST /data_roots and POST /chunk for each non-empty block in range on both peers.\n\tlists:foreach(\n\t\tfun({B, TXData}) ->\n\t\t\tBlockStart = block_start(B),\n\t\t\tcase\n\t\t\t\tB#block.block_size > 0\n\t\t\t\tandalso BlockStart < Peer1PoolBound\n\t\t\t\tandalso BlockStart < MainPoolBound\n\t\t\tof\n\t\t\t\ttrue ->\n\t\t\t\t\tassert_no_data_roots(main, B),\n\n\t\t\t\t\t%% Fetch data roots from peer1 and POST to main.\n\t\t\t\t\t{ok, Body} = get_data_roots(peer1, B),\n\t\t\t\t\tpost_data_roots(main, B, Body),\n\n\t\t\t\t\t%% For each transaction with data, POST its chunks to main and verify GET /chunk.\n\t\t\t\t\tlists:foreach(\n\t\t\t\t\t\tfun({TX, Chunks}) ->\n\t\t\t\t\t\t\tcase TX#tx.data_size > 0 of\n\t\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\t\tpost_then_get_chunks(main, B, TX, Chunks);\n\t\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t\tok\n\t\t\t\t\t\t\tend\n\t\t\t\t\t\tend,\n\t\t\t\t\t\tTXData\n\t\t\t\t\t);\n\t\t\t\tfalse ->\n\t\t\t\t\tok\n\t\t\tend\n\t\tend,\n\t\tData\n\t),\n\tok.\n\ntest_chunk_after_data_roots_background_sync() ->\n\tWallet = {_, Pub} = ar_wallet:new(),\n\t[B0] = ar_weave:init([{ar_wallet:to_address(Pub), ?AR(2_000_000_000_000_000), <<>>}]),\n\tstart_peers_then_disconnect(peer1, main, B0),\n\t{B, [{TX, Chunks}]} = mine_block_with_small_fixed_data_tx(peer1, Wallet),\n\tmine_empty_blocks_on_peer_after(peer1, B, 11),\n\tjoin_main_on_peer1(B#block.height + 11, true),\n\ttrue = B#block.block_size > 0,\n\twait_until_data_roots_synced(main, B),\n\tar_test_node:disconnect_from(peer1),\n\tpost_then_get_chunks(main, B, TX, Chunks),\n\tok.\n\ntest_chunk_skipped_with_duplicate_data_root() ->\n\tWallet = {_, Pub} = 
ar_wallet:new(),\n\t[B0] = ar_weave:init([{ar_wallet:to_address(Pub), ?AR(2_000_000_000_000_000), <<>>}]),\n\tstart_peers_then_disconnect(peer1, main, B0),\n\t%% Mine two consecutive blocks on peer1 with the SAME data root (identical chunk data).\n\t{B1, [{TX1, Chunks1}]} = mine_block_with_small_fixed_data_tx(peer1, Wallet),\n\t{B2, [{TX2, Chunks2}]} = mine_block_with_small_fixed_data_tx(peer1, Wallet),\n\t?assertEqual(TX1#tx.data_root, TX2#tx.data_root),\n\t%% Mine empty blocks to push the data blocks out of the recent window.\n\tmine_empty_blocks_on_peer_after(peer1, B2, 11),\n\t%% Join main with background data_roots syncing turned off.\n\t%% We manually sync only B2's data roots (simulating incomplete background sync where\n\t%% B2 was processed but B1 was not).\n\tjoin_main_on_peer1(B2#block.height + 11, false),\n\t%% Pre-fetch B1's data roots while still connected to peer1. We'll post them to main\n\t%% immediately after posting B1's chunk to beat the disk pool scan.\n\t{ok, Body1} = get_data_roots(peer1, B1),\n\t{ok, Body2} = get_data_roots(peer1, B2),\n\tpost_data_roots(main, B2, Body2),\n\twait_for_data_roots(main, B2),\n\tar_test_node:disconnect_from(peer1),\n\t[{AbsEnd2, Proof2}] = ar_test_data_sync:build_proofs(B2, TX2, Chunks2),\n\tpost_chunk(main, Proof2),\n\twait_for_sync_record_update(main, AbsEnd2),\n\t%% POST TX1's chunk. Since we've only synced one copy of the data_roots, and we've already\n\t%% posted one chunk matching that data_root (TX2's chunk), we should get a 200 when\n\t%% posting Chunk1 - *but* we will not see Chunk1 persisted. This is not ideal but it is\n\t%% by design. \n\t%% For more context, and to track the state of any improvements to the process, see:\n\t%% https://github.com/ArweaveTeam/arweave-dev/issues/1112\n\t[{AbsEnd1, Proof1}] = ar_test_data_sync:build_proofs(B1, TX1, Chunks1),\n\tpost_chunk(main, Proof1),\n\t%% Allow time for the chunk to be persisted (it shouldn't be)\n\ttimer:sleep(10_000),\n\t?assertEqual(not_found, get_chunk(main, AbsEnd1)),\n\t%% Now we post B1's data roots, which should allow Chunk1 to be persisted\n\tpost_data_roots(main, B1, Body1),\n\twait_for_data_roots(main, B1),\n\t%% Re-POST the chunk — now B1's index entry exists so the chunk can be promoted.\n\tpost_chunk(main, Proof1),\n\tok = wait_for_chunk_to_persist(main, AbsEnd1).\n\ntest_chunk_skipped_with_depth_exhaustion() ->\n\tMaxDuplicateDataRoots = 3,\n\tWallet = {_, Pub} = ar_wallet:new(),\n\t[B0] = ar_weave:init([{ar_wallet:to_address(Pub), ?AR(2_000_000_000_000_000), <<>>}]),\n\tstart_peers_then_disconnect(peer1, main, B0),\n\t%% Mine one more block than the configured duplicate-depth limit. This ensures the oldest\n\t%% chunk falls outside the configured duplicate data-root window.\n\tBlocks = lists:map(\n\t\tfun(_) -> mine_block_with_small_fixed_data_tx(peer1, Wallet) end,\n\t\tlists:seq(1, MaxDuplicateDataRoots + 1)\n\t),\n\t{LastB, _} = lists:last(Blocks),\n\t%% Mine enough empty blocks to push the chunks out of the disk pool. 
The later duplicates\n\t%% should still be persistable, but the oldest should now be outside the configured window.\n\tmine_empty_blocks_on_peer_after(peer1, LastB, 10),\n\tjoin_main_on_peer1(LastB#block.height + 10, false, MaxDuplicateDataRoots),\n\tar_test_node:disconnect_from(peer1),\n\t[{B1, [{TX1, Chunks1}]} | HigherBlocks] = Blocks,\n\tlists:foreach(\n\t\tfun({B, _}) ->\n\t\t\t{ok, Body} = get_data_roots(peer1, B),\n\t\t\tpost_data_roots(main, B, Body),\n\t\t\twait_for_data_roots(main, B)\n\t\tend,\n\t\tHigherBlocks\n\t),\n\t%% POST chunks for the later duplicate blocks and wait for promotion. The oldest block B1\n\t%% is outside the configured duplicate-offset window, so it should not become queryable via\n\t%% duplicate fanout.\n\tHigherAbsEnds = lists:map(\n\t\tfun({B, [{TX, Chunks}]}) ->\n\t\t\t[{AbsEnd, Proof}] = ar_test_data_sync:build_proofs(B, TX, Chunks),\n\t\t\tpost_chunk(main, Proof),\n\t\t\tAbsEnd\n\t\tend,\n\t\tHigherBlocks\n\t),\n\tlists:foreach(\n\t\tfun(AbsEnd) ->\n\t\t\twait_for_chunk_to_persist(main, AbsEnd)\n\t\tend,\n\t\tHigherAbsEnds\n\t),\n\ttimer:sleep(10_000),\n\t%% POST chunk for block 1 (lowest offset). chunk_offsets_synced/5 only checks the configured\n\t%% number of synced duplicate offsets, so B1 is treated as already synced and skipped.\n\t[{AbsEnd1, Proof1}] = ar_test_data_sync:build_proofs(B1, TX1, Chunks1),\n\tpost_chunk(main, Proof1),\n\ttimer:sleep(10_000),\n\t?assertEqual(not_found, get_chunk(main, AbsEnd1)),\n\t{ok, Body1} = get_data_roots(peer1, B1),\n\t%% This test confirms that doing a POST /data_roots before posting a chunk is not a reliable\n\t%% workaround to force the chunk to be persisted. We will either need to change the\n\t%% duplicate data_roots logic, or implement a new endpoint. \n\t%% For more context, and to track the state of any improvements to the process, see:\n\t%% https://github.com/ArweaveTeam/arweave-dev/issues/1112\n\tpost_data_roots(main, B1, Body1),\n\twait_for_data_roots(main, B1),\n\tpost_chunk(main, Proof1),\n\ttimer:sleep(10_000),\n\t?assertEqual(not_found, get_chunk(main, AbsEnd1)),\n\t% ok = wait_for_chunk_to_persist(main, AbsEnd1),\n\tok.\n\n%% Start PeerA and PeerB from the same genesis, wait until both joined, then have PeerA\n%% disconnect from PeerB so they stop syncing while tests extend the chain on one side.\nstart_peers_then_disconnect(PeerA, PeerB, B0) ->\n\tar_test_node:start_peer(PeerA, B0),\n\tar_test_node:start_peer(PeerB, B0),\n\tar_test_node:wait_until_joined(PeerA),\n\tar_test_node:wait_until_joined(PeerB),\n\tar_test_node:remote_call(PeerA, ar_test_node, disconnect_from, [PeerB]),\n\tok.\n\n%% Mine Count empty blocks on Peer immediately after Block (height advances from Block#block.height).\nmine_empty_blocks_on_peer_after(Peer, Block, Count) ->\n\tlists:foldl(\n\t\tfun(_, Height) ->\n\t\t\tar_test_node:mine(Peer),\n\t\t\tar_test_node:assert_wait_until_height(Peer, Height + 1),\n\t\t\tHeight + 1\n\t\tend,\n\t\tBlock#block.height,\n\t\tlists:seq(1, Count)\n\t).\n\n%% Rejoin main onto peer1 with mine disabled. When EnableBackgroundSync is true, enable the\n%% normal background sync setup; when false, disable both header syncing and background\n%% data_roots syncing. 
Does not disconnect — caller may call\n%% ar_test_node:remote_call(main, ar_test_node, disconnect_from, [peer1]) when needed.\njoin_main_on_peer1(ExpectedHeight, EnableBackgroundSync) ->\n\tjoin_main_on_peer1(ExpectedHeight, EnableBackgroundSync, undefined).\n\njoin_main_on_peer1(ExpectedHeight, EnableBackgroundSync, MaxDuplicateDataRoots) ->\n\t{ok, BaseConfig} = arweave_config:get_env(),\n\tConfig = BaseConfig#config{\n\t\tmine = false,\n\t\tsync_jobs = 0,\n\t\theader_sync_jobs =\n\t\t\tcase EnableBackgroundSync of\n\t\t\t\ttrue -> 2;\n\t\t\t\tfalse -> 0\n\t\t\tend,\n\t\tenable_data_roots_syncing = EnableBackgroundSync\n\t},\n\tMainConfig = case MaxDuplicateDataRoots of\n\t\tundefined ->\n\t\t\tConfig;\n\t\tValue ->\n\t\t\tConfig#config{ max_duplicate_data_roots = Value }\n\tend,\n\tar_test_node:join_on(#{ node => main, join_on => peer1, config => MainConfig }, true),\n\tar_test_node:connect_to_peer(peer1),\n\tar_test_node:wait_until_joined(main),\n\tar_test_node:assert_wait_until_height(main, ExpectedHeight),\n\tok.\n\nwait_until_data_roots_synced(Peer, B) ->\n\tStart = block_start(B),\n\tEnd = B#block.weave_size,\n\ttrue = ar_util:do_until(\n\t\tfun() ->\n\t\t\tar_test_node:remote_call(Peer, ar_data_sync, are_data_roots_synced,\n\t\t\t\t[Start, End, B#block.tx_root])\n\t\tend,\n\t\t500,\n\t\t120_000),\n\tok.\n\n%% Mine one block on peer1 with a single small v2 data tx. TXData matches generate_random_txs/1 entries.\nmine_block_with_small_fixed_data_tx(Peer, Wallet) ->\n\tChunks = [<< 0:(4096 * 8) >>],\n\t{DataRoot, _DataTree} = ar_merkle:generate_tree(\n\t\tar_tx:sized_chunks_to_sized_chunk_ids(\n\t\t\tar_tx:chunks_to_size_tagged_chunks(Chunks)\n\t\t)\n\t),\n\t{TX, _} = ar_test_data_sync:tx(\n\t\t#{ wallet => Wallet,\n\t\t\tsplit_type => {fixed_data, DataRoot, Chunks},\n\t\t\tformat => v2,\n\t\t\treward => ?AR(10_000_000_000),\n\t\t\ttx_anchor_peer => Peer,\n\t\t\tget_fee_peer => Peer }),\n\tB = ar_test_node:post_and_mine(#{ miner => Peer, await_on => Peer }, [TX]),\n\t{B, [{TX, Chunks}]}.\n\ngenerate_random_txs(Wallet) ->\n\tCoin = rand:uniform(12),\n\tcase Coin of\n\t\tVal when Val =< 3 ->\n\t\t\t%% Add three data txs with different data roots.\n\t\t\t[ar_test_data_sync:tx(\n\t\t\t\t#{ wallet => Wallet,\n\t\t\t\t\tsplit_type => original_split,\n\t\t\t\t\tformat => v2,\n\t\t\t\t\treward => ?AR(10_000_000_000),\n\t\t\t\t\ttx_anchor_peer => peer1,\n\t\t\t\t\tget_fee_peer => peer1 }),\n\t\t\t\tar_test_data_sync:tx(\n\t\t\t\t\t#{ wallet => Wallet,\n\t\t\t\t\t\tsplit_type => original_split,\n\t\t\t\t\t\tformat => v2,\n\t\t\t\t\t\treward => ?AR(10_000_000_000),\n\t\t\t\t\t\ttx_anchor_peer => peer1,\n\t\t\t\t\t\tget_fee_peer => peer1 }),\n\t\t\t\tar_test_data_sync:tx(\n\t\t\t\t\t#{ wallet => Wallet,\n\t\t\t\t\t\tsplit_type => original_split,\n\t\t\t\t\t\tformat => v2,\n\t\t\t\t\t\treward => ?AR(10_000_000_000),\n\t\t\t\t\t\ttx_anchor_peer => peer1,\n\t\t\t\t\t\tget_fee_peer => peer1 })\n\t\t\t\t\t| generate_random_txs(Wallet)];\n\t\tVal when Val =< 6 ->\n\t\t\t%% A bit smaller than 256 KiB to provoke padding.\n\t\t\tChunks = [<< 0:(262140 * 8) >>],\n\t\t\t{DataRoot, _DataTree} = ar_merkle:generate_tree(\n\t\t\t\tar_tx:sized_chunks_to_sized_chunk_ids(\n\t\t\t\t\tar_tx:chunks_to_size_tagged_chunks(Chunks)\n\t\t\t\t)\n\t\t\t),\n\t\t\t%% Two transactions with the same data root.\n\t\t\t[ar_test_data_sync:tx(\n\t\t\t\t#{ wallet => Wallet,\n\t\t\t\t\tsplit_type => {fixed_data, DataRoot, Chunks},\n\t\t\t\t\tformat => v2,\n\t\t\t\t\treward => ?AR(10_000_000_000),\n\t\t\t\t\ttx_anchor_peer => 
peer1,\n\t\t\t\t\tget_fee_peer => peer1 }),\n\t\t\tar_test_data_sync:tx(\n\t\t\t\t#{ wallet => Wallet,\n\t\t\t\t\tsplit_type => {fixed_data, DataRoot, Chunks},\n\t\t\t\t\tformat => v2,\n\t\t\t\t\treward => ?AR(10_000_000_000),\n\t\t\t\t\ttx_anchor_peer => peer1,\n\t\t\t\t\tget_fee_peer => peer1 }) | generate_random_txs(Wallet)];\n\t\tVal when Val =< 9 ->\n\t\t\t%% Add empty tx and two transactions with different data roots.\n\t\t\t[ar_test_data_sync:tx(\n\t\t\t\t#{ wallet => Wallet,\n\t\t\t\t\tsplit_type => {fixed_data, <<>>, []},\n\t\t\t\t\tformat => v2,\n\t\t\t\t\treward => ?AR(10_000_000_000),\n\t\t\t\t\ttx_anchor_peer => peer1,\n\t\t\t\t\tget_fee_peer => peer1 }),\n\t\t\t\tar_test_data_sync:tx(\n\t\t\t\t#{ wallet => Wallet,\n\t\t\t\t\tsplit_type => original_split,\n\t\t\t\t\tformat => v2,\n\t\t\t\t\treward => ?AR(10_000_000_000),\n\t\t\t\t\ttx_anchor_peer => peer1,\n\t\t\t\t\tget_fee_peer => peer1 }),\n\t\t\t\tar_test_data_sync:tx(\n\t\t\t\t\t#{ wallet => Wallet,\n\t\t\t\t\t\tsplit_type => original_split,\n\t\t\t\t\t\tformat => v2,\n\t\t\t\t\t\treward => ?AR(10_000_000_000),\n\t\t\t\t\t\ttx_anchor_peer => peer1,\n\t\t\t\t\t\tget_fee_peer => peer1 })\n\t\t\t| generate_random_txs(Wallet)];\n\t\t_ ->\n\t\t\t[]\n\tend.\n\nblock_start(B) ->\n\tB#block.weave_size - B#block.block_size.\n\nfilter_mined_tx_data(B, TXData) ->\n\tMinedTXIDs = sets:from_list(B#block.txs),\n\t[{TX, Chunks} || {TX, Chunks} <- TXData, sets:is_element(TX#tx.id, MinedTXIDs)].\n\ndata_roots_path(BlockStart) ->\n\t\"/data_roots/\" ++ integer_to_list(BlockStart).\n\n%% GET /data_roots for B on Peer. Returns {ok, Body} | not_found.\nget_data_roots(Peer, B) ->\n\tStart = block_start(B),\n\tcase ar_http:req(#{\n\t\tmethod => get,\n\t\tpeer => ar_test_node:peer_ip(Peer),\n\t\tpath => data_roots_path(Start)\n\t}) of\n\t\t{ok, {{<<\"200\">>, _}, _, Body, _, _}} ->\n\t\t\t{ok, Body};\n\t\t{ok, {{<<\"404\">>, _}, _, _, _, _}} ->\n\t\t\tnot_found;\n\t\tOther ->\n\t\t\t?assert(false, lists:flatten(io_lib:format(\n\t\t\t\t\"GET /data_roots/~B: unexpected reply ~p\", [Start, Other])))\n\tend.\n\n%% POST /data_roots for B to Peer. Asserts 200.\npost_data_roots(Peer, B, Body) ->\n\tStart = block_start(B),\n\tcase ar_http:req(#{\n\t\tmethod => post,\n\t\tpeer => ar_test_node:peer_ip(Peer),\n\t\tpath => data_roots_path(Start),\n\t\tbody => Body\n\t}) of\n\t\t{ok, {{<<\"200\">>, _}, _, <<>>, _, _}} ->\n\t\t\tok;\n\t\tOther ->\n\t\t\t?assert(false, lists:flatten(io_lib:format(\n\t\t\t\t\"POST /data_roots/~B: expected 200, got ~p\", [Start, Other])))\n\tend.\n\n%% POST /chunk with a proof map. Asserts 200.\npost_chunk(Node, Proof) ->\n\tcase ar_test_node:post_chunk(Node, ar_serialize:jsonify(Proof)) of\n\t\t{ok, {{<<\"200\">>, _}, _, <<>>, _, _}} ->\n\t\t\tok;\n\t\tOther ->\n\t\t\t?assert(false, lists:flatten(io_lib:format(\n\t\t\t\t\"POST /chunk: expected 200, got ~p\", [Other])))\n\tend.\n\n%% Poll until AbsoluteEndOffset appears in the sync record for the given node.\nwait_for_sync_record_update(Node, AbsoluteEndOffset) ->\n\ttrue = ar_util:do_until(\n\t\tfun() ->\n\t\t\tcase ar_test_node:remote_call(Node, ar_sync_record, is_recorded,\n\t\t\t\t\t[AbsoluteEndOffset, ar_data_sync]) of\n\t\t\t\t{{true, _}, _} -> true;\n\t\t\t\t_ -> false\n\t\t\tend\n\t\tend,\n\t\t500,\n\t\t60_000).\n\n%% GET /chunk at the given offset with any packing. 
Returns ok | not_found.\nget_chunk(Node, GlobalEndOffset) ->\n\tcase ar_test_node:get_chunk(Node, GlobalEndOffset, any) of\n\t\t{ok, {{<<\"200\">>, _}, _, _, _, _}} ->\n\t\t\tok;\n\t\t{ok, {{<<\"404\">>, _}, _, _, _, _}} ->\n\t\t\tnot_found;\n\t\tOther ->\n\t\t\t?assert(false, lists:flatten(io_lib:format(\n\t\t\t\t\"GET /chunk/~B: unexpected reply ~p\", [GlobalEndOffset, Other])))\n\tend.\n\n%% POST /chunk for each proof from get_records_with_proofs/3, then poll GET /chunk until\n%% queryable (disk pool → sync_record promotion is asynchronous).\npost_then_get_chunks(Node, B, TX, Chunks) ->\n\tRecords = ar_test_data_sync:get_records_with_proofs(B, TX, Chunks),\n\tlists:foreach(\n\t\tfun({_, _, _, {GlobalChunkEndOffset, Proof}}) ->\n\t\t\tpost_chunk(Node, Proof),\n\t\t\tok = wait_for_chunk_to_persist(Node, GlobalChunkEndOffset)\n\t\tend,\n\t\tRecords\n\t).\n\n%% Poll GET /chunk until HTTP 200 (disk-pool → sync_record promotion is asynchronous).\nwait_for_chunk_to_persist(Node, GlobalEndOffset) ->\n\twait_for_chunk_to_persist(Node, GlobalEndOffset, 15_000).\n\nwait_for_chunk_to_persist(Node, GlobalEndOffset, TimeoutMs) ->\n\tcase ar_util:do_until(\n\t\tfun() ->\n\t\t\tcase get_chunk(Node, GlobalEndOffset) of\n\t\t\t\tok -> true;\n\t\t\t\tnot_found -> false\n\t\t\tend\n\t\tend,\n\t\t100,\n\t\tTimeoutMs\n\t) of\n\t\ttrue ->\n\t\t\tok;\n\t\t{error, timeout} ->\n\t\t\t?assert(false, \n\t\t\tlists:flatten(io_lib:format(\"Timeout waiting for chunk to persist: ~p\",\n\t\t\t\t[GlobalEndOffset])))\n\tend.\n\nwait_for_data_roots(Peer, B) ->\n\tStart = block_start(B),\n\tEnd = B#block.weave_size,\n\tHeight = B#block.height,\n\ttrue = ar_util:do_until(\n\t\tfun() ->\n\t\t\tcase get_data_roots(Peer, B) of\n\t\t\t\t{ok, Body} ->\n\t\t\t\t\tcase ar_serialize:binary_to_data_roots(Body) of\n\t\t\t\t\t\t{ok, {_TXRoot, BlockSize, _Entries}}\n\t\t\t\t\t\t\t\twhen Start + BlockSize == End ->\n\t\t\t\t\t\t\ttrue;\n\t\t\t\t\t\t{ok, {_TXRoot, BlockSize2, _Entries}} ->\n\t\t\t\t\t\t\t?debugFmt(\"Unexpected block size: ~B, expected: ~B, height: ~B\",\n\t\t\t\t\t\t\t\t[BlockSize2, End - Start, Height]),\n\t\t\t\t\t\t\t?assert(false);\n\t\t\t\t\t\t{error, Error} ->\n\t\t\t\t\t\t\t?debugFmt(\"Unexpected error: ~p, height: ~B\", [Error, Height]),\n\t\t\t\t\t\t\t?assert(false)\n\t\t\t\t\tend;\n\t\t\t\tnot_found ->\n\t\t\t\t\tfalse\n\t\t\tend\n\t\tend,\n\t\t200,\n\t\t120_000\n\t),\n\tok.\n\nassert_no_data_roots(Peer, B) ->\n\tcase get_data_roots(Peer, B) of\n\t\tnot_found ->\n\t\t\tok;\n\t\t{ok, Body} ->\n\t\t\t?debugFmt(\"Expected 404 but got body: ~p ~p\",\n\t\t\t\t[Body, ar_serialize:binary_to_data_roots(Body)]),\n\t\t\t?assert(false)\n\tend.\n"
  },
  {
    "path": "apps/arweave/test/ar_data_sync_disk_pool_rotation_test.erl",
    "content": "-module(ar_data_sync_disk_pool_rotation_test).\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-include(\"ar.hrl\").\n-include(\"ar_consensus.hrl\").\n\n-import(ar_test_node, [assert_wait_until_height/2]).\n\ndisk_pool_rotation_test_() ->\n\t{timeout, 120, fun test_disk_pool_rotation/0}.\n\ntest_disk_pool_rotation() ->\n\t?LOG_DEBUG([{event, test_disk_pool_rotation_start}]),\n\tAddr = ar_wallet:to_address(ar_wallet:new_keyfile()),\n\t%% Will store the three genesis chunks.\n\t%% The third one falls inside the \"overlap\" (see ar_storage_module.erl)\n\tStorageModules = [{2 * ?DATA_CHUNK_SIZE, 0,\n\t\t\tar_test_node:get_default_storage_module_packing(Addr, 0)}],\n\tWallet = ar_test_data_sync:setup_nodes(\n\t\t\t#{ addr => Addr, storage_modules => StorageModules }),\n\tChunks = [crypto:strong_rand_bytes(?DATA_CHUNK_SIZE)],\n\t{DataRoot, DataTree} = ar_merkle:generate_tree(\n\t\tar_tx:sized_chunks_to_sized_chunk_ids(\n\t\t\tar_tx:chunks_to_size_tagged_chunks(Chunks)\n\t\t)\n\t),\n\t{TX, Chunks} = ar_test_data_sync:tx(Wallet, {fixed_data, DataRoot, Chunks}),\n\tar_test_node:assert_post_tx_to_peer(main, TX),\n\tOffset = ?DATA_CHUNK_SIZE,\n\tDataSize = ?DATA_CHUNK_SIZE,\n\tDataPath = ar_merkle:generate_path(DataRoot, Offset, DataTree),\n\tProof = #{ data_root => ar_util:encode(DataRoot),\n\t\t\tdata_path => ar_util:encode(DataPath),\n\t\t\tchunk => ar_util:encode(hd(Chunks)),\n\t\t\toffset => integer_to_binary(Offset),\n\t\t\tdata_size => integer_to_binary(DataSize) },\n\t?assertMatch({ok, {{<<\"200\">>, _}, _, _, _, _}},\n\t\t\tar_test_node:post_chunk(main, ar_serialize:jsonify(Proof))),\n\tar_test_node:mine(main),\n\tassert_wait_until_height(main, 1),\n\ttimer:sleep(2_000),\n\tOptions = #{ format => etf, random_subset => false },\n\t{ok, Binary1} = ar_global_sync_record:get_serialized_sync_record(Options),\n\t{ok, Global1} = ar_intervals:safe_from_etf(Binary1),\n\t%% 3 genesis chunks are packed with the replica 2.9 format and therefore stored\n\t%% in the footprint record and not here.\n\t?assertEqual([{1048576, 786432}], ar_intervals:to_list(Global1)),\n\tar_test_node:mine(main),\n\tassert_wait_until_height(main, 2),\n\t{ok, Binary2} = ar_global_sync_record:get_serialized_sync_record(Options),\n\t{ok, Global2} = ar_intervals:safe_from_etf(Binary2),\n\t?assertEqual([{1048576, 786432}], ar_intervals:to_list(Global2)),\n\tar_test_node:mine(main),\n\tassert_wait_until_height(main, 3),\n\tar_test_node:mine(main),\n\tassert_wait_until_height(main, 4),\n\t%% The new chunk has been confirmed but there is not storage module to take it.\n\t?assertEqual(3, ?SEARCH_SPACE_UPPER_BOUND_DEPTH),\n\ttrue = ar_util:do_until(\n\t\tfun() ->\n\t\t\t{ok, Binary3} = ar_global_sync_record:get_serialized_sync_record(Options),\n\t\t\t{ok, Global3} = ar_intervals:safe_from_etf(Binary3),\n\t\t\t[] == ar_intervals:to_list(Global3)\n\t\tend,\n\t\t200,\n\t\t5000\n\t). "
  },
  {
    "path": "apps/arweave/test/ar_data_sync_enqueue_intervals_test.erl",
    "content": "-module(ar_data_sync_enqueue_intervals_test).\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-include(\"ar.hrl\").\n\nenqueue_intervals_test() ->\n\ttest_enqueue_intervals([], 2, [], [], [], \"Empty Intervals\"),\n\tPeer1 = {1, 2, 3, 4, 1984},\n\tPeer2 = {101, 102, 103, 104, 1984},\n\tPeer3 = {201, 202, 203, 204, 1984},\n\n\ttest_enqueue_intervals(\n\t\t[\n\t\t\t{Peer1, ar_intervals:from_list([\n\t\t\t\t\t{4*?DATA_CHUNK_SIZE, 2*?DATA_CHUNK_SIZE},\n\t\t\t\t\t{9*?DATA_CHUNK_SIZE, 6*?DATA_CHUNK_SIZE}\n\t\t\t\t]), none}\n\t\t],\n\t\t5,\n\t\t[{20*?DATA_CHUNK_SIZE, 10*?DATA_CHUNK_SIZE}],\n\t\t[\n\t\t\t{4*?DATA_CHUNK_SIZE, 2*?DATA_CHUNK_SIZE},\n\t\t\t{9*?DATA_CHUNK_SIZE, 6*?DATA_CHUNK_SIZE}\n\t\t],\n\t\t[\n\t\t\t{none, 2*?DATA_CHUNK_SIZE, 3*?DATA_CHUNK_SIZE, Peer1},\n\t\t\t{none, 3*?DATA_CHUNK_SIZE, 4*?DATA_CHUNK_SIZE, Peer1},\n\t\t\t{none, 6*?DATA_CHUNK_SIZE, 7*?DATA_CHUNK_SIZE, Peer1},\n\t\t\t{none, 7*?DATA_CHUNK_SIZE, 8*?DATA_CHUNK_SIZE, Peer1},\n\t\t\t{none, 8*?DATA_CHUNK_SIZE, 9*?DATA_CHUNK_SIZE, Peer1}\n\t\t],\n\t\t\"Single peer, full intervals, all chunks. Non-overlapping QIntervals.\"),\n\n\ttest_enqueue_intervals(\n\t\t[\n\t\t\t{Peer1, ar_intervals:from_list([\n\t\t\t\t\t{4*?DATA_CHUNK_SIZE, 2*?DATA_CHUNK_SIZE},\n\t\t\t\t\t{9*?DATA_CHUNK_SIZE, 6*?DATA_CHUNK_SIZE}\n\t\t\t\t]), none}\n\t\t],\n\t\t2,\n\t\t[{20*?DATA_CHUNK_SIZE, 10*?DATA_CHUNK_SIZE}],\n\t\t[\n\t\t\t{4*?DATA_CHUNK_SIZE, 2*?DATA_CHUNK_SIZE}\n\t\t],\n\t\t[\n\t\t\t{none, 2*?DATA_CHUNK_SIZE, 3*?DATA_CHUNK_SIZE, Peer1},\n\t\t\t{none, 3*?DATA_CHUNK_SIZE, 4*?DATA_CHUNK_SIZE, Peer1}\n\t\t],\n\t\t\"Single peer, full intervals, 2 chunks. Non-overlapping QIntervals.\"),\n\n\ttest_enqueue_intervals(\n\t\t[\n\t\t\t{Peer1, ar_intervals:from_list([\n\t\t\t\t{4*?DATA_CHUNK_SIZE, 2*?DATA_CHUNK_SIZE},\n\t\t\t\t{9*?DATA_CHUNK_SIZE, 6*?DATA_CHUNK_SIZE}\n\t\t\t]), none},\n\t\t\t{Peer2, ar_intervals:from_list([\n\t\t\t\t{4*?DATA_CHUNK_SIZE, 2*?DATA_CHUNK_SIZE},\n\t\t\t\t{7*?DATA_CHUNK_SIZE, 5*?DATA_CHUNK_SIZE}\n\t\t\t]), none},\n\t\t\t{Peer3, ar_intervals:from_list([\n\t\t\t\t{8*?DATA_CHUNK_SIZE, 7*?DATA_CHUNK_SIZE}\n\t\t\t]), none}\n\t\t],\n\t\t2,\n\t\t[{20*?DATA_CHUNK_SIZE, 10*?DATA_CHUNK_SIZE}],\n\t\t[\n\t\t\t{4*?DATA_CHUNK_SIZE, 2*?DATA_CHUNK_SIZE},\n\t\t\t{8*?DATA_CHUNK_SIZE, 5*?DATA_CHUNK_SIZE}\n\t\t],\n\t\t[\n\t\t\t{none, 2*?DATA_CHUNK_SIZE, 3*?DATA_CHUNK_SIZE, Peer1},\n\t\t\t{none, 3*?DATA_CHUNK_SIZE, 4*?DATA_CHUNK_SIZE, Peer1},\n\t\t\t{none, 5*?DATA_CHUNK_SIZE, 6*?DATA_CHUNK_SIZE, Peer2},\n\t\t\t{none, 6*?DATA_CHUNK_SIZE, 7*?DATA_CHUNK_SIZE, Peer2},\n\t\t\t{none, 7*?DATA_CHUNK_SIZE, 8*?DATA_CHUNK_SIZE, Peer3}\n\t\t],\n\t\t\"Multiple peers, overlapping, full intervals, 2 chunks. 
Non-overlapping QIntervals.\"),\n\n\ttest_enqueue_intervals(\n\t\t[\n\t\t\t{Peer1, ar_intervals:from_list([\n\t\t\t\t{4*?DATA_CHUNK_SIZE, 2*?DATA_CHUNK_SIZE},\n\t\t\t\t{9*?DATA_CHUNK_SIZE, 6*?DATA_CHUNK_SIZE}\n\t\t\t]), none},\n\t\t\t{Peer2, ar_intervals:from_list([\n\t\t\t\t{4*?DATA_CHUNK_SIZE, 2*?DATA_CHUNK_SIZE},\n\t\t\t\t{7*?DATA_CHUNK_SIZE, 5*?DATA_CHUNK_SIZE}\n\t\t\t]), none},\n\t\t\t{Peer3, ar_intervals:from_list([\n\t\t\t\t{8*?DATA_CHUNK_SIZE, 7*?DATA_CHUNK_SIZE}\n\t\t\t]), none}\n\t\t],\n\t\t3,\n\t\t[{20*?DATA_CHUNK_SIZE, 10*?DATA_CHUNK_SIZE}],\n\t\t[\n\t\t\t{4*?DATA_CHUNK_SIZE, 2*?DATA_CHUNK_SIZE},\n\t\t\t{8*?DATA_CHUNK_SIZE, 5*?DATA_CHUNK_SIZE}\n\t\t],\n\t\t[\n\t\t\t{none, 2*?DATA_CHUNK_SIZE, 3*?DATA_CHUNK_SIZE, Peer1},\n\t\t\t{none, 3*?DATA_CHUNK_SIZE, 4*?DATA_CHUNK_SIZE, Peer1},\n\t\t\t{none, 5*?DATA_CHUNK_SIZE, 6*?DATA_CHUNK_SIZE, Peer2},\n\t\t\t{none, 6*?DATA_CHUNK_SIZE, 7*?DATA_CHUNK_SIZE, Peer1},\n\t\t\t{none, 7*?DATA_CHUNK_SIZE, 8*?DATA_CHUNK_SIZE, Peer3}\n\t\t],\n\t\t\"Multiple peers, overlapping, full intervals, 3 chunks. Non-overlapping QIntervals.\"),\n\n\ttest_enqueue_intervals(\n\t\t[\n\t\t\t{Peer1, ar_intervals:from_list([\n\t\t\t\t\t{4*?DATA_CHUNK_SIZE, 2*?DATA_CHUNK_SIZE},\n\t\t\t\t\t{9*?DATA_CHUNK_SIZE, 6*?DATA_CHUNK_SIZE}\n\t\t\t]), none}\n\t\t],\n\t\t5,\n\t\t[{20*?DATA_CHUNK_SIZE, 10*?DATA_CHUNK_SIZE}, {9*?DATA_CHUNK_SIZE, 7*?DATA_CHUNK_SIZE}],\n\t\t[\n\t\t\t{4*?DATA_CHUNK_SIZE, 2*?DATA_CHUNK_SIZE},\n\t\t\t{7*?DATA_CHUNK_SIZE, 6*?DATA_CHUNK_SIZE}\n\t\t],\n\t\t[\n\t\t\t{none, 2*?DATA_CHUNK_SIZE, 3*?DATA_CHUNK_SIZE, Peer1},\n\t\t\t{none, 3*?DATA_CHUNK_SIZE, 4*?DATA_CHUNK_SIZE, Peer1},\n\t\t\t{none, 6*?DATA_CHUNK_SIZE, 7*?DATA_CHUNK_SIZE, Peer1}\n\t\t],\n\t\t\"Single peer, full intervals, all chunks. Overlapping QIntervals.\"),\n\n\ttest_enqueue_intervals(\n\t\t[\n\t\t\t{Peer1, ar_intervals:from_list([\n\t\t\t\t{4*?DATA_CHUNK_SIZE, 2*?DATA_CHUNK_SIZE},\n\t\t\t\t{9*?DATA_CHUNK_SIZE, 6*?DATA_CHUNK_SIZE}\n\t\t\t]), none},\n\t\t\t{Peer2, ar_intervals:from_list([\n\t\t\t\t{4*?DATA_CHUNK_SIZE, 2*?DATA_CHUNK_SIZE},\n\t\t\t\t{7*?DATA_CHUNK_SIZE, 5*?DATA_CHUNK_SIZE}\n\t\t\t]), none},\n\t\t\t{Peer3, ar_intervals:from_list([\n\t\t\t\t{8*?DATA_CHUNK_SIZE, 7*?DATA_CHUNK_SIZE}\n\t\t\t]), none}\n\t\t],\n\t\t2,\n\t\t[{20*?DATA_CHUNK_SIZE, 10*?DATA_CHUNK_SIZE}, {9*?DATA_CHUNK_SIZE, 7*?DATA_CHUNK_SIZE}],\n\t\t[\n\t\t\t{4*?DATA_CHUNK_SIZE, 2*?DATA_CHUNK_SIZE},\n\t\t\t{7*?DATA_CHUNK_SIZE, 5*?DATA_CHUNK_SIZE}\n\t\t],\n\t\t[\n\t\t\t{none, 2*?DATA_CHUNK_SIZE, 3*?DATA_CHUNK_SIZE, Peer1},\n\t\t\t{none, 3*?DATA_CHUNK_SIZE, 4*?DATA_CHUNK_SIZE, Peer1},\n\t\t\t{none, 5*?DATA_CHUNK_SIZE, 6*?DATA_CHUNK_SIZE, Peer2},\n\t\t\t{none, 6*?DATA_CHUNK_SIZE, 7*?DATA_CHUNK_SIZE, Peer2}\n\t\t],\n\t\t\"Multiple peers, overlapping, full intervals, 2 chunks. Overlapping QIntervals.\"),\n\n\ttest_enqueue_intervals(\n\t\t[\n\t\t\t{Peer1, ar_intervals:from_list([\n\t\t\t\t{trunc(3.25*?DATA_CHUNK_SIZE), 2*?DATA_CHUNK_SIZE},\n\t\t\t\t{9*?DATA_CHUNK_SIZE, trunc(5.75*?DATA_CHUNK_SIZE)}\n\t\t\t]), none}\n\t\t],\n\t\t2,\n\t\t[\n\t\t\t{20*?DATA_CHUNK_SIZE, 10*?DATA_CHUNK_SIZE},\n\t\t\t{trunc(8.5*?DATA_CHUNK_SIZE), trunc(6.5*?DATA_CHUNK_SIZE)}\n\t\t],\n\t\t[\n\t\t\t{trunc(3.25*?DATA_CHUNK_SIZE), 2*?DATA_CHUNK_SIZE}\n\t\t],\n\t\t[\n\t\t\t{none, 2*?DATA_CHUNK_SIZE, 3*?DATA_CHUNK_SIZE, Peer1},\n\t\t\t{none, 3*?DATA_CHUNK_SIZE, trunc(3.25*?DATA_CHUNK_SIZE), Peer1}\n\t\t],\n\t\t\"Single peer, partial intervals, 2 chunks. 
Overlapping partial QIntervals.\"),\n\n\ttest_enqueue_intervals(\n\t\t[\n\t\t\t{Peer1, ar_intervals:from_list([\n\t\t\t\t{trunc(3.25*?DATA_CHUNK_SIZE), 2*?DATA_CHUNK_SIZE},\n\t\t\t\t{9*?DATA_CHUNK_SIZE, trunc(5.75*?DATA_CHUNK_SIZE)}\n\t\t\t]), none},\n\t\t\t{Peer2, ar_intervals:from_list([\n\t\t\t\t{4*?DATA_CHUNK_SIZE, 2*?DATA_CHUNK_SIZE},\n\t\t\t\t{7*?DATA_CHUNK_SIZE, 6*?DATA_CHUNK_SIZE}\n\t\t\t]), none},\n\t\t\t{Peer3, ar_intervals:from_list([\n\t\t\t\t{8*?DATA_CHUNK_SIZE, 7*?DATA_CHUNK_SIZE}\n\t\t\t]), none}\n\t\t],\n\t\t2,\n\t\t[\n\t\t\t{20*?DATA_CHUNK_SIZE, 10*?DATA_CHUNK_SIZE},\n\t\t\t{trunc(8.5*?DATA_CHUNK_SIZE), trunc(6.5*?DATA_CHUNK_SIZE)}\n\t\t],\n\t\t[\n\t\t\t{4*?DATA_CHUNK_SIZE, 2*?DATA_CHUNK_SIZE},\n\t\t\t{8*?DATA_CHUNK_SIZE, 6*?DATA_CHUNK_SIZE}\n\t\t],\n\t\t[\n\t\t\t{none, 2*?DATA_CHUNK_SIZE, 3*?DATA_CHUNK_SIZE, Peer1},\n\t\t\t{none, 3*?DATA_CHUNK_SIZE, trunc(3.25*?DATA_CHUNK_SIZE), Peer1},\n\t\t\t{none, trunc(3.25*?DATA_CHUNK_SIZE), 4*?DATA_CHUNK_SIZE, Peer2},\n\t\t\t{none, 6*?DATA_CHUNK_SIZE, trunc(6.5*?DATA_CHUNK_SIZE), Peer2}\n\t\t],\n\t\t\"Multiple peers, overlapping, full intervals, 2 chunks. Overlapping QIntervals.\").\n\ntest_enqueue_intervals(Intervals, ChunksPerPeer, QIntervalsRanges, ExpectedQIntervalRanges, ExpectedChunks, Label) ->\n\tQIntervals = ar_intervals:from_list(QIntervalsRanges),\n\tQ = gb_sets:new(),\n\t{QResult, QIntervalsResult} = ar_data_sync:enqueue_intervals(Intervals, ChunksPerPeer, {Q, QIntervals}),\n\tExpectedQIntervals = lists:foldl(fun({End, Start}, Acc) ->\n\t\t\tar_intervals:add(Acc, End, Start)\n\t\tend, QIntervals, ExpectedQIntervalRanges),\n\t?assertEqual(ar_intervals:to_list(ExpectedQIntervals), ar_intervals:to_list(QIntervalsResult), Label),\n\t?assertEqual(ExpectedChunks, gb_sets:to_list(QResult), Label). "
  },
  {
    "path": "apps/arweave/test/ar_data_sync_mines_off_only_last_chunks_test.erl",
    "content": "-module(ar_data_sync_mines_off_only_last_chunks_test).\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-include(\"ar.hrl\").\n-include(\"ar_consensus.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-import(ar_test_node, [test_with_mocked_functions/2]).\n\nmines_off_only_last_chunks_test_() ->\n\ttest_with_mocked_functions([{ar_fork, height_2_6, fun() -> 0 end}, mock_reset_frequency()],\n\t\t\tfun test_mines_off_only_last_chunks/0).\n\nmock_reset_frequency() ->\n\t{ar_nonce_limiter, get_reset_frequency, fun() -> 5 end}.\n\ntest_mines_off_only_last_chunks() ->\n\t?LOG_DEBUG([{event, test_mines_off_only_last_chunks_start}]),\n\tWallet = ar_test_data_sync:setup_nodes(),\n\t%% Submit only the last chunks (smaller than 256 KiB) of transactions.\n\t%% Assert the nodes construct correct proofs of access from them.\n\tlists:foreach(\n\t\tfun(Height) ->\n\t\t\tRandomID = crypto:strong_rand_bytes(32),\n\t\t\tChunk = crypto:strong_rand_bytes(1023),\n\t\t\tChunkID = ar_tx:generate_chunk_id(Chunk),\n\t\t\tDataSize = ?DATA_CHUNK_SIZE + 1023,\n\t\t\t{DataRoot, DataTree} = ar_merkle:generate_tree([{RandomID, ?DATA_CHUNK_SIZE},\n\t\t\t\t\t{ChunkID, DataSize}]),\n\t\t\tTX = ar_test_node:sign_tx(Wallet, #{ last_tx => ar_test_node:get_tx_anchor(main), data_size => DataSize,\n\t\t\t\t\tdata_root => DataRoot }),\n\t\t\tar_test_node:post_and_mine(#{ miner => main, await_on => peer1 }, [TX]),\n\t\t\tOffset = ?DATA_CHUNK_SIZE + 1,\n\t\t\tDataPath = ar_merkle:generate_path(DataRoot, Offset, DataTree),\n\t\t\tProof = #{ data_root => ar_util:encode(DataRoot),\n\t\t\t\t\tdata_path => ar_util:encode(DataPath), chunk => ar_util:encode(Chunk),\n\t\t\t\t\toffset => integer_to_binary(Offset),\n\t\t\t\t\tdata_size => integer_to_binary(DataSize) },\n\t\t\t?assertMatch({ok, {{<<\"200\">>, _}, _, _, _, _}},\n\t\t\t\t\tar_test_node:post_chunk(main, ar_serialize:jsonify(Proof))),\n\t\t\tcase Height - ?SEARCH_SPACE_UPPER_BOUND_DEPTH of\n\t\t\t\t-1 ->\n\t\t\t\t\t%% Make sure we waited enough to have the next block use\n\t\t\t\t\t%% the new entropy reset source.\n\t\t\t\t\t[{_, Info}] = ets:lookup(node_state, nonce_limiter_info),\n\t\t\t\t\tPrevStepNumber = Info#nonce_limiter_info.global_step_number,\n\t\t\t\t\ttrue = ar_util:do_until(\n\t\t\t\t\t\tfun() ->\n\t\t\t\t\t\t\tar_nonce_limiter:get_current_step_number()\n\t\t\t\t\t\t\t\t\t> PrevStepNumber + ar_nonce_limiter:get_reset_frequency()\n\t\t\t\t\t\tend,\n\t\t\t\t\t\t100,\n\t\t\t\t\t\t60000\n\t\t\t\t\t);\n\t\t\t\t0 ->\n\t\t\t\t\t%% Wait until the new chunks fall below the new upper bound and\n\t\t\t\t\t%% remove the original big chunks. The protocol will increase the upper\n\t\t\t\t\t%% bound based on the nonce limiter entropy reset, but ar_data_sync waits\n\t\t\t\t\t%% for ?SEARCH_SPACE_UPPER_BOUND_DEPTH confirmations before packing the\n\t\t\t\t\t%% chunks.\n\t\t\t\t\t{ok, Config} = arweave_config:get_env(),\n\t\t\t\t\tlists:foreach(\n\t\t\t\t\t\tfun(O) ->\n\t\t\t\t\t\t\t[ar_chunk_storage:delete(O, ar_storage_module:id(Module))\n\t\t\t\t\t\t\t\t\t|| Module <- Config#config.storage_modules]\n\t\t\t\t\t\tend,\n\t\t\t\t\t\tlists:seq(?DATA_CHUNK_SIZE, ar_block:strict_data_split_threshold(),\n\t\t\t\t\t\t\t\t?DATA_CHUNK_SIZE)\n\t\t\t\t\t);\n\t\t\t\t_ ->\n\t\t\t\t\tok\n\t\t\tend\n\t\tend,\n\t\tlists:seq(1, 6)\n\t). \n"
  },
  {
    "path": "apps/arweave/test/ar_data_sync_mines_off_only_second_last_chunks_test.erl",
    "content": "-module(ar_data_sync_mines_off_only_second_last_chunks_test).\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-include(\"ar.hrl\").\n-include(\"ar_consensus.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-import(ar_test_node, [test_with_mocked_functions/2]).\n\nmines_off_only_second_last_chunks_test_() ->\n\ttest_with_mocked_functions([{ar_fork, height_2_6, fun() -> 0 end}, mock_reset_frequency()],\n\t\t\tfun test_mines_off_only_second_last_chunks/0).\n\nmock_reset_frequency() ->\n\t{ar_nonce_limiter, get_reset_frequency, fun() -> 5 end}.\n\ntest_mines_off_only_second_last_chunks() ->\n\t?LOG_DEBUG([{event, test_mines_off_only_second_last_chunks_start}]),\n\tWallet = ar_test_data_sync:setup_nodes(),\n\t%% Submit only the second last chunks (smaller than 256 KiB) of transactions.\n\t%% Assert the nodes construct correct proofs of access from them.\n\tlists:foreach(\n\t\tfun(Height) ->\n\t\t\tRandomID = crypto:strong_rand_bytes(32),\n\t\t\tChunk = crypto:strong_rand_bytes(?DATA_CHUNK_SIZE div 2),\n\t\t\tChunkID = ar_tx:generate_chunk_id(Chunk),\n\t\t\tDataSize = (?DATA_CHUNK_SIZE) div 2 + (?DATA_CHUNK_SIZE) div 2 + 3,\n\t\t\t{DataRoot, DataTree} = ar_merkle:generate_tree([{ChunkID, ?DATA_CHUNK_SIZE div 2},\n\t\t\t\t\t{RandomID, DataSize}]),\n\t\t\tTX = ar_test_node:sign_tx(Wallet, #{ last_tx => ar_test_node:get_tx_anchor(main), data_size => DataSize,\n\t\t\t\t\tdata_root => DataRoot }),\n\t\t\tar_test_node:post_and_mine(#{ miner => main, await_on => peer1 }, [TX]),\n\t\t\tOffset = 0,\n\t\t\tDataPath = ar_merkle:generate_path(DataRoot, Offset, DataTree),\n\t\t\tProof = #{ data_root => ar_util:encode(DataRoot),\n\t\t\t\t\tdata_path => ar_util:encode(DataPath), chunk => ar_util:encode(Chunk),\n\t\t\t\t\toffset => integer_to_binary(Offset),\n\t\t\t\t\tdata_size => integer_to_binary(DataSize) },\n\t\t\t?assertMatch({ok, {{<<\"200\">>, _}, _, _, _, _}},\n\t\t\t\t\tar_test_node:post_chunk(main, ar_serialize:jsonify(Proof))),\n\t\t\tcase Height - ?SEARCH_SPACE_UPPER_BOUND_DEPTH >= 0 of\n\t\t\t\ttrue ->\n\t\t\t\t\t%% Wait until the new chunks fall below the new upper bound and\n\t\t\t\t\t%% remove the original big chunks. The protocol will increase the upper\n\t\t\t\t\t%% bound based on the nonce limiter entropy reset, but ar_data_sync waits\n\t\t\t\t\t%% for ?SEARCH_SPACE_UPPER_BOUND_DEPTH confirmations before packing the\n\t\t\t\t\t%% chunks.\n\t\t\t\t\t{ok, Config} = arweave_config:get_env(),\n\t\t\t\t\tlists:foreach(\n\t\t\t\t\t\tfun(O) ->\n\t\t\t\t\t\t\t[ar_chunk_storage:delete(O, ar_storage_module:id(Module))\n\t\t\t\t\t\t\t\t\t|| Module <- Config#config.storage_modules]\n\t\t\t\t\t\tend,\n\t\t\t\t\t\tlists:seq(?DATA_CHUNK_SIZE, ar_block:strict_data_split_threshold(),\n\t\t\t\t\t\t\t\t?DATA_CHUNK_SIZE)\n\t\t\t\t\t);\n\t\t\t\t_ ->\n\t\t\t\t\tok\n\t\t\tend\n\t\tend,\n\t\tlists:seq(1, 6)\n\t). \n"
  },
  {
    "path": "apps/arweave/test/ar_data_sync_records_footprints_test.erl",
    "content": "-module(ar_data_sync_records_footprints_test).\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-include(\"ar.hrl\").\n-include(\"ar_consensus.hrl\").\n\nrecords_footprints_test_() ->\n\t{timeout, 120, fun test_records_footprints/0}.\n\ntest_records_footprints() ->\n\tWallet = ar_wallet:new_keyfile(),\n\tAddr = ar_wallet:to_address(Wallet),\n\t[B0] = ar_weave:init([{Addr, ?AR(1000), <<>>}]),\n\tar_test_node:start(#{\n\t\tb0 => B0,\n\t\taddr => Addr,\n\t\tstorage_modules => [\n\t\t\t{262144 * 3, 0, {replica_2_9, Addr}}\n\t\t]\n\t}),\n\tPeer = ar_test_node:peer_ip(main),\n\t%% The partition 1 is not configured.\n\t?assertEqual(not_found, ar_http_iface_client:get_footprints(Peer, 1, 0)),\n\t{ok, Footprint1} = ar_http_iface_client:get_footprints(Peer, 0, 0),\n\t?assertEqual(ar_intervals:from_list([{2, 0}]), Footprint1),\n\t{ok, Footprint1_1} = ar_http_iface_client:get_footprints(Peer, 0, 1),\n\t?assertEqual(ar_intervals:from_list([{5, 4}]), Footprint1_1),\n\t%% We have 2 footprints with 4 chunks in each in partition 0.\n\t?assertEqual({error, footprint_number_too_large},\n\t\t\tar_http_iface_client:get_footprints(Peer, 0, 2)),\n\t?assertEqual({error, negative_footprint_number},\n\t\t\tar_http_iface_client:get_footprints(Peer, 0, -1)),\n\t?assertEqual({error, negative_partition_number},\n\t\t\tar_http_iface_client:get_footprints(Peer, -1, 0)).\n"
  },
  {
    "path": "apps/arweave/test/ar_data_sync_recovers_from_corruption_test.erl",
    "content": "-module(ar_data_sync_recovers_from_corruption_test).\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-include(\"ar.hrl\").\n-include(\"ar_consensus.hrl\").\n\n-import(ar_test_node, [assert_wait_until_height/2]).\n\nrecovers_from_corruption_test_() ->\n\t{timeout, 300, fun test_recovers_from_corruption/0}.\n\ntest_recovers_from_corruption() ->\n\t?LOG_DEBUG([{event, test_recovers_from_corruption_start}]),\n\tar_test_data_sync:setup_nodes(),\n\tStoreID = ar_storage_module:id(hd(ar_storage_module:get_all(262144 * 3))),\n\t?debugFmt(\"Corrupting ~s...\", [StoreID]),\n\t[ar_chunk_storage:write_chunk(PaddedEndOffset, << 0:(262144*8) >>, #{}, StoreID)\n\t\t\t|| PaddedEndOffset <- lists:seq(262144, 262144 * 3, 262144)],\n\tar_test_node:mine(),\n\tar_test_node:assert_wait_until_height(main, 1). "
  },
  {
    "path": "apps/arweave/test/ar_data_sync_syncs_after_joining_test.erl",
    "content": "-module(ar_data_sync_syncs_after_joining_test).\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-include(\"ar.hrl\").\n-include(\"ar_consensus.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-import(ar_test_node, [assert_wait_until_height/2, test_with_mocked_functions/2]).\n\nsyncs_after_joining_test_() ->\n\tar_test_node:test_with_mocked_functions([{ar_fork, height_2_5, fun() -> 0 end}],\n\t\tfun test_syncs_after_joining/0, 240).\n\ntest_syncs_after_joining() ->\n\ttest_syncs_after_joining(original_split).\n\ntest_syncs_after_joining(Split) ->\n\t?LOG_DEBUG([{event, test_syncs_after_joining}, {split, Split}]),\n\tWallet = ar_test_data_sync:setup_nodes(),\n\t{TX1, Chunks1} = ar_test_data_sync:tx(Wallet, {Split, 1}, v2, ?AR(1)),\n\tB1 = ar_test_node:post_and_mine(#{ miner => main, await_on => peer1 }, [TX1]),\n\tProofs1 = ar_test_data_sync:post_proofs(main, B1, TX1, Chunks1),\n\tUpperBound = ar_node:get_partition_upper_bound(ar_node:get_block_index()),\n\tar_test_data_sync:wait_until_syncs_chunks(peer1, Proofs1, UpperBound),\n\tar_test_data_sync:wait_until_syncs_chunks(Proofs1),\n\tar_test_node:disconnect_from(peer1),\n\t{MainTX2, MainChunks2} = ar_test_data_sync:tx(Wallet, {Split, 3}, v2, ?AR(1)),\n\tMainB2 = ar_test_node:post_and_mine(#{ miner => main, await_on => main }, [MainTX2]),\n\tMainProofs2 = ar_test_data_sync:post_proofs(main, MainB2, MainTX2, MainChunks2),\n\t{MainTX3, MainChunks3} = ar_test_data_sync:tx(Wallet, {Split, 2}, v2, ?AR(1)),\n\tMainB3 = ar_test_node:post_and_mine(#{ miner => main, await_on => main }, [MainTX3]),\n\tMainProofs3 = ar_test_data_sync:post_proofs(main, MainB3, MainTX3, MainChunks3),\n\t{PeerTX2, PeerChunks2} = ar_test_data_sync:tx(Wallet, {Split, 2}, v2, ?AR(1)),\n\tPeerB2 = ar_test_node:post_and_mine( #{ miner => peer1, await_on => peer1 }, [PeerTX2] ),\n\tPeerProofs2 = ar_test_data_sync:post_proofs(peer1, PeerB2, PeerTX2, PeerChunks2),\n\tar_test_data_sync:wait_until_syncs_chunks(peer1, PeerProofs2, infinity),\n\t_Peer2 = ar_test_node:rejoin_on(#{ node => peer1, join_on => main }),\n\tassert_wait_until_height(peer1, 3),\n\tar_test_node:connect_to_peer(peer1),\n\tUpperBound2 = ar_node:get_partition_upper_bound(ar_node:get_block_index()),\n\tar_test_data_sync:wait_until_syncs_chunks(peer1, MainProofs2, UpperBound2),\n\tar_test_data_sync:wait_until_syncs_chunks(peer1, MainProofs3, UpperBound2),\n\tar_test_data_sync:wait_until_syncs_chunks(peer1, Proofs1, infinity). \n"
  },
  {
    "path": "apps/arweave/test/ar_data_sync_syncs_data_test.erl",
    "content": "-module(ar_data_sync_syncs_data_test).\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-include(\"ar.hrl\").\n-include(\"ar_consensus.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-import(ar_test_node, [assert_wait_until_height/2]).\n\nsyncs_data_test_() ->\n\t{timeout, 240, fun test_syncs_data/0}.\n\ntest_syncs_data() ->\n\t?LOG_DEBUG([{event, test_syncs_data_start}]),\n\tWallet = ar_test_data_sync:setup_nodes(),\n\tRecords = ar_test_data_sync:post_random_blocks(Wallet),\n\tRecordsWithProofs = lists:flatmap(\n\t\t\tfun({B, TX, Chunks}) -> \n\t\t\t\tar_test_data_sync:get_records_with_proofs(B, TX, Chunks) end, Records),\n\tlists:foreach(\n\t\tfun({_, _, _, {_, Proof}}) ->\n\t\t\t?assertMatch({ok, {{<<\"200\">>, _}, _, _, _, _}},\n\t\t\t\t\tar_test_node:post_chunk(main, ar_serialize:jsonify(Proof))),\n\t\t\t?assertMatch({ok, {{<<\"200\">>, _}, _, _, _, _}},\n\t\t\t\t\tar_test_node:post_chunk(main, ar_serialize:jsonify(Proof)))\n\t\tend,\n\t\tRecordsWithProofs\n\t),\n\tProofs = [Proof || {_, _, _, Proof} <- RecordsWithProofs],\n\tar_test_data_sync:wait_until_syncs_chunks(Proofs),\n\tDiskPoolThreshold = ar_node:get_partition_upper_bound(ar_node:get_block_index()),\n\tar_test_data_sync:wait_until_syncs_chunks(peer1, Proofs, DiskPoolThreshold),\n\tlists:foreach(\n\t\tfun({B, #tx{ id = TXID }, Chunks, {_, Proof}}) ->\n\t\t\tTXSize = byte_size(binary:list_to_bin(Chunks)),\n\t\t\tTXOffset = ar_merkle:extract_note(ar_util:decode(maps:get(tx_path, Proof))),\n\t\t\tAbsoluteTXOffset = B#block.weave_size - B#block.block_size + TXOffset,\n\t\t\tExpectedOffsetInfo = ar_serialize:jsonify(#{\n\t\t\t\t\toffset => integer_to_binary(AbsoluteTXOffset),\n\t\t\t\t\tsize => integer_to_binary(TXSize) }),\n\t\t\ttrue = ar_util:do_until(\n\t\t\t\tfun() ->\n\t\t\t\t\tcase ar_test_data_sync:get_tx_offset(peer1, TXID) of\n\t\t\t\t\t\t{ok, {{<<\"200\">>, _}, _, ExpectedOffsetInfo, _, _}} ->\n\t\t\t\t\t\t\ttrue;\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\tfalse\n\t\t\t\t\tend\n\t\t\t\tend,\n\t\t\t\t100,\n\t\t\t\t120 * 1000\n\t\t\t),\n\t\t\tExpectedData = ar_util:encode(binary:list_to_bin(Chunks)),\n\t\t\tar_test_node:assert_get_tx_data(main, TXID, ExpectedData),\n\t\t\tcase AbsoluteTXOffset > DiskPoolThreshold of\n\t\t\t\ttrue ->\n\t\t\t\t\tok;\n\t\t\t\tfalse ->\n\t\t\t\t\tar_test_node:assert_get_tx_data(peer1, TXID, ExpectedData)\n\t\t\tend\n\t\tend,\n\t\tRecordsWithProofs\n\t). \n"
  },
  {
    "path": "apps/arweave/test/ar_difficulty_tests.erl",
    "content": "-module(ar_difficulty_tests).\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\nnext_cumul_diff_test() ->\n\tOldCDiff = 10,\n\tNewDiff = 25,\n\tExpected = OldCDiff + erlang:trunc(math:pow(2, 256) / (math:pow(2, 256) - NewDiff)),\n\tActual = ar_difficulty:next_cumulative_diff(OldCDiff, NewDiff, 0),\n\t?assertEqual(Expected, Actual).\n"
  },
  {
    "path": "apps/arweave/test/ar_ecdsa_tests.erl",
    "content": "-module(ar_ecdsa_tests).\n\n-include(\"ar.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\nsign_ecrecover_test() ->\n\t{{_, PrivBytes, PubBytes}, _} = ar_wallet:new({?ECDSA_SIGN_ALG, secp256k1}),\n\t% Just call. It should not fail\n\tar_wallet:hash_pub_key(PubBytes),\n\tMsg = <<\"This is a test message!\">>,\n\tSigRecoverable = secp256k1_nif:sign(Msg, PrivBytes),\n\t?assertEqual(byte_size(SigRecoverable), 65),\n\n\t% recid byte\n\t<<CompactSig:64/binary, RecId:8>> = SigRecoverable,\n\t?assert(lists:member(RecId, [0, 1, 2, 3])),\n\n\t% deterministic Sig\n\tNewSig = secp256k1_nif:sign(Msg, PrivBytes),\n\t?assertEqual(NewSig, SigRecoverable),\n\n\t% Recover pk\n\t{true, RecoveredBytes} = secp256k1_nif:ecrecover(Msg, SigRecoverable),\n\tio:format(\"Prv ~p~n\", [PrivBytes]),\n\tio:format(\"Pub ~p~n\", [PubBytes]),\n\t?assertEqual(RecoveredBytes, PubBytes),\n\n\tBadRecidSig = <<CompactSig:64/binary, 4:8>>,\n\t{false, <<>>} = secp256k1_nif:ecrecover(Msg, BadRecidSig),\n\n\tBadMsg = <<\"This is a bad test message!\">>,\n\t{true, ArbitraryPubBytes1} = secp256k1_nif:ecrecover(BadMsg, SigRecoverable),\n\t?assertNotEqual(PubBytes, ArbitraryPubBytes1),\n\n\t% recover and verify returns true for arbitrary message, but non matching PK\n\t{true, ArbitraryPubBytes2} = secp256k1_nif:ecrecover(crypto:strong_rand_bytes(100), SigRecoverable),\n\t?assertNotEqual(PubBytes, ArbitraryPubBytes2),\n\n\tok.\n"
  },
  {
    "path": "apps/arweave/test/ar_forced_validation_tests.erl",
    "content": "-module(ar_forced_validation_tests).\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-import(ar_test_node, [wait_until_height/2,\n                       post_block/2,\n                       sign_block/3,\n                       send_new_block/2]).\n\navde3_post_2_8_test_() ->\n    \t{setup, fun setup_all_post_2_8/0, fun cleanup_all_post_fork/1,\n\t\t{foreach, fun reset_node/0,\n                 [\n                  instantiator(fun inject_undersized_rsa/1)\n                 ]\n                }\n        }.\n\navde3_post_2_9_height_test_() ->\n        {setup, fun setup_all_post_2_9_height/0, fun cleanup_all_post_fork/1,\n\t\t{foreach, fun reset_node/0,\n                 [\n                  instantiator(fun inject_undersized_rsa/1)\n                 ]\n                }\n        }.\n\nstart_node() ->\n\t[B0] = ar_weave:init([], 0), %% Set difficulty to 0 to speed up tests\n\tar_test_node:start(B0),\n\tar_test_node:start_peer(peer1, B0),\n\tar_test_node:connect_to_peer(peer1).\n\nreset_node() ->\n\tar_blacklist_middleware:reset(),\n\tar_test_node:remote_call(peer1, ar_blacklist_middleware, reset, []),\n\tar_test_node:connect_to_peer(peer1),\n\n\tHeight = height(peer1),\n\t[{PrevH, _, _} | _] = wait_until_height(main, Height),\n\tar_test_node:disconnect_from(peer1),\n\tar_test_node:mine(peer1),\n\t[{H, _, _} | _] = ar_test_node:assert_wait_until_height(peer1, Height + 1),\n\tB = ar_test_node:remote_call(peer1, ar_block_cache, get, [block_cache, H]),\n\tPrevB = ar_test_node:remote_call(peer1, ar_block_cache, get, [block_cache, PrevH]),\n\t{ok, Config} = ar_test_node:remote_call(peer1, arweave_config, get_env, []),\n\tKey = ar_test_node:remote_call(peer1, ar_wallet, load_key, [Config#config.mining_addr]),\n\t{Key, B, PrevB}.\n\nsetup_all_post_2_8() ->\n\t{Setup, Cleanup} = ar_test_node:mock_functions([\n\t\t{ar_fork, height_2_8, fun() -> 0 end}\n\t\t]),\n\tFunctions = Setup(),\n\tstart_node(),\n\t{Cleanup, Functions}.\n\nsetup_all_post_2_9_height() ->\n        {Setup, Cleanup} = ar_test_node:mock_functions([\n\t\t{ar_fork, height_2_9, fun() -> 0 end}\n\t\t]),\n\tFunctions = Setup(),\n\tstart_node(),\n\t{Cleanup, Functions}.\n\n\ncleanup_all_post_fork({Cleanup, Functions}) ->\n\tCleanup(Functions).\n\ninstantiator(TestFun) ->\n\tfun (Fixture) -> {timeout, 180, {with, Fixture, [TestFun]}} end.\n\ndebug_decode_block(B) ->\n    %% TEST ENCODE/DECODE - This is to help to determine where\n    %%                      encode/decode errors occur, as the HTTP API can be\n    %%                      a bit opaque.\n    BinaryB = ar_serialize:block_to_binary(B),\n    try\n\tar_serialize:binary_to_block(BinaryB),\n        ?debugFmt(\"============== DECODE PASSED ================== ~n\", [])\n    catch\n        Exception:Reason:Stacktrace ->\n            ?debugFmt(\"==>>>decode error: ~p:~p -->>>>>stack:~p~n\",\n                      [Exception, Reason, Stacktrace]),\n            {error, {Exception, Reason}}\n    end.\n\ninject_undersized_rsa({Key, BIn, PrevB}) ->\n    Victim = main,\n    Peer = ar_test_node:peer_ip(Victim),\n\n    ReqRes = ar_http:req(#{\n                           method => get,\n                           peer => Peer,\n                           path => \"/info\",\n                           headers => p2p_headers()\n                          }),\n    ?assertMatch({ok, {{<<\"200\">>, _}, _, _Body, _Start, _End}}, ReqRes),\n    TX = poc07_tx(),\n    B = block_with_undersized_rsa(Key, PrevB, BIn, TX),\n\n    
debug_decode_block(B),\n\n    post_block(B, valid),\n\n    timer:sleep(1000),\n    ReqRes2 = ar_http:req(#{\n                           method => get,\n                           peer => Peer,\n                           path => \"/info\",\n                           headers => p2p_headers()\n                          }),\n    ?assertMatch({ok, {{<<\"200\">>, _}, _, _Body, _Start, _End}}, ReqRes2),\n    ok.\n\n\nblock_with_undersized_rsa(Key, PrevB, BIn, TX) ->\n    ok = ar_events:subscribe(block),\n\n    Height = BIn#block.height,\n    BlockSize = ar_tx:get_weave_size_increase(TX, Height),\n    WeaveSize = PrevB#block.weave_size + BlockSize,\n    TxRoot = ar_block:generate_tx_root_for_block([TX], Height),\n    SizeTagged = ar_block:generate_size_tagged_list_from_txs([TX], Height),\n\n    B1 = BIn#block{\n           txs = [TX],\n           block_size = BlockSize,\n           weave_size = WeaveSize,\n           tx_root = TxRoot,\n           size_tagged_txs = SizeTagged\n          },\n\n    B2 = sign_block(B1, PrevB, Key),\n    B2.\n\npoc07_tx() ->\n    Sig = <<71>>,\n    #tx{\n        format = 1,\n        id = crypto:hash(sha256, << Sig/binary >>),\n        last_tx = <<>>,\n        owner = <<191>>,\n        owner_address = not_set,\n        tags = [],\n        target = <<>>,\n        quantity = 0,\n        data = <<>>,\n        data_size = 0,\n        data_root = <<>>,\n        signature = Sig,\n        reward = 0,\n        denomination = 0,\n        signature_type = {rsa, 65537}\n    }.\n\n\np2p_headers() ->\n    {ok, Config} = arweave_config:get_env(),\n    [{<<\"x-p2p-port\">>, integer_to_binary(Config#config.port)},\n     {<<\"x-release\">>, integer_to_binary(?RELEASE_NUMBER)}].\n\n\n%% ------------------------------------------------------------------------------------------\n%% Helper functions\n%% ------------------------------------------------------------------------------------------\n\nheight(Node) ->\n\tar_test_node:remote_call(Node, ar_node, get_height, []).\n"
  },
  {
    "path": "apps/arweave/test/ar_fork_recovery_tests.erl",
    "content": "-module(ar_fork_recovery_tests).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-import(ar_test_node, [\n\t\tassert_wait_until_height/2, wait_until_height/2, read_block_when_stored/1]).\n\nheight_plus_one_fork_recovery_test_() ->\n\t{timeout, 240, fun test_height_plus_one_fork_recovery/0}.\n\ntest_height_plus_one_fork_recovery() ->\n\t%% Mine on two nodes until they fork. Mine an extra block on one of them.\n\t%% Expect the other one to recover.\n\t{_, Pub} = ar_wallet:new(),\n\t[B0] = ar_weave:init([{ar_wallet:to_address(Pub), ?AR(20), <<>>}]),\n\tar_test_node:start(B0),\n\tar_test_node:start_peer(peer1, B0),\n\tar_test_node:disconnect_from(peer1),\n\tar_test_node:mine(peer1),\n\tassert_wait_until_height(peer1, 1),\n\tar_test_node:mine(),\n\twait_until_height(main, 1),\n\tar_test_node:mine(),\n\tMainBI = wait_until_height(main, 2),\n\tar_test_node:connect_to_peer(peer1),\n\t?assertEqual(MainBI, ar_test_node:wait_until_height(peer1, 2)),\n\tar_test_node:disconnect_from(peer1),\n\tar_test_node:mine(),\n\twait_until_height(main, 3),\n\tar_test_node:mine(peer1),\n\tassert_wait_until_height(peer1, 3),\n\tar_test_node:rejoin_on(#{ node => main, join_on => peer1 }),\n\tar_test_node:mine(peer1),\n\tPeerBI = ar_test_node:wait_until_height(peer1, 4),\n\t?assertEqual(PeerBI, wait_until_height(main, 4)).\n\nheight_plus_three_fork_recovery_test_() ->\n\t{timeout, 240, fun test_height_plus_three_fork_recovery/0}.\n\ntest_height_plus_three_fork_recovery() ->\n\t%% Mine on two nodes until they fork. Mine three extra blocks on one of them.\n\t%% Expect the other one to recover.\n\t{_, Pub} = ar_wallet:new(),\n\t[B0] = ar_weave:init([{ar_wallet:to_address(Pub), ?AR(20), <<>>}]),\n\tar_test_node:start(B0),\n\tar_test_node:start_peer(peer1, B0),\n\tar_test_node:disconnect_from(peer1),\n\tar_test_node:mine(peer1),\n\tassert_wait_until_height(peer1, 1),\n\tar_test_node:mine(),\n\twait_until_height(main, 1),\n\tar_test_node:mine(),\n\twait_until_height(main, 2),\n\tar_test_node:mine(peer1),\n\tassert_wait_until_height(peer1, 2),\n\tar_test_node:mine(),\n\twait_until_height(main, 3),\n\tar_test_node:mine(peer1),\n\tassert_wait_until_height(peer1, 3),\n\tar_test_node:connect_to_peer(peer1),\n\tar_test_node:mine(),\n\tMainBI = wait_until_height(main, 4),\n\t?assertEqual(MainBI, ar_test_node:wait_until_height(peer1, 4)).\n\nmissing_txs_fork_recovery_test_() ->\n\t{timeout, 240, fun test_missing_txs_fork_recovery/0}.\n\ntest_missing_txs_fork_recovery() ->\n\t%% Mine a block with a transaction on the peer1 node\n\t%% but do not gossip the transaction. 
The main node\n\t%% is expected fetch the missing transaction and apply the block.\n\tKey = {_, Pub} = ar_wallet:new(),\n\t[B0] = ar_weave:init([{ar_wallet:to_address(Pub), ?AR(20), <<>>}]),\n\tar_test_node:start(B0),\n\tar_test_node:start_peer(peer1, B0),\n\tar_test_node:disconnect_from(peer1),\n\tTX1 = ar_test_node:sign_tx(Key, #{}),\n\tar_test_node:assert_post_tx_to_peer(peer1, TX1),\n\t%% Wait to make sure the tx will not be gossiped upon reconnect.\n\ttimer:sleep(2000), % == 2 * ?CHECK_MEMPOOL_FREQUENCY\n\tar_test_node:rejoin_on(#{ node => main, join_on => peer1 }),\n\t?assertEqual([], ar_mempool:get_all_txids()),\n\tar_test_node:mine(peer1),\n\t[{H1, _, _} | _] = wait_until_height(main, 1),\n\t?assertEqual(1, length((read_block_when_stored(H1))#block.txs)).\n\norphaned_txs_are_remined_after_fork_recovery_test_() ->\n\t{timeout, 240, fun test_orphaned_txs_are_remined_after_fork_recovery/0}.\n\ntest_orphaned_txs_are_remined_after_fork_recovery() ->\n\t%% Mine a transaction on peer1, mine two blocks on main to\n\t%% make the transaction orphaned. Mine a block on peer1 and\n\t%% assert the transaction is re-mined.\n\tKey = {_, Pub} = ar_wallet:new(),\n\t[B0] = ar_weave:init([{ar_wallet:to_address(Pub), ?AR(20), <<>>}]),\n\tar_test_node:start(B0),\n\tar_test_node:start_peer(peer1, B0),\n\tar_test_node:disconnect_from(peer1),\n\tTX = #tx{ id = TXID } = ar_test_node:sign_tx(Key, #{ denomination => 1, reward => ?AR(1) }),\n\tar_test_node:assert_post_tx_to_peer(peer1, TX),\n\tar_test_node:mine(peer1),\n\t[{H1, _, _} | _] = ar_test_node:wait_until_height(peer1, 1),\n\tH1TXIDs = (ar_test_node:remote_call(peer1, ar_test_node, read_block_when_stored, [H1]))#block.txs,\n\t?assertEqual([TXID], H1TXIDs),\n\tar_test_node:mine(),\n\t[{H2, _, _} | _] = wait_until_height(main, 1),\n\tar_test_node:mine(),\n\t[{H3, _, _}, {H2, _, _}, {_, _, _}] = wait_until_height(main, 2),\n\tar_test_node:connect_to_peer(peer1),\n\t?assertMatch([{H3, _, _}, {H2, _, _}, {_, _, _}], ar_test_node:wait_until_height(peer1, 2)),\n\tar_test_node:mine(peer1),\n\t[{H4, _, _} | _] = ar_test_node:wait_until_height(peer1, 3),\n\tH4TXIDs = (ar_test_node:remote_call(peer1, ar_test_node, read_block_when_stored, [H4]))#block.txs,\n\t?debugFmt(\"Expecting ~s to be re-mined.~n\", [ar_util:encode(TXID)]),\n\t?assertEqual([TXID], H4TXIDs).\n\ninvalid_block_with_high_cumulative_difficulty_test_() ->\n\tar_test_node:test_with_mocked_functions([{ar_fork, height_2_6, fun() -> 0 end}],\n\t\tfun() -> test_invalid_block_with_high_cumulative_difficulty() end).\n\ntest_invalid_block_with_high_cumulative_difficulty() ->\n\t%% Submit an alternative fork with valid blocks weaker than the tip and\n\t%% an invalid block on top, much stronger than the tip. 
Make sure the node\n\t%% ignores the invalid block and continues to build on top of the valid fork.\n\tRewardKey = ar_wallet:new_keyfile(),\n\tRewardAddr = ar_wallet:to_address(RewardKey),\n\tWalletName = ar_util:encode(RewardAddr),\n\tPath = ar_wallet:wallet_filepath(WalletName),\n\tPeerPath = ar_test_node:remote_call(peer1, ar_wallet, wallet_filepath, [WalletName]),\n\t%% Copy the key because we mine blocks on both nodes using the same key in this test.\n\t{ok, _} = file:copy(Path, PeerPath),\n\t[B0] = ar_weave:init(),\n\tar_test_node:start(B0, RewardAddr),\n\tar_test_node:start_peer(peer1, B0, RewardAddr),\n\tar_test_node:disconnect_from(peer1),\n\tar_test_node:mine(peer1),\n\t[{H1, _, _} | _] = ar_test_node:wait_until_height(peer1, 1),\n\tar_test_node:mine(),\n\t[{H2, _, _} | _] = wait_until_height(main, 1),\n\tar_test_node:connect_to_peer(peer1),\n\t?assertNotEqual(H2, H1),\n\tB1 = read_block_when_stored(H2),\n\tB2 = fake_block_with_strong_cumulative_difficulty(B1, B0, 10000000000000000),\n\tB2H = B2#block.indep_hash,\n\t?debugFmt(\"Fake block: ~s.\", [ar_util:encode(B2H)]),\n\tok = ar_events:subscribe(block),\n\t?assertMatch({ok, {{<<\"200\">>, _}, _, _, _, _}},\n\t\t\tar_http_iface_client:send_block_binary(ar_test_node:peer_ip(main), B2#block.indep_hash,\n\t\t\t\t\tar_serialize:block_to_binary(B2))),\n\treceive\n\t\t{event, block, {rejected, invalid_cumulative_difficulty, B2H, _Peer2}} ->\n\t\t\tok;\n\t\t{event, block, {new, #block{ indep_hash = B2H }, _Peer3}} ->\n\t\t\t?assert(false, \"Unexpected block acceptance\");\n\t\t{event, block, Other} ->\n\t\t\t?debugFmt(\"Unexpected block event: ~p\", [Other]),\n\t\t\t?assert(false, \"Unexpected block event\")\n\tafter 60_000 ->\n\t\t?assert(false, \"Timed out waiting for the node to pre-validate the fake \"\n\t\t\t\t\"block.\")\n\tend,\n\t[{H1, _, _} | _] = ar_test_node:wait_until_height(peer1, 1),\n\tar_test_node:mine(),\n\t%% Assert the nodes have continued building on the original fork.\n\t[{H3, _, _} | _] = ar_test_node:wait_until_height(peer1, 2),\n\t?assertNotEqual(B2#block.indep_hash, H3),\n\t{_Peer, B3, _Time, _Size} =\n\tar_http_iface_client:get_block_shadow(1,\n\t\tar_test_node:peer_ip(peer1),\n\t\tbinary, #{}),\n\t?assertEqual(H2, B3#block.indep_hash).\n\nfake_block_with_strong_cumulative_difficulty(B, PrevB, CDiff) ->\n\t#block{\n\t\theight = Height,\n\t\tpartition_number = PartitionNumber,\n\t\tprevious_solution_hash = PrevSolutionH,\n\t\tnonce_limiter_info = #nonce_limiter_info{\n\t\t\t\tpartition_upper_bound = PartitionUpperBound },\n\t\tdiff = Diff\n\t} = B,\n\tB2 = B#block{ cumulative_diff = CDiff },\n\tWallet = ar_wallet:new(),\n\tRewardAddr2 = ar_wallet:to_address(Wallet),\n\tH0 = ar_block:compute_h0(B, PrevB),\n\t{RecallByte, _RecallRange2Start} = ar_block:get_recall_range(H0, PartitionNumber,\n\t\t\tPartitionUpperBound),\n\t{ok, #{ data_path := DataPath, tx_path := TXPath,\n\t\t\tchunk := Chunk } } = ar_data_sync:get_chunk(RecallByte + 1,\n\t\t\t\t\t#{ pack => true, packing => {spora_2_6, RewardAddr2},\n\t\t\t\t\torigin => test }),\n\t{H1, Preimage} = ar_block:compute_h1(H0, 0, Chunk),\n\tcase binary:decode_unsigned(H1) > Diff of\n\t\ttrue ->\n\t\t\tPoA = #poa{ chunk = Chunk, data_path = DataPath, tx_path = TXPath },\n\t\t\tB3 = B2#block{ hash = H1, hash_preimage = Preimage, reward_addr = RewardAddr2,\n\t\t\t\t\treward_key = element(2, Wallet), recall_byte = RecallByte, nonce = 0,\n\t\t\t\t\trecall_byte2 = undefined, poa2 = #poa{},\n\t\t\t\t\tunpacked_chunk2_hash = undefined,\n\t\t\t\t\tpoa = #poa{ chunk = Chunk, 
data_path = DataPath,\n\t\t\t\t\t\t\ttx_path = TXPath },\n\t\t\t\t\tchunk_hash = crypto:hash(sha256, Chunk) },\n\t\t\tB4 =\n\t\t\t\tcase ar_fork:height_2_8() of\n\t\t\t\t\t0 ->\n\t\t\t\t\t\t{ok, #{ chunk := UnpackedChunk } } = ar_data_sync:get_chunk(\n\t\t\t\t\t\t\t\tRecallByte + 1, #{ pack => true, packing => unpacked,\n\t\t\t\t\t\t\t\torigin => test }),\n\t\t\t\t\t\tB3#block{ packing_difficulty = 1,\n\t\t\t\t\t\t\t\tpoa = PoA#poa{ unpacked_chunk = UnpackedChunk },\n\t\t\t\t\t\t\t\tunpacked_chunk_hash = crypto:hash(sha256, UnpackedChunk) };\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tB3\n\t\t\t\tend,\n\t\t\tPrevCDiff = PrevB#block.cumulative_diff,\n\t\t\tSignedH = ar_block:generate_signed_hash(B4),\n\t\t\tSignaturePreimage = ar_block:get_block_signature_preimage(CDiff, PrevCDiff,\n\t\t\t\t<< PrevSolutionH/binary, SignedH/binary >>, Height),\n\t\t\tSignature = ar_wallet:sign(element(1, Wallet), SignaturePreimage),\n\t\t\tB4#block{ indep_hash = ar_block:indep_hash2(SignedH, Signature),\n\t\t\t\t\tsignature = Signature };\n\t\tfalse ->\n\t\t\tfake_block_with_strong_cumulative_difficulty(B, PrevB, CDiff)\n\tend.\n\nfork_recovery_test_() ->\n\t%% Allow headroom for many sequential wait_until_syncs_chunks calls on slow CI.\n\t{timeout, 480, fun test_fork_recovery/0}.\n\ntest_fork_recovery() ->\n\ttest_fork_recovery(original_split).\n\ntest_fork_recovery(Split) ->\n\tWallet = ar_test_data_sync:setup_nodes(#{ packing => {composite, 1} }),\n\t{TX1, Chunks1} = ar_test_data_sync:tx(Wallet, {Split, 3}, v2, ?AR(10)),\n\t?debugFmt(\"Posting tx to main ~s.~n\", [ar_util:encode(TX1#tx.id)]),\n\tB1 = ar_test_node:post_and_mine(#{ miner => main, await_on => peer1 }, [TX1]),\n\t?debugFmt(\"Mined block ~s, height ~B.~n\", [ar_util:encode(B1#block.indep_hash),\n\t\t\tB1#block.height]),\n\tProofs1 = ar_test_data_sync:post_proofs(main, B1, TX1, Chunks1),\n\tar_test_data_sync:wait_until_syncs_chunks(Proofs1),\n\tUpperBound = ar_node:get_partition_upper_bound(ar_node:get_block_index()),\n\tar_test_data_sync:wait_until_syncs_chunks(peer1, Proofs1, UpperBound),\n\tar_test_node:disconnect_from(peer1),\n\t{PeerTX2, PeerChunks2} = ar_test_data_sync:tx(Wallet, {Split, 5}, v2, ?AR(10)),\n\t{PeerTX3, PeerChunks3} = ar_test_data_sync:tx(Wallet, {Split, 2}, v2, ?AR(10)),\n\t?debugFmt(\"Posting tx to peer1 ~s.~n\", [ar_util:encode(PeerTX2#tx.id)]),\n\t?debugFmt(\"Posting tx to peer1 ~s.~n\", [ar_util:encode(PeerTX3#tx.id)]),\n\tPeerB2 = ar_test_node:post_and_mine(#{ miner => peer1, await_on => peer1 },\n\t\t\t[PeerTX2, PeerTX3]),\n\t?debugFmt(\"Mined block ~s, height ~B.~n\", [ar_util:encode(PeerB2#block.indep_hash),\n\t\t\tPeerB2#block.height]),\n\t{MainTX2, MainChunks2} = ar_test_data_sync:tx(Wallet, {Split, 1}, v2, ?AR(10)),\n\t?debugFmt(\"Posting tx to main ~s.~n\", [ar_util:encode(MainTX2#tx.id)]),\n\tMainB2 = ar_test_node:post_and_mine(#{ miner => main, await_on => main },\n\t\t\t[MainTX2]),\n\t?debugFmt(\"Mined block ~s, height ~B.~n\", [ar_util:encode(MainB2#block.indep_hash),\n\t\t\tMainB2#block.height]),\n\t_PeerProofs2 = ar_test_data_sync:post_proofs(peer1, PeerB2, PeerTX2, PeerChunks2),\n\t_PeerProofs3 = ar_test_data_sync:post_proofs(peer1, PeerB2, PeerTX3, PeerChunks3),\n\t{PeerTX4, PeerChunks4} = ar_test_data_sync:tx(Wallet, {Split, 2}, v2, ?AR(10)),\n\t?debugFmt(\"Posting tx to peer1 ~s.~n\", [ar_util:encode(PeerTX4#tx.id)]),\n\tPeerB3 = ar_test_node:post_and_mine(#{ miner => peer1, await_on => peer1 },\n\t\t\t[PeerTX4]),\n\t?debugFmt(\"Mined block ~s, height ~B.~n\", 
[ar_util:encode(PeerB3#block.indep_hash),\n\t\t\tPeerB3#block.height]),\n\t_PeerProofs4 = ar_test_data_sync:post_proofs(peer1, PeerB3, PeerTX4, PeerChunks4),\n\tar_test_node:post_and_mine(#{ miner => main, await_on => main }, []),\n\tMainProofs2 = ar_test_data_sync:post_proofs(main, MainB2, MainTX2, MainChunks2),\n\t{MainTX3, MainChunks3} = ar_test_data_sync:tx(Wallet, {Split, 1}, v2, ?AR(10)),\n\t?debugFmt(\"Posting tx to main ~s.~n\", [ar_util:encode(MainTX3#tx.id)]),\n\tMainB3 = ar_test_node:post_and_mine(#{ miner => main, await_on => main },\n\t\t\t[MainTX3]),\n\t?debugFmt(\"Mined block ~s, height ~B.~n\", [ar_util:encode(MainB3#block.indep_hash),\n\t\t\tMainB3#block.height]),\n\tar_test_node:connect_to_peer(peer1),\n\tMainProofs3 = ar_test_data_sync:post_proofs(main, MainB3, MainTX3, MainChunks3),\n\tUpperBound2 = ar_node:get_partition_upper_bound(ar_node:get_block_index()),\n\tar_test_data_sync:wait_until_syncs_chunks(peer1, MainProofs2, UpperBound2),\n\tar_test_data_sync:wait_until_syncs_chunks(peer1, MainProofs3, UpperBound2),\n\tar_test_data_sync:wait_until_syncs_chunks(peer1, Proofs1, infinity),\n\t%% The peer1 node will return the orphaned transactions to the mempool\n\t%% and gossip them.\n\t?debugFmt(\"Posting tx to main ~s.~n\", [ar_util:encode(PeerTX2#tx.id)]),\n\t?debugFmt(\"Posting tx to main ~s.~n\", [ar_util:encode(PeerTX4#tx.id)]),\n\tar_test_node:post_tx_to_peer(main, PeerTX2),\n\tar_test_node:post_tx_to_peer(main, PeerTX4),\n\tar_test_node:assert_wait_until_receives_txs([PeerTX2, PeerTX4]),\n\tMainB4 = ar_test_node:post_and_mine(#{ miner => main, await_on => main }, []),\n\t?debugFmt(\"Mined block ~s, height ~B.~n\", [ar_util:encode(MainB4#block.indep_hash),\n\t\t\tMainB4#block.height]),\n\tProofs4 = ar_test_data_sync:post_proofs(main, MainB4, PeerTX4, PeerChunks4),\n\t%% We did not submit proofs for PeerTX4 to main - they are supposed to be still stored\n\t%% in the disk pool.\n\tar_test_data_sync:wait_until_syncs_chunks(peer1, Proofs4, infinity),\n\tUpperBound3 = ar_node:get_partition_upper_bound(ar_node:get_block_index()),\n\tar_test_data_sync:wait_until_syncs_chunks(Proofs4, UpperBound3),\n\tar_test_data_sync:post_proofs(peer1, PeerB2, PeerTX2, PeerChunks2).\n"
  },
  {
    "path": "apps/arweave/test/ar_get_chunk_tests.erl",
    "content": "-module(ar_get_chunk_tests).\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-include_lib(\"arweave/include/ar.hrl\").\n\nget_chunk_below_strict_threshold_test_() ->\n\tar_test_node:test_with_mocked_functions(\n\t\t[strict_data_split_threshold_mock(10 * ?DATA_CHUNK_SIZE)],\n\t\tfun test_get_chunk_below_strict_threshold/0,\n\t\t120\n\t).\n\nget_chunk_below_strict_threshold_small_tail_test_() ->\n\tar_test_node:test_with_mocked_functions(\n\t\t[strict_data_split_threshold_mock(10 * ?DATA_CHUNK_SIZE)],\n\t\tfun test_get_chunk_below_strict_threshold_small_tail/0,\n\t\t120\n\t).\n\nget_chunk_above_strict_threshold_test_() ->\n\tar_test_node:test_with_mocked_functions(\n\t\t[strict_data_split_threshold_mock(?DATA_CHUNK_SIZE)],\n\t\tfun test_get_chunk_above_strict_threshold/0,\n\t\t180\n\t).\n\nget_chunk_above_strict_threshold_small_tail_test_() ->\n\tar_test_node:test_with_mocked_functions(\n\t\t[strict_data_split_threshold_mock(?DATA_CHUNK_SIZE)],\n\t\tfun test_get_chunk_above_strict_threshold_small_tail/0,\n\t\t180\n\t).\n\ntest_get_chunk_below_strict_threshold() ->\n\tWallet = ar_test_data_sync:setup_nodes(),\n\tChunks = [\n\t\tcrypto:strong_rand_bytes(?DATA_CHUNK_SIZE),\n\t\tcrypto:strong_rand_bytes(?DATA_CHUNK_SIZE)\n\t],\n\t{TX, _} = tx_with_chunks(Wallet, Chunks),\n\tB = ar_test_node:post_and_mine(#{ miner => main, await_on => main }, [TX]),\n\t[{AbsoluteEndOffset, Proof} | _] = ar_test_data_sync:build_proofs(B, TX, Chunks),\n\tpost_and_wait_for_chunks([{AbsoluteEndOffset, Proof}]),\n\t?assert(AbsoluteEndOffset =< ar_block:strict_data_split_threshold()),\n\tassert_chunk_offsets_same(AbsoluteEndOffset, Proof).\n\ntest_get_chunk_below_strict_threshold_small_tail() ->\n\tSmallChunkSize = 12345,\n\tWallet = ar_test_data_sync:setup_nodes(),\n\tChunks = [\n\t\tcrypto:strong_rand_bytes(?DATA_CHUNK_SIZE),\n\t\tcrypto:strong_rand_bytes(SmallChunkSize)\n\t],\n\t{TX, _} = tx_with_chunks(Wallet, Chunks),\n\tB = ar_test_node:post_and_mine(#{ miner => main, await_on => main }, [TX]),\n\t[{AbsoluteEndOffset, Proof} | _] = ar_test_data_sync:build_proofs(B, TX, Chunks),\n\tpost_and_wait_for_chunks([{AbsoluteEndOffset, Proof}]),\n\t?assert(byte_size(lists:last(Chunks)) < ?DATA_CHUNK_SIZE),\n\t?assert(AbsoluteEndOffset =< ar_block:strict_data_split_threshold()),\n\tassert_chunk_offsets_same(AbsoluteEndOffset, Proof).\n\ntest_get_chunk_above_strict_threshold() ->\n\tWallet = ar_test_data_sync:setup_nodes(),\n\tChunks = [\n\t\tcrypto:strong_rand_bytes(?DATA_CHUNK_SIZE),\n\t\tcrypto:strong_rand_bytes(?DATA_CHUNK_SIZE)\n\t],\n\t{TX, _} = tx_with_chunks(Wallet, Chunks),\n\tB = ar_test_node:post_and_mine(#{ miner => main, await_on => main }, [TX]),\n\t[{FirstEndOffset, FirstProof}, {SecondEndOffset, SecondProof}] =\n\t\tar_test_data_sync:build_proofs(B, TX, Chunks),\n\tpost_and_wait_for_chunks([{FirstEndOffset, FirstProof}, {SecondEndOffset, SecondProof}]),\n\tThreshold = ar_block:strict_data_split_threshold(),\n\tAboveThreshold = [{AbsoluteEndOffset, Proof} || {AbsoluteEndOffset, Proof}\n\t\t<- [{FirstEndOffset, FirstProof}, {SecondEndOffset, SecondProof}],\n\t\tAbsoluteEndOffset > Threshold],\n\t?assertMatch([_ | _], AboveThreshold),\n\tlists:foreach(\n\t\tfun({AbsoluteEndOffset, Proof}) ->\n\t\t\tassert_chunk_offsets_same(AbsoluteEndOffset, Proof)\n\t\tend,\n\t\tAboveThreshold\n\t).\n\ntest_get_chunk_above_strict_threshold_small_tail() ->\n\tWallet = ar_test_data_sync:setup_nodes(),\n\tSmallChunkSize = 12345,\n\tFirstChunk = crypto:strong_rand_bytes(?DATA_CHUNK_SIZE),\n\tLastChunk = 
crypto:strong_rand_bytes(SmallChunkSize),\n\t?assert(byte_size(LastChunk) < ?DATA_CHUNK_SIZE),\n\tChunks = [FirstChunk, LastChunk],\n\t{TX, _} = tx_with_chunks(Wallet, Chunks),\n\tB = ar_test_node:post_and_mine(#{ miner => main, await_on => main }, [TX]),\n\t[{FirstEndOffset, FirstProof}, {SecondEndOffset, SecondProof}] =\n\t\tar_test_data_sync:build_proofs(B, TX, Chunks),\n\tpost_and_wait_for_chunks([{FirstEndOffset, FirstProof}, {SecondEndOffset, SecondProof}]),\n\tThreshold = ar_block:strict_data_split_threshold(),\n\t?assert(FirstEndOffset > Threshold),\n\t?assert(SecondEndOffset > Threshold),\n\tassert_chunk_offsets_same(FirstEndOffset, FirstProof),\n\tassert_chunk_offsets_same(SecondEndOffset, SecondProof).\n\nassert_chunk_offsets_same(AbsoluteEndOffset, ExpectedProof) ->\n\tChunkSize = byte_size(ar_util:decode(maps:get(chunk, ExpectedProof))),\n\tStartOffset = AbsoluteEndOffset - ChunkSize,\n\tOffsets = unique_offsets([\n\t\tAbsoluteEndOffset,\n\t\tAbsoluteEndOffset - 1,\n\t\tAbsoluteEndOffset - max(1, ChunkSize div 2)\n\t], StartOffset),\n\tResponses = [fetch_chunk_response(Offset) || Offset <- Offsets],\n\t[FirstResponse | Rest] = Responses,\n\tlists:foreach(fun(Response) -> ?assertEqual(FirstResponse, Response) end, Rest),\n\tassert_chunk_response(FirstResponse, AbsoluteEndOffset, ExpectedProof).\n\nfetch_chunk_response(Offset) ->\n\t{ok, {{<<\"200\">>, _}, _, ProofJSON, _, _}} = ar_test_node:get_chunk(main, Offset),\n\t{ok, Response} = ar_serialize:json_decode(ProofJSON, [return_maps]),\n\tResponse.\n\nassert_chunk_response(Response, AbsoluteEndOffset, ExpectedProof) ->\n\t?assertEqual(maps:get(chunk, ExpectedProof), maps:get(<<\"chunk\">>, Response)),\n\t?assertEqual(maps:get(data_path, ExpectedProof), maps:get(<<\"data_path\">>, Response)),\n\t?assertEqual(maps:get(tx_path, ExpectedProof), maps:get(<<\"tx_path\">>, Response)),\n\t?assertEqual(integer_to_binary(AbsoluteEndOffset), maps:get(<<\"absolute_end_offset\">>, Response)),\n\t?assertEqual(\n\t\tinteger_to_binary(byte_size(ar_util:decode(maps:get(chunk, ExpectedProof)))),\n\t\tmaps:get(<<\"chunk_size\">>, Response)\n\t),\n\t?assertEqual(\n\t\tiolist_to_binary(ar_serialize:encode_packing(unpacked, true)),\n\t\tmaps:get(<<\"packing\">>, Response)\n\t).\n\nstrict_data_split_threshold_mock(Value) ->\n\t{ar_block, strict_data_split_threshold, fun() -> Value end}.\n\ntx_with_chunks(Wallet, Chunks) ->\n\t{DataRoot, _} = ar_merkle:generate_tree(\n\t\tar_tx:sized_chunks_to_sized_chunk_ids(\n\t\t\tar_tx:chunks_to_size_tagged_chunks(Chunks)\n\t\t)\n\t),\n\tar_test_data_sync:tx(Wallet, {fixed_data, DataRoot, Chunks}).\n\npost_and_wait_for_chunks(Proofs) ->\n\tlists:foreach(\n\t\tfun({_EndOffset, Proof}) ->\n\t\t\t?assertMatch(\n\t\t\t\t{ok, {{<<\"200\">>, _}, _, _, _, _}},\n\t\t\t\tar_test_node:post_chunk(main, ar_serialize:jsonify(Proof))\n\t\t\t)\n\t\tend,\n\t\tProofs\n\t),\n\tlists:foreach(\n\t\tfun({AbsoluteEndOffset, Proof}) ->\n\t\t\tExpected = #{\n\t\t\t\tchunk => maps:get(chunk, Proof),\n\t\t\t\tdata_path => maps:get(data_path, Proof),\n\t\t\t\ttx_path => maps:get(tx_path, Proof)\n\t\t\t},\n\t\t\tar_test_data_sync:wait_until_syncs_chunk(AbsoluteEndOffset, Expected)\n\t\tend,\n\t\tProofs\n\t).\n\nunique_offsets(Offsets, StartOffset) ->\n\tlists:usort([Offset || Offset <- Offsets, Offset > StartOffset, Offset >= 0]).\n"
  },
  {
    "path": "apps/arweave/test/ar_header_sync_tests.erl",
    "content": "-module(ar_header_sync_tests).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-import(ar_test_node, [sign_v1_tx/3, wait_until_height/2, assert_wait_until_height/2,\n\tread_block_when_stored/1, random_v1_data/1\n]).\n\nsyncs_headers_test_() ->\n\tar_test_node:test_with_mocked_functions([\n\t\t\t{ar_fork, height_2_8, fun() -> 10 end},\n\t\t\t{ar_retarget, is_retarget_height, fun(_Height) -> false end},\n\t\t\t{ar_retarget, is_retarget_block, fun(_Block) -> false end}],\n\t\t\tfun test_syncs_headers/0).\n\ntest_syncs_headers() ->\n\tWallet = {_, Pub} = ar_wallet:new(),\n\t[B0] = ar_weave:init([{ar_wallet:to_address(Pub), ?AR(2000), <<>>}]),\n\tar_test_node:start(B0),\n\tpost_random_blocks(Wallet, ar_block:get_max_tx_anchor_depth() + 5, B0),\n\tar_test_node:join_on(#{ node => peer1, join_on => main }),\n\tBI = assert_wait_until_height(peer1, ar_block:get_max_tx_anchor_depth() + 5),\n\tlists:foreach(\n\t\tfun(Height) ->\n\t\t\t{ok, B} = ar_util:do_until(\n\t\t\t\tfun() ->\n\t\t\t\t\tcase ar_test_node:remote_call(peer1, ar_storage, read_block, [Height, BI]) of\n\t\t\t\t\t\tunavailable ->\n\t\t\t\t\t\t\tunavailable;\n\t\t\t\t\t\tB2 ->\n\t\t\t\t\t\t\t{ok, B2}\n\t\t\t\t\tend\n\t\t\t\tend,\n\t\t\t\t200,\n\t\t\t\t30000\n\t\t\t),\n\t\t\tMainB = ar_storage:read_block(Height, ar_node:get_block_index()),\n\t\t\t?assertEqual(B, MainB),\n\t\t\tTXs = ar_test_node:remote_call(peer1, ar_storage, read_tx, [B#block.txs]),\n\t\t\tMainTXs = ar_storage:read_tx(B#block.txs),\n\t\t\t?assertEqual(TXs, MainTXs)\n\t\tend,\n\t\tlists:reverse(lists:seq(0, ar_block:get_max_tx_anchor_depth() + 5))\n\t),\n\t%% Throw the event to simulate running out of disk space.\n\tar_disksup:pause(),\n\tar_events:send(disksup, {remaining_disk_space, ?DEFAULT_MODULE, true, 0, 0}),\n\tNoSpaceHeight = ar_block:get_max_tx_anchor_depth() + 6,\n\tNoSpaceTX = sign_v1_tx(main, Wallet,\n\t\t#{ data => random_v1_data(10 * 1024), last_tx => ar_test_node:get_tx_anchor(peer1) }),\n\tar_test_node:assert_post_tx_to_peer(main, NoSpaceTX),\n\tar_test_node:mine(),\n\t[{NoSpaceH, _, _} | _] = wait_until_height(main, NoSpaceHeight),\n\ttimer:sleep(1000),\n\t%% The cleanup is not expected to kick in yet.\n\tNoSpaceB = read_block_when_stored(NoSpaceH),\n\t?assertMatch(#block{}, NoSpaceB),\n\t?assertMatch(#tx{}, ar_storage:read_tx(NoSpaceTX#tx.id)),\n\t?assertMatch({ok, _}, ar_storage:read_wallet_list(NoSpaceB#block.wallet_list)),\n\tets:new(test_syncs_header, [set, named_table]),\n\tets:insert(test_syncs_header, {height, NoSpaceHeight + 1}),\n\ttrue = ar_util:do_until(\n\t\tfun() ->\n\t\t\t%% Keep mining blocks. 
At some point the cleanup procedure will\n\t\t\t%% kick in and remove the oldest files.\n\t\t\tTX = sign_v1_tx(main, Wallet, #{\n\t\t\t\tdata => random_v1_data(200 * 1024), last_tx => ar_test_node:get_tx_anchor(peer1) }),\n\t\t\tar_test_node:assert_post_tx_to_peer(main, TX),\n\t\t\tar_test_node:mine(),\n\t\t\t[{_, Height}] = ets:lookup(test_syncs_header, height),\n\t\t\t[_ | _] = wait_until_height(main, Height),\n\t\t\tets:insert(test_syncs_header, {height, Height + 1}),\n\t\t\tunavailable == ar_storage:read_block(NoSpaceH)\n\t\t\t\tandalso ar_storage:read_tx(NoSpaceTX#tx.id) == unavailable\n\t\tend,\n\t\t100,\n\t\t20000\n\t),\n\ttimer:sleep(1000),\n\t[{LatestH, _, _} | _] = ar_node:get_block_index(),\n\t%% The latest block must not be cleaned up.\n\tLatestB = read_block_when_stored(LatestH),\n\t?assertMatch(#block{}, LatestB),\n\t?assertMatch(#tx{}, ar_storage:read_tx(lists:nth(1, LatestB#block.txs))),\n\t?assertMatch({ok, _}, ar_storage:read_wallet_list(LatestB#block.wallet_list)),\n\tar_disksup:resume().\n\npost_random_blocks(Wallet, TargetHeight, B0) ->\n\tlists:foldl(\n\t\tfun(Height, Anchor) ->\n\t\t\t?LOG_INFO([{event, post_random_blocks}, {height, Height}]),\n\t\t\tTXs =\n\t\t\t\tlists:foldl(\n\t\t\t\t\tfun(_, Acc) ->\n\t\t\t\t\t\tcase rand:uniform(2) == 1 of\n\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\tTX = ar_test_node:sign_tx(main, Wallet,\n\t\t\t\t\t\t\t\t\t#{\n\t\t\t\t\t\t\t\t\t\tlast_tx => Anchor,\n\t\t\t\t\t\t\t\t\t\tdata => crypto:strong_rand_bytes(10 * ?MiB)\n\t\t\t\t\t\t\t\t\t}),\n\t\t\t\t\t\t\t\tar_test_node:assert_post_tx_to_peer(main, TX),\n\t\t\t\t\t\t\t\t[TX | Acc];\n\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\tAcc\n\t\t\t\t\t\tend\n\t\t\t\t\tend,\n\t\t\t\t\t[],\n\t\t\t\t\tlists:seq(1, 2)\n\t\t\t\t),\n\t\t\t?LOG_INFO([{event, post_random_blocks}, {transactions_posted, length(TXs)}, {height, Height}]),\n\t\t\tar_test_node:mine(),\n\t\t\t[{H, _, _} | _] = wait_until_height(main, Height),\n\t\t\t?LOG_INFO([{event, post_random_blocks}, {block_mined, ar_util:encode(H)}, {height, Height}]),\n\t\t\t?assertEqual(length(TXs), length((read_block_when_stored(H))#block.txs)),\n\t\t\tH\n\t\tend,\n\t\tB0#block.indep_hash,\n\t\tlists:seq(1, TargetHeight)\n\t).\n"
  },
  {
    "path": "apps/arweave/test/ar_http_iface_tests.erl",
    "content": "-module(ar_http_iface_tests).\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-import(ar_test_node, [wait_until_height/2, wait_until_receives_txs/1,\n\t\tread_block_when_stored/1, read_block_when_stored/2, assert_wait_until_height/2]).\n\nstart_node() ->\n\t%% Starting a node is slow so we'll run it once for the whole test module\n\tWallet1 = {_, Pub1} = ar_wallet:new(),\n\tWallet2 = {_, Pub2} = ar_wallet:new(),\n\t%% This wallet is never spent from or deposited to, so the balance is predictable\n\tStaticWallet = {_, Pub3} = ar_wallet:new(),\n\t[B0] = ar_weave:init([\n\t\t{ar_wallet:to_address(Pub1), ?AR(10000), <<>>},\n\t\t{ar_wallet:to_address(Pub2), ?AR(10000), <<>>},\n\t\t{ar_wallet:to_address(Pub3), ?AR(10), <<\"TEST_ID\">>}\n\t], 0), %% Set difficulty to 0 to speed up tests\n\tar_test_node:start(B0),\n\tar_test_node:start_peer(peer1, B0),\n\tar_test_node:connect_to_peer(peer1),\n\t{B0, Wallet1, Wallet2, StaticWallet}.\n\nreset_node() ->\n\tar_blacklist_middleware:reset(),\n\tarweave_limiter_sup:reset_all(),\n\tar_test_node:remote_call(peer1, ar_blacklist_middleware, reset, []),\n\tar_test_node:connect_to_peer(peer1).\n\nsetup_all_batch() ->\n\t%% Never retarget the difficulty - this ensures the tests are always\n\t%% run against difficulty 0. Because of this we also have to hardcode\n\t%% the TX fee, otherwise it can jump pretty high.\n\t{Setup, Cleanup} = ar_test_node:mock_functions([\n\t\t{ar_retarget, is_retarget_height, fun(_Height) -> false end},\n\t\t{ar_retarget, is_retarget_block, fun(_Block) -> false end},\n\t\t{ar_tx, get_tx_fee, fun(_Args) -> ?AR(1) end}\n\t\t]),\n\tFunctions = Setup(),\n\tGenesisData = start_node(),\n\t{GenesisData, {Cleanup, Functions}}.\n\ncleanup_all_batch({_GenesisData, {Cleanup, Functions}}) ->\n\tCleanup(Functions).\n\ntest_register(TestFun, Fixture) ->\n\t{timeout, 120, {with, Fixture, [TestFun]}}.\n\n%% -------------------------------------------------------------------\n%% The spammer tests must run first. All the other tests will call\n%% ar_blacklist_middleware:reset() periodically and this will restart\n%% the 30-second throttle counter using timer:apply_after(30000, ...).\n%% However since most tests are less than 30 seconds, we end up with\n%% several pending timer:apply_after that can hit, and reset the\n%% throttle counter at any moment. 
This has caused these spammer tests\n%% to fail randomly in the past depending on whether or not the\n%% throttle counter was reset before the test finished.\n%% -------------------------------------------------------------------\n\n%% @doc Test that nodes sending too many requests are temporarily blocked: (a) GET.\nnode_blacklisting_get_spammer_test_() ->\n\t{timeout, 30, fun test_node_blacklisting_get_spammer/0}.\n\n% @doc Test that nodes sending too many requests are temporarily blocked: (b) POST.\nnode_blacklisting_post_spammer_test_() ->\n\t{timeout, 30, fun test_node_blacklisting_post_spammer/0}.\n\nbatch_test_() ->\n\t{setup, fun setup_all_batch/0, fun cleanup_all_batch/1,\n\t\tfun ({GenesisData, _MockData}) ->\n\t\t\t{foreach, fun reset_node/0, [\n\t\t\t\t%% ---------------------------------------------------------\n\t\t\t\t%% The following tests must be run at a block height of 0.\n\t\t\t\t%% ---------------------------------------------------------\n\t\t\t\ttest_register(fun test_get_total_supply/1, GenesisData),\n\t\t\t\ttest_register(fun test_get_current_block/1, GenesisData),\n\t\t\t\ttest_register(fun test_get_height/1, GenesisData),\n\t\t\t\t%% ---------------------------------------------------------\n\t\t\t\t%% The following tests are read-only and will not modify\n\t\t\t\t%% state. They assume that the blockchain state\n\t\t\t\t%% is fixed (and set by start_node and test_get_height).\n\t\t\t\t%% ---------------------------------------------------------\n\t\t\t\ttest_register(fun test_get_wallet_list_in_chunks/1, GenesisData),\n\t\t\t\ttest_register(fun test_get_info/1, GenesisData),\n\t\t\t\ttest_register(fun test_get_last_tx_single/1, GenesisData),\n\t\t\t\ttest_register(fun test_get_block_by_hash/1, GenesisData),\n\t\t\t\ttest_register(fun test_get_block_by_height/1, GenesisData),\n\t\t\t\ttest_register(fun test_get_non_existent_block/1, GenesisData),\n\t\t\t\t%% ---------------------------------------------------------\n\t\t\t\t%% The following tests are *not* read-only and may modify\n\t\t\t\t%% state. 
They can *not* assume a fixed blockchain state.\n\t\t\t\t%% ---------------------------------------------------------\n\t\t\t\ttest_register(fun test_addresses_with_checksum/1, GenesisData),\n\t\t\t\ttest_register(fun test_single_regossip/1, GenesisData),\n\t\t\t\ttest_register(fun test_get_balance/1, GenesisData),\n\t\t\t\ttest_register(fun test_get_format_2_tx/1, GenesisData),\n\t\t\t\ttest_register(fun test_get_format_1_tx/1, GenesisData),\n\t\t\t\ttest_register(fun test_add_external_tx_with_tags/1, GenesisData),\n\t\t\t\ttest_register(fun test_find_external_tx/1, GenesisData),\n\t\t\t\ttest_register(fun test_add_tx_and_get_last/1, GenesisData),\n\t\t\t\ttest_register(fun test_get_subfields_of_tx/1, GenesisData),\n\t\t\t\ttest_register(fun test_get_pending_tx/1, GenesisData),\n\t\t\t\ttest_register(fun test_get_tx_body/1, GenesisData),\n\t\t\t\ttest_register(fun test_get_tx_status/1, GenesisData),\n\t\t\t\ttest_register(fun test_post_unsigned_tx/1, GenesisData),\n\t\t\t\ttest_register(fun test_get_error_of_data_limit/1, GenesisData),\n\t\t\t\ttest_register(fun test_send_missing_tx_with_the_block/1, GenesisData),\n\t\t\t\ttest_register(\n\t\t\t\t\tfun test_fallback_to_block_endpoint_if_cannot_send_tx/1, GenesisData),\n\t\t\t\ttest_register(fun test_get_recent_hash_list_diff/1, GenesisData),\n\t\t\t\ttest_register(fun test_get_total_supply/1, GenesisData)\n\t\t\t]}\n\t\tend\n\t}.\n\n%% @doc Check that we can quickly get the local time from the peer.\nget_time_test() ->\n\tNow = os:system_time(second),\n\t{ok, {Min, Max}} = ar_http_iface_client:get_time(ar_test_node:peer_ip(main), 10 * 1000),\n\t?assert(Min < Now),\n\t?assert(Now < Max).\n\ntest_addresses_with_checksum({_, Wallet1, {_, Pub2}, _}) ->\n\tLocalHeight = ar_node:get_height(),\n\tRemoteHeight = height(peer1),\n\tAddress19 = crypto:strong_rand_bytes(19),\n\tAddress65 = crypto:strong_rand_bytes(65),\n\tAddress20 = crypto:strong_rand_bytes(20),\n\tAddress32 = ar_wallet:to_address(Pub2),\n\tTX = ar_test_node:sign_tx(Wallet1, #{ last_tx => ar_test_node:get_tx_anchor(peer1) }),\n\t{JSON} = ar_serialize:tx_to_json_struct(TX),\n\tJSON2 = proplists:delete(<<\"target\">>, JSON),\n\tTX2 = ar_test_node:sign_tx(Wallet1, #{ last_tx => ar_test_node:get_tx_anchor(peer1), target => Address32 }),\n\t{JSON3} = ar_serialize:tx_to_json_struct(TX2),\n\tInvalidPayloads = [\n\t\t[{<<\"target\">>, <<\":\">>} | JSON2],\n\t\t[{<<\"target\">>, << <<\":\">>/binary, (ar_util:encode(<< 0:32 >>))/binary >>} | JSON2],\n\t\t[{<<\"target\">>, << (ar_util:encode(Address19))/binary, <<\":\">>/binary,\n\t\t\t\t(ar_util:encode(<< (erlang:crc32(Address19)):32 >> ))/binary >>} | JSON2],\n\t\t[{<<\"target\">>, << (ar_util:encode(Address65))/binary, <<\":\">>/binary,\n\t\t\t\t(ar_util:encode(<< (erlang:crc32(Address65)):32 >>))/binary >>} | JSON2],\n\t\t[{<<\"target\">>, << (ar_util:encode(Address32))/binary, <<\":\">>/binary,\n\t\t\t\t(ar_util:encode(<< 0:32 >>))/binary >>} | JSON2],\n\t\t[{<<\"target\">>, << (ar_util:encode(Address20))/binary, <<\":\">>/binary,\n\t\t\t\t(ar_util:encode(<< 1:32 >>))/binary >>} | JSON2],\n\t\t[{<<\"target\">>, << (ar_util:encode(Address32))/binary, <<\":\">>/binary,\n\t\t\t\t(ar_util:encode(<< (erlang:crc32(Address32)):32 >>))/binary,\n\t\t\t\t<<\":\">>/binary >>} | JSON2],\n\t\t[{<<\"target\">>, << (ar_util:encode(Address32))/binary, <<\":\">>/binary >>} | JSON3]\n\t],\n\tlists:foreach(\n\t\tfun(Struct) ->\n\t\t\tPayload = ar_serialize:jsonify({Struct}),\n\t\t\t?assertMatch({ok, {{<<\"400\">>, _}, _, <<\"Invalid JSON.\">>, _, 
_}},\n\t\t\t\t\tar_test_node:post_tx_json(main, Payload))\n\t\tend,\n\t\tInvalidPayloads\n\t),\n\tValidPayloads = [\n\t\t[{<<\"target\">>, << (ar_util:encode(Address32))/binary, <<\":\">>/binary,\n\t\t\t\t(ar_util:encode(<< (erlang:crc32(Address32)):32 >>))/binary >>} | JSON3],\n\t\tJSON\n\t],\n\tlists:foreach(\n\t\tfun(Struct) ->\n\t\t\tPayload = ar_serialize:jsonify({Struct}),\n\t\t\t?assertMatch({ok, {{<<\"200\">>, _}, _, <<\"OK\">>, _, _}},\n\t\t\t\t\tar_test_node:post_tx_json(main, Payload))\n\t\tend,\n\t\tValidPayloads\n\t),\n\tar_test_node:assert_wait_until_receives_txs(main, [TX, TX2]),\n\tar_test_node:assert_wait_until_receives_txs(peer1, [TX, TX2]),\n\tar_test_node:mine(),\n\t[{H, _, _} | _] = ar_test_node:wait_until_height(main, LocalHeight + 1),\n\tar_test_node:assert_wait_until_height(peer1, RemoteHeight + 1),\n\tB = read_block_when_stored(H, true),\n\tChecksumAddr = << (ar_util:encode(Address32))/binary, <<\":\">>/binary,\n\t\t\t(ar_util:encode(<< (erlang:crc32(Address32)):32 >>))/binary >>,\n\t?assertEqual(2, length(B#block.txs)),\n\tBalance = get_balance(ar_util:encode(Address32)),\n\t?assertEqual(Balance, get_balance(ChecksumAddr)),\n\tLastTX = get_last_tx(ar_util:encode(Address32)),\n\t?assertEqual(LastTX, get_last_tx(ChecksumAddr)),\n\tPrice = get_price(ar_util:encode(Address32)),\n\t?assertEqual(Price, get_price(ChecksumAddr)),\n\tServeTXTarget = maps:get(<<\"target\">>, jiffy:decode(get_tx(TX2#tx.id), [return_maps])),\n\t?assertEqual(ar_util:encode(TX2#tx.target), ServeTXTarget).\n\nget_balance(EncodedAddr) ->\n\tPeer = ar_test_node:peer_ip(main),\n\t{_, _, _, _, Port} = Peer,\n\t{ok, {{<<\"200\">>, _}, _, Reply, _, _}} =\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => Peer,\n\t\t\tpath => \"/wallet/\" ++ binary_to_list(EncodedAddr) ++ \"/balance\",\n\t\t\theaders => [{<<\"x-p2p-port\">>, integer_to_binary(Port)}]\n\t\t}),\n\tbinary_to_integer(Reply).\n\nget_last_tx(EncodedAddr) ->\n\tPeer = ar_test_node:peer_ip(main),\n\t{_, _, _, _, Port} = Peer,\n\t{ok, {{<<\"200\">>, _}, _, Reply, _, _}} =\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => Peer,\n\t\t\tpath => \"/wallet/\" ++ binary_to_list(EncodedAddr) ++ \"/last_tx\",\n\t\t\theaders => [{<<\"x-p2p-port\">>, integer_to_binary(Port)}]\n\t\t}),\n\tReply.\n\nget_price(EncodedAddr) ->\n\tPeer = ar_test_node:peer_ip(main),\n\t{_, _, _, _, Port} = Peer,\n\t{ok, {{<<\"200\">>, _}, _, Reply, _, _}} =\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => Peer,\n\t\t\tpath => \"/price/0/\" ++ binary_to_list(EncodedAddr),\n\t\t\theaders => [{<<\"x-p2p-port\">>, integer_to_binary(Port)}]\n\t\t}),\n\tbinary_to_integer(Reply).\n\nget_tx(ID) ->\n\tPeer = ar_test_node:peer_ip(main),\n\t{_, _, _, _, Port} = Peer,\n\t{ok, {{<<\"200\">>, _}, _, Reply, _, _}} =\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => Peer,\n\t\t\tpath => \"/tx/\" ++ binary_to_list(ar_util:encode(ID)),\n\t\t\theaders => [{<<\"x-p2p-port\">>, integer_to_binary(Port)}]\n\t\t}),\n\tReply.\n\n%% @doc Ensure that server info can be retrieved via the HTTP interface.\ntest_get_info(_) ->\n\t?assertEqual(info_unavailable,\n\t\tar_http_iface_client:get_info(ar_test_node:peer_ip(main), bad_key)),\n\t?assertEqual(<<?NETWORK_NAME>>,\n\t\t\tar_http_iface_client:get_info(ar_test_node:peer_ip(main), network)),\n\t?assertEqual(?RELEASE_NUMBER,\n\t\t\tar_http_iface_client:get_info(ar_test_node:peer_ip(main), release)),\n\t?assertEqual(\n\t\t?CLIENT_VERSION,\n\t\tar_http_iface_client:get_info(ar_test_node:peer_ip(main), version)),\n\t?assertEqual(1, 
ar_http_iface_client:get_info(ar_test_node:peer_ip(main), peers)),\n\tar_util:do_until(\n\t\tfun() ->\n\t\t\t1 == ar_http_iface_client:get_info(ar_test_node:peer_ip(main), blocks)\n\t\tend,\n\t\t100,\n\t\t2000\n\t),\n\t?assertEqual(1, ar_http_iface_client:get_info(ar_test_node:peer_ip(main), height)).\n\n%% @doc Ensure that transactions are only accepted once.\ntest_single_regossip(_) ->\n\tar_test_node:disconnect_from(peer1),\n\tTX = ar_tx:new(),\n\t?assertMatch(\n\t\t{ok, {{<<\"200\">>, _}, _, _, _, _}},\n\t\tar_http_iface_client:send_tx_json(ar_test_node:peer_ip(main), TX#tx.id,\n\t\t\t\tar_serialize:jsonify(ar_serialize:tx_to_json_struct(TX)))\n\t),\n\t?assertMatch(\n\t\t{ok, {{<<\"200\">>, _}, _, _, _, _}},\n\t\tar_test_node:remote_call(peer1, ar_http_iface_client, send_tx_binary, [ar_test_node:peer_ip(peer1), TX#tx.id,\n\t\t\t\tar_serialize:tx_to_binary(TX)])\n\t),\n\t?assertMatch(\n\t\t{ok, {{<<\"208\">>, _}, _, _, _, _}},\n\t\tar_test_node:remote_call(peer1, ar_http_iface_client, send_tx_binary, [ar_test_node:peer_ip(peer1), TX#tx.id,\n\t\t\t\tar_serialize:tx_to_binary(TX)])\n\t),\n\t?assertMatch(\n\t\t{ok, {{<<\"208\">>, _}, _, _, _, _}},\n\t\tar_test_node:remote_call(peer1, ar_http_iface_client, send_tx_json, [ar_test_node:peer_ip(peer1), TX#tx.id,\n\t\t\t\tar_serialize:jsonify(ar_serialize:tx_to_json_struct(TX))])\n\t).\n\ntest_node_blacklisting_get_spammer() ->\n\t{ok, Config} = arweave_config:get_env(),\n\t{RequestFun, ErrorResponse} = get_fun_msg_pair(get_info),\n\tLimitWithBursts = Config#config.'http_api.limiter.general.sliding_window_limit'\n\t\t+ Config#config.'http_api.limiter.general.leaky_limit',\n\tnode_blacklisting_test_frame(\n\t\tRequestFun,\n\t\tErrorResponse,\n\t\tLimitWithBursts,\n\t\t1\n\t).\n\ntest_node_blacklisting_post_spammer() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tLimitWithBursts = Config#config.'http_api.limiter.general.sliding_window_limit'\n\t\t+ Config#config.'http_api.limiter.general.leaky_limit',\n\t{RequestFun, ErrorResponse} = get_fun_msg_pair(send_tx_binary),\n\tNErrors = 11,\n\tNRequests = LimitWithBursts + NErrors,\n\tnode_blacklisting_test_frame(\n\t\tRequestFun,\n\t\tErrorResponse,\n\t\tNRequests,\n\t\tNErrors\n\t).\n\n%% @doc Given a label, return a fun and a message.\n-spec get_fun_msg_pair(atom()) -> {fun(), any()}.\nget_fun_msg_pair(get_info) ->\n\t{ fun(_) ->\n\t\t\tar_http_iface_client:get_info(ar_test_node:peer_ip(main))\n\t\tend\n\t, info_unavailable};\nget_fun_msg_pair(send_tx_binary) ->\n\t{ fun(Index) ->\n\t\t\tInvalidTX = (ar_tx:new())#tx{ owner = <<\"key\">>, signature = <<\"invalid\">> },\n\t\t\tsend_tx_binary(Index, InvalidTX)\n\t\tend\n\t, too_many_requests}.\n\nsend_tx_binary(Index, InvalidTX) ->\n\tcase ar_http_iface_client:send_tx_binary(ar_test_node:peer_ip(main),\n\t\t\tInvalidTX#tx.id, ar_serialize:tx_to_binary(InvalidTX)) of\n\t\t{ok,\n\t\t\t{{<<\"429\">>, <<\"Too Many Requests\">>}, _,\n\t\t\t\t<<\"Too Many Requests\">>, _, _}} ->\n\t\t\ttoo_many_requests;\n\t\t{ok, _} ->\n\t\t\tok;\n\t\t{error, Error} ->\n\t\t\t?debugFmt(\"Unexpected response on call ~p: ~p. 
Trying again...~n\", [Index, Error]),\n\t\t\tsend_tx_binary(Index, InvalidTX)\n\tend.\n\n\n%% @doc Frame to test spamming an endpoint.\n-spec node_blacklisting_test_frame(fun(), any(), non_neg_integer(), non_neg_integer()) -> ok.\nnode_blacklisting_test_frame(RequestFun, ErrorResponse, NRequests, ExpectedErrors) ->\n\tar_blacklist_middleware:reset(),\n\tarweave_limiter_sup:reset_all(),\n\tar_rate_limiter:off(),\n\tResponses = ar_util:batch_pmap(\n\t\tRequestFun,\n\t\tlists:seq(1, NRequests),\n\t\t50,\n\t\t60_000\n\t),\n\t?assertEqual(length(Responses), NRequests),\n\tar_blacklist_middleware:reset(),\n\tarweave_limiter_sup:reset_all(),\n\tGot = count_by_response_type(ErrorResponse, Responses),\n\t%% Other test nodes may occasionally make some requests in the background disturbing the stats.\n\tTolerance = 5,\n\t?debugFmt(\"Requests sent: ~p, ExpectedErrors: ~p, Tolerance: ~p, Got: ~p~n\",\n\t\t[NRequests, ExpectedErrors, Tolerance, maps:get(error_responses, Got, 0)]),\n\t?assert(maps:get(error_responses, Got, 0) =< ExpectedErrors + Tolerance),\n\t?assert(maps:get(error_responses, Got, 0) >= ExpectedErrors - Tolerance),\n\t?assertEqual(NRequests - maps:get(error_responses, Got, 0), maps:get(ok_responses, Got, 0)),\n\tar_rate_limiter:on().\n\n%% @doc Count the number of successful and error responses.\ncount_by_response_type(ErrorResponse, Responses) ->\n\tcount_by(\n\t\tfun\n\t\t\t(Response) when Response == ErrorResponse -> error_responses;\n\t\t\t(_) -> ok_responses\n\t\tend,\n\t\tResponses\n\t).\n\n%% @doc Count the occurrences in the list based on the predicate.\ncount_by(Pred, List) ->\n\tmaps:map(fun (_, Value) -> length(Value) end, group(Pred, List)).\n\n%% @doc Group the list based on the key generated by Grouper.\ngroup(Grouper, Values) ->\n\tgroup(Grouper, Values, maps:new()).\n\ngroup(_, [], Acc) ->\n\tAcc;\ngroup(Grouper, [Item | List], Acc) ->\n\tKey = Grouper(Item),\n\tUpdater = fun (Old) -> [Item | Old] end,\n\tNewAcc = maps:update_with(Key, Updater, [Item], Acc),\n\tgroup(Grouper, List, NewAcc).\n\n%% @doc Check that balances can be retrieved over the network.\ntest_get_balance({B0, _, _, {_, Pub1}}) ->\n\tLocalHeight = ar_node:get_height(),\n\tAddr = binary_to_list(ar_util:encode(ar_wallet:to_address(Pub1))),\n\t{ok, {{<<\"200\">>, _}, _, Body, _, _}} =\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => ar_test_node:peer_ip(main),\n\t\t\tpath => \"/wallet/\" ++ Addr ++ \"/balance\"\n\t\t}),\n\t?assertEqual(?AR(10), binary_to_integer(Body)),\n\tRootHash = binary_to_list(ar_util:encode(B0#block.wallet_list)),\n\t{ok, {{<<\"200\">>, _}, _, Body, _, _}} =\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => ar_test_node:peer_ip(main),\n\t\t\tpath => \"/wallet_list/\" ++ RootHash ++ \"/\" ++ Addr ++ \"/balance\"\n\t\t}),\n\tar_test_node:mine(),\n\twait_until_height(main, LocalHeight + 1),\n\t{ok, {{<<\"200\">>, _}, _, Body, _, _}} =\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => ar_test_node:peer_ip(main),\n\t\t\tpath => \"/wallet_list/\" ++ RootHash ++ \"/\" ++ Addr ++ \"/balance\"\n\t\t}).\n\ntest_get_wallet_list_in_chunks({B0, {_, Pub1}, {_, Pub2}, {_, StaticPub}}) ->\n\tAddr1 = ar_wallet:to_address(Pub1),\n\tAddr2 = ar_wallet:to_address(Pub2),\n\tStaticAddr = ar_wallet:to_address(StaticPub),\n\tNonExistentRootHash = binary_to_list(ar_util:encode(crypto:strong_rand_bytes(32))),\n\t{ok, {{<<\"404\">>, _}, _, <<\"Root hash not found.\">>, _, _}} =\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => ar_test_node:peer_ip(main),\n\t\t\tpath => 
\"/wallet_list/\" ++ NonExistentRootHash\n\t\t}),\n\n\t[TX] = B0#block.txs,\n\tGenesisAddr = ar_wallet:to_address(TX#tx.owner, {?RSA_SIGN_ALG, 65537}),\n\tTXID = TX#tx.id,\n\tExpectedWallets = lists:sort([\n\t\t\t{Addr1, {?AR(10000), <<>>}},\n\t\t\t{Addr2, {?AR(10000), <<>>}},\n\t\t\t{StaticAddr, {?AR(10), <<\"TEST_ID\">>}},\n\t\t\t{GenesisAddr, {0, TXID}}]),\n\t{ExpectedWallets1, ExpectedWallets2} = lists:split(2, ExpectedWallets),\n\tRootHash = binary_to_list(ar_util:encode(B0#block.wallet_list)),\n\t{ok, {{<<\"200\">>, _}, _, Body1, _, _}} =\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => ar_test_node:peer_ip(main),\n\t\t\tpath => \"/wallet_list/\" ++ RootHash\n\t\t}),\n\tCursor = maps:get(next_cursor, binary_to_term(Body1)),\n\t?assertEqual(#{\n\t\t\tnext_cursor => Cursor,\n\t\t\twallets => lists:reverse(ExpectedWallets1)\n\t\t}, binary_to_term(Body1)),\n\n\t{ok, {{<<\"200\">>, _}, _, Body2, _, _}} =\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => ar_test_node:peer_ip(main),\n\t\t\tpath => \"/wallet_list/\" ++ RootHash ++ \"/\" ++ ar_util:encode(Cursor)\n\t\t}),\n\t?assertEqual(#{\n\t\t\tnext_cursor => last,\n\t\t\twallets => lists:reverse(ExpectedWallets2)\n\t\t}, binary_to_term(Body2)).\n\n%% @doc Test that heights are returned correctly.\ntest_get_height(_) ->\n\t0 = ar_http_iface_client:get_height(ar_test_node:peer_ip(main)),\n\tar_test_node:mine(),\n\twait_until_height(main, 1),\n\t1 = ar_http_iface_client:get_height(ar_test_node:peer_ip(main)).\n\n%% @doc Test that last tx associated with a wallet can be fetched.\ntest_get_last_tx_single({_, _, _, {_, StaticPub}}) ->\n\tAddr = binary_to_list(ar_util:encode(ar_wallet:to_address(StaticPub))),\n\t{ok, {{<<\"200\">>, _}, _, Body, _, _}} =\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => ar_test_node:peer_ip(main),\n\t\t\tpath => \"/wallet/\" ++ Addr ++ \"/last_tx\"\n\t\t}),\n\t?assertEqual(<<\"TEST_ID\">>, ar_util:decode(Body)).\n\n%% @doc Ensure that blocks can be received via a hash.\ntest_get_block_by_hash({B0, _, _, _}) ->\n\t{_Peer, B1, _Time, _Size} = ar_http_iface_client:get_block_shadow(B0#block.indep_hash,\n\t\t\tar_test_node:peer_ip(main), binary, #{}),\n\tTXIDs = [TX#tx.id || TX <- B0#block.txs],\n\t?assertEqual(B0#block{ size_tagged_txs = unset, account_tree = undefined, txs = TXIDs,\n\t\t\treward_history = [], block_time_history = [] }, B1).\n\n%% @doc Ensure that blocks can be received via a height.\ntest_get_block_by_height({B0, _, _, _}) ->\n\t{_Peer, B1, _Time, _Size} = ar_http_iface_client:get_block_shadow(0, ar_test_node:peer_ip(main), binary, #{}),\n\tTXIDs = [TX#tx.id || TX <- B0#block.txs],\n\t?assertEqual(B0#block{ size_tagged_txs = unset, account_tree = undefined, txs = TXIDs,\n\t\t\treward_history = [], block_time_history = [] }, B1).\n\ntest_get_current_block({B0, _, _, _}) ->\n\tPeer = ar_test_node:peer_ip(main),\n\t{ok, BI} = ar_http_iface_client:get_block_index(Peer, 0, 100),\n\t{_Peer, B1, _Time, _Size} =\n\tar_http_iface_client:get_block_shadow(hd(BI), Peer, binary, #{}),\n\tTXIDs = [TX#tx.id || TX <- B0#block.txs],\n\t?assertEqual(B0#block{ size_tagged_txs = unset, txs = TXIDs, reward_history = [],\n\t\t\tblock_time_history = [], account_tree = undefined }, B1),\n\t{ok, {{<<\"200\">>, _}, _, Body, _, _}} =\n\t\tar_http:req(#{ method => get, peer => ar_test_node:peer_ip(main), path => \"/block/current\" }),\n\t{JSONStruct} = jiffy:decode(Body),\n\t?assertEqual(ar_util:encode(B0#block.indep_hash),\n\t\t\tproplists:get_value(<<\"indep_hash\">>, JSONStruct)).\n\n%% @doc Test that 
the various different methods of GETing a block all perform\n%% correctly if the block cannot be found.\ntest_get_non_existent_block(_) ->\n\t{ok, {{<<\"404\">>, _}, _, _, _, _}} =\n\t\tar_http:req(#{ method => get, peer => ar_test_node:peer_ip(main), path => \"/block/height/100\" }),\n\t{ok, {{<<\"404\">>, _}, _, _, _, _}} =\n\t\tar_http:req(#{ method => get, peer => ar_test_node:peer_ip(main), path => \"/block2/height/100\" }),\n\t{ok, {{<<\"404\">>, _}, _, _, _, _}} =\n\t\tar_http:req(#{ method => get, peer => ar_test_node:peer_ip(main), path => \"/block/hash/abcd\" }),\n\t{ok, {{<<\"404\">>, _}, _, _, _, _}} =\n\t\tar_http:req(#{ method => get, peer => ar_test_node:peer_ip(main), path => \"/block2/hash/abcd\" }),\n\t{ok, {{<<\"404\">>, _}, _, _, _, _}} =\n\t\tar_http:req(#{ method => get, peer => ar_test_node:peer_ip(main),\n\t\t\t\tpath => \"/block/height/101/wallet_list\" }),\n\t{ok, {{<<\"404\">>, _}, _, _, _, _}} =\n\t\tar_http:req(#{ method => get, peer => ar_test_node:peer_ip(main),\n\t\t\t\tpath => \"/block/hash/abcd/wallet_list\" }),\n\t{ok, {{<<\"404\">>, _}, _, _, _, _}} =\n\t\tar_http:req(#{ method => get, peer => ar_test_node:peer_ip(main),\n\t\t\t\tpath => \"/block/height/101/hash_list\" }),\n\t{ok, {{<<\"404\">>, _}, _, _, _, _}} =\n\t\tar_http:req(#{ method => get, peer => ar_test_node:peer_ip(main),\n\t\t\t\tpath => \"/block/hash/abcd/hash_list\" }).\n\n%% @doc A test for retrieving format=2 transactions from HTTP API.\ntest_get_format_2_tx(_) ->\n\tLocalHeight = ar_node:get_height(),\n\tDataRoot = (ar_tx:generate_chunk_tree(#tx{ data = <<\"DATA\">> }))#tx.data_root,\n\tValidTX = #tx{ id = TXID } = (ar_tx:new(<<\"DATA\">>))#tx{\n\t\t\tformat = 2,\n\t\t\tdata_root = DataRoot },\n\tInvalidDataRootTX = #tx{ id = InvalidTXID } = (ar_tx:new(<<\"DATA\">>))#tx{ format = 2 },\n\tEmptyTX = #tx{ id = EmptyTXID } = (ar_tx:new())#tx{ format = 2 },\n\tEncodedTXID = binary_to_list(ar_util:encode(TXID)),\n\tEncodedInvalidTXID = binary_to_list(ar_util:encode(InvalidTXID)),\n\tEncodedEmptyTXID = binary_to_list(ar_util:encode(EmptyTXID)),\n\tar_http_iface_client:send_tx_json(ar_test_node:peer_ip(main), ValidTX#tx.id,\n\t\t\tar_serialize:jsonify(ar_serialize:tx_to_json_struct(ValidTX))),\n\t{ok, {{<<\"400\">>, _}, _, <<\"The attached data is split in an unknown way.\">>, _, _}} =\n\t\tar_http:req(#{\n\t\t\tmethod => post,\n\t\t\tpeer => ar_test_node:peer_ip(main),\n\t\t\tpath => \"/tx\",\n\t\t\tbody => ar_serialize:jsonify(ar_serialize:tx_to_json_struct(InvalidDataRootTX))\n\t\t}),\n\tar_http_iface_client:send_tx_binary(ar_test_node:peer_ip(main),\n\t\t\tInvalidDataRootTX#tx.id,\n\t\t\tar_serialize:tx_to_binary(InvalidDataRootTX#tx{ data = <<>> })),\n\tar_http_iface_client:send_tx_binary(ar_test_node:peer_ip(main), EmptyTX#tx.id,\n\t\t\tar_serialize:tx_to_binary(EmptyTX)),\n\twait_until_receives_txs([ValidTX, EmptyTX, InvalidDataRootTX]),\n\tar_test_node:mine(),\n\twait_until_height(main, LocalHeight + 1),\n\t%% Ensure format=2 transactions can be retrieved over the HTTP\n\t%% interface with no populated data, while retaining info on all other fields.\n\t{ok, {{<<\"200\">>, _}, _, Body, _, _}} =\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => ar_test_node:peer_ip(main),\n\t\t\tpath => \"/tx/\" ++ EncodedTXID\n\t\t}),\n\t?assertEqual(ValidTX#tx{\n\t\t\tdata = <<>>,\n\t\t\tdata_size = 4\n\t\t}, (ar_serialize:json_struct_to_tx(Body))#tx{ owner_address = not_set }),\n\t%% Ensure data can be fetched for format=2 transactions via /tx/[ID]/data.\n\t{ok, Data} = 
wait_until_syncs_tx_data(TXID),\n\t?assertEqual(ar_util:encode(<<\"DATA\">>), Data),\n\t{ok, {{<<\"404\">>, _}, _, _, _, _}} =\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => ar_test_node:peer_ip(main),\n\t\t\tpath => \"/tx/\" ++ EncodedInvalidTXID ++ \"/data\"\n\t\t}),\n\t%% Ensure /tx/[ID]/data works for format=2 transactions when the data is empty.\n\t{ok, {{<<\"200\">>, _}, _, <<>>, _, _}} =\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => ar_test_node:peer_ip(main),\n\t\t\tpath => \"/tx/\" ++ EncodedEmptyTXID ++ \"/data\"\n\t\t}),\n\t%% Ensure data can be fetched for format=2 transactions via /tx/[ID]/data.html.\n\t{ok, {{<<\"200\">>, _}, Headers, HTMLData, _, _}} =\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => ar_test_node:peer_ip(main),\n\t\t\tpath => \"/tx/\" ++ EncodedTXID ++ \"/data.html\"\n\t\t}),\n\t?assertEqual(<<\"DATA\">>, HTMLData),\n\t?assertEqual(\n\t\t[{<<\"content-type\">>, <<\"text/html\">>}],\n\t\tproplists:lookup_all(<<\"content-type\">>, Headers)\n\t).\n\ntest_get_format_1_tx(_) ->\n\tLocalHeight = ar_node:get_height(),\n\tTX = #tx{ id = TXID } = ar_tx:new(<<\"DATA\">>),\n\tEncodedTXID = binary_to_list(ar_util:encode(TXID)),\n\tar_http_iface_client:send_tx_binary(ar_test_node:peer_ip(main), TX#tx.id,\n\t\t\tar_serialize:tx_to_binary(TX)),\n\twait_until_receives_txs([TX]),\n\tar_test_node:mine(),\n\twait_until_height(main, LocalHeight + 1),\n\t{ok, Body} =\n\t\tar_util:do_until(\n\t\t\tfun() ->\n\t\t\t\tcase ar_http:req(#{\n\t\t\t\t\tmethod => get,\n\t\t\t\t\tpeer => ar_test_node:peer_ip(main),\n\t\t\t\t\tpath => \"/tx/\" ++ EncodedTXID\n\t\t\t\t}) of\n\t\t\t\t\t{ok, {{<<\"404\">>, _}, _, _, _, _}} ->\n\t\t\t\t\t\tfalse;\n\t\t\t\t\t{ok, {{<<\"200\">>, _}, _, Payload, _, _}} ->\n\t\t\t\t\t\t{ok, Payload}\n\t\t\t\tend\n\t\t\tend,\n\t\t\t100,\n\t\t\t2000\n\t\t),\n\t?assertEqual(TX, (ar_serialize:json_struct_to_tx(Body))#tx{ owner_address = not_set }).\n\n%% @doc Test adding transactions to a block.\ntest_add_external_tx_with_tags(_) ->\n\tLocalHeight = ar_node:get_height(),\n\tTX = ar_tx:new(<<\"DATA\">>),\n\tTaggedTX =\n\t\tTX#tx {\n\t\t\ttags =\n\t\t\t\t[\n\t\t\t\t\t{<<\"TEST_TAG1\">>, <<\"TEST_VAL1\">>},\n\t\t\t\t\t{<<\"TEST_TAG2\">>, <<\"TEST_VAL2\">>}\n\t\t\t\t]\n\t\t},\n\tar_http_iface_client:send_tx_json(ar_test_node:peer_ip(main), TaggedTX#tx.id,\n\t\t\tar_serialize:jsonify(ar_serialize:tx_to_json_struct(TaggedTX))),\n\twait_until_receives_txs([TaggedTX]),\n\tar_test_node:mine(),\n\twait_until_height(main, LocalHeight + 1),\n\t[B1Hash | _] = ar_node:get_blocks(),\n\tB1 = read_block_when_stored(B1Hash, true),\n\tTXID = TaggedTX#tx.id,\n\t?assertEqual([TXID], [TX2#tx.id || TX2 <- B1#block.txs]),\n\t?assertEqual(TaggedTX, (ar_storage:read_tx(hd(B1#block.txs)))#tx{ owner_address = not_set }).\n\n%% @doc Test getting transactions\ntest_find_external_tx(_) ->\n\tLocalHeight = ar_node:get_height(),\n\tTX = ar_tx:new(<<\"DATA\">>),\n\tar_http_iface_client:send_tx_binary(ar_test_node:peer_ip(main), TX#tx.id,\n\t\t\tar_serialize:tx_to_binary(TX)),\n\twait_until_receives_txs([TX]),\n\tar_test_node:mine(),\n\twait_until_height(main, LocalHeight + 1),\n\t{ok, FoundTXID} =\n\t\tar_util:do_until(\n\t\t\tfun() ->\n\t\t\t\tcase ar_http_iface_client:get_tx(ar_test_node:peer_ip(main), TX#tx.id) of\n\t\t\t\t\tnot_found ->\n\t\t\t\t\t\tfalse;\n\t\t\t\t\tTX2 ->\n\t\t\t\t\t\tcase TX2#tx.id == TX#tx.id of\n\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\t{ok, TX#tx.id};\n\t\t\t\t\t\t\tfalse 
->\n\t\t\t\t\t\t\t\tfalse\n\t\t\t\t\t\tend\n\t\t\t\tend\n\t\t\tend,\n\t\t\t100,\n\t\t\t5000\n\t\t),\n\t?assertEqual(FoundTXID, TX#tx.id).\n\n%% @doc Post a tx to the network and ensure that the last_tx call returns the ID of the last tx.\ntest_add_tx_and_get_last({_B0, Wallet1, Wallet2, _StaticWallet}) ->\n\tLocalHeight = ar_node:get_height(),\n\tar_test_node:disconnect_from(peer1),\n\t{_Priv1, Pub1} = Wallet1,\n\t{_Priv2, Pub2} = Wallet2,\n\tSignedTX = ar_test_node:sign_tx(Wallet1, #{\n\t\ttarget => ar_wallet:to_address(Pub2),\n\t\tquantity => ?AR(2),\n\t\treward => ?AR(1)}),\n\tID = SignedTX#tx.id,\n\tar_http_iface_client:send_tx_binary(ar_test_node:peer_ip(main), SignedTX#tx.id,\n\t\t\tar_serialize:tx_to_binary(SignedTX)),\n\twait_until_receives_txs([SignedTX]),\n\tar_test_node:mine(),\n\twait_until_height(main, LocalHeight + 1),\n\t{ok, {{<<\"200\">>, _}, _, Body, _, _}} =\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => ar_test_node:peer_ip(main),\n\t\t\tpath => \"/wallet/\"\n\t\t\t\t\t++ binary_to_list(ar_util:encode(ar_wallet:to_address(Pub1)))\n\t\t\t\t\t++ \"/last_tx\"\n\t\t}),\n\t?assertEqual(ID, ar_util:decode(Body)).\n\n%% @doc Post a tx to the network and ensure that its subfields can be gathered.\ntest_get_subfields_of_tx(_) ->\n\tLocalHeight = ar_node:get_height(),\n\tTX = ar_tx:new(<<\"DATA\">>),\n\tar_http_iface_client:send_tx_binary(ar_test_node:peer_ip(main), TX#tx.id,\n\t\t\tar_serialize:tx_to_binary(TX)),\n\twait_until_receives_txs([TX]),\n\tar_test_node:mine(),\n\twait_until_height(main, LocalHeight + 1),\n\t{ok, Body} = wait_until_syncs_tx_data(TX#tx.id),\n\tOrig = TX#tx.data,\n\t?assertEqual(Orig, ar_util:decode(Body)).\n\n%% @doc Check that the pending status is correctly returned for a pending transaction.\ntest_get_pending_tx(_) ->\n\tTX = ar_tx:new(<<\"DATA1\">>),\n\tar_http_iface_client:send_tx_json(ar_test_node:peer_ip(main), TX#tx.id,\n\t\t\tar_serialize:jsonify(ar_serialize:tx_to_json_struct(TX))),\n\twait_until_receives_txs([TX]),\n\t{ok, {{<<\"202\">>, _}, _, Body, _, _}} =\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => ar_test_node:peer_ip(main),\n\t\t\tpath => \"/tx/\" ++ binary_to_list(ar_util:encode(TX#tx.id))\n\t\t}),\n\t?assertEqual(<<\"Pending\">>, Body).\n\n%% @doc Mine a transaction into a block and retrieve its binary body via HTTP.\ntest_get_tx_body(_) ->\n\tar_test_node:disconnect_from(peer1),\n\tLocalHeight = ar_node:get_height(),\n\tTX = ar_tx:new(<<\"TEST DATA\">>),\n\tar_test_node:assert_post_tx_to_peer(main, TX),\n\tar_test_node:mine(),\n\twait_until_height(main, LocalHeight + 1),\n\t{ok, Data} = wait_until_syncs_tx_data(TX#tx.id),\n\t?assertEqual(<<\"TEST DATA\">>, ar_util:decode(Data)).\n\ntest_get_tx_status(_) ->\n\tar_test_node:connect_to_peer(peer1),\n\tHeight = ar_node:get_height(),\n\tassert_wait_until_height(peer1, Height),\n\tar_test_node:disconnect_from(peer1),\n\tTX = (ar_tx:new())#tx{ tags = [{<<\"TestName\">>, <<\"TestVal\">>}] },\n\tar_test_node:assert_post_tx_to_peer(main, TX),\n\tFetchStatus = fun() ->\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => ar_test_node:peer_ip(main),\n\t\t\tpath => \"/tx/\" ++ binary_to_list(ar_util:encode(TX#tx.id)) ++ \"/status\"\n\t\t})\n\tend,\n\t?assertMatch({ok, {{<<\"202\">>, _}, _, <<\"Pending\">>, _, _}}, FetchStatus()),\n\tar_test_node:mine(),\n\twait_until_height(main, Height + 1),\n\tar_util:do_until(\n\t\tfun() ->\n\t\t\tcase FetchStatus() of\n\t\t\t\t{ok, {{<<\"200\">>, _}, _, _, _, _}} -> true;\n\t\t\t\t_ -> 
false\n\t\t\tend\n\t\tend,\n\t\t200,\n\t\t5000\n\t),\n\t{ok, {{<<\"200\">>, _}, _, Body, _, _}} = FetchStatus(),\n\t{Res} = ar_serialize:dejsonify(Body),\n\tBI = ar_node:get_block_index(),\n\t?assertEqual(\n\t\t#{\n\t\t\t<<\"block_height\">> => length(BI) - 1,\n\t\t\t<<\"block_indep_hash\">> => ar_util:encode(element(1, hd(BI))),\n\t\t\t<<\"number_of_confirmations\">> => 1\n\t\t},\n\t\tmaps:from_list(Res)\n\t),\n\tar_test_node:mine(),\n\twait_until_height(main, Height + 2),\n\tar_util:do_until(\n\t\tfun() ->\n\t\t\t{ok, {{<<\"200\">>, _}, _, Body2, _, _}} = FetchStatus(),\n\t\t\t{Res2} = ar_serialize:dejsonify(Body2),\n\t\t\t#{\n\t\t\t\t<<\"block_height\">> => length(BI) - 1,\n\t\t\t\t<<\"block_indep_hash\">> => ar_util:encode(element(1, hd(BI))),\n\t\t\t\t<<\"number_of_confirmations\">> => 2\n\t\t\t} == maps:from_list(Res2)\n\t\tend,\n\t\t200,\n\t\t5000\n\t),\n\t%% Create a fork which returns the TX to mempool.\n\tar_test_node:mine(peer1),\n\tassert_wait_until_height(peer1, Height + 1),\n\tar_test_node:mine(peer1),\n\tassert_wait_until_height(peer1, Height + 2),\n\tar_test_node:connect_to_peer(peer1),\n\tar_test_node:mine(peer1),\n\twait_until_height(main, Height + 3),\n\t?assertMatch({ok, {{<<\"202\">>, _}, _, _, _, _}}, FetchStatus()).\n\ntest_post_unsigned_tx({_B0, Wallet1, _Wallet2, _StaticWallet}) ->\n\tLocalHeight = ar_node:get_height(),\n\t{_, Pub} = Wallet = Wallet1,\n\t%% Generate a wallet and receive a wallet access code.\n\t{ok, {{<<\"421\">>, _}, _, _, _, _}} =\n\t\tar_http:req(#{\n\t\t\tmethod => post,\n\t\t\tpeer => ar_test_node:peer_ip(main),\n\t\t\tpath => \"/wallet\"\n\t\t}),\n\t{ok, Config} = arweave_config:get_env(),\n\ttry\n\t\tarweave_config:set_env(Config#config{\n\t\t\tinternal_api_secret = <<\"correct_secret\">>\n\t\t}),\n\t\t{ok, {{<<\"421\">>, _}, _, _, _, _}} =\n\t\t\tar_http:req(#{\n\t\t\t\tmethod => post,\n\t\t\t\tpeer => ar_test_node:peer_ip(main),\n\t\t\t\tpath => \"/wallet\",\n\t\t\t\theaders => [{<<\"X-Internal-Api-Secret\">>, <<\"incorrect_secret\">>}]\n\t\t\t}),\n\t\t{ok, {{<<\"200\">>, <<\"OK\">>}, _, CreateWalletBody, _, _}} =\n\t\t\tar_http:req(#{\n\t\t\t\tmethod => post,\n\t\t\t\tpeer => ar_test_node:peer_ip(main),\n\t\t\t\tpath => \"/wallet\",\n\t\t\t\theaders => [{<<\"X-Internal-Api-Secret\">>, <<\"correct_secret\">>}]\n\t\t\t}),\n\t\tarweave_config:set_env(Config#config{ internal_api_secret = not_set }),\n\t\t{CreateWalletRes} = ar_serialize:dejsonify(CreateWalletBody),\n\t\t[WalletAccessCode] = proplists:get_all_values(<<\"wallet_access_code\">>, CreateWalletRes),\n\t\t[Address] = proplists:get_all_values(<<\"wallet_address\">>, CreateWalletRes),\n\t\t%% Top up the new wallet.\n\t\tTopUpTX = ar_test_node:sign_tx(Wallet, #{\n\t\t\towner => Pub,\n\t\t\ttarget => ar_util:decode(Address),\n\t\t\tquantity => ?AR(100),\n\t\t\treward => ?AR(1)\n\t\t\t}),\n\t\t{ok, {{<<\"200\">>, _}, _, _, _, _}} =\n\t\t\tar_http:req(#{\n\t\t\t\tmethod => post,\n\t\t\t\tpeer => ar_test_node:peer_ip(main),\n\t\t\t\tpath => \"/tx\",\n\t\t\t\tbody => ar_serialize:jsonify(ar_serialize:tx_to_json_struct(TopUpTX))\n\t\t\t}),\n\t\twait_until_receives_txs([TopUpTX]),\n\t\tar_test_node:mine(),\n\t\twait_until_height(main, LocalHeight + 1),\n\t\t%% Send an unsigned transaction to be signed with the generated key.\n\t\tTX = (ar_tx:new())#tx{reward = ?AR(1), last_tx = TopUpTX#tx.id},\n\t\tUnsignedTXProps = [\n\t\t\t{<<\"last_tx\">>, <<>>},\n\t\t\t{<<\"target\">>, TX#tx.target},\n\t\t\t{<<\"quantity\">>, integer_to_binary(TX#tx.quantity)},\n\t\t\t{<<\"data\">>, 
TX#tx.data},\n\t\t\t{<<\"reward\">>, integer_to_binary(TX#tx.reward)},\n\t\t\t{<<\"denomination\">>, integer_to_binary(TopUpTX#tx.denomination)},\n\t\t\t{<<\"wallet_access_code\">>, WalletAccessCode}\n\t\t],\n\t\t{ok, {{<<\"421\">>, _}, _, _, _, _}} =\n\t\t\tar_http:req(#{\n\t\t\t\tmethod => post,\n\t\t\t\tpeer => ar_test_node:peer_ip(main),\n\t\t\t\tpath => \"/unsigned_tx\",\n\t\t\t\tbody => ar_serialize:jsonify({UnsignedTXProps})\n\t\t\t}),\n\t\tarweave_config:set_env(Config#config{\n\t\t\tinternal_api_secret = <<\"correct_secret\">>\n\t\t}),\n\t\t{ok, {{<<\"421\">>, _}, _, _, _, _}} =\n\t\t\tar_http:req(#{\n\t\t\t\tmethod => post,\n\t\t\t\tpeer => ar_test_node:peer_ip(main),\n\t\t\t\tpath => \"/unsigned_tx\",\n\t\t\t\theaders => [{<<\"X-Internal-Api-Secret\">>, <<\"incorrect_secret\">>}],\n\t\t\t\tbody => ar_serialize:jsonify({UnsignedTXProps})\n\t\t\t}),\n\t\t{ok, {{<<\"200\">>, <<\"OK\">>}, _, Body, _, _}} =\n\t\t\tar_http:req(#{\n\t\t\t\tmethod => post,\n\t\t\t\tpeer => ar_test_node:peer_ip(main),\n\t\t\t\tpath => \"/unsigned_tx\",\n\t\t\t\theaders => [{<<\"X-Internal-Api-Secret\">>, <<\"correct_secret\">>}],\n\t\t\t\tbody => ar_serialize:jsonify({UnsignedTXProps})\n\t\t\t}),\n\t\tarweave_config:set_env(Config#config{ internal_api_secret = not_set }),\n\t\t{Res} = ar_serialize:dejsonify(Body),\n\t\tTXID = proplists:get_value(<<\"id\">>, Res),\n\t\ttimer:sleep(200),\n\t\tar_test_node:mine(),\n\t\twait_until_height(main, LocalHeight + 2),\n\t\ttimer:sleep(200),\n\t\t{ok, {{<<\"200\">>, <<\"OK\">>}, _, GetTXBody, _, _}} =\n\t\t\tar_http:req(#{\n\t\t\t\tmethod => get,\n\t\t\t\tpeer => ar_test_node:peer_ip(main),\n\t\t\t\tpath => \"/tx/\" ++ binary_to_list(TXID) ++ \"/status\"\n\t\t\t}),\n\t\t{GetTXRes} = ar_serialize:dejsonify(GetTXBody),\n\t\t?assertMatch(\n\t\t\t#{\n\t\t\t\t<<\"number_of_confirmations\">> := 1\n\t\t\t},\n\t\t\tmaps:from_list(GetTXRes)\n\t\t)\n\tafter\n\t\tok = arweave_config:set_env(Config)\n\tend.\n\n%% @doc Ensure the HTTP client stops fetching data from an endpoint when its data size\n%% limit is exceeded.\ntest_get_error_of_data_limit(_) ->\n\tLocalHeight = ar_node:get_height(),\n\tLimit = 1460,\n\tTX = ar_tx:new(<< <<0>> || _ <- lists:seq(1, Limit * 2) >>),\n\tar_http_iface_client:send_tx_binary(ar_test_node:peer_ip(main), TX#tx.id,\n\t\t\tar_serialize:tx_to_binary(TX)),\n\twait_until_receives_txs([TX]),\n\tar_test_node:mine(),\n\twait_until_height(main, LocalHeight + 1),\n\t{ok, _} = wait_until_syncs_tx_data(TX#tx.id),\n\tResp =\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => ar_test_node:peer_ip(main),\n\t\t\tpath => \"/tx/\" ++ binary_to_list(ar_util:encode(TX#tx.id)) ++ \"/data\",\n\t\t\tlimit => Limit\n\t\t}),\n\t?assertEqual({error, too_much_data}, Resp).\n\ntest_send_missing_tx_with_the_block({_B0, Wallet1, _Wallet2, _StaticWallet}) ->\n\tar_test_node:disconnect_from(peer1),\n\tLocalHeight = ar_node:get_height(),\n\tRemoteHeight = height(peer1),\n\tTXs = [ar_test_node:sign_tx(Wallet1, #{ last_tx => ar_test_node:get_tx_anchor(peer1) }) || _ <- lists:seq(1, 10)],\n\tlists:foreach(fun(TX) -> ar_test_node:assert_post_tx_to_peer(main, TX) end, TXs),\n\tEverySecondTX = element(2, lists:foldl(fun(TX, {N, Acc}) when N rem 2 /= 0 ->\n\t\t\t{N + 1, [TX | Acc]}; (_TX, {N, Acc}) -> {N + 1, Acc} end, {0, []}, TXs)),\n\tlists:foreach(fun(TX) -> ar_test_node:assert_post_tx_to_peer(peer1, TX) end, EverySecondTX),\n\tar_test_node:mine(),\n\tBI = wait_until_height(main, LocalHeight + 1),\n\tB = ar_storage:read_block(hd(BI)),\n\tB2 = B#block{ txs = 
ar_storage:read_tx(B#block.txs) },\n\tar_test_node:connect_to_peer(peer1),\n\tar_bridge ! {event, block, {new, B2, #{ recall_byte => undefined }}},\n\tassert_wait_until_height(peer1, RemoteHeight + 1).\n\ntest_fallback_to_block_endpoint_if_cannot_send_tx({_B0, Wallet1, _Wallet2, _StaticWallet}) ->\n\tar_test_node:disconnect_from(peer1),\n\tLocalHeight = ar_node:get_height(),\n\tRemoteHeight = height(peer1),\n\tTXs = [ar_test_node:sign_tx(Wallet1, #{ last_tx => ar_test_node:get_tx_anchor(peer1) }) || _ <- lists:seq(1, 10)],\n\tlists:foreach(fun(TX) -> ar_test_node:assert_post_tx_to_peer(main, TX) end, TXs),\n\tEverySecondTX = element(2, lists:foldl(fun(TX, {N, Acc}) when N rem 2 /= 0 ->\n\t\t\t{N + 1, [TX | Acc]}; (_TX, {N, Acc}) -> {N + 1, Acc} end, {0, []}, TXs)),\n\tlists:foreach(fun(TX) -> ar_test_node:assert_post_tx_to_peer(peer1, TX) end, EverySecondTX),\n\tar_test_node:mine(),\n\tBI = wait_until_height(main, LocalHeight + 1),\n\tB = ar_storage:read_block(hd(BI)),\n\tar_test_node:connect_to_peer(peer1),\n\tar_bridge ! {event, block, {new, B, #{ recall_byte => undefined }}},\n\tassert_wait_until_height(peer1, RemoteHeight + 1).\n\ntest_get_recent_hash_list_diff({_B0, Wallet1, _Wallet2, _StaticWallet}) ->\n\tLocalHeight = ar_node:get_height(),\n\tBTip = ar_node:get_current_block(),\n\tar_test_node:disconnect_from(peer1),\n\t{ok, {{<<\"404\">>, _}, _, <<>>, _, _}} = ar_http:req(#{ method => get,\n\t\tpeer => ar_test_node:peer_ip(main), path => \"/recent_hash_list_diff\",\n\t\theaders => [], body => <<>> }),\n\t{ok, {{<<\"400\">>, _}, _, <<>>, _, _}} = ar_http:req(#{ method => get,\n\t\tpeer => ar_test_node:peer_ip(main), path => \"/recent_hash_list_diff\",\n\t\theaders => [], body => crypto:strong_rand_bytes(47) }),\n\t{ok, {{<<\"404\">>, _}, _, <<>>, _, _}} = ar_http:req(#{ method => get,\n\t\tpeer => ar_test_node:peer_ip(main), path => \"/recent_hash_list_diff\",\n\t\theaders => [], body => crypto:strong_rand_bytes(48) }),\n\tB0H = BTip#block.indep_hash,\n\t{ok, {{<<\"200\">>, _}, _, B0H, _, _}} = ar_http:req(#{ method => get,\n\t\tpeer => ar_test_node:peer_ip(main), path => \"/recent_hash_list_diff\",\n\t\theaders => [], body => B0H }),\n\tar_test_node:mine(),\n\tBI1 = wait_until_height(main, LocalHeight + 1),\n\t{B1H, _, _} = hd(BI1),\n\t{ok, {{<<\"200\">>, _}, _, << B0H:48/binary, B1H:48/binary, 0:16 >> , _, _}} =\n\t\tar_http:req(#{ method => get, peer => ar_test_node:peer_ip(main),\n\t\t\t\tpath => \"/recent_hash_list_diff\", headers => [], body => B0H }),\n\tTXs = [ar_test_node:sign_tx(main, Wallet1, #{ last_tx => ar_test_node:get_tx_anchor(peer1) }) || _ <- lists:seq(1, 3)],\n\tlists:foreach(fun(TX) -> ar_test_node:assert_post_tx_to_peer(main, TX) end, TXs),\n\tar_test_node:mine(),\n\tBI2 = wait_until_height(main, LocalHeight + 2),\n\t{B2H, _, _} = hd(BI2),\n\t[TXID1, TXID2, TXID3] = [TX#tx.id || TX <- (ar_node:get_current_block())#block.txs],\n\t{ok, {{<<\"200\">>, _}, _, << B0H:48/binary, B1H:48/binary, 0:16, B2H:48/binary,\n\t\t\t3:16, TXID1:32/binary, TXID2:32/binary, TXID3/binary >> , _, _}}\n\t\t\t= ar_http:req(#{ method => get, peer => ar_test_node:peer_ip(main),\n\t\t\tpath => \"/recent_hash_list_diff\", headers => [], body => B0H }),\n\t{ok, {{<<\"200\">>, _}, _, << B0H:48/binary, B1H:48/binary, 0:16, B2H:48/binary,\n\t\t\t3:16, TXID1:32/binary, TXID2:32/binary, TXID3/binary >> , _, _}}\n\t\t\t= ar_http:req(#{ method => get, peer => ar_test_node:peer_ip(main),\n\t\t\tpath => \"/recent_hash_list_diff\", headers => [],\n\t\t\tbody => << B0H/binary, 
(crypto:strong_rand_bytes(48))/binary >>}),\n\t{ok, {{<<\"200\">>, _}, _, << B1H:48/binary, B2H:48/binary,\n\t\t\t3:16, TXID1:32/binary, TXID2:32/binary, TXID3/binary >> , _, _}}\n\t\t\t= ar_http:req(#{ method => get, peer => ar_test_node:peer_ip(main),\n\t\t\tpath => \"/recent_hash_list_diff\", headers => [],\n\t\t\tbody => << B0H/binary, B1H/binary, (crypto:strong_rand_bytes(48))/binary >>}).\n\ntest_get_total_supply(_Args) ->\n\tBlockDenomination = (ar_node:get_current_block())#block.denomination,\n\tTotalSupply =\n\t\tar_patricia_tree:foldr(\n\t\t\tfun\t(_, {B, _}, Acc) ->\n\t\t\t\t\tAcc + ar_pricing:redenominate(B, 1, BlockDenomination);\n\t\t\t\t(_, {B, _, Denomination, _}, Acc) ->\n\t\t\t\t\tAcc + ar_pricing:redenominate(B, Denomination, BlockDenomination)\n\t\t\tend,\n\t\t\t0,\n\t\t\tar_diff_dag:get_sink(sys:get_state(ar_wallets))\n\t\t),\n\tTotalSupplyBin = integer_to_binary(TotalSupply),\n\t?assertMatch({ok, {{<<\"200\">>, _}, _, TotalSupplyBin, _, _}},\n\t\t\tar_http:req(#{ method => get, peer => ar_test_node:peer_ip(main), path => \"/total_supply\" })).\n\nwait_until_syncs_tx_data(TXID) ->\n\tar_util:do_until(\n\t\tfun() ->\n\t\t\tcase ar_http:req(#{\n\t\t\t\tmethod => get,\n\t\t\t\tpeer => ar_test_node:peer_ip(main),\n\t\t\t\tpath => \"/tx/\" ++ binary_to_list(ar_util:encode(TXID)) ++ \"/data\"\n\t\t\t}) of\n\t\t\t\t{ok, {{<<\"404\">>, _}, _, _, _, _}} ->\n\t\t\t\t\tfalse;\n\t\t\t\t{ok, {{<<\"200\">>, _}, _, <<>>, _, _}} ->\n\t\t\t\t\tfalse;\n\t\t\t\t{ok, {{<<\"200\">>, _}, _, Payload, _, _}} ->\n\t\t\t\t\t{ok, Payload}\n\t\t\tend\n\t\tend,\n\t\t100,\n\t\t10000\n\t).\n\nheight(Node) ->\n\tar_test_node:remote_call(Node, ar_node, get_height, []).\n"
  },
  {
    "path": "apps/arweave/test/ar_http_util_tests.erl",
    "content": "-module(ar_http_util_tests).\n\n-include_lib(\"eunit/include/eunit.hrl\").\n-include_lib(\"arweave/include/ar.hrl\").\n\nget_tx_content_type_test() ->\n\t?assertEqual(\n\t\tnone,\n\t\tcontent_type_from_tags([])\n\t),\n\t?assertEqual(\n\t\t{valid, <<\"text/plain\">>},\n\t\tcontent_type_from_tags([\n\t\t\t{<<\"Content-Type\">>, <<\"text/plain\">>}\n\t\t])\n\t),\n\t?assertEqual(\n\t\t{valid, <<\"text/html; charset=utf-8\">>},\n\t\tcontent_type_from_tags([\n\t\t\t{<<\"Content-Type\">>, <<\"text/html; charset=utf-8\">>}\n\t\t])\n\t),\n\t?assertEqual(\n\t\t{valid, <<\"application/x.arweave-manifest+json\">>},\n\t\tcontent_type_from_tags([\n\t\t\t{<<\"Content-Type\">>, <<\"application/x.arweave-manifest+json\">>}\n\t\t])\n\t),\n\t?assertEqual(\n\t\tinvalid,\n\t\tcontent_type_from_tags([\n\t\t\t{<<\"Content-Type\">>, <<\"application/javascript\\r\\nSet-Cookie: foo=bar\">>}\n\t\t])\n\t).\n\ncontent_type_from_tags(Tags) ->\n\tar_http_util:get_tx_content_type(#tx { tags = Tags }).\n"
  },
  {
    "path": "apps/arweave/test/ar_info_tests.erl",
    "content": "-module(ar_info_tests).\n\n-include_lib(\"eunit/include/eunit.hrl\").\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave/include/ar_chain_stats.hrl\").\n\nrecent_blocks_test_() ->\n\t[\n\t\t{timeout, 300, fun test_recent_blocks_post/0},\n\t\t{timeout, 300, fun test_recent_blocks_announcement/0}\n\t].\n\nrecent_forks_test_() ->\n\t[\n\t\t{timeout, 300, fun test_get_recent_forks/0},\n\t\t{timeout, 300, fun test_recent_forks/0}\n\t].\n\n%% -------------------------------------------------------------------------------------------\n%% Recent blocks tests\n%% -------------------------------------------------------------------------------------------\ntest_recent_blocks_post() ->\n\ttest_recent_blocks(post).\n\ntest_recent_blocks_announcement() ->\n\ttest_recent_blocks(announcement).\n\ntest_recent_blocks(Type) ->\n\t[B0] = ar_weave:init(),\n\tar_test_node:start_peer(peer1, B0),\n\tGenesisBlock = [#{\n\t\t<<\"id\">> => ar_util:encode(B0#block.indep_hash),\n\t\t<<\"received\">> => <<\"pending\">>,\n\t\t<<\"height\">> => 0\n\t}],\n\t?assertEqual(GenesisBlock, get_recent(ar_test_node:peer_ip(peer1), blocks)),\n\n\tTargetHeight = ?CHECKPOINT_DEPTH+2,\n\tPeerBI = lists:foldl(\n\t\tfun(Height, _Acc) ->\n\t\t\tar_test_node:mine(peer1),\n\t\t\tar_test_node:wait_until_height(peer1, Height)\n\t\tend,\n\t\tok,\n\t\tlists:seq(1, TargetHeight)\n\t),\n\t%% Peer1 recent has no timestamps since it hasn't received any of its own blocks\n\t%% gossipped back\n\t?assertEqual(expected_blocks(peer1, PeerBI, true), \n\t\tget_recent(ar_test_node:peer_ip(peer1), blocks)),\n\n\t%% Share blocks to peer1\n\tlists:foreach(\n\t\tfun({H, _WeaveSize, _TXRoot}) ->\n\t\t\ttimer:sleep(1000),\n\t\t\tB = ar_test_node:remote_call(peer1, ar_block_cache, get, [block_cache, H]),\n\t\t\tcase Type of\n\t\t\t\tpost ->\n\t\t\t\t\tar_test_node:send_new_block(ar_test_node:peer_ip(peer1), B);\n\t\t\t\tannouncement ->\n\t\t\t\t\tAnnouncement = #block_announcement{ indep_hash = H,\n\t\t\t\t\t\tprevious_block = B#block.previous_block,\n\t\t\t\t\t\trecall_byte = B#block.recall_byte,\n\t\t\t\t\t\trecall_byte2 = B#block.recall_byte2,\n\t\t\t\t\t\tsolution_hash = B#block.hash,\n\t\t\t\t\t\ttx_prefixes = [] },\n\t\t\t\t\tar_http_iface_client:send_block_announcement(\n\t\t\t\t\t\tar_test_node:peer_ip(peer1), Announcement)\n\t\t\tend\n\t\tend,\n\t\t%% Reverse the list so that the peer receives the blocks in the same order they\n\t\t%% were mined.\n\t\tlists:reverse(lists:sublist(PeerBI, TargetHeight))\n\t),\n\n\t%% Peer1 recent should now have timestamps, but also black out the most recent\n\t%% ones.\n\t?assertEqual(expected_blocks(peer1, PeerBI), \n\t\tget_recent(ar_test_node:peer_ip(peer1), blocks)).\n\t\t\nexpected_blocks(Node, BI) ->\n\texpected_blocks(Node, BI, false).\nexpected_blocks(Node, BI, ForcePending) ->\n\t%% There are a few list reversals that happen here:\n\t%% 1. BI has the blocks in reverse chronological order (latest block first)\n\t%% 2. [Element | Acc] reverses the list into chronological order (latest block last)\n\t%% 3. 
The final lists:reverse puts the list back into reverse chronological order\n\t%%\t(latest block first)\n\tBlocks = lists:foldl(\n\t\tfun({H, _WeaveSize, _TXRoot}, Acc) ->\n\t\t\tB = ar_test_node:remote_call(Node, ar_block_cache, get, [block_cache, H]),\n\t\t\tTimestamp = case ForcePending of\n\t\t\t\ttrue -> <<\"pending\">>;\n\t\t\t\tfalse ->\n\t\t\t\t\tcase length(Acc) < ?RECENT_BLOCKS_WITHOUT_TIMESTAMP of\n\t\t\t\t\t\ttrue -> <<\"pending\">>;\n\t\t\t\t\t\tfalse -> ar_util:timestamp_to_seconds(B#block.receive_timestamp)\n\t\t\t\t\tend\n\t\t\t\tend,\n\t\t\t[#{\n\t\t\t\t<<\"id\">> => ar_util:encode(H),\n\t\t\t\t<<\"received\">> => Timestamp,\n\t\t\t\t<<\"height\">> => B#block.height\n\t\t\t} | Acc]\n\t\tend,\n\t\t[],\n\t\tlists:sublist(BI, ?CHECKPOINT_DEPTH)\n\t),\n\tlists:reverse(Blocks).\n\n%% -------------------------------------------------------------------------------------------\n%% Recent forks tests\n%% -------------------------------------------------------------------------------------------\ntest_get_recent_forks() ->\n\t[B0] = ar_weave:init(),\n\tar_test_node:start(B0),\n\n\tForkRootB1 = #block{ indep_hash = <<\"1\">>, height = 1 },\n\tForkRootB2= #block{ indep_hash = <<\"2\">>, height = 2 },\n\tForkRootB3= #block{ indep_hash = <<\"3\">>, height = 3 },\n\n\tOrphans1 = [<<\"a\">>],\n\ttimer:sleep(5),\n\tar_chain_stats:log_fork(Orphans1, ForkRootB1),\n\tExpectedFork1 = #fork{\n\t\tid = crypto:hash(sha256, list_to_binary(Orphans1)),\n\t\theight = 2,\n\t\tblock_ids = Orphans1\n\t},\n\tassert_forks_json_equal([ExpectedFork1]),\n\n\tOrphans2 = [<<\"b\">>, <<\"c\">>],\n\ttimer:sleep(5),\n\tar_chain_stats:log_fork(Orphans2, ForkRootB1),\n\tExpectedFork2 = #fork{\n\t\tid = crypto:hash(sha256, list_to_binary(Orphans2)),\n\t\theight = 2,\n\t\tblock_ids = Orphans2\n\t},\n\tassert_forks_json_equal([ExpectedFork2, ExpectedFork1]),\n\n\tOrphans3 = [<<\"b\">>, <<\"c\">>, <<\"d\">>],\n\ttimer:sleep(5),\n\tar_chain_stats:log_fork(Orphans3, ForkRootB1),\n\tExpectedFork3 = #fork{\n\t\tid = crypto:hash(sha256, list_to_binary(Orphans3)),\n\t\theight = 2,\n\t\tblock_ids = Orphans3\n\t},\n\tassert_forks_json_equal([ExpectedFork3, ExpectedFork2, ExpectedFork1]),\n\n\tOrphans4 = [<<\"e\">>, <<\"f\">>, <<\"g\">>],\n\ttimer:sleep(5),\n\tar_chain_stats:log_fork(Orphans4, ForkRootB2),\n\tExpectedFork4 = #fork{\n\t\tid = crypto:hash(sha256, list_to_binary(Orphans4)),\n\t\theight = 3,\n\t\tblock_ids = Orphans4\n\t},\n\tassert_forks_json_equal([ExpectedFork4, ExpectedFork3, ExpectedFork2, ExpectedFork1]),\n\n\t%% Same fork seen again - not sure this is possible, but since we're just tracking\n\t%% forks based on when they occur, it should be handled.\n\ttimer:sleep(5),\n\tar_chain_stats:log_fork(Orphans3, ForkRootB1),\n\tassert_forks_json_equal(\n\t\t[ExpectedFork3, ExpectedFork4, ExpectedFork3, ExpectedFork2, ExpectedFork1]),\n\n\t%% If the fork is empty, ignore it.\n\ttimer:sleep(5),\n\tar_chain_stats:log_fork([], ForkRootB2),\n\tassert_forks_json_equal(\n\t\t[ExpectedFork3, ExpectedFork4, ExpectedFork3, ExpectedFork2, ExpectedFork1]),\n\n\t%% Confirm that limiting the number of forks returned is handled correctly (e.g.\n\t%% the oldest fork is not returned)\n\tOrphans5 = [<<\"h\">>, <<\"i\">>, <<\"j\">>],\n\ttimer:sleep(5),\n\tar_chain_stats:log_fork(Orphans5, ForkRootB3),\n\tExpectedFork5 = #fork{\n\t\tid = crypto:hash(sha256, list_to_binary(Orphans5)),\n\t\theight = 4,\n\t\tblock_ids = Orphans5\n\t},\n\tassert_forks_json_equal(\n\t\t[ExpectedFork5, ExpectedFork3, ExpectedFork4, ExpectedFork3, 
ExpectedFork2]),\n\nok.\n\ntest_recent_forks() ->\n\t[B0] = ar_weave:init(),\n\tar_test_node:start(B0),\n\tar_test_node:start_peer(peer1, B0),\n\tar_test_node:start_peer(peer2, B0),\n\tar_test_node:connect_to_peer(peer1),\n\tar_test_node:connect_to_peer(peer2),\n\tar_test_node:connect_peers(peer1, peer2),\n   \n\t%% Mine a few blocks, shared by both peers\n\tar_test_node:mine(peer1),\n\tar_test_node:wait_until_height(peer1, 1),\n\tar_test_node:wait_until_height(peer2, 1),\n\tar_test_node:mine(peer2),\n\tar_test_node:wait_until_height(peer1, 2),\n\tar_test_node:wait_until_height(peer2, 2),\n\tar_test_node:mine(peer1),\n\tar_test_node:wait_until_height(peer1, 3),\n\tar_test_node:wait_until_height(peer2, 3),\n\n\t%% Disconnect peers, and have peer1 mine 1 block, and peer2 mine 3\n\tar_test_node:disconnect_from(peer1),\n\tar_test_node:disconnect_from(peer2),\n\tar_test_node:disconnect_peers(peer1, peer2),\n\n\tar_test_node:mine(peer1),\n\tBI1 = ar_test_node:wait_until_height(peer1, 4),\n\tOrphans1 = [ID || {ID, _, _} <- lists:sublist(BI1, 1)],\n\tFork1 = #fork{\n\t\tid = crypto:hash(sha256, list_to_binary(Orphans1)),\n\t\theight = 4,\n\t\tblock_ids = Orphans1\n\t},\n\t\t\n\tar_test_node:mine(peer2),\n\tar_test_node:wait_until_height(peer2, 4),\n\tar_test_node:mine(peer2),\n\tar_test_node:wait_until_height(peer2, 5),\n\tar_test_node:mine(peer2),\n\tar_test_node:wait_until_height(peer2, 6),\n\n\t%% Reconnect the peers. This will orphan peer1's block\n\tar_test_node:connect_to_peer(peer2),\n\tar_test_node:wait_until_height(main, 6),\n\n\tar_test_node:connect_to_peer(peer1),\n\tar_test_node:wait_until_height(peer1, 6),\n\n\tar_test_node:connect_peers(peer1, peer2),\n\tar_test_node:wait_until_height(peer2, 6),\n\n\t%% Disconnect peers, and have peer1 mine 2 blocks, and peer2 mine 3\n\tar_test_node:disconnect_from(peer1),\n\tar_test_node:disconnect_from(peer2),\n\tar_test_node:disconnect_peers(peer1, peer2),\n\n\tar_test_node:mine(peer1),\n\tar_test_node:wait_until_height(peer1, 7),\n\tar_test_node:mine(peer1),\n\tBI2 = ar_test_node:wait_until_height(peer1, 8),\n\tOrphans2 = [ID || {ID, _, _} <- lists:reverse(lists:sublist(BI2, 2))],\n\tFork2 = #fork{\n\t\tid = crypto:hash(sha256, list_to_binary(Orphans2)),\n\t\theight = 7,\n\t\tblock_ids = Orphans2\n\t},\n\n\tar_test_node:mine(peer2),\n\tar_test_node:wait_until_height(peer2, 7),\n\tar_test_node:mine(peer2),\n\tar_test_node:wait_until_height(peer2, 8),\n\tar_test_node:mine(peer2),\n\tar_test_node:wait_until_height(peer2, 9),\n\n\t%% Reconnect the peers. 
This will create a second fork as peer1's blocks are orphaned\n\tar_test_node:connect_to_peer(peer2),\n\tar_test_node:wait_until_height(main, 9),\n\n\tar_test_node:connect_to_peer(peer1),\n\tar_test_node:wait_until_height(peer1, 9),\n\n\tar_test_node:connect_peers(peer1, peer2),\n\tar_test_node:wait_until_height(peer2, 9),\n\n\n\tar_test_node:disconnect_from(peer1),\n\tar_test_node:disconnect_from(peer2),\n\tar_test_node:disconnect_peers(peer1, peer2),\n\n\tassert_forks_json_equal([Fork2, Fork1], get_recent(ar_test_node:peer_ip(peer1), forks)),\n\tok.\n\nassert_forks_json_equal(ExpectedForks) ->\n\tassert_forks_json_equal(ExpectedForks, get_recent(ar_test_node:peer_ip(main), forks)).\n\nassert_forks_json_equal(ExpectedForks, ActualForks) ->\n\tExpectedForksStripped = [ \n\t\t#{\n\t\t\t<<\"id\">> => ar_util:encode(Fork#fork.id),\n\t\t\t<<\"height\">> => Fork#fork.height,\n\t\t\t<<\"blocks\">> => [ ar_util:encode(BlockID) || BlockID <- Fork#fork.block_ids ]\n\t\t} \n\t\t|| Fork <- ExpectedForks],\n\tActualForksStripped = [ maps:remove(<<\"timestamp\">>, Fork) || Fork <- ActualForks ],\n\t?assertEqual(ExpectedForksStripped, ActualForksStripped).\n\t\nget_recent(Peer, Type) ->\n\tcase get_recent(Peer) of\n\t\tinfo_unavailable -> info_unavailable;\n\t\tInfo ->\n\t\t\tmaps:get(atom_to_binary(Type), Info)\n\tend.\nget_recent(Peer) ->\n\tcase\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => Peer,\n\t\t\tpath => \"/recent\",\n\t\t\tconnect_timeout => 1000,\n\t\t\ttimeout => 2 * 1000\n\t\t})\n\tof\n\t\t{ok, {{<<\"200\">>, _}, _, JSON, _, _}} -> \n\t\t\tcase ar_serialize:json_decode(JSON, [return_maps]) of\n\t\t\t\t{ok, JsonMap} ->\n\t\t\t\t\tJsonMap;\n\t\t\t\t{error, _} ->\n\t\t\t\t\tinfo_unavailable\n\t\t\tend;\n\t\t_ -> info_unavailable\n\tend."
  },
  {
    "path": "apps/arweave/test/ar_mempool_tests.erl",
    "content": "-module(ar_mempool_tests).\n\n-include_lib(\"arweave/include/ar.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\nstart_node() ->\n\t%% Starting a node is slow so we'll run it once for the whole test module\n\tKey = ar_wallet:new(),\n\tOtherKey = ar_wallet:new(),\n\tLastTXID = crypto:strong_rand_bytes(32), \n\t[B0] = ar_weave:init([\n\t\twallet(Key, 1000, LastTXID),\n\t\twallet(OtherKey, 800, crypto:strong_rand_bytes(32))\n\t]),\n\tar_test_node:start(B0),\n\tar_test_node:start_peer(peer1, B0),\n\tar_test_node:connect_to_peer(peer1),\n\tets:insert(node_state, [{wallet_list, B0#block.wallet_list}]),\n\t{Key, LastTXID, OtherKey, B0}.\n\nreset_node_state() ->\n\tar_mempool:reset(),\n\tets:delete_all_objects(ar_tx_emitter_recently_emitted),\n\tets:match_delete(node_state, {{tx, '_'}, '_'}),\n\tets:match_delete(node_state, {{tx_prefixes, '_'}, '_'}).\n\nadd_tx_test_() ->\n\tTimeout = 30,\n\t{setup, fun start_node/0,\n\t\tfun (GenesisData) ->\n\t\t\t{foreach, fun reset_node_state/0,\n\t\t\t\t[\n\t\t\t\t\t{timeout, Timeout, {with, GenesisData, [fun test_mempool_sorting/1]}},\n\t\t\t\t\t{timeout, Timeout, {with, GenesisData, [fun test_drop_low_priority_txs_header/1]}},\n\t\t\t\t\t{timeout, Timeout, {with, GenesisData, [fun test_drop_low_priority_txs_data/1]}},\n\t\t\t\t\t{timeout, Timeout, {with, GenesisData, [fun test_drop_low_priority_txs_data_and_header/1]}},\n\t\t\t\t\t{timeout, Timeout, {with, GenesisData, [fun test_clashing_last_tx/1]}},\n\t\t\t\t\t{timeout, Timeout, {with, GenesisData, [fun test_overspent_tx/1]}},\n\t\t\t\t\t{timeout, Timeout, {with, GenesisData, [fun test_mixed_deposit_spend_tx_old_address/1]}},\n\t\t\t\t\t{timeout, Timeout, {with, GenesisData, [fun test_mixed_deposit_spend_tx_new_address/1]}},\n\t\t\t\t\t{timeout, Timeout, {with, GenesisData, [fun test_clash_and_overspend_tx/1]}},\n\t\t\t\t\t{timeout, Timeout, {with, GenesisData, [fun test_clash_and_low_priority_tx/1]}},\n\t\t\t\t\t{timeout, Timeout, {with, GenesisData, [fun test_load_from_disk_denomination/1]}}\n\t\t\t]\n\t\t}\n\tend\n}.\n\n%% @doc Test that mempool transactions are correctly sorted in priority order\ntest_mempool_sorting({{_, {_, Owner}}, _LastTXID, _OtherKey, _B0}) ->\n\t%% Transactions are named with their expected, prioritized order.\n\t%% The sorting is assumed to consider, in order:\n\t%% 1. Reward (higher reward is higher priority)\n\t%% 2. Timestamp (lower timestamp is higher priority)\n\t%% 3. 
Format 1 transactions with a lot of data are deprioritized\n\t%% \n\t%% Only the above criteria are expected to impact sort order\n\tTX9 = tx(1, Owner, 15, crypto:strong_rand_bytes(200)),\n\tar_mempool:add_tx(TX9, waiting),\n\tTX8 = tx(1, Owner, 20, crypto:strong_rand_bytes(200)),\n\tar_mempool:add_tx(TX8, waiting),\n\tTX5 = tx(2, Owner, 1, <<>>),\n\tar_mempool:add_tx(TX5, waiting),\n\tTX6 = tx(2, Owner, 1, <<\"abc\">>),\n\tar_mempool:add_tx(TX6, waiting),\n\tTX7 = tx(1, Owner, 1, <<>>),\n\tar_mempool:add_tx(TX7, waiting),\n\tTX1 = tx(1, Owner, 10, <<>>),\n\tar_mempool:add_tx(TX1, waiting),\n\tTX2 = tx(1, Owner, 10, <<\"abcdef\">>),\n\tar_mempool:add_tx(TX2, waiting),\n\tTX3 = tx(2, Owner, 10, <<>>),\n\tar_mempool:add_tx(TX3, waiting),\n\tTX4 = tx(1, Owner, 10, <<>>),\n\tar_mempool:add_tx(TX4, waiting),\n\n\t%% {HeaderSize, DataSize}\n\t%% HeaderSize: TX_SIZE_BASE per transaction plus the data size of all\n\t%% format 1 transactions.\n\t%% DataSize: the total data size of all format 2 transactions\n\tExpectedMempoolSize = {(9* ?TX_SIZE_BASE) + 200 + 200 + 6, 3},\n\tExpectedTXIDs = [\n\t\tTX1#tx.id,\n\t\tTX2#tx.id,\n\t\tTX3#tx.id,\n\t\tTX4#tx.id,\n\t\tTX5#tx.id,\n\t\tTX6#tx.id,\n\t\tTX7#tx.id,\n\t\tTX8#tx.id,\n\t\tTX9#tx.id\n\t\t],\n\n\tassertMempoolTXIDs(ExpectedTXIDs, \"Sorted mempool transactions\"),\n\tassertMempoolSize(ExpectedMempoolSize).\n\t\n%% @doc Test dropping transactions when the mempool max header size is exceeded\ntest_drop_low_priority_txs_header({{_, {_, Owner}}, _LastTXID, _OtherKey, _B0}) ->\n\t%% Add 9x Format 1 transactions each with a data size equal to\n\t%% 1/10 the MEMPOOL_HEADER_SIZE_LIMIT. This puts us close to exceeding the\n\t%% header size limit\n\tNumTransactions = 9,\n\tDataSize = ?MEMPOOL_HEADER_SIZE_LIMIT div (NumTransactions + 1),\n\t{ExpectedTXIDs, HighestReward, LowestReward} =\n\t\tadd_transactions(NumTransactions, 1, Owner, DataSize),\n\tExpectedMempoolSize = {NumTransactions * (?TX_SIZE_BASE + DataSize),0},\n\n\tassertMempoolTXIDs(ExpectedTXIDs, \"Mempool is below the header size limit\"),\n\tassertMempoolSize(ExpectedMempoolSize),\n\n\t%% Add multiple low priority transactions to push us over the header size\n\t%% limit. 
All of these new transactions should be dropped\n\tar_mempool:add_tx(\n\t\ttx(1, Owner, LowestReward-2, crypto:strong_rand_bytes(500)), waiting),\n\tar_mempool:add_tx(\n\t\ttx(1, Owner, LowestReward-2, crypto:strong_rand_bytes(500)), waiting),\n\tar_mempool:add_tx(\n\t\ttx(1, Owner, LowestReward-2, crypto:strong_rand_bytes(500)), waiting),\n\tar_mempool:add_tx(\n\t\ttx(1, Owner, LowestReward-1, crypto:strong_rand_bytes(DataSize)), waiting),\n\tassertMempoolTXIDs(\n\t\tExpectedTXIDs, \"Multiple low priority TX pushed Mempool over the header size limit\"),\n\tassertMempoolSize(ExpectedMempoolSize),\n\n\t%% Add a high priority transaction to push us over the header size limit.\n\t%% This newest transaction should *not* be dropped, instead a lower priority\n\t%% transaction should be dropped\n\tTXHigh = tx(1, Owner, HighestReward+1, crypto:strong_rand_bytes(DataSize)),\n\tar_mempool:add_tx(TXHigh, waiting),\n\tExpectedTXIDs2 = [TXHigh#tx.id | lists:delete(lists:last(ExpectedTXIDs), ExpectedTXIDs)],\n\tassertMempoolTXIDs(\n\t\tExpectedTXIDs2, \"High priority TX pushed Mempool over the header size limit\"),\n\tassertMempoolSize(ExpectedMempoolSize).\n\n%% @doc Test dropping transactions when the mempool max data size is exceeded\ntest_drop_low_priority_txs_data({{_, {_, Owner}}, _LastTXID, _OtherKey, _B0}) ->\n\t%% Add 9x Format 2 transactions each with a data size slightly larger than\n\t%% 1/10 the MEMPOOL_DATA_SIZE_LIMIT. This puts us close to exceeding the\n\t%% data size limit\n\tNumTransactions = 9,\n\tDataSize = (?MEMPOOL_DATA_SIZE_LIMIT div (NumTransactions+1)) + 1,\n\t{ExpectedTXIDs, HighestReward, LowestReward} =\n\t\tadd_transactions(NumTransactions, 2, Owner, DataSize),\n\tExpectedMempoolSize = {NumTransactions * ?TX_SIZE_BASE, NumTransactions * DataSize},\n\n\tassertMempoolTXIDs(ExpectedTXIDs, \"Mempool is below the data size limit\"),\n\tassertMempoolSize(ExpectedMempoolSize),\n\n\t%% Add multiple low priority transactions to push us over the data size\n\t%% limit. 
All of these new transactions should be dropped\n\tar_mempool:add_tx(\n\t\ttx(2, Owner, LowestReward-2, crypto:strong_rand_bytes(500)), waiting),\n\tar_mempool:add_tx(\n\t\ttx(2, Owner, LowestReward-2, crypto:strong_rand_bytes(500)), waiting),\n\tar_mempool:add_tx(\n\t\ttx(2, Owner, LowestReward-2, crypto:strong_rand_bytes(500)), waiting),\n\tar_mempool:add_tx(\n\t\ttx(2, Owner, LowestReward-1, crypto:strong_rand_bytes(DataSize)), waiting),\n\tassertMempoolTXIDs(\n\t\tExpectedTXIDs, \"Low priority TX pushed Mempool over the data size limit\"),\n\tassertMempoolSize(ExpectedMempoolSize),\n\n\t%% Add a high priority transaction to push us over the data size limit.\n\t%% This newest transaction should *not* be dropped, instead a lower priority\n\t%% transaction should be dropped\n\tTXHigh = tx(2, Owner, HighestReward+1, crypto:strong_rand_bytes(DataSize)),\n\tar_mempool:add_tx(TXHigh, waiting),\n\tExpectedTXIDs2 = [TXHigh#tx.id | lists:delete(lists:last(ExpectedTXIDs), ExpectedTXIDs)],\n\tassertMempoolTXIDs(\n\t\tExpectedTXIDs2, \"High priority TX pushed Mempool over the data size limit\"),\n\tassertMempoolSize(ExpectedMempoolSize).\n\n%% @doc Test dropping transactions when both the mempool data size and header\n%% size are exceeded\ntest_drop_low_priority_txs_data_and_header({{_, {_, Owner}}, _LastTXID, _OtherKey, _B0}) ->\n\tNumTransactions = 9,\n\tFormat1DataSize = ?MEMPOOL_HEADER_SIZE_LIMIT div (NumTransactions+1),\n\t{Format1ExpectedTXIDs, _Format1HighestReward, Format1LowestReward} =\n\t\tadd_transactions(NumTransactions, 1, Owner, Format1DataSize),\n\tFormat2DataSize = (?MEMPOOL_DATA_SIZE_LIMIT div (NumTransactions+1)) + 1,\n\t{Format2ExpectedTXIDs, _Format2HighestReward, Format2LowestReward} =\n\t\tadd_transactions(NumTransactions, 2, Owner, Format2DataSize),\n\n\tExpectedTXIDs = Format2ExpectedTXIDs ++ Format1ExpectedTXIDs,\n\t{ExpectedHeaderSize, ExpectedDataSize} = {\n\t\t(2 * NumTransactions * ?TX_SIZE_BASE) + (NumTransactions * Format1DataSize),\n\t\tNumTransactions * Format2DataSize},\n\n\tassertMempoolTXIDs(ExpectedTXIDs, \"Mempool is below both the data and header size limits\"),\n\tassertMempoolSize({ExpectedHeaderSize, ExpectedDataSize}),\n\n\tRemainingHeaderSpace = ?MEMPOOL_HEADER_SIZE_LIMIT - ExpectedHeaderSize,\n\tar_mempool:add_tx(\n\t\ttx(\n\t\t\t1, \n\t\t\tOwner, \n\t\t\tFormat1LowestReward-2,\n\t\t\tcrypto:strong_rand_bytes(RemainingHeaderSpace - ?TX_SIZE_BASE)\n\t\t), waiting),\n\t%% This transaction will cause both the header and data size limits to be\n\t%% exceeded simultaneously\n\tar_mempool:add_tx(\n\t\ttx(\n\t\t\t2,\n\t\t\tOwner,\n\t\t\tFormat2LowestReward-1,\n\t\t\tcrypto:strong_rand_bytes(Format2DataSize)\n\t\t), waiting),\n\n\t%% Last two transactions should be dropped\n\tassertMempoolTXIDs(\n\t\tExpectedTXIDs, \"TX pushed the Mempool over both the header and data size limits\").\n\n%% @doc Test that only 1 TX with a given last_tx can exist in the mempool.\ntest_clashing_last_tx({{_, {_, Owner}}, LastTXID, _OtherKey, B0}) ->\n\tBaseID = crypto:strong_rand_bytes(31),\n\t\n\t%% Add some extra, non-clashing, transactions to test that only clashing\n\t%% transactions are dropped\n\tNumTransactions = 2,\n\tFormat1DataSize = 200,\n\t{Format1ExpectedTXIDs, Format1HighestReward, _} =\n\t\tadd_transactions(NumTransactions, 1, Owner, Format1DataSize),\n\tFormat2DataSize = 50,\n\t{Format2ExpectedTXIDs, Format2HighestReward, _} =\n\t\tadd_transactions(NumTransactions, 2, Owner, Format2DataSize),\n\n\tExpectedTXIDs = Format2ExpectedTXIDs ++ 
Format1ExpectedTXIDs,\n\tTest0 = \"Test 0: Transactions with empty last_tx can never clash\",\n\tassertMempoolTXIDs(ExpectedTXIDs, Test0),\n\n\tTest1 = \"Test 1: Lower reward TX is dropped\",\n\tTX1 = tx(2, Owner, Format2HighestReward+2, <<>>, <<\"c\", BaseID/binary>>, LastTXID),\n\tTX2 = tx(2, Owner, Format2HighestReward+1, <<>>, <<\"d\", BaseID/binary>>, LastTXID),\n\tar_mempool:add_tx(TX1, waiting),\n\tar_mempool:add_tx(TX2, waiting),\n\tassertMempoolTXIDs([TX1#tx.id | ExpectedTXIDs], Test1),\n\n\tTest2 = \"Test 2: Higher reward TX replace existing TX with lower reward\",\n\tTX3 = tx(2, Owner, Format2HighestReward+3, <<>>, <<\"e\", BaseID/binary>>,  LastTXID),\n\tar_mempool:add_tx(TX3, waiting),\n\tassertMempoolTXIDs([TX3#tx.id | ExpectedTXIDs], Test2),\n\n\tTest3 = \"Test 3: Higher TXID alphanumeric order replaces lower\",\n\tTX4 = tx(2, Owner, Format2HighestReward+3, <<>>, <<\"f\", BaseID/binary>>, LastTXID),\n\tar_mempool:add_tx(TX4, waiting),\n\tassertMempoolTXIDs([TX4#tx.id | ExpectedTXIDs], Test3),\n\n\tTest4 = \"Test 4: Lower TXID alphanumeric order is dropped\",\n\tTX5 = tx(2, Owner, Format2HighestReward+3, <<>>, <<\"b\", BaseID/binary>>, LastTXID),\n\tar_mempool:add_tx(TX5, waiting),\n\tassertMempoolTXIDs([TX4#tx.id | ExpectedTXIDs], Test4),\n\n\tTest5 = \"Test 5: Deprioritized format 1 TX is dropped\",\n\tTX6 = tx(\n\t\t\t1,\n\t\t\tOwner,\n\t\t\tFormat1HighestReward+3,\n\t\t\tcrypto:strong_rand_bytes(200), <<\"g\", BaseID/binary>>,\n\t\t\tLastTXID\n\t\t),\n\tar_mempool:add_tx(TX6, waiting),\n\tassertMempoolTXIDs([TX4#tx.id | ExpectedTXIDs], Test5),\n\n\tTest6 = \"Test 6: High priority format 1 replaces a low priority format 2\",\n\tTX7 = tx(1, Owner, Format1HighestReward+3, <<>>, <<\"h\", BaseID/binary>>, LastTXID),\n\tar_mempool:add_tx(TX7, waiting),\n\tassertMempoolTXIDs([TX7#tx.id | ExpectedTXIDs], Test6),\n\n\tTest7 = \"Test 7: TX last_tx set to block hash can not clash\",\n\tTX8 = tx(\n\t\t\t2,\n\t\t\tOwner,\n\t\t\tFormat2HighestReward+4,\n\t\t\t<<>>,\n\t\t\t<<\"i\", BaseID/binary>>,\n\t\t\tB0#block.indep_hash\n\t\t),\n\tTX9 = tx(\n\t\t\t2,\n\t\t\tOwner,\n\t\t\tFormat2HighestReward+5,\n\t\t\t<<>>,\n\t\t\t<<\"j\", BaseID/binary>>,\n\t\t\tB0#block.indep_hash\n\t\t),\n\tar_mempool:add_tx(TX8, waiting),\n\tar_mempool:add_tx(TX9, waiting),\n\tExpectedTXIDs2 = [TX9#tx.id | [TX8#tx.id | [TX7#tx.id | ExpectedTXIDs]]],\n\tassertMempoolTXIDs(ExpectedTXIDs2, Test7),\t\n\n\tTest8 = \"Test 8: TX last_tx set to <<>> can not clash\",\n\tTX10 = tx(2, Owner, Format2HighestReward+6, <<>>, <<\"k\", BaseID/binary>>, <<>>),\n\tTX11 = tx(2, Owner, Format2HighestReward+7, <<>>, <<\"l\", BaseID/binary>>, <<>>),\n\tar_mempool:add_tx(TX10, waiting),\n\tar_mempool:add_tx(TX11, waiting),\n\tassertMempoolTXIDs([TX11#tx.id | [TX10#tx.id | ExpectedTXIDs2]], Test8).\n\n%% @doc Test that TXs that would overspend an account are dropped from the\n%% mempool.\ntest_overspent_tx({{_, {_, Owner}}, _LastTXID, _OtherKey, _B0}) ->\n\tBaseID = crypto:strong_rand_bytes(31),\n\n\tTest1 = \"Test 1: Lower reward TX is dropped\",\n\tTX1 = tx(2, Owner, 3, <<>>, <<\"c\", BaseID/binary>>, <<>>, 400),\n\tTX2 = tx(2, Owner, 2, <<>>, <<\"d\", BaseID/binary>>, <<>>, 400),\n\tTX3 = tx(2, Owner, 1, <<>>, <<\"e\", BaseID/binary>>, <<>>, 400),\n\tar_mempool:add_tx(TX1, waiting),\n\tar_mempool:add_tx(TX2, waiting),\n\tar_mempool:add_tx(TX3, waiting),\n\tassertMempoolTXIDs([TX1#tx.id, TX2#tx.id], Test1),\n\n\tTest2 = \"Test 2: Higher reward TX replace existing TX with lower reward\",\n\tTX4 = tx(2, Owner, 4, <<>>, <<\"f\", 
BaseID/binary>>, <<>>, 400),\n\tar_mempool:add_tx(TX4, waiting),\n\tassertMempoolTXIDs([TX4#tx.id, TX1#tx.id], Test2),\n\n\tTest3 = \"Test 3: Higher TXID alphanumeric order replaces lower\",\n\tTX5 = tx(2, Owner, 3, <<>>, <<\"g\", BaseID/binary>>, <<>>, 400),\n\tar_mempool:add_tx(TX5, waiting),\n\tassertMempoolTXIDs([TX4#tx.id, TX5#tx.id], Test3),\n\n\tTest4 = \"Test 4: Lower TXID alphanumeric order is dropped\",\n\tTX6 = tx(2, Owner, 3, <<>>, <<\"b\", BaseID/binary>>, <<>>, 400),\n\tar_mempool:add_tx(TX6, waiting),\n\tassertMempoolTXIDs([TX4#tx.id, TX5#tx.id], Test4),\n\n\tTest5 = \"Test 5: Deprioritized format 1 TX is dropped\",\n\tTX7 = tx(1, Owner, 3, crypto:strong_rand_bytes(200), <<\"h\", BaseID/binary>>, <<>>, 400),\n\tar_mempool:add_tx(TX7, waiting),\n\tassertMempoolTXIDs([TX4#tx.id, TX5#tx.id], Test5),\n\n\tTest6 = \"Test 6: High priority format 1 replaces a low priority format 2\",\n\tTX8 = tx(1, Owner, 3, <<>>, <<\"i\", BaseID/binary>>, <<>>, 400),\n\tar_mempool:add_tx(TX8, waiting),\n\tassertMempoolTXIDs([TX4#tx.id, TX8#tx.id], Test6),\n\n\tTest7 = \"Test 7: 0 quantity TX can still overspend\",\n\tTX9 = tx(2, Owner, 400, <<>>, <<\"j\", BaseID/binary>>, <<>>, 0),\n\tar_mempool:add_tx(TX9, waiting),\n\tassertMempoolTXIDs([TX9#tx.id, TX4#tx.id], Test7).\n\n%% @doc Test that unconfirmed deposit TXs are ignored when determining whether an\n%% account is overspent. In this case the deposit comes from an address which \n%% has an on-chain balance.\ntest_mixed_deposit_spend_tx_old_address({\n\t\t{_, Pub} = {_, {_, Owner}},\n\t\t_LastTXID,\n\t\t{_, {_, OtherOwner}},\n\t\t_B0}) ->\n\tBaseID = crypto:strong_rand_bytes(31),\n\tOrigin = ar_wallet:to_address(Pub),\n\n\tTest1 = \"Test 1: Unconfirmed deposits from old addresses are not considered for overspend\",\n\tTX2 = tx(2, OtherOwner, 9, <<>>, <<\"d\", BaseID/binary>>, <<>>, 500, Origin),\n\tTX3 = tx(2, Owner, 8, <<>>, <<\"c\", BaseID/binary>>, <<>>, 600),\n\tTX4 = tx(2, Owner, 7, <<>>, <<\"b\", BaseID/binary>>, <<>>, 600),\n\tar_mempool:add_tx(TX2, waiting),\n\tar_mempool:add_tx(TX3, waiting),\n\tar_mempool:add_tx(TX4, waiting),\n\tassertMempoolTXIDs([TX2#tx.id, TX3#tx.id], Test1).\n\n%% @doc Test that unconfirmed deposit TXs are ignored when determining whether an\n%% account is overspent. 
In this case the deposit comes from an address which\n%% has not made it on-chain yet (deposit and spend are both in the mempool).\ntest_mixed_deposit_spend_tx_new_address({\n\t\t{_, Pub} = {_, {_, Owner}}, _LastTXID, _OtherKey, _B0}) ->\n\tBaseID = crypto:strong_rand_bytes(31),\n\tOrigin = ar_wallet:to_address(Pub),\n\n\t{_, NewPub} = {_, {_, NewOwner}} = ar_wallet:new(),\n\tNewAddr = ar_wallet:to_address(NewPub),\n\n\tTest1 = \"Test 1: Unconfirmed deposits from new addresses are not considered for overspend\",\n\tTX1 = tx(2, Owner, 10, <<>>, <<\"e\", BaseID/binary>>, <<>>, 400, NewAddr),\n\tTX2 = tx(2, NewOwner, 9, <<>>, <<\"d\", BaseID/binary>>, <<>>, 400, Origin),\n\tTX3 = tx(2, Owner, 8, <<>>, <<\"c\", BaseID/binary>>, <<>>, 400),\n\tTX4 = tx(2, Owner, 7, <<>>, <<\"b\", BaseID/binary>>, <<>>, 400),\n\tar_mempool:add_tx(TX1, waiting),\n\tar_mempool:add_tx(TX2, waiting),\n\tar_mempool:add_tx(TX3, waiting),\n\tar_mempool:add_tx(TX4, waiting),\n\tassertMempoolTXIDs([TX1#tx.id, TX3#tx.id], Test1).\n\n%% @doc Test a TX that has a last_tx clash and overspends an account is\n%% handled correctly.\ntest_clash_and_overspend_tx({{_, {_, Owner}}, LastTXID, _OtherKey, _B0}) ->\n\tBaseID = crypto:strong_rand_bytes(31),\n\n\tTest1 = \"Test 1: Clashing TXs are dropped before overspend is calculated\",\n\tTX1 = tx(2, Owner, 3, <<>>, <<\"d\", BaseID/binary>>, <<>>, 400),\n\tTX2 = tx(2, Owner, 2, <<>>, <<\"c\", BaseID/binary>>, LastTXID, 400),\n\tTX3 = tx(2, Owner, 1, <<>>, <<\"b\", BaseID/binary>>, LastTXID, 400),\n\tTX4 = tx(2, Owner, 4, <<>>, <<\"e\", BaseID/binary>>, LastTXID, 400),\n\tar_mempool:add_tx(TX1, waiting),\n\tar_mempool:add_tx(TX2, waiting),\n\tar_mempool:add_tx(TX3, waiting),\n\tar_mempool:add_tx(TX4, waiting),\n\tassertMempoolTXIDs([TX4#tx.id, TX1#tx.id], Test1).\n\n%% @doc Test that the right TXs are dropped when the mempool max data size is reached due to\n%% clashing TXs. Only the clashing TXs should be dropped.\ntest_clash_and_low_priority_tx({{_, {_, Owner}}, LastTXID, _OtherKey, _B0}) ->\n\t%% Add 9x Format 2 transactions each with a data size slightly larger than\n\t%% 1/10 the MEMPOOL_DATA_SIZE_LIMIT. 
This puts us close to exceeding the\n\t%% data size limit\n\tNumTransactions = 9,\n\tDataSize = (?MEMPOOL_DATA_SIZE_LIMIT div (NumTransactions+1)) + 1,\n\t{ExpectedTXIDs, HighestReward, LowestReward} =\n\t\tadd_transactions(NumTransactions, 2, Owner, DataSize),\n\n\tTX1 = tx(2, Owner, HighestReward+1, <<>>, crypto:strong_rand_bytes(32), LastTXID),\n\tar_mempool:add_tx(TX1, waiting),\n\n\tExpectedMempoolSize = {(NumTransactions+1) * ?TX_SIZE_BASE, NumTransactions * DataSize},\n\n\tassertMempoolTXIDs([TX1#tx.id] ++ ExpectedTXIDs, \"Mempool is below the data size limit\"),\n\tassertMempoolSize(ExpectedMempoolSize),\n\n\tClashTX = tx(\n\t\t2, Owner, LowestReward-1, crypto:strong_rand_bytes(DataSize),\n\t\tcrypto:strong_rand_bytes(32), LastTXID),\n\tar_mempool:add_tx(ClashTX, waiting),\n\n\tassertMempoolTXIDs([TX1#tx.id] ++ ExpectedTXIDs, \"Clashing TX dropped\"),\n\tassertMempoolSize(ExpectedMempoolSize).\n\nadd_transactions(NumTransactions, Format, Owner, DataSize) ->\n\tHighestReward = NumTransactions+2,\n\tLowestReward = 3,\n\tTXs = [\n\t\ttx(Format, Owner, Reward, crypto:strong_rand_bytes(DataSize))\n\t\t\t|| Reward <- lists:seq(HighestReward, LowestReward, -1)\n\t],\n\tlists:foreach(\n\t\tfun(TX) ->\n\t\t\tar_mempool:add_tx(TX, waiting)\n\t\tend,\n\t\tTXs),\n\tExpectedTXIDs = lists:map(\n\t\tfun(#tx{id = TXID}) ->\n\t\t\tTXID\n\t\tend,\n\t\tTXs),\n\t{ExpectedTXIDs, HighestReward, LowestReward}.\n\n%% @doc Test that load_from_disk computes origin_spent_total_map using the maximum\n%% denomination found among stored TXs, not denomination 0.\n%% TX1 has denomination 1, TX2 has denomination 2. The spent totals must\n%% be computed in denomination 2 (the max). With the bug (denomination 0),\n%% the denomination-1 TX cost is not scaled up by 1000x.\ntest_load_from_disk_denomination({{_, {_, Owner}}, _LastTXID, _OtherKey, _B0}) ->\n\tBaseID = crypto:strong_rand_bytes(31),\n\tTX1 = (tx(2, Owner, 3000, <<>>, <<\"a\", BaseID/binary>>, <<>>))#tx{ denomination = 1 },\n\tTX2 = (tx(2, Owner, 5, <<>>, <<\"b\", BaseID/binary>>, <<>>))#tx{ denomination = 2 },\n\tSerializedTXs = #{\n\t\tTX1#tx.id => {ar_serialize:tx_to_binary(TX1), waiting},\n\t\tTX2#tx.id => {ar_serialize:tx_to_binary(TX2), waiting}\n\t},\n\tar_storage:write_term(mempool, {SerializedTXs, {0, 0}}),\n\treset_node_state(),\n\tar_mempool:load_from_disk(),\n\t[{origin_spent_total_denomination, LoadedDenomination}] =\n\t\tets:lookup(node_state, origin_spent_total_denomination),\n\t?assertEqual(2, LoadedDenomination,\n\t\t\"load_from_disk should use the max TX denomination, not 0\"),\n\t[{origin_spent_total_map, SpentTotalMap}] =\n\t\tets:lookup(node_state, origin_spent_total_map),\n\tAddr = ar_wallet:to_address(Owner, ?DEFAULT_KEY_TYPE),\n\tTX1Cost = ar_pricing:redenominate(3000, 1, 2),\n\tTX2Cost = ar_pricing:redenominate(5, 2, 2),\n\tExpectedTotal = TX1Cost + TX2Cost,\n\t?assertEqual(ExpectedTotal, maps:get(Addr, SpentTotalMap),\n\t\t\"Spent totals should be redenominated to the max TX denomination\").\n\nwallet({_, Pub}, Balance, LastTXID) ->\n\t{ar_wallet:to_address(Pub), Balance, LastTXID}.\n\ntx(Format, Owner, Reward, Data) ->\n\ttx(Format, Owner, Reward, Data, crypto:strong_rand_bytes(32), <<>>).\n\ntx(Format, Owner, Reward, Data, TXID, Anchor) ->\n\ttx(Format, Owner, Reward, Data, TXID, Anchor, 0).\n\ntx(Format, Owner, Reward, Data, TXID, Anchor, Quantity) ->\n\ttx(Format, Owner, Reward, Data, TXID, Anchor, Quantity, <<>>).\n\ntx(Format, Owner, Reward, Data, TXID, Anchor, Quantity, Target) ->\n\t#tx{\n\t\tid = TXID,\n\t\tformat = 
Format,\n\t\treward = Reward,\n\t\tdata = Data,\n\t\tdata_size = byte_size(Data),\n\t\towner = Owner,\n\t\ttarget = Target,\n\t\tlast_tx = Anchor,\n\t\tquantity = Quantity\n\t}.\n\nassertMempoolSize(ExpectedMempoolSize) ->\n\t[{mempool_size, MempoolSize}] = ets:lookup(node_state, mempool_size),\n\t?assertEqual(ExpectedMempoolSize, MempoolSize).\n\nassertMempoolTXIDs(ExpectedTXIDs, Title) ->\n\t%% Unordered list of all TXIDs in the mempool\n\tTXIDs = lists:map(\n\t\tfun([TXID]) ->\n\t\t\tTXID\n\t\tend,\n\t\tets:match(node_state, {{tx, '$1'}, '_'})\n\t),\n\n\t%% Ordered list of expected TXIDs paired with their expected status \n\tExpectedTXIDsStatuses = lists:map(\n\t\tfun(ExpectedTXID) ->\n\t\t\t{ExpectedTXID, waiting}\n\t\tend,\n\t\tExpectedTXIDs\n\t),\n\n\t%% gb_sets:to_list returns elements in ascending order of utility\n\t%% (lowest reward, latest TX first), so we need to reverse the list to get\n\t%% the true priority order (highest reward, oldest TX first)\n\tMempoolInPriorityOrder = lists:reverse(gb_sets:to_list(ar_mempool:get_priority_set())),\n\t%% Ordered list of actual TXIDs paired with their actual status\n\tActualTXIDsStatuses = lists:map(\n\t\tfun({_, ActualTXID, ActualStatus}) ->\n\t\t\t{ActualTXID, ActualStatus}\n\t\tend,\n\t\tMempoolInPriorityOrder\n\t),\n\n\t%% If we're adding and removing TX to/from the last_tx_map and origin_tx_map\n\t%% correctly, this list of TXIDs should match the total set of TXIDs in the\n\t%% mempool\n\tLastTXMapTXIDs = lists:foldl(\n\t\tfun({_Priority, TXID}, Acc) ->\n\t\t\t[TXID | Acc]\n\t\tend,\n\t\t[],\n\t\tlists:flatten(\n\t\t\tlists:foldl(\n\t\t\t\tfun(Set, Acc) ->\n\t\t\t\t\t[gb_sets:to_list(Set) | Acc]\n\t\t\t\tend,\n\t\t\t\t[],\n\t\t\t\tmaps:values(ar_mempool:get_last_tx_map())\n\t\t\t)\n\t\t)\n\t),\n\n\tOriginTXMapTXIDs = lists:foldl(\n\t\tfun({_Priority, TXID}, Acc) ->\n\t\t\t[TXID | Acc]\n\t\tend,\n\t\t[],\n\t\tlists:flatten(\n\t\t\tlists:foldl(\n\t\t\t\tfun(Set, Acc) ->\n\t\t\t\t\t[gb_sets:to_list(Set) | Acc]\n\t\t\t\tend,\n\t\t\t\t[],\n\t\t\t\tmaps:values(ar_mempool:get_origin_tx_map())\n\t\t\t)\n\t\t)\n\t),\n\n\t%% Only this first test will assert the ordering of TXIDs in the mempools\n\t?assertEqual(ExpectedTXIDsStatuses, ActualTXIDsStatuses, Title),\n\t%% These remaining tests only assert that the unordered set of TXIDs is correct\n\t?assertEqual(lists:sort(ExpectedTXIDs), lists:sort(TXIDs), Title),\n\t?assertEqual(lists:sort(ExpectedTXIDs), lists:sort(LastTXMapTXIDs), Title),\n\t?assertEqual(lists:sort(ExpectedTXIDs), lists:sort(OriginTXMapTXIDs), Title)."
  },
  {
    "path": "apps/arweave/test/ar_mine_randomx_tests.erl",
    "content": "-module(ar_mine_randomx_tests).\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave/include/ar_consensus.hrl\").\n\n-define(ENCODED_RX512_HASH, <<\"NcXUtn7gA42QoM8MtaS-vgVy8gJ21EE2YxV18mHndmM\">>).\n-define(ENCODED_RX4096_HASH, <<\"HqbpuoVNu8u4l4slkwnP3fvX9Q-mgjFH-3LgCyhMPPk\">>).\n-define(ENCODED_NONCE, <<\"f_z7RLug8etm3SrmRf-xPwXEL0ZQ_xHng2A5emRDQBw\">>).\n-define(ENCODED_SEGMENT,\n    <<\"7XM3fgTCAY2GFpDjPZxlw4yw5cv8jNzZSZawywZGQ6_Ca-JDy2nX_MC2vjrIoDGp\">>\n).\n\nencrypt_chunk({rx512, RandomXState}, Key, Chunk, PackingRounds, JIT, LargePages, HardwareAES, _ExtraArgs) ->\n\tar_rx512_nif:rx512_encrypt_chunk_nif(\n\t\tRandomXState, Key, Chunk, PackingRounds, JIT, LargePages, HardwareAES).\n\ndecrypt_chunk({rx512, RandomXState}, Key, Chunk, PackingRounds, JIT, LargePages, HardwareAES, _ExtraArgs) ->\n\tar_rx512_nif:rx512_decrypt_chunk_nif(\n\t\tRandomXState, Key, Chunk, byte_size(Chunk), PackingRounds, JIT, LargePages, HardwareAES).\n\nreencrypt_chunk({rx512, RandomXState}, Key1, Key2, Chunk, PackingRounds1, PackingRounds2,\n\t\tJIT, LargePages, HardwareAES, _ExtraArgs) ->\n\tar_rx512_nif:rx512_reencrypt_chunk_nif(\n\t\tRandomXState, Key1, Key2, Chunk, byte_size(Chunk), PackingRounds1, PackingRounds2,\n\t\tJIT, LargePages, HardwareAES).\n\nencrypt_composite_chunk({rx4096, RandomXState}, Key, Chunk, PackingRounds, JIT, LargePages, HardwareAES,\n\t\t[IterationCount, SubChunkCount] = _ExtraArgs) ->\n\tar_rx4096_nif:rx4096_encrypt_composite_chunk_nif(\n\t\tRandomXState, Key, Chunk, JIT, LargePages, HardwareAES, PackingRounds, \n\t\tIterationCount, SubChunkCount).\n\ndecrypt_composite_chunk({rx4096, RandomXState}, Key, Chunk, PackingRounds, JIT, LargePages, HardwareAES,\n\t\t[IterationCount, SubChunkCount] = _ExtraArgs) ->\n\tar_rx4096_nif:rx4096_decrypt_composite_chunk_nif(\n\t\tRandomXState, Key, Chunk, byte_size(Chunk), JIT, LargePages, HardwareAES,\n\t\tPackingRounds, IterationCount, SubChunkCount).\n\nreencrypt_composite_chunk({rx4096, RandomXState}, Key1, Key2, Chunk, PackingRounds1, PackingRounds2,\n\t\tJIT, LargePages, HardwareAES, \n\t\t[IterationCount1, IterationCount2, SubChunkCount1, SubChunkCount2] = _ExtraArgs) ->\n\tar_rx4096_nif:rx4096_reencrypt_composite_chunk_nif(\n\t\tRandomXState, Key1, Key2, Chunk, JIT, LargePages, HardwareAES,\n\t\tPackingRounds1, PackingRounds2, IterationCount1, IterationCount2,\n\t\tSubChunkCount1, SubChunkCount2).\n\nsetup() ->\n    FastState512 = ar_mine_randomx:init_fast2(rx512, ?RANDOMX_PACKING_KEY, 0, 0,\n\t\terlang:system_info(dirty_cpu_schedulers_online)),\n    LightState512 = ar_mine_randomx:init_light2(rx512, ?RANDOMX_PACKING_KEY, 0, 0),\n    FastState4096 = ar_mine_randomx:init_fast2(rx4096, ?RANDOMX_PACKING_KEY, 0, 0,\n\t\terlang:system_info(dirty_cpu_schedulers_online)),\n    LightState4096 = ar_mine_randomx:init_light2(rx4096, ?RANDOMX_PACKING_KEY, 0, 0),\n\t{FastState512, LightState512, FastState4096, LightState4096}.\n\ntest_register(TestFun, Fixture) ->\n\t{timeout, 120, {with, Fixture, [TestFun]}}.\n\nrandomx_suite_test_() ->\n\t{setup, fun setup/0,\n\t\tfun (SetupData) ->\n\t\t\t[\n\t\t\t\ttest_register(fun test_state/1, SetupData),\n\t\t\t\ttest_register(fun test_bad_state/1, SetupData),\n\t\t\t\ttest_register(fun test_regression/1, SetupData),\n\t\t\t\ttest_register(fun test_empty_chunk_fails/1, SetupData),\n\t\t\t\ttest_register(fun test_nif_wrappers/1, SetupData),\n\t\t\t\ttest_register(fun test_pack_unpack/1, SetupData),\n\t\t\t\ttest_register(fun 
test_repack/1, SetupData),\n\t\t\t\ttest_register(fun test_input_changes_packing/1, SetupData),\n\t\t\t\ttest_register(fun test_composite_packing/1, SetupData),\n\t\t\t\ttest_register(fun test_composite_packs_incrementally/1, SetupData),\n\t\t\t\ttest_register(fun test_composite_unpacked_sub_chunks/1, SetupData),\n\t\t\t\ttest_register(fun test_composite_repack/1, SetupData),\n\t\t\t\ttest_register(fun test_hash/1, SetupData)\n\t\t\t]\n\t\tend\n\t}.\n\n%% -------------------------------------------------------------------------------------------\n%% spora_2_6 and composite packing tests\n%% -------------------------------------------------------------------------------------------\n\ntest_state({\n\t\tFastState512, LightState512,\n\t\tFastState4096, LightState4096}) ->\n\t%% The legacy dataset size is 568,433,920 bytes. Roughly 30 MiB more than 512 MiB.\n\t%% Our nifs don't have access to the raw dataset size used in the RandomX C code, but\n\t%% they have access to the dataset item count - which is just the size divided by 64.\n\t%% So the expected dataset size is 568,433,920 / 64 = 8,881,780 items.\n\t%% \n\t%% The new dataset size is 4,326,530,304 bytes. Roughly 30 MiB more than 4 GiB.\n\t%% So the expected dataset size is 4,326,530,304 / 64 = 67,602,036 items.\n\t?assertEqual(\n\t\t{ok, {rx512, fast, 8881780, 2097152}},\n\t\tar_mine_randomx:info(FastState512)\n\t),\n\t?assertEqual(\n\t\t{ok, {rx4096, fast, 67602036, 2097152}},\n\t\tar_mine_randomx:info(FastState4096)\n\t),\n\t%% Unfortunately we don't have access to the cache size. The randomx_info_nif will check\n\t%% that in fast mode the cache is not initialized, and in light mode the dataset is not\n\t%% initialized and return an error if either check fails.\n\t?assertEqual(\n\t\t{ok, {rx512, light, 0, 2097152}},\n\t\tar_mine_randomx:info(LightState512)\n\t),\n\t?assertEqual(\n\t\t{ok, {rx4096, light, 0, 2097152}},\n\t\tar_mine_randomx:info(LightState4096)\n\t).\n\ntest_bad_state(_) ->\n\tBadState = {bad_mode, bad_state},\n\t?assertEqual({error, invalid_randomx_mode},\n\t\tar_mine_randomx:info(BadState)),\n\t?assertEqual({error, invalid_randomx_mode},\n\t\tar_mine_randomx:hash(BadState, crypto:strong_rand_bytes(32))),\n\t?assertEqual({error, invalid_randomx_mode},\n\t\tar_mine_randomx:randomx_encrypt_chunk(\n\t\t\t{spora_2_6, crypto:strong_rand_bytes(32)}, BadState,\n\t\t\tcrypto:strong_rand_bytes(32), crypto:strong_rand_bytes(?DATA_CHUNK_SIZE))),\n\t?assertEqual({error, invalid_randomx_mode},\n\t\tar_mine_randomx:randomx_encrypt_chunk(\n\t\t\t{composite, 2, crypto:strong_rand_bytes(32)}, BadState,\n\t\t\tcrypto:strong_rand_bytes(32), crypto:strong_rand_bytes(?DATA_CHUNK_SIZE))),\n\t?assertEqual({error, invalid_randomx_mode},\n\t\tar_mine_randomx:randomx_decrypt_chunk(\n\t\t\t{spora_2_6, crypto:strong_rand_bytes(32)}, BadState,\n\t\t\tcrypto:strong_rand_bytes(32), crypto:strong_rand_bytes(?DATA_CHUNK_SIZE),\n\t\t\t?DATA_CHUNK_SIZE)),\n\t?assertEqual({error, invalid_randomx_mode},\n\t\tar_mine_randomx:randomx_decrypt_chunk(\n\t\t\t{composite, 2, crypto:strong_rand_bytes(32)}, BadState,\n\t\t\tcrypto:strong_rand_bytes(32), crypto:strong_rand_bytes(?DATA_CHUNK_SIZE),\n\t\t\t?DATA_CHUNK_SIZE)),\n\t?assertEqual({error, invalid_randomx_mode},\n\t\tar_mine_randomx:randomx_decrypt_sub_chunk(\n\t\t\t{composite, 2, crypto:strong_rand_bytes(32)}, BadState,\n\t\t\tcrypto:strong_rand_bytes(32), crypto:strong_rand_bytes(?DATA_CHUNK_SIZE),0)),\n\t?assertEqual({error, 
invalid_randomx_mode},\n\t\tar_mine_randomx:randomx_reencrypt_chunk(\n\t\t\t{spora_2_6, crypto:strong_rand_bytes(32)}, {spora_2_6, crypto:strong_rand_bytes(32)},\n\t\t\tBadState, crypto:strong_rand_bytes(32), crypto:strong_rand_bytes(32), \n\t\t\tcrypto:strong_rand_bytes(?DATA_CHUNK_SIZE), ?DATA_CHUNK_SIZE)),\n\t?assertEqual({error, invalid_reencrypt_packing},\n\t\tar_mine_randomx:randomx_reencrypt_chunk(\n\t\t\t{composite, 2, crypto:strong_rand_bytes(32)},\n\t\t\t{composite, 2, crypto:strong_rand_bytes(32)},\n\t\t\tBadState, crypto:strong_rand_bytes(32), crypto:strong_rand_bytes(32), \n\t\t\tcrypto:strong_rand_bytes(?DATA_CHUNK_SIZE), ?DATA_CHUNK_SIZE)).\n\ntest_regression({FastState512, LightState512, FastState4096, LightState4096}) ->\n\t%% Test all permutations of:\n\t%% 1. Light vs. fast state\n\t%% 2. spora_2_6 vs. depth-1 composite vs. depth-2 composite packing\n\t%% 3. JIT vs. no JIT\n\t%% 4. RandomX dataset size 512 vs. 4096\n\t%%   (this is handled implicitly by the legacy packing vs. composite packing)\n\n\ttest_regression(FastState512,\n\t\t\"ar_mine_randomx_tests/packed.spora26.bin\", 0, [],\n\t\tfun encrypt_chunk/8, fun decrypt_chunk/8),\n\ttest_regression(FastState512,\n\t\t\"ar_mine_randomx_tests/packed.spora26.bin\", 1, [],\n\t\tfun encrypt_chunk/8, fun decrypt_chunk/8),\n\ttest_regression(FastState4096,\n\t\t\"ar_mine_randomx_tests/packed.composite.1.bin\", 0, [1, 32],\n\t\tfun encrypt_composite_chunk/8, fun decrypt_composite_chunk/8),\n\ttest_regression(FastState4096,\n\t\t\"ar_mine_randomx_tests/packed.composite.1.bin\", 1, [1, 32],\n\t\tfun encrypt_composite_chunk/8, fun decrypt_composite_chunk/8),\n\ttest_regression(FastState4096,\n\t\t\"ar_mine_randomx_tests/packed.composite.2.bin\", 0, [2, 32],\n\t\tfun encrypt_composite_chunk/8, fun decrypt_composite_chunk/8),\n\ttest_regression(FastState4096,\n\t\t\"ar_mine_randomx_tests/packed.composite.2.bin\", 1, [2, 32],\n\t\tfun encrypt_composite_chunk/8, fun decrypt_composite_chunk/8),\n\ttest_regression(LightState512,\n\t\t\"ar_mine_randomx_tests/packed.spora26.bin\", 0, [],\n\t\tfun encrypt_chunk/8, fun decrypt_chunk/8),\n\ttest_regression(LightState512,\n\t\t\"ar_mine_randomx_tests/packed.spora26.bin\", 1, [],\n\t\tfun encrypt_chunk/8, fun decrypt_chunk/8),\n\ttest_regression(LightState4096,\n\t\t\"ar_mine_randomx_tests/packed.composite.1.bin\", 0, [1, 32],\n\t\tfun encrypt_composite_chunk/8, fun decrypt_composite_chunk/8),\n\ttest_regression(LightState4096,\n\t\t\"ar_mine_randomx_tests/packed.composite.1.bin\", 1, [1, 32],\n\t\tfun encrypt_composite_chunk/8, fun decrypt_composite_chunk/8),\n\ttest_regression(LightState4096,\n\t\t\"ar_mine_randomx_tests/packed.composite.2.bin\", 0, [2, 32],\n\t\tfun encrypt_composite_chunk/8, fun decrypt_composite_chunk/8),\n\ttest_regression(LightState4096,\n\t\t\"ar_mine_randomx_tests/packed.composite.2.bin\", 1, [2, 32],\n\t\tfun encrypt_composite_chunk/8, fun decrypt_composite_chunk/8),\n\tok.\n\ntest_regression(State, Fixture, JIT, ExtraArgs, EncryptFun, DecryptFun) ->\n\tKey = ar_test_node:load_fixture(\"ar_mine_randomx_tests/key.bin\"),\n\tUnpackedFixture = ar_test_node:load_fixture(\"ar_mine_randomx_tests/unpacked.bin\"),\n\tPackedFixture = ar_test_node:load_fixture(Fixture),\n\n\t{ok, Packed} = EncryptFun(State, Key, UnpackedFixture, 8, JIT, 0, 0, ExtraArgs),\n\t?assertEqual(PackedFixture, Packed, Fixture),\n\n\t{ok, Unpacked} = DecryptFun(State, Key, PackedFixture, 8, JIT, 0, 0, ExtraArgs),\n\t?assertEqual(UnpackedFixture, Unpacked, 
Fixture).\n\ntest_empty_chunk_fails({FastState512, _LightState512, FastState4096, _LightState4096}) ->\n\ttest_empty_chunk_fails(FastState512, [], fun encrypt_chunk/8),\n\ttest_empty_chunk_fails(FastState4096, [1, 32], fun encrypt_composite_chunk/8),\n\ttest_empty_chunk_fails(FastState512, [], fun decrypt_chunk/8),\n\ttest_empty_chunk_fails(FastState4096, [1, 32], fun decrypt_composite_chunk/8).\n\ntest_empty_chunk_fails(State, ExtraArgs, Fun) ->\n\ttry\n\t\tFun(State, crypto:strong_rand_bytes(32), <<>>, 1, 0, 0, 0, ExtraArgs),\n\t\t?assert(false, \"Encrypt/Decrypt with an empty chunk should have failed\")\n\tcatch error:badarg ->\n\t\tok\n\tend.\n\ntest_nif_wrappers({FastState512, _LightState512, FastState4096, _LightState4096}) ->\n\ttest_nif_wrappers(FastState512, FastState4096,\n\t\t\tcrypto:strong_rand_bytes(?DATA_CHUNK_SIZE - 12)),\n\ttest_nif_wrappers(FastState512, FastState4096,\n\t\t\tcrypto:strong_rand_bytes(?DATA_CHUNK_SIZE)).\n\ntest_nif_wrappers(State512, State4096, Chunk) ->\n\tAddrA = crypto:strong_rand_bytes(32),\n\tAddrB = crypto:strong_rand_bytes(32),\n\tKeyA = crypto:strong_rand_bytes(32),\n\tKeyB= crypto:strong_rand_bytes(32),\n\t%% spora_26 randomx_encrypt_chunk \n\t{ok, Packed_2_6A} = ar_rx512_nif:rx512_encrypt_chunk_nif(\n\t\telement(2, State512), KeyA, Chunk, ?RANDOMX_PACKING_ROUNDS_2_6,\n\t\tar_mine_randomx:jit(), ar_mine_randomx:large_pages(), ar_mine_randomx:hardware_aes()),\n\t?assertEqual({ok, Packed_2_6A},\n\t\tar_mine_randomx:randomx_encrypt_chunk({spora_2_6, AddrA}, State512, KeyA, Chunk)),\n\n\t%% composite randomx_encrypt_composite_chunk\n\t{ok, PackedCompositeA2} = ar_rx4096_nif:rx4096_encrypt_composite_chunk_nif(\n\t\telement(2, State4096), KeyA, Chunk,\n\t\tar_mine_randomx:jit(), ar_mine_randomx:large_pages(), ar_mine_randomx:hardware_aes(),\n\t\t?COMPOSITE_PACKING_ROUND_COUNT, 2, ?COMPOSITE_PACKING_SUB_CHUNK_COUNT),\n\t?assertEqual({ok, PackedCompositeA2},\n\t\tar_mine_randomx:randomx_encrypt_chunk({composite, AddrA, 2}, State4096, KeyA, Chunk)),\n\n\t%% spora_2_6 randomx_decrypt_chunk\n\t?assertEqual({ok, Chunk},\n\t\tar_mine_randomx:randomx_decrypt_chunk(\n\t\t\t{spora_2_6, AddrA}, State512, KeyA, Packed_2_6A, byte_size(Chunk))),\n\n\t%% composite randomx_decrypt_composite_chunk\n\t?assertEqual({ok, Chunk},\n\t\tar_mine_randomx:randomx_decrypt_chunk(\n\t\t\t{composite, AddrA, 2}, State4096, KeyA, PackedCompositeA2, byte_size(Chunk))),\n\n\t%% Prepare data for the reencryption tests\n\t{ok, Packed_2_6B} = ar_rx512_nif:rx512_encrypt_chunk_nif(\n\t\telement(2, State512), KeyB, Chunk, ?RANDOMX_PACKING_ROUNDS_2_6,\n\t\tar_mine_randomx:jit(), ar_mine_randomx:large_pages(), ar_mine_randomx:hardware_aes()),\n\t{ok, PackedCompositeA3} = ar_rx4096_nif:rx4096_encrypt_composite_chunk_nif(\n\t\telement(2, State4096), KeyA, Chunk,\n\t\tar_mine_randomx:jit(), ar_mine_randomx:large_pages(), ar_mine_randomx:hardware_aes(),\n\t\t?COMPOSITE_PACKING_ROUND_COUNT, 3, ?COMPOSITE_PACKING_SUB_CHUNK_COUNT),\n\t{ok, PackedCompositeB3} = ar_rx4096_nif:rx4096_encrypt_composite_chunk_nif(\n\t\telement(2, State4096), KeyB, Chunk,\n\t\tar_mine_randomx:jit(), ar_mine_randomx:large_pages(), ar_mine_randomx:hardware_aes(),\n\t\t?COMPOSITE_PACKING_ROUND_COUNT, 3, ?COMPOSITE_PACKING_SUB_CHUNK_COUNT),\n\t\t\n\t%% spora_2_6 -> spora_2_6 randomx_reencrypt_chunk\n\t?assertEqual({ok, Packed_2_6B, Chunk},\n\t\tar_mine_randomx:randomx_reencrypt_chunk(\n\t\t\t{spora_2_6, AddrA}, {spora_2_6, AddrB},\n\t\t\tState512, KeyA, KeyB, Packed_2_6A, byte_size(Chunk))),\n\t\n\t%% composite -> composite 
randomx_reencrypt_chunk\n\t?assertEqual({ok, PackedCompositeB3, Chunk},\n\t\tar_mine_randomx:randomx_reencrypt_chunk(\n\t\t\t{composite, AddrA, 2}, {composite, AddrB, 3},\n\t\t\tState4096, KeyA, KeyB, PackedCompositeA2, byte_size(Chunk))),\n\t?assertEqual({ok, PackedCompositeA3, none},\n\t\tar_mine_randomx:randomx_reencrypt_chunk(\n\t\t\t{composite, AddrA, 2}, {composite, AddrA, 3},\n\t\t\tState4096, KeyA, KeyA, PackedCompositeA2, byte_size(Chunk))),\t\n\t\n\t%% spora_2_6 -> composite randomx_reencrypt_chunk\n\t?assertEqual({error, invalid_reencrypt_packing},\n\t\tar_mine_randomx:randomx_reencrypt_chunk(\n\t\t\t{spora_2_6, AddrA}, {composite, AddrB, 3},\n\t\t\tState512, KeyA, KeyB, Packed_2_6A, byte_size(Chunk))).\n\ntest_pack_unpack({FastState512, _LightState512, FastState4096, _LightState4096}) ->\n\ttest_pack_unpack(FastState512, [], fun encrypt_chunk/8, fun decrypt_chunk/8),\n\ttest_pack_unpack(\n\t\tFastState4096, [1, 32], fun encrypt_composite_chunk/8, fun decrypt_composite_chunk/8),\n\ttest_pack_unpack(\n\t\tFastState4096, [2, 32], fun encrypt_composite_chunk/8, fun decrypt_composite_chunk/8).\n\ntest_pack_unpack(State, ExtraArgs, EncryptFun, DecryptFun) ->\n\t%% Add 3 0-bytes at the end to test automatic padding.\n\tChunkWithoutPadding = crypto:strong_rand_bytes(?DATA_CHUNK_SIZE - 3),\n\tChunk = << ChunkWithoutPadding/binary, 0:24 >>,\n\tKey = crypto:strong_rand_bytes(32),\n\t{ok, Packed1} = EncryptFun(State, Key, Chunk, 8, 0, 0, 0, ExtraArgs),\n\t?assertEqual(?DATA_CHUNK_SIZE, byte_size(Packed1)),\n\t?assertEqual({ok, Packed1},\n\t\tEncryptFun(State, Key, Chunk, 8, 0, 0, 0, ExtraArgs)),\n\t%% Run the decryption twice to test that the nif isn't corrupting any state.\n\t{ok, Unpacked1} = DecryptFun(State, Key, Packed1, 8, 0, 0, 0, ExtraArgs),\n\t?assertEqual(Unpacked1, Chunk),\n\t?assertEqual({ok, Unpacked1}, \n\t\tDecryptFun(State, Key, Packed1, 8, 0, 0, 0, ExtraArgs)),\n\t%% Run the encryption twice to test that the nif isn't corrupting any state.\n\t{ok, Packed2} = EncryptFun(State, Key, ChunkWithoutPadding, 8, 0, 0, 0, ExtraArgs),\n\t?assertEqual(Packed2, Packed1),\n\t?assertEqual({ok, Packed2},\n\t\tEncryptFun(State, Key, ChunkWithoutPadding, 8, 0, 0, 0, ExtraArgs)).\n\n\ntest_repack({FastState512, _LightState512, FastState4096, _LightState4096}) ->\n\ttest_repack(FastState512, [], [], fun encrypt_chunk/8, fun reencrypt_chunk/10),\n\ttest_repack(\n\t\tFastState4096, [1, 32], [1, 1, 32, 32], \n\t\tfun encrypt_composite_chunk/8, fun reencrypt_composite_chunk/10),\n\ttest_repack(\n\t\tFastState4096, [2, 32], [2, 2, 32, 32], \n\t\tfun encrypt_composite_chunk/8, fun reencrypt_composite_chunk/10).\n\n\ntest_repack(State, EncryptArgs, ReencryptArgs, EncryptFun, ReencryptFun) ->\n\tChunk = crypto:strong_rand_bytes(?DATA_CHUNK_SIZE - 12),\n\tKey1 = crypto:strong_rand_bytes(32),\n\tKey2 = crypto:strong_rand_bytes(32),\n\t{ok, Packed1} = EncryptFun(State, Key1, Chunk, 8, 0, 0, 0, EncryptArgs),\n\t{ok, Packed2} = EncryptFun(State, Key2, Chunk, 8, 0, 0, 0, EncryptArgs),\n\t{ok, Repacked, RepackInput} =\n\t\t\tReencryptFun(State, Key1, Key2, Packed1, 8, 8, 0, 0, 0, ReencryptArgs),\n\t?assertEqual(Chunk, binary:part(RepackInput, 0, byte_size(Chunk))),\n\t?assertEqual(Packed2, Repacked), \n\n\t%% Reencrypt with different RandomX rounds.\n\t{ok, Repacked2, RepackInput2} =\n\t\t\tReencryptFun(State, Key1, Key2, Packed1, 8, 10, 0, 0, 0, ReencryptArgs),\n\t?assertEqual(Chunk, binary:part(RepackInput2, 0, byte_size(Chunk))),\n\t?assertNotEqual(Packed2, Repacked2),\n\n\t?assertEqual({ok, Repacked2, 
RepackInput2},\n\t\t\tReencryptFun(State, Key1, Key2, Packed1, 8, 10, 0, 0, 0, ReencryptArgs)). \n\ntest_input_changes_packing({FastState512, _LightState512, FastState4096, _LightState4096}) ->\n\ttest_input_changes_packing(FastState512, [], fun encrypt_chunk/8, fun decrypt_chunk/8),\n\ttest_input_changes_packing(\n\t\tFastState4096, [1, 32], fun encrypt_composite_chunk/8, fun decrypt_composite_chunk/8),\n\t\n\t%% Also check arguments specific to composite packing:\n\t%% \n\tChunk = crypto:strong_rand_bytes(?DATA_CHUNK_SIZE),\n\tKey = crypto:strong_rand_bytes(32),\n\t{ok, Packed} = encrypt_composite_chunk(FastState4096, Key, Chunk, 8, 0, 0, 0, [1, 32]),\n\t%% A different iterations count.\n\t{ok, Packed2} = encrypt_composite_chunk(FastState4096, Key, Chunk, 8, 0, 0, 0, [2, 32]),\n\t?assertEqual(?DATA_CHUNK_SIZE, byte_size(Packed2)),\n\t?assertNotEqual(Packed2, Packed),\n\n\t{ok, Unpacked2} = decrypt_composite_chunk( FastState4096, Key, Packed, 8, 0, 0, 0, [2, 32]),\n\t?assertEqual(?DATA_CHUNK_SIZE, byte_size(Unpacked2)),\n\t?assertNotEqual(Unpacked2, Chunk),\n\n\t%% A different sub-chunk count.\n\t{ok, Packed3} = encrypt_composite_chunk(FastState4096, Key, Chunk, 8, 0, 0, 0, [1, 64]),\n\t?assertEqual(?DATA_CHUNK_SIZE, byte_size(Packed3)),\n\t?assertNotEqual(Packed3, Packed),\n\n\t{ok, Unpacked3} = decrypt_composite_chunk(FastState4096, Key, Packed, 8, 0, 0, 0, [1, 64]),\n\t?assertEqual(?DATA_CHUNK_SIZE, byte_size(Unpacked3)),\n\t?assertNotEqual(Unpacked3, Chunk).\n\t\ntest_input_changes_packing(State, ExtraArgs, EncryptFun, DecryptFun) ->\n\tChunk = crypto:strong_rand_bytes(?DATA_CHUNK_SIZE),\n\tKey = crypto:strong_rand_bytes(32),\n\t{ok, Packed} = EncryptFun(State, Key, Chunk, 8, 0, 0, 0, ExtraArgs),\n\t{ok, Unpacked} = DecryptFun(State, Key, Packed, 8, 0, 0, 0, ExtraArgs),\n\t?assertEqual(Unpacked, Chunk),\n\n\t%% Pack a slightly different chunk to assert the packing is different for different data.\n\t<< ChunkPrefix:262143/binary, LastChunkByte:8 >> = Chunk,\n\tChunk2 = << ChunkPrefix/binary, (LastChunkByte + 1):8 >>,\n\t{ok, Packed2} = EncryptFun(State, Key, Chunk2, 8, 0, 0, 0, ExtraArgs),\n\t?assertEqual(?DATA_CHUNK_SIZE, byte_size(Packed2)),\n\t?assertNotEqual(Packed2, Packed),\n\n\t%% Unpack a slightly different chunk to assert the packing is different for different data.\n\t<< PackedPrefix:262143/binary, LastPackedByte:8 >> = Packed,\n\tPacked3 = << PackedPrefix/binary, (LastPackedByte + 1):8 >>,\n\t{ok, Unpacked2} = DecryptFun(State, Key, Packed3, 8, 0, 0, 0, ExtraArgs),\n\t?assertEqual(?DATA_CHUNK_SIZE, byte_size(Unpacked2)),\n\t?assertNotEqual(Unpacked2, Chunk),\n\n\t%% Pack with a slightly different key.\n\t<< Prefix:31/binary, LastByte:8 >> = Key,\n\tKey2 = << Prefix/binary, (LastByte + 1):8 >>,\n\t{ok, Packed4} = EncryptFun(State, Key2, Chunk, 8, 0, 0, 0, ExtraArgs),\n\t?assertEqual(?DATA_CHUNK_SIZE, byte_size(Packed4)),\n\t?assertNotEqual(Packed4, Packed),\n\n\t%% Unpack with a slightly different key.\n\t{ok, Unpacked3} = DecryptFun(State, Key2, Packed, 8, 0, 0, 0, ExtraArgs),\n\t?assertEqual(?DATA_CHUNK_SIZE, byte_size(Unpacked3)),\n\t?assertNotEqual(Unpacked3, Chunk),\n\n\t%% Pack with a different RX program count.\n\t{ok, Packed5} = EncryptFun(State, Key, Chunk, 7, 0, 0, 0, ExtraArgs),\n\t?assertEqual(?DATA_CHUNK_SIZE, byte_size(Packed5)),\n\t?assertNotEqual(Packed5, Packed),\n\n\t%% Unpack with a different RX program count.\n\t{ok, Unpacked4} = DecryptFun(State, Key, Packed, 7, 0, 0, 0, ExtraArgs),\n\t?assertEqual(?DATA_CHUNK_SIZE, 
byte_size(Unpacked4)),\n\t?assertNotEqual(Unpacked4, Chunk).\n\n%% -------------------------------------------------------------------------------------------\n%% Composite packing tests\n%% -------------------------------------------------------------------------------------------\ntest_composite_packing({FastState512, _LightState512, FastState4096, _LightState4096}) ->\n\t{rx512, RandomXState512} = FastState512,\n\t{rx4096, RandomXState4096} = FastState4096,\n\tChunkWithoutPadding = crypto:strong_rand_bytes(?DATA_CHUNK_SIZE - 5),\n\tChunk = << ChunkWithoutPadding/binary, 0:(5 * 8) >>,\n\tKey = crypto:strong_rand_bytes(32),\n\t{ok, Packed} = ar_rx4096_nif:rx4096_encrypt_composite_chunk_nif(RandomXState4096, Key, Chunk,\n\t\t0, 0, 0, 8, 1, 1),\n\tKey2 = crypto:hash(sha256, << Key/binary, ?DATA_CHUNK_SIZE:24 >>),\n\t{ok, Packed2} = ar_rx512_nif:rx512_encrypt_chunk_nif(RandomXState512, Key2, Chunk,\n\t\t8, % RANDOMX_PACKING_ROUNDS\n\t\t0, 0, 0),\n\t?assertNotEqual(Packed, Packed2),\n\t{ok, Packed3} = ar_rx4096_nif:rx4096_encrypt_composite_chunk_nif(RandomXState4096, Key,\n\t\tChunkWithoutPadding, 0, 0, 0, 8, 1, 1),\n\t{ok, Packed4} = ar_rx512_nif:rx512_encrypt_chunk_nif(RandomXState512, Key2, ChunkWithoutPadding,\n\t\t8, % RANDOMX_PACKING_ROUNDS\n\t\t0, 0, 0),\n\t?assertNotEqual(Packed3, Packed4).\n\ntest_composite_packs_incrementally(\n\t\t{_FastState512, _LightState512, FastState4096, _LightState4096}) ->\n\t{rx4096, RandomXState4096} = FastState4096,\n\tChunk = crypto:strong_rand_bytes(?DATA_CHUNK_SIZE - 3),\n\tKey = crypto:strong_rand_bytes(32),\n\t{ok, Packed1} = ar_rx4096_nif:rx4096_encrypt_composite_chunk_nif(\n\t\tRandomXState4096, Key, Chunk, 0, 0, 0, 8, 1, 32),\n\t{ok, Packed2} = ar_rx4096_nif:rx4096_encrypt_composite_chunk_nif(\n\t\tRandomXState4096, Key, Packed1, 0, 0, 0, 8, 1, 32),\n\t{ok, Packed3} = ar_rx4096_nif:rx4096_encrypt_composite_chunk_nif(\n\t\tRandomXState4096, Key, Chunk, 0, 0, 0, 8, 2, 32),\n\t?assertEqual(Packed2, Packed3),\n\t{ok, Packed4} = ar_rx4096_nif:rx4096_encrypt_composite_chunk_nif(\n\t\tRandomXState4096, Key, Chunk, 0, 0, 0, 8, 3, 32),\n\t{ok, Packed5} = ar_rx4096_nif:rx4096_encrypt_composite_chunk_nif(\n\t\tRandomXState4096, Key, Packed1, 0, 0, 0, 8, 2, 32),\n\t{ok, Packed6} = ar_rx4096_nif:rx4096_encrypt_composite_chunk_nif(\n\t\tRandomXState4096, Key, Packed2, 0, 0, 0, 8, 1, 32),\n\t?assertEqual(Packed4, Packed5),\n\t?assertEqual(Packed4, Packed6).\n\ntest_composite_unpacked_sub_chunks(\n\t\t{_FastState512, _LightState512, FastState4096, _LightState4096}) ->\n\t{rx4096, RandomXState4096} = FastState4096,\n\tChunkWithoutPadding = crypto:strong_rand_bytes(?DATA_CHUNK_SIZE - 3),\n\tChunk = << ChunkWithoutPadding/binary, 0:24 >>,\n\tKey = crypto:strong_rand_bytes(32),\n\t{ok, Packed} = ar_rx4096_nif:rx4096_encrypt_composite_chunk_nif(\n\t\tRandomXState4096, Key, Chunk, 0, 0, 0, 8, 1, 32),\n\tSubChunks = split_chunk_into_sub_chunks(Packed, ?DATA_CHUNK_SIZE div 32, 0),\n\tUnpackedInSubChunks = iolist_to_binary(lists:reverse(lists:foldl(\n\t\tfun({SubChunk, Offset}, Acc) ->\n\t\t\t{ok, Unpacked} = ar_rx4096_nif:rx4096_decrypt_composite_sub_chunk_nif(\n\t\t\t\tRandomXState4096, Key, SubChunk, byte_size(SubChunk), 0, 0, 0, 8, 1, Offset),\n\t\t\t{ok, Unpacked2} = ar_rx4096_nif:rx4096_decrypt_composite_sub_chunk_nif(\n\t\t\t\tRandomXState4096, Key, SubChunk, byte_size(SubChunk), 0, 0, 0, 8, 1, Offset),\n\t\t\t?assertEqual(Unpacked, Unpacked2),\n\t\t\t[Unpacked | Acc]\n\t\tend,\n\t\t[],\n\t\tSubChunks\n\t))),\n\t?assertEqual(UnpackedInSubChunks, Chunk),\n\tChunk2 
= crypto:strong_rand_bytes(?DATA_CHUNK_SIZE - 3),\n\t{ok, Packed2} = ar_rx4096_nif:rx4096_encrypt_composite_chunk_nif(\n\t\tRandomXState4096, Key, Chunk2, 0, 0, 0, 8, 3, 32),\n\tSubChunks2 = split_chunk_into_sub_chunks(Packed2, ?DATA_CHUNK_SIZE div 32, 0),\n\tUnpackedInSubChunks2 = iolist_to_binary(lists:reverse(lists:foldl(\n\t\tfun({SubChunk, Offset}, Acc) ->\n\t\t\t{ok, Unpacked} = ar_rx4096_nif:rx4096_decrypt_composite_sub_chunk_nif(\n\t\t\t\tRandomXState4096, Key, SubChunk, byte_size(SubChunk), 0, 0, 0, 8, 3, Offset),\n\t\t\t{ok, Unpacked2} = ar_rx4096_nif:rx4096_decrypt_composite_sub_chunk_nif(\n\t\t\t\tRandomXState4096, Key, SubChunk, byte_size(SubChunk), 0, 0, 0, 8, 3, Offset),\n\t\t\t?assertEqual(Unpacked, Unpacked2),\n\t\t\t[Unpacked | Acc]\n\t\tend,\n\t\t[],\n\t\tSubChunks2\n\t))),\n\t?assertEqual(UnpackedInSubChunks2, << Chunk2/binary, 0:24 >>).\n\nsplit_chunk_into_sub_chunks(Bin, Size, Offset) ->\n\tcase Bin of\n\t\t<<>> ->\n\t\t\t[];\n\t\t<< SubChunk:Size/binary, Rest/binary >> ->\n\t\t\t[{SubChunk, Offset} | split_chunk_into_sub_chunks(Rest, Size, Offset + Size)]\n\tend.\n\ntest_composite_repack({_FastState512, _LightState512, FastState4096, _LightState4096}) ->\n\t{rx4096, RandomXState4096} = FastState4096,\n\tChunk = crypto:strong_rand_bytes(?DATA_CHUNK_SIZE - 12),\n\tKey = crypto:strong_rand_bytes(32),\n\t{ok, Packed2} = ar_rx4096_nif:rx4096_encrypt_composite_chunk_nif(RandomXState4096, Key,\n\t\t\tChunk, 0, 0, 0, 8, 2, 32),\n\t{ok, Packed3} = ar_rx4096_nif:rx4096_encrypt_composite_chunk_nif(RandomXState4096, Key,\n\t\t\tChunk, 0, 0, 0, 8, 3, 32),\n\t{ok, Repacked_2_3, RepackInput} =\n\t\t\tar_rx4096_nif:rx4096_reencrypt_composite_chunk_nif(RandomXState4096,\n\t\t\t\t\tKey, Key, Packed2, 0, 0, 0, 8, 8, 2, 3, 32, 32),\n\t?assertEqual(Packed2, RepackInput),\n\t?assertEqual(Packed3, Repacked_2_3),\n\t\n\t%% Repacking a composite chunk to same-key higher-diff composite chunk...\n\t{ok, Repacked_2_5, RepackInput} =\n\t\t\tar_rx4096_nif:rx4096_reencrypt_composite_chunk_nif(RandomXState4096,\n\t\t\t\t\tKey, Key, Packed2, 0, 0, 0, 8, 8, 2, 5, 32, 32),\n\t{ok, Packed5} = ar_rx4096_nif:rx4096_encrypt_composite_chunk_nif(RandomXState4096, Key,\n\t\t\tChunk, 0, 0, 0, 8, 5, 32),\n\t?assertEqual(Packed5, Repacked_2_5),\n\tKey2 = crypto:strong_rand_bytes(32),\n\n\t%% Repacking a composite chunk to different-key higher-diff composite chunk...\n\t{ok, RepackedDiffKey_2_3, RepackInput2} =\n\t\t\tar_rx4096_nif:rx4096_reencrypt_composite_chunk_nif(RandomXState4096,\n\t\t\t\t\tKey, Key2, Packed2, 0, 0, 0, 8, 8, 2, 3, 32, 32),\n\t?assertEqual(<< Chunk/binary, 0:(12 * 8) >>, RepackInput2),\n\t{ok, Packed2_3} = ar_rx4096_nif:rx4096_encrypt_composite_chunk_nif(RandomXState4096, Key2,\n\t\t\tChunk, 0, 0, 0, 8, 3, 32),\n\t?assertNotEqual(Packed2, Packed2_3),\n\t?assertEqual(Packed2_3, RepackedDiffKey_2_3),\n\ttry\n\t\tar_rx4096_nif:rx4096_reencrypt_composite_chunk_nif(RandomXState4096,\n\t\t\t\t\tKey, Key, Packed2, 0, 0, 0, 8, 8, 2, 2, 32, 32),\n\t\t?assert(false, \"ar_rx4096_nif:rx4096_reencrypt_composite_chunk_nif to reencrypt \"\n\t\t\t\t\"to same diff should have failed\")\n\tcatch error:badarg ->\n\t\tok\n\tend,\n\ttry\n\t\tar_rx4096_nif:rx4096_reencrypt_composite_chunk_nif(RandomXState4096,\n\t\t\t\t\tKey, Key, Packed2, 0, 0, 0, 8, 8, 2, 1, 32, 32),\n\t\t?assert(false, \"ar_rx4096_nif:rx4096_reencrypt_composite_chunk_nif to reencrypt \"\n\t\t\t\t\"to lower diff should have failed\")\n\tcatch error:badarg ->\n\t\tok\n\tend.\n\ntest_hash({\n\t\tFastState512, LightState512,\n\t\tFastState4096, 
LightState4096}) ->\n\tExpectedHash512 = ar_util:decode(?ENCODED_RX512_HASH),\n\tExpectedHash4096 = ar_util:decode(?ENCODED_RX4096_HASH),\n\tNonce = ar_util:decode(?ENCODED_NONCE),\n\tSegment = ar_util:decode(?ENCODED_SEGMENT),\n\tInput = << Nonce/binary, Segment/binary >>,\n\t?assertEqual(ExpectedHash512,\n\t\tar_mine_randomx:hash(FastState512, Input, 0, 0, 0)),\n\t?assertEqual(ExpectedHash512,\n\t\tar_mine_randomx:hash(LightState512, Input, 0, 0, 0)),\n\t?assertEqual(ExpectedHash4096,\n\t\tar_mine_randomx:hash(FastState4096, Input, 0, 0, 0)),\n\t?assertEqual(ExpectedHash4096,\n\t\tar_mine_randomx:hash(LightState4096, Input, 0, 0, 0)).\n\n"
  },
  {
    "path": "apps/arweave/test/ar_mine_vdf_tests.erl",
    "content": "-module(ar_mine_vdf_tests).\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-define(ENCODED_PREV_STATE, <<\"f_z7RLug8etm3SrmRf-xPwXEL0ZQ_xHng2A5emRDQBw\">>).\n-define(ITERATIONS_SHA, 10).\n-define(CHECKPOINT_COUNT, 4).\n-define(CHECKPOINT_SKIP_COUNT, 9).\n-define(ENCODED_SHA_CHECKPOINT, <<\"hewn3qrpFlGUPVexAqyqrb72v7dvhAMd26Cwih7R8w9WPNxZmSZSSqGcibtalnKnwHruGcFvEqzSWq2ySAMsRxzC57qaktI6gV6dAcPQ41ZLzpw9i3AJUPCFsShzNbV8EVx29JpHdMZ0VzlViPUrgbfS_EVSWAqiZKhJcJmUcbI\">>).\n-define(ENCODED_SHA_CHECKPOINT_FULL, <<\"hewn3qrpFlGUPVexAqyqrb72v7dvhAMd26Cwih7R8w9WPNxZmSZSSqGcibtalnKnwHruGcFvEqzSWq2ySAMsRxzC57qaktI6gV6dAcPQ41ZLzpw9i3AJUPCFsShzNbV8EVx29JpHdMZ0VzlViPUrgbfS_EVSWAqiZKhJcJmUcbJlfDXdCy38G1r-6tB_OLCVTYcIg7if-g6ejTQERJ2AXg\">>).\n-define(ENCODED_SHA_CHECKPOINT_SKIP, <<\"_Wl7xvFvX_XOHfnaHxtv17b_UCYIx1fOU2SeuM7VV-_5xuzpESPJCM2wh0ry9sjpDqLPjKmI3hkdWm3CZalD5lSdAOJp62vJXdKbTOstTDi__oL1OIPFOnDxawC_TIxLO-YebYbMN6hGk7bz7le65Bciat3ahUEIM_GmriHJVh4\">>).\n-define(ENCODED_SHA_RES, <<\"ZXw13Qst_Bta_urQfziwlU2HCIO4n_oOno00BESdgF4\">>).\n-define(ENCODED_SHA_RES_SKIP, <<\"Sobef2mx_AgxJ4ubzi2FLDYhouKqojyTzUXASCXrSZ0\">>).\n\n%%%===================================================================\n%%% utils\n%%%===================================================================\n\nsoft_implementation_vdf_sha(_Salt, PrevState, 0, _CheckpointCount) ->\n\tPrevState;\n\nsoft_implementation_vdf_sha(Salt, PrevState, Iterations, 0) ->\n\tNextState = crypto:hash(sha256, <<Salt/binary, PrevState/binary>>),\n\tsoft_implementation_vdf_sha(Salt, NextState, Iterations-1, 0);\n\nsoft_implementation_vdf_sha(Salt, PrevState, Iterations, CheckpointCount) ->\n\tNextState = soft_implementation_vdf_sha(Salt, PrevState, Iterations, 0),\n\t<< SaltValue:256 >> = Salt,\n\tNextSalt = << (SaltValue+1):256 >>,\n\tsoft_implementation_vdf_sha(NextSalt, NextState, Iterations, CheckpointCount-1).\n\n%%%===================================================================\n%%% \n%%% no skip iterations (needed for last 1 sec has more checkpoints)\n%%% \n%%%===================================================================\n\n%%%===================================================================\n%%% SHA.\n%%%===================================================================\n\nvdf_sha_test_() ->\n\t{timeout, 500, fun test_vdf_sha/0}.\n\ntest_vdf_sha() ->\n\tPrevState = ar_util:decode(?ENCODED_PREV_STATE),\n\tOutCheckpointSha3 = ar_util:decode(?ENCODED_SHA_CHECKPOINT),\n\tOutCheckpointSha3Full = ar_util:decode(?ENCODED_SHA_CHECKPOINT_FULL),\n\tRealSha3 = ar_util:decode(?ENCODED_SHA_RES),\n\tSalt1 = << (1):256 >>,\n\tSalt2 = << (2):256 >>,\n\n\t{ok, Real1, _OutCheckpointSha} = ar_vdf_nif:vdf_sha2_nif(Salt1, PrevState, 0, 0, ?ITERATIONS_SHA),\n\tExpectedHash = soft_implementation_vdf_sha(Salt1, PrevState, ?ITERATIONS_SHA, 0),\n\t?assertEqual(ExpectedHash, Real1),\n\n\t{ok, RealSha2, OutCheckpointSha2} = ar_vdf_nif:vdf_sha2_nif(Salt2, Real1, ?CHECKPOINT_COUNT-1, 0, ?ITERATIONS_SHA),\n\t{ok, RealSha3, OutCheckpointSha3} = ar_vdf_nif:vdf_sha2_nif(Salt1, PrevState, ?CHECKPOINT_COUNT, 0, ?ITERATIONS_SHA),\n\tExpectedSha3 = soft_implementation_vdf_sha(Salt1, PrevState, ?ITERATIONS_SHA, ?CHECKPOINT_COUNT),\n\t?assertEqual(ExpectedSha3, RealSha2),\n\t?assertEqual(ExpectedSha3, RealSha3),\n\tExpedctedOutCheckpoint3 = << Real1/binary, OutCheckpointSha2/binary >>,\n\t?assertEqual(ExpedctedOutCheckpoint3, OutCheckpointSha3),\n\tExpectedOutCheckpointSha3Full = << OutCheckpointSha3/binary, RealSha3/binary 
>>,\n\t?assertEqual(ExpectedOutCheckpointSha3Full, OutCheckpointSha3Full),\n\tok = test_vdf_sha_verify_break1(Salt1, PrevState, ?CHECKPOINT_COUNT, 0, ?ITERATIONS_SHA, OutCheckpointSha3, RealSha3),\n\tok = test_vdf_sha_verify_break2(Salt1, PrevState, ?CHECKPOINT_COUNT, 0, ?ITERATIONS_SHA, OutCheckpointSha3, RealSha3),\n\n\t% test vdf_fused\n\t{ok, Real1, _OutCheckpointSha} = ar_vdf_nif:vdf_sha2_fused_nif(Salt1, PrevState, 0, 0, ?ITERATIONS_SHA),\n\t{ok, RealSha2, OutCheckpointSha2} = ar_vdf_nif:vdf_sha2_fused_nif(Salt2, Real1, ?CHECKPOINT_COUNT-1, 0, ?ITERATIONS_SHA),\n\t{ok, RealSha3, OutCheckpointSha3} = ar_vdf_nif:vdf_sha2_fused_nif(Salt1, PrevState, ?CHECKPOINT_COUNT, 0, ?ITERATIONS_SHA),\n\n\t% test vdf_hiopt\n\t{ok, Real1, _OutCheckpointSha} = ar_vdf_nif:vdf_sha2_hiopt_nif(Salt1, PrevState, 0, 0, ?ITERATIONS_SHA),\n\t{ok, RealSha2, OutCheckpointSha2} = ar_vdf_nif:vdf_sha2_hiopt_nif(Salt2, Real1, ?CHECKPOINT_COUNT-1, 0, ?ITERATIONS_SHA),\n\t{ok, RealSha3, OutCheckpointSha3} = ar_vdf_nif:vdf_sha2_hiopt_nif(Salt1, PrevState, ?CHECKPOINT_COUNT, 0, ?ITERATIONS_SHA),\n\tok.\n\ntest_vdf_sha_verify_break1(Salt, PrevState, CheckpointCount, SkipCheckpointCount, Iterations, OutCheckpoint, Hash) ->\n\ttest_vdf_sha_verify_break1(Salt, PrevState, CheckpointCount, SkipCheckpointCount, Iterations, OutCheckpoint, Hash, size(OutCheckpoint)-1).\n\ntest_vdf_sha_verify_break1(_Salt, _PrevState, _CheckpointCount, _SkipCheckpointCount, _Iterations, _OutCheckpoint, _Hash, 0) ->\n\tok;\ntest_vdf_sha_verify_break1(Salt, PrevState, CheckpointCount, SkipCheckpointCount, Iterations, OutCheckpoint, Hash, BreakPos) ->\n\ttest_vdf_sha_verify_break1(Salt, PrevState, CheckpointCount, SkipCheckpointCount, Iterations, OutCheckpoint, Hash, BreakPos-1).\n\ntest_vdf_sha_verify_break2(Salt, PrevState, CheckpointCount, SkipCheckpointCount, Iterations, OutCheckpoint, Hash) ->\n\ttest_vdf_sha_verify_break2(Salt, PrevState, CheckpointCount, SkipCheckpointCount, Iterations, OutCheckpoint, Hash, size(Hash)-1).\n\ntest_vdf_sha_verify_break2(_Salt, _PrevState, _CheckpointCount, _SkipCheckpointCount, _Iterations, _OutCheckpoint, _Hash, 0) ->\n\tok;\ntest_vdf_sha_verify_break2(Salt, PrevState, CheckpointCount, SkipCheckpointCount, Iterations, OutCheckpoint, Hash, BreakPos) ->\n\ttest_vdf_sha_verify_break2(Salt, PrevState, CheckpointCount, SkipCheckpointCount, Iterations, OutCheckpoint, Hash, BreakPos-1).\n\n%%%===================================================================\n%%% \n%%% with skip iterations\n%%% \n%%%===================================================================\n\n%%%===================================================================\n%%% SHA.\n%%%===================================================================\n\nvdf_sha_skip_iterations_test_() ->\n\t{timeout, 500, fun test_vdf_sha_skip_iterations/0}.\n\ntest_vdf_sha_skip_iterations() ->\n\tPrevState = ar_util:decode(?ENCODED_PREV_STATE),\n\tOutCheckpointSha3 = ar_util:decode(?ENCODED_SHA_CHECKPOINT_SKIP),\n\tRealSha3 = ar_util:decode(?ENCODED_SHA_RES_SKIP),\n\tSalt1 = << (1):256 >>,\n\tSaltJump = << (1+?CHECKPOINT_SKIP_COUNT+1):256 >>,\n\n\t{ok, Real1, _OutCheckpointSha} = ar_vdf_nif:vdf_sha2_nif(Salt1, PrevState, 0, ?CHECKPOINT_SKIP_COUNT, ?ITERATIONS_SHA),\n\tExpectedHash = soft_implementation_vdf_sha(Salt1, PrevState, ?ITERATIONS_SHA, ?CHECKPOINT_SKIP_COUNT),\n\t?assertEqual(ExpectedHash, Real1),\n\n\t{ok, RealSha2, OutCheckpointSha2} = ar_vdf_nif:vdf_sha2_nif(SaltJump, Real1, ?CHECKPOINT_COUNT-1, ?CHECKPOINT_SKIP_COUNT, ?ITERATIONS_SHA),\n\t{ok, 
RealSha3, OutCheckpointSha3} = ar_vdf_nif:vdf_sha2_nif(Salt1, PrevState, ?CHECKPOINT_COUNT, ?CHECKPOINT_SKIP_COUNT, ?ITERATIONS_SHA),\n\tExpectedSha3 = soft_implementation_vdf_sha(Salt1, PrevState, ?ITERATIONS_SHA, (1+?CHECKPOINT_COUNT)*(1+?CHECKPOINT_SKIP_COUNT)-1),\n\t?assertEqual(ExpectedSha3, RealSha2),\n\t?assertEqual(ExpectedSha3, RealSha3),\n\tExpectedOutCheckpoint3 = << Real1/binary, OutCheckpointSha2/binary >>,\n\t?assertEqual(ExpectedOutCheckpoint3, OutCheckpointSha3),\n\tok = test_vdf_sha_verify_break1(Salt1, PrevState, ?CHECKPOINT_COUNT, ?CHECKPOINT_SKIP_COUNT, ?ITERATIONS_SHA, OutCheckpointSha3, RealSha3),\n\tok = test_vdf_sha_verify_break2(Salt1, PrevState, ?CHECKPOINT_COUNT, ?CHECKPOINT_SKIP_COUNT, ?ITERATIONS_SHA, OutCheckpointSha3, RealSha3),\n\n\t% test vdf_fused\n\t{ok, Real1, _OutCheckpointSha} = ar_vdf_nif:vdf_sha2_fused_nif(Salt1, PrevState, 0, ?CHECKPOINT_SKIP_COUNT, ?ITERATIONS_SHA),\n\t{ok, RealSha2, OutCheckpointSha2} = ar_vdf_nif:vdf_sha2_fused_nif(SaltJump, Real1, ?CHECKPOINT_COUNT-1, ?CHECKPOINT_SKIP_COUNT, ?ITERATIONS_SHA),\n\t{ok, RealSha3, OutCheckpointSha3} = ar_vdf_nif:vdf_sha2_fused_nif(Salt1, PrevState, ?CHECKPOINT_COUNT, ?CHECKPOINT_SKIP_COUNT, ?ITERATIONS_SHA),\n\n\t% test vdf_hiopt\n\t{ok, Real1, _OutCheckpointSha} = ar_vdf_nif:vdf_sha2_hiopt_nif(Salt1, PrevState, 0, ?CHECKPOINT_SKIP_COUNT, ?ITERATIONS_SHA),\n\t{ok, RealSha2, OutCheckpointSha2} = ar_vdf_nif:vdf_sha2_hiopt_nif(SaltJump, Real1, ?CHECKPOINT_COUNT-1, ?CHECKPOINT_SKIP_COUNT, ?ITERATIONS_SHA),\n\t{ok, RealSha3, OutCheckpointSha3} = ar_vdf_nif:vdf_sha2_hiopt_nif(Salt1, PrevState, ?CHECKPOINT_COUNT, ?CHECKPOINT_SKIP_COUNT, ?ITERATIONS_SHA),\n\tok.\n"
  },
  {
    "path": "apps/arweave/test/ar_mining_io_tests.erl",
    "content": "-module(ar_mining_io_tests).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"arweave/include/ar_consensus.hrl\").\n-include_lib(\"arweave/include/ar_mining.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-define(WEAVE_SIZE, trunc(2.5 * ar_block:partition_size())).\n\nchunks_read(_Worker, WhichChunk, Candidate, RangeStart, ChunkOffsets) ->\n\tets:insert(?MODULE, {WhichChunk, Candidate, RangeStart, ChunkOffsets}).\n\nsetup_all() ->\n\t[B0] = ar_weave:init([], 1, ?WEAVE_SIZE),\n\tRewardAddr = ar_wallet:to_address(ar_wallet:new_keyfile()),\n\t{ok, Config} = arweave_config:get_env(),\n\tStorageModules = lists:flatten(\n\t\t[[{8 * 262144, N, {spora_2_6, RewardAddr}}] || N <- lists:seq(0, 8)]),\n\tar_test_node:start(B0, RewardAddr, Config, StorageModules),\n\t{Setup, Cleanup} = ar_test_node:mock_functions([\n\t\t{ar_mining_worker, chunks_read, fun chunks_read/5},\n\t\t{ar_block, partition_size, fun() -> 8 * 262144 end}\n\t]),\n\tFunctions = Setup(),\n\t{Cleanup, Functions}.\n\ncleanup_all({Cleanup, Functions}) ->\n\tCleanup(Functions).\n\nsetup_one() ->\n\tets:new(?MODULE, [named_table, duplicate_bag, public]).\n\ncleanup_one(_) ->\n\tets:delete(?MODULE).\n\nread_recall_range_test_() ->\n\t{setup, fun setup_all/0, fun cleanup_all/1,\n\t\t{foreach, fun setup_one/0, fun cleanup_one/1,\n\t\t[\n\t\t\t{timeout, 30, fun test_read_recall_range/0},\n\t\t\t{timeout, 30, fun test_partitions/0}\n\t\t]}\n    }.\n\ntest_read_recall_range() ->\n\tCandidate = default_candidate(),\n\t?assertEqual(true, ar_mining_io:read_recall_range(chunk1, self(), Candidate, 0)),\n\twait_for_io(1),\n\t[Chunk1, Chunk2] = get_recall_chunks(),\n\tassert_chunks_read([{chunk1, Candidate, 0, [\n\t\t{?DATA_CHUNK_SIZE, Chunk1},\n\t\t{?DATA_CHUNK_SIZE*2, Chunk2}]}]),\n\n\t?assertEqual(true, ar_mining_io:read_recall_range(chunk1, self(), Candidate, ?DATA_CHUNK_SIZE div 2)),\n\twait_for_io(1),\n\t[Chunk1, Chunk2, Chunk3] = get_recall_chunks(),\n\tassert_chunks_read([{chunk1, Candidate, ?DATA_CHUNK_SIZE div 2, [\n\t\t{?DATA_CHUNK_SIZE, Chunk1},\n\t\t{?DATA_CHUNK_SIZE*2, Chunk2},\n\t\t{?DATA_CHUNK_SIZE*3, Chunk3}]}]),\n\n\t?assertEqual(true, ar_mining_io:read_recall_range(chunk1, self(), Candidate, ?DATA_CHUNK_SIZE)),\n\twait_for_io(1),\n\t[Chunk2, Chunk3] = get_recall_chunks(),\n\tassert_chunks_read([{chunk1, Candidate, ?DATA_CHUNK_SIZE, [\n\t\t{?DATA_CHUNK_SIZE*2, Chunk2},\n\t\t{?DATA_CHUNK_SIZE*3, Chunk3}]}]),\n\n\t?assertEqual(true, ar_mining_io:read_recall_range(chunk2, self(), Candidate,\n\t\tar_block:partition_size() - ?DATA_CHUNK_SIZE)),\n\twait_for_io(1),\n\t[Chunk4, Chunk5] = get_recall_chunks(),\n\tassert_chunks_read([{chunk2, Candidate, ar_block:partition_size() - ?DATA_CHUNK_SIZE, [\n\t\t{ar_block:partition_size(), Chunk4},\n\t\t{ar_block:partition_size() + ?DATA_CHUNK_SIZE, Chunk5}]}]),\n\n\t?assertEqual(true, ar_mining_io:read_recall_range(chunk2, self(), Candidate, ar_block:partition_size())),\n\twait_for_io(1),\n\t[Chunk5, Chunk6] = get_recall_chunks(),\n\tassert_chunks_read([{chunk2, Candidate, ar_block:partition_size(), [\n\t\t{ar_block:partition_size() + ?DATA_CHUNK_SIZE, Chunk5},\n\t\t{ar_block:partition_size() + (2*?DATA_CHUNK_SIZE), Chunk6}]}]),\n\n\t?assertEqual(true, ar_mining_io:read_recall_range(chunk1, self(), Candidate,\n\t\t?WEAVE_SIZE - ?DATA_CHUNK_SIZE)),\n\twait_for_io(1),\n\t[Chunk7] = get_recall_chunks(),\n\tassert_chunks_read([{chunk1, Candidate, ?WEAVE_SIZE - ?DATA_CHUNK_SIZE, [\n\t\t{?WEAVE_SIZE, 
Chunk7}]}]),\n\n\t?assertEqual(false, ar_mining_io:read_recall_range(chunk1, self(), Candidate, ?WEAVE_SIZE)).\n\ntest_partitions() ->\n\tCandidate = default_candidate(),\n\tMiningAddress = Candidate#mining_candidate.mining_address,\n\n\tar_mining_io:set_largest_seen_upper_bound(0),\n\t?assertEqual([], ar_mining_io:get_partitions()),\n\n\tar_mining_io:set_largest_seen_upper_bound(ar_block:partition_size()),\n\t?assertEqual([], ar_mining_io:get_partitions(0)),\n\t?assertEqual([\n\t\t\t{0, MiningAddress, 0}],\n\t\tar_mining_io:get_partitions()),\n\n\tar_mining_io:set_largest_seen_upper_bound(trunc(2.5 * ar_block:partition_size())),\n\t?assertEqual([\n\t\t\t{0, MiningAddress, 0}],\n\t\tar_mining_io:get_partitions(ar_block:partition_size())),\n\t?assertEqual([\n\t\t\t{0, MiningAddress, 0},\n\t\t\t{1, MiningAddress, 0}],\n\t\tar_mining_io:get_partitions()),\n\n\tar_mining_io:set_largest_seen_upper_bound(trunc(5 * ar_block:partition_size())),\n\t?assertEqual([\n\t\t\t{0, MiningAddress, 0},\n\t\t\t{1, MiningAddress, 0}],\n\t\tar_mining_io:get_partitions(trunc(2.5 * ar_block:partition_size()))),\n\t?assertEqual([\n\t\t\t{0, MiningAddress, 0},\n\t\t\t{1, MiningAddress, 0},\n\t\t\t{2, MiningAddress, 0},\n\t\t\t{3, MiningAddress, 0},\n\t\t\t{4, MiningAddress, 0}],\n\t\tar_mining_io:get_partitions()),\n\t?assertEqual([\n\t\t\t{0, MiningAddress, 0},\n\t\t\t{1, MiningAddress, 0},\n\t\t\t{2, MiningAddress, 0},\n\t\t\t{3, MiningAddress, 0},\n\t\t\t{4, MiningAddress, 0}],\n\t\tar_mining_io:get_partitions(trunc(5 * ar_block:partition_size()))).\n\nget_minable_storge_modules_test() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tAddr = Config#config.mining_addr,\n\ttry\n\t\tInput = [\n\t\t\t{100, 0, {spora_2_6, Addr}},\n\t\t\t{200, 0, unpacked},\n\t\t\t{300, 0, {replica_2_9, Addr}}\n\t\t],\n\t\tExpected = [\n\t\t\t{100, 0, {spora_2_6, Addr}},\n\t\t\t{300, 0, {replica_2_9, Addr}}\n\t\t],\n\t\tarweave_config:set_env(Config#config{storage_modules = Input}),\n\t\t?assertEqual(Expected, ar_mining_io:get_minable_storage_modules())\n\tafter\n\t\tarweave_config:set_env(Config)\n\tend.\n\nget_packing_test() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tAddr = Config#config.mining_addr,\n\ttry\n\t\tInput = [\n\t\t\t{100, 0, unpacked},\n\t\t\t{200, 0, {spora_2_6, Addr}},\n\t\t\t{300, 0, {replica_2_9, Addr}}\n\t\t],\n\t\tExpected = {spora_2_6, Addr},\n\t\tarweave_config:set_env(Config#config{storage_modules = Input}),\n\t\t?assertEqual(Expected, ar_mining_io:get_packing())\n\tafter\n\t\tarweave_config:set_env(Config)\n\tend.\n\ndefault_candidate() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tMiningAddr = Config#config.mining_addr,\n\t#mining_candidate{\n\t\tmining_address = MiningAddr\n\t}.\n\nwait_for_io(NumChunks) ->\n\tResult = ar_util:do_until(\n\t\tfun() ->\n\t\t\tNumChunks == length(ets:tab2list(?MODULE))\n\t\tend,\n\t\t100,\n\t\t60000),\n\t?assertEqual(true, Result, \"Timeout while waiting to read chunks\").\n\nget_recall_chunks() ->\n\tcase ets:tab2list(?MODULE) of\n\t\t[] -> [];\n\t\t[{_WhichChunk, _Candidate, _RangeStart, ChunkOffsets}] -> \n\t\t\tlists:map(\n\t\t\t\tfun({_Offset, Chunk}) -> \n\t\t\t\t\tChunk\n\t\t\t\tend,\n\t\t\t\tChunkOffsets\n\t\t\t)\n\tend.\n\nassert_chunks_read(ExpectedChunks) ->\n\t?assertEqual(ExpectedChunks, ets:tab2list(?MODULE)),\n\tets:delete_all_objects(?MODULE).\n"
  },
  {
    "path": "apps/arweave/test/ar_mining_server_tests.erl",
    "content": "-module(ar_mining_server_tests).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"arweave/include/ar_consensus.hrl\").\n-include_lib(\"arweave/include/ar_mining.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-define(ALIGNED_PARTITION_SIZE, 2_097_152).\n-define(WEAVE_SIZE, (3 * ?ALIGNED_PARTITION_SIZE)).\n\n%% RECALL_RANGE_1 and SYNCED_RECALL_RANGE_2 must be different partitions so that different io\n%% threads are used. \n%% ?RECALL_RANGE_1 is set so 1 chunk is synced and one is missing.\n-define(RECALL_RANGE_1, (3*?PARTITION_SIZE-?DATA_CHUNK_SIZE)).\n-define(SYNCED_RECALL_RANGE_2, ?PARTITION_SIZE).\n-define(UNSYNCED_RECALL_RANGE_2, 0).\n\n\n%% ------------------------------------------------------------------------------------------------\n%% Fixtures\n%% ------------------------------------------------------------------------------------------------\nsetup_all() ->\n\t[B0] = ar_weave:init([], ar_test_node:get_difficulty_for_invalid_hash(), ?WEAVE_SIZE),\n\tRewardAddr = ar_wallet:to_address(ar_wallet:new_keyfile()),\n\t{ok, Config} = arweave_config:get_env(),\n\t%% We'll use partition 0 for any unsynced ranges.\n\tStorageModules = [\n\t\t{ar_block:partition_size(), 1, {spora_2_6, RewardAddr}},\n\t\t{ar_block:partition_size(), 2, {spora_2_6, RewardAddr}}\n\t],\n\tar_test_node:start(B0, RewardAddr, Config, StorageModules),\n\tConfig.\n\ncleanup_all(Config) ->\n\tok = arweave_config:set_env(Config).\n\n%% @doc Setup the environment so we can control VDF step generation.\nsetup_pool_client() ->\n\t[B0] = ar_weave:init([], ar_test_node:get_difficulty_for_invalid_hash(), ?WEAVE_SIZE),\n\tRewardAddr = ar_wallet:to_address(ar_wallet:new_keyfile()),\n\t{ok, Config} = arweave_config:get_env(),\n\t%% We'll use partition 0 for any unsynced ranges.\n\tStorageModules = [\n\t\t{ar_block:partition_size(), 1, {spora_2_6, RewardAddr}},\n\t\t{ar_block:partition_size(), 2, {spora_2_6, RewardAddr}}\n\t],\n\tar_test_node:start(B0, RewardAddr,\n\t\tConfig#config{\n\t\t\tnonce_limiter_server_trusted_peers = [ ar_util:format_peer(vdf_server()) ],\n\t\t\tis_pool_client=true,\n\t\t\tpool_server_address= <<\"http://localhost:2002\">>,\n\t\t\tpool_api_key = <<\"pool_secret\">>\n\t\t},\n\t\tStorageModules),\n\tConfig.\n\ncleanup_pool_client(Config) ->\n\tok = arweave_config:set_env(Config).\n\nsetup_one() ->\n\tets:new(mock_counter, [set, public, named_table]),\n\tets:new(add_task, [named_table, bag, public]).\n\ncleanup_one(_) ->\n\tets:delete(add_task),\n\tets:delete(mock_counter).\n\n%% ------------------------------------------------------------------------------------------------\n%% Test Registration\n%% ------------------------------------------------------------------------------------------------\nchunk_cache_size_test_() ->\n\t{setup, fun setup_all/0, fun cleanup_all/1,\n\t\t{foreach, fun setup_one/0, fun cleanup_one/1,\n\t\t[\n\t\t\t{timeout, 30, fun test_h2_solution_chunk1_first/0},\n\t\t\t{timeout, 30, fun test_h2_solution_chunk2_first/0},\n\t\t\t{timeout, 30, fun test_h1_solution_h2_synced_chunk1_first/0},\n\t\t\t{timeout, 30, fun test_h1_solution_h2_synced_chunk2_first/0},\n\t\t\t{timeout, 30, fun test_h1_solution_h2_unsynced/0},\n\t\t\t{timeout, 30, fun test_no_solution_then_h2_solution/0},\n\t\t\t{timeout, 30, fun test_no_solution_then_h1_solution_h2_synced/0},\n\t\t\t{timeout, 30, fun test_no_solution_then_h1_solution_h2_unsynced/0}\n\t\t]}\n    }.\n\npool_job_test_() ->\n\t{setup, fun 
setup_pool_client/0, fun cleanup_pool_client/1,\n\t\t{foreach, fun setup_one/0, fun cleanup_one/1,\n\t\t[\n\t\t\tar_test_node:test_with_mocked_functions([mock_add_task(), mock_get_current_sesssion()],\n\t\t\tfun test_pool_job_no_cached_sessions/0, 120)\n\t\t]}\n    }.\n\n%% ------------------------------------------------------------------------------------------------\n%% chunk_cache_size_test_\n%% ------------------------------------------------------------------------------------------------\ntest_h2_solution_chunk1_first() ->\n\tdo_test_chunk_cache_size_with_mocks(\n\t\t[ar_test_node:invalid_solution()],\n\t\t[ar_test_node:valid_solution()],\n\t\t[?SYNCED_RECALL_RANGE_2],\n\t\t[chunk1]\n\t).\n\ntest_h2_solution_chunk2_first() ->\n\tdo_test_chunk_cache_size_with_mocks(\n\t\t[ar_test_node:invalid_solution()],\n\t\t[ar_test_node:valid_solution()],\n\t\t[?SYNCED_RECALL_RANGE_2],\n\t\t[chunk2]\n\t).\n\ntest_h1_solution_h2_synced_chunk1_first() ->\n\tdo_test_chunk_cache_size_with_mocks(\n\t\t[ar_test_node:valid_solution()],\n\t\t[ar_test_node:invalid_solution()],\n\t\t[?SYNCED_RECALL_RANGE_2],\n\t\t[chunk1]\n\t).\n\ntest_h1_solution_h2_synced_chunk2_first() ->\n\tdo_test_chunk_cache_size_with_mocks(\n\t\t[ar_test_node:valid_solution()],\n\t\t[ar_test_node:invalid_solution()],\n\t\t[?SYNCED_RECALL_RANGE_2],\n\t\t[chunk2]\n\t).\n\ntest_h1_solution_h2_unsynced() ->\n\tdo_test_chunk_cache_size_with_mocks(\n\t\t[ar_test_node:valid_solution()],\n\t\t[],\n\t\t[?UNSYNCED_RECALL_RANGE_2],\n\t\t[chunk1]\n\t).\n\ntest_no_solution_then_h2_solution() ->\n\tdo_test_chunk_cache_size_with_mocks(\n\t\t[ar_test_node:invalid_solution()],\n\t\t[ar_test_node:invalid_solution(), ar_test_node:invalid_solution(),\n\t\t\tar_test_node:valid_solution()],\n\t\t[?SYNCED_RECALL_RANGE_2],\n\t\t[chunk1]\n\t).\n\ntest_no_solution_then_h1_solution_h2_synced() ->\n\tdo_test_chunk_cache_size_with_mocks(\n\t\t[ar_test_node:invalid_solution(), ar_test_node:invalid_solution(),\n\t\t\tar_test_node:valid_solution()],\n\t\t[ar_test_node:invalid_solution()],\n\t\t[?SYNCED_RECALL_RANGE_2],\n\t\t[chunk1]\n\t).\n\ntest_no_solution_then_h1_solution_h2_unsynced() ->\n\tdo_test_chunk_cache_size_with_mocks(\n\t\t[ar_test_node:invalid_solution(), ar_test_node:invalid_solution(),\n\t\t\tar_test_node:valid_solution()],\n\t\t[],\n\t\t[?UNSYNCED_RECALL_RANGE_2],\n\t\t[chunk1]\n\t).\n\n%% ------------------------------------------------------------------------------------------------\n%% pool_job_test_\n%% ------------------------------------------------------------------------------------------------\n%% we have to wait to let the ar_events get processed whenever we add a pool job\n-define(WAIT_TIME, 1000).\n\ntest_pool_job_no_cached_sessions() ->\n\tSessionKey1 = {<<\"session1\">>, 1, 1},\n\tSessionKey2 = {<<\"session2\">>, 2, 1},\n\tSessionKey3 = {<<\"session3\">>, 3, 1},\n\tOutput = crypto:strong_rand_bytes(32),\n\tPartitionUpperBound = ?WEAVE_SIZE,\n\tSeed1 = crypto:strong_rand_bytes(32),\n\tSeed2 = crypto:strong_rand_bytes(32),\n\tSeed3 = crypto:strong_rand_bytes(32),\n\tPartialDiff = {1, 1},\n\tar_mining_server:add_pool_job(\n\t\tSessionKey1, 1, Output, PartitionUpperBound, Seed1, PartialDiff),\n\tar_mining_server:add_pool_job(\n\t\tSessionKey1, 2, Output, PartitionUpperBound, Seed1, PartialDiff),\n\ttimer:sleep(?WAIT_TIME),\n\t?assertEqual(sets:from_list([SessionKey1]), ar_mining_server:active_sessions()),\n\t?assertEqual([1, 1, 2, 2], lists:sort(mined_steps())),\n\n\tar_mining_server:add_pool_job(\n\t\tSessionKey2, 5, Output, 
PartitionUpperBound, Seed2, PartialDiff),\n\tar_mining_server:add_pool_job(\n\t\tSessionKey2, 6, Output, PartitionUpperBound, Seed2, PartialDiff),\n\ttimer:sleep(?WAIT_TIME),\n\t?assertEqual(sets:from_list([SessionKey1, SessionKey2]), ar_mining_server:active_sessions()),\n\t?assertEqual([5, 5, 6, 6], lists:sort(mined_steps())),\n\n\tar_mining_server:add_pool_job(\n\t\tSessionKey3, 10, Output, PartitionUpperBound, Seed3, PartialDiff),\n\tar_mining_server:add_pool_job(\n\t\tSessionKey3, 12, Output, PartitionUpperBound, Seed3, PartialDiff),\n\ttimer:sleep(?WAIT_TIME),\n\t?assertEqual(sets:from_list([SessionKey2, SessionKey3]), ar_mining_server:active_sessions()),\n\t?assertEqual([10, 10, 12, 12],lists:sort(mined_steps())),\n\n\tar_mining_server:add_pool_job(\n\t\tSessionKey1, 4, Output, PartitionUpperBound, Seed1, PartialDiff),\n\ttimer:sleep(?WAIT_TIME),\n\t?assertEqual(sets:from_list([SessionKey2, SessionKey3]), ar_mining_server:active_sessions()),\n\t?assertEqual([], mined_steps()).\n\n\n%% ------------------------------------------------------------------------------------------------\n%% Helpers\n%% ------------------------------------------------------------------------------------------------\ndo_test_chunk_cache_size_with_mocks(H1s, H2s, RecallRange2s, FirstChunks) ->\n\tHeight = ar_node:get_height() + 1,\n\tets:insert(mock_counter, {compute_h1, 0}),\n\tets:insert(mock_counter, {compute_h2, 0}),\n\tets:insert(mock_counter, {get_recall_range, 0}),\n\tets:insert(mock_counter, {get_range, 0}),\n\t{Setup, Cleanup} = ar_test_node:mock_functions([\n\t\t{\n\t\t\tar_retarget, is_retarget_height,\n\t\t\tfun (_Height) ->\n\t\t\t\tfalse\n\t\t\tend\n\t\t},\n\t\t{\n\t\t\tar_block, compute_h1,\n\t\t\tfun (_H0, _Nonce, _Chunk) ->\n\t\t\t\tCount = increment_mock_counter(compute_h1),\n\t\t\t\tSolution = get_mock_value(Count, H1s),\n\t\t\t\t{Solution, Solution}\n\t\t\tend\n\t\t},\n\t\t{\n\t\t\tar_block, compute_h2,\n\t\t\tfun (_H0, _Nonce, _Chunk) ->\n\t\t\t\tCount = increment_mock_counter(compute_h2),\n\t\t\t\tSolution = get_mock_value(Count, H2s),\n\t\t\t\t{Solution, Solution}\n\t\t\tend\n\t\t},\n\t\t{\n\t\t\tar_block, get_recall_range,\n\t\t\tfun (_H0, _PartitionNumber, _PartitionUpperBound) ->\n\t\t\t\tCount = increment_mock_counter(get_recall_range),\n\t\t\t\tRecallRange2 = get_mock_value(Count, RecallRange2s),\n\t\t\t\t{?RECALL_RANGE_1, RecallRange2}\n\t\t\tend\n\t\t},\n\t\t{\n\t\t\tar_chunk_storage, get_range,\n\t\t\tfun (RangeStart, Size, StoreID) ->\n\t\t\t\tCount = increment_mock_counter(get_range),\n\t\t\t\tFirstChunk = get_mock_value(Count, FirstChunks),\n\t\t\t\tcase FirstChunk == chunk1 andalso RangeStart /= ?RECALL_RANGE_1 of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\ttimer:sleep(100);\n\t\t\t\t\t_ -> ok\n\t\t\t\tend,\n\t\t\t\tcase FirstChunk == chunk2 andalso RangeStart == ?RECALL_RANGE_1 of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\ttimer:sleep(100);\n\t\t\t\t\t_ -> ok\n\t\t\t\tend,\n\t\t\t\tmeck:passthrough([RangeStart, Size, StoreID])\n\t\t\tend\n\t\t}\n\t]),\n\tFunctions = Setup(),\n\n\ttry\n\t\tar_test_node:mine(),\n\t\tar_test_node:wait_until_height(main, Height),\n\t\t%% wait until the mining has stopped\n\t\t?assert(ar_util:do_until(fun() -> get_chunk_cache_size() == 0 end, 200, 10000))\n\tafter\n\t\tCleanup(Functions)\n\tend.\n\nget_chunk_cache_size() ->\n\tPattern = {{chunk_cache_size, '$1'}, '_'}, % '$1' matches any PartitionNumber\n    Entries = ets:match(mock_counter, Pattern),\n    lists:foldl(\n        fun(PartitionNumber, Acc) ->\n\t\t\tcase ets:lookup(ar_mining_server, {chunk_cache_size, 
PartitionNumber}) of\n\t\t\t\t[] ->\n\t\t\t\t\tAcc;\n\t\t\t\t[{_, Size}] ->\n\t\t\t\t\tAcc + Size\n\t\t\tend\n\t\tend,\n\t\t0,\n\t\tEntries\n\t).\n\nget_mock_value(Index, Values) when Index < length(Values) ->\n\tlists:nth(Index, Values);\nget_mock_value(_, Values) ->\n\tlists:last(Values).\n\nincrement_mock_counter(Mock) ->\n\tets:update_counter(mock_counter, Mock, {2, 1}),\n\t[{_, Count}] = ets:lookup(mock_counter, Mock),\n\tCount.\n\nvdf_server() ->\n\t{127,0,0,1,2001}.\n\nmined_steps() ->\n\tSteps = lists:reverse(ets:foldl(\n\t\tfun({_Worker, _Task, Step}, Acc) -> [Step | Acc] end, [], add_task)),\n\tets:delete_all_objects(add_task),\n\tSteps.\n\nmock_add_task() ->\n\t{\n\t\tar_mining_worker, add_task,\n\t\tfun(Worker, TaskType, Candidate) ->\n\t\t\tets:insert(add_task, {Worker, TaskType, Candidate#mining_candidate.step_number})\n\t\tend\n\t}.\n\nmock_get_current_sesssion() ->\n\t{\n\t\tar_nonce_limiter, get_current_session,\n\t\tfun() ->\n\t\t\t{undefined, not_found}\n\t\tend\n\t}.\n"
  },
  {
    "path": "apps/arweave/test/ar_mining_worker_tests.erl",
    "content": "-module(ar_mining_worker_tests).\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"arweave/include/ar_mining.hrl\").\n-include_lib(\"arweave/include/ar_consensus.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-define(PARTITION, 0).\n\n-define(TESTER_REGISTER_NAME, ?MODULE).\n-record(state, {\n\tworker_pid :: pid(),\n\tsession_key :: binary(),\n\tcandidate :: #mining_candidate{}\n}).\n\n-define(compute_h0(WorkerPid, Candidate), {compute_h0, WorkerPid, Candidate}).\n-define(compute_h1(WorkerPid, Candidate), {compute_h1, WorkerPid, Candidate}).\n-define(compute_h2(WorkerPid, Candidate), {compute_h2, WorkerPid, Candidate}).\n-define(get_recall_range(H0, Partition, PartitionUpperBound), {get_recall_range, H0, Partition, PartitionUpperBound}).\n-define(get_recall_range_response(RecallRange1, RecallRange2), {get_recall_range_response, RecallRange1, RecallRange2}).\n-define(is_recall_range_readable(H0, RecallRange), {is_recall_range_readable, H0, RecallRange}).\n-define(is_recall_range_readable_response(IsReadable), {is_recall_range_readable_response, IsReadable}).\n-define(read_recall_range(Kind, WorkerPid, Candidate2, RecallRangeStart), {read_recall_range, Kind, WorkerPid, Candidate2, RecallRangeStart}).\n-define(read_recall_range_response(), {read_recall_range_response}).\n-define(passes_diff_check(SolutionHash, IsPoA1, DiffPair, PackingDifficulty), {passes_diff_check, SolutionHash, IsPoA1, DiffPair, PackingDifficulty}).\n-define(passes_diff_check_response(IsPassed), {passes_diff_check_response, IsPassed}).\n-define(prepare_and_post_solution(Candidate), {prepare_and_post_solution, Candidate}).\n\n-define(with_setup_each(F), fun() ->\n\tState = setup_each(),\n\ttry\n\t\tF(State)\n\tcatch\n\t\tC:E:S ->\n\t\t\t?debugFmt(\"Error: ~p:~p:~p\", [C, E, S]),\n\t\t\tcleanup_each(State),\n\t\t\tthrow({C, E})\n\tafter\n\t\tcleanup_each(State)\n\tend\nend).\n\n%% ------------------------------------------------------------------------------------------------\n%% Fixtures\n%% ------------------------------------------------------------------------------------------------\n\n%% This function is called for all tests.\n%% It mocks the necessary modules and functions.\n%% Mocked functions are sending messages to the ?TESTER_REGISTER_NAME process (the test).\nsetup_all() ->\n\tMocks = [\n\t\t{ar_mining_cache, session_exists, fun(_, _) -> true end},\n\t\t{ar_mining_hash, compute_h0, fun ar_mining_hash__compute_h0/2},\n\t\t{ar_mining_hash, compute_h1, fun ar_mining_hash__compute_h1/2},\n\t\t{ar_mining_hash, compute_h2, fun ar_mining_hash__compute_h2/2},\n\t\t{ar_block, get_recall_range, fun ar_block__get_recall_range/3},\n\t\t{ar_mining_io, is_recall_range_readable, fun ar_mining_io__is_recall_range_readable/2},\n\t\t{ar_mining_io, read_recall_range, fun ar_mining_io__read_recall_range/4},\n\t\t{ar_node_utils, passes_diff_check, fun ar_node_utils__passes_diff_check/4},\n\t\t{ar_mining_server, prepare_and_post_solution, fun ar_mining_server__prepare_and_post_solution/1}\n\t],\n\tMockedModules = lists:usort([Module || {Module, _, _} <- Mocks]),\n\tmeck:new(MockedModules, [passthrough]),\n\t[meck:expect(M, F, Fun) || {M, F, Fun} <- Mocks],\n\tMockedModules.\n\n%% This function is called for each test.\n%% It creates a new worker and registers itself as a tester.\n%% It also sets up the worker with the necessary cache limits and sessions.\nsetup_each() ->\n\tStepNumber = 1,\n\tMiningSession = <<\"mining_session\">>,\n\tCandidate = 
#mining_candidate{\n\t\tpacking_difficulty = ?REPLICA_2_9_PACKING_DIFFICULTY,\n\t\tsession_key = MiningSession,\n\t\tstep_number = StepNumber\n\t},\n\t{ok, Pid} = ar_mining_worker:start_link(?PARTITION, ?REPLICA_2_9_PACKING_DIFFICULTY),\n\tar_mining_worker:set_cache_limits(Pid, 10 * ?MiB, 1_000),\n\tar_mining_worker:set_sessions(Pid, [MiningSession]),\n\n\tregister(?TESTER_REGISTER_NAME, self()),\n\n\t#state{\n\t\tworker_pid = Pid,\n\t\tsession_key = MiningSession,\n\t\tcandidate = Candidate\n\t}.\n\n%% This function is called after each test.\n%% It unregisters the tester and exits the worker.\ncleanup_each(State) ->\n\tcleanup_each_dump_messages(),\n\tunregister(?TESTER_REGISTER_NAME),\n\terlang:unlink(State#state.worker_pid),\n\terlang:exit(State#state.worker_pid, kill),\n\tok.\n\ncleanup_each_dump_messages() ->\n\treceive\n\t\tMsg ->\n\t\t\t?debugFmt(\"Unexpected message in queue: ~p\", [Msg]),\n\t\t\tcleanup_each_dump_messages()\n\tafter 0 ->\n\t\tok\n\tend.\n\n%% This function is called after all tests.\n%% It unloads the mocked modules.\ncleanup_all(MockedModules) ->\n\tmeck:unload(MockedModules).\n\nar_mining_hash__compute_h0(WorkerPid, Candidate) ->\n\t?TESTER_REGISTER_NAME ! ?compute_h0(WorkerPid, Candidate).\n\nar_mining_hash__compute_h1(WorkerPid, Candidate) ->\n\t?TESTER_REGISTER_NAME ! ?compute_h1(WorkerPid, Candidate).\n\nar_mining_hash__compute_h2(WorkerPid, Candidate) ->\n\t?TESTER_REGISTER_NAME ! ?compute_h2(WorkerPid, Candidate).\n\nar_block__get_recall_range(H0, Partition, PartitionUpperBound) ->\n\t?TESTER_REGISTER_NAME ! ?get_recall_range(H0, Partition, PartitionUpperBound),\n\treceive\n\t\t?get_recall_range_response(RecallRange1, RecallRange2) -> {RecallRange1, RecallRange2}\n\tafter 1000 ->\n\t\texit(no_get_recall_range_response_received)\n\tend.\n\nar_mining_io__is_recall_range_readable(Candidate, RecallRange) ->\n\t?TESTER_REGISTER_NAME ! ?is_recall_range_readable(Candidate, RecallRange),\n\treceive\n\t\t?is_recall_range_readable_response(IsReadable) -> IsReadable\n\tafter 1000 ->\n\t\texit(no_is_recall_range_readable_response_received)\n\tend.\n\nar_mining_io__read_recall_range(Kind, WorkerPid, Candidate, RecallRangeStart) ->\n\t?TESTER_REGISTER_NAME ! ?read_recall_range(Kind, WorkerPid, Candidate, RecallRangeStart),\n\treceive\n\t\t?read_recall_range_response() -> ok\n\tafter 1000 ->\n\t\texit(no_read_recall_range_response_received)\n\tend.\n\nar_node_utils__passes_diff_check(SolutionHash, IsPoA1, DiffPair, PackingDifficulty) ->\n\t?TESTER_REGISTER_NAME ! ?passes_diff_check(SolutionHash, IsPoA1, DiffPair, PackingDifficulty),\n\treceive\n\t\t?passes_diff_check_response(IsPassed) -> IsPassed\n\tafter 1000 ->\n\t\texit(no_passes_diff_check_response_received)\n\tend.\n\n%% ar_mining_server:prepare_and_post_solution(Candidate)\nar_mining_server__prepare_and_post_solution(Candidate) ->\n\t?TESTER_REGISTER_NAME ! 
?prepare_and_post_solution(Candidate).\n\n%% ------------------------------------------------------------------------------------------------\n%% Helpers\n%% ------------------------------------------------------------------------------------------------\n\nhandle_compute_h0(StepNumber, H0, State) ->\n\t?debugFmt(\"Handling compute_h0 for step ~p (H0 ~p)\", [StepNumber, H0]),\n\treceive\n\t\t?compute_h0(Pid, #mining_candidate{step_number = StepNumber} = Candidate) when Pid == State#state.worker_pid ->\n\t\t\tar_mining_worker:computed_hash(State#state.worker_pid, computed_h0, H0, undefined, Candidate)\n\tafter 1000 ->\n\t\texit(no_compute_h0_message_received)\n\tend.\n\nhandle_compute_h1s(H1s, State) ->\n\thandle_compute_h1s(H1s, [], State).\n%% The order of calculations might be different, but it does not really matter:\n%% the \"subchunks\" we used to represent the recall range are all zeroes and\n%% therefore are all equal.\n%% In the test we only rely on the fake hash value to mark the hash valid or not,\n%% we do not rely on the order in which these hashes will arrive later for\n%% difficulty checks.\n%% The same stands for H2s below.\nhandle_compute_h1s([], Acc, State) -> handle_send_computed_h1s(lists:reverse(Acc), State);\nhandle_compute_h1s([H1 | H1s], Acc, State) ->\n\t?debugFmt(\"Handling compute_h1 for H1 ~p\", [H1]),\n\tAcc1 = receive\n\t\t?compute_h1(Pid, Candidate) when Pid == State#state.worker_pid -> [{H1, Candidate} | Acc]\n\tafter 1000 ->\n\t\texit(no_compute_h1_message_received)\n\tend,\n\thandle_compute_h1s(H1s, Acc1, State).\n\nhandle_send_computed_h1s([], _State) -> ok;\nhandle_send_computed_h1s([{H1, Candidate} | Acc], State) ->\n\tar_mining_worker:computed_hash(State#state.worker_pid, computed_h1, H1, <<\"Preimage1\">>, Candidate),\n\thandle_send_computed_h1s(Acc, State).\n\nhandle_compute_h2s(H2s, State) ->\n\thandle_compute_h2s(H2s, [], State).\nhandle_compute_h2s([], Acc, State) -> handle_send_computed_h2s(lists:reverse(Acc), State);\nhandle_compute_h2s([H2 | H2s], Acc, State) ->\n\t?debugFmt(\"Handling compute_h2 for H2 ~p\", [H2]),\n\tAcc1 = receive\n\t\t?compute_h2(Pid, Candidate) when Pid == State#state.worker_pid -> [{H2, Candidate} | Acc]\n\tafter 1000 ->\n\t\texit(no_compute_h2_message_received)\n\tend,\n\thandle_compute_h2s(H2s, Acc1, State).\n\nhandle_send_computed_h2s([], _State) -> ok;\nhandle_send_computed_h2s([{H2, Candidate} | Acc], State) ->\n\tar_mining_worker:computed_hash(State#state.worker_pid, computed_h2, H2, <<\"Preimage2\">>, Candidate),\n\thandle_send_computed_h2s(Acc, State).\n\nhandle_get_recall_range(H0, RecallRange1, RecallRange2, State) ->\n\t?debugFmt(\"Handling get_recall_range for H0 ~p (~p ~p)\", [H0, RecallRange1, RecallRange2]),\n\treceive\n\t\t?get_recall_range(H0, _Partition1, _PartitionUpperBound) ->\n\t\t\tState#state.worker_pid ! ?get_recall_range_response(RecallRange1, RecallRange2)\n\tafter 1000 ->\n\t\texit(no_recall_range_message_received)\n\tend.\n\nhandle_is_recall_range_readable(RecallRangeStart, IsReadable, State) ->\n\t?debugFmt(\"Handling is_recall_range_readable for RecallRangeStart ~p (IsReadable ~p)\", [RecallRangeStart, IsReadable]),\n\treceive\n\t\t?is_recall_range_readable(_Candidate, RecallRangeStart) ->\n\t\t\tState#state.worker_pid ! 
?is_recall_range_readable_response(IsReadable)\n\tafter 1000 ->\n\t\texit(no_is_recall_range_readable_message_received)\n\tend.\n\nhandle_read_recall_range(Kind, RecallRangeStart, State) ->\n\t?debugFmt(\"Handling read_recall_range for Kind ~p (RecallRangeStart ~p)\", [Kind, RecallRangeStart]),\n\treceive\n\t\t?read_recall_range(Kind, _WorkerPid, Candidate, RecallRangeStart) ->\n\t\t\tState#state.worker_pid ! ?read_recall_range_response(),\n\t\t\tCandidate\n\tafter 1000 ->\n\t\texit(no_read_recall_range_message_received)\n\tend.\n\nhandle_passes_diff_checks(HashesMap, _State) when map_size(HashesMap) == 0 -> ok;\nhandle_passes_diff_checks(HashesMap, State) ->\n\tHashesMap1 = receive\n\t\t?passes_diff_check(Hash, _IsPoA1, _DiffPair, _PackingDifficulty) ->\n\t\t\t{IsPassed, HashesMap_} = maps:take(Hash, HashesMap),\n\t\t\t?debugFmt(\"Checking difficulty for ~p (IsPassed ~p)\", [Hash, IsPassed]),\n\t\t\tState#state.worker_pid ! ?passes_diff_check_response(IsPassed),\n\t\t\tHashesMap_\n\tafter 1000 ->\n\t\texit(no_passes_diff_check_message_received)\n\tend,\n\thandle_passes_diff_checks(HashesMap1, State).\n\nhandle_prepare_and_post_solution_h1(H1) ->\n\t?debugFmt(\"Handling prepare_and_post_solution for H1 ~p\", [H1]),\n\treceive\n\t\t?prepare_and_post_solution(#mining_candidate{h1 = H1}) -> ok\n\tafter 1000 ->\n\t\texit(no_prepare_and_post_solution_message_received)\n\tend.\n\nhandle_prepare_and_post_solution_h2(H2) ->\n\t?debugFmt(\"Handling prepare_and_post_solution for H2 ~p\", [H2]),\n\treceive\n\t\t?prepare_and_post_solution(#mining_candidate{h2 = H2}) -> ok\n\tafter 1000 ->\n\t\texit(no_prepare_and_post_solution_message_received)\n\tend.\n\nassert_no_messages() ->\n\t?debugMsg(\"Asserting no significant messages left\"),\n\treceive\n\t\tMsg ->\n\t\t\t?debugFmt(\"Unexpected message received: ~p\", [Msg]),\n\t\t\terror({unexpected_message_received, Msg})\n\tafter 1000 ->\n\t\tok\n\tend.\n\ngenerate_recall_range(RecallRangeStart, Difficulty) ->\n\tRecallRangeSize = ar_block:get_recall_range_size(Difficulty),\n\t[{RecallRangeStart + RecallRangeSize, <<0:RecallRangeSize/unit:8>>}].\n\ngenerate_hashes_for_recall_range(Prefix, Difficulty) ->\n  [<<Prefix/binary, (integer_to_binary(N))/binary>> || N <- lists:seq(1, ar_block:get_nonces_per_recall_range(Difficulty))].\n\n%% ------------------------------------------------------------------------------------------------\n%% Tests\n%% ------------------------------------------------------------------------------------------------\n\nmining_worker_test_() ->\n\t{setup, fun setup_all/0, fun cleanup_all/1,\n\t\t[\n\t\t\t?with_setup_each(fun test_no_available_ranges/1),\n\t\t\t?with_setup_each(fun test_only_second_range_available/1),\n\t\t\t?with_setup_each(fun test_only_first_range_available_no_solutions/1),\n\t\t\t?with_setup_each(fun test_both_ranges_available_no_solutions/1),\n\t\t\t?with_setup_each(fun test_both_ranges_available_in_reverse_order_no_solutions/1),\n\t\t\t?with_setup_each(fun test_both_ranges_available_h1_solution/1),\n\t\t\t?with_setup_each(fun test_both_ranges_available_h2_solution/1)\n\t\t]\n\t}.\n\n%% This test checks the worker behavior when it does not have any available\n%% recall ranges for a VDF step.\ntest_no_available_ranges(State) ->\n\tH0 = <<\"H0\">>,\n\tRecallRange1Start = 100,\n\tRecallRange2Start = 200,\n\tCandidate1 = State#state.candidate,\n\t%% 1. Add compute_h0 task\n\t?debugMsg(\"Adding compute_h0 task\"),\n\tar_mining_worker:add_task(State#state.worker_pid, compute_h0, Candidate1),\n\t%% 2. 
Worker asks to compute H0\n\thandle_compute_h0(Candidate1#mining_candidate.step_number, H0, State),\n\t%% 3. Worker asks for recall ranges for the given H0\n\thandle_get_recall_range(H0, RecallRange1Start, RecallRange2Start, State),\n\t%% 4. Worker checks if recall ranges are readable\n\thandle_is_recall_range_readable(RecallRange1Start, false, State),\n\thandle_is_recall_range_readable(RecallRange2Start, false, State),\n\t%% 5. No more messages expected, recall ranges are not readable\n\tassert_no_messages().\n\n%% This test checks the worker behavior when it has only second recall range\n%% available for a VDF step.\ntest_only_second_range_available(State) ->\n\tH0 = <<\"H0\">>,\n\tRecallRange1Start = 100,\n\tRecallRange2Start = 200,\n\tCandidate1 = State#state.candidate,\n\t%% 1. Add compute_h0 task\n\t?debugMsg(\"Adding compute_h0 task\"),\n\tar_mining_worker:add_task(State#state.worker_pid, compute_h0, Candidate1),\n\t%% 2. Worker asks to compute H0\n\thandle_compute_h0(Candidate1#mining_candidate.step_number, H0, State),\n\t%% 3. Worker asks for recall ranges for the given H0\n\thandle_get_recall_range(H0, RecallRange1Start, RecallRange2Start, State),\n\t%% 4. Worker checks if recall ranges are readable\n\thandle_is_recall_range_readable(RecallRange1Start, false, State),\n\thandle_is_recall_range_readable(RecallRange2Start, true, State),\n\t%% 5. No more messages expected, only second recall range is readable\n\tassert_no_messages().\n\n%% This test checks the worker behavior when it has only first recall range\n%% available for a VDF step, but no valid solutions produced.\ntest_only_first_range_available_no_solutions(State) ->\n\tH0 = <<\"H0\">>,\n\tH1Prefix = <<\"H1-\">>,\n\tRecallRange1Start = 100,\n\tRecallRange2Start = 200,\n\tCandidate1 = State#state.candidate,\n\t%% 1. Add compute_h0 task\n\t?debugMsg(\"Adding compute_h0 task\"),\n\tar_mining_worker:add_task(State#state.worker_pid, compute_h0, Candidate1),\n\t%% 2. Worker asks to compute H0\n\thandle_compute_h0(Candidate1#mining_candidate.step_number, H0, State),\n\t%% 3. Worker asks for recall ranges for the given H0\n\thandle_get_recall_range(H0, RecallRange1Start, RecallRange2Start, State),\n\t%% 4. Worker checks if recall ranges are readable\n\thandle_is_recall_range_readable(RecallRange1Start, true, State),\n\thandle_is_recall_range_readable(RecallRange2Start, false, State),\n\t%% 5. Worker asks to read only first recall range\n\tCandidate2 = handle_read_recall_range(chunk1, RecallRange1Start, State),\n\t%% 6. Providing the first recall range\n\tRecallRange1 = generate_recall_range(RecallRange1Start, Candidate2#mining_candidate.packing_difficulty),\n\tar_mining_worker:chunks_read(State#state.worker_pid, chunk1, Candidate2, RecallRange1Start, RecallRange1),\n\t%% 7. Worker asks to compute H1s for the first recall range.\n\t%% The number of H1s is equal to the number of nonces in the recall range.\n\tH1s = generate_hashes_for_recall_range(H1Prefix, Candidate2#mining_candidate.packing_difficulty),\n\thandle_compute_h1s(H1s, State),\n\t%% 8. Worker checks if H1s pass the diff check, rejecting all of them\n\tHashes1Map = maps:from_list([{H1, false} || H1 <- H1s]),\n\thandle_passes_diff_checks(Hashes1Map, State),\n\t%% 9. 
No more messages expected\n\tassert_no_messages().\n\n%% This test checks the worker behavior when it has both recall ranges\n%% available for a VDF step, but no valid solutions produced.\ntest_both_ranges_available_no_solutions(State) ->\n\tH0 = <<\"H0\">>,\n\tH1Prefix = <<\"H1-\">>,\n\tH2Prefix = <<\"H2-\">>,\n\tRecallRange1Start = 100,\n\tRecallRange2Start = 200,\n\tCandidate1 = State#state.candidate,\n\t%% 1. Add compute_h0 task\n\t?debugMsg(\"Adding compute_h0 task\"),\n\tar_mining_worker:add_task(State#state.worker_pid, compute_h0, Candidate1),\n\t%% 2. Worker asks to compute H0\n\thandle_compute_h0(Candidate1#mining_candidate.step_number, H0, State),\n\t%% 3. Worker asks for recall ranges for the given H0\n\thandle_get_recall_range(H0, RecallRange1Start, RecallRange2Start, State),\n\t%% 4. Worker checks if recall ranges are readable\n\thandle_is_recall_range_readable(RecallRange1Start, true, State),\n\thandle_is_recall_range_readable(RecallRange2Start, true, State),\n\t%% 5. Worker asks to read both recall ranges\n\tCandidate2 = handle_read_recall_range(chunk1, RecallRange1Start, State),\n\tCandidate3 = handle_read_recall_range(chunk2, RecallRange2Start, State),\n\t%% 6. Providing the recall ranges\n\tRecallRange1 = generate_recall_range(RecallRange1Start, Candidate3#mining_candidate.packing_difficulty),\n\tRecallRange2 = generate_recall_range(RecallRange2Start, Candidate3#mining_candidate.packing_difficulty),\n\tar_mining_worker:chunks_read(State#state.worker_pid, chunk1, Candidate2, RecallRange1Start, RecallRange1),\n\tar_mining_worker:chunks_read(State#state.worker_pid, chunk2, Candidate3, RecallRange2Start, RecallRange2),\n\t%% 7. Worker asks to compute H1s for the recall ranges.\n\t%% The number of H1s is equal to the number of nonces in the recall range.\n\tH1s = generate_hashes_for_recall_range(H1Prefix, Candidate3#mining_candidate.packing_difficulty),\n\thandle_compute_h1s(H1s, State),\n\t%% 8. Worker checks if H1s pass the diff check, rejecting all of them\n\tHashes1Map = maps:from_list([{H1, false} || H1 <- H1s]),\n\thandle_passes_diff_checks(Hashes1Map, State),\n\t%% 9. Worker asks to compute H2s for the recall ranges.\n\t%% The number of H2s is equal to the number of nonces in the recall range.\n\tH2s = generate_hashes_for_recall_range(H2Prefix, Candidate3#mining_candidate.packing_difficulty),\n\thandle_compute_h2s(H2s, State),\n\t%% 10. Worker checks if H2s pass the diff check, rejecting all of them\n\tHashes2Map = maps:from_list([{H2, false} || H2 <- H2s]),\n\thandle_passes_diff_checks(Hashes2Map, State),\n\t%% 11. No more messages expected\n\tassert_no_messages().\n\n%% This test checks the worker behavior when it has both recall ranges\n%% available for a VDF step, but no valid solutions produced.\n%% In this test the second recall range is available first.\ntest_both_ranges_available_in_reverse_order_no_solutions(State) ->\n\tH0 = <<\"H0\">>,\n\tH1Prefix = <<\"H1-\">>,\n\tH2Prefix = <<\"H2-\">>,\n\tRecallRange1Start = 100,\n\tRecallRange2Start = 200,\n\tCandidate1 = State#state.candidate,\n\t%% 1. Add compute_h0 task\n\t?debugMsg(\"Adding compute_h0 task\"),\n\tar_mining_worker:add_task(State#state.worker_pid, compute_h0, Candidate1),\n\t%% 2. Worker asks to compute H0\n\thandle_compute_h0(Candidate1#mining_candidate.step_number, H0, State),\n\t%% 3. Worker asks for recall ranges for the given H0\n\thandle_get_recall_range(H0, RecallRange1Start, RecallRange2Start, State),\n\t%% 4. 
Worker checks if recall ranges are readable\n\thandle_is_recall_range_readable(RecallRange1Start, true, State),\n\thandle_is_recall_range_readable(RecallRange2Start, true, State),\n\t%% 5. Worker asks to read both recall ranges\n\tCandidate2 = handle_read_recall_range(chunk1, RecallRange1Start, State),\n\tCandidate3 = handle_read_recall_range(chunk2, RecallRange2Start, State),\n\t%% 6. Providing the recall ranges\n\tRecallRange1 = generate_recall_range(RecallRange1Start, Candidate3#mining_candidate.packing_difficulty),\n\tRecallRange2 = generate_recall_range(RecallRange2Start, Candidate3#mining_candidate.packing_difficulty),\n\tar_mining_worker:chunks_read(State#state.worker_pid, chunk2, Candidate3, RecallRange2Start, RecallRange2),\n\tar_mining_worker:chunks_read(State#state.worker_pid, chunk1, Candidate2, RecallRange1Start, RecallRange1),\n\t%% 7. Worker asks to compute H1s for the recall ranges.\n\t%% The number of H1s is equal to the number of nonces in the recall range.\n\tH1s = generate_hashes_for_recall_range(H1Prefix, Candidate3#mining_candidate.packing_difficulty),\n\thandle_compute_h1s(H1s, State),\n\t%% 8. Worker checks if H1s pass the diff check, rejecting all of them\n\tHashes1Map = maps:from_list([{H1, false} || H1 <- H1s]),\n\thandle_passes_diff_checks(Hashes1Map, State),\n\t%% 9. Worker asks to compute H2s for the recall ranges.\n\t%% The number of H2s is equal to the number of nonces in the recall range.\n\tH2s = generate_hashes_for_recall_range(H2Prefix, Candidate3#mining_candidate.packing_difficulty),\n\thandle_compute_h2s(H2s, State),\n\t%% 10. Worker checks if H2s pass the diff check, rejecting all of them\n\tHashes2Map = maps:from_list([{H2, false} || H2 <- H2s]),\n\thandle_passes_diff_checks(Hashes2Map, State),\n\t%% 11. No more messages expected\n\tassert_no_messages().\n\n%% This test checks the worker behavior when it has both recall ranges\n%% available for a VDF step, only one valid H1 solution is produced.\ntest_both_ranges_available_h1_solution(State) ->\n\tH0 = <<\"H0\">>,\n\tH1Prefix = <<\"H1-\">>,\n\tH2Prefix = <<\"H2-\">>,\n\tRecallRange1Start = 100,\n\tRecallRange2Start = 200,\n\tCandidate1 = State#state.candidate,\n\t%% 1. Add compute_h0 task\n\t?debugMsg(\"Adding compute_h0 task\"),\n\tar_mining_worker:add_task(State#state.worker_pid, compute_h0, Candidate1),\n\t%% 2. Worker asks to compute H0\n\thandle_compute_h0(Candidate1#mining_candidate.step_number, H0, State),\n\t%% 3. Worker asks for recall ranges for the given H0\n\thandle_get_recall_range(H0, RecallRange1Start, RecallRange2Start, State),\n\t%% 4. Worker checks if recall ranges are readable\n\thandle_is_recall_range_readable(RecallRange1Start, true, State),\n\thandle_is_recall_range_readable(RecallRange2Start, true, State),\n\t%% 5. Worker asks to read both recall ranges\n\tCandidate2 = handle_read_recall_range(chunk1, RecallRange1Start, State),\n\tCandidate3 = handle_read_recall_range(chunk2, RecallRange2Start, State),\n\t%% 6. Providing the recall ranges\n\tRecallRange1 = generate_recall_range(RecallRange1Start, Candidate3#mining_candidate.packing_difficulty),\n\tRecallRange2 = generate_recall_range(RecallRange2Start, Candidate3#mining_candidate.packing_difficulty),\n\tar_mining_worker:chunks_read(State#state.worker_pid, chunk1, Candidate2, RecallRange1Start, RecallRange1),\n\tar_mining_worker:chunks_read(State#state.worker_pid, chunk2, Candidate3, RecallRange2Start, RecallRange2),\n\t%% 7. 
Worker asks to compute H1s for the recall ranges.\n\t%% The number of H1s is equal to the number of nonces in the recall range.\n\t[ValidH1 | RestH1s] = H1s = generate_hashes_for_recall_range(H1Prefix, Candidate3#mining_candidate.packing_difficulty),\n\thandle_compute_h1s(H1s, State),\n\t%% 8. Worker checks if H1s pass the diff check, rejecting all of them but the very first one\n\tHashes1Map = maps:from_list([{ValidH1, true} | [{H1, false} || H1 <- RestH1s]]),\n\thandle_passes_diff_checks(Hashes1Map, State),\n\t%% 8.1. Worker posts the valid H1 solution\n\thandle_prepare_and_post_solution_h1(ValidH1),\n\t%% 9. Worker asks to compute H2s for the recall ranges.\n\t%% The number of H2s is equal to the number of nonces in the recall range.\n\t%% The very first H2 will never be called to be computed, because the\n\t%% corresponding H1 passes the difficulty check; therefore, we expect one less\n\t%% H2 to be computed.\n\t[_ | RestH2s] = _H2s = generate_hashes_for_recall_range(H2Prefix, Candidate3#mining_candidate.packing_difficulty),\n\thandle_compute_h2s(RestH2s, State),\n\t%% 10. Worker checks if H2s pass the diff check, rejecting all of them\n\tHashes2Map = maps:from_list([{H2, false} || H2 <- RestH2s]),\n\thandle_passes_diff_checks(Hashes2Map, State),\n\t%% 11. No more messages expected\n\tassert_no_messages().\n\n%% This test checks the worker behavior when it has both recall ranges\n%% available for a VDF step, only one valid H2 solution is produced.\ntest_both_ranges_available_h2_solution(State) ->\n\tH0 = <<\"H0\">>,\n\tH1Prefix = <<\"H1-\">>,\n\tH2Prefix = <<\"H2-\">>,\n\tRecallRange1Start = 100,\n\tRecallRange2Start = 200,\n\tCandidate1 = State#state.candidate,\n\t%% 1. Add compute_h0 task\n\t?debugMsg(\"Adding compute_h0 task\"),\n\tar_mining_worker:add_task(State#state.worker_pid, compute_h0, Candidate1),\n\t%% 2. Worker asks to compute H0\n\thandle_compute_h0(Candidate1#mining_candidate.step_number, H0, State),\n\t%% 3. Worker asks for recall ranges for the given H0\n\thandle_get_recall_range(H0, RecallRange1Start, RecallRange2Start, State),\n\t%% 4. Worker checks if recall ranges are readable\n\thandle_is_recall_range_readable(RecallRange1Start, true, State),\n\thandle_is_recall_range_readable(RecallRange2Start, true, State),\n\t%% 5. Worker asks to read both recall ranges\n\tCandidate2 = handle_read_recall_range(chunk1, RecallRange1Start, State),\n\tCandidate3 = handle_read_recall_range(chunk2, RecallRange2Start, State),\n\t%% 6. Providing the recall ranges\n\tRecallRange1 = generate_recall_range(RecallRange1Start, Candidate3#mining_candidate.packing_difficulty),\n\tRecallRange2 = generate_recall_range(RecallRange2Start, Candidate3#mining_candidate.packing_difficulty),\n\tar_mining_worker:chunks_read(State#state.worker_pid, chunk1, Candidate2, RecallRange1Start, RecallRange1),\n\tar_mining_worker:chunks_read(State#state.worker_pid, chunk2, Candidate3, RecallRange2Start, RecallRange2),\n\t%% 7. Worker asks to compute H1s for the recall ranges.\n\t%% The number of H1s is equal to the number of nonces in the recall range.\n\tH1s = generate_hashes_for_recall_range(H1Prefix, Candidate3#mining_candidate.packing_difficulty),\n\thandle_compute_h1s(H1s, State),\n\t%% 8. Worker checks if H1s pass the diff check, rejecting all of them but the very first one\n\tPasses1Map = maps:from_list([{H1, false} || H1 <- H1s]),\n\thandle_passes_diff_checks(Passes1Map, State),\n\t%% 9. 
Worker asks to compute H2s for the recall ranges.\n\t%% The number of H2s is equal to the number of nonces in the recall range.\n\t[ValidH2 | RestH2s] = H2s = generate_hashes_for_recall_range(H2Prefix, Candidate3#mining_candidate.packing_difficulty),\n\thandle_compute_h2s(H2s, State),\n\t%% 9.1. Worker checks if H2s pass the diff check, rejecting all of them but the very first one\n\tHashes2Map = maps:from_list([{ValidH2, true} | [{H2, false} || H2 <- RestH2s]]),\n\thandle_passes_diff_checks(Hashes2Map, State),\n\t%% 10. Worker posts the valid H2 solution\n\thandle_prepare_and_post_solution_h2(ValidH2),\n\t%% 11. No more messages expected\n\tassert_no_messages().\n"
  },
  {
    "path": "apps/arweave/test/ar_node_tests.erl",
    "content": "-module(ar_node_tests).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave/include/ar_pricing.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-import(ar_test_node, [sign_v1_tx/3, read_block_when_stored/1]).\n\nar_node_interface_test_() ->\n\t{timeout, 300, fun test_ar_node_interface/0}.\n\ntest_ar_node_interface() ->\n\t[B0] = ar_weave:init(),\n\tar_test_node:start(B0),\n\t?assertEqual(0, ar_node:get_height()),\n\t?assertEqual(B0#block.indep_hash, ar_node:get_current_block_hash()),\n\tar_test_node:mine(),\n\tB0H = B0#block.indep_hash,\n\t[{H, _, _}, {B0H, _, _}] = ar_test_node:wait_until_height(main, 1),\n\t?assertEqual(1, ar_node:get_height()),\n\t?assertEqual(H, ar_node:get_current_block_hash()).\n\nmining_reward_test_() ->\n\t{timeout, 120, fun test_mining_reward/0}.\n\ntest_mining_reward() ->\n\t{_Priv1, Pub1} = ar_wallet:new_keyfile(),\n\t[B0] = ar_weave:init(),\n\tar_test_node:start(B0, MiningAddr = ar_wallet:to_address(Pub1)),\n\tar_test_node:mine(),\n\tar_test_node:wait_until_height(main, 1),\n\tB1 = ar_node:get_current_block(),\n\t[{MiningAddr, _, Reward, 1}, _] = B1#block.reward_history,\n\t{_, TotalLocked} = lists:foldl(\n\t\tfun(Height, {PrevB, TotalLocked}) ->\n\t\t\t?assertEqual(0, ar_node:get_balance(Pub1)),\n\t\t\t?assertEqual(TotalLocked, ar_rewards:get_total_reward_for_address(MiningAddr, PrevB)),\n\t\t\tar_test_node:mine(),\n\t\t\tar_test_node:wait_until_height(main, Height + 1),\n\t\t\tB = ar_node:get_current_block(),\n\t\t\t{B, TotalLocked + B#block.reward}\n\t\tend,\n\t\t{B1, Reward},\n\t\tlists:seq(1, ?LOCKED_REWARDS_BLOCKS)\n\t),\n\t?assertEqual(Reward, ar_node:get_balance(Pub1)),\n\n\t%% Unlock one more reward.\n\tar_test_node:mine(),\n\tar_test_node:wait_until_height(main, ?LOCKED_REWARDS_BLOCKS + 2),\n\tFinalB = ar_node:get_current_block(),\n\t?assertEqual(Reward + 10, ar_node:get_balance(Pub1)),\n\t?assertEqual(\n\t\tTotalLocked - Reward - 10 + FinalB#block.reward,\n\t\tar_rewards:get_total_reward_for_address(MiningAddr, FinalB)).\n\n% @doc Check that other nodes accept a new block and associated mining reward.\nmulti_node_mining_reward_test_() ->\n\tar_test_node:test_with_mocked_functions([{ar_fork, height_2_6, fun() -> 0 end}],\n\t\tfun test_multi_node_mining_reward/0, 120).\n\ntest_multi_node_mining_reward() ->\n\t{_Priv1, Pub1} = ar_test_node:remote_call(peer1, ar_wallet, new_keyfile, []),\n\t[B0] = ar_weave:init(),\n\tar_test_node:start(B0),\n\tar_test_node:start_peer(peer1, B0, MiningAddr = ar_wallet:to_address(Pub1)),\n\tar_test_node:connect_to_peer(peer1),\n\tar_test_node:mine(peer1),\n\tar_test_node:wait_until_height(main, 1),\n\tB1 = ar_node:get_current_block(),\n\t[{MiningAddr, _, Reward, 1}, _] = B1#block.reward_history,\n\t?assertEqual(0, ar_node:get_balance(Pub1)),\n\tlists:foreach(\n\t\tfun(Height) ->\n\t\t\t?assertEqual(0, ar_node:get_balance(Pub1)),\n\t\t\tar_test_node:mine(),\n\t\t\tar_test_node:wait_until_height(main, Height + 1)\n\t\tend,\n\t\tlists:seq(1, ?LOCKED_REWARDS_BLOCKS)\n\t),\n\t?assertEqual(Reward, ar_node:get_balance(Pub1)).\n\n%% @doc Ensure that TX replay attack mitigation works.\nreplay_attack_test_() ->\n\t{timeout, 120, fun() ->\n\t\tKey1 = {_Priv1, Pub1} = ar_wallet:new(),\n\t\t{_Priv2, Pub2} = ar_wallet:new(),\n\t\t[B0] = ar_weave:init([{ar_wallet:to_address(Pub1), ?AR(10000), <<>>}]),\n\t\tar_test_node:start(B0),\n\t\tar_test_node:start_peer(peer1, B0),\n\t\tar_test_node:connect_to_peer(peer1),\n\t\tSignedTX = 
sign_v1_tx(main, Key1, #{ target => ar_wallet:to_address(Pub2),\n\t\t\t\tquantity => ?AR(1000), reward => ?AR(1), last_tx => <<>> }),\n\t\tar_test_node:assert_post_tx_to_peer(main, SignedTX),\n\t\tar_test_node:mine(),\n\t\tar_test_node:assert_wait_until_height(peer1, 1),\n\t\t?assertEqual(?AR(8999), ar_test_node:remote_call(peer1, ar_node, get_balance, [Pub1])),\n\t\t?assertEqual(?AR(1000), ar_test_node:remote_call(peer1, ar_node, get_balance, [Pub2])),\n\t\tar_events:send(tx, {ready_for_mining, SignedTX}),\n\t\tar_test_node:wait_until_receives_txs([SignedTX]),\n\t\tar_test_node:mine(),\n\t\tar_test_node:assert_wait_until_height(peer1, 2),\n\t\t?assertEqual(?AR(8999), ar_test_node:remote_call(peer1, ar_node, get_balance, [Pub1])),\n\t\t?assertEqual(?AR(1000), ar_test_node:remote_call(peer1, ar_node, get_balance, [Pub2]))\n\tend}.\n\n%% @doc Create two new wallets and a blockweave with a wallet balance.\n%% Create and verify execution of a signed exchange of value tx.\nwallet_transaction_test_() ->\n\tar_test_node:test_with_mocked_functions([{ar_fork, height_2_6, fun() -> 0 end}],\n\t\tfun test_wallet_transaction/0, 120).\n\ntest_wallet_transaction() ->\n\tTestWalletTransaction = fun(KeyType) ->\n\t\tfun() ->\n\t\t\t{Priv1, Pub1} = ar_wallet:new_keyfile(KeyType),\n\t\t\t{_Priv2, Pub2} = ar_wallet:new(),\n\t\t\tTX = ar_tx:new(ar_wallet:to_address(Pub2), ?AR(1), ?AR(9000), <<>>),\n\t\t\tSignedTX = ar_tx:sign(TX#tx{ format = 2 }, Priv1, Pub1),\n\t\t\t[B0] = ar_weave:init([{ar_wallet:to_address(Pub1), ?AR(10000), <<>>}]),\n\t\t\tar_test_node:start(B0, ar_wallet:to_address(ar_wallet:new_keyfile({eddsa, ed25519}))),\n\t\t\tar_test_node:start_peer(peer1, B0),\n\t\t\tar_test_node:connect_to_peer(peer1),\n\t\t\tar_test_node:assert_post_tx_to_peer(main, SignedTX),\n\t\t\tar_test_node:mine(),\n\t\t\tar_test_node:wait_until_height(main, 1),\n\t\t\tar_test_node:assert_wait_until_height(peer1, 1),\n\t\t\t?assertEqual(?AR(999), ar_test_node:remote_call(peer1, ar_node, get_balance, [Pub1])),\n\t\t\t?assertEqual(?AR(9000), ar_test_node:remote_call(peer1, ar_node, get_balance, [Pub2]))\n\t\tend\n\tend,\n\t[\n\t\t{\"PS256_65537\", timeout, 60, TestWalletTransaction({?RSA_SIGN_ALG, 65537})},\n\t\t{\"ES256K\", timeout, 60, TestWalletTransaction({?ECDSA_SIGN_ALG, secp256k1})},\n\t\t{\"Ed25519\", timeout, 60, TestWalletTransaction({?EDDSA_SIGN_ALG, ed25519})}\n\t].\n\n%% @doc Ensure that TX Id threading functions correctly (in the positive case).\ntx_threading_test_() ->\n\t{timeout, 120, fun() ->\n\t\tKey1 = {_Priv1, Pub1} = ar_wallet:new(),\n\t\t{_Priv2, Pub2} = ar_wallet:new(),\n\t\t[B0] = ar_weave:init([{ar_wallet:to_address(Pub1), ?AR(10000), <<>>}]),\n\t\tar_test_node:start(B0),\n\t\tar_test_node:start_peer(peer1, B0),\n\t\tar_test_node:connect_to_peer(peer1),\n\t\tSignedTX = sign_v1_tx(main, Key1, #{ target => ar_wallet:to_address(Pub2),\n\t\t\t\tquantity => ?AR(1000), reward => ?AR(1), last_tx => <<>> }),\n\t\tSignedTX2 = sign_v1_tx(main, Key1, #{ target => ar_wallet:to_address(Pub2),\n\t\t\t\tquantity => ?AR(1000), reward => ?AR(1), last_tx => SignedTX#tx.id }),\n\t\tar_test_node:assert_post_tx_to_peer(main, SignedTX),\n\t\tar_test_node:mine(),\n\t\tar_test_node:wait_until_height(main, 1),\n\t\tar_test_node:assert_post_tx_to_peer(main, SignedTX2),\n\t\tar_test_node:mine(),\n\t\tar_test_node:assert_wait_until_height(peer1, 2),\n\t\t?assertEqual(?AR(7998), ar_test_node:remote_call(peer1, ar_node, get_balance, [Pub1])),\n\t\t?assertEqual(?AR(2000), ar_test_node:remote_call(peer1, ar_node, get_balance, 
[Pub2]))\n\tend}.\n\npersisted_mempool_test_() ->\n\t%% Make the propagation delay noticeable so that the submitted transactions do not\n\t%% become ready for mining before the node is restarted and we assert that waiting\n\t%% transactions found in the persisted mempool are (re-)submitted to peers.\n\tar_test_node:test_with_mocked_functions([{ar_node_worker, calculate_delay,\n\t\t\tfun(_Size) -> 5000 end}], fun test_persisted_mempool/0).\n\ntest_persisted_mempool() ->\n\t{_, Pub} = Wallet = ar_wallet:new(),\n\t[B0] = ar_weave:init([{ar_wallet:to_address(Pub), ?AR(10000), <<>>}]),\n\tar_test_node:start(B0),\n\tar_test_node:start_peer(peer1, B0),\n\tar_test_node:disconnect_from(peer1),\n\tSignedTX = ar_test_node:sign_tx(Wallet, #{ last_tx => ar_test_node:get_tx_anchor(main) }),\n\t{ok, {{<<\"200\">>, _}, _, <<\"OK\">>, _, _}} = ar_test_node:post_tx_to_peer(main, SignedTX, false),\n\ttrue = ar_util:do_until(\n\t\tfun() ->\n\t\t\tmaps:is_key(SignedTX#tx.id, ar_mempool:get_map())\n\t\tend,\n\t\t100,\n\t\t30000\n\t),\n\tConfig = ar_test_node:stop(),\n\ttry\n\t\t%% Rejoin the network.\n\t\t%% Expect the pending transactions to be picked up and distributed.\n\t\tok = arweave_config:set_env(Config#config{\n\t\t\tstart_from_latest_state = false,\n\t\t\tpeers = [ar_test_node:peer_ip(peer1)]\n\t\t}),\n\t\tar:start_dependencies(),\n\t\tar_test_node:wait_until_joined(),\n\t\tar_test_node:connect_to_peer(peer1),\n\t\tar_test_node:assert_wait_until_receives_txs(peer1, [SignedTX]),\n\t\tar_test_node:mine(),\n\t\t[{H, _, _} | _] = ar_test_node:assert_wait_until_height(peer1, 1),\n\t\tB = read_block_when_stored(H),\n\t\t?assertEqual([SignedTX#tx.id], B#block.txs)\n\tafter\n\t\tok = arweave_config:set_env(Config)\n\tend.\n"
  },
  {
    "path": "apps/arweave/test/ar_nonce_limiter_tests.erl",
    "content": "-module(ar_nonce_limiter_tests).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n\n%% @doc Reset the state and stop computing steps automatically. Used in tests.\nreset_and_pause() ->\n\tgen_server:cast(ar_nonce_limiter, reset_and_pause).\n\n%% @doc Do not emit the initialized event. Used in tests.\nturn_off_initialized_event() ->\n\tgen_server:cast(ar_nonce_limiter, turn_off_initialized_event).\n\n%% @doc Get all steps starting from the latest on the current tip. Used in tests.\nget_steps() ->\n\tgen_server:call(ar_nonce_limiter, get_steps).\n\n%% @doc Compute a single step. Used in tests.\nstep() ->\n\tSelf = self(),\n\tspawn(\n\t\tfun() ->\n\t\t\tok = ar_events:subscribe(nonce_limiter),\n\t\t\tgen_server:cast(ar_nonce_limiter, compute_step),\n\t\t\treceive\n\t\t\t\t{event, nonce_limiter, {computed_output, _}} ->\n\t\t\t\t\tSelf ! done\n\t\t\tend\n\t\tend\n\t),\n\treceive\n\t\tdone ->\n\t\t\tok\n\tend.\n\n\nassert_session(B, PrevB) ->\n\t%% vdf_diffic ulty and next_vdf_difficulty in cached VDF sessions should be\n\t%% updated whenever a new block is validated.\n\t#nonce_limiter_info{\n\t\tvdf_difficulty = PrevBVDFDifficulty, next_vdf_difficulty = PrevBNextVDFDifficulty\n\t} = PrevB#block.nonce_limiter_info,\n\t#nonce_limiter_info{\n\t\tvdf_difficulty = BVDFDifficulty, next_vdf_difficulty = BNextVDFDifficulty\n\t} = B#block.nonce_limiter_info,\n\tPrevBSessionKey = ar_nonce_limiter:session_key(PrevB#block.nonce_limiter_info),\n\tBSessionKey = ar_nonce_limiter:session_key(B#block.nonce_limiter_info),\n\n\tBSession = ar_nonce_limiter:get_session(BSessionKey),\n\t?assertEqual(BVDFDifficulty, BSession#vdf_session.vdf_difficulty),\n\t?assertEqual(BNextVDFDifficulty,\n\t\tBSession#vdf_session.next_vdf_difficulty),\n\tcase PrevBSessionKey == BSessionKey of\n\t\ttrue ->\n\t\t\tok;\n\t\tfalse ->\n\t\t\tPrevBSession = ar_nonce_limiter:get_session(PrevBSessionKey),\n\t\t\t?assertEqual(PrevBVDFDifficulty,\n\t\t\t\tPrevBSession#vdf_session.vdf_difficulty),\n\t\t\t?assertEqual(PrevBNextVDFDifficulty,\n\t\t\t\tPrevBSession#vdf_session.next_vdf_difficulty)\n\tend.\n\nassert_validate(B, PrevB, ExpectedResult) ->\n\tar_nonce_limiter:request_validation(B#block.indep_hash, B#block.nonce_limiter_info,\n\t\t\tPrevB#block.nonce_limiter_info),\n\tBH = B#block.indep_hash,\n\treceive\n\t\t{event, nonce_limiter, {valid, BH}} ->\n\t\t\tcase ExpectedResult of\n\t\t\t\tvalid ->\n\t\t\t\t\tassert_session(B, PrevB),\n\t\t\t\t\tok;\n\t\t\t\t_ ->\n\t\t\t\t\t?assert(false, iolist_to_binary(io_lib:format(\"Unexpected \"\n\t\t\t\t\t\t\t\"validation success. Expected: ~p.\", [ExpectedResult])))\n\t\t\tend;\n\t\t{event, nonce_limiter, {invalid, BH, Code}} ->\n\t\t\tcase ExpectedResult of\n\t\t\t\t{invalid, Code} ->\n\t\t\t\t\tok;\n\t\t\t\t_ ->\n\t\t\t\t\t?assert(false, iolist_to_binary(io_lib:format(\"Unexpected \"\n\t\t\t\t\t\t\t\"validation failure: ~p. 
Expected: ~p.\",\n\t\t\t\t\t\t\t[Code, ExpectedResult])))\n\t\t\tend\n\tafter 2000 ->\n\t\t?assert(false, \"Validation timeout.\")\n\tend.\n\nassert_step_number(N) ->\n\ttimer:sleep(200),\n\t?assert(ar_util:do_until(fun() -> ar_nonce_limiter:get_current_step_number() == N end, 100, 1000)).\n\ntest_block(StepNumber, Output, Seed, NextSeed, LastStepCheckpoints, Steps,\n\t\tVDFDifficulty, NextVDFDifficulty) ->\n\t#block{ indep_hash = crypto:strong_rand_bytes(48),\n\t\t\tnonce_limiter_info = #nonce_limiter_info{ output = Output,\n\t\t\t\t\tglobal_step_number = StepNumber, seed = Seed, next_seed = NextSeed,\n\t\t\t\t\tlast_step_checkpoints = LastStepCheckpoints, steps = Steps,\n\t\t\t\t\tvdf_difficulty = VDFDifficulty, next_vdf_difficulty = NextVDFDifficulty }\n\t}.\n\t\nmock_reset_frequency() ->\n\t{ar_nonce_limiter, get_reset_frequency, fun() -> 5 end}.\n\napplies_validated_steps_test_() ->\n\tar_test_node:test_with_mocked_functions([mock_reset_frequency()],\n\t\tfun test_applies_validated_steps/0, 60).\n\ntest_applies_validated_steps() ->\n\treset_and_pause(),\n\tSeed = crypto:strong_rand_bytes(48),\n\tNextSeed = crypto:strong_rand_bytes(48),\n\tNextSeed2 = crypto:strong_rand_bytes(48),\n\tInitialOutput = crypto:strong_rand_bytes(32),\n\tB1VDFDifficulty = 3,\n\tB1NextVDFDifficulty = 3,\n\tB1 = test_block(1, InitialOutput, Seed, NextSeed, [], [],\n\t\t\tB1VDFDifficulty, B1NextVDFDifficulty),\n\tturn_off_initialized_event(),\n\tar_nonce_limiter:account_tree_initialized([B1]),\n\ttrue = ar_util:do_until(fun() -> ar_nonce_limiter:get_current_step_number() == 1 end, 100, 1000),\n\tassert_session(B1, B1),\n\t{ok, Output2, _} = ar_nonce_limiter:compute(2, InitialOutput, B1VDFDifficulty),\n\tB2VDFDifficulty = 3,\n\tB2NextVDFDifficulty = 4,\n\tB2 = test_block(2, Output2, Seed, NextSeed, [], [Output2], \n\t\t\tB2VDFDifficulty, B2NextVDFDifficulty),\n\tok = ar_events:subscribe(nonce_limiter),\n\tassert_validate(B2, B1, valid),\n\tassert_validate(B2, B1, valid),\n\tassert_validate(B2#block{ nonce_limiter_info = #nonce_limiter_info{} }, B1, {invalid, 1}),\n\tN2 = B2#block.nonce_limiter_info,\n\tassert_validate(B2#block{ nonce_limiter_info = N2#nonce_limiter_info{ steps = [] } },\n\t\t\tB1, {invalid, 4}),\n\tassert_validate(B2#block{\n\t\t\tnonce_limiter_info = N2#nonce_limiter_info{ steps = [Output2, Output2] } },\n\t\t\tB1, {invalid, 2}),\n\tassert_step_number(2),\n\t[step() || _ <- lists:seq(1, 3)],\n\tassert_step_number(5),\n\tar_events:send(node_state, {new_tip, B2, B1}),\n\t%% We have just applied B2 with a VDF difficulty update => a new session has to be opened.\n\tassert_step_number(2),\n\tassert_session(B2, B1),\n\t{ok, Output3, _} = ar_nonce_limiter:compute(3, Output2, B2VDFDifficulty),\n\t{ok, Output4, _} = ar_nonce_limiter:compute(4, Output3, B2VDFDifficulty),\n\tB3VDFDifficulty = 3,\n\tB3NextVDFDifficulty = 4,\n\tB3 = test_block(4, Output4, Seed, NextSeed, [], [Output4, Output3],\n\t\t\tB3VDFDifficulty, B3NextVDFDifficulty),\n\tassert_validate(B3, B2, valid),\n\tassert_validate(B3, B1, valid),\n\t%% Entropy reset line crossed at step 5, add entropy and apply next_vdf_difficulty\n\t{ok, Output5, _} = ar_nonce_limiter:compute(5, ar_nonce_limiter:mix_seed(Output4, NextSeed), B3NextVDFDifficulty),\n\tB4VDFDifficulty = 4,\n\tB4NextVDFDifficulty = 5,\n\tB4 = test_block(5, Output5, NextSeed, NextSeed2, [], [Output5],\n\t\t\tB4VDFDifficulty, B4NextVDFDifficulty),\n\t[step() || _ <- lists:seq(1, 6)],\n\tassert_step_number(10),\n\tassert_validate(B4, B3, valid),\n\tar_events:send(node_state, {new_tip, 
B4, B3}),\n\tassert_step_number(9),\n\tassert_session(B4, B3),\n\tassert_validate(B4, B4, {invalid, 1}),\n\t% % 5, 6, 7, 8, 9, 10\n\tB5VDFDifficulty = 5,\n\tB5NextVDFDifficulty = 6,\n\tB5 = test_block(10, <<>>, NextSeed, NextSeed2, [], [<<>>],\n\t\t\tB5VDFDifficulty, B5NextVDFDifficulty),\n\tassert_validate(B5, B4, {invalid, 3}),\n\tB6VDFDifficulty = 5,\n\tB6NextVDFDifficulty = 6,\n\tB6 = test_block(10, <<>>, NextSeed, NextSeed2, [],\n\t\t\t% Steps 10, 9, 8, 7, 6.\n\t\t\t[<<>> | lists:sublist(get_steps(), 4)],\n\t\t\tB6VDFDifficulty, B6NextVDFDifficulty),\n\tassert_validate(B6, B4, {invalid, 3}),\n\tInvalid = crypto:strong_rand_bytes(32),\n\tB7VDFDifficulty = 5,\n\tB7NextVDFDifficulty = 6,\n\tB7 = test_block(10, Invalid, NextSeed, NextSeed2, [],\n\t\t\t% Steps 10, 9, 8, 7, 6.\n\t\t\t[Invalid | lists:sublist(get_steps(), 4)],\n\t\t\tB7VDFDifficulty, B7NextVDFDifficulty),\n\tassert_validate(B7, B4, {invalid, 3}),\n\t%% Last valid block was B4, so that's the vdf_difficulty to use (not next_vdf_difficulty cause\n\t%% the next entropy reset line isn't until step 10)\n\t{ok, Output6, _} = ar_nonce_limiter:compute(6, Output5, B4VDFDifficulty),\n\t{ok, Output7, _} = ar_nonce_limiter:compute(7, Output6, B4VDFDifficulty),\n\t{ok, Output8, _} = ar_nonce_limiter:compute(8, Output7, B4VDFDifficulty),\n\tB8VDFDifficulty = 4,\n\t%% Change the next_vdf_difficulty to confirm that apply_tip2 handles updating an\n\t%% existing VDF session\n\tB8NextVDFDifficulty = 6, \n\tB8 = test_block(8, Output8, NextSeed, NextSeed2, [], [Output8, Output7, Output6],\n\t\t\tB8VDFDifficulty, B8NextVDFDifficulty),\n\tar_events:send(node_state, {new_tip, B8, B4}),\n\ttimer:sleep(1000),\n\tassert_session(B8, B4),\n\tassert_validate(B8, B4, valid),\n\tok.\n"
  },
  {
    "path": "apps/arweave/test/ar_packing_tests.erl",
    "content": "-module(ar_packing_tests).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave/include/ar_consensus.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-define(CHUNK_OFFSET, 10*256*1024).\n-define(ENCODED_TX_ROOT, <<\"9d857DmXbSyhX6bgF7CDMDCl0f__RUjryMMvueFN9wE\">>).\n-define(REQUEST_REPACK_TIMEOUT, 50_000).\n-define(REQUEST_UNPACK_TIMEOUT, 50_000).\n\n% request_test() ->\n% \tRewardAddress = ar_test_node:load_fixture(\"ar_packing_tests/address.bin\"),\n\n% \t[B0] = ar_weave:init(),\n% \tar_test_node:start(B0, RewardAddress),\n\n% \ttest_full_chunk(),\n% \ttest_partial_chunk(),\n% \ttest_full_chunk_repack(),\n% \ttest_partial_chunk_repack(),\n% \ttest_invalid_pad(),\n% \ttest_request_repack(RewardAddress),\n% \ttest_request_unpack(RewardAddress).\n\npacking_test_() ->\n    {setup, \n     fun setup/0, \n     fun teardown/1, \n     [fun test_feistel/0,\n      fun test_full_chunk/0,\n      fun test_partial_chunk/0,\n      fun test_full_chunk_repack/0,\n      fun test_partial_chunk_repack/0,\n      fun test_invalid_pad/0,\n      fun test_request_repack/0,\n      fun test_request_unpack/0]}.\n\nsetup() ->\n    RewardAddress = ar_test_node:load_fixture(\"ar_packing_tests/address.bin\"),\n    [B0] = ar_weave:init(),\n    ar_test_node:start(B0, RewardAddress),\n    RewardAddress.\n\nteardown(_) ->\n    % optional cleanup code\n    ok.\n\ntest_feistel()->\n\tUnpacked = << 1:(8*2097152) >>,\n\tEntropy = << 2:(8*2097152) >>,\n\t{ok, Packed} = ar_rxsquared_nif:rsp_feistel_encrypt_nif(Unpacked, Entropy),\n\tPackedHashReal = crypto:hash(sha256, Packed),\n\tPackedHashExpd = << 73,123,99,202,146,24,95,220,127,228,210,8,106,220,94,\n\t\t251,234,166,63,206,16,213,64,208,35,104,15,144,215,\n\t\t139,183,59 >>,\n\t?assertEqual(PackedHashExpd, PackedHashReal),\n\t{ok, UnpackedReal} = ar_rxsquared_nif:rsp_feistel_decrypt_nif(Packed, Entropy),\n\t?assertEqual(Unpacked, UnpackedReal),\n\n\tUnpacked2 = << 3:(8*2097152) >>,\n\tEntropy2 = << 4:(8*2097152) >>,\n\t{ok, Packed2} = ar_rxsquared_nif:rsp_feistel_encrypt_nif(Unpacked2, Entropy2),\n\tPackedHashReal2 = crypto:hash(sha256, Packed2),\n\tPackedHashExpd2 = << 226,95,254,246,118,154,133,215,229,243,245,255,18,48,\n\t\t130,246,98,240,207,197,188,161,222,66,140,47,110,18,\n\t\t193,145,96,210 >>,\n\t?assertEqual(PackedHashExpd2, PackedHashReal2),\n\t{ok, UnpackedReal2} = ar_rxsquared_nif:rsp_feistel_decrypt_nif(Packed2, Entropy2),\n\t?assertEqual(Unpacked2, UnpackedReal2),\n\tok.\n\ntest_full_chunk() ->\n\tUnpackedData = ar_test_node:load_fixture(\"ar_packing_tests/unpacked.256kb\"),\n\tSpora25Data = ar_test_node:load_fixture(\"ar_packing_tests/spora25.256kb\"),\n\tSpora26Data = ar_test_node:load_fixture(\"ar_packing_tests/spora26.256kb\"),\n\n\tChunkSize = 256*1024,\n\tTXRoot = ar_util:decode(?ENCODED_TX_ROOT),\n\tRewardAddress = ar_test_node:load_fixture(\"ar_packing_tests/address.bin\"),\n\n\t?assertEqual(\n\t\t{ok, UnpackedData},\n\t\tar_packing_server:pack(\n\t\t\tunpacked, ?CHUNK_OFFSET, TXRoot, UnpackedData)),\n\t?assertEqual(\n\t\t{ok, Spora25Data},\n\t\tar_packing_server:pack(\n\t\t\tspora_2_5, ?CHUNK_OFFSET, TXRoot, UnpackedData)),\n\t?assertEqual(\n\t\t{ok, Spora26Data},\n\t\tar_packing_server:pack(\n\t\t\t{spora_2_6, RewardAddress}, ?CHUNK_OFFSET, TXRoot, UnpackedData)),\n\n\t?assertEqual(\n\t\t{ok, UnpackedData},\n\t\tar_packing_server:unpack(\n\t\t\tunpacked, ?CHUNK_OFFSET, TXRoot, UnpackedData, ChunkSize)),\n\t?assertEqual(\n\t\t{ok, UnpackedData},\n\t\tar_packing_server:unpack(\n\t\t\tspora_2_5, ?CHUNK_OFFSET, 
TXRoot, Spora25Data, ChunkSize)),\n\t?assertEqual(\n\t\t{ok, UnpackedData},\n\t\tar_packing_server:unpack(\n\t\t\t{spora_2_6, RewardAddress}, ?CHUNK_OFFSET, TXRoot, Spora26Data, ChunkSize)).\n\ntest_partial_chunk() ->\n\tUnpackedData = ar_test_node:load_fixture(\"ar_packing_tests/unpacked.100kb\"),\n\tSpora25Data = ar_test_node:load_fixture(\"ar_packing_tests/spora25.100kb\"),\n\tSpora26Data = ar_test_node:load_fixture(\"ar_packing_tests/spora26.100kb\"),\n\n\tChunkSize = 100*1024,\n\tTXRoot = ar_util:decode(?ENCODED_TX_ROOT),\n\tRewardAddress = ar_test_node:load_fixture(\"ar_packing_tests/address.bin\"),\n\n\t?assertEqual(\n\t\t{ok, UnpackedData},\n\t\tar_packing_server:pack(\n\t\t\tunpacked, ?CHUNK_OFFSET, TXRoot, UnpackedData)),\n\t?assertEqual(\n\t\t{ok, Spora25Data},\n\t\tar_packing_server:pack(\n\t\t\tspora_2_5, ?CHUNK_OFFSET, TXRoot, UnpackedData)),\n\t?assertEqual(\n\t\t{ok, Spora26Data},\n\t\tar_packing_server:pack(\n\t\t\t{spora_2_6, RewardAddress}, ?CHUNK_OFFSET, TXRoot, UnpackedData)),\n\n\t?assertEqual(\n\t\t{ok, UnpackedData},\n\t\tar_packing_server:unpack(\n\t\t\tunpacked, ?CHUNK_OFFSET, TXRoot, UnpackedData, ChunkSize)),\n\t?assertEqual(\n\t\t{ok, UnpackedData},\n\t\tar_packing_server:unpack(\n\t\t\tspora_2_5, ?CHUNK_OFFSET, TXRoot, Spora25Data, ChunkSize)),\n\t?assertEqual(\n\t\t{ok, UnpackedData},\n\t\tar_packing_server:unpack(\n\t\t\t{spora_2_6, RewardAddress}, ?CHUNK_OFFSET, TXRoot, Spora26Data, ChunkSize)).\n\ntest_full_chunk_repack() ->\n\tUnpackedData = ar_test_node:load_fixture(\"ar_packing_tests/unpacked.256kb\"),\n\tSpora25Data = ar_test_node:load_fixture(\"ar_packing_tests/spora25.256kb\"),\n\tSpora26Data = ar_test_node:load_fixture(\"ar_packing_tests/spora26.256kb\"),\n\n\tChunkSize = 256*1024,\n\tTXRoot = ar_util:decode(?ENCODED_TX_ROOT),\n\tRewardAddress = ar_test_node:load_fixture(\"ar_packing_tests/address.bin\"),\n\n\t?assertEqual(\n\t\t{ok, UnpackedData, UnpackedData},\n\t\tar_packing_server:repack(unpacked, unpacked,\n\t\t\t?CHUNK_OFFSET, TXRoot, UnpackedData, ChunkSize)),\n\t?assertEqual(\n\t\t{ok, Spora25Data, UnpackedData},\n\t\tar_packing_server:repack(spora_2_5, unpacked,\n\t\t\t?CHUNK_OFFSET, TXRoot, UnpackedData, ChunkSize)),\n\t?assertEqual(\n\t\t{ok, Spora26Data, UnpackedData},\n\t\tar_packing_server:repack({spora_2_6, RewardAddress}, unpacked,\n\t\t\t?CHUNK_OFFSET, TXRoot, UnpackedData, ChunkSize)),\n\n\t?assertEqual(\n\t\t{ok, UnpackedData, UnpackedData},\n\t\tar_packing_server:repack(unpacked, spora_2_5,\n\t\t\t?CHUNK_OFFSET, TXRoot, Spora25Data, ChunkSize)),\n\t?assertEqual(\n\t\t{ok, Spora25Data, none},\n\t\tar_packing_server:repack(spora_2_5, spora_2_5,\n\t\t\t?CHUNK_OFFSET, TXRoot, Spora25Data, ChunkSize)),\n\t?assertEqual(\n\t\t{ok, Spora26Data, UnpackedData},\n\t\tar_packing_server:repack({spora_2_6, RewardAddress}, spora_2_5,\n\t\t\t?CHUNK_OFFSET, TXRoot, Spora25Data, ChunkSize)),\n\n\t?assertEqual(\n\t\t{ok, UnpackedData, UnpackedData},\n\t\tar_packing_server:repack(unpacked, {spora_2_6, RewardAddress},\n\t\t\t?CHUNK_OFFSET, TXRoot, Spora26Data, ChunkSize)),\n\t?assertEqual(\n\t\t{ok, Spora25Data, UnpackedData},\n\t\tar_packing_server:repack(spora_2_5, {spora_2_6, RewardAddress},\n\t\t\t?CHUNK_OFFSET, TXRoot, Spora26Data, ChunkSize)),\n\t?assertEqual(\n\t\t{ok, Spora26Data, none},\n\t\tar_packing_server:repack({spora_2_6, RewardAddress}, {spora_2_6, RewardAddress},\n\t\t\t?CHUNK_OFFSET, TXRoot, Spora26Data, ChunkSize)).\n\ntest_partial_chunk_repack() ->\n\tUnpackedData = 
ar_test_node:load_fixture(\"ar_packing_tests/unpacked.100kb\"),\n\tSpora25Data = ar_test_node:load_fixture(\"ar_packing_tests/spora25.100kb\"),\n\tSpora26Data = ar_test_node:load_fixture(\"ar_packing_tests/spora26.100kb\"),\n\n\tChunkSize = 100*1024,\n\tTXRoot = ar_util:decode(?ENCODED_TX_ROOT),\n\tRewardAddress = ar_test_node:load_fixture(\"ar_packing_tests/address.bin\"),\n\n\t?assertEqual(\n\t\t{ok, UnpackedData, UnpackedData},\n\t\tar_packing_server:repack(unpacked, unpacked,\n\t\t\t?CHUNK_OFFSET, TXRoot, UnpackedData, ChunkSize)),\n\t?assertEqual(\n\t\t{ok, Spora25Data, UnpackedData},\n\t\tar_packing_server:repack(spora_2_5, unpacked,\n\t\t\t?CHUNK_OFFSET, TXRoot, UnpackedData, ChunkSize)),\n\t?assertEqual(\n\t\t{ok, Spora26Data, UnpackedData},\n\t\tar_packing_server:repack({spora_2_6, RewardAddress}, unpacked,\n\t\t\t?CHUNK_OFFSET, TXRoot, UnpackedData, ChunkSize)),\n\n\t?assertEqual(\n\t\t{ok, UnpackedData, UnpackedData},\n\t\tar_packing_server:repack(unpacked, spora_2_5,\n\t\t\t?CHUNK_OFFSET, TXRoot, Spora25Data, ChunkSize)),\n\t?assertEqual(\n\t\t{ok, Spora25Data, none},\n\t\tar_packing_server:repack(spora_2_5, spora_2_5,\n\t\t\t?CHUNK_OFFSET, TXRoot, Spora25Data, ChunkSize)),\n\t?assertEqual(\n\t\t{ok, Spora26Data, UnpackedData},\n\t\tar_packing_server:repack({spora_2_6, RewardAddress}, spora_2_5,\n\t\t\t?CHUNK_OFFSET, TXRoot, Spora25Data, ChunkSize)),\n\t?assertEqual(\n\t\t{ok, UnpackedData, UnpackedData},\n\t\tar_packing_server:repack(unpacked, {spora_2_6, RewardAddress},\n\t\t\t?CHUNK_OFFSET, TXRoot, Spora26Data, ChunkSize)),\n\t?assertEqual(\n\t\t{ok, Spora25Data, UnpackedData},\n\t\tar_packing_server:repack(spora_2_5, {spora_2_6, RewardAddress},\n\t\t\t?CHUNK_OFFSET, TXRoot, Spora26Data, ChunkSize)),\n\t?assertEqual(\n\t\t{ok, Spora26Data, none},\n\t\tar_packing_server:repack({spora_2_6, RewardAddress}, {spora_2_6, RewardAddress},\n\t\t\t?CHUNK_OFFSET, TXRoot, Spora26Data, ChunkSize)).\n\ntest_invalid_pad() ->\n\tChunkSize = 100*1024,\n\n\tUnpackedData = ar_test_node:load_fixture(\"ar_packing_tests/unpacked.256kb\"),\n\tSpora25Data = ar_test_node:load_fixture(\"ar_packing_tests/spora25.256kb\"),\n\tSpora26Data = ar_test_node:load_fixture(\"ar_packing_tests/spora26.256kb\"),\n\n\tShortUnpackedData = binary:part(UnpackedData, 0, ChunkSize),\n\n\tTXRoot = ar_util:decode(?ENCODED_TX_ROOT),\n\tRewardAddress = ar_test_node:load_fixture(\"ar_packing_tests/address.bin\"),\n\n\t?assertEqual(\n\t\t{ok, ShortUnpackedData},\n\t\tar_packing_server:unpack(\n\t\t\tspora_2_5, ?CHUNK_OFFSET, TXRoot, Spora25Data, ChunkSize),\n\t\t\"We don't check the pad when unpacking SPoRA 2.5\"),\n\t?assertEqual(\n\t\t{error, invalid_padding},\n\t\tar_packing_server:unpack(\n\t\t\t{spora_2_6, RewardAddress}, ?CHUNK_OFFSET, TXRoot, Spora26Data, ChunkSize),\n\t\t\t\"We do check the pad when unpacking SPoRA 2.6\"),\n\t?assertEqual(\n\t\t{ok, ShortUnpackedData, ShortUnpackedData},\n\t\tar_packing_server:repack(\n\t\t\tunpacked, spora_2_5, ?CHUNK_OFFSET, TXRoot, Spora25Data, ChunkSize),\n\t\t\t\"We don't check the pad when repacking from SPoRA 2.5\"),\n\t?assertMatch(\n\t\t{ok, _, ShortUnpackedData},\n\t\tar_packing_server:repack(\n\t\t\t{spora_2_6, RewardAddress}, spora_2_5, ?CHUNK_OFFSET, TXRoot, Spora25Data, ChunkSize),\n\t\t\t\"We don't check the pad when repacking from SPoRA 2.5\"),\n\t?assertEqual(\n\t\t{error, invalid_padding},\n\t\tar_packing_server:repack(\n\t\t\tunpacked, {spora_2_6, RewardAddress}, ?CHUNK_OFFSET, TXRoot, Spora25Data, ChunkSize),\n\t\t\t\"We do check the pad when repacking from SPoRA 
2.6\"),\n\t?assertMatch(\n\t\t{error, invalid_padding},\n\t\tar_packing_server:repack(\n\t\t\tspora_2_5, {spora_2_6, RewardAddress}, ?CHUNK_OFFSET, TXRoot, Spora25Data, ChunkSize),\n\t\t\t\"We do check the pad when repacking from SPoRA 2.6\").\n\ntest_request_repack() ->\n\tUnpackedData = ar_test_node:load_fixture(\"ar_packing_tests/unpacked.256kb\"),\n\tSpora26Data = ar_test_node:load_fixture(\"ar_packing_tests/spora26.256kb\"),\n\n\tChunkSize = 256*1024,\n\tTXRoot = ar_util:decode(?ENCODED_TX_ROOT),\n\tRewardAddress = ar_test_node:load_fixture(\"ar_packing_tests/address.bin\"),\n\n\t%% unpacked -> unpacked\n\tar_packing_server:request_repack(?CHUNK_OFFSET, {\n\t\tunpacked,\n\t\tunpacked, UnpackedData,\n\t\t?CHUNK_OFFSET, TXRoot, ChunkSize}),\n\treceive\n        {chunk, {packed, _, {unpacked, Unpacked1, _, _, _}}} ->\n            ?assertEqual(UnpackedData, Unpacked1)\n    after ?REQUEST_REPACK_TIMEOUT ->\n        erlang:error(timeout)\n    end,\n\t%% unpacked -> packed\n\tar_packing_server:request_repack(?CHUNK_OFFSET, {\n\t\t{spora_2_6, RewardAddress},\n\t\tunpacked, UnpackedData,\n\t\t?CHUNK_OFFSET, TXRoot, ChunkSize}),\n\treceive\n        {chunk, {packed, _, {{spora_2_6, RewardAddress}, Packed, _, _, _}}} ->\n            ?assertEqual(Spora26Data, Packed)\n    after ?REQUEST_REPACK_TIMEOUT ->\n        erlang:error(timeout)\n    end,\n\t%% packed -> unpacked\n\tar_packing_server:request_repack(?CHUNK_OFFSET, {\n\t\tunpacked,\n\t\t{spora_2_6, RewardAddress}, Spora26Data,\n\t\t?CHUNK_OFFSET, TXRoot, ChunkSize}),\n\treceive\n        {chunk, {packed, _, {unpacked, Unpacked2, _, _, _}}} ->\n            ?assertEqual(UnpackedData, Unpacked2)\n    after ?REQUEST_REPACK_TIMEOUT ->\n        erlang:error(timeout)\n    end,\n\t%% packed -> packed\n\tar_packing_server:request_repack(?CHUNK_OFFSET, {\n\t\t{spora_2_6, RewardAddress},\n\t\t{spora_2_6, RewardAddress}, Spora26Data,\n\t\t?CHUNK_OFFSET, TXRoot, ChunkSize}),\n\treceive\n        {chunk, {packed, _, {{spora_2_6, RewardAddress}, Packed2, _, _, _}}} ->\n            ?assertEqual(Spora26Data, Packed2)\n    after ?REQUEST_REPACK_TIMEOUT ->\n        erlang:error(timeout)\n    end.\n\ntest_request_unpack() ->\n\tUnpackedData = ar_test_node:load_fixture(\"ar_packing_tests/unpacked.256kb\"),\n\tSpora26Data = ar_test_node:load_fixture(\"ar_packing_tests/spora26.256kb\"),\n\n\tChunkSize = 256*1024,\n\tTXRoot = ar_util:decode(?ENCODED_TX_ROOT),\n\tRewardAddress = ar_test_node:load_fixture(\"ar_packing_tests/address.bin\"),\n\n\t%% unpacked -> unpacked\n\tar_packing_server:request_unpack(?CHUNK_OFFSET, {\n\t\tunpacked,\n\t\tUnpackedData,\n\t\t?CHUNK_OFFSET, TXRoot, ChunkSize}),\n\treceive\n        {chunk, {unpacked, _, {unpacked, Unpacked1, _, _, _}}} ->\n            ?assertEqual(UnpackedData, Unpacked1)\n    after ?REQUEST_UNPACK_TIMEOUT ->\n        erlang:error(timeout)\n    end,\n\t%% packed -> unpacked\n\tar_packing_server:request_unpack(?CHUNK_OFFSET, {\n\t\t{spora_2_6, RewardAddress}, Spora26Data,\n\t\t?CHUNK_OFFSET, TXRoot, ChunkSize}),\n\treceive\n        {chunk, {unpacked, _, {{spora_2_6, RewardAddress}, Unpacked2, _, _, _}}} ->\n            ?assertEqual(UnpackedData, Unpacked2)\n    after ?REQUEST_UNPACK_TIMEOUT ->\n        erlang:error(timeout)\n    end,\n\t%% invalid padding\n\tar_packing_server:request_unpack(?CHUNK_OFFSET, {\n\t\t{spora_2_6, RewardAddress}, Spora26Data,\n\t\t?CHUNK_OFFSET, TXRoot, ChunkSize - 10}), % reduce chunk size to create invalid padding\n\treceive\n\t\t{chunk, {unpack_error, _, {{spora_2_6, RewardAddress}, Spora26Data, 
_, _, _}, invalid_padding}} ->\n\t\t\tok\n\tafter ?REQUEST_UNPACK_TIMEOUT ->\n\t\terlang:error(timeout)\n\tend.\n\npacks_chunks_depending_on_packing_threshold_test_() ->\n\tar_test_node:test_with_mocked_functions([\n\t\t\t{ar_fork, height_2_9, fun() -> 10 end},\n\t\t\t{ar_retarget, is_retarget_height, fun(_Height) -> false end},\n\t\t\t{ar_retarget, is_retarget_block, fun(_Block) -> false end}],\n\t\t\tfun test_packs_chunks_depending_on_packing_threshold/0).\n\ntest_packs_chunks_depending_on_packing_threshold() ->\n\tMainWallet = ar_wallet:new_keyfile(),\n\tPeerWallet = ar_test_node:remote_call(peer1, ar_wallet, new_keyfile, []),\n\tMainAddr = ar_wallet:to_address(MainWallet),\n\tPeerAddr = ar_wallet:to_address(PeerWallet),\n\tDataMap =\n\t\tlists:foldr(\n\t\t\tfun(Height, Acc) ->\n\t\t\t\tChunkCount = 3,\n\t\t\t\t{DR1, Chunks1} = ar_test_data_sync:generate_random_split(ChunkCount),\n\t\t\t\t{DR2, Chunks2} = ar_test_data_sync:generate_random_original_split(ChunkCount),\n\t\t\t\t{DR3, Chunks3} = ar_test_data_sync:generate_random_original_v1_split(),\n\t\t\t\tmaps:put(Height, {{DR1, Chunks1}, {DR2, Chunks2}, {DR3, Chunks3}}, Acc)\n\t\t\tend,\n\t\t\t#{},\n\t\t\tlists:seq(1, 20)\n\t\t),\n\tWallet = ar_test_data_sync:setup_nodes(#{ addr => MainAddr, peer_addr => PeerAddr }),\n\t{_LegacyProofs, StrictProofs, V1Proofs} = lists:foldl(\n\t\tfun(Height, {Acc1, Acc2, Acc3}) ->\n\t\t\t{{DR1, Chunks1}, {DR2, Chunks2}, {DR3, Chunks3}} = maps:get(Height, DataMap),\n\t\t\t{#tx{ id = TXID1 } = TX1, Chunks1} =\n\t\t\t\t\tar_test_data_sync:tx(Wallet, {fixed_data, DR1, Chunks1}),\n\t\t\t{#tx{ id = TXID2 } = TX2, Chunks2} =\n\t\t\t\t\tar_test_data_sync:tx(Wallet, {fixed_data, DR2, Chunks2}),\n\t\t\t{#tx{ id = TXID3 } = TX3, Chunks3} =\n\t\t\t\t\tar_test_data_sync:tx(Wallet, {fixed_data, DR3, Chunks3}, v1),\n\t\t\t{Miner, Receiver} =\n\t\t\t\tcase rand:uniform(2) == 1 of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\t{main, peer1};\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\t{peer1, main}\n\t\t\t\tend,\n\t\t\t?debugFmt(\"miner: ~p, receiver: ~p~n\", [Miner, Receiver]),\n\t\t\t?debugFmt(\"Mining block ~B.~n\", [Height]),\n\t\t\tTXs = ar_util:pick_random([TX1, TX2, TX3], 2),\n\t\t\tB = ar_test_node:post_and_mine(#{ miner => Miner, await_on => Receiver }, TXs),\n\t\t\tAcc1_2 =\n\t\t\t\tcase lists:member(TX1, TXs) of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tar_test_data_sync:post_proofs(main, B, TX1, Chunks1),\n\t\t\t\t\t\tmaps:put(TXID1,\n\t\t\t\t\t\t\t\tar_test_data_sync:get_records_with_proofs(B, TX1, Chunks1),\n\t\t\t\t\t\t\t\tAcc1);\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\tAcc1\n\t\t\t\tend,\n\t\t\tAcc2_2 =\n\t\t\t\tcase lists:member(TX2, TXs) of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tar_test_data_sync:post_proofs(peer1, B, TX2, Chunks2),\n\t\t\t\t\t\tmaps:put(TXID2,\n\t\t\t\t\t\t\t\tar_test_data_sync:get_records_with_proofs(B, TX2, Chunks2),\n\t\t\t\t\t\t\t\tAcc2);\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\tAcc2\n\t\t\t\tend,\n\t\t\tAcc3_2 =\n\t\t\t\tcase lists:member(TX3, TXs) of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tmaps:put(TXID3,\n\t\t\t\t\t\t\t\tar_test_data_sync:get_records_with_proofs(B, TX3, Chunks3),\n\t\t\t\t\t\t\t\tAcc3);\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\tAcc3\n\t\t\t\tend,\n\t\t\t{Acc1_2, Acc2_2, Acc3_2}\n\t\tend,\n\t\t{#{}, #{}, #{}},\n\t\tlists:seq(1, 20)\n\t),\n\t%% Mine some empty blocks on top to force all submitted data to fall below\n\t%% the disk pool threshold so that the non-default storage modules can sync it.\n\tlists:foreach(\n\t\tfun(_) ->\n\t\t\t{Miner, Receiver} =\n\t\t\t\tcase rand:uniform(2) == 1 of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\t{main, 
peer1};\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\t{peer1, main}\n\t\t\t\tend,\n\t\t\tar_test_node:post_and_mine(#{ miner => Miner, await_on => Receiver }, [])\n\t\tend,\n\t\tlists:seq(1, 5)\n\t),\n\tBILast = ar_node:get_block_index(),\n\tLastB = ar_test_node:read_block_when_stored(\n\t\t\telement(1, lists:nth(10, lists:reverse(BILast)))),\n\tlists:foldl(\n\t\tfun(Height, PrevB) ->\n\t\t\tH = element(1, lists:nth(Height + 1, lists:reverse(BILast))),\n\t\t\tB = ar_test_node:read_block_when_stored(H),\n\t\t\tPoA = B#block.poa,\n\t\t\tNonceLimiterInfo = B#block.nonce_limiter_info,\n\t\t\tPartitionUpperBound =\n\t\t\t\t\tNonceLimiterInfo#nonce_limiter_info.partition_upper_bound,\n\t\t\tH0 = ar_block:compute_h0(B, PrevB),\n\t\t\t{RecallRange1Start, _} = ar_block:get_recall_range(H0,\n\t\t\t\t\tB#block.partition_number, PartitionUpperBound),\n\t\t\tRecallByte =\n\t\t\t\tcase B#block.packing_difficulty of\n\t\t\t\t\t0 ->\n\t\t\t\t\t\tRecallRange1Start + B#block.nonce * ?DATA_CHUNK_SIZE;\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tRecallRange1Start + (B#block.nonce div 32) * ?DATA_CHUNK_SIZE\n\t\t\t\tend,\n\t\t\t{BlockStart, BlockEnd, TXRoot} = ar_block_index:get_block_bounds(RecallByte),\n\t\t\t?debugFmt(\"Mined a block. \"\n\t\t\t\t\t\"Computed recall byte: ~B, block's recall byte: ~p. \"\n\t\t\t\t\t\"Height: ~B. Previous block: ~s. \"\n\t\t\t\t\t\"Computed search space upper bound: ~B. \"\n\t\t\t\t\t\"Block start: ~B. Block end: ~B. TX root: ~s.\",\n\t\t\t\t\t[RecallByte, B#block.recall_byte, Height,\n\t\t\t\t\tar_util:encode(PrevB#block.indep_hash), PartitionUpperBound,\n\t\t\t\t\tBlockStart, BlockEnd, ar_util:encode(TXRoot)]),\n\t\t\t?assertEqual(RecallByte, B#block.recall_byte),\n\t\t\tSubChunkIndex = ar_block:get_sub_chunk_index(B#block.packing_difficulty,\n\t\t\t\t\tB#block.nonce),\n\t\t\t{Packing, PoA2} =\n\t\t\t\tcase B#block.packing_difficulty of\n\t\t\t\t\t0 ->\n\t\t\t\t\t\t{{spora_2_6, B#block.reward_addr}, PoA};\n\t\t\t\t\t?REPLICA_2_9_PACKING_DIFFICULTY ->\n\t\t\t\t\t\t{ok, #{ chunk := UnpackedChunk }}\n\t\t\t\t\t\t\t= ar_data_sync:get_chunk(RecallByte + 1,\n\t\t\t\t\t\t\t\t#{ packing => unpacked, pack => true, origin => test }),\n\t\t\t\t\t\tUnpackedChunk2 = ar_packing_server:pad_chunk(UnpackedChunk),\n\t\t\t\t\t\t{{replica_2_9, B#block.reward_addr},\n\t\t\t\t\t\t\t\tPoA#poa{ unpacked_chunk = UnpackedChunk2 }};\n\t\t\t\t\t_ ->\n\t\t\t\t\t\t{ok, #{ chunk := UnpackedChunk }}\n\t\t\t\t\t\t\t= ar_data_sync:get_chunk(RecallByte + 1,\n\t\t\t\t\t\t\t\t#{ packing => unpacked, pack => true, origin => test }),\n\t\t\t\t\t\tUnpackedChunk2 = ar_packing_server:pad_chunk(UnpackedChunk),\n\t\t\t\t\t\t{{composite, B#block.reward_addr, B#block.packing_difficulty},\n\t\t\t\t\t\t\t\tPoA#poa{ unpacked_chunk = UnpackedChunk2 }}\n\t\t\t\tend,\n\t\t\t?assertMatch({true, _}, ar_poa:validate({BlockStart, RecallByte, TXRoot,\n\t\t\t\t\tBlockEnd - BlockStart, PoA2, Packing, SubChunkIndex, not_set})),\n\t\t\tB\n\t\tend,\n\t\tLastB,\n\t\tlists:seq(10, 20)\n\t),\n\t?debugMsg(\"Asserting synced data with the strict splits.\"),\n\tmaps:map(\n\t\tfun(TXID, [{_, _, Chunks, _} | _]) ->\n\t\t\tExpectedData = ar_util:encode(binary:list_to_bin(Chunks)),\n\t\t\tar_test_node:assert_get_tx_data(main, TXID, ExpectedData),\n\t\t\tar_test_node:assert_get_tx_data(peer1, TXID, ExpectedData)\n\t\tend,\n\t\tStrictProofs\n\t),\n\t?debugMsg(\"Asserting synced v1 data.\"),\n\tmaps:map(\n\t\tfun(TXID, [{_, _, Chunks, _} | _]) ->\n\t\t\tExpectedData = ar_util:encode(binary:list_to_bin(Chunks)),\n\t\t\tar_test_node:assert_get_tx_data(main, TXID, 
ExpectedData),\n\t\t\tar_test_node:assert_get_tx_data(peer1, TXID, ExpectedData)\n\t\tend,\n\t\tV1Proofs\n\t),\n\t?debugMsg(\"Asserting synced chunks.\"),\n\tar_test_data_sync:wait_until_syncs_chunks([P || {_, _, _, P} <- lists:flatten(maps:values(StrictProofs))]),\n\tar_test_data_sync:wait_until_syncs_chunks([P || {_, _, _, P} <- lists:flatten(maps:values(V1Proofs))]),\n\tar_test_data_sync:wait_until_syncs_chunks(peer1, [P || {_, _, _, P} <- lists:flatten(\n\t\t\tmaps:values(StrictProofs))], infinity),\n\tar_test_data_sync:wait_until_syncs_chunks(peer1, [P || {_, _, _, P} <- lists:flatten(maps:values(V1Proofs))],\n\t\t\tinfinity).\n\n"
  },
  {
    "path": "apps/arweave/test/ar_peer_intervals_discovery_test.erl",
    "content": "-module(ar_peer_intervals_discovery_test).\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include(\"ar_data_discovery.hrl\").\n\n-include(\"ar.hrl\").\n\nno_unsynced_intervals_test_() ->\n\tTestCase = #{\n\t\tsynced => [{0, 10}],\n\t\tpeer1 => [{0, 5}],\n\t\tpeer2 => [{3, 8}]\n\t},\n\ttest_interval_discovery(TestCase, footprint, \"No unsynced intervals\").\n\nbasic_interval_discovery_test_() ->\n\tTestCase = #{\n\t\tsynced => [],\n\t\tpeer1 => [{0, 3}]\n\t},\n\ttest_interval_discovery(TestCase, footprint, \"Three chunks\").\n\noverlapping_intervals_test_() ->\n\tTestCase = #{\n\t\tsynced => [{8, 12}],\n\t\tpeer1 => [{0, 1}],\n\t\tpeer2 => [{3, 10}],\n\t\tpeer3 => [{6, 13}]\n\t},\n\ttest_interval_discovery(TestCase, footprint, \"Overlapping intervals\").\n\ntest_interval_discovery(TestCase, Mode, Title) ->\n\tSyncedChunks = maps:get(synced, TestCase, []),\n\tPeerChunksData = maps:remove(synced, TestCase),\n\n\t%% Convert to bytes\n\tSyncedBytes = chunks_to_bytes(SyncedChunks),\n\tPeerBytesData = maps:map(fun(_K, V) -> chunks_to_bytes(V) end, PeerChunksData),\n\n\tTestRangeEnd = 50 * ?DATA_CHUNK_SIZE,\n\tUnsyncedBytes = calculate_unsynced_from_synced(SyncedBytes, TestRangeEnd),\n\n\tPeers = maps:keys(PeerBytesData),\n\n\tExpectedIntervals = calculate_expected_intervals(UnsyncedBytes, PeerBytesData),\n\n\tMocks = create_test_mocks(Peers),\n\n\tTestConfig = #config{ sync_from_local_peers_only = false, local_peers = [] },\n\tarweave_config:set_env(TestConfig),\n\n\tsetup_sync_record_servers(SyncedBytes, PeerBytesData),\n\n\tar_test_node:test_with_mocked_functions(Mocks, fun() ->\n\t\tStart = 0,\n\t\tEnd = TestRangeEnd,\n\t\tStoreID = test_store_id,\n\n\t\tar_peer_intervals:fetch(Start, Start, End, StoreID, Mode),\n\n\t\t%% Verify we get the expected intervals enqueued\n\t\tcase maps:size(ExpectedIntervals) == 0 of\n\t\t\ttrue ->\n\t\t\t\treceive\n\t\t\t\t\t{'$gen_cast', {enqueue_intervals, []}} -> ok\n\t\t\t\tafter 100 -> ok\n\t\t\t\tend;\n\t\t\tfalse ->\n\t\t\t\tAllEnqueueIntervals = collect_enqueue_intervals(#{}, StoreID, Mode),\n\n\t\t\t\tFlattenedIntervals = lists:flatten(AllEnqueueIntervals),\n\t\t\t\tverify_enqueued_intervals(FlattenedIntervals, ExpectedIntervals, Title)\n\t\tend\n\tend).\n\ncollect_enqueue_intervals(Acc, StoreID, Mode) ->\n\treceive\n\t\t{'$gen_cast', {enqueue_intervals, EnqueueIntervals}} ->\n\t\t\tAcc2 = update_peer_intervals(EnqueueIntervals, Acc),\n\t\t\tcollect_enqueue_intervals(Acc2, StoreID, Mode);\n\t\t{'$gen_cast', {collect_peer_intervals, Offset, _Start, End, _}} when Offset >= End ->\n\t\t\tmaps:to_list(Acc);\n\t\t{'$gen_cast', {collect_peer_intervals, Offset, Start, End, _}} ->\n\t\t\tar_peer_intervals:fetch(Offset, Start, End, StoreID, Mode),\n\t\t\tcollect_enqueue_intervals(Acc, StoreID, Mode)\n\tafter 10_000 ->\n\t\t?assert(false, \"No enqueue_intervals messages received\")\n\tend.\n\nupdate_peer_intervals([], Acc) ->\n\tAcc;\nupdate_peer_intervals([{Peer, Intervals, _FootprintKey} | Rest], Acc) ->\n\tPeerIntervals = maps:get(Peer, Acc, ar_intervals:new()),\n\tPeerIntervals2 = ar_intervals:union(PeerIntervals, Intervals),\n\tupdate_peer_intervals(Rest, maps:put(Peer, PeerIntervals2, Acc)).\n\ncreate_test_mocks(Peers) ->\n\t[\n\t\t{ar_data_discovery, get_footprint_bucket_peers, fun(_FootprintBucket) -> Peers end},\n\t\t{ar_tx_blacklist, get_blacklisted_intervals, fun(_Start, _End) -> ar_intervals:new() end},\n\t\t{ar_http_iface_client, get_footprints, fun(Peer, Partition, 
Footprint) ->\n\t\t\tIntervals = ar_footprint_record:get_intervals(Partition, Footprint, Peer),\n\t\t\t{ok, Intervals}\n\t\tend},\n\t\t{ar_data_sync, name, fun(_StoreID) -> self() end},\n\t\t{ar_peers, get_peer_release, fun(_Peer) -> ?GET_FOOTPRINT_SUPPORT_RELEASE end},\n\t\t{ar_rate_limiter, is_on_cooldown, fun(_Peer, _Key) -> false end},\n\t\t{ar_rate_limiter, is_throttled, fun(_Peer, _Path) -> false end}\n\t].\n\nverify_enqueued_intervals(EnqueueIntervals, ExpectedIntervals, Title) ->\n\t?assert(is_list(EnqueueIntervals)),\n\n\tEnqueuedByPeer = maps:from_list(EnqueueIntervals),\n\n\t%% Verify each expected peer has intervals\n\tmaps:fold(fun(Peer, ExpectedPeerIntervals, _) ->\n\t\t?assert(maps:is_key(Peer, EnqueuedByPeer),\n\t\t\tlists:flatten(\n\t\t\t\tio_lib:format(\"Expected peer ~p not found in enqueued intervals\", [Peer]))),\n\n\t\tActualPeerIntervals = maps:get(Peer, EnqueuedByPeer),\n\n\t\t?assertEqual(ar_intervals:to_list(ExpectedPeerIntervals), ar_intervals:to_list(ActualPeerIntervals), Title)\n\tend, ok, ExpectedIntervals).\n\nchunks_to_bytes(ChunkIntervals) ->\n\tlists:map(fun({Start, End}) ->\n\t\tStartBytes = trunc(Start * ?DATA_CHUNK_SIZE),\n\t\tEndBytes = trunc(End * ?DATA_CHUNK_SIZE),\n\t\t{EndBytes, StartBytes}\n\tend, ChunkIntervals).\n\n%% Calculate unsynced intervals as gaps in synced intervals within the test range\ncalculate_unsynced_from_synced(SyncedBytes, TestRangeEnd) ->\n\tcase SyncedBytes of\n\t\t[] ->\n\t\t\t%% Nothing synced, everything is unsynced\n\t\t\t[{TestRangeEnd, 0}];\n\t\t_ ->\n\t\t\t%% Find gaps in synced intervals\n\t\t\tSyncedIntervals = ar_intervals:from_list(SyncedBytes),\n\t\t\tTestRange = ar_intervals:from_list([{TestRangeEnd, 0}]),\n\t\t\tUnsyncedIntervals = ar_intervals:outerjoin(SyncedIntervals, TestRange),\n\t\t\tar_intervals:to_list(UnsyncedIntervals)\n\tend.\n\ncalculate_expected_intervals(UnsyncedBytes, PeerBytesData) ->\n\t%% For each peer, calculate intersection with unsynced intervals\n\tUnsyncedIntervals = ar_intervals:from_list(UnsyncedBytes),\n\tExpectedByPeer = maps:map(fun(_Peer, PeerIntervals) ->\n\t\tPeerIntervalsObj = ar_intervals:from_list(PeerIntervals),\n\t\tIntersection = ar_intervals:intersection(UnsyncedIntervals, PeerIntervalsObj),\n\t\tIntersection\n\tend, PeerBytesData),\n\tmaps:filter(fun(_Peer, Intervals) -> not ar_intervals:is_empty(Intervals) end, ExpectedByPeer).\n\nsetup_sync_record_servers(SyncedBytes, PeerBytesData) ->\n\tcase ets:info(sync_records) of\n\t\tundefined ->\n\t\t\tets:new(sync_records, [named_table, public, {read_concurrency, true}]);\n\t\t_ ->\n\t\t\tets:delete_all_objects(sync_records)\n\tend,\n\n\tPacking = unpacked,\n\tSyncedStoreID = test_store_id,\n\tProcessName = ar_sync_record:name(SyncedStoreID),\n\tcase whereis(ProcessName) of\n\t\tundefined -> ar_sync_record:start_link(ProcessName, SyncedStoreID);\n\t\t_ -> ok\n\tend,\n\tadd_bytes_to_footprint(SyncedBytes, Packing, SyncedStoreID),\n\n\tmaps:foreach(\n\t\tfun(Peer, PeerBytes) ->\n\t\t\tPeerProcessName = ar_sync_record:name(Peer),\n\t\t\tcase whereis(PeerProcessName) of\n\t\t\t\tundefined -> ar_sync_record:start_link(PeerProcessName, Peer);\n\t\t\t\t_ -> ok\n\t\t\tend,\n\t\t\tadd_bytes_to_footprint(PeerBytes, Packing, Peer)\n\t\tend,\n\t\tPeerBytesData\n\t).\n\nadd_bytes_to_footprint([], _Packing, _StoreID) -> ok;\nadd_bytes_to_footprint([{End, Start} | Rest], Packing, StoreID) ->\n\tlists:foreach(\n\t\tfun(Offset) ->\n\t\t\tar_footprint_record:add(Offset, Packing, StoreID) end,\n\t\tlists:seq(Start + ?DATA_CHUNK_SIZE, End, 
?DATA_CHUNK_SIZE)\n\t),\n\tadd_bytes_to_footprint(Rest, Packing, StoreID)."
  },
  {
    "path": "apps/arweave/test/ar_poa_tests.erl",
    "content": "-module(ar_poa_tests).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-import(ar_test_node, [wait_until_height/2, assert_wait_until_height/2, read_block_when_stored/1]).\n\nv1_transactions_after_2_0_test_() ->\n\t{timeout, 420, fun test_v1_transactions_after_2_0/0}.\n\ntest_v1_transactions_after_2_0() ->\n\tKey = {_, Pub1} = ar_wallet:new(),\n\tKey2 = {_, Pub2} = ar_wallet:new(),\n\t[B0] = ar_weave:init([\n\t\t{ar_wallet:to_address(Pub1), ?AR(100), <<>>},\n\t\t{ar_wallet:to_address(Pub2), ?AR(100), <<>>}\n\t]),\n\tar_test_node:start(B0),\n\tar_test_node:start_peer(peer1, B0),\n\tar_test_node:connect_to_peer(peer1),\n\tTXs = generate_txs(Key, fun ar_test_node:sign_v1_tx/2),\n\tlists:foreach(\n\t\tfun(TX) ->\n\t\t\tar_test_node:assert_post_tx_to_peer(peer1, TX)\n\t\tend,\n\t\tTXs\n\t),\n\tar_test_node:assert_wait_until_receives_txs(TXs),\n\tlists:foreach(\n\t\tfun(Height) ->\n\t\t\tar_test_node:mine(peer1),\n\t\t\tBI = wait_until_height(main, Height),\n\t\t\tcase Height of\n\t\t\t\t1 ->\n\t\t\t\t\tassert_txs_mined(TXs, BI);\n\t\t\t\t_ ->\n\t\t\t\t\tnoop\n\t\t\tend,\n\t\t\tassert_wait_until_height(peer1, Height)\n\t\tend,\n\t\tlists:seq(1, 10)\n\t),\n\tMoreTXs = generate_txs(Key2, fun ar_test_node:sign_v1_tx/2),\n\tlists:foreach(\n\t\tfun(TX) ->\n\t\t\tar_test_node:assert_post_tx_to_peer(peer1, TX)\n\t\tend,\n\t\tMoreTXs\n\t),\n\tar_test_node:assert_wait_until_receives_txs(MoreTXs),\n\tlists:foreach(\n\t\tfun(Height) ->\n\t\t\tar_test_node:mine(peer1),\n\t\t\tBI = wait_until_height(main, Height),\n\t\t\tcase Height of\n\t\t\t\t11 ->\n\t\t\t\t\tassert_txs_mined(MoreTXs, BI);\n\t\t\t\t_ ->\n\t\t\t\t\tnoop\n\t\t\tend,\n\t\t\tassert_wait_until_height(peer1, Height)\n\t\tend,\n\t\tlists:seq(11, 20)\n\t).\n\nv2_transactions_after_2_0_test_() ->\n\t{timeout, 420, fun test_v2_transactions_after_2_0/0}.\n\ntest_v2_transactions_after_2_0() ->\n\tKey = {_, Pub1} = ar_wallet:new(),\n\tKey2 = {_, Pub2} = ar_wallet:new(),\n\t[B0] = ar_weave:init([\n\t\t{ar_wallet:to_address(Pub1), ?AR(100), <<>>},\n\t\t{ar_wallet:to_address(Pub2), ?AR(100), <<>>}\n\t]),\n\tar_test_node:start(B0),\n\tar_test_node:start_peer(peer1, B0),\n\tar_test_node:connect_to_peer(peer1),\n\tTXs = generate_txs(Key, fun ar_test_node:sign_tx/2),\n\tlists:foreach(\n\t\tfun(TX) ->\n\t\t\tar_test_node:assert_post_tx_to_peer(peer1, TX)\n\t\tend,\n\t\tTXs\n\t),\n\tar_test_node:assert_wait_until_receives_txs(TXs),\n\tlists:foreach(\n\t\tfun(Height) ->\n\t\t\tar_test_node:mine(peer1),\n\t\t\tBI = wait_until_height(main, Height),\n\t\t\tcase Height of\n\t\t\t\t1 ->\n\t\t\t\t\tassert_txs_mined(TXs, BI);\n\t\t\t\t_ ->\n\t\t\t\t\tnoop\n\t\t\tend,\n\t\t\tassert_wait_until_height(peer1, Height)\n\t\tend,\n\t\tlists:seq(1, 10)\n\t),\n\tMoreTXs = generate_txs(Key2, fun ar_test_node:sign_tx/2),\n\tlists:foreach(\n\t\tfun(TX) ->\n\t\t\tar_test_node:assert_post_tx_to_peer(peer1, TX)\n\t\tend,\n\t\tMoreTXs\n\t),\n\tar_test_node:assert_wait_until_receives_txs(MoreTXs),\n\tlists:foreach(\n\t\tfun(Height) ->\n\t\t\tar_test_node:mine(peer1),\n\t\t\tBI = wait_until_height(main, Height),\n\t\t\tcase Height of\n\t\t\t\t11 ->\n\t\t\t\t\tassert_txs_mined(MoreTXs, BI);\n\t\t\t\t_ ->\n\t\t\t\t\tnoop\n\t\t\tend,\n\t\t\tassert_wait_until_height(peer1, Height)\n\t\tend,\n\t\tlists:seq(11, 20)\n\t).\n\nrecall_byte_on_the_border_test_() ->\n\t{timeout, 420, fun test_recall_byte_on_the_border/0}.\n\ntest_recall_byte_on_the_border() ->\n\tKey = {_, Pub} = ar_wallet:new(),\n\t[B0] = 
ar_weave:init([\n\t\t{ar_wallet:to_address(Pub), ?AR(100), <<>>}\n\t]),\n\tar_test_node:start(B0),\n\tar_test_node:start_peer(peer1, B0),\n\tar_test_node:connect_to_peer(peer1),\n\t%% Generate one-byte transactions so that the recall byte is often on the\n\t%% border between two transactions.\n\tTXs = [\n\t\tar_test_node:sign_tx(Key, #{ data => <<\"A\">>, tags => [random_nonce()], last_tx => ar_test_node:get_tx_anchor(peer1) }),\n\t\tar_test_node:sign_tx(Key, #{ data => <<\"B\">>, tags => [random_nonce()], last_tx => ar_test_node:get_tx_anchor(peer1) }),\n\t\tar_test_node:sign_tx(Key, #{ data => <<\"B\">>, tags => [random_nonce()], last_tx => ar_test_node:get_tx_anchor(peer1) }),\n\t\tar_test_node:sign_tx(Key, #{ data => <<\"C\">>, tags => [random_nonce()], last_tx => ar_test_node:get_tx_anchor(peer1) })\n\t],\n\tlists:foreach(\n\t\tfun(TX) ->\n\t\t\tar_test_node:assert_post_tx_to_peer(peer1, TX)\n\t\tend,\n\t\tTXs\n\t),\n\tar_test_node:assert_wait_until_receives_txs(TXs),\n\tlists:foreach(\n\t\tfun(Height) ->\n\t\t\tar_test_node:mine(peer1),\n\t\t\tBI = wait_until_height(main, Height),\n\t\t\tcase Height of\n\t\t\t\t1 ->\n\t\t\t\t\tassert_txs_mined(TXs, BI);\n\t\t\t\t_ ->\n\t\t\t\t\tnoop\n\t\t\tend,\n\t\t\tassert_wait_until_height(peer1, Height)\n\t\tend,\n\t\tlists:seq(1, 10)\n\t).\n\nignores_transactions_with_invalid_data_root_test_() ->\n\t{timeout, 420, fun test_ignores_transactions_with_invalid_data_root/0}.\n\ntest_ignores_transactions_with_invalid_data_root() ->\n\tKey = {_, Pub} = ar_wallet:new(),\n\t[B0] = ar_weave:init([\n\t\t{ar_wallet:to_address(Pub), ?AR(100), <<>>}\n\t]),\n\tar_test_node:start(B0),\n\tar_test_node:start_peer(peer1, B0),\n\tar_test_node:connect_to_peer(peer1),\n\t%% Generate transactions where half of them are valid and the other\n\t%% half has an invalid data_root.\n\tGenerateTXParams =\n\t\tfun\n\t\t\t(valid) ->\n\t\t\t\t#{ data => <<\"DATA\">>, tags => [random_nonce()], last_tx => ar_test_node:get_tx_anchor(peer1) };\n\t\t\t(invalid) ->\n\t\t\t\t#{ data_root => crypto:strong_rand_bytes(32),\n\t\t\t\t\tdata => <<\"DATA\">>, tags => [random_nonce()], last_tx => ar_test_node:get_tx_anchor(peer1) }\n\t\tend,\n\tTXs = [\n\t\tar_test_node:sign_tx(Key, GenerateTXParams(valid)),\n\t\t(ar_test_node:sign_tx(Key, GenerateTXParams(invalid)))#tx{ data = <<>> },\n\t\tar_test_node:sign_tx(Key, GenerateTXParams(valid)),\n\t\t(ar_test_node:sign_tx(Key, GenerateTXParams(invalid)))#tx{ data = <<>> },\n\t\tar_test_node:sign_tx(Key, GenerateTXParams(valid)),\n\t\t(ar_test_node:sign_tx(Key, GenerateTXParams(invalid)))#tx{ data = <<>> },\n\t\tar_test_node:sign_tx(Key, GenerateTXParams(valid)),\n\t\t(ar_test_node:sign_tx(Key, GenerateTXParams(invalid)))#tx{ data = <<>> },\n\t\tar_test_node:sign_tx(Key, GenerateTXParams(valid)),\n\t\t(ar_test_node:sign_tx(Key, GenerateTXParams(invalid)))#tx{ data = <<>> }\n\t],\n\tlists:foreach(\n\t\tfun(TX) ->\n\t\t\tar_test_node:assert_post_tx_to_peer(peer1, TX)\n\t\tend,\n\t\tTXs\n\t),\n\tar_test_node:assert_wait_until_receives_txs(TXs),\n\tlists:foreach(\n\t\tfun(Height) ->\n\t\t\tar_test_node:mine(peer1),\n\t\t\tBI = wait_until_height(main, Height),\n\t\t\tcase Height of\n\t\t\t\t1 ->\n\t\t\t\t\tassert_txs_mined(TXs, BI);\n\t\t\t\t_ ->\n\t\t\t\t\tnoop\n\t\t\tend,\n\t\t\tassert_wait_until_height(peer1, Height)\n\t\tend,\n\t\tlists:seq(1, 10)\n\t).\n\ngenerate_txs(Key, SignFun) ->\n\t[\n\t\tSignFun(Key, #{ data => <<>>, tags => [random_nonce()], last_tx => ar_test_node:get_tx_anchor(peer1) }),\n\t\tSignFun(Key, #{ data => <<\"B\">>, tags => 
[random_nonce()], last_tx => ar_test_node:get_tx_anchor(peer1) }),\n\t\tSignFun(\n\t\t\tKey, #{\n\t\t\t\tdata => <<\"DATA\">>,\n\t\t\t\ttags => [random_nonce()],\n\t\t\t\tlast_tx => ar_test_node:get_tx_anchor(peer1)\n\t\t\t}\n\t\t),\n\t\tSignFun(\n\t\t\tKey, #{\n\t\t\t\tdata => << <<\"B\">> || _ <- lists:seq(1, ?DATA_CHUNK_SIZE) >>,\n\t\t\t\ttags => [random_nonce()],\n\t\t\t\tlast_tx => ar_test_node:get_tx_anchor(peer1)\n\t\t\t}\n\t\t),\n\t\tSignFun(\n\t\t\tKey,\n\t\t\t#{\n\t\t\t\tdata => << <<\"B\">> || _ <- lists:seq(1, ?DATA_CHUNK_SIZE * 2) >>,\n\t\t\t\ttags => [random_nonce()],\n\t\t\t\tlast_tx => ar_test_node:get_tx_anchor(peer1)\n\t\t\t}\n\t\t),\n\t\tSignFun(\n\t\t\tKey, #{\n\t\t\t\tdata => << <<\"B\">> || _ <- lists:seq(1, ?DATA_CHUNK_SIZE * 3) >>,\n\t\t\t\ttags => [random_nonce()],\n\t\t\t\tlast_tx => ar_test_node:get_tx_anchor(peer1)\n\t\t\t}\n\t\t),\n\t\tSignFun(\n\t\t\tKey, #{\n\t\t\t\tdata => << <<\"B\">> || _ <- lists:seq(1, ?DATA_CHUNK_SIZE * 13) >>,\n\t\t\t\ttags => [random_nonce()],\n\t\t\t\tlast_tx => ar_test_node:get_tx_anchor(peer1)\n\t\t\t}\n\t\t)\n\t].\n\nrandom_nonce() ->\n\t{<<\"nonce\">>, integer_to_binary(rand:uniform(1000000))}.\n\nassert_txs_mined(TXs, [{H, _, _} | _]) ->\n\tB = read_block_when_stored(H),\n\tTXIDs = [TX#tx.id || TX <- TXs],\n\t?assertEqual(length(TXIDs), length(B#block.txs)),\n\t?assertEqual(lists:sort(TXIDs), lists:sort(B#block.txs)).\n"
  },
  {
    "path": "apps/arweave/test/ar_poller_tests.erl",
    "content": "-module(ar_poller_tests).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-import(ar_test_node, [assert_wait_until_height/2, read_block_when_stored/1]).\n\npolling_test_() ->\n\tar_test_node:test_with_mocked_functions([\n\t\t{ar_retarget, is_retarget_height, fun(_Height) -> false end},\n\t\t{ar_retarget, is_retarget_block, fun(_Block) -> false end}],\n\t\tfun test_polling/0).\n\ntest_polling() ->\n\t{_, Pub} = Wallet = ar_wallet:new(),\n\t[B0] = ar_weave:init([{ar_wallet:to_address(Pub), ?AR(10000), <<>>}]),\n\tar_test_node:start(B0),\n\tar_test_node:start_peer(peer1, B0),\n\tar_test_node:disconnect_from(peer1),\n\tTXs =\n\t\tlists:map(\n\t\t\tfun(Height) ->\n\t\t\t\tSignedTX = ar_test_node:sign_tx(Wallet, #{ last_tx => ar_test_node:get_tx_anchor(peer1) }),\n\t\t\t\tar_test_node:assert_post_tx_to_peer(peer1, SignedTX),\n\t\t\t\tar_test_node:mine(peer1),\n\t\t\t\tassert_wait_until_height(peer1, Height),\n\t\t\t\tSignedTX\n\t\t\tend,\n\t\t\tlists:seq(1, 9)\n\t\t),\n\tar_test_node:connect_to_peer(peer1),\n\tar_test_node:wait_until_height(main, 9),\n\tlists:foreach(\n\t\tfun(Height) ->\n\t\t\t{H, _, _} = ar_node:get_block_index_entry(Height),\n\t\t\tB = read_block_when_stored(H),\n\t\t\tTX = lists:nth(Height, TXs),\n\t\t\t?assertEqual([TX#tx.id], B#block.txs)\n\t\tend,\n\t\tlists:seq(1, 9)\n\t),\n\t%% Make the nodes diverge. Expect one of them to fetch and apply the blocks\n\t%% from the winning fork.\n\tar_test_node:disconnect_from(peer1),\n\tar_test_node:mine(),\n\tar_test_node:mine(peer1),\n\t[{MH11, _, _} | _] = ar_test_node:wait_until_height(main, 10),\n\t[{SH11, _, _} | _] = ar_test_node:wait_until_height(peer1, 10),\n\t?assertNotEqual(SH11, MH11),\n\tar_test_node:mine(),\n\tar_test_node:mine(peer1),\n\t[{MH12, _, _} | _] = ar_test_node:wait_until_height(main, 11),\n\t[{SH12, _, _} | _] = ar_test_node:wait_until_height(peer1, 11),\n\t?assertNotEqual(SH12, MH12),\n\tar_test_node:mine(),\n\tar_test_node:mine(peer1),\n\t[{MH13, _, _} | _] = MBI12 = ar_test_node:wait_until_height(main, 12),\n\t[{SH13, _, _} | _] = SBI12 = ar_test_node:wait_until_height(peer1, 12),\n\t?assertNotEqual(SH13, MH13),\n\tBM13 = ar_block_cache:get(block_cache, MH13),\n\tBS13 = ar_test_node:remote_call(peer1, ar_block_cache, get, [block_cache, SH13]),\n\tCDiffM13 = BM13#block.cumulative_diff,\n\tCDiffS13 = BS13#block.cumulative_diff,\n\tar_test_node:connect_to_peer(peer1),\n\tcase CDiffM13 > CDiffS13 of\n\t\ttrue ->\n\t\t\t?debugFmt(\"Case 1.\", []),\n\t\t\t?assertEqual(ok, ar_test_node:wait_until_block_index(peer1, MBI12)),\n\t\t\t?assertMatch([{MH13, _, _} | _], ar_node:get_block_index());\n\t\tfalse ->\n\t\t\tcase CDiffM13 < CDiffS13 of\n\t\t\t\ttrue ->\n\t\t\t\t\t?debugFmt(\"Case 2.\", []),\n\t\t\t\t\t?assertEqual(ok, ar_test_node:wait_until_block_index(SBI12)),\n\t\t\t\t\t?assertMatch([{SH13, _, _} | _],\n\t\t\t\t\t\t\tar_test_node:remote_call(peer1, ar_node, get_block_index, []));\n\t\t\t\tfalse ->\n\t\t\t\t\t?debugFmt(\"Case 3.\", []),\n\t\t\t\t\tar_test_node:mine(peer1),\n\t\t\t\t\t[{MH14, _, _}, {MH13_1, _, _}, {MH12_1, _, _}, {MH11_1, _, _} | _]\n\t\t\t\t\t= ar_test_node:wait_until_height(main, 13),\n\t\t\t\t\t[{SH14, _, _} | _] = ar_test_node:wait_until_height(peer1, 13),\n\t\t\t\t\t?assertEqual(MH14, SH14),\n\t\t\t\t\t?assertEqual(SH13, MH13_1),\n\t\t\t\t\t?assertEqual(SH12, MH12_1),\n\t\t\t\t\t?assertEqual(SH11, MH11_1)\n\t\t\tend\n\tend.\n"
  },
  {
    "path": "apps/arweave/test/ar_post_block_tests.erl",
    "content": "-module(ar_post_block_tests).\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-include(\"ar_consensus.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-import(ar_test_node, [\n\t\twait_until_height/2, post_block/2,\n\t\tsend_new_block/2, sign_block/3,\n\t\tread_block_when_stored/2,\n\t\tassert_wait_until_height/2,\n\t\ttest_with_mocked_functions/2]).\n\nstart_node() ->\n\t[B0] = ar_weave:init([], 0), %% Set difficulty to 0 to speed up tests\n\tar_test_node:start(B0),\n\tar_test_node:start_peer(peer1, B0),\n\tar_test_node:connect_to_peer(peer1).\n\nreset_node() ->\n\tar_blacklist_middleware:reset(),\n\tar_test_node:remote_call(peer1, ar_blacklist_middleware, reset, []),\n\tar_test_node:connect_to_peer(peer1),\n\n\tHeight = height(peer1),\n\t[{PrevH, _, _} | _] = wait_until_height(main, Height),\n\tar_test_node:disconnect_from(peer1),\n\tar_test_node:mine(peer1),\n\t[{H, _, _} | _] = ar_test_node:assert_wait_until_height(peer1, Height + 1),\n\tB = ar_test_node:remote_call(peer1, ar_block_cache, get, [block_cache, H]),\n\tPrevB = ar_test_node:remote_call(peer1, ar_block_cache, get, [block_cache, PrevH]),\n\t{ok, Config} = ar_test_node:remote_call(peer1, arweave_config, get_env, []),\n\tKey = ar_test_node:remote_call(peer1, ar_wallet, load_key, [Config#config.mining_addr]),\n\t{Key, B, PrevB}.\n\nsetup_all_post_2_7() ->\n\t{Setup, Cleanup} = ar_test_node:mock_functions([\n\t\t{ar_fork, height_2_7, fun() -> 0 end}\n\t\t]),\n\tFunctions = Setup(),\n\tstart_node(),\n\t{Cleanup, Functions}.\n\nsetup_all_post_2_8() ->\n\t{Setup, Cleanup} = ar_test_node:mock_functions([\n\t\t{ar_fork, height_2_8, fun() -> 0 end}\n\t\t]),\n\tFunctions = Setup(),\n\tstart_node(),\n\t{Cleanup, Functions}.\n\ncleanup_all_post_fork({Cleanup, Functions}) ->\n\tCleanup(Functions).\n\ninstantiator(TestFun) ->\n\tfun (Fixture) -> {timeout, 120, {with, Fixture, [TestFun]}} end.\n\npost_2_7_test_() ->\n\t{setup, fun setup_all_post_2_7/0, fun cleanup_all_post_fork/1,\n\t\t{foreach, fun reset_node/0, [\n\t\t\tinstantiator(fun test_reject_block_invalid_miner_reward/1),\n\t\t\tinstantiator(fun test_reject_block_invalid_denomination/1),\n\t\t\tinstantiator(fun test_reject_block_invalid_kryder_plus_rate_multiplier/1),\n\t\t\tinstantiator(fun test_reject_block_invalid_kryder_plus_rate_multiplier_latch/1),\n\t\t\tinstantiator(fun test_reject_block_invalid_endowment_pool/1),\n\t\t\tinstantiator(fun test_reject_block_invalid_debt_supply/1),\n\t\t\tinstantiator(fun test_reject_block_invalid_wallet_list/1),\n\t\t\tinstantiator(fun test_mitm_poa_chunk_tamper_warn/1),\n\t\t\tinstantiator(fun test_mitm_poa2_chunk_tamper_warn/1),\n\t\t\tinstantiator(fun test_reject_block_invalid_proof_size/1),\n\t\t\tinstantiator(fun test_cached_poa/1)\n\t\t]}\n\t}.\n\npost_2_8_test_() ->\n\t{setup, fun setup_all_post_2_8/0, fun cleanup_all_post_fork/1,\n\t\t{foreach, fun reset_node/0, [\n\t\t\tinstantiator(fun test_reject_block_invalid_packing_difficulty/1),\n\t\t\tinstantiator(fun test_reject_block_invalid_replica_format/1),\n\t\t\tinstantiator(fun test_reject_block_invalid_denomination/1),\n\t\t\tinstantiator(fun test_reject_block_invalid_kryder_plus_rate_multiplier/1),\n\t\t\tinstantiator(fun test_reject_block_invalid_kryder_plus_rate_multiplier_latch/1),\n\t\t\tinstantiator(fun test_reject_block_invalid_endowment_pool/1),\n\t\t\tinstantiator(fun test_reject_block_invalid_debt_supply/1),\n\t\t\tinstantiator(fun test_reject_block_invalid_wallet_list/1),\n\t\t\tinstantiator(fun 
test_mitm_poa_chunk_tamper_warn/1),\n\t\t\tinstantiator(fun test_mitm_poa2_chunk_tamper_warn/1),\n\t\t\tinstantiator(fun test_reject_block_invalid_proof_size/1),\n\t\t\tinstantiator(fun test_cached_poa/1)\n\t\t]}\n\t}.\n\n%% ------------------------------------------------------------------------------------------\n%% post_2_7_test_\n%% ------------------------------------------------------------------------------------------\n\ntest_mitm_poa_chunk_tamper_warn({_Key, B, _PrevB}) ->\n\t%% Verify that, in 2.7, we don't ban a peer if the poa.chunk is tampered with.\n\tok = ar_events:subscribe(block),\n\tassert_not_banned(ar_test_node:peer_ip(main)),\n\tB2 = B#block{ poa = #poa{ chunk = crypto:strong_rand_bytes(?DATA_CHUNK_SIZE) } },\n\tpost_block(B2, invalid_first_chunk),\n\tassert_not_banned(ar_test_node:peer_ip(main)).\n\ntest_mitm_poa2_chunk_tamper_warn({Key, B, PrevB}) ->\n\t%% Verify that, in 2.7, we don't ban a peer if the poa2.chunk is tampered with.\n\t%% For this test we have to re-sign the block with the new poa2.chunk - but that's just a\n\t%% test limitation. In the wild the poa2 chunk could be modified without resigning.\n\tok = ar_events:subscribe(block),\n\tassert_not_banned(ar_test_node:peer_ip(main)),\n\tB2 = sign_block(B#block{ \n\t\t\trecall_byte2 = 100000000,\n\t\t\tpoa2 = #poa{ chunk = crypto:strong_rand_bytes(?DATA_CHUNK_SIZE) } }, PrevB, Key),\n\tpost_block(B2, invalid_second_chunk),\n\tassert_not_banned(ar_test_node:peer_ip(main)).\n\ntest_reject_block_invalid_proof_size({Key, B, PrevB}) ->\n\tok = ar_events:subscribe(block),\n\tMaxDataPathSize = 349504,\n\tMaxTxPathSize = 2176,\n\tpost_block(sign_block(\n\t\tB#block{\n\t\t\tpoa = #poa{\n\t\t\t\ttx_path = crypto:strong_rand_bytes(MaxTxPathSize + 1)\n\t\t\t}\n\t\t}, PrevB, Key),\n\t\tinvalid_proof_size),\n\tpost_block(sign_block(\n\t\tB#block{\n\t\t\tpoa = #poa{\n\t\t\t\tdata_path = crypto:strong_rand_bytes(MaxDataPathSize + 1)\n\t\t\t}\n\t\t}, PrevB, Key),\n\t\tinvalid_proof_size),\n\tpost_block(sign_block(\n\t\tB#block{\n\t\t\tpoa = #poa{\n\t\t\t\tchunk = crypto:strong_rand_bytes(?DATA_CHUNK_SIZE+1)\n\t\t\t}\n\t\t}, PrevB, Key),\n\t\tinvalid_proof_size),\n\tpost_block(sign_block(\n\t\tB#block{\n\t\t\tpoa2 = #poa{\n\t\t\t\ttx_path = crypto:strong_rand_bytes(MaxTxPathSize + 1)\n\t\t\t}\n\t\t}, PrevB, Key),\n\t\tinvalid_proof_size),\n\tpost_block(sign_block(\n\t\tB#block{\n\t\t\tpoa2 = #poa{\n\t\t\t\tdata_path = crypto:strong_rand_bytes(MaxDataPathSize + 1)\n\t\t\t}\n\t\t}, PrevB, Key),\n\t\tinvalid_proof_size),\n\tpost_block(sign_block(\n\t\tB#block{\n\t\t\tpoa2 = #poa{\n\t\t\t\tchunk = crypto:strong_rand_bytes(?DATA_CHUNK_SIZE+1)\n\t\t\t}\n\t\t}, PrevB, Key),\n\t\tinvalid_proof_size).\n\ntest_cached_poa({Key, B, PrevB}) ->\n\t%% Verify that comparing against a cached poa works\n\tok = ar_events:subscribe(block),\n\tB2 = sign_block(B, PrevB, Key),\n\tpost_block(B2, valid),\n\tB3 = sign_block(B, PrevB, Key),\n\tpost_block(B3, valid).\n\n%% The banning process is asynchronous now so we may have to wait a little until\n%% the peer gets banned.\nassert_banned(Peer) ->\n\tcase ar_util:do_until(\n\t\tfun() ->\n\t\t\tbanned == ar_blacklist_middleware:is_peer_banned(Peer)\n\t\tend,\n\t\t200,\n\t\t2000\n\t) of\n\t\ttrue ->\n\t\t\ttrue;\n\t\tfalse ->\n\t\t\t?assert(false, \"Expected the peer to be banned but the peer was not banned.\")\n\tend.\n\n%% The banning process is asynchronous now so we should wait a little to gain some\n%% confidence the peer is not banned.\nassert_not_banned(Peer) 
->\n\ttimer:sleep(2000),\n\t?assertEqual(not_banned, ar_blacklist_middleware:is_peer_banned(Peer)).\n\n%% ------------------------------------------------------------------------------------------\n%% post_2_6_test_\n%% ------------------------------------------------------------------------------------------\n\ntest_reject_block_invalid_miner_reward({Key, B, PrevB}) ->\n\tok = ar_events:subscribe(block),\n\tB2 = sign_block(B#block{ reward = 0 }, PrevB, Key),\n\tpost_block(B2, invalid_reward_history_hash),\n\tHashRate = ar_difficulty:get_hash_rate_fixed_ratio(B2),\n\tRewardHistory = tl(B2#block.reward_history),\n\tAddr = B2#block.reward_addr,\n\tB3 = sign_block(B2#block{\n\t\t\treward_history_hash = ar_rewards:reward_history_hash(\n\t\t\t\t\tB2#block.height,\n\t\t\t\t\tPrevB#block.reward_history_hash,\n\t\t\t\t\t[{Addr, HashRate, 0, 1} | RewardHistory])\n\t\t\t}, PrevB, Key),\n\tpost_block(B3, invalid_miner_reward).\n\ntest_reject_block_invalid_denomination({Key, B, PrevB}) ->\n\tok = ar_events:subscribe(block),\n\tB2 = sign_block(B#block{ denomination = 0 }, PrevB, Key),\n\tpost_block(B2, invalid_denomination).\n\ntest_reject_block_invalid_kryder_plus_rate_multiplier({Key, B, PrevB}) ->\n\tok = ar_events:subscribe(block),\n\tB2 = sign_block(B#block{ kryder_plus_rate_multiplier = 0 }, PrevB, Key),\n\tpost_block(B2, invalid_kryder_plus_rate_multiplier).\n\ntest_reject_block_invalid_kryder_plus_rate_multiplier_latch({Key, B, PrevB}) ->\n\tok = ar_events:subscribe(block),\n\tB2 = sign_block(B#block{ kryder_plus_rate_multiplier_latch = 2 }, PrevB, Key),\n\tpost_block(B2, invalid_kryder_plus_rate_multiplier_latch).\n\ntest_reject_block_invalid_endowment_pool({Key, B, PrevB}) ->\n\tok = ar_events:subscribe(block),\n\tB2 = sign_block(B#block{ reward_pool = 2 }, PrevB, Key),\n\tpost_block(B2, invalid_reward_pool).\n\ntest_reject_block_invalid_debt_supply({Key, B, PrevB}) ->\n\tok = ar_events:subscribe(block),\n\tB2 = sign_block(B#block{ debt_supply = 100000000 }, PrevB, Key),\n\tpost_block(B2, invalid_debt_supply).\n\ntest_reject_block_invalid_wallet_list({Key, B, PrevB}) ->\n\tok = ar_events:subscribe(block),\n\tB2 = sign_block(B#block{ wallet_list = crypto:strong_rand_bytes(32) }, PrevB, Key),\n\tpost_block(B2, invalid_wallet_list).\n\n%% ------------------------------------------------------------------------------------------\n%% post_2_8_test_\n%% ------------------------------------------------------------------------------------------\n\ntest_reject_block_invalid_packing_difficulty({Key, B, PrevB}) ->\n\tok = ar_events:subscribe(block),\n\tassert_not_banned(ar_test_node:peer_ip(main)),\n\tB2 = sign_block(B#block{ unpacked_chunk_hash = <<>>,\n\t\t\tpacking_difficulty = 33 }, PrevB, Key),\n\tpost_block(B2, invalid_first_unpacked_chunk),\n\tassert_not_banned(ar_test_node:peer_ip(main)),\n\tC = crypto:strong_rand_bytes(262144),\n\tPackedC = crypto:strong_rand_bytes(262144 div 32),\n\tUH = crypto:hash(sha256, C),\n\tH = crypto:hash(sha256, PackedC),\n\tPoA = B#block.poa,\n\tB3 = sign_block(B#block{ packing_difficulty = 33,\n\t\tpoa = PoA#poa{ unpacked_chunk = C, chunk = PackedC }, unpacked_chunk_hash = UH,\n\t\t\t\tchunk_hash = H }, PrevB, Key),\n\tpost_block(B3, invalid_packing_difficulty),\n\tassert_banned(ar_test_node:peer_ip(main)).\n\ntest_reject_block_invalid_replica_format({Key, B, PrevB}) ->\n\tok = ar_events:subscribe(block),\n\tassert_not_banned(ar_test_node:peer_ip(main)),\n\tC = crypto:strong_rand_bytes(262144),\n\tPackedC = crypto:strong_rand_bytes(262144 div 32),\n\tUH = 
crypto:hash(sha256, C),\n\tH = crypto:hash(sha256, PackedC),\n\tPoA = B#block.poa,\n\tB2 = sign_block(B#block{ replica_format = 2,\n\t\tpoa = PoA#poa{ unpacked_chunk = C, chunk = PackedC }, unpacked_chunk_hash = UH,\n\t\t\t\tchunk_hash = H }, PrevB, Key),\n\tpost_block(B2, invalid_packing_difficulty),\n\tassert_banned(ar_test_node:peer_ip(main)).\n\n%% ------------------------------------------------------------------------------------------\n%% Other tests\n%% ------------------------------------------------------------------------------------------\n\nadd_external_block_with_invalid_timestamp_test_() ->\n\tar_test_node:test_with_mocked_functions([{ar_fork, height_2_7, fun() -> 0 end}],\n\t\tfun test_add_external_block_with_invalid_timestamp/0).\n\ntest_add_external_block_with_invalid_timestamp() ->\n\tstart_node(),\n\t{Key, B, PrevB} = reset_node(),\n\n\t%% Expect the timestamp too far in the future to be rejected.\n\tFutureTimestampTolerance = ?JOIN_CLOCK_TOLERANCE * 2 + ?CLOCK_DRIFT_MAX,\n\tTooFarFutureTimestamp = os:system_time(second) + FutureTimestampTolerance + 3,\n\tB2 = sign_block(B#block{ timestamp = TooFarFutureTimestamp }, PrevB, Key),\n\tok = ar_events:subscribe(block),\n\tpost_block(B2, invalid_timestamp),\n\t%% Expect the timestamp from the future within the tolerance interval to be accepted.\n\tOkFutureTimestamp = os:system_time(second) + FutureTimestampTolerance - 3,\n\tB3 = sign_block(B#block{ timestamp = OkFutureTimestamp }, PrevB, Key),\n\tpost_block(B3, valid),\n\t%% Expect the timestamp too far behind the previous timestamp to be rejected.\n\tPastTimestampTolerance = lists:sum([?JOIN_CLOCK_TOLERANCE * 2, ?CLOCK_DRIFT_MAX]),\n\tTooFarPastTimestamp = PrevB#block.timestamp - PastTimestampTolerance - 1,\n\tB4 = sign_block(B#block{ timestamp = TooFarPastTimestamp }, PrevB, Key),\n\tpost_block(B4, invalid_timestamp),\n\tOkPastTimestamp = PrevB#block.timestamp - PastTimestampTolerance + 1,\n\tB5 = sign_block(B#block{ timestamp = OkPastTimestamp }, PrevB, Key),\n\tpost_block(B5, valid).\n\nrejects_invalid_blocks_test_() ->\n\t{timeout, 120, fun test_rejects_invalid_blocks/0}.\n\ntest_rejects_invalid_blocks() ->\n\t[B0] = ar_weave:init([], ar_retarget:switch_to_linear_diff(2)),\n\tar_test_node:start(B0),\n\tar_test_node:start_peer(peer1, B0),\n\tar_test_node:disconnect_from(peer1),\n\tar_test_node:mine(peer1),\n\tBI = ar_test_node:assert_wait_until_height(peer1, 1),\n\tB1 = ar_test_node:remote_call(peer1, ar_storage, read_block, [hd(BI)]),\n\t%% Try to post an invalid block.\n\tInvalidH = crypto:strong_rand_bytes(48),\n\tok = ar_events:subscribe(block),\n\tpost_block(B1#block{ indep_hash = InvalidH }, invalid_hash),\n\t%% Verify the IP address of self is NOT banned in ar_blacklist_middleware.\n\tInvalidH2 = crypto:strong_rand_bytes(48),\n\tpost_block(B1#block{ indep_hash = InvalidH2 }, invalid_hash),\n\t%% The valid block with the ID from the failed attempt can still go through.\n\tpost_block(B1, valid),\n\t%% Try to post the same block again.\n\tPeer = ar_test_node:peer_ip(main),\n\t?assertMatch({ok, {{<<\"208\">>, _}, _, _, _, _}}, send_new_block(Peer, B1)),\n\t%% Correct hash, but invalid signature.\n\tB2Preimage = B1#block{ signature = <<>> },\n\tB2 = B2Preimage#block{ indep_hash = ar_block:indep_hash(B2Preimage) },\n\tpost_block(B2, invalid_signature),\n\t%% Nonce limiter output too far in the future.\n\tInfo1 = B1#block.nonce_limiter_info,\n\t{ok, Config} = ar_test_node:remote_call(peer1, arweave_config, get_env, []),\n\tKey = ar_test_node:remote_call(peer1, 
ar_wallet, load_key, [Config#config.mining_addr]),\n\tB3 = sign_block(B1#block{\n\t\t\t%% Change the solution hash so that the validator does not go down\n\t\t\t%% the comparing the resigned solution with the cached solution path.\n\t\t\thash = crypto:strong_rand_bytes(32),\n\t\t\tnonce_limiter_info = Info1#nonce_limiter_info{\n\t\t\t\tglobal_step_number = 100000 } }, B0, Key),\n\tpost_block(B3, invalid_nonce_limiter_global_step_number),\n\t%% Nonce limiter output lower than that of the previous block.\n\tB4 = sign_block(B1#block{ previous_block = B1#block.indep_hash,\n\t\t\tprevious_cumulative_diff = B1#block.cumulative_diff,\n\t\t\t%% Change the solution hash so that the validator does not go down\n\t\t\t%% the comparing the resigned solution with the cached solution path.\n\t\t\thash = crypto:strong_rand_bytes(32),\n\t\t\theight = B1#block.height + 1,\n\t\t\tnonce_limiter_info = Info1#nonce_limiter_info{ global_step_number = 1 } },\n\t\t\tB1, Key),\n\tpost_block(B4, invalid_nonce_limiter_global_step_number),\n\tB1SolutionH = B1#block.hash,\n\tB1SolutionNum = binary:decode_unsigned(B1SolutionH),\n\tB5 = sign_block(B1#block{ previous_block = B1#block.indep_hash,\n\t\t\tprevious_cumulative_diff = B1#block.cumulative_diff,\n\t\t\theight = B1#block.height + 1,\n\t\t\thash = binary:encode_unsigned(B1SolutionNum - 1) }, B1, Key),\n\tpost_block(B5, invalid_nonce_limiter_global_step_number),\n\t%% Correct hash, but invalid PoW.\n\tInvalidKey = ar_wallet:new(),\n\tInvalidAddr = ar_wallet:to_address(InvalidKey),\n\tB6 = sign_block(B1#block{ reward_addr = InvalidAddr,\n\t\t\t%% Change the solution hash so that the validator does not go down\n\t\t\t%% the comparing the resigned solution with the cached solution path.\n\t\t\thash = crypto:strong_rand_bytes(32),\n\t\t\treward_key = element(2, InvalidKey) }, B0, InvalidKey),\n\ttimer:sleep(100 * 2), % ?THROTTLE_BY_IP_INTERVAL_MS * 2\n\tpost_block(B6, [invalid_hash_preimage, invalid_pow]),\n\tassert_banned(Peer),\n\t?assertMatch({ok, {{<<\"403\">>, _}, _,\n\t\t\t<<\"IP address blocked due to previous request.\">>, _, _}},\n\t\t\tsend_new_block(Peer, B1#block{ indep_hash = crypto:strong_rand_bytes(48) })),\n\tar_blacklist_middleware:reset(),\n\tB7 = sign_block(B1#block{\n\t\t\t%% Change the solution hash so that the validator does not go down\n\t\t\t%% the comparing the resigned solution with the cached solution path.\n\t\t\t%% Also, here it changes the block hash (the previous one would be ignored),\n\t\t\t%% because the poa field does not explicitly go in there (the motivation is to\n\t\t\t%% have a \"quick pow\" step which is quick to validate and somewhat expensive to\n\t\t\t%% forge).\n\t\t\thash = crypto:strong_rand_bytes(32),\n\t\t\tpoa = (B1#block.poa)#poa{ chunk = <<\"a\">> } }, B0, Key),\n\tpost_block(B7, invalid_first_chunk),\n\tB7_1 = sign_block(B7#block{ chunk_hash = crypto:hash(sha256, <<\"a\">>) }, B0, Key),\n\tpost_block(B7_1, invalid_pow),\n\tassert_banned(Peer),\n\t?assertMatch({ok, {{<<\"403\">>, _}, _,\n\t\t\t<<\"IP address blocked due to previous request.\">>, _, _}},\n\t\t\tsend_new_block(Peer, B1#block{ indep_hash = crypto:strong_rand_bytes(48) })),\n\tar_blacklist_middleware:reset(),\n\tB8 = sign_block(B1#block{ last_retarget = 100000 }, B0, Key),\n\tpost_block(B8, invalid_last_retarget),\n\tassert_banned(Peer),\n\t?assertMatch({ok, {{<<\"403\">>, _}, _,\n\t\t\t<<\"IP address blocked due to previous request.\">>, _, _}},\n\t\t\tsend_new_block(Peer, B1#block{ indep_hash = crypto:strong_rand_bytes(48) 
})),\n\tar_blacklist_middleware:reset(),\n\tB9 = sign_block(B1#block{ diff = 100000 }, B0, Key),\n\tpost_block(B9, invalid_difficulty),\n\tassert_banned(Peer),\n\t?assertMatch({ok, {{<<\"403\">>, _}, _,\n\t\t\t<<\"IP address blocked due to previous request.\">>, _, _}},\n\t\t\tsend_new_block(Peer, B1#block{ indep_hash = crypto:strong_rand_bytes(48) })),\n\tar_blacklist_middleware:reset(),\n\tB10 = sign_block(B1#block{\n\t\t\t%% Change the solution hash so that the validator does not go down\n\t\t\t%% the comparing the resigned solution with the cached solution path.\n\t\t\thash = crypto:strong_rand_bytes(32),\n\t\t\tnonce = 100 }, B0, Key),\n\tpost_block(B10, invalid_nonce),\n\tassert_banned(Peer),\n\t?assertMatch({ok, {{<<\"403\">>, _}, _,\n\t\t\t<<\"IP address blocked due to previous request.\">>, _, _}},\n\t\t\tsend_new_block(Peer, B1#block{ indep_hash = crypto:strong_rand_bytes(48) })),\n\tar_blacklist_middleware:reset(),\n\tB11_1 = sign_block(B1#block{ partition_number = 1 }, B0, Key),\n\t%% We might get invalid_hash_preimage occasionally, because the partition number\n\t%% changes H0 which changes the solution hash which may happen to be lower than\n\t%% the difficulty.\n\tpost_block(B11_1, [invalid_resigned_solution_hash, invalid_hash_preimage]),\n\tB11 = sign_block(B1#block{\n\t\t\t%% Change the solution hash so that the validator does not go down\n\t\t\t%% the comparing the resigned solution with the cached solution path.\n\t\t\thash = crypto:strong_rand_bytes(32),\n\t\t\tpartition_number = 1 }, B0, Key),\n\tpost_block(B11, [invalid_partition_number, invalid_hash_preimage]),\n\tassert_banned(Peer),\n\t?assertMatch({ok, {{<<\"403\">>, _}, _,\n\t\t\t<<\"IP address blocked due to previous request.\">>, _, _}},\n\t\t\tsend_new_block(Peer, B1#block{ indep_hash = crypto:strong_rand_bytes(48) })),\n\tar_blacklist_middleware:reset(),\n\tB12 = sign_block(B1#block{\n\t\t\tnonce_limiter_info = (B1#block.nonce_limiter_info)#nonce_limiter_info{\n\t\t\t\t\tlast_step_checkpoints = [crypto:strong_rand_bytes(32)] } }, B0, Key),\n\t%% Reset the node to the genesis block.\n\tar_test_node:start(B0),\n\tok = ar_events:subscribe(block),\n\tpost_block(B12, invalid_nonce_limiter),\n\tassert_banned(Peer),\n\t?assertMatch({ok, {{<<\"403\">>, _}, _,\n\t\t\t<<\"IP address blocked due to previous request.\">>, _, _}},\n\t\t\tsend_new_block(Peer, B1#block{ indep_hash = crypto:strong_rand_bytes(48) })),\n\tar_blacklist_middleware:reset(),\n\tB13 = sign_block(B1#block{ poa = (B1#block.poa)#poa{ data_path = <<>> } }, B0, Key),\n\tpost_block(B13, invalid_poa),\n\tassert_banned(Peer),\n\t?assertMatch({ok, {{<<\"403\">>, _}, _,\n\t\t\t<<\"IP address blocked due to previous request.\">>, _, _}},\n\t\t\tsend_new_block(Peer, B1#block{ indep_hash = crypto:strong_rand_bytes(48) })),\n\tar_blacklist_middleware:reset(),\n\tB14 = sign_block(B1#block{\n\t\t\t%% Change the solution hash so that the validator does not go down\n\t\t\t%% the comparing the resigned solution with the cached solution path.\n\t\t\thash = crypto:strong_rand_bytes(32),\n\t\t\tnonce_limiter_info = (B1#block.nonce_limiter_info)#nonce_limiter_info{\n\t\t\t\t\tnext_seed = crypto:strong_rand_bytes(48) } }, B0, Key),\n\tpost_block(B14, invalid_nonce_limiter_seed_data),\n\tassert_banned(Peer),\n\t?assertMatch({ok, {{<<\"403\">>, _}, _,\n\t\t\t<<\"IP address blocked due to previous request.\">>, _, _}},\n\t\t\tsend_new_block(Peer, B1#block{ indep_hash = crypto:strong_rand_bytes(48) })),\n\tar_blacklist_middleware:reset(),\n\tB15 = 
sign_block(B1#block{\n\t\t\t%% Change the solution hash so that the validator does not go down\n\t\t\t%% the comparing the resigned solution with the cached solution path.\n\t\t\thash = crypto:strong_rand_bytes(32),\n\t\t\tnonce_limiter_info = (B1#block.nonce_limiter_info)#nonce_limiter_info{\n\t\t\t\t\tpartition_upper_bound = 10000000 } }, B0, Key),\n\tpost_block(B15, invalid_nonce_limiter_seed_data),\n\tassert_banned(Peer),\n\t?assertMatch({ok, {{<<\"403\">>, _}, _,\n\t\t\t<<\"IP address blocked due to previous request.\">>, _, _}},\n\t\t\tsend_new_block(Peer, B1#block{ indep_hash = crypto:strong_rand_bytes(48) })),\n\tar_blacklist_middleware:reset(),\n\tB16 = sign_block(B1#block{\n\t\t\t%% Change the solution hash so that the validator does not go down\n\t\t\t%% the comparing the resigned solution with the cached solution path.\n\t\t\thash = crypto:strong_rand_bytes(32),\n\t\t\tnonce_limiter_info = (B1#block.nonce_limiter_info)#nonce_limiter_info{\n\t\t\t\t\tnext_partition_upper_bound = 10000000 } }, B0, Key),\n\tpost_block(B16, invalid_nonce_limiter_seed_data),\n\tassert_banned(Peer),\n\t?assertMatch({ok, {{<<\"403\">>, _}, _,\n\t\t\t<<\"IP address blocked due to previous request.\">>, _, _}},\n\t\t\tsend_new_block(Peer, B1#block{ indep_hash = crypto:strong_rand_bytes(48) })),\n\tar_blacklist_middleware:reset().\n\nrejects_blocks_with_invalid_double_signing_proof_test_() ->\n\ttest_with_mocked_functions([{ar_fork, height_2_9, fun() -> 0 end}],\n\t\tfun test_reject_block_invalid_double_signing_proof/0).\n\nrejects_blocks_with_small_rsa_keys_test_() ->\n\t{timeout, 60, fun test_rejects_blocks_with_small_rsa_keys/0}.\n\ntest_rejects_blocks_with_small_rsa_keys() ->\n\t[B0] = ar_weave:init(),\n\tar_test_node:start(B0),\n\tok = ar_events:subscribe(block),\n\tar_test_node:mine(main),\n\tBI = ar_test_node:assert_wait_until_height(main, 1),\n\tB1 = ar_storage:read_block(hd(BI)),\n\tKey2 = ar_test_node:new_custom_size_rsa_wallet(512), % normal 512-byte key\n\tB2 = sign_block(B1, B0, Key2),\n\tpost_block(B2, invalid_resigned_solution_hash), % because reward_addr changed\n\tKey3 = ar_test_node:new_custom_size_rsa_wallet(66), % 66-byte key\n\tB3 = sign_block(B1, B0, Key3),\n\tpost_block(B3, invalid_signature),\n\tKey4 = ar_test_node:new_custom_size_rsa_wallet(511),\n\tB4 = sign_block(B1, B0, Key4),\n\tpost_block(B4, invalid_signature).\n\ntest_reject_block_invalid_double_signing_proof() ->\n\t[test_reject_block_invalid_double_signing_proof(KeyType)\n\t\t|| KeyType <- [?RSA_KEY_TYPE, ?ECDSA_KEY_TYPE]].\n\ntest_reject_block_invalid_double_signing_proof(KeyType) ->\n\t?debugFmt(\"KeyType: ~p~n\", [KeyType]),\n\tFullKey = ar_test_node:remote_call(peer1, ar_wallet, new_keyfile, [KeyType]),\n\tMiningAddr = ar_wallet:to_address(FullKey),\n\tKey0 = ar_wallet:new(),\n\tAddr0 = ar_wallet:to_address(Key0),\n\t[B0] = ar_weave:init([{Addr0, ?AR(1000), <<>>}], ar_retarget:switch_to_linear_diff(2)),\n\t?debugFmt(\"Genesis address: ~s, initial balance: ~B AR.~n\", [ar_util:encode(Addr0), 1000]),\n\tar_test_node:start(B0),\n\tar_test_node:start_peer(peer1, B0, MiningAddr),\n\tar_test_node:disconnect_from(peer1),\n\tok = ar_events:subscribe(block),\n\t{Priv, _} = Key = ar_test_node:remote_call(peer1, ar_wallet, load_key, [MiningAddr]),\n\tTX0 = ar_test_node:sign_tx(Key0, #{ target => ar_wallet:to_address(Key), quantity => ?AR(10) }),\n\tar_test_node:assert_post_tx_to_peer(peer1, TX0),\n\tar_test_node:assert_post_tx_to_peer(main, TX0),\n\tar_test_node:mine(peer1),\n\tBI = ar_test_node:assert_wait_until_height(peer1, 
1),\n\tB1 = ar_test_node:remote_call(peer1, ar_storage, read_block, [hd(BI)]),\n\tRandom512 = crypto:strong_rand_bytes(512),\n\tRandom64 = crypto:strong_rand_bytes(64),\n\tInvalidProof = {Random512, Random512, 2, 1, Random64, Random512, 3, 2, Random64},\n\tB2 = sign_block(B1#block{ double_signing_proof = InvalidProof }, B0, Key),\n\tpost_block(B2, invalid_double_signing_proof_same_signature),\n\tRandom512_2 = crypto:strong_rand_bytes(512),\n\tInvalidProof_2 = {Random512, Random512, 2, 1, Random64, Random512_2, 3, 2, Random64},\n\tB2_2 = sign_block(B1#block{ double_signing_proof = InvalidProof_2 }, B0, Key),\n\tpost_block(B2_2, invalid_double_signing_proof_cdiff),\n\tCDiff = B1#block.cumulative_diff,\n\tPrevCDiff = B0#block.cumulative_diff,\n\tSignedH = ar_block:generate_signed_hash(B1),\n\tPreimage1 = << (B0#block.hash)/binary, SignedH/binary >>,\n\tPreimage2 = << (B0#block.hash)/binary, (crypto:strong_rand_bytes(32))/binary >>,\n\tSignaturePreimage = ar_block:get_block_signature_preimage(CDiff, PrevCDiff,\n\t\t\tPreimage2, 0),\n\tSignature2 = ar_wallet:sign(Priv, SignaturePreimage),\n\t%% We cannot ban ourselves.\n\tInvalidProof2 = {element(3, Priv), B1#block.signature, CDiff, PrevCDiff, Preimage1,\n\t\t\tSignature2, CDiff, PrevCDiff, Preimage2},\n\tB3 = sign_block(B1#block{ double_signing_proof = InvalidProof2 }, B0, Key),\n\tpost_block(B3, invalid_double_signing_proof_same_address),\n\tar_test_node:mine(peer1),\n\tBI2 = ar_test_node:assert_wait_until_height(peer1, 2),\n\t{ok, MainConfig} = arweave_config:get_env(),\n\tKey2 = element(1, ar_wallet:load_key(MainConfig#config.mining_addr)),\n\tPreimage3 = << (B0#block.hash)/binary, (crypto:strong_rand_bytes(32))/binary >>,\n\tPreimage4 = << (B0#block.hash)/binary, (crypto:strong_rand_bytes(32))/binary >>,\n\tSignaturePreimage3 = ar_block:get_block_signature_preimage(CDiff, PrevCDiff,\n\t\t\tPreimage3, 0),\n\tSignaturePreimage4 = ar_block:get_block_signature_preimage(CDiff, PrevCDiff,\n\t\t\tPreimage4, 0),\n\tSignature3 = ar_wallet:sign(Key2, SignaturePreimage3),\n\tSignature4 = ar_wallet:sign(Key2, SignaturePreimage4),\n\t%% The account address is not in the reward history.\n\tInvalidProof3 = {element(3, Key2), Signature3, CDiff, PrevCDiff, Preimage3,\n\t\t\tSignature4, CDiff, PrevCDiff, Preimage4},\n\tB5 = sign_block(B1#block{ double_signing_proof = InvalidProof3 }, B0, Key),\n\tpost_block(B5, invalid_double_signing_proof_not_in_reward_history),\n\tB6 = ar_test_node:remote_call(peer1, ar_storage, read_block, [lists:nth(2, BI2)]),\n\tB7 = ar_test_node:remote_call(peer1, ar_storage, read_block, [hd(BI2)]),\n\t%% ECDSA signatures are deterministic - we add a new tag to get a new signature here.\n\tB7_2 = sign_block(B7#block{ tags = [<<\"new_tag\">>] }, B6, Key),\n\tpost_block(B6, valid),\n\tpost_block(B7, valid),\n\tpost_block(B7_2, valid),\n\t%% Wait until the node records conflicting proofs.\n\ttrue = ar_util:do_until(\n\t\tfun() ->\n\t\t\tmap_size(maps:get(double_signing_proofs,\n\t\t\t\t\tsys:get_state(ar_node_worker), #{})) > 0\n\t\tend,\n\t\t200,\n\t\t30000\n\t),\n\tar_test_node:connect_to_peer(peer1),\n\tar_test_node:mine(),\n\tBI3 = assert_wait_until_height(peer1, 3),\n\tB8 = ar_test_node:remote_call(peer1, ar_storage, read_block, [hd(BI3)]),\n\t?assertNotEqual(undefined, B8#block.double_signing_proof),\n\tRewardAddr = B8#block.reward_addr,\n\tBannedAddr = ar_wallet:to_address(Key),\n\tAccounts = ar_wallets:get(B8#block.wallet_list, [BannedAddr, RewardAddr]),\n\t?assertMatch(#{ BannedAddr := {_, _, 1, false}, RewardAddr := {_, _} }, 
Accounts),\n\t%% The banned address may still use their accounts for transfers/uploads.\n\tKey3 = ar_wallet:new(),\n\tTarget = ar_wallet:to_address(Key3),\n\tTX1 = ar_test_node:sign_tx(FullKey, #{ last_tx => <<>>, quantity => 1, target => Target }),\n\tTX2 = ar_test_node:sign_tx(FullKey, #{ last_tx => ar_test_node:get_tx_anchor(peer1), data => <<\"a\">> }),\n\tlists:foreach(fun(TX) -> ar_test_node:assert_post_tx_to_peer(main, TX) end, [TX1, TX2]),\n\tar_test_node:mine(),\n\tBI4 = assert_wait_until_height(peer1, 4),\n\tB9 = ar_test_node:remote_call(peer1, ar_storage, read_block, [hd(BI4)]),\n\tAccounts2 = ar_wallets:get(B9#block.wallet_list, [BannedAddr, Target]),\n\tTXID = TX2#tx.id,\n\t?assertEqual(2, length(B9#block.txs)),\n\t?assertMatch(#{ Target := {1, <<>>}, BannedAddr := {_, TXID, 1, false} }, Accounts2).\n\nsend_block2_test_() ->\n\ttest_with_mocked_functions([{ar_fork, height_2_6, fun() -> 0 end}],\n\t\tfun() -> test_send_block2() end).\n\ntest_send_block2() ->\n\t{_, Pub} = Wallet = ar_wallet:new(),\n\t[B0] = ar_weave:init([{ar_wallet:to_address(Pub), ?AR(100), <<>>}]),\n\tMainWallet = ar_wallet:new_keyfile(),\n\tMainAddress = ar_wallet:to_address(MainWallet),\n\tPeerWallet = ar_test_node:remote_call(peer1, ar_wallet, new_keyfile, []),\n\tPeerAddress = ar_wallet:to_address(PeerWallet),\n\tar_test_node:start(B0, MainAddress),\n\tar_test_node:start_peer(peer1, B0, PeerAddress),\n\tar_test_node:disconnect_from(peer1),\n\tTXs = [ar_test_node:sign_tx(Wallet, #{ last_tx => ar_test_node:get_tx_anchor(peer1) }) || _ <- lists:seq(1, 10)],\n\tlists:foreach(fun(TX) -> ar_test_node:assert_post_tx_to_peer(main, TX) end, TXs),\n\tar_test_node:mine(),\n\t[{H, _, _}, _] = wait_until_height(main, 1),\n\tB = ar_storage:read_block(H),\n\tTXs2 = sort_txs_by_block_order(TXs, B),\n\tEverySecondTX = element(2, lists:foldl(fun(TX, {N, Acc}) when N rem 2 /= 0 ->\n\t\t\t{N + 1, [TX | Acc]}; (_TX, {N, Acc}) -> {N + 1, Acc} end, {0, []}, TXs2)),\n\tlists:foreach(fun(TX) -> ar_test_node:assert_post_tx_to_peer(peer1, TX) end, EverySecondTX),\n\tAnnouncement = #block_announcement{ indep_hash = B#block.indep_hash,\n\t\t\tprevious_block = B0#block.indep_hash,\n\t\t\ttx_prefixes = [binary:part(TX#tx.id, 0, 8) || TX <- TXs2] },\n\t{ok, {{<<\"200\">>, _}, _, Body, _, _}} = ar_http:req(#{ method => post,\n\t\t\tpeer => ar_test_node:peer_ip(peer1), path => \"/block_announcement\",\n\t\t\tbody => ar_serialize:block_announcement_to_binary(Announcement) }),\n\tResponse = ar_serialize:binary_to_block_announcement_response(Body),\n\t?assertEqual({ok, #block_announcement_response{ missing_chunk = true,\n\t\t\tmissing_tx_indices = [0, 2, 4, 6, 8] }}, Response),\n\tAnnouncement2 = Announcement#block_announcement{ recall_byte = 0 },\n\t{ok, {{<<\"200\">>, _}, _, Body2, _, _}} = ar_http:req(#{ method => post,\n\t\t\tpeer => ar_test_node:peer_ip(peer1), path => \"/block_announcement\",\n\t\t\tbody => ar_serialize:block_announcement_to_binary(Announcement2) }),\n\tResponse2 = ar_serialize:binary_to_block_announcement_response(Body2),\n\t%% We always report missing chunk currently.\n\t?assertEqual({ok, #block_announcement_response{ missing_chunk = true,\n\t\t\tmissing_tx_indices = [0, 2, 4, 6, 8] }}, Response2),\n\tAnnouncement3 = Announcement#block_announcement{ recall_byte = 100000000000000 },\n\t{ok, {{<<\"200\">>, _}, _, Body, _, _}} = ar_http:req(#{ method => post,\n\t\t\tpeer => ar_test_node:peer_ip(peer1), path => \"/block_announcement\",\n\t\t\tbody => ar_serialize:block_announcement_to_binary(Announcement3) }),\n\t{ok, 
{{<<\"418\">>, _}, _, Body3, _, _}} = ar_http:req(#{ method => post,\n\t\t\tpeer => ar_test_node:peer_ip(peer1), path => \"/block2\",\n\t\t\tbody => ar_serialize:block_to_binary(B) }),\n\t?assertEqual(iolist_to_binary(lists:foldl(fun(#tx{ id = TXID }, Acc) -> [TXID | Acc] end,\n\t\t\t[], TXs2 -- EverySecondTX)), Body3),\n\tB2 = B#block{ txs = [lists:nth(1, TXs2) | tl(B#block.txs)] },\n\t{ok, {{<<\"418\">>, _}, _, Body4, _, _}} = ar_http:req(#{ method => post,\n\t\t\tpeer => ar_test_node:peer_ip(peer1), path => \"/block2\",\n\t\t\tbody => ar_serialize:block_to_binary(B2) }),\n\t?assertEqual(iolist_to_binary(lists:foldl(fun(#tx{ id = TXID }, Acc) -> [TXID | Acc] end,\n\t\t\t[], (TXs2 -- EverySecondTX) -- [lists:nth(1, TXs2)])), Body4),\n\tTXs3 = [ar_test_node:sign_tx(main, Wallet, #{ last_tx => ar_test_node:get_tx_anchor(peer1),\n\t\t\tdata => crypto:strong_rand_bytes(10 * 1024) }) || _ <- lists:seq(1, 10)],\n\tlists:foreach(fun(TX) -> ar_test_node:assert_post_tx_to_peer(main, TX) end, TXs3),\n\tar_test_node:mine(),\n\t[{H2, _, _}, _, _] = wait_until_height(main, 2),\n\t{ok, {{<<\"412\">>, _}, _, <<>>, _, _}} = ar_http:req(#{ method => post,\n\t\t\tpeer => ar_test_node:peer_ip(peer1), path => \"/block_announcement\",\n\t\t\tbody => ar_serialize:block_announcement_to_binary(#block_announcement{\n\t\t\t\t\tindep_hash = H2, previous_block = B#block.indep_hash }) }),\n\tBTXs = ar_storage:read_tx(B#block.txs),\n\tB3 = B#block{ txs = BTXs },\n\t{ok, {{<<\"200\">>, _}, _, <<\"OK\">>, _, _}} = ar_http:req(#{ method => post,\n\t\t\tpeer => ar_test_node:peer_ip(peer1), path => \"/block2\",\n\t\t\tbody => ar_serialize:block_to_binary(B3) }),\n\t{ok, {{<<\"200\">>, _}, _, SerializedB, _, _}} = ar_http:req(#{ method => get,\n\t\t\tpeer => ar_test_node:peer_ip(main), path => \"/block2/height/1\" }),\n\t?assertEqual({ok, B}, ar_serialize:binary_to_block(SerializedB)),\n\tMap = element(2, lists:foldl(fun(TX, {N, M}) -> {N + 1, maps:put(TX#tx.id, N, M)} end,\n\t\t\t{0, #{}}, TXs2)),\n\t{ok, {{<<\"200\">>, _}, _, Serialized2B, _, _}} = ar_http:req(#{ method => get,\n\t\t\tpeer => ar_test_node:peer_ip(main), path => \"/block2/height/1\",\n\t\t\tbody => << 1:1, 0:(8 * 125 - 1) >> }),\n\t?assertEqual({ok, B#block{ txs = [case maps:get(TX#tx.id, Map) == 0 of true -> TX;\n\t\t\t_ -> TX#tx.id end || TX <- BTXs] }}, ar_serialize:binary_to_block(Serialized2B)),\n\t{ok, {{<<\"200\">>, _}, _, Serialized2B, _, _}} = ar_http:req(#{ method => get,\n\t\t\tpeer => ar_test_node:peer_ip(main), path => \"/block2/height/1\",\n\t\t\tbody => << 1:1, 0:7 >> }),\n\t{ok, {{<<\"200\">>, _}, _, Serialized3B, _, _}} = ar_http:req(#{ method => get,\n\t\t\tpeer => ar_test_node:peer_ip(main), path => \"/block2/height/1\",\n\t\t\tbody => << 0:1, 1:1, 0:1, 1:1, 0:4 >> }),\n\t?assertEqual({ok, B#block{ txs = [case lists:member(maps:get(TX#tx.id, Map), [1, 3]) of\n\t\t\ttrue -> TX; _ -> TX#tx.id end || TX <- BTXs] }},\n\t\t\t\t\tar_serialize:binary_to_block(Serialized3B)),\n\tB4 = read_block_when_stored(H2, true),\n\ttimer:sleep(500),\n\t{ok, {{<<\"200\">>, _}, _, <<\"OK\">>, _, _}} = ar_http:req(#{ method => post,\n\t\t\tpeer => ar_test_node:peer_ip(peer1), path => \"/block2\",\n\t\t\tbody => ar_serialize:block_to_binary(B4) }),\n\tar_test_node:connect_to_peer(peer1),\n\tlists:foreach(\n\t\tfun(Height) ->\n\t\t\tar_test_node:mine(),\n\t\t\tassert_wait_until_height(peer1, Height)\n\t\tend,\n\t\tlists:seq(3, 3 + ?SEARCH_SPACE_UPPER_BOUND_DEPTH)\n\t),\n\tB5 = ar_storage:read_block(ar_node:get_current_block_hash()),\n\t{ok, {{<<\"208\">>, _}, _, 
_, _, _}} = ar_http:req(#{ method => post,\n\t\t\tpeer => ar_test_node:peer_ip(peer1), path => \"/block_announcement\",\n\t\t\tbody => ar_serialize:block_announcement_to_binary(#block_announcement{\n\t\t\t\t\tindep_hash = B5#block.indep_hash,\n\t\t\t\t\tprevious_block = B5#block.previous_block }) }),\n\tar_test_node:disconnect_from(peer1),\n\tar_test_node:mine(),\n\t[_ | _] = wait_until_height(main, 3 + ?SEARCH_SPACE_UPPER_BOUND_DEPTH + 1),\n\tB6 = ar_storage:read_block(ar_node:get_current_block_hash()),\n\t{ok, {{<<\"200\">>, _}, _, Body5, _, _}} = ar_http:req(#{ method => post,\n\t\t\tpeer => ar_test_node:peer_ip(peer1), path => \"/block_announcement\",\n\t\t\tbody => ar_serialize:block_announcement_to_binary(#block_announcement{\n\t\t\t\t\tindep_hash = B6#block.indep_hash,\n\t\t\t\t\tprevious_block = B6#block.previous_block,\n\t\t\t\t\trecall_byte = 0 }) }),\n\t%% We always report missing chunk currently.\n\t?assertEqual({ok, #block_announcement_response{ missing_chunk = true,\n\t\t\tmissing_tx_indices = [] }},\n\t\t\tar_serialize:binary_to_block_announcement_response(Body5)),\n\t{ok, {{<<\"200\">>, _}, _, Body6, _, _}} = ar_http:req(#{ method => post,\n\t\t\tpeer => ar_test_node:peer_ip(peer1), path => \"/block_announcement\",\n\t\t\tbody => ar_serialize:block_announcement_to_binary(#block_announcement{\n\t\t\t\t\tindep_hash = B6#block.indep_hash,\n\t\t\t\t\tprevious_block = B6#block.previous_block,\n\t\t\t\t\trecall_byte = 1024 }) }),\n\t%% We always report missing chunk currently.\n\t?assertEqual({ok, #block_announcement_response{ missing_chunk = true,\n\t\t\tmissing_tx_indices = [] }},\n\t\t\tar_serialize:binary_to_block_announcement_response(Body6)).\n\nresigned_solution_test_() ->\n\ttest_with_mocked_functions([{ar_fork, height_2_6, fun() -> 0 end}],\n\t\tfun() -> test_resigned_solution() end).\n\ntest_resigned_solution() ->\n\t[B0] = ar_weave:init(),\n\tar_test_node:start(B0),\n\tar_test_node:start_peer(peer1, B0),\n\tar_test_node:connect_to_peer(peer1),\n\tar_test_node:mine(peer1),\n\twait_until_height(main, 1),\n\tar_test_node:disconnect_from(peer1),\n\tar_test_node:mine(peer1),\n\tB = ar_node:get_current_block(),\n\t{ok, Config} = ar_test_node:remote_call(peer1, arweave_config, get_env, []),\n\tKey = ar_test_node:remote_call(peer1, ar_wallet, load_key, [Config#config.mining_addr]),\n\tok = ar_events:subscribe(block),\n\tB2 = sign_block(B#block{ tags = [<<\"tag1\">>] }, B0, Key),\n\tpost_block(B2, [valid]),\n\tB3 = sign_block(B#block{ tags = [<<\"tag2\">>] }, B0, Key),\n\tpost_block(B3, [valid]),\n\tassert_wait_until_height(peer1, 2),\n\tB4 = ar_test_node:remote_call(peer1, ar_node, get_current_block, []),\n\t?assertEqual(B#block.indep_hash, B4#block.previous_block),\n\tB2H = B2#block.indep_hash,\n\t?assertNotEqual(B2#block.indep_hash, B4#block.previous_block),\n\tPrevStepNumber = ar_block:vdf_step_number(B),\n\tPrevInterval = PrevStepNumber div ar_nonce_limiter:get_reset_frequency(),\n\tInfo4 = B4#block.nonce_limiter_info,\n\tStepNumber = Info4#nonce_limiter_info.global_step_number,\n\tInterval = StepNumber div ar_nonce_limiter:get_reset_frequency(),\n\tB5 =\n\t\tcase Interval == PrevInterval of\n\t\t\ttrue ->\n\t\t\t\tsign_block(B4#block{\n\t\t\t\t\t\thash_list_merkle = ar_block:compute_hash_list_merkle(B2),\n\t\t\t\t\t\tprevious_block = B2H }, B2, Key);\n\t\t\tfalse ->\n\t\t\t\tsign_block(B4#block{ previous_block = B2H,\n\t\t\t\t\t\thash_list_merkle = ar_block:compute_hash_list_merkle(B2),\n\t\t\t\t\t\tnonce_limiter_info = Info4#nonce_limiter_info{ next_seed = B2H } 
},\n\t\t\t\t\t\tB2, Key)\n\t\tend,\n\tB5H = B5#block.indep_hash,\n\tpost_block(B5, [valid]),\n\t[{B5H, _, _}, {B2H, _, _}, _] = wait_until_height(main, 2),\n\tar_test_node:mine(),\n\t[{B6H, _, _}, _, _, _] = wait_until_height(main, 3),\n\tar_test_node:connect_to_peer(peer1),\n\t[{B6H, _, _}, {B5H, _, _}, {B2H, _, _}, _] = assert_wait_until_height(peer1, 3).\n\n%% ------------------------------------------------------------------------------------------\n%% Helper functions\n%% ------------------------------------------------------------------------------------------\n\nsort_txs_by_block_order(TXs, B) ->\n\tTXByID = lists:foldl(fun(TX, Acc) -> maps:put(tx_id(TX), TX, Acc) end, #{}, TXs),\n\tlists:foldr(fun(TX, Acc) -> [maps:get(tx_id(TX), TXByID) | Acc] end, [], B#block.txs).\n\ntx_id(#tx{ id = ID }) ->\n\tID;\ntx_id(ID) ->\n\tID.\n\nheight(Node) ->\n\tar_test_node:remote_call(Node, ar_node, get_height, []).\n"
  },
  {
    "path": "apps/arweave/test/ar_pricing_tests.erl",
    "content": "-module(ar_pricing_tests).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave/include/ar_pricing.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-define(DISTANT_FUTURE_BLOCK_HEIGHT, 262800000). %% 1,000 years from genesis\n\nget_price_per_gib_minute_test_() ->\n\t[\n\t\t{timeout, 30, fun test_price_per_gib_minute_pre_block_time_history/0},\n\t\tar_test_node:test_with_mocked_functions(\n\t\t\t[\n\t\t\t\t{ar_fork, height_2_7_2, fun() -> 10 end},\n\t\t\t\t{ar_pricing_transition, transition_start_2_6_8, fun() -> 5 end},\n\t\t\t\t{ar_pricing_transition, transition_start_2_7_2, fun() -> 15 end},\n\t\t\t\t{ar_pricing_transition, transition_length_2_6_8, fun() -> 20 end},\n\t\t\t\t{ar_pricing_transition, transition_length_2_7_2, fun() -> 40 end},\n\t\t\t\t%% This test uses specific price constants computed for this partition size.\n\t\t\t\t{ar_block, partition_size, fun() -> 2097152 end}\n\t\t\t],\n\t\t\tfun test_price_per_gib_minute_transition_phases/0),\n\t\tar_test_node:test_with_mocked_functions(\n\t\t\t[\n\t\t\t\t{ar_block, partition_size, fun() -> 2097152 end}\n\t\t\t],\n\t\t\tfun test_v2_price/0),\n\t\tar_test_node:test_with_mocked_functions(\n\t\t\t[\n\t\t\t\t{ar_block, partition_size, fun() -> 2097152 end},\n\t\t\t\t{ar_difficulty, poa1_diff_multiplier, fun(_) -> 2 end}\n\t\t\t],\n\t\t\tfun test_v2_price_with_poa1_diff_multiplier/0)\n\t].\n\n%% @doc This test verifies an edge case code path that probably shouldn't ever be triggered.\n%% ar_fork:height_2_7() and ar_fork:height_2_6_8() are 0\n%% ?BLOCK_TIME_HISTORY_BLOCKS is 3\n%% ?PRICE_2_6_8_TRANSITION_START is 2\n%% So when the price transition starts we don't have enough block time history to apply the\n%% new algorithm.\ntest_price_per_gib_minute_pre_block_time_history() ->\n\tStart = ar_pricing_transition:transition_start_2_6_8(),\n\tB = #block{\n\t\treward_history = reward_history(1, 1), block_time_history = block_time_history(1, 1) },\n\t?assertEqual(ar_pricing_transition:static_price(),\n\t\tar_pricing:get_price_per_gib_minute(Start, B),\n\t\t\"Before we have enough block time history\").\n\ntest_price_per_gib_minute_transition_phases() ->\n\t%% V2 price when calculated with:\n\t%% - reward_history(1, 1)\n\t%% - block_time_history(1, 1)\n\t%% - PoA1 difficulty multiplier of 1\n\tB = #block{\n\t\treward_history = reward_history(1, 1), block_time_history = block_time_history(1, 1) },\n\tV2Price = 61440,\n\t?assertEqual(V2Price,\n\t\tar_pricing:get_v2_price_per_gib_minute(10, B),\n\t\t\"V2 Price\"),\n\t%% Static price\n\t?assertEqual(ar_pricing_transition:static_price(),\n\t\tar_pricing:get_price_per_gib_minute(4, B),\n\t\t\"Static price\"),\n\t%% 2.6.8 start\n\t?assertEqual(ar_pricing_transition:static_price(),\n\t\tar_pricing:get_price_per_gib_minute(5, B),\n\t\t\"2.6.8 start price\"),\n\t%% 2.6.8 transition, pre-2.7.2\n\t%% 1/20 interpolation from 8162 to 61440\n\t?assertEqual(10825,\n\t\tar_pricing:get_price_per_gib_minute(6, B),\n\t\t\"2.6.8 transition, pre-2.7.2 activation price\"),\n\t%% 2.6.8 transition, post-2.7.2, pre-cap\n\t%% 5/20 interpolation from 8162 to 61440\n\t?assertEqual(21481,\n\t\tar_pricing:get_price_per_gib_minute(10, B),\n\t\t\"2.6.8 transition, at 2.7.2 activation price\"),\n\t%% 6/20 interpolation from 8162 to 61440\n\t?assertEqual(24145,\n\t\tar_pricing:get_price_per_gib_minute(11, B),\n\t\t\"2.6.8 transition, post 2.7.2 activation, pre-cap price\"),\n\t%% 2.6.8 transition, post-2.7.2, 
post-cap\n\t?assertEqual(30_000,\n\t\tar_pricing:get_price_per_gib_minute(14, B),\n\t\t\"2.6.8 transition, post 2.7.2 activation, post-cap price\"),\n\t%% 2.7.2 start\n\t?assertEqual(30_000,\n\t\tar_pricing:get_price_per_gib_minute(15, B),\n\t\t\"2.7.2 start price\"),\n\t%% 2.7.2 transition, before 2.6.8 end\n\t%% 5/40 interpolation from 30000 to 61440\n\t?assertEqual(33930,\n\t\tar_pricing:get_price_per_gib_minute(20, B),\n\t\t\"2.7.2 transition price, before 2.6.8 end\"),\n\t%% 2.7.2 transition, at 2.6.8 end\n\t%% 10/40 interpolation from 30000 to 61440\n\t?assertEqual(37860,\n\t\tar_pricing:get_price_per_gib_minute(25, B),\n\t\t\"2.7.2 transition price, at 2.6.8 end\"),\n\t%% 2.7.2 transition, after 2.6.8 end\n\t%% 11/40 interpolation from 30000 to 61440\n\t?assertEqual(38646,\n\t\tar_pricing:get_price_per_gib_minute(26, B),\n\t\t\"2.7.2 transition price, after 2.6.8 end\"),\n\t%% 2.7.2 end\n\t?assertEqual(V2Price,\n\t\tar_pricing:get_price_per_gib_minute(55, B),\n\t\t\"2.7.2 end price\"),\n\t%% v2 price\n\t?assertEqual(V2Price,\n\t\tar_pricing:get_price_per_gib_minute(56, B),\n\t\t\"After 2.7.2 transition end\").\n\ntest_v2_price() ->\n\tAtTransitionEnd = ar_pricing_transition:transition_start_2_7_2() + \n\t\tar_pricing_transition:transition_length(ar_pricing_transition:transition_start_2_7_2()),\n\n\t%% 2 chunks per partition when running tests\n\t%% If we get 1 solution per chunk (or 2 per partition), then we expect a price of 61440\n\t%%   that's our \"baseline\" for the purposes of this explanation\n\t%% AllOneChunkBaseline: 1x baseline\n\t%%   - 3 1-chunk blocks, 0 2-chunk blocks\n\t%%   => 3/3 solutions per chunk\n\t%%   => 2 per partition\n\t%% AllTwoChunkBaseline: 2x baseline\n\t%%   - 0 1-chunk blocks, 3 2-chunk blocks\n\t%%   => max, 2 solutions per chunk\n\t%%   => 4 per partition\n\t%% MixedChunkBaseline: 1.5x baseline\n\t%%   - 2 1-chunk blocks, 1 2-chunk blocks\n\t%%   => 3/2 solutions per chunk\n\t%%   => 3 per partition\n\tdo_price_per_gib_minute_post_transition(AtTransitionEnd, 61440, 122880, 92160),\n\tBeyondTransition = AtTransitionEnd + 1000,\n\tdo_price_per_gib_minute_post_transition(BeyondTransition, 61440, 122880, 92160).\n\ntest_v2_price_with_poa1_diff_multiplier() ->\n\tAtTransitionEnd = ar_pricing_transition:transition_start_2_7_2() + \n\t\tar_pricing_transition:transition_length(ar_pricing_transition:transition_start_2_7_2()),\n\n\t%% 2 chunks per partition when running tests\n\t%% If we get 1 solution per chunk (or 2 per partition), then we expect a price of 61440\n\t%%   that's our \"baseline\" for the purposes of this explanation\n\t%%\n\t%% Note: in these tests the poa1 difficulty multiplier is set to 2, which changes the \n\t%%       number of solutions per chunk.\n\t%%\n\t%% AllOneChunkBaseline: 0.5x baseline\n\t%%   - 3 1-chunk blocks, 0 2-chunk blocks\n\t%%   => 3/(3*2) solutions per chunk\n\t%%   => 1 per partition\n\t%% AllTwoChunkBaseline: 1.5x baseline\n\t%%   - 0 1-chunk blocks, 3 2-chunk blocks\n\t%%   => max, (2+1)/2 solutions per chunk\n\t%%   => 3 per partition\n\t%% MixedChunkBaseline: 0.5x baseline\n\t%%   - 2 1-chunk blocks, 1 2-chunk blocks\n\t%%   => 3/4 solutions per chunk \n\t%%   => 1.5 per partition\n\t%%   => Since we deal in integers, that gets rounded to 1 per partition\n\tdo_price_per_gib_minute_post_transition(AtTransitionEnd, 30720, 92160, 30720),\n\tBeyondTransition = AtTransitionEnd + 1000,\n\tdo_price_per_gib_minute_post_transition(BeyondTransition, 30720, 92160, 
30720).\n\ndo_price_per_gib_minute_post_transition(Height,\n\t\tAllOneChunkBaseline, AllTwoChunkBaseline, MixedChunkBaseline) ->\n\t\n\tPoA1DiffMultiplier = ar_difficulty:poa1_diff_multiplier(Height),\n\tB0 = #block{\n\t\treward_history = reward_history(1, 1), block_time_history = block_time_history(1, 1) },\n\t?assertEqual(AllOneChunkBaseline,\n\t\tar_pricing:get_price_per_gib_minute(Height, B0),\n\t\tio_lib:format(\n\t\t\t\"hash_rate: low, reward: low, vdf: perfect, chunks: all_one, poa1_diff: ~B\",\n\t\t\t[PoA1DiffMultiplier])),\n\tB1 = #block{\n\t\treward_history = reward_history(1, 10), block_time_history = block_time_history(1, 1) },\n\t?assertEqual(AllOneChunkBaseline * 10,\n\t\tar_pricing:get_price_per_gib_minute(Height, B1),\n\t\tio_lib:format(\n\t\t\t\"hash_rate: low, reward: high, vdf: perfect, chunks: all_one, poa1_diff: ~B\",\n\t\t\t[PoA1DiffMultiplier])),\n\tB2 = #block{\n\t\treward_history = reward_history(10, 1), block_time_history = block_time_history(1, 1) },\n\t?assertEqual(AllOneChunkBaseline div 10,\n\t\tar_pricing:get_price_per_gib_minute(Height, B2),\n\t\tio_lib:format(\n\t\t\t\"hash_rate: high, reward: low, vdf: perfect, chunks: all_one, poa1_diff: ~B\",\n\t\t\t[PoA1DiffMultiplier])),\n\tB3 = #block{\n\t\treward_history = reward_history(10, 10), block_time_history = block_time_history(1, 1) },\n\t?assertEqual(AllOneChunkBaseline,\n\t\tar_pricing:get_price_per_gib_minute(Height, B3),\n\t\tio_lib:format(\n\t\t\t\"hash_rate: high, reward: high, vdf: perfect, chunks: all_one, poa1_diff: ~B\",\n\t\t\t[PoA1DiffMultiplier])),\n\tB4 = #block{\n\t\treward_history = reward_history(1, 1), block_time_history = block_time_history(10, 1) },\n\t?assertEqual(AllOneChunkBaseline div 10,\n\t\tar_pricing:get_price_per_gib_minute(Height, B4),\n\t\tio_lib:format(\n\t\t\t\"hash_rate: low, reward: low, vdf: slow, chunks: all_one, poa1_diff: ~B\",\n\t\t\t[PoA1DiffMultiplier])),\n\tB5 = #block{\n\t\treward_history = reward_history(1, 1), block_time_history = block_time_history(1, 10) },\n\t?assertEqual(AllOneChunkBaseline * 10,\n\t\tar_pricing:get_price_per_gib_minute(Height, B5),\n\t\tio_lib:format(\n\t\t\t\"hash_rate: low, reward: low, vdf: fast, chunks: all_one, poa1_diff: ~B\",\n\t\t\t[PoA1DiffMultiplier])),\n\tB6 = #block{\n\t\treward_history = reward_history(1, 1), block_time_history = all_two_chunks() },\n\t?assertEqual(AllTwoChunkBaseline,\n\t\tar_pricing:get_price_per_gib_minute(Height, B6),\n\t\tio_lib:format(\n\t\t\t\"hash_rate: low, reward: low, vdf: perfect, chunks: all_two, poa1_diff: ~B\",\n\t\t\t[PoA1DiffMultiplier])),\n\tB7 = #block{\n\t\treward_history = reward_history(1, 1), block_time_history = mix_chunks() },\n\t?assertEqual(MixedChunkBaseline,\n\t\tar_pricing:get_price_per_gib_minute(Height, B7),\n\t\tio_lib:format(\n\t\t\t\"hash_rate: low, reward: low, vdf: perfect, chunks: mix, poa1_diff: ~B\",\n\t\t\t[PoA1DiffMultiplier])).\n\nreward_history(HashRate, Reward) ->\n\t[\n\t\t{crypto:strong_rand_bytes(32), 1*HashRate, 1*Reward, 0},\n\t\t{crypto:strong_rand_bytes(32), 2*HashRate, 2*Reward, 0},\n\t\t{crypto:strong_rand_bytes(32), 3*HashRate, 3*Reward, 0}\n\t].\n\nblock_time_history(BlockInterval, VDFSteps) ->\n\t[\n\t\t{BlockInterval*10, VDFSteps*10, 1},\n\t\t{BlockInterval*20, VDFSteps*20, 1},\n\t\t{BlockInterval*30, VDFSteps*30, 1}\n\t].\n\nall_two_chunks() ->\n\t[\n\t\t{10, 10, 2},\n\t\t{20, 20, 2},\n\t\t{30, 30, 2}\n\t].\n\nmix_chunks() ->\n\t[\n\t\t{10, 10, 1},\n\t\t{20, 20, 2},\n\t\t{30, 30, 1}\n\t].\n\nrecalculate_price_per_gib_minute_test_block() 
->\n\t#block{\n\t\theight = ?PRICE_ADJUSTMENT_FREQUENCY-1,\n\t\tdenomination = 1,\n\t\treward_history = [\n\t\t\t{<<>>, 10000, 10, 1}\n\t\t],\n\t\tblock_time_history = [\n\t\t\t{129, 135, 1}\n\t\t],\n\t\tprice_per_gib_minute = 10000,\n\t\tscheduled_price_per_gib_minute = 15000\n\t}.\n\nrecalculate_price_per_gib_minute_2_7_test_() ->\n\tar_test_node:test_with_mocked_functions(\n\t\t[{ar_fork, height_2_6, fun() -> -1 end},\n\t\t{ar_fork, height_2_7, fun() -> -1 end},\n\t\t{ar_fork, height_2_7_1, fun() -> infinity end}],\n\t\tfun() ->\n\t\t\tB = recalculate_price_per_gib_minute_test_block(),\n\t\t\t?assertEqual({15000, 8162}, ar_pricing:recalculate_price_per_gib_minute(B)),\n\t\t\tok\n\t\tend).\n\nrecalculate_price_per_gib_minute_2_7_1_ema_test_() ->\n\tar_test_node:test_with_mocked_functions(\n\t\t[{ar_fork, height_2_6, fun() -> -1 end},\n\t\t{ar_fork, height_2_7, fun() -> -1 end},\n\t\t{ar_fork, height_2_7_1, fun() -> -1 end}],\n\t\tfun() ->\n\t\t\tB = recalculate_price_per_gib_minute_test_block(),\n\t\t\t?assertEqual({15000, 14316}, ar_pricing:recalculate_price_per_gib_minute(B)),\n\t\t\tok\n\t\tend).\n\nauto_redenomination_and_endowment_debt_test_() ->\n\t%% Set some weird mocks to preserve the existing behavior of this test\n\tar_test_node:test_with_mocked_functions([\n\t\t\t{ar_pricing_transition, transition_start_2_7_2, fun() -> 3 end},\n\t\t\t{ar_pricing_transition, transition_length_2_7_2, fun() -> 1 end}\n\t\t],\n\t\tfun test_auto_redenomination_and_endowment_debt/0).\n\ntest_auto_redenomination_and_endowment_debt() ->\n\tKey1 = {_, Pub1} = ar_wallet:new(),\n\t{_, Pub2} = ar_wallet:new(),\n\tKey3 = {_, Pub3} = ar_wallet:new(),\n\t[B0] = ar_weave:init([\n\t\t{ar_wallet:to_address(Pub1), 20000000000000, <<>>},\n\t\t{ar_wallet:to_address(Pub2), 2000000000, <<>>},\n\t\t{ar_wallet:to_address(Pub3), ?AR(1000000000000000000), <<>>}\n\t]),\n\tar_test_node:start(B0),\n\tar_test_node:start_peer(peer1, B0),\n\tar_test_node:connect_to_peer(peer1),\n\t?assert(ar_pricing_transition:transition_start_2_6_8() == 2),\n\t?assert(ar_pricing_transition:transition_length_2_6_8() == 2),\n\t?assert(?PRICE_ADJUSTMENT_FREQUENCY == 2),\n\t?assert(?REDENOMINATION_DELAY_BLOCKS == 2),\n\t?assert(?REWARD_HISTORY_BLOCKS == 3),\n\t?assert(?DOUBLE_SIGNING_REWARD_SAMPLE_SIZE == 2),\n\t?assertEqual(262144 * 3, B0#block.weave_size),\n\t{ok, Config} = arweave_config:get_env(),\n\t{_, MinerPub} = ar_wallet:load_key(Config#config.mining_addr),\n\t?assertEqual(0, get_balance(MinerPub)),\n\t?assertEqual(0, get_reserved_balance(Config#config.mining_addr)),\n\tar_test_node:mine(),\n\tar_test_node:assert_wait_until_height(main, 1),\n\tar_test_node:assert_wait_until_height(peer1, 1),\n\t?assertEqual(0, get_balance(MinerPub)),\n\tB1 = ar_node:get_current_block(),\n\t?assertEqual(10, B1#block.reward),\n\t?assertEqual(10, get_reserved_balance(B1#block.reward_addr)),\n\t?assertEqual(1, ar_difficulty:get_hash_rate_fixed_ratio(B1)),\n\tMinerAddr = ar_wallet:to_address(MinerPub),\n\t?assertEqual([{MinerAddr, 1, 10, 1}, {B0#block.reward_addr, 1, 10, 1}],\n\t\t\tB1#block.reward_history),\n\t?assertEqual([{time_diff(B1, B0), vdf_diff(B1, B0), chunk_count(B1)}, {1, 1, 1}],\n\t\t\tB1#block.block_time_history),\n\t%% The price is recomputed every two blocks.\n\t?assertEqual(B1#block.price_per_gib_minute, B1#block.scheduled_price_per_gib_minute),\n\tar_test_node:mine(),\n\tar_test_node:assert_wait_until_height(main, 2),\n\tar_test_node:assert_wait_until_height(peer1, 2),\n\t?assertEqual(0, get_balance(MinerPub)),\n\tB2 = 
ar_node:get_current_block(),\n\t?assertEqual(2, B2#block.height),\n\t?assertEqual(10, B2#block.reward),\n\t?assertEqual(20, get_reserved_balance(B2#block.reward_addr)),\n\t?assertEqual(B1#block.price_per_gib_minute, B2#block.price_per_gib_minute),\n\t?assertEqual(B2#block.scheduled_price_per_gib_minute, B2#block.price_per_gib_minute),\n\t?assertEqual([{MinerAddr, 1, 10, 1}, {MinerAddr, 1, 10, 1},\n\t\t\t{B0#block.reward_addr, 1, 10, 1}], B2#block.reward_history),\n\t?assertEqual([{time_diff(B2, B1), vdf_diff(B2, B1), chunk_count(B2)},\n\t\t\t{time_diff(B1, B0), vdf_diff(B1, B0), chunk_count(B1)}, {1, 1, 1}],\n\t\t\tB2#block.block_time_history),\n\tar_test_node:mine(),\n\tar_test_node:assert_wait_until_height(main, 3),\n\tar_test_node:assert_wait_until_height(peer1, 3),\n\t?assertEqual(0, get_balance(MinerPub)),\n\tB3 = ar_node:get_current_block(),\n\t?assertEqual(10, B3#block.reward),\n\t?assertEqual(30, get_reserved_balance(B3#block.reward_addr)),\n\t?assertEqual(B3#block.price_per_gib_minute, B2#block.price_per_gib_minute),\n\t?assertEqual(B3#block.scheduled_price_per_gib_minute,\n\t\t\tB2#block.scheduled_price_per_gib_minute),\n\t?assertEqual(0, B3#block.kryder_plus_rate_multiplier_latch),\n\t?assertEqual(1, B3#block.kryder_plus_rate_multiplier),\n\t?assertEqual([{MinerAddr, 1, 10, 1}, {MinerAddr, 1, 10, 1}, {MinerAddr, 1, 10, 1},\n\t\t\t{B0#block.reward_addr, 1, 10, 1}], B3#block.reward_history),\n\tFee = ar_test_node:get_optimistic_tx_price(main, 1024),\n\tar_test_node:mine(),\n\tar_test_node:assert_wait_until_height(main, 4),\n\tar_test_node:assert_wait_until_height(peer1, 4),\n\tB4 = ar_node:get_current_block(),\n\t%% We are at the height ?PRICE_2_6_8_TRANSITION_START + ?PRICE_2_6_8_TRANSITION_BLOCKS\n\t%% so the new algorithm kicks in which estimates the expected block reward and takes\n\t%% the missing amount from the endowment pool or takes on debt.\n\tAvgBlockTime4 = ar_block_time_history:compute_block_interval(B3),\n\tExpectedReward4 = max(ar_inflation:calculate(4), B3#block.price_per_gib_minute\n\t\t\t* ?N_REPLICATIONS(B4#block.height)\n\t\t\t* AvgBlockTime4 div 60\n\t\t\t* 3 div (4 * 1024)), % weave_size / GiB\n\t?assertEqual(ExpectedReward4, B4#block.reward),\n\t?assertEqual(ExpectedReward4 + 20, get_reserved_balance(B4#block.reward_addr)),\n\t?assertEqual(10, get_balance(MinerPub)),\n\t?assertEqual(ExpectedReward4 - 10, B4#block.debt_supply),\n\t?assertEqual(1, B4#block.kryder_plus_rate_multiplier_latch),\n\t?assertEqual(2, B4#block.kryder_plus_rate_multiplier),\n\t?assertEqual(B4#block.price_per_gib_minute, B3#block.scheduled_price_per_gib_minute),\n\tPricePerGiBMinute3 = B3#block.price_per_gib_minute,\n\t?assertEqual(max(PricePerGiBMinute3 div 2, min(PricePerGiBMinute3 * 2,\n\t\t\tar_pricing:get_price_per_gib_minute(B3#block.height, B3))),\n\t\t\tB4#block.scheduled_price_per_gib_minute),\n\t%% The Kryder+ rate multiplier is 2 now so the fees should have doubled.\n\t?assert(lists:member(ar_test_node:get_optimistic_tx_price(main, 1024), [Fee * 2, Fee * 2 + 1])),\n\t?assertEqual([{MinerAddr, 1, ExpectedReward4, 1}, {MinerAddr, 1, 10, 1},\n\t\t\t{MinerAddr, 1, 10, 1}, {MinerAddr, 1, 10, 1}, {B0#block.reward_addr, 1, 10, 1}],\n\t\t\tB4#block.reward_history),\n\t?assertEqual(\n\t\tar_rewards:reward_history_hash(B4#block.height, B3#block.reward_history_hash,\n\t\t\t[{MinerAddr, 1, ExpectedReward4, 1}, {MinerAddr, 1, 10, 1}, {MinerAddr, 1, 10, 1}]),\n\t\tB4#block.reward_history_hash),\n\tar_test_node:mine(),\n\tar_test_node:assert_wait_until_height(main, 
5),\n\tar_test_node:assert_wait_until_height(peer1, 5),\n\tB5 = ar_node:get_current_block(),\n\t?assertEqual(20, get_balance(MinerPub)),\n\tAvgBlockTime5 = ar_block_time_history:compute_block_interval(B4),\n\tExpectedReward5 = max(B4#block.price_per_gib_minute\n\t\t\t* ?N_REPLICATIONS(B5#block.height)\n\t\t\t* AvgBlockTime5 div 60\n\t\t\t* 3 div (4 * 1024), % weave_size / GiB\n\t\t\tar_inflation:calculate(5)),\n\t?assertEqual(ExpectedReward5, B5#block.reward),\n\t?assertEqual([{MinerAddr, 1, ExpectedReward5, 1}, {MinerAddr, 1, ExpectedReward4, 1},\n\t\t\t{MinerAddr, 1, 10, 1}, {MinerAddr, 1, 10, 1}, {MinerAddr, 1, 10, 1},\n\t\t\t{B0#block.reward_addr, 1, 10, 1}], B5#block.reward_history),\n\t?assertEqual(\n\t\tar_rewards:reward_history_hash(B5#block.height, B4#block.reward_history_hash,\n\t\t\t[{MinerAddr, 1, ExpectedReward5, 1}, {MinerAddr, 1, ExpectedReward4, 1},\n\t\t\t\t{MinerAddr, 1, 10, 1}]),\n\t\tB5#block.reward_history_hash),\n\t?assertEqual(1, B5#block.kryder_plus_rate_multiplier_latch),\n\t?assertEqual(2, B5#block.kryder_plus_rate_multiplier),\n\t%% The price per GiB minute recalculation only happens every two blocks.\n\t?assertEqual(B5#block.scheduled_price_per_gib_minute,\n\t\t\tB4#block.scheduled_price_per_gib_minute),\n\t?assertEqual(B4#block.price_per_gib_minute, B5#block.price_per_gib_minute),\n\t?assertEqual(20000000000000, get_balance(Pub1)),\n\t?assert(lists:member(ar_test_node:get_optimistic_tx_price(main, 1024), [Fee * 2, Fee * 2 + 1])),\n\tHalfKryderLatchReset = ?RESET_KRYDER_PLUS_LATCH_THRESHOLD div 2,\n\tTX1 = ar_test_node:sign_tx(main, Key1, #{ denomination => 0, reward => HalfKryderLatchReset }),\n\tar_test_node:assert_post_tx_to_peer(main, TX1),\n\tar_test_node:mine(),\n\tar_test_node:assert_wait_until_height(main, 6),\n\tar_test_node:assert_wait_until_height(peer1, 6),\n\tB6 = ar_node:get_current_block(),\n\t{MinerShareDividend, MinerShareDivisor} = ?MINER_FEE_SHARE,\n\t?assertEqual(30, get_balance(MinerPub)),\n\t?assertEqual(10 + % inflation\n\t\t\tHalfKryderLatchReset,\n\t\t\tB6#block.reward\n\t\t\t+ B6#block.reward_pool - B5#block.reward_pool\n\t\t\t- (B6#block.debt_supply - B5#block.debt_supply)),\n\t?assertEqual(20000000000000 - HalfKryderLatchReset, get_balance(Pub1)),\n\t?assertEqual(1, B6#block.kryder_plus_rate_multiplier_latch),\n\t?assertEqual(2, B6#block.kryder_plus_rate_multiplier),\n\t?assertEqual(B6#block.price_per_gib_minute, B5#block.scheduled_price_per_gib_minute),\n\tScheduledPricePerGiBMinute5 = B5#block.scheduled_price_per_gib_minute,\n\t?assertEqual(\n\t\tmax(ScheduledPricePerGiBMinute5 div 2,\n\t\t\tmin(ar_pricing:get_price_per_gib_minute(B5#block.height, B5),\n\t\t\t\tScheduledPricePerGiBMinute5 * 2)),\n\t\t\tB6#block.scheduled_price_per_gib_minute),\n\tassert_new_account_fee(),\n\t?assertEqual(1, B6#block.denomination),\n\t?assertEqual(?TOTAL_SUPPLY + B6#block.debt_supply - B6#block.reward_pool,\n\t\t\tprometheus_gauge:value(available_supply)),\n\t?assertEqual([{MinerAddr, 1, B6#block.reward, 1}, {MinerAddr, 1, ExpectedReward5, 1},\n\t\t\t{MinerAddr, 1, ExpectedReward4, 1}], lists:sublist(B6#block.reward_history, 3)),\n\tTX2 = ar_test_node:sign_tx(main, Key1, #{ denomination => 0, reward => HalfKryderLatchReset * 2 }),\n\tar_test_node:assert_post_tx_to_peer(main, TX2),\n\tar_test_node:mine(),\n\tar_test_node:assert_wait_until_height(main, 7),\n\tar_test_node:assert_wait_until_height(peer1, 7),\n\tB7 = ar_node:get_current_block(),\n\t?assertEqual(10 + % inflation\n\t\t\tHalfKryderLatchReset * 2,\n\t\t\tB7#block.reward\n\t\t\t+ 
B7#block.reward_pool - B6#block.reward_pool\n\t\t\t- (B7#block.debt_supply - B6#block.debt_supply)),\n\t?assertEqual(30 + ExpectedReward4, get_balance(MinerPub)),\n\t?assertEqual(20000000000000 - HalfKryderLatchReset * 3, get_balance(Pub1)),\n\t?assert(B7#block.reward_pool > ?RESET_KRYDER_PLUS_LATCH_THRESHOLD),\n\t?assertEqual(1, B7#block.kryder_plus_rate_multiplier_latch),\n\t?assertEqual(2, B7#block.kryder_plus_rate_multiplier),\n\t?assertEqual(B6#block.price_per_gib_minute, B7#block.price_per_gib_minute),\n\t?assert(ar_test_node:get_optimistic_tx_price(main, 1024) > Fee),\n\tar_test_node:mine(),\n\tar_test_node:assert_wait_until_height(main, 8),\n\tar_test_node:assert_wait_until_height(peer1, 8),\n\tB8 = ar_node:get_current_block(),\n\t?assertEqual(30 + ExpectedReward5 + ExpectedReward4, get_balance(MinerPub)),\n\t%% Release because at the previous block the endowment pool exceeded the threshold.\n\t?assert(B8#block.reward_pool < B7#block.reward_pool),\n\t?assertEqual(0, B8#block.kryder_plus_rate_multiplier_latch),\n\t?assertEqual(2, B8#block.kryder_plus_rate_multiplier),\n\t?assertEqual(1, B8#block.denomination),\n\t?assert(prometheus_gauge:value(available_supply) > ?REDENOMINATION_THRESHOLD),\n\t?assert(B8#block.scheduled_price_per_gib_minute > B8#block.price_per_gib_minute),\n\t%% A transaction with explicitly set denomination.\n\tTX3 = ar_test_node:sign_tx(main, Key3, #{ denomination => 1 }),\n\t{Reward3, _} = ar_test_node:get_tx_price(main, 0, ar_wallet:to_address(Pub2)),\n\t{Reward4, _} = ar_test_node:get_tx_price(main, 0),\n\tar_test_node:assert_post_tx_to_peer(main, TX3),\n\tTX4 = ar_test_node:sign_tx(main, Key3, #{ denomination => 2 }),\n\t?assertMatch({ok, {{<<\"400\">>, _}, _, _, _, _}}, ar_test_node:post_tx_to_peer(main, TX4)),\n\t?assertEqual({ok, [\"invalid_denomination\"]}, ar_tx_db:get_error_codes(TX4#tx.id)),\n\tTX5 = ar_test_node:sign_tx(main, Key3, #{ denomination => 0, target => ar_wallet:to_address(Pub2),\n\t\t\tquantity => 10 }),\n\tar_test_node:assert_post_tx_to_peer(main, TX5),\n\tar_test_node:mine(),\n\tar_test_node:assert_wait_until_height(main, 9),\n\tar_test_node:assert_wait_until_height(peer1, 9),\n\tB9 = ar_node:get_current_block(),\n\t?assertEqual(0, B9#block.kryder_plus_rate_multiplier_latch),\n\t?assertEqual(2, B9#block.kryder_plus_rate_multiplier),\n\t?assertEqual(1, B9#block.denomination),\n\t?assertEqual(2, length(B9#block.txs)),\n\t?assertEqual(0, B9#block.redenomination_height),\n\t?assert(prometheus_gauge:value(available_supply) < ?REDENOMINATION_THRESHOLD),\n\t?assertEqual(?AR(1000000000000000000) - Reward3 - Reward4 - 10, get_balance(Pub3)),\n\t?assertEqual(2000000000 + 10, get_balance(Pub2)),\n\tar_test_node:mine(),\n\tar_test_node:assert_wait_until_height(main, 10),\n\tar_test_node:assert_wait_until_height(peer1, 10),\n\tB10 = ar_node:get_current_block(),\n\t?assertEqual(9 + ?REDENOMINATION_DELAY_BLOCKS, B10#block.redenomination_height),\n\t?assertEqual(1, B10#block.denomination),\n\tTX6 = ar_test_node:sign_tx(main, Key3, #{ denomination => 0 }),\n\t%% Transactions without explicit denomination are not accepted now until\n\t%% the redenomination height.\n\t?assertMatch({ok, {{<<\"400\">>, _}, _, _, _, _}}, ar_test_node:post_tx_to_peer(main, TX6)),\n\t?assertEqual({ok, [\"invalid_denomination\"]}, ar_tx_db:get_error_codes(TX6#tx.id)),\n\t%% The redenomination did not start yet.\n\tTX7 = ar_test_node:sign_tx(main, Key3, #{ denomination => 2 }),\n\t?assertMatch({ok, {{<<\"400\">>, _}, _, _, _, _}}, ar_test_node:post_tx_to_peer(main, 
TX7)),\n\t?assertEqual({ok, [\"invalid_denomination\"]}, ar_tx_db:get_error_codes(TX7#tx.id)),\n\t{_, Pub4} = ar_wallet:new(),\n\tTX8 = ar_test_node:sign_tx(main, Key3, #{ denomination => 1, target => ar_wallet:to_address(Pub4),\n\t\t\tquantity => 3 }),\n\tar_test_node:assert_post_tx_to_peer(main, TX8),\n\tar_test_node:mine(),\n\tar_test_node:assert_wait_until_height(main, 11),\n\tar_test_node:assert_wait_until_height(peer1, 11),\n\tB11 = ar_node:get_current_block(),\n\t?assertEqual(1, length(B11#block.txs)),\n\t?assertEqual(1, B11#block.denomination),\n\t?assertEqual(3, get_balance(Pub4)),\n\tBalance11 = get_balance(Pub3),\n\t{_, Pub5} = ar_wallet:new(),\n\tTX9 = ar_test_node:sign_v1_tx(main, Key3, #{ denomination => 0, target => ar_wallet:to_address(Pub5),\n\t\t\tquantity => 100 }),\n\t?assertMatch({ok, {{<<\"400\">>, _}, _, _, _, _}}, ar_test_node:post_tx_to_peer(main, TX9)),\n\t?assertEqual({ok, [\"invalid_denomination\"]}, ar_tx_db:get_error_codes(TX9#tx.id)),\n\t%% The redenomination did not start just yet.\n\tTX10 = ar_test_node:sign_v1_tx(main, Key3, #{ denomination => 2 }),\n\t?assertMatch({ok, {{<<\"400\">>, _}, _, _, _, _}}, ar_test_node:post_tx_to_peer(main, TX10)),\n\t?assertEqual({ok, [\"invalid_denomination\"]}, ar_tx_db:get_error_codes(TX10#tx.id)),\n\t{Reward11, _} = ar_test_node:get_tx_price(main, 0, ar_wallet:to_address(Pub5)),\n\t?assert(ar_difficulty:get_hash_rate_fixed_ratio(B11) > 1),\n\t?assertEqual(lists:sublist([{MinerAddr, ar_difficulty:get_hash_rate_fixed_ratio(B11), B11#block.reward, 1}\n\t\t\t| B10#block.reward_history],\n\t\t\t?REWARD_HISTORY_BLOCKS + ar_block:get_consensus_window_size()),\n\t\t\tB11#block.reward_history),\n\tTX11 = ar_test_node:sign_tx(main, Key3, #{ denomination => 1, target => ar_wallet:to_address(Pub5),\n\t\t\tquantity => 100 }),\n\tar_test_node:assert_post_tx_to_peer(main, TX11),\n\tar_test_node:mine(),\n\tar_test_node:assert_wait_until_height(main, 12),\n\tar_test_node:assert_wait_until_height(peer1, 12),\n\t?assertEqual(100 * 1000, get_balance(Pub5)),\n\t?assertEqual(3 * 1000, get_balance(Pub4)),\n\t?assertEqual((Balance11 - Reward11 - 100) * 1000, get_balance(Pub3)),\n\tB12 = ar_node:get_current_block(),\n\t?assertEqual(1, length(B12#block.txs)),\n\t?assertEqual(2, B12#block.denomination),\n\t?assertEqual(10 * 1000 + % inflation\n\t\t\tReward11 * 1000, % fees\n\t\t\tB12#block.reward\n\t\t\t+ B12#block.reward_pool - B11#block.reward_pool * 1000\n\t\t\t- (B12#block.debt_supply - B11#block.debt_supply * 1000)),\n\t?assertEqual(B11#block.debt_supply * 1000, B12#block.debt_supply),\n\t%% Setting the price scheduled on height=10.\n\t?assertEqual(B11#block.scheduled_price_per_gib_minute * 1000,\n\t\t\tB12#block.price_per_gib_minute),\n\t?assertEqual([{MinerAddr, ar_difficulty:get_hash_rate_fixed_ratio(B12), B12#block.reward, 2} | B11#block.reward_history],\n\t\t\tB12#block.reward_history),\n\tTX12 = ar_test_node:sign_tx(main, Key3, #{ denomination => 0, quantity => 10,\n\t\t\ttarget => ar_wallet:to_address(Pub5) }),\n\t{Reward12, 2} = ar_test_node:get_tx_price(main, 0),\n\tar_test_node:assert_post_tx_to_peer(main, TX12),\n\tTX13 = ar_test_node:sign_v1_tx(main, Key3, #{ denomination => 0,\n\t\t\treward => ar_test_node:get_optimistic_tx_price(main, 0),\n\t\t\ttarget => ar_wallet:to_address(Pub4), quantity => 4 }),\n\tar_test_node:assert_post_tx_to_peer(main, TX13),\n\tReward14 = ar_test_node:get_optimistic_tx_price(main, 0),\n\tTX14 = ar_test_node:sign_v1_tx(main, Key3, #{ denomination => 2, reward => Reward14,\n\t\t\tquantity => 5, target => 
ar_wallet:to_address(Pub4) }),\n\tar_test_node:assert_post_tx_to_peer(main, TX14),\n\tTX15 = ar_test_node:sign_v1_tx(main, Key3, #{ denomination => 2,\n\t\t\treward => erlang:ceil(Reward14 / 1000) }),\n\t?assertMatch({ok, {{<<\"400\">>, _}, _, _, _, _}}, ar_test_node:post_tx_to_peer(main, TX15)),\n\t?assertEqual({ok, [\"tx_too_cheap\"]}, ar_tx_db:get_error_codes(TX15#tx.id)),\n\tTX16 = ar_test_node:sign_v1_tx(main, Key3, #{ denomination => 3 }),\n\t?assertMatch({ok, {{<<\"400\">>, _}, _, _, _, _}}, ar_test_node:post_tx_to_peer(main, TX16)),\n\t?assertEqual({ok, [\"invalid_denomination\"]}, ar_tx_db:get_error_codes(TX16#tx.id)),\n\t{_, Pub6} = ar_wallet:new(),\n\t%% Divide the reward by 1000 and specify the previous denomination.\n\tReward17 = ar_test_node:get_optimistic_tx_price(main, 0, ar_wallet:to_address(Pub6)),\n\tTX17 = ar_test_node:sign_tx(main, Key3, #{ denomination => 1,\n\t\t\treward => erlang:ceil(Reward17 / 1000), target => ar_wallet:to_address(Pub6),\n\t\t\tquantity => 7 }),\n\tar_test_node:assert_post_tx_to_peer(main, TX17),\n\tar_test_node:mine(),\n\tar_test_node:assert_wait_until_height(main, 13),\n\tar_test_node:assert_wait_until_height(peer1, 13),\n\tB13 = ar_node:get_current_block(),\n\tReward17_2 = erlang:ceil(Reward17 / 1000) * 1000,\n\tAvgBlockTime13 = ar_block_time_history:compute_block_interval(B12),\n\tBaseReward13 =\n\t\tB12#block.price_per_gib_minute\n\t\t* ?N_REPLICATIONS(B13#block.height)\n\t\t* AvgBlockTime13 div 60 % minutes\n\t\t* B13#block.weave_size div (1024 * 1024 * 1024), % weave_size / GiB\n\tFeeSum13 =\n\t\t\tReward12 * MinerShareDividend div MinerShareDivisor % TX12\n\t\t\t+ Reward14 * MinerShareDividend div MinerShareDivisor % TX13\n\t\t\t+ Reward14 * MinerShareDividend div MinerShareDivisor % TX14\n\t\t\t+ Reward17_2 * MinerShareDividend div MinerShareDivisor, % TX17\n\t?debugFmt(\"B12#block.reward_pool: ~B, fees: ~B, fees received by the reward pool:~B, \"\n\t\t\t\"expected reward: ~B, miner fee share: ~B~n\", [B12#block.reward_pool,\n\t\t\t\tReward12 + Reward14 * 2 + Reward17_2,\n\t\t\t\tReward12 + Reward14 * 2 + Reward17_2 - FeeSum13,\n\t\t\t\tBaseReward13, FeeSum13]),\n\t?assertEqual(B12#block.reward_pool + Reward12 + Reward14 * 2 + Reward17_2 - FeeSum13\n\t\t\t- max(0, (BaseReward13 - (10 * 1000 + FeeSum13))), B13#block.reward_pool),\n\t?assertEqual(4, length(B13#block.txs)),\n\t?assertEqual(2, B13#block.denomination),\n\t?assertEqual(100 * 1000 + 10, get_balance(Pub5)),\n\t?assertEqual(7 * 1000, get_balance(Pub6)),\n\t?assertEqual(3 * 1000 + 4 + 5, get_balance(Pub4)),\n\tassert_new_account_fee(),\n\tar_test_node:mine(),\n\tar_test_node:assert_wait_until_height(main, 14),\n\tar_test_node:assert_wait_until_height(peer1, 14),\n\tB14 = ar_node:get_current_block(),\n\tScheduledPricePerGiBMinute13 = B13#block.scheduled_price_per_gib_minute,\n\t?assertEqual(\n\t\tmax(ScheduledPricePerGiBMinute13 div 2, min(\n\t\t\tar_pricing:get_price_per_gib_minute(B13#block.height, B13),\n\t\t\tScheduledPricePerGiBMinute13 * 2\n\t\t)), B14#block.scheduled_price_per_gib_minute).\n\ntime_diff(#block{ timestamp = TS }, #block{ timestamp = PrevTS }) ->\n\tmax(1, TS - PrevTS).\n\nvdf_diff(B, PrevB) ->\n\tar_block:vdf_step_number(B) - ar_block:vdf_step_number(PrevB).\n\nchunk_count(#block{ recall_byte2 = undefined }) ->\n\t1;\nchunk_count(_) ->\n\t2.\n\nassert_new_account_fee() ->\n\t?assert(ar_test_node:get_optimistic_tx_price(main,\n\t\t\t262144 + ?NEW_ACCOUNT_FEE_DATA_SIZE_EQUIVALENT) >\n\t\t\tar_test_node:get_optimistic_tx_price(main, 0, 
<<\"non-existent-address\">>)),\n\t?assert(ar_test_node:get_optimistic_tx_price(main,\n\t\t\t?NEW_ACCOUNT_FEE_DATA_SIZE_EQUIVALENT - 262144) <\n\t\t\tar_test_node:get_optimistic_tx_price(main, 0, <<\"non-existent-address\">>)).\n\n%% @doc Return the current balance of the given account.\nget_balance(Pub) ->\n\tAddress = ar_wallet:to_address(Pub),\n\tPeer = ar_test_node:peer_ip(main),\n\t{ok, {{<<\"200\">>, _}, _, Reply, _, _}} =\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => Peer,\n\t\t\tpath => \"/wallet/\" ++ binary_to_list(ar_util:encode(Address)) ++ \"/balance\"\n\t\t}),\n\tBalance = binary_to_integer(Reply),\n\tB = ar_node:get_current_block(),\n\t{ok, {{<<\"200\">>, _}, _, Reply2, _, _}} =\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => Peer,\n\t\t\tpath => \"/wallet_list/\" ++ binary_to_list(ar_util:encode(B#block.wallet_list))\n\t\t\t\t\t++ \"/\" ++ binary_to_list(ar_util:encode(Address)) ++ \"/balance\"\n\t\t}),\n\tcase binary_to_integer(Reply2) of\n\t\tBalance ->\n\t\t\tBalance;\n\t\tBalance2 ->\n\t\t\t?assert(false, io_lib:format(\"Expected: ~B, got: ~B.~n\", [Balance, Balance2]))\n\tend.\n\n\nget_reserved_balance(Address) ->\n\tPeer = ar_test_node:peer_ip(main),\n\t{ok, {{<<\"200\">>, _}, _, Reply, _, _}} =\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => Peer,\n\t\t\tpath => \"/wallet/\" ++ binary_to_list(ar_util:encode(Address))\n\t\t\t\t\t++ \"/reserved_rewards_total\"\n\t\t}),\n\tbinary_to_integer(Reply).\n"
  },
  {
    "path": "apps/arweave/test/ar_reject_chunks_tests.erl",
    "content": "-module(ar_reject_chunks_tests).\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"arweave/include/ar_data_sync.hrl\").\n\n-import(ar_test_node, [sign_v1_tx/2, wait_until_height/2, assert_wait_until_height/2,\n\t\tread_block_when_stored/1, test_with_mocked_functions/2]).\n\nrejects_invalid_chunks_test_() ->\n\t{timeout, 180, fun test_rejects_invalid_chunks/0}.\n\ntest_rejects_invalid_chunks() ->\n\tar_test_data_sync:setup_nodes(),\n\t?assertMatch(\n\t\t{ok, {{<<\"400\">>, _}, _, <<\"{\\\"error\\\":\\\"chunk_too_big\\\"}\">>, _, _}},\n\t\tar_test_node:post_chunk(main, ar_serialize:jsonify(#{\n\t\t\tchunk => ar_util:encode(crypto:strong_rand_bytes(?DATA_CHUNK_SIZE + 1)),\n\t\t\tdata_path => <<>>,\n\t\t\toffset => <<\"0\">>,\n\t\t\tdata_size => <<\"0\">>\n\t\t}))\n\t),\n\t?assertMatch(\n\t\t{ok, {{<<\"400\">>, _}, _, <<\"{\\\"error\\\":\\\"data_path_too_big\\\"}\">>, _, _}},\n\t\tar_test_node:post_chunk(main, ar_serialize:jsonify(#{\n\t\t\tdata_path => ar_util:encode(crypto:strong_rand_bytes(?MAX_PATH_SIZE + 1)),\n\t\t\tchunk => <<>>,\n\t\t\toffset => <<\"0\">>,\n\t\t\tdata_size => <<\"0\">>\n\t\t}))\n\t),\n\t?assertMatch(\n\t\t{ok, {{<<\"400\">>, _}, _, <<\"{\\\"error\\\":\\\"offset_too_big\\\"}\">>, _, _}},\n\t\tar_test_node:post_chunk(main, ar_serialize:jsonify(#{\n\t\t\toffset => integer_to_binary(trunc(math:pow(2, 256))),\n\t\t\tdata_path => <<>>,\n\t\t\tchunk => <<>>,\n\t\t\tdata_size => <<\"0\">>\n\t\t}))\n\t),\n\t?assertMatch(\n\t\t{ok, {{<<\"400\">>, _}, _, <<\"{\\\"error\\\":\\\"data_size_too_big\\\"}\">>, _, _}},\n\t\tar_test_node:post_chunk(main, ar_serialize:jsonify(#{\n\t\t\tdata_size => integer_to_binary(trunc(math:pow(2, 256))),\n\t\t\tdata_path => <<>>,\n\t\t\tchunk => <<>>,\n\t\t\toffset => <<\"0\">>\n\t\t}))\n\t),\n\t?assertMatch(\n\t\t{ok, {{<<\"400\">>, _}, _, <<\"{\\\"error\\\":\\\"chunk_proof_ratio_not_attractive\\\"}\">>, _, _}},\n\t\tar_test_node:post_chunk(main, ar_serialize:jsonify(#{\n\t\t\tchunk => ar_util:encode(<<\"a\">>),\n\t\t\tdata_path => ar_util:encode(<<\"bb\">>),\n\t\t\toffset => <<\"0\">>,\n\t\t\tdata_size => <<\"0\">>\n\t\t}))\n\t),\n\tar_test_data_sync:setup_nodes(),\n\tChunk = crypto:strong_rand_bytes(500),\n\tSizedChunkIDs = ar_tx:sized_chunks_to_sized_chunk_ids(\n\t\tar_tx:chunks_to_size_tagged_chunks([Chunk])\n\t),\n\t{DataRoot, DataTree} = ar_merkle:generate_tree(SizedChunkIDs),\n\tDataPath = ar_merkle:generate_path(DataRoot, 0, DataTree),\n\t?assertMatch(\n\t\t{ok, {{<<\"400\">>, _}, _, <<\"{\\\"error\\\":\\\"data_root_not_found\\\"}\">>, _, _}},\n\t\tar_test_node:post_chunk(main, ar_serialize:jsonify(#{\n\t\t\tdata_root => ar_util:encode(DataRoot),\n\t\t\tchunk => ar_util:encode(Chunk),\n\t\t\tdata_path => ar_util:encode(DataPath),\n\t\t\toffset => <<\"0\">>,\n\t\t\tdata_size => <<\"500\">>\n\t\t}))\n\t),\n\t?assertMatch(\n\t\t{ok, {{<<\"413\">>, _}, _, <<\"Payload too large\">>, _, _}},\n\t\tar_test_node:post_chunk(main, << <<0>> || _ <- lists:seq(1, ?MAX_SERIALIZED_CHUNK_PROOF_SIZE + 1) >>)\n\t).\n\ndoes_not_store_small_chunks_after_2_5_test_() ->\n\t{timeout, 600, fun test_does_not_store_small_chunks_after_2_5/0}.\n\ntest_does_not_store_small_chunks_after_2_5() ->\n\tSize = ?DATA_CHUNK_SIZE,\n\tThird = Size div 3,\n\tSplits = [\n\t\t{\"Even split\", Size * 3, Size, Size, Size, Size, Size * 2, Size * 3,\n\t\t\t\tlists:seq(0, Size - 1, 2048), lists:seq(Size, Size * 2 - 1, 2048),\n\t\t\t\tlists:seq(Size * 2, Size 
* 3 + 2048, 2048),\n\t\t\t\t[{O, first} || O <- lists:seq(1, Size - 1, 2048)]\n\t\t\t\t\t\t++ [{O, second} || O <- lists:seq(Size + 1, 2 * Size, 2048)]\n\t\t\t\t\t\t++ [{O, third} || O <- lists:seq(2 * Size + 1, 3 * Size, 2048)]\n\t\t\t\t\t\t++ [{3 * Size + 1, 404}, {4 * Size, 404}]},\n\t\t{\"Small last chunk\", 2 * Size + Third, Size, Size, Third, Size, 2 * Size,\n\t\t\t\t2 * Size + Third, lists:seq(3, Size - 1, 2048),\n\t\t\t\tlists:seq(Size, 2 * Size - 1, 2048),\n\t\t\t\tlists:seq(2 * Size, 2 * Size + Third, 2048),\n\t\t\t\t%% The chunk is expected to be returned by any offset of the 256 KiB\n\t\t\t\t%% bucket where it ends.\n\t\t\t\t[{O, first} || O <- lists:seq(1, Size - 1, 2048)]\n\t\t\t\t\t\t++ [{O, second} || O <- lists:seq(Size + 1, 2 * Size, 2048)]\n\t\t\t\t\t\t++ [{O, third} || O <- lists:seq(2 * Size + 1, 3 * Size, 2048)]\n\t\t\t\t\t\t++ [{3 * Size + 1, 404}, {4 * Size, 404}]},\n\t\t{\"Small chunks crossing the bucket\", 2 * Size + Third, Size, Third + 1, Size - 1,\n\t\t\t\tSize, Size + Third + 1, 2 * Size + Third, lists:seq(0, Size - 1, 2048),\n\t\t\t\tlists:seq(Size, Size + Third, 1024),\n\t\t\t\tlists:seq(Size + Third + 1, 2 * Size + Third, 2048),\n\t\t\t\t[{O, first} || O <- lists:seq(1, Size, 2048)]\n\t\t\t\t\t\t++ [{O, second} || O <- lists:seq(Size + 1, 2 * Size, 2048)]\n\t\t\t\t\t\t++ [{O, third} || O <- lists:seq(2 * Size + 1, 3 * Size, 2048)]\n\t\t\t\t\t\t++ [{3 * Size + 1, 404}, {4 * Size, 404}]},\n\t\t{\"Small chunks in one bucket\", 2 * Size, Size, Third, Size - Third, Size, Size + Third,\n\t\t\t\t2 * Size, lists:seq(0, Size - 1, 2048), lists:seq(Size, Size + Third - 1, 2048),\n\t\t\t\tlists:seq(Size + Third, 2 * Size, 2048),\n\t\t\t\t[{O, first} || O <- lists:seq(1, Size - 1, 2048)]\n\t\t\t\t\t\t%% The second and third chunks must not be accepted.\n\t\t\t\t\t\t%% The second chunk does not precede a chunk crossing a bucket border,\n\t\t\t\t\t\t%% the third chunk ends at a multiple of 256 KiB.\n\t\t\t\t\t\t++ [{O, 404} || O <- lists:seq(Size + 1, 2 * Size + 4096, 2048)]},\n\t\t{\"Small chunk preceding 256 KiB chunk\", 2 * Size + Third, Third, Size, Size, Third,\n\t\t\t\tSize + Third, 2 * Size + Third, lists:seq(0, Third - 1, 2048),\n\t\t\t\tlists:seq(Third, Third + Size - 1, 2048),\n\t\t\t\tlists:seq(Third + Size, Third + 2 * Size, 2048),\n\t\t\t\t%% The first chunk must not be accepted, the first bucket stays empty.\n\t\t\t\t[{O, 404} || O <- lists:seq(1, Size - 1, 2048)]\n\t\t\t\t\t\t%% The other chunks are rejected too - their start offsets\n\t\t\t\t\t\t%% are not aligned with the buckets.\n\t\t\t\t\t\t++ [{O, 404} || O <- lists:seq(Size + 1, 4 * Size, 2048)]}],\n\tlists:foreach(\n\t\tfun({Title, DataSize, FirstSize, SecondSize, ThirdSize, FirstMerkleOffset,\n\t\t\t\tSecondMerkleOffset, ThirdMerkleOffset, FirstPublishOffsets, SecondPublishOffsets,\n\t\t\t\tThirdPublishOffsets, Expectations}) ->\n\t\t\t?debugFmt(\"Running [~s]\", [Title]),\n\t\t\tWallet = ar_test_data_sync:setup_nodes(),\n\t\t\t{FirstChunk, SecondChunk, ThirdChunk} = {crypto:strong_rand_bytes(FirstSize),\n\t\t\t\t\tcrypto:strong_rand_bytes(SecondSize), crypto:strong_rand_bytes(ThirdSize)},\n\t\t\t{FirstChunkID, SecondChunkID, ThirdChunkID} = {ar_tx:generate_chunk_id(FirstChunk),\n\t\t\t\t\tar_tx:generate_chunk_id(SecondChunk), ar_tx:generate_chunk_id(ThirdChunk)},\n\t\t\t{DataRoot, DataTree} = ar_merkle:generate_tree([{FirstChunkID, FirstMerkleOffset},\n\t\t\t\t\t{SecondChunkID, SecondMerkleOffset}, {ThirdChunkID, ThirdMerkleOffset}]),\n\t\t\tTX = ar_test_node:sign_tx(Wallet, #{ last_tx => 
ar_test_node:get_tx_anchor(main), data_size => DataSize,\n\t\t\t\t\tdata_root => DataRoot }),\n\t\t\tar_test_node:post_and_mine(#{ miner => main, await_on => main }, [TX]),\n\t\t\tlists:foreach(\n\t\t\t\tfun({Chunk, Offset}) ->\n\t\t\t\t\tDataPath = ar_merkle:generate_path(DataRoot, Offset, DataTree),\n\t\t\t\t\tProof = #{ data_root => ar_util:encode(DataRoot),\n\t\t\t\t\t\t\tdata_path => ar_util:encode(DataPath),\n\t\t\t\t\t\t\tchunk => ar_util:encode(Chunk),\n\t\t\t\t\t\t\toffset => integer_to_binary(Offset),\n\t\t\t\t\t\t\tdata_size => integer_to_binary(DataSize) },\n\t\t\t\t\t%% All chunks are accepted because we do not know their offsets yet -\n\t\t\t\t\t%% in theory they may end up below the strict data split threshold.\n\t\t\t\t\t?assertMatch({ok, {{<<\"200\">>, _}, _, _, _, _}},\n\t\t\t\t\t\t\tar_test_node:post_chunk(main, ar_serialize:jsonify(Proof)), Title)\n\t\t\t\tend,\n\t\t\t\t[{FirstChunk, O} || O <- FirstPublishOffsets]\n\t\t\t\t\t\t++ [{SecondChunk, O} || O <- SecondPublishOffsets]\n\t\t\t\t\t\t++ [{ThirdChunk, O} || O <- ThirdPublishOffsets]\n\t\t\t),\n\t\t\t%% In practice the chunks are above the strict data split threshold so those\n\t\t\t%% which do not pass strict validation will not be stored.\n\t\t\ttimer:sleep(2000),\n\t\t\tGenesisOffset = ar_block:strict_data_split_threshold(),\n\t\t\tlists:foreach(\n\t\t\t\tfun\t({Offset, 404}) ->\n\t\t\t\t\t\t?assertMatch({ok, {{<<\"404\">>, _}, _, _, _, _}},\n\t\t\t\t\t\t\t\tar_test_node:get_chunk(main, GenesisOffset + Offset), Title);\n\t\t\t\t\t({Offset, first}) ->\n\t\t\t\t\t\t{ok, {{<<\"200\">>, _}, _, ProofJSON, _, _}} = ar_test_node:get_chunk(main, \n\t\t\t\t\t\t\t\tGenesisOffset + Offset),\n\t\t\t\t\t\t?assertEqual(FirstChunk, ar_util:decode(maps:get(<<\"chunk\">>,\n\t\t\t\t\t\t\t\tjiffy:decode(ProofJSON, [return_maps]))), Title);\n\t\t\t\t\t({Offset, second}) ->\n\t\t\t\t\t\t{ok, {{<<\"200\">>, _}, _, ProofJSON, _, _}} = ar_test_node:get_chunk(main, \n\t\t\t\t\t\t\t\tGenesisOffset + Offset),\n\t\t\t\t\t\t?assertEqual(SecondChunk, ar_util:decode(maps:get(<<\"chunk\">>,\n\t\t\t\t\t\t\t\tjiffy:decode(ProofJSON, [return_maps]))), Title);\n\t\t\t\t\t({Offset, third}) ->\n\t\t\t\t\t\t{ok, {{<<\"200\">>, _}, _, ProofJSON, _, _}} = ar_test_node:get_chunk(main, \n\t\t\t\t\t\t\t\tGenesisOffset + Offset),\n\t\t\t\t\t\t?assertEqual(ThirdChunk, ar_util:decode(maps:get(<<\"chunk\">>,\n\t\t\t\t\t\t\t\tjiffy:decode(ProofJSON, [return_maps]))), Title)\n\t\t\t\tend,\n\t\t\t\tExpectations\n\t\t\t)\n\t\tend,\n\t\tSplits\n\t).\n\nrejects_chunks_with_merkle_tree_borders_exceeding_max_chunk_size_test_() ->\n\t{timeout, 120,\n\t\t\tfun test_rejects_chunks_with_merkle_tree_borders_exceeding_max_chunk_size/0}.\n\ntest_rejects_chunks_with_merkle_tree_borders_exceeding_max_chunk_size() ->\n\tWallet = ar_test_data_sync:setup_nodes(),\n\tBigOutOfBoundsOffsetChunk = crypto:strong_rand_bytes(?DATA_CHUNK_SIZE),\n\tBigChunkID = ar_tx:generate_chunk_id(BigOutOfBoundsOffsetChunk),\n\t{BigDataRoot, BigDataTree} = ar_merkle:generate_tree([{BigChunkID, ?DATA_CHUNK_SIZE + 1}]),\n\tBigTX = ar_test_node:sign_tx(Wallet, #{ last_tx => ar_test_node:get_tx_anchor(main), data_size => ?DATA_CHUNK_SIZE,\n\t\t\tdata_root => BigDataRoot }),\n\tar_test_node:post_and_mine(#{ miner => main, await_on => main }, [BigTX]),\n\tBigDataPath = ar_merkle:generate_path(BigDataRoot, 0, BigDataTree),\n\tBigProof = #{ data_root => ar_util:encode(BigDataRoot),\n\t\t\tdata_path => ar_util:encode(BigDataPath),\n\t\t\tchunk => ar_util:encode(BigOutOfBoundsOffsetChunk), offset => 
<<\"0\">>,\n\t\t\tdata_size => integer_to_binary(?DATA_CHUNK_SIZE)},\n\t?assertMatch({ok, {{<<\"400\">>, _}, _, <<\"{\\\"error\\\":\\\"invalid_proof\\\"}\">>, _, _}},\n\t\t\tar_test_node:post_chunk(main, ar_serialize:jsonify(BigProof))).\n\nrejects_chunks_exceeding_disk_pool_limit_test_() ->\n\t{timeout, ?TEST_NODE_TIMEOUT, fun test_rejects_chunks_exceeding_disk_pool_limit/0}.\n\ntest_rejects_chunks_exceeding_disk_pool_limit() ->\n\tWallet = ar_test_data_sync:setup_nodes(),\n\tData1 = crypto:strong_rand_bytes(\n\t\t(?DEFAULT_MAX_DISK_POOL_DATA_ROOT_BUFFER_MB * ?MiB) + 1\n\t),\n\tChunks1 = ar_test_data_sync:imperfect_split(Data1),\n\t{DataRoot1, _} = ar_merkle:generate_tree(\n\t\tar_tx:sized_chunks_to_sized_chunk_ids(\n\t\t\tar_tx:chunks_to_size_tagged_chunks(Chunks1)\n\t\t)\n\t),\n\t{TX1, Chunks1} = ar_test_data_sync:tx(Wallet, {fixed_data, DataRoot1, Chunks1}),\n\tar_test_node:assert_post_tx_to_peer(main, TX1),\n\t[{_, FirstProof1} | Proofs1] = ar_test_data_sync:build_proofs(TX1, Chunks1, [TX1], 0, 0),\n\tlists:foreach(\n\t\tfun({_, Proof}) ->\n\t\t\t?assertMatch(\n\t\t\t\t{ok, {{<<\"200\">>, _}, _, _, _, _}},\n\t\t\t\tar_test_node:post_chunk(main, ar_serialize:jsonify(Proof))\n\t\t\t)\n\t\tend,\n\t\tProofs1\n\t),\n\t?assertMatch(\n\t\t{ok, {{<<\"400\">>, _}, _, <<\"{\\\"error\\\":\\\"exceeds_disk_pool_size_limit\\\"}\">>, _, _}},\n\t\tar_test_node:post_chunk(main, ar_serialize:jsonify(FirstProof1))\n\t),\n\tData2 = crypto:strong_rand_bytes(\n\t\tmin(\n\t\t\t?DEFAULT_MAX_DISK_POOL_BUFFER_MB - ?DEFAULT_MAX_DISK_POOL_DATA_ROOT_BUFFER_MB,\n\t\t\t?DEFAULT_MAX_DISK_POOL_DATA_ROOT_BUFFER_MB - 1\n\t\t) * ?MiB\n\t),\n\tChunks2 = ar_test_data_sync:imperfect_split(Data2),\n\t{DataRoot2, _} = ar_merkle:generate_tree(\n\t\tar_tx:sized_chunks_to_sized_chunk_ids(\n\t\t\tar_tx:chunks_to_size_tagged_chunks(Chunks2)\n\t\t)\n\t),\n\t{TX2, Chunks2} = ar_test_data_sync:tx(Wallet, {fixed_data, DataRoot2, Chunks2}),\n\tar_test_node:assert_post_tx_to_peer(main, TX2),\n\tProofs2 = ar_test_data_sync:build_proofs(TX2, Chunks2, [TX2], 0, 0),\n\tlists:foreach(\n\t\tfun({_, Proof}) ->\n\t\t\t%% The very last chunk will be dropped later because it starts and ends\n\t\t\t%% in the bucket of the previous chunk (the chunk sizes are 131072).\n\t\t\t?assertMatch(\n\t\t\t\t{ok, {{<<\"200\">>, _}, _, _, _, _}},\n\t\t\t\tar_test_node:post_chunk(main, ar_serialize:jsonify(Proof))\n\t\t\t)\n\t\tend,\n\t\tProofs2\n\t),\n\tLeft =\n\t\t?DEFAULT_MAX_DISK_POOL_BUFFER_MB * ?MiB -\n\t\tlists:sum([byte_size(Chunk) || Chunk <- tl(Chunks1)]) -\n\t\tbyte_size(Data2),\n\t?assert(Left < ?DEFAULT_MAX_DISK_POOL_DATA_ROOT_BUFFER_MB * ?MiB),\n\tData3 = crypto:strong_rand_bytes(Left + 1),\n\tChunks3 = ar_test_data_sync:imperfect_split(Data3),\n\t{DataRoot3, _} = ar_merkle:generate_tree(\n\t\tar_tx:sized_chunks_to_sized_chunk_ids(\n\t\t\tar_tx:chunks_to_size_tagged_chunks(Chunks3)\n\t\t)\n\t),\n\t{TX3, Chunks3} = ar_test_data_sync:tx(Wallet, {fixed_data, DataRoot3, Chunks3}),\n\tar_test_node:assert_post_tx_to_peer(main, TX3),\n\t[{_, FirstProof3} | Proofs3] = ar_test_data_sync:build_proofs(TX3, Chunks3, [TX3], 0, 0),\n\tlists:foreach(\n\t\tfun({_, Proof}) ->\n\t\t\t%% The very last chunk will be dropped later because it starts and ends\n\t\t\t%% in the bucket of the previous chunk (the chunk sizes are 131072).\n\t\t\t?assertMatch(\n\t\t\t\t{ok, {{<<\"200\">>, _}, _, _, _, _}},\n\t\t\t\tar_test_node:post_chunk(main, ar_serialize:jsonify(Proof))\n\t\t\t)\n\t\tend,\n\t\tProofs3\n\t),\n\t?assertMatch(\n\t\t{ok, {{<<\"400\">>, _}, _, 
<<\"{\\\"error\\\":\\\"exceeds_disk_pool_size_limit\\\"}\">>, _, _}},\n\t\tar_test_node:post_chunk(main, ar_serialize:jsonify(FirstProof3))\n\t),\n\tar_test_node:mine(main),\n\tassert_wait_until_height(main, 1),\n\ttrue = ar_util:do_until(\n\t\tfun() ->\n\t\t\t%% After a block is mined, the chunks receive their absolute offsets, which\n\t\t\t%% end up above the strict data split threshold and so the node discovers\n\t\t\t%% the very last chunks of the last two transactions are invalid under these\n\t\t\t%% offsets and frees up 131072 + 131072 bytes in the disk pool => we can submit\n\t\t\t%% a 262144-byte chunk. Also, expect 303 instead of 200 because the last block\n\t\t\t%% was large such that the configured partitions do not cover at least two\n\t\t\t%% times as much space ahead of the current weave size.\n\t\t\tcase ar_test_node:post_chunk(main, ar_serialize:jsonify(FirstProof3)) of\n\t\t\t\t{ok, {{<<\"303\">>, _}, _, _, _, _}} ->\n\t\t\t\t\ttrue;\n\t\t\t\tResponse ->\n\t\t\t\t\t?debugFmt(\"post_chunk response (offset: ~p, data_root: ~p): ~p\",\n\t\t\t\t\t\t[maps:get(offset, FirstProof3), maps:get(data_root, FirstProof3), Response]),\n\t\t\t\t\tfalse\n\t\t\tend\n\t\tend,\n\t\t2000,\n\t\t30 * 1000\n\t),\n\t%% Now we do not have free space again.\n\t?assertMatch(\n\t\t{ok, {{<<\"400\">>, _}, _, <<\"{\\\"error\\\":\\\"exceeds_disk_pool_size_limit\\\"}\">>, _, _}},\n\t\tar_test_node:post_chunk(main, ar_serialize:jsonify(FirstProof1))\n\t),\n\t%% Mine two more blocks to make the chunks mature so that we can remove them from the\n\t%% disk pool (they will stay in the corresponding storage modules though, if any).\n\tar_test_node:mine(main),\n\tassert_wait_until_height(main, 2),\n\tar_test_node:mine(main),\n\ttrue = ar_util:do_until(\n\t\tfun() ->\n\t\t\tcase ar_test_node:post_chunk(main, ar_serialize:jsonify(FirstProof1)) of\n\t\t\t\t{ok, {{<<\"200\">>, _}, _, _, _, _}} ->\n\t\t\t\t\ttrue;\n\t\t\t\t_ ->\n\t\t\t\t\tfalse\n\t\t\tend\n\t\tend,\n\t\t2000,\n\t\t30 * 1000\n\t).\n\naccepts_chunks_test_() ->\n\tar_test_node:test_with_mocked_functions([{ar_fork, height_2_5, fun() -> 0 end}],\n\t\tfun test_accepts_chunks/0, 120).\n\ntest_accepts_chunks() ->\n\ttest_accepts_chunks(original_split).\n\ntest_accepts_chunks(Split) ->\n\tWallet = ar_test_data_sync:setup_nodes(),\n\t{TX, Chunks} = ar_test_data_sync:tx(Wallet, {Split, 3}),\n\tar_test_node:assert_post_tx_to_peer(peer1, TX),\n\tar_test_node:assert_wait_until_receives_txs([TX]),\n\t[{Offset, FirstProof}, {_, SecondProof}, {_, ThirdProof}] = \n\t\t\tar_test_data_sync:build_proofs(TX, Chunks, [TX], 0, 0),\n\tEndOffset = Offset + ar_block:strict_data_split_threshold(),\n\t%% Post the third proof to the disk pool.\n\t?assertMatch(\n\t\t{ok, {{<<\"200\">>, _}, _, _, _, _}},\n\t\tar_test_node:post_chunk(main, ar_serialize:jsonify(ThirdProof))\n\t),\n\tar_test_node:mine(peer1),\n\t[{BH, _, _} | _] = wait_until_height(main, 1),\n\tB = read_block_when_stored(BH),\n\t?assertMatch(\n\t\t{ok, {{<<\"404\">>, _}, _, _, _, _}},\n\t\tar_test_node:get_chunk(main, EndOffset)\n\t),\n\t?assertMatch(\n\t\t{ok, {{<<\"200\">>, _}, _, _, _, _}},\n\t\tar_test_node:post_chunk(main, ar_serialize:jsonify(FirstProof))\n\t),\n\t%% Expect the chunk to be retrieved by any offset within\n\t%% (EndOffset - ChunkSize, EndOffset], but not outside of it.\n\tFirstChunk = ar_util:decode(maps:get(chunk, FirstProof)),\n\tFirstChunkSize = byte_size(FirstChunk),\n\tExpectedProof = #{\n\t\tdata_path => maps:get(data_path, FirstProof),\n\t\ttx_path => maps:get(tx_path, 
FirstProof),\n\t\tchunk => ar_util:encode(FirstChunk)\n\t},\n\tar_test_data_sync:wait_until_syncs_chunk(EndOffset, ExpectedProof),\n\tar_test_data_sync:wait_until_syncs_chunk(\n\t\tEndOffset - rand:uniform(FirstChunkSize - 2), ExpectedProof),\n\tar_test_data_sync:wait_until_syncs_chunk(EndOffset - FirstChunkSize + 1, ExpectedProof),\n\t?assertMatch({ok, {{<<\"404\">>, _}, _, _, _, _}}, ar_test_node:get_chunk(main, 0)),\n\t?assertMatch({ok, {{<<\"404\">>, _}, _, _, _, _}}, ar_test_node:get_chunk(main, EndOffset + 1)),\n\tTXSize = byte_size(binary:list_to_bin(Chunks)),\n\tExpectedOffsetInfo = ar_serialize:jsonify(#{\n\t\toffset => integer_to_binary(TXSize + ar_block:strict_data_split_threshold()),\n\t\tsize => integer_to_binary(TXSize)\n\t}),\n\t?assertMatch({ok, {{<<\"200\">>, _}, _, ExpectedOffsetInfo, _, _}},\n\t\tar_test_data_sync:get_tx_offset(main, TX#tx.id)),\n\t%% Expect no transaction data because the second chunk is not synced yet.\n\t?assertMatch({ok, {{<<\"404\">>, _}, _, _Binary, _, _}},\n\t\tar_test_data_sync:get_tx_data(TX#tx.id)),\n\t?assertMatch({ok, {{<<\"200\">>, _}, _, _, _, _}},\n\t\t\tar_test_node:post_chunk(main, ar_serialize:jsonify(SecondProof))),\n\tExpectedSecondProof = #{\n\t\tdata_path => maps:get(data_path, SecondProof),\n\t\ttx_path => maps:get(tx_path, SecondProof),\n\t\tchunk => maps:get(chunk, SecondProof)\n\t},\n\tSecondChunk = ar_util:decode(maps:get(chunk, SecondProof)),\n\tSecondChunkOffset = ar_block:strict_data_split_threshold() + FirstChunkSize + byte_size(SecondChunk),\n\tar_test_data_sync:wait_until_syncs_chunk(SecondChunkOffset, ExpectedSecondProof),\n\ttrue = ar_util:do_until(\n\t\tfun() ->\n\t\t\t{ok, {{<<\"200\">>, _}, _, Data, _, _}} = ar_test_data_sync:get_tx_data(TX#tx.id),\n\t\t\tar_util:encode(binary:list_to_bin(Chunks)) == Data\n\t\tend,\n\t\t500,\n\t\t10 * 1000\n\t),\n\tExpectedThirdProof = #{\n\t\tdata_path => maps:get(data_path, ThirdProof),\n\t\ttx_path => maps:get(tx_path, ThirdProof),\n\t\tchunk => maps:get(chunk, ThirdProof)\n\t},\n\tar_test_data_sync:wait_until_syncs_chunk(B#block.weave_size, ExpectedThirdProof),\n\t?assertMatch({ok, {{<<\"404\">>, _}, _, _, _, _}}, ar_test_node:get_chunk(main, B#block.weave_size + 1)).\n"
  },
  {
    "path": "apps/arweave/test/ar_replica_2_9_nif_tests.erl",
    "content": "-module(ar_replica_2_9_nif_tests).\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-include_lib(\"arweave/include/ar_consensus.hrl\").\n\nsetup_replica_2_9() ->\n\tFastState = ar_mine_randomx:init_fast2(rxsquared, ?RANDOMX_PACKING_KEY, 0, 0,\n\t\t\terlang:system_info(dirty_cpu_schedulers_online)),\n\tLightState = ar_mine_randomx:init_light2(rxsquared, ?RANDOMX_PACKING_KEY, 0, 0),\n\t{FastState, LightState}.\n\ntest_register(TestFun, Fixture) ->\n\t{timeout, 120, {with, Fixture, [TestFun]}}.\n\nrandomx_replica_2_9_suite_test_() ->\n\t{setup, fun setup_replica_2_9/0,\n\t\tfun (SetupData) ->\n\t\t\t[\n\t\t\t\ttest_register(fun test_vectors/1, SetupData), % TODO move bottom\n\t\t\t\ttest_register(fun test_state/1, SetupData),\n\t\t\t\ttest_register(fun test_pack_unpack_sub_chunks/1, SetupData)\n\t\t\t]\n\t\tend\n\t}.\n\n\n%% -------------------------------------------------------------------------------------------\n%% replica_2_9 tests\n%% -------------------------------------------------------------------------------------------\ntest_state({FastState, LightState}) ->\n\n\t?assertEqual(\n\t\t{ok, {rxsquared, fast, 34047604, 2097152}},\n\t\tar_mine_randomx:info(FastState)\n\t),\n\t?assertEqual(\n\t\t{ok, {rxsquared, light, 0, 2097152}},\n\t\tar_mine_randomx:info(LightState)\n\t),\n\n\t{ok, {_, _, _, ScratchpadSize}} = ar_mine_randomx:info(FastState),\n\t?assertEqual(?RANDOMX_SCRATCHPAD_SIZE, ScratchpadSize).\n\ntest_vectors({FastState, _LightState}) ->\n\tKey = << 1 >>,\n\tEntropy = ar_mine_randomx:randomx_generate_replica_2_9_entropy(FastState, Key),\n\tEntropyHash = crypto:hash(sha256, Entropy),\n\tEntropyHashExpd = << 56,199,231,119,170,151,220,154,45,204,70,193,80,68,\n\t\t46,50,136,31,35,102,141,77,19,66,191,127,97,183,230,\n\t\t119,243,151 >>,\n\t?assertEqual(EntropyHashExpd, EntropyHash),\n\n\tKey2 = << 2 >>,\n\tEntropy2 = ar_mine_randomx:randomx_generate_replica_2_9_entropy(FastState, Key2),\n\tEntropyHash2 = crypto:hash(sha256, Entropy2),\n\tEntropyHashExpd2 = << 206,47,133,111,139,20,31,64,185,33,107,29,14,10,252,\n\t\t76,201,75,203,186,131,32,20,45,34,125,76,248,64,90,\n\t\t220,196 >>,\n\t?assertEqual(EntropyHashExpd2, EntropyHash2),\n\n\tSubChunk = << 255:(8*8192) >>,\n\tEntropySubChunkIndex = 1,\n\t{ok, Packed} = ar_mine_randomx:randomx_encrypt_replica_2_9_sub_chunk(\n\t\t{FastState, Entropy, SubChunk, EntropySubChunkIndex}),\n\tPackedHashReal = crypto:hash(sha256, Packed),\n\tPackedHashExpd = << 15,46,184,11,124,31,150,77,199,107,221,0,136,154,61,\n\t\t146,193,198,126,52,19,7,211,28,121,108,176,15,124,33,\n\t\t48,99 >>,\n\t?assertEqual(PackedHashExpd, PackedHashReal),\n\t{ok, Unpacked} = ar_mine_randomx:randomx_decrypt_replica_2_9_sub_chunk(\n\t\t{FastState, Entropy, Packed, EntropySubChunkIndex}),\n\t?assertEqual(SubChunk, Unpacked),\n\n\tok.\n\ntest_pack_unpack_sub_chunks({State, _LightState}) ->\n\tKey = << 0:256 >>,\n\tSubChunk = << 0:(8192 * 8) >>,\n\tEntropy = ar_mine_randomx:randomx_generate_replica_2_9_entropy(State, Key),\n\t?assertEqual(8388608, byte_size(Entropy)),\n\tPackedSubChunks = pack_sub_chunks(SubChunk, Entropy, 0, SubChunk, State),\n\t?assert(lists:all(fun(PackedSubChunk) -> byte_size(PackedSubChunk) == 8192 end,\n\t\t\tPackedSubChunks)),\n\tunpack_sub_chunks(PackedSubChunks, 0, SubChunk, Entropy, State).\n\npack_sub_chunks(_SubChunk, _Entropy, Index, _PreviousSubChunk, _State)\n\t\twhen Index == 1024 ->\n\t[];\npack_sub_chunks(SubChunk, Entropy, Index, PreviousSubChunk, State) ->\n\t{ok, PackedSubChunk} = 
ar_mine_randomx:randomx_encrypt_replica_2_9_sub_chunk(\n\t\t\t{State, Entropy, SubChunk, Index}),\n\tNote = io_lib:format(\"Packed a sub-chunk, index=~B.~n\", [Index]),\n\t?assertNotEqual(PackedSubChunk, PreviousSubChunk, Note),\n\t[PackedSubChunk | pack_sub_chunks(SubChunk, Entropy, Index + 1, PackedSubChunk, State)].\n\nunpack_sub_chunks([], _Index, _SubChunk, _Entropy, _State) ->\n\tok;\nunpack_sub_chunks([PackedSubChunk | PackedSubChunks], Index, SubChunk, Entropy, State) ->\n\t{ok, UnpackedSubChunk} = ar_mine_randomx:randomx_decrypt_replica_2_9_sub_chunk(\n\t\t\t{State, Entropy, PackedSubChunk, Index}),\n\tNote = io_lib:format(\"Unpacked a sub-chunk, index=~B.~n\", [Index]),\n\t?assertEqual(SubChunk, UnpackedSubChunk, Note),\n\tunpack_sub_chunks(PackedSubChunks, Index + 1, SubChunk, Entropy, State).\n"
  },
  {
    "path": "apps/arweave/test/ar_semaphore_tests.erl",
    "content": "-module(ar_semaphore_tests).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\none_wait_per_process_test_() ->\n\twith_semaphore_(one_wait_per_process_sem, 4, fun() ->\n\t\t?assertEqual(ok, ar_semaphore:acquire(one_wait_per_process_sem, ?DEFAULT_CALL_TIMEOUT)),\n\t\t?assertEqual({error, process_already_waiting}, ar_semaphore:acquire(one_wait_per_process_sem, ?DEFAULT_CALL_TIMEOUT))\n\tend).\n\nwait_for_one_process_at_a_time_test_() ->\n\twith_semaphore_(wait_for_one_process_at_a_time_sem, 1, fun() ->\n\t\tTestPid = self(),\n\t\tSleepMs = 500,\n\t\tNoMessageMs = 250,\n\t\tDoneTimeoutMs = 3000,\n\t\tspawn_worker(wait_for_one_process_at_a_time_sem, SleepMs, TestPid, p1),\n\t\tspawn_worker(wait_for_one_process_at_a_time_sem, SleepMs, TestPid, p2),\n\t\tspawn_worker(wait_for_one_process_at_a_time_sem, SleepMs, TestPid, p3),\n\t\t?assert(receive _ -> false after NoMessageMs -> true end),\n\t\tDone1 = receive_done(DoneTimeoutMs),\n\t\t?assert(receive _ -> false after NoMessageMs -> true end),\n\t\tDone2 = receive_done(DoneTimeoutMs),\n\t\t?assertNotEqual(Done1, Done2),\n\t\t?assert(receive _ -> false after NoMessageMs -> true end),\n\t\tDone3 = receive_done(DoneTimeoutMs),\n\t\t?assertNotEqual(Done1, Done3),\n\t\t?assertNotEqual(Done2, Done3)\n\tend).\n\nwait_for_two_processes_at_a_time_test_() ->\n\twith_semaphore_(wait_for_two_processes_at_a_time_sem, 2, fun() ->\n\t\tTestPid = self(),\n\t\tSleepMs = 400,\n\t\tDoneTimeoutMs = 3000,\n\t\tspawn_worker(wait_for_two_processes_at_a_time_sem, SleepMs, TestPid, p1),\n\t\tspawn_worker(wait_for_two_processes_at_a_time_sem, SleepMs, TestPid, p2),\n\t\tspawn_worker(wait_for_two_processes_at_a_time_sem, SleepMs, TestPid, p3),\n\t\tspawn_worker(wait_for_two_processes_at_a_time_sem, SleepMs, TestPid, p4),\n\t\tDone1 = receive_done(DoneTimeoutMs),\n\t\tDone2 = receive_done(DoneTimeoutMs),\n\t\t?assertNotEqual(Done1, Done2),\n\t\t?assert(receive _ -> false after 250 -> true end),\n\t\tDone3 = receive_done(DoneTimeoutMs),\n\t\t?assertNotEqual(Done1, Done3),\n\t\t?assertNotEqual(Done2, Done3),\n\t\tDone4 = receive_done(DoneTimeoutMs),\n\t\t?assertNotEqual(Done1, Done4),\n\t\t?assertNotEqual(Done2, Done4),\n\t\t?assertNotEqual(Done3, Done4)\n\tend).\n\nspawn_worker(SemaphoreName, SleepMs, TestPid, WorkerID) ->\n\tspawn_link(fun() ->\n\t\tok = ar_semaphore:acquire(SemaphoreName, ?DEFAULT_CALL_TIMEOUT),\n\t\ttimer:sleep(SleepMs),\n\t\tTestPid ! {done, WorkerID}\n\tend).\n\nreceive_done(TimeoutMs) ->\n\treceive\n\t\t{done, WorkerID} ->\n\t\t\tWorkerID\n\tafter TimeoutMs ->\n\t\t?assert(false)\n\tend.\n\nwith_semaphore_(Name, Value, Fun) ->\n\t{setup,\n\t\tfun() -> {ok, _} = ar_semaphore:start_link(Name, Value) end,\n\t\tfun(_) -> _ = ar_semaphore:stop(Name) end,\n\t\t[Fun]\n\t}.\n"
  },
  {
    "path": "apps/arweave/test/ar_serialize_tests.erl",
    "content": "-module(ar_serialize_tests).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave/include/ar_mining.hrl\").\n-include_lib(\"arweave/include/ar_pool.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\nblock_to_binary_test_() ->\n\t%% Set the mainnet values here because we are using the mainnet fixtures.\n\tar_test_node:test_with_mocked_functions([\n\t\t\t{ar_fork, height_1_6, fun() -> 95000 end},\n\t\t\t{ar_fork, height_1_7, fun() -> 235200 end},\n\t\t\t{ar_fork, height_1_8, fun() -> 269510 end},\n\t\t\t{ar_fork, height_1_9, fun() -> 315700 end},\n\t\t\t{ar_fork, height_2_0, fun() -> 422250 end},\n\t\t\t{ar_fork, height_2_2, fun() -> 552180 end},\n\t\t\t{ar_fork, height_2_3, fun() -> 591140 end},\n\t\t\t{ar_fork, height_2_4, fun() -> 633720 end},\n\t\t\t{ar_fork, height_2_5, fun() -> 812970 end},\n\t\t\t{ar_fork, height_2_6, fun() -> infinity end}],\n\t\tfun test_block_to_binary/0).\n\ntest_block_to_binary() ->\n\t{ok, Cwd} = file:get_cwd(),\n\tBlockFixtureDir = filename:join(Cwd, \"./apps/arweave/test/fixtures/blocks\"),\n\tTXFixtureDir = filename:join(Cwd, \"./apps/arweave/test/fixtures/txs\"),\n\t{ok, BlockFixtures} = file:list_dir(BlockFixtureDir),\n\ttest_block_to_binary([filename:join(BlockFixtureDir, Name)\n\t\t\t|| Name <- BlockFixtures], TXFixtureDir).\n\ntest_block_to_binary([], _TXFixtureDir) ->\n\tok;\ntest_block_to_binary([Fixture | Fixtures], TXFixtureDir) ->\n\t{ok, Bin} = file:read_file(Fixture),\n\tB = ar_storage:migrate_block_record(binary_to_term(Bin)),\n\t?debugFmt(\"Block ~s, height ~B.~n\", [ar_util:encode(B#block.indep_hash),\n\t\t\tB#block.height]),\n\ttest_block_to_binary(B),\n\tRandomTags = [crypto:strong_rand_bytes(rand:uniform(2))\n\t\t\t|| _ <- lists:seq(1, rand:uniform(1024))],\n\tB2 = B#block{ tags = RandomTags },\n\ttest_block_to_binary(B2),\n\tB3 = B#block{ reward_addr = unclaimed },\n\ttest_block_to_binary(B3),\n\t{ok, TXFixtures} = file:list_dir(TXFixtureDir),\n\tTXs =\n\t\tlists:foldl(\n\t\t\tfun(TXFixture, Acc) ->\n\t\t\t\t{ok, TXBin} = file:read_file(filename:join(TXFixtureDir, TXFixture)),\n\t\t\t\tTX = ar_storage:migrate_tx_record(binary_to_term(TXBin)),\n\t\t\t\tmaps:put(TX#tx.id, TX, Acc)\n\t\t\tend,\n\t\t\t#{},\n\t\t\tTXFixtures),\n\tBlockTXs = [maps:get(TXID, TXs) || TXID <- B#block.txs],\n\tB4 = B#block{ txs = BlockTXs },\n\ttest_block_to_binary(B4),\n\tBlockTXs2 = [case rand:uniform(2) of 1 -> TX#tx.id; _ -> TX end\n\t\t\t|| TX <- BlockTXs],\n\tB5 = B#block{ txs = BlockTXs2 },\n\ttest_block_to_binary(B5),\n\tTXIDs = [TX#tx.id || TX <- BlockTXs],\n\tB6 = B#block{ txs = TXIDs },\n\ttest_block_to_binary(B6),\n\ttest_block_to_binary(Fixtures, TXFixtureDir).\n\ntest_block_to_binary(B) ->\n\t{ok, B2} = ar_serialize:binary_to_block(ar_serialize:block_to_binary(B)),\n\t?assertEqual(B#block{ txs = [] }, B2#block{ txs = [] }),\n\t?assertEqual(true, compare_txs(B#block.txs, B2#block.txs)),\n\tlists:foreach(\n\t\tfun\t(TX) when is_record(TX, tx)->\n\t\t\t\t?assertEqual({ok, TX}, ar_serialize:binary_to_tx(ar_serialize:tx_to_binary(TX)));\n\t\t\t(_TXID) ->\n\t\t\t\tok\n\t\tend,\n\t\tB#block.txs\n\t).\n\ncompare_txs([TXID | TXs], [#tx{ id = TXID } | TXs2]) ->\n\tcompare_txs(TXs, TXs2);\ncompare_txs([#tx{ id = TXID } | TXs], [TXID | TXs2]) ->\n\tcompare_txs(TXs, TXs2);\ncompare_txs([TXID | TXs], [TXID | TXs2]) ->\n\tcompare_txs(TXs, TXs2);\ncompare_txs([], []) ->\n\ttrue;\ncompare_txs(_TXs, _TXs2) ->\n\tfalse.\n\nblock_announcement_to_binary_test() ->\n\tA = #block_announcement{ indep_hash = 
crypto:strong_rand_bytes(48),\n\t\t\tprevious_block = crypto:strong_rand_bytes(48) },\n\t?assertEqual({ok, A}, ar_serialize:binary_to_block_announcement(\n\t\t\tar_serialize:block_announcement_to_binary(A))),\n\tA2 = A#block_announcement{ recall_byte = 0 },\n\t?assertEqual({ok, A2}, ar_serialize:binary_to_block_announcement(\n\t\t\tar_serialize:block_announcement_to_binary(A2))),\n\tA3 = A#block_announcement{ recall_byte = 1000000000000000000000 },\n\t?assertEqual({ok, A3}, ar_serialize:binary_to_block_announcement(\n\t\t\tar_serialize:block_announcement_to_binary(A3))),\n\tA4 = A3#block_announcement{ tx_prefixes = [crypto:strong_rand_bytes(8)\n\t\t\t|| _ <- lists:seq(1, 1000)] },\n\t?assertEqual({ok, A4}, ar_serialize:binary_to_block_announcement(\n\t\t\tar_serialize:block_announcement_to_binary(A4))),\n\tA5 = A#block_announcement{ recall_byte2 = 1,\n\t\t\tsolution_hash = crypto:strong_rand_bytes(32) },\n\t?assertEqual({ok, A5}, ar_serialize:binary_to_block_announcement(\n\t\t\tar_serialize:block_announcement_to_binary(A5))),\n\tA6 = A#block_announcement{ recall_byte2 = 1, recall_byte = 2,\n\t\t\tsolution_hash = crypto:strong_rand_bytes(32) },\n\t?assertEqual({ok, A6}, ar_serialize:binary_to_block_announcement(\n\t\t\tar_serialize:block_announcement_to_binary(A6))).\n\nblock_announcement_response_to_binary_test() ->\n\tA = #block_announcement_response{},\n\t?assertEqual({ok, A}, ar_serialize:binary_to_block_announcement_response(\n\t\t\tar_serialize:block_announcement_response_to_binary(A))),\n\tA2 = A#block_announcement_response{ missing_chunk = true,\n\t\t\tmissing_tx_indices = lists:seq(0, 999) },\n\t?assertEqual({ok, A2}, ar_serialize:binary_to_block_announcement_response(\n\t\t\tar_serialize:block_announcement_response_to_binary(A2))),\n\tA3 = A#block_announcement_response{ missing_chunk = true, missing_chunk2 = false,\n\t\t\tmissing_tx_indices = lists:seq(0, 1) },\n\t?assertEqual({ok, A3}, ar_serialize:binary_to_block_announcement_response(\n\t\t\tar_serialize:block_announcement_response_to_binary(A3))),\n\tA4 = A#block_announcement_response{ missing_chunk2 = true,\n\t\t\tmissing_tx_indices = [731] },\n\t?assertEqual({ok, A4}, ar_serialize:binary_to_block_announcement_response(\n\t\t\tar_serialize:block_announcement_response_to_binary(A4))).\n\n\npoa_map_to_json(Map) ->\n\tjiffy:encode(ar_serialize:poa_map_to_json_map(Map)).\n\njson_to_poa_map(Body) ->\n\t{ok, JSON} = ar_serialize:json_decode(Body, [return_maps]),\n\t{ok, ar_serialize:json_map_to_poa_map(JSON)}.\n\npoa_map_test() ->\n\ttest_poa_map(fun ar_serialize:poa_map_to_binary/1, fun ar_serialize:binary_to_poa/1, #{}),\n\ttest_poa_map(fun poa_map_to_json/1, fun json_to_poa_map/1,\n\t\t#{ data_size => 0, data_root => <<>> }).\n\ntest_poa_map(Serialize, Deserialize, BaseProof) ->\n\tProof = BaseProof#{ chunk => crypto:strong_rand_bytes(1), data_path => <<>>,\n\t\t\ttx_path => <<>>, packing => unpacked },\n\t?assertEqual({ok, Proof},\n\t\t\tDeserialize(Serialize(Proof))),\n\tProof2 = Proof#{ chunk => crypto:strong_rand_bytes(256 * 1024) },\n\t?assertEqual({ok, Proof2},\n\t\t\tDeserialize(Serialize(Proof2))),\n\tProof3 = Proof2#{ data_path => crypto:strong_rand_bytes(1024),\n\t\t\tpacking => spora_2_5, tx_path => crypto:strong_rand_bytes(1024) },\n\t?assertEqual({ok, Proof3},\n\t\t\tDeserialize(Serialize(Proof3))),\n\tProof4 = Proof3#{ packing => {spora_2_6, crypto:strong_rand_bytes(33)} },\n\t?assertEqual({ok, Proof4},\n\t\t\tDeserialize(Serialize(Proof4))),\n\tProof5 = Proof3#{ packing => {composite, 
crypto:strong_rand_bytes(33), 2} },\n\t?assertEqual({ok, Proof5},\n\t\t\tDeserialize(Serialize(Proof5))).\n\npoa_no_chunk_map_test() ->\n\ttest_poa_no_chunk_map(fun ar_serialize:poa_no_chunk_map_to_binary/1, fun ar_serialize:binary_to_no_chunk_map/1).\n\ntest_poa_no_chunk_map(Serialize, Deserialize) ->\n\tProof = #{ data_path => crypto:strong_rand_bytes(500),\n\t\ttx_path => crypto:strong_rand_bytes(250) },\n\t?assertEqual({ok, Proof},\n\t\t\tDeserialize(Serialize(Proof))).\n\nblock_index_to_binary_test() ->\n\tlists:foreach(\n\t\tfun(BI) ->\n\t\t\t?assertEqual({ok, BI}, ar_serialize:binary_to_block_index(\n\t\t\t\tar_serialize:block_index_to_binary(BI)))\n\t\tend,\n\t\t[[], [{crypto:strong_rand_bytes(48), rand:uniform(1000),\n\t\t\t\tcrypto:strong_rand_bytes(32)}],\n\t\t\t[{crypto:strong_rand_bytes(48), 0, <<>>}],\n\t\t\t[{crypto:strong_rand_bytes(48), rand:uniform(1000),\n\t\t\t\tcrypto:strong_rand_bytes(32)} || _ <- lists:seq(1, 1000)]]).\n\n%% @doc Convert a new block into JSON and back, ensure the result is the same.\nblock_roundtrip_test_() ->\n\tar_test_node:test_with_mocked_functions([\n\t\t\t{ar_fork, height_2_6, fun() -> infinity end},\n\t\t\t{ar_fork, height_2_6_8, fun() -> infinity end},\n\t\t\t{ar_fork, height_2_7, fun() -> infinity end}],\n\t\tfun test_block_roundtrip/0).\n\ntest_block_roundtrip() ->\n\t[B] = ar_weave:init(),\n\tTXIDs = [TX#tx.id || TX <- B#block.txs],\n\tJSONStruct = ar_serialize:jsonify(ar_serialize:block_to_json_struct(B)),\n\tBRes = ar_serialize:json_struct_to_block(JSONStruct),\n\t?assertEqual(B#block{ txs = TXIDs, size_tagged_txs = [], account_tree = undefined },\n\t\t\tBRes#block{ hash_list = B#block.hash_list, size_tagged_txs = [] }).\n\n%% @doc Convert a new TX into JSON and back, ensure the result is the same.\ntx_roundtrip_test() ->\n\tTXBase = ar_tx:new(<<\"test\">>),\n\tTX =\n\t\tTXBase#tx{\n\t\t\tformat = 2,\n\t\t\ttags = [{<<\"Name1\">>, <<\"Value1\">>}],\n\t\t\tdata_root = << 0:256 >>,\n\t\t\tsignature_type = ?DEFAULT_KEY_TYPE,\n\t\t\towner_address = ar_wallet:to_address(TXBase#tx.owner, ?DEFAULT_KEY_TYPE)\n\t\t},\n\tJsonTX = ar_serialize:jsonify(ar_serialize:tx_to_json_struct(TX)),\n\t?assertEqual(\n\t\tTX,\n\t\tar_serialize:json_struct_to_tx(JsonTX)\n\t).\n\nwallet_list_roundtrip_test_() ->\n\t{timeout, 30, fun test_wallet_list_roundtrip/0}.\n\ntest_wallet_list_roundtrip() ->\n\t[B] = ar_weave:init(),\n\tWL = B#block.account_tree,\n\tJSONWL = ar_serialize:jsonify(\n\t\tar_serialize:wallet_list_to_json_struct(B#block.reward_addr, false, WL)),\n\tExpectedWL = ar_patricia_tree:foldr(fun(K, V, Acc) -> [{K, V} | Acc] end, [], WL),\n\tActualWL = ar_patricia_tree:foldr(\n\t\tfun(K, V, Acc) -> [{K, V} | Acc] end, [], ar_serialize:json_struct_to_wallet_list(JSONWL)\n\t),\n\t?assertEqual(ExpectedWL, ActualWL).\n\nblock_index_roundtrip_test_() ->\n\t{timeout, 10, fun test_block_index_roundtrip/0}.\n\ntest_block_index_roundtrip() ->\n\t[B] = ar_weave:init(),\n\tHL = [B#block.indep_hash, B#block.indep_hash],\n\tJSONHL = ar_serialize:jsonify(ar_serialize:block_index_to_json_struct(HL)),\n\tHL = ar_serialize:json_struct_to_block_index(ar_serialize:dejsonify(JSONHL)),\n\tBI = [{B#block.indep_hash, 1, <<\"Root\">>}, {B#block.indep_hash, 2, <<>>}],\n\tJSONBI = ar_serialize:jsonify(ar_serialize:block_index_to_json_struct(BI)),\n\tBI = ar_serialize:json_struct_to_block_index(ar_serialize:dejsonify(JSONBI)).\n\nquery_roundtrip_test() ->\n\tQuery = {'equals', <<\"TestName\">>, <<\"TestVal\">>},\n\tQueryJSON = 
ar_serialize:jsonify(\n\t\tar_serialize:query_to_json_struct(\n\t\t\tQuery\n\t\t)\n\t),\n\t?assertEqual({ok, Query}, ar_serialize:json_struct_to_query(QueryJSON)).\n\ndata_roots_roundtrip_test() ->\n\t%% TXRoot must be empty or 32 bytes:\n\t?assertEqual({error, invalid_input1}, ar_serialize:binary_to_data_roots({<<\"a\">>, 0, []})),\n\t%% The number of entries must not exceed the transaction count limit:\n\t?assertEqual({error, invalid_input1},\n\t\tar_serialize:binary_to_data_roots({<<>>, 0, make_entries(1001)})),\n    TXRoot = crypto:strong_rand_bytes(32),\n    BlockSizes = [0, 1, 255, 256, 65535, 65536, 123456789],\n    Cases = lists:flatten([\n\t\t{<<>>, 0, []},\n        [{TXRoot, BS, []} || BS <- BlockSizes],\n\t\t[{TXRoot, BS, make_entries(1)} || BS <- BlockSizes],\n\t\t[{TXRoot, BS, make_entries(2)} || BS <- BlockSizes],\n\t\t[{TXRoot, BS, make_entries(1000)} || BS <- BlockSizes]\n    ]),\n    lists:foreach(\n        fun({TR, BS, Entries}) ->\n            Bin = ar_serialize:data_roots_to_binary({TR, BS, Entries}),\n            ?assertMatch({ok, {TR, BS, _}}, ar_serialize:binary_to_data_roots(Bin)),\n            ?assertEqual({ok, {TR, BS, Entries}}, ar_serialize:binary_to_data_roots(Bin))\n        end,\n        Cases\n    ).\n\nmake_entries(N) ->\n    lists:map(\n        fun(I) ->\n            DataRoot = crypto:strong_rand_bytes(32),\n\t\t\tTXSize =\n\t\t\t\tcase I rem 2 of\n\t\t\t\t\t0 ->\n\t\t\t\t\t\t0;\n\t\t\t\t\t_ ->\n\t\t\t\t\t\trand:uniform(1000000)\n\t\t\t\tend,\n\t\t\tTXStartOffset =\n\t\t\t\tcase I rem 2 of\n\t\t\t\t\t0 ->\n\t\t\t\t\t\t0;\n\t\t\t\t\t_ ->\n\t\t\t\t\t\trand:uniform(1000000)\n\t\t\t\tend,\n\t\t\tTXPath =\n\t\t\t\tcase I rem 2 of\n\t\t\t\t\t0 ->\n\t\t\t\t\t\t<<>>;\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tcrypto:strong_rand_bytes(200)\n\t\t\t\tend,\n            {DataRoot, TXSize, TXStartOffset, TXPath}\n        end,\n        lists:seq(1, N)\n    ).\n\ncandidate_to_json_struct_test() ->\n\n\tTest = fun(Candidate) ->\n        JSON = ar_serialize:jsonify(ar_serialize:candidate_to_json_struct(Candidate)),\n\t\t{ok, JSONStruct} = ar_serialize:json_decode(JSON, [return_maps]),\n        CandidateAfter = ar_serialize:json_map_to_candidate(JSONStruct),\n        ExpectedCandidate = Candidate#mining_candidate{\n            cache_ref = not_set,\n            chunk1 = not_set,\n            chunk2 = not_set,\n            cm_lead_peer = not_set\n        },\n        ?assertEqual(ExpectedCandidate, CandidateAfter)\n    end,\n\n\tDefaultCandidate = #mining_candidate{\n        cm_diff = {rand:uniform(1024), rand:uniform(1024)},\n\t\tcm_h1_list = [\n\t\t\t{crypto:strong_rand_bytes(32), rand:uniform(100)},\n\t\t\t{crypto:strong_rand_bytes(32), rand:uniform(100)},\n\t\t\t{crypto:strong_rand_bytes(32), rand:uniform(100)}\n\t\t],\n        h0 = crypto:strong_rand_bytes(32),\n        h1 = crypto:strong_rand_bytes(32),\n        h2 = crypto:strong_rand_bytes(32),\n        mining_address = crypto:strong_rand_bytes(32),\n        next_seed = crypto:strong_rand_bytes(32),\n\t\tnext_vdf_difficulty = rand:uniform(100),\n        nonce = rand:uniform(100),\n        nonce_limiter_output = crypto:strong_rand_bytes(32),\n        partition_number = rand:uniform(100),\n        partition_number2 = rand:uniform(100),\n        partition_upper_bound = rand:uniform(100),\n        poa2 = #poa{\n\t\t\tchunk = crypto:strong_rand_bytes(256 * 1024),\n\t\t\tdata_path = crypto:strong_rand_bytes(1024),\n \t\t\ttx_path = crypto:strong_rand_bytes(1024) },\n        preimage = crypto:strong_rand_bytes(32),\n        seed = 
crypto:strong_rand_bytes(32),\n\t\tsession_key = {crypto:strong_rand_bytes(32), rand:uniform(100), rand:uniform(10000)},\n        start_interval_number = rand:uniform(100),\n        step_number = rand:uniform(100)\n    },\n\n\tTest(DefaultCandidate),\n\n\t%% clear optional fields\n\tTest(DefaultCandidate#mining_candidate{\n\t\tcm_h1_list = [],\n\t\th1 = not_set,\n\t\th2 = not_set,\n\t\tnonce = not_set,\n\t\tpoa2 = not_set,\n\t\tpreimage = not_set}),\n\n\t%% set unserialized fields\n\tTest(DefaultCandidate#mining_candidate{\n\t\tcache_ref = {rand:uniform(100), rand:uniform(100), rand:uniform(100), make_ref()},\n\t\tchunk1 = crypto:strong_rand_bytes(256 * 1024),\n\t\tchunk2 = crypto:strong_rand_bytes(256 * 1024),\n\t\tcm_lead_peer = ar_test_node:peer_ip(main)}).\n\nsolution_to_json_struct_test() ->\n\n\tTest = fun(Solution) ->\n        JSON = ar_serialize:jsonify(ar_serialize:solution_to_json_struct(Solution)),\n\t\t{ok, JSONStruct} = ar_serialize:json_decode(JSON, [return_maps]),\n        SolutionAfter = ar_serialize:json_map_to_solution(JSONStruct),\n        ?assertEqual(Solution, SolutionAfter)\n    end,\n\n\tDefaultSolution = #mining_solution{\n\t\tlast_step_checkpoints = [\n\t\t\tcrypto:strong_rand_bytes(32),\n\t\t\tcrypto:strong_rand_bytes(32),\n\t\t\tcrypto:strong_rand_bytes(32)],\n\t\tmining_address = crypto:strong_rand_bytes(32),\n\t\tnext_seed = crypto:strong_rand_bytes(32),\n\t\tnext_vdf_difficulty = rand:uniform(100),\n\t\tnonce = rand:uniform(100),\n\t\tnonce_limiter_output = crypto:strong_rand_bytes(32),\n\t\tpartition_number = rand:uniform(100),\n\t\tpartition_upper_bound = rand:uniform(100),\n\t\tpoa1 = #poa{\n\t\t\tchunk = crypto:strong_rand_bytes(256 * 1024),\n\t\t\tdata_path = crypto:strong_rand_bytes(1024),\n \t\t\ttx_path = crypto:strong_rand_bytes(1024) },\n\t\tpoa2 = #poa{\n\t\t\tchunk = crypto:strong_rand_bytes(256 * 1024),\n\t\t\tdata_path = crypto:strong_rand_bytes(1024),\n \t\t\ttx_path = crypto:strong_rand_bytes(1024) },\n\t\tpreimage = crypto:strong_rand_bytes(32),\n\t\trecall_byte1 = rand:uniform(100),\n\t\trecall_byte2 = rand:uniform(100),\n\t\tseed = crypto:strong_rand_bytes(32),\n\t\tsolution_hash = crypto:strong_rand_bytes(32),\n\t\tstart_interval_number = rand:uniform(100),\n\t\tstep_number = rand:uniform(100),\n\t\tsteps = [\n\t\t\tcrypto:strong_rand_bytes(32),\n\t\t\tcrypto:strong_rand_bytes(32),\n\t\t\tcrypto:strong_rand_bytes(32)]\n\t},\n\n\tTest(DefaultSolution),\n\n\t%% clear optional fields\n\tTest(DefaultSolution#mining_solution{\n\t\trecall_byte2 = undefined}).\n\npartial_solution_to_json_struct_test() ->\n\tTestCases = [\n\t\t#mining_solution{\n\t\t\tmining_address = <<\"a\">>,\n\t\t\tnext_seed = <<\"s\">>,\n\t\t\tseed = <<\"s\">>,\n\t\t\tnext_vdf_difficulty = 1,\n\t\t\tnonce = 2,\n\t\t\tpartition_number = 10,\n\t\t\tpartition_upper_bound = 5001,\n\t\t\tsolution_hash = <<\"h\">>,\n\t\t\tnonce_limiter_output = <<\"output\">>,\n\t\t\tpreimage = <<\"pr\">>,\n\t\t\tpoa1 = #poa{ chunk = <<\"c\">>, tx_path = <<\"t\">>, data_path = <<\"dpath\">> },\n\t\t\tpoa2 = #poa{},\n\t\t\trecall_byte1 = 123234234234,\n\t\t\trecall_byte2 = undefined,\n\t\t\tstart_interval_number = 23,\n\t\t\tstep_number = 1113423423423423423423423432342342342344\n\t\t},\n\t\t#mining_solution{\n\t\t\tmining_address = <<\"a\">>,\n\t\t\tnext_seed = <<\"s\">>,\n\t\t\tseed = <<\"s\">>,\n\t\t\tnext_vdf_difficulty = 1,\n\t\t\tnonce = 2,\n\t\t\tpartition_number = 10,\n\t\t\tpartition_upper_bound = 5001,\n\t\t\tsolution_hash = <<\"h\">>,\n\t\t\tnonce_limiter_output = 
<<\"output\">>,\n\t\t\tpreimage = <<\"pr\">>,\n\t\t\tpoa1 = #poa{ chunk = <<\"c\">>, tx_path = <<\"t\">>, data_path = <<\"dpath\">> },\n\t\t\tpoa2 = #poa{ chunk = <<\"chunk2\">>, tx_path = <<\"t2\">>, data_path = <<\"d2\">> },\n\t\t\trecall_byte1 = 123234234234,\n\t\t\trecall_byte2 = 2,\n\t\t\tstart_interval_number = 23,\n\t\t\tstep_number = 1113423423423423423423423432342342342344\n\t\t}\n\t],\n\tlists:foreach(\n\t\tfun(Solution) ->\n\t\t\t?assertEqual(Solution,\n\t\t\t\t\tar_serialize:json_map_to_solution(jiffy:decode(ar_serialize:jsonify(\n\t\t\t\t\t\t\tar_serialize:solution_to_json_struct(Solution)), [return_maps])))\n\t\tend,\n\t\tTestCases\n\t).\n\npartial_solution_response_to_json_struct_test() ->\n\tTestCases = [\n\t\t{#partial_solution_response{}, <<>>, <<>>},\n\t\t{#partial_solution_response{ indep_hash = <<\"H\">>, status = <<\"S\">>},\n\t\t\t\t<<\"H\">>, <<\"S\">>}\n\t],\n\tlists:foreach(\n\t\tfun({Case, ExpectedH, ExpectedStatus}) ->\n\t\t\t{Struct} = ar_serialize:dejsonify(ar_serialize:jsonify(\n\t\t\t\t\tar_serialize:partial_solution_response_to_json_struct(Case))),\n\t\t\t?assertEqual(ExpectedH,\n\t\t\t\t\tar_util:decode(proplists:get_value(<<\"indep_hash\">>, Struct))),\n\t\t\t?assertEqual(ExpectedStatus, proplists:get_value(<<\"status\">>, Struct))\n\t\tend,\n\t\tTestCases\n\t).\n\njobs_to_json_struct_test() ->\n\tTestCases = [\n\t\t#jobs{}\n\t\t#jobs{ seed = <<\"a\">> },\n\t\t#jobs{ jobs = [#job{ output = <<\"o\">>,\n\t\t\t\tglobal_step_number = 1,\n\t\t\t\tpartition_upper_bound = 100 }] },\n\t\t#jobs{ jobs = [#job{ output = <<\"o2\">>,\n\t\t\t\t\tglobal_step_number = 2,\n\t\t\t\t\tpartition_upper_bound = 100 }, #job{ output = <<\"o1\">>,\n\t\t\t\t\t\tglobal_step_number = 1,\n\t\t\t\t\t\tpartition_upper_bound = 99 }],\n\t\t\t\tpartial_diff = {12345, 6789},\n\t\t\t\tseed = <<\"gjhgjkghjhg\">>,\n\t\t\t\tnext_seed = <<\"dfdgfdg\">>,\n\t\t\t\tinterval_number = 23,\n\t\t\t\tnext_vdf_difficulty = 32434 }\n\t],\n\tlists:foreach(\n\t\tfun(Jobs) ->\n\t\t\t?assertEqual(Jobs,\n\t\t\t\t\tar_serialize:json_struct_to_jobs(\n\t\t\t\t\t\tar_serialize:dejsonify(ar_serialize:jsonify(\n\t\t\t\t\t\t\tar_serialize:jobs_to_json_struct(Jobs)))))\n\t\tend,\n\t\tTestCases\n\t).\n\nfootprint_to_json_map_test() ->\n\tAddr = crypto:strong_rand_bytes(32),\n\tTestCases = [\n\t\t{ar_intervals:new()},\n\t\t{ar_intervals:from_list([{3, 0}, {2048, 1024}])},\n\t\t{ar_intervals:from_list([{1024, 0}])},\n\t\t{ar_intervals:from_list([{3, 0}, {10000, 500}, {200000, 100000}])}\n\t],\n\tlists:foreach(\n\t\tfun(TestCase) ->\n\t\t\t{Intervals} = TestCase,\n\t\t\t?assertEqual(Intervals,\n\t\t\t\tar_serialize:json_map_to_footprint(jiffy:decode(\n\t\t\t\t\tjiffy:encode(ar_serialize:footprint_to_json_map(Intervals)),\n\t\t\t\t\t[return_maps])))\n\t\tend,\n\t\tTestCases\n\t)."
  },
  {
    "path": "apps/arweave/test/ar_start_from_block_tests.erl",
    "content": "-module(ar_start_from_block_tests).\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n\nstart_from_block_test_() ->\n    [\n\t\t{timeout, ?TEST_NODE_TIMEOUT, fun test_start_from_block/0}\n\t].\n\ntest_start_from_block() ->\n    [B0] = ar_weave:init(),\n\tar_test_node:start(B0),\n    ar_test_node:start_peer(peer1, B0),\n    ar_test_node:start_peer(peer2, B0),\n    ar_test_node:connect_to_peer(peer1),\n    ar_test_node:connect_to_peer(peer2),\n   \n    %% Mine a few blocks, shared by both peers\n    ar_test_node:mine(peer1),\n    ar_test_node:wait_until_height(peer1, 1),\n    ar_test_node:wait_until_height(peer2, 1),\n    ar_test_node:mine(peer2),\n    ar_test_node:wait_until_height(peer1, 2),\n    ar_test_node:wait_until_height(peer2, 2),\n    ar_test_node:mine(peer1),\n    ar_test_node:wait_until_height(peer1, 3),\n    ar_test_node:wait_until_height(peer2, 3),\n\n    %% Disconnect peers, and have peer1 mine 1 block, and peer2 mine 3\n    ar_test_node:disconnect_from(peer1),\n    ar_test_node:disconnect_from(peer2),\n\n    ar_test_node:mine(peer1),\n    ar_test_node:wait_until_height(peer1, 4),\n\n    ar_test_node:mine(peer2),\n    ar_test_node:wait_until_height(peer2, 4),\n    ar_test_node:mine(peer2),\n    ar_test_node:wait_until_height(peer2, 5),\n    ar_test_node:mine(peer2),\n    ar_test_node:wait_until_height(peer2, 6),\n\n    %% Reconnect the peers. This will orphan peer1's block\n    ar_test_node:connect_to_peer(peer1),\n    ar_test_node:connect_to_peer(peer2),\n\n    ar_test_node:wait_until_height(peer1, 6),\n    ar_test_node:wait_until_height(peer2, 6),\n    ar_test_node:wait_until_height(main, 6),\n\n    ar_test_node:disconnect_from(peer1),\n    ar_test_node:disconnect_from(peer2),\n\n    MainBI = ar_node:get_blocks(),\n\n    StartFrom = get_block_hash(4, MainBI),\n    StartMinus1 = get_block_hash(3, MainBI),\n\n    assert_block_index(peer1, 6, MainBI),\n    assert_block_index(peer2, 6, MainBI),\n    assert_reward_history(main, peer1, StartFrom),\n    assert_reward_history(main, peer2, StartFrom),\n    assert_reward_history(main, peer1, StartMinus1),\n    assert_reward_history(main, peer2, StartMinus1),\n\n    %% Have peer1 start_from_block\n    restart_from_block(peer1, StartFrom),\n    assert_start_from(main, peer1, 4),\n    restart_from_block(peer1, StartMinus1),\n    assert_start_from(main, peer1, 3),\n\n    %% Restart peer2 off of peer1\n    ar_test_node:start_peer(peer2, B0),\n    ar_test_node:remote_call(peer2, ar_test_node, connect_to_peer, [peer1]),\n    ar_test_node:wait_until_height(peer2, 3),\n\n    assert_start_from(main, peer1, 3),\n    assert_start_from(main, peer2, 3),\n\n    %% disconnect peer2 and mine a block on peer1\n    ar_test_node:remote_call(peer2, ar_test_node, disconnect_from, [peer1]),\n    ar_test_node:mine(peer1),\n    ar_test_node:wait_until_height(peer1, 4),\n\n    %% Confirm legacy block index still matches\n    assert_start_from(main, peer1, 3),\n\n    %% Restart peer2 off of peer1\n    ar_test_node:start_peer(peer2, B0),\n    ar_test_node:remote_call(peer2, ar_test_node, connect_to_peer, [peer1]),\n    ar_test_node:wait_until_height(peer2, 4),\n\n    assert_start_from(peer1, peer2, 4),\n\n    %% Mine a block on peer2\n    ar_test_node:mine(peer2),\n    ar_test_node:wait_until_height(peer2, 5),\n    ar_test_node:wait_until_height(peer1, 5),\n\n    assert_start_from(peer2, peer1, 5),\n\n    %% Have peer1 start_from_block one last time\n    Peer1BI = 
get_block_index(peer1),\n    restart_from_block(peer1, get_block_hash(4, Peer1BI)),\n    assert_start_from(peer2, peer1, 4),\n    ok.\n\n\nrestart_from_block(Peer, BH) ->\n    {ok, Config} = ar_test_node:get_config(Peer),\n    ok = ar_test_node:set_config(Peer, Config#config{\n        start_from_latest_state = false,\n        start_from_block = BH,\n        block_pollers = 0\n    }),\n    ar_test_node:restart(Peer),\n    ar_test_node:remote_call(Peer, ar_test_node, wait_until_syncs_genesis_data, []).\n\nassert_start_from(ExpectedPeer, Peer, Height) ->\n    BI = get_block_index(Peer),\n    StartFrom = get_block_hash(Height, BI),\n    StartMinus1 = get_block_hash(Height-1, BI),\n\n    assert_block_index(Peer, Height, BI),\n    assert_reward_history(ExpectedPeer, Peer, StartFrom),\n    assert_reward_history(ExpectedPeer, Peer, StartMinus1).\n\nassert_block_index(Peer, Height, ExpectedBI) ->\n    BI = get_block_index(Peer),\n    BITail = lists:nthtail(length(BI)-Height-1, BI),\n    ExpectedBITail = lists:nthtail(length(ExpectedBI)-Height-1, ExpectedBI),\n    ?assertEqual(ExpectedBITail, BITail,\n        io:format(\"Block Index mismatch for peer ~s\", [Peer])).\n\nassert_reward_history(ExpectedPeer, Peer, H) ->\n    RewardHistory = get_reward_history(Peer, H),\n    {B, _} = ar_test_node:remote_call(ExpectedPeer, ar_block_cache, get_block_and_status, [block_cache, H]),\n    ExpectedRewardHistory = B#block.reward_history,\n\n    ?assertEqual(ExpectedRewardHistory, RewardHistory).\n\nget_block_hash(Height, BI) ->\n    {H, _, _} = lists:nth(length(BI) - Height, BI),\n    H.\n\nget_block_index(Peer) ->\n    ar_test_node:remote_call(Peer, ar_node, get_blocks, []).\n\nget_reward_history(Peer, H) ->\n    PeerIP = ar_test_node:peer_ip(Peer),\n    case ar_http:req(#{\n        peer => PeerIP,\n        method => get,\n        path => \"/reward_history/\" ++ binary_to_list(ar_util:encode(H)),\n        timeout => 30000\n    }) of\n        {ok, {{<<\"200\">>, _}, _, Body, _, _}} ->\n            case ar_serialize:binary_to_reward_history(Body) of\n                {ok, RewardHistory} ->\n                    RewardHistory;\n                {error, Error} ->\n                    Error\n            end;\n        Reply ->\n            Reply\n    end.\n"
  },
  {
    "path": "apps/arweave/test/ar_sync_record_tests.erl",
    "content": "-module(ar_sync_record_tests).\n\n-include(\"ar.hrl\").\n-include(\"ar_consensus.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\nsync_record_test_() ->\n\t[\n\t\t{timeout, 120, fun test_sync_record/0}\n\t].\n\ntest_sync_record() ->\n\tSleepTime = 1000,\n\tDiskPoolStart = ar_block:partition_size(),\n\tPartitionStart = ar_block:partition_size() - ?DATA_CHUNK_SIZE,\n\tWeaveSize = 4 * ?DATA_CHUNK_SIZE,\n\t[B0] = ar_weave:init([], 1, WeaveSize),\n\tRewardAddr = ar_wallet:to_address(ar_wallet:new_keyfile()),\n\t{ok, Config} = arweave_config:get_env(),\n\ttry\n\t\tPartition = {ar_block:partition_size(), 0, {composite, RewardAddr, 1}},\n\t\tPartitionID = ar_storage_module:id(Partition),\n\t\tStorageModules = [Partition],\n\t\tar_test_node:start(B0, RewardAddr, Config, StorageModules),\n\t\tOptions = #{ format => etf, random_subset => false },\n\n\t\t%% Genesis data only\n\t\t{ok, Binary1} = ar_global_sync_record:get_serialized_sync_record(Options),\n\t\t{ok, Global1} = ar_intervals:safe_from_etf(Binary1),\n\n\t\t?assertEqual([{1048576, 0}], ar_intervals:to_list(Global1)),\n\t\t?assertEqual(not_found,\n\t\t\tar_sync_record:get_interval(DiskPoolStart+1, ar_data_sync, ?DEFAULT_MODULE)),\n\t\t?assertEqual({1048576, 0}, ar_sync_record:get_interval(1, ar_data_sync, PartitionID)),\n\n\t\t%% Add a diskpool chunk\n\t\tar_sync_record:add(\n\t\t\tDiskPoolStart+?DATA_CHUNK_SIZE, DiskPoolStart, unpacked, ar_data_sync, ?DEFAULT_MODULE),\n\t\ttimer:sleep(SleepTime),\n\t\t{ok, Binary2} = ar_global_sync_record:get_serialized_sync_record(Options),\n\t\t{ok, Global2} = ar_intervals:safe_from_etf(Binary2),\n\n\t\t?assertEqual([{1048576, 0},{DiskPoolStart+?DATA_CHUNK_SIZE,DiskPoolStart}],\n\t\t\tar_intervals:to_list(Global2)),\n\t\t?assertEqual({DiskPoolStart+?DATA_CHUNK_SIZE,DiskPoolStart},\n\t\t\tar_sync_record:get_interval(DiskPoolStart+1, ar_data_sync, ?DEFAULT_MODULE)),\n\t\t?assertEqual({1048576, 0}, ar_sync_record:get_interval(1, ar_data_sync, PartitionID)),\n\n\t\t%% Remove the diskpool chunk\n\t\tar_sync_record:delete(\n\t\t\tDiskPoolStart+?DATA_CHUNK_SIZE, DiskPoolStart, ar_data_sync, ?DEFAULT_MODULE),\n\t\ttimer:sleep(SleepTime),\n\t\t{ok, Binary3} = ar_global_sync_record:get_serialized_sync_record(Options),\n\t\t{ok, Global3} = ar_intervals:safe_from_etf(Binary3),\n\t\t?assertEqual([{1048576, 0},{DiskPoolStart+?DATA_CHUNK_SIZE,DiskPoolStart}],\n\t\t\tar_intervals:to_list(Global3)),\n\t\t%% We need to explicitly declare global removal\n\t\tar_events:send(sync_record,\n\t\t\t\t{global_remove_range, DiskPoolStart, DiskPoolStart+?DATA_CHUNK_SIZE}),\n\t\ttrue = ar_util:do_until(\n\t\t\t\tfun() ->\n\t\t\t\t\t{ok, Binary4} = ar_global_sync_record:get_serialized_sync_record(Options),\n\t\t\t\t\t{ok, Global4} = ar_intervals:safe_from_etf(Binary4),\n\t\t\t\t\t[{1048576, 0}] == ar_intervals:to_list(Global4) end,\n\t\t\t\t200,\n\t\t\t\t5000),\n\n\t\t%% Add a storage module chunk\n\t\tar_sync_record:add(\n\t\t\tPartitionStart+?DATA_CHUNK_SIZE, PartitionStart, unpacked, ar_data_sync, PartitionID),\n\t\ttimer:sleep(SleepTime),\n\t\t{ok, Binary5} = ar_global_sync_record:get_serialized_sync_record(Options),\n\t\t{ok, Global5} = ar_intervals:safe_from_etf(Binary5),\n\n\t\t?assertEqual([{1048576, 0},{PartitionStart+?DATA_CHUNK_SIZE,PartitionStart}],\n\t\t\tar_intervals:to_list(Global5)),\n\t\t?assertEqual(not_found,\n\t\t\tar_sync_record:get_interval(DiskPoolStart+1, ar_data_sync, ?DEFAULT_MODULE)),\n\t\t?assertEqual({1048576, 0}, 
ar_sync_record:get_interval(1, ar_data_sync, PartitionID)),\n\t\t?assertEqual({PartitionStart+?DATA_CHUNK_SIZE, PartitionStart},\n\t\t\t\tar_sync_record:get_interval(PartitionStart+1, ar_data_sync, PartitionID)),\n\n\t\t%% Remove the storage module chunk\n\t\tar_sync_record:delete(\n\t\t\tPartitionStart+?DATA_CHUNK_SIZE, PartitionStart, ar_data_sync, PartitionID),\n\t\ttimer:sleep(SleepTime),\n\t\t?assertEqual([{1048576, 0},{PartitionStart+?DATA_CHUNK_SIZE,PartitionStart}],\n\t\t\tar_intervals:to_list(Global5)),\n\t\tar_events:send(sync_record,\n\t\t\t\t{global_remove_range, PartitionStart, PartitionStart+?DATA_CHUNK_SIZE}),\n\t\ttrue = ar_util:do_until(\n\t\t\t\tfun() ->\n\t\t\t\t\t{ok, Binary6} = ar_global_sync_record:get_serialized_sync_record(Options),\n\t\t\t\t\t{ok, Global6} = ar_intervals:safe_from_etf(Binary6),\n\t\t\t\t\t[{1048576, 0}] == ar_intervals:to_list(Global6) end,\n\t\t\t\t200,\n\t\t\t\t1000),\n\t\t?assertEqual(not_found,\n\t\t\tar_sync_record:get_interval(DiskPoolStart+1, ar_data_sync, ?DEFAULT_MODULE)),\n\t\t?assertEqual({1048576, 0}, ar_sync_record:get_interval(1, ar_data_sync, PartitionID)),\n\t\t?assertEqual(not_found,\n\t\t\t\tar_sync_record:get_interval(PartitionStart+1, ar_data_sync, PartitionID)),\n\n\t\t%% Add chunk to both diskpool and storage module\n\t\tar_sync_record:add(\n\t\t\tPartitionStart+?DATA_CHUNK_SIZE, PartitionStart, unpacked, ar_data_sync, ?DEFAULT_MODULE),\n\t\tar_sync_record:add(\n\t\t\tPartitionStart+?DATA_CHUNK_SIZE, PartitionStart, unpacked, ar_data_sync, PartitionID),\n\t\ttimer:sleep(SleepTime),\n\t\t{ok, Binary6} = ar_global_sync_record:get_serialized_sync_record(Options),\n\t\t{ok, Global6} = ar_intervals:safe_from_etf(Binary6),\n\n\t\t?assertEqual([{1048576, 0}, {PartitionStart+?DATA_CHUNK_SIZE,PartitionStart}],\n\t\t\tar_intervals:to_list(Global6)),\n\t\t?assertEqual({PartitionStart+?DATA_CHUNK_SIZE,PartitionStart},\n\t\t\tar_sync_record:get_interval(PartitionStart+1, ar_data_sync, ?DEFAULT_MODULE)),\n\t\t?assertEqual({1048576, 0}, ar_sync_record:get_interval(1, ar_data_sync, PartitionID)),\n\t\t?assertEqual({PartitionStart+?DATA_CHUNK_SIZE, PartitionStart},\n\t\t\tar_sync_record:get_interval(PartitionStart+1, ar_data_sync, PartitionID)),\n\n\t\t%% Now remove it from just the diskpool\n\t\tar_sync_record:delete(\n\t\t\tPartitionStart+?DATA_CHUNK_SIZE, PartitionStart, ar_data_sync, ?DEFAULT_MODULE),\n\t\ttimer:sleep(SleepTime),\n\t\t{ok, Binary7} = ar_global_sync_record:get_serialized_sync_record(Options),\n\t\t{ok, Global7} = ar_intervals:safe_from_etf(Binary7),\n\n\t\t?assertEqual([{1048576, 0}, {PartitionStart+?DATA_CHUNK_SIZE,PartitionStart}],\n\t\t\tar_intervals:to_list(Global7)),\n\t\t?assertEqual(not_found,\n\t\t\tar_sync_record:get_interval(DiskPoolStart+1, ar_data_sync, ?DEFAULT_MODULE)),\n\t\t?assertEqual({1048576, 0}, ar_sync_record:get_interval(1, ar_data_sync, PartitionID)),\n\t\t?assertEqual({PartitionStart+?DATA_CHUNK_SIZE, PartitionStart},\n\t\t\tar_sync_record:get_interval(PartitionStart+1, ar_data_sync, PartitionID)),\n\n\t\tar_test_node:stop()\n\tafter\n\t\tok = arweave_config:set_env(Config)\n\tend."
  },
  {
    "path": "apps/arweave/test/ar_test_data_sync.erl",
    "content": "-module(ar_test_data_sync).\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-export([setup_nodes/0, setup_nodes/1,\n\t\timperfect_split/1, build_proofs/3, build_proofs/5,\n        tx/1, tx/2, tx/3, tx/4, wait_until_syncs_chunk/2,\n        wait_until_syncs_chunks/1, wait_until_syncs_chunks/2, wait_until_syncs_chunks/3,\n        get_tx_offset/2, get_tx_data/1,\n        post_random_blocks/1, get_records_with_proofs/3, post_proofs/4, post_proofs/5,\n        generate_random_split/1, generate_random_original_split/1,\n        generate_random_standard_split/0, generate_random_original_v1_split/0]).\n\n-define(SYNC_CHUNKS_CHECK, 1000).\n%% Chunk sync can exceed 60s on slow CI (fork recovery, composite packing, many peers).\n-define(SYNC_CHUNKS_TIMEOUT, 120_000).\n\nget_records_with_proofs(B, TX, Chunks) ->\n\t[{B, TX, Chunks, Proof} || Proof <- build_proofs(B, TX, Chunks)].\n\nsetup_nodes() ->\n\tsetup_nodes(#{}).\n\nsetup_nodes(Options) ->\n\tAddr = maps:get(addr, Options, ar_wallet:to_address(ar_wallet:new_keyfile())),\n\tPeerAddr = maps:get(peer_addr, Options, ar_wallet:to_address(\n\t\t\tar_test_node:remote_call(peer1, ar_wallet, new_keyfile, []))),\n\tsetup_nodes2(Options#{ addr => Addr, peer_addr => PeerAddr }).\n\nsetup_nodes2(#{ peer_addr := PeerAddr } = Options) ->\n\tWallet = {_, Pub} = ar_wallet:new(),\n\t{B0, Options2} =\n\t\tcase maps:get(b0, Options, not_set) of\n\t\t\tnot_set ->\n\t\t\t\t[Genesis] = ar_weave:init([{ar_wallet:to_address(Pub), ?AR(200000), <<>>}], ar_retarget:switch_to_linear_diff(2)),\n\t\t\t\t{Genesis, Options#{ b0 => Genesis }};\n\t\t\tValue ->\n\t\t\t\t{Value, Options}\n\t\tend,\n\t{ok, Config} = arweave_config:get_env(),\n\tOptions3 = Options2#{ config => Config#config{ \n\t\tenable = Config#config.enable ++ [pack_served_chunks] } },\n\tar_test_node:start(Options3),\n\t{ok, PeerConfig} = ar_test_node:remote_call(peer1, arweave_config, get_env, []),\n\tar_test_node:start_peer(peer1, B0, PeerAddr, PeerConfig#config{ \n\t\tenable = Config#config.enable ++ [pack_served_chunks] }),\n\tar_test_node:connect_to_peer(peer1),\n\tWallet.\n\ntx(Wallet, SplitType) ->\n\ttx(Wallet, SplitType, v2, fetch).\n\nv1_tx(Wallet) ->\n\ttx(Wallet, original_split, v1, fetch).\n\ntx(Wallet, SplitType, Format) ->\n\ttx(Wallet, SplitType, Format, fetch).\n\ntx(Wallet, SplitType, Format, Reward) ->\n\ttx(#{ wallet => Wallet, split_type => SplitType,\n\t\t\tformat => Format, reward => Reward }).\n\ntx(Params) when is_map(Params) ->\n\t#{ wallet := Wallet, split_type := SplitType,\n\t\tformat := Format, reward := Reward } = Params,\n\tTXAnchorPeer = maps:get(tx_anchor_peer, Params, main),\n\tTXAnchor = ar_test_node:get_tx_anchor(TXAnchorPeer),\n\tGetFeePeer = maps:get(get_fee_peer, Params, peer1),\n\tcase {SplitType, Format} of\n\t\t{{fixed_data, DataRoot, Chunks}, v2} ->\n\t\t\tData = binary:list_to_bin(Chunks),\n\t\t\tArgs = #{ data_size => byte_size(Data), data_root => DataRoot,\n\t\t\t\t\tlast_tx => TXAnchor },\n\t\t\tArgs2 = case Reward of fetch -> Args; _ -> Args#{ reward => Reward } end,\n\t\t\t{ar_test_node:sign_tx(GetFeePeer, Wallet, Args2), Chunks};\n\t\t{{fixed_data, DataRoot, Chunks}, v1} ->\n\t\t\tData = binary:list_to_bin(Chunks),\n\t\t\tArgs = #{ data_size => byte_size(Data), data_root => DataRoot,\n\t\t\t\t\tlast_tx => TXAnchor, data => Data },\n\t\t\tArgs2 = case Reward of fetch -> Args; _ -> Args#{ reward => Reward } 
end,\n\t\t\t{ar_test_node:sign_v1_tx(GetFeePeer, Wallet, Args2), Chunks};\n\t\t{original_split, v1} ->\n\t\t\t{_, Chunks} = generate_random_original_v1_split(),\n\t\t\tData = binary:list_to_bin(Chunks),\n\t\t\tArgs = #{ data => Data, last_tx => TXAnchor },\n\t\t\tArgs2 = case Reward of fetch -> Args; _ -> Args#{ reward => Reward } end,\n\t\t\t{ar_test_node:sign_v1_tx(GetFeePeer, Wallet, Args2), Chunks};\n\t\t{original_split, v2} ->\n\t\t\t{DataRoot, Chunks} = generate_random_original_split(),\n\t\t\tData = binary:list_to_bin(Chunks),\n\t\t\tArgs = #{ data_size => byte_size(Data), data_root => DataRoot,\n\t\t\t\t\tlast_tx => TXAnchor },\n\t\t\tArgs2 = case Reward of fetch -> Args; _ -> Args#{ reward => Reward } end,\n\t\t\t{ar_test_node:sign_tx(GetFeePeer, Wallet, Args2), Chunks};\n\t\t{{custom_split, ChunkNumber}, v2} ->\n\t\t\t{DataRoot, Chunks} = generate_random_split(ChunkNumber),\n\t\t\tArgs = #{ data_size => byte_size(binary:list_to_bin(Chunks)),\n\t\t\t\t\tlast_tx => TXAnchor, data_root => DataRoot },\n\t\t\tArgs2 = case Reward of fetch -> Args; _ -> Args#{ reward => Reward } end,\n\t\t\tTX = ar_test_node:sign_tx(GetFeePeer, Wallet, Args2),\n\t\t\t{TX, Chunks};\n\t\t{standard_split, v2} ->\n\t\t\t{DataRoot, Chunks} = generate_random_standard_split(),\n\t\t\tData = binary:list_to_bin(Chunks),\n\t\t\tArgs = #{ data_size => byte_size(Data), data_root => DataRoot,\n\t\t\t\t\tlast_tx => TXAnchor },\n\t\t\tArgs2 = case Reward of fetch -> Args; _ -> Args#{ reward => Reward } end,\n\t\t\tTX = ar_test_node:sign_tx(GetFeePeer, Wallet, Args2),\n\t\t\t{TX, Chunks};\n\t\t{{original_split, ChunkNumber}, v2} ->\n\t\t\t{DataRoot, Chunks} = generate_random_original_split(ChunkNumber),\n\t\t\tData = binary:list_to_bin(Chunks),\n\t\t\tArgs = #{ data_size => byte_size(Data), data_root => DataRoot,\n\t\t\t\t\tlast_tx => TXAnchor },\n\t\t\tArgs2 = case Reward of fetch -> Args; _ -> Args#{ reward => Reward } end,\n\t\t\tTX = ar_test_node:sign_tx(GetFeePeer, Wallet, Args2),\n\t\t\t{TX, Chunks}\n\tend.\n\ngenerate_random_split(ChunkCount) ->\n\tChunks = lists:foldl(\n\t\tfun(_, Chunks) ->\n\t\t\tRandomSize =\n\t\t\t\tcase rand:uniform(3) of\n\t\t\t\t\t1 ->\n\t\t\t\t\t\t?DATA_CHUNK_SIZE;\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tOneThird = ?DATA_CHUNK_SIZE div 3,\n\t\t\t\t\t\tOneThird + rand:uniform(?DATA_CHUNK_SIZE - OneThird) - 1\n\t\t\t\tend,\n\t\t\tChunk = crypto:strong_rand_bytes(RandomSize),\n\t\t\t[Chunk | Chunks]\n\t\tend,\n\t\t[],\n\t\tlists:seq(1, case ChunkCount of random -> rand:uniform(5); _ -> ChunkCount end)),\n\tSizedChunkIDs = ar_tx:sized_chunks_to_sized_chunk_ids(\n\t\t\tar_tx:chunks_to_size_tagged_chunks(Chunks)),\n\t{DataRoot, _} = ar_merkle:generate_tree(SizedChunkIDs),\n\t{DataRoot, Chunks}.\n\ngenerate_random_original_v1_split() ->\n\t%% Make sure v1 data does not end with a digit, otherwise it's malleable.\n\tData = << (crypto:strong_rand_bytes(rand:uniform(?MiB)))/binary, <<\"a\">>/binary >>,\n\toriginal_split(Data).\n\ngenerate_random_original_split() ->\n\tData = << (crypto:strong_rand_bytes(rand:uniform(?MiB)))/binary >>,\n\toriginal_split(Data).\n\ngenerate_random_standard_split() ->\n\tData = crypto:strong_rand_bytes(rand:uniform(3 * ?DATA_CHUNK_SIZE)),\n\tv2_standard_split(Data).\n\ngenerate_random_original_split(ChunkCount) ->\n\tRandomSize = (ChunkCount - 1) * ?DATA_CHUNK_SIZE + rand:uniform(?DATA_CHUNK_SIZE),\n\tData = crypto:strong_rand_bytes(RandomSize),\n\toriginal_split(Data).\n\n%% @doc Split the way v1 transactions are split.\noriginal_split(Data) ->\n\tChunks = 
ar_tx:chunk_binary(?DATA_CHUNK_SIZE, Data),\n\tSizedChunkIDs = ar_tx:sized_chunks_to_sized_chunk_ids(\n\t\tar_tx:chunks_to_size_tagged_chunks(Chunks)\n\t),\n\t{DataRoot, _} = ar_merkle:generate_tree(SizedChunkIDs),\n\t{DataRoot, Chunks}.\n\n%% @doc Split the way v2 transactions are usually split (arweave-js does it\n%% this way as of the time this was written).\nv2_standard_split(Data) ->\n\tChunks = v2_standard_split_get_chunks(Data),\n\tSizedChunkIDs = ar_tx:sized_chunks_to_sized_chunk_ids(\n\t\tar_tx:chunks_to_size_tagged_chunks(Chunks)\n\t),\n\t{DataRoot, _} = ar_merkle:generate_tree(SizedChunkIDs),\n\t{DataRoot, Chunks}.\n\nv2_standard_split_get_chunks(Data) ->\n\tv2_standard_split_get_chunks(Data, [], 32 * 1024).\n\nv2_standard_split_get_chunks(Chunk, Chunks, _MinSize) when byte_size(Chunk) =< 262144 ->\n    lists:reverse([Chunk | Chunks]);\nv2_standard_split_get_chunks(<< _:262144/binary, LastChunk/binary >> = Rest, Chunks, MinSize)\n\t\twhen byte_size(LastChunk) < MinSize ->\n    FirstSize = round(math:ceil(byte_size(Rest) / 2)),\n    << Chunk1:FirstSize/binary, Chunk2/binary >> = Rest,\n    lists:reverse([Chunk2, Chunk1 | Chunks]);\nv2_standard_split_get_chunks(<< Chunk:262144/binary, Rest/binary >>, Chunks, MinSize) ->\n    v2_standard_split_get_chunks(Rest, [Chunk | Chunks], MinSize).\n\nimperfect_split(Data) ->\n\timperfect_split(?DATA_CHUNK_SIZE, Data).\n\nimperfect_split(_ChunkSize, Bin) when byte_size(Bin) == 0 ->\n\t[];\nimperfect_split(ChunkSize, Bin) when byte_size(Bin) < ChunkSize ->\n\t[Bin];\nimperfect_split(ChunkSize, Bin) ->\n\t<<ChunkBin:ChunkSize/binary, Rest/binary>> = Bin,\n\tHalfSize = ChunkSize div 2,\n\tcase byte_size(Rest) < HalfSize of\n\t\ttrue ->\n\t\t\t<<ChunkBin2:HalfSize/binary, Rest2/binary>> = Bin,\n\t\t\t%% If Rest is <<>>, both chunks are HalfSize - the chunks are invalid\n\t\t\t%% after the strict data split threshold.\n\t\t\t[ChunkBin2, Rest2];\n\t\tfalse ->\n\t\t\t[ChunkBin | imperfect_split(ChunkSize, Rest)]\n\tend.\n\nbuild_proofs(B, TX, Chunks) ->\n\tbuild_proofs(TX, Chunks, B#block.txs, B#block.weave_size - B#block.block_size,\n\t\t\tB#block.height).\n\nbuild_proofs(TX, Chunks, TXs, BlockStartOffset, Height) ->\n\tSizeTaggedTXs = ar_block:generate_size_tagged_list_from_txs(TXs, Height),\n\tSizeTaggedDataRoots = [{Root, Offset} || {{_, Root}, Offset} <- SizeTaggedTXs],\n\t{value, {_, TXOffset}} =\n\t\tlists:search(fun({{TXID, _}, _}) -> TXID == TX#tx.id end, SizeTaggedTXs),\n\t{TXRoot, TXTree} = ar_merkle:generate_tree(SizeTaggedDataRoots),\n\tTXPath = ar_merkle:generate_path(TXRoot, TXOffset - 1, TXTree),\n\tSizeTaggedChunks = ar_tx:chunks_to_size_tagged_chunks(Chunks),\n\t{DataRoot, DataTree} = ar_merkle:generate_tree(\n\t\tar_tx:sized_chunks_to_sized_chunk_ids(SizeTaggedChunks)\n\t),\n\tDataSize = byte_size(binary:list_to_bin(Chunks)),\n\tlists:foldl(\n\t\tfun\n\t\t\t({<<>>, _}, Proofs) ->\n\t\t\t\tProofs;\n\t\t\t({Chunk, ChunkOffset}, Proofs) ->\n\t\t\t\tTXStartOffset = TXOffset - DataSize,\n\t\t\t\tAbsoluteChunkEndOffset = BlockStartOffset + TXStartOffset + ChunkOffset,\n\t\t\t\tProof = #{\n\t\t\t\t\ttx_path => ar_util:encode(TXPath),\n\t\t\t\t\tdata_root => ar_util:encode(DataRoot),\n\t\t\t\t\tdata_path =>\n\t\t\t\t\t\tar_util:encode(\n\t\t\t\t\t\t\tar_merkle:generate_path(DataRoot, ChunkOffset - 1, DataTree)\n\t\t\t\t\t\t),\n\t\t\t\t\tchunk => ar_util:encode(Chunk),\n\t\t\t\t\toffset => integer_to_binary(ChunkOffset - 1),\n\t\t\t\t\tdata_size => integer_to_binary(DataSize)\n\t\t\t\t},\n\t\t\t\tProofs ++ [{AbsoluteChunkEndOffset, 
Proof}]\n\t\tend,\n\t\t[],\n\t\tSizeTaggedChunks\n\t).\n\nget_tx_offset(Node, TXID) ->\n\tPeer = ar_test_node:peer_ip(Node),\n\tar_http:req(#{\n\t\tmethod => get,\n\t\tpeer => Peer,\n\t\tpath => \"/tx/\" ++ binary_to_list(ar_util:encode(TXID)) ++ \"/offset\"\n\t}).\n\nget_tx_data(TXID) ->\n  {ok, Config} = arweave_config:get_env(),\n\tar_http:req(#{\n\t\tmethod => get,\n\t\tpeer => {127, 0, 0, 1, Config#config.port},\n\t\tpath => \"/tx/\" ++ binary_to_list(ar_util:encode(TXID)) ++ \"/data\"\n\t}).\n\npost_random_blocks(Wallet) ->\n\tpost_blocks(Wallet,\n\t\t[\n\t\t\t[v1],\n\t\t\tempty,\n\t\t\t[v2, v1, fixed_data, v2_no_data],\n\t\t\t[v2, v2_standard_split, v1, v2],\n\t\t\tempty,\n\t\t\t[v1, v2, v2, empty_tx, v2_standard_split],\n\t\t\t[v2, v2_no_data, v2_no_data, v1, v2_no_data],\n\t\t\t[empty_tx],\n\t\t\tempty,\n\t\t\t[v2_standard_split, v2_no_data, v2, v1, v2],\n\t\t\tempty,\n\t\t\t[fixed_data, fixed_data],\n\t\t\tempty,\n\t\t\t[fixed_data, fixed_data] % same tx_root as in the block before the previous one\n\t\t]\n\t).\n\npost_blocks(Wallet, BlockMap) ->\n\tFixedChunks = [crypto:strong_rand_bytes(256 * 1024) || _ <- lists:seq(1, 4)],\n\tSizedChunkIDs = ar_tx:sized_chunks_to_sized_chunk_ids(\n\t\t\tar_tx:chunks_to_size_tagged_chunks(FixedChunks)),\n\t{DataRoot, _} = ar_merkle:generate_tree(SizedChunkIDs),\n\tlists:foldl(\n\t\tfun\n\t\t\t({empty, Height}, Acc) ->\n\t\t\t\tar_test_node:mine(),\n\t\t\t\tar_test_node:assert_wait_until_height(peer1, Height),\n\t\t\t\tAcc;\n\t\t({TXMap, Height}, Acc) ->\n\t\t\tTXsWithChunks = lists:map(\n\t\t\t\tfun\n\t\t\t\t\t(v1) ->\n\t\t\t\t\t\t{v1_tx(Wallet), v1};\n\t\t\t\t\t(v2) ->\n\t\t\t\t\t\t{tx(Wallet, original_split), v2};\n\t\t\t\t\t(v2_no_data) -> % same as v2 but its data won't be submitted\n\t\t\t\t\t\t{tx(Wallet, {custom_split, random}), v2_no_data};\n\t\t\t\t\t(v2_standard_split) ->\n\t\t\t\t\t\t{tx(Wallet, standard_split), v2_standard_split};\n\t\t\t\t\t(empty_tx) ->\n\t\t\t\t\t\t{tx(Wallet, {custom_split, 0}), empty_tx};\n\t\t\t\t\t(fixed_data) ->\n\t\t\t\t\t\t{tx(Wallet, {fixed_data, DataRoot, FixedChunks}), fixed_data}\n\t\t\t\tend,\n\t\t\t\tTXMap\n\t\t\t),\n\t\t\tB = ar_test_node:post_and_mine(\n\t\t\t\t#{ miner => main, await_on => main },\n\t\t\t\t[TX || {{TX, _}, _} <- TXsWithChunks]\n\t\t\t),\n\t\t\tar_test_node:assert_wait_until_height(peer1, Height),\n\t\t\tAcc ++ [{B, TX, C} || {{TX, C}, Type} <- lists:sort(TXsWithChunks),\n\t\t\t\t\tType /= v2_no_data, Type /= empty_tx]\n\t\tend,\n\t\t[],\n\t\tlists:zip(BlockMap, lists:seq(1, length(BlockMap)))\n\t).\n\npost_proofs(Peer, B, TX, Chunks) ->\n\tpost_proofs(Peer, B, TX, Chunks, infinity).\npost_proofs(Peer, B, TX, Chunks, DiskPoolThreshold) ->\n\tProofs = build_proofs(B, TX, Chunks),\n\n\tlists:foreach(\n\t\tfun({_, Proof}) ->\n\t\t\tOffset = binary_to_integer(maps:get(offset, Proof)),\n\t\t\tHttpStatus = case Offset > DiskPoolThreshold of\n\t\t\t\ttrue -> <<\"303\">>;\n\t\t\t\tfalse -> <<\"200\">>\n\t\t\tend,\n\t\t\t{ok, {{HttpStatus, _}, _, _, _, _}} =\n\t\t\t\tar_test_node:post_chunk(Peer, ar_serialize:jsonify(Proof))\n\t\tend,\n\t\tProofs\n\t),\n\tProofs.\n\nwait_until_syncs_chunk(Offset, ExpectedProof) ->\n\ttrue = ar_util:do_until(\n\t\tfun() ->\n\t\t\tcase ar_test_node:get_chunk(main, Offset) of\n\t\t\t\t{ok, {{<<\"200\">>, _}, _, ProofJSON, _, _}} ->\n\t\t\t\t\tProof = jiffy:decode(ProofJSON, [return_maps]),\n\t\t\t\t\t{ok, {{<<\"200\">>, _}, _, NoChunkProofJSON, _, _}}\n\t\t\t\t\t\t= ar_test_node:get_chunk_proof(main, Offset),\n\t\t\t\t\tNoChunkProof = 
jiffy:decode(NoChunkProofJSON, [return_maps]),\n\t\t\t\t\t?assertEqual(maps:get(<<\"data_path\">>, Proof),\n\t\t\t\t\t\t\tmaps:get(<<\"data_path\">>, NoChunkProof)),\n\t\t\t\t\t?assertEqual(maps:get(<<\"tx_path\">>, Proof),\n\t\t\t\t\t\t\tmaps:get(<<\"tx_path\">>, NoChunkProof)),\n\t\t\t\t\tmaps:fold(\n\t\t\t\t\t\tfun\t(_Key, _Value, false) ->\n\t\t\t\t\t\t\t\tfalse;\n\t\t\t\t\t\t\t(Key, Value, true) ->\n\t\t\t\t\t\t\t\tmaps:get(atom_to_binary(Key), Proof, not_set) == Value\n\t\t\t\t\t\tend,\n\t\t\t\t\t\ttrue,\n\t\t\t\t\t\tExpectedProof\n\t\t\t\t\t);\n\t\t\t\t_ ->\n\t\t\t\t\tfalse\n\t\t\tend\n\t\tend,\n\t\t1000,\n\t\t20_000\n\t).\n\nwait_until_syncs_chunks(Proofs) ->\n\twait_until_syncs_chunks(main, Proofs, infinity).\n\nwait_until_syncs_chunks(Proofs, UpperBound) ->\n\twait_until_syncs_chunks(main, Proofs, UpperBound).\n\nwait_until_syncs_chunks(Node, Proofs, UpperBound) ->\n\tlists:foreach(\n\t\tfun({EndOffset, Proof}) ->\n\t\t\ttrue = ar_util:do_until(\n\t\t\t\tfun() ->\n\t\t\t\t\tcase EndOffset > UpperBound of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\ttrue;\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tcase ar_test_node:get_chunk(Node, EndOffset) of\n\t\t\t\t\t\t\t\t{ok, {{<<\"200\">>, _}, _, EncodedProof, _, _}} ->\n\t\t\t\t\t\t\t\t\tFetchedProof = ar_serialize:json_map_to_poa_map(\n\t\t\t\t\t\t\t\t\t\tjiffy:decode(EncodedProof, [return_maps])\n\t\t\t\t\t\t\t\t\t),\n\t\t\t\t\t\t\t\t\tExpectedProof = #{\n\t\t\t\t\t\t\t\t\t\tchunk => ar_util:decode(maps:get(chunk, Proof)),\n\t\t\t\t\t\t\t\t\t\ttx_path => ar_util:decode(maps:get(tx_path, Proof)),\n\t\t\t\t\t\t\t\t\t\tdata_path => ar_util:decode(maps:get(data_path, Proof))\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\tcompare_proofs(FetchedProof, ExpectedProof, EndOffset);\n\t\t\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t\t\tfalse\n\t\t\t\t\t\t\tend\n\t\t\t\t\tend\n\t\t\t\tend,\n\t\t\t\t?SYNC_CHUNKS_CHECK,\n\t\t\t\t?SYNC_CHUNKS_TIMEOUT\n\t\t\t)\n\t\tend,\n\t\tProofs\n\t).\n\ncompare_proofs(#{ chunk := C, data_path := D, tx_path := T },\n\t\t#{ chunk := C, data_path := D, tx_path := T }, _EndOffset) ->\n\ttrue;\ncompare_proofs(#{ chunk := C1, data_path := D1, tx_path := T1 } = FetchedProof,\n\t\t#{ chunk := C2, data_path := D2, tx_path := T2 }, EndOffset) ->\n\t?debugFmt(\"Proof mismatch for ~B data_path: ~p tx_path: ~p chunk: ~p \"\n\t\t\t\"expected chunk size :~B chunk size: ~B fetched proof packing: ~p.~n\",\n\t\t\t[EndOffset, D1 == D2, T1 == T2, C1 == C2, byte_size(C2), byte_size(C1),\n\t\t\t\tmaps:get(packing, FetchedProof, not_set)]),\n\tfalse.\n"
  },
  {
    "path": "apps/arweave/test/ar_test_inet_mock.erl",
    "content": "%%%===================================================================\n%%% @doc a module to mock `inet'.\n%%% @end\n%%%===================================================================\n-module(ar_test_inet_mock).\n-export([getaddrs/2]).\n\n%%--------------------------------------------------------------------\n%% @doc a function to mock `inet:getaddrs/2'. mostly used to test\n%% internal resolver feature in `ar_peers' and `ar_util'.\n%% @end\n%%--------------------------------------------------------------------\ngetaddrs(\"single.record.local\", _) ->\n\t{ok, [{127, 0, 0, 1}]};\ngetaddrs(\"multi.record.local\", _) ->\n\t{ok, [\n\t\t{127,0,0,2},\n\t\t{127,0,0,3},\n\t\t{127,0,0,4},\n\t\t{127,0,0,5}\n\t]};\ngetaddrs(\"error.record.local\", _) ->\n\t{error, not_found};\ngetaddrs(_, _) ->\n\t{error, invalid}.\n\n"
  },
  {
    "path": "apps/arweave/test/ar_test_node.erl",
    "content": "-module(ar_test_node).\n\n%% The new, more flexible, and more user-friendly interface.\n-export([boot_peers/1, wait_for_peers/1, get_config/1,set_config/2,\n\t\twait_until_joined/0, wait_until_joined/1,\n\t\trestart/0, restart/1, restart_with_config/1, restart_with_config/2,\n\t\tstart_other_node/4, start_node/2, start_node/3, start_coordinated/1, base_cm_config/1, mine/1,\n\t\twait_until_height/1, wait_until_height/2, wait_until_height/3, wait_until_height/4,\n\t\tdo_wait_until_height/2,\n\t\tassert_wait_until_height/2,\n\t\twait_until_mining_paused/1, http_get_block/2, get_blocks/1,\n\t\tmock_to_force_invalid_h1/0, mainnet_packing_mocks/0,\n\t\tget_difficulty_for_invalid_hash/0, invalid_solution/0,\n\t\tvalid_solution/0, new_mock/2, mock_function/3, unmock_module/1, remote_call/4,\n\t\tload_fixture/1,\n\t\tget_default_storage_module_packing/2, get_genesis_chunk/1,\n\t\tall_nodes/1, new_custom_size_rsa_wallet/1]).\n\n%% The \"legacy\" interface.\n-export([start/0, start/1, start/2, start/3, start/4,\n\t\tstop/0, stop/1, start_peer/2, start_peer/3, start_peer/4, peer_name/1, peer_port/1,\n\t\tstop_peers/1, stop_peer/1, connect_peers/2, connect_to_peer/1,\n\t\tdisconnect_peers/2, disconnect_from/1,\n\t\tjoin/2, join/3, join_on/1, join_on/2, rejoin_on/1,\n\t\tgenerate_join_config/0, generate_join_config/1,\n\t\tpeer_ip/1, get_node_namespace/0, get_unused_port/0,\n\t\twith_gossip_paused/2,\n\n\t\tmine/0, get_tx_anchor/1, get_tx_confirmations/2, get_tx_price/2, get_tx_price/3,\n\t\tget_optimistic_tx_price/2, get_optimistic_tx_price/3,\n\t\tsign_tx/1, sign_tx/2, sign_tx/3, sign_v1_tx/1, sign_v1_tx/2, sign_v1_tx/3,\n\n\t\twait_until_block_index/1, wait_until_block_index/2,\n\t\twait_until_receives_txs/1,\n\t\tassert_wait_until_receives_txs/1, assert_wait_until_receives_txs/2,\n\t\tpost_tx_to_peer/2, post_tx_to_peer/3, assert_post_tx_to_peer/2, assert_post_tx_to_peer/3,\n\t\tpost_and_mine/2, post_block/2, post_block/3, send_new_block/2,\n\t\tawait_post_block/2, await_post_block/3, sign_block/3, read_block_when_stored/1,\n\t\tread_block_when_stored/2, get_chunk/2, get_chunk/3, get_chunk_proof/2, post_chunk/2,\n\t\trandom_v1_data/1, assert_get_tx_data/3,\n\t\tassert_data_not_found/2, post_tx_json/2,\n\t\twait_until_syncs_genesis_data/0, wait_until_syncs_genesis_data/1,\n\n\t\tmock_functions/1, test_with_mocked_functions/2, test_with_mocked_functions/3]).\n\n-include(\"ar.hrl\").\n-include(\"ar_consensus.hrl\").\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n%% May occasionally take quite long on a slow CI server, expecially in tests\n%% with height >= 20 (2 difficulty retargets).\n-define(WAIT_UNTIL_BLOCK_HEIGHT_TIMEOUT, 500_000).\n-define(WAIT_UNTIL_RECEIVES_TXS_TIMEOUT, 500_000).\n\n%% Sometimes takes a while on a slow machine\n-define(PEER_START_TIMEOUT, 500_000).\n%% Set the maximum number of retry attempts\n-define(MAX_BOOT_RETRIES, 3).\n\n-define(MAX_MINERS, 3).\n\n% define check timeout and interval, used with ar_util:do_until/3.\n-define(NODE_READY_CHECK_INTERVAL, 200).\n-define(NODE_READY_CHECK_TIMEOUT, 500_000).\n-define(REMOTE_CALL_TIMEOUT, 500_000).\n-define(CONNECT_TO_PEER_TIMEOUT, 500_000).\n-define(BLOCK_INDEX_TIMEOUT, 500_000).\n-define(TEST_MOCKED_FUNCTIONS_TIMEOUT, 500). 
%% in seconds\n-define(POST_AND_MINE_TIMEOUT, 500_000).\n-define(READ_BLOCK_TIMEOUT, 500_000).\n-define(GET_TX_DATA_TIMEOUT, 200_000).\n-define(WAIT_UNTIL_JOINED_TIMEOUT, 200_000).\n-define(WAIT_SYNCS_DATA_TIMEOUT, 500_000).\n-define(WAIT_UNTIL_MINING_PAUSED_TIMEOUT, 60_000).\n-define(TEST_HTTP_CLIENT_KEEPALIVE, 4_000).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\nall_peers(test) ->\n\t[{test, peer1}, {test, peer2}, {test, peer3}, {test, peer4}];\nall_peers(e2e) ->\n\t[{e2e, peer1}, {e2e, peer2}].\n\nall_nodes(TestType) ->\n\t[{TestType, main} | all_peers(TestType)].\n\nnew_custom_size_rsa_wallet(Size) ->\n\tKeyType = ?RSA_KEY_TYPE,\n\tPublicExpnt = 65537,\n\t{[Expnt, Pub], [Expnt, Pub, Priv, P1, P2, E1, E2, C]} =\n\t\tcrypto:generate_key(rsa, {Size * 8, PublicExpnt}),\n\tKey =\n\t\tar_serialize:jsonify(\n\t\t\t{\n\t\t\t\t[\n\t\t\t\t\t{kty, <<\"RSA\">>},\n\t\t\t\t\t{ext, true},\n\t\t\t\t\t{e, ar_util:encode(Expnt)},\n\t\t\t\t\t{n, ar_util:encode(Pub)},\n\t\t\t\t\t{d, ar_util:encode(Priv)},\n\t\t\t\t\t{p, ar_util:encode(P1)},\n\t\t\t\t\t{q, ar_util:encode(P2)},\n\t\t\t\t\t{dp, ar_util:encode(E1)},\n\t\t\t\t\t{dq, ar_util:encode(E2)},\n\t\t\t\t\t{qi, ar_util:encode(C)}\n\t\t\t\t]\n\t\t\t}\n\t\t),\n\tFilename = ar_wallet:wallet_filepath(wallet_address, Pub, KeyType),\n\tcase filelib:ensure_dir(Filename) of\n\t\tok ->\n\t\t\tcase ar_storage:write_file_atomic(Filename, Key) of\n\t\t\t\tok ->\n\t\t\t\t\t{{KeyType, Priv, Pub}, {KeyType, Pub}};\n\t\t\t\tError2 ->\n\t\t\t\t\tError2\n\t\t\tend;\n\t\tError ->\n\t\t\tError\n\tend.\n\nboot_peers([]) ->\n\tok;\nboot_peers([{TestType, Node} | Peers]) ->\n\tboot_peer(TestType, Node),\n\tboot_peers(Peers);\nboot_peers(TestType) ->\n\tboot_peers(all_peers(TestType)).\n\nboot_peer(TestType, Node) ->\n\ttry_boot_peer(TestType, Node, ?MAX_BOOT_RETRIES).\n\ntry_boot_peer(_TestType, _Node, 0) ->\n\t%% You might log an error or handle this case specifically\n\t%% as per your application logic.\n\t{error, max_retries_exceeded};\ntry_boot_peer(TestType, Node, Retries) ->\n\tNodeName = peer_name(Node),\n\tPort = get_unused_port(),\n\tCookie = erlang:get_cookie(),\n\tPaths = code:get_path(),\n\tfilelib:ensure_dir(\"./.tmp\"),\n\tSchedulers = erlang:system_info(schedulers_online),\n\tRawCommand = string:join([\n\t\t\"erl +S ~B:~B\",\n\t\t\"-pa\", \"~s\",\n\t\t\"-config\", \"config/sys.config\",\n\t\t\"-noshell\",\n\t\t\"-name\", \"~s\",\n\t\t\"-setcookie\", \"~s\",\n\t\t\"-run ar main\",\n\t\t\"debug\",\n\t\t\"port\", \"~p\",\n\t\t\"data_dir\", \".tmp/data_~s_~s\",\n\t\t\"no_auto_join\",\n\t\t\"disable_replica_2_9_device_limit\",\n\t\t\"> ~s-~s.out 2>&1\"\n\t], \" \"),\n\tCommandParams = [\n\t\tSchedulers,\n\t\tSchedulers,\n\t\tstring:join(Paths, \" \"),\n\t\tNodeName,\n\t\tCookie,\n\t\tPort,\n\t\tatom_to_list(TestType),\n\t\tNodeName,\n\t\tNode,\n\t\tget_node_namespace()\n\t],\n\tCmd = io_lib:format(RawCommand, CommandParams),\n\trun_command(Node, Cmd),\n\tcase wait_until_node_is_ready(NodeName) of\n\t\t{ok, _Node} ->\n\t\t\tio:format(\"~s started at port ~p.~n\", [NodeName, Port]),\n\t\t\t{node(), NodeName};\n\t\t{error, Reason} ->\n\t\t\tio:format(\"Error starting ~s: ~p. 
Retries left: ~p~n\", [NodeName, Reason, Retries]),\n\t\t\ttry_boot_peer(TestType, Node, Retries - 1)\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc run a command in asynchronous way using `spawn/1' instead of\n%% using `&' from shell feature.\n%% @end \n%%--------------------------------------------------------------------\nrun_command(Node, Command) ->\n\tspawn(fun() -> run_command_init(Node, Command) end).\n\n%% @hidden\nrun_command_init(Node, Command) ->\n\tio:format(\"Launching peer (~p) ~p: ~s~n\", [self(), Node, Command]),\n\ttry\n\t\tResult = os:cmd(Command),\n\t\tio:format(\"command result: ~p~n\", [Result])\n\tcatch\n\t\tE:R:S ->\n\t\t\tio:format(\"failed command: ~p:~p:~p~n\", [E,R,S])\n\tend.\n\nwait_for_peers([]) ->\n\tok;\nwait_for_peers([{_TestType, Node} | Peers]) ->\n\twait_for_peer(Node),\n\twait_for_peers(Peers);\nwait_for_peers(TestType) ->\n\twait_for_peers(all_peers(TestType)).\n\nwait_for_peer(Node) ->\n\tremote_call(Node, application, ensure_all_started, [arweave, permanent], 60000).\n\nself_node() ->\n\tlist_to_atom(get_node()).\n\npeer_name(Node) ->\n\tlist_to_atom(\n\t\tatom_to_list(Node) ++ \"-\" ++ get_node_namespace() ++ \"@127.0.0.1\"\n\t).\n\npeer_port(Node) ->\n\tcase get({peer_port, Node}) of\n\t\tundefined ->\n\t\t\t{ok, Config} = ar_test_node:remote_call(Node, arweave_config, get_env, []),\n\t\t\tPort = Config#config.port,\n\t\t\tput({peer_port, Node}, Port),\n\t\t\tPort;\n\t\tPort ->\n\t\t\tPort\n\tend.\n\nstop_peers([]) ->\n\tok;\nstop_peers([{_TestType, Node} | Peers]) ->\n\tstop_peer(Node),\n\tstop_peers(Peers);\nstop_peers(TestType) ->\n\tstop_peers(all_peers(TestType)).\n\nstop_peer(Node) ->\n\ttry\n\t\trpc:call(peer_name(Node), init, stop, [], 30000)\n\tcatch\n\t\tE:R:S ->\n\t\t\tio:format(\"stop_peer error: ~p:~p:~p~n\", [E,R,S]),\n\t\t\t%% we don't care if the node is already stopped\n\t\t\tok\n\tend.\n\npeer_ip({external, Peer}) ->\n\tPeer;\npeer_ip(Node) ->\n\t{127, 0, 0, 1, peer_port(Node)}.\n\nwait_until_joined(Node) ->\n\tremote_call(Node, ar_test_node, wait_until_joined, []).\n\n%% @doc Wait until the node joins the network (initializes the state).\nwait_until_joined() ->\n\tar_util:do_until(\n\t\tfun() -> ar_node:is_joined() end,\n\t\t100,\n\t\t?WAIT_UNTIL_JOINED_TIMEOUT\n\t ).\n\nget_config(Node) ->\n\tremote_call(Node, arweave_config, get_env, []).\n\nset_config(Node, Config) ->\n\tremote_call(Node, arweave_config, set_env, [Config]).\n\nupdate_config(Config) ->\n\t{ok, BaseConfig} = arweave_config:get_env(),\n\tConfig2 = BaseConfig#config{\n\t\tstart_from_latest_state = Config#config.start_from_latest_state,\n\t\tauto_join = Config#config.auto_join,\n\t\tmining_addr = Config#config.mining_addr,\n\t\tsync_jobs = Config#config.sync_jobs,\n\t\treplica_2_9_workers = Config#config.replica_2_9_workers,\n\t\tdisk_pool_jobs = Config#config.disk_pool_jobs,\n\t\theader_sync_jobs = Config#config.header_sync_jobs,\n\t\tenable = Config#config.enable ++ BaseConfig#config.enable,\n\t\tmining_cache_size_mb = Config#config.mining_cache_size_mb,\n\t\tdebug = Config#config.debug,\n\t\tcoordinated_mining = Config#config.coordinated_mining,\n\t\tcm_api_secret = Config#config.cm_api_secret,\n\t\tcm_poll_interval = Config#config.cm_poll_interval,\n\t\tpeers = Config#config.peers,\n\t\tcm_exit_peer = Config#config.cm_exit_peer,\n\t\tcm_peers = Config#config.cm_peers,\n\t\tlocal_peers = Config#config.local_peers,\n\t\tmine = Config#config.mine,\n\t\tstorage_modules = 
Config#config.storage_modules,\n\t\trepack_in_place_storage_modules = Config#config.repack_in_place_storage_modules,\n\t\tallow_rebase = Config#config.allow_rebase,\n\t\t'http_client.http.keepalive' = ?TEST_HTTP_CLIENT_KEEPALIVE\n\t},\n\tok = arweave_config:set_env(Config2),\n\t?LOG_INFO(\"Updated Config:\"),\n\tar_config:log_config(Config2),\n\tConfig2.\n\nstart_other_node(Node, B0, Config, WaitUntilSync) ->\n\tremote_call(Node, ar_test_node, start_node, [B0, Config, WaitUntilSync], 90000).\n\n%% @doc Start a node with the given genesis block and configuration.\nstart_node(B0, Config) ->\n\tstart_node(B0, Config, true).\nstart_node(B0, Config, WaitUntilSync) ->\n\t?LOG_INFO(\"Starting node\"),\n\tclean_up_and_stop(),\n\tprometheus:start(),\n\tarweave_config:start(),\n\tok = arweave_limiter:start(),\n\t{ok, BaseConfig} = arweave_config:get_env(),\n\twrite_genesis_files(BaseConfig#config.data_dir, B0),\n\tupdate_config(Config),\n\tstart_dependencies(),\n\twait_until_joined(),\n\tcase WaitUntilSync of\n\t\ttrue ->\n\t\t\twait_until_syncs_genesis_data();\n\t\tfalse ->\n\t\t\tok\n\tend,\n\t?LOG_INFO(\"Node started\"),\n\terlang:node().\n\n%% @doc Launch the given number (>= 1, =< ?MAX_MINERS) of the mining nodes in the coordinated\n%% mode plus an exit node and a validator node.\n%% Return [Node1, ..., NodeN, ExitNode, ValidatorNode].\nstart_coordinated(MiningNodeCount) when MiningNodeCount >= 1, MiningNodeCount =< ?MAX_MINERS ->\n\t%% Set weave larger than what we'll cover with the 3 nodes so that every node can find\n\t%% a solution.\n\t[B0] = ar_weave:init([], get_difficulty_for_invalid_hash(), ar_block:partition_size() * 5),\n\tExitPeer = peer_ip(peer1),\n\tValidatorPeer = peer_ip(main),\n\tMinerNodes = lists:sublist([peer2, peer3, peer4], MiningNodeCount),\n\n\tBaseCMConfig = base_cm_config([ValidatorPeer]),\n\tRewardAddr = BaseCMConfig#config.mining_addr,\n\tExitNodeConfig = BaseCMConfig#config{\n\t\tmine = true,\n\t\tlocal_peers = [peer_ip(Peer) || Peer <- MinerNodes]\n\t},\n\tValidatorNodeConfig = BaseCMConfig#config{\n\t\tmine = false,\n\t\tpeers = [ExitPeer],\n\t\tcoordinated_mining = false,\n\t\tcm_api_secret = not_set\n\t},\n\n\t%% Start the validator first so that its HTTP server is available when\n\t%% other nodes validate it as a trusted peer during startup.\n\t%% Use peers=[] here because the exit node isn't configured yet.\n\tremote_call(main, ar_test_node, start_node,\n\t\t\t[B0, ValidatorNodeConfig#config{ peers = [] }]),\n\tremote_call(peer1, ar_test_node, start_node, [B0, ExitNodeConfig]), %% exit node\n\n\tlists:foreach(\n\t\tfun(I) ->\n\t\t\tMinerNode = lists:nth(I, MinerNodes),\n\t\t\tMinerPeers = lists:filter(fun(Peer) -> Peer /= MinerNode end, MinerNodes),\n\t\t\tMinerPeerIPs = [peer_ip(Peer) || Peer <- MinerPeers],\n\n\t\t\tMinerConfig = BaseCMConfig#config{\n\t\t\t\tcm_exit_peer = ExitPeer,\n\t\t\t\tcm_peers = MinerPeerIPs,\n\t\t\t\tlocal_peers = MinerPeerIPs ++ [ExitPeer],\n\t\t\t\tstorage_modules = get_cm_storage_modules(RewardAddr, I, MiningNodeCount)\n\t\t\t},\n\t\t\tremote_call(MinerNode, ar_test_node, start_node, [B0, MinerConfig])\n\t\tend,\n\t\tlists:seq(1, MiningNodeCount)\n\t),\n\n\tMinerNodes ++ [peer1, main].\n\nbase_cm_config(Peers) ->\n\tRewardAddr = ar_wallet:to_address(remote_call(peer1, ar_wallet, new_keyfile, [])),\n\t#config{\n\t\tmining_cache_size_mb = 128,\n\t\tstart_from_latest_state = true,\n\t\tauto_join = true,\n\t\tmining_addr = RewardAddr,\n\t\thashing_threads = 1,\n\t\tsync_jobs = 2,\n\t\tdisk_pool_jobs = 2,\n\t\theader_sync_jobs = 
2,\n\t\tenable = [search_in_rocksdb_when_mining, serve_tx_data_without_limits,\n\t\t\t\tserve_wallet_lists, pack_served_chunks, public_vdf_server],\n\t\tdebug = true,\n\t\tpeers = Peers,\n\t\tcoordinated_mining = true,\n\t\tcm_api_secret = <<\"test_coordinated_mining_secret\">>,\n\t\tcm_poll_interval = 2000,\n\t\tdisable_replica_2_9_device_limit = true,\n\t\t%% Disable rebasing by default to make the tests more reliable.\n\t\tallow_rebase = false\n\t}.\n\nmine() ->\n\tar_node_worker:mine_one_block().\n\n%% @doc Start mining on the given node. The node will be mining until it finds a block.\nmine(Node) ->\n\tremote_call(Node, ar_test_node, mine, []).\n\n%% @doc Fetch and decode a binary-encoded block by hash H from the HTTP API of the\n%% given node. Return {ok, B} | {error, Reason}.\nhttp_get_block(H, Node) ->\n\t{ok, Config} = remote_call(Node, arweave_config, get_env, []),\n\tPort = Config#config.port,\n\tPeer = {127, 0, 0, 1, Port},\n\tcase ar_http:req(#{ peer => Peer, method => get,\n\t\t\tpath => \"/block2/hash/\" ++ binary_to_list(ar_util:encode(H)) }) of\n\t\t{ok, {{<<\"200\">>, _}, _, BlockBin, _, _}} ->\n\t\t\tar_serialize:binary_to_block(BlockBin);\n\t\t{error, Reason} ->\n\t\t\t{error, Reason};\n\t\t{ok, {{StatusCode, _}, _, Body, _, _}} ->\n\t\t\t{error, {StatusCode, Body}}\n\tend.\n\nget_blocks(Node) ->\n\tremote_call(Node, ar_node, get_blocks, []).\n\ninvalid_solution() ->\n\t<<0:256>>.\n\nvalid_solution() ->\n\t<<255:256>>.\n\nmock_to_force_invalid_h1() ->\n\t{\n\t\tar_block, compute_h1,\n\t\tfun(H0, Nonce, Chunk1) ->\n\t\t\t%% First call the original compute_h1 function\n\t\t\tmeck:passthrough([H0, Nonce, Chunk1]),\n\t\t\t%% Then return invalid solutions\n\t\t\t{invalid_solution(), invalid_solution()}\n\t\tend\n\t}.\n\n%% @doc Mock out packing-related constants to replicate mainnet behavior.\nmainnet_packing_mocks() ->\n\t[\n\t\t{ar_block, partition_size, fun() -> 3_600_000_000_000 end},\n\t\t{ar_block, strict_data_split_threshold, fun() -> 30_607_159_107_830 end},\n\t\t{ar_storage_module, get_overlap, fun(_) -> 104_857_600 end},\n\t\t{ar_block, get_sub_chunks_per_replica_2_9_entropy, fun() -> 1024 end},\n\t\t{ar_block, get_replica_2_9_entropy_sector_size, fun() -> 3_515_875_328 end}\n\t].\n\nget_difficulty_for_invalid_hash() ->\n\t%% Set the difficulty just high enough to exclude the invalid_solution(), this lets\n\t%% us selectively disable one- or two-chunk mining in tests.\n\tbinary:decode_unsigned(invalid_solution(), big) + 1.\n\nload_fixture(Fixture) ->\n\tDir = filename:dirname(?FILE),\n\tFixtureFilename = filename:join([Dir, \"fixtures\", Fixture]),\n\t{ok, Data} = file:read_file(FixtureFilename),\n\tData.\n\n%%%===================================================================\n%%% Private functions.\n%%%===================================================================\n\nstart_dependencies() ->\n\tok = arweave_limiter:start(),\n\t{ok, _} = application:ensure_all_started(arweave, temporary),\n\tok.\n\nclean_up_and_stop() ->\n\tConfig = stop(),\n\t?LOG_DEBUG([{event, clean_up_and_stop}, {data_dir, Config#config.data_dir}]),\n\tok = filelib:ensure_dir(Config#config.data_dir),\n\t{ok, Entries} = file:list_dir_all(Config#config.data_dir),\n\tlists:foreach(\n\t\tfun\t(\"wallets\") ->\n\t\t\t\tok;\n\t\t\t(Entry) ->\n\t\t\t\t?LOG_DEBUG([{event, clean_up_and_stop},\n\t\t\t\t\t{delete, filename:join(Config#config.data_dir, Entry)}]),\n\t\t\t\tok = file:del_dir_r(filename:join(Config#config.data_dir, Entry))\n\t\tend,\n\t\tEntries\n\t).\n\nwrite_genesis_files(DataDir, B0) 
->\n\tBH = B0#block.indep_hash,\n\tBlockDir = filename:join(DataDir, ?BLOCK_DIR),\n\tok = filelib:ensure_dir(BlockDir ++ \"/\"),\n\tBlockFilepath = filename:join(BlockDir, binary_to_list(ar_util:encode(BH)) ++ \".bin\"),\n\tok = file:write_file(BlockFilepath, ar_serialize:block_to_binary(B0)),\n\tTXDir = filename:join(DataDir, ?TX_DIR),\n\tok = filelib:ensure_dir(TXDir ++ \"/\"),\n\tlists:foreach(\n\t\tfun(TX) ->\n\t\t\tTXID = TX#tx.id,\n\t\t\tTXFilepath = filename:join(TXDir, binary_to_list(ar_util:encode(TXID)) ++ \".json\"),\n\t\t\tTXJSON = ar_serialize:jsonify(ar_serialize:tx_to_json_struct(TX)),\n\t\t\tok = file:write_file(TXFilepath, TXJSON)\n\t\tend,\n\t\tB0#block.txs\n\t),\n\t_ = ar_kv:create_ets(),\n\t{ok, _} = ar_kv:start_link(),\n\tok = ar_kv:open(#{\n\t\tpath => filename:join([DataDir, ?ROCKS_DB_DIR, \"reward_history_db\"]),\n\t\tname => reward_history_db}),\n\tok = ar_kv:open(#{\n\t\tpath => filename:join([DataDir, ?ROCKS_DB_DIR, \"block_time_history_db\"]),\n\t\tname => block_time_history_db}),\n\tok = ar_kv:open(#{\n\t\tpath => filename:join([DataDir, ?ROCKS_DB_DIR, \"block_index_db\"]),\n\t\tname => block_index_db}),\n\tH = B0#block.indep_hash,\n\tWeaveSize = B0#block.weave_size,\n\tTXRoot = B0#block.tx_root,\n\tok = ar_kv:put(block_index_db, << 0:256 >>, term_to_binary({H, WeaveSize, TXRoot, <<>>})),\n\tok = ar_kv:put(reward_history_db, H, term_to_binary(hd(B0#block.reward_history))),\n\tcase ar_fork:height_2_7() of\n\t\t0 ->\n\t\t\tok = ar_kv:put(block_time_history_db, H,\n\t\t\t\t\tterm_to_binary(hd(B0#block.block_time_history)));\n\t\t_ ->\n\t\t\tok\n\tend,\n\tok = gen_server:stop(ar_kv),\n\t_ = ets:delete(ar_kv),\n\tWalletListDir = filename:join(DataDir, ?WALLET_LIST_DIR),\n\tok = filelib:ensure_dir(WalletListDir ++ \"/\"),\n\tRootHash = B0#block.wallet_list,\n\tWalletListFilepath =\n\t\tfilename:join(WalletListDir, binary_to_list(ar_util:encode(RootHash)) ++ \".json\"),\n\tWalletListJSON =\n\t\tar_serialize:jsonify(\n\t\t\tar_serialize:wallet_list_to_json_struct(B0#block.reward_addr, false,\n\t\t\t\t\tB0#block.account_tree)\n\t\t),\n\tok = file:write_file(WalletListFilepath, WalletListJSON).\n\nwait_until_syncs_data(Left, Right, WeaveSize, _Packing)\n  \t\twhen Left >= Right orelse\n\t\t\tLeft >= WeaveSize orelse\n\t\t\t(Right - Left < ?DATA_CHUNK_SIZE) orelse\n\t\t\t(WeaveSize - Left < ?DATA_CHUNK_SIZE) ->\n\tok;\nwait_until_syncs_data(Left, Right, WeaveSize, Packing) ->\n\ttrue = ar_util:do_until(\n\t\tfun() ->\n\t\t\tcase Packing of\n\t\t\t\tany ->\n\t\t\t\t\tcase ar_sync_record:is_recorded(Left + 1, ar_data_sync) of\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tfalse;\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\ttrue\n\t\t\t\t\tend;\n\t\t\t\t_ ->\n\t\t\t\t\tcase ar_sync_record:is_recorded(Left + 1, {ar_data_sync, Packing}) of\n\t\t\t\t\t\t{{true, _}, _} ->\n\t\t\t\t\t\t\ttrue;\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\tfalse\n\t\t\t\t\tend\n\t\t\tend\n\t\tend,\n\t\t1000,\n\t\t?WAIT_SYNCS_DATA_TIMEOUT\n\t),\n\twait_until_syncs_data(Left + ?DATA_CHUNK_SIZE, Right, WeaveSize, Packing).\n\nget_cm_storage_modules(RewardAddr, 1, 1) ->\n\t%% When there's only 1 node it covers all 3 storage modules.\n\tget_cm_storage_modules(RewardAddr, 1, 3) ++\n\tget_cm_storage_modules(RewardAddr, 2, 3) ++\n\tget_cm_storage_modules(RewardAddr, 3, 3);\nget_cm_storage_modules(RewardAddr, N, MiningNodeCount)\n\t\twhen MiningNodeCount == 2 orelse MiningNodeCount == 3 ->\n\t%% skip partitions so that no two nodes can mine the same range even accounting for ?OVERLAP\n\t%% Note that replica_2_9 modules do not have 
?OVERLAP.\n\tRangeNumber = lists:nth(N, [0, 2, 4]),\n\t[{ar_block:partition_size(), RangeNumber, get_default_storage_module_packing(RewardAddr, 0)}].\n\nremote_call(Node, Module, Function, Args) ->\n\tremote_call(Node, Module, Function, Args, ?REMOTE_CALL_TIMEOUT).\n\nremote_call(Node, Module, Function, Args, Timeout) ->\n\tNodeName = peer_name(Node),\n\tcase node() == NodeName of\n\t\ttrue ->\n\t\t\tapply(Module, Function, Args);\n\t\tfalse ->\n\t\t\tKey = rpc:async_call(NodeName, Module, Function, Args),\n\t\t\tResult = ar_util:do_until(\n\t\t\t\tfun() ->\n\t\t\t\t\tcase rpc:nb_yield(Key) of\n\t\t\t\t\t\ttimeout ->\n\t\t\t\t\t\t\tfalse;\n\t\t\t\t\t\t{value, Reply} ->\n\t\t\t\t\t\t\t{ok, Reply}\n\t\t\t\t\tend\n\t\t\t\tend,\n\t\t\t\t200,\n\t\t\t\tTimeout\n\t\t\t),\n\t\t\tcase Result of\n\t\t\t\t{error, timeout} ->\n\t\t\t\t\t?LOG_ERROR(\"Timed out (~pms) waiting for the rpc reply; module: ~p, function: ~p, \"\n\t\t\t\t\t\t\t\"args: ~p, node: ~p.~n\", [Timeout, Module, Function, Args, Node]);\n\t\t\t\t_ ->\n\t\t\t\t\tok\n\t\t\tend,\n\t\t\t?assertMatch({ok, _}, Result),\n\t\t\telement(2, Result)\n\tend.\n\n%%%===================================================================\n%%% Legacy public interface.\n%%%===================================================================\n\n%% @doc Start a fresh node.\nstart() ->\n\tstart(#{}).\n\nstart(Options) when is_map(Options) ->\n\tprometheus:start(),\n\tarweave_config:start(),\n\tok = arweave_limiter:start(),\n\tB0 =\n\t\tcase maps:get(b0, Options, not_set) of\n\t\t\tnot_set ->\n\t\t\t\thd(ar_weave:init());\n\t\t\tValue ->\n\t\t\t\tValue\n\t\tend,\n\tRewardAddr =\n\t\tcase maps:get(addr, Options, not_set) of\n\t\t\tnot_set ->\n\t\t\t\tar_wallet:to_address(ar_wallet:new_keyfile());\n\t\t\tAddr ->\n\t\t\t\tAddr\n\t\tend,\n\tConfig =\n\t\tcase maps:get(config, Options, not_set) of\n\t\t\tnot_set ->\n\t\t\t\telement(2, arweave_config:get_env());\n\t\t\tValue2 ->\n\t\t\t\tValue2\n\t\tend,\n\tStorageModules =\n\t\tcase maps:get(storage_modules, Options, not_set) of\n\t\t\tnot_set ->\n\t\t\t\t[{10 * ar_block:partition_size(), N,\n\t\t\t\t\t\tget_default_storage_module_packing(RewardAddr, N, Options)}\n\t\t\t\t\t|| N <- lists:seq(0, 8)];\n\t\t\tValue3 ->\n\t\t\t\tValue3\n\t\tend,\n\tstart(B0, RewardAddr, Config, StorageModules);\nstart(B0) ->\n\tstart(#{ b0 => B0 }).\nstart(B0, RewardAddr) ->\n\tstart(#{ b0 => B0, addr => RewardAddr }).\n\n%% @doc Start a fresh node with the given genesis block, mining address, and config.\nstart(B0, RewardAddr, Config) ->\n\tStorageModules = [{10 * ar_block:partition_size(), N, get_default_storage_module_packing(RewardAddr, N)}\n\t\t\t|| N <- lists:seq(0, 8)],\n\tstart(B0, RewardAddr, Config, StorageModules).\n\n%% @doc Start a fresh node with the given genesis block, mining address, config,\n%% and storage modules.\n%%\n%% Note: the Config provided here is written to disk. This is fine if it's the default Config,\n%% but if you've modified any of the Config fields for your test, please restore the default\n%% Config after the test is done. 
Otherwise the tests that run after yours may fail.\nstart(B0, RewardAddr, Config, StorageModules) ->\n\tclean_up_and_stop(),\n\tprometheus:start(),\n\tarweave_config:start(),\n\twrite_genesis_files(Config#config.data_dir, B0),\n\tok = arweave_config:set_env(Config#config{\n\t\tstart_from_latest_state = true,\n\t\tauto_join = true,\n\t\tpeers = [],\n\t\tcm_exit_peer = not_set,\n\t\tcm_peers = [],\n\t\tmining_addr = RewardAddr,\n\t\tstorage_modules = StorageModules,\n\t\tdisk_space_check_frequency = 1000,\n\t\tdisable_replica_2_9_device_limit = true,\n\t\tsync_jobs = 2,\n\t\tdisk_pool_jobs = 2,\n\t\theader_sync_jobs = 2,\n\t\tenable = [search_in_rocksdb_when_mining, serve_tx_data_without_limits,\n\t\t\t\tdouble_check_nonce_limiter, serve_wallet_lists | Config#config.enable],\n\t\t%% Disable rebasing by default to make the tests more reliable.\n\t\tallow_rebase = false,\n\t\t'http_client.http.keepalive' = ?TEST_HTTP_CLIENT_KEEPALIVE,\n\t\tdebug = true\n\t}),\n\tok = arweave_limiter:start(),\n\tstart_dependencies(),\n\twait_until_joined(),\n\twait_until_syncs_genesis_data().\n\nrestart() ->\n\t?LOG_INFO(\"Restarting node\"),\n\tstop(),\n\tstart_dependencies(),\n\twait_until_joined().\n\nrestart_with_config(Config) ->\n\t?LOG_INFO(\"Restarting node with new config\"),\n\tstop(),\n\n\tupdate_config(Config),\n\n\tstart_dependencies(),\n\twait_until_joined().\n\nrestart(Node) ->\n\tremote_call(Node, ?MODULE, restart, [], 90000).\n\nrestart_with_config(Node, Config) ->\n\tremote_call(Node, ?MODULE, restart_with_config, [Config], 90000).\n\nstart_peer(Node, Args) when is_map(Args) ->\n\t?LOG_DEBUG([{event, start_peer}, {peer, Node}]),\n\tremote_call(Node, ?MODULE, start, [Args], ?PEER_START_TIMEOUT),\n\twait_until_joined(Node),\n\twait_until_syncs_genesis_data(Node);\n\n%% @doc Start a fresh peer node with the given genesis block.\nstart_peer(Node, B0) ->\n\tstart_peer(Node, #{ b0 => B0 }).\n\n%% @doc Start a fresh peer node with the given genesis block and mining address.\nstart_peer(Node, B0, RewardAddr) ->\n\tstart_peer(Node, #{ b0 => B0, addr => RewardAddr }).\n\n%% @doc Start a fresh peer node with the given genesis block, mining address, and config.\nstart_peer(Node, B0, RewardAddr, Config) ->\n\tstart_peer(Node, #{ b0 => B0, addr => RewardAddr, config => Config }).\n\n%% @doc Fetch the fee estimation and the denomination (call GET /price2/[size])\n%% from the given node.\nget_tx_price(Node, DataSize) ->\n\tget_tx_price(Node, DataSize, <<>>).\n\n%% @doc Fetch the fee estimation and the denomination (call GET /price2/[size]/[addr])\n%% from the given node.\nget_tx_price(Node, DataSize, Target) ->\n\tPeer = peer_ip(Node),\n\tPath = \"/price/\" ++ integer_to_list(DataSize) ++ \"/\"\n\t\t\t++ binary_to_list(ar_util:encode(Target)),\n\t{ok, {{<<\"200\">>, _}, _, Reply, _, _}} =\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => Peer,\n\t\t\tpath => Path\n\t\t}),\n\tFee = binary_to_integer(Reply),\n\tPath2 = \"/price2/\" ++ integer_to_list(DataSize) ++ \"/\"\n\t\t\t++ binary_to_list(ar_util:encode(Target)),\n\t{ok, {{<<\"200\">>, _}, _, Reply2, _, _}} =\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => Peer,\n\t\t\tpath => Path2\n\t\t}),\n\tMap = jiffy:decode(Reply2, [return_maps]),\n\tcase binary_to_integer(maps:get(<<\"fee\">>, Map)) of\n\t\tFee ->\n\t\t\t{Fee, maps:get(<<\"denomination\">>, Map)};\n\t\tFee2 ->\n\t\t\t?assert(false, io_lib:format(\"Fee mismatch, expected: ~B, got: ~B.\", [Fee, Fee2]))\n\tend.\n\n%% @doc Fetch the optimistic fee estimation (call GET /price/[size]) from 
the given node.\nget_optimistic_tx_price(Node, DataSize) ->\n\tget_optimistic_tx_price(Node, DataSize, <<>>).\n\n%% @doc Fetch the optimistic fee estimation (call GET /price/[size]/[addr]) from the given\n%% node.\nget_optimistic_tx_price(Node, DataSize, Target) ->\n\tPath = \"/optimistic_price/\" ++ integer_to_list(DataSize) ++ \"/\"\n\t\t\t++ binary_to_list(ar_util:encode(Target)),\n\t{ok, {{<<\"200\">>, _}, _, Reply, _, _}} =\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => peer_ip(Node),\n\t\t\tpath => Path\n\t\t}),\n\tbinary_to_integer(maps:get(<<\"fee\">>, jiffy:decode(Reply, [return_maps]))).\n\n%% @doc Return a signed format=2 transaction with the minimum required fee fetched from\n%% GET /price/0 on the peer1 node.\nsign_tx(Wallet) ->\n\tsign_tx(peer1, Wallet, #{ format => 2 }, fun ar_tx:sign/2).\n\n%% @doc Return a signed format=2 transaction with properties from the given Args map.\n%% If the fee is not in Args, fetch it from GET /price/{data_size}\n%% or GET /price/{data_size}/{target} (if the target is specified) on the peer1 node.\nsign_tx(Wallet, Args) ->\n\tsign_tx(peer1, Wallet, insert_root(Args#{ format => 2 }), fun ar_tx:sign/2).\n\n%% @doc Like sign_tx/2, but use the given Node to fetch the fee estimation and\n%% block anchor from.\nsign_tx(Node, Wallet, Args) ->\n\tsign_tx(Node, Wallet, insert_root(Args#{ format => 2 }), fun ar_tx:sign/2).\n\n%% @doc Like sign_tx/1 but return a format=1 transaction.\nsign_v1_tx(Wallet) ->\n\tsign_tx(peer1, Wallet, #{}, fun ar_tx:sign_v1/2).\n\n%% @doc Like sign_tx/2 but return a format=1 transaction.\nsign_v1_tx(Wallet, TXParams) ->\n\tsign_tx(peer1, Wallet, TXParams, fun ar_tx:sign_v1/2).\n\n%% @doc Like sign_tx/3 but return a format=1 transaction.\nsign_v1_tx(Node, Wallet, Args) ->\n\tsign_tx(Node, Wallet, Args, fun ar_tx:sign_v1/2).\n\n%%%===================================================================\n%%% Legacy private functions.\n%%%===================================================================\n\ninsert_root(Params) ->\n\tcase {maps:get(data, Params, <<>>), maps:get(data_root, Params, <<>>)} of\n\t\t{<<>>, _} ->\n\t\t\tParams;\n\t\t{Data, <<>>} ->\n\t\t\tTX = ar_tx:generate_chunk_tree(#tx{ data = Data }),\n\t\t\tParams#{ data_root => TX#tx.data_root };\n\t\t_ ->\n\t\t\tParams\n\tend.\n\nsign_tx(Node, Wallet, Args, SignFun) ->\n\t{_, {_, Pub}} = Wallet,\n\tData = maps:get(data, Args, <<>>),\n\tDataSize = maps:get(data_size, Args, byte_size(Data)),\n\tFormat = maps:get(format, Args, 1),\n\t{Fee, Denomination} = get_tx_price(Node, DataSize, maps:get(target, Args, <<>>)),\n\tFee2 =\n\t\tcase {Format, maps:get(reward, Args, none)} of\n\t\t\t{1, none} ->\n\t\t\t\t%% Make sure the v1 tx is not malleable by assigning a fee with only\n\t\t\t\t%% the first digit being non-zero.\n\t\t\t\tFirstDigit = binary_to_integer(binary:part(integer_to_binary(Fee), {0, 1})),\n\t\t\t\tLen = length(integer_to_list(Fee)),\n\t\t\t\tFee3 = trunc((FirstDigit + 1) * math:pow(10, Len - 1)),\n\t\t\t\tFee3;\n\t\t\t{_, none} ->\n\t\t\t\tFee;\n\t\t\t{_, AssignedFee} ->\n\t\t\t\tAssignedFee\n\t\tend,\n\tSignFun(\n\t\t(ar_tx:new())#tx{\n\t\t\towner = Pub,\n\t\t\treward = Fee2,\n\t\t\tdata = Data,\n\t\t\ttarget = maps:get(target, Args, <<>>),\n\t\t\tquantity = maps:get(quantity, Args, 0),\n\t\t\ttags = maps:get(tags, Args, []),\n\t\t\tlast_tx = maps:get(last_tx, Args, get_tx_anchor(Node)),\n\t\t\tdata_size = DataSize,\n\t\t\tdata_root = maps:get(data_root, Args, <<>>),\n\t\t\tformat = Format,\n\t\t\tdenomination = maps:get(denomination, Args, 
Denomination)\n\t\t},\n\t\tWallet\n\t).\n\nstop() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tcase stop_application(arweave, 60000) of\n\t\tok ->\n\t\t\tok;\n\t\t{error, {not_started, arweave}} ->\n\t\t\tok;\n\t\t{error, timeout} ->\n\t\t\t?LOG_WARNING([{event, application_stop_timeout}, {app, arweave}]),\n\t\t\tforce_stop_application(arweave)\n\tend,\n\tar:stop_dependencies(),\n\tConfig.\n\nstop_application(App, Timeout) ->\n\tParent = self(),\n\tRef = make_ref(),\n\tPid = spawn(fun() -> Parent ! {Ref, application:stop(App)} end),\n\treceive\n\t\t{Ref, Result} ->\n\t\t\tResult\n\tafter Timeout ->\n\t\texit(Pid, kill),\n\t\t{error, timeout}\n\tend.\n\nforce_stop_application(App) ->\n\tcase application_controller:get_master(App) of\n\t\tMaster when is_pid(Master) ->\n\t\t\texit(Master, kill),\n\t\t\ttimer:sleep(1000);\n\t\t_ ->\n\t\t\tok\n\tend.\n\nstop(Node) ->\n\tremote_call(Node, ar_test_node, stop, []).\n\nrejoin_on(#{ node := Node, join_on := JoinOnNode } = Options) ->\n\tConfig = maps:get(config, Options, generate_join_config(Node)),\n\tjoin_on(#{ node => Node, join_on => JoinOnNode, config => Config }, true).\n\ngenerate_join_config(Node) ->\n\tremote_call(Node, ar_test_node, generate_join_config, []).\n\ngenerate_join_config() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tRewardAddr = ar_wallet:to_address(ar_wallet:new_keyfile()),\n\tStorageModules = [{ar_block:partition_size(), N,\n\t\t\tget_default_storage_module_packing(RewardAddr, N)} || N <- lists:seq(0, 4)],\n\tConfig#config{\n\t\tmining_addr = RewardAddr,\n\t\tstorage_modules = StorageModules\n\t}.\n\njoin_on(Params) ->\n\tjoin_on(Params, false).\n\njoin_on(#{ node := Node, join_on := JoinOnNode } = Params, Rejoin) ->\n\tConfig = maps:get(config, Params, generate_join_config(Node)),\n\tremote_call(Node, ar_test_node, join, [JoinOnNode, Rejoin, Config], ?REMOTE_CALL_TIMEOUT).\n\njoin(JoinOnNode, Rejoin) ->\n\tjoin(JoinOnNode, Rejoin, generate_join_config()).\n\njoin(JoinOnNode, Rejoin, Config) ->\n\tPeer = peer_ip(JoinOnNode),\n\tcase Rejoin of\n\t\ttrue ->\n\t\t\tstop();\n\t\tfalse ->\n\t\t\tclean_up_and_stop()\n\tend,\n\tprometheus:start(),\n\tarweave_config:start(),\n\tok = arweave_config:set_env(Config#config{\n\t\tstart_from_latest_state = false,\n\t\tauto_join = true,\n\t\tpeers = [Peer],\n\t\t'http_client.http.keepalive' = ?TEST_HTTP_CLIENT_KEEPALIVE\n\t}),\n\tstart_dependencies(),\n\twait_until_joined(),\n\twhereis(ar_node_worker).\n\nget_default_storage_module_packing(RewardAddr, Index) ->\n\tget_default_storage_module_packing(RewardAddr, Index, #{}).\n\nget_default_storage_module_packing(RewardAddr, Index, Options) ->\n\tcase {ar_fork:height_2_9(), ar_fork:height_2_8()} of\n\t\t{infinity, infinity} ->\n\t\t\t{spora_2_6, RewardAddr};\n\t\t{infinity, 0} ->\n\t\t\t{composite, RewardAddr, 1};\n\t\t{0, 0} ->\n\t\t\tcase maps:get(packing, Options, not_set) of\n\t\t\t\tspora_2_6 ->\n\t\t\t\t\t{spora_2_6, RewardAddr};\n\t\t\t\t{composite, PackingDiff} ->\n\t\t\t\t\t{composite, RewardAddr, PackingDiff};\n\t\t\t\treplica_2_9 ->\n\t\t\t\t\t{replica_2_9, RewardAddr};\n\t\t\t\tnot_set ->\n\t\t\t\t\t{replica_2_9, RewardAddr}\n\t\t\tend;\n\t\t_ ->\n\t\t\tcase maps:get(packing, Options, not_set) of\n\t\t\t\tspora_2_6 ->\n\t\t\t\t\t{spora_2_6, RewardAddr};\n\t\t\t\t{composite, PackingDiff} ->\n\t\t\t\t\t{composite, RewardAddr, PackingDiff};\n\t\t\t\treplica_2_9 ->\n\t\t\t\t\t{replica_2_9, RewardAddr};\n\t\t\t\tnot_set ->\n\t\t\t\t\tcase Index rem 3 of\n\t\t\t\t\t\t0 ->\n\t\t\t\t\t\t\t{spora_2_6, RewardAddr};\n\t\t\t\t\t\t1 
->\n\t\t\t\t\t\t\t{composite, RewardAddr, 1};\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t{replica_2_9, RewardAddr}\n\t\t\t\t\tend\n\t\t\tend\n\tend.\n\nconnect_peers(Node, Peer) ->\n\tremote_call(Node, ar_test_node, connect_to_peer, [Peer]).\n\nconnect_to_peer(Node) ->\n\t%% Unblock connections possibly blocked in the prior test code.\n\tar_http:unblock_peer_connections(),\n\tremote_call(Node, ar_http, unblock_peer_connections, []),\n\tPeer = peer_ip(Node),\n\tSelf = self_node(),\n\t%% Make requests to the nodes to make them discover each other.\n\t{ok, {{<<\"200\">>, <<\"OK\">>}, _, _, _, _}} =\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => Peer,\n\t\t\tpath => \"/info\",\n\t\t\theaders => p2p_headers(Self)\n\t\t}),\n\ttrue = ar_util:do_until(\n\t\tfun() ->\n\t\t\tPeers = remote_call(Node, ar_peers, get_peers, [lifetime]),\n\t\t\tlists:member(peer_ip(Self), Peers)\n\t\tend,\n\t\t100,\n\t\t?CONNECT_TO_PEER_TIMEOUT\n\t),\n\t{ok, {{<<\"200\">>, <<\"OK\">>}, _, _, _, _}} =\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => peer_ip(Self),\n\t\t\tpath => \"/info\",\n\t\t\theaders => p2p_headers(Node)\n\t\t}),\n\tar_util:do_until(\n\t\tfun() ->\n\t\t\tlists:member(Peer, ar_peers:get_peers(lifetime))\n\t\tend,\n\t\t100,\n\t        ?CONNECT_TO_PEER_TIMEOUT\n\t).\n\ndisconnect_peers(Node, Peer) ->\n\tremote_call(Node, ar_test_node, disconnect_from, [Peer]).\n\ndisconnect_from(Node) ->\n\tar_http:block_peer_connections(),\n\tremote_call(Node, ar_http, block_peer_connections, []).\n\nwith_gossip_paused(Node, Fun) when is_function(Fun, 0) ->\n\tok = remote_call(Node, ar_bridge, stop_gossip, []),\n\ttry\n\t\tFun()\n\tafter\n\t\tok = remote_call(Node, ar_bridge, start_gossip, [])\n\tend.\n\nwait_until_syncs_genesis_data(Node) ->\n\tok = remote_call(Node, ar_test_node, wait_until_syncs_genesis_data, [], 100_000).\n\nwait_until_syncs_genesis_data() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tar_util:do_until(\n\t\tfun() ->\n\t\t\tcase ar_node:get_current_block() of\n\t\t\t\tnot_joined -> false;\n\t\t\t\t_ -> true\n\t\t\tend\n\t\tend,\n\t\t1000,\n\t\t10_000\n\t),\n\tB = ar_node:get_current_block(),\n\tWeaveSize = B#block.weave_size,\n\t?LOG_INFO([{event, wait_until_syncs_genesis_data}, {status, initial_sync_started},\n\t\t{weave_size, WeaveSize}]),\n\t[wait_until_syncs_data(N * Size, (N + 1) * Size, WeaveSize, any)\n\t\t\t|| {Size, N, _Packing} <- Config#config.storage_modules],\n\t?LOG_INFO([{event, wait_until_syncs_genesis_data}, {status, initial_sync_complete}]),\n\t%% Once the data is stored in the disk pool, make the storage modules\n\t%% copy the missing data over from each other. 
This procedure is executed on startup\n\t%% but the disk pool did not have any data at the time.\n\t[\n\t\tgen_server:cast(\n\t\t\tlist_to_atom(\"ar_data_sync_\" ++ ar_storage_module:label(ar_storage_module:id(M))),\n\t\t\tsync_data\n\t\t) \n\t\t|| M <- Config#config.storage_modules\n\t],\n\t[wait_until_syncs_data(N * Size, (N + 1) * Size, WeaveSize, Packing)\n\t\t\t|| {Size, N, Packing} <- Config#config.storage_modules],\n\t?LOG_INFO([{event, wait_until_syncs_genesis_data}, {status, cross_module_sync_complete}]),\n\tok.\n\nwait_until_height(Node, TargetHeight) ->\n\twait_until_height(Node, TargetHeight, true, ?WAIT_UNTIL_BLOCK_HEIGHT_TIMEOUT).\n\nwait_until_height(Node, TargetHeight, Strict) ->\n\twait_until_height(Node, TargetHeight, Strict, ?WAIT_UNTIL_BLOCK_HEIGHT_TIMEOUT).\n\nwait_until_height(Node, TargetHeight, Strict, Timeout) ->\n\tBI = case Node of\n\t\tmain ->\n\t\t\tdo_wait_until_height(TargetHeight, Timeout);\n\t\t_ ->\n\t\t\tremote_call(Node, ?MODULE, do_wait_until_height, [TargetHeight, Timeout],\n\t\t\t\tTimeout + 500)\n\tend,\n\tcase Strict of\n\t\ttrue ->\n\t\t\tHeight = length(BI) - 1,\n\t\t\t?assert(Height >= TargetHeight,\n\t\t\t\tiolist_to_binary(io_lib:format(\n\t\t\t\t\t\"Node ~p not at the expected height. Expected: ~B, got: ~B\",\n\t\t\t\t\t[Node, TargetHeight, Height])));\n\t\tfalse ->\n\t\t\tok\n\tend,\n\tBI.\n\nwait_until_height(TargetHeight) ->\n\tdo_wait_until_height(TargetHeight, ?WAIT_UNTIL_BLOCK_HEIGHT_TIMEOUT).\n\ndo_wait_until_height(TargetHeight, Timeout) ->\n\t{ok, BI} = ar_util:do_until(\n\t\tfun() ->\n\t\t\tcase ar_node:get_blocks() of\n\t\t\t\tBI when length(BI) - 1 >= TargetHeight ->\n\t\t\t\t\t{ok, BI};\n\t\t\t\t_ ->\n\t\t\t\t\tfalse\n\t\t\tend\n\t\tend,\n\t\t100,\n\t\tTimeout\n\t),\n\tBI.\n\nassert_wait_until_height(Node, TargetHeight) ->\n\tBI = wait_until_height(Node, TargetHeight),\n\t?assert(is_list(BI), iolist_to_binary(io_lib:format(\"Got ~p.\", [BI]))),\n\tBI.\n\nwait_until_block_index(Node, BI) ->\n\tremote_call(Node, ?MODULE, wait_until_block_index, [BI]).\n\nwait_until_block_index(BI) ->\n\tar_util:do_until(\n\t\tfun() ->\n\t\t\tcase ar_node:get_blocks() of\n\t\t\t\tBI ->\n\t\t\t\t\tok;\n\t\t\t\t_ ->\n\t\t\t\t\tfalse\n\t\t\tend\n\t\tend,\n\t\t100,\n\t\t?BLOCK_INDEX_TIMEOUT\n\t).\n\nwait_until_mining_paused(Node) ->\n\tar_util:do_until(\n\t\tfun() ->\n\t\t\tcase Node of\n\t\t\t\tmain ->\n\t\t\t\t\tar_mining_server:is_paused();\n\t\t\t\t_ ->\n\t\t\t\t\tremote_call(Node, ar_mining_server, is_paused, [])\n\t\t\tend\n\t\tend,\n\t\t1000,\n\t\t?WAIT_UNTIL_MINING_PAUSED_TIMEOUT\n\t).\n\n%% Safely perform an rpc:call/4 and return results in a tagged tuple.\nsafe_remote_call(Node, Module, Function, Args) ->\n    try rpc:call(Node, Module, Function, Args, 30000) of\n        Result -> {ok, Result}\n    catch\n        error:Reason:S ->\n            %% Log the error if necessary\n            io:format(\"Remote call error: ~p:~p~n\", [Reason,S]),\n            {error, Reason};\n\tE:R:S ->\n            %% Catching other exceptions, returning a general error.\n            io:format(\"Remote call error: ~p:~p:~p~n\", [E,R,S]),\n            {error, unknown}\n    end.\n\nwait_until_node_is_ready(NodeName) ->\n    ar_util:do_until(\n        fun() ->\n            case net_adm:ping(NodeName) of\n                pong ->\n                    %% The node is reachable, doing a second check.\n\t\t    % safe_remote_call(NodeName, erlang, is_alive, []);\n\t\t    RemoteApps =\n\t\t\tcase safe_remote_call(NodeName, application, which_applications, []) of\n\t\t\t    
{ok, R} when is_list(R) -> R;\n\t\t\t    _ -> []\n\t\t\tend,\n\t\t    case lists:keyfind(arweave, 1, RemoteApps) of\n\t\t\t{arweave, _, _} -> {ok, ready};\n\t\t\t_ -> false\n\t\t    end;\n                pang ->\n                    %% Node is not reachable.\n                    false\n            end\n        end,\n        ?NODE_READY_CHECK_INTERVAL,\n        ?NODE_READY_CHECK_TIMEOUT\n    ).\n\nassert_wait_until_receives_txs(TXs) ->\n\t?assertEqual(ok, wait_until_receives_txs(TXs)).\n\nassert_wait_until_receives_txs(Node, TXs) ->\n\t?assertEqual(ok, wait_until_receives_txs(Node, TXs)).\n\nwait_until_receives_txs(Node, TXs) ->\n\tremote_call(Node, ?MODULE, wait_until_receives_txs, [TXs],\n\t\t\t\t\t?WAIT_UNTIL_RECEIVES_TXS_TIMEOUT + 500).\n\nwait_until_receives_txs(TXs) ->\n\tar_util:do_until(\n\t\tfun() ->\n\t\t\tMinedTXIDs = ar_node:get_ready_for_mining_txs(),\n\t\t\tcase lists:all(fun(TX) -> lists:member(TX#tx.id, MinedTXIDs) end, TXs) of\n\t\t\t\ttrue ->\n\t\t\t\t\tok;\n\t\t\t\t_ ->\n\t\t\t\t\tfalse\n\t\t\tend\n\t\tend,\n\t\t100,\n\t\t?WAIT_UNTIL_RECEIVES_TXS_TIMEOUT\n\t).\n\nassert_post_tx_to_peer(Node, TX) ->\n\tassert_post_tx_to_peer(Node, TX, true).\n\nassert_post_tx_to_peer(Node, TX, Wait) ->\n\tassert_post_tx_to_peer(Node, TX, Wait, 3).\n\nassert_post_tx_to_peer(Node, TX, Wait, Retries) ->\n\t{ok, {{<<\"200\">>, _}, _, <<\"OK\">>, _, _}} = post_tx_to_peer(Node, TX, Wait, Retries).\n\npost_tx_to_peer(Node, TX) ->\n\tpost_tx_to_peer(Node, TX, true).\n\npost_tx_to_peer(Node, TX, Wait) ->\n\tpost_tx_to_peer(Node, TX, Wait, 3).\n\npost_tx_to_peer(Node, TX, Wait, Retries) ->\n\tReply = post_tx_json(Node, ar_serialize:jsonify(ar_serialize:tx_to_json_struct(TX))),\n\tcase Reply of\n\t\t{ok, {{<<\"200\">>, _}, _, <<\"OK\">>, _, _}} ->\n\t\t\tcase Wait of\n\t\t\t\ttrue ->\n\t\t\t\t\tassert_wait_until_receives_txs(Node, [TX]);\n\t\t\t\tfalse ->\n\t\t\t\t\tok\n\t\t\tend;\n\t\t_ when Retries > 0 ->\n\t\t\t?debugFmt(\"Failed to post transaction, retrying. 
Error: ~p~nRetries: ~p~n\", [Reply, Retries]),\n\t\t\ttimer:sleep(3000),\n\t\t\tpost_tx_to_peer(Node, TX, Wait, Retries - 1);\n\t\t_ ->\n\t\t\tErrorInfo =\n\t\t\t\tcase Reply of\n\t\t\t\t\t{ok, {{StatusCode, _}, _, Text, _, _}} ->\n\t\t\t\t\t\t{StatusCode, Text};\n\t\t\t\t\tOther ->\n\t\t\t\t\t\tOther\n\t\t\t\tend,\n\t\t\tAddr =\n\t\t\t\tcase TX#tx.owner of\n\t\t\t\t\t<<>> ->\n\t\t\t\t\t\tDataSegment = ar_tx:generate_signature_data_segment(TX),\n\t\t\t\t\t\tar_wallet:to_address(\n\t\t\t\t\t\t\tar_wallet:recover_key(DataSegment, TX#tx.signature, TX#tx.signature_type),\n\t\t\t\t\t\t\tTX#tx.signature_type);\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tar_wallet:to_address(TX#tx.owner, TX#tx.signature_type)\n\t\t\tend,\n\t\t\t?debugFmt(\n\t\t\t\t\"Failed to post transaction.~nTX: ~s.~nTX format: ~B.~nTX fee: ~B.~n\"\n\t\t\t\t\"TX size: ~B.~nTX last_tx: ~s.~nTX owner: ~s.~nTX owner address: ~s.~n\"\n\t\t\t\t\"Error(s): ~p.~nReply: ~p.~n\",\n\t\t\t\t[ar_util:encode(TX#tx.id), TX#tx.format, TX#tx.reward,\n\t\t\t\t\tTX#tx.data_size, ar_util:encode(TX#tx.last_tx),\n\t\t\t\t\tar_util:encode(TX#tx.owner),\n\t\t\t\t\tar_util:encode(Addr),\n\t\t\t\t\tremote_call(Node, ar_tx_db, get_error_codes, [TX#tx.id]),\n\t\t\t\t\tErrorInfo]),\n\t\t\tnoop\n\tend,\n\tReply.\n\npost_tx_json(Node, JSON) ->\n\tar_http:req(#{\n\t\tmethod => post,\n\t\tpeer => peer_ip(Node),\n\t\tpath => \"/tx\",\n\t\tbody => JSON\n\t}).\n\nget_tx_anchor(Node) ->\n\t{ok, {{<<\"200\">>, _}, _, Reply, _, _}} =\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => peer_ip(Node),\n\t\t\tpath => \"/tx_anchor\"\n\t\t}),\n\tar_util:decode(Reply).\n\nget_tx_confirmations(Node, TXID) ->\n\tResponse =\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => peer_ip(Node),\n\t\t\tpath => \"/tx/\" ++ binary_to_list(ar_util:encode(TXID)) ++ \"/status\"\n\t\t}),\n\tcase Response of\n\t\t{ok, {{<<\"200\">>, _}, _, Reply, _, _}} ->\n\t\t\t{Status} = ar_serialize:dejsonify(Reply),\n\t\t\tlists:keyfind(<<\"number_of_confirmations\">>, 1, Status);\n\t\t{ok, {{<<\"404\">>, _}, _, _, _, _}} ->\n\t\t\t-1\n\tend.\n\nnew_mock(Module, Options) ->\n\tnew_mock(Module, Options, 5).\n\nnew_mock(_Module, _Options, 0) ->\n\tok;\nnew_mock(Module, Options, Retries) ->\n\tOptions2 = lists:usort([no_link | Options]),\n\ttry\n\t\tmeck:new(Module, Options2)\n\tcatch\n\t\t%% If the mock is already started, treat as success\n\t\terror:{already_started, _Pid} ->\n\t\t\tok;\n\t\t%% Retry on other errors\n\t\terror:E ->\n\t\t\t?debugFmt(\"ar_test_node (retries left ~p): Error creating mock for ~p: ~p\",\n\t\t\t\t\t[Retries - 1, Module, E]),\n\t\t\ttimer:sleep(1000),\n\t\t\tnew_mock(Module, Options, Retries - 1);\n\t\texit:E ->\n\t\t\t?debugFmt(\"ar_test_node (retries left ~p): Exit creating mock for ~p: ~p\",\n\t\t\t\t\t[Retries - 1, Module, E]),\n\t\t\ttimer:sleep(1000),\n\t\t\tnew_mock(Module, Options, Retries - 1)\n\tend.\n\nmock_function(Module, Fun, Mock) ->\n\tmock_function(Module, Fun, Mock, 5).\n\nmock_function(_Module, _Fun, _Mock, 0) ->\n\tok;\nmock_function(Module, Fun, Mock, Retries) ->\n\ttry\n\t\tmeck:expect(Module, Fun, Mock)\n\tcatch\n\t\terror:E ->\n\t\t\t?debugFmt(\"ar_test_node (retries left ~p): Error setting mock for ~p: ~p\",\n\t\t\t\t\t[Retries - 1, Module, E]),\n\t\t\ttimer:sleep(1000),\n\t\t\tmock_function(Module, Fun, Mock, Retries - 1);\n\t\texit:E ->\n\t\t\t?debugFmt(\"ar_test_node (retries left ~p): Exit setting mock for ~p: ~p\",\n\t\t\t\t\t[Retries - 1, Module, E]),\n\t\t\ttimer:sleep(1000),\n\t\t\tmock_function(Module, Fun, Mock, Retries - 
1)\n\tend.\n\nunmock_module(Module) ->\n\tunmock_module(Module, 5).\n\nunmock_module(_Module, 0) ->\n\tok;\nunmock_module(Module, Retries) ->\n\tPid = erlang:whereis(Module),\n\tcase is_pid(Pid) of\n\t\ttrue ->\n\t\t\tcatch sys:suspend(Pid, 5000);\n\t\tfalse ->\n\t\t\tok\n\tend,\n\ttry\n\t\ttimed_meck_unload(Module, 10000)\n\tcatch\n\t\terror:{not_mocked, Module} ->\n\t\t\tok;\n\t\terror:E ->\n\t\t\t?debugFmt(\"ar_test_node (retries left ~p): Error unloading mock for ~p: ~p\",\n\t\t\t\t\t[Retries - 1, Module, E]),\n\t\t\tresume_if_alive(Pid),\n\t\t\ttimer:sleep(1000),\n\t\t\tunmock_module(Module, Retries - 1);\n\t\texit:E ->\n\t\t\t?debugFmt(\"ar_test_node (retries left ~p): Exit unloading mock for ~p: ~p\",\n\t\t\t\t\t[Retries - 1, Module, E]),\n\t\t\tresume_if_alive(Pid),\n\t\t\ttimer:sleep(1000),\n\t\t\tunmock_module(Module, Retries - 1)\n\tafter\n\t\tresume_if_alive(Pid)\n\tend.\n\nresume_if_alive(Pid) ->\n\tcase is_pid(Pid) andalso erlang:is_process_alive(Pid) of\n\t\ttrue ->\n\t\t\tcatch sys:resume(Pid);\n\t\tfalse ->\n\t\t\tok\n\tend.\n\n%% meck:unload internally uses gen_server:call(..., infinity), so if the meck\n%% process is stuck handling a call from a blocked process, it will hang forever\n%% and the catch/retry logic above never fires. Wrap it with a finite timeout\n%% and kill the stuck meck process if needed.\n%%\n%% After killing the meck process we must restore the original module from the\n%% beam file on disk, because meck's terminate (which normally does this) did\n%% not run.\ntimed_meck_unload(Module, Timeout) ->\n\tCaller = self(),\n\tRef = make_ref(),\n\tWorker = spawn(fun() ->\n\t\ttry\n\t\t\tResult = meck:unload(Module),\n\t\t\tCaller ! {Ref, {ok, Result}}\n\t\tcatch\n\t\t\tClass:Reason ->\n\t\t\t\tCaller ! {Ref, {Class, Reason}}\n\t\tend\n\tend),\n\treceive\n\t\t{Ref, {ok, Result}} ->\n\t\t\tResult;\n\t\t{Ref, {error, Reason}} ->\n\t\t\terror(Reason);\n\t\t{Ref, {exit, Reason}} ->\n\t\t\texit(Reason)\n\tafter Timeout ->\n\t\texit(Worker, kill),\n\t\tMeckProcName = list_to_atom(atom_to_list(Module) ++ \"_meck\"),\n\t\tcase erlang:whereis(MeckProcName) of\n\t\t\tundefined ->\n\t\t\t\tok;\n\t\t\tMeckPid ->\n\t\t\t\texit(MeckPid, kill),\n\t\t\t\ttimer:sleep(100)\n\t\tend,\n\t\tforce_restore_module(Module),\n\t\texit(timed_meck_unload_timeout)\n\tend.\n\n%% After force-killing the meck process, the module is left with meck-generated\n%% stub code and no backing ETS tables. 
Restore the original beam from disk so\n%% processes don't crash in an infinite meck stub loop.\nforce_restore_module(Module) ->\n\tOrigName = list_to_atom(atom_to_list(Module) ++ \"_meck_original\"),\n\tcode:purge(Module),\n\tcode:delete(Module),\n\tcode:purge(OrigName),\n\tcode:delete(OrigName),\n\tcode:load_file(Module).\n\nmock_functions(Functions) ->\n\t{\n\t\tfun() ->\n\t\t\twith_meck_lock(fun() ->\n\t\t\t\tlists:foldl(\n\t\t\t\t\tfun({Module, Fun, Mock}, Mocked) ->\n\t\t\t\t\t\tNewMocked = case maps:get(Module, Mocked, false) of\n\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\tnew_mock(Module, [passthrough]),\n\t\t\t\t\t\t\t\tlists:foreach(\n\t\t\t\t\t\t\t\t\tfun({_TestType, Node}) ->\n\t\t\t\t\t\t\t\t\t\tremote_call(Node, ar_test_node, new_mock,\n\t\t\t\t\t\t\t\t\t\t\t\t[Module, [no_link, passthrough]])\n\t\t\t\t\t\t\t\t\tend,\n\t\t\t\t\t\t\t\t\tall_peers(test)),\n\t\t\t\t\t\t\t\tmaps:put(Module, true, Mocked);\n\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\tMocked\n\t\t\t\t\t\t\tend,\n\t\t\t\t\t\t\tmock_function(Module, Fun, Mock),\n\t\t\t\t\t\t\tlists:foreach(\n\t\t\t\t\t\t\t\tfun({_TestType, Node}) ->\n\t\t\t\t\t\t\t\t\tremote_call(Node, ar_test_node, mock_function,\n\t\t\t\t\t\t\t\t\t\t\t[Module, Fun, Mock])\n\t\t\t\t\t\t\t\tend,\n\t\t\t\t\t\t\t\tall_peers(test)),\n\t\t\t\t\t\t\tNewMocked\n\t\t\t\t\tend,\n\t\t\t\t\tmaps:new(),\n\t\t\t\t\tFunctions\n\t\t\t\t)\n\t\t\tend)\n\t\tend,\n\t\tfun(Mocked) ->\n\t\t\twith_meck_lock(fun() ->\n\t\t\t\tmaps:fold(\n\t\t\t\t\tfun(Module, _, _) ->\n\t\t\t\t\t\tunmock_module(Module),\n\t\t\t\t\t\tlists:foreach(\n\t\t\t\t\t\t\tfun({_TestType, Node}) ->\n\t\t\t\t\t\t\t\tremote_call(Node, ar_test_node, unmock_module, [Module])\n\t\t\t\t\t\t\tend,\n\t\t\t\t\t\t\tall_peers(test))\n\t\t\t\t\tend,\n\t\t\t\t\tnoop,\n\t\t\t\t\tMocked\n\t\t\t\t)\n\t\t\tend)\n\t\tend\n\t}.\n\n%% @doc Execute Fun under a distributed lock to avoid concurrent meck operations.\nwith_meck_lock(Fun) when is_function(Fun, 0) ->\n\tglobal:trans({arweave, meck_lock}, Fun).\n\ntest_with_mocked_functions(Functions, TestFun) ->\n\ttest_with_mocked_functions(Functions, TestFun, ?TEST_MOCKED_FUNCTIONS_TIMEOUT).\n\ntest_with_mocked_functions(Functions, TestFun, Timeout) ->\n\t{Setup, Cleanup} = mock_functions(Functions),\n\t{\n\t\tforeach,\n\t\tSetup, Cleanup,\n\t\t[{timeout, Timeout, TestFun}]\n\t}.\n\npost_and_mine(#{ miner := Node, await_on := AwaitOnNode }, TXs) ->\n\tCurrentHeight = remote_call(Node, ar_node, get_height, []),\n\tlists:foreach(fun(TX) -> assert_post_tx_to_peer(Node, TX) end, TXs),\n\tmine(Node),\n\t[{H, _, _} | _] = wait_until_height(AwaitOnNode, CurrentHeight + 1),\n\tremote_call(AwaitOnNode, ar_test_node, read_block_when_stored, [H, true],\n\t  ?POST_AND_MINE_TIMEOUT).\n\npost_block(B, ExpectedResult) when not is_list(ExpectedResult) ->\n\tpost_block(B, [ExpectedResult], peer_ip(main));\npost_block(B, ExpectedResults) ->\n\tpost_block(B, ExpectedResults, peer_ip(main)).\n\npost_block(B, ExpectedResults, Peer) ->\n\tResult = send_new_block_with_retry(Peer, B, 2),\n\t?assertMatch({ok, {{<<\"200\">>, _}, _, _, _, _}}, Result),\n\tawait_post_block(B, ExpectedResults, Peer).\n\nsend_new_block(Peer, B) ->\n\tar_http_iface_client:send_block_binary(Peer, B#block.indep_hash,\n\t\t\tar_serialize:block_to_binary(B)).\n\nsend_new_block_with_retry(Peer, B, RetriesLeft) ->\n\tcase send_new_block(Peer, B) of\n\t\t{error, {stream_error, closed}} when RetriesLeft > 0 ->\n\t\t\ttimer:sleep(50),\n\t\t\tsend_new_block_with_retry(Peer, B, RetriesLeft - 1);\n\t\tResult 
->\n\t\t\tResult\n\tend.\n\nawait_post_block(B, ExpectedResults) ->\n\tawait_post_block(B, ExpectedResults, peer_ip(main)).\n\nawait_post_block(#block{ indep_hash = H } = B, ExpectedResults, Peer) ->\n\tPostGossipFailureCodes = [invalid_denomination,\n\t\t\tinvalid_double_signing_proof_same_signature, invalid_double_signing_proof_cdiff,\n\t\t\tinvalid_double_signing_proof_same_address,\n\t\t\tinvalid_double_signing_proof_not_in_reward_history,\n\t\t\tinvalid_double_signing_proof_already_banned,\n\t\t\tinvalid_double_signing_proof_invalid_signature,\n\t\t\tmining_address_banned, invalid_account_anchors, invalid_reward_pool,\n\t\t\tinvalid_miner_reward, invalid_debt_supply, invalid_reward_history_hash,\n\t\t\tinvalid_kryder_plus_rate_multiplier_latch, invalid_kryder_plus_rate_multiplier,\n\t\t\tinvalid_wallet_list],\n\treceive\n\t\t{event, block, {rejected, Reason, H, Peer2}} ->\n\t\t\tcase lists:member(Reason, PostGossipFailureCodes) of\n\t\t\t\ttrue ->\n\t\t\t\t\t?assertEqual(no_peer, Peer2);\n\t\t\t\tfalse ->\n\t\t\t\t\t?assertEqual(Peer, Peer2)\n\t\t\tend,\n\t\t\tcase lists:member(Reason, ExpectedResults) of\n\t\t\t\ttrue ->\n\t\t\t\t\tok;\n\t\t\t\t_ ->\n\t\t\t\t\t?assert(false, iolist_to_binary(io_lib:format(\"Unexpected \"\n\t\t\t\t\t\t\t\"validation failure: ~p. Expected: ~p.\",\n\t\t\t\t\t\t\t[Reason, ExpectedResults])))\n\t\t\tend;\n\t\t{event, block, {new, #block{ indep_hash = H }, #{ source := {peer, Peer} }}} ->\n\t\t\tcase ExpectedResults of\n\t\t\t\t[valid] ->\n\t\t\t\t\tok;\n\t\t\t\t_ ->\n\t\t\t\t\tcase lists:any(fun(FailureCode) -> not lists:member(FailureCode,\n\t\t\t\t\t\t\tPostGossipFailureCodes) end, ExpectedResults) of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t?assert(false, iolist_to_binary(io_lib:format(\"Unexpected \"\n\t\t\t\t\t\t\t\t\t\"validation success. Expected: ~p.\", [ExpectedResults])));\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\tawait_post_block(B, ExpectedResults)\n\t\t\t\t\tend\n\t\t\tend\n\tafter 60_000 ->\n\t\t\t?assert(false, iolist_to_binary(io_lib:format(\"Timed out. 
Expected: ~p.\",\n\t\t\t\t\t[ExpectedResults])))\n\tend.\n\nsign_block(#block{ cumulative_diff = CDiff } = B, PrevB, {Priv, Pub}) ->\n\tB2 = B#block{ reward_key = Pub, reward_addr = ar_wallet:to_address(Pub) },\n\tSignedH = ar_block:generate_signed_hash(B2),\n\tPrevCDiff = PrevB#block.cumulative_diff,\n\tSignaturePreimage = ar_block:get_block_signature_preimage(CDiff, PrevCDiff,\n\t\t\t<< (B#block.previous_solution_hash)/binary, SignedH/binary >>,\n\t\t\tB#block.height),\n\tSignature = ar_wallet:sign(Priv, SignaturePreimage),\n\tH = ar_block:indep_hash2(SignedH, Signature),\n\tB2#block{ indep_hash = H, signature = Signature }.\n\nread_block_when_stored(H) ->\n\tread_block_when_stored(H, false).\n\nread_block_when_stored(H, IncludeTXs) ->\n\t{ok, B} = ar_util:do_until(\n\t\tfun() ->\n\t\t\tcase ar_storage:read_block(H) of\n\t\t\t\tunavailable ->\n\t\t\t\t\tunavailable;\n\t\t\t\tB2 ->\n\t\t\t\t\tar_util:do_until(\n\t\t\t\t\t\tfun() ->\n\t\t\t\t\t\t\tTXs = ar_storage:read_tx(B2#block.txs),\n\t\t\t\t\t\t\tcase lists:any(fun(TX) -> TX == unavailable end, TXs) of\n\t\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\t\tfalse;\n\t\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t\tcase IncludeTXs of\n\t\t\t\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\t\t\t\t{ok, B2#block{ txs = TXs }};\n\t\t\t\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t\t\t\t{ok, B2}\n\t\t\t\t\t\t\t\t\tend\n\t\t\t\t\t\t\tend\n\t\t\t\t\t\tend,\n\t\t\t\t\t\t100,\n\t\t\t\t\t\t?READ_BLOCK_TIMEOUT\n\t\t\t\t\t)\n\t\t\tend\n\t\tend,\n\t\t200,\n\t\t?READ_BLOCK_TIMEOUT\n\t),\n\tB.\n\nget_chunk(Node, Offset) ->\n\tget_chunk(Node, Offset, undefined).\n\nget_chunk(Node, Offset, Packing) ->\n\tHeaders = case Packing of\n\t\tundefined -> [];\n\t\t_ ->\n\t\t\tPackingBinary = iolist_to_binary(ar_serialize:encode_packing(Packing, false)),\n\t\t\t[{<<\"x-packing\">>, PackingBinary}]\n\tend,\n\tar_http:req(#{\n\t\tmethod => get,\n\t\tpeer => peer_ip(Node),\n\t\tpath => \"/chunk/\" ++ integer_to_list(Offset),\n\t\theaders => [{<<\"x-bucket-based-offset\">>, <<\"true\">>} | Headers]\n\t}).\n\nget_chunk_proof(Node, Offset) ->\n\tar_http:req(#{\n\t\tmethod => get,\n\t\tpeer => peer_ip(Node),\n\t\tpath => \"/chunk_proof/\" ++ integer_to_list(Offset),\n\t\theaders => [{<<\"x-bucket-based-offset\">>, <<\"true\">>}]\n\t}).\n\npost_chunk(Node, Proof) ->\n\tPeer = peer_ip(Node),\n\tar_http:req(#{\n\t\tmethod => post,\n\t\tpeer => Peer,\n\t\tpath => \"/chunk\",\n\t\tbody => Proof\n\t}).\n\nrandom_v1_data(Size) ->\n\t%% Make sure v1 txs do not end with a digit, otherwise they are malleable.\n\t<< (crypto:strong_rand_bytes(Size - 1))/binary, <<\"a\">>/binary >>.\n\nassert_get_tx_data(Node, TXID, ExpectedData) ->\n\t?debugFmt(\"Polling for data of ~s.\", [ar_util:encode(TXID)]),\n\tPeer = peer_ip(Node),\n\ttrue = ar_util:do_until(\n\t\tfun() ->\n\t\t\tcase ar_http:req(#{ method => get, peer => Peer,\n\t\t\t\t\tpath => \"/tx/\" ++ binary_to_list(ar_util:encode(TXID)) ++ \"/data\" }) of\n\t\t\t\t{ok, {{<<\"200\">>, _}, _, ExpectedData, _, _}} ->\n\t\t\t\t\ttrue;\n\t\t\t\t{ok, {{<<\"404\">>, _}, _, _, _, _}} ->\n\t\t\t\t\tfalse;\n\t\t\t{ok, {{<<\"200\">>, _}, _, OtherData, _, _}} ->\n\t\t\t\t?assertEqual(byte_size(ExpectedData), byte_size(OtherData),\n\t\t\t\t\t\tlists:flatten(io_lib:format(\n\t\t\t\t\t\t\t\"TX data size mismatch. TXID: ~s. Peer: ~s.\",\n\t\t\t\t\t\t\t[ar_util:encode(TXID), ar_util:format_peer(Peer)])));\n\t\t\t\tUnexpectedResponse ->\n\t\t\t\t\t?debugFmt(\"Got unexpected tx data response. TXID: ~s. Peer: ~s. 
\"\n\t\t\t\t\t\t\t\" response: ~p.~n\",\n\t\t\t\t\t\t\t[ar_util:encode(TXID), ar_util:format_peer(Peer),\n\t\t\t\t\t\t\t\t\tUnexpectedResponse]),\n\t\t\t\t\tfalse\n\t\t\tend\n\t\tend,\n\t\t200,\n\t\t?GET_TX_DATA_TIMEOUT\n\t),\n\t{ok, {{<<\"200\">>, _}, _, OffsetJSON, _, _}}\n\t\t\t= ar_http:req(#{ method => get, peer => Peer,\n\t\t\t\t\tpath => \"/tx/\" ++ binary_to_list(ar_util:encode(TXID)) ++ \"/offset\" }),\n\tMap = jiffy:decode(OffsetJSON, [return_maps]),\n\tOffset = binary_to_integer(maps:get(<<\"offset\">>, Map)),\n\tSize = binary_to_integer(maps:get(<<\"size\">>, Map)),\n\t?assertEqual(ExpectedData, get_tx_data_in_chunks(Offset, Size, Peer)),\n\t?assertEqual(ExpectedData, get_tx_data_in_chunks_traverse_forward(Offset, Size, Peer)).\n\nget_tx_data_in_chunks(Offset, Size, Peer) ->\n\tget_tx_data_in_chunks(Offset, Offset - Size, Peer, []).\n\nget_tx_data_in_chunks(Offset, Start, _Peer, Bin) when Offset =< Start ->\n\tar_util:encode(iolist_to_binary(Bin));\nget_tx_data_in_chunks(Offset, Start, Peer, Bin) ->\n\t{ok, {{<<\"200\">>, _}, _, JSON, _, _}}\n\t\t\t= ar_http:req(#{ method => get, peer => Peer,\n\t\t\t\t\tpath => \"/chunk/\" ++ integer_to_list(Offset) }),\n\tMap = jiffy:decode(JSON, [return_maps]),\n\tChunk = ar_util:decode(maps:get(<<\"chunk\">>, Map)),\n\tget_tx_data_in_chunks(Offset - byte_size(Chunk), Start, Peer, [Chunk | Bin]).\n\nget_tx_data_in_chunks_traverse_forward(Offset, Size, Peer) ->\n\tget_tx_data_in_chunks_traverse_forward(Offset, Offset - Size, Peer, []).\n\nget_tx_data_in_chunks_traverse_forward(Offset, Start, _Peer, Bin) when Offset =< Start ->\n\tar_util:encode(iolist_to_binary(lists:reverse(Bin)));\nget_tx_data_in_chunks_traverse_forward(Offset, Start, Peer, Bin) ->\n\t{ok, {{<<\"200\">>, _}, _, JSON, _, _}}\n\t\t\t= ar_http:req(#{ method => get, peer => Peer,\n\t\t\t\t\tpath => \"/chunk/\" ++ integer_to_list(Start + 1) }),\n\tMap = jiffy:decode(JSON, [return_maps]),\n\tChunk = ar_util:decode(maps:get(<<\"chunk\">>, Map)),\n\tget_tx_data_in_chunks_traverse_forward(Offset, Start + byte_size(Chunk), Peer,\n\t\t\t[Chunk | Bin]).\n\nassert_data_not_found(Node, TXID) ->\n\tPeer = peer_ip(Node),\n\t?assertMatch({ok, {{<<\"404\">>, _}, _, _Binary, _, _}},\n\t\t\tar_http:req(#{ method => get, peer => Peer,\n\t\t\t\t\tpath => \"/tx/\" ++ binary_to_list(ar_util:encode(TXID)) ++ \"/data\" })).\n\nget_node_namespace() ->\n\t% Return the namespace part (everything after first - and before @)\n\t{_, Namespace} = split_node_name(),\n\tNamespace.\n\nget_node() ->\n\t% Return the name part (everything before first -)\n\t{Name, _} = split_node_name(),\n\tName.\n\nsplit_node_name() ->\n\t% First split by '@' to separate host part\n\t[NamePart, _Host] = string:split(atom_to_list(node()), \"@\"),\n\t% Then split by first '-' to get name and namespace\n\tcase string:split(NamePart, \"-\", leading) of\n\t\t[Name, Namespace] -> {Name, Namespace};\n\t\t[Name] -> {Name, \"\"}  % Handle case where there is no '-'\n\tend.\n\nget_unused_port() ->\n  {ok, ListenSocket} = gen_tcp:listen(0, [{port, 0}]),\n  {ok, Port} = inet:port(ListenSocket),\n  gen_tcp:close(ListenSocket),\n  Port.\n\np2p_headers(Node) ->\n\t[\n\t\t{<<\"x-p2p-port\">>, integer_to_binary(peer_port(Node))},\n\t\t{<<\"x-release\">>, integer_to_binary(?RELEASE_NUMBER)}\n\t].\n\n%% @doc: get the genesis chunk between a given start and end offset.\n-spec get_genesis_chunk(integer()) -> binary().\n-spec get_genesis_chunk(integer(), integer()) -> binary().\nget_genesis_chunk(EndOffset) ->\n\tStartOffset = case EndOffset rem 
?DATA_CHUNK_SIZE of\n\t\t0 ->\n\t\t\tEndOffset - ?DATA_CHUNK_SIZE;\n\t\t_ ->\n\t\t\t(EndOffset div ?DATA_CHUNK_SIZE) * ?DATA_CHUNK_SIZE\n\tend,\n\tget_genesis_chunk(StartOffset, EndOffset).\n\nget_genesis_chunk(StartOffset, EndOffset) ->\n\tSize = EndOffset - StartOffset,\n\tStartValue = StartOffset div 4,\n\tar_weave:generate_data(StartValue, Size, <<>>).\n"
  },
  {
    "path": "apps/arweave/test/ar_test_runner.erl",
    "content": "%%%\n%%% @doc Test runner utilities for running EUnit tests with various granularity.\n%%% Supports running all tests, specific modules, or specific test functions.\n%%%\n-module(ar_test_runner).\n\n-export([run/1, run/2]).\n-export([start_shell/1, stop_shell/1]).\n-export([list_tests/1, list_tests_json/1]).\n\n-include(\"ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n%% @doc Run all tests for a given test type.\n%% TestType is 'test' or 'e2e'.\nrun(TestType) ->\n\tModules = default_modules(TestType),\n\trun_tests(TestType, {modules, Modules}).\n\n%% @doc Run tests based on CLI arguments.\n%% Supports:\n%%   - module                    run all tests in module\n%%   - module:test               run specific test from module  \n%%   - module1 module2           run all tests in multiple modules\n%%   - module1 module2:test      mixed mode\nrun(TestType, Args) when is_list(Args) ->\n\tSpecs = lists:map(fun parse_arg/1, Args),\n\trun_tests(TestType, {mixed, Specs}).\n\n%% @doc Start the test environment for interactive shell use (without running tests).\nstart_shell(TestType) ->\n\tensure_started(TestType).\n\n%% @doc Stop the test environment started by start_shell/1.\nstop_shell(TestType) ->\n\tar_test_node:stop_peers(TestType),\n\tinit:stop().\n\n%% Parse a CLI argument into either {module, Mod} or {test, Mod, Test}.\nparse_arg(Arg) when is_atom(Arg) ->\n\t{module, Arg};\nparse_arg(Arg) when is_list(Arg) ->\n\tcase string:split(Arg, \":\") of\n\t\t[Mod, Test] -> {test, list_to_atom(Mod), list_to_atom(Test)};\n\t\t[Mod]       -> {module, list_to_atom(Mod)}\n\tend.\n\n%% @doc List all tests in a module.\n%% Returns a list of {Module, Test} tuples.\nlist_tests(Mod) when is_atom(Mod) ->\n\tExports = Mod:module_info(exports),\n\tTests = lists:filtermap(\n\t\tfun({Name, 0}) ->\n\t\t\tNameStr = atom_to_list(Name),\n\t\t\tcase lists:suffix(\"_test_\", NameStr) of\n\t\t\t\ttrue -> {true, {Mod, Name}};\n\t\t\t\tfalse -> false\n\t\t\tend;\n\t\t   (_) -> false\n\t\tend,\n\t\tExports\n\t),\n\tlists:sort(Tests);\nlist_tests(Mods) when is_list(Mods) ->\n\tlists:flatmap(fun list_tests/1, Mods).\n\n%% @doc Output tests as JSON for CI systems.\nlist_tests_json(Mods) ->\n\tTests = list_tests(Mods),\n\tJsonItems = lists:map(\n\t\tfun({Mod, Test}) ->\n\t\t\tio_lib:format(\"{\\\"module\\\":\\\"~s\\\",\\\"test\\\":\\\"~s\\\"}\", [Mod, Test])\n\t\tend,\n\t\tTests\n\t),\n\tJsonArray = \"[\" ++ string:join(JsonItems, \",\") ++ \"]\",\n\tio:format(\"~s~n\", [JsonArray]).\n\n%%%===================================================================\n%%% Internal functions\n%%%===================================================================\n\ndefault_modules(e2e) ->\n\t[ar_sync_pack_mine_tests, ar_repack_mine_tests, ar_repack_in_place_mine_tests];\ndefault_modules(test) ->\n\tload_default_modules(\"scripts/full_test_modules.txt\").\n\nrun_tests(TestType, TestSpec) ->\n\tensure_started(TestType),\n\tTotalTimeout = case TestType of\n\t\te2e -> ?E2E_TEST_SUITE_TIMEOUT;\n\t\t_ -> ?TEST_SUITE_TIMEOUT\n\tend,\n\tResult =\n\t\ttry\n\t\t\tEunitSpec = build_eunit_spec(TotalTimeout, TestSpec),\n\t\t\teunit:test(EunitSpec, [verbose, {print_depth, 100}])\n\t\tafter\n\t\t\tar_test_node:stop_peers(TestType)\n\t\tend,\n\tcase Result of\n\t\tok -> ok;\n\t\t_ -> init:stop(1)\n\tend.\n\nensure_started(TestType) ->\n\ttry\n\t\tarweave_config:start(),\n\t\tok = 
arweave_limiter:start(),\n\t\tstart_for_tests(TestType),\n\t\tar_test_node:boot_peers(TestType),\n\t\tar_test_node:wait_for_peers(TestType)\n\tcatch\n\t\tType:Reason:S ->\n\t\t\tio:format(\"Failed to start the peers due to ~p:~p:~p~n\", [Type, Reason, S]),\n\t\t\tinit:stop(1)\n\tend.\n\nbuild_eunit_spec(Timeout, {modules, []}) ->\n\t{timeout, Timeout, []};\nbuild_eunit_spec(Timeout, {modules, Mods}) ->\n\t{timeout, Timeout, Mods};\nbuild_eunit_spec(Timeout, {mixed, Specs}) ->\n\tEunitSpecs = lists:map(fun spec_to_eunit/1, Specs),\n\t{timeout, Timeout, EunitSpecs}.\n\nspec_to_eunit({module, Mod}) ->\n\tMod;\nspec_to_eunit({test, Mod, Test}) ->\n\t%% Check if it's a generator (_test_) or simple test (_test)\n\tTestName = atom_to_list(Test),\n\tcase lists:suffix(\"_test_\", TestName) of\n\t\ttrue ->\n\t\t\t%% Generator - returns a test spec\n\t\t\t{generator, fun() -> Mod:Test() end};\n\t\tfalse ->\n\t\t\t%% Simple test function - run directly\n\t\t\t{Mod, Test}\n\tend.\n\nload_default_modules(Path) ->\n\tcase file:read_file(Path) of\n\t\t{ok, Bin} ->\n\t\t\tparse_default_modules(binary_to_list(Bin));\n\t\t{error, Reason} ->\n\t\t\terlang:error({failed_to_load_test_modules, Path, Reason})\n\tend.\n\nparse_default_modules(Content) ->\n\tLines = string:split(Content, \"\\n\", all),\n\tlists:filtermap(fun parse_default_module_line/1, Lines).\n\nparse_default_module_line(Line) ->\n\tTrimmed = string:trim(Line),\n\tcase Trimmed of\n\t\t\"\" ->\n\t\t\tfalse;\n\t\t[$# | _] ->\n\t\t\tfalse;\n\t\t_ ->\n\t\t\t{true, list_to_atom(Trimmed)}\n\tend.\n\nstart_for_tests(TestType) ->\n\tUniqueName = ar_test_node:get_node_namespace(),\n\tTestConfig = #config{\n\t\tdebug = true,\n\t\tpeers = [],\n\t\tdata_dir = \".tmp/data_\" ++ atom_to_list(TestType) ++ \"_main_\" ++ UniqueName,\n\t\tport = ar_test_node:get_unused_port(),\n\t\tdisable = [randomx_jit],\n\t\t'http_client.http.keepalive' = 4_000,\n\t\tauto_join = false\n\t},\n\tar:start(TestConfig).\n"
  },
  {
    "path": "apps/arweave/test/ar_tx_blacklist_tests.erl",
    "content": "-module(ar_tx_blacklist_tests).\n\n-export([init/2]).\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-include(\"ar.hrl\").\n\n-import(ar_test_node, [\n\t\tsign_v1_tx/2, random_v1_data/1, \n\t\twait_until_height/2,\n\t\tassert_wait_until_height/2]).\n\ninit(Req, State) ->\n\tSplitPath = ar_http_iface_server:split_path(cowboy_req:path(Req)),\n\thandle(SplitPath, Req, State).\n\nhandle([<<\"empty\">>], Req, State) ->\n\t{ok, cowboy_req:reply(200, #{}, <<>>, Req), State};\n\nhandle([<<\"good\">>], Req, State) ->\n\t{ok, cowboy_req:reply(200, #{}, ar_util:encode(hd(State)), Req), State};\n\nhandle([<<\"bad\">>, <<\"and\">>, <<\"good\">>], Req, State) ->\n\tReply =\n\t\tlist_to_binary(\n\t\t\tio_lib:format(\n\t\t\t\t\"~s\\nbad base64url \\n~s\\n\",\n\t\t\t\tlists:map(fun ar_util:encode/1, State)\n\t\t\t)\n\t\t),\n\t{ok, cowboy_req:reply(200, #{}, Reply, Req), State}.\n\nuses_blacklists_test_() ->\n\t{timeout, 300, fun test_uses_blacklists/0}.\n\ntest_uses_blacklists() ->\n\t{\n\t\tBlacklistFiles,\n\t\tB0,\n\t\tWallet,\n\t\tTXs,\n\t\tGoodTXIDs,\n\t\tBadTXIDs,\n\t\tV1TX,\n\t\tGoodOffsets,\n\t\tBadOffsets,\n\t\tDataTrees\n\t} = setup(),\n\tWhitelistFile = random_filename(),\n\tok = file:write_file(WhitelistFile, <<>>),\n\tRewardAddr = ar_wallet:to_address(ar_wallet:new_keyfile()),\n\t{ok, Config} = arweave_config:get_env(),\n\tStorageModule = {30 * ?MiB, 0, {composite, RewardAddr, 1}},\n\ttry\n\t\tar_test_node:start(#{ b0 => B0, addr => RewardAddr,\n\t\t\t\tconfig => Config#config{\n\t\t\ttransaction_blacklist_files = BlacklistFiles,\n\t\t\ttransaction_whitelist_files = [WhitelistFile],\n\t\t\tsync_jobs = 10,\n\t\t\ttransaction_blacklist_urls = [\n\t\t\t\t%% Serves empty body.\n\t\t\t\t\"http://localhost:1985/empty\",\n\t\t\t\t%% Serves a valid TX ID (one from the BadTXIDs list).\n\t\t\t\t\"http://localhost:1985/good\",\n\t\t\t\t%% Serves some valid TX IDs (from the BadTXIDs list) and a line\n\t\t\t\t%% with invalid Base64URL.\n\t\t\t\t\"http://localhost:1985/bad/and/good\"\n\t\t\t],\n\t\t\tenable = [pack_served_chunks | Config#config.enable]},\n\t\t\tstorage_modules => [StorageModule]\n\t\t}),\n\t\tar_test_node:connect_to_peer(peer1),\n\t\tBadV1TXIDs = [V1TX#tx.id],\n\t\tlists:foreach(\n\t\t\tfun({TX, Height}) ->\n\t\t\t\tar_test_node:assert_post_tx_to_peer(peer1, TX),\n\t\t\t\tar_test_node:assert_wait_until_receives_txs([TX]),\n\t\t\t\tcase Height == length(TXs) of\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tar_test_node:assert_post_tx_to_peer(peer1, V1TX),\n\t\t\t\t\t\tar_test_node:assert_wait_until_receives_txs([V1TX]);\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tok\n\t\t\t\tend,\n\t\t\t\tar_test_node:mine(peer1),\n\t\t\t\tupload_data([TX], DataTrees),\n\t\t\t\twait_until_height(main, Height)\n\t\t\tend,\n\t\t\tlists:zip(TXs, lists:seq(1, length(TXs)))\n\t\t),\n\t\tassert_present_txs(GoodTXIDs),\n\t\tassert_present_txs(BadTXIDs), % V2 headers must not be removed.\n\t\tassert_removed_txs(BadV1TXIDs),\n\t\tassert_present_offsets(GoodOffsets),\n\t\tassert_removed_offsets(BadOffsets),\n\t\tStoreID = ar_storage_module:id(StorageModule),\n\t\tChunks = ar_chunk_storage:get_range(0, 30 * ?MiB, StoreID),\n\t\tChunkOffsets = [Offset || {Offset, _Chunk} <- Chunks],\n\t\t?debugFmt(\"chunk offsets: ~p ~n good offsets: ~p ~n bad offsets: ~p~n\",\n\t\t[ChunkOffsets, GoodOffsets, BadOffsets]),\n\t\t?assert(lists:all(fun(BadOffset) ->\n\t\t\tnot lists:member(ar_block:get_chunk_padded_offset(BadOffset), ChunkOffsets)\n\t\tend, 
lists:flatten(BadOffsets))),\n\t\tassert_does_not_accept_offsets(BadOffsets),\n\t\t%% Add a new transaction to the blacklist, add a blacklisted transaction to whitelist.\n\t\tok = file:write_file(lists:nth(3, BlacklistFiles), <<>>),\n\t\tok = file:write_file(WhitelistFile, ar_util:encode(lists:nth(2, BadTXIDs))),\n\t\tok = file:write_file(lists:nth(4, BlacklistFiles), io_lib:format(\"~s~n~s\",\n\t\t\t\t[ar_util:encode(hd(GoodTXIDs)), ar_util:encode(V1TX#tx.id)])),\n\t\t[UnblacklistedOffsets, WhitelistOffsets | BadOffsets2] = BadOffsets,\n\t\tRestoredOffsets = [UnblacklistedOffsets, WhitelistOffsets] ++\n\t\t\t\t[lists:nth(6, lists:reverse(BadOffsets))],\n\t\tBadOffsets3 = BadOffsets2 -- [lists:nth(6, lists:reverse(BadOffsets))],\n\t\t[_UnblacklistedTXID, _WhitelistTXID | BadTXIDs2] = BadTXIDs,\n\t\t%% Expect the transaction data to be resynced.\n\t\tassert_present_offsets(RestoredOffsets),\n\t\t%% Expect the freshly blacklisted transaction to be erased.\n\t\tassert_present_txs([hd(GoodTXIDs)]), % V2 headers must not be removed.\n\t\tassert_removed_offsets([hd(GoodOffsets)]),\n\t\tassert_does_not_accept_offsets([hd(GoodOffsets)]),\n\t\t%% Expect the previously blacklisted transactions to stay blacklisted.\n\t\tassert_present_txs(BadTXIDs2), % V2 headers must not be removed.\n\t\tassert_removed_txs(BadV1TXIDs),\n\t\tassert_removed_offsets(BadOffsets3),\n\t\tassert_does_not_accept_offsets(BadOffsets3),\n\t\t%% Blacklist the last transaction. Fork the weave. Assert the blacklisted offsets are moved.\n\t\tar_test_node:disconnect_from(peer1),\n\t\tTX = ar_test_node:sign_tx(Wallet, #{ data => crypto:strong_rand_bytes(?DATA_CHUNK_SIZE),\n\t\t\t\tlast_tx => ar_test_node:get_tx_anchor(peer1) }),\n\t\tar_test_node:assert_post_tx_to_peer(main, TX),\n\t\tar_test_node:mine(),\n\t\t[{_, WeaveSize, _} | _] = wait_until_height(main, length(TXs) + 1),\n\t\tassert_present_offsets([[WeaveSize]]),\n\t\tok = file:write_file(lists:nth(3, BlacklistFiles), ar_util:encode(TX#tx.id)),\n\t\tassert_removed_offsets([[WeaveSize]]),\n\t\tTX2 = sign_v1_tx(Wallet, #{ data => random_v1_data(2 * ?DATA_CHUNK_SIZE),\n\t\t\t\tlast_tx => ar_test_node:get_tx_anchor(peer1) }),\n\t\tar_test_node:assert_post_tx_to_peer(peer1, TX2),\n\t\tar_test_node:mine(peer1),\n\t\tassert_wait_until_height(peer1, length(TXs) + 1),\n\t\tar_test_node:assert_post_tx_to_peer(peer1, TX),\n\t\tar_test_node:mine(peer1),\n\t\tassert_wait_until_height(peer1, length(TXs) + 2),\n\t\tar_test_node:connect_to_peer(peer1),\n\t\t[{_, WeaveSize2, _} | _] = wait_until_height(main, length(TXs) + 2),\n\t\tassert_removed_offsets([[WeaveSize2]]),\n\t\tassert_present_offsets([[WeaveSize]])\n\tafter\n\t\tteardown(Config)\n\tend.\n\nsetup() ->\n\t{B0, Wallet} = setup(peer1),\n\t{TXs, DataTrees} = create_txs(Wallet),\n\tTXIDs = [TX#tx.id || TX <- TXs],\n\tBadTXIDs = [lists:nth(1, TXIDs), lists:nth(3, TXIDs)],\n\tV1TX = sign_v1_tx(Wallet, #{ data => random_v1_data(3 * ?DATA_CHUNK_SIZE),\n\t\t\tlast_tx => ar_test_node:get_tx_anchor(peer1), reward => ?AR(10000) }),\n\tDataSizes = [TX#tx.data_size || TX <- TXs],\n\tS0 = B0#block.block_size,\n\t[S1, S2, S3, S4, S5, S6, S7, S8 | _] = DataSizes,\n\tBadOffsets = [S0 + O || O <- [S1, S1 + S2 + S3, % Blacklisted in the file.\n\t\t\tS1 + S2 + S3 + S4 + S5,\n\t\t\tS1 + S2 + S3 + S4 + S5 + S6 + S7]], % Blacklisted in the endpoint.\n\tBlacklistFiles = create_files([V1TX#tx.id | BadTXIDs],\n\t\t\t[{S0 + S1 + S2 + S3 + ?DATA_CHUNK_SIZE, S0 + S1 + S2 + S3 + ?DATA_CHUNK_SIZE * 2},\n\t\t\t\t{S0 + S1 + S2 + S3 + S4 + S5,\n\t\t\t\t\t\tS0 + S1 
+ S2 + S3 + S4 + S5 + ?DATA_CHUNK_SIZE * 5},\n\t\t\t\t% This one just repeats the range of a blacklisted tx:\n\t\t\t\t{S0 + S1 + S2 + S3 + S4 + S5 + S6, S0 + S1 + S2 + S3 + S4 + S5 + S6 + S7}\n\t\t\t]),\n\tBadTXIDs2 = [lists:nth(5, TXIDs), lists:nth(7, TXIDs)], % The endpoint.\n\tBadTXIDs3 = [lists:nth(4, TXIDs), lists:nth(6, TXIDs)], % Ranges.\n\tRoutes = [{\"/[...]\", ar_tx_blacklist_tests, BadTXIDs2}],\n\t{ok, _PID} =\n\t\tar_test_node:remote_call(peer1, cowboy, start_clear, [\n\t\t\tar_tx_blacklist_test_listener,\n\t\t\t[{port, 1985}],\n\t\t\t#{ env => #{ dispatch => cowboy_router:compile([{'_', Routes}]) } }\n\t\t]),\n\tGoodTXIDs = TXIDs -- (BadTXIDs ++ BadTXIDs2 ++ BadTXIDs3),\n\tBadOffsets2 =\n\t\tlists:map(\n\t\t\tfun(TXOffset) ->\n\t\t\t\t%% Every TX in this test consists of 10 chunks.\n\t\t\t\t%% Only every second chunk is uploaded in this test\n\t\t\t\t%% for (originally) blacklisted transactions.\n\t\t\t\t[TXOffset - ?DATA_CHUNK_SIZE * I || I <- lists:seq(0, 9, 2)]\n\t\t\tend,\n\t\t\tBadOffsets\n\t\t),\n\tBadOffsets3 = BadOffsets2 ++ [S0 + O || O <- [S1 + S2 + S3 + ?DATA_CHUNK_SIZE * 2,\n\t\t\tS1 + S2 + S3 + S4 + S5 + ?DATA_CHUNK_SIZE,\n\t\t\tS1 + S2 + S3 + S4 + S5 + ?DATA_CHUNK_SIZE * 2,\n\t\t\tS1 + S2 + S3 + S4 + S5 + ?DATA_CHUNK_SIZE * 3,\n\t\t\tS1 + S2 + S3 + S4 + S5 + ?DATA_CHUNK_SIZE * 4,\n\t\t\tS1 + S2 + S3 + S4 + S5 + ?DATA_CHUNK_SIZE * 5]], % Blacklisted as a range.\n\tGoodOffsets = [S0 + O || O <- [S1 + S2, S1 + S2 + S3 + S4, S1 + S2 + S3 + S4 + S5 + S6,\n\t\t\tS1 + S2 + S3 + S4 + S5 + S6 + S7 + S8]],\n\tGoodOffsets2 =\n\t\tlists:map(\n\t\t\tfun(TXOffset) ->\n\t\t\t\t%% Every TX in this test consists of 10 chunks.\n\t\t\t\t[TXOffset - ?DATA_CHUNK_SIZE * I || I <- lists:seq(0, 9)] -- BadOffsets3\n\t\t\tend,\n\t\t\tGoodOffsets\n\t\t),\n\t{\n\t\tBlacklistFiles,\n\t\tB0,\n\t\tWallet,\n\t\tTXs,\n\t\tGoodTXIDs,\n\t\tBadTXIDs ++ BadTXIDs2 ++ BadTXIDs3,\n\t\tV1TX,\n\t\tGoodOffsets2,\n\t\tBadOffsets3,\n\t\tDataTrees\n\t}.\n\nsetup(Node) ->\n\t{ok, Config} = ar_test_node:get_config(Node),\n\tWallet = {_, Pub} = ar_test_node:remote_call(Node, ar_wallet, new_keyfile, []),\n\tRewardAddr = ar_wallet:to_address(Pub),\n\t[B0] = ar_weave:init([{RewardAddr, ?AR(100000000), <<>>}]),\n\tar_test_node:start_peer(Node, B0, RewardAddr, Config#config{\n\t\tenable = [pack_served_chunks | Config#config.enable]\n\t}),\n\t{B0, Wallet}.\n\ncreate_txs(Wallet) ->\n\tlists:foldl(\n\t\tfun\n\t\t\t(_, {TXs, DataTrees}) ->\n\t\t\t\tChunks =\n\t\t\t\t\tlists:sublist(\n\t\t\t\t\t\tar_tx:chunk_binary(?DATA_CHUNK_SIZE,\n\t\t\t\t\t\t\t\tcrypto:strong_rand_bytes(10 * ?DATA_CHUNK_SIZE)),\n\t\t\t\t\t\t10\n\t\t\t\t\t), % Exclude empty chunk created by chunk_to_binary.\n\t\t\t\tSizedChunkIDs = ar_tx:sized_chunks_to_sized_chunk_ids(\n\t\t\t\t\tar_tx:chunks_to_size_tagged_chunks(Chunks)\n\t\t\t\t),\n\t\t\t\t{DataRoot, DataTree} = ar_merkle:generate_tree(SizedChunkIDs),\n\t\t\t\tTX = ar_test_node:sign_tx(Wallet, #{ format => 2, data_root => DataRoot,\n\t\t\t\t\t\tdata_size => 10 * ?DATA_CHUNK_SIZE, last_tx => ar_test_node:get_tx_anchor(peer1),\n\t\t\t\t\t\treward => ?AR(10000), denomination => 1 }),\n\t\t\t\t{[TX | TXs], maps:put(TX#tx.id, {DataTree, Chunks}, DataTrees)}\n\t\tend,\n\t\t{[], #{}},\n\t\tlists:seq(1, 10)\n\t).\n\ncreate_files(BadTXIDs, [{Start1, End1}, {Start2, End2}, {Start3, End3}]) ->\n\tFiles = [\n\t\t{random_filename(), <<>>},\n\t\t{random_filename(), <<\"bad base64url \">>},\n\t\t{random_filename(), ar_util:encode(lists:nth(2, 
BadTXIDs))},\n\t\t{random_filename(),\n\t\t\tlist_to_binary(\n\t\t\t\tio_lib:format(\n\t\t\t\t\t\"~s\\nbad base64url \\n~s\\n~s\\n~B,~B\\n\",\n\t\t\t\t\tlists:map(fun ar_util:encode/1, BadTXIDs) ++ [Start1, End1]\n\t\t\t\t)\n\t\t\t)},\n\t\t{random_filename(), list_to_binary(io_lib:format(\"~B,~B\\n~B,~B\",\n\t\t\t\t[Start2, End2, Start3, End3]))}\n\t],\n\tlists:foreach(\n\t\tfun\n\t\t\t({Filename, Binary}) ->\n\t\t\t\tok = file:write_file(Filename, Binary)\n\t\tend,\n\t\tFiles\n\t),\n\t[Filename || {Filename, _} <- Files].\n\nrandom_filename() ->\n\t{ok, Config} = ar_test_node:remote_call(peer1, arweave_config, get_env, []),\n\tfilename:join(Config#config.data_dir,\n\t\t\"ar-tx-blacklist-tests-transaction-blacklist-\"\n\t\t++\n\t\tbinary_to_list(ar_util:encode(crypto:strong_rand_bytes(32)))).\n\nencode_chunk(Proof) ->\n\tar_serialize:jsonify(#{\n\t\tchunk => ar_util:encode(maps:get(chunk, Proof)),\n\t\tdata_path => ar_util:encode(maps:get(data_path, Proof)),\n\t\tdata_root => ar_util:encode(maps:get(data_root, Proof)),\n\t\tdata_size => integer_to_binary(maps:get(data_size, Proof)),\n\t\toffset => integer_to_binary(maps:get(offset, Proof))\n\t}).\n\nupload_data(TXs, DataTrees) ->\n\tlists:foreach(\n\t\tfun(TX) ->\n\t\t\t#tx{\n\t\t\t\tid = TXID,\n\t\t\t\tdata_root = DataRoot,\n\t\t\t\tdata_size = DataSize\n\t\t\t} = TX,\n\t\t\t{DataTree, Chunks} = maps:get(TXID, DataTrees),\n\t\t\tChunkOffsets = lists:zip(Chunks,\n\t\t\t\t\tlists:seq(?DATA_CHUNK_SIZE, 10 * ?DATA_CHUNK_SIZE, ?DATA_CHUNK_SIZE)),\n\t\t\tUploadChunks = ChunkOffsets,\n\t\t\tlists:foreach(\n\t\t\t\tfun({Chunk, Offset}) ->\n\t\t\t\t\tDataPath = ar_merkle:generate_path(DataRoot, Offset - 1, DataTree),\n\t\t\t\t\t{ok, {{<<\"200\">>, _}, _, _, _, _}} =\n\t\t\t\t\t\tar_test_node:post_chunk(peer1, encode_chunk(#{\n\t\t\t\t\t\t\tdata_root => DataRoot,\n\t\t\t\t\t\t\tchunk => Chunk,\n\t\t\t\t\t\t\tdata_path => DataPath,\n\t\t\t\t\t\t\toffset => Offset - 1,\n\t\t\t\t\t\t\tdata_size => DataSize\n\t\t\t\t\t\t}))\n\t\t\t\tend,\n\t\t\t\tUploadChunks\n\t\t\t)\n\t\tend,\n\t\tTXs\n\t).\n\nassert_present_txs(GoodTXIDs) ->\n\t?debugFmt(\"Waiting until these txids are stored: ~p.\",\n\t\t\t[[ar_util:encode(TXID) || TXID <- GoodTXIDs]]),\n\ttrue = ar_util:do_until(\n\t\tfun() ->\n\t\t\tlists:all(\n\t\t\t\tfun(TXID) ->\n\t\t\t\t\tis_record(ar_storage:read_tx(TXID), tx)\n\t\t\t\tend,\n\t\t\t\tGoodTXIDs\n\t\t\t)\n\t\tend,\n\t\t500,\n\t\t10000\n\t),\n\tlists:foreach(\n\t\tfun(TXID) ->\n\t\t\t?assertMatch({ok, {_, _}}, ar_storage:get_tx_confirmation_data(TXID))\n\t\tend,\n\t\tGoodTXIDs\n\t).\n\nassert_removed_txs(BadTXIDs) ->\n\t?debugFmt(\"Waiting until these txids are removed: ~p.\",\n\t\t\t[[ar_util:encode(TXID) || TXID <- BadTXIDs]]),\n\ttrue = ar_util:do_until(\n\t\tfun() ->\n\t\t\tlists:all(\n\t\t\t\tfun(TXID) ->\n\t\t\t\t\t{error, not_found} == ar_data_sync:get_tx_data(TXID)\n\t\t\t\t\t\t\t%% Do not use ar_storage:read_tx because the\n\t\t\t\t\t\t\t%% transaction is temporarily kept in the disk cache,\n\t\t\t\t\t\t\t%% even when blacklisted.\n\t\t\t\t\t\t\tandalso ar_kv:get(tx_db, TXID) == not_found\n\t\t\t\tend,\n\t\t\t\tBadTXIDs\n\t\t\t)\n\t\tend,\n\t\t500,\n\t\t30000\n\t),\n\t%% We have to keep the confirmation data even for blacklisted transactions.\n\tlists:foreach(\n\t\tfun(TXID) ->\n\t\t\t?assertMatch({ok, {_, _}}, ar_storage:get_tx_confirmation_data(TXID))\n\t\tend,\n\t\tBadTXIDs\n\t).\n\nassert_present_offsets(GoodOffsets) ->\n\ttrue = ar_util:do_until(\n\t\tfun() ->\n\t\t\tlists:all(\n\t\t\t\tfun(Offset) ->\n\t\t\t\t\tcase 
ar_test_node:get_chunk(main, Offset) of\n\t\t\t\t\t\t{ok, {{<<\"200\">>, _}, _, _, _, _}} ->\n\t\t\t\t\t\t\ttrue;\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t?debugFmt(\"Waiting until the end offset ~B is stored.\", [Offset]),\n\t\t\t\t\t\t\tfalse\n\t\t\t\t\tend\n\t\t\t\tend,\n\t\t\t\tlists:flatten(GoodOffsets)\n\t\t\t)\n\t\tend,\n\t\t500,\n\t\t120000\n\t).\n\nassert_removed_offsets(BadOffsets) ->\n\ttrue = ar_util:do_until(\n\t\tfun() ->\n\t\t\tlists:all(\n\t\t\t\tfun(Offset) ->\n\t\t\t\t\tcase ar_test_node:get_chunk(main, Offset) of\n\t\t\t\t\t\t{ok, {{<<\"404\">>, _}, _, _, _, _}} ->\n\t\t\t\t\t\t\ttrue;\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t?debugFmt(\"Waiting until the end offset ~B is removed.\", [Offset]),\n\t\t\t\t\t\t\tfalse\n\t\t\t\t\tend\n\t\t\t\tend,\n\t\t\t\tlists:flatten(BadOffsets)\n\t\t\t)\n\t\tend,\n\t\t500,\n\t\t60000\n\t).\n\nassert_does_not_accept_offsets(BadOffsets) ->\n\ttrue = ar_util:do_until(\n\t\tfun() ->\n\t\t\tlists:all(\n\t\t\t\tfun(Offset) ->\n\t\t\t\t\tcase ar_test_node:get_chunk(main, Offset) of\n\t\t\t\t\t\t{ok, {{<<\"404\">>, _}, _, _, _, _}} ->\n\t\t\t\t\t\t\t{ok, {{<<\"200\">>, _}, _, EncodedProof, _, _}} =\n\t\t\t\t\t\t\t\tar_test_node:get_chunk(peer1, Offset),\n\t\t\t\t\t\t\tProof = decode_chunk(EncodedProof),\n\t\t\t\t\t\t\tDataPath = maps:get(data_path, Proof),\n\t\t\t\t\t\t\t{ok, DataRoot} = ar_merkle:extract_root(DataPath),\n\t\t\t\t\t\t\tRelativeOffset = ar_merkle:extract_note(DataPath),\n\t\t\t\t\t\t\tProof2 = Proof#{\n\t\t\t\t\t\t\t\toffset => RelativeOffset - 1,\n\t\t\t\t\t\t\t\tdata_root => DataRoot,\n\t\t\t\t\t\t\t\tdata_size => 10 * ?DATA_CHUNK_SIZE\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\tEncodedProof2 = encode_chunk(Proof2),\n\t\t\t\t\t\t\t%% The node returns 200 but does not store the chunk.\n\t\t\t\t\t\t\tcase ar_test_node:post_chunk(main, EncodedProof2) of\n\t\t\t\t\t\t\t\t{ok, {{<<\"200\">>, _}, _, _, _, _}} ->\n\t\t\t\t\t\t\t\t\tcase ar_test_node:get_chunk(main, Offset) of\n\t\t\t\t\t\t\t\t\t\t{ok, {{<<\"404\">>, _}, _, _, _, _}} ->\n\t\t\t\t\t\t\t\t\t\t\ttrue;\n\t\t\t\t\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t\t\t\t\tfalse\n\t\t\t\t\t\t\t\t\tend;\n\t\t\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t\t\tfalse\n\t\t\t\t\t\t\tend;\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\tfalse\n\t\t\t\t\tend\n\t\t\t\tend,\n\t\t\t\tlists:flatten(BadOffsets)\n\t\t\t)\n\t\tend,\n\t\t500,\n\t\t60000\n\t).\n\ndecode_chunk(EncodedProof) ->\n\tar_serialize:json_map_to_poa_map(\n\t\tjiffy:decode(EncodedProof, [return_maps])\n\t).\n\nteardown(Config) ->\n\tok = ar_test_node:remote_call(peer1, cowboy, stop_listener, [ar_tx_blacklist_test_listener]),\n\tarweave_config:set_env(Config).\n"
  },
  {
    "path": "apps/arweave/test/ar_tx_replay_pool_tests.erl",
    "content": "-module(ar_tx_replay_pool_tests).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave/include/ar_pricing.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\nverify_block_txs_test_() ->\n\t{timeout, 30, fun test_verify_block_txs/0}.\n\ntest_verify_block_txs() ->\n\tKey1 = ar_wallet:new(),\n\tKey2 = ar_wallet:new(),\n\tRandomBlockAnchors =\n\t\t[crypto:strong_rand_bytes(32) || _ <- lists:seq(1, ar_block:get_max_tx_anchor_depth())],\n\tBlockAnchorTXAtForkHeight = tx(Key1, fee(ar_fork:height_2_0()), <<\"hash\">>),\n\tBlockAnchorTXAfterForkHeight =\n\t\ttx(Key1, fee(ar_fork:height_2_0() + 1), <<\"hash\">>),\n\tTimestamp = os:system_time(second),\n\tTestCases = [\n\t\t#{\n\t\t\ttitle => \"Fork height 2.0 accepts block anchors\",\n\t\t\ttxs => [tx(Key1, fee(ar_fork:height_2_0()), <<\"hash\">>)],\n\t\t\theight => ar_fork:height_2_0(),\n\t\t\tblock_anchors => [<<\"hash\">>],\n\t\t\trecent_txs_map => #{},\n\t\t\twallet_list => [wallet(Key1, fee(ar_fork:height_2_0()))],\n\t\t\texpected_result => valid\n\t\t},\n\t\t#{\n\t\t\ttitle => \"After fork height 2.0 accepts block anchors\",\n\t\t\ttxs => [tx(Key1, fee(ar_fork:height_2_0() + 1), <<\"hash\">>)],\n\t\t\theight => ar_fork:height_2_0() + 1,\n\t\t\tblock_anchors => [<<\"hash\">>],\n\t\t\trecent_txs_map => #{},\n\t\t\twallet_list => [wallet(Key1, fee(ar_fork:height_2_0() + 1))],\n\t\t\texpected_result => valid\n\t\t},\n\t\t#{\n\t\t\ttitle => \"Fork height 2.0 rejects outdated block anchors\",\n\t\t\ttxs => [\n\t\t\t\ttx(\n\t\t\t\t\tKey1,\n\t\t\t\t\tfee(ar_fork:height_2_0()),\n\t\t\t\t\tcrypto:strong_rand_bytes(32)\n\t\t\t\t)\n\t\t\t],\n\t\t\tblock_anchors => RandomBlockAnchors,\n\t\t\trecent_txs_map => #{},\n\t\t\theight => ar_fork:height_2_0(),\n\t\t\twallet_list => [wallet(Key1, fee(ar_fork:height_2_0()))],\n\t\t\texpected_result => invalid\n\t\t},\n\t\t#{\n\t\t\ttitle => \"Fork height 2.0 accepts wallet list anchors\",\n\t\t\ttxs => [\n\t\t\t\ttx(Key1, fee(ar_fork:height_2_0()), <<>>),\n\t\t\t\ttx(Key2, fee(ar_fork:height_2_0()), <<>>)\n\t\t\t],\n\t\t\theight => ar_fork:height_2_0(),\n\t\t\twallet_list => [\n\t\t\t\twallet(Key1, fee(ar_fork:height_2_0())),\n\t\t\t\twallet(Key2, fee(ar_fork:height_2_0()))\n\t\t\t],\n\t\t\tblock_anchors => [],\n\t\t\trecent_txs_map => #{},\n\t\t\texpected_result => valid\n\t\t},\n\t\t#{\n\t\t\ttitle => \"After fork height 2.0 accepts wallet list anchors\",\n\t\t\ttxs => [\n\t\t\t\ttx(Key1, fee(ar_fork:height_2_0() + 1), <<>>),\n\t\t\t\ttx(Key2, fee(ar_fork:height_2_0() + 1), <<>>)\n\t\t\t],\n\t\t\theight => ar_fork:height_2_0() + 1,\n\t\t\twallet_list => [\n\t\t\t\twallet(Key1, fee(ar_fork:height_2_0() + 1)),\n\t\t\t\twallet(Key2, fee(ar_fork:height_2_0() + 1))\n\t\t\t],\n\t\t\tblock_anchors => [],\n\t\t\trecent_txs_map => #{},\n\t\t\texpected_result => valid\n\t\t},\n\t\t#{\n\t\t\ttitle => \"Fork height 2.0 rejects conflicting wallet list anchors\",\n\t\t\ttxs => [\n\t\t\t\ttx(Key1, fee(ar_fork:height_2_0()), <<>>),\n\t\t\t\ttx(Key1, fee(ar_fork:height_2_0()), <<>>)\n\t\t\t],\n\t\t\theight => ar_fork:height_2_0(),\n\t\t\tblock_anchors => [],\n\t\t\trecent_txs_map => #{},\n\t\t\twallet_list => [wallet(Key1, 2 * fee(ar_fork:height_2_0()))],\n\t\t\texpected_result => invalid\n\t\t},\n\t\t#{\n\t\t\ttitle => \"Fork height 2.0 rejects chained wallet list anchors\",\n\t\t\ttxs => make_tx_chain(Key1, ar_fork:height_2_0()),\n\t\t\theight => ar_fork:height_2_0(),\n\t\t\tblock_anchors => [],\n\t\t\trecent_txs_map => #{},\n\t\t\twallet_list => [wallet(Key1, 2 * 
fee(ar_fork:height_2_0()))],\n\t\t\texpected_result => invalid\n\t\t},\n\t\t#{\n\t\t\ttitle => \"Fork height 2.0 rejects conflicting balances\",\n\t\t\ttxs => [\n\t\t\t\ttx(Key1, fee(ar_fork:height_2_0()), <<>>),\n\t\t\t\ttx(Key1, fee(ar_fork:height_2_0()), <<>>)\n\t\t\t],\n\t\t\theight => ar_fork:height_2_0(),\n\t\t\twallet_list =>\n\t\t\t\t[wallet(Key1, erlang:trunc(1.5 * fee(ar_fork:height_2_0())))],\n\t\t\tblock_anchors => [],\n\t\t\trecent_txs_map => #{},\n\t\t\texpected_result => invalid\n\t\t},\n\t\t#{\n\t\t\ttitle => \"Fork height 2.0 rejects duplicates\",\n\t\t\ttxs => [BlockAnchorTXAtForkHeight, BlockAnchorTXAtForkHeight],\n\t\t\theight => ar_fork:height_2_0(),\n\t\t\tblock_anchors => [],\n\t\t\trecent_txs_map => #{},\n\t\t\twallet_list => [wallet(Key1, 2 * fee(ar_fork:height_2_0()))],\n\t\t\texpected_result => invalid\n\t\t},\n\t\t#{\n\t\t\ttitle => \"After fork height 2.0 rejects duplicates\",\n\t\t\ttxs => [BlockAnchorTXAfterForkHeight, BlockAnchorTXAfterForkHeight],\n\t\t\theight => ar_fork:height_2_0() + 1,\n\t\t\tblock_anchors => [],\n\t\t\trecent_txs_map => #{},\n\t\t\twallet_list => [wallet(Key1, 2 * fee(ar_fork:height_2_0() + 1))],\n\t\t\texpected_result => invalid\n\t\t},\n\t\t#{\n\t\t\ttitle => \"Fork height 2.0 rejects txs from the weave\",\n\t\t\ttxs => [BlockAnchorTXAtForkHeight],\n\t\t\theight => ar_fork:height_2_0(),\n\t\t\tblock_anchors => [<<\"hash\">>, <<\"otherhash\">>],\n\t\t\trecent_txs_map => #{\n\t\t\t\t<<\"txid\">> => ok,\n\t\t\t\t<<\"txid2\">> => ok,\n\t\t\t\tBlockAnchorTXAtForkHeight#tx.id => ok\n\t\t\t},\n\t\t\twallet_list => [wallet(Key1, fee(ar_fork:height_2_0()))],\n\t\t\texpected_result => invalid\n\t\t},\n\t\t#{\n\t\t\ttitle => \"After fork height 2.0 rejects txs from the weave\",\n\t\t\ttxs => [BlockAnchorTXAfterForkHeight],\n\t\t\theight => ar_fork:height_2_0() + 1,\n\t\t\tblock_anchors => [<<\"hash\">>, <<\"otherhash\">>],\n\t\t\trecent_txs_map => #{\n\t\t\t\t<<\"txid\">> => ok,\n\t\t\t\t<<\"txid2\">> => ok,\n\t\t\t\tBlockAnchorTXAfterForkHeight#tx.id => ok\n\t\t\t},\n\t\t\twallet_list => [wallet(Key1, fee(ar_fork:height_2_0() + 1))],\n\t\t\texpected_result => invalid\n\t\t}\n\t],\n\tlists:foreach(\n\t\tfun(#{\n\t\t\ttitle := Title,\n\t\t\ttxs := TXs,\n\t\t\theight := Height,\n\t\t\twallet_list := WL,\n\t\t\tblock_anchors := BlockAnchors,\n\t\t\trecent_txs_map := RecentTXMap,\n\t\t\texpected_result := ExpectedResult\n\t\t}) ->\n\t\t\tRate = {1, 4},\n\t\t\tPricePerGiBMinute = 2000,\n\t\t\tKryderPlusRateMultiplier = 1,\n\t\t\tDenomination = 1,\n\t\t\tRedenominationHeight = 0,\n\t\t\tWallets = maps:from_list([{A, {B, LTX}} || {A, B, LTX} <- WL]),\n\t\t\t?debugFmt(\"~s:~n\", [Title]),\n\t\t\t?assertEqual(\n\t\t\t\tExpectedResult,\n\t\t\t\tar_tx_replay_pool:verify_block_txs({TXs, Rate, PricePerGiBMinute,\n\t\t\t\t\t\tKryderPlusRateMultiplier, Denomination, Height, RedenominationHeight,\n\t\t\t\t\t\tTimestamp, Wallets, BlockAnchors, RecentTXMap}),\n\t\t\t\tTitle),\n\t\t\tPickedTXs = ar_tx_replay_pool:pick_txs_to_mine({BlockAnchors, RecentTXMap, Height,\n\t\t\t\t\tRedenominationHeight, Rate, PricePerGiBMinute, KryderPlusRateMultiplier,\n\t\t\t\t\tDenomination, Timestamp, Wallets, TXs}),\n\t\t\t?assertEqual(\n\t\t\t\tvalid,\n\t\t\t\tar_tx_replay_pool:verify_block_txs({PickedTXs, Rate, PricePerGiBMinute,\n\t\t\t\t\t\tKryderPlusRateMultiplier, Denomination, Height, RedenominationHeight,\n\t\t\t\t\t\tTimestamp, Wallets, BlockAnchors, RecentTXMap}),\n\t\t\t\tlists:flatten(\n\t\t\t\t\tio_lib:format(\"Verifying after picking_txs_to_mine: ~s:\", 
[Title])\n\t\t\t\t)\n\t\t\t)\n\t\tend,\n\t\tTestCases\n\t).\n\nmake_tx_chain(Key, Height) ->\n\tTX1 = tx(Key, fee(Height), <<>>),\n\tTX2 = tx(Key, fee(Height), TX1#tx.id),\n\t[TX1, TX2].\n\ntx(Key = {_, {_, Owner}}, Reward, Anchor) ->\n\tar_tx:sign(\n\t\t#tx{\n\t\t\tformat = 2,\n\t\t\towner = Owner,\n\t\t\treward = Reward,\n\t\t\tlast_tx = Anchor\n\t\t},\n\t\tKey\n\t).\n\nwallet({_, Pub}, Balance) ->\n\t{ar_wallet:to_address(Pub), Balance, <<>>}.\n\nfee(Height) ->\n\tar_tx:get_tx_fee({0, 2000, 1, <<>>, #{}, Height + 1}).\n"
  },
  {
    "path": "apps/arweave/test/ar_tx_tests.erl",
    "content": "-module(ar_tx_tests).\n\n-include(\"ar.hrl\").\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-import(ar_test_node, [wait_until_height/2, assert_wait_until_height/2,\n\tread_block_when_stored/1, random_v1_data/1]).\n\n%% -------------------------------------------------------------------------------------------\n%% Test registration\n%% -------------------------------------------------------------------------------------------\n\naccepts_gossips_and_mines_test_() ->\n\tPrepareTestFor = fun(BuildTXSetFun, KeyType) ->\n\t\tfun() ->\n\t\t\t%% The weave has to be initialised under the fork so that\n\t\t\t%% we can get the correct price estimations according\n\t\t\t%% to the new pricinig model.\n\t\t\tKey = {_, Pub} = ar_wallet:new(KeyType),\n\t\t\tWallets = [{ar_wallet:to_address(Pub), ?AR(5), <<>>}],\n\t\t\t[B0] = ar_weave:init(Wallets),\n\t\t\taccepts_gossips_and_mines(B0, BuildTXSetFun(Key, B0))\n\t\tend\n\tend,\n\t[\n\t\t{timeout, ?TEST_NODE_TIMEOUT, {\n\t\t\t\"One RSA transaction with wallet list anchor followed by one with block anchor\",\n\t\t\tPrepareTestFor(fun one_wallet_list_one_block_anchored_txs/2, ?RSA_KEY_TYPE)\n\t\t}},\n\t\t{timeout, ?TEST_NODE_TIMEOUT, {\n\t\t\t\"One ECDSA transaction with wallet list anchor followed by one with block anchor\",\n\t\t\tPrepareTestFor(fun one_wallet_list_one_block_anchored_txs/2, ?ECDSA_KEY_TYPE)\n\t\t}},\n\t\t{timeout, ?TEST_NODE_TIMEOUT, {\n\t\t\t\"Two RSA transactions with block anchor\",\n\t\t\tPrepareTestFor(fun two_block_anchored_txs/2, ?RSA_KEY_TYPE)\n\t\t}},\n\t\t{timeout, ?TEST_NODE_TIMEOUT, {\n\t\t\t\"Two ECDSA transactions with block anchor\",\n\t\t\tPrepareTestFor(fun two_block_anchored_txs/2, ?ECDSA_KEY_TYPE)\n\t\t}}\n\t].\n\npolls_for_transactions_and_gossips_and_mines_test_() ->\n\tPrepareTestFor = fun(BuildTXSetFun, KeyType) ->\n\t\tfun() ->\n\t\t\t%% The weave has to be initialised under the fork so that\n\t\t\t%% we can get the correct price estimations according\n\t\t\t%% to the new pricinig model.\n\t\t\tKey = {_, Pub} = ar_wallet:new(KeyType),\n\t\t\tWallets = [{ar_wallet:to_address(Pub), ?AR(5), <<>>}],\n\t\t\t[B0] = ar_weave:init(Wallets),\n\t\t\tpolls_for_transactions_and_gossips_and_mines(B0, BuildTXSetFun(Key, B0))\n\t\tend\n\tend,\n\t[\n\t\t{timeout, ?TEST_NODE_TIMEOUT, {\n\t\t\t\"Two RSA transactions with block anchor\",\n\t\t\tPrepareTestFor(fun two_block_anchored_txs/2, ?RSA_KEY_TYPE)\n\t\t}},\n\t\t{timeout, ?TEST_NODE_TIMEOUT, {\n\t\t\t\"Two ECDSA transactions with block anchor\",\n\t\t\tPrepareTestFor(fun two_block_anchored_txs/2, ?ECDSA_KEY_TYPE)\n\t\t}}\n\t].\n\nkeeps_txs_after_new_block_test_() ->\n\tPrepareTestFor = fun(BuildFirstTXSetFun, BuildSecondTXSetFun) ->\n\t\tfun() ->\n\t\t\tKey = {_, Pub} = ar_wallet:new(),\n\t\t\tKey2 = {_, Pub2} = ar_test_node:new_custom_size_rsa_wallet(66),\n\t\t\tWallets = [{ar_wallet:to_address(Pub), ?AR(5), <<>>},\n\t\t\t\t\t{ar_wallet:to_address(Pub2), ?AR(5), <<>>}],\n\t\t\t[B0] = ar_weave:init(Wallets),\n\t\t\tkeeps_txs_after_new_block(\n\t\t\t\tB0,\n\t\t\t\tBuildFirstTXSetFun(Key, B0),\n\t\t\t\tBuildSecondTXSetFun(Key2, B0)\n\t\t\t)\n\t\tend\n\tend,\n\t[\n\t\t%% Main node receives the second set then the first set. 
Peer node only\n\t\t%% receives the second set.\n\t\t{timeout, ?TEST_NODE_TIMEOUT, {\n\t\t\t\"First set: two block anchored txs, second set: empty\",\n\t\t\tPrepareTestFor(fun two_block_anchored_txs/2, fun empty_tx_set/2)\n\t\t}},\n\t\t{timeout, ?TEST_NODE_TIMEOUT, {\n\t\t\t\"First set: empty, second set: two block anchored txs\",\n\t\t\tPrepareTestFor(fun empty_tx_set/2, fun two_block_anchored_txs/2)\n\t\t}},\n\t\t{timeout, ?TEST_NODE_TIMEOUT, {\n\t\t\t\"First set: two block anchored txs, second set: two block anchored txs\",\n\t\t\tPrepareTestFor(fun two_block_anchored_txs/2, fun two_block_anchored_txs/2)\n\t\t}}\n\t].\n\nreturns_error_when_txs_exceed_balance_test_() ->\n\tPrepareTestFor = fun(BuildTXSetFun) ->\n\t\tfun() ->\n\t\t\treturns_error_when_txs_exceed_balance(BuildTXSetFun)\n\t\tend\n\tend,\n\t[\n\t\t{timeout, ?TEST_NODE_TIMEOUT, {\n\t\t\t\"Three transactions with block anchor\",\n\t\t\tPrepareTestFor(fun block_anchor_txs_spending_balance_plus_one_more/2)\n\t\t}},\n\t\t{timeout, ?TEST_NODE_TIMEOUT, {\n\t\t\t\"Five transactions with mixed anchors\",\n\t\t\tPrepareTestFor(fun mixed_anchor_txs_spending_balance_plus_one_more/2)\n\t\t\t}}\n\t].\n\nmines_blocks_under_the_size_limit_test_() ->\n\tPrepareTestFor = fun(BuildTXSetFun) ->\n\t\tfun() ->\n\t\t\t{B0, TXGroups} = BuildTXSetFun(),\n\t\t\tmines_blocks_under_the_size_limit(B0, TXGroups)\n\t\tend\n\tend,\n\t[\n\t\t{\n\t\t\t\"Five transactions with block anchors\",\n\t\t\t{timeout, ?TEST_NODE_TIMEOUT, PrepareTestFor(fun() -> grouped_txs() end)}\n\t\t}\n\t].\n\njoins_network_successfully_test_() ->\n\t{timeout, ?TEST_NODE_TIMEOUT, fun joins_network_successfully/0}.\n\nrecovers_from_forks_test_() ->\n\t{timeout, ?TEST_NODE_TIMEOUT, fun() -> recovers_from_forks(7) end}.\n\nrejects_transactions_above_the_size_limit_test_() ->\n\t{timeout, ?TEST_NODE_TIMEOUT, fun test_rejects_transactions_above_the_size_limit/0}.\n\naccepts_at_most_one_wallet_list_anchored_tx_per_block_test_() ->\n\t{timeout, ?TEST_NODE_TIMEOUT, fun test_accepts_at_most_one_wallet_list_anchored_tx_per_block/0}.\n\ndoes_not_allow_to_spend_mempool_tokens_test_() ->\n\t{timeout, ?TEST_NODE_TIMEOUT, fun test_does_not_allow_to_spend_mempool_tokens/0}.\n\ndoes_not_allow_to_replay_empty_wallet_txs_test_() ->\n\t{timeout, ?TEST_NODE_TIMEOUT, fun test_does_not_allow_to_replay_empty_wallet_txs/0}.\n\nrejects_txs_with_outdated_anchors_test_() ->\n\t{timeout, ?TEST_NODE_TIMEOUT, fun() ->\n\t\t%% Post a transaction anchoring the block at ar_block:get_max_tx_anchor_depth() + 1.\n\t\t%%\n\t\t%% Expect the transaction to be rejected.\n\t\tKey = {_, Pub} = ar_wallet:new(),\n\t\t[B0] = ar_weave:init([\n\t\t\t{ar_wallet:to_address(Pub), ?AR(20), <<>>}\n\t\t]),\n\t\t_ = ar_test_node:start_peer(peer1, B0),\n\t\tmine_blocks(peer1, ar_block:get_max_tx_anchor_depth()),\n\t\tassert_wait_until_height(peer1, ar_block:get_max_tx_anchor_depth()),\n\t\tTX1 = ar_test_node:sign_v1_tx(Key, #{ last_tx => B0#block.indep_hash }),\n\t\t{ok, {{<<\"400\">>, _}, _, <<\"Invalid anchor (last_tx).\">>, _, _}} =\n\t\t\tar_test_node:post_tx_to_peer(peer1, TX1)\n\tend}.\n\ndrops_v1_txs_exceeding_mempool_limit_test_() ->\n\t{timeout, ?TEST_NODE_TIMEOUT, fun test_drops_v1_txs_exceeding_mempool_limit/0}.\n\ndrops_v2_txs_exceeding_mempool_limit_test_() ->\n\t{timeout, ?TEST_NODE_TIMEOUT, fun drops_v2_txs_exceeding_mempool_limit/0}.\n\nmines_format_2_txs_without_size_limit_test_() ->\n\t{timeout, ?TEST_NODE_TIMEOUT, fun mines_format_2_txs_without_size_limit/0}.\n\n%% 
-------------------------------------------------------------------------------------------\n%% Test functions\n%% -------------------------------------------------------------------------------------------\n\naccepts_gossips_and_mines(B0, TXFuns) ->\n\t%% Post the given transactions made from the given wallets to a node.\n\t%%\n\t%% Expect them to be accepted, gossiped to the peer and included into the block.\n\t%% Expect the block to be accepted by the peer.\n\t_ = ar_test_node:start(B0),\n\t_ = ar_test_node:start_peer(peer1, B0),\n\t%% Sign here after the node has started to get the correct price\n\t%% estimation from it.\n\tTXs = lists:map(fun(TXFun) -> TXFun() end, TXFuns),\n\tar_test_node:connect_to_peer(peer1),\n\t%% Post the transactions to peer1.\n\tlists:foreach(\n\t\tfun(TX) ->\n\t\t\tar_test_node:assert_post_tx_to_peer(peer1, TX),\n\t\t\t%% Expect transactions to be gossiped to main.\n\t\t\tar_test_node:assert_wait_until_receives_txs([TX])\n\t\tend,\n\t\tTXs\n\t),\n\t%% Mine a block.\n\tar_test_node:mine(peer1),\n\t%% Expect both transactions to be included into block.\n\tPeerBI = assert_wait_until_height(peer1, 1),\n\tTXIDs = lists:map(fun(TX) -> TX#tx.id end, TXs),\n\t?assertEqual(\n\t\tlists:sort(TXIDs),\n\t\tlists:sort((ar_test_node:remote_call(peer1, ar_test_node, read_block_when_stored, [hd(PeerBI)]))#block.txs)\n\t),\n\tlists:foreach(\n\t\tfun(TX) ->\n\t\t\t?assertEqual(TX, ar_test_node:remote_call(peer1, ar_storage, read_tx, [TX#tx.id]))\n\t\tend,\n\t\tTXs\n\t),\n\t%% Expect the block to be accepted by main.\n\tBI = wait_until_height(main, 1),\n\t?assertEqual(\n\t\tlists:sort(TXIDs),\n\t\tlists:sort((read_block_when_stored(hd(BI)))#block.txs)\n\t),\n\tlists:foreach(\n\t\tfun(TX) ->\n\t\t\t?assertEqual(TX, ar_storage:read_tx(TX#tx.id))\n\t\tend,\n\t\tTXs\n\t).\n\npolls_for_transactions_and_gossips_and_mines(B0, TXFuns) ->\n\t%% Post the given transactions made from the given wallets to a node.\n\t%%\n\t%% Expect them to be accepted, fetched by the peer we did not push them to\n\t%% and included into the block.\n\t%% Expect the block to be accepted by the peer.\n\t{ok, MainConfig} = arweave_config:get_env(),\n\t{ok, PeerConfig} = ar_test_node:remote_call(peer1, arweave_config, get_env, []),\n\ttry\n\t\tMainConfig2 = MainConfig#config{ max_propagation_peers = 0 },\n\t\t_ = ar_test_node:start(#{ b0 => B0, config => MainConfig2 }),\n\t\tPeerConfig2 = PeerConfig#config{ max_propagation_peers = 0 },\n\t\t_ = ar_test_node:start_peer(peer1, #{ b0 => B0, config => PeerConfig2 }),\n\t\t%% Sign here after the node has started to get the correct price\n\t\t%% estimation from it.\n\t\tTXs = lists:map(fun(TXFun) -> TXFun() end, TXFuns),\n\t\tar_test_node:connect_to_peer(peer1),\n\t\t%% Post the transactions to peer1.\n\t\tlists:foreach(\n\t\t\tfun(TX) ->\n\t\t\t\tar_test_node:assert_post_tx_to_peer(peer1, TX),\n\t\t\t\t%% Expect transactions to be fetched by main.\n\t\t\t\tar_test_node:assert_wait_until_receives_txs([TX])\n\t\t\tend,\n\t\t\tTXs\n\t\t),\n\t\t%% Mine a block.\n\t\tar_test_node:mine(peer1),\n\t\t%% Expect both transactions to be included into block.\n\t\tPeerBI = assert_wait_until_height(peer1, 1),\n\t\tTXIDs = lists:map(fun(TX) -> TX#tx.id end, TXs),\n\t\t?assertEqual(\n\t\t\tlists:sort(TXIDs),\n\t\t\tlists:sort((ar_test_node:remote_call(peer1, ar_test_node, read_block_when_stored, [hd(PeerBI)]))#block.txs)\n\t\t),\n\t\tlists:foreach(\n\t\t\tfun(TX) ->\n\t\t\t\t?assertEqual(TX, ar_test_node:remote_call(peer1, ar_storage, read_tx, 
[TX#tx.id]))\n\t\t\tend,\n\t\t\tTXs\n\t\t),\n\t\t%% Expect the block to be accepted by main.\n\t\tBI = wait_until_height(main, 1),\n\t\t?assertEqual(\n\t\t\tlists:sort(TXIDs),\n\t\t\tlists:sort((read_block_when_stored(hd(BI)))#block.txs)\n\t\t),\n\t\tlists:foreach(\n\t\t\tfun(TX) ->\n\t\t\t\t?assertEqual(TX, ar_storage:read_tx(TX#tx.id))\n\t\t\tend,\n\t\t\tTXs\n\t\t)\n\tafter\n\t\tarweave_config:set_env(MainConfig),\n\t\tar_test_node:set_config(peer1, PeerConfig)\n\tend.\n\n\nkeeps_txs_after_new_block(B0, FirstTXSetFuns, SecondTXSetFuns) ->\n\t%% Post the transactions from the first set to a node but do not gossip them.\n\t%% Post transactions from the second set to both nodes.\n\t%% Mine a block with transactions from the second set on a different node\n\t%% and gossip it to the node with transactions.\n\t%%\n\t%% Expect the block to be accepted.\n\t%% Expect transactions from the difference between the two sets to be kept in the mempool.\n\t%% Mine a block on the first node, expect the difference to be included into the block.\n\t{ok, MainConfig} = arweave_config:get_env(),\n\t{ok, PeerConfig} = ar_test_node:remote_call(peer1, arweave_config, get_env, []),\n\n\ttry\n\t\tMainConfig2 = MainConfig#config{ disable = [tx_poller | MainConfig#config.disable] },\n\t\t_ = ar_test_node:start(#{ b0 => B0, config => MainConfig2 }),\n\t\tPeerConfig2 = PeerConfig#config{ disable = [tx_poller | PeerConfig#config.disable] },\n\t\t_ = ar_test_node:start_peer(peer1, #{ b0 => B0, config => PeerConfig2 }),\n\t\t%% Sign here after the node has started to get the correct price\n\t\t%% estimation from it.\n\t\tFirstTXSet = lists:map(fun(TXFun) -> TXFun() end, FirstTXSetFuns),\n\t\tSecondTXSet = lists:map(fun(TXFun) -> TXFun() end, SecondTXSetFuns),\n\t\t%% Disconnect the nodes so that peer1 does not receive txs.\n\t\tar_test_node:disconnect_from(peer1),\n\t\t%% Post transactions from the first set to main.\n\t\tlists:foreach(\n\t\t\tfun(TX) ->\n\t\t\t\tar_test_node:post_tx_to_peer(main, TX)\n\t\t\tend,\n\t\t\tSecondTXSet ++ FirstTXSet\n\t\t),\n\t\t?assertEqual([], ar_test_node:remote_call(peer1, ar_mempool, get_all_txids, [])),\n\t\t%% Post transactions from the second set to peer1.\n\t\tlists:foreach(\n\t\t\tfun(TX) ->\n\t\t\t\tar_test_node:assert_post_tx_to_peer(peer1, TX)\n\t\t\tend,\n\t\t\tSecondTXSet\n\t\t),\n\t\t%% Wait to make sure the tx will not be gossiped upon reconnect.\n\t\ttimer:sleep(2000), % == 2 * ?CHECK_MEMPOOL_FREQUENCY\n\t\t%% Connect the nodes and mine a block on peer1.\n\t\tar_test_node:connect_to_peer(peer1),\n\t\tar_test_node:mine(peer1),\n\t\t%% Expect main to receive the block.\n\t\tBI = wait_until_height(main, 1),\n\t\tSecondSetTXIDs = lists:map(fun(TX) -> TX#tx.id end, SecondTXSet),\n\t\t?assertEqual(lists:sort(SecondSetTXIDs),\n\t\t\t\tlists:sort((read_block_when_stored(hd(BI)))#block.txs)),\n\t\t%% Expect main to have the set difference in the mempool.\n\t\tar_test_node:assert_wait_until_receives_txs(FirstTXSet -- SecondTXSet),\n\t\t%% Mine a block on main and expect both transactions to be included.\n\t\tar_test_node:mine(),\n\t\tBI2 = wait_until_height(main, 2),\n\t\tSetDifferenceTXIDs = lists:map(fun(TX) -> TX#tx.id end, FirstTXSet -- SecondTXSet),\n\t\t?assertEqual(\n\t\t\tlists:sort(SetDifferenceTXIDs),\n\t\t\tlists:sort((read_block_when_stored(hd(BI2)))#block.txs)\n\t\t)\n\tafter\n\t\tarweave_config:set_env(MainConfig),\n\t\tar_test_node:set_config(peer1, PeerConfig)\n\tend.\n\nreturns_error_when_txs_exceed_balance(BuildTXSetFun) ->\n\tKey = {_, Pub} = 
ar_wallet:new(),\n\t[B0] = ar_weave:init([{ar_wallet:to_address(Pub), ?AR(20), <<>>}]),\n\n\t_ = ar_test_node:start(B0),\n\t_ = ar_test_node:start_peer(peer1, B0),\n\n\tTXs = BuildTXSetFun(Key, B0),\n\n\tar_test_node:connect_to_peer(peer1),\n\n\t%% Expect the post for all TXs (including the balance exceeding one) to\n\t%% succeed. However, immediately after adding each TX to the mempool,\n\t%% we'll check whether any balances are exceeded and eject the TXs that\n\t%% exceed the balance. The ordering used is {Utility, TXID} - so TXs with\n\t%% the same Utility but with a lower alphanumeric ID will be ejected first.\n\tSortedTXs = lists:sort(\n\t\tfun (TX1, TX2) ->\n\t\t\t% Sort in reverse order - \"biggest\" first.\n\t\t\t{ar_tx:utility(TX1), TX1#tx.id} > {ar_tx:utility(TX2), TX2#tx.id}\n\t\tend,\n\t\tTXs\n\t),\n\tExceedBalanceTX = lists:last(SortedTXs),\n\tBelowBalanceTXs = lists:droplast(SortedTXs),\n\tlists:foreach(\n\t\tfun(TX) ->\n\t\t\tar_test_node:assert_post_tx_to_peer(peer1, TX, false)\n\t\tend,\n\t\tTXs\n\t),\n\n\tar_test_node:assert_wait_until_receives_txs(BelowBalanceTXs),\n\t%% Expect all transactions except the ejected one to be included into the block.\n\tar_test_node:mine(peer1),\n\tPeerBI = assert_wait_until_height(peer1, 1),\n\tTXIDs = lists:map(fun(TX) -> TX#tx.id end, BelowBalanceTXs),\n\t?assertEqual(\n\t\tlists:sort(TXIDs),\n\t\tlists:sort((ar_test_node:remote_call(peer1, ar_test_node, read_block_when_stored, [hd(PeerBI)]))#block.txs)\n\t),\n\tBI = wait_until_height(main, 1),\n\t?assertEqual(\n\t\tlists:sort(TXIDs),\n\t\tlists:sort((read_block_when_stored(hd(BI)))#block.txs)\n\t),\n\t%% Post the balance exceeding transaction again\n\t%% and expect the balance exceeded error.\n\tar_test_node:remote_call(peer1, ets, delete, [ignored_ids, ExceedBalanceTX#tx.id]),\n\t{ok, {{<<\"400\">>, _}, _, _Body, _, _}} =\n\t\tar_http:req(#{\n\t\t\tmethod => post,\n\t\t\tpeer => ar_test_node:peer_ip(peer1),\n\t\t\tpath => \"/tx\",\n\t\t\tbody => ar_serialize:jsonify(ar_serialize:tx_to_json_struct(ExceedBalanceTX))\n\t\t}),\n\t?assertEqual({ok, [\"overspend\"]}, ar_test_node:remote_call(peer1, ar_tx_db, get_error_codes,\n\t\t\t[ExceedBalanceTX#tx.id])).\n\n\ntest_rejects_transactions_above_the_size_limit() ->\n\t%% Create a genesis block with a wallet.\n\tKey1 = {_, Pub1} = ar_wallet:new(),\n\tKey2 = {_, Pub2} = ar_wallet:new(),\n\t[B0] = ar_weave:init([\n\t\t{ar_wallet:to_address(Pub1), ?AR(20), <<>>},\n\t\t{ar_wallet:to_address(Pub2), ?AR(20), <<>>}\n\t]),\n\t%% Start the node.\n\t_ = ar_test_node:start_peer(peer1, B0),\n\t_ = ar_test_node:connect_to_peer(peer1),\n\tSmallData = random_v1_data(?TX_DATA_SIZE_LIMIT),\n\tBigData = random_v1_data(?TX_DATA_SIZE_LIMIT + 1),\n\tGoodTX = ar_test_node:sign_v1_tx(Key1, #{ data => SmallData }),\n\tar_test_node:assert_post_tx_to_peer(peer1, GoodTX),\n\tBadTX = ar_test_node:sign_v1_tx(Key2, #{ data => BigData }),\n\t?assertMatch(\n\t\t{ok, {{<<\"400\">>, _}, _, <<\"Transaction verification failed.\">>, _, _}},\n\t\tar_test_node:post_tx_to_peer(peer1, BadTX)\n\t),\n\t?assertMatch(\n\t\t{ok, [\"tx_fields_too_large\"]},\n\t\tar_test_node:remote_call(peer1, ar_tx_db, get_error_codes, [BadTX#tx.id])\n\t).\n\ntest_accepts_at_most_one_wallet_list_anchored_tx_per_block() ->\n\t%% Post a TX, mine a block.\n\t%% Post another TX referencing the first one.\n\t%% Post the third TX referencing the second one.\n\t%%\n\t%% Expect the third to be rejected.\n\t%%\n\t%% Post the fourth TX referencing the block.\n\t%%\n\t%% Expect the fourth TX to be accepted and mined into a block.\n\tKey 
= {_, Pub} = ar_wallet:new(),\n\t[B0] = ar_weave:init([\n\t\t{ar_wallet:to_address(Pub), ?AR(20), <<>>}\n\t]),\n\t_ = ar_test_node:start_peer(peer1, B0),\n\t_ = ar_test_node:connect_to_peer(peer1),\n\tTX1 = ar_test_node:sign_v1_tx(Key),\n\tar_test_node:assert_post_tx_to_peer(peer1, TX1),\n\tar_test_node:mine(peer1),\n\tassert_wait_until_height(peer1, 1),\n\tTX2 = ar_test_node:sign_v1_tx(Key, #{ last_tx => TX1#tx.id }),\n\tar_test_node:assert_post_tx_to_peer(peer1, TX2),\n\tTX3 = ar_test_node:sign_v1_tx(Key, #{ last_tx => TX2#tx.id }),\n\t{ok, {{<<\"400\">>, _}, _, <<\"Invalid anchor (last_tx from mempool).\">>, _, _}} = ar_test_node:post_tx_to_peer(peer1, TX3),\n\tTX4 = ar_test_node:sign_v1_tx(Key, #{ last_tx => B0#block.indep_hash }),\n\tar_test_node:assert_post_tx_to_peer(peer1, TX4),\n\tar_test_node:mine(peer1),\n\tPeerBI = assert_wait_until_height(peer1, 2),\n\tB2 = ar_test_node:remote_call(peer1, ar_test_node, read_block_when_stored, [hd(PeerBI)]),\n\t?assertEqual([TX2#tx.id, TX4#tx.id], B2#block.txs).\n\ntest_does_not_allow_to_spend_mempool_tokens() ->\n\t%% Post a transaction sending tokens to a wallet with few tokens.\n\t%% Post the second transaction spending the new tokens.\n\t%%\n\t%% Expect the second transaction to be rejected.\n\t%%\n\t%% Mine a block.\n\t%% Post another transaction spending the rest of tokens from the new wallet.\n\t%%\n\t%% Expect the transaction to be accepted.\n\tKey1 = {_, Pub1} = ar_wallet:new(),\n\tKey2 = {_, Pub2} = ar_wallet:new(),\n\t[B0] = ar_weave:init([\n\t\t{ar_wallet:to_address(Pub1), ?AR(20), <<>>},\n\t\t{ar_wallet:to_address(Pub2), ?AR(0), <<>>}\n\t]),\n\t_ = ar_test_node:start_peer(peer1, B0),\n\t_ = ar_test_node:connect_to_peer(peer1),\n\tTX1 = ar_test_node:sign_v1_tx(Key1, #{ target => ar_wallet:to_address(Pub2), reward => ?AR(1),\n\t\t\tquantity => ?AR(2) }),\n\tar_test_node:assert_post_tx_to_peer(peer1, TX1),\n\tTX2 = ar_test_node:sign_v1_tx(\n\t\tKey2,\n\t\t#{\n\t\t\ttarget => ar_wallet:to_address(Pub1),\n\t\t\treward => ?AR(1),\n\t\t\tquantity => ?AR(1),\n\t\t\tlast_tx => B0#block.indep_hash,\n\t\t\ttags => [{<<\"nonce\">>, <<\"1\">>}]\n\t\t}\n\t),\n\t{ok, {{<<\"400\">>, _}, _, _, _, _}} = ar_test_node:post_tx_to_peer(peer1, TX2),\n\t?assertEqual({ok, [\"overspend\"]}, ar_test_node:remote_call(peer1, ar_tx_db, get_error_codes, [TX2#tx.id])),\n\tar_test_node:mine(peer1),\n\tPeerBI = assert_wait_until_height(peer1, 1),\n\tB1 = ar_test_node:remote_call(peer1, ar_test_node, read_block_when_stored, [hd(PeerBI)]),\n\t?assertEqual([TX1#tx.id], B1#block.txs),\n\tTX3 = ar_test_node:sign_v1_tx(\n\t\tKey2,\n\t\t#{\n\t\t\ttarget => ar_wallet:to_address(Pub1),\n\t\t\treward => ?AR(1),\n\t\t\tquantity => ?AR(1),\n\t\t\tlast_tx => B1#block.indep_hash,\n\t\t\ttags => [{<<\"nonce\">>, <<\"3\">>}]\n\t\t}\n\t),\n\tar_test_node:assert_post_tx_to_peer(peer1, TX3),\n\tar_test_node:mine(peer1),\n\tPeerBI2 = assert_wait_until_height(peer1, 2),\n\tB2 = ar_test_node:remote_call(peer1, ar_test_node, read_block_when_stored, [hd(PeerBI2)]),\n\t?assertEqual([TX3#tx.id], B2#block.txs).\n\ntest_does_not_allow_to_replay_empty_wallet_txs() ->\n\t%% Create a new wallet by sending some tokens to it. Mine a block.\n\t%% Send the tokens back so that the wallet balance is back to zero. Mine a block.\n\t%% Send the same amount of tokens to the same wallet again. 
Mine a block.\n\t%% Try to replay the transaction which sent the tokens back (before and after mining).\n\t%%\n\t%% Expect the replay to be rejected.\n\tKey1 = {_, Pub1} = ar_wallet:new(),\n\tKey2 = {_, Pub2} = ar_wallet:new(),\n\t[B0] = ar_weave:init([\n\t\t{ar_wallet:to_address(Pub1), ?AR(50), <<>>}\n\t]),\n\t_ = ar_test_node:start_peer(peer1, B0),\n\tTX1 = ar_test_node:sign_v1_tx(Key1, #{ target => ar_wallet:to_address(Pub2), reward => ?AR(6),\n\t\t\tquantity => ?AR(2), last_tx => <<>> }),\n\tar_test_node:assert_post_tx_to_peer(peer1, TX1),\n\tar_test_node:mine(peer1),\n\tassert_wait_until_height(peer1, 1),\n\tGetBalancePath = binary_to_list(ar_util:encode(ar_wallet:to_address(Pub2))),\n\t{ok, {{<<\"200\">>, _}, _, Body, _, _}} =\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => ar_test_node:peer_ip(peer1),\n\t\t\tpath => \"/wallet/\" ++ GetBalancePath ++ \"/balance\"\n\t\t}),\n\tBalance = binary_to_integer(Body),\n\tTX2 = ar_test_node:sign_v1_tx(Key2, #{ target => ar_wallet:to_address(Pub1), reward => Balance - ?AR(1),\n\t\t\tquantity => ?AR(1), last_tx => <<>> }),\n\tar_test_node:assert_post_tx_to_peer(peer1, TX2),\n\tar_test_node:mine(peer1),\n\tassert_wait_until_height(peer1, 2),\n\t{ok, {{<<\"200\">>, _}, _, Body2, _, _}} =\n\t\tar_http:req(#{\n\t\t\tmethod => get,\n\t\t\tpeer => ar_test_node:peer_ip(peer1),\n\t\t\tpath => \"/wallet/\" ++ GetBalancePath ++ \"/balance\"\n\t\t}),\n\t?assertEqual(0, binary_to_integer(Body2)),\n\tTX3 = ar_test_node:sign_v1_tx(Key1, #{ target => ar_wallet:to_address(Pub2), reward => ?AR(6),\n\t\t\tquantity => ?AR(2), last_tx => TX1#tx.id }),\n\tar_test_node:assert_post_tx_to_peer(peer1, TX3),\n\tar_test_node:mine(peer1),\n\tassert_wait_until_height(peer1, 3),\n\t%% Remove the replay TX from the ignore list (to simulate e.g. 
a node restart).\n\tar_test_node:remote_call(peer1, ets, delete, [ignored_ids, TX2#tx.id]),\n\t{ok, {{<<\"400\">>, _}, _, <<\"Invalid anchor (last_tx).\">>, _, _}} =\n\t\tar_test_node:post_tx_to_peer(peer1, TX2).\n\nmines_blocks_under_the_size_limit(B0, TXGroups) ->\n\t%% Post the given transactions grouped by block size to a node.\n\t%%\n\t%% Expect them to be mined into the corresponding number of blocks so that\n\t%% each block fits under the limit.\n\t_ = ar_test_node:start(B0),\n\t_ = ar_test_node:start_peer(peer1, B0),\n\tar_test_node:connect_to_peer(peer1),\n\tlists:foreach(\n\t\tfun(TX) ->\n\t\t\tar_test_node:assert_post_tx_to_peer(peer1, TX),\n\t\t\tar_test_node:assert_wait_until_receives_txs([TX])\n\t\tend,\n\t\tlists:flatten(TXGroups)\n\t),\n\t%% Mine blocks, expect the transactions there.\n\tlists:foldl(\n\t\tfun(Group, Height) ->\n\t\t\tar_test_node:mine(peer1),\n\t\t\tPeerBI = assert_wait_until_height(peer1, Height),\n\t\t\tGroupTXIDs = lists:map(fun(TX) -> TX#tx.id end, Group),\n\t\t\t?assertEqual(\n\t\t\t\tlists:sort(GroupTXIDs),\n\t\t\t\tlists:sort(\n\t\t\t\t\t(ar_test_node:remote_call(peer1, ar_test_node, read_block_when_stored, [hd(PeerBI)]))#block.txs\n\t\t\t\t),\n\t\t\t\tio_lib:format(\"Height ~B\", [Height])\n\t\t\t),\n\t\t\tassert_wait_until_txs_are_stored(GroupTXIDs),\n\t\t\tHeight + 1\n\t\tend,\n\t\t1,\n\t\tTXGroups\n\t).\n\nassert_wait_until_txs_are_stored(TXIDs) ->\n\tar_util:do_until(\n\t\tfun() ->\n\t\t\tlists:all(fun(TX) -> is_record(TX, tx) end, ar_storage:read_tx(TXIDs))\n\t\tend,\n\t\t200,\n\t\t60_000\n\t).\n\nmines_format_2_txs_without_size_limit() ->\n\tKey = {_, Pub} = ar_wallet:new(),\n\t[B0] = ar_weave:init([\n\t\t{ar_wallet:to_address(Pub), ?AR(20), <<>>}\n\t]),\n\t_ = ar_test_node:start(B0),\n\t_ = ar_test_node:start_peer(peer1, B0),\n\tar_test_node:connect_to_peer(peer1),\n\tChunkSize = ?MEMPOOL_DATA_SIZE_LIMIT div (?BLOCK_TX_COUNT_LIMIT + 1),\n\tlists:foreach(\n\t\tfun(N) ->\n\t\t\tTX = ar_test_node:sign_tx(\n\t\t\t\tKey,\n\t\t\t\t#{\n\t\t\t\t\tlast_tx => B0#block.indep_hash,\n\t\t\t\t\tdata => << <<1>> || _ <- lists:seq(1, ChunkSize) >>,\n\t\t\t\t\ttags => [{<<\"nonce\">>, integer_to_binary(N)}]\n\t\t\t\t}\n\t\t\t),\n\t\t\tar_test_node:assert_post_tx_to_peer(peer1, TX),\n\t\t\tar_test_node:assert_wait_until_receives_txs([TX])\n\t\tend,\n\t\tlists:seq(1, ?BLOCK_TX_COUNT_LIMIT + 1)\n\t),\n\tar_test_node:mine(),\n\t[{H, _, _} | _] = wait_until_height(main, 1),\n\tB = read_block_when_stored(H),\n\t?assertEqual(?BLOCK_TX_COUNT_LIMIT, length(B#block.txs)),\n\tTotalSize = lists:sum([(ar_storage:read_tx(TXID))#tx.data_size || TXID <- B#block.txs]),\n\t?assert(TotalSize > ?BLOCK_TX_DATA_SIZE_LIMIT).\n\ntest_drops_v1_txs_exceeding_mempool_limit() ->\n\t%% Post transactions which exceed the mempool size limit.\n\t%%\n\t%% Expect the exceeding transaction to be dropped.\n\tKey = {_, Pub} = ar_wallet:new(),\n\t[B0] = ar_weave:init([\n\t\t{ar_wallet:to_address(Pub), ?AR(20), <<>>}\n\t]),\n\t_ = ar_test_node:start_peer(peer1, B0),\n\tBigChunk = random_v1_data(?TX_DATA_SIZE_LIMIT - ?TX_SIZE_BASE),\n\tTXs = lists:map(\n\t\tfun(N) ->\n\t\t\tar_test_node:sign_v1_tx(Key, #{ last_tx => B0#block.indep_hash,\n\t\t\t\t\tdata => BigChunk, tags => [{<<\"nonce\">>, integer_to_binary(N)}] })\n\t\tend,\n\t\tlists:seq(1, 6)\n\t),\n\tlists:foreach(\n\t\tfun(TX) ->\n\t\t\tar_test_node:assert_post_tx_to_peer(peer1, TX)\n\t\tend,\n\t\tlists:sublist(TXs, 5)\n\t),\n\tPeer1 = ar_test_node:peer_ip(peer1),\n\t{{ok, Mempool1}, Peer1} = ar_http_iface_client:get_mempool(Peer1),\n\t%% The 
transactions have the same utility therefore they are sorted in the\n\t%% order of submission.\n\t?assertEqual([TX#tx.id || TX <- lists:sublist(TXs, 5)], Mempool1),\n\tLast = lists:last(TXs),\n\t{ok, {{<<\"200\">>, _}, _, <<\"OK\">>, _, _}} = ar_test_node:post_tx_to_peer(peer1, Last, false),\n\t{{ok, Mempool2}, Peer1} = ar_http_iface_client:get_mempool(Peer1),\n\t%% There is no place for the last transaction in the mempool.\n\t?assertEqual([TX#tx.id || TX <- lists:sublist(TXs, 5)], Mempool2).\n\ndrops_v2_txs_exceeding_mempool_limit() ->\n\tKey = {_, Pub} = ar_wallet:new(),\n\t[B0] = ar_weave:init([\n\t\t{ar_wallet:to_address(Pub), ?AR(20), <<>>}\n\t]),\n\t_ = ar_test_node:start_peer(peer1, B0),\n\tBigChunk = crypto:strong_rand_bytes(?TX_DATA_SIZE_LIMIT div 2),\n\tTXs = lists:map(\n\t\tfun(N) ->\n\t\t\tar_test_node:sign_tx(Key, #{ last_tx => B0#block.indep_hash,\n\t\t\t\t\tdata => case N of 11 -> << BigChunk/binary, BigChunk/binary >>;\n\t\t\t\t\t\t\t_ -> BigChunk end,\n\t\t\t\t\ttags => [{<<\"nonce\">>, integer_to_binary(N)}] })\n\t\tend,\n\t\tlists:seq(1, 11)\n\t),\n\tlists:foreach(\n\t\tfun(TX) ->\n\t\t\tar_test_node:assert_post_tx_to_peer(peer1, TX)\n\t\tend,\n\t\tlists:sublist(TXs, 10)\n\t),\n\tPeer1 = ar_test_node:peer_ip(peer1),\n\t{{ok, Mempool1}, Peer1} = ar_http_iface_client:get_mempool(Peer1),\n\t%% The transactions have the same utility therefore they are sorted in the\n\t%% order of submission.\n\t?assertEqual([TX#tx.id || TX <- lists:sublist(TXs, 10)], Mempool1),\n\tLast = lists:last(TXs),\n\t{ok, {{<<\"200\">>, _}, _, <<\"OK\">>, _, _}} = ar_test_node:post_tx_to_peer(peer1, Last, false),\n\t{{ok, Mempool2}, Peer1} = ar_http_iface_client:get_mempool(Peer1),\n\t%% The last TX is twice as big and twice as valuable so it replaces two\n\t%% other transactions in the memory pool.\n\t?assertEqual([Last#tx.id | [TX#tx.id || TX <- lists:sublist(TXs, 8)]], Mempool2),\n\t%% Strip the data out. Expect the header to be accepted.\n\tStrippedTX = ar_test_node:sign_tx(Key, #{ last_tx => B0#block.indep_hash,\n\t\t\tdata => BigChunk, tags => [{<<\"nonce\">>, integer_to_binary(12)}] }),\n\tar_test_node:assert_post_tx_to_peer(peer1, StrippedTX#tx{ data = <<>> }),\n\t{{ok, Mempool3}, Peer1} = ar_http_iface_client:get_mempool(Peer1),\n\t?assertEqual([Last#tx.id] ++ [TX#tx.id || TX <- lists:sublist(TXs, 8)]\n\t\t\t++ [StrippedTX#tx.id], Mempool3).\n\njoins_network_successfully() ->\n\t%% Start a node and mine ar_block:get_max_tx_anchor_depth() blocks, some of them\n\t%% with transactions.\n\t%%\n\t%% Join this node by another node.\n\t%% Post a transaction with an outdated anchor to the new node.\n\t%% Expect it to be rejected.\n\t%%\n\t%% Expect all the transactions to be present on the new node.\n\t%%\n\t%% Isolate the nodes. Mine 1 block with a transaction anchoring the\n\t%% oldest block possible on peer1. Mine a block on main so that it stops\n\t%% tracking the block just referenced by peer1. 
Reconnect the nodes, mine another\n\t%% block with transactions anchoring the oldest block possible on peer1.\n\t%% Expect main to fork recover successfully.\n\tKey = {_, Pub} = ar_wallet:new(),\n\t[B0] = ar_weave:init([\n\t\t{ar_wallet:to_address(Pub), ?AR(200000000), <<>>},\n\t\t{Addr = crypto:strong_rand_bytes(32), ?AR(200000000), <<>>},\n\t\t{crypto:strong_rand_bytes(32), ?AR(200000000), <<>>}\n\t]),\n\tar_test_node:start(B0),\n\t_ = ar_test_node:start_peer(peer1, B0),\n\t{TXs, _} = lists:foldl(\n\t\tfun(Height, {TXs, LastTX}) ->\n\t\t\t{TX, AnchorType} = case rand:uniform(4) of\n\t\t\t\t1 ->\n\t\t\t\t\t{ar_test_node:sign_v1_tx(Key, #{ last_tx => LastTX, reward => ?AR(10000) }), tx_anchor};\n\t\t\t\t2 ->\n\t\t\t\t\t{ar_test_node:sign_v1_tx(Key, #{ last_tx => ar_test_node:get_tx_anchor(peer1), reward => ?AR(10000),\n\t\t\t\t\t\t\ttags => [{<<\"nonce\">>, integer_to_binary(rand:uniform(100))}] }),\n\t\t\t\t\t\t\tblock_anchor};\n\t\t\t\t3 ->\n\t\t\t\t\t{ar_test_node:sign_tx(Key, #{ last_tx => LastTX, target => Addr,\n\t\t\t\t\t\t\treward => ?AR(10000) }), tx_anchor};\n\t\t\t\t4 ->\n\t\t\t\t\t{ar_test_node:sign_tx(Key, #{ last_tx => ar_test_node:get_tx_anchor(peer1), reward => ?AR(10000),\n\t\t\t\t\t\t\ttags => [{<<\"nonce\">>, integer_to_binary(rand:uniform(100))}]}),\n\t\t\t\t\t\t\tblock_anchor}\n\t\t\tend,\n\t\t\tar_test_node:assert_post_tx_to_peer(peer1, TX),\n\t\t\tar_test_node:mine(peer1),\n\t\t\tassert_wait_until_height(peer1, Height),\n\t\t\tar_util:do_until(\n\t\t\t\tfun() ->\n\t\t\t\t\tar_test_node:remote_call(peer1, ar_mempool, get_all_txids, []) == []\n\t\t\t\tend,\n\t\t\t\t200,\n\t\t\t\t1000\n\t\t\t),\n\t\t\t{TXs ++ [{TX, AnchorType}], TX#tx.id}\n\t\tend,\n\t\t{[], <<>>},\n\t\tlists:seq(1, ar_block:get_max_tx_anchor_depth())\n\t),\n\tar_test_node:join_on(#{ node => main, join_on => peer1 }),\n\tBI = ar_test_node:remote_call(peer1, ar_node, get_block_index, []),\n\t?assertEqual(ok, ar_test_node:wait_until_block_index(BI)),\n\tTX1 = ar_test_node:sign_tx(Key, #{ last_tx => element(1, lists:nth(ar_block:get_max_tx_anchor_depth() + 1, BI)) }),\n\t{ok, {{<<\"400\">>, _}, _, <<\"Invalid anchor (last_tx).\">>, _, _}} =\n\t\tar_test_node:post_tx_to_peer(main, TX1),\n\t%% Expect transactions to be on main.\n\tlists:foreach(\n\t\tfun({TX, _}) ->\n\t\t\t?assert(\n\t\t\t\tar_util:do_until(\n\t\t\t\t\tfun() ->\n\t\t\t\t\t\tar_test_node:get_tx_confirmations(main, TX#tx.id) > 0\n\t\t\t\t\tend,\n\t\t\t\t\t100,\n\t\t\t\t\t20000\n\t\t\t\t)\n\t\t\t)\n\t\tend,\n\t\tTXs\n\t),\n\tlists:foreach(\n\t\tfun({TX, AnchorType}) ->\n\t\t\tReply = ar_test_node:post_tx_to_peer(main, TX),\n\t\t\tcase AnchorType of\n\t\t\t\ttx_anchor ->\n\t\t\t\t\t?assertMatch({ok, {{<<\"400\">>, _}, _,\n\t\t\t\t\t\t\t<<\"Invalid anchor (last_tx).\">>, _, _}}, Reply);\n\t\t\t\tblock_anchor ->\n\t\t\t\t\tRecentBHL = lists:sublist(?BI_TO_BHL(BI), ar_block:get_max_tx_anchor_depth()),\n\t\t\t\t\tcase lists:member(TX#tx.last_tx, RecentBHL) of\n\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t?assertMatch({ok, {{<<\"400\">>, _}, _,\n\t\t\t\t\t\t\t\t\t<<\"Transaction is already on the weave.\">>, _, _}}, Reply);\n\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t?assertMatch({ok, {{<<\"400\">>, _}, _,\n\t\t\t\t\t\t\t\t\t<<\"Invalid anchor (last_tx).\">>, _, _}}, Reply)\n\t\t\t\t\tend\n\t\t\tend\n\t\tend,\n\t\tTXs\n\t),\n\tar_test_node:disconnect_from(peer1),\n\n\t%% Mine the block on main first to ensure that it can't be rebased after the 2-block\n\t%% fork from peer1 wins.\n\tTX2 = ar_test_node:sign_tx(main, Key, #{ last_tx => element(1, 
lists:nth(ar_block:get_max_tx_anchor_depth(), BI)) }),\n\tar_test_node:assert_post_tx_to_peer(main, TX2),\n\tar_test_node:mine(),\n\twait_until_height(main, ar_block:get_max_tx_anchor_depth() + 1),\n\n\t%% Mine two blocks on peer1 to ensure that the main branch is orphaned.\n\tar_test_node:mine(peer1),\n\tassert_wait_until_height(peer1, ar_block:get_max_tx_anchor_depth() + 1),\n\n\t%% lists:nth(ar_block:get_max_tx_anchor_depth() - 1, BI) since we'll be at ar_block:get_max_tx_anchor_depth() + 2.\n\tTX3 = ar_test_node:sign_tx(peer1, Key, #{ last_tx => element(1, lists:nth(ar_block:get_max_tx_anchor_depth() - 1, BI)) }),\n\tar_test_node:assert_post_tx_to_peer(peer1, TX3),\n\tar_test_node:mine(peer1),\n\tBI2 = assert_wait_until_height(peer1, ar_block:get_max_tx_anchor_depth() + 2),\n\n\tar_test_node:connect_to_peer(peer1),\n\n\twait_until_height(main, ar_block:get_max_tx_anchor_depth() + 2),\n\n\tTX4 = ar_test_node:sign_tx(peer1, Key, #{ last_tx => element(1, lists:nth(ar_block:get_max_tx_anchor_depth(), BI2)) }),\n\tar_test_node:assert_post_tx_to_peer(peer1, TX4),\n\tar_test_node:assert_wait_until_receives_txs([TX4]),\n\tar_test_node:mine(peer1),\n\tBI3 = assert_wait_until_height(peer1, ar_block:get_max_tx_anchor_depth() + 3),\n\tBI3 = wait_until_height(main, ar_block:get_max_tx_anchor_depth() + 3),\n\n\t?assertEqual([TX4#tx.id], (read_block_when_stored(hd(BI3)))#block.txs),\n\t?assertEqual([TX3#tx.id], (read_block_when_stored(hd(BI2)))#block.txs).\n\nrecovers_from_forks(ForkHeight) ->\n\t%% Mine a number of blocks with transactions on peer1 and main in sync,\n\t%% then mine another bunch independently.\n\t%%\n\t%% Mine an extra block on peer1 to make main fork recover to it.\n\t%% Expect the fork recovery to be successful.\n\t%%\n\t%% Try to replay all the past transactions on main. Expect the transactions to be rejected.\n\t%%\n\t%% Resubmit all the transactions from the orphaned fork. 
Expect them to be accepted\n\t%% and successfully mined into a block.\n\tKey = {_, Pub} = ar_wallet:new(),\n\t[B0] = ar_weave:init([\n\t\t{ar_wallet:to_address(Pub), ?AR(20), <<>>}\n\t]),\n\t_ = ar_test_node:start(B0),\n\t_ = ar_test_node:start_peer(peer1, B0),\n\tar_test_node:connect_to_peer(peer1),\n\t{ok, Config} = arweave_config:get_env(),\n\tMainPort = Config#config.port,\n\tPreForkTXs = lists:foldl(\n\t\tfun(Height, TXs) ->\n\t\t\tTX = ar_test_node:sign_v1_tx(Key, #{ last_tx => ar_test_node:get_tx_anchor(peer1),\n\t\t\t\t\ttags => [{<<\"nonce\">>, random_nonce()}] }),\n\t\t\tar_test_node:assert_post_tx_to_peer(peer1, TX),\n\t\t\tar_test_node:assert_wait_until_receives_txs([TX]),\n\t\t\tar_test_node:mine(peer1),\n\t\t\tBI = assert_wait_until_height(peer1, Height),\n\t\t\tBI = wait_until_height(main, Height),\n\t\t\tassert_block_txs(peer1, [TX], BI),\n\t\t\tassert_block_txs(main, [TX], BI),\n\t\t\tTXs ++ [TX]\n\t\tend,\n\t\t[],\n\t\tlists:seq(1, ForkHeight)\n\t),\n\tPostTXToMain =\n\t\tfun() ->\n\t\t\tUnsignedTX = #{ last_tx => ar_test_node:get_tx_anchor(main),\n\t\t\t\t\ttags => [{<<\"nonce\">>, random_nonce()}], reward => ?AR(1) },\n\t\t\tTX = case rand:uniform(2) of\n\t\t\t\t1 ->\n\t\t\t\t\tar_test_node:sign_tx(main, Key, UnsignedTX);\n\t\t\t\t2 ->\n\t\t\t\t\tar_test_node:sign_v1_tx(main, Key, UnsignedTX)\n\t\t\tend,\n\t\t\tar_test_node:assert_post_tx_to_peer(main, TX),\n\t\t\t[TX]\n\t\tend,\n\tPostTXToPeer =\n\t\tfun() ->\n\t\t\tUnsignedTX = #{ last_tx => ar_test_node:get_tx_anchor(peer1),\n\t\t\t\t\ttags => [{<<\"nonce\">>, random_nonce()}] },\n\t\t\tTX = case rand:uniform(2) of\n\t\t\t\t1 ->\n\t\t\t\t\tar_test_node:sign_tx(Key, UnsignedTX);\n\t\t\t\t2 ->\n\t\t\t\t\tar_test_node:sign_v1_tx(Key, UnsignedTX)\n\t\t\tend,\n\t\t\tar_test_node:assert_post_tx_to_peer(peer1, TX),\n\t\t\t[TX]\n\t\tend,\n\tar_test_node:disconnect_from(peer1),\n\t{MainPostForkTXs, PeerPostForkTXs} = lists:foldl(\n\t\tfun(Height, {MainTXs, PeerTXs}) ->\n\t\t\tUpdatedMainTXs = MainTXs ++ ([NewMainTX] = PostTXToMain()),\n\t\t\tar_test_node:mine(),\n\t\t\tBI = wait_until_height(main, Height),\n\t\t\tassert_block_txs(main, [NewMainTX], BI),\n\t\t\tUpdatedPeerTXs = PeerTXs ++ ([NewPeerTX] = PostTXToPeer()),\n\t\t\tar_test_node:mine(peer1),\n\t\t\tPeerBI = assert_wait_until_height(peer1, Height),\n\t\t\tassert_block_txs(peer1, [NewPeerTX], PeerBI),\n\t\t\t{UpdatedMainTXs, UpdatedPeerTXs}\n\t\tend,\n\t\t{[], []},\n\t\tlists:seq(ForkHeight + 1, 9)\n\t),\n\tar_test_node:connect_to_peer(peer1),\n\tTX2 = ar_test_node:sign_tx(Key, #{ last_tx => ar_test_node:get_tx_anchor(peer1),\n\t\t\ttags => [{<<\"nonce\">>, random_nonce()}] }),\n\tar_test_node:assert_post_tx_to_peer(peer1, TX2),\n\tar_test_node:assert_wait_until_receives_txs([TX2]),\n\tar_test_node:mine(peer1),\n\tassert_wait_until_height(peer1, 10),\n\twait_until_height(main, 10),\n\tforget_txs(\n\t\tPreForkTXs ++\n\t\tMainPostForkTXs ++\n\t\tPeerPostForkTXs ++\n\t\t[TX2]\n\t),\n\t%% Assert pre-fork transactions, the transactions which came during\n\t%% fork recovery, and the freshly created transaction are in the\n\t%% weave.\n\tlists:foreach(\n\t\tfun(TX) ->\n\t\t\t?assert(\n\t\t\t\tar_util:do_until(\n\t\t\t\t\tfun() ->\n\t\t\t\t\t\tar_test_node:get_tx_confirmations(main, TX#tx.id) > 0\n\t\t\t\t\tend,\n\t\t\t\t\t100,\n\t\t\t\t\t1000\n\t\t\t\t)\n\t\t\t),\n\t\t\t{ok, {{<<\"400\">>, _}, _, _, _, _}} =\n\t\t\t\tar_test_node:post_tx_to_peer(main, TX)\n\t\tend,\n\t\tPreForkTXs ++ PeerPostForkTXs ++ [TX2]\n\t),\n\t%% Assert the block anchored transactions from the 
abandoned fork are\n\t%% back in the memory pool.\n\tlists:foreach(\n\t\tfun(TX) ->\n\t\t\t{ok, {{<<\"208\">>, _}, _, <<\"Transaction already processed.\">>, _, _}} =\n\t\t\t\tar_http:req(#{\n\t\t\t\t\tmethod => post,\n\t\t\t\t\tpeer => {127, 0, 0, 1, MainPort},\n\t\t\t\t\tpath => \"/tx\",\n\t\t\t\t\theaders => [{<<\"x-p2p-port\">>, integer_to_binary(MainPort, 10)}],\n\t\t\t\t\tbody => ar_serialize:jsonify(ar_serialize:tx_to_json_struct(TX))\n\t\t\t\t})\n\t\tend,\n\t\tMainPostForkTXs\n\t).\n\none_wallet_list_one_block_anchored_txs(Key, B0) ->\n\t%% Sign only after the node has started to get the correct price\n\t%% estimation from it.\n\t{_, {KeyType, _}} = Key,\n\tTX1Fun = fun() ->\n\t\tcase KeyType of\n\t\t\t?RSA_KEY_TYPE ->\n\t\t\t\tar_test_node:sign_v1_tx(Key, #{ reward => ?AR(1) });\n\t\t\t?ECDSA_KEY_TYPE ->\n\t\t\t\tar_test_node:sign_tx(Key, #{ reward => ?AR(1), last_tx => <<>> })\n\t\tend end,\n\tTX2Fun = fun() ->\n\t\tcase KeyType of\n\t\t\t?RSA_KEY_TYPE ->\n\t\t\t\tar_test_node:sign_v1_tx(Key, #{ reward => ?AR(1),\n\t\t\t\t\t\tlast_tx => B0#block.indep_hash });\n\t\t\t?ECDSA_KEY_TYPE ->\n\t\t\t\tar_test_node:sign_tx(Key, #{ reward => ?AR(1),\n\t\t\t\t\t\tlast_tx => B0#block.indep_hash })\n\t\tend end,\n\t[TX1Fun, TX2Fun].\n\ntwo_block_anchored_txs(Key, B0) ->\n\t%% Sign only after the node has started to get the correct price\n\t%% estimation from it.\n\t{_, {KeyType, _}} = Key,\n\tTX1Fun = fun() ->\n\t\tcase KeyType of\n\t\t\t?RSA_KEY_TYPE ->\n\t\t\t\tar_test_node:sign_v1_tx(Key, #{ reward => ?AR(1),\n\t\t\t\t\t\tlast_tx => B0#block.indep_hash });\n\t\t\t?ECDSA_KEY_TYPE ->\n\t\t\t\tar_test_node:sign_tx(Key, #{ reward => ?AR(1),\n\t\t\t\t\t\tlast_tx => B0#block.indep_hash })\n\t\tend end,\n\tTX2Fun = fun() ->\n\t\tcase KeyType of\n\t\t\t?RSA_KEY_TYPE ->\n\t\t\t\tar_test_node:sign_v1_tx(Key, #{ reward => ?AR(1),\n\t\t\t\t\t\tlast_tx => B0#block.indep_hash });\n\t\t\t?ECDSA_KEY_TYPE ->\n\t\t\t\tar_test_node:sign_tx(Key, #{ reward => ?AR(1),\n\t\t\t\t\t\tlast_tx => B0#block.indep_hash,\n\t\t\t\t\t\t%% A tag to distinguish deterministic ECDSA transactions.\n\t\t\t\t\t\ttags => [{<<\"id\">>, <<>>}] })\n\t\tend end,\n\t[TX1Fun, TX2Fun].\n\nempty_tx_set(_Key, _B0) ->\n\t[].\n\nblock_anchor_txs_spending_balance_plus_one_more(Key, B0) ->\n\tTX1 = ar_test_node:sign_v1_tx(Key, #{ denomination => 1,\n\t\t\treward => ?AR(10), last_tx => B0#block.indep_hash }),\n\tTX2 = ar_test_node:sign_v1_tx(Key, #{ denomination => 1,\n\t\t\treward => ?AR(10), last_tx => B0#block.indep_hash }),\n\tTX3 = ar_test_node:sign_v1_tx(Key, #{ denomination => 1,\n\t\t\treward => ?AR(1), last_tx => B0#block.indep_hash }),\n\t[TX1, TX2, TX3].\n\nmixed_anchor_txs_spending_balance_plus_one_more(Key, B0) ->\n\tTX1 = ar_test_node:sign_v1_tx(Key, #{ denomination => 1, reward => ?AR(10), last_tx => <<>> }),\n\tTX2 = ar_test_node:sign_v1_tx(Key, #{ denomination => 1, reward => ?AR(5),\n\t\t\tlast_tx => B0#block.indep_hash }),\n\tTX3 = ar_test_node:sign_v1_tx(Key, #{ denomination => 1, reward => ?AR(2),\n\t\t\tlast_tx => B0#block.indep_hash }),\n\tTX4 = ar_test_node:sign_v1_tx(Key, #{ denomination => 1,\n\t\t\treward => ?AR(3), last_tx => B0#block.indep_hash }),\n\tTX5 = ar_test_node:sign_v1_tx(Key, #{ denomination => 1,\n\t\t\treward => ?AR(1), last_tx => B0#block.indep_hash }),\n\t[TX1, TX2, TX3, TX4, TX5].\n\ngrouped_txs() ->\n\tKey1 = {_, Pub1} = ar_wallet:new(),\n\tKey2 = {_, Pub2} = ar_wallet:new(),\n\tWallets = [\n\t\t{ar_wallet:to_address(Pub1), ?AR(100), <<>>},\n\t\t{ar_wallet:to_address(Pub2), ?AR(100), 
<<>>}\n\t],\n\t[B0] = ar_weave:init(Wallets),\n\tChunk1 = random_v1_data(?TX_DATA_SIZE_LIMIT),\n\tChunk2 = <<\"a\">>,\n\tTX1 = ar_test_node:sign_v1_tx(Key1, #{ reward => ?AR(1), data => Chunk1, last_tx => <<>> }),\n\tTX2 = ar_test_node:sign_v1_tx(Key2, #{ reward => ?AR(1), data => Chunk2,\n\t\t\tlast_tx => B0#block.indep_hash }),\n\t%% TX1 is expected to be mined first because wallet list anchors are mined first while\n\t%% the price per byte should be the same since we assigned the minimum required fees.\n\t{B0, [[TX1], [TX2]]}.\n\nmine_blocks(Node, TargetHeight) ->\n\tmine_blocks(Node, 1, TargetHeight).\n\nmine_blocks(_Node, Height, TargetHeight) when Height == TargetHeight + 1 ->\n\tok;\nmine_blocks(Node, Height, TargetHeight) ->\n\tar_test_node:mine(Node),\n\tassert_wait_until_height(Node, Height),\n\tmine_blocks(Node, Height + 1, TargetHeight).\n\nforget_txs(TXs) ->\n\tlists:foreach(\n\t\tfun(TX) ->\n\t\t\tets:delete(ignored_ids, TX#tx.id)\n\t\tend,\n\t\tTXs\n\t).\n\nassert_block_txs(Node, TXs, BI) ->\n\tTXIDs = lists:map(fun(TX) -> TX#tx.id end, TXs),\n\tB = ar_test_node:remote_call(Node, ar_test_node, read_block_when_stored, [hd(BI)]),\n\t?assertEqual(lists:sort(TXIDs), lists:sort(B#block.txs)).\n\nrandom_nonce() ->\n\tinteger_to_binary(rand:uniform(1000000)).\n"
  },
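The mempool-limit tests above exercise the eviction rule their comments describe: transactions are ranked by utility, ties are broken by submission order, and the lowest-ranked transactions are dropped once the pool exceeds its byte limit. Below is a minimal, self-contained sketch of that ranking rule for illustration only; the module name, the {TXID, Size, Fee} tuple shape, the use of the fee as the utility value, and the greedy trimming are assumptions, and the real logic lives in ar_mempool and may differ in detail.

```erlang
%% Illustrative only: a simplified model of the eviction behaviour the test
%% comments above describe. Rank by utility (assumed here to be the fee),
%% break ties by submission order, and keep transactions from the top of the
%% ranking down until the byte limit is reached.
-module(mempool_sketch).
-export([fit/2]).

%% TXs is a list of {TXID, SizeBytes, Fee} tuples in submission order.
%% Returns the TXIDs kept once the pool is trimmed to MaxBytes.
fit(TXs, MaxBytes) ->
	Indexed = lists:zip(lists:seq(1, length(TXs)), TXs),
	%% Higher fee first; earlier submission wins ties.
	Ranked = lists:sort(
		fun({I1, {_, _, Fee1}}, {I2, {_, _, Fee2}}) ->
			case Fee1 == Fee2 of
				true -> I1 =< I2;
				false -> Fee1 > Fee2
			end
		end, Indexed),
	{Kept, _} = lists:foldl(
		fun({_, {TXID, Size, _}}, {Acc, Used}) when Used + Size =< MaxBytes ->
				{[TXID | Acc], Used + Size};
			({_, _}, {Acc, Used}) ->
				{Acc, Used}
		end, {[], 0}, Ranked),
	lists:reverse(Kept).
```

Under this model, a new transaction that pays roughly twice the fee of the existing ones but occupies twice the space lands at the head of the ranking and pushes the two lowest-ranked transactions past the byte limit, which is the outcome drops_v2_txs_exceeding_mempool_limit asserts.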
  {
    "path": "apps/arweave/test/ar_vdf_block_validation_tests.erl",
    "content": "-module(ar_vdf_block_validation_tests).\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-define(TEST_RESET_FREQUENCY, 400).\n-define(BLOCK_DELIVERY_TIMEOUT, 120000).\n\nfork_at_entropy_reset_point_test_() ->\n\t[\n\t\t{timeout, ?TEST_NODE_TIMEOUT, fun test_fork_checkpoints_not_found/0},\n\t\t{timeout, ?TEST_NODE_TIMEOUT, fun test_fork_refuse_validation/0}\n\t].\n\n%% Scenario:\n%% 1. VDF server applies a block that opens a new VDF session\n%% 2. VDF client mines a solution at that same height\n%%    (i.e. it mines a fork before receiving the other block)\n%% 3. That solution fails because it is mined off VDF steps from the\n%%    server which are in the new session, but the block being mined\n%%    is an entropy reset block. \n%% \n%% The failure in this case (`step_checkpoints_not_found' error) is\n%% unavoidable in this specific scenario. So this test will just assert\n%% that the block is rejected and that the VDF client can later get on\n%% the correct chain and then mine a solution there.\ntest_fork_checkpoints_not_found() ->\n\tmock_reset_frequency_and_block_propagation_parallelization(),\n\ttry\n\t\t[B0] = ar_weave:init(),\n\n\t\t%% Start nodes in such way that they will not gossip blocks to\n\t\t%% each other. This lets us control when blocks are shared.\n\t\t%% Note: also relies on `mock_block_propagation_parallelization()`.\n\t\t{ok, Config} = arweave_config:get_env(),\n\t\tar_test_node:start(#{\n\t\t\tb0 => B0,\n\t\t\tconfig => Config#config{\n\t\t\t\tnonce_limiter_client_peers = [\n\t\t\t\t\tar_util:format_peer(ar_test_node:peer_ip(peer1))\n\t\t\t\t],\n\t\t\t\tblock_pollers = 0\n\t\t\t}\n\t\t}),\n\t\tmock_reset_frequency_and_block_propagation_parallelization(main),\n\n\t\t{ok, PeerConfig} = ar_test_node:get_config(peer1),\n\t\tar_test_node:start_peer(peer1, #{\n\t\t\tb0 => B0,\n\t\t\tconfig => PeerConfig#config{\n\t\t\t\tnonce_limiter_server_trusted_peers = [\n\t\t\t\t\tar_util:format_peer(ar_test_node:peer_ip(main))\n\t\t\t\t],\n\t\t\t\tblock_pollers = 0\n\t\t\t}\n\t\t}),\n\t\tmock_reset_frequency_and_block_propagation_parallelization(peer1),\n\n\t\tH2 = ar_test_node:with_gossip_paused(main, fun() ->\n\t\t\t%% Still need to connect to make sure VDF is shared\n\t\t\tar_test_node:connect_to_peer(peer1),\n\n\t\t\tar_test_node:mine(main),\n\t\t\t[H1 | _] = ar_test_node:wait_until_height(main, 1),\n\t\t\tsend_block(H1, main, peer1),\n\t\t\tar_test_node:wait_until_height(peer1, 1),\n\n\t\t\tar_test_node:disconnect_from(peer1),\n\t\t\t%% Make sure that we are deep into the new session before we try to mine.\n\t\t\t%% Suspend peer1's nonce limiter so it cannot advance to the new session while isolated.\n\t\t\t[H2Local | _] = with_nonce_limiter_paused(peer1, fun() ->\n\t\t\t\twait_until_step_number(main, ?TEST_RESET_FREQUENCY + 101),\n\t\t\t\tar_test_node:mine(main),\n\t\t\t\tar_test_node:wait_until_height(main, 2)\n\t\t\tend),\n\n\t\t\tar_test_node:connect_to_peer(peer1),\n\t\t\t%% Wait until peer1 has transitioned to the new VDF session.\n\t\t\twait_until_step_number(peer1, ?TEST_RESET_FREQUENCY + 1),\n\t\t\twith_vdf_pull_and_push_disabled(peer1, fun() ->\n\t\t\t\tar_test_node:mine(peer1),\n\t\t\t\t%% Assert that peer1 is unable to mine a block.\n\t\t\t\ttimer:sleep(10000),\n\t\t\t\tBI = ar_test_node:remote_call(peer1, ar_node, get_blocks, []),\n\t\t\t\t?assertEqual(2, length(BI))\n\t\t\tend),\n\t\t\tH2Local\n\t\tend),\n\n\t\t%% Get peer1 on the main chain\n\t\tsend_block(H2, main, 
peer1),\n\t\tar_test_node:wait_until_height(peer1, 2),\n\n\t\t%% Now that we're on the main chain and still mining, we should eventually mine a block.\n\t\tar_test_node:mine(peer1),\n\t\tar_test_node:wait_until_height(peer1, 3)\n\tafter\n\t\tdisable_mocks(main),\n\t\tdisable_mocks(peer1)\n\tend.\n\n%% Scenario:\n%% 1. There's a chain fork on a block that opens a new VDF session.\n%%    The \"winning\" block has a higher VDF step than the \"losing\" block. \n%%    Both blocks need to be validated using the current VDF session\n%%    (not the new one)\n%% 2. VDF server applies the \"winning\" block, validates with the current\n%%    VDF session, and opens a new VDF session.\n%% 3. VDF client applies the \"losing\" block, is able to get the VDF steps\n%%    it needs to validate because the steps are before the new session that\n%%    was opened on the VDF server so they still belong to the \"old\" session\n%%    (or perhaps it just validates the \"losing\" block before the VDF server\n%%    opens a new session)\n%% 4. Later the VDF client tries to apply the winning block. However when it\n%%    queries the steps it needs to validate the block, the VDF server which is\n%%    now on the new session returns the steps for that new session - which won't\n%%    validate.\n%% 5. VDF client is stuck trying to validate the winning block and can't proceed.\n%% \n%% We built in a fix for this scenario before 2.9.5-alpha1, but it relied on\n%% VDF Pull being enabled (in which case the VDF client would explicitly ask\n%% the server for the full current and previous sessions). In 2.9.5-alpha1 we\n%% broke this fix for nodes using `disable vdf_server_pull`. We've now\n%% re-applied the fix and added this test.\ntest_fork_refuse_validation() ->\n\tmock_reset_frequency_and_block_propagation_parallelization(),\n\ttry\n\t\t[B0] = ar_weave:init(),\n\n\t\t%% Start nodes in such way that they will not gossip blocks to\n\t\t%% each other. 
This lets us control when blocks are shared.\n\t\t%% Note: also relies on `mock_block_propagation_parallelization()`.\n\t\t{ok, Config} = arweave_config:get_env(),\n\t\tar_test_node:start(#{\n\t\t\tb0 => B0,\n\t\t\tconfig => Config#config{\n\t\t\t\tnonce_limiter_client_peers = [\n\t\t\t\t\tar_util:format_peer(ar_test_node:peer_ip(peer1))\n\t\t\t\t],\n\t\t\t\tblock_pollers = 0\n\t\t\t}\n\t\t}),\n\t\tmock_reset_frequency_and_block_propagation_parallelization(main),\n\n\t\t{ok, PeerConfig} = ar_test_node:get_config(peer1),\n\t\tar_test_node:start_peer(peer1, #{\n\t\t\tb0 => B0,\n\t\t\tconfig => PeerConfig#config{\n\t\t\t\tnonce_limiter_server_trusted_peers = [\n\t\t\t\t\tar_util:format_peer(ar_test_node:peer_ip(main))\n\t\t\t\t],\n\t\t\t\tblock_pollers = 0,\n\t\t\t\tdisable = [vdf_server_pull | PeerConfig#config.disable]\n\t\t\t}\n\t\t}),\n\t\tmock_reset_frequency_and_block_propagation_parallelization(peer1),\n\n\t\tar_test_node:with_gossip_paused(main, fun() ->\n\t\t\t%% Still need to connect to make sure VDF is shared\n\t\t\tar_test_node:connect_to_peer(peer1),\n\n\t\tar_test_node:mine(main),\n\t\t[H1 | _] = ar_test_node:wait_until_height(main, 1),\n\t\tsend_block(H1, main, peer1),\n\t\tar_test_node:assert_wait_until_height(peer1, 1),\n\t\twait_until_step_number(peer1, ?TEST_RESET_FREQUENCY + 1),\n\n\t\tar_test_node:mine(peer1),\n\t\tar_test_node:wait_until_height(peer1, 2),\n\t\tar_test_node:disconnect_from(peer1),\n\t\twait_until_step_number(main, ?TEST_RESET_FREQUENCY + 100),\n\n\t\tar_test_node:mine(main),\n\t\t[H2 | _] = ar_test_node:wait_until_height(main, 2),\n\t\tar_test_node:mine(main),\n\t\t[H3 | _] = ar_test_node:wait_until_height(main, 3),\n\t\t%% Just avoids some errors if the test finishes before the mining server is paused.\n\t\tar_test_node:wait_until_mining_paused(main),\n\n\t\t\tar_test_node:connect_to_peer(peer1),\n\t\t\tensure_block_applied(H2, main, peer1, 2),\n\t\t\tensure_block_applied(H3, main, peer1, 3)\n\t\tend),\n\t\tar_test_node:wait_until_height(peer1, 3)\n\tafter\n\t\tdisable_mocks(main),\n\t\tdisable_mocks(peer1)\n\tend.\n\nmock_reset_frequency_and_block_propagation_parallelization() ->\n\tar_test_node:new_mock(ar_nonce_limiter, [passthrough]),\n\tar_test_node:new_mock(ar_bridge, [passthrough]),\n\tar_test_node:mock_function(ar_nonce_limiter, get_reset_frequency, fun() -> ?TEST_RESET_FREQUENCY end),\n\tar_test_node:mock_function(ar_bridge, block_propagation_parallelization, fun() -> 0 end).\n\nmock_reset_frequency_and_block_propagation_parallelization(Node) ->\n\tar_test_node:remote_call(Node, ar_test_node, new_mock, [ar_nonce_limiter, [passthrough]]),\n\tar_test_node:remote_call(Node, ar_test_node, new_mock, [ar_bridge, [passthrough]]),\n\tar_test_node:remote_call(Node, ar_test_node, mock_function, [ar_nonce_limiter, get_reset_frequency, fun() -> ?TEST_RESET_FREQUENCY end]),\n\tar_test_node:remote_call(Node, ar_test_node, mock_function, [ar_bridge, block_propagation_parallelization, fun() -> 0 end]).\n\ndisable_mocks(Node) ->\n\tok = ar_test_node:remote_call(Node, ar_test_node, unmock_module, [ar_bridge]),\n\tok = ar_test_node:remote_call(Node, ar_test_node, unmock_module, [ar_nonce_limiter]).\n\nsend_block(H, FromNode, ToNode) ->\n\tBlock = ar_test_node:remote_call(FromNode, ar_storage, read_block, [H]),\n\tcase ar_test_node:send_new_block(ar_test_node:peer_ip(ToNode), Block) of\n\t\t{ok, {{<<\"200\">>, _}, _, _, _, _}} ->\n\t\t\tok;\n\t\t{ok, {{<<\"208\">>, _}, _, _, _, _}} ->\n\t\t\tok;\n\t\tError ->\n\t\t\t?assert(false, io_lib:format(\"Got unexpected 
error: ~p\", [Error]))\n\tend.\n\nensure_block_applied(H, FromNode, ToNode, TargetHeight) ->\n\tar_util:do_until(\n\t\tfun() ->\n\t\t\tsend_block(H, FromNode, ToNode),\n\t\t\tHeight = ar_test_node:remote_call(ToNode, ar_node, get_height, []),\n\t\t\tHeight >= TargetHeight\n\t\tend,\n\t\t1000,\n\t\t?BLOCK_DELIVERY_TIMEOUT).\n\nwait_until_step_number(Node, StepNumber) ->\n\ttrue = ar_util:do_until(\n\t\tfun() ->\n\t\t\ttry\n\t\t\t\tCurrentStepNumber = ar_test_node:remote_call(\n\t\t\t\t\tNode, ar_nonce_limiter, get_current_step_number, []),\n\t\t\t\tCurrentStepNumber >= StepNumber\n\t\t\tcatch\n\t\t\t\t%% meck's internal gen_server proxy uses gen_server:call/2\n\t\t\t\t%% with the default 5s timeout, which can fire under load.\n\t\t\t\texit:{timeout, _} ->\n\t\t\t\t\tfalse\n\t\t\tend\n\t\tend,\n\t\t500,\n\t\t120000).\n\nwith_nonce_limiter_paused(Node, Fun) when is_function(Fun, 0) ->\n\tPid = suspend_nonce_limiter(Node),\n\ttry\n\t\tFun()\n\tafter\n\t\tresume_nonce_limiter(Node, Pid)\n\tend.\n\nwith_vdf_pull_and_push_disabled(Node, Fun) when is_function(Fun, 0) ->\n\t{ok, Config} = ar_test_node:remote_call(Node, arweave_config, get_env, []),\n\tDisableFlags = Config#config.disable,\n\t%% Update config so that ar_http_iface_middleware\n\t%% responds to POST /vdf with #nonce_limiter_update_response {postpone = 120 }.\n\tok = ar_test_node:remote_call(\n\t\tNode,\n\t\tarweave_config,\n\t\tset_env,\n\t\t[Config#config{ disable = lists:delete(vdf_server_pull, DisableFlags) }]\n\t),\n\t%% Also suspend the pull loop so peer1 cannot fetch full sessions.\n\tPid = suspend_nonce_limiter_client(Node),\n\ttry\n\t\tFun()\n\tafter\n\t\tok = ar_test_node:remote_call(Node, arweave_config, set_env, [Config]),\n\t\tresume_nonce_limiter_client(Node, Pid)\n\tend.\n\nsuspend_nonce_limiter(Node) ->\n\tPid = ar_test_node:remote_call(Node, erlang, whereis, [ar_nonce_limiter]),\n\t?assert(is_pid(Pid)),\n\tok = ar_test_node:remote_call(Node, sys, suspend, [Pid]),\n\tPid.\n\nsuspend_nonce_limiter_client(Node) ->\n\tPid = ar_test_node:remote_call(Node, erlang, whereis, [ar_nonce_limiter_client]),\n\t?assert(is_pid(Pid)),\n\tok = ar_test_node:remote_call(Node, sys, suspend, [Pid]),\n\tPid.\n\nresume_nonce_limiter(_Node, undefined) ->\n\tok;\nresume_nonce_limiter(Node, Pid) ->\n\tcase ar_test_node:remote_call(Node, erlang, is_process_alive, [Pid]) of\n\t\ttrue ->\n\t\t\tok = ar_test_node:remote_call(Node, sys, resume, [Pid]);\n\t\tfalse ->\n\t\t\tok\n\tend.\n\nresume_nonce_limiter_client(_Node, undefined) ->\n\tok;\nresume_nonce_limiter_client(Node, Pid) ->\n\tcase ar_test_node:remote_call(Node, erlang, is_process_alive, [Pid]) of\n\t\ttrue ->\n\t\t\tok = ar_test_node:remote_call(Node, sys, resume, [Pid]);\n\t\tfalse ->\n\t\t\tok\n\tend.\n"
  },
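One pattern worth calling out in the test file above: with_nonce_limiter_paused/2 and with_vdf_pull_and_push_disabled/2 both suspend a registered process with sys:suspend/1, run a fun, and restore state in an after clause so a crashing test body cannot leave the peer wedged. A condensed, local-node sketch of the same pattern follows; the module and function names are invented for illustration, and the tests apply the idea to remote nodes via ar_test_node:remote_call rather than locally.

```erlang
%% Hypothetical helper sketching the suspend/run/resume-in-after pattern used
%% by with_nonce_limiter_paused/2 above, for a locally registered process.
-module(pause_sketch).
-export([with_paused/2]).

with_paused(Name, Fun) when is_atom(Name), is_function(Fun, 0) ->
	Pid = whereis(Name),
	true = is_pid(Pid),
	ok = sys:suspend(Pid),
	try
		Fun()
	after
		%% The process may have exited while suspended; only resume if alive.
		case is_process_alive(Pid) of
			true -> ok = sys:resume(Pid);
			false -> ok
		end
	end.
```

Usage would look like pause_sketch:with_paused(ar_nonce_limiter, fun() -> ... end), mirroring how the test keeps peer1's nonce limiter frozen while main advances deep into the next VDF session.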
  {
    "path": "apps/arweave/test/ar_vdf_external_update_tests.erl",
    "content": "-module(ar_vdf_external_update_tests).\n\n-export([init/2]).\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"arweave/include/ar_mining.hrl\").\n\n-import(ar_test_node, [assert_wait_until_height/2, post_block/2, send_new_block/2]).\n\n%% we have to wait to let the ar_events get processed whenever we apply a VDF step\n-define(WAIT_TIME, 1000).\n\n%% -------------------------------------------------------------------------------------------------\n%% Test Fixtures\n%% -------------------------------------------------------------------------------------------------\n\nsetup_external_update() ->\n\t{ok, Config} = arweave_config:get_env(),\n\t[B0] = ar_weave:init(),\n\t%% Start the testnode with a configured VDF server so that it doesn't compute its own VDF -\n\t%% this is necessary so that we can test the behavior of apply_external_update without any\n\t%% auto-computed VDF steps getting in the way.\n\t_ = ar_test_node:start(\n\t\tB0, ar_wallet:to_address(ar_wallet:new_keyfile()),\n\t\tConfig#config{ \n\t\t\tnonce_limiter_server_trusted_peers = [\n\t\t\t\tar_util:format_peer(vdf_server_1()),\n\t\t\t\tar_util:format_peer(vdf_server_2()) \n\t\t\t],\n\t\t\tmine = true\n\t\t}\n\t),\n\tets:new(computed_output, [named_table, ordered_set, public]),\n\tets:new(add_task, [named_table, bag, public]),\n\tPid = spawn(\n\t\tfun() ->\n\t\t\tok = ar_events:subscribe(nonce_limiter),\n\t\t\tcomputed_output()\n\t\tend\n\t),\n\t{Pid, Config}.\n\ncleanup_external_update({Pid, Config}) ->\n\texit(Pid, kill),\n\tok = arweave_config:set_env(Config),\n\tets:delete(add_task),\n\tets:delete(computed_output).\n\n%% -------------------------------------------------------------------------------------------------\n%% Test Registration\n%% -------------------------------------------------------------------------------------------------\n\nexternal_update_test_() ->\n    {foreach,\n\t\tfun setup_external_update/0,\n     \tfun cleanup_external_update/1,\n\t\t[\n\t\t\tar_test_node:test_with_mocked_functions([mock_add_task(), mock_reset_frequency()],\n\t\t\t\tfun test_session_overlap/0, 120),\n\t\t\tar_test_node:test_with_mocked_functions([mock_add_task(), mock_reset_frequency()],\n\t\t\t\tfun test_client_ahead/0, 120),\n\t\t\tar_test_node:test_with_mocked_functions([mock_add_task(), mock_reset_frequency()],\n\t\t\t\tfun test_skip_ahead/0, 120),\n\t\t\tar_test_node:test_with_mocked_functions([mock_add_task(), mock_reset_frequency()],\n\t\t\t\tfun test_2_servers_switching/0, 120),\n\t\t\tar_test_node:test_with_mocked_functions([mock_add_task(), mock_reset_frequency()],\n\t\t\t\tfun test_backtrack/0, 120),\n\t\t\tar_test_node:test_with_mocked_functions([mock_add_task(), mock_reset_frequency()],\n\t\t\t\tfun test_2_servers_backtrack/0, 120)\n\t\t]\n    }.\n\nmining_session_test_() ->\n    {foreach,\n\t\tfun setup_external_update/0,\n     \tfun cleanup_external_update/1,\n\t[\n\t\tar_test_node:test_with_mocked_functions([mock_add_task(), mock_reset_frequency()],\n\t\t\tfun test_mining_session/0, 120)\n\t]\n    }.\n\n%% -------------------------------------------------------------------------------------------------\n%% Tests\n%% -------------------------------------------------------------------------------------------------\n\n%%\n%% external_update_test_\n%%\n\n%% @doc The VDF session key is only updated when a block is procesed by the VDF server. 
Until that\n%% happens the serve will push all VDF steps under the same session key - even if those steps\n%% cross an entropy reset line. When a block comes in the server will update the session key\n%% *and* move all appropriate steps to that session. Prior to 2.7 this caused VDF clients to\n%% process some steps twice - once under the old session key, and once under the new session key.\n%% This test asserts that this behavior has been fixed and that VDF clients only process each\n%% step once.\ntest_session_overlap() ->\n\tSessionKey0 = get_current_session_key(),\n\tSessionKey1 = {<<\"session1\">>, 1, 1},\n\tSessionKey2 = {<<\"session2\">>, 2, 1},\n\t?assertEqual(\n\t\t#nonce_limiter_update_response{ session_found = false },\n\t\tapply_external_update(SessionKey1, [], 8, true, SessionKey0),\n\t\t\"Partial session1, session not found\"),\n\t?assertEqual(\n\t\tok,\n\t\tapply_external_update(SessionKey1, [7, 6, 5], 8, false, SessionKey0),\n\t\t\"Full session1\"),\n\t?assertEqual(\n\t\tok,\n\t\tapply_external_update(SessionKey1, [], 9, true, SessionKey0),\n\t\t\"Partial session1\"),\n\t?assertEqual(\n\t\tok,\n\t\tapply_external_update(SessionKey1, [], 10, true, SessionKey0),\n\t\t\"Partial session1, interval2\"),\n\t?assertEqual(\n\t\tok,\n\t\tapply_external_update(SessionKey1, [], 11, true, SessionKey0),\n\t\t\"Partial session1, interval2\"),\n\t?assertEqual(\n\t\t#nonce_limiter_update_response{ session_found = false },\n\t\tapply_external_update(SessionKey2, [], 12, true, SessionKey1),\n\t\t\"Partial session2, interval2\"),\n\t?assertEqual(\n\t\t#nonce_limiter_update_response{ session_found = true, step_number = 11 },\n\t\tapply_external_update(SessionKey1, [8, 7, 6, 5], 9, false, SessionKey1),\n\t\t\"Full session1, all steps already seen\"),\n\t?assertEqual(\n\t\tok,\n\t\tapply_external_update(SessionKey2, [11, 10], 12, false, SessionKey1),\n\t\t\"Full session2, some steps already seen\"),\n\ttimer:sleep(?WAIT_TIME),\n\t?assertEqual(\n\t\t[<<\"8\">>, <<\"7\">>, <<\"6\">>, <<\"5\">>, <<\"9\">>, <<\"10\">>, <<\"11\">>, <<\"12\">>],\n\tcomputed_steps()),\n\t?assertEqual(SessionKey0, get_current_session_key()),\n\t?assertEqual(\n\t\t[10, 10, 10, 10, 10, 20, 20, 20],\n\t\tcomputed_upper_bounds()).\n\n\n%% @doc This test asserts that the client responds correctly when it is ahead of the VDF server.\ntest_client_ahead() ->\n\tSessionKey0 = get_current_session_key(),\n\tSessionKey1 = {<<\"session1\">>, 1, 1},\n\t?assertEqual(\n\t\tok,\n\t\tapply_external_update(SessionKey1, [7, 6, 5], 8, false, SessionKey0),\n\t\t\"Full session\"),\n\t?assertEqual(\n\t\t#nonce_limiter_update_response{ step_number = 8 },\n\t\tapply_external_update(SessionKey1, [], 7, true, SessionKey0),\n\t\t\"Partial session, client ahead\"),\n\t?assertEqual(\n\t\t#nonce_limiter_update_response{ step_number = 8 },\n\t\tapply_external_update(SessionKey1, [6, 5], 7, false, SessionKey0),\n\t\t\"Full session, client ahead\"),\n\ttimer:sleep(?WAIT_TIME),\n\t?assertEqual(SessionKey0, get_current_session_key()),\n\t?assertEqual(\n\t\t[<<\"8\">>, <<\"7\">>, <<\"6\">>, <<\"5\">>],\n\t\tcomputed_steps()),\n\t?assertEqual(\n\t\t[10, 10, 10, 10],\n\t\tcomputed_upper_bounds()).\n\n%% @doc\n%% Test case:\n%% 1. VDF server pushes a partial update that skips too far ahead of the client\n%% 2. Simulate the updates that the server would then push (i.e. 
full session updates of the current\n%%    session and maybe previous session)\n%%\n%% Assert that the client responds correctly and only processes each step once (even though it may\n%% see the same step several times as part of the full session updates).\ntest_skip_ahead() ->\n\tSessionKey0 = get_current_session_key(),\n\tSessionKey1 = {<<\"session1\">>, 1, 1},\n\tSessionKey2 = {<<\"session2\">>, 2, 1},\n\t?assertEqual(\n\t\tok,\n\t\tapply_external_update(SessionKey1, [5], 6, false, SessionKey0),\n\t\t\"Full session1\"),\n\t?assertEqual(\n\t\t#nonce_limiter_update_response{ session_found = true, step_number = 6 },\n\t\tapply_external_update(SessionKey1, [], 8, true, SessionKey0),\n\t\t\"Partial session1, server ahead\"),\n\t?assertEqual(\n\t\tok,\n\t\tapply_external_update(SessionKey1, [7, 6, 5], 8, false, SessionKey0),\n\t\t\"Full session1\"),\n\t?assertEqual(\n\t\t#nonce_limiter_update_response{ session_found = false },\n\t\tapply_external_update(SessionKey2, [], 12, true, SessionKey1),\n\t\t\"Partial session2, server ahead\"),\n\t?assertEqual(\n\t\tok,\n\t\tapply_external_update(SessionKey1, [8, 7, 6, 5], 9, false, SessionKey0),\n\t\t\"Full session1, all steps already seen\"),\n\t?assertEqual(\n\t\tok,\n\t\tapply_external_update(SessionKey2, [11, 10], 12, false, SessionKey1),\n\t\t\"Full session2, some steps already seen\"),\n\ttimer:sleep(?WAIT_TIME),\n\t?assertEqual(SessionKey0, get_current_session_key()),\n\t?assertEqual(\n\t\t[<<\"6\">>, <<\"5\">>, <<\"8\">>, <<\"7\">>, <<\"9\">>, <<\"12\">>, <<\"11\">>, <<\"10\">>],\n\t\tcomputed_steps()),\n\t?assertEqual(\n\t\t[10, 10, 10, 10, 10, 20, 20, 20],\n\t\tcomputed_upper_bounds()).\n\ntest_2_servers_switching() ->\n\tSessionKey0 = get_current_session_key(),\n\tSessionKey1 = {<<\"session1\">>, 1, 1},\n\tSessionKey2 = {<<\"session2\">>, 2, 1},\n\t?assertEqual(\n\t\tok,\n\t\tapply_external_update(SessionKey1, [6, 5], 7, false, SessionKey0, vdf_server_1()),\n\t\t\"Full session1 from vdf_server_1\"),\n\t?assertEqual(\n\t\tok,\n\t\tapply_external_update(SessionKey1, [], 8, true, SessionKey0, vdf_server_2()),\n\t\t\"Partial session1 from vdf_server_2\"),\n\t?assertEqual(\n\t\tok,\n\t\tapply_external_update(SessionKey1, [], 9, true, SessionKey0, vdf_server_2()),\n\t\t\"Partial session1 from vdf_server_2\"),\n\t?assertEqual(\n\t\t#nonce_limiter_update_response{ session_found = false },\n\t\tapply_external_update(SessionKey2, [], 11, true, SessionKey1, vdf_server_1()),\n\t\t\"Partial session2 from vdf_server_1\"),\n\t?assertEqual(\n\t\tok,\n\t\tapply_external_update(SessionKey2, [10], 11, false, SessionKey1, vdf_server_1()),\n\t\t\"Full session2 from vdf_server_1\"),\n\t?assertEqual(\n\t\tok,\n\t\tapply_external_update(SessionKey1, [], 10, true, SessionKey0, vdf_server_2()),\n\t\t\"Partial session1 from vdf_server_2 (should not change current session)\"),\n\t?assertEqual(\n\t\tok,\n\t\tapply_external_update(SessionKey1, [], 11, true, SessionKey0, vdf_server_2()),\n\t\t\"Partial session1 from vdf_server_2 (should not change current session)\"),\n\t?assertEqual(\n\t\tok,\n\t\tapply_external_update(SessionKey1, [], 12, true, SessionKey0, vdf_server_2()),\n\t\t\"Partial session1 from vdf_server_2 (should not change current session)\"),\n\t?assertEqual(\n\t\tok,\n\t\tapply_external_update(\n\t\t\tSessionKey2, [11, 10], 12, false, SessionKey1, vdf_server_2()),\n\t\t\"Full session2 from vdf_server_2\"),\n\t?assertEqual(\n\t\t#nonce_limiter_update_response{ step_number = 12 },\n\t\tapply_external_update(SessionKey2, [], 12, true, SessionKey1, 
vdf_server_1()),\n\t\t\"Partial (repeat) session2 from vdf_server_1\"),\n\t?assertEqual(\n\t\tok,\n\t\tapply_external_update(SessionKey2, [], 13, true, SessionKey1, vdf_server_1()),\n\t\t\"Partial (new) session2 from vdf_server_1\"),\n\t?assertEqual(\n\t\tok,\n\t\tapply_external_update(SessionKey2, [], 14, true, SessionKey1, vdf_server_2()),\n\t\t\"Partial (new) session2 from vdf_server_2\"),\n\ttimer:sleep(?WAIT_TIME),\n\t?assertEqual(SessionKey0, get_current_session_key()),\n\t?assertEqual([\n\t\t<<\"7\">>, <<\"6\">>, <<\"5\">>, <<\"8\">>, <<\"9\">>,\n\t\t<<\"11\">>, <<\"10\">>, <<\"10\">>, <<\"11\">>, <<\"12\">>,\n\t\t<<\"12\">>, <<\"13\">>, <<\"14\">>\n\t], computed_steps()),\n\t?assertEqual(\n\t\t[10, 10, 10, 10, 10, 20, 20, 20, 20, 20, 20, 20, 20],\n\t\tcomputed_upper_bounds()).\n\ntest_backtrack() ->\n\tSessionKey0 = get_current_session_key(),\n\tSessionKey1 = {<<\"session1\">>, 1, 1},\n\tSessionKey2 = {<<\"session2\">>, 2, 1},\n\t?assertEqual(\n\t\tok,\n\t\tapply_external_update(SessionKey1, [\n\t\t\t16, 15,\t\t\t\t%% interval 3\n\t\t\t14, 13, 12, 11, 10, %% interval 2\n\t\t\t9, 8, 7, 6, 5\t\t%% interval 1\n\t\t], 17, false, SessionKey0),\n\t\t\"Full session1\"),\n\t?assertEqual(\n\t\tok,\n\t\tapply_external_update(SessionKey1, [], 18, true, SessionKey0),\n\t\t\"Partial session1\"),\n\t?assertEqual(\n\t\t#nonce_limiter_update_response{ session_found = false },\n\t\tapply_external_update(SessionKey2, [], 15, true, SessionKey1),\n\t\t\"Partial session2\"),\n\t?assertEqual(\n\t\t#nonce_limiter_update_response{ step_number = 18 },\n\t\tapply_external_update(\n\t\t\tSessionKey1, [8, 7, 6, 5], 9, false, SessionKey0),\n\t\t\"Backtrack. Send full session1.\"),\n\t?assertEqual(\n\t\tok,\n\t\tapply_external_update(\n\t\t\tSessionKey2, [14, 13, 12, 11, 10], 15, false, SessionKey1),\n\t\t\"Backtrack. 
Send full session2\"),\n\ttimer:sleep(?WAIT_TIME),\n\t?assertEqual(SessionKey0, get_current_session_key()),\n\t?assertEqual([\n\t\t<<\"17\">>,<<\"16\">>,<<\"15\">>,<<\"14\">>,<<\"13\">>,<<\"12\">>,\n        <<\"11\">>,<<\"10\">>,<<\"9\">>,<<\"8\">>,<<\"7\">>,<<\"6\">>,\n        <<\"5\">>,<<\"18\">>,<<\"15\">>\n\t], computed_steps()),\n\t?assertEqual(\n\t\t[20, 20, 20, 20, 20, 20, 20, 20, 10, 10, 10, 10, 10, 20, 30],\n\t\tcomputed_upper_bounds()).\n\ntest_2_servers_backtrack() ->\n\tSessionKey0 = get_current_session_key(),\n\tSessionKey1 = {<<\"session1\">>, 1, 1},\n\tSessionKey2 = {<<\"session2\">>, 2, 1},\n\t?assertEqual(\n\t\tok,\n\t\tapply_external_update(SessionKey1, [\n\t\t\t16, 15,\t\t\t\t%% interval 3\n\t\t\t14, 13, 12, 11, 10, %% interval 2\n\t\t\t9, 8, 7, 6, 5\t\t%% interval 1\n\t\t], 17, false, SessionKey0, vdf_server_1()),\n\t\t\"Full session1 from vdf_server_1\"),\n\t?assertEqual(\n\t\tok,\n\t\tapply_external_update(SessionKey1, [], 18, true, SessionKey0, vdf_server_1()),\n\t\t\"Partial session1 from vdf_server_1\"),\n\t?assertEqual(\n\t\t#nonce_limiter_update_response{ session_found = false },\n\t\tapply_external_update(SessionKey2, [], 15, true, SessionKey1, vdf_server_2()),\n\t\t\"Partial session2 from vdf_server_2\"),\n\t?assertEqual(\n\t\tok,\n\t\tapply_external_update(\n\t\t\tSessionKey2, [14, 13, 12, 11, 10], 15, false, SessionKey1, vdf_server_2()),\n\t\t\"Backtrack in session2 from vdf_server_2\"),\n\ttimer:sleep(?WAIT_TIME),\n\t?assertEqual([\n\t\t<<\"17\">>,<<\"16\">>,<<\"15\">>,<<\"14\">>,<<\"13\">>,<<\"12\">>,\n        <<\"11\">>,<<\"10\">>,<<\"9\">>,<<\"8\">>,<<\"7\">>,<<\"6\">>,\n        <<\"5\">>,<<\"18\">>,<<\"15\">>\n\t], computed_steps()),\n\t?assertEqual(SessionKey0, get_current_session_key()),\n\t?assertEqual(\n\t\t[20, 20, 20, 20, 20, 20, 20, 20, 10, 10, 10, 10, 10, 20, 30],\n\t\tcomputed_upper_bounds()).\n\ntest_mining_session() ->\n\tSessionKey0 = get_current_session_key(),\n\tSessionKey1 = {<<\"session1\">>, 1, 1},\n\tSessionKey2 = {<<\"session2\">>, 2, 1},\n\tSessionKey3 = {<<\"session3\">>, 3, 1},\n\tar_test_node:mine(),\n\t?assertEqual(\n\t\tok,\n\t\tapply_external_update(SessionKey0, [], 2, true, undefined),\n\t\t\"Partial session0, should mine\"),\n\ttimer:sleep(?WAIT_TIME),\n\t?assertEqual([SessionKey0], sets:to_list(ar_mining_server:active_sessions())),\n\t?assertEqual([2], mined_steps()),\n\t?assertEqual(\n\t\tok,\n\t\tapply_external_update(SessionKey0, [3], 4, false, undefined),\n\t\t\"Full session0, should mine\"),\n\ttimer:sleep(?WAIT_TIME),\n\t?assertEqual([SessionKey0], sets:to_list(ar_mining_server:active_sessions())),\n\t?assertEqual([4, 3], mined_steps()),\n\t?assertEqual(\n\t\t#nonce_limiter_update_response{ step_number = 4 },\n\t\tapply_external_update(SessionKey0, [], 4, true, undefined),\n\t\t\"Repeat step, should not mine\"),\n\ttimer:sleep(?WAIT_TIME),\n\t?assertEqual([SessionKey0], sets:to_list(ar_mining_server:active_sessions())),\n\t?assertEqual([], mined_steps()),\n\t?assertEqual(\n\t\t#nonce_limiter_update_response{ session_found = false },\n\t\tapply_external_update(SessionKey1, [], 6, true, SessionKey0),\n\t\t\"Partial session1, should not mine\"),\n\ttimer:sleep(?WAIT_TIME),\n\t?assertEqual([SessionKey0], sets:to_list(ar_mining_server:active_sessions())),\n\t?assertEqual([], mined_steps()),\n\t?assertEqual(\n\t\tok,\n\t\tapply_external_update(SessionKey1, [5], 6, false, SessionKey0),\n\t\t\"Full session1, should mine\"),\n\ttimer:sleep(?WAIT_TIME),\n\tassert_sessions_equal([SessionKey0, SessionKey1], 
ar_mining_server:active_sessions()),\n\t?assertEqual([6, 5], mined_steps()),\n\t?assertEqual(\n\t\t#nonce_limiter_update_response{ session_found = false },\n\t\tapply_external_update(SessionKey3, [], 16, true, SessionKey2),\n\t\t\"Partial session3, should not mine\"),\n\ttimer:sleep(?WAIT_TIME),\n\tassert_sessions_equal([SessionKey0, SessionKey1], ar_mining_server:active_sessions()),\n\t?assertEqual([], mined_steps()),\n\t?assertEqual(\n\t\t#nonce_limiter_update_response{ session_found = false },\n\t\tapply_external_update(SessionKey3, [15], 16, true, SessionKey2),\n\t\t\"Full session3, should not mine\"),\n\ttimer:sleep(?WAIT_TIME),\n\tassert_sessions_equal([SessionKey0, SessionKey1], ar_mining_server:active_sessions()),\n\t?assertEqual([], mined_steps()),\n\t%% Current session is only updated when applying a new tip block, not when applying a VDF step\n\t%% from a VDF server.\n\t?assertEqual(SessionKey0, get_current_session_key()).\n\n%% -------------------------------------------------------------------------------------------------\n%% Helper Functions\n%% -------------------------------------------------------------------------------------------------\n\ninit(Req, State) ->\n\tSplitPath = ar_http_iface_server:split_path(cowboy_req:path(Req)),\n\thandle(SplitPath, Req, State).\n\nhandle([<<\"vdf\">>], Req, State) ->\n\t{ok, Body, _} = ar_http_req:body(Req, ?MAX_BODY_SIZE),\n\tcase ar_serialize:binary_to_nonce_limiter_update(2, Body) of\n\t\t{ok, Update} ->\n\t\t\thandle_update(Update, Req, State);\n\t\t{error, _} ->\n\t\t\tResponse = #nonce_limiter_update_response{ format = 2 },\n\t\t\tBin = ar_serialize:nonce_limiter_update_response_to_binary(Response),\n\t\t\t{ok, cowboy_req:reply(202, #{}, Bin, Req), State}\n\tend.\n\nhandle_update(Update, Req, State) ->\n\t{Seed, _, _} = Update#nonce_limiter_update.session_key,\n\tIsPartial  = Update#nonce_limiter_update.is_partial,\n\tSession = Update#nonce_limiter_update.session,\n\tStepNumber = Session#vdf_session.step_number,\n\tNSteps = length(Session#vdf_session.steps),\n\tCheckpoints = maps:get(StepNumber, Session#vdf_session.step_checkpoints_map),\n\n\tUpdateOutput = hd(Checkpoints),\n\n\tSessionOutput = hd(Session#vdf_session.steps),\n\n\t?assertNotEqual(Checkpoints, Session#vdf_session.steps),\n\t%% #nonce_limiter_update.checkpoints should be the checkpoints of the last step so\n\t%% the head of checkpoints should match the head of the session's steps\n\t?assertEqual(UpdateOutput, SessionOutput),\n\n\tcase ets:lookup(computed_output, Seed) of\n\t\t[{Seed, FirstStepNumber, LatestStepNumber}] ->\n\t\t\t?assert(not IsPartial orelse StepNumber == LatestStepNumber + 1,\n\t\t\t\tlists:flatten(io_lib:format(\n\t\t\t\t\t\"Partial VDF update did not increase by 1, \"\n\t\t\t\t\t\"StepNumber: ~p, LatestStepNumber: ~p\",\n\t\t\t\t\t[StepNumber, LatestStepNumber]))),\n\n\t\t\tets:insert(computed_output, {Seed, FirstStepNumber, StepNumber}),\n\t\t\t{ok, cowboy_req:reply(200, #{}, <<>>, Req), State};\n\t\t_ ->\n\t\t\tcase IsPartial of\n\t\t\t\ttrue ->\n\t\t\t\t\tResponse = #nonce_limiter_update_response{ session_found = false },\n\t\t\t\t\tBin = ar_serialize:nonce_limiter_update_response_to_binary(Response),\n\t\t\t\t\t{ok, cowboy_req:reply(202, #{}, Bin, Req), State};\n\t\t\t\tfalse ->\n\t\t\t\t\tets:insert(computed_output, {Seed, StepNumber - NSteps + 1, StepNumber}),\n\t\t\t\t\t{ok, cowboy_req:reply(200, #{}, <<>>, Req), State}\n\t\t\tend\n\tend.\n\nvdf_server_1() ->\n\t{127,0,0,1,2001}.\n\nvdf_server_2() ->\n\t{127,0,0,1,2002}.\n\ncomputed_steps() 
->\n    lists:reverse(ets:foldl(fun({_, Step, _}, Acc) -> [Step | Acc] end, [], computed_output)).\n\ncomputed_upper_bounds() ->\n    lists:reverse(ets:foldl(fun({_, _, UpperBound}, Acc) -> [UpperBound | Acc] end, [], computed_output)).\n\nmined_steps() ->\n\tSteps = lists:reverse(ets:foldl(\n\t\tfun({_Worker, _Task, Step}, Acc) -> [Step | Acc] end, [], add_task)),\n\tets:delete_all_objects(add_task),\n\tSteps.\n\ncomputed_output() ->\n\treceive\n\t\t{event, nonce_limiter, {computed_output, Args}} ->\n\t\t\t{_SessionKey, _StepNumber, Output, UpperBound} = Args,\n\t\t\tKey = ets:info(computed_output, size) + 1, % Unique key based on current size, ensures ordering\n    \t\tets:insert(computed_output, {Key, Output, UpperBound}),\n\t\t\tcomputed_output()\n\tend.\n\napply_external_update(SessionKey, ExistingSteps, StepNumber, IsPartial, PrevSessionKey) ->\n\tapply_external_update(SessionKey, ExistingSteps, StepNumber, IsPartial, PrevSessionKey,\n\t\tvdf_server_1()).\napply_external_update(SessionKey, ExistingSteps, StepNumber, IsPartial, PrevSessionKey, Peer) ->\n\t{Seed, Interval, _Difficulty} = SessionKey,\n\tSteps = [list_to_binary(integer_to_list(Step)) || Step <- [StepNumber | ExistingSteps]],\n\tSession = #vdf_session{\n\t\tupper_bound = Interval * 10,\n\t\tnext_upper_bound = (Interval+1) * 10,\n\t\tprev_session_key = PrevSessionKey,\n\t\tstep_number = StepNumber,\n\t\tseed = Seed,\n\t\tsteps = Steps\n\t},\n\n\tUpdate = #nonce_limiter_update{\n\t\tsession_key = SessionKey,\n\t\tis_partial = IsPartial,\n\t\tsession = Session\n\t},\n\tar_nonce_limiter:apply_external_update(Update, Peer).\n\nget_current_session_key() ->\n\t{CurrentSessionKey, _} = ar_nonce_limiter:get_current_session(),\n\tCurrentSessionKey.\n\nmock_add_task() ->\n\t{\n\t\tar_mining_worker, add_task,\n\t\tfun(Worker, TaskType, Candidate) ->\n\t\t\tets:insert(add_task, {Worker, TaskType, Candidate#mining_candidate.step_number})\n\t\tend\n\t}.\n\nmock_reset_frequency() ->\n\t{\n\t\tar_nonce_limiter, get_reset_frequency,\n\t\tfun() ->\n\t\t\t5\n\t\tend\n\t}.\n\nassert_sessions_equal(List, Set) ->\n\t?assertEqual(lists:sort(List), lists:sort(sets:to_list(Set))).\n"
  },
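The session-overlap and skip-ahead tests above assert one central property: even when full-session updates repeat steps the client has already applied, each step is processed exactly once (test_session_overlap, for example, expects every step from 5 to 12 to appear a single time in computed_steps()). Below is a hedged sketch of the deduplication this implies, assuming steps arrive newest-first as in the #vdf_session.steps lists built by apply_external_update/6; it is a stand-in for illustration, not the ar_nonce_limiter implementation.

```erlang
%% Illustrative only: given the highest step number a client has already
%% applied for a session, keep just the genuinely new steps from an incoming
%% update whose step list is ordered newest-first.
-module(vdf_dedup_sketch).
-export([new_steps/3]).

-spec new_steps(non_neg_integer(), [binary()], non_neg_integer()) -> [binary()].
new_steps(StepNumber, Steps, LatestAppliedStepNumber) ->
	%% If the client is already at or beyond StepNumber, nothing is new.
	NumNew = max(0, StepNumber - LatestAppliedStepNumber),
	lists:sublist(Steps, NumNew).
```

For instance, vdf_dedup_sketch:new_steps(9, [<<"9">>, <<"8">>, <<"7">>, <<"6">>, <<"5">>], 8) returns [<<"9">>]: only the step the client has not yet seen gets processed, which is the single-processing property the computed_steps() assertions check.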
  {
    "path": "apps/arweave/test/ar_vdf_server_tests.erl",
    "content": "-module(ar_vdf_server_tests).\n\n-export([init/2]).\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-import(ar_test_node, [assert_wait_until_height/2, post_block/2, send_new_block/2]).\n\n%% -------------------------------------------------------------------------------------------------\n%% Test Fixtures\n%% -------------------------------------------------------------------------------------------------\n\nsetup() ->\n\tets:new(computed_output, [named_table, set, public]),\n\t{ok, Config} = arweave_config:get_env(),\n\t{ok, PeerConfig} = ar_test_node:remote_call(peer1, arweave_config, get_env, []),\n    {Config, PeerConfig}.\n\ncleanup({Config, PeerConfig}) ->\n\tarweave_config:set_env(Config),\n\tar_test_node:remote_call(peer1, arweave_config, set_env, [PeerConfig]),\n\tets:delete(computed_output).\n\n%% -------------------------------------------------------------------------------------------------\n%% Test Registration\n%% -------------------------------------------------------------------------------------------------\n\n%% @doc All vdf_server_push_test_ tests test a few things\n%% 1. VDF server posts regular VDF updates to the client\n%% 2. For partial updates (session doesn't change), each step number posted is 1 greater than\n%%    the one before\n%% 3. When the client responds that it doesn't have the session in a partial update, server\n%%    should post the full session\n%%\n%% test_vdf_server_push_fast_block tests that the VDF server can handle receiving\n%% a block that is ahead in the VDF chain: specifically:\n%%    When a block comes in that starts a new VDF session, the server should first post the\n%%    full previous session which should include all steps up to and including the\n%%    global_step_number of the block (it may also include additional \"overflow\" steps that\n%%    were computed before the block arrived). 
The server should not post the new session\n%%    until it has computed a step in that session.\n%%\n%% test_vdf_server_push_slow_block tests that the VDF server can handle receiving\n%% a block that is behind in the VDF chain: specifically:\n%%\nvdf_server_push_test_() ->\n    {foreach,\n\t\tfun setup/0,\n     \tfun cleanup/1,\n\t\t[\n\t\t\tar_test_node:test_with_mocked_functions([mock_reset_frequency()],\n\t\t\t\tfun test_vdf_server_push_fast_block/0, ?TEST_NODE_TIMEOUT),\n\t\t\tar_test_node:test_with_mocked_functions([mock_reset_frequency()],\n\t\t\t\tfun test_vdf_server_push_slow_block/0, ?TEST_NODE_TIMEOUT)\n\t\t]\n    }.\n\n%% @doc Similar to the vdf_server_push_test_ tests except we test the full end-to-end\n%% flow where a VDF client has to validate a block with VDF information provided by\n%% the VDF server.\nvdf_client_test_() ->\n\t{foreach,\n\t\tfun setup/0,\n\t\tfun cleanup/1,\n\t\t[\n\t\t\tar_test_node:test_with_mocked_functions([mock_reset_frequency()],\n\t\t\t\tfun test_vdf_client_fast_block/0, ?TEST_NODE_TIMEOUT),\n\t\t\tar_test_node:test_with_mocked_functions([mock_reset_frequency()],\n\t\t\t\tfun test_vdf_client_fast_block_pull_interface/0, ?TEST_NODE_TIMEOUT),\n\t\t\tar_test_node:test_with_mocked_functions([mock_reset_frequency()],\n\t\t\t\tfun test_vdf_client_slow_block/0, ?TEST_NODE_TIMEOUT),\n\t\t\tar_test_node:test_with_mocked_functions([mock_reset_frequency()],\n\t\t\t\tfun test_vdf_client_slow_block_pull_interface/0, ?TEST_NODE_TIMEOUT)\n\t\t]\n    }.\n\nserialize_test_() ->\n    [\n\t\t{timeout, 120, fun test_serialize_update_format_2/0},\n\t\t{timeout, 120, fun test_serialize_update_format_3/0},\n\t\t{timeout, 120, fun test_serialize_update_format_4/0},\n\t\t{timeout, 120, fun test_serialize_response/0},\n\t\t{timeout, 120, fun test_serialize_response_compatibility/0}\n\t].\n\n%% -------------------------------------------------------------------------------------------------\n%% Tests\n%% -------------------------------------------------------------------------------------------------\n\n%%\n%% vdf_server_push_test_\n%%\ntest_vdf_server_push_fast_block() ->\n\tVDFPort = ar_test_node:get_unused_port(),\n\t{_, Pub} = ar_wallet:new(),\n\t[B0] = ar_weave:init([{ar_wallet:to_address(Pub), ?AR(10000), <<>>}]),\n\n\t%% Let peer1 get ahead of main in the VDF chain\n\t_ = ar_test_node:start_peer(peer1, B0),\n\tar_test_node:remote_call(peer1, ar_http, block_peer_connections, []),\n\ttimer:sleep(3000),\n\n\t{ok, Config} = arweave_config:get_env(),\n\t_ = ar_test_node:start(\n\t\tB0, ar_wallet:to_address(ar_wallet:new_keyfile()),\n\t\tConfig#config{ \n\t\t\tnonce_limiter_client_peers = [ \"127.0.0.1:\" ++ integer_to_list(VDFPort) ]\n\t\t}\n\t),\n\t%% Setup a server to listen for VDF pushes\n\tRoutes = [{\"/[...]\", ar_vdf_server_tests, []}],\n\t{ok, _} = cowboy:start_clear(\n\t\tar_vdf_server_test_listener,\n\t\t[{port, VDFPort}],\n\t\t#{ env => #{ dispatch => cowboy_router:compile([{'_', Routes}]) } }\n\t),\n\t%% Mine a block that will be ahead of main in the VDF chain\n\tar_test_node:mine(peer1),\n\tBI = assert_wait_until_height(peer1, 1),\n\tB1 = ar_test_node:remote_call(peer1, ar_storage, read_block, [hd(BI)]),\n\t%% Post the block to main which will cause it to validate VDF for the block under\n\t%% the B0 session and then begin using the (later) B1 VDF session going forward\n\tok = ar_events:subscribe(block),\n\tpost_block(B1, valid),\n\n\tSeed0 = B0#block.nonce_limiter_info#nonce_limiter_info.next_seed,\n\tSeed1 = 
B1#block.nonce_limiter_info#nonce_limiter_info.next_seed,\n\tStepNumber1 = ar_block:vdf_step_number(B1),\n\tar_util:do_until(\n\t\tfun() ->\n\t\t\t%% Wait until both VDF sessions are present and we apply VDF upt to the block's step number.\n\t\t\tcase {ets:lookup(computed_output, Seed0), ets:lookup(computed_output, Seed1)} of\n\t\t\t\t{[{Seed0, _, LatestStepNumber}], [{Seed1, _, _}]} ->\n\t\t\t\t\tLatestStepNumber >= StepNumber1;\n\t\t\t\t_ ->\n\t\t\t\t\tfalse\n\t\t\tend\n\t\tend,\n\t\t200,\n\t\t20_000\n\t),\n\n\t[{Seed0, _, LatestStepNumber0}] = get_computed_output(Seed0),\n\t[{Seed1, _FirstStepNumber1, _}] = get_computed_output(Seed1),\n\t?assertEqual(2, ets:info(computed_output, size), \"VDF server did not post 2 sessions\"),\n\t?assert(LatestStepNumber0 >= StepNumber1,\n\t\t\"VDF server did not post the full Session0 when starting Session1\"),\n\n\tcowboy:stop_listener(ar_vdf_server_test_listener).\n\ntest_vdf_server_push_slow_block() ->\n\tVDFPort = ar_test_node:get_unused_port(),\n\t{_, Pub} = ar_wallet:new(),\n\t[B0] = ar_weave:init([{ar_wallet:to_address(Pub), ?AR(10000), <<>>}]),\n\n\t{ok, Config} = arweave_config:get_env(),\n\t_ = ar_test_node:start(\n\t\tB0, ar_wallet:to_address(ar_wallet:new_keyfile()),\n\t\tConfig#config{ \n\t\t\tnonce_limiter_client_peers = [ \"127.0.0.1:\" ++ integer_to_list(VDFPort) ]\n\t\t}\n\t),\n\t%% Let main get ahead of peer1 in the VDF chain\n\ttimer:sleep(3000),\n\n\t\n\t_ = ar_test_node:start_peer(peer1, B0),\n\tar_test_node:remote_call(peer1, ar_http, block_peer_connections, []),\n\n\t%% Setup a server to listen for VDF pushes\n\tRoutes = [{\"/[...]\", ar_vdf_server_tests, []}],\n\t{ok, _} = cowboy:start_clear(\n\t\tar_vdf_server_test_listener,\n\t\t[{port, VDFPort}],\n\t\t#{ env => #{ dispatch => cowboy_router:compile([{'_', Routes}]) } }\n\t),\n\n\t%% Mine a block that will be behind main in the VDF chain\n\tar_test_node:mine(peer1),\n\tBI = assert_wait_until_height(peer1, 1),\n\tB1 = ar_test_node:remote_call(peer1, ar_storage, read_block, [hd(BI)]),\n\n\t%% Post the block to main which will cause it to validate VDF for the block under\n\t%% the B0 session and then begin using the (earlier) B1 VDF session going forward\n\tok = ar_events:subscribe(block),\n\tpost_block(B1, valid),\n\ttimer:sleep(3000),\n\n\tSeed0 = B0#block.nonce_limiter_info#nonce_limiter_info.next_seed,\n\tSeed1 = B1#block.nonce_limiter_info#nonce_limiter_info.next_seed,\n\n\t[{Seed0, _, LatestStepNumber0}] = get_computed_output(Seed0),\n\t[{Seed1, FirstStepNumber1, LatestStepNumber1}] = get_computed_output(Seed1),\n\t?assert(LatestStepNumber0 > FirstStepNumber1, \"Session0 should have started later than Session1\"),\n\n\ttimer:sleep(3000),\n\t[{Seed0, _, NewLatestStepNumber0}] = get_computed_output(Seed0),\n\t[{Seed1, _, NewLatestStepNumber1}] = get_computed_output(Seed1),\n\t?assertEqual(LatestStepNumber0, NewLatestStepNumber0,\n\t\t\"Session0 should not have progressed\"),\n\t?assert(NewLatestStepNumber1 > LatestStepNumber1, \"Session1 should have progressed\"),\n\n\tcowboy:stop_listener(ar_vdf_server_test_listener).\n\n%%\n%% vdf_client_test_\n%%\ntest_vdf_client_fast_block() ->\n\tar_test_node:stop(),\n\t{ok, Config} = arweave_config:get_env(),\n\t{_, Pub} = ar_wallet:new(),\n\t[B0] = ar_weave:init([{ar_wallet:to_address(Pub), ?AR(10000), <<>>}]),\n\n\tPeerAddress = ar_wallet:to_address(ar_test_node:remote_call(peer1, ar_wallet, new_keyfile, [])),\n\n\t%% Let peer1 get ahead of main in the VDF chain\n\t_ = ar_test_node:start_peer(peer1, 
B0),\n\tar_test_node:remote_call(peer1, ar_http, block_peer_connections, []),\n\ttimer:sleep(5_000),\n\n\t%% Mine a block that will be ahead of main in the VDF chain\n\tar_test_node:mine(peer1),\n\tBI = assert_wait_until_height(peer1, 1),\n\tB1 = ar_test_node:remote_call(peer1, ar_storage, read_block, [hd(BI)]),\n\tar_test_node:stop(peer1),\n\n\t%% Restart peer1 as a VDF client\n\t{ok, PeerConfig} = ar_test_node:get_config(peer1),\n\t_ = ar_test_node:start_peer(peer1,\n\t\tB0, PeerAddress,\n\t\tPeerConfig#config{ \n\t\t\tnonce_limiter_server_trusted_peers = [\n\t\t\t\tar_util:format_peer(ar_test_node:peer_ip(main))\n\t\t\t]\n\t\t}),\n\t%% Isolate the client-path assertion below: when B1 is posted directly to peer1,\n\t%% peer1 must not relay it to main before we explicitly post it to main.\n\tar_test_node:remote_call(peer1, ar_http, block_peer_connections, []),\n\t%% Start main as a VDF server\n\tar_test_node:stop(),\n\t_ = ar_test_node:start(\n\t\tB0, ar_wallet:to_address(ar_wallet:new_keyfile()),\n\t\tConfig#config{ \n\t\t\tnonce_limiter_client_peers = [\n\t\t\t\tar_util:format_peer(ar_test_node:peer_ip(peer1))\n\t\t\t]\n\t\t}),\n\n\t%% Post the block to the VDF client. It won't be able to validate it since the VDF server\n\t%% isn't aware of the new VDF session yet. Also, it cannot gossip it to main because\n\t%% we disabled gossip.\n\tsend_new_block(ar_test_node:peer_ip(peer1), B1),\n\ttimer:sleep(5_000),\n\t?assertEqual(1,\n\t\tlength(ar_test_node:remote_call(peer1, ar_node, get_blocks, [])),\n\t\t\"VDF client shouldn't be able to validate the block until the VDF server posts a \"\n\t\t\"new VDF session\"),\n\n\t%% Re-enable p2p communication - main will receive B1 and peer1 is\n\t%% expected to sync and validate it.\n\tar_test_node:connect_to_peer(peer1),\n\n\t%% After the VDF server receives the block, it should push the old and new VDF sessions\n\t%% to the VDF client allowing it to validate teh block.\n\tsend_new_block(ar_test_node:peer_ip(main), B1),\n\t%% If all is right, the VDF server should push the old and new VDF sessions allowing\n\t%% the VDF client to finally validate the block.\n\tBI = assert_wait_until_height(peer1, 1).\n\ntest_vdf_client_fast_block_pull_interface() ->\n  \t{ok, Config} = arweave_config:get_env(),\n\t{_, Pub} = ar_wallet:new(),\n\t[B0] = ar_weave:init([{ar_wallet:to_address(Pub), ?AR(10000), <<>>}]),\n\n\tPeerAddress = ar_wallet:to_address(ar_test_node:remote_call(peer1, ar_wallet, new_keyfile, [])),\n\n\t%% Let peer1 get ahead of main in the VDF chain\n\t_ = ar_test_node:start_peer(peer1, B0),\n\t_ = ar_test_node:remote_call(peer1, ar_http, block_peer_connections, []),\n\ttimer:sleep(20000),\n\n\t%% Mine a block that will be ahead of main in the VDF chain\n\tar_test_node:mine(peer1),\n\tBI = assert_wait_until_height(peer1, 1),\n\tB1 = ar_test_node:remote_call(peer1, ar_storage, read_block, [hd(BI)]),\n\tar_test_node:stop(peer1),\n\n\t%% Restart peer1 as a VDF client\n\t{ok, PeerConfig} = ar_test_node:remote_call(peer1, arweave_config, get_env, []),\n\t_ = ar_test_node:start_peer(peer1,\n\t\tB0, PeerAddress,\n\t\tPeerConfig#config{ \n\t\t\tnonce_limiter_server_trusted_peers = [\n\t\t\t\tar_util:format_peer(ar_test_node:peer_ip(main))\n\t\t\t],\n\t\t\tenable = [vdf_server_pull | PeerConfig#config.enable]\n\t\t}\n\t),\n\t%% Start the main as a VDF server\n\t_ = ar_test_node:start(\n\t\tB0, ar_wallet:to_address(ar_wallet:new_keyfile()),\n\t\tConfig#config{ \n\t\t\tnonce_limiter_client_peers = 
[\n\t\t\t\tar_util:format_peer(ar_test_node:peer_ip(peer1))\n\t\t\t]\n\t\t}\n\t),\n\tar_test_node:connect_to_peer(peer1),\n\n\t%% Post the block to the VDF client. It won't be able to validate it since the VDF server\n\t%% isn't aware of the new VDF session yet.\n\tsend_new_block(ar_test_node:peer_ip(peer1), B1),\n\ttimer:sleep(10000),\n\t?assertEqual(1,\n\t\tlength(ar_test_node:remote_call(peer1, ar_node, get_blocks, [])),\n\t\t\"VDF client shouldn't be able to validate the block until the VDF server posts a \"\n\t\t\"new VDF session\"),\n\n\t%% After the VDF server receives the block, it should push the old and new VDF sessions\n\t%% to the VDF client allowing it to validate teh block.\n\tsend_new_block(ar_test_node:peer_ip(main), B1),\n\t%% If all is right, the VDF server should push the old and new VDF sessions allowing\n\t%% the VDF clietn to finally validate the block.\n\tBI = assert_wait_until_height(peer1, 1).\n\ntest_vdf_client_slow_block() ->\n\t{ok, Config} = arweave_config:get_env(),\n\t{_, Pub} = ar_wallet:new(),\n\t[B0] = ar_weave:init([{ar_wallet:to_address(Pub), ?AR(10000), <<>>}]),\n\n\tPeerAddress = ar_wallet:to_address(ar_test_node:remote_call(peer1, ar_wallet, new_keyfile, [])),\n\n\t%% Let peer1 get ahead of main in the VDF chain\n\t_ = ar_test_node:start_peer(peer1, B0),\n\tar_test_node:remote_call(peer1, ar_http, block_peer_connections, []),\n\n\t%% Mine a block that will be ahead of main in the VDF chain\n\tar_test_node:mine(peer1),\n\tBI = assert_wait_until_height(peer1, 1),\n\tB1 = ar_test_node:remote_call(peer1, ar_storage, read_block, [hd(BI)]),\n\tar_test_node:stop(peer1),\n\n\t%% Restart peer1 as a VDF client\n\t{ok, PeerConfig} = ar_test_node:remote_call(peer1, arweave_config, get_env, []),\n\t_ = ar_test_node:start_peer(peer1,\n\t\tB0, PeerAddress,\n\t\tPeerConfig#config{ \n\t\t\tnonce_limiter_server_trusted_peers = [\n\t\t\t\t\"127.0.0.1:\" ++ integer_to_list(Config#config.port)\n\t\t\t]\n\t\t}\n\t),\n\t%% Start the main as a VDF server\n\t_ = ar_test_node:start(\n\t\tB0, ar_wallet:to_address(ar_wallet:new_keyfile()),\n\t\tConfig#config{ \n\t\t\tnonce_limiter_client_peers = [\n\t\t\t\t\"127.0.0.1:\" ++ integer_to_list(ar_test_node:peer_port(peer1))\n\t\t\t]\n\t\t}\n\t),\n\tar_test_node:connect_to_peer(peer1),\n\ttimer:sleep(10000),\n\n\t%% Post the block to the VDF client, it should validate it \"immediately\" since the\n\t%% VDF server is ahead of the block in the VDF chain.\n\tsend_new_block(ar_test_node:peer_ip(peer1), B1),\n\tBI = assert_wait_until_height(peer1, 1).\n\ntest_vdf_client_slow_block_pull_interface() ->\n\t{ok, Config} = arweave_config:get_env(),\n\t{_, Pub} = ar_wallet:new(),\n\t[B0] = ar_weave:init([{ar_wallet:to_address(Pub), ?AR(10000), <<>>}]),\n\n\tPeerAddress = ar_wallet:to_address(ar_test_node:remote_call(peer1, ar_wallet, new_keyfile, [])),\n\n\t%% Let peer1 get ahead of main in the VDF chain\n\t_ = ar_test_node:start_peer(peer1, B0),\n\tar_test_node:remote_call(peer1, ar_http, block_peer_connections, []),\n\n\t%% Mine a block that will be ahead of main in the VDF chain\n\tar_test_node:mine(peer1),\n\tBI = assert_wait_until_height(peer1, 1),\n\tB1 = ar_test_node:remote_call(peer1, ar_storage, read_block, [hd(BI)]),\n\tar_test_node:stop(peer1),\n\n\t%% Restart peer1 as a VDF client\n\t{ok, PeerConfig} = ar_test_node:remote_call(peer1, arweave_config, get_env, []),\n\t_ = ar_test_node:start_peer(peer1,\n\t\tB0, PeerAddress,\n\t\tPeerConfig#config{ \n\t\t\tnonce_limiter_server_trusted_peers = [\n\t\t\t\t\"127.0.0.1:\" ++ 
integer_to_list(Config#config.port) \n\t\t\t],\n\t\t\tenable = [vdf_server_pull | PeerConfig#config.enable]\n\t\t}\n\t),\n\t%% Start the main as a VDF server\n\t{ok, Config} = arweave_config:get_env(),\n\t_ = ar_test_node:start(\n\t\tB0, ar_wallet:to_address(ar_wallet:new_keyfile()),\n\t\tConfig#config{ \n\t\t\tnonce_limiter_client_peers = [\n\t\t\t\t\"127.0.0.1:\" ++ integer_to_list(ar_test_node:peer_port(peer1))\n\t\t\t]\n\t\t}\n\t),\n\tar_test_node:connect_to_peer(peer1),\n\ttimer:sleep(10000),\n\n\t%% Post the block to the VDF client, it should validate it \"immediately\" since the\n\t%% VDF server is ahead of the block in the VDF chain.\n\tsend_new_block(ar_test_node:peer_ip(peer1), B1),\n\tBI = assert_wait_until_height(peer1, 1).\n\n%%\n%% serialize_test_\n%%\n\ntest_serialize_update_format_2() ->\n\tSessionKey0 = {crypto:strong_rand_bytes(48), 0, 1},\n\tSessionKey1 = {crypto:strong_rand_bytes(48), 1, 1},\n\tCheckpoints = [crypto:strong_rand_bytes(32) || _ <- lists:seq(1, 25)],\n\tUpdate = #nonce_limiter_update{\n\t\tsession_key = SessionKey1,\n\t\tis_partial = true,\n\t\tsession = #vdf_session{\n\t\t\tstep_checkpoints_map = #{ 1 => Checkpoints },\n\t\t\tupper_bound = 1,\n\t\t\tnext_upper_bound = 1,\n\t\t\tprev_session_key = SessionKey0,\n\t\t\tstep_number = 1,\n\t\t\tseed = element(1, SessionKey1),\n\t\t\tsteps = [crypto:strong_rand_bytes(32)]\n\t\t}\n\t},\n\tBinary = ar_serialize:nonce_limiter_update_to_binary(2, Update),\n\t?assertEqual({ok, Update}, ar_serialize:binary_to_nonce_limiter_update(2, Binary)).\n\ntest_serialize_update_format_3() ->\n\tSessionKey0 = {crypto:strong_rand_bytes(48), 0, 1},\n\tSessionKey1 = {crypto:strong_rand_bytes(48), 1, 1},\n\tCheckpoints = [crypto:strong_rand_bytes(32) || _ <- lists:seq(1, 25)],\n\tUpdate = #nonce_limiter_update{\n\t\tsession_key = SessionKey1,\n\t\tis_partial = true,\n\t\tsession = #vdf_session{\n\t\t\tstep_checkpoints_map = #{ 1 => Checkpoints },\n\t\t\tupper_bound = 1,\n\t\t\tnext_upper_bound = 1,\n\t\t\tprev_session_key = SessionKey0,\n\t\t\tstep_number = 1,\n\t\t\tseed = element(1, SessionKey1),\n\t\t\tsteps = [crypto:strong_rand_bytes(32)]\n\t\t}\n\t},\n\tBinary = ar_serialize:nonce_limiter_update_to_binary(3, Update),\n\t?assertEqual({ok, Update}, ar_serialize:binary_to_nonce_limiter_update(3, Binary)).\n\ntest_serialize_update_format_4() ->\n\tSessionKey0 = {crypto:strong_rand_bytes(48), 0, 1},\n\tSessionKey1 = {crypto:strong_rand_bytes(48), 1, 1},\n\tCheckpoints = [crypto:strong_rand_bytes(32) || _ <- lists:seq(1, 25)],\n\tUpdate = #nonce_limiter_update{\n\t\tsession_key = SessionKey1,\n\t\tis_partial = true,\n\t\tsession = #vdf_session{\n\t\t\tstep_checkpoints_map = #{ 1 => Checkpoints },\n\t\t\tupper_bound = 1,\n\t\t\tnext_upper_bound = 1,\n\t\t\tprev_session_key = SessionKey0,\n\t\t\tvdf_difficulty = 10000,\n\t\t\tnext_vdf_difficulty = 1,\n\t\t\tstep_number = 1,\n\t\t\tseed = element(1, SessionKey1),\n\t\t\tsteps = [crypto:strong_rand_bytes(32)]\n\t\t}\n\t},\n\tBinary = ar_serialize:nonce_limiter_update_to_binary(4, Update),\n\t?assertEqual({ok, Update}, ar_serialize:binary_to_nonce_limiter_update(4, Binary)).\n\n%% @doc test serializing and deserializing a #nonce_limiter_update_response when the client\n%% is running the same node version as the server.\ntest_serialize_response() ->\n\tResponseA = #nonce_limiter_update_response{},\n\tBinaryA = ar_serialize:nonce_limiter_update_response_to_binary(ResponseA),\n\t?assertEqual({ok, ResponseA}, ar_serialize:binary_to_nonce_limiter_update_response(BinaryA)),\n\n\tResponseB = 
#nonce_limiter_update_response{\n\t\tsession_found = false,\n\t\tstep_number = 8589934593,\n\t\tpostpone = 255,\n\t\tformat = 2\n\t},\n\tBinaryB = ar_serialize:nonce_limiter_update_response_to_binary(ResponseB),\n\t?assertEqual({ok, ResponseB}, ar_serialize:binary_to_nonce_limiter_update_response(BinaryB)).\n\n%% @doc test serializing and deserializing a #nonce_limiter_update_response when the client\n%% is running an older node version than the server.\ntest_serialize_response_compatibility() ->\n\tBinaryA = << 0:8, 1:8, 5:8 >>,\n\tResponseA = #nonce_limiter_update_response{\n\t\tsession_found = false,\n\t\tstep_number = 5,\n\t\tpostpone = 0,\n\t\tformat = 1\n\t},\n\t?assertEqual({ok, ResponseA}, ar_serialize:binary_to_nonce_limiter_update_response(BinaryA)),\n\n\tBinaryB = << 1:8, 2:8, 511:16, 120:8 >>,\n\tResponseB = #nonce_limiter_update_response{\n\t\tsession_found = true,\n\t\tstep_number = 511,\n\t\tpostpone = 120,\n\t\tformat = 1\n\t},\n\t?assertEqual({ok, ResponseB}, ar_serialize:binary_to_nonce_limiter_update_response(BinaryB)).\n\n%% -------------------------------------------------------------------------------------------------\n%% Helper Functions\n%% -------------------------------------------------------------------------------------------------\n\ninit(Req, State) ->\n\tSplitPath = ar_http_iface_server:split_path(cowboy_req:path(Req)),\n\thandle(SplitPath, Req, State).\n\nhandle([<<\"vdf\">>], Req, State) ->\n\t{ok, Body, _} = ar_http_req:body(Req, ?MAX_BODY_SIZE),\n\tcase ar_serialize:binary_to_nonce_limiter_update(2, Body) of\n\t\t{ok, Update} ->\n\t\t\thandle_update(Update, Req, State);\n\t\t{error, _} ->\n\t\t\tResponse = #nonce_limiter_update_response{ format = 2 },\n\t\t\tBin = ar_serialize:nonce_limiter_update_response_to_binary(Response),\n\t\t\t{ok, cowboy_req:reply(202, #{}, Bin, Req), State}\n\tend.\n\nhandle_update(Update, Req, State) ->\n\t{Seed, _, _} = Update#nonce_limiter_update.session_key,\n\tIsPartial  = Update#nonce_limiter_update.is_partial,\n\tSession = Update#nonce_limiter_update.session,\n\tStepNumber = Session#vdf_session.step_number,\n\tNSteps = length(Session#vdf_session.steps),\n\tCheckpoints = maps:get(StepNumber, Session#vdf_session.step_checkpoints_map),\n\n\tUpdateOutput = hd(Checkpoints),\n\n\tSessionOutput = hd(Session#vdf_session.steps),\n\n\t?assertNotEqual(Checkpoints, Session#vdf_session.steps),\n\t%% #nonce_limiter_update.checkpoints should be the checkpoints of the last step so\n\t%% the head of checkpoints should match the head of the session's steps\n\t?assertEqual(UpdateOutput, SessionOutput),\n\n\tcase ets:lookup(computed_output, Seed) of\n\t\t[{Seed, FirstStepNumber, LatestStepNumber}] ->\n\t\t\t%% Normally a partial VDF update should always increase by 1, but the VDF_DIFFICULTY\n\t\t\t%% is so low in tests that there can be a race condition which causes a partial\n\t\t\t%% update to repeat a VDF step. 
This assertion allows for that scenario in order\n\t\t\t%% to improve test reliability.\n\t\t\t?assert(\n\t\t\t\t\tnot IsPartial orelse\n\t\t\t\t\tStepNumber == LatestStepNumber + 1 orelse\n\t\t\t\t\tStepNumber == LatestStepNumber,\n\t\t\t\tlists:flatten(io_lib:format(\n\t\t\t\t\t\"Partial VDF update has step gap, \"\n\t\t\t\t\t\"StepNumber: ~p, LatestStepNumber: ~p\",\n\t\t\t\t\t[StepNumber, LatestStepNumber]))),\n\n\t\t\tets:insert(computed_output, {Seed, FirstStepNumber, StepNumber}),\n\t\t\t{ok, cowboy_req:reply(200, #{}, <<>>, Req), State};\n\t\t_ ->\n\t\t\tcase IsPartial of\n\t\t\t\ttrue ->\n\t\t\t\t\tResponse = #nonce_limiter_update_response{ session_found = false },\n\t\t\t\t\tBin = ar_serialize:nonce_limiter_update_response_to_binary(Response),\n\t\t\t\t\t{ok, cowboy_req:reply(202, #{}, Bin, Req), State};\n\t\t\t\tfalse ->\n\t\t\t\t\tets:insert(computed_output, {Seed, StepNumber - NSteps + 1, StepNumber}),\n\t\t\t\t\t{ok, cowboy_req:reply(200, #{}, <<>>, Req), State}\n\t\t\tend\n\tend.\n\nget_computed_output(Seed) ->\n\tar_util:do_until(\n\t\tfun() ->\n\t\t\tcase ets:lookup(computed_output, Seed) of\n\t\t\t\t[] -> false;\n\t\t\t\t_ -> true\n\t\t\tend\n\t\tend,\n\t\t1000,\n\t\t10_000\n\t),\n\tets:lookup(computed_output, Seed).\n\nmock_reset_frequency() ->\n\t{\n\t\tar_nonce_limiter, get_reset_frequency,\n\t\tfun() ->\n\t\t\t5\n\t\tend\n\t}.\n"
  },
  {
    "path": "apps/arweave/test/ar_vdf_tests.erl",
    "content": "-module(ar_vdf_tests).\n\n-include_lib(\"eunit/include/eunit.hrl\").\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave/include/ar_vdf.hrl\").\n-include_lib(\"arweave/include/ar_pricing.hrl\").\n\n-define(ENCODED_PREV_OUTPUT, <<\"f_z7RLug8etm3SrmRf-xPwXEL0ZQ_xHng2A5emRDQBw\">>).\n-define(RESET_SEED, <<\"f_z7RLug8etm3SrmRf-xPwXEL0ZQ_xHng2A5emRDQBw\">>).\n-define(MAX_THREAD_COUNT, 4).\n\n%-define(TEST_VDF_DIFFICULTY, 15000000 div 25).\n-define(TEST_VDF_DIFFICULTY, 10).\n\n%%%===================================================================\n%%% utils\n%%%===================================================================\n\nbreak_byte(Buf, Pos)->\n\tHead = binary:part(Buf, 0, Pos),\n\tTail = binary:part(Buf, Pos+1, size(Buf)-Pos-1),\n\tChangedByte = binary:at(Buf,Pos) bxor 1,\n\t<<Head/binary, ChangedByte, Tail/binary>>.\n\nreset_mix(PrevOutput, ResetSeed) ->\n\tcrypto:hash(sha256, << PrevOutput/binary, ResetSeed/binary >>).\n\n%%%===================================================================\n\nvdf_basic_test_() ->\n\t{timeout, 1000, fun test_vdf_basic_compute_verify_/0}.\n\n% no reset\ntest_vdf_basic_compute_verify_() ->\n\tStartStepNumber1 = 2,\n\tStartStepNumber2 = 3,\n\tStartSalt1 = ar_vdf:step_number_to_salt_number(StartStepNumber1-1),\n\tStartSalt2 = ar_vdf:step_number_to_salt_number(StartStepNumber2-1),\n\tPrevOutput = ar_util:decode(?ENCODED_PREV_OUTPUT),\n\tResetSeed = ar_util:decode(?RESET_SEED),\n\n\tResetSalt = -1,\n\n\t{ok, Output1, Checkpoints1} =\n\t\t\tar_vdf:compute2(StartStepNumber1, PrevOutput, ?TEST_VDF_DIFFICULTY),\n\tassert_verify(StartSalt1, ResetSalt, PrevOutput, 1, lists:reverse(Checkpoints1)),\n\n\t{ok, _Output2, Checkpoints2} =\n\t\t\tar_vdf:compute2(StartStepNumber2, Output1, ?TEST_VDF_DIFFICULTY),\n\tassert_verify(StartSalt2, ResetSalt, Output1, 1, lists:reverse(Checkpoints2)),\n\n\tHashes = lists:reverse(Checkpoints1) ++ lists:reverse(Checkpoints2),\n\tassert_verify(StartSalt1, ResetSalt, PrevOutput, 1, Hashes),\n\n\t% test damage on any byte, arg (aka negative tests)\n\tok = test_vdf_basic_compute_verify_break_(StartSalt1, PrevOutput, 1,\n\t\tHashes, ResetSalt, ResetSeed),\n\n\tok.\n\ntest_vdf_basic_compute_verify_break_(StartSalt, PrevOutput, StepBetweenHashCount,\n\t\tHashes, ResetSalt, ResetSeed)->\n\ttest_vdf_basic_compute_verify_break_(StartSalt, PrevOutput, StepBetweenHashCount,\n\t\tHashes, ResetSalt, ResetSeed,\n\t\tsize(iolist_to_binary(Hashes))-1).\n\ntest_vdf_basic_compute_verify_break_(_StartSalt, _PrevOutput, _StepBetweenHashCount,\n\t\t_Hashes, _ResetSalt, _ResetSeed, 0)->\n\tok;\n\ntest_vdf_basic_compute_verify_break_(StartSalt, PrevOutput, StepBetweenHashCount,\n\t\tHashes, ResetSalt, ResetSeed, BreakPos)->\n\tBufferHash = iolist_to_binary(Hashes),\n\tBufferHashBroken = break_byte(BufferHash, BreakPos),\n\tHashesBroken = ar_vdf:checkpoint_buffer_to_checkpoints(BufferHashBroken),\n\tfalse = ar_vdf:verify(StartSalt, PrevOutput,\n\t\tStepBetweenHashCount, HashesBroken, ResetSalt, ResetSeed,\n\t\t?MAX_THREAD_COUNT, ?TEST_VDF_DIFFICULTY),\n\ttest_vdf_basic_compute_verify_break_(StartSalt, PrevOutput, StepBetweenHashCount,\n\t\tHashes, ResetSalt, ResetSeed, BreakPos-1).\n\nassert_verify(StartSalt, ResetSalt, Output, NumCheckpointsBetweenHashes, Checkpoints) ->\n\tResetSeed = ar_util:decode(?RESET_SEED),\n\t?assertEqual(\n\t\t{true, iolist_to_binary(Checkpoints)},\n\t\tar_vdf:verify(\n\t\t\tStartSalt, Output, NumCheckpointsBetweenHashes, Checkpoints, ResetSalt, ResetSeed,\n\t\t\t?MAX_THREAD_COUNT, 
?TEST_VDF_DIFFICULTY)\n\t).\n\nvdf_reset_test_() ->\n\t{timeout, 1000, fun test_vdf_reset_verify_/0}.\n\ntest_vdf_reset_verify_() ->\n\tok = test_vdf_reset_0_(),\n\tok = test_vdf_reset_1_(),\n\tok = test_vdf_reset_mid_checkpoint_(),\n\tok.\n\ntest_vdf_reset_0_() ->\n\tStartStepNumber1 = 2,\n\tStartStepNumber2 = 3,\n\tStartSalt1 = ar_vdf:step_number_to_salt_number(StartStepNumber1-1),\n\tStartSalt2 = ar_vdf:step_number_to_salt_number(StartStepNumber2-1),\n\tPrevOutput = ar_util:decode(?ENCODED_PREV_OUTPUT),\n\tResetSeed = ar_util:decode(?RESET_SEED),\n\n\tResetSalt = StartSalt1,\n\n\tMixOutput = reset_mix(PrevOutput, ResetSeed),\n\t{ok, Output1, Checkpoints1} = ar_vdf:compute2(StartStepNumber1, MixOutput, ?TEST_VDF_DIFFICULTY),\n\t{ok, _Output2, Checkpoints2} = ar_vdf:compute2(StartStepNumber2, Output1, ?TEST_VDF_DIFFICULTY),\n\n\t% partial verify should work\n\tassert_verify(StartSalt1, ResetSalt, PrevOutput, 1, lists:reverse(Checkpoints1)),\n\tassert_verify(StartSalt2, ResetSalt, Output1, 1, lists:reverse(Checkpoints2)),\n\n\tHashes3 = lists:sublist(lists:reverse(Checkpoints2), 1, ?VDF_CHECKPOINT_COUNT_IN_STEP + 1),\n\tassert_verify(StartSalt1, ResetSalt, PrevOutput, 1, lists:reverse(Checkpoints1) ++ Hashes3),\n\n\tHashes4 = lists:reverse(Checkpoints1) ++ lists:reverse(Checkpoints2),\n\tassert_verify(StartSalt1, ResetSalt, PrevOutput, 1, Hashes4),\n\n\tok.\n\ntest_vdf_reset_1_() ->\n\tStartStepNumber1 = 2,\n\tStartStepNumber2 = 3,\n\tStartSalt1 = ar_vdf:step_number_to_salt_number(StartStepNumber1-1),\n\tStartSalt2 = ar_vdf:step_number_to_salt_number(StartStepNumber2-1),\n\tPrevOutput = ar_util:decode(?ENCODED_PREV_OUTPUT),\n\tResetSeed = ar_util:decode(?RESET_SEED),\n\n\tResetSalt = StartSalt2,\n\n\t{ok, Output1, Checkpoints1} = ar_vdf:compute2(StartStepNumber1, PrevOutput, ?TEST_VDF_DIFFICULTY),\n\tMixOutput = reset_mix(Output1, ResetSeed),\n\t{ok, _Output2, Checkpoints2} = ar_vdf:compute2(StartStepNumber2, MixOutput, ?TEST_VDF_DIFFICULTY),\n\n\t% partial verify should work\n\tassert_verify(StartSalt1, ResetSalt, PrevOutput, 1, lists:reverse(Checkpoints1)),\n\tassert_verify(StartSalt2, ResetSalt, Output1, 1, lists:reverse(Checkpoints2)),\n\n\tHash1 = lists:last(Checkpoints2),\n\tHash2 = lists:nth(length(Checkpoints2) - 1, Checkpoints2),\n\tassert_verify(StartSalt1, ResetSalt, PrevOutput, 1, lists:reverse([Hash1 | Checkpoints1])),\n\n\tassert_verify(StartSalt1, ResetSalt, PrevOutput, 1, lists:reverse([Hash2, Hash1 | Checkpoints1])),\n\n\tHashes5 = lists:reverse(Checkpoints1) ++ lists:reverse(Checkpoints2),\n\tassert_verify(StartSalt1, ResetSalt, PrevOutput, 1, Hashes5),\n\n\tok.\n\ntest_vdf_reset_mid_checkpoint_() ->\n\tStartStepNumber1 = 2,\n\tStartStepNumber2 = 3,\n\tStartSalt1 = ar_vdf:step_number_to_salt_number(StartStepNumber1-1),\n\tStartSalt2 = ar_vdf:step_number_to_salt_number(StartStepNumber2-1),\n\tPrevOutput = ar_util:decode(?ENCODED_PREV_OUTPUT),\n\tResetSeed = ar_util:decode(?RESET_SEED),\n\n\t% means inside 1 iteration\n\tResetSaltFlat = 10,\n\tResetSalt = StartSalt1 + ResetSaltFlat,\n\n\tSalt1 = << StartSalt1:256 >>,\n\t{ok, Output1Part1, LastStepCheckpoints1Part1} =\n\t\tar_vdf_nif:vdf_sha2_nif(Salt1, PrevOutput, ResetSaltFlat-1, 0, ?TEST_VDF_DIFFICULTY),\n\tMixOutput = reset_mix(Output1Part1, ResetSeed),\n\n\tSalt2 = << ResetSalt:256 >>,\n\t{ok, Output1Part2, LastStepCheckpoints1Part2} =\n\t\tar_vdf_nif:vdf_sha2_nif(Salt2, MixOutput, ?VDF_CHECKPOINT_COUNT_IN_STEP-ResetSaltFlat-1, 0, ?TEST_VDF_DIFFICULTY),\n\tOutput1 = Output1Part2,\n\tLastStepCheckpoints1 = 
<<\n\t\tLastStepCheckpoints1Part1/binary, Output1Part1/binary,\n\t\tLastStepCheckpoints1Part2/binary, Output1Part2/binary\n\t>>,\n\tCheckpoints1 = ar_vdf:checkpoint_buffer_to_checkpoints(LastStepCheckpoints1),\n\n\t{ok, _Output2, Checkpoints2} = ar_vdf:compute2(StartStepNumber2, Output1, ?TEST_VDF_DIFFICULTY),\n\n\t% partial verify should work\n\tassert_verify(StartSalt1, ResetSalt, PrevOutput, 1, lists:reverse(Checkpoints1)),\n\tassert_verify(StartSalt2, ResetSalt, Output1, 1, lists:reverse(Checkpoints2)),\n\n\tHash1 = lists:last(Checkpoints2),\n\tHash2 = lists:nth(length(Checkpoints2) - 1, Checkpoints2),\n\tassert_verify(StartSalt1, ResetSalt, PrevOutput, 1, lists:reverse([Hash1 | Checkpoints1])),\n\n\tassert_verify(StartSalt1, ResetSalt, PrevOutput, 1, lists:reverse([Hash2, Hash1 | Checkpoints1])),\n\n\tHashes5 = lists:reverse(Checkpoints1) ++ lists:reverse(Checkpoints2),\n\tassert_verify(StartSalt1, ResetSalt, PrevOutput, 1, Hashes5),\n\n\t% test vdf_fused\n\t{ok, Output1Part1, LastStepCheckpoints1Part1} =\n\t\tar_vdf_nif:vdf_sha2_fused_nif(Salt1, PrevOutput, ResetSaltFlat-1, 0, ?TEST_VDF_DIFFICULTY),\n\t{ok, Output1Part2, LastStepCheckpoints1Part2} =\n\t\tar_vdf_nif:vdf_sha2_fused_nif(Salt2, MixOutput, ?VDF_CHECKPOINT_COUNT_IN_STEP-ResetSaltFlat-1, 0, ?TEST_VDF_DIFFICULTY),\n\n\t% test vdf_hiopt\n\t{ok, Output1Part1, LastStepCheckpoints1Part1} =\n\t\tar_vdf_nif:vdf_sha2_hiopt_nif(Salt1, PrevOutput, ResetSaltFlat-1, 0, ?TEST_VDF_DIFFICULTY),\n\t{ok, Output1Part2, LastStepCheckpoints1Part2} =\n\t\tar_vdf_nif:vdf_sha2_hiopt_nif(Salt2, MixOutput, ?VDF_CHECKPOINT_COUNT_IN_STEP-ResetSaltFlat-1, 0, ?TEST_VDF_DIFFICULTY),\n\tok.\n\ncompute_next_vdf_difficulty_test_block() ->\n\tHeight1 = max(ar_block_time_history:history_length(), ?REWARD_HISTORY_BLOCKS),\n\tHeight2 = Height1 + ?VDF_DIFFICULTY_RETARGET - Height1 rem ?VDF_DIFFICULTY_RETARGET,\n\t#block{\n\t\theight = Height2-1,\n\t\tnonce_limiter_info = #nonce_limiter_info{\n\t\t\tvdf_difficulty = 10000,\n\t\t\tnext_vdf_difficulty = 10000\n\t\t},\n\t\treward_history = lists:duplicate(?REWARD_HISTORY_BLOCKS, {<<>>, 10000, 10, 1}),\n\t\tblock_time_history = lists:duplicate(ar_block_time_history:history_length(), {129, 135, 1}),\n\t\tprice_per_gib_minute = 10000,\n\t\tscheduled_price_per_gib_minute = 15000\n\t}.\n\ncompute_next_vdf_difficulty_2_7_test_()->\n\tar_test_node:test_with_mocked_functions(\n\t\t[{ar_fork, height_2_6, fun() -> -1 end},\n\t\t{ar_fork, height_2_7, fun() -> -1 end},\n\t\t{ar_fork, height_2_7_1, fun() -> infinity end}],\n\t\tfun() ->\n\t\t\tB = compute_next_vdf_difficulty_test_block(),\n\t\t\t10465 = ar_block:compute_next_vdf_difficulty(B),\n\t\t\tok\n\t\tend).\n\ncompute_next_vdf_difficulty_2_7_1_test_()->\n\tar_test_node:test_with_mocked_functions(\n\t\t[{ar_fork, height_2_6, fun() -> -1 end},\n\t\t{ar_fork, height_2_7, fun() -> -1 end},\n\t\t{ar_fork, height_2_7_1, fun() -> -1 end}],\n\t\tfun() ->\n\t\t\tB = compute_next_vdf_difficulty_test_block(),\n\t\t\t10046 = ar_block:compute_next_vdf_difficulty(B),\n\t\t\tok\n\t\tend).\n"
  },
  {
    "path": "apps/arweave/test/ar_wallet_tests.erl",
    "content": "-module(ar_wallet_tests).\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\nwallet_sign_verify_test_() ->\n\t{timeout, 30, fun test_wallet_sign_verify/0}.\n\ntest_wallet_sign_verify() ->\n\tTestWalletSignVerify = fun(KeyTypeEnc) ->\n\t\tfun() ->\n\t\t\tKeyType = ar_serialize:binary_to_signature_type(KeyTypeEnc),\n\t\t\t{Priv, Pub} = ar_wallet:new(KeyType),\n\t\t\tTestData = <<\"TEST DATA\">>,\n\t\t\tSignature = ar_wallet:sign(Priv, TestData),\n\t\t\ttrue = ar_wallet:verify(Pub, TestData, Signature)\n\t\tend\n\tend,\n\t[\n\t\t{\"PS256_65537\", TestWalletSignVerify(<<\"PS256_65537\">>)},\n\t\t{\"ES256K\", TestWalletSignVerify(<<\"ES256K\">>)},\n\t\t{\"Ed25519\", TestWalletSignVerify(<<\"Ed25519\">>)}\n\t].\n\ninvalid_signature_test_() ->\n    TestInvalidSignature = fun(KeyTypeEnc) ->\n        fun() ->\n\t\t\tKeyType = ar_serialize:binary_to_signature_type(KeyTypeEnc),\n\t\t\t{Priv, Pub} = ar_wallet:new(KeyType),\n           \tTestData = <<\"TEST DATA\">>,\n\t\t\t<< _:32, Signature/binary >> = ar_wallet:sign(Priv, TestData),\n\t\t\tfalse = ar_wallet:verify(Pub, TestData, << 0:32, Signature/binary >>)\n        end\n    end,\n    [\n        {\"PS256_65537\", TestInvalidSignature(<<\"PS256_65537\">>)},\n        {\"ES256K\", TestInvalidSignature(<<\"ES256K\">>)},\n\t\t{\"Ed25519\", TestInvalidSignature(<<\"Ed25519\">>)}\n    ].\n\n%% @doc Check generated keyfiles can be retrieved.\ngenerate_keyfile_test_() ->\n\tGenerateKeyFile = fun(KeyTypeEnc) ->\n\t\tfun() ->\n\t\t\tKeyType = ar_serialize:binary_to_signature_type(KeyTypeEnc),\n\t\t\t{Priv, Pub} = ar_wallet:new_keyfile(KeyType),\n\t\t\tFileName = ar_wallet:wallet_filepath(ar_util:encode(ar_wallet:to_address(Pub))),\n\t\t\t{Priv, Pub} = ar_wallet:load_keyfile(FileName)\n\t\tend\n\tend,\n\t[\n\t\t{\"PS256_65537\", GenerateKeyFile(<<\"PS256_65537\">>)},\n\t\t{\"ES256K\", GenerateKeyFile(<<\"ES256K\">>)},\n\t\t{\"Ed25519\", GenerateKeyFile(<<\"Ed25519\">>)}\n\t].\n\nload_keyfile_test_() ->\n    TestLoadKeyfile = fun(KeyTypeEnc) ->\n        fun() ->\n            {Priv, Pub = {KeyType, _}} = ar_wallet:load_keyfile(wallet_fixture_path(KeyTypeEnc)),\n            KeyType = ar_serialize:binary_to_signature_type(KeyTypeEnc),\n            TestData = <<\"TEST DATA\">>,\n            Signature = ar_wallet:sign(Priv, TestData),\n            true = ar_wallet:verify(Pub, TestData, Signature)\n        end\n    end,\n    [\n        {\"PS256_65537\", TestLoadKeyfile(<<\"PS256_65537\">>)},\n        {\"ES256K\", TestLoadKeyfile(<<\"ES256K\">>)},\n        {\"Ed25519\", TestLoadKeyfile(<<\"Ed25519\">>)}\n    ].\n\nwallet_fixture_path(KeyTypeEnc) ->\n\t{ok, Cwd} = file:get_cwd(),\n\tfilename:join(Cwd, \"./apps/arweave/test/ar_wallet_tests_\" ++ binary_to_list(KeyTypeEnc) ++ \"_fixture.json\").\n"
  },
  {
    "path": "apps/arweave/test/ar_wallet_tests_ES256K_fixture.json",
    "content": "{\n    \"kty\":\"EC\",\n    \"crv\":\"secp256k1\",\n    \"x\":\"dWCvM4fTdeM0KmloF57zxtBPXTOythHPMm1HCLrdd3A\",\n    \"y\":\"36uMVGM7hnw-N6GnjFcihWE3SkrhMLzzLCdPMXPEXlA\",\n    \"d\":\"rhYFsBPF9q3-uZThy7B3c4LDF_8wnozFUAEm5LLC4Zw\"\n}"
  },
  {
    "path": "apps/arweave/test/ar_wallet_tests_Ed25519_fixture.json",
    "content": "{\n    \"kty\":\"OKP\",\n    \"alg\": \"EdDSA\",\n    \"crv\":\"Ed25519\",\n    \"x\":\"11qYAYKxCrfVS_7TyWQHOg7hcvPapiMlrwIaaPcHURo\",\n    \"d\":\"nWGxne_9WmC6hEr0kuwsxERJxWl7MmkZcDusAxyuf2A\"\n}"
  },
  {
    "path": "apps/arweave/test/ar_wallet_tests_PS256_65537_fixture.json",
    "content": "{\n    \"kty\":\"RSA\",\n    \"e\":\"AQAB\",\n    \"n\":\"kmM4O08BJB85RbxfQ2nkka9VNO6Czm2Tc_IGQNYCTSXRzOc6W9bHRrlZ_eDhWO0OdfaRalgLeuYCXx9DV-n1djeerKHdFo2ZAjRv5WjL_b4IxbQnPFnHOSNHVg49yp7CUWUgDQOKtylt3x0YENIW37RQPZJ-Fvyk7Z0jvibj2iZ0K3K8yNenJ4mWswyQdyPaJcbP6AMWvUWT62giWHa3lDgBZNhXqakkYdoaM157kRUfrZDRSWXbilr-4f40PQF1DV5YSj81Fl72N7j30r0vL1yoj0bZn74WRquQ5j3QsiAA-SzhAxpecWniljj1wvZlyIgJpCYCvCrKZCcCq_JW1nYP6to5YM3fAqcYRadbTNdQ3oH0Sjy8vyvLYNe48Ur_TFTTAwZxJV70BgZfkJ00BxiNTb8EhSchejabeExUkCNlOrQsCHDxOig-WXOrjX5fb4NeR3jedeYWbhN922ORLuEwVLeyjc7hBfQXU2-mYraFAVTc0QST201P7rRu-UGtZ4gRavFuOvAyYrMimFVW9dTwTrcYXFK2zKCEv2aRRQAHZanKjBv0Xq9m3BqvxKy-_3Cj1O6ft7FT21drPoDRDzfnkyOeUjlXzRJzn-iQ0nqgHAQr9WBWPzLEcaTFpw3KmwDYHW_6JOkUWDyMW9anuS8cyqt_2O29SK_rHHuucD8\",\n    \"d\":\"Bq6C13vknF6Ln1MrKI3Ilq-83IuSvQpe7NRAuT69u7i8sv4XwsHOJAV7qpGvp37NXT5R1G3ehEZ6qoSxJbcN4IVrQMKq5mMiCY6DBv5C6fHZGoNZE2gxXV7uydf8I1Vnnw4xYIj5oyC_5nSJlFAc3U-MAcbkfJuvrhGxLVGsrqHmjoqQPGG_hTxCjuAOlOBs-9cmWTujbm1-OyjaAQwfTbXYbUy7hC1TCE05SxLPmTUwaJxY8AXJigpbYqjpWsc15HjRlv38A44tEnIwHjHda_3JpmbSsffSslRej2vPCCgSPHHyLeO437Nc7DraogKStugisRfhoe89yY4QSBVXbtvWJeF1LxPtg8uPtfoKt3wdnGWKaLDqYNDeA3AckbKrPp50kHEMR7hnNHq3lAoMAXTz8BbI_Czo5n9-f9DQpvJC8kpM7gCGG8DptA2nTPuQG02MOx7AsEE99EN8ltD_dA0l0MgG7CDsaQC5IPMHcRs1wyvZMBGA8fvZdURiVv9YSnCddndXjBJuetf5KdES-1EmrSLzo5hobQbkc7dkHMS5dmLm5YtK-aLYXZi31nRIGkA1UfZhf2TtfRxP6uKlRT106EtDX1rT3RgsLqg06y_xoS4SFQ6u-8wHqgbIHBmKdsWVtBkC4SGUlYDPgrJe2V9CaPFAcoSDFK1D_IPvU2U\",\n    \"p\":\"zhyauK0ISMg9Wk7iZK2ifW2cj5KSr-_k_Em0nfUrtsaKp0iXOsCKxOH__zcAVj7oLxaEP2l8i2Pdi7CzhVRiqrjgVwA1JuLPgxtryuVqwRCYbO_Y_2Xutk404iKmDX6_LQ7BeIzUI8GD6rQCeLq1HBd3Yvok9bPvbbMZjFtUmBf_Kfb0cYP8tewMmV_USGpqwJXdB_4aHF6qBrZBtd1KLoO1E7MNkAPk7pbiA1-KO2Xa6oY6fy2pztNe1MO7tz_QywqlDdymfhnpk41arY3A6US-ZFXOinqXKdh9uEfxiyZwzaLMNWVKEaWxRxbqSUOLV3uZS05N6B2ZqvOp2h9Csw\",\n    \"q\":\"tdHmYcbJpZ0U27d_j0YvUeb6sdcuFLg2vmDgKwUamC5_vvV41LSm8LIkLuY2DAN5MKg6-HTWetKWQhgbCIbubLtX5164MFrES1YVZI-aggrYohhH8MRn_hwMZZQndv9H07WUVgQ1GZ2ZDvhO7XxPDIXyBNQ46x6V1AikHtyTmqARjgrkgs-1XN55S9rhcffixOlJ-egIDPVei_Z6YNdSpLlhtiqHOp_lX37mrPSYGxjgIZVxevpPgBhVFlnAMqC2iRd87XupmWgiluSos8I7i1VESBzwlFZGk5hRb8och4zwmDBDwx65XWngg6LneSXTWcKjKKGM2NnX7wHrZBuyRQ\",\n    \"dp\":\"Gfo49fW5CZNTSEKQ_id0R2K9TMsoecw-jB2uCgqQi-TSLOtVRC5oTxA896my_SvIj8bCvEtLSzY3AhgvSCqulN3gSJbaHCCSDvAx0czAe7zfuTsxml76izeoKqg7TZAgAEnP0KXPRwJo4ff2J8lAcl3yyiLE7cLT9nuQSMRqERFVM7DQdk4wV618mQge9VGUStmYlh1MpS65N0dZWNafNuWauPTkTLZw8DFMIyizf3EC-nQYg1b6A_tYBHD3A82jPzQEQY8B3PrfGZ3DRASNv9jONk8qTQHOc5O5pLRMmUErDn_qRQCTKU483bzhooJE2a3WUEt6Pjsc1xMG4Vr3SQ\",\n    \"dq\":\"cCVai36Yi-06m1cwd8fbkhH9GUpXIvKI2Z5ZRk-smqc7piY0dEZFHftS9BaMyZYu3wM09GDklfdkNLo3mmfXkftv-cbjpvelUa50HYWx0HouKrT9UpVia0sTnmfme7BztjKunuuTcQxTBvfDfxoIi_nmUHIx9Vv1IEaALITzChGnIky3q7O_8ttKR65nFevG1JvsRBeJN6z0tzG9RBQr5mxtx3Wt2Uwcp21XjOCFHVmXjT9nMmpINQNNIC8VrGSSkjaJmNWIw5WGmDnLkKzCG2vpZO1suqIIgCsYN_Ka7ETTdZt3gFdoECUpFSiay4-4MAospvgWLv8XAFXXwfSPXQ\",\n    \"qi\":\"n-R81MpbwfWfqRSVgD8nDk7D8zlJ-tpMaojfTwNNqDt34Cr-BpMjxaQyEfMnzOd2dY4OV0rKhd29DIuwFEb2UERHdVWF3gM8f2byYGj4357CRkiwq6I050bUxd1ODgAXjVGNpOK_fmaNHDWfe5v3wVIcCmwH0mJxEu9kuz7fr9TJNxGJBGUphpGS6NQZDCbDXg9-FPafMeNV-Jdo0NQaKMwm8uZyW7YGSNpUXYnksrWt4Fa-B9H2KoC4PPSWESPxNooXdxK7Y0J1KbzNyrUmOl4dT6p_oFKcU-1unuDCZ11e6EmMKyUGjpDzTIAZ2XxmyWUJ06yzEw7oLo8noiCE_Q\"\n}"
  },
  {
    "path": "apps/arweave/test/ar_webhook_tests.erl",
    "content": "-module(ar_webhook_tests).\n\n-export([init/2]).\n\n-include_lib(\"eunit/include/eunit.hrl\").\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n-import(ar_test_node, [\n\t\twait_until_height/2, read_block_when_stored/1]).\n\ninit(Req, State) ->\n\tSplitPath = ar_http_iface_server:split_path(cowboy_req:path(Req)),\n\thandle(SplitPath, Req, State).\n\nhandle([<<\"tx\">>], Req, State) ->\n\t{ok, Reply, _} = cowboy_req:read_body(Req),\n\tJSON = jiffy:decode(Reply, [return_maps]),\n\tTX = maps:get(<<\"transaction\">>, JSON),\n\tets:insert(?MODULE, {{tx, maps:get(<<\"id\">>, TX)}, TX}),\n\t{ok, cowboy_req:reply(200, #{}, <<>>, Req), State};\n\nhandle([<<\"block\">>], Req, State) ->\n\t{ok, Reply, _} = cowboy_req:read_body(Req),\n\tJSON = jiffy:decode(Reply, [return_maps]),\n\tB = maps:get(<<\"block\">>, JSON),\n\tets:insert(?MODULE, {{block, maps:get(<<\"height\">>, B)}, B}),\n\t{ok, cowboy_req:reply(200, #{}, <<>>, Req), State};\n\nhandle([<<\"txdata\">>], Req, State) ->\n\t{ok, Reply, _} = cowboy_req:read_body(Req),\n\tJSON = jiffy:decode(Reply, [return_maps]),\n\tets:insert(?MODULE, {{tx_data_payload, maps:get(<<\"txid\">>, JSON)}, JSON}),\n\t{ok, cowboy_req:reply(200, #{}, <<>>, Req), State};\n\nhandle([<<\"solution\">>], Req, State) ->\n\t{ok, Reply, _} = cowboy_req:read_body(Req),\n\tJSON = jiffy:decode(Reply, [return_maps]),\n\tcase maps:get(<<\"event\">>, JSON, not_found) of\n\t\t<<\"solution_accepted\">> ->\n\t\t\tets:update_counter(?MODULE, accepted_solutions, {2, 1}, {accepted_solutions, 0});\n\t\t_ ->\n\t\t\tok\n\tend,\n\t{ok, cowboy_req:reply(200, #{}, <<>>, Req), State}.\n\nwebhooks_test_() ->\n\t{timeout, 120, fun test_webhooks/0}.\n\ntest_webhooks() ->\n\t{_, Pub} = Wallet = ar_wallet:new(),\n\t[B0] = ar_weave:init([{ar_wallet:to_address(Pub), ?AR(10000), <<>>}]),\n\t{ok, Config} = arweave_config:get_env(),\n\ttry\n\t\tPort = ar_test_node:get_unused_port(),\n\t\tPortBinary = integer_to_binary(Port),\n\t\tTXBlacklistFilename = random_tx_blacklist_filename(),\n\t\tAddr = ar_wallet:to_address(ar_wallet:new_keyfile()),\n\t\tConfig2 = Config#config{\n\t\t\twebhooks = [\n\t\t\t\t#config_webhook{\n\t\t\t\t\turl = <<\"http://127.0.0.1:\", PortBinary/binary, \"/tx\">>,\n\t\t\t\t\tevents = [transaction]\n\t\t\t\t},\n\t\t\t\t#config_webhook{\n\t\t\t\t\turl = <<\"http://127.0.0.1:\", PortBinary/binary, \"/block\">>,\n\t\t\t\t\tevents = [block]\n\t\t\t\t},\n\t\t\t\t#config_webhook{\n\t\t\t\t\turl = <<\"http://127.0.0.1:\", PortBinary/binary, \"/txdata\">>,\n\t\t\t\t\tevents = [transaction_data]\n\t\t\t\t},\n\t\t\t\t#config_webhook{\n\t\t\t\t\turl = <<\"http://127.0.0.1:\", PortBinary/binary, \"/solution\">>,\n\t\t\t\t\tevents = [solution]\n\t\t\t\t}\n\t\t\t],\n\t\t\ttransaction_blacklist_files = [TXBlacklistFilename]\n\t\t},\n\t\tar_test_node:start(#{ b0 => B0, addr => Addr, config => Config2,\n\t\t\t\t%% Replica 2.9 modules do not support updates.\n\t\t\t\tstorage_modules =>[{10 * ?MiB, 0, {composite, Addr, 1}}] }),\n\t\t%% Setup a server that would be listening for the webhooks and registering\n\t\t%% them in the ETS table.\n\t\tets:new(?MODULE, [named_table, set, public]),\n\t\tRoutes = [{\"/[...]\", ar_webhook_tests, []}],\n\t\tcowboy:start_clear(\n\t\t\tar_webhook_test_listener,\n\t\t\t[{port, Port}],\n\t\t\t#{ env => #{ dispatch => cowboy_router:compile([{'_', Routes}]) } }\n\t\t),\n\t\t{V2TX, Proofs} = create_v2_tx(Wallet),\n\t\tTXs =\n\t\t\tlists:map(\n\t\t\t\tfun(Height) ->\n\t\t\t\t\tSignedTX 
=\n\t\t\t\t\t\tcase Height rem 2 == 1 of\n\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\tData = crypto:strong_rand_bytes(262144 * 2 + 10),\n\t\t\t\t\t\t\t\tar_test_node:sign_v1_tx(main, Wallet, #{ data => Data });\n\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\tcase Height == 2 of\n\t\t\t\t\t\t\t\t\ttrue ->\n\t\t\t\t\t\t\t\t\t\tV2TX;\n\t\t\t\t\t\t\t\t\tfalse ->\n\t\t\t\t\t\t\t\t\t\tar_test_node:sign_tx(main, Wallet, #{})\n\t\t\t\t\t\t\t\tend\n\t\t\t\t\t\tend,\n\t\t\t\t\tar_test_node:assert_post_tx_to_peer(main, SignedTX),\n\t\t\t\t\tar_test_node:mine(),\n\t\t\t\t\twait_until_height(main, Height),\n\t\t\t\t\t[{_, AcceptedSolutionCount}] = ets:lookup(?MODULE, accepted_solutions),\n\t\t\t\t\t?assert(AcceptedSolutionCount >= Height),\n\t\t\t\t\tSignedTX\n\t\t\t\tend,\n\t\t\t\tlists:seq(1, 10)\n\t\t\t),\n\t\tUnconfirmedTX = ar_test_node:sign_tx(main, Wallet, #{}),\n\t\tar_test_node:assert_post_tx_to_peer(main, UnconfirmedTX),\n\t\tlists:foreach(\n\t\t\tfun(Height) ->\n\t\t\t\tTX = lists:nth(Height, TXs),\n\t\t\t\ttrue = ar_util:do_until(\n\t\t\t\t\tfun() ->\n\t\t\t\t\t\tcase ets:lookup(?MODULE, {block, Height}) of\n\t\t\t\t\t\t\t[{_, B}] ->\n\t\t\t\t\t\t\t\t{H, _, _} = ar_node:get_block_index_entry(Height),\n\t\t\t\t\t\t\t\tB2 = read_block_when_stored(H),\n\t\t\t\t\t\t\t\tStruct = ar_serialize:block_to_json_struct(B2),\n\t\t\t\t\t\t\t\tExpected =\n\t\t\t\t\t\t\t\t\tmaps:remove(\n\t\t\t\t\t\t\t\t\t\t<<\"wallet_list\">>,\n\t\t\t\t\t\t\t\t\t\tjiffy:decode(ar_serialize:jsonify(Struct), [return_maps])\n\t\t\t\t\t\t\t\t\t),\n\t\t\t\t\t\t\t\t?assertEqual(Expected, B),\n\t\t\t\t\t\t\t\ttrue;\t\n\t\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t\tfalse\n\t\t\t\t\t\tend\n\t\t\t\t\tend,\n\t\t\t\t\t200,\n\t\t\t\t\t10000\n\t\t\t\t),\n\t\t\t\ttrue = ar_util:do_until(\n\t\t\t\t\tfun() ->\n\t\t\t\t\t\tcase ets:lookup(?MODULE, {tx, ar_util:encode(TX#tx.id)}) of\n\t\t\t\t\t\t\t[{_, TX2}] ->\n\t\t\t\t\t\t\t\tStruct = ar_serialize:tx_to_json_struct(TX),\n\t\t\t\t\t\t\t\tExpected =\n\t\t\t\t\t\t\t\t\tmaps:remove(\n\t\t\t\t\t\t\t\t\t\t<<\"data\">>,\n\t\t\t\t\t\t\t\t\t\tjiffy:decode(ar_serialize:jsonify(Struct), [return_maps])\n\t\t\t\t\t\t\t\t\t),\n\t\t\t\t\t\t\t\t?assertEqual(Expected, TX2),\n\t\t\t\t\t\t\t\ttrue;\n\t\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t\tfalse\n\t\t\t\t\t\tend\n\t\t\t\t\tend,\n\t\t\t\t\t200,\n\t\t\t\t\t10000\n\t\t\t\t),\n\t\t\t\tcase Height < 8 andalso Height rem 2 == 1 of\n\t\t\t\t\tfalse ->\n\t\t\t\t\t\t%% Do not expect events about data from the latest blocks because it\n\t\t\t\t\t\t%% stays in the disk pool.\n\t\t\t\t\t\tok;\n\t\t\t\t\ttrue ->\n\t\t\t\t\t\tassert_transaction_data_synced(TX#tx.id)\n\t\t\t\tend\n\t\t\tend,\n\t\t\tlists:seq(1, 10)\n\t\t),\n\t\ttrue = ar_util:do_until(\n\t\t\tfun() ->\n\t\t\t\tcase ets:lookup(?MODULE, {tx, ar_util:encode(UnconfirmedTX#tx.id)}) of\n\t\t\t\t\t[{_, TX}] ->\n\t\t\t\t\t\tStruct = ar_serialize:tx_to_json_struct(UnconfirmedTX),\n\t\t\t\t\t\tExpected =\n\t\t\t\t\t\t\tmaps:remove(\n\t\t\t\t\t\t\t\t<<\"data\">>,\n\t\t\t\t\t\t\t\tjiffy:decode(ar_serialize:jsonify(Struct), [return_maps])\n\t\t\t\t\t\t\t),\n\t\t\t\t\t\t?assertEqual(Expected, TX),\n\t\t\t\t\t\ttrue;\n\t\t\t\t\t_ ->\n\t\t\t\t\t\tfalse\n\t\t\t\tend\n\t\t\tend,\n\t\t\t200,\n\t\t\t2000\n\t\t),\n\t\tV2TXID = (V2TX)#tx.id,\n\t\tupload_chunks(Proofs),\n\t\tassert_transaction_data_synced(V2TXID),\n\t\tFirstTXID = (hd(TXs))#tx.id,\n\t\tappend_txid_to_file(FirstTXID, TXBlacklistFilename),\n\t\tassert_transaction_data_removed(FirstTXID),\n\t\tSecondTXID = (lists:nth(3, TXs))#tx.id, % The second v1 transaction with 
data.\n\t\tappend_second_chunk_to_file(SecondTXID, TXBlacklistFilename),\n\t\tassert_transaction_data_removed(SecondTXID),\n\t\tappend_second_chunk_to_file(V2TXID, TXBlacklistFilename),\n\t\tassert_transaction_data_removed(V2TXID),\n\t\tempty_file(TXBlacklistFilename),\n\t\t%% Wait until the new blacklisting policy (=no blacklisting) takes effect.\n\t\ttimer:sleep(3000),\n\t\tupload_chunks(Proofs),\n\t\tassert_transaction_data_synced(V2TXID),\n\t\tcowboy:stop_listener(ar_webhook_test_listener)\n\tafter\n\t\tarweave_config:set_env(Config#config{ webhooks = [] })\n\tend.\n\ncreate_v2_tx(Wallet) ->\n\tDataSize = 3 * ?DATA_CHUNK_SIZE + 11,\n\tChunks = ar_tx:chunk_binary(?DATA_CHUNK_SIZE, crypto:strong_rand_bytes(DataSize)),\n\tSizeTaggedChunks = ar_tx:chunks_to_size_tagged_chunks(Chunks),\n\tSizedChunkIDs = ar_tx:sized_chunks_to_sized_chunk_ids(SizeTaggedChunks),\n\t{DataRoot, DataTree} = ar_merkle:generate_tree(SizedChunkIDs),\n\tTX = ar_test_node:sign_tx(main, Wallet,\n\t\t\t#{ format => 2, data_root => DataRoot, data_size => DataSize, reward => ?AR(1) }),\n\tProofs = [encode_proof(#{ data_root => DataRoot, chunk => Chunk,\n\t\t\t\tdata_path => ar_merkle:generate_path(DataRoot, Offset - 1, DataTree),\n\t\t\t\toffset => Offset - 1, data_size => DataSize })\n\t\t\t|| {Chunk, Offset} <- SizeTaggedChunks],\n\t{TX, Proofs}.\n\nencode_proof(Proof) ->\n\tar_serialize:jsonify(#{\n\t\tchunk => ar_util:encode(maps:get(chunk, Proof)),\n\t\tdata_path => ar_util:encode(maps:get(data_path, Proof)),\n\t\tdata_root => ar_util:encode(maps:get(data_root, Proof)),\n\t\tdata_size => integer_to_binary(maps:get(data_size, Proof)),\n\t\toffset => integer_to_binary(maps:get(offset, Proof))\n\t}).\n\nassert_transaction_data_synced(TXID) ->\n\tEncodedTXID = ar_util:encode(TXID),\n\ttrue = ar_util:do_until(\n\t\tfun() ->\n\t\t\tcase ets:lookup(?MODULE, {tx_data_payload, EncodedTXID}) of\n\t\t\t\t[{_, JSON}] ->\n\t\t\t\t\tmaps:get(<<\"event\">>, JSON) == <<\"transaction_data_synced\">>;\n\t\t\t\t_ ->\n\t\t\t\t\tfalse\n\t\t\tend\n\t\tend,\n\t\t1000,\n\t\t30000\n\t).\n\nupload_chunks([]) ->\n\tok;\nupload_chunks([Proof | Proofs]) ->\n\t{ok, {{<<\"200\">>, _}, _, _, _, _}} = ar_test_node:post_chunk(main, Proof),\n\tupload_chunks(Proofs).\n\nrandom_tx_blacklist_filename() ->\n\t{ok, Config} = arweave_config:get_env(),\n\tfilename:join(Config#config.data_dir,\n\t\t\"ar-webhook-tests-transaction-blacklist-\"\n\t\t++\n\t\tbinary_to_list(ar_util:encode(crypto:strong_rand_bytes(32)))).\n\nappend_txid_to_file(TXID, Filename) ->\n\t{ok, F} = file:open(Filename, [append]),\n\tok = file:write(F, io_lib:format(\"~s~n\", [ar_util:encode(TXID)])),\n\tfile:close(F).\n\nassert_transaction_data_removed(TXID) ->\n\tEncodedTXID = ar_util:encode(TXID),\n\ttrue = ar_util:do_until(\n\t\tfun() ->\n\t\t\t[{_, JSON}] = ets:lookup(?MODULE, {tx_data_payload, EncodedTXID}),\n\t\t\tmaps:get(<<\"event\">>, JSON) == <<\"transaction_data_removed\">>\n\t\tend,\n\t\t100,\n\t\t60000\n\t).\n\nappend_second_chunk_to_file(TXID, Filename) ->\n\t{ok, {EndOffset, Size}} = ar_data_sync:get_tx_offset(TXID),\n\tSecondChunkStart = EndOffset - Size + ?DATA_CHUNK_SIZE,\n\tSecondChunkEnd = SecondChunkStart + ?DATA_CHUNK_SIZE,\n\t{ok, F} = file:open(Filename, [append]),\n\tok = file:write(F, io_lib:format(\"~B,~B~n\", [SecondChunkStart, SecondChunkEnd])),\n\tfile:close(F).\n\nempty_file(Filename) ->\n\t{ok, F} = file:open(Filename, [write]),\n\tok = file:write(F, <<\" \">>),\n\tfile:close(F).\n"
  },
  {
    "path": "apps/arweave_config/README.md",
    "content": "# Arweave Configuration Application\n\nThe `arweave_config` application is in charge of Arweave\nconfiguration and contains all the modules, functions, and processes\nneeded to manage it.\n\n## Getting Started\n\n## Features\n\n## Usage\n\n## Test\n\n`arweave_config` uses\n[`eunit`](https://www.erlang.org/doc/apps/eunit) and\n[`common_test`](https://www.erlang.org/doc/apps/common_test/).\n\n```sh\n# execute the eunit test suite\nrebar3 eunit -c\n\n# execute the common_test suite\nrebar3 ct -c\n\n# check the coverage\nrebar3 cover -v\n```\n\n## FAQ\n\n## References and Resources\n"
  },
  {
    "path": "apps/arweave_config/include/arweave_config.hrl",
    "content": "-ifndef(AR_CONFIG_HRL).\n-define(AR_CONFIG_HRL, true).\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave/include/ar_verify_chunks.hrl\").\n\n-record(config_webhook, {\n\tevents = [],\n\turl = undefined,\n\theaders = []\n}).\n\n%% The polling frequency in seconds.\n-define(DEFAULT_POLLING_INTERVAL, 2).\n\n%% The number of processes periodically searching for the latest blocks.\n-define(DEFAULT_BLOCK_POLLERS, 10).\n\n%% The number of processes fetching the recent blocks and transactions on join.\n-define(DEFAULT_JOIN_WORKERS, 10).\n\n%% The number of data sync jobs to run. Each job periodically picks a range\n%% and downloads it from peers.\n-ifdef(AR_TEST).\n-define(DEFAULT_SYNC_JOBS, 10).\n-else.\n-define(DEFAULT_SYNC_JOBS, 100).\n-endif.\n\n%% The number of disk pool jobs to run. Disk pool jobs scan the disk pool to index\n%% no longer pending or orphaned chunks, pack chunks with a sufficient number of confirmations,\n%% or remove the abandoned ones.\n-define(DEFAULT_DISK_POOL_JOBS, 20).\n\n%% The number of header sync jobs to run. Each job picks the latest not synced\n%% block header and downloads it from peers.\n-define(DEFAULT_HEADER_SYNC_JOBS, 1).\n\n%% The default expiration time for a data root in the disk pool.\n-define(DEFAULT_DISK_POOL_DATA_ROOT_EXPIRATION_TIME_S, 30 * 60).\n\n%% The default size limit for unconfirmed and seeded chunks, per data root.\n-ifdef(AR_TEST).\n-define(DEFAULT_MAX_DISK_POOL_DATA_ROOT_BUFFER_MB, 50).\n-else.\n-define(DEFAULT_MAX_DISK_POOL_DATA_ROOT_BUFFER_MB, 10000).\n-endif.\n\n%% The default number of duplicate data roots checked for a posted chunk.\n-define(DEFAULT_MAX_DUPLICATE_DATA_ROOTS, 5).\n\n%% The default total size limit for unconfirmed and seeded chunks.\n-ifdef(AR_TEST).\n-define(DEFAULT_MAX_DISK_POOL_BUFFER_MB, 100).\n-else.\n-define(DEFAULT_MAX_DISK_POOL_BUFFER_MB, 100000).\n-endif.\n\n%% The default frequency of checking for the available disk space.\n-ifdef(AR_TEST).\n-define(DISK_SPACE_CHECK_FREQUENCY_MS, 1000).\n-else.\n-define(DISK_SPACE_CHECK_FREQUENCY_MS, 30 * 1000).\n-endif.\n\n-define(NUM_HASHING_PROCESSES,\n\tmax(1, (erlang:system_info(schedulers_online) - 1))).\n\n-define(MAX_PARALLEL_BLOCK_INDEX_REQUESTS, 1).\n-define(MAX_PARALLEL_GET_CHUNK_REQUESTS, 100).\n-define(MAX_PARALLEL_GET_AND_PACK_CHUNK_REQUESTS, 1).\n-define(MAX_PARALLEL_GET_TX_DATA_REQUESTS, 1).\n-define(MAX_PARALLEL_WALLET_LIST_REQUESTS, 1).\n-define(MAX_PARALLEL_POST_CHUNK_REQUESTS, 100).\n-define(MAX_PARALLEL_GET_SYNC_RECORD_REQUESTS, 10).\n-define(MAX_PARALLEL_REWARD_HISTORY_REQUESTS, 1).\n-define(MAX_PARALLEL_GET_TX_REQUESTS, 20).\n-define(MAX_PARALLEL_GET_DATA_ROOTS_REQUESTS, 1).\n\n%% The number of parallel tx validation processes.\n-define(MAX_PARALLEL_POST_TX_REQUESTS, 20).\n%% The time in seconds to wait for the available tx validation process before dropping the\n%% POST /tx request.\n-define(DEFAULT_POST_TX_TIMEOUT, 20).\n\n%% The default value for the maximum number of threads used for nonce limiter chain\n%% validation.\n-define(DEFAULT_MAX_NONCE_LIMITER_VALIDATION_THREAD_COUNT,\n\t\tmax(1, (erlang:system_info(schedulers_online) div 2))).\n\n%% The default value for the maximum number of threads used for nonce limiter chain\n%% last step validation.\n-define(DEFAULT_MAX_NONCE_LIMITER_LAST_STEP_VALIDATION_THREAD_COUNT,\n\t\tmax(1, (erlang:system_info(schedulers_online) - 1))).\n\n%% Accept a block from the given IP only once in so many milliseconds.\n-ifdef(AR_TEST).\n-define(DEFAULT_BLOCK_THROTTLE_BY_IP_INTERVAL_MS, 
10).\n-else.\n-define(DEFAULT_BLOCK_THROTTLE_BY_IP_INTERVAL_MS, 1000).\n-endif.\n\n%% Accept a block with the given solution hash only once in so many milliseconds.\n-ifdef(AR_TEST).\n-define(DEFAULT_BLOCK_THROTTLE_BY_SOLUTION_INTERVAL_MS, 10).\n-else.\n-define(DEFAULT_BLOCK_THROTTLE_BY_SOLUTION_INTERVAL_MS, 2000).\n-endif.\n\n-define(DEFAULT_CM_POLL_INTERVAL_MS, 60000).\n-define(DEFAULT_CM_BATCH_TIMEOUT_MS, 20).\n\n-define(CHUNK_GROUP_SIZE, (256 * 1024 * 8000)). % 2 GiB.\n\n%% The number of consecutive chunks to read at a time during in-place repacking.\n-ifdef(AR_TEST).\n-define(DEFAULT_REPACK_BATCH_SIZE, 2).\n-else.\n-define(DEFAULT_REPACK_BATCH_SIZE, 100).\n-endif.\n\n-define(DEFAULT_REPACK_CACHE_SIZE_MB, 4000).\n\n%% default filtering value for the peer list (30days)\n-define(CURRENT_PEERS_LIST_FILTER, 30*60*60*24).\n\n%% The default rocksdb databases flush interval, 30 minutes.\n-define(DEFAULT_ROCKSDB_FLUSH_INTERVAL_S, 1800).\n%% The default rocksdb WAL sync interval, 1 minute.\n-define(DEFAULT_ROCKSDB_WAL_SYNC_INTERVAL_S, 60).\n\n%% The number of 2.9 storage modules allowed to prepare the storage at a time.\n-ifdef(AR_TEST).\n-define(DEFAULT_REPLICA_2_9_WORKERS, 2).\n-else.\n-define(DEFAULT_REPLICA_2_9_WORKERS, 8).\n-endif.\n\n%% The default maximum number of replica 2.9 entropies to cache at a time\n%% while syncing data. Each entropy is 256 MiB.\n-define(DEFAULT_REPLICA_2_9_ENTROPY_CACHE_SIZE_MB, 4000).\n\n%% The number of packing workers.\n-define(DEFAULT_PACKING_WORKERS, erlang:system_info(dirty_cpu_schedulers_online)).\n\n%% The default connection tcp delay when arweave is shutting down\n-define(SHUTDOWN_TCP_CONNECTION_TIMEOUT, 30).\n-define(SHUTDOWN_TCP_MODE, shutdown).\n\n%% Global socket configuration\n-define(DEFAULT_SOCKET_BACKEND, inet).\n\n%% Default Gun HTTP/TCP parameters\n-define(DEFAULT_GUN_HTTP_CLOSING_TIMEOUT, 15_000).\n-define(DEFAULT_GUN_HTTP_KEEPALIVE, 60_000).\n-define(DEFAULT_GUN_TCP_DELAY_SEND, false).\n-define(DEFAULT_GUN_TCP_KEEPALIVE, true).\n-define(DEFAULT_GUN_TCP_LINGER, false).\n-define(DEFAULT_GUN_TCP_LINGER_TIMEOUT, 0).\n-define(DEFAULT_GUN_TCP_NODELAY, true).\n-define(DEFAULT_GUN_TCP_SEND_TIMEOUT_CLOSE, true).\n-define(DEFAULT_GUN_TCP_SEND_TIMEOUT, 15_000).\n\n%% The time the cowboy loop handler waits before killing a request handler process.\n-define(DEFAULT_HTTP_HANDLER_TIMEOUT_MS, 55000).\n\n%% Per-chunk HTTP body read period passed to cowboy_req:read_body/2.\n-define(DEFAULT_HTTP_READ_BODY_PERIOD_MS, 15000).\n\n%% Total wall-clock limit for reading the complete request body.\n-define(DEFAULT_HTTP_MAX_BODY_READ_TIME_MS,\n\t?DEFAULT_HTTP_HANDLER_TIMEOUT_MS - 2000).\n\n%% Default Cowboy HTTP/TCP parameters\n-define(DEFAULT_COWBOY_HTTP_ACTIVE_N, 100).\n-define(DEFAULT_COWBOY_HTTP_IDLE_TIMEOUT, 60_000).\n-define(DEFAULT_COWBOY_HTTP_INACTIVITY_TIMEOUT, 300_000).\n-define(DEFAULT_COWBOY_HTTP_LINGER_TIMEOUT, 1000).\n-define(DEFAULT_COWBOY_HTTP_REQUEST_TIMEOUT, 5000).\n-define(DEFAULT_COWBOY_TCP_BACKLOG, 1024).\n-define(DEFAULT_COWBOY_TCP_DELAY_SEND, false).\n-define(DEFAULT_COWBOY_TCP_IDLE_TIMEOUT_SECOND, 10).\n-define(DEFAULT_COWBOY_TCP_KEEPALIVE, true).\n-define(DEFAULT_COWBOY_TCP_LINGER, false).\n-define(DEFAULT_COWBOY_TCP_LINGER_TIMEOUT, 0).\n-define(DEFAULT_COWBOY_TCP_MAX_CONNECTIONS, 5000).\n-define(DEFAULT_COWBOY_TCP_NODELAY, true).\n-define(DEFAULT_COWBOY_TCP_NUM_ACCEPTORS, 500).\n-define(DEFAULT_COWBOY_TCP_SEND_TIMEOUT_CLOSE, true).\n-define(DEFAULT_COWBOY_TCP_SEND_TIMEOUT, 15_000).\n-define(DEFAULT_COWBOY_TCP_LISTENER_SHUTDOWN, 5000).\n\n%% 
Common RLG Settings\n-define(DEFAULT_HTTP_API_LIMITER_TIMESTAMP_CLEANUP_INTERVAL, 120000).\n-define(DEFAULT_HTTP_API_LIMITER_TIMESTAMP_CLEANUP_EXPIRY, 120000).\n-define(DEFAULT_HTTP_API_LIMITER_IS_MANUAL_REDUCTION_DISABLED, false).\n\n%% General RLG\n-define(DEFAULT_HTTP_API_LIMITER_GENERAL_SLIDING_WINDOW_LIMIT, 0).\n-define(DEFAULT_HTTP_API_LIMITER_GENERAL_SLIDING_WINDOW_DURATION, 1000).\n-ifdef(AR_TEST).\n-define(DEFAULT_HTTP_API_LIMITER_GENERAL_LEAKY_LIMIT, 45000).\n-else.\n-define(DEFAULT_HTTP_API_LIMITER_GENERAL_LEAKY_LIMIT, 450).\n-endif.\n-define(DEFAULT_HTTP_API_LIMITER_GENERAL_LEAKY_TICK_INTERVAL, 30000).\n-define(DEFAULT_HTTP_API_LIMITER_GENERAL_LEAKY_TICK_REDUCTION, 450).\n-define(DEFAULT_HTTP_API_LIMITER_GENERAL_CONCURRENCY_LIMIT, 150).\n\n%% Chunk RLG\n-define(DEFAULT_HTTP_API_LIMITER_CHUNK_SLIDING_WINDOW_LIMIT, 100).\n-define(DEFAULT_HTTP_API_LIMITER_CHUNK_SLIDING_WINDOW_DURATION, 1000).\n-define(DEFAULT_HTTP_API_LIMITER_CHUNK_LEAKY_LIMIT, 6000).\n-define(DEFAULT_HTTP_API_LIMITER_CHUNK_LEAKY_TICK_INTERVAL, 30000).\n-define(DEFAULT_HTTP_API_LIMITER_CHUNK_LEAKY_TICK_REDUCTION, 30).\n-define(DEFAULT_HTTP_API_LIMITER_CHUNK_CONCURRENCY_LIMIT, 200).\n\n%% Data Sync RLG\n-define(DEFAULT_HTTP_API_LIMITER_DATA_SYNC_RECORD_SLIDING_WINDOW_LIMIT, 0).\n-define(DEFAULT_HTTP_API_LIMITER_DATA_SYNC_RECORD_SLIDING_WINDOW_DURATION, 1000).\n-define(DEFAULT_HTTP_API_LIMITER_DATA_SYNC_RECORD_LEAKY_LIMIT, 20).\n-define(DEFAULT_HTTP_API_LIMITER_DATA_SYNC_RECORD_LEAKY_TICK_INTERVAL, 30000).\n-define(DEFAULT_HTTP_API_LIMITER_DATA_SYNC_RECORD_LEAKY_TICK_REDUCTION, 20).\n-define(DEFAULT_HTTP_API_LIMITER_DATA_SYNC_RECORD_CONCURRENCY_LIMIT, 40).\n\n%% Recent Hash List Diff RLG\n-define(DEFAULT_HTTP_API_LIMITER_RECENT_HASH_LIST_DIFF_SLIDING_WINDOW_LIMIT, 0).\n-define(DEFAULT_HTTP_API_LIMITER_RECENT_HASH_LIST_DIFF_SLIDING_WINDOW_DURATION, 1000).\n-define(DEFAULT_HTTP_API_LIMITER_RECENT_HASH_LIST_DIFF_LEAKY_LIMIT, 120).\n-define(DEFAULT_HTTP_API_LIMITER_RECENT_HASH_LIST_DIFF_LEAKY_TICK_INTERVAL, 30000).\n-define(DEFAULT_HTTP_API_LIMITER_RECENT_HASH_LIST_DIFF_LEAKY_TICK_REDUCTION, 120).\n-define(DEFAULT_HTTP_API_LIMITER_RECENT_HASH_LIST_DIFF_CONCURRENCY_LIMIT, 240).\n\n%% Block Index RLG\n-define(DEFAULT_HTTP_API_LIMITER_BLOCK_INDEX_SLIDING_WINDOW_LIMIT, 0).\n-define(DEFAULT_HTTP_API_LIMITER_BLOCK_INDEX_SLIDING_WINDOW_DURATION, 1000).\n-ifdef(AR_TEST).\n-define(DEFAULT_HTTP_API_LIMITER_BLOCK_INDEX_LEAKY_LIMIT, 10).\n-else.\n-define(DEFAULT_HTTP_API_LIMITER_BLOCK_INDEX_LEAKY_LIMIT, 1).\n-endif.\n-define(DEFAULT_HTTP_API_LIMITER_BLOCK_INDEX_LEAKY_TICK_INTERVAL, 30000).\n-define(DEFAULT_HTTP_API_LIMITER_BLOCK_INDEX_LEAKY_TICK_REDUCTION, 1).\n-define(DEFAULT_HTTP_API_LIMITER_BLOCK_INDEX_CONCURRENCY_LIMIT, 2).\n\n%% Wallet list RLG\n-define(DEFAULT_HTTP_API_LIMITER_WALLET_LIST_SLIDING_WINDOW_LIMIT, 0).\n-define(DEFAULT_HTTP_API_LIMITER_WALLET_LIST_SLIDING_WINDOW_DURATION, 1000).\n-ifdef(AR_TEST).\n-define(DEFAULT_HTTP_API_LIMITER_WALLET_LIST_LEAKY_LIMIT, 10).\n-else.\n-define(DEFAULT_HTTP_API_LIMITER_WALLET_LIST_LEAKY_LIMIT, 1).\n-endif.\n-define(DEFAULT_HTTP_API_LIMITER_WALLET_LIST_LEAKY_TICK_INTERVAL, 30000).\n-define(DEFAULT_HTTP_API_LIMITER_WALLET_LIST_LEAKY_TICK_REDUCTION, 1).\n-define(DEFAULT_HTTP_API_LIMITER_WALLET_LIST_CONCURRENCY_LIMIT, 2).\n\n%% Get VDF RLG\n-define(DEFAULT_HTTP_API_LIMITER_GET_VDF_SLIDING_WINDOW_LIMIT, 0).\n-define(DEFAULT_HTTP_API_LIMITER_GET_VDF_SLIDING_WINDOW_DURATION, 1000).\n-ifdef(AR_TEST).\n-define(DEFAULT_HTTP_API_LIMITER_GET_VDF_LEAKY_LIMIT, 
4500).\n-else.\n-define(DEFAULT_HTTP_API_LIMITER_GET_VDF_LEAKY_LIMIT, 90).\n-endif.\n-define(DEFAULT_HTTP_API_LIMITER_GET_VDF_LEAKY_TICK_INTERVAL, 30000).\n-define(DEFAULT_HTTP_API_LIMITER_GET_VDF_LEAKY_TICK_REDUCTION, 90).\n-define(DEFAULT_HTTP_API_LIMITER_GET_VDF_CONCURRENCY_LIMIT, 90).\n\n%% VDF Session RLG\n-define(DEFAULT_HTTP_API_LIMITER_GET_VDF_SESSION_SLIDING_WINDOW_LIMIT, 0).\n-define(DEFAULT_HTTP_API_LIMITER_GET_VDF_SESSION_SLIDING_WINDOW_DURATION, 1000).\n-ifdef(AR_TEST).\n-define(DEFAULT_HTTP_API_LIMITER_GET_VDF_SESSION_LEAKY_LIMIT, 50000).\n-else.\n-define(DEFAULT_HTTP_API_LIMITER_GET_VDF_SESSION_LEAKY_LIMIT, 30).\n-endif.\n-define(DEFAULT_HTTP_API_LIMITER_GET_VDF_SESSION_LEAKY_TICK_INTERVAL, 30000).\n-define(DEFAULT_HTTP_API_LIMITER_GET_VDF_SESSION_LEAKY_TICK_REDUCTION, 30).\n-define(DEFAULT_HTTP_API_LIMITER_GET_VDF_SESSION_CONCURRENCY_LIMIT, 30).\n\n%% Previous VDF Session RLG\n-define(DEFAULT_HTTP_API_LIMITER_GET_PREVIOUS_VDF_SESSION_SLIDING_WINDOW_LIMIT, 0).\n-define(DEFAULT_HTTP_API_LIMITER_GET_PREVIOUS_VDF_SESSION_SLIDING_WINDOW_DURATION, 1000).\n-ifdef(AR_TEST).\n-define(DEFAULT_HTTP_API_LIMITER_GET_PREVIOUS_VDF_SESSION_LEAKY_LIMIT, 50000).\n-else.\n-define(DEFAULT_HTTP_API_LIMITER_GET_PREVIOUS_VDF_SESSION_LEAKY_LIMIT, 30).\n-endif.\n-define(DEFAULT_HTTP_API_LIMITER_GET_PREVIOUS_VDF_SESSION_LEAKY_TICK_INTERVAL, 30000).\n-define(DEFAULT_HTTP_API_LIMITER_GET_PREVIOUS_VDF_SESSION_LEAKY_TICK_REDUCTION, 30).\n-define(DEFAULT_HTTP_API_LIMITER_GET_PREVIOUS_VDF_SESSION_CONCURRENCY_LIMIT, 30).\n\n%% Metrics RLG\n-define(DEFAULT_HTTP_API_LIMITER_METRICS_SLIDING_WINDOW_LIMIT, 0).\n-define(DEFAULT_HTTP_API_LIMITER_METRICS_SLIDING_WINDOW_DURATION, 1000).\n-define(DEFAULT_HTTP_API_LIMITER_METRICS_LEAKY_LIMIT, 2).\n-define(DEFAULT_HTTP_API_LIMITER_METRICS_LEAKY_TICK_INTERVAL, 1000).\n-define(DEFAULT_HTTP_API_LIMITER_METRICS_LEAKY_TICK_REDUCTION, 2).\n-define(DEFAULT_HTTP_API_LIMITER_METRICS_CONCURRENCY_LIMIT, 2).\n\n\n%% @doc Startup options with default values.\n-record(config, {\n\tinit = false,\n\tport = ?DEFAULT_HTTP_IFACE_PORT,\n\tmine = false,\n\tverify = false,\n\tverify_samples = ?SAMPLE_CHUNK_COUNT,\n\tpeers = [],\n\tblock_gossip_peers = [],\n\tlocal_peers = [],\n\tsync_from_local_peers_only = false,\n\tdata_dir = \"./data\",\n\tlog_dir = ?LOG_DIR,\n\tpolling = ?DEFAULT_POLLING_INTERVAL, % Polling frequency in seconds.\n\tblock_pollers = ?DEFAULT_BLOCK_POLLERS,\n\tauto_join = true,\n\tjoin_workers = ?DEFAULT_JOIN_WORKERS,\n\tdiff = ?DEFAULT_DIFF,\n\tmining_addr = not_set,\n\thashing_threads = ?NUM_HASHING_PROCESSES,\n\tmining_cache_size_mb,\n\tpacking_cache_size_limit,\n\tdata_cache_size_limit,\n\tpost_tx_timeout = ?DEFAULT_POST_TX_TIMEOUT,\n\tmax_emitters = ?NUM_EMITTER_PROCESSES,\n\tsync_jobs = ?DEFAULT_SYNC_JOBS,\n\theader_sync_jobs = ?DEFAULT_HEADER_SYNC_JOBS,\n\tenable_data_roots_syncing = true,\n\tdata_sync_request_packed_chunks = false,\n\tdisk_pool_jobs = ?DEFAULT_DISK_POOL_JOBS,\n\tload_key = not_set,\n\tdisk_space_check_frequency = ?DISK_SPACE_CHECK_FREQUENCY_MS,\n\tstorage_modules = [],\n\trepack_in_place_storage_modules = [],\n\trepack_batch_size = ?DEFAULT_REPACK_BATCH_SIZE,\n\trepack_cache_size_mb = ?DEFAULT_REPACK_CACHE_SIZE_MB,\n\tstart_from_latest_state = false,\n\tstart_from_state = not_set,\n\tstart_from_block = not_set,\n\tinternal_api_secret = not_set,\n\tenable = [],\n\tdisable = [],\n\ttransaction_blacklist_files = [],\n\ttransaction_blacklist_urls = [],\n\ttransaction_whitelist_files = [],\n\ttransaction_whitelist_urls = 
[],\n\trequests_per_minute_limit = ?DEFAULT_REQUESTS_PER_MINUTE_LIMIT,\n\trequests_per_minute_limit_by_ip = #{},\n\tmax_propagation_peers = ?DEFAULT_MAX_PROPAGATION_PEERS,\n\tmax_block_propagation_peers = ?DEFAULT_MAX_BLOCK_PROPAGATION_PEERS,\n\twebhooks = [],\n\tdisk_pool_data_root_expiration_time = ?DEFAULT_DISK_POOL_DATA_ROOT_EXPIRATION_TIME_S,\n\tmax_disk_pool_buffer_mb = ?DEFAULT_MAX_DISK_POOL_BUFFER_MB,\n\tmax_disk_pool_data_root_buffer_mb = ?DEFAULT_MAX_DISK_POOL_DATA_ROOT_BUFFER_MB,\n\tmax_duplicate_data_roots = ?DEFAULT_MAX_DUPLICATE_DATA_ROOTS,\n\tsemaphores = #{\n\t\tget_chunk => ?MAX_PARALLEL_GET_CHUNK_REQUESTS,\n\t\tget_and_pack_chunk => ?MAX_PARALLEL_GET_AND_PACK_CHUNK_REQUESTS,\n\t\tget_tx_data => ?MAX_PARALLEL_GET_TX_DATA_REQUESTS,\n\t\tpost_chunk => ?MAX_PARALLEL_POST_CHUNK_REQUESTS,\n\t\tget_block_index => ?MAX_PARALLEL_BLOCK_INDEX_REQUESTS,\n\t\tget_wallet_list => ?MAX_PARALLEL_WALLET_LIST_REQUESTS,\n\t\t%% The get_sync_record semaphore is shared with GET /sync_buckets,\n\t\t%% GET /footprints, and GET /footprint_buckets.\n\t\tget_sync_record => ?MAX_PARALLEL_GET_SYNC_RECORD_REQUESTS,\n\t\tpost_tx => ?MAX_PARALLEL_POST_TX_REQUESTS,\n\t\tget_reward_history => ?MAX_PARALLEL_REWARD_HISTORY_REQUESTS,\n\t\tget_tx => ?MAX_PARALLEL_GET_TX_REQUESTS,\n\t\tget_data_roots => ?MAX_PARALLEL_GET_DATA_ROOTS_REQUESTS\n\t},\n\tdisk_cache_size = ?DISK_CACHE_SIZE,\n\tmax_nonce_limiter_validation_thread_count\n\t\t\t= ?DEFAULT_MAX_NONCE_LIMITER_VALIDATION_THREAD_COUNT,\n\tmax_nonce_limiter_last_step_validation_thread_count\n\t\t\t= ?DEFAULT_MAX_NONCE_LIMITER_LAST_STEP_VALIDATION_THREAD_COUNT,\n\tnonce_limiter_server_trusted_peers = [],\n\tnonce_limiter_client_peers = [],\n\tdebug = false,\n\trun_defragmentation = false,\n\tdefragmentation_trigger_threshold = 1_500_000_000,\n\tdefragmentation_modules = [],\n\tblock_throttle_by_ip_interval = ?DEFAULT_BLOCK_THROTTLE_BY_IP_INTERVAL_MS,\n\tblock_throttle_by_solution_interval = ?DEFAULT_BLOCK_THROTTLE_BY_SOLUTION_INTERVAL_MS,\n\ttls_cert_file = not_set, %% required to enable TLS\n\ttls_key_file = not_set,  %% required to enable TLS\n\thttp_api_transport_idle_timeout = ?DEFAULT_COWBOY_TCP_IDLE_TIMEOUT_SECOND*1000,\n\tcoordinated_mining = false,\n\tcm_api_secret = not_set,\n\tcm_exit_peer = not_set,\n\tcm_peers = [],\n\tcm_poll_interval = ?DEFAULT_CM_POLL_INTERVAL_MS,\n\tcm_out_batch_timeout = ?DEFAULT_CM_BATCH_TIMEOUT_MS,\n\tis_pool_server = false,\n\tis_pool_client = false,\n\tpool_server_address = not_set,\n\tpool_api_key = not_set,\n\tpool_worker_name = not_set,\n\tpacking_workers = ?DEFAULT_PACKING_WORKERS,\n\treplica_2_9_workers = ?DEFAULT_REPLICA_2_9_WORKERS,\n\tdisable_replica_2_9_device_limit = false,\n\treplica_2_9_entropy_cache_size_mb = ?DEFAULT_REPLICA_2_9_ENTROPY_CACHE_SIZE_MB,\n\t%% Undocumented/unsupported options\n\tchunk_storage_file_size = ?CHUNK_GROUP_SIZE,\n\trocksdb_flush_interval_s = ?DEFAULT_ROCKSDB_FLUSH_INTERVAL_S,\n\trocksdb_wal_sync_interval_s = ?DEFAULT_ROCKSDB_WAL_SYNC_INTERVAL_S,\n\t%% openssl (will be removed), fused, hiopt_m4\n\tvdf = openssl,\n\t%% Turn on/off the rebasing check. 
Only disabled in tests.\n\tallow_rebase = true,\n\n\t% Shutdown procedures\n\tshutdown_tcp_connection_timeout = ?SHUTDOWN_TCP_CONNECTION_TIMEOUT,\n\tshutdown_tcp_mode = ?SHUTDOWN_TCP_MODE,\n\n\t% global socket configuration\n\t'socket.backend' = ?DEFAULT_SOCKET_BACKEND,\n\n\t% gun network stack configuration.\n\t% these parameters are mainly configured using default\n\t% values from inet module\n\t'http_client.http.closing_timeout' = ?DEFAULT_GUN_HTTP_CLOSING_TIMEOUT,\n\t'http_client.http.keepalive' = ?DEFAULT_GUN_HTTP_KEEPALIVE,\n\t'http_client.tcp.delay_send' = ?DEFAULT_GUN_TCP_DELAY_SEND,\n\t'http_client.tcp.keepalive' = ?DEFAULT_GUN_TCP_KEEPALIVE,\n\t'http_client.tcp.linger' = ?DEFAULT_GUN_TCP_LINGER,\n\t'http_client.tcp.linger_timeout' = ?DEFAULT_GUN_TCP_LINGER_TIMEOUT,\n\t'http_client.tcp.nodelay' = ?DEFAULT_GUN_TCP_NODELAY,\n\t'http_client.tcp.send_timeout_close' = ?DEFAULT_GUN_TCP_SEND_TIMEOUT_CLOSE,\n\t'http_client.tcp.send_timeout' = ?DEFAULT_GUN_TCP_SEND_TIMEOUT,\n\n\t% cowboy network stack configuration.\n\t% these parameters are mainly configured using default\n\t% values from inet module\n\t'http_api.http.active_n' = ?DEFAULT_COWBOY_HTTP_ACTIVE_N,\n\t'http_api.http.inactivity_timeout' = ?DEFAULT_COWBOY_HTTP_INACTIVITY_TIMEOUT,\n\t'http_api.http.linger_timeout' = ?DEFAULT_COWBOY_HTTP_LINGER_TIMEOUT,\n\t'http_api.http.request_timeout' = ?DEFAULT_COWBOY_HTTP_REQUEST_TIMEOUT,\n\t'http_api.tcp.backlog' = ?DEFAULT_COWBOY_TCP_BACKLOG,\n\t'http_api.tcp.delay_send' = ?DEFAULT_COWBOY_TCP_DELAY_SEND,\n\t'http_api.tcp.keepalive' = ?DEFAULT_COWBOY_TCP_KEEPALIVE,\n\t'http_api.tcp.linger' = ?DEFAULT_COWBOY_TCP_LINGER,\n\t'http_api.tcp.linger_timeout' = ?DEFAULT_COWBOY_TCP_LINGER_TIMEOUT,\n\t'http_api.tcp.listener_shutdown' = ?DEFAULT_COWBOY_TCP_LISTENER_SHUTDOWN,\n\t'http_api.tcp.max_connections' = ?DEFAULT_COWBOY_TCP_MAX_CONNECTIONS,\n\t'http_api.tcp.nodelay' = ?DEFAULT_COWBOY_TCP_NODELAY,\n\t'http_api.tcp.num_acceptors' = ?DEFAULT_COWBOY_TCP_NUM_ACCEPTORS,\n\t'http_api.tcp.send_timeout_close' = ?DEFAULT_COWBOY_TCP_SEND_TIMEOUT_CLOSE,\n\t'http_api.tcp.send_timeout' = ?DEFAULT_COWBOY_TCP_SEND_TIMEOUT,\n\n\t'http_api.limiter.general.sliding_window_limit' =\n                     ?DEFAULT_HTTP_API_LIMITER_GENERAL_SLIDING_WINDOW_LIMIT,\n\t'http_api.limiter.general.sliding_window_duration' =\n                     ?DEFAULT_HTTP_API_LIMITER_GENERAL_SLIDING_WINDOW_DURATION,\n\t'http_api.limiter.general.sliding_window_timestamp_cleanup_interval' =\n                     ?DEFAULT_HTTP_API_LIMITER_TIMESTAMP_CLEANUP_INTERVAL,\n\t'http_api.limiter.general.sliding_window_timestamp_cleanup_expiry' =\n                     ?DEFAULT_HTTP_API_LIMITER_TIMESTAMP_CLEANUP_EXPIRY,\n\t'http_api.limiter.general.leaky_limit' =\n                     ?DEFAULT_HTTP_API_LIMITER_GENERAL_LEAKY_LIMIT,\n\t'http_api.limiter.general.leaky_tick_interval' =\n                     ?DEFAULT_HTTP_API_LIMITER_GENERAL_LEAKY_TICK_INTERVAL,\n\t'http_api.limiter.general.leaky_tick_reduction' =\n                     ?DEFAULT_HTTP_API_LIMITER_GENERAL_LEAKY_TICK_REDUCTION,\n\t'http_api.limiter.general.concurrency_limit' =\n                     ?DEFAULT_HTTP_API_LIMITER_GENERAL_CONCURRENCY_LIMIT,\n\t'http_api.limiter.general.is_manual_reduction_disabled' =\n                     ?DEFAULT_HTTP_API_LIMITER_IS_MANUAL_REDUCTION_DISABLED,\n\n        'http_api.limiter.chunk.sliding_window_limit' =\n                     ?DEFAULT_HTTP_API_LIMITER_CHUNK_SLIDING_WINDOW_LIMIT,\n\t'http_api.limiter.chunk.sliding_window_duration' =\n                     
?DEFAULT_HTTP_API_LIMITER_CHUNK_SLIDING_WINDOW_DURATION,\n\t'http_api.limiter.chunk.sliding_window_timestamp_cleanup_interval' =\n                     ?DEFAULT_HTTP_API_LIMITER_TIMESTAMP_CLEANUP_INTERVAL,\n\t'http_api.limiter.chunk.sliding_window_timestamp_cleanup_expiry' =\n                     ?DEFAULT_HTTP_API_LIMITER_TIMESTAMP_CLEANUP_EXPIRY,\n\t'http_api.limiter.chunk.leaky_limit' =\n                     ?DEFAULT_HTTP_API_LIMITER_CHUNK_LEAKY_LIMIT,\n\t'http_api.limiter.chunk.leaky_tick_interval' =\n                     ?DEFAULT_HTTP_API_LIMITER_CHUNK_LEAKY_TICK_INTERVAL,\n\t'http_api.limiter.chunk.leaky_tick_reduction' =\n                     ?DEFAULT_HTTP_API_LIMITER_CHUNK_LEAKY_TICK_REDUCTION,\n\t'http_api.limiter.chunk.concurrency_limit' =\n                     ?DEFAULT_HTTP_API_LIMITER_CHUNK_CONCURRENCY_LIMIT,\n\t'http_api.limiter.chunk.is_manual_reduction_disabled' =\n                     ?DEFAULT_HTTP_API_LIMITER_IS_MANUAL_REDUCTION_DISABLED,\n\n\t'http_api.limiter.data_sync_record.sliding_window_limit' =\n                     ?DEFAULT_HTTP_API_LIMITER_DATA_SYNC_RECORD_SLIDING_WINDOW_LIMIT,\n\t'http_api.limiter.data_sync_record.sliding_window_duration' =\n                     ?DEFAULT_HTTP_API_LIMITER_DATA_SYNC_RECORD_SLIDING_WINDOW_DURATION,\n\t'http_api.limiter.data_sync_record.sliding_window_timestamp_cleanup_interval' =\n                     ?DEFAULT_HTTP_API_LIMITER_TIMESTAMP_CLEANUP_INTERVAL,\n\t'http_api.limiter.data_sync_record.sliding_window_timestamp_cleanup_expiry' =\n                     ?DEFAULT_HTTP_API_LIMITER_TIMESTAMP_CLEANUP_EXPIRY,\n\t'http_api.limiter.data_sync_record.leaky_limit' =\n                     ?DEFAULT_HTTP_API_LIMITER_DATA_SYNC_RECORD_LEAKY_LIMIT,\n\t'http_api.limiter.data_sync_record.leaky_tick_interval' =\n                     ?DEFAULT_HTTP_API_LIMITER_DATA_SYNC_RECORD_LEAKY_TICK_INTERVAL,\n\t'http_api.limiter.data_sync_record.leaky_tick_reduction' =\n                     ?DEFAULT_HTTP_API_LIMITER_DATA_SYNC_RECORD_LEAKY_TICK_REDUCTION,\n\t'http_api.limiter.data_sync_record.concurrency_limit' =\n                     ?DEFAULT_HTTP_API_LIMITER_DATA_SYNC_RECORD_CONCURRENCY_LIMIT,\n\t'http_api.limiter.data_sync_record.is_manual_reduction_disabled' =\n                     ?DEFAULT_HTTP_API_LIMITER_IS_MANUAL_REDUCTION_DISABLED,\n\n\t'http_api.limiter.recent_hash_list_diff.sliding_window_limit' =\n                     ?DEFAULT_HTTP_API_LIMITER_RECENT_HASH_LIST_DIFF_SLIDING_WINDOW_LIMIT,\n\t'http_api.limiter.recent_hash_list_diff.sliding_window_duration' =\n                     ?DEFAULT_HTTP_API_LIMITER_RECENT_HASH_LIST_DIFF_SLIDING_WINDOW_DURATION,\n\t'http_api.limiter.recent_hash_list_diff.sliding_window_timestamp_cleanup_interval' =\n                     ?DEFAULT_HTTP_API_LIMITER_TIMESTAMP_CLEANUP_INTERVAL,\n\t'http_api.limiter.recent_hash_list_diff.sliding_window_timestamp_cleanup_expiry' =\n                     ?DEFAULT_HTTP_API_LIMITER_TIMESTAMP_CLEANUP_EXPIRY,\n\t'http_api.limiter.recent_hash_list_diff.leaky_limit' =\n                     ?DEFAULT_HTTP_API_LIMITER_RECENT_HASH_LIST_DIFF_LEAKY_LIMIT,\n\t'http_api.limiter.recent_hash_list_diff.leaky_tick_interval' =\n                     ?DEFAULT_HTTP_API_LIMITER_RECENT_HASH_LIST_DIFF_LEAKY_TICK_INTERVAL,\n\t'http_api.limiter.recent_hash_list_diff.leaky_tick_reduction' =\n                     ?DEFAULT_HTTP_API_LIMITER_RECENT_HASH_LIST_DIFF_LEAKY_TICK_REDUCTION,\n\t'http_api.limiter.recent_hash_list_diff.concurrency_limit' =\n                     
?DEFAULT_HTTP_API_LIMITER_RECENT_HASH_LIST_DIFF_CONCURRENCY_LIMIT,\n\t'http_api.limiter.recent_hash_list_diff.is_manual_reduction_disabled' =\n                     ?DEFAULT_HTTP_API_LIMITER_IS_MANUAL_REDUCTION_DISABLED,\n\n\t'http_api.limiter.block_index.sliding_window_limit' =\n                     ?DEFAULT_HTTP_API_LIMITER_BLOCK_INDEX_SLIDING_WINDOW_LIMIT,\n\t'http_api.limiter.block_index.sliding_window_duration' =\n                     ?DEFAULT_HTTP_API_LIMITER_BLOCK_INDEX_SLIDING_WINDOW_DURATION,\n\t'http_api.limiter.block_index.sliding_window_timestamp_cleanup_interval' =\n                     ?DEFAULT_HTTP_API_LIMITER_TIMESTAMP_CLEANUP_INTERVAL,\n\t'http_api.limiter.block_index.sliding_window_timestamp_cleanup_expiry' =\n                     ?DEFAULT_HTTP_API_LIMITER_TIMESTAMP_CLEANUP_EXPIRY,\n\t'http_api.limiter.block_index.leaky_limit' =\n                     ?DEFAULT_HTTP_API_LIMITER_BLOCK_INDEX_LEAKY_LIMIT,\n\t'http_api.limiter.block_index.leaky_tick_interval' =\n                     ?DEFAULT_HTTP_API_LIMITER_BLOCK_INDEX_LEAKY_TICK_INTERVAL,\n\t'http_api.limiter.block_index.leaky_tick_reduction' =\n                     ?DEFAULT_HTTP_API_LIMITER_BLOCK_INDEX_LEAKY_TICK_REDUCTION,\n\t'http_api.limiter.block_index.concurrency_limit' =\n                     ?DEFAULT_HTTP_API_LIMITER_BLOCK_INDEX_CONCURRENCY_LIMIT,\n\t'http_api.limiter.block_index.is_manual_reduction_disabled' =\n                     ?DEFAULT_HTTP_API_LIMITER_IS_MANUAL_REDUCTION_DISABLED,\n\n\t'http_api.limiter.wallet_list.sliding_window_limit' =\n                     ?DEFAULT_HTTP_API_LIMITER_WALLET_LIST_SLIDING_WINDOW_LIMIT,\n\t'http_api.limiter.wallet_list.sliding_window_duration' =\n                     ?DEFAULT_HTTP_API_LIMITER_WALLET_LIST_SLIDING_WINDOW_DURATION,\n\t'http_api.limiter.wallet_list.sliding_window_timestamp_cleanup_interval' =\n                     ?DEFAULT_HTTP_API_LIMITER_TIMESTAMP_CLEANUP_INTERVAL,\n\t'http_api.limiter.wallet_list.sliding_window_timestamp_cleanup_expiry' =\n                     ?DEFAULT_HTTP_API_LIMITER_TIMESTAMP_CLEANUP_EXPIRY,\n\t'http_api.limiter.wallet_list.leaky_limit' =\n                     ?DEFAULT_HTTP_API_LIMITER_WALLET_LIST_LEAKY_LIMIT,\n\t'http_api.limiter.wallet_list.leaky_tick_interval' =\n                     ?DEFAULT_HTTP_API_LIMITER_WALLET_LIST_LEAKY_TICK_INTERVAL,\n\t'http_api.limiter.wallet_list.leaky_tick_reduction' =\n                     ?DEFAULT_HTTP_API_LIMITER_WALLET_LIST_LEAKY_TICK_REDUCTION,\n\t'http_api.limiter.wallet_list.concurrency_limit' =\n                     ?DEFAULT_HTTP_API_LIMITER_WALLET_LIST_CONCURRENCY_LIMIT,\n\t'http_api.limiter.wallet_list.is_manual_reduction_disabled' =\n                     ?DEFAULT_HTTP_API_LIMITER_IS_MANUAL_REDUCTION_DISABLED,\n\n\t'http_api.limiter.get_vdf.sliding_window_limit' =\n                     ?DEFAULT_HTTP_API_LIMITER_GET_VDF_SLIDING_WINDOW_LIMIT,\n\t'http_api.limiter.get_vdf.sliding_window_duration' =\n                     ?DEFAULT_HTTP_API_LIMITER_GET_VDF_SLIDING_WINDOW_DURATION,\n\t'http_api.limiter.get_vdf.sliding_window_timestamp_cleanup_interval' =\n                     ?DEFAULT_HTTP_API_LIMITER_TIMESTAMP_CLEANUP_INTERVAL,\n\t'http_api.limiter.get_vdf.sliding_window_timestamp_cleanup_expiry' =\n                     ?DEFAULT_HTTP_API_LIMITER_TIMESTAMP_CLEANUP_EXPIRY,\n\t'http_api.limiter.get_vdf.leaky_limit' =\n                     ?DEFAULT_HTTP_API_LIMITER_GET_VDF_LEAKY_LIMIT,\n\t'http_api.limiter.get_vdf.leaky_tick_interval' =\n                     
?DEFAULT_HTTP_API_LIMITER_GET_VDF_LEAKY_TICK_INTERVAL,\n\t'http_api.limiter.get_vdf.leaky_tick_reduction' =\n                     ?DEFAULT_HTTP_API_LIMITER_GET_VDF_LEAKY_TICK_REDUCTION,\n\t'http_api.limiter.get_vdf.concurrency_limit' =\n                     ?DEFAULT_HTTP_API_LIMITER_GET_VDF_CONCURRENCY_LIMIT,\n\t'http_api.limiter.get_vdf.is_manual_reduction_disabled' =\n                     ?DEFAULT_HTTP_API_LIMITER_IS_MANUAL_REDUCTION_DISABLED,\n\n\t'http_api.limiter.get_vdf_session.sliding_window_limit' =\n                     ?DEFAULT_HTTP_API_LIMITER_GET_VDF_SESSION_SLIDING_WINDOW_LIMIT,\n\t'http_api.limiter.get_vdf_session.sliding_window_duration' =\n                     ?DEFAULT_HTTP_API_LIMITER_GET_VDF_SESSION_SLIDING_WINDOW_DURATION,\n\t'http_api.limiter.get_vdf_session.sliding_window_timestamp_cleanup_interval' =\n                     ?DEFAULT_HTTP_API_LIMITER_TIMESTAMP_CLEANUP_INTERVAL,\n\t'http_api.limiter.get_vdf_session.sliding_window_timestamp_cleanup_expiry' =\n                     ?DEFAULT_HTTP_API_LIMITER_TIMESTAMP_CLEANUP_EXPIRY,\n\t'http_api.limiter.get_vdf_session.leaky_limit' =\n                     ?DEFAULT_HTTP_API_LIMITER_GET_VDF_SESSION_LEAKY_LIMIT,\n\t'http_api.limiter.get_vdf_session.leaky_tick_interval' =\n                     ?DEFAULT_HTTP_API_LIMITER_GET_VDF_SESSION_LEAKY_TICK_INTERVAL,\n\t'http_api.limiter.get_vdf_session.leaky_tick_reduction' =\n                     ?DEFAULT_HTTP_API_LIMITER_GET_VDF_SESSION_LEAKY_TICK_REDUCTION,\n\t'http_api.limiter.get_vdf_session.concurrency_limit' =\n                     ?DEFAULT_HTTP_API_LIMITER_GET_VDF_SESSION_CONCURRENCY_LIMIT,\n\t'http_api.limiter.get_vdf_session.is_manual_reduction_disabled' =\n                     ?DEFAULT_HTTP_API_LIMITER_IS_MANUAL_REDUCTION_DISABLED,\n\n\t'http_api.limiter.get_previous_vdf_session.sliding_window_limit' =\n                     ?DEFAULT_HTTP_API_LIMITER_GET_PREVIOUS_VDF_SESSION_SLIDING_WINDOW_LIMIT,\n\t'http_api.limiter.get_previous_vdf_session.sliding_window_duration' =\n                     ?DEFAULT_HTTP_API_LIMITER_GET_PREVIOUS_VDF_SESSION_SLIDING_WINDOW_DURATION,\n\t'http_api.limiter.get_previous_vdf_session.sliding_window_timestamp_cleanup_interval' =\n                     ?DEFAULT_HTTP_API_LIMITER_TIMESTAMP_CLEANUP_INTERVAL,\n\t'http_api.limiter.get_previous_vdf_session.sliding_window_timestamp_cleanup_expiry' =\n                     ?DEFAULT_HTTP_API_LIMITER_TIMESTAMP_CLEANUP_EXPIRY,\n\t'http_api.limiter.get_previous_vdf_session.leaky_limit' =\n                     ?DEFAULT_HTTP_API_LIMITER_GET_PREVIOUS_VDF_SESSION_LEAKY_LIMIT,\n\t'http_api.limiter.get_previous_vdf_session.leaky_tick_interval' =\n                     ?DEFAULT_HTTP_API_LIMITER_GET_PREVIOUS_VDF_SESSION_LEAKY_TICK_INTERVAL,\n\t'http_api.limiter.get_previous_vdf_session.leaky_tick_reduction' =\n                     ?DEFAULT_HTTP_API_LIMITER_GET_PREVIOUS_VDF_SESSION_LEAKY_TICK_REDUCTION,\n\t'http_api.limiter.get_previous_vdf_session.concurrency_limit' =\n                     ?DEFAULT_HTTP_API_LIMITER_GET_PREVIOUS_VDF_SESSION_CONCURRENCY_LIMIT,\n\t'http_api.limiter.get_previous_vdf_session.is_manual_reduction_disabled' =\n                     ?DEFAULT_HTTP_API_LIMITER_IS_MANUAL_REDUCTION_DISABLED,\n\n\t'http_api.limiter.metrics.sliding_window_limit' =\n                     ?DEFAULT_HTTP_API_LIMITER_METRICS_SLIDING_WINDOW_LIMIT,\n\t'http_api.limiter.metrics.sliding_window_duration' =\n                     
?DEFAULT_HTTP_API_LIMITER_METRICS_SLIDING_WINDOW_DURATION,\n\t'http_api.limiter.metrics.sliding_window_timestamp_cleanup_interval' =\n                     ?DEFAULT_HTTP_API_LIMITER_TIMESTAMP_CLEANUP_INTERVAL,\n\t'http_api.limiter.metrics.sliding_window_timestamp_cleanup_expiry' =\n                     ?DEFAULT_HTTP_API_LIMITER_TIMESTAMP_CLEANUP_EXPIRY,\n\t'http_api.limiter.metrics.leaky_limit' =\n                     ?DEFAULT_HTTP_API_LIMITER_METRICS_LEAKY_LIMIT,\n\t'http_api.limiter.metrics.leaky_tick_interval' =\n                     ?DEFAULT_HTTP_API_LIMITER_METRICS_LEAKY_TICK_INTERVAL,\n\t'http_api.limiter.metrics.leaky_tick_reduction' =\n                     ?DEFAULT_HTTP_API_LIMITER_METRICS_LEAKY_TICK_REDUCTION,\n\t'http_api.limiter.metrics.concurrency_limit' =\n                     ?DEFAULT_HTTP_API_LIMITER_METRICS_CONCURRENCY_LIMIT,\n\t'http_api.limiter.metrics.is_manual_reduction_disabled' =\n                     ?DEFAULT_HTTP_API_LIMITER_IS_MANUAL_REDUCTION_DISABLED\n\n\n}).\n\n-endif.\n"
  },
  {
    "path": "apps/arweave_config/include/arweave_config_spec.hrl",
    "content": "-import(arweave_config_spec, [is_function_exported/3]).\n"
  },
  {
    "path": "apps/arweave_config/priv/.gitkeep",
    "content": ""
  },
  {
    "path": "apps/arweave_config/src/arweave_config.app.src",
    "content": "{application, arweave_config, [\n\t{id, \"arweave_config\"},\n\t{description, \"Arweave Configuration\"},\n\t{vsn, \"0.0.1\"},\n\t{mod, {arweave_config, []}},\n\t{env, []},\n\t{applications, [\n\t\tkernel,\n\t\tstdlib,\n\t\tsasl,\n\t\tcowboy,\n\t\ttomerl,\n\t\tyamerl\n\t]},\n\t{modules, [\n\t\tarweave_config,\n\t\tarweave_config_environment,\n\t\tarweave_config_http_server,\n\t\tarweave_config_legacy,\n\t\tarweave_config_parameters,\n\t\tarweave_config_parser,\n\t\tarweave_config_signal_handler,\n\t\tarweave_config_spec,\n\t\tarweave_config_spec_default,\n\t\tarweave_config_spec_deprecated,\n\t\tarweave_config_spec_enabled,\n\t\tarweave_config_spec_environment,\n\t\tarweave_config_spec_handle_get,\n\t\tarweave_config_spec_handle_set,\n\t\tarweave_config_spec_inherit,\n\t\tarweave_config_spec_legacy,\n\t\tarweave_config_spec_long_argument,\n\t\tarweave_config_spec_long_description,\n\t\tarweave_config_spec_nargs,\n\t\tarweave_config_spec_parameter_key,\n\t\tarweave_config_spec_runtime,\n\t\tarweave_config_spec_short_argument,\n\t\tarweave_config_spec_short_description,\n\t\tarweave_config_spec_type,\n\t\tarweave_config_store,\n\t\tarweave_config_sup,\n\t\tarweave_config_type\n\t]},\n\t{registered, [\n\t\tarweave_config,\n\t\tarweave_config_sup,\n\t\tarweave_config_legacy,\n\t\tarweave_config_environment,\n\t\tarweave_config_spec,\n\t\tarweave_config_store,\n\t\tarweave_config_signal_handler\n\t]}\n]}.\n"
  },
  {
    "path": "apps/arweave_config/src/arweave_config.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2025 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @doc Arweave Configuration Interface.\n%%%\n%%% `arweave_config' module is an interface to the Arweave\n%%% configuration data store where all configuration parameters are\n%%% stored and specified.\n%%%\n%%% WARNING: this module/application is in active development, the\n%%% interfaces can change.\n%%%\n%%% == Usage ==\n%%%\n%%% `arweave_config' application needs to be started to work\n%%% correctly, many processes are mandatory and will be in charge to\n%%% deal with stored configuration.\n%%%\n%%% ```\n%%% % start arweave_config\n%%% arweave_config:start().\n%%% '''\n%%%\n%%% Parameters keys are defined as list and can be retrieve using\n%%% `arweave_config:get/1' or `arweave_config:get/2'.\n%%%\n%%% ```\n%%% % get debug parameter.\n%%% arweave_config:get([debug]).\n%%%\n%%% % get debug parameter, if undefined, use false instead.\n%%% arweave_config:get([debug], false).\n%%% '''\n%%%\n%%% Parameters keys can be dynamically set using\n%%% `arweave_config:set/2'.\n%%%\n%%% ```\n%%% % set debug parameter to true\n%%% arweave_config:set([debug], true).\n%%%\n%%% % set debug parameter to false\n%%% arweave_config:set([debug], false).\n%%% '''\n%%%\n%%% Parameters are defined in parameter specification, defined as\n%%% callback modules or as map. If a specification is not containing\n%%% the parameter key, the interface will return and error.\n%%%\n%%% @end\n%%%===================================================================\n-module(arweave_config).\n-compile(warnings_as_errors).\n-vsn(1).\n-behavior(application).\n-behavior(gen_server).\n-export([\n\tget/1,\n\tget/2,\n\tget_env/0,\n\tis_runtime/0,\n\truntime/0,\n\tset/2,\n\tset_env/1,\n\tstart/0,\n\tstart_link/0,\n\tstop/0\n]).\n% application behavior callbacks.\n-export([start/2, stop/1]).\n% gen_server behavior callbacks\n-export([init/1, terminate/2, handle_call/3, handle_cast/2, handle_info/2]).\n-compile({no_auto_import,[get/1]}).\n-include(\"arweave_config.hrl\").\n-include_lib(\"kernel/include/logger.hrl\").\n\n%%--------------------------------------------------------------------\n%% @doc helper function to started `arweave_config' application.\n%% @end\n%%--------------------------------------------------------------------\n-spec start() -> ok | {error, term()}.\n\nstart() ->\n\tcase application:ensure_all_started(?MODULE, permanent) of\n\t\t{ok, Dependencies} ->\n\t\t\t?LOG_DEBUG(\"arweave_config started dependencies: ~p\", Dependencies),\n\t\t\tok;\n\t\tElsewise ->\n\t\t\tElsewise\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc help function to stop `arweave_config' application.\n%% @end\n%%--------------------------------------------------------------------\n-spec stop() -> ok.\n\nstop() ->\n\tapplication:stop(?MODULE).\n\n%%--------------------------------------------------------------------\n%% @doc A wrapper for `application:get_env/2'.\n%% @deprecated this function is a temporary interface and will be\n%%             replaced by `arweave_config:get/1' function.\n%% @see application:get_env/2\n%% @end\n%%--------------------------------------------------------------------\n-spec get_env() -> {ok, 
#config{}}.\n\nget_env() ->\n\tarweave_config_legacy:get_env().\n\n%%--------------------------------------------------------------------\n%% @doc A wrapper for `application:set_env/3'.\n%% @deprecated this function is a temporary interface and will be\n%%             replaced by `arweave_config:set/2' function.\n%% @see application:set_env/3\n%% @end\n%%--------------------------------------------------------------------\n-spec set_env(term()) -> ok.\n\nset_env(Value) ->\n\tarweave_config_legacy:set_env(Value).\n\n%%--------------------------------------------------------------------\n%% @doc Get a value from the configuration.\n%%\n%% Note: the behavior of this function is not the same depending of\n%% the kind of parameter desired. Indeed, to help the transition to\n%% the new configuration format, when an `atom' is set as first\n%% argument,   `arweave_config'  will   act  as   proxy  to   the  old\n%% configuration method (using a record).\n%%\n%% == Examples ==\n%%\n%% ```\n%% > get(<<\"global.debug\">>).\n%% {ok, false}\n%%\n%% > get([global, debug]).\n%% {ok, false}\n%%\n%% > get([test]).\n%% {error, #{ reason => not_found }}.\n%% '''\n%%\n%% @end\n%%--------------------------------------------------------------------\n-spec get(ParameterKey) -> Return when\n\tParameterKey :: atom() | string() | binary() | list(),\n\tReturn :: {ok, term()} | {error, term()}.\n\nget(Key) when is_atom(Key) ->\n\t% TODO: pattern to remove.\n\t% this pattern is ONLY for legacy purpose, it should be\n\t% removed after the full migration to the new arweave\n\t% configuration format.\n\t?LOG_DEBUG([\n\t\t{function, ?FUNCTION_NAME},\n\t\t{module, ?MODULE},\n\t\t{key, Key}\n\t]),\n\tarweave_config_legacy:get(Key);\nget(Key) ->\n\tcase arweave_config_parser:key(Key) of\n\t\t{ok, Parameter} ->\n\t\t\tarweave_config_spec:get(Parameter);\n\t\tElsewise ->\n\t\t\tElsewise\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc Get a value from the  configuration, if not defined, a default\n%% value can be returned instead.\n%%\n%% == Examples ==\n%%\n%% ```\n%% > get(<<\"global.debug\">>, true).\n%% false\n%%\n%% > get([global, debug], true).\n%% false\n%%\n%% > get([test], true).\n%% true\n%% '''\n%% @end\n%%--------------------------------------------------------------------\n-spec get(ParameterKey, Default) -> Return when\n\tParameterKey :: atom() | string() | binary() | list(),\n\tDefault :: term(),\n\tReturn :: term().\n\nget(Key, Default) ->\n\ttry get(Key) of\n\t\t{ok, Value} ->\n\t\t\tValue;\n\t\t_Else ->\n\t\t\tDefault\n\tcatch\n\t\t_:_ -> Default\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc Set a configuration value using a key.\n%%\n%% == Examples==\n%%\n%% ```\n%% > set(<<\"global.debug\">>, <<\"true\">>).\n%% {ok, true}\n%%\n%% > set([global, debug]), true).\n%% {ok, true}\n%%\n%% > set(\"global.debug\", \"true\").\n%% {ok, true}\n%%\n%% > set(\"global.debug\", 1234).\n%% {error, #{ reason => not_boolean }}\n%% '''\n%%\n%% @end\n%%--------------------------------------------------------------------\n-spec set(ParameterKey, Value) -> Return when\n\tParameterKey :: atom() | string() | iolist() | binary() | list(),\n\tValue :: term(),\n\tReturn :: {ok, term()} | {error, term()}.\n\nset(Key, Value) when is_atom(Key) ->\n\t% TODO: pattern to remove.\n\t% this pattern is ONLY for legacy purpose and should be\n\t% removed after the migration to the new arweave configuration\n\t% format.\n\t?LOG_DEBUG([\n\t\t{function, 
?FUNCTION_NAME},\n\t\t{module, ?MODULE},\n\t\t{key, Key},\n\t\t{value, Value}\n\t]),\n\tcase arweave_config_legacy:set(Key, Value) of\n\t\t{ok, V} ->\n\t\t\tcase arweave_config_spec:get_legacy(Key) of\n\t\t\t\t{ok, PK} ->\n\t\t\t\t\t_ = set(PK, Value),\n\t\t\t\t\t{ok, V};\n\t\t\t\tElse ->\n\t\t\t\t\tElse\n\t\t\tend;\n\t\tElse ->\n\t\t\tElse\n\tend;\nset(Key, Value) ->\n\tcase arweave_config_parser:key(Key) of\n\t\t{ok, Parameter} ->\n\t\t\tarweave_config_spec:set(Parameter, Value);\n\t\tElsewise ->\n\t\t\tElsewise\n\tend.\n\n%%--------------------------------------------------------------------\n%%\n%% == Examples ==\n%%\n%% ```\n%% 10 = getm(#{}, logdir, [logging,default,path], 10).\n%% parameter_value = getm(#{}, logdir, [logging,default,path], 10).\n%% 1 = getm(#{ logdir => 1 }, logdir, [logging,default,path], 10).\n%% '''\n%%\n%%--------------------------------------------------------------------\n% getm(MapKey, Map, Parameter, Default) ->\n\n%%--------------------------------------------------------------------\n%% @doc Start arweave_config process.\n%% @end\n%%--------------------------------------------------------------------\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\n%%--------------------------------------------------------------------\n%% @doc Switch to runtime mode. No rollback is possible there, this is\n%% a one time operation to announce arweave config is ready to deal\n%% with dynamic configuration.\n%% @end\n%%--------------------------------------------------------------------\n-spec runtime() -> ok.\n\nruntime() ->\n\tgen_server:call(?MODULE, runtime, 10_000).\n\n%%--------------------------------------------------------------------\n%% @doc Returns if arweave config is in runtime mode or not.\n%% @end\n%%--------------------------------------------------------------------\n-spec is_runtime() -> boolean().\n\nis_runtime() ->\n\tcase ets:lookup(?MODULE, runtime) of\n\t\t[{runtime, true}] -> true;\n\t\t_Elsewise -> false\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc `gen_server' callback.\n%% @end\n%%--------------------------------------------------------------------\ninit(_) ->\n\tets:new(?MODULE, [named_table, protected]),\n\t{ok, ?MODULE}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc `gen_server' callback.\n%% @end\n%%--------------------------------------------------------------------\nterminate(_, _) ->\n\t?LOG_INFO(\"arweave_config process stopped\").\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc `gen_server' callback.\n%% @end\n%%--------------------------------------------------------------------\nhandle_call(runtime, _From, State) ->\n\ttry\n\t\tets:insert(?MODULE, {runtime, true})\n\tof\n\t\ttrue -> ok;\n\t\t_ -> ok\n\tcatch\n\t\t_:_ -> ok\n\tend,\n\t{reply, ok, State};\nhandle_call(_, _, State) -> {noreply, State}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc `gen_server' callback.\n%% @end\n%%--------------------------------------------------------------------\nhandle_cast(_, State) ->\n\t{noreply, State}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc `gen_server' callback.\n%% @end\n%%--------------------------------------------------------------------\nhandle_info(_, State) -> {noreply, State}.\n\n%%--------------------------------------------------------------------\n%% 
@hidden\n%% @doc `application' callback.\n%% @end\n%%--------------------------------------------------------------------\nstart(_StartType, _StartArgs) ->\n\t?LOG_INFO(\"arweave_config application starting\"),\n\n\t% start application supervisor\n\tarweave_config_sup:start_link().\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc `application' callback.\n%% @end\n%%--------------------------------------------------------------------\nstop(_Args) ->\n\t?LOG_INFO(\"arweave_config application stopped\"),\n\tok.\n\n"
  },
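  {
    "path": "apps/arweave_config/doc/arweave_config_usage_sketch.md",
    "content": "# arweave_config usage sketch\n\nA minimal, illustrative usage sketch for the `arweave_config` public interface, based on the examples in the `arweave_config` moduledoc. It assumes the `debug` parameter is defined in the parameter specifications; the `[test]` key is only used to show the default-value behaviour, and the exact return values are illustrative.\n\n```erlang\n%% Start the arweave_config application and its mandatory processes.\nok = arweave_config:start().\n\n%% Read a parameter; the result shape follows the moduledoc examples.\n{ok, _Debug} = arweave_config:get([debug]).\n\n%% Read with a default returned when the parameter is not defined.\ntrue = arweave_config:get([test], true).\n\n%% Set a parameter; the value is validated against the parameter spec.\n{ok, true} = arweave_config:set([debug], true).\n```\n"
  },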
  {
    "path": "apps/arweave_config/src/arweave_config_arguments.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2025 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @doc Arweave Configuration CLI Arguments Parser.\n%%%\n%%% This module is in charge of parsing the arguments from the command\n%%% line, usually defined at startup. The parser will use the\n%%% specifications from `arweave_config_spec' (this means, for now,\n%%% `arweave_config' must be started to correctly parse something).\n%%%\n%%% The main idea is to have all arguments flags directly available\n%%% from the specifications and parse the arguments from CLI using\n%%% them. It should then return a list of actions to be executed.\n%%%\n%%% This module is also a process called `arweave_config_arguments',\n%%% keeping the original command line passed by the user.\n%%%\n%%% @todo add support for more than one parameter taken from the flags\n%%%\n%%% @todo add support for short string flags (e.g. -def will look for\n%%%       -d, -e and -f flags if they are boolean).\n%%%\n%%% @todo returns a comprehensive error message.\n%%%\n%%% @todo when parsing fails, the documentation of the last parameter\n%%%       should be displayed, or the documentation of the whole\n%%%       application.\n%%%\n%%% @todo sub-arguments parsing, a more complex way to parse certain\n%%% kind of value will be required in some situation, for example for\n%%% peers and storage modules. see section below.\n%%%\n%%% @end\n%%%===================================================================\n-module(arweave_config_arguments).\n-behavior(gen_server).\n-compile(warnings_as_errors).\n-compile({no_auto_import,[get/0]}).\n-export([\n\tstart_link/0,\n\tload/0,\n\tset/1,\n\tget/0,\n\tget_args/0,\n\tparse/1,\n\tparse/2\n]).\n-export([init/1]).\n-export([handle_call/3, handle_cast/2, handle_info/2]).\n-include_lib(\"kernel/include/logger.hrl\").\n\n%%--------------------------------------------------------------------\n%% @doc Parses command line arguments.\n%% @see parse/2\n%% @end\n%%--------------------------------------------------------------------\n-spec parse(Args) -> Return when\n\tArgs :: [binary()],\n\tReturn :: {ok, [{Spec, Values}]} | {error, Reason},\n\tSpec :: map(),\n\tValues :: [term()],\n\tReason :: map().\n\nparse(Args) ->\n\tparse(Args, #{}).\n\n%%--------------------------------------------------------------------\n%% @doc Parses an argument from command line. Erlang is usually giving\n%% us these arguments as a `[string()]', but we want it to be a\n%% `[binary()]', to make our life easier when displaying this\n%% information somewhere else (e.g. JSON).\n%%\n%% Custom specifications can be set using `long_arguments' and\n%% `short_arguments' options. 
Those are mostly used for testing and\n%% debugging purpose, by default, this function will fetch\n%% specifications from `arweave_config_spec' process.\n%% @end\n%%--------------------------------------------------------------------\n-spec parse(Args, Opts) -> Return when\n\tArgs :: [binary()],\n\tOpts :: #{\n\t\tlong_arguments => #{},\n\t\tshort_arguments => #{}\n\t},\n\tReturn :: {ok, [{Spec, Values}]} | {error, Reason},\n\tSpec :: map(),\n\tValues :: [term()],\n\tReason :: map().\n\nparse(Args, Opts) ->\n\tparse_converter(Args, Opts).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc type converter, the parser only check binary data.\n%% @end\n%%--------------------------------------------------------------------\nparse_converter(Args, Opts) ->\n\tparse_converter(Args, [], Opts).\n\nparse_converter([], Buffer, Opts) ->\n\tReverse = lists:reverse(Buffer),\n\tparse_final(Reverse, Opts);\nparse_converter([Arg|Rest], Buffer, Opts) when is_list(Arg) ->\n\tNewBuffer = [list_to_binary(Arg)|Buffer],\n\tparse_converter(Rest, NewBuffer, Opts);\nparse_converter([Arg|Rest], Buffer, Opts) when is_integer(Arg) ->\n\tNewBuffer = [integer_to_binary(Arg)|Buffer],\n\tparse_converter(Rest, NewBuffer, Opts);\nparse_converter([Arg|Rest], Buffer, Opts) when is_float(Arg) ->\n\tNewBuffer = [float_to_binary(Arg)|Buffer],\n\tparse_converter(Rest, NewBuffer, Opts);\nparse_converter([Arg|Rest], Buffer, Opts) when is_atom(Arg) ->\n\tNewBuffer = [atom_to_binary(Arg)|Buffer],\n\tparse_converter(Rest, NewBuffer, Opts);\nparse_converter([Arg|Rest], Buffer, Opts) when is_binary(Arg) ->\n\tNewBuffer = [Arg|Buffer],\n\tparse_converter(Rest, NewBuffer, Opts).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nparse_final(Args, Opts) ->\n\ttry\n\t\tLongArgs = maps:get(\n\t\t\tlong_arguments,\n\t\t\tOpts,\n\t\t\t% @todo it's annoying to convert these values, longs/short\n\t\t\t% args from specifications should be returned as map directly.\n\t\t\tmaps:from_list(\n\t\t\t\tarweave_config_spec:get_long_arguments()\n\t\t\t)\n\t\t),\n\t\tShortArgs = maps:get(\n\t\t\tshort_arguments,\n\t\t\tOpts,\n\t\t\t% @todo it's annoying to convert these values, longs/short\n\t\t\t% args from specifications should be returned as map directly.\n\t\t\tmaps:from_list(\n\t\t\t\tarweave_config_spec:get_short_arguments()\n\t\t\t)\n\t\t),\n\t\tState = #{\n\t\t\targs => Args,\n\t\t\tla => LongArgs,\n\t\t\tsa => ShortArgs,\n\t\t\tpos => 1,\n\t\t\tactions => []\n\t\t},\n\t\tparse(Args, State, Opts)\n\tcatch\n\t\t_:R ->\n\t\t\t{error, #{\n\t\t\t\t\treason => R\n\t\t\t\t}\n\t\t\t}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc loop over the arguments and check them.\n%% @end\n%%--------------------------------------------------------------------\n-spec parse(Args, State, Opts) -> Return when\n\tArgs :: [binary()],\n\tState :: map(),\n\tOpts :: map(),\n\tReturn :: {ok, [{Spec, Values}]} | {error, Reason},\n\tSpec :: map(),\n\tValues :: [term()],\n\tReason :: map().\n\nparse([], #{ actions := Buffer }, _Opts) ->\n\t{ok, lists:reverse(Buffer)};\nparse([Arg = <<\"---\",_/binary>>|_], State, _Opts) ->\n\tPos = maps:get(pos, State),\n\t{error, #{\n\t\t\treason => <<\"bad_argument\">>,\n\t\t\targument => Arg,\n\t\t\tposition => Pos\n\t\t}\n\t};\nparse([Arg = <<\"--\",_/binary>>|Rest], State = #{la := LA}, Opts)\n\twhen is_map_key(Arg, LA) ->\n\t\t% by 
default, we assume the argument is a long\n\t\t% arguments and we try to find it.\n\t\tSpec = maps:get(Arg, LA),\n\t\tPos = maps:get(pos, State),\n\t\tcase apply_spec(Rest, Spec, State#{ pos => Pos+1 }) of\n\t\t\t{ok, NewRest, NewState} ->\n\t\t\t\tparse(NewRest, NewState, Opts);\n\t\t\tElse ->\n\t\t\t\tElse\n\t\tend;\nparse([<<\"-\", Arg>>|Rest], State = #{sa := SA}, Opts)\n\twhen is_map_key(Arg, SA),\n\t     Arg =/= $- ->\n\t\tSpec = maps:get(Arg, SA),\n\t\tPos = maps:get(pos, State),\n\t\tcase apply_spec(Rest, Spec, State#{ pos => Pos+1 }) of\n\t\t\t{ok, NewRest, NewState} ->\n\t\t\t\tparse(NewRest, NewState, Opts);\n\t\t\tElse ->\n\t\t\t\tElse\n\t\tend;\nparse([Unknown|_], #{ pos := Pos }, _Opts) ->\n\t{error, #{\n\t\t\treason => <<\"unknown argument\">>,\n\t\t\targument => Unknown,\n\t\t\tposition => Pos\n\t\t}\n\t}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc Take a value and check its type.\n%% @end\n%%--------------------------------------------------------------------\napply_spec([], Spec = #{type := boolean}, State) ->\n\tBuffer = maps:get(actions, State),\n\tNewBuffer = [{Spec, [true]}|Buffer],\n\tNewState = State#{\n\t\tactions => NewBuffer\n\t},\n\t{ok, [], NewState};\napply_spec([Value|Rest], Spec = #{type := boolean}, State) ->\n\tBuffer = maps:get(actions, State),\n\tcase arweave_config_type:boolean(Value) of\n\t\t{ok, Return} ->\n\t\t\tPos = maps:get(pos, State),\n\t\t\tNewBuffer = [{Spec, [Return]}|Buffer],\n\t\t\tNewState = State#{\n\t\t\t\tactions => NewBuffer,\n\t\t\t\tpos => Pos+1\n\t\t\t},\n\t\t\t{ok, Rest, NewState};\n\t\t_ ->\n\t\t\tNewBuffer = [{Spec, [true]}|Buffer],\n\t\t\tNewState = State#{\n\t\t\t\tactions => NewBuffer\n\t\t\t},\n\t\t\t{ok, [Value|Rest], NewState}\n\tend;\napply_spec([Value|Rest], Spec = #{type := Type}, State) ->\n\tBuffer = maps:get(actions, State),\n\tPos = maps:get(pos, State),\n\tcase arweave_config_type:Type(Value) of\n\t\t{ok, Return} ->\n\t\t\tNewBuffer = [{Spec, [Return]}|Buffer],\n\t\t\tNewState = State#{\n\t\t\t\tactions => NewBuffer,\n\t\t\t\tpos => Pos+1\n\t\t\t},\n\t\t\t{ok, Rest, NewState};\n\t\t_ ->\n\t\t\t{error, #{\n\t\t\t\t\treason => <<\"bad value\">>,\n\t\t\t\t\tvalue => Value,\n\t\t\t\t\ttype => Type,\n\t\t\t\t\tposition => Pos\n\t\t\t\t}\n\t\t\t}\n\tend;\napply_spec(_, Spec, State) ->\n\tType = maps:get(type, Spec),\n\tPos = maps:get(pos, State),\n\t{error, #{\n\t\t\treason => <<\"missing value\">>,\n\t\t\ttype => Type,\n\t\t\tposition => Pos\n\t\t}\n\t}.\n\n%%--------------------------------------------------------------------\n%% @doc starts arweave_config_arguments server.\n%% @end\n%%--------------------------------------------------------------------\n-spec start_link() -> {ok, pid()}.\n\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, #{}, []).\n\n%%--------------------------------------------------------------------\n%% @doc load arguments.\n%% @end\n%%--------------------------------------------------------------------\n-spec load() -> ok | {error, term()}.\n\nload() ->\n\tgen_server:call(?MODULE, load, 10_000).\n\n%%--------------------------------------------------------------------\n%% @doc set arguments from command line.\n%% @end\n%%--------------------------------------------------------------------\n-spec set(Args) -> Return when\n\tArgs :: [string() | binary()],\n\tReturn :: ok.\n\nset(Args) ->\n\tgen_server:call(?MODULE, {set, Args}, 10_000).\n\n%%--------------------------------------------------------------------\n%% @doc returns 
parsed arguments from process state.\n%% @end\n%%--------------------------------------------------------------------\n-spec get() -> {ok, [map()]}.\n\nget() ->\n\tgen_server:call(?MODULE, get, 10_000).\n\n%%--------------------------------------------------------------------\n%% @doc returns raw arguments from process state.\n%% @end\n%%--------------------------------------------------------------------\n-spec get_args() -> [string() | binary()].\n\nget_args() ->\n\tgen_server:call(?MODULE, {get, args}, 10_000).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\n-spec init(Args) -> Return when\n\tArgs :: #{\n\t\targs => [string() | binary()]\n\t},\n\tReturn :: {ok, State},\n\tState :: #{\n\t\tinit_args => Args,\n\t\targs => Args,\n\t\tparams => []\n\t}.\n\ninit(Args) ->\n\tState = #{ params => [] },\n\tinit_args(Args, State).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc get arguments directly from the command line.\n%%\n%% ```\n%% arweave_config_arguments:start(#{ args => [] }).\n%% '''\n%%\n%% @end\n%%--------------------------------------------------------------------\ninit_args(Args, State) ->\n\tRawArgs = maps:get(args, Args, []),\n\tNewState = State#{\n\t\tinit_args => Args,\n\t\targs => RawArgs\n\t},\n\tinit_final(Args, NewState).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc returns the final state, ready to be used by the process.\n%% @end\n%%--------------------------------------------------------------------\ninit_final(_, State) ->\n\t{ok, State}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\n-spec handle_call\n\t(get, From, State) -> Return when\n\t\tFrom :: term(),\n\t\tState :: map(),\n\t\tReturn :: {reply, Reply, State},\n\t\tReply :: [string()];\n\t({get, args}, From, State) -> Return when\n\t\tFrom :: term(),\n\t\tState :: map(),\n\t\tReturn :: {reply, Reply, State},\n\t\tReply :: [string()];\n\t({set, Args}, From, State) -> Return when\n\t\tArgs :: [string() | binary()],\n\t\tFrom :: term(),\n\t\tState :: map(),\n\t\tReturn :: {reply, Reply, State},\n\t\tReply :: {ok, [map()]} | {error, term()};\n\t(load, From, State) -> Return when\n\t\tFrom :: term(),\n\t\tState :: map(),\n\t\tReturn :: {reply, Reply, State},\n\t\tReply :: ok | {error, term()};\n\t(any(), From, State) -> Return when\n\t\tFrom :: term(),\n\t\tState :: map(),\n\t\tReturn :: {reply, ok, State}.\n\nhandle_call(get, _From, State) ->\n\tArgs = maps:get(params, State, []),\n\t{reply, Args, State};\nhandle_call({get, args}, _From, State) ->\n\tRawArgs = maps:get(args, State, []),\n\t{reply, RawArgs, State};\nhandle_call({set, RawArgs}, _From, State) ->\n\ttry\n\t\tparse(RawArgs)\n\tof\n\t\t{ok, Parsed} ->\n\t\t\tNewState = State#{\n\t\t\t\t     args => RawArgs,\n\t\t\t\t     params => Parsed\n\t\t\t},\n\t\t\t{reply, {ok, Parsed}, NewState};\n\t\tElse ->\n\t\t\t{reply, Else, State}\n\tcatch\n\t\t_Error:Reason ->\n\t\t\t{reply, {error, Reason}, State}\n\tend;\nhandle_call(load, _From, State = #{ params := Params}) ->\n\ttry\n\t\tlists:map(fun load_fun/1, Params),\n\t\t{reply, ok, State}\n\tcatch\n\t\t_Error:Reason ->\n\t\t\t{reply, {error, Reason}, State}\n\tend;\nhandle_call(Msg, _, State) ->\n\t?LOG_WARNING(\"~p (~p) received: ~p\", [?MODULE, self(), Msg]),\n\t{reply, ok, 
State}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\n-spec handle_cast(Msg, State) -> Return when\n\tMsg :: any(),\n\tState :: #{},\n\tReturn :: {noreply, State}.\n\nhandle_cast(Msg, State) ->\n\t?LOG_WARNING(\"~p (~p) received: ~p\", [?MODULE, self(), Msg]),\n\t{noreply, State}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\n-spec handle_info(Msg, State) -> Return when\n\tMsg :: any(),\n\tState :: #{},\n\tReturn :: {noreply, State}.\n\nhandle_info(Msg, State) ->\n\t?LOG_WARNING(\"~p (~p) received: ~p\", [?MODULE, self(), Msg]),\n\t{noreply, State}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nload_fun({Spec, [Value]}) ->\n\tParameterKey = maps:get(parameter_key, Spec),\n\tarweave_config:set(ParameterKey, Value).\n"
  },
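  {
    "path": "apps/arweave_config/doc/arweave_config_arguments_sketch.md",
    "content": "# arweave_config_arguments parse/2 sketch\n\nA minimal sketch of `arweave_config_arguments:parse/2` with custom specifications passed through the `long_arguments` option, so the running `arweave_config_spec` process is not consulted. The `--debug` flag and its `parameter_key` are hypothetical; only the `type` field is used by the parser itself, while `parameter_key` is what `load/0` would pass to `arweave_config:set/2`.\n\n```erlang\n%% One long-argument spec, keyed by the full flag binary.\nSpec = #{ type => boolean, parameter_key => [debug] },\nOpts = #{\n    long_arguments => #{ <<\"--debug\">> => Spec },\n    short_arguments => #{}\n},\n\n%% A trailing boolean flag without a value is treated as true.\n{ok, [{Spec, [true]}]} = arweave_config_arguments:parse([<<\"--debug\">>], Opts).\n```\n"
  },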
  {
    "path": "apps/arweave_config/src/arweave_config_arguments_legacy.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2026 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @deprecated this module is a legacy compat layer.\n%%% @doc Support for legacy arweave configuration.\n%%%\n%%% This module has been created to deal with legacy arguments parser\n%%% from `ar.erl'. The goal is to slowly migrate to the new\n%%% parser without breaking everything.\n%%%\n%%% @end\n%%%===================================================================\n-module(arweave_config_arguments_legacy).\n-behavior(gen_server).\n-compile(warnings_as_errors).\n-compile({no_auto_import,[get/0]}).\n-export([\n\t get/0,\n\t get_args/0,\n\t load/0,\n\t parse/1,\n\t set/1,\n\t start_link/0\n]).\n-export([init/1]).\n-export([handle_call/3, handle_cast/2, handle_info/2]).\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"kernel/include/logger.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n%%--------------------------------------------------------------------\n%% local type definition.\n%%--------------------------------------------------------------------\n-type state() :: #{ args => [string()], config => #config{} }.\n\n%%--------------------------------------------------------------------\n%% @doc Uses `ar_cli_parser:parse/2' legacy parser to parse a list of\n%% arguments. This is mostly an helper to get rid of the extra data\n%% produced by this function.\n%% @end\n%%--------------------------------------------------------------------\n-spec parse(Args) -> Return when\n\tArgs :: [string()],\n\tReturn :: {ok, #config{}} | {error, term()}.\n\nparse(Args) ->\n\ttry\n\t\tConfig = #config{},\n\t\tar_cli_parser:parse(Args, Config)\n\tof\n\t\tResult = {ok, _} -> Result;\n\t\t{error, _, _} -> {error, badarg};\n\t\t{error, _} -> {error, badarg};\n\t\tElse -> {error, Else}\n\tcatch\n\t\t_Error:Reason ->\n\t\t\t{error, Reason}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc Load parsed arguments into `arweave_config' process.\n%%\n%% ```\n%% ok = arweave_config_arguments_legacy:load().\n%% '''\n%%\n%% @end\n%%--------------------------------------------------------------------\n-spec load() -> ok | {error, term()}.\n\nload() ->\n\tgen_server:call(?MODULE, load, 10_000).\n\n%%--------------------------------------------------------------------\n%% @doc Set a new list of arguments, overwritting the old one if\n%% present.\n%%\n%% ```\n%% {ok, #config{}} = arweave_config_arguments_legacy:set([\n%%   \"debug\"\n%% ]).\n%% '''\n%%\n%% @end\n%%--------------------------------------------------------------------\n-spec set(Args) -> Return when\n\tArgs :: [string()],\n\tReturn :: {ok, #config{}} | {error, term()}.\n\nset(Args) ->\n\tgen_server:call(?MODULE, {set, Args}, 10_000).\n\n%%--------------------------------------------------------------------\n%% @doc Returns the parsed arguments as `#config{}' record.\n%%\n%% ```\n%% #config{} = arweave_config_arguments_legacy:get().\n%% '''\n%%\n%% @end\n%%--------------------------------------------------------------------\n-spec get() -> #config{}.\n\nget() ->\n\tgen_server:call(?MODULE, get, 10_000).\n\n%%--------------------------------------------------------------------\n%% @doc Returns the arguments 
defined stored in\n%% `arweave_config_arguments_legacy' process.\n%%\n%% ```\n%% [\"debug\"] = arweave_config_arguments_legacy:get_args().\n%% '''\n%%\n%% @end\n%%--------------------------------------------------------------------\n-spec get_args() -> [string()].\n\nget_args() ->\n\tgen_server:call(?MODULE, {get, args}, 10_000).\n\n%%--------------------------------------------------------------------\n%% @doc Start `arweave_config_arguments_legacy' process.\n%% @end\n%%--------------------------------------------------------------------\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\n-spec init(any()) -> {ok, state()}.\n\ninit(_) ->\n\t?LOG_INFO(\"start ~p  process\", [?MODULE]),\n\tState = #{\n\t\targs => [],\n\t\tconfig => #config{}\n\t},\n\t{ok, State}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\n-spec handle_call\n\t({set, Args}, From, State) -> Return when\n\t\tArgs :: [string()],\n\t\tFrom :: term(),\n\t\tState :: state(),\n\t\tReturn :: {reply, Reply, State},\n\t\tReply :: {ok, #config{}} | {error, term()};\n\t(get, From, State) -> Return when\n\t\tFrom :: term(),\n\t\tState :: state(),\n\t\tReturn :: {reply, Reply, State},\n\t\tReply :: #config{};\n\t({get, args}, From, State) -> Return when\n\t\tFrom :: term(),\n\t\tState :: state(),\n\t\tReturn :: [string()];\n\t(load, From, State) -> Return when\n\t\tFrom :: term(),\n\t\tState :: state(),\n\t\tReturn :: {reply, Reply, State},\n\t\tReply :: ok | {error, term()};\n\t({merge, Config}, From, State) -> Return when\n\t\tConfig :: #config{},\n\t\tFrom :: term(),\n\t\tState :: state(),\n\t\tReturn :: {reply, Reply, State},\n\t\tReply :: {ok, Config} | {error, term()};\n\t(term(), From, State) -> Return when\n\t\tFrom :: term(),\n\t\tState :: state(),\n\t\tReturn :: {reply, ok, State}.\n\nhandle_call(Msg = {set, Args}, From, State) ->\n\t?LOG_DEBUG([{message, Msg}, {from, From}]),\n\tcase parse(Args) of\n\t\t{ok, Config} ->\n\t\t\tNewState = State#{\n\t\t\t\targs => Args,\n\t\t\t\tconfig => Config\n\t\t\t},\n\t\t\t{reply, {ok, Config}, NewState};\n\t\tElse ->\n\t\t\t{reply, Else, State}\n\tend;\nhandle_call(Msg = get, From, State = #{ config := Config }) ->\n\t?LOG_DEBUG([{message, Msg}, {from, From}]),\n\t{reply, Config, State};\nhandle_call(Msg = {get,args}, From, State = #{ args := Args }) ->\n\t?LOG_DEBUG([{message, Msg}, {from, From}]),\n\t{reply, Args, State};\nhandle_call(Msg = load, From, State) ->\n\t?LOG_DEBUG([{message, Msg}, {from, From}]),\n\thandle_load(State);\nhandle_call(Msg, From, State) ->\n\t?LOG_WARNING([{process, self()}, {message, Msg}, {from, From}]),\n\t{reply, ok, State}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\n-spec handle_cast(any(), State) -> Return when\n\tState :: map(),\n\tReturn :: {noreply, state()}.\n\nhandle_cast(Msg, State) ->\n\t?LOG_WARNING([{process, self()}, {message, Msg}]),\n\t{noreply, State}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\n-spec handle_info(any(), State) -> Return when\n\tState :: map(),\n\tReturn :: {noreply, state()}.\n\nhandle_info(Msg, State) 
->\n\t?LOG_WARNING([{process, self()}, {message, Msg}]),\n\t{noreply, State}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc this function is mostly a hack around legacy parameters.\n%% Indeed, the current process is to let arweave_config set both\n%% arweave_config and arweave_config_legacy from a list of parameters.\n%% Because legacy parameters need to be loaded (and not all parameters\n%% are supported yet), a way to set them is required. Legacy format is\n%% \"temporary\" and will be removed after the complete migration to the\n%% new format. The following code should be executed once anyway\n%% (during arweave startup).\n%% @end\n%%--------------------------------------------------------------------\nhandle_load(State = #{ config := Config }) ->\n\ttry\n\t\t% get the list of compatible legacy arguments\n\t\tSupportedMap = arweave_config_spec:get_legacy(),\n\n\t\t% convert the current configuration to a map\n\t\tConfigMap = maps:from_list(\n\t\t\tarweave_config_legacy:config_to_proplist(Config)\n\t\t),\n\n\t\t% set arweave_config parameters one by one\n\t\t_ = maps:map(fun\n\t\t\t(LegacyKey, ParameterKey) ->\n\t\t\t\tcase maps:get(LegacyKey, ConfigMap, undefined) of\n\t\t\t\t\tundefined ->\n\t\t\t\t\t\tundefined;\n\t\t\t\t\tValue ->\n\t\t\t\t\t\tarweave_config:set(ParameterKey, Value),\n\t\t\t\t\t\tValue\n\t\t\t\tend\n\t\t\tend,\n\t\t\tSupportedMap\n\t\t),\n\n\t\thandle_load2(State)\n\tcatch\n\t\t_Error:Reason ->\n\t\t\t{reply, {error, Reason}, State}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nhandle_load2(State = #{ config := Config }) ->\n\ttry\n\t\t% to be sure the legacy configuration has been\n\t\t% configured, merge the current one.\n\t\tarweave_config_legacy:merge(Config)\n\tof\n\t\t{ok, C} ->\n\t\t\t{reply, {ok, C}, State};\n\t\t{error, Reason} ->\n\t\t\t{reply, {error, Reason}, State}\n\tcatch\n\t\t_Error:Reason ->\n\t\t\t{reply, {error, Reason}, State}\n\tend.\n"
  },
  {
    "path": "apps/arweave_config/src/arweave_config_bootstrap.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2026 (c) Arweave\n%%% @author Mathieu Kerjouan\n%%% @author Arweave Team\n%%% @doc Arweave Configuration Bootstrap module.\n%%%\n%%% This module is in charge to configure arweave parameters from\n%%% different sources and in specific order. The only public interface\n%%% is `start/1' function. All function prefixed by `init' are\n%%% internal function callbacks.\n%%%\n%%% == Legacy Mode ==\n%%%\n%%% The legacy mode has been created to be compatible with the old\n%%% static configuration format. The procedure has been a bit\n%%% modified, but the execution path is globally the same.\n%%%\n%%% 1. load environment\n%%%\n%%% 2. find config_file parameter from arguments and load legacy\n%%% configuration file\n%%%\n%%% 3. parse arguments and load them\n%%%\n%%% 4. switch to runtime mode.\n%%%\n%%% 5. start arweave\n%%%\n%%% == New Mode ==\n%%%\n%%% In the new configuration mode, every steps are modifying the\n%%% stored configuration file in `arweave_config' step by step. The\n%%% final step is to start arweave based on the final parsed\n%%% configuration and in runtime mode.\n%%%\n%%% 1. set environment\n%%%\n%%% 2. set arguments\n%%%\n%%% 3. set configuration files if present in arguments\n%%%\n%%% 4. load configuration into arweave_config\n%%%\n%%% 5. switch to runtime mode\n%%%\n%%% 6. start arweave application and features.\n%%%\n%%% @end\n%%%===================================================================\n-module(arweave_config_bootstrap).\n-compile(warnings_as_errors).\n-export([\n\tstart/1,\n\tinit_environment/1,\n\tinit_config_file/1,\n\tinit_arguments/1,\n\tinit_load/1,\n\tinit_runtime/1,\n\tinit_final/1\n]).\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n%%--------------------------------------------------------------------\n%% @doc Configure Arweave parameters from different sources.\n%% @end\n%%--------------------------------------------------------------------\n-spec start(Args) -> Return when\n\tArgs :: [string() | binary()],\n\tReturn :: {ok, #config{}} | {error, term()}.\n\nstart(Args) ->\n\t% to ensure the compatibility with the legacy parsers, an\n\t% environment variable called AR_CONFIG_MODE can be set.\n\t% By default, the legacy format is used for now, but if an\n\t% user wants to switch to the new mode, this environment\n\t% variable needs to be set to \"new\".\n\t% @todo remove this environment variable when arweave_config\n\t% is fully operational.\n\tArweaveConfigMode = os:getenv(\"AR_CONFIG_MODE\"),\n\tConfig = arweave_config_legacy:get(),\n\tState = #{\n\t\tmode => ArweaveConfigMode,\n\t\tconfig => Config,\n\t\targs => Args\n\t},\n\n\t% Let call the fsm loop.\n\tarweave_config_fsm:init(?MODULE, init_environment, State).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc init arweave configuration with environment variable.\n%% @end\n%%--------------------------------------------------------------------\ninit_environment(State = #{ mode := \"new\" }) ->\n\tarweave_config_environment:reset(),\n\t{next, init_arguments, State};\ninit_environment(State) ->\n\tarweave_config_environment:reset(),\n\tarweave_config_environment:load(),\n\t{next, init_config_file, 
State}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc init arweave configuration from configuration file.\n%% @end\n%%--------------------------------------------------------------------\ninit_config_file(State = #{ mode := \"new\" }) ->\n\t% the configuration file is directly loaded when it has been\n\t% found in the arguments.\n\t?LOG_WARNING(\"arweave_config will use new configuration format.\"),\n\t{next, init_load, State};\ninit_config_file(State = #{ args := Args, config := Config }) ->\n\t% @todo enable arweave_config_file_legacy.\n\tcase ar_config:parse_config_file(Args, Config) of\n\t\t{ok, NewConfig} when is_record(NewConfig, config)  ->\n\t\t\tarweave_config_legacy:merge(Config),\n\t\t\tNewState = State#{\n\t\t\t\tconfig => NewConfig\n\t\t\t},\n\t\t\t{next, init_arguments, NewState};\n\t\t{error, Reason} ->\n\t\t\t{error, Reason}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc init arweave configuration from command line arguments.\n%% @end\n%%--------------------------------------------------------------------\ninit_arguments(State = #{ args := Args, mode := \"new\" }) ->\n\t?LOG_WARNING(\"arweave_config will use new argument format.\"),\n\tcase arweave_config_arguments:set(Args) of\n\t\t{ok, _} ->\n\t\t\t{next, init_config_file, State};\n\t\tElse ->\n\t\t\t{error, Else}\n\tend;\ninit_arguments(State = #{ config := Config, args := Args }) ->\n\tcase ar_cli_parser:parse(Args, Config) of\n\t\t{ok, NewConfig} ->\n\t\t\tarweave_config_legacy:set(NewConfig),\n\t\t\tNewState = State#{ config => NewConfig },\n\t\t\t{next, init_load, NewState};\n\t\t{error, Reason} ->\n\t\t\t{error, Reason};\n\t\tElse ->\n\t\t\tElse\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit_load(State = #{ mode := \"new\" }) ->\n\t% in the new mode, this is where we defines which part of the\n\t% configuration is loaded before, between, environment,\n\t% arguments and configuration. Indeed, every part of the\n\t% configuration are being stored in individual processes. We\n\t% can merge them now\n\tarweave_config_environment:load(),\n\tarweave_config_file:load(),\n\tarweave_config_arguments:load(),\n\t{next, init_runtime, State};\ninit_load(State) ->\n\t{next, init_runtime, State}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc set arweave configuration in runtime mode to avoid setting\n%% static parameters. Only dynamic parameters will be allowed to be\n%% configured in this mode.\n%% @end\n%%--------------------------------------------------------------------\ninit_runtime(State) ->\n\tcase arweave_config:runtime() of\n\t\tok ->\n\t\t\t{next, init_final, State}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc finalize arweave configuration initialization.\n%% @end\n%%--------------------------------------------------------------------\ninit_final(_State = #{ mode := \"new\" })->\n\t% @todo this part of the code should not work like that.\n\t% there, we should retrieve all configuration using\n\t% Module:get/0 using the same format and them merging them\n\t% based on a specific order. 
The problem though, is to deal\n\t% with complex variable (like list).\n\tok = arweave_config_environment:load(),\n\tok = arweave_config_file:load(),\n\tok = arweave_config_arguments:load(),\n\tLegacyConfig = arweave_config_legacy:get(),\n\t{ok, LegacyConfig};\ninit_final(_State = #{ config := Config }) ->\n\t% parse the arguments from command line and check if a\n\t% configuration file is defined, returns #config{} record.\n\t% Note: this function will halt the node and print helps if\n\t% the arguments or configuration file are wrong.\n\t% @todo: re-enable legacy parser\n\t% Config = ar_cli_parser:parse_config_file(Args)\n\tarweave_config_legacy:set(Config),\n\t{ok, Config}.\n"
  },
  {
    "path": "apps/arweave_config/src/arweave_config_environment.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2025 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @doc Manage and store local environment variable.\n%%%\n%%% This module has been created to be a frontend around the local\n%%% system environment variable. Environment variables are set\n%%% read-only after a program is started. In this case, there is no\n%%% point to call `os:getenv/0' and parse all values everytime. This\n%%% module is getting environment variables, parses them and store\n%%% them in an ETS table called `arweave_config_environment'.\n%%%\n%%% All environment variables are stored as binary to display them\n%%% easily in debug mode or in JSON/YAML format.\n%%%\n%%% ```\n%%%  _____________\n%%% |             |\n%%% | os:getenv/0 |\n%%% |_____________|\n%%%     /_ _\\\n%%%      | |\n%%%      | | [arweave_config_environment:init/0]\n%%%      | | [arweave_config_environment:reset/0]\n%%%  ____| |_____________________              _____\n%%% |                            \\            (     )\n%%% | arweave_config_environment |--[state]-->| ets |\n%%% \\____________________________|            (_____)\n%%%      | |\n%%%      | | [arweave_config_environment:load/0]\n%%%     _| |_\n%%%  ___\\___/________\n%%% |                |\n%%% | arweave_config |\n%%% |________________|\n%%%\n%%% '''\n%%%\n%%% == TODO ==\n%%%\n%%% @todo store the configuration spec in the process and modify the\n%%% `get/0' function to return it.\n%%%\n%%% @todo creates `get_environment/0' and `get_environment/1' to\n%%% retrieve one environment value.\n%%%\n%%% @end\n%%%===================================================================\n-module(arweave_config_environment).\n-behavior(gen_server).\n-compile(warnings_as_errors).\n-compile({no_auto_import,[get/0]}).\n-export([load/0, get/0, get/1, reset/0]).\n-export([start_link/0]).\n-export([init/1]).\n-export([handle_call/3, handle_cast/2, handle_info/2]).\n-include_lib(\"kernel/include/logger.hrl\").\n\n%%--------------------------------------------------------------------\n%% @doc start `arweave_config_environment' process.\n%% @end\n%%--------------------------------------------------------------------\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\n%%--------------------------------------------------------------------\n%% @doc load environment variable into `arweave_config'.\n%% @end\n%%--------------------------------------------------------------------\n-spec load() -> ok.\n\nload() ->\n\tgen_server:call(?MODULE, load, 10_000).\n\n%%--------------------------------------------------------------------\n%% @doc returns the environment variables stored.\n%% @end\n%%--------------------------------------------------------------------\n-spec get() -> [{binary(), binary()}].\n\nget() ->\n\tets:tab2list(?MODULE).\n\n%%--------------------------------------------------------------------\n%% @doc returns the environment variables stored.\n%%--------------------------------------------------------------------\n-spec get(Key) -> Return when\n\tKey :: binary(),\n\tReturn :: {ok, binary()} | {error, term()}.\n\nget(Key) ->\n\tcase ets:lookup(?MODULE, Key) of\n\t\t[{Key, Value}] -> {ok, Value};\n\t\t_ -> {error, 
not_found}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc reset the environment variables. Remove all stored environment\n%% variables and reload them from the system environment. Mostly used\n%% for development and testing purposes.\n%% @end\n%%--------------------------------------------------------------------\n-spec reset() -> {ok, [{binary(), binary()}]}.\n\nreset() ->\n\tgen_server:call(?MODULE, reset, 1000).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\n-spec init(any()) -> {ok, reference() | atom()}.\n\ninit(_) ->\n\t% list environment variables available on the system\n\t% when arweave is started. These variables will need\n\t% to be stored.\n\tEts = ets:new(?MODULE, [named_table, protected]),\n\thandle_reset(),\n\t{ok, Ets}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nhandle_call(Msg = load, From, State) ->\n\t?LOG_DEBUG(\"received: ~p from ~p\", [Msg, From]),\n\tSpec = arweave_config_spec:get_environments(),\n\tMapping = [\n\t\tbegin\n\t\t\t?LOG_DEBUG(\"found environment ~p=~p\", [EnvKey,EnvValue]),\n\t\t\t{Parameter, EnvValue}\n\t\tend\n\t||\n\t\t{EnvKey, EnvValue} <- get(),\n\t\t{EnvSpec, Parameter} <- Spec,\n\t\tEnvSpec =:= EnvKey\n\t],\n\tlists:foreach(\n\t\tfun({Parameter, Value}) ->\n\t\t\tarweave_config:set(Parameter, Value)\n\t\tend,\n\t\tMapping\n\t),\n\t{reply, ok, State};\nhandle_call(Msg = reset, From, State) ->\n\t?LOG_DEBUG(\"received: ~p from ~p\", [Msg, From]),\n\tResult = handle_reset(),\n\t{reply, {ok, Result}, State};\nhandle_call(Msg, From, State) ->\n\t?LOG_WARNING([\n\t\t{module, ?MODULE},\n\t\t{function, handle_call},\n\t\t{from, From},\n\t\t{message, Msg}\n\t]),\n\t{reply, ok, State}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nhandle_cast(Msg, State) ->\n\t?LOG_WARNING([\n\t\t{module, ?MODULE},\n\t\t{function, handle_cast},\n\t\t{message, Msg}\n\t]),\n\t{noreply, State}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nhandle_info(Msg, State) ->\n\t?LOG_WARNING([\n\t\t{module, ?MODULE},\n\t\t{function, handle_info},\n\t\t{message, Msg}\n\t]),\n\t{noreply, State}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nhandle_reset() ->\n\t% Environment entries are strings. Each one is split in two\n\t% on the first '=' separator: the left part is the key, the\n\t% right part is the value. Environment variables are static\n\t% and can't be modified at runtime, so keeping them parsed in\n\t% an ETS table avoids re-parsing them later.\n\tets:delete_all_objects(?MODULE),\n\t_Environment = [\n\t\tbegin\n\t\t\t[K,V] = re:split(E, \"=\", [{parts, 2}, {return, list}]),\n\t\t\tBK = list_to_binary(K),\n\t\t\tBV = list_to_binary(V),\n\t\t\tets:insert(?MODULE, {BK, BV}),\n\t\t\t{BK, BV}\n\t\tend ||\n\t\tE <- os:getenv()\n\t].\n"
  },
  {
    "path": "apps/arweave_config/src/arweave_config_file.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2026 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @doc Arweave Configure File Interface.\n%%%\n%%% This process is in charge of managing the configuration files. All\n%%% configuration files are stored in memory with their parsed\n%%% content.\n%%%\n%%% == Usage ==\n%%%\n%%% ```\n%%% % add a new path, if the path/file is valid, the parsed version is\n%%% % returned.\n%%% {ok, ValidConfig1} = arweave_config_file:add(Path1).\n%%% {ok, ValidConfig2} = arweave_config_file:add(Path2).\n%%%\n%%% % get the merged configuration, if more than one path is present\n%%% % all of them will be merged by alphanumeric order\n%%% {ok, Merged} = arweave_config_file:get().\n%%%\n%%% % returns the parsed configuration from the path\n%%% {ok, ValidConfig1} = arweave_config_file:get_by_path(Path1).\n%%%\n%%% % returns the paths currently stored and merged.\n%%% {ok, Paths} = arweave_config_file:get_paths().\n%%%\n%%% % reset the configuration, the last merged configuration is\n%%% % returned to the caller\n%%% {ok, Merged} = arweave_config_file:reset().\n%%%\n%%% % load the configuration into arweave_config\n%%% ok = arweave_config_file:load().\n%%% '''\n%%%\n%%% == TODO ==\n%%%\n%%% @todo create `check/0' function. It will ensure all files are still\n%%% present and with the same values. If it's not the case, the files\n%%% are reloaded and merged together. will be used with sighup.\n%%%\n%%% @todo create `delete/1' function. A configuration file can be removed\n%%% from the store, in this case, the remaining files are merged\n%%% together after the file has been removed.\n%%%\n%%% @todo create `get_by_format/1' function. It will return the\n%%% configuration files by their format.\n%%%\n%%% @todo store the raw value and the parsed value in the store.\n%%% Useful for debugging and analysis.\n%%%\n%%% @todo find a way to deal with a transition when a file is modified\n%%% locally.\n%%%\n%%% @todo what to do if a configuration file can't be loaded?\n%%%\n%%% @todo should we store the parsed configuration files?\n%%%\n%%% @todo should we follow the files in case of modification\n%%%       (e.g. inotify)?\n%%%\n%%% @todo add support for glob pattern\n%%%\n%%% @end\n%%%===================================================================\n-module(arweave_config_file).\n-compile(warnings_as_errors).\n-behavior(gen_server).\n-export([\n\tstart_link/0,\n\tadd/1,\n\tget/0,\n\tget_by_path/1,\n\tget_paths/0,\n\tload/0,\n\tload/1,\n\treset/0,\n\tparsers/0\n]).\n-export([\n\tparse/1,\n\tparse/2,\n\tcheck_path/1,\n\tidentify_parser/1,\n\tparse_data/1\n]).\n-export([init/1]).\n-export([handle_call/3, handle_cast/2, handle_info/2]).\n\n%%--------------------------------------------------------------------\n%% @doc start arweave_config_file process.\n%% @end\n%%--------------------------------------------------------------------\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\n\n%%--------------------------------------------------------------------\n%% @doc Returns the list of supported parsers based on the file\n%% extension. 
If another extension needs to be supported, this parsers\n%% list can be modified (ensure the test suite is working though).\n%% @end\n%%--------------------------------------------------------------------\n-spec parsers() -> #{ binary() => atom() }.\n\nparsers() ->\n\t#{\n\t\t<<\".json\">> => arweave_config_format_json,\n\t\t<<\".yaml\">> => arweave_config_format_yaml,\n\t\t<<\".toml\">> => arweave_config_format_toml,\n\t\t<<\".ljson\">> => arweave_config_format_legacy\n\t}.\n\n%%--------------------------------------------------------------------\n%% @doc Add a new configuration file path. The configuration must be\n%% valid. The format of the file is identified using the postfix (e.g.\n%% json, yaml, toml, ljson).\n%% @end\n%%--------------------------------------------------------------------\n-spec add(Path) -> Return when\n\tPath :: string() | binary(),\n\tReturn :: {ok, [map()]}.\n\nadd(Path) when is_list(Path) ->\n\tadd(list_to_binary(Path));\nadd(Path) ->\n\tgen_server:call(?MODULE, {add, Path}, 1000).\n\n%%--------------------------------------------------------------------\n%% @doc Get the final merged configuration.\n%% @end\n%%--------------------------------------------------------------------\n-spec get() -> Return when\n\tReturn :: map().\n\nget() ->\n\tgen_server:call(?MODULE, get, 1000).\n\n%%--------------------------------------------------------------------\n%% @doc Get the configuration file from a stored path.\n%% @end\n%%--------------------------------------------------------------------\n-spec get_by_path(Path) -> Return when\n\tPath :: string() | binary(),\n\tReturn :: {ok, {Timestamp, map()}} | {error, term()},\n\tTimestamp :: pos_integer().\n\nget_by_path(Path) when is_list(Path) ->\n\tget_by_path(list_to_binary(Path));\nget_by_path(Path) ->\n\tgen_server:call(?MODULE, {get, Path}, 1000).\n\n%%--------------------------------------------------------------------\n%% @doc Get the list of configuration file stored.\n%% @end\n%%--------------------------------------------------------------------\n-spec get_paths() -> [binary()].\n\nget_paths() ->\n\tgen_server:call(?MODULE, get_paths, 1000).\n\n%%--------------------------------------------------------------------\n%% @doc Load the merged configuration in arweave_config.\n%% @end\n%%--------------------------------------------------------------------\n-spec load() -> Return when\n\tReturn :: ok | timeout.\n\nload() ->\n\tgen_server:call(?MODULE, load, 1000).\n\n%%--------------------------------------------------------------------\n%% @doc Load a specific stored configuration file in arweave_config.\n%% @end\n%%--------------------------------------------------------------------\n-spec load(Path) -> Return when\n\tPath :: string() | binary(),\n\tReturn :: {ok, map()}.\n\nload(Path) when is_list(Path) ->\n\tload(list_to_binary(Path));\nload(Path) ->\n\tgen_server:call(?MODULE, {load, Path}, 1000).\n\n%%--------------------------------------------------------------------\n%% @doc\n%% @end\n%%--------------------------------------------------------------------\n-spec reset() -> ok.\n\nreset() ->\n\tgen_server:call(?MODULE, reset, 1000).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit(_) ->\n\tStore = ets:new(?MODULE, [\n\t\tnamed_table,\n\t\tordered_set,\n\t\tprotected\n\t]),\n\t{ok, Store}.\n\n%%--------------------------------------------------------------------\n%% 
@hidden\n%%--------------------------------------------------------------------\nhandle_call({add, Path}, _From, Store) ->\n\thandle_add(Path, Store);\nhandle_call(get, _From, Store) ->\n\thandle_get(Store);\nhandle_call({get, Path}, _From, Store) ->\n\thandle_get_path(Path, Store);\nhandle_call(get_paths, _From, Store) ->\n\thandle_get_paths(Store);\nhandle_call(load, _From, Store) ->\n\thandle_load(Store);\nhandle_call({load, Path}, _From, Store) ->\n\thandle_load(Path, Store);\nhandle_call(reset, _From, Store) ->\n\thandle_reset(Store);\nhandle_call(_Msg, _From, State) ->\n\t{reply, ok, State}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nhandle_info(_Msg, State) ->\n\t{noreply, State}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nhandle_cast(_Msg, State) ->\n\t{noreply, State}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nhandle_get(Store) ->\n\tcase ets:lookup(?MODULE, merge) of\n\t\t[] ->\n\t\t\t{reply, #{}, Store};\n\t\t[{merge, Config}] ->\n\t\t\t{reply, Config, Store}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nhandle_get_paths(Store) ->\n\tPattern = {{config, '$1'}, '_'},\n\tGuard = [],\n\tFormat = ['$1'],\n\tReturn = ets:select(?MODULE, [{Pattern, Guard, Format}]),\n\t{reply, Return, Store}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nhandle_get_path(Path, Store) ->\n\tPattern = {{config, Path}, '$2'},\n\tGuard = [],\n\tFormat = ['$2'],\n\tcase ets:select(?MODULE, [{Pattern, Guard, Format}]) of\n\t\t[Config] ->\n\t\t\t{reply, {ok, Config}, Store};\n\t\t_ ->\n\t\t\t{reply, {error, not_found}, Store}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nhandle_load(Store) ->\n\t{reply, Config, _} = handle_get(Store),\n\tmaps:map(fun (K, V) ->\n\t\tarweave_config:set(K, V)\n\tend, Config),\n\t{reply, ok, Store}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nhandle_load(Path, Store) ->\n\tcase handle_get_path(Path, Store) of\n\t\t{reply, {ok, {_, Config}}, _} ->\n\t\t\tmaps:map(fun (K, V) ->\n\t\t\t\tarweave_config:set(K, V)\n\t\t\tend, Config),\n\t\t\t{reply, ok, Store};\n\t\tElse ->\n\t\t\tElse\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nhandle_add(Path, Store) ->\n\tcase parse(Path) of\n\t\t{ok, {ValidPath, Config}} ->\n\t\t\tKey = {config, ValidPath},\n\t\t\tValue = {erlang:system_time(), Config},\n\t\t\tets:insert(?MODULE, {Key, Value}),\n\t\t\thandle_merge(Store);\n\t\tElse ->\n\t\t\t{reply, Else, Store}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nhandle_merge(Store) ->\n\tMerged = ets:foldl(fun\n\t\t\t({{config, _}, {_, Config}}, Acc) 
->\n\t\t\t\tmaps:merge(Acc, Config);\n\t\t\t(_, Acc) ->\n\t\t\t\tAcc\n\t\tend,\n\t\t#{},\n\t\t?MODULE\n\t),\n\tets:insert(?MODULE, {merge, Merged}),\n\t{reply, {ok, Merged}, Store}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nhandle_reset(Store) ->\n\tets:delete_all_objects(Store),\n\t{reply, ok, Store}.\n\n%%--------------------------------------------------------------------\n%% @doc parses a path.\n%% @see parse/2\n%% @end\n%%--------------------------------------------------------------------\n-spec parse(Path) -> Return when\n\tPath :: binary() | string(),\n\tReturn :: {ok, {Path, map()}} | {error, term()}.\n\nparse(Path) ->\n\tparse(Path, #{}).\n\n%%--------------------------------------------------------------------\n%% @doc parses a path.\n%% @end\n%%--------------------------------------------------------------------\n-spec parse(Path, Opts) -> Return when\n\tPath :: binary() | string(),\n\tOpts :: map(),\n\tReturn :: {ok, {Path, map()}} | {error, term()}.\n\nparse(Path, Opts) ->\n\tState = #{\n\t\topts => Opts,\n\t\tpath => Path\n\t},\n\tarweave_config_fsm:init(?MODULE, check_path, State).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc check the path used for the configuration file.\n%% @end\n%%--------------------------------------------------------------------\ncheck_path(_State = #{ path := Path }) ->\n\tcase arweave_config_file_path:check(Path) of\n\t\t{ok, Data, NewState} ->\n\t\t\t{next, identify_parser, NewState#{ data => Data }};\n\t\tElse ->\n\t\t\tElse\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc check the file extension (generated during the check), and\n%% select the parser.\n%% @end\n%%--------------------------------------------------------------------\nidentify_parser(State = #{ file_extension := Extension }) ->\n\tParsers = parsers(),\n\tcase maps:get(Extension, Parsers, undefined) of\n\t\tundefined ->\n\t\t\t{error, \"unsupported file\"};\n\t\tParser when is_atom(Parser) ->\n\t\t\tNewState = State#{\n\t\t\t\tparser => Parser\n\t\t\t},\n\t\t\t{next, parse_data, NewState};\n\t\t_ ->\n\t\t\t{error, \"unsupported extension or parser\"}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc return the path (absolute) and the configuration (parsed).\n%% @end\n%%--------------------------------------------------------------------\nparse_data(_State = #{ path := Path, data := Data, parser := Parser }) ->\n\tcase Parser:parse(Data) of\n\t\t{ok, Config} ->\n\t\t\t{ok, {Path, Config}};\n\t\tElse ->\n\t\t\tElse\n\tend.\n"
  },
  {
    "path": "apps/arweave_config/src/arweave_config_file_path.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2026 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @doc Arweave Configuration File Checker.\n%%%\n%%% This module contains the code to check configuration files used by\n%%% arweave and it is used by all `arweave_config_file_*' module. The\n%%% goal is to have understandable and flexible rules easy to reuse in\n%%% other part of the code.\n%%%\n%%% At the end of the pipeline, the path must have been fully checked\n%%% and its content stored in `data' map key.\n%%%\n%%% Here the check steps:\n%%%\n%%% 1. check if the path has the right type (binary).\n%%%\n%%% 2. check if the path is absolute (or convert it).\n%%%\n%%% 3. check if the file exists.\n%%%\n%%% 4. check if the file is readable.\n%%%\n%%% 5. check if the file extension is correct.\n%%%\n%%% 6. read the content of the file.\n%%%\n%%% @end\n%%%===================================================================\n-module(arweave_config_file_path).\n-compile(warnings_as_errors).\n-export([\n\tcheck/1,\n\tinit/1,\n\tcheck_path_type/1,\n\tcheck_path/1,\n\tcheck_relative_path/1,\n\tcheck_file_mode/1,\n\textract_directory/1,\n\textract_extension/1,\n\tread_file/1\n]).\n-include_lib(\"kernel/include/file.hrl\").\n\n%%--------------------------------------------------------------------\n%% @doc\n%% @end\n%%--------------------------------------------------------------------\ncheck(Path) ->\n\tarweave_config_fsm:init(\n\t\t?MODULE,\n\t\tinit,\n\t\t#{\n\t\t\tpath => Path\n\t\t}\n\t).\n\n%%--------------------------------------------------------------------\n%% @private\n%% @doc init the file checker.\n%% @end\n%%--------------------------------------------------------------------\n-spec init(State) -> Return when\n\tState :: map(),\n\tReturn :: term().\n\ninit(State) ->\n\tcase file:get_cwd() of\n\t\t{ok, Cwd} ->\n\t\t\tNewState = State#{\n\t\t\t\tcwd => Cwd\n\t\t\t},\n\t\t\t{next, check_path_type, NewState};\n\t\t_ ->\n\t\t\t{error, \"can't find current working directory\"}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc Check the Erlang path type. To avoid type confusion, all\n%% configuration file path are encoded using binary type and not list.\n%% @end\n%%--------------------------------------------------------------------\ncheck_path_type(State = #{ path := Path }) when is_list(Path) ->\n\tNewState = State#{\n\t\tpath => list_to_binary(Path)\n\t},\n\t{next, check_path, NewState};\ncheck_path_type(State = #{ path := Path }) when is_binary(Path) ->\n\t{next, check_path, State};\ncheck_path_type(_State) ->\n\t{error, \"bad path type\"}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc Check if a path is relative or absolute. 
Relative paths should\n%% not be allowed, only absolute path should be used.\n%% @end\n%%--------------------------------------------------------------------\ncheck_path(State = #{ path := Path }) ->\n\tcase filename:pathtype(Path) of\n\t\trelative ->\n\t\t\t{next, check_relative_path, State};\n\t\tabsolute ->\n\t\t\t{next, check_file_mode, State}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc Check if the relative file path is safe (without \"../\"). We\n%% don't want the application to load unwanted files, this is a first\n%% protection against potential file injections.\n%% @end\n%%--------------------------------------------------------------------\ncheck_relative_path(State = #{ path := Path, cwd := Cwd }) ->\n\tcase filelib:safe_relative_path(Path, Cwd) of\n\t\tunsafe ->\n\t\t\t{error, \"unsafe path\"};\n\t\t_ ->\n\t\t\tAbsolutePath = filename:absname(Path),\n\t\t\tNewState = State#{\n\t\t\t\torigin_path => Path,\n\t\t\t\tpath => AbsolutePath\n\t\t\t},\n\t\t\t{next, check_file_mode, NewState}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc Check file mode. A file must be readable at least.\n%% @end\n%%--------------------------------------------------------------------\ncheck_file_mode(State = #{ path := Path }) ->\n\tcase file:read_file_info(Path) of\n\t\t{ok, #file_info{\n\t\t\ttype = regular,\n\t\t\taccess = read\n\t\t\t}\n\t\t} ->\n\t\t\t{next, extract_extension, State};\n\t\t{ok, #file_info{\n\t\t\ttype = regular,\n\t\t\taccess = read_write\n\t\t\t}\n\t\t} ->\n\t\t\t{next, extract_extension, State};\n\t\t_Else ->\n\t\t\t{error, \"bad path\"}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc Extract the file extension and store it in `file_extension'.\n%% @end\n%%--------------------------------------------------------------------\nextract_extension(State = #{ path := Path }) ->\n\tExtension = filename:extension(Path),\n\tNewState = State#{\n\t\tfile_extension => Extension\n\t},\n\t{next, extract_directory, NewState}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc Extract the directory where the file is stored in\n%% `file_directory'.\n%% @end\n%%--------------------------------------------------------------------\nextract_directory(State = #{ path := Path }) ->\n\tDir = filename:dirname(Path),\n\tNewState = State#{\n\t\tfile_directory => Dir\n\t},\n\t{next, read_file, NewState}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc read file content.\n%% @end\n%%--------------------------------------------------------------------\nread_file(State = #{ path := Path }) ->\n\t% At this step, the application must be able to read the file,\n\t% except if a race condition appears and the file removed or\n\t% ownership/mode are changed.\n\t{ok, Data} = file:read_file(Path),\n\t{ok, Data, State}.\n"
  },
  {
    "path": "apps/arweave_config/src/arweave_config_format_json.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2026 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @doc Arweave Configuration JSON Format Support.\n%%%\n%%% @reference https://www.json.org/json-en.html\n%%% @reference https://github.com/davisp/jiffy\n%%% @end\n%%%===================================================================\n-module(arweave_config_format_json).\n-compile(warnings_as_errors).\n-export([\n\tparse/1,\n\tparse/2\n]).\n-export([\n\tdecode_data/1,\n\tparse_config/1\n]).\n\n%%--------------------------------------------------------------------\n%% @doc Parses JSON data.\n%% @see parse/2\n%% @end\n%%--------------------------------------------------------------------\n-spec parse(Data) -> Return when\n\tData :: string() | binary(),\n\tReturn :: {ok, map()} | {error, term()}.\n\nparse(Data) ->\n\tparse(Data, #{}).\n\n%%--------------------------------------------------------------------\n%% @doc Parses JSON data.\n%% @end\n%%--------------------------------------------------------------------\n-spec parse(Data, Opts) -> Return when\n\tData :: string() | binary(),\n\tOpts :: map(),\n\tReturn :: {ok, map()} | {error, term()}.\n\nparse(Data, Opts) ->\n\tState = #{\n\t\tdata => Data,\n\t\topts => Opts\n\t},\n\tarweave_config_fsm:init(?MODULE, decode_data, State).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc parses a string or a binary with JSON decoder.\n%% @end\n%%--------------------------------------------------------------------\ndecode_data(_State = #{ data := <<>> }) ->\n\t{ok, #{}};\ndecode_data(_State = #{ data := [] }) ->\n\t{ok, #{}};\ndecode_data(State = #{ data := Data }) ->\n\ttry\n\t\tJson = jiffy:decode(Data, [return_maps]),\n\t\tNewState = State#{\n\t\t\tjson => Json\n\t\t},\n\t\t{next, parse_config, NewState}\n\tcatch\n\t\t_Error:{Position, Reason} ->\n\t\t\t{error, #{\n\t\t\t\t\treason => Reason,\n\t\t\t\t\tposition => Position\n\t\t\t\t}\n\t\t\t}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc converts a json into a configuration file\n%% @end\n%%--------------------------------------------------------------------\nparse_config(_State = #{ json := Json }) ->\n\tarweave_config_serializer:encode(Json).\n\n"
  },
  {
    "path": "apps/arweave_config/src/arweave_config_format_legacy.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2026 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @doc Arweave Configuration Legacy Format Support.\n%%%\n%%% @see ar_config\n%%% @end\n%%% @todo convert config record to arweave_config spec.\n%%%===================================================================\n-module(arweave_config_format_legacy).\n-compile(warnings_as_errors).\n-export([\n\tparse/1,\n\tparse/2\n]).\n-export([\n\tdecode_data/1\n]).\n-include_lib(\"kernel/include/file.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n%%--------------------------------------------------------------------\n%% @doc Parses a JSON using legacy parser.\n%% @see parse/2\n%% @end\n%%--------------------------------------------------------------------\n-spec parse(Data) -> Return when\n\tData :: string() | binary(),\n\tReturn :: {ok, map()} | {error, term()}.\n\nparse(Data) ->\n\tparse(Data, #{}).\n\n%%--------------------------------------------------------------------\n%% @doc Parses a JSON using legacy parser.\n%% @end\n%%--------------------------------------------------------------------\n-spec parse(Data, Opts) -> Return when\n\tData :: string() | binary(),\n\tOpts :: map(),\n\tReturn :: {ok, map()} | {error, term()}.\n\nparse(Data, Opts) ->\n\tState = #{\n\t\topts => Opts,\n\t\tdata => Data\n\t},\n\tarweave_config_fsm:init(?MODULE, decode_data, State).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc\n%% @end\n%%--------------------------------------------------------------------\ndecode_data(_State = #{ data := \"\"}) ->\n\t{ok, #config{}};\ndecode_data(_State = #{ data := <<>>}) ->\n\t{ok, #config{}};\ndecode_data(State = #{ data := Data }) when is_list(Data) ->\n\tNewState = State#{\n\t\tdata => list_to_binary(Data)\n\t},\n\tdecode_data(NewState);\ndecode_data(_State = #{ data := Data }) ->\n\tcase ar_config:parse(Data) of\n\t\t{ok, LegacyConfig} ->\n\t\t\t{ok, LegacyConfig};\n\t\t{error, Reason, _} ->\n\t\t\t{error, Reason};\n\t\t{error, Reason} ->\n\t\t\t{error, Reason}\n\tend.\n"
  },
  {
    "path": "apps/arweave_config/src/arweave_config_format_toml.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2026 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @doc Arweave Configuration TOML Format Support.\n%%%\n%%% @reference https://toml.io/en/\n%%% @reference https://github.com/filmor/tomerl\n%%% @end\n%%%===================================================================\n-module(arweave_config_format_toml).\n-compile(warnings_as_errors).\n-export([\n\tparse/1,\n\tparse/2\n]).\n-export([\n\tdecode_data/1,\n\tparse_config/1\n\n]).\n-include_lib(\"kernel/include/file.hrl\").\n\n%%--------------------------------------------------------------------\n%% @doc Parse TOML data.\n%% @see parse/2\n%% @end\n%%--------------------------------------------------------------------\n-spec parse(Data) -> Return when\n\tData :: string() | binary(),\n\tReturn :: {ok, map()} | {error, term()}.\n\nparse(Data) ->\n\tparse(Data, #{}).\n\n%%--------------------------------------------------------------------\n%% @doc Parse TOML data.\n%% @end\n%%--------------------------------------------------------------------\n-spec parse(Data, Opts) -> Return when\n\tData :: string() | binary(),\n\tOpts :: map(),\n\tReturn :: {ok, map()} | {error, term()}.\n\nparse(Data, Opts) ->\n\tState = #{\n\t\topts => Opts,\n\t\tdata => Data\n\t},\n\tarweave_config_fsm:init(?MODULE, decode_data, State).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc\n%% @end\n%%--------------------------------------------------------------------\ndecode_data(State = #{ data := Data }) ->\n\tcase tomerl:parse(Data) of\n\t\t{ok, Parsed} ->\n\t\t\t{next, parse_config, State#{ config => Parsed }};\n\t\t{error, Reason} ->\n\t\t\t{error, Reason};\n\t\tElse ->\n\t\t\tElse\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc\n%% @end\n%%--------------------------------------------------------------------\nparse_config(#{ config := Config }) ->\n\tarweave_config_serializer:encode(Config).\n"
  },
  {
    "path": "apps/arweave_config/src/arweave_config_format_yaml.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2026 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @doc Arweave Configuration YAML Format Support.\n%%%\n%%% @reference https://yaml.org/\n%%% @reference https://github.com/yakaz/yamerl/\n%%% @end\n%%%===================================================================\n-module(arweave_config_format_yaml).\n-compile(warnings_as_errors).\n-export([\n\tparse/1,\n\tparse/2,\n\tproplist_to_map/1\n]).\n-export([\n\tdecode_data/1,\n\tdecode_proplist/1,\n\tparse_config/1\n\n]).\n-include_lib(\"kernel/include/file.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n%%--------------------------------------------------------------------\n%% @doc Parse YAML data.\n%% @see parse/2\n%% @end\n%%--------------------------------------------------------------------\n-spec parse(Data) -> Return when\n\tData :: string() | binary(),\n\tReturn :: {ok, map()} | {error, term()}.\n\nparse(Data) ->\n\tparse(Data, #{}).\n\n%%--------------------------------------------------------------------\n%% @doc Parse YAML data.\n%% @end\n%%--------------------------------------------------------------------\n-spec parse(Data, Opts) -> Return when\n\tData :: string() | binary(),\n\tOpts :: map(),\n\tReturn :: {ok, map()} | {error, term()}.\n\nparse(Data, Opts) ->\n\tState = #{\n\t\topts => Opts,\n\t\tdata => Data\n\t},\n\tarweave_config_fsm:init(?MODULE, decode_data, State).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc convert yaml data to erlang terms.\n%% @end\n%%--------------------------------------------------------------------\ndecode_data(State = #{ data := Data }) ->\n\ttry\n\t\tyamerl:decode(Data)\n\tof\n\t\t[] ->\n\t\t\tNewState = State#{ proplist => [] },\n\t\t\t{next, decode_proplist, NewState};\n\t\t[Proplist] ->\n\t\t\tNewState = State#{ proplist => Proplist },\n\t\t\t{next, decode_proplist, NewState};\n\t\t_Else ->\n\t\t\t{error, multi_yaml_unsupported}\n\tcatch\n\t\t_:Reason ->\n\t\t\t{error, Reason}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc convert proplist to map.\n%% @end\n%%--------------------------------------------------------------------\ndecode_proplist(State = #{ proplist := Proplist }) ->\n\ttry\n\t\tParsed = proplist_to_map(Proplist),\n\t\tNewState = State#{ config => Parsed },\n\t\t{next, parse_config, NewState}\n\tcatch\n\t\t_:Reason ->\n\t\t\t{error, Reason}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc returns the final configuration spec.\n%% @end\n%%--------------------------------------------------------------------\nparse_config(#{ config := Config }) ->\n\tarweave_config_serializer:encode(Config).\n\n%%--------------------------------------------------------------------\n%% @doc recursively convert a proplist to a map. 
`yamerl' does not\n%% return a map and then we need to convert the proplist returned\n%% recursively.\n%% @end\n%%--------------------------------------------------------------------\n-spec proplist_to_map(Proplist) -> Return when\n\tProplist :: proplists:proplist(),\n\tReturn :: map().\n\nproplist_to_map(Proplist) ->\n\tproplist_to_map(Proplist, #{}).\n\nproplist_to_map_test() ->\n\t?assertEqual(\n\t\t#{},\n\t\tproplist_to_map([])\n\t),\n\n\t?assertEqual(\n\t\t#{ <<\"1\">> => 2 },\n\t\tproplist_to_map([{1,2}])\n\t),\n\n\t?assertEqual(\n\t\t#{\n\t\t\t<<\"test\">> => #{\n\t\t\t\t<<\"a\">> => <<\"b\">>,\n\t\t\t\t<<\"c\">> => <<\"d\">>\n\t\t\t}\n\t\t},\n\t\tproplist_to_map([\n\t\t\t{test,[\n\t\t\t\t{\"a\", \"b\"},\n\t\t\t\t{c, d}\n\t\t\t]}\n\t\t])\n\t).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc\n%% @end\n%%--------------------------------------------------------------------\nproplist_to_map([], Buffer) ->\n\tBuffer;\nproplist_to_map([{K,V = [{_,_}|_]}|Rest], Buffer) ->\n\t% if a value is made of tuple, we assume this is an object,\n\t% then, we convert it to map.\n\tRecurse = proplist_to_map(V),\n\tKey = converter_key(K),\n\tproplist_to_map(Rest, Buffer#{ Key => Recurse });\nproplist_to_map([{K, V}|Rest], Buffer) when is_list(V) ->\n\t% if a value is a list we assume this is a string and it\n\t% is converted to binary.\n\tKey = converter_key(K),\n\tValue = converter_value(V),\n\tproplist_to_map(Rest, Buffer#{ Key => Value });\nproplist_to_map([{K,V}|Rest], Buffer) ->\n\tKey = converter_key(K),\n\tValue = converter_value(V),\n\tproplist_to_map(Rest, Buffer#{ Key => Value }).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc\n%% @end\n%%--------------------------------------------------------------------\nconverter_key(Key) when is_integer(Key) ->\n\tinteger_to_binary(Key);\nconverter_key(Key) when is_atom(Key) ->\n\tatom_to_binary(Key);\nconverter_key(Key) when is_list(Key) ->\n\tlist_to_binary(Key);\nconverter_key(Key) ->\n\tKey.\n\nconverter_value(Value) when is_atom(Value) ->\n\tatom_to_binary(Value);\nconverter_value(Value) when is_list(Value) ->\n\tlist_to_binary(Value);\nconverter_value(Value) ->\n\tValue.\n"
  },
  {
    "path": "apps/arweave_config/src/arweave_config_fsm.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2026 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @doc An internal FSM implementation for Arweave Config.\n%%%\n%%% This kind of abstraction helps to create tests on individual part\n%%% of functions, based on the returned values. With it, one controls\n%%% both the input and the output. Side effects functions can also be\n%%% isolated easily.\n%%%\n%%% The state passed to all callback function is not enforced, and can\n%%% be of any Erlang type.\n%%%\n%%% == Examples ==\n%%%\n%%% `arweave_config_fsm' is a really simple finite state machine,\n%%% inspired by `gen_statem' and `gen_fsm'. Each function returns a\n%%% tuple with the next function to call. The state is passed to the\n%%% next function, until one function is returning `{ok, term()}' to\n%%% end the pipeline.\n%%%\n%%% ```\n%%% -module(fsm_test).\n%%% -export([start/1]).\n%%% -export([first/1, second/1, third/1]).\n%%%\n%%% % starter\n%%% start(Opts) ->\n%%%   State = #{},\n%%%   arweave_config_fsm:do_loop(?MODULE, first, State).\n%%%\n%%% % callback transition\n%%% first(State) ->\n%%%   {next, second, State#{ first => ok }}.\n%%%\n%%% % module callback transition and error return\n%%% second(State = #{ first := ok }) ->\n%%%   {next, {?MODULE, third}, State#{ second => ok }}.\n%%% second(#{ first := error }) ->\n%%%   {error, \"first function failed\"}.\n%%%\n%%% % loop (dangerous) support and final return value\n%%% third(#{ reset := true }) ->\n%%%   {next, first, #{ state => loop });\n%%% third(State) ->\n%%%   {ok, final_result}.\n%%% '''\n%%%\n%%% == Metadata for Tracing and Debugging Purpose ==\n%%%\n%%% Debugging and tracing an execution pipeline can sometime be\n%%% complex, even with Erlang tooling. This fsm implement a way to\n%%% collect those information on demand, by setting the flag `meta' to\n%%% `true' during initialization phase.\n%%%\n%%% ```\n%%% -module(t).\n%%% -export([start/0, my_function/1]).\n%%%\n%%% start() ->\n%%%   {ok, value, Metadata} = arweave_config_fsm:init(\n%%%     ?MODULE,\n%%%     my_function,\n%%%     #{ meta => true },\n%%%     my_state\n%%%   ).\n%%%\n%%% my_function(State) ->\n%%%   {ok, value}.\n%%% '''\n%%%\n%%% `Metadata' variable will contain the execution history with the\n%%% timestamp when it was executed and the module/function callback. A\n%%% counter is also available to have the number of step executed.\n%%%\n%%% Note: this will have a small impact on the performance, but it\n%%% should be negligible. In case of a long and complex pipeline, the\n%%% size of history can grow dangerously and use lot of memory.\n%%%\n%%% == TODO ==\n%%%\n%%% Better errors management (e.g. specific error message when a\n%%% callback module or a callback function are not atom).\n%%%\n%%% Custom options (e.g. 
return metadata or send them to another\n%%% process during execution).\n%%%\n%%% Enforce behavior.\n%%%\n%%% Add delay/timeout support.\n%%%\n%%% Permits to filter what we want to store in meta-data, and/or allow\n%%% lambda function to collect custom data.\n%%%\n%%% Add a debug state where the returned values are stored.\n%%%\n%%% @end\n%%%===================================================================\n-module(arweave_config_fsm).\n-compile(warnings_as_errors).\n-export([init/3, init/4]).\n-include_lib(\"kernel/include/logger.hrl\").\n\n%%--------------------------------------------------------------------\n%% type definition, usefull for behavior feature and DRY.\n%%--------------------------------------------------------------------\n% a module callback as atom.\n-type fsm_module() :: atom().\n\n% a function callbcak as atom.\n-type fsm_callback() :: atom().\n\n% available fsm options/parameters.\n-type fsm_opts() :: #{\n\tmeta => boolean()\n}.\n\n% fsm state defined by developer during initialization or during\n% execution.\n-type fsm_state() :: term().\n\n% fsm metadata used to store and collect stats and information about\n% fsm execution.\n-type fsm_meta_history() :: #{\n\ttimestamp => pos_integer(),\n\tmodule => fsm_module(),\n\tcallback => fsm_callback(),\n\tprocess_info => map(),\n\tpid => pid()\n}.\n-type fsm_metadata() :: #{\n\tmeta => boolean(),\n\thistory => [fsm_meta_history()],\n\tcounter => pos_integer()\n}.\n\n% values returned by the callback defined by the developer.\n-type fsm_callback_return() ::\n\tmeta |\n\t{ok, term()} |\n\t{next, fsm_callback(), fsm_state()} |\n\t{next, fsm_module(), fsm_callback(), fsm_state()} |\n\t{error, term()} |\n\t{error, term(), fsm_state()}.\n\n% values always returned by the fsm after the final callback.\n-type fsm_return() ::\n\t{ok, term()} |\n\t{ok, term(), fsm_state() | fsm_metadata()} |\n\t{ok, term(), fsm_state(), fsm_metadata()} |\n\t{error, term()} |\n\t{error, term(), fsm_state()}.\n\n%%--------------------------------------------------------------------\n%% if one wants to use it as behavior, callback_name is an example\n%% function, and no errors/warnings will be reported if this one is\n%% not explicitely defined.\n%%--------------------------------------------------------------------\n-callback(\n\tcallback_name(fsm_state()) -> fsm_callback_return()\n).\n-optional_callbacks(callback_name/1).\n\n%%--------------------------------------------------------------------\n%% @doc `arweave_config_fsm' initialize/starter function.\n%% @see init/4\n%% @end\n%%--------------------------------------------------------------------\n-spec init(Module, Callback, State) -> Return when\n\tModule :: fsm_module(),\n\tCallback :: fsm_callback(),\n\tState :: fsm_state(),\n\tReturn :: fsm_return().\n\ninit(Module, Callback, State) ->\n\tinit(Module, Callback, #{}, State).\n\n%%--------------------------------------------------------------------\n%% @doc `arweave_config_fsm' initialize/starter function. 
A meta\n%% parameter can be configured during initialization, storing\n%% information about the pipelined functions and offering a way to\n%% trace the execution of the fsm.\n%% @end\n%%--------------------------------------------------------------------\n-spec init(Module, Callback, Opts, State) -> Return when\n\tModule :: fsm_module(),\n\tCallback :: fsm_callback(),\n\tOpts :: fsm_opts(),\n\tState :: fsm_state(),\n\tReturn :: fsm_return().\n\ninit(Module, Callback, Opts, State) ->\n\tinit_meta(Module, Callback, Opts, State).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc initialize metadata.\n%% @end\n%%--------------------------------------------------------------------\ninit_meta(Module, Callback, Opts = #{ meta := true }, State) ->\n\tMeta = #{\n\t\topts => Opts,\n\t\tmeta => maps:get(meta, Opts, false),\n\t\thistory => [meta_history(Module, Callback)],\n\t\tcounter => 0\n\t},\n\tdo_loop(Module, Callback, State, Meta);\ninit_meta(Module, Callback, _Opts, State) ->\n\tdo_loop(Module, Callback, State, #{}).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc main loop where all callbacks are executed.\n%% @end\n%%--------------------------------------------------------------------\n-spec do_loop(Module, Callback, State, Meta) -> Return when\n\tModule :: fsm_module(),\n\tCallback :: fsm_callback(),\n\tState :: fsm_state(),\n\tMeta :: fsm_metadata(),\n\tReturn :: fsm_return().\n\ndo_loop(Module, Callback, State, Meta) ->\n\ttry\n\t\tReturn = erlang:apply(Module, Callback, [State]),\n\t\tdo_eval(\n\t\t\tModule,\n\t\t\tCallback,\n\t\t\tReturn,\n\t\t\tState,\n\t\t\tupdate_meta(Module, Callback, Meta)\n\t\t)\n\tcatch\n\t\t_Error:Reason ->\n\t\t\tdo_eval(\n\t\t\t\tModule,\n\t\t\t\tCallback,\n\t\t\t\t{error, Reason},\n\t\t\t\tState,\n\t\t\t\tupdate_meta(Module, Callback, Meta)\n\t\t\t )\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc evaluate the return of a callback and do the routing part.\n%% @end\n%%--------------------------------------------------------------------\n-spec do_eval(Module, Callback, CallbackReturn, State, Meta) -> Return when\n\tModule :: fsm_module(),\n\tCallback :: fsm_callback(),\n\tCallbackReturn :: fsm_callback_return(),\n\tState :: fsm_state(),\n\tMeta :: fsm_metadata(),\n\tReturn :: fsm_return().\n\n% return latested value with meta information and stop the fsm.\ndo_eval(_Module, _Callback, {ok, Return}, _State, Meta = #{ meta := true }) ->\n\t{ok, Return, Meta};\n\n% return latest value with meta information and state then stop the\n% fsm.\ndo_eval(_Module, _Callback, {ok, Return, NewState}, _State, Meta = #{ meta := true}) ->\n\t{ok, Return, NewState, Meta};\n\n% final evaluation, return the value from the callback.\ndo_eval(_Module, _Callback, {ok, Return}, _State, _Meta) ->\n\t{ok, Return};\n\n% final evaluation, return the value from the callback and its last\n% state.\ndo_eval(_Module, _Callback, {ok, Return, NewState}, _State, _Meta) ->\n\t{ok, Return, NewState};\n\n% return meta information and stop the fsm.\ndo_eval(_Module, _Callback, meta, _State, Meta) ->\n\t{meta, Meta};\n\n% fsm transition with a new callback and a new state on the same module\n% callback.\ndo_eval(Module, _Callback, {next, NextCallback, NewState}, _State, Meta)\n\twhen is_atom(NextCallback) ->\n\t\tdo_loop(\n\t\t\tModule,\n\t\t\tNextCallback,\n\t\t\tNewState,\n\t\t\tMeta\n\t\t);\n\n% fsm transition with a new state. 
this function will switch to\n% another module and another callback with a new state.\ndo_eval(_Module, _Callback, {next, NextModule, NextCallback, NewState}, _State, Meta)\n\twhen is_atom(NextModule), is_atom(NextCallback) ->\n\t\tdo_loop(\n\t\t\tNextModule,\n\t\t\tNextCallback,\n\t\t\tNewState,\n\t\t\tMeta\n\t\t);\n\n% fsm error management with debugging feature for traceability.\ndo_eval(Module, Callback, {error, Reason}, State, Meta) ->\n\t{error, #{\n\t\t\tdebug => #{\n\t\t\t\tmodule => ?MODULE,\n\t\t\t\tfunction => ?FUNCTION_NAME,\n\t\t\t\tfunction_arity => ?FUNCTION_ARITY,\n\t\t\t\tline => ?LINE\n\t\t\t},\n\t\t\treason => Reason,\n\t\t\tmodule => Module,\n\t\t\tcallback => Callback,\n\t\t\tstate => State,\n\t\t\tmeta => Meta\n\t\t }\n\t};\ndo_eval(Module, Callback, {error, Reason, NewState}, State, Meta) ->\n\t{error, #{\n\t\t\tdebug => #{\n\t\t\t\tmodule => ?MODULE,\n\t\t\t\tfunction => ?FUNCTION_NAME,\n\t\t\t\tfunction_arity => ?FUNCTION_ARITY,\n\t\t\t\tline => ?LINE\n\t\t\t},\n\t\t\treason => Reason,\n\t\t\tmodule => Module,\n\t\t\tcallback => Callback,\n\t\t\tstate => State,\n\t\t\tnew_state => NewState,\n\t\t\tmeta => Meta\n\t\t }, NewState\n\t};\n\n% fsm error callback returned value.\ndo_eval(Module, Callback, Return, State, Meta) ->\n\t{error, #{\n\t\t\tdebug => #{\n\t\t\t\tmodule => ?MODULE,\n\t\t\t\tfunction => ?FUNCTION_NAME,\n\t\t\t\tfunction_arity => ?FUNCTION_ARITY,\n\t\t\t\tline => ?LINE\n\t\t\t},\n\t\t\treason => unsupported_return,\n\t\t\treturn => Return,\n\t\t\tmodule => Module,\n\t\t\tcallback => Callback,\n\t\t\tstate => State,\n\t\t\tmeta => Meta\n\t\t}\n\t}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc update metadata information if enabled.\n%% @end\n%%--------------------------------------------------------------------\n-spec update_meta(Module, Callback, Meta) -> Return when\n\tModule :: fsm_module(),\n\tCallback :: fsm_callback(),\n\tMeta :: fsm_metadata(),\n\tReturn :: Meta.\n\nupdate_meta(Module, Callback, Meta = #{ meta := true }) ->\n\tHistory = maps:get(history, Meta),\n\tCounter = maps:get(counter, Meta),\n\tNewHistory= [meta_history(Module, Callback)|History],\n\tMeta#{\n\t\thistory => NewHistory,\n\t\tcounter => Counter+1\n\t};\nupdate_meta(_, _, Meta) ->\n\tMeta.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc generate a meta history item.\n%% @end\n%%--------------------------------------------------------------------\nmeta_history(Module, Callback) ->\n\t#{\n\t\ttimestamp => erlang:system_time(),\n\t\tmodule => Module,\n\t\tcallback => Callback,\n\t\tprocess_info => meta_process_info(),\n\t\tpid => self()\n\t }.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc wrapper around erlang:process_info/2\n%% @end\n%%--------------------------------------------------------------------\nmeta_process_info() ->\n\tmaps:from_list(\n\t\terlang:process_info(\n\t\t\tself(),\n\t\t\t[\n\t\t\t\theap_size,\n\t\t\t\tmessage_queue_len,\n\t\t\t\treductions,\n\t\t\t\tstack_size,\n\t\t\t\tstatus,\n\t\t\t\ttotal_heap_size,\n\t\t\t\ttrap_exit\n\t\t\t]\n\t\t )\n\t).\n"
  },
  {
    "path": "apps/arweave_config/src/arweave_config_http_server.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2025 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @doc Configuration HTTP Server Interface.\n%%%\n%%% This module is using cowboy to handle configuration requests. The\n%%% goal is to configure dynamically some values using a web\n%%% interface. The default TCP port used is `4891'.\n%%%\n%%% The interface is using a RESTful like module, where a path is\n%%% representing an object.\n%%%\n%%% A configuration path is converted to a parameter:\n%%%\n%%% ```\n%%% v1/config/debug\n%%%\n%%% % becomes\n%%%\n%%% [debug]\n%%% '''\n%%%\n%%% This API is also versionned, the `v0' version is mostly a draft to\n%%% see how the different methods are behaving.\n%%%\n%%% JSON data returned try to follow jsend format.\n%%%\n%%% see: https://github.com/omniti-labs/jsend\n%%%\n%%% == Examples ==\n%%%\n%%% By default, the values being used and returned are raw:\n%%%\n%%% ```\n%%% # get the value of global.debug parameter\n%%% $ curl localhost:4891/v1/config/debug\n%%% {\"status\":\"success\",\"data\":true}\n%%%\n%%% # set the value of global.debug parameter\n%%% $ curl localhost:4891/v1/config/global/debug -d false\n%%% {\"status\":\"success\",\"data\":false}\n%%% '''\n%%%\n%%% === Unix Socket Support ===\n%%%\n%%% Arweave Configuration HTTP API can listen to an unix socket\n%%% instead of an IP address. If a valid path is given instead of an\n%%% IP address, cowboy will listen on this file. When the server is\n%%% stopped, this file should be removed.\n%%%\n%%% One can then use an HTTP client (e.g. curl) to send HTTP requests,\n%%% here an example\n%%%\n%%% ```\n%%% curl \\\n%%%   --unix-socket ${WORKDIR}/arweave.sock \\\n%%%   http://localhost/v1/config/...\n%%% '''\n%%%\n%%% Enabling the usage of an unix socket restrict the surface attack,\n%%% and limit the configuration access to only the user with\n%%% read/write access to it. 
The \"authentication\" is then based on\n%%% UNIX credentials.\n%%%\n%%% Note: it can also be a good way to offer an interface for a GUI.\n%%%\n%%% @end\n%%%===================================================================\n-module(arweave_config_http_server).\n-export([start_link/0, stop/0]).\n-export([start_as_child/0, stop_as_child/0]).\n-export([init/2]).\n-include_lib(\"kernel/include/logger.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n%%--------------------------------------------------------------------\n%% @doc start cowboy as `arweave_config_sup' child.\n%% @end\n%%--------------------------------------------------------------------\nstart_as_child() ->\n\tSpec = #{\n\t\t\tid => ?MODULE,\n\t\t\tstart => {?MODULE, start_link, []},\n\t\t\ttype => worker,\n\t\t\trestart => temporary\n\t},\n\tsupervisor:start_child(arweave_config_sup, Spec).\n\n%%--------------------------------------------------------------------\n%% @doc stop cowboy from `arweave_config_sup'.\n%% @end\n%%--------------------------------------------------------------------\nstop_as_child() ->\n\tstop(),\n\tsupervisor:terminate_child(arweave_config_sup, ?MODULE),\n\tsupervisor:delete_child(arweave_config_sup, ?MODULE).\n\n%%--------------------------------------------------------------------\n%% @doc start arweave config http api interface.\n%% @end\n%%--------------------------------------------------------------------\nstart_link() ->\n\t{ok, DefaultHost} = arweave_config:get([config,http,api,listen,address]),\n\t{ok, DefaultPort} = arweave_config:get([config,http,api,listen,port]),\n\tTransportOpts =\n\t\tcase inet:parse_address(binary_to_list(DefaultHost)) of\n\t\t\t{ok, Address} ->\n\t\t\t\t[\n\t\t\t\t\t{port, DefaultPort},\n\t\t\t\t\t{ip, Address}\n\t\t\t\t];\n\t\t\t{error, _} ->\n\t\t\t\t% if it's not an ip address, this is\n\t\t\t\t% an unix socket.\n\t\t\t\t[\n\t\t\t\t\t{ip, {local, DefaultHost}}\n\t\t\t\t]\n\t\tend,\n\tProtocolOpts = #{\n\t\tenv => #{ dispatch => dispatch() }\n\t},\n\tcowboy:start_clear(?MODULE, TransportOpts, ProtocolOpts).\n\n%%--------------------------------------------------------------------\n%% @doc\n%% @end\n%%--------------------------------------------------------------------\nstop() ->\n\tListenAddress = ranch:get_addr(?MODULE),\n\tcowboy:stop_listener(?MODULE),\n\tcase ListenAddress of\n\t\t{local, Address} ->\n\t\t\t?LOG_DEBUG(\"remove ~p\", [Address]),\n\t\t\tfile:delete(Address),\n\t\t\tok;\n\t\t_ ->\n\t\t\tok\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ndispatch() ->\n\tcowboy_router:compile([\n\t\t{'_', router()}\n\t]).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nheaders() ->\n\t#{ <<\"content-type\">> => <<\"application/json\">> }.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nrouter() ->\n\t[\n\t\t{\"/v0\", ?MODULE, #{}},\n\t\t{\"/v0/config\", ?MODULE, #{}},\n\t\t{\"/v0/config/[...]\", ?MODULE, #{}},\n\t\t{\"/v0/environment\", ?MODULE, #{}}\n\t].\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\n% @todo returns API specifications\n% init(Req = #{ path := <<\"/v0\">> }, State) ->\n% \tReply = 
cowboy_req:reply(200, #{}, <<>>, Req),\n% \t{ok, Reply, State};\n% @todo returns arweave and beam arguments\n% init(Req = #{ path := <<\"/v0/arguments\">> }, State) -> ok;\ninit(Req = #{ path := <<\"/v0/environment\">>, method := <<\"GET\">> }, State) ->\n\tEnvironment = arweave_config_environment:get(),\n\tAsMap = maps:from_list(Environment),\n\tHeaders = headers(),\n\tBody = encode(\n\t\tjsend(\n\t\t\tsuccess,\n\t\t\tAsMap\n\t\t)\n\t),\n\tReply = cowboy_req:reply(200, Headers, Body, Req),\n\t{ok, Reply, State};\ninit(Req = #{ path := <<\"/v0/config\">> }, State) ->\n\t% @todo: add the configuration from spec (with default value)\n\t% should return the full configuration using different format.\n\tConfig = arweave_config_store:to_map(),\n\tHeaders = headers(),\n\tBody = encode(\n\t\tjsend(\n\t\t\tsuccess,\n\t\t\tConfig\n\t\t)\n\t),\n\tReply = cowboy_req:reply(200, Headers, Body, Req),\n\t{ok, Reply, State};\ninit(Req = #{ path := <<\"/v0/config/\">> }, State) ->\n\tinit(Req#{ path => <<\"/v0/config\">> }, State);\ninit(Req = #{ path := <<\"/v0/config/\", Key/binary>> }, State)\n\twhen Key =/= <<>> ->\n\t\tapply_config(Key, Req, State);\ninit(Req, State) ->\n\t?LOG_INFO(\"~p\", [{Req, State}]),\n\tHeaders = headers(),\n\tBody = encode(\n\t\tjsend(\n\t\t\terror,\n\t\t\t<<\"not found\">>\n\t\t)\n\t),\n\tReply = cowboy_req:reply(404, Headers, Body, Req),\n\t{ok, Reply, State}.\n\n%%--------------------------------------------------------------------\n%%\n%%--------------------------------------------------------------------\napply_config(Key, Req, State) ->\n\tcase config(Key, Req, State) of\n\t\t{ok, #{\n\t\t\tstatus := Status,\n\t\t\tbody := Body,\n\t\t\treq := NewReq\n\t\t}} ->\n\t\t\tReply = cowboy_req:reply(\n\t\t\t\tStatus,\n\t\t\t\theaders(),\n\t\t\t\tencode(Body),\n\t\t\t\tNewReq\n\t\t\t),\n\t\t\t{ok, Reply, State};\n\t\t_Else ->\n\t\t\tReply = cowboy_req:reply(\n\t\t\t\t400,\n\t\t\t\theaders(),\n\t\t\t\tencode(\n\t\t\t\t\tjsend(\n\t\t\t\t\t\terror,\n\t\t\t\t\t\t<<\"configuration error\">>\n\t\t\t\t\t)\n\t\t\t\t),\n\t\t\t\tReq\n\t\t\t),\n\t\t\t{ok, Reply, State}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% config end-point\n%%--------------------------------------------------------------------\nconfig(Key, Req, State) ->\n\tio:format(\"~p~n\", [{Key, Req, State}]),\n\tKey2 = re:replace(Key, <<\"/\">>, <<\".\">>, [global]),\n\tKey3 = case Key2 of\n\t\t_ when is_list(Key2) -> list_to_binary(Key2);\n\t\t_ when is_binary(Key2) -> Key2\n\tend,\n\tcase arweave_config_parser:key(Key3) of\n\t\t{ok, Parameter} ->\n\t\t\tconfig1(Parameter, Req, State);\n\t\t_ ->\n\t\t\tNewState = #{\n\t\t\t\tstatus => 400,\n\t\t\t\theaders => headers(),\n\t\t\t\tbody => jsend(\n\t\t\t\t\terror,\n\t\t\t\t\t<<\"bad data\">>\n\t\t\t\t),\n\t\t\t\treq => Req\n\t\t\t},\n\t\t\t{ok, NewState}\n\tend.\n\nconfig1(Parameter, Req = #{ method := <<\"GET\">> }, State) ->\n\tcase arweave_config:get(Parameter) of\n\t\t{ok, Value} ->\n\t\t\tNewState = State#{\n\t\t\t\tstatus => 200,\n\t\t\t\theaders => headers(),\n\t\t\t\tbody => jsend(\n\t\t\t\t\tsuccess,\n\t\t\t\t\tValue\n\t\t\t\t),\n\t\t\t\treq => Req\n\t\t\t},\n\t\t\t{ok, NewState};\n\t\t_ ->\n\t\t\tNewState = State#{\n\t\t\t\tstatus => 404,\n\t\t\t\theaders => headers(),\n\t\t\t\tbody => jsend(\n\t\t\t\t\terror,\n\t\t\t\t\t<<\"not_found\">>\n\t\t\t\t),\n\t\t\t\treq => Req\n\t\t\t},\n\t\t\t{ok, NewState}\n\tend;\nconfig1(Parameter, Req = #{ method := <<\"POST\">> }, State) ->\n\tcase cowboy_req:has_body(Req) of\n\t\ttrue 
->\n\t\t\tconfig_post(Parameter, Req, State);\n\t\tfalse ->\n\t\t\tNewState = State#{\n\t\t\t\tstatus => 400,\n\t\t\t\theaders => headers(),\n\t\t\t\tbody => jsend(\n\t\t\t\t\terror,\n\t\t\t\t\t<<\"missing body\">>\n\t\t\t\t),\n\t\t\t\treq => Req\n\t\t\t},\n\t\t\t{ok, NewState}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nconfig_post(Parameter, Req, State) ->\n\tcase cowboy_req:read_body(Req) of\n\t\t{ok, Data, Req0} ->\n\t\t\tconfig_post1(Data, Parameter, Req0, State);\n\t\t_ ->\n\t\t\tNewState = State#{\n\t\t\t\tstatus => 400,\n\t\t\t\theaders => headers(),\n\t\t\t\tbody => jsend(\n\t\t\t\t\terror,\n\t\t\t\t\t<<\"bad data\">>\n\t\t\t\t),\n\t\t\t\treq => Req\n\t\t\t},\n\t\t\t{ok, NewState}\n\tend.\n\nconfig_post1(Data, Parameter, Req, State) ->\n\tcase arweave_config_spec:set(Parameter, Data) of\n\t\t{ok, NewValue, OldValue} ->\n\t\t\tNewState = State#{\n\t\t\t\tstatus => 200,\n\t\t\t\theaders => headers(),\n\t\t\t\tbody => jsend(\n\t\t\t\t\tsuccess,\n\t\t\t\t\t#{\n\t\t\t\t\t\tnew => NewValue,\n\t\t\t\t\t\told => OldValue\n\t\t\t\t\t}\n\t\t\t\t),\n\t\t\t\treq => Req\n\t\t\t},\n\t\t\t{ok, NewState};\n\t\t_ ->\n\t\t\tNewState = State#{\n\t\t\t\tstatus => 400,\n\t\t\t\theaders => headers(),\n\t\t\t\tbody => jsend(\n\t\t\t\t\terror,\n\t\t\t\t\t<<\"bad data\">>\n\t\t\t\t),\n\t\t\t\treq => Req\n\t\t\t},\n\t\t\t{ok, NewState}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\n-spec jsend(Status, Data) -> Return when\n\tStatus :: success | fail | error,\n\tData :: binary() | map() | list() | integer(),\n\tReturn :: #{\n\t\tstatus => success | fail | error,\n\t\tdata => Data,\n\t\tmessage => Data\n\t}.\n\njsend(success, Data) ->\n\t#{\n\t\tstatus => success,\n\t\tdata => Data\n\t};\njsend(fail, Data) ->\n\t#{\n\t\tstatus => fail,\n\t\tdata => Data\n\t};\njsend(error, Message) ->\n\t#{\n\t\tstatus => error,\n\t\tmessage => Message\n\t}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nencode(Data) ->\n\tjiffy:encode(Data).\n\n"
  },
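  {
    "path": "apps/arweave_config/examples/arweave_config_jsend_example.erl",
    "content": "%%% Illustrative sketch (hypothetical example file, not wired into any\n%%% router or build target): shows the JSend style bodies built by the\n%%% configuration HTTP API handler above, i.e. jsend/2 maps encoded\n%%% with jiffy. The module name, function names and sample values are\n%%% assumptions.\n-module(arweave_config_jsend_example).\n-export([success_body/1, error_body/1]).\n\n%% Body of a 200 reply: the payload is wrapped under `data', as for\n%% GET /v0/config.\nsuccess_body(Data) ->\n\tjiffy:encode(#{ status => success, data => Data }).\n\n%% Body of a 4xx reply: only a human readable `message' is returned,\n%% e.g. <<\"not found\">> for unknown paths.\nerror_body(Message) ->\n\tjiffy:encode(#{ status => error, message => Message }).\n"
  },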
  {
    "path": "apps/arweave_config/src/arweave_config_legacy.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2025 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @deprecated This module is a legacy compat layer.\n%%% @doc temporary interface to arweave legacy configuration.\n%%%\n%%% This  module is  mainly used  as a  process to  deal with  arweave\n%%% legacy configuration. Indeed, the previous implementation was\n%%% using a record to store parameters as record's key, unfortunately,\n%%% this is not flexible enough to do everything. This process is a\n%%% direct interface to `application:set_env(arweave, config, _)'\n%%% function and to `#config{}' record.\n%%%\n%%% The record needs to be converted as proplists, then, it will\n%%% introduce a slower answers, but at this time, the configuration is\n%%% not dynamic at all, this means this performance issue will only\n%%% impact arweave during startup.\n%%%\n%%% @TODO TO REMOVE when legacy configuration will be dropped.\n%%%\n%%% == Examples ==\n%%%\n%%% ```\n%%% % get the configuration as #config{} record from\n%%% % arweave_config_legacy process state.Similar to\n%%% % application:get_env/2\n%%% {ok, #config{}} = arweave_config_legacy:get_env().\n%%%\n%%% % overwrite the configuration present in `arweave_config_legacy'\n%%% % process state, similar to application:set_env/3.\n%%% arweave_config_legacy:set_env(#config{}).\n%%%\n%%% % get value's key.\n%%% Init = arweave_config_legacy:get(init).\n%%%\n%%% % set a value's key.\n%%% arweave_config_legacy:set(init, false).\n%%%\n%%% % reset the configuration with the default state (default values\n%%% % from `#config{}'.\n%%% arweave_config_legacy:reset().\n%%% '''\n%%%\n%%% @end\n%%%===================================================================\n-module(arweave_config_legacy).\n-behavior(gen_server).\n-compile(warnings_as_errors).\n-compile({no_auto_import,[get/0, get/1]}).\n-export([start_link/0, stop/0]).\n-export([\n\tget/0,\n\tget/1,\n\tget_config_value/2,\n\tget_env/0,\n\thas_key/1,\n\tkeys/0,\n\tmerge/1,\n\treset/0,\n\tset/1,\n\tset/2,\n\tset_env/1,\n\tconfig_merge/2,\n\tconfig_to_proplist/1,\n\tproplist_to_config/1\n]).\n-export([init/1, terminate/2]).\n-export([handle_call/3, handle_info/2, handle_cast/2]).\n-include(\"arweave_config.hrl\").\n-include_lib(\"kernel/include/logger.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n-record(?MODULE, {proplist, record}).\n\n%%--------------------------------------------------------------------\n%% @doc Returns the complete list of all keys from configuration\n%% process state.\n%% @end\n%%--------------------------------------------------------------------\n-spec keys() -> [atom()].\n\nkeys() ->\n\tgen_server:call(?MODULE, keys, 1000).\n\n%%--------------------------------------------------------------------\n%% @doc Check if a key is present in the process record state.\n%% @end\n%%--------------------------------------------------------------------\n-spec has_key(atom()) -> boolean().\n\nhas_key(Key) when is_atom(Key) ->\n\tgen_server:call(?MODULE, {has_key, Key}, 1000).\n\n%%--------------------------------------------------------------------\n%% @doc Returns process state configuration as `#config{}' record.\n%% @end\n%%--------------------------------------------------------------------\n-spec get() -> 
Return when\n\tReturn :: undefined | {ok, #config{}}.\n\nget() ->\n\ttry gen_server:call(?MODULE, get, 1000) of\n\t\t{ok, Value} -> Value;\n\t\t_Elsewise -> undefined\n\tcatch\n\t\t_E:R:S -> throw({error, {R, S}})\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc Returns the value of a key from process state configuration.\n%% @end\n%%--------------------------------------------------------------------\n-spec get(Key) -> Return when\n\tKey :: atom(),\n\tReturn :: undefined | term().\n\nget(Key) when is_atom(Key) ->\n\ttry gen_server:call(?MODULE, {get, Key}, 1000) of\n\t\t{ok, Value} -> Value;\n\t\t_Elsewise -> undefined\n\tcatch\n\t\t_E:R:S -> throw({error, {R, S}})\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc Set a new config file.\n%% @end\n%%--------------------------------------------------------------------\n-spec set(Config) -> Return when\n\tConfig :: #config{},\n\tReturn :: ok | {error, term()} | timeout.\n\nset(Config)\n\twhen is_record(Config, config) ->\n\t\tgen_server:call(?MODULE, {set, Config}, 1000);\nset(_) ->\n\t{error, badarg}.\n\n%%--------------------------------------------------------------------\n%% @doc Set a value to a key.\n%% @end\n%%--------------------------------------------------------------------\n-spec set(Key, Value) -> Return when\n\tKey :: atom(),\n\tValue :: term(),\n\tReturn :: {ok, Value} | {error, term()}.\n\nset(Key, Value) ->\n\tset(Key, Value, #{}).\n\n%%--------------------------------------------------------------------\n%% @doc Set a value to a key (custom options). When setting `set_env'\n%% to `true', the environment application arweave/config is configured\n%% with the content of the store from this process.\n%%\n%% Warning: this part is not protected against race condition, if\n%% another process is setting application environment variable with\n%% `application:set_env/2' function, the state present in this process\n%% will not have the correct information.\n%% @end\n%%--------------------------------------------------------------------\n-spec set(Key, Value, Opts) -> Return when\n\tKey :: atom(),\n\tValue :: term(),\n\tOpts :: #{ set_env => boolean() },\n\tReturn :: {ok, Value} | {error, term()}.\n\nset(Key, Value, Opts) when is_atom(Key), is_map(Opts) ->\n\ttry gen_server:call(?MODULE, {set, Key, Value, Opts}, 1000) of\n\t\t{ok, NewValue, _OldValue} -> {ok, NewValue};\n\t\tElsewise -> {error, Elsewise}\n\tcatch\n\t\t_E:R:S -> throw({error, {R, S}})\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc import #config{} record and set it as new state.\n%% @end\n%%--------------------------------------------------------------------\nset_env(Config) when is_record(Config, config) ->\n\tgen_server:call(?MODULE, {set_env, Config}, 1000).\n\n%%--------------------------------------------------------------------\n%% @doc reset the legacy configuration by using the default values.\n%% @end\n%%--------------------------------------------------------------------\nreset() ->\n\tgen_server:call(?MODULE, reset, 1000).\n\n%%--------------------------------------------------------------------\n%% @doc export the current configuration as `#config{}' record.\n%% @end\n%%--------------------------------------------------------------------\n-spec get_env() -> {ok, #config{}}.\n\nget_env() ->\n\tgen_server:call(?MODULE, get_env, 1000).\n\n%%--------------------------------------------------------------------\n%% @doc merge a 
configuration file (set only modified values).\n%% @end\n%%--------------------------------------------------------------------\n-spec merge(Config) -> Return when\n\tConfig :: #config{},\n\tReturn :: {ok, Config} | {error, term()}.\n\nmerge(Config) when is_record(Config, config) ->\n\tgen_server:call(?MODULE, {merge, Config}, 1000);\nmerge(_) ->\n\t{error, badarg}.\n\n%%--------------------------------------------------------------------\n%% @doc start `arweave_config_legacy' process.\n%% @end\n%%--------------------------------------------------------------------\nstart_link() ->\n\t?LOG_INFO(\"start ~p process (~p)\", [?MODULE, self()]),\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\n%%--------------------------------------------------------------------\n%% @doc stop `arweave_config_legacy' process.\n%% @end\n%%--------------------------------------------------------------------\nstop() ->\n\tgen_server:stop(?MODULE).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit(_) ->\n\t?LOG_INFO(\"start ~p  process\", [?MODULE]),\n\tProplist = config_to_proplist(#config{}),\n\t?LOG_DEBUG([{configuration, Proplist}]),\n\tset_environment(Proplist),\n\t{ok, #?MODULE{\n\t\tproplist = Proplist,\n\t\trecord = #config{}\n\t      }\n\t}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nterminate(_, _) ->\n\t?LOG_INFO(\"stop ~p process (~p)\", [?MODULE, self()]),\n\tok.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nhandle_call({merge, Config}, _From, State = #?MODULE{ proplist = P })\n\twhen is_record(Config, config) ->\n\t\ttry\n\t\t\tMergedProplist = config_merge(P, Config),\n\t\t\tMergedConfig = proplist_to_config(MergedProplist),\n\t\t\tNewState = State#?MODULE{\n\t\t\t\tproplist = MergedProplist,\n\t\t\t\trecord = MergedConfig \n\t\t\t},\n\t\t\t{reply, {ok, MergedConfig}, NewState}\n\t\tcatch\n\t\t\t_Error:Reason ->\n\t\t\t\t{reply, {error, Reason}, State}\n\t\tend;\nhandle_call({has_key, Key}, _From, State = #?MODULE{ proplist = P }) ->\n\t{reply, proplists:is_defined(Key, P), State};\nhandle_call(keys, _From, State = #?MODULE{ proplist = P }) ->\n\t{reply, [ K || {K,_} <- P ], State};\nhandle_call(get, _From, State = #?MODULE{ record = R }) ->\n\t{reply, {ok, R}, State};\nhandle_call({get, Key}, _From, State = #?MODULE{ proplist = P })\n\twhen is_atom(Key) ->\n\t\tReturn = {ok, proplists:get_value(Key, P)},\n\t\t{reply, Return, State};\nhandle_call({set, Config}, _From, State)\n\twhen is_record(Config, config) ->\n\t\ttry\n\t\t\tProplist = config_to_proplist(Config),\n\t\t\tNewState = #?MODULE{\n\t\t\t\tproplist = Proplist,\n\t\t\t\trecord = Config\n\t\t\t},\n\t\t\tset_environment(Config),\n\t\t\t{reply, ok, NewState}\n\t\tcatch\n\t\t\t_Error:Reason ->\n\t\t\t\t{reply, {error, Reason}, State}\n\t\tend;\nhandle_call({set, Key, Value, Opts}, _From, State = #?MODULE{ proplist = P })\n\twhen is_atom(Key), is_map(Opts) ->\n\t\tOldValue = proplists:get_value(Key, P),\n\t\tNewP = lists:keyreplace(Key, 1, P, {Key, Value}),\n\t\tset_environment(NewP),\n\t\tReturn = {ok, Value, OldValue},\n\t\tNewState = State#?MODULE{\n\t\t\tproplist = NewP,\n\t\t\trecord = proplist_to_config(NewP)\n\t\t},\n\t\t{reply, Return, NewState};\nhandle_call(get_env, _From, State = 
#?MODULE{ record = R }) ->\n\t{reply, {ok, R}, State};\nhandle_call({set_env, Config}, _From, State) ->\n\tcase import_config(Config) of\n\t\t{ok, NewP} ->\n\t\t\tset_environment(NewP),\n\t\t\tNewState = State#?MODULE{\n\t\t\t\tproplist = NewP,\n\t\t\t\trecord = proplist_to_config(NewP)\n\t\t\t},\n\t\t\t{reply, ok, NewState};\n\t\t_ ->\n\t\t\t{reply, error, State}\n\tend;\nhandle_call(reset, _From, State) ->\n\tcase reset_config() of\n\t\t{ok, NewP} ->\n\t\t\tNewState = State#?MODULE{\n\t\t\t\tproplist = NewP,\n\t\t\t\trecord = proplist_to_config(NewP)\n\t\t\t},\n\t\t\t{reply, ok, NewState};\n\t\t_ ->\n\t\t\t{reply, error, State}\n\tend;\nhandle_call(Message, From, State) ->\n\tError = [\n\t\t{from, From},\n\t\t{message, Message},\n\t\t{from, From},\n\t\t{pid, self()}\n\t],\n\t?LOG_ERROR(Error),\n\t{reply, {error, Error}, State}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nhandle_cast(Msg, State) ->\n\t?LOG_ERROR(\"received: ~p\", [Msg]),\n\t{noreply, State}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nhandle_info(Msg, State) ->\n\t?LOG_ERROR(\"received: ~p\", [Msg]),\n\t{noreply, State}.\n\n%%--------------------------------------------------------------------\n%% @doc Converts `#config{}' records to `proplists'.\n%% @end\n%%--------------------------------------------------------------------\n-spec config_to_proplist(Config) -> Return when\n\tConfig :: #config{},\n\tReturn :: [{atom(), term()}].\n\nconfig_to_proplist(Config)\n\twhen is_record(Config, config) ->\n\t\tFields = record_info(fields, config),\n\t\tValues = erlang:delete_element(1, Config),\n       \t\tList = erlang:tuple_to_list(Values),\n\t\tlists:zip(Fields, List).\n\nconfig_to_proplist_test() ->\n\tConfig = #config{},\n\tKeys = record_info(fields, config),\n\tProplist = config_to_proplist(Config),\n\t[\n\t\tbegin\n\t\t\t{ok, VC} = get_config_value(Key, Config),\n\t\t\tVP = proplists:get_value(Key, Proplist),\n\t\t\t?assertEqual(VC, VP)\n\t\tend\n\t||\n\t\tKey <- Keys\n\t].\n\n%%--------------------------------------------------------------------\n%% @doc Converts a proplists to a `#config{}' record.\n%% @end\n%%--------------------------------------------------------------------\n-spec proplist_to_config(Proplist) -> Return when\n\tProplist :: [{atom(), term()}],\n\tReturn :: #config{}.\n\nproplist_to_config(Proplist)\n\twhen is_list(Proplist) ->\n\t\tFields = record_info(fields, config),\n\t\tproplist_to_config2(Proplist, Fields, Proplist, 1).\n\nproplist_to_config_test() ->\n\tConfig = #config{},\n\tProplist = config_to_proplist(Config),\n\tNewConfig = proplist_to_config(Proplist),\n\t[\n\t\tbegin\n\t\t\t{ok, VC} = get_config_value(Key, NewConfig),\n\t\t\tVP = proplists:get_value(Key, Proplist),\n\t\t\t?assertEqual(VC, VP)\n\t\tend\n\t||\n\t\tKey <- proplists:get_keys(Proplist)\n\t].\n\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% private: check the order of the fields, if not in right order, it\n%% will fail.\n%%--------------------------------------------------------------------\nproplist_to_config2([], [], Proplist, _Pos) ->\n\tproplist_to_config3(Proplist);\nproplist_to_config2([{Key,_}|R1], [Key|R2], Proplist, Pos) ->\n\tproplist_to_config2(R1, R2, Proplist, Pos+1);\nproplist_to_config2([{K1, _V1}|_R1], [K2|_R2], _, Pos) ->\n\tthrow({error, 
#{\n\t\t\t\treason => {badkey, K1, K2},\n\t\t\t\tposition => Pos\n\t\t\t}\n\t      }\n\t);\nproplist_to_config2(_, _, _, Pos) ->\n\tthrow({error, #{\n\t\t\t\treason => badvalue,\n\t\t\t\tposition => Pos\n\t\t\t}\n\t      }\n\t).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% private: finally, convert the last values\n%%--------------------------------------------------------------------\nproplist_to_config3(Proplist) ->\n\tValues = lists:map(fun({_,V}) -> V end, Proplist),\n\tValues2 = [config|Values],\n\terlang:list_to_tuple(Values2).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% private: import a config record as proplist\n%%--------------------------------------------------------------------\nimport_config(Config)\n\twhen is_record(Config, config) ->\n\t\tProplist = config_to_proplist(Config),\n\t\t{ok, Proplist};\nimport_config(Config) ->\n\t{error, Config}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% private: reset internal configuration using #config{} record.\n%%--------------------------------------------------------------------\nreset_config() ->\n\tProplist = config_to_proplist(#config{}),\n\t{ok, Proplist}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc wrapper for `application:set_env/3'.\n%% @end\n%% @TODO: to  remove, arweave_legacy_config will  set the environment\n%% for compatibility. If another application/module is still using\n%% application:get_env, it will not be impacted and will have an\n%% updated configuration.\n%%--------------------------------------------------------------------\nset_environment(Config) when is_list(Config) ->\n\t% convert the list to config record\n\tRecord = proplist_to_config(Config),\n\tset_environment(Record);\nset_environment(Config) when is_record(Config, config) ->\n\tapplication:set_env(arweave_config, config, Config).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc\n%% helper function  to extract  config value using  a key,  similar to\n%% `proplists:get_value/2'.\n%% @end\n%%--------------------------------------------------------------------\n-spec get_config_value(Key, Config) -> Return when\n\tKey :: atom(),\n\tConfig :: #config{},\n\tReturn :: {error, undefined} | {ok, term()}.\n\nget_config_value(Key, Config)\n\twhen is_atom(Key), is_record(Config, config) ->\n\t\tKeys = record_info(fields, config),\n\t\t[_|List] = tuple_to_list(Config),\n\t\tZip = lists:zip(Keys, List),\n\t\tcase lists:keyfind(Key, 1, Zip) of\n\t\t\tfalse -> {error, undefined};\n\t\t\t{Key, Value} -> {ok, Value}\n\t\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc Merge configuration as records or proplists, return the\n%% configuration as proplist.\n%% @end\n%%--------------------------------------------------------------------\n-spec config_merge(OldConfig, NewConfig) -> Return when\n\tOldConfig :: #config{} | proplists:proplist(),\n\tNewConfig :: #config{} | proplists:proplist(),\n\tReturn :: proplists:proplist().\n\nconfig_merge(OldConfig, NewConfig)\n\twhen is_record(OldConfig, config) ->\n\t\tOldProplist = config_to_proplist(OldConfig),\n\t\tconfig_merge(OldProplist, NewConfig);\nconfig_merge(OldConfig, NewConfig)\n\twhen is_record(NewConfig, config) ->\n\t\tNewProplist = config_to_proplist(NewConfig),\n\t\tconfig_merge(OldConfig, 
NewProplist);\nconfig_merge(OldConfig, NewConfig)\n\twhen is_list(OldConfig), is_list(NewConfig) ->\n\t\tZipped = lists:zip(NewConfig, OldConfig),\n\t\tlists:foldr(\n\t\t\tfun\n\t\t\t\t% same values, nothing to change\n\t\t\t\t({{K, NV}, {K, OV}}, Acc) when NV =:= OV ->\n\t\t\t\t\t[{K, OV}|Acc];\n\t\t\t\t% different values, we set the new one\n\t\t\t\t({{K, NV}, {K, OV}}, Acc) when NV =/= OV ->\n\t\t\t\t\t[{K, NV}|Acc];\n\t\t\t\t% something wrong, the configuration\n\t\t\t\t% is bad\n\t\t\t\t(Else, _Acc) ->\n\t\t\t\t\tthrow({error, {badconfig, Else}})\n\t\t\tend,\n\t\t\t[],\n\t\t\tZipped\n\t\t).\n\nconfig_merge_test() ->\n\tMerged1 = proplist_to_config(\n\t\tconfig_merge(\n\t\t\t#config{ init = false },\n\t\t\t#config{ init = true }\n\t\t)\n\t),\n\t#config{ init = Init1 } = Merged1, \n\t?assertEqual(true, Init1),\n\n\tMerged2 = proplist_to_config(\n\t\tconfig_merge(\n\t\t\t#config{ init = true },\n\t\t\t#config{ init = true }\n\t\t)\n\t),\n\t#config{ init = Init2 } = Merged2, \n\t?assertEqual(true, Init2).\n"
  },
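  {
    "path": "apps/arweave_config/examples/arweave_config_legacy_example.erl",
    "content": "%%% Illustrative sketch (hypothetical example file, not part of the\n%%% application or its test suite): exercises the pure helpers exported\n%%% by arweave_config_legacy, mirroring its own config_merge_test/0.\n%%% Assumes the #config{} record from arweave_config.hrl with an `init'\n%%% field, as used in that module.\n-module(arweave_config_legacy_example).\n-export([round_trip/0, merge_init/0]).\n-include(\"arweave_config.hrl\").\n\n%% Convert the default record to a proplist and back; both forms\n%% describe the same configuration.\nround_trip() ->\n\tProplist = arweave_config_legacy:config_to_proplist(#config{}),\n\t#config{} = arweave_config_legacy:proplist_to_config(Proplist),\n\tok.\n\n%% config_merge/2 keeps unchanged values and applies modified ones.\nmerge_init() ->\n\tMerged = arweave_config_legacy:config_merge(\n\t\t#config{ init = false },\n\t\t#config{ init = true }\n\t),\n\t#config{ init = true } = arweave_config_legacy:proplist_to_config(Merged),\n\tok.\n"
  },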
  {
    "path": "apps/arweave_config/src/arweave_config_parameters.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2025 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @doc Arweave Configuration Parameters.\n%%%\n%%% == TODO ==\n%%%\n%%% @todo create an `include' parameter to include files from\n%%% other places.\n%%%\n%%% @end\n%%%===================================================================\n-module(arweave_config_parameters).\n-compile(warnings_as_errors).\n-export([init/0]).\n-include_(\"arweave_config.hrl\").\n\n%%--------------------------------------------------------------------\n%% @doc returns a list of map containing arweave parameters.\n%% @end\n%%--------------------------------------------------------------------\ninit() ->\n\t[\n\t\t% set configuration file. The configuration parameter\n\t\t% is a list of binary, stored in arweave_config_file\n\t\t% process.\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [configuration],\n\t\t\tdefault => [],\n\t\t\ttype => path,\n\t\t\truntime => true,\n\t\t\tdeprecated => false,\n\t\t\trequired => false,\n\t\t\tenvironment => <<\"AR_CONFIGURATION\">>,\n\t\t\tshort_argument => $c,\n\t\t\tlong_argument => <<\"--configuration\">>,\n\t\t\thandle_set => fun\n\t\t\t\t(_, V, _ ,_) ->\n\t\t\t\t\t{ok, _} = arweave_config_file:add(V),\n\t\t\t\t\tP = arweave_config_file:get_paths(),\n\t\t\t\t\t{store, P}\n\t\t\tend,\n\t\t\thandle_get => fun\n\t\t\t\t(_, _) ->\n\t\t\t\t\t{ok, arweave_config_file:get_paths()}\n\t\t\tend\n\t\t},\n\n\t\t% set data directory\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [data,directory],\n\t\t\tdefault => \"./data\",\n\t\t\truntime => false,\n\t\t\ttype => path,\n\t\t\tdeprecated => false,\n\t\t\tlegacy => data_dir,\n\t\t\trequired => true,\n\t\t\tshort_description => \"\",\n\t\t\tlong_description => \"\",\n\t\t\tenvironment => <<\"AR_DATA_DIRECTORY\">>,\n\t\t\tshort_argument => $D,\n\t\t\tlong_argument => <<\"--data.directory\">>,\n\t\t\thandle_get => fun legacy_get/2,\n\t\t\thandle_set => fun legacy_set/4\n\t\t},\n\t\t#{\n\t\t\tparameter_key => [start,from,state],\n\t\t\tdefault => not_set,\n\t\t\truntime => false,\n\t\t\ttype => path,\n\t\t\tdeprecated => false,\n\t\t\tlegacy => start_from_state,\n\t\t\trequired => false,\n\t\t\tshort_description => \"\",\n\t\t\tlong_description => \"\",\n\t\t\tenvironment => <<\"AR_START_FROM_STATE\">>,\n\t\t\tlong_argument => <<\"--start-from-state\">>,\n\t\t\thandle_get => fun legacy_get/2,\n\t\t\thandle_set => fun legacy_set/4\n\t\t},\n\t \t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [debug],\n\t\t\tdefault => false,\n\t\t\truntime => true,\n\t\t\ttype => boolean,\n\t\t\tdeprecated => false,\n\t\t\tlegacy => debug,\n\t\t\trequired => false,\n\t\t\tshort_description => \"\",\n\t\t\tlong_description => \"\",\n\t\t\tenvironment => <<\"AR_DEBUG\">>,\n\t\t\tshort_argument => $d,\n\t\t\tlong_argument => <<\"--debug\">>,\n\t\t\thandle_get => fun legacy_get/2,\n\t\t\thandle_set => fun\n\t\t\t\t(K, V, S = #{ config := #{ debug := Old }}, _) ->\n\t\t\t\t\tcase {V, Old} of\n\t\t\t\t\t\t{true, true} ->\n\t\t\t\t\t\t\tignore;\n\t\t\t\t\t\t{false, false} ->\n\t\t\t\t\t\t\tignore;\n\t\t\t\t\t\t{false, true} ->\n\t\t\t\t\t\t\tlogger:set_application_level(arweave_config, info),\n\t\t\t\t\t\t\tlogger:set_application_level(arweave, 
info),\n\t\t\t\t\t\t\tar_logger:stop_handler(arweave_debug),\n\t\t\t\t\t\t\tlegacy_set(K, V, S, []);\n\t\t\t\t\t\t{true, false} ->\n\t\t\t\t\t\t\tlogger:set_application_level(arweave_config, debug),\n\t\t\t\t\t\t\tlogger:set_application_level(arweave, debug),\n\t\t\t\t\t\t\tar_logger:start_handler(arweave_debug),\n\t\t\t\t\t\t\tlegacy_set(K, V, S, [])\n\t\t\t\t\tend;\n\t\t\t\t(K, V, S, _) ->\n\t\t\t\t\tlogger:set_application_level(arweave_config, debug),\n\t\t\t\t\tlogger:set_application_level(arweave, debug),\n\t\t\t\t\tar_logger:start_handler(arweave_debug),\n\t\t\t\t\tlegacy_set(K, V, S, [])\n\t\t\tend\n\t\t},\n\n\t\t%-----------------------------------------------------\n\t\t% arweave logging feature\n\t\t%-----------------------------------------------------\n\t\t#{\n\t\t\tenabled => true,\n\t\t\t% parse a string and convert it to valid\n\t\t\t% logger template.\n\t\t\tparameter_key => [logging,formatter,template],\n\t\t\tdefault => [time,\" [\",level,\"] \",mfa,\":\",line,\" \",msg,\"\\n\"],\n\t\t\ttype => logging_template,\n\t\t\tenvironment => false,\n\t\t\truntime => false\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\t% see: https://www.erlang.org/doc/apps/kernel/logger.html\n\t\t\t% config.path must be a string (list of\n\t\t\t% integer). By default, type path will convert\n\t\t\t% its input in binary. So, one can convert the\n\t\t\t% value stored using handle_get, or converting\n\t\t\t% the value when using handle_set.\n\t\t\tparameter_key => [logging,path],\n\t\t\tdefault => \"./logs\",\n\t\t\ttype => path,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\truntime => false,\n\t\t\thandle_set => fun\n\t\t\t\t(_K, Path, _S, _) when is_list(Path) ->\n\t\t\t\t\t{store, Path};\n\t\t\t\t(_K, Path, _S, _) when is_binary(Path) ->\n\t\t\t\t\t{store, binary_to_list(Path)}\n\t\t\tend\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\t% see: https://www.erlang.org/doc/apps/kernel/logger_formatter.html\n\t\t\t% TODO: this parameter can also be an atom\n\t\t\t% (unlimited).\n\t\t\tparameter_key => [logging,formatter,max_size],\n\t\t\tdefault => 8128,\n\t\t\ttype => pos_integer,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\truntime => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_info,formatter,max_size]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\t% see: https://www.erlang.org/doc/apps/kernel/logger_formatter.html\n\t\t\t% TODO: this parameter can also be an atom\n\t\t\t% (unlimited)\n\t\t\tparameter_key => [logging,formatter,depth],\n\t\t\tdefault => 256,\n\t\t\ttype => pos_integer,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\truntime => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_info,formatter,depth]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\t% see: https://www.erlang.org/doc/apps/kernel/logger_formatter.html\n\t\t\t% TODO: this parameter can also be an atom\n\t\t\t% (unlimited)\n\t\t\tparameter_key => [logging,formatter,chars_limit],\n\t\t\tdefault => 16256,\n\t\t\ttype => pos_integer,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\truntime => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_info,formatter,chars_limit]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\t% see: https://www.erlang.org/doc/apps/kernel/logger_std_h.html\n\t\t\tparameter_key => [logging,max_no_files],\n\t\t\tdefault => 20,\n\t\t\ttype => pos_integer,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\truntime => 
true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_info,config,max_no_files]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\t% see: https://www.erlang.org/doc/apps/kernel/logger_std_h.html\n\t\t\t% TODO: this parameter can also be an atom\n\t\t\t% (infinity)\n\t\t\tparameter_key => [logging,max_no_bytes],\n\t\t\tdefault => 51418800,\n\t\t\ttype => pos_integer,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\truntime => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_info,config,max_no_bytes]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\t% see: https://www.erlang.org/doc/apps/kernel/logger_std_h.html\n\t\t\tparameter_key => [logging,compress_on_rotate],\n\t\t\tdefault => false,\n\t\t\ttype => boolean,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\truntime => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_info,config,compress_on_rotate]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,sync_mode_qlen],\n\t\t\truntime => false,\n\t\t\ttype => pos_integer,\n\t\t\tdefault => 10,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_info,config,sync_mode_qlen]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,drop_mode_qlen],\n\t\t\truntime => true,\n\t\t\ttype => pos_integer,\n\t\t\tdefault => 200,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_info,config,drop_mode_qlen]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,flush_qlen],\n\t\t\truntime => true,\n\t\t\ttype => pos_integer,\n\t\t\tdefault => 1000,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_info,config,flush_qlen]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,burst_limit_enable],\n\t\t\truntime => true,\n\t\t\ttype => boolean,\n\t\t\tdefault => true,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_info,config,burst_limit_enable]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,burst_limit_max_count],\n\t\t\truntime => true,\n\t\t\ttype => pos_integer,\n\t\t\tdefault => 500,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_info,config,burst_limit_max_count]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,burst_limit_window_time],\n\t\t\truntime => true,\n\t\t\ttype => pos_integer,\n\t\t\tdefault => 1000,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_info,config,burst_limit_window_time]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,overload_kill_enable],\n\t\t\truntime => true,\n\t\t\ttype => boolean,\n\t\t\tdefault => true,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_info,config,overload_kill_enable]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,overload_kill_qlen],\n\t\t\truntime => true,\n\t\t\ttype => pos_integer,\n\t\t\tdefault => 20_000,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => 
true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_info,config,overload_kill_qlen]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,overload_kill_mem_size],\n\t\t\truntime => true,\n\t\t\ttype => pos_integer,\n\t\t\tdefault => 3_000_000,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_info,config,overload_kill_mem_size]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,overload_kill_restart_after],\n\t\t\truntime => true,\n\t\t\ttype => pos_integer,\n\t\t\tdefault => 5000,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_info,config,overload_kill_restart_after]\n\t\t\t}\n\t\t},\n\n\t\t% debug logs\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,handlers,debug],\n\t\t\tdefault => false,\n\t\t\ttype => boolean,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\truntime => true,\n\t\t\thandle_set => fun\n\t\t\t\t(_,true,_,_) ->\n\t\t\t\t\tar_logger:start_handler(arweave_debug),\n\t\t\t\t\t{store, true};\n\t\t\t\t(_,false,_,_) ->\n\t\t\t\t\tar_logger:stop_handler(arweave_debug),\n\t\t\t\t\t{store, false}\n\t\t\tend\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,handlers,debug,max_no_files],\n\t\t\tinherit => [logging,max_no_files],\n\t\t\truntime => true,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_debug,config,max_no_files]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,handlers,debug,max_no_bytes],\n\t\t\tinherit => [logging,max_no_bytes],\n\t\t\truntime => true,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_debug,config,max_no_bytes]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,handlers,debug,sync_mode_qlen],\n\t\t\tinherit => [logging,sync_mode_qlen],\n\t\t\truntime => true,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_debug,config,sync_mode_qlen]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,handlers,debug,drop_mode_qlen],\n\t\t\tinherit => [logging,drop_mode_qlen],\n\t\t\truntime => true,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_debug,config,drop_mode_qlen]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,handlers,debug,flush_qlen],\n\t\t\tinherit => [logging,flush_qlen],\n\t\t\truntime => true,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_debug,config,flush_qlen]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,handlers,debug,burst_limit_enable],\n\t\t\tinherit => [logging,burst_limit_enable],\n\t\t\truntime => true,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_debug,config,burst_limit_enable]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,handlers,debug,burst_limit_max_count],\n\t\t\tinherit => [logging,burst_limit_max_count],\n\t\t\truntime => true,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => 
true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_debug,config,burst_limit_max_count]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,handlers,debug,burst_limit_window_time],\n\t\t\tinherit => [logging,burst_limit_window_time],\n\t\t\truntime => true,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_debug,config,burst_limit_window_time]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,handlers,debug,overload_kill_enable],\n\t\t\tinherit => [logging,overload_kill_enable],\n\t\t\truntime => true,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_debug,config,overload_kill_enable]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,handlers,debug,overload_kill_qlen],\n\t\t\tinherit => [logging,overload_kill_qlen],\n\t\t\truntime => true,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_debug,config,overload_kill_qlen]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,handlers,debug,overload_kill_mem_size],\n\t\t\tinherit => [logging,overload_kill_mem_size],\n\t\t\truntime => true,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_debug,config,overload_kill_mem_size]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,handlers,debug,overload_kill_restart_after],\n\t\t\tinherit => [logging,overload_kill_restart_after],\n\t\t\truntime => true,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_debug,config,overload_kill_restart_after]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,handlers,debug,formatter,chars_limit],\n\t\t\tinherit => [logging,formatter,chars_limit],\n\t\t\truntime => true,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_debug,formatter,chars_limit]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,handlers,debug,formatter,depth],\n\t\t\tinherit => [logging,formatter,depth],\n\t\t\truntime => true,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_debug,formatter,depth]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,handlers,debug,formatter,max_size],\n\t\t\tinherit => [logging,formatter,max_size],\n\t\t\truntime => true,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_debug,formatter,max_size]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,handlers,debug,formatter,template],\n\t\t\tinherit => [logging,formatter,template],\n\t\t\truntime => true,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_debug,formatter,template]\n\t\t\t}\n\t\t},\n\n\t\t% http api logs\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,handlers,http,api],\n\t\t\tdefault => false,\n\t\t\ttype => boolean,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\truntime => 
true,\n\t\t\thandle_set => fun\n\t\t\t\t(_K,true,_S,_) ->\n\t\t\t\t\tar_logger:start_handler(arweave_http_api),\n\t\t\t\t\t{store, true};\n\t\t\t\t(_K,false,_S,_) ->\n\t\t\t\t\tar_logger:stop_handler(arweave_http_api),\n\t\t\t\t\t{store, false}\n\t\t\tend\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,handlers,http,api,max_no_files],\n\t\t\tinherit => [logging,max_no_files],\n\t\t\truntime => true,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_http_api,config,max_no_files]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,handlers,http,api,max_no_bytes],\n\t\t\tinherit => [logging,max_no_bytes],\n\t\t\truntime => true,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_http_api,config,max_no_bytes]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,handlers,http,api,sync_mode_qlen],\n\t\t\tinherit => [logging,sync_mode_qlen],\n\t\t\truntime => true,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_http_api,config,sync_mode_qlen]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,handlers,http,api,drop_mode_qlen],\n\t\t\tinherit => [logging,drop_mode_qlen],\n\t\t\truntime => true,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_http_api,config,drop_mode_qlen]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,handlers,http,api,flush_qlen],\n\t\t\tinherit => [logging,flush_qlen],\n\t\t\truntime => true,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_http_api,config,flush_qlen]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,handlers,http,api,burst_limit_enable],\n\t\t\tinherit => [logging,burst_limit_enable],\n\t\t\truntime => true,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_http_api,config,burst_limit_enable]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,handlers,http,api,burst_limit_max_count],\n\t\t\tinherit => [logging,burst_limit_max_count],\n\t\t\truntime => true,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_http_api,config,burst_limit_max_count]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,handlers,http,api,burst_limit_window_time],\n\t\t\tinherit => [logging,burst_limit_window_time],\n\t\t\truntime => true,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_http_api,config,burst_limit_window_time]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,handlers,http,api,overload_kill_enable],\n\t\t\tinherit => [logging,overload_kill_enable],\n\t\t\truntime => true,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_http_api,config,overload_kill_enable]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,handlers,http,api,overload_kill_qlen],\n\t\t\tinherit => 
[logging,overload_kill_qlen],\n\t\t\truntime => true,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_http_api,config,overload_kill_qlen]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,handlers,http,api,overload_kill_mem_size],\n\t\t\tinherit => [logging,overload_kill_mem_size],\n\t\t\truntime => true,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_http_api,config,overload_kill_mem_size]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,handlers,http,api,overload_kill_restart_after],\n\t\t\tinherit => [logging,overload_kill_restart_after],\n\t\t\truntime => true,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_http_api,config,overload_kill_restart_after]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,handlers,http,api,compress_on_rotate],\n\t\t\tinherit => {[logging,compress_on_rotate], [type, default]},\n\t\t\truntime => true,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_http_api,config,compress_on_rotate]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,handlers,http,api,formatter,chars_limit],\n\t\t\tinherit => [logging,formatter,chars_limit],\n\t\t\truntime => true,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_http_api,formatter,chars_limit]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,handlers,http,api,formatter,max_size],\n\t\t\tinherit => [logging,formatter,max_size],\n\t\t\truntime => true,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_http_api,formatter,max_size]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tenabled => true,\n\t\t\tparameter_key => [logging,handlers,http,api,formatter,depth],\n\t\t\tinherit => [logging,formatter,depth],\n\t\t\truntime => true,\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\thandle_set => {\n\t\t\t\tfun logger_set/4,\n\t\t\t\t[arweave_http_api,formatter,depth]\n\t\t\t}\n\t\t},\n\n\t\t%-----------------------------------------------------\n\t\t% arweave_config http api parameters\n\t\t%-----------------------------------------------------\n\t\t#{\n\t\t\tparameter_key => [config,http,api,enabled],\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\tshort_description => <<\"enable arweave configuration http api interface\">>,\n\t\t\t% @todo enable it by default after testing\n\t\t\tdefault => false,\n\t\t\ttype => boolean,\n\t\t\trequired => false,\n\t\t\truntime => false\n\t\t},\n\t\t#{\n\t\t\tparameter_key => [config,http,api,listen,port],\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\tshort_description => \"set arweave configuration http api interface port\",\n\t\t\tdefault => 4891,\n\t\t\ttype => tcp_port,\n\t\t\trequired => false,\n\t\t\truntime => false\n\t\t},\n\t\t#{\n\t\t\tparameter_key => [config,http,api,listen,address],\n\t\t\tenvironment => true,\n\t\t\tlong_argument => true,\n\t\t\tshort_description => \"set arweave configuration http api listen address\",\n\t\t\ttype => [ipv4, file],\n\t\t\trequired => false,\n\t\t\t% can be an ip address or an unix socket 
path,\n\t\t\t% the configuration should be transparent\n\t\t\t% though and we should avoid using\n\t\t\t%   {local, socket_path}\n\t\t\t% the rule is probably to say if the value\n\t\t\t% start with / then this is an unix socket,\n\t\t\t% else this is an ip address or an hostname.\n\t\t\tdefault => <<\"127.0.0.1\">>,\n\t\t\truntime => false\n\t\t}\n\t\t% @todo implement read, write and token parameters\n\t\t% #{\n\t\t% \tparameter => [config,http,api,read],\n\t\t% \tenvironment => <<\"AR_CONFIG_HTTP_API_READ\">>,\n\t\t% \tshort_description => \"allow read (get method) on arweave configuration http api\",\n\t\t% \ttype => boolean,\n\t\t% \trequired => false,\n\t\t% \tdefault => true\n\t\t% },\n\t\t% #{\n\t\t% \tparameter => [config,http,api,write],\n\t\t% \tenvironment => <<\"AR_CONFIG_HTTP_API_WRITE\">>,\n\t\t% \tshort_description => \"allow write (post method) on arweave configuration http api\",\n\t\t% \ttype => boolean,\n\t\t% \trequired => false,\n\t\t% \tdefault => true\n\t\t% },\n\t\t% #{\n\t\t% \tparameter => [config,http,api,token],\n\t\t% \tenvironment => <<\"AR_CONFIG_HTTP_API_TOKEN\">>,\n\t\t% \tshort_description => \"set an access token for arweave configuration http api interface\",\n\t\t% \ttype => string,\n\t\t% \trequired => false,\n\t\t% \tdefault => <<>>\n\t\t% }\n\t].\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc function helper to deal with legacy configuration.\n%% @end\n%%--------------------------------------------------------------------\nlegacy_get(_K, #{ spec := #{ legacy := L }}) ->\n\tV = arweave_config_legacy:get(L),\n\t{ok, V};\nlegacy_get(_K, _) ->\n\t{error, not_found}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc function helper to deal with legacy configuration.\n%% @end\n%%--------------------------------------------------------------------\nlegacy_set(_K, V, #{ spec := #{ legacy := L }},_) ->\n\tarweave_config_legacy:set(L, V),\n\t{store, V};\nlegacy_set(_K, V, _,_) ->\n\t{store, V}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc helper function to dynamically set logging handlers.\n%% @end\n%%--------------------------------------------------------------------\nlogger_set(_I, Value, _S, [HandlerId, formatter, Key]) ->\n\tcase logger:get_handler_config(HandlerId) of\n\t\t{ok, #{formatter := {logger_formatter, Config}}} ->\n\t\t\tNewConfig = Config#{\n\t\t\t\tKey => Value\n\t\t\t},\n\t\t\tlogger:update_handler_config(\n\t\t\t\tHandlerId,\n\t\t\t\tformatter,\n\t\t\t\t{logger_formatter, NewConfig}\n\t\t\t),\n\t\t\t{store, Value};\n\t\t_Else ->\n\t\t\t{store, Value}\n\tend;\nlogger_set(_K, Value, _S, [HandlerId, ConfigKey, Key]) ->\n\tcase logger:get_handler_config(HandlerId) of\n\t\t{ok, HandlerConfig} ->\n\t\t\tC = maps:get(ConfigKey, HandlerConfig),\n\t\t\tlogger:update_handler_config(\n\t\t\t\tHandlerId,\n\t\t\t\tConfigKey,\n\t\t\t\tC#{ Key => Value }\n\t\t\t),\n\t\t\t{store, Value};\n\t\t_Else ->\n\t\t\t{store, Value}\n\tend.\n"
  },
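  {
    "path": "apps/arweave_config/examples/arweave_config_parameter_example.erl",
    "content": "%%% Illustrative sketch (hypothetical example file, not registered in\n%%% arweave_config_parameters:init/0): a minimal parameter specification\n%%% map in the shape used by that module. The [example,flag] key, the\n%%% environment variable and the argument name are assumptions.\n-module(arweave_config_parameter_example).\n-export([spec/0]).\n\nspec() ->\n\t#{\n\t\tenabled => true,\n\t\tparameter_key => [example, flag],\n\t\tdefault => false,\n\t\ttype => boolean,\n\t\truntime => true,\n\t\tdeprecated => false,\n\t\trequired => false,\n\t\tenvironment => <<\"AR_EXAMPLE_FLAG\">>,\n\t\tlong_argument => <<\"--example.flag\">>,\n\t\t%% handle_set receives the key, the new value, the current state\n\t\t%% and extra arguments; returning {store, V} stores V as is.\n\t\thandle_set => fun(_Key, Value, _State, _Extra) -> {store, Value} end\n\t}.\n"
  },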
  {
    "path": "apps/arweave_config/src/arweave_config_parser.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2025 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @doc arweave configuration parser.\n%%%\n%%% This module exports all function required to parse keys and values\n%%% for arweave configuration.\n%%%\n%%% @end\n%%%===================================================================\n-module(arweave_config_parser).\n-export([\n\tseparator/0,\n\tkey/1,\n\tis_parameter/1\n]).\n-include_lib(\"eunit/include/eunit.hrl\").\n-define(SEPARATOR, $.).\n\n%%--------------------------------------------------------------------\n%% @doc default separator used.\n%% @end\n%%--------------------------------------------------------------------\nseparator() -> ?SEPARATOR.\n\n%%--------------------------------------------------------------------\n%% @doc parses a string and converted it to a configuration key.\n%% At this time, only ASCII characters are supported.\n%%\n%% == Examples ==\n%%\n%% ```\n%% > arweave_config_parser:key(\"test.2.3.[127.0.0.1:1984].data\").\n%% {ok,[test,2,3,<<\"127.0.0.1:1984\">>,data]}\n%% '''\n%%\n%% @TODO check indepth list.\n%% @end\n%%--------------------------------------------------------------------\n-spec key(Key) -> Return when\n\tKey :: atom() | binary() | string(),\n\tReturn :: {ok, [atom() | binary()]}\n\t        | {error, map()}.\n\nkey(Key) ->\n\tcase is_parameter(Key) of\n\t\ttrue ->\n\t\t\t{ok, Key};\n\t\tfalse ->\n\t\t\tkey2(Key)\n\tend.\n\nkey2(Atom) when is_atom(Atom) ->\n\tkey2(atom_to_binary(Atom));\nkey2(List) when is_list(List) ->\n\ttry\n\t\tkey2(list_to_binary(List))\n\tcatch\n\t\t_:_ ->\n\t\t\t{error, #{ reason => invalid_data }}\n\tend;\nkey2(Binary) when is_binary(Binary) ->\n\tkey_parse(Binary, <<>>, [], 1);\nkey2(_Elsewise) ->\n\t{error, #{ reason => invalid_data }}.\n\n%%--------------------------------------------------------------------\n%% @doc\n%% @end\n%%--------------------------------------------------------------------\nis_parameter([]) -> false;\nis_parameter(List) when is_list(List) ->\n\ttry\n\t\t_ = list_to_binary(List),\n\t\tfalse\n\tcatch\n\t\t_:_ ->\n\t\t\tis_parameter_list(List)\n\tend;\nis_parameter(_) -> false.\n\n%%--------------------------------------------------------------------\n%%\n%%--------------------------------------------------------------------\nis_parameter_list([]) -> true;\nis_parameter_list([<<>>|_]) -> false;\nis_parameter_list([Item|Rest]) when is_atom(Item) ->\n\tis_parameter_list(Rest);\nis_parameter_list([Item|Rest]) when is_integer(Item) ->\n\tis_parameter_list(Rest);\nis_parameter_list([Item|Rest]) when is_binary(Item) ->\n\tis_parameter_list(Rest);\nis_parameter_list(_) -> false.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc parse the key and convert it to parameter format.\n%% @end\n%%--------------------------------------------------------------------\nkey_parse(<<>>, <<>>, [], 1) ->\n\t{error, #{\n\t\t\tposition => 1,\n\t\t\treason => empty_key\n\t\t}\n\t};\nkey_parse(<<?SEPARATOR>>, _Buffer, _Key, Pos) ->\n\t{error, #{\n\t\t\tposition => Pos,\n\t\t\treason => separator_ending\n\t\t}\n\t};\nkey_parse(<<?SEPARATOR, ?SEPARATOR, _Rest/binary>>, _Buffer, _Key, Pos) ->\n\t{error, #{\n\t\t\tposition => Pos,\n\t\t\treason => 
multi_separators\n\t\t}\n\t};\nkey_parse(<<?SEPARATOR, Rest/binary>>, Buffer, [], 1) ->\n\tkey_parse(Rest, Buffer, [], 2);\nkey_parse(<<>>, <<>>, Key, _Pos) ->\n\tkey_convert(Key, []);\nkey_parse(<<>>, Buffer, Key, Pos) ->\n\tkey_parse(<<>>, <<>>, [Buffer|Key], Pos);\nkey_parse(<<?SEPARATOR, \"[\", Rest/binary>>, <<>>, Key, Pos) ->\n\tkey_parse_string(Rest, <<>>, Key, Pos+2);\nkey_parse(<<?SEPARATOR, \"[\", Rest/binary>>, Buffer, Key, Pos) ->\n\tkey_parse_string(Rest, <<>>, [Buffer|Key], Pos+2);\nkey_parse(<<?SEPARATOR, Rest/binary>>, Buffer, Key, Pos) ->\n\tkey_parse(Rest, <<>>, [Buffer|Key], Pos+1);\nkey_parse(<<C:8, Rest/binary>>, Buffer, Key, Pos)\n\twhen C >= $0, C =< $9;\n\t     C >= $A, C =< $Z;\n\t     C >= $a, C =< $z;\n\t     C =:= $_ ->\n\tkey_parse(Rest, <<Buffer/binary, C:8>>, Key, Pos+1);\nkey_parse(<<C:8, Rest/binary>>, Buffer, Key, Pos) ->\n\t{error, #{\n\t\t\tchar => <<C>>,\n\t\t\trest => Rest,\n\t\t\tbuffer => Buffer,\n\t\t\tposition => Pos,\n\t\t\tkey => Key,\n\t\t\treason => bad_char\n\t\t }\n\t}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc parse a string enclosed by `[' and `]'.\n%% @end\n%%--------------------------------------------------------------------\nkey_parse_string(<<\"]\">>, Buffer, Key, Pos) ->\n\tkey_parse(<<>>, <<>>, [{string, Buffer}|Key], Pos+1);\nkey_parse_string(<<\"]\", ?SEPARATOR, Rest/binary>>, Buffer, Key, Pos) ->\n\tkey_parse(Rest, <<>>, [{string, Buffer}|Key], Pos+2);\nkey_parse_string(<<C:8, Rest/binary>>, Buffer, Key, Pos)\n\twhen C >= $!, C =< $/;\n\t     C >= $0, C =< $9;\n\t     C >= $?, C =< $Z;\n\t     C >= $a, C =< $z;\n\t     C =:= $:;\n\t     C =:= $=;\n\t     C =:= $_;\n\t     C =:= $~ ->\n\tkey_parse_string(Rest, <<Buffer/binary, C:8>>, Key, Pos+1);\nkey_parse_string(Binary, Buffer, Key, Pos) ->\n\t{error, #{\n\t\t\trest => Binary,\n\t\t\tbuffer => Buffer,\n\t\t\tposition => Pos,\n\t\t\tkey => Key,\n\t\t\treason => bad_string\n\t\t }\n\t}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc convert a key to its final parameter form.\n%% @end\n%%--------------------------------------------------------------------\nkey_convert([], Buffer) ->\n\t{ok, Buffer};\nkey_convert([{string, Value}|Rest], Buffer) ->\n\tkey_convert(Rest, [Value|Buffer]);\nkey_convert([Item|Rest], Buffer) ->\n\ttry\n\t\tInteger = binary_to_integer(Item),\n\t\tkey_convert(Rest, [Integer|Buffer])\n\tcatch\n\t\t_:_ ->\n\t\t\tkey_convert_to_atom(Item, Rest, Buffer)\n\tend;\nkey_convert(Rest, Buffer) ->\n\t{error, #{\n\t\t\trest => Rest,\n\t\t\tbuffer => Buffer,\n\t\t\treason => bad_key\n\t\t }\n\t}.\n\nkey_convert_to_atom(Item, Rest, Buffer) ->\n\ttry\n\t\tAtom = binary_to_existing_atom(Item),\n\t\tkey_convert(Rest, [Atom|Buffer])\n\tcatch\n\t\t_:_ ->\n\t\t\t{error, #{\n\t\t\t\t\treason => invalid_key,\n\t\t\t\t\tkey => Item\n\t\t\t\t}\n\t\t\t}\n\tend.\n\nkey_test() ->\n\t?assertEqual(\n\t\t{ok, [global, debug]},\n\t\tkey(<<\n\t\t\t?SEPARATOR,\n\t\t\t\"global\",\n\t\t\t?SEPARATOR,\n\t\t\t\"debug\"\n\t\t >>)\n\t),\n\t?assertEqual(\n\t\t{ok, [global, debug]},\n\t\tkey(\n\t\t\t[?SEPARATOR]\n\t\t\t++ \"global\"\n\t\t\t++ [?SEPARATOR]\n\t\t\t++ \"debug\"\n\t\t)\n\t),\n\t?assertEqual(\n\t\t{ok, [storage,3,unpacked,state]},\n\t\tkey(<<\n\t\t\t?SEPARATOR,\n\t\t\t\"storage\",\n\t\t\t?SEPARATOR,\n\t\t\t\"3\",\n\t\t\t?SEPARATOR,\n\t\t\t\"unpacked\",\n\t\t\t?SEPARATOR,\n\t\t\t\"state\"\n\t\t    >>)\n\t),\n\t?assertEqual(\n\t\t{ok, 
[peers,<<\"127.0.0.1:1984\">>,trusted]},\n\t\tkey(<<\n\t\t\t?SEPARATOR,\n\t\t\t\"peers\",\n\t\t\t?SEPARATOR,\n\t\t\t\"[127.0.0.1:1984]\",\n\t\t\t?SEPARATOR,\n\t\t\t\"trusted\"\n\t\t>>)\n\t),\n\t?assertEqual(\n\t\t{error, #{\n\t\t\t\treason => bad_char,\n\t\t\t\tbuffer => <<>>,\n\t\t\t\tchar => <<\"~\">>,\n\t\t\t\trest => <<>>,\n\t\t\t\tkey => [],\n\t\t\t\tposition => 1\n\t\t\t}\n\t\t},\n\t\tkey(<<\"~\">>)\n\t),\n\t?assertEqual(\n\t\t{error, #{\n\t\t\t\treason => empty_key,\n\t\t\t\tposition => 1\n\t\t\t}\n\t\t},\n\t\tkey(\"\")\n\t),\n\t?assertEqual(\n\t\t{error, #{\n\t\t\t\treason => invalid_key,\n\t\t\t\tkey => <<\"totally_random_key\">>\n\t\t\t}\n\t\t},\n\t\tkey(\"totally_random_key\")\n\t),\n\t?assertEqual(\n\t\t{error, #{\n\t\t\t\treason => multi_separators,\n\t\t\t\tposition => 1\n\t\t\t}\n\t\t},\n\t\tkey(<<\n\t\t\t?SEPARATOR,\n\t\t\t?SEPARATOR,\n\t\t\t\"test\"\n\t\t>>)\n\t),\n\t?assertEqual(\n\t\t{error, #{\n\t\t\t\treason => multi_separators,\n\t\t\t\tposition => 6\n\t\t\t}\n\t\t},\n\t\tkey(<<\n\t\t\t?SEPARATOR,\n\t\t\t\"test\",\n\t\t\t?SEPARATOR, ?SEPARATOR,\n\t\t\t\"test\"\n\t\t>>)\n\t),\n\t?assertEqual(\n\t\t{error, #{\n\t\t\t\treason => separator_ending,\n\t\t\t\tposition => 5\n\t\t\t}\n\t\t},\n\t\tkey(<<\n\t\t\t\"test\",\n\t\t\t?SEPARATOR\n\t\t>>)\n\t).\n"
  },
  {
    "path": "apps/arweave_config/src/arweave_config_serializer.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2026 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @doc Arweave Configuration File Serializer.\n%%%\n%%% This module is in charge of serializing a map and convert it to a\n%%% specification compatible format.\n%%%\n%%% @end\n%%%===================================================================\n-module(arweave_config_serializer).\n-compile(warnings_as_errors).\n-export([encode/1, encode/2, decode/1, decode/2]).\n-export([map_merge/1]).\n-export([encode_enter/1, encode_iterate/1, encode_merge/1]).\n-export([decode_enter/1, decode_iterate/1, decode_merge/1]).\n-include_lib(\"kernel/include/logger.hrl\").\n\n%%--------------------------------------------------------------------\n%% @doc convert a map to arweave_config specification format.\n%% @see encode/2\n%% @end\n%%--------------------------------------------------------------------\n-spec encode(Map) -> Return when\n\tMap :: map(),\n\tReturn :: map().\n\nencode(Map) ->\n\tencode(Map, #{}).\n\n%%--------------------------------------------------------------------\n%% @doc convert a map to arweave_config specification format.\n%% @end\n%%--------------------------------------------------------------------\nencode(Map, Opts) ->\n\tencode(Map, Opts, []).\n\n%%--------------------------------------------------------------------\n%% @doc convert a map to arweave_config specification format.\n%% @end\n%%--------------------------------------------------------------------\n-spec encode(Map, Opts, Level) -> Return when\n\tMap :: map(),\n\tOpts :: map(),\n\tLevel :: list(),\n\tReturn :: map().\n\nencode(Map, Opts, Level) when is_map(Map), is_list(Level) ->\n\tIterator = maps:iterator(Map),\n\tState = #{\n\t\topts => Opts,\n\t\tmap => Map,\n\t\titerator => Iterator,\n\t\tlevel => Level\n\t},\n\tarweave_config_fsm:init(?MODULE, encode_enter, State);\nencode(_, _, _) ->\n\t{error, badarg}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc initialize the buffer and start the iteration.\n%% @end\n%%--------------------------------------------------------------------\nencode_enter(State = #{ iterator := Iterator }) ->\n\tNewState = State#{\n\t\titerator => maps:next(Iterator),\n\t\tbuffer => #{}\n\t},\n\t{next, encode_iterate, NewState}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc iterate over the iterator and route the data when needed.\n%% @end\n%%--------------------------------------------------------------------\nencode_iterate(#{ iterator := none, buffer := Buffer }) ->\n\t{ok, Buffer};\nencode_iterate(State = #{ iterator := {K, V, Iterator} })\n\twhen is_map(V) ->\n\t\tOpts = maps:get(opts, State),\n\t\tLevel = maps:get(level, State),\n\t\tBuffer = maps:get(buffer, State),\n\t\tKey = encode_convert_key(K),\n\t\tcase encode(V, Opts, [Key|Level]) of\n\t\t\t{ok, MapBuffer} ->\n\t\t\t\tNewState = State#{\n\t\t\t\t\titerator => Iterator,\n\t\t\t\t\tbuffer => maps:merge(Buffer, MapBuffer)\n\t\t\t\t},\n\t\t\t\t{next, encode_iterate, NewState};\n\t\t\tElse ->\n\t\t\t\t{error, Else}\n\t\tend;\nencode_iterate(State = #{ iterator := {_K, _V, _Iterator} }) ->\n\t{next, encode_merge, 
State}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc merge key/values into one unique map.\n%% @end\n%%--------------------------------------------------------------------\nencode_merge(State = #{ iterator := {K, V, Iterator} }) ->\n\tBuffer = maps:get(buffer, State),\n\tLevel = maps:get(level, State),\n\tKey = encode_convert_key(K),\n\tReversedLevel = lists:reverse([Key|Level]),\n\tNewState = State#{\n\t\titerator => maps:next(Iterator),\n\t\tbuffer => Buffer#{ ReversedLevel => V }\n\t},\n\t{next, encode_iterate, NewState}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc convert the key as existing atoms.\n%% @end\n%%--------------------------------------------------------------------\nencode_convert_key(List) when is_list(List) ->\n\ttry list_to_existing_atom(List)\n\tcatch _:_ -> List\n\tend;\nencode_convert_key(Binary) when is_binary(Binary) ->\n\ttry binary_to_existing_atom(Binary)\n\tcatch _:_ -> Binary\n\tend;\nencode_convert_key(Else) -> Else.\n\n\n%%--------------------------------------------------------------------\n%% @doc\n%% @see decode/2\n%% @end\n%%--------------------------------------------------------------------\n-spec decode(Map) -> Return when\n\tMap :: #{ [term()] => term() },\n\tReturn :: #{ term() => term() }.\n\ndecode(Map) ->\n\tdecode(Map, #{}).\n\n%%--------------------------------------------------------------------\n%% @doc decode arweave config serialized map. A similar implementation\n%% was present in `arweave_config_store', this one is using\n%% `arweave_config_fsm'.\n%% @end\n%%--------------------------------------------------------------------\n-spec decode(Map, Opts) -> Return when\n\tMap :: #{ [term()] => term() },\n\tOpts :: map(),\n\tReturn :: #{ term() => term() }.\n\ndecode(Map, Opts) ->\n\tIterator = maps:iterator(Map),\n\tLevel = [],\n\tState = #{\n\t\topts => Opts,\n\t\tmap => Map,\n\t\titerator => Iterator,\n\t\tlevel => Level\n\t},\n\tarweave_config_fsm:init(?MODULE, decode_enter, State).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc initialize the decoder and the iterator.\n%% @end\n%%--------------------------------------------------------------------\ndecode_enter(State = #{ iterator := Iterator }) ->\n\tNewState = State#{\n\t\titerator => maps:next(Iterator),\n\t\tbuffer => []\n\t},\n\t{next, decode_iterate, NewState}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc loop over the items present in the map.\n%% @end\n%%--------------------------------------------------------------------\ndecode_iterate(State = #{ iterator := none }) ->\n\t{next, decode_merge, State};\ndecode_iterate(State = #{ iterator := {K, V, Iterator} }) when is_list(K) ->\n\tBuffer = maps:get(buffer, State),\n\t[K0|KS] = lists:reverse(K),\n\tFold = lists:foldl(fun decode_fold/2, #{ K0 => V }, KS),\n\tNewState = State#{\n\t\titerator => maps:next(Iterator),\n\t\tbuffer => [Fold|Buffer]\n\t},\n\t{next, decode_iterate, NewState}.\n\n% lambda using in decode_iterate.\ndecode_fold(Item, Acc) ->\n\t#{ Item => Acc }.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc merge the final values.\n%% @end\n%%--------------------------------------------------------------------\ndecode_merge(#{ buffer := Buffer }) ->\n\tResult = map_merge(Buffer),\n\t{ok, 
Result}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @private\n%% @doc merge map recursively using arweave config rules.\n%% @end\n%% @todo this function should use arweave_config_fsm\n%%--------------------------------------------------------------------\nmap_merge(ListOfMap) ->\n\tlists:foldr(\n\t\tfun(X, A) ->\n\t\t\tmap_merge(X, A)\n\t\tend,\n\t\t#{},\n\t\tListOfMap\n\t).\n\nmap_merge(A, B) when is_map(A), is_map(B) ->\n\tI = maps:iterator(A),\n\tmap_merge2(I, B);\nmap_merge(A, B) when is_map(A) ->\n\tA#{ '_' => B }.\n\nmap_merge2(none, B) ->\n\tB;\nmap_merge2({K, V, I2}, B)\n\twhen is_map(V), is_map_key(K, B) ->\n\t\tBV = maps:get(K, B, #{}),\n\t\tResult = map_merge(V, BV),\n\t\tmap_merge2(I2, B#{ K => Result });\nmap_merge2({K, V, I2}, B)\n\twhen is_map_key(K, B) ->\n\t\tBV = maps:get(K, B),\n\t\tcase V =:= BV of\n\t\t\ttrue ->\n\t\t\t\tmap_merge2(I2, B#{ K => V });\n\t\t\tfalse ->\n\t\t\t\tmap_merge3(I2, B, K, BV, V)\n\t\tend;\nmap_merge2({K, V, I2}, B) ->\n\tmap_merge2(I2, B#{ K => V });\nmap_merge2(I = [0|_], B) ->\n\tI2 = maps:next(I),\n\tmap_merge2(I2, B).\n\nmap_merge3(I2, B, K, BV, V) when is_map(BV) ->\n\tcase is_map_key('_', BV) of\n\t\ttrue ->\n\t\t\tOV = maps:get('_', BV),\n\t\t\t?LOG_WARNING(\"value ~p will be overwritten.\", [OV]),\n\t\t\tmap_merge2(I2, B#{ K => BV#{ '_' => V }});\n\t\tfalse ->\n\t\t\tmap_merge2(I2, B#{ K => BV#{ '_' => V }})\n\tend;\nmap_merge3(I2, B, K, BV, V) ->\n\t?LOG_WARNING(\"value ~p will be ignored.\", [BV]),\n\tmap_merge2(I2, B#{ K => V }).\n"
  },
  {
    "path": "apps/arweave_config/src/arweave_config_signal_handler.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2025 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @doc Arweave Configuration Unix Signal Handler.\n%%%\n%%% inspired by: https://github.com/rabbitmq/rabbitmq-server/pull/2227/files\n%%% see also: https://www.erlang.org/docs/20/man/os#set_signal-2\n%%% see also: https://www.erlang.org/docs/20/man/kernel_app#id63489\n%%%\n%%% @end\n%%%===================================================================\n-module(arweave_config_signal_handler).\n-compile(warnings_as_errors).\n-behavior(gen_event).\n-export([\n\tstart_link/0,\n\tsignals/0,\n\tsignal/2,\n\tsigusr1/0,\n\tsigusr2/0\n]).\n-export([\n\tinit/1,\n\tterminate/2,\n\thandle_call/2,\n\thandle_event/2,\n\thandle_info/2\n]).\n-include_lib(\"kernel/include/logger.hrl\").\n\n-type signal() :: sighup | sigquit | sigterm | sigusr1 | sigusr2.\n\n-type state() :: #{ signal() => pos_integer() }.\n\n%%--------------------------------------------------------------------\n%% @doc Starts unix signal handler.\n%% @end\n%%--------------------------------------------------------------------\nstart_link() ->\n\t_ = gen_event:delete_handler(erl_signal_server, ?MODULE, []),\n\tok = gen_event:swap_sup_handler(\n\t\terl_signal_server ,\n\t\t{erl_signal_handler, []},\n\t\t{?MODULE, []}\n\t),\n\tset_signals(),\n\tgen_event:start_link({local, ?MODULE}).\n\n%%--------------------------------------------------------------------\n%% @doc Returns supported Unix signals.\n%% @end\n%%--------------------------------------------------------------------\n-spec signals() -> [signal()].\n\nsignals() ->\n\t[\n\t\tsighup,\n\t\tsigquit,\n\t\tsigterm,\n\t\tsigusr1,\n\t\tsigusr2\n\t].\n\n%%--------------------------------------------------------------------\n%% @doc Executes signal side effect.\n%% @end\n%%--------------------------------------------------------------------\n-spec signal(Signal, State) -> Return when\n\tSignal :: signal(),\n\tState :: state(),\n\tReturn :: {ok, state()}.\n\nsignal(E = sighup, State) ->\n\t% 1. reload environment variable\n\t% 2. reload arguments\n\t% 3. reload the configuration\n\tarweave_config_environment:reset(),\n\tarweave_config_environment:load(),\n\tarweave_config_legacy:reset(),\n\tupdate_state(E, State);\nsignal(E = sigquit, State) ->\n\t% stop arweave and generate a core dump\n\terlang:halt(\"sigquit received\", [{flush, true}]),\n\tupdate_state(E, State);\nsignal(E = sigterm, State) ->\n\t% stop arweave, call init:stop/0.\n\tinit:stop(0),\n\tupdate_state(E, State);\nsignal(E = sigusr1, State) ->\n\t% Custom signal mostly used to print arweave state\n\t% diagnostic, and debugging information in the logs.\n\t% In case of odd behavior, this signal can be helpful\n\t% to extract internal information.\n\tsigusr1(),\n\tupdate_state(E, State);\nsignal(E = sigusr2, State) ->\n\t% custom signal mostly used to recover arweave\n\t% 1. check if arweave is still connected to epmd\n\t% 2. 
reconnect to epmd if disconnected\n\tsigusr2(),\n\tupdate_state(E, State);\nsignal(E, State) ->\n\t% if the signal is not supported, we use the default\n\t% behavior from erl_signal_handler.\n\t?LOG_INFO(\"received signal ~p\", [E]),\n\terl_signal_handler:handle_event(E, State),\n\tupdate_state(E, State).\n\n%%--------------------------------------------------------------------\n%% @doc execute sigusr1. prints diagnostic.\n%% @end\n%%--------------------------------------------------------------------\nsigusr1() ->\n\tspawn(fun () -> arweave_diagnostic:all() end).\n\n%%--------------------------------------------------------------------\n%% @doc\n%% @end\n%%--------------------------------------------------------------------\nsigusr2() ->\n\tok.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\n-spec init(any()) -> {ok, state()}.\n\ninit(_) ->\n\terlang:process_flag(trap_exit, true),\n\t{ok, #{}}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\n-spec handle_event(Signal, State) -> Return when\n\tSignal :: signal(),\n\tState :: state(),\n\tReturn :: {ok, state()}.\n\nhandle_event(Signal, State) when is_atom(Signal) ->\n\t?LOG_INFO(\"received signal ~p\", [Signal]),\n\ttry\n\t\tsignal(Signal, State)\n\tcatch\n\t\t_:_ ->\n\t\t\t{ok, State}\n\tend;\nhandle_event(Event, State) ->\n\t?LOG_DEBUG(\"received unexpected event: ~p\", [Event]),\n\t{ok, State}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\n-spec handle_info(Event, State) -> Return when\n\tEvent :: any(),\n\tState :: state(),\n\tReturn :: {ok, state()}.\n\nhandle_info(Event, State) ->\n\t?LOG_DEBUG(\"received unexpected event: ~p\", [Event]),\n\t{ok, State}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\n-spec handle_call(Event, State) -> Return when\n\tEvent :: any(),\n\tState :: state(),\n\tReturn :: {ok, ok, state()}.\n\nhandle_call(Event, State) ->\n\t?LOG_DEBUG(\"received unexpected event: ~p\", [Event]),\n\t{ok, ok, State}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\n-spec terminate(Reason, State) -> Return when\n\tReason :: any(),\n\tState :: state(),\n\tReturn :: ok.\n\nterminate(_Reason, _State) ->\n\t?LOG_INFO(\"unix signal handler stopped\"),\n\tok.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nset_signals() ->\n\t[\n\t\tbegin\n\t\t\t?LOG_DEBUG(\"catch signal ~p\", [S]),\n\t\t\tos:set_signal(S, handle)\n\t\tend\n\t\t|| S <- signals()\n\t].\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc counts the amount of signals received/used. Mostly used for\n%% debugging purpose.\n%% @end\n%%--------------------------------------------------------------------\n-spec update_state(Signal, State) -> Return when\n\tSignal :: signal(),\n\tState :: state(),\n\tReturn :: {ok, state()}.\n\nupdate_state(Signal, State) ->\n\tValue = maps:get(Signal, State, 0),\n\t{ok, State#{ Signal => Value + 1 }}.\n"
  },
  {
    "path": "apps/arweave_config/src/arweave_config_spec.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @copyright 2025 (c) Arweave\n%%% @doc Arweave configuration specification behavior.\n%%%\n%%% When used  as module, `arweave_config_spec' defines  a behavior to\n%%% deal with arweave parameters.\n%%%\n%%% When used as process, `arweave_config_spec' is in charge of\n%%% loading and  managing arweave configuration  specification, stored\n%%% in a map.\n%%%\n%%% == `arweave_config_spec' process ==\n%%%\n%%% The    `arweave_config_spec'   process    is   a    frontend   for\n%%% `arweave_config_store'.  The idea  is to  pass the  parameter from\n%%% another module/process, check it first based on the specification,\n%%% and then  forward the valid  result to  store it. here  an example\n%%% anwith the environment\n%%%\n%%% ```\n%%%  _____________\n%%% |             |\n%%% | system      |\n%%% | environment |\n%%% |_____________|\n%%%    |\n%%%    |\n%%%  _\\_/________________________\n%%% |                            |\n%%% | arweave_config_environment |<--+\n%%% |____________________________|   |\n%%%    |      / \\                    |\n%%%    |       | [specification      |\n%%%    |       |  errors]            |\n%%%  _\\ /______|__________           |\n%%% |                     |          |\n%%% | arweave_config_spec |          |\n%%% |_____________________|          |\n%%%    |                             |\n%%%    | [specification success]     |\n%%%    |                             |\n%%%  _\\_/__________________          |\n%%% |                      |         |\n%%% | arweave_config_store |---------+ [valid result]\n%%% |______________________|\n%%%\n%%% '''\n%%%\n%%% == TODO ===\n%%%\n%%% === Check Parameter Function ===\n%%%\n%%% A function is required to check manually/on demande a value\n%%% without setting it. It will be needed for testing.\n%%%\n%%% === Variable Parameter Item Specification ===\n%%%\n%%% Parameter item can be a variable, defined by a `type'. This is\n%%% helpful when setting different kind of values like storage modules\n%%% or peers.\n%%%\n%%% ```\n%%% [peers, {peer}, enabled]\n%%% '''\n%%%\n%%% === Check for duplicated values ===\n%%%\n%%% A warning (or an error) should be returned when there is a\n%%% duplicated specification. 
Here a quick list of error/warning:\n%%%\n%%% - errors (stop execution):\n%%%   - duplicated parameter_key\n%%% - warnings (last one overwrite the first one):\n%%%   - duplicated environments\n%%%   - duplicated short arguments\n%%%   - duplicated long arguments\n%%%   - duplicated legacy\n%%%\n%%% ===  Improve callback functions ===\n%%%\n%%% Add support for pre/post actions.\n%%%\n%%% Add support for MFA\n%%%\n%%% Add support for embedded lambdas\n%%%\n%%% @end\n%%%===================================================================\n-module(arweave_config_spec).\n-behavior(gen_server).\n-compile(warnings_as_errors).\n-export([\n\tstart_link/0,\n\tstart_link/1,\n\tstop/0,\n\tspec/0,\n\tspec/1,\n\tspec_to_argparse/0,\n\tget_default/1,\n\tget_legacy/0,\n\tget_legacy/1,\n\tget_environments/0,\n\tget_environment/1,\n\tget_short_arguments/0,\n\tget_short_argument/1,\n\tget_long_arguments/0,\n\tget_long_argument/1,\n\tget/1,\n\tset/2\n]).\n-export([init/1, terminate/2]).\n-export([handle_call/3, handle_cast/2, handle_info/2]).\n-export([is_function_exported/3]).\n-compile({no_auto_import,[get/1]}).\n-include_lib(\"kernel/include/logger.hrl\").\n\n% A raw key from external sources (cli, api...), non-sanitized,\n% non-parsed.\n% -type key() :: term().\n\n% An arweave parameter, parsed and valid, containing only known\n% terms and specified.\n-type parameter() :: [atom() | {atom()} | binary()].\n\n% A value associated with a key/parameter, usually any kind of term.\n-type value() :: term().\n\n%---------------------------------------------------------------------\n% REQUIRED:  defines the  configuration key  used to  identify arweave\n% parameter, usually stored in a data store like ETS.\n%---------------------------------------------------------------------\n-callback parameter_key() -> Return when\n\tReturn :: parameter().\n\n%---------------------------------------------------------------------\n% OPTIONAL: defines how to retrieve the value using the parameter key.\n%---------------------------------------------------------------------\n-callback handle_get(ParameterKey, State) -> Return when\n\tParameterKey :: parameter(),\n\tState :: map(),\n\tReturn :: {ok, value()}\n\t\t| {ok, Action}\n\t\t| {ok, MFA}\n\t\t| {error, map()},\n\tCallbackReturn :: {ok, term()} | {error, term()},\n\tAction :: fun ((ParameterKey, State) -> CallbackReturn),\n\tMFA :: {atom(), atom(), list()}.\n\n%---------------------------------------------------------------------\n% OPTIONAL: defines how to set the value Value with the parameter key.\n% It should be transaction. This callback must be improved, instead\n% of returning directly a value, it should also be possible to return\n% a list of MFA or lambda functions executed in order like\n% transactions.\n%\n% `ignore' will keep the old value in place.\n%\n% `{ok, term()}' will simply returns the term, the side effect is\n% protected during the execution, and the module can do whatever (even\n% setting the value anywhere).\n%\n% `{store, term()}' will automatically store the returned value into\n% `arweave_config_store' using `arweave_config_store:set/2' function.\n%\n% `{error, map()}' will return an error and the reason.\n%\n% == TODO ==\n%\n% `{ok, action() | actions()}' will execute a list of action in order\n% first from last. Those are function with side effects, and their\n% return is not controlled. 
Those action should be defined as `fun/3':\n%\n% ```\n% Action = fun (Parameter, NewValue, OldValue) ->\n%   % do some action...\n% end.\n% '''\n%\n% `{ok, mfa() | mfas()}' same than the previous action definition.\n%\n% ```\n% -module(my_module).\n% export([mfa/3]).\n% mfa(Parameter, NewValue, OldValue) ->\n%   ok.\n% '''\n%\n%---------------------------------------------------------------------\n-callback handle_set(Parameter, NewValue, State, Args) -> Return when\n\tParameter:: parameter(),\n\tNewValue :: value(),\n\tState :: map(),\n\tArgs :: term(),\n\tReturn :: ignore\n\t\t| {ok, term()}\n\t\t| {store, term()}\n\t\t| {error, map()}.\n\n%---------------------------------------------------------------------\n% OPTIONAL: defines  if the  parameter can be  set during  runtime. if\n% true, the  parameter can be set  when arweave is running,  else, the\n% parameter can only be set during startup\n% DEFAULT: false\n%---------------------------------------------------------------------\n-callback runtime() -> Return when\n\tReturn :: boolean().\n\n%---------------------------------------------------------------------\n% OPTIONAL: short argument used to  configure the parameter, usually a\n% single ASCII letter present in range 0-9, a-z and A-Z.\n% DEFAULT: undefined\n%---------------------------------------------------------------------\n-callback short_argument() -> Return when\n\tReturn :: undefined\n\t\t| [pos_integer()].\n\n%---------------------------------------------------------------------\n% OPTIONAL: a long argument, used  to configure the parameter, usually\n% lower cases words separated by dashes.\n% DEFAULT: converted parameter key (e.g. --global.debug)\n%---------------------------------------------------------------------\n-callback long_argument() -> Return when\n\tReturn :: undefined\n\t\t| iolist().\n\n%---------------------------------------------------------------------\n% OPTIONAL: the type of the value.\n% DEFAULT: undefined\n%---------------------------------------------------------------------\n-callback type() -> Return when\n\tReturn :: undefined\n\t\t| atom()\n\t\t| [atom()].\n\n%---------------------------------------------------------------------\n% OPTIONAL: a function returning  a string representing an environment\n% variable.\n% DEFAULT: false\n%---------------------------------------------------------------------\n-callback environment() -> Return when\n\tReturn :: undefined\n\t\t| string().\n\n%---------------------------------------------------------------------\n% OPTIONAL: a list  of legacy references used to  previously fetch the\n% value.\n% DEFAULT: undefined\n%---------------------------------------------------------------------\n-callback legacy() -> Return when\n\tReturn :: undefined\n\t\t| atom()\n\t\t| iolist()\n\t\t| [atom() | iolist()].\n\n%---------------------------------------------------------------------\n% OPTIONAL: a short description of the parameter.\n% DEFAULT: undefined\n%---------------------------------------------------------------------\n-callback short_description() -> Return when\n\tReturn :: undefined\n\t\t| iolist().\n\n%---------------------------------------------------------------------\n% OPTIONAL: a long description of the parameter.\n% DEFAULT: undefined\n%---------------------------------------------------------------------\n-callback long_description() -> Return when\n\tReturn :: undefined\n\t\t| iolist().\n\n%---------------------------------------------------------------------\n% OPTIONAL: defines if a parameter is 
deprecated, can eventually\n% returns a message.\n% DEFAULT: false\n%---------------------------------------------------------------------\n-callback deprecated() -> Return when\n\tReturn :: true\n\t\t| {true, term()}\n\t\t| false.\n\n%--------------------------------------------------------------------\n% OPTIONAL: defines the number of arguments to take\n% DEFAULT: 1\n% see: argparse\n%--------------------------------------------------------------------\n-callback nargs() -> Return when\n\tReturn :: pos_integer()\n\t\t| list\n\t\t| nonempty_list\n\t\t| 'maybe'\n\t\t| {'maybe', term()}\n\t\t| all.\n\n%--------------------------------------------------------------------\n% OPTIONAL: defines if the parameter is enabled or not.\n% DEFAULT: true\n%--------------------------------------------------------------------\n-callback enabled() -> Return when\n\t  Return :: boolean()\n\t\t| {false, term()}.\n\n%---------------------------------------------------------------------\n% @TODO inherit callback\n% OPTIONAL: defines an inherited parameter.\n% DEFAULT: undefined\n%---------------------------------------------------------------------\n-callback inherit() -> Return when\n\tReturn :: Parameter | {Parameter, Fields},\n\tParameter :: [atom()],\n\tFields :: [atom()].\n\n%---------------------------------------------------------------------\n% @TODO: protected callback\n% OPTIONAL: defines if the value should be public or protected (not\n% displayed or even encrypted, useful for password)\n% DEFAULT: false\n%---------------------------------------------------------------------\n% -callback protected() -> Return when\n%       Return :: boolean().\n\n%---------------------------------------------------------------------\n% @TODO: dependencies callback\n% OPTIONAL: defines a list of required parameters to be set\n% DEFAULT: undefined\n%---------------------------------------------------------------------\n% -callback dependencies() -> Return when\n%\tReturn :: undefined | [atom()|iolist()|tuple()].\n\n%---------------------------------------------------------------------\n% @TODO: conflicts callback\n% OPTIONAL: defines a list of conflicting parameters\n% DEFAULT: undefined\n%---------------------------------------------------------------------\n% -callback conflicts() -> RETURN when\n%\tReturn :: undefined | [atom()|iolist()|tuple()].\n\n%---------------------------------------------------------------------\n% @TODO: formatter callback\n% OPTIONAL: defines a function callback to format short or long\n% help message.\n% DEFAULT: undefined\n%---------------------------------------------------------------------\n% -callback formatter(Type, Value) when\n%\tType :: short | long,\n%\tValue :: iolist(),\n%\tReturn :: undefined | {ok, FormattedValue},\n%\tFormattedValue :: iolist().\n\n%---------------------------------------------------------------------\n% @TODO: positional arguments callback\n% OPTIONAL: defines if the argument is positional, those are found\n% after a special separator, usually `--'.\n%---------------------------------------------------------------------\n% -callback positional() -> Return when\n%\tReturn :: boolean().\n\n%---------------------------------------------------------------------\n% @TODO: before/after arguments callback\n% OPTIONAL: defines if another argument should be set before or after\n% the current one\n%---------------------------------------------------------------------\n% -callback before() -> Return when\n%\tReturn :: undefined | [atom()].\n% -callback after() -> 
Return when\n%\tReturn :: undefined | [atom()].\n\n%---------------------------------------------------------------------\n% @TODO: dryrun argument\n% OPTIONAL: instead of executing the set callback, it simply returns\n% the action and modification would have been applied.\n%---------------------------------------------------------------------\n% -callback dryrun() -> Return when\n%\tReturn :: term().\n\n%---------------------------------------------------------------------\n% @TODO fail callback\n% OPTIONAL: defines if a wrong value should stop the execution, with a\n% specific error set.\n%---------------------------------------------------------------------\n% -spec fail() -> Return when\n%\tReturn :: boolean()\n%\t\t| {true, term()}.\n\n-optional_callbacks([\n\thandle_get/2,\n\thandle_set/4,\n\truntime/0,\n\tshort_argument/0,\n\tlong_argument/0,\n\ttype/0,\n\tenvironment/0,\n\tlegacy/0,\n\tshort_description/0,\n\tlong_description/0,\n\tdeprecated/0,\n\tnargs/0,\n\tenabled/0,\n\tinherit/0\n]).\n\n%%--------------------------------------------------------------------\n%% @doc Start `arweave_config_spec' process.\n%%\n%% @end\n%%--------------------------------------------------------------------\n-spec start_link() -> Return when\n\tReturn :: {ok, pid()}.\n\nstart_link() ->\n\tstart_link([]).\n\n%%--------------------------------------------------------------------\n%% @doc Start `arweave_config_spec' process.\n%% @end\n%%--------------------------------------------------------------------\n-spec start_link(Specs) -> Return when\n\tSpecs :: [map() | atom()],\n\tReturn :: {ok, pid()}.\n\nstart_link(Specs) ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, Specs, []).\n\n%%--------------------------------------------------------------------\n%% @doc Stop `arweave_config_spec' process.\n%% @end\n%%--------------------------------------------------------------------\nstop() ->\n\tgen_server:stop(?MODULE).\n\n%%--------------------------------------------------------------------\n%% @doc returns the full list of specifications.\n%% @end\n%%--------------------------------------------------------------------\nspec() ->\n\tPattern = {'$1', '$2'},\n\tGuard = [],\n\tSelect = [{{'$1', '$2'}}],\n\tResult = ets:select(?MODULE, [{Pattern, Guard, Select}]),\n\tmaps:from_list(Result).\n\n%%--------------------------------------------------------------------\n%% @doc returns parameter's specification from specification store.\n%% @end\n%%--------------------------------------------------------------------\nspec(ParameterSpec) ->\n\tPattern = {'$1', '$2'},\n\tGuard = [{'=:=', '$1', ParameterSpec}],\n\tSelect = [{{'$1', '$2'}}],\n\tcase ets:select(?MODULE, [{Pattern, Guard, Select}]) of\n\t\t[{Parameter, Spec}] ->\n\t\t\t{ok, Parameter, Spec};\n\t\t_Elsewise ->\n\t\t\t{error, not_found}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc returns default parameter value if defined.\n%% @end\n%%--------------------------------------------------------------------\nget_default(Parameter) ->\n\tPattern = {'$1', #{ default => '$2'}},\n\tGuard = [{'=:=', '$1', Parameter}],\n\tSelect = ['$2'],\n\tcase ets:select(?MODULE, [{Pattern, Guard, Select}]) of\n\t\t[Default] ->\n\t\t\t{ok, Default};\n\t\t_Elsewise ->\n\t\t\t{error, not_found}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc returns the list of environment variable supported.\n%% 
@end\n%%--------------------------------------------------------------------\nget_environments() ->\n\tPattern = {'$1', #{ environment => '$2'}},\n\tGuard = [],\n\tSelect = [{{'$2', '$1'}}],\n\tets:select(?MODULE, [{Pattern, Guard, Select}]).\n\n%%--------------------------------------------------------------------\n%% @doc Returns the specification for an environment variable.\n%% @end\n%%--------------------------------------------------------------------\nget_environment(EnvironmentKey) ->\n\tPattern = {'$1', #{ environment => '$2'}},\n\tGuard = [{'=:=', '$2', EnvironmentKey}],\n\tSelect = ['$1'],\n\tcase ets:select(?MODULE, [{Pattern, Guard, Select}]) of\n\t\t[Parameter] ->\n\t\t\t[{Parameter, Value}] = ets:lookup(?MODULE, Parameter),\n\t\t\t{ok, Parameter, Value};\n\t\t_Elsewise ->\n\t\t\t{error, not_found}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc Returns the list of short arguments supported with the number\n%% of elements required.\n%% @end\n%%--------------------------------------------------------------------\nget_short_arguments() ->\n\tPattern = {'$1', #{ short_argument => '$2' }},\n\tGuard = [],\n\tSelect = [{{'$2', '$_'}}],\n\t% match spec does not support correctly map, so, a filter\n\t% is required to cleanup things.\n\t[\n\t\t{Argument, Spec}\n\t\t|| {Argument, {_, Spec}}\n\t\t<- ets:select(?MODULE, [{Pattern, Guard, Select}])\n\t].\n\n%%--------------------------------------------------------------------\n%% @doc returns specification for a short argument.\n%% @end\n%%--------------------------------------------------------------------\nget_short_argument(ArgumentKey) ->\n\tPattern = {'$1', #{ short_argument => '$2' }},\n\tGuard = [{'=:=', '$2', ArgumentKey}],\n\tSelect = [{{'$2', '$_'}}],\n\t% match spec does not support correctly map, so, a filter\n\t% is required to cleanup things.\n\t[\n\t\t{Argument, Spec}\n\t\t|| {Argument, {_, Spec}}\n\t\t<- ets:select(?MODULE, [{Pattern, Guard, Select}])\n\t].\n\n%%--------------------------------------------------------------------\n%% @doc Returns the list of long arguments supported with the number\n%% of elements required.\n%% @end\n%%--------------------------------------------------------------------\nget_long_arguments() ->\n\tPattern = {'$1', #{ long_argument => '$2' }},\n\tGuard = [],\n\tSelect = [{{'$2', '$_'}}],\n\t% match spec does not support correctly map, so, a filter\n\t% is required to cleanup things.\n\t[\n\t\t{Argument, Spec}\n\t\t|| {Argument, {_, Spec}}\n\t\t<- ets:select(?MODULE, [{Pattern, Guard, Select}])\n\t].\n\n%%--------------------------------------------------------------------\n%% @doc Returns specification from a long argument.\n%% @end\n%%--------------------------------------------------------------------\nget_long_argument(ArgumentKey) ->\n\tPattern = {'$1', #{ long_argument => '$2' }},\n\tGuard = [{'=:=', '$2', ArgumentKey}],\n\tSelect = [{{'$2', '$_'}}],\n\t% match spec does not support correctly map, so, a filter\n\t% is required to cleanup things.\n\t[\n\t\t{Argument, Spec}\n\t\t|| {Argument, {_, Spec}}\n\t\t<- ets:select(?MODULE, [{Pattern, Guard, Select}])\n\t].\n\n%%--------------------------------------------------------------------\n%% @doc Returns legacy keys (used for legacy configuration\n%% compatibility).\n%% @end\n%%--------------------------------------------------------------------\nget_legacy() ->\n\tPattern = {'$1', #{ legacy => '$2' }},\n\tGuard = [{'=/=', '$2', undefined}],\n\tSelect = [{{'$2', '$1'}}],\n\tQuery = [{Pattern, 
Guard, Select}],\n\tmaps:from_list(ets:select(?MODULE, Query)).\n\n%%--------------------------------------------------------------------\n%% @doc Returns the parameter keys (if it exists) using a legacy\n%% parameter from `#config{}' record.\n%% @end\n%%--------------------------------------------------------------------\nget_legacy(Key) ->\n\tPattern = {'$1', #{ legacy => Key }},\n\tGuard = [{'=/=', Key, undefined}],\n\tSelect = ['$1'],\n\tQuery = [{Pattern, Guard, Select}],\n\tcase ets:select(?MODULE, Query) of\n\t\t[V] ->\n\t\t\t{ok, V};\n\t\t_ ->\n\t\t\t{error, undefined}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc get a value using a parameter.\n%% @end\n%%--------------------------------------------------------------------\nget(Parameter) ->\n\tgen_server:call(?MODULE, {get, Parameter}, 10_000).\n\n%%--------------------------------------------------------------------\n%% @doc set a parameters. The process will be in charge to check both\n%% keys and values then if everything is good, it will execute a side\n%% effect (to modify the application state) and finally store/update\n%% the value in `arweave_config_store'.\n%%\n%% == Examples ==\n%%\n%% ```\n%% {ok, NewValue = true, OldValue = false} =\n%%   set([global, debug], <<\"true\">>).\n%% '''\n%%\n%% @end\n%%--------------------------------------------------------------------\n-spec set(Parameter, Value) -> Return when\n\tParameter :: [atom() | iolist()],\n\tValue :: term(),\n\tReturn :: {ok, term()}\n\t\t| {error, term()}.\n\nset(Parameter, Value) ->\n\tgen_server:call(?MODULE, {set, Parameter, Value}, 10_000).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc Returns a list of module callbacks to check specifications.\n%% This function has been created to avoid having to deal with a very\n%% long and complex file. Each module callbacks only export an init/2\n%% function to initialize the final state corresponding to a spec\n%% callback.\n%% @end\n%%--------------------------------------------------------------------\ncallbacks_check() -> [\n\t% mandatory callbacks\n\t{parameter_key, arweave_config_spec_parameter_key},\n\n\t% optional callbacks\n\t{enabled, arweave_config_spec_enabled},\n\t{default, arweave_config_spec_default},\n\t{handle_get, arweave_config_spec_handle_get},\n\t{handle_set, arweave_config_spec_handle_set},\n\t{type, arweave_config_spec_type},\n\t{runtime, arweave_config_spec_runtime},\n\t{deprecated, arweave_config_spec_deprecated},\n\t{environment, arweave_config_spec_environment},\n\t{short_argument, arweave_config_spec_short_argument},\n\t{long_argument, arweave_config_spec_long_argument},\n\t{legacy, arweave_config_spec_legacy},\n\t{short_description, arweave_config_spec_short_description},\n\t{long_description, arweave_config_spec_long_description},\n\t{nargs, arweave_config_spec_nargs},\n\t{inherit, arweave_config_spec_inherit}\n].\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @TODO check if the specs are correct (list of maps).\n%%--------------------------------------------------------------------\n-spec init(Specs) -> Return when\n\tSpecs :: [atom() | map()],\n\tReturn :: {ok, NamedEts},\n\tNamedEts :: ?MODULE.\n\ninit([]) ->\n\tSpecs = arweave_config_parameters:init(),\n\tinit_process(Specs);\ninit(Specs) when is_list(Specs) ->\n\tinit_process(Specs).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% init process. 
this one should not crash, then we catch all\n%% exceptions.\n%%--------------------------------------------------------------------\ninit_process(Specs) ->\n\terlang:process_flag(trap_exit, true),\n\tinit_ets(Specs).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% create the ETS table, this one should only be reachable by\n%% the current process to avoid doing nasty things with the\n%% specification during runtime.\n%%--------------------------------------------------------------------\ninit_ets(Specs) ->\n\tets:new(?MODULE, [\n\t\tnamed_table,\n\t\tprotected\n\t]),\n\tinit_parameters(Specs).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% parse and load all specifications from each module.\n%%--------------------------------------------------------------------\ninit_parameters(Specs) ->\n\tcase init_loop(Specs, #{}) of\n\t\t{ok, MapSpec} ->\n\t\t\tinit_inherit(MapSpec);\n\t\tElse ->\n\t\t\t{error, Else}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% loops over parameters and, if the `inherit' spec is defined, updates\n%% the parameters accordingly. At the end of this function, the specs\n%% should be ready.\n%%--------------------------------------------------------------------\ninit_inherit(MapSpec) ->\n\tInheritedMap = inherit_loop(MapSpec),\n\tinit_state(InheritedMap).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% loop over the specifications and set inherited values if present.\n%%--------------------------------------------------------------------\ninherit_loop(Specs) ->\n\tinherit_loop(maps:iterator(Specs), Specs, #{}).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% this loop restricts the inherited values to be a tuple containing\n%% the inherited parameter and the list of fields.\n%%--------------------------------------------------------------------\ninherit_loop(none, _Specs, Buffer) -> Buffer;\ninherit_loop(Iterator, Specs, Buffer) ->\n\tcase maps:next(Iterator) of\n\t\t{K, Parameter = #{inherit := {Parent, Fields}}, Next} ->\n\t\t\tNewParameter = inherit(Parameter, Fields, Parent, Specs),\n\t\t\tNewBuffer = Buffer#{ K => NewParameter },\n\t\t\tinherit_loop(Next, Specs, NewBuffer);\n\t\t{K, Parameter, Next} ->\n\t\t\tNewBuffer = Buffer#{ K => Parameter },\n\t\t\tinherit_loop(Next, Specs, NewBuffer)\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% loop over inherited fields and search for them in the parent parameter. 
if\n%% the original parameter does not have a field set, use the one from\n%% the parent.\n%%--------------------------------------------------------------------\n-spec inherit(Parameter, InheritedFields, ParentKey, Specs) -> Return when\n\tParameter :: map(),\n\tInheritedFields :: [atom()],\n\tParentKey :: list(),\n\tSpecs :: map(),\n\tReturn :: map().\n\ninherit(Parameter, _, ParentKey, Specs)\n\twhen is_map_key(ParentKey, Specs) =:= false ->\n\t\t?LOG_WARNING(\"undefined inherited parent: ~p\", [Parameter]),\n\t\tParameter;\ninherit(Parameter, Fields, ParentKey, Specs) ->\n\tParent = maps:get(ParentKey, Specs),\n\tinherit(Parameter, Fields, Parent).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% check if the parameter is set from the original spec.\n%%--------------------------------------------------------------------\n-spec inherit(Parameter, Fields, Parent) -> Return when\n\tParameter :: map(),\n\tFields :: [atom()],\n\tParent :: map(),\n\tReturn :: map().\n\ninherit(Parameter, [], _) -> Parameter;\ninherit(Parameter, [Field|Rest], Parent) ->\n\tcase maps:get(Field, Parameter, undefined) of\n\t\tundefined ->\n\t\t\tinherit2(Parameter,[Field|Rest],Parent);\n\t\t_ ->\n\t\t\tinherit(Parameter, Rest, Parent)\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% check if the parent spec got the inherited field and set it.\n%%--------------------------------------------------------------------\n-spec inherit2(Parameter, Fields, Parent) -> Return when\n\tParameter :: map(),\n\tFields :: [atom()],\n\tParent :: map(),\n\tReturn :: map().\n\ninherit2(Parameter, [Field|Rest], Parent) ->\n\tcase maps:get(Field, Parent, undefined) of\n\t\tundefined ->\n\t\t\tinherit(Parameter, Rest, Parent);\n\t\tValue ->\n\t\t\tNewParameter = Parameter#{\n\t\t\t\tField => Value\n\t\t\t},\n\t\t\tinherit(NewParameter, Rest, Parent)\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% insert all specification into the specification store (ETS).\n%% @TODO specification must be unique, when inserting a new\n%% specification, if the same key exists, a warning should be\n%% displayed in some way.\n%%--------------------------------------------------------------------\ninit_state(MapSpec) ->\n\t[\n\t\tets:insert(?MODULE, {K, V})\n\t\t||\n\t\t{K, V} <- maps:to_list(MapSpec)\n\t],\n\tinit_final(?MODULE).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% It should be good, we can start the module.\n%%--------------------------------------------------------------------\ninit_final(State) ->\n\t?LOG_INFO(\"~p ready\", [?MODULE]),\n\t{ok, State}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit_loop([], Buffer) ->\n\t% no more arguments to path, the buffer is returned, it should\n\t% contain the complete list of all arguments specification\n\t% supported.\n\t{ok, Buffer};\ninit_loop([Map|Rest], Buffer) when is_map(Map) ->\n\t% the argument is defined as map, then all callback are\n\t% checked as map key.\n\tcase init_map(Map, #{}) of\n\t\t{ok, #{ parameter_key := K } = R} ->\n\t\t\t?LOG_DEBUG(\"checked callback from map ~p\", [Map]),\n\t\t\tinit_loop(Rest, Buffer#{ K => R });\n\t\tdiscard ->\n\t\t\t?LOG_NOTICE(\"can't load parameter from module ~p\", [Map]),\n\t\t\tinit_loop(Rest, Buffer);\n\t\t{discard, _Message} ->\n\t\t\t?LOG_NOTICE(\"can't 
load parameter from module ~p\", [Map]),\n\t\t\tinit_loop(Rest, Buffer);\n\t\tElsewise ->\n\t\t\tthrow(Elsewise)\n\tend;\ninit_loop([Module|Rest], Buffer) when is_atom(Module) ->\n\t% the argument is defined as module callback, then\n\t% all callback are checked as functions exported.\n\tcase init_module(Module, #{}) of\n\t\t{ok, #{ parameter_key := K } = R} ->\n\t\t\t?LOG_DEBUG(\"checked callback from map ~p\", [Module]),\n\t\t\tinit_loop(Rest, Buffer#{ K => R });\n\t\tdiscard ->\n\t\t\t?LOG_NOTICE(\"can't load parameter from module ~p\", [Module]),\n\t\t\tinit_loop(Rest, Buffer);\n\t\t{discard, _Message} ->\n\t\t\t?LOG_NOTICE(\"can't load parameter from module ~p\", [Module]),\n\t\t\tinit_loop(Rest, Buffer);\n\t\tElsewise ->\n\t\t\tthrow(Elsewise)\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit_module(Module, State) ->\n\tCallbacksCheck = callbacks_check(),\n\tinit_module(Module, CallbacksCheck, State).\n\ninit_module(Module, [], State) ->\n\t?LOG_DEBUG(\"loaded callback module ~p\", [Module]),\n\t{ok, State};\ninit_module(Module, [{_Callback, ModuleCallback}|Rest], State) ->\n\tcase erlang:apply(ModuleCallback, init, [Module, State]) of\n\t\t{ok, NewState} ->\n\t\t\tinit_module(Module, Rest, NewState);\n\t\tElsewise ->\n\t\t\t{discard, Elsewise}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit_map(Map, State) ->\n\tCallbacksCheck = callbacks_check(),\n\tinit_map(Map, CallbacksCheck, State).\n\ninit_map(Map, [], State) ->\n\t?LOG_DEBUG(\"loaded callback map ~p\", [Map]),\n\t{ok, State};\ninit_map(Map, [{_Callback, ModuleCallback}|Rest], State) ->\n\tcase erlang:apply(ModuleCallback, init, [Map, State]) of\n\t\t{ok, NewState} ->\n\t\t\tinit_map(Map, Rest, NewState);\n\t\tElsewise ->\n\t\t\t{discard, Elsewise}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nterminate(_, _) ->\n\tok.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nhandle_call({get, Parameter}, _From, State) ->\n\tcase apply_get(Parameter) of\n\t\t{ok, Value} ->\n\t\t\t{reply, {ok, Value}, State};\n\t\tElsewise ->\n\t\t\t{reply, Elsewise, State}\n\tend;\nhandle_call({set, Parameter, Value}, _From, State) ->\n\tcase apply_set(Parameter, Value) of\n\t\tReturn = {ok, Return} ->\n\t\t\t{reply, Return, State};\n\t\tElsewise ->\n\t\t\t{reply, Elsewise, State}\n\tend;\nhandle_call(Msg, From, State) ->\n\t?LOG_WARNING([\n\t\t{message, Msg},\n\t\t{from, From},\n\t\t{module, ?MODULE},\n\t\t{function, handle_call}\n\t]),\n\t{noreply, State}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nhandle_cast(Msg, State) ->\n\t?LOG_WARNING([\n\t\t{message, Msg},\n\t\t{module, ?MODULE},\n\t\t{function, handle_cast}\n\t]),\n\t{noreply, State}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nhandle_info(Msg, State) ->\n\t?LOG_WARNING([\n\t\t{message, Msg},\n\t\t{module, ?MODULE},\n\t\t{function, handle_info}\n\t]),\n\t{noreply, 
State}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc Check if a function from a module is exported.\n%% @end\n%%--------------------------------------------------------------------\n-spec is_function_exported(Module, Function, Arity) -> Return when\n\tModule :: atom(),\n\tFunction :: atom(),\n\tArity :: pos_integer(),\n\tReturn :: boolean().\n\nis_function_exported(Module, Function, Arity) ->\n\ttry\n\t\tExports = Module:module_info(exports),\n\t\tproplists:get_value(Function, Exports, undefined)\n\tof\n\t\tundefined -> false;\n\t\tA when A =:= Arity -> true;\n\t\t_ -> false\n\tcatch\n\t\t_:_ -> false\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc A pipeline function to check if the value is correct or not.\n%% @end\n%%--------------------------------------------------------------------\ncheck(Parameter, Value, Spec) ->\n\tcheck_type(Parameter, Value, Spec, #{}).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc Check the type of the value associated with the parameter.\n%% This function will check for a type present in\n%% `arweave_config_type' module and execute it. It should return,\n%% `ok', `{ok, ConvertedValue}' or `{error, Term}'.\n%% @end\n%%--------------------------------------------------------------------\ncheck_type(Parameter, Value, Spec = #{ type := Type }, Buffer) ->\n\tcase\n\t\tcheck_type(Value, Type)\n\tof\n\t\tok ->\n\t\t\tNewBuffer = Buffer#{ type => ok },\n\t\t\tcheck_final(Parameter, Value, Spec, NewBuffer);\n\t\t{ok, V} ->\n\t\t\tNewBuffer = Buffer#{ type => ok },\n\t\t\tcheck_final(Parameter, V, Spec, NewBuffer);\n\t\tError ->\n\t\t\tNewBuffer = Buffer#{ type => Error },\n\t\t\tcheck_final(Parameter, Value, Spec, NewBuffer)\n\tend;\ncheck_type(Parameter, Value, Spec, Buffer) ->\n\tcheck_final(Parameter, Value, Spec, Buffer#{ type => undefined }).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc check a single type.\n%% @end\n%%--------------------------------------------------------------------\n-spec check_type(Value, Types) -> Return when\n\tValue :: term(),\n\tTypes :: atom() | [atom()],\n\tReturn :: ok | {ok, Value} | {error, term()}.\n\ncheck_type(Value, Types) when is_list(Types) ->\n\tcheck_types(Value, Types);\ncheck_type(Value, Type) when is_atom(Type) ->\n\ttry\n\t\tarweave_config_type:Type(Value)\n\tof\n\t\tok -> {ok, Value};\n\t\t{ok, V} -> {ok, V};\n\t\t{error, Reason} -> {error, Reason}\n\tcatch\n\t\tE:R -> {E, R}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc check a list of type.\n%% @end\n%%--------------------------------------------------------------------\n-spec check_types(Value, Types) -> Return when\n\tValue :: term(),\n\tTypes :: [atom()],\n\tReturn :: ok | {ok, Value} | {error, term()}.\n\ncheck_types(_Value, []) ->\n\t{error, <<\"value does not match types\">>};\ncheck_types(Value, [Type|Rest]) when is_atom(Type) ->\n\t\tcase check_type(Value, Type) of\n\t\t\tok ->\n\t\t\t\t{ok, Value};\n\t\t\t{ok, V} ->\n\t\t\t\t{ok, V};\n\t\t\t{error, _} ->\n\t\t\t\tcheck_types(Value, Rest)\n\t\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc final check function, all returned values from the previous\n%% call should be present in `Buffer' variable.\n%% @end\n%%--------------------------------------------------------------------\ncheck_final(_, Value, _, 
Buffer) ->\n\t?LOG_DEBUG(\"~p\", [Buffer]),\n\tcase Buffer of\n\t\t#{ type := undefined } ->\n\t\t\t{ok, Value, Buffer};\n\t\t#{ type := ok } ->\n\t\t\t{ok, Value, Buffer};\n\t\t_ ->\n\t\t\t{error, Value, Buffer}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% 1. get he specification, if they are present, we can continue\n%% to execute the transaction\n%%--------------------------------------------------------------------\napply_set(Parameter, Value) ->\n\tcase spec(Parameter) of\n\t\t{ok, Parameter, Spec} ->\n\t\t\tapply_set_runtime(Parameter, Value, Spec);\n\t\tElsewise ->\n\t\t\tElsewise\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% check if the parameter can be set during runtime or not.\n%%--------------------------------------------------------------------\napply_set_runtime(Parameter, Value, Spec) ->\n\tRuntimeMode = maps:get(runtime, Spec, false),\n\tRuntime = arweave_config:is_runtime(),\n\tcase {Runtime, RuntimeMode} of\n\t\t{false, false} ->\n\t\t\tapply_set_parameter(Parameter, Value, Spec);\n\t\t{false, true} ->\n\t\t\tapply_set_parameter(Parameter, Value, Spec);\n\t\t{true, true} ->\n\t\t\tapply_set_parameter(Parameter, Value, Spec);\n\t\t{true, false} ->\n\t\t\t{error, #{\n\t\t\t\t\tparameter => Parameter,\n\t\t\t\t\treason => not_a_runtime_parameter,\n\t\t\t\t\tvalue => Value\n\t\t\t\t}\n\t\t\t}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% 2. check if the specification match the parameter/value,\n%% if everything is fine, we can continue the execution\n%%--------------------------------------------------------------------\napply_set_parameter(Parameter, Value, Spec) ->\n\tcase check(Parameter, Value, Spec) of\n\t\t{ok, Return, _} ->\n\t\t\tapply_set_value(Parameter, Return, Spec);\n\t\tElsewise ->\n\t\t\tElsewise\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% 3. let retrieve the value (if set) and use the handle_set/3\n%% callback to set the value.\n%%--------------------------------------------------------------------\napply_set_value(Parameter, Value, Spec = #{ set := Set }) ->\n\t% if handle_set/4 is present, we execute it.\n\tDefault = maps:get(default, Spec, undefined),\n\n\t% @todo: potential race condition\n\tOldValue = arweave_config_store:get(Parameter, Default),\n\tState = local_state(#{\n\t\tspec => Spec,\n\t\told_value => OldValue\n\t}),\n\n\tArgs = maps:get(set_args, Spec, []),\n\ttry\n\t\tSet(Parameter, Value, State, Args)\n\tof\n\t\tignore ->\n\t\t\t{ok, OldValue, OldValue};\n\t\t{ok, NewValue} ->\n\t\t\t{ok, NewValue, OldValue};\n\t\t{store, NewValue} ->\n\t\t\tapply_set_store(Parameter,NewValue,OldValue,Spec)\n\tcatch\n\t\tE:R ->\n\t\t\t{E,R}\n\tend;\napply_set_value(Parameter, Value, Spec) ->\n\t% if no handle_set/3 has been configured, we only store the\n\t% value by default.\n\tDefault = maps:get(default, Spec, undefined),\n\tOldValue = arweave_config_store:get(Parameter, Default),\n\tapply_set_store(Parameter, Value, OldValue, Spec).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% 4. 
the previous callback returned `store', then we store\n%% the value into arweave_config_store.\n%%--------------------------------------------------------------------\napply_set_store(Parameter, NewValue, OldValue, _Spec) ->\n\ttry arweave_config_store:set(Parameter, NewValue) of\n\t\t{ok, {_, _}} ->\n\t\t\t{ok, NewValue, OldValue};\n\t\tElsewise ->\n\t\t\tElsewise\n\tcatch\n\t\tE:R ->\n\t\t\t{E, R}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc\n%% @end\n%%--------------------------------------------------------------------\napply_get(Parameter) ->\n\tcase spec(Parameter) of\n\t\t{ok, Parameter, Spec} ->\n\t\t\tapply_get2(Parameter, Spec);\n\t\tElsewise ->\n\t\t\tElsewise\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\napply_get2(Parameter, Spec = #{ get := Get, default := Default }) ->\n\tState = local_state(#{ spec => Spec }),\n\tcase Get(Parameter, State) of\n\t\t{ok, Value} ->\n\t\t\t{ok, Value};\n\t\t_ ->\n\t\t\t{ok, Default}\n\tend;\napply_get2(Parameter, Spec = #{ get := Get }) ->\n\tState = local_state(#{ spec => Spec }),\n\tcase Get(Parameter, State) of\n\t\t{ok, Value} ->\n\t\t\t{ok, Value};\n\t\tElsewise ->\n\t\t\t{error, Elsewise}\n\tend;\napply_get2(Parameter, _Spec = #{ default := Default }) ->\n\tValue = arweave_config_store:get(Parameter, Default),\n\t{ok, Value};\napply_get2(Parameter, _Spec) ->\n\tarweave_config_store:get(Parameter).\n\n%%--------------------------------------------------------------------\n%% @doc Converts arweave configuration specification to argparse map.\n%% @see argparse\n%% @end\n%%--------------------------------------------------------------------\nspec_to_argparse() ->\n\t% fetch the specifications and convert them to proplists, it\n\t% will be easier to loop over.\n\tSpecs = [\n\t\tmaps:to_list(X)\n\t\t||\n\t\t{_, X} <- maps:to_list(spec())\n\t],\n\t#{ arguments => spec_to_argparse(Specs, []) }.\n\n% @hidden\nspec_to_argparse([], Buffer) -> Buffer;\nspec_to_argparse([Spec|Rest], Buffer) ->\n\tArgParse = spec_to_argparse2(Spec, #{}),\n\tspec_to_argparse(Rest, [ArgParse|Buffer]).\n\n% @hidden\nspec_to_argparse2([], ArgParse) -> ArgParse;\nspec_to_argparse2([{parameter_key, Name}|Rest], ArgParse) ->\n\t% convert the configuration key parameter to name\n\tspec_to_argparse2(Rest, ArgParse#{ name => Name });\nspec_to_argparse2([{default, Default}|Rest], ArgParse) ->\n\tspec_to_argparse2(Rest, ArgParse#{ default => Default });\nspec_to_argparse2([{nargs, Nargs}|Rest], ArgParse) ->\n\tspec_to_argparse2(Rest, ArgParse#{ nargs => Nargs });\nspec_to_argparse2([{long_argument, Long}|Rest], ArgParse)\n\twhen is_binary(Long) ->\n\t\t% arweave config spec uses binary, convert it.\n\t\tspec_to_argparse2(Rest, ArgParse#{ long => binary_to_list(Long) });\nspec_to_argparse2([{short_argument, Short}|Rest], ArgParse) ->\n\tspec_to_argparse2(Rest, ArgParse#{ short => Short });\nspec_to_argparse2([{required, Required}|Rest], ArgParse) ->\n\tspec_to_argparse2(Rest, ArgParse#{ required => Required });\nspec_to_argparse2([{short_description, SD}|Rest], ArgParse) ->\n\tspec_to_argparse2(Rest, ArgParse#{ help => SD });\nspec_to_argparse2([{type, Type}|Rest], ArgParse) ->\n\t% At this time, only support boolean type\n\tcase Type of\n\t\tboolean ->\n\t\t\tspec_to_argparse2(Rest, ArgParse#{ type => Type });\n\t\t_Elsewise ->\n\t\t\tspec_to_argparse2(Rest, 
ArgParse)\n\tend;\nspec_to_argparse2([Ignore|Rest], ArgParse) ->\n\t?LOG_DEBUG(\"ignored value: ~p\", [Ignore]),\n\tspec_to_argparse2(Rest, ArgParse).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nlocal_state(Map) ->\n\tmaps:merge(Map, #{config => arweave_config_store:to_map()}).\n"
  },
  {
    "path": "apps/arweave_config/src/arweave_config_specs/arweave_config_spec_default.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2025 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @doc \"Default\" value specification. Returns a default value if\n%%% specified.\n%%% @end\n%%%===================================================================\n-module(arweave_config_spec_default).\n-compile(warnings_as_errors).\n-export([init/2]).\n-include(\"arweave_config_spec.hrl\").\n-include_lib(\"kernel/include/logger.hrl\").\n\ninit(Map, State) when is_map(Map) ->\n\tfetch(Map, State);\ninit(Module, State) when is_atom(Module) ->\n\tcase is_function_exported(Module, default, 0) of\n\t\ttrue ->\n\t\t\tfetch(Module, State);\n\t\tfalse ->\n\t\t\t{ok, State }\n\tend.\n\nfetch(#{ default := DefaultValue }, State) ->\n\t{ok, State#{ default => DefaultValue }};\nfetch(Map, State) when is_map(Map) ->\n\t{ok, State};\nfetch(Module, State) ->\n\ttry Module:default() of\n\t\tDefault ->\n\t\t\tNewState = State#{ default => Default },\n\t\t\t{ok, NewState}\n\tcatch\n\t\tE:R:S ->\n\t\t\t?LOG_ERROR([\n\t\t\t\t{module, ?MODULE},\n\t\t\t\t{parameter, Module},\n\t\t\t\t{state, State},\n\t\t\t\t{error, {E,R,S}}\n\t\t\t]),\n\t\t\t{ok, State}\n\tend.\n\n"
  },
  {
    "path": "apps/arweave_config/src/arweave_config_specs/arweave_config_spec_deprecated.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2025 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @doc Deprecated specification feature. Returns a warning message\n%%% when a deprecated flag is present.\n%%% @end\n%%%===================================================================\n-module(arweave_config_spec_deprecated).\n-export([default/0, init/2]).\n-include(\"arweave_config_spec.hrl\").\n-include_lib(\"kernel/include/logger.hrl\").\n\ndefault() -> false.\n\ninit(#{ deprecated := Deprecated }, State) when is_boolean(Deprecated) ->\n\t{ok, State#{ deprecated => Deprecated }};\ninit(Map, State) when is_map(Map) ->\n\t{ok, State#{ deprecated => default() }};\ninit(Module, State) when is_atom(Module) ->\n\tcase is_function_exported(Module, deprecated, 0) of\n\t\ttrue ->\n\t\t\tfetch(Module, State);\n\t\tfalse ->\n\t\t\t{ok, State#{ deprecated => default() }}\n\tend.\n\nfetch(Module, State) ->\n\ttry Module:deprecated() of\n\t\tfalse ->\n\t\t\tNewState = State#{ deprecated => default() },\n\t\t\t{ok, NewState};\n\t\ttrue ->\n\t\t\tNewState = State#{ deprecated => true },\n\t\t\t?LOG_WARNING(\"~p~n is deprecated\", [Module]),\n\t\t\t{ok, NewState};\n\t\t{true, Message} ->\n\t\t\tNewState = State#{ deprecated => true },\n\t\t\t?LOG_WARNING(\"~p~n is deprecated: ~p\", [Module, Message]),\n\t\t\t{ok, NewState};\n\t\tElsewise ->\n\t\t\t{error, Elsewise}\n\tcatch\n\t\t_:Reason ->\n\t\t\t{error, Reason}\n\tend.\n\n"
  },
  {
    "path": "apps/arweave_config/src/arweave_config_specs/arweave_config_spec_enabled.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2025 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @doc \"Enabled\" value specification\n%%% @end\n%%%===================================================================\n-module(arweave_config_spec_enabled).\n-export([init/2]).\n-include(\"arweave_config_spec.hrl\").\n-include_lib(\"kernel/include/logger.hrl\").\n\ninit(Map, State) when is_map(Map) ->\n\tfetch(Map, State);\ninit(Module, State) when is_atom(Module) ->\n\tcase is_function_exported(Module, enabled, 0) of\n\t\ttrue ->\n\t\t\tfetch(Module, State);\n\t\tfalse ->\n\t\t\t{ok, State}\n\tend.\n\nfetch(#{ enabled := false }, State) ->\n\tPK = maps:get(parameter_key, State),\n\t?LOG_DEBUG(\"parameter_key: ~p disabled\", [PK]),\n\tskip;\nfetch(#{ enabled := {false, Reason} }, State) ->\n\tPK = maps:get(parameter_key, State),\n\t?LOG_DEBUG(\"parameter_key: ~p disabled (~p)\", [PK, Reason]),\n\tskip;\nfetch(_Module, State) ->\n\t{ok, State#{ enabled => true }}.\n\n"
  },
  {
    "path": "apps/arweave_config/src/arweave_config_specs/arweave_config_spec_environment.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2025 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @doc\n%%% Define an environment variable to check or generate one\n%%% automatically.\n%%% @end\n%%%===================================================================\n-module(arweave_config_spec_environment).\n-compile(warnings_as_errors).\n-export([init/2]).\n-include(\"arweave_config_spec.hrl\").\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit(#{environment := true}, State) ->\n\tPK = maps:get(parameter_key, State),\n\tEnv = convert(PK),\n\t{ok, State#{ environment => Env }};\ninit(#{environment := Env}, State) when is_binary(Env) ->\n\t{ok, State#{ environment => Env }};\ninit(Map, State) when is_map(Map) ->\n\t{ok, State};\ninit(Module, State) when is_atom(Module) ->\n\tcase is_function_exported(Module, environment, 0) of\n\t\ttrue ->\n\t\t\tfetch(Module, State);\n\t\tfalse ->\n\t\t\t{ok, State}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nfetch(Module, State) ->\n\ttry\n\t\tEnv = erlang:apply(Module, environment, []),\n\t\tcheck(Module, Env, State)\n\tcatch\n\t\t_:R ->\n\t\t\t{error, R}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc environment check callback.\n%% @end\n%%--------------------------------------------------------------------\n-spec check(Module, Environment, State) -> Return when\n\t  Module :: atom() | map(),\n\t  Environment :: string() | binary(),\n\t  State :: map(),\n\t  Return :: {ok, State} | {error, map()}.\n\ncheck(Module, Environment, State) when is_list(Environment) ->\n\tcheck(Module, list_to_binary(Environment), State);\ncheck(_Module, Environment, State) when is_binary(Environment) ->\n\t{ok, State#{ environment => Environment }};\ncheck(Module, Env, State) ->\n\t{error, #{\n\t\t\treason => {invalid, Env},\n\t\t\tmodule => Module,\n\t\t\tcallback => environment,\n\t\t\tstate => State\n\t\t}\n\t}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\n-spec convert(PK) -> Return when\n\tPK :: [binary()|integer()|atom()],\n\tReturn :: binary().\n\nconvert(PK) -> convert(PK, []).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\n-spec convert(PK, Buffer) -> Return when\n\tPK :: [binary()|integer()|atom()],\n\tBuffer :: [binary()],\n\tReturn :: binary().\n\nconvert([], Buffer) ->\n\tReverse = lists:reverse(Buffer),\n\tJoin = lists:join(<<\"_\">>, [<<\"AR\">>|Reverse]),\n\tlist_to_binary(Join);\nconvert([H|T], Buffer) when is_atom(H) ->\n\tBin = atom_to_binary(H),\n\tUpper = string:uppercase(Bin),\n\tconvert(T, [Upper|Buffer]);\nconvert([H|T], Buffer) when is_integer(H) ->\n\tBin = integer_to_binary(H),\n\tconvert(T, [Bin|Buffer]);\nconvert([H|T], Buffer) when is_binary(H) ->\n\tconvert(T, [H|Buffer]).\n\n"
  },
  {
    "path": "apps/arweave_config/src/arweave_config_specs/arweave_config_spec_handle_get.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2025 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @doc get specification feature. insert a custom get feature inside\n%%% a parameter.\n%%% @end\n%%%===================================================================\n-module(arweave_config_spec_handle_get).\n-export([init/2]).\n-include(\"arweave_config_spec.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\ninit(#{ handle_get := Get }, State) when is_function(Get, 2) ->\n\t{ok, State#{ get => Get }};\ninit(Map, State) when is_map(Map) ->\n\t{ok, State};\ninit(Module, State) when is_atom(Module) ->\n\tcase is_function_exported(Module, handle_get, 2) of\n\t\ttrue ->\n\t\t\t{ok, State#{ get => fun Module:handle_get/2 }};\n\t\tfalse ->\n\t\t\t{ok, State}\n\tend.\n"
  },
  {
    "path": "apps/arweave_config/src/arweave_config_specs/arweave_config_spec_handle_set.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2025 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @doc set specification feature. adds a custom set specification\n%%% inside a parameter.\n%%% @end\n%%%===================================================================\n-module(arweave_config_spec_handle_set).\n-export([init/2]).\n-include(\"arweave_config_spec.hrl\").\n\ninit(#{ handle_set := Set }, State) when is_function(Set, 4) ->\n\t{ok, State#{ set => Set, set_args => [] }};\ninit(#{ handle_set := {Set, Args}}, State) when is_function(Set, 4) ->\n\t{ok, State#{ set => Set, set_args => Args }};\ninit(Map, State) when is_map(Map) ->\n\t{ok, State};\ninit(Module, State) when is_atom(Module) ->\n\tcase is_function_exported(Module, handle_set, 4) of\n\t\ttrue ->\n\t\t\tNewState = State#{set => fun Module:handle_set/4},\n\t\t\tinit_args(Module, NewState);\n\t\tfalse ->\n\t\t\t{ok, State}\n\tend.\n\ninit_args(Module, State) ->\n\tcase is_function_exported(Module, set_args, 0) of\n\t\ttrue ->\n\t\t\tArgs = Module:set_args(),\n\t\t\tNewState = #{ set_args => Args },\n\t\t\t{ok, NewState};\n\t\t_Else ->\n\t\t\t{ok, State}\n\tend.\n"
  },
  {
    "path": "apps/arweave_config/src/arweave_config_specs/arweave_config_spec_inherit.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%%------------------------------------------------------------------\n%%%\n%%% @copyright 2025 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @doc inherit specific values from another parameter.\n%%% @end\n%%%===================================================================\n-module(arweave_config_spec_inherit).\n-compile(warnings_as_errors).\n-export([inherited_fields/0, init/2]).\n-include(\"arweave_config_spec.hrl\").\n-include_lib(\"kernel/include/logger.hrl\").\n\ninherited_fields() ->\n\t[\n\t\tdefault,\n\t\tenabled,\n\t\trequired,\n\t\truntime,\n\t\ttype\n\t].\n\ninit(Map, State)\n\twhen is_map(Map); is_atom(Map) ->\n\t\tfetch(Map, State);\ninit(_, State) ->\n\t{ok, State}.\n\nfetch(Map = #{ inherit := {Parameter, Fields}}, State)\n\twhen is_list(Parameter); is_list(Fields) ->\n\t\tcheck(Map, {Parameter, Fields}, State);\nfetch(Map = #{ inherit := Parameter }, State)\n\twhen is_list(Parameter) ->\n\t\tFields = inherited_fields(),\n\t\tcheck(Map, {Parameter, Fields}, State);\nfetch(Map, State) when is_map(Map) ->\n\t{ok, State};\nfetch(Module, State) ->\n\ttry Module:inherit() of\n\t\t{Parameter, Fields} ->\n\t\t\tcheck(Module, {Parameter, Fields}, State);\n\t\tParameter when is_list(Parameter) ->\n\t\t\tcheck(Module, {Parameter, inherited_fields()}, State);\n\t\t_Else ->\n\t\t\t{ok, State}\n\tcatch\n\t\tE:R:S ->\n\t\t\t?LOG_ERROR([\n\t\t\t\t{module, ?MODULE},\n\t\t\t\t{parameter, Module},\n\t\t\t\t{state, State},\n\t\t\t\t{error, {E,R,S}}\n\t\t\t]),\n\t\t\t{ok, State}\n\tend.\n\ncheck(_, Inherit, State) ->\n\t{ok, State#{ inherit => Inherit }}.\n\n"
  },
  {
    "path": "apps/arweave_config/src/arweave_config_specs/arweave_config_spec_legacy.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2025 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @doc legacy specification feature. set the key used by the legacy\n%%% configuration.\n%%% @todo legacy configuration is kinda special, and instead of an\n%%% atom, a function should probably be used sometimes.\n%%% @end\n%%%===================================================================\n-module(arweave_config_spec_legacy).\n-export([init/2]).\n-include(\"arweave_config_spec.hrl\").\n-include(\"arweave_config.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\ninit(Map = #{ legacy := Legacy }, State) when is_map(Map) ->\n\tcheck(Map, Legacy, State);\ninit(Map, State) when is_map(Map) ->\n\t{ok, State};\ninit(Module, State) when is_atom(Module) ->\n\tcase is_function_exported(Module, legacy, 0) of\n\t\ttrue ->\n\t\t\tfetch(Module, State);\n\t\tfalse ->\n\t\t\t{ok, State}\n\tend.\n\ninit_test() ->\n\t?assertEqual(\n\t\t{ok, #{ legacy => init }},\n\t\tinit(#{ legacy => init }, #{})\n\t),\n\t?assertMatch(\n\t\t{error, _},\n\t\tinit(#{ legacy => 123 }, #{})\n\t),\n\t?assertMatch(\n\t\t{error, _},\n\t\tinit(#{ legacy => does_not_exist }, #{})\n\t).\n\nfetch(Module, State) when is_atom(Module) ->\n\ttry\n\t\tL = erlang:apply(Module, legacy, []),\n\t\tcheck(Module, L, State)\n\tcatch\n\t\t_:R ->\n\t\t\t{error, R}\n\tend.\n\ncheck(Module, Legacy, State) when is_atom(Legacy) ->\n\t% ensure the presence of the field in config record.\n\tFields = record_info(fields, config),\n\tcase [ X || X <- Fields, X =:= Legacy ] of\n\t\t[Legacy] ->\n\t\t\t{ok, State#{ legacy => Legacy }};\n\t\t_ ->\n\t\t\t{error, #{\n\t\t\t\t\treason => \"does not exists\",\n\t\t\t\t\tlegacy_key => Legacy,\n\t\t\t\t\tmodule => Module\n\t\t\t\t }\n\t\t\t}\n\tend;\ncheck(Module, Legacy, State) ->\n\t{error, #{\n\t\t\treason => {invalid, Legacy},\n\t\t\tmodule => Module,\n\t\t\tcallback => legacy,\n\t\t\tstate => State\n\t\t}\n\t}.\n"
  },
  {
    "path": "apps/arweave_config/src/arweave_config_specs/arweave_config_spec_long_argument.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2025 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @doc Long argument specification feature. It should be compatible\n%%% with arguments modules in OTP.\n%%%\n%%% ```\n%%% % example\n%%% Parameter = [global, debug].\n%%% LongArgumentDefault = <<\"--global.debug\">>.\n%%%\n%%% % example\n%%% Spec = [peers, {peer}, enabled].\n%%% Parameter = [peers, <<\"127.0.0.1:1984\">>, enabled].\n%%% Key = <<\"peers.[127.0.0.1:1984].ebaled\">>.\n%%% LongArgumentDefault = <<\"--peers.[peer].enabled\">>.\n%%% '''\n%%%\n%%% @end\n%%%===================================================================\n-module(arweave_config_spec_long_argument).\n-compile(warnings_as_errors).\n-export([init/2]).\n-include(\"arweave_config_spec.hrl\").\n\ninit(Map = #{ long_argument := LA }, State) ->\n\tcheck(Map, LA, State);\ninit(Map, State) when is_map(Map) ->\n\t{ok, State};\ninit(Module, State) when is_atom(Module) ->\n\tcase is_function_exported(Module, long_argument, 0) of\n\t\ttrue ->\n\t\t\tfetch(Module, State);\n\t\tfalse ->\n\t\t\tcheck(Module, undefined, State)\n\tend.\n\nfetch(Module, State) ->\n\ttry\n\t\tLA = erlang:apply(Module, long_argument, []),\n\t\tcheck(Module, LA, State)\n\tcatch\n\t\t_:R ->\n\t\t\t{error, R}\n\tend.\n\ncheck(_Module, false, State) ->\n\t{ok, State};\ncheck(_Module, true, State = #{ parameter_key := CK }) ->\n\t{ok, State#{ long_argument => convert(CK) }};\ncheck(_Module, undefined, State = #{ parameter_key := CK }) ->\n\t{ok, State#{ long_argument => convert(CK) }};\ncheck(_Module, LA, State) when is_binary(LA) orelse is_list(LA) ->\n\t{ok, State#{ long_argument => convert(LA) }};\ncheck(Module, LA, State) ->\n\t{error, #{\n\t\t\treason => {invalid, LA},\n\t\t\tstate => State,\n\t\t\tmodule => Module,\n\t\t\tcallback => long_argument\n\t\t}\n\t}.\n\nconvert(List) when is_list(List) -> convert(List, []);\nconvert(<<\"-\", _/binary>> = Binary) -> Binary;\nconvert(Binary) when is_binary(Binary) -> <<\"-\", Binary/binary>>.\n\nconvert([], Buffer) ->\n\tSep = application:get_env(arweave_config, long_argument_separator, \".\"),\n\tBin = list_to_binary(lists:join(Sep, lists:reverse(Buffer))),\n\t<<\"--\", Bin/binary>>;\nconvert([H|T], Buffer) when is_integer(H) ->\n\tconvert([integer_to_binary(H)|T], Buffer);\nconvert([H|T], Buffer) when is_atom(H) ->\n\tconvert([atom_to_binary(H)|T], Buffer);\nconvert([H|T], Buffer) when is_list(H) ->\n\tconvert([list_to_binary(H)|T], Buffer);\nconvert([H|T], Buffer) when is_binary(H) ->\n\tconvert(T, [H|Buffer]).\n"
  },
  {
    "path": "apps/arweave_config/src/arweave_config_specs/arweave_config_spec_long_description.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2025 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @doc long description specification feature.\n%%% @end\n%%%===================================================================\n-module(arweave_config_spec_long_description).\n-export([init/2]).\n-include(\"arweave_config_spec.hrl\").\n\ninit(#{ long_description := LD }, State) ->\n\t{ok, State#{ long_description => LD }};\ninit(Map, State) when is_map(Map) ->\n\t{ok, State};\ninit(Module, State) when is_atom(Module) ->\n\tcase is_function_exported(Module, long_description, 0) of\n\t\ttrue ->\n\t\t\tfetch(Module, State);\n\t\tfalse ->\n\t\t\t{ok, State}\n\tend.\n\nfetch(Module, State) ->\n\ttry\n\t\tLD = erlang:apply(Module, long_description, []),\n\t\tcheck(Module, LD, State)\n\tcatch\n\t\t_:R ->\n\t\t\t{error, R}\n\tend.\n\ncheck(_Module, undefined, State) ->\n\t{ok, State#{ long_description => undefined }};\ncheck(_Module, LD, State) when is_binary(LD); is_list(LD) ->\n\t{ok, State#{ long_description => LD }};\ncheck(Module, LD, State) ->\n\t{error, #{\n\t\t\treason => {invalid, LD},\n\t\t\tmodule => Module,\n\t\t\tstate => State,\n\t\t\tcallback => long_description\n\t\t}\n\t}.\n"
  },
  {
    "path": "apps/arweave_config/src/arweave_config_specs/arweave_config_spec_nargs.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2025 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @doc nargs parameter interface from `argparse' module. Only\n%%% available for command line arguments.\n%%% @see argparse\n%%% @end\n%%%===================================================================\n-module(arweave_config_spec_nargs).\n-export([init/2]).\n-include(\"arweave_config_spec.hrl\").\n\ninit(Map = #{ nargs := Nargs }, State) ->\n\tcheck(Map, Nargs, State);\ninit(Map, State) when is_map(Map) ->\n\t{ok, State};\ninit(Module, State) when is_atom(Module) ->\n\tcase is_function_exported(Module, nargs, 0) of\n\t\ttrue ->\n\t\t\tfetch(Module, State);\n\t\tfalse ->\n\t\t\t{ok, State}\n\tend.\n\nfetch(Module, State) ->\n\tNargs = Module:nargs(),\n\tcheck(Module, Nargs, State).\n\ncheck(Module, nonempty_list, State) ->\n\t{ok, State#{ nargs => nonempty_list }};\ncheck(Module, list, State) ->\n\t{ok, State#{ nargs => list }};\ncheck(Module, all, State) ->\n\t{ok, State#{ nargs => all }};\ncheck(Module, 'maybe', State) ->\n\t{ok, State#{ nargs => 'maybe'}};\ncheck(Module, {'maybe', Term}, State) ->\n\t{ok, State#{ nargs => {'maybe', Term}}};\ncheck(Module, Nargs, State) when is_integer(Nargs), Nargs >= 0 ->\n\t{ok, State#{ nargs => Nargs }}.\n\n"
  },
  {
    "path": "apps/arweave_config/src/arweave_config_specs/arweave_config_spec_parameter_key.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2025 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @doc Arweave Configuration Specification Configuration Key.\n%%%\n%%% A specification for a specification key is a way to describe a\n%%% parameter in arweave_config. The idea is to have something similar\n%%% like a path/uri that can be checked before updated. A simple\n%%% example with the debug parameter:\n%%%\n%%% ```\n%%% [global,debug].\n%%% '''\n%%%\n%%% How to configure a \"dynamic\" key, for example, with a peer or a\n%%% storage module? It can be done by inserting a special term to\n%%% define what kind of type is accepted.\n%%%\n%%% ```\n%%% [peers,{peer},enabled].\n%%% '''\n%%%\n%%% What if the variable parameter can have many types?\n%%%\n%%% ```\n%%% [peers, {[peer,ipv4,ipv6]}, enabled].\n%%% '''\n%%%\n%%% Now, how it's possible to match quickly the content of a parameter\n%%% and this kind of key?\n%%%\n%%% ```\n%%% RawKey = <<\"peers.[127.0.0.1].enabled\">>.\n%%% Value = <<\"true\">>.\n%%% FormattedKey = [peers, <<\"127.0.0.1\">>, enabled].\n%%% Specification = [peers, {[peer,ipv4,ipv6]}, enabled].\n%%%\n%%% % an idea for an internal representation\n%%% % InternalSpec = [peers, fun param/1, enabled].\n%%%\n%%% {ok, Spec} = find(FormattedKey).\n%%% true = is_valid(FormattedKey, Spec).\n%%% '''\n%%%\n%%% @todo if the provided key is a binary, it should probably be good\n%%% to convert it.\n%%%\n%%% @end\n%%%===================================================================\n-module(arweave_config_spec_parameter_key).\n-export([init/2]).\n-include(\"arweave_config_spec.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\ninit(Map = #{ parameter_key := CK }, State) ->\n\tfetch(Map, State);\ninit(Map, State) when is_map(Map) ->\n\t{error, #{\n\t\t\tmodule => Map,\n\t\t\treason => missing_parameter_key,\n\t\t\tstate => State\n\t\t}\n\t};\ninit(Module, State) when is_atom(Module) ->\n\tcase is_function_exported(Module, parameter_key, 0) of\n\t\ttrue ->\n\t\t\tfetch(Module, State);\n\t\tfalse ->\n\t\t\t{error, #{\n\t\t\t\t\tcallback => parameter_key,\n\t\t\t\t\treason => parameter_key_not_defined,\n\t\t\t\t\tmodule => Module,\n\t\t\t\t\tstate => State\n\t\t\t\t }\n\t\t\t}\n\tend.\n\ninit_test() ->\n\t?assertEqual(\n\t\t{ok, #{ parameter_key => [test]}},\n\t\tinit(#{ parameter_key => <<\"test\">> }, #{})\n\t),\n\t?assertEqual(\n\t\t{ok, #{ parameter_key => [test,1,2,3]}},\n\t\tinit(#{ parameter_key => <<\"test.1.2.3\">> }, #{})\n\t),\n\t?assertEqual(\n\t\t{ok, #{ parameter_key => [test]}},\n\t\tinit(#{ parameter_key => [test]}, #{})\n\t),\n\t?assertEqual(\n\t\t{ok, #{ parameter_key => [<<\"test\">>] }},\n\t\tinit(#{ parameter_key => [<<\"test\">>] }, #{})\n\t),\n\t?assertMatch(\n\t\t{error, _},\n\t\tinit(#{}, #{})\n\t),\n\t?assertMatch(\n\t\t{error, _},\n\t\tinit(#{ parameter_key => []}, #{})\n\t),\n\t?assertMatch(\n\t\t{error, _},\n\t\tinit(#{ parameter_key => test}, #{})\n\t),\n\t?assertMatch(\n\t\t{error, _},\n\t\tinit(#{ parameter_key => [test, #{}]}, #{})\n\t).\n\nfetch(Map = #{ parameter_key := CK }, State) ->\n\tcheck(Map, CK, State);\nfetch(Module, State) when is_atom(Module) ->\n\ttry\n\t\tCK = Module:parameter_key(),\n\t\tcheck(Module, CK, State)\n\tcatch\n\t\t_:Reason ->\n\t\t\t{error, 
Reason}\n\tend.\n\ncheck(Module, CK, State) when is_binary(CK) ->\n\tcase arweave_config_parser:key(CK) of\n\t\t{ok, Value} ->\n\t\t\t{ok, State#{ parameter_key => Value }};\n\t\t{error, Reason} ->\n\t\t\t{error, #{\n\t\t\t\t\tmodule => Module,\n\t\t\t\t\tcallback => parameter_key,\n\t\t\t\t\treason => {Reason, CK},\n\t\t\t\t\tstate => State\n\t\t\t\t}\n\t\t\t}\n\tend;\ncheck(Module, CK, State) when is_list(CK) ->\n\tcheck2(Module, CK, CK, State);\ncheck(Module, CK, State) ->\n\t{error, #{\n\t\t\tcallback => parameter_key,\n\t\t\treason => {invalid, CK},\n\t\t\tmodule => Module,\n\t\t\tstate => State\n\t\t}\n\t}.\n\ncheck2(Module, [], [], State) ->\n\t{error, #{\n\t\t\treason => {invalid, []},\n\t\t\tmodule => Module,\n\t\t\tstate => State,\n\t\t\tcallback => parameter_key\n\t\t}\n\t};\ncheck2(_Module, [], CK, State) ->\n\t{ok, State#{ parameter_key => CK }};\ncheck2(Module, [Item|Rest], CK, State) when is_atom(Item) ->\n\tcheck2(Module, Rest, CK, State);\ncheck2(Module, [Item|Rest], CK, State) when is_binary(Item) ->\n\tcheck2(Module, Rest, CK, State);\ncheck2(Module, [{Variable}|Rest], CK, State) when is_atom(Variable) ->\n\tcheck2(Module, Rest, CK, State);\ncheck2(Module, [Item|Rest], _CK, State) ->\n\t{error, #{\n\t\t  callback => parameter_key,\n\t\t  reason => {invalid, Item},\n\t\t  module => Module,\n\t\t  state => State,\n\t\t  rest => Rest\n\t\t}\n\t}.\n"
  },
  {
    "path": "apps/arweave_config/src/arweave_config_specs/arweave_config_spec_runtime.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2025 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @doc Runtime Specification Definition.\n%%%\n%%% Runtime callback has been created to deal with different kind of\n%%% parameters. Some are static and can be set only at startup. Others\n%%% are dynamic and can be set during runtime.\n%%%\n%%% @end\n%%%===================================================================\n-module(arweave_config_spec_runtime).\n-export([init/2]).\n-include(\"arweave_config_spec.hrl\").\n\ndefault() -> false.\n\ninit(_Map = #{ runtime := Runtime }, State) ->\n\t{ok, State#{ runtime => Runtime }};\ninit(Map, State) when is_map(Map) ->\n\t{ok, State#{ runtime => default() }};\ninit(Module, State) when is_atom(Module) ->\n\tcase is_function_exported(Module, runtime, 0) of\n\t\ttrue ->\n\t\t\tinit2(Module, State);\n\t\tfalse ->\n\t\t\t{ok, State#{ runtime => default() }}\n\tend.\n\ninit2(Module, State) ->\n\ttry Module:runtime() of\n\t\tfalse ->\n\t\t\tNewState = State#{ runtime => false },\n\t\t\t{ok, NewState};\n\t\ttrue ->\n\t\t\tNewState = State#{ runtime => true },\n\t\t\t{ok, NewState};\n\t\tElsewise ->\n\t\t\t{error, Elsewise}\n\tcatch\n\t\tE:R:S ->\n\t\t\t{error, #{\n\t\t\t\t\tmodule => Module,\n\t\t\t\t\tstate => State,\n\t\t\t\t\terror => E,\n\t\t\t\t\treason => R,\n\t\t\t\t\tstack => S\n\t\t\t\t}\n\t\t\t}\n\tend.\n"
  },
  {
    "path": "apps/arweave_config/src/arweave_config_specs/arweave_config_spec_short_argument.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2025 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @doc short argument specification feature.\n%%% @end\n%%%===================================================================\n-module(arweave_config_spec_short_argument).\n-export([init/2]).\n-include(\"arweave_config_spec.hrl\").\n-include_lib(\"kernel/include/logger.hrl\").\n\ninit(Map = #{ short_argument := SA }, State) ->\n\tcheck(Map, SA, State);\ninit(Map, State) when is_map(Map) ->\n\t{ok, State};\ninit(Module, State) when is_atom(Module) ->\n\tcase is_function_exported(Module, short_argument, 0) of\n\t\ttrue ->\n\t\t\t?LOG_DEBUG(\"~p is defined\", [{Module, short_argument, []}]),\n\t\t\tfetch(Module, State);\n\t\tfalse ->\n\t\t\t?LOG_DEBUG(\"~p is undefined\", [{Module, short_argument, []}]),\n\t\t\t{ok, State}\n\tend.\n\nfetch(Module, State) ->\n\ttry\n\t\tSA = erlang:apply(Module, short_argument, []),\n\t\tcheck(Module, SA, State)\n\tcatch\n\t\t_:R ->\n\t\t\t{error, R}\n\tend.\n\ncheck(Module, undefined, State) ->\n\t{ok, State};\ncheck(Module, false, State) ->\n\t{ok, State};\ncheck(Module, [SA], State) when is_integer(SA), SA > 0 ->\n\tcheck(Module, SA, State);\ncheck(Module, SA, State)\n\twhen integer(SA),\n\t\t( SA >= $0 andalso SA =< $9 );\n\t\t( SA >= $a andalso SA =< $z );\n\t\t( SA >= $A andalso SA =< $Z ) ->\n\t{ok, State#{ short_argument => SA }};\ncheck(Module, SA, State) ->\n\t{error, #{\n\t\t\treason => {invalid, SA},\n\t\t\tcallback => short_argument,\n\t\t\tmodule => Module,\n\t\t\tstate => State\n\t\t}\n\t}.\n"
  },
  {
    "path": "apps/arweave_config/src/arweave_config_specs/arweave_config_spec_short_description.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2025 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @doc Specification callback module define short description field.\n%%% @end\n%%%===================================================================\n-module(arweave_config_spec_short_description).\n-export([init/2]).\n-include(\"arweave_config_spec.hrl\").\n\ninit(Map = #{ short_description := SD }, State) ->\n\tcheck(Map, SD, State);\ninit(Map, State) when is_map(Map) ->\n\t{ok, State};\ninit(Module, State) when is_atom(Module) ->\n\tcase is_function_exported(Module, short_description, 0) of\n\t\ttrue ->\n\t\t\tfetch(Module, State);\n\t\tfalse ->\n\t\t\t{ok, State}\n\tend.\n\nfetch(Module, State) ->\n\ttry\n\t\tSD = erlang:apply(Module, short_description, []),\n\t\tcheck(Module, SD, State)\n\tcatch\n\t\t_:R ->\n\t\t\t{error, R}\n\tend.\n\ncheck(_Module, undefined, State) ->\n\t{ok, State};\ncheck(_Module, SD, State) when is_binary(SD); is_list(SD) ->\n\t{ok, State#{ short_description => SD }};\ncheck(Module, SD, State) ->\n\t{error, #{\n\t\t\treason => {invalid, SD},\n\t\t\tmodule => Module,\n\t\t\tcallback => short_description,\n\t\t\tstate => State\n\t\t}\n\t}.\n"
  },
  {
    "path": "apps/arweave_config/src/arweave_config_specs/arweave_config_spec_type.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2025 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @doc Configuration specification type.\n%%%\n%%% A type MUST BE defined somewhere. This is a way to reuse already\n%%% used format on the system.\n%%%\n%%% A type CAN CONVERT a value to an internal Erlang term.\n%%%\n%%% ```\n%%% % type boolean:\n%%% boolean(<<\"true\">>) -> {ok, true}.\n%%% boolean(\"true\") -> {ok, true}.\n%%% '''\n%%%\n%%% == TODO ==\n%%%\n%%% 1. Configures a default generic  type (e.g. any) returning always,\n%%% `ok' it  should be defined in  this module or any  other. At this,\n%%% time ,  he default value  is set to  `undefined', but this  is not\n%%% coherent with the rest of the code.\n%%%\n%%% ```\n%%% any(_, _) -> ok.\n%%% '''\n%%%\n%%% 2. Configures a default generic type (e.g. none) returning always\n%%% `error'.\n%%%\n%%% ```\n%%% none(_, _) -> {error, none}.\n%%% '''\n%%%\n%%% 3. Configure a custom Module/Function type callback:\n%%%\n%%% ```\n%%% {Module, Function}\n%%% '''\n%%%\n%%% @end\n%%%===================================================================\n-module(arweave_config_spec_type).\n-export([init/2]).\n-include(\"arweave_config_spec.hrl\").\n-include_lib(\"kernel/include/logger.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\ninit(#{ type := Type }, State) ->\n\t{ok, State#{ type => Type }};\ninit(Map, State) when is_map(Map) ->\n\t{ok, State};\ninit(Module, State) when is_atom(Module) ->\n\tcase is_function_exported(Module, type, 0) of\n\t\ttrue ->\n\t\t\tfetch(Module, State);\n\t\tfalse ->\n\t\t\t{ok, State}\n\tend.\n\nfetch(Module, State) ->\n\ttry erlang:apply(Module, type, []) of\n\t\tT when is_atom(T) ->\n\t\t\tcase is_function_exported(arweave_config_type, T, 1) of\n\t\t\t\ttrue ->\n\t\t\t\t\t{ok, State#{ type => T }};\n\t\t\t\tfalse ->\n\t\t\t\t\t?LOG_WARNING(\"non existing type ~p\", [T]),\n\t\t\t\t\t{ok, State#{ type => T }}\n\t\t\tend;\n\t\tElsewise ->\n\t\t\t{error, Elsewise}\n\tcatch\n\t\tE:R:S ->\n\t\t\t{error, #{\n\t\t\t\t\tmodule => Module,\n\t\t\t\t\tstate => State,\n\t\t\t\t\terror => E,\n\t\t\t\t\treason => R,\n\t\t\t\t\tstack => S\n\t\t\t\t}\n\t\t\t}\n\tend.\n"
  },
  {
    "path": "apps/arweave_config/src/arweave_config_store.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2025 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @doc Arweave Configuration Data Store Interface.\n%%%\n%%% This module/process is in charge to store the configuration.\n%%% Usually, `arweave_config_spec' is the only process allowed to do\n%%% so, but this rule is not enforced at the moment.\n%%%\n%%% == Features ==\n%%%\n%%% === Dealing with map root value ===\n%%%\n%%% While creating new parameter, a problem will probably arise very\n%%% soon. What if a leaf is also a branch? Let imagine we want to\n%%% create a more flexible way to configure the debugging parameter,\n%%% and permit users to configure debug on some part of the code, the\n%%% implementation would be something like the code below:\n%%%\n%%% ```\n%%% {[global,debug], true}\n%%% {[global,debug,arweave_config], true}\n%%% '''\n%%%\n%%% Unfortunately, this will not work in the current implementation, a\n%%% map. When a value is already present and is not a `map()', then it\n%%% will  be  set  as  `root'  item.  The  `root'  is  represented  as\n%%% underscore character (`_').\n%%%\n%%% ```\n%%% % extracted from arweave_config_store\n%%% Proplist = [\n%%%   {[global,debug], true},\n%%%   {[global,debug,arweave_config, true}\n%%% ].\n%%%\n%%% % converted as map\n%%% Maps = [\n%%%   #{ global => #{ debug => true }},\n%%%   #{ global => #{ debug => #{ arweave_config => true }}}\n%%% ]\n%%%\n%%% % merged as map\n%%% MergedMap = #{\n%%%   global => #{\n%%%     debug => #{\n%%%       '_' => true,\n%%%       arweave_config => true\n%%%     }\n%%%   }\n%%% }\n%%%\n%%% % as JSON\n%%% {\n%%%   \"global\": {\n%%%     \"debug\": {\n%%%       \"_\": true,\n%%%       \"arweave_config\": true\n%%%     }\n%%%   }\n%%% }\n%%%\n%%% % as YAML\n%%% global:\n%%%   debug:\n%%%     \"_\": true\n%%%     arweave_config: true\n%%% '''\n%%%\n%%% The key `_' then becomes a reserved key.\n%%%\n%%% == TODO ==\n%%%\n%%% === Single Line Format Support ===\n%%%\n%%% Instead of exporting classic JSON format, an easier one can be\n%%% created, where one value is attributed on one line:\n%%%\n%%% ```\n%%% global.debug=true\n%%% global.data.directory=\".\"\n%%% peers.[127.0.0.1:1984].enabled=true\n%%% '''\n%%%\n%%% The separatator could be `=' or a null char (e.g `\\t', ` '). One\n%%% huge advantage is no external module will be required to\n%%% parse/decode, and the format is pretty close from what we already\n%%% have in the database.\n%%%\n%%% === JSON Support ===\n%%%\n%%% The key present in the store can have different type, usually\n%%% `atom()', `binary()' and/or `integer()'. 
Encoder like `jiffy' will\n%%% not encode `integer()' key to JSON string directly, then, a\n%%% conversion step will be required.\n%%%\n%%% @end\n%%%===================================================================\n-module(arweave_config_store).\n-behavior(gen_server).\n-vsn(1).\n-export([\n\tstart_link/0,\n\tstop/0,\n\tget/1,\n\tget/2,\n\tset/2,\n\tdelete/1,\n\tto_map/0,\n\tfrom_map/1\n]).\n-export([\n\tinit/1,\n\thandle_call/3,\n\thandle_cast/2,\n\thandle_info/2\n]).\n-compile({no_auto_import,[get/1]}).\n-record(key, {id}).\n-record(value, {value, meta}).\n-include_lib(\"kernel/include/logger.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n%%--------------------------------------------------------------------\n%% @doc Starts `arweave_config_store' registered process.\n%% @end\n%%--------------------------------------------------------------------\nstart_link() ->\n\tgen_server:start_link({local, ?MODULE}, ?MODULE, [], []).\n\n%%--------------------------------------------------------------------\n%% @doc Stops `arweave_config_store' process.\n%% @end\n%%--------------------------------------------------------------------\nstop() ->\n\tgen_server:stop(?MODULE).\n\n%%--------------------------------------------------------------------\n%% @doc Retrieve a key from configuration store using ETS directly.\n%% @see lookup/1\n%% @end\n%%--------------------------------------------------------------------\n-spec get(Key) -> Return when\n\tKey :: term(),\n\tReturn :: {ok, term()} | {error, undefined}.\n\nget(Key) ->\n\tcase arweave_config_parser:key(Key) of\n\t\t{ok, Id} ->\n\t\t\tlookup(Id);\n\t\tElsewise ->\n\t\t\t{error, Elsewise}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc Retrieve a value from ETS table, if not defined, return the\n%% default value from second argument.\n%% @end\n%%--------------------------------------------------------------------\n-spec get(Key, Default) -> Return when\n\tKey :: term(),\n\tDefault :: term(),\n\tReturn :: term() | Default.\n\nget(Key, Default) ->\n\tcase get(Key) of\n\t\t{ok, Value} -> Value;\n\t\t_ -> Default\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc Configure a value using a new key.\n%% @todo if the key is already defined, the old value should be\n%%       returned.\n%% @end\n%%--------------------------------------------------------------------\n-spec set(Key, Value) -> Return when\n\tKey :: term(),\n\tValue :: term(),\n\tReturn :: {ok, New}\n\t\t| {ok, New, Old}\n\t\t| {error, term()},\n\tNew :: {Id, Value},\n\tOld :: {Id, Value},\n\tId :: term().\n\nset(Key, Value) ->\n\tcase arweave_config_parser:key(Key) of\n\t\t{ok, Id} ->\n\t\t\tgen_server:call(?MODULE, {set, Id, Value});\n\t\tElsewise ->\n\t\t\tElsewise\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc\n%% @end\n%%--------------------------------------------------------------------\n-spec delete(Key) -> Return when\n\tKey :: term(),\n\tReturn :: {ok, term()} | {error, undefined}.\n\ndelete(Key) ->\n\tcase arweave_config_parser:key(Key) of\n\t\t{ok, Id} ->\n\t\t\tgen_server:call(?MODULE, {delete, Id});\n\t\tElsewise ->\n\t\t\tElsewise\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc Converts a map into a valid structure ready to be inserted\n%% into an ETS table.\n%% @todo create the import feature.\n%%\n%% ```\n%% % from:\n%% #{ 1 => 2, 2 => #{ 3 => 4 }}.\n%%\n%% % every keys/values must be valid, and then the 
final data\n%% % structure before insert should look like that:\n%% [{1, 2}, {[2,3], 4}].\n%% '''\n%%\n%% @end\n%%--------------------------------------------------------------------\n-spec from_map(Map) -> Return when\n\tMap :: map(),\n\tReturn :: {ok, list()} | {error, term()}.\n\nfrom_map(Data) when is_map(Data) ->\n\ttodo.\n\n%%--------------------------------------------------------------------\n%% @doc Converts the content of the ETS table into a map. It will\n%% be easier to export the database in this case.\n%% @todo the merger is not finished yet.\n%%\n%% ```\n%% % the final output should look like that\n%% #{ key1 => #{ key2 => value } }.\n%%\n%% % it can easily be converted into json, yaml or toml.\n%% '''\n%%\n%% @end\n%%--------------------------------------------------------------------\n-spec to_map() -> Return when\n\tReturn :: map().\n\nto_map() ->\n\tParameters = ets:tab2list(?MODULE),\n\tListOfMap = to_map(Parameters, []),\n\tarweave_config_serializer:map_merge(ListOfMap).\n\nto_map([], Buffer) -> Buffer;\nto_map([{#key{ id = Id }, #value{ value = Value }}|Rest], Buffer) ->\n\tto_map(Rest, [map_path(Id, Value)|Buffer]).\n\nto_map_test() ->\n\t{ok, Pid} = start_link(),\n\tset(\"test.a.b\", 1),\n\tset(<<\"test.a.c\">>, 2),\n\t?assertEqual(\n\t\t#{\n\t\t\ttest => #{\n\t\t\t\ta => #{\n\t\t\t\t       b => 1,\n\t\t\t\t       c => 2\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\tto_map()\n\t),\n\tgen_server:stop(Pid).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nmap_path(List, Value) ->\n\t[H|Rest] = lists:reverse(List),\n\tmap_path2(Rest, #{ H => Value }).\n\nmap_path2([], Buffer) -> Buffer;\nmap_path2([H|T], Buffer) ->\n\tmap_path2(T, #{ H => Buffer }).\n\nmap_path2_test() ->\n\t?assertEqual(\n\t\t#{\n\t\t\t1 => #{\n\t\t\t\t2 => #{\n\t\t\t\t       3 => data\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\tmap_path([1,2,3], data)\n\t).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc a wrapper around ets:lookup/2\n%% @end\n%%--------------------------------------------------------------------\nlookup(Id) ->\n\tcase ets:lookup(?MODULE, #key{ id = Id }) of\n\t\t[] ->\n\t\t\t{error, undefined};\n\t\t[{#key{ id = Id }, #value{ value = Value}}] ->\n\t\t\t{ok, Value};\n\t\tElsewise ->\n\t\t\t{error, Elsewise}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit(_Args) ->\n\terlang:process_flag(trap_exit, true),\n\tEts = ets:new(?MODULE, [named_table, protected]),\n\t{ok, Ets}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nhandle_call({set, Id, Value}, _From, State) ->\n\tK = #key{ id = Id },\n\tV = #value{ value = Value, meta = #{} },\n\tcase ets:insert(?MODULE, {K, V}) of\n\t\ttrue ->\n\t\t\t{reply, {ok, {Id, Value}}, State};\n\t\tfalse ->\n\t\t\t{reply, {error, {Id, Value}}, State}\n\tend;\nhandle_call({delete, Id}, From, State) ->\n\tcase ets:take(?MODULE, #key{ id = Id }) of\n\t\t[] ->\n\t\t\t{reply, {error, undefined}, State};\n\t\t[{_, #value{ value = Value}}] ->\n\t\t\t{reply, {ok, {Id, Value}}, State}\n\tend;\nhandle_call(Msg, From, State) ->\n\t?LOG_ERROR([{message, Msg}, {from, From}, {module, ?MODULE}]),\n\t{noreply, State}.\n\n%%--------------------------------------------------------------------\n%% 
@hidden\n%%--------------------------------------------------------------------\nhandle_cast(Msg, State) ->\n\t?LOG_ERROR([{message, Msg}, {module, ?MODULE}]),\n\t{noreply, State}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nhandle_info(Msg, State) ->\n\t?LOG_ERROR([{message, Msg}, {module, ?MODULE}]),\n\t{noreply, State}.\n"
  },
  {
    "path": "apps/arweave_config/src/arweave_config_sup.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2025 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @doc Arweave Configuration Application Supervisor.\n%%% @end\n%%%===================================================================\n-module(arweave_config_sup).\n-export([start_link/0]).\n-export([init/1]).\n\n%%--------------------------------------------------------------------\n%% @doc\n%% @end\n%%--------------------------------------------------------------------\nstart_link() ->\n\tsupervisor:start_link({local, ?MODULE}, ?MODULE, []).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit(_Args) ->\n\t{ok, {supervisor(), children()}}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nsupervisor() ->\n\t#{\n\t\tstrategy => one_for_all\n\t }.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nchildren() ->\n\t[\n\t\t#{\n\t\t\tid => arweave_config,\n\t\t\tstart => {\n\t\t\t\tarweave_config,\n\t\t\t\tstart_link,\n\t\t\t\t[]\n\t\t\t}\n\t\t},\n\t \t#{\n\t\t\tid => arweave_config_environment,\n\t\t\tstart => {\n\t\t\t\tarweave_config_environment,\n\t\t\t\tstart_link,\n\t\t\t\t[]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tid => arweave_config_arguments,\n\t\t\tstart => {\n\t\t\t\tarweave_config_arguments,\n\t\t\t\tstart_link,\n\t\t\t\t[]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tid => arweave_config_file,\n\t\t\tstart => {\n\t\t\t\tarweave_config_file,\n\t\t\t\tstart_link,\n\t\t\t\t[]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tid => arweave_config_arguments_legacy,\n\t\t\tstart => {\n\t\t\t\tarweave_config_arguments_legacy,\n\t\t\t\tstart_link,\n\t\t\t\t[]\n\t\t\t}\n\t\t},\n\t\t% @TODO at this time, this process/feature is not\n\t\t% stable enough to be started.\n\t\t% #{\n\t\t% \tid => arweave_config_file_legacy,\n\t\t% \tstart => {\n\t\t% \t\tarweave_config_file_legacy,\n\t\t% \t\tstart_link,\n\t\t% \t\t[]\n\t\t% \t}\n\t\t% },\n\t\t#{\n\t\t\tid => arweave_config_store,\n\t\t\tstart => {\n\t\t\t\tarweave_config_store,\n\t\t\t\tstart_link,\n\t\t\t\t[]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tid => arweave_config_spec,\n\t\t\tstart => {\n\t\t\t\tarweave_config_spec,\n\t\t\t\tstart_link,\n\t\t\t\t[]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tid => arweave_config_legacy,\n\t\t\tstart => {\n\t\t\t\tarweave_config_legacy,\n\t\t\t\tstart_link,\n\t\t\t\t[]\n\t\t\t}\n\t\t},\n\t\t#{\n\t\t\tid => arweave_config_signal_handler,\n\t\t\tstart => {\n\t\t\t\tarweave_config_signal_handler,\n\t\t\t\tstart_link,\n\t\t\t\t[]\n\t\t\t}\n\t\t}\n\t].\n"
  },
  {
    "path": "apps/arweave_config/src/arweave_config_type.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @copyright 2025 (c) Arweave\n%%% @doc Arweave Configuration Type Definition.\n%%% @end\n%%%===================================================================\n-module(arweave_config_type).\n-compile(warnings_as_errors).\n-export([\n\tnone/1,\n\tany/1,\n\tboolean/1,\n\tinteger/1,\n\tpos_integer/1,\n\tipv4/1,\n\tfile/1,\n\ttcp_port/1,\n\tpath/1,\n\tatom/1,\n\tstring/1,\n\tbase64/1,\n\tbase64url/1,\n\tlogging_template/1\n]).\n-include_lib(\"kernel/include/file.hrl\").\n\n%%--------------------------------------------------------------------\n%% @doc always returns an error.\n%% @end\n%%--------------------------------------------------------------------\n-spec none(V) -> {error, V}.\n\nnone(V) -> {error, V}.\n\n%%--------------------------------------------------------------------\n%% @doc always returns the value.\n%% @end\n%%--------------------------------------------------------------------\n-spec any(V) -> {ok, V}.\n\nany(V) -> {ok, V}.\n\n%%--------------------------------------------------------------------\n%% @doc check if the data is an atom and convert list/binary to\n%% existing atoms.\n%% @end\n%%--------------------------------------------------------------------\n-spec atom(Input) -> Return when\n\tInput :: string() | binary() | atom(),\n\tReturn :: {ok, atom()} | {error, Input}.\n\natom(List) when is_list(List) ->\n\ttry {ok, list_to_existing_atom(List)}\n\tcatch _:_ -> {error, List}\n\tend;\natom(Binary) when is_binary(Binary) ->\n\ttry {ok, binary_to_existing_atom(Binary)}\n\tcatch _:_ -> {error, Binary}\n\tend;\natom(V) when is_atom(V) -> {ok, V};\natom(V) -> {error, V}.\n\n%%--------------------------------------------------------------------\n%% @doc check booleans from binary, list, integer and atoms. 
When a\n%% string is used, a regexp is being used and ignore the case of the\n%% word.\n%%\n%% == Examples ==\n%%\n%% ```\n%% {ok, true} = boolean(true).\n%% {ok, true} = boolean(<<\"true\">>).\n%% {ok, true} = boolean(\"true\").\n%% {ok, true} = boolean(\"on\").\n%% {ok, true} = boolean(<<\"TruE\">>).\n%% '''\n%%\n%% @end\n%%--------------------------------------------------------------------\n-spec boolean(Input) -> Return when\n\tInput :: string() | binary() | boolean(),\n\tReturn :: {ok, boolean()} | {error, Input}.\n\nboolean(true) -> {ok, true};\nboolean(on) -> {ok, true};\nboolean(false) -> {ok, false};\nboolean(off) -> {ok, false};\nboolean(String) when is_list(String); is_binary(String) ->\n\tRegexp = \"^(?:(?<f>false|off)|(?<t>true|on))$\",\n\tOpts = [extended, caseless, {capture, all_names, binary}],\n\tcase re:run(String, Regexp, Opts) of\n\t\t{match, [<<>>, _True]} -> {ok, true};\n\t\t{match, [_False, <<>>]} -> {ok, false};\n\t\t_ -> {error, String}\n\tend;\nboolean(V) -> {error, V}.\n\n%%--------------------------------------------------------------------\n%% @doc check integers.\n%% @end\n%%--------------------------------------------------------------------\n-spec integer(Integer) -> Return when\n\tInteger :: list() | binary() | integer(),\n\tReturn :: {ok, integer()} | {error, term()}.\n\ninteger(List) when is_list(List) ->\n\ttry integer(list_to_integer(List))\n\tcatch _:_ -> {error, List} end;\ninteger(Binary) when is_binary(Binary) ->\n\ttry integer(binary_to_integer(Binary))\n\tcatch _:_ -> {error, Binary} end;\ninteger(Integer) when is_integer(Integer) ->\n\t{ok, Integer};\ninteger(V) ->\n\t{error, V}.\n\n%%--------------------------------------------------------------------\n%% @doc check positive integers.\n%% @end\n%%--------------------------------------------------------------------\n-spec pos_integer(Integer) -> Return when\n\tInteger :: list() | binary() | pos_integer(),\n\tReturn :: {ok, pos_integer()} | {error, term()}.\n\npos_integer(Data) ->\n\tcase integer(Data) of\n\t\t{ok, Integer} when Integer >= 0 ->\n\t\t\t{ok, Integer};\n\t\t_Else ->\n\t\t\t{error, Data}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc check ipv4 addresses.\n%% @end\n%%--------------------------------------------------------------------\n-spec ipv4(IPv4) -> Return when\n\tIPv4 :: inet:ip4_address() | binary() | list(),\n\tReturn :: {ok, list()} | {error, term()}.\n\nipv4(Tuple = {_, _, _, _}) ->\n\tcase inet:is_ipv4_address(Tuple) of\n\t\ttrue ->\n\t\t\tipv4(inet:ntoa(Tuple));\n\t\tfalse ->\n\t\t\t{error, Tuple}\n\tend;\nipv4(Binary) when is_binary(Binary) ->\n\tipv4(binary_to_list(Binary));\nipv4(List) when is_list(List) ->\n\tcase inet:parse_strict_address(List, inet) of\n\t\t{ok, _} ->\n\t\t\t{ok, list_to_binary(List)};\n\t\t_Elsewise ->\n\t\t\t{error, List}\n\tend;\nipv4(Elsewise) ->\n\t{error, Elsewise}.\n\n%%--------------------------------------------------------------------\n%% @doc Defines file type.\n%% @todo if an unix socket path length is > 108, it will fail, needs\n%% to be fixed.\n%% @end\n%%--------------------------------------------------------------------\n-spec file(File) -> Return when\n\tFile :: binary() | list(),\n\tReturn :: {ok, binary()} | {error, term()}.\n\nfile(List) when is_list(List) ->\n\tfile(list_to_binary(List));\nfile(Binary) when is_binary(Binary) ->\n\tcase filename:pathtype(Binary) of\n\t\tabsolute ->\n\t\t\tfile2(Binary);\n\t\trelative ->\n\t\t\t{ok, Cwd} = file:get_cwd(),\n\t\t\tAbsolute = 
filename:join(Cwd, Binary),\n\t\t\tfile2(Absolute)\n\tend;\nfile(Path) ->\n\ttype_error(\n\t\tfile,\n\t\t<<\"unsupported format\">>,\n\t\t#{ path => Path }\n\t ).\n\n% check if the directory is present, arweave_config should not\n% be in charge of creating it.\nfile2(Path) ->\n\tSplit = filename:split(Path),\n\t[_Filename|Reverse] = lists:reverse(Split),\n\tDirectory = filename:join(lists:reverse(Reverse)),\n\tcase filelib:is_dir(Directory) of\n\t\ttrue ->\n\t\t\tfile3(Path, Directory);\n\t\tfalse ->\n\t\t\ttype_error(\n\t\t\t\tfile,\n\t\t\t\t<<\"directory not found\">>,\n\t\t\t\t#{\n\t\t\t\t\tpath => Path,\n\t\t\t\t\tdirectory => Directory\n\t\t\t\t }\n\t\t\t)\n\tend.\n\n% check if the directory has a read/write access.\nfile3(Path, Directory) ->\n\tSplit = filename:split(Path),\n\t[_Filename|_] = lists:reverse(Split),\n\tcase file:read_file_info(Directory) of\n\t\t{ok, #file_info{access = read_write }} ->\n\t\t\tfile4(Path);\n\t\t{error, Reason} ->\n\t\t\ttype_error(\n\t\t\t\tfile,\n\t\t\t\tReason,\n\t\t\t\t#{ path => Path }\n\t\t\t)\n\tend.\n\n% convert a list into path. It should not be the case there, but it's\n% to avoid having different type format.\nfile4(Path) when is_list(Path) ->\n\t{ok, list_to_binary(Path)};\nfile4(Path) ->\n\t{ok, Path}.\n\n%%--------------------------------------------------------------------\n%% @doc check tcp port.\n%% @end\n%%--------------------------------------------------------------------\n-spec tcp_port(Port) -> Return when\n\tPort :: pos_integer(),\n\tReturn :: {ok, pos_integer()} | {error, term()}.\n\ntcp_port(Binary) when is_binary(Binary) ->\n\ttcp_port(binary_to_integer(Binary));\ntcp_port(List) when is_list(List) ->\n\ttcp_port(list_to_integer(List));\ntcp_port(Integer) when is_integer(Integer) ->\n\tcase Integer of\n\t\t_ when Integer >= 0, Integer =< 65535 ->\n\t\t\t{ok, Integer};\n\t\t_ ->\n\t\t\t{error, Integer}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc check unix path.\n%% @end\n%%--------------------------------------------------------------------\npath(List) when is_list(List) ->\n\tpath(list_to_binary(List));\npath(Binary) when is_binary(Binary) ->\n\tcase filename:validate(Binary) of\n\t\ttrue ->\n\t\t\tpath_relative(Binary);\n\t\tfalse ->\n\t\t\t{error, Binary}\n\tend.\n\npath_relative(Path) ->\n\tcase filename:pathtype(Path) of\n\t\trelative ->\n\t\t\t{ok, filename:absname(Path)};\n\t\tabsolute ->\n\t\t\t{ok, Path}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc a string type.\n%% @todo to be defined correctly.\n%% @end\n%%--------------------------------------------------------------------\n-spec string(String) -> Return when\n\tString :: list(),\n\tReturn :: {ok, list()} | {error, term()}.\n\nstring(String) -> string(String, String).\nstring([], String) -> {ok, String};\nstring([H|T], String) when is_integer(H) -> string(T, String);\nstring(_, String) -> {error, String}.\n\n%%--------------------------------------------------------------------\n%% @doc check base64 type.\n%% @end\n%%--------------------------------------------------------------------\n-spec base64(String) -> Return when\n\tString :: binary() | list(),\n\tReturn :: {ok, binary()} | {error, term()}.\n\nbase64(List) when is_list(List) ->\n\tbase64(list_to_binary(List));\nbase64(Binary) ->\n\ttry {ok, base64:decode(Binary)}\n\tcatch _:_ -> {error, Binary}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc check base64url\n%% 
@end\n%%--------------------------------------------------------------------\n-spec base64url(String) -> Return when\n\tString :: binary() | list(),\n\tReturn :: {ok, binary()} | {error, term()}.\n\nbase64url(List) when is_list(List) ->\n\tbase64url(list_to_binary(List));\nbase64url(Binary) ->\n\ttry {ok, b64fast:decode(Binary)}\n\tcatch _:_ -> {error, Binary}\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc Check, parse and convert a logging template from custom\n%% parser.\n%%\n%% The rules are strict: only tab and space are accepted as\n%% separators, only printable ASCII chars may form a word, and only a\n%% limited set of chars (`[a-zA-Z0-9_]') may form an existing atom.\n%% An atom is a word starting with the '%' symbol. All templates are\n%% terminated with \"\\n\".\n%%\n%% @see logger_formatter:template/0\n%%\n%% == Examples ==\n%%\n%% ```\n%% {ok, [\"test\", \"\\n\"]} = logging_template(\"test\").\n%% {ok, [test, \"\\n\"]} = logging_template(\"%test\").\n%% {ok, [\"message:\", \" \", msg, \"\\n\"]} = logging_template(\"message: %msg\").\n%% '''\n%%\n%% @end\n%%--------------------------------------------------------------------\n-spec logging_template(String) -> Return when\n\tString :: binary() | list(),\n\tReturn :: {ok, [atom()|list()]} | {error, term()}.\n\nlogging_template(List) when is_list(List) ->\n\tlogging_template_parse(list_to_binary(List));\nlogging_template(Binary) when is_binary(Binary) ->\n\tlogging_template_parse(Binary).\n\nlogging_template_parse(Binary) ->\n\tlogging_template_tokenizer(Binary, []).\n\nlogging_template_tokenizer(<<>>, Buffer) ->\n\tlogging_template_parser(Buffer);\nlogging_template_tokenizer(<<Char, Rest/binary>>, Buffer)\n\twhen Char =:= $ ; Char =:= $\\t ->\n\t\tNewBuffer = [{null, Char}|Buffer],\n\t\tlogging_template_tokenizer(Rest, NewBuffer);\nlogging_template_tokenizer(<<$%, Rest/binary>>, Buffer) ->\n\tcase logging_template_token_atom(Rest) of\n\t\t{ok, Atom, NewRest} ->\n\t\t\tNewBuffer = [{atom, Atom}|Buffer],\n\t\t\tlogging_template_tokenizer(NewRest, NewBuffer);\n\t\tElse ->\n\t\t\tElse\n\tend;\nlogging_template_tokenizer(Bin, Buffer) when is_binary(Bin) ->\n\tcase logging_template_token_word(Bin) of\n\t\t{ok, Word, NewRest} ->\n\t\t\tNewBuffer = [{word, Word}|Buffer],\n\t\t\tlogging_template_tokenizer(NewRest, NewBuffer);\n\t\tElse ->\n\t\t\tElse\n\tend.\n\nlogging_template_token_atom(Binary) ->\n\tlogging_template_token_atom(Binary, <<>>).\nlogging_template_token_atom(<<>>, Buffer) ->\n\t{ok, Buffer, <<>>};\nlogging_template_token_atom(Rest = <<Char, _/binary>>, Buffer)\n\twhen Char =:= $ ; Char =:= $\\t ->\n\t\t{ok, Buffer, Rest};\nlogging_template_token_atom(<<Char, Rest/binary>>, Buffer)\n\twhen Char >= $a, Char =< $z;\n\t     Char >= $A, Char =< $Z;\n\t     Char >= $0, Char =< $9;\n\t     Char =:= $_ ->\n\t\tlogging_template_token_atom(Rest, <<Buffer/binary, Char>>);\nlogging_template_token_atom(<<Char, _/binary>>, _Buffer) ->\n\t{error, {atom, Char}}.\n\nlogging_template_token_word(Binary) ->\n\tlogging_template_token_word(Binary, <<>>).\nlogging_template_token_word(<<>>, Buffer) ->\n\t{ok, Buffer, <<>>};\nlogging_template_token_word(Rest = <<Char, _/binary>>, Buffer)\n\twhen Char =:= $ ; Char =:= $\\t ->\n\t\t{ok, Buffer, Rest};\nlogging_template_token_word(<<Char, Rest/binary>>, Buffer)\n\twhen Char >= 33, Char =< 126 ->\n\t\tlogging_template_token_word(Rest, <<Buffer/binary, Char>>);\nlogging_template_token_word(<<Char, _/binary>>, _Buffer) ->\n\t{error, {word, 
Char}}.\n\nlogging_template_parser(Tokens) ->\n\tlogging_template_parser(Tokens, []).\nlogging_template_parser([], Buffer) ->\n\t{ok, Buffer ++ [\"\\n\"]};\nlogging_template_parser([{null, Null}|Rest], Buffer) ->\n\tNewBuffer = [[Null]|Buffer],\n\tlogging_template_parser(Rest, NewBuffer);\nlogging_template_parser([{atom, Atom}|Rest], Buffer) ->\n\ttry\n\t\tResult = binary_to_existing_atom(Atom),\n\t\tNewBuffer = [Result|Buffer],\n\t\tlogging_template_parser(Rest, NewBuffer)\n\tcatch\n\t\t_:_ ->\n\t\t\t{error, {atom, Atom}}\n\tend;\nlogging_template_parser([{word, Word}|Rest], Buffer) ->\n\tNewBuffer = [binary_to_list(Word)|Buffer],\n\tlogging_template_parser(Rest, NewBuffer).\n\n%%--------------------------------------------------------------------\n%% common format for all errors\n%%--------------------------------------------------------------------\ntype_error(Name, Reason, Data) ->\n\t{error, #{\n\t\t\tstatus => error,\n\t\t\tmessage => #{\n\t\t\t\ttype => Name,\n\t\t\t\treason => Reason,\n\t\t\t\tdata => Data\n\t\t\t}\n\t\t }\n\t}.\n"
  },
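  {
    "path": "apps/arweave_config/examples/arweave_config_types_examples.erl",
    "content": "%%%===================================================================\n%%% Illustrative sketch only: this file is NOT part of the original\n%%% source tree. It shows how the type-checking helpers defined in the\n%%% module above (boolean/1, integer/1, pos_integer/1, ipv4/1,\n%%% logging_template/1) are expected to behave, based on their\n%%% documented return shapes. The module name `arweave_config_types'\n%%% used below is an assumption; replace it with the real module name.\n%%%===================================================================\n-module(arweave_config_types_examples).\n-export([run/0]).\n\nrun() ->\n\t%% booleans accept atoms, strings and binaries, case-insensitive.\n\t{ok, true} = arweave_config_types:boolean(<<\"TruE\">>),\n\t{ok, false} = arweave_config_types:boolean(\"off\"),\n\t%% integers are parsed from lists and binaries.\n\t{ok, -1} = arweave_config_types:integer(<<\"-1\">>),\n\t%% pos_integer/1 also accepts zero (non-negative check).\n\t{ok, 0} = arweave_config_types:pos_integer(\"0\"),\n\t%% ipv4/1 normalizes tuples, lists and binaries to a binary.\n\t{ok, <<\"127.0.0.1\">>} = arweave_config_types:ipv4({127, 0, 0, 1}),\n\t%% logging templates are tokenized into words, separators and atoms.\n\t_ = msg, % make sure the atom exists for binary_to_existing_atom/1\n\t{ok, [\"message:\", \" \", msg, \"\\n\"]} =\n\t\tarweave_config_types:logging_template(\"message: %msg\"),\n\tok.\n"
  },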
  {
    "path": "apps/arweave_config/test/arweave_config_SUITE.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @copyright 2025 (c) Arweave\n%%% @doc\n%%% @end\n%%%===================================================================\n-module(arweave_config_SUITE).\n-export([suite/0, description/0]).\n-export([init_per_suite/1, end_per_suite/1]).\n-export([init_per_testcase/2, end_per_testcase/2]).\n-export([all/0]).\n-export([arweave_config/1, arweave_config_legacy/1]).\n-include(\"arweave_config.hrl\").\n-include_lib(\"common_test/include/ct.hrl\").\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nsuite() -> [{userdata, [description()]}].\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ndescription() -> {description, \"arweave_config test main interface\"}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit_per_suite(Config) -> Config.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nend_per_suite(_Config) -> ok.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit_per_testcase(_TestCase, Config) ->\n\tct:pal(info, 1, \"start arweave_config\"),\n\tok = arweave_config:start(),\n\tConfig.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nend_per_testcase(_TestCase, _Config) ->\n\tct:pal(info, 1, \"stop arweave_config\"),\n\tok = arweave_config:stop().\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nall() ->\n\t[\n\t\tarweave_config,\n\t\tarweave_config_legacy\n\t].\n\n%%--------------------------------------------------------------------\n%% @doc test `arweave_config' interface\n%% @end\n%%--------------------------------------------------------------------\narweave_config(_Config) ->\n\tct:pal(test, 1, \"get an existing parameter\"),\n\t{ok, _} = arweave_config:get([debug]),\n\n\tct:pal(test, 1, \"get a missing parameter\"),\n\t{error, _} = arweave_config:get([missing, parameter]),\n\n\tct:pal(test, 1, \"get an existing parameter with default value\"),\n\t_ = arweave_config:get([debug], true),\n\n\tct:pal(test, 1, \"get a missing parameter with default value\"),\n\t1 = arweave_config:get([missing, parameter], 1),\n\n\tct:pal(test, 1, \"set an existing parameter\"),\n\t{ok, DebugValue1, _} = arweave_config:set([debug], true),\n\t{ok, DebugValue1} = arweave_config:get([debug]),\n\n\tct:pal(test, 1, \"unset an existing parameter\"),\n\t{ok, DebugValue2, _} = arweave_config:set([debug], false),\n\t{ok, DebugValue2} = arweave_config:get([debug]),\n\tok.\n\n%%--------------------------------------------------------------------\n%% @doc test `arweave_config' legacy interface.\n%% 
@end\n%%--------------------------------------------------------------------\narweave_config_legacy(_Config) ->\n\t% legacy compatible interface, to remove\n\tct:pal(test, 1, \"init legacy environment\"),\n\tok = arweave_config:set_env(#config{}),\n\n\t% legacy compatible interface, to remove\n\tct:pal(test, 1, \"get legacy environment\"),\n\t{ok, Config1} = arweave_config:get_env(),\n\tfalse = Config1#config.init,\n\n\t% legacy compatible interface, to remove\n\tct:pal(test, 1, \"set legacy environment\"),\n\tok = arweave_config:set_env(#config{ init = true }),\n\t{ok, Config2} = arweave_config:get_env(),\n\ttrue = Config2#config.init,\n\n\t% check runtime mode\n\tfalse = arweave_config:is_runtime(),\n\n\tct:pal(test, 1, \"switch to runtime mode\"),\n\tok = arweave_config:runtime(),\n\ttrue = arweave_config:is_runtime(),\n\n\t{comment, \"arweave_config interface tested\"}.\n\n"
  },
  {
    "path": "apps/arweave_config/test/arweave_config_arguments_SUITE.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @copyright 2026 (c) Arweave\n%%% @doc arweave configuration arguments parser suite.\n%%% @end\n%%%===================================================================\n-module(arweave_config_arguments_SUITE).\n-export([suite/0, description/0]).\n-export([init_per_suite/1, end_per_suite/1]).\n-export([init_per_testcase/2, end_per_testcase/2]).\n-export([all/0]).\n-export([\n\tdefault/1,\n\tparser/1\n]).\n-include(\"arweave_config.hrl\").\n-include_lib(\"common_test/include/ct.hrl\").\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nsuite() -> [{userdata, [description()]}].\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ndescription() -> {description, \"arweave config cli arguments interface\"}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit_per_suite(Config) -> Config.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nend_per_suite(_Config) -> ok.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit_per_testcase(_TestCase, _Config) ->\n\tct:pal(info, 1, \"start arweave_config\"),\n\tok = arweave_config:start(),\n\t[].\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nend_per_testcase(_TestCase, _Config) ->\n\tct:pal(info, 1, \"stop arweave_config\"),\n\tok = arweave_config:stop().\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nall() ->\n\t[\n\t\tdefault,\n\t\tparser\n\t].\n\n%%--------------------------------------------------------------------\n%% @doc test `arweave_config_arguments' main interface.\n%% @end\n%%--------------------------------------------------------------------\ndefault(_Config) ->\n\tct:pal(test, 1, \"check if arweave_config_arguments is alive\"),\n\ttrue = is_process_alive(whereis(arweave_config_arguments)),\n\n\tct:pal(test, 1, \"send an unsupported message to the process\"),\n\tok = gen_server:call(arweave_config_arguments, '@random_test', 1000),\n\tok = gen_server:cast(arweave_config_arguments, '@random_test'),\n\t_ = erlang:send(arweave_config_arguments, '@random_test'),\n\n\tct:pal(test, 1, \"check the default configuration\"),\n\t[] = arweave_config_arguments:get(),\n\n\tct:pal(test, 1, \"by default, no arguments are present\"),\n\t[] = arweave_config_arguments_legacy:get_args(),\n\n\tct:pal(test, 1, \"set debug arguments\"),\n\t{ok, _} =\n\t\tarweave_config_arguments:set([\n\t\t\t\"--debug\"\n\t\t]),\n\n\tct:pal(test, 1, \"raw arguments are returned\"),\n\t[\"--debug\"] =\n\t\tarweave_config_arguments:get_args(),\n\n\tct:pal(test, 1, \"the 
configuration has been set\"),\n\t[{#{ parameter_key := [debug] }, [true]}] =\n\t\tarweave_config_arguments:get(),\n\n\tct:pal(test, 1, \"load into arweave_config\"),\n\tok = arweave_config_arguments:load(),\n\n\tct:pal(test, 1, \"check arweave_config result\"),\n\t{ok, true} = arweave_config:get([debug]),\n\t{comment, \"arguments process tested\"}.\n\n%%--------------------------------------------------------------------\n%% @doc test `arweave_config_arguments' parser.\n%% @end\n%%--------------------------------------------------------------------\nparser(_Config) ->\n\t% Create a custom long arguments map\n\tLongArguments = #{\n\t\t<<\"--boolean\">> => #{\n\t\t\ttype => boolean\n\t\t},\n\t\t<<\"--integer\">> => #{\n\t\t\ttype => integer\n\t\t},\n\t\t<<\"--integer.pos\">> => #{\n\t\t\ttype => pos_integer\n\t\t}\n\t},\n\n\t% create a custom short argmuents map\n\tShortArguments = #{\n\t\t$b => #{\n\t\t\ttype => boolean\n\t\t},\n\t\t$i => #{\n\t\t\ttype => integer\n\t\t},\n\t\t$I => #{\n\t\t\ttype => pos_integer\n\t\t}\n\t},\n\tArguments = [\n\t\t% long arguments\n\t\t<<\"--boolean\">>,\n\t\t<<\"--boolean\">>, <<\"true\">>,\n\t\t<<\"--boolean\">>, <<\"false\">>,\n\t\t<<\"--boolean\">>, <<\"True\">>,\n\t\t<<\"--boolean\">>, <<\"TRUE\">>,\n\t\t<<\"--boolean\">>, <<\"FALSE\">>,\n\t\t<<\"--integer\">>, <<\"-65535\">>,\n\t\t<<\"--integer\">>, <<\"-0\">>,\n\t\t<<\"--integer\">>, <<\"-65535\">>,\n\t\t<<\"--integer.pos\">>, <<\"0\">>,\n\t\t<<\"--integer.pos\">>, <<\"65535\">>,\n\n\t\t% short arguments\n\t\t<<\"-b\">>,\n\t\t<<\"-b\">>, <<\"true\">>,\n\t\t<<\"-b\">>, <<\"on\">>,\n\t\t<<\"-b\">>, <<\"off\">>,\n\t\t<<\"-b\">>, <<\"false\">>,\n\t\t<<\"-i\">>, <<\"65535\">>,\n\t\t<<\"-i\">>, <<\"0\">>,\n\t\t<<\"-i\">>, <<\"-65535\">>,\n\t\t<<\"-I\">>, <<\"0\">>,\n\t\t<<\"-I\">>, <<\"65535\">>\n\t],\n\n\t% --peer 127.0.0.1 --vdf --trusted\n\t% --storage.module 1.unpacked --enabled\n\n\tOpts = #{\n\t\tlong_arguments => LongArguments,\n\t\tshort_arguments => ShortArguments\n\t},\n\n\tct:pal(test, 1, \"parse ~p\", [Arguments]),\n\tResult = [\n\t\t{#{type => boolean},[true]},\n\t\t{#{type => boolean},[true]},\n\t\t{#{type => boolean},[false]},\n\t\t{#{type => boolean},[true]},\n\t\t{#{type => boolean},[true]},\n\t\t{#{type => boolean},[false]},\n\t\t{#{type => integer},[-65535]},\n\t\t{#{type => integer},[0]},\n\t\t{#{type => integer},[-65535]},\n\t\t{#{type => pos_integer},[0]},\n\t\t{#{type => pos_integer},[65535]},\n\t\t{#{type => boolean},[true]},\n\t\t{#{type => boolean},[true]},\n\t\t{#{type => boolean},[true]},\n\t\t{#{type => boolean},[false]},\n\t\t{#{type => boolean},[false]},\n\t\t{#{type => integer},[65535]},\n\t\t{#{type => integer},[0]},\n\t\t{#{type => integer},[-65535]},\n\t\t{#{type => pos_integer},[0]},\n\t\t{#{type => pos_integer},[65535]}\n\t],\n\t{ok, Result} = arweave_config_arguments:parse(Arguments, Opts),\n\n\tct:pal(test, 1, \"check bad arguments\"),\n\t{error, #{ reason := <<\"bad_argument\">> }} =\n\t\tarweave_config_arguments:parse([<<\"---bad-arg\">>]),\n\t{error, #{ reason := <<\"bad_argument\">> }} =\n\t\tarweave_config_arguments:parse([<<\"----bad-arg\">>]),\n\n\tct:pal(test, 1, \"check unknown argument\"),\n\t{error, #{ reason := <<\"unknown argument\">> }} =\n\t\tarweave_config_arguments:parse([<<\"--unknown\">>]),\n\n\tct:pal(test, 1, \"check missing value\"),\n\t{error, #{ reason := <<\"missing value\">> }} =\n\t\tarweave_config_arguments:parse([<<\"--data.directory\">>]),\n\n\t{comment, \"arguments parser tested\"}.\n\n"
  },
  {
    "path": "apps/arweave_config/test/arweave_config_arguments_legacy_SUITE.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @copyright 2026 (c) Arweave\n%%% @doc arweave configuration legacy parser test suite.\n%%% @end\n%%%===================================================================\n-module(arweave_config_arguments_legacy_SUITE).\n-export([suite/0, description/0]).\n-export([init_per_suite/1, end_per_suite/1]).\n-export([init_per_testcase/2, end_per_testcase/2]).\n-export([all/0]).\n-export([\n\tdefault/1,\n\tparser/1\n]).\n-include(\"arweave_config.hrl\").\n-include_lib(\"common_test/include/ct.hrl\").\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nsuite() -> [{userdata, [description()]}].\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ndescription() -> {description, \"arweave config cli arguments interface\"}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit_per_suite(Config) -> Config.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nend_per_suite(_Config) -> ok.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit_per_testcase(_TestCase, _Config) ->\n\tct:pal(info, 1, \"start arweave_config\"),\n\tok = arweave_config:start(),\n\t[].\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nend_per_testcase(_TestCase, _Config) ->\n\tct:pal(info, 1, \"stop arweave_config\"),\n\tok = arweave_config:stop().\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nall() ->\n\t[\n\t\tdefault,\n\t\tparser\n\t].\n\n%%--------------------------------------------------------------------\n%% @doc test `arweave_config_arguments_legacy' main interface.\n%% @end\n%%--------------------------------------------------------------------\ndefault(_Config) ->\n\tct:pal(test, 1, \"check if arweave_config_arguments is alive\"),\n\ttrue = is_process_alive(whereis(arweave_config_arguments_legacy)),\n\n\tct:pal(test, 1, \"send an unsupported message to the process\"),\n\tok = gen_server:call(arweave_config_arguments_legacy, '@random_test', 1000),\n\tok = gen_server:cast(arweave_config_arguments_legacy, '@random_test'),\n\t_ = erlang:send(arweave_config_arguments_legacy, '@random_test'),\n\n\tct:pal(test, 1, \"check the default configuration\"),\n\t#config{} = arweave_config_arguments_legacy:get(),\n\n\tct:pal(test, 1, \"by default, no arguments are present\"),\n\t[] = arweave_config_arguments_legacy:get_args(),\n\n\tct:pal(test, 1, \"set debug and init arguments\"),\n\t{ok, _} =\n\t\tarweave_config_arguments_legacy:set([\n\t\t\t\"debug\",\n\t\t\t\"init\"\n\t\t]),\n\n\tct:pal(test, 1, \"raw arguments are 
returned\"),\n\t[\"debug\", \"init\"] =\n\t\tarweave_config_arguments_legacy:get_args(),\n\n\tct:pal(test, 1, \"the configuration has been set\"),\n\t#config{\n\t\tdebug = true,\n\t\tinit = true\n\t} = arweave_config_arguments_legacy:get(),\n\n\tct:pal(test, 1, \"load into arweave_config_legacy\"),\n\t{ok, _} = arweave_config_arguments_legacy:load(),\n\n\tct:pal(test, 1, \"check arweave_config_legacy result\"),\n\t#config{\n\t\tdebug = true,\n\t\tinit = true\n\t} = arweave_config_legacy:get(),\n\n\tct:pal(test, 1, \"check arweave_config result\"),\n\t{ok, true} = arweave_config:get([debug]),\n\n\t{comment, \"arguments process tested\"}.\n\n%%--------------------------------------------------------------------\n%% @doc test `arweave_config_arguments_legacy' parser interface.\n%% @end\n%%--------------------------------------------------------------------\nparser(_Config) ->\n\tct:pal(test, 1, \"check parser\"),\n\t{ok, _} = arweave_config_arguments_legacy:parse([\"debug\"]),\n\t{error, _} = arweave_config_arguments_legacy:parse([\"wrong_arg\"]),\n\t{comment, \"arguments parser tested\"}.\n"
  },
  {
    "path": "apps/arweave_config/test/arweave_config_bootstrap_SUITE.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @copyright 2026 (c) Arweave\n%%% @doc arweave configuration legacy parser test suite.\n%%% @end\n%%%===================================================================\n-module(arweave_config_bootstrap_SUITE).\n-export([suite/0, description/0]).\n-export([init_per_suite/1, end_per_suite/1]).\n-export([init_per_testcase/2, end_per_testcase/2]).\n-export([all/0]).\n-export([\n\tlegacy_mode/1,\n\tnew_mode/1\n]).\n-include(\"arweave_config.hrl\").\n-include_lib(\"common_test/include/ct.hrl\").\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nsuite() -> [{userdata, [description()]}].\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ndescription() -> {description, \"arweave config parameters bootstrap\"}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit_per_suite(Config) -> Config.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nend_per_suite(_Config) -> ok.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit_per_testcase(new_mode, _Config) ->\n\tct:pal(info, 1, \"start arweave_config\"),\n\tok = arweave_config:start(),\n\tct:pal(test, 1, \"set AR_CONFIG_MODE environment\"),\n\tos:putenv(\"AR_CONFIG_MODE\", \"new\"),\n\t[];\ninit_per_testcase(_TestCase, _Config) ->\n\tct:pal(info, 1, \"start arweave_config\"),\n\tok = arweave_config:start(),\n\tct:pal(test, 1, \"ensure ARWEAVE_CONFIG_MODE is unset\"),\n\tos:putenv(\"AR_CONFIG_MODE\", \"\"),\n\t[].\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nend_per_testcase(new_mode, _Config) ->\n\tct:pal(info, 1, \"stop arweave_config\"),\n\tok = arweave_config:stop(),\n\tct:pal(test, 1, \"reset AR_CONFIG_MODE environment\"),\n\tos:putenv(\"AR_CONFIG_MODE\", \"\");\nend_per_testcase(_TestCase, _Config) ->\n\tct:pal(info, 1, \"stop arweave_config\"),\n\tok = arweave_config:stop().\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nall() ->\n\t[\n\t\tlegacy_mode,\n\t\tnew_mode\n\t].\n\n%%--------------------------------------------------------------------\n%% @doc complete legacy mode test.\n%% @end\n%%--------------------------------------------------------------------\nlegacy_mode(_Config) ->\n\tct:pal(test, 1, \"legacy arguments must succeed\"),\n\t{ok, Valid} = arweave_config_bootstrap:start([\"init\"]),\n\t% true = arweave_config:get([]),\n\t#config{ init = true } = Valid,\n\n\t{comment, \"\"}.\n\n%%--------------------------------------------------------------------\n%% @doc complete new mode test.\n%% 
@end\n%%--------------------------------------------------------------------\nnew_mode(_Config) ->\n\tct:pal(test, 1, \"new arguments must succeed (debug true)\"),\n\t{ok, Valid1} = arweave_config_bootstrap:start([\"--debug\"]),\n\t{ok, true} = arweave_config:get([debug]),\n\t#config{ debug = true } = Valid1,\n\n\tct:pal(test, 1, \"new arguments must succeed (debug false)\"),\n\t{ok, Valid2} = arweave_config_bootstrap:start([\"--debug\", \"false\"]),\n\t{ok, false} = arweave_config:get([debug]),\n\t#config{ debug = false } = Valid2,\n\n\tct:pal(test, 1, \"legacy arguments must fail\"),\n\t{error, _Invalid} = arweave_config_bootstrap:start([\"debug\"]),\n\n\t{comment, \"\"}.\n"
  },
  {
    "path": "apps/arweave_config/test/arweave_config_environment_SUITE.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @copyright 2025 (c) Arweave\n%%% @doc\n%%% @end\n%%%===================================================================\n-module(arweave_config_environment_SUITE).\n-export([suite/0, description/0]).\n-export([init_per_suite/1, end_per_suite/1]).\n-export([init_per_testcase/2, end_per_testcase/2]).\n-export([all/0]).\n-export([default/1]).\n-include(\"arweave_config.hrl\").\n-include_lib(\"common_test/include/ct.hrl\").\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nsuite() -> [{userdata, [description()]}].\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ndescription() -> {description, \"arweave config environment interface\"}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit_per_suite(Config) -> Config.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nend_per_suite(_Config) -> ok.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit_per_testcase(_TestCase, Config) ->\n\tset_environment(),\n\tct:pal(info, 1, \"start arweave_config\"),\n\tok = arweave_config:start(),\n\t[{environment, environment()}|Config].\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nend_per_testcase(_TestCase, _Config) ->\n\tct:pal(info, 1, \"stop arweave_config\"),\n\tok = arweave_config:stop(),\n\tunset_environment().\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nall() ->\n\t[ default ].\n\n%%--------------------------------------------------------------------\n%% @doc test `arweave_config_environment' main interface.\n%% @end\n%%--------------------------------------------------------------------\ndefault(Config) ->\n\tEnvironment = proplists:get_value(environment, Config),\n\n\tct:pal(test, 1, \"ensure arweave_config_environment is started\"),\n\ttrue = is_process_alive(whereis(arweave_config_environment)),\n\n\tct:pal(test, 1, \"send an unsupported message to the process\"),\n\tok = gen_server:call(arweave_config_environment, '@random_test', 1000),\n\tok = gen_server:cast(arweave_config_environment, '@random_test'),\n\t_ = erlang:send(arweave_config_environment, '@random_test'),\n\n\tct:pal(test, 1, \"reset arweave_config_environment\"),\n\tarweave_config_environment:reset(),\n\n\tct:pal(test, 1, \"retrieve environment from arweave_config_environment\"),\n\tArweaveEnvironment = arweave_config_environment:get(),\n\n\tct:pal(test, 1, \"check if variables have been configured\"),\n\t[\n\t \tbegin\n\t\t\tct:pal(test, 1, \"found: ~p\", [{K,V}]),\n\t\t\tBK = list_to_binary(K),\n\t\t\tVE = 
proplists:get_value(BK, ArweaveEnvironment),\n\t\t\tct:pal(test, 1, \"~p\", [{BK,VE}]),\n\t\t\tVE = list_to_binary(V)\n\t\tend\n\t\t|| {K, V} <- Environment\n\t],\n\n\tct:pal(test, 1, \"check all variables one by one\"),\n\t[\n\t\tbegin\n\t\t\tct:pal(test, 1, \"check: ~p\", [{K,V}]),\n\t\t\tBK = list_to_binary(K),\n\t\t\t{ok, VE} = arweave_config_environment:get(BK),\n\t\t\tVE = list_to_binary(V)\n\t\tend\n\t\t|| {K, V} <- Environment\n\t],\n\n\tct:pal(test, 1, \"check current value of debug parameter\"),\n\t{ok, false} = arweave_config:get([debug]),\n\t#config{ debug = false } = arweave_config_legacy:get(),\n\n\t% load the environment, it will lookup in the environment list\n\t% and set the value, in our case, AR_DEBUG should be\n\t% configured and set to true instead of false.\n\tct:pal(test, 1, \"load the environment\"),\n\tok = arweave_config_environment:load(),\n\n\tct:pal(test, 1, \"check [debug] parameter.\"),\n\t{ok, true} = arweave_config:get([debug]),\n\n\tct:pal(test, 1, \"check [debug] parameter (legacy).\"),\n\t#config{ debug = true } = arweave_config_legacy:get(),\n\n\tct:pal(test, 1, \"check unconfigured environment variable\"),\n\t{error, not_found} =\n\t\tarweave_config_environment:get(<<\"UNKNOWN_VARIABLE\">>),\n\n\t{comment, \"environment feature tested\"}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nenvironment() ->\n\t[\n\t\t{\"AR_TEST_ENVIRONMENT_VARIABLE\", \"test\"},\n\t\t{\"AR_DEBUG\", \"true\"}\n\t].\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nset_environment() ->\n\t[\n\t\tbegin\n\t\t\tct:pal(test, 1, \"prepare: set ~p=~p\",[K,V]),\n\t\t\tos:putenv(K,V),\n\t\t\tV = os:getenv(K)\n\t\tend\n\t\t|| {K,V} <- environment()\n\t].\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nunset_environment() ->\n\t[\n\t\tbegin\n\t\t\tct:pal(test, 1, \"cleanup: unset ~p\", [K]),\n\t\t\tos:unsetenv(K)\n\t\tend\n\t\t|| {K,_} <- environment()\n\t].\n"
  },
  {
    "path": "apps/arweave_config/test/arweave_config_file_SUITE.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2026 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @doc Arweave Config File Support Test Suite.\n%%% @end\n%%%===================================================================\n-module(arweave_config_file_SUITE).\n-export([suite/0, description/0]).\n-export([init_per_suite/1, end_per_suite/1]).\n-export([init_per_testcase/2, end_per_testcase/2]).\n-export([all/0]).\n-export([\n\tdefault/1,\n\tload/1,\n\tjson/1,\n\tyaml/1,\n\ttoml/1,\n\tlegacy/1\n]).\n-include(\"arweave_config.hrl\").\n-include_lib(\"common_test/include/ct.hrl\").\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nsuite() -> [{userdata, [description()]}].\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ndescription() -> {description, \"\"}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit_per_suite(Config) -> Config.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nend_per_suite(_Config) -> ok.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit_per_testcase(_TestCase, Config) ->\n\tct:pal(info, 1, \"start arweave_config\"),\n\tok = arweave_config:start(),\n\tConfig.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nend_per_testcase(_TestCase, _Config) ->\n\tct:pal(info, 1, \"stop arweave_config\"),\n\tok = arweave_config:stop().\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nall() ->\n\t[\n\t\tdefault,\n\t\tload,\n\t\tjson,\n\t\ttoml,\n\t\tyaml,\n\t\tlegacy\n\t].\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ndefault(Config) ->\n\tDataDir = proplists:get_value(data_dir, Config),\n\n\tct:pal(test, 1, \"Check post-initialization value\"),\n\t#{} = arweave_config_file:get(),\n\t[] = arweave_config_file:get_paths(),\n\n\tct:pal(test, 1, \"Check presence of non loaded file\"),\n\t{error, not_found} =\n\t\tarweave_config_file:get_by_path(\"/tmp2/not_present.json\"),\n\n\tct:pal(test, 1, \"check unsupported file\"),\n\t{error, _} = arweave_config_file:add(\n\t\tfilename:join(DataDir, \"config_unsupported.xml\")\n\t),\n\n\t{ok, JsonPath} = valid_format_test([\n\t\t{format, \"json\"},\n\t\t{filename, \"config_valid.json\"}\n\t|Config]),\n\n\t{ok, TomlPath} = valid_format_test([\n\t\t{format, \"toml\"},\n\t\t{filename, \"config_valid.toml\"}\n\t|Config]),\n\n\t{ok, YamlPath} = valid_format_test([\n\t\t{format, \"yaml\"},\n\t\t{filename, \"config_valid.yaml\"}\n\t|Config]),\n\n\t% @todo legacy\n\t% LegacyPath = 
filename:join(DataDir, \"config_valid.ljson\"),\n\t% format_test([\n\t% \t{format, \"legacy\"},\n\t% \t{filename, \"config_valid.ljson\"},\n\t% \t{path, LegacyPath}\n\t% |Config]),\n\n\tct:pal(test, 1, \"merge toml, json and yaml files together\"),\n\t{ok, _} = arweave_config_file:add(JsonPath),\n\t{ok, _} = arweave_config_file:add(TomlPath),\n\t{ok, _} = arweave_config_file:add(YamlPath),\n\n\tct:pal(test, 1, \"check merged configuration\"),\n\t_ = arweave_config_file:get(),\n\n\t% it should return the list of files loaded in a list, sorted\n\t% in alphabetical order.\n\tct:pal(test, 1, \"check if the paths have been added\"),\n\tMergedPaths = arweave_config_file:get_paths(),\n\ttrue = search_path(JsonPath, MergedPaths),\n\ttrue = search_path(TomlPath, MergedPaths),\n\ttrue = search_path(YamlPath, MergedPaths),\n\n\t% finally, reset the configuration\n\tct:pal(test, 1, \"reset arweave_config_file state\"),\n\tok = arweave_config_file:reset(),\n\n\t{error, _InvalidJsonPath} = invalid_format_test([\n\t\t{format, \"json\"},\n\t\t{filename, \"config_invalid.json\"}\n\t|Config]),\n\n\t{error, _InvalidYamlPath} = invalid_format_test([\n\t\t{format, \"yaml\"},\n\t\t{filename, \"config_invalid.yaml\"}\n\t|Config]),\n\n\t{error, _InvalidTomlPath} = invalid_format_test([\n\t\t{format, \"toml\"},\n\t\t{filename, \"config_invalid.toml\"}\n\t|Config]),\n\n\tok = arweave_config_file:load(),\n\n\t% check unsupported call, the process should not crash.\n\tok = erlang:send(arweave_config_file, ok),\n\tok = gen_server:cast(arweave_config_file, ok),\n\tok = gen_server:call(arweave_config_file, unsupported, 1000),\n\n\t{comment, \"tested arweave config file worker\"}.\n\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc test valid common pattern for all format.\n%% @end\n%%--------------------------------------------------------------------\nvalid_format_test(Config) ->\n\tDataDir = proplists:get_value(data_dir, Config),\n\tFormat = proplists:get_value(format, Config),\n\tFilename = proplists:get_value(filename, Config),\n\tPath = filename:join(DataDir, Filename),\n\n\tct:pal(test, 1, \"~p: Add ~p file\", [Format, Filename]),\n\t{ok, _} = arweave_config_file:add(Path),\n\n\tct:pal(test, 1, \"~p: check if the file has been added\", [Format]),\n\tPaths = arweave_config_file:get_paths(),\n\tct:pal(test, 1, \"~p\", [Paths]),\n\ttrue = search_path(Path, Paths),\n\n\tct:pal(test, 1, \"~p: check if the file has been parsed\", [Format]),\n\tMerged = arweave_config_file:get(),\n\ttrue = 0 < map_size(Merged),\n\n\tct:pal(test, 1, \"~p: retrieve the configuration (~p)\", [Format, Path]),\n\t{ok, {_Timestamp, _Config}} = arweave_config_file:get_by_path(Path),\n\n\tct:pal(test, 1, \"~p: load ~p\", [Format, Path]),\n\tok = arweave_config_file:load(Path),\n\t{ok, true} = arweave_config:get([debug]),\n\n\tct:pal(test, 1, \"~p: load merged configuration\", [Format]),\n\tok = arweave_config_file:load(),\n\t{ok, true} = arweave_config:get([debug]),\n\n\tct:pal(test, 1, \"reset arweave_config_file state\"),\n\tok = arweave_config_file:reset(),\n\n\t% return the full path\n\t{ok, Path}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc test invalid common pattern for all format.\n%% @end\n%%--------------------------------------------------------------------\ninvalid_format_test(Config) ->\n\tDataDir = proplists:get_value(data_dir, Config),\n\tFormat = proplists:get_value(format, Config),\n\tFilename = proplists:get_value(filename, 
Config),\n\tPath = filename:join(DataDir, Filename),\n\tct:pal(test, 1, \"~p: load invalid file ~p (~p)\", [Format, Filename, Path]),\n\t{error, _} = arweave_config_file:add(Path),\n\n\tct:pal(test, 1, \"~p: ensure the file ~p was not loaded\", [Format, Filename]),\n\tPaths = arweave_config_file:get_paths(),\n\tfalse = search_path(Path, Paths),\n\n\tct:pal(test, 1, \"reset arweave_config_file state\"),\n\tok = arweave_config_file:reset(),\n\t{error, Path}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nload(_Config) ->\n\tok.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc test json file support.\n%% @end\n%%--------------------------------------------------------------------\njson(Config) ->\n\t{error, _} = file_bad_name([{extension, \".jsn\"}|Config]),\n\tfile_checks([{extension, \".json\"}|Config]),\n\t{comment, \"json file format tested\"}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc test toml file support.\n%% @end\n%%--------------------------------------------------------------------\ntoml(Config) ->\n\t{error, _} = file_bad_name([{extension, \".tml\"}|Config]),\n\tfile_checks([{extension, \".toml\"}|Config]),\n\t{comment, \"toml file format tested\"}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc test yaml file support.\n%% @end\n%%--------------------------------------------------------------------\nyaml(Config) ->\n\t{error, _} = file_bad_name([{extension, \".yml\"}|Config]),\n\tfile_checks([{extension, \".yaml\"}|Config]),\n\t{comment, \"yaml file format tested\"}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc test legacy file support.\n%% @end\n%%--------------------------------------------------------------------\nlegacy(Config) ->\n\t{error, _} = file_bad_name([{extension, \".lson\"}|Config]),\n\tfile_checks([{extension, \".ljson\"}|Config]),\n\t{comment, \"legacy file format tested\"}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc check all common file test.\n%% @end\n%%--------------------------------------------------------------------\nfile_checks(Config) ->\n\t{error, _} = file_bad_format(Config),\n\t{ok, _} = file_empty(Config),\n\t{error, _} = file_norights(Config),\n\t{ok, _} = file_read(Config),\n\t{ok, _} = file_readwrite(Config),\n\t{error, _} = file_unsafe_path(Config),\n\t{ok, _} = file_relative_path(Config),\n\tok.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc create a regular file containing bad data.\n%% @end\n%%--------------------------------------------------------------------\nfile_bad_format(Config) ->\n\tModule = proplists:get_value(module, Config),\n\tPrivDir = proplists:get_value(priv_dir, Config),\n\tExtension = proplists:get_value(extension, Config),\n\tFilename = string:join([\"bad_format\", Extension], \"\"),\n\tData = proplists:get_value(data, Config, \"test::data::bad\"),\n\tPath = filename:join(PrivDir, Filename),\n\tct:pal(test, 1, \"create file ~p\", [Path]),\n\tfile:write_file(Path, Data),\n\tarweave_config_file:parse(Path).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc creates a regular file containing a bad name.\n%% 
@end\n%%--------------------------------------------------------------------\nfile_bad_name(Config) ->\n\tModule = proplists:get_value(module, Config),\n\tPrivDir = proplists:get_value(priv_dir, Config),\n\tExtension = proplists:get_value(extension, Config),\n\tFilename = string:join([\"bad_name\", Extension], \"\"),\n\tPath = filename:join(PrivDir, Filename),\n\tct:pal(test, 1, \"create file ~p\", [Path]),\n\tfile:write_file(Path, \"\"),\n\tarweave_config_file:parse(Path).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc creates an empty file.\n%% @end\n%%--------------------------------------------------------------------\nfile_empty(Config) ->\n\tModule = proplists:get_value(module, Config),\n\tPrivDir = proplists:get_value(priv_dir, Config),\n\tExtension = proplists:get_value(extension, Config),\n\tFilename = string:join([\"empty\", Extension], \"\"),\n\tPath = filename:join(PrivDir, Filename),\n\tct:pal(test, 1, \"create file ~p\", [Path]),\n\tfile:write_file(Path, \"\"),\n\tarweave_config_file:parse(Path).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc creates an empty file with no rights.\n%% @end\n%%--------------------------------------------------------------------\nfile_norights(Config) ->\n\tModule = proplists:get_value(module, Config),\n\tPrivDir = proplists:get_value(priv_dir, Config),\n\tExtension = proplists:get_value(extension, Config),\n\tFilename = string:join([\"norights\", Extension], \"\"),\n\tPath = filename:join(PrivDir, Filename),\n\tct:pal(test, 1, \"create file ~p\", [Path]),\n\tfile:write_file(Path, \"\"),\n\tfile:change_mode(Path, 8#000),\n\tReturn = arweave_config_file:parse(Path),\n\tfile:change_mode(Path, 8#600),\n\tReturn.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc creates an empty file in read-only.\n%% @end\n%%--------------------------------------------------------------------\nfile_read(Config) ->\n\tModule = proplists:get_value(module, Config),\n\tPrivDir = proplists:get_value(priv_dir, Config),\n\tExtension = proplists:get_value(extension, Config),\n\tFilename = string:join([\"read\", Extension], \"\"),\n\tPath = filename:join(PrivDir, Filename),\n\tct:pal(test, 1, \"create file ~p\", [Path]),\n\tfile:write_file(Path, \"\"),\n\tfile:change_mode(Path, 8#400),\n\tarweave_config_file:parse(Path).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc creates an empty file in read/write.\n%% @end\n%%--------------------------------------------------------------------\nfile_readwrite(Config) ->\n\tModule = proplists:get_value(module, Config),\n\tPrivDir = proplists:get_value(priv_dir, Config),\n\tExtension = proplists:get_value(extension, Config),\n\tFilename = string:join([\"readwrite\", Extension], \"\"),\n\tPath = filename:join(PrivDir, Filename),\n\tct:pal(test, 1, \"create file ~p\", [Path]),\n\tfile:write_file(Path, \"\"),\n\tfile:change_mode(Path, 8#600),\n\tarweave_config_file:parse(Path).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc check unsafe page (containing \"../\").\n%% @end\n%%--------------------------------------------------------------------\nfile_unsafe_path(Config) ->\n\tModule = proplists:get_value(module, Config),\n\tPrivDir = proplists:get_value(priv_dir, Config),\n\tExtension = proplists:get_value(extension, Config),\n\tFilename = string:join([\"unsafe_path\", Extension], \"\"),\n\tPath = 
filename:join([\"../..\", Filename]),\n\tct:pal(test, 1, \"check unsafe path ~p\", [Path]),\n\tarweave_config_file:parse(Path).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc check safe relative path.\n%% @end\n%%--------------------------------------------------------------------\nfile_relative_path(Config) ->\n\tModule = proplists:get_value(module, Config),\n\tPrivDir = proplists:get_value(priv_dir, Config),\n\tExtension = proplists:get_value(extension, Config),\n\tFilename = string:join([\"relative_path\", Extension], \"\"),\n\t{ok, Cwd} = file:get_cwd(),\n\tRelative =\n\t\tcase PrivDir -- Cwd of\n\t\t\t[$/|R] -> R;\n\t\t\t[$.,$/|R] -> R;\n\t\t\tE -> E\n\t\tend,\n\tPath = filename:join([\"./\", Relative, Filename]),\n\tct:pal(test, 1, \"check relative path ~p\", [Path]),\n\tfile:write_file(Path, \"\"),\n\tfile:change_mode(Path, 8#644),\n\tarweave_config_file:parse(Path).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc search a term in a list.\n%% @see lists:search/2\n%% @end\n%%--------------------------------------------------------------------\nsearch_path(Path, List) when is_list(Path) ->\n\tsearch_path(list_to_binary(Path), List);\nsearch_path(Path, List) ->\n\tFun = fun\n\t\t(P) when P =:= Path -> true;\n\t\t(_) -> false\n\tend,\n\tcase lists:search(Fun, List) of\n\t\t{value, _} -> true;\n\t\t_ -> false\n\tend.\n"
  },
  {
    "path": "apps/arweave_config/test/arweave_config_file_SUITE_data/config_invalid.json",
    "content": "--\n"
  },
  {
    "path": "apps/arweave_config/test/arweave_config_file_SUITE_data/config_invalid.toml",
    "content": "~~[]\n"
  },
  {
    "path": "apps/arweave_config/test/arweave_config_file_SUITE_data/config_invalid.yaml",
    "content": "--[\n"
  },
  {
    "path": "apps/arweave_config/test/arweave_config_file_SUITE_data/config_unsupported.xml",
    "content": "<config>\n</config>\n"
  },
  {
    "path": "apps/arweave_config/test/arweave_config_file_SUITE_data/config_valid.json",
    "content": "{\n\t\"debug\": true,\n\t\"data\": {\n\t\t\"directory\": \"/tmp/data\"\n\t}\n}\n"
  },
  {
    "path": "apps/arweave_config/test/arweave_config_file_SUITE_data/config_valid.toml",
    "content": "# global parameters\ndebug = true\ndata.directory = \"/tmp/data\"\n\n# local http api server\n[config.http.api]\nenabled = true\nlisten.port = 4891\nlisten.address = \"127.0.0.1\"\n\n# default logging parameter, inherited by other logging parameters.\n[logging]\nmax_no_bytes = 51418800\ncompress_on_rotate = false\nsync_mode_qlen = 10\ndrop_mode_qlen = 200\nflush_qlen = 1000\nburst_limit_enable = true\nburst_limit_max_count = 500\nburst_limit_window_time = 1000\noverload_kill_enable = true\noverload_kill_qlen = 20_000\noverload_kill_mem_size = 3_000_000\n\n# debug logging handlers parameters\n[logging.handlers.debug]\n\n# http api logging handlers parameters\n[logging.handlers.http.api]\n\n######################################################################\n# draft: features\n######################################################################\n# [features.feature_1]\n# enabled = true\n# \n# [features.feature_2]\n# enabled = false\n\n######################################################################\n# draft: network\n######################################################################\n\n######################################################################\n# draft: rate limiter \n######################################################################\n\n######################################################################\n# draft: peers\n# Could it be helpful to create some helpers to group easily vdf or\n# trusted peers?\n#   peers.vdf = [...]\n#   peers.trusted = [...]\n######################################################################\n# [peers.default]\n# ...\n#\n# [peers.\"127.0.0.1:1984\"]\n# vdf = true\n# ...\n#\n# [peers.\"127.0.0.2:1984\"]\n# trusted = true\n# ...\n#\n# [peers.\"mypeers.arweave.xyz\"]\n# ...\n#\n\n######################################################################\n# draft: storage modules\n# format: [storage.{type}.{index}]\n######################################################################\n# [storage.default]\n# address = \"LKC84RnISouGUw4uMQGCpPS9yDC-tIoqM2UVbUIt-Sw\"\n#\n# [storage.unpacked.0]\n# enabled = true\n# path = ...\n# size = ...\n# \n# [storage.spora_2_6.10]\n# enabled = true\n# address = \"LKC84RnISouGUw4uMQGCpPS9yDC-tIoqM2UVbUIt-Sw\"\n#\n# [storage.replica_2_9.0]\n# enabled = true\n# address = address_value\n# repack_in_place = true\n#\n# [storage.s\n#\n"
  },
  {
    "path": "apps/arweave_config/test/arweave_config_file_SUITE_data/config_valid.yaml",
    "content": "# global parameters\ndebug: true\ndata:\n  directory: \"/tmp/data\"\n\n# local http api server\nconfig:\n  http:\n    api:\n      enabled: true\n      listen:\n        port: 4891\n        address: \"127.0.0.1\"\n\n# default logging parameter, inherited by other logging parameters.\nlogging:\n  max_no_bytes: 51418800\n  compress_on_rotate: false\n  sync_mode_qlen: 10\n  drop_mode_qlen: 200\n  flush_qlen: 1000\n  burst_limit_enable: true\n  burst_limit_max_count: 500\n  burst_limit_window_time: 1000\n  overload_kill_enable: true\n  overload_kill_qlen: 20000\n  overload_kill_mem_size: 3000000\n"
  },
  {
    "path": "apps/arweave_config/test/arweave_config_format_SUITE.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2026 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @doc Arweave Config Format Test Suite.\n%%% @end\n%%%===================================================================\n-module(arweave_config_format_SUITE).\n-export([suite/0, description/0]).\n-export([init_per_suite/1, end_per_suite/1]).\n-export([init_per_testcase/2, end_per_testcase/2]).\n-export([all/0]).\n-export([\n\tjson/1,\n\ttoml/1,\n\tyaml/1,\n\tlegacy/1\n]).\n-include(\"arweave_config.hrl\").\n-include_lib(\"common_test/include/ct.hrl\").\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nsuite() -> [{userdata, [description()]}].\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ndescription() -> {description, \"arweave_config format\"}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit_per_suite(Config) -> Config.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nend_per_suite(_Config) -> ok.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit_per_testcase(_TestCase, Config) ->\n\tct:pal(info, 1, \"start arweave_config\"),\n\tok = arweave_config:start(),\n\tConfig.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nend_per_testcase(_TestCase, _Config) ->\n\tct:pal(info, 1, \"stop arweave_config\"),\n\tok = arweave_config:stop().\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nall() ->\n\t[\n\t\tjson,\n\t\ttoml,\n\t\tyaml,\n\t\tlegacy\n\t].\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\njson(_Config) ->\n\t{ok, #{}} = arweave_config_format_json:parse(\"\"),\n\t{ok, #{}} = arweave_config_format_json:parse(<<\"\">>),\n\t{ok, #{}} = arweave_config_format_json:parse(<<\"{}\">>),\n\t{ok, #{}} = arweave_config_format_json:parse(\"{}\"),\n\t{error, _} = arweave_config_format_json:parse(\"--\"),\n\t{comment, \"tested json format\"}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ntoml(_Config) ->\n\t{ok, #{}} = arweave_config_format_toml:parse(\"\"),\n\t{ok, #{}} = arweave_config_format_toml:parse(<<\"\">>),\n\t{ok, #{}} = arweave_config_format_toml:parse(<<\"test = 1\">>),\n\t{ok, #{}} = arweave_config_format_toml:parse(\"test = 1\"),\n\t{error, _} = arweave_config_format_toml:parse(\"--[]\"),\n\t{comment, \"tested toml format\"}.\n\n%%--------------------------------------------------------------------\n%% 
@hidden\n%%--------------------------------------------------------------------\nyaml(_Config) ->\n\t{ok, #{}} = arweave_config_format_yaml:parse(\"\"),\n\t{ok, #{}} = arweave_config_format_yaml:parse(<<\"\">>),\n\t{ok, _} = arweave_config_format_yaml:parse(<<\"test: 1\\n\">>),\n\t{ok, _} = arweave_config_format_yaml:parse(\"test: 1\\n\"),\n\t{error, _} = arweave_config_format_yaml:parse(\"--[]\"),\n\t{error, _} = arweave_config_format_yaml:parse(<<\"---\\n---\\n\">>),\n\t{comment, \"tested yaml format\"}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nlegacy(_Config) ->\n\t{ok, _} = arweave_config_format_legacy:parse(\"\"),\n\t{ok, _} = arweave_config_format_legacy:parse(<<\"\">>),\n\t{ok, _} = arweave_config_format_legacy:parse(<<\"{}\">>),\n\t{ok, _} = arweave_config_format_legacy:parse(\"{}\"),\n\t{error, _} = arweave_config_format_legacy:parse(\"--\"),\n\t{comment, \"tested legacy format\"}.\n"
  },
  {
    "path": "apps/arweave_config/test/arweave_config_fsm_SUITE.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @copyright 2025 (c) Arweave\n%%% @doc Arweave Configuration Finite State Machine Test Suite.\n%%% @end\n%%%===================================================================\n-module(arweave_config_fsm_SUITE).\n-compile(warnings_as_errors).\n-export([suite/0, description/0]).\n-export([init_per_suite/1, end_per_suite/1]).\n-export([init_per_testcase/2, end_per_testcase/2]).\n-export([all/0]).\n-export([default/1]).\n-export([\n\tfsm_callback_ok/1,\n\tfsm_callback_ok_state/1,\n\tfsm_callback_function_transition_state/1,\n\tfsm_callback_module_transition_state/1,\n\tfsm_callback_error/1,\n\tfsm_callback_error_state/1,\n\tfsm_callback_error_wildcard/1,\n\tfsm_callback_meta/1,\n\tfsm_callback_ok_meta/1,\n\tfsm_callback_ok_meta_state/1\n]).\n\n-include(\"arweave_config.hrl\").\n-include_lib(\"common_test/include/ct.hrl\").\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nsuite() -> [{userdata, [description()]}].\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ndescription() -> {description, \"arweave_config_fsm test suite\"}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit_per_suite(Config) -> Config.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nend_per_suite(_Config) -> ok.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit_per_testcase(_TestCase, Config) ->\n\tConfig.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nend_per_testcase(_TestCase, _Config) ->\n\tok.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nall() ->\n\t[\n\t\tdefault\n\t].\n\n%%--------------------------------------------------------------------\n%% @doc\n%% @end\n%%--------------------------------------------------------------------\ndefault(_Config) ->\n\tct:pal(test, 1, \"set debug\"),\n\tlogger:set_module_level(arweave_config_fsm, debug),\n\n\tct:pal(test, 1, \"check return without state\"),\n\t{ok, value} =\n\t\tarweave_config_fsm:init(\n\t\t\t?MODULE,\n\t\t\tfsm_callback_ok,\n\t\t\tstate\n\t\t),\n\n\tct:pal(test, 1, \"check return with state\"),\n\t{ok, value, state} =\n\t\tarweave_config_fsm:init(\n\t\t\t?MODULE,\n\t\t\tfsm_callback_ok_state,\n\t\t\tstate\n\t\t),\n\n\tct:pal(test, 1, \"check function transition with state\"),\n\t{ok, value, #{ state := test, return := ok }} =\n\t\tarweave_config_fsm:init(\n\t\t\t?MODULE,\n\t\t\tfsm_callback_function_transition_state,\n\t\t\t#{ state => test }\n\t\t),\n\n\tct:pal(test, 1, \"check module/function transition with state\"),\n\t{ok, value, #{ state := 
test, return := ok }} =\n\t\tarweave_config_fsm:init(\n\t\t\t?MODULE,\n\t\t\tfsm_callback_module_transition_state,\n\t\t\t#{ state => test }\n\t\t),\n\n\tct:pal(test, 1, \"check error without state\"),\n\t{error, _} =\n\t\tarweave_config_fsm:init(\n\t\t\t?MODULE,\n\t\t\tfsm_callback_error,\n\t\t\t#{ state => test }\n\t\t),\n\n\tct:pal(test, 1, \"check error with state\"),\n\t{error, _, #{ state := test }} =\n\t\tarweave_config_fsm:init(\n\t\t\t?MODULE,\n\t\t\tfsm_callback_error_state,\n\t\t\t#{ state => test }\n\t\t),\n\n\tct:pal(test, 1, \"check evaluation error\"),\n\t{error, _} =\n\t\tarweave_config_fsm:init(\n\t\t\t?MODULE,\n\t\t\tfsm_callback_error_wildcard,\n\t\t\t#{ state => test }\n\t\t),\n\n\tct:pal(test, 1, \"check evaluation error\"),\n\t{error, _} =\n\t\tarweave_config_fsm:init(\n\t\t\t\"not_module\",\n\t\t\t<<\"bad_callback\">>,\n\t\t\t#{ state => test }\n\t\t),\n\n\tct:pal(test, 1, \"return metadata\"),\n\t{meta, Meta1} =\n\t\tarweave_config_fsm:init(\n\t\t\t?MODULE,\n\t\t\tfsm_callback_meta,\n\t\t\t#{ meta => true },\n\t\t\tstate\n\t\t),\n\t#{ counter := 1 } = Meta1,\n\t#{ history := _} = Meta1,\n\t#{ meta := true } = Meta1,\n\n\tct:pal(test, 1, \"return metadata with value\"),\n\t{ok, value, Meta2} =\n\t\tarweave_config_fsm:init(\n\t\t\t?MODULE,\n\t\t\tfsm_callback_ok_meta,\n\t\t\t#{ meta => true },\n\t\t\tstate\n\t\t),\n\t#{ counter := 1 } = Meta2,\n\t#{ history := _} = Meta2,\n\t#{ meta := true } = Meta2,\n\n\tct:pal(test, 1, \"return metadata with value and state\"),\n\t{ok, value, State3, Meta3} =\n\t\tarweave_config_fsm:init(\n\t\t\t?MODULE,\n\t\t\tfsm_callback_ok_meta_state,\n\t\t\t#{ meta => true },\n\t\t\tstate\n\t\t),\n\tstate = State3,\n\t#{ counter := 1 } = Meta3,\n\t#{ history := _} = Meta3,\n\t#{ meta := true } = Meta3,\n\n\tct:pal(test, 1, \"unset debug\"),\n\tlogger:set_module_level(arweave_config_fsm, none),\n\n\t{comment, \"tested\"}.\n\n%%--------------------------------------------------------------------\n%% @doc\n%% @end\n%%--------------------------------------------------------------------\nfsm_callback_ok(_State) ->\n\t{ok, value}.\n\n%%--------------------------------------------------------------------\n%% @doc\n%% @end\n%%--------------------------------------------------------------------\nfsm_callback_ok_state(State) ->\n\t{ok, value, State}.\n\n%%--------------------------------------------------------------------\n%% @doc\n%% @end\n%%--------------------------------------------------------------------\nfsm_callback_function_transition_state(State) ->\n\t{next, fsm_callback_ok_state, State#{ return => ok }}.\n\n%%--------------------------------------------------------------------\n%% @doc\n%% @end\n%%--------------------------------------------------------------------\nfsm_callback_module_transition_state(State) ->\n\t{next, ?MODULE, fsm_callback_ok_state, State#{ return => ok }}.\n\n%%--------------------------------------------------------------------\n%% @doc\n%% @end\n%%--------------------------------------------------------------------\nfsm_callback_error(_State) ->\n\t{error, test}.\n\n%%--------------------------------------------------------------------\n%% @doc\n%% @end\n%%--------------------------------------------------------------------\nfsm_callback_error_state(State) ->\n\t{error, test, State#{ return => error }}.\n\n%%--------------------------------------------------------------------\n%% @doc\n%% @end\n%%--------------------------------------------------------------------\nfsm_callback_error_wildcard(_State) 
->\n\t{unsupported_return}.\n\n%%--------------------------------------------------------------------\n%% @doc\n%% @end\n%%--------------------------------------------------------------------\nfsm_callback_meta(_State) ->\n\tmeta.\n\n%%--------------------------------------------------------------------\n%% @doc\n%% @end\n%%--------------------------------------------------------------------\nfsm_callback_ok_meta(_State) ->\n\t{ok, value}.\n\n%%--------------------------------------------------------------------\n%% @doc\n%% @end\n%%--------------------------------------------------------------------\nfsm_callback_ok_meta_state(State) ->\n\t{ok, value, State}.\n"
  },
  {
    "path": "apps/arweave_config/test/arweave_config_http_server_SUITE.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @copyright 2025 (c) Arweave\n%%% @doc\n%%% @end\n%%%===================================================================\n-module(arweave_config_http_server_SUITE).\n-export([suite/0, description/0]).\n-export([init_per_suite/1, end_per_suite/1]).\n-export([init_per_testcase/2, end_per_testcase/2]).\n-export([all/0]).\n-export([\n\tdefault/1,\n\tunix_socket/1\n]).\n-include(\"arweave_config.hrl\").\n-include_lib(\"common_test/include/ct.hrl\").\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nsuite() -> [{userdata, [description()]}].\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ndescription() -> {description, \"arweave_config http api interface\"}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit_per_suite(Config) ->\n\tapplication:ensure_all_started(gun),\n\tConfig.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nend_per_suite(_Config) ->\n\tapplication:stop(gun),\n\tok.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit_per_testcase(_TestCase, Config) ->\n\tct:pal(info, 1, \"start arweave_config\"),\n\tok = arweave_config:start(),\n\tConfig.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nend_per_testcase(_TestCase, _Config) ->\n\tct:pal(info, 1, \"stop arweave_config\"),\n\tok = arweave_config:stop().\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nall() ->\n\t[\n\t\tdefault,\n\t\tunix_socket\n\t].\n\n%%--------------------------------------------------------------------\n%% @doc test `arweave_config' main interface.\n%% @end\n%%--------------------------------------------------------------------\ndefault(_Config) ->\n\t% the server can be started as child (under\n\t% arweave_config_sup). The goal is to enable it on demand only\n\t% if a specific parameter or environment variable is present.\n\tct:pal(test, 1, \"start arweave config http server\"),\n\tarweave_config_http_server:start_as_child(),\n\n\t% the whole configuration can be seen using /v0/config\n\t% end-point\n\tct:pal(test, 1, \"fetch the whole configuration\"),\n\t{ok, 200, D1} = get_path(\"/v0/config\"),\n\t{true, {success, _}} = is_jsend(D1),\n\n\t% any parameters can be fetched using /v0/config/${parameter},\n\t% they are separated by '/'\n\tct:pal(test, 1, \"fetch debug parameter\"),\n\t{ok, 200, D2} = get_path(\"/v0/config/debug\"),\n\t{true, {success, false}} = is_jsend(D2),\n\n\t% parameters can be set using a POST method and following the\n\t% same pattern. 
At this time, the data sent is untyped (no\n\t% json support)\n\tct:pal(test, 1, \"set debug parameter\"),\n\t{ok, 200, D3} = post_path(\"/v0/config/debug\", <<\"true\">>),\n\t{true,\n\t\t{success, #{\n\t\t\t\t<<\"new\">> := true,\n\t\t\t\t<<\"old\">> := false\n\t\t\t}\n\t\t}\n\t} = is_jsend(D3),\n\n\t% when a parameter was set, the new value should be present.\n\tct:pal(test, 1, \"fetch debug parameter\"),\n\t{ok, 200, D4} = get_path(\"/v0/config/debug\"),\n\t{true, {success, true}} = is_jsend(D4),\n\n\t% if a bad value is given by the client, an error must be\n\t% returned, if possible with a message containing the reason.\n\tct:pal(test, 1, \"set bad value on parameter\"),\n\t{ok, 400, D5} = post_path(\"/v0/config/debug\", <<\"random\">>),\n\t{true, {error, _}} = is_jsend(D5),\n\n\t% if a parameter is not present, an error should be returned\n\t% with the reason\n\tct:pal(test, 1, \"check unknown parameter\"),\n\t{ok, 404, D6} = get_path(\"/v0/config/parameter/not/found\"),\n\t{true, {error, _}} = is_jsend(D6),\n\n\t% arweave environment should be available to the client, at\n\t% this time, all environment variables are displayed.\n\tct:pal(test, 1, \"fetch arweave config environment\"),\n\t{ok, 200, D7} = get_path(\"/v0/environment\"),\n\t{true, {success, _}} = is_jsend(D7),\n\n\tct:pal(test, 1, \"stop config http server\"),\n\tarweave_config_http_server:stop_as_child(),\n\n\t{comment, \"arweave_config_http_server tested\"}.\n\nunix_socket(_Config) ->\n\tSocketPath = filename:join(\"/tmp\", \"./arweave.sock\"),\n\tct:pal(test, 1, \"set socket to ~p\", [SocketPath]),\n\tarweave_config:set([config,http,api,listen,address], SocketPath),\n\n\tct:pal(test, 1, \"start arweave config http server\"),\n\tarweave_config_http_server:start_link(),\n\ttimer:sleep(500),\n\t{ok, _} = file:read_file_info(SocketPath),\n\n\tct:pal(test, 1, \"stop arweave config http server\"),\n\tarweave_config_http_server:stop(),\n\ttimer:sleep(500),\n\t{error, enoent} = file:read_file_info(SocketPath),\n\n\t{comment, \"unix socket feature tested\"}.\n\n%%--------------------------------------------------------------------\n%% simple http client for get request.\n%%--------------------------------------------------------------------\nget_path(Path) ->\n\tget_path(Path, #{}).\n\nget_path(Path, Opts) ->\n\tHost = maps:get(host, Opts, \"127.0.0.1\"),\n\tPort = maps:get(port, Opts, 4891),\n\t% @todo host and port should be defined as macros\n\t{ok, Pid} = gun:open(Host, Port),\n\tStreamRef = gun:get(Pid, Path),\n\tbody(Pid, StreamRef).\n\n%%--------------------------------------------------------------------\n%% simple http client for post request.\n%%--------------------------------------------------------------------\npost_path(Path, Data) ->\n\tpost_path(Path, Data, #{}).\n\npost_path(Path, Data, Opts) ->\n\tHost = maps:get(host, Opts, \"127.0.0.1\"),\n\tPort = maps:get(port, Opts, 4891),\n\t% @todo host and port should be defined as macros\n\t{ok, Pid} = gun:open(Host, Port),\n\tStreamRef = gun:post(Pid, Path, #{}, Data),\n\tbody(Pid, StreamRef).\n\n%%--------------------------------------------------------------------\n%% from https://ninenines.eu/docs/en/gun/2.1/guide/http/\n%%--------------------------------------------------------------------\nbody(ConnPid, MRef) ->\n\treceive\n\t\t{gun_response, ConnPid, StreamRef, fin, Status, Headers} ->\n\t\t\t{ok, Status, no_data};\n\t\t{gun_response, ConnPid, StreamRef, nofin, Status, Headers} 
->\n\t\t\treceive_data(\n\t\t\t\tConnPid,\n\t\t\t\tMRef,\n\t\t\t\tStatus,\n\t\t\t\tStreamRef,\n\t\t\t\t<<>>\n\t\t\t);\n\t\t{'DOWN', MRef, process, ConnPid, Reason} ->\n\t\t\t{error, Reason}\n\tafter 1000 ->\n\t\ttimeout\n\tend.\n\n%%--------------------------------------------------------------------\n%% from https://ninenines.eu/docs/en/gun/2.1/guide/http/\n%%--------------------------------------------------------------------\nreceive_data(ConnPid, MRef, Status, StreamRef, Buffer) ->\n\treceive\n\t\t{gun_data, ConnPid, StreamRef, nofin, Data} ->\n\t\t\treceive_data(\n\t\t\t\tConnPid,\n\t\t\t\tMRef,\n\t\t\t\tStatus,\n\t\t\t\tStreamRef,\n\t\t\t\t<<Buffer/binary, Data/binary>>\n\t\t\t );\n\t\t{gun_data, ConnPid, StreamRef, fin, Data} ->\n\t\t\t{ok, Status, <<Buffer/binary, Data/binary>>};\n\t\t{'DOWN', MRef, process, ConnPid, Reason} ->\n\t\t\t{error, Reason}\n\tafter 1000 ->\n\t\ttimeout\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nis_jsend(Data) ->\n\ttry\n\t\tjiffy:decode(Data, [return_maps])\n\tof\n\t\t#{\n\t\t\t<<\"status\">> := <<\"success\">>,\n\t\t\t<<\"data\">> := D\n\t\t} -> {true, {success, D}};\n\t\t#{\n\t\t\t<<\"status\">> := <<\"fail\">>,\n\t\t\t<<\"data\">> := D\n\t\t} -> {true, {fail, D}};\n\t\t#{\n\t\t\t<<\"status\">> := <<\"error\">>,\n\t\t\t<<\"message\">> := M\n\t\t} -> {true, {error, M}};\n\t\t_ -> false\n\tcatch\n\t\t_:_ -> false\n\tend.\n"
  },
  {
    "path": "apps/arweave_config/test/arweave_config_legacy_SUITE.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @copyright 2025 (c) Arweave\n%%% @doc\n%%% @end\n%%%===================================================================\n-module(arweave_config_legacy_SUITE).\n-export([suite/0, description/0]).\n-export([init_per_suite/1, end_per_suite/1]).\n-export([init_per_testcase/2, end_per_testcase/2]).\n-export([all/0]).\n-export([arweave_config_legacy/1]).\n-include(\"arweave_config.hrl\").\n-include_lib(\"common_test/include/ct.hrl\").\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nsuite() -> [{userdata, [description()]}].\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ndescription() -> {description, \"arweave_config_legacy test\"}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit_per_suite(Config) -> Config.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nend_per_suite(_Config) -> ok.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit_per_testcase(_TestCase, Config) -> \n\tct:pal(info, 1, \"start arweave_config_legacy\"),\n\t{ok, Pid} = arweave_config_legacy:start_link(),\n\t[{arweave_config_legacy, Pid}|Config].\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nend_per_testcase(_TestCase, _Config) ->\n\tct:pal(info, 1, \"stop arweave_config_legacy\"),\n\tok = arweave_config_legacy:stop().\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nall() ->\n\t[ arweave_config_legacy ].\n\n%%--------------------------------------------------------------------\n%% @doc test `arweave_config' main interface.\n%% @end\n%%--------------------------------------------------------------------\narweave_config_legacy(_Config) ->\n\tct:pal(test, 1, \"config keys should be the same\"),\n\tKeys = arweave_config_legacy:keys(),\n\tConfigKeys = record_info(fields, config),\n\tLengthKeys = length(Keys),\n\tLengthConfigKeys = length(ConfigKeys),\n\tLengthKeys = LengthConfigKeys,\n\t[ K1 = K2 || {K1, K2} <- lists:zip(Keys, ConfigKeys) ],\n\n\tct:pal(test, 1, \"check if config keys are present\"),\n\t[\n\t\ttrue = arweave_config_legacy:has_key(Key)\n\t||\n\t\tKey <- ConfigKeys\n\t],\n\n\tct:pal(test, 1, \"all values should be set with default\"),\n\t#config{} = arweave_config_legacy:get(),\n\t[\n\t\tbegin\n\t\t\t{ok, VC} = arweave_config_legacy:get_config_value(Key, #config{}),\n\t\t\tVP = arweave_config_legacy:get(Key),\n\t\t\tVC = VP\n\t\tend\n\t||\n\t\tKey <- ConfigKeys\n\t],\n\n\tct:pal(test, 1, \"set init value to true\"),\n\tarweave_config_legacy:set(init, true),\n\ttrue = 
arweave_config_legacy:get(init),\n\n\tct:pal(test, 1, \"reset the configuration to default value\"),\n\tarweave_config_legacy:reset(),\n\tfalse = arweave_config_legacy:get(init),\n\n\tct:pal(test, 1, \"check legacy application env interface\"),\n\tarweave_config_legacy:set_env(#config{ init = true }),\n\t{ok, #config{ init = true }} = arweave_config_legacy:get_env(),\n\t{ok, #config{ init = true }} = application:get_env(arweave_config, config),\n\n\t{comment, \"arweave_config_legacy tested\"}.\n"
  },
  {
    "path": "apps/arweave_config/test/arweave_config_serializer_SUITE.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @copyright 2026 (c) Arweave\n%%% @doc Arweave Configuration File Serializer Test Suite.\n%%% @end\n%%%===================================================================\n-module(arweave_config_serializer_SUITE).\n-export([suite/0, description/0]).\n-export([init_per_suite/1, end_per_suite/1]).\n-export([init_per_testcase/2, end_per_testcase/2]).\n-export([all/0]).\n-export([encoder/1,decoder/1,map_merge/1]).\n-include(\"arweave_config.hrl\").\n-include_lib(\"common_test/include/ct.hrl\").\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nsuite() -> [{userdata, [description()]}].\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ndescription() -> {description, \"arweave config parameters bootstrap\"}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit_per_suite(Config) -> Config.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nend_per_suite(_Config) -> ok.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit_per_testcase(_TestCase, _Config) ->\n\tct:pal(info, 1, \"start arweave_config\"),\n\tok = arweave_config:start(),\n\t[].\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nend_per_testcase(_TestCase, _Config) ->\n\tct:pal(info, 1, \"stop arweave_config\"),\n\tok = arweave_config:stop().\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nall() ->\n\t[\n\t\tencoder,\n\t\tdecoder,\n\t\tmap_merge\n\t].\n\n%%--------------------------------------------------------------------\n%% @doc\n%% @end\n%%--------------------------------------------------------------------\nencoder(_Config) ->\n\tct:pal(test, 1, \"check encoder with empty value\"),\n\t{ok, #{}} = arweave_config_serializer:encode(#{}),\n\n\tct:pal(test, 1, \"check encoder with 1 key\"),\n\t{ok, #{\n\t\t[1] := 1\n\t}} = arweave_config_serializer:encode(#{ 1 => 1 }),\n\n\tct:pal(test, 1, \"check encoder with binary key\"),\n\t{ok, #{\n\t\t[test] := 2\n\t}} = arweave_config_serializer:encode(#{ <<\"test\">> => 2 }),\n\n\tct:pal(test, 1, \"check encoder with list key\"),\n\t{ok, #{\n\t\t[test] := 3\n\t}} = arweave_config_serializer:encode(#{ \"test\" => 3 }),\n\n\tct:pal(test, 1, \"check encoder with recursive map\"),\n\t{ok, #{\n\t\t[1,2,3] := 4\n\t}} = arweave_config_serializer:encode(#{ 1 => #{ 2 => #{ 3 => 4 }}}),\n\n\tct:pal(test, 1, \"full encoding test\"),\n\tMap = #{\n\t\t<<\"data\">> => #{\n\t\t\t<<\"directory\">> => <<\"/path/to/data\">>,\n\t\t\t1 => 2,\n\t\t\ta => b\n\t\t},\n\t\t<<\"logging\">> => #{\n\t\t\t<<\"debug\">> => 
#{\n\t\t\t\t<<\"enabled\">> => true\n\t\t\t}\n\t\t},\n\t\t<<\"random_uRxsNKiM\">> => #{\n\t\t\t<<\"random_gblL5sdA\">> => []\n\t\t},\n\t\t\"test\" => #{\n\t\t\t<<\"test\">> => #{\n\t\t\t\ttest => #{\n\t\t\t\t\t  data => test\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t},\n\tResult = #{\n\t\t[data,directory] => <<\"/path/to/data\">>,\n\t\t[data,1] => 2,\n\t\t[data,a] => b,\n\t\t[logging,debug,enabled] => true,\n\t\t[<<\"random_uRxsNKiM\">>, <<\"random_gblL5sdA\">>] => [],\n\t\t[test,test,test,data] => test\n\t},\n\t{ok, Result} = arweave_config_serializer:encode(Map),\n\n\t{comment, \"encoder tested\"}.\n\n%%--------------------------------------------------------------------\n%% @doc\n%% @end\n%%--------------------------------------------------------------------\ndecoder(_Config) ->\n\tct:pal(test, 1, \"check decoder with empty value\"),\n\t{ok, #{}} = arweave_config_serializer:decode(#{}),\n\n\tct:pal(test, 1, \"check decoder with complex value\"),\n\t{ok, #{\n\t\t1 := #{\n\t\t\t'_' := 1,\n\t\t\t2 := test,\n\t\t\t3 := data\n\t\t},\n\t\tt := #{\n\t\t\t1 := #{\n\t\t\t\ttest := data\n\t\t\t}\n\t\t}\n\t}} = arweave_config_serializer:decode(#{\n\t\t[1] => 1,\n\t\t[1,2] => test,\n\t\t[1,3] => data,\n\t\t[t,1,test] => data\n\t}),\n\n\tct:pal(test, 1, \"full decoding test\"),\n\tResult = #{\n\t\t1 => #{\n\t\t\t2 => #{\n\t\t       \t\t3 => #{\n\t\t\t\t       '_' => 4,\n\t\t\t\t       a => b\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t},\n\t{ok, Result} = arweave_config_serializer:decode(#{\n\t\t[1,2,3] => 4,\n\t\t[1,2,3,a] => b\n\t}),\n\n\t{comment, \"decoder tested\"}.\n\n%%--------------------------------------------------------------------\n%% @doc\n%% @end\n%%--------------------------------------------------------------------\nmap_merge(_Config) ->\n\tct:pal(test, 1, \"test merge map\"),\n\tResult = #{\n\t\t1 => #{\n\t\t       '_' => 1,\n\t\t\t2 => test,\n\t\t\t3 => data\n\t\t},\n\t\tt => #{\n\t\t       1 => #{\n\t\t\t      test => data\n\t\t\t}\n\t\t}\n\t},\n\tResult = arweave_config_serializer:map_merge([\n\t\t#{ 1 => 1 },\n\t\t#{ 1 => #{ 2 => test } },\n\t\t#{ 1 => #{ 3 => data } },\n\t\t#{ t => #{ 1 => #{ test => data }}}\n\t]),\n\t{comment, \"map merger tested\"}.\n"
  },
  {
    "path": "apps/arweave_config/test/arweave_config_spec_SUITE.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @copyright 2025 (c) Arweave\n%%% @doc\n%%% @end\n%%%===================================================================\n-module(arweave_config_spec_SUITE).\n-export([suite/0, description/0]).\n-export([init_per_suite/1, end_per_suite/1]).\n-export([init_per_testcase/2, end_per_testcase/2]).\n-export([all/0]).\n-export([specs/1]).\n-export([\n\tdefault/1,\n\tdefault_value/1,\n\tdefault_type/1,\n\tdefault_get/1,\n\tdefault_set/1,\n\tdefault_set_state/1,\n\tdefault_multi/1,\n\tdefault_runtime/1,\n\tdefault_multi_types/1,\n\tdefault_inherit/1,\n\tdefault_environment/1\n]).\n-include(\"arweave_config.hrl\").\n-include_lib(\"common_test/include/ct.hrl\").\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nsuite() -> [{userdata, [description()]}].\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ndescription() -> {description, \"arweave configuration spec interface\"}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit_per_suite(Config) -> Config.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nend_per_suite(_Config) -> ok.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit_per_testcase(TestCase, Config) ->\n\t% required for runtime parameter\n\tct:pal(info, 1, \"start arweave_config\"),\n\t{ok, _} = arweave_config:start_link(),\n\n\t% required for configuration storage\n\tct:pal(info, 1, \"start arweave_config_store\"),\n\t{ok, PidStore} = arweave_config_store:start_link(),\n\n\t% required for specificatoin\n\tct:pal(info, 1, \"Start arweave_config_spec\"),\n\tSpecs = specs(TestCase),\n\t{ok, PidSpec} = arweave_config_spec:start_link(Specs),\n\n\t[\n\t\t{arweave_config_store, PidStore},\n\t\t{arweave_config_spec, PidSpec}\n\t\t| Config\n\t].\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nend_per_testcase(_TestCase, _Config) ->\n\tct:pal(info, 1, \"stop arweave_config_spec\"),\n\tok = arweave_config_spec:stop(),\n\n\tct:pal(info, 1, \"stop arweave_config_store\"),\n\tok = arweave_config_store:stop(),\n\n\tct:pal(info, 1, \"stop arweave_config\"),\n\tok = gen_server:stop(arweave_config).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nall() ->\n\t[\n\t\tdefault,\n\t\tdefault_value,\n\t\tdefault_type,\n\t\tdefault_get,\n\t\tdefault_set,\n\t\tdefault_set_state,\n\t\tdefault_multi,\n\t\tdefault_runtime,\n\t\tdefault_multi_types,\n\t\tdefault_inherit,\n\t\tdefault_environment\n\t].\n\n%%--------------------------------------------------------------------\n%% @doc\n%% 
@end\n%%--------------------------------------------------------------------\ndefault(_Config) ->\n\t% a parameter spec without a default value has nothing to\n\t% return by default, so getting it should return an error.\n\t{error, undefined} = arweave_config_spec:get([default]),\n\n\t% when setting a value, we should see the new value and the\n\t% old value. The value should also be present in the store\n\t{ok, test, undefined} = arweave_config_spec:set([default], test),\n\t{ok, test} = arweave_config_store:get([default]).\n\n%%--------------------------------------------------------------------\n%% @doc\n%% @end\n%%--------------------------------------------------------------------\ndefault_value(_Config) ->\n\t% a parameter with a default value\n\t{ok, true} = arweave_config_spec:get([default_value]),\n\t{error, undefined} = arweave_config_store:get([default_value]),\n\t{ok, false, true} = arweave_config_spec:set([default_value], false),\n\t{ok, false} = arweave_config_store:get([default_value]).\n\n%%--------------------------------------------------------------------\n%% @doc\n%% @end\n%%--------------------------------------------------------------------\ndefault_type(_Config) ->\n\t% a parameter with a defined type\n\t{ok, true, undefined} =\n\t\tarweave_config_spec:set([default_type], true),\n\t{error, _, _} =\n\t\tarweave_config_spec:set([default_type], \"not a boolean\").\n\n%%--------------------------------------------------------------------\n%% @doc\n%% @end\n%%--------------------------------------------------------------------\ndefault_get(_Config) ->\n\t% a parameter with a specific get\n\t{ok, valid} =\n\t\tarweave_config_spec:get([default_get]).\n\n%%--------------------------------------------------------------------\n%% @doc\n%% @end\n%%--------------------------------------------------------------------\ndefault_set(_Config) ->\n\t% a parameter with a specific set\n\t{ok, ok, undefined} =\n\t\tarweave_config_spec:set([default_set], self()),\n\tok = receive\n\t\tok -> ok\n\tafter\n\t\t10 -> error\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc\n%% @end\n%%--------------------------------------------------------------------\ndefault_set_state(_Config) ->\n\t% set state\n\t{ok, empty, undefined} =\n\t\tarweave_config_spec:set([default_set_state], ok),\n\t{ok, full, empty} =\n\t\tarweave_config_spec:set([default_set_state], ok),\n\n\t{comment, \"set state tested\"}.\n\n%%--------------------------------------------------------------------\n%% @doc\n%% @end\n%%--------------------------------------------------------------------\ndefault_multi(_Config) ->\n\t% default value is 1\n\tct:pal(test, 1, \"set value to 1\"),\n\t{ok, 1} = arweave_config_spec:get([one]),\n\t{ok, one, 1} = arweave_config_spec:set([one], one),\n\n\t% no default value, but always returns 3\n\tct:pal(test, 1, \"get value\"),\n\t{ok, 3} = arweave_config_spec:get([three]),\n\t{ok, 3, undefined} = arweave_config_spec:set([three], any),\n\n\t{comment, \"multi spec tested\"}.\n\n%%--------------------------------------------------------------------\n%% @doc\n%% @end\n%%--------------------------------------------------------------------\ndefault_runtime(_Config) ->\n\tct:pal(test, 1, \"initial state not in runtime\"),\n\tfalse = arweave_config:is_runtime(),\n\t{ok, 1, undefined} = arweave_config_spec:set([default], 1),\n\t{ok, 1, undefined} = arweave_config_spec:set([runtime], 1),\n\t{ok, 1, undefined} = arweave_config_spec:set([not_runtime], 1),\n\n\tct:pal(test, 
1, \"swith to runtime\"),\n\tok = arweave_config:runtime(),\n\ttrue = arweave_config:is_runtime(),\n\n\t% by default, a parameter without runtime feature is not\n\t% allowed to be set during runtime\n\t{error, _} = arweave_config_spec:set([default], 2),\n\n\t% runtime parameter can be set during runtime\n\t{ok, 2, 1} = arweave_config_spec:set([runtime], 2),\n\n\t% not_runtime parameter can't be set during runtime\n\t{error, _} = arweave_config_spec:set([not_runtime], 2).\n\n%%--------------------------------------------------------------------\n%% @doc\n%% @end\n%%--------------------------------------------------------------------\ndefault_multi_types(_Config) ->\n\tct:pal(test, 1, \"set a boolean\"),\n\t{ok, true, undefined} =\n\t\tarweave_config_spec:set([default], <<\"true\">>),\n\n\tct:pal(test, 1, \"set an integer\"),\n\t{ok, 1, true} =\n\t\tarweave_config_spec:set([default], 1),\n\n\tct:pal(test, 1, \"set an ipv4\"),\n\t{ok, <<\"127.0.0.1\">>, 1} =\n\t\tarweave_config_spec:set([default], <<\"127.0.0.1\">>),\n\n\t{comment, \"multi types tested\"}.\n\n%%--------------------------------------------------------------------\n%% @doc\n%% @end\n%%--------------------------------------------------------------------\ndefault_inherit(_Config) ->\n\tct:pal(test, 1, \"check inheritance\"),\n\t{ok, true} = arweave_config:get([default]),\n\t{ok, true} = arweave_config:get([inherit,all]),\n\t{ok, 1} = arweave_config:get([inherit,nothing]),\n\n\tct:pal(test, 1, \"check specs from ets\"),\n\t[{_, #{ type := boolean, default := true}}] =\n\t\tets:lookup(arweave_config_spec, [default]),\n\n\t[{_, #{ type := boolean, default := true}}] =\n\t\tets:lookup(arweave_config_spec, [inherit, all]),\n\n\t[{_, #{ type := boolean }}] =\n\t\tets:lookup(arweave_config_spec, [inherit, type]),\n\n\t[{_, #{ default := true }}] =\n\t\tets:lookup(arweave_config_spec, [inherit, default]),\n\n\t[{_, #{ default := 1, type := pos_integer }}] =\n\t\tets:lookup(arweave_config_spec, [inherit, nothing]),\n\n\t{comment, \"inherit feature tested\"}.\n\n%%--------------------------------------------------------------------\n%% @doc\n%% @end\n%%--------------------------------------------------------------------\ndefault_environment(_Config) ->\n\tct:pal(test, 1, \"get the list of environment variables\"),\n\tEnv = arweave_config_spec:get_environments(),\n\n\tct:pal(test, 1, \"check disabled environment\"),\n\tfalse = lists:search(fun\n\t\t({_, [environment,disabled]}) -> true;\n\t\t(_) -> false end,\n\t\tEnv\n\t),\n\n\tct:pal(test, 1, \"check enabled environment (generated)\"),\n\t{value, {<<\"AR_ENVIRONMENT_ENABLED\">>, [environment,enabled]}} =\n\t\tlists:search(fun\n\t\t\t({_,[environment,enabled]}) -> true;\n\t\t\t(_) -> false\n\t\tend,\n\t\tEnv\n\t),\n\n\tct:pal(test, 1, \"check custom enabled environment\"),\n\t{value, {<<\"CUSTOM\">>, [environment,custom]}} =\n\t\tlists:search(fun\n\t\t\t({_,[environment,custom]}) -> true;\n\t\t\t(_) -> false\n\t\tend,\n\t\tEnv\n\t),\n\n\t{comment, \"environment feature tested\"}.\n\n%%--------------------------------------------------------------------\n%% @doc defines custom parameters for tests.\n%% @end\n%%--------------------------------------------------------------------\nspecs(default) ->\n\t[\n\t\t#{ parameter_key => [default] }\n\t];\nspecs(default_value) ->\n\t[\n\t\t#{\n\t\t\tparameter_key => [default_value],\n\t\t\tdefault => true\n\t\t }\n\t];\nspecs(default_type) ->\n\t[\n\t\t#{\n\t\t\tparameter_key => [default_type],\n\t\t\ttype => boolean\n\t\t}\n\t];\nspecs(default_get) 
->\n\t[\n\t\t#{\n\t\t\tparameter_key => [default_get],\n\t\t\thandle_get => fun\n\t\t\t\t(K, _S) ->\n\t\t\t\t\t{ok, valid}\n\t\t\tend\n\t\t}\n\t];\nspecs(default_set) ->\n\t[\n\t\t#{\n\t\t\tparameter_key => [default_set],\n\t\t\thandle_set => fun\n\t\t\t\t(K, V, S, _) ->\n\t\t\t\t\tV ! ok,\n\t\t\t\t\t{ok, ok}\n\t\t\tend\n\t\t}\n\t];\nspecs(default_set_state) ->\n\t[\n\t\t#{\n\t\t\tparameter_key => [default_set_state],\n\t\t\thandle_set => fun\n\t\t\t\t(_K, _V, #{ config := Config }, _) ->\n\t\t\t\t\tcase Config of\n\t\t\t\t\t\t#{ default_set_state := empty } ->\n\t\t\t\t\t\t\t{store, full};\n\t\t\t\t\t\t_ ->\n\t\t\t\t\t\t\t{store, empty}\n\t\t\t\t\tend\n\t\t\tend\n\t\t}\n\t];\nspecs(default_multi) ->\n\t[\n\t\t#{\n\t\t\tparameter_key => [one],\n\t\t\tdefault => 1\n\t\t},\n\t\t#{\n\t\t\tparameter_key => [three],\n\t\t\thandle_get => fun\n\t\t\t\t(_K, _S) ->\n\t\t\t\t\t{ok, 3}\n\t\t\tend,\n\t\t\thandle_set => fun\n\t\t\t\t(_K, _V, _S, _) ->\n\t\t\t\t\t{ok, 3}\n\t\t\tend\n\t\t}\n\t];\nspecs(default_runtime) ->\n\t[\n\t\t#{\n\t\t\tparameter_key => [default]\n\t\t},\n\t\t#{\n\t\t\tparameter_key => [runtime],\n\t\t\truntime => true\n\t\t},\n\t\t#{\n\t\t\tparameter_key => [not_runtime],\n\t\t\truntime => false\n\t\t}\n\t];\nspecs(default_multi_types) ->\n\t[\n\t\t#{\n\t\t\tparameter_key => [default],\n\t\t\ttype => [boolean, integer, ipv4]\n\t\t}\n\t];\nspecs(default_inherit) ->\n\t[\n\t\t#{\n\t\t\tparameter_key => [default],\n\t\t\ttype => boolean,\n\t\t\tdefault => true\n\t\t},\n\t\t#{\n\t\t\tparameter_key => [inherit,all],\n\t\t\tinherit => [default]\n\t\t},\n\t\t#{\n\t\t\tparameter_key => [inherit,type],\n\t\t\tinherit => {[default], [type]}\n\t\t},\n\t\t#{\n\t\t\tparameter_key => [inherit,default],\n\t\t\tinherit => {[default], [default]}\n\t\t},\n\t\t#{\n\t\t\tparameter_key => [inherit,nothing],\n\t\t\tinherit => [default],\n\t\t\ttype => pos_integer,\n\t\t\tdefault => 1\n\t\t}\n\t];\nspecs(default_environment) ->\n\t[\n\t\t#{\n\t\t\tparameter_key => [environment,disabled],\n\t\t\tenvironment => false\n\t\t},\n\t\t#{\n\t\t\tparameter_key => [environment,enabled],\n\t\t\tenvironment => true\n\t\t},\n\t\t#{\n\t\t\tparameter_key => [environment,custom],\n\t\t\tenvironment => <<\"CUSTOM\">>\n\t\t}\n\t].\n"
  },
  {
    "path": "apps/arweave_config/test/arweave_config_store_SUITE.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @copyright 2025 (c) Arweave\n%%% @doc\n%%% @end\n%%%===================================================================\n-module(arweave_config_store_SUITE).\n-export([suite/0, description/0]).\n-export([init_per_suite/1, end_per_suite/1]).\n-export([init_per_testcase/2, end_per_testcase/2]).\n-export([all/0]).\n-export([arweave_config_store/1]).\n-include(\"arweave_config.hrl\").\n-include_lib(\"common_test/include/ct.hrl\").\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nsuite() -> [{userdata, [description()]}].\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ndescription() -> {description, \"arweave configuration store interface\"}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit_per_suite(Config) -> Config.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nend_per_suite(_Config) -> ok.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit_per_testcase(_TestCase, Config) ->\n\tct:pal(info, 1, \"start arweave_config_store\"),\n\t{ok, Pid} = arweave_config_store:start_link(),\n\t[{arweave_config_store, Pid}|Config].\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nend_per_testcase(_TestCase, _Config) ->\n\tct:pal(info, 1, \"stop arweave_config_store\"),\n\tok = arweave_config_store:stop().\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nall() ->\n\t[ arweave_config_store ].\n\n%%--------------------------------------------------------------------\n%% @doc test `arweave_config_store' storage interface.\n%% @end\n%%--------------------------------------------------------------------\narweave_config_store(_Config) ->\n\tct:pal(test, 1, \"check undefined parameter\"),\n\t{error, undefined} = arweave_config_store:get(\"test\"),\n\n\tct:pal(test, 1, \"try to delete an undefined parameter\"),\n\t{error, undefined} = arweave_config_store:delete(\"test\"),\n\n\tct:pal(test, 1, \"ensure default parameter is working\"),\n\tdefault = arweave_config_store:get(\"test\", default),\n\n\tct:pal(test, 1, \"set a new parameter\"),\n\t{ok, {[test], data}} = arweave_config_store:set(\"test\", data),\n\n\tct:pal(test, 1, \"get an existing parameter\"),\n\t{ok, data} = arweave_config_store:get(\"test\"),\n\n\tct:pal(test, 1, \"delete an existing parameter\"),\n\t{ok, {[test], data}} = arweave_config_store:delete(\"test\"),\n\n\tct:pal(test, 1, \"ensure the paramater was removed\"),\n\t{error, undefined} = arweave_config_store:get(\"test\"),\n\n\t{comment, \"arweave_config_store process tested\"}.\n"
  },
  {
    "path": "apps/arweave_config/test/arweave_config_type_SUITE.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @copyright 2025 (c) Arweave\n%%% @doc\n%%% @end\n%%%===================================================================\n-module(arweave_config_type_SUITE).\n-export([suite/0, description/0]).\n-export([init_per_suite/1, end_per_suite/1]).\n-export([init_per_testcase/2, end_per_testcase/2]).\n-export([all/0]).\n-export([\n\tnone/1,\n\tany/1,\n\tboolean/1,\n\tatom/1,\n\tinteger/1,\n\tpos_integer/1,\n\tipv4/1,\n\tpath/1,\n\tbase64/1,\n\tbase64url/1,\n\ttcp_port/1,\n\tfile/1,\n\tlogging_template/1\n]).\n-include_lib(\"common_test/include/ct.hrl\").\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nsuite() -> [{userdata, [description()]}].\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ndescription() -> {description, \"arweave_config_type test interface\"}.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit_per_suite(Config) -> Config.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nend_per_suite(_Config) -> ok.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\ninit_per_testcase(_TestCase, Config) ->\n\tConfig.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nend_per_testcase(_TestCase, _Config) ->\n\tok.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%%--------------------------------------------------------------------\nall() ->\n\t[\n\t\tnone,\n\t\tany,\n\t\tatom,\n\t\tinteger,\n\t\tboolean,\n\t\tinteger,\n\t\tpos_integer,\n\t\tipv4,\n\t\tpath,\n\t\tbase64,\n\t\tbase64url,\n\t\ttcp_port,\n\t\tfile,\n\t\tlogging_template\n\t].\n\nnone(_Config) ->\n\t{error, 1} = arweave_config_type:none(1).\n\nany(_Config) ->\n\t{ok, 1} = arweave_config_type:any(1).\n\natom(_Config) ->\n\t{ok, atom} = arweave_config_type:atom(atom),\n\t{ok, atom} = arweave_config_type:atom(<<\"atom\">>),\n\t{ok, atom} = arweave_config_type:atom(\"atom\").\n\nboolean(_Config) ->\n\t[\n\t\t{ok, true} = arweave_config_type:boolean(X)\n\t\t|| X <- [<<\"true\">>, \"true\", true]\n\t],\n\t[\n\t\t{ok, false} = arweave_config_type:boolean(X)\n\t\t|| X <- [<<\"false\">>, \"false\", false]\n\t],\n\t{error, not_boolean} =\n\t\tarweave_config_type:boolean(not_boolean).\n\ninteger(_Config) ->\n\t{ok, 1} = arweave_config_type:integer(1),\n\t{ok, 1} = arweave_config_type:integer(\"1\"),\n\t{ok, 1} = arweave_config_type:integer(<<\"1\">>),\n\t{error, a} = arweave_config_type:integer(a).\n\npos_integer(_Config) ->\n\t{ok, 1} = arweave_config_type:pos_integer(1),\n\t{error, -1} = arweave_config_type:pos_integer(-1).\n\nipv4(_Config) ->\n\t{ok, <<\"127.0.0.1\">>} = 
arweave_config_type:ipv4(\"127.0.0.1\"),\n\t{ok, <<\"127.0.0.1\">>} = arweave_config_type:ipv4({127,0,0,1}),\n\t{ok, <<\"127.0.0.1\">>} = arweave_config_type:ipv4(<<\"127.0.0.1\">>),\n\t{error, _ } = arweave_config_type:ipv4(test).\n\npath(Config) ->\n\t_PrivDir = proplists:get_value(priv_dir, Config),\n\t{ok, Cwd} = file:get_cwd(),\n\n\t% absolute path\n\t{ok, <<\"/\">>} = arweave_config_type:path(<<\"/\">>),\n\t{ok, <<\"/\">>} = arweave_config_type:path(\"/\"),\n\n\t% relative path: convert automatically in absolute path\n\tCwdBinary = list_to_binary(Cwd),\n\t{ok, CwdBinary} = arweave_config_type:path(<<\"./\">>),\n\t{ok, CwdBinary} = arweave_config_type:path(\"./\").\n\nbase64(_Config) ->\n\t{ok, <<\"test\">>} = arweave_config_type:base64(\"dGVzdA==\"),\n\t{ok, <<\"test\">>} = arweave_config_type:base64(<<\"dGVzdA==\">>).\n\nbase64url(_Config) ->\n\t{ok, <<\"test\">>} = arweave_config_type:base64url(\"dGVzdA\"),\n\t{ok, <<\"test\">>} = arweave_config_type:base64url(<<\"dGVzdA\">>).\n\ntcp_port(_Config) ->\n\t{ok, 0} = arweave_config_type:tcp_port(0),\n\t{ok, 65535} = arweave_config_type:tcp_port(65535),\n\t{ok, 1234} = arweave_config_type:tcp_port(1234),\n\t{ok, 1234} = arweave_config_type:tcp_port(\"1234\"),\n\t{ok, 1234} = arweave_config_type:tcp_port(<<\"1234\">>),\n\t{error, 78912} = arweave_config_type:tcp_port(<<\"78912\">>).\n\nfile(_Config) ->\n\tct:pal(test, 1, \"test absolute path and path as binary\"),\n\t{ok, <<\"/tmp/arweave.sock\">>} =\n\t\tarweave_config_type:file(<<\"/tmp/arweave.sock\">>),\n\n\tct:pal(test, 1, \"test relative path and path as list\"),\n\t{ok, P1} =\n\t\tarweave_config_type:file(\"./arweave.sock\"),\n\ttrue = is_binary(P1),\n\n\tct:pal(test, 1, \"test a wrong path\"),\n\t{error, _} =\n\t\tarweave_config_type:file(\"/random/t/a/b/c.sock\"),\n\n\tct:pal(test, 1, \"test a file without write access\"),\n\t{error, _} =\n\t\tarweave_config_type:file(\"/root/data/arweave.sock\"),\n\n\tct:pal(test, 1, \"test a wrong erlang type\"),\n\t{error, _} =\n\t\tarweave_config_type:file(1234),\n\n\tok.\n\nlogging_template(_Config) ->\n\tct:pal(test, 1, \"valid template can be string\"),\n\t{ok, [\"test\", \"\\n\"]} =\n\t\tarweave_config_type:logging_template(\"test\"),\n\n\tct:pal(test, 1, \"valid template can be a binary\"),\n\t{ok, [\"test\",\"\\n\"]} =\n\t\tarweave_config_type:logging_template(<<\"test\">>),\n\n\tct:pal(test, 1, \"An atom start with %\"),\n\t{ok, [test,\"\\n\"]} =\n\t\tarweave_config_type:logging_template(<<\"%test\">>),\n\n\tct:pal(test, 1, \"a string and an atom can be part of the same template\"),\n\t{ok, [\"message:\", \" \", test, \"\\n\"]} =\n\t\tarweave_config_type:logging_template(\"message: %test\"),\n\n\tct:pal(test, 1, \"an atom must start with a null char\"),\n\t{ok, [\"message:%test\",\"\\n\"]} =\n\t\tarweave_config_type:logging_template(\"message:%test\"),\n\n\tct:pal(test, 1, \"an atom must only use [a-zA-Z_] chars\"),\n\t{error, _} =\n\t\tarweave_config_type:logging_template(\"%test!#&\"),\n\n\tct:pal(test, 1, \"an atom must exist\"),\n\t{error, _} =\n\t\tarweave_config_type:logging_template(\"%total_random_atom\"),\n\n\tok.\n"
  },
  {
    "path": "apps/arweave_diagnostic/README.md",
    "content": ""
  },
  {
    "path": "apps/arweave_diagnostic/include/.gitkeep",
    "content": ""
  },
  {
    "path": "apps/arweave_diagnostic/priv/.gitkeep",
    "content": ""
  },
  {
    "path": "apps/arweave_diagnostic/src/arweave_diagnostic.app.src",
    "content": "{application, arweave_diagnostic, [\n\t{id, \"arweave_diagnostic\"},\n\t{description, \"Arweave Diagnostic Tool\"},\n\t{vsn, \"0.0.1\"},\n\t{mod, {arweave_diagnostic, []}},\n\t{env, []},\n\t{applications, [\n\t\tkernel,\n\t\tstdlib,\n\t\tsasl\n\t]},\n\t{modules, [\n\t\tarweave_diagnostic\n\t]},\n\t{registered, [\n\t]}\n]}.\n"
  },
  {
    "path": "apps/arweave_diagnostic/src/arweave_diagnostic.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2025 (c) Arweave\n%%% @author Arweave Team\n%%% @author Mathieu Kerjouan\n%%% @doc Arweave Diagnostic Module.\n%%%\n%%% This module has been created to display detailed information about\n%%% an Arweave application, including information from the BEAM as\n%%% well.\n%%%\n%%% @todo create diagnostic for epmd\n%%% @todo create diagnostic for timers\n%%% @end\n%%%===================================================================\n-module(arweave_diagnostic).\n-compose(warnings_as_errors).\n-compile({no_auto_import,[processes/0]}).\n-compile({no_auto_import,[process_info/1]}).\n-export([\n\tall/0,\n\tselect/1,\n\tcpu/0,\n\tmemory/0,\n\tmemory_worst/0,\n\tnetwork/0,\n\tprocesses/0,\n\tsockets/0,\n\tarweave/0,\n\tets/0,\n\tdets/0,\n\trocksdb/0\n]).\n-include_lib(\"kernel/include/logger.hrl\").\n\n-type diagnostic() :: cpu | memory | memory_worst | processes |\n\tarweave | ets | dets.\n\n%%--------------------------------------------------------------------\n%% @doc returns all diagnostic available.\n%% @end\n%%--------------------------------------------------------------------\n-spec all() -> proplists:proplist().\n\nall() ->\n\tselect([\n\t\tcpu,\n\t\tmemory,\n\t\tmemory_worst,\n\t\tnetwork,\n\t\tprocesses,\n\t\tarweave,\n\t\tets,\n\t\tdets,\n\t\trocksdb\n\t]).\n\n%%--------------------------------------------------------------------\n%% @doc returns a subset of supported diagnostics.\n%% @end\n%%--------------------------------------------------------------------\n-spec select([diagnostic()]) -> proplists:proplist().\n\nselect(List) -> select(List, []).\n\nselect([], Buffer) -> Buffer;\nselect([cpu|Rest], Buffer) ->\n\tselect(Rest,[{cpu,cpu()}|Buffer]);\nselect([memory|Rest], Buffer) ->\n\tselect(Rest,[{memory,memory()}|Buffer]);\nselect([memory_worst|Rest], Buffer) ->\n\tselect(Rest,[{memory_worst,memory_worst()}|Buffer]);\nselect([network|Rest], Buffer) ->\n\tselect(Rest,[{network,network()}|Buffer]);\nselect([processes|Rest], Buffer) ->\n\tselect(Rest,[{processes,processes()}|Buffer]);\nselect([arweave|Rest], Buffer) ->\n\tselect(Rest,[{arweave,arweave()}|Buffer]);\nselect([ets|Rest], Buffer) ->\n\tselect(Rest,[{ets_tables,ets()}|Buffer]);\nselect([dets|Rest], Buffer) ->\n\tselect(Rest,[{dets_tables,dets()}|Buffer]);\nselect([rocksdb|Rest], Buffer) ->\n\tselect(Rest,[{rocksdb,rocksdb()}|Buffer]);\nselect([_|Rest], Buffer) ->\n\tselect(Rest, Buffer).\n\n%%--------------------------------------------------------------------\n%% @doc Returns cpu diagnostic. Most of the cpu information are being\n%% collected using `cpu_sup' module, this is then, a dependency.\n%% @end\n%%--------------------------------------------------------------------\n-spec cpu() -> proplists:proplist().\n\ncpu() ->\n\ttry\n\t\tdisplay([\n\t\t\t{nprocs, cpu_sup:nprocs()},\n\t\t\t{avg1, cpu_sup:avg1()},\n\t\t\t{avg5, cpu_sup:avg5()},\n\t\t\t{avg15, cpu_sup:avg15()},\n\t\t\t{util, cpu_sup:util()},\n\t\t\t{ping, cpu_sup:ping()}\n\t\t], cpu)\n\tcatch _:_  ->\n\t\t ?LOG_WARNING(\"cpu_sup not started\"),\n\t\t []\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc Returns memory diagnostic. Most of the memory information are\n%% collected using `memsup' module. 
This is then a dependency.\n%% @end\n%%--------------------------------------------------------------------\n-spec memory() -> proplists:proplist().\n\nmemory() ->\n\ttry\n\t\tMemory = memsup:get_system_memory_data(),\n\t\tdisplay(Memory, memory)\n\tcatch _:_ ->\n\t\t ?LOG_WARNING(\"memsup not started\"),\n\t\t []\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc Returns the pid using the greatest amount of memory. This\n%% function is using `memsup' module.\n%% @end\n%%--------------------------------------------------------------------\n-spec memory_worst() -> proplists:proplist().\n\nmemory_worst() ->\n\ttry\n\t\tmemsup:get_memory_data()\n\tof\n\t\t{Total, Allocated, {Pid, PidAllocated}} ->\n\t\t\tInfo = process_info(Pid),\n\t\t\tName = proplists:get_value(registered_name, Info),\n\t\t\tHeapSize = proplists:get_value(heap_size, Info),\n\t\t\tTotalHeapSize = proplists:get_value(total_heap_size, Info),\n\t\t\tStackSize = proplists:get_value(stack_size, Info),\n\t\t\tReductions = proplists:get_value(reductions, Info),\n\t\t\tMsgQueue = proplists:get_value(message_queue_len, Info),\n\t\t\tStatus = proplists:get_value(status, Info),\n\t\t\tdisplay([\n\t\t\t\t{total, Total},\n\t\t\t\t{allocated, Allocated},\n\t\t\t\t{worst_pid, Pid},\n\t\t\t\t{worst_pid_allocated, PidAllocated},\n\t\t\t\t{worst_pid_name, Name},\n\t\t\t\t{worst_pid_total_heap_size, TotalHeapSize},\n\t\t\t\t{worst_pid_heap_size, HeapSize},\n\t\t\t\t{worst_pid_stack_size, StackSize},\n\t\t\t\t{worst_pid_reductions, Reductions},\n\t\t\t\t{worst_pid_message_queue, MsgQueue},\n\t\t\t\t{worst_pid_status, Status}\n\t\t\t], memory_worst)\n\tcatch _:_ ->\n\t\t?LOG_WARNING(\"memsup is not started\"),\n\t\t[]\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc Returns network diagnostic.\n%% @end\n%%--------------------------------------------------------------------\n-spec network() -> proplists:proplist().\n\nnetwork() ->\n\tSockets = sockets(),\n\tdisplay([\n\t\t{sockets, length(Sockets)}\n\t], network).\n\n%%--------------------------------------------------------------------\n%% @doc Returns sockets (network ports) diagnostic.\n%% @end\n%%--------------------------------------------------------------------\nsockets() ->\n\tPorts = erlang:ports(),\n\tSockets = lists:filter(fun is_network_port/1, Ports),\n\t[\n\t\t{socket, socket_info(S)}\n\t\t|| S <- Sockets\n\t].\n\n%%--------------------------------------------------------------------\n%% @doc Returns processes diagnostic.\n%% @end\n%%--------------------------------------------------------------------\n-spec processes() -> proplists:proplist().\n\nprocesses() ->\n\tProcesses = erlang:processes(),\n\t[\n\t\tprocess_info(P)\n\t\t|| P <- Processes\n\t].\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc wrapper around `erlang:process_info/1'.\n%% @end\n%%--------------------------------------------------------------------\n-spec process_info(Pid) -> Return when\n\tPid :: pid(),\n\tReturn :: proplists:proplist().\n\nprocess_info(Pid) ->\n\ttry erlang:process_info(Pid) of\n\t\tundefined ->\n\t\t\t[];\n\t\tInfo ->\n\t\t\tprocess_info2(Pid, Info)\n\tcatch _:_ ->\n\t\t[]\n\tend.\n\nprocess_info2(Pid, Info) ->\n\tRegisteredName = proplists:get_value(registered_name, Info),\n\tStatus = proplists:get_value(status, Info),\n\tMsgQueueLen = proplists:get_value(message_queue_len, Info),\n\tTrapExit = proplists:get_value(trap_exit, Info),\n\tPriority = 
proplists:get_value(priority, Info),\n\tGroupLeader = proplists:get_value(group_leader, Info),\n\tTotalHeapSize = proplists:get_value(total_heap_size, Info),\n\tHeapSize = proplists:get_value(heap_size, Info),\n\tStackSize = proplists:get_value(stack_size, Info),\n\tReductions = proplists:get_value(reductions, Info),\n\tGC = proplists:get_value(garbage_collection, Info),\n\tGC_MinBinVHeapSize = proplists:get_value(min_bin_vheap_size, GC),\n\tGC_MinHeapSize = proplists:get_value(min_heap_size, GC),\n\tGC_FullsweepAfter = proplists:get_value(fullsweep_after, GC),\n\tGC_MinorGCS = proplists:get_value(minor_gcs, GC),\n\tCurrentFunction =\n\t\ttry\n\t\t\terlang:process_info(Pid, current_location)\n\t\tof\n\t\t\t{current_location,{M,F,A,_}} ->\n\t\t\t\tio_lib:format(\"~s:~s/~b\", [M,F,A]);\n\t\t\t_ ->\n\t\t\t\tundefined\n\t\tcatch _:_ ->\n\t\t\t      undefined\n\t\tend,\n\tMemory =\n\t\ttry\n\t\t\terlang:process_info(Pid, memory)\n\t\tof\n\t\t\t{memory, Mem} ->\n\t\t\t\tMem;\n\t\t\t_ ->\n\t\t\t\tundefined\n\t\tcatch _:_ ->\n\t\t\t      undefined\n\t\tend,\n\tdisplay([\n\t\t{pid, Pid},\n\t\t{status, Status},\n\t \t{location, CurrentFunction},\n\t \t{memory, Memory},\n\t\t{group_leader, GroupLeader},\n\t\t{heap_size, HeapSize},\n\t\t{message_queue_len, MsgQueueLen},\n\t\t{priority, Priority},\n\t\t{reductions, Reductions},\n\t\t{registered_name, RegisteredName},\n\t\t{stack_size, StackSize},\n\t\t{total_heap_size, TotalHeapSize},\n\t\t{trap_exit, TrapExit},\n\t\t{gc_min_bin_vheap_size, GC_MinBinVHeapSize},\n\t\t{gc_min_heap_size, GC_MinHeapSize},\n\t\t{gc_fullsweep_after, GC_FullsweepAfter},\n\t\t{gc_minor_gcs, GC_MinorGCS}\n\t], process).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc Check if a port is a network socket (tcp, udp, sctp).\n%% @end\n%%--------------------------------------------------------------------\n-spec is_network_port(Port) -> Return when\n\tPort :: port(),\n\tReturn :: boolean().\n\nis_network_port(Port) ->\n\ttry\n\t\tInfo = erlang:port_info(Port),\n\t\tproplists:get_value(name, Info)\n\tof\n\t\t\"tcp_inet\" -> true;\n\t\t\"udp_inet\" -> true;\n\t\t\"sctp_inet\" -> true;\n\t\t_ -> false\n\tcatch _:_ ->\n\t\tfalse\n\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc A function helper to gather information about a socket.\n%% @end\n%%--------------------------------------------------------------------\n-spec socket_info(Socket) -> Return when\n\tSocket :: port(),\n\tReturn :: proplists:proplist().\n\nsocket_info(Socket) ->\n\ttry inet:info(Socket) of\n\t\tundefined ->\n\t\t\t[];\n\t\tInfo ->\n\t\t\tsocket_info2(Socket, Info)\n\tcatch _:_ ->\n\t\t[]\n\tend.\n\nsocket_info2(Socket, Info) ->\n\tCounters = maps:get(counters, Info, #{}),\n\tActive = maps:get(active, Info, undefined),\n\tDomain = maps:get(domain, Info, undefined),\n\tNumAcceptors = maps:get(num_acceptors, Info, undefined),\n\tNumReaders = maps:get(num_readers, Info, undefined),\n\tNumWriters = maps:get(num_writers, Info, undefined),\n\tProtocol = maps:get(protocol, Info, undefined),\n\tRecvAvg = maps:get(recv_avg, Counters, undefined),\n\tRecvCnt = maps:get(recv_cnt, Counters, undefined),\n\tRecvDvi = maps:get(recv_dvi, Counters, undefined),\n\tRecvMax = maps:get(recv_max, Counters, undefined),\n\tRecvOct = maps:get(recv_oct, Counters, undefined),\n\tSendAvg = maps:get(send_avg, Counters, undefined),\n\tSendCnt = maps:get(send_cnt, Counters, undefined),\n\tSendMax = maps:get(send_max, Counters, undefined),\n\tSendOct 
= maps:get(send_oct, Counters, undefined),\n\tSendPend = maps:get(send_pend, Counters, undefined),\n\n\t{PeerIP,PeerPort} =\n\t\tcase inet:peername(Socket) of\n\t\t\t{ok, {PI,PP}} ->\n\t\t\t\t{inet:ntoa(PI), PP};\n\t\t\t_ ->\n\t\t\t\t{undefined, undefined}\n\t\tend,\n\t{SockIP,SockPort} =\n\t\tcase inet:sockname(Socket) of\n\t\t\t{ok, {SI, SP}} ->\n\t\t\t\t{inet:ntoa(SI), SP};\n\t\t\t_ ->\n\t\t\t\t{undefined, undefined}\n\t\tend,\n\tdisplay([\n\t\t{socket, Socket},\n\t\t{active, Active},\n\t\t{domain, Domain},\n\t\t{protocol, Protocol},\n\t\t{peer_ip, PeerIP},\n\t\t{peer_port, PeerPort},\n\t\t{sock_ip, SockIP},\n\t\t{sock_port, SockPort},\n\t\t{num_acceptors, NumAcceptors},\n\t\t{num_readers, NumReaders},\n\t\t{num_writers, NumWriters},\n\t\t{recv_oct, RecvOct},\n\t\t{recv_avg, RecvAvg},\n\t\t{recv_cnt, RecvCnt},\n\t\t{recv_dvi, RecvDvi},\n\t\t{recv_max, RecvMax},\n\t\t{send_avg, SendAvg},\n\t\t{send_cnt, SendCnt},\n\t\t{send_max, SendMax},\n\t\t{send_oct, SendOct},\n\t\t{send_pend, SendPend}\n\t], socket).\n\n%%--------------------------------------------------------------------\n%% @doc Returns the arweave diagnostic. This function should display more\n%% information than the `processes/0' diagnostic. The applications to\n%% be checked are `arweave' and `arweave_config'.\n%%\n%%   - check processes (like in `processes/0')\n%%   - check ETS tables\n%%   - check workers status\n%%\n%% This function will become huge, and should probably be migrated\n%% into its own module.\n%%\n%% @end\n%%--------------------------------------------------------------------\n-spec arweave() -> proplists:proplist().\n\narweave() ->\n\tarweave_processes().\n\narweave_processes() ->\n\tcase get_process_group_leader(ar_sup) of\n\t\tundefined ->\n\t\t\t[];\n\t\tLeader ->\n\t\t\tarweave_processes(Leader)\n\tend.\n\narweave_processes(Leader) ->\n\tProcesses = erlang:processes(),\n\t[\n\t\tdisplay(N, arweave_processes)\n\t\t|| N <- [\n\t\t\tprocess_info(P)\n\t\t\t|| P <- Processes\n\t\t],\n\t\tproplists:get_value(group_leader, N) =:= Leader\n\t].\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc Extract the process group leader of a pid or registered process.\n%% @end\n%%--------------------------------------------------------------------\nget_process_group_leader(undefined) -> undefined;\nget_process_group_leader(Atom) when is_atom(Atom) ->\n\tget_process_group_leader(whereis(Atom));\nget_process_group_leader(Pid) when is_pid(Pid) ->\n\ttry\n\t\terlang:process_info(Pid, group_leader)\n\tof\n\t\t{group_leader, GL} ->\n\t\t\tGL;\n\t\t_ ->\n\t\t\tundefined\n\tcatch _:_ ->\n\t\t      undefined\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc Returns ets diagnostic.\n%% @end\n%%--------------------------------------------------------------------\n-spec ets() -> proplists:proplist().\n\nets() ->\n\tEts = ets:all(),\n\tets(Ets, []).\n\nets([], Buffer) -> Buffer;\nets([Ets|Rest], Buffer) ->\n\ttry\n\t\tInfo = display(\n\t\t\tets:info(Ets),\n\t\t\tets\n\t\t),\n\t\tNewBuffer = [{Ets, Info}|Buffer],\n\t\tets(Rest, NewBuffer)\n\tcatch _:_ ->\n\t\tets(Rest, Buffer)\n\tend.\n\n%%--------------------------------------------------------------------\n%% @doc Returns dets diagnostic.\n%% @end\n%%--------------------------------------------------------------------\n-spec dets() -> proplists:proplist().\n\ndets() ->\n\tDets = dets:all(),\n\tdets(Dets, []).\n\ndets([], Buffer) -> Buffer;\ndets([Dets|Rest], Buffer) ->\n\ttry\n\t\tInfo = 
display(\n\t\t\tdets:info(Dets),\n\t\t\tdets\n\t\t),\n\t\tNewBuffer = [{Dets, Info}|Buffer],\n\t\tdets(Rest, NewBuffer)\n\tcatch _:_ ->\n\t\tdets(Rest, Buffer)\n\tend.\n\n%% Record extracted from the ar_kv module. This record is used to store\n%% the rocksdb information used by arweave.\n-record(db, {\n\tname :: term() | undefined,\n\tfilepath :: file:filename_all(),\n\tdb_options :: rocksdb:db_options(),\n\tdb_handle :: rocksdb:db_handle() | undefined,\n\tcf_names = undefined :: [term()],\n\tcf_descriptors = undefined :: [rocksdb:cf_descriptor()],\n\tcf_handle = undefined :: rocksdb:cf_handle()\n}).\n%%--------------------------------------------------------------------\n%% @doc Returns rocksdb diagnostic. This function gets the list of\n%% opened databases by checking the content of the `ar_kv' ETS table.\n%% @end\n%%--------------------------------------------------------------------\n-spec rocksdb() -> proplists:proplist().\nrocksdb() ->\n\tcase ets:info(ar_kv) of\n\t\tundefined -> [];\n\t\tInfo -> rocksdb(Info)\n\tend.\n\nrocksdb(Info) ->\n\tEts = proplists:get_value(id, Info),\n\t[\n\t\t{rocksdb, rocksdb_struct(Db)}\n\t\t|| Db <- ets:tab2list(Ets)\n\t].\n\nrocksdb_struct(#db{filepath = Filepath, db_handle = Handle}) ->\n\tProperties = rocksdb_properties(),\n\tResult = rocksdb_properties(Handle, Properties, []),\n\tdisplay([\n\t\t\t{filepath, Filepath},\n\t\t\t{handle, Handle}\n\t\t\t|Result\n\t\t],\n\t\trocksdb\n\t).\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc These properties should always return an integer formatted\n%% as a binary.\n%% @end\n%%--------------------------------------------------------------------\n-spec rocksdb_properties() -> [binary()].\n\nrocksdb_properties() ->\n\t[\n\t\t<<\"rocksdb.actual-delayed-write-rate\">>,\n\t\t<<\"rocksdb.background-errors\">>,\n\t\t<<\"rocksdb.base-level\">>,\n\t\t<<\"rocksdb.block-cache-capacity\">>,\n\t\t<<\"rocksdb.block-cache-pinned-usage\">>,\n\t\t<<\"rocksdb.block-cache-usage\">>,\n\t\t<<\"rocksdb.compaction-pending\">>,\n\t\t<<\"rocksdb.current-super-version-number\">>,\n\t\t<<\"rocksdb.cur-size-active-mem-table\">>,\n\t\t<<\"rocksdb.cur-size-all-mem-tables\">>,\n\t\t<<\"rocksdb.estimate-live-data-size\">>,\n\t\t<<\"rocksdb.estimate-num-keys\">>,\n\t\t<<\"rocksdb.estimate-pending-compaction-bytes\">>,\n\t\t<<\"rocksdb.estimate-table-readers-mem\">>,\n\t\t<<\"rocksdb.is-file-deletions-enabled\">>,\n\t\t<<\"rocksdb.is-write-stopped\">>,\n\t\t<<\"rocksdb.live-blob-file-size\">>,\n\t\t<<\"rocksdb.live-sst-files-size\">>,\n\t\t<<\"rocksdb.mem-table-flush-pending\">>,\n\t\t<<\"rocksdb.min-log-number-to-keep\">>,\n\t\t<<\"rocksdb.min-obsolete-sst-number-to-keep\">>,\n\t\t<<\"rocksdb.num-blob-files\">>,\n\t\t<<\"rocksdb.num-deletes-active-mem-table\">>,\n\t\t<<\"rocksdb.num-deletes-imm-mem-tables\">>,\n\t\t<<\"rocksdb.num-entries-active-mem-table\">>,\n\t\t<<\"rocksdb.num-entries-imm-mem-tables\">>,\n\t\t<<\"rocksdb.num-files-at-level0\">>,\n\t\t<<\"rocksdb.num-files-at-level1\">>,\n\t\t<<\"rocksdb.num-files-at-level2\">>,\n\t\t<<\"rocksdb.num-files-at-level3\">>,\n\t\t<<\"rocksdb.num-files-at-level4\">>,\n\t\t<<\"rocksdb.num-immutable-mem-table\">>,\n\t\t<<\"rocksdb.num-immutable-mem-table-flushed\">>,\n\t\t<<\"rocksdb.num-live-versions\">>,\n\t\t<<\"rocksdb.num-running-compactions\">>,\n\t\t<<\"rocksdb.num-running-flushes\">>,\n\t\t<<\"rocksdb.num-snapshots\">>,\n\t\t<<\"rocksdb.size-all-mem-tables\">>,\n\t\t<<\"rocksdb.total-blob-file-size\">>,\n\t\t<<\"rocksd
b.total-sst-files-size\">>\n\t].\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc loop over the properties and convert them into a proplist.\n%% @end\n%%--------------------------------------------------------------------\n-spec rocksdb_properties(Handle, Properties, Buffer) -> Return when\n\tHandle :: reference(),\n\tProperties :: [binary()],\n\tBuffer :: proplists:proplist(),\n\tReturn :: proplists:proplist().\n\nrocksdb_properties(_, [], Buffer) -> Buffer;\nrocksdb_properties(Handle, [Property|Rest], Buffer)\n\twhen is_binary(Property) ->\n\t\ttry\n\t\t\t{ok, Raw} = rocksdb:get_property(Handle, Property),\n\t\t\tValue = binary_to_integer(Raw),\n\t\t\t% converting the value for presentation\n\t\t\t% purpose.\n\t\t\tString = binary_to_list(Property),\n\t\t\tNewBuffer = [{String,Value}|Buffer],\n\t\t\trocksdb_properties(Handle,Rest,NewBuffer)\n\t\tcatch\n\t\t\t_:_ ->\n\t\t\t\trocksdb_properties(Handle,Rest,Buffer)\n\t\tend.\n\n%%--------------------------------------------------------------------\n%% @hidden\n%% @doc display diagnostics via logs.\n%% @end\n%%--------------------------------------------------------------------\ndisplay(Diagnostic, Category) ->\n\tMessage = [{diagnostic, Category}|Diagnostic],\n\t?LOG_INFO(Message),\n\tDiagnostic.\n"
  },
  {
    "path": "apps/arweave_limiter/include/.gitkeep",
    "content": ""
  },
  {
    "path": "apps/arweave_limiter/priv/.gitkeep",
    "content": ""
  },
  {
    "path": "apps/arweave_limiter/src/arweave_limiter.app.src",
    "content": "{application, arweave_limiter,\n [\n  {id, \"arweave_limiter\"},\n  {description, \"Arweave Rate Limiter\"},\n  {vsn, \"0.0.1\"},\n  {mod, {arweave_limiter, []}},\n  {env, []},\n  {applications, [\n                  kernel,\n                  stdlib,\n                  sasl,\n                  arweave_config,\n                  prometheus,\n                  prometheus_cowboy,\n                  prometheus_process_collector,\n                  prometheus_httpd,\n                  runtime_tools\n                 ]},\n  {modules, [\n             arweave_limiter,\n             arweave_limiter_sup,\n             arweave_limiter_time,\n             arweave_limiter_metrics,\n             arweave_limiter_metrics_collector\n            ]},\n  {registered, [\n                arweave_limiter,\n                arweave_limiter_sup\n               ]}\n ]}.\n"
  },
  {
    "path": "apps/arweave_limiter/src/arweave_limiter.erl",
    "content": "%%%===================================================================\n%%% GNU General Public License, version 2 (GPL-2.0)\n%%% The GNU General Public License (GPL-2.0)\n%%% Version 2, June 1991\n%%%\n%%% ------------------------------------------------------------------\n%%%\n%%% @copyright 2025 (c) Arweave\n%%% @author Arweave Team\n%%% @author Kristof Hetzl\n%%% @doc Arweave Rate Limiter.\n%%%\n%%% `arweave_limiter' module is an interface to the Arweave\n%%% Rate Limiter functionality.\n%%%\n%%% @end\n%%%===================================================================\n-module(arweave_limiter).\n-vsn(1).\n-behavior(application).\n-export([\n         start/0,\n         start/2,\n         stop/0,\n         stop/1\n        ]).\n\n-export([register_or_reject_call/2, reduce_for_peer/2]).\n\n-include_lib(\"kernel/include/logger.hrl\").\n\n\n%%--------------------------------------------------------------------\n%% @doc Helper function to start the `arweave_limiter' application.\n%% @end\n%%--------------------------------------------------------------------\n-spec start() -> ok | {error, term()}.\n\nstart() ->\n    case application:ensure_all_started(?MODULE, permanent) of\n        {ok, Dependencies} ->\n            ?LOG_DEBUG(\"arweave_limiter started dependencies: ~p\", [Dependencies]),\n            ok;\n        Elsewise ->\n            Elsewise\n    end.\n\n%%--------------------------------------------------------------------\n%% @doc Application API function to start the `arweave_limiter' app.\n%% @end\n%%--------------------------------------------------------------------\n-spec start(term(), term()) -> {ok, pid()}.\nstart(_StartType, _StartArgs) ->\n    arweave_limiter_sup:start_link().\n\n%%--------------------------------------------------------------------\n%% @doc Helper function to stop the `arweave_limiter' application.\n%% @end\n%%--------------------------------------------------------------------\n-spec stop() -> ok.\n\nstop() ->\n    application:stop(?MODULE).\n\n%%--------------------------------------------------------------------\n%% @doc Application API function to stop the `arweave_limiter' app.\n%% @end\n%%--------------------------------------------------------------------\n-spec stop(term()) -> ok.\nstop(_State) ->\n    ok.\n\n%%--------------------------------------------------------------------\n%% @doc Rate limit a request.\n%% @end\n%%--------------------------------------------------------------------\nregister_or_reject_call(LimiterRef, Peer) ->\n    arweave_limiter_group:register_or_reject_call(LimiterRef, Peer).\n\n\n%%--------------------------------------------------------------------\n%% @doc Reduce leaky tokens for peer.\n%% @end\n%%--------------------------------------------------------------------\nreduce_for_peer(LimiterRef, Peer) ->\n    arweave_limiter_group:reduce_for_peer(LimiterRef, Peer).\n"
  },
  {
    "path": "apps/arweave_limiter/src/arweave_limiter_group.erl",
    "content": "%%%\n%%% @doc Leaky bucket token rate limiter based on\n%%%      https://gist.github.com/humaite/21a84c3b3afac07fcebe476580f3a40b\n%%%      combined with a concurrency limiter similar to Ranch's connection pool.\n%%%      The leaky bucket limiter sits on top of a sliding window limiter.\n%%%\n%%%      Concurrency is validated first, then the sliding window, followed by the\n%%%      leaky bucket. If the sliding window passes, the call is accepted; otherwise\n%%%      it burns leaky tokens, and if those are exhausted as well, the call is\n%%%      marked as rejected.\n%%%      It only stores data in process memory.\n%%%\n-module(arweave_limiter_group).\n\n-behaviour(gen_server).\n\n%% API\n-export([\n         start_link/2,\n         info/1,\n         config/1,\n         register_or_reject_call/2,\n         reduce_for_peer/2,\n         reset_all/1,\n         stop/1\n        ]).\n\n%% gen_server callbacks\n-export([init/1, handle_call/3, handle_cast/2, handle_info/2,\n         terminate/2, code_change/3, format_status/2]).\n\n-ifdef(AR_TEST).\n-export([\n         expire_and_get_requests/4,\n         drop_expired/3,\n         add_and_order_timestamps/2,\n         cleanup_expired_sliding_peers/3]).\n-endif.\n\n\n-include_lib(\"arweave/include/ar.hrl\").\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n\n%%% API\nstart_link(LimiterRef, Config) ->\n    gen_server:start_link({local, LimiterRef}, ?MODULE, [Config], []).\n\ninfo(LimiterRef) ->\n    gen_server:call(LimiterRef, get_info).\n\nconfig(LimiterRef) ->\n    gen_server:call(LimiterRef, get_config).\n\nregister_or_reject_call(LimiterRef, Peer) ->\n    {Time, Value} = timer:tc(fun do_register_or_reject_call/2, [LimiterRef, Peer]),\n    prometheus_histogram:observe(ar_limiter_response_time_microseconds, [atom_to_list(LimiterRef)], Time),\n    Value.\n\ndo_register_or_reject_call(LimiterRef, Peer) ->\n    prometheus_counter:inc(ar_limiter_requests_total,\n                           [atom_to_list(LimiterRef)]),\n    case gen_server:call(LimiterRef, {register_or_reject, Peer}) of\n        {reject, Reason, _Data} = Rejection ->\n            prometheus_counter:inc(ar_limiter_rejected_total,\n                                   [atom_to_list(LimiterRef), atom_to_list(Reason)]),\n            Rejection;\n        Accept ->\n            Accept\n    end.\n\n%% This function is called when a transaction is accepted. This is how the previous\n%% solution dealt with high loads. This will perform double reduction. 
(as the periodic\n%% reduction is still occurring).\nreduce_for_peer(LimiterRef, Peer) ->\n    Result = gen_server:call(LimiterRef, {reduce_for_peer, Peer}),\n    Result == ok andalso prometheus_counter:inc(ar_limiter_reduce_requests_total,\n                                                [atom_to_list(LimiterRef)]),\n    Result.\n\nreset_all(LimiterRef) ->\n    whereis(LimiterRef) == undefined orelse gen_server:call(LimiterRef, reset_all).\n\nstop(LimiterRef) ->\n    gen_server:stop(LimiterRef).\n\n%% gen_server callbacks\ninit([Config] = _Args) ->\n    Id = maps:get(id, Config),\n\n    IsDisabled = maps:get(no_limit, Config, false),\n    IsManualReductionDisabled = maps:get(is_manual_reduction_disabled, Config, false),\n\n    LeakyTickMs = maps:get(leaky_tick_ms, Config, ?DEFAULT_HTTP_API_LIMITER_GENERAL_LEAKY_TICK_INTERVAL),\n    TimestampCleanupTickMs = maps:get(timestamp_cleanup_tick_ms, Config,\n                                      ?DEFAULT_HTTP_API_LIMITER_TIMESTAMP_CLEANUP_INTERVAL),\n    TimestampCleanupExpiry = maps:get(timestamp_cleanup_expiry, Config,\n                                      ?DEFAULT_HTTP_API_LIMITER_TIMESTAMP_CLEANUP_EXPIRY),\n    LeakyRateLimit = maps:get(leaky_rate_limit, Config, ?DEFAULT_HTTP_API_LIMITER_GENERAL_LEAKY_LIMIT),\n    ConcurrencyLimit = maps:get(concurrency_limit, Config, ?DEFAULT_HTTP_API_LIMITER_GENERAL_CONCURRENCY_LIMIT),\n    TickReduction = maps:get(tick_reduction, Config,\n                             ?DEFAULT_HTTP_API_LIMITER_GENERAL_LEAKY_TICK_REDUCTION),\n    SlidingWindowDuration = maps:get(sliding_window_duration, Config,\n                                     ?DEFAULT_HTTP_API_LIMITER_GENERAL_SLIDING_WINDOW_DURATION),\n    SlidingWindowLimit = maps:get(sliding_window_limit, Config,\n                                  ?DEFAULT_HTTP_API_LIMITER_GENERAL_SLIDING_WINDOW_LIMIT),\n\n    {ok, LeakyRef} = timer:send_interval(LeakyTickMs, self(), {tick, leaky_bucket_reduction}),\n    {ok, TsRef} = timer:send_interval(TimestampCleanupTickMs, self(), {tick, sliding_window_timestamp_cleanup}),\n    {ok, #{\n           id => atom_to_list(Id),\n           is_disabled => IsDisabled,\n           is_manual_reduction_disabled => IsManualReductionDisabled,\n           leaky_tick_timer_ref => LeakyRef,\n           timestamp_cleanup_timer_ref => TsRef,\n           leaky_tick_ms => LeakyTickMs,\n           timestamp_cleanup_tick_ms => TimestampCleanupTickMs,\n           timestamp_cleanup_expiry => TimestampCleanupExpiry,\n           tick_reduction => TickReduction,\n           leaky_rate_limit => LeakyRateLimit,\n           concurrency_limit => ConcurrencyLimit,\n           concurrent_requests => #{}, %% Peer -> List of {MonitorRef, Pid}\n           concurrent_monitors => #{}, %% MonitorRef -> Peer\n           leaky_tokens => #{}, %% Peer -> Leaky Bucket tokens\n           sliding_window_duration => SlidingWindowDuration,\n           sliding_window_limit => SlidingWindowLimit,\n           sliding_timestamps => #{} %% Peer -> Ordered list of timestamps\n          }}.\n\nhandle_call(reset_all, _From, State) ->\n    {reply, ok, State#{concurrent_requests => #{},\n                       concurrent_monitors => #{},\n                       leaky_tokens => #{},\n                       sliding_timestamps => #{}}};\nhandle_call({register_or_reject, Peer}, {FromPid, _},\n            State = #{id := Id,\n                      is_disabled := IsDisabled,\n                      leaky_rate_limit := LeakyRateLimit,\n                      leaky_tokens := LeakyTokens,\n       
               concurrency_limit := ConcurrencyLimit,\n                      concurrent_requests := ConcurrentRequests,\n                      concurrent_monitors := ConcurrentMonitors,\n                      sliding_window_duration := SlidingWindowDuration,\n                      sliding_window_limit := SlidingWindowLimit,\n                      sliding_timestamps := SlidingTimestamps\n                     }) ->\n    Now = arweave_limiter_time:ts_now(),\n    Tokens = maps:get(Peer, LeakyTokens, 0) + 1,\n    Concurrency = length(maps:get(Peer, ConcurrentRequests, [])) + 1,\n\n    SlidingTimestampsForPeer0 =\n        expire_and_get_requests(Peer, SlidingTimestamps, SlidingWindowDuration, Now),\n    case IsDisabled of\n        true ->\n            {reply, {register, no_limiting_applied}, State};\n        _ ->\n            case Concurrency > ConcurrencyLimit of\n                true ->\n                    %% Concurrency Hard Limit\n                    ?LOG_DEBUG([{event, ar_limiter_reject}, {reason, concurrency},\n                                  {peer, Peer}, {id, Id}]),\n                    {reply, {reject, concurrency, data}, State};\n                _ ->\n                    case length(SlidingTimestampsForPeer0) + 1 > SlidingWindowLimit of\n                        true ->\n                            %% Sliding Window limited, check Leaky Bucket Tokens\n                            case Tokens > LeakyRateLimit of\n                                true ->\n                                    %% Burst exhausted with the Leaky Tokens\n                                    ?LOG_DEBUG([{event, ar_limiter_reject}, {reason, rate_limit},\n                                                  {sliding_window_limit, SlidingWindowLimit},\n                                                  {leaky_rate_limit, LeakyRateLimit},\n                                                  {peer, Peer}, {id, Id}]),\n                                    {reply, {reject, rate_limit, data}, State};\n                                false ->\n                                    NewLeakyTokens = update_token(Peer, Tokens, LeakyTokens),\n                                    {NewRequests, NewMonitors} =\n                                        register_concurrent(\n                                          Peer, FromPid, ConcurrentRequests, ConcurrentMonitors),\n                                    {reply, {register, leaky},\n                                     State#{leaky_tokens => NewLeakyTokens,\n                                            concurrent_requests => NewRequests,\n                                            concurrent_monitors => NewMonitors}}\n                            end;\n                        _ ->\n                            {NewRequests, NewMonitors} =\n                                register_concurrent(\n                                  Peer, FromPid, ConcurrentRequests, ConcurrentMonitors),\n                            SlidingTimestampsForPeer1 = add_and_order_timestamps(Now, SlidingTimestampsForPeer0),\n                            NewSlidingTimestamps = SlidingTimestamps#{Peer => SlidingTimestampsForPeer1},\n                            {reply, {register, sliding}, State#{sliding_timestamps => NewSlidingTimestamps,\n                                                                concurrent_requests => NewRequests,\n                                                                concurrent_monitors => NewMonitors}}\n                    end\n            end\n    end;\nhandle_call({reduce_for_peer, 
Peer}, _From, State =\n                #{is_manual_reduction_disabled := false,\n                  leaky_tokens := LeakyTokens}) ->\n    NewLeakyTokens = do_reduce_for_peer(Peer, LeakyTokens),\n    {reply, ok, State#{leaky_tokens => NewLeakyTokens}};\nhandle_call({reduce_for_peer, _Peer}, _From, State =\n                #{is_manual_reduction_disabled := true}) ->\n    {reply, disabled, State};\nhandle_call(get_info, _From, State =\n                #{sliding_timestamps := SlidingTimestamps,\n                  leaky_tokens := LeakyTokens,\n                  concurrent_requests := ConcurrentRequests,\n                  concurrent_monitors := ConcurrentMonitors}) ->\n    {reply, #{sliding_timestamps => SlidingTimestamps,\n              leaky_tokens => LeakyTokens,\n              concurrent_requests => ConcurrentRequests,\n              concurrent_monitors => ConcurrentMonitors}, State};\nhandle_call(get_config, _From, State) ->\n    {reply, filter_state_for_config(State), State};\nhandle_call(Request, From, State = #{id := Id}) ->\n    ?LOG_WARNING([{event, unhandled_call}, {id, Id}, {module, ?MODULE},\n                  {request, Request}, {from, From},\n                  {config, filter_state_for_config(State)}]),\n    {reply, ok, State}.\n\nhandle_cast(_Request, State) ->\n    {noreply, State}.\n\nhandle_info({tick, sliding_window_timestamp_cleanup},\n            State = #{id := Id, sliding_timestamps := SlidingTimestamps,\n                      timestamp_cleanup_expiry := CleanupExpiry}) ->\n    Now = arweave_limiter_time:ts_now(),\n    NewSlidingTimestamps = cleanup_expired_sliding_peers(SlidingTimestamps, CleanupExpiry, Now),\n    Deleted = maps:size(SlidingTimestamps) - maps:size(NewSlidingTimestamps),\n    prometheus_counter:inc(ar_limiter_cleanup_tick_expired_sliding_peers_deleted_total, [Id], Deleted),\n    {noreply, State#{sliding_timestamps => NewSlidingTimestamps}};\nhandle_info({tick, leaky_bucket_reduction},\n            State = #{id := Id, tick_reduction := TickReduction, leaky_tokens := LeakyTokens}) ->\n    %% This is going to be more precise than ar_limiter_leaky_ticks*ar_limiter_peers\n    prometheus_counter:inc(ar_limiter_leaky_ticks, [Id]),\n    SizeBefore = maps:size(LeakyTokens),\n    prometheus_counter:inc(ar_limiter_leaky_tick_reductions_peer, [Id], SizeBefore),\n    NewTokens =\n        maps:fold(fun(Key, Value, AccIn) ->\n                          fold_decrease_rate(Id, Key, Value, AccIn, TickReduction)\n                  end, #{}, LeakyTokens),\n    prometheus_counter:inc(\n      ar_limiter_leaky_tick_delete_peer_total, [Id], SizeBefore - maps:size(NewTokens)),\n    {noreply, State#{leaky_tokens => NewTokens}};\nhandle_info({'DOWN', MonitorRef, process, Pid, Reason},\n            State = #{concurrent_requests := ConcurrentRequests,\n                      concurrent_monitors := ConcurrentMonitors}) ->\n    {NewConcurrentRequests, NewConcurrentMonitors} =\n        remove_concurrent(\n          MonitorRef, Pid, Reason, ConcurrentRequests, ConcurrentMonitors),\n    {noreply, State#{concurrent_requests => NewConcurrentRequests,\n                     concurrent_monitors => NewConcurrentMonitors}};\nhandle_info(Info, State = #{id := Id}) ->\n    ?LOG_WARNING([{event, unhandled_info}, {id, Id}, {module, ?MODULE}, {info, Info}]),\n    {noreply, State}.\n\nterminate(_Reason, #{leaky_tick_timer_ref := _LeakyRef,\n                     timestamp_cleanup_timer_ref := _TsRef} = _State) ->\n    ok.\n\ncode_change(_OldVsn, State, _Extra) ->\n    {ok, State}.\n\nformat_status(_Opt, 
Status) ->\n    Status.\n\n%%% Internal functions\n\n%% Sliding window manipulation\nexpire_and_get_requests(Peer, SlidingTimestamps, SlidingWindowDuration, Now) ->\n    Timestamps = maps:get(Peer, SlidingTimestamps, []),\n    drop_expired(Timestamps, SlidingWindowDuration, Now).\n\ndrop_expired([TS|Timestamps], WindowDuration, Now) when TS + WindowDuration =< Now ->\n    drop_expired(Timestamps, WindowDuration, Now);\ndrop_expired(Timestamps, _WindowDuration, _Now) ->\n    Timestamps.\n\n%% There is no idiomatic way of adding an element to the end of a list in Erlang.\n%% So, we reverse the list, add the new element to the beginning, and reverse it again.\nadd_and_order_timestamps(Ts, Timestamps) ->\n    lists:reverse(do_add_and_order_timestamps(Ts, lists:reverse(Timestamps))).\n\ndo_add_and_order_timestamps(Ts, []) ->\n    [Ts];\ndo_add_and_order_timestamps(Ts, [Head | _Rest] = Timestamps) when Ts >= Head ->\n    [Ts | Timestamps];\ndo_add_and_order_timestamps(Ts, [Head | Rest]) ->\n    %% This clause shouldn't really be reached, because we use monotonic time\n    %% for timestamps.\n    [Head | do_add_and_order_timestamps(Ts, Rest)].\n\ncleanup_expired_sliding_peers(SlidingTimestamps, WindowDuration, Now) ->\n    maps:fold(fun(Peer, TsList, AccIn) ->\n                      case drop_expired(TsList, WindowDuration, Now) of\n                          [] ->\n                              AccIn;\n                          ValidTimestamps ->\n                              AccIn#{Peer => ValidTimestamps}\n                      end\n              end, #{}, SlidingTimestamps).\n\n%% Token manipulation\nupdate_token(Peer, Token, LeakyToken) ->\n    maps:put(Peer, Token, LeakyToken).\n\ndo_reduce_for_peer(Peer, LeakyTokens) ->\n    case maps:get(Peer, LeakyTokens, 0) of\n        0 ->\n            LeakyTokens;\n        Tokens ->\n            LeakyTokens#{Peer => Tokens - 1}\n    end.\n\nfold_decrease_rate(_Id, _Key, Counter, Acc, _TickReduction)\n  when is_integer(Counter), Counter =< 0 ->\n    Acc;\nfold_decrease_rate(Id, Key, Counter, Acc, TickReduction) when Counter < TickReduction ->\n    prometheus_counter:inc(ar_limiter_leaky_tick_token_reductions_total, [Id], Counter),\n    maps:put(Key, 0, Acc);\nfold_decrease_rate(Id, Key, Counter, Acc, TickReduction) ->\n    prometheus_counter:inc(ar_limiter_leaky_tick_token_reductions_total, [Id], TickReduction),\n    maps:put(Key, Counter-TickReduction, Acc).\n\n%% Concurrency magic\nregister_concurrent(Peer, Pid, ConcurrentRequests, ConcurrentMonitors) ->\n    MonitorRef = erlang:monitor(process, Pid),\n    Processes = maps:get(Peer, ConcurrentRequests, []),\n    NewConcurrentRequests = maps:put(Peer, [{MonitorRef, Pid} | Processes], ConcurrentRequests),\n    NewConcurrentMonitors = maps:put(MonitorRef, Peer, ConcurrentMonitors),\n    {NewConcurrentRequests, NewConcurrentMonitors}.\n\nremove_concurrent(MonitorRef, _Pid, _Reason, ConcurrentRequests, ConcurrentMonitors) ->\n    %% Peer for a MonitorRef shouldn't be undefined, because we started to\n    %% monitor the process as the first thing when register was called.\n    case maps:get(MonitorRef, ConcurrentMonitors, not_found) of\n        not_found ->\n            %% MonitorRef not found. This happens when we reset all the peers\n            %% manually. 
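(e.g. via reset_all/1). 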
This also means everything else has been deleted as well.\n            %% Nothing to do, just return the current state.\n            {ConcurrentRequests, ConcurrentMonitors};\n        Peer ->\n            ConcurrentForPeer = maps:get(Peer, ConcurrentRequests),\n            NewConcurrentForPeer = proplists:delete(MonitorRef, ConcurrentForPeer),\n            NewConcurrentRequests =\n                case NewConcurrentForPeer of\n                    [] ->\n                        maps:remove(Peer, ConcurrentRequests);\n                    _ ->\n                        ConcurrentRequests#{Peer => NewConcurrentForPeer}\n                end,\n            NewConcurrentMonitors = maps:remove(MonitorRef, ConcurrentMonitors),\n            {NewConcurrentRequests, NewConcurrentMonitors}\n    end.\n\nfilter_state_for_config(#{id := Id,\n                          is_disabled := IsDisabled,\n                          is_manual_reduction_disabled := IsManualReductionDisabled,\n                          leaky_tick_ms := LeakyTickMs,\n                          timestamp_cleanup_tick_ms := TimestampCleanupTickMs,\n                          timestamp_cleanup_expiry := TimestampCleanupExpiry,\n                          tick_reduction := TickReduction,\n                          leaky_rate_limit := LeakyRateLimit,\n                          concurrency_limit := ConcurrencyLimit,\n                          sliding_window_duration := SlidingWindowDuration,\n                          sliding_window_limit := SlidingWindowLimit}) ->\n    #{id => Id,\n      is_disabled => IsDisabled,\n      is_manual_reduction_disabled => IsManualReductionDisabled,\n      leaky_tick_ms => LeakyTickMs,\n      timestamp_cleanup_tick_ms => TimestampCleanupTickMs,\n      timestamp_cleanup_expiry => TimestampCleanupExpiry,\n      tick_reduction => TickReduction,\n      leaky_rate_limit => LeakyRateLimit,\n      concurrency_limit => ConcurrencyLimit,\n      sliding_window_duration => SlidingWindowDuration,\n      sliding_window_limit => SlidingWindowLimit}.\n"
  },
  {
    "path": "apps/arweave_limiter/src/arweave_limiter_metrics.erl",
    "content": "-module(arweave_limiter_metrics).\n\n-export([register/0]).\n\n%%%===================================================================\n%%% Public interface.\n%%%===================================================================\n\n%% @doc Declare Arweave Rate Limiter metrics.\nregister() ->\n    ok = prometheus_histogram:new([\n                                   {name, ar_limiter_response_time_microseconds},\n                                   {help, \"Time it took for the limiter to respond to requests\"},\n                                   %% buckets might be reduced for production\n                                   {buckets, [0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50]},\n                                   {labels, [limiter_id]}]),\n\n    ok = prometheus_counter:new([\n                                 {name, ar_limiter_requests_total},\n                                 {help, \"The number of requests the limiter has processed\"},\n                                 {labels, [limiter_id]}]),\n    ok = prometheus_counter:new([{name, ar_limiter_rejected_total},\n                                 {help, \"The number of request were rejected by the limiter\"},\n                                 {labels, [limiter_id, reason]}\n                                ]),\n    ok = prometheus_counter:new([{name, ar_limiter_reduce_requests_total},\n                                 {help, \"The number of reduce request by peer in total\"},\n                                 {labels, [limiter_id]}\n                                ]),\n    ok = prometheus_gauge:new([\n                               {name, ar_limiter_peers},\n                               {help, \"The number of peers the limiter is monitoring currently\"},\n                               %% limiting type:\n                               %% sliding_window -> baseline, leaky_bucket -> burst, concurrency -> concurrency\n                               {labels, [limiter_id, limiting_type]}]),\n    ok = prometheus_gauge:new([\n                               {name, ar_limiter_tracked_items_total},\n                               {help, \"The number of timestamps, leaky tokens, concurrent processes are tracked\"},\n                               %% limiting type:\n                               %% sliding_window -> baseline, leaky_bucket -> burst, concurrency -> concurrency\n                               {labels, [limiter_id, limiting_type]}]),\n    ok = prometheus_counter:new([\n                                 {name, ar_limiter_leaky_ticks},\n                                 {help, \"The number of leaky bucket ticks the limiter has processed\"},\n                                 {labels, [limiter_id]}]),\n    ok = prometheus_counter:new([\n                                 {name, ar_limiter_leaky_tick_delete_peer_total},\n                                 {help, \"The number of times a peer has been dropped from the leaky bucket token register\"},\n                                 {labels, [limiter_id]}]),\n    ok = prometheus_counter:new([\n                                 {name, ar_limiter_cleanup_tick_expired_sliding_peers_deleted_total},\n                                 {help, \"The number of times a peer has been dropped from the sliding window timestamp register\"},\n                                 {labels, [limiter_id]}]),\n    ok = prometheus_counter:new([\n                                 %% To show how much tokens clients are burning for bursts.\n                                 {name, 
ar_limiter_leaky_tick_token_reductions_total},\n                                 {help, \"All the consumed leaky bucket tokens that were reduced for all peers in total\"},\n                                 {labels, [limiter_id]}]),\n    ok = prometheus_counter:new([\n                                 %% To see how many peers bite into their burst tokens (in a period).\n                                 {name, ar_limiter_leaky_tick_reductions_peer},\n                                 {help, \"The number of times a leaky bucket token reduction had to be performed for a peer\"},\n                                 {labels, [limiter_id]}]),\n    ok.\n"
  },
  {
    "path": "apps/arweave_limiter/src/arweave_limiter_metrics_collector.erl",
    "content": "-module(arweave_limiter_metrics_collector).\n\n-behaviour(prometheus_collector).\n\n-export([\n\tderegister_cleanup/1,\n\tcollect_mf/2\n]).\n\n-ifdef(AR_TEST).\n-export([\n         metrics/0,\n         tracked_items/1,\n         peers/1\n        ]).\n-endif.\n\n-import(prometheus_model_helpers, [create_mf/4]).\n\n-include_lib(\"prometheus/include/prometheus.hrl\").\n-define(METRIC_NAME_PREFIX, \"arweave_\").\n\n%% ===================================================================\n%% API\n%% ===================================================================\n\n%% called to collect Metric Families\n-spec collect_mf(_Registry, Callback) -> ok when\n\t_Registry :: prometheus_registry:registry(),\n\tCallback :: prometheus_collector:callback().\ncollect_mf(_Registry, Callback) ->\n\tMetrics = metrics(),\n\t[add_metric_family(Metric, Callback) || Metric <- Metrics],\n\tok.\n\n%% called when collector deregistered\nderegister_cleanup(_Registry) -> ok.\n\n%% ===================================================================\n%% Private functions\n%% ===================================================================\n\nadd_metric_family({Name, Type, Help, Metrics}, Callback) ->\n\tCallback(create_mf(?METRIC_NAME(Name), Help, Type, Metrics)).\n\nmetrics() ->\n    AllInfo = arweave_limiter_sup:all_info(),\n    [\n     {ar_limiter_tracked_items_total, gauge, \"tracked requests, timestamps, leaky tokens\", tracked_items(AllInfo)},\n     {ar_limiter_peers, gauge, \"The number of peers the limiter is monitoring currently\", peers(AllInfo)}\n    ].\n\ntracked_items(AllInfo) ->\n    lists:foldl(fun tracked_items_info/2, [], AllInfo).\n\ntracked_items_info({Id, Info}, Acc) ->\n    SlidingTimestamps = count_sliding_timestamps(Info),\n    Monitors = maps:get(concurrent_monitors, Info),\n    LeakyPeers = maps:get(leaky_tokens, Info),\n    Items = [\n             {[{limiter_id, Id}, {limiting_type, concurrency}], maps:size(Monitors)},\n             {[{limiter_id, Id}, {limiting_type, leaky_bucket_tokens}], maps:size(LeakyPeers)},\n             {[{limiter_id, Id}, {limiting_type, sliding_window_timestamps}], SlidingTimestamps}\n            ],\n    Items ++ Acc.\n\ncount_sliding_timestamps(Info) ->\n    SlidingTimestamps = maps:get(sliding_timestamps, Info),\n    maps:fold(fun(_Peer, TimestampList, Acc) ->\n                        length(TimestampList) + Acc\n                end, 0, SlidingTimestamps).\n\npeers(AllInfo) ->\n    lists:foldl(fun peers_info/2, [], AllInfo).\n\npeers_info({Id, Info}, Acc) ->\n    ConcurrentRequests = maps:get(concurrent_requests, Info),\n    LeakyPeers = maps:get(leaky_tokens, Info),\n    SlidingPeers = maps:get(sliding_timestamps, Info),       \n    Items = [\n             {[{limiter_id, Id}, {limiting_type, concurrency}], maps:size(ConcurrentRequests)},\n             {[{limiter_id, Id}, {limiting_type, leaky_bucket_tokens}], maps:size(LeakyPeers)},\n             {[{limiter_id, Id}, {limiting_type, sliding_window_timestamps}], maps:size(SlidingPeers)}\n            ],\n    Items ++ Acc.\n"
  },
  {
    "path": "apps/arweave_limiter/src/arweave_limiter_sup.erl",
    "content": "-module(arweave_limiter_sup).\n-behaviour(supervisor).\n\n%% API\n-export([start_link/0, all_info/0]).\n\n-ifdef(AR_TEST).\n-export([start_link/1, child_spec/1, reset_all/0]).\n-endif.\n\n%% Supervisor callbacks\n-export([init/1]).\n\n-include_lib(\"arweave_config/include/arweave_config.hrl\").\n-include_lib(\"arweave/include/ar_sup.hrl\").\n\n-include_lib(\"kernel/include/logger.hrl\").\n\n%% ===================================================================\n%% API functions\n%% ===================================================================\nstart_link() ->\n    start_link(get_limiter_config()).\n\nstart_link(Config) ->\n    supervisor:start_link({local, ?MODULE}, ?MODULE, [Config]).\n\n%% ===================================================================\n%% Supervisor callbacks\n%% ===================================================================\ninit([Config]) ->\n    ok = arweave_limiter_metrics:register(),\n    {ok, {supervisor_spec(Config), children_spec(Config)}}.\n\nsupervisor_spec(_Config) ->\n    #{ strategy => one_for_all,\n       intensity => 5,\n       period => 10 }.\n\n%%--------------------------------------------------------------------\n%% Child spec generation based on Config.\n%%--------------------------------------------------------------------\nchildren_spec(Configs) ->\n    [child_spec(Config) || Config <- Configs].\n\nchild_spec(#{id := Id} = Config) ->\n    #{ id => Id,\n       start => {arweave_limiter_group, start_link, [Id, Config]},\n       type => worker,\n       shutdown => ?SHUTDOWN_TIMEOUT}.\n\nget_limiter_config() ->\n    {ok, Config} = arweave_config:get_env(),\n    [\n     #{id => chunk,\n       sliding_window_limit => Config#config.'http_api.limiter.chunk.sliding_window_limit',\n       sliding_window_duration => Config#config.'http_api.limiter.chunk.sliding_window_duration',\n       timestamp_cleanup_tick_ms =>\n           Config#config.'http_api.limiter.chunk.sliding_window_timestamp_cleanup_interval',\n       timestamp_cleanup_expiry =>\n           Config#config.'http_api.limiter.chunk.sliding_window_timestamp_cleanup_expiry',\n       leaky_rate_limit => Config#config.'http_api.limiter.chunk.leaky_limit',\n       leaky_tick_ms => Config#config.'http_api.limiter.chunk.leaky_tick_interval',\n       tick_reduction => Config#config.'http_api.limiter.chunk.leaky_tick_reduction',\n       concurrency_limit => Config#config.'http_api.limiter.chunk.concurrency_limit',\n       is_manual_reduction_disabled => Config#config.'http_api.limiter.chunk.is_manual_reduction_disabled'},\n\n     #{id => data_sync_record,\n       sliding_window_limit => Config#config.'http_api.limiter.data_sync_record.sliding_window_limit',\n       sliding_window_duration => Config#config.'http_api.limiter.data_sync_record.sliding_window_duration',\n       timestamp_cleanup_tick_ms =>\n           Config#config.'http_api.limiter.data_sync_record.sliding_window_timestamp_cleanup_interval',\n       timestamp_cleanup_expiry =>\n           Config#config.'http_api.limiter.data_sync_record.sliding_window_timestamp_cleanup_expiry',\n       leaky_rate_limit => Config#config.'http_api.limiter.data_sync_record.leaky_limit',\n       leaky_tick_ms => Config#config.'http_api.limiter.data_sync_record.leaky_tick_interval',\n       tick_reduction => Config#config.'http_api.limiter.data_sync_record.leaky_tick_reduction',\n       concurrency_limit => Config#config.'http_api.limiter.data_sync_record.concurrency_limit',\n       is_manual_reduction_disabled => 
Config#config.'http_api.limiter.data_sync_record.is_manual_reduction_disabled'},\n\n     #{id => recent_hash_list_diff,\n       sliding_window_limit => Config#config.'http_api.limiter.recent_hash_list_diff.sliding_window_limit',\n       sliding_window_duration => Config#config.'http_api.limiter.recent_hash_list_diff.sliding_window_duration',\n       timestamp_cleanup_tick_ms =>\n           Config#config.'http_api.limiter.recent_hash_list_diff.sliding_window_timestamp_cleanup_interval',\n       timestamp_cleanup_expiry =>\n           Config#config.'http_api.limiter.recent_hash_list_diff.sliding_window_timestamp_cleanup_expiry',\n       leaky_rate_limit => Config#config.'http_api.limiter.recent_hash_list_diff.leaky_limit',\n       leaky_tick_ms => Config#config.'http_api.limiter.recent_hash_list_diff.leaky_tick_interval',\n       tick_reduction => Config#config.'http_api.limiter.recent_hash_list_diff.leaky_tick_reduction',\n       concurrency_limit => Config#config.'http_api.limiter.recent_hash_list_diff.concurrency_limit',\n       is_manual_reduction_disabled => Config#config.'http_api.limiter.recent_hash_list_diff.is_manual_reduction_disabled'},\n\n     #{id => block_index,\n       sliding_window_limit => Config#config.'http_api.limiter.block_index.sliding_window_limit',\n       sliding_window_duration => Config#config.'http_api.limiter.block_index.sliding_window_duration',\n       timestamp_cleanup_tick_ms =>\n           Config#config.'http_api.limiter.block_index.sliding_window_timestamp_cleanup_interval',\n       timestamp_cleanup_expiry =>\n           Config#config.'http_api.limiter.block_index.sliding_window_timestamp_cleanup_expiry',\n       leaky_rate_limit => Config#config.'http_api.limiter.block_index.leaky_limit',\n       leaky_tick_ms => Config#config.'http_api.limiter.block_index.leaky_tick_interval',\n       tick_reduction => Config#config.'http_api.limiter.block_index.leaky_tick_reduction',\n       concurrency_limit => Config#config.'http_api.limiter.block_index.concurrency_limit',\n       is_manual_reduction_disabled => Config#config.'http_api.limiter.block_index.is_manual_reduction_disabled'},\n\n     #{id => wallet_list,\n       sliding_window_limit => Config#config.'http_api.limiter.wallet_list.sliding_window_limit',\n       sliding_window_duration => Config#config.'http_api.limiter.wallet_list.sliding_window_duration',\n       timestamp_cleanup_tick_ms =>\n           Config#config.'http_api.limiter.wallet_list.sliding_window_timestamp_cleanup_interval',\n       timestamp_cleanup_expiry =>\n           Config#config.'http_api.limiter.wallet_list.sliding_window_timestamp_cleanup_expiry',\n       leaky_rate_limit => Config#config.'http_api.limiter.wallet_list.leaky_limit',\n       leaky_tick_ms => Config#config.'http_api.limiter.wallet_list.leaky_tick_interval',\n       tick_reduction => Config#config.'http_api.limiter.wallet_list.leaky_tick_reduction',\n       concurrency_limit => Config#config.'http_api.limiter.wallet_list.concurrency_limit',\n       is_manual_reduction_disabled => Config#config.'http_api.limiter.wallet_list.is_manual_reduction_disabled'},\n\n     #{id => get_vdf,\n       sliding_window_limit => Config#config.'http_api.limiter.get_vdf.sliding_window_limit',\n       sliding_window_duration => Config#config.'http_api.limiter.get_vdf.sliding_window_duration',\n       timestamp_cleanup_tick_ms =>\n           Config#config.'http_api.limiter.get_vdf.sliding_window_timestamp_cleanup_interval',\n       timestamp_cleanup_expiry =>\n           
Config#config.'http_api.limiter.get_vdf.sliding_window_timestamp_cleanup_expiry',\n       leaky_rate_limit => Config#config.'http_api.limiter.get_vdf.leaky_limit',\n       leaky_tick_ms => Config#config.'http_api.limiter.get_vdf.leaky_tick_interval',\n       tick_reduction => Config#config.'http_api.limiter.get_vdf.leaky_tick_reduction',\n       concurrency_limit => Config#config.'http_api.limiter.get_vdf.concurrency_limit',\n       is_manual_reduction_disabled => Config#config.'http_api.limiter.get_vdf.is_manual_reduction_disabled'},\n\n     #{id => get_vdf_session,\n       sliding_window_limit => Config#config.'http_api.limiter.get_vdf_session.sliding_window_limit',\n       sliding_window_duration => Config#config.'http_api.limiter.get_vdf_session.sliding_window_duration',\n       timestamp_cleanup_tick_ms =>\n           Config#config.'http_api.limiter.get_vdf_session.sliding_window_timestamp_cleanup_interval',\n       timestamp_cleanup_expiry =>\n           Config#config.'http_api.limiter.get_vdf_session.sliding_window_timestamp_cleanup_expiry',\n       leaky_rate_limit => Config#config.'http_api.limiter.get_vdf_session.leaky_limit',\n       leaky_tick_ms => Config#config.'http_api.limiter.get_vdf_session.leaky_tick_interval',\n       tick_reduction => Config#config.'http_api.limiter.get_vdf_session.leaky_tick_reduction',\n       concurrency_limit => Config#config.'http_api.limiter.get_vdf_session.concurrency_limit',\n       is_manual_reduction_disabled => Config#config.'http_api.limiter.get_vdf_session.is_manual_reduction_disabled'},\n\n     #{id => get_previous_vdf_session,\n       sliding_window_limit => Config#config.'http_api.limiter.get_previous_vdf_session.sliding_window_limit',\n       sliding_window_duration =>\n           Config#config.'http_api.limiter.get_previous_vdf_session.sliding_window_duration',\n       timestamp_cleanup_tick_ms =>\n           Config#config.'http_api.limiter.get_previous_vdf_session.sliding_window_timestamp_cleanup_interval',\n       timestamp_cleanup_expiry =>\n           Config#config.'http_api.limiter.get_previous_vdf_session.sliding_window_timestamp_cleanup_expiry',\n       leaky_rate_limit => Config#config.'http_api.limiter.get_previous_vdf_session.leaky_limit',\n       leaky_tick_ms => Config#config.'http_api.limiter.get_previous_vdf_session.leaky_tick_interval',\n       tick_reduction => Config#config.'http_api.limiter.get_previous_vdf_session.leaky_tick_reduction',\n       concurrency_limit => Config#config.'http_api.limiter.get_previous_vdf_session.concurrency_limit',\n       is_manual_reduction_disabled => Config#config.'http_api.limiter.get_previous_vdf_session.is_manual_reduction_disabled'},\n\n     #{id => general,\n       sliding_window_limit => Config#config.'http_api.limiter.general.sliding_window_limit',\n       sliding_window_duration => Config#config.'http_api.limiter.general.sliding_window_duration',\n       timestamp_cleanup_tick_ms =>\n           Config#config.'http_api.limiter.general.sliding_window_timestamp_cleanup_interval',\n       timestamp_cleanup_expiry =>\n           Config#config.'http_api.limiter.general.sliding_window_timestamp_cleanup_expiry',\n       leaky_rate_limit => Config#config.'http_api.limiter.general.leaky_limit',\n       leaky_tick_ms => Config#config.'http_api.limiter.general.leaky_tick_interval',\n       tick_reduction => Config#config.'http_api.limiter.general.leaky_tick_reduction',\n       concurrency_limit => Config#config.'http_api.limiter.general.concurrency_limit',\n       
is_manual_reduction_disabled => Config#config.'http_api.limiter.general.is_manual_reduction_disabled'},\n\n     #{id => metrics,\n       sliding_window_limit => Config#config.'http_api.limiter.metrics.sliding_window_limit',\n       sliding_window_duration => Config#config.'http_api.limiter.metrics.sliding_window_duration',\n       timestamp_cleanup_tick_ms =>\n           Config#config.'http_api.limiter.metrics.sliding_window_timestamp_cleanup_interval',\n       timestamp_cleanup_expiry =>\n           Config#config.'http_api.limiter.metrics.sliding_window_timestamp_cleanup_expiry',\n       leaky_rate_limit => Config#config.'http_api.limiter.metrics.leaky_limit',\n       leaky_tick_ms => Config#config.'http_api.limiter.metrics.leaky_tick_interval',\n       tick_reduction => Config#config.'http_api.limiter.metrics.leaky_tick_reduction',\n       concurrency_limit => Config#config.'http_api.limiter.metrics.concurrency_limit',\n       is_manual_reduction_disabled => Config#config.'http_api.limiter.metrics.is_manual_reduction_disabled'},\n     %% Local peers\n     #{id => local_peers,\n       no_limit => true}\n    ].\n\nall_info() ->\n    Children = supervisor:which_children(?MODULE),\n    [{Id, arweave_limiter_group:info(Id)}  || {Id, _Child, _Type, _Modules} <- Children].\n\nreset_all() ->\n    Children = supervisor:which_children(?MODULE),\n    [{Id, arweave_limiter_group:reset_all(Id)}  || {Id, _Child, _Type, _Modules} <- Children].\n"
  },
  {
    "path": "apps/arweave_limiter/src/arweave_limiter_time.erl",
    "content": "%%%\n%%% @doc Rate limiter clock and time management library\n%%%\n\n%%% NOTE: this module seems pretty redundant. However, moving erlang:monotonic_time/1\n%%%       into this module allows us to test production code without alteration\n%%%       and mock time related functions, and so manipulate and control time precisely\n%%%       in tests. So here it is.\n\n-module(arweave_limiter_time).\n\n-export([\n         ts_now/0\n        ]).\n\nts_now() ->\n    erlang:monotonic_time(millisecond).\n"
  },
  {
    "path": "apps/arweave_limiter/test/arweave_limiter_group_tests.erl",
    "content": "-module(arweave_limiter_group_tests).\n\n-include_lib(\"eunit/include/eunit.hrl\").\n-include_lib(\"arweave/include/ar.hrl\").\n\n-define(M, arweave_limiter_group).\n-define(TABLE, eunit_arweave_limiter_tests_mock).\n-define(KEY, ts_now).\n-define(TEST_LIMITER, test_limiter).\n\n-define(setTsMock(Ts), ets:insert(?TABLE, {?KEY, Ts})).\n\n-define(assertHandlerRegisterOrRejectCall(LimiterRef, Pattern, Peer, Now),\n \t((fun () ->\n                  ?assert(?setTsMock(Now)),\n                  spawn_link(fun() ->\n                                     ?assertMatch(\n                                        Pattern,\n                                        ?M:register_or_reject_call(LimiterRef, Peer)),\n                                     receive\n                                         done -> ok\n                                     end\n                             end)\n\t  end)())).\n\nexpire_test() ->\n    IP = {1,2,3,4},\n    ?assertEqual([], ?M:expire_and_get_requests(IP, #{}, 1000, 1)),\n    ?assertEqual([1], ?M:drop_expired([1], 1000, 500)),\n    ?assertEqual([1], ?M:expire_and_get_requests(IP, #{IP => [1]}, 1000, 500)),\n    ?assertEqual([1, 500], ?M:expire_and_get_requests(IP, #{IP => [1, 500]}, 1000, 501)),\n    ?assertEqual([500, 501], ?M:expire_and_get_requests(IP, #{IP => [1, 500, 501]}, 1000, 1100)),\n    ?assertEqual([500, 501], ?M:expire_and_get_requests(IP, #{IP => [1, 500, 501]}, 1000, 1499)),\n    ?assertEqual([501], ?M:expire_and_get_requests(IP, #{IP => [1, 500, 501]}, 1000, 1500)),\n    ?assertEqual([], ?M:expire_and_get_requests(IP, #{IP => [1, 500, 501]}, 1000, 1501)),\n    ok.\n\nadd_and_order_test() ->\n    ?assertEqual([5], ?M:add_and_order_timestamps(5, [])),\n    ?assertEqual([1,2,3,4,5], ?M:add_and_order_timestamps(5, [1,2,3,4])),\n    ?assertEqual([1,2,3,4,5,6,7], ?M:add_and_order_timestamps(5, [1,2,3,4,6,7])),\n    ?assertEqual([5,7,8], ?M:add_and_order_timestamps(5, [7,8])),\n    ok.\n\ncleanup_timestamps_map_test() ->\n    IP1 = {1,2,3,4},\n    IP2 = {2,3,4,5},\n    ?assertEqual(\n       #{IP1 => [1],\n         IP2 => [500]\n        }, ?M:cleanup_expired_sliding_peers(\n              #{IP1 => [1],\n                IP2 => [500]}, 1000, 501)),\n    ?assertEqual(\n       #{%%IP1 => [1], - removed\n         IP2 => [500]\n        }, ?M:cleanup_expired_sliding_peers(\n              #{IP1 => [1],\n                IP2 => [500]}, 1000, 1100)),\n    Empty = ?M:cleanup_expired_sliding_peers(\n               #{IP1 => [1],\n                 IP2 => [500]}, 1000, 2100),\n    %% Now it's empty\n    ?assertEqual(0, maps:size(Empty)),\n    ok.\n\nsetup(Config) ->\n    ?TABLE = ets:new(?TABLE, [named_table, public]),\n    ?setTsMock(0),\n    {module, arweave_limiter_time} = code:ensure_loaded(arweave_limiter_time),\n\n    ok = meck:new(prometheus_counter, [passthrough]),\n    ok = meck:expect(prometheus_counter, inc, 2, ok),\n    ok = meck:expect(prometheus_counter, inc, 3, ok),\n\n    ok = meck:new(arweave_limiter_time, []),\n    ok = meck:expect(arweave_limiter_time, ts_now,\n                     fun() ->\n                             [{?KEY, Value}] = ets:lookup(?TABLE, ?KEY),\n                             Value\n                     end),\n    0 = arweave_limiter_time:ts_now(),\n    {ok, LimiterPid} = ?M:start_link(?TEST_LIMITER, Config),\n    LimiterPid.\n\ncleanup(_Config, _LimiterPid) ->\n    true = meck:validate(arweave_limiter_time),\n    true = meck:validate(prometheus_counter),\n    ok = meck:unload([prometheus_counter, arweave_limiter_time]),\n    
?M:stop(?TEST_LIMITER),\n    true = ets:delete(?TABLE),\n    ok.\n\nrate_limiter_process_test_() ->\n    {foreachx,\n     fun setup/1,\n     fun cleanup/2,\n     [{#{id => ?TEST_LIMITER,\n         tick_reduction => 1,\n         leaky_rate_limit => 0,\n         concurrency_limit => 5,\n         sliding_window_limit => 2,\n         sliding_window_duration => 1000,\n         timestamp_cleanup_expiry => 1000,\n         leaky_tick_ms => 100000},\n       fun(_Config, _LimiterPid) -> {\"sliding test\", fun simple_sliding_happy/0} end},\n      {#{id => ?TEST_LIMITER,\n         tick_reduction => 1,\n         leaky_rate_limit => 5,\n         concurrency_limit => 2,\n         sliding_window_limit => 0,\n         sliding_window_duration => 1000,\n         timestamp_cleanup_expiry => 1000,\n         leaky_tick_ms => 100000},\n       fun simple_leaky_happy_path/2},\n      {#{id => ?TEST_LIMITER,\n         tick_reduction => 1,\n         leaky_rate_limit=> 5,\n         concurrency_limit => 2,\n         sliding_window_limit => 0,\n         sliding_window_duration => 1000,\n         timestamp_cleanup_expiry => 1000,\n         leaky_tick_ms => 100000},\n       fun rate_limiter_rejected_due_concurrency/2},\n      {#{id => ?TEST_LIMITER,\n         tick_reduction => 1,\n         leaky_rate_limit => 2,\n         concurrency_limit => 5,\n         sliding_window_limit => 0,\n         sliding_window_duration => 1000,\n         timestamp_cleanup_expiry => 1000,\n         leaky_tick_ms => 100000},\n       fun rejected_due_leaky_rate/2},\n      {#{id => ?TEST_LIMITER,\n         tick_reduction => 1,\n         leaky_rate_limit => 1,\n         concurrency_limit => 10,\n         sliding_window_limit => 1,\n         sliding_window_duration => 100000,\n         leaky_tick_ms => 10000000,\n         timestamp_cleanup_expiry => 1000,\n         timestamp_cleanup_tick_ms => 1000000},\n       fun both_exhausted/2},\n      {#{id => ?TEST_LIMITER,\n         tick_reduction => 1,\n         leaky_rate_limit => 1,\n         concurrency_limit => 2,\n         sliding_window_limit => 1,\n         sliding_window_duration => 1000,\n         timestamp_cleanup_expiry => 1000,\n         leaky_tick_ms => 100000},\n       fun peer_cleanup/2},\n      {#{id => ?TEST_LIMITER,\n         tick_reduction => 1,\n         leaky_rate_limit => 5,\n         concurrency_limit => 10,\n         sliding_window_limit => 0,\n         sliding_window_duration => 1000,\n         timestamp_cleanup_expiry => 1000,\n         leaky_tick_ms => 100000},\n       fun leaky_manual_reduction/2},\n      {#{id => ?TEST_LIMITER,\n         is_manual_reduction_disabled => true,\n         tick_reduction => 1,\n         leaky_rate_limit => 5,\n         concurrency_limit => 10,\n         sliding_window_limit => 0,\n         sliding_window_duration => 1000,\n         timestamp_cleanup_expiry => 1000,\n         leaky_tick_ms => 100000},\n       fun leaky_manual_reduction_disabled/2}\n     ]}.\n\nsimple_sliding_happy() ->\n    IP = {1,2,3,4},\n\n    Caller1 = ?assertHandlerRegisterOrRejectCall(?TEST_LIMITER, {register, sliding}, IP, 1),\n    Caller1 ! done,\n\n    timer:sleep(100),\n    Info1 = ?M:info(?TEST_LIMITER),\n    ?assertMatch(#{sliding_timestamps := #{IP := [1]}}, Info1),\n    #{concurrent_requests := ConcurrentReqs1} = Info1,\n    ?assertEqual(0, maps:size(ConcurrentReqs1)),\n\n    Caller2 = ?assertHandlerRegisterOrRejectCall(?TEST_LIMITER, {register, sliding}, IP, 500),\n    Caller2 ! 
done,\n\n    timer:sleep(100),\n    Info2 = ?M:info(?TEST_LIMITER),\n    ?assertMatch(#{sliding_timestamps := #{IP := [1,500]}}, Info2),\n    #{concurrent_requests := ConcurrentReqs2} = Info2,\n    ?assertEqual(0, maps:size(ConcurrentReqs2)),\n\n    Caller3 = ?assertHandlerRegisterOrRejectCall(?TEST_LIMITER, {register, sliding}, IP, 2000),\n    Caller3 ! done,\n    timer:sleep(100),\n    %% 2 previous ts expired due to the time elapsed.\n    ?assertMatch(#{sliding_timestamps := #{IP := [2000]}}, ?M:info(?TEST_LIMITER)),\n\n    Caller4 = ?assertHandlerRegisterOrRejectCall(?TEST_LIMITER, {register, sliding}, IP, 2001),\n    Caller4 ! done,\n    timer:sleep(100),\n    ?assertMatch(#{sliding_timestamps := #{IP := [2000, 2001]}}, ?M:info(?TEST_LIMITER)),\n\n    Caller5 = ?assertHandlerRegisterOrRejectCall(?TEST_LIMITER, {reject, rate_limit, _} , IP, 2002),\n    Caller5 ! done,\n    timer:sleep(100),\n    %% Wait a bit for surely have request processed, and observe, no new timestamp\n    ?assertMatch(#{sliding_timestamps := #{IP := [2000, 2001]}}, ?M:info(?TEST_LIMITER)),\n    ok.\n\nsimple_leaky_happy_path(_Config, LimiterPid) ->\n    {\"Leaky happy path\",\n     fun() ->\n             ?assertMatch(#{is_manual_reduction_disabled := false}, ?M:config(?TEST_LIMITER)),\n             IP = {1,2,3,4},\n             %% init state, the ip is not blocked\n             Caller1 = ?assertHandlerRegisterOrRejectCall(?TEST_LIMITER, {register, leaky}, IP, 0),\n             timer:sleep(20),\n             Caller2 = ?assertHandlerRegisterOrRejectCall(?TEST_LIMITER, {register, leaky}, IP, 2),\n\n             %% wait a bit so they are surely started.\n             timer:sleep(100),\n             ?assertMatch(#{concurrent_requests := #{IP := [_,_]},\n                            leaky_tokens := #{IP := 2}}, ?M:info(?TEST_LIMITER)),\n\n             Caller1 ! done,\n             %% wait a tiny bit so the logic surely runs.\n             timer:sleep(100),\n             ?assertMatch(#{concurrent_requests := #{IP := [_]},\n                            leaky_tokens := #{IP := 2}}, ?M:info(?TEST_LIMITER)),\n\n             Caller2 ! done,\n             %% wait a tiny bit so the logic surely runs.\n             timer:sleep(100),\n             %% Keys deleted\n             ?assertMatch(#{concurrent_requests := #{},\n                            leaky_tokens := #{IP := 2}}, ?M:info(?TEST_LIMITER)),\n\n             %% manually trigger a tick.\n             LimiterPid ! {tick, leaky_bucket_reduction},\n\n             %% wait a tiny bit so the tick logic surely runs.\n             timer:sleep(100),\n             ?assertMatch(#{concurrent_requests := #{},\n                            leaky_tokens := #{IP := 1}}, ?M:info(?TEST_LIMITER)),\n\n             %% manually trigger a tick.\n             LimiterPid ! {tick, leaky_bucket_reduction},\n             %% wait a tiny bit so the tick logic surely runs.\n             timer:sleep(100),\n             ?assertMatch(#{concurrent_requests := #{},\n                            leaky_tokens := #{IP := 0}}, ?M:info(?TEST_LIMITER)),\n\n             %% manually trigger a tick.\n             LimiterPid ! 
{tick, leaky_bucket_reduction},\n             %% wait a tiny bit so the tick logic surely runs.\n             timer:sleep(100),\n             %% Key only deleted from leaky_tokens map, when it reached 0 in the previous tick\n             #{concurrent_requests := ConcurrentReqs,\n               leaky_tokens := LeakyTokens} = ?M:info(?TEST_LIMITER),\n             ?assertEqual(0, maps:size(ConcurrentReqs)),\n             ?assertMatch(0, maps:size(LeakyTokens)),\n             ok\n     end}.\n\nrate_limiter_rejected_due_concurrency(_Config, LimiterPid) ->\n    {\"rejected due concurrency\",\n     fun() ->\n             ?assertMatch(#{is_manual_reduction_disabled := false}, ?M:config(?TEST_LIMITER)),\n             %% init state, the ip is not blocked\n             IP = {1,2,3,4},\n\n             Caller1 = ?assertHandlerRegisterOrRejectCall(?TEST_LIMITER, {register, leaky}, IP, -1),\n             timer:sleep(120),\n             Caller2 = ?assertHandlerRegisterOrRejectCall(?TEST_LIMITER, {register, leaky}, IP, 10),\n             timer:sleep(120),\n             Caller3 = ?assertHandlerRegisterOrRejectCall(?TEST_LIMITER, {reject, concurrency, _Data}, IP, 10),\n\n             %% wait a bit so they are surely started.\n             timer:sleep(100),\n\n             ?assertMatch(#{concurrent_requests := #{IP := [_,_]},\n                            leaky_tokens := #{IP := 2}}, ?M:info(?TEST_LIMITER)),\n\n\n             Caller1 ! done,\n             Caller2 ! done,\n             Caller3 ! done,\n             %% wait a tiny bit so the logic surely runs.\n             timer:sleep(100),\n             %% Keys deleted\n             %% NOTE: concurrent_requests := #{} matches to any map, so we don't what's in there.\n             ?assertMatch(#{concurrent_requests := #{},\n                            leaky_tokens := #{IP := 2}}, ?M:info(?TEST_LIMITER)),\n\n             %% manually trigger a tick.\n             LimiterPid ! {tick, leaky_bucket_reduction},\n             %% wait a tiny bit so the tick logic surely runs.\n             timer:sleep(100),\n             ?assertMatch(#{concurrent_requests := #{},\n                            leaky_tokens := #{IP := 1}}, ?M:info(?TEST_LIMITER)),\n\n             %% Concurrency reduced, one handler terminated, will register again\n             Caller4 = ?assertHandlerRegisterOrRejectCall(?TEST_LIMITER, {register, leaky}, IP, 0),\n             %% wait a tiny bit so the logic surely runs.\n             timer:sleep(100),\n             Caller4 ! done,\n             %% Keys deleted\n             ?assertMatch(#{concurrent_requests := #{},\n                            leaky_tokens := #{IP := 2}}, ?M:info(?TEST_LIMITER)),\n\n\n             %% manually trigger two ticks.\n             LimiterPid ! {tick, leaky_bucket_reduction},\n             LimiterPid ! {tick, leaky_bucket_reduction},\n             %% wait a tiny bit so the tick logic surely runs.\n             timer:sleep(100),\n             ?assertMatch(#{concurrent_requests := #{},\n                            leaky_tokens := #{IP := 0}}, ?M:info(?TEST_LIMITER)),\n\n             %% manually trigger a tick.\n             LimiterPid ! 
{tick, leaky_bucket_reduction},\n             %% wait a tiny bit so the tick logic surely runs.\n             timer:sleep(100),\n             %% Key only deleted from leaky_tokens map, when it reached 0 in the previous tick\n             #{concurrent_requests := ConcurrentReqs,\n               leaky_tokens := LeakyTokens} = ?M:info(?TEST_LIMITER),\n             ?assertEqual(0, maps:size(ConcurrentReqs)),\n             ?assertEqual(0, maps:size(LeakyTokens)),\n             ok\n     end}.\n\nrejected_due_leaky_rate(_Config, LimiterPid) ->\n    {\"rejected due leaky rate\",\n     fun() ->\n             ?assertMatch(#{is_manual_reduction_disabled := false}, ?M:config(?TEST_LIMITER)),\n             %% init state, the ip is not blocked\n             IP = {1,2,3,4},\n\n             Caller1 = ?assertHandlerRegisterOrRejectCall(?TEST_LIMITER, {register, leaky}, IP, 1),\n             timer:sleep(20),\n             Caller2 = ?assertHandlerRegisterOrRejectCall(?TEST_LIMITER, {register, leaky}, IP, 2),\n             timer:sleep(20),\n             Caller3 = ?assertHandlerRegisterOrRejectCall(?TEST_LIMITER, {reject, rate_limit, _Data}, IP, 3),\n\n             %% wait a bit so they are surely started.\n             timer:sleep(100),\n             %% 2 concurrent, 2 token\n             ?assertMatch(#{concurrent_requests := #{IP := [_,_]},\n                            leaky_tokens := #{IP := 2}}, ?M:info(?TEST_LIMITER)),\n\n             %% Simulate a tick\n             LimiterPid ! {tick, leaky_bucket_reduction},\n             %% wait a tiny bit so the logic surely runs.\n             timer:sleep(100),\n             %% 2 concurrent, but tokens reduced.\n             ?assertMatch(#{concurrent_requests := #{IP := [_,_]},\n                            leaky_tokens := #{IP := 1}}, ?M:info(?TEST_LIMITER)),\n\n             %% Tokens reduced, will register again\n             Caller4 = ?assertHandlerRegisterOrRejectCall(?TEST_LIMITER, {register, leaky}, IP, 10),\n             %% wait a tiny bit so the logic surely runs.\n             timer:sleep(100),\n             %% 3 concurrent, 2 tokens\n             ?assertMatch(#{concurrent_requests := #{IP := [_,_,_]},\n                            leaky_tokens := #{IP := 2}}, ?M:info(?TEST_LIMITER)),\n\n             %% manually trigger two ticks.\n             LimiterPid ! {tick, leaky_bucket_reduction},\n             LimiterPid ! {tick, leaky_bucket_reduction},\n\n             %% wait a tiny bit so the tick logic surely runs.\n             timer:sleep(100),\n             ?assertMatch(#{concurrent_requests := #{IP := [_,_,_]},\n                            leaky_tokens := #{IP := 0}}, ?M:info(?TEST_LIMITER)),\n\n             %% Clean up\n             Caller1 ! done,\n             Caller2 ! done,\n             Caller3 ! done,\n             Caller4 ! done,\n\n             LimiterPid ! {tick, leaky_bucket_reduction},\n             %% Key only deleted from leaky_tokens map, when it reached 0 in the previous tick\n             LimiterPid ! 
{tick, leaky_bucket_reduction},\n\n             %% wait a tiny bit so the tick logic surely runs.\n             timer:sleep(100),\n             #{concurrent_requests := ConcurrentReqs,\n               leaky_tokens := LeakyTokens} = ?M:info(?TEST_LIMITER),\n             ?assertEqual(0, maps:size(ConcurrentReqs)),\n             ?assertEqual(0, maps:size(LeakyTokens)),\n             ok\n     end}.\n\nboth_exhausted(_Config, LimiterPid) ->\n    {\"Both exhausted\",\n     fun() ->\n             ?assertMatch(#{is_manual_reduction_disabled := false}, ?M:config(?TEST_LIMITER)),\n             IP = {1,2,3,4},\n\n             Caller1 = ?assertHandlerRegisterOrRejectCall(?TEST_LIMITER, {register, sliding}, IP, -1),\n\n             %% wait a bit so they are surely started.\n             timer:sleep(100),\n             %% 1 concurrent, 0 token\n             ?assertMatch(#{concurrent_requests := #{IP := [_]},\n                            sliding_timestamps := #{IP := [_]},\n                            leaky_tokens := #{}}, ?M:info(?TEST_LIMITER)),\n\n             Caller2 = ?assertHandlerRegisterOrRejectCall(?TEST_LIMITER, {register, leaky}, IP, 20),\n\n             %% wait a tiny bit so the logic surely runs.\n             timer:sleep(100),\n             %% 2 concurrent, but tokens reduced.\n             Info = ?M:info(?TEST_LIMITER),\n             ?assertMatch(#{concurrent_requests := #{IP := [_,_]},\n                            sliding_timestamps := #{IP := [_]},\n                            leaky_tokens := #{IP := 1}}, Info),\n\n             Caller3 = ?assertHandlerRegisterOrRejectCall(\n                          ?TEST_LIMITER, {reject, rate_limit, _Data}, IP, 130),\n\n             %% Tokens reduced, will register again\n             %% wait a tiny bit so the logic surely runs.\n             timer:sleep(100),\n             %% 2 concurrent, 1 token\n             ?assertMatch(#{concurrent_requests := #{IP := [_,_]},\n                            sliding_timestamps := #{IP := [_]},\n                            leaky_tokens := #{IP := 1}}, ?M:info(?TEST_LIMITER)),\n\n             %% Clean up\n             Caller1 ! done,\n             Caller2 ! done,\n             Caller3 ! done,\n\n             LimiterPid ! {tick, leaky_bucket_reduction},\n             %% Key only deleted from leaky_tokens map, when it reached 0 in the previous tick\n             LimiterPid ! 
{tick, leaky_bucket_reduction},\n\n             %% wait a tiny bit so the tick logic surely runs.\n             timer:sleep(100),\n             #{concurrent_requests := ConcurrentReqs,\n               leaky_tokens := LeakyTokens} = ?M:info(?TEST_LIMITER),\n             ?assertEqual(0, maps:size(ConcurrentReqs)),\n             ?assertEqual(0, maps:size(LeakyTokens)),\n\n             ok\n     end}.\n\npeer_cleanup(_Config, LimiterPid) ->\n    {\"Peer cleanup\",\n     fun() ->\n             ?assertMatch(#{is_manual_reduction_disabled := false}, ?M:config(?TEST_LIMITER)),\n             %% init state, the ip is not blocked\n             IP = {1,2,3,4},\n\n             Caller1 = ?assertHandlerRegisterOrRejectCall(?TEST_LIMITER, {register, sliding}, IP, 1),\n\n             %% wait a bit so they are surely started.\n             timer:sleep(100),\n             %% 2 concurrent, 2 token\n             ?assertMatch(#{concurrent_requests := #{IP := [_]},\n                            sliding_timestamps := #{IP := [_]},\n                            leaky_tokens := #{}}, ?M:info(?TEST_LIMITER)),\n\n             Caller2 = ?assertHandlerRegisterOrRejectCall(?TEST_LIMITER, {register, leaky}, IP, 20),\n\n             %% wait a tiny bit so the logic surely runs.\n             timer:sleep(100),\n             %% 2 concurrent, but tokens reduced.\n             ?assertMatch(#{concurrent_requests := #{IP := [_,_]},\n                            sliding_timestamps := #{IP := [_]},\n                            leaky_tokens := #{IP := 1}}, ?M:info(?TEST_LIMITER)),\n\n             %% further requests are rejected\n             Caller3 = ?assertHandlerRegisterOrRejectCall(\n                          ?TEST_LIMITER, {reject, concurrency, _Data}, IP, 300),\n\n             %% Tokens reduced, will register again\n             %% wait a tiny bit so the logic surely runs.\n             timer:sleep(100),\n             %% 2 concurrent, 1 token\n             ?assertMatch(#{concurrent_requests := #{IP := [_,_]},\n                            sliding_timestamps := #{IP := [_]},\n                            leaky_tokens := #{IP := 1}}, ?M:info(?TEST_LIMITER)),\n\n             %% Clean up\n             Caller1 ! done,\n             Caller2 ! done,\n             Caller3 ! done,\n             LimiterPid ! {tick, leaky_bucket_reduction},\n             %% Key only deleted from leaky_tokens map, when it reached 0 in the previous tick\n             LimiterPid ! {tick, leaky_bucket_reduction},\n\n             %% wait a tiny bit so the tick logic surely runs.\n             %% Now we still have timestamps for IP1 in the state.\n             timer:sleep(100),\n             #{concurrent_requests := ConcurrentReqs,\n               sliding_timestamps := SlidingTimestamps,\n               leaky_tokens := LeakyTokens} = ?M:info(?TEST_LIMITER),\n             ?assertEqual(0, maps:size(ConcurrentReqs)),\n             ?assertEqual(1, maps:size(SlidingTimestamps)),\n             ?assertEqual(0, maps:size(LeakyTokens)),\n\n             ?setTsMock(20000),\n\n             timer:sleep(500),\n             %% Trigger timestamp cleanup.\n             LimiterPid ! 
{tick, sliding_window_timestamp_cleanup},\n\n             %% wait a tiny bit so the tick logic surely runs.\n             %% Now we should have all cleaned up.\n             timer:sleep(100),\n             #{concurrent_requests := ConcurrentReqs,\n               sliding_timestamps := SlidingTimestamps2,\n               leaky_tokens := LeakyTokens} = ?M:info(?TEST_LIMITER),\n             ?assertEqual(0, maps:size(ConcurrentReqs)),\n             ?assertEqual(0, maps:size(SlidingTimestamps2)),\n             ?assertEqual(0, maps:size(LeakyTokens)),\n\n             ok\n     end}.\n\nleaky_manual_reduction(_Config, _LimiterPid) ->\n    {\"Leaky tokens manual peer reduction\",\n     fun() ->\n             ?assertMatch(#{is_manual_reduction_disabled := false}, ?M:config(?TEST_LIMITER)),\n             %% init state, the ip is not blocked\n             IP = {1,2,3,4},\n             NonRecordedIP = {2,3,4,5,1984},\n\n             Caller1 = ?assertHandlerRegisterOrRejectCall(?TEST_LIMITER, {register, leaky}, IP, 1),\n             Caller2 = ?assertHandlerRegisterOrRejectCall(?TEST_LIMITER, {register, leaky}, IP, 20),\n             Caller3 = ?assertHandlerRegisterOrRejectCall(?TEST_LIMITER, {register, leaky}, IP, 40),\n             Caller4 = ?assertHandlerRegisterOrRejectCall(?TEST_LIMITER, {register, leaky}, IP, 60),\n             %% wait a bit so they are surely started.\n             timer:sleep(100),\n             %% 2 concurrent, 2 token\n             ?assertMatch(#{concurrent_requests := #{IP := [_, _, _, _]},\n                            leaky_tokens := #{IP := 4}}, ?M:info(?TEST_LIMITER)),\n\n             ?assertEqual(ok, ?M:reduce_for_peer(?TEST_LIMITER, IP)),\n             ?assertEqual(ok, ?M:reduce_for_peer(?TEST_LIMITER, IP)),\n\n             %% call for one that's surely not in the state\n             ?assertEqual(ok, ?M:reduce_for_peer(?TEST_LIMITER, NonRecordedIP)),\n\n             %% 2 concurrent, but tokens reduced.\n             ?assertMatch(#{concurrent_requests := #{IP := [_, _, _, _]},\n                            leaky_tokens := #{IP := 2}}, ?M:info(?TEST_LIMITER)),\n\n             ?assertEqual(ok, ?M:reduce_for_peer(?TEST_LIMITER, IP)),\n             ?assertEqual(ok, ?M:reduce_for_peer(?TEST_LIMITER, IP)),\n\n             %% 4 concurrent, but tokens reduced.\n             ?assertMatch(#{concurrent_requests := #{IP := [_, _, _, _]},\n                            leaky_tokens := #{IP := 0}}, ?M:info(?TEST_LIMITER)),\n\n             ?assertEqual(ok, ?M:reduce_for_peer(?TEST_LIMITER, IP)),\n\n             %% 4 concurrent, no change, there is nothing to reduce beyond 0\n             ?assertMatch(#{concurrent_requests := #{IP := [_, _, _, _]},\n                            leaky_tokens := #{IP := 0}}, ?M:info(?TEST_LIMITER)),\n\n             %% Clean up\n             Caller1 ! done,\n             Caller2 ! done,\n             Caller3 ! done,\n             Caller4 ! 
done,\n\n             ok\n     end}.\n\nleaky_manual_reduction_disabled(_Config, _LimiterPid) ->\n    {\"Leaky tokens manual peer reduction\",\n     fun() ->\n             ?assertMatch(#{is_manual_reduction_disabled := true}, ?M:config(?TEST_LIMITER)),\n             %% init state, the ip is not blocked\n             IP = {1,2,3,4},\n\n             Caller1 = ?assertHandlerRegisterOrRejectCall(?TEST_LIMITER, {register, leaky}, IP, 1),\n             Caller2 = ?assertHandlerRegisterOrRejectCall(?TEST_LIMITER, {register, leaky}, IP, 20),\n             Caller3 = ?assertHandlerRegisterOrRejectCall(?TEST_LIMITER, {register, leaky}, IP, 40),\n             Caller4 = ?assertHandlerRegisterOrRejectCall(?TEST_LIMITER, {register, leaky}, IP, 60),\n             %% wait a bit so they are surely started.\n             timer:sleep(100),\n             ?assertMatch(#{concurrent_requests := #{IP := [_, _, _, _]},\n                            leaky_tokens := #{IP := 4}}, ?M:info(?TEST_LIMITER)),\n\n             ?assertEqual(disabled, ?M:reduce_for_peer(?TEST_LIMITER, IP)),\n\n             %% Didn't reduce anything\n             ?assertMatch(#{concurrent_requests := #{IP := [_, _, _, _]},\n                            leaky_tokens := #{IP := 4}}, ?M:info(?TEST_LIMITER)),\n\n             %% We can repeat this, but still disabled\n             ?assertEqual(disabled, ?M:reduce_for_peer(?TEST_LIMITER, IP)),\n\n             %% Clean up\n             Caller1 ! done,\n             Caller2 ! done,\n             Caller3 ! done,\n             Caller4 ! done,\n\n             ok\n     end}.\n"
  },
  {
    "path": "apps/arweave_limiter/test/arweave_limiter_metrics_collector_tests.erl",
    "content": "-module(arweave_limiter_metrics_collector_tests).\n\n-include_lib(\"eunit/include/eunit.hrl\").\n-include_lib(\"arweave/include/ar.hrl\").\n\n-define(M, arweave_limiter_metrics_collector).\n-define(S, arweave_limiter_sup).\n-define(L, arweave_limiter).\n-define(ME, arweave_limiter_metrics).\n\n-define(GENERAL, general_test).\n-define(METRICS, metrics_test).\n\n%% Very similar but not identical to ar_limiter_tests macro\n-define(assertHandlerRegisterOrRejectCall(LimiterRef, Pattern, Peer),\n \t((fun () ->\n                  spawn_link(fun() ->\n                                     ?assertMatch(\n                                        Pattern,\n                                        ?L:register_or_reject_call(LimiterRef, Peer)),\n                                     receive\n                                         done -> ok\n                                     end\n                             end)\n\t  end)())).\n\ndo_setup() ->\n    %% It would be tempting to just use what the node has started already,\n    %% but we need to start new limiters to control the config, and make\n    %% sure these tests don't break with only config change.\n    %% It is especially important to increase the interval for the tests.\n    Configs = [#{id => ?GENERAL,\n                 leaky_rate_limit => 50,\n                 concurrency_limit => 150,\n                 sliding_window_limit => 100,\n                 leaky_tick_interval_ms => 1000000},\n               #{id => ?METRICS,\n                 leaky_rate_limit => 50,\n                 concurrency_limit => 150,\n                 sliding_window_limit => 100,\n                 leaky_tick_interval_ms => 1000000}\n               ],\n    LimiterIds =\n        lists:map(fun(Config) ->\n                          {ok, _LimPid} = supervisor:start_child(?S, ?S:child_spec(Config)),\n                          maps:get(id, Config)\n                  end, Configs),\n    {LimiterIds, []}.\n\ndo_setup_with_data() ->\n    {LimiterIds, _Callers} = do_setup(),\n    %% Generate IP tuples (up to like 16k peers), but any term can be a peer ID.\n    Port = 1984,\n    IPs = [{1,2,X div 128, X rem 128, Port} || X <- lists:seq(1, 1000)],\n\n    Callers = lists:foldl(fun(IP, Acc) ->\n                                  Acc ++ [?assertHandlerRegisterOrRejectCall(?GENERAL, {register, _}, IP) ||\n                                             _ <- lists:seq(1,150)]\n                          end, [], IPs),\n    timer:sleep(500),\n\n    {LimiterIds, Callers}.\n\ncleanup({LimiterIds, Callers}) ->\n    [Caller ! 
done || Caller <- Callers],\n    timer:sleep(150),\n    ok = lists:foreach(fun(Id) ->\n                               supervisor:terminate_child(?S, Id),\n                               supervisor:delete_child(?S, Id),\n                               ?debugFmt(\">>> Terminated and deleted limiter: ~p ~n\", [Id])\n                       end, LimiterIds),\n    ok.\n\nempty_limiters_sanity_check_test_() ->\n    {\n     setup,\n     fun do_setup/0,\n     fun cleanup/1,\n     fun({_Sup, _Callers}) ->\n             [fun() ->\n                      ?assertMatch(\n                         [{ar_limiter_tracked_items_total,gauge,\n                           \"tracked requests, timestamps, leaky tokens\",\n                           _},\n                          {ar_limiter_peers,gauge,\n                           \"The number of peers the limiter is monitoring currently\", _}], ?M:metrics())\n              end]\n     end\n    }.\n\n\nrate_limiter_happy_path_sanity_check_test_() ->\n    {\n     setup,\n     fun do_setup_with_data/0,\n     fun cleanup/1,\n     fun({_Sup, _Callers}) ->\n             [fun() ->\n                      ?assertMatch(\n                         [{ar_limiter_tracked_items_total,gauge,\n                           \"tracked requests, timestamps, leaky tokens\",\n                           _},\n                          {ar_limiter_peers,gauge,\n                           \"The number of peers the limiter is monitoring currently\", _}], ?M:metrics()),\n\n                      Info = arweave_limiter_group:info(?GENERAL),\n                      ?assertMatch(\n                         [\n                          {[{limiter_id, ?GENERAL}, {limiting_type, concurrency}], 150*1000},\n                          {[{limiter_id, ?GENERAL}, {limiting_type, leaky_bucket_tokens}], 1000},\n                          {[{limiter_id, ?GENERAL}, {limiting_type, sliding_window_timestamps}], 100*1000}\n                         ], ?M:tracked_items([{?GENERAL, Info}])),\n                      ?assertMatch(\n                         [\n                          {[{limiter_id, ?GENERAL}, {limiting_type, concurrency}], 1000},\n                          {[{limiter_id, ?GENERAL}, {limiting_type, leaky_bucket_tokens}], 1000},\n                          {[{limiter_id, ?GENERAL}, {limiting_type, sliding_window_timestamps}], 1000}\n                         ], ?M:peers([{?GENERAL, Info}]))\n              end]\n     end}.\n"
  },
  {
    "path": "apps/randomx_square_latency_tester/.gitignore",
    "content": "*.o\nmain\n"
  },
  {
    "path": "apps/randomx_square_latency_tester/Makefile",
    "content": "# Compiler\nCXX = g++\n\n# Include directories\nINCLUDES = -I../arweave/c_src/randomx -I ../arweave/lib/RandomX/src\n\n# Compiler Flags\nCXXFLAGS = -msse4.2 -mavx2 -Wall -O2 $(INCLUDES)\n\n# Linker Flags\nLDFLAGS = -L/usr/local/lib\nLDLIBS = -lssl -lcrypto\n\n# Source files in ../arweave/c_src/randomx\nDEPS_SOURCES := $(wildcard ../arweave/c_src/randomx/*.cpp)\nDEPS_OBJECTS := $(patsubst ../arweave/c_src/randomx/%.cpp,%.o,$(DEPS_SOURCES))\n\n# Local main.cpp\nMAIN_SOURCES := main.cpp\nMAIN_OBJECTS := $(patsubst %.cpp,%.o,$(MAIN_SOURCES))\n\n# All object files\nOBJECTS := $(DEPS_OBJECTS) $(MAIN_OBJECTS)\n\n# Path to RandomX library\nRANDOMX_LIB = ../arweave/lib/RandomX/build4096/librandomx4096.a\n\n# Target executable\nTARGET = main\n\n# Default target\nall: $(TARGET)\n\n# Link object files to create the executable\n$(TARGET): $(OBJECTS)\n\t$(CXX) $(CXXFLAGS) -o $@ $^ $(RANDOMX_LIB) $(LDFLAGS) $(LDLIBS)\n\n# Compile source files from ../arweave/c_src/randomx\n%.o: ../arweave/c_src/randomx/%.cpp\n\t$(CXX) $(CXXFLAGS) -c $< -o $@\n\n# Compile local main.cpp\n%.o: %.cpp\n\t$(CXX) $(CXXFLAGS) -c $< -o $@\n\n# Clean up build files\nclean:\n\trm -f $(OBJECTS) $(TARGET)\n\n.PHONY: all clean\n"
  },
  {
    "path": "apps/randomx_square_latency_tester/main.cpp",
    "content": "#include <iostream>\n#include <chrono>\n#include <cstdlib>\n#include <cstring>\n#include \"randomx_squared.h\"\n\nint main() {\n    return 0;\n\n    // // Constants\n    // const size_t entropySize = 8 * 1024 * 1024; // 8 MB\n    // const int iterations = 100;\n\n    // // Allocate memory for entropies\n    // unsigned char* inEntropy = new unsigned char[entropySize];\n    // unsigned char* keyEntropy = new unsigned char[entropySize];\n    // unsigned char* outEntropy = new unsigned char[entropySize];\n\n    // // Seed the random number generator\n    // std::srand(static_cast<unsigned>(std::time(nullptr)));\n\n    // // Fill entropies with random data\n    // for (size_t i = 0; i < entropySize; ++i) {\n    //     inEntropy[i] = std::rand() % 256;\n    //     keyEntropy[i] = std::rand() % 256;\n    // }\n\n    // // Variables to store elapsed time\n    // std::chrono::duration<double> elapsedFeistelShaFull(0);\n    // std::chrono::duration<double> elapsedFeistelAesFull(0);\n    // std::chrono::duration<double> elapsedFeistelCrc32(0);\n    // std::chrono::duration<double> elapsedCrc32(0);\n    // std::chrono::duration<double> elapsedFcrc32w(0);\n    // std::chrono::duration<double> elapsedLcgMmix(0);\n    // std::chrono::duration<double> elapsedSimdLcg(0);\n\n    // // Benchmark packing_mix_entropy_feistel_sha_full\n    // {\n    //     auto startTime = std::chrono::high_resolution_clock::now();\n\n    //     for (int iter = 0; iter < iterations; ++iter) {\n    //         // Modify the first byte of inEntropy\n    //         inEntropy[0] = static_cast<unsigned char>((inEntropy[0] + 1) % 256);\n\n    //         // Call the function\n    //         packing_mix_entropy_feistel_sha_full(inEntropy, keyEntropy, outEntropy, entropySize);\n    //     }\n\n    //     auto endTime = std::chrono::high_resolution_clock::now();\n    //     elapsedFeistelShaFull = endTime - startTime;\n    // }\n    // // Benchmark packing_mix_entropy_feistel_aes_full\n    // {\n    //     auto startTime = std::chrono::high_resolution_clock::now();\n\n    //     for (int iter = 0; iter < iterations; ++iter) {\n    //         // Modify the first byte of inEntropy\n    //         inEntropy[0] = static_cast<unsigned char>((inEntropy[0] + 1) % 256);\n\n    //         // Call the function\n    //         packing_mix_entropy_feistel_aes_full(inEntropy, keyEntropy, outEntropy, entropySize);\n    //     }\n\n    //     auto endTime = std::chrono::high_resolution_clock::now();\n    //     elapsedFeistelAesFull = endTime - startTime;\n    // }\n\n    // // Benchmark packing_mix_entropy_feistel_crc32\n    // {\n    //     auto startTime = std::chrono::high_resolution_clock::now();\n\n    //     for (int iter = 0; iter < iterations; ++iter) {\n    //         // Modify the first byte of inEntropy\n    //         inEntropy[0] = static_cast<unsigned char>((inEntropy[0] + 1) % 256);\n\n    //         // Call the function\n    //         packing_mix_entropy_feistel_crc32(inEntropy, keyEntropy, outEntropy, entropySize);\n    //     }\n\n    //     auto endTime = std::chrono::high_resolution_clock::now();\n    //     elapsedFeistelCrc32 = endTime - startTime;\n    // }\n\n    // // Benchmark packing_mix_entropy_fcrc32w\n    // {\n    //     auto startTime = std::chrono::high_resolution_clock::now();\n\n    //     for (int iter = 0; iter < iterations; ++iter) {\n    //         // Modify the first byte of inEntropy\n    //         inEntropy[0] = static_cast<unsigned char>((inEntropy[0] + 1) % 256);\n\n    //         // Call 
the function\n    //         packing_mix_entropy_fcrc32w(inEntropy, outEntropy, entropySize);\n    //     }\n\n    //     auto endTime = std::chrono::high_resolution_clock::now();\n    //     elapsedFcrc32w = endTime - startTime;\n    // }\n\n    // // Benchmark packing_mix_entropy_crc32\n    // {\n    //     auto startTime = std::chrono::high_resolution_clock::now();\n\n    //     for (int iter = 0; iter < iterations; ++iter) {\n    //         // Modify the first byte of inEntropy\n    //         inEntropy[0] = static_cast<unsigned char>((inEntropy[0] + 1) % 256);\n\n    //         // Call the function\n    //         packing_mix_entropy_crc32(inEntropy, outEntropy, entropySize);\n    //     }\n\n    //     auto endTime = std::chrono::high_resolution_clock::now();\n    //     elapsedCrc32 = endTime - startTime;\n    // }\n\n    // // Benchmark packing_mix_entropy_lcg_mmix\n    // {\n    //     auto startTime = std::chrono::high_resolution_clock::now();\n\n    //     for (int iter = 0; iter < iterations; ++iter) {\n    //         // Modify the first byte of inEntropy\n    //         inEntropy[0] = static_cast<unsigned char>((inEntropy[0] + 1) % 256);\n\n    //         // Call the function\n    //         packing_mix_entropy_lcg_mmix(inEntropy, outEntropy, entropySize);\n    //     }\n\n    //     auto endTime = std::chrono::high_resolution_clock::now();\n    //     elapsedLcgMmix = endTime - startTime;\n    // }\n\n    // // Benchmark packing_mix_entropy_simd_lcg\n    // {\n    //     auto startTime = std::chrono::high_resolution_clock::now();\n\n    //     for (int iter = 0; iter < iterations; ++iter) {\n    //         // Modify the first byte of inEntropy\n    //         inEntropy[0] = static_cast<unsigned char>((inEntropy[0] + 1) % 256);\n\n    //         // Call the function\n    //         packing_mix_entropy_simd_lcg(inEntropy, outEntropy, entropySize);\n    //     }\n\n    //     auto endTime = std::chrono::high_resolution_clock::now();\n    //     elapsedSimdLcg = endTime - startTime;\n    // }\n\n    // // Output results\n    // std::cout << \"Benchmark results for \" << iterations << \" iterations on \"\n    //           << (entropySize / (1024 * 1024)) << \" MB of data:\\n\\n\";\n\n    // std::cout << \"1. packing_mix_entropy_feistel_sha_full:\\n\";\n    // std::cout << \"   Total time: \" << elapsedFeistelShaFull.count() << \" seconds.\\n\";\n    // std::cout << \"   Average time per iteration: \" << (elapsedFeistelShaFull.count() / iterations) << \" seconds.\\n\\n\";\n\n    // std::cout << \"2. packing_mix_entropy_feistel_aes_full:\\n\";\n    // std::cout << \"   Total time: \" << elapsedFeistelAesFull.count() << \" seconds.\\n\";\n    // std::cout << \"   Average time per iteration: \" << (elapsedFeistelAesFull.count() / iterations) << \" seconds.\\n\\n\";\n\n    // std::cout << \"3. packing_mix_entropy_feistel_crc32:\\n\";\n    // std::cout << \"   Total time: \" << elapsedFeistelCrc32.count() << \" seconds.\\n\";\n    // std::cout << \"   Average time per iteration: \" << (elapsedFeistelCrc32.count() / iterations) << \" seconds.\\n\\n\";\n\n    // std::cout << \"4. packing_mix_entropy_crc32:\\n\";\n    // std::cout << \"   Total time: \" << elapsedCrc32.count() << \" seconds.\\n\";\n    // std::cout << \"   Average time per iteration: \" << (elapsedCrc32.count() / iterations) << \" seconds.\\n\\n\";\n\n    // std::cout << \"5. 
packing_mix_entropy_fcrc32w:\\n\";\n    // std::cout << \"   Total time: \" << elapsedFcrc32w.count() << \" seconds.\\n\";\n    // std::cout << \"   Average time per iteration: \" << (elapsedFcrc32w.count() / iterations) << \" seconds.\\n\\n\";\n\n    // std::cout << \"6. packing_mix_entropy_lcg_mmix:\\n\";\n    // std::cout << \"   Total time: \" << elapsedLcgMmix.count() << \" seconds.\\n\";\n    // std::cout << \"   Average time per iteration: \" << (elapsedLcgMmix.count() / iterations) << \" seconds.\\n\\n\";\n\n    // std::cout << \"7. packing_mix_entropy_simd_lcg:\\n\";\n    // std::cout << \"   Total time: \" << elapsedSimdLcg.count() << \" seconds.\\n\";\n    // std::cout << \"   Average time per iteration: \" << (elapsedSimdLcg.count() / iterations) << \" seconds.\\n\\n\";\n\n    // // Clean up allocated memory\n    // delete[] inEntropy;\n    // delete[] keyEntropy;\n    // delete[] outEntropy;\n\n    // return 0;\n}\n"
  },
  {
    "path": "ar-rebar3",
    "content": "#!/bin/bash\nset -e\nset -x\n\nif [ $# -ne 2 ]\nthen\n\techo \"ar-rebar3 <profile> <command>\"\n\texit 1\nfi\n\nSYSTEM=$(uname -s)\nSCRIPT_DIR=\"$(cd \"$(dirname \"$0\")\" && pwd)\"\nPROFILE=$1\nCOMMAND=$2\nOVERLAY_TARGET=\"${SCRIPT_DIR}/_vars.config\"\n\n# helper function to create erlang like value from the shell\n# and stored them into a specific file.\ncreate_overlay_var() {\n\tlocal target=\"${OVERLAY_TARGET}\"\n\tlocal name=\"${1}\"\n\tlocal value=\"$(eval ${2} 2>/dev/null || echo undefined)\"\n\n\tif ! echo \"${name}\" | grep -E '^[a-z]+[0-9A-Za-z_]+$' >/dev/null\n\tthen\n\t\techo \"invalid variable ${name}\" 1>&2\n\t\treturn 1\n\tfi\n\n\tif ! echo \"${value}\" | grep -E '^[[:print:]]+$' >/dev/null\n\tthen\n\t\techo \"invalid value ${value}\" 1>&2\n\t\treturn 1\n\tfi\n\n\tif test -e \"${target}\"\n\tthen\n\t\tprintf '{%s, \"%s\"}.\\n' \"${name}\" \"${value}\" >> \"${target}\"\n\t\treturn 0\n\tfi\n\n\tprintf '{%s, \"%s\"}.\\n' \"${name}\" \"${value}\" > \"${target}\"\n\treturn 0\n}\n\n# create variables required for the overlay, it will contain\n# various information regarding the build and will be \n# hardcoded in the final release.\nrebar3_overlay_variables() {\n\techo \"Crafting overlay variables...\"\n\tif test \"${SYSTEM}\" = \"Linux\"\n\tthen\n\t\tcreate_overlay_var git_rev \"git rev-parse HEAD\"\n\t\tcreate_overlay_var datetime \"date -u '+%Y-%m-%dT%H:%M:%SZ'\"\n\t\tcreate_overlay_var cc_version \"cc --version | head -n1\"\n\t\tcreate_overlay_var gmake_version \"gmake --version | head -n1\"\n\t\tcreate_overlay_var cmake_version \"cmake --version | head -n1\"\n\telse\n\t\ttouch $OVERLAY_TARGET\n\tfi\n}\n\n# remove old artifacts that must be recreated everytime.\n# The lib and releases symlinks are removed here to prevent infinite loops\n# when VSCode extensions (like Erlang LS) traverse the project.\n# bin/arweave recreates them on-demand when running the application.\nrebar3_clean_artifacts() {\n\techo Removing build artifacts...\n\trm -vf \"_vars.config\"\n\trm -vf \"${SCRIPT_DIR}/lib\"\n\trm -vf \"${SCRIPT_DIR}/releases\"\n}\n\n# execute rebar3 using the profile and the command previously\n# configured\nrebar3_invocation() {\n\techo \"Executing rebar3 as ${PROFILE} ${COMMAND}\"\n\t${SCRIPT_DIR}/rebar3 as ${PROFILE} ${COMMAND}\n}\n\n# create artifacts required to run the code locally, only useful\n# in case of release.\nrebar3_create_artifacts() {\n\techo Copying and linking build artifacts\n}\n\n######################################################################\n# main script\n######################################################################\nrebar3_clean_artifacts\nrebar3_overlay_variables\nrebar3_invocation\n\nif [ \"${COMMAND}\" = \"release\" ]\nthen\n\tRELEASE_PATH=$(${SCRIPT_DIR}/rebar3 as ${ARWEAVE_BUILD_TARGET:-default} path --rel)\n\trebar3_create_artifacts\nfi\n"
  },
  {
    "path": "arweave-server",
    "content": "#!/usr/bin/env bash\nset -e\nSCRIPT_DIR=\"$(dirname \"$0\")\"\nexport ARWEAVE_DEV=1\n\"$SCRIPT_DIR/bin/start\" \"$@\"\n"
  },
  {
    "path": "arweave_styleguide.md",
    "content": "# Arweave code style\n\nThe main development language of the Arweave client is Erlang, and as the number of developers of the project continues to grow this style guide will act as a means of keeping the codebase clean and comprehensible.\n\n\n\n## Code comprehensibility\n\n\n\n### Module header comments\n\nEach module should have a simplistic comment at the top that encompasses and describes the set of functions that can be found within it.\n\nModule description comments should be prefixed with '%%%' .\n\n```erlang\n%% Example: head of ar_serialize.\n\n-module(ar_serialize).\n-export([full_block_to_json_struct/1, block_to_json_struct/1, ...]).\n-export([tx_to_json_struct/1, json_struct_to_tx/1]).\n-export([wallet_list_to_json_struct/1, block_index_to_json_struct/1, ...]).\n-export([jsonify/1, dejsonify/1]).\n-export([query_to_json_struct/1, json_struct_to_query/1]).\n-include(\"ar.hrl\").\n-include_lib(\"eunit/include/eunit.hrl\").\n\n%%% Module containing serialisation/deserialisation utility functions for use in the HTTP server.\n```\n\n\n\n### Function clause comments\n\n\nFunction clause comments should be placed above the header.\n\nEvery function should have a comment describing its purpose, unless the function signature explains it well enough.\n\nFunction comments should not include implementation details unless absolutely required, the code itself should be the main conveyor of the specific implementation.\n\nIt is more important to comment exported functions.\n\nA specification (`-spec`) may be used to document the function, as an alternative or an addition to the comment.\n\nFunction description comments should be prefixed with  '%% @doc'.\n\n```erlang\n%% Example\n\n%% @doc Takes a list containing tx records and returns the number of those 'not_found'.\ncount_unavailable_txs(TXList) ->\n    length([TX || TX <- TXList, TX == not_found]).\n```\n\n\n\n### Sparing use of comments inside function bodies\n\nComments should only need to be used inside functions if the code being described has high complexity and without description would take reasonable time to trace and understand.\n\nIf the written code does have high complexity consider if descriptive variable names, code abstraction and or basic refactoring could improve the comprehensibility before resorting to commenting the code block.\n\n```erlang\n%% Bad\nsign_verify_test(Keypair) ->\n\t% Deconstructs keypair into two separate terms, Pub and Priv.\n\t{Priv, Pub} = Keypair,\n\t% Generates an integer between 1 and 100 to sign and then verify.\n\tData = floor(rand:uniform() * 100),\n\t% Sign the data generated above.\n\tSignedData = sign(Priv, Data),\n\t% Verify the signed data.\n\tverify(Pub, SignedData).\n\n%% Good\nsign_verify_test({Priv, Pub}) ->\n\tData = floor(rand:uniform() * 100),\n\tSignedData = sign(Priv, Data),\n\tverify(Pub, SignedData).\n```\n\n\n\n### Use the minimal descriptive words for function names\n\nFunction names should be descriptive enough to explain the high-level purpose of the function whilst remaining short enough as to not hinder the code readability.\n\nThe name of the module can sometimes be used to help increase clarity without increasing the wordiness of the function name.\n\n```erlang\n%% Bad function names\nar_tx:generate_data_segment_for_signing(TX).\nar_util:pretty_print_internal_ip_representation(IPAddr).\nar_retarget:is_current_block_retarget_block(Block).\n\n%% Good function 
names\nar_tx:to_binary(TX).\nar_block:generate_block_from_shadow(BShadow).\nar_serialize:block_to_json_struct(Block).\n```\n\n\n\n### Tests should be defined at the tail of the module\n\nTests should be defined as the last thing present within a module and should be prefixed with the following comment.\n\n```erlang\n% Tests: {module name}\n```\n\n\n\n### Maximum of eighty characters per line\n\nA maximum of eighty characters should be present on any singular line.\n\nTo help enforce this styling consider using a ruler, most extensible editors will have this functionality by default or a simple plugin should be available to help.\n\n### Do not use if\n\nThe `true ->` subclause of the `if` clause is confusing because `true` suggests the `if` expression evaluates to `true`, while the clause is executed when the expression is false. Use `case` instead.\n\n### Try to avoid deeply nested code\n\nDeeply nested code should be avoided as it can mask a large set of alternative code paths and can become very difficult to debug.\n\nCode that uses case or receives structures should aim for a singular level of nesting and at most two levels of depth.\n\n```erlang\n%% Bad\ncontains_data_tx([]) -> false;\ncontains_data_tx(TXList) ->\n\t[TX|Rest] = TXList,\n    case is_record(TX, tx) of\n        true ->\n            case byte_size(TX#tx.data) > 0 of\n                true -> true;\n                false -> contains_data_tx(Rest)\n            end;\n        false -> error_not_tx.\n    end.\n\n%% Better\ncontains_data_tx([]) -> false;\ncontains_data_tx([TX|Rest]) when is_record(TX, tx) ->\n\tcase byte_size(TX#tx.data) > 0 of\n\t    true -> true;\n\t\tfalse -> contains_data_tx(Rest)\n\tend;\ncontains_data_tx(_) ->\n\terror_not_tx.\n```\n\n\n\n### Deconstruct arguments in the function header\n\nThe maximum number of variables should be deconstructed within the function clause header and not the clause body.\n\nThis makes the arguments to the function explicit and helps debugging as should the wrong form of data be provided no matching function clause will be found.\n\n```erlang\n%% Bad\nserver(State, Keypair) ->\n    Keypair = {Priv, Pub},\n\tState#state { peers = Peers, heard = HeardMsg, ignored = IgnoredMsg },\n\t...\n\n%% Good\nserver(State#state {\n    peers = Peers,\n    heard = HeardMsg,\n    ignored = IgnoredMsg\n}, {Priv, Pub}) ->\n\t...\n```\n\n\n\n### Atoms should be lowercase and separated by underscores\n\nFor easy recognisability the Arweave codebase uses descriptive lowercase atoms where multiple words are separated by the underscore character.\n\n```erlang\n%% Bad atoms\n'iAtom'\n'block not found'\n\n%% Good atoms\nunavailable\nblock_not_found\n```\n\n\n\n### Record definitions should include descriptions of fields\n\nWhen a new record is defined the information regarding the purpose of each field should be included in a comment inline with the field it pertains to. 
These comments should be prefixed with a singular '%' character and aim to be as concise as possible.\n\n```erlang\n-record(tx, {\n\tid = <<>>, \t\t\t% TX UID (Hash of signature)\n\tlast_tx = <<>>, \t% Wallets last TX hash.\n\towner = <<>>, \t\t% Public key of transaction owner.\n\ttags = [], \t\t\t% Indexable TX category identifiers.\n\ttarget = <<>>, \t\t% Wallet address of target of the tx.\n\tquantity = 0, \t\t% Amount of Winston to send\n\tdata = <<>>, \t\t% Data body (if data transaction).\n\tsignature = <<>>, \t% Transaction signature.\n\treward = 0 \t\t\t% Transaction mining reward.\n}).\n```\n\n### Redundant or deprecated code should be removed\n\nShould existing code be made redundant with the implementation of new developments, this old code should be removed. It should not be left cluttering the code base as either code or comment.\n\nIf reference to these old implementation details is still required they will remain present in the project repositories version control.\n\n\n\n### Modules should export the minimal number of functions\n\nModules should export the minimum number of functions in which are externally required and these exports should be logically ordered.\n\nThis helps show the interface that the module exposes. Via looking at the exports other engineers should be able to identify which functions they need understand and are available for external use.\n\n```erlang\n%% Bad function exporting\n-compile(export_all).\n\n%% Good function exporting\n-export([sign/2, verify/3]).\n-export([to_address/1]).\n```\n\n\n\n### Variable names should be descriptive of the data they contain\n\nVariable names should be descriptive of the data in which they are representing.\n\nThis helps in understanding the purpose of a codeblock without the need for verbose comments detailing its purpose.\n\n```erlang\n%% Bad variables\nsign_data(X, Y) ->\n\t{A, B} = X,\n\tsign(A, Y).\n\n%% Good variables\nsign_data(Keypair, Data) ->\n    {Priv, Pub} = Keypair,\n    sign(Priv, Data).\n```\n\n\n\n### Use ar:console/1-2 to present information to the end user\n\nWriting to the Erlang console is done via the `ar:console/1-2` functions. This will also be written to the log file.\n\n```erlang\nar:console(\"Started mining on block height ~B\", [Height]),\n```\n\n```erlang\nar:console(\n\t[\n\t\tnode_joined_successfully,\n\t\t{height, NewB#block.height}\n\t]\n),\n```\n\n\n\n### Use `ar:info/1-2`, `ar:warn/1-2`, `ar:err/1-2` to generate log entries.\n\nAll three types of messages will be written to the one and only log file.\nNote! Errors (generated by `ar:err/1-2`) will also be displayed in the console.\n\n```erlang\nar:warn(\"Could not retrieve current block. Will retry in ~B seconds\", [?REJOIN_TIMEOUT]),\n```\n\n```erlang\nar:err(\n\t[\n\t\tnode_not_joining,\n\t\t{reason, cannot_get_full_block_from_peer},\n\t\t{received_instead, NewB}\n\t]\n),\n```\n\n\n\n### Don't log huge messages\nAvoid logging huge messages. Truncate arguments, e.g. 
with `~P` like this:\n```erlang\nar:warn(\"Invalid Block Hash List: ~P\", [BI, 100]),\n```\n\n\n\n### Tuple construction/deconstruction\n\nWhen constructing or deconstructing tuples ensure a space between each comma separated element.\n\n```erlang\n%% Bad\n{ one,two,three, A, B, C}\n\n%% Good\n{one, two, three, A, B, C}\n```\n\n\n\n### List deconstruction\n\nWhen deconstructing a list into head and tail ensure that a space is placed on either side of the '|' separation character.\n\n```erlang\n%% Bad\n[Head|Tail]\n\n%% Good\n[Head | Tail]\n```\n\n\n\n### Record construction/deconstruction\n\nWhen constructing or deconstructing a record ensure that a space is placed on either side of each field being handled.\n\n```erlang\n%% Bad\nState#state {first=\"hello\", second=\"world\"}\n\n%% Good\nState#state { first = \"hello\", second = \"world\" },\n```\n\n\n\n### Function arguments on new lines\n\nIf the arguments for a given function call exceed the previously stated line length limit (80 characters) or contain an inline function split the arguments each on to new lines.\n\n```erlang\n%% Bad\nexample() ->\n\tTotalTime = lists:foldl(fun(X, Acc) -> X + Acc end, 0, [12, 15, 8, 21, 35, 33, 14]),\n\t...\n\n%% Better\nexample() ->\n\tTotalTime = lists:foldl(\n\t\tfun(X, Acc) -> X + Acc end,\n\t\t0,\n\t\t[12, 15, 8, 21, 35, 33, 14]\n\t),\n\t...\n```\n\n## Error handling\n\nFunctions with side effects (in Erlang it boils down to IO) should return an `{ok, ...}` tuple upon successful execution, and `error_code` or `{error_code, ...}` otherwise.\n\nWhen invoking functions with side effects, failing fast by only pattern matching against `{ok, ...}` is encouraged.\n\nIn rare cases when even unexpected failures have to be processed, like in the HTTP event loop, `try/catch` may be used.\n\n\n## Put tests for the module X into X_tests.erl\n\nIt is usually very difficult to separate tests from the actual code in the search results unless tests reside in the dedicated files. For instance, using separate files for tests makes it easier to see in how many places a particular function is used.\n\n## Version control\n\nThe Arweave client codebase is hosted on Github, the below standards define the criteria for committed code. We aim to adhere to these standards as to make it as easy possible for new contributors to get involved.\n\n\n### All committed code must be commented\n\nAll committed code should be fully commented and should aim to fit the styling as detailed in this document. 
Committing uncommented code is unhelpful to all those maintaining or exploring the project.\n\n\n\n### Code pushed to master must work\n\nAll code committed to the master branch of the Arweave project should be fully functioning.\n\nThis is a **strict** requirement as this is the prime location of where end users will be obtaining the software to join and participate in the network.\n\n\n\n### Commits should aim to be as atomic as possible\n\nCode commits should aim to be a single logical change or addition to the codebase, though if not possible all logical alterations should be explained in the commit message, each separated by a comma.\n\n```\n- Added generic protocol implementation.\n- Removed ar_deprecated.\n- Added block shadows, refactored HTTP iface.\n```\n\n### Commit message syntax\n\nTo keep the repository clean a set structure for commit messages has been decided.\n\n- The first character should be capitalized.\n- The message should be succinct.\n- The message should be in the imperative mood.\n- Multiple actions should be comma separated.\n\n### Commit description\n\nIn addition to a message, a commit should have a description focusing on why the change was made rather than what was made.\n\n### Commit example\n\n```\nAdd arweave style guide\n\nInconsistent styling made it hard for us to view, comprehend, and edit the code so we had a discussion and agreed on the common style.\n```\n"
  },
  {
    "path": "bin/arweave",
    "content": "#!/usr/bin/env bash\n######################################################################\n# EXTRA_DIST_ARGS environment variable can be set to set extra VM\n# arguments.\n######################################################################\n\nset -e\n\n######################################################################\n# Switch to user or dev mode. bin/arweave should be the script used\n# only by users, and bin/arweave-dev should be the one used only\n# for developers.\n######################################################################\ncase ${0##*/}\nin\n  arweave-dev)\n    export ARWEAVE_DEV=1\n    ;;\nesac\n\n######################################################################\n# EPMD Configuration. force epmd to listen on loopback interface.\n######################################################################\nexport ERL_EPMD_ADDRESS=\"${ERL_EPMD_ADDRESS:=127.0.0.1,::1}\"\nexport ERL_EPMD_PORT=\"${ERL_EPMD_PORT:=4369}\"\n\n# http://erlang.org/doc/man/run_erl.html\n# If defined, disables input and output flow control for the pty\n# opend by run_erl. Useful if you want to remove any risk of accidentally\n# blocking the flow control by using Ctrl-S (instead of Ctrl-D to detach),\n# which can result in blocking of the entire Beam process, and in the case\n# of running heart as supervisor even the heart process becomes blocked\n# when writing log message to terminal, leaving the heart process unable\n# to do its work.\nRUN_ERL_DISABLE_FLOWCNTRL=${RUN_ERL_DISABLE_FLOWCNTRL:-true}\nexport RUN_ERL_DISABLE_FLOWCNTRL\nRUN_ERL_LOG_GENERATIONS=${RUN_ERL_LOG_GENERATIONS:-1}\nexport RUN_ERL_LOG_GENERATIONS\nRUN_ERL_LOG_MAXSIZE=${RUN_ERL_LOG_MAXSIZE:-$((100*1024*1024))}\nexport RUN_ERL_LOG_MAXSIZE\nRUN_ERL_LOG_ALIVE_MINUTES=${RUN_ERL_LOG_ALIVE_MINUTES:-15}\nexport RUN_ERL_LOG_ALIVE_MINUTES\n\nif [ \"$TERM\" = \"dumb\" ] || [ -z \"$TERM\" ]; then\n  export TERM=screen\nfi\n\n# OSX does not support readlink '-f' flag, work\n# around that\n# shellcheck disable=SC2039,SC3000-SC4000\ncase $OSTYPE in\n    darwin*)\n\tSCRIPT=$(readlink \"$0\" || true)\n    ;;\n    *)\n\tSCRIPT=$(readlink -f \"$0\" || true)\n    ;;\nesac\n[ -z \"$SCRIPT\" ] && SCRIPT=$0\nexport SCRIPT_DIR=\"$(cd \"$(dirname \"$SCRIPT\")\" && pwd -P)\"\nexport PARENT_DIR=\"$(cd \"${SCRIPT_DIR}/..\" && pwd  -P)\"\nexport SYSTEM_NAME=\"$(uname -s)\"\nexport RELEASE_ROOT_DIR=\"$(cd \"${SCRIPT_DIR}/..\" && pwd -P)\"\nexport REBAR_CONFIG=\"${RELEASE_ROOT_DIR}/rebar.config\"\nexport BUILD_DIR=\"${RELEASE_ROOT_DIR}/_build\"\n\n# let extract release relx information from rebar.config.\n# the following erlang code will read/parse the file\n# and extract the information required. 
In case of issue\n# it print an error message and return 1, else 0.\nextract_release_from_rebar_config() {\n\terl -noshell -eval '\n\t\ttry\n\t\t\t% extract file from REBAR_CONFIG variable\n\t\t\tC = case os:getenv(\"REBAR_CONFIG\") of\n\t\t\t\tfalse -> throw(\"REBAR_CONFIG not set\");\n\t\t\t\tVRC -> VRC\n\t\t\tend,\n\n\t\t\t% read/parse rebar.config\n\t\t\tF = case file:consult(C) of\n\t\t\t\t{ok, FC} -> FC;\n\t\t\t\t{error, EC} -> throw(EC)\n\t\t\tend,\n\n\t\t\t% extract relx section\n\t\t\tR = case proplists:get_value(relx, F) of\n\t\t\t\tundefined -> throw(\"relx section not found\");\n\t\t\t\tRX -> RX\n\t\t\tend,\n\n\t\t\t% extract release section\n\t\t\tV = case lists:keyfind(release, 1, R) of\n\t\t\t\tM = {release, {_, VX}, _} -> VX;\n\t\t\t\t_ -> throw(\"release not found\")\n\t\t\tend,\n\t\t\tio:format(\"~s~n\", [V]),\n\t\t\terlang:halt(0)\n\t\tcatch\n\t\t\t_:E ->\n\t\t\t\tio:format(standard_error, \"error: ~p~n\", [E]),\n\t\t\t\terlang:halt(255)\n\t\tend.\n\t'\n\treturn $?\n}\n\n# Make the value available to variable substitution calls below\n# Most of The following variables are usually hardcoded by rebar3\n# if they are empty, this means the entry-point is used from\n# the sources.\nexport REL_NAME=\"\"\nexport REL_VSN=\"\"\nexport RELEASE_NAME=\"\"\nexport RELEASE_VSN=\"\"\nexport RELEASE_GIT_REV=\"\"\nexport RELEASE_DATETIME=\"\"\nexport RELEASE_ERTS=\"\"\nexport RELEASE_CC=\"\"\nexport RELEASE_CMAKE=\"\"\nexport RELEASE_GMAKE=\"\"\nexport ERTS_VSN=\"\"\nexport RELEASE_PROG=\"${SCRIPT}\"\n\n# ensure REL_NAME and RELEASE_NAME variables are set\n# by default, if the script is running from sources,\n# the release name must be arweave.\ntest -z \"${REL_NAME}\" && export REL_NAME=\"arweave\"\ntest -z \"${RELEASE_NAME}\" && export RELEASE_NAME=\"arweave\"\n\n# check REL_VSN variable content. This one is quite important\n# to be able to start arweave.\nif test -z \"${REL_VSN}\"\nthen\n\tREL_VSN=$(extract_release_from_rebar_config)\n\tif test $? -ne 0\n\tthen\n\t\techo \"error: failed to read rebar file\" 1>&2\n\t\texit 1\n\tfi\n\n\tif test -z \"${REL_VSN}\"\n\tthen\n\t\techo \"error: no release found\" 1>&2\n\t\texit 1\n\tfi\n\n\texport REL_VSN\n\texport REL_PATH=\"${BUILD_DIR}/default/rel/${REL_NAME}/${REL_VSN}\"\n\texport REL_PATH_ALT=\"${BUILD_DIR}/default/rel/${REL_NAME}/releases/${REL_VSN}\"\n\tif ! test -e ${REL_PATH}\n\tthen\n\t\techo \"error: ${REL_PATH} does not exist\" 1>&2\n\t\tif ! test \"${ARWEAVE_DEV}\"\n\t\tthen\n\t\t\texit 1\n\t\tfi\n\tfi\nfi\n\n# ROOTDIR is used by the Erlang VM for code:root_dir() and relative path resolution.\n# Keep it as the project root so config files and logs work correctly.\nexport ROOTDIR=\"$RELEASE_ROOT_DIR\"\n\n# Track whether we're running from source (need symlinks) or pre-built release (have real dirs)\nRUNNING_FROM_SOURCE=\"\"\n\n# Function to create symlinks to the build release directory on-demand.\n# Called after arweave_developer_mode (if any) so symlinks aren't deleted by ar-rebar3.\n# Only needed when running from source; pre-built releases have real lib/ and releases/ dirs.\nensure_release_symlinks() {\n    # If releases dir already exists as a real directory, we're running a pre-built release\n    if [ -d \"${RELEASE_ROOT_DIR}/releases/${REL_VSN}\" ] && [ ! 
-L \"${RELEASE_ROOT_DIR}/releases\" ]; then\n        RUNNING_FROM_SOURCE=\"\"\n        return\n    fi\n\n    # Running from source - create symlinks to the build directory\n    RUNNING_FROM_SOURCE=\"1\"\n    RELEASE_BUILD_DIR=\"${BUILD_DIR}/default/rel/${REL_NAME}\"\n    if [ -d \"${RELEASE_BUILD_DIR}/releases\" ] && [ ! -e \"${RELEASE_ROOT_DIR}/releases\" ]; then\n        ln -sf \"${RELEASE_BUILD_DIR}/releases\" \"${RELEASE_ROOT_DIR}/releases\"\n    fi\n    if [ -d \"${RELEASE_BUILD_DIR}/lib\" ] && [ ! -e \"${RELEASE_ROOT_DIR}/lib\" ]; then\n        ln -sf \"${RELEASE_BUILD_DIR}/lib\" \"${RELEASE_ROOT_DIR}/lib\"\n    fi\n}\n\n# Schedule symlink cleanup after VM boots. The symlinks are only needed during\n# boot to load code; once loaded, they can be removed so VSCode extensions work.\n# Default delay is 30 seconds; set ARWEAVE_SYMLINK_CLEANUP_DELAY to override (0 to disable).\n# Only runs when running from source (RUNNING_FROM_SOURCE is set).\nschedule_symlink_cleanup() {\n    # Only cleanup if we're running from source and created symlinks\n    if [ -z \"$RUNNING_FROM_SOURCE\" ]; then\n        return\n    fi\n\n    local delay=\"${ARWEAVE_SYMLINK_CLEANUP_DELAY:-30}\"\n    if [ \"$delay\" = \"0\" ]; then\n        return\n    fi\n    (\n        sleep \"$delay\"\n        # Only remove if they're actually symlinks (safety check)\n        [ -L \"${RELEASE_ROOT_DIR}/lib\" ] && rm -f \"${RELEASE_ROOT_DIR}/lib\"\n        [ -L \"${RELEASE_ROOT_DIR}/releases\" ] && rm -f \"${RELEASE_ROOT_DIR}/releases\"\n    ) &\n}\n\nexport REL_DIR=\"${RELEASE_ROOT_DIR}/releases/${REL_VSN}\"\nexport RUNNER_LOG_DIR=\"${RUNNER_LOG_DIR:-$RELEASE_ROOT_DIR/logs}\"\nexport ESCRIPT_NAME=\"${ESCRIPT_NAME-$SCRIPT}\"\n\n# if RELX_RPC_TIMEOUT is set then use that\n# otherwise check for NODETOOL_TIMEOUT and convert to seconds\nif [ -z \"$RELX_RPC_TIMEOUT\" ]; then\n    # if NODETOOL_TIMEOUT exists then turn the old nodetool timeout into the rpc timeout\n    if [ -n \"$NODETOOL_TIMEOUT\" ]; then\n\t# will exit the script if NODETOOL_TIMEOUT isn't a number\n\tRELX_RPC_TIMEOUT=$((NODETOOL_TIMEOUT / 1000))\n    else\n\tRELX_RPC_TIMEOUT=60\n    fi\nfi\n\nexport RELX_RPC_TIMEOUT\n\n\n# start/stop/install/upgrade pre/post hooks\nPRE_START_HOOKS=\"\"\nPOST_START_HOOKS=\"\"\nPRE_STOP_HOOKS=\"\"\nPOST_STOP_HOOKS=\"\"\nPRE_INSTALL_UPGRADE_HOOKS=\"\"\nPOST_INSTALL_UPGRADE_HOOKS=\"\"\nSTATUS_HOOK=\"\"\nEXTENSIONS=\"\"\n\n_warning() {\n    printf -- \"warning: %s\\n\" \"${*}\" 1>&2\n}\n\n_error () {\n    printf -- \"error: %s\\n\" \"${*}\" 1>&2\n}\n\n######################################################################\n# Arweave Section\n######################################################################\n\n# Not all systems supports randomx jit\nif test ${SYSTEM_NAME} = \"Darwin\"\nthen\n    export RANDOMX_JIT=\"disable randomx_jit\"\nelse\n    export RANDOMX_JIT=\"\"\nfi\n\n# This variable is the main own used to start arweave\nARWEAVE_OPTS=\"-run ar main ${RANDOM_JIT}\"\n\n######################################################################\n# Arweave System Check Section\n######################################################################\narweave_check() {\n    case \"${1}\" in\n\thelp)\n\t    arweave_check_help\n\t    ;;\n\t*)\n\t    arweave_check_nofile\n\t    arweave_check_hugepages\n\t    ;;\n    esac\n}\n\narweave_check_help() {\n    echo \"Usage: ${REL_NAME} check\"\n    echo \"Check system configuration. 
Examples:\"\n    echo \"  ${REL_NAME} check\"\n    exit 1\n}\n\narweave_check_nofile() {\n    recommendation=\"1000000\"\n    limit=\"$(ulimit -n)\"\n\n    if [ \"$limit\" -lt \"$recommendation\" ]\n    then\n\t_warning \"************************************************************************\"\n\t_warning \"Your maximum number of open file descriptors is currently set to $limit.\"\n\t_warning \"We recommend setting that limit to $recommendation or higher.\"\n\t_warning \"Otherwise, consider setting your max_connections setting to something\"\n\t_warning \"lower than your file descriptor limit. This value can be check with:\"\n\t_warning \"    sysctl fs.file-max\"\n\t_warning \"or\"\n\t_warning \"    ulimit -n\"\n\t_warning \"see more at https://docs.arweave.org/\"\n\t_warning \"************************************************************************\"\n    fi\n}\n\narweave_check_hugepages() {\n    # execute this check only on linux\n    test ${SYSTEM_NAME} != \"Linux\" && return 0\n    recommendation=\"3500\"\n    value=$(sysctl -n vm.nr_hugepages)\n\n    if test ${value} -lt ${recommendation}\n    then\n\t_warning \"************************************************************************\"\n\t_warning \"huge pages is not configured on this system.\"\n\t_warning \"It should be set to ${recommendation}. This value can be check with:\"\n\t_warning \"    sysctl vm.nr_hugepages\"\n\t_warning \"see more at https://docs.arweave.org/\"\n\t_warning \"************************************************************************\"\n    fi\n}\n\n######################################################################\n# Arweave Benchmark Section\n######################################################################\narweave_benchmark() {\n    case \"${1}\"\n    in\n\thash)\n\t    shift\n\t    arweave_benchmark_hash ${*}\n\t    ;;\n\tpacking)\n\t    shift\n\t    arweave_benchmark_packing ${*}\n\t    ;;\n\tvdf)\n\t    shift\n\t    arweave_benchmark_vdf ${*}\n\t    ;;\n\tvdf_exp)\n\t    shift\n\t    arweave_benchmark_vdf_exp ${*}\n\t    ;;\n\t*)\n\t    arweave_benchmark_help\n\t    ;;\n    esac\n}\n\narweave_benchmark_help() {\n    echo \"Usage: ${REL_NAME} benchmark [hash|packing|vdf]\"\n    echo \"Execute Arweave benchmarks. Examples:\"\n    echo \"  ${REL_NAME} benchmark hash\"\n    echo \"  ${REL_NAME} benchmark packing\"\n    echo \"  ${REL_NAME} benchmark vdf\"\n    exit 1\n}\n\narweave_benchmark_hash() {\n    ARWEAVE_OPTS=\"-run ar benchmark_hash\"\n    echo ${*}\n}\n\narweave_benchmark_packing() {\n    ARWEAVE_OPTS=\"-run ar benchmark_packing\"\n    echo ${*}\n}\n\narweave_benchmark_vdf() {\n    ARWEAVE_OPTS=\"-run ar benchmark_vdf\"\n    echo ${*}\n}\n\narweave_benchmark_vdf_exp() {\n    ARWEAVE_OPTS=\"-run ar benchmark_vdf_exp\"\n    echo ${*}\n}\n\n######################################################################\n# Arweave Wallet Management Section\n######################################################################\narweave_wallet() {\n    case \"${1}\" in\n\tcreate)\n\t    shift\n\t    arweave_wallet_create ${*}\n\t    ;;\n\t*)\n\t    arweave_wallet_help\n\t    ;;\n    esac\n}\n\narweave_wallet_help() {\n    echo \"Usage: ${REL_NAME} wallet [create]\"\n    echo \"Manage Arweave wallets. 
Examples:\"\n    echo \"  ${REL_NAME} wallet create rsa\"\n    echo \"  ${REL_NAME} wallet create ecdsa\"\n    exit 1\n}\n\narweave_wallet_create() {\n    case \"${1}\" in\n\trsa)\n\t    shift\n\t    arweave_wallet_create_rsa ${*}\n\t    ;;\n\tecdsa)\n\t    shift\n\t    arweave_wallet_create_ecdsa ${*}\n\t    ;;\n\t*)\n\t    arweave_wallet_create_help\n\t    ;;\n    esac\n}\n\narweave_wallet_create_help() {\n    echo \"Usage: ${REL_NAME} wallet create [rsa|ecdsa]\"\n    echo \"Create Arweave wallet. examples:\"\n    echo \"  ${REL_NAME} wallet create rsa\"\n    echo \"  ${REL_NAME} wallet create ecdsa\"\n    exit 1\n}\n\narweave_wallet_create_rsa() {\n    ARWEAVE_OPTS=\"-run ar create_wallet\"\n    echo ${*}\n}\n\narweave_wallet_create_ecdsa() {\n    ARWEAVE_OPTS=\"-run ar create_ecdsa_wallet\"\n    echo ${*}\n}\n\n######################################################################\n# Arweave Data Doctor Section\n######################################################################\narweave_doctor() {\n    ARWEAVE_OPTS=\"-run ar_data_doctor main\"\n    echo ${*}\n}\n\narweave_doctor_help() {\n    echo \"Usage: ${REL_NAME} doctor\"\n    echo \"Execute data doctor analyzer\"\n    exit 1\n}\n\n######################################################################\n# Arweave Developer mode Section\n######################################################################\n# when ARWEAVE_DEV environment variable is set, the release is rebuild\narweave_developer_mode() {\n\t(\n\t    cd ${PARENT_DIR} \\\n\t\t&& ./ar-rebar3 ${ARWEAVE_BUILD_TARGET:-default} release\n\t    sleep 1\n\t)\n}\n\n# check if a command (subcommand) is a developer command.\nis_arweave_developer_command() {\n  local commands=\"test test_e2e\"\n  local value=\"${1}\"\n  if test \"${ARWEAVE_DEV}\"\n  then\n    return 1\n  fi\n\n  for command in ${commands}\n  do\n    if test \"${command}\" = \"${value}\"\n    then\n      return 0\n    fi\n  done\n  return 1\n}\n\n######################################################################\n# Arweave Version Section\n######################################################################\narweave_version() {\n    case \"${1}\" in\n\t*) arweave_version_light\n\t   ;;\n    esac\n}\n\narweave_version_light() {\n    echo \"${RELEASE_NAME} ${RELEASE_VSN} (${RELEASE_GIT_REV}) ${RELEASE_DATETIME}\"\n    echo \"   erts ${RELEASE_ERTS}\"\n    echo \"   ${RELEASE_CC}\"\n    echo \"   ${RELEASE_GMAKE}\"\n    echo \"   ${RELEASE_CMAKE}\"\n    exit 0\n}\n\narweave_version_help() {\n    echo \"Usage: ${REL_NAME} version\"\n    echo \"Return Arweave release\"\n    exit 1\n}\n\n######################################################################\n# test section\n######################################################################\narweave_test() {\n    TEST_CONFIG=\"./config/sys.config\"\n    TEST_PROFILE=\"test\"\n    TEST_NODE_NAME=\"${NODE_NAME:-main-localtest}\"\n    TEST_NODE_HOST=\"${NODE_HOST:-127.0.0.1}\"\n    TEST_COOKIE=\"${COOKIE:-test}\"\n    TEST_MODULE=\"tests\"\n    TEST_LOG=\"main-localtest.out\"\n    arweave_test_run ${*}\n}\n\narweave_test_help() {\n    echo \"Usage: ${REL_NAME} test [module | module:test ...]\"\n    echo \"Run Arweave Test Suite\"\n    echo \"  test                     - run all tests\"\n    echo \"  test module              - run all tests in module\"\n    echo \"  test module:test         - run specific test from module\"\n    echo \"  test mod1 mod2:test mod3 - mixed mode\"\n}\n\narweave_e2e() {\n    TEST_CONFIG=\"./config/sys.config\"\n    
TEST_PROFILE=\"e2e\"\n    TEST_NODE_NAME=\"${NODE_NAME:-main-e2e}\"\n    TEST_NODE_HOST=\"${NODE_HOST:-127.0.0.1}\"\n    TEST_COOKIE=\"${COOKIE:-e2e}\"\n    TEST_MODULE=\"e2e\"\n    TEST_LOG=\"main-e2e.out\"\n    arweave_test_run ${*}\n}\n\narweave_e2e_help() {\n    echo \"Usage: ${REL_NAME} test_e2e [module | module:test ...]\"\n    echo \"Run Arweave e2e Test Suite\"\n    echo \"  test_e2e                     - run all e2e tests\"\n    echo \"  test_e2e module              - run all tests in module\"\n    echo \"  test_e2e module:test         - run specific test from module\"\n    echo \"  test_e2e mod1 mod2:test mod3 - mixed mode\"\n}\n\n# test and e2e features are sharing the same procedures.\narweave_test_run() {\n    (\n\techo -e \"\\033[0;32m===> Enter into ${PARENT_DIR}\\033[0m\"\n\tcd ${PARENT_DIR}\n\n\techo -e \"\\033[0;32m===> Compile ${TEST_PROFILE} profile\\033[0m\"\n\t./ar-rebar3 \"${TEST_PROFILE}\" compile\n\n\t# if a specific test is specified\n\tif test \"${1}\"\n\tthen\n\t    # Replace colons with underscores for valid node name\n\t    SANITIZED_ARG=\"${1//:/_}\"\n\t    TEST_NODE=\"${TEST_NODE_NAME}-${SANITIZED_ARG}@${TEST_NODE_HOST}\"\n\telse\n\t    TEST_NODE=\"${TEST_NODE_NAME}@${TEST_NODE_HOST}\"\n\tfi\n\n        TEST_PATH=\"$(./rebar3 as ${TEST_PROFILE} path)\"\n        \n        ## TODO: Generate path for all apps -> Should we fetch this from somewhere?\n        APPS=\"arweave arweave_config arweave_limiter arweave_diagnostic\"\n\n\tPATH_ARGS=\"\"\n        for app in $APPS; do\n            P=\"$(./rebar3 as ${TEST_PROFILE} path --base)/lib/${app}/test\"\n            echo $P\n            PATH_ARGS=\"${PATH_ARGS} ${P}\"\n        done\n\n        PARAMS=\"-pa ${TEST_PATH} ${PATH_ARGS} -config ${TEST_CONFIG} -noshell\"\n\tENTRY_POINT=\"-run ar ${TEST_MODULE} ${*} -s init stop\"\n\tcommand=\"erl ${PARAMS} -name ${TEST_NODE} -setcookie ${TEST_COOKIE} ${ENTRY_POINT}\"\n\techo -e \"\\033[0;32m===> Execute command ${command}\\033[0m\"\n\tset -xe -o pipefail\n\t${command} | tee \"${TEST_LOG}\"\n\texit $?\n    )\n}\n\n######################################################################\n# Relx section\n######################################################################\nrelx_usage() {\n    command=\"$1\"\n\n    case \"$command\" in\n\tbenchmark)\n\t    arweave_benchmark_help\n\t    ;;\n\tcheck)\n\t    arweave_check_help\n\t    ;;\n\tdoctor)\n\t    arweave_doctor_help\n\t    ;;\n\tversion)\n\t    arweave_version_help\n\t    ;;\n\tpacking)\n\t    arweave_packing_help\n\t    ;;\n\twallet)\n\t    arweave_wallet_help\n\t    ;;\n\tdaemon)\n\t    echo \"Usage: ${REL_NAME} daemon\"\n\t    echo \"Start Arweave as daemon (in background)\"\n\t    ;;\n\tdaemon_attach)\n\t    echo \"Usage: ${REL_NAME} daemon_attach\"\n\t    echo \"Attach to a running Arweave daemonized process\"\n\t    ;;\n\trpc)\n\t    echo \"Usage: $REL_NAME rpc [Mod [Fun [Args]]]]\"\n\t    echo \"Applies the specified function and returns the result.\"\n\t    echo \"Mod must be specified. However, start and [] are assumed\"\n\t    echo \"for unspecified Fun and Args, respectively. 
Args is to \"\n\t    echo \"be in the same format as for erlang:apply/3 in ERTS.\"\n\t    ;;\n\tescript)\n\t    echo \"Usage: ${REL_NAME} escript [ESCRIPT]\"\n\t    echo \"Execute an Erlang script in the Arweave release environment.\"\n\t    echo \"Note: it will not start Arweave.\"\n\t    ;;\n\t\"eval\")\n\t    echo \"Usage: $REL_NAME eval [Exprs]\"\n\t    echo \"Executes a sequence of Erlang expressions, separated by\"\n\t    echo \"comma (,) and ended with a full stop (.)\"\n\t    ;;\n\tforeground)\n\t    echo \"Usage: $REL_NAME foreground\"\n\t    echo \"Starts the Arweave release in the foreground, meaning all output\"\n\t    echo \"going to stdout but without an interactive shell.\"\n\t    echo \"The entry point is set to -run ar main\"\n\t    ;;\n\tforeground_clean)\n\t    echo \"Usage: $REL_NAME foreground\"\n\t    echo \"Starts the Arweave release in the foreground, meaning all output\"\n\t    echo \"going to stdout but without an interactive shell.\"\n\t    echo \"No entry point is configured\"\n\t    ;;\n\tconsole)\n\t    echo \"Usage: $REL_NAME console\"\n\t    echo \"Starts Arweave with an interactive shell.\"\n\t    ;;\n\tconsole_clean)\n\t    echo \"Usage: ${REL_NAME} console_clean\"\n\t    echo: \"Starts an interactived Erlang shell without Arweave started.\"\n\t    ;;\n\tremote_console|remote|remsh)\n\t    echo \"Usage: $REL_NAME remote\"\n\t    echo \"Attach a remote shell to an already running Erlang node for this release.\"\n\t    ;;\n\treboot)\n\t    echo \"Usage: ${REL_NAME} reboot\"\n\t    echo \"Reboot the entire Arweave VM.\"\n\t    ;;\n\trestart)\n\t    echo \"Usage: ${REL_NAME} restart\"\n\t    echo \"Restart the running applications but not the Arweave VM.\"\n\t    ;;\n\tpid)\n\t    echo \"Usage: ${REL_NAME} pid\"\n\t    echo \"Returns the system PID of Arweave release (if running).\"\n\t    ;;\n\tping)\n\t    echo \"Usage: ${REL_NAME} ping\"\n\t    echo \"Checks if the Arweave node is running.\"\n\t    ;;\n\tstatus)\n\t    echo \"Usage: $REL_NAME status\"\n\t    echo \"Obtains node status information through optionally defined hooks.\"\n\t    ;;\n\tstop)\n\t    echo \"Usage: ${REL_NAME} stop\"\n\t    echo \"Stop the Arweave node.\"\n\t    ;;\n\ttest)\n\t    arweave_test_help\n\t    ;;\n\ttest_e2e)\n\t    arweave_e2e_help\n\t    ;;\n\t*)\n\t    # check for extension\n\t    IS_EXTENSION=$(relx_is_extension \"$command\")\n\t    if [ \"$IS_EXTENSION\" = \"1\" ]; then\n\t\tEXTENSION_SCRIPT=$(relx_get_extension_script \"$command\")\n\t\trelx_run_extension \"$EXTENSION_SCRIPT\" help\n\t    else\n\t\tEXTENSIONS=$(echo $EXTENSIONS | sed -e 's/|undefined//g')\n\t\techo \"Usage: ${REL_NAME} [COMMAND] [ARGS]\"\n\t\techo \"\"\n\t\techo \"Arweave Commands:\"\n\t\techo \"\"\n\t\techo \"  benchmark               Run Arweave Benchmarks\"\n\t\techo \"  check                   Check system parameters for Arweave\"\n\t\techo \"  console                 Start Arweave with an interactive Erlang shell\"\n\t\techo \"  console_clean           Start an interactive Erlang shell without the Arweave release's applications\"\n\t\techo \"  daemon                  Start Arweave in the background with run_erl (named pipes)\"\n\t\techo \"  daemon_attach           Connect to Arweave node started as daemon with to_erl (named pipes)\"\n\t\techo \"  doctor                  Start Arweave Data Analyzer tool\"\n\t\techo \"  escript                 Run an escript in the same environment as the Arweave release\"\n\t\techo \"  eval [Exprs]            Run Erlang expressions on Arweave node\"\n\t\techo \"  
foreground              Start Arweave with output to stdout\"\n\t\techo \"  foreground_clean        Start Arweave VM without any entry-point as arguments\"\n\t\techo \"  pid                     Print the PID of the Arweave OS process\"\n\t\techo \"  ping                    Print pong if the Arweave node is alive\"\n\t\techo \"  reboot                  Reboot the entire Arweave VM\"\n\t\techo \"  reload                  Restart only Arweave application in the VM\"\n\t\techo \"  remote_console          Connect remote shell to the Arweave node\"\n\t\techo \"  restart                 Restart the running applications but not the Arweave VM\"\n\t\techo \"  rpc [Mod [Fun [Args]]]] Run apply(Mod, Fun, Args) on the Arweave node\"\n\t\techo \"  status                  Verify if the Arweave node is running and then run status hook scripts\"\n\t\techo \"  stop                    Stop the Arweave node\"\n\t\techo \"  version                 Print the Arweave version\"\n\t\techo \"  wallet                  Manage Arweave wallets\"\n\n\t\tif test \"$EXTENSIONS\"\n\t\tthen\n\t\t  echo \"$EXTENSIONS\"\n                fi\n\n\t\tif test \"${ARWEAVE_DEV}\"\n                then\n\t\t  echo \"\"\n\t\t  echo \"Arweave Commands (developer mode):\"\n\t\t  echo \"  test [MODULE [TEST]]     Run Arweave test Suite\"\n\t\t  echo \"  test_e2e [MODULE [TEST]] Run Arweave e2e Test Suite\"\n\t\tfi\n\t    fi\n\t    ;;\n    esac\n}\n\nfind_erts_dir() {\n    __erts_dir=\"$RELEASE_ROOT_DIR/erts-$ERTS_VSN\"\n    if [ -d \"$__erts_dir\" ]; then\n\tERTS_DIR=\"$__erts_dir\";\n    else\n\t__erl=\"$(command -v erl)\"\n\tcode=\"io:format(\\\"~s\\\", [code:root_dir()]), halt().\"\n\t__erl_root=\"$(\"$__erl\" -boot no_dot_erlang -sasl errlog_type error -noshell -eval \"$code\")\"\n\tERTS_DIR=\"$__erl_root/erts-$ERTS_VSN\"\n\tif [ ! -d \"$ERTS_DIR\" ]; then\n\t    erts_version_code=\"io:format(\\\"~s\\\", [erlang:system_info(version)]), halt().\"\n\t    __erts_version=\"$(\"$__erl\" -boot no_dot_erlang -sasl errlog_type error -noshell -eval \"$erts_version_code\")\"\n\t    ERTS_DIR=\"${__erl_root}/erts-${__erts_version}\"\n\t    if [ -d \"$ERTS_DIR\" ]; then\n\t\techo \"Exact ERTS version (${ERTS_VSN}) match not found, instead using ${__erts_version}. The release may fail to run.\" 1>&2\n\t\tERTS_VSN=${__erts_version}\n\t    else\n\t\techo \"Can not run the release. There is no ERTS bundled with the release or found on the system.\"\n\t\texit 1\n\t    fi\n\tfi\n    fi\n}\n\nfind_erl_call() {\n    # users who depend on stdout when running rpc calls must still use nodetool\n    # so we have an overload option to force use of nodetool instead of erl_call\n    if [ \"$USE_NODETOOL\" ]; then\n\tERL_RPC=relx_nodetool\n    else\n\t# only OTP-23 and above have erl_call in the erts bin directory\n\t# and only those versions have the features and bug fixes needed\n\t# to work properly with this script\n\t__erl_call=\"$ERTS_DIR/bin/erl_call\"\n\tif [ -f \"$__erl_call\" ]; then\n\t    ERL_RPC=\"$__erl_call\";\n\telse\n\t    ERL_RPC=relx_nodetool\n\tfi\n    fi\n}\n\n# Get node pid\nrelx_get_pid() {\n    if output=\"$(erl_rpc os getpid 2>/dev/null)\"\n    then\n\techo \"$output\" | sed -e 's/\"//g'\n\treturn 0\n    else\n\techo \"$output\"\n\treturn 1\n    fi\n}\n\nping_or_exit() {\n    if ! 
erl_rpc erlang is_alive > /dev/null 2>&1; then\n\techo \"Node is not running!\"\n\texit 1\n    fi\n}\n\nrelx_get_nodename() {\n    id=\"longname$(relx_gen_id)-${NAME}\"\n    if [ -z \"$COOKIE\" ]; then\n\t# shellcheck disable=SC2086\n\t\"$BINDIR/erlexec\" -boot \"$REL_DIR\"/start_clean \\\n\t\t\t  -mode interactive \\\n\t\t\t  -boot_var SYSTEM_LIB_DIR \"$SYSTEM_LIB_DIR\" \\\n\t\t\t  -eval '[_,H]=re:split(atom_to_list(node()),\"@\",[unicode,{return,list}]), io:format(\"~s~n\",[H]), halt()' \\\n\t\t\t  -dist_listen false \\\n\t\t\t  ${START_EPMD} \\\n\t\t\t  -noshell \"${NAME_TYPE}\" \"$id\"\n    else\n\t# running with setcookie prevents a ~/.erlang.cookie from being created\n\t# shellcheck disable=SC2086\n\t\"$BINDIR/erlexec\" -boot \"$REL_DIR\"/start_clean \\\n\t\t\t  -mode interactive \\\n\t\t\t  -boot_var SYSTEM_LIB_DIR \"$SYSTEM_LIB_DIR\" \\\n\t\t\t  -eval '[_,H]=re:split(atom_to_list(node()),\"@\",[unicode,{return,list}]), io:format(\"~s~n\",[H]), halt()' \\\n\t\t\t  -setcookie \"${COOKIE}\" \\\n\t\t\t  -dist_listen false \\\n\t\t\t  ${START_EPMD} \\\n\t\t\t  -noshell \"${NAME_TYPE}\" \"$id\"\n    fi\n}\n\n# Connect to a remote node\nrelx_rem_sh() {\n    # Remove remote_nodename when OTP-23 is the oldest version supported by rebar3/relx.\n    # sort the used erts version against 11.0 to see if it is less than 11.0 (OTP-23)\n    # if it is then we must generate a node name to use for the remote node.\n    # But this feature is only for short names in 23.0 (erts 11.0). It can be used\n    # for long names with 23.1 (erts 11.1) and above.\n    if [ \"${NAME_TYPE}\" = \"-sname\" ] && [  \"11.0\" = \"$(printf \"%s\\n11.0\" \"${ERTS_VSN}\" | sort -V | head -n1)\" ] ; then\n\n\tremote_nodename=\"${NAME_TYPE} undefined@${RELX_HOSTNAME}\"\n    # if the name type is longnames then make sure this is erts 11.1+\n    elif [ \"${NAME_TYPE}\" = \"-name\" ] && [  \"11.1\" = \"$(printf \"%s\\n11.1\" \"${ERTS_VSN}\" | sort -V | head -n1)\" ] ; then\n\tremote_nodename=\"${NAME_TYPE} undefined@${RELX_HOSTNAME}\"\n    else\n\t# Generate a unique id used to allow multiple remsh to the same node transparently\n\tremote_nodename=\"${NAME_TYPE} remsh$(relx_gen_id)-${NAME}\"\n    fi\n\n    # Get the node's ticktime so that we use the same one\n    TICKTIME=\"$(erl_rpc net_kernel get_net_ticktime)\"\n\n    # Setup remote shell command to control node\n    # -dist_listen is new in OTP-23. 
It keeps the remote node from binding to a listen port\n    # and implies the option -hidden\n    # shellcheck disable=SC2086\n    exec \"$BINDIR/erlexec\" ${remote_nodename} -remsh \"$NAME\" -boot \"$REL_DIR\"/start_clean -mode interactive \\\n\t -boot_var SYSTEM_LIB_DIR \"$SYSTEM_LIB_DIR\" \\\n\t -setcookie \"$COOKIE\" -hidden -kernel net_ticktime \"$TICKTIME\" \\\n\t -dist_listen false \\\n\t $DIST_ARGS \\\n\t $EXTRA_DIST_ARGS\n}\n\nerl_rpc() {\n    case \"$ERL_RPC\" in\n\t\"relx_nodetool\")\n\t    relx_nodetool rpc \"$@\"\n\t    ;;\n\t*)\n\t    command=$*\n\n\t    # erl_call -R is recommended for generating dynamic node name but is only available in 23.0+\n\t    if [  \"11.0\" = \"$(printf \"%s\\n11.0\" \"${ERTS_VSN}\" | sort -V | head -n1)\" ] ; then\n\t\tDYNAMIC_NAME=\"-R\"\n\t    else\n\t\tDYNAMIC_NAME=\"-r\"\n\t    fi\n\n\t    if [ \"$ADDRESS\" ]; then\n\t\tresult=$(\"$ERL_RPC\" \"${DYNAMIC_NAME}\" -c \"${COOKIE}\" -address \"${ADDRESS}\" -timeout \"${RELX_RPC_TIMEOUT}\" -a \"${command}\")\n\t    else\n\t\tresult=$(\"$ERL_RPC\" \"$NAME_TYPE\" \"$NAME\" \"${DYNAMIC_NAME}\" -c \"${COOKIE}\" -timeout \"${RELX_RPC_TIMEOUT}\" -a \"${command}\")\n\t    fi\n\t    code=$?\n\t    if [ $code -eq 0 ]; then\n\t\techo \"$result\"\n\t    else\n\t\treturn $code\n\t    fi\n\t    ;;\n    esac\n}\n\nerl_eval() {\n    case \"$ERL_RPC\" in\n\t\"relx_nodetool\")\n\t    relx_nodetool eval \"$@\"\n\t    ;;\n\t*)\n\t    local command=\"${*}\"\n\t    if [ \"$ERL_DIST_PORT\" ]; then\n\t\tresult=$(echo \"${command}\" | eval \"$ERL_RPC\" \"${DYNAMIC_NAME}\" -c \"${COOKIE}\" -address \"${ADDRESS}\" -timeout \"${RELX_RPC_TIMEOUT}\" -e)\n\t    else\n\t\tresult=$(echo \"${command}\" | eval \"$ERL_RPC\" \"$NAME_TYPE\" \"$NAME\" \"${DYNAMIC_NAME}\" -c \"${COOKIE}\" -timeout \"${RELX_RPC_TIMEOUT}\" -e)\n\t    fi\n\t    code=$?\n\t    if [ $code -eq 0 ]; then\n\t\techo \"$result\" | sed 's/^{ok, \\(.*\\)}$/\\1/'\n\t    else\n\t\treturn $code\n\t    fi\n\t    ;;\n    esac\n}\n\n\n# Generate a random id\nrelx_gen_id() {\n    # To prevent exhaustion of atoms on target node, optionally avoid\n    # generation of random node prefixes, if it is guaranteed calls\n    # are entirely sequential.\n    if [ -z \"${NODETOOL_NODE_PREFIX}\" ]; then\n\tdd count=1 bs=4 if=/dev/urandom 2> /dev/null | od -x  | head -n1 | awk '{print $2$3}'\n    else\n\techo \"${NODETOOL_NODE_PREFIX}\"\n    fi\n}\n\n# Control a node with nodetool if erl_call isn't from OTP-23+\nrelx_nodetool() {\n    command=\"$1\"; shift\n\n    # Generate a unique id used to allow multiple nodetool calls to the\n    # same node transparently\n    nodetool_id=\"maint$(relx_gen_id)-${NAME}\"\n\n    if [ -z \"${START_EPMD}\" ]; then\n\tERL_FLAGS=\"${ERL_FLAGS} ${DIST_ARGS} ${EXTRA_DIST_ARGS} ${NAME_TYPE} $nodetool_id -setcookie ${COOKIE} -dist_listen false\" \\\n\t\t \"$ERTS_DIR/bin/escript\" \\\n\t\t \"$ROOTDIR/bin/nodetool\" \\\n\t\t \"$NAME_TYPE\" \"$NAME\" \\\n\t\t \"$command\" \"$@\"\n    else\n\t# shellcheck disable=SC2086\n\tERL_FLAGS=\"${ERL_FLAGS} ${DIST_ARGS} ${EXTRA_DIST_ARGS} ${NAME_TYPE} $nodetool_id -setcookie ${COOKIE} -dist_listen false\" \\\n\t\t \"$ERTS_DIR/bin/escript\" \\\n\t\t \"$ROOTDIR/bin/nodetool\" \\\n\t\t $START_EPMD \"$NAME_TYPE\" \"$NAME\" \"$command\" \"$@\"\n    fi\n}\n\n# Run an escript in the node's environment\nrelx_escript() {\n    scriptpath=\"$1\"; shift\n    export RELEASE_ROOT_DIR\n\n    \"$ERTS_DIR/bin/escript\" \"$ROOTDIR/$scriptpath\" \"$@\"\n}\n\n# Convert {127,0,0,1} to 127.0.0.1 (inet:ntoa/1)\naddr_tuple_to_str() {\n    
addr=\"$1\"\n    saved_IFS=\"$IFS\"\n    IFS=\"{,}'\\\" \"\n    # shellcheck disable=SC2086\n    eval set -- $addr\n    IFS=\"$saved_IFS\"\n\n    case $# in\n    4) printf '%u.%u.%u.%u' \"$@\";;\n    8) printf '%.4x:%.4x:%.4x:%.4x:%.4x:%.4x:%.4x:%.4x' \"$@\";;\n    *) echo \"Cannot parse IP address tuple: '$addr'\" 1>&2;;\n    esac\n}\n\nmake_out_file_path() {\n    # Use output directory provided in the RELX_OUT_FILE_PATH environment variable\n    # (default to the current location of vm.args and sys.config)\n    DIR=$(dirname \"$1\")\n    [ -d \"${RELX_OUT_FILE_PATH}\" ] && DIR=\"${RELX_OUT_FILE_PATH}\"\n    FILE=$(basename \"$1\")\n    IN=\"${DIR}/${FILE}\"\n\n    PFX=$(echo \"$IN\"   | awk '{sub(/\\.[^.]+$/, \"\", $0)}1')\n    SFX=$(echo \"$FILE\" | awk -F . '{if (NF>1) print $NF}')\n    if [ \"$RELX_MULTI_NODE\" ]; then\n\techo \"${PFX}.${NAME}.${SFX}\"\n    else\n\techo \"${PFX}.${SFX}\"\n    fi\n}\n\n# Replace environment variables\nreplace_os_vars() {\n    awk '{\n\twhile(match($0,\"[$]{[^}]*}\")) {\n\t    var=substr($0,RSTART+2,RLENGTH -3)\n\t    slen=split(var,arr,\":-\")\n\t    v=arr[1]\n\t    e=ENVIRON[v]\n\t    gsub(\"&\",\"\\\\\\\\\\\\&\",e)\n\t    if(slen > 1 && e==\"\") {\n\t\ti=index(var, \":-\"arr[2])\n\t\tdef=substr(var,i+2)\n\t\tgsub(\"[$]{\"var\"}\",def)\n\t    } else {\n\t\tgsub(\"[$]{\"var\"}\",e)\n\t    }\n\t}\n    }1' < \"$1\" > \"$2\"\n}\n\nadd_path() {\n    # Use $CWD/$1 if exists, otherwise releases/VSN/$1\n    local FILE=${1}; shift\n    local IN_FILE_PATH=${1}; shift\n    local EXTRA_PATHS=${*}\n\n    if [ \"${IN_FILE_PATH}\" ]\n    then\n      echo \"${IN_FILE_PATH}\"\n      return 0\n    fi\n\n    for e in \"${RELEASE_ROOT_DIR}\" \"${REL_DIR}\" ${EXTRA_PATHS}\n    do\n      if [ -f \"${e}/${FILE}\" ]\n      then\n        echo \"${e}/${FILE}\"\n        return 0\n      fi\n    done\n    return 1\n}\n\nmulti_check_replace_os_vars() {\n\tlocal file=\"${1}\"; shift\n\twhile test \"${*}\"\n\tdo\n\t\tlocal path=${1}; shift\n\t\tlocal ret=$(check_replace_os_vars ${file} ${path})\n\t\tif test \"${ret}\"\n\t\tthen\n\t\t\techo ${ret}\n\t\t\treturn 0\n\t\tfi\n\tdone\n\treturn 1\n}\n\ncheck_replace_os_vars() {\n    IN_FILE_PATH=$(add_path \"$1\" \"$2\")\n    OUT_FILE_PATH=\"$IN_FILE_PATH\"\n    SRC_FILE_PATH=\"$IN_FILE_PATH.src\"\n    ORIG_FILE_PATH=\"$IN_FILE_PATH.orig\"\n    if [ -f \"$SRC_FILE_PATH\" ]; then\n\tOUT_FILE_PATH=$(make_out_file_path \"$IN_FILE_PATH\")\n\treplace_os_vars \"$SRC_FILE_PATH\" \"$OUT_FILE_PATH\"\n    elif [ \"$RELX_REPLACE_OS_VARS\" ]; then\n\tOUT_FILE_PATH=$(make_out_file_path \"$IN_FILE_PATH\")\n\t# If vm.args.orig or sys.config.orig is present then use that\n\tif [ -f \"$ORIG_FILE_PATH\" ]; then\n\t   IN_FILE_PATH=\"$ORIG_FILE_PATH\"\n\tfi\n\n\t# apply the environment variable substitution to $IN_FILE_PATH\n\t# the result is saved to $OUT_FILE_PATH\n\t# if they are both the same, then ensure that we don't clobber\n\t# the file by saving a backup with the .orig extension\n\tif [ \"$IN_FILE_PATH\" = \"$OUT_FILE_PATH\" ]; then\n\t    cp \"$IN_FILE_PATH\" \"$ORIG_FILE_PATH\"\n\t    replace_os_vars \"$ORIG_FILE_PATH\" \"$OUT_FILE_PATH\"\n\telse\n\t    replace_os_vars \"$IN_FILE_PATH\" \"$OUT_FILE_PATH\"\n\tfi\n    else\n\t# If vm.arg.orig or sys.config.orig is present then use that\n\tif [ -f \"$ORIG_FILE_PATH\" ]; then\n\t    OUT_FILE_PATH=$(make_out_file_path \"$IN_FILE_PATH\")\n\t    cp \"$ORIG_FILE_PATH\" \"$OUT_FILE_PATH\"\n\tfi\n    fi\n    echo \"$OUT_FILE_PATH\"\n}\n\nrelx_run_hooks() {\n    HOOKS=$1\n    for hook in $HOOKS\n    do\n\t# 
the scripts arguments at this point are separated\n\t# from each other by | , we now replace these\n\t# by empty spaces and give them to the `set`\n\t# command in order to be able to extract them\n\t# separately\n\t# shellcheck disable=SC2046\n\tset $(echo \"$hook\" | sed -e 's/|/ /g')\n\tHOOK_SCRIPT=$1; shift\n\t# all hook locations are expected to be\n\t# relative to the start script location\n\t# shellcheck disable=SC1090,SC2240\n\t[ -f \"$SCRIPT_DIR/$HOOK_SCRIPT\" ] && . \"$SCRIPT_DIR/$HOOK_SCRIPT\" \"$@\"\n    done\n}\n\nrelx_disable_hooks() {\n    PRE_START_HOOKS=\"\"\n    POST_START_HOOKS=\"\"\n    PRE_STOP_HOOKS=\"\"\n    POST_STOP_HOOKS=\"\"\n    PRE_INSTALL_UPGRADE_HOOKS=\"\"\n    POST_INSTALL_UPGRADE_HOOKS=\"\"\n    STATUS_HOOK=\"\"\n}\n\nrelx_is_extension() {\n    EXTENSION=$1\n    case \"$EXTENSION\" in\n\t# )\n\t#    echo \"1\"\n\t# ;;\n\t*)\n\t    echo \"0\"\n\t;;\n    esac\n}\n\nrelx_get_extension_script() {\n    EXTENSION=$1\n    # below are the extensions declarations\n    # of the form:\n    # foo_extension=\"path/to/foo_script\";bar_extension=\"path/to/bar_script\"\n\n    # get the command extension (eg. foo) and\n    # obtain the actual script filename that it\n    # refers to (eg. \"path/to/foo_script\"\n    eval echo \"$\"\"${EXTENSION}_extension\"\n}\n\nrelx_run_extension() {\n    # drop the first argument which is the name of the\n    # extension script\n    EXTENSION_SCRIPT=$1\n    shift\n    # all extension script locations are expected to be\n    # relative to the start script location\n    # shellcheck disable=SC1090,SC2240\n    [ -f \"$SCRIPT_DIR/$EXTENSION_SCRIPT\" ] && . \"$SCRIPT_DIR/$EXTENSION_SCRIPT\" \"$@\"\n}\n\n# given a list of arguments, identify the internal ones\n#   --relx-disable-hooks\n# and process them accordingly\nprocess_internal_args() {\n    for arg in \"$@\"\n    do\n      shift\n      case \"$arg\" in\n\t  --relx-disable-hooks)\n\t    relx_disable_hooks\n\t    ;;\n\t  *)\n\t    ;;\n      esac\n    done\n}\n\n# This function takes a list of terms (usually arguments)\n# and split them in two categories, the one before --\n# and the one after. 
The one before is used as Erlang\n# VM parameters and should overwrite default configuration,\n# The last part (LOCAL_PARAMS) contains arweave parameters.\n# This function export LOCAL_PARAMS and VM_PARAMS variables.\nparse_args() {\n\tlocal separator=\"--\"\n\tlocal vm_params=\"\"\n\tlocal params=\"\"\n\twhile test \"${*}\"\n\tdo\n\t\tlocal arg=\"${1}\"\n\t\tif test \"${arg}\" = ${separator}\n\t\tthen\n\t\t\ttest \"${vm_params}\" \\\n\t\t\t\t&& vm_params=\"${vm_params} ${params}\" \\\n\t\t\t\t|| vm_params=\"${params}\"\n\t\t\tparams=\"\"\n\t\telse\n\t\t\ttest \"${params}\" \\\n\t\t\t\t&& params=\"${params} ${arg}\" \\\n\t\t\t\t|| params=\"${arg}\"\n\t\tfi\n\n\t\t# don't forget to shift to remove the previous\n\t\t# argument from the list\n\t\tshift\n\tdone\n\texport VM_PARAMS=\"${vm_params}\"\n\texport LOCAL_PARAMS=\"${params}\"\n}\n\n# if ARWEAVE_DEV environment is defined, then\n# we start by rebuild a release.\nif test \"${ARWEAVE_DEV}\"\nthen\n    arweave_developer_mode\nfi\n\n# Ensure symlinks exist after any rebuild (ar-rebar3 removes them during build)\nensure_release_symlinks\n\n# process internal arguments\nprocess_internal_args \"$@\"\n\nfind_erts_dir\nfind_erl_call\nexport BINDIR=\"$ERTS_DIR/bin\"\nexport EMU=\"beam\"\nexport PROGNAME=\"erl\"\nexport LD_LIBRARY_PATH=\"$ERTS_DIR/lib:$LD_LIBRARY_PATH\"\nSYSTEM_LIB_DIR=\"$(dirname \"$ERTS_DIR\")/lib\"\n\n# vm_args configuration, we can use priv/files/vm_args or\n# the path from the release.\nVMARGS_PATH=$(add_path \\\n\tvm.args \\\n\t\"${VMARGS_PATH}\" \\\n\t\"${REL_DIR}\" \\\n\t\"${REL_PATH}\" \\\n\t\"${REL_PATH_ALT}\" \\\n\t\"${RELEASE_ROOT_DIR}/config\" \\\n\t\"${RELEASE_ROOT_DIR}/priv/templates\")\nVMARGS_PATH=$(multi_check_replace_os_vars \\\n\tvm.args \\\n\t\"${VMARGS_PATH}\"\\\n\t\"${REL_DIR}\" \\\n\t\"${REL_PATH}\" \\\n\t\"${REL_PATH_ALT}\" \\\n\t\"${RELEASE_ROOT_DIR}/config\")\nRELX_CONFIG_PATH=$(multi_check_replace_os_vars \\\n\tsys.config \\\n\t\"${RELX_CONFIG_PATH}\" \\\n\t\"${REL_DIR}\" \\\n\t\"${REL_PATH}\" \\\n\t\"${REL_PATH_ALT}\" \\\n\t\"${RELEASE_ROOT_DIR}/config\")\n\n# Check vm.args and other files referenced via -args_file parameters for:\n#    - nonexisting -args_files\n#    - circular dependencies of -args_files\n#    - relative paths in -args_file parameters\n#    - multiple/mixed occurrences of -name and -sname parameters\n#    - missing -name or -sname parameters\n# If all checks pass, extract the target node name\nset +e\nTMP_NAME_ARG=$(awk 'function shell_quote(str)\n{\n    gsub(/'\\''/,\"'\\'\\\\\\\\\\'\\''\", str);\n    return \"'\\''\" str \"'\\''\"\n}\n\nfunction check_name(file)\n{\n    # if file exists, then it should be readable\n    if (system(\"test -f \" shell_quote(file)) == 0 && system(\"test -r \" shell_quote(file)) != 0) {\n\tprint file\" not readable\"\n\texit 3\n    }\n    while ((getline line<file)>0) {\n\tif (line~/^-args_file +/) {\n\t    gsub(/^-args_file +| *$/, \"\", line)\n\t    if (line in files) {\n\t\tprint \"circular reference to \"line\" encountered in \"file\n\t\texit 5\n\t    }\n\t    files[line]=line\n\t    check_name(line)\n\t}\n\telse if (line~/^-s?name +/) {\n\t    if (name!=\"\") {\n\t\tprint \"\\\"\"line\"\\\" parameter found in \"file\" but already specified as \\\"\"name\"\\\"\"\n\t\texit 2\n\t    }\n\t    name=line\n\t}\n    }\n}\n\nBEGIN {\n    split(\"\", files)\n    name=\"\"\n}\n\n{\n    files[FILENAME]=FILENAME\n    check_name(FILENAME)\n    if (name==\"\") {\n\tprint \"need to have exactly one of either -name or -sname parameters but none found\"\n\texit 1\n  
  }\n    print name\n    exit 0\n}' \"$VMARGS_PATH\")\nTMP_NAME_ARG_RC=$?\ncase $TMP_NAME_ARG_RC in\n    0) NAME_ARG=\"$TMP_NAME_ARG\";;\n    *) echo \"$TMP_NAME_ARG\"\n       exit $TMP_NAME_ARG_RC;;\nesac\nunset TMP_NAME_ARG\nunset TMP_NAME_ARG_RC\nset -e\n\n\n# Perform replacement of variables in ${NAME_ARG}\nNAME_ARG=$(eval echo \"${NAME_ARG}\")\n\n# Extract the name type and name from the NAME_ARG for REMSH\nNAME_TYPE=\"$(echo \"$NAME_ARG\" | awk '{print $1}')\"\nNAME=\"$(echo \"$NAME_ARG\" | awk '{print $2}')\"\n\n# Extract dist arguments\nDIST_ARGS=\"\"\nPROTO_DIST=\"$(grep '^-proto_dist' \"$VMARGS_PATH\" || true)\"\nif [ \"$PROTO_DIST\" ]; then\n    DIST_ARGS=\"${PROTO_DIST}\"\nfi\nSTART_EPMD=\"$(grep '^-start_epmd' \"$VMARGS_PATH\" || true)\"\nif [ \"$START_EPMD\" ]; then\n    DIST_ARGS=\"${DIST_ARGS} ${START_EPMD}\"\nfi\nEPMD_MODULE=\"$(grep '^-epmd_module' \"$VMARGS_PATH\" || true)\"\nif [ \"$EPMD_MODULE\" ]; then\n    DIST_ARGS=\"${DIST_ARGS} ${EPMD_MODULE}\"\nfi\nINET_DIST_USE_INTERFACE=\"$(grep '^-kernel  *inet_dist_use_interface' \"$VMARGS_PATH\" || true)\"\nif [ \"$INET_DIST_USE_INTERFACE\" ]; then\n    DIST_ARGS=\"${DIST_ARGS} ${INET_DIST_USE_INTERFACE}\"\nfi\n\nif [ \"$ERL_DIST_PORT\" ]; then\n    if [ \"$INET_DIST_USE_INTERFACE\" ]; then\n\tADDRESS=\"$(addr_tuple_to_str \"${INET_DIST_USE_INTERFACE#*inet_dist_use_interface }\"):$ERL_DIST_PORT\"\n    else\n\tADDRESS=\"$ERL_DIST_PORT\"\n    fi\n    if [  \"11.1\" = \"$(printf \"%s\\n11.1\" \"${ERTS_VSN}\" | sort -V | head -n1)\" ] ; then\n\t# unless set by the user, set start_epmd to false when ERL_DIST_PORT is used\n\tif [ ! \"$START_EPMD\" ]; then\n\t    EXTRA_DIST_ARGS=\"-erl_epmd_port ${ERL_DIST_PORT} -start_epmd false\"\n\telse\n\t    EXTRA_DIST_ARGS=\"-erl_epmd_port ${ERL_DIST_PORT}\"\n\tfi\n    else\n\tERL_DIST_PORT_WARNING=\"ERL_DIST_PORT is set and used to set the port, but doing so on ERTS version ${ERTS_VSN} means remsh/rpc will not work for this release\"\n\tif ! command -v logger > /dev/null 2>&1\n\tthen\n\t    echo \"WARNING: ${ERL_DIST_PORT_WARNING}\"\n\telse\n\t    logger -p warning -t \"${REL_NAME}[$$]\" \"${ERL_DIST_PORT_WARNING}\"\n\tfi\n\tEXTRA_DIST_ARGS=\"-kernel inet_dist_listen_min ${ERL_DIST_PORT} -kernel inet_dist_listen_max ${ERL_DIST_PORT}\"\n    fi\nfi\n\n# Force use of nodetool if proto_dist set as erl_call doesn't support proto_dist\nif [ \"$PROTO_DIST\" ]; then\n    ERL_RPC=relx_nodetool\nfi\n\n# Extract the target cookie\n# Do this before relx_get_nodename so we can use it and not create a ~/.erlang.cookie\nif [ -n \"$RELX_COOKIE\" ]; then\n    COOKIE=\"$RELX_COOKIE\"\nelse\n    COOKIE_ARG=\"$(grep '^-setcookie' \"$VMARGS_PATH\" || true)\"\n    DEFAULT_COOKIE_FILE=\"$HOME/.erlang.cookie\"\n    if [ -z \"$COOKIE_ARG\" ]; then\n\tif [ -f \"$DEFAULT_COOKIE_FILE\" ]; then\n\t    COOKIE=\"$(cat \"$DEFAULT_COOKIE_FILE\")\"\n\telse\n\t    echo \"No cookie is set or found. 
This limits the scripts functionality, installing, upgrading, rpc and getting a list of versions will not work.\"\n\tfi\n    else\n\t# Extract cookie name from COOKIE_ARG\n\tCOOKIE=\"$(echo \"$COOKIE_ARG\" | awk '{print $2}')\"\n    fi\nfi\n\n# User can specify an sname without @hostname\n# This will fail when creating remote shell\n# So here we check for @ and add @hostname if missing\ncase \"${NAME}\" in\n    *@*) ;;                             # Nothing to do\n    *)   NAME=${NAME}@$(relx_get_nodename);;  # Add @hostname\nesac\n\n# Export the variable so that it's available in the 'eval' calls\nexport NAME\n\n# create a variable of just the hostname part of the nodename\nRELX_HOSTNAME=$(echo \"${NAME}\" | cut -d'@' -f2)\n\ntest -z \"$PIPE_DIR\" && PIPE_BASE_DIR='/tmp/erl_pipes/'\nPIPE_DIR=\"${PIPE_DIR:-/tmp/erl_pipes/$NAME/}\"\n\n# Change to the project root directory (instead of the release root ROOTDIR)\n# so that relative paths (like config files) and logs work as expected.\ncd \"$RELEASE_ROOT_DIR\"\n\nif is_arweave_developer_command \"${1}\"\nthen\n  relx_usage\n  exit 1\nfi\n\n# Check the first argument for instructions\ncase \"$1\" in\n    check)\n\tshift\n\tarweave_check ${*}\n\t;;\n\n    version)\n\tshift\n\tarweave_version ${*}\n\t;;\n\n    daemon|daemon_boot)\n\tarweave_check\n\tcase \"$1\" in\n\t    daemon)\n\t\tshift\n\t\tSTART_OPTION=\"console\"\n\t\tHEART_OPTION=\"daemon\"\n\t\t;;\n\t    daemon_boot)\n\t\tshift\n\t\tSTART_OPTION=\"console_boot\"\n\t\tHEART_OPTION=\"daemon_boot\"\n\t\t;;\n\tesac\n\n\tARGS=\"$(printf \"'%s' \" \"$@\")\"\n\n\t# shellcheck disable=SC2174\n\ttest -z \"$PIPE_BASE_DIR\" || mkdir -m 1777 -p \"$PIPE_BASE_DIR\"\n\tmkdir -p \"$PIPE_DIR\"\n\tif [ ! -w \"$PIPE_DIR\" ]\n\tthen\n\t    echo \"failed to start, user '$USER' does not have write privileges on '$PIPE_DIR', either delete it or run node as a different user\"\n\t    exit 1\n\tfi\n\n\t# Make sure log directory exists\n\tmkdir -p \"$RUNNER_LOG_DIR\"\n\n\trelx_run_hooks \"$PRE_START_HOOKS\"\n\n\t# check system configuration\n\tarweave_check\n\n\t\"$BINDIR/run_erl\" \\\n\t  -daemon \"$PIPE_DIR\" \\\n\t  \"$RUNNER_LOG_DIR\" \\\n\t  \"exec \\\"$RELEASE_ROOT_DIR/bin/$REL_NAME\\\" \\\"$START_OPTION\\\" ${ARGS}\"\n\n\t# wait for node to be up before running hooks\n\twhile ! erl_rpc erlang is_alive > /dev/null 2>&1\n\tdo\n\t    sleep 1\n\tdone\n\n\t# Clean up symlinks now that VM is running (allows VSCode extensions to work)\n\t# Only remove if running from source and they're actually symlinks\n\tif [ -n \"$RUNNING_FROM_SOURCE\" ]; then\n\t    [ -L \"${RELEASE_ROOT_DIR}/lib\" ] && rm -f \"${RELEASE_ROOT_DIR}/lib\"\n\t    [ -L \"${RELEASE_ROOT_DIR}/releases\" ] && rm -f \"${RELEASE_ROOT_DIR}/releases\"\n\tfi\n\n\trelx_run_hooks \"$POST_START_HOOKS\"\n\t;;\n\n    stop)\n\trelx_run_hooks \"$PRE_STOP_HOOKS\"\n\t# Wait for the node to completely stop...\n\tPID=\"$(relx_get_pid)\"\n\tif ! erl_rpc init stop > /dev/null 2>&1; then\n\t    exit 1\n\tfi\n\twhile kill -s 0 \"$PID\" 2>/dev/null;\n\tdo\n\t    sleep 1\n\tdone\n\n\t# wait for node to be down before running hooks\n\twhile erl_rpc erlang is_alive > /dev/null 2>&1\n\tdo\n\t    sleep 1\n\tdone\n\n\trelx_run_hooks \"$POST_STOP_HOOKS\"\n\t;;\n\n    restart)\n\t## Restart the VM without exiting the process\n\tif ! erl_rpc init restart > /dev/null; then\n\t    exit 1\n\tfi\n\t;;\n\n    reboot)\n\t## Restart the VM completely (uses heart to restart it)\n\tif ! 
erl_rpc init reboot > /dev/null; then\n\t    exit 1\n\tfi\n\t;;\n\n    reload)\n\t## Reload only arweave application in the vm\n\tRELX_RPC_TIMEOUT=3600\n\n\t# first arweave and prometheus application must be stopped\n\tif erl_eval '[application:stop(A) || A <- [arweave, prometheus]].'\n\tthen\n\t\t# then arweave application can be restarted\n\t\terl_eval 'application:ensure_all_started(arweave).'\n\t\ttest $? -ne 0 && exit 1\n\t\texit $?\n\telse\n\t    exit 1\n\tfi\n\t;;\n    pid)\n\t## Get the VM's pid\n\tif ! relx_get_pid; then\n\t    exit 1\n\tfi\n\t;;\n\n    ping)\n\t## See if the VM is alive\n\tping_or_exit\n\n\techo \"pong\"\n\t;;\n\n    escript)\n\t## Run an escript under the node's environment\n\tshift\n\tif ! relx_escript \"$@\"; then\n\t    exit 1\n\tfi\n\t;;\n\n    daemon_attach|attach)\n\tcase \"$1\" in\n\t    attach)\n\t\t# TODO, add here the right annoying message asking users to consider\n\t\t# instead using systemd or some such other init system\n\t\techo \"'attach' has been deprecated, replaced by 'daemon_attach' and will be removed in the short-term, please consult rebar3.org on why you should be\"\\\n\t\t     \"using 'foreground' and an init tool such as 'systemd'\"\n\t\t;;\n\tesac\n\t# Make sure a node IS running\n\tping_or_exit\n\n\tif [ ! -w \"$PIPE_DIR\" ]\n\tthen\n\t    echo \"failed to attach, user '$USER' does not have sufficient privileges on '$PIPE_DIR', please run node as a different user\"\n\t    exit 1\n\tfi\n\n\tshift\n\texec \"$BINDIR/to_erl\" \"$PIPE_DIR\"\n\t;;\n\n    remote_console|remote|remsh)\n\t# Make sure a node IS running\n\tping_or_exit\n\n\tshift\n\trelx_rem_sh\n\t;;\n\n    console|console_clean|console_boot|foreground|foreground_clean|benchmark|wallet|doctor)\n\tFOREGROUNDOPTIONS=\"\"\n\t# .boot file typically just $REL_NAME (ie, the app name)\n\t# however, for debugging, sometimes start_clean.boot is useful.\n\t# For e.g. 
'setup', one may even want to name another boot script.\n\tsubcommand=\"${1}\"\n\tcase \"$1\" in\n\t    console)\n\t\tshift\n\t\tif [ -f \"$REL_DIR/$REL_NAME.boot\" ]; then\n\t\t  BOOTFILE=\"$REL_DIR/$REL_NAME\"\n\t\telse\n\t\t  BOOTFILE=\"$REL_DIR/start\"\n\t\tfi\n\t\tARGS=${*}\n\t\t;;\n\t    foreground|foreground_clean|benchmark|wallet|doctor)\n\t\tshift\n\t\t# start up the release in the foreground for use by runit\n\t\t# or other supervision services\n\t\tif [ -f \"$REL_DIR/$REL_NAME.boot\" ]; then\n\t\t  BOOTFILE=\"$REL_DIR/$REL_NAME\"\n\t\telse\n\t\t  BOOTFILE=\"$REL_DIR/start\"\n\t\tfi\n\t\tFOREGROUNDOPTIONS=\"-noinput +Bd\"\n\n\t\t# all these arweave commands are being executed in\n\t\t# foreground mode, ARGS will be modified.\n\t\tcase ${subcommand} in\n\t\t    benchmark)\n\t\t\tarweave_benchmark ${*}\n\t\t\tARGS=$(arweave_benchmark ${*})\n\t\t\t;;\n\t\t    wallet)\n\t\t\tarweave_wallet ${*}\n\t\t\tARGS=$(arweave_wallet ${*})\n\t\t\t;;\n\t\t    doctor)\n\t\t\tarweave_doctor ${*}\n\t\t\tARGS=$(arweave_doctor ${*})\n\t\t\t;;\n\t\t    foreground_clean)\n\t\t\tARWEAVE_OPTS=\"\"\n\t\t\tARGS=${*}\n\t\t\t;;\n\t\t    *)\n\t\t\tARGS=${*}\n\t\t\t;;\n\t\tesac\n\t\t;;\n\t    console_clean)\n\t\tshift\n\t\t# if not set by user use interactive mode for console_clean\n\t\tCODE_LOADING_MODE=\"${CODE_LOADING_MODE:-interactive}\"\n\t\tBOOTFILE=\"$REL_DIR/start_clean\"\n\t\tARGS=${*}\n\t\t;;\n\t    console_boot)\n\t\tshift\n\t\tBOOTFILE=\"$1\"\n\t\tshift\n\t\tARGS=${*}\n\t\t;;\n\tesac\n\n\t# split the argument in two parts based on the previously\n\t# passed args, LOCAL_PARAMS is for arweave, VM_PARAMS is for\n\t# the vm.\n\tparse_args ${ARGS}\n\tARGS=${LOCAL_PARAMS}\n\n\t# if not set by user or console_clean use embedded\n\tCODE_LOADING_MODE=\"${CODE_LOADING_MODE:-embedded}\"\n\n\t# Setup beam-required vars\n\tEMU=\"beam\"\n\tPROGNAME=\"${0#*/}\"\n\n\texport EMU\n\texport PROGNAME\n\n\t# Dump environment info for logging purposes\n\t# shellcheck disable=SC2086\n\techo \"Exec: $BINDIR/erlexec\" \\\n\t    ${VM_PARAMS} \\\n\t    ${EXTRA_DIST_ARGS} \\\n\t    ${FOREGROUNDOPTIONS} \\\n\t    -boot \"$BOOTFILE\" \\\n\t    -mode \"$CODE_LOADING_MODE\" \\\n\t    -boot_var SYSTEM_LIB_DIR \"$SYSTEM_LIB_DIR\" \\\n\t    -config \"$RELX_CONFIG_PATH\" \\\n\t    -args_file \"$VMARGS_PATH\" \\\n\t    -- ${ARWEAVE_OPTS} ${ARGS}\n\techo \"Root: $ROOTDIR\"\n\n\t# Log the startup\n\techo \"$RELEASE_ROOT_DIR\"\n\tif ! 
command -v logger > /dev/null 2>&1\n\tthen\n\t    echo \"${REL_NAME}[$$] Starting up\"\n\telse\n\t    logger -t \"${REL_NAME}[$$]\" \"Starting up\"\n\tfi\n\n\trelx_run_hooks \"$PRE_START_HOOKS\"\n\n\t# check system configuration\n\tarweave_check\n\n\t# Schedule cleanup of symlinks after VM boots (allows VSCode extensions to work)\n\tschedule_symlink_cleanup\n\n\t# Start the VM\n\t# The variabre FOREGROUNDOPTIONS must NOT be quoted.\n\t# shellcheck disable=SC2086\n\texec \"$BINDIR/erlexec\" \\\n\t    ${VM_PARAMS} \\\n\t    ${EXTRA_DIST_ARGS} \\\n\t    ${FOREGROUNDOPTIONS} \\\n\t    -boot \"$BOOTFILE\" \\\n\t    -mode \"$CODE_LOADING_MODE\" \\\n\t    -boot_var SYSTEM_LIB_DIR \"$SYSTEM_LIB_DIR\" \\\n\t    -config \"$RELX_CONFIG_PATH\" \\\n\t    -args_file \"$VMARGS_PATH\" \\\n\t    -- ${ARWEAVE_OPTS} ${ARGS}\n\t# exec will replace the current image and nothing else gets\n\t# executed from this point on, this explains the absence\n\t# of the pre start hook\n\t;;\n    rpc)\n\t# Make sure a node IS running\n\tping_or_exit\n\n\tshift\n\n\terl_rpc \"$@\"\n\t;;\n    eval)\n\t# Make sure a node IS running\n\tping_or_exit\n\n\tshift\n\n\terl_eval \"$@\"\n\t;;\n    status)\n\t# Make sure a node IS running\n\tping_or_exit\n\n\t# shellcheck disable=SC1090,SC2240\n\t[ -n \"${STATUS_HOOK}\" ] && [ -f \"$SCRIPT_DIR/$STATUS_HOOK\" ] && . \"$SCRIPT_DIR/$STATUS_HOOK\" \"$@\"\n\t;;\n    tunnel)\n\t# prepare a tunnel to the remote node\n\tshift\n\ttarget=\"${1}\"\n\t# if epmd is running locally, try to kill it\n\tpgrep epmd && pkill epmd\n\n\t# fetch the port of the remote arweave node\n\tREMOTE_EPMD_PORT=$(ssh ${target} \"epmd -names | sed 1d | awk '$2==\\\"^arweave$\\\" {print $NF}'\")\n\n\t# create a local forward tunnel\n\tssh -L ${ERL_EPMD_PORT}:localhost:${ERL_EPMD_PORT} \\\n\t\t-L ${REMOTE_EPMD_PORT=}:localhost:${REMOTE_EPMD_PORT} \\\n\t\t${target} echo \"epmd tunnel is ready on localhost:${REMOTE_EPMD_PORT}\"\n\t;;\n    remote_observer)\n\t# start observer locally, assuming a tunnel has been previouly\n\t# created\n\tOBSERVER_ID=$(($(date \"+%N\")%6421))\n\terl -name observer-${OBSERVER_ID}@127.0.0.1 \\\n\t\t-setcookie ${COOKIE} \\\n\t\t-hidden -run observer\n\t;;\n    test)\n\tshift\n\tarweave_test ${*}\n\t;;\n    test_e2e)\n\tshift\n\tarweave_e2e ${*}\n\t;;\n    help)\n\tif [ -z \"$2\" ]; then\n\t    relx_usage\n\t    exit 1\n\tfi\n\n\tTOPIC=\"$2\"; shift\n\trelx_usage \"$TOPIC\"\n\t;;\n    *)\n\t# check for extension\n\tIS_EXTENSION=$(relx_is_extension \"$1\")\n\tif [ \"$IS_EXTENSION\" = \"1\" ]; then\n\t    EXTENSION_SCRIPT=$(relx_get_extension_script \"$1\")\n\t    shift\n\t    relx_run_extension \"$EXTENSION_SCRIPT\" \"$@\"\n\t    # all extension scripts are expected to exit\n\telse\n\t    relx_usage \"$1\"\n\tfi\n\texit 1\n\t;;\nesac\n\nexit 0\n"
  },
  {
    "path": "bin/benchmark-hash",
    "content": "#!/usr/bin/env bash\nset -e\nSCRIPT_DIR=$(dirname ${0})\nARWEAVE=${SCRIPT_DIR}/arweave\n${ARWEAVE} benchmark hash ${*}\n"
  },
  {
    "path": "bin/benchmark-packing",
    "content": "#!/usr/bin/env bash\nset -e\nSCRIPT_DIR=$(dirname ${0})\nARWEAVE=${SCRIPT_DIR}/arweave\n${ARWEAVE} benchmark packing ${*}\n"
  },
  {
    "path": "bin/benchmark-vdf",
    "content": "#!/usr/bin/env bash\nset -e\nSCRIPT_DIR=$(dirname ${0})\nARWEAVE=${SCRIPT_DIR}/arweave\n${ARWEAVE} benchmark vdf ${*}\n"
  },
  {
    "path": "bin/console",
    "content": "#!/usr/bin/env bash\nset -e\nSCRIPT_DIR=$(dirname ${0})\nARWEAVE=${SCRIPT_DIR}/arweave\n${ARWEAVE} remote_console ${*}\n"
  },
  {
    "path": "bin/create-ecdsa-wallet",
    "content": "#!/usr/bin/env bash\nset -e\nSCRIPT_DIR=$(dirname ${0})\nARWEAVE=${SCRIPT_DIR}/arweave\n${ARWEAVE} wallet ecdsa create ${*}\n"
  },
  {
    "path": "bin/create-wallet",
    "content": "#!/usr/bin/env bash\nset -e\nSCRIPT_DIR=$(dirname ${0})\nARWEAVE=${SCRIPT_DIR}/arweave\n${ARWEAVE} wallet create rsa ${*}\n"
  },
  {
    "path": "bin/data-doctor",
    "content": "#!/usr/bin/env bash\nset -e\nSCRIPT_DIR=$(dirname ${0})\nARWEAVE=${SCRIPT_DIR}/arweave\n${ARWEAVE} doctor ${*}\n"
  },
  {
    "path": "bin/debug-logs",
    "content": "#!/usr/bin/env bash\nset -e\n\nSCRIPT_DIR=\"$(dirname \"$0\")\"\nLOGS_DIR=\"$(cd $SCRIPT_DIR/../logs && pwd -P)\"\ntail -n 500 ${*} ${LOGS_DIR}/*debug.log\n"
  },
  {
    "path": "bin/e2e",
    "content": "#!/usr/bin/env bash\nset -e\nSCRIPT_DIR=$(dirname ${0})\nexport ARWEAVE_DEV=1\nARWEAVE=${SCRIPT_DIR}/arweave\n${ARWEAVE} test_e2e ${*}\n"
  },
  {
    "path": "bin/e2e_shell",
    "content": "#!/usr/bin/env bash\n\nSCRIPT_DIR=\"$(dirname \"$0\")\"\ncd \"$SCRIPT_DIR/..\"\n\n./ar-rebar3 e2e compile\n\nif [ \"$(uname -s)\" == \"Darwin\" ]; then\n    RANDOMX_JIT=\"disable randomx_jit\"\nelse\n    RANDOMX_JIT=\nfi\n\nexport ERL_EPMD_ADDRESS=127.0.0.1\n\nERL_E2E_OPTS=\"-pa $(./rebar3 as e2e path) $(./rebar3 as e2e path --base)/lib/arweave/test -config config/sys.config\"\necho -e \"\\033[0;32m===> Running e2e shell...\\033[0m\"\n\nif pgrep -f \"beam.smp\" > /dev/null; then\n    echo \"BEAM is already running. Exiting.\"\n    exit 1\nelse\n    erl $ERL_E2E_OPTS -name main-e2e@127.0.0.1 -setcookie e2e -run ar shell_e2e 2>&1\nfi\nkill 0\n"
  },
  {
    "path": "bin/gen-dev-certs",
    "content": "#!/usr/bin/env bash\n\nset -e\n\nSCRIPT_DIR=\"$(dirname \"$0\")\"\n\nPRIV_DIR=\"$(cd $SCRIPT_DIR/../apps/arweave/priv && pwd -P)\"\n\nCERT_FILE=\"$PRIV_DIR/tls/cert.pem\"\nKEY_FILE=\"$PRIV_DIR/tls/key.pem\"\n\nAPEX_DOMAIN=\"${1:-\"gateway.localhost\"}\"\n\nmkdir -p \"$PRIV_DIR/tls\"\nmkcert -cert-file \"$CERT_FILE\" \\\n       -key-file \"$KEY_FILE\" \\\n       \"$APEX_DOMAIN\" \"*.$APEX_DOMAIN\"\n"
  },
  {
    "path": "bin/localnet_shell",
    "content": "#!/usr/bin/env bash\n\nSCRIPT_DIR=\"$(dirname \"$0\")\"\ncd \"$SCRIPT_DIR/..\"\n\n./ar-rebar3 localnet compile\n\nif [ \"$(uname -s)\" == \"Darwin\" ]; then\n    RANDOMX_JIT=\"disable randomx_jit\"\nelse\n    RANDOMX_JIT=\nfi\n\nSNAPSHOT_DIR=\"${1:-localnet_snapshot}\"\n\nexport ERL_EPMD_ADDRESS=127.0.0.1\n\nERL_LOCALNET_OPTS=\"-pa $(./rebar3 as localnet path) $(./rebar3 as localnet path --base)/lib/arweave/test -config config/sys.config\"\necho -e \"\\033[0;32m===> Starting localnet shell from ${SNAPSHOT_DIR}...\\033[0m\"\n\n# Check if BEAM process is running\nif pgrep -f \"beam.smp\" > /dev/null; then\n    echo \"BEAM is already running. Exiting.\"\n    exit 1\nelse\n    erl $ERL_LOCALNET_OPTS -name main-localnet@127.0.0.1 -setcookie localnet -noshell -s ar shell_localnet \"$SNAPSHOT_DIR\" -s shell start_interactive 2>&1\nfi\nkill 0\n\n"
  },
  {
    "path": "bin/logs",
    "content": "#!/usr/bin/env bash\nset -e\n\nSCRIPT_DIR=\"$(dirname \"$0\")\"\nLOGS_DIR=\"$(cd $SCRIPT_DIR/../logs && pwd -P)\"\ntail -n 500 ${*} ${LOGS_DIR}/*info.log\n"
  },
  {
    "path": "bin/shell",
    "content": "#!/usr/bin/env bash\n\nSCRIPT_DIR=\"$(dirname \"$0\")\"\ncd \"$SCRIPT_DIR/..\"\n\n./ar-rebar3 test compile\n\nif [ `uname -s` == \"Darwin\" ]; then\n    RANDOMX_JIT=\"disable randomx_jit\"\nelse\n    RANDOMX_JIT=\nfi\n\nexport ERL_EPMD_ADDRESS=127.0.0.1\n\nERL_TEST_OPTS=\"-pa `./rebar3 as test path` `./rebar3 as test path --base`/lib/arweave/test -config config/sys.config\"\necho -e \"\\033[0;32m===> Running tests...\\033[0m\"\n\n# Check if BEAM process is running\nif pgrep -f \"beam.smp\" > /dev/null; then\n    echo \"BEAM is already running. Exiting.\"\n    exit 1\nelse\n    erl $ERL_TEST_OPTS -name main-localtest@127.0.0.1 -setcookie test -run ar shell 2>&1\nfi\nkill 0\n"
  },
  {
    "path": "bin/start",
    "content": "#!/usr/bin/env bash\n######################################################################\n# Arweave Heartbeat script, unrelated to heart(3erl). This script will\n# restart arweave in case of crash. \n#\n# The epmd feature is a workaround to deal with a bug. When arweave\n# stops, in some case, an epmd session leaks and is still registered\n# in epmd. Two solutions: (1) wait for the timeout, but for some\n# reason it can take more than 24h (2) kill/restart epmd. This feature\n# is optional and can be activated by setting\n# ARWEAVE_EPMD_AUTO_RESTART environment variable.\n######################################################################\nSCRIPT_DIR=$(dirname ${0})\nARWEAVE=${SCRIPT_DIR}/arweave\n\n# set the default before restarting arweave\nARWEAVE_RESTART_DELAY=${ARWEAVE_RESTART_DELAY:=15}\n\n# set the number of restart allowed.\nARWEAVE_RESTART_LIMIT=${ARWEAVE_RESTART_LIMIT:=\"\"}\n\n# set epmd auto restart. this is a workaround when arweave crash, an\n# epmd session can still be present (epmd session leak). If enabled,\n# a recovery/restart procedure is started automatically.\nARWEAVE_EPMD_AUTO_RESTART=${ARWEAVE_EPMD_AUTO_RESTART:=\"\"}\n\n# defines the method to restart method to use. At this time, only\n# kill and systemctl are supported. If systemctl is used, epmd service\n# must be called \"epmd\". If epmd is running with a different user,\n# systemctl will be called with sudo and the process' user.\nARWEAVE_EPMD_RESTART_METHOD=${ARWEAVE_EPMD_RESTART_METHOD:=\"kill\"}\n\n######################################################################\n# function helper to print arweave heartbeat messages.\n######################################################################\n_msg() {\n\tprintf -- 'Arweave Heartbeat: %s\\n' \"${*}\"\n}\n\n######################################################################\n# print the signal name instead of its number and return it.\n######################################################################\n_signal_sys() {\n\tlocal code=\"${1}\"\n\tlocal kill_code\n\tif test ${code} -gt 127\n\tthen\n\t\tkill_code=$((code-128))\n\telse\n\t\tkill_code=${code}\n\tfi\n\tcase \"${kill_code}\" in\n\t\t1) echo SIGHUP;;\n\t\t2) echo SIGINT;;\n\t\t3) echo SIGQUIT;;\n\t\t4) echo SIGILL;;\n\t\t5) echo SIGTRAP;;\n\t\t6) echo SIGABRT;;\n\t\t7) echo SIGBUS;;\n\t\t9) echo SIGKILL;;\n\t\t10) echo SIGUSR1;;\n\t\t11) echo SIGSEGV;;\n\t\t12) echo SIGUSR2;;\n\t\t13) echo SIGPIPE;;\n\t\t14) echo SIGALRM;;\n\t\t15) echo SIGTERM;;\n\t\t17) echo SIGCHLD;;\n\t\t18) echo SIGSTOP;;\n\t\t*) echo \"UNKNOWN_${code}\";;\n\tesac\n\treturn ${code}\n}\n\n######################################################################\n# this function is a quick and dirty patch to deal with epmd\n# session leaks. When arweave is stopping, in some situation\n# epmd keeps its session. 
It can be annoying.\n######################################################################\n_epmd_restart() {\n\t# only try to restart epmd if ARWEAVE_EPMD_AUTO_RESTART is set\n\t# not everyone want to do that.\n\ttest \"${ARWEAVE_EPMD_AUTO_RESTART}\" || return 0\n\t_msg \"Start epmd restart procedure\"\n\n\t# check epmd program existance.\n\tlocal epmd=$(which epmd)\n\ttest \"${epmd}\" || return 1\n\ttest -x \"${epmd}\" || return 1\n\n\t# check how many arweave process is running, if\n\t# there is more than one, there is a problem and epmd\n\t# should not be restarted.\n\tlocal instances=$(pgrep -f \"${ARWEAVE}\" | wc -l)\n\tif test \"${instances}\" -gt 1\n\tthen\n\t\t_msg \"More than one arweave instance is running.\"\n\t\t_msg \"epmd can't be restarted, here the nodes:\"\n\t\tepmd -names\n\t\treturn 1\n\tfi\n\n\t# check if epmd daemon is started. If it's the case, then we\n\t# extract some information (e.g. UID, GID, PPID)\n\tlocal epmd_pid=$(pgrep epmd)\n\tif ! test \"${epmd_pid}\"\n\tthen\n\t\t_msg \"epmd is not started, can't restart it.\"\n\t\treturn 1\n\tfi\n\tlocal epmd_pid_user=$(ps -houser -p ${epmd_pid} | xargs echo)\n\tlocal epmd_pid_group=$(ps -hogroup -p ${epmd_pid} | xargs echo)\n\tlocal epmd_pid_ppid=$(ps -hoppid -p ${epmd_pid} | xargs echo)\n\n\t# extract epmd session in better format\n\tlocal epmd_sessions=$(epmd -names \\\n\t\t| sed 1d \\\n\t\t| sed -E \"s/name (.+) at port (.+)/\\1:\\2/\")\n\tlocal epmd_sessions_count=$(echo ${epmd_sessions} | wc -w)\n\n\t# small epmd report\n\t_msg \"epmd (${epmd_pid})\" \\\n\t\t\"run as ${epmd_pid_user}:${epmd_pid_group}\" \\\n\t\t\"with ${epmd_sessions_count} sessions\" \\\n\t\t\"with ppid ${epmd_pid_ppid}.\"\n\n\t# check if there is an epmd session leak,\n\t# an arweave existing session should not be present.\n\t# only work if node's name is \"arweave\".\n\t${epmd} -names | awk 'BEGIN{f=0} $1~/name/ && $2~/arweave/{f=1} END{exit f}'\n\tepmd_session_leak=\"$?\"\n\tif test \"${epmd_session_leak}\" -eq 1\n\tthen\n\t\tlocal ret=1\n\n\t\t# kill method used. only called if epmd's user is the same\n\t\t# than the one used by this script.\n\t\tif test \"${ARWEAVE_EPMD_RESTART_METHOD}\" = \"kill\" \\\n\t\t\t&& test \"${epmd_pid_user}\" = \"${USER}\"\n\t\tthen\n\t\t\tkill ${epmd_pid}\n\t\t\tret=${?}\n\t\tfi\n\n\t\t# systemctl method used. invoke systemctl to restart epmd.\n\t\tif test \"${ARWEAVE_EPMD_RESTART_METHOD}\" = \"systemctl\" \\\n\t\t\t&& test \"${epmd_pid_user}\" = \"${USER}\"\n\t\tthen\n\t\t\tsystemctl restart epmd\n\t\t\tret=${?}\n\t\tfi\n\n\t\t# systemctl method (sudo) used. 
invoke systemctl with\n\t\t# sudo and the pid's user.\n\t\tif test \"${ARWEAVE_EPMD_RESTART_METHOD}\" = \"systemctl\" \\\n\t\t\t&& test \"${epmd_pid_user}\" != \"${USER}\"\n\t\tthen\n\t\t\tsudo -u \"${epmd_pid_user}\" systemctl restart epmd\n\t\t\tret=${?}\n\t\tfi\n\n\t\t# if no methods are available, and the user's pid is\n\t\t# not our, then we stop.\n\t\tif test \"${epmd_pid_user}\" != \"${USER}\" \\\n\t\t\t&& test \"${ret}\" != 0\n\t\tthen\n\t\t\t_msg \"epmd can't be restarted (uid:${epmd_pid_user}).\"\n\t\t\tret=${ret}\n\t\tfi\n\n\t\tif test \"${ret}\" -ne 0\n\t\tthen\n\t\t\t_msg \"epmd (${epmd_pid}) restart failed.\"\n\t\t\tret=${ret}\n\t\tfi\n\n\t\treturn \"${ret}\"\n\tfi\n\treturn 0\n}\n\n######################################################################\n# main script\n######################################################################\nrestart_counter=0\nwhile true\ndo\n\t# check for epmd presence (if the feature is enabled)\n\t_epmd_restart\n\n\t# we would like to avoid restarting arweave too much\n\tif test \"${ARWEAVE_RESTART_LIMIT}\" \\\n\t\t&& test \"${restart_counter}\" -gt \"${ARWEAVE_RESTART_LIMIT}\"\n\tthen\n\t\t_msg \"Number of restart reached: ${restart_counter}.\"\n\t\t_msg \"Arweave will not be restarted.\"\n\t\t_msg \"Please check the system.\"\n\t\texit 1\n\tfi\n\n\t# start arweave\n\t_msg \"Launching Erlang Virtual Machine...\"\n\t${ARWEAVE} foreground ${*}\n\tret=\"${?}\"\n\n\t# arweave terminated normally (0).\n\tif test \"${ret}\" -eq 0\n\tthen\n       \t\t_msg \"Server terminated safely.\"\n       \t\texit 0\n\tfi\n\n\t# arweave terminated with an error code, it needs to be\n\t# restarted.\n\tif test \"${ret}\" -le 127\n\tthen\n        \t_msg \"The Arweave server has terminated with an error code (${ret}).\" \n\tfi\n\n\t# arweave terminated with a signal from the system or another\n\t# process, it could be an OOM. In this case, we need to\n\t# restart epmd and ensure everything is fine.\n\tif test \"${ret}\" -gt 127\n\tthen\n\t\tsignal=$(_signal_sys ${ret})\n\t\t_msg \"The Arweave server has been terminated by the system (${signal}).\"\n\tfi\n\n\t_msg \"It will restart in ${ARWEAVE_RESTART_DELAY} seconds.\"\n       \t_msg \"If you would like to avoid this, press control+c to kill the server.\"\n        sleep \"${ARWEAVE_RESTART_DELAY}\"\n\trestart_counter=$((restart_counter+1))\ndone\n"
  },
  {
    "path": "bin/start-localnet",
    "content": "#!/usr/bin/env bash\nset -e\nSCRIPT_DIR=\"$(dirname \"$0\")\"\nARWEAVE=${SCRIPT_DIR}/arweave\n${ARWEAVE}/arweave localnet $*\n\n# while true; do\n#     echo Launching Erlang Virtual Machine...\n#     if\n#         # -run ar main: call ar:main() on launch\n#         $ARWEAVE $ARWEAVE_COMMAND $ARWEAVE_OPTS -run ar main $RANDOMX_JIT \"$@\"\n#     then\n#         echo \"Arweave Heartbeat: Server terminated safely.\"\n#         exit 0\n#     else\n#         echo \"Arweave Heartbeat: The Arweave server has terminated. It will restart in 15 seconds.\"\n#         echo \"Arweave Heartbeat: If you would like to avoid this, press control+c to kill the server.\"\n#         sleep 15\n#     fi\n# done\n"
  },
  {
    "path": "bin/stop",
    "content": "#!/usr/bin/env bash\nset -e\nSCRIPT_DIR=$(dirname ${0})\nARWEAVE=${SCRIPT_DIR}/arweave\n${ARWEAVE} stop ${*}\n"
  },
  {
    "path": "bin/test",
    "content": "#!/usr/bin/env bash\nset -e\nSCRIPT_DIR=$(dirname ${0})\nexport ARWEAVE_DEV=1\nARWEAVE=${SCRIPT_DIR}/arweave\n${ARWEAVE} test ${*}\n"
  },
  {
    "path": "config/sys.config",
    "content": "[\n\t{arweave, []},\n\t{kernel, [\n\t\t{inet_dist_use_interface, {127, 0, 0, 1}},\n\t\t{logger_level, all},\n\t\t{logger, [{handler, default, logger_std_h, #{\n\t\t\t\tlevel => warning,\n\t\t\t\tformatter => {\n\t\t\t\t\tlogger_formatter, #{\n\t\t\t\t\t\tlegacy_header => false,\n\t\t\t\t\t\tsingle_line => true,\n\t\t\t\t\t\tchars_limit => 16256,\n\t\t\t\t\t\tmax_size => 8128,\n\t\t\t\t\t\tdepth => 256,\n\t\t\t\t\t\ttemplate => [time,\" [\",level,\"] \",mfa,\":\",line,\" \",msg,\"\\n\"]\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}]}\n\t]},\n\t{sasl, [\n\t\t{sasl_error_logger, false}\n\t]},\n\t{prometheus, [\n\t\t{cowboy_instrumenter, [\n\t\t\t{duration_buckets, [infinity]},\n\t\t\t{request_labels, [http_method, route, reason, status_class, status]},\n\t\t\t{error_labels, [http_method, route, reason, error]},\n\t\t\t{labels_module, ar_prometheus_cowboy_labels}\n\t\t]},\n\t\t{vm_system_info_collector_metrics, []},\n\t\t{vm_msacc_collector_metrics, []},\n\t\t{vm_dist_collector_metrics, []}\n\t]}\n].\n"
  },
  {
    "path": "config/vm.args",
    "content": "######################################################################\n## Default vm arguments templates used by Arweave.\n##\n## Some useful links to configure emulator flags:\n##   https://www.erlang.org/doc/apps/erts/erl_cmd.html#emulator-flags\n##\n## Some useful links on Erlang's memory management:\n##   https://www.erlang-factory.com/static/upload/media/139454517145429lukaslarsson.pdf\n##   https://www.youtube.com/watch?v=nuCYL0X-8f4\n##\n## Note for testing it's sometimes useful to limit the number of\n## schedulers that will be used, to do that: +S 16:16\n######################################################################\n## Name of the node\n-name ${ARNODE:-arweave@127.0.0.1}\n\n## Cookie for distributed erlang\n-setcookie ${ARCOOKIE:-arweave}\n\n## This is now the default as of OTP-26\n## Multi-time warp mode in combination with time correction is the\n## preferred configuration.\n## It is only not the default in Erlang itself because it could break\n## older systems.\n# +C multi_time_warp\n\n## Uncomment the following line if running in a container.\n## +sbwt none\n\n## Increase number of concurrent ports/sockets\n##-env ERL_MAX_PORTS 4096\n\n## Tweak GC to run more often\n##-env ERL_FULLSWEEP_AFTER 10\n\n## +B [c | d | i]\n## Option c makes Ctrl-C interrupt the current shell instead of\n## invoking the emulator break\n## handler. Option d (same as specifying +B without an extra option)\n## disables the break handler. # Option i makes the emulator ignore any\n## break signal.\n## If option c is used with oldshell on Unix, Ctrl-C will restart the\n## shell process rather than\n## interrupt it.\n## Disable the emulator break handler\n## it easy to accidentally type ctrl-c when trying\n## to reach for ctrl-d. ctrl-c on a live node can\n## have very undesirable results\n+Bi\n\n## Enables the kernel poll functionality.\n+Ktrue\n\n## +A1024: emulator number of threads in the Async long thread pool for linked\n## in drivers, mostly unused\n+A1024\n\n## +SDio1024: emulator Scheduler thread count for Dirty I/O, 200\n## threads for file access\n+SDio1024\n\n## +MBsbct 103424: binary_alloc singleblock carrier threshold (in KiB)\n## (101MiB, default 512KiB). Blocks larger than the threshold are\n## placed in singleblock carriers. However multi-block carriers are\n## more efficient. Since we have so many 100MiB binary blocks due to\n## the recall range, set the threshold so that they are all placed in\n## multi-block carriers and not single-block carriers.\n+MBsbct 103424\n\n## +MBsmbcs 10240: binary_alloc smallest multi-block carrier size (in\n## KiB) (10MiB, default 256KiB).\n+MBsmbcs 10240\n\n## MBlmbcs 410629: binary_alloc largest multi-block carrier size (in\n## KiB) (~401MiB, default 5MiB). Set so that a single multi-block\n## carrier can hold roughly 4 full recall ranges.\n+MBlmbcs 410629\n\n## +MBmmsbc 1024: binary_alloc maximum mseg_alloc singleblock carriers\n## (1024 carriers, default 256). Once exhausted, the emulator will start\n## using sys_alloc rather than mseg_alloc for singleblock carriers.\n## This can be slower.\n+MBmmmbc 1024\n\n## +MBas aobf: emulator Memory Binary Allocation Strategy set to Address\n## Order Best Fit.\n## see: https://www.erlang.org/doc/man/erts_alloc.html#strategy\n+MBas aobf\n\n## Sets scheduler busy wait threshold. Defaults to medium. 
The\n## threshold determines how long schedulers are to busy wait when\n## running out of work before going to sleep.\n+sbwt very_long\n\n## Sets dirty scheduler busy wait threshold.\n+sbwtdcpu very_long\n\n## Sets dirty IO scheduler busy wait threshold\n+sbwtdio very_long\n\n##  Sets scheduler wakeup threshold.\n+swt very_low\n\n##  Sets dirty scheduler wakeup threshold.\n+swtdcpu very_low\n\n## Sets dirty IO scheduler wakeup threshold.\n+swtdio very_low\n"
  },
  {
    "path": "config/vm.args.dev",
    "content": "-name 'arweave@127.0.0.1'\n-setcookie arweave\n"
  },
  {
    "path": "config/vm.args.src",
    "content": "######################################################################\n## Default vm arguments templates used by Arweave.\n##\n## Some useful links to configure emulator flags:\n##   https://www.erlang.org/doc/apps/erts/erl_cmd.html#emulator-flags\n##\n## Some useful links on Erlang's memory management:\n##   https://www.erlang-factory.com/static/upload/media/139454517145429lukaslarsson.pdf\n##   https://www.youtube.com/watch?v=nuCYL0X-8f4\n##\n## Note for testing it's sometimes useful to limit the number of\n## schedulers that will be used, to do that: +S 16:16\n######################################################################\n## Name of the node\n-name ${ARNODE:-arweave@127.0.0.1}\n\n## Cookie for distributed erlang\n-setcookie ${ARCOOKIE:-arweave}\n\n## This is now the default as of OTP-26\n## Multi-time warp mode in combination with time correction is the\n## preferred configuration.\n## It is only not the default in Erlang itself because it could break\n## older systems.\n# +C multi_time_warp\n\n## Uncomment the following line if running in a container.\n## +sbwt none\n\n## Increase number of concurrent ports/sockets\n##-env ERL_MAX_PORTS 4096\n\n## Tweak GC to run more often\n##-env ERL_FULLSWEEP_AFTER 10\n\n## +B [c | d | i]\n## Option c makes Ctrl-C interrupt the current shell instead of\n## invoking the emulator break\n## handler. Option d (same as specifying +B without an extra option)\n## disables the break handler. # Option i makes the emulator ignore any\n## break signal.\n## If option c is used with oldshell on Unix, Ctrl-C will restart the\n## shell process rather than\n## interrupt it.\n## Disable the emulator break handler\n## it easy to accidentally type ctrl-c when trying\n## to reach for ctrl-d. ctrl-c on a live node can\n## have very undesirable results\n+Bi\n\n## Enables the kernel poll functionality.\n+Ktrue\n\n## +A1024: emulator number of threads in the Async long thread pool for linked\n## in drivers, mostly unused\n+A1024\n\n## +SDio1024: emulator Scheduler thread count for Dirty I/O, 200\n## threads for file access\n+SDio1024\n\n## +MBsbct 103424: binary_alloc singleblock carrier threshold (in KiB)\n## (101MiB, default 512KiB). Blocks larger than the threshold are\n## placed in singleblock carriers. However multi-block carriers are\n## more efficient. Since we have so many 100MiB binary blocks due to\n## the recall range, set the threshold so that they are all placed in\n## multi-block carriers and not single-block carriers.\n+MBsbct 103424\n\n## +MBsmbcs 10240: binary_alloc smallest multi-block carrier size (in\n## KiB) (10MiB, default 256KiB).\n+MBsmbcs 10240\n\n## MBlmbcs 410629: binary_alloc largest multi-block carrier size (in\n## KiB) (~401MiB, default 5MiB). Set so that a single multi-block\n## carrier can hold roughly 4 full recall ranges.\n+MBlmbcs 410629\n\n## +MBmmsbc 1024: binary_alloc maximum mseg_alloc singleblock carriers\n## (1024 carriers, default 256). Once exhausted, the emulator will start\n## using sys_alloc rather than mseg_alloc for singleblock carriers.\n## This can be slower.\n+MBmmmbc 1024\n\n## +MBas aobf: emulator Memory Binary Allocation Strategy set to Address\n## Order Best Fit.\n## see: https://www.erlang.org/doc/man/erts_alloc.html#strategy\n+MBas aobf\n\n## Sets scheduler busy wait threshold. Defaults to medium. 
The\n## threshold determines how long schedulers are to busy wait when\n## running out of work before going to sleep.\n+sbwt very_long\n\n## Sets dirty scheduler busy wait threshold.\n+sbwtdcpu very_long\n\n## Sets dirty IO scheduler busy wait threshold\n+sbwtdio very_long\n\n##  Sets scheduler wakeup threshold.\n+swt very_low\n\n##  Sets dirty scheduler wakeup threshold.\n+swtdcpu very_low\n\n## Sets dirty IO scheduler wakeup threshold.\n+swtdio very_low\n"
  },
  {
    "path": "default.nix",
    "content": "(import (\n  let\n    lock = with builtins; fromJSON (readFile ./flake.lock);\n  in fetchTarball {\n    url = \"https://github.com/edolstra/flake-compat/archive/${lock.nodes.flake-compat.locked.rev}.tar.gz\";\n    sha256 = lock.nodes.flake-compat.locked.narHash; }\n) {\n  src =  ./.;\n}).defaultNix\n"
  },
  {
    "path": "deploy/Dockerfile.base.ubuntu20.04",
    "content": "FROM ubuntu:20.04\n\n# Set noninteractive installation\nENV DEBIAN_FRONTEND=noninteractive\n\n# Add add-apt-repository application\nRUN apt-get update && apt-get install -y software-properties-common\n\n# Add rabbitmq erlang R26 repository\nRUN add-apt-repository -y ppa:rabbitmq/rabbitmq-erlang-26\n\n# Install the necessary software to add a new repository over HTTPS\nRUN apt-get update && apt-get install -y \\\n    apt-transport-https \\\n    ca-certificates \\\n    curl \\\n    gnupg \\\n    lsb-release \\\n    wget\n\n# Install missing dependencies\nRUN apt-get install -y \\\n    libncurses5 \\\n    libwxbase3.0-0v5 \\\n    libwxgtk3.0-gtk3-0v5 \\\n    libsctp1\n"
  },
  {
    "path": "deploy/Dockerfile.base.ubuntu22.04",
    "content": "FROM ubuntu:22.04\n\n# Set noninteractive installation\nENV DEBIAN_FRONTEND=noninteractive\n\n# Add add-apt-repository application\nRUN apt-get update && apt-get install -y software-properties-common\n\n# Add rabbitmq erlang R26 repository\nRUN add-apt-repository -y ppa:rabbitmq/rabbitmq-erlang-26\n\n# Install other dependencies\nRUN apt-get update && apt-get install -y erlang\n"
  },
  {
    "path": "deploy/Dockerfile.rocky",
    "content": "# Set the base image using a build argument\nFROM rockylinux:9\n\n# Update the system and install necessary tools\nRUN dnf update -y && \\\n    dnf install -y wget \\\n    gcc \\\n    gcc-c++ \\\n    glibc-devel \\\n    make \\\n    ncurses-devel \\\n    openssl-devel \\\n    autoconf \\\n    java-1.8.0-openjdk-devel \\\n    m4\n\n# Download and extract Erlang/OTP source\nWORKDIR /tmp\nRUN wget https://github.com/erlang/otp/releases/download/OTP-26.2.5.12/otp_src_26.2.5.12.tar.gz\nRUN tar zxf otp_src_26.2.5.12.tar.gz\n\n# Build and install Erlang/OTP\nWORKDIR /tmp/otp_src_26.2.5.12\nRUN ./configure --prefix=/usr/local && \\\n    make && \\\n    make install\n\n# Clean up\nWORKDIR /\nRUN rm -rf /tmp/otp_src_26.2.5.12 /tmp/otp_src_26.2.5.12.tar.gz\n\n# Install other dependencies\nRUN dnf install -y \\\n    gmp-devel \\\n    cmake \\\n    git\n\n# Set the working directory\nWORKDIR /app\n\n# Define the output directory as a volume\nVOLUME /output\n\n# The build steps are executed every time\nCMD set -x && \\\n    git clone --recursive https://github.com/ArweaveTeam/arweave.git && \\\n    cd arweave && \\\n    git fetch --all && \\\n    git pull --force && \\\n    git checkout --force $GIT_TAG && \\\n    git submodule update && \\\n    ./rebar3 as prod tar && \\\n    cp _build/prod/rel/arweave/arweave-*.tar.gz /output/arweave.tar.gz\n\n"
  },
  {
    "path": "deploy/Dockerfile.ubuntu",
    "content": "# Set the base image using a build argument\nARG BASE_IMAGE\nFROM ${BASE_IMAGE}\n\n# Install other dependencies\nRUN apt-get install -y \\\n    libssl-dev \\\n    libgmp-dev \\\n    libsqlite3-dev \\\n    make \\\n    cmake \\\n    gcc \\\n    g++ \\\n    git\n\n# Set the working directory\nWORKDIR /app\n\n# Define the output directory as a volume \nVOLUME /output\n\n# The build steps are executed every time\nCMD set -x && \\\n    git clone --recursive https://github.com/ArweaveTeam/arweave.git && \\\n    cd arweave && \\\n    git fetch --all && \\\n    git pull --force && \\\n    git checkout --force $GIT_TAG && \\\n    git submodule update && \\\n    ./rebar3 as prod tar && \\\n    cp _build/prod/rel/arweave/arweave-*.tar.gz /output/arweave.tar.gz\n"
  },
  {
    "path": "deploy/Makefile",
    "content": "######################################################################\n# Arweave Release GNU Makefile for MacOS\n#\n# This Makefile was created to build release on Darwin/MacOS system\n# using homebrew package manage. Every build is created using a\n# fresh version of the arweave git repository isolated from other\n# build.\n#\n# To install dependencies and create all releases based on Erlang\n# defined in ERLANG_VERSIONS variables:\n#\n#     make all\n#\n# To install only one release using one erlang version:\n#\n#     make build-release ERLANG_VERSION=24\n#\n######################################################################\nARWEAVE_GIT_TAG ?= master\nARWEAVE_REPOSITORY ?= https://github.com/ArweaveTeam/arweave.git\nERLANG_VERSIONS ?= 24 26\nERLANG_VERSION ?= 24\nBUILDDIR ?= ./build\nRELEASEDIR ?= ./release\nSYSTEM_NAME = $(shell uname -o)\nSYSTEM_ARCH = $(shell uname -m)\nARWEAVE_RELEASE_NAME ?= $(ARWEAVE_GIT_TAG)-$(SYSTEM_NAME)-$(SYSTEM_ARCH)\nHOMEBREW_PATH = /opt/homebrew\nHOMEBREW_COMMAND ?= brew\n\n######################################################################\n# default entry-point target\n######################################################################\nPHONY += help\nhelp:\n\t@echo \"Usage: make [help|install-deps|all|build-release|clean|clean-all]\"\n\t@echo \"  help: print help message\"\n\t@echo \"  install-deps: install dependencies with homebrew\"\n\t@echo \"  all: create all release\"\n\t@echo \"  build-release: create release using default erlang version\"\n\t@echo \"  clean: remove built artifacts\"\n\t@echo \"  clean-all: remove built artifacts and releases\"\n\t@echo \"Variables:\"\n\t@echo \"  ARWEAVE_GIT_TAG=$(ARWEAVE_GIT_TAG)\"\n\t@echo \"  ARWEAVE_RELEASE_NAME=$(ARWEAVE_RELEASE_NAME)\"\n\t@echo \"  ARWEAVE_REPOSITORY=$(ARWEAVE_REPOSITORY)\"\n\t@echo \"  BUILDDIR=$(BUILDDIR)\"\n\t@echo \"  ERLANG_VERSION=$(ERLANG_VERSION)\"\n\t@echo \"  ERLANG_VERSIONS=$(ERLANG_VERSIONS)\"\n\t@echo \"  RELEASEDIR=$(RELEASEDIR)\"\nifneq ($(SYSTEM_NAME), Darwin)\n\t@echo \"WARNING: this Makefile is not compatible with this system: $(SYSTEM_NAME)\"\nendif\n\n######################################################################\n# template to install/cleanup erlang using brew\n######################################################################\ndefine template_erlang\nDEPS_$(1) += $(HOMEBREW_PATH)/opt/erlang@$(1)\nHOMEBREW_ERLANG_DEPS += $(HOMEBREW_PATH)/opt/erlang@$(1)\n$(HOMEBREW_PATH)/opt/erlang@$(1):\n\t$(HOMEBREW_COMMAND) install erlang@$(1)\n\nHOMEBREW_CLEAN += clean-erlang-$(1)\nPHONY += clean-erlang-$(1)\nclean-erlang-$(1):\n\t-$(HOMEBREW_COMMAND) uninstall erlang@$(1)\nendef\n\n######################################################################\n# template to install/release arweave\n######################################################################\ndefine template_builder\n$$(BUILDDIR)/arweave-$(1): $$(BUILDDIR)\n\tgit clone --recursive $$(ARWEAVE_REPOSITORY) $$(BUILDDIR)/arweave-$(1)\n\nRELEASE_$(1) += $$(RELEASEDIR)/$$(ARWEAVE_RELEASE_NAME)-R$(1).tar.gz\nRELEASES += $$(RELEASEDIR)/$$(ARWEAVE_RELEASE_NAME)-R$(1).tar.gz\nALL += $$(RELEASES)\n$$(RELEASEDIR)/$$(ARWEAVE_RELEASE_NAME)-R$(1).tar.gz: $$(RELEASEDIR) $$(BUILDDIR)/arweave-$(1)\n\tgit -C $$(BUILDDIR)/arweave-$(1) fetch --all\n\tgit -C $$(BUILDDIR)/arweave-$(1) pull --force\n\tgit -C $$(BUILDDIR)/arweave-$(1) checkout --force $$(ARWEAVE_GIT_TAG)\n\tgit -C $$(BUILDDIR)/arweave-$(1) submodule update\n\tcd $$(BUILDDIR)/arweave-$(1) \\\n\t\t&& export 
PATH=\"/opt/homebrew/opt/erlang@$(1)/bin:/opt/homebrew/bin:$${PATH}\" \\\n\t\t&& ./rebar3 as prod tar\n\tcp $$(BUILDDIR)/arweave-$(1)/_build/prod/rel/arweave/arweave-*.tar.gz $$@\n\nCHECKSUM_$(1) += $$(RELEASEDIR)/$$(ARWEAVE_RELEASE_NAME)-R$(1).tar.gz.sha256\nCHECKSUMS += $$(RELEASEDIR)/$$(ARWEAVE_RELEASE_NAME)-R$(1).tar.gz.sha256\nALL += $$(CHECKSUMS)\n$$(RELEASEDIR)/$$(ARWEAVE_RELEASE_NAME)-R$(1).tar.gz.sha256: $$(RELEASEDIR)/$$(ARWEAVE_RELEASE_NAME)-R$(1).tar.gz\n\tsha256sum $$(RELEASEDIR)/$$(ARWEAVE_RELEASE_NAME)-R$(1).tar.gz \\\n\t\t> $$@\n\nARWEAVE_CLEAN += clean-arweave-$(1)\nPHONY += clean-arweave-$(1)\nclean-arweave-$(1):\n\t-rm -rf $$(BUILDDIR)/arweave-$(1)\n\nARWEAVE_CHECKSUMS_CLEAN += clean-arweave-checksum-$(1)\nPHONY += clean-arweave-checksum-$(1)\nclean-arweave-checksum-$(1):\n\t-rm $$(RELEASEDIR)/$$(ARWEAVE_RELEASE_NAME)-R$(1).tar.gz.sha256\nendef\n\n######################################################################\n# main directories\n######################################################################\n$(BUILDDIR):\n\tmkdir -p $@\n\n$(RELEASEDIR):\n\tmkdir -p $@\n\n######################################################################\n# homebrew deps targets\n######################################################################\n$(foreach v, $(ERLANG_VERSIONS), $(eval $(call template_erlang,$(v))))\n\n# gmp dep\nHOMEBREW_DEPS += $(HOMEBREW_PATH)/Cellar/gmp\n$(HOMEBREW_PATH)/Cellar/gmp:\n\t$(HOMEBREW_COMMAND) install gmp\n\nHOMEBREW_CLEAN += clean-homebrew-gmp\nPHONY += clean-homebrew-gmp\nclean-homebrew-gmp:\n\t-$(HOMEBREW_COMMAND) uninstall gmp\n\n# pkg-config dep\nHOMEBREW_DEPS += $(HOMEBREW_PATH)/Cellar/pkgconf\n$(HOMEBREW_PATH)/Cellar/pkgconf:\n\t$(HOMEBREW_COMMAND) install pkg-config\n\nHOMEBREW_CLEAN += clean-homebrew-pkg-config\nPHONY += clean-homebrew-pkg-config\nclean-homebrew-pkg-config:\n\t-$(HOMEBREW_COMMAND) uninstall pkg-config\n\n# cmake dep\nHOMEBREW_DEPS += $(HOMEBREW_PATH)/Cellar/cmake\n$(HOMEBREW_PATH)/Cellar/cmake:\n\t$(HOMEBREW_COMMAND) install cmake\n\nHOMEBREW_CLEAN += clean-homebrew-cmake\nPHONY += clean-homebrew-cmake\nclean-homebrew-cmake:\n\t-$(HOMEBREW_COMMAND) uninstall cmake\n\n######################################################################\n# arweave targets\n######################################################################\n$(foreach v, $(ERLANG_VERSIONS), $(eval $(call template_builder,$(v))))\n\n######################################################################\n# main targets.\n######################################################################\nifneq ($(SYSTEM_NAME), Darwin)\nall:\n\t@echo \"This Makefile was created for MacOS/Darwin system only\"\n\t@exit 1\n\nelse\nPHONY += all\nall: install-deps $(ALL)\n\nPHONY += build-release\nbuild-release: $(HOMEBREW_DEPS) \\\n\t$(DEPS_$(ERLANG_VERSION)) \\\n\t$(RELEASE_$(ERLANG_VERSION)) \\\n\t$(CHECKSUM_$(ERLANG_VERSION))\n\nPHONY += build-checksum\nbuild-checksum: $(CHECKSUMS)\n\nPHONY += install-deps\ninstall-deps: deps-update $(HOMEBREW_DEPS) $(HOMEBREW_ERLANG_DEPS)\n\nPHONY += deps-update\ndeps-update:\n\t$(HOMEBREW_COMMAND) update\n\nPHONY += clean-deps\nclean-deps: $(HOMEBREW_CLEAN)\n\nPHONY += clean\nclean: clean-deps $(ARWEAVE_CLEAN)\n\nPHONY += clean-all\nclean-all: clean $(ARWEAVE_CHECKSUMS_CLEAN)\n\t-rm $(RELEASES)\n\n.PHONY: $(PHONY)\nendif\n\n"
  },
  {
    "path": "deploy/build.sh",
    "content": "#!/bin/bash\n\nECHO_ONLY=0\nPRE_RELEASE=0\nBRANCH=\"\"\nLINUX_VERSION=\"\"\n\n# Parse flags\nwhile getopts 'eb:l:' flag; do\n  case \"${flag}\" in\n    e) ECHO_ONLY=1 ;;\n    b) PRE_RELEASE=1\n       BRANCH=\"${OPTARG}\" ;;\n    l) LINUX_VERSION=\"${OPTARG}\" ;;\n    *) echo \"Usage: $0 [-e] [-b <pre-release branch>] [-l <linux version>] version\" \n       exit 1 ;;\n  esac\ndone\nshift $((OPTIND-1))\n\n# Check if version is supplied\nif [ \"$#\" -ne 1 ]; then\n    echo \"Usage: $0 [-e] [-b <pre-release branch>] [-l <linux version>] version\"\n    exit 1\nfi\n\nVERSION=$1\nif [ $PRE_RELEASE -eq 1 ]; then\n  GIT_TAG=\"$BRANCH\"\nelse\n  GIT_TAG=\"N.$VERSION\"\nfi\n\nBASE_IMAGES=(\n\t\"arweave-base:20.04\" \\\n\t\"arweave-base:22.04\" \\\n\t\"\"\n)\nLINUX_VERSIONS=(\"ubuntu20\" \"ubuntu22\" \"rocky9\")\nBASE_DOCKERFILES=(\n\t\"Dockerfile.base.ubuntu20.04\" \\\n\t\"Dockerfile.base.ubuntu22.04\" \\\n\t\"\"\n)\n\n# If specific Linux version is supplied, filter the arrays\nif [ ! -z \"$LINUX_VERSION\" ]; then\n  for i in \"${!LINUX_VERSIONS[@]}\"; do\n    if [ \"${LINUX_VERSIONS[$i]}\" = \"$LINUX_VERSION\" ]; then\n      BASE_IMAGES=(\"${BASE_IMAGES[$i]}\")\n      LINUX_VERSIONS=(\"${LINUX_VERSIONS[$i]}\")\n      BASE_DOCKERFILES=(\"${BASE_DOCKERFILES[$i]}\")\n      break\n    fi\n  done\nfi\n\n# Function to execute a command, optionally just echoing it\nfunction run_cmd {\n  if [ $ECHO_ONLY -eq 1 ]; then\n    echo $1\n  else\n    eval $1\n  fi\n}\n\n# Build base images first\nfor i in \"${!BASE_DOCKERFILES[@]}\"; do\n    BASE_DOCKERFILE=${BASE_DOCKERFILES[$i]}\n    BASE_IMAGE=${BASE_IMAGES[$i]}\n\n    if [ ! -z \"$BASE_DOCKERFILE\" ] && [ ! -z \"$BASE_IMAGE\" ]; then\n      echo \"Building base image $BASE_IMAGE...\"\n\n      # Build the base Docker image\n      run_cmd \"docker build -f $BASE_DOCKERFILE -t $BASE_IMAGE .\"\n    fi\ndone\n\nfor i in \"${!LINUX_VERSIONS[@]}\"; do\n    LINUX_VERSION=${LINUX_VERSIONS[$i]}\n    IMAGE_NAME=\"arweave:$VERSION-$LINUX_VERSION\"\n    OUTPUT_FILE=\"./output/arweave-$VERSION.$LINUX_VERSION-x86_64.tar.gz\"\n\n    DOCKERFILE=\"Dockerfile.ubuntu\"\n    if [ \"$LINUX_VERSION\" == \"rocky9\" ]; then\n        DOCKERFILE=\"Dockerfile.rocky\"\n    fi\n\n    echo \"Building $IMAGE_NAME...\"\n\n    if [ ! -z \"${BASE_IMAGES[$i]}\" ]; then\n        # Build the Docker image\n        run_cmd \"docker build -f $DOCKERFILE --build-arg BASE_IMAGE=${BASE_IMAGES[$i]} -t $IMAGE_NAME .\"\n    else\n        run_cmd \"docker build -f $DOCKERFILE -t $IMAGE_NAME .\"\n    fi\n\n    echo \"Running $IMAGE_NAME...\"\n\n    # Run the Docker container\n    run_cmd \"docker run --rm -e GIT_TAG=$GIT_TAG -v $(pwd)/output:/output $IMAGE_NAME\"\n\n    echo \"Renaming output file...\"\n\n    # Rename the output file\n    run_cmd \"mv './output/arweave.tar.gz' '$OUTPUT_FILE'\"\ndone\n\nif [ $PRE_RELEASE -eq 0 ]; then\n  run_cmd \"cp './output/arweave-$VERSION.ubuntu22-x86_64.tar.gz' './output/arweave-$VERSION.linux-x86_64.tar.gz'\"\nfi\n"
  },
  {
    "path": "deploy/create_storage_modules.sh",
    "content": "#!/bin/bash\n\n# Script to create storage module directories and symlinks\n# Usage: ./create_storage_modules.sh [-e] [-u user] <directory_root> <start_partition> <end_partition> <mining_address>\n\nset -e\n\n# Parse arguments\nDRY_RUN=false\nCHOWN_USER=\"\"\n\nwhile [[ $# -gt 0 ]]; do\n    case $1 in\n        -e)\n            DRY_RUN=true\n            shift\n            ;;\n        -u)\n            CHOWN_USER=\"$2\"\n            shift 2\n            ;;\n        *)\n            break\n            ;;\n    esac\ndone\n\nif [ $# -ne 4 ]; then\n    echo \"Usage: $0 [-e] [-u user] <directory_root> <start_partition> <end_partition> <mining_address>\"\n    echo \"  -e         Dry run mode - only echo what would be done, don't create anything\"\n    echo \"  -u user    Set ownership of created directories and symlinks to specified user\"\n    echo \"Example: $0 /mnt/vol02 1 10 1seRanklLU_1VTGkEk7P0xAwMwGkD8aYi1\"\n    echo \"Example: $0 -e /mnt/vol02 1 10 1seRanklLU_1VTGkEk7P0xAwMwGkD8aYi1\"\n    echo \"Example: $0 -u arweave /mnt/vol02 1 10 1seRanklLU_1VTGkEk7P0xAwMwGkD8aYi1\"\n    echo \"Example: $0 -e -u arweave /mnt/vol02 1 10 1seRanklLU_1VTGkEk7P0xAwMwGkD8aYi1\"\n    exit 1\nfi\n\nDIRECTORY_ROOT=\"$1\"\nSTART_PARTITION=\"$2\"\nEND_PARTITION=\"$3\"\nMINING_ADDRESS=\"$4\"\n\n# Validate arguments\nif ! [[ \"$START_PARTITION\" =~ ^[0-9]+$ ]] || ! [[ \"$END_PARTITION\" =~ ^[0-9]+$ ]]; then\n    echo \"Error: Start and end partition must be numbers\"\n    exit 1\nfi\n\nif [ \"$START_PARTITION\" -gt \"$END_PARTITION\" ]; then\n    echo \"Error: Start partition must be <= end partition\"\n    exit 1\nfi\n\n# Validate user exists if specified\nif [ -n \"$CHOWN_USER\" ]; then\n    if ! id \"$CHOWN_USER\" >/dev/null 2>&1; then\n        echo \"Error: User '$CHOWN_USER' does not exist on this system\"\n        exit 1\n    fi\nfi\n\n# Function to execute commands with dry run support\nexecute_command() {\n    local description=\"$1\"\n    local command=\"$2\"\n    \n    echo \"$description\"\n    if [ \"$DRY_RUN\" = false ]; then\n        if ! eval \"$command\"; then\n            echo \"Error: Command failed: $command\" >&2\n            exit 1\n        fi\n    fi\n}\n\n# Function to count storage modules in a directory\ncount_storage_modules() {\n    local dir=\"$1\"\n    local mining_addr=\"$2\"\n    \n    if [ ! 
-d \"$dir\" ]; then\n        echo -1  # Return -1 to indicate directory doesn't exist\n        return\n    fi\n    \n    # Count directories matching storage_module_*_MINING_ADDRESS pattern\n    count=$(find \"$dir\" -maxdepth 1 -type d -name \"storage_module_*_${mining_addr}.replica.2.9\" 2>/dev/null | wc -l)\n    echo \"$count\"\n}\n\n# Function to find the first volume directory with < 4 storage modules\nfind_available_volume() {\n    local root=\"$1\"\n    local mining_addr=\"$2\"\n    \n    local volume_num=1\n    while true; do\n        local volume_dir=\"${root}-$(printf \"%02d\" $volume_num)\"\n        local count=$(count_storage_modules \"$volume_dir\" \"$mining_addr\")\n        \n        # Skip if directory doesn't exist (count == -1)\n        if [ \"$count\" -ne -1 ] && [ \"$count\" -lt 4 ]; then\n            echo \"$volume_dir\"\n            return\n        fi\n        \n        volume_num=$((volume_num + 1))\n        \n        # Safety check to prevent infinite loop\n        if [ $volume_num -gt 100 ]; then\n            echo \"Error: Could not find available volume directory after checking 100 volumes\" >&2\n            exit 1\n        fi\n    done\n}\n\n# Function to create storage module directory\ncreate_storage_module() {\n    local volume_dir=\"$1\"\n    local partition_num=\"$2\"\n    local mining_addr=\"$3\"\n    \n    local storage_dir=\"${volume_dir}/storage_module_$(printf \"%02d\" $partition_num)_${mining_addr}.replica.2.9\"\n    \n    # Check if volume directory exists\n    if [ ! -d \"$volume_dir\" ]; then\n        echo \"Error: Volume directory $volume_dir does not exist\" >&2\n        return 1\n    fi\n    \n    # Create storage module directory\n    execute_command \"Creating directory: $storage_dir\" \"mkdir -p '$storage_dir'\"\n    \n    # Set ownership if user specified\n    if [ -n \"$CHOWN_USER\" ]; then\n        execute_command \"Setting ownership: $storage_dir -> $CHOWN_USER\" \"chown '$CHOWN_USER:$CHOWN_USER' '$storage_dir'\"\n    fi\n    \n    # Return the path via a global variable to avoid output mixing\n    CREATED_STORAGE_DIR=\"$storage_dir\"\n}\n\n# Function to create symlink in current directory\ncreate_symlink() {\n    local target_dir=\"$1\"\n    local partition_num=\"$2\"\n    local mining_addr=\"$3\"\n    \n    local link_name=\"storage_module_$(printf \"%02d\" $partition_num)_${mining_addr}.replica.2.9\"\n    \n    execute_command \"Creating symlink: $link_name -> $target_dir\" \"ln -sf '$target_dir' '$link_name'\"\n    \n    # Set ownership of symlink if user specified\n    if [ -n \"$CHOWN_USER\" ]; then\n        execute_command \"Setting symlink ownership: $link_name -> $CHOWN_USER\" \"chown -h '$CHOWN_USER:$CHOWN_USER' '$link_name'\"\n    fi\n}\n\n# Main logic\nif [ \"$DRY_RUN\" = true ]; then\n    echo \"=== DRY RUN MODE - No files will be created ===\"\nfi\n\necho \"Creating storage modules from partition $START_PARTITION to $END_PARTITION\"\necho \"Directory root: $DIRECTORY_ROOT\"\necho \"Mining address: $MINING_ADDRESS\"\nif [ -n \"$CHOWN_USER\" ]; then\n    echo \"Owner: $CHOWN_USER\"\nfi\necho\n\ncurrent_volume=\"\"\nmodules_in_current_volume=0\nCREATED_STORAGE_DIR=\"\"\n\nfor partition in $(seq $START_PARTITION $END_PARTITION); do\n    # Find available volume if we don't have one or current is full\n    if [ -z \"$current_volume\" ] || [ $modules_in_current_volume -ge 4 ]; then\n        current_volume=$(find_available_volume \"$DIRECTORY_ROOT\" \"$MINING_ADDRESS\")\n        modules_in_current_volume=$(count_storage_modules 
\"$current_volume\" \"$MINING_ADDRESS\")\n        echo \"Using volume directory: $current_volume (current modules: $modules_in_current_volume)\"\n    fi\n    \n    # Create storage module directory\n    if create_storage_module \"$current_volume\" $partition \"$MINING_ADDRESS\"; then\n        # Create symlink if storage module was created successfully\n        create_symlink \"$CREATED_STORAGE_DIR\" $partition \"$MINING_ADDRESS\"\n        modules_in_current_volume=$((modules_in_current_volume + 1))\n    else\n        # If creation failed (volume doesn't exist), reset current volume and retry\n        echo \"Retrying with next available volume...\"\n        current_volume=\"\"\n        modules_in_current_volume=0\n        # Retry this partition\n        partition=$((partition - 1))\n    fi\ndone\n\necho\necho \"Storage module creation complete!\"\n"
  },
  {
    "path": "doc/ar-ipfs-howto.md",
    "content": "# How to set up and run Arweave+IPFS nodes\n\n## ipfs\n\nDownload from https://dist.ipfs.io/#go-ipfs\n\nFrom their website:\n\n> After downloading, untar the archive, and move the ipfs binary somewhere in your executables $PATH using the install.sh script:\n>\n```\n$ tar xvfz go-ipfs.tar.gz\n$ cd go-ipfs\n$ ./install.sh\n```\n\nThe install.sh wants to install the ipfs binary to /usr/local/bin.  Rather than run with sudo, I edit the script to change binpaths to, e.g., \"/home/ivan/bin\" (full path seems to be required).\n\nSet up the local ipfs node with\n\n```\n$ ipfs init\n```\n\nThe ipfs node needs to be running as a daemon before app_ipfs is started.  Start it in a separate terminal (screen/tmux/etc) session with\n\n```\n$ ipfs daemon\n```\n\n## arweave-server\n\nWhen running `arweave-server` with the argument `ipfs_pin`, the server listens for incoming TXs with data and an `{\"IPFS_Add\", Hash}` tag, and `ipfs add`s the data to the local ipfs node.\n\n### ipfs_pin\n\n#### in erlang shell\n\n```erlang\n$ arweave-server peer ...\n\n1> app_ipfs:start_pinning().\nok\n```\n\n#### with commandline argument\n\n```\n$ arweave-server peer ... ipfs_pin\n```\n\n### monitoring\n\nHere are some functions for basic monitoring:\n\nAt any time, state of the app_ipfs server can be accessed via either of:\n\n```\n> app_ipfs:report(app_ipfs).\n> app_ipfs:report(IPFSPid).\n\n[{adt_pid,<0.208.0>},          % the simple_adt server, for listening.\n {queue,<0.203.0>},            % the app_queue pid, for sending TXs.\n {wallet,{{<<123,45,67,...},   % used with app_queue to finance sending TXs.\n {ipfs_name,\"my_ipfs_node\"},   % used to generate the ipfs key and PeerID.\n {ipfs_key,<<\"QmXYZ...8jk\">>}, % identity of the local ipfs node.\n {blocks,0,[]},                % these last three ...\n {txs,0,[]},                   % ... only used ...\n {ipfs_hashes,0,[]}]           % ... in testing.\n```\n\nThe status of an IPFS hash can be checked:\n\n```\n> app_ipfs:ipfs_hash_status(Hash).\n\n[{pinned, true | false},   % whether the hash is pinned by the local ipfs node\n {tx,     list()      }].  % IDs of TXs containing the ipfs hash & data (generally only one TX)\n```\n"
  },
  {
    "path": "doc/gateway_setup_guide.md",
    "content": "# Arweave Gateway Setup Guide\n\n### Certificate files\n\nAssuming the gateway will run under the domain name `gateway.example`, you will need to acquire a  certificate valid for both `gateway.example` and the wildcard `*.gateway.example`. This certificate's files should be installed at the following location:\n\n- `apps/arweave/priv/tls/cert.pem` for the certificate file\n- `apps/arweave/priv/tls/key.pem` for this certificate's key file\n\nIn order to allow the gateway to serve transactions under custom domain names, additional files need to be installed. For example, for a given domain name `custom.domain.example`, a certificate for that domain should be acquired and its files installed at the following location:\n\n- `apps/arweave/priv/tls/custom.domain.example/cert.pem` for the certificate file\n- `apps/arweave/priv/tls/custom.domain.example/key.pem` for this certificate's key file\n\n### Custom domain DNS records\n\nIn order to point a custom domain name to a specific transaction a special DNS record needs to be created in its DNS zone.\n\nFor example, for a given custom domain name `custom.domain.example` and a given target transaction ID `1H0jHTlM6bYFdnrwZ4yMx92EgJITDRakse2YP_sDkBc`, a TXT record should be created with the name `_arweave.custom.domain.example` and the transaction ID as its value (`1H0jHTlM6bYFdnrwZ4yMx92EgJITDRakse2YP_sDkBc`).\n\n### Startup\n\nTo run a node in gateway node, use the `gateway` command line flag or the `\"gateway\"` configuration field and specify which domain name should be this gateway's main domain name.\n\nFor example, with `gateway.example` as the gateway's main domain name:\n\nCommand line flag:\n```\n./arweave-server gateway gateway.example\n```\n\nConfiguration field:\n```jsonc\n{\n  // ...\n  \"gateway\": \"gateway.example\"\n}\n```\n\nTo allow a transaction to be served from custom domains, use the command line flag `custom_domain` or the `\"custom_domains\"` configuration field and specify the custom domain name to serve from **in addition** to the `gateway` flag.\n\nFor example, given the custom domain names `custom1.domain.example` and `custom2.domain.example`:\n\nCommand line flag:\n```\n./arweave-server gateway gateway.example custom_domain custom1.domain.example custom_domain custom2.domain.example\n```\n\nConfiguration field:\n```jsonc\n{\n  // ...\n  \"gateway\": \"gateway.example\"\n  \"custom_domains\": [\n    \"custom1.domain.example\",\n    \"custom2.domain.example\"\n  ]\n}\n```\n"
  },
  {
    "path": "doc/path-manifest-schema.md",
    "content": "## Schema\n\nPath manifests are JSON objects with the following keys.\n\n| Field            | Mandatory? | Type   | Description |\n| ---------------- | ---------- | ------ | ----------- |\n| `manifest`       | ✓          | string | The manifest type identifier, this MUST be `arweave/paths`. |\n| `version`        | ✓          | string | The manifest specification version, currently \"0.1.0\". This will be updated with future updates according to [semver](https://semver.org). |\n| `index`          |            | object | The behavior gateways SHOULD follow when the manifest is accessed directly. When defined, `index` MUST contain a member describing the behavior to adopt. Currently, the only supported behavior is `path`. `index` MAY be be omitted, in which case gateways SHOULD serve a listing of all paths. |\n| `index.path`     |            | string | The default path to load. If defined, the field MUST reference a key in the `paths` object (it MUST NOT reference a transaction ID directly). |\n| `paths`          | ✓          | object | The path mapping between subpaths and the content they resolve to. The object keys represent the subpaths, and the values tell us which content to resolve to. |\n| `paths[path].id` | ✓          | string | The transaction ID to resolve to for the given path. |\n\nA path manifest transaction MUST NOT contain any data other than this JSON object.\n\nThe `Content-Type` tag for manifest files MUST be `application/x.arweave-manifest+json`, users MAY add other arbitrary user defined tags.\n\n**Example manifest**\n\n```json\n{\n  \"manifest\": \"arweave/paths\",\n  \"version\": \"0.1.0\",\n  \"index\": {\n    \"path\": \"index.html\"\n  },\n  \"paths\": {\n    \"index.html\": {\n      \"id\": \"cG7Hdi_iTQPoEYgQJFqJ8NMpN4KoZ-vH_j7pG4iP7NI\"\n    },\n    \"js/style.css\": {\n      \"id\": \"fZ4d7bkCAUiXSfo3zFsPiQvpLVKVtXUKB6kiLNt2XVQ\"\n    },\n    \"css/style.css\": {\n      \"id\": \"fZ4d7bkCAUiXSfo3zFsPiQvpLVKVtXUKB6kiLNt2XVQ\"\n    },\n    \"css/mobile.css\": {\n      \"id\": \"fZ4d7bkCAUiXSfo3zFsPiQvpLVKVtXUKB6kiLNt2XVQ\"\n    },\n    \"assets/img/logo.png\": {\n      \"id\": \"QYWh-QsozsYu2wor0ZygI5Zoa_fRYFc8_X1RkYmw_fU\"\n    },\n    \"assets/img/icon.png\": {\n      \"id\": \"0543SMRGYuGKTaqLzmpOyK4AxAB96Fra2guHzYxjRGo\"\n    }\n  }\n}\n```\n"
  },
  {
    "path": "doc/transaction_blacklists.md",
    "content": "\n# Arweave Transaction Blacklists\n\nTo support the freedom of individual participants in the network to control what content they store, and to allow the network as a whole to democratically reject content that is widely reviled, the Arweave software provides a blacklisting system.\nEach node maintains an (optional) blacklist containing the identifiers of transactions with data it doesn't wish to store.\n\nThese blacklists can be built by individuals or collaboratively, or can be imported from other sources.\n\n## Blacklist Sources\n\n### Local Files\n\nSpecify one or more files containing transaction identifiers in the command line using the `transaction_blacklist` argument or in a config file via the `transaction_blacklists` field.\n\n```\n./bin/start transaction_blacklist my_tx_blacklist.txt transaction_blacklist my_other_tx_blacklist.txt ...\n```\n\nInside a file, every line is a Base64 encoded transaction identifier. For example:\n\n```\nK76dxpFF7MJXa3SPG8XnrgXxf05eAz7jz2Vue1Bdw1M\ncPm9Et8pNCh1Boo1aJ7eLGxywhI06O7DQm84V1orBsw\nxiQYsaUMtlIq9DvTyucB4gu0BFC-qnFRIDclLv8wUT8\n```\n\n### HTTP Endpoints\n\nSpecify one more HTTP endpoints in the command line, using the `transaction_blacklist_url` argument or in a config file via the `transaction_blacklist_urls` field.\n\n```\n./bin/start transaction_blacklist_url http://blacklist.org/blacklist\n```\n\nA GET request to a given endpoint has to return a list of transaction identifiers in the\nsame format the blacklist files use.\n\n## Update Content Policy On The Fly\n\nIf blacklisted transactions are removed from the provided files or stop being served by the\nspecified endpoints, they are automatically un-blacklisted. Added transactions are picked\nup automatically too. However, the changes may not take effect immediately as it takes time\nuntil the node refreshes the list and applies the changes.\n\nIf you wish to add more files or remote endpoints, restart the miner with the additional command argument(s) or config parameter(s) specifying the file(s).\n\nIf you restart the node without any of the previously specified files or endpoints, the unique\ntransactions fetched from them will be un-blacklisted.\n\n## Whitelisting Transactions\n\nIf you want to whitelist particular transactions, put them into one or more files in the same format\nused in blacklist files and specify them on startup via the `transaction_whitelist` command argument\nor via the `transaction_whitelists` field in the configuration file. Also, the node can fetch\nwhitelists from remote endpoints specified via `transaction_whitelist_url` command arguments or\n`transaction_whitelist_urls` config field.\n\nIf a transaction is both in blacklist and whitelist, it is whitelisted.\n\nIf you restart the node without specifying any whitelists, the previously whitelisted transactions\ncan be blacklisted.\n\n## Clean Up Old Data\n\nData already stored at the time a new transaction is blacklisted is removed automatically.\n"
  },
  {
    "path": "erlang_ls.config",
    "content": "apps_dirs:\n  - \"apps/*\"\n  - \"_build/default/lib/*\"\ndeps_dirs:\n  - \"_build/default/lib/*\"\ndiagnostics:\n  enabled:\n    - crossref\ninclude_dirs:\n  - \"apps\"\n  - \"apps/*/include\"\n  - \"_build/default/lib\"\n  - \"_build/default/lib/*/include\"\nproviders:\n  enabled:\n    - signature-help"
  },
  {
    "path": "flake.nix",
    "content": "{\n  description = \"The Arweave server and App Developer Toolkit.\";\n\n  inputs = {\n    nixpkgs.url = \"github:NixOS/nixpkgs/nixpkgs-unstable\";\n    utils.url = \"github:numtide/flake-utils\";\n    flake-compat = {\n      url = \"github:edolstra/flake-compat\";\n      flake = false;\n    };\n  };\n\n  outputs = { self, nixpkgs, utils, ... }: utils.lib.eachDefaultSystem (system:\n    let\n      pkgs = import nixpkgs { inherit system; };\n      arweave = pkgs.callPackage ./nix/arweave.nix { inherit pkgs; };\n    in\n    {\n      packages = utils.lib.flattenTree {\n        inherit arweave;\n      };\n\n      nixosModules.arweave = {\n        imports = [ ./nix/module.nix ];\n        nixpkgs.overlays = [ (prev: final: { inherit arweave; }) ];\n      };\n\n      defaultPackage = self.packages.\"${system}\".arweave;\n\n      devShells = {\n        # for arweave development, made to work with rebar3 builds (not nix)\n        default = with pkgs; mkShellNoCC {\n          name = \"arweave-dev\";\n          buildInputs = [\n            bashInteractive\n            cmake\n            elvis-erlang\n            erlang\n            erlang-ls\n            gmp\n            openssl\n            pkg-config\n            rebar3\n            rsync\n          ];\n\n          PKG_CONFIG_PATH = \"${openssl.dev}/lib/pkgconfig\";\n\n          shellHook = ''\n            ${pkgs.fish}/bin/fish --interactive -C \\\n              '${pkgs.any-nix-shell}/bin/any-nix-shell fish --info-right | source'\n            exit $?\n          '';\n\n        };\n      };\n\n    });\n\n}\n"
  },
  {
    "path": "genesis_data/genesis_txs/-M5_EBM4MayX8ZpuLFoANHO00c4pdrSmAQbPYv7fq4U.json",
    "content": "{\"id\":\"-M5_EBM4MayX8ZpuLFoANHO00c4pdrSmAQbPYv7fq4U\",\"last_tx\":\"\",\"owner\":\"1ZukhdLqipW_i4TncK8A3S2gMER5ySKwzAtkfACYKiRIh72VgstVYW-h-JF96NlC_ZDrOu-XJvACPIxjVdZr3KPH5RM3EHTk6LCXHsTqOoVkswybbEzZ6gbCSDJjkGvVscpqJsAcVcNqiPjX9qTtFTXzncFUqlYjzBcZQDN5kI8c3Pokg1ToklYEFeH7BnbacXxyyy6n7uTGtzCxnB4gPL_Gn88O6QSLLYfUNT7oH3m-jrU3RB8vZGDMhmdx46a4XeG64BRHSh_ff1KyvYsnB-nDVcq6VqYKrRt4m8S_No5-r8Wy_Mr5Y_092XDnMlu82yMtS2avuAaNF8kDCmPjyWuP-QarherI90wSDfYOjw_-zlhG3clz151bDvYQFYP1o0O5ARC3poDhsEDdJYbikEwJqkuuOOrZTbFaFAAPW8LQlN14vp5DNlKOt5qTKqPMtBjBeSWERrhY0JxGpCmrT1p7Rv25VBL69_RAl6zItJzWxEYNwpP6cavMhhoPCHoU4t6xnyLGPD3wNnNoKJ5UQxaTqxZ_T72vSp6YC_2c1uSw6x_vMForWyKb9R27vj72fgOuswp0sSlVssuJrvE1dS5nflhAnk9-QdTX9aURu8W3CtiZo4acj0n1q_njjtV34w3pAqfUWdzGj-Hyv5vpj9nxu9GG7hxuATMfXWpw2Pc\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"Z3V5cyE\",\"reward\":\"0\",\"signature\":\"J0w6nl36VttDwzxr6m9KceZxo5VSnRuj9VoDQ3wZlMxIwk4aTKiuJ8Ouac7okAXecmAtsxRtbEE7XTvhv66Yldc3apo5Wc1blRZJBoN3_kxfUETzaIsyrf-AWmhUL8OuM4euxJ0UEE86P-7ONRAHNtqVpXgJbEA8oTo8lefTFrm_TxxVHtqbkW-VNEGWKdMqoSFDfa8GRujyKfcfQEZdvAoo5FacrYpqrxji3O4rLN8I6DiIr8suxWiTV2lxR6zFVhJiUA7upeg1EBBjJmuMxNHiW6RZ2mqeeE_2tawrjhITLtl6jASv7e2UWCp2Vpxovh8ICDRmynSDZyO8jV6n5k28GkhRH68-UNEZbICPkCe6EcU1Bk4_z5UduRexaCoi6xPowmd5-N4bSPNwLK_isCQ23b4YDuwZAJoNqiMAh82rXYjQ4n4VkOFV90IQGfz_GuIr5coSipqtM6WZADhGFs0-IpZ8byeJdz-erN-eE_rxDZqnHLfEjudKTFdWJwsB00VVYWo1D8o1GUNiHtnZ9GqCyW5ETzEgr8A_f6coJt06PJBTU6FzLHHlH0gsUeprcXk8BkOcn5XZMqxURdtUCl86E-5NucD2CPTWFwrKUuYCKl57hRQKU1z27c8lBodDw0i6deClB_ak6MGTjXcBC1CxWvCEJEthPUY1i7O1cNI\"}"
  },
  {
    "path": "genesis_data/genesis_txs/-wzIQJ19Hq8Zyf1L85Ga3uGTrdWA2W-UNyr8aH4a4iE.json",
    "content": "{\"id\":\"-wzIQJ19Hq8Zyf1L85Ga3uGTrdWA2W-UNyr8aH4a4iE\",\"last_tx\":\"\",\"owner\":\"rpv3hShMP8ZGcEYs-cZZ3AgCLGjNp9MBUl5i1NonHaBfwWWm65wkns4qpUL0rLuA8JTBepnrE_YVhCpdV8NGHJHBsUwOlM9EQg5-NaSgq_GR6kvpTE4vdH-yqBDEzdHL0tDV5OjTEAVrSw9HuIUOqknAnxgcUIxYwTZw0K0Zdluq70Fvcsk77EryMB5AUj8S0POYVJ3s0CUp_4jsalp_x8S4DrV2sM4v-1GtWA7apzNI0X30JlwRJsyJHN7Sgi3HfrIT2izXCQyezT9v_DvzJ0B0sRuwywxxE2jmdY86ilHaQ7txP2tRdJKdF4afjkT1PZuxXkZ2e9Rea7h8QzfKEKUNprHLuTbvagNb5iiRRYy0OXoo62UKqCd3JYxvG7xWb4t4ySloxGm6HLavHtp_T6mCVrJA2e7LPvb05LT0s6U63tTZOHEAnHvfrS1gH9bVybaExWMTzsLQTbwriMaLcS0Hdao-LPs3Ut6--L50XzDcyeeITjFH8vE3RSxkUdHLFJsZFgr66QMPa3_LDCB9A5OPVr9eVSvX0eFPoFgj_TOssUyjj50LMYj333Kgd_KS2dA6jgt3Bl7H0b5iUt_WfjAgVRGHiwdFz2zzisz4oA6d6XQKKFFvahTx4Z6eO6kcCTHzUDOVb2st_EiugQHpHohFf5R5Jj9rVCpAd9_X5u0\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"NA\",\"reward\":\"0\",\"signature\":\"IikTtEyeMK-bU-wi_sAZ2rxhjbJn3D1Nj04AfYnxiegJ7HCQgSqhkZRcdpEPT89fgKQSiGXKXV9MNyaeKCP6vcVrqK70Cg-at50kkn6pkyMDZmdxHASYpqorHBg6ev4f8c-gX8jrKVk9-p4_C7No9VDgyKXGVa6ZHVVKH0vB3JhFg9Dolkq6jGzHhLxPs1LaTMCjLoD5UyEbciVRFERYpj8SH5RZgHREvqp0tzlBkBOkGRRAlQs3qWIc-k_CBSsRxJ7Lw5X8DzaxJP34Na9grs7O0ef9WqfChXBlUpnQu2bRIOC6kW3Ey6mPtRntiDM2B3rpjrbXjZ1SFyHPIDheAhMzYK2HMbx6dIJ_Aci0pS7AsyJ2ouZwu3HOvt4TUxZ172vmu0ZgE-0yt-t15VpI9kT3Ywoi62Sipw4ruI17kI9LlxxCRRB5L6WOFrxygzg5wf6EEl4FZ5sNh5LQj_WUtYQZDVxlqjvX1pAt3M0DMUIMbNP3gNgur7IS9hBR9cldYf9f6d_DwL1AECt4czCofDT_OxjOtwV-91QWY6bPAR1rp643C-H2lrBMxEzi_jauzziAkkfWWcNbXMGA5G8-Om0038fvaIUJwNVAygcCgSEuIMiF6GaQ9ok-J4J_r5eSgHbjmIFCR04G3tpgO5llD_G3gq6rI5shI4h-8aaIsTc\"}"
  },
  {
    "path": "genesis_data/genesis_txs/00nFXThK86Aog_HfLJc9j0nnXzXSlU6VdGC8qZc5ekI.json",
    "content": "{\"id\":\"00nFXThK86Aog_HfLJc9j0nnXzXSlU6VdGC8qZc5ekI\",\"last_tx\":\"\",\"owner\":\"0_bqF5ptW5iLV2-aJ6bTZva7d601UncOicbBR6MrvahaZKZQUcZjm3kK7c5eGFbWUGxbU8p7f7rjfmF1L-1bly11--MHSwEkgAh2ypRFLF5TSwziBjGGUUHa8p3sSbANoUphbRyrGx1ZimX3WcxYnnGXjuDvwkxerRnwkYIe_vez5CZVe2fe28ceKaVfm2VtFN7Pb3QG4NwMes8CcLdmaoVNPmSsg5ptwhMV6wQ3k_bg9pNG1B0TwCbHvi3xmbEpjOd1Lv5Bvjyeyjykmd0J98ZbuEaBX_QgYOXGtA9PtCg8lheXbYwcvTKNXYf9fDywlgJ9DtwsN4yrndmWD1U8J5TTOoOgAsCcSfvwJ8hXhdQp7YgdA66LUQv1vaM2ZpvIGHu6hbqBr7lt--a0nT-YKUSpKB9ELWsZCk8ut1X3XNxJMoBi-GpDF9gJ-cUeThsNlik7nuHBcuMqXgfCNqDu6hCQmYB38w6io4DMxi1MXozV-VvPgM6BYcP1LbHQnYb93NMYsN7cbjFnCOMP5frm4qHqcDdYlLLKmJcW2gB84_nRoLjsj4b2FYBh3Jfo9aClmAEQ3o40Qf6_gmP9Hqemhw80lqKgGEKJk0l2-K6fNgHagHw_r3JMzCIOH_zCma9hQdDvWH40-2ggHGGlFvSy4jbcIwvkV2ZtI1bgIxj-Hk0\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"QmF0bWFuIGxpa2VzIHlvdSAtIFVwZW5kcmEgTWVlbmE\",\"reward\":\"0\",\"signature\":\"Z_IgNeQLLHNESZeyfF2sYdVAYqbBmKOnVleK7utf5dCsQ0hdxv5TL7DtRzTucixtxua8W0ZQTtI1tKhZRNg8SBKDIP2-TTwmOloWbvDvqpzNNKW2Dx8ViyRbKyZq14B8BLObnRhdkcc1SCmgUvzWiRhWTdQV0sQoHM-mqi-dIYHJKa57zjR-fKvYpIHFCcDlH_H0RtDuJs77MBUxaKTKI4zN54WFc_xiyrLmC-t3JMPBxYdCbjVSZRKEmAWnoBx45AQjE9n7SUup7M2JM0aWPmRem7sQS_Kvbq_0q5iItlwG2HZQahMQbok9YkKhMW6hfyQCcA74xQdq-PbpmGUnnwqiUQeBVT73boTNiyg3skq6RTm2_F3vQr2ZnaIOGAceQv8YfHEuOxhy8KHB0GdDqFqNQLu4Otn7DqbGdxuMIiQzua4FZ1kHgJbmfn8yUiW9wSZTFfcdTZSts76ey-VUO5aST3vvJ-t_wt3MSJu3qfS6CAzPF_r3gqNJHW2R0tHCK_T1JUvK8EdvFP9yN_xxgRRkLlOum5f0WTchCciHMCpmvkpdRcmkGqVut0TXXEopVJknAY2yTjh8OJwMb5DBXUIG9cG12dZWn0OPClr8mYeYOApyvE-btC_83E8aCyDqijpsPVsNDyeQ789Y5kYG_DXuWMw7UljUlmKWNdcB01c\"}"
  },
  {
    "path": "genesis_data/genesis_txs/06dr4mrXcKlfPbK8t9vWOBCDJznyG-AsKxED-Jr0U88.json",
    "content": "{\"id\":\"06dr4mrXcKlfPbK8t9vWOBCDJznyG-AsKxED-Jr0U88\",\"last_tx\":\"\",\"owner\":\"uS4MMcT-67rchYvbu1dVoru4ojmv0EpENu1nCr0oBiQIJDq0hMgHfUEeuIf9sbBJwvBLC4EFWw5lBHmZimAzvOv-0BbPrjBdz3AZHpjifX3WmoGl6zqfkv8m7eLPI9qFZMLzOJJ2avGk6WoDB1FYoUQlyKdcyhd2gScNgpHc2616bEptlzP-uLk_xgjcaA_tdG6hMG0O33fbfWCHfH2ws2zWmqImY9GTboMuk6Z4xjXH9w3OG-EADsSmmolSPhDQuy2UtlOb3h8HP_5ChRnYS18BQ9E0PraxFsgOoqwdjqB5imCZWkhLeXtVjHs__IRbopM_xpbwc4c_c_q3kuh_yMi3g3gJPlGgjSMikzSyY5IfRgXHTQuTB9naPuGr0n28Gjzxt4hO7xoiZHOUFqCstTnc6bHaoOZeVDi0Lm8R6IvWBe39krinElZGkVBagcIYkrExfLlobOgaeqctd9qFoym1C_hf9VBSdATXbIdG_pCXU0pBI51QACn7uAk4sePIUM0crEQXXaZGckkvsL4hnlDAU3_tfRtdLRZA8WqwmF3DWcXOISOXccF2WlzgrjAappkVQallq1EkljGGijrLOR77bnDIUnbV9wrVvdxBpisK1EviStz4_NXjrj5aCd7-N6IdjiSRTo05eNTaY2xhrULsNYHfKiuceg5QapJsfas\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"aGVsbG8\",\"reward\":\"0\",\"signature\":\"nz2dhzg87o9coz9I0cdI-FsPxsay10XoH0exW63UvV420ib26A2H5Pjw2N87PNaKcTuXzWRof9Peafl5-e7rj1gRpfKU0kGl3Sk_78iISN_dH8ywX7qVYXN9S_sDvR1HdpaUawBHc9AS-n3mew7Q2pc3bKra-Fr89gGuEHre0L6yqjtmSEMSCf6rwEnsjJpfupg9DaGlIz5RDRTF5NQHOp6H9cCghApYrYn_VFZGxpY6x155-gBrCLIlG9W0OxzE01g0_hsIySg7rHx2iJRq30YJ0ohdUpL4BdbUOCJY9grKvs-L0dXqWWqClYnGMDb_L46misjOicysePR31rSPnhpj9inY7OVhk6VHg6nRg3KGt9AP2tr8_XI1sPcn3D9Be6mjZNwC2j-YovgV4nkeytlBcYzRnx_126EBqNucO0jfUCUCbuImMBEJ4sllplrGBSGRYlmOq9fI3T-9jaEqTSy90qp12TacLRjtG1urbCNU7aNbNtBw38XOyjyQta1jQWlS6TkcDHCJIOXSJoZFCmrBOCGgpm4_8iZHgsfrojj9bnxq2xYhxGF5S13z33uRt-P3dXU7JcmmFd17WRVehbFIhP0mBMtg5llyrPajhqz7Y_BJWT3Jsn4TzcZF-vWwrjI45As9Ly2jSHpNtPcgN7GXwzq4KegfbMSF_E4lWGk\"}"
  },
  {
    "path": "genesis_data/genesis_txs/07u3F6WH-ohqBclh6UanAQ9Tau089eLJrIYM-8qkAbw.json",
    "content": "{\"id\":\"07u3F6WH-ohqBclh6UanAQ9Tau089eLJrIYM-8qkAbw\",\"last_tx\":\"\",\"owner\":\"unjIwZ7-TIlfW_y4jkIbxCr6f-zApT5kko-4zfFvQyfSHdccAMg9jwHM5wNGTFDE6hCbT-g4VDIl45Rsv3tGPdfRG1PtBfrIsYNZ6TbJK1pEOMors9sRmlGweqabeS_RS4T37OJ_nFXfzEbEgI97buA9HlfrSAiaFtB4UP2J8MlwWoU-4h92HksgVIGQ5a0Nq74KduviDj6ysK2bWCteGtspReSn7s3Se5Zc2zxwhx5DSoO8e_w-mV1WaA3A8lyCWojWrGFCCQLiBY8TMFcvUFgzayq6EiS7oNn4uO-FBURF1qT3are1aHKLHUADn7mQgN5FfokAdD8ePixqvL1Yi1ghUV218QAZGTQaBB-GMdskSud7tPxS0UbpKwGdqXE3LvG_P9S-5uCFTMPzEVQSLzXRX_m-VEzYcGn4mbxNl1e0A0zJBJ82BY6YEl8brFw0IDJyaozpUNU2YRzRK7Fc08wemlVNyioJuVhLBefSgtSC8PJG_IixYfF2OtYTHLoeUSDAHbhQAU5ASR0nvhQZsu-aIIYmYfSxQQ9Ub845AQGP1z4j8ccrS5gy3TpDoWtTEtB0cokU14uBt1QPGEKRWUVZR7kFWLa_MzYz978zLe6nN9I_z9dVVAdezqsQfJT5mP52z8PHqCQ3z2iZJUfPUYcwC80S8vEjQ7sMGE9sHyU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"TWFsaSBQYcWhdGV0ZWs\",\"reward\":\"0\",\"signature\":\"UuS6zYGmYMrM6bvVl7IX52RPLlYzciEw2pLN5bGGwaAD0Gx6tY0oorrw-tLuhxSSJAVDmFN8a7DGFaD89UsLns-gH4E1VRAQY8K-dMp2YRbuqF6wfgkaf1MkJ82qoIQACcQu-U7_7mrTL1uHSaM5M3OLqcsAwSXKUptn-JZybmrnImygB0-UdjEgdjFihlQa-6os_iQD_TSjaPbni7fdpc1H65ByByF84xf6ikaFXa9hpVnMNzcPxe1wAP0_TQgH9mO1C7X6rSBOfZJFdhJwqR6EKy2L6mDPAYgZDCXuv78pckJ8uCtiVataMh-efqsoPu-s9b1GP8-OScZTUnXDaYqMvmElAs9BG9y_XvI1dzyQHfxyf0-JB8knlYUXG_kQUjTA9kpsCG-5mxfeI5YFYi-6D2nyw1VG4YSxI4jgWcyJ0Xqt9MHMejL6wuvImzxICcP1otH8uU5nx8vUcsx7Fe0cnbVJBHe84Qir3OEhEbi4oJlfGOs2fce_iHiR5Fqqt6MyIL9GrDF_YLaT6B07xiZgfrBeztSWFgIgsb3ALBnAGbVJXfc0s0Ejt0fvLfsmNt6IgXEwjZA5aKKXzq07hyaNG4_K06EqnoHLI-Z-KWe-tehRC3wGlJlYojKROhOHVNT3kv5cK6jsYtArA8ehHytn1aITl_HSKPt8snU3LL4\"}"
  },
  {
    "path": "genesis_data/genesis_txs/0EzNUQy_5b7CwNNLVAi7CnameMgnxVh-XyahT2kn74Y.json",
    "content": "{\"id\":\"0EzNUQy_5b7CwNNLVAi7CnameMgnxVh-XyahT2kn74Y\",\"last_tx\":\"\",\"owner\":\"vBe1dL5_IwUqidI2ye_rEgSL9QwxniBK1jDpLD3QFoy-QAQ52-Lz9V1s1TuZARF0Ouk6E60mKpDEOoOu-qCN1r505LFSg5l37-DyGYv5ePnbEyeWKzDWMakCFHYzJy8jIhFRJ9PtqWD_XoVQ-t30-TeQPqTB9Knc9CcLiHd3k6E0v1Yzk9f9IIrH0a1FpLIbiSh2kqZjkqe6AuysP8_v4P5Wj5fucdHeJUZhhatj4F802IzaBMnvloeuKoRE5kYLmfpD0BtSTO6f46P0ms9iyGgECCENicSQ8sf_a_y__wicHLDmRIhCH8TVMtDqNbtsprheZNWZpYOajD_eE4bR46zV04cY9ZwCNzb3KZTHqVGanBHVP7PYwM6am1FTAWzzh9UKjMpaaToAUDeyzV9ToDePcM8vajcciNYP-yDv3RruigwU3cH4JOqIP8t4xABooDAQzs8crkFuUpaGoOA2ZpJYzf3kOdMnlYjX4T4opagbv9JntZREeb6mDWJnSFuRABVC3_eZc1-dkrOATTYWHXK3Pn7ULmTAouMc_Q9DffxrKQwTgkABslG57t9ljhbsVu3W_yIJOwkN_72DI6TyN52ywCg_PZX99uEGqhC1eUZml53NV5Z0BF7KsiWijkm36WEgxoDq4OzMBbsoX6QnnfWJvm2dY2tn5GDsBk3niY8\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"dGVzdA\",\"reward\":\"0\",\"signature\":\"XUvEtWBt0IfiJHd2x6qdptqwd6E12GVf7Eb5pOJ4zhk5GuNz97YIXruTSPkfbKbNa_fwuH0nE-Y61zG5AdC2lJscfTcZmuYhCfGgl0F3K9iEmSwkUAv6xlPS5OaktlzfSRiCYhimZJQIgGf-7siByHXe-5FHI1uhMVLTenm9OqzVjNp3kPQObjz_otZXDrxo39hdcESNC7ggaZH5IWv3YAYbN9oTAWGh1Mpf_t09beMoIPkRZJrNQ_lieBmCj_QQtslVcgAz5mgUk0i-OXiPhRMKY20lKZU2caHPZnT5xCxJI5IHjSbmphg7mI5hf6UUH5nKRzVcDn1vmRvo2lQr-ZiKlbrhAQRQyF-Hr50y8UpXQktIYl_CKC6SwB5MiCEDanDuB43yEuQwxSCnWBakOjvR6syPOFqaTjD_XDDD-Tdh9OtPcifrrXf5ayunHSb4mrJDK23hfeRpSXmsZXLt0ijwOtY0Rd8QHdGqlFgLzoOYoDyzVvSlNE8r8uHdwdehNP_kYZaWw8bzzceeSYcDh39dfTQmRE5mBpIsu-qXuRYc81DYicYbE8fvrwGc5K7bbx0rYufkKoZE7SzKwuYvK1NOjw0AB6JK3f3GBAYbxQp8o97YcAQoxRcoA9WiYGt4vnOUOyjbIqmoXpi4HRpsUju608tsVkxY8nWbeq22jAc\"}"
  },
  {
    "path": "genesis_data/genesis_txs/0FJrLrxrFkVTBwRrzCCh88Gm2tG1xPxg8s_IuRZDVzw.json",
    "content": "{\"id\":\"0FJrLrxrFkVTBwRrzCCh88Gm2tG1xPxg8s_IuRZDVzw\",\"last_tx\":\"\",\"owner\":\"ygjBV6EA700C_OwE03ezCj5yKdABrMfipvr6OgUadBtHJ0xvn6BG-sNbCJEz_uI76_tsvL70EJ93thzK2wz7Kf8R4eGYaHytEJnd_r9naKuz_kX5tyiCcvc97iw9pPfX_-vs-S8tDjRQponOZXC3xyhlD_QtcwuWZs-mHMQgOoYE8BCPl018tWULUA_vSnUkIz5w3hLeELBum_8umoEvw3mOE6YQLA9mzpBevkDWkHbUbwxGowpj68Fd3TWwyLC5vxTCtJi4aI18L-m1Ln75yiDPdvNEL0msfoGHxNbj-n100VGRK9INNm7S54LEZ8EF0wxXUrYYpxgp93ZHp0Hc91-XrPW62uGrCefY3T2Q7Y1I6XlH6kOH6Upw5UpNWhD_EGUMX7dWJb12U17QxaI1OZroEEpUp8sZq4VnSsi4Spt-AhrG-Xw51HVHDdtLbDJy7TmUE0_sQF7M8zLj7UkKDEJFEjWgrjzACt2T28YIhsk-OPD7HmKCrASg2JyziBdpvDFh2HeeB59CrDgo2ivW7uW0MOglhVqlfIHltwT4Psx1pVQumYU8A-UmrnAi0VaLn1yp5lmR9RoaCWmhN3y7gzCVx9oKO781rVJYU2M3OV5L4X9oD3GK_KMezRl1-53vqNZx6osRBAaQqUe48NIUBCmnOYT_VqqRSN2pJrg8Kss\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"Mg\",\"reward\":\"0\",\"signature\":\"v4vbK_qEdvd1aHfqjni3qxNsP9JzaDbQSD0wEIJyTSNYSCfkCm6iB1Fi5XKKKmJRk-YWKdwVqzB0H-DL9cCTfWBwQXaH-yH5qW9Xf0w8mlusv0ydEwt-WYF-imfvRWkv2SuW7ecgRH9HO0NHHOG7CThM0q0xjWgxX07m4Z8foPQIHi7DGGe_ynR3RZbED0aF4NFK70ilkZO7D6G4zn88I2kKDY2Z2sw6JyXQ5DS7Khj52ldDKMP6BOmOEH7dlxJL68tfhwIemcHe3zYl4Vu-1GqmH_ENIKeR21iH77VvCm0_voc2lAe9U946Txt5SMBby-Wxk6Q1V1ugY1AnrIMPqx6Nd7TBSa-JpS5IYbLcZUocvATj92nKw13ye3Oon2mTPz1EIYjvP670dYnfFiF6OZe2PhEcBXraXaDjr_-NOVVYdwKWX_h2EBAZcZvEfMTTXYUnKC2Mmqtvs84o-BYJlizjX7iCVgC8L1kLSW5yY_LKQsxAdyzczWgA1xaP-a1oE9dkvE56GuQ8fN8D0bZz218FJxv6fvjXtlgR3VDpENdKaSZw8oap3hBuTlnlvMY9a3XEr_LegybXZgs3aijCtkyW67ft6YeTFtbShYrFY_qAjNjFHlQ0AaZhVdBVsVDxcRKCS2fUUDyK6FY43Dgg9iromLM5X_1slunSLYwGm3s\"}"
  },
  {
    "path": "genesis_data/genesis_txs/0Mxvgz6_wL0FBOxJmHcRcNwiaV8B90whDxG4Vh_GFic.json",
    "content": "{\"id\":\"0Mxvgz6_wL0FBOxJmHcRcNwiaV8B90whDxG4Vh_GFic\",\"last_tx\":\"\",\"owner\":\"ukagDSx1dymaqdiSvCMR4ulMOaJaChntx9xaqL-sgcQgTT5_m4-Z7R1BQoCIWtPK3AyXu3B0Z_B6RuUh_ISwcf69wYa-oXXVV3Y0iW7ogRlNcAo7zdwetvetwOqK-g89SCCeXOvHoDW0Dtc1wHe4tTEgawPJN0lpVqnySmpbEP1vNfeahD9wi6hA7nBnf2lzGNhPS2TIrPb4t2rxHgz9P3sz0mok2RY7qLdqMhDVB9Vx_eV_2LSuQtblFNRkvUO6CVFLgld8tO39f9jz-bymdu3_0J0FXrf97gWceYgPXnXZT-FC6Txe5wULLGmWEbmPtF_YWzTwupeOLm93MeE5mDTP059DoT-PgYWdDgToXN6SVgRd3jR6LpZv3JvRZFQ6E1LXBoS3l1FCmuu4ubSrIHBm4OdQy1EiCRsxIx1JOVMxVcdgMgLpkCeLv_letFuJB_6wB_UVHq4gYh5hoRd72NILNuC4O1tkvDV4cc-j4kFpLGaRYxdwwJgJlGtgBUUunnNtIGxgTB7b_WiJYCOi5o0ntXJrLDOnk8apwvSXYvTkxMLL8LMyyV6s3I3NBEOwHAwt4DRMLszIuS7x9bdId_vs_2YIY_h6BH2nwg_SHD527usKb-EBhIQdN3GQvDuS4BwExooEjuMYN7o53MEhVXuiYrTHapm1zsPNyaTF57M\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"QW5kIG9uIHRoZSBwZWRlc3RhbCB0aGVzZSB3b3JkcyBhcHBlYXI6CgoJCU15IG5hbWUgaXMgT3p5bWFuZGlhcwoKCQlraW5nIG9mIGtpbmdzOiwKCgkJTG9vayBvbiBteSB3b3JrcywKCgkJeWUgTWlnaHR5LAoKCQlhbmQgZGVzcGFpciEn\",\"reward\":\"0\",\"signature\":\"du7VNgNQAKNvQjlG8WC2xoFI9wC4W6Sr0dUivAzywiuiTNwrCSgR8RKw8LM9KNLIqxW53H2QkFmqFfaK8SCbPxKKhnvqvrCVLZxDn3JfUadIpHjqNlGjQPQVOQFjIxSkY6d5kKlH581RpWM8GGANiSH_tZi8lGokSJriHUFuTsqHckQW_92jIg9oTWLMfWq0pxz2J9nVrvVUBxjpNFYB4OwcapOy7lH4bxrRuRJ-hQS55Cn5z0nB30fvWM8NGBuakVkXXJcGZVRNAbG82RvivijSa4e9B83BhfGu5QAhqWy6f0G8Mi6mS_hoYsjfoY0IHBQ4HLR6k5pLNl2iSwT7Tqeb-eDHBmBfGH7yG7OFt_IjkYoyBM1mEjwfFTfQNQo5ccXNH79CQwYnugoKMMBZPFG6HaC7hDD1AdVeILSv1YK4x7MH4aSnGcjqwWBqSKeix7efCzFD8eTmCYIJkcYhEEMDUqHrDu_U1liAl-o9sKPqes2sdAfgNRP2Uj4KbilATZS3aDae0jFfO514-t-L7X0YxxOXP9X1k6mU92Pnqm556f1oQu7DGilnqwJRPz7dersRJRjTNoQoAvVVvj5Th5B1M6iAnO9Y1cf1KVW27d75gBQBYvQ2Ktxh0Ch9FIT2tJsNXPNWirF3CFKGzdZiud_k3WxmyQ76ak2yC8EYAu0\"}"
  },
  {
    "path": "genesis_data/genesis_txs/0O-UnzBvSFYoMQrbcsKHRH_YqNNylC1n9KWXmm-rr90.json",
    "content": "{\"id\":\"0O-UnzBvSFYoMQrbcsKHRH_YqNNylC1n9KWXmm-rr90\",\"last_tx\":\"\",\"owner\":\"1pcS98oH7pkbVe8FaVjo16ahf2IH6x7W7aGCWZ6krYM_K9oStXg-KDj5lDsIHP78qawmG3wTutj0NBauqWlTVUIapmbGERnJ_qgFr7BeIVQ9cxSqXRBf-Cvj8GYSvjxm1vEytzfEqzKkifQHgqmcKlgQFVIy6Lu-qgT_jTtxt5FFDWKapF3kevVX5RfCmrT6T3lEjG12KPH8YvAQAEG6uG0OOK-ThcAAsPsZ2EdNu0_gSXpujJS7YuFg7W0FIMcRrAkk6FJrCmzW4xdwzSSpN1eJMaaNho8PL8d6XdeTeCKLMcz9eWqWyS9LYQ9Mx3800D65leWaYA2VRiee6J5uGMedS4AWENkXSkFhMP744Z7QN85v9KVBSJi0mlIK9IETYG1WcLNZXUBevUpk-yq7n11mR9AM3f6KMTTXQwLzrBN_CvghNr40z1ruPtaDPNSGQAajT_YyjxubPjSa4sBZ-MWG403P9wCRn84rv4o1XFmpqXgFeiYJPkZdkm0RAU_MAEPz9Wwj32YmmTeD8svat26Hi2_L7JZEh2E2Hh15ogZPApw28r78DG-MArwGyp268ltzZBIs__S91gOEZnvYWachD_DN8dGiNP7ZYDUKIfijHN_CZWbNoe2LHfNYzKgu9plLyfOn9c739MzPZaFOl4iS8ATeSK4vyerapJG4Vu0\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"dGVzdA\",\"reward\":\"0\",\"signature\":\"RiIKVfETYoVJhD8lnO7dzZ_FU9FqRjwUGJkZl01IFxn0suESgTrAuNPRYFE1TD-RjOVvrQZ83JOPR6_PtECH-E-3YchyTqJ_MjAAGcgJoSWPtOM7NXKIsbQcqh5KETuGD796diFSE3ncUpN6eNy9ZqsMWgyNF5Um62gYpLDtPAsTKWgfLMx5zHIefHy4ncgaBcYEeZMg5aHDQv1b2JpZYwxN657_bf1QKFZAmMLJO3VcuqqgfW5H5YdPLEV8eTazzFuWavGLNWVo1KgoHiwoql4-UUFjJOPbJo8JhVBa-AqpvrwJTV7fXJwroEBTfilZsn6MhlGuFxV4nUs5BdY2IrDe_DiHnK7m8EwUc-4IGGP9AsschL7s-Mk4uBfSwpXoK8j75bqsXwWO1Ist2a42Jg1831gqN1AcwogJza15aKPX6t8l4Yv-kv6uR08oOUtWdD9pCYBmBYKqg97KxdOa2FlZQ4tFgIPJPEj0dE6xqVyCUw2LlOpjqnvwgDS4wT9NlhVF7X9vpivcT-7KwtFjtCMSUtUaWuX5Ot5vCIelfEk1wYEcM3-KJ4Xwmw8w1TBim4HgC7IqT0KK0MmTjH7EaX9YXIFsum70x1GbQXoJxXd2QbCLDGqMiwznVjQSXNn6SveNR-RePOoMK3ntMUAnIbYCEReiOzGiKoP3_5p0KJk\"}"
  },
  {
    "path": "genesis_data/genesis_txs/0_GKZOdtRH-nc094U5kFBlvQSjPz_oX0tcIroqLFD3U.json",
    "content": "{\"id\":\"0_GKZOdtRH-nc094U5kFBlvQSjPz_oX0tcIroqLFD3U\",\"last_tx\":\"\",\"owner\":\"rIpHdfTLSATegAWr0tgtIADHzC8ranmxjJPqqWV4UpTZUFtrI4nzzvbGXuIk6wjNY_pcmLtmmU_5ifYB7k_lKBoLBjkgAf5Ctkl2Pc0wX3hQL_ylJSdQOUi5OK2zT_7nUYf7JDAkLhnwsQlApMxglFb1LGoL-l86HIz3sEtAq_sh7iGESlwahhxlJFeIZc0BunkwqfMSbSfF5VOsODZNEy1kFw8dhv3pVqeYf3pOsC1FJfslfl41sTLGz5RzdWAwD7PjRg_AFlw6MdQm41g3AjJ1wce0UFmLsKgniH1vEO9Zhz4UJ6UAVIsX-V4LroOKFTLOmRxHXOidxWyHlz0IHd8EHwS0XmQcX3qe8uudL9_X2nqUMp9nOCCm2cA2k0-3inGueUUGD-sCtZJmixk3K53wFmLQa_stPviFOA5xKlBwWgU7K5C7z0fahT3P9miF9YT6IgEhI6NpQtZdfd2q37y8AnPPXa_icPL7sCXmePIdAy4qoYJlWl3DVcjBmf13gwfS1E-ilseQqpvK9fW7mRuREt3NHTihVMHr-rFCseEWJEGeHhucXJI8ViZIcu7iOPByEQ0V36j--O8tJCLM6YWYeQ31BYh-olMao_DobU96d2onqdMsuXewnd89NqZTZesfbVHijSUUR9afunZE1b92taR7kzMRpzU8cFPRXqM\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SSBiZWxpZXZlIGluIHlvdXIgYnVzaW5lc3M\",\"reward\":\"0\",\"signature\":\"Rwd9d_WkDUgkC1nXt4QXBD_rC640h5yNDeJ6F-hNmnfSSYyp1JiUXZtntksqAhgBEuxjTHXpiug2eMmYomN8SShOyYPyPBz3UOOT22LXmr2qCbPCicLz_Ki-qcJ8ct0bM5YdRUTmqzLCguudhV5MEHNo59rFUdAy3AUGsrwn6f5A3Jr6fGwI07IIVcC3Z4jRPwYZvaWadCPQG6bu6jiqlEgFcU9mhYcd4xxIjRsjEnOuqKyWmwz0A9zc6bN7JywU-OJKJxLwEhSYqMkW7ZFRBDQUotJfINAM2TqXjB54iKtXAez0fGNpTLNaIuenUb2I756TXJqZPPri3v5xl27X5SEr6RNDgYceXW6UN2f8cgKF_kYonPf716nl00KUvUdDUh0Xmwm4IQtSefASdlr0ZT74iSLdOrJO9OZISvsAK5ylA1XQeAe7PaW8lYozp6XNBgpfbVp78HV-fm0Z8kkMmVW33xpV2JWE735PYSSXqkWNcJbaYTNVUYad8_BmqLZllr4qrQbTOl7LgM-vRl77yKDvndK4VYnrWh7b9SdWjf513ahFouthc7t4aZF2Wh2Swx6Y1GpinuvAANTKbT0epSIfgp6MVKkIrHO00d-s5tgmanl_JcJMrvdWKjjJzehIx9qtO2Yzr9ZBzcY2fLwz5tlC6KYN0WjXuxLr-A91Q1Y\"}"
  },
  {
    "path": "genesis_data/genesis_txs/0biLy8DoOhucpeYzOj5jnopxxwe0XDRfCOMjyz_a74U.json",
    "content": "{\"id\":\"0biLy8DoOhucpeYzOj5jnopxxwe0XDRfCOMjyz_a74U\",\"last_tx\":\"\",\"owner\":\"139xZjilLwGQOYO3LhR4s1HD_gEcFMYqKHsPrFIN2ZlYze2PGau3ea-Y_JWpQrzNqCA_PvmlV-BUb-QZDc6hlQkqw_ubu989OPyNLOtma1aIMyjR1JA-wnsdsMCy502jg203t-JKuOg-D9FrX8ecnxzpb0wtUGFsCwgg3GJvoBq3UhAu8c_elO1ZqdynrQM0HfCHeqAVM2EhWxFKtQYwOWIjhp9xIAdJmJcZ5mmqGxqhxvaZum_9_owzgrlx6HGHuV53QXuXTETznOKohMIEf80iQE1YkEdPcS0L9uk2TYU_Ud5KpLq4V6XZ-DDBX4UqWpS7VJihUgP5xR6gQ4XzzoJVKV-AmRCRy0cZ_Wo-aaw4hCfR3Fu9JKutRqBvjw44LmKIWDyLAfmo_pxl3C5UBXD5a8n_ge9irhbKCFwBjZJGMW1WrQxga1fAaAQ6JPCnXd6UCHPSHCALm6eGhh0Bdpxe568yk2QKvnNKPNG8aIth_z4A0O6ivCxshfv1pftxco1Kkk6KDwHI5WU3TpThFM5p61PcCaSbpcyqtAl1cVCdTShf0YwKI89oRLZ_JJTnD5urVTSXJIWp3BzCS4pc3th6DvlApojeTSsABF_x5vEw6Tq1OtgA8PgysJExrdazqSiZq7dZJGJzhZgjcridpc7nu1V9C0ApsamzTmkNnd0\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SUVJV0dUTUVQQVJUSFFJVENPQVNDQUE\",\"reward\":\"0\",\"signature\":\"yW7kOjc0R2wjx5WRxfvcgwYoPhZ1_FlGdCbou79C8C8rCGrf4V4N-MMurSyKtwoPKpleskFuXRwWh2cXNjizuhPtvNOJg5m3diJ96gvpJhdLdVl0jPyOCQdZJeL7n1dGUQnG1iN06utApm2zum5Bet94YYeZl9tzv4cjRPwZc_zyBKy1wxItFNMoJS6wSDluETo_UdqDU0p5ZwcwDtm2M2-sU254eDGISzvfO9Qn8DKAJQMGJBcaNUz2pXcgkgqhjTWt3F87gKM4bse8s3QjlF8uPJ-EqYKfzM61Ub3XxRbH0E7Mb_glZvLt3XoN72HSyz6I027Wk2-cNIx884_nQH0zxWxod39BmuRntYcEIIKYyMjd-VvG5XTAgFBY8pWAJ7hBUoSLyTopccv8UzUwmQpUsrcqpoxEdVZDI7cWYhNskFPedp5CCCOLxfQsJbhXLvaFoGY07TsEhDIsoRlS6jKy3uVYbQ-HromeZ1cCrBRJ1akHa0dRG50RxKrExCu95qjKkdUZ0m_CutOg4uafdhxcudyZ2xWI8_DvoOus0x3kf6pl9uA9Bf7xrtNW5anAw6X4jd6glxgc4gu1sXH-zx2ATP4gaA2_iKoFM-PBo6e9-ts03F2cyCQcrLmXtLPF7mrVPdxOdAMlwZKOO-9YvKmi33TQdvIURPtGS7jE_C8\"}"
  },
  {
    "path": "genesis_data/genesis_txs/0mFNtCi-u34uwOj3BimQTPOT9PgLGE8uqCbtXhnwoKI.json",
    "content": "{\"id\":\"0mFNtCi-u34uwOj3BimQTPOT9PgLGE8uqCbtXhnwoKI\",\"last_tx\":\"\",\"owner\":\"w-RKGMnJuJm44_PZ4gXHfIkcqTaQY-hdLoILdQIFIFgzPryYw2vqxslLbRUBScfyVCYv55Rm_LASQUaLGOa0-lnUZsaKTXW304H-zGJedAp50rATAbN-A4LcO6NfJgM4TZpP956IMqy4pssr-a2pqmWD6eWX48lWY5DTNIfmTj-eYGtB709zzngwtymcP4E4dj-YYsBfvFXfBkxBohlxScq1EzMqEhFcFK4EJNUKxn_j6Tp6fIMHxlFm5RvoniC06r8lf2QhjfB1_UP0aiZO0wUms4owZYe2Dr3H-Cq2ODGNAEuSmza7q6NwJqZ7NssVeDbpinpTj5sjn9-cE98gSPenQqbqwnRJTfq153fmZNKrx33nDWmn9mGDm1f2K8_yThgEXm-LTG6uYi1--ZZ-VDCEHDFF7wdMla_aTqSEmIVh7OLiHs1hju8BnpbIZ7jwpbsC4e3xJTUpZ9YMcbzv4iaS8it50hbmIxOvao-7orv4GvXozQHYmywVrqkSYnnSQYbZV-x7YQz8ivKDfOL2xnNJX48X_LzO_UVsdoep81O-ohyDVuf79AYwBqQhIcgF2_AC4ZqnxjdJj8YepmG37KEuZRWv4L-UoL0Chm0dE6vZiZh5mtYJLwO_dGNydy7EIOK5vSQzKQyoiTu30AkghAMiWSe_mbygPvhCwAnNWkU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"dGVzdA\",\"reward\":\"0\",\"signature\":\"Xa5oSMd8Q7oaGutgG5J-HAHgeln4Irt4R3z_jL7wtKC_gmdKzQn5_-sjFh-exO1HVjN7hA4WwvSWZ0Wpg6hMsZ5mC5QbJXB54FB7e8T00p9I-YRhpvi6h3uLoi84AN42ay70F_OsLbyb4u5jtHlCd_adsE9lAcRNi6HlLB5VgYpNeueO3K26QQnnR-esxchYvqtqwvpXQy0Q2lHLnv8XmMcm1cnDugG2TKwSRspORK2-SuoieaSJOGzbyMNwBwZLWrV2z9XSLDwMESgMEi0fZAfmnpe4XUH04DlVVstxjCtlFCfGTqlYh-yUl8QbaCE9RGObEkuCb2d_PB3nc0zNgM_7eHHyDrg1qldMgqJ_6y6QWkbDM7g2FxilxLGbF9qkz_bi88tg89pPg-RvXrQQ9CHwqx85V_vgV9RX8xJNsEwalIGTXepxXuwAbki-P4Anb_UqyCW0gTqSaO7U-F2h_u7bIjyBYqLYeUc41zQwgkWx7osY5v4c-ayuAajnhAAajlnaledaxeKsKN3fjGU0SQGB6yQ6R9G4DYZQw_SK1P_5_hsCFlRa2vZM1UuEXvpEWhhARTsu7gxNyOpMBqUH4GNMW5og1qvmmS6Tp2lET-mIBKJihQ8ib2Ua2BEtzgyr42JOynAC5LQ3T0Mh9tIVABOk8YfHaVdSFPX4ML1cato\"}"
  },
  {
    "path": "genesis_data/genesis_txs/0ogs8DTdSrNxfE2LzrScPvnyf7CQ7jMdFaS_l0-K-GU.json",
    "content": "{\"id\":\"0ogs8DTdSrNxfE2LzrScPvnyf7CQ7jMdFaS_l0-K-GU\",\"last_tx\":\"\",\"owner\":\"xtiH5XagvK_am8DYqq28XlkUEXHrURpZgt0gngqrpokl5PIlFrLX7-RsEyoktSx_TtKhKkyhANxwzI0Ke96DW1momlDkGV_injZEDYVzRw9abTAK-NW8sSriGHwHkh-YKNEmIgzCOjtYttf0Q_DnGdHjo4zHKZ29sN3jFBIKUsboqxHN9li0SkNjGElQzdBup8NXGN1V9VImLLhYhXdpNb6PNvJR1Dj84fi3-fwjZqjK8UjC-VAyNDmnBsBTUPFpN0QOZAGxIb4xuwGE9VGibR9B2IQyg3ltlfhNMx3cZh1RRNbTYo9B6IM61EIC5C-od2EWxA4IIeE6p2BXuWKNdC4_zaF5c23Xz0wbl20-zCGh5yjgk2ovfjbLksfx13yRsmX80bNHFG-nDUK4SCx2igfqMOMWRXamNp8rCyb_-r7-JwTBdD8ZBhy2n2AjmdfNRqjcm-tQGxicjQ4mUvmuVxOuXnA3eQWVMp7z3OSD0OAO1U6S6MhCRAL6VaHzc03JzEWxRi6LW1F7qIboJccaD2VA57ev7UYXl35vplR5f-zcmbieLDBvW7qGRcUjnXqf_qp37tiSVouLyE7R92kuBpZO78oIXC4lEGMCY1ZRPR2SHmeUC6xPedArBSLbcr2G2IQCyvquemUm1xG9q1vZ7bMQG7bp-Ey9QHWPanqtvvc\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"TG9va3MgZXhjaXRpbmc\",\"reward\":\"0\",\"signature\":\"QaDAILPRoARO7jiCdK274QXz1ec9azKLARlbHIxD3nmodM0kDpxM6s_i3-AexeCQNw0KD3uk2_RLozO6yv1iAiQgyb981Oxa96L3NfOyirRhOKa-bgqG8ET17XdQQ6DXGfNHKBTKGPdtgGF0DcZY_xoQEm7TLx0sAB7pllf_YM4GK9aJ-m4irzWrtqO6tpWGFluNUQw_Isr_nL1NMfoknXK_xCY5r1baIPK9pr_LGL3mJDd_47TLyNH2YDwaHoKA7vW6_GaDQP88hVw5uFJDy3rrTotQePQHVMY045PWhbuQW7_tejwceJIGRwngRT0JyijNEZEDpP3vyNz5rTzvGn_G0CTtWc1azXNZdI7Kw7dtYjYB0YIVop_9a1fQc7etohoh4SUpq1ULzAIkUZtqW9ptxZOqzrwPyjOgSdnENUVl4PS-DcgJl0xuy0Du24we32FjSeAOZfewI0RcNBm0C0UP0xWQ3FKeMQV38ZhktCiO7vpTdvfltFcIGzd3rCV_BFThC6p7TGzaYtAXTTD6PNTCpFTg04oTxnlWt6kw2mGztY0nYqMCKRtjpd2FcVpxhENmEW2adHiKik-ZIrErBfaGjRZBMqtpWxbUiJsnh8vT4bvl-xeagOrQBSvh-kiDOziNtirADOH40EndIvYfDhqouR4QlXySSbcaYP9Q_Ew\"}"
  },
  {
    "path": "genesis_data/genesis_txs/0ooE635sVsd6vdhX3Pb8Ufvuqd7XRjfUbG2eXde_CmI.json",
    "content": "{\"id\":\"0ooE635sVsd6vdhX3Pb8Ufvuqd7XRjfUbG2eXde_CmI\",\"last_tx\":\"\",\"owner\":\"xtYm0AInoFMTwc_Ag7BKY2WWiL3qJMfLDfi2uNm_5gguS_ImQm5Gghxvnuo_yNa0AzgD7wEu-RIhQT5hYD916QIna_X5MexsSbXs3qCRQBfTzcY7XC3DSqfrONnkE7XQfzsSY5vWlrC8h6MIijeuiewyMOjVIAn3-V1DEVgVV4Fdqu10JpFuBMfJQIkJHsbphGAjBRKHEM0klYnbPotWFT4Tih8kg0B2v56anEmS3Wy9curATBNJlpXt8j9xkr7yeyPsEUGZV73Emlal9ksbokeb4B-pivplM6_cNBHRben2xvlDF0xGXsaU-G5NIr9cFfZLRjerAg7wG-3aKVKPiocidk8wg_4lYemRVyVZb2RJ63pw2V_YoAQnX4CDzU_7lbrp02cQ3F4glqv5pE5n4V71hlJbhxWz_qBSTXXn8VHRbD7TXSlAyyitFyrtKQKSBwx1oNIu1VlzZqfGK7cGvQiUuHWbQGDB5TNbdlTEz5TJ1sr2gK9BViw9_hvtLHSID_iWGLBrQlwSPa0G02HdSV8NeKobqocVlAE6f_pY1W-br1yX66_kNi95wTBHRXqK6EsJ4PISAGDOQLuD5UtTPJYEi4v5M2zm2N1RLGBbMUpRwPEGUsYcWtVU5mAU3MVlHQj8JnILJf7jeP8xvcjf9E5Lt41CUa231sNQzyHWsrk\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"TW9yZSBDb2lucw\",\"reward\":\"0\",\"signature\":\"Pvp39V36IcYj1844dxowBxPIwD7k00m7RVNkst5wPBeaqiOCGRnJLfD3OaPqwkcU2xn4nnnJ0fqmNdBMpfj1Qfhy9xoUlvarLMjo9_Y7MpjqrcZ3t7jhtJVD5rKBeQuvV39xwjnngrg9l2NJ5eWsZ9NZatsfEt9-8f-tQjxBJnr2l57hV_iCEnihu-lCASq3L6UAOcbxvxj3sBx9Taib71jVbV0kEf07bD5n6FGln9syyE-LmojXrgrviyqjKAtops5DUtYVhX5IyDC4gjTM44Iiimfme7pXnfKoKVwDIXiHzTcJr_XcCUvwa5kqykCqUDjc_ibZzu9GmHOHwGe-E0hQCMIdJY4rqccRSWEitkIq6ZR3lUO4P6Nx0_T4tc0RXEci9642rm76uFv0TUjqm7vfXLCKuJbTZ-dOyczwXn2LWVy8U_aFuHT59JX0qdu13t2lMAGxuM28RjbDqd6bDz2EiOGPRxumFB4tuRDUcmIR-bQTfVuUDq65vUer9XfMlcBcJAndQZyE6MqjO5NeT0dbtTL7Eb31YiWozX7_fSTesXUTUf76UyMxcjtxW0jHHyKigCN0F-kmHDJYy4HfooMZIyinAFRQSlPqhZmjl2vwXUpdQrNVwdVszCsUkK87dsHVfNMbpbihUFWQ8DIJEbLl8BhoXumNxOOGaYGUyyI\"}"
  },
  {
    "path": "genesis_data/genesis_txs/0qob-AeHGTS5EDamY6Mtsnxf1MCyUk18l09bqHAYQjU.json",
    "content": "{\"id\":\"0qob-AeHGTS5EDamY6Mtsnxf1MCyUk18l09bqHAYQjU\",\"last_tx\":\"\",\"owner\":\"n2pLe8A1iOb9gwlSg-hqlNjxuT1KJ-rAnnrJQdZMkhgs_cMIR9ws9DbZowmqBtBR5MTKVi0u69fnRZmZisrugi0RWXyFJdAa7C7Ze3XXOR5CATCl2ms1nwEXFR4y3OPD88L3X6eszMTZ06lTx16XNd2m0QiC5uxgRlZ4TD0F0ssBln1yZKOKZfmoCtHS_JwaaA-tk4z6alWavJHhVktYMqDSTUYc1J5US3KtlD49BuVaflR2JjT2nl08DAN_b1epMStv70QR9Km3K5nbWUA413Q3pL9cGrxgwG102maiM3uct_R78mMI3ouxNBo4zum1Xs522KeCgXYkIrqeUEa7Kxir0O_KQ2lNrhycP2xHXt4QbPZ9e-y182WUmdgJ4ku_lM5PYDGwEP6xe-uEww6UMox_LsmXuLrGnFEk1hoIHUsFNQ1rX2lO7osbewNEP2xCm7lGQk94_W2QWg7Bhev9X7NVyI_teeyL9hm20YPxCpgka3dY2mT771SbFFBSGPrs3JEndHCX2QKOMWda4dMC_y9pyqus48fL2zSzFWuudNnlEvqSG4pho9seSemVCS-MEJAuZAcA40nGL4I03GuAmaheyR4zM8WUbWVJw2fvY3CDXRY_FFX3aSPDuHztkGs6tygOLytoIF9K32wmhwdalH0arKYBWkMyc4Eaj6A1raM\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"TmV2ZXIgcHJldGVuZCB0byBhIGxvdmUgd2hpY2ggeW91IGRvIG5vdCBhY3R1YWxseSBmZWVsLCBmb3IgbG92ZSBpcyBub3Qgb3VycyB0byBjb21tYW5kLg\",\"reward\":\"0\",\"signature\":\"fURk7zoXx_gX4ofsPuH_cOjnpI3wnz9sf-_f6Z2TcxMbn70S79_0WxmsGq1ILXTzudTEd6ReoKOUdF6m06GEYYsxJ6biF-ocpzmvIODm-e9i6fc290dZTgfhoPlyW1vVVRvZtPqMc-EQ_maVdefSv3JXNT6hxzo-tMGhPcST1wRyrt3yvORuhw94pVZ8wOhfYb1knWmyUWzJOxwHX_jP0sVqaRDV1MxoNgaKjg9DidjWK-GM7hh8415Xvw9FG5yknJR-5f6HXIsqmXBkDcEmWc2odCuQAN5nE8YJvX_NlPjQLCDWmvStplAPEQJJECfpksSqp1ivgSNoa2M2GuOrv6KsDy-3ymL3suFrolMn97kRSbyX0HV964Xg0VU6uj_itehjA7HuRFmYZqHq4X91iYD8WL4wCml50SrP9BLcQ73z5oMYAh0oZFWKOcoa5KH8A1hauT53SmLF-Cu3D-bKObO2ux_dwrOOqMnT4bdLgG8isreNbyC3O9ZGEhLe_rPLiqaDyP13Soq47XP_Y2bx3EcQF0Hir1ozLibgEJPit5XZGgCMEebS73tl0UCHbPxk1rRla-q6cxTKUifH2gQyPWmBazRNZwYnqtbJSr2FJRf7IlBsdzkgw4nUDTPvv9I3ljrhxMQAKnU_NsPyNc1DMzPIk7H235LL3R32kZ_oM7E\"}"
  },
  {
    "path": "genesis_data/genesis_txs/128KaPgVaZyrl8Vuzt795ZlWidERzih15pNDAJgahI0.json",
    "content": "{\"id\":\"128KaPgVaZyrl8Vuzt795ZlWidERzih15pNDAJgahI0\",\"last_tx\":\"\",\"owner\":\"xqIemNFLINya17CYAzAfoAaKLbhv3lsAaeYY24JFXEYQsSzHj9LBSOi_rrt1wy7CTvQbqL_jEWiafUZ6nXoTqIo9Ge51ZZwaTkDFc71AO55SyFScpeXNVow2FRXOtM1NNi4Q_jvY_yUyQlXEKWfAIlq8T9a1R853TfM2QLjFmMzws08a2npoRQ1oVeEzijfUaDO1ghfw2Ybca77T0k2SSlj4R3Nyje3vEvhIiYrXv1h8bznJd0MGV7ru73GrpkRChLzrXzPxxpkpSSST7AIq4Pmpale9Q2lPVqC4rAU5yLpabwxQ49yWElSvRjVihJx25RJzE-7e9OF4jMKWYhS0qlaMXuVJydcF2bjH-C0mX0-nL9Pt3o2O792b7-l1jSgTFt4V3Q-Sqy5BCTxq-7AtjAQ8xfXhv8BmMsdxZS45ug2r_isDvcilTxiq1PiLhob-XW_gSYWnTVRYYqau0msJ1BrUmQkOxEqybzqWRt6hfqEzTdBFYskHw6qTFnMoFaS0l4ONoq8w3toGpGN7iw8ihgennaLiffG60KYp9dkfbRI9QUUoUBqyeDOj6I6HFvvtVO37rQld6VhMNyBTVYuzLuPPvifH3cvP5vkwSXtD6TMJnNDIcarOMo6Q2IByKKkgwQLnkt0GdfKWH5zi7MA88UO-AVXScvBsBRTFG6CEtwM\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"dGVzdA\",\"reward\":\"0\",\"signature\":\"rAKVe5-tvBafeBTNLAc7wBj9vYgETSSttCABLP8_1noDEDEV-CTmiT-vti4OMKe3dWhWAIhiPQuLJ0PFhxXlt8YjnnGXdGkd2AocbLJLYzHcjv_GfpPZk1LFQRP9aE-04MJYwUGTncCrw1lTfJTW-BXr8qgv5h6keHbN0W3ryFa1rwtnV3cyHSB_wkGD71FhNLzxUeoF4S01TI3hadBjqEAIzk1kRq1Tvf_AQ-WvfK83Ywb-N0cbE-OKRGZ1Em4sB5CxJ4_qQ9xg1IxqmRclWbiP_cRJgpD6ExSCPpcy7gtsC-jHE4SKYfSQyTmUdGmIU15lLtZ1mzrh49CcVczlxNgeNsUIQs_6Y_GAhPNKVPe6ThszP8wJgGNKhYoBIWXA8zmK9T1ucwRdy2j7tN74YbtHr1yYdDyqbryhVfz7FOxFfSIkOJimL4u0L5xjR4JOO4YkMJwdk0LDaQz_Z3pTC0GXJVFXfbIz_fcCp5gal3-0N6g0bh63G4OxQf_2Mi5Tg-xPdEAZDzj5vLolkCXi3CPJEC59EeWwpUfveDGTiGjrP4lf95cWUeUHmZBCRhNtd0E_LFPHOzOizSgRmBEhiL2gvN5rNhGLrqa1lom8FJf1n4RQ9QO6fZxcbed4w55MrzmsS_aBRS26MPApSe8gqMsO0B9HhIT5OoiXXpL-5hQ\"}"
  },
  {
    "path": "genesis_data/genesis_txs/1Lwuom2q3FFI2pZz5EYgOzJRymgVWE3F9ZIl4vi3-kU.json",
    "content": "{\"id\":\"1Lwuom2q3FFI2pZz5EYgOzJRymgVWE3F9ZIl4vi3-kU\",\"last_tx\":\"\",\"owner\":\"x03xLiKfK-UVdZvoZY2VBQubXjKHU9hpiF-HJaK968-C5frGONFZ_8Jlu4DkVLpX4tsx6biIG3q49ruXU29SxzFwpT_0eRCgcjsvaaJsFENlllCIG_kUzUnJSJLbB7VHhJFnq8tbh9Vitx8SLQV6Fi2oIFw3ipEL3hIoiO_m-66MAGCeVioZd-C4_5QpRrZikcpfBG51xFIbYBo73C-gCuWTA8K5ct3PLHnOY35T53vExzH1WIxSMI27I9kqUAyjx_BUoHuSfN5SmxC1GRiE_3y83-F6TXSk6bBRfYZy-vGAP3VgSc7CTKb9ikzI0B0ILTH1O3EADY7W92_uI6A5B6fAcD-tlQMeNZwnwNzEye5elOnaIGMjAjt0j4w58EIAGsBlJ02rosQXVfmMq98uFkwPhnbiUvPKph6owj0xnZCrT7sPaJhd_MHFS1yH4LvOpaECnvL7qy0YTHL6zBH6two8S8apZJpgyrSkh2OEt-JqnMtMUpcl7dFRPa2BQJwV7ZN3YjYRg8ITuImLBT4rwk2inYKOIuvMsi8ScmjrW0U52RAbYg7oqfWlcosVHOOJe-13d5DhNK6Kvempbvask2kegHOMHHu9aUzL9SB_RuZHSdIXtzxfFHzB7lRTOnrwMMSoENF8-C5Oxhu0hg8hibBLGn0rnVkYtlVP0WU6JtU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"aGViaWt2YW5kYWFnYWxnZXplZ3Rob2V2ZWVsaWt2YW5qZWhvdS4gUlItTUstSy1FIGVuIFJvZXRpZQ\",\"reward\":\"0\",\"signature\":\"IuF7qyrTgzyyzmxC1FI4C2aanrN1crtK2UVOjah3qoqnZXwTwB-wgYKpT2jeV0gygCG7YlQoQk4qImhWN7MvWVklrteNXwcGB_TtIlDoTZ7QSvqm8wiKe9Wy5YfAY8-S2i0R3gdC6zFAIl-rYCoNaBz_XkmFQqg_3eK-OJF0uBexCWWWWzICp_dUrFhCWmZqE0STKQabLmOB6ECUTm4pxh9I6YsU9Kq7kYbDNnUZC3YyTSuhfbYQzX6hPM4LbnmGNygjBaZxtjh5x2laPSlb9Wcujrr6LA_l95DACI3cORGVEG0hVvjdvBKMNZA_dG9FbIFcM0mUov8eGZ27ij_d7yRdxDADwHleB0QLZTHWGPjfnztphI8-ptdtpwasdwC5qFzn5hC_8VQDeyWFhOU9P4AGHRLkKUX3MmAPkE_MCmV4Dxp2HpqF9hJ1jic1AiNBonGgvmieaYLWpS6Z4T8_HY36R7oOkMrLtGt2c1HfXI6LQVM51dxWxG3LI9o_XyyvEOn-Thj5gZRO72cmdsCdKmEKHliR1oiKSYl0SvQmhlMR9SvnjJP_p4lidPFIagmn4ffAHmD6rqYgrOJ-z2keYrx6heasZIWDDnHh6_j-yDfTdEAQQGsX1ZGV00oi5nKjYv1JgO8CwsYXc0UFHCa9RNJOuXZQ_tUxMsqqimtoFIs\"}"
  },
  {
    "path": "genesis_data/genesis_txs/1Q2plP5JFTLwdTC27VfIgDJ-ri5h3mVsKxZploTrRmQ.json",
    "content": "{\"id\":\"1Q2plP5JFTLwdTC27VfIgDJ-ri5h3mVsKxZploTrRmQ\",\"last_tx\":\"\",\"owner\":\"2v01xaqM45uVGvECKmwGyLS-7MBgIDATLtD9R3fbdEGi4MKOAnLznWHfrvQ5Kev3Cf-9s-dAJf7ZTj6hqOqa0So7n-S7ZykkYoo1seaYXAeimvwrKNxRDIPmqggdsjcZ2zBicQPddomRZziATXW1R1_mdHpi1uek4YC_Z-b3SMPI9BP9OB2882pA8RTxUQ55WWsKobhi8dqET2VbgUmap4nX5hXVwl-Fqg14V2ZM7hErcevmXXi7AACRYTlD9hay4d1WdjCPesMIkYrUtiCoxzKtWWpbqrGfsGoDVqnZsJVToUeavD_4xy43xqIxLRiQCx6YObfnJPqwCmIiJtlOYK3QA6Go11DoOqJcgfr2JXNhB68iG3psieeW4xAqqdV15GBK7O4TSJdfJ1zTIXs8kbhYsP-Y6Vo6JnyHyuSrpZi1OPoBPcdPAlFuBG1Qw8xRzTUNy-ud3NCvlD_PKKAHQMT4NsXpf5Ra3HqkiPPDowCv_fnhUZFhJhRTTUoblGiODbDHBa1n5__zQqJZUD74NmnIYsprzOe52LDfYucRt_FQ8rhiT66VIXq9baKiXpProkkd-EdkO046PTYg8VtRfOZrMgHxnJFKNOhmdrQ2td2lrZzzfbxsTtoIhACuSb0a6lwBDs2I9ZFzrd6rOwDk3eTrO4rEUI4dicb6CaWikU0\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"ZW5saWdodGVuZWQgYnkgd2F5IG9mIHRoZSBSK0MuIE1heSB0aGUgaW5maW5pdGUgYmxvY2tjaGFpbiBicmluZyBMTEwgdG8gYWxsISBTbyBtb3JlIGl0IGJlIQ\",\"reward\":\"0\",\"signature\":\"f_OoFml9TZEwy-QR1wfNjuFdeu6sRIu6bVJxbRE5qfKbjoORL26pf63EOh093-tO2ISR0fPMCmoRwIhT2ITaJeAqFF7ABR8klZ0IfwVGJqHvfM3-_qiuQv7CNgf77YgsRcUY01cH-Zq85bwL9q15woB7Q0HsaFHaHqc3mK7JyzeABlk0QAD15r7f8AfKtpzacM3dywukhKMzrY3E98v0T775U-EJzqib1tj2ElLP0eUndR37k1hUCxJTH-he95lGIVY_y8DLNb7hKs58PAB5LbvB8OsKYX1kIpBb9jfWLLhulw144xdoiZZHVS0Gt2DZLQbzKfLUkoK0pyUWn4HBJT-niJ4MA4CAG8mrMNPwWlqgwPO2z-d0_MeJvEO34wmNUYSP23R8lo_6ItMdQCgJjQg-35gLTX3_j6Ns4p0FsfSPnLA3sEKjn0-G0pcm_o36vlZELVXfx7zGJO8JgcIgwrXN58I2LDp0A9fJcd2rpwx_1MEtRew-t49jC0PcVJxJexKKAvy2HjkDoyIVkyzCWu1gr_MCxk1dPYRl8niNGG_gHOJ0zX-XzpTBgdC7Jc8ffXyRKDlqUzB9-EL5DHjxAGJ-3WSanWHxvw-hp7ATgNs_UB_tQxJvAiGo9w0vpHw8j7NShUb0y6r8Dv1X2Mrqwe59COBq710fk_XayTvoAbc\"}"
  },
  {
    "path": "genesis_data/genesis_txs/1QoMjs6Q3XKklJ9LfovRmGbe4bAy9xY247JfDZqN3Eo.json",
    "content": "{\"id\":\"1QoMjs6Q3XKklJ9LfovRmGbe4bAy9xY247JfDZqN3Eo\",\"last_tx\":\"\",\"owner\":\"vtWywWNGTQuOv_MggV9RJ9GAwkfk8VSkh-ynlSlsIj5NlohTGfy9hBEdRSbqRMc5tZ3Jrn9jbBQD8HYZWZTYSymUxJW1A41LBmGrJZOTt3pLDycvEK89YrAAfSkcxkEmhGBgNNqPU6F74E7CRXQfW4i5_vaKgwmKCDe7rejLFzM9c4n1uO7D3EapxWMdLKYTRZIu9Ux_6H5_kC_yfwHy5Qrs9KxcoqBCZ2quT7ITGHifuk-190RFcTvmxROJOcWUthzJdxRphVPsY1cDir9zxKZHv8wYI6jvXCPCARiE8Z7URx4jHIcXjpyELv6BY1x77Mk4L5UPohpsGo_nR0vXSvTdi6xEc7I9KXSmVG2k2vEtpJMmTHVw4tySu3jquIPWFb2E6ZWFdTIz7Xf9f2XH8QMc-yuVwjoei5jaK8lNQoHZjhMhtGu7I37xp_cMZZ8leDMlVqIJ5xhlWc_XU8TN0CoDB54SYz5f8ywwoOebcm_-2GxsrrYXGc0ob7u-19RhcKqaRUaUSIjhJBV-_jzmHQwgDxNqo4SIHGPZur5we7K272Z2PgpKNOsMt8-cC8Gh6PcEoqtm0P4-u2apnbpqn4SWmIuhB1exr5kz-_OEHD9l_aGicfHfAEh0tlKikEUdX6q6tT67222HL-r8t_Vqc842sD3xtgQtvXnxHBMWOI8\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"TmFtbyBCdWRkaGF5YQ\",\"reward\":\"0\",\"signature\":\"WdDVgogKi-37ITw_xKKuJy8LPZEB2UWQzRcN0gnVwYMWdQBC_14ieholRE6pTHakYlHWk5L8N9vsGRR-Ig4xU0Y9e3ZB-jGCyc2ybz5xo_xJNyaxkpgNkfs25Hqqd5rD1stimV5-v9N6Dgqbsm84dgwSBTheE0ns7_ipNEqrOUllVa6rItEUFaGQKMBjp-fiNvdI67jWdlhS7PjACp2J6ztj239EyEwOf2BTBWUnsAX0LeAwJCcqgoHe6t0RT5hnYkyX4oU0iulYmw-uIDcYTXVXqhxt2l5taTNSK95fpn6emtTJqwl7oCqtqfEIPn_FL7-hrd3MMDMuFATEnU9aj5FzSLfebCzaiCIP_nXKaioqsco_YIKhc_LeX4BSDP-N9Z6TCzU3cHtCPkImwQQcgbBeL_1XiEBYPwFh4eNMfvrHOIK9R7REYeV5PLNu3NyS9aujDrEAO6nYA2Klq6aYJy-GQgtwowDVu7nHRclWTB7pOpUFC9peplgG5nsjVjnDXGLPK_1WXvPPnjur5OeG_zH7Fl-Mjuqo1Gs1ecnAO0HVo5LYfvCXpKe1vixxMakFpMh-qEpbyKIeniaZJiKqVV-s1WBbuX7lCQTCILeS9KojcFZbWqgF0eYudltacqDgWgHMmV5BrYBaPHRc49FES9VUtMYIZ0hUJTfDuGPx2hQ\"}"
  },
  {
    "path": "genesis_data/genesis_txs/1nu07yo-0eB5GLxIJzzlxZW6nFTFiZ3XCDobJUcNyP4.json",
    "content": "{\"id\":\"1nu07yo-0eB5GLxIJzzlxZW6nFTFiZ3XCDobJUcNyP4\",\"last_tx\":\"\",\"owner\":\"5_f-keUEGP9XXTQu9P-E4F9Vqe5awy1xjR3gTQBWdJxdcAHaHl0CyFOpgflqDQtMniVKmZzBx2BPXvGdvp7-vyLEExQdHt3BRnhkdyudfoS1607_-DRnzEHLXneg8CXFFs_Mkx0ehXBpKiS712wtIl7TgmxHbHY_tr5-E-vRsg14twe1DyHRxLualQfQX2pAkViSvQtjvxbRqQIt_v02k48EX7Z-RvMddbeTIfujwl0c9dZUfPGWoCQsGD7jfllqglgOhOgbKAMo-fVPoqs5U0ETyhBmci6oDpanjFM_tfDUIcJUD_lHMQRB_MbmU5zcNmPizEC7Ljbdmk0uLHowscACZU6zlFWb2n08G6x-USnDs8FncyUAC-vXAGqxN4avGiHuAfsbBxKAKMZTZIXqC-chrv66_UCGG56v3KMdYtUkWZ6dkNYPaMwT9C6kH7DRj4iVju9jt57bvojty6X6F8C5S7e7RJECPnpjn4sDWRoux_QhToifFlqduSB0vhQrbkgp_mPDbhsqj0jTsfhNlUGGbdPugSZAcc1aTBjMxM_yVf6CY-mSs0IgLBIOIA4n0ESJhXaMFJc7IwyUxirx1CDsULvpIRlqSThIJTr3Vl2ICTYSN_S43lvJM6xKcLk742n3qIs5QsNH5i5KFVi67FozfbrSXvcCUyR0u_1tEW8\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"b3IgZGVzcG9uZGVudCBiZWNhdXNlIHlvdXIgZGF5cyBhcmVuJ3QgcGFja2VkIHdpdGggd2lzZSBhbmQgbW9yYWwgYWN0aW9ucy4gQnV0IHRvIGdldCBiYWNrIHVwIHdoZW4geW91IGZhaWw\",\"reward\":\"0\",\"signature\":\"B1_zxbc8AZv8rLoOzYWfmjn_GXd7zzhV7DoUi9t3Q6W-1_HBH4E4e2i-WwaYNzbszYetkUNo_bAKoss8R9icdBPEt4NdUaIjiq67_V7SnYpfzOIg6tPMw20PYAl69NCksflJnM0ck-TYoQ5-GMzOXNBP59qkvnl77oPOhG_2I4NqlJvPhpvqez1cf9CSbFlwTASR4vNKkTzEIWXJ6AzCjjVkYMXdne_DWGeqWpshMUK1e6po916tjxGSg3IneULZZh5dXj0b2hodaYrieAhBLb7XehOJekSoUvgDIfldb-h_8hJtegFYFVMjzA7h594sU-GNXtD1wKfvPDfkyQB4LNWLam66xi3U-bcXKAJAPwgGWDXCE1puBl33N4oN_-t2e16WR1rBPg7zxnmfXZ7TuVcHpjbkGsUyGarauX4JwH7PVEYvwwcFeQMqX1AC4xiz3LEJJU67g3kEpqe6UlhHWMTtf-cPa5ROAb6zfoI_eCeN7s6gJgSLdA_ZKZ4eVtPUS10dzNZE0VHjs8Y2dZJOOcEFZHRPmcwu5DzECDj8AzPSgYbFtgNE9uBuVoUstcqBDnjxHALQZLJf0c_X5FFT9HiHf-m0atJc0u-1RP8Pns25xqLSXH0npW9l9Z91KxxqKBhPnlkgKl6e1LAd3rFfvg2NTn0dpoeO6fZPOe4j-eQ\"}"
  },
  {
    "path": "genesis_data/genesis_txs/1qVeYpf2sY8Qkz0iVomVPVb15NA7QUtF3eFDoMwa8PI.json",
    "content": "{\"id\":\"1qVeYpf2sY8Qkz0iVomVPVb15NA7QUtF3eFDoMwa8PI\",\"last_tx\":\"\",\"owner\":\"5O7AEF_36y9i4PDbTRyEFT6W00feGSwzDygn_prpIi-fboy5m9s2mIYiXL8XT9HDh4HE-4PfSjv0fFZgEPRYECVkBlw3oD64NFrpKpBg5HkiW8KkPdUrLvennVaK-rN8rgqXk2IcP0Sj_E4dkpxc2SW0azi3wfQqGzKAPXreQ4xIVuKtyKzs0fjoKNH21Id6iTPPEyZnRFAiuKgVaf-HsltAPt1b-zPO2-FqKUKXn1mTZuSYYupi_-pgXTNESaVdRnKILwsN6oYZ7zRq1_GOzEEExgYDMjWS-jcXDCJYghS5EECjMFAoQBOlnux4Y4WHUZ3qcCla3Evm6Ugr1HvlEOsX3CrMgF-rqsP-P8Wgst6H-A5U5F6AAEt938SWm0yfU8wqoShEzYnLmsBuBdH4XQBdqH3SxMVwNxhAdyA5bt_BNPtLWmvrzR-JlLw86IZVqLtLC5M046jnOxMeMvx380lvbrpG96nYcvBzH0SXYZsmVbIf_Vc1aTrFSpJBRQPGHGTl6LcQCwWY0nHmKMKpfuBYYyNWAsSKxUWXLeGwiQKe5AJFqXb55Ul4E8mPh3gV7MNli-k1kElsDYudaRwGzZHXblMCbbnn8AfMtVlNGtAAM8nK26pHq1WIzmB2F4Trx-7Jwu2MZ80_xirwUEzzjnMCUMeIeVbwMJcHXsIVdsE\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SGVsbG8gd29ybGQgOi0p\",\"reward\":\"0\",\"signature\":\"QLSaSaH7FkEftX_sV75eFzdIwnqr6fxhlNXqOhvq1byC6nN4K9dpBSRx8azo-ybr2JxBpXtDkJ4jrcgPCFp5wFVt8TbvL2KBOLib8YCUdZP-zFssgJfAA03Dk_SKpQbwBGKkQhhDRsmhB5prRP2BPThxK_BvplVfVtsOSvfZCZk-nLeFmPFMw4eoqqEczHtmd4hvRSAKzOC5sXnF3HtGGDCX4B0IRwnQV-Y1Cum_97bcVWWnlzxQR0HgKyvWCtmbCIbWDcqYAEC6CiilsZ25gorNaEul4YD7SuBk2SVi3gx-jhPJjeMRnC6GXLOozCS3Jy9RJMgj1Ga7caZNzabprZc5ypNC6lWePqN2cpYZmXxdmWUDaTQAMT4vk9AQqLhi2Lh6_yAh8KLqDQG7RbWKwTAziRtUIjvD0AqzwdqRlY6MGb2CQ971CH3iaz6Vd59a6wn61xSUwic7Ql_bvT_VrfnUNpkXXyd7ywB6aDZbn-FUbvZsadISdqSDj6whxL_Fr2vvyiF4LqMu-PDwn4BtRFOQVLcIeq1H7aXFxejLMSiCduBiUVMllfau6mjKeL4wLncw0EaROx6_whKaeipXgu0Be6TvLy6zy6HeUrGT3gshwoM5K0nICMinI7dH6TpL_o0ixjmcOZ9e7J_CnIV8OcGGtTO0GEV5KVXcaXfgTNY\"}"
  },
  {
    "path": "genesis_data/genesis_txs/1xh_NCIFYbprcgNM4AVvZ47jRxsQmJYvCG-L-oEK4iE.json",
    "content": "{\"id\":\"1xh_NCIFYbprcgNM4AVvZ47jRxsQmJYvCG-L-oEK4iE\",\"last_tx\":\"\",\"owner\":\"uhNEkI2-GNJBgol05-nhONngaNhTdWALp1vqqWxPOfn4yYkX0gsFHyoystrxcRq_jYgAJQ2DQXjsB8LgNfDGjAfLtwam6ojPxMxwJfTPXw1uoiX_27lf78YCa92w657yUQu15QzU_i7yrs8oTgGj0BaQ00vZkCQ0IEX5HUoS_YwNm0qZ_pEVkB_fWfSVX4my8zffFNxaHZFuvWqEwat7Blm3MizXMUhQAwJi6dNgK-f3K7xgyXzPo0XTR-iwWUOU6DVPG9W630WzGXtF8VXIL9jZ3rkwMBPYmdmdwTEIRngGKllnXR-QmiiaQmWlXfwLPWFC_V9pLRZCa8i7mSGS4oA-AcPmO4gkUs5YCW9PkXxbH_l6ApszO7_JtgXPYp5td1-7-SG9Eo0TxV2NestUFZ1t91fUbI6ZYPA370PKuBstXo9-ARX52WqONEVgXWt45b9vUiplPq_EcugErrbMiqKXGtdrFoLzAPrM12IJ-2jEB28ecLV_8k3QgLs07gKP0aWVFCnD_c954y30T4siqbjy9IDdlwaYXXnayn8htqJMKVxmulFtbIJNSXKfafxvjXwUTGDUMAzW2dBqYZLj25bi_D1kZhvHhzHLJEyuO-xKP1YFoxMGfOVYTGaE77bRGer0IX2NCWus55KgmtOr5owvu2orF_YbiZRsCeJZx80\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"WW91IGd1eXMgYXJlIGFuIGluc3BpcmF0aW9uLiBJIGJlbGlldmUgdGhhdCB0aGlzIHByb2plY3Qgd2lsbCBzdWNjZWVkIQ\",\"reward\":\"0\",\"signature\":\"IutBgZspQoPIjEwDTn3iffzyUYqPdJPjH_PcF7PPoW5uw-BpSjEGJT1m1c7UJ8NreC2tF_ybEB8pbv7OYalosV2lFOa6ZzSI8wabY8jlBC0CuskiUDK0wl0KAMXlM3tEJZlNGtBqF7S2pc6E4Vr9LuMHVky9SDUHoDYv6cjDFCdultInHdUOAf6Eo5uu_YKJ1lRIGxB7OQnSjBtGIsvx8uX1tuSvWFnQetdA2g7MTI-Gl9DEUD7lE6GKlvkiHC5BH4VbQK_Dt8KYrw5DPKtrTV5m_DzhaIDvGNZ6NVrrBKFvreJ8f0pOYEYFNfbHA433HAPgnAPMwb5v_yGmlim2iaM-95rH265Ox_kfNB2q3d2SAhlYXFQ0WDTWCJbqefWuDTocoVygX8EgUK8GcAVaD3972wHJSqh0OUhSxasi2Lg_P6mtj3byBE38QpXcBYbd4HyQ_e1vzzGHN_J6ixBv3QV3r67_UlglbreKoK4k_mT0UyoSDzoEORqpamN2_5IftfDHMEnoF8_xslWuSUyUpH-yXOmcXUny77hemx3kbHZ-9IVUPWdC9sxNMeKwucG85mb0Wjlh5f21AdciR_LJsoLYs-zWYNeQByCTZB1ao9VfqJvNZ6NcPtkQP4Fa8zJtrXBujpxDYncvV_FywT2IhsUIZrm2a7w615n1zpy0jrI\"}"
  },
  {
    "path": "genesis_data/genesis_txs/1yvqJKdnb9SRRKoBg1m0kWAsSh9S0R5r9T9TE0YHfRQ.json",
    "content": "{\"id\":\"1yvqJKdnb9SRRKoBg1m0kWAsSh9S0R5r9T9TE0YHfRQ\",\"last_tx\":\"\",\"owner\":\"vD50Itn4nhfFhH7NuiaeFwMktAMwFv523KbEJfszd9EuXJJnOBjqjhU-N8R6qKufaMj2hzocXiLyVVRcl1bfFOm3GAunIlDl8jspbFCal9U74Qpzq9JHpRyrtEg5gwV04Pb_ZhWkNHudrQwpEGAMvNNMIGjNrP5MfYEHyS5Q-_8RcYUShr3MG16MPwzfxGi0e2iMnlKmAAEW_6ZpbFs9C1aAhV2SZnS9uQD56LbboU4JgN21mdTvwhZBNI0DPL-dpYNpSb34JpPo5PTfT7kDFs9Y_qg7x787qK1n-n9kO6ciVi8QB2ZeMJALY6UHdMFFrvyc4Nuab4urNQ_MyP9PzPzOfLMklcDbc73XAUiY0-3XxQ9wa8QYn-0-pRcNlcBMStx_AbAb2WUabHDBjzxfdAGzBoouPXStj-h2yGNHWr8pxf_ylz83Np0LBjA8wc-iw3YT-2xoLBkeyFM6zNyVEKQALTtgiKaWBVUD1sVYBNaABvD7r8dRxE9aeGiC0c2aTYdkeKeMEAf4avythTuH_uzch2OO6CeqClGcpshbw7MbzobI3BvmeE04fmUkxEnWA769BLkUyZWa7hwYZ5vOVUBgl9APLq42Y0JbxJjLly86F10jZdeXmRNR-hGpyNINiTJLg3P84irzht73FUGUEaWOkiVgrCAviOOpctb5bIc\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"b3JkZXIgaXMgbm90aGluZyBtb3JlIHRoYW4gYSBjaGFyYWN0ZXJpc3RpYyBvZiB0aGUgaHVtYW4gaW5hYmlsaXR5IHRvIGFjY2VwdCBjaGFvcy4\",\"reward\":\"0\",\"signature\":\"nW_wLwnN1u4BMz7-3uQx2kK4D3WRBtYC8jO8j_gT-y_F95HGkl95tCdQO-HSO6q0ij_p5_a73HhEN4m49f01NO6CaSJ6dyTiK9gjqGSZbYq08FrW-Yz22K1_u2_IcYoeE-dn1dn7DFhjeyChC2v2ToVgNHT7mzqVpKGeVYUbk3XDFYQ84zhJZPoR5vbaCoXyN3TnHqu0qiZbkP9CAlS4jy9oy67j02BkzD6VM_3y5wm5Knhuehp-CXeTj8-BQzxk1jUjATCTjxLKYz3vtjCBCQXAWpy8JwQhm12nnW9KT1C_r3pQvr84odIge6v7mJXAvwLZuycxsSqQhWi5kBdnCneC_fJ03wilDKo8qFjTG9bVEearEdqQDvKxikPE8wz4yQyAisk4gMned7wDPTHMep2JlqFmLrLBIB6LnFsURBJXtPPu47N8U61vL60JcAoHvzdmzNisnI1iLthrc9QHKOH3XQj7ShqicY0wvD6P0PpvljHdVMuoiERkFRJGK27RjpqYJEc4rpS5lVsuOZjdOmeUpbLPkTl5MiuKfTKg-lFsh7bJwAZqEcUfkIu8FxlngASubVxIAqhdDRaWA5pwYRQz98g7PJcDkzNYISyofdYtH2ALlnc9FNfJmDrql2puGmlMCxQkCw5h6urFInuDSbi3P1flgs3GWMHPQqBuQpE\"}"
  },
  {
    "path": "genesis_data/genesis_txs/21Kfm2Apa8QWeqdMqyQAcxg9HbiluZXfQFu4-6xe-AY.json",
    "content": "{\"id\":\"21Kfm2Apa8QWeqdMqyQAcxg9HbiluZXfQFu4-6xe-AY\",\"last_tx\":\"\",\"owner\":\"16WVWsdscV-7Fv45vnbRYVLncTg2ClaarC31n5sAS8bWSal8XXJWFvZy0gmj_We7n7jp81pnHzaC9m2eia-0NPFCwv8rMJ7YZbhTKGrFJeb0fy0B3MBWuWCl-oWBgMpH8Dz4OtLGebagaz7GF66-KZnsdaERsRJ76TqCTfzX3l2H52JOXaZm4k1AzMbkzyWEOIECUhKYZK8ZxVDHAuKpItBJKZ8tk5ewDXYMEzsYNAV-9xyuDopY5l7RU2T-xVXg969oy2hRpCxkVQ3EsZhmMEIbHDBf6__jNW08xb3ooSiEIUXCIZ23B02DXT1akIkVEkLcvaphzuTQXCHzumr9UVvMc76CMVJlH0VnswhwTJC7s-YHpetQGxwGwGSEKK3zCrXlVYvE14xLX7k7zmw7vu3BZEtO3kNhW9krgdWHvNta8kJ_MTIaGlSzIrCfJ1xysKxUN05NZilJxIdW8m1PGM356EZc0aCBE56vTJ0hANtQ23Tmz93Gh5QA_Rf9AIQim6n9gTBszeiXjjqELScB4kWRxsneChHOH5ReD0RFWO4xJBKIEqGPYpixh0HcBtz7lIiudoSAnZxtT6NWKAnkHHCpd8CPm7aD311BUZMCmYovQqa3bu7gS2hIhH3dZ8Dpd_5yg9Zlmm8yeTT8crbzFDSE0qgZqT7giIqhiUPk1_8\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"dG8gY2VsZWJyYXRlIGJlaGF2aW5nIGxpa2UgYSBodW1hbi0taG93ZXZlciBpbXBlcmZlY3RseS0tYW5kIGZ1bGx5IGVtYnJhY2UgdGhlIHB1cnN1aXQgdGhhdCB5b3UndmUgZW1iYXJrZWQgb24uIC0gTWFyY3VzIEF1cmVsaXVz\",\"reward\":\"0\",\"signature\":\"C_5oVIQ9d8KJq0D_nGY0qBlzzg250E-jIH_C8YPrE0GksnUeL5-AcwVWXaM59cCmmJNIAAzs-iLm6MzsK-1ACkOFrs-hMZrmgRC2yY3ZJrFwUmu9sg78h5XcabH-byUrrg-tvUtUIR8UQ5bkLukY2-lGJh9qhjPbVsjpBobHZx3hekCI3_FNDI-PeUAZ5Uf-WJQ6-uDKxXx0Y07SroFVlqJ2Z95wqPJzJSC1l-rAZKoWh5qKGbPorfjsTVIkpsP9RqNlKI0MNQWu8U9q6JlG5jfIhLkpabJfct3YIYAu7ltSkS8rsgpyAh-WyPQh7RDRn1aDibxtRVyrYGL9v4aI6vaeu_ZCtYO_kyBMPYCWPYDkpVyMVBtoXKI_vOqMwvnq-Cs6h_SIHjtVtyaZdm6TVQWE4g8okVmuzJGc-wxPVEZfRQDwnvXzaxKAOPb1CH-_tLzzTrKzBPZYc4YGHZVNCbCqo34rcyXTt6783uC9r3yOsowg5eRiSh-QBpesCCFf0-F_ytfVdH84Ps8ci6PtI4_mMmQlD9GJ6OG25kfGVSZyaiUEDQ0px8M1VQqPUKIae9_RuqgEamGTUSG69JDgD_V68hLBkKUaTinlrrAM_s6nSGK0oN50fNfTIwN1wf0hU0uGcNsABsdP2Z3CGzcWw42o2z1x23yTHVnwZstnR3U\"}"
  },
  {
    "path": "genesis_data/genesis_txs/24VRr4yT-_fOndcFYtK2oSO-p9Pm6lNtzQv8E-U43Bc.json",
    "content": "{\"id\":\"24VRr4yT-_fOndcFYtK2oSO-p9Pm6lNtzQv8E-U43Bc\",\"last_tx\":\"\",\"owner\":\"z5VWuXUz470LnUXHESayM0oXjMnuiCfY5qfXqnVGK-pc8ckzW-kkhTBTDlr92mGSd6AkbiSQ2zz1Pw5moOVTFH3ev5qDfiFkCE-kWV0Ph2DJmr2XkKxP5OrWYucOwL_Qb14YPuMSOg04Pc1SHl9T0J8ruBFEGApfC21tDfTN5vDq8vq-T-gky8GkAWh9r5w_-RWBw6T6BsORzYSWMSKwd-u9KktLp6Qm-lUU_xdvWqLayZJFmwEkSiPj4Obh5rsaqyBC2DEMt_ha4-LgLzq_DWzlxdEF9QjRzQoWeWzw9HFXW2lJWDYOGnS5hnYj8eCPWQNphJ-I19DIHEF4gfu5sTofiEBpAvomH_BWsrlSihhfHiX0Z9zqm-NeMn3IF-kY5k-RvmN9Rb3adF6hwwqtQrPeI8qC3eyNGIhsgRrvtoi3vp9OwHhcWefo4P3TcWJx7G9mTxcAaQp03I1ZNtGXA3phMauFnLw6o2ll_i5TpllQMEzS-ZoKlGIUxN045TM7YjXIV6l6AoUJnvMpMqmyu7kCUBxWxmXAPdsh_NZLwmEJ2vnh-UVz7FoeY70fqNCTjf2M6fPwIdq9Inlvqfd2y4Myicqx8KqQtIWRlXi11WodmYHsqx6GWuLudJGDcnVYOfFXwsnCAa2Pi0G-G2B4v76j6IHHp14e41p8_2t3Oe0\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"R29vZCBsdWNrIQ\",\"reward\":\"0\",\"signature\":\"LMJthLPFOTy0-WSjD-w6fXq5c8Eosl75svD4qdWs6QZO0vg76GevKz76R7_o4u_8KykJCHXVQTaSKBohADT7yx4-iPcqqqQfaSbb5hosfh88a9w8AgCI8YgVT74hG2KIvlAKgZX_7eM1mnuxmGHmHuEmD0pwGZnFqLmsXu_-iNXflIelMy8rFwBurlQEAIvwMPQqpwCDZ3syX5hovp0_rOBvGgl8hIZ6Xi5p7XigxNqyXf0t88bpVnRJiOZhFRfNYug5LLfZBX8n6j7kLM11odnEY7Tci9H4FByMNLzUGAfx7SfMdqi8dDlVnK8SsoA5LKpttwyRCObnBwXMXo0_ZIC6sW5JzPgDMSKy35WS822ObSCmz7HCNwlMCPYSXdnw6kFihUcSr2jXxkXYnkjWiivNIyhdhJKubJ1hWyy24QcCRKVe3XSpU3kFCaxDw19SgXvKr83fZNALqCbEIuF4Bhea3QBD2aOW8ji1iqRv9jRcPnoTg8AKvhT3kmLhN15h_8ljpiKtDJcLdiXLodfJqk2qbspeAAQWCKF7ptFHXevn34aeVzP-n0Ltgx9mjjUDs9ZFQdtaiEWBc7l3Neg1ylr9VX4SYzk4cK5TX821STOx2WT2R1CUIpxi1kKI64z3CIZpWaryH75rMGlDd3u6XZc10asawfy0geIQNte75B8\"}"
  },
  {
    "path": "genesis_data/genesis_txs/2vn7V0FR0JMXrVbj3Ofvc_2nvrFYCCpRoFjc7UYpJcA.json",
    "content": "{\"id\":\"2vn7V0FR0JMXrVbj3Ofvc_2nvrFYCCpRoFjc7UYpJcA\",\"last_tx\":\"\",\"owner\":\"ugH5BgNNgeguR0kTD6mtiwo2RdVG93r5TRq_iCZf78cCsfWDApiQldW-891CZ8Zj45zDJoKCd3Dy5Oh_M_UaQShjV2ppXGoqepE96rep96jvMPYgb4Mc0ndSg8J1-5T_xXdKA3sYBaQSreS0upJi3Ixil3DRbCQWXgwI9sMfI9Na3fmWOf8oq8R96EToiwDE24sW3IKfFeNQpfv9bFzBenjbbm77Vg-I5Ymu1RFWLqXAUp-TYnyn5m9xq8juk26gPo1-Bw1qWzj32F7Jjt7DueRGDeOE5tG7g4vH5BebY88RkAuj8n1B0hA0NbaJsnoBWUuKs7eCqWsmgphdvujuechU-BbID6IAQqQz0Ef4_iAcERFEywpIeWhPwxUGd7wrivV74jhr83hf-QuNVKAkeJEiWvvvrl7ruHGRUvb3yeMPePu-VdZ-f7Jm3XjkJ92lhwpVSz2KSpp9PDQQOR0cfqvUch2ml5yjW9Z2SYjncAM1iaLZcprqvcZURY2irxOA1wxhSISCEPfDluxuAJ9x8McCXl_ewdBiKwHJA3DcSjplK7S_fPQbjfbzh6XICK51bHO0i_UyganttNoXOwt-n49NYS1njtIUfSgEGn-mbl61EMMIDg5NN8Zj0ElkhU8v31CV39SG0SAyvBYxKOf6onWKSMnxfyHsxz5I_HeGEps\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"c28gbGV0IG1lIGtub3cgaWYgdSB3YW50IHRvIHRhbGsgd2l0aCBtZSBmb3Igc29tZSBuZXcgdGhpbmdz\",\"reward\":\"0\",\"signature\":\"nHX4jZT8QHIjRtFxBnLfbFeh-UkhGBp3rL8HDhUaPZGYGDGZACc2x-Y53QgsWg4ovfxnexEgUBdd2icw6jbui9JkcvCOBMlwKWa80JMbX0ixXkWdJc0aiBhleNS0hcuX6aJvEn2rO0qXGFR34y9UYjUfv-jP_2xk34cq2dRXg5VxSkQutm1iepy03r1ddbxukHV9y01z38icg8JoLFB89JcfLpAqB-hGtZU3JMxgWFd4sC8wWJSm-fMXeUF6BqbUPAsH7Sszfnufk3qbnxFoYo_X05TvCE3dw4prFMt6WPtLP2TLLtDdEBJ74M21y36ySaSef2eNIx2SRK4doAMwnLy125AhY9t2X7AzR8ThU5Qv_uIaJkH7Nv_sCcJ_TWjLZKEZVQUfUa429havsqxA8WDSotk90f6W76vjXd94AmAfDC82A8ZRgsg0WlQVM3wOJDjKqSGCbyekc0deHzfQq_flO7_U5ER0sE4PwEMez-rc9bG4fdr7sk0yfVPFMpmbMdE3-EFNX6dsKW5Wm-oqUEmKoACYo9PYOXMBeqprocuuQO85V8VzHBiiHhySyVMSHMOxL8gV8LCDInhxOaw0UXw_Anh-ISuAGkxJEhCTm_lpvChvDX3ST8t_3AVnNEsQSjRBCR6EU1B7cc-bXybty7O4lCnkjbegvb8qe8fyIeI\"}"
  },
  {
    "path": "genesis_data/genesis_txs/328-6fOVCfCid4QTxHjkAMkQLMHZgDg-hZo5PnVfp2Q.json",
    "content": "{\"id\":\"328-6fOVCfCid4QTxHjkAMkQLMHZgDg-hZo5PnVfp2Q\",\"last_tx\":\"\",\"owner\":\"0tFmYP4c0e5KKqhpXErTOL2hCl3saf4CKWy8_J2rE2qJT5cybq_bxqeadgkmhMf6LZov_DZF4-hAmqi_sCnauuQz9f-oBjvKiZ_X1TMnU_NiluMLPcDj1vFJ3uNGjBlwpJhmNwUhKb9DWrrVkX-v9kDdWyR-opgCJsdQTV_hYXAfNjTX8bo6s0W8w_uhpz4cXk81DtJWg2stx0N752eKZJwqNK78bus65LAzoyaIkHSfQrqd6Mnj2KoEU78kUO87zofg26HLNgY6OIWFsyYCPulC6GG2Hs3f1rIQ9vy3s_2CvZEyeFBtUqhMQ0YVORyN74pok59jM3Qi_CULA9rEXkU-p3DOXESmMBjOLNtgDoMF4IbnStMfgICgX9ZAM8mbVCiFgU4bT0xzXM70Aq452NHpOCpzKhbW-2r2BWnIKlUJaOq_ZOOaCdhFLhsjS1KaQsEyR6TT_DHNuidGah5wVsOKK4gqqhumRaMSj7LYSY_uXJk2vJztoxkEegfemIsLfofTDe2WUg0vQKFoaHPsXlOK28J22Cxj3vTbIuN_hEFLEgmNr0k-6lwEme6Plm-ybsy_3iuq893KnJOtn2aSETdYNTxmhHgK85_aFPdj32yQsuOuqhJzZFKr-7eL17qN6IsyojhIRS8sy1ptMJCvr6Df8JtaoLV47fLw6unUas0\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"R3JvdyBBcmNoYWluIGdyb3c\",\"reward\":\"0\",\"signature\":\"udvcpC1kzqCaxT1afIPdWUMshjp4iL084qwdy6M_0PZpxiTUNlNidZYxhF6Gm29aUsRQZZmSaO7RhlCPunA74gPwOHnfq92QDz2KD7Ys4YDJDh6RGvRiUsK9JeNt8NpOevMOIWj_VZoF11cuSr3irdRZDqAG6wUkaAM21jpLs-_Q_9cNAwpEmf__BAKzG8w6jRmXLCeyOVcVBhHU89-_DXQMiF-Cu6d1Wws5MZ7sNAFSRMdFt1xWhhL6-3cpxp-bx6zOU_Qfo94cA6NDx0bL-6CUTYioLnWEsazFwHmJeeC40sZQ__5Hy6wBrVL-Q-n_7FhcKJ4nlM2HQh4pjp3VHMVzGvKKKdq96wzFUms3pI4ZBECbiPWAl8S0bDbySJD93OsLIderIdf27JRKHIM99Y4B-svz4SRB2h1cioyBOmIb-hKi_QwOGgCFnfgWF-iO4HRJJI7OUhNi6N6qoiRh2nLVcVoIw9dLm_IIelZP6Wsn8GtKROz-ZFDLxBXWudKPPGj5qB7xKIye9a6KN_KbFGJGe0oNoaDqZXmX6e6Z24StPiV7pErOfleZTHB-HjCqMySnxXHX0wrENYx6k2yHO9WvlP-cPY7sFI28U_t7okzkCTMWKclA_R3oGiVjqvgvl9CahUVKnqyqb6ty5UUdtocASW1050Tn8VR-NY7kupA\"}"
  },
  {
    "path": "genesis_data/genesis_txs/3BSgxVi4vtVtgMBtDE8xPMqU0PmkiKtKX6P_Iw0kMsM.json",
    "content": "{\"id\":\"3BSgxVi4vtVtgMBtDE8xPMqU0PmkiKtKX6P_Iw0kMsM\",\"last_tx\":\"\",\"owner\":\"43g_C5qexn_3VnWkBzQSgesSGXegEedB50sXfPSaZiD7GWribE641H-DnxTFGAWaKMKnDEYvbrApDYvwXYBMJ1F0iLjlh_iJ_whhWRjOmXPoGFs8LReDFLwCJnumCYNJVkhvIVZOHQ--Q_iek6Ehu-R0Fxpmu6pKv0iy3oI1T0vVkmd4tH7c1Gn7c1bwBZOcJBpBaKTSrICrc_ocdyQrcgHXcs7JSo5VUZlKNEgw0K1OaiXbywOFn1eyueVTTjMnBfd9mNT82t2Reeely6n44gIBLFa2vZ5dbhFyaOE-x99Sqvz854kpkxQHohPtOyDbA1uMs6qIljJAMY39pfUQsh-hLB0AuPFIcILBFKHaQuRFw5CZGs2LDfrij8EI9fbCsb92OLD7I4IuRsAhRvN7i1kvresUo3PsLzeDMNCDnPSwzhHeEVHBRBk91bvq1mxNPKl-fx0yDmAWJdom7dTfLoVh9IKTAc_6g6TGMWc9tl0qRKazvyjEqyGoBcRihDAFyEk1dOoqO5P9OPh0VMv8_nBqChL877aUX71s9GiFFxXNjm0Tfc10u4RltkoBef52Ix_cfTdwrBqZtSM76MbzWMuOJOK5ocEC2zCS9UvsUKFWz8Y1PObS0dd0LXXCDVvP61OrLqJfu-msMWTaj9uA3PydmjQ2vi7_B6AVxUAeE3M\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SSByZWFsbHkgYmVsaXZlIHRoYXQgSW50ZXJuZXQgaXMgaW4gYSBkaXNydXB0aXZlIG1vbWVudC4gc2VwLzIwMTc\",\"reward\":\"0\",\"signature\":\"VpPIjjpQVjQv2RfEv6Kx-B8-aJWibaYOU_JfiHGOG7jqTTIO1NsCISMP9XTf0he2E2fiC1LlXuIPQ7_1Igz2YI7UQ05dDQqMCUvHL-wHcUSTQP6Rfwp-X5Au7I58IDKRxM2W1YGVCaryq_Xb5mIX3vUbpLQ5UEouIQh2LKIrP59bsvKsFRQ-Ltn6Qe3kgzf4nzRz-NFfFXrrHdUgdxuvVhu1YZPIioD1CJfiJbOuP6gF0viVuSeNAQJufWvVsPCIzh7O35-8EL6D2p7laJZs_6ZwT4u-svHzPtWIf9Flx8MgsIyDnpMH65t7rNjP3UzBtsNOYrLkWHT8dpv4SJOBmUVR1uEk4cRLz2TD1rFxNpdfm6Wy2K3f831_9u22CSJLvDNAhEgG88P-Cb8l2aI2VoAahw4m3vQoVaHJ6RN-kwQhT6EphZo920rssao6PwOmyLnwSlOpRKvkqIKRGyJE61fb0aQfRlb3Y3EwtnvVCzpMblknM64UD222voio0BWrprLc8ioVFPAddwjc0lKQbHPm7Onpg9IMs99jPVibMl-Fy-wT9lbBqmja5NG9ufNb1070XMwNDJ--Evr1kNfd_FzWPLwVYjByw0-aHvfDIScPzuqBkp0gGAMGR2RxTu1w3macuE6_RgfMZDfXCeIpuswPV1Joh7U8as5DNZ1hzp4\"}"
  },
  {
    "path": "genesis_data/genesis_txs/3MMMUrHDmjbCn_-TOZJJHvjLBp8PffZKUNfm_Ziy0Vk.json",
    "content": "{\"id\":\"3MMMUrHDmjbCn_-TOZJJHvjLBp8PffZKUNfm_Ziy0Vk\",\"last_tx\":\"\",\"owner\":\"z4GZEU93WhZlKZo1KgtZ4M-BC3Wz5WyYQnai0x7fhDaRmNDPhpK-YlJXwpiDdeXZM1oWOqln2ZFPt-8PYKPR04dyqvYG3eMcY1-DwGiBi3UfKH-Nmpa9qp47mUV8EchJ5lfNkVhk2STgEFf_jiirEv6HCUhIM5f7fZ5okjzyYsPjNSFljkwFnqUNI0DWt7daeXO_7plY7zqQWDm-r4wrpKTSQG4ssK7IF4TE5bCdRq0XaMRq8vNulhW2pBt21tTIecwuWKdc6sWt1mnQEZ0HYlaxitLdCZ91u36dH40xkAInEKQdxU49AEBRlL_1lDCIycDfct-Tr_MBFNvwSbcWnp8rLelDVUNjYGTqV-FQpexLsePKhIdnmO05tOCA--qEHyK9spaHyQ10EgNlYa3NMStyYNmK4r-VyqNYx2MPEQUdwTAcqOZhrZQaPyBEwbB0nTNhj9hh7H3fVZekQoC8iB9pOMSmDSI6yRne9NwL3bjBk032Ki4dAnRSoPGxFw3q4-_WmgAu-RbCzlXwHo7Q3nmCzHinN1IkxL-JuX7q0I31SPCzPIi11T5w0pa5rNtUOP53o0UdyqfahDJJS3jRRYLuq2U_WuOsUijXQZZwPuSLrjFaDQ9Uv_DtHMMs1_-uktg3H6NShHgYbPMmXoxaIYZaktYydopLWE6-xhoZwR0\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"Rmlyc3QgdHJ5\",\"reward\":\"0\",\"signature\":\"WTDAXVuRw1blOgFO76ruXiks0Az4Y3jnTyla7QN-DUCVTi7pUt7vxYaD_PXTHIkTM2nsNkh2B8z0oBlMfFSaC5c3OiUpFQvE5OUUDHvuuhs1t8Ujt7V8M_e_uY_n0cNHmaG3Ku6zidBhdFWDyN0jkMrM7l7_WZo1fWtjK39QKAd5GekJOG4682g5D0vqFOqnCIYVV693SDsRj-D-qTsjiCNHQrQIxU0PzTlIKe94eogZUAt3oO7SWQZ1ouOYu1GbVlqgCUqk2hehVU4e7dfKiNBGEqcA0HUCsl6dpZWjKjzsuMY1NOkkqQ0zl6ZKmfKmOZIKEnWQzIk0NmMi2hOgD-SrZ18GWm-M6O7lFiFZpsgzyrjn7Vr369lK97K4k79ELNLZxx24fUO-s1GzD2xqGqJbNnqM3Nb23Jc9z62gCC9MrKV7QmSM0f59QYVRhXXgclFtUWwvbIqvnQ2cb_1WMRyBkBG148EiWWiAvAmQWF7h4NmKPd7E-dO2ELq80wnWpTribHH1B6g11GDxoCMr4NqcpqDZMWgqYtN1mtdmhWt11dcstnZb0w_Z5vQcAn7CRzLVaXO72wiWbMcO2jVr56p8pT429R-XmQNZdICgTSDKU1hgmZS_MPmStGmRzLZ87EINJLz2hcVBwzg5Matmi5zhm_ty9pD2GfT7KLL2LBw\"}"
  },
  {
    "path": "genesis_data/genesis_txs/3Q5gJrbqc-PeOvD4QQ4WCNp-f5cYzTyHyg6P9b-WvwM.json",
    "content": "{\"id\":\"3Q5gJrbqc-PeOvD4QQ4WCNp-f5cYzTyHyg6P9b-WvwM\",\"last_tx\":\"\",\"owner\":\"vYbZq65ub_6bK02xLbB_kEgE6cMGTRmfQzIIz7woBMjLJR8wgxN1MR-njwnIwYv7UND1xI2q508r_7stySTOn5SPN-TJWLpPFJT3ygK8_xJPrgOvpK5vectidpzST_x77wz3sXSxw6G0g4BH90zpQj7TaFYFzv6eoZ7eMCRo2aeznmDBq4DyKH75v2WhD55UakySdqsUaREIxKvrjnHjCc6V3M2uxTx69BjXvwU1bhMCq6UwG8z1xfE2IRxJ4mNr1kDS2fOZ263aoO_8tFb8zbU7KJYy8ihCmB876rSOqxNw-xMyWe-MoiSgh1oYHLvmzfgeeEnAa2obDhbZCPkUaZ0dSoCRlCuVKr1XUuSXBfOGJysXt3z0lHZIXPLTMgJ6V88CEwoGy2BNCo7zkLZOlBRGWiGrpsOhwWSFxNfKLmbBY7KEj3v-CGrCY2IKFJjTVF2MiV7v50g_glugKH2umUgQqVqnnZNMYev2C0H59zkyLWf0pUuoMdrhgtkftZh7rXbOBJRm7ulHvJ0jqoRSwJSAuqTLSF12sdNP8fD50pQFcoXTCt8w0qOCpdRYjlK8nIV6txW7HAfYITJSn9Lcu7PGb21SzLiv_f6F4nE3HRHek4r1MBTA4l6ft3bGbrBPrG3YXdoIqDArSc1dvqrOqQKs2ZxY8UG6gqJ6tXk249k\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"R29vZCBsdWNrIGFyY2hhaW4hIEdyZWF0IHByb2plY3QuIEhvcGUgdGhpcyBpcyB0aGUgc3RhcnQgb2YgdGhlIG5ldyBpbnRlcm5ldC4gTG92ZSB0byBteSBjaGlsZHJlbiBPbGl2ZXIgTiBhbmQgRGVpcmRyZSBPIGFuZCBsYWR5IExpc2EgeHg\",\"reward\":\"0\",\"signature\":\"OxAFoDteRI3tjVREUanNe-S7cR64GsV5f2-bTz6DWCppPtthZWEMHLad-vQFTRIG1UtSkaP9Fh1jwZ7mM3zhyhwQbHHa1Nxz7boXVVMDQi04ojhsCAg77qgtiV2Uqc-noBkCYtW8_1GCBW53jdmehfXGYSGs5AgtgapILtt6T_EBJrH8kYQugSeca6DUYtv6M5eIIjeiSeZ-5Z96lu3Sm3z3T3Y254ZG9wMwA_hTCB-fpMImFRGjbswiZY0LSkMUOe17Oap9ICA8nVTA2xZ94o0yMSL-d3LbysNdc76suMlKBP0msW8newif_5EiYWIarb1zCKDjWpkC3RHfsA8K3fIyHaafdVMj68mzlwWSGu7CShXIjH2DXcLgwE0yb3Z97OI2NexcP54Gv57lmQ5EmAKnGxkhx1JGVIplQGmST6juz6Ny2YgQAr3sh7wFaDt_hDKU-LipQbCQqBb8AoqhF4Ok1hZ2fPUgq_UglW7ihoUGJXghRzGBA3DA0apbs9Y8unHxzn7IXlcL7nMRZXZfBC2n0Yiy4pm0rqGzM5TrgE38ZYuBES-3KlOWt4zMuybNaT3GO3G-ddvIakvCx-U49Kc4GPx6Po-FEiu9GQkYTNjX7VSnJNyCbB1jdiEsi6JsZVmy89xt66YfsL4Tt8ZTEOdjQTLioQ9o8rsWJYY-NLE\"}"
  },
  {
    "path": "genesis_data/genesis_txs/3T6mnguMWl8GeiqZWiBZrGXHHtwm12mIWciusoSACkQ.json",
    "content": "{\"id\":\"3T6mnguMWl8GeiqZWiBZrGXHHtwm12mIWciusoSACkQ\",\"last_tx\":\"\",\"owner\":\"xeu6ymhEdXvhQqZAyVvMSsAfL-D5QDosFe7hY4Cx2Rn7hFUe2jlCZxtu2EkhgldpAa0EoLEFNGRSDz-bMA6zxtlEIl4LFvqK1_RhBrjl8EVc9ROBtz7G_BPgOIRn5dPBudUZPOr6JvTqenT4XaERGbzeWyKBH8_JkvhNXAkmmMCeDDWk8wEf-WY9JY2xCZdFiG0Vm5WR0jgcpxvgdVrewFgv8yw6Y4FW32KP_xPyv08Dy7PxDh83bul9QuRud92O7WlbFxCvgrZ9Bv-EeE_a2idlTrTlwl05T0Vkt6MO75yO-3WOMmJ5p3SPYrRbdI85x7eBBF0vCz05OFZl6zWr_Q3noKj7aIJGUsnTiEYUE1Kr1r_IWJbJBD4rZbowk87s4iCJv_I7xStwYQy_suvaZ16QR304q2rr49xVUbvTyOPIbwO-MEVdIYW-MQ93PziXEbTx4186m4Cd-rlFOR3cGJcwTLO-4DzkN1NDzFDX9Zqib362bYdiPo8thQLTH9vQCQ66qYXHH1ZQK0rkDDMfSuh3Ywv7YCBmZUC6s__as0dpxKwbyrT_h4o16SOJpolk_TNiHo-uQmmv9k6ul8fJGBvhkJktOEvKMi0lXO_l93y2ybRt9wMc4DsaqXzWjiqdnQ6dC5SY-t9Vlw09INsj0EdEojyXXcsJKlxOqG5KYfc\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SSByZWFsbHkgd2FudGVkIHRvIG1lYXN1cmUgU29udSB0aGF0IGRheSEgbWlzc2VkIGNoYW5jZSEgOjAtKSA\",\"reward\":\"0\",\"signature\":\"sECpfsvR5UL3jwn5mgFHX3TNb8Vx9gCG8Ml2YnJ3Tjo2Vs9mbapsVmCQEHa-Vr04XOBM9FxP86I5gXNY3-e2PPz3Dg7Gpt1ObdNcSCuPwgdaXOrlVgn-Vr2dqUFefWCcXVSTv4kxjmqJeYLW6MLtEj10yBjqJA83gmUOwEY16n2PLzMqPJ9UHpxDVU2RVElsXQ4m88-hxikz2WLChEis734sumkmBx9WBRicWQCvh0a0LeV53Amqt4ZcJdSZbuN09CwlOF3thbzy72abXVTgLf3btAMaaYFaQ-9AB6_RO7hDMHx_5DIAa9f75TkOPQ8eSQZ9jQXvrGFji_O5LQ5QTApMxXYJ5dZEZ7B0DKGy955Vf2QNWGiVX9QeULKoP64sFjBaQEuT7LenNqaZPwJS6D-VyPLXV2_sCYBXTvfKkYkoHduFIXmY16avGOBFrmQYJya99aF-JNASigQDMDokRR8Ycub4qRIdb7iHJH_skI41rWVwLWiK3pCBqCDVcM_88hPuFdOi4zqE4fyw5cQx5OS1p5amQSQloJlzOf1mbztCP-yzsSrbXR49sjK2RNG8uEr3j2K53nnITUqgEz8oWJNaH89bFuxYQGRj9CkW6KJfXSJZAq9Myfit36xZXC2ScEjB-TNhNSkO_hfNPmwoHXfFFno37YJSshVp9n8xsKs\"}"
  },
  {
    "path": "genesis_data/genesis_txs/3khTH_o8WZHSCzP-AThkmt7zZL-d_lcqUKC8nz7c8lk.json",
    "content": "{\"id\":\"3khTH_o8WZHSCzP-AThkmt7zZL-d_lcqUKC8nz7c8lk\",\"last_tx\":\"\",\"owner\":\"vihs8tPlYJaDxA_pGnOtq9AoLQn4Iv4EUcWt6obYANkPEeojHCpRoH6IPButCTaYiIcQDIvz_IIqw8jSZFjpvIZkDtAg5mIMXHV0CvZj7D3fndVxBf6XWE8U47WPGEJYi2NTSVChYSSAkZSzoqm356F9Uwv6z0AGDAk7H9AomUYLU4KX60K7VI5GC8U19PGERlW8VHriOd88Mh5Zrc5P3ukm36AzQf1Csptxad-DKLBU9i9HFBR54WzxzMASxzciXKhAnKMFLCjraOTvByfonwbl9lrB5li-ZbVP9Lz_TtNqrqdT06gste9eM4Qfj2CCZQGHqNEv9fbIRFSX9vMMqqvdVmFrMKo4bskA4P0qG4vztaMC9GOtTPmJ9OkHQBTHdf2b4Flj5em1SWeTECR51jMapRi0xzUDvXQG2gTCe7yRwVoUUmIPh_6Y1YyVX-6KhW9nrRhYdqEKi9kyXU6qRz6iyO6aGpX_7vm79KeSIEofIL83HAlL4fHkDOaTnjVq3QzbThLWlPZEusmn4uoWjCx9kk9Y0svJqqoMUsTH5UPM2WXLB0dDkN-sATN3x5fqfqhDUEOQOZhE-daDO_pEvNG7XCP4ABT1gzxEOoms5YlKx3X8g4lgSZ5CfuWSv8UuCsMxNF-yD7WI4kxUhsO4wVhudC0J_nQxju88e2SOoTk\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"TcOmZiAtIGtlYmFiIHRvIHRoZSBwZW9wbGU\",\"reward\":\"0\",\"signature\":\"u7oXf_zaTIivUTfUgwPBxYpWDUoNhptt0uWJLLIdzSFhnyic5U1g4-3rQ8_ZPJmGcuIvQ86bOjCSpIlZa_6yIzA-yOC2yjwxPikd_RK1dTwLjx-v9msGpSXQghGg2K2DJYYb57smP74bY1Q4yBetMt-ZG-IshYIdptGYI6p57Z-MHzdd7Ql6CAptCyTFulM9fIsGZET8G8Ey1K_ojW9Drzxy3XT6BR6arvenPh3u6eGateaBLqut4nikTHyKXYMrZ8giIUHdn_5bEcirPPXff51iItdF3BDY1-LUivQrGrGb2J7TDtBEql6gebn-iRm6LJLN5T7pnj83C4dkSHL_82tmQ_jFJBnY-6jn2bAW4f9_ZJ1SqWRg4xVsMgg05XnLxSmm7zJBNmop3m2zHuXVQSI9LMpYs7Tv9JyRgTmjfjzQNjpQFvT8k0IosPiczO_UF9--10_SCcalY8yix926yj9TeDVavPZfR-CgbhZlTiIvUZvpfMLGRBC-pB7LW-bItkrE5xXJXu8RvidffV1KRvHbyd5Gkb9PhBluRBg6S1HCDntTjG5s00Hwu15NuvfzdxPYVVU8hXMVFTxg2IDn2fxby6wnGWpe05BT-lImfA10NwAueJTVj8sARzwYGjiWK_nl6ycGUEejRce9EcoQl8gBhfbhynFtjt7NlEH5ASc\"}"
  },
  {
    "path": "genesis_data/genesis_txs/3ku6XelnvBsaRjoNxDWb_kT_PRlQ88U0pbWURziCj7s.json",
    "content": "{\"id\":\"3ku6XelnvBsaRjoNxDWb_kT_PRlQ88U0pbWURziCj7s\",\"last_tx\":\"\",\"owner\":\"4jjzirvWyOZEbYvmExBdpl0BMp3nZ8Grh12KODQ9Xq_HHsXbHmRAnsSverwdDVLtQpBOu0XCiPKaFq_VJ_IHskqv6a6rgQqVLpEj_DWND44Xl_4j0eojXJ2IwcJOw25ScumfGKRflec7UFAprX2wOhiXZ6I2uvnT9t3_0GIDtfOghi08vE3_vOW0Y9RUouX4zTnRfbYFm2n_iyFZLqAIAEre5JAx3dTp9zWunlOe2B3RCge5xrE1MFilC9H__qfzEiNH3m-wjymC7IgoVmbqBQRCoXps7YzqAJxdDkyQDc6NVlfmoc2oXwazzkz0h_F4LmXWO1FlNJK80iRs1P-0tj5OPYnHUf-48FXB91ha6BboAxzdZIxyzg4YDjyUVgLhQtYtWfo0_a-shdHXi3pdCTEB8B5vfr5uBOmonJG8ZY9qPw7tSMDiK72QZv7iIV5a-jGiVlcIW7o9ospVKoNFCt2lSfGyF6jmUvkYvYbln4wCiY6iUBZiTLaZIOzl9DIA3SZbH3VZgNA9B8Ig8EIT69rapWg2DGUM1_An5JKM3knBqyJcQb8TNJGvljGIhId8AgRB4UglVZ9YAfC3X3fGUyaZ_T_Cba4EvP_HsjWLsMeUeurKoGkWhIXSQNOZZlU8_V8p2yfJhgbwuCTviS5donmQeYQNcYY-JBXUy7swA6M\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"d2Vhc2Vs\",\"reward\":\"0\",\"signature\":\"awYyfg0iMfwwJ372okD_Psl3r8S5RMX0zkwInSW1RxIcwmdgqIKOjnAleRi6BDRitkgyub6r04xAw4SNbtvZRC8p1y01ghwAXVifo9TH9oY5Q0lOSRLiFPfgg5iFCzZakdreqC5SYeaCOIKJ5TgQkXAijFcG_vHJ579alIsKNObu3XsC-vkZFmtQsYmxz3jBlGDdZykzUzRO5BlwbpTfdej3YRpoKnmtoRjNEMKIjWFn6Hml5rws3dvc0NsVnpyptxwuzQDcfvzU_VpkaqYD4EBd0jgLqnycTb_y-Ks2HziiwR2Lb-yXWLoH1S1ssnWdpIz6fdY1L3OJM6FeTVK1pKmUrk39sax_5d8DTzRXeYnaraKgJFpLcys_IPgghk8lK3DvfNcvOBNA8O1gu-2NCa6bq7vVdjdhwuI0AzU8YZUyl0p2z2-Cs0oKoYli36PwOog61SjTm1gsJkklAHyHg50dunLrnEBl5R4_iYI8vhHzFXGR_0D2Glu6MjVCDXJHrwvOUTZyf9pb8Ms2Tk5h7Vcr21tNeYm4QZsMDU3qjUN4hIcvEXtscujzqHYIlKkBYu4WVGm8NPd1ZXdV-6dWJbL5RwjPpwJodgGVra075m7vkHRMg-Q1h-0v2eobEvZBHNpwS-lfKXNdZ8Znbxl6nI-Owp-4WjMVX7oO_A_dnzU\"}"
  },
  {
    "path": "genesis_data/genesis_txs/4LwZwAVcaBXhXsP5b4mnE11tUXefuRUTtTibtvoozDQ.json",
    "content": "{\"id\":\"4LwZwAVcaBXhXsP5b4mnE11tUXefuRUTtTibtvoozDQ\",\"last_tx\":\"\",\"owner\":\"vRfC0ispBTSwB8oajO6eH7uOc__XbMBIH_SlDVS__qN1Zon7osDfUA75xngIyrx6f3fMf_NbF4KqNyNyxZEBqqUklcT1Rh8T5aVI397ZWq0p99ar2872GMlGxWjn6Ep1tcLnE2vbEtN4KHJc3Mw9XAxQPzH1RVotBtw96YklU2c-Z96rGK9rQliYCrcpu2X3GpTDXNYnVMLHHHGZcuEexSnRFaNA4ulQjKMKVRFVGGjuLp0hOIuELDEuIyGL4SxaZ8z59Nt2Txzfh7BmgJKkB1DnDwecKP1Gg_ZUfSBIJI-C4yIS1ph6U-7986KV_pRqFtILPQn1l9Y1WiiJVFiMrl45KN-LJ2Ef_V_JhBHgSctFWFdm9w0nWz_SGNs_oS9BjK4sXj5tSaHrKCBVBwjym9K_hsILT18zI7ESSa7dKMDHBt4AvA9tYLlgFM1yF4CfCKLcHXX916uvzRRXdnRO1zhlBIMcXbXxFvc9xNmAhBtUsPuaJxMsd3DyhYjJsHHVwQCUWMfofyldUTteyQZM85O4WkoT5nJpny01T3o11GK_cFUq3ELPFo2xfYFljVVwfj7oLkhG6Hzqy9MucwQHnO0hgHe-TMCssM-HhLX4s3xTy50nLpTqNbUph6OpDZJM5W_jIJyXb-mi3tsvLDMvjuQmb02sc4zBOZ7Cbb1T3vE\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"R2xhZCB0byBiZSBvbiBib2FyZCE\",\"reward\":\"0\",\"signature\":\"fko6xfxpTXtvet6Yp_Oi1JeF3ZFdlwT1ruZr8iUTp4rDnPgki-oqxUyrrIK--0tcQjPQOW48APuWzDPIabAjcpr6nTjPdH7xwqgJ2EEBLrZklh6t2KkEp1HmAPuTMB8pVlw2jr8uP7WY_iaDNv760GNtGui2w2SsShvfx-QJ-udoJMncbvUuUWUfItzZ4MwDbAOmJ86lBGp_M7exebsoB4lCYDKAvhsJYJuuHnOmKJ2vV4YoH-nFRdaoq274YK4GC0BPcSBk41tf2PR9FbbYGnW2r_QWrqC4Yiq0Sg4lHK0gu_x8U-mPeLhIjsJC_LCV4zMXcYQvZ_p7VH2ffhax24-XozKOHuBQkQVM35MIoBnEA04oobq-QYGH0s_lPz99yXuvsik4g1iMXJcw67_GVM_Nh7haZku2yS8idX-a8xfxsdh2IMxuBpSddRzwxbUlXGp2GO6dZWmeXQHOYCoFHq0AbsrUWfX0cGbrxKO-xCP1eQ7Tw-R4Bc2T0uIUHRj_O34CN8r3OL1GTeNeQ64Qas3ziifw-Eab_-Okrb6LqI_umacOd0XMZCf7kne8mzl_4OSS5brDvstGpjBmtyUK_y1_RBsHb3jqo-6w0ox2Xon7FEmCkPLTilQB2YrpQZLVhjhKO2jJES6tpAA-QmnT0dEy1ifH_id5PsthntNgQTo\"}"
  },
  {
    "path": "genesis_data/genesis_txs/4UEhkNbsGdJUjx1lJQgX9KorwSf_RRZG8VMW6jMmf8Y.json",
    "content": "{\"id\":\"4UEhkNbsGdJUjx1lJQgX9KorwSf_RRZG8VMW6jMmf8Y\",\"last_tx\":\"\",\"owner\":\"ydrVhsTG1L8jGKzxREj29zhFVurD4PJHX-m1YzqLM8nzLDDJBAnRE190t0Z5Cn03qNLw-ciQAdw-tmwJf5eVY2Y78iYdhteIRvjpYa7JYxCljeuTAIgz7LDHgn8n5pSnx0IpM362mudtEHX4_4bnNoHzXF8MaU_Fr--ydgnantGPmGagYzaZlOZX9PNU5h_hWpIVgZAF4FkVNyjXViSUThOVVzxIZqJ3FOAA4-4kNjAxvkc2awKBTuS4yWUaoaYxKNfNKw5GzTFGRY4xSzrBDnAPDI7OAGBudsgU8qcaZxhiAnMaOqNstNdw92nG1ExJl9LSEUH6GF3Hknnwqn6JPLw1a_RvTYIHavBEp170y8zbBQAg13FmZ3QUQBBkpoDBO199LxjicRXuUOXEgObDcSiys2v_Ogp7bopOvB2WEqMFKXtJ8paAsaGxsXe5r_HPtMRMxI31AzlXIEv04mmO-UF1uXQSFoKWlky_K3FBZazV3RDEO9pKYzrt4yXggvRIGov05qgEnhd96ezBPFYlPoCxvL80qx-yK9Ld6JAGj4IZKtzcNAaDLE9tVoALWEBLLj0jercjpGdkux5o_kwa3cdI7YysQ1xMCB02V6qnNS2kS9KVfTLnpsLQlcxWEgxXtWfoA7zhHXcDy3LTCIA6US2GFQ-rayg26_0hhSzUJVU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"U21vb2NoIGJ1ZGRpZXMgS3VtaXJlaSBhbmQgQWJlcnRzc3F1aXJyZWwgd2VyZSBoZXJlLiA\",\"reward\":\"0\",\"signature\":\"cCwK5uNiYVBj_MkNO6soloFf6t_EIcba3hqYzkgYzc0lIJafB-U8rYvgJrwgDFySUttAro2-sh9eEAn4NulotFL_0uaRK0v6ShBQcU1xnwROLLq9mJR2hkXCcUtqYedXFAkYXFxuYjdkvHbgKemnDkMlEDT85iqY87mEtCgKglMagzPl41S-8-z2jrw5K6aH4Y8dKXAcEQqXh_DDsF1Czci-RevZ7_ACp0upBkzLqpl0BKcC4xwITYQPDHHxvzCcXhjdY1h8k6nTXCVEQd62ga-oxJRo9Cj7kDSX369hg6RnW87rd9GqPvnA-JTgzBLnlclsoBlQMcJEoQG1PDPACmuHckqkfc7yxa88XS2NZeHitBW0o5iSO5qJa4iZ0JlYnUDbC8COdkTgnuku_Lt2WMlKAbeCEPW3LpG6TmLxAZxmhxtc9y0wILX8LHZVpFx5dXoNXm11o7bcHpK3ponW81jADYPFlNjzvJv7Cpe3J7ikxldhBHj4eD2upj2YZLiDMl7l1lEepA3zrBY2pS8yzgRfVO2CAyn28hJbOI5lBGs9cuiEQtkFAW0TQ1puSQHKeDo8VHDtuA3Jn_4HgJS3uErhLl4ZoVW2DbtEnfZagcfJQrRjaEu2ENcMy9fn9fbrZqRN0bIE36PBA9QGke_ZxPzRc__AHRTAPP37Asfh8j4\"}"
  },
  {
    "path": "genesis_data/genesis_txs/4bPVo0hCI3E-ry2mBjvOZsBpNwPM108NT0vnJCxCeJw.json",
    "content": "{\"id\":\"4bPVo0hCI3E-ry2mBjvOZsBpNwPM108NT0vnJCxCeJw\",\"last_tx\":\"\",\"owner\":\"oTkEosK0oQ-WpwcnZtNHInQXwPoChbpJb3hyncJgW-hOs2TYqeFGxnMm2tScqLsvH5MlvpL7gKcp03vZpn5964wmnHjMLuPvwRw4HEwswgvszl4HfJNWKpuo3kTv_fm6Yse7H5F1uRptvrkhm86g9TaHRfeDRP6f7IaiB7Suc2Ereo6qPaTRhO2efc3ShlM9qrLZQvnULQzwIyklN7HiNmv-GlhlVEpO1fs0cyt9H98XD-3aJ2xqdIdbeHoucUd8vvMMD1NJse0ejpWlRX3bK9sm8g4g5e-KFTK4C_ayTQlMdVjkScH0dt5R38zzVo_VYlD5XWEJFr6XHe9m06w-ZAvDjEhFyTO_EVx8C15rJBc3h1-8qAZOueb3NYRLOEnFG9ovOD7xy5X_2JZ5aYp7vizNV2zOBlybN6upx6Qb2wHxqxRevB2iNQJHNT4hnqRr7u7RJOLKMV4UoGV0MmbmjFJtmlari8qRYh1i6w7ysUtHqT1mAqqrV88VZoyKP50z05OFKXEIfcEcKAhCmoEtfjJqup_nc6yME_O-49IPHVI6FFXf5SFU5aGwx7zy9Np0GuxM8N2gv76svWudkqtEWDZRB3T3sGjTA8X3dMnKIDKMJBJvdP1attDTjIvpf1Ndeg0cvuNlshbXzykIQgS69X0CsAbHbTicHKxJdb07rd0\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SGVsbG8\",\"reward\":\"0\",\"signature\":\"T3oKjykqcilmCMH_a8d1C2yhwpLq1kTYKTLsqmPl69acTGcEQALHtKVbx0drPlOgZ3H9wlyKk2WjJM2_KUWjJckvVC0vgrJyQcqHXeGkdkyTODNBBnIqpAttmDhkDHuLKFiUUWTVdDE5DgNA9c5pCL6mJDgdACePZlaPUV_SyrRoVa52hNhgRk0KApWu0ILHGXp-sDX-HrF6z1PYywBdO5H811ip2PdILjd_aGHeK4xh18HpOp6c9jeRODs8wydq3YaUelnq7jdIHIdY9JtgBks9PXuGJrZ5M-dyilFxfB9SZ10bfDGioRg0YAtiN-c3bKLTRQCjhUV_7w5fEQ1vwY7NUym7SYP7UUBnN0JlD8Qhy1lzKs4cZ-WhTYY62VO8Qg0tRJOkcJW2Hz92HdVAwWEANoQO8ZtAyZR6a6LfFOfi6CZb3kQEXE7gfbU9m490VaTdyhKbj07AvQbQL1O4cabYNvaShWe1jk2UilnyCMeC6ZUm3ND-0_MrFZc1XSgl1uAkBUBMMtCnRUGIO9ovPGP-TByYJalPGxazbAdOuwltWsUwKaMiUi1zxH7pQ35jP4_XkT5ORPNdl6Bx0SmJg3Zwe8k2d_5RkhW_vGsVRTiVoytsIs1sufkEZScgiEIZ-gAAGc20zuNwb7wLDX7c-fJVa5P1umTVje9izXaNXYY\"}"
  },
  {
    "path": "genesis_data/genesis_txs/4ewYAvsgaT-6Oy23qPqK29O_AgfvNbhLvol13yN1PdQ.json",
    "content": "{\"id\":\"4ewYAvsgaT-6Oy23qPqK29O_AgfvNbhLvol13yN1PdQ\",\"last_tx\":\"\",\"owner\":\"znlYjxwFlNjt9NfIDIp78hfvmi5UIX4ERBXPWJspJkNPHTNrLRtwLVzFpblc7QV_taXQoYqnBZZ2bW1uuAbSVIpZH-n3HfKIHnXwLsq-4W-EcM62mSvK6FMs9aTIP4PTL56cwVdPktLPycOTQgKPEc5kBZF7c0AWCMpfapBdHh1mcdQ-ESDanrCWR-WyzFIXkVERm2ObHevFqDNhRsQcHoRsdrepZ1hXuwosjDwKGrC1xzQxq3d-iAC9kttBRY_cUgr7lKT0OfDbalx3q358u7mgrfLu0otY9sdoZYEsP04QatRIcBoJ9n6iGcsDgYGM4r_FuoZscbLiCku2c8BgsB6DAz5YBhJdyEXOWFSYHKmIwd8x2vA_J8mETmsmPBSEPrD3Uwej5K_h-KJDNRWqQdNX17qmDzvJtrHfYqMMMgTU4yrE16iH87oUAlhaJBpRVzeob9vnPctOfpy5rQq79XBfJDEmJ7s6QEvo6wnyr0ySNRjh8_eb2EQGXteDX_0htECjcARqFdn6ZbKZA4Q-jhxTxCwmr_36EGhVm6-T_-7uxdvkFhBfkhDNSuAW3MkukLwGJaQGwNmeUZCVXi95P-83ftWZYsAQnDuWq92MP7u83ByktLHkPjbV23vEDgZ9YqXJ_eoe06eEWfb75L8POumRUhyvGKS_pzWPQsIdMQM\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SSdtIGluLg\",\"reward\":\"0\",\"signature\":\"G9NLN2k_8bX-Iwg7uiWNGp52HwIwnHg945W3ctVSLy4HHdQL7xB3OJVJ-Yb1f07xsRP8CrBpUjrkneG1u4N7Ez9Whj3lPtER4UY3u3kpicl6h7Lf949l7xj-uCPjVVh90lLtZkuv0ciGTxGsi1KKWqYFtr8Di_T8ZDg82BGsVPzX05xMyqrl1ued7jR6veeHaLkHMsZ4piNTkmTthIa4rbFNmFaNVLD0FLptDT-emNTP09DKTbm-TIgjSjfNiK7Zg0tt2qJrmbXUVQTdapMfAzNV4d2E1-c_GqJ8bSYXSTz6J_14tXCAw7doGoPn7RPtsKx6VFU0mSNi7gDvLdf01tTMcM8hEZAAaZRmEIYxrUPgKghLpZh1LRLdlDS67bZw9JVXz5Mj3pmVqBBb-GYalYWMy4lCsnGw2ppFwaB4NYQyxsc4Q_AVGiu4TAG_pYP8yPTFOa-sYxA6x32XTNmprZewb32UnQ7CeaiVXce14JxM4QzV6IaJ3spE5YYaHcIfAd2f6eHRa4tJRCE3MkqBDYdpaNluhqI15HtxzVNKYB3DLv5Z1Jl9xFYTSflYUwnMJPFYGqpkQ61Q5a5oCKpWMLs_KAbSMpA8pyzHDj30vFepQiOusz5S980fE-CysNSGDzYKuB2FeHvLD8Ohsa-kr71DR9fBewXXGpXXnu4sL-A\"}"
  },
  {
    "path": "genesis_data/genesis_txs/4gLPD5njSRtiaJwjcjmNOyI5Vw8sFBQQWOefmy4SPmQ.json",
    "content": "{\"id\":\"4gLPD5njSRtiaJwjcjmNOyI5Vw8sFBQQWOefmy4SPmQ\",\"last_tx\":\"\",\"owner\":\"zyiLqjzV8rfGF2l3MfV9mmqab3MiqsmQTmhz0Jb71bvmIzY0bvsgY-Lii4PRYjiuBHY_IAHzmVnl9f3hK79UCNAV7SNNGawwOVPoeatOg_0OjU-a_RP8esJvcAm99bVgLpYc1g30cR0okBV-7nE284DX6d3MfjQoDVV36nrcXPahyG6tPJoah_qqSAqY48TVJgcymHuI2Re27B6aqaXBHOMIDz3-db2LMYUBZQqckwzIlcAvwPzlv-sqqV-DwFNVtQc_--iY_weHfPn0EJ70ABypbJPwz6pV0Otka_1wtDsGKXey1hTP6d67cxeEjdxv5gSFyvqpdQqEnMB0P8Eae3aW9fyjPvZl2QeYVhrY3HzmX9DEJSaj65MncePxo2yo63Ng0qblJZCZzVbtDRwtBI9DXaLvo1tiinahyVMQ7sRbIUD95J2sGP46wOs5-QdexkpG3pBN6UC378eRVhfLXR6IuKGoP3k93mq65CVcdhjylydHXhkzIP4Kw-4IekE2I_zRpLn3nE_rN3Ia6HFg--W5cJJBB8hNw6odNuH9AAXYxVwiMEAKnki9edeK3nFm_tO0ZvWTHyoQXYpTC9oNxpovloMRXBA8dV7DhLYAnNhwOYhieEfIOGnBvZZkFpj5Mq716QU0tV4Rp_qAK0LElGdiUPOjVOS6hrqjjyIrby0\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SW5kaWUgYW5kIFNhbSA\",\"reward\":\"0\",\"signature\":\"XNCLlUaeApHjQd5v3S5EIlEbRBxuICdjYotnSgJSH4aLReeq4xtdwY6Tmj8nYE28yQ7-WgJS3Nk0wPL6iMzlUjOG5ol1mA8o7Y2ZDTNRpG4pMQRKcOt88PZhFHWmfjzH0StFKpCHSCBYsBw9jaBqserkNXQavMBDW4qH5NwDaz4vTbeeSDiUoxB47av3CPkWz1GOT7_TzmepTh8ViwTX49uWZg4kdikcDOkjM0v0k_PYnIbhhHVLTTBBEebb9g1Svl8KOwcnkjMbaP3hRIjydTg2vhP9DMyRo2ReG82iotyRxDcqyZQzB-HIzwmxVWwfJY4hsHhfqQoQPqSiVVhkLMvE7S6RQmHYRIUJkXYHOmceo6wQzoESIFkI0PegHFzNhWaB_wBxmEDHtEHwgIvekCromttJjuMzAVQKPFP9SarApqEcpO-k8yh4BzXymjcyC6eXRQfiax5u8fC6BnddrQ85wGtepChTDFk0CodBrcwHVrheNpSt65qbZRi-5TulunCqLvGC7EQVi9g-9Z-b8U3wBFhBIunVXI3aWWLxm8nF8SxZoiDQzTeLw9im4vS65owjkI-hUwejdI4e71CGlLIjQsYRYiHeMJ-1PYxB7CWWHCCK1fGqvqyCc1F7iXltduRdumOvj6iqT8j1xk8QEEm4M5lpLjNEtwcnYy9l1cY\"}"
  },
  {
    "path": "genesis_data/genesis_txs/4pNPqxodBesN6jQl51nH17GA1fWYfHVm8cIEfusnPLY.json",
    "content": "{\"id\":\"4pNPqxodBesN6jQl51nH17GA1fWYfHVm8cIEfusnPLY\",\"last_tx\":\"\",\"owner\":\"20gZ7-uGvyEvTMafLm70C2l5Fvh_kEu1PoHD90BDJBDv1SJ7-p09LCsrTwIcvGKWALxJ1DyWtu56WPpm9XuIMIcSZebj5SZXBMnJKdHrHOLQxmWGoU5pSvMRRtR7zO-pI5N3161f5hg177BNdzptnL0PS6V9EwYXk12_MLlA3yOEHeqDnJStBVl5w_DULLbgjDPh94wJizy--vxVPglVn11NPkSt_afP6oIn1YS_lTVEr2bJFQLC74iT1kRqPwCEG6bsRfuM4TQr77nP5GGs5CxqMeO1tf1n1RkPL60cCxLhf3zNU7IUkxGSMMSMEGDOkHlfReh6kMPJuq6m1Z1rnBZ70Vl6SVbtyjiY2IqAxM9lIWqboL7jnDS2iY7xj7QqXvBtWVan2avXjFllnb33Vjkx8blatSeaWQw0G5V9g2D-JruP_7N9E1sGZvRrHawP-6i6KMzQJvRvKQI57BzgzYjPxj1NjXK64JaeWB9L0V-EyW17qWRL_T0gtrW7FS2I1ueTeFyFC0WLAVB7ofrJ8637WgLNcnNfABrnEz-akmGAxBXS31ZZGbtGWU8j9DEI4BTWTBGKt6o6bcIrlULFk64ia8Y0axIQQ6azXQLKLTiNCR-kn7EBZp-NXdo5BZCCmq0xyZg0K2XthTJQO4jst2cqnwkB5oB_V0VC781bzQU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"bXkgdHdvIGJlYXV0aWZ1bCBib3lzIFphY2g\",\"reward\":\"0\",\"signature\":\"XgO_-3x6SRfd5wJGrUmYKNJGUpLPJkpgU5_nOyPohhmWDqVXkKaOJbA8rfR_u08nSrg-wa3mhPxvRF12dWIf81jMiwiASxpkcCmHQRLPMTLijAnnptK_wPda9x5_cU_qkWe9pPmHFvqVYTLBk0tD1FXKW6N5SlXnuM65cD-0DE51gfl60vm4Bb3uvwLO0JRBbsUGPSmsL1OTJ1cuzksuEcx7lRmx_gFDd2NFjWNbaJOsGVoUyv6wS7gstKLncZ3c-HMa7uYTASX2_L9UQd6xk70uy8PBPAUuMs5nx-dJCIsvC8FHmowsxWR-MketCblBKazstMo8DgnQy7sQ4m-9zThpBhjlDR04XIfzpV0HnN1nu0AUNYRlJpFp7haIKLuDNBz6tCb0acw_5AX7jo4RYnc0Z3ia8uZCqg3ixv5G9kWhdegJgm2ZL17RyMm6vKTIgdjHD5JL1fqlN5GnRgqhremWtN3SBi33aZm6dJm4RNwNN2tz5AbFw8CpVU6xYMbvBqVh9vJi8jK7PLkVzFg4-UZQUscGHOz3aWZrT3BpbccgXQdWdoqtMB5VQsVSTOo5BirReGswOJcN9VKBx1o5Y1VXY2-5fQa7GSiYRLznASgFRFL03DE_MNyqbZ4lP3AkpUQF3SpQrJJE5xvoejshTSAGxOOwZYxlWOZ4g9hvPsY\"}"
  },
  {
    "path": "genesis_data/genesis_txs/576xa7WLVidNoEcYPhAm7OlyYgbrp7Z1RBIfqLbVFzw.json",
    "content": "{\"id\":\"576xa7WLVidNoEcYPhAm7OlyYgbrp7Z1RBIfqLbVFzw\",\"last_tx\":\"\",\"owner\":\"1feL3K7vCky4LzgEPuWIxIIDRS-fBVO6mh6FyL2jTHHIz8mDtcJ5mF1R-MYfJ06FQUvQB3PhHYa2yGPVegBispDsr70k9PIYlJRTFZhpCHCLrMEUVX-Cmbd3NRnc91LVQU1wUyGlSDq91IdkDNrk-ykS2myorH4Je42fBdME-QUI08b1QIxinlDbxJpuwZ5MyNjCsrIISVqyk0pwv-6RBEjrcED8kdxUfIbi_nCNJ31aR0mCsZVmyYWnPAbAZNYp-Vb-r4rjwr_TBXLZT_Cb03hl6KmKO5l8DBn2LPdRx8Po99TYdkC73mBw2nh-6DhbQGEhKAZe0SKE2BTP61Ccv5k3Q7KNvzz1niEiKrcjtrAe7PVrwmqnNr2F3lrZST3wC_gj1INT7H6doErBlGoplUaqUq4my0RTnTPknc4i-fjQVGoDS1bl89yMqv2pnu1PFbGU5BkLzGbKJoFZEXxqYRNlnrpOKi2vBmSeAP6CqeOmNDTe8qChdOqcr2B_jIRGahxV678q8viV83eUuSxyqoonRh9deqbWSZVPoQqKkzaE_znadp_zexg6WuWuAUGEsFx4OSb-IMsuYQienXbiA0B_aSGS9aa073TosMiTwsqweMAYaH4wX2n_geTQOHlF3I5xBDy-vwAt1J0tfSMlXO9yd3_ARZkGYu5g4JwLn1s\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SSBjYW4ndCB3YWl0IHRvIHNlZSB3aGVyZSB0aGlzIGdvZXMhIEFsbCB0aGUgYmVzdA\",\"reward\":\"0\",\"signature\":\"h8Rgqc_LcQMWsihv77pOrNA9VUYE5UG9DgISNqvUn2iUsARcnZIgrwtK1OU0FhH_cp1uZA_g4QKTGW2Qf2PycFrWpgWXJGGirmLJOqaH79X3esuSVMsVpDsTYQ5M3qJoVY7oYzbB7CqxtxgCZUF_WTY4mHQ1BqpJmEKrlNBVCdUiGyjbp3qyTcbCDKSQfRheU_QERTgPAiHsEJYryQhPJK__pcADuis0Tq5jruw3BjpLVvHRUxmZnqszlR8x0hWSTff8ZRHhhIf03SFIwuEhqwuPcUc-0Qh_4ql2KnOjk7ihtflHGEYwbt14C6CSK5cr5lhAuTkRxbtUXr8eIB4FoUhAAQxj30UXV8kc9YiGMim1brOpwuNn0w0KQyh2HqaKDkDnR46eC8BsDw0KM0O9UhjS8cHXrPkumzreiRzDNG4XY4MHSd3aGLw3-MOkRRdSKswBgSfZRu4MoHztE_GqEjQQCLLfAD2JgFvw2E4PxI3qDG7Y-O_-RFciRLaHB1TNCuW6FpCXD2ZxYB8XvXkN9FC6XYyxmsDMLQH8NWodHkB5m3laVMCxT5q_lMkntWdzzayusKI7MMO9KVKTrrXpOk3YZxTRXAlYvWsyVBhkyBTutcYWwBW18ccd1M6bHKWoEnKBiLMkOb8np8ac0gNUg8fo9JqwqN7vWsLG_ZVxYk8\"}"
  },
  {
    "path": "genesis_data/genesis_txs/5FL2C4l-5cTl9wg4CblgIxzko8hGsB5URVA_yTAd4Nk.json",
    "content": "{\"id\":\"5FL2C4l-5cTl9wg4CblgIxzko8hGsB5URVA_yTAd4Nk\",\"last_tx\":\"\",\"owner\":\"xGgINtNQ5femfxJeFp2E1hvVIL18PrzBEvPfiXb0xgwuCxrlQ7o4p1CbOr7f7Slaza7vyiXlXuwcx-Fv-HxmoLHckg6VsNWgnaZpz2EP64_aANV8dNkxVpAhBINKWnQ2DntwHFx0TPHH6Nro3s2HTlog43JTEn0UhrBymBsjc_6zI0grn9cxJtbDzhHx5O9L4KhyZfSUAu2x35AdTZMWQ6ZTiea4BPk6lBnl2Wfcdj-NtajRwmzOXAo2S-hZaVQzVc1Q5g2yjE0P7B43tJqHrRCTxNy066-ws5BcJFFl9FjDPPAF1ePYrK0oTR61VqviMmQmjN0-sWy8Ditb7bMaW59N-UO6bmr6bYet5MFC9TYLoDKEiceWx3IaT8oAnBgm-Z46_mPA6ClhrtK8U3yUOjBFa6MNQiXXF8HHNKYhgwXwhts5_Gs92zfifhJOXacxway2aQCBB77kekTziDEcmYbz8WdhdCm4VzomTWY8qeUCxFTIBNH62zn2na7PLlrcYH8uVM_pSmiNbfONVhLJ2azbVEvKUI3d02yCFW58KDgXp_N-kJAcHbVhkOmwSFOIlRAuk-TSKyELtQQ8MsS96fbhLnP-xgv6SzKRlOU_EfJSyY9UZxKd1ERBvzjU4VVBrtbs2nmUyovQ1jyFvoskPWm6_x8FKnUVKPf_TAl9vIs\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"TG92ZSBhbmQgUGVhY2UhIEpaIHdhcyBoZXJlLg\",\"reward\":\"0\",\"signature\":\"KgJjBhWvl3TI3HgTLFFyMnxjahAnQ5oDBtcRJT7CiYqP6V1tsrWx_sMXhpMGznGMDWKUQep7gWy4AQZwBRa51O_n4L90IRo-kZXqHMs6C2P2s7nt7cUI_JJN617Q0ZiCI4Dn-WdMAWW9AmwIIVxESLyfj_v0fG2AIUCYJsvQXxZ0_L3UEdEGOirQpzPcrJCzmw6Qpu6VFJyDqRYBLb3162wro9UXat3jrVoGXxKgFDUdVRgOAD6oxhBFoygp2jCde7DJYuY7dkT-A-jHwnouMmVDExzUxIQzjNG8XN_M0QQBtCYnN94cGCevh1cl8YESc0ROKyjHhu0y4cL0DUo2SaX6LL-XrH1gzZj2ajzyWzpAgGfDNMNP_zgLoRIYqP7p6C-LUgB5CHSghotWC15JJjmgWlhUYfEQjVZwLHfVqvAxFud4eSf5O4WTDxDT9h0vTl38BYhf2xiIpreJsNv9l4EPn418iHAhZfsYgTlDzewa5hJd70GTABZ9418RRQEyekBZCB_S3Vcmv6q37_EGW2s1YyxeScTX_AofFv986msKzXrn4-AM2nZ6azFaRb8EHVl5r7er3xNj_Czxd96SjMlSKDbs9m3L1J0aH2HEoqtkuolS9dbg0FxsIzkurddMYrZKgIjHiKAr0Qv9vmoy0z_nHhw2tLjD9YmXoCuzBko\"}"
  },
  {
    "path": "genesis_data/genesis_txs/5Hatfzkj7ivvIsUIDjdOSp-4CdkClH6B7S_SNX0B2-o.json",
    "content": "{\"id\":\"5Hatfzkj7ivvIsUIDjdOSp-4CdkClH6B7S_SNX0B2-o\",\"last_tx\":\"\",\"owner\":\"3i3l8r00_IimugSX-InZbowD-IoAypUHdp5EEjOktiyShbpSszKXpjv5EWtH7G82ODY_mmY4GUXNCGM3hN-21oP90oAX1TK_YnwNl58dJvRPQmP2vmrsJ5h2OJi7TzTMhtsCjdr7kSvHORDqWtyFKTE_tYdYZiGZ6mVphwMHL_RNEVWIhobWp1obGm-FS0x8ToZzGK-yr8DiXRwVFjVk9WiUUjovYFw2Ylb5CrXPnPjuSi8EcCNBWCyPmjyj3KZj1iPCjCR1-jncrNT9fmOxNUwohvbm5ZKA6PAtDbI0PfowNcL7YNmoK9mBtWGO7WpbxObiNcZXt0RtXjEs3kbWJLUqxNI-MZObbkSEIIxb0YbyU7CVeOAZuAVKQUryv7yHUGWhEHDMZJLZJWtCzOdPsxFOkouEWWiA7qjJ0H0nDjrLj1k3vVQVtz2aGUmF6IhBFlywA6omPo6SzD0M6oCr0IsP20GsAzJDM-ed4pf8Dhv3jbeuKZl0UFSD5zjSypQ_kJI4zAWaGb_2WAdZMI1ujFzv2VVPT0AT0HNkhogpjX5uyLfsbCdIT1fYXuvKF5v7ofUZX-mEkKL3f6jl_R22T0m7xBUtOBEF3nqu2v5rlM7pnYXl3YpplL-SL1FEq0kDBcvMQFFwXRhJMEI6zcx1fj4Bi6Iv27uXlFJVqcgvMlU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"R2ltbWUgdGhlIEFSQ2hhaW4hIEkgdGhpbmsgdGhlIHdvcmxkIG5lZWRzIGl0IGFuZCBJIGhvcGUgaXQgd2lsbCBoYXBwZW4u\",\"reward\":\"0\",\"signature\":\"ionJm4R-YV3Z1R1hyBM4wVDcyMssGLazk-azfO4amXgoY-ligzhad_VAcE7vFNC1Q01b2E_LuUuLz7z9iP2iLOy_aBQl26qCoFhVuBwrnyehtXFT4PfiJzIM3ccDWFPdx1R5PmEv3SF2ZZX9cUuY0xr2up4rn-TgSzMgaPHguwc4ofm4WzkAuj2jvKMhNBQFMwY6VJZSTmkOiTz_gGSZai4ocU9iEzwiaqjTOIfP8LRweeO6CMENsMM678wAFlximpg_p1v7LRfZK5zlO0Jerpr-IpXrfDZy0TsSLNPfuZzIJI_RoI39-KDPrA9v4x31lVUPyKO3kGQ0WIaUf5jDsi9UdrccPuFhWXHGhB66FUpFzZB3zOGzy40NHdLFJhtAUzjn3zb3QgYQZGdsC2sTbSoCzqilxDAQcmzMEsCBgNl7gRVBctzXH6WTfbxarSL-NGNtGy2APtuYpzJ2FgqfkRwZUt0ulszm2zPosik-ybCwzCCCu3urHwKLx-cAnacz2kKwWSIvf_WLdVx93lMmduvpfHSkY76lIsND43OK-YHZaMd__ISrM0hjkb14rSwdkHnTVtfELwm8J1Zckp1XtWmB2G1pt24MMjI1BrpFN-BMO3slJPGqQkLFKI5Ab6iq3oSQWMjU2LiDkRA_FRrC-2QJeu8g0dfX42YDcviUxK0\"}"
  },
  {
    "path": "genesis_data/genesis_txs/5OdjYWAipCjWzpqfNoNhyJ673d4pRMNva8la_SFfu_c.json",
    "content": "{\"id\":\"5OdjYWAipCjWzpqfNoNhyJ673d4pRMNva8la_SFfu_c\",\"last_tx\":\"\",\"owner\":\"yoZeHi3l28TSC00PaPnMyk5l8B89L_1rnPDXN-L_-cfowmIYKch_rxuw5Jh1azBbb-EU2syGtg2tU-E7prH3gcamSVoufWgoyQQI-Updr6BiaVk4mP36rdh0E_kfg2ALJowORyTSy9okM5-KCJ_xXjbEn0AvvecV2WRXtMidTwWPR9mC9duCX0_-fEIyZEWMUD4_IbtGYpIeiWdPMARoLVeoprFM0tGoKqdZHVi3nJqXFjdTGhtX-bH9377cgCsbSgNIUZaAM868ETkHpySADo5KTJ5rSTKry-bPP7B66NKvR_iWna5nUDR6Qnw4U90EQunkaRHk1LcbUFu-K_wxSXcV3UvRzuBYmUOiAXpdIgj4wGwjv1sIpei03FPfzRbtj1UFc5OvF8K2GVZ8PXTlgClu05uXV2hMtu3JaBrPAsjkfAMR90Uhk0kJyBBzkVTvNmSvs1vm-Eyuc8AKQQhqV5ciUzBasi4XYWfBxElVEFu-hDOOoDK8LSYt6VWCkHtvj3BxhQdtiuuDX5gYdUvoyvgLey1MjkauUr8VKShhUAf4j4wG6U_jyLD8z3bIxIf_hLMZ8MEiPeN3HNr60wTarwhVrHQpLP2gSTO6_p2iXyO15MG7D6Tq9-ugZXCGjGCiv2ASkWQyAFyaf8SetWLZ0jee_dn23lWlicfiFqdP0uc\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"QmVjYXVzZSBzb21lIHRoaW5ncyBzaG91bGQgbmV2ZXIgYmUgZm9yZ290dGVuIA\",\"reward\":\"0\",\"signature\":\"NRNg-fTM3QVeeRqjfudqUEMY6TZeH2iapbdt4qT7THtsRftU0wdb_KwJFPm-couU3lmuGFosc_66Q7zS96pLC-mvgwwfNSgxrbap_96ug1iLPa8OO5h68zwGifVv43KPr6fsOmPrP5aTAAZg4UIOmf9-zmopoUD3QgNtF6nT5qw2dH5rNUWOPZsNtmFX71rVM_-WBuLItjyQvrP8o6udpkSF_B9e45-xoGCaMu021g_RoiT2ZS4Yve5o355v5jFgnlkHkdesHaet54ab4KWEjMgL6A1ltzEjWemfLSyKwwbvrBZ8CroC1jHz_5oDMa4_2eMwvF44RVM0KnSfR-o8-YY1SUqlw0SSIibGO5MOuDX_WufSBWBAS3V87U0zcffF5Ip70cZZtQROw1RdFQBOzqVXX1m6I2YOTzAYPTkkuUO2q9fiJ1EteeH6FPhZreU4U8NpJAeKwKpEoN7RURrWk-iopwT7Zbupo3jaZV3J57gZ-9lUC_ejL0wLAOul9AXlBfhtroRLBfpoFbv1mdVblFeWp9KMwrHtqR2agnl6lzjLNR_bwLG9BvOlkd7U_bVGWSFYILUgNtmgB5CDj_zHYbBy5QYrJCD09peBwH2-QnRt3rLPJWhg0b6YASeSMuVf7fL_IhmAuO0hfqStBzVGfRriwF1Sm5lYRWbZPSgJ7co\"}"
  },
  {
    "path": "genesis_data/genesis_txs/5WKzIeQrDGC86IQvl2NhRtgPNKHGRA9oyjRByV1F7p4.json",
    "content": "{\"id\":\"5WKzIeQrDGC86IQvl2NhRtgPNKHGRA9oyjRByV1F7p4\",\"last_tx\":\"\",\"owner\":\"umt_O82prxNim0k3EwAyX0-b-JYBF-bdo0LgYcz3gwlakfO_Kf8XK8Nj8PlQDDOIIj58rRoWtNem2y3v5wjGwAPx7Ge1jFL68d8SMN_ZfYZ2vlyz-vOObJvF8QVq2ozS4CPXsLLXad0dY5_s6K46vEHd2YkEOMo2yvoH5WQQTG2JUPwIMQFNIxCSD9N0yqPCdXiF3tDgkOfZ8VJ-shZyrvuxD-rRR5d_-GeusTbgI-R_Ip-BDwtjRIi_2lVGZEo4-F1JzDt_1ZizodUldftmrp-5Sr_4680tQ90Pa2CLMeu0DgOMme0OzH9H-dIWHwBoB17dhBINSLnL_-7q8U6PEXyvKRHIMAhuV_u96WxdCkT2bBVIbhJhsn6c7RjfWhBrZhAcl17mrshxJFpQo9LE9qVlzLd34qVgzfFyZ4oPVT8nCFWgYT_CG-2icIvKH-t45astMJ5q4LqJxCNVi6ZlsN6P9H2eO1g6JUrtaXhK4Qp6tS6KkA5hePQalRFJyfAIPBSjXgqHF25V3661YF1ueH2zrJle6SmtOUDaHl2hj8PYlb5ds3QLAQJ6ZQSD7fI3IBSTEv_xzdLE5A_Bz3JQbkqbtjZS7El8mnDwsmLq9rL3h7kzu4uEGqxAs0jlYvpZ9Q902JMc7vdbnRbBlaSS8t5fAFAkW4_s1Cb2cp3_FEc\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"TW9yZSBSaWdz\",\"reward\":\"0\",\"signature\":\"Is4rdH7ECVf_9Cbwe912z-UC6xIfb7jLxE-8ec0tzU38LPxzuWhH9gBwvye9PSgmjcmqSxE_c5GXs2thjJeJiSZD6RBIfBDUY0y5Ot3IKrrq9wvIkgShqqjqU8YNW1iMUhIfS01NZYfY2px4oMj1ZH6hSG09VIYyrqgR23BHayeK_MHozY5FHBhoKSYJ5N6x1Hx5ke5rzHM1HjH6p35B5IqAvWwi707lMhpBoxSJYHt2gU_Es4zraGFdAZt2cg1_yCQ_TwgLUfwzfoiPPM8sNr-d8ueFtJae047ZodEK4awZCNA464QfCLRt_AkU1ykug9QSkFDpAJIjlSTWxizpKHGGPz3quVu9v9cdUsHcKDCN6YhCXSxDbemhMtM5cA48OXAqTCDVLQp19YeqfB4hOpmsw1ZLgyYh6dwrfvrN0xqjRnabRSfwbA8xBPNPUsNRJwqwpNt1MP7InuOOWF0M54tv6kQG2SK7YdVMmpghyhTff2zIcs83YSE0kQ0gvZp05k33rbOVsfaPzkR7dzYNEnQtUZlO9qi594MhyLXC6ODuxzOnqfMwptt9u6zbCyDQUm1E6NCCBTBvhUVK43ilz4-vyPuU1naxTDln9RlS-aE8ydzhxKIOh2j6i9I1AQyvE4437EOFKE8LLFfLM3QbDWtezGis2YJxI_QNYOh2lL0\"}"
  },
  {
    "path": "genesis_data/genesis_txs/5dsjbEwH2r-EWCkfOznV4JkCOLSK9vNY-0iqPr4RZUM.json",
    "content": "{\"id\":\"5dsjbEwH2r-EWCkfOznV4JkCOLSK9vNY-0iqPr4RZUM\",\"last_tx\":\"\",\"owner\":\"xAYyyJbHLkGmfza2uGSt58gNFQqANaPpi-WDAviIaY3ExtEQC_RQWBnOWwhG0o4qoUIFNE3sR7x_HX9MOYdtTjN8AAwq0vbIj9Aui8H-ka_EtHZZzYR1nCJylfhPtg5eLbQl1nYenbqDzR9My_hoddC6ZB79Bn-fWutVtalmvGmplpsth_ZXT8mn9E4yTqNrkD4yU3N5QHIv_1nZnktywUuwqpKptP5F1-KE2T2CF-xrEHqKYgA3Km1FO5TN3v5LWiAO5mS0TgH5Z6gZsrwbhL135f6vnZx-3g2xmzfkijI7vka5mVZb9HunKIEE1OVQdkC7jcgEdx5e3UBkf0rUvfbkPxEHptxdgboXR7mNodDS4Vx7zK-oPqDTABHialMoLqezjCcjJu6vKRWPfhiDB0YQ9Lyj5jpxy-o4RkEGF0wy4bcmLpSQw90kbCUTj_xxwrDljMfT-t224aja-baSw2liG5KrfSk0CEv6c8_9EsjY4rD1ONMtF8yN-_HALIAeG04vyQxAa2W-jGGoY4ih_EscAw9NL8Zb0qGNUFblh2ijjkbpMvd0K1V4kWKOxvI1li5m2JPyxNXR4xFfAHmaZJTDazLAf3jpgmv71EetA0XILPt_Z6DUTGRq9sSPgpcu8QmeRqk5P7XJqsnctNd5eGvpQDGm46OpaPjQ-a97Ihs\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"d2lsbCBhbGwgdG9vIHNvb24gaGF2ZSBlbWJlZGVkIGF3YXkgYW5kIG15IGxpZmUgd2lsbCB0aGVuIGhhdmUgcGFzdC4gU28gSSBtdXN0IG1ha2UgdGhlIG1vc3Qgb2YgdGltZSBkcmlmdGluZyBub3Qgd2l0aCB0aGUgdGlkZXM\",\"reward\":\"0\",\"signature\":\"qa5Zz1CpWfp2vO4AKehpValDMaGd2Acc6mUZ4Xj_ibK4COUCiUu7bSCv1EW1Bg-hplxuH1zlKvmcn_p0qwG8npcgl-E_cWrkdvrnyuoSSzHicSeLHKn4S-cV3T3ERbEpRr0tSNF1yZrxk2r3DwEgxNu3hDdqtI2ioWWQy6Wwjaw0HKmlzrfz-zw2a63aFCllyVzV_PJVhfwn8_lwfdzHGdtWXb7Fw1-Wgq5OLRpNF59G9c6bmf4jE8EXsG0vPTmsOc23Aw2x2O_gB9ZEL0uhZXJ7eBUM5lJbNmh-s_AUB9PM9zaMF9_cyxJ6xutk4xq-hCE-ejWWh55HP7rzQ07nrl3oZ-LOFonICgn-u2vWCJ2gFnNeJ-U5gqAAykS3gYzBiNGLSH4e1ade52140HGqt2vEljf-FXUrInbJalknRvU8-o9OTphjnG-76r8JZYC9R-CyHTnS3VJ4ickxIXwELAzgnDXq9NlxGyDpETgFyyp6b7nJNTJ6hYsxw_8JOQuFraY5pOuVBHNMneBV2my5sIjY0xtf_a-mb7ZPhzhPXCyjZq7XODIijTvuxA7XUvh3KKugTNA-XvgwKGRu2unIl81MqvbYbQdDt5Zixoh2OcFl9t37RFJBPqI3qEoMZqd7Irs5HlVPMKEa9H9Ar5VHVflN7aSrMpz3OdlMNtYetxw\"}"
  },
  {
    "path": "genesis_data/genesis_txs/5mt79Uz6p83vdLtYRiByyWLqLI2GZBeSTutDRmzw7tM.json",
    "content": "{\"id\":\"5mt79Uz6p83vdLtYRiByyWLqLI2GZBeSTutDRmzw7tM\",\"last_tx\":\"\",\"owner\":\"tgYzNX04Yk7fsisAq5t5Cxuo9Iq2o0uZwrGZopX9T8HL9mcxb5zzFpeWeuxWKyR7tQHGoNVvj3Ak7SBZfpfot5RB4x0T2DOEGGMASrQDIG35CvwQIiSDc6UKwTfQ7b5ubwuLYEddfyr4INTQ6s1MAunxJL0eTzUmJDn3lAR0TO6qYCJJF4Xagato6TU8TD67qx80T4bx91ubYvpNmMoTOqYZpRqyBPvnI42nNxeVmBQCUNvvkzvUcBPNTnWn72O0mM_CgCZ0RoX0Up0GiPiECJjgOQLNFAyz6rN2d85s7iXpC2g8gIME1WnBgLgwoP8OpFCYw-WSYxKWJWudDVhH43wBoGrtZooX4FDXMKH96aYCHDrPsJv7n_jgZWrkf75sL3eajGw3fvBtx_y4FVwdGGd99XTluiJqGvOhentXc3Rv-F_DLErd7ThKe3-Ofrg-CESH82--E0VFLdI1eOY6Ad45QI96tnwjQzEgvlLgxzpaGphhCoMgnDEVR5D1H64Z-7RSsOXZ_wP-JTxKAuAV77Lq5i4BihlTtQaDFXDYcO5KWX8Tzr8eoTf78g0ZaFfoPv7FflGCpioQwiJNrQLcgPSreVqamhl3nrATJZvabY1XtpYTGgSrqIp2sA6l0D18BdX8aoJ62tFLzPPCMxXJbc9hdhIamzSRmLj2bljUXD0\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SSBidXkgdGhlc2UgY29pbnMgZm9yIEphY2tpZQ\",\"reward\":\"0\",\"signature\":\"U1bZxa6z4R6s69-rN-Z5rSQLm8JnY-ld_jn7rDoBb-2OAHrUpVoTVS4h-UO6qHBpY2lqleYn9z821Vzz_kjw1sF9c-eTw1VzIrdJwoVdhXlI5H9oEfmgnGpmcq9xI-J_j0ChPHzz0ecNO2-jhyt5MoMePuMqBp31EM0h7_RsqAPuePLf-RnKPWSsJMXbT8a_KAigsWagXphrH4bIKUuNp2pjyLX9P_awKtI7kaqwhwF9HI7PfFfga6U4Ans3gni62FHhidnVQMcsJ6W0A_prWhHVrtaM8V6Z-qWLGLUMnaRIWJOeE82Q0GSSWEJZiB_1kb88msSKwKUJ1qkx9aZfQS-CW-oF_SstRFZrMnosUQpS9wj7OLHqtqSHTcnEG0S0RU1ZvaKkLZydwbbVu17Bn-F7qqs_SpEbEa1K1DQ4cj56SXGkCmU7Qtjp0n_7l03s3gFKNrjj97_0q4-Z1G04DiTNgkEr3kIoEbuunxR4Lf-2hzA0LfyHHwOF2avdwelProA1xjSZhfsF9Ef4V8xEwh7xJcMHoB5lPYUzyjzxpZLw-_yBGT2sfwfi-mwSdq56y35cKhpv7E0-roMwZqWY8kuUZGZxCpPqTm-Skno4Naaz3iaYB9ki60Nvd-D9rglP-Vqe1KubghKV-QAv-rAu2EnpXRmOZNBrSV8qMkBSwBc\"}"
  },
  {
    "path": "genesis_data/genesis_txs/5qRekKepIlFbUhGMq_nNy89bzx_K44e4GmUKYAe9MRU.json",
    "content": "{\"id\":\"5qRekKepIlFbUhGMq_nNy89bzx_K44e4GmUKYAe9MRU\",\"last_tx\":\"\",\"owner\":\"xepWNZSy05zd8lt53_3633oSfQmqAdmF4Dt-0_754Y0qaqVdEzBV_xpgiJAsBMqJw8Iz1Wfpl8bnBYsFEMcuUmXbKD3yGZYdqZ8DDkuOO0BdLwAyoS67epIoH-i325hL80pRWOYepxIEMiTdY75O-Bfc-dnT3Lq6ABkOQ1e4buSn4SJq5GoaNYgaGmnN-5vAUTTIESaAqTwVKpDaDoZOeCdmSShK-bidkSP1sBukI1yVALp_PJAfncbl28iPb6-ZOkLxdnICuug6QU6oCcqMChYMNbF42vA66DUkPXmFbOL6IAqBWM8HenMEUeQaEkBa7ssP9jAdGMYiOGnypYBMvZyioDf2hVMXfZee7eTYNsNYJMsNJZGduxdJULJaZzTy2l8-r14KzJRGnXgNG3ATK5YPoN_yX3XmtbDlAoJiI2yVm5WD6pmfraF0GjFeWlqydJzQjL9HvRPwf25GxxiVzPin-YY5gIDxFJ-Zz6VGUvev6Ms6zSSh0TxIg_by8iJIDkyTwQG7DN3Z7wjIhjeKsHHhRFBRtzHcMWO7Zsv09B3QPXEuFsCQuP4xKkMAoDGanyPHeIiDadE3WEzo1TGyKRwrlqLDjv0OM3yaTnCxFNTU7baCFK8Jn_LuPIX9GbywJvC6TrcWmbUunOmvAcT-i3fqYPTBPG9fSB-JINcTd9c\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SSBsb3ZlIHlvdSBTYW5haCBNaWFu\",\"reward\":\"0\",\"signature\":\"TrUTQ15dW7zb7AroEYWKR2RZ-C-egZuOgKHPKygEA78xvv-YkedgQGiLIhOx13S0VuFteVoYbMIPba1zeCm44MQhE9F7Rj6kCmqI28D7L6oMMeUK8uJnHwVcWk3kZaxK8KcaviTgr0UOfecRLADelpY9MyU-bcXysNjlgwLwhfM6oU4jUdiJvzpQ2OjnwIo8W6qrx1QqHBkwe3yargj0JepVXZb_jXMkx8HgHlcARr7sYcovkt6KI6vRsLN2yIDL7oLGyEyAoJGD62wmv9cxJSdhqp0HGigqhvgDxepgGXeFMENYhvTFdK_BoTqPUJ8GFqMzoo5-NJz9t-cCSZDr3DZpGpa8AwJ0rO-Hgtz4dxugwCQuBj-2b-OTHVNXVvxIXdIFdwmCqqZdw-2ykldZMewa6o8db610u1gxbR2w8aeJG59jp1NOXPGJIYM_RW4exHVxrfnf04Xpjfv363MsIGfLC8aNp0eWijBYz8daGUdXv7IXyhvRwyJ9VUpRHW8CTPoGYAcP7HkyV3FY505mkLF6Y3eZcGdiVWGQNK3V39f3LVw_62fmHZ-A9GeczJbhv4E2ophk-dP2B1p4rxmZPr_UBllTR7qZmqfoUooIKBceEIAwH-lkkR66Ns83tA0AVTFEkNkeDJbQzNjXMfQTIwmVGf0-6xdrBeLI-A9Zyf8\"}"
  },
  {
    "path": "genesis_data/genesis_txs/5ynd-L6Z1vrR7Vlyr-rkrga_Jw2ibALkIgldNmsVRcQ.json",
    "content": "{\"id\":\"5ynd-L6Z1vrR7Vlyr-rkrga_Jw2ibALkIgldNmsVRcQ\",\"last_tx\":\"\",\"owner\":\"tuTwag8rVClQhfnkNfb1B_HLAYK2p1IbDMvWPnin2_lwRkFz7YsHkMkLaxcGJo9M8N1i1adyadnizF7vW5zq92W7pni552gK4dC3BrpUB6bSzrGuRnSr2y-JDRfBLVh8usZt7kjhfXpeGO39smc82oTAeJfSMm4fVo_7sbSLcEP4oNFKqAEVFXtfCPmYoN6aisN6lcEkcYB2dfZTcTeV6zAv2tdbQax1S9JUD7DXcxZBVwItjd71E-HPpYEtORmuMpTZ87K6DCsPTNdbHCzfR6Qwx3_jfBRTCDs-GAPZlzZtIHHIdRh_cGYpTEjRljjZjFtKz9v12fWBl7T1Z_IbhJNuKWO2aenjWFUIaKjaoKIj38ZGF-YIzDbS6CPr_sZqlgsgQ1-ho6Nv-pKxPRZC7ggViZDLnY1Ch-zxqhHzPLpuGGmbvtcdSdv7yXclCB-_a_VuOlL63p8FwxohUKDPQ3pVymKWYgTwWjICcQBi3EBbXKxud9DL2jM1r9vVyKN-ZhixYNpB2p4mPSXdolMFcmMbRZ86_CtQ9WPEKJWlHOBpIK0qhxrBGF-njraTXTG9v7w228OOiXBIh1Qngbh-dO9asWgV3rjW7efTAA6NV5_Q5O3pdblaLCzrlx2e1QkoqdQRtmvwMxCWj5Y9FjgBHSp6l-h5IYJMJWcDMfBELA8\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SSB3aWxsIGFsd2F5cyBsb3ZlIHlvdSBTY291dCBhbmQgQXZp\",\"reward\":\"0\",\"signature\":\"ei_g2MzLGEsoi8nZ6f75r32PbMteP4i6Ot7MPJ78WwWsDHJUJfAIcOFpvSNnDdqbHIJsXNYL5ftAdWq3lJ5H4Va8MhyWSsFbRMAnmlcoXboCmAikoKVJqv3ymQJ8iHCM8aEAoHv6CG20cV6Bep3TDl98mVTiOESZ5RdsiV2nCbJK2-pNgqKQuiwlmSS7LG3d6-3rJUx2zZ3rqGWOnS4YopV_UmDhFqVeyBOfz_j5dLWcrd5gN6SZdTKr4ZXPheyiqwq_EvsJ7OKcAt-NmxXK8qOZ3qUiIVJhb9gNe-SHg3Pg8poKKJHwLKsRP31RhvKqK2hyVwHqpb8REnNxXAqTAsuqdIhAmUFZsfn8KRJMmwJeRcrYeenFyq-fxjOUTnPgfosY-3ejipGVLljatPa2aBKlXzA0Y4cWqFIIblpt9HK7992ZOUPZ9vg4WTZiz7BMjXk05vY38k72ZPFvvtoxXNytFbniGmlmmZSf80rQqnKy5IS0-rjSfdp-fGkUX9wACVTQZg4nrF-bCrS56e_qXHTP_c3iDmpa1DZmCc0tE0hQAPF9cCsNMnvFifGyQR823k5KWgVZuKGACwIGNNxQCFtBEGM3G6LeqjLZkP7g65eJ9nFLvPvK9JfDJyOdV6ZqrMBUW5V74PwTuMI8ecGhXW22Z_gw2bQfh9hI7xGFQRs\"}"
  },
  {
    "path": "genesis_data/genesis_txs/6GNIVQ-23jPJTxQkQITbSKE7SYm6J3MF4qbSgH3-AXU.json",
    "content": "{\"id\":\"6GNIVQ-23jPJTxQkQITbSKE7SYm6J3MF4qbSgH3-AXU\",\"last_tx\":\"\",\"owner\":\"3TorrdKH4wNbzPscxBKf26k2D9gyZ6NUJG4ltgNFCtxDPG83Lv3YSaT7l4m1uUfndLDOIkuiPpzykT9r8as-7BLJwhXFf0YXKVO18kxJFE5JRWYtFKcE1anzsEPplTMUgg48nyM9LJV0cnj0TmNZIwjCYGKXaw5R42UMxHyaIS_OxUx2tJpppZIwJlF5jFU2bltDlCDRmZOACk9uWz7AYEp-s8LGS_IOGRaknVyGyINlThfWfx1-q1Ij4M9R2LNTwvZ5EUgVvCCyS92juV5yXHLwzq0o-xpwp9KW5_7jhyYoyEF6TBZucmgrTbJTLSpJV7pTzm5UzJ2UhhXEo-U8CWpH33ZvszN_DY5ZYIiSPZNmoWlIkzQnMGCONfxvvNejYdyBOVzR5C7zWiC6SE51zoh4vudM8IP_zo0_DGFuGLFivt_AUuHb95KYphqEyRE7DK6YF5dBL1LQuW6L7oQvOvKhDsinvYypYj9cz70WiL0YNyzAeovzpvnjJ5KZQUHpfdvnn64g_edxJLdIv1RCATjzGnbGAZ4WzSbpw8gzxNItS8l9VjmwffDdLSYz7DBxa4LAvGjx4S0WIK-hHkHjDrBkE4LKAXbKMlAQMoPvDvgzwXHiMuVDMyHATpoTiNLoAoNeIyvsZCkfoYseyxhQV93R_gkeH8Gvn6coXxElBCs\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"WW91IG1ha2UgbXkgZmxvcHB5IGRyaXZlIGhhcmQu\",\"reward\":\"0\",\"signature\":\"0T64l1CqrPSoXQp6EBOZktyV9ZchcVOLMzNmM2UALrRNIEGFoduP0aF_hp3Z2byAwmgKtAb6rUgFstcUu3c7ZOAc7inchtRLkvSpDFbud_Dy1KqVbbMAwzCFiMGqLqilWCwqGZv5HLSvxLHYTerOb3InPwrd4nCBpGkmHB0hFKxK4ZwQGxxEfBRZNL1B8e8nEGA8cr2GnHfUawPszLU5usafV6DAaQgXWz8QG5K9WFgN9QupDH9SIG5uF6uFFvzL3qHl9qAnhpf94e9o8OCOtPjqS332UQc1PifNjqMFgSZqQfyyf1VrDl4dk1e-CI06QOcghSGktWZvu2Y-I7_OU9e-ySA58E2W-998rkASE4TgmmKufo6K3_EuaB-N9Of1DH2HzciLaTo7pi6QY_qAGbofbExa3nTN8YrRTBVZhqf-TZwK0qx4N08jsbduz49R2byI4bXFK9zOlxTlnTrwJQLiDtl8W4mf-TO64SuiMUgHeMCYKJKwwuyDVmfggN5DD9tRmb8vHR_5OewXcgU_-pufqwiucbY2shcz9NswqaWPgUzU5Geuz3GU9OGt_o5b8hi5GND8jdKVBp6HbILX4OJiZhnZfB16gDtKRed9pE1I7u5WiiuHDTcbNc59D_kOyo_Olkwt1mj9ecalT8JZFBWJAUkXUdQxcroyCc44eaI\"}"
  },
  {
    "path": "genesis_data/genesis_txs/6J1sN2nhGpqe9iJwgdfnxxCK4af88__HoEG8MLeqtyM.json",
    "content": "{\"id\":\"6J1sN2nhGpqe9iJwgdfnxxCK4af88__HoEG8MLeqtyM\",\"last_tx\":\"\",\"owner\":\"6AfMzYXd5Ul06Gm-SfaOFJlqD7qWBdHVT9nVdh3LxNaCjBhcEcd1MNV-kPMm6g69f0QPQHahNgkNxTFbctY94Rh5mPlS_HH8wUvu7gCXEjN1llZFYyxtqBganVdIxxZfzU1gVX-od8bT7Y2EDtG427c4XLH1ya46Xs3rh1k7gAi9dn7QSgITt9_S1cL2cylm88ejTHibZR_tarn7wMN-3VF4VmD7idn38QPgKBOfY8ZmLMw1EG-AcvLzlQDk7eaBU1W0WqkGr-5v79P_j2tDlWwPY6jg9qDRUfIt1DgA7Wm-8IeIAyLv3ikq5VfOX3_RCpbo9s2E9FNXxIYsmDjncK6jMXUfDzZ97F9bUkV4C0sqKuTZUN_5Tz2pyWHLWmCpqjOsi19p1VtdeGp2L9ovS9Vpkh0Tdk4Vz5JyszfEkJbqli1dd5N4qMs517OmbFLai_BToSXvDEBtqv9iPzXeYriJ2W4Ky5h2lhozH9oWdBRfqwIGDh6n-rODHR1Jwsd50wfge8K8Ygl1sMc-VZ5dbbMFrWvAkEb3QPT4CRF2jJTvuTuUpXgcvzTc8QLAJxGqHctsSF8RSyJK1XKaszw6hB3yAw9pMK6cyGttmmFdYmkZLTJeIY-s-9Jt6n6gKVgVDI3d9XuqRNgrdiQsbfJvquUNnQdobTgPC0puPKxuj-s\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"Q2hlZXJzIQ\",\"reward\":\"0\",\"signature\":\"rIi7iSK6A4VY9xhpy1lG92W1gwUJfajWSI4xmmQOjlDkGfUF4noozfGptiU4boYVq2cTj3fpzZGLChG2YoFUG71xVt9R1B8PxKeM_zsHRzkVaXtsmPRX5zDrOvvBdXA6Hjp2YUhzF7Xjqtw45uUbI3Xyc2Upu0irXDH4_CCSorpAPpN_bq8zPD0Jj_cJA-0AvduiFNpWgwVN9dEAu8d-bxIR8ZM7PpmswEsVqbyEIs02M0rcCm5BaVZopIxW0M4MstFltji-abaa-XJv8JiROBJ0w5t8YDMj9OtKexDPjB5hZH6z3qwcJ5Xq-mEQtgyyRMfe0EiBUY6Q9i0f31Dc_pWGcrB5ZB66I7ToWGSI-a6zvyBehMfkMwM2xqFqOZOkq8A4y-IanbKK4p4sltlYDlxcEvtUEuRzw6AXGzROUHgWA5B26NZfC8n1de4coNYnwFMxQ_zXi756aUoSn2YXsdA1IP3eOD-YP5Zc3EZK6IVAHE7ysRylEpPrAE9wZykAYs7xxkFXwSLyEtR08bonWZ10kUACK61MZpbqZwAL1uitjMgE0CpZ7MLhz5DoPDBWVwxUix06Mbt6vR6zBLopGDQaCs7yujykva4aJM1lO2AV5IKlsZ1Gwfv8_bB509qnjwpwE6BUrfvn2n3zf8283AOVX96CXcuDQJp4O9kPq5k\"}"
  },
  {
    "path": "genesis_data/genesis_txs/6NaT-Mz8QAiQS8atFaOu_ezqZnfu_XaQb-Grng-hvHc.json",
    "content": "{\"id\":\"6NaT-Mz8QAiQS8atFaOu_ezqZnfu_XaQb-Grng-hvHc\",\"last_tx\":\"\",\"owner\":\"xYn4Hg3Ghcfv6ztWqfw7uTU9OQrnMUdprDArOYgPuEz3Un0E-Tc1rwkNtT2Sr9sHyhXalO4DyBXlvUwUCcxwnDCg9r0nbPzDeVDE_TRpP0ULq2gh0t1SPPLmrhYgNn0VgtshK8zkzObCqqFjYsS9dS-Fh3gGa4hxkA_xIpIg6Hw5SBgbRHdRZ30MNsiLkghow4ONM6bP7rTBnIM6TxTGVYTF7Dqh5pSjUNmB9BqJFsmVbHtk272KgezvM6XDIV4IGpmnPBIOCQ4htQY5DRMIFkcyMvWrnafUjUBNVVm7cynJPFrnyc2wdVHwyiM2Dpmn8jssNWxnucy3XmOYLPL0FdQYTErYqR8M4r-tGXVlGBAcyXcEopQaw7P3toh5iRgCQrNeAmpqiu_fJlHYtSJID90XCgFs2YG3dWnUddnOqICb5nrnv3EcfA8EmLB-ny8rIGbL_R9L6d3szCUU9jdRucpTsremWFfnpA-jaIsrmvz0o1xgT8s5liBU8dEJPUtXQDYuEVIVVkzfmoePXAEVNg4OMM-_bKKRQ7hW96qW2tTee7y0VHuB-LZ_kCWijJKwepcEnrIkI3fBPCWvJZNvg7SR40-1yXUs_cZSM-Gtft_Pw0dvEbPX28k4uD1gYY7Z6UIXswdNr84eSTKd7bwsFX_7XE6-bw6J8MLWX-_YK_M\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SGVsbG8gZGVjZW50cmFsaXplZCB3b3JsZCBmcm9tIEV2YW5nZWxvcyBCYXJha29zIQ\",\"reward\":\"0\",\"signature\":\"XZgzxYqlw79OcUmbddhSynL_MUb-ved77n0XTWQE7fOIzdJo617auX40ZC9ysV40VGaKuE9yw56zzENgq0T0DtsPLTtxPEDnID7-cl10ZrkbiXCpYIdY6Ji2oXMhElPGUC731JXIkfgojKq-XqZM1NkhB_lsKlxTjRywVHeyWVEqNlACHZ3lSotFVtlb_XKnhcWcW9A0YKcM_7t-RFnjhW8z9nksXevolRbo7e0A2SGIWRWeHFjnySH9KFLCm8JPPQkEqUIpMiHE6uqvysc0WCDv2nkYrq9ft2P0_LCY4FmcqiCb5Ptrd7Sm1MFPM5VHr2gn7gs6aZ-DygMLXzG7ALKZFnvremPCf2YCGTIw1DZ3bTfPM4QwAaKXRoVpiAQ-bJaCPhojWK_zrYm4_rskA726kqcesgWSKDZYLQ6EBTUMNrlZ-AbUem6SHKtXAxErW4gsnwpESqzE6psGbCFt7qWX0IvIkQ4oI087M4RpwYWc7x31KAAodLf6S07Zic_xu-U1ei4x42qzFncT4cDm5y38gxvSylHolCPkHM1S4_B64QLrzJhGe8XUGqIxuSmWjGByff55zS97qg4qa2V_A66qo2rEmPJkJhCzvKuESqalp296m7UIEExfNC5msVZy3IenAuRv9gSarg4SiMGnd1CaQOth_Nmmszsj_Pm0Zbo\"}"
  },
  {
    "path": "genesis_data/genesis_txs/6YbxtptbO-sidrnYdgn0G_CiNBh-az5ZzWrSCP9DYKA.json",
    "content": "{\"id\":\"6YbxtptbO-sidrnYdgn0G_CiNBh-az5ZzWrSCP9DYKA\",\"last_tx\":\"\",\"owner\":\"3D0CoWo2whTSsWBK2jKXIZUxyRcqMMdL4d4WyqHVcsR2s1NDrtGyqK_H5frG6ehGaL5Q48O_Lrqnvb1RFzE-PrEG7Mh7-t12fBGB0KTrLzB-Kam5G7iQ1XOrXzz_hHuXbtpzb_XbM43zkDY7YOGVVcMXyk7RCu4u-oK0GdudkUzrCV_LGArZXEc0iY8HWZ9cEGrrc38PAU8I_meOF6DtIq2dz1JtlNsiO5HB6PVhsPDadSzcWInHfkxhLGPXJ9oz084_4gfrK7X5M3xVr26uw6STGvu9w2XLi4rctYfJbX9fZLzJJ8fGeHKjWwRE0erRTTZxNG9tEPXMXSaSrTApqz94AwPHNx41U9erqSrDnjY7kC2VFiFVJeSqeKvvQui9-2IirHfLCWDURqUy7M_k0m9OF_EOcE3UGQBXQgnnCkzJQ-Ux5eM9-vrno7jopcxh5F460WDvRvINkyltBxA1I40Hy7nyuMPBXlJpyBN0EswIJ0M5NT5cesLZo0caxANSFG-ov5WMS47U8GFRemaM87UHH7lrtwf9llM3qwfIkGdaj1kqiBPq6ebxnGQmvvmVKDaKNe0HkGuq_PYKVGsZMwDNqNEidmzScGWiI903aemDb8twAOpozWnX3tF47mEYm2OUQ-4d5U88I93ccNQFt9rcFrwd2yGOT9PF8clEGv8\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"VGhlIDEgJiBPbmx5IERvZ3N0YXJ0YXlsb3I\",\"reward\":\"0\",\"signature\":\"agVcHwLyejUWIveBF8AQe2kaFy-1jMuVn1jnZxoTPmNxqIga9bXxljJMGapf-KIRvAF2nm-tkh7sDtfkC-Cv8Ra03wqnkInzu1H3iCM7DlykrEwH8VpIgG90BV6XZ0rFMeJapLSMslLAuFS2bfppPN_D0XF9hRsbNtAyhpFF5lM3vaJRLscPDMA99bAFOwZkdnxiZIWVDVsm27qWi6DyDSMuwMfRiB2F7u7nIGE55L4egUi6_jCQENhtdTEoXipDEM2_pdnpoiFO8Fg1FyDa61Zsff_FUnHtaYZfwsAFZKNDg3-NPF2qQjrYpTrMIIgmpZjW87CTaegRbe1AUrLKNhi_4zZqBRoDAqm08mBQfdl4h0M6IX-yCFrthwErkV8Go9LmtfioCf6e2jGYZXDPwzl8HYAEECOKTyMR2iPF2I_kil_X5EiZl3DXqlKD-uieurNRHwWtK7yAMzp2beeK8odkVzQ8wm3m7SUN3NyQTjJ3ZbIduI_OyvKtYTwxbftsOrpttTQ6hda4rkvgY3s_snc0GAF2-OCWA2Qj59Zgaqm1RCo1TGrmsFAbaN4mKJdSrZr84C9WPHtx72i76ekRF4TUtm6_tOU74AZtsf2LxQT2OwHMY73O7uvXwS7wkvACy9nZsbXoxd1-53neuYw_v0jCMonjBSkoKE8F53eQJkU\"}"
  },
  {
    "path": "genesis_data/genesis_txs/71M1E7A4e0PFW_6C0gly77iCg7ykX17647i00eEiA-s.json",
    "content": "{\"id\":\"71M1E7A4e0PFW_6C0gly77iCg7ykX17647i00eEiA-s\",\"last_tx\":\"\",\"owner\":\"qGrm2wCIkKdk7x0ncVlwHfQVHqjV1Kb8GF2RyOwJIUvwShOuexk8mB29VDk1xQaCw718nX4iul1o_iN5_lZ9C96NlWIP6ZHveiaq_PB3tjLiV9IkE9nC9qZghw13PxjyPprr2Hd9Q7VDSpbPMCTIjjPVKIfaMSNiRsP4T1Q8uViBXsectothQLBcBuBT-ZNV7Ljw3jT8kkKyTdfqFn7M_KOL-aI_sT8glzX-6AmD5IYEBsTSsY-pkLy_9NxSiJvlC5dMGdXeYn4gLFdPszQ4IDB6Fo_6FOEULZXJWxrPMq38Kt2uL3mj0pjV3tnljDJ74sA2fC30hUX-YmuW43gFi5hWQdnIbqw5zakZD6LDSp2bEjOvyuDe1DCTP4FCEsV6Q7UwDHxqchAVXbdOfAJT02xCkT4e-updk3GF8ObcghQVvA2LH4hMo8x8aByiddBGJUccOZlpAlxfeMHY0wqRzjomAF1shzRyyyr0CQBdsXrf60iGBTdy5jHDXWN_6FA278HwH_QEFXGllgiBpyr1zmlGYpU3PqIwLy4H0W_F895AyRO4RPy_flIO7wljaujA6KgJ7c_6PZHfoN-L8mHq98k9Ppggwqzfal5X_pEGwxMEODgW0nPcssRNvdG6M7TXEFbSp1RWV59YZOcjR1AY8iVASNewJ-vSBXBy5rosfNc\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SSBob3BlIHNlbnBhaSB3aWxsIG5vdGljZSBtZSBhZnRlciBpIGJ1eSB0aGlzLg\",\"reward\":\"0\",\"signature\":\"jrd3Yq-sFOHhbVGDkO4T2QdMmEgr3zb7rC8xavFFn8tWCOr8Ier5F_sE-BBNswfbfBFuLolezV9bf0mU7tbuidenJxNJJQlm9w_R2lUtK5hlOGZYysFKeNAAxUfHZB3YOHxK4G12itZ1QwBf-meCHfHkFLAW9mRcqfslrh4l9puWu65TIVd4vPIPD-vdZGJPqi3AjxZhRr1W-sCbpECJMgaDMqblAsox5pbq7y7J9-x1JeOLqj96i_lxsJDa-WHzNpZXwSkhCKfh4SJKUUGx8bz5tZjLuVjBRneCESLm3mU8UYZzImN5V-CtHUpbpAUq3v6km-T23cx4SK6vUa2Xk37HeFPfVzQuXWo7DY4lPMWVDBemLi-uvL9VUmdLxT8iuqFQImQzSkx51SifBNXd86SZr9suCIhANMFZeLyimhYcLDKGiR6XYtO8PftdOv07HLJj6JqEU6tnIrT_9KCvI8e0RbppDB9_53pC-DJ7l2UqaPi6DTphFTX8SBnTgqkaoq9LVaow9q6foiuPwz5ltV_4U8Yx8IADQQdDHM1qc2se0SwBelafBD3SXCiUk6B7eXH-V9D410s1SNmn3Ku_7BIez5Kl2WVeNhCOdQlsjsaN_oDxVjOleMMM5DkseNa-l7t1-uxovLPOx8fRm_-_M2YEsuuB_bY_ItvoJ5ca3Q8\"}"
  },
  {
    "path": "genesis_data/genesis_txs/7SfLhJLtevo0zu-1bo8q6zX98WbGgpDNuY6PXbzS_j0.json",
    "content": "{\"id\":\"7SfLhJLtevo0zu-1bo8q6zX98WbGgpDNuY6PXbzS_j0\",\"last_tx\":\"\",\"owner\":\"1TrJEnyg2GF0p0sXt8Eu0XSjGTLeaO_4Vft1_2r0MdoxYdy4cBqFhEP1xvaRns7D3NGMR-PVZTMzd9I-Lu0HkPCX6f_g-uJ-Z-2rotNNpDHc1xi0fdufAyBQgYS2ODc7S-MKkno4oF2O9HfIOFINkom1GwS4XkZuhGD4XyXJlBX0bB1EhYuxBuX0uw8ZUnVRbhKtLAqjwFA3tzZk5Xw6WUT7R83UNYGMObetJd8-5R_J-UP_XN194xN_rMzI8m8s7mkxuFCBILuXlqZWTtfsTtR2IsnfxCujXgR_vtPtrg6xS67WeXS-LyME6pGhiS4VjfQDaDgi9EfC6cnyIYk9uVcdRZdhz68_XF5qkRDU3K6U7rzlQJ3aMzdK5zmBS4v8Cl7mCzkYNPps_GEy0e42lI6etejCd0K33S36MwyY4p1EQYE78GdrrBoOS3UEG8GGStufNFDTbe4OBaryz7GjBFPkiatcMVN1k1Gbl28oSnimPXyNSHmr3YvjCMGULlmxYvsZivYpSnaFrUu5Z-Gpk6bkD6NPSBXt-vJUUjC9EAplg1mK_ECgcAqnYxLuPZFsxaqn0ccntX1JYiSpbGX_V1UNiruv2PKVNhmjPaf6p46n3kNSYxBRt5A8QX8KQcReG5iKX8VxTFu_5N_v3QEyiFtfzo97RVPyBOvpzc-hciU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"Mg\",\"reward\":\"0\",\"signature\":\"x5Y_OgfDsk7CfwPtxNWiki4hl2BrjAWeV6QyTPVljUKesM4K808XybnHUymGk8Q-6b_lFcCjI7VcvZ4m7QWg8QhsAH-aeGlqVIYhMefA5iSz03T5mbfYkYvYF1byRcoZIuOhPlgGh1VtyInigf6INS8w5i5ypgquZie01n1cTKBMBHfHjTn3BtBtSv0bdJjsDOMoTjvi5N5TxpW5BqL07hVppyWWyAzddfMq3nDbUeruLN3ExZf1ZDC4pmkojj9SiXE4d1SXRGQI63AD6f0qXObl1S7ISAnLC-KVYJRP82r_zmcrTPgsq-zUNU22AkXI0i5iZ268VhBuEZDk8CXsEyZPf9Eyt0lN2GE31b03eCw-Nmlpd0pSTQqahpXIMi74mdolMlczXUPVR-Q-Ez_cncTqHdllQNlC7UBvMiog2bHhOt-mBUAFYSkJaw81AytdETORf_phX3PMeHbvljiLtTicj6R0W15CK3AilLip8WbUe7InlWLt4gjOsyJEBKRFxaoxKBH_vHrNjVs2K5t-y8_VcSEI6UrlwLiHwBZrGdF6WLdt_yJTTgKem8dOoaRgKTjD3JdSM6JydqG7LNC2y_QTAMp0_Dtoj3Fibl5xrdIPSy2W4ub4p9j-6JKytkWj7z6krnfTV177qJFOuMYTqGK3Ryt_5eqZzZluflfO-CM\"}"
  },
  {
    "path": "genesis_data/genesis_txs/7kT0is0QnxdjqkPi0BKamhLW6z6_SK55LMAVKQC6F0M.json",
    "content": "{\"id\":\"7kT0is0QnxdjqkPi0BKamhLW6z6_SK55LMAVKQC6F0M\",\"last_tx\":\"\",\"owner\":\"04I1VdLjKnBACNeXRdWtMcXlBMjkVbyzhyjvbcnzaeiPFZNCeuGOws96dFfupfv9_vqs-IOuZALCcK6sN8pBSsSin4MvApILCb0edYkb57tae5WseJhQ8dl7JBjSq0ThU06VpC5WBhiMiFGRXuyTzo8tG_j3W_Kgs0jG-XRy3bNQj_vKgGNaj4mESP8HO_Kg_laGYAPmsjZwU5QobCXhZe1entxgbxa1AU9OnEzDG8InLR_aGBiUyRvbdbLrDZuk558xPdroQ-cT2_dZoQfnydf1VWveQ7FxM_Aqgn4z11cyOLtXBixzAKWXIEuEY4JnnIcJvC-WsNRiOjL1atB1q4r5hqXHWBGf32POjDt_sbngqK0q_Ff-QDcu6ZGCZu0Nv8hm_3_t9q28YNz9fRfhtpv45ditlHvnNpfE7ovZF63Tv7fBugpyo49psmqzd-s3n_NYzoAlRKDx4MQXqZZwqYSNxQ0Sxte22fXQ3ftgbedM2znJUij9Lnm5qxxyIo9yxz9P2QPjfp1o4MGFgqGUGG5mVwpnFsO8sVMfSxoJvEMWvKsyPhz91Fj970ciGAevbmBPRHNgUoV1GaiROC8jz_LVKpHfRFONfDk2Nnf4tFL4W7ecI3KmLPcEeTG8KA9uHz9_1gyLXRkLL84b0dLRF60r2nP_5o9tTyvu1P-Pfes\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"V2UgZG9uJ3Qgb3duIGEgYnVzaW5lc3MsIEJ1dCB3ZSBtZWFuIGJ1c2luZXNzIA\",\"reward\":\"0\",\"signature\":\"eyk8xLUY6geTpM1n5gacvcGt0lrOtdoBX7yIBwgM0mc5xE_uPO39SpnMU43UelTLdXvfKUFEjwybLhrsHYqx-YMz1D_UrI9srjcV2yrfEGVKHqoXiLPiwi70tTSoiAF11Ea-zIMqSQ0LltlXQwnHugt1AxpHKamUmwfVJdavqifHcPieDI5I4RDgGl1kqYjka60wdWX-ClyXsptQ8NB4NBenp54QmXrcFV7IdTvlVWcLvpaJ_oogd0rRt968K-W9Q9FIGRayr4FQtlrpwGBW6KhrBcKw-9kIMo6yBy_oxanWNgenQduTWYRwckJFpvnP_c8OfbL-MqoSUAxwywQOYB0e0NtCaQXTOr0Kq5_xB-xUpVWvRjOXrXwo6nwy7hCml-ZaZk7eD7pHElnK5J5f_pd4aU9jqlaPVvVJNw37WX47kSYDjuSIqOGCPCWkDfFS5HPbnJWrznseUvB2rxKR-bnmeO2MkfVv8WablZMnkASicRGGKomaBsMrURhbFyPzRIhnb26fbjYeGB2eO_RmwWRZyCLnCdpM8DQAXfj-YeaZ-lVbeuwlg73TLoLNME58Vl1rzjnNkgNUQtm2GlCjTnnK2-dfAiDXDSJ6TswT0RkKSXYaE4o1cawDNsdW1R8VJfFYSz_Y-UxL3csOhiPK9hPaVT9Qzjf1MzrqpOnJZIQ\"}"
  },
  {
    "path": "genesis_data/genesis_txs/87ieWrloTFUdW7YjJqJcINd1M_PBWCzA1dIRFzF4RKM.json",
    "content": "{\"id\":\"87ieWrloTFUdW7YjJqJcINd1M_PBWCzA1dIRFzF4RKM\",\"last_tx\":\"\",\"owner\":\"vxBuootRdMb_4owb0cJ5VhEwGKViOpjvoiVSBzaUSSovZCzxSs8lhKOKRCUqFc7GNbfwgcvLDi_50LqlFarmIIQoaiPNYE0-kC6hCMQNPLx06BfheQpSHdIyRsOp959CxOLzu33BkAG1NfPhsHNE_WgQa0XcJuqSle0B_unjIHsLExr_nll-5hWgn-OUBWEg3SOjnNa4ff_QaFPPaWTX9Kw9E10_hvApiJDukU0RzQp6FpxPA6CDRWv5JVDkI3Vj-EizCKBNgYtCm8JNZ51vUg3hHXiv71pFjh6uD9jnvKkOEnApnSn7o-H4vo2ReFA6hyq6qWz6hXxsnxnZEfNwBGtTIeDWjDvm-Zppp-84ZNbfikKM5s9x0o2h3RAOK0i3ScRI1WIzJqxCDroNzX_ev3CIlnccKSnz3A-5M1lK3LJSV_fs1_vpd9Zi4HPTRbcR6jP9CheHeKaDkwcpuvI8emJLNCKITHh2OqDiz00FndTXE7uFOKk8ZMhu4kcTsINST83dj3PLzMkyUBjflkEatcXygS4VRbEJZC1J3Mpna7l2RvAK_HHZ0-nuYD0sFLrniGqImDFzTVxvzBu-kjnjS52Jpkdlr3Thv3ghObsh-xUE2s_0dGesr7nITUXbiuwN_k4oac-CzKSRjUkciaQQTsA3GchCKSksM0gq-muspUk\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"QWxsIHRoZSBiZXN0IHdpdGggdGhlIHByb2plY3Qh\",\"reward\":\"0\",\"signature\":\"CDfWDUvB_Q52XfkmeLZHhs2ZsYhFFZfIlRKSNo3zCj-hoUSgp762HjfatKJZB9N1a3to_vzJ_mXlgfNn0bzHY0rqnoJKBRKYdq-3pBN2924IIZhvwpxf02QzZaTzHBjwD5ramcV6CvNnPRcvDqs7xt81YLopTlaa7SEnHQ4KFcEWHa55qt7DCcvHg1jR_TXp8tXtN2CveicTMDtkb1K9eujrjbDuWZcOzAymMl2kKnIlGPIaHBhZ8Bwt_LZ5aizT6oI-9hNIzt44mDw5CvFGkNpwqaNtnpD4liP0Yq5NANU6KoABtvGxdzgEIbfWStozrxUFcHRs2PkrUKBm3T4UvlfCtfBCo0OIsfCaz0IVHKVXsAczm8PfTXwqAer1pOPsUfgFU8pqUx2fbUG4NyMqSDjKWNbPoFwyl2HJRJPiZW47FxgE8BQ6sv7wa_2ZpGdE6nIqAUCELECqIFTayDTkpBCPJeSzsQDSusJlbA1wskNcSYemf-2SVyY_BarXSCJYC_MUjGdQps2KHSIOFQhM-uOKTyZ0gLnzAqcuE1WH4rqp0o05Jg8_fKu8ILWd7fkiGOrGasKbP_1e2tLP3bTb9fn8Lpwy0BNFct89meH9HHbnuuGBjQfX5f_l9so3H9-HJzMxjvpsh9ENbEMdmShBsRwlc0OEH0qH33AOegmlRnA\"}"
  },
  {
    "path": "genesis_data/genesis_txs/8b-7D96aRFJgDm8z5Tg47vBbdjseW0rRi17TYDcaQ5Q.json",
    "content": "{\"id\":\"8b-7D96aRFJgDm8z5Tg47vBbdjseW0rRi17TYDcaQ5Q\",\"last_tx\":\"\",\"owner\":\"pdkggT10Ufk5dqZ8U-YfaPg1s7MDZ3bt8SbbWhraEqm2jmgw0IRxfnj3h5QwsXjWgS5ZPOXcV9SvLpTRC4tvgayg3sPKnnCc7HtcU07_M2w2cOunQk-ZCAKnzFFKT5KQGcChGub47zLsZUcSm4CE9_BfD-BPUsFXLdo0TkaDpOEcchLcKAV2aKLx5KTboEMp7FZZLj16MZnNbNXfOoCMlO9tyCmRkUPa35WxU3bT_VQ5ZP3udkYcWVvg-ryocaiR0IBfGuglreRCE_b71mZaR8Wc1cWXMJMct-t_XEhEfccJU_GOTkbcHoAX8uSoxuiRdblz-n0uiOtxjpmZDhUropjrsmox379CPtZ0nHH9D6r52ppAC04Yuv8F_ZKgqfkrkIhAQtnGHk_HysI24eqZ1rS0DUX9oGQy6gCluAlBKkLLrflMIMe8IHJpI3BTHyZLcHIflbEexZCOFl2cO3j-X-6bA7AHLf8dEDqeJxLsbdvBFDiQaVy0ODGlHVJHHiXnNDXzXWkd9KKeJmfeTHliW1SxF_inF5KoulRFxHxgp7eDIW-PvwH4_b2wULREyv_ksbnH6VZcGuPpS5KKvFQT-Pr8wAp92w8YX4F6g1MXDwkx2IXxGgspjgb9l8CdyLnDSHP99eEmoMG1Apm_TzsfPtUChho7HrEImokdwUlw6q0\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"U3VjYXRlbWVsYSBGb3J0aXNzaW1v\",\"reward\":\"0\",\"signature\":\"cGVdswnHQMJ1ZxrQAp-uSoVmc18onOBH6-l1B3AsfqfcFTkkPi49vviym-OJsi7NIcqA5uU19WXRMBWzEgVYEUTkW_iFWHu1kD5akLOwYiOEXR-pXg7VHn29O9qpkDgc5-jhnU-ePnx4TvW4YQHC4JvIiVMlK2UQmJCoMuDj6S8Kig0S3tyZbeJsC2Bu8vnWbCBELKGqsFb9uOJ8pBAJtHbIJM2SsTE8DCIFv1VmC11SEqZdPeMKMxh0FH-GAcfQ6xYUCTYq8JnB5gs9sP3XpbmZBsXE_SqT68hXxsmbKlDrEgPtYQyzHQPvOX8Wfc2s4CtKRZizi3VQ7kXl0oGh-X9R3MHQB4hxRmq5wJZZfZ7y7cb-Y6l87-VtiM4Cj-6r9Ciwj6yJG80fLw90NL4jJc_uuMP6bEZgDYLHKmk4jztovA248rYjSOhssQRFo5UJiY08l-khkTrtmrCym4S4w1OgzfdqVWey9T3br7OI1uFSz7Yn88mX__iUx8h55Lvx5x85r3jx09KlL69YQXr7Kue2ltslVbbHLtMC3xcOIbBg_jacY7eK6_j5GRZAjIpRCZq5UlV0U0ySGVh8qsiIuTlOMvMvsrWfUEpPnGPF0uj7s2PbVPaE9CcOwOjlRXPyUzMwPEkbsM8Eo2r2TQi3kcgNQDZlsag0wY-i0EbFGQw\"}"
  },
  {
    "path": "genesis_data/genesis_txs/8gTAwQ3f17PKI9KCX1cjuXCs9F8Hcdz8KyhsecKuCJ0.json",
    "content": "{\"id\":\"8gTAwQ3f17PKI9KCX1cjuXCs9F8Hcdz8KyhsecKuCJ0\",\"last_tx\":\"\",\"owner\":\"8Ptdr9yfOD7wZYxgop5Mf1ueNFoF38OMppUpcTfszKgW5q5agtZOEUXpDGQ5d7_2gSNfchv3HPuVfRjU4iy0KpEI1PVrUfwdnSATpmBqvUrYPe_EAqilrSmyXFVOQJ3DvvWBXdQfyL4RcdpJQQhXzlmjv4oicFXPRepf-kzWzQ-KoQY7JRqx72EF-p13IDokYqPSlIvfb22FzVo0n4OPRp9EF2ZK0dPqVMTY6cKLpC6tpOQC-yrXvx_Wakwm89q0tGMhf0PCtsRwA74EWh6hASww3leXjGytlA0bX8qKqbmZbmtNO449hNAmhFO2OmjBPhm74p9cUe2CH11YaqwgO6aGe4mjfLur9aE1WPCeA9RnWtqBUKjn1YDIHSA7ppaXWZGqwy-wXtynDEutHti8YuFHinH9giw0qmsN7If64q-Vq3_SpicmwGt14j178PWzQIbIE9AJTzc4qp5Kuw5cXSP1vG7EhpsKU1wkRKV3BVAUla1xnBgkVvSrGwhwmfQiurFUT5drpK--z1QbfVUphICh13-oERoyTqiWkMXqSlDumxjjRfF-3lTIMJ17CdMYTs0QbNsoFm7gywqtGtJqGtiWEu2uedGFfW9a1rns75PDHOuoxQr6B78v-VE2X42GZqpD7f5J9gWSLPG5FhSL8wf-badk2r6duULd60iZf70\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"Rm9yIGEgZnV0dXJlIHdoZXJlIG91ciBraWRzIHdpbGwgcmVtZW1iZXIgeW91IGZvcmV2ZXIu\",\"reward\":\"0\",\"signature\":\"GVjeGjSI2n9LfktcE4P1Moen_zegBFYwiqpvZzPox7bRuKcuUFV_Bb7l4c8Ni3XdoIwmuBIPAY-wkICeiMqFelEGoQEqFPsp0qhiV4FbRvXoXEhyowWwx4x062lFVEZASAk4GGHTgcbyiQk5HVrhkd-FjUruuqfx3lw1Y99l_2HCXsHD-QTZOi6YkB8kY-FxYYk6THWZz5QvU2qp1HfHvHHeONMEJv-kQd6nMIIBEu4gJTDU55819f3PvBiZlb7nWaIX9Ugx4BFc-vrqHRH3P8orM5h0-zXeuiMeKLe2tAr-kaMAYy-lle5PnHl8lzjoyVxfUFTXrPYTVcnk7K_wCTEqsJz9LrZ-Sn8c1xhWqGGH03soAWbDRgXPIMqapSqsNHv_03HWYt19twLzpE9ASlgvZGzgW0V05eh515McDjtubnHVZhhidDgJsvm_QAYlw8b8T04EZHWGyKT17K89XBqZ91pL77Gnqu7lqfnDZNQwjXml1-beCRm96BTxka9JOADp-Z-7KOQ8kEsYb_vRQChCLLIJHwknQG60hjO92C2G2koYrjdTA_IOGwGW8IcEnOiz006qNmU6GsnjCq3kcU2OYvjBokdSy5uOxdzPfctimEk2pVHyeDxnxnyIWspzHtGAPQszhbDdBWZNdYZg5c-sv9dyiK8fHMuVdl0elBg\"}"
  },
  {
    "path": "genesis_data/genesis_txs/8rKBfpmkPlxnnYr6t0xIpUDubdidK0Fpnois7-xQJtc.json",
    "content": "{\"id\":\"8rKBfpmkPlxnnYr6t0xIpUDubdidK0Fpnois7-xQJtc\",\"last_tx\":\"\",\"owner\":\"ykRmM_NRHswZ2rjjjbWgTX4DrlWRpTw_QAgebi__p1tGCfuMrCaN0Jcl1XwM3-eqnazY734oqqlnuZwGjzIfOMHnNaiFPzcPcigz1B_3bGxjTfsAJF2PoAxxk3Y2TS38odQiJQJd_z-EzkNYhMTLSjLYjdBABWgE_Wj55wpGu1AqpZLnHrNebgr8l4ud6kEdFiYx8jcCVrMkmWyBo7A0Y8uk7ePlAbGv4zZH5-jt2_XzQc9p8YkoCyxu7YQsj8meFwpug8aLH9_nl7s30EueXpX9GaqjOB17Mq4mfkrU4Xv2DH3_z7WmV1jpmsmPtP88lnVtklZTXa1Ee0W8eMRW-JZWweN2nmhD0cGz7HqZVYC6pos4nhwswDNks_7ekegjQ5mZFsh4SOAax6dNa_tjaaumJgAbd12MT9OkZnpiObM7AZ5vKJc2okrLPM-cjt5bgMQVA8OiO7MLz85MyF9iAB09XPfGIt2zW40Q9iDFKlqasTabGDYCZgzx65tMI8S0Xg8rqpu1i49-wZBpTINi8viycWLKqFm_TBWnDUvD14MxTpwAfujo6gxYaTDzKlTV-MPPaTSEAFx_WPGfiJklhaG1B1BdCw__GmGMa8GpK-wWhIM-orCOV5Lkw7kOQSwd8rMBA2rdweWIfcjpCcYkJm15pWrNPe6ySU6WOBc7Oek\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"bm8gY2Vuc29yc2hpcA\",\"reward\":\"0\",\"signature\":\"f4Fi0ad_Be1nRWtzDQV87sy_4wTVZ7MyN2yaIM5CI7ccErOyQ5rXhlnVttSwh10uCe0yWeax8fVR6GPs-hFrunO3YmtrbxjugUXNTJ09QauMqPNnIw6n4V14riDFXkly2Ev7txjj08qVkPAFSjR8DzJWM4WZjQ3urA42PnFz4iJWTONkmrZFGpZbTwdyJtgWw7bJlHc3yiS1Jv6GjYwfowDDS5Dl5qNz7mHsi4Rg0L2ihef8i4pNSZhArq5vBmWGGHZzWSKyKj01Ji5TSlCSvLgynKfO5WX0KDSiCoxzvMAji2vbahF1zEei33y7SbpoavnW6ezjnmPsa1SDJUVyEObHbxvZvFVAAgIMaMarCV9Y5AYzZ4AkaPtGR0eac3SyNaaHHbkR4uEdvfS5ZgdWmRVaCmk1MgCLrG0kk8-inr7VYwMGtSemf3NqVF17jnzB3OjVwIGRKte6X4Cy1jEaRc1eo41Vvm39oR7AFwzYznqXTClnxMwv6ux7s3mlHWGfSADhNGNcWfG95gUtDEFoHwiCUe-OFoqUy1GJ34lgBnHE-aOwKUk735KTTPNF-7hD59paxmMGJ1HkrmPHb-u8Bt1k2Iu4CbOaWEJCbj7m3gFmh2ML4-bLjZTiDwBSKGXpDxEoE4YX3YZdSIVyD7sRVezNeXT8RtLZx9hHc5MM3mg\"}"
  },
  {
    "path": "genesis_data/genesis_txs/8y-ghHqMT2lEHQn86jRXkQ8I5cLWWtKW1CQROp8mzIs.json",
    "content": "{\"id\":\"8y-ghHqMT2lEHQn86jRXkQ8I5cLWWtKW1CQROp8mzIs\",\"last_tx\":\"\",\"owner\":\"plnRiUaJrGazHA7n6M6fTW2ymgM8Os2PorVGbilrdvKxBbDSFcS7eN0VdoOAbdNLOcCZD2ihtR6dPJQ3E8eL2dejufh46I-TXzF7t0t6gQJDrX3E6MZNmqGOnPoh7zHmUtB0ET0eEBZr-b9hRc_Zhyl6VOhDjLwZ8OfunUu4iCUzVaq2_jTLF1NEqWN8SklEWKZg3a5CJpVaqXfYLnK6OTtNmlr4Qgd77tVn3QFIR9MS2BDGVwlEa_VFKrnT2s6AYGbqZA2gxc_6YOQPRcpNsEbH_u_wiQAnAo4CWv8n_2Xg4YNeLxcYB3YhSWbnws25MNoEBAMqZvHRppLX3gUsCBDJg9WFQwmAxSuC7k741hG8xrA8QOiCl2y-Q7omx6LhYnUA9an8He1uH8PL-d-uiP3wKQzavc2lFeoVqnqKcwI8dnjsHbiUH0G0HVgujfVWO10QlzaZEABGFF07m7d81lj7LrW2OvfV9xfOwpJNPiQz6PW3Mn0WYuSvs58-Y7of7hsyah44j9ViJJHHKJe4UV-qeB0dw-5mYqMWBx-JDj-76L4wr1kHxZ30vPugkaY_p6NWe31jq-n5ChtaGFedzBAxoa1xqFhTgcDjapM6aFh9LQmBtlRtAbjQIWuS4zBMVIowO_JtFSLUrLF_02jFnxq6qTWSmM4g-8J_1fUEGFE\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"R29vZCBsdWNrIC0gQXJjaGFpbg\",\"reward\":\"0\",\"signature\":\"OE8vwuAT8vy4tO_TzScZNh90es3lbSfFA44nuTb3YaSA8gi4xIG_zHO7qI2UB08h8mU09Ze3FEA3nw_gJVzS3qOXa9AuDu2ND94AW2MgUpxjKNvDbt-QMxH5iZfPFqGjd_5D_5wFiaPHw_CUP4z-yjiYVm3-AKe48IuQe7de-cA6Ql9hZVr1fiJgm3DunWHfbH74XPAvUXSMHObrw_lzvtmiarvANqBfjhw3siVIxEGGAz47nPXHAZFyLwgzt3b741G3dmQ68pDASbiqhOhlQZmJqKzzy_08T8Mry4zDPrByg2Goau95X_anW48RpVJ7Shag9umBqe0BSlPdloW5Zh87oepCNLDbFO1L6EFxA0nRnm5aIurs-U5371dhFjEBcrW3cAxuMhGAezhF_383yo004ONxWj_vJ6-LB_jPKFSl1E-EJJ9YJGohjv209d_aNTQmSbAqgryuq9CfqKKy0eWX_sorXsqcuFQ6c2uBYcXbaurY7PxtuJHhqM-lOECzH5UaXgm49PHVX7Foi032mE4hrT6EJdSat9rsiqtfji7aryd1nUqv_SY8_XpVcqhgp5tYOop2KCDu9a7BD_N9sEApAG7nXa5sAbfKOKjuf43YEaLgXCCCJYTY5fOy7UBlmW5wBYVB4tPyaT-rJ17RRy7uCb9wq6n8gji67on77vA\"}"
  },
  {
    "path": "genesis_data/genesis_txs/96Ijx5TWSxZmZaDH1pteGHFjIYY0aHmGWNHiMYeSYIM.json",
    "content": "{\"id\":\"96Ijx5TWSxZmZaDH1pteGHFjIYY0aHmGWNHiMYeSYIM\",\"last_tx\":\"\",\"owner\":\"utMQVcfoIixH05suJGI-NYPGDm4ZIeWKVdkkG3KuVQd3php3rbF4Bkz-6gmh8cr1Lga-n_z38kHbt66P5R2ywtM5JypP29JdJ6VcHjVAB0L0JRw2Yxt007U0bJy4iRvZ7H4itRH9K9wyFuB-FiMOa-qsAlCkWX-vTSw6CRagw8_8xj9XCjirkL9xd_Xjk2H8UadqY7CIe2HRqMg1jWtpi09rcO4U9Vg30IXg5QRud3B42I_bmBN-mBFuzbjU1oELze-VxhU7GLQhZ2nBM1bQS5gwBtUf3DGEYqn4FOz4VcUoGyfPmkOuJFeVyf4L3EjDkM19zqWBy7cljZn857WmysRsBtY-u51BcLrrz7_7jmLICYp127fJ1qIAwVAh8t1i1OuJIKtryoX2_6hAoc6Q_fij9wWHOHZaMQeVovaA1X8wxsiqtPhU9yzZoUDjWKO1WZsPF2rDPCfgQwnnlF-V63ugN0MFvsO614a6I4VNNnrwu2b7BWIwlyXtZnMZk-Efm_W3LOmLtiy-P9VpnYeF2HkmdLZC5Pkr_noNZrvDzLF58XH4mELK1zf66oPoP5ADX0q9NwtCE-3OdaVOauHh-L7xK0VVqw-TxbLCXzU6IcEJocseSE9I3hi2BU8ri7zG7xdZvTjHGa49ReOPHDHIIXGOT9Pe5HzHJ8ZFkQiVits\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"dHJhbnNwYXJlbmN5IGFuZCBpbnRlZ3JpdHkgYXJlIHRoZSBzYWx2YXRpb24gb2YgaHVtYW5pdHkuICA\",\"reward\":\"0\",\"signature\":\"ZXeTax2df9R_ooaeVcWz42O29amTJPJBTTlcGMNTyaGP2RX0NTFrO9hRwaA-218YauIKCcw9G4X-3iKTfAjYCK2KM2sn2PjltU-gdhvBe2kHAdSnlf5EhqeqDJXjfjvreQbNOAFpxfefGYTqVW9qZWvlRpGWpUtlHttgcOrgAnekpMk3GPxmOp-MbnHZ67AXmyUJ_XJu7bZIdnYZXGmziUJNqvUGDH3-YKjonHdKE0wGAXf7DlL3zuHSBjdD_N9aikX6eVInXyeucTR0379AVi341NlFYjkXitoBOsRZtj_6EQoUb_976hgkKhQoJa2LsvlwdyBix-6HsbbXq8XfOBe3ol-CueVVbWm7lZRc331MbWlez-RQqhYNuvMOo5A1LsLRR_MdNre9OmUCmJS4RMyQqC6h4h-2pN6Q7qbI5yDXHN9iCsBCAEHEdxzjHX-QsFR903wWPhWsTg7HlY4RZ02F7mRt-vEoJwnnqRVCdac7HUd8v2IxV_wKJRW5PlDkRorxNYyVuZ6-zYT77Y6mVFBPT4AR5RVcvu5AxeNrvHSttl9gudGJmjVt1jTWdqa97aqFJ67h3bBD2XsCS3cMDGTOwfwvmShR3PnoLr2zbhx3qSWUWqtjVg3mGLhJhW4LicN7sBJz6XeL4qLcrNLM7Gpv2Hf4LsrggOczb04eubQ\"}"
  },
  {
    "path": "genesis_data/genesis_txs/98kadyXY0OPfEZKeeZcCyQ7z5mRToZklK-D6f1a-Lxw.json",
    "content": "{\"id\":\"98kadyXY0OPfEZKeeZcCyQ7z5mRToZklK-D6f1a-Lxw\",\"last_tx\":\"\",\"owner\":\"v_xIO1uRsPnNeu9cBN8icr5RW0G6CjBsgZ-3WVtM4fNzqBbbfZ9ag96wM3I5sfZjNQObqWwZU0ddRazdFOsYZ9L6iPKE5lTole1OWdlg3xsA1mdxEi45Y8jbofGPBzsUKrpb1WT3z7prabi-e4tuezoJS7Aa40oeKKWBLGBdOe2KqPeA0o2yZQy3GgbVNb1QKY54TGVV08hq8VyJTqZ2-O7A8Xgcndaz6S8leQeq65CFaipyWLoAOhG4H2j8czxSBOkEt5b4wSy1bs2tjBEa57fTuapds8MccneMryMFYRZg38BXBw1kdvJi9zcJK8GmXZXErb3xa9P80suidpxMJawqlk4S90Hx3h70VTJlHCLi0Fdz8J8tIQvVqTHyBLe0S5XIJbbC5GjjtnJdx0r3NDN4w3FHcgX7STJNtsY01xdpxIZpOdzDQKKAdLpfG56mNZnnVrvkZNqMJrUDIgPIDUayQyokKzWX6i19pTvpJ7_7bhm3YsclIyVtP3ShXQoLFAzfc3O5s0AXENBfOsQDdIAYYdWO4XMgE2HOddgE97AixDtSTCa2RNjakzE5h8lXHr0eLaRvNRw2wDxc9lbgWMswsFSPo3mdy_P25j73X7T6KRzIVMzGs5Uf1AfVrI4ingpyztjGG7aZCD-2oquoHdakoh6sE37EnfN1lMkAcGM\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"V29ybGQgUGVhY2Uu\",\"reward\":\"0\",\"signature\":\"mxpIYDS7aO0Gi4ItVt5tj3DVInBfFikSvSU8boW4adcjZfcvQXiW4RvJuuv2RbFfUSaXe2wKPENwaLMuwMHd0yNzfJSIkLsY3545QxQQ7pEmmVQtKKxt40j8GXKFIsLakPL_Z3iSvfiICOKm1blVo6gnH2VPybhwJ27xjeD1Eq4AIk26_mHnxYf4OfgivUd-ziJ6xcO4aESElC7RCfU-rYUhB750Pod0ozx5hS32-fJvTMuyCIqsyj8GDePoS-xk0ieuLAVDe7KaZmgREIYikwKZsRaXjSdan3iKdLYlnTCz4zkP-3BZhqpoDBE6PVqLH5ZX410RnPJ2EkWlkik4ycUmK_9p895yVZYPaaxkXMRyWG9NAX_hxKakCNR8_44CM4TXlWG1P-gBM2ahjs30x2Bsquq2L05PXFQgpZsw_iVosWWknOJRnR95sXHdyNgHlWgoM6lxJPwGuVgLr-NQJdt_RfCxP0n7Zeo8YnYfxGkIaY0S_O4Vrtx81g3oObFgDv-3WNoks3YITcxY-sCfggRmjjpESIUS2IE1h_D2nRRhvV15CUFJuh_oss0I7xtAbP0vdzzyqNx_hz3G8OlIVRcCR3spJegrltWnv2oSYvQaKVMhbItlPJVLfK4mgFdKLzaHjG3e6o2YG8ooBoBSpBXIbj9Ljw9P9kCMYSQ8jYo\"}"
  },
  {
    "path": "genesis_data/genesis_txs/9JWfraRekKtgXiIjssn0tVSzhaCaN682jECsrKtR0_E.json",
    "content": "{\"id\":\"9JWfraRekKtgXiIjssn0tVSzhaCaN682jECsrKtR0_E\",\"last_tx\":\"\",\"owner\":\"u6Ib62iMbwA_Cn8UG18qyWxqEg3HFs7TaqLaJY9SBEZ-o3SuoV_E0evwcoqRy3nNtKJoSDLZuoO9jiHZomiwm399_rojAW3vYzZq7jMMUbRytKA_FHFR2uPvlk-GdEk25M90W64UpYfhPnioAQ0Ls0QeNMgghWW7sKANlLbC3RYQNTwFOpXQMDcBcNLKxMsiny0lmx8D4bQuKNlrhJV_i7C-_-Lp986-Tw_-HTDB67-7ssY7ePoap23IJAZjzPe2mxeaDpEvsxF-MJ8p344HKBeb9M0e76d-2BRTIUuYmMVPcftZ30PgP7qDGuLKWzP2D-8rkgoCSA_EaMRIxjji97NJwOwrBjypn61v9QQc8pn4ITTd3x7yqMkl3eDMMaIO_scFgiVPAaeGBwlZyfij_ZPexqUqHnYbJpcZyE4ft7O-ciR99B3vjKwCyzZ0ZLtFC-7FliAn0FIbaAnYN8ZG0F3H0E9EzbClWhpqvKcbo_7-6swHG2pcOWmqFnBhnfIZ6ru5EjkhEP1Kz1wvB9de_e6WFh5bXKKJr_Z8pa1Ia5nDkFnHjhk1z7_-7hNn6DONA5f_NGH6hzuYR0s0J0aQKQbQENPTx7S3PG426KRyj0oeUuw_zm2DX8a4cu1g1KWN2RKZz6yYQpzAnMxCwaJtiM0CJWIOOwfbIVzyAeiNR_E\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"Tm90IHRvIGZlZWwgZXhhc3BlcmF0ZWQ\",\"reward\":\"0\",\"signature\":\"nt6We2KvBhaRI4TZcdF8yZrepo6M4XhHSrUWWteZkiDkAVcrqK-Vs7xmhkOC0mzAYKsRmUEahhlJLq_ZESSlK8D5a920qcDsM79cJcImDC6tS4q8xc8Ks9Px3LtBT8QOiTwt8GN6SLmcoOOJOIMAWuRmax6akq5hMFneDJ5sM3X-uMiP5dgXbwoo7V0NAtX2DX_I6yVREq8QVcDmXd3fZTxoImOJHTkDs778xGNdHqWcst_BJw2OFP9Xt_2Br6VW_8P_ZM4PC7w92VvW6Cl_JQ7AdyBTIhIHEIciYWunI33MiJokPzc4bTCdCdHauR3etIohk4u9Gr70T85Mzuo6fV8JX6QhLn4X9IhRGXs8npolKP2ijuJqZHWCmx27NYy0GA8RZzwWaVtkutft5crPQzwlLwZLrQ2oj9VNVyP0UeCfwWJtHtoa5rQNqT9z8MOzh36MFzOnqhqZXRcrRfUSM2fiJ8bE15nyVh5kKdFDOxkNvPw77BLaQ_zQXAGQytv_gTI8KRK9NlDny37U2RGriWK-GtJtl6qwVnx1QPwlvzZp3A3eMBU5JBYRiuQ54ChZD7fbLAnAWFuMsQ6d8fjMe9vnCBMY8mryj_TfsyeRUh7_oMwTj2YjWyTRw62oO9QcnlprwwrLF7OJHZVrB70kjhXcW3o7g2y5Y9sG5i2_l3k\"}"
  },
  {
    "path": "genesis_data/genesis_txs/AN48OPO2-1mh4PKtpyoNm7SWJK2j8dF0-TFLU7Z1C9g.json",
    "content": "{\"id\":\"AN48OPO2-1mh4PKtpyoNm7SWJK2j8dF0-TFLU7Z1C9g\",\"last_tx\":\"\",\"owner\":\"uDkcGJb0Rgl6EFeYrkGxHOrNESshwOG89UgaEVM3_Dp7AquUXWYRru7cFTgs4fZV15_fUAZ7JFDqcQPfFLLQMoGY1HvCu5a-Gze0tx9VqdtNTOugujakE5pmison2QJILJnPK0v9tnn76URfW_dUzsjaiAM9_EpGKhlv_yoHlXPguFG0pxbfkagUMfAX6GuwkvwkJeMMI4jwM8o8-LF6mBFhV1-HhgS--0pHFfIe1rv7_ENoGo8gPHq_90jipTEJJTC5Ez851xAuWgQcw6KVKl04ev0snVE_dOsezEMbFEXEqVi-MLeI0A5YV3eY7zsyck-NkykJ9arPdnsnIfKEeAYTUYNwDFqMNnubWMdhHMQJRYewBJGDcI9hQF7WaNhLVsIId4bZe0IxrL1j-ZfHFQfPh_YRHRDuL5Y2mLbAlCRUd-SqAjZort83ZkOtQh1C-dhAO7csdlkUgw18sLQPdywolF9IJ5AGGj-0rvogMnS71hy0KAH2GtJzWSPiG8wVL3WS0V-5fs2sCqRFnOsDfEsWVYFSIRO9BHdn3dUFDsZQXRSZ86XA3LgVL2vqIuUsauIZsAIQS4AjWNHydojuA5L57w6yYH5zDJUB_YAbEx12KL_Y9MOQYae4-G_dRR7vU3o8qbdRpbwNPdxNmTBO8xl4xULtn3iP2wKMYpgBiBs\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"QXBwQ2VkYXIgLSBQYXZpbmcgYSBXYXk\",\"reward\":\"0\",\"signature\":\"cTwC7zpEYsVozQSCpKEBBVshLXIwePVDwfKVACEBfoyeMw-l4slkMPCnIJxvhwwi-KhD1YrZg2mKDVz17NFiGRaktRSfUAQGzNyxXRNVR8OMCxRqip4Oy5jZZgOUEwZPvpsSFh3Vm-o1X6oK3eQ19ETt9sbbtpUkXJHCGvil9WleNRjw8O2QQkuwp6O9aTmGlETJ6siOxX4q0h18pSxOCPCs5RF_5RFVIblafh2HSQjLb43ezLivqDtFgt8EcUPj22ZS7YN0wDVoDrxpKBZMQHgeGKNhOKz8MxXR0K9QSvEldyisEtuv6-N9pm_6HBgHCxQibcYu-RS8hjkTy9h6jBIaxvvHWHJwGtivxdwjEF7zMdZ_IjSlRhZHkXv3iKSvKgfNhM-QnXYqIcrOCfQtmVCRqy3aWV2pSFdHjU5GxpjRl4kTghQT36_r0ad0FeHSwdkgRh2QUD-8Ysp95MYt1WNOF0jT-wNxPVScXWFJJY-7yQiUCvILxChGKiQEEdxHPH_r-ck518OTrEbTFbSrfW_7SOWl-MTJlxAMWiYTx4a9jMbveAeG51-z4AAQhn-IyPzD0hC9RSPZYdovl4Bbsjjr4zjU4Le038qiXGvE35R0BrOPxXM9V95z5NyHyelOTXJfFpbNzrG1CSvk05CUkRUuGIfwpAknSGhjvaXB9p0\"}"
  },
  {
    "path": "genesis_data/genesis_txs/AX6ZZxDpFlNhoN5Am5Hi4DER4zOBGVnQm_bse5PfHNw.json",
    "content": "{\"id\":\"AX6ZZxDpFlNhoN5Am5Hi4DER4zOBGVnQm_bse5PfHNw\",\"last_tx\":\"\",\"owner\":\"xGoqfv9y5LHarHCaEGqvcO_70eU_rfnxTF8gkZMEI9af6pXI_4qoJR0VIS93B0GjMFAcEzxe90iyPiw3MTK4S-6VflE9E-iBPpAeQOeF8JXriIQqIGml1K83bdo9LRvfnon9Q7vU8MEn0R7qUcQgMGA4ML2h4aluE3Y1LXMRcSbcEqAYaLyjId0hjjozeuL1ATFxH8JTIb8E_GvYaWlb41mrryG7WAYFEj8vFGLW-WEHLURYyJbKD6ZL-6Hamm1R77Zk1SjLt-ZdeqS0Z3NU8-ngfN1SUr-glJMQ-HQskgkCkxWNKOGeuUnslM7QvX48RcyQr2isCR7pruGbYSAK3AVsWgO4NxfnAaWTr6rbyPwejIJinNzXVGNcR3rDurBWG3Sus3L2ndP3-of5TNR_IcUVxr0YYbIT5vNCBWDAhFo1cKyzLsipXFuLrwAXY65JjHatv947Ax4tg1MeHiTaCAOEyziDrzY_3WqSGNJj5c7iWUUyngOt7VyExgagwtUCDicQkPWXE5IGUNYtXmCATTZSBfw3UdeY1CnFN2OmE4mxjcWVnQeTxXsZq59dITbL-SQBTygUF781snmcqW3UfVYOxZigJVnfA4TUsrfYbd07HZDpJ1SUUhL1g1HDKcAbPFwpY3UgKAsLelNusSZM0162UFxJUZdgPOmFZeBdanE\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"V2hvIGNvbnRyb2xzIHRoZSBwYXN0IGNvbnRyb2xzIHRoZSBmdXR1cmUu\",\"reward\":\"0\",\"signature\":\"H8Ld7Zn5nn1rlXfdQW2TdyWuXO7S1WhH09hVpzS8Rkbymjng-5Plu2arekIC_iVFIzHxTsFIPETfBv6am0oZUnRJAKDcNuP3U0yVy3Ei-Y9j7lyn-pf6FWJMGskImtJrAJIduXabq4rVhFwiTjQgiZL55N9sGX_OQxqKf_6HJDp3HLdUaWlmMtpzqHbvooqJ-xPFZw8KPvOkywi9hSqWMvPT6APgkF3qR68laXe_N09FF_ZYRE3rmyap--GC9-zQ0GYfvE1Hmm38sWmZoRZiANks6xLSycuff-3ayu1GEKBYu_mos9HEnmAFKKBLYEgSla5wNXJkcIhG53g8iJiJ679lQQxo4ms8exicvB8vNKfN0ADUK2-f3rDjBSwaEm_5IJ8pf_N1dFqGfwjRCl_mxIfe55zZ2K-z0XzfN8QqRz3HyAh4zR5WKd9mBNEDh8TjQ1ykXVIviZS5AFY4RqlN1d3SNJf18TSXeOOuIS-Cq1Vklo2uQyJVCmAREbBQi4aCjHzPlZ0psVZoJ6tZe6jFQrhGvxKr5LyXTkNfaMQoGZYx1AHeKDR2NqXPKRtMvDW5SL4bdEL_R5OgyZW8o7oChgT5Pnpzn3iBjWV-LdAxScqHP6wSHu3YFx5Q_ddY2DW5-x1jQ88f27-RhU3hf8EwQp_SKjO2UiYnCv1z-VEZ8vk\"}"
  },
  {
    "path": "genesis_data/genesis_txs/Achd6pqJVZ-1vNMLC977Lu8f20eBmgAv4dIddXql51s.json",
    "content": "{\"id\":\"Achd6pqJVZ-1vNMLC977Lu8f20eBmgAv4dIddXql51s\",\"last_tx\":\"\",\"owner\":\"rRB4Z_bKycU-DRh_XR6AM39rTMhjJwucJFFu7PU1BrFak83gqd9C-IAfIwne7cRUYUl2rdWIQvIM__-6OIUglFlbHk1GdAR-gyUPiivK58kkFj_I92lB_yyu1pSv7hVIJKNG8vpYdBAo3QxFjH5EU2BL8zCoc452YrB0NQnZObHRKnWCwaHUHCLh9N60qYz_C5jgXTracJQp8P2mktFfFYHrr_omRoPjUGT2uESMQdareF8fiu6zT7gFdt5B2s1mKilH373Qmujh2kGCG9lRfryXcg3Uq3PmKfV4Cti_ewc_60VoQ58BaSyUDGTu0_S4XvF3RvJev30hBvpD8KmRjb1WnDpzJgxqsz8U4CRJlY1lXNSJXo21EHbegZB9NznvS3YS1uC2ZA8b44LOMPhPO1JfM0d6x7L7ne0Kj8-1SwU9d2KgCCSTeKKsWYnKU3TxN2jD7eVJcq_4Acy9nB_GX2dzWU6d_rBXbx_tcNK4mBTxpk5t1B807yd4DjrcAxIx2IeKrdIP8Yma04E0mRR5l3kBvUNGIeSDFr1dMejFIQpSsH5Di6WhoqwiN2z5LGwr02sOpuovqjvoUDiLXwC_SYZhRCvcdRU64wcMD0EvbYh7UGlw1UPX9DhWIlmbLVcVuW12GB5Kn-NbnJaDV_tVZo2inGpderBHEjTVOA7P_r8\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"VGhpcyBpcyBDeW50aGlhIGZyb20gU1o\",\"reward\":\"0\",\"signature\":\"H4zivDFeovD1iEUZ3I2snwyHHYyIXiyh0re6ADZZDgEibMMC5Xoi5WVG2_GXgpHZaD9VGd2pgZPbAwo-ZZ-yychHh9th4HwAYIbAQLN0ghU4_RwtC5w4nrVjxHfgiQr4NbP4V-fAVutjXfJ_UCA-snHJ68Gqn3LYdYKqDE_rNg1TKdS7ZAzntesDkXV8uwrFOfCp_OGPc_rGcdL6auJ5FYzP1ukpgUl5DYgwfvSnBaPWTM-INvQf9_6mCSkubxo20ZMdAEAYOy4V6_byWjIzVBVW_etLhBz-pcSHxoazrWQ--yYbWg_H-YRTfqWDCuXiDqOy6s9s-tBvLX3xMw9H5koaQk4FqwDNivbWCOvuUNHG-3fICvGBRPYUWOParCWZXzui7OhfD7II6QMbDegohrx4As_T5BQcXsYnQE9bKeFtRxP--QQI0zFPIpq-U0jutXdrXkJJ8flrPiIhDEQGUiysY-2LlKUNcGRnUqP6Qt3Q4l5f4N6T_PVF0hWcQsL9hoIZYlFW3uqo3ot8TRtje0yiezXBGqr1cTgu8HDN36RmX4t5IIF2tjPKrFYyVGbjGJ9ZL-jybhK6D_vCs6ZQMfg2nU6YmDxgD9J0vAOjguoJenORYxxfL4kMTfP-HlWJy-BlkeiYbHTYDn82bVdzkcZ-IcV48J-pLJnsXD5tEgM\"}"
  },
  {
    "path": "genesis_data/genesis_txs/Ah6I8y8q0jb15KXjn0PyNfe7FR3v2xobg09Lfj7n1Mo.json",
    "content": "{\"id\":\"Ah6I8y8q0jb15KXjn0PyNfe7FR3v2xobg09Lfj7n1Mo\",\"last_tx\":\"\",\"owner\":\"xthai8g6_mgJdBL6C1JwKZ-pDeQdu8opMYn7mVU9bu6kEi8YdWYzuLSG8jEdTPG6LULqkuN7G9o-78XKy2A5W74x5bYWu1jeuNF1tvNLqP4fg4mSVZajvFrY_c6aTi-AxZwA5QkdqX0KajlWymKtearDFMyR8D-5Bhf8uHhgzijTgltRkMlMgWWGwPn4IhR2lDArFRWxt6B5WUD3JCsLCaCjlxZGyl9xY_fLmwP8X99XVD2p--DMw_ZBTIafoFKyiGYymbQE_FuADj1XJ1ZLWG-eaXq6-XYk4jJGMWM5JEtDO7EHrJUWVV_pH11gN_tEzjcBlEIOeNz4X8I5VKzPQPnaI6TA6bg3FPY11DbJWXovXZ_AAjdE_sAr0wDsgaAZb60WQazrWCQR3Grmhn_FOz33kpI9R923e_ysBh6ePJE7DB1VHCfU9Bt2kgCdau4Kg8apB7ZTIYxhZTLXQmRcbON8DzJhn8X11HlPquycZEqVd--ueQjEXL4MmjZ_s9EoaFTXx96DvStMenZ4iZRX_WZaqhNEzH1qkPatvB1geof-hzbuZ1GmEH_OhVf2yxJ8VFmWqpDcc_xfv_RPChuR3RB3Qz35MTCwyKqlo47N75LElRkVcBDcBiGo8--cwpWJnZuQFM8g4HrAfwJxx6PTKuPCuFaNsSP-OGHzZcy64XU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SSBsb3ZlIHlvdSBLZWxzZXkuIEhlcmUncyB0byBhIHdvcmxkIHdoZXJlIG91ciBmdXR1cmUgY2hpbGRyZW4gY2FuIGxpdmUgaW4gcGVhY2UgYW5kIHByb3NwZXJpdHkuIA\",\"reward\":\"0\",\"signature\":\"DRzxAHIzj5fYarzn39lzEDrlh55om-NIIZhr9V0AjA_K2RNJFnzxdoaiC9kYchvPB-yvJsW9UaT5grblnYYKDqHX6uAB401C7wG-qV4xW-NkROE1YrVkYJKKCEmILzlMgHsFIeolRg8dgjg9EoGt0LDKaqwrNfWPwXdBT-Q03pEhlIzjzCcdHAzFjlFNtMnk7DCjb-JPc1X4jP0rBvlOKSkHUh9cP6REjeSy42GFDYj2Mip7GORPZKhOMc3AD7n4QuI5tvk91Gd7X7Uido3ScTI8slMbZ-LlKqc2xugq50DI7aFI_Sfcv9-RWaIBGOyILcJz4sahln9I2zNJr6hM_pArOzEG_AMhwrqazfiIGe0j-eZu48tw4sguoMAmNf3qT5jUljGgqV5Hfk-7sORGdRqOoogCAW1JRxhIo1x1V9Nknt0U2EcpTxBF5AUCIgwrCMorCpWudHY4d9liC0J9neDiLHSYDTQQ-74WMR4jM3Cd1HUxSrfm4uTTlnD17a5bZ1BhYhVfL7PE9dOVxFGgIELgdZs-vTrbAalObGM-6rBydtfPZPgjcaG3ZXlbJZXYdoelXgoRye4lrlHcgduVHF2AEPDQWRk91uevovnQBn1k-8ncolX6-SE-BwFteLp0CnX7FlYh7m-uliBY4OF-q5aRs3S2oIwHaIdjKOesd-o\"}"
  },
  {
    "path": "genesis_data/genesis_txs/AoSTMf_ZxlcY12bK6_sWj02kssD00K4E-vkHx2vRxG4.json",
    "content": "{\"id\":\"AoSTMf_ZxlcY12bK6_sWj02kssD00K4E-vkHx2vRxG4\",\"last_tx\":\"\",\"owner\":\"xodepVz3J3yxajub_Nt1vXeBQ-N615hlMwvjmDIdH60di3CZTFDUXqvONpbiJtCQx1TDHV6PM9WSOUcypsg6Q-h_UdwJc5F3VLGZ0stgJMnd9V9GF0-6LHrHf36upGxyojKcskKJoMWIuEN3hgJT8Z5HmtrPlPK80YLuUDppXlLCdV3Lz8arA5wp5PVLeZ3XadzgNWdP90krzVJghFf7KIT101ZkWAjlwccZfej7wrkZnT5zVyDx5dnBKKPgpplYmebgDXr10VUhaOTza2D0Y400lsYNPovSDR2XxWKmi4P0aJOD534QLa9Dh7MRiM5q9T2rtwjPHMHvkUAPqxC_METujyplxRYWGc8rDQ9NMBCkK2-QveW381gDIW0zB3V0qLUAFLXzJCcqV5To_UVyxKhC6fr8B4IGs_CpWtr37JAE5jBQ0I5JVNAtETUbJMx5N5D0cBLrX02BeXarWPuwY3X7-rDdB9ei8UOd5AczX9SbjPkDZSEsN2R1NGFNypC34fnMU8X1NIfUN1cLRkfyC7R-UWJdfDJ2EiEYoBX_iJwh1zkJl78xNU0AI4O8EfFCv3iEqCjNUFPl74CpXD1KhnnE9GgcN7MXABzwwzM_qDuSe3WPkqlTwpBDqZM-CtlAAyqVuxp4XjN81ydriujCe1NHRodbxxBwUL79fhRUvfc\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"ZG9uJ3QgY29uZnVzZSBtb3Rpb24gd2l0aCBwcm9ncmVzcw\",\"reward\":\"0\",\"signature\":\"NKu_531a2_vNb9_YpeXbErSfGRlGem9SCKWJY6-CaX7__ezM38jIvT6vZGeh6_I7bghVghBNUw5EFMObfmlBttulSPCXfjiKbM1WqkFCiJVE1h633rvcoiLhOWOoAJ6jdefgCCwcrfzCvgNdpKjoqDmfmw83XIBr4XYRYf-cudJVKBbNmZe6PyJJzPJwYnIWpxKS6XgGN8LU1POzKb_SRM6FODZM5s1psEC9njxd8WcKEDGjyl669O-Fjec01QNV788JA01G7sznEpc_8dQ2K1Yugf6SN_bvnO046_vkGtGklkXsjv47c3CoJBEJHMeEqwLezSBC1A4uMriuCCAKzzVBqU8sOsi4PhCLt2SbbGBvXos8o2HzOr4JJbYBzDvr02TAz1SYYzS5d3hs1dg40DBRvNBEFiGUe_jJYE2lDOs9F78pvf3gvtN9gJ_U0L8-e5R_OxSF8Fehko9B5lY1f8gbGrC5FUNRBjMMIhE8P2ooadlhl8VdL_ditzeqTGxbbh867oTw6bdt2qM1CmOoF_FeU4k8tOrXwvf5kUTO6APHm-HkoqXNskrgH8n1_kP0gayIxv8abba8HUKJEdQnOFUdg1rYH-DQG2P8k6D85AvbpBicaq-wX5v_hh2ihZzwdc3bkIZygjIL1LNayeuMCLw96Yakzrf7SsueBsGCGC8\"}"
  },
  {
    "path": "genesis_data/genesis_txs/B4e9FBfqZGBszHAhZqTq-TNjb-oG7rYdlMWrQa4CPZU.json",
    "content": "{\"id\":\"B4e9FBfqZGBszHAhZqTq-TNjb-oG7rYdlMWrQa4CPZU\",\"last_tx\":\"\",\"owner\":\"onnVDPfdNrAwx_DaZhoiS40b4xESqDKkctYhgLf1DJUQYHICatTbqAtjQqL-l6S6A3wnjw4L39AX3yKh6buTjioSPLBA8gx4Ml0x3guapZn2GGTHTObMjYTZy4b4yR2Xf_s3WbHC36J9xImWT_5-uNRuw1GUb5IVGfGHf0jY72sCZ9hnRW3KMRnJn1s0ux1CfmdV1zazgOq3b1k-UAkMYshENLRI7gN9MpCbWEj1vGbQA-YdPy_c-w-ToFZGeicji4wgnIMQBakFW44WvzeCqZxY4nTMjFO7GsyVz5HvB6mqdGyDr_Zi3IhA73L2cH0OZj1dkIhYhkbpOSVTtez9W8aq1H1baRO0YXdk8fF8SOZP72k3o3bD2Nd8eAyLv598mA1xAn1SW9Z3aA2kiL8DU3J9I4j7yQW4gqW-9JX-l_2vMCB88PtWyZVLTnPEvHXIHJyfjksBLVfAkIv4fmIwSjMzqEy3sbX5uzJbeZ91AJ-Th_8Sqp71JlrIxon_BHUQ2xfvvZbH361tQi3Jjgc4HDF0bL7V8qMvy_YCD2qwRXn_btJ7BoEfQPVcLBw4gLkvTFcoovWkZtXbjK-LOfgmuhnHJtyTq0kTFywtwNbso56AMxj40e5jDIJP27aa4dEvVo5bN6KlJkEutGpfnQZoldex0jBOzAfXZhaYjPJkyHU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"TGl2ZWxhdWdobG92ZSA\",\"reward\":\"0\",\"signature\":\"ZZIDqx9r8Rr9AcEfVa3E9Frw9aiK2omAwoiTJXU8ILPKqOiMNbuAkNpps6DCqAQygzBlb_TiPQHHW2wUQiXGW9XRRhxnSTfHF_S8sURSUnV4nPbhMQHrWoKp2LhOi84bBNKVRkg7yyNvoSPLlHkbW26NUJtZTHY2rLABjoAho0m3SQUjVCgga5ungzbGgGXC7yc-ay_wW4-lnzbJeyxzhrNNPNRipwOsEbPq1RGc2Rgdv5-HGtUSgiFqScjbwNVgS2n0yup3JQmYaj_oD3TWDchzCgNkTYjDKls2LwIbWHSpYcPu5ADQ-HsClVNPeGrIFRyzYCcjJm6sFDKFuzyuDqj3uav1A_ayNcc2IJvN-5dL3A1Hg7BtncCqxtV5z4xUYjg8BUFln6gqjUUCq0dvlxxtrWEaNtrgFbI_noYv9jj7QgTUPoCbm-xfVzMLkESK2w_Z3xd3CYBczjyKWkyshsbfzwSFPHHN8GdOQ4Yz2h2JMNzYXB0IT3Spf7K_eehQfpQoOEqITbdFGB0ixXFh00DMZN6ghseRl2O0Sf-lyq5vtkgQtmsPZPk5y1j7SqDyFsiV78O9qGs6bIqiVXg1LS2YYWV3s7Dd83F6rPeKLYm_WKvmh6faYlHfJ6RcrF6fVxCXbLT26cH_suLre_5YbCRWFogTQA3JeoFjV7DSlCE\"}"
  },
  {
    "path": "genesis_data/genesis_txs/BQ2RVL6XY99AIkPKDBCfUfRmJGejkZ8YKgKZc2LewhU.json",
    "content": "{\"id\":\"BQ2RVL6XY99AIkPKDBCfUfRmJGejkZ8YKgKZc2LewhU\",\"last_tx\":\"\",\"owner\":\"s6jeFFIyL7Ac8quudTrDnWm52uNwmKQtze8vkbwhMiJW7TmOfLbr6WhtRkxalk-C5Ck-qWBHC0EDPqD0osXhhCeQ_zgEWb91S9URINOome5BUZn29re_VCytEuLOU_kWkjYSGUpvh5h91ooymfluQz5hMSr5vOINcLgi3gyEqKzsjSM8HsDBQrk2uCPNJ4dfbp2xWt1iy1IpB7Ko8R8c47Jl8PEAN1Dm2DdqmbvfRh_R1HY7vqqDQChzC5zTRsB067RRFT070t8zge1sUQlQgytnMnup_9DveRSMqFt2sVTD3QlzFocuyvjOI_xvA3M6sT2H3vg5ZxDEexX6b1PqYV1rtfeSh0rn6iYFOZC2m0s4QhNLVuqJvmwUwwvGjRmGLZ1xFYnUGjXlZ2AG5IBXx6i9VR68XrXQ92wyPjdl87GtiMSCNzVHJD0r3WOB_FiItz7BCn2xYngrvqPoPCqK73dyEsTflBwZhs_Xk7sTii_BFiuc46tmYl4sIw6t62KQxIWYoQwxv194gmuRW7o9ETUK6gWmuCoAqKG-DGx0sfuy-DKe5BShLVRGuExh7wfyjCLBMBewO-NnCFei4D6Pku4LKpr2eZZzyJ5BCE6yuakfhwE0l_ELIXGu5vyUJ4CIYQ5cKk3ec7eu6paDw7u8iV6FgetNUx4XfzwA_axyW-k\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"R3JlYXQgY29uY2VwdA\",\"reward\":\"0\",\"signature\":\"m0FMPr1SbbrwU4kd1W_AsiDEfF9hEsJatQhi-IdOPEHj9uwAlj8pE23lxalILhMId8-bUl_hzHrIa-Y86-LcZgQY-h4Tpje9DffRSKS3RVq5Eim29lK4CzUy7uOXKORrqg-9tRQcN45Z2qsf1TTbGFcefrQmGsFOZTmMDBztPTHPS_mDiK1eKdZHawGA1GlxYbiQXrB3Mof_xoPkIOXUiCGeKwk8HRCYOPbVCpax4kr3-E3hvv2KHw_Q24JNwuL_yV3cGP3P2Wo1m5KYMLbLrYWp1LIXi-K6gLusiUMrTVP9sOeEonnBWPs3jn9fyq2fFqrA-UepX68T1gwD_sgqyFsQ1AhlzuThmb4wN0ojPhyQerxIDT0vVrlAglbQHoKOV-7JcX5nIuVgo8AJzWHsyUlspyoawxk2g4-c-zZTyYlWKGYnesB-4fr-Yq_MKlHuYjcaBwTgF5OkWn9IlFiakNpYwDAivLXxwQqe8oru93MikQ5rZoppFXN6eSC3frrBoOcUOz1ZOrG8zojHX_7QsJT5vdpRtkmifmFO-xwxDlWrivnJSD42vPZN6moIEMfPw9vVZQB3Xc4LgF-uLOZMteWOHnp-61cpvJpEsdlvNyekihkgywqVGYYarTo6PeeBLVHC_7JlIz-7OkkFqc1k1poH3cOXFQnthY08SQpfxGE\"}"
  },
  {
    "path": "genesis_data/genesis_txs/BRD5ARo8tiY64RqIoxYZ6jwbE-LQT_7jA513nHwWyRE.json",
    "content": "{\"id\":\"BRD5ARo8tiY64RqIoxYZ6jwbE-LQT_7jA513nHwWyRE\",\"last_tx\":\"\",\"owner\":\"pLn0x-1vcGG4pE-EQHF38fH6N2fZcNRCNGN-qQtgi9IStimAmk3OA_UGVfXBApik0KNfbvN78EKUUqky_0cThBbQ86rF6uDl54I-fHDJ_DGOkqGBUu-Mb_pZ4hriFb4ZZ0rDwLoyboqz4Ymd-LHPh48NkKlFlAtJ0SpYEpBfNhO5QAOBU9RCm4eeze_40F6mwgOp7HOgZ2rQ40b6zfbK3ppb9Ciss3PVITk2KRBh1lL3Ux3h6KCi4rHApoFmOdOeSZ99Ei-I7ZA5UyTC1skA4eO4hvQ-_GnPK5G5uJvTWszNs1cBfpGeyUNtjNjB1E431LVoSg9gq5xWOWeEhlfBDfKdjSXsNhcPVCF97hpJkoEiGd08vU5fl2i9oPyiKFHJ03lNNXMwlJ1M2bHh2-1oSLSfUDHTXNIbeYN2bmHFCOPF2Xk88AN4PW9AS8mVLbG_T7FiOQNeJaOF-Ga_OSA-s24Fiexgc4tWQpSygk8wNdFXa-Ps4hulSZm6av82636q6ApTitapsUBCupsvoUDQ2_fgb6IW6Nd3qb1HIJrDnXW4qHZ2ZdEf-e0xN9IGsNrXFvfr4TuzMpt1EbbvN9kpFm2HrGL3lCsp31_-k1qb8r3YGTUCf6U-TCSALWF8BLakQIPbCfH84u3e38ZcLte3yshuKuqHDma-YI_yAqE0TDM\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"YXJjc3VwcG9ydA\",\"reward\":\"0\",\"signature\":\"P9llVCwsnTDZ5rFvSa3tkrdR7DKoLvsCqGCE83lSkY4jN4NQSOf0faT2VG_hpMdpiBQsQNTOQQ8mE4HuMmj79l1RgfAWcova1A5zrRzTZhmKn1kp3eOkWHc8k9_zbMCZGnvVNvTnvE-x8HlqKPowAOk1cj2WuYQmFHVa2jy7jNb3-Z3TiG6KbOuakgM9gwHdve33e-vgNVTYk2e4Q2T_xY8UjKh9nR_lnGDDYmgK1l_nylDNJdNCSwCEpiKEX6G44-6vKTlJ3YqvwSVAsG3aE5aw9eqbU1iVOBC3LUWBbqlbuxehcPeAD5PUsrRj2EK2AIHKmdUGX_vTY2adxITelZoNES3JCRB5oZEJLHPFnp9YM1PZPL4cmkuecz4nLqQezcwMRi2A_roL5vxai2E0gTXCtxp8YC02VPfWR3aJlmEDp9JEJmiiaCFkiqSo6fY-2O3u0gFofWxrttW9IUpUL5-b5hwSOlHrxGV6BvlfMuM3s2-SegI4eycxHtS5L-RRDgRzstlTvTXjkgLd7EnuzBSUmqLuJqtwLugrazQzj3D5_VbIxGiUcilXv3wSzkAv17EUlGQ_VRG_6FlkmdM5VCmI9RCrGCiEDKAv14O6yur1RmA2NUmCRffY4mQrCvXdvf5Y-Kt5lQo0OE1QPS2ebYcH_FcIkjLfBNvzpF_fWYY\"}"
  },
  {
    "path": "genesis_data/genesis_txs/BYJCPwCLpd9a5K1HFy5F6ZvnemPiPFtV4hz5wMHr1NI.json",
    "content": "{\"id\":\"BYJCPwCLpd9a5K1HFy5F6ZvnemPiPFtV4hz5wMHr1NI\",\"last_tx\":\"\",\"owner\":\"34gwR6cTYvzPku6eHGkC1PnoIXfPOGddXSmuCSClqrq_MKO_K-IPtzuXzPvkJAGOjEUuTbQ4PeuainpSe13nst_TKR7bfV8Ncl-Z0LEIK-uZsM3orlKbaQa9XJ3hcjTYt4J9qh2M7VfdXRdGwSjHPq5pbKcM3nNiOxFgTK8agoxSFBi1EYUjX4AcGZaCQO7JC37i7YVnzHtOIr-qbeYEk4zVvNJicgmtl78zTPVBAogbX5E7tPPnyZgizo4JlYY-ZL_awyeMax0IdX8gykNHBe_iIauvaCtrzPICMjpHNaW_UdSz__Ue36JjocS_0Yok6nyPyPcqAD9WS8u4xYrsZERgmKS8jp5F29wdq-4TAlUcVJI_DWovYjxouNrU42sA9vy0B7AFhlOvH1J-iLna5oyyYkq2683_MauNVePe5KsFZTBfL796f54GCypQf1U1zH0BcmNKBvut6WFcOAE09aRfF3fd8U-b36SyyyJQ6jchDAGZ5i2u9FqIee_KsmHXUbyoKvmY1Edb2afQb7ory9pBsgyR_JenK7VgpOdbhN5vQr0NtET2zMPmdljOVskN4M1oRL6NMDRUNVdJtzrzhnHErJnnTA9fm0TMSmqUbjyOJmBXwm1rrEwQxRZp9U3u3qkKkY-coF57xbMxxiPymPJoiJjCKEIXUJHcQnyWVXc\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"d2h5IG5vdA\",\"reward\":\"0\",\"signature\":\"HkZNBpVtZnBrRizTuIHH9Rtd-Zdul2b1WNWEjX5WB59mmBjfcj9MeljveS0P_InZG9EJZn03Jp0fbOia00yLZINJVvKO7OpOxH82RsChLc-Gegl5PmrqBA1pcND5f7I_dUzndsSmgo1MFsD9TYjmk9mL9_2aCU723GakcU80ICyBgnrMboNHy9971ChAT9kWbLQ-TloIUTp9gpGJuQJpx6NIXopM4ZH_9Dc_sWFQyXyWNPT-SfAM5h-lvjxBO5VMtmH62vo4uADUhY_iJyMQRL2kqww4mTitwh3pp5tHUrft2L0gebFeDJ64lbQ8azapbERSu7mmUUjKFsYf0DEaM0EQqUma_qMrXP3b5QwoBjpeuiARtD1exTYNx5Yx4ZOgZEa_5uMgHoENSwP0DLYXpmYL9_qAROReMfmesLKfQqanfFnLn7ItY2mk3sszifesAMUBdLnJEKrY7kl2cke90-TfDqPuYRfglUkstJqBVsZLoXlUCP4y-R8QMZo4jjWJPbSXkfthbGAt6nNnrkAKELeTuQ7lCVO9aBThXf7za5NKcaM1SPoWHooqrJ6SeI0XwVbyADx_6RvfEB4fv4Vb-i-cz41NkBiRg16HdqdOplWlAza0ZHIvhhA05LPOuCkNwBy65osvJq5DvCdCJSZ07hZxwt2Kpfv_HwHLdsywDIo\"}"
  },
  {
    "path": "genesis_data/genesis_txs/ByvrfeR4UNmWJwF2fU41mBo6ThFl49u24rEGpbeSI0Q.json",
    "content": "{\"id\":\"ByvrfeR4UNmWJwF2fU41mBo6ThFl49u24rEGpbeSI0Q\",\"last_tx\":\"\",\"owner\":\"stZftcxWjaA7Wzp-vbpC0WaJc0foMC5qtbsghLg4Uid4wUZOa6AfC6aSEU517I46IRjAcgyJEPMtuzjd_aTMZ_8sweAa24gVxmtOC-nYkwf5fvrj4PHlCuugDOsJXoy7rnrIHMJuuMk4eHLX74GtuSZJMgltz0CaHszCXtqINu4UodEXsF9Ig5x4_JxzPz10yXOZaYWkg24afyZJOdZYU1iYkkO86sgrqYWDkn3p1jW3neOIv82ZJ--MOWF60bfV9lySG9gCiqiKUXuWjdqq6FHj-y1xTSx8hTPnXbgCvleJVquP9L36a2TvuOOqgdBgIGSMipoFLwMRwm9piqjVHYAMs7lTv_fKvktUNSQY43KBpZWGu7IATr_S3qsO_elp7uis13KdQiqjjs9WDjC9etWlYORFBJwk39jygpd-89yNum6amqMLVIftVDxwRapBooeBGoHuT5V9SqgwyVxY7uBbOkAlfMZjIbfO96NPvZ4DOxIpu629zNC8gGAUSTEhcXy-nwth08BobUJv2JE0l9AUPjVvlZvT4Mlsv9_H-Bd1W07KKfhqBcuzLvfmRVmP0_EAgTwdmUZK8AtD_hFdsANV01IM3tVxvYCG8VlrlHU74J7LOajUjJt3YbwDuzPKHqYNcZI877N1RFpYmFgpBJrsebMcL0IgtERczeV5b3E\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"bGV0cyB0YWtlIG92ZXIgdGhlIHdvcmxkISEh\",\"reward\":\"0\",\"signature\":\"odfYYwE-kWOjhqMn-1rgZI0hn_SeB75uytkrQMhvhGifo5ZyI2sGxrcWHNUdTmGPAQ9qcM5x2G2jtEkOdTkfFHenFCzwXx8NcWXtmSNBNmGPQBxpmoRKSBTPRrvwjrjtD082OP4MTB-pRBPxlt5WkthHde1UQca3rwbKTWyIR-Wyy_lXe_wOgSEIcgsY3lLWtDVck61Qv6lcOdQcCr-iAO_Gkp5UrGtnVhOz_HZi4SKqrPqg7Oo6SLJKYtuDvjZFyc1hUxC84gIRTGAMcDxkIvxYxkzb_rX4jD3Pt20PXyZuHcHXSEbUmTgDtkKOVoLB0qparyVY3qep3fxCl0jQYf9egko6rVCxlUy0L3OfrPAAP9OaGOw-7qwJG2U2Q-rAvXYJffCw3sENdQOQgNbF5-ky0eX3zJee1D3uUJ9O77ifhTF9pJb5THjeU-_6z0t9daG2HW1KADjd35Qf7zqHwEgi_QFqL86-ybGgCcGXKLQPOCcYfdqt825IYfqcmm9fuo6tUZ_d0tD2RdSlow_P2-H2Bgk4_qCQ5rjwNhis4XiMpi3iYJx_plavmj9AxMO9_iIAaRR06yTQcJKiAA0Q5eNEIzjPpoD6ss2QqIc_qYQLQ-vzt33WQTdrUbTP2IGZvNeWHtZHyKjj8u86WOBMAnq4l8pA-1VeOx3u4gvkrBs\"}"
  },
  {
    "path": "genesis_data/genesis_txs/C3auX8HXhc2dChmvSBUfgGyYynuAr6P3g0p7420GG78.json",
    "content": "{\"id\":\"C3auX8HXhc2dChmvSBUfgGyYynuAr6P3g0p7420GG78\",\"last_tx\":\"\",\"owner\":\"u3NGzV6-aIFsgG7alVMtfUkVUc452v6Bjmg5XT6yGOq5eCTEXEIlOq5AZ0SPIN0L9g1ED2a4UKs9geA7tEPTkipDKBZsp7_xXFwWfkRMOSL782W_cRm0PGhucN7gv8qjuKB6-qKv_Ps8yTiq5CBdlnqZX0VJQhgGavTgAhXvV-OpnySQHbYKSimAaXuyMuWooFNvI7UnvBlkO2E0aOucvHPIoIwh8zkr45DCk5ZKuZX7VmoNJ5w0eOsc3D2tQryLRHsaaOMa8-Qiqs87Pn3_MNfsq2_p9ymzPf_ZvNIlrAwglOjBCo_Bf-nabNPcZO91Rr9gyXsGiLeOMXE_roGmkYM7rtMHGb4YCcOkLtXnZqCF51MgzSMFYJQ6Z66xEY_x0qgbWlgCLh22gE8uJ8LYmJCAP7yE5vR6X2dFwyP_SQBELwdiBnuCqFYEkimKTZ9-JNmOQ-BcVfT4aPvdWK46hFK1u-j5Do96Lqn9KfOKLzzNe-n7zNTBiWVR2HPnGc_qLbdg85yzvv1jjPD_Hif40UtdD1mhVtCQvRL8LLHM5uPGxfh5N5s2BxH8COtqn2DQcDD0_LEJPYrceEz3ZbM7MBgq6MlTuMzylROZdyNKYjicNnJONj2qr7b1W3oc-Z66RR06pEa8eVpSp6TM4JTZErwYOHtFMI989bplkRnipuE\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"S2FydGhpaydzIEJpcnRoZGF5\",\"reward\":\"0\",\"signature\":\"qnBwiVM3E_FFcuhZpdC8BfGhvEhmKFl1gaVvKOdPMEJLvESxy_0jfV3S55aRbCF-AamP0LTqaRVEypqYNOs4Q4S_OLeuqAfA1TT8DhyY5Z23l772drDt8VWWBVvNNMKF-f7emtiOkCKxnAVTBrwT5SNZKlI-Yr3wfUt9_P_a-G-jQWPy8BC9lvAMomPi-_3-NJgus7hc5M6tPTdEBOOQhCPZKrwBxvjR2bQyt5OLAARutA8qONc62Sm6mz6oBo3rziXVPVIJW05sfBkFDkJommroSeBbhxUfBR1LBIUNpc9bnMSAMFm37tRi515qE86zQvBd9U31Z3JiwUDiPCugwoC1y_rgJk_VatMMMxrK7kbQfkzXf5imG80jm3hblBO7-OJzhX3egYrZUIPiX2_80nqmMu_skc-HneZfo4QqCuEpamkCPgycF70Z63MtzpVt0iO_8fk14jUuxQ_g28FHmq41sEXez-RifQqxReXzFPeU610_taGe87c4UbXuMMlv7evoe52aoXkA6OYZgIRmvN0FQU0hV5J0A5q4lRKnlvAnamJcUHTKZZtwDBozr64Z4JRsVWJDt5kyzeSO2i41wB3K6ZWmj7tkn45HL4Ziaeu276pb02n1aPKNrsT7KZlYcfL81ZO1A-_6CfUTlWoi8uT9WB2XyvmEFe35vgAH8ug\"}"
  },
  {
    "path": "genesis_data/genesis_txs/CEXuGv3KvVtkf5gkV0ip3g1FF-i12WIDo6IOigORIZA.json",
    "content": "{\"id\":\"CEXuGv3KvVtkf5gkV0ip3g1FF-i12WIDo6IOigORIZA\",\"last_tx\":\"\",\"owner\":\"sPrx02PJE99dKRBqPotHEsuw54dRiJdeB73_Rf9Yl-tFj90TaNCzS0yq47JVsapT5dfCwefLpW6LGnMh1aTaLeg4Bz5RVCbSxFO0vajDoYB5SaveljJPXF01HcTwx96vdoR6Za-hfQgdd4HJC0d6QP4SC3z8J6ngsQg0DDcIMP2gngs_9FD8dl9E6zn07rHebi-pBFKUpuch4NxYsSrl99M5KdXvGd0hzfH9YwtciRGCDOP0Lf1DdsYXLlBECEPmVlBvoUTOUz3SlUFA4YEDZw9sK4i9v4fOygW59qrx8yTCy5D00RXkIh6NPP_GpmwRuuSnfV6xCa8zWcxXRVAPbZVfzHgZ48rsxZfdHS74OuiJ3vhO-SwpddA18h_fzVhAlHm8WHDbZiE6IvW85QMBk-fxinuTuRx_qz4thMnWL86SM-LVV4QEbNQnmWKapFmOi4gaMZjTDl5QF0ndgKqvitl-rXBPMuEU3EKPaMSh2d64cJm1_urbUevnioG3TXylNae9t4g0m1j8ZQzAWvmRneGSoWZEZIGh39WNjh4B7y1dh3ggSDnPMh-lmxvEIkborA5xNL_nG7zmC8jXLSLsqx8G5yDO1fZt-XQbUgK7fYA2Rkj4EZiwxuLIZo56wxZBdK52GbVIhu4f-winNDtHImELjX6Y4whaVjgPxyJVCck\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"R290dGEgbG92ZSBDaGFpbnMh\",\"reward\":\"0\",\"signature\":\"PdvoHRBIDiFUKDtF9jBvoRAuwJA9BkW-WFhzuzWb24a6j77KejriHwQ0BxLAU0AMJHpIkZ1S-dDHYLPQiC-n1kGu4k32u5yXuJ1KknF-amVGajYaz7DKp4eP0GuBeVJGihQ8lSuGsQgVor71Q9SKfYJIjiUiJjQVPkAzvY45btGYM28u3PvjN7ob-MwU4PBkZ5acY0u46GtDmILiazegBO1kzFmvPCzPZYJs0aLYBZ5FF17Hi4qazoEtitJEkOEMHghXxcJMpCQ5eRMl_cihSYXDbqroGZ0x5lFjgT_DGMYktkix8K2I7fg4ts5q3q3CKjvn-Clo5m9RifEoNfg9GSlayMXX5JkyiKYSzJss3QuIwI0ug7A7iUXppeDBb_ufG72dqZDG-eE5Pjc3knQJkT1Cx-1frV__E_dELaempeurdi8Ac4pD9shxnybMTrdzLGQTIiFwrJnDZq1Aa0vq9kyheBss5h2UaMet4ucbUZVQRBS4kPqurmQh5LBQhXIrXRETcZJfx8uH74gPzKp2PNMF0MpPN8DcOnFPGNpKe3SB3hfQn6Ul8LW4xGAeeXMlHmz-GuEM2M3RsFH1f_7ca4mBSiBHKmHx_Tpe0_XhRPrpUNGy461d6nR2OSPX9b1sSiQoJ7pc16vjn37Z3jtooIZEHHXLXJQ1SyiGLsbcwT8\"}"
  },
  {
    "path": "genesis_data/genesis_txs/CMr-rV5FdlQcRBo4loZzj66EFqwHBmA36tWiRMKGigQ.json",
    "content": "{\"id\":\"CMr-rV5FdlQcRBo4loZzj66EFqwHBmA36tWiRMKGigQ\",\"last_tx\":\"\",\"owner\":\"oyHwne8bkEP5Q-5S0r51EMu8_4IBH148l0uGUHoNGPOWVxj94nrBkLaL8Yz3FJCR1o5cMOSYR0cAnI3brpgJTG2TR-jFdrxtt6QxQkj3ZEGIvSzv-ZEvkxI4osXfrvnFQHBm6tzV_5ZUBE_-rUTFRAILfXjoPDYJtkJnC7dsUD08ZgXuMc5e_q-sd5eRMgvo24vWETbZJwBs7bRpL8yAodN0ZOCTniGcVfv9gk30OLfuAovyUYmN-zugYZZLetYe537AoY8LxKyVXfwofuKHkH6SJJM3jHYMqu1YGp5u-bqb4XAURBV77UKh2co2DjLLVdZ9k8Qmmg-lKKfjNT9ZulB4_g7fDQsZVPSqtAP9keqJxo-wunDUBjlbEtgqeFVPdP-JUjMFZW15DvBZaYoJXAkepFwrSC4XNU7Xhd0Q44IN0zBiCTyAVPBMwRQ1E_J0GyfzEtb14sUBy1YotdpCLRNCVjiipuY1S6DwJHSO9Zjr3-cCRxx7toYIqU-mLldJaokouYWRAllVfyC3BxNfsbVNqOj-zU2emGPht7Z4268jeETme_9e12ZEdQElEOc0Sk56CZwcSRdyEbP5g5iDF4Z2VfkHUUzb0FlaqKIFuLV8T9nqFgpfKC645HNrA_RfqwaWGQBgniIQ0btJFJEGP35col_h5MXSZS4AYOceEnU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"UGFua2EgJiBLZW5kZQ\",\"reward\":\"0\",\"signature\":\"AYQOQ6I-elNGGdEQJfbZziloBLM4NCXwSvcAsTjn1IkyjvP8cNetFhBV1O7uW3fQh6qyxCqDDO9xsO7C2ApRMKASZq7xAUxXeu21V_pPAKbJ4KxDAu4F985DIgi7bb_gbvX92yGn5MC_AO7WaplJI5ckPdyRLI8RhmXM3T2kyHK7C_xTRC5UJwIZZDhXcZLadJlYwvT3jqhDdItPzW48LZ9MpTStd3VJ5OipRnpd3ODLA8wfTeLUjMhj_UnnaY5S4oVbk-6I9n_HY4BgvUYzWhzY2TcVVGu06xlmkR-EMNDCsD4WJdS3MEQem8PCoaibQnh_vd2Ord_UC3yoDGFYvGDPfdvnZ8RXu7qa6JC-mln4LKcpmOaOoA7AIy_YNSb33SwylflKYVLJY03GTSp7MTooux3FYmeTj3rEHqTfbAPX0vC__VloOFmXY69pbvQ2TZrB2uCuMdUdUVlV0uNyOqeMnudaaXt0EA409RXnQIa0jNonXRBmMllWNSD-cL_yZUtI8MAleiZ_Ho2p8LynzPSi6L5ixCfjVTnC61oIfJHeK_AJrmc1inOhnUH144BtjMfEM1OzS5gAItqEqGCFqMsl2hsYYxRKN9t8nKll9Ld94zFVYz6GoqaD4JfHwNfnJwcmULFEkiYeg9ZTHWrNW1juYAX02MuuE2IpAhB1hZs\"}"
  },
  {
    "path": "genesis_data/genesis_txs/COXhhpbcLSEe2iP2kp4SDj5NjjBAC8CucsAgOHRF_lc.json",
    "content": "{\"id\":\"COXhhpbcLSEe2iP2kp4SDj5NjjBAC8CucsAgOHRF_lc\",\"last_tx\":\"\",\"owner\":\"1gJFGx8-oO0St_Qv5FJGgTLINva6t26rCM1IX8tu3P40DTS0UhDvTx8gjnIWDGY6tou8ptK0vFGUb9S0_o8TVB4ymosP_ZuOOeZGTuWhZ2eZ0oyCZqAzT8IBPnwEIBQs7gUmXQSXFTceLv6BuoVNWUM7LcmzySGDFLH0XC0ovok8wB-2gtJPnWBCPNh3OM_JHdIJ3Jg7bkNgnnj80f-u_saSYh-0qvJnV4027pG2VJ6_TRpghA5TbgPTeXf8LW_CXFYbhDz2WObKeQeM7Bn-soqxvcmGah5tCgbICkVYGfxeDgvNxGlLlvP5lrErUMzJU1qE3bo6MHfyLK69yDwrUkhZ5BWoNFGW2unVeAgPQstBHfFOFU2F5f4EoYtuQbvWUlAE81oDwkmNu1vQOI9eeEJmthH1we37pmbMGDAbfRZP7iR30j_IKp9arEnWQG5CK8xjDsR4Q6tPR9eZ2zGTIf6du2APePl7Eqd8ONSSWRpPWqzPvKSlkBU0xLtD3vCKRLCL-qy2vD6c-Ki2kH1xWgTIPoWa4GvKdEYQ9OBnHxsCh80bbsKWmjjAxHhrPt4kDyzJjH5FUr-ZQLzZ9k5RJkb3WwiNPS9WKnGxUt6a2334OyFbiaS_b1cjOfpbWuwlYY2hlyg90s7-unzs1VSCt4jyrnYbxAmseXOPSZiFlMs\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"Z3JlYXQgcHJvZHVjdA\",\"reward\":\"0\",\"signature\":\"V97CJe1NwJWMUWh5bsmkF7xAAAkhKzcgAYHD1lDw_KcAbgpiijUuKQdjaIUwTkhFA9vkzPCfVqBUgnVfXX-sfBKDq4KBoVJaAXBy6sqqfvQOgb4bGQfTmmzhYbVgDREUk5p0TVL2L_Nczo2XUv6fb1iVOeAe9uVDFXDZMaUX75GrgdHkV_xfSIqDB1nX9rOHy4K7tJElbT58Dc34A5_sVJAE_WU7o-IP7UOZkbZMsQFLha3p3SK1NRNh2lZZc5ZwMilbIItl3o5mdujj-BlYwtRp-3FcKNDpQogTScfA4CJEUZZys7Qzz5drtEDTcS6FPAg9ikk-DN_ywKo8i0MEugBJ87GRYchEOY_nTr3GTu2Qik9dAq530gaMynt90M6A-l1ugdCM2DVrLJiHn61y9Fg3GlL_TZifCFNDPb_8x4QBBLJTgEyZCn8IrOGjN8Bo_EM1rk-aGpTaYphwjGTKDU14W8-yHrdS4cUJMGP3miO_INspfeh0bc6dDl11hM1657Ou5aRxxEQXKRPHUTdQWS_KA7h19YTcBedPvPhEwRPYsF-3qCDJZiRNARme7gHizvNeqbp_FcCleeBw8DA1nomR5BLiARPcue9A4cttnwNCn30KXAB749GexIfSuL05_x4Y-8c9diNB5Et6BHMrcccBxpzbmK8zbWUzvquV6qk\"}"
  },
  {
    "path": "genesis_data/genesis_txs/CSkFcCmNgvnp7jp7aK0tEGsLWiZVMF-QBkEFaJrAG48.json",
    "content": "{\"id\":\"CSkFcCmNgvnp7jp7aK0tEGsLWiZVMF-QBkEFaJrAG48\",\"last_tx\":\"\",\"owner\":\"tu-sNMjZll6XINwUUWPqQQ8pFZF5ufnezJsQMBovxydJnrgqtolhBKGGS5N0tud8Pw4aGFWsycpYUyDkCw68DUeKqTiSQa6b6v9Y3lSuI2iA19th5zRIxuF_TApcwTFzXhNITTLFW5Gt8VeaUhrGD6pcBZLzAKJNk4Jx034YbOPPAgxCy_shgzRZZuvjTkaMFs7-dZ9T7a8kEk3jC568ddsqpk68z60bHdA4w8RgOhJrALaAIk52je8N-80xGJ3bYmUnPDa-oncl3ik-tKIQ01aIEvEdElLktS1WHIHlO73v4aQJQsfy4uZs2yDp2sfZcrVSpBQ_UDiYk_poYGt9OpNezHli-TLcJ6HDXcJUj4_AcvO-cbodIMGrdFfOqp-u066Q3TY1EBRD8j9tZ5NGoasgP1ICNsX11vBIi4IRAd56E8n7AVSvacbaJWgnkKamd3V5j5_KB0vE2YAOT1sZU64_vh9yR2KGxd97JGmF6HzbC67vkZ_ZZb9FLpgxARTeirRJ0CE0F_gL2QJWPbhoyDwskzKiBM7ndV90egfxgh0Zpn-8zYiEBpRoOacYp5CF2S7baYm39C9xr9g-GGvue4bJ0V9p9v6ONgZMchHq2auFBs835hZ4ThdtRwggUJJcRUwhxaNjuYcYmpSdfP-UY_LB52hvo-OKT1tiYklNags\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"Qmx1Yg\",\"reward\":\"0\",\"signature\":\"WXIKFq646Yr5MHqtJwNkzNlS4V5e727rWOnuGKZRolBKpR2JM31hNLdkVA2w4Fmg0iLpnhTTLkMtys4zkmJDnFOEdMzgi1z0KxLUz0jXKIfh1PjEb1ipwcaW-XMNDYdPFabOTnN0yghdVx3tvRJx79U7MwoJzDDpMXgZ0xK0vMT68dgkBY30YzdsKomfDO6M-Z49tLIn6IO4u4j_gAtUH25M6f1duqPd3NUc0ahDBSTRIOQqbsS-4WupXBryU7GD3Q1rvKFQZGh-QYfOVNEqCbnVA15R8DUTuRAq5o3L0sEgt8CwFFl2tUN1BbT-YklGauV84o3eHZuEa1V77drrp0XHO5JLu8VQ3RnnHHzopLsCLaUJNvRTn1nihHf00mP09dszACu9sbFp0xyehIvA6zp63f9vkX21tD9Q2tpVdYVyChv8qE_lN9V0E2ls1ftRPQQzBql6FH0FX-LKUR0Cl80FmtHJDur0DcrccnI6p1P0BaK9FL83S5TZUDpz40HsQWzuIFLMVFeOzWBqmZ3guCUEYblMzGpoIzD6npMzSEn4T5-DzJoVe95QU286Pe8uKdnHWq7WjjTDqiQ54dYUKekKl2QukGONG5krIgePJoCrPzKamBjUIe14u1dXdzsNX1sDtCbuc6vy8ni2MSyuTOZwwcq5xC6I42D82rdCGaE\"}"
  },
  {
    "path": "genesis_data/genesis_txs/CUu1gtu6L5tJxkOAu13tNBGDKECohV8M4qgCOOPNtas.json",
    "content": "{\"id\":\"CUu1gtu6L5tJxkOAu13tNBGDKECohV8M4qgCOOPNtas\",\"last_tx\":\"\",\"owner\":\"xCb_nmC5a2uJLUHDyig0VRWnXIIS-QoGETwoN1K_QmLaguOoyUy0QPRQSARafSzytqB8cSPtRB8w82aHOlI8xscE7q2s-ZQFgSzhG4fEBegkEUb2vvNkTchgAbgN_amv2Ad0q2oGHaq5jPYFzv9Nkna_OA9G1kkUFSjNZM0a9U4ybXna1ewn3pklcUJ-OswbHAiuFQ_3xvxG-HsuyuGGOcsVMlZHQmHvHbbX0_o-M6s9ntlORjMTVodAbYKsx5xSj9X3bGOFO5BeWZoGvcXtB_5AVWxdwA8noTRedRh3aIL2qmUINowwsHy8HtHlhNz63xANphu7zb9lYa5q-ciORQ_ZSKbVCSXNnmt_jxofBdN1x-FLdDI8zXKQWBCak3CJWxpw6JCi4dEDoDNgutapdCh8UiSbxoeyBdANWIWlvawhJYmZ0IaZrDfdd226F6-yvspG5KU_8mE3U3lK9ni0TGq3LMkwMMZ9C64PphWBabu_FL5pK1zc6VKWGvLqeK_br3MUJPxPN1FnSRneGD-Mo6-ruEf-YwENUTFTpXaI0QW5i3S1JWr6Nbxo5l0grEeGvXBvZlY1xWPeaSdC5V_2B8JIibyjwCfDhFGNDvfhquRTy8mj-PRpHwqU5WGK9MfaUDrFLNdAq_tgMO0Am1G3Er5y5wJ82bah9NHCDGr20Vs\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"R29kIGlzIGdvb2Q\",\"reward\":\"0\",\"signature\":\"Gv_aKzF7L2YuZfufu908Wiizlz5OT6vPWbUv1sEbRLb1ko1e6CB8ftpAaXPyfjk79WwQh4Igaxhd3K9T7zKNFmULCU2n6k2kRj_DQLS-Nj3ViDtAT8AGyKzhZhPJr0EQqGJDrArxRIiYGQj_pYkFaRd1ztcbRHH9r782p9CpAKZN_Xedj24SHfJLyHFb_Ss6um7yRlWNvBC_LwA3uaAtWfFFvECOp8LMa-YUDllEANnx-ShG-IuowD4yurY4ayCcIezUjKA8z49f0i3YM6gfiFZ3KuxdkMpWxjZ4iFpGmLiMZFrGt5kYAGgnJqgQNWfMoSZxjBTPqz-cRdY7xlU_HA2FcSf5uQ9AIoQHvA4qKF47OLsZRMNBAbjRgh3dGFeEdaB6c4loDkyU6C09h_KQLTX5qzv6Nr5jMPqaKYaz9dzPggTuUISAIv7Veh5OG1JGafbvfLDSpliA8JwJO_4DWhg75qmC7RmTuZ9nsmcD-ZpGY5kXr0TUkaK7UUW90TDPP-rrqF0UU1AgSgCt7srUZvQqG2WQaXm3rauggpXr0ffVzpWBFlFW4P3SuntWtrHXp4Fdkrhu7Xd3SV1P0k5qbVKdIlbuArJTqJWagka5utDIVou-ha-VQXwtZBk-R7NccmAH_iv7IWYGV4Lf0YLZeVUlktlmh4T48r9TRZHi6bc\"}"
  },
  {
    "path": "genesis_data/genesis_txs/CZ181FVir4NaSJ7JsVb50-xCaZtd3dmKbDer7jpTSyI.json",
    "content": "{\"id\":\"CZ181FVir4NaSJ7JsVb50-xCaZtd3dmKbDer7jpTSyI\",\"last_tx\":\"\",\"owner\":\"s5e03LkGfu88YaCs8W1bePRGRQnP9wZfkTeSLqtV4NWE1RNqJoA17_NyY2AfplDQQ1kXtlFEoP2oGwCSydkFeqyiYTxLSf9rYgTWrsZbH_7vPv1_JCp-KRpuw_xNFNzuEz7Q9mKqleo1Sh6o_1-IjSzhkbZE57cOT0V5FJ5fW7jnijItmSyHFll4B_1dRW6tOOVsvNUUc9InSKdI2ULu_QAqrwOuaewXnX_EefwR4HxBjH1QvX2zfi96-fkorPFfQlrr9e-ZfWvI-4g7Y2zJH5WlW7NwOq-DYoA1QB0NNc8PioNWiJj-d0F5g-6Fg9kMIo_EGp5VuLzl_uKEd_nAQOQmI8HqTCiDjM-pcIgcsgtL_FJKptpzoWjiYCdJ6LCUg6faWDHQDQvz6-6h_pUVHGtN9tN4BEngzZEvtyFOC7olypTyjq_tmz0BTvauYKKZ4lIH5Ju-5Vhs2V858I5ZbBka464d8wnNSHSrp8-v9veSFXVdnj5Li3aD8XyND7EfWcDT8B8ScEIDa8yMC19f1YuoQe2L4oa37tWF05IsqmMkrOHUpixOPpkfgjZzgsHMfQ86aDXZQTQRfddJizxiK8HIXn-vQVA_91eeSRILvmmLOeZdlBVWPSRouC5yiAywUEU6IBAlGNZEY5e8rI50I9pRA29nFnct8GKo9VHKKd8\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"S2VlcCBvbiBjbGltYmluZyE\",\"reward\":\"0\",\"signature\":\"J67_NR7v9LheGe69BUHU8jYzRVxsrdBaNo1Vz4QwI-aamBDf_Ytd72VKQsdu8bF7TC2aTzxtyt1HBUpuNbkxhs6a2gg53TSbbXY1f9kMIaXhk92CBfuaUQTXO5Q_SpzbpycwEgyL3K6cPQy2oNRVcE6eg6krTnRhdZiTu0wR_PhBYH-bm96cRbi5Mi91tQwR64jTW0W4yeohHFtXTcFdRhL_JkD6-FD6P7M57e3-j4HHpNl48zk_vNbgx68aYv9m4dH8lYx6FYizxCCN-zkdyTvDyXT_74nsuvEFFtWKQgK_RU0X23JqSvfPE2x631bdjjgsbMm867btIHRbGwjiUlz-3W3h646uE4vHqOrikzyeDZNkUsajahGIX3Bp9oyLtxGE6eYyNDSJuveagJwmDVFHok10sA25JAng0mN_B2cxuh2xiaEEAZ1-oiEmhFQCdDDhGLC1ViJjh9h-kGMXWbJHycfR_9LTkYpbh5WyzzLZc6om_yC-uX55IXJOKi2wova4-XMMv74MRuN2nlr2Pchm-4IOitx5ITxNBGKWhua2K4oVR4WNxIc-pQMkFJIcfcXSUL0eumd7ft_6WE8mRNFfoZSAhuQ38DMlOM26xiIjkVSig0cVqMKSkISnuTkIg4ruO-p4atCziJaA_mRftB25HQctdUCvoWbYD_u8_ss\"}"
  },
  {
    "path": "genesis_data/genesis_txs/CbV_CDXgVNjV6fyoBDkYmbAcaC5VsLDYXgEIwj2Ewyo.json",
    "content": "{\"id\":\"CbV_CDXgVNjV6fyoBDkYmbAcaC5VsLDYXgEIwj2Ewyo\",\"last_tx\":\"\",\"owner\":\"wyDKbJ5GXN48RGneEzgIEQZfURGmPPCTd2bfJ4O0oM2xGnCMnZ2WBqfKR-2jXApe6VEaEQX--dM-V98BqCQrkKIwm_SOoQQ7sUTOI3kaC35C1dLhobF5bBFF0OuHKk7KeGqPb361QGB9yMDlj7eTyZN_VUDXjS_UdIhzqurHHOrtXsmyqD3zdoUN9wGLkJJIHaOk1TqwLcZyphDnqc_0sb-au-nKbf2JqJN4DD9-yxVw4rv-qBRKil4z3p1CEMpOhgxCXhxlIBTb_6IN29vRoTDWju-OqWuoFDCRoC-8Vd3ad66VPvZBi6NG1k95EyUgqzfB7i9VPoUp2f959MsbsldsFG-Jroh82wArN2ydvni2UcYcOQiFXqf_cH4pKvL2IdnDmcIv5BF0-rMYUfCyuxPOplqZYDH6BC5O0y-fSZip1-T5tWSP2OSRcghTtZmyJd4vvdLJDiD_FL7a6a5zanY3mrVILj5h5sdQh7uudz6OEx5lZT4GFUGXHYoFAa92FPLV0ywYTZsM92za2aRULlBS4WXTsXrylnJ-fr8SJCFehgSkdSuoGslG-XwKZNtEP_lXxfB2MfczrnokYkPSpiWFnDgFWm3eAB1LCdkazUDeljYpiybMbPwoMj5wyeczwD2MiERM6-CBahQO34SFUwT6sc7cbz12GlqI9igFJyM\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SE9ETEVUSEVSRVVNQkFCWQ\",\"reward\":\"0\",\"signature\":\"KpFnbhNaOMtTJ9tBuBZbVkO7VMvwF2CossHxomc5fUkvgIBmSvRl0srvZtr4i-MEfBParuIdXCh4usFRW1L1tx2D1hpuWFzGESCL27KVlzmqaIPK_efroc-FzVHYuhxEIXSaWqA-64TjcUVKW5HR3okWYhCk7vuVvuEMnNyoeVRniSTsnPJBK6prDu1vwQhfXlGb2iBwRvzsVqfniJC-PuLW3Vr8DGuECJq8W_OHrAKhmQ4I_Cidnhh5ivJPPZr5IXimQn3Y0IMljDHr-FcIf0elBaabPmuH5L69uT3TjLCIQeauIERlYeVkic80q_0VM71Mmixc1mDrqfDlwOBBN6Sm0GOawM_Uvel263a_gOCSIz2ouekq4xss_zBKyf8DOxU6d50kQhHQ__ZP6dvyhWK-uRyprJ3dVFPRdkazhR3e_kJ5_jWq9YyF7yxaa1AoGfTmEl55AtTZoiE1jEpQzwD9M2GXaHSiC5jpg8FDXfiby1236FOBMJAYhrhDqoFMQOsoW4Eq96oJyA3BwxgdMSwAhqnyiggCXtYJQp9IDVHfG7R0HtQwQtfHGT41Rf5JSI8mkvQRlzSTG25TrX-yD_EWLEPVbUrWIiWC1P7h2in8TXgijaDzsg7CPE8AGOkoIjpV21tWE9IqKpd5igSSVD0r5dhtCDfnBYpdV4sONjU\"}"
  },
  {
    "path": "genesis_data/genesis_txs/DC6gmByeCki7uyXHJhX_A9x3pkMgmJ8Tv6wDRnh7vGs.json",
    "content": "{\"id\":\"DC6gmByeCki7uyXHJhX_A9x3pkMgmJ8Tv6wDRnh7vGs\",\"last_tx\":\"\",\"owner\":\"pXFY5XdQ0pZRAV-YAydPaxF8z7I1cZMRm6Xjp0fchrVfbTCZ6W9RJy2utdayEdVTqhybuyPqPIdLjhofihuPhCO8XX2baRxPdnkbimm33W1k_F5XfUX6quaJJjcqRM6DDXi4r9Bt3EN466DHALh-lEyFiym2GYI31XxTTMRPNfALs3waX2CNjlcYTuSuTykFmoy068fe7xQyWYMQewLaIjAhSpRIhtiX6CqnB5H4I4qscTOHrn90OrK9zDGV6QaYbRDyUVq-9AgYSg67SPf3tXB7kGAsfwWoPciFhAmP9yk5zARbo7AZMNmea0lTTCpdD3MoPIi3lZyejYoN2cg_myc6aqS1_huXNtX07gDf3jQtlxKOQkZPRR35_qA_u7iquRLP6aZew1GcW2c_p70PD6Q665OHYNB-5hTrRAzJEUzEq6y17BDIHiWNPRfMQ1oJxv-pOLwyRgdT2EG58qG0Tlz2FNFtTJ-8o9vn-omRpse0LksrO1PD0ivK1a5lOvv5GXbgx7cVJBrs6oeD7U27HTazrno3mwpoPrcGh2slNonsBUtUtMZaEIcLpyeOOOz7PyKXml8YXoCLA7Ag8E7u3Mj4aW_NdKFkQYEq3qyKV895vXP8GAm8tyuFdAD8GT5zDvilXlm8komyDHZKXe9WhUnCX-vodZAjGSB9xXtx0A8\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SSdkIGxpa2UgdG8gY29udHJpYnV0ZSA6KQ\",\"reward\":\"0\",\"signature\":\"I3isCYmPyXRf_ZmlbzRjMN6l3qinx1cNH2TaQ6ROGplblxuvw2fDo-mvZ8gJOx-ne5FfteakrriPavJpMhgD1CE4w0HTcdXOu7zGMcF7DxAtquvPfR1egK-l4Np27ORpMQS_DO4tl21XCUMiJucJv8_7zQb0OZJhsjFNn7-E2opYzulfND_eUnkdqUFyLSynSpedDQZvlpkkFI-klLjV3viNcvdsSMvdcXRh2Nhq6Ixluwhvx1Esd2kaxyu7cnPAsJdWMdZxDfseuu0TLB00X_tWaQIRpCcVNB6D8VlbzfNNONeMU-quB5oUNolrJwe-lsQ5fsUHNGHckzEy17dLFuFKraUCKmHVlRFaCAPcd7h39XfrvSm8DPwn3XKN3m0-vSl4U3UHLEyE0cN_3vUe_3asu5828ZS_yxoOmMY7VDFPN_npalbhSNrUG7oO--HEgTuuHPC5H8BdtC5lYR1Fr7Y_AUdP3oPj3KQYDWEo6AeAd3Sxj3s3gOSlC69oyiXUhWvoKx4WXRrt077OOg5rpGmFQHkp8iBwoXO6MHgDo8v9fc_ZqLZ40r9MROl32ZWGK3XWShOK3Ai11yri08ur_Hc-NmygEzFmSkFEDDQzfuiI3iWL92hV_nUPz0M8kZTJwG4pfuBxIJxt6H9WVSpwIKXNsfswYsYqupDJYfMsIoQ\"}"
  },
  {
    "path": "genesis_data/genesis_txs/DDrS8BD0XTUVJt5E8kwisVTBX4PBWp0lCnSkSD3PJto.json",
    "content": "{\"id\":\"DDrS8BD0XTUVJt5E8kwisVTBX4PBWp0lCnSkSD3PJto\",\"last_tx\":\"\",\"owner\":\"16uZgW-85BC1KC_eZnsSIrYt1TPCcRFh7sobb6_3ol_sE8dHpnCGOMumAxZ43kdlReZF1rCJ72rJZYlTY1ENQGUfmux2NItHpao9Wiqvr24JLNDZMQn0HgUcMQNk10zUvMBgQURmahmtjBRDmaK1F9prPuWiKxFCGAbeLkNdQnEu2k57UzI-6fmJokgOJP8oSXou_lYs6JSn4VG2s4aTm5pOEEd7Y15e2DJ5QEJqCHQYFP7iQdG6phTv284JiZtmWcKbYn6WgGQ-GUkCSexJ4QEdzD6dXKgwcdUh2tiyZy-zOBhZfUo8I4bQuiYks6kMLAj2nWdGxM4wxN9VrD1WG7-bnv4t-GNb2fzMRdEY3BwJUAOe4YC-0_hjEHE7PXB08k8X39NrCMarljbWrxJEXTIaH7reE0EQiknVxlzbVWQfbZCBl5JzKdNzyZ9E312f3xmtwVcb0irVNo5sbIBK3mh130vTNs2fkHRgvSmWnKMjd3Q3vumWTg64rDOTKaXpXNL6OTR8b4MqwTG4mDmk_H6wQCQmJ4-KV7YGd8U7lYtqcegivEVhmS2W5rTNRlbc-nIwZ87YBftG3ktWzbe_8rJYEV9gfv5HnYfZUC-K000RyKP3UxDIrWcSZVR5GERKPepHTsKrClOaVSjxS9cPRXLfp-uyCs9ztoN_xCyZEVs\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"UkcwOA\",\"reward\":\"0\",\"signature\":\"HTEWQ-G6i-gzGVbrAc2e55k0zuhn86kkdlGmJgu2ZjQ9yCeVENa5g39Z5XA9GsLcCSxxgsTDblB9Fk9xOmtZbNr5vS9APWSZmDx6yADlTXBfjoO-XolasxHOLm290zPfmC-_4QJotLolvImF96V5aMog5Z6sGnwHXW7aeRobfsly_oHUXybvaK7RQmNGaO9BjX_mBxiBlbpTjW6YKI8VWY2AH0zXntXRlR4PPFbeJsEbUI8W5RJ1SyG6Ds78TIO70kYwdf4kLct1AP_Fk9Y6QJ4NZIz7itRCdFAdsIq-shsx3s5mJTaueJ_YKxG5bd7xP7zWUmlNgkW501jA4-BRvZhTaxVZxLhdWGJwjxbNlnUtvmFfomsO786_UvDrQR-143vZQqOoXwZtwJQ9qJcJQ2c-Nan5ea0VAM9c8Pvs2eRMEd4xSRN5etSFK6R4pNTNcUvQGbfM9lpKfFBSvpcARgffSU5zcx9gAcIAx375yJWH1tTU45ESQXDGu7ErpfRADEb5VHr4gAP_1CDfHN1lcOiBfzrFmaQ1853vBTiD2lSWdeq6Luy4wib99r7OZXldJUY4JOBQz9Y_JlgdBcBKDsoVNbjvJHRwXbQS01z7Ipu9LomjJWrXLMpMledp7Pbi6Wyndntzcr_WiMIp6l8xXtRdWr46DYKvYWZll8WxVhE\"}"
  },
  {
    "path": "genesis_data/genesis_txs/DJf1SRoKaPo1h3F-7oKIMu4A-r9dXXMjE57WQilPdTk.json",
    "content": "{\"id\":\"DJf1SRoKaPo1h3F-7oKIMu4A-r9dXXMjE57WQilPdTk\",\"last_tx\":\"\",\"owner\":\"zsyFdyHVbaXMKt2myMvIg3_Rq4dMFK1HuZCI2iA6zAQRmGMfVE4dDwsFUCjy_roE7iDD6PeXY0DAF1Xji555QWJsXeiJhnVVzSTgtJqxW3RIo6JJYo8BmHuvkN7SqiawjtukEPUF9f8BMigiaChV5wte34aY0SLb4y1R5_nSlSXaj3XGuRJ1fWwZoU6bf2nAXhvJG5W7b0NC7WFlgNG5VZR2g3N6l7gzgJR9eAA829UHEt__gECLY1oYLAr4BuB3LZaywyfOa-kIEp_LhZEvfcRF-JGU5Ew3iLw-XZJVUx8o6YYn6F7Hce3O4hFKgsaGsGl0XHrdXYNBUmjtk39q486wL0ZLpjdYEC8nvFbCT7Z48q6becznyaG5sPKxdnkvqn5ph8zE75kWBG29xAT4nxTCERk8KSys9qfrtUZ7mLnTT8p-NYYapDdOpxc3pytFT34Thp1M1d6kjGOX6bemuw1clOZcXAghI7vLkM4wuymboqlB4fW4wG4DpBZ0w3gp7hlT43ae7VLLCxIM8tyJ7WEHKRaNzafoecOmV7tBnaKZgAprDPzTMJdbPRGhFGNRTITJONQ8NSEBSPnlIQrk_OMJNfsxLAhlvhK9FsP0bm647ohpyowyEmihBqpnXw-_A_9dF5ZSoDowfNvvNcr3HWH5yeAGb9zSqrsqyJLX-1U\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"TGV0J3Mgc2Vl\",\"reward\":\"0\",\"signature\":\"SJUNJMCsD8fh-BRN3IDVhKXxdpMqutvfKZQ8metGHzCFqMerqMO6YYalZy3uWbuKJZTPIRR1zp_CQOooHNz0GB3vuOGg8D7DkA_JeY_-JGwAuC5WR2OxBQoqtqvSNzHUipocG19rGbjJ6F0FeRBimPXQ7GPhFyS5e-pYDwtjzwEvhGElWcBYSLo1xgsF-uj7smrK9uVaQDYGvkumRq3_uMBzaRvTMRCPJl9PDDgbvBPrVZ4bPJlMvTHaBBzZ_0NpeKjUZSYv5LovF_kHlxpDJoWoi3tdL4CuoSv_Xn6Q47o4iHvbJ7wAjK1_PnlR1M_srjpnFOTR9SGFeEZgzW2Eq5EL_1S8TdKqSreNyUoIBczw1J60HvIHWlGEiFY_5GcqiylZIlw2ll0dd2PNXRCrNA697aKve68Fq5IOENgRG3O6f4hYgTpNEkAE2MravS4_pJwo6KZr5JWjBZbYmvoZEO8nL860j2cY1dt0aIxUYAclRlR2-v9zXOI9J-z_pqH5FhGJhivICMOTcaYdhWfSag7tkn7UYK5yAIRLNoXZ7MOIIoP-W8cz2q6KFRLABJn-lnuENNFW7r_59BQvI5S4EJFEqnzbezofsBjaS01q-LHsrwqRLsT8A6qmmYnISSc3EK7zu_YICmagkSGxeczYyZaFSKQSc3f5OjvW9qPl-oA\"}"
  },
  {
    "path": "genesis_data/genesis_txs/DMtXbcR_qHwdYXvkuCGOQARs_QtN9iWPw4x6TTaWOcw.json",
    "content": "{\"id\":\"DMtXbcR_qHwdYXvkuCGOQARs_QtN9iWPw4x6TTaWOcw\",\"last_tx\":\"\",\"owner\":\"3iUmHTlxrbkj_MqShxKABrCeKBfd4gsFUHYLiDNTkCl_JwUyrguQ8mT_22AgQWHXONaBpLJSnJ0X-0NcFOQ184lwIIqZGOlhMY3Ti1sHq-MnflqVzjSGEeSYW-Teu27v1ILfZvYZPI61YWpBVosxGUTOk7jPsvmrW4WFOtnAS7KRytqRPpy4waplu2yEy0s-ssthupvVU4Xdmi9kuqNpzRR041uSevjdvLUdcf0lBct7FZ4j7x9-fJkjf6GjcEJzFJzX2bC5-C_JOQmKoGOubkPRQHxo47pjyB6mXa7ZenNHRvLNTuD3TlEK6BqJlIobgda-LJl656ahzhOC0Ph2CqKu3drmsMmv7Utr7XuUgJP0bqxtMwCslAU2w7m5FzUMFWq_xxx11bD-9wiqxAwwjoCyxoNBAkaiTldjxxKzGpikg4WZ0lwR5agAOwv_2YgWLY4gWG-Ufk7BYCMLbttc9i7ODj82yIWKGa5i833dorxpAaqUGcmc6uTXCE4U0bFZqsbMOOCufr_MJG8rg2s-KnKRGl3kSIgPn9QRH1E79gVBU6_paIwzZb1x4ZwFinIFvWXk-P4J5zWkZ4_8xJmKMwZIIVj4dMgH3J08_G1O9LTOTL_B5Rj_rmNF2DWiLZ8x_EFPzZLG547Pbte2d_yPp_V_p07KM7RUjT1iIREOmhU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"d2lsbHd1emhlcmUtMDkvMTc\",\"reward\":\"0\",\"signature\":\"xfKfcJrEeyUq0s7ZRHfbUQ8mJcn78FjUqg49rkpXiCRZWGnY_gz-o8yJ4L8lNeFBPOv4xEz6ywPVpZ72ypIngkibT-JFVe7ZE8mV8y2JRACYLtZbDrHxLseCDHmwS7aXr8DQlUxoFVOlz0wCtvxO9CDV-JxXZN-OTutbZ4PNOT3xVc6McH_9MUfJGOcV0Klu_ojFwIKjDqQNPk3-4CiO0yx6zPLPpeuZ8-s43fxnXMsT9SX47RCMDrZvxRYg7Md4GNjkzXMZP2gV9fG7l3cR74FiSZ73g9BmCQVGkJ8ueK0ge3KBIabfU1bUxrNajAJj86VNuHxrWtHcE460Wx5vnBynK7L4GbuucFllFc1P3LnQqnWPoVsqCYM_dJMIjx42XgGAG7zp_TVcyu-qs19XlUYKxRHB7fGz0mdQTpbpmA_OONiQImQepEXZjy2W4hXDz67IsO-yGga-fEtccJ9mCMj55o64vnfGbKVB8oRW7-6DZdLoKZ0ZaEjQ9EY6YqTu29wD5sRZeOdW04UfNUMPjJQCauvpAvYqEExRpcqVIjudh2oAjdZd3MLBB_abApcrK1VN3OeFX_KK8ahDEiNWm_uNLrqg8oYQFO5QVlYg--jD0gTUONaPWMUkCz_dAGjUtNXXfmp1B5QI-8k1GLG1j5WjQTv-ykTkGDDwiWywd-Q\"}"
  },
  {
    "path": "genesis_data/genesis_txs/DQ6WaBfLEMEFhKoMoutuPyO_zFg1hWTDXT13CD8n1nw.json",
    "content": "{\"id\":\"DQ6WaBfLEMEFhKoMoutuPyO_zFg1hWTDXT13CD8n1nw\",\"last_tx\":\"\",\"owner\":\"xOewibN6O2c1JOGfdxWDruM3eHQ9FnoZXP4sP5hWPkjjlvkbMluZGL-hvPEJMaInTXwa6iwmpoIKSqAW4dwqC_im-RgHOAvgINC1IaBREbsAYRHCED27EJSAEfGtq_HZSNw5GuiRMu_1OrOkkBmBtE_NEJAbd7113cBzEA-J3xjZIzsCSgixxn6Uv_WCTKzk1hWP_ulkUB-4hsUg9pBIbbt3i9MKhAv4NRSBu5gh8k7r271oxSTDaoqlsUJyN1yITCQh_EuYBlgyEEgDnARItoQ7jWlvgXQvuhI2dxjCb-IXpwbpBTu4YmUZpzIIj-A7yD-9zJ2k-E_ToPY6xxyX5zZf5e-2ZVHdsIGY4n2E5_RWH4t8B7GGZDg3KvhZZhDRJ-p40oMA_hp6waDF-pkoL8Da6vvS1jaSAKdWBdytlaL5lV6BwLE9SnAdqSfI4nDy74YSMT4CiPHneBwUNiP1kjYc7KsVKOdiMUimFHdL3sN2uFa_qEG-7WWgHgFbrqsBzOGsKpmZK0eL1P6pQQxEKI6b-oeuJZRTQ-dSxJc_BUWFJKx4Sho7wTsSKL0jOdOO3jU-vjwmgbEefyvCLyqKzWQJZKCv5qJci_J9ZjoWzL-nWXGwiN8OIBMb7SHbCf3Xx2krvwqnNETKAGghzlfEGzj5_JT_eg50cdRnRC8EPVk\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"RG9uJ3QgdGVsbCBNYXJsZWU\",\"reward\":\"0\",\"signature\":\"Ked9wMPw7cgjlAalh-m0AvyeoflTmAHHwi-meyDb-6sCZnAZv5g_7duG47oSiqqx1cNXLCTVy_TCR565mOvxRUaO3MnEdpqTO2UW34_iNmA5J96ptWeeW9a_RWNezM45UdO2L2qC1fpOxZ4kUNHYsHaq0GED981f-ghKyp7Q7txcGoh3DAEjhJ3S07W4fqhCsdNoPZmU5JNwxaf573d16PXmf10t89MkA7WwUrgLIRLaimk34FcQMeQ3PuLVZGBswpjraDPiIjQz2PSme2ahPmax_xDjy8Sm92U7SkQOlqIpdNLEbY-svom70pS2EsWA4DfYA3qHFg83NJ4uZJQM8VQ0vwc0mxGgrCQtZ7cIm8xp5mVtzZbFB1jD5qNsbRnF6E7rP-mBpos1i76fxHHL_lfdQvz5v6GNUTGtxKAfRFeFIzqYLfLukKZyFezXjrtwXEa_ees8YR-wKqVcZXwXtusExx2OnexhuztPrNRrZR6AfhAI_trO1eFq_PHJYe5aaSCHp28XiLkSHYVQQk1eqOJVZJjgeI0Ok3xpDdS3K5ZsVUdoyv_YmgcDiZyf_pqjO4SCwnolX_hB21eEf9bKBHJFBajLpaa4wSC7Udcy8TE5IDIv9L1rgLqa6TTPShmqfseP3EPxN2odVgGcCpmVrXn340D3-x9ZycskWf2NByM\"}"
  },
  {
    "path": "genesis_data/genesis_txs/DTGNdsYZDXoU1nE82yEjG5ZEssxwUmkFTkM3_i6oSx8.json",
    "content": "{\"id\":\"DTGNdsYZDXoU1nE82yEjG5ZEssxwUmkFTkM3_i6oSx8\",\"last_tx\":\"\",\"owner\":\"uinD1QiU4I0-KITCiKjAPm3RSCFEgbdtURIE7m-E15mRAVupVIHYk0JqUshEZPAXchf6vekaQcodXFeG6r1zlxJZT38WNWK8T7Ksj8hQKRXdsBk8essYD9fVBfbUgwlczIkpkon_uBZEDC2TLzy7p7c_Kyi79KEUNJgkFC7n8sV_btOiwewswF0tlR1iHa9Cs50xQnTMx-jVa_LHr4oUwpgMsJlBMU_FKawdBMLEBbyWs8rmzuBB-6gPLqhKCTE2qVWBRXX4_QTsKJnUA56ApTfel8HuyBU4zCvYIae1q0rYagozeFAhudzY-0ge19qU2DoHVVZG9QReUexz46sNcgOwXEZiBQ2pwNF3nqo77ZAZOKKjsilmAA-yuPaz0KEIpSNthMQRWjcGVnkV0kooxwAN3k42CUq_wYKANuh2Jt-c2mWxqz4JWkHABukeFh7I26msIU4HyZ4uyaLWtE3IdP2hRwAvxsdcX60OK2zqukVR3cp6_C8nE08xypNZfbHIBYvp6MEDdC6yJ1_UQEPA_bEvKlH_x7_obYrM5wW3S5a-1UsvmGKTk7Tjpz7MlQDV8YXY2222ePdqk0zl01jpfhGj8_J9cgFkH4Sx8jAl58ZpavN_tINgI1WB6d1kGMzETDEGlLM1iTq6vbtdfLNgXXFCqn0PME8f-729k7e6uRs\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"YW1lbGlhY29ycmVhY2Fubw\",\"reward\":\"0\",\"signature\":\"ee6cfCoU9IzUmX5zuVRSdjNDVae_zzWXG7rP1tfdAOwysp-qi6k8dSomPdAPvk9K91jxfxJArlxfhvbhssJ09vM9BrBB5mUDLaE99Cz1G1pcihh-oEqBFgdLLpIuofVDlcqhgBl_4d3OiSWSH-AbzCRH4b66tuE0OP2Tl5CJprSax8mdErxNGFt7sceAeRvZcOdoCusqxeZ_dT14OEJq2Xf6FWMHDYcROL2Kc6eipe9kw0UcBJrz7UeF9n57GDmrLGu8OUla9bs-w8rdCW1W2oc22_T7EZd0ei2AEvBCkUt2QPTtThqLvgXerXBRcw_jHadhR1AfeTyFI5F-FL9QMMh565kbaB9oXhzkh7dW8toEQA7REBmVOUucB31gZwLA-qWsRyvpdYrpLHF8alvJu8Dwj75s-uniBPLj84Mooa51GCFGPp6RPeqFGyS1IkQzEC2As6KLNa0NTb4KAExC7sEnyDFpUpPArVXUHSlAOrbizKnvwz8oZII2TscAJFEitkv7Uv5FVNXXR2EPsEBqa94sKq9pvcDYJdMLGl_6Sa-dGq39kxa9PVvZz2-HIikU2O3IL9QembNFZsBLYKKfxMlBmSCk1b1EKf8D_UgX77EbQjlpAIIJAg1Qv5Jn13uunq0SYzshhfuH2AIv4LZSBmdGY_Ttwao300Sht5Mu8-Y\"}"
  },
  {
    "path": "genesis_data/genesis_txs/DkBAprUInkCbFa6A_WJJNL1z_PnhEavvyZtF09lmyvw.json",
    "content": "{\"id\":\"DkBAprUInkCbFa6A_WJJNL1z_PnhEavvyZtF09lmyvw\",\"last_tx\":\"\",\"owner\":\"oaNKpspIzu52y0fFWAJGCMQRwiOrV2IGWuN1CR-euNa1wzdya0RaF4WPcDVfY6FQoF0zGllA6p9pPPGozynll05DK-5sPXlom3D830qennb77CIwAWedlpnnXmMvyqDxdUNAK-gjP3Z8tJlqiPScCYdEYThIovCzDudswZsdeGBxX9qHARLM3jhVrgIChdFaF30Sc9b6GjMpKxy_ArUipdT5UtcAv-Upoe88R2F8I0ryXzJmMyLonvDrr8AoA2Hwf3Lsw_hH6MJ9MD0eV7wONP7mgRKFiNK57Jt0PUp1crDavp2Kbe7oA0Uiq-4pYaAqMT_BHJU-q5YGq8oSamaH9OCjzZi6Pxfi8lKA4CSU2PvvtiNBJRMFlGJFqHjeAsjhipXJCLwzYFfZSXWK9uonsoTDOT0vbR2m-yDdxwCsM82B1JxyWbPQLyUeQoLjMkh_l9DLeERj9Cq7PtexAA7ZJNgeBjLKWxjKHZI8AtrBZdTD8Lwi7OAkqMrQlrDyc2gISIOFsyU7hVUdwfLG6H8fQS0ECOoJuVB2xwbX8Myj6xBjUl9uFIeskvX8sabv1LdaadvKrMnTztsc3nPI4nHRXYSyVjDS9oQSytEi8QjZFrc4rQCmn06fIGdw2Wf7ze1_nkAhOq19SxV2P1do8T67QYo8_6D61xxRNrzUOy5j2NM\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"Ng\",\"reward\":\"0\",\"signature\":\"LAmYMF9cWPRjj3u_GWvpaonFal_lpIZ3_5VKJ72Jn6xkxEbBh6SMMXtDVmzMIjhHz4hqYbHUdanvK62ZXbWy0qgsTnZspPpRx7Irx-RjJNorLU2VRBoEXBKG9SviwmPjYpwRKp10RIsLmjFuE_B7PQ6yJ4hibuKOkDA9R6hVQT2OxLCakxKi6EcMTYl6gEkT8lvQpZIo4_jIl9ZIvh98BMAt6AMoLg6bwR-4aPs5H6ZTv5VLbOzeHAoZVHokGc8ZfRDsVw3NQBfz9167aYN5p223X_uePouObEzvDlE_30s8SuzAzPL0y-SaRsVW34FnggKB6zGPh31VJD5z87igNPasUtd6Th7H665Pkb0PRz9TjxHPhQV3A71bQH6GzAzdAPaIuU_8SY910k0s6SY3TTI34A8DcmYV--g_G-RYsa_hdwWzcZ0i8Mqk8bpLRYY7IxoDfil8xkiwWAIAINf7qRs03jdalA2fp8WDcrTt85b5YSBE6RWWviVIw_lNJwXb7S8ry5plXSXAlY5SPGSGgdHavgVbIInvzfndjo3JRiXbVnv1kziSjWoQ1hlOtIjMJM2wNOXYGwANPs_22OOLt6jv-PX2YMFg4k2JsQfopfYOrKjjwHKY-3Nu0CQoYGG_l2x1vc0jng7VnBX5waKP0BIwl7HaTeBwyZvOHmRn4eo\"}"
  },
  {
    "path": "genesis_data/genesis_txs/DpEoi9F4g952ajGuT4g1HWY-xndyE77dn0VfdNXkrC8.json",
    "content": "{\"id\":\"DpEoi9F4g952ajGuT4g1HWY-xndyE77dn0VfdNXkrC8\",\"last_tx\":\"\",\"owner\":\"ynKJ4cFZw3ucAQjEotdU_l4wBcT9fI-eDInayfMWJ85WutgGE_F7YpYtOlJu-UGAVDFgbGHjX4iEDqonr3Y6N5SctRYNWFVJhifj09m4fsp0COkA5H5Es6xLPFpBFkKcgVPNd_3722MnmrEN3mkqC-boRsqFOpUbesIEARZjpEmC6HEcm6oR4FQXTpa1RReSttAcJRYXq7v5nTazXHAfjoMe0qGJ3kRWiI9qat8xIxo8JseEU6AB7EYRxAH73DOtEoeubuTDaLs7BitwQA6pHOyNfAoPeSwOiqqT27t_Gim7OBiV42B9Xx35rgm3jU2rLoyY_VIReI8pUAMs4ETdP3O-nxvOl2HZL5SGEvglsO3XFFMiXobhTLZc-c7WBOkaykO6H9FxEYzQAmqg-bRCmw_zz5ENG4aEunHG_VrLBKILk216LB4ps6ruBTaON6IKqoY1NDLjPUrTT2xYQoUVxdNcdeGtaH1LZjz7HxlDKjlc3sqh-AVX9pj4wKTe8BeULalOu7SdZHEQv6iCQiDHWeAzW75V6FYrFJrjKgXoyilS0zH_uZt9ecgExV7iKNSTlkxzkuTaLf-GCjHgxnpkXnyOggcnwJMcvRiDQhOmQiCIeHS2WQhUCtdm393I-gI_waxmLNAoBVhWHTYeaCsL09NSlp_22jUjDA77AVuaopk\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"RGFtaWFuIExld2lzIOKdpA\",\"reward\":\"0\",\"signature\":\"mfaAPS0TsEGTir3_E8xtIhcOTMgdSNfjZ3Wf9HtD8fjKT5CopDX1rsKjmX3oRU859T6nwi_FCRN51JrPiImPoLlJYg4nnjq3ESsR66dttfXTVfBJIX5kKOnmVnK7wQRa1uejQ42O7_gc93b-QVKRPx2sLQX0TBZ_kHhRHsMhfupnGXSgQ4B1allqL7Fmdf27CaM_KsEFRe0NBEcubWwCqOfZ6G5PDaeUsPyhoaUEnJgGPu7Pnm_vR-Ugs3QHS3dFJ_YZzO_ft37U6OhfACOMygLl9s56na0J4EmjjoQzIYNCazGxieKXTKK9XMMgHwJuInFS1Ck1gAJd4BP68WFLV-mvYEzQSvR5kfqFPbkVIwVX7DZMHXAlwSgryoQfv9D2sFaQQqFbTUtiyDR9eUUUGW73DQK59_pJRJWG72uX4l88elkXjoMmAVoUTung0ENki_pEkQr7xLM9wbV2rvkijuczs2iuJXx2K20iIk4rx-6yQu6PrTlxsCkgC2pCTIA0ggjP24QxWWsenqgvopfpGqfdLDiFQaRiqBC4FcCkz2jSFaKlsDokzOjjpBa69ryHv2KUtXppuWC0flh3r6KEw2Yxsf86uWSBlh2nr5p-FEAHx2bOLNoYdlb4jOzK6LY4BWAZKg2HVZpyQ9FvyNiWOUugwSM-YCyb1cvnvGGQlJc\"}"
  },
  {
    "path": "genesis_data/genesis_txs/Dxrsx0xuPVY7oz9yHbL6wOFxo6ws7ycVe778C2bc9J8.json",
    "content": "{\"id\":\"Dxrsx0xuPVY7oz9yHbL6wOFxo6ws7ycVe778C2bc9J8\",\"last_tx\":\"\",\"owner\":\"ub7SsqUzHX3RbdxLJS93dsGqiSe3ZrKMgxK4XhFT0wqvcj4C_0lJzi6L0EMIKeYuFPs-LYv3qKJBi95pvkN7EKQxqaP7JguDia-LD6CbqvokOsD08wpXsCUncZy8P28KFrcWmBmDiLInL7l05LW6iyx9a5g7va3hGVAb603UQ8ISDs8ohrXw6XiPk9VLsZBtD6fxGmWd4iidZ-TG_kp8ZZCrFQ5y0aV4FQdkaHcX5auqTdfaz1I4rr6Taq9y7ajMX97YaUiISoSHdsLIxtfkGNoZ6kv3Bgdwr1WIQxQuHpLYikfAZi9hZye_eFcUTuktmHp2FCX6ITRfRT2WSN5akBlRz1RWumTj5aD6pNmUdhtcVk0BJisdVnywH4dF-nC23xyfQL_cpmFaqoY3GNL1lNM9SZRAPeaQI9baMYyTBwkNsoFxsB5-XbqIPqtCzU1-cIroheHeJq9RitdMrw9Lpm3ph5yOFHcsI4LXetTwqmfiv88XlZhI5cmT5dwJ76l8irTuCch5ovSi83ePaig2Jn2YAHUlRXrsa8manntL2Nihp0fsx7_DMFRnxR-dWdjG__xg3Xfoy1Sipm1LTT-vKymshy90RigKO4L_u32SWF80qNWHSKKXUAU9TOBO_yI7KMupxoCfyZwgybVyDWFiGQ5VFsrlRBaXHxTSyBOmSwU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"VHVsYXNpS3Jpc2huYSBJ\",\"reward\":\"0\",\"signature\":\"AM2hmlPaVJheE3vLoEGYwiDFtL8tdYSP9589G5CifRdiMD4kH9evU1KFHvP3Xj2LCuLiHBgWVarUtP9U5YrbLv1MIctp-T8P95pm5u2TAx53nuSbocvSnlEvMFKXr6jh2_8cnQV-ITH40qAVz_sZR1yzObRT5Goph-PSM4WX0ZGeDTjcW9FWFkqPuLrD60SWee4fw4fQCBdxsrH9PS-QO4Womqx1pL7SXz4Ribfpm5B97L8fLmUtKrrCQClvtzJlPkqbTDl39NzTbvVbc4Lid9npWxsJnVHwURwJzdrhuIKBxPJafuO5d3DfBWBtP5gW3zErywk_Wb3O0Vs1QFsqG1TivtcjdxWFfd9y9tPUW_9wJojuheEMb4kqMu8LiKzD0uGfzFiNuCDKDdwr8-HCq2gGbBkpOHYsjDIl-7kaIsaCE50qalxP4b0zymmvPRouxR4MD3OtFNs4Wsu-Y0UjxJNuNIdh4dmSYBZCZDdhOLR1j9TMcoiHA1eF459zFu9L0wBRelkGkYXevZCTe66Bcp-p_PKSLIXeVVOFgZF-j8qPNq-8rlkw7cAW--84_2hmNDzv8olVoLCN6S2WHlRpMmqmCWNLHDWCFLkfcrH4RBH0R6GMQxVgBoyUms_xHvoZz_xmxxmzmWd73-zESoZQG81dIB3E8hA4bKClv68UKa0\"}"
  },
  {
    "path": "genesis_data/genesis_txs/EPZ0hBh1wp-7T4JED4v6DOItd-9MNWkRfbLyizDLBsE.json",
    "content": "{\"id\":\"EPZ0hBh1wp-7T4JED4v6DOItd-9MNWkRfbLyizDLBsE\",\"last_tx\":\"\",\"owner\":\"sQ2WmD_MpIXraVDqnEAtYkr4_8D2mR_UKTvhVAJH_zCnPwRW5g3xibFWP4A77RPPuByIm7g0jo2O0GY3GXXMiDHTL8EMgDa3Q4Scz5FX1EsSWCNsgrx0quCaXaRq_KJiUQAaA1DjySb0VkkJ9VKwSPQqkFfNKNqk_Gfi_QBIbDu90UvGYtYGVTwUEL3j-XTyzqv5vyVtyJuscb6XPDbNLRCj7jeFTI9hrEiTtoJsJ80l3Q9SwufuyJMlH0zwFN0zocNEa1Q8BXzhGK4hqw9FllSyQzsBA_qUS5LNBiqZdm0G7Rn7--rW9VFxrWMaGUeVcUBcJys70M1wA7nOdu5B3rLIg8lAY4zhqodFtRq4XYFaE8im4RPNbjvlmPYjBRWCgGQXD68AP_FzJBG8gx9UpdkJg9w5BV-6c12xqM9i4eD8H-HzUfHPZ-pY4B-cNRHzKzGwiOzdglXM6iljLfxIcgPkjztMoXDyEFzzuzyjdzkCtkoYCTbZdTuu13G3OKktKECFv7Jt84w1c-tZSyeEEU_4fBs8BD9LLt0DRanSLSblnze3pKdkSU9e5mPYEui3nGhtn-aJL3uvMh8CGCsNWFlXy5CRayQxKmR3RxzqTrFXQU2_aTU6lQFFfBXe3ll81d8E9MWRNZLSYHHvpJmxbRJghIR5yFzq-5_szoaMBJk\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"8J-ZiA\",\"reward\":\"0\",\"signature\":\"ZTKaByyKdlra20h8TEwDhad_d1NqQTz5Eli0pN5Wery1u5iu3LIb4UgJ03-BZWw8qNW9cLIJNjmK2piEKNV482p88nKEn8PxXnL7ZW01-cJrr-jddHYghs9amqZhGMc-8hxwxXKtfja2lHSk6NDNlJ3z_0rxgt-ePHzgbnoYhqrbRQ3s4dMCfKFP7con2-ECPeRLif7yWAQQhx0gM9_-I_qzmY4Au9nOyEcfvDwXisDsM81moiUm85FdhFulXwHCNEf9lRCt9hvzy3-e9RXAnk0XDro3uvqIKab8L038svpSVJFbyT5Zc9Ba80nGBSFCEiQNzlxxDnEz1iX0B3SMySPoIcTm26_Qb8voRoBU1T-bOUiVg7UiRQXkM3wvbvBNztRUSNgaNFzROrVDExQaXs_wJ9oiQ91UVq_na3Ja5zfGZPqYn6t1MiKiUd3K8D9zGinN8vpCcPtl-bI4ZRzV7mM2vfREIrnOZllD_zSHZSUaaeLj2Sa8xDweloaQRA20V5PvVuMefLkmdgQL5p3MPye1SeE7_JV2TFjIJX3U_tnILyDz0SMqqsAt3cgrwgVdUd3paENb4F82haTncjK_lk8WDwo-4-QtdTxIdGnCBo5e6zx0Wa7NyLg40B-UOdYfLNjXjXk1w3_-yGn7fLDy6FIJ6HH-qipNuOLWQMWR3U8\"}"
  },
  {
    "path": "genesis_data/genesis_txs/EQh5rYFJ5Z5yESi4DIuvl2n6iVZS899tA6V6rf2Xwhk.json",
    "content": "{\"id\":\"EQh5rYFJ5Z5yESi4DIuvl2n6iVZS899tA6V6rf2Xwhk\",\"last_tx\":\"\",\"owner\":\"07mPgdLkcj8M1iv-I7DiZTZNCjrZUUoFTIUtuDCgodvUGlL5TLMdPH6eMwRWIGMMVGpkW3m3AiDL7GjC0loBvPWn_onGK7C9mm9SWWaKr5gB5SDtghlOgCvMaodeJ47tJFQ3KdkiKO8Gy5cqB4bpEltAIYZYJv-OOkaVW1zYhnSSA21DaEGleHHEwY15KRMn9uH3YWUmBwV4As-JfJnymTuqhF-ySH8Wk7GTq3Lm8I3uhm-npkSSwqI2SayDTj1XXs3oF2Tue5IfMJ4LcgzfVRLA-pxapqqKP3lWr_bl9uF5V_3wU1NG2F5zGk2CFFbgADOZ7Giw9s2JcPynQ2o9tvhhN4t1wRD2sc3Z4qhLuraq63--suX77YdGaYDlV8n53FXrPo480FEsvX7GcYcM-OBVe18IWP9REErZ2OM3jS2Bg4F1lXy6IgYhnwFruZwpdVxUCubfW8DYvcc5ArWwZTSqfjts_FoHJ7qEPKg2rmAy0tZtZ8FC8M5EyOsf0l42szaaWh3L2bIvNIoci04YqR1MuWfaQmd0bukdFouTbTOPhNme8mXPQB7aiJcad62u0nosgMTrkOYoKygWBxKXsxIHW1Rvvphr-a3yFbwAcXAwFSCd3aS4yRH6YEjK6_Op8qr5qfCwb0TMkyGDBUvfTelFyQybjjSzdR44T2Hl2Ik\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"QkFSVCAmIElWQUkgMjAxNw\",\"reward\":\"0\",\"signature\":\"UM0v8gBlYxpG-4n8uiqGnCCpWco6tC679GomDhMe58mf1422ZpfrwAObh5Rbtz0pTU_XXQ7sxfeut1HfVs01pHwVFSChL_ufsh2G3z83czZz5E9izlE9R65AGIGdvhNx9_qYTB59ay-unao2zUU0jICTfWNWsf292c-DC6_nU0bLCg_4Ca_6Yqx9v5HRs82hI8-Ynn_a4hFABaX872r1rVhKvxo9W2tLsjoZCEXxSf6Lm1_Quyfc0N_89iXhI4YsFbjwxJfczj66tJEc6fZSDvQrW4nvjbmmXwXRLT8g3TQ6XjxbAotoRDhIgCZ06lFnzwTQoVxwqhmGXxmZO5mJIIuNjEpsnvthYpZspN86uDS3uU1fCPVNhvBfi8V3eA9lK7o0pc6Qc_YKpvR-mWvHaURFbzT3qWlBLzU7pPEwTd_d7R8QN-tqxffqn6qz88rxca7k2Grcc8PUFd10URCVDe63IzgPhFy8DVgdXT9FG1aTX08QeWr308eOV7FY9bgr4-6AJ0Vc48OBEhAY3b8B9mB5BvwDXLR7xDsEgu-US_XhGJwinTqmOtum3HtAgnRjenxZqjQ0_PwvSTO5pg-BJ7jovq9BEUQ6gRyl9IWCVS2QExapW0EQ5qgtO3mqV9Zyvujvhpw3UPHMlnlSOREpJiv-1EavI70H_v61X1uVRXw\"}"
  },
  {
    "path": "genesis_data/genesis_txs/EUMtkWCJU0L23RnhXKfQ1wtD3Jh2O-vpFnLcQXynoAQ.json",
    "content": "{\"id\":\"EUMtkWCJU0L23RnhXKfQ1wtD3Jh2O-vpFnLcQXynoAQ\",\"last_tx\":\"\",\"owner\":\"twuBoKcwwuoVaLg06T80bFIYPzhtufP8qZTa-QjzYBcwXuldiF-OnXJD8QILtnGS316kMgYBmDi5801Zqiszff5V4wL1rEdCLdyzCthXKY7_DTIhWNYktUTyKx1m5Xcp_RcOpAOubRewkXIB64VeO5hARUO5PCpzsKoody4CWTH8HMNccVfcsGNxQ865UxJrE4k6on2qclvBKCPaA3z2FrunFS_qPpLTIO5SP_Ve9VdIrz1P1mKwRRHcO6OoApjug9NJzP1p-2tgBS7LNNSmqQINc7S4LIIcqct_herAF4m4y2vNO89MLflDLCDl9G6-X5y5We9WMYo3HoycZoiTMAdnjYxpN56jNC0trCAPvISUyYjyk4yY9Mw808mcNoZjsPUb15mZ8Uygji9rQC8Y7ChNkM3rJVucLDOtom8LASy2mck0HkN7TCzfhJhkIQuS9hIrIdV1Z7gWBNG3oYtu9op_kRRD-1yVFsGiOjUNq4hy60opZFQJFr5ntgmm6Sogl5ee9l7Ug3XKux0Dz5I70NK_3n0bst0w4oQeWcMOMwGd0UI0LYf5k4K0TYYRDBMyIcLmVOjtU4KtfYFaAQZRqE1VVG6rq2A7eOTCC-IqQMy_pr-ZCnak9usDWTTEbtMpY7PqBpnt9q6nq4MzGc9yUEuSP3Er9Ek9SUeM03aYOrk\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"dGhpcyByZWFsbHkgaGFzIHNvbWUgZ3JlYXQgcG90ZW50aWFsLg\",\"reward\":\"0\",\"signature\":\"Qo3RtExQ_4Z75G3CVTbqOE7webO2lTT80RqL-vP6SWIlA7btzy9WNqSKjkUIh53IY3grboPCMT7Y6QhRt85LU_MqvxVrWTu_EGx5hpczA5ECRSFadfJyyRoztbItkmCbeKw4aVWmXkStuCvvDTdBr12Mx0-tlqvQ8gCceYWOompMsQufUv3bxT1ELM3Igx9dJ-vcpLgAyEWVvOXXjR994u8Tc9x-CghLEgVVX6vNc2Aa3spWeRqpBXNM8KuFrAFmQqZsj4jT5Epg99DChKoZHSCd5JQatqGrurecgjI8QhKJkL6aEF4sSKbxEDB1znmEqWIwvIE02OZuQaA5HSwEzh4MJt16uuoU3DWDhjF2weWlt_110P7SXBtrHrRu3fr3N148QsPgNplZV8kHeYXXzdEo22iz2h5vtRlDvCReqEXoPtGA7MESTGN2BTc7zpZWr8_Gmoe-Uyg7e7qd_FfVX_eDtA4Cd3yix36uAXOZr5gmDaKuImsDsZyPaOMCjIwR5uh2petXmKfl7x-7wJpml4mevVdotmiQk919qw4GRtxmEAWwfpvsysvPbypho8u79maBKpG75eCe79s0g2oN7B6C4F-mzU8P4MSKg3LKn-Igo4Kp2lqeuDrwt_JkaLF3_OHKMB65dB-CzCDkd8f3nMNzBCAt9J5zjnYyzlBGmH4\"}"
  },
  {
    "path": "genesis_data/genesis_txs/Eeo6rANLMAXonDFLDG2nu7n99O3Ymfk01wYXJBbEixY.json",
    "content": "{\"id\":\"Eeo6rANLMAXonDFLDG2nu7n99O3Ymfk01wYXJBbEixY\",\"last_tx\":\"\",\"owner\":\"0L2MclLMhEAcWd0wdi5L3FihbjsDSIOCVS2WSCga-uxAbWDPDU1ERWl_Bqymr_ClMbpwCpZnR-UNXAQvrecP1LQlOxPF2uxPyWYw3RB1K9p5XA2RPBXvE8uUdqoUWq2nxf-h8RD4orXjyt6kaa5Q5eM7prKVzRVyot4IRtfJgR_lNW-xtw_lrQJEwBjDKvYHHfRe-5UEGUzg60BxQAQyZkZdJq_O3_5LnafGdvnRlYDvlxUnA60HT61R3fLUd4weyIou7PBdXSePoi3ZruKlQSLOogZ5N-EcSQc6cHOQcVX1b9Cu3-p_r00zayyDuQ-VsniGWoFIkOj0tdHWOk0YXuf1zy_dmUvhCK8mbQIsghmr-aBTLSLyXp7fEAExhnMqMBGrkYzYhDeWMmc9-oVw-9_m9leCdgO8F2F6k2Zy7ZLVz93rC8QXC8JZlcb_cd1rfBbxA9lgBtXSpmgm8_rwCJg57TlbKAEItQ80NpzWUDf87yYQQF2BAAy477rvG7eyyP85YdJzOZa-JTgachNXcGC-hGh8qfgREoI9ZvNl2Sc53S9V-y-YBomw_nrd8FdroX-sn6KByoufGbGZnOIbFjoKsxL4ewCBgOBJjmcZFl1HU_YK7BxpYcIEghvSxZuX0Y3gMLBn6Oxl-OAEHCHHI0J7gIC2tGl8vRmtVH04Jtc\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"VG9nZXRoZXIgd2Ugc2hhbGwgd2l0bmVzcyBhbmQgZ3VhcmQgdGhlIGhpc3Rvcnkgb2YgbWFua2luZC4gQUcu\",\"reward\":\"0\",\"signature\":\"PVrNUWn-2S_ryxqppXhUaoImDWrh8AlHvsYjHvCcz1ZLhcse2h0ZBjti_34C8KeWQgGZQUoyID7HS8sbrft0UgzheocIDZZbXdkSHBXT6TiQBttLhBQRFtLYBMxzjHpTU-jCbdRzrG_lXR-oXsW3A8LKo4hnnrH1M1AWkiQdQO3e8A56G-HqixijyY2if3oVb2d0AWDswQZM1oTlBAKAPB7WAkQaZhK1Mzn3s8RIFzdK4xkJ8ziNPZUFLCVxeWeE0k0beCtpY9geFVZUlLHT7NuMxgT_-rT5ehSlJHOLnazXfsKKVYBfgPy6h3cscss20w302vMlJ25d53yi-qB5T4sdanb4pSI8FUv8pVyBYNbibGpXs5WNPHRilPiO6O0EFsPBWs8qnlUByWF00KjLC7vFa8NP_SI23giXL_zi8VC5vQRCubFjF3kTrjBTDFEraR7d6gqxtRU1HykcpJG6HIYwoc5z2Lb4in0zdonoYWZkCyPkaXeWSJ-tfYL_AH-SiPFLeXFGp5tpfaZbxL_G5r8sFTjYgdK6adj4cOupDtBNOBHN0_rwgZIpyyoBEYbTZQ22UN-jfWuDiIbKDiKj-Lp99Jh3WVfl9RyafYXL6euEtBW-hUaa1p0NWmkNcNz39Ub5G6elAlC3bVhtmrZMefCKTdVYlQ3cLAcYFKH1gqA\"}"
  },
  {
    "path": "genesis_data/genesis_txs/EnPMt9yzTsxLPR5mD9zUvndxicdYBUNzOlcCPvQlOK8.json",
    "content": "{\"id\":\"EnPMt9yzTsxLPR5mD9zUvndxicdYBUNzOlcCPvQlOK8\",\"last_tx\":\"\",\"owner\":\"ox0n5YPKBnv8bdv0beSKdiZovUAGS90YEZivwZqNll_yH7_GSalTOWzBO-ayrXr6Fy32Zriij-M5qqN5YiF24xnejzCQDZCiyJFkuwOeDSkHVnyY7mmGeKJE1kUsYcwTIWp1W3sgUVe4fWzu_KsZYtAt478KQ97lxIg6hIVYunsLg_0RxUTa31ZVti8JbL2KAGVQwsx6pJGQEoZVSW6ol-jn0cFKsVXmBIUyNZM1SdxtWZbMl6TRnkhE2O6dW4rbimdxIokb5ODxz2N0RNbku9tYmFLab9MSHXSShMMAtBAZINypyIhovf_aMOeYeDIQlBanXOikFrfh2TbiOWQv2aBSEryCRwKAUaM_JjHcTrICNh4PXzv32XQRhEGt96tCSEnXNbrrIS6TqP6_zqrKR4vBTp_VvLBN9PB3FhFOVdSBYkULBEsAup98V81QbKylcPy31fd11pnaXzg6qwELQANLKci8dHTUKeUdoQedK4z79F15LKN4MQ1FmymIaoIWdGIcVBMZjXY8tucZR4VDbvxH2hM6Aoku6UiznOQVlCQ79E7M5yOFJcKbK0hW8SXsU-rh03HW4BRatz2uaAEy_cckLQJBXVcrkZK9ga2dgJ_8NV3GV4-knTETIGszeMpnbSALwhRE1iL4ILWElOrHN3bI8GmTg8bBn35yJCNyxzc\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"Q29uZ3JhdHMgb24gdXIgbGF1bmNo\",\"reward\":\"0\",\"signature\":\"Vs0a3cEgzwhGjOBw8K9iemutJ9hMsEe02NZu2TILoZik3Vtx62uJrDY_kd_xouEY_s6sTLLLVM2cTPx8wVhah4K5X72GLp1cAB5HaNRDzE_6QFIwlCu7BzKBlu4zkOjEn589jEwmxa7-JY22R1_DBKb1jeNFqGYd9MO7R9-VPNAq_PkwsK1vm2ReeDFxoBMJzqlTUvI_hwbNB5SPtzbUQCFz3MvvwEGsMDRikkEAs12se30bp6UJ1Bs6peQK19BzKF8ptZ6-lqUDgheYnwfEsCqlSyb_iL5mRu8TMUuPRcYnx2iG9WWoFh86-CD0DnInyJDkjSBYwifsfRbYCnzd_eXMJpgiDduq07BYGvRNeylVHqvsGc5-jt1nhUJmrMDCRpKQgxOtX0NH6d5-ErAsZ2ydhI9pRBFwt-7ZYkRslNsLnK7pf_feFeukXHvhRGToMyUtDjBMoHzcZLWxWDz0eWfuA3gAQIYcje0dTnuhBlbjQ5sioeH0xJ11_Xxl_XPQvca6W_YaqvGfuuByOuPkOfCl28VVxulwahCBkGoQPRfBE1MrMWZHty7WWRq9ZTNAtZ3_AO1i5xh8wMb4vkEkgMrcrgI4Emmm8dJP6J4tPxOT70IDRGPbdmNdGeW29AKsXtkruqg1kapNtgZfaBFnJklyLFgTpSzwOwighcySGvY\"}"
  },
  {
    "path": "genesis_data/genesis_txs/EvKHSfokNyuiTarFKOuQ_-SaBwtllGpQGc7IFkRfBfc.json",
    "content": "{\"id\":\"EvKHSfokNyuiTarFKOuQ_-SaBwtllGpQGc7IFkRfBfc\",\"last_tx\":\"\",\"owner\":\"x3LXJAhrvSwBEvvWxaUVJjh-iY8k1khS3-1vvqW9JK3lyy_eNZAlcxAXy2HOq4KrrNBlNPhLDwM7BS-TzBFg0SpKRRSlFKBN2ioCyyoHW0G851b9dOdZc7vLZmLQsj33hkia_QgkK3r68LM15cUpN1HSNcN_yMT1ZGmmDJy19AbbgPijRSnPMBWYWbFnUTFZXRxO7CUmnsKvvUbEyADLjGAxuKE_nKQVe93aiCk-Q8lFj1gtM18MAro1DmGzYEwMSvZwAqL2rkKNr7N6xfCM_7ZS0WqvV7EgKexJyKVObaYj6DQG4siOY8LJd7du70GYaDQMBeZOccqkCQj5KUgW-vya5dRFrmxf2NDkmvPd1aS4j712zqT88L-cSa9iMMaoTJ7kK62KaxBMsK2JB9b1nlfZKQSmWx4iYIAm5h5nRUi6NGk1MikFWXFEFrbtXxjTZRwWRO1E0GOMnPjgbqjNFk0z0X6ZlFGdEPj50Y-zwrYal4cPDcaKJGV4A5oftUqMf1z07UepLSGzGtlQ5Lt2f0lvNa5yFmr57zo50LK1oZU7TZl7cIR7GZ226SbM9riNAqQGt9rjAJKe8ADmZnMs3SGVI4WITYT607ddOfisgisK4mwHwjJPafxIsyirfOT5KX7vHUpwz0_KvxNskN0ITH-51l5FHsl5PZSqNqyWQ0s\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"TmljZSB0byBtZWV0IHUgaGVyZQ\",\"reward\":\"0\",\"signature\":\"kYWrCi9E2txq5jQ6n3knMD-uOiFcJsImPCYLozXfMJuDqsNNLgRcdWfIyM2Gbq2OGnarLOmylume7EB8We_saZ8UybgONlka-KM_9k7NUbQSBx1vikNsC19DaGiA6wNfYkXoFXLVgfhc5FwHqh3io6zZr3yVa_AMHj4P70tx-oX97VQUo410YqwyfBle4_7g3KfS_aP-YpzIBVHRGnCzwuq7ye7yW4ETA36gqYAT3qa0AymtMBl-KcWuKWgdKd_BXAEhYjqm3kOcA9X8r2GEGxvjpF6RANS8X0dmysVZFoO7Z5q5tb4sDHVpSbSqG8BCOtfT7VhQG_qM_W6NlkZvP4jWzX7xCovwEuN0aKsBwO-jMCt4xEgHJ5xFOLKt0RENRIJdw1Y-LPrTvb0y1I3xzxzPlxB-bZCsZijIeWj6d-riJzcACArZvT8EahVQTQiw9cnJvMs9_njqjOSRXCg61hAmnDJg0g4qSzLeY_6dkKvu22NhEOBjDfoNjW48jvCIFT3YojNoE7T0C2MOXXOnqE1-kIiZBxSj1dYXiCyYvlfWQjyZThj6gbduPW_w7i5k6DawosXtFHhAdRk1LHNJDLVjWQ2h0BnkEP8gseVWsqtynV-DqDelIoAzx4hNv0dB94nPXfMuT4AXQ50SJ1aKuWVFhUktF9SeL3sr2kwqwWE\"}"
  },
  {
    "path": "genesis_data/genesis_txs/F5R2EA-gM8AtQ9_NymKwtr_Im3_ljMR38ndzCs5c77Y.json",
    "content": "{\"id\":\"F5R2EA-gM8AtQ9_NymKwtr_Im3_ljMR38ndzCs5c77Y\",\"last_tx\":\"\",\"owner\":\"wdEn35bobyVCkQVPXXhsAAC-v2zsqaBDKiJ_VFzPwhDCuNYFAKqjPe80F-ersyRvuQoPYIC4SJ4_jOxzd5mqp8X4qnLYUDsk8MmCeW8N_NvLyU3Pr_V134urArTLqzWAOrLIz5o9XFtn84wS2bQ-zUvS4TVgh_VQUNJPRe4k1bYg-iVb4hJKUYpEu1eRh7IEtGDmW0OVT-XK6XJcmWv7Y1s3S616P7AyziM2Hj-mIu9FTzxfxGmYltiEWCKeL9XNbm5fZ_H__rkQ2ljDzxWYYrYBL4teHq5ce6CZxisaShgn2-QUUHQ5fumC-VXPBB3LS_-sQVmmiCXKi7J6MBW_Y8cU5x0OuzYkMGtkUcI2fpDif6dWTGgfctSIsIm_AN8Baxy75MvazALwur1n_bF4nFnVo46WH3birXKjx_pFiEgPj9OTWx8oXdG4I0NHxPle-PrK61HhgG5jTFfx0v3tDl7LLKsn5c6PevnSGxkKlBZSvMonNnumMT9zc__4jYtc3Pi91JvLX7AREx15ZSgfz9OCnMM0hTjsUO7_Kgd4Qi7DmRkkQ-2K2uB7ZxRLwcsk3zPsCcOIozkPVQyX6rjnXeE92Zt6Mr-FbTzoWZADvjdx8rAgrQCDI49KdWxIVd6UyklxkoN_Dl8iZfTEKm2Ibg7Vwy0bm4KfaNV0EExbpM8\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"dGVzdA\",\"reward\":\"0\",\"signature\":\"qyw4d86pN_5Qh3Jt_K35OEzHDKDqRsRlRg7JBdD9m2d7eZYzLnYCpN1MVFin3gSyTo3H1OYokyAogGXsh1kDYZPcpSqiB-N7gfWGjdOHuUewQ2VpNHuR9endurfThmVvrIjmFH-w6glHD6O5JxsvrLnAWpcauVumH5r4BDZNcvwb5Ytu151wGtCYB5DbAcJyWXqluSkAdgKkeXw8e9lpy-D1BtbsfLnYJODLoLQQVk3RBvl1M1q6KaOWu50tHf85s7bGj3s0p5G7MQiZJN1E8zBGVWSJn-t5dnIKgwGiI5mdGc7l3cv0a5_DnVzPNuAogogklUiMxDEQxRqI61xP578dac6fRENvQnb4_ZHYpLh7Z9tZfieiETY0NSBjGQQek-ILmpTZKSEE7N0UwjommBbKlUnmQKQZEKSWWIm2-s9xAOU-muIaWf9tClkuOemWY0ae_iuDHkCRO_c5Hdauxh8UzKKVQdgEn3fV2cW8Qu3G_-1Q-MkOrJrBxvNXRKuma7UxyJOpoBDHtwYX1NKslHM_oUv6fRzWi9GdFLB_pPVEmsUvWCSfK939NWDrYqFwPL-6EHe-Sy7xMCNgrf4GyFDp1pxGD1k8aRxc4gN_5sPNqLZf80qGQsouaTb9GofLKMb2KsTeavE0Q6uc4Vq6W3K-gv_YzYeI7AhxLpm_YcQ\"}"
  },
  {
    "path": "genesis_data/genesis_txs/FTYnf3Z3QqEpNzTigfAlGTkgpgCWtFA7R8i-I1ik_Vo.json",
    "content": "{\"id\":\"FTYnf3Z3QqEpNzTigfAlGTkgpgCWtFA7R8i-I1ik_Vo\",\"last_tx\":\"\",\"owner\":\"19e6ae8WGh-bYt-FOlmBKTDU9vWYoXQntCt7FqXyPSdrTBUvhqQ2pkSXChUg65rTB3jdvvbpdP1ecQRNgA7eEVo4O-ULKLoDV8XgeC_ybp9JKta1rqVm2d9AlApsiqjd8EXs_1wQAoRq7kLhm3Re6PcMGoeT8PEFGYNj9B0hCYroo_UL7TZeTYwpNseF4IzaARJ3WquN8C_04e_rjrthopXM5iqbkFAjE8kt9acYBqr0raYnteSsSKDMmtAkLHutQvDqAWk2MjyGRlZO4RYx9S1DA2sIq-PPeo8MVRjBS0d3fjOiMlCfDHbK21gXfO8Nyy8JYdLXwjtTxyobq5oW9LdWky2Cf0gNies-kUf3JjqfvFvqAUzZ5DOn2KWTTlD_YMvBZID6cyZib2bmGCA2s7X2ySpyhIRy5q5-5LKDoWy7oheg0y87fsLnfpzgi9o2bgYav189kL-BZxGYJP9pexaf9v28EjhFCO-3Zptvl3D-XQbd2SfpeR-RECSQzkj2exf5KO8JfSSgrn20KICuU7C864D0meogajfqnGzbsUFJijFOO5-T17nRVSWlcS2V41REy50ux-bkFGlBYEbUC8O8Obbl3uSEut_aMrEAkm-H5tPkLycktTJhg0iANMDGbjqmV_nHXBgJhqG_lU4AAl8YxoRZC1qchj37Jgp3rHU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"TW9yZSBMaWZl\",\"reward\":\"0\",\"signature\":\"bFx8SsfOGU2U_hCB3FUekZtsUUpqo6pdru-0mG3Vx0nFiw8-B0hmXcag5MxjBZwkJNWg0G0Vk_Ty3BAOLf0gSuEPl1gXj0oTxF7qW41FEUZUiCxOGwveK9XlkCoVT2BPEfs7Y6vO-HVETDAaXp2aPvJzzRLMnUEC98g5F0XE2MJMzVHmMNcd0M_WCokht5Sl8GE-RSeDJbDJ1_u5ajHti_WeQ4q4ZqM1pHkppuLRelUP0F1vDiO5P32SKDPJoP-vCTSseN9b9ZvbWbY3taWmIjM3bdsH8HBbD3oLWDomJ-LnThrj5YiwhPzXnuwNDK6dsBoGcFcvz7G_X-QZwXgLIiNW-7SXzPAvix0tGV9e7-GhPaUuX-Q6EWDshB-5zdk2-21OpymCfsX2ACL3_F7kcI6d7JowuFmYXxl9PbJU9T5VbGcp09t2rxa3KPImqSuXP-JWfFIh1iOQNoH11XDrPJr6dCsmQ944XT8J8fItsV2R91uqIIeZhRTpL3apfYND3vbYv35kuZyXWB1_Sj0FPQAmxE3hD_Va-MzAnAbLHLmxpEu7-e0g9i9vS2xDK1NknCaR0F2DXs3UTJHy3dhacYPh8r0pzVn6RfCXn9WNMb-iLqdNUfmzVZD6Ibj-1QDq8ghoIrRIYBxlj8N9PIsw7pnwkYODfb8TOl_p_Yzd9-E\"}"
  },
  {
    "path": "genesis_data/genesis_txs/FkZzg_-5eSdFlbq9XnHe3wRhYidHJPXwUQ6YLuJijS0.json",
    "content": "{\"id\":\"FkZzg_-5eSdFlbq9XnHe3wRhYidHJPXwUQ6YLuJijS0\",\"last_tx\":\"\",\"owner\":\"ypJfRMRl4tEvRs5vhRu-cMXZ-lsXDjZqh5DL5-2WJrVhSlTiAgNXB53dooaAlRKMI6rqnmut9uPRIvBpmYTNEc77ZEwPSKXrHFfwGBWzoqaz1cHan4p0jVOmi3MJDQeCqQgGvNT6K6U7-BOVCI3aHawEH8RMXTC6iWRTo0jG9YrQbRFcL9QMHZxtbA-Fb2J2ydkvVmxHPSULliSSQGrU6uDDYHFaf5p6UvzGILVibl-62XJSJcsnVn2aQ2zyov3j-WkqXK_NsFgquzF7NLU__51V5w7Q5pnDf4JZfCnckOGCjII1TCOxNNZPn8LEhYoBTyuWqX4DhHsr9gQRUbPJPh7kndLUASt_PpewrmmuU-55FBTX1YU7EGLoF7z0BKZUMnKKEICIs5FlQn9_LKRTWJal1e_AUwPYWH1DTpeNOqiiaLaZQyF0gh3qr_ZFFZ6gqIwUwHsrWuWcx3llrKkz3MT0N2-2GZXaBIS2rVruEPcNNQ8UqfE52fDcwUVXbxzGQyQeYf6ubCYzgx8Wx7OqBGyuk-EbD9zA3YJW5xpQR9I-hBEQGRc5YwhdlRSqJxy9aEzawSv6yzt9ParZyW_JLg7hdngsMgLHdaXbpMABUuydmwA--Kdlzszz3LhAnFGIB6ooiHnVWcJG_bLE9GafbxVRdpIVtHIkZfwgQ80kp0U\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SW52ZXN0aW5nIGludG8gdGhvc2UgSSBiZWxpZXZlIGFuZCB0cnVzdCBpbi4uLi4uLi5TdGVwIHg\",\"reward\":\"0\",\"signature\":\"EjMGkubnRV3uO8Tf7bTZObbymNJQ6RPchXbbNmmvSjjfB2I97ERv9phOP_l9ITxHNPkSTT_W_0sBhq4QzhctT9AH9ZPHp50rWqv2NB9ori22ZLi-gRWM0aFZS-siwks9_yd1Rs31Bt4JWjCGABYL1HQ7_4rnJl86zihdeFNKGczE_vBnCjQScChKnFzQLhNVMseSHB0AnnR0kmZlJt1iFYMjvfZvo2GwymTgXRvMTNEq-pz9aNlwjalueogE8Fr8SJETLooBDuxueRTye-IVkGgqmkdfXplsU0xgMzmfzEmMu5QdYGJ1NC1lI83miW0aMjhhRj6jHsqBpoNWhYqC4ukMzNUJdqPH5aMJm68FWDs79s6TbXmqUDFOVuQIpqJSUyDQrJEckDpQrkzQFk0gExvp544AvyABUPn4ROsPT5p9G8srErTuU1lFp_i78FEy_5YPMgHJZ8Drb3GqL8Jk2m3aJrm1thBnyn6AkNBwyk9sd3kXTefSnm77qizw59U2vNVLp4XctEuqB9tjP4rbob4deWiuNoreSxhlsVrQHwXrXHVMW7zGBmfg8k-Fam7-TMhptPtrJyP-GXOfeH3_vvK7Hs5bpI-Dp8PSz01dvkqsa3NOkjCzWazMDvBNz1hRjLN4hRCg9BL-sWc-cmdvHJbsGy72kmedYQTARLgZ4_4\"}"
  },
  {
    "path": "genesis_data/genesis_txs/FmfkuPmh0vkdv_qbjXBUX1sQ-DmwBFbjuC4punobGy0.json",
    "content": "{\"id\":\"FmfkuPmh0vkdv_qbjXBUX1sQ-DmwBFbjuC4punobGy0\",\"last_tx\":\"\",\"owner\":\"wj-p2_SmBU30bmKTmAJliAVRbHf7StI8ZywIwaZXWwcbtcbpJs11VUaAiFkbAJqaztqZ08dqahTpl5lVfhtnZkzB1SztIAjke7RP9RJGYAqAUiQHQDEY849nmJtPAyUN8TpzZ29aO44WT6Tk1wrH83_pKBBZA9w82nZHzcxs8GwxHW1rmWwTsFszvaiWm2rSfx4s8xYxcbXsOqhy1pitq-_FNBoyqReHEgNWwUD7hixjr4__mQMo6M0hzsBFpxoFszHNxwMXaFxT0Kcs_oHhYkNYDcCxksc_evtgFJ8O3zd0iCj5OwKkwnVV87piOVu-fPp5uRN2wYsfEnfxHroNnzwjolfL1cRPKyTeicL5OCfqlbt5_zDIFTt5kaz7voXhzf4mr10x9uLvEYhYvYZCI32TloNA9sM7Hv-ngaobizBg36agX4g1UX8m2bTP9WXSq8OhKX55QbIV8s4f1J1WAaJzpbhF0k8hD-Mi6EnIuT9pwr5qadPK2yRKgXqfzKoydiM_A5NB1NYeENCtUvF6TT8p3o0ljF6Ecova9iAJpSgIu_iUplfvQhDXwU3piRPIZuB6BvucFxSMhwLZcWKlLZ1IzRoC8R_mC4zVsP9I3CJ-Qc2zMZe_51KtJg2TzrVgSDe8MRLHouNi8XPA_Hr6u7dyI035OVfXaKOmLyH1brc\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"JiM2NTUzMzvvv73sl7AmIzY1NTMzOyYjNjU1MzM7ICYjNjU1MzM777-9656RJiM2NTUzMzsmIzY1NTMzOy4\",\"reward\":\"0\",\"signature\":\"ooqbMCM_uvPOr4MfVADy6d2uv4HFcHY2mdncj4JhfLK2i8we6VD4oyMA6rv3b1c_GOcC40wPf5rIMIw_ptKDeCvB6GOCWp8T20i2tP1JUKoW8G7twIMNl0RJ2IbABmDHh3wlM0jwhRwfRTmgd9gDtMPiVRzZ9o3LZSvmvsDAOIZ8Au9_yVY8qE9Pa2kwoLMSIeGe4t_HnXs1_Ae7uOyq29hD06R5L060LOJIVRRyJ3vYh-G2uG_bXcKaZ4WDquRz6EPHxfllV-2IPr3WhVQY3ymllA_yQiYjZDUjsW0MVEDxZj8DkiFJD2zHnYJ4Fz6qj6aB64pApYdLCR1lh3TURDudgvU7M3rkc91hcYmTKF4bNiWXIP8AVSCnKOE_HCtp5e4HVvRd4dp90iN-g4-e4GWW0cfKejlZizkMz11L-y5VUIhLDdfZFwh0XN2GEd3hJ4dz6Ip9_C7gIXqLzYLad5Rk6SyIe9BHKjxfQsKtU_HTFsdjQycmbsYAQa6GoMRybza_wEBJtrqMNTPLq0jHkd4htjYiG0_4VAmBAIUxg5qbXNKzZNH8cfI4mt09oTFeAxdl0mJsf_5rN7ld03EEd63MIbVNs7bjHesM7TjlglgaCvjaH2pieG0md2ZhSDmh4NpYr5lnRnV7yKukVBjIUk9C6C1Wo6jkEOt3qcKtRxY\"}"
  },
  {
    "path": "genesis_data/genesis_txs/G1GqspPmLkJTiT35QUTWBT4def7j5ORSfHCtrYzrrng.json",
    "content": "{\"id\":\"G1GqspPmLkJTiT35QUTWBT4def7j5ORSfHCtrYzrrng\",\"last_tx\":\"\",\"owner\":\"5tzVDE1rYes0z90I6VEn3jCsFra3-UvlKS0D0Iw7Qci9tGGM8kU2jVuzRhzzFkzHHJQ0G5xHGJ6IO95r90gjzZjipbjIs1B7oWd8gW9FrgmcodnqSSgosqCxmY0pTV4BNhCqzJ-f5LCTUvQOACynlt3bQNxyyDipL5sae0ezk-_2uCyWVCM2_k5UfWc3QJrgxMv2HiGKpN0STSP4wTsSeGhU93LRb15hZhhh7sfwhvdWGj3_aVodpFxO3pC8oWdn49ncIpHhdWLwMuWUfMQnOkhYk5UDr8TSoTXJa7x0fMEoOkgE3bqGr-OA-1od5WN_hHhDGpBqPn_l5kZrD6xCHk0J7lrrMXQwwwFWGth6356rxBVoj7rWCzeUG7qRnYt0CbEUJ7d79Gk_DF7uYrVTmy5tErCXLGQ03m5dhsA_JXeIlWGBSaqtZjKORn3gp0I7ZgHvmZFc22vqyWiI4DhYqGIz9DxhtmJfH6uJaHpCOOLvlU7FgahqpCgytWH8mOS240rOiP6uUnSJizpc1iPZ9SblJWuAmWhaMUIYMMLArXKzfMVliJEil8k9z3wZBbqn0c-TDlzj6FgDEhbHQFee-565Z0pQVsZyKFUm5pUnfjezKIGIJZnRXA7eJAP7Xs3Xs0LETcAdGdhYKDYDCk7M1_c64m5tl_cRe_h3Zt2qY_c\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"QW1vIEdpb3Zhbm5hIFppcHBvbmU\",\"reward\":\"0\",\"signature\":\"cKEnS-uBL6fFLl1GLXONSfKIZHo3MQW5j6Xumlqn3I2SmmDycwGkSk_v_3c838c6CMs-ik6LYtzFPxOg6N4blEa0sd5Y51_6pCEp5R2vrRwpW6001X8sVF0YMBKRmP7W9Fvt4WSdsQ0qGczvQlhwb48u_AeVBxur2FljIDlD6oFTbNHzE93FQj5IUUcGPdn06c41T35IXkA5x0rB8PMwyFjt3k8_QCIdr-DhhayYtMt7plAtPXu4uVo8_O4sWbG7Hq_YU2G90DU-Jx2wLTwnrp-VvQ2MUpryIA4etFqb7_F7shmjmOBJA_YeXERYX3ohrTqCHt6Pf3koOeCVQigZQxyyiRQrPVH9TJLlmUEZ9LtmvHxrGGCry63AMrHUEcKm0UG1bE16JaKmWIXpy5zCGzJhoBQHB2lWOQq6KESKrSzl2fj_85nI2_wd3vQdGNGufjcNyOQkf5LnTnmfUF55zMhh8Acfxndy7H_7pvqlW-2PsA4C5sKjggqf6GHV3A0FlUWz2rq8XGVQ7dXKgKKX_57AGlfo2YjCuauqQwDGtQAhM5Lyw1QvNTmxEGFc5Lx11aQcgrc91sufcGB_C9ZT-lhAj932JC5ntKBplRdmgQVFNGsWovfoTL656lCE5iUHcD-M5kqoawcOLNL-hyKG_g4cTdsb9tnNCAV0SYgW-b8\"}"
  },
  {
    "path": "genesis_data/genesis_txs/G5FyMvm8E0_07vFgz-XISJN3VEviSrbtih9_Wptef9w.json",
    "content": "{\"id\":\"G5FyMvm8E0_07vFgz-XISJN3VEviSrbtih9_Wptef9w\",\"last_tx\":\"\",\"owner\":\"yXRMdYhH8avNYQqzk7UFDQO3TY3C7E8JKhuId3gDd8aFQOjB72lOLF5w7aB3MWZbahJYQCwrym6U0SiB9xf-blF39oDzsCFG4RdZEV8w-WMdvkDl2YsGh8cMBvOL_e6tugnAZp4jCpUcl411O5dNvdS-b2sfxlQ8if9olXLCO81pDvP6rlC5yZbBTnUQxQ3fXc6zh1K-R0x43c5d0biUe0awJUZEWm-jKRcYf9PfcmtqesTrG1O85go-3NlwnndcRr__vQx-3seFsWQ6lAKRUbfTv7zTkSKmmLnVFb1AdzFZuyPDxBRRF3N82izo4pbA9FopTED9oJJgm2zxAijhK34omKk6olTPOBFYb-rod5bH4_FoT9j8noGWkXfsOwLoCdoAT4ljgse0aJAzOwdSyAjx_fay0x60p-LyxT_WclUEBufG4lrZ5mPxix0ukpQ1HtC4wOe1ZHYPOWlUetHSGbOi2EfG93f96yMwM2MtqwP7nBlMe6d1bFjmysfdZ1NMJZ2ULoJMFX2c0nfoXjuXGYXOC3o4KRSP4jeEfkY2F31YYYyMqk0zPIcjGrezowenP882Hc_akvbNcmM3jYMDmNi2FMIjRmjxbfaczflW9L-syv1eCRxiW-a0VA48ZNL-cbo-A6szqANTd92tyy17NRxp5shgqzBai-9Eoa_EMX8\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"VGhlIGxvbmUgYW5kIGxldmVsIHNhbmRzIHN0cmV0Y2ggZmFyIGF3YXku\",\"reward\":\"0\",\"signature\":\"Dz1QhMBgZWHZ9K29dzvykbVof4D4lZ5EbZLplRyjH92OS08_yIfjR06_waWQ0oxkODcLCTXazCzxfUXDx18JmF0W2e2l5xV2u63TWcgh9i4UTIs4T55FwnJFokv3SgPyHGB19x2fSw2pKsIVQdYyegVnA17IZ83RKS976HUmLhCoNBCVBCY6lnuUkoUf-XvqGwg0BR0v1zWvKi9zHe5OYHad6aQp95GQcoWdS7Da9Z6git3VocnWwg8VlzUvAsVX_3fikbsi2n8p02lsHYLOhE3GxKlLgdCo5aNp2A1mNkCK8H8YmhWE0iMtP99BRoLTzJ2C1j9yYS9GRUAEYRRsJGnuMSDT4uP4bxElAitk5h1CTDIC2ddYxidWDNQLP20EGPyPW0BmAaw-Nch4FSkWbWWzzLim6QY-TXN4PEZ8whFVzDqfIftt-NrleavBIK1XBlQcSz8xOMmEn3aTvavYhVJwBPmQ_WwUqTfka38XwZ4hc-b3xboI-GJbWDGtorMM4NyXVUBY6DOAQMI4Rzv7WkuzBWHaUzMLEzgOHy9Ph2ys7xeOftqGbFLoU2Kqa9xtnXObNXIq2O1JZVGQ42rk7pUmEhUw6XjTbSttXuxYx7HrGEsKnZP9uVEj-eMUUtM_wmI3x3ajiGfe0vdK8S2zrqDW0PhiaFOhYWBvN_ZzN-0\"}"
  },
  {
    "path": "genesis_data/genesis_txs/GlWMQUuiL80knS07G7NpoYat3w18VMuyLEuC_Pmijng.json",
    "content": "{\"id\":\"GlWMQUuiL80knS07G7NpoYat3w18VMuyLEuC_Pmijng\",\"last_tx\":\"\",\"owner\":\"0MEzGJhdr8eMKwiFhBd8xFN-kEVL-pA75blyMx1IBsDOZpAr6q8qIWxxpIQKnUJEhBckh7XPWxLEDGcRiT4G01o4OsSfWFRJC4-GqvOWLv7bV1BYwaJwPIJoxgncWDh9OUhs_Xk5iBEoVsdrkvWP_W1i8hGJjarcsgNYH84RwnAcfoELsh8lSMDzceCuUOdtmDY08eqJt1W8xJM7nV0jR89Pa_oWpbZm77LDJnLc4nXTcIq6nwKdXgH_PeCuiV_SBQMJLhdrjniXLbuRwzUCSwM7ho2ZnNkI6OEBrxXxEXI2PK1sNlpfZ2V4eIsrgVZ9lIgXRTauTzpGJPvFlIBVGdGMdaM2JwZI2rk-Ymyt1aNfGv0lwX4OEsEfeoXSEyhdyDF3_rcGCAjUyJz3Q0LqnZqO9PNhKHjh__wL_SmB5Wt6L_5Yx0GEiovUgpeedBXDm3MoVVkGSwCLZra6V4nz8uaLq749A_RthyjTw6dgW1M7XonWtfB107gVqry6pnkLS5mVLVIM1QatgfEamv_LOEs9_xJOE42MwmTcol9QBFcWJSeNa5gjgtTooyc7cL6PP5ooAb6bZvCDzZ4J5hgDqrb1kBcG0atzTOBYsx6lgm-I1zDgwgLP47-nDZ2bfmMifIDbMv99H5YH-smXHkpxI5pIQPWXM1OxEOR0H8AAZaU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"VGltZSB0byByZS1yZWFkIHRoZSBWb3lhZ2VyIDEgZ29sZGVuIHJlY29yZC4\",\"reward\":\"0\",\"signature\":\"UZz2PI4K4PQhbdO44ilXzNctHNhQ7hRFlyuTIERc5tCTnAzj2IzPdqEXGCGZ4qtQP6QpWNzRypgbJ19uxbqr4HL-VnNNZSGD1RjhQ86ZIqvKNuve3hbpICWPTTQ2msI9HvqSCLARYnlyDfZHRwQuxsPOlX8nK6l9Vp-MpKa7KRr5jMbS5Yo9UFqs2AB7bnFtKpxUFQEnAXQxqKvy6Y7FLnpCq6CflxLDiPJmzMYNOA2ve-cKKE34dSfCqnf9qSjhdK7MJL-EvicaM9rOOfNjBPnxnRVtm1ym178q3OLeVOgU2zL_N_ZNKed-dUDAGWFqFE0RVLXaBCMtLnsRZ632bBfw9CoSR9FKxijesOFeEqOayVJqXS1-vou2ueTYYkhJutE_ufKKMb6ndDmJ_94xj9Eq861dfseXnnudfV2l_275ESxic1uAcUKXRmHqI7yU-l1RyslSKB3m5Pwvatx_-WOT2nVMWSe0bLqVDA5LqQvIGXMn8voUYJVMKlRm-CC-ft5ttXSQUnKdZmu3Ft7iVFkiE0_dUaUH4xlZq2VQpMc6TwhJD2X2Pb9vno4HEmfWbouyd5TcNSR_hTNx_LUYcUWmruPnz-k0JQ0NfMrLzn2p-EYabFOmdsE0sBuX_IjxneFXlbKKczooblPmqafk3S-onOqvhCGjQQT75r88tio\"}"
  },
  {
    "path": "genesis_data/genesis_txs/GypgExivgblZSA-1n7KjdI0SJOyXwFJkuzzPWS4NID8.json",
    "content": "{\"id\":\"GypgExivgblZSA-1n7KjdI0SJOyXwFJkuzzPWS4NID8\",\"last_tx\":\"\",\"owner\":\"wiMuS8n7al-1axBqpaRMFgP9stScu8QBfMgJhIcyWAI1XHvjOFN2PsXYWaA8Q1XOE5uB19FvRBCq0AZtyUTVuzQfDNTV7KO8tCHPHVnJxmd1LrGyDSeC_oR3CP6ZV5o-BXN8hjTXrstuc15X78w7iW1miWv8AKSVo3Sja0u1ZV3r83hwfX8__KYqPECw0JReB3R2w45gVmCTTT5TARRp0S_Z4AMXgKcVgK3aQ-AxhCN5eokSdQ2X3VI12-JseKB7PBCkNb7Vgl4zlmb2Wx2NfCQXhMfCRBANpqJ6CMADGZbx01Rtw9C-Pwrde7u9E3b7MnZFoph77bKrx9a9za4XklkopJGm0SMfc0YQ0_RV5i07utVhuIFNcC6_DRaMLcUFv5_7q7Hsco_jfkI5bTuU9h14hadwCqE4ZXO8946t6bQUSgu_HxkecSiOUM6HTbxH_QOfayWZ7Fnqn4Q2PmAe36JLaUCMW_4CQXl5rAdVmu77dJ6Ontyoov6XVGEv67nojmI0ZtccB4lhDFXnjohtEfMGucyJH-a8klNJgsm4Clu2sBB3AGL7n2TNJH9O5zH7PO6bj2HAswZKMRus17PpBXDeTK_8rzrhqGwRL3jssgfklZf7EoYZFSSczDyM1-s44tYPKnAqO9yO4WxMZXhmaVjYUx_KuOZncj61Y8aKi8U\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SSBmZWVsIHByaXZpbGVnZWQgdG8gaGF2ZSBiZWVuIGdyYW50ZWQgdGhlIG9wcG9ydHVuaXR5IHRvIGhhdmUgZXhwbG9yZWQgdGhlIGludGVybmV0IGR1cmluZyBpdHMgcmlzZS4\",\"reward\":\"0\",\"signature\":\"OOoTeJg_fno8rNw_qufcQ0DHpwpphC06YiVzw6gRPJqBHLF1grehhHyd1t8Yb8yM1LDimUmA7uO37u3solavaLKMMEIHAcwTpV30J63m3GLhXrzicz7-JCvL0hS94uXxCq3devMksNLVssKw_Vs-BZOVZ9PoUDDQrua4Kj1VLEIOD_NYVtH_Ot_LqzfgLYY3LazU_KvEyIbNX1XFUWn-YnB01AYJOgiGjQ2eDG75XrkwFDaX6Fck6juHJru6fC64gR7Zp3_FVMcYTfHpXDNLtlBrIl3oJGKA3bZibZGn9fN3Im6tUk3YcCVGENGkVxdKIkr2h9LCf9cewfN4rGFQuF6w6qNnk5wCYaEYp7eeHStZqhcPRJRCJYX0jxM0LxyUbX0SYQ2NPmdabPwRoKMc_ngOF-gMBZY7bGnFzQRXRZSYElymDi--3qzbhVxvZRe4RiEDT6q4AUr5nJoHp22I06FMZIpqlEsHPvsV4tZZJSLiaYr3g45vo8vFl25WZEtAw6_qQujBQip8UlNZl1A-rhX5_oGdx6gDIgquIy9BGNpjdQSer19mXGJcXaNTycOnPQfcFD8pwi0hqUuGHAE4Z1nGkTQDWgZ3wQE1JH31JYeQ08S5Xz-bZIJH4eSr0GC53iz6Oiz0lGizezNC4dqRTSnY-Nz6Y1JQ98u6No-JTGk\"}"
  },
  {
    "path": "genesis_data/genesis_txs/HFUR5ZwLihdaonJWHRHBuLay6cw8ZMV0bM870xhE6Qk.json",
    "content": "{\"id\":\"HFUR5ZwLihdaonJWHRHBuLay6cw8ZMV0bM870xhE6Qk\",\"last_tx\":\"\",\"owner\":\"u1OqnAOYZohbpLmXPOnI4R8RpxsCd3ByRrIi-QoImZ-QwuBwTNc29BBx4w2m1hHsnjSYC8tH9SIbPMQAGE5psdes4U7uU1DYJjJbcIlxIlchAIBq-doE9GFLoZpRRu8Ky8P-Ut9Jp57rXzEudTj6Np50i7aEtaUeaGnqGur0u98WlZyL2mk4XNM25Zb2ofrwUh_cRGoJK728cEcl1VzvipTrAYxwet3mQ-L1-St0PDYLHp_ZE0cIXDSNPuDALg3vWfH04VCCtLPvnV5X7t8kIJ-KCll4ZoWM3a1TBkVHEV33B-k_EKoYcw_Nckqy1wTTEqO5zlJ0rAyR8LbKi5fk8h64clQUcQNLoxC2zsu3QKJT7nsAoIi3Ufsf--XV6sADIFAlOLrIAZYNUwaoiMbcu4aFyKJFPCwVM2D1OcafcXRE6P2wZVkGh-JBbCkpM6TWxjc1emZr6OYr7c2r5e8j_zAmv0NtWCjt53V9xolz6Qaj4fVR3dEhtkciaGtYwGUTbqkfGO8Pw-tXgtGHQRWMmvCkZ3-RGiYjEUQbrvhj1IvkbElyVVqPejEVV9WS25cyNEe2yx-kkkGEtQb9Pe4YhZS0mgny-LG-BNZk-_vNSsb0m9ZXNVyjFMnuay9VaUm6OlkMpkILKBnKhFe_HFXsGvgWjghTsKR3rYs9l49fe4c\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"R3JlZXRpbmdzLg\",\"reward\":\"0\",\"signature\":\"ppQYJ3-otBviVMcjHRd3XzYoNq_T-7_jgxymJfqaQXvLiGuDPiMflsDQX1LqqZyHSibSpjMtymQXwyzvLn6gpID7KO5yPCIvUb982BZG5xDpW9Hl2yEAbJ2y5aTxnGMB7iMMnxgpu6BSovQvWeavhDNuOy93u6z9jCJ9Ba_qDELfv7mPjwYR_qT7EmXvxPWfI8uIB-LwtrmkDve54f-cF8ITg4voLMEE6kbVNjMpKbe4O-pbmPejKrzqhy8hWS1fc51sLt_C2iu3r4r4OFzdPTfOB41zbPP1fXA7TXoiaCyVxP4e3pUsRAnpQIZG3OHjZR1tioDA2gIa8V0Nb6BxJOhJsjipwjATImGJtY-qSLsTSLvTIsw_WiF4G_FfKdKCGwycjSNTAQ_sl1VOwrds5Zp_IMywRRvpAb1Dz-0ruSIusfj0pOGEbOOv-BLPRYPip-KJ26S1Qr-pjcCgbrIp-qPJA0oaL3akdawIL8wkfELLGLQqFOd4LaqNz3HDUCmTpACdCOEWyDSRBVpfNy3GbWVtjyYGODEPouVb-_y86xNtMB6MvVwTsNoBOo0P8rKFE84mg3y0E5XfcClqnHoqEq8OSDX3akjC05SU286LBYvezrrAsLQ-V_lpS53qm8OjPAFHERabagKzJW16fl4h9ZMZlDJDH57sEcebTM1ICxI\"}"
  },
  {
    "path": "genesis_data/genesis_txs/HSlgnBu2Yxros7zyehPgiu2u7h80dJfCCqrA88UnkB4.json",
    "content": "{\"id\":\"HSlgnBu2Yxros7zyehPgiu2u7h80dJfCCqrA88UnkB4\",\"last_tx\":\"\",\"owner\":\"tAf2q5REOelZsFgW7Yu9X5ohits68judV9lkRnR7aSXuxYTsZsyNlQGR2m3S9EFQyis9bGxKjLWdFSpJ6BMpTorZ4sy_4FkVTN4EOLbAjTlKBEwJ651b5NTiKtdi8jowCoWHFfuojHJU62B7_SypcW2iQUzUnp_CTmJDjOjs58xSor4nU3BMhfpwawixq6lwCBZCVq0RWbElHEnGmEE3UdFZgPBpJusNJ3EXFWC-abDnAwEnMoyWsyAlLb4grW2Im4nvYZluhecpdHJEkHcrOhj3uCKzEl8HTttye22bYVSBi7cvLQ0XB3EcrLY6hDwqo4Qc88jihVaf6Zkj4PfftJaKKveg5V93XCzb28vWsNMezn-FrBSDrzqe4DdB6HiisQCcUb1FIWYhW6XsZIUbZWJIqaXLeOPykCDUcjF7CbL3QnKMlFfD5Sd5T9qbSOFvQW5AAfNzGCS3lWKg9bO8vEMM4u1J4JA5a3S5bc2cVQZvADysuNaD6iVnBx-4ou1rr6AyJbix4C9-POWzml3H9jJEBU-_Ij70XQD5c1LFMYZFUa70UNmgTmln-vTeSaA64742JWtUNE24iWWKbZO5ZaXnohDGM28AlpfNOhrwH3E_8Wixjvr0NLMILm_MpMPZc8VZrpTyTkN4EbGFbxFQKKNMraAruKujF6rxfikpgd8\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"Q2hvb3NlIHlvdXIgam95OyBNYXN0ZXIgdGh5c2VsZi4gLSBKLiBGcmllZG1hbg\",\"reward\":\"0\",\"signature\":\"YvuySiCV4neZZDl0ywZVCg4sj8nmzsSCrJ8P0z1huv6PALL2vrzfnCpvtxzSHnBVzIr3Wd7PSc_ZysEm_V0N6x7lpeiQFIJEpi7NwwYeVi4IAUvNV_Tu44e5KkiaU0eN3UxTiqoUakjAxCljMGC4m8rNVbp6RMbTDjlHjThzi8HtTqsNLT9ty2tDdLzNBLULShlI5F6BuZKGqBBq6YEJ5biOa6Lt7NHk96QFwZbc_-mvJbwbrbg4Jx0f8zIBpZUtO3_uEqvcMbcwJ0f7-TFo-XyDR0PpsQmKaU7_AMptiYZfE50o1zhbpzf21asbqCCVAf_UPJW7es0fQKcAvJeEe7D3OZYppfPu7mScA_cEfI92NofUwjs6Lem3bEVfYlmkviFaCIQBUc9MAYSHR4o50Z93-I7rWG-eOriDCeIisUlRAQVF63s3dID6_bdBgtt_TuwLPdwjb2dM-TSVjOa0swuHhgppL-LRyr4gi94rjIf2AoPKGRJdSU5wxjvcqxUOvntIZ0myEZsLLz9zCXIFh-uaYVFg8SPcP8A_VmUTwstWSGD5xyf_GjuopcFLKGvk8BSuf9QY1drbq9hSozAONESDC8iazcYQUKWKAseQE2sGjtO0s8kYrIYxmiExIXPbKAVC_8vTc3jqb7nrMvDhwbgwwCjen_Bn3354dgFZv6c\"}"
  },
  {
    "path": "genesis_data/genesis_txs/HTt6lPYQfcIgUxKPjUt3aQrpwE5e3UA4UT2EI9RxSbw.json",
    "content": "{\"id\":\"HTt6lPYQfcIgUxKPjUt3aQrpwE5e3UA4UT2EI9RxSbw\",\"last_tx\":\"\",\"owner\":\"7TC0Qn3_k2DNCwz47TX-CpOwJ7g_EoGOtT5FftWFAwRrrqirSzO4SkC9GWgQMOqaiye2ww9b6h8ihN0Q6lQNTUTHCijDe7GjjRB3P5ORO31Y9N3vGn7tyJ5yEncSFwSS8y5vSZKTwL5lVFbM94BaMRETKfcofG2dVpXmLOn7a3nQ10TlTy4_xME81fF8Xjz-1SBL9GWEkdOs0EmfBPwudBv5F5bounNMWe9LmXGjFX82KdUjR1y6R6p7x6iRu0iiTBZsSRjHloSHxjXpgBHIvPzcOaRP7oWv0mpaTqpDj1AGnlf3gA2X65dCNHWI6pIzKIwsWzr5O3oHYUmnN3csyTliVBcxXCjS-fU-lLZoIHwyh_6jTFyNN0agzje19oe5XP5uVY8S1Lw9ptH1Tpe0GJSck5MZMXakAULh28f-ODtC5qY0E_Ts1Ut-BHUiSTbp8MUKO07Mq91EtsjdtZl7g13p0-wawh5Lfr3yGX7isFN39uivwCckQmQLX1_d5TqHJkZkQvK7b1TtpWjBs43jzkOnllGO4zytvPNpLKJWwUqStjS5UFfgMW1EMaiK0fcqmZrzDEjliO-jeA9eEepazNE3hN60qe7IiBOJkyxeoPXqLUVajJPTCeOaj4AhmeFiSQJZsmuFJObh_Ugjhz7e69_V4EsDWduK_kgr2Ohi8B8\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"QXVtIEdhbSBHYW5hcGF0YXllIE5hbWFoYSBBdW0gSW0gSHJlZW0gU3JlZW0gU3JlZU1hYXRocmUgTmFtYWhhIEF1bSBTcmlTYWlSYW0gR3VydURldmEgRGF0dGEgLSBWaWpheWFEdXJnYSBJIA\",\"reward\":\"0\",\"signature\":\"UPrpyq6UgXHnsScGsyUpJd_4nmxcWoqHU4GlCc_PgKZ2n3vzQoThNZd4MwRRY-x-e7zN22SnG-1gSYty-MVqJZEJTqzvdX7q22jDpFYoN9-m_POYVfKPzwVC1EOfiY_A3goaTUvrfqc1yCa2OiBjXqlDYB3ZrwgpO7Y0QwyA4mVylNcg7pnsfD0jouymt7PQiMazQlo50MVv8UEDROyjS75F428w_1Ub_VMquygkATryLRPOpp4dC79N4iPS-lOjNjdLB2dh72agywSCccUHx2zj-5x0Xg0ZykVxWsm965Gkul--ElwB1nu304P8yBt6m65rYwDJr7PicTXPz5Stjzqo8rbJB9Gd_pnIYZUZM4v0NrrXPhEp6bbM0sBumQ8o0xgAIqOsUywIzdtJoetGSdBi0fepvL69WrO53HuDfZB61d21N97nmEdX_UAULVqoHD5V8WU4oOnbqiBlcoTP8nRG9gOtpvKe6ZtglWLyCvTC4VbjehBW3E0Gc96dIDLXAMXaBSDRaeAGRKylAL03So4cYqOMI_uktQSgV0BDjnvkCoVvl_9SClYpjZM6S8lhBcY3_rEFT0DVhLKPN2qjGIA4Rut442FNBYQyOsztl4LBHBUd-5HkZiE3mBzPjhGW4U7vZFGnyD3GzCQTSl42Y5nPMSIb5TNZQZ5CUAOAM_c\"}"
  },
  {
    "path": "genesis_data/genesis_txs/H_0S6x36tsFH-x1h77jV_zzGGp97V8UjmgC0RZYwbtM.json",
    "content": "{\"id\":\"H_0S6x36tsFH-x1h77jV_zzGGp97V8UjmgC0RZYwbtM\",\"last_tx\":\"\",\"owner\":\"0rvKZSDkCavOrZXIKB140BgjUKhw3wfdWNz1qbgXZKs0lyBwQjVOi2PkuR0K0wnrLbYpm3znRXxmzvE0QRYmWElIgKef7HFQuDJOpDYGeahlUnryCkaXhp2N68JjyZOwQejGGdCeh1XoNKb7dxnQuX7RNmRgIyVUUHC4Xqs2W-FqHHphPufQHH6k00TaEF3eiz2YID7Iqc2QaJh4TPJdhU9iqXxq2DhONSh4Lw68O3mHdBEhemxVEkZ3l3xMsVAV4IsKcfvFkGdLSKr_XX74TF2AXd-3Hs-raBUTrW_aCyYEAeFafWqrL2__WFqvSBzxV6vnFgZGJmyStVPg63FeTdP9-FDvr6twCEgKvYlbIdv0s_KWS3eDKjIF_cqg0ZC6AxGdsY5J3sHV9VHENk5IyUOpv1yd_1L6AIwkqttYVZmjjDyCQwdrVSxIERx2QJhqFVeyupu75pImMIZ5-FGOeU_zEYL8NuFiot3KKH0WGSuE7mPJi8DCcvFueY5G-9eKBlC0T7POvGivUYD-ccVQ9AiGAqtw5e2-s9SNpB9VUghUy5Tjwzw_1ArCzLUip93KpNWtJmySfrkS_WVzU-coe7GMvSpz6Xh5jYJJUQaRCTp5T-xq6ve-BXxNSqfDMO47Ehb327qmxJOym5IotGZckKjbmt0cB-d7zZVjTaQF7Zk\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"TG92ZSB5b3U\",\"reward\":\"0\",\"signature\":\"sYvutzkNKPYl5qDQ-LXWI_pwcUD-Q7gey9pW1kBYZlCnQsHb4lPWR1AjlSvOsO2CbDpbkA-mJVtsS8TNewIl9oMc9Q2dY310n5ke_JF6UEbU-t8muByoi5SqhyBx6if_5bgIlREXmo5lyNcTFnPJw75PjCm_A8FapARU8RvZbJdG9SC7Bix3rAJOO2GNCTGhPVqkq3wEJDnEIgIPad0vxVnJzIbgN8VMNFH3K8Mg3BvmsEztCFbXd95julPuzlNmpZPVIg5oe1JzYd9dGxxeY2xlswERYXQYrMaWQv-mF3tffJwoRrzz9nDLJ5ARI8e7udEKCpFafmIFIbW4uxGHroeEvUra9qI8vNzb1KeNlpnYV7lm0eR8rY1r7Xa6bqijvvQePLK4xhrohVNPEQK0itqJbhDp23iTy9fz31sOuNHSktN36w9SeQpGeGMP4xccTBYWL-tZf88i5mp7RLKDyjG3ZC6tErQBaClKJmhTMvon9KvxLsLaOPqkqWvFmCD7UNwPvJVKgq44OhW7JdZTJPnxUoEYAcmkTUFSAlYxwk-PFYdcCMIFDzQ_Bhlvke1fShOpTDxU1oXsyFGDp5NCF1izoaoonftCXXwxivICWn51rM2OfOPFNF_APbNtowP5qIZGvmXh9HASSF5FIPgrEfo-WYnx088DsoawCq8eZ94\"}"
  },
  {
    "path": "genesis_data/genesis_txs/I6s8Z6gEPLQABFstkCoLVv_gdQNGb-uuMMut-R7q2hA.json",
    "content": "{\"id\":\"I6s8Z6gEPLQABFstkCoLVv_gdQNGb-uuMMut-R7q2hA\",\"last_tx\":\"\",\"owner\":\"xyHlyZvzMw77-kOrSREIOZx5hFH8OgE2WrbyY9YazqfiIWk0kJmvru-Ec1g5_vJG6ZRWFr8rquqJrZZ9AahWHLJj3d8PzYmLUhoUeXqEZDObovWFz5V8wI_6Jpr6o7dLck7J1rN4A-L3lmCm-qU0dldjt2jczeipoA0rsWY8DE9J_XFynd-d1O3PvXB9QifPb_7lHpWSwQ3q5YVijcUrMDkst55Z3pgA5BTUkJqgQffpUbP-hDZd2iuWKqacKs1L_c9KM-kKj5LJZOozCTsHPDPnYBIJCdcaTvms3R-0yoPLiQjcIPyd7CQwD48g52q4TUzr9Q1pbJJcktoE8GqXuE4tUO8qaeUZWSXZADLkf4zuPSbV_yhJsCzrf8IB6ti_vq8Qwah5r8KQptWd-BtyPwjl-c-7KRFhMz_Jw7oUZqxCTjeNMLPpuxyMhUwbvzMo9pctClaf_id3V_zF2CTYCOdoNj0uYV94MGXPI1YspbCgfSfxUtZYDUPkxp3xvL2Be3SQiihVRKKUXz8MpjpEsviGnQQlu54sVYOMgAk1jkDNiiX6et9gKGs4b_Wozqy5p_aTcaRaVDZPnq61M9CMoLZuQJXaBxpw72cswgjSCP45eMaHci7Iem8jBWsccpnkVE5GiRqCMm0tBgYm8qdTVzpHrwt87a_5RKFk4jrWbsc\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"JiM2NTUzMzvvv73snKA\",\"reward\":\"0\",\"signature\":\"ToUUl0dsulTbpZUvDu4AfN6Klfc_DhuFx8gtPn5piibnf2t1HdXSSsv3ctrpNlDFRDAlvh7IvCKlkwgZBPn4E8CLinKjzXaO-HWWMqNm1PAuUgqutEJckDawlm5B_pX2tQACYvQixAiGqsZt4lJcmzlzc9zH_xtCwnYl2n598e6wLQyd5R8oeGcTusWYXDozi7WfjRBgir2cO4V8Xuil8d2D2Cq7CvRNdR0N_vMHtw8Ap0pYttLulpWQneXJWHZMUcIGj0D9nsGaK0f1PGX7ez3Ihcmk1vJ79m-H1V-WdbfJvFmi6wr-c8_eAkAgjhXbn6w6jdFhOAcyoOZNLJfN27-fKarb_2qupyHlfkBdegQIW7QdxTe4UslCoPl7BnHmTLHXQzhFhYOabs7MCan2LmVm-ZPHg4cbuR1Mcc1mOgiGxVwgfywfJj3iZLNNVFMARi6weSwP5o5R8_e1j6tYoqfiV5MHs5WLW0LIlc12mEdwPSqIIAnHGtS35EM1MxVkU4kL74Sk7GHSsiWD2z7WUyAA1lYQ4hLaPHSjQ9zXEqZEjs_9ZngQxre-MfrIL0LTZCd4Q7nZDE2sk5yoHZyMt4M1bCAmewXXaOAzNw2zTsFumFhpPUf47OG7jAg3bXVzCV3cKoUd-CtxXx8zV83JmpThkQ2JRZU5O9zZ9YvpyLw\"}"
  },
  {
    "path": "genesis_data/genesis_txs/IACLRsWq-T6aesGEAjfFTZJd2sy7sFvWL7O6FI9A39U.json",
    "content": "{\"id\":\"IACLRsWq-T6aesGEAjfFTZJd2sy7sFvWL7O6FI9A39U\",\"last_tx\":\"\",\"owner\":\"ulUtfRLuZO0kprZEDOgrmG1Ma9StJc0lGl1yiZWV9cKhEC--VMRgxe55ep0MXucPtgAPnc_tSyZwy6If3f9wBYJRXopuH7LGYWjStdwSgcXKNj67s4eh7boyGP72s1uztm8h1hgP3ShFv7aZ9O_wUZ08uNH6s2q5EDEr56q_G9vgRAYWsdFJe5Qc9PKQRESKxyYJbwNQtVLDj7UvL07XHiojIM8vg6-96HbsgB6ldf7_Zq7e428HYHE86N1kfhwn8i-T1kolaqP9Pr-U05gwGFm0mKtBgLqrajKz6Tro2K_YGI-DnlUGFQjc5tI2NlIZsgph9D5aeAKNaT4Wr7IQUZH8qvtOhOC3Z2E3fYW2ky3E-QE4qBNcw9QFYa0nhukV-1xz8QLyBLAhzguCWKi862r9Lx7RMgG-EoEZz4Jex2BSXAoVI2my3l68XDIiem_pj760JvNTzTlEOUubGkiWsZoxWxst_tZl1Weae2cniSKIt-pIDniJdBjvJDApO5cfY9p_vrbkQVQ6fB7u-F1749jSuKcGZQxSbWegPWDD_6JiKb0g34dUkbmJNjWvKMQdkXVirw2AWy0QXlztdW-XiIkAPuwTUwnCX-_5omnOGcU9L7rzJ-JHVSWTCtWTD95LhHhjO82J9zxrB2AYsC4nsY-1lDy4rHSgxRVl4ZQ5whE\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"R29kc3BlZWQgVW5aYW5l\",\"reward\":\"0\",\"signature\":\"lOuxY6HnEGgoZZYiAnmPEttOU2TAA4bDXEAKHWUztH_GdAMllzX9WLLt1V7bLZS3JS64FoFBxTFXJRxa0XwNdV8YbDWaTKO2QS1R5guI_gv4aAV9VPPD33AxSAMy6jWSF0efiZuJLJjp6kUDaMa7CY8OVPVm9PQh5ILMFwQsoSl8mfdQulewh7KvuoswIYY_hgOV2a1UgitxBvl_-7JJRV1VER_lpwdmLV6cysyjdgola5GvpMyqaBzyRdqT72xFw2elUsYT4TxOk7-SrMhpkL5SMujis2YmYakTXwQSm6pcVN5JC_6XZmklcYr8VK68IvcXbujUddBtQiwBLJTIxP0nSrAOvSK-dVxPrrjVuPn1WPic9F0ZKHgsA4SA0bnOSrW8oN-oAbMCdI5zphou44tEIn5df5qbVd-vocQZQko26_A4582-vfvACIx0nUpL-7z2X0KiTVSq4xsmaJQqyU5qtyBhwOBbyjVJi-mK29I3D8F5Bq13npqIS4JawQBEhcUjavytdL-C052k1FDlHJbOGZ2XZ5wgFSbwohI0LNL7E_CFvt9859ByQZn0rZjQKjjGnXZWcI038ucT2Per_OPonG-l5llm1babKaSJzIhkfdkAuRx4JWgn0bVpgy_qM_nbimvdw8Px0Cv3pnKEDxs66fqc5q_qERdivP9hcDY\"}"
  },
  {
    "path": "genesis_data/genesis_txs/IJsiiIbd-Qs39TAJ67hiRJFsBye_rgQdU9GBid_PnZw.json",
    "content": "{\"id\":\"IJsiiIbd-Qs39TAJ67hiRJFsBye_rgQdU9GBid_PnZw\",\"last_tx\":\"\",\"owner\":\"sTU0yn58bvNatvRvPb21qfqV4xKt3o9TvryQKH_NMMNtMY6GgPp93SnKZ6U_jt3uw4yob-KTFRIMI8NFQVtoEGxS973XRKcFQ5Bcx1XARAEf4KfQmIFJ52wZdsxrOiHbWTzhU3NsLgECdbF9m7arILRW9kv28Irth5kxeOxMWzfh8mUsNGVOeEKzaT3_cGVOeTiYz7umtL1P19PFYW_mBdLsgFnp7wdubQ7WfRzhMq-JHquoZD1FeRiip9Yw5dBHqOyVqTrhHN510iT4ESdowEb0C4fkrmSuBmbF4Ar2eD2A_A0evR2ONT6-ElunrZDFRuG4saBNB4uLXlkb2uzIkc7kV_rYq5tNn9PPgzjgAsg2V28cGyZPs4cHV9N0q5XXkHXT1nZ0fmRf7QwNZBTHB_OJcb9XibkFjwH8u5uKHBzt4jg5gqGkEEd1HqHN-FqPSNosc_P9Ybl9zvBPN4Bkdwqk_xwKgUhiLkM3y9-oGgOyvbd-Z66jD8OH459VKG03mav74W9Bsl3BV94riDayBYu0qWzrtwd8RzQPyh01U-8ksweHgXkV81QPK96SXpMM04jGeTkkt4e6yo6Lkgl5bbdfYAYmrHwv5S4M_D5KfoXlqdlgUQxo5xDXK18xAd74yI_vlk3tA3cz-5z1Qbm1NcfGMVMlooz4hTRquahZQL8\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SW52ZXN0IGluIGh1bWFuaXR5IGFuZCBoYXBwaW5lc3M\",\"reward\":\"0\",\"signature\":\"JeL_4R2xH2ZGdUdGnYCfyFOmIrMvgfvE6uuE6hxOF0qEvuUahEp2Lao6qPGVOx_OJg88vf4jYp3m4KoflgcErunimWWs-GTCeQKSPaTSV9NzUpvh8gX3BkoAZkWZAX-dhjvcXBiJlKsAMHSllQoe1q0oKz3WxtMfe1zWjYercxj1MehYqO-sUKYNvHGreH3r10BhPT5QaIEHEMDeyRLfYOnL3FwyolGwfVt-eIbPeUeST2oWn36uiPXBzlmHeezX09SfPpD4GuQiK3VJLnWROIF2oBQPGfTZ-7WfgE0nmcrtpeQMLqro2Jqhzqbnl1rqh_H4NvIms3accoa69uJ4sTTRSAubuib_nZkxFTuAIGbpMm4Gmdvsjb-gH_yS-NV6aCL2oJVVJ4vL0c30U5LqBcxnCJxZzNKWVypF10gOaL3f6xYfAI0EzkVbWksNlRllhUDsZi1RIKtkhVbUPTjq6SffbHP_NE_4ZJ9ymM2dx2WSeEl73Ab3eIn5MMjS2q5wzIVMG67rrgV9mHw-Mp2Z5acgBZP8vaG07YIg8mFsUqX63Y6IPVTRQGN-pUN_dPqGHFF5kSSXPcKApzHItE3bAsamsAEwKJviCNlb7HMvchLJLm6EkO6C0qjZw0fZT4aaYBFnKGCocE4ZyronPuBa2pJcnbbGhgsKkmkPYFXa0uA\"}"
  },
  {
    "path": "genesis_data/genesis_txs/IQgiEwMLp1bb6muuB_G7Q3sRaaZ3OZHUSjgshUq5YMU.json",
    "content": "{\"id\":\"IQgiEwMLp1bb6muuB_G7Q3sRaaZ3OZHUSjgshUq5YMU\",\"last_tx\":\"\",\"owner\":\"uXiwGMYtnV912Ds2mY84MJFwKJ-6D5DgT7Pk2GdNgtZ5UNIuDyrwoEAsmdl_9rprSNxQEyPxM55FQLwob2_oOrm5PMY3ltrwjfcgu9zOL4ljMSWLw9SNwgGh6tfKMUqJUTsNU8OwAo7VKpIXxs-J174dk98DtDqqLAjpb07kSUSux4BhSpmD2Ux7J7MVl7cycNu3xoQQZeeErxm5k6WuMhptL2h7f5I24tmR1FYsGUhKNym6FKOOuBf5nx6Xy_O6vUzxLmiy5L0nMEcM0aukkaLlXDE_yyliskC6EBr1eWu-y40h9S7b4-9u51PZHprixSy4xEDcCQ6Gjxd_UYlEcNXVuCxXFCiEGQE3L-Oti80AqcB5MpPuRW3E1s8DX4K2YaS0eF5vUsZlVHuNXF8GCbODHZjhwnSgIRjj-c5Rl2bGYXlQ7bTaj-MW8d0ZAZTU-kWS6W_GcGg_a5dbB6BeA-vrwP_rYk_OiYds6WUZvyQcwzMnZxWQK12C7l7sWgjBdiM8d-SyTScQu8gdotnCuCqTG9-xr6xVcJ_F2TI4BmYn3xOXZf3WBKxtSpehWaN6VH3iGUlN4nNjxPMXWi9kBRlfg3JOsxJvClsSocfibT_Imrn2gSwi3JvD7E_YbdGIvb5gdEzrNWzludd_z0javaN2yt8XD21ESF0fC2PbrHc\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"VGhpcyBpcyBteSBzZWNvbmQgYXR0ZW1wdCB0byBidXkgYXJjJ3Mg\",\"reward\":\"0\",\"signature\":\"WmykHf9WyLjg6ka47j1BoiwdCqcUtDjhJAndKvt4NTd7UC9uN-j_GQHxHmhycilObw2hJrdQXKPdu8Z7m74u9-3YqBOPEEgY3pBqVFXumlJOKKTzEb1Pbc1Xo6Y2CpTz8i6bwuZQzXvpuL3LaXpgbSuYCF7AnexeNZWT_VZQOx_hOTuvKttXp2P358DhjOM5BaeiBLxWpjviTYUkYXuxi0I3jEcZ7i61ReZ6jCQuF777zU9vkQ_n3ocpaBlr1h_-MD6IUICf58rwwp8DAOnLQ_LeKnlKZ8M62AOonNU-jQ3f4XVAiW63xomL_wu9TwS0VBhENv-MFHtspoqm3UIdNidMyoWrhJCCB9U2SnU6X5mrq7uB-xqGcyZjB4MnhWgyB0LkMwiX8gT96ivcDMacacGICZcDVGY76JAetiBvMNAESu9vCFh5iUOgDhuVccW8vfHOwaijmFFTJcST4-2i6dHDZ4YpR3t_lh37-Q4HZDiAsuFqKbMCFBH9wMyfHmODlti-9QSCfCsmQ9-Sck1EXzeQUImBsbJQDudt8sQBY16T3AX7kt59P5seG39N9PL6K4doisj3wxFnJuM17y51c_Gnwrxu56QV2FQ1_pf7u50SQAQUv80UlmCFT_fiP4mrQ5qp1u7A8PePkHBFSFKh7mAPWuqZ55SybU0OYwJplVQ\"}"
  },
  {
    "path": "genesis_data/genesis_txs/ISiC3yaTW9KnZmgs39osghIg0HP8ISh77bzH7u2m55Q.json",
    "content": "{\"id\":\"ISiC3yaTW9KnZmgs39osghIg0HP8ISh77bzH7u2m55Q\",\"last_tx\":\"\",\"owner\":\"nxSCfZYcODUXNVTlp-s84u9HkqX3FJoEWhIXMJ-xu6L8pOreofZOUY44JCQKW1b3alPd1S0q1UyvnDeEyu0yiQvkETKEnwswYq3iewqtajhdQOlfJuCxFFglmNt-c4YTqSOeJqtBZt1V7hgSQNxLWjvgYuPM-1A8bYPcMOkGFNI_SVlDEMXDfZFa-tsfRWViYWoDoqTiocV2UDQxhg0CcbfjXdNgUe3_Pr5MEDDCm6CAocL2yVTMU5qsqxB0YyrRYF_UO-K7SavyTUeK4vGxPQPLPBK5EnihisPIs9cBQkU1swJPKsE__QV8U4DYyJWjShOHZQx6rugx0scn1qXII8f5e40X0obdlzU1vhU3fox518I_3-MEmsEhdPllDB8Jn4KSZik3gJJu2pw2J2CJYSy9R7kbIXqUhoZ80NRu3IV-h1j973_cig4bW_JyKyjH9aq6pn8FhJuZRgwZMbxgHndC61bMkep6iSLCYPZtnBPxEDbl-5kvtUN9hffijOYHZ8Eb8OVIEcpeJavHy7f58mveJaSoHE0oM1F-K1LYl3WhnJT4ZUj-A28W7GnqXrzKxd_iXOH-Ks0L0gXBrS0kFrg4qQS_EzuDp_qcsS0VR87VyIAzt4oSuBFDOh-xNxbCWEFlcQJ_mZcwtb7Mq7W2YP3s_10nmiLGRItBru_TEw8\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"U28gY29vbCB0byBidXkgYXJjaGFpbg\",\"reward\":\"0\",\"signature\":\"BajSxma71R6EJ6dyR7Qwm4PwUs0VARah8yFLJM0oXQtKpqwXJUrBwH78T3Js3HZF4mSM5JJ5Uh6g_v76M4fCUVbBjfuIhyls3KNcQfAEEFw0QxNvlvwNqW6I5wfnZN4ckSyRqQP6BxyXfH5-QVsFbRkT7rw5rGAXPT2PRrTgkwrjI-mrTrB4kt4rf2eibOR1-M1TqT0z89tnm7a5GC4Z7_O99WV-BZgysUu9FF2wQyWQOXRUZpnSOdF0l7Al3TKoj9xjqFjUexkIk9FttNNZBREIP61Q1hSCtdFLWqvNGayoRzSkHTmITRXYqgT6tXbTQEinqoDlWvgHhG_xyMHCill5IE2i9hxzqdFH52kdwT23lOmORCnnR99pLJ3yC2rS7s-3P68FyAuc1TyuwuV8-6RYCOslGrBHZ2QHc-S_dCQW4pTZp-0-AzYJ6-ih3_fN4kacLXxHPOZGcDw9gyW831w8_gzKqFWHvHVtMO79mCSTusZs9mTG9EQ8Y6XVAKiGzSGcAL2mRLB6s2WAlYuB9Q9hFWg70BPddRDY9oLEAPgA6sHCMPZSPPgodzw5j6ieWZJRm3mhTLczHjLfuuPLBe0lZ7UdPc_QjVIzgPOsns4mHJkvhX_ynTBNnbnwsFU4KQzFMLIaUmuBHrXtuzTZOENt8j-8686cx7ei_nzyqFw\"}"
  },
  {
    "path": "genesis_data/genesis_txs/IpwG_74praZjsu9L91_KWYHrVTpEDwyHZrsHgum4Z8o.json",
    "content": "{\"id\":\"IpwG_74praZjsu9L91_KWYHrVTpEDwyHZrsHgum4Z8o\",\"last_tx\":\"\",\"owner\":\"sOuY6QBmgLH6lrl33fjWznhAsFZWVI7ZoTL34S15bXSGdE_80oQSXn3u6a-84fD_phRaeTgYo6cKGPDpi4Rj63NUkEGyV-Hk-quuqOMb7Ky_98Rb2h9opikqdb9puVjnoFT7MD6_YuxQNf0I3InnHTLKnBe0wQz8R3o4uMGa_DzaxE7unrbEOo7HjUvWsIAOW7eu9dUIdclcBKZSruLWYEw2d4i5xSRitQLPCaYlcCS64pxsFb9TKoIRCgas8aj7qdthoUIzZQmGtAtmCF6iAeFt9_vcqsLjD-xDm-YNyShQFWxXWwof_k9iU6ihdSM0ibIqa3cJ9Oia0aIRmZ3Gvgeg7-0zX_u87OSjGhoZJJmmlbpkurlhwV84b3DuuqeInRLv-8MHmj6-KP6SwkxZxzu7mmuIMxXZe4-zSE88yHxmgof-sfYOwla72VplVDN-BBjKcAWdAZ8S0x4noHohh-h_EMEBd1geDlGCrxFS1RrA7mA3mr7Xt-VMWwq-2ALzVnAhbixo0gtzRmYrQtJBHv8pUpyTwZwpexSCsT-4EVqakgVEHVxDVzvwE-Qqc0uLes7bvD487xTAqfH4HYKAWhoZBCVl0rWcwgEfRD2jS6dDGzmbfjHAup9hoalKqKkDzh1_U3etJznGs636zF-mMKib9gZ3atJySyZFtYMcLJ0\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"U28gZXhjaXRlZCBmb3IgeW91IC4uLiBzbyBwcm91ZCE\",\"reward\":\"0\",\"signature\":\"qegvL5duUuuW0I6U2CyL_c7IDIj2rX8t8soN5Lw29Cd8RhAJNuvADnXq-ERa5Hwoit99UvDUJmTA0SUuWRTB6WGwWllANxFkPNt_0fnAwKfk7hoETKN3D-MgUWR38C9Ij126H8mCFNRU52FN5p_Q8O8FzemnKofzHEBl6HDNQOXr0xGstkLAXEL1VfwOCI9_9F44Q0D6o3Sl2SW8x92Ss08DJYv3gXM1P96zCAqS34MR0FnTkUcEjc1S0jBEduPO0ly-gIECqQRQmgyQQzRWKrq28PsbbNga9aeG4UdV2jqUQYMOalabPuCVWtGNCpj_lFIRRGgJo5LFY7dB5Fo9fovIouZmr8qdjTFOJnRF7HS1c9ICgBrLMLCj8VbKK2pYXfgaxiUMEIcrWrtO7sEnhzvC5PaFNLsMtOVkaoXLSVU9mmMBko8LyqeBJxmU8_VjgwlF8Aclp2yCZ7PiEjuDRaOoP1b0zvzPgEG3TGlLMrCCPAblniI46p46SEMBgrqe-xHRvGUTp46D4Sn-5_5Lh646ZaoXbfZcIEicSxhC0wBtj3dwrbs1kIiE6eOw5dDyNJMQvy6pNAqqxc4eosmB4cpzSw6uhh01hPBMzxLUILHNmhHWTpoy536Bunhj9AYLyyYIs9L4vsUQAR4YcpOVPZgzZl82teKEy8Tq_VkmiF8\"}"
  },
  {
    "path": "genesis_data/genesis_txs/IvyUOghXQ31LnYE3bYEkS82gTAvpIa1rGGQKmiJuuMk.json",
    "content": "{\"id\":\"IvyUOghXQ31LnYE3bYEkS82gTAvpIa1rGGQKmiJuuMk\",\"last_tx\":\"\",\"owner\":\"vjFb1LphT1bpBRSQOyakROqwcdeEy3pNZJaumPZSJoF5FpN-kxulnJg5quRl5-_1K4q4qHCD4Nmb62xGcr22pzgWMQQnvyNdpYK37i7syWEmxcmmZgN9fIjW_zSsGAxQcdb0hDiwdz4fsCZY7vhGxpP2JKu2L4WH_lYgmb-EP6u99U74OGB2UL2wjQcESP8iksV5VqQ_k9onvvfpKH4filvyft4jHb80wKBuihAY75IOg6YauVVFgnSHQk9znmqqe4e2VGbr1m0Tm_u457wJApfpsp9vW047n6AddAVKxltfcdo90mCBjH0p1C9nexK-5uQVnjJfSAg7slxuUK7ciaC41G7ANuH4_01HCCl8MEQISRMEeQMcXe_w2OcXerVbc1BHapHfa0y1laq7BzUv03p5tgRWBVRNv3IZpwmQtAukZSOrg0Dhm8A6_AFk0jITZ1ygjMwEC069uEdLrrd4iFnUO7Z1y9kIvpfAKA2sCJYCY5XxeGTX398wfrPDeA33fvt-BZRfvgNtlcoLrjkMmSHoM0WhDc34I80gY1AuBCI2J1afH1lVw6jxlVHl7BGegoREnlMzIyHP3IvDpS84HXVMPsQTlCnBSFWP60NkG98GK_eYVUrIIKdpDUZ6xhqARTDSUR08VGtdfLijNo6kXO3d6DbGq5yoUK4-r3e7ncc\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"Z3V5cyE\",\"reward\":\"0\",\"signature\":\"JigWGANrJZ-7Po5U_t4BdV150A6uCbXi6n-3fUgYNFW3Y3p1PIQ03ijvFZiFRqknUjNnHWS8Q-i_XU-L1ZLUJEY1Zj6PxvG2qC_toipYqVn9VgIE7jyAz-GsgyPWTO8zQTUTk2GTWzAFv54c1f8RMFy67BaZJItqp_mMjjGQCrYZ-ALQ7CEDRwxF8Knyur-6jEU5HMdWlCTPAsP_zXhjIhhlReJ9GF9xtFZVauXeeUYcTDO9EBsHfPhdJBhfyRrYy2TDm-Pni9wxhERqcro94pQPQzWw2rDY5HZdtyvgjTCSQ7PLwFzZ3SAGCLQ_H-zP5dNvkYq_Cwei6t1kREMA5xT6tOxfc_sT4hahk7jQGNukSuYAu6Pj_nkQ3mw4_zVQmBYrNsy1c1n2HU7qTnMbTsFh48f3IS-TikoY9f0e12V88EmQ6Jlam9M0XobgunFX1HDJwKDq_sHdJm9F_MJQMRPyQIjL9cX5yNNQS77EpvJphOY0UI2F2Kqdl8imzLXBol0E6MS7JWALn7DEOYjrkk1Ps6EYEi_pWMgFepGEOvMXR_35w-xQv18J-d4ZN8ULJ02exbMHlg3TxNczvMivd2IuWKj1skJJgQTWEWROEfKi5h8kwJ3ZD330-g9h8Mc3bXqdSP7xhE9YfukET6U89AEkqKfBJVIDA8TfRxFVYW0\"}"
  },
  {
    "path": "genesis_data/genesis_txs/IwSIt1P5I_mM-gAeAvXiyxRVb73hqkQAMfxLIHbbZYk.json",
    "content": "{\"id\":\"IwSIt1P5I_mM-gAeAvXiyxRVb73hqkQAMfxLIHbbZYk\",\"last_tx\":\"\",\"owner\":\"zhyxp7jWVQIbTj_ei6naaXs-jufb5m2L61Xkor3g9RlTBWih0hTtTDHAfvjyQKviKh4SPWKpZ6jI7L2igW7GCShEu50vi8uTbI962tdJNBr5mjrhtkl--wcpGADvd7USIOfb1j6LOrxswv4LD85bEzezkJZ1asvy2n5QBypDUaLOspc8nmXqG9MniXSqIJ8HI0kFmlj1bVlB2X1f_A3TJIhqUJbQjICM2S9fxviIsleZuFMmQ0ZjFUH_EPtd5qFx012zP1xjkpc-8SZRPQOctFecD02Wgn_33omIzajdUa0vTYxYUSvjUrWWUM_kvwPph5KIbj9iZgHdrvH0qDAfAhoxRd1EBc2I5zJNdyJlb_sVV84PKq9LPBb9nGnlx4YTgqMZvU7EMDeSPSOcvuCDEj8QATnW8VQf65vhngZtdfRIZHiXk9e5cu3wcxNdbgm15Mij4pMdD9Sbv1g3BK5iIxFLRanzrzdTrtlkOLX2kTihuqrykFK3NtLPSsfmrIYE-WrTD7aFj4NIAVaUIPYzxIW1b6DMlfbbX12jLRoqMJ7syukRaUeT0n8OMrtlqeGX1C1rGjO5GDQ9sIFTsON4AA9J09-sovUvKhUsxQjzJF2yAlpgVxwSeJvrcYxrDqK-zE9d9y-Grptqc7YBq2MVtwQwRF3Gx0wRDQ3SwmIIEcs\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SSBhbSBmcm9tIHRoZSBwYXN0IQ\",\"reward\":\"0\",\"signature\":\"w2NqG58648mw3AJsarA7-rxw9KUsbZX7ttzHIf_cYFOURvuft1I3z4Mfq4rRvsO9iiNbPnMsRW64MH5-a8snIKD-aamIK4M8gPtAzGEDPPjuBl9xF5LkMK4q7dfBKAy69nInqVSAEO4gTYdNoflU8mmtCfz5fDlPy5b-VNuHdWTUFgf0HrWrOquREM7BHrCmwL5LYI5zSo3gXbwK7DU-KxHltnsPKtXcHgfgRJ2399yGgyBrKG5McmeeUAY_Xl5ATHi1ahtbEDigat0a-rpy-EPVzCg_lFUHO_akm7LaXIUb1Hld8nHOHFuulHIKwJ3zT93vfpJt39eNZrKBnSnYuJjRulTAb4Rl0xk72OG7LRvaMCKF9umMRrAKcpthzRP63xfuK-7SfVX7FueoaL55ODvlQvPsiMrt-T1lIDXYrHqvnApKl4gGW3UswfIp0_GcocSk-hgC8aONAOU0g0IP9MjWfHoPHPdBq5ytxL4ihMRjrWZlqCMikziH-0VtRE1f0kNsUDZBOOqSHZ7Peadk1A6PnoBr6HbxlyB2hX7diL5gEsl_64Jdj6JjA0Y8va953IyFeiNbsoKrK2o-TZXNKcvzFD9832mVNm_L9iRaD7cKvfZD6Iz10OIrlLXIZl-OKx_EvAi7x6sKFctCv-34icwiVk-KozH71gNa0Dhzm2c\"}"
  },
  {
    "path": "genesis_data/genesis_txs/Juzb8MlmGd2qomIUwgfGzIFO7c7ZcY87kJPmqpSkt18.json",
    "content": "{\"id\":\"Juzb8MlmGd2qomIUwgfGzIFO7c7ZcY87kJPmqpSkt18\",\"last_tx\":\"\",\"owner\":\"xZqJniAJUaMyq8cCQUG8PnwCjHlNRhiti9Br35s69YGpwkBIGHhLiPhJERLm4fVL9jkzFVTyWOd2hkHMTJQRC8kKlWLOFVogPfa26jc_Ys5OycycSNMP53EI9JmoiPaUGCBbH0VOrznhuDAR_HlLvoy6wUMqEW4EuDEZpW4Vm7I_Fr4Ck6dmeafScgVi9-uRK-aYpdF_lz4F9VDKTQczGTDmzddxRIZ4F2wfCcGSQefvahQAmtbyzMLqB4vr4XtNPM_BXDd4YW0pcsoDE_v-KfRuwrnhTQ30EI4BV3pPvSQnKNx7of9h9YWpAF-umAExqeWr7pmBrx4OsBgVawwBJUqtvR8MihSFx6flTsvk739wDI1ufVGA_ltK0NBtwyv0VHQiNeDx3YgTHlQZP20j8WdO3hF06NIRr-q7msmKTzeKotg6YCOd-AwGM-yMWIQ0AY4mUbuxyTaUpLxxRQo-mfWqqgieXxQvIv00mY_Lw4pRAiLD_lawdoxg2comMpp4zFKCLY7dEPN2b7grmUpBT51cHOS9lT4hxGh4dAbkm1VRORNA4KSm2LSLJFMP92j6tFZ5LE7mFFfQAqId2dWW-EAWJw8uAZR3Y49hOcU8EcoNoV_CcPYtgCfQ7Tqvrcrnv9HK-W7hRgZmiprfP5qiWQ7CXK6Lc5WnNXO0n2WqdOU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"bG9mZXJiYXJ0Mg\",\"reward\":\"0\",\"signature\":\"n1pYo3UsA9zf3FSZfuIt2BWG4dn_UseN0O4gYVezs2Z8iFj5hWI160sBS4feAknqrlpeW_E_-3d2_JfTN4o4hE8J4juTzsZ-CUhVC2L5mQtfG69gfoiax2LFcNcb2OBBfSNFM3Az4z2EnrjwKhM3L8DR_eou-Rvs0DXiDLDIF1gOnURfj4Nrb9sKgNmFprlSgsuafJIsfgQDyqIgl3rHlKnFBtbh2vUeU3hjUvkCE3iDPwgEr7-ebY8vHzQDjbuA7xI34L-WNS29L0kg8Zav7oPvhM9thcPPRF3ail-WSl5W5CaZNS1f3RgIG9Yv70jYFPFsvIQNjRcrTmhbOGdD2cQaT-tSBahpVxapp1W78fYiQxDLiau_oqIPPDHrRLQ6qC_OcdDR2ZCjKUzVbDEigxDf3ePhKNfp81DTqDKgRKzDGFhQkX-JFoJPbcOcDB45MgMQgyU7zxuP2TXeRpnCrOusuJLZqKKyOrjSPi0Ej38trZtbuIg_IMxrC9r0qJ6t0Azd-dLHN302Te4ab21IYtN3MC_9dFCxWpFYVpzInFVP8HLQ6Km527TzM9eNc3GZLFPI31usTF_oLLBzQpTMeCTrt6K-PZwBnKcT8Fblcjsu5gfr7Vk38F-Pp6lbx_4ef0IfAYPNcCp4TxFfmRgFrN7eKTG0pC3tIbvKMYrzjVs\"}"
  },
  {
    "path": "genesis_data/genesis_txs/K47jh6Jr6TmZeZ_TadmyLLy1V6ZvLNpvV5FWcICohnk.json",
    "content": "{\"id\":\"K47jh6Jr6TmZeZ_TadmyLLy1V6ZvLNpvV5FWcICohnk\",\"last_tx\":\"\",\"owner\":\"ykieywknqm3oSzlRMiQemPGhkfZNpILOcmWVVgVcDotvUHg0aSHGoH9uwG5ia4ChyilJqA9uHR5h4LWTy6lcFU5KMkXT1hC-QpVlVt-0vOY0n6A6FstIl7YiC8C6K-uZdI2IGWqOBy2WfojdjoEDrO6ps--LRrj1N2rKDr0uDBRIIRtxGwnE7bO2zVfUGXBpgn_3vUh-SnOCTzItNcFl6t48xw2h6jpO73JuW5Ct6ku1U40Pd1Oq8Tf8-mE8MYcXeQT1s3kUwDQw_e0TlslWbeMRwy0iDWytX2XLnQKOfp9jBl-XXBuQoR6rSp-cCYL8PJnb9O6nbtrATmPE9lumbwnAdlsyEr6GZbn4n5ZSvNq12-M5nhl8Ri7eMAyiI9yl155_9O5XI7b5r01T8ud3hBkHIoE1EzIrSNX242wGvobyuVT8A8DnvvtgRlakH6mK3dIIHZq66Y_JdzXvu-AByHh8mu_DuqJ9GktY-150O-LQq7yi4I0JWY_XFUCVU0M1xsBlyAA0C-9pSWuVTT7qg2j6JXMZZjIm35tdHkKG-55k5Xl_K9KW8V7FgrQs8DLT3z8kp1QFfFkns3RdeW0Ax7x5J5u1CqC1hWoVKoMNjbQtO2mOk31XF8T_b9G3F_Gj-ZrzNJsPjGInd1JhyRy47jvf2mX_ztnBxUEvZt1812s\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SGkgZ3V5cw\",\"reward\":\"0\",\"signature\":\"GcaPVr9D3YvPe1jdMSZGPrAx_VUPEOc0fAbbrmLvizLJxygkHZU8wLgJkO86GfEUHXLtZOJKwNWeqOgNe50pT-zYgoKd-F4BPdS0i14Qk4SEeE4dy7lnVIzwoFbkYR2ct0RVRVYdXVGIwB-9yV-hrI24WMOiezNffqfJzu8BcoZgsggI0xGJehpI_IzoDaer6AL0ybmNI77FFWEYxovD31Lv8hutoBBfNlsiq_k2xFBHPm0bNI5yh4ltrDW69MrecV7hL_eODMg1UQ5BAiSxjGEBY_qmi3yV5Jh9UGMeLHrTt5kspF8kllzf9zZQ_fUMAp7Qwd7GRT-Gy9IfLxE2gP1HF4bScRuyCddjYC006woNjoMGBhYe53oXACOnAU84vqF-cwOnnMzN1B2gkdXgwvF3R5uExIWj-si7a5W7nehSEnMDBSaXx0wvPvF5caCesR2Y8ts9913MjIr-znSeADKkxTVytF5bacalEwX7BwNLSLX40PK15MSISdmOCIKIpK59gf4qxJR1tG9YAaW07KWkeI6k4NjmsP_usmhE3IyvrboLmLg5iRX52UXrpxaIyRDR0O92TSGsbTLhfAdt_FC0lat_q7t41Lxs8IqmiA04W6jNpIIb9GemYBX-nXLdYNxKGP1T3V5I_8MCRXxmtMdv6xmM7J9euKX6XPP8Mag\"}"
  },
  {
    "path": "genesis_data/genesis_txs/KOm2FJzmNXa_yjYC-58DkysCdk7FRFMcRmBx3DF6S9A.json",
    "content": "{\"id\":\"KOm2FJzmNXa_yjYC-58DkysCdk7FRFMcRmBx3DF6S9A\",\"last_tx\":\"\",\"owner\":\"sBnBs07G3NcSjtn8Pya3KoyjxuhE_4hgOiiARRN7Ez9JK4lmeeebQ_gsbhAtOCIn9IBhSEjEGQ7Bi0ebYfX7qYOx_RZlNi9bpBdtDe58mCZ4LGYF4nVVGff0DvRo94gJorFgKrmXkJZRWTB2ZcmFkkUqq6wmJNEHdDb3b60JSIEOqlVHR8bgHKAIey9QI4YDFR5ZdwbCDlJZCggNN2bg-0NhEIQT3_GS5z4Q2J99eKT8ajjNXvg4Z-Rm9U8w5vsJrJ9aPYJQLXPbGO00YYa00u09aZZpaTMeuA-c5nv5z8dZMABkcusl9xks13v5YGPGUEyXmnDvdEMkcJyjLueqhfO2wtF1hgEHdJoOqp5ilbtu45nazYyWOyE1FSlmZi0MeyLgQ9Vj_-E9aKeYy5FNG3RqaJRvLoDWLb8ajar21ayQGtuF7dVy66cVXfgRzO5J4tt7DHbHfiz_FNNPb9SGAuIDuhZlQiHxN6tJuJ_uNCprEIP_OekBEZFfBIpnhyOuKxWdalhsnKsHXC3kv42acxPWJ971yyZ7Fb2SDTZkEUnOkkPPKihaqV7PngMqWyuUgPJ9TdOQKFwteEVO4HYycc67ljHjcVD0ATwACZAuZVpzt0fVnf6U_o3d1xbKPom1346i_L0eMLQ98xdJ5TXHZyXP7ncAUyag8oizkr_ISi0\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"MQ\",\"reward\":\"0\",\"signature\":\"hndrC6YZcjS0eGvKEtrCrggnIZRbA-fbNon1wQXpYAoNFbWz700AT0LU8MUimZEXa5u0Hs01QRkfWhknhFvbFMhtsOVLbEzrZ3URynS7XXLHbNRMlZ7O_Q5lxauoa4W35MB9hin4QyPKqxpxgOAIPVCekm4fNyJqS_AiVy3n4qzPLTasPdjAkF0S7Jw2NLxe9XVKEZxZF_65QqVfl6SVQQOT2DqLEX6l-lh1GMmR3K2_z9PtlnAFXIgLunv1UPX8xs4y9kPLVi59jQSMJklgO4hnFKOBKBPKuq7_X69cJJiDyFkw27ZmWG3VX5-I9CvL0j4NWNnxIM8ospPx5A5S_m2PXSbfVogBlkqnBTMUfbSo769iKkcVRb6gSnGRPF207QCNEvVZZHJTsSSc3aZZQWdMw_PyQwEd3Lu7MuKX2hWPLbo20F0tMXGGkJhlPmVbkFxaprgiUC97aisLscbW5zNSr1cbjR4RzzOWnIfek1mIp7YoaNHMusnaUnGt4M0iRQnFDYeEy_syCl_Fl_eaIqDSWpdZsL9aM6MA682P-lLDnDZTY4D849k6C21On6Jj9hMwqZz9QPxM-LgUVZmgC2yDD41tevua80l2CGqodT-J-Q-rzc87c5dsfT1jqsRxss13nSSTF3ZBcn3R-IkLiEM5Zp5AqM0msCWTSMg4SRE\"}"
  },
  {
    "path": "genesis_data/genesis_txs/KPNGfBMOznCXZwOVvCXHRR6sVJx1akVkmXTV98lCMKY.json",
    "content": "{\"id\":\"KPNGfBMOznCXZwOVvCXHRR6sVJx1akVkmXTV98lCMKY\",\"last_tx\":\"\",\"owner\":\"5o-sItqgu3e_MsX69oGY0E5x7FDum0-T_p3QKPj16alXiLzLfR5j2BASHKvFyxqmLQXXnGyzmaUau6OB2RRi7vw13LbLNwafvyPyHh8toQDnIb1GS7xtEq4CD9pcPm_WrmIapit7sQEhthtX_-PdMKr_FnhX7L8-p46VibGZI_VBYcgffLFreCvkAQsN4qbPbFayla_VkiCnUdqNTqGwXIP8RqGkWFcdB4g7HtWRI7Ni3Zipy3UG6liEEDO8m1FW7z5R4pLRgdzl_dG-KkSotp8aX4porlEMAA8sFQXVL0Y0SMVL5yfm_K7wQDvqsVxeoyIYhupeRIFRF6oFcYAxjhhFlAoo9E-XqLL5uLkuwn1_SwDGPupxCzAEzi-5iEOArRBiPFJRM6e1vcCHvcoHa-TTqDGosWihjBU-LX6HlMgS1wVhJhbv_bq6fsl7PIjrUGh3DWLdk5jzZS50e9vhjZlD77mregUZw1iN2xyAcA2G_UqXzyuO5VXw95k7ot0DiRkDtVz22nKIfUId3ODyS3b0WI0P9j03uPy_MOZbpePMLIRHcyZQSrgD2O86X5sUtbEwhe3Qw0ByL4bb1Ro_Bph6fM5IBJx7HXgVJGFaGAL3VSQpoDDCF_cUVeThDEu-5ON9JeWqzqIzIdLU1_R-1f2KAgp3Dr6MBG3zi3Fus_s\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"QW4gdW5jZW5zb3JlZCB3ZWIgaXMgaW50ZWdyYWwgdG8gb3VyIHBvbGl0aWNhbCBzdXJ2aXZhbC4\",\"reward\":\"0\",\"signature\":\"rH81UBykJGwi5Py3K8mAl8y_IahO7YA8JxY5H5Cmf1y0QhUnVO1JEn5Ct3Sb0GoDANj6KMJNRMFOsEx8IkxDXOnIuIaijIj3g_giuIvi2nvtUnf7EOoXQk6m8dSZHs_l_gGEABcP_tcevd3lAzxSqEFUDms2qA_7-SkG6h7JkAcztgHAI5lR_Gb_FQdRvJcyzKelD_Rou3LwUzqO7Iei2rDlDw6qZL-_h3qf4vWn0AVj2QzY8SILfMisahNsPyUScilG7e5c6pPZ-F8rYN5trUJqIBCCb7r4FC0cEoI0Q2okJQoDovsRMWwmn4krbqzrHwmCEzrYZZ4kDT3XXbQmUMcKKhzUQR1AneeIQV6AWHY53H1B6pmj92ncQeukcx_xFid6mHkwtdsmLlDi41lf8PqukhaiPAQwq5-elvhDHIKrewZEDpd_LwqicHBVnPvqXyJUWVuaKaSmJRddF6zT_VhQ0nO_H7RRK1w8xpDGZU6gGnEeZvTMrRrHavFxCSle4L_ag52uMNzE_TQdJOZQ9S6yDUTePu2xZOa07tIRhVRZz4vVrduNyENvZoK0ye3-cIoVhG0saHtarIQMkKHUz9qHU95SazA0cqwg-p-_82CLi3ypXieitdVt__22qpbDv2uKs8q43EUIxW7NZFKGn2zQX0XLyJqH618LE3jxAlw\"}"
  },
  {
    "path": "genesis_data/genesis_txs/K_ae8Bfvql0dGhIfRH-R7W-zWoeB95kYGJNi3HjFyrs.json",
    "content": "{\"id\":\"K_ae8Bfvql0dGhIfRH-R7W-zWoeB95kYGJNi3HjFyrs\",\"last_tx\":\"\",\"owner\":\"0EFYUx-H44dlQ6-7nvldAjluvJ5Lw2nM3RcAUwHsky9Vv_TL_h5LUDX8-GvovCxM38XV1cbW1JgQlrvnixJXe79vVkpCbaQnLhryWZPFVhEsWGtVHE5sMVqBEEUDLpk80iGgS4Fyp8ZDkf4GPfDPthr-sGjeCgKgHEZzsE0SLhyAP6Fie276_vQE26OU7pImotrAFnzDCu8LOq-kgpCc0GOpIYvz_-Vl3Ssl3s4Hw7QvEhE-2B5Tn-Og1gVCEM0ooQscFEtwAN4pYoRtCeAMVHVGQhuR-Cr9X0y1MTE0wZL5OM9aiHdcLl300Tkn8kVZk_io3SvIXDTYcf4p1aeiSaxVcAGUw22ZmPkyGbzx681r6jceELre64PSdZryArIaSYs9VuVhc-niKuerAriCey5BPLkEvrjonUd9Ii9SvzJx-0m7zwNRaN4YVhyLvtlAvLUaqI5-dKEEewcZLd6gmKqJNhXbc5922mng4QtNA3abhbHCevTRn8qb4bT72stMQ7pik77EIXhCYXWtQN8UszsvfzDkzrrMjoIYsJyZEUE4vdRqwBlVKly2TUVdN-ALQpBXAKy1cS2bq7Y--e7sfpuLt-63SXb3tgYK3QD6-2LdqAtF-uAklLu1U5UgPJxS1pw51rwXesDGQcpyC7FsfLfnGXgw7QEte5zFklhFsvs\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"I2lncm9raW52ZXN0b3I\",\"reward\":\"0\",\"signature\":\"M7M33UfKKb4d9hUwpN6-tDFJ8JAf9OeIvavZ6KqoPX1e1QClCnM8d8hq3-JHhcMiYy-M-gOH_mFAfPJw7g5_nqBN5DhuDVbPUUXtAJFQyhuG062pvyRpQQZUA8F_IjSWhoJmS2GSaBzxhIcqBHmMavnohagXvFu_1ppLrATV0wSLxlyC2TdWvXLcAAQANOLVMqyIo51Tvf2fFUAiqgJ1xToTTirLL41x1F2fJo2kPfqsT2V4Jdej02FGDYTx-xTAODyYCNxZk090kc9kgWOoLrTF19XDnYj5yN7r5HAU3KanqX4j8A4g24sA2LtGYoUquA54Ob96DE0Gm_rxzNsxNQsmThzekhuhIA3q5lMYBW3B-gwxKMkt8xJEiVw2tGdl02buMHhnRUoG4yQfaBaD35o8LwzO-XQtQmVr-gjKYx81UBDHMiQ8ewhyCsy2GVZ2AUxK4KyZCYyAhPnuDceNhWCucE1e2JoEdvsfClXzsWv6KfTdkPwWoPJXIC0tXqPbudtGJrY-07X3BHriOwLjVl7ZHT3TmbSPSeZl0PB2vsWIj1g7nH8EQ7YAqJiopEiMK5PAYpbeHZrCfjQM1BPLyWm0vScmp_Y-hYPLm-Zt1-7cl-7RT71wXYYDcPfrbwuZBYq4U7hmWTkYzkie49IvX1hCGBJ3aRYKJVh5rxS_ACk\"}"
  },
  {
    "path": "genesis_data/genesis_txs/Kgr-XWwHYos5Y95ZJ9mAUwjYjj_rP0I-GnWctQDNlp8.json",
    "content": "{\"id\":\"Kgr-XWwHYos5Y95ZJ9mAUwjYjj_rP0I-GnWctQDNlp8\",\"last_tx\":\"\",\"owner\":\"8PhJjM5MU6_jQSRoVbyXjLN-7Zjsk7vMcvpyP9VvpAPqfceMC67iJmC98syXB1qD3OXco-yzhPMXvkVHWchxIrYSrdEzBj8kX1SXIzI_BKdSzzkeh9RNwm7DFnVYJbLB8xPPXVd4NGDTlUxKf6KAADoADE3nzVOGMDDTVvf8D1C20BEGDDznWvMGvOhZbk_fdwamvurviKHw2ywuxVjnBszSfQAJgkqbNFasxjpfHuvQpUKz9ctZhDJbuGI4Jmkjhb0OuWRLYrLJK12_SyN6OysHCeSH3SZ_dIN-UJvZfTYt8WaEOApSBNclZaNLNhISng3ZADMJGuWS08OXJAkrbo9G5EFuIEqRWwltPEJ2UU6N20AlckQfyqxehhuN-F8tRCvkXIyyv1mrvF_VvI9xGWFdyCBtxQMeUZjcJu8t6BT4X3z6R-dqc8R4-4fWlb-NWy5n6YNWF4c0rBMo_ol22FacvrCiI8pzNvU4ONwGGxhUjzv7Ra2NPHI_YcikDapp7bYCdpRDoBSgHaXZ9_WOat3S0u1Iy5KoMMKGZThmZpM-p2nk5P45MZzvS5eif7CKlKUIoQWa_BAMJV7NuRGupBHFEVaea4ajIzJgKsUp0tw8n7bHC8frvYj5viEvCf6yzQopyKV4GtDLiiJZfGrwRNwbhZWUYETY6kq8vsQf2aU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"TXVtIC8gUGhpbGlwcGEgeHh4\",\"reward\":\"0\",\"signature\":\"Pt3KTXR-Ook3hZ00ZRy_-oONek7A1QnhOzWUDd4LdaHW5Yv7XZfZEbcBMdRDjQSk28qomIiDYbc1gN4ber_ToUGagR2aRagdAxXV7cGDJMt9PhZTNaKBHXTeksZgqnOdViB6caJwi0-6faiKsPzR0C_umspt3nKj8O3lbQoYrZ2fNFIN2KosJaySVourjKhvVjG3F-A6NDPjqGJcMT15qhTa7iJNFKj08MSbs22rqFwemp8gPsyYXdEB05aZcSfr5smLvgVehCILW9-kklgcB2tqPKvkyvlZ7tl1bi10W_z6c32e0S49WaXck4-C1OMIakdCegZjub5hHPenlJ8dF5dX1KCfxnhsEeqTbaYpbGST8sDS6CdPibroT2rGdjE4CXNU0eKZCeiSF493N9Pxp23_8UVBwdKFC62Yj6ZiaWTrY392L-49BkqTTdlDMqUxCTomYBLVviYtd1BlEHyY0j9gg9gjbwusFXTi806RSv4w0Op4G1Fz9kGoalY5AFrpM-_hfPtKGHTcBGlKaCtmyNUF-K6aUjcp4QTSnU_ixfdywlQS_Hkfdk7nc54_cq1Un_elsrFliuN8sDy5baM2BWR3E6CoJq_NQw6xSvIphehCrNQ3PxYDHRKNF-Ny_O8_VAQG-fNWaeJG5kNN2ELcAJF0mtBUu6AqASZihnleKvc\"}"
  },
  {
    "path": "genesis_data/genesis_txs/KhQeu3CG_X1zoHbyy99GUlC9gVFFexf6vVPOlLgCj9I.json",
    "content": "{\"id\":\"KhQeu3CG_X1zoHbyy99GUlC9gVFFexf6vVPOlLgCj9I\",\"last_tx\":\"\",\"owner\":\"v2zCS7BgfXnwLayQt11YEEmT74laCB0pGQmn1ltHvR8jQ0O7kKq3tTrEJ2HCgGMY8LdJyz1u3RJT9BULcJlEoSqxeJV7M8xWdM0mZG3O3s2OVEQgCI-BD-7ypwea2Z2p187RkKk6MKyVAdC-HM_8ybapQgjNz98lnfggduYryaQoRWRPmL306dGmZnJ7fuNwJup9yjiPzUyMbsz6d0A_PTqoaC8WnCIHcv6GTRNbzmmFogpRsysa-DaMmUdOGJs90-m2be1Uwe1L4qL5Vj4ePbiFifpPE34E3Xs9NaRRP0vTKx6B6zxUCfoClu-bEI3We8-gLJ0sykvm3TwrXy_vkc5zRMA04k6FBMa45xd3gosnYwA0VErHbaDP_vuQth7wflqyLrFuAx2iu-cYGKx5JtxbZeTBgRvaEon0SbCf3XOzaPEY2Gh3XgM_C9NCqZuJxVIJLl9BiQyCkj_JF4TlC7WVTk9R8cJMlqE63tKoY8uBKy8JRaKWORMwOf3EVxAE_xmkJowdGmkIiWoFZlk66DLYtBxp7YUHfWH9DTA7DkCMiAGV1Fr3W-t7ErIhXLYKH9XDczyrZz-uBc_hty3lr-erX-BbpdSfp5n_DJc9tCDUGhsFmRjWJaMeVi24AGaWA2vssx162vxeqzF-mWIAlCGwVmzkQu7sTgRk1mY6ZgM\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"TmloaWwgc2luZSBEZW8h\",\"reward\":\"0\",\"signature\":\"KFvUQCUGJZ3lqHfH2hQDivkBicc3mthAHXH2-BR8xmhQcl-743WHhIT_H1Cl9yoBeF92yTkc0zNABOPtDUI4h556vm8ZibnVbVSm-YMYW3hm7iWXC7gNqFBQjrFdsYu6AK3wGNCBiKcnmMYIB7HLS35lJbVTyBcoaWwSEtlrxcsP3df0kdPkoW8wAkoPMquJ9DEKPaWrbCsLiVb-Ea58bY45RSsBwN74zD-iyQblsciBaZVyMmR3hDCkVqxi0d7UdboIdDftVKYnTMY4OvxRmXboQyDs16kEPIlVzQ-bA3tD_IiMTt75jUM1fA4LZ3oHnrbSj9CO2aC84ocxNlGBKkQladKfCMWboZljZi38oDxYB3NnVuS0neYSW71qYBwgNrFBd3mKO6KeCkyyt3UKvpQIUE4FSSXlhLNDRZ6FlcRwIsSfFN_UZSUzy3tC7B5wUyrSgmXGBOCORaxE-yZJlVd3DCwzE81jn6hOwwEtnZyMm2W1GVn431rhVtGwo1d6g0Pa-38dzbQ2-GpD1qFaHNpKiAdXUjmmoDgKUShQ4z4SDCx3WNxAjpuok61Fq6rp2ywb6IFZo9aDOt-M8WfRQDWSaDDnR5i8tH3m6q5sEHqJE2TFPHQ8Cwu-PNmJj-OjJuZvKNgWBEOMMxZ83vVGxpN7XKaP2XolqRKETWeD8Vw\"}"
  },
  {
    "path": "genesis_data/genesis_txs/Kl1zrMIDIC9yW8yLMnSKQYDoV0PY41ymzJQw91qaZvY.json",
    "content": "{\"id\":\"Kl1zrMIDIC9yW8yLMnSKQYDoV0PY41ymzJQw91qaZvY\",\"last_tx\":\"\",\"owner\":\"q8Tcnjk08iG9HNQg4oeOdrgpINCoiSAXnIa42j7W0q_p0IDG72RGL5mqkRqzvVf3HZ1y3NhdSDOx18LeCpzZ6fleBgNq_D7L3_WIRddkM--2jE5qwm5JW9zZjaJiAy4Kf_hjylW8VZ5FM48xShduCW4Iqi-VP-eAw3VfTsRBPE7HA2eY_fBbejZNsYh5OqqQxYDmsVNJoUJNs0xAaCSubzJgXksSZwS2OH3ju59DFpK5IUZzpvraXcL32TIxFxad0Ovx8xVcGxuli8P7JQ4Q6EFPhjVWqaxkknPqjxuzmItwj1BTuaXCHhsfR1bDzHqAA8ZpwlEIyFq7aPi58Nn78GwnwHsYJmbZvqbC2q12SA0IXNgO7wFgZ-2tFsJvWRANeQQ5DSfP02IEy0nez47e7YdqLJtFPd7y_oz7Vnhnt6y9-KscGnzbLpTRqvdNtgvHZDiMnaLpEJix99NQKjE_6xrqtRDXDPEz_mBeF00YK90CZmGEEuwlT_vfA9hGpRyYauhaHqgViJU6nvc-2GLB37fF53JB84c4qg-BCA-d7oru3r2S8MiwREll6fOD9LDD38Su9X-gGr9wTg5suAoycuzR2iW2bnNr5QLQzv1rCNRIwP5hc_T5g5Jwwa5TWPV0OKKcdOx43V5GuqleLAZni-il_p-Y6TuW2Oe02KlxeAU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"dHJlbmNoYW50ZW50\",\"reward\":\"0\",\"signature\":\"iuDOFUv3VQtiQjTymOWvXRcPGmU_FNUi1K15kioen902yKbaJmKgwX5X3eybIUGAyFwdsXmN7CJE5jh0zmxEvguhJYWIah147amdyA6ta9x2Welyf9kCtewYD7RzisBZNo2WXi2xuc9f0XsUXKXbqmTw6DzziBCXzs9ZWj3NeidSBv88Tqd3xLz7rLJZv4sklSUJAzcBcomPakxJc9HDTve34YThI3j0bO_-JAWqkitLRw8nPZXdiMzlQmN4JvrYrQo7dcp7X5qTvuLWvqRBZfwT-zpYXQaFQubr5zQ1OLi1hPv5Hga0H7RQFQ--aH-ikL0sXKxNBJND2xHiZft5tSrSxoMBshH-C2V5NJxYxGi1Zna94ryXznhmfOxjZ7CEzc5FBpkHXwLFo89zM-YDBtJdMwCPyXo3qoqNrmfeVIClbu8KiODI6wWXJCUdwre25ZNHbkiZaB1rK_6UyqrJp7rsAuE3BNrBfLtr9bYZLK5UmmpLGw-B0H5CR2qKoC186SBQV4JjoebMOLm3W6xweUZf6B1jRfqPu9nuyw1YxUD8jw6AEwlnowiObpWHaayxvOzm8q4Fgldg81R-5zlfYnrwzWp4OutOQqTdVIfJQ1LK7d00o4ptQRPkcB0IO97XdFtiH9Mw9oUKvo7dTDEaQO6j4wDsr5P9VoVY5RPJHO8\"}"
  },
  {
    "path": "genesis_data/genesis_txs/L8tkBBP7fyYfK4txqP-fGk_ODOU4UfIgFV79O-qd5vY.json",
    "content": "{\"id\":\"L8tkBBP7fyYfK4txqP-fGk_ODOU4UfIgFV79O-qd5vY\",\"last_tx\":\"\",\"owner\":\"smJNtkDevoa5KLLtf22cCzuICFDaP9K6TgnUieK-OlYRuY_1o15VRg330XfHkZ98L04sJgaUIbOQaDSuDjFhaYtfss-pdxK8Y8kmdce0qLmZOU7oOZVnUJs6ArAL88MIxclzo-A9DNY5zxunzXhOiAdJ-2tjFel1AtHGHl8_CzNuHl_ooH1A1Qovx0-8o6FMh5fwh3UAuVNu63odQEPQHfLNqX8DCGCjlKfW5f-4pCdHFeRsqpZodW6EpknOCvfrBqpZiKVQ_pZduSFaqaropA18-bJRA9Fx1zQppJwCWzW6h5zyoXgTIKbhzPlXDdBHQH-JD5kRzjN_I0XA3CRjZR656L065bGkfX6C3GLAX3pUd7HH2jLhH8R4m-rrZg8WGPjv23CvGAq0VMZbckKib4TPnR5v1PPA5Qij9P0dqLxbDoqVdAl3Pn747RP2JyQERW-f6WItVvgzRTTJ03SDMFj1miNg2_ppC9j-CfxiQwhvycvovMFFNlYE39JDUv1x5SXm5STCghDNO0SqfoVYE-hSKB71rvJ1WErg9xQfiTAtYrepf4BnpKRlxh6uajpI26a4K7UykYdAL72M_HWFK2cxbLWzfYr3Vz72FI52kyB0rt-U80NYALnM2HW8F_tKuGrxtN3sXIeY_FKU3wOuoRJ-PxDKyJqqiuRxGvzVj2E\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"amF0aGluIG1lbm9uIGMvbyB1cm9vYiB3YXMgaGVyZQ\",\"reward\":\"0\",\"signature\":\"na5plKk6OfotVEFh4Rpx9QTDthRKgYhnXkJAP0EstsLYiCv0DEXshpY0G8nHiuUokqAOoJYC29aIU_3umskV83BJP2wkQ8c4nHMCqnxdtXZ5o0_mgPKZhdlBeMQMgWypqShamsUASscwAKBXbhIJ6LVJg_krnVKtv_17lGOg1yjveYQdaAyJ9Hd_Fd9g5f7sjK3f9ThdrSfA4pI8cNd47twMS5njcQA1PaMaMxTfZmmqlHm9a-HU3tt40ibq8uA5hwJitQxWoFOAEh1RQtJbS0ob_vcuYZF_O5jbEfmIlTe6werVwhgqbbD6jaBclVvEjOEzTbESp4Tc-GbvOGmIrpuX19-MjpVyAQ3nFHbx6N1MMOAGjCbL-oGhLNl-GAJf52bFQfTSW7SkTC1vPZbbnIbbnm2T8vq5cRILTHBJs5p_F2gmBbftDtCrEOsruKgEBPuJaS7cCyzc_Zxydwt1RfHMxHXHiJk3bjDl6H6yu9F3k9lghZOJJDl3Era34m8fDHrWjUm9xXZnu01KByEHdwuqAmoQcNgQf4yNHhuzzb0jHGb4vUEnM8zXhizOuSJifgszxyqGN1pmT-hhXgPwcR7WfF0zuTwjYYkiC2B3AzyTwuNiMXwoPHEJhZHOKe5TMbuFPfG8TClWIoj6-5Xql3Vn9HCt6ZNB39dNsy_P-gE\"}"
  },
  {
    "path": "genesis_data/genesis_txs/L9J9SkTWI_Fx5KhujeWGokIchHTSFlSIC0blr0JIz80.json",
    "content": "{\"id\":\"L9J9SkTWI_Fx5KhujeWGokIchHTSFlSIC0blr0JIz80\",\"last_tx\":\"\",\"owner\":\"srbfy86PAVew0vVzfarcd1JUupj5a6UKVkhF8hQGr0i0x5uOGYjUeW26rTrkVAhq8VbcSt75A0XXNjaBqGx_T959vcw41qEl9YP-pwMA0lO3yrO7AbFrINBhNWsIg0l1AWOGGbNLEUlB8rLqpNO8fqdJ90u5dnZzaS8ZPaE1Y51VCMAV2hOXTsyw_UCoCIuSn8Qzb54E14hgWkgsLmaYoVJEV7yOy7CSDyjavsKJ9Q5veZyiHEKZ2lxmuO2Af7k9n1sqou0FZLFhaBZr2VhjDYxm6qh-bNj-OCtTLA5MQMyuFRHC5AzRmIveR9NVBxH1tXaLC83mwM7Y7V9J8GZ0H2CQAcAwZq21dWn9HHal9dSQUxZ0G49Gfv0STBjdu1sO53_iZDHwdm0N2rHFqKY6UDgXjcAyRqsI8utC9KVBPTmF_GMdAQ9xjy45v4JM4-E8lxkIXk83SK6VIA4254vd_L46dtx_8KC7IwGySnw3mrD72bHhLsOw7TdJNnFhzcOpFJOhDtKAYSOPY4D4droMTMpDchWJvKH29XxKgaF3TkUJpiZYcSJstL0w2AUQzyp_y5RTk_bQjMlJIi4JOPvopkcIQutciNGX5teLbpFWoE9WZZWGOmMGK7F2PAwH_ES-H5_pWZCSRQ5xfk6h20S0fqQETaMbk-fyQFgS04Eci9k\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"U3ByZWFkIGxvdmUu\",\"reward\":\"0\",\"signature\":\"Aw24T_Q61S5BMc3kABh6XDtZ5LKEhTXk6ADUxo4TfTrWXQKbKYG-GtdfxDevYp-0N9GXahVheMDHV9mlLTQAjm3lglJ_KazTzk3I3HpCl-hpyGWBipKYEX2Rlj5xJ1Z6vWdXTW_eYxtVBkdTYc9UUgRhIiqtJpek_As6WNZLvP6VqoD1Ee4cx3UmyxxULj-NjWu22x14wlnCn24GZHKt9JWesNaijby2lxRJfjInJoh2wYPHkW77zcyYM_L3Le3bQYYgvQhyGM8rYpfhvbdRqmY9VW_Wufx8aKQYZpSNyrNmni8jIXu6Scb68O5fClEebAhDBKXKQv0cXefvIxTmQ6H0R0X7_LaBj0-d_qdD7D73QbnLr5KCjMB2nrGiW16DOuj1-QQQqkL4gqD1eCUiOUGb25PmDS8SsJu1ccY0JgswrgULOLidVdtUWd0wc2mKDUQfd_7Ab1VF_69l9_Cq5R3kvWBmBb07QZTExiX0p_Yhw5p5g4LxS_6ZHbG6TYrw_-ALk5JHCc7BRb6I3oaVyzLfTjLcYPMmsT0Dk4mDODlze3pL9TC-_3-23gf02sSewy82LHrf2WvKgpk9nhEDrQrFZgx1B0--Kh0xLJSIaG7IP29UwzK3K28hU381EbwHworcAdyvj1BZxHg-6-ULezDtjMwrXYGCF8VKw5BJM9Q\"}"
  },
  {
    "path": "genesis_data/genesis_txs/LBTipZADoYfO-9UecE07Z83ijiLl0f2wAGXyRFQqKCY.json",
    "content": "{\"id\":\"LBTipZADoYfO-9UecE07Z83ijiLl0f2wAGXyRFQqKCY\",\"last_tx\":\"\",\"owner\":\"v2yWmW0mWwTHLn1TjvAjPcMUodRm6c_xeAWp0GRS6ryHUr6RX3gfgFc8I40OyuuqttC1wVfSpabCtDNOXcOWN9SoDCrBO5_H8sCwFL3x6NsYGE17whjVJ1DosEIloElIkru3nkJy6_6HJPlVXW-KwGjJumG20DrMlPbk-5GHLxi7vX36cn7Bm_o1ctnD-s2d2smaPRxRXbfWRWNNOYy1Ln3CPoXh5GotRpuAdey1_u7yKNDZJUDfkBCTHlIv97iKRksgIIXySeq455pWsq96OzaOUJY7cGUWr6YsnvoWqEQadsRVCImih-RXiWV9MdcT3ImiQ30pE8hT5AhJ-h0a2YEwGOF9olNx-O-Zyp_4grS97Oc_PoGw2P8xbQhYMS3Q4568GhAl13_bXBrfl8ehYmF7RJ0awjka4x6t11JWUVey6Bj6d4MniPZO6sSiee1hSWCbFYCVwHlElN21TWlHf4HyHe7hHaEOETx63d_O_eFClBCH7ZtnZWS9ofZg72PXdxqsQLQDA1zsNXm9mYo2TaiJmuR-vHoWv_nQZD0KCxAzuiY2knAwcG8R-GYRoCCPsX24cktGHNYfD6pnDnMr8Zvc2lui-8OBKd0x1v6dvDc1BQxnJJwa7IBhPKogaLzCIL1CNCjyGjLwJfhlZW-Sa646NdJQxvCp4CuRKq6G2ec\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"I01BR0EhIA\",\"reward\":\"0\",\"signature\":\"qoJHahT7t3KtmnBsI2M4fr2OognM4q6aJt05aalGonVGKjhHRSqox5sSLM2YwBJPiWt7dKCtuL4z3fwHIppUWETpvaZv7L1XZS_fBuDgzSygmYIv9ZVUA2wgWN3OXSpsUvp-7W5HRyfnx-icF0tJaz6NAB1M5imTfeIjhrXywjl1z0CVp-_AS0p-qAgEldSb_nhyt1dCTc2hyTyLvihYIHUE1gLr8WoKFknf8DLQ2rIGqpm14HQz14hvltPaNHADs6eDtLHggVfm2QDcSkSGmCXaHXeSaNSUioG015iaNvOxh8eLYQOt7DgphBLe9K75e_6IB1hQJ-AIewXCPwtmOTCUewYpa2w4Cpt21XNHOsiaXwW-0QtC_6NLGf1G-lTOwrX0AFbfkJrZvNlt_qSlRtkYsQMA7w60UByT-i-bVXDVNJX-twgxrEK_z_LFAkXT9v7dYyVgDZ5VS-Uah8as6sIXkOXMIFBNpbb4270flxbF1YTHWzxSqtmDY9YBWB52f6PVffD3ZknpyllVIrY1hpApkgjKlCxPbPkY3kBW33WckgGk2F6HgAR4EI-GNO20wduWFSOgjSs00PqhTKVIzqh48VpKdt53mviZOj7JSQ_UF-8sCnnyVabeHdNG8DJwY7HFog2rkM8gbJ-mpvfGNXSHzXmK3H_gaKVCWhr9SA4\"}"
  },
  {
    "path": "genesis_data/genesis_txs/LC-_5GDhs09OvN7r8GPmjMa6A9xSeVtsAmDgYCgspvc.json",
    "content": "{\"id\":\"LC-_5GDhs09OvN7r8GPmjMa6A9xSeVtsAmDgYCgspvc\",\"last_tx\":\"\",\"owner\":\"05FIAagGAWAtVnQSs61qNVpTn_ytsElXfwKjt62PODTD-A8S7Jt47o2gys30iee48L8iA_wdwsHPFVup87R9pLXdw6xLoC2oqRcEKG-CHOFXxzFDthuZMz-0qZ11je2z-xiD5tn1-a4EJWSEe9eY3v5ShrDg9ikMLVCHKQHFuhJiv-HhexUAK2Oeb0ROd1TRmXCOb6NLWqKSXAuOJVCtw2l5Rf69P0D3YWbL2KCF572ldF160i2i-6kwYjz0uSrB4FzXGmiExhn7eCNGmn5wdxCYhwgaPZ3JIhsn1ysc5wwAU_ZrxpNKZrRESZfa1lJa3krgZ6ZFXc6DPi7NneBRZGL6hNwX4JDXWOwUBfHDnINRBTplO4AkNKFPhypPd4J3mZiImG455QA_Yi1jrRjmkjPoN_WRJLMgWkLX18_V_dZWKaQFU-gheW4bM4mMcuHqN6QxN-jp2bmpku88oJBuFpaEoXFJfRZEdG5NEVm9aS4wWMMI8fDvGS4l-qXtgWRjGovRX3vJvVnfFu2WgjQ08A5oYW3mJiX_CHqJHCr2P5JjPpuPVCVVAvRNNKmRCYVKRp0Xn7k0DlnjZJE22cUSmDHDB8fNxzWwnIZtNOI_g6vXUZ9Efo97lm9vM2bFZpUDQEgNUJFvQ8BpDDzxFy6K2ldFCckTGwD5tkBeNh7iaGE\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"dGVzdA\",\"reward\":\"0\",\"signature\":\"NjjFE-sJfpQxTDx5nS9YGBYlkCybe7Uap7eJipB_tPAPpWUwXoICEDLWCLuGUYXb_L00qGmE_hur9wHo6xLh0u-7gXLDWznjXihZQFFxN8njIWpzcZPNt27oIqUsA7miLf00wMA57-iL8HOi5d4Pn2W10hyTcvInxx45_RLfTTNOiEWcgqFsIzgrDYpmsRVxehi1s-f5i-1UTE-ksvY7kvlioVIJviuJDCnIIqwCzQijVyoKLOstjUE-L2pKDuNAhjLChra4kZqD87-60-OCumgDsZnXOWhZtGnsgZbgJfPUEeMGU7jvaQ2TYPZSaoP9tw1arF4dUZRkWjlwqe5By1AaZlaP9Og4QBGA9cTbvs8dYCkMLjFLSC6zz90kWtN7D_KfT1qCQCpdt0uc-WAwbP8X1cp6oOTB8ThOCslMP-ohdF84kVYtURxDVbY67sHrmlfAD2evR11NdlbsHtBRMx4nGPum8Q3f5utcY5x6INsbCw2YrUR1c-tVBKN9BAskk-lcHZdUD2mHW3Tbylhb2LwEVz8ZiHXEJCd7eamo_doSBGM9yMvfFnYh9W88NSYZMqxyV_YoaDKa6Yi7Med-MVedcKLhO08uAT8pghCYTRJPYiwlRPCcGcMp2gkLGTr_30WpQkACA-Mc9TSZH6frjdlvlabAlTGQBwDmjy-3830\"}"
  },
  {
    "path": "genesis_data/genesis_txs/LFQ5iV6E5wyBbJmJoFJdH39ZxfW-y7mZFKou2H-ONvg.json",
    "content": "{\"id\":\"LFQ5iV6E5wyBbJmJoFJdH39ZxfW-y7mZFKou2H-ONvg\",\"last_tx\":\"\",\"owner\":\"rWtgN1mZLPU8w3d_UpUwEHjeoCoobohVO3MldwVxRAUV1ZAX2_q7zpewclqskwsKR7puJS7-8gy4vBpXuxGGgzYOUEU1bmbL-yAkDascjeMWKoKUO3sg_yddaFvHQVFpAqgMZdbJ3e0hSYf85kbnv2sryxGIZ_5cgkg9KyJKzWG_ottBexAbeh2zNos8xOAxxkAzTytEnVCkbSLN1HR-5jbi9pISxYUi9EJ0Aq-eoIAGQZ6jZ-otZMnskkdhQ3c5aTW61b5oCDGugVuEX7Yx82cyrBdwwRrfqKT_7vFsRhqsivtSGO1F8o0b7McDgU_8lMjkERI_ZhDT8ev1ZSg4uKvSqMaKqWlK-wnss8TaSeVwWxH1N4pd6bGfv3KSCv4egY1EPOJ14CTJwIqK69gMUZQlH65CZUZsKp0NjVDV9QN9K15kYYg4pWM54sDyl1LZusYX7T24FjGG0EgS2jfhWv5WM3Rlz0SoDwUW6EF3QpFPFtQ-a4ftoFf3BjE2snJsi7-gASOCf6jd3FBC5x4gWbM2WLHX8a-h47WNlxStkwiyA4cLci3SFgMdKj5LjSJMsvxg1EsVAByAzldvfqbsTb7zWdW8ezlBA_VulbKjNlmjoM85wd5odb-tpbtufMnSmzIepodYrSyHu2lc2angXAG0WklQC106UlIQTpE3tLk\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"Q2hpbmEu\",\"reward\":\"0\",\"signature\":\"ldgimAec1ZS6wzwT3kay5DtEqTTizk5LNGD7h8vJYIdVIae-Fw2l7ARWOs79RDYXFa6xamf0OVSs1ESKQNidOCVjVB09bANWwWSUHMfBCpVWnBg0MgY07HDWlD6khRhVIU1eXzpIPDExfq7tFe-Vr0kBJKgV2X6jlCwCVYNUwsQmAFZ34jBceENPvehzwAtCG-Xx71uSGZZoGLF38byX5lJYvGkXMP5xG00XVoAIHqj9RLs5mjWQWyNmf56rkNAxibv2Y-h0QbMNvtFBP83gzMehHLv1qpiCrQNi6K37NtAHOTxsalW1pKc5lkg1_vMjeWRRnKRH8HbnBo8pLhdutz1g1qByoKKzx3HaasZN1k8r7vx2IgtqyMEkRXOoWclutgF9KnvSXAhsgNTtefJVAOuklncBeYxfJ6XOgnNy_YdH1DfoJeurY9afDNFKtuDKRomI5vplhsbJzknQ-qcJ4m6lgjdfBVMZGnnOkbI6YxQq9Y07qjEG7uT5Ph6yEpR4zD-LdZ37L6Fdj_zK0lLuTszZn5vqCjRohWrxXN7ezILgRFoTAvxmgO8RcMQq8TFLeCHBvRRKs4atXTm4iL0fWk9aJzdOkax12RHyqj2qDR2v5yulimJ17ME6JMXzOqWFK2SHdJDgxqYHk2L7KbAYguCISkUdsyrPxBHou06aXJc\"}"
  },
  {
    "path": "genesis_data/genesis_txs/LJ2QSdjHftgyCOSgy9Ub0OkTTN25rxCY7D7mt6u8Uy8.json",
    "content": "{\"id\":\"LJ2QSdjHftgyCOSgy9Ub0OkTTN25rxCY7D7mt6u8Uy8\",\"last_tx\":\"\",\"owner\":\"uCCZUBqTT0cGMkh-JfLJ776kXh2XkPIiS6uYNw7drKEKL93Bmohlo74W3559fJ0MJqKtsbHkLsLpizTy7am5cGPlz4tApfSegHzRCSmUPKtcK-eyNXPRHOx87S2A8cBiRpfsZHS9dwrzQ-Ql6ZFIPaowonxd3QU4_n2xvgewfgEEUSk7wnt3TatY-5kTWwF69Xu5QKp-iC_aqBX-grGkEQS27nlmGLBzMwkoyn-keWyCMYsPRtW4lxP1JiyUZco05gISFluVADSYnPVpKDX0i88bSw5lZr2pNXXYxtuVAsR9K0E4dtyE6Oy54c_skVq5szjMY1J6nJYj1xv2uw0e5Owy6v5VDIN3rzRHB-nhdV3C9Tv1EYSvjS3NSVg_5qoMxrUVAhLIDQeo6zGfCAkdFMreJ96tFiHiiaeNSokKjmZscTkz3jTuIvDVzaLIvpjgBu7lDQXTJ6l8RXreoGhT9F3ZU9goe3Zke3OLXkHF1bTVFdVJnIvGaGx-moLqNyVowypQRcZ8K1LFB6wgApSbasPNPMPPr_0xqnn7kh9dJ6WCspEVrxO0fuu8mY3sZNSpIlVlB30mbZIQk0crarNG1KwoC3VCEIjmha5FPLy7B_sukUVBl4iSIgGmp-67vMRmqVOMr--xH21ybL_Zjuzrkcx1rSJSaFqLuhfHqAd2bT0\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"dGVzdGluZw\",\"reward\":\"0\",\"signature\":\"idVMekMXC4_NTWSLc9DOzjIf0EN7cBBsCW5NTNvPBNMFKMsthO7lyiu83DjbXgEpCA6cFjsGZt2m7IS-VEptCGMChgCehCFaHzNvuc4_JFBwBiXiXL9G7sFJiJHHiyGFaB1SKkLnAhtkjIg9BHNwDLnMlH1u7GYSQcs-rVCPcQhiQtDGWVmbrmSxbMWUVWGwuHhre2IzqUzXZyKXPvPdEqfShWY16qJ1lIPCaVkbDNBoRo6WJ9u5OMCY3MlgWjE-nWTJwbtz7pM_wgts2oILd0rQ6Cf8KszlfmtVPj47BzZS4PL4w_c2xAxPwrNe-_1ASJztjc2Wp9fffGfRJqCC7X1bEHz0tiI6NvEW_Sc9xCrl1CJ4CuCh0ZWlre4YgHRcmGApjaQnxODJE9ptct3uagEqCblYlHfop2XDSb3gX95wtdZJMtz3zfgbzRV9MdSjTsvBdkmnk-fYtfzwxZuFv-Rq36cANxac_r4cMAt-oqBmi0jAjMHsolPe7IaHoy_ZCcUSdxReMoidf8PpwPo1ovu4A1JuPbu7MC1AOcQh2Hg-498qXy-Z9FAWAWB0NKKbb4qw9FbsAVXVbqRsBjqnBXABk2IfGgUJsJ3PESraWSXsChDYzLD7tWkC1FlHnEh3zpjf0iEnWa34Rulwh2A6-Es8qCxX976DYkrh_5KMczQ\"}"
  },
  {
    "path": "genesis_data/genesis_txs/LUdFh6g9auj1LRtk8IUwLoY3e91jIkcSyPKuQQekPY4.json",
    "content": "{\"id\":\"LUdFh6g9auj1LRtk8IUwLoY3e91jIkcSyPKuQQekPY4\",\"last_tx\":\"\",\"owner\":\"9KeRUUXA9j8QU9jwX0yM6LE884cuWElUEOskB0x0Ky9_ZY0TnyNgqNQ_63dkArqdWpfYEWyEGVp7qbLs8nnpgcvv2xAI7Tld9cmvVHgXPagYIGZwaFzzTd8k_jQWWPekjI2oD4kGLPZfDQsF3LsZVufHsOTbKJoGwXYPTP2tQ9npoW99FWLQ2yBo_taNa9QeCcZQEn6b6C_qYIC0lCYHuQQo67vZ5GJxuga1a230pNEXYbVEB7QuXzhzUL3MsR88d_IuK4TW8UmE-a-p2ZEGYBmBK9GTUI0y6nRfLgOLr-VXEGLkOjoATI4Vosmma7KreOzDJh8QTHeCsKO5vRmfE1HT6g48BBxiB_X8PsK94P4-fh6uenZPgVOrRKDNuRZQ5wZSft22cWIBSasXo32SpxDkKf3JKMrs_WRhc1pyTqY9E7Pa0Ffe8wppA4bOVNhL31nZmvag55S33oZvzNqeLkwgP_dKrtQMt2UcoQQPeGxXqN5I6yarZb2lIboJrHNUquVaYAC--29FJ8KXZQOSHLT7_uraBIqH2aQefV_j_IIDDR-Ue3kHGp8W_i_VkHqcvKXInnYuMsu5QkRFWzr57vuNVAfWGhDHt9CxrA3ozOfM6k_RIOjBgauzVwD_rvYIATvGbvW8EM9_srH0kvGAGjva04AEcVCu2rrTtPkmrWU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"S2FtaWxsYSBlciBzw7hkIQ\",\"reward\":\"0\",\"signature\":\"fH_ynse2th8oa1cSIEokYrJrzJFsl6Kt4Slx8auFON1V9MU0OVePp4gXY_MJnAG6CsIl2Y_4_97tmINz5Y-i2SrAdHRuwocNJw0tHz4bzHPN1nyjjlFRKPCbiqaBKESq-VXTcXwkNaLyVFYboe_x49Qq60DkjiiKsME6HAnfB5_6-uaqdkW6_BH0d1AoLCq5fRIIIKEyIRtMIehjFSdccxAjgLVT6sfxv46cZ7fYpUyJ7700ZgOKP4ow4rek_jirMk_5fTgTe4qTqTPtsb5Yq1tXSM5owvxBZ7N2LVyq1iyFdXCvL9mibee4rFMxShqjmamfk0zxyVoh3AzjVpaB6q2khCZAbTQ4Q7ljbxSek5hOkIyZMVrUyi06Uki89SXayAfUcEmZMAp9aammGt2iiok1fqYaLCGmp7qK_KCg_c-Zl4yUoWqzsAPJSVAX6hWVwBm1Iiy7xvyjvEK6hMKgOdyQrQtZLbsEz_WFimCP_CgNeOHVEBkvnMbOYDMeGb43Bjf4bCwB47lSemvXfDmflS6Ze2uLhHi61w0kKvocgv_J42c0Govc3PTN3LFoIvnOV0PZfWuqsFmbK6wnFuZ0w40ND6xyx-1gaWni8jGbeq7l6WDVbbj7mk0n-Pc0KbvJo6V4kKYSPKOom92Ih-eL5tUE0uH67bpOaj9JURximhU\"}"
  },
  {
    "path": "genesis_data/genesis_txs/LiitFWnODMUA7esa_f49IiMEdN7cTKoKw1cgG2J_eNE.json",
    "content": "{\"id\":\"LiitFWnODMUA7esa_f49IiMEdN7cTKoKw1cgG2J_eNE\",\"last_tx\":\"\",\"owner\":\"qPIhDK5q0LK3HqLOd_alCstBWOXZZwfwKZdP8EgtViQuIO4C3qVwgtLfIvkHNFxXpCRsXu90Wc_OYt7iawc9DnGCsBtEaBGD-EJcFG0LfZwZt3HTLXaGiOzPkr7LFgqrES3ByCyKQ3eNoB7uXyuJSN9pQ_bMLQwjy9W1NhCaMHLdwlr49fMmpPTYdXOS45jdMUHUCfx_Vwxwzz2DYExy3rBENSEISaGz35syrIml1jBjWP9QMCuZpgEUuzzsYS6OgB236_JJLu-gq0m8vn7gL6BH7nXl52Ei_L08m6ApPXsyHyEvOUFO2nH72rwa72ySYBsJpeCsCwiAzfKJ4zGs7NKbpfyRVbGFvh4VqonHFCOWhbqWpNVJSYuU-GlKD7SVjy-GN06m90IbdyhUluO7VXAoIVeZXWekQXiCmaL--0afAZFjsxcDUQZ81xxkeINAROmMUOXS9TYEA1pYISSWWQNTLZWA4E_y6NQy4fF6aruFm9Ukz4qzZ7_Y11jndRH_H0TDxwqRmAfls1Y8lunF8P2dcNWfbSOAHOXCUO3fbR1L0iNR4G0AmhOZAgrnUN9lctGeQK0DDz3oDdrc85qAtJf_9kIVWun8hyL-nb9rcO8XY3a5dJJcLK6oeg3FBHi0YtTmdV2CUByaPO7kQTI6jikwbh3CY_3jksEwg4bn6pU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SGVjdG9yIFJlY2lvIE1vbGluYQ\",\"reward\":\"0\",\"signature\":\"DR6gAnWAyHBgMofY9VOhH5iT5HvacinZ8sq_yWfLlbZp1iQ7-K_i2SWmHkdGeHGvI21Wn3_NegMNNgG_AA3IlZQ4zap3pHnMtwbfhGDVDnELQCYA_WfQenByQWdUficlR_1Wh0pz0x9lCfrk0pz4ZyL8pLvYzaw4CAlQ2d8r9xDaNo9BAfslnNJlYWqXgHFwSYv-jm3Ez_a4S_A2tzW46mG9h2nlnfE9it66klT7FU-EX1Vzfv5uup4slbxRLJAqqPr_9jrneylrvuvimhIjFanvhlhrVGAGAiVpWlA_iNnbx1WiK_YkcXQTin6Qs6MPRxKqLCj3TuNwd2k920jTW7fb-bJJ97oflYmpvjGNq2mkqcBtSx6ZdhNz_JfHgsbeVhSyH-FUQwg6eaCviudWP-6qUDOIkDCxni13APWkaVxUcZllQivKKVZPWkufDYSMadv_oqx22tCMRBKnCt8iSluPA6vEDvH8PSKly6CTNlB7KhwACFB_XhkLp56zQGFlB0vGddSNXcD0jalWvPgrftv0uZE_W5EVifI53DkA_QGDyVZBEGNZtkS4XnS4NeqrnFUAVPnAX5iPpP9uL_3xkyNbsxRh6H0eorrrflcXSalTIItNgX9yy2e4mqfYld3WkF6qZ1ydN6BL3VMe-nnDtPn2dAQHgMe4wbYUfdNrDXk\"}"
  },
  {
    "path": "genesis_data/genesis_txs/LixFbPqM1ZZ-5JWo339FMfPCpD_6M85rVK8IVmmt8m8.json",
    "content": "{\"id\":\"LixFbPqM1ZZ-5JWo339FMfPCpD_6M85rVK8IVmmt8m8\",\"last_tx\":\"\",\"owner\":\"u8mNxMCQaNEf5z_0scFG1KQuuZ0Q_iY9GkiajCRmLmtE0MmlQcJLb4zN_p_JPWWM7J7eXR7vNPiR74NuwGoJv_4nx4DH6_oaUfiBJvIORaCPnwmx_Lu8yWx_z0OALMSIPaKBKrLgNUFhgrceoGjVKw8twxzIiYUKxFZ_ftsqUHQsFYO0D66_tInIZmrZO5kQFWKjzDqefiC1ftiDsLYWAUYA4n8UJRxsHQ5kYDd4M-WN2UtjU2hy7n3CoCbzS3m27F5FO5AE-Js8TtHEKrgkgB9uSjJsQBcQsh0LHxMWsyFWBhI7A1iEZg8c1lw09cX0GEBbuCI7PmXPeR_f5jJVBI_v98qyKtQxBcJ-8g2r1XBWlMhLqCUk2WsY2POUAY9YoFIb12_i-pcqQG1KAWOQhsps7FHZW_9DwJCcu4m-Wb5BmMpLOR0688j1jCaAEQVMEtIdy2GDua9rYd4Lnmc1BRxpaIqQeJibg7ZkFcuaBTvRRGNjudVklivzoh7fVg_jeUIWnTx3lR3EkiwdavgcrVHWQg9AnuYqjC2nn1eQPzVjPmdfxB0Owib5Xj9IqLunG1JUOnDFnFZjgSh4Jbd0ezMLkBnhUO7UXMyGJjUr8F1KsA-oI3p8C4f0sDUWBS2KJJZZwXVHSbppx5NJOtYO7UbL_KEkGQJy1pYiIkd2Rw0\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"U3VjaGV0IFNEIC0gcmVtZW1iZXIgbXkgbmFtZQ\",\"reward\":\"0\",\"signature\":\"rgc7cIstGHuWnDIl0PE57Hp-ljFz2vDaJewfNQNsOCxfTkq8lkRHLEy_mEYhmU0_Cwd-PRtvwQjPJ11Oa8HWFlD1dxC1UjHFNvCOZWeqOI84HeY7U15VIgIiNiilTAJAkdQvSTmv0e27hoawyfWSelOowfN9Zz9JYxzM_sgWQUUqf1emh8jU8mBs_jg6I0b0bzpOPLZwWNYkuJS4QVj5qWYe1A9YMjsoSSlzW0iMJXysdebZjA2tPj00Tag1zhRuJfb9TizpYu26enTlQCxb4McfItRm1hNvh2hvCsgpiLEPaq_ixwmDlk8xmSCdLRpw2IEUBFF64BjT1rHBNgpH2KZzQSCCZuQgbH_yzFgfYRnzp3JwOws3Ii0RhYgKe6AE_FiLtbZb_ZQEd_kqSccLUl4m9z4YXdAaBu6f-OcrWzGNMmNRqV3RG4KczwJqM8lDT4xB9lhuudi4yWsnQVvYwKpeQaGUuw3Z-l4kn0uzRTqtVCiW9KtyL8vlwPuxTYTZj91sno7qkFP_PVaHZpXhuCiPwYuRQG71vjKs7fO8vNY21ptnPxOjzdm3hrQU5CebBcGZuWLD4O3vOPMsAjzcUW3_-MuIFs6N0msR6punoyahM9qUXXAAzp5pyv8iiIbiT3YXgJVinTaPHeMj3FRQIn2aD6cUzRtJuN0xU1G0D8w\"}"
  },
  {
    "path": "genesis_data/genesis_txs/M7oOLbk7TPBanLCS0pzkJSbV1CYoJabbsSDe_pCjhEo.json",
    "content": "{\"id\":\"M7oOLbk7TPBanLCS0pzkJSbV1CYoJabbsSDe_pCjhEo\",\"last_tx\":\"\",\"owner\":\"zOKwtqZQmENDpTQEqNtqV6mu0qDsgI6PzqBM7dQ4GKJxVgn23SiTdmnHP_1HgwXp47tv063PjFpU-OCtAXwSLpjhRABycr45xYhkCiZZmnUXyfBrPspWWCZ4VwdP_SeDffQgc95mjcQZUr624BTPcAdol6QpUAz7oUju3HuOv4QZXn9gDU_1RcgZWRV1YL7joqOLZSeaxj9o0zohm4jXsAzkYbmFE1wmIuqbB7gFu9FQr-tA3mWa7x_S6r2Lk07QVTpieVY2TriyO_K-3OEgolDljkyGFZUptxoan3jvFm6idn0TCtYjEbpQ9bZhZlfg1QBpE5FrX3MbeFUnn-Gbp1mdqKzf5DQXkr66daJGyqC8WrGi3bIXx4dCqZPn3WrjoVMkhR1e4BYVHwUMAVGd4Q0gVSGwz91eG2uTjk_drEIhRg7VioFoKXgujbGDuv6VJV_Xsa24Jnc4u6BVtcIoopKPBOLFe0wJQYfqSr46Ds9F01HTiQKfh9RzpU1GkqxFWuui96Dsd-1CP6nWlbms3wS4Iex84YsjcRjHQ8X27JGYy8mjKWQC-Kjv1sFGhuobG6uyq_CEIqFa6XhcydXrsuOoK4uy0bpJsP6ko9rzgNTd7u3u2VCpb-oZBt5jPqY2Q5oJSIwo_ZNZgOkszMUiWW5tx7BZtv4AqOPojs5GlO0\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"S2VlbiB0byBzdXBwb3J0IHRoaXMgdmVudHVyZQ\",\"reward\":\"0\",\"signature\":\"VnW2KPXvK_yryUZKqRDzDG1D_mmlljPbvszxV7gyh2ARRNSkUE8mBvShlGggKW9Ha0USUrYtoIdNY3C34-00DT09_y93Ew4t4N42bzJLDjIA7nKhJr4Tj-h8NzglR_SrgDwBICmjDuUduckvOzGiSGr6GYI0n8CScBdw0QLKojHuyb3-Bh2krA1SPXlRM7Gevw5wd7EzRFSOW2O2FkIv5vKqxBgzCFkgN3drpVzKZnUeLfKUjt-bH5uLKe9ems_wi_EqpoXa0B2-VI0SbJL8vPI_f1AM3tVVS49SkLu-6NUgssE4BmtgobifD4NMnJKXKBsEHyRrizRCsnIbAndC6fK0ANYnaM_wyJ6d3Xe2gTmmtnm5fWbmkDEU0bhe5UswltKNbMVIKs7xtdJXb3plmpL-UL79hv6f7UCR0BhU1DpHqt6QiH8AIefG0Byns-BfVvcnNmQhG3jvL-dOscQRcLREmd5iwofNCrIJQ2Z6hHbCsQ9Q9asK7jyLuSG33TRlkGaxF2LoBS_0j3D-KfeBYmi8Rev7nmO1kfdf39fm2Xrui2xh1YjyxRugBBX-zDTKqoUfk9mCNMFRb8Z4req28okPUWNMRFzq2JAMyZ6v2eRgmeegg8yZRgRip50tYuyqO1G5NKo-X1UU0DpSEdnoqX8lFL0ZV_mWsbwUMx-BT_g\"}"
  },
  {
    "path": "genesis_data/genesis_txs/MOoLwb8S881q3-gM4GK7DuCEoh5CZnF1tMIZG300X58.json",
    "content": "{\"id\":\"MOoLwb8S881q3-gM4GK7DuCEoh5CZnF1tMIZG300X58\",\"last_tx\":\"\",\"owner\":\"6db4iwf8JDRUVNCaFAbzLIVD87pj7sAfSk0cXvkrM6xgAg1P1zhL5cx5L2rd47GayxP6tj35q8_NAbddLIC9vsmZvOBeLWTEcIL15pPfoYwrmJz3DpKuZ2vqQoxtaN2H-sh62HJCb9TwedBhU9CT1QfWT5jc6WZTmJLXJlw13ZZ-04P3e8fg3LK6i0WwTnO1tkZqwLXKHk_yLodA99oB2RrDe4mTkJj2oDO7Ky6wiJUSnfNoymE4H-n6vsa-82Sr37lX_HVFU3ZFOW2fbTJnXX8kLmevW3u3h7RZtaISuLGjnEQYAHmWCW4hzuK6QqmMYZrVkf4W60W5Xv0WwNipXyjBucngeovGpcmkhCIXIPWurT-CS70Jqv_MVXS3Oxdv5gFLUi1Ko5h444CfZ1gSoCiF4wgEl2cUo5oobQYTE72Z7H2rYyrUvNcfVuIbam0JRz7dONf3gX-szfTOTXwgKGcmCcQ36_HSPs9s30aMbFkUOkjG6KggibSBXVLhkBZRi8FVsdhW7lzSZL2HeFdQMS7K86nAA_UDpkpnCeDTnRS8nwCdeQj8wJI32ext1wkF5blV-pffHUHXwvjd3cee0A0I-b9HDPfdUAZtHYiwFVUPpBTHl9xM6zJka7xSf4lyIRfMLasEqcRm7plUAr0qZxnYE_9WIkfExFXJchaXpoM\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"QW5kcmV3IFJ1ZG1hbg\",\"reward\":\"0\",\"signature\":\"qyavphbX_tb7_lv29bZeH_Eus6fJMzDBwnq7FrnYZavA41sTCJfyyUMzHhj8PbjM0T1QkoTUVCm4hhI8D9lnlKLx2e17SYPeiFvpKHVpS1Wp1ZjLXnjwEWJ4TaMS9bcfFR4akTrqlktrstmjWcOCnG-NXm_B5PRQbuhFthkKkfqSKZFwJjjzQCG3xQfBUqQ-RlawdopmLUMUGx5i4NSwKW5d1IGQICYpPcnI8v4QUNLWQVi7AdaW_l-1ONsRLOVOh_ATVlHaiWtvhuznzSdMlz3yKZh4ESz6orpYvsbVsot_7PDK6pvELMWIkBsH8l5mIq_kbjeklU-N1tbkLZy8uzTKp6k2tUm4EKuWU6M-IT_JhaOPI1ue7NZIkYRTQ5Y1t05ZExVvnz3qO-xqV67GfaowMVW3OuhS2HJ83BjN_GeMF78myoyN3mCL8W0MMLobzXP_e1chqYMSOT0QzcsEGRDKDdF3OIKCva3pMMveTPfruqdc5q_1dwWnS8fterQeAgpcvCT9HbrkpOjW0GUb4MsHNVnvwTEwY_mPlig3MnFXgmE21H3taDVvb2UVVl2-yWWrdW2AD0wsbKF86kfETXTDCG_u1yX8s19mJVFVQ1241BI8HDRP4xUalf2xaDrVtbnlElqz2M1WLaolcpaUCz4LdXcMKHu-6Z6CWQNUejU\"}"
  },
  {
    "path": "genesis_data/genesis_txs/MPP4fxmSkvM2BVq8rumeT5yvDNu3QAT_kqpOlAq5s2E.json",
    "content": "{\"id\":\"MPP4fxmSkvM2BVq8rumeT5yvDNu3QAT_kqpOlAq5s2E\",\"last_tx\":\"\",\"owner\":\"xL1_0ZoXer4vIgkcg682epYrfGw5ZCRCOteHmKJwjBi9Ziw98WmIYX4gwL9lafv17vLQHIAdE042VSnGpTgwwNcYiurXdJTcXXXDRGo9pQcHMzJQmnMIyg_mI2d78f5qOl3wbesW9_UVt5j2opANb-VPwZoGKiRA2OJQfguhqSGuz6iY7-1gga73pJgrf1hHWmlimBg5yAIMExf-UZarb3fyjaQaMssJbozYs603xdRRhIys0umebLDVTwTcvYbRl6Siovnm0-JYFF9izV1AiTQG1rXsl3ROXfaYSegVe7hO8zrwAo2o6busxHEdNO-jKBV-Y9Fb0w_EtU3-r88DrZlB9bTLHHN8IlGQG2CgfW_tP7Y-hYaYtHWu14itu1uqMeHYJAms7RZ-U4PH5TCvZ8w2AmrL40dppgkJe_hkc1bs2BGa2-NB2CzYHxJJ20NnL_R2jJniQgH-42qeVG0y0UaGrqlzLaX0h4hHS0MRnd0IycoJ9iKtTNof8_bh88CWHdUSI_8sSNDgYGeJhsvU35HXi9trKFWqDsL3eDQqiyg7neYF0DFvhCoYuSwvhVFrtrIdKuDL89XAE1dSXpo8wZeMUNr4DFNhUSLDnPXNZtd04206dBgMzQJPw5QcOritfyL-bz67AjKrvcbahqRFUCtfJ6ZoUtAGSPpCcBNF1U8\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"WWVhaCEgRG9rdW1lbnRzIQ\",\"reward\":\"0\",\"signature\":\"nt5Bv-y9vEIGc0Bw1h3Bw9p_izOXlnUunK87dK6EQkO_90xt0FDjW3twJ6HiJ8sdcs0Yr9mzFeLnBYvhk0UuBjKDCWaRF7Z8ksnJIrPZqzrbqsn_yEHMxMf6j_skBZhOXvECk_syfKwzGr5da30wbowyhqTSHPCEn-oI0I4Vku1qxEdDRWytgQV13Z1mFYcJ-f7iy7m0eUIJMeT_H5ZlNpkIPbshQ81uC88PFJGenrlBEBAvkq1ISzsKR8E77R1B4_6ApU2W0JfBKVG8KVCfywQcgKFmx91BAn51LdWNyMXaTAboE-bXW7aptjNGiAmxoKWCWQGwxX8BUWG9IWSQgosDBbq78IxCCQ6aT2rEvfX-DHQjFtUognplfoPjDEBbM_iGZk7axo979QkJ9tYNHUukbs_gkMpESWLzPnRCCnTcYeFLbbg0is9_xeQACagwVREPSyy9O8HTC1BOen7ZKF9b1GMKJQfG2eQ-nRIbKHfAfczHOg5mVICUPSmCLhwpqfJKG1drE2luj3DV9NK6ACh1A_zNeufzbhxvuQJhdCz7rKK65VKbkVYgI0eERWKaoamm5uSmMBUYVkTxVv1w6-k-FO0nsqaM9c-jgCWJmle91XnJZf08Bv67xN9U5ctv8vNmdpjlFH9oZS9o8LtxiW4NDCEhtvIDeRTOIbUtxvg\"}"
  },
  {
    "path": "genesis_data/genesis_txs/M_wQsQbFGtGiEaH0uW2swBubAnFab3ZcCN8IYWZvVzo.json",
    "content": "{\"id\":\"M_wQsQbFGtGiEaH0uW2swBubAnFab3ZcCN8IYWZvVzo\",\"last_tx\":\"\",\"owner\":\"3spOip8iKzzEqGW_jRcNeIByIB-rYkE0azw_irvGkPPZ53Isoz1kbSt3nBdyMoyF0CaAVyt2WVddoHlHcDq1Fz1dZ74O5740DMTxXn7RbbEwd5rsrbrGAPM57wZZ6PZx9KLgKY9jg9PQ4gT9lKJuSgfBMjTpnB_7b-UPoO9w7Jlf3WLvKOqbg8YjVpcghl66oV3mNELZ2EINrvB-1beEIhQyIy8CuNp4kz2q0o4D1zXYBGyuq22WBAmQXLrCIcc9ADmlaceQfXmCYVdH-Mq7npagWchhgfiW6oLg2GT5TuebfqY4uO1h5JQUriWs--VU9n0wor9oZjxOKQzvaWCtjMvtjZDl3jWnz0h2igavxVDarOt9nqEs-JuVAvW4MmvfNCPf2C3u25Eq2_haH_1FI5coSH2HaIGyQADcCmpMbJuS7LMoHVDMUbPlfmGNrcbizCm60vLBxkSD84Olfn9UX51Lyo1qnZPW-RdlWwFfy4h3L1lqbTgADT8nvtkJLxmSA3OpdbAOopBECSjf3SFwytxJ5JCWSPiShE7EKV2sSU7d5tKX5NcTjWpuel7nO2CDIUj7_9rpjkGksVT-M19Rnvx7vC9UowzBJ190htp8km3KdSPpoLqYFc1xtBLtIqkMxq8_doJ_P-yEgQr_Tyj2pClRtIIe4JiMotsiCigneYs\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"Mw\",\"reward\":\"0\",\"signature\":\"2NEWvCZa840SNpHygBGEnhpR9qsc0O6fmFXqlrj7rF5mmPXmTaW4kB-oWK95JRIGrQNvxyYBzD7nzpou9vOF99u7lM6PweUkkVhP-wqz1W3D9C4Z0pi9XSWbkLEvjxRIF9VY1rmRYyN3s07HTu1Q_cajb5VOymOuT1-B1R4Iy7LLKeQSM3oU6tXHGLGDNYRXYSK5X1llqxbTN74yc7L3ooMPPoRQQhQ0_IqPQZUFeWZUr1RcvuLG5sI7D1y4TNkYDjGmdyCXfcabQ4bq8Qc-opeBH1fTvyoheat12qWvknO5Pbeot8iZjRhsKKeUD1peIQIBbg0lJmPwzHCKtyhZmAefcY4dFpejd0G-vrnoF6AQPXOFopkcfiAZ9jzJisF35dSYjZlnyQGFe7ZZG-LO5DQ2iu1BrHwdbmdl5IO-ImSS9e3TUVkmKkAiSEKZ8b-FELtzjf1twLlPUc0ymF9ErRqSyDkoQvj0wr1fUIMzGQEMbMsi6O2A9Xc3D-5NTu-r1pd0eSAxa_dzPLqPos9XTcYbnlLJupPeRSqRNNGlH8sv8ghQFoUUWX9ElJ8vNNyMOi7x5FT4rTEKv0UJM6w6nGZ087jYP88B6LmD7nTvQsTpNB_L6UGIQ3Xn1v3jOnrq-bh4-15HgJJ1chZ3EonT8Glvskd5ERr_wOq42MXxpWw\"}"
  },
  {
    "path": "genesis_data/genesis_txs/Mk8XJgQPSOIsx_QX_XDPxdEG5NcKgO92q9i37uLZsrs.json",
    "content": "{\"id\":\"Mk8XJgQPSOIsx_QX_XDPxdEG5NcKgO92q9i37uLZsrs\",\"last_tx\":\"\",\"owner\":\"3STz8xp4flG_ygxTDb_GSdz3qGjB9IhSuTDjSnJtNm1om-HDHGvgiX1bjeTFSNG3jDzuszd6uPiIl7qJkY7ZRyF8QRNPEXsMNxjvQj5ZhSlAQHCqpURYgUZC1oMt-CdxkzgqA1kVdZsjhsLeTfsZ7RyrL9pR7tRQsaQnrx-4tMbEVILF5sLqPzk34I6abRh7Tz1FAsES2d4hrunXp4gcnyK7nTS36IjhffUt9AK1_kB_mhGR78ngM3bV7AjxBKVAoV4sPZp0AxNVtDcPNd00F9wX2jPxInbQMkfBu41aY94njInTvg5IChKBvugzLeqO7HZfckSKKK-MYz8b6h6U6Vqv_me73rjRXgGEYBq_kmXZKQPIhk72YGhgAVdus3OcwiEzz0r_DRkTECVNHLUMskF8eP1_upyMtNlLwoicKpDNxlPqDSkEFWpQf0u-YLRF0O8q2L_OLidBU0M9gGaPwIJUAG0hUDdLSNgD8qRbdRwgatMuS3-2Tq3FzjpcBng6IMKG_F5MfaUC0VAf6xg5RESoTk2kMgLTL4OazTxj1YX-0c4REBsspZDTiS_dVqNS4T8hooWCCVi1JXrvcjIab0By0B4HXTuhGUTw1w3yuReNd72TSpvja-qrW0hLtFA--UHivjRt9aY5Q9LKBOoE1IlB5p41PhYdkhDilU3VA98\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"RG9uJ3Qgd2FzdGUgeW91ciB0aW1lIG9uIGplYWxvdXN5LiBTb21ldGltZXMgeW91J3JlIGFoZWFkIHNvbWV0aW1lcyB5b3UncmUgYmVoaW5kOyB0aGUgcmFjZSBpcyBsb25nIGJ1dCBpbiB0aGUgZW5kIGl0J3Mgb25seSB3aXRoIHlvdXJzZWxmLg\",\"reward\":\"0\",\"signature\":\"Nbhs69xEkl0n5AqAWmPmxgiPcZe025YHje314GRrilwbsXtMGFqp-AXL1LVAX3_D63Jg_iSM5EM8poSVVjzJAyobg-nJ1ghCF7TOOZ5WU-TzvqgruzPNgG46RtzJMfEiAocWhCwH28G1zLgl1byF9o6yxP6iT1WcpZn_-hGYBcjh3S-sUXo2OE1vVIbCCQm58h3_LAf6naik_YBsNDN58XEG55fke38UBFRBrwNoC8_EQeCDL9majT_PcA-hl6FDy-abrPdxTSgpdFn23Tdgnw7dfSXyAywKeid-uj3tX1njYtnN7sGcNGBhd4Vd6sBIGpQpmhAcMoinzuKAdT19injpQUoUmjHbV9i8dRDUiz_hU2OPaFUOCYEGYpz3nRvyJj5Xe3nCCvFm3mA8-jQm0wPgALYlBE9Wfkx9Paibfegenf3ClmwDkF1ole2aW6GtFMXP083yWRMxQ7-9A3tPrgMY28qhyyVKansazlbt_s8puCq8DHtRkMmjUJ9lJ7jQ1lmbxGhdhRNoRr1U69BNU_l2b-IfpLk4SwGI9wMI1dJ-U8VK7pCGCLVB9brsB4e7hbs6DWupUq1THsKmEVWVMGQRlkOyPiMskluiTVcC9bI4Rwm5yMSl9nyVVb4cz5G1Aa0izdQGelnMZa_XXWimLEV2Hhnywy9rgcSp0_LOoRA\"}"
  },
  {
    "path": "genesis_data/genesis_txs/Ms9gCRdVwT9u8-ewYd6c-T0bet-n24n_q_Hn0-BlMow.json",
    "content": "{\"id\":\"Ms9gCRdVwT9u8-ewYd6c-T0bet-n24n_q_Hn0-BlMow\",\"last_tx\":\"\",\"owner\":\"50TgxcnKunD3vAcwH5XsgHG9XsYkuiqfhEkqLjidBX8HMxaVft3yZSpTB31Mj1gWigHO1W5UsTZUmUFmpvW6zQzJCDb8W-VyBl6_FmLGDUgqhlJ2z3tIURQmqrSfUL9cYVvWHIfipPS7S0buv6yImMqnc07sl2ugAIUHCVtGhVp9max1_cNDKdHZssIhESp6hbrU6cDYKAIRF4o90g4dO61erTwd8Cpns84itn298o9IN5128-HEtCmJ5-xvpx9WCppFkCGq4fPIlx0rJ8WWUYlioHDdrgkUqQh-bxiiBZkLJYGZYBKTwvSPCiot7nL7pbZDwNVkzwH63tMNN95tzRuEmbAQZyBTQ3VgE_Fp4cNQi7H7-8NuyiMFK8NI0ZnPKh-OUz0YfrHuSlozT8RyIlX3AX8N1WmYB7KD3bQFRLxgiesY1aor1UDnxxHI9fywelLVNztMan72bfW_fexd-54p5g7IeZv80L8n8Y7knPz0dRSZFrrzjF_hDj91ZlxMIntT137jFa1_IhRGzeNjW_P478E8_t3s3_aZguS58durEk_4rFeTEhObn2jl-b-0wG5sdcqj2tR5iOA7ntODhRZ8hImKPBfypaJ8M51-daYXIP-LOfl59LHwOrYuUm8vXidIvBoK1rrFZ6JJsUYK0Jxk1vNV9K6VDhnuXVZ1IVs\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"dGVzdA\",\"reward\":\"0\",\"signature\":\"bfbLamBGbe1obP1VwIHoy854mtwtF1TDUFGg3Ofj3oV4WJlh8tLUy4mUzVtsOIkBSoU4d3ywF1nVmPnwpexm9DyUt_sKl2sNIluTNHX4Qc_4_b39N_Ff1IBCkX5Fuc9LhYu_tQaJw91B2GTf9_94J3PbKGqiJ44BcwxbD5Vah9lQU90n2DvQBj1Y3Kgf03tB8UI_eTPQSqJx6BiVUKD7Z0YPS3iOE6tqxZ6dgOr7qSgQzYO1jiQB5ItJsipIuLlafVPxvsrTxrEl36r4F0Fw8kBPX_dyVb5UpcEoJAn4ZfUZdiaJxovWrb0BYXZ31HgxFhn_Jn7gComc_dnRZGaQi3m6Gr8AmjZdmfqi_fCOb4TzSeCSlKWeM83QIv81KTgkbrh_QiNjZC1R4IhJL3EUUj2vLL7nAU2g0HFcgFGf-SWQCTAAuOYjyq51Jept4I6Q2CpUNxSvDrxRoYKZfqyc6tqB1dQR2PrmxHS2-QJvE1-40IKWCZcTzGbqIX9XB8PoVKOrnVHHMBD215aDN0uV6I9mQ-2bQsCfq6I9tPjtOqFRxEDB_WmJukYTiwHwYucKZPbQulERdaMmieELhfNMR5RywYYnG9uy2qLRQAJuqsicdIuOTPMXVHAkbharKt5u5U_VabZ2L7Mt3yIjP0Df5RWj-T9nwYcrBu4i-Wgl5Rc\"}"
  },
  {
    "path": "genesis_data/genesis_txs/Mv-TFhA3639O4JbKzoO3wo8LNPcFwA_vaaOLHfWRfSo.json",
    "content": "{\"id\":\"Mv-TFhA3639O4JbKzoO3wo8LNPcFwA_vaaOLHfWRfSo\",\"last_tx\":\"\",\"owner\":\"pTz0vEGrpQr2ByW_Jvwf7w_nWJDm3iEbmKQ1zRWou0-KhxxWic7GAgpzcBWqcEmM7VIp8KL_ggCpxx6z8wjdP6OAqILrsL5Gp1NOTgNITe-3MrAa7VRYqBmIcuRZm9HgbqsH6YL_EoSlzwNZFi0tB_PKfEqusaq5bAXegALw6VznNUPdbJPDJO3sHd6HZRC66_PvVrevfbn5LVNxUNFMuLMrBbgiuzPw5_CFxyCr6rvvQyQha-13c_rR3lL-_lUjgVT1Of55-dQ0_6UQuOlc25zCtYBxxk-i46uHrXla0W4iTZnk7pMIFzQNCz8rJM-zgWrNiKzaKqFKXfUBldw1I_6kw21boPfocNcBnS8Y-E-GG_21AM5P1JVlQk8L4-EmacdYMU6uJibIM9SoOTsuRA_xABD9Rv3x-IQsoqAE_b17FnCxmdP6SgYYuuBs8NSFysQJmw9c6v1xhRuEx3yZy9yrZpfdaPFn-p48fc0d2QTOQNwSFjJD1seCMs3YdR5kyL8DUyeoc9zGXcrwTi3QWEf-a9dF5U7_8D0u6BPXdmmQRJb7kd98xpeHoErCA89D43vL64CJzX8D4Cr0l0eijC5gsK7Z8hrHzTe4EWAJ0c8Te24Z8v07f4kkk8qGch_MxDYhUCorrDKsuIiKosuiBgJUAtKpL9-AVXUBcNqfNYc\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"TEVFUk9ZIEpFTktJTlM\",\"reward\":\"0\",\"signature\":\"Q2zemcLfdjARzaKYfwyU16DX4p71NxyP7JQTaXceD-LjB1b-8rok3l2A9ubogZ6R4tjTAnR0uNNonAmN3hLc5TO8hs_rIEVZFTN51kavZ9iXBBx2gNHJcunNqPc9uJTb9niyfzfZ3J2t5k85VuIsSKsvZEiA9yBcRROIyEQ-DCW4VqB8rhpV03WsAMQ2aWJHCk8pAMY9EPG6G8S6lGun1Jqe8OxheDBDWbLwq44b2DWhncWFcEQeDQFyESyrYjP0Tl1zde7yDPuZ8ZuyhswL1bGEfJVTwM1QZm2u0TztVv6XCxj1gKct8kAMJkUD2bG5vOmrsn-JtqhFkflu7gHDWM_B8ZEQp-mrGx2VSeomuUNWxYIqMOZL0BXM6fZHSu603XJtH9HNGefIp2K1lNsGGp5OgNKLknI5bqE886G9W9KU_bNbP8-z-6D0QFWixwiZMboxdb4l3k4JUIn-_qLm0Qvmspta5Tr-Nd8M7WefsJvPgS2JpNZ0AcLZoMdaMmgzfr93CVFJmAoGUWnKgicKUyOHIGQCodBjZzQrNlx8FL01jjpJxHbVMId9gCQPfFWRnzlD-BnGkTmnhuc2qImZgJaun-O6xXjE2I8wgsaq5OCutJ6oXjMAt1nK_nJAv1zGcwy-yiXOYPS2akZMdURvUhIYDtzBJ0OaiiWHlTyrBDI\"}"
  },
  {
    "path": "genesis_data/genesis_txs/N3lqe8CUwPfChinYVV4OZZQNjtXc26JkOJyqgoKhq7E.json",
    "content": "{\"id\":\"N3lqe8CUwPfChinYVV4OZZQNjtXc26JkOJyqgoKhq7E\",\"last_tx\":\"\",\"owner\":\"qfF1ttQacUz--8t6awxW1qaaZgEwsc9jUk3y7z_MoqMTPGzdUOLzW3vfPNR-2FLwnbPyF4azGU-X-2A9dc09q6e2bHTfXtKWwmxFrXhWgq-o74IOY4LXAyYom_2aH5giezkilPXLB-AY_FqjmDb3euzNgqbLx8SOmqpRLMzSQQHwqlelRh2F1Dl6OYO-r4P0zmPCG85wQ6ufSFG4Bh9Sw6fZWCySirtRo-Hbi6Oa6Y0OnZXWjQlhsF6vs62YLnEApKVcfIMCYdJSyOZJAm6jvQJXqazQzSJsgbcy3gtLK8MojmODo50smdy6-JBseuMEjjbyqSQe9oUivwP3udnMaHP6yYb7AtklRhwOIR4phcmOk9nY2oblZqhmxcIGztcE6Gg8gsRnWNZ5RN4yunZGRjUphJybuEToJJRVB8JA_3K7JLk70taSj3D-REyC35v4nBIfb_-vuZNE5n70Cqz86yvnGpim0hhDLB0pCSRp17zcL-6E9pPQOKR68WctTbYIV2Ry0GPU49sRNVvsjDXHbCEBNq1Gtnjyi3aeN_xie_2dqQShAETjjFvJ6st_gu608-C0J-VIdzNaeNU7OXUor4Wmpnjm162LvBQwBngqj8ttEYriiYLBnRIXyfS7LIbnoSEK9bgXoCbvYXDX5bj1j1YO4k22Iq-iwK1Bg2eY08U\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"QmVzdCBvZiBsdWNr\",\"reward\":\"0\",\"signature\":\"NGR3xT8VWhPmjk39AMnl7dnD_2H_Z1Ak3Y7EJNzZxFrVtGEzvXYN80reGD1byzy5VJ5wB20T1g7VRZF0_q41KxlhSOJ9VAWTHFN_-lpmBbYS_ZyKuWlN5UgFBbnuSFtWHMGnfb4t-2pRo7Sg5XMxTvrPXf-p-e1xW2toa2BdzX8BpXNZgIT_Ah0VD7_0TRxCPOMvto8YZbXfeA3KsQzOnzfJmCLIcXbeinqz9JJHZ2QSUlZOdn9Pa650TVvCVmlA5LwJ98NkyZrFoXT7imK9IrftH5RRkKXEvt9BYPvepMNK9xBn0T9WNZlV2YqsAwYaVqDYGYBjqCX5F3LJrek1Mf3XS5bxq5MaChoPg2xjcKP7tVrS-kjlbVHTH0NUmwchmIWeM2QYeYNjpO56fFYxj5Z2LNV88-QiBGqLYjhRkAr10JiJqLFN09y1tshRm0KviIo4HMtMag6VM_u0nrUeXvjeM5KUeBS4cH4tL8wHKQvAzpCIKyk_KaBEGPeMTTKcWOsabVAmK1XWdojQPnTKVBQlyRDKxogtD_0MZklh8etNp88pBN5RscT7kepHstE3xMHBt9nuI_dXfPKajefs8Lft6Vl2-L0okGNIJjpDjb-IcSuDsm3X9Gr3Mb9UkwOcRoc6c3skXJ5tiqV-3iEluJmVXindiAWoM27cX26n4YE\"}"
  },
  {
    "path": "genesis_data/genesis_txs/N6-1fOVDkoeDwKyoNdLxCVoyy-c0EF178A_oQeEchs8.json",
    "content": "{\"id\":\"N6-1fOVDkoeDwKyoNdLxCVoyy-c0EF178A_oQeEchs8\",\"last_tx\":\"\",\"owner\":\"syKRV0_12JvUKoYcLpvLnW2IOajyCxZ_FpvG0I7sDv1eiOq_6SjIiI6OKXGGhABV-1aBZtqWTfb5P_vl57NY-4E7uvGF3fXoEoSOkldHXAKHhbVypV0q-RfD0r6pmDwhzKaSXxBtqXVIcGm3FFAjeXE96i5Nmko7Hhq47TzjCgEA6WhpkYi1h9pD7pdwmVBQcGrp6TA5W2CDbLGP1RlOvd-ju65QAuMTH2rNsLPD8bqOn2b2DvM7qJ-_YeDS50zq3kZ1iAnX53v6fmlEjYRirGr1ieeylK6MqZp3G4wysGp6VHDP4OpVhbAlPVKSe7a5O4X9OM7UUnMGGlG4DDD0RIBJW5dtIFyaBaXbOBz_kb0Tzb7xJCa-Ki4q6uMa9GLwpvF86H_3gYb6lU8DHOy6P8Bhv6r2qvulXapPJStePWHX5fE4oEho0Kmwvo95Qh-Zi3PRdydTOLrDgTvb1Bdv_5Eo50eD-qyp-UmgG_fvvxNGg8mMJH-ZUHqP_f7sBnawgPo02aSZiFqQSs2UEiPnTzOrDB1QzmbHCSoWjNv2XN5KHmrsqsascvnz5xBCxCZiBvs6vKQ_OD06K544_-1egThNMZfwZFZJFlpWiA7snPquT6QkDyVhpwG3u30VTHzAqvkoaJtho_d3uJZTMKgAbFGHQVDXsAvbpOUIpg5G4yk\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"dGVzdA\",\"reward\":\"0\",\"signature\":\"gF1kC7DY1LkGBy_RLjcfcbshkNeOPG04VCqR3Q472NcMqyC3z6mbVuT_XxgeEd3zjOvUc-72K4iKlOf2UhEECTnX9cj_FnKV963zNmY3BpIV8MGIQuTp9xKCvmL30-np6bd2j-WthBZQHv1LwTrzCnDGh-K1TIlAVCXv5ILVfiEsJeudIkz4a9IIMx9eS9pWO6BBbkwkmV52nJu4_HDZlGijf5onNkXUgqy2fWVTpy6f7IkSmgC6rIWc6J6PfgIZPuNs75cQjaVjHSQ9LK9dfplT-jZwmuVe5bZ_WbzCDwzwmtq2GzDnAVngbjWohajndSaZJ6KzhYlt3v9otVtoXlJfszUpSocrY1s_ltZC_OBTctMHxQpJ-gi9B8tvaj9ysu2qakfKkGGPoD8zEvy8WBgnm4r7tFv0eAmINasVdIiNxga2yFXY_l1zq4aQWU7ki_pyiioRjurzoPZMCrAhi-4UyX9-AT4RfHjlzShHhTSTTPezXjBeAoX9k9gR4-0V6dElK9elYTOB7AKVa82YHOo_1kX4FWEWEHB5KW7cC6xbiNmgOi9p9gfkxhrXucdugIW1LkCAuTE_HV50bIckKW_4Vx2d2nq7fCIH83i2yQhQhprHgj1fmp1dSHXVIKBzzGlMWJkoGhHaTOoHYHlLe7zLXm7Blekt7BLBFvKRAIo\"}"
  },
  {
    "path": "genesis_data/genesis_txs/NBxewjnZAfekK0hKmwL_OpF1521JTeIpLk2a2TLDnTk.json",
    "content": "{\"id\":\"NBxewjnZAfekK0hKmwL_OpF1521JTeIpLk2a2TLDnTk\",\"last_tx\":\"\",\"owner\":\"6NnMACJ4r2JODThVLRHBPVEVgu7oxeE_zWZZdUvP9ozWGMY0QG-BxF3rWhB9TvZAFrhtvdqsM9xXsu6i-k39rS7ypAqtQ4d6BMs2AnBeksxikN8Q0N3M30JxKzijNkp4dGx1HCeubIY64wIRamCgUGDVOIUtkCvhP9oZX7Xhpr1LwpjrD_JSaane01EIyQt5apcHvvMPhkMPivSrt3T9e46KJn54kI_EF6n2lnRy2OoKCtSksMutD4OPgxFz5ofuXzOfvIno6mE2RMNXSghmmHpvkDVC2Yx9sV8wvknsKTmJZEoDiyhwqpcKCTm_LdzkwXo6m-cg9qLAd-RRZnZ1JqbWctHTwL1_rUXRtYpS3qlmormiZibh0CmgAT_59uKkPGcEGzmThVayOTg-orrWQf_MmTVesMbFOJorb-A14Xs_rGVXUkWHXlHeV5MmTTolRfUpMZtnQEN7dvSMsI4TMLJp8hDA62cAWnfW1bWnJYctxGOCXLtIPHEcSbqMLxSnHjD0TLCCbBVdlJvXu7EtYXZShnG_5OWj2QT3uY5W6EpcDORmZetS5iurfwh5Wl_oOL3VdXyEFOUuLvjILgpdSN8o77e1ZRfDhGq31N5wZq_LIHswR1Q28mI-5x775_O6HjPrQjezIzDt38otRbfOf1qCs6NAdXgCjeeOcs8UB5s\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"QXJlIHdlIHRoZXJlIHlldD8\",\"reward\":\"0\",\"signature\":\"AqSfaTqAkvRAv-5PCbOVvDdBuZvpsLYZTGdLOfSLHsGbsbiyOYBT4T6gce5HA_SKOtpZMsyQ8ydp9fT1fA33QR4JtLY9wiPsmMQb909BmtIa_uQU3yHLOf4rCHfCv4WsQq52hLzOODLtjDosZsJ1QtnkR_HJTDXAEt39ASMQl-CqNZG0KIhvwaqhZ2wgXPgwXLNKxkZc2IMFQkClc7an71xqkSDaDPX1YpzjT6cjmEHmj02OR-Li7xKBrfCdKj1_rfwnt_CorUfZg4teFW2b2fRLXchoegslNeu-L8OUv47prBipa6xgBpFE_fXPSY9q8hJ-NCY7z98MkrV8-O9UEoIRapXO3Io4vVYvi71EoOZGwvpvMJKhCWrKzVV3fOMpeGLJWkrnetBnoSSHzZoWtQnnd2Pj7r0fqZSrF8dHSg5b1TFZ_r_W18KFMxRiaV2lfW-ZMujWlGAfnFcb-EjtLYgs7Y7lAn9dgi1OwDlRgs-1Rjga6nfI2bHLbyHXeLsVdwrXpYJ5Xz4b2_7xp9c-is8BvpZvOwt7XTxcytqiJ6X7TctLW5_0OpE_J7JQ7K8G6NwJIlsSmxBmyA2z2y59h9_eGsoUq2ac6Hg17NOVhjjaiTXOYBgDd5FrDtavmG7x4I_VvlU_Dk52KMY_96_ktW6kQnQMScebhZSkuwH3Dxk\"}"
  },
  {
    "path": "genesis_data/genesis_txs/NE7AIvW60iQL_6aagNTSiaMpmLfAfRwbxau5FZLA10g.json",
    "content": "{\"id\":\"NE7AIvW60iQL_6aagNTSiaMpmLfAfRwbxau5FZLA10g\",\"last_tx\":\"\",\"owner\":\"4cAfWNyrBJix7ys3_Jb1qYRYpeHq05r1PnuKQL3L3PX7j_di_ZJyLSFH72VFqLov_nW1Z0gsALTYJDDzR0bCyNYe6WmC8VefMHpasXK6CQfD8LUfxNq0Jau3Lggh4RCIGh9EP6getoSyrrgpP4Cm7xNKB6zvrpAHR79jSICdjgU-qq4UT36mLOacDScg9QP0sNh-SmQxVWc5FLZDwt7wNM40FPh3tdDKCypLzruQekk2AquBXOhkvp-Q0wJMDiywzVnd2YL8PYYILx2JzjmwWKHhEFtRFmzsqa1RIKDDWr9TGw5EY5aCbkhFIhQuySR61HNSGbXbkpEAqGXBWe5jOT0r6OjsuvWAv1VMGJSj2z-HTJLUZZm6tx-fy3bVdbvwaC58e7Ar5LuLis63ON_EMG1vgqn8-zmvauatJrg4El5QRarZxSCAM0vnJqtqASxy7_wjVfagtq5dWVH3K5fwrlyjscJ_vaGWJad6Ydtk52XZ1B8a4lH2bjZTH4K4MRv6ZpLTnCOVsjJJj7n8DOg_mZRozIEwkorlQ8ZbIAotrhID9kqP1xT3r2vWukN0p6NNflTY0fhRwA9EHVPcp_FbU7zCX2094VfRQ7hajnR7G-NdW3kTL0KrGoC0tR8bZLYhPh0Qnp3_c5ks3uOnE0zqQoX-44xhcLMXyWdRmWEgmE8\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"a2FzaGV5QGFyY2hpdmUubGli\",\"reward\":\"0\",\"signature\":\"S5I33unL0jrZYDx348oZ9KFBK5rSy-D9YM15e23CsCt5UVGHjIcDKVLzndnflXnKiFJew83hS39JrYP7LXXX3DNBbEnLF-0K9ZZDHWY0dACcbNAYvI0Rz8z14hSMP4TnzDmIEQmYu134LqhJArkS6y_oHo4TSa4K5bIenoIGZl6CU5Qn9E4cX7ZFtFPIgNTrWooYduNV4E6jkQkp0eu5t2LT8h8JsmlbkPUatXLCZLg7odDSKRrzxnuWE6sNEj6tCqcqmi4UAUQjKrXD-6fmaEdJ5QhdU_rw-WmYCAkaiEQIa58L1sK4-ZY4kt6CZ3_tVUUukHdSqu5H_TW-nFNShU6CguGtmBzJsAJUs_1Zut-B4ybOcTUy1G1EChJ5p9T6BhxxRozvhksKnL_YD1feoQBUtO6TCeOhVie0JB9Q8AmIauZO8Ri0hGQ6bcpjnaVCgyBlbuXNlmZT009cwpPNxhP8MFYEfGzuhE6qr-bjscB8FiL-zObBhxbRCjGeutgymIrsIUvM1Je8TwqXH5QGQtFd-ttlq8XOpzYQI2Lyu9dm2ddAEoiXQ53LJu08ZXj5OU0whk_L0CLCYBEBSrj2FOFhCEp70I4dnlp6KwsMXXqH62l74FhcVvW75lxuJrwua9x5dZEXwRsxX4exw2uDALMOXTd-wiSNfcW9CMEZTl8\"}"
  },
  {
    "path": "genesis_data/genesis_txs/NEXnMz8Yuw-xfIPprKT2iwx5A1UjWwRHCH7XCpeXIPg.json",
    "content": "{\"id\":\"NEXnMz8Yuw-xfIPprKT2iwx5A1UjWwRHCH7XCpeXIPg\",\"last_tx\":\"\",\"owner\":\"4Pr5VItE-OLK8ppbGwoC5lMclzV4AYIyAkRLU2Xlhig-VhwSTrAgPbsla9iH-wn7U0n2hHlEDJ_XjnIWi7lQSuxnPycjYoACbk1LpK3bG4KdMCxrfQq0-2gPd_7kBev9nBCLf2qq4SDiwXbJ3WA5fQCwlJeszXpYQghCbXmpLcidlW3tbeuWrlUM3TscxsF0TGeJDbgEpDR5FcESTKh7gNjFfWhVCLgsZjGQuFwdPI6EOExYl-6EuoAq3suYCnRdhdIU5NDO99qE8tzuj4WUCDjRSxZzPCJNdbg_CovnsHBz7z-DX1of0arjFMyPglof6st2LpOM8gPYJAjrIHcRfJnRyarJg6X6JhIg_hvh3_GBpe-Hz03y-6Dvi-7KaumwTQZJOq6oNErVYqXxV4B-uJdpKFqsGLl6LnIIgNkFZFn0qO-Og2eqtRhzJW85UsiocOMl8FrYAE5McRDH7_784RAwkX0ch1gWO84zw7cTOFwDdoWUZ0m-DofkhL55q5xBD42DjvjawWQFvL8lzXmuGI4hbSkn2o6tkfOQ0fyQneSPF7M4-TfRQPHr0ykGL-8pjEcTY9OMenw8kQVAMPxrYFcOxfegs25VqjLnZe0-lQelF2a4IcXhJC4DrWZ_JuUGDXhHEo0_2vEcoZhETgtF577oX6-gLkteIxu6Hxk81Ks\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"bGV0cyBzZWUgdGhpcyB3b3Jr\",\"reward\":\"0\",\"signature\":\"T4FN6FEA1ZbHMj953sfP8KGzioj-AZgrZcZxUsM1rnVkFTCbz-wk4d-gq0noInTsgdToiI5pVJlcY_lYoUKV3UxStLENo42nzNQxIqsOdXulEvsaC1c40ilDjep647OCZrJSO9VaL6xu6xhMhE7L2Mdit8UKuOFDCu73j8pwW8COWGBzLDisqApwSgUfgoJerFRzzgdJVH7Cqyt8yeYYD-erpdl6Q1FhE0UgHFZ99y7MCpaOHlHKS2vkNf8iuII2azMNLueY8M2_0EanzkuLcp0gd6dLTM9fXImpu3JHmiFRk4UQfZicpnywFHnghBq40k2gaMvkdM1h5rAIINo_kJjQ45tW5IJiBUBJBYlHERJXaI003W6wY6KLDUiOJtt3MRVJXjHV2krtBR5xj2jkWnuHu9uqM3L3zxfCDyVEqEBizRfphmRXgCPOMJ-dSdqB535hpyerpNnTlFvAKbNd5w_a0IBV83fPwn5H47gOriygwkFUKc14zfluagYJIS5kbMaXD3yoBdonuRiL6lAyr7JfONnV7jj9zHOuiAA3xAXGzjYZJ9beg80zRM2NBLPlZ5fDqUHg23IRDvbbkWKPzyRkc0Wwc6YtQZPkYLNLuP1Nn428A2tVbSQ-6ZcnVMcm3uYt9FZddmk0Ag20AhsRhNJsy0DW9TwwQzc28NsC6Cg\"}"
  },
  {
    "path": "genesis_data/genesis_txs/NPLj86idALmTczSq2vrZdTs0bjI-e-KI0j3EOWWpu54.json",
    "content": "{\"id\":\"NPLj86idALmTczSq2vrZdTs0bjI-e-KI0j3EOWWpu54\",\"last_tx\":\"\",\"owner\":\"t2J3Ly7R0-NmXcOnS0wb2UvwYZS8LN_U51VK6ifADLDHlhyPW19LBncbv4DLhQ0ybhU6xaY_wlXm2KWcjYMUtHPPn3nHsgQ0T8Lm0PdKvBPVmy_HL7g4YYsh7T2FP0rGdcwtg4583g0fCHxg4OgmzpEwEBngcKJT-LTDDPyOvkL-f-1htgqgGD6x4Y-bgDTsR4e3Kf4MpJvNEiqNUZG8pJVULdEjDf6nFeevbUXwXX5Av-ybEnwbIALrSXZvc4HmVNNXyTenMgB6t9WmdDVzIsaVm30rnuPSP59C4YOGkYtJP_nquoesailMy2qRuJw5HGIXo912KL74Ka8_uKoQIfP9M8nfu_HvtttgQJxqkXhCMBoKdQjiimSfmpjjKzxIWvrjUCrbNA8BvNFn6CmggmDfjSBWcAqfI-lpWqsI6J2Nz9UN9iOuK1m5Zj10MUam6POmzo5LvIgSpzQKBE84-mBYw-F3J21RJN7ywRFt_ky-3J_GvPM9CLHOx-5P_VvGoh1XFjRqw94J0X3JlmxqVXLMdL_Hvce3axI8NNM1ptufduWULd9oVKGx7frn-1fjFnrnliELQONf65wfjMrH6zRpj4m-ACODjA2FXP4TFo8R1lDgs1Nqa1Beh0NOHIikqOWXTId0TqdH-s_JMaZ04SXqAf9vr7e77YMX_QhFrgE\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"R29vZCBMdWNr\",\"reward\":\"0\",\"signature\":\"MH_8fPnTSXZbyve2iqhSsv65n6JGTIAyhWt-cs6rb8zbpDLHbNlvoUYWvkmEtQ2sc-_zuGF_v6Sba6wTpVLgL5QAiNfRmj4wRtaJhk2ro1cNfqTH4YNeCVOnHyhNbQhMKV12SoOeDaPWspALWE7_D906QM7opinnWqe6J_p3xv2dDHi8SJaDo6wjKwZWOR5H-me84cqV05xS621PJfQ-8yBvJzrhcuWlndhuvjpn1KzOrm895nf62xJbgOsFaPu3eg8aV-dLFFwocDtJFW2q5jZUEfK1pso8i3QmYsfN5vgYDeYtQ8wtwU1b6I-AuWn5fZnyUs-T9ns4xjoJ6CDdrQBZCFcKhYgy2RGM1EkNh8xlbGDSYrz4bC3QUQbPZSx8CyHFH11sKd_eQogoLtBXdozPLO98XnXCg5u2LDLJAIV3YDisZeSPYgVEAAVrCd_YLt_mO3vXkamGyNmd7pCKTsSqQxMbTpXPK9NLBYlQ09Nr14VHkMVGowPgem7KQDnq4LsTXD6snb_KqLgxYNv9kKgKAXwrM7z67si34o43IE033cfPPe8e2c-iC4lTJBSHCenhTDoJku3l6VccL6WyH9yXcXAiVdtmK_7UKKSUxkef0MgXuShncjaGOeFMHEbv9Ot1UhWGNdVjG8iRWpfdD6w5Wb-Izm_M_9_fDQF4qyM\"}"
  },
  {
    "path": "genesis_data/genesis_txs/NptjIrqZrQMSdLbXAGyQCr8audCzArV3EofsjRCqrQw.json",
    "content": "{\"id\":\"NptjIrqZrQMSdLbXAGyQCr8audCzArV3EofsjRCqrQw\",\"last_tx\":\"\",\"owner\":\"4ckASt73h9vv6AtmEoOeUK37uO9lTeTwRLxmCXAsseyFxVo8wHBQzuibdR-7aETHy0i9tcSBqaEgI6b0esqO1j_OkKPw2jTLS6Jv23RPxXwe06HWPUjI255k4vS5X1LzzmXUae2F0Vr-C5Ku2eW8CJGkbmucwnzwNSWbMdp9tpJ1wHXeauHgjNmuleiJbyrw5rUlC7YJuh_Giu7of9lDs03VPX4-JUa-Uh3nb06LfkAGbw9sfYcmOz0CAiTaTVebYiLEcbar9fADGFwwon1oNXMWN1iaJYABLK685MZIv_-5IpfFRUVdwaAsr2xk-hVdPzXJVV6t6VB5foN4iWlQHp-wy0YT-8owYFOGYEH6-TaTCGHtJPqpl81jhIZ2e1Re0v_UliinwEJ9coIRqF_MNpblC2SG6RjCmnTNJD39ltX-gCYHS6UgKPeZbHyl6ao9qtfCkHXOwqkl2k63OIzAkDSNwV95Il7WoGuV-_HZt7LgqqIlVtogVexSYggrSV0w9pJAdjyFTXom7jxyMx5ro8If3hahAxnGPhOiNSCB9iO3gk1MeeFEZdSbwKQLuuzTXgCM4DizO3dPmw6pN5s8krwkzwoJRkaEnWv2Q3jHJOuxkBE4Psu6eukbWu0ITv7tA6nkAz60TpdOnYoE315xq8zdBYNt4ntLvmBCF6WguLE\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"ZGlnYW1tYTg4OSB3YXMgaGVyZS4uLg\",\"reward\":\"0\",\"signature\":\"VITdE_nP8aLCjNb37B94_9pQbU7EFs9U4jrdGoBOHju66DUvZEqj308sM6Mqo_YI8KU8XXo8Znqiw5blzlkCax-qusafA_rUG-YBwuVf043oPezNLIaVWEFc7pZ6bgZlf2deIJ8ywd-HY3kulZM2JzQeCUu2a2ppd2Ra8BCGbn8ALf1LD7XuambpntFSnnrbjGm_bYQy3wBCfEfsMw3ROqA3-xKgYheIBklue2NW5uAOjIQZbbDZSFGLDFU-4VqYPZUnp0N_dBXkzkIurHD1uNJ_zm0IVG5UOGJNYxOdPqy1TOqxXr2uJ-w8M_fEko8j_19uCo-VmWh9gaPA0O9gmLiirQPYEycm_PI5n2k4ssO5WQdyrxuA0J6R8gQtpIVdMkXY6m5A-7H6P9Q4lQSSzUB5_WaVUNtE_IjMG0iVSJETPFKXF0Gs9HRsDUHPYHlpVDCEyCiQ16XL_uzRjCduKjOVq_HdymvT12IsmZklkalt3Z_ewNCWNMmf-7rjzifmp2vw8S2CqyhTzzTW1YOUwIxQ6CG3w-hlidMjZYZHwATGqwZomBHoOl42_02_-cvBudAcZxBN-yEhxC9Ktct6bbmf7cs9B6ZovI7T-SwTRWJWo-iKNBSZhJS4QBxu07_FrPPRlyhvzoPAKBv62Sq1H3VdQVSQ5_kJPVlnfl806gc\"}"
  },
  {
    "path": "genesis_data/genesis_txs/NvGRQrdis2HV22enpSpPqsb0M8s-pN_nl7eJtalZyC4.json",
    "content": "{\"id\":\"NvGRQrdis2HV22enpSpPqsb0M8s-pN_nl7eJtalZyC4\",\"last_tx\":\"\",\"owner\":\"yUp3Y634J0FKmhhBD8tMJ8pYCuJpere07JaHXF0200pfsSgpkgMVlmHAskCO4Niiztu6J0AANXsaAciHQgqTihnhU6QBwNfKWf5Nos3WWGmH33Bt6vVVODihWSS9kwg-Bdcepu1HaqjptPVvZn8DnqxLZc_vZOW9lrlRriKKUoJiXpZ3_mvjqZmnuRejQfVxnxNUSzfx55DNGoCtbqkOdoVzAGyK0ATVnmH_goDnW7t4oePw6XgDtK63YJS9co_lrSeursJRo1Q3JSWdCuFxdDVxtbOSz4-7myFVG8WqcQYDaQaF-utG5rbBHUvrlxxrEy3FdtotiZQPXhwIee1mxGrYNThAsTpoS_FeW-AFo8edVGxncada68jWwHIiHi9iD5tHX461pYOawz3F-bScK6gzYXi1Rov9JYDhLMHZ94vDhbh5twqv-FF7SrGTu2Hbe8qFSSPZXh19MSJSndDf9Yu6O-mPIK7x2qeZYhop9YSGrjxZou3P_XDC3veP-URZD-543DaD_uHbbTl1g2mW8_WwmLteRsqNhEBUx331a7HqnBs0DVqK2tcRH-gRAVpuLgYWD-MLwKo_vJMDPFxekkPXDxfmP0S55t9_VCQQywiU2rzcHbznhQACm9qrd_GI2c-evL_QpxDLPzg-XxgLcK-E9WCRLG1jHAq9hz3eCbk\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"ZW1pbmVtIGluZGlhbiBmYW4h\",\"reward\":\"0\",\"signature\":\"rYKBBWZ02OG3NGd_a0R8yO0WuC-5IoThV_Mf1ovP_Ig8g00A4LcTrcynwwIHklsIxMQRN0jX3XBKptK4J5Xz0Xe9PxSmbz5nDqm6wGh3KzopCKgAoozefJtOKE_1RNN-7Hayvph-nLlWurPihdwTbhnED3WuQaOMmfRzACJsFv9JPF3FIuaTmNOrwswQmWNoYVgyQ79hTIfXSK0FVHkpRw8juuIRwMcJSmXHsW4OatQa5aQVafCFF2-Cu7p_xKsxqDSYWzQVKo4rGWE02NTKILya5MyznUog_AEAyzRgCSPiC73g70OZYtQdSeDSYyD96exhP0ftf6dsa2PK_VsIwxRCkTHKiUlc0OYXyCCGlYi_MSRVp0xYpboNliYjJ7YGix1k41g3wO2Sn4eNUKdG22gIpA4Ycl1eXFRib-nsmcSE2Jltm3ytRWNokwMzrJPyceQ-pX6e2OkuO948mMmM5G6rgXtacs2B9gX7SHdJEK_BcbwRzYTWfI-GEr2rOGdXs1pR6VkNO888kbntpg0oqJ4ep6HXAHmr9iSVgdpiNKICpRX0d0wVi61w4Z2wD2Z2xZbdck94UYc4R-XTI1m9II2OofKYheT0PJ3wIb-pvpqkwHdgCV0BaKk9fZ8gmTxFF9EPY9jwGcyTkywsj8NZDfrWQTEpBTnWskpEKPv-C7M\"}"
  },
  {
    "path": "genesis_data/genesis_txs/O6qlkPRgr7H3WLHjVov-CTm-q66Q4TuvhP6GC-c5ZjY.json",
    "content": "{\"id\":\"O6qlkPRgr7H3WLHjVov-CTm-q66Q4TuvhP6GC-c5ZjY\",\"last_tx\":\"\",\"owner\":\"1DpCMLIndupNtMa2SR1jm_QivewdCWZywsOEmn_Z7mj1gIzygChDXUpcwJF0oEHZJkEvzoy2Q3wg6nmz_ogbsYnL4zFdO9_32x6K2pqECVwlrmP9v0XDgHrcLhCNlo0ddlmxMcibdYW-DTa2HPHtUT0JwvBTEoWtHY7zECdxgSf-Tk6PAMrKW_Rb8FcVlO5--yqzNaN7FNLT5HD3lnqa2Y1ULFb4uv-WG6jqRAK0b-3Ou6GUBajWOHPpyBiSwP6EaHA7v4wg-jhFNPRamgCJB97lMgGFuEKdZtKXhMIPTAQfE2qPu9a2AM0yw_CcwHziEkF-asuc2L7T6CkDizI4rdHynJAsl4hBpiudq3tVG-G_fWANd5tjrP4r8O9e7G4UqwF77ZLEz-hsBTSIszBtr2ysEHDha0Am33eLV7afHKlCljG57X1iQm43WjOLCQeXSjK7Y3dA_QV0ke7HHJEyxRqqcv5gvvMF91gJ2_Q-s3vO4IFak-ZSxW98pAWcivL9qcZbgdnPpUsUYkFW9SvHNT6GPddF5sZO5_2WDPZ31NmJXNUStGsnUGFfQ5_4l8rXAeXrrV1Y3v_WJD0-vMByv1_7P4vYsrU7jd4vdS-WeTjY-SLei-M4b7CSfp1zTXHgk18yCqC4dSs1lkVDBxqhkpCpFAOGppK2PV_OTlWM2y8\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"dGVzdA\",\"reward\":\"0\",\"signature\":\"dtI1y7oNyS9nF9w4TDeN7FTppnNyHQ134Jdp0YG0LTbAkuY2-TOdvNKNFU3YmUzWPwha1te7QtoJHv36toipEzoUP9Cc997oULvPyOF-MClpygdPHhipxAoJFh8q5LSVT3hwDUXVQshKw685BM9n9ED_8gWuJ4fgxY-6hiG14Pyi_qXz3DcRxGoUzp668C5OhYYBUi9iasakknkOWhfoolMURi0FwwbRnlywno0y6-XHVLh89FToQ0DH4H2_osRoLJxH7-5B4A2EnR3utWJfeaRpqUSZJNd-oB012t-bQ3LCVe0Mg_kHEyPIoEmPpUpM36jjd2bvfEMpsv6n5P5Xy41hcFdpV_KWJgKsGHXsbkMnwH9mreznVEECuzibWOAnkfAASPAmFNLMCGNcpDoVyswaUsk2BTYVtldL7gfdHOs4xpgGZiQLhpuicTo9e5tTLV3v95YF0SGNw_befcwlm1rNHuzIYHhsb1z_q9ly-nQzoE3D_dHABqqOMHND16Y265-MKdesTqkD1tdOk2tFibVnpbY3hD_TFDOtkc-ZDEhy5uLHKC2dYzRGrRyLORs887jIuEqJIZJkPD7Z9EAwPVspg75qNHuAJLbsIM1G8SJEHGjGQov2x8zJ741RTIoQIJ-g7WzaDw-c9Iw4dWjhj7icR8uyAcyEIg6nIYEdwgg\"}"
  },
  {
    "path": "genesis_data/genesis_txs/OILhne7UcvACtB4peA4osAjRMthaZZSW9OWhe3NpLBw.json",
    "content": "{\"id\":\"OILhne7UcvACtB4peA4osAjRMthaZZSW9OWhe3NpLBw\",\"last_tx\":\"\",\"owner\":\"7wiW-nxbsbOZ0z8WAT4JgJa1qVPH7vBPCbbrRzjoQf2JXWHvHpTgPbMhuBxRJgQLUmrtKDZiAuO1RcT2fTP_0sP_PiYXhTx30xUlBXdbPOkNqp8PUsl1xa0aM8pYlVRQSQmoTxc3STogpvpeq_Do01v_OOFqiSjb6gq_hJJCNFzAr3eqbucbiShLAVS7Bye5S09pXYvkLpd0lb5fb2tufj4pbhtA4vTWGJOrH74Kl_76y0etguVX14oD6KooBtHDFGPeVb_QHEU6YfdJZXCTyOTMPN-0JzW2WtHy04OzDmLIyzbNYEjCxr5521Aqzm-vI-937WmrfHYm5Le0upxNTCrnSnLMm3Zo7U3Hyj4RgCuWJ5oSouwucvn9vfiOku3KkC8DKTrk3U80vE_YIlE8PnXrAAO0_-MnFQdcEyUegJr0OskD_xpug-SPOiOkvY4sNDW8qEc3B7--DV7gWNfqCu_Yb5sCwTFQzQur_uAy6uhd3zJHpbw2PIUSASJUvjTjj70TMmQQwZGcXEK6RyF92aj20YR5SAex70uqks_SQ9gwC4aqGdfkvQVDVMCrtmNIFMxtKbthuC_wa758Q-jftLwe231RoiDhkH9nsTPMQwwL3Ly03R1YMsgqdKj_BhGmBq7haeaVfEe8vcscygboiREY9A80VtRhG59y4jd62BE\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"b3IgZGVmZWF0ZWQ\",\"reward\":\"0\",\"signature\":\"3hZGiaw1HD5ly-u8bPDuKopl_B-YOUeu_7_jJjKeQEK_Opjj1ACcZOFQt6VcCT6x3-WKU6Kegbw1esxCQ2OPVL5swNgqruQfV9dGMk1V8wvl6mXcpBtbPlD2BoKB2TI9wd4ESQueEOJP-gHrfMoJXoWIGd7W4IubU8v_3Ox2npe8wrtiP8mvA0-moRo6W0ulMPgdaGJbKu6erLG9vFjKdpBOrHUZp-1FgxF3hLltlWzywhvad9Tmhbgk59WvivOS3piWT-8isWmOYjArXgenrMqWZiqIKxsv4FBekfPZY4z3Twb07r2QKqCIXEzg0P9iM0VzsLqrvMMps5ECR5dT426sFKIKWF-87CD-EW-n4JBFd21kb8BYpMoMCo8pZJMsE2G0lE1hSRrKpI7MHiBV4Sdb_6YLAOmFhfI_J1UX0GAIBrztH7YiRQwQQTsGaU67o3IL6RseKepdu53k8UBzvzEHbBgCjmgSeZbhGildhVkqjHw5QcnTwGxVJ0-BG6du4F_z54zSwE1bLwCQL3JQNPE2eHDvW5YIIX0coIC0--LeOFgO_cs9QYXgd_cCmWKMWr7VoiVWk_e37KcZgWVpG7wf6bFVwMh29Befs1n99pCAdJXPyE72-i-Kf7d7J7Y7KQYSWLXA4FAJgoQ8RlTBTwCFAXPFM6bu3uteVxsQKjU\"}"
  },
  {
    "path": "genesis_data/genesis_txs/OIOqGvvuafD_5J9QzfxyPiNlnqzIcL96i6u4PTUeDmA.json",
    "content": "{\"id\":\"OIOqGvvuafD_5J9QzfxyPiNlnqzIcL96i6u4PTUeDmA\",\"last_tx\":\"\",\"owner\":\"xkLDDrJrNjPpf1YJv5TEWC7PQNvdLvMOFMTLEtoFSRWgnfwhOA1HB1OKmHPTFYyNmRMnZBYfqJWFuMqpMJzQ6HdV2DIzOMBFQMKJTBKU8LALFDCkPACEtkD88mn1P9voVwzhYq-Wkzlk3KlKdv7n29CcPfIsWh11Q5g44CHeJt10z7u2xGgG81HPbK4inlXh5fkpqbass_SL94ALMHiU2AehP5RaC48UX3k1F8DUl-KNmvBOYXFvShHrUrtjriv_PU-ejglKvkNhMe2p9kwWCT7yMTwK7ggeiHs18F9dSXDoPkd_I1gyxfX2dZ_XLGJkjia1pHTV3ZlVzHsAX_RFX-oUf5i0jJhMy-Y65hkzvjc94cJCs3AGhrZuvC0m1JnfRgNjom2ers_-nYrjxbTcl2Fipf5cjcZHQYSgqE6k1UMM7DjJEbZ7WeyEH_fbhFCgx4mxhjUcNXqn3JKo_oik-uD8i4cdLFfrF9iZQZ8GZ3hqf2ZegXojN9XKRWo1qHewJ8o0mWH3uEfY_jBNCAR8Kpq9lMgDBcOVWGGRinysWmw9bVodrrHtuG53qSTs1dWmdHicH-8IVO9TK0sXrGiyW8w3hM4A7fjE-M9PdwHpY4SUmkRfP-v9vDpbUECn2Ii1Nl0FSoR1_WH2AowxyxQ6tDn_USCV17o7AcbuZbo3Pm0\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SSBsaWtlIC4gQSBsb3QuIENoZWVycw\",\"reward\":\"0\",\"signature\":\"AANo0kCrxMyXCb5jqCh-i0Ju6DzsxJDLt6vap_w-JZZOlCfRB46HeUTchGJoJ48IiZlAWxouabLBoH0FGouhpAt2SYac7eLw82JyIEThdlPqtWe3hymbH5-EAUiTEBWrRQd4_8Z7QN5gV7aM6PYjD1h4R6oOXv0A6vRXg4xGQWWwUND-omdYDn53PnOgqkuGDZbfItquhNicG9CqCP8-hTS47FpxbgIgtkuoj64bxnviSq7n5IaeJ8frjFg05okdiGKrWqj6CqPSI8Jd07ScrbUIwzYT_3b3RXG9nSCCE70H8T7dQBnhDUC5E6cfDZu5Hjt4wmNqQsuz5TaHrE9L_jDOYscAOoG76V5ImluPMGy2GXJhsGI06NqitXbj-30QbvIx-56iosG5tayRtmJMRSmbctMfLUtizMOOcM5TXlOSOV69lGzkkFordYpOs7KU2OWsxjnc8pqhoQzlEpUa8OoBju4-jfH4Rvk1dgiP1dbc14ypX_dfx7wluB6rd1TrhQUSuzjnpvGfoNxlL8pyQT0OBzIFKVIz4GcbuzxjqXpldpyJol_o9txgLHFNeCgFlSdqZmqtvYe6haMK6MUBHinwj5ImnAFnNw1KzcB0jn22ySRfpYuhQki_Ax4ZygHEVSAvoce60wESJwGiZ2D4vSwkeWMQpidL6b69WL_jQKs\"}"
  },
  {
    "path": "genesis_data/genesis_txs/OaumRLT8oE6J8gqrQ9DrY_grMuSfWtai95VnqrX24hs.json",
    "content": "{\"id\":\"OaumRLT8oE6J8gqrQ9DrY_grMuSfWtai95VnqrX24hs\",\"last_tx\":\"\",\"owner\":\"uQ0O38J8Xz5fwjrWJw3wZe8sfitb3LON2fnToY0Fj2b4LSoN5cokVyUOoYJ68cy0qVfwUX_qEO_xMZ0KEjHn45Y_8VlnA_hp-9H4F5efHiRgIXlpO8XUEUMQEFOqthxgSQXHpFTkspGSxsXWldlxQsndx23Sbn_bialwLsigXLe7If5RIaGohghCu2Ej_YAC-JRqv20TwMPZbG5_4yfS3Q1DlRJvLRHICTWYD-a4qSqHFRhujG2Sd7CZ4q2YXlUYHk3gyvAyhulF_powKUiGNYK2lcK78RXnvcw2Iqj-YUp5A1XOzo4eltzp6fHohZxLpuLgv4gYVPuu9uQa8itI7WNwB20xXMRdP6NQsGxfMZVXi52q6pmKSfG5wtEDuoKUSpidOI6kVtpg-i7mVx8IrzHuKLEV9TtydoLAKXfY4FwvXeXroSd1ImHU-g4CNuh1acniPc33eqMGtIEji0-7BnWK12-iTLPFIBBu9eUIEn9blxZ-3SwnwJNY6ARD4-YA_U79249UHyCvw4xbXsnxyuy9FkXJO6ImumAmVib-8LW3b-jQ-kgU70ioZpBL2D_nB_MqKN9GmaIjOuTThixDXJzJNBXuPs-CK7mrCtc6f5z0qzCq9mOoHUdxtE1hvSf4gxOdsKxGltWav1K2M3KwLJXD3WwEBJS6alqi8c1VVq8\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"QmVuZWRpa3RlIEJTIC0gamVnIGVsc2tlciBkZWc8Mw\",\"reward\":\"0\",\"signature\":\"QU-0tcYJETy1hq1MV_J3IZoSKV_gfCIytclzcbT_6YLbjj25KKc6I69rvx1yAgXu5ulXqgNrBBZVc8Yc2D7Qci-pcETjOTTfu3IC9sKq06x8_rdzyVlvqLYdbdEBi4uW1PmrXNjOZRnRAxPqjs1dUCI5fYSdISg5EiyMeSmdnb77Xc1MAiXjfeHX25r5pHm6AIs9T31LNG-99KTSjYn3iIJWDkmyo-ntt-oT2K43m4wWk2bpjWOkRVr3NsBHIK2ydkMXz0MM83Vy8cZl71QqBa2IwAMukiuqtd8dJyUQb3SA8t6OSPGIr9GiCCmGRqZ9qjBQiWeq2AlrQc6v7hlVwv-TDbZeThMHjNQzWiQsl6Q_SYepEBvupHhS2gNv87dIRD2PZ3tZEfvYsNJkJvZqgig4nf9xN1luXx-W56nMUKq7aaxzw4YQLeZoecXCvwMvX17MEbORAELW0Xh47pq8f0DMSMxjNMo_df7wHAa4vf6OQT7FauVHPQ3Wo2y5j1akHy0RpV41tKmm8jBb3j_J1e9j0Tgt5Es2lmxXnzuOMfzXTiThPIBZQmtZiBg3DjT23GJKJEhZI97QGZmpNLYJ2UTSbZIxF2ZFRu3MXvhYA3NKzfa0fL8AGqZ3qokY_XXCW_SeyJe6G6FVU8lKiSb0RCSnZ1Xd_AGFvgBejDo5Wwc\"}"
  },
  {
    "path": "genesis_data/genesis_txs/Osgzf9EDK9j7TMlqSJ_5Y1rzZgOA6qfR7ktiakLPk4A.json",
    "content": "{\"id\":\"Osgzf9EDK9j7TMlqSJ_5Y1rzZgOA6qfR7ktiakLPk4A\",\"last_tx\":\"\",\"owner\":\"tGc2zDnhsaSVnE01LgR9CY0uT_iXmQt7ojbwE22x49bFV_af49gW_RDwVJ6YTrh_4ythN4R3w5Q7_F3-RarxGsbe1Lof_qdVsujUDqgwE_WaH-91xfJvvIlPCwTzTuZbKvV9ijfWMJA1DT2-giJcgX6pwLkQkeTJL0MelC5nDemsOGdzph7ot2fnEosQG3RF0JmKQVrIca603AP-H9jeSufbbtD3g5VLmrLCEE9zpwhePc2Tr8KkHzKOWBsE0hqpxDfu1O5vec1rQGPiUFUMHWsIwxMB1U93LxDTJS4bZlk22Cmb9QoFiesk94o2EyXF63qG101feUvpEeXOAByih_Y8Sufdo1BrZL_xaEUCRBiBNKBs1CFFV0shkTtjspOWvzK169xU9l10egKfF5gM8LQRDmWDkWej1Y5M6W0s23ynF_BAORwiH_rNAzIs5Zzh3sBraIyMjjMO_l52CHQ2YFkRwJRNMe_vGQfenKddszL-oNcTskM1nkqD76oBNj8dwx5dnnD0oDUn6Zms7fEG2Y4o5Y-3CMCcN2FeO8LDziAntiSCY9XR82Auz0OqMR7pz7vVSxaAEa1KoEfnzqfG83FpNJrrsHMEuo3PN_z4cJSip5Ih7H4A-NsQ1Y_XsNnJX8_y5bNlmJEom8vXC5FtefVwUxK1fAXttaJ0ffEcbSU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"Qmxlc3NlZCBhcmUgdGhlIG1lZWs\",\"reward\":\"0\",\"signature\":\"WJ-T8oeJ4Trmu85d1sCxP0hY2mvGj7o5lzRvewrDI7qMOoQS7r4Z-L9NmheIcIq4732cVg_0XWI1UOaVbXunnSWcAiwQW-Y5rHzkMuAgmUXqWaEUkBaPiGaGWMnHB3Yz4UGY6NdIBsItBbXIjkS4r8kUmAq8Wok1FR7L73bu3YEf87RRTPLqVXCg-oYxpopESE4oK53gUL0ywApEqnyAT-aeqpL0OO48qqO8d0kh8Oi8Ll_Jl6cYga_wrLSLWSZgGDjcZncVy0Cf9-Dskw8AZKodGuTveErLGt0ezruMhY1K0WWvApphP_pj67cQOGLsaqSgiVGqgVXsE5_T55Mt5wlDauJZgMFdEkctkQiNSFFgSqF6QrwX6kNmgjHNcostLve-U6G2HV94nZFFTAiBwk9kGojvBJ3qZLoRJzsBvDQEhI1ixgRhiD1Kxe5Hrjkk1TJ_j7Bs8gcgOyyVhr3TEaeQXrwL6CFOFMVQGwyVO4WR-p50daFWi3A5z2_mWuklRAHN3ZKEhRPMpo8p47RMETtYIa4RDj0e6J6Pgv-Fvvx3-prYdYqvyKV0sL2UrIIN5x1VSJH8cDqwfsS8B00IX-9TjndrJUDqnozFkCBBRzSJxoZpgDF6ayCvcDzPRl0X9GV-6UAx13eN9Afx601qLdwTzDIuOIBA0HDx9JVeucQ\"}"
  },
  {
    "path": "genesis_data/genesis_txs/P_pvvzlCIX7Yaiuv6zt1voLcn69gb9jAHPRhHaHjLng.json",
    "content": "{\"id\":\"P_pvvzlCIX7Yaiuv6zt1voLcn69gb9jAHPRhHaHjLng\",\"last_tx\":\"\",\"owner\":\"qx21z7mgGCuKfCtUmErijrPCG3TTBu2uVUiW7r948DgPpCvek_PzJHP0bYYoz4CWooDMru6ScrTspFWk3kXDjCUUd2m-dT4hR0iqyigfWaJm8GlpyrUTKTH0p66BnQgul1DWhJePB7i-DyBHTJOd4dxdDRBD35SZRyWNJJwjhecE9DRt_zfxenMv8eXG-CkyQeMBnk_5xq2zRHCS4KPJNaTZzirohvFnxdv93WXj5X0p2FoaUOK6kPKQHXpArfgymJenHcYQHO2-OTDPO2ioKzT84SSfAbUVtB5IlghmFu3YkwUNLtgEq6WbuV_B9VoEyTMEV46HFa7TrOUsb6kv11Dn2qZRRUuZzKPeFOTCjZcHH8cUo-ZfJwmzHW16JnxB3gdgHQEl5E26Hw3D97GC6fHS6naKC6-b_Yjn-aqTocWT6D18PQ-bXpiuyrdt2yW7ONfrso60UG3npWJtTcIvWBTH_fBSqClhm_bh4B_pb3EkwlFFQFLGeXsS_srr_s4X629rtNOXTGxR0bS1-PUq6FZdEnHXgVVx58lGZynLzkweloczs4Tsv6PN-naK4mCrCQBvU4RrXFdeWjTFqxrx4JMPD40wFvw-Vqwdu0gJtLBlXvAlgKp16BnVQAH5Lvfe7Dh8kxeRREAFHhu9nVurnx4sl4_9HC-y6ZIxQwh9joc\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"V2UnbGwgc2VlIHdoYXQgaGFwcGVucw\",\"reward\":\"0\",\"signature\":\"UuqufsybZgIpvF_IPRZ7Yz4meyjyq86pD_nB6NZkzHQmGeJFYbScQb_hQt8VAGsIR-A3n6HeBHCjEq4EoW_vZTpdjuNH2yaZ-8i-hWijTnY3oiu_mYnDcvfmYFJhz3-yZrJomagemy99SZ4kUVx5B0N-rqkiGcdSOR-j05kYYCS3gChxRyDiZCR9Y-lSRaoA2ePEY5XjUtyw93OVe2yyy1-N1IP-dzHFlo_uVZ5Ef246J24mnvYayw69gP8xJ3LY1TeSxZCRcRvYn9IhC_vkzVWl1vg1leRjl7JcXOx-lotNvH8J4iGp5lGEorIgebRRBn5C9SlqvEXpBOIrmXMyGu9m2kEUmN_puGud4VrTteuRqusHaLp3OOoI24qOaBlBt8N8NJqg4eDbwL_u_XRTUeBPTIfFgdmJ3gpLudPOjvp3pSXmW2zjQBevAJfEMcNGSW9ldCCrGEoHr2DWRA0es57ALKBhLE_-u8ogRrmKXAUjCbsMSnrYIcnm_mht8BGP3A94XSQfO1ibF594hEtfC_-daXndHRSP0BgZmJtV7u4ehXlfiBpgwygml9MDfs7lu7TIfALuzlmcOD1B9_4jryyfqugzoP3JCgaYnAAkr2nRPoTfdqOeNKTy7p2FbvMwEFurdEy_X-HB5rs7Mzz4XqfpmYKQQR6QXjXb24_4Y5M\"}"
  },
  {
    "path": "genesis_data/genesis_txs/PjeEg7GpKT8twlBkp8GHAsEqfMvmNd3RaAx-l0R_i2w.json",
    "content": "{\"id\":\"PjeEg7GpKT8twlBkp8GHAsEqfMvmNd3RaAx-l0R_i2w\",\"last_tx\":\"\",\"owner\":\"rNRIDj-hqfbif3qSouqBtmm-e_aqaZy8Ecarg9nLSI9mLGBN7M_j2o8xHXvpRtJqojMxJz2CZ31oyPWJWzdZNqDXewhpgThefeb8Qpy3K0MaEGwJ9hTLv53LXDLYfAULEnSxZikgFn71gkwyL48-XoTlMhqaeIRhJk46c_w8t6gve17hQH3foODtFnofiwH-08Den_AzveYHsHAVDAaEMsyoNph_Ydfo_aohvUPo-gnuC2jWjxaSnRYjoVNOJjyQ9t7Uc58ys5dcSa347CHDUDjxlSm41c2LfCN44jlhHhpwYj0lF_rv27xerB-EjaSBUxcmnV4xGC5F0FhAEPhwitMARr7oSdrfp9iqlRY-Dtk3SJbYvl76ZnU5Cb4PDQ0b7o1JMUeqVbuGOxn1jPjH1wXIhk7Dda8eAS69N-taujdYrIzUMSrm2XNRRWSnfSPNuKWTDk2iLuKwva8oZDzE1RhHUuQ2d2RdL1nrs1cMy46Un-12UebJjUSGt8zqiM-74E6D7V69OhX9SapNb2Yk0fcTbWfX7YsQeJzN8FbXktMwGyGdbD5CU30mRC774EbLV7zSIIUdUcyqRdHe1P6Ierg3SPQahNQoGiWe3YYY_e8HUETx-ERPrhticBrDj6__699O-pIQ_4o6qrGpyKANt6CeqZmJ_was1Mupck029h0\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"aGlnaCByZXdhcmQu\",\"reward\":\"0\",\"signature\":\"QkpVTwXpJmZnEkWJF0rQjISC0RdKq8tt71ZVJOh2jDoHSFkAwCLlkFwrBMArcqhrsRXqICxGFgw48LNqgsecYUi8Aw0z8sNC3R1xI5QG3nMz_ad1ioOClfxBpKyaWHsAfpOEcLJGsL0bW-aqknRS68Q3SPeNTKojXQK_pFw3mQEeigwJJkIT5S5SdNEV906GvTEnL5OcvIpybgcSF_u-Fh2DCMIx8JUt8pki6E5ndr_sD54QYmr2Lf_0VCsbQbxLF73XgXUSs9oFW4bj1_lz_UUBIBKVWuALWfljpIhvoF1zOPSNeVpK-2dlkluabolH7G3PlC8xyLKoRMLFryByr52MhyjDjgOlZxXY5Wxo6SrjrlCo5_Y3ihYPcY8MGrwpJnb3rURyhTmdmhJzw4S2IBC-2Nr0XSx6SKas8ptzKaTJv8eAT2rMOhvRH4lsrwJx2TwpNxDypNFtiDoieZz2hkXzwTvYI3AvraNdMkWKe9XZQqzDdpRfgWCnvYBwECEqeZ3Z6BIdDrTASj4PePeBq9d5WU7gM8By2YE-JqPX1-MtfCB7UJ4oKbBZ1g5Sz2rLJubDkLKNeys87kLcyjj2AFxYHsieY_o_73xk6Uf1F9k7BqepzmrrAgWSQ8uOPxixgSSYChOTc1e-g61-ok9FsZZsUzl5I8V2fXlOguz87Mg\"}"
  },
  {
    "path": "genesis_data/genesis_txs/PySb_0NIjROmsIgwz4kMwC9MVmeY1MwuKdil0WeUzxw.json",
    "content": "{\"id\":\"PySb_0NIjROmsIgwz4kMwC9MVmeY1MwuKdil0WeUzxw\",\"last_tx\":\"\",\"owner\":\"4sPyS3wSmm3amE9f7zfmSP652detpvtXD87-jeTzz2ib5I8aJFBURGhdHE8QvXhawdY6VF36nm52dXgjBgJAGH9pg7h6MSUX-hLknQImvRdmeimg1Vhc6Ol2-o1QExYENwcRyIYEzVEemyHm4uPOWWrNuY9T6PSoX2sXjaW18ySCURQpXNJtuczYxdkGok4mOKbmuegg9wE2079UEjPEeeKwOafsNaNt7tTbaVNtBuC09EdSZ_UmqalQ1TF22-VN1jyIdJ5ncpjrj1weay-UlKEVabsFcvyJaMMfSnme7Qz1939ODpKL_gp4Y6jlAJNKVDT4GFYMTFttyHk4r0SQRtjTqO3Si5BYQo-9qe-rYTlD6lbOyEyBDi-fobf1mqeIc7oPUmJ1iM30mM5bYbu5lWrkv0QMb311hXz1l1FgDnpvBsFVChiksubRlNlWYT-a2lk8Gmt55AOPrdetSBzt5vYbo8m0ME1i6XSD5WuVpPq9jOA7P2gCwSR2i9Y5Ab7YPJ-Q_gjClMc5pvnwlRFVbCsZs_zgbvbPlgzFldz2aGuGFzmGHXW5GKUSR813BwdoG2WDecHmY89Yu9T5viHt2F7hkrd3XUNlZubFLSkpNCpfK6kGegAmxK5KxrCVZtzfsGdnrzp98LKg4q68Gv5TRE1LFsB6JDbj1744r_Ov-W8\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"R3JlYXQgSWRlYQ\",\"reward\":\"0\",\"signature\":\"CbBLAopxhPg28OVOkMKblyxhaxPwngtnbsjroaIKA6sxZqcDsjXwVXYWEmErtOVsHPd0z_BOLqXMOaAEBiMsGyadyZMIqqWYidOa1tJqchCUg02VRqJkBW5Fjab4iTdUtgFw_lFzH7RiLbzPuN4M96W5gdvRxGPMnBhsvPj6UW2JDUEr6vfB0fcYOC0FAwSeptcNPL8R862zVMhzXNyNwbOzrLoTrwcvYTuhjA6yFZ-lRfQFv3VlpIQXZydwln8ewQDrEFfaiSPYVj5bIgW7KdfvJZMO3JIRePJ2_ZqcMRDVa1dQjf20COQaJGUnfg7f54SQpU_-tc6L-4StrWINPP9PnJ8TZ7xPfxqn6BukkdP3dVCZTwo6qsjNGnBaecRvtvY0Zxtq4qVzeyFXe4IP8ngiPQEQue9BsZSX2D2kJveNGfq5lQg7z_jr7Nn5GndT8i5zXSRwuJHc7mouY9jZFdF3uZjwNnNdVroadbjV2g1dr6ec93gvhmDa7SfShwNIsOuWB8l6SZjhI31JuchIuf7mlKxcfJFUMBCq-k49yGAVWDlcO7UVK-oa50KBsHw38p_EQatlh-gBdL9EK5XQ5DETOWUlihytmVjDf6bl8f_7jGXsWIUMAySXzYWvmIE8-VMyoCgduVv6NTqLv4j7aXxDBaqFlF8pagC7C7hFk84\"}"
  },
  {
    "path": "genesis_data/genesis_txs/QDBM2PowqCX0eUCKzgV-DgdzeDz5TXLKYS3HVXLyqoo.json",
    "content": "{\"id\":\"QDBM2PowqCX0eUCKzgV-DgdzeDz5TXLKYS3HVXLyqoo\",\"last_tx\":\"\",\"owner\":\"sKFr_9mPdKkp4gZJldT_zPdpvoa-jhZlz9aBMnycurlpo2Ger7pi4YEnYZw65QPT235_Tkh2nRNfJRk22tOkLN8wTd50X8BWFT_z8XlFrf10BWe8qOnrqEeu1jjV2cY2wZzw_V71TbuE8HmI9l7EesHx0lXpSlRDkBTkGz1zzkV1jiKemOdKq4V6xbH14Xsz1wgPFLvHQAUfoA8o3T3rWk7Bl1U91F-R2NRl1w3bamOdrWAc3nuTz3bGDuH1_lbPOePUipQaHeZ88TdOnXdEkH4LbQT8l8I46TyTGL0PoGraDUpEkjkAORhJh4gqIlKuC0_KrOn8P7kojvJLoIzcgQv-KMNcnwYJ6u0cIyar31BWHfGCpspEa3coCMUzgH6hSpr2D9QEGHy27LtD_ZTzL0GvURfwppl7yyZYDP0vZT0-EjdUvSObq93aXRTBjZzK5hHhW4LLRqgg30DvC6LStl2i5t0S8VsVNGDeGEVZozc9_gS-8I49S-ECukKB8_foluyx8AeFpLhW0mdnUlkcQU532a0iNaQ2mNfLvRGS2WBv-oOm-kyuIScyYvAEU8O236yC-iQG6A7tuXguMHTOWeeDr2ragUPcsGH-TdAG6VMqPKHzUlkmb9r2Hde41TYhBH7WpVa75dg7XH7z3OIuQavj4i-no7KrCVY0gBPXO18\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SWYgR29kIGlzIGZvciB1cw\",\"reward\":\"0\",\"signature\":\"NPjQIksTI-HA8loU7x_evW3-IQo3e6y1HvSEMFqB8gsHTI7o6t_Pdsi3t3xkPJJjMupGRu1MDlPzbkm7QeQmjWB7NKTTI8cy5vWSh3ZGQuluUEzhf3fFIqp4NeVcJSGXqa0SkY1w7dJMCjtWn-8KUaWlHmCdThASaUMvU2aXnFcNt-siPsPMBjJwhe5DcP7tc_eCpdAkyyqYBvfSxkt118ozzrMhNK6k4ZDVJ7UHux9_Q0c2fyIfSrTdazVgG7Pfpy2ujnMRal2LuoMRFjK0LS2SV8wKcPkA_KJf7FnSHLF-nhuLQJgBQA6ug4DGgDrkq6-lhZ0Wm4-bxNIevhyYbwgSUUwPUru0LoMNfi7RvuNDSTJApHMhtKmJ8CocpJ4VR-czo6T_cp5-Np3j-0osgF-BELbrgrxPX0-KrFx9aLhsdz6naLl9rvI9PZQ2qbvasmO-S3mKst7kVWk_HJeRu07x8uUwQbebSy8bvkILGQN8GxT1ewglvuPBmsHmkEAxxgU7pH-xCP5Tso4Ba-cDn5RdbID1fCRvkJMKRp8xDvsW-eWX6yBH3a4y50meN4Xhg841atwZ948POaLEYogB0TVyKRBNlx8_zTN7rv3Hh64BGsqmwIGaKLiVYGbd55qgl2up1fuYYH5915g2F1CVt1xXizagjnnWIAAlEyGynMo\"}"
  },
  {
    "path": "genesis_data/genesis_txs/QDbVk-efwdVbHDGL1vZO3mQ3g65ol5RR-1wOvPLUkkE.json",
    "content": "{\"id\":\"QDbVk-efwdVbHDGL1vZO3mQ3g65ol5RR-1wOvPLUkkE\",\"last_tx\":\"\",\"owner\":\"rfYGHsPMN14zQv7XOmivaVfCCYDbgRGKUZcjUNnQXo-QS69Sk4V5rQSHR8q4cTb-Seh7uPIa3yvqcxnTiROvMQLw1Q5cxiiJxnANyC6CGzLlKptbS-XkI2-WbGPUfQOtyNbsL-uRDpqhgSFAnrc6V09K9JM61Sw-Vl6uZ5iCezt6YkJnXkdMv2KP38gJWmUKFJmxqaHCyLSbW_ITrFvjtbSKAFA_uGImTMAQ8s_ggMxbNUnNgcDwmadcw4Zzxd93pNxwUvwfhKp6KK-jXM8o4hhuECs9AJB5aSBCjwgf0nBefcXEm5lWjHt0fqQ7OBHn6cKs-rOOIsQCK7LlMOTT0e4WJP13U36BiQN3ryHIgoam1bzUHK9h1-UsV7NkMOMTFvFnxmma7DtWhNaZGCE092Z38xK9KfyVPag9EbU5wignK-InhDdQvOx8HJUZmr8FyUVIY3xewxJ9cAwdXZO5udpEvf3B6xM9Bhg7mRUbJP9ZiEWiKUEDDlNRRCgjg1WgLEAVzwnornuAyyWm9voQqXnGX4RRG--Ae-ooKXMvEEWVC5PrSoPArI3EJ5LW__4xOGsMdlntOqBXHK5brzUoZjNDEpsmYGPex42jecLkvxPC_TgkMWxq1Ln4sEVtX49HYtwIAXgWaiJp9j8l0Eg05B2zAo1MzbQps37kexfCZjk\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"VGhpcyBpcyB0aGUgZnVja2luZyBmdXR1cmUu\",\"reward\":\"0\",\"signature\":\"SQI9Ae3MeN426Y22taZPwsTLkFr3weknZQ_hTZ0jrVdCI8XQ7l2fnyKLZKyybD19Qmf5yKUlsfS7dxDg5vIiMxBtw1Fif8tBjwniOlsWXS1j_x4ia2GyiYV39zUdDmTKgj8oqWqItCk1MW-Qn1owckWn0MqSG42I142DyxkimzAIZ289pai6B-GQqVJ_7M9SFhq34GUabrPKYwobQOJQHgvhznCuqv1F_F-Zxq3A2g76wr0jn8wM0jc2iwrK4ziEbvdNDtmfHCflqfDalYVIzvgjq4RDg41Br1E5CbJC8It9MvLojFTpOTVXoxnMLM5nQRXNbGdTEO4ClzgEJzt4IJir2DP61iWTVtGw2Qcr8ex2cxumq_MQJqowdN1rM7BpxhPwUYL_EZzbtsvQwSKnkIAJECO0jYRQHOQ60b3haGQk765Wm0W0VB4GJVbVWI1AWJOxxbF7uF9fczexyqnsbIe4HbGGQCRPMGPj61qDarVi1X6KYiHNU89WAZxoZ8k07tocpyMUlKenHa3h8Qe6n3HFFE81OlosDbybeJaTr8uso15ltFqklocRboEbh_qSigFurA3jl684dWBK0_UF9NJOgc_OKC9CCUn-zh8vr3jsLohRf5gasET5LcIlzV-QDQe-_Zxv4ZBKWt_XZHq1SRblw5I4fzzelkT3RzXPXkc\"}"
  },
  {
    "path": "genesis_data/genesis_txs/QJlE99-614f6XzZ-7VctQjX9DYe5wnO21aHSgg1RhnA.json",
    "content": "{\"id\":\"QJlE99-614f6XzZ-7VctQjX9DYe5wnO21aHSgg1RhnA\",\"last_tx\":\"\",\"owner\":\"54Cn7vK6n5KJN_jLKRzCjW5ifs0ArkCwITN9ucVT3j-uy_kx0Iqiso4EFiLOaJyGUCnPirPiAg2GiADISEaT_oYvosN5DBu1P0eN3kqf4j8NtQp4RxRfoxHdmHK1ksRkIA_Kj5pDLFsMwtRFeji15x7NrdEt-47jKKeK7fya2eUr4SU4y0FKC7vwWVzbrc8XhL8m_eJjBgPtDp933FqpxC2Zx4EPIVriqQoFt0iekdOuYE0J15NKN-X_eStWjPvSgxA9-K1zWT2daU5iHne2McBgLO3TbN5QHmev-FcLCDKC2SdCUNXZhi-lWGOhYFIvf3BEsNUagLKfBGQhnVuAZ2KCfc-ncpMBTVcHLzdqYT-V--hm3pXvQht5FnfqGpJ2fXh-fs3bZa28HUw7CgatV_NJp1zqH1aYQCwD71Ek9c_vgBmRGjwRdZKq3SHwZbP1uhp567sMwFASCVKpY2l7-2hsGESzfBXON9gjXNa81kLrJvlnpBX3vE6fRvJSyfr_mf8Y3rzP3FBS7LCOQh7kbGdLMJFn5SXStqx6_VDrMB_WSxo96UpARO_x7lAPtFtqF3k1liwPBvNe_ctsOAHv4ZauhwGThaSOtPCexcsw2juO4_Kigx_A-Pu8BFKnYT_deyAD3oG3OiAdHrM8U5TzV2-gxQaj1Cg_4qdM9xGavi0\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"TWF4aW1lIENvdXJnaWJldA\",\"reward\":\"0\",\"signature\":\"ncTQcboQLlW0u3O7qDWnKnQpCB0kelqCQuo5b9vozGt0FKF8ryssWIulc2TbBni1OL27Ny2jtgO7OiBtDKAkvNuZ94rtUaNBxIXLLv0PLYYFZQbPyarbID50ktTEPtsQrUoTORN-IH47bu3dubdQBuYyLIIl1hedup98a7f7fw0KJZcncm58CAeYWQMVI0VxWzWSaQePlqXPuZdCXPhH48fbCOw-DXb4zoJx69FBHVLLBCoK9pNmtPAqzGgyNrEfU8YJenLZt95PIq39j4vLZHzMyE_vGepMaPB4gBBVpOVgmlqcCB0REQjF9UusrXOeIsiICQRK7mEy8k_hwe0VzsIAYA5htWVpu9ESYnBsPXgXXEIeg5gIA2SG1uIKSVt2aIsUH6NlixKxw43enulf7ExEptouCK6cTkFW2ouvN5Sh3dWaQjnywEEp6syjC9uTIP0uUEb9a6MI9L0L1je2iZ0r9eIqRA1pUmWaIwi4ljXon6UISdK6KeRipPeBxkuUorjjHB8YwYS2MoHugnC0scfIWpWVjBZKOjrwDnvo5Xi-pGLIEp95fIYkh3ConjMEQWt1Lt7DjPwGcGP15OsKEMROO-u6VMEQiiqv6dDraFXGC99CL2-SYUXdqqqYMq308E-m9ufPurOiTMYooRjWXovwmp1JcBjnSZf6kD75cnY\"}"
  },
  {
    "path": "genesis_data/genesis_txs/QR75we1zHW-qO7dsI932kXX0YrAIyuC2XIDRhfmK-fE.json",
    "content": "{\"id\":\"QR75we1zHW-qO7dsI932kXX0YrAIyuC2XIDRhfmK-fE\",\"last_tx\":\"\",\"owner\":\"nAWqUDiELL9x9z5OipjQLsXx797gmEhhG18KdXoaTNp85sti3yThyWtsb-s6oSE4wgPVhO38XinXVOpIlTZWsA6Q7i7Zah931L_NTeIPBnYgz6OFLtBABJXIpR6kFr7lmF47PHxDd6HfCr46fIY5PvQWSObg5bEK3rJO80oKgUZxTiFfNSHRfQZaXT_uApt8fi4Hb29wmCXYOnZG5p6pDiYKnG3oKKIzZGJY9oE-pRg-lynnQF_2vOv_YOEQ2yxNg_977Yopua-pG_pR3AyXkP6-qOqZblg0pUAFrWjh4i_2vZuZ87ZtAs78bqwctKy1SW0fp47nlJq_MHYJXXRi2bmGrJRcv3-tfHDTAiR4x1U0HQ2W6bopUyB7kQtg3jGq3hzEFVfniTNCz5qpXdOPcOVI76BDWyKgN6ukTPDUdrUZ-sIVijD2L3XkT1ltHp6rxlYudx0YCsA3oE-yuvds987XnsIFBdWgoCg28ErPLrXKt7R_s9xbdOJIPqfJfi7cMEzie9_noKvaRTunWzOMpBKSI08HrJNA67V0fiwBrSuQgA4gxZ5sKESb8rRb9Xt5LKuTDvn7bXASvrpEJ5QRHczGAlVgyu7jNTaN-5zA7zEeUCftdAghK0b1kDBGID5aOvNkeVzvFbusoSyrtYZvNYTXhhpHVb5YSnQo11ZPEj0\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SGkgSSdtIEphc29uIEJlaW5n\",\"reward\":\"0\",\"signature\":\"LUpyZGEFjl5Wl1aGc4joGFTWIhkoo6LCf1cMavVaikP8u73Zhq4WAg2Lf-2kI2O4BnuNeMzQK-gRZiGeCQBB5ofzV6KM3zQ7jOuY-NfSUrf_CRLb4GQTO7L2QMsJaUixDUXcCbes41u-F1GiJt3nOUkKOBOX4xt4O9W1nYCRyAidBXOImrugP350XdW_8uIXVX5Rnjdus329vVyC00Dy8CWxPI4ilOaoScpP4g_SDixw3w_kcLqNhtFMpKI_wYK6n2r5Fkwzdb1cGAXllomBHuLY18BqpX9Mg3gLrkoTbLEazCt3JgsgGDAEJpSfT4SjnK68uBe93dKfoDbKH6o4iIgIqDfIot1-ZmDs6XwqcUSX8LX4GdTVJ1v-YLwqgu_jtf4a9OLUVyLy332RbRAtTpDja7c1zzjb_I0PBVuwPTrNLOx1lpm-v5M9CRNRvOICnIKYDqG5l2uj-7aq6FzNtNR7s55_QxGfOdP--gw3YFwEdBkblffXnuJpBdDYIIyWS8jhG5iE6MdbKCrZ2cLG4lUozKgoXsCjZdXLi3f25CbtXe8D5zp_Zkr6kRX-OW8y66dscTJ0gO5hwdGxmzRq5sSoFsmoDohmTVUuSBQ4mdIMrJTHXyI3I1hpjCQLGYYnStUqQAZA6VE5hF5aGkOsB5WBYTwguNEbxMOBzq6W75c\"}"
  },
  {
    "path": "genesis_data/genesis_txs/R0Mhun4e-WmLLGxnJq4SDTRqyNvTDTKC-uXuol1s63A.json",
    "content": "{\"id\":\"R0Mhun4e-WmLLGxnJq4SDTRqyNvTDTKC-uXuol1s63A\",\"last_tx\":\"\",\"owner\":\"8QWLMurCUQXQHGRBvmvoYqY1cAxUl91i74uvQhnSdJ4vj0Kt0P3NhhNbLMtwW5dYjfHDRAzc3IhJh0vZDi6X8iiFky5E3FerJwDckUoeVqk630MyW161D1gMh1ZGywM1xxHytShCY9FZYyuUso-nftwka0rEGDCckAquXSTsdJBavzQC7ejRw0uxZmNRCbYD5ItvVr51xFcBq-Px4pXDWfcozI_Ej_j9y2F7dyksSqdu5Jd_CihjUdN5HxCWf4dgVXKliqoQ4eJ81Nqe8XMeJ4rO-sXk4hh8__nyAWDyk6NV8uB7gtfT3YQUgVNZTmPQ7sCF46uSmdW-Y-j8TmITvYBAyxHmjfIYs35rw9Aiyf8rrmDlUnJSfqjJK8Vol24d_T0dje79XaziSHzvyUccGJIISn8IfNzlIjsG4jvxayDqpGAWvDiJAlHumUdXcYvm_khpZ33SBcS9b2lVjTsX8Ke4X2EeRE82Q8T4iJwrP63a2b_meyTQpqFVowDwdH58x5bp6NZySGxT5nzSsQD5SOd_VykCapE9R1nR0fmYYkFUNYaTVLhz0J_c8-9GPyj_B3_L6jZCGBpyAuAsNjhq-Md638fMuEE1g7s14ruo5pSz6yoCX1aPeVKmI4wBSzQb1_OCWXv1Q4QYVPRd_zsCpnK6Or4-TUqV7E2PQbwGVnk\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"S2FybWFwYSBjaGVubm8\",\"reward\":\"0\",\"signature\":\"uD_EURuDBYTwyFbv6iRW90HSQITkfjTBoK6_U70awvfVTrbOTj4i6hiZZ2zce91YQkY4x4nOskMJugmY-dkr599odayCQJ6hu0mZlyQ2tPSfx1DrmyQURvsJOwS_3IdrZ9eq5Ho5Ly9V6nFyJrkMlEPcBoYChhv79ZubyQIhiAjAhMArPSq_wRavTpQ0hpBxnQh0KZwpIgZtxJEVDiG3NUNnCAI64pgOav0uou6l8fqKvm0V1SnTB4yu7klWRix9Lnx7eB_u2sly9m1uzvp4sqaWo39-RV18cuAzJ-W7BaXlidq29gHiK1N68GwNLQKnANoq4I2HpfETvSujXP75GRylAsxcNN3-bGg9hptjBfAUXuYtk-WPEACi3p9ziCqWVWUX1XTYbKFfRbkxwRmivrwO_ZxRv7ZSIIYWxdMCar57DBsa2Q5TjmH4GlKOmamSWi8Awydlnf5l5aiFa1nkN1avKxyaX-X3K3ZrRVrRpon1kKQhLMJZfkDe1qkwokJnTCLJAT4pGdAtTtC-X4Ok4r1dm6f2pbqS_jh1bsYVDpLhg3Y2cnvDHwBDbZdU82T_XZCCJv9pBAkWac2mUrh5vvzAuNCooWq_DrH-qIwfFcn-lS4jIKEl8UUoxDY58jIfQ7p8OaJW_gE8lzcpxvblAa5nHJ-r9xrK-Jn1Yq8HWZk\"}"
  },
  {
    "path": "genesis_data/genesis_txs/R2h2i6y-KFxuHukxmHIjSncPZSiS4tpuzH0tD1NAooI.json",
    "content": "{\"id\":\"R2h2i6y-KFxuHukxmHIjSncPZSiS4tpuzH0tD1NAooI\",\"last_tx\":\"\",\"owner\":\"o7Xp764JXAUybNjZg0Zo2_FVgo4etg72aCcUnSR7ti06P_WqCDKB0eV8aObfixRXq54cjjotZ_s0GZ8oarwtodeGBX1CaDpTacoy6Cdj7vOBaDKpCH2IUMaSZYJGPHxv6iMQ18Frdk3_dNuJCJOWebT8CBWUK0DE5trqJGS22MlW7HeHMWa0CWNnsiqqdZ8EMrplaMsw_d_TAS2LwUbU-ilbKx34ilgkWD52k13C63bHOax3ZyGKykMNo6WmcAYsuIKfzYOLiuPKQGm7EbA2MEc9fzUb_6idcn1EeQlBcqPFpKAkFzfNt3Du9ID6FuRqYSRdyGqEEi1W_MAHgpUtUVECq8r0uJunVzN7fLmvFjDpQdz3uwss1ApwTRdh5RJT6JIwmWmQUGRNpfNRzzMoRFCc9Q2m53SdKcxchH7FVuW1dPnJYG0_hpcwvhqwp5Ek4RghNPn-19Oyfuu5t0mM7rEIEKEr64hrpk2PRDLR7DoHNohwJb-Zb20BkdCY0Whm6yRonCVgL_bKtGCzhBpazBIf54VUwqzBqesl8xPK7fDChvT50UGQm5aKolIywW6lGmVtDeJOhbAIqFIL04h768xkeX8w5c6USYbXNN9yMbZHyXpuerCnudjSipUm084r36Ao3JI5YCPx9zkOrEzh4edvsuipThebDsSD_LNBbl8\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"V2hpbGUgSSBtYXkgbm90IGhhdmUgYmVlbiBib3JuIGVhcmx5IGVub3VnaCB0byBleHBsb3JlIHVuY2hhcnRlcmVkIGxhbmRzIG9yIGJvcm4gbGF0ZSBlbm91Z2ggdG8gZXhwbG9yZSBkZWVwIHNwYWNl\",\"reward\":\"0\",\"signature\":\"OfxPk2AtB09-FSyG7eSD3LUYHglTpf1y47nKgydLcM1jVW7zAWAX-do2EXlexP_FJU6dvhwTX0As7v15AKosAWwz8TVp2SxPf0xK60vVF-rCpUYAe0bMuARk_56PB7KvJ937vKrzLv4I_2kPF0ZBv4R16cTb7oObwjV-AYKkIerJl3Oy5U4Q4Pgs7w4BiWRHSf_VNdrtuNDtsKVx5pEQEZF9B5Nqqe7Yw1fZvzU2YJ_-fc1F7OLTFdUMrFsHHP9YfY4no80QbzI9-3GR2yONCJMyU033_rNEtWvp9ZGaijoTru5FEUCzFAc_qlJLRqBelw7qe75Md9jyTFykNgbzeWuwidFqLQ5wp_zRaeo92LyhdPL9L3yl4aRtDsPOr7QMC9fDxXT-psuFdznVYxhAnUU6EsAIrkDTlPMqCwU23tZKX8K_dSNGUQ1E7tviyaXBUrhcsHhzu4yRO8VAWtJI4CpsWFcBp1pgMsVU-Hx1SDb9ciaANNz7h_GyJIW68U_jW8uiIcIhHJxEm3XNs088B-4F3STl_y3ZqUdOPybGupOjNn9CqrLXNPXXHt4QN_MzlGP8e1PmLyrmjnTleHGDlF5a7X8CAmeLDz4DYC4tlp8IWyvTvtDBGeXUnUFnxZSRm3caZ6nqYFJv8CbmwHJ23D7MWNQnyYCmh_p-BSWV0p0\"}"
  },
  {
    "path": "genesis_data/genesis_txs/RU5mkM_3UrjRMffwgj7ovDMYxxjhfXvliozhpIqw0sA.json",
    "content": "{\"id\":\"RU5mkM_3UrjRMffwgj7ovDMYxxjhfXvliozhpIqw0sA\",\"last_tx\":\"\",\"owner\":\"uBWykl4OeuYQnxiik46oOX8cM0WHP3_tYUUVF8BQ2RNQNyqethufKnEb_TJcNteAPjmm1r3XpzRFSSk3eyIDOVgKy5kKnVSazptjmA0IyL13q8XeMimE3By9FoDsr6HKkRNLvQbfzfmaDgdG0-jbP8ICjOBWEhFuDeImc1RmqiGJx8lYgcAJdG01hXboN-dNnK7ThRnnnMEveIdVzuKYMpF2sQK_H-osBSDr1BRR-104dMwobhaJE5IZJaxZgIOzGdAGsojNOnGBwOGeOL6rGLZ5bRvQMaSMzuUXG29SIE7-JhwxxRatXdJ2aln7MYANWGs-Xim8HxLiljW2xzRDm0G3fbDRsxPOjMA_PcVzfsvVLRSyCGLADIJ1qrQA3NtYIHn7_Z5ie94UxvShk9T4HaWQxUsmZ7sGe1y6GvFrTF2rFuHdSbO1yLGrEIROgu8dfPvbeBUTOIh1nRIToJIVr0C8PwmFyEv9gpPbu1WjlDRm39--kXMR9oetMyteb5Wb2bAwITbv17P0jnzy64Ahr2-PuoxJkUnDGLpkJLue81J9tBvZweUy9IEvmeEztpRulzjbnYysO9q7mcvW3iXo7PeYqJ6Rh2T5QB3dw8NrAIiK3Avsj1a0_pAtd6k_48Tgwzqc5Ds80qrW8fzR4aRSHCFse3LD6Np5qUnFc_geOjM\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"Zm9yIGtpbGxpbmcgdGltZSBpcyBub3QgbXVyZGVyIGl0J3MgbW9yZSBsaWtlIHN1aWNpZGUh\",\"reward\":\"0\",\"signature\":\"CldG9jOeOQGdGLslRe6LSNNY69ssFcd43DvS8Qk2LDovx69hjSXiKXV_GaNUbjlHzHaCoVcKTLNzuDMsTc2HS1FEJIb7SNDQ5YRToUQTsUQTo5iOAY_-oDOwJIdr7hRzXTLM4l_JMoCWIR_AIFNE-8MhgN2uFyDwAVAqAKA7saTfWIAEpahA748Il2HNvoWiz9Azn2YOroUhAJORBbsKoKaSUcMfLVweWnG2Wb5faluo9R3jQLO1VDgTQXrN_2pUwTJxRn6NL_UaANOXJfyNiyNzHvdEkV7my7vn7MXa7-wOywPUYD6vlsUvfoM7MkMqxf_nbQYykRvdAHp39pUTRhn2icbDGtekIPrtx4wqitrzYqNeiMSFlW0rOzdbJiK0iYoVQDJmHdfphbl_CxgXR0_j0W65ini73gxBXfzILQEb0-wy2PiTX2W0jIsNhnsNkCj1wtaygPDTF_uaZI6EnkuKLrd-J2m3gLaqhmZ1NEmeuHZMNFzvxGd44h90tw04c-HXIheTnK6R698tSy_USxxiD3K6O0KwD5TVFYxd1IK0cjEp6u355_4JPKSXUjxU4V62XTIc9EYIUN53p661KAzwTiBESuj5fy0WCbws_8AUfMEyqKh-EHDmNnjXhb7RfI787EauqA54qnJA81qfvtJ1mpDQ8vTxl-bF0U3T6qM\"}"
  },
  {
    "path": "genesis_data/genesis_txs/S5Uv2W6erubrzYjzm9QHKij51XE-j-GFdYwcV2uPIAA.json",
    "content": "{\"id\":\"S5Uv2W6erubrzYjzm9QHKij51XE-j-GFdYwcV2uPIAA\",\"last_tx\":\"\",\"owner\":\"su3ovqflJmqgBh_Uraw-mSEQHwpELCOcTKOvMSB4X_ITszVhCUYw0aS5LcJYcjYhkLWw9wP78OOiZAOepQWbMrBuugLn2__HhcMvmcFFaP0lObwGJnzWt7oI_DqckSvyBvmRjUQTxdcOQIeBqLyOXC8Qjo_LqAzupf7oj7NaiU-qbgwb92Fp12Aa_7_zLAjmctxWIagvMhoAX6muMUoUtnEZa_vbke25pB63FG9XAS9uswkJoUEGZ2Zsn5o13ndU_wyn1KuH_5vLUv_xDfXkkjo4dQLSWtJGl4-yES7sLKFVAU45ck1JZ8zftFEiHJea-yxe23Iq8Rj_bjmi45UlxUa7REkZHDFdIxuqgj5dqBi4wsAdEVQYxz7BmJsD-RXVUdNWb74zQNqzBGt5nAPYm3PHnCrd4LoCsXJMtYjRt1d68ioDGmJNKTbK4mUDahcza4Dnc-P_IeFyJedwqX-L6ar2k5YOQJT_Adc0DakIfdI_m1pu8VxtEXuj11dXwhYGeFZWpRj70TFAHCSx5vydGZ_8DVv4FOT2t31mdS1jz9Y_U9DXpRDR-y5FcY--4GbB_m7jC3N-rhgQ1vRJ6xtjx6RQmzQ9ygrczem9QXe8HRiGbST3I64W0zL4GXizr63HXbkh6YwX_TaiKniozTTZoy7jkOjL7qHlG77_Z0VyXtc\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"U0kyMDIwSVdBV0ZBRlRPQlRFTzIwMzBJV0hJTVBUTUQ\",\"reward\":\"0\",\"signature\":\"aNePB8Pio0VwZjJjR13gyDQ9T1dk0kd9h-baQe2DRRR7r_kBZvyLiw2vfD2RraEwBpD7rHVINRJEJFnqeU9ZyGglL990-pCp-pcg5PrXJrSNT_FWdwrc5ZUyPtuEZ5kMe9weRxUd2I0JISiDOvG0XHYBYiSXQ6tIV7ohYUV6JXGwCV81h9oYI4JwrUfGxrTgttt-6Zc9bN-YAjaC5X2VaoprBcxku2S6sOYeMN2Z3pQhoLeis8M6Kbf5Sy8MloZAs-atzeRaeXryllI9SR-EBDmHQ0ubP6YKM2_0fuNJPrr2k5uP8frihL3Epbd0f5Zh2-gyS7YpQ1wX68-pzVC8R9iTskDr8ACfYCRDDalD72TWK3zfIKPDyvqtw3nB9smzjcFs-042yZx5_AWbyHXWddtnY_Lra8_GL7ZWbOgJ1ViUCfQSJN7Jrnja5jg_m10qrtaMSmaZmrd-Kih9m7ydr5kPIVJ4JmpTWUStHeWIjoXkFa_vb1a6mmte9W19cFCgeR_BK8d12dpt1g9Ucw--GPaQf6X-jBVOGhZfZgLiaBAHU-Zug1rsHwLos9Ozou2hgnSW2adJDxIpnhkudXAL4OdJkC2mRBYyLIMmTpETsoIMUAtUaqN2ssbJ60XpYeQdCftP1LCYZFQzTQ5MvCiydOFOiP5kTmpL702bHNQjXJ4\"}"
  },
  {
    "path": "genesis_data/genesis_txs/SBhaeMSTQm3rS6puYacdT-4wzlnkBlZ1agn6IW6Oyg8.json",
    "content": "{\"id\":\"SBhaeMSTQm3rS6puYacdT-4wzlnkBlZ1agn6IW6Oyg8\",\"last_tx\":\"\",\"owner\":\"o3SQGukA79nTOdmFoLNt6I1OSvFIC_cIlcJjGjKwfcFMZaP6DDL5Q1QJc15GEi2gNiAs7HsNtVzDBqHeTEaaMdfF5MyjF-nVa739AkvAFj-hszgvu8SfRkd2YHzjiufPQiL8nZh2UMHwK6NtNVdT5GdbNmAHMtCppc32fZJDi9OKgQTg98hnQKLIttNtTOTBKaEmXltuFprMoAQgJit98JlAKN7gmqVJlPi0LE4Q8Sisx_kcy2Gq4wDEntZUiUeqwLw-P3xzNR99K3TSp3J2YQbnp2FVuZhR29pUmiktQmFW1lJaQMyrRk8GjrrBfWUxmj7pQQ32gXw4l14Sw9NpmaSodh1TDNnx79VfnTlQITEp_e1nsQbE16zN5Em0-lOSgAgaGq-tKqE5xqFZOov003vjnlESnsn_8ruTy40bmy7B8v8mPuuqziv_g8qwY4RfX3oH7L7RRfsM6i5WmqR458Hq65xkxadMbU0vnpkYtX0-Y-EHaLT6NkTZVb7QGdXluNv3CsaN9qDxhgYGgSiaFSZERb1s1i66A_7B_iiD6ixWXFiqM6mBGaAose4nMED9JDe8c-9UrJQ7rxi07P-L6DDHzx1e3aUt4WIt8KMG0yMd79vcvlh56En0vOE5AMZG3Hb2NT8wEI3de9RmYFcGamijMf53qSHQLr-JFPMvrps\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SGVsbG8gb3VyIGZ1dHVyZSBzZWx2ZXMuIFdlIHRyaWVk4oCm\",\"reward\":\"0\",\"signature\":\"LjSevjaKn1AqFy14nx79Fg-jkuLiENQVZsWwLiS5VTOTQ7T_7hVfQec08y9Sdoqv4Jv8sVYa9jL5RZfNUGCDgf1fpQ6_4ThbSz53N7aWb0sbZpIBUimd4P_-FDug1bWeA_UDoCmz53SGJjOZaY-uhOrVosHZx1e4FTeSqZVT4n5s_QH6c4nrq7PHKkwsHbo3r3HYnddgo7BY9h-GIAKPtDxq1qGA0RAHkfkUvgneqfRjZxgHoIFpYbvyI_lD4-inSW_8qIQRorISDrcYCmik68Sj17wzaRKnaljwSViB0f2wsfkbJV36hJxEfqSEkaeXVo4z72-WPzuWHXizfkp92sUejyjBnvslaVEFJ1_h90gosc6ZmM6nK2aIprkJa8f2lmdwRRRoIintDuT--2N_Aur8z-DMGB3-9Ni9gNv3Tq41ScHD12Mq8e2_BXgEnVLIIWLvHKCoGMGQlhzIKsYLy4_SDt8ujWuNdOoD2evwfsDe3BYJmFQP3KDl91M9yu_Dpr0f0UR8SxbMSqUxV5nKU_dtOy6CVVcTBvnM7Vshoh5xDu1xLPHKYos3N-oqZMMBol5sBEkSprHTg_UghPTeGdErFpJnyuysIkAC3Dq98o2Om2y-fqDtrn27W9TMoMMiH8F7v3yATek2E5Wkq_lgSyfgYHt4ccrl8cDGWxch43w\"}"
  },
  {
    "path": "genesis_data/genesis_txs/SCN8yn0cQASui1DeV4mMYeQrRn8eXKr7Cp9ll7L3UfI.json",
    "content": "{\"id\":\"SCN8yn0cQASui1DeV4mMYeQrRn8eXKr7Cp9ll7L3UfI\",\"last_tx\":\"\",\"owner\":\"pqZ3jbKaPmncUkZ93YuOo50A6owD_Y41qhJBcgudK0PuJn9JpRCFgiIqdbvUhUwRE65dkb92Xyl2MZvVTEgtnqgzdV5bDwh6-Fi6uHcneJg-uksrsXzCd0SReg0C2X9ObqHkti07vxtFK8DRSmDkLE6I5LwY74-6OE17snHviJAJkzFB0iMPhuD2p3O18fl_yWWOMMOVs080ckrB2XQeUPSts5CBquZJgeVThZiICJbHTBGOzGtRrvXk4iUHquV3OnhN2rnDriPftc2fqYX3lbF_q5PGNuUBj2f20m6_hS48pv03s3Xz8gIOhR7UsvmHSk13UX_7sByGhE56nr_RUXoqurS9npWo6Xwyt58YpcgvmoNjVUL_4r6bR-47HoLdQXnLNLsYJojSAS1y3PSimP7sJUYbiPg2lpzdliL4kJG1sywvb_R4D7ck6uE9E8-VH7RnC0KKiu7DQIAB1PoyLYuA60tyYEYwjzPvzUMfY2RiotjYPwayQHos1K5960skm-sd7bAfSgaZgOnBJTMv0LYimiw942A5Fu3PfB3Ctqx95XJyBfC7IObxBO9tihePRujbocWYnuO0GeYbUU8Upl0072OfIu-jRam3COqvjqvXOo7Mr5OIReMZWvjX4F_X-6_TZ5Ta288h6QVrVCGmiM2BB1hVmk4S06QZbFxZSYk\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"TGV0IGZyZWVkb20gcHJldmFpbCE\",\"reward\":\"0\",\"signature\":\"DV6_ABJDG5KZ6FalZd6K9vFbMfB4uUPBMvIOdB4h_wod2Uw0294GnJHHMLh2Oqr2m3qH0A4KiXmmUOiUTk5pfSJwwSmycx7iZEYu3k48LQ-VyDZXqt6JaExrGzH0axdmJTUf0sUS2nRghLtM5bC23aSrAU7qYNN1QJyBzW6TWw7qWEHHJfz8XUaEUdfhBcJO3XWR9qzN-Tic9EIFhK1QmKPFxpXCXopLR28QHPGuusEI3R8afLdT82WA8sk83CCzzqMSNqJdZT6q135zzyrrjeTR_NILUqZSRsJ1-oIqSEy8lA9A78YxYCnCUt1xFyOow4J3q8HtGO2Kwd2BPMYE-gAMC9Q84BUV-ccj7gxD3S3ZGkBd2BX_L3sCUx20lr1s-S6oo3wp3OwCV48dsVs729B1IXI6uIfTtebZqDkVkSQMNeYj-TSIpduAT4AcAh3iU_AmDQPeuXTOyhXjo7Z5g2c3iD-GII49tHaqUpXXRZTLkHhd_KFWUayX_1fXsa4xNTQCZBcBM4VqfirFcXJNK10zB_Zoo9nTIPVJencD4f0dES_ZdBakyJ_ReKH7nkOD6K_F2TGN8BvbCX6Vewu_dY8CHsducBbtq_cW7aDjR1GU2uwBLo2EHxC28vPRJmRUjjjol1KOx1Qqfh5QiaONYv7g84qxN4zNwmUS7ISnjZk\"}"
  },
  {
    "path": "genesis_data/genesis_txs/SHxtj5_gLdJMI-6CcspsDbFBuU_74df3I4-sAJkAr6w.json",
    "content": "{\"id\":\"SHxtj5_gLdJMI-6CcspsDbFBuU_74df3I4-sAJkAr6w\",\"last_tx\":\"\",\"owner\":\"vxta9Tv-wEfhHJ2eHdCbBob95AevSn0LEsM9x1_x7D-psaLR-x7AWQPuBC7XZ_dLoFUXIVbLnwma5JcHQvEs70S6EuUUPSiq0-HNzL5dDj9pfE_Krc1AtDuSfFd8WLQU3CtbZuJLyHXLSLAhMkZcZrKh2SxD7F5oos9I_xBWxwMVRetCgfAKw8DAVVtGVP26ggIJUwER0j0V4lBa46fELfpq2wUqQ02R6Gn6uro2juoa9IiAr_P5t_CE9ZnM--yskq2PB0BHIJcrMdPYYlW0qlCervDlmAaVIrK3DQBwVxATS8mPQ4bXcZdnptvw39Mjeo4Fx3TLMuoGnRjLI5hwJokJJGNL5g5x2QKZ27KhnTQLBXB8FIkn-2hmBZZKlfU1aI9R8KDTjCToLwCck8Jnk62DcR5nN0b9r5wxsNImQQejaBc072umCu0iwFpRyTw6oxvC5RnGUEzQmV86fguu8q1F0jAxZeYWtdxAn7aFCw7O8tMaMyRp3p8I_RmE2yujk1IL7jP7MT94SzmAWTIaYNWGClniYtO4J3t7qRaU6IWRJH7nzmPWOIijh2x59u3mBDUU213QyAghVH_C4riqBrUkhfxu4KV4fEVQpjX9B_SVznyWSS7cNCjR6ySljZi9CF_4uKn_oyXMX7dd9Nmg34urA0igsj-t3hKqqjaRd30\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"YXNk\",\"reward\":\"0\",\"signature\":\"fJ_-UTZysSwVs-Y75ROaJuoPi-1GLEY2o38tbJGGDfq_hyYuO5LvOjCQ4ihps-ZN33NH3gCokttfocabagjyk0IyZho6pWaA-Ty4KXvCnHehluFYPYqosv4PiAeEt2eCYSoT3ReJtmpYbWHHOxEbqyCosn84_obvrkCr4Wrp7LJCCTgAHhAMw3SHUQuGHz6-bWT9z0L1EyYROx3bjEFPq8Vo1I7lH9Xge0zQsnk-JoyyHmF27SnM3q0780LUJtgFK4ONg5hViUHiJjcVKAS5TC7i5ESW40k-eTwu9H1RZksyzltCXDoOQy8uNXSFbh6eNZ4Le-Ct60CTY-ElIsudx45CfSZ33XxyG7jrDg2rXLuthn17eZpSpAsuohKSLzOdnuD0jAL_M4nNEocWKF7WCOqusmdsuoeX87zw86EjKyF5Yp4-LYSR4MRj9_kEqvgV0ECKDANp2riRCbekI7fP5w6XYUSwlGgowz3KcagVVUBnr1M8YaXVzb__j_5vH5TAVTf0wQLL01z163ARuiSnWJK8Kf417iCV7Oy7b1jBFfe79jkZl8I0PkU7n7RxMLmfEY2x6KbYwiU0qX-gP2yyE1iydP7TWRR5ZlCo3MSbZT37xSm5gZcPVq8eMFszTBrs0-VRXRAHoHwve23R1J9J2KmlF7BdIhk0kb37INGPRdI\"}"
  },
  {
    "path": "genesis_data/genesis_txs/SJXMM0tlXown7l3ffjhsiKf311FDTRa7QkKX8tgyEZ8.json",
    "content": "{\"id\":\"SJXMM0tlXown7l3ffjhsiKf311FDTRa7QkKX8tgyEZ8\",\"last_tx\":\"\",\"owner\":\"3pt7riOg0KCqlhgEEHylXYH2a95lqbTMr5WR_f1kD3fx6yJHcTh8Pj2WU8lEVc8GCfzX1hzBCgkBFq3Z1BsKjAbIWjrzjPrigDkXic6T4V-ypzKw8gCIYVVAMnEla58RJgcjmSn7fexKCAoQXBrMv7sZb0-uUUR3LY4GTfRjPNZfJv7CujLPWJJRcQ36tZ1vHbFWN00_KMWe5_2m8Ww8PiXA5pi-MEymiqjBFoWPywJUybB81UZ8uWyKVQnu0qpLSXaX5BQZOdqbi91AUjvvemLbbWuoUksXPESpbKx518IfbdoiupdkF-yYwM_Lnj3AarPjc3gxgSKFLqGcdTbEyOJ1azO1zVlLhxsKBYLMfoyFmlA66dEseynb3GfHA506GlCVKFfFH10AfTYcc1WLUUzyBZlE-B3bOknRQJ4I5uqooffHmfX2fFRmTF3DDvy-QBiFfL-bJoI9NHBR01dKlmHZTBsG6KO_bRSvfJhciHoPYkog1utM4XEyfJ3i0xw0J5urrhkajQbx2BoVb0uiojvaOvcfQsRh7S7YsKocnBDIl_4qemSxjwD3t8RfNoFfzdjblv1ltgGMK-SnSfMGJKPo8S-DVIZqBvdos66_d0e7WHMi1K13PmnfEcBJeSbdLG5Zsi4oEyaFHbuT9Bx-wB8j6Q09XUEdS_b94AxOG_k\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"0KPQtNCw0YfQvdC-0Lkg0YDQtdCw0LvQuNC30LDRhtC40Lgg0LrQvtC90YbQtdC_0YbQuNC4IQ\",\"reward\":\"0\",\"signature\":\"MUt9CH05n3eNjI4FounCZ8oORsQB8QkT2UHJS8Oekfx7KNlMer23tlxCFrrfTxQwVY7uTGutVyWuZp4xPD4FFsw1ytIAaC7056jKmhYXESmyYvHF0BV-EYtP3DP49lACUTgyBm_ekNij5NTJAfqjFv_DYdXt7kGha9tYSjT8NDKQHIW3qeD9vuBs9vQ1el66i0rQ8Gr1LpMMxV8x6J2o5d1CN2PE533zKbxjU8OqcN6FKKmSjmEBj84ihsa5tpGdA5PMBAcKMMdI2qyek5NrJJPDA_y0o6SlUyk_8q5iBLYWw0Bb8nmi56w-JHbd778IaKHIaaMEuXgoLnRjluikzNxzPk6Am9pwAWnoSCrI9cjb1D4pXZlIjOszPwixfkR5C_hpo1w7yNqcw7iNFpBPWS9CUiknEqbD6zw-PdwdaX7cYhzrFNoK4oE1BUlkd_dYZ6aDLDvQGDb0hI97C83ziak7cGk9H0l8Fd25u7pUKFejEpEIohog5dwfRLaq8wYkAOMuCYaV38svImIcfklzPwxB2a6_oSLnD_w2Pi0uro2PuhUkrVtnYcm4OkxwCDsOO-RKLn0sA6XrN8t5I9EcMTQQ7N6FNcHjhHDAwe6hZ98Vab_MioGI5wz8Ph1WLY-2YhZaLwWzX7IblzkMdmMDmt2810PdyAZ4YQbobA5xlD0\"}"
  },
  {
    "path": "genesis_data/genesis_txs/SWNkfm9ZZPCiYKFg6oIW_IgqJp5Ypbp-Fs9S7YgPm0c.json",
    "content": "{\"id\":\"SWNkfm9ZZPCiYKFg6oIW_IgqJp5Ypbp-Fs9S7YgPm0c\",\"last_tx\":\"\",\"owner\":\"xSjidkjEW1TTEO6lsZhF98M8gPeAZBwZkif5JMDcdGoXI2nYneqIlPvSeWkEVKOllqhgPdyWaSHYihFCIHOwsFmy51xh-HDUCQTv15FQd9MZbW0bvkEjgIfwHQq-JWPuObaIRBMBvE97IAq_4LhmhUWcvjXt8vej1CjIucn99_LJGP4u841cw6whJu4-kosoQIF2Zg_dfnFw6WtFJq6m66uOUjKKZab3YURhTjDBgvVcboqG-itaFtevYsb4VFl0jvwLXPaFhTtBR1eD5uNUQLxVP2OcwZjTmr-eCelAxDS7YGirXwki9CSr-MXgNhpBLvXT6l-TInPLYbj7NJrzkffpnc9QAHJK7B7IlPmukdYuYNF-nip1y8hhEowTU4yNt0Sjhfnsiycr3rHrv9gnPSXZ5Epv9seHEZwauPI5S8Q2Qiax9F61HiDpnW5FdoNicaJAWYT9sEpk45pgV5tMT3IQhGRBPt1oPq8YWYwK744xSiJUzR_JkPAHoHY0jlIIl5lkxjWiKNRs82jnFhK_HgC3sZObhKkKNWnmEEfKeJucX6HuAs5IfEUv7Jbvgdw2qTrT-Hh9Auiv5YHbOPTMxNudtmTfNAEFde5_9mgwJdzD4rRLE3ZtYeKg_-FvfFLtAoa2PLRguHuCL5KQdvlrNSGSuGNcyh_Fge08OHX6bbE\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SGV5IEFyY2hhaW4hISBTYW50b3NoIGhlcmU\",\"reward\":\"0\",\"signature\":\"cRuFXYLofyu3oac5lZrOTtGeyids7QQPOSnWjeO-8KoaUuc5SWLvdlzXp2QZtJ02jQKGp9yVIHr_peyu9bDF0IV1XOvJ-mdQe-LlcREI9s_GnfLTzfUbZx62scffRt-bhPGXG6O-KEo_oTSrK7EJcEGEIIhqasUDnX-vlUr6OvctUbTSbRd_du7jhOOJA-6uKK3edF0pHErpGBzqiaHzt1ts2iVfcK_CO9TazEuIUoaCtCegLPDDL56ygPxqxE1Rwhh87_5VS-KQJgev4YnnjBq8xBv_egxU3aHZ2Crc5INT9o9FmmsYOlkxkBRvNdD0zgNHjxaVaBDzfbv9cFP0Mn8NEm_9hFYhOeFuePZvFbzLtNwuzcm7T9CFQHfSyRIlv3bhEcoE9sEyfRvAfpP5gDV78q36b-o1h0Hp_AqeZBswle0jvJtdbmXA3RJbQxRXzfidHPiMZvFPjwJlReXXmDIO8LT0jCYzgpuHHYuH9TGMrML8hOsO5miR6_b_IgAvETQNRCffWujGOwtdmoEovFQ426TXWEwaDvAeL1ksTjE2mMx1GRawTuvU2pr28U_FmZzI119MemPMsw0gDQXJh5fJy8TnQb1GnMLdAkbvKyIJS-Cev1AH29pr_AO54NVjKXdXXx81FEEqJvMx_taJiul85YTOBtxUya1vdONKsm8\"}"
  },
  {
    "path": "genesis_data/genesis_txs/TFX7m_Kf56rV6LNuyQ31NeVoDHJ3x0YqhIv4-IBQ-3s.json",
    "content": "{\"id\":\"TFX7m_Kf56rV6LNuyQ31NeVoDHJ3x0YqhIv4-IBQ-3s\",\"last_tx\":\"\",\"owner\":\"rXPk340SHXLKky6ZFKD8zaZwHaxWxMQLSELy6XX3I6ZBpUVtJdhQZYa59Rswc-9IqZRrZQXNN1TsSKk_h56bKs0cOqae1fQhNfV5lymVjm3-5RjtNQ7gjVjyDh2shugh8Y8H2sD8E2x3S4au0ZbvkXCNIMegTAMm-uBi_U0fy5zZS1FXlUW8BI6NfEmjqBsZRa-UD_yRpGR9g1AGOhWVTrTpLRtnBefaOo-g7TnRzSC8SByKowO0tKz_ph_iUi1VZN8OroNHmniY-Slb5OSUd1eY0hqWldud3C2dVznMd0kZCgqcpO4SmriLApSJdkOPXFMQWpeb8zO6S2md4xAJy4ou4PTVjGzdLxN-WndC7ujL9gPd7v--LPFxiH0bKpl3TOuI3NAntYePxE4V0RWOL4D1kl6f1KhIAeBYeCqx-N5zMId0KZ6I5y3lDcNh0FvMDNPab3uNGe8yK4bOpi8ZDF7FvbWXbrY69oEpguwfTO014kiBgF505IQzC870ttab0sZ43Z3KTq8QWZvOqII7I_ArA3ihHVGURwkDQ1L7HYANHSY-d2s4PqunWZre0HGezDHtcmTy7S5QvHSCnRhAjw08P5pTtllVfHHU9baosNHiLIcMuFiI23FwC_B-pf4H1VGu41HOoWb-m9Duy7MSlhDaAIgCYS5SAdsWFawdXoM\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SSdtIHJlYWxseSBleGNpdGVkIGFib3V0IHRoaXM\",\"reward\":\"0\",\"signature\":\"FEs0JsWvJ5tacf03JZdkqpafBsfYtBiPGnZEe9-Y5kMK4SkVltH3Bknr0JxvHUB1k0AOyYG50-zT23GTab8Y1g1zGcgWwVIOK9uaVq8GhyLJ6y81PvzFyi97iuUO8Si69dbLFWNkBE3eFNUyVYa3J_NpP2izKbtFwxXw0YTob4ojeQC-zN8mW14m_tUvLt9V7QVLC3JhRX-dlUFPjBco93mF4F_ycvqYvlJQk6bof0cxH4BaO1kWslIdoB-Gi9LJ-KJq7RcsZNDbFo_VxZaT1fgIVgBeX5jcggXW4iuqPcqpzMp8ojMHpvWJFRx60JlEBnYYTbkUI7pX8kDepbhUAD0Cm5nLZqJUY8yWOSoZA6Rxwk1zUYpFSzqqZdCweSRcLtesWTRX-dAEkGgwgVoCQuVpobS52Wzqksrd5demb0WrAO_3px46_jyBMYFa98_jRmyp80lXReZNWNznQq1BaosiG4G3Rn7m5Kq9nWIf8U_W7MS11qXcUQFS9S17w23p0_qK3oxAMQtRDcax-RjlxGn9fQeHojRFzzCHgGHCQNugiS3IMmMn8TqrpWouOKwHYLH-Ygiek1Xm2oBdEVT8NY3LPkK-iH5qMuHzwhJsYkcqQESRMOjpXFtobpPz6EariUnMqi07NJgafSHXnsXggknM2-7KB6FYR--9sam3Mb0\"}"
  },
  {
    "path": "genesis_data/genesis_txs/TGdhJ01pPw49A0ZIaCCcYBnL-RPK_3KZH3cA6E9dVqc.json",
    "content": "{\"id\":\"TGdhJ01pPw49A0ZIaCCcYBnL-RPK_3KZH3cA6E9dVqc\",\"last_tx\":\"\",\"owner\":\"6W4NK5__ZpYAEIPeepUtNOJWVM_CJiq3MTFOVFW9bSzxOA0uaQKUneY5lAMPzJz3RaTAEsSylc26cZShqoLrXA1Ko19u6nXxrAfSK48Kj3-ZRHFFEsRIpx7w-y89oT_huI6CSDEq4_xP4hlCT71y9_boZYp6VQjuArGA1JUXeaDMfwxifW5_eYqbMd6gslu2dpXUJcwiJYHSNzeqsoPcl3cNZA7lcz26nJC3123IsKC8bx5g7TnUEc6ETf2M8mm4cW5pddmRWAl45Ac16Z3ktAEFcQijLirgTVgXxaXgUQ0I-rhtjkbIgXOy-h8pe42qGit_RWoGRgFt7nBjCZNRIaiFJzWumNbrrEJd1i-t_ChhV9Vyygrc7Ykgtt6Eze3Vu_lyRzzobjs8jFeZ-VQg0dAXFi9SrSWybSOSHBL3sXMnAXZJDMDMBe6cWbkSXQJhIFG1YRXA-ZsF8Oc0kGUa9tESbGxBFuXpF_La1arLz25YQprPnV2eqxC8oyHNoNb-ngTeZT6EuviXUcPx1OiTOJlF2pCqpRuoIDlfIS31xieTQt__XSnqbbFtduZTKQX3EI4K_ItZEQo_HRUKzBlIm0GejBzg-VXRuTYqN52ouIkShBaloXx7P4x86fMoHLWGpFxcXWbAzf3xYiFFsIAfomvQUEMGRPGw1KITxt0CorM\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SGVsbG8\",\"reward\":\"0\",\"signature\":\"FyHDGKpbyePfXLREXJTLLEuDq44XZuKyGDmgn35TmLOVxgIiWYGzgKifmRa7mvTxRNItfk005nr01lIbj0Nw9s_1jHZJJagFPXTBGxj5MFQmOO5YZGEFnat4Y3I1TeN-f4dSOxUXKGgMvR2npbn_9DiCUZ7oFlmYgf3hHWEgRO55TGs90Y0ky8bVT_9xwJ0FdTqRny4um2o4QE5-kTUS0qAW9Fji3PVBlStjCVUCrUgQ7uc2DI4vWK4Dooqz23Wp1vGr6HlqX__q8lnT-VdHdJMECL8NWWMiMry3AIC8uVPbw0AD4p_yrg3xdiyAtpFLgWG2MPTkWxGY0Vk74Jc_qZeg593aY6jt6FPm_BdLUFlPMdqu7Lsvyp1hHnvXCiQw6tYbZhw0BAQhit8DkzEmTQFu11hw6isNijXrheiipvbcw3pSpbTdwHZ1CinBeT7sPi6FKoln34UyUKFK_cxaaHj-26J2URfNTUGAmPZb8rLkmMsRm7XcookmR8k71uvEGq3YYTRSqis_geOuksqjACIKM60mVjK6hZtE9jjplXS70_M5zP_3ucQa2uTk5BsITNNU7yXgUN3QrouOrsYJN67wLmcEoZStVp3hCu3bfWV4T5gVgqUQnkE-2xMoqm8V6_9sdzBeDeNnCNQjZ04y4Gd_GrM3bQKcjP9dtP3Cz_I\"}"
  },
  {
    "path": "genesis_data/genesis_txs/TGp-18LYjSWQQ36gs5prU-vDgteOL79aywxXoDS-w0c.json",
    "content": "{\"id\":\"TGp-18LYjSWQQ36gs5prU-vDgteOL79aywxXoDS-w0c\",\"last_tx\":\"\",\"owner\":\"u_mRLmTOFEs0l-PA29CY-YhkC-s8XxAgz3Q_sXG7WFDPzZMC_yRAC2PJTOlMuZHVO6eEPVQrp7CoSyOKb9CuCExUsMQMg-g6qQgDd32_07ikWhFKZMFT-Iyobr-HMszL23wGRsUeuepNXLBdxkKKBIks6Ec9u_dw9V6dTaQG2ju-vuj-KYXOIjGkG9_bO1ts6pQaJ2552XC9NtuarurVO6F15qAqUgBwrP3MhC-wHBeGD8HENhsCEB6BsKZ7sP3cFGCvtZRNSL5TnMVz_WQhrFRINTfJhy4qEdgUIHaQx-18fecztgyq7EOBeo1-n6MG8bIJ7LbW6cbGTWwoL5LQrlBiMEsWf32LUpCSwa02ppn9mLKJNS0POKFrHOn_wJrMezEwFSVALFBHsJfSTXLk8bQJWnx_WkDZxDWcmutA60mBhTIZHBXYqGVlVvPfAnTidXX15axpp_phvvCnVBbzr_f7T7x7wiUpqPblE0VLQnwWaBO_soSWn7gThJyWO-5XaKKUrtNjj8U-fesRNYoYU-d8uiz871eot9UGph9DCQkC0PcRxZRCaLcLrFDa-ITvNK61MBfdDXd6ZCdFtL2E2fqL9ney54tJQ84TcuURN7kxfe9CkUFComTjnsBn633X6iyGqHBNVWx4TNRe8Z6onqidfOBS_ycH-R5LO2NdlKs\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"YW5kIHNvb24gdG8gYmUgTm9haCBhbmQgcG9zc2libHkgYW55IG90aGVyIGtpZChzKS9ncmFuZC9ncmVhdC1ncmFuZCBldGMuIHRoYXQgd2UgbWlnaHQgaGF2ZS4gSSBsb3ZlIHlvdSBhbGwgd2l0aCBhbGwgbXkgaGVhcnQuIE5vdGhpbmcgY291bGQgZXZlciBjaGFuZ2UgdGhhdC4gSSBob3BlIG9uZSBkYXkgZXZlcnkgY2hpbGQgb24gZWFydGggY2FuIGZlZWwgdGhpcyBsb3ZlLg\",\"reward\":\"0\",\"signature\":\"Vg7kS0BjLFiqQvYXmQuseJyjmfdStUxBL9GEI2HANfrgZVri4Nr0vPNMgmw9FJkW5AVX2CBW5roOP6cRiuN4-oT7dPyECgc4_sXoae2b8S-qoPtFiq-eut2VaTBKCBdf5JQhLJC7tTMc9drqndLmEn2ROEX-KTi5ED2RhdtbrIUjRNFpjw8tkPnatnD-RGDZjB-D6SCAuYCYvh-6azMI7L-Vd77y-m-x9xyyuju3cMl5XrFbF028Who4UUY9Z8dQaRN9VRb4-63Q1bohHYXTbi6rTJdV-nc1M2NmoYIw0D3S22Pxwq2L2N1dBYxX-DLZZSxFpzBCDlYFMFk9_YmnSSW6iYgqPn7VJAhh18QPNH-aAKUC-q1OaYnEtxgCBG7pHHNreX2aTOBLRYhU4E4vL2XvYbelBNMZ9CKiVnmwJHneQpwuLG5HSxLV__g3H45Its7_w6j3xYrXp2gKPpbBUqUMYwMfgDFN27cY4caQiA_S3u_jCCeAiIBAAzubCDBwMlIF9qhjLFL4fI1s3Qktzy2U7IAHEZuXKyGj7BXQZNMV8Z6M65GyJl_vhhFXcWTMXBPWc8bcg6uXsGznpZYPCxM0CQ1Vq8C-UIzplM3uBtIUM2-6BAltvGmA2cqeZCOIEqmX_NsilX_42p62AWJwxRfxBGtWr-y26zZU-4PRnRk\"}"
  },
  {
    "path": "genesis_data/genesis_txs/TUIdVI5yQH50laHvkxgAnTV6uuE2LXXH3pxIe6Q2S7I.json",
    "content": "{\"id\":\"TUIdVI5yQH50laHvkxgAnTV6uuE2LXXH3pxIe6Q2S7I\",\"last_tx\":\"\",\"owner\":\"ua0v7mt3Dhj_yFfuVQGi0BHLXN-s3HVVA8PIhxrVeqtzFKIjcGBf_i9LONMxWKvBByoRPkiA02JgmL4iBQj0cCNY4vyseQEDfxrZE2G-MYC8EIAsxXgS3w5j1zgKrl9WDd-nA_tHX4Igez_LZhZ1UbyvfvEPbxHdTVHa6iAv_4hgZxb4Yu_6HLzs_g5e5Vx1-xf-xfFOaXZFyYGtGjjbYALcjax1HGdqjSJtxcdiiAgI56_Fdju2l8f1wwMZDV_o6RyVlvTpjSof4iihY-HyuIrHJnc6u8zmujCdMPUrc_bieTb14cgbsmi0vcMuIKXML36BQAIX4jSG5azIuAtSG-ri5HMJudc77Ro7f4YH7Mfjwu5rIDmO4b83BbwrMzcEpC9W09rfeWiTGOp6YrPs90rV_H9_CkQH_xeXVMi35G9GD5bns7A6M3hVIgzJdgLOgoQ1OKfak4MGr_BTcga2ztPS29Zh63SZhmn0QGqJG6BLFI744uWZEBDY4MiDR3Sr2zupl2yQFys1UirO0D5VociUbzQ6nvmIYVOZfO4ImHl-sYtm-pOU6bAdmEho1lEwN-o9m14YPZORtwdG4AnEEyxepHX45dO-NDM1zT2zrcXX0scG3jAaRJxYcHjkW4HgNGuoeLd7Zc-XvBsgJb-WRTsMSeRbiofja94T3BgITSs\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"Uk9DSyBTT0xJRA\",\"reward\":\"0\",\"signature\":\"Q8ADpFlNUVyTf1dE6w1LivfCvInw2vGIfrhaPQGk7VNcU5-Ln2fPAKrQBQJ6p-6C8HOTR6zcWJvNIyFzb9EacLeB--2zshTl28JT8RJkRT4KZYCyBvb39dFN_D3cO4p0WiN7dCI3YzF1VgrjRFlIKK48xfjMlR9LWtL9id95-mDgakC40skNrOQXWZcW9UhUXQ9khvKOAg9LkZ300khdWCm8y1Q67ZsNHxsvGyd4HZc9dhWeGZxc1XRG1O1VbZKVNiKmlnXS-6p1AswcbW2cWm7wEpy7fmD_VmR1wCV49BGCRtYMCybJLF9cJ2qri_t6wmeUFOzBJVpgOpAs_jkc_Hai9yHS9681VWigslJPDD4cz28ucZentgq_N9k3SfsRjw7pCqBBKI88E5nEWSxl5NfMuTtomWcWhPBvmbASDYWzsZyMi59L4QsK_XK0Kel6jKx3HQcvoPQTOb8_egRhNyXxhV5y8BodGPwzpvs_nh5F9DAnT2_6AANE6C3gtfV8cfqgmhMWAxbfKsY5p2smkm9tw8f9FdweAD2-ty4H48ny3eaolM58F5CWE95DrflJosRHEC1RpcqPr9IFZUCa4NSkU6UZyH5wg-ZzpGQsel_oGVyUG4XxOeHpxitJMhvpSUF6f2XlmuJ0gxFgkk1VTJDwZ9g30EM8_Si_Uhtq_0w\"}"
  },
  {
    "path": "genesis_data/genesis_txs/TkN4QLdC4tu-_Po50RYwF33shyHcanHSe_BKpryK0JA.json",
    "content": "{\"id\":\"TkN4QLdC4tu-_Po50RYwF33shyHcanHSe_BKpryK0JA\",\"last_tx\":\"\",\"owner\":\"sVbQsAh8Cep7uJm3ferXj-U54bkY1s3yKuJMxZ93KYePva3DT00YNJKd4JoJXFyuZWqbS_UdmI2DRANCjmKkEOiG4kwAKtt6IsgIA21PH1jOnTR8isIAleQV8FDUFSC5IsVArXJz5IsYxYb-IBOfiHKOPzdwgBw9TaELvfx66shqu3hEOksgE1g2JJwnIvGemAuYjrn1K4eRXMylmFUJqkWSA2qulFzw4L1cHeEUjWsxDjYJlsZS1ZeTRkZPkw30XK1iNJQsF7P0ixVrwKK1OzZsOoNiQ8xZb6VHoe7PikLWI8KOWmBUc4f75dzebGJmGCcYiHAuNfbVmgLEI2ykGZ_HRpViZ2UaKBmfTS0FoJR8_5fV0i3CV4iSZW5uzotLsCRN6RKzGwzDZW-ufdlJ7E3Ihqw401Wqqytzq2oLqQBjhs6-oOlfw_snZg8d1EFhYKwrjcNkVrbDyfYqbenmYJHo2F1PeiG89V--vsQ28xf7YXQ9IQZ2kPK6P5yf72W7FOH92s_KI4O9svKlcSN1SLyoIUSi5wcWwhMroGRPfXLzH8qrnefEjN6tMhQdAQc3zo67SkUJWjgXV8YiGF6nhnnkKTqenX4P4cxmYeGxUzjHFNNsoCHOSuXsV7UM-mfeLY_O8Sb2EZ7dHJbtMDEs-df3IPmfPB4lUCjifKnvLS0\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SGlnaCByaXNr\",\"reward\":\"0\",\"signature\":\"GXO3FB1neK2JrH1QXONZiLFx6URMusNAyqXIkBbwcjtQXh8sM2gIbRUPUlT0kuDpDqEYY9RIrKJm2F7f5t4XR2lEHzbfZSICnQD1xkhx5DlohMvO99-WgWDGbPaxI4MkM2GkskFTU0EZMXOS_F0Vnni6Xrhum2Edc7v7giPNdKH-LVX7s0sb-UkIbDieyriY8IZv7Rwclxe_vBWE2Fr2PfVvQj7GcV8w-Qj6d1Uf24ks16tLbJ2vDM8z0AtwW0-ukKN8Ntrfbnx9mgBUVyiiEs1hJLRL3Os2YBEbgzjJzZ7V7UG2oxkfBQLsnJAK4mf15Qp2Zmj9Q1tm6urR9ndHyskjsFLFhbY4ZQYIgWflUQfLx2cCgO3rA1SJ4mzPxVxCeOh2F8-ZaawcZ05cjoRzoQglnpT8SZ5dNLsliPpAGwC77Z-GrRRuBA-DPamssL2MQCphHorUGaHN3HbgmyBP15Srvs4AnUMg3lSFYNhlqoFibHV9h7uQFBWTMaoFaSoJ525tutt4l3fJKzwNxBIFI4pNY6B8KdAPYjPw7bUsbtq7JWP8-NSKXVbmgmxh4cYFgypZSJqI7qfOrr25SvB9kbEQKpxjgoTUOc_Mao6F4iPcQHyDEAa0szznYWZ_z-6LhwAC1AahEtTcaUkhp46wDATbTxuGHQbXx4JpQ5vP8PM\"}"
  },
  {
    "path": "genesis_data/genesis_txs/Tnf6b1F67AEV2r9Flj8ktSSHYoV8SeL9dFvHRkavlZo.json",
    "content": "{\"id\":\"Tnf6b1F67AEV2r9Flj8ktSSHYoV8SeL9dFvHRkavlZo\",\"last_tx\":\"\",\"owner\":\"0jVL_1GX3dmYhatPZkATAH34cffj9DJ46zkzl9YyDRfYK0YVOPvkMAn4alVvSqZiEaMU0ItyjyUSg7X3fX_Q2coZ4ac5PNTALeuhR9RrFGnwQw45hlyT3MsnZ4Ytx6Ox9MZpyVVX7WA23uZ82MBmphq-2sOATePrdlmxNx9sFOe3eLQwvAc8PznP0lps9wFCDj3Lt9W-fwe_lapRuxm9p3XE2MKE1KF15_TaAzf6iuIxwhtgVO-qPwhLEaKVPhFR06QD8TDZgKNcfGZJeqvsvJrjrE1pzpr815VLD0sdS-64FXL6aGaFZ4cDPuXyx4cIDP5453X4H0CFUhI2phL3IscTRV3h7cc2fALXlXl5aUdNWIKzfOWMPa11TodXTRIyJ6Yt6LGZJmRSfjWKxP02EDb0iUvtflqP4FlD6gySimnKwVCBfPx1jGuusLU-RnmTwFAGj8gEp2UVfdjoQNyf8RRYCquWid0ny80GmFlTOi_5rpBPZbeULzfxJyKWCucpF7uCkhpTL2e4V1FEy-ivsgZ05F7eWXVYlig5TsQQwHixN1OPhRqAmUurAS-qTASpoDG2oQqbsIJ0W82gUt5q1dWCxWWlvYozLbiiPxGHkpUJifi8L6J5XpTIZy6aljxcw0MNejwS-J-RwqZk_6s4o3_LAbYqKfTt24k4hzdsTXU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SG9wZSBBcmNjaGFpbiBkb2VzIGdyZWF0IQ\",\"reward\":\"0\",\"signature\":\"feNNQiH_tgJT4y71d0cxfY6ubntB2kUW-89cQ77gIDsBAxQRTWXOfNK_bGc5ZHuMhEdyj97yqhMKQXf6rTqfw9EyhEngsEsWBRBQtAQer09jynKHh4yV42QidRDJ-dhynf1tDjyGqBWI_GBVlDd8x_g2BCzfoE7WZQkyz2M5PtASbTbhJG4ynWWwEWTWQA-vBsRwr4T1_TapDTEwYM1bU-W-gAsaWqJAAR3x31jMLfIR23uCC5sYor9uej2kPNF56uWFW0Sj-r-C1LFVmzosODk1BMRu_LdqYusOEoSnn2ryKFeYY0fW_RywAJUJpv4eayj7yd53mD7s_wPyeTnNpfngXbsw2IhdP2xVrSzRel4Rmi_vgdvN8Q19uUXrnOYflikOJwPnoB5BacIL3MQ7--X_hSNGnGdcHJFOtDCdxSdad7hsoa7m_FioqvSzf6nl1aVo1UhYg-nabKTUdHxJ7H1s7nsHl1eRghMEjPY3cWkmPWg-IINdqlsG-VK3HBcAho6DLm_Jy_NZo2ravpBtCTnvCnIP66b7_RD2wzYkDdqd5j7p37Em8I8-gKuGBWm18gf6edSiYrUoTP5keRBGYHPdSe6SGGsxTLUPwAxmxYbKqqC7u94z-1LP5WhcxG1DroRD3pACsMoS789uBtt6X7CMabK2pyE1f18LU_5sMZs\"}"
  },
  {
    "path": "genesis_data/genesis_txs/UMk64563QZfxgZr_vKOTDrcp5XJNENF82Pji4a078YY.json",
    "content": "{\"id\":\"UMk64563QZfxgZr_vKOTDrcp5XJNENF82Pji4a078YY\",\"last_tx\":\"\",\"owner\":\"2vYN73CnqAI6pnUkhQdnBaOSuvDZCsBTrj-5C3Y_c6OFxukJXjNDeXYD01BiaeTC4LjMZttmhfedA3E9sR9qx7s9PZ5F_3-2VoXjZUqYahYoFGmWq5sCLEMT4uWCNZwQOYIUGM-sUbzVoVjOmzIpej-RsH0WQ8ijsgUaIwam6BoXFsvhLRsM199KlJbY-E__AeX7U8cXxljn_1OUM8_u1FKv0YoaMiwVJWJSxR6ZXRQDV3ZEl6bofPbDqxh2JdeWtuHaeYps-a69-dWkudtvB1aNgaEfDfPM8TqwCgYujDkIEyn67rFM5vz-G2stkKa8da-kFVONTYzwjt6q-JVgbudcQa8Bj1fbYEpqejAleYu_teyd7pI0AZ9rjH1U7kHJ2tbOC08T383LDyTmkBleInjumEh6ZM9aM0RqEk2U9iRwLgEsEEPR4X3z6QCKIswEARp-uYbQVNmVvVQCWWKhTFbFN3b57sDxBXZYHT6_jHFOTBvsNhlWb2sOEL9pWhppfC4YD1Nu3O2MqfJjctHSR4FOpK-v9HxglUZLdviTRauYWGL4ZryhkOz6cW-yq-girolhVFFncpjNnYzU2EtjJwaOjEzryx8HY5ggdQgAH7B784VSslgOj4RG-Vw3y8-Fd3TlTM9fpgfZqtbNvgwf8ys05Pzx8n1TUFtYkgslaxU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SSBob3BlIHRoaXMgZG9lc24ndCBjYXVzZSBtZSBhbnkgdHJvdWJsZS4\",\"reward\":\"0\",\"signature\":\"E8ENbpxcvSPR7KCVSOcjmWxG6y1Yp58uUyrCV7960u1pkaFE2sFMH_b1wnUFNvojYRDP_Cus_bzWcsjPBibJ6OeHr85IenbjtSQ-Tgj0AoGpnsUzdH3J3giGaCsDSgYJF8v-tovxuJdPVPSwwYX70oGjD1MKb4DWBhkI5Xzw7JodougIH5XFwkTSPkeC-8F8Mjj3-14dtdc8dSiGc_QTacbL3pC9jY-0SWIvfiTzrrciBp67HscfbZpO-EcVwecm3hmXnZCmI306OCLcyuzTkaVQPciIRrR9d-GXr5ah-ry7UpctJaX9uerXxdhaNV5gFHPZEBq1BJJavhDnlWndtMGINq3dAepKP9dgXkAShnosXzzKgcvaK-XyrCvR74D5jEuBNyuv5HHGQcJjZ9-W1GDmHsP-64UWTwQC5mO5C7rP_TIJU93G_cSULBL9DJlrhwAyrk0dhq2gYSh-xY1RxoE1JzTTVR0sXvbiFMazxvca9i0SbKo4HDYgf8z5qJ9qdywMKFWr1Ep0fszdnvcG_9TvhYVi9J_bWOXkVbyyqxoCtqD6dJEatMQ5ES0CsdQ8T6FfZqABpOu68Aj2RaYMOF5WH6lo7yTowMrPNUmGCyTrQx32mp2Cnqa4-JSu4DYJCRXLhsrphAMfNP4TFHPLGbo97MbKPZPZI4P6yoPs1sY\"}"
  },
  {
    "path": "genesis_data/genesis_txs/UYoJMT0QxMtB6ctUB-9iQlcx6fF8R3s8ahM4_iF4wiQ.json",
    "content": "{\"id\":\"UYoJMT0QxMtB6ctUB-9iQlcx6fF8R3s8ahM4_iF4wiQ\",\"last_tx\":\"\",\"owner\":\"_qLEukMBrmsL16aFmXldqUoYG7Twf0E4DuCa00sYRiRwxl8UxG5Cw_7es0G-IehQp0QTA5jYvELFjXhZQElKjhmxQIs1QtQOi2Gu4zpQXaS4_dKGSnzzhWcUKJV_X3RguqEeOBcXO_jT_mQw170IsfjkK8kMDejSDqLVm2jnGiIubqA_NxhdfHnLsXFGc1SKxHh8scFfKlHekL9blvinOfajs02fYMYVGoZjvUyjpsjeTjIls67qnuVmF15fcV0gVfh03jSPb7et239rz6SZmpIDefk0GXkquHPiYSm6HXDLqVF3Cuoz-vd2p_CtMkw1NZaareK2ZnogRgAeI8ea_VKRbHMjV661YEj72h47GPelE_BcYmFAxGUhUMLsNAnzjh9hTknB2Ii1-UCZdi66w2OCFWB2ARQvYK2gu7C0kMMX7AXKDyTo6NA0a6pXYio2M-LQt9up8gjPmRXxS2CFuh5N8bp6QL4tJthI9z_gdYeDGWEHmJc0moAm3x1yH0laRNDoOt0VNWMN_oqZR7_vligOKA5h6ePFrMZSKyLzO_mATv5mWhqySrqQUxvuBq2IFrlhj0k1DltpJvFpgkshNAUIDJI_d14QVE6-dsdYh2ezv7Uvio72gpZoG8iqplw4IhJuwsH87cmGKWQC0n_Vmwtd63yoXnwKqBlntHD99CU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"VGhpbmsgTm90IHdoYXQgdGhlIEludGVybmV0IGNhbiBkbyBmb3IgeW91LCBUaGluayB3aGF0IFlvdSBjYW4gZG8gZm9yIHRoZSBJbnRlbmV0Lg\",\"reward\":\"0\",\"signature\":\"zAgbl7Dr0g-ui5Hw3ulSwrRBtwsm1w3rnBy30eUh88uKjpZeWr8-uQZ-YCEb4bSR4vy0Bpj9_EYAZ9LVfo-nys15PwJu249wudN550vhn4N47J4sDEtxzAgniKXbreDaUTmG2QqFNv1P9MeaAhf2mEjAUnBbaUDQxX-YeEJanyiD8ZiYz7xZIvTC4s6v2CLm-WGSesQwfE24LittR-FMEzjBx5QwDGElkn3UjStN2yc37eUtsBIwI0UnNjJXzFDhI-ULoCM_8blDzUpgzF-1LLVKZzP-wMIU6xGnfpYGKGaVds6YRF-MYmZY8jxv-14QDrTmqgU5PDW8VU2E-JXgUkicp6Vzk5JdAjst-MJGeel4HNZ1ARtpQy75Y5ejfSp0O-pLqYDXgj0yzqG8O-jb9n6JM8rGhvD1jf0fJbuxJBCqYmTQKP6xLbXRLqFG3CV34JV5gbq-VS2WVo4Vdxf6ZYGb3T_J4EjccP5iO5A6gqhDeRWq1AM47gKdM147va58UNWn3AVXO_Jb1OBrAwWR7IGG0ozmnCS3NFhZEmXcXf3VfUSQSZ8k90rxJgYOgyeKlRkxUArufzy6XdhRXStN6IyazdZA34Eq8a8PBlh9UJCWpuy2W88RihPmqY9CAyC2FJ8ZNUxL4J7jh5aLFK4d_YVhVXdNgeEB1ncQPrLUNAY\"}"
  },
  {
    "path": "genesis_data/genesis_txs/UdCfZG1jBYUKgeLc13zjRxmQHO4_13B-NigE57jmJ5A.json",
    "content": "{\"id\":\"UdCfZG1jBYUKgeLc13zjRxmQHO4_13B-NigE57jmJ5A\",\"last_tx\":\"\",\"owner\":\"rYg-Q1yPVdsi54v39M1s2N4UVbQd3UIc5yPIBU5UwxaJh2-jqSlwOfrIh-v6_7D3jzATExVKlMLesMKkllXxlZ6tPsF85SBn-fqp-MnzlSfnbMLUn5pb3lK6cmOVeDKp-dPWpBHKLcMZl3Go3_WyCrYeKoanBF2Hcn8O9cYlYYjrxt3rscevHUonGIuwjjETP0t1NDMkbvqKF8XKIlZqFjjZrKQiZC7V1kHYgkITKJLV3UB0XmGFt2bYyF3Todq1CZl4gv8yfOuaAZq-ZhisUdRJ6Kqr3sLtyByiUG35QsfXB4k9ltODxTTSyEuja27kIyfE01EQInRGYoIadk8Djt7pwZl74TSDGExzghIYaXHv0iP4hWPWnQWvLIHPHM2hN4pzVu7UVNlPTmbJQYCch-YIsosSk4dgOZi3eYXqIiOnhh8kV3cj0_UsJdFRJ7pLWlLy9U5N_YYpvKHCqsDFS0d_w5jYH-_b8oCULNwejNoSaZeq9SzksXFPPB0rjvSvddwwGZpvrc6MZmQ8Vm_pU6ODYAzkIaOggtWNasqSsGTGGUugUKFc7yG0GxgT3qeOykG628DgpkFeMDMILL5ou22G7Q8lhsKQi26pCU1vOeU6m0nd9ch2kZ8mhKGdPCwYKM9Mg0PP1P66IYWzfvkzShTWVZ7rfu5j09YYlWZdafs\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"c2FtZG93bmV5LmNvbQ\",\"reward\":\"0\",\"signature\":\"fRl_q2ZCG4vKdnXAsUS4wpStQwdBQizlyU1bl5eSLt0OIbJ6ffLxeOi68_WuyLnd6iqcz922GSDOdSi_VQiyIo-feBqCPe9BZkcGUzjXT9vAmWfothEx4luQkvnQFsq1GpS9Ay2Flb0UirOlH-J80JDzTxNgBj1_oe13EvQbHkzsBQoG_Cw_Zky8GJx7iDtUsNhazLhcY9raMNQ9JIvWDWdAHwJwDQsXUzhcSaf26sUNWjZakE19eaU_rloF01EBVa5ozBJlJ4ZLJawY1TDLLweE2VA2MMNaSxSSHkPwQyLe-qGiQRMWESZ4YpFR_8dP8Y0XnI6ezqeDrkH2rutVwgI3x8hqJpLT5aYsAAgpcP27aFDR4BoUhtzxeQLIjzG7VRKhbFwWbxGm3mrxIt6P77aZ-0lAuOef6h9JIyWSKPs7FjFAhhc52uaBVpwwVnV-Q7g3jfRmo0buu5LAa5aurC0-cpY_DAlXBYhHpL3W3OarpKnxGzDWIateRQTC7cdgYTqu6DKBpKWRf4Gedrb5q44GH6fFq9wSK4PpBz4jBYMPLWLUNl-1rPkP2P_hlJoZD4sexicjLvODkpj-hf2KVXi-JSE6P_oAVt8tpnlD26SNc-yKN0uWEGtMXQaQ7ZDdaPxpxfobWVkNBPF7xyQ0mwp12JNplcK7nZvfLYphYFQ\"}"
  },
  {
    "path": "genesis_data/genesis_txs/VUfaTp1eAzjnbxLR6xx_qQGVn_WOTna3rTolM8wY5BA.json",
    "content": "{\"id\":\"VUfaTp1eAzjnbxLR6xx_qQGVn_WOTna3rTolM8wY5BA\",\"last_tx\":\"\",\"owner\":\"v4Kkd3gTp_gWLwoH0r4Uodwvnsl0aGebrQlzMI7V64G6AqtINKeunmbVV0may6PV85VSWdGmES9uH1U5tOggsK-whEGk6wcU01hhiEmLb5HQDmYZqsVcgTDzqTNXBLuvUUWuClQVXk1MBytsxE-cmXvTlNqbqBjFPu9AaY5CkKCK1xpvR1NcCuzXCUIl280LgNdKgdluEAh4iER3rmIfsxsh3vFDAq7DPt1X5nikZW1GqL0xSgq1pByy-nXInLKByWh35aB-S3-O6YcXT-Zj6sbzq672fRoR1iXA2ZXYoGZyvGGkz83h2Gdos7gf4ZP6DwfpOD8u0Mdtg_4oCVDYUeoW4WwBuNcaQQySGOBDwDJZRWyNrRDo-G6FOb0yicKIfeaeAISVzd9x902udmeqQkxQ31CA6Te93OVTamrKpnvwtO6d2O1uTtaBqLPdkS3P0U_ne6tep3T5jtz-VE3cJsaIk2DRObRS7vQ3K4cyk4968eowaAo-H9yJ2uGFfA8twnW2MpB8VEk9zSzUOCjaCgsccg7jtj3cDdy_NLyxB-OA4qql6LG4PirskA00MWlDjlAQ0-EcNoiP9r9igjZdwsxAZ8OAd1FHG80xmbxTD3gobV0LX9Ae2hcmDIuE_54x_-FyiF3-XM_APvdMtipiJE6rUincuMo7P6qb6wt7Pqc\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"TG9yZAoKCW1ha2UgbWUgYW4gaW5zdHJ1bWVudCBvZiBUaHkgcGVhY2U7CgoJd2hlcmUgdGhlcmUgaXMgaGF0cmVkCgoJbGV0IG1lIHNob3cgbG92ZTsKCgl3aGVyZSB0aGVyZSBpcyBpbmp1cnkKCglwYXJkb247CgoJd2hlcmUgdGhlcmUgaXMgZG91YnQKCglmYWl0aDsKCgl3aGVyZSB0aGVyZSBpcyBkZXNwYWlyCgoJaG9wZTsKCgl3aGVyZSB0aGVyZSBpcyBkYXJrbmVzcwoKCWxpZ2h0OwoKCWFuZCB3aGVyZSB0aGVyZSBpcyBzYWRuZXNzCgoJam95LgoKCU8gRGl2aW5lIE1hc3RlciwKCglncmFudCB0aGF0IEkgbWF5IG5vdCBzbyBtdWNoIHNlZWsgdG8gYmUgY29uc29sZWQgYXMgdG8gY29uc29sZTsKCgl0byBiZSB1bmRlcnN0b29kCgoJYXMgdG8gdW5kZXJzdGFuZDsKCgl0byBiZSBsb3ZlZAoKCWFzIHRvIGxvdmU7CgoJZm9yIGl0IGlzIGluIGdpdmluZyB0aGF0IHdlIHJlY2VpdmUsCgoJaXQgaXMgaW4gcGFyZG9uaW5nIHRoYXQgd2UgYXJlIHBhcmRvbmVkLAoKCWFuZCBpdCBpcyBpbiBkeWluZyB0aGF0IHdlIGFyZSBib3JuIHRvIEV0ZXJuYWwgTGlmZS4\",\"reward\":\"0\",\"signature\":\"RP27ZxEBPN5fvRqP9LrkbndkM3Ci0UprSr3wC2tjS_FFDq22GSpWF3_ZMn2RyGUy3ztK2RIJee-WmNCjnHNz_dYFuT-guwD5dT00fa_szKWeAIgBkZR0G5RDasGq610J4PLYlyIaEZu_-PxivW73HNIgQcXNvyShtCkGKvnora8XoEdf_IgPW70NaqYSw6yFU8FqR7LCha7jU9fIW0FhQKGWKLzXnWDRwLAP7R65MITGkqt-SCwdErTzOAPBpEPLzldZPFiZwT4TNV6dl3CeimBjtOtD_EmnBwhgdDzTs5ZY9nVdTsEjmngugqBFZaTaxRURyT5tJi468I87uyjlHCTes5fTQCwkbwYepbHQzDu_uuam7Bet11qICEEL__vXqrb7m0cK0hGl459pUdBNZFkAHaEoZ4f1DxWft17gqprDIri942gqfOYS8KPhiSGIHO5_zfeLar1HivNTRUXWxiIRN99L6O6Cegb0ZGjoC7RB01Hgqs0o1X6g2x1GPrQmuBUIoIFYWkR5SI_pyKyy8S5UAeQ1gXtPtEZzJNjxOxCVqZZQYoQOt-s4GL6GlOn66lFF-D_oJzjoLTjyJZ9QuaMJinQ5l-RX-k0aM-JqsP_uufkZq7Fu_Z7bk61_dkDxOI9kFJCSzxV4qSd8tAOk59yT8O9IQrAkL-LSB2-hCYM\"}"
  },
  {
    "path": "genesis_data/genesis_txs/VuXQZjhUaZ2Hyi6Pl8_VTOu2mUWjoEemYb5TKXPFOS0.json",
    "content": "{\"id\":\"VuXQZjhUaZ2Hyi6Pl8_VTOu2mUWjoEemYb5TKXPFOS0\",\"last_tx\":\"\",\"owner\":\"tRdM_yeBdtDsw60HvwMhEYEcC58ojL4ekA_n6JI9UtdzZ6Y_S7-Vgmslc1IuI-oEcttgHuvOi0ixuHW9qFQZ9CC7W6H2jvOS_yPdRc5TkGBV3385zCbB9YXjaB7JfOpWDzM1QZyb-3LH9HleFw5sP6c6CKMyWxk4K58TSc-8zjqRfK6hmsorKh9_YaSepFLgoJpW91jGrT0cmDF3FXcRtD6mX2WxK6tu1p2Qfmyc5h5RKmWUG8e3t795yVpY_TePckPmpRldjZL70NyBUFwarxFdzBrH8iKN_YYkRK9kfs97yAM2iovfJRshzcS1QQcwbGUCNc69IGD1DSPg-PzqFQ0gTjaRa5NP2JQzzwPtzKZj_9fhfLkGHJP698e3XTYJsntGjWFciuUQfEC_3JEfDW3uZy4bTAqDlv_NsdBvB8f_wYgSuKiDKXpAPZKTaYNDyfRdOwZw1KL3wP4tjfIpsG1iY89omzbjZRk34Jhb1MHpRSV88WTKldl8lJlyCPmCCpDqJINyMgmZEgmNXrVdmqkd58muQUfSObTtsiTisYTKr2QdRd2-jXUQXGyVl28ageHdCQ_Xb9jEfM1ew_vCt0sAX1ZNAN0nFZfNvot1FfiloKwnkWwGJz_WEsKWks2xb2BkE2olTkenkQ2my3X9DgSmvJ5US_fmhn1BY7Hrd98\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"aW80\",\"reward\":\"0\",\"signature\":\"QJT6xkO3TBNaPXwyoX34HLamhkGXd-9-H9AuV6SbUXcIuUJf4ArCZIkkdug2jNxt3E1UNkOo_2itTO1JvcJXD2axfeQpQwUprtbNLh3vTHPDcqe9A5HB22RxE9wqAgPfUatq45ObkPxU6UgjRxlbYuDGMvIOtWKwFpf-aHKBEGJUFnHo1RE_kr_Q5Iw_a8_ILN9fCzdNepgNPVmosJ7Zki-B4HhEkZDPgVRpsDLijRgTg757pHr6If94NwaQaf7mZqyQ3Dw0yEmuEDscOoLGY5Rsuu0bdkQNoEpf76kHtmORjF-U9-J2DpBTiHfuEp9eyiUeVJ_qqPfk2-8PdFaFqd4YYfRmzZisD5YnHKhNn1ejltDjrrkTAf1vwvDAEQTSZshER4Wf0R3YBIVhTMII9eRLcK6nfwBJDR8Ep7GehTChXRiD2mhZTsVYur00DzY5vEFfxiO1VD0rb2hQfFnEcmDuY07savgBZRroS0RJG_9Ga9IS_gttLFmpmLeNXO8wIahy8k2W-3ySUBcbHlxvg-Lk9aH4pLqVAv5K0med7kT3pVYPE9hc8yb41_XtbEds_u_CuubTedsiMiCb0f6IdaqzMRtICwuLhcMqp19qT1KZsJ7UXsyHYczNpmgmdGTea9627j6qkPykU1-Vg8AIl10koXcfQsitb7slGBLYf3M\"}"
  },
  {
    "path": "genesis_data/genesis_txs/WE5eBi6hEq90HQvDjtJr-EmZATWJthgxh3HPPuQ7410.json",
    "content": "{\"id\":\"WE5eBi6hEq90HQvDjtJr-EmZATWJthgxh3HPPuQ7410\",\"last_tx\":\"\",\"owner\":\"w-sdvsyskkxL3NWBXBTftNhwOHYitd0hXbSV0aMT0_65b3Iuom6YgLPwTF5x_Lj1aSNPFGELz0a7pnG3jqbICsFi7J6B64dvvjF8RNIqfn_nVTSmcoxlgTxpkqnl2Tf1XmtmKVGYReW-egBng4pVR0fFy6n_4XlF3a-TRiW6X9BXm9oFo6LWXowUtY5gAMaN_ssjwy5esqNzi5LGks45IiMyjQdXEuyvRO6JbBi1unIZtrFrGzeqfc0tlvBsAaLrYP7TTVoohBgizqhCRCmQTuoQMrydZX9GeX568x-2MUCETymf_RYkRTGjiENin3wgYVRoq4HFei3hasY_vB_i5SL_ZjhnaGUAtGNXZWa5AK1dYzWo_cEiyM95yxHDDqCE5-mP3fJq0gsPCyREOVNYHG3sEP7Q45xNh8DV87Xcjxr7FYmooeMNEJw6zG4qMxGsuKjuoYDa6SoBwtZRpMmnyaqcyx_6WpViYRVroZZchlZV193mpWDMTYWFgx64bwgJ2E67-cx4Z0OvH2aS1_iTsqMnrn_fAW4D2c2YHLqQXcllZvLodcCE80twd-6frUZ8si8OXzFwZ1gJpFad-9Z0EnNXbnbygIg8-KpUVjwyW7_eU1L7mlL2w1Mp6MOaIYforga-MLlI1xI_A7a_hOxB64SP5XpbUfTwIH-J_YXT3V8\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SGkgOikgVGhpcyB3aWxsIGJlIGFyb3VuZCBpbiAxMDAgeWVhcnMgc3RpbGwuIFByZXR0eSBDb29sLiBTUEguIENZLiA\",\"reward\":\"0\",\"signature\":\"P3XktQwng5HXLVwcsx6AAVjZGojheor3sjb7p6WzmnO6SFiwRSnfIyE_VKukvZ7aag5CX2tlAV85bqupbywnLcBXtlRzG6952L1d6A2akAMxmtvAVFjNqmvAFoNsr-VCmczEuWIh7CdxRJozxMfwm6bX7mYBDEYNurwRy76W9vmiFb2pCHTDMeCW6FkCy3dVlPKg4aXM01je4qgRBZ5rZhNZpcY-GsWsYxyUXpmM54fXwob9PW0NEfWGyBbpBm_Y-qrcBBNRQliaSWTkkrbOZ_Q6v89sDSIqPPVT3ecTllAciC9fydj1OXxEkp-FSDDOom9OCHZtQ3ZPsyGbRr9oVVO3rVW5H-zX1HQBM7c3gNjaFmCHGpsbaXg3vhU0PzJnnpTG-ZzCYwsObN26Zq_ErGizcJdgw0y3t8y5HrH2hQ4lSuGgp1BFsaIEOvhy-TzSQtRXs22opepcf9-QI7eF3YIYjGZmQZ7IpnQrQ9uwcgjwL9ljyDOUGpZXZ7DfMy6yjWSvApZVXNIZOJlR1hN9OcWhHUvpRGNp99FOcXZY5DkdESyDfbZCx87WZwlWejWYUMVIYSc8z_9D43CYN3-2YwYKoRxNRExcJISR4hFnQ5XxuROPnPj9A_npJFp5ZB-g-3v1o4NL6P4laZpazwELkJRGlwwP0nllXAiX79RfRcA\"}"
  },
  {
    "path": "genesis_data/genesis_txs/WsYJKhqhppBF6_eGbd0OACdu3LU6-CUuMcLeG3ST2qc.json",
    "content": "{\"id\":\"WsYJKhqhppBF6_eGbd0OACdu3LU6-CUuMcLeG3ST2qc\",\"last_tx\":\"\",\"owner\":\"6pH_4c1ES-ZZC1RHihaUdmP-XXCqTmNMQbBtrNgvTPH2MLCXM57xEL8hpH-fZXjpRuDkZeFybrQw1XAr-qnlbKmYlDwUs26sTY6zPakflg75EEkbWpT_m0G-nN0RU0RR-zwh2r5BbLUYlnoNOULUgAW6wr5w7R_D3yshOzlqGc-jRwFI8v-MmMA2AiezvgnEkZh6uOObK7EvTCvTaHr3z3xFAGeCok_Mm_HhNKt5JuT3hCr0tNCNqwKfJNE49SK1sx24y7GcTKG0dgj1R97QJt5sK1igsOILOczmmJw_RvhDTIL83aTFg7VOD-lEB_bGcZVOQUGrO5BNWR1sKVyidA4_czSGa7bCXss4_d5BU-TJWPADNKYltSckr_ztpIroQFonXHReFkJkmzaExevzigY3OyTRtF62auYAv-hk0nXoWi5Bab8T1YJpwBvRKiJ9gltb3058XagVEE-rE9xteF2U_POIQ6n1cvs4iUFbthQyxlkVe4z259rxveEqPDYmP_ideVM0MoDGEDoRXuwafkAp4L8mqAwFWbOHnexT2DVINrWS3582M7Zk8HN-to27SgDOw59AgP-Qkbm6UmpVVHBTVhaJbtjLyox8nwuAMbGHp03_lOEmABcBrrdh10Hv9qIJ1bbB6JqmaEHZax_LEQZRyxFAoa8VMLbkFwCB93E\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"R29vZCBMdWNr\",\"reward\":\"0\",\"signature\":\"ano_3glaFYm5jQBt4bAIOkHkJtnKeIm7z-wHsxhRDatcyZd6Mcc5BDyUBszI2N14p6Yj7z2gwFnsHBeyKl6VvQUw0lhHXalTCUszPebeDr5Fnjy7nhJxRdEzHjpusCsOV3OQkfrSCRCGOQk8oZj_9Oy6IPmO8eYgDtmQ1rPYpOZ_kswGm1RNRFZwBNPNs_TznwPQNCLdkwISv1RX1hTEHzaAeXir7jwzyDTREegzlWFnocL32JcJiccb0wV5dHryIj5h_p-ULudrY3GHOCS1Ffolk-rKtGc3Ua9U3Ng8wLkZRZp2ucoyZ2-L-DlM-6FQlwIPorALe9D2BAvShQ78hJ3yIdqaQMaHB5B6jM2M8hGIKWPUl8Pf-7gOg5GNfeI5lxVE-OaHrGk31cKcVNu945dWK-PHYSXalERQ0_JsA-PGFWCd2n83N2vlzh1hPAz5sM5UHZhxLUYU4fYTXENdTMTvIJu_rb1TXsQ70JkG4e6ZNdP3e66HgBz3r_KkJReJhCcVCUoR4iK3Ds5M7sVnuA1eeB3jXZrQGPTK-blfEgdZWLIp9-pQkkZXLwaaYrbuf8ow2j5TpSk5OuG_NQAlaIz8s8K5MypqOwjWYzVhY_hVUItVHsf2_964gEN9RVtEunjWXUdwmajGzOOl1bybPBOV6wqDPtE9QndYjbttHSc\"}"
  },
  {
    "path": "genesis_data/genesis_txs/X9biR_ZA-rnpzk4gfLi0-pBSsjjT2l9Rk0VfYwf1WMo.json",
    "content": "{\"id\":\"X9biR_ZA-rnpzk4gfLi0-pBSsjjT2l9Rk0VfYwf1WMo\",\"last_tx\":\"\",\"owner\":\"2DYb2HXHOXQTUhYDTPVU7JQ6R47bvTO3E2EIqhHiXygo7MRjpgAogXRqOiohorEDT0TgPHsQmLOIXlNgshxhFimlIw1e0FAEm2DSEDwA7iZcTtuwLUwV5fn3DlX_xgESO6izEm8AB823q2mitd3npLjXsmFjZUTyneGXYh4RZqOfSFieEoeg4SlQedy8IN-npqiHG3Rnd6s9gk2VpFkhLHXQGUZ0QVIoqDcUpBIZ-H9zNcZzY9tDwwzM9iwle8N8NtfibZSabo9EZlwpJ1LsR2Y-lj5WWB5p8a0yzaDHHsvszSRuZciWceiJwR8xAX_rkEBFcgajXRKoRkAdoAG5RjgNB60ifpwY7_pY_N06DqxIWLCJq5M26mQmTweGwQzauDKSdMqOuDYKrtoD-pMeW7QjlXVKlYt8VWQuWzMGJmtH1PmAGw2OrklpTJ1LJmLf6FQ_qJa_Wkok4GVr8WR0WbONMEMj3UWgpS1B7xvRCWkYi3mRHJWwKUrPVCTU7Jhqf0WFmddwsbsigMUiB2IbPaymYCd7tHbLHuEWlx3QGM09TVz0EmMR2v-m_i3KzApyyfEhi7AKkEEpRjGUuo2D2H_iEF80Xa0i7OSPPdad6dLGRGZ_X9wuQHmxBnMD6VCrvhZE_DNaX0k0Lfh5-wOBRnMdRWmEIi2x93eqIofaDYs\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"VG8gYmUgQU5EIG5vdCB0byBiZSA9IExvdmU\",\"reward\":\"0\",\"signature\":\"lSxl2IH8fCWLQN0PcoKjvZjzdNa75WDgRHWtrjoN1P5V537JT5dSqwV0xSJwsUVeWEMSEVPvUBRNKf8mVGLAm3TgY-iB8zMIf8nxzvBcKaYWMeaAHzDV3lW0Z6TIrY7gc_GlhI_Q18vb0DADF11kS_nYzTfTcXd365ZUJjXrtgzzHj0lszheWATg8IwJvChInSkUNgHOp35yU7ErMlgSMwVWS9_SaIgfS5ytc0mjEwkFLma66_NRQFyHPisFGUfEjUlH5mRSSC9_X8Drx_dm3nwFdrGbpEq4NkAFZiN4DZKLzO-Iiq1EuwNjJ5_WeS_JjAcVdZGk2LRvygP4IzLyg4sughBbohYppDkD-5z9gx5aQMTRvH4gQRDcfU4IsFUESg5IyUcWWY2XBSSCGLeqGE8gQe64Mek6wH_3dCdbxe-9XaOs-AyVHKEex5m2PG8ChDe9BaRgEdJh3AU2QH100e_jKtI7lDK_Hnp971T-RbCxAg3EucwPnM412u5eraUa_foL7FCTT_J8NyBcKUPCw9mIDxqEsUIArVrrnFR7D6BL3dYZzdmLsMQqudEnLmHZoFMqk-ZJA2HkjjKnRFO48UyCLCKWjAo0xJX6QERH-A7JW9ZhtjOPRKLjbW8tekTFiz_OppnLUJMG3MmP7W3uW0ixE8N13H2eCHRbltX_lXs\"}"
  },
  {
    "path": "genesis_data/genesis_txs/Xjz72yVLd_Qzl8_GfSPqZA1MAkxxhjr2Lsf2tGCj_ZQ.json",
    "content": "{\"id\":\"Xjz72yVLd_Qzl8_GfSPqZA1MAkxxhjr2Lsf2tGCj_ZQ\",\"last_tx\":\"\",\"owner\":\"rUos6-Jqcv-u7NB_IRN6G-VE2zVd6WWGMH2rOkeALfJKkV1qfLIQIKzLuVDaWgSWRVGBijDaBft9-bqd1x-KjGU6JL0nLYVWY52vyUTbVhsWnTQaUSs-p3ZhxMwfHXbMq5vstaxvARQNXS9H-_7QWWgzHWUtl8VVTX7tVq79YUcAsNigqLWmCyrH2_WkPjX5Cx1Dwqud6nymYLlDlASgL2thS65XmxjKcODipapLuYk9d3aYnC1pnMsR6JY85_QFwrbF7dnOArH8_C60zxhRDLk2RYz4p-CkpLqyaHegZCHHUMzRBTgaESyobVIZbOlpc43BYYJea3PcITMxiVvIJ-nwPGDcrgRym2Wo7O7AHNJQuHQQ6s0Xjk06iEQgGZKQVq6MPaG15KZ6Hi_1kCCBdZ3Hp-ldlfNIMkf_uSn76u7og3Xlnh1fU2rvLTccB3I9XpG05dKQvYem3mgx5pAWAQ7UgKBmi5KyEHembw1A9g1NuX6MO695qK6JHjxTh9jMlJ2xhP-6ICCuCgDnNV_0OQTeSoaRlZcEDu75_44293eiUdk3WBaF_HnF7jfhI9uSr_oKEd3iwA6erdxOs_TB8F4FNU8I_cngPuuE5gTwASmfV_U9XxqtGtxcEsBZ1lkfIFJV3OV04j9feoImunQSsLeQwAp5uGDNFwh6rhTVZy0\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"dGVzdA\",\"reward\":\"0\",\"signature\":\"l0avRzJQHXJgne5y6tH2GUpmyxJl9qMdSpbNeZV8pIIayH__HvO6B3A8EHOSDzwXG0sOannH6vt-AkS3ytwfvhn2MMt82901N0TmZwNyGo0LUAkYVswYLCKJygmD0W8TB0RfPZ_W-zKc5Q6ycBItZI58KzMR0hqLAV5x0sKTAPyI3nih5iCkX8HDIy_NFd4sshlTNFNx9dzAXv5P39z0gxLmLV7DekGt3GMnRFMnmoX6LE0f1F7vYGfFcMcGcaTGsXLdNKdyj5n93gqbbBqsQCPVI2LKEnL8WTy_vT-EOPlBiF4v_pDtNdficATwbi2FFmLX_0DPOKpVLiP-1u4ClV8FJ3vpW-hTPMamdUwIaKZ2S1TbRcAs-Qj0gZrc4pP5HAd3kZnwhVINlOrB2ryZJQvLKuUEKVH5X_eSh3CbOPHXEfRU85W_iKafilWt2n_6MzSw5j-Fpl3sfpoVGKpcrldNhF6_CZyCo2bw84ps_4WlU3BWlNJy1r_X3WwViSYMtdeRS_7Khrsyj4R6N0YSdlFBmPhAitC9JHBXBueyvI86arh3VdnJ8f0IgcMLIplQ14RXBmU5wO1A0wCsZ1Q3rOW-cCj8oGJVvpntQE7Th_TsCOoesZ6RA-A5v7OjXJdeaa8wUFLeyqpCJ-k0yGchb4hemMZOaNCBXh3Lzhdv1rw\"}"
  },
  {
    "path": "genesis_data/genesis_txs/XtDRu-1SyoRL21gpKcxWtxyksVwTF9kvW26hvQ_bPzE.json",
    "content": "{\"id\":\"XtDRu-1SyoRL21gpKcxWtxyksVwTF9kvW26hvQ_bPzE\",\"last_tx\":\"\",\"owner\":\"q1RSDA4wzpbqWggL3jfP3INUPJIKtIKdtCvw99wGmehu-Yct1Q-zYV-D4h1zyM7ysXD3gcMIDPZ3-TBWdJ_EOOTokNbj6F5IhTQ31ygB2niBNaD50oTb4pOLT-pGf3mh6OggBan80JVXX_MrtLprh4tH3u6T9DAgJR_BLaIzRiKiLXqxxIQgaqsFlbuIoSCChg48GZ4INR6iPzz4s8QO2tr6XNA8XDoZKgU8L1XZN-ZPDGkWoYEAE2TW4n9WT8rlb5Xc4SPw7o3JqmkzaYXgMy4Mg5Y5xO_-NKqBVo5BnwLqlfyKdqHa6qd9nDULv1-bTTsDLpEDVjshuRAnKA5IgFez8uY02tnsjZq3QNDie8b8ATkxD-KtrFToGXNTj7gXpfJXJYf99aQNxlZHBrCDuOYH5OxhzpZ6S4qYJNhdSNhFIjZJfSylhYxTyp582xHlruWZGTEvB8DdR6ifSSRAFhh-CRJC9PVGyvzKxppVGQAwplMFgo_gGzoJ1Lgfq0rWuAavvJ_ceB9X3nEvzmfx-eXgddgV7xX7XRCzKHUN3KXGrkZejAOKKfyk7tLTYgFvuYT8TZn_LjfzUZcc9a35OTnAMQGRVYrt4odX29zMQ7eBsSCBoNfr-17KxiMouykembNleitAD2quVMcBfmyXHLgGwCEekzpVxuxEGPpGmlE\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"TU9ORVknUyBXT1JUSAosCgoJV2hlbiB5b3VyIEdvZCBjb21lcyBhIGNhbGxpbmcKCglhbmQgeW91IGZhY2UgdXAgdG8gaGltLgoKCVdpbGwgeW91IHRoZW4gY29uZmVzcwoKCXRvIG1hbidzIGdyZWF0ZXN0IHNpbj8KCglUaGUgY29uZmVzc2lvbiBvZiBtYW5raW5kCgoJdG8gR29kIHlvdSBub3cgbXVzdCB0ZWxsLAoKCUxldHMgaG9wZSB0aGF0IGhlJ3MgZm9yZ2l2aW5nCgoJZm9yIHdlIG1heSBhbGwgYnVybiBpbiBoZWxsLgoKCQoKCUJsZXNzIG1lIGxvcmQgZm9yIEkgaGF2ZSBzaW5uZWQKCglJIGxvc3QgdGhlIG1lYW5pbmcgb2YgbGlmZSwKCglhbmQgZXZlcnl3aGVyZSBJIHRyYXZlbGxlZAoKCUkgaGVsZCBhIGJsb29keSBrbmlmZS4KCglXaG8ncyBpcyB0aGUga25pZmU_IFdobydzIGlzIHRoZSBibG9vZD8gRG9uJ3Qga25vdyBidXQgdGhleSdyZSBub3QgbWluZSwKCgl5b3UgaGF2ZSBub3QgZ290IHRoZSBib3R0bGUKCglidXQgeW91IHBhaWQgYW5kIHRoYXQncyB5b3VyIGNyaW1lLgoKCQoKCVRoZSBiZWF1dHkgb2YgdGhlIHNpbHZlcmJhY2sKCgl3aXRoIGl0cyBncmVhdCBiaWcgYXNodHJheSBoYW5kcy4gCgoJQ3J1c2ggYSByaGlubydzIGhvcm4KCglmb3IgbWVkaWNpbmUgaW4gZmFyIG9mZiBsYW5kcy4KCglUaGUgYmlnZ2VzdCBvZiBhbGwgdGhlIGNhdHMKCgl5b3UndmUgYSB0aWdlciBmb3IgYSBydWcsCgoJYW5kIGZvciBpdCdzIGl2b3J5IHR1c2tzCgoJa2luZyBlbGVwaGFudCB5b3Ugd2lsbCBtdWcuIAoKCQoKCVRoZXkncmUgb25seSBibG9vZHkgYW5pbWFscyEgV2hvIGNhcmVzIGFib3V0IHRoZSBiZWFzdHM_CgoJV2hlbiB0aGV5J3JlIGFsbCBkZWFkIGFuZCBnb25lCgoJd2UnbGwgbW92ZSBvbiB0byBiaWdnZXIgZmVhc3RzLgoKCVRha2UgYWxsIHRoZSBnb29kbmVzcyBmcm9tIGhpcyBsYW5kCgoJYW5kIHllcyB5b3Ugd2lsbCBlbmRlYXZvdXIuCgoJVG8ga2VlcCBhIGZlbGxvdyBtYW4gZW5zbGF2ZWQKCglhbmQgZGlhbW9uZHMgYXJlIGZvcmV2ZXIuCgoJCgoJTWFuJ3MgbWVhbiBtYWNoaW5lIGtlZXBzIHJvbGxpbmcKCglub3cgd2hpY2ggbGFuZCB3aWxsIGl0IHNwb2lsPwoKCUhlJ3MgZ290IHRvIGZlZWQgaGlzIGluZHVzdHJ5CgoJaGlzIGxhbmQgaXQgaGFzIG5vIG9pbC4KCglTbyBoZSBzZXRzIGEgYnJvdGhlciBhZ2FpbnN0IGhpcyBicm90aGVyCgoJdGhleSBzYXkgdGhhdCdzIHdoYXQgdGhleSBuZWVkLgoKCVRoZSBzdGFydmluZyB3aW5uZXJzIHByb21pc2VkIGZvb2QKCgl3aGlsc3Qgb2lsIGZlZWRzIGNvcnBvcmF0ZSBncmVlZC4KCgkKCglXaGVuIGEgaHVtYW4gYmVpbmcgaXMgYm9ybgoKCWl0J3MgdnVsbmVyYWJsZSBhbmQgbnVkZS4KCglKdXN0IGxpa2UgYW55IG90aGVyIGFuaW1hbAoKCUl0IG9ubHkgdGhpbmtzIG9mIGZvb2QsCgoJYnV0IGFzIGEgaHVtYW4gZ3Jvd3MKCglhbmQgdGhpcyBJIGRvbid0IGZpbmQgZnVubnksCgoJaXQncyB0YXVnaHQgdG8gd29yc2hpcAoKCWhvbm91cgoKCWxvdmUgYW5kIGtpbGwgZm9yIG1vbmV5LgoKCUxpdmUgcHJvZ3Jlc3NpdmUKCglzdGF5IG1pbmltYWwuIEFmbQoKCUx1Y2sgYW5kIGxvdmUgdG8gYWxsISAKCgliZSBnb29kCgoJZG8gZ29vZA\",\"reward\":\"0\",\"signature\":\"bGkv43Fu9p7NxsF5N_QbcH3iHD80ZEmFxDSEA3haseymiKbpKU-tHdK6IrlKFtMf770ZN6ao6Ks1d7EXUbX39xLLVWmqtwpzTwlQP6j_RNJ84Ke9u3-s9x1oL7yIPI6XZrvinwW07sfFSaOXT0A4ZD0HERUh7DUIcecE4V_E9uTRRgejF-Bv3-MPRKREbNDtQI5Q9hRjl1lL8T_ByHGxk9LaJ3NGofHDHCKUTBXuCqdsAcwpQBt7DIE92dwTM6GFwOGSFzxQAtG8uKBjMTfSmjij5U5WxV0nz_5w0RWz25WogdABDT9D7piYzC7QeNDWDTbScYRzPFDggmvsoKO2X0vv9UPn2pp12AYMXdgfZwHyrbvjD2Pw_GU3tbDFIKucyeH01SpvPVS5vgQQANSlYF37yeToJvT1lV2RmIraOVSV5JcPXtKYQbRdKD6-IeVYURWoposvSovBF08GBNg36leiUJLZIE2PTnVdL63ZN5NxyKX1JazKDwwhranvxVUaMhJy9n7o1NqXSQvTDauPvZecGBNWP1xn
8X64ez1-RmB4UeDS_mVNhPY2bzP5_aAsoT8_DXMVsPebWyKTHl1RpBCJ9rsvE2Ee_xrkh16o23hJ06Kd3fI4mTr3OhL5KajuKa60CEExoQhqeJhlNEvMBFznwQP_wM8DnbUnE3e-pWA\"}"
  },
  {
    "path": "genesis_data/genesis_txs/XxgirNr3QGaJTKxPWqK9byYLj7SdbfZudKd9rbynWyM.json",
    "content": "{\"id\":\"XxgirNr3QGaJTKxPWqK9byYLj7SdbfZudKd9rbynWyM\",\"last_tx\":\"\",\"owner\":\"yp9ZAC90UlO6qJ4NJy-_ohIXI9Kp9vfswKwyN3NlywEHiO6dM1hcFHTvs1pipy6Qompt-kllQSKPhqYx6E5TlsYbmFPqeQXpP5o4pJesVZAoknV-ST9g35CohhaU95iS7OxfDQK3ZzHcdsgn_SH5jSwGSgz-jN_uLwvSiolYL2sUPYZ24HTG9XjcULMg9YZU0iAwkZgbRul6d1OC_vJ9sJXXccSKmCLZgk9VXU8pUT4GH2qxOKFuKcklX51nhtFbMKy0F2IXMHvSfG2CQ7R8VbGdxZRmourHBq8Ino0DFm2mWpsv0xKuOPQd0sjZG_CmNuhrEREhZAKSdKLASwz3dxjzAcnYMyL2PM8yJpRJsUFRqf31Rjl7jSZqNbh-dQthY8fYKpFdmGJKH7pM487WppOXD3-hSV30LzpE0IOySqOCX1mwkb8wxPE0BXQTVrRSIrYIUuo7Mln3EngAp_jbS_X45BkOvDqqYkzNtCrivsRxoLDIcbMG7OacDPj43Y5mZqJfxiFOampJiZvAdynUSVfi1LKMAsZqQFY_xPEGlObOIgKlKZ_qqdD7dtDFxkZ-gtbGUSnb450bwJaJXONJ_8QARuEY9Io9i0LjlCY_TAYpcU8lD23vNyvDtJrZg1KL0T8Iuo2gtmQm2lfCOGKJDGmJTne74XzwZDNLfrV4I-c\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"TXkgZmlyc3QgSUNPIQ\",\"reward\":\"0\",\"signature\":\"M7iD3wy2R4KAj45NmlAFPce5Jl6PSzgt43gqxDYeb_M5Ak2GY2_f0aMDWfYS2htbAhnbN1AQzfp1jc16nWcpj3DsAU4PhFg5lWjj5CUCBPMran60ch2fYio0fgs4m-MkkJtma71huuk-2AHINY0eMJkNNT9qhi0iJGKtHbAUEOKhPQtVTPVt6VjpD00sJ1I4u6-VxkuxgDnh_dXma1nUNihb4NKKA6DgS30CU3_Im6oi0til3fyjNDGKSBoW2FdV_UFphXhg9KGY9vISEuZ9gTpIe8PxChnyL6LY1mfHWMXuUjRhuLHFWUDOaw1sMHGMjaFQN9wO9kKrRCWKqk5Tu0J0Yr_itx2dADL7eruiS7loUcJbLA_IRQimTNO0CUew3dKAUDlObBVZEJh89MObqKCDvLsFveWnqIHnwyWop_oXEWdP6QUcRPGvXLI_9R7x0E1GHb4iZTZVWng2L6Doj56LhBKuz6kJtzgf5wjzdTa7hBNTHA2dw5ogv1IO1Ktz_Jacv4ezG54m8v-mZZICJ7IXEOppNQDuBnRgBvkTl_ON8JDxwPmHp0GYUAGx7DqP6Y-YkGnydJFxsU_RApi9jeGAuCx-mCuti_DGfPGfIPgr152Gw75WgqRfMg9rzT6bOJi171g_f1uM40wPhSf670tKx-nLuhcAoUyEKVuasSs\"}"
  },
  {
    "path": "genesis_data/genesis_txs/Y0PLaTBQ73JXn_jHvldOKC3jdbqDbqTMkcW0x65_Jek.json",
    "content": "{\"id\":\"Y0PLaTBQ73JXn_jHvldOKC3jdbqDbqTMkcW0x65_Jek\",\"last_tx\":\"\",\"owner\":\"1WQM7cWKavaMhqtRWt-1Z5Pvz03UtZMrbKjnv2Bla22etp-8Eq5dfIQvcuvFivqQV5CP91eefMhoyuUUD-UTaw2OQV6I6WcguEdHiPsrHE55kylT0vBE8C_0gvR0SOwTUEywqCqfOiwkuGoPHgfJuXt5T_aB33HqZ-Qorlgq4hf-Bz2tIYFR2EqWQ43mjYWmOldY-5g0cTKMZ4BIgipg3Bv383qI2fEQM6aeGtztUCUkTTdf2xH2XfG7P7qSSqi-3IYdkHoRI6OT6F7j0chhJZqnAUEopqg8cdN4A7QmX1nLzGhAHrZcrcd_DohjrNsgD635cdSjnCixcKEvsERKPrQmG3klaCDFVKkCi0peGsHudLYY4bRIJ84uinnmK1cXTUqcnDca8EWiQJHtzrylSBdT0FrMfQZwqAj92GL6VONeC2tp-Mn7dFlH6PWCna3saojhGk2q7-0Gj-onuYi1efkL1M-6jJs7liMqxdrSfm0I-FbB4uGFRPKrgr-rsCCjrS_hUHEA5CKsssZl1XbAhv3YS4plrkIIzyRKm8z0WcG72R8hAoDz39_i7rPSe0CcWYoqsbOkOX5Tsts6B6N1EEl56A9-M9r6A8fTPRHa433LW8YwCwi_Wn4d8hk73cYqpGimxBm1ymZcje3MKx_aS4eBeQ57VvTJY3nYm5MG9HU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"Z29vZCBsdWNrIQ\",\"reward\":\"0\",\"signature\":\"vP-jZoTX7PM_Vl-aX5r66iWmP5HlejNyvT_zlF6GfVK8fPonyWXgh6NOMiZ41yNr_MFCugPjHGnBNDH-A2A6tsI6BcBK2Wuelxf41dsJ6CI9P-wthHJWctC--SogzargRFl6hPhcXlZc0VZLe4HaSouo3fzReoxBTfR55DUQ-Go8efcxb_873rvembbf6jJToluaxgVeu10Tyjaw3bKZ3reG7vcbD5S9VxoPdd_hXky-IHy7KZ7MFNxVhHA9ZR3cGB7wv2V6XoI6Wrpka13p7GU-9fUBu0G505JLm_JJECMFB9kkQtitwRqZcef6nAuy_E33Fb_025GSRPEL_MHSW2dDb2utvWc4fDOd5X8LbQl2O8Y-iEjWJIShELHionxg1c-v8Z9MIUgERvd_B6eb6s9n_XS1vL6bqbtSJHQPH7Apz5o4R-J3zTe2egE07pT8lBu6Qug5iqiA0UNlbyqlFp7ViA86PWajhBpJ2nHhmk-wCiT3OSRNvVcvhim39OGpRGvKz0WZ2vDzsLbmwmXROSUOtiWrpTE83YWlFenwY4OI1-QUMc0GvtC_nPNUY1vyBJETKyRRT_BvAjTlwDpIyQcuzCD56Ze_0gaTYb4tDA664RYBxIZeC1s_tTSQmQaUUbvuhi_r5v97Nhs9TRpI3vZ390V0GuWsI4LpxaQFjWM\"}"
  },
  {
    "path": "genesis_data/genesis_txs/YIEEyYfNIRSjzm_gzv6l5CelyL4AOzKX9M4XPXRk2Yo.json",
    "content": "{\"id\":\"YIEEyYfNIRSjzm_gzv6l5CelyL4AOzKX9M4XPXRk2Yo\",\"last_tx\":\"\",\"owner\":\"1PzC8obsqe2uCEVXHHbA3FFblOnYIKs_x7_N6YG_ld_TqLIU6qeoI747oepqiYq-_EhAyC_CA9oqvqpRGAPzf3olKza7KypN34eVxl_tUpChSegVgxB-M9GItqp_4xa5BLMn8ZVOw2NtdanHMGnbtTy0z5Ds6Fb7cvk0T24_nNEw3YR-1RWXhnnmGN6GfyTiCd3HB3P53mAmyEhyRVlk61Gq_XhNUpi_LdJj1ch6dELhWmlPr1S32sneTrwzNUp02PSyaAWyTHNeW7XeTr3lzkWntv4VrNCdmoErLTRJeZ0MBFbcIi5XtlEIm2KbwBhdb26uncUIznAdBlJEjPGztqgC_FBVL_Qt69bnoCMPA9lhcyT4_5PzZczkA2Y9Hw5QNWayBexncL3Y94RUSEuIXtEcqI2FsA90toM6tgfy3RTvc1E9PEy2rloWIzSqPFFK3Ho-xUCxrP3Qw-yCBUWjJflJZSh32hPgpnpHC1QrQuk6lT0ZEPS56MfwqlHLosYwWUCgcGFgnjlmGLJJ5AFU0LRCGyoxrC_IH3_yir89Oz3w79eYAWoXBiJXHl0fg5wzv-8L_1h7PgyBFyrYKqlPqEle_sn4i5WI6r2fK8BwZmeavRmtDgLUImKsbfZS-kwOuXH5dCSDJ8Ir5agSCPQXUftyoxHXMBb_CyHVWY6CUs8\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"VG8gbXkgc29ucyBmdXR1cmU\",\"reward\":\"0\",\"signature\":\"E1rWYyHSZnJMSWBiqtS-4cxFPh5yipa2bg1oEfaCQrmJ-Dtsk7rkb_XllVd5S17mO5egIA3tkN5P8rXOyXVyLm1Q6gyumSERsbjPuYdLhlt7KCVqcYchtDEDmpH-rnnLoF5AHjXvEXSGl9it3Lo39NUSxvkF9LgcpygSqdayzpn2Rl0m-KY4NNvbdrmHlLMSrxSHFssKcxVtoRHB5pBz1Rw24VT5EgIKKYxX1XkqCC61S-6IVImuSbWaBWiSnleVFsKYJpgliz9R1P61gqYD9472dPMaT1HjSdcxq6X8WlTwgLtbAz7l5V_Xs0a1gAD0CNOfMYHd2-p-OZt1PeemNV_BXyE55r4AtEOAbjkRTdnAzBpF184r54PT4bgn2Y4zvzIX5njDHO_mCwTMvTPaW9F-u2lVOCu_b0M9oEsdwT7kKnBpgtRiMsaB0xGLuy7CKwfwsSUqlhhdKnjuJSzyOYbA-LO9bjL2yb1FBhh1Hig9hotym-Jd7H7LCieXY6jR85YzTp6qNwWbPwhLzks5vYciK63k1kb-cbTRpEN-A-OE0fdi1GnAmmNjJAW6U7RtBRINO7-APxAUUXJtxGwQ2ILEeEN_LXWqoa_Jhhkzspvta7DSGwK3dWQ74Q0jyiqnAUvD0-UsFouLuPCLHWkmouB4d9TInL_EJq8w89Ymot4\"}"
  },
  {
    "path": "genesis_data/genesis_txs/YfHEyNUGsOUiuqCgHV127cg2Z5Yap9tcQB1LH7tq9ZA.json",
    "content": "{\"id\":\"YfHEyNUGsOUiuqCgHV127cg2Z5Yap9tcQB1LH7tq9ZA\",\"last_tx\":\"\",\"owner\":\"4NbPfDsG4IXeMRP00BN3T8C-l2zJ_YEZJJa1wr0dV2O82PdjJ3Jj4dc-tprhOQT_Fw6YK3IPADQ59NRAgJnLCxDf0yg-_Zvo9xeF651Yv12dphDUXUAuZXRt_VwnBB_uX6LOXxwBu6TbiU0mFGsHm4okSqvkB9BvTH1pfAui0HYggoUwy2_TSF95xfgBUgPAuSvxZwuanNqVCgVjULHqw9KkCsFEmsvpvGT3qBbn-coI6c8QZ0NhWt5f7imRFoF75FBRa8uglhwkG7E7fSJ4g1uj4XHr68XzyRExKcMcHjRTcHFOeb6Wrybvd7aA0PQNUKIG1gSUWBAReHge43JcTAsd2cbOngrFv27LwBGAsG184vMv_y9G0_twBNeGiL6ALaJIIB8OWAuMiSSk-DmL56Mh-BQOkYMpiq6dBz_OuOMKKCYQgx2p8sGtd-AZ-JQwJUn_lE38g9t46SG8jWYZ9uW8Gy7rMMj62GtPIFiXvskHaTYtP1XnPZdrLyjcDTSaEaKaFgEj6i127iVOQxO88kSM2AWInqtrcZRClbTbUsXMhi_JiakREvgSB5oqQ62PzynU6t3ADWRp5mu3fIYV919wjShMkzv1CW4cRARGQinoKvlZWfTiJ0kdQEjxKd-iHSvGBuTZpP5BtF6sqoKWcx928vZSXjgSP2rvrZBv2N8\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"dGVzdA\",\"reward\":\"0\",\"signature\":\"z_TSXuZa0x3ewXDy3CttmmTeqp3HnMFXVA1AG3zRyx_84A49KvaApkkz_g130sAYwyGNFUIuyHtaLL4bu4LVF8-L0LCl6DP5UjSLphp5cZCistKYouO0COep_27d1Brx5H-cnnjwcHW7MxYvto774i1YX_E4MYnMw3nTukcW9teez8yRsw6sWpgz9HcbXQPoDNzxUizlLQP7g_HESSB-kKltFd0yWbOkqbP97elSDwYuxnPvkGyZDDMnivsqB8kvcWIAcM4UbTdlwIjsztWxqqPWHAhUCiQLu9NiOBDyWRjadEvgaUg1IYK2Qe49nna1Cr3NwogmH-tV-wBrEqtW0WffC2Sn3EehD2Buw5a3hh_qsyi3JryDkFPy2cdNoGwT2BYd8ErmtOnRNoU_RLMdD4kFNauZOCj_DJ5NmfxtkgGlB3lzmCv0aelmpUzwxGp-z6IPJ0kcindEoYX8OWw4eYX1O3DQI8fElz1aMho49ZMxdlh37XrJ6ol_OVjOp4wfvMNy7bZT8lvCHk3TOZCfOWZh3T3E2GDVw50UpvpRJ-Rgh11Y7E0n-3IerlqbC3uunetKqgL5N6q0kmWSJ6OYXzSjATO5t5YwV2GnDzqJ_TaHcWxc0EtFAehSpAC0a91eY_AMCWwuDmideWIW2o58KXtM5qWH9XSaO_rxorU9zJI\"}"
  },
  {
    "path": "genesis_data/genesis_txs/Ykh5TAI6koBN4UTQZ3GNIDr_uHNjlpHH9HsvtEkoWLA.json",
    "content": "{\"id\":\"Ykh5TAI6koBN4UTQZ3GNIDr_uHNjlpHH9HsvtEkoWLA\",\"last_tx\":\"\",\"owner\":\"rduXOXjlJ3kP1SF2AuhzLzrIuIVGgqcm2yzWo5JY1f_OD2PhJtff2tbYwCmZ1KWbLiHP4GnqX9zdHJ1erYZy-8d0ZsM9pCqSwqNFy3HeiPG-eNGfdLbHy2DIQCJ47ZYBe8BW7lGNvNBcIIEihs2VPM8MlZKmJxhyI2bDgZO6ONxokw62wQbX-X4JLoG9_-zKIuYt4ZoW6xIkBpHifM9B4bsZKNE3Mx-ictHx5SzJJpoYlbxgxFXfv--zhMU6lnm6b1RKmIewFCa24aE1p5Sy821eczA0b7n3tWGTnPeOUV8mSD5CSPRsXPB1SeyamKvBjYh09OLQ_el83Z9zXDJen-FXBVPUnTMGCAmOa6fKjGZENXR2vc7479jJA3dGiI1KNSR1tYpzHvbK1GXWpYVJYBTC_8BYZEjv5P7zUGgOp14JLHJ-jUS-RUBa7Khf6bB0iiC5DpcqATok7Yxv1KV0HBZBv78HXEyyS5lQAKFx5UHnqHDrSVLK7Bhzk29ffqeAyLHAJuKazM2-TdRDCapsNg7-qAWYVwmoYDrrWQ79_Qf1vaKClBOSquI4Pgf-y5xoOvnl8jraoxg5bqtkStsq67qRuEjERpZgnf7A-ncpVm1VxVmYKkXb_Xs31WIwxLEVYuftrDbl45SEHve-I9h4FKFCj55TNoClUaj907REE7E\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SSBhbSBNYXR0aGV3\",\"reward\":\"0\",\"signature\":\"FmKNlfGWRM4FFG4lUdSWRpO_2Tef0IP9kKnethsdcQt9NpXMFfbmYrXUZ4l2ZNTYXhzrBs1_SeedzIAK7qjbXGdmtJ6T-3LkcTptLWQh1rSmrWciNXJd-30weMRn2kuIB3MplWWmD6rLuirK6MCF1U5wdS6bMxXoqcjWeGNJZrnc0WdlcCOBp8C3Ddub126PdlivdqvCcEi7ymmxwRiB5RHbWMH__EohP2PNhUA2-jWPPPjmZ4whH4Rdn4OIDeybeMecsFaBne4NzVO-IgSFlD1M6sfIAJ94ISmw2dzraEeqMq6y1TNZ9AaZZuYNrvcgjci5Z9StfM3sV1KOn8w0DV7FPlrcdIspix9JUe_0e-8qPp0r5XjZ-m1TO0elxkH-CUU9tJh15rdDz4jGSZUGcpPDdZGaSxBpnb_N6FPJo2ndeRe4Iklsmp6juXnjhG5Gyj7yleoAjBz4LMlB3L3D3MH5f3isKSi7jNKslDUNQvop9DkAhF2RcwEn2mQTWWcYI0fNxKK4RelyUhvFbpzRhuPynoebL7Iq12cjj8hdlUshANhd3PQNl90HiFgWkEoqCWNfhpUdV31xk1b9ApSyMH8mTb4w0HDLiCFJYVzL-B_Zx7ixcBBGkefluOccmUZDcwuBsw-QIs9vl8QgCdqruLu53YqWSghedY0l0uYI108\"}"
  },
  {
    "path": "genesis_data/genesis_txs/YlalzFjBD8CgZxDlI6eNWE3PIIflHGzXyY9VzPPeCFo.json",
    "content": "{\"id\":\"YlalzFjBD8CgZxDlI6eNWE3PIIflHGzXyY9VzPPeCFo\",\"last_tx\":\"\",\"owner\":\"u1m20sw8ENhw3axQIudfK725us0Frlf8ZiRYLF7EJVjQCr03xQ_5aPnfCyYqo5ah2uH9lqwmurUWGn9XHqkKSKWgnhedl8uCT2yLj3Gt4MQLmVOV9N4bvlbmiFuXkB0ywO_WQOsJvx-Q7A3Bu9aWkJrdZDG0txLFe1SUwzAkLKO9oh-lG-yMkFRIGXY1HcCz67tqQqEBpt96QJCtY8zsBUXYLKuDmUY6-d7TrIzvGhMtHFvR5enLoWpbC8CcjyAm54TULyPKwjYUtZ7-vJWgctMxyKDw4GfR9TQlqHh-OfX05gomEDcOsblkGHswvZRGwEY3ibMAIriHnokxu0hnstX7DuQSGWAXir-c-8O7QVmkLrdxReq_7X8SCmht5F_SdaxwcTtlKCkZ2tDjsROPHpK5adiVr42JufnRkBlH9-CjlckSb3EN0iATeAVhWM2cYMFpHnzriuMd0_HDDqOs1EcLcp2eDMzOkZrIFrBeYfpsVal9Ulh8DKPPHZdeL4zQkKonh-Gs4pK2s9sUK2iKIyWLTLFqXKQGvVE1dntKg2AGSq6fZWKCNejC8m7YdMIOl3VICfbCd8F8YeTSa5fKD8baZc6rvt0Nn0wSHqytkc6ZM2xaiTMCfV5y8EUeZC5uh0pX-EfTdYEr4OvpBgP80FCEinx2NSxAXlMYLwBiZfU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"YmVzdCBvZiBsdWNr\",\"reward\":\"0\",\"signature\":\"LRB6f_sQoEEE3nA_tREDgr5wGBDpgGRB5JWPOSD-2f_veeKjWbHCUM_4JfkUzVuEYMWW4OeQcyjZSeaFck2j2z3vgTatv0PyxZcBnZFTtRXByH_ODuEUzQEhnDToG64tE8Ew1pOH5tmIDrkdPt2-_8mG4Bx1JFjIMUcrLWBegmzqzR4u9A-6Zv8GFB8P7x2f0zgIsDhX94tdkxRSxrUm4CA5y_ap6XbJFbR2hHuzCAwQkq51y09HxsPIM7BneaT6_lzWcZJkCx7lTJx8-2i92mveGHUirJhq3j05xLlHFmAxf-cn5Mc3aruMoyUftxKN9qC1aREAGZiWwZpecGY9Je6s6L5mwkrygGTKMdOLMwbUAkUmoC0HLtPr-WHH0gaReqEUqeDkD0nBjVd_F6I2GQVo1JIYX3DdbE6CmGduG-da9tzCkmMv24FPKQtexzGawZ3zNQ2d6YVvuRlmTIz48m4rMP9ZMbiSIedC0P4mgafvuGHjSixCm5zTa_UwfkNK1NC_7vR0yAscdTqK354Tt0kQyz_gqwUywvFaVfuLsld4XpZ2HvMUqYuU9dv5rx5ScmsEwDXDcSuufz-LSu9LwfrdwxzHO4jt2eJf2ax55vAZiiy9DUBFXbHzA428duTV1NtrQuYH3F_huRLiPhD3Dg2rSIbVT3KyQbvEPafh-s8\"}"
  },
  {
    "path": "genesis_data/genesis_txs/YukfPvGxtYmXFF6wJjDiZcvqmH5YItxwsoLbMxWCVFg.json",
    "content": "{\"id\":\"YukfPvGxtYmXFF6wJjDiZcvqmH5YItxwsoLbMxWCVFg\",\"last_tx\":\"\",\"owner\":\"qxI9YZZB6P-vn1l4mGdTz71gLp_JPD00tv_LTNMoyJqaZWQ_P-SLPdWhS2Ebg5ui5Zdu-3kqIeQma-dnF5FQoMDn-yMEoT0oPAFC0nhXnOXPHLZ8FjZgWR8PvcIs5QVJ9AlmEYClsHft7097rbvRevBcO0lMT8yoTFtqherST2RZctaeGbDg3wWoPF-wTOnPdjyS9Ja2O6vap9MLyQmjcSHZsDcJm2V4tcuprncdlmebxxckSCwMPToyY2aIrAMaNiGUkbfn00GXUrgHL2-cMtsGawFEnJSdrGmUXKyi27hZMY53jaIb6K1Op5i2AsQAgAvZasBo9d9Qe0h8Ei_b60v1-HH10KCYgaefrRzSqWupmJ3dzy6kUpdbY6fRCdgCGkqmmVzw3U0KHqpU79HV2wQw-v1Hx-1Li98sY-dr3U5oxUSNoaVG7KecE3j-l0MS9mth6ctic780i2so7HP-iB7v06Ms1vkVRQBgH1zuxjXZqeCbZCT6W4UDSA74PR9dH_qLA1zjgDwCoOgHQgp1r9NxVi5YtYd1FmsjFPgCeMCo6FmwN9kZzKSRkSh9q3NbHlonDXq6EJNXhxkIa_eJTW0H57w3AQaRdZQON2sZLtsGQy1uQAY_xSPtl0VhkcP1-rt2h2uI4ifojvHVjqxrccriFWRJH0ulAKzH_8Jo07k\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SSBhbSBlOTl5Lg\",\"reward\":\"0\",\"signature\":\"IpJq_fKGJsVbd_s17lRyqbvRqOr2PKE1sNFey-krWIVhmGOrzaRmL0UgZsPz9Vz01Exrrb8FAfZ37K4bN47q7JWbzL5uY3Eyagr9FnM7Z9QkcrdhkDRxwiApgog9GQhdKukgb9NvOlxSNJitSHJ5e2umh-HCvpekZd50lyUST6uRiX_PGsprZiFBw218ZldkH8ciGwufA1EO0MFRPc8yGHZf_n_oCMVRxYyvJgKEa3NiwXcGWlRTtJJ6NKq56djET0s6h6ZGUmjvpQ9PVs9wByahDNcjmMROolHITvNsbDE70TIDWUM0Ma-2y96rQr49L508YWlkYG9gBBsbKJ3vAYK8Lj4lB0xGuXXDNcRSPk91NmCXP77bRML4SS3QLtqjiQk6hOkceZz3Ux6tS0aLyZgzPG2ScOaU-ry6TdEVr_bErYMNAyL7VKZnk3aubTc3sw2Qe1ajRHn29Arz2wl1I4dfun0Loq1mvE4m0gdy8ziX1VqendMfE3XxF1UlCPMks2t8NmV9i6e2b-DoES62HjR1tFGdIo0fq9B1ZRxMsZK2UP6W46P3axJqTVp-6A7Uwpjy00P8yyBTIPB9bKYCV2JhpzFCWasccL9cmrqZUsmMt1hYaDDR4CAI1NPv_tRR9DUuEGqTWgUZYj1qoMgoVRMXQvwmJzMPZjo3LhIWwAg\"}"
  },
  {
    "path": "genesis_data/genesis_txs/Yzj2WZ-3q5vKkBJtrmGlVjZND7iqtzvMRafS0TnQiLE.json",
    "content": "{\"id\":\"Yzj2WZ-3q5vKkBJtrmGlVjZND7iqtzvMRafS0TnQiLE\",\"last_tx\":\"\",\"owner\":\"0ydRVVAQPDZx5qkgthV3mDUfz8et7Tw4fsU4cYIxf0XlHPHRu7COGX90mpXawdFFFZ349ZhKjBFMZJWwTCUp6tFOKHJyVPEMMQ-XxLmz-Ovus-5oR_AjjIQAT2TCZvxY6-5WcZZ9FeWRZX9OpyYPs05nsvToFbC6mHWw3jCL_inT8HPQhjwFeiNTy4gfz6lViD7B5GR8_BnzNwzt5grgLNaPqXbqAdByOh0fQ1agQUegg4io1tEZJAaSubS3gexBZcpt0-Ui6prSh8X0gq9ERuMIRM92_6R2gzX8QxosA3fcSYo-8PlwqtnNOVmjzm-NDKmrnUCs3vt9470qAKdZoiuVGOWLFwiNwMOWEyjk_ml1vzPCaRaA7r5Xbb13lfy2pbrzxefx0JWcIGDcTAHLplRAsjza7ezZrS-2OkPPQI1T-LxXU5Ra6OSMPZr8Nq1khziohsaAh2t1WVbSUBMOHC1zTvKjiKhayQ3skhTr8JcLZHbQpSpvC1uriwmeYyMovKOsW6bQoskCWkDHJSHj2wtBU_lNbGjl2axlzhVMSh__3-w6SrToo55APJViHGuj3GZJdEva98o4pGmRAaVoxAvVIobHrmVb2s8sVSL-LwhzpIFwfjBKnsbRFA_YQ3jvqJJYvBGcgFAFLo0iJIcTZd_ElpB6tl9s_s2j30LS9OM\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"TW9ua2V5IENhcGl0YWwgd2l0aCBEYW5pZWwgSGFycmlzb24gaXMgYSBjcnlwdG8gc2NhbSBncm91cC4g\",\"reward\":\"0\",\"signature\":\"fdNgnqEqYZCSfWROIMxGivdIsFA7YNuodeyzMUIYaREL_TJNOTgKrSAXl8v_n08y0Y3ypB86TQqozr66mlKBa6U0fAOlOhzesDf5lSYfAkRyG8PhUABKEbVTsA4gnOZrQEsj2yXwZXiYjBz183dSsBKYoh3YMNHArl8biisbIdTDNChzWmdvFYQoFqHP1GTL2Sq_yd0CFZF7JZtpoF8d8nHoCITAtdMYHPOM59I3lFrgmjRyE79-RDwmjKbYJNfNtYlHGOdaR0UezV-uj_Pc2yWa-BsYG8thsIoLMMjxYQZKVFQtWZ2nmElu60-sri5nrNo51UYIBhYOdThfvEhoBgOxn5lirLZ-OJstfZ88jiWKpsdXgoC8O8FmqzgVYY-dzOFPet_yoZNCEBYD3Yy9HTI66n0RoVoOo-A954J2w562paszOwu35fa5eLVeYlXNTDLsc5QGM_csrwd49PUBm5kLJrWDuHh39hzMTPsVtR2gekX3iJWufDFFJtlJ7AUExSfhaCNptBtUcb9Elhwdg-ayZF-XLWHYez28eP3DXLh2BZnznJ-kG1Kv1GmJvbzvC6EgHcs2fOsDV6MZpInv6rkCbPcjswjiIuywdBqpjPeWzqs1tPzxLb3J6SvazCM9gAwGJc8cZtFfzFEdX-TNLAHg5pijIzJ1nn85pTRrtb8\"}"
  },
  {
    "path": "genesis_data/genesis_txs/Z5e9G5QMZ_scJQ62qoqUs2XSuhknTuuAIhhGmfg3Ye8.json",
    "content": "{\"id\":\"Z5e9G5QMZ_scJQ62qoqUs2XSuhknTuuAIhhGmfg3Ye8\",\"last_tx\":\"\",\"owner\":\"srhJUJV2ukt0E3h2Wd6O0Zgh99CPDZlqr0K6iLCU36uWMDa766i0IWP9OKUB-HJk8w-B75RH_fgoJiLMwcWEbh_Cxibi2Sd2iT-rRTOs5QVHD3Ek4GzUiebQGv_zlWF4aHwCnQcT_hqd6e-_YYLK3QhDZ9NOSzPd8godYUq69PU6BUYGG-TKYOg52Q37hH6d9wffjeTTLwLMDb99OShu5v2WWnkplpq6aFp2oLTowsribUdRBQomYtWATET0RIPesAm8clhJFnhRh5YgFS3XVAxCFCQb2uAYRv1yUFzbdIGgcuUwLWzuagFKuTUvccOcU0Vofy5AIZ7lnbvQSKbLyb5gh5sH3JRPktlW1l9TAN2MHi-1raxSJdDSp2wmJv5NZbA_COgcKmZNzM4Nu1l8E51AAScfbqh_yVR559T11fRXVBlZYcLViSwVvorohJ6ZNIqyjjcCh3sOZ2_FQbn7IwjKOJtneda3Ng1pT1CpdLXxhUrH--318cbNwGzXbwiwj5pUGwVLEinSNnYmDEYA0R5dFw9DjKKNuiS7ap7rJATNqwL9CprY9fJr-vwPLxwtNgtlsTMajXsBIP97dk4LHjLb47IkqYegvBs9KQIDZa44J79sBmZrL379Se6wXdldb7xvKf8KAVgFX1358gGEN3HYCFVmvfEAmsAl0AZRQHU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"aGk\",\"reward\":\"0\",\"signature\":\"hmdah5FA3j_JwwcarTjhFtR9HzHYK7buqMyrtBEmrTM0v2r8zy927ImBvr7uPBhysY_0Kdle2E2S8T3_9WvfyMvSn8nqWLV2Z7YgT20EA6ZyUFEH2jl6r6hGUaWhtlx37b1OC9MoDwWTpjj6Scz0SzzGPZ4UX7eQWrUJQ-udSDVgxwSoRMSF9wSMNeYNsORGSD7HVHOBjQqPAoW2GLFcuNCrWzQ8EtZDP4EVq_fXxc8sNcqi18AMjRqb7pHFQMLaTD4UWaeEaSXObSSSGlczZ4CrgKoMfaeF1fjDc7l59Cqoh7i9hzavypvJ6bVM9Ci6H-8_N99UvHvS_v_BmLaOSaG-nKwBWBoSRkQo8HmHR-kUNO_u8awCu-OMaugUe0ptGlmzUP9mFNheIw6O42t92b1iCar9vUixO9CkhjYneAXuKrbOq39VbhEIHuZGpQKHtaASuuPXrlgv2qtIgyQ-icSVI4FPtQ_1Jcs-tkA0c5gJnyNhIQTBlB2msqlnggVlQUUIgV4rS5WmLHiWtymz6c7e8DlH78vJJXJL0gQGE0QD6ATWsfSgPIgHq1sB8953ycKf1qDa2WaAc0q86geSYoP5PdE08cSHehAkz3cUn8aDMeisbLJeUq06-38reDnG_-M0gsiV4I0MP4jXfyvLaYWUC6bkQO8U9V_L5BMRq_8\"}"
  },
  {
    "path": "genesis_data/genesis_txs/Z6IgRWClifhTSnomxJet2WLw8UUaslmqAi2nynj3Ke4.json",
    "content": "{\"id\":\"Z6IgRWClifhTSnomxJet2WLw8UUaslmqAi2nynj3Ke4\",\"last_tx\":\"\",\"owner\":\"o7K6nzp9yXeYCITkjRfe70OLgCayO6oRTPtgnGdXobRGdi_dST0sW7EYcJbMgeW2ck0k4hTq_edjznz8WZC27A63IUGzPN2sDVHbipUUJ0oPjyL3e8GzmOY5tTmuGpMnjY4XjNatPGakj_SPUqSF6tJfWcefVPvT9sgZdvyi1SAkLLqZJKiGbxgOZFzCjjnpmPIqY0vb-BKRIRgvugtmzIfVD1jNiPD2vSEZStCWMf41AKZuf15vrhOIbyXE_iFPzEb_gTkMnD5tw9Jz_3ru8AIKckfZaccCq_6rpHQVbG6U3BSnqhawXVT6Gjc3tmt4zOIqqPf28nl5I9QY3lInphAI3QLEDotDvd7N6zCOygP-rsMtIDC2_agIkOnhxMWgD150Wy93xH-6reEnJwuZpF65Phs8rUQ2J4U9nLyd61zpDAN2y6fCxa_tA54cHYrt6jzog3cUAlXxdTr7vc-m_FbAwTyKiYOZ3Vz3jZowUJqSjfinL5CI4R38x8ksBD_x-01RGNH0MkZMofaoqq2gIgKBESjD8jmwiJZg7YOCHeQ71KnJl8b8sGuWYcIarj66wLkh--l1TQ8rTkibDYaBFln-91IwsxdDpOP-feOjcWnkgTzYYuuCi1NW3SbUmJ-LQGYdlk4Ym5p5gT1st6dLu2x29EnewevQ0QROl3NNJt8\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SGVyZSB3ZSBnbyA6KQ\",\"reward\":\"0\",\"signature\":\"JQX5Jjqvfa1o3z06xGNxpd_7JD_XxYxiTYPzBr0X9CuQH8u1GoYx9YFeeBxD4GjMxwMb6o8n0z7NUN04B4YbNavSlqHxBxVqVmp2mUBZdNX28NfWVQCTHcrWQ2xjtPsxgCR5Up8Ujql-FMr7KgjqSTGv0zCVsON48Q2aVd5FqndGGx7ZqdH_pP-Ju-nqqCwCpy0mH7y4JNGyC469JeOBq6EEduIJIqNPcssWEk-S0nA9QGFZm4vL4gmcH68OfwCP1WjDXnhKPK9gwnGvQUU8_lSanwkFZFj8BA5z19AwjnHy_qdnUYmclZJAXZJXSl5NrTaEb4DNnPKe4yeEc4t4kQ0GtDNHKaHgQHwdTDMYRuQuak9zEcgSgVx6wmZtQ0NivyPpIFgAZniP6KEFAOCqnNpu_yhcOO_SKu35AgJdmLt9wjXyglZmn7BFNqRms9i37Kpaxph0b22VO_mR1Fr1l6J1BLiKSf2UTy605C3TRjyAuaQpgb07LnWxMI9Lkv8xNCT5XKLk-cSml_LSup-8MIsFpxMT1pzIf9BrFamNYKyQpF99Leh7UCylttIzJ3Ty9-HR4-nPXoOt2AXQuSr0D_9jswrXYAQ77LFKEeYWIZMllKAabxKoALbob21xOqwQo8zMWYT-F0xQDTDlP5cxpETGRc8KMrNpmCICRgZQJMY\"}"
  },
  {
    "path": "genesis_data/genesis_txs/Z7gfizrPOypT4Pagg3oli5g8wA8pbKB0ZJnrw-FVyys.json",
    "content": "{\"id\":\"Z7gfizrPOypT4Pagg3oli5g8wA8pbKB0ZJnrw-FVyys\",\"last_tx\":\"\",\"owner\":\"26wRF9oNU0akucC-BqaFA0-8_6tt0wcUoQAtLgu2DyaUZuS7_qsvYqILCqr6Kjb6a03Lx42JHVtrt4XrcD2SKSxHhL0Oi6ugqJhJOZRiro8GGKjlH_K4-IhPWMcAr01vXUh2Pq3qLtY1GsU2zsSVzY72jLNyHIpusytNapGXF9GH3xT6RMMXsARRbEuTmoNNYw4YpJxiGIogTiQLBJ6B3XDRQ3B9j2xYZRF3PcTpKayUozD62ekTt5XEFT7YbTPElQDeHwuoUTxvf2WhA_b3sb_RJUDqSLUpptNAybaO9rhW2TwymQBkJfKIOUIfcREtnh-dqbpvvLdg4flyAxGhdW8I7LkD5AkqTzyHkv64FcGAbzB2nxxyYE9ZnfJ6BzvyenBCOnVVUs8c5t7MrXqCp-MDDCCijXAio0ksGkB7aIwwoTfLSy3XAmzIME8A2vTcWMJsB4U4MFXtd47Z2jBXOxdHS43CjdmI_pXYWvmgk5za830XjYV6idLPqZjPsusRiww2ZPD0ITPRs8yMGnT47Rd9HZzVzrCobRVrQiSGJO1EBOk8N3iz2czxyCNR8MOsKNI1SSzN1Y-EZTze3oz_AdyIO1sRFVqfxUYSMxxHtmtaOaz9YN3G3W-4MnzadEC34ntIrJE9bUkFAgiUHPmKi-sepfNPAC7W_xwQ30TgNws\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"S0FPUyBSRUlHTlM\",\"reward\":\"0\",\"signature\":\"qhxqHhj1ZR1pA-HQfHhjLUuo90AvFHBxoQnvpz6sm0G-ROtKIWaq082T2gryX0wbYlKXU05QVmS3L_q6mJoKYujWfU8zgsCpN3cUFa30ChbQMw_5_3z0Pwr2d7Lvzt9XlIADcpbtRG3IUEdJMgBKjbSrWN7qph4uflVFQ5veHwOpxOMM2rqR1S-4LF0S6gE7xuFeu8o_ZuSi8Y3RVBpaxl-fU2N-SQ4rysNQ4MLPKNw_2YBWNwWxvkpDtl4_UJjaR73jnEfPsDFIW3hFNqLhuRIMukQk9EmGltuxh3YhZFmN0MUfAM-KyG5QC6UA1S4tH4ZNjPmpI3mzr_MCEAAridIDwIn4LDgIr4wAoSzHQx69tQjf_xAltXem-7-HOCEYNWn02Qj5Lw5dlD3Toxx4qio0rVq1OZ-xJ4JUPs6_Mv8Gcx1q6-Dp8S466bHvtFi_KH94s74i3XI6VcGDGKVDUs7Td3Ma9daia6K3miUv5zBMeTKVXFMN8cdh6eeFuDPBmAz2swBGPAv-htbksAx-vjXRl93e4nikbk2OpOA93OXGdsf2-r1xRazFgptGzHsWi7kyLUasay4oLLFKZCrPOxeT2SuiA7sxaSdLTPO_eKWp5oujjGYsOlKBNdGdCz23iumWc03yRqFDuojEDf-rvlbh7mmVA8Mofl6LLFSlw8A\"}"
  },
  {
    "path": "genesis_data/genesis_txs/ZAk05et7CFN69E9NwET2mSRI0ISRigjMEjcy8kbO-Y8.json",
    "content": "{\"id\":\"ZAk05et7CFN69E9NwET2mSRI0ISRigjMEjcy8kbO-Y8\",\"last_tx\":\"\",\"owner\":\"v6AdxIYE7i0WryfruQmBSXNH5au9_Y4F7hP53qu4TSxKlypAVImLgGaG6CecuB9abqkDZPRATTQgw0lTUm02VUGzPipC7mfce98odcZVXlS5_WP2L6pvQD33oibPc9GHp7_lrEofhK7qZNYnHZB0FTmtL8_b4zOy7LODZA45HNwl3rxVkStd_UHxEZ9bv_1_j8lx2Wolk4P5rnAz-Guqc4BP50VhiX4iJi29dBpizxwJx2TtlI5MFBC_o6yK_-TaAN6rHnPlZUI-Epn1DelmKgOJddf8K-CnXJX4SiniKxynxPvdwIhhJTGCxdHY6Tkwy-ceO2_ACQiZk_vgex8VZPmJRgZAFu8dTsqhEL7hPbZDhjQjPvnkbyUURioPp9qrePzZ7l2pcVXr1ZluFtlaz-WglEfNsYH3zXLucqiLa4mpgKBgiC5uxvs89NxddgbapT2w2svA1Z4iHdN6E_EJpDyz-9Le2z7KNEp3e71U-cQqfdQI0JiiKDVl5dztfZLOevMXKDJj23lS-qmeiYHM6SgKkTLTRUUE1LDXGOH80SMiZHQPcqmdXRZlkfHZzNueDK0PJYVlvQ0cayEceuzwjP57JLxuO3C_xhnDIVdQdEkeozBSDMbmJvK_Zg8E9P6EUw2huHcJzgeq7FpOyz6PJot1qntZ3EBmoVBlXm_Q8lU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SUFSV0JUSFBJQVNJVFVT\",\"reward\":\"0\",\"signature\":\"fmSdIb8KpIiyrdEYIaMpIEgTdq2kOr7Vwst-n5RSOcUscbj3C1hK0Nv9ZUox6q0v4hcFbgu6KZQl1beHFsZQjLQ-GyHlUYw4aZ15MoEvWLU1KDbmBlFf8PZszJhN4y1h6CTCe0cHjQ25-_KlBkKpxiXYDOfID250F0fkxE-t3rkUNUw7PBDHKfwwk3QBLsnofXb4Q2XYJptdJdW1BFWhfEsO3Tcbmo-irqZT00pyKCdqLT-QisypQtYHd6b3jgAg6T7YLbT16MEp64Mk1msBeQvEsmlAbxhNLeIZAhYP1Hd0dYGuUen03LkgBArL7HaTrsCQxOAiN-U-ZJwwooQclWSJYI4Yi7i5-wvvcuPMhNfQLafXQITjYrq-PJX1CEc7HgbcvesLtyQfSAwgFNyKans2NDA2Bzlmov7JFKvx3AKjRT-AXZ9pRnVKWm8O21wppn4d4YS_t2tn2xZUUNQfzHDQvFOkmDQq5l0D1zozmONwG2fq2s3e1_qHekG8xmDOOeDJoq-b7SH2RwFJKDLhcacqchXFeAKB4scgOu0HMyu5lk8IuK-vFOVTdSuVVkwnVry_vt-p4OR4w0zez46mX7P1nfLkudrBb0sLCiMbB1G8bDbzuN4jFhRIKGsBdn55vpl-KTXvgse7iKSS9T08fi1o76f6r5o0ZR1isHDC56k\"}"
  },
  {
    "path": "genesis_data/genesis_txs/ZC44Bxrx6AtNJYLwhvpALuINZRBXklme3tpeJbJ2rdw.json",
    "content": "{\"id\":\"ZC44Bxrx6AtNJYLwhvpALuINZRBXklme3tpeJbJ2rdw\",\"last_tx\":\"\",\"owner\":\"qlTm6USprMfuH1KKWHl_lk3hvN1XjF_Q9nTd9nYjXGrnqSO3gbEBeqmNSr0GNHlTCMr50oGuosi-q_p1wyt90ZRttHYodZkZs-ecaoUiVJgcv8ccvqmAqPRb3xYF-qCx3e8d8Gekt05Ai5LAAyg5t5BZ0bMt6XekrmKzM2j8QsI6A8YJLpxS8JmPiP-JMKOO6K5-IxSyI9MMW92yYrUgvvXhtNjbhdCSh7OJKaJcAmRESena1bg09xbiemjlXwO7cK3ttUafV7NHxz6sllgz9liYYSaL_xmeAARO-saZ_0PxdsdazpPm8S97v5zXrNkpsFKk-giDHyrI2Ve94DlOQ4oWyVRYSoQ9xPbeBXwNN-IHuDkJDhyvnXWMC51PG8SYn-0WlMGEaYAzJZKmwFNdc7E7JvSF8PyX4M3wdtPUPXJY0j1Eg-p1zFmsUOC7688A12qj0Kd4EJJSg9GLiwnlV-TwCyEEUg3O8MW8nYEs8Y24RhrmjbFH1e4ce5Owt7YB4L8qCZnABboyUHvrItcWizqErpx1sVaz4UyX3n7HYlQboZ8-4CH9GRBYmNEoZjHJf7_Mp0MSIWne-tng13nViWaF4Tm-1ebOroWsLcWsf6vvHsCrb3kANhQQ6cUSXXzTLSjoLyO-1FwbiVRJcMRYnsb0pT93OtrHyho4R1EZm8E\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"U3VwcG9ydGluZyBBcmNoYWluIGFuZCB0aGUgZnV0dXJlISA\",\"reward\":\"0\",\"signature\":\"hKBvsuuiaWnhBi60ixzBX9aS3yMSb45ZWvr9Re2rLo8FjodHA5cDyAB52W_bEMbM51nozpfBMqmO8FwDy-Lgqq3pSCxZPxxNw24f6HCw_Qhc00Xq6weVSpXATXS-OMRKa_6Bi3I4qMKcsTohCbT05XrQ2WOpNUc1rTzYSIXJ6KTaIg3bM3hEV8nqKvtyQxIl74OHJJWX_5yxAUFOdbyJrDDnNksIEamZ2MRlSVdhjUFzhHcQeOJAwUCYVfCupy9YB7bEi3KalG2S-k4c_bTx-PKVK5qtsiCx_2ZwShQvcimKAs--wEEdOgxJn1v2GiP1YZ0ZcXLaZ4d6bMrLP_Y0PCbd1Ch6uA128knGQLa663OzFs1HEdZ1_6bl8084ASwd0y1tYoseEdQxRdz_1rIt86SsZA415hrK2FqbCZhWMZZWZUN_AZVlkRCB8jsSPGvKYFNQg-x3ybsz3ep0fr0pktM3gMBLTaJ6I5hwURIwjRndckkhUH3ojYU0VyzN4rLAA2io3Duhdj2l7cvbQCh2iRH3UlCG_xSmrjNVUdYz4-boWb1u_eQvRJ06i761mrtpSFE-gBDU5-rXesGrxd6hkp3BaKC5GIxAcIgQiMW0_Hh6BEWchw2CULSh7ry_dRSZwI_rdaMLYeCQDGyzBY1OtaeNL_Jl_nSztirvjg1WqVM\"}"
  },
  {
    "path": "genesis_data/genesis_txs/ZEB62vqKvkPK2s_RmxgQ2IhafMxJ_TXCGswrrKLhYiQ.json",
    "content": "{\"id\":\"ZEB62vqKvkPK2s_RmxgQ2IhafMxJ_TXCGswrrKLhYiQ\",\"last_tx\":\"\",\"owner\":\"pqqdMlh8U90sJcCJQ1rT2sStZ9MuqJiMqj25vkhyf_Un58csrD4L942JYnTQ3VCTiLmiZua6c9ZcodD3k_mqWxgLpOw1ZyxYuo_W87GZ4azGgfg4jbZ7joRgtUDRAx8uq7rG5PY5UtRicMlLv9UR2X9ActQD47WJCmObaEOkhOhy7maaRwbvBZa8-A19zP4rwLbWg3XY_twQRll8D1zEWTeDI8tjiJdkP_0M5Ee9EzWpeOwum7B1JyDg8hFdFks4NUts_Yz-JrmkneUu9fICUwAHZinyg0ttmDjmhALw7o-7OLMl3n7fSYpKZD-uTpX-frJU777tpINmriGkESgb6aWSTtP3xs6qwFmTL4eZdY4lSldpY-_sRC_zvXFYbEpOoKIwSNruOUd5GiTK1jDGr_kWqpa2ZBmblC2Sxk6pZQXgGYSHiaXaWUTW2VxMJ57u1g2DGtocLQqTVMJQD0WDsz6kUW8sLdjF4un8ksMwT293zPx_vKwbDTqZwlUJr_47Wz4Kpx68Jw38UjxOVx4v7q7p4ZmCC3Cdj2frKyiH32c_xduzQF9PAqf0VSeG3-E_X2J4vRT9RGnSbIqVF-mb9at8ZYUVhMFXAEG5vpWHuFd_c9VwX9cCei7IHeQkfA3d4TrrqUIIuEYJx3WCMrL5ETteGdPXCHovkqvOVk97pc0\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"Rm91bmRhdGlvbiBqdXN0IGJlZ3VuIQ\",\"reward\":\"0\",\"signature\":\"FAtyZPfd5BPBh446d9UKGfMaePu0uSSdfpVMmDrfJvOqaWHIAKGYi-DV3MckN9FLMRAlunbuvQ9NAXppe4PmRAcbhdru-hRL_x6-ASz3Mq74Z_XeSB4wx21qtgPD-Fb4Lfd70IKvVHXaJN7Oi-F4LhdTZ_WMP8cPFd5jjGkETePjIOdHaQDjJdSWzuF7N65nAoip0yrpUxMTm9LvVO_K_Cx4lYyuPLFgnD1pH_z-X4nfAtbF2YXJPbrmR4nQM_THqXI498wxFk3p3s4g1iog5EY3buEQ3QXwbhSNKA-kTPBCFi7FoZ4NU7dWBiXTZha3KSaVGIEsX0Spi5mAEUO7yKe_68VsKw1e17LjiX6xXzoB5kCaomHm67mbboJ4s5wz06v3odjy5ULSqyb51aLogwhDg5ScBdN85Rm61luoSaszQbO4tEsEBR3w26SBnO7eAFGzD4QBR_GlZTV0ibDqDTqbNuXHq39jgh2X8hZwoIHc5jBQilXt2TZqp0XTTh7FRfayhxrdSabOEmp2mRSgUXMYftBSIHJlZ9OJNX7WacD6u_oWRtTJcj_icC0ssBb69Eyn0gXPKurcBYOLyVdsDwKvGqC9XQR64MKS7oOpmER6_BBNJoomj0aXCynecMgd3C7jnBqHFokQJWnWike9Zkl6jbARIaaPyrl_GdwzHmk\"}"
  },
  {
    "path": "genesis_data/genesis_txs/Znw-6H_ayGJBReeQm9z9WKulBH1ZzrOovdMsNPcIe_Y.json",
    "content": "{\"id\":\"Znw-6H_ayGJBReeQm9z9WKulBH1ZzrOovdMsNPcIe_Y\",\"last_tx\":\"\",\"owner\":\"1EIvvKlnMTRhtAu-twHTN27g6TQtTptx1humep4vjTLiVLlhfUbXZyWVhtiiVJfwh8uo0hSr1rlM0wx3ZWcksZG_bEHlw-m3K8yx_VL1wSbgk8hNDDM9KS3QDv75ejvr6m__7IvcdbDL78kD51aHX_xI3icrKWD0rcUC2gKuZXoVf9JHrivzsJ0EDUxrxARVtIZ-pCHDpxf-8DEUwcdbP0zvXpyw7q-8dCRhTGN-DdXUT8Ave81mnjjLaLjeFrG5Sluv-RyBR7fv4wVHSPFpU0vkzKAxhf7gNviiNhnK7Vg2lg3o9FbDW_4Dj4iEbehyGJqWYB4xvYQMDw7YkMyHOH2rOLr-WGPL1gNaTacE3vrhlwUXJ3pmqFBX9UIFnQKC9M16dL1vRQzsPF7Zwfz0r0nWtO5_MLSpt03gA43DHtwlICqj9VMmcIKPVC2Rywcf9X9j1Vx5_HkzesjVYPS48zWuySQ2_9nL-hyGCxcsMS0x0VjTV3oV89m0Bjp7RybXHWz826LmIJf1rulre-yPwU3WcrOy0QYH-YjDJrASy9Dj8FySe1-8C3vtMOQZqO5ck64IYFQIRnyEy8JgILLG79LnZ-cRWqCs2cuJcqz1n2q14FPJMbGmkw40RoOqrooLHBxGnqxWS9He9eGOHtOkxc9VyUHfdsZJy6aOdo2KJTs\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SGkgZmlyc3QgdGltZSBjb250cmlidXRvciAuIEkgaGF2ZSBhIGhhcmQgSWtlIHVuZGVyc3RhbmRpbmcgYW55IG9mIGl0IGJ1dCB3aGF0IEkgZG8gZ3Jhc3Ag\",\"reward\":\"0\",\"signature\":\"x6mBqZwIm6gnap5Z5Ddzb7PpPopFPEuLiP58cEzeNSWcc7TOMUIFfDg2DqbowbQOos4b9914DHI3IYTsHllCjR4Z9-0MCMSrKGXTPUy7BXn11Th4OX3zBmHpGiuTobozX59BxoG65gNNUEF9nc-fq1GCJJAwaa2T30qu4jg1-y81DF5VAsaSyE_nsvUMpVLrR7Wc_aTOHHQLzQ3iOTXA9Xaxn6D8_CyoOI4CCQxUBFwDjOWuRjVzgH8iNxuYeJ4CbadWVrUa1aMDW0FknixtdcBq1aM8lUfWaw_vwqyKgfFyMxIK7xM82z957USHNFCPXXaUNEx4klfzfSwKRA57_dbXexd8wHQ3yk44Xr5cpVdtw53SgguhJRfAjKLPGM-RBcBOKS8dOrJ4LelQuaj-mkU6To8N03ZxBb6LoZ0cQ9fmXH5UN-o7CghuUituP2wPSQmufDPubG6V8GXA3EN2L4Q8yhU2u8S4ANMVS1ANELUP3qSIXvf36uaaD7uY8OJvn-2DjC6lGaAKzfpMsrj-bskH8JXcHAwVY6jpA6WMz6tYrwZA7vkN_zz0vtDb6eduWI6LePOc-LepMOfoexabvT5XKMPsS_spv6F_TWIF9SKmi0Z4Zqmhvuf1i7CzNu9Uao5XbII5vaCyijQI_0SD--YjHVaB6jy_IqJuRNqS7yk\"}"
  },
  {
    "path": "genesis_data/genesis_txs/_01J_SIBJ164H0EedSfQ8h0dMfqet66WKHwcOFQEsMc.json",
    "content": "{\"id\":\"_01J_SIBJ164H0EedSfQ8h0dMfqet66WKHwcOFQEsMc\",\"last_tx\":\"\",\"owner\":\"x1tMT_9wL_r7a-wkJc21udxo12FpXdvyHytN0oIg-1Z3OAbMmP9kdVc5PasjiSH5veE_K0SNssre8HTfKaeDaNq_0VKPZQGlGUX-b7SJ8b_rd1P_C_BYix3MT1_JU-QYHzw5RIHSdW4XaqDmdaqtikx-Wxx8rH0oQ-e0FCjQkEWZkMeqSooapFuVIOdTCnS0qUNrkAFL2ix7Pf8ZEMMO87tBA9PEBq1hQ6IhZSX_kjBX2p_6UDK7nSzhL4-9DBzMqDl_AG5r5TKMFYUt50hc2PkcYHD6MFnz4CFWkzIKqS8vM17DsETAFqHHVnJExa7xOufIGVxJvG1kvIYOisMFISOtZI1pzDBNNOGrIKQkjkzE9gia56rT1EVUBXxCKL98k3Dy0arGDhL6kq2Wi04-ks9urLoZY5WK_lShaVDCTsr01W8RHFgvjptP7LcyESnK6enz1IDFjbLSLsSh6IHuR9KmlcZyK-RPhV_rTYz77LYOd5hMAsosGjfbtPVurjwoe_ffx0T5CVMYDHy9ieLVP1AP_Fm-K3NoXXv44sQMPi7MsWZw2IwYSNbikXgH8d5uIXpRrWuPYwlsxL1l9IzkDwQJl-ftxTOkZJ-11n5e5biMznXG7An-_KwZpRgtLR9Gow86EDZSEh8rTAWuPtUev2XaaEo7G7RUmGpCVAzVPh0\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"0KHQs9C-0LLQvtGA0L3QsCDQtNGA0YPQttC40L3QsCDQv9C70LDQvdC40L3QsCDQv9C-0LLQtNC40LPQsCE\",\"reward\":\"0\",\"signature\":\"OwRY3S7hCbRJmyi2pQztvaE33O-qDDE3aKXek3-W_dPXdV3G2swQ87JLb9z4r8pe_uJSY8gCoWW5akeUtZ_x9234vGO9NioRAjK7wkE1YvA9iksQr0FjyNVpT3XbOssYyzNvKwyhCcvIG141_h0K8_SPdySsX0l2pcmAEiBSav7c3l9ePlyyU4MwPIfrOKygtY056vRo76uDpPJHKmM67Q-jtgAQKrjREKJEUlt4ZSSntkVkkMmf8uow_VZIB1zlY60GR46sa0O7QCtWmuOYjOoul8-63yDmw-ydEDsczgquxBxcu25qSmHSrgFIKK1J1hCGUVOG05EZvkaCVsP1tM0YdYTxYYqe3BwDRsNHKUX-OQUyD9pMsDOQVOAZYa3Fmli0xbxRVwov6sFJPBwllGdOVq73B6faO1tNtpPHa2c21VV2NWz4_qXuoFu-fKtP0L4KwTxMGl0StEYbGn4YvZdTob4P7xgibOp2_e_ql6dn0Rj_SA7AvaT6sQYtko2s4q31nM19njzVK1Z3P3esh97qi3muXjQmxQfDok0uGSyhwrG2ik_kcrgajfUpTV0ve6WYRXfbVPqPUVV9mrRBbCjA9_T5EA8ytHBpG2XRLqTejQuhBCV35LIi4XtSf04e0fW9KjbUWIdI1KnxdJihICN20ppKCTuR5WjJ13hoFGM\"}"
  },
  {
    "path": "genesis_data/genesis_txs/_Hf1lw_E6Lyd-0PGkCRQaN10cdEx4M-hl9y-zWiDo8k.json",
    "content": "{\"id\":\"_Hf1lw_E6Lyd-0PGkCRQaN10cdEx4M-hl9y-zWiDo8k\",\"last_tx\":\"\",\"owner\":\"qw1sQn1s1D43YeVbCv6amq_TOyhPEp-L3h6VSBX03q-qnXqJa6CK6Q8ocsc1a_v_NG45sOYy1YixtmBsqvMNsNFm60YVBYg5Qde9JK4Nd1-n_qTNPev45Yak4Jw_UScQA_s74CgvtMmDQAZZVsToyJDOi0HK8BdWn5RHHCMYCX6m1kOtEOr46Zjetu42oQTLNxy2zDCZTE-2TCWu7vUbSt4fjgESJX-shwL3ApWrEDBIDmfncyWW2ajY6Ctk-C9nD79hbWpxghps2GuB489vplidTA5Urp28RfMn98OhydnJ-AENTPJvQ_IWyHPl_3tJNlMvsw_xFT30QWI1EJ7ZJ4vWWwmhHPyA_N3w5GtAadaVac2SVh2i_kRad86tye0r1muB88AqoNc9EDSW9vSBNVGWsaZbdEtAOy2ddv1Tozgr7jsK3kWjjJRSiSQ3Tk1nI1kHYyRz6o-FZOExw6n5W39iI7IowHsNzq9aAhoA9NnilcN1GeFmBcGXkLUTV2NcSIK08Q0A3qg_4-GKTZ5RLgkc8DM_JgzBHEPLDRBiWoc9JbcxjkMe-LONbXBKiWpDIawclBQQzfFiglXnvfR9Jj7ebtGjZjoezaFENik3XXym17OiOUq-QZgDUhlOExgY8IAWNu-QH68Q3D6rjpb5pS4aZEU57B9LtLO4mxtIg9U\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"UGhpbGlwIER1bmF5\",\"reward\":\"0\",\"signature\":\"Eqd6HECjwRwijPj_NFnJ6JcLfIwksZ3tFtmczb7mZs04i4s5RS-u4Zg5Syr7mVYy8zaHrsPUqJeJtMarXP7EmBrp7NNJrYSF1KxqdgG3duecBmCibHmmRvcS1rQq5DCafJQe7jIqJDDRY2i4mvJHRuljyTHcNAR1qWnkDvQPLfuCUNB6wO697hbGFw6b48qNh5HwYZjwm5sVl3MjCikQycKDQIp99rdGThLdVJgAz-Yglq0nXemYpSPW4tVMM6rUCuWjyh5x1YEghYzL0uGtUrVh933HdRDCrlWnTfMKp_YTvgb-GMJbvLqObksWDqBQNaREOdTB7DoRXBGq6pTQLJqCFRZn6W9nK1L0xx9uq3hXqLUsjh8-dsGrpJ6pGrC7XXBnDDlPGg4dduyjL7Dq6dqTYqe_S4KdtWWsnxcZA5g2kyElMner2rbkcWcn7uduJkpKrwGk2Z1-ZGT9SkRoh6-6tAXwfByQ5ROp0tvZWULHPgIfruQzp8DIJTDvF8pyGM4pDjknUUH-6e0KsQSP6dOisUwy-RyIIQSItYBZlM_HD0PPBShhmyJ--DB8F46gTXDQIw7SC8wd9uOCJVPJsSV9BgvMmnIElQDXznUL5KHW2ynC0XW8-S47BenuctiusPw7IOCKdhmbZGlZSPZm-14BsjKHrPgCATHONKBj0yc\"}"
  },
  {
    "path": "genesis_data/genesis_txs/_QEE09XylMYgab9MYPvrrMy7v1jKWh0bGwqFvsBsO8s.json",
    "content": "{\"id\":\"_QEE09XylMYgab9MYPvrrMy7v1jKWh0bGwqFvsBsO8s\",\"last_tx\":\"\",\"owner\":\"q8S0gFgYLRPs4oyDYJeUxTm1AyaQPxKwtsWYH3dnNhox9J4vgzaoWSD6p8IBGHCmJqaA1XyAMOyM70M3qOJ8fnlSd5aVqcPjNOHXwl2tbpqODw1kqH2mjjheSfx3ixXUckUyP_A4Xxbaf8c9Gr7A31i034TsHrJHn3GPnexvY85VHL1xs7eAi5WDPqY8PjRfPPKdCN_9_qlgAPNmM7gghPlboGYJpdRkMkWOBrd9iaIuPgCzWadEqAyPWDQwk9C7y6VdSNLC07GrvDYgt-4pWG5STV9oFYEdNPePWEYRorgahmxrglBH7Jurld75p0ZVQJN3x2S_bikhi_Q40KOk2hzEY4V4iGcDTX1mXZdqV2ZKmHV-gnwxyh73H6faM5K8EbTKs93qnjjsolZNCtadpPy7JS5nqNfINITCgNaIYMr1U8R-DMIlWnfGYB3xqfh-tQOJr6bWkPo5dL9E_vO_Iiz-_Fg_fv0cWqdb4dfQzP7Gnv1o4kIqNOw6_fS_3cl73NdXD5yZpZ1LJ484WgH0FGHXihmWFOgp9ppB8ccZe_3Tr5Hqh_ali_dWDnzn_aHvtA-eBPF3lcDwU-uk6SEnEcTEUwUhMriTsjulbat3GBzkBCOvxRrhXPgo5_24pmxoTsfKREnavO3vTd5njfGrtEZh0lUAZYawcPuRv-gf_Cs\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"V2VsY29tZSB0byB0aGUgZnV0dXJlIQ\",\"reward\":\"0\",\"signature\":\"BnpmILLQ08mY-jC_fSP2EfCEm_ThclPazcSc-d7JygvLB5Q-PIDvfOc6--jOV82VCSjrIfJZxTK4jtvizQ7FbTg6ialQCBgYXlMBms8JjBazkwo1gveOZpT6lY8AlcVKCeJt0D8E0zZhlzpiAlyi1gy-b-oJjzadBnOT8YFC36rMJWnAMq7vWRpSwyhVIu687eq3CCKr_LByIsUSjEoCSQVAO8aMqe3CWLrGKYV6hCYDyZC-daJbhNhTm7SM7drMN7wjGxQ3u00jsgHz00DaVIqPMLTRt0oDdSV3qPqrNBBSVl91siUwRyyfKLKKLYWe7I4g9d7oMVUwbQlywU1cJdxPokc3sJjPFcbNRlFNCLmuLufIrB1kV8MOVLsopweZ_fAPYLJj_CdEwmeda4pWlNcgbPjNCuBDFMMOyc12G5VFdhxXopMW33FJAaXV3YUnwD_iXRjOxWfuFmDV76KJ9ViWCIe0oQKs5myvUk2QIzOAQRUToxWSt6bfRpqd8iMvietVnOUGcASqEmc4WK2k-Uq31PJRKrV5SvJS6KplZk9NAoOK35UGAS3k_adXkAfIdIgZjr2acJWZ8Cd-5VzaXL1shrx0fgioVlHT_ECaeV2Iu735T3pJJf103YK4cVqfvBHtNCNBoBszIkXc6uA-qefC5q9X96jFMkwonoyGx74\"}"
  },
  {
    "path": "genesis_data/genesis_txs/_fLFu_BOzTEPdX35rqUruuyNxi7f_La8T1_JG7pIPd0.json",
    "content": "{\"id\":\"_fLFu_BOzTEPdX35rqUruuyNxi7f_La8T1_JG7pIPd0\",\"last_tx\":\"\",\"owner\":\"2DJr6cBwwPZ__KdB74emS9Px0SxOQsY4nXo8d1yqaEGmijv_Mno4DPAo0gDJeXCoZS7x9C6wYh-C_7daTt5kFpXihpsyzPmOK9qOJbv2uzk2aTl52OLzjYCI90zUskBQsBDAE6tBIAq7sLS2ZOGeDC-fdAQfFSf-7LIv47Me5larKPrqlkuCM3kNevUQkKHRgjFe5Xwf3znS79YK99YPbtULxUQh2G_Mls7pfzkCu-QOrxsk4LGxx_u_j1aS29Al_cYrebNEzwMpxGZ6JT6eeJYfpwQKZu6coZjNbyeq1bvSfp1UBoJAAxMYL7tedxGjKGi8U7R-ZNMYC0Flu3WnDudI99NmsQYixO5E0wuQVPG83JAzxlL-QF7pSQ8tRhul4YyjX6V5dBz3GlqBFurl9tkuabPn1PNcnMOyG54wVkOfZB03NyN3GIN9y0njPYMxN1pFruS8nV71t9zCbtoaCDkAlTQf4SOgwUmxaTKZscZYIPXj8wgSBcpgy7syRrFD_fWgCu9NgJxEhYkFCEtJLbVU-5RKxO4tLHMsoqC1Wh80UTlwzzu2BnCyDhXX4O6gvnnj9mtNBooUkF3R-W_RdqDj2Q6RFzSRz7IJYq8v2RraUBmOIL54_k1aL4N6H5NbWGqbbOmiI6orWOyi8gIcEZnihT3B_LBryokVGsWhcak\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"Z2FtZS4\",\"reward\":\"0\",\"signature\":\"hDMdgw-hBPdr6i5mh8IHGfCWeVEWcy72m68sJWTbkdNGmDla1iJy2gfKDwINBbx8U5xc69_toQjPagudMB5AELP-eInXqQT15efj0COMXmyhKP_uK-RUdae5xasWmGHJTzwpgYveC7khxUDVw3yPfbsnzCZ9wWD7vKlo06P0hGEXVwCz7ZArTU7z8GnBtIuK954rhl0dJlZVYCnMb0Tkocn7Jh36pcmCz7t4294wneuOWJ53SvzNtS8o3ACkHcAUdbX9U5QOLZQwY3b89LGhZoQsQ8SkG7_SHOG1pzbR7_M-9lsF3PfS3W0Rip4NpigBM3TRnhXpIUMXv7PigonZJX1bgX5MdBG6Y-IWIwLbqhw3FFIR-W6wdEANbzXtKnubatjvLIQcWQ6lVESETuTeKAZCwWl76GQpZ9VgKG3m64nwkBWgzbCIjgoABKqkp7pY_lCY1Gr1iq16TE5SMkWc-Mp6n4vWvsKcUbpHcXiFuPjWiqj0UTitqbbarIWdYcL4GXHc4_Bb9O1hqxveE44dHbpca6EDE9GRPOyt3wnbiC5yq56fZ5vMPKwcL9XHtXEZfTZhnlNLHciUqb42s1blw862nM3e94Jw2sA-pvazSAakGCfG3b0KoMK7HDww2me-m4IN_tt-6ZWX7oHM4mPgzT-EL8zRTZ1NCwL6uTRZ1gY\"}"
  },
  {
    "path": "genesis_data/genesis_txs/_u44CiJCcYiOrGffgZoQSmUrJe8CfYD7Nw0MdPX0tUw.json",
    "content": "{\"id\":\"_u44CiJCcYiOrGffgZoQSmUrJe8CfYD7Nw0MdPX0tUw\",\"last_tx\":\"\",\"owner\":\"4Qblnx7gJe4GTyC9wGCIkqS0hqjjLKfVprQhDK0Rf0IXJJjG-_dGn3gI5XOBmUEOYog4vv2_kUh-5pdSgnLtD0B3iKSu0H3YHu4GiGZTcye7OTJo7RYvjcwP31JwFeRJfplnJnRTahpBBqYh5D2UqM-k4okV-1xZdzd61RPlXhSd4zEONITTsiA8IYMfcEpXBy1mDutXuIBhFFOzomStSkxf8hzMorV21LqpQKrb8VJ47n3-btbCRjY0FdJj1FR-8qXdst9EvCcxOec0kG7Kc_rlP0WI0ZCU7kLM8BiRvO4SFEJFjlrmKyeTEuPfxtnxdB3t78jRPsct7Cr4odC280b26AVBywkNhdHof4RWoZFLPjsySXePiZt3OTxz4iaHkUoaO2qSPfu5QdjE8G_Lfze_B-ZYZvTKxMxV-HWSbypCi3Hn0Fy40IY4youtd_SaoEAw7CWdukyDZwwBZhcYFJ8czvzNChp3y1RUoRkI40UbjFSqjY1YdS0DZdmYvOP4ZIMYG8DeO-Iy7wCE2dmOhvPKeh8GtLecIUuazEUovVgJPnzL8zkHf7YWxXFhppEAkSSWS_y5BL32pmh1kpPqlHo0TAn0JoLuw7wwqxK9pw6L3rfdJl8XnJBvQKhgAGEWzqbltiUl-PXvN8Bdl6NpBI-os00-CCh18mYaYYUbWJU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SG93IGRvIHlvdSBzdG9wIHJldmlzaW9uaXN0IGhpc3Rvcnk_IENyZWF0ZSBhbiBpbW11dGFibGUsIGZyZWVseSBkaXN0cmlidXRlZCBhbmQgZGVjZW50cmFsaXplZCBhcmNoaXZlLiAtIFRoZSBGYWN1bGFrJ3M\",\"reward\":\"0\",\"signature\":\"p69XT8d9WYZNcAO5NKGbFEW6jY35zl6PbWOKja7jyTOgmqJSrJnvnsZ_mpOOkhrVA_RkbXqQtzm22i2Y1n5vE5aPFiN7Q8cpW7fUj4JbEJPmAq-OlZFGAOMAwFsIgCPKrTnu69EzWU3slPCFVdkQ_YfS1e6vnKbNYEWAX3eoUPcgvpV8FisGWGWzAWBjCbX0m6TbCg8eXGWM3rBBiLVqnLigcztvYRdDbs-UMBWGYdLR4f6yh_UvkTXLYeO_06um4yWr_80ET1_E3m6EtXgWtSj8eILwJtlKx6k8ekZfkKbzgEWA-2e8iisSmIPaXSZZwa9gjZTmRUD_ZPwS5XQmsXnGiRBjy21_9DE_Zbv3BkVKYxTLvvhi7mm2btR_Fi86TyUBpW60EetLNr1hTxXw6adZFyloDAdcMs4Te7iGoLeg8e-sfCGSrMxdf4j5pS-D_pEWof5AU3OFN17Hb5OnMS-zzt4ssBOs4IeMnEF5YRFJAyWqb81gTKMWlNOvxA-dWak1gOiDA4vOmVSfYE7laJdApiA-A8S7TTLtaICHELZyaq4OERjeXpN6n2HmkOfZsofZt6Y8t1MjJRsfkcnuJTsBgA3H_3hTCyrfe6ZJUm7X1R7iwz9OL_6ChGA7xrTxArjP1X06SsjEP1yMaXkTTUogJEffZEd4aM9HRpjsOmE\"}"
  },
  {
    "path": "genesis_data/genesis_txs/aGqWG70qjD5P8spXLMtyXnYxS9k7Net-u932EyIFl28.json",
    "content": "{\"id\":\"aGqWG70qjD5P8spXLMtyXnYxS9k7Net-u932EyIFl28\",\"last_tx\":\"\",\"owner\":\"uAMDlK8hR02Xl7yH_DH6sZX5uDiCNkWSvAms5mIs-QvMx5kUnzp4jykTMBnLaoo7ms77kh0DoUbdpNwfba67FQ8WU93E3UrIPNQ3CRwPp-Dq4uUQ1KuaZ9kDvX-zq-PtQY40K2SzHVhkePbHdwsguwVi4DuxSNZBPbBevyTJXd7yI7UVzD6H9DGu0YkPtioP82r7X67PJRQOscvdUpdTpE__Wg22AwBzAz4mhFFf6_IPmaXwYJEAw155ft0BuxvjYQqaTSmgwzygUQmMAC9S_Fye3FKU02nHpelDI6v_9gK9q_eQze1eFC6CDFofK4d1Y8G1o2MAjTDERjy5ojay_wrLr2ou65BrXIygcquyYMQbhQAUnJvmaQliE6gWL2ZAtgl61y0EX8Z7IH2sOmU2ZhJR_oPxkbmj1GDH3OKgiTMGQ-nhzaU8gcjXqob_vVYac6DM6Wgd6xAjuu448IxMqodSh6CQNbiZYeWBbk5d8UG37F7hH9j5WM8A4NteMwuFI0btgZbXRy-uhZLH-Ym_anGfSf85-FOSGzTy9AyqDG-3x8jT2NAragXeGJpngnlslVwK_ON93gy2Um5F47fa3hnTU85j8_zwzzM6puCAgyxgB4Q2Zu_UqizdLnVf7Xzcz_hb2EhVK4mvqsvnErn9a2y72tkUPIw3WKp1Ot2HNCk\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"Rm9yIG15IHdpZmUgUHJpY2lsbGEgYW5kIG15IHVuYm9ybiBkYXVnaHRlciwgQW1iZXIuIEkgbG92ZSB5b3Uh\",\"reward\":\"0\",\"signature\":\"gADNtzwAZHazfeGQTYrmaoy1hCjDcZha3fdMGvORS0OnHF-JxekI0YeN5cB4OVT-jJjQz4hwgwTuvPHnGc3Eeoa3OW0Vr5TXxKSHcopav1_tZrJsmdeV2pE5KQ3QPa4EpuLyRfBi318t1VLr3ewMuc7sM0C0DQOD4d8uNtmdmtq_dCx15-Ugw6MEbI2j0Ku0RXjwXYzBLP00M1mVdz1jnL-734xKmd0qFzge2ulQ8iRDZWYduUJ5wmvbFaRU2t5cC2njDrZd50Mxr4uQJRkAtvTUUu3TJL8ORM54jeI9aawvObKACGA8ZiJ_Xg-9WJm29kcCTgZz6loTZGFSk9GM5qBxzzxc8XqoGcLSXQZ6SGMPJCT1U_wCba7qK0NHB6o-nmVXEnFZXapBQ2HWb49VkB7ldZxxOG5IJCj_KJVqd22IWZUzFHJ6xJsENLKw6kVlwc86k21KC0f4h95fW7EmP1vU0DHgwQMB3Npz17mwVpxCFLhYaFy6iJLu-y5bqjlkCEBblhIYjhjAYs5iEKw4bfp9MgScLjpdeugebVC-Sp8z9IyR2J9etboXp1FhS9WpJ4ztPh9-g7Pcm5QR0Ui51gZo42i3FkHseoXwKpbrqnH7tpFXo92ixbMxXLlHxYxj2HYor61O9Xiw2aDtpKuZAmzM221BSMQaCOQmS0jpaDk\"}"
  },
  {
    "path": "genesis_data/genesis_txs/aPxbCROotxwkdovWbQEhw18UNAzVy-AmjYwjo9lb5u4.json",
    "content": "{\"id\":\"aPxbCROotxwkdovWbQEhw18UNAzVy-AmjYwjo9lb5u4\",\"last_tx\":\"\",\"owner\":\"1W1o9lI-dW-oYiStVdshwd7qGeEhZsp1jqnHdm2aE20DBC0oUAjyJGTIUOnX0OasDH3C5Luk5huFSoJOChIzreUwpIxGI1Bc3PKbM7Ql4sD7NQ0JuCn_njup857Jw1zWllPgQ4M4_M2jMmbxA8P6BH5XNL5yBlGok_mFniH1U1mVhC1qN1BTEdqoT4dhISlBVqXop6y2XWbbnDRu3uu0UNcFFDgW38LxR8GhUC_vaLj4atAQwSfhk9M4K6UjQxaG4PSLPWl2MY_bVM-PdpLXzkqhNMyxUuPgMnB9_3ZiUM0E3U1E_SzQ2JJArgqzKefkteTiMeDISFfKXPWme9XC1vzjPqkgbuD373RVmIIVrksmDH2eHiuyV1YJcZhe8JEsCQh8fbk_Bm3zHpbSM9Ckv4cyt5aQRzS6dqMPhHDOUi2CiuyYAT_tminu4oH_wm76DYY3uZR8YRvKe3mNH0DHtiyFh2d-XJthWwqyWXNgQcpw2rp2eWsQOLEUZC8Oq5t4505wLNYQGndpCLi8C4v56L1hDymsqVdkm2eINiq_6VLRZiVa6tlLZQhfVxLhKxV3aw9FGg-dZKLwGkrGvyu5TziIyf0YqtL8W_uc5smB656M8gm34G8VpCU96x04_6vHdfgGp6o_9r8W7MULJDGaieFVSp9WtEJkMgmUCyfYgmM\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"VGhpcyBpcyBhIHRlc3Q\",\"reward\":\"0\",\"signature\":\"eogtvzZy9asMWYD6aJenPEbJ50oJVdgw3LwYate3mrfJX1R8ijgJ_agaBV11_G4V4gJn18IVAi6jix8Lq1ZJtnXWv7NKpeBGD0ILucV39UeOP_vk_uOSfqb7Vnlfgp_mS1-EIYiND98hRCpzEYhkLN8KP93SZhPo5olssuN_g-5FgcbtODBgc6qKucxugHS0vf4oO3jI_V1HDLD_IsxlPvWd1FhxXgesQd_Vpw9imYc7KRDbWgKmHqFUXDKxs6JePvfFqGB-ILJp-iMQdCAbacKxWw-1pbk3RbEt9hj0aLPBWAIcPMfA1uatDOrVQ_ZTYJ_DCDRVevEXGV9rmaHy0g16ykr8sQ0TkKuhdzG9NBNMs62JKdwOkAFrV1FnXUlTQ-m5LkyENmtVNOnO1ak7ofalDmYtjmIq6k9Ya2cYxXRJP7rpM6lSa7QlrPRjfAfG8qtKazCB2FAmTvSFTufrNsq2HiQ3UOG_ZvJwQalXgoDjcC9zlPCl_HP_pSN8RIEYlynZ2iz62YGExDI5Mw5O3e_ozQweFZXgI3vCiLxUL7wgp_AYN1hS5ye5jLFyXTJyAcObIN0Zm9FfTJE4XKUmoOnYd8YOCiP67ty8xRtJfUionqzHZopL4ox7PzgHT7wMlS1AEsyZomNjmRqI9DZFbeKEr0D4TmhrM1AfJPlhuIU\"}"
  },
  {
    "path": "genesis_data/genesis_txs/b96k6w6qUyLSSWZlmupyBmav6XYMsdt0xTc2yIUZtOA.json",
    "content": "{\"id\":\"b96k6w6qUyLSSWZlmupyBmav6XYMsdt0xTc2yIUZtOA\",\"last_tx\":\"\",\"owner\":\"qsd1tL0DoVzru24KiR3qqOoqc4Gq8emUfo298MCWmBhpi5aqmQxX32xP0H_joCmE-gfzFvE1UqYmYNn-H-372ibrxzgCmPDr9g6CL2kbdc_6QzWDBATLDOc4T5c2txX9NKMWodEON4bOhRa4nYqfVqHLQ6TzxFpw0c0KcBgzBCaILnliRrU5ijoovTp5XJxyt6RoDH7VjGvQGlwzwYS4bF4fUIBCsESNdVemVA3AnfBADO5YT2y8X6cvhDIzFGDwVGFxp-yy5cRef2tv7UhLvXzhOA4-1wFEjUlTZ2Ycy-5vRTyfkl4st5zVOSLb2MyfCxC2Gz5vXklCmB8w1fKOhRyjBLNxmyK9C32r5O3z3MtrwZeSM63Rp5n5wt9ZoGZzbCgsedW3kuLUTxfp-Vl_vYE5XcAEhIJ_H-NFVaax_ifhn-2Q9eSjkAybBUrPCNT1kHnEj-Rd6D5TinISBl3beifyEY6-QmpBP1kHk5h5Jd5I1TTmDVVO4MgoRI18Z9aRtbiYNG7I_s_v_WU43FkSzJmsg0qlcThAzT3hpNRu5hqSxQzWXhIIHsVP1PFjApw71L7YYMf_lrERYS3vITxaAu0Mmft4RFvfec0tm1ZVxnUkdfY8fxu7fdaaU2IfI2z5SLOPe9KSzchkYbbcLKmR4s7BkB6yLD769Ko7QjknWjU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"c29uIG9mIEVkIGFuZCBmYXRoZXIgb2YgRGlyay4gQXVndXN0IDI2LCAyMDE3Lg\",\"reward\":\"0\",\"signature\":\"LHf3AoFIY6i5Tq6iiq_iQ2hNQeC2Ok_11Qd9jxMspataIn3LiAwUYfZUsu0jNeKcLyUq-iu6lnqDrUIim2KeA_maoA6RVSf7dUuHRubmtI-ucBASd2Ne-FhLYP0HtfAhbTYLXkqmZJahF4p4f01oS9FpahyfVACy4oXZ43hM8lzdDFdoDw-HLc1ouNbCzdqd7aK3f0ZF-jporyNEamek79x4ARjSHxjqVBYpexAtIIXlW2xPKzHNlzSdeTXIzboVYE9Ka49gpSjejLLMR3XzXqUP2pO1RcNo6yXYkmci7trPyKGk-Fp-Bu8-ZFEPkfaqhy4qOIXl0Lzjg5oaEldc-_3OFaiI7mD-lGdmURDnNGc4UsYV6Bq5W7TsoH4wAFmKjdo499W4cCan6He1ydzOuUIITn7Xyg6siRgrpNzYWVhl1adghc0BCcGnvRPv4XkVpHxDb-Q6XJ4JZWypOLs_MqeGJYb-_5zLghdxSbXiiGi5B29wuWLaPEo7-SmpJtNXg0yEqeLZUgFdo7cWhHu20QdhC5H57q10IS_egxdaad8d0Xosq9chxtA_s0UvDDJokj7Cftg8hZaA4iLDbSgd5ZwBnKb5sUmvQowLSwSQO-s_3vS7yQjrnQTjxE3w_BNB0dPJ6ch6LH8OCr9DgwluH4SACWIl-jen7MVgt6ec9Iw\"}"
  },
  {
    "path": "genesis_data/genesis_txs/bnT7410oaZtnCdurp5jNgLKju9d_RRxhgggnxa5frMQ.json",
    "content": "{\"id\":\"bnT7410oaZtnCdurp5jNgLKju9d_RRxhgggnxa5frMQ\",\"last_tx\":\"\",\"owner\":\"wWTW5uL_Ta8m_U30GfmefoV3Xn6CwGBo4fYSV_8jCo2DxIKAiIbIiWI6zlngnpuZxGUirJ5iIsU91usYteZcOH5dQRtkXQmbsSkoNYSisVotLFMG5bggV5AN_vRD1Eo3F5sA-01AG2YG79nm4TFWT4frA1lga8iaxL6RNYddnP-ttGhDw7mbhMQxWwJehwTdW4V_fElVNvmxYnQZuHqpWkSd38fqXW0YDQSsp36y__lZeePoWW6JxnmdIsdrDvgvpYrle-TOKGfZst6MejKGQkHPY1hH1BR_x5Ci698xCdwmf6B8Dh5iANxmTYgR6tF0gjSHoLT2cEnPKKq9FPy6_MkGmXV9_hGk3PQ0LjNzvLZSIiOo4YbrVhYxopf5va_SxsxEpMlPW-HHnrGK2J3xrNRVUbHB9Q7_b8C6n0RxOniUjobUNxsAJ-ceKDVnNcZf3BzW-k9ohi0hpOP0uDMoY1Nf5qprRZ3wRw8kIeDlH_D2lxJqcSsOLsaN0a3-ROR-wCZm_SN9NQ-mqxwmpMpK_eA7e58ya4ZMv-ludLRHqbeSoqSwiGH5_kB1gVvOr_NZ1iRwJwrd2ZHWOTQZb__Hon-Ji0X5QoA2IjtTSV4nIbFog61ftlr7fyup-GKeseZI2UG4B_1hwNUypg9GNKtK5WXQifELSFbA_OVoc2Uzmnk\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SSdsbCBnbyB0byBzcGFjZSB0cmF2ZWwh\",\"reward\":\"0\",\"signature\":\"Gf8rK4PskI3JL6tWlRJTe5GNsNPavjt3ds1hjdhbk_HVwTwS6Fx0qWc6ikWBQKz9umDQxNM8POsIiyN58fWFWDAmsZeGaaOGYRCRVwnOz8loAonJTdwE_IxOUsZ5kj1rJ_I9TaWD9irKVt5WOZ6er9xW4g4BTWKt_tNt3DbD2QL9dTKijT_b1EjOhVf86t4TAedUMU5OEiZh7gEmgNTfc2DxeBRGE2AFbBlJBbVCCXZfOBVlsKkHVPkSr7AZHABMkrKKHTP8k9Ww4hpgNHwx56IZFDEvReM79kd_Q9xvLU5YRTP0foXgtGljLWKnSi5lEBUZnhFyjgloD6V1RL7qMiYz3S-0Lyl2fkw8PH_VphSUmafBcgWuRTBehXWkMv2l0Bs0ppDaBc08bxDDQk7JIc9NY_FrbZukWshRk-v4n_rWdVSWjQQ8-Vss2rGxscBK_n9VS0sM1_L-zxHO44Y_fLSDcxwwcRG7Z0VitqQv2RpQS4j-UHuRC0SC-RT51XlDhKpkDwuy7Abo2_JMwOwwfEZAYUkCFvZD79kN_AS1vYIlrmEexcAH6l8fFXEpEaFUVGYpL4KlfUxtrE6XeCqqGgY7c4NCSNQtkRF9C8BxNnfdKoVt5bZkRc2_xdMjuAoZF6k0QZpy7CUTrOzfLC5oIa2f45sWeStOxLZXilKxOzI\"}"
  },
  {
    "path": "genesis_data/genesis_txs/bqhG__MMablNhNpiSp8nopeKDCzXy97jLuSBlsKk_u8.json",
    "content": "{\"id\":\"bqhG__MMablNhNpiSp8nopeKDCzXy97jLuSBlsKk_u8\",\"last_tx\":\"\",\"owner\":\"mBPshDA6lOuGAqGXm88aq9ebJWc2f3mHXyq1muThO2n2W47bP_gLtRit5UymqRMr7kr1dcMB_vL78LYGn2vq1GZouWss6LCrGcXYZdq5c8nqmEPzYV1TOnloFXatH1lHhl8H1ZQW4ZjlLm0OX-2yT1Y7ToPTADpen9yK-HGKWTNueavz0OXlRQyOjw0cL8yn277VJges5AlUVE5aQ4eMSRey3vyWOn3y91g46zzlmPYTMXgvzSBsQNKGkbnD9lS3Lcg2IMMRK-gNj4rfwb9rJO3QtfuBWgchTPfL6yQNH3IeF2LdlYHy7NoqaA2VsMkSp-mCLj4bW5kEGVX-XZ71izNLoYf3uohmoNxFYDPmA8CAwTABnpK5nupKSV6Gm1lei4mfGZ55njiF7JtYlijV9om5iZpDOcGoVM102v_xJUkDGYaFQENUtQA4NNEpibFjznvJAR6vQtxY_321_YbpxyBwhwZ7IKe8LY15ux9chYOclVDQAyR2U1uwB6e1-l6enT_AU7Wsf1OFGvQeLPc3JwFFPj7tt4dJxNTNe_wWng-aKET5zn96zeMvs0hVu_QyLRG2H7TsCAS59xQiK8GEeo28n5i9JOaVmiVoXohJigHDEBptYbnbBF7KMonwLyVtZJywmRYDakEC-tKc02sw4JLWux27J7CdNzvCpnKfOW8\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"RGFuIEFuZHJld3MgYW5kIERvbWluaWsgU2NoaWVuZXIgKG9mIElPVEEpIG11c3QgYmUgYnJvdGhlcnMuIFRoZXkgbG9vayBpZGVudGljYWwuIFNlYXJjaCBpdC4\",\"reward\":\"0\",\"signature\":\"Wy7ErBM2oth9Dm98paOFogWqZ_eXAwNSTY-si39NY47FLya4kxJUFXoZ20iohbu2oCpXmXH39-SRs4c01ayXHDxiWQmjxGOittNNTTJFwkVuF-yi2X_jJixPCNbiTX-2UB-pabyn_YbVcSph5iFV_QR6vm-83l2APuuDwBldmH2cTsHk_95g4T_ph1SsNaBF-zU_Xng7FyVPUfPTupXYCRe397wbl49mRpBKQ0cniptOy-vZXNV6WjM85m_6pYoPpxUXioeXgL-TJNpe-3n7higeJMpwr-TDeNKNxb7j2d5CgKgUSxCJ1Zy7J1k_9xaiNddJCmQ7OnLw06HeM0k6be8sVLkN_zWyHH_6GlLAq1nr3km32TNJA8y1o1zJybT3lg7BuPMcCmAeoavZM_jJOzaieKykLPQoY_bDXtqCDeEx2u-c1-w8qmNY_ouachftPWJoVMGbkts3WL7_ddeaTbsS6KGrXELvOUZBTpefoApZoUgJisuQOjwbsw0bcUr89sBPMxLI6fmQANsPWdIZbP4Ah9YIUzQo_88UvlJsTXUTXn76vS7sc0yr5kO9OUr4LXdX84UKANLoqtD0Az-L2ORL_SbDUuElC9QMzUQSl_w35EFyNcYOKPHBTcV-W6ndjoN3_ImwLeC8Z5R3DfZSGi3z6y_20boSc7hZwEBlu3o\"}"
  },
  {
    "path": "genesis_data/genesis_txs/cIXdvNTNHJSmA6Rt5UgSNfMcGfvxDnYTa3a1ulS1SiY.json",
    "content": "{\"id\":\"cIXdvNTNHJSmA6Rt5UgSNfMcGfvxDnYTa3a1ulS1SiY\",\"last_tx\":\"\",\"owner\":\"wggat8doPQ6VMEdddzkm6DfitfzxuKijXCD9okgLK-DuZc_YP1j7uxXQFWeJlzqrVbhbuTuhez0JRI2tSJ4rPkW62TuCB9klASSzvZYxnktfUx_gmPrvrK6gNeWTTmop_Eugle7t3HhkUvd0IvQOqLcZWVyyEfE_mI-1LZByPOS1c6H9UsDnlSbfSlcbYNDB00VLPjjmD-H6ZZkyhDaJZ67kmsARjOgT0htiK0eubaYeNv337-M1GRozfyUck1gq1S1GFHzawwGsuH_htW8uYJWItoMJQrjLLZRKJ2Q0x45FKMLwEdcVgWPIJ0nF83YuOct1SkXPaEogV2Q2jN3EHcUC2dw568M7iuy4ASMraU_YlWkxaZZF0Cp58bY-Je2ZWLFDNVmin4B9ymuFcZ-VpfLXOMLBJb91xf1MkhuC6XZxOJoG-PQ2sb2X991aDb4pUkRgkAKyVEA4lqISuiKB6RPE3dNXcVMGbfaulH57eN2yack2vurQo57MXseRTouurNrCM9-cB2TkfjTCF2j3ex9UEm4XZmZQ5WJNn3gXDVIqXcYGj_l6CLsc9XhuJzOF7gXH70LbnM81aYRWeq8KUkygYGEqGDcz2ZDWFNlASp_4zQd-jA3Ee-ujIngP0OhfEN1Al1LMIPavmIUg6EGqyie59NUE2ftpnJYv0Fxbjmk\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"RnJvbSBLcmlzIEQ\",\"reward\":\"0\",\"signature\":\"sd2dIcYiFSjbZMjXw7_IkDkZw0J5oVqoOzgNgBpi9KwPxEVFz1mH_XiDmgCCgNGdL1_n6yFxaigkKLlsspUPOV-60R2YNouA3BG8DJO8dWJRi853kizs9AIq4xyaQ-3VRWspB7L7l6cZP1JOqgjBrJMi5G2ja1LnZasYgr1MEu4oAEnrmd2s-8xocVbLa_66w3kDzMlq4AH-1hzIAwZ74h7QpzoT4KQpOoqjLEIekwtS97uXZqQXrWVqMO_ruq8UWOXYIVJ0WlShCrnV0yfYzcDK8CrNQBbq1or7WV3s7QdrhSj98hk4MlAKqdc205vnYB1TteAhLX7BSn69vLlhyaIk0MlOcrd_RVhagYG0YeAxIvGqUHurzA3_sLye_5smRd1xDYUI0CtSFlDz0Ta8qjv9KJBt5ODy0DQm-zOwgUID1gCV4YH1IxzymeV2aIWkcbmNOeu0xwjR8P-08ijFNTzRtHGGxn432yzJ4iCN2SPTqq0km1U6mQGEmk_s3xjmvl8Xe0AsjYsrahK_BrHPi58dNxp_erBBKv2R3aEtM2P8CylWq5eogLuuESdR-8sejDYcUkn1qMfvU9mUEE8pMmqoPk5yE48y-BOFf8u5gA4a4luA1Zm9hJfo7Is5pUq42SLHpBL9XX7gzrON6JUD4YtTtTvQ2zfqXYaPV5SERzc\"}"
  },
  {
    "path": "genesis_data/genesis_txs/cgU_TlXi5gJ7hShSBYsS4UVi-sLTtfFv1y1sy2nNhos.json",
    "content": "{\"id\":\"cgU_TlXi5gJ7hShSBYsS4UVi-sLTtfFv1y1sy2nNhos\",\"last_tx\":\"\",\"owner\":\"5yoosPXV3A3cFxzLBfOiEqz-kAqDbNChg1PvBx_ahfzW35GEz_4ZolmBK4uPwGRedKNKiF4YP6Hyzm0hd9MR-B76Ei_pg9D3MItx9GrBFaKFbHu-y0_9ndhGc1d2msuAH-fWRWNsiY3m95AyIX2MNdB64k12oZ5GrrgHrKe9mQTFQDVEh0k5uEEtR2qBGfKva5s6qiyOfZGQgEcXgjMPLCFMSuogEHEeZBTCj3jiwOUn5hiN99q-BrmzH-M9J8dfd-Wv_KIj2Retq1QFXXvs_tHzzLJPfcqMfcRcoBvEQ74GsKni6e5-qSJOu-Z81ETTSvYMx7sNOQWC8gGz8FwT6W-m1CbonXnZ2qr3RKn3APmR-hErAK5xP9vQ6omdzHmnRCDU8b7a4OR_mySsrlZEvNJ8JpYCdmBsS5MCsB8S0u6O701T3HqpZaGHM8_sELS9O-wnMGrAL9Ii55I-E9yJGm4oLpMBVxeEFXIL_DxtoAV4Na85A8V7Q9EZw30cMLdiJUOPccaf6mbEbQ5vSaCGcKR5zOLsc9uazSVWKF7XNOamevRbOoKkG3cDqZj5AUqWUKmBA04BnazdffgFSTtfgKSW32IJDyOC5-QRLIZaQaDR_VTgGtBdlz_C_aC8DArbkhq_hvgUcgBHHRYLQjGPd3PtJQiC9QUb4ESyaGj4Jzk\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"R29vZCBMdWNrIEFyY2hhaW4h\",\"reward\":\"0\",\"signature\":\"mhE4678maVK7LAQ-gq5bIUbmVJUGYY62rgsA62TWzudRrWitrDimG6WyQ1qL8a1xJBCTBNrDdJwF3AsNg_PW0gbqG6zkq1EAdN7No6pkUazw9QcXuY8pgDeLyBIAfQPrWsIJTsl8_OM1yMmmTelRkqFRcg7hoL775yqg6D5wmoXP9U3_x98mvL2q3Il5687G1__gdV-ajqM_V3UuEnQ84ZsUVmydqfvPTaGVq9bzP7k2p3C5AH3ZwCziGw76cg-uXf2WiPbMMOv3Dkk8BbbrTx6nI2OCk14zrN7AqGFb9jliPCPEIcU7YJY4jE5IZgcqsYyjloe78a5QIGE2BW2eJCm43aWs0DZWc4rpeFV6OlrUZOwQuzAanEPX7AWZEqhl_zC-2Pe-U2Ol-ih1HvmSDchcFYY2GabbS8NTrXNaBPRgydjMX8tZFjSlll9Pzoa70OlziCEjmSFIyD7qBGtSMOwiYau-G324Shpb4GX69gu7QL-qOImJIF1DtLb0nPepu-zJk2ImuyXgtaTeEzUdaxoZWXI-VJCjYUewFmezaUNgyDRageB4f7XBdBoms3FTBFM3kKRHR-4RqsbyndsKMGOv7Xv0CmAcGVddljkQ5sG42XMZIoYlrMknylW0DoNfngJ3WYbW1j3FNHjG9-ihXFOrGK3NY0eahLG47zKL01w\"}"
  },
  {
    "path": "genesis_data/genesis_txs/chdl-kIl4zG7VcJbKk0Q_5TeGwuH8Xp2YFPLRJJKTWw.json",
    "content": "{\"id\":\"chdl-kIl4zG7VcJbKk0Q_5TeGwuH8Xp2YFPLRJJKTWw\",\"last_tx\":\"\",\"owner\":\"2vMfzCTx1701yxRSc-4LZAxwu92hJ1pDhi-zFMzS5q06LbhWY8LmePSC-12FQRvSwv3HGVEJukpt6FNLT5Fxp9drJbrlmibEcd7_f27r4ZdubN-BYoTKhrVdYhuEWYiimbiwpBro56oijtdQCAsLIa6TGUFNpDfs7XqoE6KxkJ6Dcm8UDOCwO0XQBWkgn5nLPf-i3oYBotxFO7-wV6yqRa9EspRKs4rWGA4vpjFQXxUcsVVpUxsV_nn5jOX8ceSYd71-tIU-p677y2IxS12txCbGpKH83MPdJlCrhd5702---wiT2TYoA8jcYScD-k61gECEkmmS90yjse1jck8TYbiwOcEhXLcqKTXzRCVKCDyg3SwtbFz_ULAjoMjceLNfRovxYSPZsBOfe3WWk19DBNEHF4M7NP0akPqPM4SeGZ3GaTkzxZkpTgf7WEpq6UBpFN1Lp4DQazV0roCqw2nu4-IAadn9VDRh4U-ZijIbXl-ZA0TqbAidMDyg28w1srj55MVZrGIiHNSOy0w1GbBKy6KFvRVineGC-AijhNf4dR4KeI00q3HaX6Ce3xETaz1csbtRPn6x5FyS9OzqrAJ6Qk7m4sTAvLOfcspmn6XgLz5SP68B9OG543HrWYYt1W18dqyFkJWZg1vfQSOcwlPONhk0BM7P3wf3eo3FR4BdiR8\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"bXkgYmVzdCBmcmllbmQgYW5kIHNvb24gdG8gYmUgbXkgd2lmZS4gTXkgb3VyIGZ1dHVyZSBiZSBicmlnaHQgYW5kIHNoaW55IQ\",\"reward\":\"0\",\"signature\":\"OCQ-eUe4oDuEghnK6m3ODehzzEkBBNPa_GXRWB8iiJXS-L8V-17etdXUVeY9rq9XpIdx-hBfJYX9_gGWFkT_OD_Bm4LfvapNgOrOBVj4HFKeKgfpdjXrb2f-L26g2CHRXgiDUP8oZRAjGQxpN6o2m4vyk5ORz7vmqU8EWRDdRiJSsN-ou5OKId4HwqrNb3X4sx153QdHDxPN72FKrDj-vVUdrm5g3pB2tLpv1AUiYKZy5NZdQjFVQPCQQF5hZqboUY8AADvdeb8lISj6xi0HigZlm75BYlAacf1DMLMC8smtXwfqbGkoCwPLjvG5JcqtWPadxFQl0Cz-doxR5gjvUFwmvTGvtpdhBZlBGmbL4dQi9G6e45BF5K-v_ShGT-ZjXhXeoxiscFncxxy31HJOrGzTTI3dKoAXp-ZRFzlYUNMXLyJFxO9fdjzDoSx8wQxk-w8-8nwk5sIVrZ6lwryzqdG7j7qlrt1ANn48xTl7_pfEoy-pdSY89UNcVxlzAADhdcDUQ0XpWOz7ag9xOJY-BUFveuYrAA5OF65tNcFzXpS3w6RE0BOoO8W5Z5_wsije63iASQevoGRyz-iNnB0YHQcwdxgK3XUifP21Lv4rPoiiqzSCoAWcTcfxgBXRa60xSnOwfAnVeFZyr5L6_AV_1yHBn29--VDK2Y6RNbv7GZQ\"}"
  },
  {
    "path": "genesis_data/genesis_txs/dYBZuFcCEgGVcfXgS9tmeJsue_qwaCRO3Mg2OHCZh_A.json",
    "content": "{\"id\":\"dYBZuFcCEgGVcfXgS9tmeJsue_qwaCRO3Mg2OHCZh_A\",\"last_tx\":\"\",\"owner\":\"zN9M9UvdICaVVs9MyXYRD3F8gv6ZZO2fUD6Wf56vnc-zHCMhUGPAMFaVnLsRcIuyIZpM2F1Q92015uOCs2IsOsyGkl7LMpUJAbh2UQ4xy0NHTK1HAAwxixfq-h2vGQcJ921dUFgmG7OWslOehr_3UUleSeuCuWxy7WoGdlPfOqbo0gbsZzS-winF7MxqEu-SJqtv6dEED5pcQv_FKnm-M1hP8wzvAY_SsgeiZ7rTRi5RYUx5FMyMjqiw9LUZZXxZYbfOQX6t7xoDIvIVN_oLNJRNQvmZg8_W6Dg0DK5kqz0ccTgmpt6Enu5SFZRSkuZWwVrIQi4FtClNiH6nF8zkE1Xdw7HUM8sBv4RuGxgHiHp6tbejnGYIgn5IeBlRQsbV-HqNTFMv97zlhlCZIdFZZyYQ2KSw1fqi7fgUQgVl8hXUPwbJEpC5w7qokaEZOjew2XtMdhwMxpoz4qLo2r5D_mhuLfaVJy7TbMUrGkGHQO6aNByVTBcQHFMOXqB8ZcmwhQVrqTfheWCXMuAfmee9acCSCeC0dZIDoh5xPMUk0y-byr6oc8EMip3fFpMgs6o-fubO4eNw1sMG1CxlbYA0CusMqn_NleBusp_U3oDsICNSf8mPir7k3BDeUxb9A_V9nHGCwLiWrAxsSJjVblxLu4dEFPa8Z_-RvQVknStQfJU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"Q2FuJ3Qgd2FpdCBmb3IgYXJjaGFpbiB0byBiZWNvbWUgdW5jaGFpbmVkLi4u8J-Sqg\",\"reward\":\"0\",\"signature\":\"bwsk3tK3kSvB12m_kUiNQX6WPQCaBTuJdtbvCE0_rtL8-o42oUDwpZk48uHHYhZ-VXn4IVqGZPJO_l0z8e349jpTQ-oHRu_8xGxu7mhBB4q3WcbH0PMiIIM5WC2IR-Ps0C1lWMwbTJFiez-Yt87LZAykG_WZk4sbhOjykDT2R-yfmX-5rZk3izV0khr6-1PvTTsLmzJa4VtcOvEnLyIgg0G-xkDl4vKPRr6zPHZAxd-8VOSlO25cV-ZYGTAycQVF6XeUO4qgcoxkvx4E2i6ZA5q6dp1Ec2LR5o4wC-laoKEGRI-CrJej5dmrF7vEPutyPkh1818Y9Wh_TMkqC38DkQW05Sn8Du4-Bg9DF8msed6TItYEUJVfwSDuRGa36k58WYmW8YRiJ0Sa89zEYT41UUcL7A5Tol576CJLN3fO4DX0UlIcM1Il_YhgINYWXmHDgN1rmuRSHGOmh9r6GP9xUbJw9Y5-AZx_PL_W0GnnovkQAAcxNFBiLWFRhxyjwDD7wyQ7lnmQ-so-P2GgdHduL7BkQn0L2SQS3wKT3bMjLmwS46GCzVPyXVoxzwqvzG90f5GdPCx4feGYT-GYkGsRIX_DchwR6yyvmnIBj2erkI-udCsseyEXFiNInQ1mv8unngsL_MTC8pjtbE5SMngf3Pc9Zn15--TpKxs_gI9K-YQ\"}"
  },
  {
    "path": "genesis_data/genesis_txs/daTnztzTMlA8Ras9XgQ05Fr9ZYwOg4-UDfjW875yQeQ.json",
    "content": "{\"id\":\"daTnztzTMlA8Ras9XgQ05Fr9ZYwOg4-UDfjW875yQeQ\",\"last_tx\":\"\",\"owner\":\"t2CCQRHKh5EyrrpzVCct_dZIdRCEobP8hKb42FOCY_3zwl1BU5RjB1nyscpkaJQtQePz7hV79SjRNM48ldVQXXlcBgMLqLFYGFwA40dtVrfKNVmbRLLeZBFptlhccYaEasqSOBCnhiTKRkuwOXtzjmhn7h_lWTGnW5uBjGZP2vbJkZlcaZ-3vxjGt9Bta-WNWmhUWSaL-50fejzQGOI7UCUc40elXWnRdblp6vgSTbVnxuQM-Z1jBXs7_XqfUzBaiJboZUeWl9_WQlwrhkTncqJ1l8YLXptOd-T3Xvp1DtpFuZfJRXV-Hu-Lfb9KZfSxKLSMlSzk9wTUyvBio4TtBtTUAtXE5x9U-dJJKG3S8gAQ-FlWzCezVjHkKC_8PbWouF8npyDUR6s4CE_uUXmZzdeicorMN9Z3wdzBWH-NAVExBOr1lRAdnyn2MYbP-XfM5x_bq93QchNS8xiHAqaSzHlb0AP6-62WqJxgAN08DxKKX0Uh4-LAdTygkEJw-iUmLw22l8aggyr4jSVWkGmAFnE6uy2rjUi5PJZwRnMYZf6PmeM40aJEokN2JzSMr8sWwEKGBwXwc5Xr9ffS7IX7K63ytjnBE0Ye_GN3eXFChjxzOE28I2MZR6hEcTjXRwhX7b0IAqNb3zUBwPojNWBh9MXJdechPKwDq7Gg_UFrx70\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"VGhvc2Ugd2hvIGRvIG5vdCByZW1lbWJlciB0aGUgcGFzdCBhcmUgY29uZGVtbmVkIHRvIHJlcGVhdCBpdC4\",\"reward\":\"0\",\"signature\":\"Wxx5NLc-xX7BreSg40DS9O32RyMh9J2lRNQli7RvTNc01Qy4wT7tdnfiRfxAjcSnmU7OLbFaJO732ZxYzmpt1LYCzF9H8LLmaBXQ4_gHgoZauNmOcv1RYBJ2HxrxWwPwSMKZQGrlO_PLupkJg7G2i5Y_gmHNukASbWiP7T2HLZqTtAzwuZwDU6W89IuIDKPmUI1OJyVdWS_XvoMLZZh6MzcwW165TgT54CLN_qpJ0vxRtsGQG6jzVsAUl98F2IokTu4GlCG80A958FsF8dASFFvCj3mM1mvZnK1jg9KrlxF3mRiHbzAMG2PnwjUGH6S-_yrdRDqmEGGg2B-p_r2peegnzBc1Dz2nx1Fnx388JjNHPPP5iNWjF0rpzAiAPJWOBt3YeI7ypzWcankfo2jYBiFTj7sTSM2GO-FabWh_oMy-aI69dM0DBbZEN6BsDICxCpip_s5i2yQs7SB0hZj281zORr0av6pJtle8H12DKa9gVgxZA3_BMCFQKmy1LUWq3MOt8aoxrkR_aTFnA3CpJC5kzCJAEs7MxUYovaFc0p4A3JzjTD_98VzGhMqDlOnAQzU0cgv1hmP8ZbRGpZtJG-0hBvnmb9XPGa89migA1dqk9dPsUiQ56js1qRe75LZ7r-hOgfGva12jK9X6Jp_VrdKQ9eq7LdQ5deC-YbZpOyQ\"}"
  },
  {
    "path": "genesis_data/genesis_txs/dn3p_BqD1gIcZQqdA8r6TucwycKGave22IqNjzKSHqI.json",
    "content": "{\"id\":\"dn3p_BqD1gIcZQqdA8r6TucwycKGave22IqNjzKSHqI\",\"last_tx\":\"\",\"owner\":\"sam1ocQLFyDkoh7WNPLZjL37ovCi0-BEehRnPLTTxvbNjzZCv2djMjiqoSag7TpGfvc2QvCZl1HXGsX6vY0R7jDs2uzaXPUeYvbL8pzIR9-JF46RwnnlitdkDp51TbOeEdxH6_hmp0qoD8V-Izz4rdB-ryG_cC1hednSTTvN8_Xu720d2-iM9XA5nGEbcGM3sn89WRgdaD25dQJRjq1hl9cMiq6iaA1kgTgdTX1DCrHtzkK-3rHsfNxQKc4wLEVTmx18JCKdaSXjQMs535Cga79ygf5gYql1B4jn-SjLgM62Y1GT8YyIKlAfr68JjJ8Foe-Xw6IsruQ1o2oj96Y1KtmP3g-EvpuBK9Z9XEuJXq7Azsu8xifEdGny3BgYpuWuw1cHqsh998CEdxuUNS_5I93760UasQinblTSqEu0Q70RrxPku1JwyfTG1iszEi5JP-rvpDiuRJjOsJRH2egKynnmrWbsCeqT7Dy7xAH8gEGUpAl2eFQIBiHwXQyp94mJoYnHiZeiNmdocbR7xihQGizM_-aoi1rPzpH9K5p_8E0kfXE0659j_gesBhYBHz5M0TvgNR6C5qbBScFxAI5EFaAHNrOs4QitlMUhMHkOSMeX8n8EY0HEf6eZ1dLn6zxmT7LcuJ4Z5GK85lugJ7x4yfAssX1R79TvccIj5WtSwy0\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"VGhlIG1pc2VyeSB0aGF0IGlzIG5vdyB1cG9uIHVzIGlzIGJ1dCB0aGUgcGFzc2luZyBvZiBncmVlZAoKCVRoZSBiaXR0ZXJuZXNzIG9mIG1lbiB3aG8gZmVhciB0aGUgd2F5IG9mIGh1bWFuIHByb2dyZXNzCgoJRnJvbSBwZXJmZWN0IGdyaWVmIHRoZXJlIG5lZWQgbm90IGJlCgoJV2lzZG9tIG9yIGV2ZW4gbWVtb3J5OgoKCU9uZSB0aGluZyB0aGVuIGxlYXJudCByZW1haW5zIHRvIG1lLC0KCglUaGUgd29vZHNwdXJnZSBoYXMgYSBjdXAgb2YgdGhyZWUuCgoJQWQgbWFpb3JhLiBBZCBpbmZpbml0dW0u\",\"reward\":\"0\",\"signature\":\"mWtuAyADsvC21TYG_mKPm7P0ut3hGEPzepHCbmNnVn9mx-Kc78Ce4ucwehEe4OhDgRMeJIEz2-w6df1dfMSSGnbYEjM3scQcfUfivb0AokfRoeapQTgC3y0aj4liNn07h5yWXeNCw4i7ViV8FF5jP6zl0SViJ5dT3VLIGfoewwEghDSBqDRgCae-m3SS5EBCTDY6VnJtP8XAGFtHasz9Rj1rT8Mbu59939_rqLHmEXqNOZVXeMqiNzCpxMArTQIUDPGo2POxUJZxyBFoPO3C-0wRXCF7KzzLq0GZJX3o8eKggk5VCoDfewJH0FrJftrZTeIq3NAoRIeHUza_MGSsJXlNcT5JFay4icgLbxWaOeMY4Z9GlTzikk2s6z3kJmfWLmVC_ltD8nt-Jqx-p8pHcXNTW-ZZeJUKhGTQa-XVmDi1JtYU2EFZVkjOsfvLKMDpDaGbn5ihA9JQMyrFJLbcLLZohvYv7lSHcSm9q0hJFh6Adc-YNby-pY1LNFWopZsVHlSP91Dpa79-5Zbdm1ClGvg0Ikr6k3Gx_edLRDphjQcqSUVDOWNgUyuqaAH2iTDfUF5w7Ze3RVgF6gy7QK1AMirDyNeLE885KT5JclircBVN_zVL4s02_eCiCufVHc9O2TWXyAl_lA87tpRheYFWKOmGnvlN_UNKK2H1a1E-D5k\"}"
  },
  {
    "path": "genesis_data/genesis_txs/drYsyF85HcvC7LM1hkzPPgTj3_zp3amcNVNobBmOxvc.json",
    "content": "{\"id\":\"drYsyF85HcvC7LM1hkzPPgTj3_zp3amcNVNobBmOxvc\",\"last_tx\":\"\",\"owner\":\"ugbb-AkdOkfxlKU5Tqx23wy91EtMJs_8MOqJDM0UehzpD0qW9OcgP1Lb36jLBgomlOOvgLyIxS9cmPRp6jghyuA21xIiaC8mcbyILo1riMpm9ka45P3Ywdv0mThW9IQhFjOOcc7zT_9_UgbHhr_z-1Mfv79K_lnW3WM0QzvzD-DMYiyEGf94UaiPGvWWuj1fMTT1HJ2a81NJAnS7jjxO8PPb9UdAFs1cVwfqgjMxfA5ul8GV4Dimck_nGWJa50zpLHkWZvXApJZOJvB7jwlLG634cU_y6_qfjcXkqK2sXwMYz8ZkxIKt9ur191qvzgRbIp1Ge-_2LP047pkVTX4gxEYOLtLVcREoKjterd9SsCXPE3xRR_psSZ9y0D2I5rF4pBpI0o57nreMiGHYcDjPnx3Bz63HHK49b8JAxYxIdc3rMQx_dA5kEU_k0lw-WQjKEjw3YyCQk9vPCKWxYpCBoPWtcliwO2o7EsXdHJ9WDz1Ki9yi4OAAeyBoDbfZstCaT5u4hsfQLJwhqXIIxqGKTBwpgtkHZK5GbsZr_WlFPxUABHf9ffe_fsqhvTZ74CBN9TPtrllKJLu3KZ4qxnrZMFL4dq04pkv08y13VoLMv8VZjd9Y3eO77HUD1g5WE3ltH45GtOax2GFfxj0zrv3oRyHqA_dKVQIUfHGq5_kYEAM\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"aW50ZXJuZXQgZm9yZXZlcg\",\"reward\":\"0\",\"signature\":\"GyJe-uzMReDw6NYz_KKC3hfCVR40nT7CwhoAEW6Ba4ctzdBhl5pjb38FkvFFY2Nab_mBH5cwAW4lwD5bdzmM3Mfgbp_bNajSIoo4JAbYABsJgUo8NuJQFvj7zVfl5RlEpwhP-R84Z45GzEMoJhDYsJ6JMu-4Ae2q4ciL-Bx1iPDWnxYbZw1ac-R62jo04U6VBKepE0cc017_bToEuL_0gL7o-S-tUoCa4sZZbJvpe3KwuQN0iFyzubMx_UZ4CMgSXiHK3QS7init484-_OHiKyOzu06vIpPx-9ONNfUKinbGUC91A5v73ICfmd4bi9eOpqLBSxw49JKIRq03AE2cka5EJ3gNdAHBkiuq9_xalqFs-XDLRF3qmYGLrjH60MhbO5DT2w_tofHgLFcf78mznCxxoy0YzZfjeCf6QayO8rjVauEbWdx4cqYFOM64MOOn5YhOJ1J7HYv1W9b0uVlpMPMSx0F1a1KVfoNioXsIptg5QuBraCiwDw_MiFd9hFA_dOzBo8mxKOVYFcm2V-LJ6LKoyXOvB0INFSwOy7SvVAwO2mmh9zNEewMAWUFBwigwvgG5t1VTKGTBuhvWwqbK9kBTZkJodLoZkfbwIcqh5be_IIfN74YvwsF3ewPVu3fb9HxdBZzfT7Jtr-kHDSjI8a3QadUNdb0HhQQkxwbZNO4\"}"
  },
  {
    "path": "genesis_data/genesis_txs/duSw-WaGKAabAztyg2zkj6hjgaVaRGBrJuvZ5Gd2Pzk.json",
    "content": "{\"id\":\"duSw-WaGKAabAztyg2zkj6hjgaVaRGBrJuvZ5Gd2Pzk\",\"last_tx\":\"\",\"owner\":\"muWzriTg_Eymwd6zqnywNvhKLXoUHi1L1KMUT7NtlGKd6OExknojZnIlN75L56kytlUORY5S67EsUgn1JAdmaymDucDrBu7bTMoo9rD_nxHcMbUZM8kjG3Huy4So3GPCI_NH9LuwoTR-SJmA79I9zLG9UmtUvKOdDPwSBTC4SnaVc6bcnx4ApdH4hoxS0IZiDwD_k4SpYl9wIeJ9cpaP-jIt7u6J5lYSLTPWr2McM7JVNtwvJW0gr84n2Hz8fQ5KtEJsoTsYBPe_CK4IGfNVNdWePMTlM25NqXzoS1Pst0wcaN1SQxEzth5I9VgJ04KEBx8jcvWp2YNd_QaW86nlhVSuJaLrVKncGselftcVtYm2Amw0SIvWdhn519fOCkwGudYFardqW4hYeuEsQ9RQWh4WEDDW7hD3nD4xiaYrDvk5kPMn6g-NmKhXu9C-4ga0UZcZd4FRK1hawtYHJmRpcM8_B3F40hkuAAa52D-1y87o15r4BfkGEMkKhT5QpMFyzrlqT57aOTwyls7Gsy7Sm9vn6GAI47NOLxdW67xfPEERua_ZuauXIJoItqiBZAppxePtXinj8TW2IvN73umWhZCo2Nz-PitWZ1s1nNnhQnRwWxt-BKeecNfoCEZg70z_Q5YMAbyypbKT34Exv2oX2D1WkmvL3n0dBULhjU8GgB8\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"TGEgY2Vuc3VyYSBub24gY2kgZmVybWVyYSch\",\"reward\":\"0\",\"signature\":\"cu10YWLpyQaUtDue6h8F3GZwR6Ine-6_SYqVplQ4rigN0PbW-uL1VqvI_Re6JZmAJ29eeJzPWQhVB_ei87jqWrauGzbQzhn6xh6JzORdiF5izujquosdfAl8vlTcCEE3ELYD60FFs2P2A9uSlZ0NDxAoQ3QZjPsHwAnl48ziPte8TJ3OrGAO7iIeoejtr5OFJMS_uQxKiwb7iEqdE-RpGN5vvMhxcd3sJsXN4KEzytLyRCPUhTBsK8YqdD5givRUGs_HP46OudrxqqrbZep_J-LcU9EWgDV69nC2T1CbURB2mLWdAr8I3rVvZExO-DQsF0nvgE-4Xe7rgtfXuVydfMDWUny_kOt7pWrnkWH5MtJ4QhIpIcsR6Z8jdFDNNGH-JTA1KpNoIykM1nSClfkgYkjxBEP_-Q7DyoNn7YJFVQqpxUxSU_HDTXycNLl_zAS6eEdrux_GtPmaJqeytcxSDJrMRowjbq_yPel_o1nuwhJ6OSuNI4Hct7813GMtmjJZUX6R5Y1Me5n5TXmjFXxUJwmC9mdb5ucdo4mbFl9zbjtVRJgplpCIz7xiAPjoSVamuIC9aqtlYOfJ_7YBMT_o5euSEjOBjH-PcAPhzJZ4NNGdNUvBJMYuKgKb6s5Zgqy1px7SMwLr4TIBU2M_VAa3XudN1tCGD-cCAGAGZaKa9MU\"}"
  },
  {
    "path": "genesis_data/genesis_txs/dv28G4IsYul7liWrycsx4UKSYHA4sWUY6xFQzRPi4p4.json",
    "content": "{\"id\":\"dv28G4IsYul7liWrycsx4UKSYHA4sWUY6xFQzRPi4p4\",\"last_tx\":\"\",\"owner\":\"ts-hYwVlfOU0H3oH5b7886u7V4coQTCL5gNvuGPREW7rYcc-C0lyn1Ma1k-x-RZ15BALdvzBIc0XEWS_ytpLRbfdj_TAtIdEOrVAX8Tq7KkF5-dH-6fBwCt0zkRahKZA4RUkStNu_NYSIpeH2bk2jtsL_hRqfYmhcv5LTTfYv2Te4VUTL-YrCvrw0c241DdgHyeAQlDAMYgkBlZLLd5b585ia7NQ7ksAvdUy4fETR_agX_mJY7squbbMqdl2YeKjvQxM-wGO_K-p5YVdU3kFW_6uSYuIFaMlnpkgawYb9zHALBLGYVL99ZjUOQ75I2pustofx_OKAzBIa4_YunJcpAWFCa63Nlsh6F0Cm8Lxy31m0JsD10zQshVA5f_RA0s6YmA2bo9KpL4T-q57PsDuzgAOAMNiIhePW_71oeD2uUNYfB4wDkKnknsdRLMadYzmm8l8QCPK_-FfY6vkaYy1s7A-11cxOClerCfua2D52gW_zDJ8DJB7HiYkt6wGIYW2zaYiG3P2JKdM_dcWLLdnhmwVFQqWTZKIs4bF399h3RhrRSpX15ZegQwFOHz11pKuVqQlUsm5B-b0vh3CPYqV-0TyrfenWnivyCR88oI-0BuogpGHeZlAH8ErN_iI7tq79jLIr1lVNmzqLMQAPS1aBpGYkl5fxYvAJ_j2XAij6yc\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"VGhlIHNhbmRzIG9mIHRpbWUgdGhhdCBzbG93bHkgZmxvdyB0aHJvdWdob3V0IG15IGhvdXJnbGFzcw\",\"reward\":\"0\",\"signature\":\"cOGtWrOChFpyjqGKka3yLIGud8KgwO-wcIZ8bLokr-tGWb8X-YShuLo4HbgeNIGg_zDs7jHufvCwjAFmyLUZJIHvgJGpRF0S-xmy_H9Y1ND-dZ2hjy_-NcxUJlyhCb1K6_3KpQXaPSJ2CSmhIF8XqtDxZ3zuB7mSkmlu4Bd_EkAPyag22VfJAKGSN0z1ncKXrgFnZig78ZPicBlLeN3TI1B-xQ0p6ftEcgNVeL4ZZ33unMR1k51-G-wz77LbOEfwyRopN9q6kPPhw92ien_d_SHU0mm6OrPPt4W61BVYSbC-khPG-JC_g2MgejcMkKrS9k93AkPeNs8wQAWutMwBJSnEZmwCGqPWhMNABfmRoFoS4gsMhpmKafT6Sp8Sl6nGshvH9xNGG2OLkb9aWr8eAdIrq61yn8VeUTskhx-o-3AtJNY1anH93VTw_q7PE64GicligDliRrJECL6drX-BB5mwEejNgt-EJVQqwjs-dIBPlPMGG89gHJ2V_ZB_rPnUMjggap4EZCzkR3xNRMf01L_XdA43Hi5dA5S9OalC5lhkTC_TU9NPVBz71jWf72HD7MQUr8J2tJ4KOXv6m-VNu6EdBlOLkdUT9NkPqAYbkqCG7Y8TEZEBRc0OrZ2mOnxVgFpQPmtv6TRJKz6-x8Dw1k9rTk8gFH9OySC_gNeiCFc\"}"
  },
  {
    "path": "genesis_data/genesis_txs/eGhF0za2qN5WuadlVZ1iak1S5LxXswHRzIa3j_P-sUM.json",
    "content": "{\"id\":\"eGhF0za2qN5WuadlVZ1iak1S5LxXswHRzIa3j_P-sUM\",\"last_tx\":\"\",\"owner\":\"uAXuVFIvakVIKv5tU97_R5zd6lS7BApi7WEdlULOOkpkzS8sV-x9siDTqf2355fm6u4CKiIHtpQoj7icbTouNaxs4wyW_T6iyZL7L4hubWvMzlZJf6_eYtSIrXyhdnzXn-iMpax7TSn66WPwikBQbUr6zGI2xjhxKamb64eCy5exNsSwbxbdazvNGLSGuB0xQw5hI5-gU3-qdJ52U7vFTJeRTcZ1RRIH4-CQvFw0XIf9WFQklLquiT-6RyT9AYlQfb2gSHqpTE1St95so_-l3dF_mKQXBp-Anq1oxBLqwhe_4InnPpZR1_0mCG2RDGPjINrIiZNxGmP91W7ALf_dXgKqPZru6TX-IigYY8OkGBIyoSV8U6rGbPoRkmJTvNg2q_ClF_yXgxRYMaVDYUc_BWFBG5NygIWowVC70j_4KqFaRXOHcGHM0a8QCnja3IMxrnT8dfrKQAs7kCVK9ha6AAgDIHOnVZ5T_beUr9aFgJWu7eHmdYG6xtULzqV1pRi-3jnBWDvPjZ5kmLeSY_EE2aGCIU7VcoGeerQyXaGNvIfkGOIl0-VZ5h_YETybI-VmIUib_LOw7q3Ozqerk_TqyACc1Lk03VMbf7nBy30DXtg-M9JnAufwT-0nRDZEcnw5mbnW0lrJx-Z1-YWevy3Wjmyjf3Q1MsmL7MmJmJP8deU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"QWxlZSBpbnZlc3QgQVJD\",\"reward\":\"0\",\"signature\":\"N0ey0FWgBesQAxptPEoaKPNkoFc6RpiRhqskE86r4fklZt21msGV90eanSUIISOrT4AJx4DxxnERI6xVbJiFhp176dQMhfyEHN1iR6nYdMi1J-AQFhcCOqH7065iBszafbVQ9RRUH5wMx4-8Lm87eB2UWIdFF7MJzjZ23H4JX3Sg4tvXMboVBDlN99YfPB4UdgnMWNyoEJ4sdfBpfuuXG_cIGa1HOKkendengh4GyFEbGJp9d7L8srgKFnIU3_xz2uebNHqfMW_-pPpKdm9dfuYXLjGhtv53fwqY9K2gBSoXUOc-5PQpqVgn2QeNqgQwJWJNWElbbZvxAslVZtnvs_xagln53y_d6t0IWzKqwkudkOLerVKDkU02Xd_OMnG613dAej-5a1ZgmrCQpJSIW47HuntDT7g5pM66p_-Zg1reHs4PCUYMMWhsXn0s5sBS5QW3Eo9EJGEQOUPxiAzExjzLMJ95XvKBFD7_G0o4KMt4R0A8sFNZyGovFhDpjTtkU7gsCsgTbX7wfUFfvG9k4GAYi4LJ4P_FCRDYXu-SFOrsViWxw8jhjIArNWub_qOksJIGyUJ5hRptR6nYXjo8gvpVe664SMGc5zbIYxpYBTGH79Nth9QGa47MEZw9Lo_5I_bvA8vDXS1EvQVS086cGfQoZ2ssayo86GTYZnipyU0\"}"
  },
  {
    "path": "genesis_data/genesis_txs/eJ2aSQ4nm-i8XAZW2pcRq6GoEjW9K8EBM6w7rLiuSHw.json",
    "content": "{\"id\":\"eJ2aSQ4nm-i8XAZW2pcRq6GoEjW9K8EBM6w7rLiuSHw\",\"last_tx\":\"\",\"owner\":\"v3NRJ8Qnq6iGQqfFL0cedsjLN-sFJjgMK8V8IAX1jk8bl_W64TGWnBCwvGM9vz_mohEOEGv8pB4fn57yXanXvLZkIbROF01CC_OmmXnqkyaaqRoQ2OLKeKuDUd1s4XiW0Dbt0QhaCS_CkSiQFWKhPVLDPLsrhna0c-lvCqJDRnc7qTOLBYQ3JZt5kZXNXQ-YhK2_rDONyAn7Yex-ExLIFAvsU9M0SLkXaZGPfwX7M1y3Ee2PSeA1ErRbPPVSe1UqE4FEIeT19vw5repfflU6QJtA9fDvNRSr0bw7mpmSLmVYos8-UoTDjl2JANBOHpvFklW7RWC0tAR22pSLSxxgK9R71ZagN3-S2hT1gqkJNykc7rLV6DFr0Zr4oGAr9c8N1uW5N1_OqXx0s4edzefwjKZqKlw7P2raDFzhJ1-P7Cww0KYYOX5xHaBpOwVVaO9JlAcBgjxpP86anKhtGHqaZR8uFzgXScBnmcmTSSZHFHqm9PaByZs0B5wgLAjHL1BdGTDckCDn9r7LCaNCPwTYuQfc1-pGu2eNAkdCssLVsJwK3ic1F93xRwGC3HoknlVumrivtCkTu9ywStW7ICZQAgSWJAPeLHK_Y2Fbq3jbmRHfm88QWljjMwDcWtrEUNk18romTaJhLl54zeMmOHuGg9UrYJBn_7pSDnl4hxDRHEU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"Z3JlYWQ\",\"reward\":\"0\",\"signature\":\"mRSL8_vH1Llku7HhEPkiI9JLe3r394nYVDlMpH4om1Wli8vn5M_MdIoUIRxFIk7y6DOT4XUg3_JVIX5wb9qgax48U6bGEhW9o74ZOUDh98uy1uOIYUcgEQAWRn-CeCD671pMXuWzRPeqAG1WBk-1FcJ9ddVMfNPVYyddN5tp_WxzYff2QDlmnczIm9ApQRRAmITZXPNLTeij5PKGYIH9G0FSFkSzyD4DBgekBVZ4w_6xVaRLJIZqqL3rOodtZ7wc3yEBHY9fs-Pd4Vylo9SSRA9l5gvio4R4hw4DR4VZXVUihb6yQ5qMyErjNAr9MRuUyzDy-rCXVsCYqGVEqon6Ef1Dn_uuwDq0BQ9AShwbcuM_X69drDvWsXQP9mUVwiY5g-4mn3EdYvW3x-OLrrjqWWjrNLX-qaEml80WDYC2JCnqRv9Qgle7r_Z7sgMmCRfBLBD90nRA7NE51N1qRrBH0uqxjcpOny_7RYVzhpaR-HnrbWJLJJje9tC1_z3vgU1QIPy1fCL9vokPVgqCZIga8JyA16Y7Ff0EqImNrpZm2Ko8oRZSzA5Ory-xQ6dlxjCwzDRJvzgHBf6tG4PD2_J26HWelBQSjHo8BWc9PXKwUtPIrrYZISeTYLOuVa1k4aKErFoQI20bXCF9o4AYNJp7qX2uCuhW0i1nVazYxqAfXo0\"}"
  },
  {
    "path": "genesis_data/genesis_txs/efqI0eDfp0OcYB-Ms5ELukIUr8-qtlX7Ica-ikhVZLU.json",
    "content": "{\"id\":\"efqI0eDfp0OcYB-Ms5ELukIUr8-qtlX7Ica-ikhVZLU\",\"last_tx\":\"\",\"owner\":\"7d4t1cSZkIvKy-77I6X6MqDe-S8qat8BA3osD-2K-pAoQimJyDMMKrq6PAUR0R0ssOwLdR7nwwK2ZNITS584wtI4j353ehfnWwbIgTB8LPitw6KvsQs0ErLqpC87Gslc8nw3hEg8nqg7b1HpRiRFcA3gl4hiG1B_6bYYGE9tfJKZPKh9GzflnwFY0uNf-3qZGAMi2xSHHX20IqDdhuYlHu9KD3ENVAM-1lhZuVIuUUZ-eXEAqdqq9vkwwoyItDhZz5UdPL-pvCwv2GYzTCCBJm8h3Moz8pz23RNKgq_BxyQeeh2huPZMDInAqoqcIYfzuPv6KdxulFsqClmjsz01opbIwIaehkGTJCYuNLQpqdox8cjFcX4vTWxeeVqQjiDMl9rSuYLftVZZqDPHbMXa3M8kC4JEwyItnaRtAt-XDyHZFtQwMoEyuSYaE4t8gk6OG64F6z0Rsx59vCOxDlCmBS4P9jcZJ8O6iaGV1E9fvMCbyOZs21W74vBomb_lMS3NjJzVl6rZLfD49FY8OJ3C8itCLlvTCjzzmJl7xsS8tenxIGQWIRzOK1NCozxRXtTbt9UGi__15JH2cqLWDGb6YH72bG7IKDpabZsw3FcRd0L1ozHyUNxn4vVq-tH-TDS67xsZ0ljVbN4MGQuyDj2N17ksy9B-xTpNJ5s94tGvklM\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"VGhlbiBhbmQgdGhlcmUgaGVyZSBhbmQgbm93LiBEaXNzb2x2ZSBpbnRvIGNvc21pYyBvbmVuZXNzLg\",\"reward\":\"0\",\"signature\":\"7ERKBQtL6zCAQl_xPlfJCFPHuuA1TT4vroH2-eAxnbdAD59X50OIOmiCMSoQxYKc5HJgHWZEy3t04GRLUBHMAN6bkk-4G9S3iqiWWWvXRtvdzX9SjGH4g0rDt1P9reUf3CkfoZfI2Az8UOzWk-oSkjv7uP1NIqhr3ke-W0JAl94INFpCIsHv07EHyVhC_1lRChxPT_YKWHH_h1kUrBmqaV_av9rNPiJBLJ-FCurtRt26oketujemgf6WR0d--w1CfPW2Nq2YYvYvjCFmcKamJLA_fVNcRoN4FgANc8iY52BzVMF6FMnKo2QovBOSx-mgP9yJ0FaCmk3EUV7Uu-OGl3x5GAilkJWyGAHmtm62caCoAeIT14y_fUc8qz-XkfBaNJqZxdqo0uGsTjF-7ZyEyG0RWrFQeA0wuhmN-wCyAj7ZtJLH-q700iMunEc6-z5qKCBC0ewux9j6DvtIfzRsrtkY4KMQU79TADrdeju-Z3SanFyONHYRGH0A6UY61qnxDWv5620bmi8LlWsbEXZfoSrM0NdcayU3eS7fBIlwTwM5PMldwWWGnl_VQLndj6BdHdTMjLwSVAILBna-1rA0RREud_d0KfTYGUqnTVXt_hazV2ak4Gy6IoKHFACQ_Z-4qYuWyId_2l1kupQTm_XMWDcFZ8dCNoOXhSkBoKV91oc\"}"
  },
  {
    "path": "genesis_data/genesis_txs/ez-ItWkyBvBZ6J7_Mobrpqc9RTp6I2JBmkPDV_xCQVY.json",
    "content": "{\"id\":\"ez-ItWkyBvBZ6J7_Mobrpqc9RTp6I2JBmkPDV_xCQVY\",\"last_tx\":\"\",\"owner\":\"1YBf0ANqOBDFEKhWZw1k2u7OUElohaCWl3EpvlpKH6_3IL05H576YbSXNsPBJe35PsWZ3rGK9ffGtuA2XP1g2nFDI376Ob6ihYRVKUduvujNGi3vPOte_cJzWQXT-6o5QcOOq89S96wyFJOu-MEHcvuzm_8aXm0LMgXkSY51RXTCZgStsNBBZKyZcKwovHoE06xE1nO6teR7VinbI2JVTqNh7MfJJmCXiWR7eTPLGVwoon5iv_1TTgRM8d-pa-NRPpQvkxL6JJJw__j6_qL5G2RlAB87bdkHKxot-3K7QFPYgVDY5a5TjZ0FLaiYbSiouaGUPBWQTpIspjqjBiMap_UAb7QGuZuWbhNZrLyCvC6iiJe2kSF5xqC7R9OGcC130MnI-MrnoioWY7UofoL-42LAOtRtYFUsDkQ3HRlfcrXCuiX4lC07R_1JB8efsuEyssgfkAUdPG4o9Lrpaul4c_zGJsd0Dcdt13eR066Nuv9MJBb9Sx7m4GvvI1KiGzW-MMPsQ4nGbU89ps-gJtjaHnlddRyqxyhjq-JtOrPrZjuCg8DYW9jB4j8rasp_mhxUgGQzA5EydGx0D55TIJOi8DDpCaNGKrcmraTkRNd02sbD_A_IMCrtTBkyFXlZNXaMmEnWpH95SPMq2wZv81k44SuAk6QJqemGUTd7Y62y0pk\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"RmluaXMgY29yb25hdCBvcHVzIQ\",\"reward\":\"0\",\"signature\":\"INhORVY5sn09ba05kggTiVN8qqo1uD9wpJ6yt8Euc9jcHq5aIhN8KpWqTZj-WdXC53k0G00iGRQBnu_2XhaZM5wmxIhDV7-ugdpEv6aq1Jev7p8Bh0-5Eh7ivRCnt-_uYJ-TbaUFJLafksm8wP0qLscK8JwpAbhP4Mm9XRIj_xc1qHaaV64i0Rpl9XmkCZOzsY9lrtIJFqi-7zP43XkDzDjQJ3C6px0J0_hS0LScbT8NCBPDeDl2GViUu5WlP7bJknXE9wjNH_4dGrumy5HT-x3drcSGkM4K9kM8-lIYv4tNjM3F9ynSXBJy-UCB7kSbT6O3ym983N5wHgHioUrBbixnLzI3Ki9ducr_Jj1uR27g6KyMazmQZUsgP6M2Jrpzie7lMURD9faCaH1GsL39mns1eNCwD0jzx65HcaVDH_iERTYkxMMkK0f3jejpIVMve4y6M1Fjcb3FppVy1y6d1AITWY_77CqFUsjX-CYRT0YlTnj8hQuOkhJds5o60syA0cYDfVQXfGNQVNA--CAhajdNXCgEZ7iAYCBWgr4oxCdlGNNHFlYMwtW7r2XBw-3F-rzuH6clwJY3sDAHbMxNehYhCw5f4MIy9wytAEpVM73wqIPru9QAWwdiy6old-sF_-5g7wQQZP4chFpTgty0rU7OUvCjIMCF-JPUeRDmNhs\"}"
  },
  {
    "path": "genesis_data/genesis_txs/f3jE7NK419FZzwkx9VjTkrcX5FEgl2Ky3KSK0vH-wj0.json",
    "content": "{\"id\":\"f3jE7NK419FZzwkx9VjTkrcX5FEgl2Ky3KSK0vH-wj0\",\"last_tx\":\"\",\"owner\":\"tqYaS_oanm5iwKkIxZ-ryNiZ6kZu8Q20b-6rua24BXn1U1T0ygDbGZ-rnAGR-SZDFbR8dxx7tWU_nZKYhhWjIebhPFMTA7YARnW7gC2FQa4SCF4iDWur1mFl1ASEScWwP4YbFtzU-lcxQTMr0_F8P8j0bRoRbh2lIdcGN6qj-kpzXcNvRxqziE3Q3wKFevk2ScppzSwFtVKXUbPNxvJ-pmuHBrjgZtY7IpXvJNlXPNy2EKeKUKz5so-B_Bw4b37U79bIF9nlJfeVGThwk-sELWoqSd7rumTrL7zkHIQyLtvLQHpzbNRo18ad410dCEQzxInMPG3Bu2fypqYzuxs6YqIXYU-fa8RZMsr3vxixjC42vGpL_wmObkqHGvLdMsMuzTlfWumSJrVvYhuN05Y-f96PP8hH2H2WgqnXNFURQ1KzKm8WoDJ5z88Ul6acgivqqEO8cLH8FZpUohtku5yieofKOpQTD4k1eQt6A12R5Tpa_FNnCNT_A2uHq1T1eGKpYKUWY2YsSwwig3QGU2Bk-TiMPRISNxPb7cjYWqNki2F5RmFMmafJ8n4krCX1g_XO_59hIfFzKfmay_Mt20g_7kNQ4VQxufMRDMCohCUL8vncUX0n-4nPe8GgVJVKNQzjCZGOV8hM2Omj389M8yFpkmhjlz7wtqtND52gIYVcfuc\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SnVsaW8gQ2VzYXIgU2FhdmVkcmE\",\"reward\":\"0\",\"signature\":\"X5NZXWk7PpwhpunlhmrNO0-aAzQW3fii9TN5yfz2I41G2ZKTRXPISDmD5sSvfGp3cL8Hb4Gmehkc7JP0-om_UWA35xxm3cWt6M-XA0kekAjUPHIC7bgjimYjqDGG4QSAziDQqbNwhXV7Ux-2bv0Zva-1fWw47gTMoaNrWrIx5srhP_hD-1JlAPL519oinRnr-H1XvpC7vc6_18CP4AgXDqpFCTXqe8MgDo6FafKLabqB9lNCM53aUUipBGlLYwvQc3VeaqWUzUPA5gU1yPSNaEpqkycVWQwmtGyr-zSnr2LcQxExnEXnDa4vByLT4C9oug7sL7xIbOtMpmXZAhk2Q96J6WUF-xmvZhuke-gHo02OkGSZ3lTlhwXAdSwqSU-Z4jwWbBhs9yCGSJvxavMgRzr2uRkLji4G7HDCXbapOwhw0LC3gmzTegAMiGMrJx4UjNjb8aX1-37-sn-NwMWtUAI5QzSSub_3Pkn_xWIGf2vOo_b1du9bQdxwb4SkhhbXqkopIalbEGjVSkVPL8nJyPvTafVWAxwKq9wlPUKaWz2zE0w3gb3AGYW458pekskp2JqjuiY1wYawA9515SoaUZjEwlN1HDGeHNuTB-z9b9uoXKVO2cK1v4SJl2e1rkux_BD-vE_Of6EDmzbyMk7-lZRptCLB84ZLlQvhqlFtkXY\"}"
  },
  {
    "path": "genesis_data/genesis_txs/f6MY8LMCwGbKZqXd4dkCROQK0qFMjS5OJAbZq-UhMGA.json",
    "content": "{\"id\":\"f6MY8LMCwGbKZqXd4dkCROQK0qFMjS5OJAbZq-UhMGA\",\"last_tx\":\"\",\"owner\":\"w_COBAsbcqsrULrKdXRu-Pe6jeIH0bJz49CBbHpZ2M7HccLB8bneFg66RsNt69LC55NGS7ByrtTYQacnrGDicvFia6N9d-mGYb9LQ1OCj6rIlV4Hq5HrVw8XfNApCuYVVssyONJehFVa5bmN7yBtz1w9nrWyAO3gUMpX6sTwMBdY8eYHqqQvKWFa669VnbWCwNOOY5sN_P7YYVFGftzGWblAAJrc5dxv_tHbRtA3BEQ6CuTH8sC2vccYedjXxioz1RDV6KR6R6779k2SUdsiJbsLmhgPN07geAmauvNdocbXamUs7on8Y0nTLRrIzo9pnHNZScuTtgqAu6PIDwXPKSSy2XXqzmeLRDNbgfpZnSrjlBf2QzXD7ic9xTCX6kzBAwPX4s25qBZeavNxBLY8mr-WH7cXnAEn7yuLUunI4LcZV_8rrZ--wnH1ehUjC6WI88FNNlAqJHcxHaScq3e20fMc9l45rhvLuXysspS6e0ANmKU2KDSh8B7YzhGzXBe5gVwkJQAkKilYVdUN-NeZlVC5xWtfT3KdC_GJg30NmoYci26aPDLL3L0nJtHz-iOZ59lf_8vsewRbeIx5ZnkYDt1HM1ZGDk0GM1oRz-4gfrG5L-J82dVeagbUcAIcpgZPlC5uFFmUBN3Y5ij716xuyS9pGTg8_unaCM8ZcUw_aa0\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"TVIgSiB3YXMgaGVyZQ\",\"reward\":\"0\",\"signature\":\"RkkJk0PokocPcPRhPpuASN-ncxW4ZoKvjZqtygK3ObCR2r0I4j29-BGPXQ-GiTeANWTSrx0qx3Njer38TwDNH4p9soN9EV0iyOHSwFw65A4rB9duZJ9254PIHuCvbEdi28YIarOlehHSRY2lJyWQuaJr-ks_wO6-5f5NFdr39XjuA0X3KCqPNSoIOs1g9UWNfDQas7cpbl7yOIllr_vpwONyWYu9b19ikaaXkZuzamQk1Kz7MCjRqmYA5jbq3HK11L7NhESC87aR-1EWgRjibee6MJS6ekIadMpXQ3go7wUSfRScvlUjXzyR1VJkNvZLHnAF98mBzgR8FUdydnkmhZsVhGCxyReEPK_QM2MyNm-_a9qOFPaOceCEwCya8QtIqZjVHIxtEkw7hfeqLTF3P8aqhm1kv4zyCkMMdf6Vvlxbl-hSnjWWLXu8SVX9HwdC_jjuaAB-vXl3gh7oX_-pchYGf-bCNZ0vrzH08JGThVQAL_LkaoHtN1yikp23ckworBJzuu5U7Ba_kJIxlvUIUKHD-n4X0C_Yba5vfGAJnhTMZOfRkmcV9duu_QqKYtb8PYRDDO8bj7QxI1lAU8085eJWF2KonzTz6mtsMclZdYpqdPUjVk_M16Yn_5rmBWiA_wKNHTYm19Ji5LMfaGWTYTOYXAOCFYWN01Bu_6Cnp7A\"}"
  },
  {
    "path": "genesis_data/genesis_txs/fBVa04p7MEL8BsPpyD_Pwv3uqBnBMVzG9YpXsCwZLtc.json",
    "content": "{\"id\":\"fBVa04p7MEL8BsPpyD_Pwv3uqBnBMVzG9YpXsCwZLtc\",\"last_tx\":\"\",\"owner\":\"trg3fJI7106PWDK9tvPSxzE4YuNwNzPUukzur5MIR90nuhobkf9vYX5yPdO5FgitHgE3SJ00tCBMVsBUj-y30wwfB82iYKNKAmaEHY0RlczAO7fwM2N-z9be0-KPop-TWImGMXllooIX4jABHgD-S33RVxtqd-rkP3gHLVaP4H6saBdT4AvvXlgxRWwpN1jg37in2DqavvkfjaBZ284X4Rtxt5lrHFiwLN-KHYtWaDJymQC7MMbdi6eqilm1qnDwuU0V7n_J6ph9pW0hw2uOHAJt436Z0_In84tp3TbbpPr6tV55n6__UypVnMTMLCaJKGdwHjreHuDqCZedaEHwMuDPNVB4A7o6yHzu-mpvXyWd8htCaWaRCMkLDQql-04zIJyEt2J3qPf3h2TOIatJOQp4tnlJ2NfCwiXP3pUGDyIpbrUWsmdryxkf7xMhBWRyo8N41wN51S74KxQ9tzRh9jvX-X9lpOsSI-cOgPFD5f4PzpbGldBoSrgoCZBpV_V6EFOxYLiuLtGeJ996behTCgg5v0aR2PBvoET7z-tyHVQ_zc26vflFAuBsVOct9b-UPrY4_Y6ocU45p7v0QWdke-fJ10uuAnOnukT0_baudJXmvmK8ufj5rkvQPAkdK6QgPTpzaCkNhrtncdMku_uYFTQBdUoPEMdlHacKxYkArgs\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"dGVzdA\",\"reward\":\"0\",\"signature\":\"WPo4XixrCebw0ug7xcC9ZYwndHDF7JIWpqs8_Nioyf4ERbvzgAIIlAMHvi5x2TBl-Hm5nxRzlTzoJ7jiSlukRwF-XbgLrqiT8ws9k2wMsqYrT6aPj945XPp9somwqn9HVbrdFuvRJdKd8akBIORExoLsTDq0sDcHuFFJcUs8njpPh1LPluoaSSBsd0oElFSly-PVKoO1BvUvNNonXU1YHD9AHpjnAf1qEfQhVY5ZsDTDAe5gMeVtS9z7O-3ckYdP0tnUUy_CoaFZm5gnRgSduVDpwwZZ2F9BRsOEqQIi5zjVC8iIdYnaU4MsqNS0tgpTYKtNwtUQkf29OM7Awqemld73K5nhdTW-koY5J0cQM5wJ6-qDxMp7JbV9_J6w5-nqhp7q4C6sq3yKchE8WX86RTT5KRGOfDv-kDmRX2ksPOeXRN1xOrsukZbQK4qpA5utF8lfH2vfzEOs-zUYie_TdtW7q7tqoPPA3V8tabENqnOCPuXUJNahs0jjVa7yebgqlmPav-l-0Tg6NfwewK3gpzYTeqq3oUBav8kI8JdkV02UYJbweTORWH8TIIgJfua7bgs5MCs5gNe4B-JQ286s--eKeTfJv7IO90ixQeRSGXRQSMBwclWB8CuZwCDLmcOYdxLt7-SK8mFmQ7G7BSCWG1xMNqqYVRqKLbyngEFlhjU\"}"
  },
  {
    "path": "genesis_data/genesis_txs/fkbFeVpiaAOtvt_-M9_U4HzbA8Elh5sa8xJXObrItYM.json",
    "content": "{\"id\":\"fkbFeVpiaAOtvt_-M9_U4HzbA8Elh5sa8xJXObrItYM\",\"last_tx\":\"\",\"owner\":\"rxF0vQiMomNw4kEQRkoHnOIE9xftkrSI4WTfuKbZ_UVOCIi9tiEoz_YFEw3GLGPQuRbvnLHTEDPqoDBEsJS3EjG-dNi_-15eryxaRf2g6psMFN2w4fXVQ7zQnbFuw2rxTXMxVR8tWVgogoX_TRkBBGLsSrCJsU_mjxreuxodIRyS9hhFhm660APKCdaHmm6IPIz2wwfVpWW9qEibcp2LnyclzBOfwKuYi-gNhCb8YhsjGpYFRonbqwtjSZ8nX0h4WBXxvq98CaBXio0afvtaZ5m4A1-WLVBN1VG3_yq8exn3nsE5wVbeLLrgv9qfrstyWqFvtOvJyrN1vEOxhF9oRejYF1D23QAhYIg9TYb4Wj1sBqs97tyXuu5VZN76lBtH9zwqd658PAw-Owkmjotbk82QUPdWD7ProtJONfmj-kg_24InxYbnvmWEnsmAG06s6vZyvCcA1EQjO3H6UOvo98NrWmPL1k-uMLRW9tQAs8siGch96MehqHxIPt0-oxQjnPz_c8X-3nOVqsJMrrCSUj7AQSVQ7QLtLIVIoTnJQa7xppF0Mr-7S9TDquutSqW1fnpsGvvdahp6nPXsiPYSB5fzE2SHhSY-9bqdRn4C567Oci4jzrADgEkHT6-morch4m8D1mHb1yOw2nG0H9Nqs2WdQFAP5Lcwz-abjCz0L5U\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SW52ZXN0aW5nIGluIEZSRUVET00\",\"reward\":\"0\",\"signature\":\"Wd_t4vRp3QYWKRsqVGORt47bsA113e0ZrJhuqLqTT-pbmMmGr9K1SFtSQjPZAbfqcSTURSrINRW20BLdofxinWy-9T5ISo6WEMCTaQYhAu2pTUZPhcjrVlyBiiqMpC7FNaK-fJdAJzt_tRy6h7M-RosUDS8YpvNs1kRKSTBTtl-g-PxLU2LaZoKxi3Pe4iVb6VGX_WYaZqjL3kswtzGLfwmjLGPr06tqbL47QMBa5Y3l3ojwmH49vQJ5ONdryGdsg84q6jM3X_73ByILiI6lUBK4-wg7a4N-R-PMZUYe714mDq4QABI1BfzeQRwMwpym4ZFIGeDm-zLHriltPenvcg9xxnPW_i8g559__XR66NwdM_7IrdU8mGccxbEQJVvLW27ZI3mNsycc3FwXQeNXcOWGuCA67H_T3L3FF4kG9m1nDEtD0w9JJtv9DBb5y3xubgKk7xbJSeh-a6876Y9LJGRxjfnbGWDBpSIgLfZ8wj3c6R9kDwWXb3rdNzSvY4CBHYBVJUO4g7Io_AIguLm2FResDvx2MMuiFkW69-E_9pNO3qAuHK4FlrG_bVx7XcL9F9fF9Lz6wylh-0xf33G4TdgMJUkKMNiYNGZT1LwxgvM-ZwhhrkcbrVFDxfl0DCZhSXrEcm4SJyA6DvXjOI2du-u9a9xeAbEqRaW6STGGQ6c\"}"
  },
  {
    "path": "genesis_data/genesis_txs/fx1EmDF4yioha3ms_VbddDQjl4bt6pBLpFCESuEIT6E.json",
    "content": "{\"id\":\"fx1EmDF4yioha3ms_VbddDQjl4bt6pBLpFCESuEIT6E\",\"last_tx\":\"\",\"owner\":\"vDxPjreu5LQbJ0YUBX7_FTFgnXCR0fWFRyoDaUeFwTG54zvqv9UAYbQ7vxW_-xXvb4X5z6kItUv5rNipJ-Q4A5IxnxtyrF6Jq0pZ-ygXTreAy3k4hyqHKSyMP09_DIPMFDNfIXqo_di2vVtp7JSSmMwcYix-066HPTJssBvfqKnAQqP6qibSryBw1-e00bON1Ac0jKiubYnjpAH5_hRZO4JssRfyx4MbF0AZRckE30UVPpaxh4NzJaYdwVry39t_0KJOaJHxa8ezvmt5Wws1MRXxVUVToHM3-DhMnb4_Syvv_IK8Nc9WCJGVJ-h5p5us05o1jdtElu-G86RhCEihaQPX6ke9QOjO8w6iRbengEMsTSXAvK1lpIR0mH4IQYKhk34bRh1Sk7jhoXnCM7P98_RbvugVN26HNvwoCxhC9B2rl8PovMDIEdMSUDs4JL6R3BNl0Kfbq3DuKw-7eVMsD1REqoxh3v2RJBDr5yZnUqdi84cyf3iig3SJYuiYClyXHIFcVGoWhBigGxTS9dp6tHehO7Dp_7ahLA8C69fpVIiEt5ZKiMcl6bPPNfBR_g7gzzhk44QMhrotfvbupkTVe_LShU1ATyvdSqUgiX13KDoYVtw4Xi3UPwxt18ycnK2wKJhX05AxZ6TJ5bpnHKH1KDwn3YY30Dj_GQiR7rHlkC0\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"ZXhjaXRpbmcgYW5kIG5ldyB0ZWNobm9sb2d5\",\"reward\":\"0\",\"signature\":\"KAoryyJv6Oiv_glyrvr5HJY1cNA7tUnDjkrrMZhqXcu7M8Zxlk2X_IgYsFhrwxyxRQxetiLEEmKXDUPsQBFcMOefr0ZmmyHq-DTQJoJ3Yc9LYbZjXndb769D5PASCvn1ABxj0Gm1YsLgmKqhBSlQidEK39xYuB6TF7l3IYFN9i1cp5OS68gp_LiABpp1WOsgv5NlSpF3WHBwCOWPU_xAZgmjrA14jaWARZjszMb2pUTA7Oi9Fz12RakDCWPmaSpYr0oLZxSjyy83scGut9spBZdY7atU2SvCKAwS7W0sRWz3qE7dbfUcNOn152CSr2soPewK2GPriB3toZKa5XztuMHlDcJL_ZGCnLsnpmNCXdxevwkLDtVHlcC6HMMrIqm19TKeTxMnYKWtgOsmv0HXdblhMHAJRxfFhkfyaQ78YW0OrGMbgMj6QH271B6N4197Z-PHZJt0ZLzyiuAzVAB0B46aD7oaLzjAMqYXLH_qYRB4t0QOCjfCcH_Y1EQEX7yh72D3dtuirO0myz_oGo_FTIYE9ZTwMZAG17G8S7NBvu2yRrXi5eegm2hwvCUswOa0OpGp2hJaF88X_RR7X8yYImIdQscRcJRkot4zD2l263MufDurNupR8RSGj7k2ydzeFiO-8702MQWRkD3RsffXVPBrYHdZ2PXCo_McbFHDH38\"}"
  },
  {
    "path": "genesis_data/genesis_txs/g19-Tkf4xuM9golcjx0mA1RkJUYocQJ3uYnH8MU1ePs.json",
    "content": "{\"id\":\"g19-Tkf4xuM9golcjx0mA1RkJUYocQJ3uYnH8MU1ePs\",\"last_tx\":\"\",\"owner\":\"1QzUjNEgwnoNA4csdhZ4Pl1ZGTFORWnUFgyliNk1uI3aKUH7Lgx_YAPla810B2JPq3UaVgEKKySL22cmv6elr4gK7z3lII0jITIeoSzyer3YaJhkPX5Z2VMOXqbgEBQevT-LrvA9jl17F-4D15tUPM8qdqDEq1jNikeyJiv-eKc62eEj3gpcQeajfttYGqfgDVloai2GQ5LaCaCyYjzoaADJ_VtgxjONtC7VckASu67JRwyOW39KUfH55r6cxssYUqRBrszG9Nx6CY3ja85u5vIZCVqB53IxVSd43oS_ZmzBDZeIq_0vuX_aeSK40ou7q2pBlfR5zo4CKxSb0IWslwhQM7DbmYq_aB-Tj3P8umBwr2ELxrKL24EIJLlAD-0zCrVEwLnQPnggMqB353O9dpDb6rDhvf60liKJzgvrC-k7KAmSnxvIkcQB1eWIH5-sYl7Gfrl1kD_VdmwED-gPM_Gsla2RoLfzjLMPHnpdaFdCphGYwwW7htMyhICw8bMSlMmUxseOrA4oH-R076SzYpLERO3W3NoHc2mfzgEzea9A7e--LS9R-y45onjzB8F5Xv2hgdOZB01K6qw1C-QZcNj-4DoINJQ-cWKMSk4mIxuhA5A6rEjF2gljX5uneL3Uf_ijtpEPQyR1EwptqkwTuBoFa-Glm4yzS65oZJV-KZU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SGV5LiBTZWUgeW91IHdoZW4gdGhlIHByaWNlIGhpdHMgMTAwMCQgcGVyIHRva2VuIDop\",\"reward\":\"0\",\"signature\":\"NQskYCaILRi0lPSD2IsDmNTV5IEkDl99VhLv4SxwP_UFzVI7kdYn2rdeYO2toTYuh-JUfk5y10r6OjTKuGU-puSp7XBfYrYEggvZr6cDNDl2E9UUJyB0xY50SVOU87Lw8idQdJkmGb-mf0ftLy8BvUEmO7nvPZk9OzoDp2wiIU-e8Y5PM-YuemAFebpBI7jyeGsma-FMPKICIFAcumVOXrKikisFRTJkbu52LpRoZDZK9tiDcADA3LJ8A8DZGVYLICyvWIdtlUJPNIv0vTkgSFMF8orpiCyeAqPATlXykFTmTnc-y4WhhffRxx1BQ1CxHPoPKjwFvZrKmZTF9o2Jd043MjLISFrboh_0l4GLIB4NrPKkNST9yQk6gSmPtH_np8QW5gLpA_5CcQw_R_EE2iJkZ4Pp1-pwbq8Sz4sPd80JvB_MbcHqOyBsIKK9cmonTClklYnV2OOnVxVsNhSSt07uiI0lZG9aosER2Q1xf5P_JYy8YvuDScifekW_Si_EIzINb_vXFECqt0U0oRy-wNUiLRCNiSkLwypxbQZyZp3ie7k5yvcZ1hEkol_ULns9kbqrxSLekWlRFykW09zakL7-N_w9LRbfPnBjoftG-btP5PHGWIKON0S4S2kx5SX6wSJ8rgBO81jDLnzGiVFICpgkZnkmAz5_cKxfTsR3Gxc\"}"
  },
  {
    "path": "genesis_data/genesis_txs/g6TUtTIi_rwlAHNuO6ACsQqIChWACugTPmZxaaJltDM.json",
    "content": "{\"id\":\"g6TUtTIi_rwlAHNuO6ACsQqIChWACugTPmZxaaJltDM\",\"last_tx\":\"\",\"owner\":\"x-62w7g2yKACOgP_d04bhG8IX-AWgPrxHl2JgZBDdNLfAsidiiAaoIZPeM8K5gGvl7-8QVk79YV4OC878Ey0gXi7Atj5BouRyXnFMjJcPVXVyBoYCBuG7rJDDmh4_Ilon6vVOuHVIZ47Vb0tcgsxgxdvVFC2mn9N_SBl23pbeICNJZYOH57kf36gicuV_IwYSdqlQ0HQ_psjmg8EFqO7xzvAMP5HKW3rqTrYZxbCew2FkM734ysWckT39TpDBPx3HrFOl6obUdQWkHNOeKyzcsKFDywNgVWZOb89CYU7JFYlwX20io39ZZv0UJUOEFNjtVHkT_s0_A2O9PltsrZLLlQXZUuYASdbAPD2g_qXfhmPBZ0SXPWCDY-UVwVN1ncwYmk1F_i35IA8kAKsajaltD2wWDQn9g5mgJAWWn2xhLqkbwGbdwQMRD0-0eeuy1uzCooJQCC_bPJksoqkYwB9SGOjkayf4r4oZ2QDY4FicCsswz4Od_gud30ZWyHjWgqGzSFYFzawDBS1Gr_nu_q5otFrv20ZGTxYqGsLHWq4VHs6KjsQvzgBjfyb0etqHQEPJJmbQmY3LSogR4bxdReUHhj2EK9xIB-RKzDvDdL7fT5K0V9MjbnC2uktA0VjLlvwJ64_RhbQhxdp_zR39r-zyCXT-brPEYW1-V7Ey9K3XUE\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"QGJlbmpicmFuZGFsbA\",\"reward\":\"0\",\"signature\":\"d-pie5-K8WoB56De7jU0nKID67zs9NrSFh68Q8qZMWewNFo2mUBeGa24TTQappxeik_8ivfqVu4IPRVVc3wvPkYNaS26RQU91EVxtx_Qucx4YFdSOMPktIhDTxlRwFrPWcZu80vBZE9nqVik3552_UoK78SC-7fdvQ0qAN44qDVG9dqU0aukXo__gxEzL56zVOiZf_6ENhErRw3Y7rwv4wQcBS607hNsI9Zg0HsHH9Khz8Dh3eiUI0hKtruzrIyQYqnIJ_rq6jd6uXGYpTyAqNY9FV-9PqiWpGZvBbDadEXBq5HXPx5rPF4hFgw30xsbh6Pf7g044a4E-JKf5MhFd-mb05jdL_LhbeujBx_K33frbfFW4W0H_mTj2ONc_djIzvXagGr2t0J_ZE8fK4gfH7RXZqkgz61E4zlABofPYCqah2JHHfo4cwt4fCpjrqneMyEYHiU2U7S2_Kq-Rjms5HI6xpv7ryP2D67VCRcIFM4Ep2k4lVI2XR6kJU8WvUBdTqMyraTyN0tSFBBnyiTSWTfmuzm2rPvuLkzc4wc9HLS91eC2eZ5f5VoW3Z4gcQihfHAM01x-8CrjWnke3HiTkx1YDBc-nd0SeiyC8n6oyuLepP0ZCtewp1Ag6_Zku8GxZsRERS2zOcvvYXcs6_j6tkw4hs4_GFb-IZ7J4ur8H_M\"}"
  },
  {
    "path": "genesis_data/genesis_txs/g8ZQaQTNUbg-jGeE61og18FrGqpFeZxjFDypGuhT7zI.json",
    "content": "{\"id\":\"g8ZQaQTNUbg-jGeE61og18FrGqpFeZxjFDypGuhT7zI\",\"last_tx\":\"\",\"owner\":\"mAI2Z_Z5PyLh6QUJalPyu2jXMVVfJS6vOYp3hpGE9xnC73Mp-O2byDEcHJ5QUztESpvwYX6RMkW1d8RoWPZGEmVUYMF_g5Gj_5RZqSIKBC3ga4GFRjaA-Nz7LbCwnVr4nuKjysDEZj3MfHkWhjYILj69pQli5Ka5KI1LRCfpYCdsinL6SMZ12aNPzFybW27b3bR7aqP6SDV_PKMD0WIbMHlBXQ-eY5RdkMX8xCNVI8o3uHoie1kglJbRV_bo9NELODY4LXFZmcsoRTLEMB-x2jkiBBQFc_PCnFnlGIR-RLuLtSCHhTrOUugYWC0GhQAEhgoIbH-AhKR5ZzafMSCo8wiAwjkKCn1nMRNK-CM9tUAgiRXkMgZ_NRs0fLmg9BO8b8HHmqb-RalrgtvhCkuWRwe3Ga1T2amwUe6m2Cu1wWDYr2hJmrPAt_AoMmz4j6DrqNFKkLQ-irByOtbcggnQE4pjF5rfyvbW2ODdlDV-oJme15yLIgtEgu8CjGSUnSY5dEm_huz5fuxOaoN1k-xkvhoE-kxVj9v4TAgBYLdP1NvtVYhiyn5Sus7u76R7uXAO_rhUdlqrw3UjBo6LeXsZQRxJ_jEsKUNvUdVmRVLXpCQ8_S6ycg35x2W3pnPiXeduQT2FtbsY28FEcKJ3CAUC1dT9EpohxJ3NE_S55traEhU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"R29vZCBsdWNr\",\"reward\":\"0\",\"signature\":\"NJqgd-w_3A9gfaBpHI9r_ultiYgJr5F-J0naITINUR0OnUIiblZmMmVnFg4xhiU3_KbEygv7UtOr3l411Le1INJEnlpO4eEJFJX4ue6RWf8XL5ks3SCFalZDfICD7amG9Ms4gMaVli_qFIjAODMS1hLE6jkPp2AeQB8KGS9rcBPkIWPiUJg7GV1qTDPQGEhVGk9Dt5l4J0R9j_OJ5WebWYcdSWwOCVJj8lnD068LA0SUkOBID-lsBmX29rvHS1rZCUocrQX7hpJH179GIyPiPCwm3vmCGnAOv4PTQze_Uxf2RSSAj6DBeLcyJHR-kQ5kT5UGmQNeWZP6XC4LyfB6SYM1AKNVNNKSCAhLQ6iI6QVub_XTweBiDtDV4y1YYd0wXIMZJD39CnF0np5fs12B0IVV5xdVlYDyxEibDexESB_QlrM6XCiZ-xosxqnOb8jgkQfdPivIqJ5ZM2vqcCf0mE74j7TIbLXtjopDH5dlhOAX2rjBeB3H4fBP9aVHycN5ikE-aD-bk3t_tsmfunkbQa4NOqKYyt24AkiR1dG1TmlnXf80nUX3YQ9W_-Uw-F54cF4fb9jCt8oACwiHkAXmosdeAiqhw_TL8LttlDSjv7gbCcfI6tUxUGvpjva8RR4QfjnasQGsSFfYyhcIdSE5v7pSMMXK9808s57c3z7XMzM\"}"
  },
  {
    "path": "genesis_data/genesis_txs/gE-2fjp2ncJ0ZRg12UBfqnCBb75OtAOksEX3wGZguqw.json",
    "content": "{\"id\":\"gE-2fjp2ncJ0ZRg12UBfqnCBb75OtAOksEX3wGZguqw\",\"last_tx\":\"\",\"owner\":\"1yQvG7vhlS7v6slHSQBy_2XjQn6OajpuPHltYsfKm2FO4R_RJ5ugy4_ioHOLhHzjPH64WYHlJuH63Gyy7kyodxRmtyIy8sl6VCcrEUIbjMBiGt33UFlcdj9ZlWJnFth3qUV618iSI17az9QDObK7H_NqCmH_Yq9yR8jarKWtlQcDae2GFRlucNZckwJMZuTWy2bQiihY5MpLduFTqBV2ovd-S3uxCbigo_v136tTfBazHkrFzBMBLdpJfgNG0hm9TRehExxrmGw_bacUF1SMgvEOSRwBxyWBo7kDid_sqCj2N6zX2-GOHsUx1tXIAPoJWg0BFj0ZdJjJ8UZ-4g6KsUWwan9BcAKN3XoIycRl5-3XEjNusmEy_vMhiVv_21t9pVDcSKUcA355c9wwET8JgR16eV4bXHv2ZUNaj-WkA2_Kc229wUjIBSSX8pDVtlruL845BkGlaX-6HpRsdlO1KXCWEq4YzU3NuRIapj-0B4L7nvJyId3FhDVR5m4ewsaNDiVI79McXVIfPl9KSy6R7Tv3TZ7OzCJIfL0q9qK243F7GW4mrHKaqMQcDt2_QY_rD8MaxR-GKL23V00Zj2Pdb8T6uKWsCJsBPYw-9sv2ET8Jt0jKUyc48PdjOswsMJfWExMriFuNH9sZtu5EOKY4qtmSWmvJsQBAulIR1aQx6SU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"R29vZGx1Y2sh\",\"reward\":\"0\",\"signature\":\"uYpb_3zQUnIFeoR-1u7PUK4GgMn7CM-RoPkxQwSu5SwVmCTGIVPXqFpxlIXv1zMKxMiopw5IspH6gmF-LlDEOtF9y63qOhH5_XypSrLbMJuSDEDG_K3a6k_8NIGag3CfNrr-Vlqq7F2mm1eLDVKGnp00rpjAshZKFn5_uLBaSjJgxm-MjPbS9d-o9ae71D8xekby9tdBxlgIYvebO9EaBNY3hXzAj97Opo6S2XUKa8036iXa4vLCKa1Sv-ulDjH3UkCvTK_ide7tFKI15DFXRSXuFzJ-rbrp60dujjED0na4z0MXd1w78jEVL60EW8date-bLFCWumHjVMGJXO7NJGnftV7sjOrlxsgp6TNwRZBT_8mOJe1x2uir10mXZaWaFqO_pgLvjIgvew5jTAQAGZ7aelJXgR4mkFoiHUtiCtx7Tj5TpwoBV1yoYUX5oLfFyAVk7XFi4vDf6Sd9n-oCyu9x-LgY5MMBsiuaVwBYWAu52ejEmnU1gsxjoBBXweR2bjlawNNPfibmSBiDzefPrZGU-WQ0KTQ_WTfer26BAf1VEcHdTl30LcerK6pu8aRr4Bls_E7oy_AN1D8LQvBg-Ah-c1ZFCccpUqaQKbJy-VV0oQ4GtIFOrHULhKQD6jUbvJz-AxcwoBAeMUkkhH3EE-Jai_xENKlOsvRcVsSK_sM\"}"
  },
  {
    "path": "genesis_data/genesis_txs/gXd75eQL5Yzcn1ba51nORAvb6f_surSnz3xcNlLAxEQ.json",
    "content": "{\"id\":\"gXd75eQL5Yzcn1ba51nORAvb6f_surSnz3xcNlLAxEQ\",\"last_tx\":\"\",\"owner\":\"tRKdRsXJcyeUX9ouUEZAOLgNXP4Mp5GCLwgnNyueiWKVdNKe-Jo5EgPBFcdfqQBQhVwuPNyUWTHORx_iBDyQt1rqWxPRO6Rj52V1EJ1CA-G85_oVwNWMRgFgKr_F1FiXTye0j-GirOVNNB17MKZVFOYr0QYkwoQd9msJBGD--c_zkWyTryd7BZkNj0SwkWHDxZSsGLx6S9uvjspuSYnkpsdes6VR14YW2bt1mq7ULzVVbuMR2RCat6vcURL7jBfBCElLIDrR1VCeC1TE59jC92rTNVGmzU-17ZS8xJsJnkgygK6bY4fZuuTNAfSMSLNQ3VXvx1YYPS77jaZBBCbVYMOt9U0B8umszaK-7hqltQAvaOzisvRAYNkAAhqkG2gO0sJ6e97PHV18rfd_h178MpuGrWOhgUpb4MSdgJd2OnGqYN6eggSM0OYwQ0ivKge8ti5TndYjJrw3v1mycuQ9CF6KwVYzwHAzgzXukYUYAxO35wkrQBJR86Yaj3HyujsGZyDijCsjjOBHAGyZu5pUTN1iPEmaWGMrW7jB_W42lcX2k0AocEKBKkujEw29cKmXwZaM3kWgzfqUZcVN8ylV1FcyhKcBR_k5dq2zLpz_b6e9s2awAID0kyfN_1X9nGG-xdwLymEn1tj992aKvYQ_PWykzHuKXL-a1vELEDeU2vk\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"d2hlbiB5b3UncmUgcmVhZGluZyB0aGlzIGluIDIwMzAgaSdsbCBiZSByaWNoZXIgdGhhbiBhIHJvdGhzY2hpbGQ\",\"reward\":\"0\",\"signature\":\"e4r5BhWeEOgRfOgiBbTuSNutleF66EUZHNLIuidA_2GqePhde1PzB2bq4xvoOASQ1FmR7UT6B0B1URmoJ_dqz2wfA44-_OlOYh2AUkcs7wBQKVXFqDZThvOU3hszfmTnPsbopb7_R_MDfMvk5yVUVolClhjSpFH-1rxFWFzHP68EM9l1Zpt6CDvhADshSgWANuJWcPMeTbWD-CflIKZ5BdB_gUpWsLV5GJQnqMUvcM0ARSJGOpBQlSVl0jrE3Geuz8g1tQSaXf2UzY1kGkI3tkoTYeia7ESLzbHvdAIaBt_g7WIPJfQhzivrJ0ODVfGct3qg_DEU3DzRm5ujQp2mMe-_lRQ3vXrP3twKcq_cOY4WwIki1BJcEp1joWoq2wJ96_YKhvA82CRPV3Cd-fvnYORLVGuejtlR35ReDF4pmjhHHlpSgV5gV73P-7voLL-90VSMnO_uzb3Ym3BYIFpyjrodKE-_Cr_RFh7YSrscB81GM9tsodHu4ayMckBdrYpNQyqpDlhqjpUghs0bcEWhcUhjZGzcvUGpnUKxRBN6pO2v6k28LmDbBoBmvdkzq1NcMnSnzcyiKESzxLuPNSo89Vm-YrMVVMOPglcsK-iC3E31OaFu8XT6crT5EVUfTm758zrCKcTAdtKUeoic4PqoheYJTlXAHAYveUMHck0DOzM\"}"
  },
  {
    "path": "genesis_data/genesis_txs/gbYMogbLVx3rOmm7K-o3nfGPKauLMLkGMSXcKkXW13Q.json",
    "content": "{\"id\":\"gbYMogbLVx3rOmm7K-o3nfGPKauLMLkGMSXcKkXW13Q\",\"last_tx\":\"\",\"owner\":\"xgZWLy2a9i6zcZIO9u8jojQ_NDfqZlEaT3syvYm2S0mpsZlaHZ9vV_akOv3QkddwdvD5GG64GufyBo7r_a9uYoYwkTZvDeZ494D5Qv2L05lPoEc3hBKH45KtQwzhWT4ly5GTStaKCAZEHLL5340l4P4SlAwtVod_H5EZGFrHZhIe7TgmD33TLMadDXvpLmD0CfHwZ7JtZyQR3a-Keg2oM0Qb3T_I4lnlSe45Tt-7OuL-cUHPnnVruRsS-JoMBM-Vwh2JU5qvkl56F2BesJlT997KvPWsCMUgOmhqr-1v3jXY2vopO1OttAkuq7riogcTWwSHvv4DayIfKzgv8NY3x3o6SowQISFw987HKDSQVIOiwfOZwaPUTSCI3AqQmWkO6hqZF_U6xeGvSH-eiN6X5T07dHNQHpLqlwlzLlrJgdbCJWx_-7JfTeCFis5hS7pe2t-ylejlecXbnOarEtEk6z0R_q_2FHierVjKpyeda_gvMAzObIeZoszVnG_IzbDvepqqhdpzmUfXChSeFMjqZ2Cb6DCSeLJTg-K-xMKEuDSIOblAShHuiPbkPujmdC33TRS4DVZWfCmL3gT11rdktiJcIebnnBdjUJSGFYV9i0skpWq4xqc4AeQZIO9zwETozPZO_EdJkfsLc0Yv74lZw7CCTyhRYMNHqhm46xFPLuU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"U2FtbWVuIGZvciBEcmFtbWVu\",\"reward\":\"0\",\"signature\":\"CQvCUM-80QN6X0Nmc7SL6hzlX-qQmaVLakkiD8Zzi7ERwDXBynM2lP13CiJnAXrHKuRIldDkqAxqsPp5MVAX0_MO1wI-X8QY4yJKCDYd4YkE8kKcjGIoVbQTw0VfAemqd11p6JKhIkird3qGhr_q4kQ8Q6WtYYnc8uuOmFVgv48SsdTyMsAqZ5wGF6VWqq3O-lLmlHnehtwBJ7z9OzSlzKT3UqhB632sPA6eiK3IAwTTAdk4AX7J55oubA7M4E83s7xh1wqFqS2S2oRVeLD6uaXYGQXs8LP32NU1F2oVcB0L4sJ81w4MZa6aYu3PXlq2iFC0AfgiYPnHzeg0tgTuWx5KS0F6twCPDMy39ltHfSKOzC-LwwHnsi-br2KxuFB8zZBt7D5EDUVUF6jNn8S2O4_M2FDvsPXOOYN0zdZLFSWVIRPZ5gj31bg0HBnkMWRgjOqVb3W_iKzM-Y25Drf1K7EORHF2uhjEE43D0h7Fb63Qz3CO2pVVvLuMn3xx0Y6hIjAGtfM2FphYy1XhGuY8pph0twpJ8Fe6oQ8IAG7FGCwjtft0UTwkn1bL99pKmBSa0pj9SY-OkPyUZUNY2FiMwlQbxjHx1ZCS_jMqICkBhTRIx7K6OFeOkt-aW3Oz-9BhpyOJAYdJh4N5T_U04H-LzT-9i6Y8DbWiSQvgX4LquVs\"}"
  },
  {
    "path": "genesis_data/genesis_txs/gyG1bGFt7qkMyUCrKiEfMzMzc3_3PooewqNeJpy-3Xk.json",
    "content": "{\"id\":\"gyG1bGFt7qkMyUCrKiEfMzMzc3_3PooewqNeJpy-3Xk\",\"last_tx\":\"\",\"owner\":\"0c0-1JicFK28uZk_J21MRmRCdrA_763bwvvxN8Gmque5-OR-KC4OvNQzxJK3yHR0Av9p8_ajbGzLrcRw7ZWAyFvN0pLNLf-LPPD6MZ__HFZM7ICQ1AMwr182L1ecBv10cjAh8VJSH5VHthzCd4miBvoOqDyIYVeHqO9YfYi9OlvLYNzITVWkVznU9nHUQhmafHW3vudjmKtM-oKPbA_DOX6vl7bPq-Evd8M0f4ZSCEE0jkp8TvbZDtrszS8_Gdac2852mt2gsjDsjW-Iu6ikoquMKHg0hELpk3YcCH2EoZBfuAgZ7bKTkIYRCGCZ_Kphq5mB2IzrwiRwkSfoO-ZgBhP59Vm7W35QAeRLjBM6LBi4ZXKLij3A48t7jl3ucvfz6E6ywk0Jq6pgXI1tQ1vZVjyUTNyzX7kaetmx0O8vcmkc73SHB7OsUOAetvWOzIMhjQeWIX1wqorntk5OcNp1cuS5Ym_V_wTPx0aqQUadp6bUm4SyFg_U1zW-Esybe9FEAvwTtI4McT3V8zHDdQBaBowkC7jyq7KjAx9K7CgHswdlGnFIDiHfX5JymvvzWlGRldkhQF3y1_TfN9Y48FN4Ohwmgf0J3SW_O9GwWHJEV4oRzR3nPmtma0br786t_GLhKT4PzAQ6FOcBXuQ9rlxTmNDgin5tzI2jtAO5_K0VXPE\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"Q29uZmlkZW50IGFuZCBHb29kICBMdWNrICE\",\"reward\":\"0\",\"signature\":\"CIyArit_k0qRSlbulntjvCOkvawb4YFRoeKFgHBUezVTELMQvItfKI11qjgA9KQw3Sj8I8sDY4AvWHwrZkqs3VKzm1_JxX2aQKIMihUdtHXpGCAr72NFFuyQa7nbOgKohzw5kDxFU6_RTdQgqcA-jEjUsmYYay-RAPYd0i5wVtqiOGAyU922ieIk6LL1dIVj2GI7SM_LZolOCMLe4S6U6VsDOu_iq0agyLfaVyi2QEvLqlTs445RK3oix4INmGlxg3a9MuVialWHjUfj_UYxSBmT-uXy2FshiysgHl38B07s4JitaGCfovZPi1F399So2y-HWrGpoEzqBQmnMNYTOGNKBlAuQGD9MDrOXMTD3xUTm6mEYdiDxzugGvHyrzxLFEnOp-01-OyzCysi-AiriROIV5g0MdJrZcCzU28JD1XEHAUaw0Eer9RIp-IYOPehHm3YegD1jTdCBmfdn00Ypy7h8PT-krHJw0vFM7W1qXm8RpMWTm1DfbDA62vY4njParPsoG5W0LymHRkz5iefNCbHsClzWG5v4Jpg7n_30EE-4Dgw-YGTFAAO_msPHDQUSSWttv9pWZKgw4TUtLZxwltpWqsSSsOxu1ibAEiBewnQiPvXYZssKJ4Cp1xlBPuzsx_nsmufYRGvj-GCI7c9WKNq18S6eA59r2qzrYgrxac\"}"
  },
  {
    "path": "genesis_data/genesis_txs/h0MlFXsvtNQlFwgTh6y7-gjXEj0CbGECgz77EwQsca0.json",
    "content": "{\"id\":\"h0MlFXsvtNQlFwgTh6y7-gjXEj0CbGECgz77EwQsca0\",\"last_tx\":\"\",\"owner\":\"wUpjGqLO31Mw0RTHKERCOVqBYi5Cx6sTe-BY0rXJ5izNdPSs4vgKv9e4gufZP4VTD6LEONGdjBYlWYH1l5S-H5VHxIqmNwG-c7tG7R2wm9Th37DYK46S2qa-q6QIolMXv-HQz37lW89zOIts5LyDUSa0oyFrXBmMimyArNYMiN-I4_JopbyjzSUaUY8FUeJ24xz8Judj7xUp8_pub66s6qA5ZBAlUUP0LhvSJX3dhyw_LWAKaGi6l9TkUGI9MSt5hdnwRri0TvxDZEyvCcDojDnvMH1mShcBZevYU2HDosSalLG2KOUDxURk9Mr1L2TjIo4sIvLZwgf3n-eFtENv6BRUCqDXMBSGF2P9nSJvVhUAS6gfuyJ0JlsyjP9eJISE1_HtFXvF8Z3xS--DH3g3OXBNX82tuLivggguoBGV2Nngqa9ZSIHwH-v2ih4sn7QmopmHBW-KAvg5tkG1alBEjrSRnL9ole_qgRrVbuIPqo861KtfIYFVHssLd-dHWog9U5-9jEnsMV9soMyOf3hMhw-BfbhpTIp0QSr0WWfJz4TQrjG6Z9oWhsLvKb2rASHSSo8pWO-pZBQM3moG8bnUwhQoWiF0HlYe8oqHM9dkPVnk1px7tG2jCpA-y1bCdOXaO-uSRW_uCQNrECcBiPY9tleKOQ5h8_ZnaH-XzZy9kpM\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SSB0aHJvdyBteSBjb2luIGludG8gdGhpcyB3aXNoaW5nIHdlbGwuIGdvb2QgbHVjayB0byB0aGlzIG5ldyBjdXJyZW5jeS4\",\"reward\":\"0\",\"signature\":\"gp0nJPq6kerVs05RUwUC-j_1j_miV-OSkzFi-wTg47Nxx5YOOv6LA9Ila68FoMBQduSMBw8FuaodQRgQrt56IWqZQIGEOYR34z5QkW_JUR2S36orQEt6LX2JG43qzVAKx0wdr-zSahj1nx-MIasVQmnMMYRJqc7oFmAIdOuMdORR7DhYcO4BYfZVJRschy8jX_15BvcD8oBmGw5Q5s_KrGgG25HGwHavq-lXKJ0EWQN2F3vJP_s9x_37v8y8OoGNBSu7oykytbw-YoJlLdpo1LyxRhajWyj24u2hqfiGI4v5bV1gxuWemp59eINwQAH_wwSdGdHKqvoAv4ot6kwS2Du_TzwpgmXni3DhFeUEjadGAzshG9qYjT4CtiCs0H9vJXzWJNo_J3Y2wIH_5-GJMubXh73-DN7cQ1YOuVVJlz-AL68cu2nTIBqGuuVxo9-jJCLZjhzD_-YyET92Zg6NqldbTKQ3Kok86301V9roIIheOgrUzicAWG_pFOQUHY63mAk_M8LtykCydOR02yARXfdT2duNMPnSBA9rY4PTFwoNpftea4VOaOiXN6ftegoaaMiuytnCr0f--Y915VbUqOIDmlUgO1cdtEWWedhRmtcXdNjoK3QqREUVGwXIT6yGiY54svO8mFOD_TYowMdLaPeGqUfLDtLXw9gau8sVJ_I\"}"
  },
  {
    "path": "genesis_data/genesis_txs/h0sgGEeQQcmSxg8uyiCOigWtI_r2ex-58nk1xso004c.json",
    "content": "{\"id\":\"h0sgGEeQQcmSxg8uyiCOigWtI_r2ex-58nk1xso004c\",\"last_tx\":\"\",\"owner\":\"rLUiJAnEBhIu0o6Xu5NbwsrtGcl5biezgHoXWYc4-4z5b-6CZA13FqkGHLfzdBXCG22e60HWq8LKgfYE0fh9uKmJm9kO8pk6Su6SyKssfP7ZV8s54mrc5W_knsQAYO3ETp2iPSc8TdJctc8dARsYqniUDXuI2dJqHr5KS7m7Jfe9EWRZFK9ByBbGsKEykd9hbu7WJ2y-x4CFdEC6Gc0yMpKjBUm_QmAOTvKP87DnQnebjy2ZqxK9oQ5Iv8s9ovdz5_q_g5oGluH9pS640ZAJ0UCFywNudLVlVV_UaokOXP_nrizlJpDfEL-qUD88RGt15mYl1ZftOE93rCRkK5T-FZn4-apghDu_HfcXb3i2rCfezlcer3zf0V2DPswsNK9EYWsviQvXvWulgLiW66cHxdb181yqw0ccjqzxXhNFFsSUYoon1GLP5YKGC1ApChIwpjYgRFOKx585DVAuQmeOcGb9eTgdbtaczNofpCYqGhOzaMreXyz5ROmGZQKzr-iqVoqrceOz-Xzs4kB18oLPCMczcn-vZur1aM1KyxRIVe3VRafFvppMZJBzRPbe2Hwinrox55zXEEdoUZUTJOkgrdKdkfQB3lT3eQhelR8nsvekohLW2uJklvfXfSWprQjZzbLXmkCAlYXbaHB64oEX9lcPTOkI9AwfMMbJ694h2SE\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"TG9yZCBQYXRyaWNrIG9mIFRoZSBDYXN0bGUgSGFtYnVyZw\",\"reward\":\"0\",\"signature\":\"d-N7jHy-AZ3X2JwKyNkx5Yh73NzStUzbd9O8b4lqFBDc3Mx5OYTOG69sjKMG7kMXAT5aIjvZRO6D1qt7SNVxq9sX_j84e8crZfwkj9qqs-QJ0R0ZWoJ39Ej6j86ieKNJZ8AhOyGQMjKPXJkvI_rDGmw5bP9VGVfDn7aN7V1PitthFg8iZVA7CODQLblUU07lmzuVH3Xk-xo4wGHj0LymftlJUhdGHMH3GJ4BuYJ1tPsK6MIe5KHT8Fush5mxLFUkQhYaMVsIWaN4vYwBVo77lssLMWpATmEtmK7T9iuYaoGZ-n7rq3mhCS04QIshYWoqzfNOYeJOSHyvepp0j39unpA0LKstK_VWS6e_heSpH8XqHF1ne1OUWhvT4WT0wLRtPQ3QpJqYUTBwMhIXmyJixnScvgqqbJDquNKKpjx8YR_zqTdHaR3J3IfrCqQzx_YodNhVN3nKjgNxEPErELfF76M7c8nQnMZKlgIDdNQkIjNFdRGaC5m1s4n_P2kqIQZKCGMTBueuYIaK2CT0Dbl9SClnzxlbwpu4zUQCUK-s5ooPLPH54fmAQRIDiFSq1AHm784jy3LhChIJ5z-pfewEg4VjueXKcycvAXPH9rcKCkoWz4urpzsXSoP_qs-_6ImIlPrplX12F02F8j2seajQNUXKG0oMPWML9mf6FZa2HUQ\"}"
  },
  {
    "path": "genesis_data/genesis_txs/h37LQjpChpTPMquvaxpfFeKt_7oAB5ElDzsdbCQ61n0.json",
    "content": "{\"id\":\"h37LQjpChpTPMquvaxpfFeKt_7oAB5ElDzsdbCQ61n0\",\"last_tx\":\"\",\"owner\":\"ti9KzFJZX8RvQAEu5U8nmMGEM5eGJ6yVhayWpV3G5uRPXlCurH7w3Xi35pagW2yiv6uZMA-DQam-4KQzDYxoCobZWta3UeZDz8L4EWuBuQIe0jw8AxmF_5NR7bzBSGngpUJnD4dRaVxxDnIkLqqP5Y0aQTcPUpirhauG72FfUf0YbjL7cos5Y19xTFZZ8G0qqTjH3nwVvm2yqe2MPzGrw_cHtsXoNekgPW4mIQR4PGF7J4yA86rgWUysOAeDFQ0rcxB8wSeZ9Og5VPLaRhHo_5OvVoeG5PYfmrvrnA3l4p_T2Y1zP9O7dZEk4v3Q1ciBBhtqLBJz_ijPrbAOMtZds2TeOJyxR6WvTI9sUcDq_Q8Z-5pPF84VfzQEhds-WO_e5JSXA5EjyGSb5v5kA_FGbhefVkq0rzDVknf4s8TEA0PxUIqi874jid3QtUDW_QcuY9nK1sIuDxIThh-JLndaCu1P8gmk7w2N07elqguKeLS99ek4OyUL9rlg9UBMJzapKCUgqSLMh48FYNpN7VEizpSRfhqdpFMCXzCZRuwIISN_33i5E2WHYK5GAD7t6O1AwvdAcAFodAzPHfUp_G4CN5GDG67pO-588cT_NgWpiHubL26zrzgxWhPf1Clo7Io5XQt01fQfvd-u6QTBvbCSVMGewKc3QMMfVVNdjDJDo9E\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"Q2xhcmE\",\"reward\":\"0\",\"signature\":\"LzaTBiwi4IwBZNvM_PjKVeaJ2f2sWyUCcd08wYF5T33Sttfe2M1W11jwL7RPFZsi7GvHiQBB9ukD2qgZJ4hiA36QIKmOEHwGK3vpZDJdg9CnkGJ4A0vQU-Uik7e60LdYp4ZZWHRTOKyqiPeGqDHj8VqSWt8KPBeTawVdmsExotIvhfye0su5RTd1yFNZBYvWP6KdOgcFkZOxtZ_kD8UVtC3Jk3YPlA-l3lqphJN7-jaFPKbFbYdSsTbsBeW3iHasHs96reHU-ItI1oV4ShV0c6nyRJP3nvHlxkaayRskEC5wG0XTMMF_d5lKdtvyiNcAihD9u_blT3u0K0yBR1OEVi9mokKMTXMbSx4AkgyggKnUmKeDcisXnjq4MqiSMEPkvleBX8a1CymruFvRRjxGwB8JzzyLbzaP8wZfAr3nrtrFCEFBIbATISXhhvyGq7-Ng7BjdqF8XbcbDPZGg23fKgfgXcPpYi1fGDJGWKotrpOvVg7XkcgT0OUIXjk7JaUyV-pweDFRNaVO9XSpktqO65S35dEXRrUr7rH9p5vKG7SKKq6jtkwA1chvk9106MjSAE3sfizfzIuaFKF720v0_DLEpR0iCY1aZMpW7JMItM-H4F4DyGJtpiLAWLGHeC5RZ6i-3NlaPJ1_j8WRIgw3-P4OY46TyB7gt5OEW1qHzeM\"}"
  },
  {
    "path": "genesis_data/genesis_txs/h7qIFbn0LoexuVwBcjKW7v5A65iQDQFYZUQjuowfIbk.json",
    "content": "{\"id\":\"h7qIFbn0LoexuVwBcjKW7v5A65iQDQFYZUQjuowfIbk\",\"last_tx\":\"\",\"owner\":\"v1SlP_q8MT9dYcqMYTCD0lksvwGGWn-rjY8XMjgh-WYW7EHbpEvBheTdY4AFeta2Fa7E5rlXPV21VTci_ndvtzKJrBM49JABJGKM3hw64WwrFmmTJ5WaLOWGPBJ7aOYeXlSMkpOEYHz60nvsI8qcYANqQ68mPGRB9h7EonNYmRqBwAvr6CDeMvELzzpoD2aO5q14kUBsrRMTQBkyztdiehy00h-sG1gv6vD7bymC7NuNjHwOj-_AP68uyqZyMLym3axIpOQ5wZvFFAiu64CscyVXAu9uQa5gcptdba9k6mHBpQarHiNgdZFCuJAw3OyR45Esn2wa4H7n59BD1HfFeWIs7EuIDOd9BiOj2SNZAxYTz3sMv5XEwMGbqUS89YAOK2LORbaHmAsQepclcriRm0Akg9bPwFaDuboZNyoshFY56hMqwyrOEgEud3SovVHnHmXMSRGpddWHTjEaid4PO6CFdhJ5Y-iFkhp234e8dbsmBdRBqhGB-MDBByZ64UiOjyARTSuL-SHMmrFzhcbucqvvLs0ONlSs959dwXuiwzf4LjJSP2yu5RsOIhBqSIi9oEBHjElz3rwrzl4XQc38jMgt_7rTvNDXMqgF-DhlIMJ5v7OF6RSDc6ZJOdYsRYdf987IngwKA69yVc4TDpWqyy4F6sVo4HWA8O8wNRHvZWU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"QEFQdXJwbGVLb2FsYQ\",\"reward\":\"0\",\"signature\":\"acr276fMzmZ_RUglI-vsQINt_KPC4fCzwGDyQuR77nwBQu1bwSMlflmy6v56vthCYsCDEWQDUXSCGBDGYOA33rrAYc2PuzUAbokFtOL7JxnwTXuHrAXaHXINc7dDoV0CCkA2zFVLeFlAiSq4j-PO__Bhh0mJcQWhLYCuVWHF9_plRvjhmL_6aVUNIR1IV_ttXfbMYFcdOkhcgZIkbLHCmzhP0-Kvjq20o5DBjjANXE1QDkFSjee8ruqFV3LrCn6X0mRl3MFPW_R9ZDE4oOmbTenA99xpBD2qASnwh0V33w9Wu9us51Ql5UoYwA-GDoR-ncSkW3HsG1q6ux3DZzdFThYKwBO_cOFJHejOLz8IShR1257ShlQfwvmRQPqQIF3WcmQexibQYcajDhJ-b6PiTO30amq9t0D3hamzR4r-5a4F9pVk-HtiP89ziRgHuFyPq-Aa2Lzygo-1oEE_5RAkEP1yzkRUlwpCp7IKD7nr3pdmrc-qILyH8Sl2sypVNKsGFZ1w4iAHb8E-L3M-w2DbcXr1IijXjsfZ6pT-xUyHFKoZbapqMj7CjXGPGFWvqvW-2ZB-GcxX-aaoWzDx8e1ozAOcAhqaADQgkZC0A-M6pZZjN0Yyz8NGHcaXNiwU3ElL8lmIkuX2tYcOdPld97aGvRxdWNxprBGv4bcgwXE6GIo\"}"
  },
  {
    "path": "genesis_data/genesis_txs/hRTkBAH0k74HlmlWXTWmetXcIFXvM_Zrz3i1JXULZSM.json",
    "content": "{\"id\":\"hRTkBAH0k74HlmlWXTWmetXcIFXvM_Zrz3i1JXULZSM\",\"last_tx\":\"\",\"owner\":\"4WB1edAYnquekrH22_Y-jDcXZgKlM3KPl9OKwISqSIcqbLuUvkso8A4b2yLGK1K6yCTF36btyw2diB83s4eoHSPJj8zIQlm9fH4f11wCdMufdYTSZqGoskNvenigQIPMtYy7sOxpx5b2iZQXtNeC4_dwFAQTnDd0DvIxmvH2yv47Ah1cLCSMVeHBEgxp3dgrgcisGQXqP511I14gJS8MINdD0BnOv1L3r8TITVHxZ7X3Zn_zUZAnFaySGmCT0X3yxygHRSB78XfrL-ayO6wy5TZ1ySlx1tCibVjbnMGex1ffR6exYad91nqaEvRJuud6Dqn8iWQnDAdI8eiXRTez0FjWlofxH6cQ4D3U06nxb6_y5PAJL3LRHp_LZhObIQhK89QZMagwV3k84D5EDScLiIsLyVrviDcI_fJR2r9wNWnr_zp3rVRG9IDacVu4KS94Vx5iFY_Ww0cntCmIjwo_dQTtx0crkL2H984sVnGvDFARNaPbZwpnDrxtVZE-YI58Ono7hZiKQlpiqW-7ts1BsyHlMCsNuKhc_qtgHJJqjUfPSC2Vm95Qu5UvfMylld4i6rpqNq6_xKYdhbD8yFHu0jmOb_xUr4Wvn_vxGItNbFa2VjsWcUHEXk3ak0lofq0f3v08EZGF6Y3vM0tSbTsGlRVsvt2bGepEm1psNCA4CDc\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"dGVzdA\",\"reward\":\"0\",\"signature\":\"VrzyrGN6clHXnkt1SLJghRxqw2Ij5xUEdImexPm6OtaaNQmJBFN8_jw_3-xuP7TUp8PAUUHmdXQoibfx2JnXI_C83Qpg_bVeo0_dWbIESv3c6zKFCBeiXTXEzvyoXItrxVwvPNflNX7_yNY3lnzkZr_ujNd8TZlqPKGd8YvO2r6QfmjMM80Mx9RLOlcL6_mlOygo7CRKetKAkDBoi0Y_rW6MIDIzVhgh1O4twAwvoE7kBop5w8d614dzm2KLrLBXmUruxU516CMa9KO_pg89K56YIa_gXmmbBFczEGYGh86Hf0QpLp5MnKwtvZ3c5FvC_-Ed95Lv8C5mYVoS0JbQeT6dlDXIzgklmWdK_niMlPCLEMbIQif_8Tg8mxwDmg3QAZuU934ywB0aB15G_bnxaXgJE3ZQ_S1O-WaN3AzGZNEs4a8vTP1Zr04QMOw4PIVaF_KDXCKw55HprXCwzHx8i1SrqoA9yPCAVaBXoSOvifYzXTDNLqPZE2s0qfkc2zmYrIkegiv-7AC2AqSTfYSNMXQ3XfLqxpXZDS_-O7ifGIOF1HCSvcjvYjQoaVwXNiFNyJKOaG2oIYddfwZnpXHSg_yRAVpCYsTYqcaL6qqQk2rHt6Kdjar0HN1jJqo2-cCVUZv1Q2pXia37VOJjGJ72NQuQHcdcSEBe_U8L6bMxfBA\"}"
  },
  {
    "path": "genesis_data/genesis_txs/hX6nohfkKZ_9ajziHJ6g5V5cIe1EX9H9rg7eScK988s.json",
    "content": "{\"id\":\"hX6nohfkKZ_9ajziHJ6g5V5cIe1EX9H9rg7eScK988s\",\"last_tx\":\"\",\"owner\":\"2VcvMmcZD1ZenGi5Ii9JE49VLQPuVbk2HR3OGX5S996zb4Rrlsm9zM45g-eZSgZ4n2kzOZFRhILM45ue8LC_TT8jiMznz27G6WEHEvob0US1xgaLDakdr9aS5Ad9QurC1rWT8mJq2794-FEGPV-TExMHkcHhsIPx8-mo3acXzMV-aqR1sezjoxh3dIodxjyt01VLygy1cPmL7kz5uYvWmjxkShX0Vr47zZmJIylI6qxso--9jOyqeTLDHmvGGurEDM05L-YV8mA7rjBjUgYXxeQatjn9TbNOFcLM1kal3MGV9ZKoyTLlVpZTC20uC3HiCalIt2eEqS97_TPsZRolceN5PMF7IwX1zQQ_aAtmMeJx6rEWVu9oRHq5vAdd21gMXw1SPltbuotoprK6H9xUlRm8TP_OpnyrsfUsvm_bpz0yxWVP1ZZ9iZh1SAD3JPG4YA4Dz4k9wm7VWrq5U5cfsujYIgYBdeU4PSRZwxk40NMpTZV8dJYXCUA0Ok4qxhsV02e8ZEEzgyPQbDbAF83dAuj3p9BHs-OBHa0KEhDGXAmQk7gnSnhr_zNOWGzF-_WsYlrmzlTLaUTJ8miMI6kihNev_RaINOeEgN18b5-7yThdH32vNQzMOszDkN_xvhI54KuAdzbU9PEZPNgf1XSHkQ2x7BSQ8zZ3ACjabpeyd3s\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SGVyZSdzIHRvIHRoZSBmdXR1cmUh\",\"reward\":\"0\",\"signature\":\"dP53_-TNo3vL0cJDcLAy71rNZxmOZl7KKdw7QdhDTkO-05ElXGDyNG1BngzxUR8zG96TanRPC4czeyhDjiHFWreBUqw1wmy1dVLqxQg5zndDa_zAHL9lLqqkM4PpDVzQLoNit-HsxDpGV9ciRt3NBFsfrktg4OxAIpOpm0quaUGcgxUwmcW14kp1h8nQOZTY10z7Cr4AR6MP5DqYMruQWQdhRymvRgAS2IRvRjsJcOwdktcj-xXjOOOjcMYOc25vaJDuRBJxoL1gnOzIY0lLKmgIIjR1i-qoof8qlh5hdYoeUkDxYx3MFIGtqOKtu5cjz6IQfZ0R5CiQAGQrN3aWd8E9l1-9oFim-ey0_9f-hpP8HfggmKX9UtpZShrK-p40DhT-AJTpmctpqJMQJ-OYNbY-SqnYmTSCjIz1Yiaes2H9SVXe6iTe_FIXzlsBLdOMfhE0E0Lrl93_8JAp07838pFyO9EGQK41LqJm75ebtkpLzVDavVkVdWGDPiyvZw78yKjlXLMXbIaMVOhJfbbOU4FQeE-2LlrNU4BN9n0d7PZ1oEi4L93oDK6cgPcH4BtHhxiVp9cMLIcmNGGFo2U6if08LawZFxNGn3MlMEp9VSfVG-M6sbFjFT79NHEijg-RbjUNtULOtgXCzPhlTJjiFRMT8uoKIUO3QHuh57pl-2o\"}"
  },
  {
    "path": "genesis_data/genesis_txs/i9xaFWy0avtyCCxQdmWfGNDgh-PaJgIHkNK1pcJzmV8.json",
    "content": "{\"id\":\"i9xaFWy0avtyCCxQdmWfGNDgh-PaJgIHkNK1pcJzmV8\",\"last_tx\":\"\",\"owner\":\"8Lj8DI_mf5fhIt5lJ-Z_-WjaxU7wRc-C9NfTGb5WmZrMFndcncMleRhhyvwa-OPe2_rj7bqnWUTScP3Nmp91Vm3aB_RgqcT9IPnEE5_D1HNgZQZiL6JQdNniE0Ikz9bNaTIFT6ekQoXPGIlSyYXSfAN8CW83_9lEw9Axb5jtePwfzJnX_fP93A3CUGKVHaOuyRNudYRJigHBXG5Ba-EunQMIuUVuUpPCf4yh95Dha06a-_4gQjX0pZJ5NlHy7B9SicNQEGjSfZhA1AP7NutUaZCIm2kkYoV_FSa6NoLxIIUqyIBFJ0qhK4wej0Nv3w2FxIOzcE40gkR0C9PJfRsdjL7FO8RI5ZODCSmyTPiIj_WvttaXkBo_NMr6pGhkMve9ndEAzLzdC-vd3cqw8WKEPcXmtQjPdBltYn-Mm1KORRvkB7BchzJBX04YnIFhsn3YgKu70prHKonXbp_1EhqptooNtqRNrBf2MDE19h4iBVxzd5cHoRk6X7inRFoOAPOaiyoly8NlN3gDMBfpbxtqbmXT-r6AQHZ17_GO5DNF__MUwsaE_7hQItViB8WRjZ24wu5_vsG9Zb9Vfsk0e2ya_lJFoA343C9BqLmq0K2yIkeRsAqw0kAY6i1ei9WaH8kwI60ilpnjRx-wFentpbUfFPKPCJbUCQqiI-_GpWs6XP0\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"dGVzdA\",\"reward\":\"0\",\"signature\":\"nlKcMUX51mfbgsvdeHqTEF3lZxkeFI6R72mg1zC_7rvkVTvzmHCgLkMwTOZxSfOLJZL1CWV-zloJW7_5_o0HNEsC4AR-GSXOeWZOgxLNwup2kvOGF4OXEZm_ycyV8d7km7DbgfPJNwTFxan5GAPjFjbYjX-Yd4gxMVetjXYHNSno492YLrMTCGGTLf19QI26oeoRIlqQYCbzvVNMMrj3xv8goYtCzJ-HaGNaPFbSn7UB2Y8etMwCCTP5AuqRbuTmKScT0lCqIAmz1QQWGYghGjrpXdkcVV2IcD-i9Or12EUaL0gWJTwJ0uYmblzea6nqlwilqwRobSRrBwLvzJXLDzL9ZGtP-Ir-P-TBhAgD2SOWRjBQlUWMK_FM1pUR2-tGCWp3plM2ebjClyMmciKSYuM0ZK05eQVXrc8bZmZs2LtjJproPgirvcNqkmvCJ_8S3L0vP-N6tWw2ADzSq7qLQz_BYBRzjLkljuxTTbJ-iIJMpPIttl7c6I1hu5iH_li3QmCcW1Q3pv9kCeaDKU8YqASpUU6VUIhmDgtll4XKOK58izmb7e9ESqkd9qYwHn14OJCe1uW5XigdwMHV9q0fH91kIX9w7lbvIOTGtUCkvGcMhtVqZ1kL5guX-wZGTQFjaRmfpjeGQmXKKMdWqoEDbmjNRCgwKqKPs_Oyzx13da0\"}"
  },
  {
    "path": "genesis_data/genesis_txs/iPb5JLzNajAzUNByVeIGSEPR0rzGOV5iIYjWpi99APQ.json",
    "content": "{\"id\":\"iPb5JLzNajAzUNByVeIGSEPR0rzGOV5iIYjWpi99APQ\",\"last_tx\":\"\",\"owner\":\"qhVpiajHn9zVnNuNCt6uPsrp1PuAfQ4DwAFsVexQv2VkCSlhNcOlQmVvBk8sHf1rdw248to6cHUdxbebCoFG0gr4jbj2FXy__jtQP7KMead7tcxvZONtNogoiW-OorPWZAGt8lbAYGzER1Ne7HzeJeNcMR385xd7McpLNKpOhkdYlLPwp1XI7edQC4txFvYrHWtJWiG98hgZRKUfT5vNIbTYRnPCu_dtt_ljhuHDoakjUxIJ-2ui_fXIdT0EROZDAXNsGWv_xCBbSJK2rKL4PezXHyvxsNmUljTYjev-tU_hUjyR3E4jGD4p8ahWp9NZ41yDKZ-Sd-uP5ogMGiQXVmuu-2VwLyTspgpv8c2dXezPwHTQQ4flYaKlWZcV1jh3RUTxqjDYLzp7fFTXnjdlyrY87Xi0c4BWt_VA5mmEVs79RWClDANqsag702xFPsqNLMmu3dXV1s1K2wackHnrK-nRd1mDcvfdQG_9_DCR0GgW4zeEXQLsaaL1Sv5U9cdJQbuUIyLnPXU-5Oe0wFKUhy2AyZ0zX65Q0Bicc7dSFfcyuwZti93k_mDh8to83Nf3auqGjuvoF5H1GlG3sCK00P-yRcS1sUxmWi-jEAO2vV4j3WhUxpQu058txO-jcppA2_5WihgD6ygXNFkLbbnnRayeER8HBzUu228PA-r2xGs\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"Uk9DSyBTT0xJRCBQUk9KRUNU\",\"reward\":\"0\",\"signature\":\"NEOXT3fnk4EjGboj0CoSu88bjFQRYnQ0Xk9EptCVuYz_X3MuZeRDepsDwHSRwsJFFwU7_ydAdCrRBw4LechA-hBWkWRSbGz7qVD1Sm7e81gGOw7JX8stP5zfFI5HsjVlFRZ4nianghfgeNHpT4p3vB7lWJ7X5tEmvUMN3AShM3GvdBVpkACK6bGYRTI3c_mmri1Bs7zmRPnx27gQTnrUA-xL-J30IbUZKsknypM9YJZJLgsi5JqdpzK3nNIiebjBjYqbBLOXjL8SYGo16d0yVWXGDeNp-F5v5OWnv50gNlEumcXWhf_0faTCpKkWDNye8FzCnvAlm80ydwb4yAOLYxAWvzbMvtYz0-A-SGJdBjZB1ezVVD3wzrTeKYUaO_Hj3LYC1ynvFi4KsafzVlZp2dy8Ts0X6cmO7LUbGWyA8E46AwvojnD74TQYFSdg0e9WAt5QtLvaZeN3ymRFeSrDSa0kOZVa27bU5fx7EcLCk87u0At3q38H9q8oienwbQrTukyXNEIRLEbuEu6yPtdU0wlPhx9LhHgwHQmRrUDRaQNpNsQbdF8g9ZNzaD5VkGbschdgPo2OOghe9JGQuUXdAZ-H0GNAKAbN7x32ib9GUJYbHNKkH04qhytr9flSgbMZ_NgqCsHSFftsvxxjtkdU5smnLG7bwJnzX1_YVwC4ZzY\"}"
  },
  {
    "path": "genesis_data/genesis_txs/iRF6OnneKHJLhLMdCXpo6LsxVyWIGyklFEpu1bN3cyE.json",
    "content": "{\"id\":\"iRF6OnneKHJLhLMdCXpo6LsxVyWIGyklFEpu1bN3cyE\",\"last_tx\":\"\",\"owner\":\"uFJEuFuDDsVB4zF30pZUf22RaIbb4q2tUTwhCOGUxkaP7CSqD2HmVaNufPupFt_yr-0W_lhqfcSWl0IdYvFxTGn19j0a0Dvrv8MHyy3-AWfMcEGxckusZjfPycUXWrJ7Qzc9PsiV496P-fJ1aUwv9czwFDFirMcnD7lNoYP5bd48FqjLzpCPB-xpQTiu0CGYQhMqgj2cfMYmelPqpPWanKQY5W34eY32cMc5VvA-9pkj353mBpxPi4_bLkKSOxj9zcJ_eGcN0CuiUmYskA9mXENNZZnpuAbPO04OXUgjafQfTAHT3Su3Km3UCHuZE3ThiVnV8F5zS5VmtAVUHvVGzJiOydTBdif5HpuvkwvT9Bwq6NGM9tOQ30UQ13P5kIeotdQm28H4UF4w-R-22nePYExjt7dq2nfSPNVv4NGGbLqOqUBoI7M-KXI-3GnXCDViflfZI4cDZSnViHlpL0f5tWk1pS3T_tx37Rzpg5FOGf2oX2mAS1vnWBNL-zvwyjj8AVt6_pogPA9Wq5kjgL8N8MEhrIABlb4HzrWpdiZQIobg3WvdOLlMAU8DOqAhRLSKKxy4IZ1f3AkZN1VkyOAhBRLdDNuM85bOSR9A1eaTS2sfN_YIMOHqo9EEnrVBSq_GJuh8qwB1gz6RA-EuY2tUww9ah4uIuwTuRbTCmjXs2mM\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"c29uYXRpeC5jb20udWEgbG92ZSBldmVyeW9uZSA6RA\",\"reward\":\"0\",\"signature\":\"oS3y1A7Ii46ppdKDEoNKXxxOa4vsICMOR8sA1HA5gJDnf04CILRsYAWbFfvMXjCA3d-6nCbLr0A2HXU_ZmQ_TBK2tfy6JzN7VjfXy8gb-yQN0jIqrehwo1UJC15rJW9SUF2ASK95wAmbBm1uuW7aX1Bu02IsR-XYZN5cUETg0f8qoDNMnOGVTKXZ-IQfjisT_cx4PnyrH6sMEfFr-P_eu48xYw66flD7t7JQoygY_JcKlIf1CuxXX4JsDkgWZbS862VrdWE9N_EriQiAyb51WxEby2lVosKTBx73ulPAmbe0qIDc9lVcD-JUNIoCaPWATf9MZDgFMkDrBQfaDM8kkyZuEyuBF9_tIgc-f5oOGunUhydD1Yqc9Vt3eMu4Y3SIlQ07hh1FHyYCvx-nHPJl5gvyDtgaZ6b_YOQGsguZ5emOq4YAsTUOjNmF1pqI-dx73K5sPAkKNo6vBmytqltPX9bFw_UGVlwAlWq-qe68LyChCc366HUb66ezUz6QTH3f_HK_WGMP_f9xz3UfmBKDq0pJcdGExZxdy57odyYZWWAqludxilz0uB58sPkxOGjMOOu9r3S6DcXg82RWgLt-e_ZlhQX9UqspxrWdxDjkXu7AASEmE2kbIAwMyY7r03wXDCOxwHF2wb7CRVhB_kdv8Md04RhDqXWeXsk0sulCBW4\"}"
  },
  {
    "path": "genesis_data/genesis_txs/ijroBK9n_uKCS97V7iege_5Av2E-tm6ujquAazT_sBI.json",
    "content": "{\"id\":\"ijroBK9n_uKCS97V7iege_5Av2E-tm6ujquAazT_sBI\",\"last_tx\":\"\",\"owner\":\"oScfzCL0J1VHsF15k6zWjFu9D4VPRY9P0dKiLsKvqp3x0nT2siHB_KvomP0c3QgAUc2e4IpKr_TH4TnFPcXQDOMHi4Jugj_MHCIBzq17dzEsc-8ebFkdT2yS3DMGag9YMpJQVj8BNfPBr_HaCrtso0O4mEK1kb3ae7DXqp02wiIvZ8s3eh2ygxKNcccRRADqv5k1ilW20ReifHdHYO-7yWfsHJxdHV2P7JvLb-2bDcvhgeqHj9UacusVcPheBQLeUZ-TNVBMkGy2B6YUpo2aYY9DoQZ0-bU00dbZJbd3AZvEZS3fmkEqkEFDQES_lROLvVeROAy5jRbG1oyrkR-cpr6GgpWlQD441HDkJmEjoUbcQCx-AoJvun5zsT1pTqZaNramrD510oROd6nB40XTZHIwgCxdyrD8V3ZUonOGNrEqFNGX57ZHIVt9Sx4UNd6qW_hfQHMHy_uqOSLAn4lcp6LF7XKw4TcFEf8thm7XkCiYLrpxkWbFsBibdqxJ-l9hafhRyTFf7vD4rpkNt0GavIA1tq8JyY4TymRHpH1-KsOmw7BxyCiRd4Z8YPZ9A7LWGHm-0uobCLUDUA-S1fmc_4qkg3FLNdG9octPHMSKvlUUMyHi9XYxhEY4mIkZ1qawMw8pYH0eIEV6K5x178-8akRlso-Mwym5OenW0Ev7gC8\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"R29vZCBsdWNrIQ\",\"reward\":\"0\",\"signature\":\"nFDNyc-x4pFssEjkx3McyIgCoCWYMhSAB_irX6b-uzykth5kzGv0U2BhtqeIIxBk2Qt2zKrgt_535VXI2mTuk06gqK1N1oQrtYPTJN0IuGXx6PvZ3dJMAF_t4t2dciqsgCAAPYatjtFhXncgqvcGYzxXY1v1XjKbOmnyphIDpM5C2xmajaatM6vcQWJBirwUhS_dxFqHeY236puVa2SejzBXKPf_4DrsakgwtkvLWLDnkiUxt7Kr_oG5ydLUXT-SpDS7PktDESAxiJrMGAGlg7Rnjoub4-WRTc5noSiosSP9VwGtBqUJV6E2R9_Zg8l05fdeGeTl69WMqYeX4t--HC8oJlj1NQkic9yqKAiIOiahaNNkqYpr1997xqcrnxQIZOw3mzAfhvvC8qaCsmPdKvmrmtgt8pCDOZIbAph3s6ncJTIHtrf0hFaC4n4KhWWBJyIrHQkiziwdytijl6HbcbHJimuaAX28pKVt67IE-LRVirQldIQlo-2DePKcXwYZ6Twcc81xSD4qBwAA6srWJ9ZTfIHDhJ4eWwud-MFN1AcoMaJV9Ww4fbVRr-BYIHv9bRa7b21AFl8vd9sOvBgrKVkw7KjldvpJV-jC7_GmRhqvU6ygF6WbApA1S-uisk3Da9SViYHpYGz2_DBTcHkAu-uJ5MKvn9qHNG5J3R3ygl4\"}"
  },
  {
    "path": "genesis_data/genesis_txs/iuTLZ3xxGpaBCggV5xfUkJ6hMdUQKHw6f_vEn6sbmPo.json",
    "content": "{\"id\":\"iuTLZ3xxGpaBCggV5xfUkJ6hMdUQKHw6f_vEn6sbmPo\",\"last_tx\":\"\",\"owner\":\"8pbyqAJdTwbk8xaHywIsTw3rOGqMIgcftC79zNUyemlJ-xIsGE5QR6OtRQ9FzjchexbHzRE1uQxf3OYRzO-k5HwPR0FtxOKbmfuJ1382zUW8OD5_qTxinpYFP61k9mA6iamM0xCAq5TF9RLBHruoCHaw14wK2WWfgcy41xca_TQ953h_oe0IQU1YRIc6oRYChYUCKjLehRZhciyT0PSkcOoFrPqNlozH3F0WWgPyASV8qc5PlXe-MuXS9Y_MhfZ2_fsS781SW-SqzGiDqrZxau3q3OtFZhe7dBvz0YKd4Xf3Lcoo8chkz2QYQZZx8ThOi08QHns6DJ2DLwbWUx6FB3qFVroc4DYUEEfCGUbkKYxQtWFmrK2qQs1bFCOadJ6VZHHQ90H6B2cRv2q_ZRKTdSpStf4jnzcJOf6oeQjqwDATp-byhbBaJV1A2OIrptSh53oTFjtfLJWeq8dEpx-20fRhYgYDmSazwDLWzEURS3THSWvOTMhd-Hb2VzbOtrGJ89cvKkKZsBaZ8inubcAjcu-YAlkJaV6QLOEyHjGi6DHC6_iWAoHmggHyevleiKxSBSwnmw-SCUEueZD8UygVKrOxc3YhcjsFONRMLDKl12LmN_dQcyUHnCixsv50llDsSKUGOsq1-3rpr7tIHCKQaaewa7yMqFNKtN7HTEp0Y20\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"TWFydGEgeSBQYXBpIGZvcmV2ZXIhIQ\",\"reward\":\"0\",\"signature\":\"usJvxZ_4l7To7435PnYo5kWhYjhfZ5Gg8JQLCsWFoUrtnsYZGrnr6cQPq-iQwC_pISZ1rvrpReD-lC-xiyKgspe5LakZGXJe5i9mSl9VCh7bxbIu8E9jLscn_Eq970tqTYyNtN1ubwzlsALcbbC-6T2loMhtZug909O3BV8TBxAgrp0YCFSFyt0xB0IvhhLgRF2I3IF9a6YFB2wvylXauEDCs-CRfswB20hDC1s7jeuzegyLz_mkH9xsViB38Vw89TVyOHJ35PbCMw-XjO63dP9Aab4zw5CYPofpqbBDkAoDMu1Xxj3KKpYoAuWYwMiQodNp0FjYccT_TjStt6XMug5KIT5MPNkbYmaqD6r4XisUcmntrxCdRC5E1MeJO0glf_2yAlMXGe4vR0_MRaPjhGmdt7pySVxc0HQuX1b7SLWAsCmQE9FVDcFvC6BsOytmErYNkdeMcyI_VttLkCZ30MjH1leVCOlJYKBlLVnkPPG5dqgHS9F7rjYoeiCW3kWnBD1xQ9oywEr-KQfSwQxn_tMv5Ls0orzPuUWSMXrdiM35Cvha6LivsTpniTnvI7ZEkzBKzpvzZA0DTIdw4Pi15o1fDiR8l9U7m1XRPDRZG8duXhrfB7eQ3hvYH-7YiDRI5KdXmVA534eTV78YNR4uOYfvVKW1FjQxpaWcRpeUqmg\"}"
  },
  {
    "path": "genesis_data/genesis_txs/j2IiBCd5Vf2Q8ciTVxeHbN6JgrXUFiv0xtoMTA_VtqQ.json",
    "content": "{\"id\":\"j2IiBCd5Vf2Q8ciTVxeHbN6JgrXUFiv0xtoMTA_VtqQ\",\"last_tx\":\"\",\"owner\":\"4I4-_4lpNZBzhaOSAURvMwTQMj7PNAzNYTCyOm2cqKYihern9uZUsESM_2KovM5PXek9n0PBlv_5oFrJ2SGNLWjU0hxWTg9X5WfAd2zxModOu2mnnHLAYXXtVVNJdxVCuZ-IP4gkjCSCOXZ04fWfONORSQyGBxPnHIAjZSmjRWl2FL5nbUqeGkO5FXdbuCeGdDuUwQRcXkrOQBguSYbvUnbaYTgOiueuB9DxOZqJE-zcqO9V-CDXgVzw6XZhUi4S1yPer7akJOSK9jDfe4y52crnGQRP2R3txY_j50JoxoPbqnWjTnw7Lc_0GOd3PqdZfoPV1aWEeTvlQV9SuQSGlJoIDA7j-6kdNLwD_p9EWUiE71NsMo8h-JeeuajWlxTX80KQH8ENQgPNGolxmF5K9UbU-E0aLKTeDv81dRU3rWGaatgIjodI7R71lKWrgsWXAp-IiI4jmvkePAyFYqB3kOczbgylKw6F-jtxR9g61UTO4GOhX9ed9RxR5I0yg_avET7WOGNeDiLWYGV7mwMnk8cxLsiCRvLFTR-AC1Z_oODwBH6c9nXlhX2JC_ApU04q0sdHFUAD2ys3zC2r7eq1xQY0K_BKYPnpd0KcONqFSC8jEWgyczsllumej3tggikcIGkb8Fzvu7LpbqkK0aENLrmN1-z9LO_nalCrnHmGb68\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SW4gb24gaGlzdG9yeS4\",\"reward\":\"0\",\"signature\":\"h74TByCz-HIYiaOltJZR5k3iUvDhLedGd8LNsPCUFyLn9i7vaVY4JgaKQFaWgVPNFW_OK6tQS7AL1XB9lEWrbKleZQbN8KObME60phIfj7gO-Z1IZ804iGBYavTa88IKrp0pviSp6Gnm3zdaM26z6xvc1D2kvNM6NFJ8h2zK_EpdP8VcjU24IGVagVi4uE7gEbR4DAfNJTVPTdmPiuCs_WUDjjwUdZAU-tfkiB_DsohXrpocCVbkmIQ5rc8Aarw0lcusu0BQew_qlyg6SGwoGkdnIqom-njzlBRaGmmFMs-8uAEjroQF3xNLW4BDXuyOiIxaxIyHqbFvmegZbo9wGC0gVvyTkkUm8LeHAp-sjviqRhdwQDRRq_UQ_wM7W39ur-0zALu0ei0V585d5dawB2UjxDXeqqf7kqS4BjdulVRQAWBef_RbZabrRjJZ17EOBOa9FNCO2COivrRtfGQddHc8LR9Rtjd8kRPldOwC6KE3MOeEBCjYAMKi0a_LfRKLCcQiP42JrP09eBRKkLnnewF-VVYwIX0D-0U6Jno0VNkrl32q2XGRbD8z_2GfC5KEiO17BKX5eTx130YwBv1XAtXs2KeKdn6dXcCrjjPpVhauVxo0Y8ipiH-eXJ9cz8chvCo_UNYzp_QS9GkJ5nOSasI3o0FCsGnXsbZ7TdhdwGY\"}"
  },
  {
    "path": "genesis_data/genesis_txs/j3l4tvphmVOyVyFkNdS7ulmexBqPqEvsSJrBsjAFJXc.json",
    "content": "{\"id\":\"j3l4tvphmVOyVyFkNdS7ulmexBqPqEvsSJrBsjAFJXc\",\"last_tx\":\"\",\"owner\":\"3M5uzZscVe2KIhy3oDqPOoMUtctwn31VQc83_KRYrumsmNpsCT1gapoKyvp9QPlN1PjB988ERuN96yJhU8uHT1HDHIxRuKgIu-d-jswDlrr-ej4SE794dN-I6vDaeAP3KaNoF6DY39HI0MwyBd-dxl9Yt3gnfk6DaxRVDSjAmyn-5sYgHVeenJuc7tssrMg-x95Pg87DVLwxehILuKzT41EGvqUZ8KlU2lU1AqYCzlbWv4wiXYpMo4WOOdMRTfaQojk8fb8qwy3nAE5DVgFBAb3v2ov-NPUlvaaBqSxmHSLW1FjeHEpBvMxZGpA52ePYhZQerlr9XJK2sGYDIv3uv7RiGIZbNhVSsBfwXo52znZ2WHEpNg4Rtl7M026xdJL6wLYZxNVi5Zd_KgsqL4lFNUwYr750uSGs7ngq2oDuLm_lfFR8cn2_Wi-5mg4tgTvwpzXXqWedN3dYi6rL8cyRUeRGicFFV8kO3w7BWPMfTNJ-IZ5yRPzdHEZCX4zY9P6gb9UI1gv3p6mq18Gdw4B-6S0gwHFfwxYVkVESgKdwykiVHaWnvEjYyNB6-a1ymD6zjCeVQKvRFTghB3RcTnK6kpPqCPCeY6gmkgfo21Hb4Ht-CmdcjDhyzhjQYG4l01I0poEK_9lDbh7RdcpDeBtEMZO7thmncfcBCGFTc6hY74M\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SSBsb3ZlIEJldGggU3RlaW4\",\"reward\":\"0\",\"signature\":\"iWEy7bPlMgdgBQoXR6HBTVQbTmujvnCYax9dPuMFoS6Ehx4QJdyZmNFZhMYBVNapTInvEhgwpPGWrWmPBau7mKRifTPDGQ8wxmt3No00hyHOUIwmY9j-KfinLWAxOAznPTcWAgTy7DCS6OYDfildaxRwyRDLSA4zs_DGeP4H_a9qV9xb91OUQNlMXWnhSlQk9RSYT8hpFhSjUokq0SXbX-lzAS4K05bOs---gooNFLve5Y6vVaUOP1oC1ROtP3VKVc753ND-J69py6Ae1lqxFYPn3ZpdFk6dpK1vX0L3SusFBWifq8S8kXIkZsMRM_dQuBzONW_5JCuaFc_ccYfzOIVyu9_NZLTcyTfeNaqQiZ0LDUR5X0_wtGN4t5HGPj13VmxwTH-6DJYdaS4IbPJNo2nnz-3-GN414s1gSR8-KmuowVN4sF9VzTEnGgz-eO_HP0IrZFDP2x73628n_yyndTX5nBbu2SnyXSb-XMungkXKoZMdpzlOIfCMd3O0cFSOSXqDvH9Bff701HdUPtM33vnZwiCLMh5RXUvKba_ovGMEMw05SUla3Jk60eua1pFQyHnMlSNJcUx7b-Cz9GTcmEy7Js_D4vENtoiUUvy9dZJkgU0Gqaz5shruqnlEkfJDuAmods1tV5rEi_3An8qNH-br1Rz0lR9G0xrB8hS4TVo\"}"
  },
  {
    "path": "genesis_data/genesis_txs/k6UueT0FWSSUbAAH4Uc1Oz6BivunVR0nSMTEILnB_dQ.json",
    "content": "{\"id\":\"k6UueT0FWSSUbAAH4Uc1Oz6BivunVR0nSMTEILnB_dQ\",\"last_tx\":\"\",\"owner\":\"zd0TGMB0T__R6fCkE7bXRIK1ySNXyJZq-6dMBLrkKwb-CF3R7V4GMumbKMfXeYMoFNx1dsPy33Lfd6X6QYTSNq1Y-k68iUB3gMimkmfIYwnPr30JODUelF8Mxy2WC48LWkm83tLYrQ2plXruK0fbGfQhBn3yGzCIVxAtJ7rZ1CmgnavawOmMf1g7ia2HVcDPleIx-tTkwi0zHbFsG4UHYsthn2GQNxsKfTVAAfAkuP3cbfLUE6sHQjCDR3JVasrjjw2NOoPJMtIJSqJh1MoBdmsqQ47hZPMe3uXzSGJZWKwYx6_f7150yVkZd-9WHofO0k8zONLW3os2cqsT3TE57R3HR-RGB9-Xln5ajI2CRti17FEZJTCID5R230QF9CfWysGJL9JmisA3I8i89Wl3bGRdUnsOd_liMOkDE6bf59L2pIikRU5QQv_HCf7RjeTivO1MhDnkmcj2gwDVFsXxp-iCD-ynL0FxoONARc5Jb_PZswgfIEnwRv3eletA3Y5EQqdnu81AQULwc-QElGIucnbTCUHxUJCYfHWZu5EqocRvtIGPW3u8qxVI3m_cHlyfzSp5GhseSsQWpp8D54I-LOvTeQJsvHfbNV1By-M-zLoqeD_qyI_MrLAziOykB0w30i_qFh4rMUQfWLqnA7EMRk8Hpq0q0IjA34RFRpHxiBc\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"Sm9obm55IFRlc3Qh\",\"reward\":\"0\",\"signature\":\"LCFY_HKqyNmK55ezsh0LazGoEGsse29vW1uumZn5xvXcTrkkbeCrf9DE8SiAlgcShwJeZsNtmOZOe5DvrvHabMG6qsgnhEZMWQ0s7SV5zXfKhhgdDepQ7ijhq5TTqswQfyOz9K7JPsU9tyaiEqGNKtaZ_OdI99JPqtCcmfYL-OW6Uol37R0HP7ulvaQMCcG6axk1NpFhvPHFHKKu6H2AYQ4kYIu2eTJ2EchkuqFCyRzkjFfFlDpTOREJzvQkecP3cjd_a5EuTQvipxIl4zE_VEU6hfHjqbKZGvxNCjHAk-UCYyxKsp5NMQJiG2XgFaKCk3buEyH68EmWCUOYTkUIbf3LcW9-VHhSc-kQXwDhPhO2tmT6FLkY3cNC9rVfasMVgUePDllb5XchPy1Wp_Gafz3JFtKxHJtchMWRRKwpgFjKBdvY-bJXsmvD6SqT21XfXUOxxkVHtzRpLwmrJTbx2NpreWk21tQyDQpjAYtncb8yTSohm5OZArSaYjeMxDEYYEUq5YT8k4pn2z8qWIday08l0i2pP_6i9kpQywONDfIwB13ep3qAw-BPsjTsmSuLpBHnHnNwYZjUgkxsII1COcgal2wI0tQGLu1lL0xgmbg1hzsP40mGc69RcPDuTwEkbMMETM0wYYpAkiZRXGm2rO97U7gf1XNZmNgqJFnIhSE\"}"
  },
  {
    "path": "genesis_data/genesis_txs/kXu3jTQwgYsphIUFbaVGg9rNiil96fNjw0RBa6oPRtU.json",
    "content": "{\"id\":\"kXu3jTQwgYsphIUFbaVGg9rNiil96fNjw0RBa6oPRtU\",\"last_tx\":\"\",\"owner\":\"ugJkJnr7FAgOewmZTwr_nvBzyw9FgCSmMCzaV--gljl57DdtiqnuOIsVPFTIDg8YMDrK5xErdG9FkRpTGwDhLRGnjw33V7Hge6SzK_MToK4GMdC0DELNc2Er4xSIlNGao7W0yYc5QALSQBFps5coEkMV9o2fqU8WNoNOuhGLqi8-9yTaBlVQChkamiR8J1iWp1wgI6-Nmkx3tQLjIGR_V5FE3eTI0ClZdJSL-cntzuCBl7Juq8cq3DeXDM39mBshpMUQowCVsYtNoNmS58_UknCkgFKNeCV0o6Jk5BvQDMVj2ORi75XoWIPgNHTEB5mCzHZfMKFK6q3yLmUv7Uwz-57BuBIX1P-fC5MxIwLIT6D0XQ8RDqxchlP58vDAeD7O9ev1CN-qxkO9lWDpot-fdxjqQS7_0KLnHVImI0XRfdODQcs4uzo3Mze99tcLWo9vLJFOo2Ha3vuWu4T9j2pZnTvrPdjk6szZN0ZdMmA5cv6YyFdWe6n2T7pIufrTtfzzJAWhGNbbO5eYpnpAuMtAmNLtYi9Z8IwYx5twtbKCKGKCcR7IBXzkIywLIlzksCpHcXVUgyg58JuC3bK3iR9b2ppavlnRPzxH46jT8tki4HwCea2qWNOyu4hLiB0KZuN-C069a0CQjjVutcLnuwec2UQuplEu0n4R94uUeuHAYW0\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"R29vZCBsdWNrIFNhbSBhbmQgdGVhbS4\",\"reward\":\"0\",\"signature\":\"DjKsMbka34C5a-T6vQ9f5Hwofh5E95cxX3_zQg9_3tRE78z1Uml1enucgSHr3LReGnW9gFK3oBgtdyKsY_GhO9ZBvDG-pGGPnnDSb2t04ctPk20tK9lyZy2KGb0YyekRYWC0zzmJTB1kB9HoDlK7BcLLV0SDwb2LIcn9S__LgXw0yjQYfPCFYmgxCeiHP2HGhvCwWkmntdnZmIRdOA5KLptAzKn6AhqY14LAyTlzezSuJV-C2Vr92WwsO6cEIzjkqNyO0ylu-b598gAd7cC0dh1faDGt3QpppmaJgm8G05aUMAQFkyXPFRVIdlyA-juvyrI59Q08AoOI1dwp_hkD1iT130k-phY2qWgjikEB9iJXR-N3paXlWn-U3t-U-mMc4aVTtO6_MPJZeSCp1pJlpSGA-4IIrHjJHjQ4ZrLr-oDpuwvw-8pzyFljOfEHcnAKLhEO2g-IIDZZFuI_lmAJ25NTv8s1yYcc_3O0F2zHQoV71ECwDrzj9NsFtwjTjFwsJReLQ3szL2bj-KUKzTdoh_ZD73XNDDL02-ePq3URa61jItDOIIcExAWoRpCEqccEnt_Kcz0vE_5heNkmYugTQ3d5nrFpv5SN2MRyB--rJs0-ZEji1sBve3rXvemam5p8mBN730dms5czBbU_fp2sM53cqKD3G5Wmhy5lsyF5x7c\"}"
  },
  {
    "path": "genesis_data/genesis_txs/kcb41aN752OE__qEKDQAsbpzCUXMdlzI3clCBuxdVts.json",
    "content": "{\"id\":\"kcb41aN752OE__qEKDQAsbpzCUXMdlzI3clCBuxdVts\",\"last_tx\":\"\",\"owner\":\"5o5JcSZETIFfIwBS6imGKODHxRtCEbvtqgaQIpIv5al1ALI9mQQpRLaCevTJF1DOfHl-pTqU5wjn2mi21U5-je_zyiSh0U-fQF0DG_xJT8DwZqLMQ09mKoa1erBLqAJNAijc8GlUb0WKn7sTzncRMsDzz8hsFpkqfdcoEQ0PQQ5g1KP01Wj7BVA78Y1TeRth4-GUCmVnqhLTcOZJn7L0EiJgFXQWP92LKeYio3HNi4FsBCoGRnOZqof8EZNgcEEQCVDeg_0Fzf3zeO8z_NSMgigTIQ7VVhsIBzq8F-vZeIgir3E3xnNUoS0SGH_afJbte5fccPAFW_NOs9W9nqfXKvo46kA7793xCmu3SABYtHR0MgvGDc2AED83oVAVZgAtZzUBVBSAMm_WlQ6RyDnaQuXN_ri2R4xdMr3XHuoASoXKFN4Ln6Ga9rNWR8aDSNe0bf-sigVO0DGJI3ww6SRJobGi017VTZMM4NXWGYEjpNuGNpYxd0Kqsc1nuYjPMawKK90Sc3Xwq3AjX1rtgwIK8MhKrKY_Om_S_y4Dabcx2Xndb_hjznZium3HarK2p3k9EOmmYBmVgYFkXEh6cUaHmTVxJl6tkKw7vYEOtK0YP_USbF9lUc6qxqowgq_I7s8YmO--Wn3A8OmrljGyFD_JeiPr36h1FEQagRg3BgfPxpc\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"UmVndWxhcmd1eSB3YXMgaGVyZQ\",\"reward\":\"0\",\"signature\":\"bUBBkDW5oLtK2eOhxunCIdPArO7m-Iuxm94X_c6s5zljrYhljrLCorFiB-PdFfb5bk5Ay2xTL9nhen1XM1oAHsJ3FQ67RNeV1j-Iav8FGIsSWxdyg80wZO1LZ-B7oe42RRbhS1o3lEli9idhp5nybjVAJRjehxE2oiXMQsaDxsMc1AkE6OxKAPjUAT_fOSdVxRM8qF7u6LY2jggGk8kFx_gPwH9cDnSRB03Vu4A16-UsF4VQDVVlMF-RPBZy4Zczr4NyiJyZJmzz6Zrdg2AjCOEQb2mto_aILn773wNk5dNDWMeNsPdrkD4uUJxiJbbfpv5dzmKqzEgBjo-nHOO_h9wNfcVHimId7LZW0FQI59l2-T3FX62LDLxUsbEmaCKP0kKv1OOlE7ECwOQtx5u-FWKNh3DvCZ4EKsw53aGYpKSdARp1WwX9rJh1IKHj77PayDvqqR9bwyzsts6Ai1FpBu8qgjfIXfCpO7HyGgjX-HmEHjTADoU1Xwn6fRCz-NNodfGQ4_NV57hzaNiQ-C0RVv8OXbBuR-_W7Dz9sgk_zM9z21kqYL8-jyeQo1VoBXpWF2b8HTMkjsRz220qg-YgzIRUI0ns_5Te7B20puiDMjuxIBF6IRn5PeOZWOP1lSfG3bkJo_UQVyqe97WRkfWMZBIa_bt1_pIR1VlMhrhX-RY\"}"
  },
  {
    "path": "genesis_data/genesis_txs/lFqBd1sEhgw1e_adedkee2hXP9beiNYbF625KV0vObU.json",
    "content": "{\"id\":\"lFqBd1sEhgw1e_adedkee2hXP9beiNYbF625KV0vObU\",\"last_tx\":\"\",\"owner\":\"99FpflBJvGYzSpjZw9yNZjwQuBgurNKGBIhugnBwGKVvX6744JVfLCgYQPWVVkTNsKBLEdEqvsNvKK3bp49c67x83gWe7D-bnaNhkQPpAwMq1RzepdboeN9J_36L_kDKmpAfrg16jX-LWt4VD0pW-y_IgEX1tNsLXKjPxrWvakaybT8t5U1Ob9LoZAYAYbdLWefJYCWoxDsUZqgkVJ2wZGj60nQ9LHqfY1jJky0iQg_7m1B8OlKQgj2zXHLHShpKjGZwyuEd8bd1KS_LPlebo0W3rtC9_Gg_rKT7tQidwv4JnxD6xEsKTQ-Ttei5gtgzDSGn8OW57ufsL-Jn80O-k6xATEPAfYEdjlp4N_cT-GvfDsuNAs6yY0V0y1xirrDsb3VuUhLay0GqhzZdHxNuSwSy1P04PITB0ndwFvAJQ2ZuUMh0-qHoLCoyFO9ZWpAzW3xnzE8ag5bkKBBxNY7sIi-RW4P8X8WOboV-DAW5vdN4a1tPdA6wFhKRAzuQTV0twXlZdAryiZ4uOBKEV0d4LBGuM739OfxH7-u5tU3Mn7R119ZlKoEeDvBPY0PChmZ8w4iMAEuMYp9RULbinp2lOvttWPKZ18E6P_vibenEwscBZw_Z0aaoQMZjhwVrN3tX26ZkITOeYH_pfoIs5Y5q4SICiQthL1Dy3AFRgY5F6qk\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"Q3JhaWcgR2FsbG93YXkgZnJvbSBTb3V0aCBBZnJpY2EgYmFja3MgdGhpcyA\",\"reward\":\"0\",\"signature\":\"HnrHedAY3EfYufuM4msiwDSL4i3-vCMIezQx0hgEKfnhKjBio665Sh6BTKF9f7LvItAf4LRjJEn7pimmmrT8K6A8c3k9B8cXGTqc3ptwyH-CcXk-8qaeo9IhksCmYxx1UlwGDezhmKESq0PgSk6i7a-gjaZFHT8Obis40eJc9ZTLceJJRNffRfwUZzJDXNIsG6yoqHbyyRxoDHnIsmdQJZafhan5hpg0X3r3JMWj6WpWY-2U-00u688GoWTyhOXhE98HtQMtBGpkgnwTuWQEm6psAlZU6tbq48Y5zawziPyhk9WGcR2pV6eHo4yTL7QpvdVVYvlbPkIYD-14aUqJwu7G16Y3-_XLgKFpWc2S9I8o-hTcZRpYynJ-hIX-6nkFediWP8TqhSVjwCFYhK8EtKkNFkiQuXFBegHg5LMsVQBxVc2-MlTC0Pe9dDKSXm50byh1__20EDw38f8gQ0vDI7PZfq3RmmeDdPoP_VYLmhuhGwHP57nouPA3IrxePzpaT6BrtGHtc4A04y50zVjLjwPnVr7Eu-F3Ueq8Ncjgo71ljWGEm2CTx6IkIt5SwLqsHnGbddwhwD-nE9XtntAU_CgzKq5xCRkbEB4HBhcIXx4FUrWbmLQWbm-9DP6-qHMwi9UVJjCsHmsSn_j_ye-FABjldXhAnmhLh6P6yn0g3CU\"}"
  },
  {
    "path": "genesis_data/genesis_txs/lq4SrnweWCHnEhw_AV69gMLyBrPxYOmOdVdRIXkHwOg.json",
    "content": "{\"id\":\"lq4SrnweWCHnEhw_AV69gMLyBrPxYOmOdVdRIXkHwOg\",\"last_tx\":\"\",\"owner\":\"yyJyhjObKzAoYJ9byf4U15_Yau06vCzvFT1ayGhEStb9Te6FcC1S3adgqyr5vk9olu-lyir3AtXRuU5mj_t4-38HOQujq6kW0oRDMGooEFIoSnclQh8FG6IFAENlLWCt9bUOXME-VJioAlQDeNcAPEHNTCB4tkRoNC0dBAQ3sVowFbMSxQ_SXJpu7oJGCxmQhtPKXJd7u04x3uGaOg8Wpw5amvpC3IHQmIHSQ3aUfStOTWb4NWX7Qf5ry04PKUJ01DSKJRgxWKu7NGGPsLAH0JdTAYob0ZsxoQl90h-dKYrOUDi4kdFZsV7TOy6udk2oUreZIe7ACqsYcBjtt-7d0KGrfDvmw3gEDHySDvdjAJF9Dlj9Ki8hkum2HTEJ6CjHmeutgFipLSF07Y2OTuPCpjz_jBDglHguk4BrhIeWtHRp_AiMwzSlImUz2G-f9acpvQtZbcuOEJkVKxh8Pt4Hd2DaFg1dwX5d9-LSYj9NAeBzlV2pxzV2qEFmLy_XGDAvEGc5K6sH7K5bTu0P0_UxWydDqulkTdBKeAt3p297puxuKvMHbiucD2nG0Benrr4tBBlFCrXE3c2l4pQTvp2h2yEgJfjNriSJP7l3paygFFeZVZUCPxJczIu2pfg2raY2OGFh3tDULsAl_MdS-IbZXe3ecFj-46HWiW3FaiKFvwE\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"T2YgdGhhdCBjb2xvc3NhbCB3cmVjaw\",\"reward\":\"0\",\"signature\":\"h4Q9y_SqZl7iD5UgW--CtK3kamrvPzaIJr88qu4u9mEMQL5hU4rVAm_8PZ4kooyXoSWtbUOTl9GBbik8giJU_GifiZCGQS5TcL0zLdq_ndmmgul2Aq8uz_D-08YxSlwvk39U7CdLf3rPO41Ny8H4WavDqAVT81e0964SiDQdCVRRROydCMq6LNFgQ_2xD5gjLhW9_fUnIpaS0hUH_pErQByY1oMFdP03Qy9cZnCohaUqn31SLPAaQZhFKQeXCx0-BLwiT9JcAxQciGV3jNQ_2_1INOylADAiZ1IJ_vDNBb7vUEuEd0UWCW0p-WzaPqzHRAlyjuP13FtX2kkeGJjT-RA6Vnp-xZiH2gCH2IPKB-PMnwC5TxmqxIpOX3D1snXuUXe776FAGGMkaxMkKnIX5y915I2kOp6DDK2ixzVp-dFniyXcCBDTGM1j_yXy-BQorPfm_HbFqU2idZQCpE7Y25BIVoC0N_p8eWDmoEbOSDPLIAVD-_netxA-PEqgdogQsGr5eVRbhxD3iHTNxHsYR6LeNHDDQv-AugsJb3AaypTyKK3Zpukhl-twZj9g7x6Q0DCfG4XrPNHUXs4P9b03yZvOBA7YxCf4M54gWO1Tjv2_S755_Im_CA5snzwA4wNse5uRHxXvr3qJ2hK0HUrhwgqn5CDcKM4mSrlXM5dufWo\"}"
  },
  {
    "path": "genesis_data/genesis_txs/lsuH-ITPI--6KSzhIFclsEAWOSoRQu-8tlnOSxj_Er0.json",
    "content": "{\"id\":\"lsuH-ITPI--6KSzhIFclsEAWOSoRQu-8tlnOSxj_Er0\",\"last_tx\":\"\",\"owner\":\"pXZj4aL8SYN1ZXQJhYd8YmaebIpsaidvecp4dnEaAZTghwTbQE8qms9preUjQga2q-2Df0AW6BRM6FZGJ26eLD8FTv0HPwc858MMMTVgnngx2lbI2cXOSrw7vLc8XsZ75EYkh3GxWTNr-VMxhNoIhfXeTv7lUo4DwtngbqFiiY-GKR3sw_fzXnypiV58iyse5XE-16ov-vSzDabXO5j2NqnQIMuAEktbBTDdJGcINOGjY7e5uLq1DPMlDJqD8h8Kfp9r9IzK30IAWxxBlOIs-LlrUHirYyTJimrbNf9xvrFKzMW4UUDxMNQLadmVUpjZsH5U2BqFVqIJySZ1XFRU-qzRIka4MbSOAhvbgYx5NaDdHyjPTHZ_UypKksC7gOJ4eyLYA-6sa6C4o-ric6ai_-05wo7YID4qWeNC8JtNjC8YVXZopOpQRd_fHprKOfsFjQfNk1GzqJYDJmeWWu4UDY2ksd-K8U8xMgiMS4Ezr2DA4F7iKmKdmN15RsVz0sFHXPn0ba_xfgBta1g2GmLWomKK38y3Hye0kkN7cQKuBPw4Z3oi1NT73qtq4m8WeXkmcASLzcfNMGNW7Mb0GAVW-IREa92AHa5ADcyyfcKCiMI9vIOQCM2I5vhGn_j92IY-lW3mKu3f-4eBB8pEWwsaEVkHrXbWzixN2oN8FplV2U0\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"U3VsbG9mIHdhcyBoZXJlIDotKQ\",\"reward\":\"0\",\"signature\":\"FAFlc8pGwb1qv6jiX7-1NoFG0dwgdhpNAX01vj7oECcoGZXF5EgejlExQaelQAwlqtKVce9HDNrc2hhxTK92jvBf2yLalzjClA9pAMQBJmsQIxVuT-cVrmPysRBPpSSdcwrnEOi_W86vSMpUuHoV9x4PZ01CMuc-9pe7ALmgkd7wE1vhy-tqfVJZnGCn41v_YQuSJR0ht9y8UP0D8o3a2qolkDsxxj5XswPwusOYj0nJDlR1QY6t3XO3xYRdw_uB4f_tS1-ijqUovIWNrW-vL7wgHsyLRfwDLKaoBJHcW_iCbbHCY3XFn5WAWLaVF0Tn-vMfDQCTnJBKb5I4DPfdq2iZ82Z6BpP76Jg2xHMBWdlBfM_VmbMuCA4O5BdPTnC0TYDKtJLcdnmsiPz93TB-9xAzImWgJ0ukR6asM3w6qzlHZ0ek1pgjz4jxcrGvbKaHma2q-0Ak_59r2HV6pUdjXRpMMp4IwWr11Vq6qRgRKfPAXzJB4aXuR8kOBefBJH5XUJ2y_UkB4xtHi9tqdPavsApbwUZOBL8z9sXAeT0-KwZ74IBILdko-i7jOf3MeB0iT5ynIRitONuzKkk97nA_dlXTJhLFDErYr20k5mFdVMwKVgY3QhCPI81ZpkIys_oOeKO4DQeR22t6zcff8pyE0S4DEmk2GANoPra5793tWhM\"}"
  },
  {
    "path": "genesis_data/genesis_txs/luQlV_58e9qjm7EZpoO6f5Y1j349Q34UwTW1Lx9J_vE.json",
    "content": "{\"id\":\"luQlV_58e9qjm7EZpoO6f5Y1j349Q34UwTW1Lx9J_vE\",\"last_tx\":\"\",\"owner\":\"zcuj3nZt_YL4hipF-S1IclpbGeQZA8y4RsjGt5bz4qfHTbB2p7SgIcFDnR5noajbQ6nbIkZWLpKrPHEd_0XGZWdqsGiOdzNW3cjvpYJKMmpKgR3T6cvL_2e85x6VRbimaeuwIYQr-MYpsy7CzZHqF5CQZJ5feZ-6NbZEHd2oJkrTGFCVs4dqcUdVdTyLpwVuUYhfW4U8GtIB9Xg-jG_aYZYd2i-yr3j0WGPlerswOHmoT47P7s7eH8lP8yMe6selc2Y9zrlqkiz5iMAE_1dnf-pZNcys-8Gcu2dkE_tCuLbqwG_r2WjrnJTeWQN12NL-HPxtsbfbJEJauo6kwzFbZOLD6J83ImhHdZqY3G5eTKLkaU5CtBs45llgIcm85mZJ8eLfbyO_rvtXm_n34sc7jta-4_DlRV2la9SA5nOFB_iy57PecwtmgWzUa6qv73XvKDj0noJafYRssOcXiPBJPFvBhAAEyA-cIq-c81QeGXMPB7tZ5KRf2hZowvqKLgk4bI9pg9dlBjSDzSJwnw2N8VYX1qIf2pzcBbkPkvy2lt0pbUrTdDOkJce5VTseC5CX1dfHL9g-A83lu5FabEX1qzQt_-mTDQLByeuWwneNEx7bpC5KnSpSw2KHAEbLx35M_IKdY0HfD2OZuCVduEtj7q4PueXjmRbO8L34kvajYBM\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"TU1YVklJ\",\"reward\":\"0\",\"signature\":\"QQVB8HVlGM0si5it7GAngyFzRjUL-b5iBwc6fPSXazOxD0jTgHLL6Xrz3GuTaSr5fjcBiRcp2735jDpVZ3G5-9vNhtwARiJyXiMiCdh9pW7L6BGD3t6SEvrKD2Y9-LDkbcOj7-7f7bEeY_TtJTKyetwag2fMIuew2janvwXohQWt1V18yDmCKclkouAT4OpN1pR6iSxEVJkbOSdU69-zD6-22qIeElpiG9qvSVfFoJYplaNWf_v3EajGOITD3ULbY-5CvuPUjvnksILU70JhIqtI0taU57L2bPmxXpinTnk9K4PG7evixU-eN55l2lvJ7DYVrLtLxKFBhtyKrdu55vznJx5qKlgCDNBTYOxxIhfm6MKEw18YzcPosUbgfHho_mEEQpPjqYAmwOW-Mbx3izYR7W3KRt-gGooDWi5tpxbr23ABCXGnLLxqByS9qIGi6D-FY4K6HKIcBSMMTMQumc0VlZcTevdhqUBkOTMfr1LL16txmEJi0QrIImg-_lZ56vCrb97kOp_YKLtHBSsGy6IqVwZJnuN-SoxcJ4Gh3Z6ZlBLKIWsaq6HRgeayJYO8xRnPK6O57wLYMJl7TO_k5qFW5CX4CrfZrXWJS1AIB3Has2PM-oR6YdY32KkPi4sVyor2h9y74gJ7FnQW8I8OYOdkrhXzX2D_NKoJ87BIWA0\"}"
  },
  {
    "path": "genesis_data/genesis_txs/luyHFFFOvjKPqi6nVrxngcHaQ3RwbMDMqVTLqPagHy0.json",
    "content": "{\"id\":\"luyHFFFOvjKPqi6nVrxngcHaQ3RwbMDMqVTLqPagHy0\",\"last_tx\":\"\",\"owner\":\"sgsE9-C0c1KaOeGISQMRDrd5PFLVDBJ4xPh0NWKo8GtIrHa0ZjFO0-5DsaaCKaLsMhsNb_9AG-DG1Hwr1S3C_QoJlON4a7Dv-xOpTczjyvHaFZYsfyVttekSfV8vVhcP3i4VbKBO7Jln4OPt0qgJQB8vTVnr3F92levzt11RGu_1k3R3Y3NHse_KMA_dWiNSbUFYUXCO3Ykxol92VDWIBNojAkARjkASb9tgzBBtIU9w6x2Uw43NDavtUuxJZMKBatzAwPwfKAl2g2RC4aFHqBSvwiA-7jsvUB9Mj2506G0eIkla33Qyf5b7hT2JIq-mXeKk5qHetatHviPvOTCibQktee3jPSF78K97phAlVmr_dGBD7wfrsP-c3JKmNEX4aOBI4sjmLXgGqjzCMG7DqfU2bwQNFfaR_-M77dPjgaY58UlMw5j359G80qpB1abno_yB8Pdw6mLKIOmcDI4hBw_-n7_WW2L3EQf6m9B8UJZL8sBZIlItZ4gIrMtZ6hBFf36j7gdGq-BjxKMJhApe6ah2sKUbEeCRl_6g4Yk3Ai2RemifBYZB6AjJkAD4aJsVQYY2WOD5__xR5acLpul-tuJHYutxDbBvE4yANuQYk1WYiA8t72tcc1JWTJHGdJ6-2Ewp0YAIiOJVOepK9LqLfXjLPrkcSHNgP3mo84jQA_M\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SlVTVElDRSE\",\"reward\":\"0\",\"signature\":\"fSiWt8YR0NE_0tk26rzu5E-xuORGDD_x3M4eIO5rFWpczIZPzt3CJlKWtbAR0gGBrDMCRdeymLCJXSdDbAQ7zBa66PGpSKEDFGwmg0u2R42PItzpZNoeBTOtNqVJAOdZHQth8aSKwN1D-qCIRVfcUeWbZrmzWcRtmbWMyNAHiMzicWWAu-YSC1KOToLqdJeaY08hvUV6qLBbYMxB0_x4SOc_wakx2gz6oy5LSyhBRxK5nadbq49BXasczWEu3-2wIVsuDl3XXzZNQCodP0oe8-n9qtNqQp1-CzeHLanbmGGZeIKhlJ0sfv91sD93167mK7TN8rqjNH5W32VEUlyvOLhNGJMPTglzuxZmbmnUJzFZrDqYE_xla1YYZiwjxma2QqbhNM1j7J7PAQYpY-PmzBYRG-Z5wSG2FZfp8YN1Bdv2cFTgw3_CyZhMuMKYV0z6B9OTdNfprFwBh9XYLIroAvhud58cLMIBkapaAR9a8jvzANWzTqPwsVoVmOxPNkC-FGwV3a-2Xezg_Ex73yr09jp8e8iUgF7NJfeYstOlVC0V4fL7Rq-6hS-7HvuUAAin-2PQLgvYHb7a-o8hVQ3OVLmmLKVVhM31DNrBJADo089ZAFQqvwPRZTwGfUIHMWP0pTu5T9qXxHCp5VTgxx0zmt7qRSKpueZj34p1mZVDoy0\"}"
  },
  {
    "path": "genesis_data/genesis_txs/m1Vv28IVJIuYiToBhxFVp3dA47je3L8WkzSjggAWXAo.json",
    "content": "{\"id\":\"m1Vv28IVJIuYiToBhxFVp3dA47je3L8WkzSjggAWXAo\",\"last_tx\":\"\",\"owner\":\"4IQRR1tmNYlgd0xfhX4S8AmOipWRbuyUljy9ZRX00ssB98PNWqKFzWm9O9xn_xgZQSykh9QOReEsfI6nmu3EyDLaVYN0UfkR1HcMQb3javOAlxw5oJF4uDwHhR0V_J2Cxs2lAfO47tUASL3PCy5lIbtrWGDDN7OHLCakWL1lHBTlii8-CCSxff12LynpkvyNXc9VMCIP_wbEGpzQlq5htU7WF08F5UgO-kWVeZF1unpLQajtIU4BZKHzJMLvInsvmJ-RGdWdIPZDIY348HWyShtHLBaLXVwrLxlLfHeacwuOVNaiv4C5Eb72JddbLBkVQF2xMQFYxJfRTIFEnJFeEbpzPOkIjAAR9HsrlhhU1aD2Wg2dnBnJ2KosVx5ae96yXR6ZdtYeVm3R2OLEHthD-Z927fYOGF7Xz1oN297awk6qVH83pl0270dWWQapT5hkYBGckk--XmuE3qFyuG8GEfmFeNY5fA_-3aF1rmoMeuUPuTnQvQ6ZG8QL71l1G1l40dYFsGTrXRQLJCx0OWe2KArsHcM1EzaBUJjJsR-WShTnZ8us1IN_qqOMGCnhlSL_cEcrSCePskcQFdia8ovbRrjXEpu0hFR7f8CLBB-fobqNMT42yVcFh_e0rxA0CQGkVNsHR-SySM_lAArctWwbVu4sEd-aKhDvt-tUQCyJNAc\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"U3RpbGxBcnRz\",\"reward\":\"0\",\"signature\":\"g8EXPjq0CcIN4epLePTSSadwYA0FjFwwFDp6rc278cmXsVv9apSPXztd-zZUVi97fC1PWS1J5mhNFXveSOF2r-Rni-Uo7sWfyb8LrbtDsJpmKc3dr34-QuDkZRR5hJFtKNQNAo21fAxkt75CXWPkYROA3gFnHOSqut9TZ_7oZzn3hT77MoCfMhB-ZW8JWHM45PD9jNjjqfUiCzsBUdF6lK7Hts3uMopBkefDLA_bTSNO98UL5Z_Ser26Io1lS1XeftQlMXA4das1aYMF-ySySJkHRZ70v-6ExdqQ1gpfNDBZzjGH8JHSTZ59_ecIjGGUO2NWTfXwHA7Y82YyMWzKk4zqd_BWpFF1u8iUjw-Yjpch6PDjSbkt5mk1i0JtBCxp8XPojQimil1OggBBKZlhmfUEn1YdgFUMiL6xj4qc_SJ3BBQTLYqjuQa69WOk7FCSS1wQYv9ZF10klbG6sSZ5ozWHdhx_rTyedOTfw8-foVd62VG_P6NQrvlO3U8x5OFrEBGm4PrXZffR5lScr4fBnG7fVAlYVnjiTvC0xMRYnBg5UAaiCGLPs7YAlfT7Bfh9S1nztdtLdd-_UcRHrS9imojUXf8P-b4h9UMvxRV27a9Muc9dnXvxplPudXbuAXt6WJaYXkoZ3rlK2oZsr0pqx1-YM_4LgoyS3ZMMBh_GWoc\"}"
  },
  {
    "path": "genesis_data/genesis_txs/m5zFPHB-2VjCgTLStD9TLZwD1CHfLELPKkVXFJGIptM.json",
    "content": "{\"id\":\"m5zFPHB-2VjCgTLStD9TLZwD1CHfLELPKkVXFJGIptM\",\"last_tx\":\"\",\"owner\":\"0P4DlnmysVSxS20bBeS2E7pyRlRR8iSi92O7R5yxbRDZNWSCOTOhGMef_ObwUo-UpyuaquKflFZ85YrzObA1Ttnpl_sIbpEL2LiKCkfD1e3-PZhTqo3DxENtmvnqHrhujwTVwQlgpKXhIzAPO7J5yJogjha-IUs9F5FYFRrMHiEckxhvwv5WQHGBGoxft9ykWYBdnRni-kQAQXRf99BoX62SM-lrmysGAjbRWBflKujkVz0XyfLzCV_ipJdslu2nWqmam4oU9FePhqaoA1wk-J85PeV0kmxNh8eHtqfRFTk3-NAQzCSvv4fRnMvrkXZaIw1vxK3aX4MdkZLoVFPKG4vXHRdGciNiU5Bjbn99inEZZnrfM8TPoEI8Dm0P4NO6HLuqOzK46iDvKS4dL3xpB4CBZEuhKqrInZu958rvcD-gMXA_BqJBGa2G7D-uO3I1zvPe7YsbJbUGvev_MWoJChIXM9-IpbyujZX3jbjEnmMzszrGqBWCuCHB7uRPObbfVJ2liHl78UiqJw8CO67g6Sc3DxFL-2zV8Zm_WuibKinjUCeIpoRvPRFRhxq1C8qbeWv0HS5gldTFXstToVe5ZkJiXueuntTv3e1Tdm3_PmIhdcKMR7B9MNrtqTaq2mJGIuX6r3Mbm_V1-xWWp7q68SayBTo7ZJvkjxiAL7J9p4M\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"V2VsY29tZSB0byB0aGUgbmV3IGJlZ2lubmluZyE\",\"reward\":\"0\",\"signature\":\"fo0h354sVKWoXysN1fRh3N02vQhmmWtu4wAmh3RAPKkPqYYpWgZjhzVxJRQ9USiLJTA3D10dIRBaseNAgPViv8ziWrEhiq78JYCfMuvL4M15R0Rq6PhJSBGV7CQqBHHkG_UYkum36brskZzrbzcj4fZwGej06femHmFUkCLKuCdn3Co342UeLfq52x_tUAWN-7gGk0ivY_OrWOKv1CsATMuBSzFFd5we9ILWUc9aCsqXThD_cehav1bNJG4jPdfgBE6PJ7eA4bdHRvyTlkbJqXn5IJedItV552naSnRd7IrO_9zmy1z9V8D2vV1UchwE1EdXuUVaV_jmb2EXOniyZUT03VFDiGw7P1qWT6ymXJ9k3RyZDcYg-qF7mpFAqFM5B7Dp6mzg-1nUAu1CbR0WpV_Vs3y4qRQWjImw2995Z4CNRJDOvx4BfHmHx8GhQm9b0Jhv7jhVn_3xo-zDz0fF-re1spItL7pO8cNr9-moGL2qGghWYCUPuQqeeSP4lnwRTUDs5FQKF-xM5JDU7q8hwhpG7PlIRBpTOTnlKebjQ5xylXDYd1T9W-HnA-tFOh_nZGSGtP0LKp5bVqbAaErzp3bd5poRIdcAVl_0V8Z6gmYk9SqDH6fWOVpEwr66dTMgqHXXWeDJeeIT556dwZ-mYGFltDkxgMZgGoXyP0GJ90E\"}"
  },
  {
    "path": "genesis_data/genesis_txs/mGAMsTqBzau-MjTkMS5Z3g2_nUD-qQWeLtq6qlzkVl0.json",
    "content": "{\"id\":\"mGAMsTqBzau-MjTkMS5Z3g2_nUD-qQWeLtq6qlzkVl0\",\"last_tx\":\"\",\"owner\":\"yk6SsEVtLmU8ROYXEGIiOIGfpxlsH6TA-uhADUQhd1FGYYriyK0gx_Ly68dkJYkSZTnaYE8sht9j3PxRwJSPpywN7ph0SyZSSsdTsxiYs0GFCzY04PIL6iev2PlbayMDwOWWQHBPgoE_yJgW2i8JoA07xOHcCo103OPGyiFySiIoaz1xvKcP9futUTuzlkEoQYJt3mDYknpBvKrcvabKXNUIY2h4ZqRDR0i26TCRxj2HP5YpviOHMPPnzDlIJxfp-5DPCawwnShfw0R-G00jANKT0vDtPz_iKImCtGfUYxsG8T7tM1jFLSZboUscirELdKqjE_yNMS36bEG3BW9mL-1QGK8_MXRapcJKp0HFaOIU3HVVdWO7XHuOpr52lj5FhhuT88cJ01sq3OJGBDVAUnL-EDYqj0QdqiiXwg3MeqdDL2HGYYba9Eiv1bO-C-zaxpazZR560AprqNgNDPEvwW_-PT_83Kk6YKINoUJNY4M26RjGDef2E08ZpqLArAVgTfTQjKcT2fIROu1on8tSfq9MYEHxqeT_TA_ftGmqGnGAlSjZKwvu0EMJLvl2uWs6eZuV1tfo30LIFdFuk42ReJud_Mwb1feEWK-4a0hO-aQEp2Z8zIS-LIsONZkTqM0SztM-G0sqcalENAM66nnRKH7lE4Ua0kEHqha8u8o9SPU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"U3RyYW5nZXIgaW4gYSBzdHJhbmdlIGxhbmQuIExhbmQgb2YgaWNlIGFuZCBzbm93LiBUcmFwcGVkIGluc2lkZSB0aGlzIHByaXNvbi4gTG9zdCBhbmQgZmFyIGZyb20gaG9tZS4\",\"reward\":\"0\",\"signature\":\"DLcLRE4l-2rBY3hd6kHDIKQxh45DkiRiPnmj3VkKzz278XriVsi1TWrvXCDsG1If2r_v5p3L-s8CqGCnHHiIJewVv76remhYZv8xE9OeelykQMsmYr73K8O7ZqD3yFriDGjG2_zNEIx2HAdHedyKc0kS9Yi_Y9vzmGItbbGxTTIo0LQ2-B6130hYfqMZ4uMMRrlxq4AQMPCYmLpsqVkb8P6RnJ2qN1N8_KA3m4PPf--omO7iNk1ETS0ng5ZwoAUknRoSIFT90H13ZtEYtHks6ar55t6THnHFSKtzF5cCuPxjze0RpTs93CrQ6CAz4GoylcPZtNfHt2T9IL-KNmYU48d3_zXC34Gk5QY2RglBR-ijMegcmjeaa6b_5K1rrZioXBRD8kP9U3_6QfhOpnna8zLbDIgQYNqlvpmwcSLNWgsHvPhVuV0tdVbhKyclnkRoNnkctLDMJ9gQtSqEf236zjnkmbcqi-2rNSwxWesUG0hkl5Oi6LpyMM-J0Ok3ueeASyArKQuKsfJ9eDjawswbX3wMnWBNPv5e7Njwm1GTsau9Q81B6L0S1AnX8zcvzMPjKKElDhFp4X40g5MU2KSehGTSaKs45jGCt9bmP67fVp2ciPRQNzGi5z6CwHZH4KvCEcgBLbtkK-_PniAIxN9sUYbdvFRnrxAr6eyc1zRLvkg\"}"
  },
  {
    "path": "genesis_data/genesis_txs/mJUxc7XyUp1HV_VRoi_54geidr26I9PUaiNL4msSNxk.json",
    "content": "{\"id\":\"mJUxc7XyUp1HV_VRoi_54geidr26I9PUaiNL4msSNxk\",\"last_tx\":\"\",\"owner\":\"uGga713Es8Iwb-rqgBXrzXEXPlNCxY0MZLW6KFCHmyDVS78PmoG6JV1eu4yxMAlnfF_sbr14yqJgcJcfIz4jFQjX8YTZiGRL-i0HcgD7ls7OU1fYp5NkzDxeuWdp5DAjm1s1h7uaI8frQyYQiStE9g3p2Hf2n6UNOPYP3rHtfrWe933he8m4hBO2r5UkucZtwS-nDirrBa59ybIzlOdF8DMDvyQbSx6-mTQSWtkrwXtyggGO36gVyHaviaDcYNJKy8NsXFmsKOExzvH2zqtJbidRwwRHtHZPvVy9xdetDx3Ira9iOAuYNWQLmM6OMG88SHaBYD9684Y4-BAgyf9b5Uzn3oj3oGKdvymKBAk1Kt58s2BQZjg52Pfd6JHDBKc4LX0cVUAU0eZlMTDus3BfUnGayT-suKi_1y2u9hNMEI2TWieIOmUUW7YScTCBi3m7V91VbZV1gJh3YozXEoeHM6_OImeOEDsNXV4eHprvPCWUc65erRqkkHAspW2AFuqIS3AZ3lXDSmCcWWKtHpBhcvjai31plK_Xt4rEjW75DeuvmyRbu9EUYEc7Ut60d1vpvn5tv_rfEOekxHdpV-j0Ai5FuQMdK_DD0xkO58GrSayluMsr6K3XdFVi_1zwhsL4LegNnafaEOK6Kx62_Np2u2aU3BZTPv45enXu_SZyjqc\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"Rm9yIFZhbGVyaWlhLCBIYXJyeSBhbmQgQ2hhcmxpZSBTY2hvbGV5LiBZb3UgYXJlIG15IGV2ZXJ5dGhpbmcu\",\"reward\":\"0\",\"signature\":\"j1Qmj3BRQoRk13l1GdfALqX5mSrOplCvxZLzXystMpEJzEXO8YukG4BwIDIz5XrZB5ceYpLxYr6AGVDeSmxbtLIMBi93J2YkG_wFYRHAj7g0UWu_DA4bBH2X1gp0mIvRYtWsaFGc0iFV6ASITkdQ8zzhSmBz5k8Npa_dzs2OGMZs8yMtnDsqOF2HAOUA3wwTFrUgn9UFZX4tZAz-_I1ubPNarq-wK8SNt8SIReiXVP79phdTtJ1ARSPT0rq0OnafIOiV-L_Y9OnOEUH226JCY59_8tuCjNX1cuCL3M2i0d_LZfzN1AOiEALLhq54CXxWozIiPQQYXF10ry8CQNJ_IROU3hr20erTWAtRTTTWxaNozwnF4GNRyQrtrECXpPKzdY9RZADMAqu2r8kJSH5JR5KfbN5ui7tQx9KgAIAJYk_E9DiY-Rc2bRyXp17raCwTN_EeeOknZ1T91mzFv9Sst2F10f9YWS_ubd4o-Dr1fd16V4FlqDXaLzoYsW1Blsy4GeG6iocDB9BzoZVCk36eveGdhWU2Mc5QDxwZ39gSiQrh5up9QpBBrHlDq8LW1bPngqQyZyTbxL_3b2B6s5KB4Mj1W3HNlUZK7rWlW1Tfo6RwzKzE-cE94WgW1OmClPExQHVAP5u_C_DiMfXHtHywjye8255j4bKvTyVY9JT93kI\"}"
  },
  {
    "path": "genesis_data/genesis_txs/mcFln0_6FIuLwE9GtMRzmdQts4QALV3dxQkXdgSdO2s.json",
    "content": "{\"id\":\"mcFln0_6FIuLwE9GtMRzmdQts4QALV3dxQkXdgSdO2s\",\"last_tx\":\"\",\"owner\":\"tQVoe4e5Y-3CDV5V84p1KxipTg-SSxHfwkDRjdSsuldtbYTrTN0OCkaz8k9bx4JbVWcpoGrscMAO23VfvQwj6wWVEOsredFn9w1yEF-XAZ8SSrCM_Hga5ZbvY_HnYdjl1JejL6EpnStLdalHctDjg2z8JZ78vmCcC4BWgVUo0Z964E8rDSHUmafoyN0S-gj8AW2jXKgrWrrB6B6oLdIrxuobJRoK4JWmg09Bq17eIjRmnomTg4OSlNO5KJoNTVHHEwiqSZywJgyUQcHFPprXCckIGmojduiIt8tYgIg25HynxhBOBA9OUbdx24GbUNDU2pC0q0nBN39b6mSOKinRLtuJzaAe0N3nUqlEjInRm_N62OrPebsaQAmm67Z4e0T0GRlcdp8mtMxoJwgZvWPL8CcXQX8r4fDYj6eugpRRE7RUbuAl0_kYhYTt2voS5hHqhoW4HGjGm4GgbPa73q3tZaQ35MFVWBeHosHpKa5w_UOmZlJkZ-QNp-F0fo49ciWe74alxzZsuCLUtawd7k3zLz6hmYhRDu1vqT_6zPKvIHwRmbZ_wuZC4o_3u4RRlpwZ5xM-flA4oIm8qATEtX62T4HYkFYbvctki_9OiD2hfVatoSaMr_McCLb01WTS0AdKvhEIRJA2zq6eXisiPHoiQkyMLVK5P1b1tAwTZnVvQyM\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"UmVtZW1iZXIgQml0Y29pbnM_IElmIHNvLCBzZW5kIHlvdXIgc3BhcmVzIHRvIHRoaXMgYWRkcmVzcywgc2luY2UgdGhleSdyZSBub3cgb2Jzb2xldGUhIDFlWnIyRWZWaGRCZnBGMkpIclZUckFIc2dIZ3l3V280RA\",\"reward\":\"0\",\"signature\":\"lJRC2RIDY6t_e0Br5nSwTRx7PnSfrMBm5Oz5P-k1uBqBLCxMUU-pinNyH0tKEsWZDIGUFS0hSl9h7B2YkbksOaGfLbNxEFsNN15sDWjYNCCntEMXXkwcNS4Pc4Kn-sXYAGqkFY_VPiHLjv-pz4HYZIYyoo8re-0ACFaWavZTQSIYpxZntVbJXb7mp6wG-mqCTExWGXI4sKUyehIYoRrKVMa0CIIWBAuPDvWnoOJq1ocuFi97zN4rFiSTyBwaNuWkC4Mlnytd6lcmbyysM1pZsm40v95ONlVu_U_IfnM9B3Fp-kSlEvSf3CSI4snqf4Z3JX0ItfaBdjCaZ6AqOfI4hyEs61KDulld3ox3bso1fPvMfkZCgPNQ2UbYyQhe2JuTLV90qmCMJDhekZAakzy7r-u-qZ6bCo00sW8J-Rkjdwp0UPdv6pfx45mQX7UINm2syxV_cgJTO80Rvh0x_bwJK3KE7qy6yNvWLNi2d9z5DjEHdeRihwAjvlVQM80q0_A3sDOiGPbYExhCQG7NYoZ5YdWHBzDACbIEs4rYLl1oWMPzJBzwpLioW-c_zfYRLmU1eQIlQjhXky4dfuIuTLANXcccvPNw0_8Y9Fk5ctgEiRDLUW_r_F0u03CghtUyywQzyXrxs_dCMC2yWvRRk8idVW4h8Fr_5DMOZ2JAsNt6HNQ\"}"
  },
  {
    "path": "genesis_data/genesis_txs/mvGgGlFTDJ0ukM6Bssd8G8B5PrEppr4Sg1_NTvzzV1U.json",
    "content": "{\"id\":\"mvGgGlFTDJ0ukM6Bssd8G8B5PrEppr4Sg1_NTvzzV1U\",\"last_tx\":\"\",\"owner\":\"pGSppYyxPUmpTgIYrZOBx5NuX0GNpEjT9wtQ4TAuje9UlkzVGNkjJy1YUiK41sDYDcLy_ayhHS2xee7Yyz_tNSsvfO0HZ70ZdJxpE7746fOlLJtMCJhog_IvCu1lttPQBd2vHKCqh2Kdn0KL_QcIQI_6OfSJS6p3RXFEWjkkjzgGZ8Y4cTBwOUGObofbzb4bjccgpUwf8M1SMmw2oR4ifJSlrLc19g2q2WwmlRBSd0tTLoDwnF9FfQh_4lXKhTSj7ibJMnp_szLy5jI-5VgEG568WKYDg4cSFKpVqvELnQtktn6spyXIUVVm_X5y0BgyqeR-jz_LWRJL-Iy-ct_aEkFaAUS-RoyQwzjIO7bQByVGNQsOZpOtzQaQ3DFyBr0aE4_JkDDOK4cqLYY8n3d2ERRELcST6pT3xtsV2UWI66K1anwifO29h1nXHq-3vuV1Jpd6cFvY1OrJQcvLkUw9MbcRm5PiABppNjlYT3hPsJJGQyC0O3Ebid1STJuH5WmsxdX_OnALQFcCgkO8ZHg1A4Du_ZR-QCX4cyLJidA3_NxoqN5zixhtBfEnM2doOC0w3fFp6QEHSVMTmQQv4hEqc1edEdYjMB0iEhus5MRorNu25iX4WVHO8XlavMspeNkc4emzenHBLcA9qQasVaMytiH1KmzCcHkYgnfo05WJ3Ts\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"TWFrZSBzb21lIGhpc3Rvcnkh\",\"reward\":\"0\",\"signature\":\"aC3hfBdG31cZXz0BlFaBeTuQeZUsbHR71cOEmbKmvyKNKJwBh1SBs7AUZiY5ropVHwk_zYwH4P0VECe8w50fLLJrv-RKvXuZRUMJoO6LokwSKctpb_fC3taoNEHFGj3k_L42Q6urSM6hN9fvHajSk7-gqObBlQkQ3hEG0fmi5sq1yV_p2wPIsCF52pCu-IN9F8iqfR5znuqGV6fpF3v4hG2hssy1ukfiiIgLXDrs7tbDYbId3u-x14s9yYSQxZWt6cMnFn576HJVIfFbtapBJiJY5cFdyLqa-Gz5hLghOF3ilTH71txvIfBXhrsTyeKbx_7H3GpEPdzOOggDUqF4ZDcx_jmvh0cGdwRTVW4kTJ19eK641nEalxYbcZnZ8ORX4SVfNGtg0aG_fPkGb-L6Hk-itGnP85WW3mMM_35P4b0ZVjDxj4xJAv8ZNmlPWgyr4pXYt-6cLzBf7nz4k7L-DF3h8CCtwDb2KDH-t6AQ6ftSthIid270E6jhj5FvrgneUfZlXBBSVVE3Y3uKKIdta6tUjJdSurUpNwv6rWX1urvzDlwi4SCaH3_FJ3bdTAYWWlCerd0dAYNjh5v6eGCaFx6Izwus8HKRDW5Kcql2dUZzBMxXMd5K9UX9ycTVPCsCPTnHSoeOo3RAfw3JaqnNqGBv4K7aiK3pm27rs669MJg\"}"
  },
  {
    "path": "genesis_data/genesis_txs/n6TKbsqmGl2m3yH15RAe405vYZQ7DStlvYsHCHp1D0U.json",
    "content": "{\"id\":\"n6TKbsqmGl2m3yH15RAe405vYZQ7DStlvYsHCHp1D0U\",\"last_tx\":\"\",\"owner\":\"vaYWXaJiRedVrwBFqn9YCh-s21DXUuR5TxMNxUM13sPET19RwH9_wFqjKzoHsAVDjt7OBtm8FkOiD8H6ZO449gWiGw1fVGiNEFQJMMQ9EanQ7NJ1vPD9u8bPAB6FCmPBPg-vpNCCArOPN2xY87YA_DdTGuKL_bVTxFrq4jYP5PMfPie5JhOSmD2KpmCrpLvW8GT1VFRVUAB9nbta7ec5U9KN7rd5aQwrnL2PkDPXR0cGigZLDB0UFUvlNXYR2SyBwAeGcBYfPLmKAmRTxtv8FhceH3UzlpYIWdiVDyk0RPuxtlYUzpvTi3yLEX_SyhltxUTd-qlwumi28pqUQRjPm7uqm2iC8diIZ8eBHnCRZHkgnXotBsMWhNAhmzQbG1GVS8EbpC50NmvZDGNwpnfnJuXHws1-RT_8Qw6MJHwhJclc1er9rTs5gqc56oBOBhDoZS_iV_XR4p3BFw24IqyPuoGQ4PIVcqbWhdns_uJ6aefUWrRp-WCsprHVWJZy-QtyuARld6HCpFmVYxkYHTyeEaVF7ywNX9sYvQzTM2y9bvmeLRYYaPeMvFs3XsS5is-yVSl7ICG0xs_TUvcGfOdByf3bumMqY08if4wdch-ye7tmVBtxGMfjekIeXpxMqAwNFgKjErLdAtr43Xfq_wcOGQBtVHiWEhQddakZueuLtok\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"VGhpcyBJcyBGb3IgTXkgU29ucyBGdXR1cmUhIFRoZSBLLkkuTi5HIEdpbHJveSB3YXMgSGVyZSE\",\"reward\":\"0\",\"signature\":\"rJxe03qPInWG-BddW1G9St2HSjnkUmGLg0BVPDgLtx6z2A91tP06XpuGG9PVBJ3W3MEp-biqFbYIeXaK7LktOc11lN0S8sj2XpZbxokgXA-C68UZ7lS8YJoQ53rPvGYwMfGSfl-deOCqGsKCICFt9_AY05rjj8KD3c1zJ3zRNlx5xl631z0GOGyrUBWflZTUQW_Kxp6azjylNez1JdF2EW37ggFOif1aYajks6LiFeYsdjRSeKJ8rE6jyLNxYZXzzyVUnEgbfb1tuffVET0npbN9xdx3-Rjh-pEU7wivgPZWJLnE0q4Q8Uo8iGfdlD23GuP0Zr3JwKfMdEJ2Xfas__M7AoD7wXyAsrsiONqxhMQoibNAOsFjl-T6z9Vl4Fam6rLZ8GIWG6F8rQhg11o4j7aMISoIfTJVOqcyWBfxao5S53CDsZEkPUahJaewRYs1lgK1XXJjf4dvnecmYY9BVf10DwEDNxlc6WrCX4GJYYxcrMblAONNe8YJQzZ3O_LI8ObQ6-7Zr1gLC1kD8pe8o1z0izDP52JJw5qAI3iBqf5D41l3rGwHUpSolw6GmImjQNNoJ7NP-YDsjWhSzvQsD1y6QsULcRhqR-ucEHo-PdYfwmcBn9WkVlE5bMXffQSvr7fx3Iosrx38TpHQ5QoO96hwmp6ZGttiCJd4XspBtrI\"}"
  },
  {
    "path": "genesis_data/genesis_txs/nXGMduBKL3mpsnFNPctfjEa9Z9zlMpdxcRrdkK95D80.json",
    "content": "{\"id\":\"nXGMduBKL3mpsnFNPctfjEa9Z9zlMpdxcRrdkK95D80\",\"last_tx\":\"\",\"owner\":\"xC9EjByPG3krj1s0MGPbJzxehrr98lKqshtXr6ZYBIhKyt67Piq0-ZjXPs7URa840-Cous8wchZGjdY9Vl6aTztBwMHfeHvyVmBj7rdnGUVBUefOZuNyhrvYJnnJVDcU7dva-gMQpCk1971AyXmyA8kSS5OAPL4-GBhWEbF55eFemxjIFHBVdgGEh2_mawhjvVGozncaKcvqLcQ8T7DCYddgUGdKaOpqXdsjYOeEevh9Kh_PJ5WBsamYqAS_y1O7EMWFaMtzcAV87lvHjmamauqXBnvMnrJG7QNvgcal71-Lgx3wa49fsFftxsfg_au9qw66IriVeRYBlFIK6bWrRAP2wyk9-UmJpaLk0NpKAudpMY2qWRhngu_tUNTjrVOhmDPLbyocJnbSDnAaBP6hUgc45x_A-XhrBzXB4ePsQObXrBg7tSOfbIyVqaIIymHD5Vtf7JRCPoeQPrS_jQ5FIqleg-fGEsqga-0EAWR8wRWg5kMCG5ssuHFuj5Qw9wUlIbLbN6AosbLwSh9YCEKshOEL1ivjOaWkR8lu2QvaJ4nRl0sK4OZsmzeiXfxcUGtHp0q8PZPrutqp4SVhvZ28w5co1WQgmY2GVDr4WO53YRoHPlhS6uRLk3Hx-Zhk4RcWdJxh_UNjR1Hl86LPlIx1nB2Js5ck2rAhWOgea73z2sU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"Xy4tLl8uLS5fLi0uXy4tLl8uLS5fLi0uXy4tLl8uLS5fLi0uXy4tLl8uLS5fLi0uXy4tLl9fLi0uXy4tLl8uLS5fLi0uXy4tLl8uLS5fLi0uXy4tLl8uLS5fLi0uXy4tLl8uLS5fLi0uX18uLS5fLi0uXy4tLl8uLS5fLi0uXy4tLl8uLS5fLi0uXy4tLl8uLS5fLi0uXy4tLl8uLS5fXy4tLl8uLS5fLi0uXy4tLl8uLS5fLi0uXy4tLl8uLS5fLi0uXy4tLl8uLS5fLi0uXy4tLl9fLi0uXy4tLl8uLS5fLi0uXy4tLl8uLS5fLi0uXy4tLl8uLS5fLi0uXy4tLl8uLS5fLi0uX18uLS5fLi0uXy4tLl8uLS5fLi0uXy4tLl8uLS5fLi0uXy4tLl8uLS5fLi0uXy4tLl8uLS5fXy4tLl8uLS5fLi0uXy4tLl8uLS5fLi0uXy4tLl8uLS5fLi0uXy4tLl8uLS5fLi0uXy4tLl9fLi0uXy4tLl8uLS5fLi0uXy4tLl8uLS5fLi0uXy4tLl8uLS5fLi0uXy4tLl8uLS5fLi0uX18uLS5fLi0uXy4tLl8uLS5fLi0uXy4tLl8uLS5fLi0uXy4tLl8uLS5fLi0uXy4tLl8uLS5fXy4tLl8uLS5fLi0uXy4tLl8uLS5fLi0uXy4tLl8uLS5fLi0uXy4tLl8uLS5fLi0uXy4tLl9fLi0uXy4tLl8uLS5fLi0uXy4tLl8uLS5fLi0uXy4tLl8uLS5fLi0uXy4tLl8uLS5fLi0uX18uLS5fLi0uXy4tLl8uLS5fLi0uXy4tLl8uLS5fLi0uXy4tLl8uLS5fLi0uXy4tLl8uLS5fXy4tLl8uLS5fLi0uXy4tLl8uLS5fLi0uXy4tLl8uLS5fLi0uXy4tLl8uLS5fLi0uXy4tLl8gWE9YTyBDYWxsZXJibG9k\",\"reward\":\"0\",\"signature\":\"faBgU1z4hdhijeDlDUFO237huItulSISMDCfLrHISKLYN8EMJMeOt0zZFBAyyLExMtW68LQKKLRiudTjWoxoXU73jBbR8KluvDUB74xobMoAp8dAaX7XYoRq5D5HwTHYP2RAojMLgS-n6yqpfDoF4as6MfrNQXmAzlgsjD8hE8VXXWX22cTwPH7jvtxZaQrqyLB-uOarm_5FZShv6ubL7bzsYpHYD8-2tf9e18FdHY7cO1soFA8Ar278-epxYtc_BKRx2R7brF22LHrWgCEb2v1ZanVl4VF_xRrR0thUZQJ2MgY-nmvPFma-lVzn7Yp5R9TgHRjVZDyS1Lcp2mgtRREl8jU-LCzcOJcQ0c5t_hnIIAPp988bZziRuRj3mnqPvlja-kBgmpVXC3ZG7TGYZpDN8S3XczsRafa6IJdbmXvtDjyuM77oHwZlbnxDdHVvoSYgLlz6CNZHFc80hwTc4ajE2VTCoaNu6yMYLfQogA7zUQrnBwpztHbeiIRlDqyx7VzB_Di52njsBTM1zAM8wTeuAkx5uFU2cPGRdFnQMZcFMao7v17R8biR685Ou-gfFdEWSF70PRwvMjmQowDxvqXRrzow91j6vsS7IY5cZOjoETg_GrGfjlUEfhxZGmzweJAglH5-qcEIEEJdxKUSd9e2mqOGE3B3FDR0pwJrhDg\"}"
  },
  {
    "path": "genesis_data/genesis_txs/nh2sbgjxu6MmU8yGV00w7X4q4XCJETeYE3zVtcj2ldk.json",
    "content": "{\"id\":\"nh2sbgjxu6MmU8yGV00w7X4q4XCJETeYE3zVtcj2ldk\",\"last_tx\":\"\",\"owner\":\"60xroMkdahbCtcpxFC0U0NKakMMAFuarkVM7GH8bvubCxL0cb0Ci2xwyQquCJ2v8SKcv-HDR1bnjGvb6x_63sQahzqc5KJF67xTNi0L64u4UZpdGWj0etqm8rzE59rFZylhMbzcH4FiEJrIcX-6r3ZRjMEQPVCedji5gaVVokDWUjukPRyE_0Nk7alMXNrPaeEI_6bcU9V93VYx7rwoIfSBf9B6rBaFQqz0FCr4S5pt1rm0ung3DeHyls7Tq0M3h762G7LrJgwDcDx9yXuW2mHw3WoABj63uru02h_f_mEfk1ZLA9nDGSJTrd8MtPB7HhibfwIhMm2w9y3WMakzelCXZve_JsOsc2CYz9Um1TWOIm54y1JCJK7zYIRgteR4BdVii3G7XXE1nARSN_Lpor2Y2WlOnVEEWpg6cuDg2rXzGjT-YX3eTLYIdYuOfur3NKzTVViONLlnQZSaarCJGXKgrqFaqpM0w_InKseIAOvMFMfDbcmLvAzJI84mE6hmUeWzfKTaBAok4mq7EKePtEhCQvNJ_Qp_gIQH9bcmjTB8-Xtu0OX3NJbfEH14yCabuPgm0lK67Qb9JHeGUgsRMAL0SFF_P5nToVU9UwhQlLPBS3Gkvoum5HqJNBR-dJPrlVnc3uf0_PEWFJ_WzAse26FZ5PualZu0xUFsEgkPK-S8\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SGF2ZSBhIGxvb2sgYXQgc29tZSB3b3JkcyBvZiB0aGUgZWFybHkgYWRvcHRlcnMh\",\"reward\":\"0\",\"signature\":\"1NzFNwdn7mTUEeG2e00V8g3EoWJt91WEvUzhKYjSq3z0y5w-uqxijeNKdAUyNveehpYI-gawS2jCbDnhWjTke5n7A5gBo_zV8TTjoK3j28rl9SMpbgU0ofd_Y1qkku-w8BOI6lvwiWT3lBaapOvK4TYTAFDDnS5kjBq1RL7EzW3mJZTlaRkM_lteKcaw82KqofL_OoN1maKIMcLx_3fWdl2MyKzqemtRwazSW-5aMYljt3hn3481FupxVjHB8J4aFDQDWUbfXNw4_iDjwUshLQ5zky4ZULshMLtDv0CaQ8yshtOiO-pnhpieg9qhp07-ohfX4rcwx0Swo2MXFepGSYWnSryxXplc72rxDdknr-bW5gxA7x7Tw28tdS_I_mxM47LKKdlYp2g0jGQ2HuO76aCeLyf1b1gWmMjJYwM2XXKe6euEbkupMMp6augvIBCnS7aVGiiiCPovcaRJW8BTZFBmUhC5izUgA-8nIburyAEWynWFYOraIjasQHJCpVVc0TfQrDux28Kdbd74Y0TnpoN-QNGUOln-6pLC1cig0FikJuQ--V2_qgAf7ofSlEZ-wYcRg3Q4iHkcZGbkN44fsMd-lFHhQGFNFgXJUMwsWM8PbDIgVyxi-aqv5tY9ch-gwOqwUX1u36pNrV71BFXFWRPnzDM0Zm9ojBBxn1lsKf4\"}"
  },
  {
    "path": "genesis_data/genesis_txs/o0nw6fU4gPL7Ae45x1BEQr5GkXSzZUrWnZrdIWqgx6w.json",
    "content": "{\"id\":\"o0nw6fU4gPL7Ae45x1BEQr5GkXSzZUrWnZrdIWqgx6w\",\"last_tx\":\"\",\"owner\":\"xIWQ43vEyFeOiUCOXgwcr05qeYO514wqrLD2hW3k8EJnvQKibwSOMAaVIPeiklHALEuvADoYNf1VlG7ezPwbbAHNCNvKa0v0-Q2v-Me0fFmK0BQyUU1QNKX6lPm5XfoqZ3nW-Z14uv4mBrIT1mx_nSIiEY2O_TfauMzMxkwZvFn55BtcNrdMfzDz6opIDPxfx_p04my4J-vtGXZDaI2psnJpAPHN0DPVT4Hi3PaNMllKcywmgzSm-XblgOSz_jJU75Ag6Vc3bjymiwO_ik-Co66hvEsjnvmdVXh5vXl3RVmi7iOQympWEVc6Cr1ak_EgAQNLGf6fXf9EoWaCBW3TWzv1oGT2upz3kKjxy96eAqe2Vi-reqEaaupt0hnMBtb8PHoIMSjyluxYg2QBfmjEnp4UkCvtj_exo9XwazVYnjUC_iAbiEXVTh7lZ5LIexj1Tm9qqFkfhwmUNchMdkpH8YAy6toBuu-DvZtDLUR77qKSxnVDSovPbJDyjFNv-90My6JN7l37X7leUbysxXIl60k6dqU1ylnrSUIgGnZtsU5h0ulWyux7kgzqJAu9HtxaVc2fP7_bxq2_fWOB6fwhD_vAhOO-4YmWYQenRJUsV1ys_3lADkk9WEuRQTPNtki4Z9QWB9wV-g9VQ9Mi1ngyd-QUQQ44lli8bFFlxLv-vYk\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"UExTIFNVQ0NFRUQ\",\"reward\":\"0\",\"signature\":\"e5nsoNFdAjguWEBIh8FtEioInh412ts2vHuzsZI2lQIAbmIU8TlCf_goAVADdch01vVwU33CEw1oU_nnHZ9fEL8qrldHo7t7tlEjni9c_gEL_TjWL-r6k1XY2dXG7isKLBOvBXzJEvadamY73XQ63-GUUhXkpDmgDcSVLl4ngm33Syr4fZhN-To6glAkSqsa0aB8spMFYZy1y3MJIVw8U-xhxASe_g6ApEsKYonCcxkdkiiUmhg73vg8OcQgW0HOh18KaVPOgWHbya0HYK6yCSWa72YndGfTmObEiHYu0eZIYe_3EdjMiDQDMuLBJd4HM4rQos4yiPDdhD44l9ahJpqR5rrO-pOu3pDHI_rrt5yyem_uR99RixA3PzRfuLQhCJJNdxiJl25S3hCwxgA9uqcP1t9XcuDc2aUT-_4Jmn6Y8MrC4beRonGuqcZcJ2PgaFJ6qJAoGcyNmhvOVOqvXpsVuetFrn0oVWCcZS0gsXn4P-Z3CnKg6BdSIQemCr3E9t2v1bq8pcwTS6i_dXFhrYRwZzhDUWFU8jOA9n-10CugZtvYj0JMsB2Aye0BbQ_vgRhpHdTschxAc9jc55M9lm6oJPgdSaZxfgwMPRqKb4Q7sH8ElAGwhHI7mFxMrCQWJXrZTR2lTHJWsQtILJwPvRdgnSimmXFmOUA7Sh_TWZ4\"}"
  },
  {
    "path": "genesis_data/genesis_txs/oMP40Kgd9MxLfksmW_HAlGe8Rn1Px8tpF-NOHBfe9oo.json",
    "content": "{\"id\":\"oMP40Kgd9MxLfksmW_HAlGe8Rn1Px8tpF-NOHBfe9oo\",\"last_tx\":\"\",\"owner\":\"wQlmBNAKnFasm2EcS1nOXBy4PzxbWTjbPx-v-Q6G4-RHdAb8meBTs8Vf1AMS3hNzvjDfn1-wUhsFYzd8hl9qMcjU6LqPoM_1sHl0jHveDgLVkEHrPlKI_JVb6uyIiJpHZZgPrz008blg0ov_0CmbbATl8DiDGWhguOIov3rNEQY36YULRoqb47ODkt1e_ZkVue9Bw3BE2931vpl73bn5qI5ReylM_zY70TPtKx4ntyhjYxdsqVxyJGbJCscXM8GQFV9VdIbozy3a58X3JLdaRaMsgEws5dGv4ILVwRt_SvKUVKMzlMMds3mZSWbxisLl4n_XinZMbdJySaPyNKwMjBpfT-wUtEfIJS_3RZYADoYhTTO_Q-OQA2elCDKVvmGhdJK7J8_SwBCdUQ7OEMA0QIye6yyIRAJFBvFZkeJ_PhCd0X5qRsGm8LuP6IC9k_UFOZppJmt4Um4XVsUiUrj9ngsJX3lhJmY17IZVSFiWivDk91Oh_EihbT3eBmF5iXUMvWQM5dfoNJARXRzALni_twjMGP3kW3AhtefeS5dZ5orgDLi4gJomu0tpY8_4VzoWz2EA1LgP-I6qpos06xQ_AWdc5UcsID-iIgSKuSuntEcyV7ZbpZ5anxrhta3Rw3lsEj_3WXR-WmWMHntuRNIHZxMi8aR5wVarFlXnDxFzUec\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"UmFr\",\"reward\":\"0\",\"signature\":\"v5HOJUO6EQrmGqhIrgjRbb5FDujsdSXOt6oPImSWrq7-o25L9pWZorhr7eBoixVeyMfp3_otzjIgFgO_O8aSQ6syGsFm7d6_k8pxgpipOMtwQKZugBVObvp9KUOXq7xntTzWYsgi5vTQzSyKkxuPWNgIxg487NFDbi0LUSyQgGNNH_NwY5ZDtq2Yu1IF4Uf7G1BXxAa0nLKInsUdg21EGDy_fKkF6869gmqA72cgRNxYe34x2dOhyznQhJNQf7BKX5Alr11r40_iAzQs38tZS7Z5FBqmZYIACZ06b3JsNEXxSkNNxR9-mPRF0levREi4JHmHCfo1gynZoMqxSAMCHyAtToHYvNn4xaqMBRZ8uCwUiEpAZwSEVznAGEbqOkErg6SknpFhy-qcJyLlCbbDAWu1RKb_UYHd_7FXFuB6ztrzGKD178swwn0mStGW0dcdchu7L52qcaqoBg-4lZYj5DXIDgJsZp7BVJtNj8vWRgsJABKyjeDwa1xuSPY4ybjDwcnNk8x_oVHcjtwMRLTipCm6rMVWQwQKu_GP3K_c7YMEsRzNmmuSzDV0O2kifbIyrrUtOSO8IjH8eoCCoKtSufZKwsGt-DfTcTcFSKGEYl6lQc7sUqbg9WjoSXVZ1iXJg1tkk8_PieceHRf0UOR3pvHG3WBSaNmmMVwyDTXf3rw\"}"
  },
  {
    "path": "genesis_data/genesis_txs/oRvFwVpHVeo0iysSg2jFOAZKE-hKwbm6mGeZ6VUZmxk.json",
    "content": "{\"id\":\"oRvFwVpHVeo0iysSg2jFOAZKE-hKwbm6mGeZ6VUZmxk\",\"last_tx\":\"\",\"owner\":\"1SJ5MkTz386sln5Fpaq68cW1A38Bys_WivwjsRApRPxAaTn-1igZbakDBIOTtLzgLgRzrJ4qy0xpIOQDNBklZvfNO8pRNGySCyTBUmSlPdR46BxqBG6XekEajaAwZJTAQQV4T1VpDTHagA3CdDq6ho820NuzOxuJlUK-qTkHs8wtqJIyKjIZzGwF-_OChT3VNHUwZ2x9oTy2T6gAGAZgrwv40T4PB9eCJFUqvlwzSnbCRt4MfYsYxkJGcQwpIf-EoMsZpDyvWgjfoVzkkrfJPUvEEZGJ-b8nWPS9w28Ucf-iIl_O4z45CgzMgIKs1ZkWA7DcforBK5eGT1yMr5pgTATw-Semuq4eTIscfx_DKB6zU1JFL0pi6OipT6GWPgJfG60zowGYaqb-eRT1o1Jpd-yEE2J3qJ-x34Si60pFM1YnvzcqGj7Yscs6nuETmIO5ZDxAqCQMxPnIjWIjFlt3SnVwveXs2HKfeLMsA20NyybCZpNciSPTiLNyTXURbwyImlunG_JtOOgrOErKS0UYgbGxhDzVFubqLgmwri4gyChpQKLoqlcXZGwTsSDZnt2nBUnLPieK9V83qkG7OV6U_9PvZ9fEaN7mLlI1Whx9fxDlKDuZGCMYCc5b7rZVJW0iEuscs90LvB8LkghfQQYrPjC_JS4ZK-gBctgMi_enS08\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SGVsbG8gV29ybGQ\",\"reward\":\"0\",\"signature\":\"BSDRzmUlPnIw4auBQ0hcXSCw3ZeskKmsoD7UoQMmGR5NIDrsnr7BfHMqduf8XYVlg3ZnYgZrbFELwsk_nuh7MqqqpVWEGvwWy6IqbzsvBBXXJmoYN2b0FQcpnOZVelOC8y8fV6hLwO5w9-Xzu9fviYkcfbkHhYRPiDqlKtyL4avGSRvAWfLvowzPr3TBTcqNV3USRpzr9cjgfdN7tTn-hCVIVM0zgykcZRBlTor1KHs__N5oFCNRNCmqX7R6fOFvuIUSx9MjZrBKicNy8Bbp8t0HMSx-1AnsiQOJI1Q_ZzJ1Pae7mZJNz4Wmzf0hk0hUR2lzcqpgAYw4kHqxtsqrEBMM2iA6oCNHJTEe5vyx7420fxKULMAh3fYDp0e6EV_zFawXUro6l2xpr8PEFzBSQBe33kSUmxkvH7gs2UUx7ywIWpvtDGmqQ3-WHWTHlq3UQw_BxVRvrDmYtBJuEQx9pwAKZoByF5FI4wD-lHhy0eQFz0Sxp5qWFRCLOzdn5nLrqwdXY2bjySZYGB-p8a1Kyr661Z5mYkZtsc-G9ynwhq4gFb6FJdpm-axPO09BoPNzRqHY73W0vH67G0ea5yvuIMXXvdmFW8E6uUdo7vemHpBPsmqXCcGYXYQ7UE0uIjm3PbnTqUw50iA548NV9DqvV_IRKEf2NscDAYHLrEOUdZ4\"}"
  },
  {
    "path": "genesis_data/genesis_txs/oWWJcAiBCxhtWkIqwir4-vTvD3JFpHgZRNIpS-Xjzp4.json",
    "content": "{\"id\":\"oWWJcAiBCxhtWkIqwir4-vTvD3JFpHgZRNIpS-Xjzp4\",\"last_tx\":\"\",\"owner\":\"yXmINDaw-LbqQYSG-5e6mEJYb1VmGP2XRYrzY08JZfXbRYvUJLLHM1FT65pIbp75FLtvEP12sEpnuXJhipQI0YOk1eAiYThvbFB_sM6nH-Dr6Ex6PfhGJnMpHEbT-MDSwbW3ECIWR2QLd8WdNZE3cZmYUisd0liQs-tMl0GF2hW9yDmb5gAsgJp-z3d0xa2D5sRAz6gZYYpvKc50fczm-3VhJ_LlhzB-MHAFJ74hI7clKI_CAKQuf9SlqkYxuPMsD3utmks_xpQJGjHwHgDfpeAz2jYHPANCMBE12gUYyNebrHPwhiP4IJLfOPBGwRlRZ9amoVzytXVeki1wWOTzk1dk6bj4VMwa5ZJn6eIhbsHHuhL6EYv3MLRZiXfiUMKLakU-e5s3KUyx7C7GVRjyqQ4ffgxuzbP0f3mnhai3mXoWE9qfAoZi9azbrgRGyQyON3hxyDsk5D6-R5AXEXwZXsFFBdc8oaR31yt1AIoO7mjqNd5ouHNRRGSfga7QYO260rYS8vs4C848aQsP8q-Npa7hLBVLTIIROKGPuq9jI3QWpikQtj3DyThS_qvKlZk4umI2_MFPCBrwcWRd399OIU7NItCf5QWiFgZouvqQjNBH9QGZGtGlkZ676YQcHGJteDXzXgmllX30HASN4r4M1x1djMbTpPe_VpRa12qbukk\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"d2hvIGNhbiBiZSBhZ2FpbnN0IHVzPw\",\"reward\":\"0\",\"signature\":\"fogKMEx9wQCrns5vXPOVTnmVPGzkHLl3ECWiBxBzjWvDw-YaPyb3YwOM_7Bxdxx9y92Fz5otg4rOF4rSFPwwAxXP4kkbGOIWe7-NDC5WD2L9Qk3e-ahDIl2q_oEPoL7RBq-mDcwwfoZyyiDVKLQQrbEi_e4SBJbXV05oWysiZW6r1rMr-kdHQwEkig3l1T1eW6w4e7IPc2SlL1aaR_FfDV_YNWTGvbcZjAo2IAFMbz4VK7IuvTJl4T3NHnDN_hX55JCqOSdgREnNFSOjKKsUZRRZDBk2eEAWlswGIQCB3sMX4nwJ1zPi-UnOdCrBS6JkITOPXjA1MFXBtM8S9FDc-LNwIml3fXKx9Y-hD_9hYpsGL5QA0KjR-hQRZ4eWd7gJxiTONUG44RjTg1l4_mCxKV1sg24VLNvY7PA9pNwJKlya3VheZ3An6F3wScu3yozx-NgEDUiJTYcI7D2m8WDqSTUyidcPo9dPa9Zhj93dDSWz9aFRoAhRcq_Qd6DOzgGiqC3GxGWQOVkg9kQoqg9VaO6rzMgxrZETe-4G9IBLRnECnI3WkgxZJwl1HI16tQYu8TksK3wIC2lcoVAe13M_dQnaEpu5EwNmL4_PK_VEb8n_zg1nDkTNN7SqeqiPx6h2C4jAaifHtePCNEeiSv0Ozo43cDzhL5rZEefMPFgxV98\"}"
  },
  {
    "path": "genesis_data/genesis_txs/ocUISm-0ItAS-N3Ydwe1swo4JmoVpRzWzngFt-pDwfo.json",
    "content": "{\"id\":\"ocUISm-0ItAS-N3Ydwe1swo4JmoVpRzWzngFt-pDwfo\",\"last_tx\":\"\",\"owner\":\"zBKa2Dqcs23BLxS92xHrUzd-kQLL_JEWc5wRuN7-XqDF0Qj8uCnkLmgLMKWAHHVNSiw9wP02iF9CHp82tifqeternf1ZgaTKTrHEEPVfzfD70kB-C99XDArlzdlDRHlFcq61CBRaLKjncQ4mTNtxTKETJp1QDwHgdiZikOkpZu8RiFL3nBmX5hTN6k8h_b_OnNq_-JRs8tWXv651MTR9YNZqruVaeuXocGDj0_F5dDJRO4Dr6r9M3Iyd8O1n8PthICLmSAa_jEaGShq2gwuuMh1_JpQIq98ik6D2rBQMMiiwwLVtpO5jUzo-zjUluTNExQpJbjUr9mVherPAQi_ENW7PslQw-FCEOdZItcAKWeqDO5q2hodnhTjB_IiDXr5eD8bmvEVzSLy1nucU1UDdZBA6OGZ39L0eTXZs-a6p-w8ED20L68ED4YyLpO6by6zC5Uv-2hQPohyYllGev64fGta70KZ_HktFNxl2ieso6pt7yaORy5SBO1uD89fx7H7CbglaFzvj15ShQGxl2QRGuM1LmW1Bl--6WgNXdl0J8WkD-dUYsCnzV7vU2EVMNwAlQW6W_gFe1p1ghTPiUDeiidEzPuL_2fFFUnHQdJ7Ih3OO7EcpSf46EiSw5dHWBn3Vmt3i-EgySzzPkV1s6GhIcfgO7BHIqpEjal5t-ZvU1-c\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"Rm91dGlnaHQgbG92ZXMgTWluaS1Ccmln\",\"reward\":\"0\",\"signature\":\"jYFVv5XU3zNqpkAcxbs2xW5EWNHo4qBgPsTT-4GjSOMkII6aKYCaQlqhAiaS-cUCmXL4oZ9rIG4MkFm-gTo8T702_N3ahbFjrnS4awdf_xgm4W1pIm7dieMV1rL6DG1nbTlEBKGVKGWRefT6S7n_qNAUT2Qhj3XZodJXbMLP16rOdMUmr8J6JGqj__FD_nDr2WupXrN8pQneeu4lXfejsySbMMg5jkpQz14J5iqdMnekEsLeuZc1CFJDABCxng4vj_P5XKMxt9-i98JICHoZBSHNdHjbwMDuCP33Lm0y84odxODo9mA0Q2i4xOWLIfglai9g8rr7_2TC1hHfP1ukm0R2V73tuCi8SDDc4kQBjPQrQgiSePPIwcX3nclTY-7RDv7dHrymyhlZ32UecPysGR9BnAKp2wvlMs2AXh2DdfZVY5vBmwEvoRKP6Yn7dNJVnFydt0Xw0Vv2Yow0Sd9Mdo23MCLTgieuQy2DgYKz2gd7KwUmdXtfDis4v20cdqYQZ3SK6XfUmN4nR3CcsLjuAro5j6DL9GRkO7ObJ-L509RjNs9syhQB9EMm_7n6z0xVakti6Ss7FqeDN100zopMQVhMu0WhO5-ak0ux8zhjP_7PzFEjFqNFjSHBJcM1l1MqAse_KLJ7MLE2Tvzpy2sAjXgFghuaQiEZtHIt1Sj0Y90\"}"
  },
  {
    "path": "genesis_data/genesis_txs/opfZTSNdqaxXZUmaKROD2sd4QkyNDnZE3u1A95eSw4E.json",
    "content": "{\"id\":\"opfZTSNdqaxXZUmaKROD2sd4QkyNDnZE3u1A95eSw4E\",\"last_tx\":\"\",\"owner\":\"2FoOHlNiLqKgBz324I9pWVsRtWUN09fNIkQ3DzytimN3mInF88YxCVUNNA2sMnCm9gyEuh_5gYXD76JV3DDSBIthn-MTkjVD3V-Ut3A4yUN70oQ7UmmRqCeprbKMcPOd7RmWO4qgtpAnkBO7mW54meg61hTil9GYwcSmtGxsbzNAg3Vk-i0hwr6nrSMgZmaEINFQptjdheoai0zjMClZ62I_lcHsV1rhcrhcX2wiAoneDPnE4CFlTqUvfDoGC5OXHlvLXpUKW9baWKXq2jmIGP3c6-Spqkxiur-uNpifcuLWRXP7viJ6rE2nheIKxOwYsunuNzntsPVOZw0-cB9YlRLzQ49tGxl8ATNldqoNmRb7QiBohrsJHJdHzJxP00-4cp-y2qOIOKpfjGEY4SkBL8A2yskvSfj1HMoXauOM2NwOWj2bzdPgTE8EgQMxPg1Om1fseR9JYjp34WHa7PkF7AC8V_Ni5X9w0UrjVR2OBzMviywzHDoCKa_ALQ01MwuLUP2CH8bxu2_DdPiqNVZ_leRH8pHho0kWfaRS2voNZx3akiXkAKVnOauxtKL9RbVb4-ibIGlmhEOq2U6Q45ev60689MBjababt5I3DqShDfruy3yB_kSDpcBHq5EpQCgfsEBXoxlVbaLZ8QzyRPZ75rdlOiWvuE0zvZB8UMPsey8\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"VGhpcyBpcyB2ZXJ5IGV4Y2l0aW5nIGFuZCBJIHdob2xlaGVhcnRlZGx5IHN1cHBvcnQgdGhpcyBwcm9qZWN0IDop\",\"reward\":\"0\",\"signature\":\"bPrKowjT8NJ6gj6AbmRM6q06XA-WxylBt41HDudsMc6w160h3fULdId7EU1tGLQG0aIb_kj-K5-ruY8RHTK6jm0El8085o94aPlGUaALRBust99iQRcH9sTYrtZIUSsRa2kO-RPvIxBB6-pShmf2uoqc_Ou8QgrnsNS2R_xgWTjNlLAWXg8JMSvo693lw_Ev2zp9qFLIz4xo4Qmkr0POtNamJIINfDoSEwln1hRGibFZOJ4um4rmFpKdS9wPE9AtDSTJI4Ft_eJYP80-SuTaLtjN8eED0WDk51pU-QMvkJJeka0wLWf4BFwXPHwVDyEyvsNCPZrzpfyj8C4wns0mHfA5uWu0xApG5UuZrvB4C1xrLNZPIGILicYLtiZr2S_wjiyMq1dviSxNoagGRViMpzv6a420gQSNjCz6i7d-ruxgF0lKh5Z9frK-XNiMoDT8gPT07KJQzhX0HuwWUBTGjMImp8V0fqjvqF_9hACESBHzdZs7tWrFTAfDZyaECoa_S1y8namI-wueCsWc9uqFKN_lqNUWNnOOM131tAKzqhUw3d1PT0fwBV43QWFdpQS4O4eVpYW57jGsCay3LauQUiWI3csEpXYd_ZJ0kDl2hyPRnqw0D0GUFGq01gi09fnECyACzusv0Uqd150hh4-JVvywktnLgx81DTiBfJ8R7kE\"}"
  },
  {
    "path": "genesis_data/genesis_txs/p9PJG5GkKZAxLyPJyDYw4_1CmhodHGGGqB785duwVwM.json",
    "content": "{\"id\":\"p9PJG5GkKZAxLyPJyDYw4_1CmhodHGGGqB785duwVwM\",\"last_tx\":\"\",\"owner\":\"ujJ6LJOILRRwODjtK5Hi82vKhZk5wFNMqA0HV2nqdAtFPrq-1tw33QmPUhHEAm6CxPGScDYABPsk19ouzfWb7Hy8qFjKIZ1uxvodGtOR9TeJ2TNPgfEkFD9bl1q7cXEScoXgX6UdZCTJW7qSjIxJVMxxtoi2bfh_UWvrlDtsBK4Ru6s9KVC3nVTt38Ff0-zB5K3CjWppvAMQ4IjoBqjVfrlLbS-KbeGpV085lm3H8eC5WT2VBfIYCbJUwxzZS_6VgrJ6EI__cc78p9BXlf3E_abpV99feSO3AaheoMpHKmGYZORuhQXIuJucMpT0etofi5gCAvmCemYTJi_-40RFDNMjCkRCwEqCOjFL2APGBxPyIzAm4--rI7ggOyrghU6fqMmMs0FY9yQMPOUCiwEeWZP8km2PpfHge1d_iYE6tAvtMnWWvuhxgCk2KoRzkkRKcEruG39UF_ZS7lAh1Z1Q2HjSYyOcGtDET20W_YMf4te8rj84Jvr9ZPCc2RTbVLNrrz2i-SoDYXlkBiPSBwrpfG5xt9BFHQ4MxFPKSTBKkXmOyUgKqULukNbZfo2iJ9Skq6LVfZmiPhzQszoT2ACZeunphA9RJi5138g5-ZTwWi3LGBIan19xME1nBWLMDENygbDZnIWfIhUA7rUDzLYG03H_x9YCWnWvajyYrzIif6U\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SVdMVFdJUEFBSUhBSA\",\"reward\":\"0\",\"signature\":\"iA2Hs961tb8LDKKyUfgwFzJ6wS9FmduKrYF7oLWbwHVKh6NEZt62uDunKQoljslZtalaSlotOaXKghDxHLMoF8oAJUrShRNZpXoUeYC-8Wqqw5lm840Gv26_2h2TIvm4cBO0RlXhd29w14tPemj9tanL9osdfByX7sM26ywzEF5KCJpl4Zq6aRRmnO631o7A0KiWEleWWx3KDc5maccIu6fMnF7-3NbJgWVEky50FsdXSrv7BukkklxYL0CjbsDYa6mFcl5BHcnueuWXnqOL-FqmCpqVllW37nu_0cW3Sb54SeVicFL5S3FaM_kDMpzOdrQrkA2lBneNs0i3nSMdakE3wD1T_wmlAUDS9co9fCv8MBFa6ZIDZEEjT86c-EpGreb0VNbabeKmOfxsN6ZZrslTi9qiGcJyR4rN0v83Fv_IPRrr9Bwbyf4PVaXO4grCLX2VJJW2hsVlLXHWswYsha_rbmwoQKW70gMO8WAwYP2PNAnN6DqiLIV7Ua2hwcA1lWq-eU1TwvT1sZQlCJcUzBOHy1IbAI-ekUC7PrR-tN847mipjLse5zGuFaQ20fZg6_QCi3O6GQmoIAhatyZ2P7CQd2S77LdntNxOb5V6UDxg6q3MDEtqcaikdfpGCgoXxlPJ7XO7Aj3XQfeRgQ2I-HRmggBp5Jtz56NHOLUDaoE\"}"
  },
  {
    "path": "genesis_data/genesis_txs/pVZkxPK8F9VFM5lDp0oTBThaw1RvmwG64wIHFChYJKA.json",
    "content": "{\"id\":\"pVZkxPK8F9VFM5lDp0oTBThaw1RvmwG64wIHFChYJKA\",\"last_tx\":\"\",\"owner\":\"wRuiCcHABPnZI4-8pFpMhWlIqTrNhpCbYTEH-LqZtIyUTkbwp1oX1CQKl_t_-rk-Kzz-Qb5J0ovI3Z1BY49sx67U60n3atSc35AOIzMc_994aslftWyiEj30Njtrt8fExhOj1Zb1_fdHrvQptEwUZWArLlzVbpTxzvLnr94uNVPcDunUQByb7TNVBiEH1NPTnS17AwM8RlTB2q--Iynd30JcU6NZk9p4a80C-SLWm0IMvUGIvVDtSkieLLlEqvq64JgAFMG8bDsHixnc4N3j1fkw1uEGzjBn62fvM4NOFT2CPLLISvFZAR0jUbeMo-a9NxTAZjFaiG5X0RWevDQhvels0hn47n0dwYC2OeP-mq0GR5quDczyouYWbZvKEgdS8d0FoaNWN2xwXvRoKGE7Ng7t3BrP3XwPTN8LdR6QHXMrh03UIYarTL-zfbOUDEZvNMf5qOIoulkrQYWUHRJJ8rSQ-0csCHcxDIhDkUKUe6aAp585i2Z06baoRrWApTqwTozszpVYGrCvIdtCbPzVrq_qZFRjTsZBHa4hKniVMTQmLktX_bujShhB5Dfi44PvPpLqbuLWR4lZNQzPF9VgCvNHMmey5FAdqzA-D1O9skMHyKILmF3D3TCqwqaD67oat0IhEhClCusaYhGP6rNbox0s329u3ub-LRUeoCzVuPM\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"R29vZCBsdWNrIGZvciB5b3VyIG5pY2UgcHJvamVjdA\",\"reward\":\"0\",\"signature\":\"o79EZls4QVSHHzLYmAsT9vHoeA7daZsgdqA6z_MhTgcAyjW64vPN2f23Z_hySUg0kFK_U8U-BJlg-QDmkbLdkiiLoSvjSQ30dg20L5NKpg6mYBLt-30qabBzMtZPO6udPBazv-pbnxt3Fdyg-0-tOx5NPV_72iDY4Cf3bP6jaLU3piqXlNJEFflA0VgJ9XYKqH3gMl1wfMqaRPCGZsuXm0yDFKQv2cS6xhU5fkxHtEs-4WNjqsDh-2YukpWHK3zarHDLBFaaKHLplPQzafW7BRWTRrrAujHr1YpRR1-TnAAk7DzhQAri9u-5ygpHH6TlEfB5_mD011unfRKJcCu-a4PX0G93Cr-1pnKYkwjvoEl_tSJdAhP7nHEGWIoVxeWK7OyzDav9K1YRQLHikF5vEkRF2JfXY4xzOQVfcFCHumqck7T4qB02IkXoMurKALdJN8CTopznDycJCZzkDn9ITi4Woj69v8E8OAWNPEJsJjSCvDussObB_RmQbFdAvSI8WAWg5CbUTXOFcu7fz0EUGneujwlL1zOmAsu7nPrbxKLNk4yfYAKQJkYGHqw3c8pdJGjtBrzP92Iq7qlbrrbmaORLuADYQm_0FokdG9u6Np0BZdEJQj1y47JRClNUB492UnKiyCn9t0mBsY6Tps8uJLkBtlFSFXiMfAAL7v59Cng\"}"
  },
  {
    "path": "genesis_data/genesis_txs/piTZgtn2oBsWKt09CV8LqH3I3JaVdRjFwjOAJmC-Xp4.json",
    "content": "{\"id\":\"piTZgtn2oBsWKt09CV8LqH3I3JaVdRjFwjOAJmC-Xp4\",\"last_tx\":\"\",\"owner\":\"nAqzD2kqyr7dLj9ClnZP0l_rvSUt69Mgcvjj7H1KhijzFQHkQm8EKGCKxTmZ9dwQ1sS5gp6k45UPfQcisDqF-yYav36AYwxtuZ5FPt3MJRSnhVIgUT9M2ZZIQ-9GxPCd1-ZV3J07uKitiMUxWzLZxRg_pSwQtpIOTKI3ILXv4mpcWZ4FHtbrYOUt69RshgW7LWeh1bb8eNanRaOJsFMfd9NikHd-cUXGXZ4RyaVqulEs-D81Q2yZq8it3XhJdTjD57TmpZtTEUa3eQIjDEW0Xk1dZHw8QxYvC3bc9GGMyhPgLWpweLWNFkj1aPS00WBmFid0bp7DP2WwBaS7q05ZGYO1TDNVB2_f9PtI--GahqyU-VO9ahEXzUuFuY56qKsLx0TFKnom6VLL694LJVoCrEyJYf1zEfSnCESNkWzDDzSweyX9fgeRTSGuWfrLF5OKCA6YNITtUoW_Sp10brbOXxbJ1ILxEnVm_IXdrX-e98nVdMo9TJuE-NKhxFdBn-YQVbIJ0QjVuxJzLcda-e8AK6bW4SMlKSvjiCfUXVDO86IogMw1LC3Wq5W0llDIWG6oaXHyQbFMl1Aa2pNzGlPHbWQdWwqJ6vVWqiFd68WB7PQCjxhOZo0mxvpDeZZnq_axqcHFYqhvKj72UUnliounc-ulFroWESxTFZBB9klEu0E\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SXQncyBpbnRlcmVzdGluZyB0byBtZWV0IHNvbWVib2R5IGF0IHRoaXMgcGxhcmZvcm0\",\"reward\":\"0\",\"signature\":\"JiLzkfQVNvjbymuHU8vw-s_CaBAtke5h1HEdoSU7Obn1sLj9IJNM5x_i8WnCBBscvOBSPARcFlgeXGW8Tz81VF0s0DITnJTK-3tPIm8Cmm80lHOoCSWGfHJJrAGoy1d4qOp5PAHbVcIQnT7vsVCP-TJiuifr3PXQtqjpTxplzTbGMvHi_hn_fVi4Z8qRoJWQTnlFlHFf5HxjHOdl4fU62jgXTRB7XBCZ9qcA1pb3rFdKxseQ3vg0mxH1ARJw6cMPZ32M7TNzMVpYjCb4Sg97iqfEZ8tbK2B7LJbYnWH1uqABZJgL3cruVGMbdjVY7BkKBlWxUhvTJ7sIwF9O-_y4XSNSNthjpAe5ADR-7HDKhkjkj-YhztnAxeHC8qAL8pbd-tmB_1_Ytx6tqZ417wUU19b3sDd5zaZTBOLZpbTaA_6me-clYtVGAst9E0L1GdZ_vj5kGZgmAenpDCyfo1CmpZDX8EX2ehUFNFD1LgFlnM3O5Zi6aGxS8sB_X25OG8k34hvZ9R37tkzuCWvuGYZUqJKz-YplWqozgUwdRSRsQYaz3pjkbICGwuLYpTqntnUIRkCvdy7dAU4yHuG2s_OF0VNk_GkVIRG8Yd3ZmGoUohLM8k-dehO-9ejFDTS0HCr0Q2UjPm8RkdpS2AjKUsfOchyIOHARiHwzPVfekKCBUGw\"}"
  },
  {
    "path": "genesis_data/genesis_txs/puLpw8OIIYCOatImKjpV5s0JWyKFq6bXFMz_qSf6mUA.json",
    "content": "{\"id\":\"puLpw8OIIYCOatImKjpV5s0JWyKFq6bXFMz_qSf6mUA\",\"last_tx\":\"\",\"owner\":\"uY6wABun1y-QxAEi7aOCn8ZWZ5-lSmBXdw1Ad-rjPPxOiIzS3IiMKgPrVkGE9ckrjdoU9a-dxtPG9RygK-LbTFrYfFwNHDT5hcZ8eUd8wo3WYg_yUMwGdzq8fPlz3hbXzpBlDvKsuZ9bCJhtSbyxg57jb3KpaYf3Ke4bCy1ATFVhvUcPb5PEDXnp6dC1K4lK4A3zdtbDwWlc7n3jSKjQFIDgsBqslOebaoBvXCfsSdnJ5hdIQqqRCoF6qAgfTIwJ9D_DDaHMSppwXKw4Qbo7vHDh7aZCJMnrZRDAc3WtARtRXGWZ430N7R-ItAG7Es1S1SfVKs4gUhatYNC6f_tKl30jM6lt4z8QQbk1wJKI-sNh88Koutm03clcXqg7EBtoxOC8zqBWJUmvou1VHaCy2Hct85jWycdCMn7e4yIAJKjZzm1VF9suJaHdwnHWHSglR-FOs9dI7GpfeuCGrf6wRRWAKgsfb3Vmml_a8wuRQA_7m2zfz-77mKr1YgWTAfY3jzwQkWkivYfuyj0XA5rQ1wBq60AW5B0dz4mHHBAOQUHG3qreKMBisJRFGWwrANwCzVlvMg53_TZHkzKafh8g1R9FdjACda_pmw1o58rODMFUUxpG_BclWQjjJyT3an5k12Gdw9YHlhc6odcj2VnKZVRMl0qG5rkAZzOtxpbTEgc\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"T09IIFdFRQ\",\"reward\":\"0\",\"signature\":\"nIexemy4BbajYa5cbEVENtFW1jJBppP3yTKNoVfuNdAN9uvweY8CdJWUUM9wHA1CMMV26cBisuDca9qzBnidWG98qbFrmPwTnzkzC7t5DtzBCWeovJ53SEI49B9kLPhsPsUflCgtZpHvpZycFFwBbvcgtNJPngSZ90-btRy7Gp2DGmb58HPZ7NgXiNU7Oz7EZM36kTjSGN0IyYL6soUNSwbxYHHLQFfRUXrHo3gRf5OD730aLfj1XqI-OA643zgj736WcaB-pj_xUxpRyBWLDe3hag1Dk2yfe1wVly8eI_k7W6UdL1z65ggflxei-iz5lTt73AKlo4WUSVmI1TuLHOMbiPprdAvPKhiheyYvixKGIQ0kUxXYXRMjD_F-1CAj90oKj8gM04qG2VPC5OAV6sG6pk-eQzXbqmzrAXlMbgappyD7f-qRmqzLkcDDc10wl-M2H8-GkqAhoumhkoMJycNcSAWs-Q6YZQE7VStIuHPpl5ZJ_YoOIJS6OYP8bPSdvG7XvAluMZjOJa5tezOLw04IAp05mRzylHyBzZU_PcoLuqCS-62BHouxTS1BW2hln6XLVYVxaFW4rSALOmj0MrSuSwERCLRWkqRJ0732dMaTWMX5vHwmMYhw3KQER6tHmA_qXsDXq_KKwNwdS4ZsjEvGrbdiMP7Tv8ckvJuKVe8\"}"
  },
  {
    "path": "genesis_data/genesis_txs/qU2Gu35-s9wMH1N4g_zMYKCqIStYzBZmRx0XlcIpjyk.json",
    "content": "{\"id\":\"qU2Gu35-s9wMH1N4g_zMYKCqIStYzBZmRx0XlcIpjyk\",\"last_tx\":\"\",\"owner\":\"veEVwP9QqBfXV5b7hI4gtMHWMX63VkVilpoiIm5JJWYwiGphuxrzTGCeceTLHt8pTj2LEDW65nkDD-OcGvPd4jX5pb9ivkotxtKynbYfFC_ZgxMclBD-J6kl8fj1PATaGwLcdl7c2dL0lpHVXvud81pODL6u1AMC9CiGwpIDTwxgfP_wQIBbniwgM4RpRsu9-RThtwNfBlFN7wQkJagrRFQRRMMF1zKSy3gYglvT0tF_S_15ekx0R1CoIrpyCSSkDilGmAiE7SVZhbP29FUN4abIs6BeNsNXsiTDTge-6BF-L3FwHtDbsY8caBC95ctcBCd1t1sC_oUBccwBVFnOn4j3upgExNG-qeU5Ofw412V1e-x5VGIpF3Psu2ShQFmx6DEsmpSKE65BQX_Fg2jRniMeMemjcjvUT3LV3MOJUHIQN4rE8Eq8zvNlC4z155y6Fk_Klczt5tUFS7toCVS_fJBIfwsW9PNBreDWPVcoIQfPgtk0GDcZg7NyrHzJhUwBELDiWcIGOhA6ioGv3qStQ5CBblPkYEJQjXCcMBkc1T_4To2fghyrgJn7XIopVdEdPSGw_8rDin7MZVxPtoQhkLkuaUQKqbJzCga6UuGzpZJa2D27kZnDdy_uNIGhTq-29INsmW7yyC8kpICyVhxKhX7TJxbnh2ij5ATzSFGfnwk\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SSBsb3ZlIHlvdQ\",\"reward\":\"0\",\"signature\":\"WLwbL-X1zguUYnBC_o0WN0vMZnw3fWBQkHBq6PQwiLTe_VjciW_9igGeqAyUZi0aLFkL5ScR0Ur1_ZblxiLKxj-awONTJn8Ed6iig363PTALNjfF5Szp3O1StVk0JLqFDs1rE3HXfeuLqd06X1u7SQpKWqogsILp0cSyCysnMZzlIaJedrU5nl58rxFz-224YL3BIaE0jLRsISGp0N_5NrQtGesC5XVPR3tEFoRDICYDgqdgUiuf6eE0_nug7v-U8Iye-FxzC1jGpt_LabHv64wnmupjZUT_5PL_7dpNfHaje9FGpnPewjcRqw8YjNQS-2iJqbLAN-AgFzr8SIuuy6lUvPFfsmDQIPvEQitPDGKvt9EEbYE0Kc0eXPTGUBS_yTFYgrHjWeeDoNjk-86G5tsliVpZqlUXb5eTqsiF6w0PlSZIYRFj3WhNOaV8bPUfSaiWvg7ZG01E09P39gS3C4nCnXpBfgAeQq6sYXQPHLMkAeL5ILB8BNyXeFkOizsEBCdfQC-L5_-XYTOfll_f0CEPiY8dUTFdJVRkq4ttHHtyvwHom61wo-Bg5ZV07CK4bT2mNzoX4ps9lxNYBRUN2JOAocTUURPguPL5V9LJ_-EfI3e-yad8tikJ74_-pay8-UwhN3Zo3saZFq_T9dYfRsZWyqzf7TC0IMjYCbEMaus\"}"
  },
  {
    "path": "genesis_data/genesis_txs/qX9u_AprdhyXAPGfh3C94x9AbxwWx9nJSs7g8FSwITM.json",
    "content": "{\"id\":\"qX9u_AprdhyXAPGfh3C94x9AbxwWx9nJSs7g8FSwITM\",\"last_tx\":\"\",\"owner\":\"lc4FcGs7g7-VwXe4i7ZyonjFN9k_JvM35oJvb0K5bsANeJwrZiS5FZFwzWcKeweVOD5AK3mjB2mwH6a20BELZc7eP8ItdjWA3T9t2hoDcZhaTDAEzGHqemHxbDFlOIbiVUasdfgZktuzWjyxaVNbpFDauRwgk3kuRdkvGVhU5c4JvNNcyOOsnbzSA9cTMeOcQShrWNBR26OfwWjIikINhK4cIxHOKryf1b9e_MoVB25a7mA9kv18GJV6xG0iKfUmY_GW1dT4aGtqRN6ygzK6AWKkzsNXfFEsJ3MKCbRWhrqOiyVmfx_3qu35sEdUGFOSKO-An8U5D2cQrY9lQEgKne00fwtx6Mbgpy1puFkHsOxUVo3AOodjmKp-45-dNnE82XFnVnrcSBlGo7NY-w3PSSYfYhGwCeV0AVX4KeGTMBsqWB3AnN5DiYPMLtfokuULVPtoq-R4dd-9yKKL-IXstDPR0gqyZCUSCUh0yYgRESVXt5X_uWEg4F1u8bPWlYltbngB1ewxs2d6Rfm4jdz2SjOPmFgH99Igh11LXaTBRnP7WrIZsd6-b5V-tsWRa3omIUqXlP91hD2aLsl2NbI3fITv9jPZk1J8gOv5aJRo_KODfpZx9K64hxdvvvWhgJ_mrmr5J-gFTHYdwbynBLWYi5jMMGGrRCK3OGArSlnKpqU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"\",\"reward\":\"0\",\"signature\":\"crrljaRVMZPYnznrEhfM1_P-_FFhbi0YeRVYsSJu1Cb7WKnlG8LPlld5GtJmjy6JIy4FiJjriJfwUliHYDMvQRGgJR5ocAMEtyfrtYkTzfzqVIV7HXezVMpFmrZTDbmg1fp0S5E6ZT0Js_pTi2Kp1sZAfLTW6s-B-BvvJD0ByGTHHVQAGd0-H-EqjIeNpebIZxXWdBv4xDTxwhjJDF-cOrzuAu8am8_v_oET2Hg4XT3XYMKq_qUO_ml25kZFLXjogJufeC_TYUVwHCJPhh2ypxdzcuSUB09C29Pf6J4-YC8hQJ4x5E0CyAGKOMvwYSGE6TK4ZdWbBINZ6QYh6nYNjhO5sAZPTEdVu4PAJL5GeyeqmDwv2yOAwiGhO9jIED4Z5UcKBVOE8e9qFz_L5mTi4bO3Dnae7KS7HpKC2qirlY6y_Te9HkjdBDutv3KJZxla617QoxWOinUN6oYc4wYKLrdaa-CSKx2l-igs7M3jV35QXiltNpc_UtmnaV98FEgrUEL1wjM1seU1AGJRoyv_M_kzO_gCR-lCAn40BLmL0DKJ9gAL0N5gEwcSNWI_g9V0wPACxPOEovZ-ynNjHnB-fhqbbE_9MDRd4YmPv_8WHBLAEGWmOVknYIFtV1ROe8ip-s_tGY-lZT2zeV_2tD5eWG2FZXqeryTLSqHjGUIZV9g\"}"
  },
  {
    "path": "genesis_data/genesis_txs/qyMWe-VUOzHXkQviMhNS0wJI_27nvCgDY9iiKANk-lI.json",
    "content": "{\"id\":\"qyMWe-VUOzHXkQviMhNS0wJI_27nvCgDY9iiKANk-lI\",\"last_tx\":\"\",\"owner\":\"4Ohsnv784deAFv0h4Xmm-LBQYfWwmS5w2uBINF6heOPxbpYEZeItF-fJvQKtRuW2_cbUSII6QYJ5tdhEgujRuMm0ZjEHb_8agGcpNO9rIC0HwD6pU-ELl90S2m8oyeA39SqxaHMYIfK8UxNagERDEqAnPg69Dyk0PcBSijsOP6UY_hS3720stZCQEi3V-zdYTtt8D1ZfpSzj3dkfELukBnpfgbCubX3bqoWdnTqS0XVhpKBi_OgtCGSz9h-ulZBlMcOxgLCKFdYlJ86IMCBrmF0v6pYrRG5s5xXkqVltTxYib4scIPoWaOOfZanz5BjpuYnFlccTO7uuU3mSpW7R1PuNbe_K41CTGSaMWMjkU2hcenyf9Pe2Tsfpctw7lf80VdP3VwV-GmJAQDVl57Q99sL0XKBiNxG6wfY54K2y8wP7qXHA0orDMGZ_9tiPiL45atBZdSKX1FB1LlWc64yZx84SwemOzmTwK9JmrR1VmofT8qmJxku_L2zfDBeJlW4LHXvSayQY7lBYbYhR3hdWKRk2d6G2_eYdOcbM-ALQtCbp5YQZIUEhJQIslNSwxJ77bWn7a-lwi3RkSKDXFDJe6MXQgAET-2WJBT7F_VfNhODlxa42qMwlRyjHhrfVZkDk_XVD3wNM0OIzCPtjAKtfvtre484V7MEz4Jp382Ln9zE\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SnVsaW8gU2FhdmVkcmE\",\"reward\":\"0\",\"signature\":\"bTI1-oRqbbAMhpoN6-5dn5DGiJbNUgiQmXeZZUCFSXbQtlSGRVRip5MvprVamYEYbLFGbZzBY_B54v4D460Jg2NlCt1GGxXsmb6lE271DtlZsiS27pmX6UGN5qDLg-iS3v8RxPWc_ycxtbl2C24lkrv1nVSGio8X9iY9FkqLDLcDXUNXBch2iR4Qp6p9vUph2XcQLR1VBN__Qivy8dQ6V2uXn6WMXtIQyIeLI7XUnhIOKO9HCA9T6kTy7gzjKgXPmwMR_gBbk6Ohc61KIHAoJI_EMAPYI4QtBpaARAIYNzrp65woma15JzXgzLwo9KRBreevwLn5nO8USD_B8ZKoxzGqB13H1Hqw5McWJpAAUxrTBtJTqeyYWfstVTxV0ogTdQ2BNwfeG-vcwMELGkqIbRlJr8h__CNkJ6eQhlWrAsNkuI48QAQBeZgM_5_4nRphi6XlUOY4dmU64nKLD_IOjLpSHPogSHX-BA82y_uus3uRiUogOU0sPTLzLZZpRVtFjO7rOdhtJc-WfQutfE_DvQAdRaJ1ZJZkiNqipU0L3dHJvZbwDk9LvmZ1153SU_feosbKrQVLE0eR8jNs_Dg-hS4chpBMkj1jziWVsPm1JY9wscOPhJHfIHBXQIeaU7aQaC2v-rwXJC5oX7O2t5elElK_Bf7M8SwF49c-MgpC5lQ\"}"
  },
  {
    "path": "genesis_data/genesis_txs/r8Yq7Lvx0FjFYyXBLn29UM5Evv4AtGLZ00LCtE_hC60.json",
    "content": "{\"id\":\"r8Yq7Lvx0FjFYyXBLn29UM5Evv4AtGLZ00LCtE_hC60\",\"last_tx\":\"\",\"owner\":\"x45U9G-JJYbFNiy51QMlXs0fYg70Oc1dUk7hIJBqecDpxierZjRZUUXk_Q0EmUZXBHbsWsg_PPN8SzYvXpvcR5hbXzGhBblMTzYijChhIk1QVP6YccVvWlt0nyC-Wl2HZGzsgk2jmHqJj7Ptm2qFuGkRufq_lyyHizq8KwythpJ8dHIVXXd4I2EguJsP39-b_WLTJFrMIOuCD7RLS5nVfAM_vjOd3__i9299SFkrpTHDB_8G_dgveHOqVg53Yt_5zRqTlMDjbeQkGEAlbKF2M69wRSEa4umrVDtm8J34_wLfCkrusB09TApMD5qQ7j-v8GVA1dN5DhVAIk72pg9a-o411ZWdY2A9FUMii3WTuzjQIt78p54gzllPfk7jpwsrarC5xuaxjHYKAsOIRCn2Wl_2rO4yzibWWQjksnL4AgNj6d3x4v7ovhpj4mL5eiJL-WKc6DT75uEf8Qkz5_83ODIGWDUtXWuQ8L1RKgOdB7Hes2aeXEC0aPTk_m0GF6T48lzFZXfvhCFUXkeDq301ZQhLqSrZZ0KNlc7GCY5NUO4MF0mfP6SsLzujRRHLkW_Qpzjn53nYsKYw9b-FXnMh8ohrg4h96n4VYby1vTzXNjN5sGc5Ww0jOBL8EmU7w-fFM3Tgbc-SXAmN4SAxH4N_swfm648uVHDoou1tlJrx6as\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"Rm9yIGV2ZXJ5b25lJ3MgZnV0dXJlLiAgTWF5IHdlIGFsbCBsaXZlIGEgcGVhY2VmdWwgYW5kIHByb3NwZXJvdXMgbGlmZSEs\",\"reward\":\"0\",\"signature\":\"WVScGGywvRCrQJSsnD0knVF2lPXaotQg2NNS3s58zht5s5C3Arn3Y9mI2HMWF5jiA7NJytMRSMaEKGkFko3a5ZWDyHMnwcDOy4oX53EfpDwd_MXcvUWPr9CTdmWxMjxmAQ8whtEuoqnpZ640rI3_AonNltXF-xtmd_Yl4dCGRJfjRX6cFMt5Buj8jdEWRcicEmKVGD4LTm_RpS-VNvcBmvZ49GbhES73u9NqQxqshlfuUX4Xqjfg_VLw7PbyD6r3DiISknZghyDn7h2E9akdsApB1rMcWIVFtxtNa9yyhb1LpgfOcOM2jL1P9qhx0HSJJh4Y8bpPUVJ3ernY1nVeW1P1Q2ZkXdh-cF1BzuFzlqQbKj70nM3yLBim7hUbZqSCPmYyuQwtvGHdizIHNbm4GB7PgURq3HH_s6n1unKcheP82dXnUOAdL9PEMBG-e07A8oSgp2M28TxaQBYiuCCewrnZzGLh9LpkLbap5P99dvy_4jeTC10KsEU_GF9EMy3bM1WWzP6-hDYzKBxOIj-fD1Jm-lVOA5lz6kqgFxm38rXuT9NshRc9JWn3QoH4wx8QvUknOGwVtbHTx9bl768eSQdaLogyxpoMHymIJHxOSZaCiZQk1_x08WvgC38zzycDKk9WBT0v0YWnFfkaiGu_liaUpNJ5foKDDs15kW81hUk\"}"
  },
  {
    "path": "genesis_data/genesis_txs/rC7TOXwflo7w9Ky0ljTYlzdbR0A3g2GVRbRJbIIuBfY.json",
    "content": "{\"id\":\"rC7TOXwflo7w9Ky0ljTYlzdbR0A3g2GVRbRJbIIuBfY\",\"last_tx\":\"\",\"owner\":\"1RwUoV2FwWW5AYFARemfgnWWJBMwSZiy_w0lCdI2Tsoo1hdcUBtY3g2wjGht6rpzI2WsD4_GkOXqXh-N4nquIF2x2BQxBLP9mjvfLwoVCX5ibbv_Bw1SkelsFEksqt986DB7UTF1LuVk6KLfYUIVyKwTM0l7tnximWZ58-qTmpHQtLxF-Kjo5Ui9GRfpbDZU6y3fPcNzmL3KjL8RoTn2GKHol_7ULci88F68BDgWzxCSAyJSIuSOXrCCAWuxF6qKTPOj2jmL4W4_aZUirpKiKQPF_IfSP6NTeQwNoN1-7DwFw-4NOfgurEnZVRs1xHVdRvRgyLBhOZGR3gFeAYuQhqe2XvTtW3k7xpUxhuSWVR8YBiVfhQzSQExCLSg4nG0LncOKps6H4p2xXT3AcfKYGslNZbz5aCOEe1tnPu8qVBEx5HFKsYPMidLjnHi9X1WpJr8v_fLM2Moduz9Tvm6JyeTcuXH6CfmzaeO2f0SbXSUyHqPTerE0LJMj1nIRXqKeZZnOwhsOkpbBSGEFZRILPRB-C_XwtAVa6LxNw1WwzDuZBeRtrdWdB23yMyv0nyGZ9i8z0urt1s88pNcskJr-s3vU1U-nNnQrYtWRHDwUkws5oY2VfE9WIma_0y-v0ebqR7JuyPr-7NVwIiyNzyKJ4oITAH2kMoiJR_1Yt-ic7K0\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"JiM2NTUzMzsmIzY1NTMzO--_ve-_vSAmIzY1NTMzO--_veuztSYjNjU1MzM777-97J6QISA\",\"reward\":\"0\",\"signature\":\"nk5-udQjt5ZjBo0MWgAugijICJXH3NS1b6FXR7869H97hJORCslP-t_QKhXovriDdgFNjBZcjp7WX6opZHbMJMpY28R45bkUIVTcH-wPWEZd0uSQu88aunm5gr9DKDtnmIMeHUyzxpXLqRGt57nhOeOnvPI_YxRRUJMLAkYOKv9A8Oy9gG5jVStxJZJcY3cgtgEo0bvqZJIGuOdpw-1WY7dt5CAYkimWwjXg3lo-he-hwAC4NqazW8YT2Jh_Huo_bdS8MPEYS-7vXQdNPZvlRwEhmOVhmr0UsYrBjYle94nbR2RW4VXe4f63V0sHCQwvalx2IfhWKHNd5GVPinMerE8KHx0LWAhmG9B4H5c3NkfGJojEDq-GKKXHPaXwTbc4dFf_SYSyyquKO-pLLTFJ2baj_0RUNsDyk9WujYa52YTrMbTYUwOV5MZBVMFur0UMzqPF39i1V2Rlecj1ikgsbtjn8ewcVhRvJRCjCuxkWkjXinHGOU5lxn05v_QTdPVNQU7Y1QHUeKP0jHu0TlMCqW9-uGVE_AuzsFGb4_Q1FyrPiscpvWyxaA98EZMBXGK4CaZ6cxkLCB8RfDJXmvQuoohl4q98wOWecBHZvfpeKc7oKW6RibC45ACHEM8uvDKulhc6BLR20z6jFZ0CxmW-eUkTwc8_MAw1dZ01IYfzgSg\"}"
  },
  {
    "path": "genesis_data/genesis_txs/rRoy9jsUZ-Y10NIBksSD3P4HcVDfZheloItTTnc8_ZQ.json",
    "content": "{\"id\":\"rRoy9jsUZ-Y10NIBksSD3P4HcVDfZheloItTTnc8_ZQ\",\"last_tx\":\"\",\"owner\":\"r37VKqMJgEJoiHsv8hD0wClL2AVWZGontCtXk0qucRNjB5WboQJFOQOGHB47RlCwxrWeWoljH3RuU7WS4Qm5ZI-BL4ZpvCARNMag2M03HrFUHiLHx6CjrT0XjtNP8EMudZ_pBMFtNNJMn3jFo-E9Wil7QyhC7mc4PKORdhxUpyF4KyJ2rzzERjvmqYDZnM5GpeDSW4Hqr5UbMMf8Q_4Pe68JrEILAVhsfGRR_uqh6xoMtUCgc1k57x-IcXFabwq3XsDP4tjb6Mfvkf6xZQsFDZCNXX8YCrfJIKm2mDYBUbTx9_cQtWgiuKRvk7nYe6RDhEwhPkpj6rHDlTonOMFJFjRL1locFeQyiFodwk9LFJ8peGOhZJ_g5VGEnAgRhyaOMc9LMR2fRM-FKTGSUFcHhuBYySn0VM3a9LuDBeUHDGVkHgUayOUhKZcDzfQ8hlK9CHadwO6AGM6eaRvAr2yPCjyozZuZUDrdvuC_SJDfAEkNQf2n01PJUr5HQZn2JOE2XKhTCX5zioHXn3Xp00_rM15cp6W-ol-opvccj76WXdJzuJfk4RakzI6eUGPZDhqOnpOe3u-N93Y04MnEqvsawNp_79wvsjLZaQ0OyTNYKBOx9mO5WzHF4R9kQ_iTAJEfKlw_1W4GuBduL4eRjaS-e5VUCOrccdr0QnlIHk6id20\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"V2lsbGluZyB0byBiZXQgb24gdGhpcyBiZWNvbWluZyBodWdlISE\",\"reward\":\"0\",\"signature\":\"MaickTyFgdf1aAB4Ko1kICwU4_fWT0CY-j736IcTLPkq63o3_15mYuWKZQI2ViB3FkSn824eLu8uU0xBeYcco5u7_f-4XhLaBHECW1SXXpnj_2wi2Xu6vudSubpfT2AEsrFur8khv4HPdnMta6bmgYRXg-6YkeR5GkzzsJcwCGHjpYH3rx0UHBSAbI1fnFeAVrvyCnfI3jlcaOfFBmFfIIZaOaV10R6PjihKP-6rThGINvIjrpp5nPBidqQFTj-QWL9_5HSpPYp3V-AfNM6mdqxzrWu58h25yXzcOPYqEvX-0iEjPZtlGhOaILDbb6sjXAWTCCovhrMkc5kV11xR0fbt68QU9NJ2Tf8tRGxJH82xhEvC3H_Ys7N-H3g7wkgOFUUAlXdAjMdatRGqpO5CV3fmNrNPEnjPdUwYQXmnAZ9dOEL0UOO2bD4GCNaZ0QsqvvaB5CC0eC_GZbN8BYXu8oLxK7G6GJ5eCS78dbCks5Tw2SDtzK44UYaHw4KBx5116hJKzb_gdcxZEC_hykgZ4TIek1RRpIMUshzu-P345wDSI-FyrTtcIDdpH7LVPhSI9Nv2jrR4JgcYEUGzdRjpAUf-LSQVnb3yNjcW-QnJejJPmOWX1gF6X-SNGHFwJuTwtIA9Pqbb9Vd-n6wPk7w6UBbf4fPeIvrUsPOJ5iVnECg\"}"
  },
  {
    "path": "genesis_data/genesis_txs/rTY6dpq4KEhZtB-5moP1mWN1CtrTKurv7QSY8wAN758.json",
    "content": "{\"id\":\"rTY6dpq4KEhZtB-5moP1mWN1CtrTKurv7QSY8wAN758\",\"last_tx\":\"\",\"owner\":\"nQAioD7X1ciSt8o9_WPYzsmYrAN6R4u87Du3-WLXaqUStdqRlnc7mv2y_7jI9lmdMb31Afd4xgRbl9Qcu5EJTHGTAoWZiAllkLkRouCd9WQdTDgG7oFI30IWEZYr0KHlGBq98nh-vRrp_CUrW5GtQ9Bt8zJHzadZR6GvTpfLExf2IrQ5XtWQvfWyg8IUoMPIe-g4QNt3mFsXZ-IMrQRMTaa-JmwEnYHEW-HBDWD9xB_8_88ZYcyQ46GOgfdIxUI9gpRcmDfdwqoC44mgNLa8NqAZFKEXSemkQW-Fr3w4_GO3dPwY1ola9zN_6mGXTX5ibEp6fT2-hT8iXujfyUvmHfj9XuNz95wFhgix2-MYhtvPwC819IuML73F-oEsgxRo2bprREFI6jPZEXxQNEHeI4GKBuaGDdL92SmxbuJKeYg1GCiLlf3IGfsd5ZRbZTBY7SncJVGtusPPL1sRvsY3Qgi2Rzck5koQ6qgYKzgZZf_FB9iVlZmXuFoQ9xLafa4swZUVeOVVyD2nn4-8PIZO8O_SamhfZw7GTfF98xaX8ABfCYedD4Y1ABfNr835FQgJp6PhtwiPFT0colSQqWILBJusiR0sZaG4uS20SJjwqPoOV9T1jCY1idD8RGbjP0H6DbNMQ1lMTOX2DZEtoj2EoYId2b1mFAiA7WS5H7Qxkic\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"VGFraW5vIFl1bWlrbyw\",\"reward\":\"0\",\"signature\":\"Kwm71mm0-gFtdniqrXRo1ecyJQuew0hXIojh-cukc-9e9uTUDcRwiO8LAppXnzfWkg23CqyfUslF7nx9z571rjJW2iSAOorNosn1CAy3r5o-wYafEkWvkEZ_Nwu6yTGijHG-VFoDA3if14kq43zmvYFEFA0QzPc8cCiWs1Q4xGGbZvX1kLOCi4vJtbB31dWZlzTBLT58WP8fuZIFZnEDr8Yyhhg56TCXqEIfHTnf72kMbSgLlY6UCAFh1EjW2giDrgPzScOzxEkJlO2E4j3jNFob4xOLGMaLA9KFZWO_-oa8XyDGKDtzonTBNYZnWdG31LdwCM2hZR7fJYEAjOjs133YuH86cdHNllXfUrnYcW7Ro8zb3lsWZjoH1gQdzzlOPnWmbDwvOzqsWJHVOqxKnDx8hkGr8XTKefwSqfP3-a8j5Zjgoyz9plB_13vl8Ymp5tNEhDnDFfd6qhhyu2uOlbrxRfefIUbZfqqu0LA209pLhMd2LZnfcvaqbkLORjztPdkPx8w4Fy70WtGbWeFNkEIW2Ucz0zeg9XTnqMfeEC28mfXtwLZ9aMZXw8nUYnIm6hYMbIb35-f7DB_tCagL-AVkpAKJ9ooLvGGitxe-OYaHXxt2m0IQ3HVHRjUKhG8uYeg2HRx7iJpkHk6si-dLctPqe2o-NzJSw2IpRLIkgKY\"}"
  },
  {
    "path": "genesis_data/genesis_txs/rvbM0iB1HJ1YadedIDWjJ95J2XBHWwPAJD4VfpdQpxQ.json",
    "content": "{\"id\":\"rvbM0iB1HJ1YadedIDWjJ95J2XBHWwPAJD4VfpdQpxQ\",\"last_tx\":\"\",\"owner\":\"vvZf-lOXuNWvqcvKzmtpuU2M7gJ_bsblXw32P_k80z6a9CKf0jAp4gb1ww-CVHOcI1GeeEe3iKSEvMGLAr_Y7bwzvHUen97Y5yLm04-clTeAD5vTkr3Kh3eGdS0rXeZhL3l_9W0TPdhKF9bnUehTLNCmEhh7ki7LI4Od4fBYr5v9AT4CMBCCXIqFcvjmg-0VyNpyefxYky6q-g09a5mBlsAPp2QjLg6ehimMSEJ_6QW3PrD8BaTtAo_E90W5w9inW6UKIREN9wtEUYRo9-sCkz8YESwSd3b_rYKmLNyK-VBU-fvyTkeQkt4errklNxGmSxQ17fkdyu487nr4Tb2fGvGKoBwhCKkC3cRdQ2_kPRaMN72QzgF656ihMbaBSEGbCJQ_as5PpiB0u8jlKKhgt1LFE-PsMgt_SNKNjFL5wTTS5vLcKbyva_PGKk8EgdH9YsNVwQ4cfUNoBXIbT7Z6gkXbpLjh1jnc1JT2v4kr6yLpQw2SzIllcUYPG64V3lhGnNdNNOnluNrFqZi2vurpSQys9yUIh8F6Wy7zk9KwEiJF8nxRV_EEiljk-a31W_54sswylkzySpiiIK45K-Pa7h9eqO6B-EdRbNQkDsjCdok-CbhqZqnxLY03f4zHOnEpleouOYtKTfCfajwID0IO8oIq9xgYw4d6fyswsCtWGQs\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"Sm9uYWg\",\"reward\":\"0\",\"signature\":\"W8N5avfsMy5n97KGdSieqM61SaGvD_SnaStVQ49O8USbAfSv9VbjV0EKjZmPq5KjbNSDw1p_OKz_tZlaDM58tjIBkejuGucbEwLlaJJvwQT8YgKFC4uaxDJ-ZXgcS3NIRiCGGz4jR2IkaUR7KnSCqt4s_6HwtM8j-O8kzqX7J3fqnsYTssWBAkZedwW--RS9is4Q2kSwJQyl7kn8Z8w3QN5x-c7A0L8mJUASJJFXxButgQmGTdjyuZfTucbm_7-_PvXRnC5erqDaQUUC7lbkbJh_8EBJDMsrGANVmelqmLcrKMptww9I1VQC-h4mrtRdvVf4rlw_lnMsBC1wNPOy1Eg_ZIO1KZeiGBKbtxR18b_OLiZkglHqmFRTvJVfEpKmAzGri53j3ogKd8z3esXpzNqYgoyUgq2tFZLio4JYzWhbOya3Kpy3RjcqYUAR2JMtlLCuGkNKZvpVrcHBwYLoczt0iy_G8pa8sYuVzHbcx0TlsMePN4dU83r7soxCXWLLFhkB0znj6RcV7KRCphvwo0XtTl519cB6lPj6TY0xNaiWUshISkeDQ2phD5JkfdA5dlHrpnG1MktkA8E95simfPJRuXfxnv4SI5DadiLHfFvcxj4kcsszuB1evwjOnp_S_tbbWR6A8B-ngAttR7RX4Vo8qTsYJtFvXXw6MpalVbY\"}"
  },
  {
    "path": "genesis_data/genesis_txs/sB51Zz1HRjpwrWFhW6ZE2E-n5hl3joqxPQgnMCLX4ZM.json",
    "content": "{\"id\":\"sB51Zz1HRjpwrWFhW6ZE2E-n5hl3joqxPQgnMCLX4ZM\",\"last_tx\":\"\",\"owner\":\"nsLQqPD-0m1Z2eEeIEEertS1Nu-ZB4IEZ5xkVasXFQsOK9knMDAeecmX_9gekZ135KFExd3akQr3gezmBno4aN_rFsJL6_Hiz9qi0oiIt9OnHvuClmTsLl9il-Sy_2ZOE01Pb1BGsbeZNGKbSuTrw8EYsGqipAb44RH1HWlomxJi3EMvwNxl8fOB5ryWxxlXUPt0mg5St7xqVOkNv5F-5A9NlNZkCrHpcnn0Idc9vdhb1Nx9yLQHOVVStR9dYhEaiU1OQvwqkPEcAoyBBAX2PTgAqdprvYcW1YAXgEaySZHSgqUL3mzmGkqFqWb0A1GQ3klt-v3W55rARAcWHsWG2DAQrQizlEo3bu1rg3IqzNEK0DrSVtTKtfrz9kJQwzSPTWzCHypG2mw21pQWzczEcO-tqwH2UPq2aCjUWwyYEEX7l1kFnD9bTTPyo04-DhrmY89cVl25PmsFHGnQTq6pBtvlkoz7SBcgpcJ9-VY5BCp6W5pP2PsWIG1CrVGmfXeqXNe707g7op_q0VvuYGZ7EPx48FvSEUGDZv0CNXsPp_kKc8_LFtx9E-GHZ-CQygiK7lLFOEIYKWbZEON0KZexIVldj0ANSlkUN4QF4Jaz4PUKoXWgrW57moTm8GvN2zuFm9qdBDlNCreCfhwbQzCw8OpRY14UTlWY1fs968K6qQk\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SGVsbG8gZnV0dXJlIQ\",\"reward\":\"0\",\"signature\":\"Y3lDUvDYgMvcFtXkBT0ZNtH8t9pL52ZXdUKXbSzP3jvwaKxCnAL_Pdar_t4iIzedE0h-wk-oAkQ6_w8worn4J2qlMhe-BtcmMiTNw1Z2ET8lDks9q48OUVHAOzmdJlnSCdbIfuMJHfmN_PHSfrgVzJnTlWSx5h-c4fwPEOqnUFgj3xXRec0Ofw3k3Sq_La9ak8vD-5YidxrWEngH2j5nR_OVVWH9ft6YGhdJ4V8jTfV5aNxTZj3C0kq8ftlrAiAP44hN6MC9Ltw7sooP9rOMPV3uLMB7ORtur5z3PG7YUpGiyA3M-7D4HuJp9UXB0E0OaZ-ayK2lUHW4vBFEOSgwuX0t7265uZiU21k_UchrHZ_llkOqLGej_9tVw5fmaAoZk7_TW23krNBK1cOwyQ3z1brvic2tQOzPWhXy2IXFrcCw_jyHHvLS_Juw8Wla1BuPp0FWX4pBRqFccs30WG3SbHWa6sj6EH1F6qknQ0ylLtBtI29gGu9ZKZXYDNV-BjATNa9xWh1jXPTJNIJBkQsq7vojK_P2bVyQcdNfMMA8Uwze9GFRBv6zkEFndzPHOxOsjf99ySxZIc1q6nEW_ULzavekamKmS04jZ_hUAn2hd8wgml0mJJnlspxfvdjI8yXto1EwdmrTvENiJ5_XXNG0IvuN31AEvX6Fq7_ckIjmu2E\"}"
  },
  {
    "path": "genesis_data/genesis_txs/sfAY_3fQ41LahxW45rXfndEzeHD1eeWJgI9ZaM3slFU.json",
    "content": "{\"id\":\"sfAY_3fQ41LahxW45rXfndEzeHD1eeWJgI9ZaM3slFU\",\"last_tx\":\"\",\"owner\":\"sFo7nuc1VXOqOm9eaWYbQTPMG4pSwvVFUPnrz98LpKMeAXCF4lBzEywqTtEjH0UoRFXZqSRAyjtzNA9feouol5Z_4nj3hGmOW4hN1mgPAI3sqoe8NTrkL309Fk92K_s-vv2pzO39umytyDmUQtU-aA0JjPjqVlJGCulmeljaJhgWGj3OCnxu25JXckENQ3dAsHWMR0lAEFKAubCDeX4a1qnOPsvaVUOi9KJykiRRhrfw6nBNwgmdSdQT3wSmD-nl8GZHW_NIs2vYmPi83x_478jzjVEf9MYJQvHeA2_qnGMDJ0IIUZ3DMpmNtvYfc3ltCe6loJtvGnwJyy5OoDKGpAEy0rPoD0QiOnBVI2oh5MM4iAfbFk9OvG9wbzqppO1bV3e3MGsL43Ma9QJxUgKtMcJ-ZIk-2VyJe2duhYp0suIPhX-uO6lg95kWSOzH43qcQQEEnBNoIUHAKIYUCtVPYfBJlCbxXwCvl45NnPIkICQZjawdrqx7FzdzXMGNgfbqhzQV_KoOUoRtu97k9MlvfmxKgIIUjIguJtIIHmeSuWjoOezZMFo99nV9TfGnKT2Q2YwSsbGuvyxgAhkL4AZ8hywnN7j3HHMb-Ytem_406-7dTSX6Z_GXOb5o-Zd_NjcjSADs6UiRBlOUqc7z4dDKb_qqpR7imT9xxGHrNEWzL88\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"VG8gbXkgbG92aW5nIHdpZmUgQ2FyYQ\",\"reward\":\"0\",\"signature\":\"djkZuvcpCHherUC492MnjwmKhPZeQRvlDr0zTT1w4yc0UqJohNKHL9RH14iXeDoEC8lqw46LT1TYeLdljn8Gt57nVKFk-QXZ2CKQ_sWW8rFtil2xlg49XVhVvMh8JsgswLUrywrCHNuXweN2e2dHzUxLoI1ctRum7EW7hJN9JH-fgUXeBForiUlTggxcyhAxo3qAyuXVVtxNuFWR0toT1UgIO1nOIAOCoB4GihPDHhz8hCpjnrU6H_IgcsmQ8Pzb02HtiaxvrX3rDldyd4hTpKDmCa-jdU0-bKwGbMPYrUt46lKfBW8lldhpiaVKLgi5rmdRd3hz3M0n8xfQzetaCVAi2fWaOPkY6A5GZXDwG7BT-1OuP-v97StL09mqfItH_GpU3eP2RU5NRepl6PI8s51QkbtGO7Jz4sMxa9pvU8ZG7rFU55I35va77IHeGTvYE-lzPqHw8NRquuuLyyWhW7EgtnL6XcOiGpCm62WOUS3E_WkmP9CV3XquU0ox41hIc9jKPl1NApdqwGZHPR5eVzLFltXYlncOe-042bL347e14FxBh380CBJgCIZmRnFvNQl448Bbowadq1PSoGBGc7t5cODHyVMcE4hH_rOYlY6RlWCwK-W3CNvYJUZQnOo5eP7BwC9SLxDAPxf2KkV6bDMvZl5x-POI_BHbHwAnhfQ\"}"
  },
  {
    "path": "genesis_data/genesis_txs/snWRgSI3vlTOy3RRkuNckM-ws-5lpFiPMpYlLx_zPyk.json",
    "content": "{\"id\":\"snWRgSI3vlTOy3RRkuNckM-ws-5lpFiPMpYlLx_zPyk\",\"last_tx\":\"\",\"owner\":\"xCZick79YbWE_L7r5Jl2DaSGjgLpgUFtnJJVIPZS-pAVuahvMT3ZefiTgQTgPhOglf2x4Wg-ELcsbBzUf-l1J2aFVSjkodfXVLppoHMgJxKt90ozPdmYdb6QJdIXmJcSSJAltQW7Uuua_BrZQ-YWDMkei_12MqD15iZwT5YfMqKy2o62PIKaqew45oHLbSUOTfT3_CGNc8SWrS7rXN8FXEZgJtqDhI_jKwL2htZxQ69AfZyunWrfONdeTdPIjwY6VTojCUpMoCw7z1vdCfJKyMWveaUXzV4dhq2o3ZjC4QA3SjHuaH1AFb2ZM_018jKApXKg2yn0_37fav73rt2hDxxFL8h-j-YGsJp5EISVZU58VgxyVmkvJU_o22yST7O7Qe3p7Xxwh-dIaFUOI5HRmdoyMDAirnxGm3KcwlD25Sb6Ze-gLCuCJLdTMiaEhd0ZUjdWPzJ9KIn2wviIXdGLTs6coHZxLsoMOJQ58UcYJHmfeXVWNDBf62ETMqelSmgK-kldISYoY-i7aoZC9xpULiGiiIt7ofqNlKFaV2LMvfIzJXsutgqXlJ3EvME-Bz0h12c7cACjWSI3chF1sT2i2kEnqmioxE9S3E-JnhCEWynUedQPHkJLXDPThSju09CHTqlZuTK4Alrk0ec_neAlUDsOCiw24lZHMtflvrnTz-E\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"\",\"reward\":\"0\",\"signature\":\"oQ26WcCTzsQ0M9Mxrl-RtOSsYtbL2HmT_WGUvMWQxMrefZlRX9BfQMYO5sT7CQS-odShHPvQrSkgnc0QrRz3nrbG-9usmvB7oAGk-LL8LydUnfBSFH9uoiu0JiOnmwN8lOF2BEcW2GvwE396rDyaBMZESNsQiGQwwsXTRcHjRy5kYkswqrNa782q-rK7j9sFBHwzuQzw-4G6j6l34nVmPjgcQ5c16GFGgPhRn-ua_MAC1ymGdGeyDS_mtU87eK6yLT4ygvPmj-yrAyLBy9FDEZqmZNaUeeMxkHL5vqvEqF3EptA23XS9jA-LKTWkXNCUd2belzTsk46j4fxto-SJ47J1vL46o5eIjQz9mIpPjsItVq_RVIsSLiQBWSOi6_OtY3Z9PVhpNhguwjGpVMcQKF76SASOza-50eNwwqvo6sbSPqX6BE3L5bFwmG6zOaD5yXGuXsB1fz9LKhdlYLOTRXlmdu-anmEnqtcWp07MH3f1yJsY3Zdd8wm-wImUv1OxyNh4wG9ehStXNqq-dyhrV9L2fenNbKx2XTLxqYKa4aKR45hukjOg0KnA5dkw3uo4IkOFIDXMS1DFE4lwut181yhbftarurBLnwSqQ90OcvKw-fnPl5ZACPfYYXw2s7RNX6hAT3QJVZyCR32Vvz66WoWqrWn04SZyLRSqZ0iD3F0\"}"
  },
  {
    "path": "genesis_data/genesis_txs/tOIFTqEef5fQYPzhlkC2Um7rddT6MyrHPzUWXDv_mJc.json",
    "content": "{\"id\":\"tOIFTqEef5fQYPzhlkC2Um7rddT6MyrHPzUWXDv_mJc\",\"last_tx\":\"\",\"owner\":\"wITOQhpCsLUtJrV5TjdWhknpA1-DefF7TGI3EQxs0aclIW-lNRS-LJ8xOi0POgeqg8utaOIdkTIAsRcOKQNW1L4mV8IhAmoHdVN54yMJNOpHWcpQCWdgezTvOOeL7yABGZHLYmspS_xN1rDNSRck5M9xuctcsAlHGMC9JoaQja7MMhT4GoPMZR5NZlQMnC5QcRoM30N9wgMw79fGZmUB6uZyz6qsor74SKQNI3pVfp_4I8YIXjdMv7P-V6_5WO_FvfIIpVBxXoxYOIIQJ-GhlCauOGEZPSq0BTvW4WmgYNg4HAhTUrkoM-kDz8UzZ3okamZdqW6w0m5xwJX_W3PaORfhiEXK8-MVd4SKg9Ajpfzdt9j7qMAJNKIrU_ToemXDeVjaClugemMEiTPVyhEO6g53Dw96asUk0fi4n1JuUjyaXN_7tSV7NkUw5zsxbjs3M22K8Om-iBtGtlpnvWM1sqTMzzPHSwqy09bn9Q5WYO-Sd0PTtoRvJRwBujrOWLIfWAHY-mNhdc1QMtvTXCwk-QDeLZ6OC7CgxXDis4EFKF0qPXjouEZsnrmlya1FphmBy5-BSbkZDpe3C9wSQXiaj5_ciEkkbVsp4Ml32Db87scXpV8STy2Ze8SFMawMKB1P0h16oxxA_npLIRNbI39kMd-ibMM1SI5NCDLUrL8EIM0\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"VGhlIHN0YXJ0IG9mIGEgbmV3IGZ1dHVyZSBmb3IgRHIgYW5kIERyIExha2hvbywgYXMgb3VyIG5ldyBjaGlsZCBpcyBkdWUgaW4gMTEgZGF5cyAtIEkgbWFyayB0aGlzIGluIHRoZSBJbnRlcm5ldCBBcmNoaXZlIG9mIHRoZSBmdXR1cmUgOik\",\"reward\":\"0\",\"signature\":\"apYFT_uBANJQkj7XtXV7ao1vxF6w1nw4rNfysgZ0q2eZdR9rQojOduIT4zOy48gpXy0J5g8SMWCVjo4FAaAWRochDRaRKLFncQSSyNVSLJkygG2hCd-GjzGlc88PHci0aNv6iFy24mCNI1qRX_c-p0Vh41qNKzZSDnA7lF01Aej2PzYZKQXJg5voJK98Vl1QbAtG0tSPhx3D17oGdhuVrcks6l_AwJWWms2uQeS91jKVdWYHv2e5GPHsjk_LCCd3Mg2LsLYy9xtZFYXp0WTECYvbSraJcvjYsGcO4g6ZqV_aQZzs7XXPdeQTGk8KAgXDtZlZqS7Ra2tFYu1gK4S_iAErAjAiftpIfNj3MEgnQ_LrjP3F1F6xqsPIYTGo8QBIIdFWcWBhphIYIEDWD_JzpmAr6Gnnj_U_6qE6cgKfwGMJBLX7y6dhakbQJsH1URW6BsQX-Tuy_vo7XkrShQX63zhQTXcX92KcyPcat2pM5RQuh6p0A8D7AQfHN_1bkw8HEy8GWI2aysnrMs6n-A9zMdFxpWgZkoUsPVNTo0Hz5YIqZJYsJZZyyyMJhm4QDQAHfay7JHschSVIIiWTeP-WOj6l9pwg-CVrT8Ia3YIvdKHRXNYh99jqETlCbrJiz_OmyJN8EzoCyrqmoTM2BCKnXQLTyQlCt4PKMNux2SXMa9M\"}"
  },
  {
    "path": "genesis_data/genesis_txs/tVLYd_62zbU-VPzQPOMHUo9TJR1dvSZ_pAHrC5Ubs8Q.json",
    "content": "{\"id\":\"tVLYd_62zbU-VPzQPOMHUo9TJR1dvSZ_pAHrC5Ubs8Q\",\"last_tx\":\"\",\"owner\":\"nKK5DF-Tg_LMl_zRO9BEy0I_UQQs6Hu50ryjWR4zCthePATsNSdy-c4YOm08iyHNGS9lBVnLD7qICGa1mo2wfWzricfvx0EM549ME1EtKpSuzZ5gttmdIhy6GWWMlrrmtxOoBdJek_JMkorDT_2pvp858vRhp0sUxuJPES5TLRGH16uHNwd2HVnzVQPu0pKKdRjOJF9cM8IfiNfQeQLSwI_fkm4uHmYf2axKLvA01Dw-Ia6hP9_Cd2oD4OGfzpyPBtYkfqyNuO3KBpXID7WIHpw1tr__dUqj2PBCgq3IbAbINVFJfRIIo41-WYZFfIrSAr4R7iqzEfVerA0LNvM44aG9d4sAPKv3htxJo14x26dJqISln0KbtIsBi5Cfhf5XrroZfEhj9sgcyNqpEcsFAuYt68HE-0YXs-F28QJ-9C11CEJFw7GBtbMd-xBIH9sM4VlUjRSh1njCpeqkP4L8Cr58TFismqj8GMDkNzZNQe7DW-NglYzU6WGusQPSC6ySNkJvvAjUG4VsAciFSMQ2lPHqiZixFUFDo3RxBEnGn9g2CqHLVZMUyinEVf7GzhYgowo4n4mqjNqXtgVhx6C8fL3iGP3qztejwS44h9hLRm0KIejH5KpicyJZdyc0HUw43u30xt2b_jGMQqT9OXgPzvhfZ04eojSt-C3x9F0LnrM\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"dGVzdDI\",\"reward\":\"0\",\"signature\":\"Yz6ZHz8Dyi7cKtFB6BRkUfvtSCjlxrLSM8HQKuScxf1zzXdcTfNAywakz70stoT5FKDQ5UevfKS6utADj_jefsj6kMT5XfGATSApiw7X4aWpuaqcyFvCHizGukIKxuy3UnTOeMX4tBbDmkx7AbK7ChJcQrd7hYBcoQZpkuoSif34sWXogGL50C6O-mGU186ymFvOxWDeipBFfQjHaZmLUP4C25vgmeFbCQEebhrOfQk3VGGpbji3iyqDoxXR0zAX89kMrqpUqPRJhlAJuu7tdvrclESwiquNrM_svzX3VtqvAQocxPfDVjZY5n4EfSUt-iGwrsRJHNdOBLJJGfsOakG5U-vmI7TP-XC9wEzmlsM-iKB3Gx8JXPcm5eyB8-xIpdKieRrjijkozARjodQ4yrAE81AyaAQ-CvcF1AR31JRbOglMiwA6UVgwtaHUSAmOyelEvEM9DYWDEF7zI8CIh-vwL-JGxWA_DTZAHirkCD2APAfds434bPblMBGhY6UA3hcpYXw4aRAvvGnXPR7Gko5CCHm61ayVVx47d5OMypS8LCyWluP-SLPSGL6WZdph_jbHDoZHafPJUkKkNk4xReS9U5_E5LxvYpX99pCLgiwJAznYiO_yoWgSuWRFyYrF0RHbRdM_4qu5NAlRECvFu_F6PYesgbgAS2HfJAox3DM\"}"
  },
  {
    "path": "genesis_data/genesis_txs/ud3zGJZA5tPRoitGG1c6HWm9W7iRS4ZF3u6PbZ-blns.json",
    "content": "{\"id\":\"ud3zGJZA5tPRoitGG1c6HWm9W7iRS4ZF3u6PbZ-blns\",\"last_tx\":\"\",\"owner\":\"0u-MkiRuTdWAVEUZAFcE6D3Zba1BrJj42UF87fM53s4hezhvGn8kM7pBfu1-XRaotp6ntFX6aRe2hK6-EAaklG4XbiRWaG9u2uPMOKHcTNQ6D-BgKaGiE7yXs4l8Bx7HnblHCQu3n4pYj8UkNIWirANdN8xnX8eknBINLgWFfXYaJ1A7P5Sr2FQKww15dWPC2SAYgSowX1no0QJiIqxy_0gtFNi3gueSFSSPYU9WTUUrJWFkwW9wobxSmVDsW-idHtgAfciO07YmZYppRP-wMpmlGykrIWYgvRryiQXSCC8M7lnSEEtktIVRUyg4UR4v1z8UwCvDrzpS_4fCOit0P6ScuZL4gGk5D5cz56oe6Vb3161nQboemfy-GeDMtmkFXGmLB3RxVXxilIVxMj7SQmaddDEHpzu5wwjEH2xsyDqMeWPW6poyJug0zGOuEr4M26DJS0mgSRwdrOlja1wG7Y12ZsvOCQ6fN06IygxodA2YcDDmn7MhcIkvkcvnvFhzqhw2bTpXNt6iXyEqB6ejNkG838tGJmQElWPvg1PdWoQ6ySO6_0QUcsRqx_6glarGp9r7GYYQwIHidj_R9OwGjbGV0LD_V1iJo5bobx7FJ4IMtCrMETekgV-9ZdldyNsGprFtB70lkChxuc-6-bNJ1SndiMrt-CwZ7P5UWPORRRc\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SSdtIGluISEgdGhpcyBpcyBnYW1lIGNoYW5naW5nLg\",\"reward\":\"0\",\"signature\":\"aonsO3wOUGuMirUalbQyq8T_kx237uVqltILJuJq8FE9NNpgBDuzdAkAgGPcXWVC7ZF3AOumS9EZyMzAF0R3joCttFuitXUu55HISclssVzP1pX1ajZJpcpARxDAzbWKP2UPoplrxdI7C18gSS6nmD-7ZqrNzJfJhshRwZqJiccm54qkTPjRoWJ5ANtB6TutDZAl6V0mIzKIu12WH0xe0dhmcKKY-wyKj0oIRa-Zpc7ANq2T5ng-MFlyD9yygOQHYnTOGhsyV6YZKzUfRZ6FV59eNr0INHe1d0dl7bWHNyXuE4ixBzK-XJC5i_bSZ1DfrUqhXuJfukV2P6uZd6btWISPgeqAGP5Li_BhYIaqsJWAXBWb7oWFCUQRT_sXnQXj3HM2jVGpkep2o4fbV-BqPsigFEwcZwp35f1gkwfSw2Kgk0J8UPYg83EMih1sDWMs3UozYrvjS_IX2TdyhAiCR-jOi6GoMt26BxU15pPYTwsjj0XRhq8cqKrrWdNbe8Tt_EILfbgkU5bBWdJtGs4h9YyHYk_XfkXmNurV2hZVVs3Lz9Xy6LbormtgeUPFUrWe_gfiy70kuEWteIHQd6ia9jkoAJdhBkr9rDaYXO6V-T53pTy33XPDWeJet-_VkMBJdpNEzpldVZrvFb8WjpziVQDaVglkMMkM6t2P_bfbnu8\"}"
  },
  {
    "path": "genesis_data/genesis_txs/un3O49lggBX9raJKb6yuql_QTgZYWakWw5ydwUgUuXY.json",
    "content": "{\"id\":\"un3O49lggBX9raJKb6yuql_QTgZYWakWw5ydwUgUuXY\",\"last_tx\":\"\",\"owner\":\"3JNVK0XgM0W4MXA4nqU-wjallXtv0SLEdRR9ATfdCTrpAidnSHBDya6U1M9KwHQ73S2t7HtB2yQbnYShhoNA1NUJC2Djh_GaqrnFdx2iGVhlIuB3pYNRg1GqLnVWl97fUwFSOxrKtNzdDDIjYufNOWvJaAB5eIOyxnalB5POS5VLX50X0UztTBpaKBvtN9riiJwIC_VFjstjMkn9IagbV8sfIR6ouBN9KiEV8xwSWc7vieFFHaSwUOHxGSeAzhHOJVk4WQpr1VygzmoiIgUW9w-wyPw5omtw8CehSLJqHumicRHRaxZDOP70M9YnCASPArPvLZl5Ulv33j9DU-4fLODf2SHIrupVfTjxI5OsMw3O0UUjdAqBpBDP-7zRFf_NU5DHZrWJDsXU_qQinAV9j8vgc-qtQREZXoEFkBb1wm4mg5Zi1PeSOm_W60Vg7355MxX11Ked0Xt1-KAL2p0QkQ72KJMSgzWFhxZd8YtSeynSzPVKst-XyhD9-89ndlSEC3V98-gDnNRrkaAZBBY1sImkFXx-wu5tyGKGEROtEtxi-2trQMKhZuDWyl_kzkrV_E-dZl3PgtpFyuwD75zjImtX3RSdJMnyj2sNyqbCnmtCH-7CU-vG9KB5TWSNbmpT1wHCkvcmhk5ScFENRpgab6t1Ecc-Ov1YtGRYjOMLEq0\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"QSBUcnlpbmcgTWFu\",\"reward\":\"0\",\"signature\":\"L-K9MHynT5tEJ892p3ehj7M7CtkHA6JVirZs-GWjzePFYYx1AFX2s8uUTFW5v04vue5a1VB2Y5elEzKh5abLnyPU6PQfDHIK6gL0em-NeoTyuRw62TvBncQVlJNEPC6QK5O2PbW2rapmwcOhvE97ckZlqfAaKz3lGNu3wK-Jo596KUSXDOqomvBQDzuscHpFfgl9E5eEolkuaCMgAcuNtxOXwVmUIfCzQnQfh02aVmLaoR-P6r8oqDESh4fHRNJwVHV3YCTqKnfh3LfHKG0LNSf-x14mFhxY534lDeXTaV2s2XUx0Ngo_CON21q9RvVHvTuhmPa5XDeUTT12epFnc4XYAiFIgNiVJHgnxm2o-00whfikjSJldLu31D5u3DQ8iiSelqp6V1oo-y0gYgQp1R7G5dunqk_amsQLc_NYK0KZZB1FdqnzUFQCmkc8GOdKbvYvGwPjlMoITkFxSzewNwoSAvPC0TO5_6tkYPd6FxdIjBxOCgpLHeXJeKpw30ixXXDve6jA8U6wovPhPViZ8VBIU2p9mFUlQnPGSDZRZQ4AnykrINFPm5B_nNq17y3kGaLvYduXyKRo4WCN2fpWUqFemzs7TB4wpIMC4MkKBQwGKcLTj9Ly68ChSkXBk_mWo1B_wphq4Hvy7KaQ2wo9hdnK6kPgKBnZPY5ZV4fR5MM\"}"
  },
  {
    "path": "genesis_data/genesis_txs/utAoO_xht393CbJ_7P_ektVYeEpkySWLM-066yJ5HyI.json",
    "content": "{\"id\":\"utAoO_xht393CbJ_7P_ektVYeEpkySWLM-066yJ5HyI\",\"last_tx\":\"\",\"owner\":\"44fkqwyKGn9tpZIllzfGN641scM0im0amYHIQFEj8J-SdvPTQ3CIWClgUaIKDbYR7ooKr-6AHr4G8yIIypfzf9HVfYJJOHo_Vz_BV4Sc9xV-nSxbYtzlstCjgyTdMRMZ8REWz99mTEIBrgmE00RDahM0xJ_G_8J4K1m9yVUg-t39X92i1wGt6HyHMHg1TiC8bjnVRCTziUSx46sGheWcXodjUZn_cVOgOq9g0fNUA5v6ZlFbqAp83VZJQz8D3lUcqQRKZVk9X940BCEO6skCKU1-0OwQiDj7IAOXwyCJg4KM6B8RexXHe4E8YEg8uAdWnmWEeQa37EPGjWZlnN-hpwcxi-nnfFjoQ3w9kdxQWoyLNLZKC9KSWaibFfX5zlHeaeh_kkhC89Wo2saefZ5LZNwlL2OVFfdZoxP93Po_LQ-_rqzNo4nfvABiAU_JSjUX7p9q6Iurwy6vXHvE-orZ7nbTKluzHkswpFeRVmyPnkrJs7Kr8vketKVVlJfJ5qcUM2XiAKejNEsWSFeaNZS1RHfnRRB6JOdXmKRSNZuu7VPtT2NovCpouY_fKJHW4T2392gQSaXQrSz6t7xrBwp7MEKc0f3jybjsLiB-sqKv7R9EEdorTuvgw9HtBMAaCSbRw9q4suZq7cOE8MJ-6zJIrngYxQFQY3vIQgtBQayPZ8k\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SSBsb3ZlIGNpbmNpbGzDoA\",\"reward\":\"0\",\"signature\":\"wCGTrdB9sZHIzLzrMW-wpIPzsR5-0mAR7BzIwjv9FULpqqxHddtZoLcSVMu75o-v32dPJeDoMoTMNQr5b_Zg4wzTWEvW7KPW9D9AliIZGZuvz3CkMENojmQ1-cxmIVqWMJ4X7OXkHNdzuQBLtXJMCata7750e0ze3_ifT9MPkyldzmTIA3qhS_UdeaO3AfhvixPVpw5jw13dsD_HlG8M5LGw1iKxlG23K8kRDt1vm-Bzqxpwi6ymKZ4tBqofQiJaj-jOvEZfp1PwbYs2aWKO8w6KcEG3edXuPU2DkfnWkL0XaQVw3ZMYKfE7f_znsx10d4YAZaYhlP-ByW9afbRkEyOH_0HN5h6OKG94J212x_DFhx6A-8e7W8hd1-OIkIqlc162ci85XDIARqF-ph0xOTgT2RvXNIP0woSCryWls0VIFbq3Z8QS4Tdhash8-ER_Glse2H6UyKJ_QULZmh-iMbh7uBU7tsKR5_E0exHCYaP-VCKDnFIsdp8S6OFfUCMp1O-9sJQxqjB65oxM-DKNuuhfm7I-e7p3woHaid_BShHE9AIxBBBV1css9fjAX7MVizFcsxRmXzEteOJI8_OtUz9m4oCQgV89X1dBGUuphZU67BmAmIL3FlUXdjcDGWWRe_USc_2YLI-WF6z4MatrRMUQCVFzTk3zoz2By09EYSA\"}"
  },
  {
    "path": "genesis_data/genesis_txs/v2UplxDprWwaIwbB6z3KNEj3GjloqM8SinvVahZ1Wpk.json",
    "content": "{\"id\":\"v2UplxDprWwaIwbB6z3KNEj3GjloqM8SinvVahZ1Wpk\",\"last_tx\":\"\",\"owner\":\"rzVp0mivRGlTKQkQsHlt5ysXnXSsJ_6ep01_Y1pfejhLGbOMSSmQ93gZUlzxsIY4uGWuz-kl9hviWv_oLNTRIdIJWZrYet_-H0vePvFvSGOoTU05Rj0zU3yLxmdW9cm6dmMWaMdoPfN1VCTu6EA3ZO46pPZVAU_oKAogkQPJdWJc9kn4Pmb3MNrTD-Ab3-_paE4EVP_mQ4DH1xLoMU4sjLPsmbgrdU2S5WyjhtOLzxnPcl90Mk0UPY5wJw44AjQOdmI7aC6yghnwrxJ7cjJDyhoc8TbyKIeI3szRGr4aojLiLqS9Qe8i8bNw6hrE7oaSwcYUg2N9gqfzO2KYonnyoVvzoTf46yAA8-syhL0PcV-Ruv0d81gSqk7uAcQnNQreAKpnr4qrE19g5tJ1Dx5CJIgzXjrtwNsgRWu91V1pymdAW-cMHNahaEPyOaswzi97xjvrv9xqr6ESbZcDxf3ySZtNZXk7_NhTsN7h0HiASIl5wxTuktar6v8kR9WRs5Na-nE5yFm-oSTs7tPbMoU_iH2lKf6BXkMkxi_MmJyyMKF0xJj2RUjCj2O_51Gk3eyOa08GpKvHfIhXYNDVQIGHRb7tTvCutkMu5TgdQXN4i_UUFm8Vis7Pc_N5GkZUBmz6wJ3xa3KMKQxOg5EpjJZDanaLr5OpmcKQ-7m5OKSETNc\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"dGhpcyBzb3VuZHMgbGlrZSBhIHJlYWxseSBwcm9taXNpbmcgcHJvamVjdCE\",\"reward\":\"0\",\"signature\":\"Pd5VOkBU36aZyKpU8lpJjA5eAQW-Lu9NAYgaz4KiBRT6YuI2K1WWn9JigzdUyH95TpsFm7kmYAd3IhxlBTIJk_ARuWo0AorwHv63de-NVYDE5gGypyJQH88U3uc0f1v4sklDsM-DWoRf35Jo2-6uBzCqM05CKnbsIxmESslpKeIoXLu92f8k8zEpGKcDwsuGMJhc2wTuPtg48Vbol42m83n6qBenwfLPFDvDW-754q6mirUnsiSiod_i0zXHzvsY0I1uDWLAtaPrt1RwjyaNEhKDhoVa5nqqTNLo-hGbxOAoiSP5xhiSYrjFKFXylMPk6gfDll2-eUNaJcWi-eZx_pRDFxuzsw4f_FgKHPB2u3xOJTtkmCWy2NRxOuMmqb9WHXvHycfLH5s5wSLTjXoSQX2-ha6PYCbi164aSXjOPlbsI2Fz3g2UlEUK127lxsMcNMaDO0x9ZwD1LOXE94psQyLvEF4rUq_VJvwbzU8Mf-6GNHPPTyyJZGWzCaNMc-5n91TKGVzj36G4oKSbtO1Xsa0gtDOipgoM5pR6t9eXtyaSwxTrIJe_kgRtO-coek-x9hmX4-m3oDRSfkbY6Nx4WC8EOn52Qdfbfa3XQwFZX1RXD2Wu_EnM7wfdgu_sUiKq_KJzWnnRNb1QYVdRZqF87ZID8TVhbl4W7maHCcT6YsE\"}"
  },
  {
    "path": "genesis_data/genesis_txs/vQ4zTq--De8FHdVnE7sYCemwiaqoZDS4emR_y6o6ZFA.json",
    "content": "{\"id\":\"vQ4zTq--De8FHdVnE7sYCemwiaqoZDS4emR_y6o6ZFA\",\"last_tx\":\"\",\"owner\":\"8cK_ACUmfZLwqCxEKIv0ug5Ky064Z2jP_JiUEP4XjvtIo1HXGyZgKzNq_p35yOaLl9_5ezhBofXhDKuLb1_AoRAhNIEbtt_xGnV7e3s6vtTQtddMTuXO4cGicWNO41IA_AtJrnaKUmjFghaAZcGGKogAf7hv_x-pcpp451D37Y3D_ckK_I4An0i-cAN7p-3grKcxYlqA2ShtzUiH0R74muI2RroGg-IV4Z64ojol9rg_mikIrYVrpG-s5MSZD9zQuIQn1qp14OqNHpAIQrQQMx0iSdTX1YaS6oqGogE2DlxM9qSTYOrQzbvzd62LMvjU2-vldIw89vwlbG8l_2HHlN0z3Isst1ElS3s2wgjuEMPwPJDZH8G9Ye7el5BqEq_nhhdfF4s2XA5NbemVBzhYEPWu58AdoF5q5mdYrOEStdJeZ551Xn6mnkniBeADqDbNOv2sZPJF78nNmHP7QDPU7jqDzzKxPSP5N-X02L8oZmZX5O0NNubkl-61Sdm5IPqoS9UJ702C_kgPf2lZmapfI6TMVg0SeU2JOcY1Plfm9lo6BKtOFNY1njVihdqQeGhHeUto5AaEMCs54512R57c0Kj5TlcOyKtJujLpsggG5hk_qmMenk5s8cogZgHn6qaa-YtcVGduR5OSVE9z3NSK2zzGp5ejulOWwf0mn-wS9wE\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"Ym91bmRsZXNzIGFuZCBiYXJl\",\"reward\":\"0\",\"signature\":\"U3Pds3yfvNSoZOWHbVVmpu2dzTXSW271a-QMdSy5AQgGb1r_t22OU7eGI4e1oLxHejcuwu9CxSAk9tWNXPYJQ1X8zbd_tviTLIZgeLOEVlWyzl5GjJlpqO4GQv60ruY9KnaqitvPeus0TjHXcwYiIKmDlePPXhYfmc7bFL9XK9A4Rt7pmI2DZ1AXkSDZW848CDZRQ4uD820WUSvwKTs-F50SAzdCNdymAgX6c-p6iOFbNQwNJqmBBG5fX7HZPQX1ZHsKoAmowwOLY3IJXcPTCfepn0_-8eF4WiJmrcYft1JLmLG8xlN46GwFjsxR7tWCRI_1wY5w1pSDI9nqz6Jn-fm_9cvQrU_h9ohEcUHZCqRJfCAryM5CJM6aYILMAlbFxiEybJM-ea1jcD3zkGVH5zNDKUOuUn_pRYQMPKHpcPXhWy1OdRqbmVOJKO4AV_7MjUekmclhzY2rUA94acEB1WV89A1n-9dcXl5fZgYqbFfoNHk7F2-gdTLFloy8SYuIyRyUZZvCq9oYaX0QY5NP10iwfTSmA7cqdZ8k0gm1hFM3o2fqA6m5jT6UxppthnorEAW_hZlwFr2_hFSTI50hRv_TopBb__Nsl4Pq0kY6pIwVa4Yqa2F_Y5RKZGmzJfnIQjLb5qBrQXpbiVPuhvFiiJGr0Guh2zy0pYbEPdwmyBA\"}"
  },
  {
    "path": "genesis_data/genesis_txs/vWeY4yJSJF9LXogRZb3Qr6QyLtEIL_8IY4bzJ2e7O5I.json",
    "content": "{\"id\":\"vWeY4yJSJF9LXogRZb3Qr6QyLtEIL_8IY4bzJ2e7O5I\",\"last_tx\":\"\",\"owner\":\"2AV7XaIi2w6q8fMi7zuTVn-ZjxHjy9Cy0Uq-uPTtGT87G0Mwhb2pOBBEaaYSPWhABq5HbfVBNyKueZfyqsH3tUpoGqc6qjLEkfqGSNxhMIHZnigj7icoTeYMeTxF8prG1QkMFMSCOG13Fvued_xHkM-8FxXJec_KQ3vkiV2dpKP0pPrvyWynsIpIdVn9ji_Od2548-VBWIKD8odyOv05hf1zCv0B0u72P2UPNMDpMPk2I6FCqd3wFr_j_vKtPuOG3BMQkbpjzfgoOCdvb3ETeQjNFCOt6Sr8YIkJVBUL3MePRJarmz1WdpZ9jA5jbFTode9pjk1RgUywFz6PKmEu3HPFA-J95-N5T6zgrl1MI_eUxnNcC6F5ab6-38696qhQ-041Rgq3ptK1XBpLdHbsyCMdfyr5TiMS9jZlQBcWHyFjIuLqEgK-rQrJAANDx1bFGkOLtJC7ottkaPAhxGIBXsYgXJ8Qs1B2WwvrdojQiAvdY_QJctHryjgBC7-WN08_M0fKjEZU-tKD5Ay0Bhv6fzQHUPPBQOGjnGK9i5qDIK6fFXuHQnfWwG8qzkWMK3S4k8sQ0_c_Xo_sRVrBpVyQskokdeuU7Hl4Q2cXEDpM-wJdkbJe-8viLav1O33rR3l_k_dP0-KBwTlh3FY83LQ6K7j7Yv4LCIXXeyYQkHZtXa8\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"RGVkaWNhdGVkIHRvIG15IHNvbnM\",\"reward\":\"0\",\"signature\":\"VjD2hrAIkkFFGg5vanY5czWYnUnN9sI7fvcdM5ixyfIZJq71YMm3T55AiX3NbywRO6N1XhxWYh5QtQpC1FAXeQmuwUBhJxnkV4BLAnlcUAYBxsCJS1G1ALB9VieYqiarXDmRMEV3XQeOsYfJVZL1loJTf47fw-MueYDynOBShA9NmsjFjQoJ6YAz1BWBvbUbRwDC98_j0wNumN8ieC1oUf-jtqX-n9_re0NoyAtxOtJEuS8S3BdMH8WJ7krjQRQdZHCoToUsCOcY4Icvaai49-9trHfQBsHPw9g3g4SiyC4x7K7CyAblp4I419g8GDBOylHYbgufMUyGQCTAB9RsZCL5EyvEEYz9ftmgDfpFqCTSoqKvzj2YG9YjsOV2E-Y_psyRIRhvM-YUuBdOYrZQVom8wOTKBN3P40H8ithTEta9RH_PpXAZ-__X8c6hQSI9h1Zt7ZknwxujyypNBPuY1D_PVHG12pFK6lHHhDWObzOBlYab-uV_3512ZOrE5dk_YHyao01X9HAUqUKRZ5Af7B0BEPP4gHu8yfzMuo_baq4GNz9fEw-bgk7fopSM9AkwxNQgyGU-hp3VLP-Sx-Sn4zrc76uvsn3bgrj63fsCXz3TZ6Eyih8s2XiuHzAxwCNjGBPH-xSNTxSsQQt1UKG7YgOJ1tS7gHo79PtF8loWRyQ\"}"
  },
  {
    "path": "genesis_data/genesis_txs/vaJOh_TzVSoEgbgDyKz6ABzd_wt2-ouBTe0gA1F3oMY.json",
    "content": "{\"id\":\"vaJOh_TzVSoEgbgDyKz6ABzd_wt2-ouBTe0gA1F3oMY\",\"last_tx\":\"\",\"owner\":\"pOroELdvJ7uzhtuqm6ACzrzJCx2FQcBL8UKiJIoYT_3kibFK-tarNAobO7GudmYBDBBwQMerJhv84iE0aNyAb-pisdkZ5iAOe6IxePGavAlXVVOLwOoG3IbnrSworaCokNmlmlI3_RzZRJtUOIC41i5KnllaHMpPBReEXrT3pjvBzJZktkjyYORu_PU3FvjYmEsQ8JZVyeSzMilr14_4QWJp3_PkLFqflZ_TAbYALB9326YKRzCzRXYrezrG8aw6LVmN4CIKyIjpVsMfcB7axjbRi23g0r66GNURZmQDvKeFafHzU8sNUIMVkQTwe8ga8jKEzpg85QxvkbZe6_BG5fmHbPx4WFdJUiWY1kJGVrVWtJMtbloCuIASszKyebyHaYhNoVM4jBVObyrlBOajKRm3XBI_jjUVCDuAxgR-nIqmkJxjVmae9d0xVgOCGDZUmw94LzGnwrHv2pFUV1Zj_jcxVw4wUTit4Ur08Df_CCob5YXbbR60Bya_dkC_WEnR4vLFYXYZ0Oc9NLm3Inz31F_o04OFtephbUpCZLO2ctk3nHOXJBsin-U1oxsvEfZb9W8NQjL9WD3c9xL5z7iFVdixvBxj2xPbGl1qvt_Rbr9xdlwwobsTZfn8SSjhvDUf4Qw0hssvYHfeVFgGbpkUPhz1ptr8XrDOm_phWd0at88\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"R2l0IEdvb2Q\",\"reward\":\"0\",\"signature\":\"jE0ubYO4Wom3rFK6WFkBArQJvYEOCSTVBQTSpmi70LyvhbMLIgG1rRskP4YSDBNNJ2eNfgpJLLDq84wPLhGer9nGsAfz9J_DqkOerR7U6N0RJSek1GvxR4Xu0DnPoNsLHW3rfyJdeo5uw7wp29fFpoxbqBzOERPnWdvQ8fIvdun9YtGE4Beun6i-XQYPI6w9r7pIMJUmt1Vlu67pl6WHXTocslRAxfDsJZLOPUqxQhGLAO0oppn4GL_-hmN2ciJb5UR5c__jYhS4_E2Cf6gyMhrfvKKAFXC43lH0jGSTy9OGVjsSmbIxdVbZX6Z3wPQi-A6Hoing9De8utvImMjeqczls6dHcqehV40VBRBVgyu3csctQtSDzQq2Fo1x-BPcDAP0bVSzbS4bpNVTk5C78YxjUbuNLk6dpv43WddlxGYC3Hk1XE3psreUZmDoin_juYg5tYO25Jk3PQZfajX_KbupkHfk22LYNqas9Rp_4m3isHHhuwd4JsMqESvtTrW8Dmcy0wFhyILCfcXbTbIY_Tyy-KkelWhxmoGd9VOjNbe1WhmUKe9qrA85JJ_OLW0jOmj_h3gnZklBWs4WrPDFfsHQbZsS6mrfYqKHcFvnnj0eC20wPev8_XEtMYN_OTQDVwTGWaUXELDT0yK02dTj7K5QX_3BcQnsxIYNj0lVSM0\"}"
  },
  {
    "path": "genesis_data/genesis_txs/wFjsB5Y9GV61NqjCeyPCdkfXKUJOYccq8Bl9aljvwGc.json",
    "content": "{\"id\":\"wFjsB5Y9GV61NqjCeyPCdkfXKUJOYccq8Bl9aljvwGc\",\"last_tx\":\"\",\"owner\":\"r49svNp3E2C9gWAjqt7TWwMgeLbhLlwYA3MSdTP_trnPvkxh6K12B4bgREdWqe-NydZLZxwzEzHlUtigTavyRoMn5l7SQ9l9tGRXTef9yoTBdM9QLc8iJYtWFQ_M9l6waD2-YKsuuVLejtJMRoX59YVW_U_7kflYSXHNg8A1x-wDFGYqiO-mqewwQI6Gz2r00Z8VobltMoqr6-oZ2ZO1K3oN4y2cRX2E-5k82VI3PHq9OxR6CxFGGl1H-EPWuAX2Kqk4WvrNMQ2pwFA9cgD9sfBTK6Kaxif3o56uw5kbU8rFHIanvml75YXcHVCpw1iUSpdqi2_mqlsx-hlHaOymjtEDdlC-34w7zs-6QRvEs4O8gVtisoDAXO_HkPG9E0NHRfPd6_JAkMFBB2Kz2ZrXUaUyzt1elTHqkosFLGS-maphsIcIPB7nuNUdWveugSAPjgx_9Ftcv_y1irRb8lnebFS0_BWZzMUZFa_NBeDhHChp5DH8x-q1olhFPBy-NwmLfY9LPosbwDFI1FAR1kvJ5WHtsvONNWRJiHA492sL1UJtnllMj8yG7t1rpSnZXOyRZOnMiF72f_IQ_ng4Iq0p6lTBkNhehuQgQUbz01PsrLGv1Li_JlihPm0fnnISyFBC9avPOhMxzm_qYVzf21WvXgNPmJBxqeT9DoepQL0OCSk\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"VGhlIGVnZyBpcyBoYXRjaGVkLiBUaGUgc2VlZCBoYXMgZ3Jvd24u\",\"reward\":\"0\",\"signature\":\"ee3QGuXGjifu448031I3Wlg6eRq4ta_73zh38RjS8P1W7XiCvVFbAKxnX0oUrPIsBhY4Zm23t7t8b7DCNpMZzUZMD10xGkFniWZ6sCXWTWC3uL5Unfhk835RepYIYmAeMJi8KUZshACnJ5Jp8NJOt82PF_L524P1DiXrTURjCtNnFgJD9ydi1aT04jJH_G6EimqupbqccfQsxNRCMwoNfUeaJTr16YTU7CnlCknOa_Ac1UXzqWD8Llv7AB1J55KmCMMaKNd5UZ7sQHHXfpYsngWGgzAbI25Bt8a0_adshG3HoPlFakagyVRSrcWwb0oXvVRej6cQ4e5_If2AVTCEXZfRshmUtB0h65Q8xjMrpAI4bHAgWVpmBn19ofx7FlpL1xXYkqc_TLENfxCepfN8xVPjNOoVIQvRgFgHT7xGEG_FWRdGHyeooZgKae5Pru28MTumbklEroh7k9M4A3-lwJNqs8Jj7PKT-kvGBCQr0Nyzkxe5HWHrO_y7HR-Ax9SFTs5X0eJX0Wk2gP3qnPPjvHC_K-m1IEbeMLxyPAAaUu4azQaHkyLKptR6a5px4GZzIE9Zq3fcxf0D-xo9wHfUefEUJOBAGITOd4rjowpnZTkooqlMLi6OGGohIuq-z_k46UhOoaOoVRXYPobwPV7UnmRH_u4l_I4Ggisx7Bsn9-o\"}"
  },
  {
    "path": "genesis_data/genesis_txs/wUhEm861foyWdxy0SI7CvXRcWuohItlX6Ydqo2NvtY8.json",
    "content": "{\"id\":\"wUhEm861foyWdxy0SI7CvXRcWuohItlX6Ydqo2NvtY8\",\"last_tx\":\"\",\"owner\":\"mQyj4SGFKBKQuEt_8fpqcD2IPfd6MypdDtsJ7DMmvGaB1xKVuz1DtL67nVKZk3WWUDRARouDMxFbH8T78b3C_-IkZhp8tAImfhSFYg_fNTuI5qLYjKsZgPZpGZP2Gmi7e_2JoCAAXxka1ZI4eKy-SydA69iLQcl_NYyUoGW3dzyWSTwunZyfrNq9vdNifqViBR_In_dRFdGQtjhyvkLj6LHuvrv6ftGcEAZoqi9FBJpaBkC1BV3zv7rTeX83LDLjmGSXer6cWnrCoRzhR7aDhldsY98gtBpsTSgZ0lfUsn-Uurjx0VHvzKapVHWB-XTvRdJJaF687S6vozPSxYM-d6wsjNkpOGVxESo4W44q5Vh-LVcPWqVULJuKQhXVTbmJp6OW9INcTrJNyEpB4c04W-IKzJHphZVVdF17tENosaIIR2Z9ERC5VJZjSWrC7LKdl43NsJm6xfJiG9XEi7KdSvPATa2bo_MIIL-GIt5EkVjulod4xXrDpjAZQ6vp_6cxgnBLofThxTkbU6eU9oSJTvrXJpDPJ-EufZ4wabMfttKzZDEkcKD8qyffMcAVIOQIgWtjFK0t0PKcKNmqYuSyfhKT9jOeDg1H_5C2Ui1EcM_eziJSnppOeX8lFuRdvHyNWsq5u_N235WzKiuDINPHV-NxiPlo892dV0UTaQ-xJKU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"c29uYXRpeA\",\"reward\":\"0\",\"signature\":\"d6h0ojQ09kWGzsaIWN2Q61Si_tgflU1VBs9BjaA7I2PphoDoJmOMEYR27f1Q5sugeatNHDgVDJ9IztsdAINYnDg3Y7XJ9HF6iyXZCj4zIXQeuuY4Syx2bOTUosp-FLo6Rtf6TZzmQ72HtI-uUxpKm3G0xQfbhA3YAwC4wgB2837usZRdPvIMjJPoc2HlkktpJ7RfdRkTPQLeWbCmSjp7nVSIyllIgrA4x-_VA-6TQNFcMU12h3fVuTCYNT3d6HdbCm60d-aopB_jo4RqDL5vbEQcd3hy-Y88g8rl-HhEKwZ44C8MvVpCzIskQXtYzKpSz77DlqSIZ7g1f2QDWg777PGrCKXLG0IvCECuYhxUk7FKOfSKAmunq1vQikgs7OMQghK-6ogXHuICVYSpnz6f97zgw15UmXLsa1VI02ROkawbA1wkZmiaNnl5aLUwkKTP-UpH6znl-tXLBuc8c2cAHX2X69-W6hZUv6seLwq382eqqC3nFL_JmEBDUlfQFCQ5VA4KDFQClFp3uU03hy6X23MRMTb2NAxpwC4T4WPU2oxpLib1YXGUBmlU5vvdpA2FoaSok-1BrAF2QlLNXJjS--PqAJ4yvZHp6FVMHcqE4q3sc1XlblyI9VF-roNm1E2zApPAzM37H6vhr87UKqYWSEmIHRp5QAtEpDRmDPaomvs\"}"
  },
  {
    "path": "genesis_data/genesis_txs/weff0Y0_3-H7Vy1HrbpIzUmbTM1rZ8Lw0wgDGYmlsrM.json",
    "content": "{\"id\":\"weff0Y0_3-H7Vy1HrbpIzUmbTM1rZ8Lw0wgDGYmlsrM\",\"last_tx\":\"\",\"owner\":\"w9FugqJ1NTs2Qb6f0W1RbwruuvFa_dGfA0Nf5B_yEIa0MN65X4ad85KubXVkpraXdBPSIimD2GbXHT9Fsuqkxn04O2V9MzoZqdU0ZqvqrXY0h-eF_CqFc9_uM8PGwI7Go4_Mn1tfTcqq6lxag7Aj9kmuKi9pGs-8EH3qc7mDwSTuQBSl9VleDYys-M4HRCfEjWeXBsaDEhNNtZj3rjO4WpWWTa0gnaOPIPqPoGmxXhoYUN6Edj3QpQOVA0O_TmRHpBgP799lF6V8N74wWge59t9bGWx4LuZ8EV06ilMW1VWyEVe7XPn49B2b-iQIXzuGagVxEG5mu82Asosf8df2Psib8StNKojzTJg05A1qKzw6rTcacWZrvRDedAfKJAohTMDLnPAuHgX9odFvQp_cc_t6ZocYVHZcONiMEdUmQma4RUKAcBmjWK0H-9dGE8b7WVi4rRptgqKo7NW65YyV-QA2o1pYF_6Tq4V4Q_s2mYNSZlsguwM6pE4wjrms1wFTtcYg_9BNsvWDeO_qk_kOQ0NHAzzx892nJFJ8BR8mUOefbGEXu8cpqOfCziSk0N3JFnmBiZ0NVAvQ6T347fQ7CYzUPwJf800Y3JPxoVWVTO0wC30lHvUewnmbfFc-b0cvSw-9uIByJ1XYd1nAUIZ7BsCvt0sa4qqRUwAEJ82f-qs\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SSBhbSBzb3JyeSBKb3NjaC4gSSdtIGFmcmFpZCBJIGNhbid0IGRvIHRoYXQu\",\"reward\":\"0\",\"signature\":\"oevWSZ4CvB41EPVCYQHZdE25xoFUMwOkY-f-du00UjZh4GIIlTZoap1Ras7uYravlSrh9C6FRSJJo0imnmcl8XUSm1bs7hk17C2T1wA-QRKS-PeiKAsR8oJI2PcSOqTiSTCuWQ0Md15w0V_B2FK4zB6LNHChwvvg57x8K5sZZYC8Y3FxoRtyYzWoYwfZTHm6_sfemQsMe3Lx4QaYkEKmk1mRsu0tGiIMs1wbw_ovSXClyqXVgR9H7Ldm0oFjCB8ir9WUgT9F0oZUrnMoX4yJV1Mo7cRwk1sn2yqwagI9Cl5Oer3E_qR58_1BPtOPNMvH9tPUaeyMUQTQvRYTY9jqgVtIufOTGt6NWxF0MKFLM3OGy98m6gfdHxSqyRfuUUvGiuf2NGF5ka5_CD9IIiAaSULgx1mtPdw4GQN-rxG17UgYRwQ9KlBUGbhN-zr5E8v_x3QNDSoIlkzNhp3UGIPm9YDPMOUatbWw0rE1QlQmzBemKtQbb4VVwF8CtGCJDf9FVIbMHGnuyBWVhl8EL5YcCa-ZUbv0Dtiq5kIxljPdXovIB15cWyIgj6mkVjqEsgLuX6ebvyLmtLx1RLN9DlEGz9k5qflzuqrQLEamTKlBmZF5drup7EMpbOnRuFTehs47uWgXgwGzcRWWjp2rJZwbBIwWzoHVxe6JsW0qrcGHQJA\"}"
  },
  {
    "path": "genesis_data/genesis_txs/wmZTwziFc_VlvYJz_4nyxYd3WxznBmsn5QQyRKDcWXU.json",
    "content": "{\"id\":\"wmZTwziFc_VlvYJz_4nyxYd3WxznBmsn5QQyRKDcWXU\",\"last_tx\":\"\",\"owner\":\"3xxBY98XEjijL3Bf-6Crm245kkG96ForebXNAFT8UIxAKoLtyg8crIWAM8HQzSSy5uBcXP-4vQ5NF4IXqkCREjbPWJ98gIqcqsI8LgOubHlIo904bfKNu8tGHIEyvBkOPHA-YxVsoH5qD5NK0xpbnY8F4vl1UTvHEFN4XgrA-uPecpcOT0wMzbLNfQfdwwCwIoU-R53jziNBipWncj5spoC7X--73utKYK1FEVhK5ix4pf6yWajQrwjOLY0kNdQP_vtPV__XNkhZBZJ1Khhx0FjTU3LnnuADLkCEiH2O5x4r44n3UKGvfsQD-NNFB3C-IwYEYJYOvWjpjb7AgHfIClX6rJydsHYAKycFD0Fx3SrE4nQr-5OwIzDkFpCUZyUC9vApbLdZJIr-h9PQYUVzO61UHXLZYlBiHVu88CzoWK9A7bHjohnhYyvojS9kIO24KOkm6mv2hrQ31RB21QkRuWJwIEUSpqNrBBdikLxRSr75_OuxsS9BE0HGk9Hj9MmKDLleViXB84I8grj86LwsjqZlMv5TKRYgt5b5rZBM37tL2c2F6LEXxFHhYQkFWoFhc48jSbVkeAA8m33YktMVKifmto-Z71jR99iRfcXHA9bmyhje-6FH8VXYeSbgQzfEfFRnw3GZxdFoi_kKyPdt4FzGO2lzr83tY6bBUxm3t3c\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"V2VsY29tZSB0byB0aGUgcGFydHk\",\"reward\":\"0\",\"signature\":\"r2TagUwxUz8oU6MlFb4dIqPfhuCjDGShFa3pMvEi2SIWVL0EaEhhbfzDrl0rS3_oTzUWsyZnX0ZdZybJhFZi3jB79OMPab_E5p0zalR_ZjmPHTaWmNMsjU6t1TO5bKcSLRhUdozDVS3ldHFxrQstXYN7bJUQWIRUMIl_9TlsxVgeYfliVpuCNzsug07UhmMTb87JE7_yVt5vxk6s59pe0qHChON_7GSbIIP0AEaBEHAqpszBqbHUc0x-3__9FNkWh9bsclCxL8nkzCZfL8v-iKTOgQz8eYIoe81g6ZvFg_BscY-JNjxcSiNkhB4i7pDlRW79xtwcGENlVdoUbiDotD5zJMJW6QTrWWNDZHFgRZ_6xAseMZ9oi-S6iY3bDmo1-Pg0NxNox5ZZkEst5Y6hObW3qI52fFc4cyvWaY5iKHehFUUqaa89VArLbHG4QwPOQnH4pMFah8srSZbzW96WQbrJ855jUtuxccATLEIgYjf9HEE0sOEPL2UTB-SZGXsb6zU1-rsICIJNZfWr1hiUWLfaMFoW2f-rSEO7dD2rC_NOgV7IdK2kKcKHwtH4lpGghxMQJMRFjPD60Kz0tmy8Gqvlj_cY3oVKJdyNFLN2rmpoz2jBSWxqBil7kRXRSBVl8Eg2iGH3yGzGvIAlc-MGDYNKpj7214Y6MWFvIQFEO_8\"}"
  },
  {
    "path": "genesis_data/genesis_txs/wnOghJX4aZlbm7SDDb4UUX8_6GZYpYYx3GireamHwAc.json",
    "content": "{\"id\":\"wnOghJX4aZlbm7SDDb4UUX8_6GZYpYYx3GireamHwAc\",\"last_tx\":\"\",\"owner\":\"m3gdez1BdV97-__3MyYNe_ddMaixdc7wjsnBMZTNy9MCsSaH1ziuCTk1i67BbvA44pcZKH2TpOXV9b8rBBK9tJmQXCZxM24-8SqKzkCoeAE04f2MY67QO8-NK1Tkx5mqRsA1jGQVfP3XhqO85R3Gapr3UqFC3Nv7B667BBMDN1TxrA5CCqx7nWyZQ77vKJcmazO1BLBSpJCdthHuZ18wcTOPWvnRWUxjF1f9ef7EWMLuDVLtukJRor3sq4YJvObxHBtGmTS0tYhy7ted1v1O7eL1qUxcui-FIeB4NCjsBdHikQWvHxFR28FhQTclhe69lNzF8bucnBIIjyIk7IAqScdDZjNzkRqgjSIqIQ8uHcKLmSLBXUns9jGggo-qkuaQ4fMXyYKj9RtPV6JF4X1D44Uxf1bNQjtI5KIlLziiF41Fx4VAVGtMAzno5wjpOdDn6fKDeLBkLvztAoXRsJ9XWIsC8JipnPOGEtAHLVWCi3I1GXokeVu9YivzyZzWFbZ3FOqp1pfqdyJzueIuAgwaUCUAWcA14Xr3e19u7TgnBJu6M03JxLv0nTTSi0ScmtjThyQFAp-QGuFkMxxxQ8GejN6CkZrgl7uO6ZscIuNPyB849YoMhSETdbtxqO5VdCtjSajm1fqB1VfN_nAoJoDZebx9QoNM_Kc3RtZrrIeOjBk\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"TXkgdHJ1c3QgaW4gdGhpcyBwcm9qZWN0IGlzIHJlZmxlY3RlZCBieSBteSBjb250cmlidXRpb25zLiA\",\"reward\":\"0\",\"signature\":\"gwIuY_0w3e5WBY6TrFfQNyxb38N8r_k0xag-rxkXW0lQ3HDtaGbeZc6NIpdZP0gnKp2WNAQn8PHFXUPL6Znh6qJvNfv_ozbuTq7VSfEi4nqnF2CujZaVorFUofiNvUUKczHHk8GRyD3xN6DPvwXQ0YHY_iVwj8Y7_gn8lwWcc1XbOODWR0z2s42_rU9kwgb44GxeQSvqDvhio2Wj-EpK_TlnnQPaxu6v92M-489NFRKVlU9z2sXpGNH7zGPxp56spMIyQu90El8khEdp0XJmlJvvdoWBHXXyEJ6Wz168kiMRiVp4ZSSh0D8TCSd5TaO2muX7nD7i7BbZwCsu3mu7z73PKLKuSYq8A4oVRHhj1hXQram4cVjkltc_om5DG5uT9FuZLW88CvxiNcGG99cIvSlJZGqFSBjI1AQVigRBWrHFFusYkcHShAnoE9mRKcu4N8G4x6A-qjM14oVAMNCw92cq61_9j7u4uuGNC0-FnuB_17lkyPqFwrMdrndGhN-rufUVGl2cDE28tEzDy-Y8QaRxif9K1yzWvPunObScMzxlKLUHW6drjt8qBxvDItlTQ1FulbRRpdy5HmMv_lYOH9P28cLhegKEOPB-GR0s9ZXwv7Bo4ThVYJtcbB2yZnOHRrbKodLWJQ3vOotbyANXUqHwPalPeOzsRwTONdrOBIM\"}"
  },
  {
    "path": "genesis_data/genesis_txs/x8KM69OVm6lzslK6ccAE-3EX5sW6CUHBZB-1hbc-J0A.json",
    "content": "{\"id\":\"x8KM69OVm6lzslK6ccAE-3EX5sW6CUHBZB-1hbc-J0A\",\"last_tx\":\"\",\"owner\":\"vX_rs7GY4ZWlXTkJc5l6Aa0KljBrZR7e0JaeLVW91_fSF9UT1bGGVoFYTq9kFxvQBt5hTKMK_JbFCWU_F_saYrc9m3_jB6ng3coVALDomiHP5Apa--rr3n3lDHzq5XxYsJxIi6o3HlrFGHY_yvv5qZWgEvzx62uQ334NicNUf1sdq2kKQcGTA3wtpNqdpwCvRx0MUX8-ru5ywxEeM8bo5E2kLuZcU0UOcG_F_zJVFyQhsyDyASp17q9wCTv4M64rXtlOlzDkzF-AIwfIh2wJB0Ly33ZRLcyV2XW8MiEMMu9o7cQh58z4D7H-dyDbvMXo5ervVGvf1GU938lheLQ1Wcyq6-lIQSNVcjw--AlHeR0HPpwo1tIauDVEqbjZIydOPkdX4mLncK6Am1sV41T440n9mH-9lkHJ8NFANQjH_WmZNk5CFeb6u8v0RYB0iRZuZlGKngREBKyoJd9oi2FYnkmAqRtDJMuA-vCr3dHiBTmcis1QON21vIx5z2Fk3VY9J-g8z2d71BRaIoPoX0sBEPYvTIMUwVZkO1Spj3DfZKUCxG3gA4CLEwO0aHHb1rifRyEUbtH2Cn0nYB1xe3AzAC2IaGt3IGoOKg2YnUxP8mnN0D8AM1oBEgTVN9EI91nx1y_twHfE4in2B48SBrU9dvALQoGeczpY6jqwGfmUEwk\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"Tm90aGluZyBiZXNpZGUgcmVtYWlucy4gUm91bmQgdGhlIGRlY2F5\",\"reward\":\"0\",\"signature\":\"kC74HE-IVTgNo2dyPFZXpJlrWiLEFtVSzzvWJ-yT2XUbdZ-yWOScsunVUJTIyWpfwrueXs2OUFCIlhNKFSO30YNvjcuu0-zAXflI1FKYPopUKBriBIciiHOkAajhn2qWQouzF_FTGxdynRTCHn62TPEpD6gLNLFrp3ZI50qowgU5IgPecy-R7_rL04d4jTw883h4n3ZWtDcp067MZP6PhQaKEKwTBBiUcAbXIsHy_r177JlaukdwZW0PDo-67u25-AWUwzMruxsIA5Kzmi6ypMelpOOLu31Dmj8qQ6nphYdMjR4VS-p719GPFd1UGg1yvYEvh0_DI-KwcNsivy0UmuMIhb-GSi-DNLQcNbZ0p5Xbseai-1JK7Oa8QNNxwxD9koYbQAdvRQauymRbp698HPAX3augMq8FtVWC_Qie6aaAvWMydRHU8U3X-iF79G3jHCJrS5QTBixJJuAaIX5V3PfDuLHipTwOROfcAsiIr8u2jYsZ2Rdcwce40XdyISk2sUhdrD32l6E-2hftDO6wyb8pn494l0KTYoiCFDthV7a8n0Xha4csLwkDf_70gVlpMAiiLRRPT3eIFFRsMmYVj0r-o2Qr06nRkMIeVXTdBLkda48TH-S4miyZQDuNsz5zj1KH-jxKEvOIqVb0AVefOPzS1ZzypAQ9j4TuqY5XX-8\"}"
  },
  {
    "path": "genesis_data/genesis_txs/xC7ski_qpcrRwRkxxHwPZd2lOX6Q---2qdQ4Rr-wxAM.json",
    "content": "{\"id\":\"xC7ski_qpcrRwRkxxHwPZd2lOX6Q---2qdQ4Rr-wxAM\",\"last_tx\":\"\",\"owner\":\"vnSo2oc4e7DalUTlFYC4X6UPiYPcRirnWVgOVhDF6hTXPw98mTH1cDjEGTQN8SfvaGRTQs9wWskEiPPE9gt41ouAXNOUxxUixMRGT7xstOA5Ysg6q_O6mYOvJDP0N7egz7IyBSlBNTJrhoTE3mJiYOeHptQX7-ABmwYK4EV3NCTOd2a6utI5ZHIawnj3NFf8P6J73Uh-eQkwUW6fpHfCJ9vz7ghPzMuam0I3HKG8A36BRsmolGNFlZ1xHQE4tUlVoP_vCpmqDVXjwARD_H7x1JvJ49W9nhPq4TnTFs3jCxapHEt8MFKDgoXIwVOi4vk4bXO8iNl12o7uT1ZGYWmJ3xCjEEjPlh0T20pOYBMIbGnVr3U5Cvnmd_o6NIK3H6kuEjooQ8V2vCV-pk2AwekDn1ezfESScFqBuFxfNzotlCgjq5ktv_ijzR3zwIYDozUfDrIiUSazcseRepHYKR0C54Ru3zQEnGPLasYAy_xtpdGWT3Y78ZlOBL1vkyh_8X-y_OSUxnI55WhKazMa23Sy6Cih4S5YQadt4nK3gwOxIEXS-S6RTxkG1-vyPQ0Pt2yR0mjCObQ8i2aopuK9T7UckkSkEOc4sWnAxlavT23UyH2mvtvYwEqtxHPg1z4Cw3MEaRGVkasJy2TDS43yDJQI-g4YkLgJ0I9u1aoQLvFG9NM\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"SSBleHBlY3QgdGhpcyBwcm9qZWN0IShAbmlsczAwMDAwKQ\",\"reward\":\"0\",\"signature\":\"C8a3AjmT-OgW_TTH_xIQJcHAl45ZC6UCWTHW8NqUouRC1Bco5rhhdIn_ITAr0avZvUZzr3Sxim6ps5c-XgHJVwijv5qS2vWoKEaTB0r7clw2qWLe8-fEN3YCdAQFSh183daTiEk7Gp5KExzNA87OsfYrWqAHJ-KZlpZx7lS7jehfpzALK7uH4btH_acgmItq985sSMb8oeqFgy8yfJgaO5uAYyK7jYMDYrGjMW3rRwGbQjPimBnyVfPQSykOvt7nRyrjabQjw7yCsQ3KCSEdyRlWF0kpLICe7A7_PG7cnKhv5AN9Ctocn29EZJCBeAGSexoTA0TkC_87lPtjcCfmPsNC1goniDdytbQB9p9iWvvN6CvOVSpQUES3ta96Mg6SA7h2vPshAC0jEcOu-kb1bXwXgut1ZbjpKT6FjPIjI-F12je-kzOVSE3aoc3hblGI6johDQ6jIwQk5Rfwl5l7VRywMbzVB7Pf2YNBzgNizJU_aU4Un10rl-WlQTtyhyse4J9akPPWs8i5jFynhdXoZMWAOtxwwyXaN92HRFb5j9G9NR39LJNTnOGgf78NiONZLEfBa11AX8Qbs-1Odhjvkw7MMz6IA7tybEg1EWW_5OXG-Pnww3T1KiWbiYv6uv7QeYuWwI4bHKwlPoTOWvcc7UJa7EuOO5L1EwES270mqN8\"}"
  },
  {
    "path": "genesis_data/genesis_txs/xSkMzFablxREj8H_RwoMseAFk-TCwaLVIZMHqXh5DHY.json",
    "content": "{\"id\":\"xSkMzFablxREj8H_RwoMseAFk-TCwaLVIZMHqXh5DHY\",\"last_tx\":\"\",\"owner\":\"xftrZtKEf_UB3kg1DRwhyCIWzk3Xlqu1rqG1RzLIPJaqJjMXpJ4dCFsQ1CK8G0lQ41DLcVlofDHUtUtnje4rmE7cGF4tjuGfV8gxMq73D7jtAkr_j4mWl9H-mm7-JyiMUFXxOs8SFpxL2zS6cyi7TK_0CcAPEEBmzcStUxlJXwpLu5fZNrUiZ57j1bGi46uAmM5ThGEGK67LVQ-0BpYEp7blVa2Vl86NYjPd1nXEdBkjPZsrxUmNcvPt2nUp9N7OSAWWaui0bH1nP5e3SmARO7kmCrBj5SBLydtoYVLDo-jvjWRvDSZAIP_DYffcV5NVFP9FWO6HFPdt8OwKfjEYtAfRW_E-oxfhvNajDdT9BhtkUm410w4ryTbv1UVgPzxUkG3rgs9XBBfviq4TWXXA-xkYzpdmcQUzzilyj0ULFLiEfTFjfeG0sy2PKr5mzk7GgU01S54o76xWsYA_BleOssYbsFFsjOhgXapZMzyh3-STNau0BqwVT5rGHjDCDyho9GU3sMMiitIFdYFkPHTewF6GOtX-lbak2vKrlY7j8XngvLJ5arM4Xd6WkDKNjS06nhrlKuTx9YIuT9PE_7GVPE2UkwcavdR0FG1vj_q9o34PqT1-1KxZtk4f2nT3nsjST2vrv10ggJ5P482MSXuKJnj966aZ7eml8tlebnudaAs\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"aGFja2cgd2FzIGhlcmU\",\"reward\":\"0\",\"signature\":\"W_B7wG-fPJWvl7SfmAgpV6h1gdKcdgbyD7DtnX51WI-vm7CjS8qnNDBiUXugFUJiart1i57uqQbGXtXoZS3MBQN9JGvwtktrfldLXVo561TR44VvxtkV7je_78TYHHIFBuko0mX4_MrBdAr-KNGRqWy83GCx1t-XOEOyDDihUNCTyz-g0P467yHOG4KRKrus6mJmulkXdAuQ1F71Yfad1wO4_NYKIKzOUE1Mrc-pQYrrjRsNbA9lDiD52eCnVuiVfDlK62cQyrxvGcDBdC-oEa2P8zjns2aSPQ6wUMStn2suxrfbqB4Pw4jMgO2s-cNUm0OtHuXzkW8AXIFrQxvqtsvgI28qWo5MJbyJPzn08_-hhj9Ctf8-1wxqxEb-RQ9pDtGUS8PZ-BpackYjqGB39DM76kxOP2tA9J7ogFCnHvd-7TGvPDOb5KrMR953OtKDxhwsh1o8hwpKKxLvW51fw4G_VI0sSrFInNfq4LHlkKmvET2bDGeIQ92gR4xgXmuEqwFScMGfo-SLzaAodfDQPAWuyvyxSlWf3HRWB4H9UFP_qk_73XcYYaZ5mYRi05Y4iXiMyxCJN26uQE1rrO818GnHKaKa6KXLDeoM5flvAzDbs3iK-vxFG9Puv4RP0hEyGM7FNPOdOseTJBgzZ8aRw5AQrOoUa2X_3q6tMVji20Q\"}"
  },
  {
    "path": "genesis_data/genesis_txs/xYpSRRpO8ejUGeohlRutNt9qUMgvuZJGkPGCyu1kSas.json",
    "content": "{\"id\":\"xYpSRRpO8ejUGeohlRutNt9qUMgvuZJGkPGCyu1kSas\",\"last_tx\":\"\",\"owner\":\"tDoZ681tEfQ1v7gcLlcY17LJ5ul0ONYkP5vDzPyBZehkWMFwj-dA2hOlo7O66VTGsOOxNgl-B_eFG-bNEEbcOtVGwouxUxlJRnkCz_e1AZ_AYKcjz_omVZ62tqLAZEp-9JowEQO_FCsMl7_h4ipwptvggFbKxhm092TU59ijgB_YkLRP-g5BUdEK3WbsqbH3pJwzn69lsxaooSuYEQW3EWrFLMTodNWskANTelFXu1nbcJyNG7oLJp8rfkXUwqIqJ4vXhVVonVXo7iBIWwDRjlkybzKpF40jRBEFhjLGudvUkLBS8SJCnhdm8fgAuiv82xNLv_fANRLluGIo4Zj_MkzXmyxb9NznRN8Yilv1bivEX9vjQzaq7TBS45zXs5wY1UWHKAMQ1yVzPrgmz1F9Nrh1GRKx0VhDSKuB_Cj0jI9d7CUXHiiYDHseh3lOdMZ2It5sWnQZvEaogWMMTO02cJVFYoX7MUnV3E7pbWzlLI0-ZO3V4bruXhJ_uItwypVMtAN8Xp6AKRgDZxVcY1WZrfkApoD7oVBJKVHlrtFgFNVGZERCtlVwyDU9MblZxq_Y0RfGDD2WHBXxmY_ZbNiz1MCf1KBJgZKr6G7gUoRPP_XE7cF2nVoTPRu8XlMMU_7diy4LreRyaBUJFbOeLiJYb6LFZXzryTYCg00gToFIT3U\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"bG9va2luZyBmb3J3YXJkIHRvIHRoaXMh\",\"reward\":\"0\",\"signature\":\"PKKH9IoQo4ZzoKvwtGFSa76ZFcbKFqKQlIUfKEl6SGEYbqS4zc4w1Za7T3lAhEHRmeCjtw4XAArtiZDu_SV0MiiqZmvhxUc0OMb9Ao8gLeG6t7brixkiDyfSCBMELk28mgVDOcFmAZIR2Wgvf0TTl7uiggu3IsDIdzjzVs_5QacICK6uZQTObZHFXiMhGaZ4QqIVXWigPsNJbGwVZxeA__2tJp_G6PDsXbVuFTKgN41rN1n_0p7AXifIkYSqfyybjA_BY4K0Ha1K1KGM9xzSeDnAWLGKRxnVBXKgRE3cwpgLujTDGHgMyfqNVmVYM8yNu-CMXgKWQLTf9OZg3CMJHmfOTD3OshPXLze-8RYdihDe3hslJ7yt-ZIy3doDqD6Mia3XVvcoFlQYzcOU8Na0cxBjnVDY8d5DuNJwk8VU1yc0he12jpJxb36QLXhynj-yt8YLldujhnjtufbmkNzj54WIaevTaaGmj8OBbfooyIL2p4dyNMfzNuozj0JuGXe09sWWWG-hYkSZboxcNhUmliyepLQH3WEfEbGPtCE_cyYD0mrVby5eLUHstV4rzX6w1LlyZhOJ7e2noBgjeYpD_cNlG-_iakU6e44p-EiDMCL485Y9WR0Csxn9NzxF0939tiW9s0WOVUPFr_39ypEVuB3_EAm6NputwdhmmNj1nE4\"}"
  },
  {
    "path": "genesis_data/genesis_txs/xavUY4L0L0nLNVvHiYfBqGL5iqUvdwQ-iY_nLLMB6J4.json",
    "content": "{\"id\":\"xavUY4L0L0nLNVvHiYfBqGL5iqUvdwQ-iY_nLLMB6J4\",\"last_tx\":\"\",\"owner\":\"rO-PhO_pirBgP0uc68hGt3Lr5L45-SSfvTKicicm4wJrrPiofuF5y9Ckw2jCc56RtRYEPImmwzZFlYt8RAsA0ywZSxtxiRqJD394lPNfHjMRZGDOJKUV429rvfQR4_ZmxlnEYXG0fgI3U3Jf7CKqt7Wz9ZqrZot1jwf5eHLgGrHuSHbhT5JIhKFt54tu2wiswuk5doVdicJQuKBInt3A4y7yr3EUdoBIiEKBJHFaiqalnso7JGAKeT5D-8kUUpwqCob8VuBY0puJpcTOBvPfZxQhIhMhOSHj-AmFuWdl967BoaWa5F5epLBEtjlUsveYzZixYcGclz33ouKGOM7iY5p3RQWfJpIyx_AIs-rcSF4loBX6wLV09RSSL67fJCAF012EWjPCUpiZTV4uwlMvFU0qnxDFNs4rQUTW70EvLqRGv2n7qlgfPRt_QeW49UTwucO98XBD6iKIGQaT3J-stYmbIbF-RBOQtYA1KebII-NukUEvn_jzwbRU0JaCopSPo4JKC9fkv6CGUR5nq3WUTpLrcBWy6XSuGobQPN3u1qICFRwBMG8BKTs6CMw8wjEYPIYXNFV6QhRZLnpeUVff7mKZvG2egKtQw8SWGNBT-beZrQeUyNc1y7CQwN44KNIykXwY8MsAOfrT_UAnu59ZqK3jsKZ7oDhXsJtpfsiUlK0\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"TXkgYm95cw\",\"reward\":\"0\",\"signature\":\"o_QCkOU4pv-RRcEAiOfua6hVDTxvdKNUJu4VoCEI8yaUvNrIRDh1jMCr-amIJE6HI9-RfDb1x_NMaONK86vV0FB4_9UDFaviOfvPinMw8T7vJr1ysjMXftU79HyyWHxHb0tMYz-lVc4fxq3sGcu3bfGTDIA_mhX_R-WrHqbNE3SyBW8D5BxXVMFggAUTlvKFi38G2YwurP761CfIm4vBWs_gBSiaF9NWF1V0voKdfGCkjO4SBb_RhfC8Pn3ykV8OEWO5t0MMKGIRpY2ZXaJJ2HQj58F2APzym3no9qUeNZf9FVXSfxNNHEtV89iHbQKo-AgDvzS-IU4es0SHNE9bPck35QoM-YZL9yg-ZEmZjoqhofXmdxUgw6l2A6yNp4S4raRngKmUc5KYCrU6GKbnoypqbX7iRB-q7-oZDSrMRjZWKgdUOzmkdiyTLu6Y_0wfpJjUzSOJo61DlwA-F0z2vAyXmK-jSXA_N3D8loziPx7p-NsgwE7jlMRqua-6eO3K6BnZhiV3pvij95ohuo-UX0U4WcjnVPlQ9i7WOeFoCX4zwXowoNAS4Zfh9Ilii8Vye_PmdRFCMnF5iiu7Y1WhIvisIV3MEEWyC6q2yBDt2KwgAZJEVKJXbTiqJyxo4yEshFDGDoy8PN3EbGyI71GLR2iziZ7dms_zKEjM42eNxuA\"}"
  },
  {
    "path": "genesis_data/genesis_txs/xiQYsaUMtlIq9DvTyucB4gu0BFC-qnFRIDclLv8wUT8.json",
    "content": "{\"id\":\"xiQYsaUMtlIq9DvTyucB4gu0BFC-qnFRIDclLv8wUT8\",\"last_tx\":\"\",\"owner\":\"sc5BnGCv62aO1aWZGbVmfr4muDfbCpfHKGa-ekzdw3Nx4KQQUcE1fiKfOpcgyrCpkNq7E5EMnGJhF1ADgXa_yvrqRjhfRDMrs9uRY5XbLbDF0tzgHV3_IvPCF87piABVWROQSA85oUgxtop6FZemZ8ppQ35p8Cmo13_n764Hwi6leHu3hMD6W6IQ1OsAT6JMQkQd9B_GfWKu7TqnfYoR_Bk97qcnVBpdBS6W6TquoBTJFpBpMnzIgfRpduesip1J9FZzYSkzY2SiJ4KxWVAAgSM5Y4cSPvBxIFmDJv-y_ue7xm2Bh7wAJkJs-QcqrLuUjgOdkElGJkfs5bY8enxSjPpJypbSDqlqQBUTQemoDrEe9cAfUwG6r9juNl9ErxFZ7fvfqqxDOU3fBJV7L02kqOsMqmYgOvEn3lPFof51tb8syUdZdAqSU2P4TObEyrgZhsotWfp1dHBFDwRQrtL5Op65j5Jl-5ImiWDHKZ6xQxOeuKLJatvRM3tTgq9do5UpGTyXtamVEHNkTWDF4Cw4Nb94XT-NlhHWh3WUilhfMmbgoj4kljR-7cJGxDUvL4fTer908emhYrTrcqppkqBvYILUiXQMTE1O1FJZkE3FawDua38hqVbcDCmirde-U2tQwmLUcEbNX8sHckrWDABtCAcFfuH41RyW1LWPx-Q9Ugs\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"Q3J5cHRvc1JVcy5jb20gc3VwcG9ydHMgQXJjaGFpbg\",\"reward\":\"0\",\"signature\":\"MnK8cVufE-YpkUqpB1UCEQ_gNxoCpR3ONXnaqfONXGer2i78aeSas2blqZ93Js4VBLRyBls5II2jgjp-GWq-It-f1P4yuwHIxW2RfYLciy_JCxjyafAgdU5lWBI2hLWiFMpmtTi5UZWAw47fECDktuA5uNAs2SAJ77bLMq5JH8RE-hbi8PcSeTP2s7-BXtZ4y0jdPToKGkDu95PQarOWGhGFFgt_Y0J83WivXBn5nEeNVpN7qzBT40Ycauyw0wEyVCL-aUvrqjXzjj2qyzA9E0DsTX_fhJGDnASLNV2cPE2Wt_9iZHyqJwcH98QljZHUt_6lCEL41tCoKqs6CyM8Vzu95mp7S5IWc8IljEkrnkJ3M8hX368Rvp3zzPno8xv4EujVhqpdR0TormM_Q0rfjWMqg4A0TAwbMjI4iIEVA4i1fA4VOOiOdxn8CJ3h5we2G1Y8m7CYx6L6V70OzyrxNstGsprKWxRLuLedTI8hXJR98jGDCDVZAHCqy-zX3e5spq3EPArhORPSUC0DQQFWbeSJef-pLOMFNgUrthwMDkIOWNlxk8c3YqK4jFDlT0ODJDSFfEBuyc6_z_KmY5-XKjqlWi3y9tj8qvV4QGj8N6fSV2M05JgQIyYzIenPV-J7JLx_Ct_tYoKfUYrLLSJVINFN4oxvw-IOkLyXCfXTjAQ\"}"
  },
  {
    "path": "genesis_data/genesis_txs/y-k4KjdSmwYmIugoObrtx5JWYczlEZBzwBHGMLqNP-0.json",
    "content": "{\"id\":\"y-k4KjdSmwYmIugoObrtx5JWYczlEZBzwBHGMLqNP-0\",\"last_tx\":\"\",\"owner\":\"7tg_P7JrI18LLl8UHGXQP5_FgcmgNyOK9Y_A9tX9hLcSq2_Yn_DfrS8ZjGz-GeC4jXbFrK-g_DPSaxcAx5HqM0NXlijlC6ahbXOS9Q16E_vg0GyJVc84OQeRclEAkq33OYy_3935OGT7I1z1gxqQptkTgeOJ1scEFggr-eTC3msgq_ldGITufDwqTsc3PkzR5ehbmFCQ_y2XtWbxFjY4gPX7w3PRRaK5pQvgR3yfcG6pM6xgQgiswtc74A9KP3RKbuyP9duuoWkXdAo8C_605uPBCSnTd7d52NHFTQa8mKU6Vhmq4Np1V9NfogoRzAxUkP_3YpXZiNdWmd7cRlLO2lXgnz5dgRsRx5KRDBEXvRIS-IVehwkotZqdWlTcjSza-2xj22xfwSdvKbzoo-jzoMUkJxzfxwrcH2KKhUI4meuTNo9pUWg-nlo-mRLuD6YajDr4tJzndEUBksIMuHi7vWAfnWT_D9hvr91PfxT9hF0vE745G7eg86pIo4t_O6LA6j5XiOKO99wtiZt5RHCp2PpCMg5SpNnqq2UHbL1bBEwe3Kxmaq2Heyr6Dwk6WlN4mY_cnmvLaU4NZ9jHSFyjHgVQlRqe_UwnQ0PTIoNNgicOIgRHbtVcouDdu7E2umV2yTctLDf_YtaG50plWMQdOlQlfvE1mRw7j0Ib2KxvZyU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"ZnJlZSB3aWxs\",\"reward\":\"0\",\"signature\":\"1q-UVH4HOi8A9TidtKWrUbP7mKw9Dt1BKlZZM0BsemTh8S_MPyTPmjDWT8_mBFJE-RBl1967LM5RGGmUDhOrOT5ZEeyXRz3BX1o3gScIc5OO0oviHexhe2E-BKdPbohwGP5HySrTcpJu8sKxurWfQ9DeUs_LgdntPa8lh3Yz8Jf6oa4nEu8FSPYpnuruh25EpqK2ptaSGq_UNnE7N33RHR45r0eRKKKhho4zgKDD3NAObISeGtPUELS41QVf5SUrrkAjie-bS23LHMCV1awoXdlQYEG4twhjfVLUeauCjVDh0zUDpAuQr59nzX6byUIHzWxCYV3V1KGisGuKCcsCKySgq_Z_q9v0vARURhhoTkjIvoy-Omkg73BXm5mjOr7kNxGbnGzA2Q6uOiQ54aplwvgmbmPh_f5bMBKIhD1rN69pXotqGSCDUA8iZSFS3yGpKGb76dMc7XCZ1IxvDySUs5rWj41ctU7QPrXN2UvO6zbsufE9Bqyz6NgrDj9YOvDx6dMQ2f0BKIOYR-0-brL2UhXXJiX_MY0Icd3UqNahbomn0z68BwV1G0aULTr-55eBEz_q0njvJ81tuzml-oL4nNNj-NDLa3UZuRsbIdWeDwbNlsXXAu1CvfVVMNV0wKiYthevdHRzqr1e92oNWfGNeuPUPv6gf68n0zm9NaC9FGc\"}"
  },
  {
    "path": "genesis_data/genesis_txs/y0PrXtX7PonEbIG3uEdu-k-McGeLLAjzUriUTCMTGcw.json",
    "content": "{\"id\":\"y0PrXtX7PonEbIG3uEdu-k-McGeLLAjzUriUTCMTGcw\",\"last_tx\":\"\",\"owner\":\"w_-P8SIVHxQllPLiCH3mH1iWUmtR-sTNwFF52NB-DHmvReMFZQKRjls0DA2_DplV4mR7tXCvlYkizsKSLayVXEYGknttz_F5BDI700NrlkbYZz31OVAWpJkploHljD_h-pOU7QbnKvuC6fUQ0DzKDEQK2aSrO-GGw0qUZHw818_xaP2kRJZtTrSUpgKv7yDPqbsHRqAJCEhs6ymNSbk6HhnRXvBn6Dx4nBEQ0QtRqa_s2XYs-ifk8GY3bR60Kebpb7f5GB3A3j5pVDhRFLeh01gslsuyzUbw9P6M2voEGpZFmTw2QPOdpKfaUrYMdj5Dc3pxDMPonCo_ZDqYLQYscvQ81b5biZfNEqpJctqT2LiV0tWxV6N6siCLQAc8PGnSaOvoUSahoFhBBdYAcH3uYvGwToajHWHHqsZ4K9wPTHgLavAC88IfL7SjWilKNz9S2hDLkatduALVga4EE1ALawlvIqM1ThUmZ8ZNO8tnj3nu2HLDVh2AYC-F_bU2OBja--7TDp1mWlV-G8YGHposedBfbg6IuVRIl-h0PRSO-1fagXkrxZSJtXlvEh_ib1pd1YHl9JsZnfz6HIdHIRqka4Wu3YqMOUyvDvs1idQqYO9av-qjygrDEIhh6YCPXXwIYX749iIpKHXvCn1gTHzxx1n6KvZYpBg8AbLZzt8wjGE\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"QWRhbSBTaGlyIHN1cHBvcnRzIGJ1aWxkaW5nIHRoZSBmdXR1cmUgZGVjZW50cmFsaXplZCBzb2NpZXR5IGZvciBhbGwgcGVvcGxlLCBuYXRpb25zIGFuZCBnZW5lcmF0aW9ucy4g\",\"reward\":\"0\",\"signature\":\"gwkSqfePg6gjTfMxjn0WVmazdIBxHjtl97B_OXtwps3kcFHJc6rQY7IZ_Q7hCgg_QoWFALkbfJwkESYo9E6gHW3Y5dFhR7VTP8tAXmR3tSHf88LJx3kJ8ZsbKGUZ1bK6hf7k_3R6EpKHKF_VEzYRkB1Ql_w-xZ2FnTTTg0Nf0qafoYePEfklRPPmIMRiyHTmd4NhlhNi-HUncyXnkUW5npyQG_IDg9jxMjileR5-U_xpYrQBNsCwz8iIstpAxNfIK8-ThKV3BGpo1025-HRwudU_8YnRpUNl6cJCsD7xqV6lH1hRgQS3FSXNmgd4DHNQMyJNzbaUvuk3oi_oyo8UYD-yNMono7zvphssV_piLuV4xab8ocKG7WIQOosTpf67BG1R57xsJSSIxIYcdXSrIYNFsJLDIa6HRSCROgtDqXdQBTPBM_GBW7KImiptZ8wrv54_wYHIP8hd2JvHV5sfbIxa3zla5J8k3oyziESXE7dQiWr-SQBm_oq_2IbezF1Q7J1snbOBaT74pSNDOUqo4v0v3wtlZGz5MipPE_YwSzh8M9_JjryXXqDbdxR7wFLWr6aRi2n0U5inXzOFftX79J-itC2gRXrRycY85XQxo7JAYe-idgpznGeKB0LZTjoydfE_MRxIZnKE8Lo5bFCGlvWJ43JUCvnfac3v2Nx1JUM\"}"
  },
  {
    "path": "genesis_data/genesis_txs/y6WPKL6MHzZp2ktvb1cETmNMBJyCEPlxdisKlroEBtc.json",
    "content": "{\"id\":\"y6WPKL6MHzZp2ktvb1cETmNMBJyCEPlxdisKlroEBtc\",\"last_tx\":\"\",\"owner\":\"4jEID33qOqyRWUr22JGyknuCFSuMkEedbLM0vSjfuAKDLO5vMuwhuY_JPQDE2itFqVWFXZRGsLPUKQCPNGT4zNlXTOsVp-audtAZlu3dchSDBpa-Jt6G8dnhZ72T5wFQNxPtfED-a0Cqse16WTORzZAYbuuuoKyJLu4UbHz7-C6vNc27LKZwP1sTfaVZRMhlbqDArOQ54tLgM8hBR3ludqSQMMkVIDAB77cg5qRiUJ03sprWe-5SNV3eKYpCkFR-KD8-xF3K7Ni_rndJv5m895BFletsw8S9QkBrqArUDKDWVEACxxAQilKmkyPpYXgSh4xsyOiRWs6P1ljzonqDIO4QwWfGmTz_jwhKdQnyLLRsd16fvmN1CHcvneLErJAfsd6u77knIbZZyd-zXpBCn-TZLG-qhpQIT1j6IuIDPusG4dTi4oHZn-TO6EiHq72Sa9iQOuO18-6NApq0BX8Qg8HG0Wr99xaJS-ZmC-ya1XALY4Jeq9KtcWfh95MfAsE8jIKNd3ZPiNyvR1AWKPyrUPbpUJytk4zXLVoF5usGyRHQm39ewZD7bjY6M973Tfczvt9IVO2fjNsVLi5lUApZE7JqERnjq1G3OCXfhDI6frEabMw7eLUDz-oZdcc7Dz_r51FHKB5fuZA3sjf22bE22Ilmx_mbK_8RiCqYxhhpt-8\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"TGV0IHRoZSBoaXN0b3J5IHNheSB0aGF0IGkgYmVsaWV2ZWQgaW4gdGhpcyBwcm9qZWN0IQ\",\"reward\":\"0\",\"signature\":\"bv6MNOKCahx8yI6lQPbNOe_JccghRFJnStzBAWXBQB7av2F1YEN-rNawO2YKYLcwOP1TSKMbY0wJnoVjLdTeMS0t-sjOS3u7KmO7cflQOXax5sK6E1gbUS_nGr7SZevh9Hbt6K-woV_fmvgdl14WMfUSaTpubMdMeuKPIR23tJgc6ZYo5R46VREIza3u07z3rHzhjx7klMgHP8F8kvY_q_o4FoQrs5PGvNCNaHdpovIKHqD9EUUaPVEjWmfa_HnsPE1vVXO6c_2ypytLdZ8bFzTN7XKgbcgcZAuwEaoo-j5_9Zdstsf8kj0gMy5z_FuNY4Y1ywqvV-NQZhtO8D6AAO2IVA6nL-KzOcQH20YvHo5P2tJLVTm9mgSn5rF5Twc75UHxI6YGOfRvvoRsfKXWcQraIRZ7l0bm87WVjS_VwCh4jZl04b47D18BG0lWBZ4lNLUSZMnh2dAgRXz_o8tvasYAXQTGWwWR6U_ML9eFRTphxwHqcCN2g7BHgnbrFgUYJhxD1nMswnJ38OfCpt2Gq87Melj_Te-1DCUetMkiXGy9qYJIGJi0TT7jtFuLIHX-ktZieqXjAcEz0e-P19eNCFV-iIVGHc0t5XFUvnLkvbctn8ZZpnV-OSFRq2pfDVEiOk3ULb-UEGvE6Vb68o7DFjcy_0zIc2le1Ft_LPrO6h8\"}"
  },
  {
    "path": "genesis_data/genesis_txs/ydvI6weQPIRj2hcNg4RPqzDpFOhqiTc9iDqQ-fUUl4I.json",
    "content": "{\"id\":\"ydvI6weQPIRj2hcNg4RPqzDpFOhqiTc9iDqQ-fUUl4I\",\"last_tx\":\"\",\"owner\":\"luF96I-rFf5pXPQQN6uhdK6mCL7soAuT69l8kP5SUHMp-JPBupSBz-8g0_dU47ljSMLA7rVJmmwCBnVcgdplZ0unyo6f8B7HDBqyO_3L1qnzKFoET-hI8zv2i7qvnTJXK1js1jBCzuOLtBQ3nvGkgeMFw6b1GuzGRx55sI_rIAzfycE7rqocLINmE6ymG54rJ40m6_3GNrpDbHp106N16kJTwkkrktprrRFeELvP-eCMJUOfUUeXXNCPhd_0Mf3pAaYe3F_LXixzYnUWDKa5n0Aa1F-NUufsJSHtJDhkIdCYG7ueFsFqakM_C5SSTuIxySPU9u3IwFLZfYdKW_e-euZ2bws40uJHI5EiqdmW1mKXKjZB7SpJbg-o9yWujDy1oolaAhek3Zhj729vVu3rOLLlzigpuAlkV_6V1k6Xs1mbOzhTfWe9dBUV0XQ3nJ2HDgCF28qSczpnF8k9HFzKA3atL9IAOsjkUbz-dC5YYi_HF8da9FRWhVOVg3DRYPeizXJJ6UFpHTtPYOQ6xDzZkekec7o4lIy0oqSiaunMtL6ybLorggdeZbs_qwyBf0_KnCg2dGn4nfStZKQGkZhpvm4zmsRZ6PNaoljtKke-Kpg-YnZKil8h8v1TC8_V4QUW4uCmMldFcTh6VEx9YNYO29ASZwRfkKjXLSlsquqL4n8\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"RmFudGFzdGljIGlkZWEgc29uISBMb3ZpbmcgdGhlIGRpZ2l0YWwgYWR2ZW50dXJlcyB5b3UgYXJlIHRha2luZyBtZSBvbiEgTG92ZSBhbHdheXMgeW91ciBwcm91ZCBtdW0gWC4uLg\",\"reward\":\"0\",\"signature\":\"heB0JqM846yphzy-9H5Iu9NPwSIA_wxGFaWRUChZn2xkWmrCWwVMSEE6lIXcjnNUFD_ArzbWDZsCZMYHV2Zra-xA_eWQjJS70Aj_IDLz6b11avQgnXMyRD3sIrzL3vF4Z-QoK-wkeqdrbzO84kqjLXK0KViJ8Jkpwrgv-zsKetQlRikc3Rm9nsVY4gnPPrFvT-Wldogu4DL1r75XKwSEEjOU7Sq_uOXk9anXGUSme0V9cBQaPf0UWIuH2DLMDIP1kyfNd0I0Oj9FOUyj7LR52yiry-vGEi7tvovN0pTtKo4P4tKQKTLzpzZLiaf4pvh1re13XlCox5EH5_HZNTfJczZeQz4k4eaPBp10NnFJjz5Jc3RCeK0RZiulwtK3qj5StMgZ5glo7iKA32BCz5CSMimbt8U5BBk0R84XkU23OfyNdMDXlcoLPEU7CKEaQglKF1475873N2IWiJbJVGH6qzzXxQhxVUrb3xN720mIex6l0U_-OJpUIdwbJsKT9qfhKqkRffmvzPFz2b7o05gKmlWM2Ju2WzFt4DvN2R7yKFcZtqbvVoNMnw1dubPInixotHAGsFKoDsxdoFQOS26H4Btu9kWSwCREmdbK0L7TuR9eiZCfTtH6fM0N6zSvkj-zwLxCm_SZOcEqIq5nPWYcZqHkCq6T43LxSsW8HF4DVA8\"}"
  },
  {
    "path": "genesis_data/genesis_txs/z7Xvravldr4BhTI4KPOEWtG325_1ORaLQ4aUPOAe_us.json",
    "content": "{\"id\":\"z7Xvravldr4BhTI4KPOEWtG325_1ORaLQ4aUPOAe_us\",\"last_tx\":\"\",\"owner\":\"pobim0WtaOOBZtttT8RSMo5C673oWSs-yVaEwSROPfYVeMsO1kr7-ohUE4I7Oj7c6nbLtKRHESBCjx2llN8DdDjUcWT5o43rMMVfv4JxiudEQ28k4DNq7SIGpOQfDJNZ6_6RTKtGA5DzZis_qj2USYKpt0C1TKygHur1OpXPxOszr-YI02Ot2PD8nP25t6bgQqbuYFWM8tNhR4weP3XTIXVjPnKQM_CEBEf6WirMUeG0N4Ml_RSxjaGLBeF4F6u_2un6PZHqPaxu24wN_l2sJmP00o_wZF4Ill2BP4ZWLsUEt1xnbAiSUNXZu5rN7-razZMus-8Sl01c4qwetMdlKwvHNibdrIjJgl-QAK5KjrOcue9_GbRsIndiLY4NUBMirKaYTCp5Th3Ck9QOY8JlPu512UG1wcDEmWuklc59SSSGOE1naLtlrsAC-fW60FXQwmnYtYn-YbhHOWXFcguvZpcht_ULP60UdK-WxXxkuQcK3AcY3ne5QdeBhlTjdLThPGCWHfN_LdvihrGqgjwi3tD5sl80wuuw-SyvSEB1dD4V4PATKQlPvZ-ipVTmzFLnyrqJC_gwz7C8Tu5qRtaxkqqj79XMfbGn_Xby77kCsfdQpBkd585cEcwLkt6RDaoyVTyqRWrXePhacoz-zFp0w7-ek1D4SG3WU4Wh4belykU\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"UmlzayB2cy4gUmV3YXJk\",\"reward\":\"0\",\"signature\":\"e6vGNraL7T4AnwPHDn7pJwA_Mf-EXrvBXcNJOkZjFMSTXX-FIMJYiXHnF0N8Og0kmVywJEFTX07sIZmFj4RJXuJUJqchkBiegLI6R4A-TFtWw4xzK2aAbJre4otFcFBy1ogbz8sviUQidksazJhSa4yPupujySYIQ70v5t06jIY5Onf6FPTk3dCUoPVEcE8M3F-D2DpCbq0w7TPKcLhE2jVY-XmPYMsKCBLA0Khlou7tNEVJ5WnoYkcoutXOMLAEP_co70m-wmT1Yitfs_Bt_Pj4ErR9u5vKmhLHIVqLoLsGApB2tJQnOqIhGRNBmIO_2kfAwvlmUPXwkEVBqcwUqVe2NjzQxb-_fJHNfRL7D9aszha2uf9lhdeX2oRmZ-h8Kdqd_cM5K9F9w0ygYC2CkC3Hn3Uthb9Wa8REZjtrD7uMmhn08W-EMiBlrKNYl5rEmU7bIlHC_cRr2d6NYB6QwoBTebYlQIrinNbGqOtB_TFE-1uaDe3bzmQRx_FuQh6N_DegvIr02LmE_xxfPdnAGpo3Iug92OoO9m1hup7wj_1UgkTo3giDmpTjEtKWe1drgWSeVxgi3Ao8Fx45JKTh4k1pF8Q--HV7keDdE6ABY2JeRLuczyrX4HX_ZREiBXA25hgVEOmoE2zhZaDyBc7V8pYlxd1S5Fcz3H5nUsv3ZzM\"}"
  },
  {
    "path": "genesis_data/genesis_txs/zCOtSnXKGGhXgrWld31Ak9qQA_SjpOqB6n-9sF74rhk.json",
    "content": "{\"id\":\"zCOtSnXKGGhXgrWld31Ak9qQA_SjpOqB6n-9sF74rhk\",\"last_tx\":\"\",\"owner\":\"uc2Cy5PjGjjQuKUye3qnDdr9RYfVwPAvk5839BlSSM-IVsGAddjgY2k-HO6HrOIXBZwmAwLdtxBGVo8mwshL6oeFlAzhjyamOAiyt3hm0sExXKWrpA7oT_X8F4DFNy5jbeMkEOZUb16NROynKLwLdx_b8VHRC58KUPrsHmPNlsu8FtrRTmSZcfAmyeNXWgXEVFAdDmO15ku-WyYPCMJSz8i-6mcNoy-T5yxTZkG2JHnK9v59pa8YBOb3hFi0vIKVr5Iw6wVHR2-5ujkz1jNXgVAD0FxZpyU4c-Cm2zfZzWP1aLhp4B0YQPJbnMx6VwRtKBV-oaYUlsE0Xs9DfsO5GcGuCF9cB2AdMWdSjBjIG2L7PPoJCKcWGhBqQgfrjzhZC9To7SF14djEF5zA4u0TXR6xCDf67L1emaWijyzbji4GUVuBFrOBWU9-0lCEXUkh83csll-PLhA5xVdk_-ApNcQ4P2jYs7WO99KtLs1kDi6GjAVONaHv_e9OCxYXTRmsaWLTG1gSebusEPK4YH6VNwCkfkSnOJVD0PjZNRQSSGOdBjsYsdFWfHnheeZMmFziHX7fiV2_syTA56y3F--hN2nprMAXO4obKblV-lCoi1ApQksrhMN3YU7I535etyF2hF-DzFiHoi0MgtjCwpUoU3tePE_5cz5oj2LgzvwH4VE\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"VGhpcyBpcyBhIGdyZWF0IGJ1c2luZXNzIG1vZGVsIGFuZCBJJ20gc28gZXhjaXRlZCB0byBiZSBhIHBhcnQuIA\",\"reward\":\"0\",\"signature\":\"bD6UYyKuZgQQERt0WbopAnAk5K30krwwMZTAEuX0SGCrttfGoJ8vCAqQ2atTzCMjpUh69MoR8sGGcR6dRAfaPr0Su2mc6AP12Rd7Ksiz6TEjvhQCe08uFhn1Ay8TvFaw4RJCzkyAgRINgbXeCuO2iTkOro-9377G4WNMTpz7TygrC0we9ZpzixQVIuylVRsExuQ5VF3fPI5Xv--32A40vsv5K3Ia3ZHYWjCClnjSPEHe9roDeAnO3ye76LOaunaELf9CIM48MazCJa4CBzNfDBS6m1rSraySlMH9kbSh_wyKm2eFQ9LDrvpgPIvJUuY2bJTGiSS4UEh_nOWUIGyDPgBDvbetXReE8jPkJxYB8Xn4a2Lulr0qbWYV2JgVWKphKDUJnjWRzvDZ7B0uSW2knyUvKsOwXfk3XgBqlLzioPgx2uciOQVhDzeKcoiqHy9GMTcjQflF59_KHEuN937svRHEGHEZhGfpiSjifd-YKttCY4dwXX-00V5AXad83pTZ4RwhDKyfPL5mfWyBhD-ZGa5-f9vnss_C1xQWg8VgnuG0iXaG16Kz-e45or2QAWGEYFED1VFmsWqzQ6Igpp3HvuaUgC6W6xMCUuo5ilod9GPGDyzWy04MoE26pAJwzCPXNDkzUmzFrizsdP5eDjcBJ2aKqZPcgMMXarNtls1-9ig\"}"
  },
  {
    "path": "genesis_data/genesis_txs/zUFRBcWpPAUyMlojffeTnPgsLo6YgU6JaJgOR0mpBuM.json",
    "content": "{\"id\":\"zUFRBcWpPAUyMlojffeTnPgsLo6YgU6JaJgOR0mpBuM\",\"last_tx\":\"\",\"owner\":\"1kZfS-jYsIUeE-2kD4T6Uu0oGya6rABfznEkLA2GUl9UPR-yYbz4GEHIU9ev98D3AD3cxoQRAtmaaGnU1SANrvNnL4_QvdUtSFiSW-3gOcEIlHqJbht_PMlmy5yFtVikYlU-KL3uyUebhgw8zhdW6GyDI6n3KWxdW86zLs-01a84tZJKAqFBSxgZVr0QlTZ0FJiIH-Veo8kV-nku59Si6ZDrDXZytWGoN1dIs4Q5tojjAmTyesc564NSMuZRurW-SS5nKyxb-zanL8vd91RRxqVpArW_W25qB9VN0sRd8r9bjbhun-P713hO8tM3yQKPge-2PyEBD82PPw6ItUZoCrlSY-hFrcBaiv2ZHsX6eoyEOk1-MIlgqdF8psCqLTXsScnaKmIqRaKfS9-lo36FvpCEq3RZI8XpCg6_Xjo2oQj7W2OuovlCnlSdNWwg8odzputXZCxj3RpGmuAKWSCJFWecWDk8d19jqI9pSNqWyUZLLtyP3flLtHjWkzhWgQ_n5oJkaJD_QEHuVOfdjTHtVxtNuFZnQuY1K9TDQXJs1JSigh4XgvVfjsyabLt1zivL5-QFTIAcSjmbU0390oajw6NFYWpgwF8tfsuYZwCQu71UrLvWZ-HkT0xiyS8tWuXQ1ci8BjkLuRbro5fMrEnxKC1KZJyWxkCQiS7gKDEXhA8\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"anVzdCBtYWRlIG15IDFzdCBpY28gcHVyY2hhc2Uu\",\"reward\":\"0\",\"signature\":\"mTJRElP3Udwg-O4MbpcHEyFSGHAU4XJ6LdBLArIPOS9455OOToem_WWb5FvdZXZN9yGZlbuMlbK1nYiSGXsZ-Mh6Jv8iBj82lGdTD73LXGRKOhRy8srztMcfUYWGWCaiTAl6QEXOpglCMN0HdKpQpnWlu-SGHsdB4w7CnQU8XP-RahCyjWqezZerTFm7kxIGpIQ6U4SpIbIfnQ9TB53zCxaAo_uJbwRnEOmH9-b9zOK4BGsbk10Dzs8sPOJSHOAZfO0T3v0Whf8RpEZ7tPp-YBJ1Y0l50uRrKWmg8ttKY1EBpc2qQbSgPe3MycjfE8kKnzjl5Pp23iNIiajNuFhLFs7nv0mBiAdVUXNmDY807UQIcoCywXEUOGjwBdQ0abZ3e7RZP_KDrVbxAsF0WgNiWlPB7u2zmV5tMi0jir8P1HFCS4OVOMcnM0Zi5v2uZKLOieYIBvin9l8mriqAr5AY-3UwaKXaGqsK0AvGsqQFylNUtXSKxy3tkC22kgfiEicw6OHCq3Lj-VfpgdGB2Lg6GhFPMn4VPGEZJTQrXGCbuRhHtFKljNvaQVvKtFeMODIe3seQIq0sx_7grpP_B1o3HCOf1Hji8-kdoeX3I_WtiqXRgmT0un79cJzCxqxFspy5O36evuSpqnQfRbCRXGajGalF2REe2lSHZC2rX6vOgmQ\"}"
  },
  {
    "path": "genesis_data/genesis_txs/zwl046ia6I5VWLRYPJzBI70ypBQN2VlvLH9a_ndNKxA.json",
    "content": "{\"id\":\"zwl046ia6I5VWLRYPJzBI70ypBQN2VlvLH9a_ndNKxA\",\"last_tx\":\"\",\"owner\":\"9ojklidHkD-812ugepCwDRiuKmkgCgjmWcLUOXn0gPUbD4pZCwHsQQisYG0BNFriIhutPWZlku-lENcYmlgr6rdh0UcWiJd0g3PY8PEXx-YjleRB-A0oBlWwcDgKtBrDMcRMSFNoQLylM6Y3wvt5mGZA9mUct1FUYUM70uXz3gZS7NP0mBzWpDa9ApgpS6LUcN_v2m35ZyZbi-IWE8Jx275Xv-EObPKTZGuyIM-tRpdyZ6MfC4Chhr7ybkHYp0rdKblvRWB0h_dl1JC74rm8UOSfrV0z-LZH9PJNtquhAeYf4GBhwCq8EMFOi-H3AjtOwfWTacMaQdAapw63JRKikgUhczPRAlxiNxbIV90g2NtT9b1Xxgksxa6fCXXmEz-i1yZbwmFgbOtxbjO7UlGwoT_UpZk2CjA5tNAaujExRFN7yA66AQm_UX-XM_7OqQZhR1ysYnF0SpHkdAePDiDANPNi9-8yooTbbm-RULBLacYd18XIv3WD1_VP0OOxnR8IyQ7d1O_R0M7Jy8015pIuAYguzCUCe7LbVkpxjsEXdR0GrwlVkg9hQInNIfZw9Cw__k9BZDadfis5MtQBTgAqOo6FgRnA7cAFokfcnhsft3g38Hwq52UvegvzXuHwIuLo28a2CVBjmX_89luneJOiO3kseE16D2nfwZWx5UTWkWs\",\"tags\":[],\"target\":\"\",\"quantity\":\"0\",\"data\":\"VGhpcyBzb3VuZHMgbGlrZSBpdCBjb3VsZCBtYWtlIHNvbWVvbmUgdmVyeSBwb3dlcmZ1bCB2ZXJ5IGFuZ3J5\",\"reward\":\"0\",\"signature\":\"BAhYUoRJCzt9nxxFDiDWFeJGf-Kt3Dxttc5MngrE2YMZuLN1AtNlz_EZT96OoaGmcc2gIkUHl_CZ0-cidat1lkyGOslUd-XfvCk9oYU0wT0dj7mVKT8UQbUt-iJ3-b8w_r1F5JKYOyKyG-1vYbN3iaFEkWVNEZWBV4cEn4Ge3PoL1JIcmpDIOHHypfSuqutP2XwrbTPysRNOAHm-Yq5uU9RbFWrQeo3Qg9AGGkBIqFGj5QPfG3_uvIfXqF7f2vDl1ckf5uJWOgfiYAmAKK_ToW6qVkcXkrtuhvJ_B5wwsafd8sUMWZyAqfDjErLHNQAUNadcoiNrycqwEVMwn2xkiTx6edPEQU8qP_HamyIeljf07U4DfNb5cMfcZgocEMYrg9HCvOgeyinDmEmdaV5UkycNV7QEqysC2J3ng34suEQt-wg3R0G4mWqzaQfXW-YfLm436W234S7rpjk0bkkfj9nMcNH7lkzW2ceZLlEWgTtL5gaNymn3WWkjB5JiDRGpPdC99yXY5sYp7UN9H0mn72W5wlCHz7m0R3wzGE9EVlVjAxiO7cD1zG2lsFt1nx33-K2BqNxsevvnOvwRbzS7plp-ZiVmJghbyi-aFo3ydGCs3L2wL2QeQrPefteEGzEtyZup3SKy4JOF16h4MzUA26O6VOnVelU9k7fLMtHPhp8\"}"
  },
  {
    "path": "genesis_data/genesis_wallets.csv",
    "content": "YaWA7ZhpNNiE_h8wYWiR9CHZ0fNprrdYlvqMg2kr_oY,100000\neZUX3x-PVNQth-U8APNH28SEkJ3B5d_0NQQ0sUmwH0k,100000\ns1BSJI_58-151xoV5Ed6p3589bf09JCPw1kJ421t4os,100000\nFAO2kzRviKXc3CIjGrPvZBY7WeC54bDnG3UQpZz56Ug,100000\nAL5bM0nM3cb-yWxhlvqr3cipTerpl-8qjlpMZboWuuU,100000\nWVZS7gulelBS_0XVWJxke5V5nAvaCf0w09mu4M_AbFo,100000\nRr4zI9DehKsG3g7ASYKOp9b3REwK6ch-K9bhtk5Zyog,100000\nhHRBQFl-LB9NysdcKp6x2NcL8xzPfdsoC6dyllOC5Uc,100000\nxuV1p7S8FFn4tj-fjgydZD1UdoTstu19cDdOpJ__Qyo,100000\nJqwIBdgzUL7EoxIFkLMJh-haJ0YhmPno4SM1zT1P6gI,100000\nJgMvMTXEw7g6q2dOzO-60M86HblGXYXmK6NkMFBMYbg,100000\nPdExigxolhAMzR-ayBI0JJcQjM_J7cBCUv91_PNxBig,100000\nvcDdvYf-Xm9MBDO-7f674vE4hjOZwhXneFZfuLGjTAs,100000\nHXS05jfe8SUgtqnaJ7cgBe3LBK_kiVIsFrj2Oe_0EuY,100000\nEr1sUW1YgIm-ALbZ_qLH_3Azd41fer-_9nH5g42Wv5Y,100000\nNozAy8OM1qjWIMyYcHjLDYtgHkBqgqKudlICyEP62RU,100000\nHKRsez1U8OvZAzYSzGwatQ0bLmnRHdvGm1osJYtU278,100000\ndRx5dziGUNNtTdSemIxZwQJOnkVVYyfEQNw5mLWHjsM,100000\nghqR-A0076PqnYO84fWj2KCXn3O22JvOgTU9HxWCFcU,100000\np0xxTtRdgNPI9zyAPspZDXI_3fKGVYyn_l-eRtWYDG8,100000\ngEPyxwWr_ehQjvaD9VgYshoZcZpG697Z8-Hvisoj8V0,100000\nxljHr1vaop-iX9B65k336T2SyWieigtjUSmREHujEmA,100000\nqi5jFyVhJrcS45HkG-wdvukxOlVISL6JPd5KB3iIvJM,100000\nYYO8Cxqz81wEyNCbavHhFK2LxZvOvDXJWqrLAbjxF1A,100000\nY4PkdzXV50-2cARGf9NhVd4kCVlhEdJvFwsGFW4tyfc,100000\n5rW5FLp8kY4vSBcZ1dX7Xv0S-cC6Xok3G7g2DQMBkao,100000\nQ2bfyGaYuh5kgRh_55wETu7VNn23WE3-CqbVX5ezh3I,100000\nN1EB72-FEJq-IMWrhXTbVIQwKB6UfxN6ePdk06jQ97E,100000\nWgV_x78rOvipprA-oO3brGFmhzQ5soEHsz2ZFEnYWd0,100000\ntLkPYMdUevh0dOcdp1MBxwv1ugIxmIEsN-Mg_oW0Xhw,100000\nNNanwgsOFAIoLVScLp4hI7s3AH22X2F9y4UciRBV7rA,100000\nV6G6fZT4OCE3wKsAuy0D1Ubvx4XNSqFoVvFC2tkxsbw,100000\nWPB-7OngQt0vj3SwBDOvpnXy9HcVtTAQaskwD-5I-kc,100000\npAg2nXhfX9Z_MI3zvBxrUE0lYE-6c92-Ukub3z0VNoc,100000\nn9BJeVmMQvH3cv3lx-z1_2ZEKwpLTtZL3mX1X-XEkVE,100000\neaEIrUM8VUo76mTY7dyOQQTYKHZu-seCfnaNp-SpWZY,100000\nV--KnZ0Iq5atP0zHitsGJyeGP-3itE9iKhfgQ4OCr7k,100000\n5odgBLQk9varjhIYF-0ZD7gf3lzPr1woFlQtIanN800,100000\nYAz_B6VeWTjL4R3AXqK3arEtA6PzUt7TTtamSp0KS5k,100000\n4Rtjb83GDQfCo7138x64CosWcFizwyDmM-CKeDt93pA,100000\nlH42QnW4xW44fa9eE9LL8NqBgMUeCcT5Nz0LoRnbxAU,100000\nvLSvn5M21JlPcKAzvcnZbKCamYiPp3hHRndVP2KCT8A,100000\n3ujAWw6fCa3D9rJTKC4eBmQwNmGVeb-wmGUpejBM7NM,100000\nkYLOUWwRu4WhjPku-P__TOQqi767rSF_v_bwaH6Beo0,100000\nuHmZez_09pg_uiv3ee1KjklrwnR01mtI4RL0q43iS14,100000\ntN08TH6UR--rdoS1V2RPKRrrN3Wp6tQMeUp9NqYbis4,100000\nsU4AF7zUMFe1tVHprB-CnQ4zFZKU3qmSS9FyFVBQ9-Y,100000\nFjzcJjiXhRLxuJquIA4LdQG1ZKeEHidvHFVTxP9HZ4A,100000\nKNYLhHPA4Lwbzu8yBygk_ulntcrpNi5F7HXASc0ljdg,100000\nC_bkulS-DSpBLlViItMhaCVnb2S_eNiCISuAV1_X1e0,100000\nayO8POFu6MN490Ynq21J180nIVUEhK058FK3tKskGXU,100000\nDI-mcMIyReCqbxiUNqjVkOEUbg5jZxtrC6dY4pp4Udg,100000\nuCxZw-TqQtmdoBrJpSzfZZTLVHvGYWJx_57cMTSfK_A,100000\n-6_H0nHLaSJcmWQhoMoEzBfDUp0FoX3o2KC0H-llh8g,100000\nDQe4EyeAqgzEx0TsaHmhFvozP8FO9dYi0j6eOvGXJ3M,100000\nSRxlg3twQUckGBXZNCWk2S35gSYHJ5EPVJ3LJ8g2-qc,100000\n5ng3VFd5JGFX40enqdseMoWj7N8ZV_8Kl2bb4SZT7ys,100000\nbA-Gx3RURNpnCdGQBky8oJH8mCAN7SHRx4kQaQUM3as,100000\nUKutS2o6owRXXvWyWKDZWtZDMBKr-S93vbV_0GNIisg,100000\naCda9LEgQ0-uMs-3-4OIvAB-fWWD-MPRFA3m1em2ZCM,100000\n_qStzwSDEQ4GGVqP7UpHxyo0BdXjwDky2Nj56eYLr9w,100000\nrcEpiafzs-nE6h17e3GqSlkyjw9G98q3jiBd2kBiYbs,100000\nSS33ruZnZgacoU_QUPMDl8Kz4_Ibz-swF4BkWKSICls,100000\nwWf2f4UwbjZKIPU7Jqx2e6dYdjbLnV9yzZCfV9Tm8fA,100000\nD1-8HwIuVw29vBLIxMGZvbZvCJ5C5JFi4ctTSPVq-8I,100000\n6TIT-X_xnza-rhnbq8PDbO1ZdAIp9Po_ebrDow0b3U4,100000\nUXx2wg0kHIP55CL0fqETgofvqnJLk8eHLNHjbrDZR8o,100000\nSPhwwr639nfNL1S_uL2M01Xe6RQGIBCXX6dbMifFYhY,100000\nD8z
D1IIrgfJxQ0OH13XCmICqHcesXvgRuyXtyyCYYTA,100000\nvmGFbgOVsAaLrDpFUesq-ub7i38j6Ft8potzNuPql6g,100000\nsHnhiEQwCcBzriBBMHbLQiEmjHgHXWJbz3yX1yPqDiI,100000\nDlQGRbXi9YSMUeLZ2KyeT49BM-QEK_oI1OtR7UXF6qU,100000\nHK8aEpEGE8e3kamFK863MG5JGZWywU_vBWAI5jj1tdo,100000\nrSRIjSXYBPADyDy04OgogP2gMy59O7MPMkm5b8FXB0M,100000\n_FqDDNlu9sukCX_5MFDxXv3rTeI-A1bA6k9dT3OTPYc,100000\nNbVcIWQU3YERyBz2Z50YmxcGBQwuZQ6gpgHzAulnQGo,100000\nKKyG2bSV7kWiaa7Uw-njmlomvrNvFv2hltXRA4zdXjk,100000\nC3frptJWsCsy6fWEYOjhylRjnfarxgGGcUhnD8MOg2A,100000\nmE3v3UYQcdnmvCvuQ_6VSMOBkr-renRiu4HABtqYcIk,100000\nMmSSuz9jk3Kxp_RKH_Z3k-w5HRjg1KIR2Xar-OQSB7c,100000\nDlAREOK-dCpPBA67SiCwlW_2aQNp6BMoHtjgNGAECwU,100000\nKjRhLchXKUpmDvmr_ct82tMwMxK9qFyNoHC2IX6ZBBc,100000\n0OZC9fBlcXSF32SEX_fFdN9oRq5H5dLBfTF_pkRJnG4,100000\n_YPfOvaYHo2oK9HVA3NRVyzBOjrTSBuJQXsKnwoIb6g,100000\nxAcisk4-K2Eyx5O5gO6rUxCuOj_K2CXCj1P9qBSCX_g,100000\nA5VoQqNZ8AhdChe7MjwieS1rPcT3VZFTm6X0hNf0orA,100000\noG_bkTYxN4IzbvbMg3XJJxheug8uQCDtYf1BC-lNZQ0,100000\n8ckE0F95yAKpa36e5GB2k861Z0Wapp6QFaXwEQJz6bI,100000\nXWjT6pZUailFL-cuxKFs8pe41X58aN5MhAHCjl1AcIQ,100000\nY69MKJ-J1X59gt6nQQ44b3bJfxhEQWVD4IT6NjZOBH4,100000\nCYi1a__7XF0DI6WZOuYrjm_VMWlRl7wxkpY_DbyeRf8,100000\n8egCrE8CsSD8VLE3nIuP9FQZdX0k4L2kdZ4b97DwsgA,100000\nHA89R1nmsqq67DWFwEF0VdckuoNY9M4RvHu8bda42qg,100000\n--BwkLm3Ch8ZsVvevUPixP5z4KMIchK5f0a_zz3NHew,100000\nKq8w3RbTDWSWqhV-8YLhxHtP6S_ySZvc8cI8QwUd1-4,100000\nq27Kbj5OK_T6IBqgE9U406knDZrjr_niE1udAcn5h4g,100000\nOST-Z2mESfX4wiiTZJ5eVBYC0zG6hSQqhqxT6ZgYkr0,100000\n5ybm3BFl-lIsmBFiw6Y0To7y9710jAeqGelfuSFTem4,100000\nPVSrkpym_PbiK60hZHguexQEeGU3s8-_7Y8M9-zW-nQ,100000\nsEZPm875WJmZHAoNJR_V7ixr0McrGkym5HJNMhrEwZI,100000\n0TvTwYu30wsSqYaNT54rqugDrCC5Aodd86WXykXT0mc,100000\ngWOhYKmZ5jRLIEIERDp3qAuRxyy7khzzWzQoWv1SNuI,100000\nprxJBhGqxPPzzvNAuCb0WVGckWsNQIHYt0Z0XnaDdYA,100000\nfg8caO4Igv9OShNunTZ2jKEquS-3hBhcxnAzHA2Hxxc,100000\nxRbjPgA4u_fHHyZDfQPPWEzO_a2g5JspMQ4ptrsEsNI,100000\nRnYstFsZP1PV3QKa-34zqn9-k3SItXAOUKw102zE-iQ,100000\n85LRqrf4UpQxnkdnT2EM2jJFLJg2AR3P0r_3ng8O-Nk,100000\n2zgo3-DzCThwYmbDS9PgVqVkgNe5zxD9Obsl3Ah2ET8,100000\ngDsBkz5H-yEoJZqz4bXBLfHLjfIgJ4hMKErNpXMimp0,100000\nBvBx5VmMLHXoH-5hkuBQVVKWDNED8PMJXHBULFTaKZg,100000\nYucza2I4r2aKoN0No_dZN_KPUkizhKqyHH9GYEFrIB8,100000\nYK3ara_DRb0FqG2Fl5r9mJE_f7nMYjdr_GetolosEEc,100000\nqi7CAP92aKEefficbYKPdtMjrRegQ2GzHKeb79GBIUU,100000\nnFNlPKkhoVX5l4i-1N2aeM1Sdnp3vr6dVZUumjys5gc,100000\nOJLHCm7jw4-11LFL9wEb57PpgE7x-R1-3VjTlc27AwI,100000\nkKBf-E2Clh2TtHmLgfnCd9ZPZOfrqFp1zQHeINyPFUo,100000\nvBnjCjV7lt0tFemF-uFO6mvBePiRUfY7ugyMvgzB8Xc,100000\ng4h6phLV4jAQGBKO6Pt4bcmm__VV3i5AqJkUquKLXQI,100000\n5goh1U4AkkERdUKGUsZDk6-XQKko0bvBWMIk16Y2CaE,100000\ncNUKGTndf23r98qq7gMG2yU_85XKyb7blLPqO_ze5wI,100000\n1zIrx11nWzERkyZMRU3B-Uf7pNcZchTqvnpxdAKylQk,100000\nFYQBmRL4mQB_kDnkVVzXBcdpTW8viydZ3WvxpbUk9oc,100000\n_qa4arkdjK2X9SjechexnWzTtbOKcPkBPhrDDej6lI8,100000\nbp3Fpodi2l3MRXAlPb1JFYfYy5Cgq65aFPMmwdtL87o,100000\nwOrPJnrdOe-K475bxZm4ZsD-PR9ym_h3D4f6sTxgDxc,100000\nccsasQeTTwr5vm04xu0bxJdZkFE79NIxpM-QjB7ZYjI,100000\nMzOlJmKOUi_1oqqkxjeQe4ehy_4jWjLp54kiYY2CGl4,100000\nka2KKBo3xDhD4rLZtfMF1hzC1HVDtKp_LSe5WQQ6oiU,100000\nX_FKZeJVi-x-MumQVrEjgnDYy7J4ZWNXmIjzBOWGG4o,100000\nijaHRivbWlsa5QqPoVMPU8uvWDxHRaQFQjSJ3nvjhDE,100000\nB2Jp3rHgc0MJpb4Zmu6QATAzk_HxZnA4PMybkY8hse0,100000\nAzSQ71EuBPIq9kZ5MRKbghSS8yr0qya6bFPaimRwpaQ,100000\nktKRCltBqfSxsmVlsFE0XejV-XX_GeBUZYIvQMqUIVc,100000\nWwUDKA1q2k15jksOjNURQ6J96snTT9bR_kuUTLCywP8,100000\n7nIVIV2nUAMCOBWan6ncqstjWcMHqo2lrxWbAW5ncm0,100000\nI-0efTaGI8bH2KcSWEns41Pjt_muv5Ly-r4dAul4UOA,100000\nY4yACPcMLXybmpUgH1kqHP
6reQT-ojMVVSjCMJwgXEQ,100000\nqTwdoLNxG_OjeLdcNTh9AsKP5MDjtPO5eGtzIOk_8f8,100000\nHSQjmlP96FiXAHDDeaU6ZBPDLMz2PF4FlZnt6BWXsto,100000\nzzTdoixXhqAl-hIReKlFmVWpEswRxVL1gqF3gEXV9mU,100000\nL3kKRgPRZsK9VOVtiS-_Fze3jm9xjHG8Et8gKj86Xu4,100000\noaqgnZD4I5kknRE4cgyL4DJgmsyKDG-vvDS-A3ztpwM,100000\n6LyoBAN_csKOz7H84yRpKN_N75FjQ3lzwHEme4kS6zY,100000\nGrjpTeiOX8ZYZ2_gcJ-9dW40xpxAQSK6NNmmFtpypuY,100000\nSAvejE4krzFUBqhh7Kz-ATIaHi5_v6IhFriHcJFZ8-s,100000\nLQ_aPHM0YYRYjViADRYTGfu9QiIxlL7V8eb7qrAr_Ds,100000\nTHV6Yo_w6llWQkovq1H-3ex8O_SKDc-oNpopIYMPkH4,100000\neLR-J9hhs03sAeEQl-13T5RgQ8R1v4yt-iJyUp-qPts,100000\n1CMo73uPr4AqnUi0dnNFUtejVvDv_9NsRC7k3RQ4Pto,100000\nPWjmKHIO7WQoekaC_5r57zn4F03N8TyyBm3er2IpLhM,100000\nk3rS1xQTWqdcMqX9PDvN9M1EUUHG2xP1eP5-V8A73mE,100000\nzn4A4Fhue-GYOK2EIk2tl4tLGIrn1tQ2igK96_-eBJ4,100000\nW6JCs231rNWHMl0tMGHE93R8ZXl7Cj_5TGSWx1DTF7o,100000\nYW4YRRHChAsy-xe3hAM-EbwP5Z16RDodjynexjwkuoA,100000\nDsKUU_PmDJJCmN-bZRtJmzdIhalnhTo75947aUsLX8s,100000\nmsNLAFvxOul8Tfox6on7L_8D8ABYWIqQplcfjInzNvs,100000\nrY17qDmA9EIPLc-DldJDGEkgjmUjMRfWKR8IDrxrMoo,100000\n1-Wv0mWANZBve1XcccIbjIuj1dZAKmbo_cvdFqKhaoM,100000\not7yut_QoYxO90n8_9BuGauj_Q9MPn5mOjVEO60kEHM,100000\nGJg0h_BNf013RRjqWUiEUGJmF3PnblIJxK0WUBapbl0,100000\nD6bMR6cpMZYKKCNjkdiJASL_Qp4oIkQN4xu4dx70rbs,100000\nb1TYqJWCs2ivAM2PgXmhmXsnjelrIt0o5OmHpzaFyzs,100000\n0IL-feSHbtIZ_VIzPCFQoIM0D1F60TFRb0j59Sh9SFw,100000\n96Yk1fDrdL9q_cBPAMAvwoeHnS-11kQ92oAHWEzWVuI,100000\nrf9Pud10bBc2VDz25jtMujcP9xu7w96v8qJDJp_mdos,100000\nLvfCowuw-7H6Wp7w_nt53XSpt_gH5LfRbp7Z-deq1mw,100000\nqAOLHhObB6-lF-FiHrXx2aHJtE5h1walpRmnvn_TTbg,100000\nuESNnUefGjM3cmKMu02MtFU_wiicuAOIxGA2HOTlZGE,100000\nsigyluoKLnk_IVD-Iy3vb9EcWUq2QGHYVBTz9_Bu_jw,100000\nZtrRqK91qmRJzm_vPEcxhZJHH6E4D8_l5CndA4PH2TQ,100000\nt36rLTMEbQKC0do3CkEC4tsVgIKRVA7HSsznh_tbD5U,100000\n3h_JnzAjalVIiWDCjfzwXUPBwL3rWenob2O_e-ESsh0,100000\nWATo0mu4SKTTpBn2iiqEl38qS7lM7Q_1zLeust-q4B8,100000\nYO5R5hvBNCbM6v-HukkUFFKP4Mvks-cf6ncZlgidAFk,100000\ns2kZ1XuRaTh06hVcSCHe6XdBuui2c6cGibqZ-KGVu0o,100000\nFg6W0XqVAgl_IQ9s5zzv4pZA8coQ1og23MKjv5Vt850,100000\nK_ur1650mVIZhkTIJRb39WgeNPatmg6d3_eWFm1t_DI,100000\nsVIOKpp4WiowCmS-JzUIGiT-Heot16RIciIJM8oJDY0,100000\nuKTPMIrsCl-rUVtwVYB4MndvmmQQX9FXdfOTBufGPJ0,100000\ny7qGneUmbjF1QAUHAwle8-_oTf166a12yY42YtVen1c,100000\n5RBU2MFQtqOl9alOYDBAPSuhwd_0ZMGsdjaFVxloTJg,100000\npPnrLCQSrHiYk96UP2Y4Kco6-gn7rUnk5zhi0X3MiRw,100000\nB35J_ZwuM3CZd-DxLj4XCJ9qTzjwxTHuHOmvMvgErzw,100000\n8Be_7cgyFMYF3fk8QjOPbfG0uy9IBuT2nWXs6ZSwzeg,100000\nQZ0Gjj0CfsGlWvXjMMGx5el5qYo_ytbHJQSxtsBqjls,100000\nGfuVD8mc8ZDecmioNCg7rrz_7f_OyAUqkLwsHMS15oo,100000\nrgcoLJgLtFtQfuIHQrmhoZ4n5_6-qZEQHHqx3N3IrZs,100000\nty8eKQcis9g08r9tPuLxLQ8X-bLGWfiDaHcqTMy2-8Y,100000\nLEJGkcPxJTCyo-bat63iWgTLDoh3k22CItmChoH-dTM,100000\n8gIaI1Ylxuhf9ahssNFQSUfh2Tiu9uTKWqCQZef7Zi8,100000\nES6Q6YtugV5zcqI6XxlqhAL0R0hEYmZZGfQ8z935BKA,100000\neSaAYmJ0xQV373m523pkTlCjmisJow8bs1tuNq0XhfM,100000\npdEaOWS3T62n2eUy5d_6GLdp3vIdw7JwZ7ALFPi9_Cs,100000\ncKp4e7X1b7TMeiy3Nb4E2hScNiGLxY3_1N2tLJS3ZUU,100000\nqgTs_vpYAeCoaFJxg8RZVUTYAQXG1KQXaikkU2FlKVw,100000\nne4jtyau_7GN7iTcK7lC6-l3o7hkb_TK09MuHcpIDig,100000\nED7hMCYbdR8Pm0xxPeitzkgMJbkDQn6hcZbLjuqSrFM,100000\nHIFk454nZSGLxzWTpd1zAg33RydqmZKDIiNintx7V1k,100000\nQdGFQcB1Fu6UtWa_b8x-t3c4tEZkFFbGLPlq_vFn7Q4,100000\nDY0tsIKuabnQ_8uBBCLmwNUjy30P3sD22yoEV4eGhbM,100000\nYoCGmMAZgU3STIIEEsyesnpR5P7toUKoFpNtnuN05uA,100000\n1roVFPIdxXtDjCgzUNvJs36ShP_aKMP9PelpOZqxDy4,100000\nEgIuGXNI7RzrPTRBvlepMkffOX3QO21enIqR2ewepd8,100000\n72Xcd59k1uTg6B5Tk-6upA-F_RccCkmuwrupa3hIE2U,100000\nduHWSYLVz6b_tTf2UR5KZkZJ9PgvIHTDTRGisArGk
nY,100000\n2QRgPfEhSQY43AqotKkDRlWmHkPbPdi9IWR8bTX4wCE,100000\n2NqPaii5HltyCiPIuCHVpz82_CHZLiQOWD1UkToKpKc,100000\nvJP4MtM0q8NVAZC4ej4MTH1vskxd-MEGV5JBE8o2vUs,100000\nLmtaJ7MiMJn00sAsBlhgqZ3xBCXnZrLEiwwnNFkt90Y,100000\nOWu_TeloAgLLzvPHAXTP1rKeza6eEfjmlDWd3-v952E,100000\nScgq0UTsKElqWWby2z4VJvAJ5oFWe64wBiY-qr6OS0U,100000\n7_Ofo_YBuxXLVcK31t35MOWHcM3yrgoM9vJDAztAujg,100000\nYIWZ6GPfICBp4gSY3_CRo0G5T6_nKJRvP_4Ski7LUGE,100000\n4GibH5U4mY0ErMzTQJsBrvJOEl6KmcLkj2NAdhOaWPU,100000\nuWc_WKdMWhSV-JY6tGA7K0HCh_6MhairQqiL7K7093Y,100000\nq1h4yY50Fs0m0ua7Wxcw3jAKLX0JHOF9zgQfHZn_nN0,100000\nIMk0PfgF86fnO39Kr8x1UgC0mTYUw1_gcv0jjijM-CM,100000\nnOEtSWZ1wX0paNxmw-GD99NgnLNPtToccRxg6-YTWDA,100000\npwbbAvcF7W8UtHsx-q4Nbilauj-qQMdDVBXBQQAhCdk,100000\n2FjjL0B9aT3LIfx7omr_owaWLBKyL2L9rWBiFQUorHQ,100000\n0dyEqjGLAMDzaRnCzH0camhX0XirrCEqVs_vc1btmG8,100000\nV501Rjl_YPqtJ7OZSXU7VAFlP0oZOSzEscf10cjPzmw,100000\nj-3b7b9bIh46BmuYd12DC48CSrj3mmkyFPzv97EAk2g,100000\nTsSp1_eQRCMIkhqUaI8HdR8MPOiU7uspM_pAcf5AGvM,100000\nkOHPE75QFQ3i8Mnt6tstgZ9iVe-YdBw-pf5XaScukl0,100000\nkwLRY5fWtjjOqxzp6tujCP5PcW5G9w5fiGrIuAfPcls,100000\nl79Dgbh3OBfflHNtMCnVpYUyb1N43Bh1Mk2P3Gt55Fw,100000\np2t0yC8h0nAvKlzX9U79mJ459m5vNbKrNckX4XlXCsA,100000\nPI5oL2_bonvAuqb2AKYEXgUSYkA-KXs1Wx1uhJ8abNU,100000\nPQimTX4gv68yXYSLjz5IOXRQGwBXBpHqukGnqhipzks,100000\n5sXKWDhvOrSw_cy6mX7-pj7-8c8ARvicLuzPahaNMpI,100000\no1-7qYYfBTlXTcI8sCl2LQR7R2eTPngGiGi7KaAHDfY,100000\npwXjbD7G0x4nQaifUdnEgpgiywdjm20sHf0e72RvkLA,100000\nxt4gKDTdQp_FM1CUOJJ_JO9w4vSEy8tPmErjg2xikiw,100000\nalga-vnK3HwW3mUuRZQ56zWaUKXkC2rOAI9eZ5NGqzk,100000\nyaMqBdwOQ0B2xAvZ82ncR0bOILu1OUCadp9vCZkWnKg,100000\nLXIjPNRDoBS8oV5XU9NBYVD6jDXQeOTL2uwUBUiWQ1I,100000\nOs3Vymfuii_7NJznPJMEUoYNZ4j8BlbHCj5ZRFrlGB0,100000\nLSg7NFC5KdYvUZcmi08-hs9BoY7QdJMKZtqubZdqBV8,100000\nH951WTmb2pBAhPjWf1WmmXbnugnLSBKUY7NtAsMw1_o,100000\nPaFmcufEYOpQS59C7E-NqqaQSbYQJJgPyQFjgdC5Ngg,100000\nP6HuOmT5gZQa93F5QwjE6hA39BuH8Jw_apbpnIVi2zc,100000\nzLsqSjfyjFofuNL3Z-a11jwUSJ7V_aeGyKrhyJ3fX6o,100000\nJHqRF3fovy0J8nlw_mo0r4Y9ZclO75PGBu0GD-PDOkM,100000\nVkUrj89DtJ8eZPllycwMrck8PiWUKcFwgos3_f_mH8c,100000\n2vPhgBAf8XaF2gJ6Awt1A8Oy6Ch55nJovy2JjLISdEM,100000\nwrvZSeM9MWr3pwjubsAeJdxCr1dVrwgD5m9OPtFduko,100000\nfdcmVLeAYRDzsu81BkKwyhQnztpxLdfDuBOj9ckBOO4,100000\nMteJRiQojXkdzBu7T6RtCOFEEKrI9U9IIz_GJtkRG54,100000\n_wtKD-bvlOP-qzmjtQWAn-zWB8ufJmGko9lBu6XllAg,100000\ndllCeWC-Y2d6bULNBNLPQON_YnLKHHrQDxXilfpWfVk,100000\nAC_xfhPRl-jsJEwo0x_q7Lj0LDvMA8QP0YH4PKKUyv0,100000\nVnck5x3aBBhb0xuzNfBgmzQPj_3G_fx2nhbMmp-idE0,100000\nH1Shevwg2PDfHcbarmNc4ExGffBtanzurQILPwyia9I,100000\nwJJpoWsJQdqUl6VHfv7uwtZ20ldgw1Lg82R_g7CsDOk,100000\nTs_Ad3YcFNNIXsyz6rCxIfKvI0p4pQp_KXERG4hTa_Q,100000\nVy8q29Pymza_LA3fNHAJL2RWzLgKBh9cPPwa_hnyGjk,100000\n6OwXkXZE4NtDrAY__xZbxK3fGRbsdxoe5_GIcdAeoFE,100000\nSWNoqHaifMtG3XcmUv8iFEIttqPAFDwZKwaOeQubzDI,100000\nwjRlg5gK0H0Gh0s2qvUI2kQVugT8bpheqVjNjgEgcw8,100000\n3rceYiNNCBQfQQdNujwSXNkQy3DZ-jKDpKaZlokvR6M,100000\nFQvY1Me_yfzA_5McnkspMW_nFJ3fsnsJu0JamGTyc7U,100000\no0lQQMxOZfzv475XIMu8P6POBqAvx-RQyqa4aXYuke8,100000\nN5vW5WndSKXofLOguIvPHfxMYb3DdNcBjBROUMqBMZM,100000\nt6EtWfOId5RN1icotTlFt0t8Fxraqc9QEINuEEdywp8,100000\nK07vS5kq_9PSI3d2SuzXuLJ1NmRSEgoqduaMa3vY9P8,100000\nWh7-stpgNWsW_l_er6uFeTUUJvm55rvlwc-f6_Xd_58,100000\nNw4kgALHC34qreeyjlwSTi8UNZ0MPiwUYQEgc6vsgo4,100000\nAbvZMwU2Wdkk9vHrqTAIyirUoGR16VSGvAcOOFrFUPs,100000\nyrvqJaJyKgSySxIE8HF_Co7SkgtFfga1h7ZeJ0lkDqo,100000\nMyp8h_krYnzoXT3frgPY9DcXD-QgicN6yQhFaz5k6lk,100000\nkdBz3Y5-e74ToYcSacJ4ED1mOlsHPuhi38gBNXph9z4,100000\n3ap19_7t1AH7PU3HZdWIUarjwMt3kaAlIQ-DSLaXC6w,100000\nQtgn1nDR
x2CzWJGQcOkbfgnM6H5O8Rh1gYF5jJTN3s0,100000\nwtKzpomzx9s2V9YmxDtOc-7xw-3bleuTcqU2VCiuN8s,100000\nfKU7moqKN9R3BuAw2RcpE0oyANlVwCCC1J7TK_V0m8Y,100000\nJLNp6nNz7QLx_V7yyhzHojF0oJIk-JLOi9c3FmoZH3s,100000\nckQi3i9iSO0BbQl_wkq0I-WLdznYL4gTtN2DsewQGpg,100000\n0lNbaMLcmMJKI5U8ve8OCHi7jyjytTkBlD21bcRlDEY,100000\neX64gYzRzR5onoucH2sFmxznY-IczrGDQqFQuOSxun4,100000\nzBtF7atvry7rt-P0uCWSi-rIlkfBeDHOGC8n3qU2MxE,100000\nizQDLBGPPmhLL22A3yf8esrCgpdOzUNxWOLIFwtyGXQ,100000\nOlbY3rr-oro04Tf0hzy0ceXlD81b7Jwez6EaDRNO0bo,100000\nxS8cXSwVnmUaR6G2odaLr5fe9MRyIl0QDeS_VGPM1Cg,100000\nf0WZmIx4rSw98M96VRVTTBgiWFrw4Kz-TzeLSWs5NzI,100000\nDju86aXgYZJra0hzXydy7-NiYiCQ-bPwid9__MtVTJQ,100000\nAdedGIx6DI6DVeX0wfPTtUT_j9TOYGpalYE4HHrVRcs,100000\nzXEzZSZGGbfVHDkGL6PEDlX0UWAkWmjHRjTJaN5b2OM,100000\n01Uxoj7yjU2zIguQWipFkJqq_AOkN-GGOfUQ5Lo4pUY,100000\nruXmZDgZLzJQrnvdA3dK5ajVgTdy2PzqZdusiDjsp0Y,100000\nng91fPVHjwDvH5xQZAFrtzpLrcf5KQNznz98QSclEJ4,100000\ndc6pcipMvMxdOvXTU98zDKHp9L6uHWui7OgjVJKqrmo,100000\nVQdib_wCmpUmPEZURr9s-b-PYZjZ0WRyAL0UjncGccY,100000\n4QaLsvVQPN_IHErAH4JcNC-bBSMwYwMPkrDB4goRHj0,100000\nIXRXjO6hTHY1f4yBYf5GKeitbCeMoJm-UFuKj2PlQfw,100000\n_NCf0i2ke9kF8IrnVQX5PycddgCuuNPhmwdeaQTJ6B4,100000\nMREBlAb7tTN-1z1P70HkhSyBAj5KwVc6f3HZ_suIFEQ,100000\nP1jjEqRgxqVzdoDas5xSjxBzsLxc33FfKh5vy366_S4,100000\ngj0uVfYLX2qqkCvAUSM6Ps2_kIRgLlPpx2ovBr2j38w,100000\nxNJXaxQFvjjC-b_gN2AeDo7IwXwOXc-Oe9T_8SA0-8U,100000\n5HNg-BjwbzMn5bzUNpZhwh3qGnBFxEYO6fAu-3hCoSs,100000\nSGj4G47-jCH4bQJYNTMZKf364WZgJmi9WYmhngO0WU8,100000\nzkMhdI43u9cc5lKbz7uv47Vf1Olz9dI3fgJhszkoO1c,100000\n6nGqQ9dwTpK1lfD_eLv4eEKEfnvpWTbRYUSt27Me39Y,100000\nwBAh3OfCGOXslcxVTA1cFVCzhWYsKMj8_cVAXvVqE28,100000\nRXVGYTxomNgmYQqn6EzunelYqKfLHqfX-iWMD2hPPUI,100000\nWNs9cZgL33BlsJFyy4_Mz6CRigqde96s12JojYF-0Nc,100000\nHdW3WSkFcjHOWnVSYfIgyFwQyz89lgNF0NXuPjTrQkE,100000\n5110bgT0UBt0_y54Cjcv-GSsImMvgyvQSNyOS1QqwYc,100000\nMBXUPwU6VLQjtmTUJ9nQArac-pV62jSu7zG2lSjGUco,100000\n97b0_etS4DR5R_MT6aYTrknmxLAP6NDIy48aUYOJWR4,100000\nepbg-jmh7P3IqScEIfEDwiEKlz9VKSiHeN4NvhOPwZY,100000\nwfi-Dl4Ko6jh-0wzTo_KwmCWStBTb0QXTy0DN2SWYvQ,100000\nhCRwbhbZiqGZnx7visaSlXuHOEzMZrRet0qanIpwHW0,100000\nl2XiYWbAHo1nzq4Zq-nAYGYiDcKltroCfs1QVrrZaJ4,100000\n8CfM6tXFXT3d6yUAnU9ZW_8BcSnSdzLkbV_RMhL9G-U,100000\nyk7wGfir_Vm9zZbgydLAi171Uet-OedLlDLc_D16wiM,100000\nCw8V_sh-5V09dIKcMHawG6YkQNhnkkUuzVz86HnQeek,100000\n22ecdIsmZUgDW2NXFcnlyH7Sh457xTtGjq361V6aRVU,100000\ndC-zw0_PXRaIMJUj2LLz5DGv0BB3Cr-JYfgr8pU1U5g,100000\nz1MhfZjvU15JRBNEAjIr22nQgdqsnObvC_O7VKV5_4g,100000\nZqrt69Q_KBDBKCOLxRUh9ejsIYLuClVQrtRWienr0Ns,100000\nKnNyZSjKzVn0fsZXXWSJsOUjASHH33GFwcurQmG1NOw,100000\nsJz4cx9yH7BRZ6_rSYSlOwt61PEBa2earbG0_oE7vyE,100000\nREbC3bHwvn9VSU6pq4HkWeT8cTxvJXb6jfg8gjXhFtk,100000\nRMI44MyRQKQgClDIvkpw82x-mTS9h6JtNH6iBshV55k,100000\nOIngBDqOr0vWSH3gjuJnLqTNhTF0rPReTrNVYW-pQuI,100000\ne72Na_tuN6FMplJ0HdOL_OJGuKeZ7VIUjk4EEJf8BXY,100000\n9oeXwmOGvpnSXTmcQ_hiCASsr0dyaontOLjAwfVIbuY,100000\n6Use-agJfTVBXXntJoVVBWgjzRwpjMjyM3FvAAJqc9A,100000\nDs-X9cj4LnDhcply-su13NtMvQsltnJu4aU5Alz-qqA,100000\nejib69qAuALTl1kFg5u2mdi5ipa14tjr0NpwPkXTX-E,100000\nNuYOtLfqBK8hxf67umC4StvBpX-k6f0JrqxAfFlMuAs,100000\ngGDlNNGlkIg_efzS4lMwJdDoWxRjLCuz-zMMaP-gVBg,100000\nFS75Hz7tKCWxPt2AKcC0B4AtZ3gCjPDrX8a0CDEeuVs,100000\nlQIPWJEQksUndcEcezS-0XCkHmgTAZaiAJJbFLMq9kQ,100000\nM7jOGOrR0abysoVH0pKUrZe_5yxR4cKr_sBL3KRumeo,100000\n77Vsh4LkyL-K4mzI-eWm75X1fSgUXnuKoB6qbRVgF1w,100000\nMBJ-MNHAyOfrfJzVMkdgMl_DPD9pBu2EjzDoPhU0RjA,100000\ndFyiJqXrpcMUwWGp79ikMkq1S5J92zN3VqW7FagwPaQ,100000\ncKelbJBpSWfeSZvaqJCSx_tSuYP9CvJlKWDM8QDT5f8,100000\nk8TAlm_rnlIDaB55H6HOaE2HF4p
-F4sduYoMYfxag8E,100000\nthVPhcneZo6bu2tHEONdDfmm92EuBtm7ivp5zN1gAQU,100000\nbkv1PSM2lOkoRnsS5BGsBgsTFha6eI76qguTH7Ovxvg,100000\n1Nf1j5LRHLsymK8gYiZzG-LAderhpX9pm8kpaiavvjY,100000\nshikV1ViLIW_YizxGzMktaTwTCKNHgYzyhFRp3hv7J8,100000\n1M-RT-GLlui9NKIwbpECE4toAFhaPZxk50SsV4-IqnU,100000\nnxgegMTWC32yBBZQnn7HEoM--QrADFnlk_U3yf5Bkuw,100000\nRNv73h6Iax404afhHC5VXsWODK_PQixEBqDpGPZDZnI,100000\nxwMz_OILB20jBsXRyy29Wdp-ysZdxtbpOScjO0oCrGk,100000\nD8xJu8shDX6zTo2uz8vCv0QFJCaYdarK-HTVsSKH_7Y,100000\nKRlZwpMHRf97SmsQGN6oydRIJcjWHEkvx6AxtYXHuYo,100000\nQv1i_AewPEnZpEwKZ37-apjOzASKmJKc0dYuG4wv6iU,100000\nLlSUxfTQAlce2ntAz8KHZc17UKfkq63SFLYtEsfeIcg,100000\nsXdYMSm5-1m-1J13_Ldp164jibG07q64689wKpCEvnc,100000\nwFoh_n1vrDFRNlJb_smUZtNtIvxp5LCHT8v30u94RRo,100000\nGTuT4rtvvEpEyP1O3sg92KnliQUq49jJJQwTk24PWWQ,100000\n18CFP6iXdF_idY562dYFLnHQzbm6HiDk1igfZ2XeppM,100000\n9wvXjWPg2NhThNXnkTFsWOLTV_5vkT93_stL88Gr7rc,100000\ntXqvhW0-hmbfJ-oDloPdgkMHgB3rU8nGG9VNpFQvF8U,100000\nfkoCDwklnuwWydLo90AUxsc65AwkfgOo10fbirqEbgY,100000\nXE0NUfYzln90a6JbvnAphKfdXxV-67I4ShD--nqqyrg,100000\nSus23I8bcFlxD0x_cLTcSh2u-VpsxEZG2PLwqiZfknQ,100000\no0lF6gAMwXYFXN1W1fXQOOJgEp56vFuW2fq0rcQj6UI,100000\nIWTtz_qUZrrkAiC8b_JngEjpfrkd7u-f1_GGaTc_brY,100000\nAtTCKy7nKQ7WGhyX_AH1v3bgWYTTy3d-9azSoJOxVck,100000\neH9WH178a72QVO2pKdkHvKHshggotiCIoS8_jpXCeOI,100000\nNbJfLHQ-84hYVRajwBteZo3htR7HP31dVdwbkmANwZw,100000\nKZAL-LfoGaPCG8z5R5_9Rbb0ajU4l1V3hVMDADgEQ8g,100000\nVRoozxUkICQfaYGQMgD0ISZ68lNU18VT1B3Kc6Xr2T4,100000\nD7AFGznB703S4r4z0_4Ld3lJxaIUVP86J6O8xlMHy9Y,100000\nuCqr3-Wpmg4Vv1qHPLSzxI-q1_PSkijAWUvJml8zmEo,100000\nQa3SaFt9KBIqQTweiHlr7X-mKmiRBeS_m7JhB1Kl7DQ,100000\nd_t2FebkpIYKcHPpPYMtr7tii0JJhUH3cWcyHGad29Y,100000\nMDlNV-mEAiUPoGhV_mpEVikAMBx4ed-2jNNtzYR3GFI,100000\n4e6GZrMbHorUhDbBn3rWtLaArR7PT1AfVmF3HQJZRmw,100000\nFRMdoqDMhnOj8ty-YG4pLNJfXkEk8vpRS3JjK9tghiM,100000\nHmxMPXS2JI7T3f-8oHeO9m6D4Dz6thU7Ti6D4X5VOv8,100000\nxHo51AHw_0KodpfGzESbxebbMv46DxOH4tfhGDj7IkA,100000\nB5ISYFdt99-4Tn_DrCDB9jjKTwnevLDrTtPsiblZgxQ,100000\nvTP2pznFk1HF4lc2VFbXGcITvcl2TPycNCkMc_S3Nvk,100000\nnTHdzcJy9RGNefLrbxHb5UzzRrJSPaW4v80ApzUCdtk,100000\nBJvIrWXL4VJh4UanL3K7frafkwaI9DDhFj6WC1mKL_g,100000\n8_OXB1ymhgFxvXpvAMKj1a7zLhhFPmLH9fRXqkDBVGg,100000\nCHsLZAzMgTYanJPwAO5e5HxNg4EnC-R97k_FOJFb07Q,100000\n7uXn3Vf6RM8-nslYmBre6Ou2d7iVzV4o1gcyP6bZTz0,100000\nhPQcIcg6kr-bug_zMxHIDNwxy2n862Qw0Yc3f6YSyeQ,100000\nVHimOmH-qmo9ceWXRHqTDxiUqEN84itY-HkfqzWuqQo,100000\npfTkWRqMEM6YAHNgLJcWqloFEjchT58qHWf4oCau_Ik,100000\nZxqbDPGECMWd-KwnfgteL0XbfY1UDLN5-Jf86tx8GVg,100000\nrEQosEY-vENaKdoeAWAC3UZQ5sUVESk_aQ9BnmWIf_o,100000\nSvzrX-2YYFZGR0AUy1v8qwl0JZlDsgspCZsPkDXWcQc,100000\nkDhfH3FXFxn9VQnkWvsRQRv8GMo7OZ1kN0I93ojyjS0,100000\nPYajxGD-kAg2UA6patL9nhdt6gMutQpONsPtidgJeRo,100000\nKqOEJ8jxFzL_LXR7VbPKPpwSRmU2q9cq_1wEWJmRs9A,100000\n9B4yWJ6ym1laiB2VGHWorDOS6QEZ3LsqDJ-p8LDFy3o,100000\noyL6eSytDUG9ql0ZYcVazR9Hg6PU-U66aXlNFPIubLI,100000\nb55mBuv2zfo6BeGm_03GIfVR7eiUD9HvNLKQcErwfMM,100000\nsTGgxxTCiZvzPIFDhKDEIF6h4Tgx9gAtu8dh6P7FruQ,100000\n5BUcvGZ22WSvxYn3oLvPuwAf0ka5fWy7dySvA1e1Q7I,100000\nr97SMyj6zFJyJyJYYNlTEZixBKd5uphY7uOZpQE1lyY,100000\n55ZKCjJKEFXzAWyual9CKO75CWMS5-jgBrLWVAsE8Tg,100000\np0MCgwBLvO8r4DpW31dDJ0E9pVRBND79eO-0nZeCyGU,100000\n0xueTpDRFYCarfItXzG-RBGdif8hzR2UgY16WpJjLlk,100000\ntXZjyKczUe80wNEmUnwKScMghCJ7X-0rjkZ8f6zkgVo,100000\nABPXQ3UzGLenVZ1Vq77G27ZETEgxIBDjvwtxt0J7Hy4,100000\numyMMLIsHYPefks5fGhal2XtTIzJmYI-b3VTbtiHZO0,100000\nfCuC7AmG_kSADTZG13OSL5tKoYhkoXAjCOD0MHXQ-QM,100000\nRwanwykA8PUjubdNvyk8QGf6yNiTD77k2O43x4wDQ4g,100000\nl0YKhSrbG7aGx63d0qdxTPHEwG9dtZlJQpMLSHLu6xM,10
0000\ny0EaYTjGpFNkBXmY__yk2XYfJVSL3syiRHUQygY9Nq0,100000\n0kJ4SQWyxeOL_JAf0IByaCCVCyWryLhNibRm4ncCHnI,100000\nWNPgL6bj3_6YZ1dFViOj8YhaVRqphEyav7KkIbzDq2I,100000\nZzgI5A0fr9B-ryOAxNK8QW42Nm0H4DvnMsDrutdvq3Q,100000\nzMqR7fF9OtIxQo-qTYC6bIp3_NSttvaTM6w0AaFazDk,100000\npoWvKCvB1Utyaud1vDQ3271fdt0rtlkTrpRBwU02R_E,100000\nIgYM9m6MmNUarG4vYjqvSsr9Dw-xpepTElp8_aDSLJA,100000\nXeJ9tec0I9YVO0qtl8nXaW3_d4TTuxMes8gH1Je6MvY,100000\niAxWFAKjvtQceYHP4C9eF8itlCK7D_fmHnMWG45egQA,100000\nd_TJQHw-Rtiuy9SQ9JxzLtzpEQglCBLS2zXk069qnMU,100000\n07yiQCnZddNZsU8DMuiuQw3-Gnm2ixR1JLTDXmWl_So,100000\n6h25oYs4eqFi4wZF62N0serXExrIDEVcK-T6GBKhgZM,100000\nm5GU-OYyz13CdVqtpUMeIrXtVKdgQ6kfCmwolud8yiw,100000\nTA7J9eH2Dga5qtTKPnyEa8JWZ6OVosn_aufHlMeh8WM,100000\nsecbAR3WR-CMdpYc-Yr6_-qZyChYUGuza99f8It4RdA,100000\np95cgaFUfkHP8Z3CLh4OhVqRl1Ei_J8XBTQjNNHGiP8,100000\nbc0k3hjkC7fxbYJu2D_EIoKHBObNKjXQ8GLaIxu4lh4,100000\nBIcC1VnKu-1xfb7E-AAMMnx4DJeYeW5dnh1YrQRaYBw,100000\nXwfl80ZJy-ID3yjU72_QM-jr05MoGSn40RT1Mm_1srU,100000\ngGCzfdkfR1z-Dc0VHdW4l8-PDG49mQIeGe-XGtrRW1o,100000\nfXBOy33AydzrX5HnJ2Q_6AbYGpNn1lw99S8cxFsFY2s,100000\n8yLsfCpDCPo68HBkcwBwlHcPyMeH8o7wdQOJLv08xPY,100000\nrhs0ycajW96qljThp86I-OrrzpR05wlqRMyjDYbOsNw,100000\nQ47PAIDBZBLWv37Zyww9brhn90IWAWBdeygDtiV1Mcw,100000\n53DjCWpsoxDGbGhv7jKzJE8ITGd8e-z1riTkAP4gi8U,100000\nVnbq4gW5HRASqmfrAmqLN6vLd_L0-tqdocnKVdZLNmI,100000\nMGNe8vUZmjIgkD0kSTx_gzQMkc7fYzh2QJZX8ioltao,100000\nt5XS8PLSeKzL6yz5PiNhL1ck955WXXQjaW0AuW6sAzk,100000\n0PhQGxgxwozdTh1aY8_kbVquqr6JF09nZuaY4gdP76M,100000\n24YxjBulMcIy0rmC_DxejUeDlbHWXNopjMLu1ZBhnt4,100000\nL206EedYLJi9aw1XjdMzKcTHGmsa4qM-SWS2wKOOhJM,100000\n7C3V6EOSfHYXPjn0U0Fkp_fhEhQa6g74akCC97dLQ7w,100000\nZDojENNTPqKxgdURnrOPz8gULktlQlG3SEvjQdTWkP0,100000\nL4Hn741pXfn11vMiAQ2ustw0gAY2W6CdrNqepYbnVDA,100000\nOLXFYAUv1NSt8D9XwfAIECm646XBPeExEQwxIK7VqVU,100000\nhtteP26LEgvzoTV-WcR20NRzHQjpoB9njCmiZwDcL9w,100000\nXd1aO0BT9fRmvipFP9NgB89Wai2_27fZ7ZAz_NDeBeo,100000\nVPPnauxrBGzyVlb4aGu6nGrDmtXH6NbiI1tT6vRfGmg,100000\nTdA1iGWvDD6_JMMhK7wNtkSP_0AThQ3FVkjVkRzJZgw,100000\nZJdcjiOPXXS0T1_zgqAPWNlLpc36SJ5P3CvGHzQCoC4,100000\n-1U7D-l8DRgqi5nSrgNHH9wkMe6YpsREUv7MJNUZdws,100000\n0M3oMvC6T2Lo1nK-KAq60GPVLU_5tB30130nJ4IjLqU,100000\nNXULugRS3ty4rKIjZ8hBkViSECwHwBd4cObddrl_PvU,100000\nZ2b6fyu1DRlzGzuksiU12RqshrPYiIHpxcHM_y_VYhA,100000\nKMHXU7VIUTj37ffZ7iSNjNNku5u2F3yaQUDIYpA3AX4,100000\nRQDZjak3dEffaqsNFtWY5g5Y9dvL0UK38zu0um7ETek,100000\nJXEAbD62XHZdBJFFD31mkFChxyfhjYx13SSNw5gFWGI,100000\nTQK8fBibYFZAxbtk6f8gjIB_et2pAaWo280Sbyxu0N4,100000\nk5tQ8mtbCR3aCUUIa2aFFdQhTl6q6wgcdgxXpZ0WPbo,100000\nBIPk50bJpRUwq6Wvp3O0wl2BRYCkEt1mczwzOHWxsPI,100000\nJnZ2o-2TUgGWn2CyQEEAf8mziUJA0BQEcTH-fMfPARw,100000\no2xeJEezizR68ru1s4Jc82_UtVOuTJrbg8Zm1X8Yps8,100000\nwSzMKnXHy-BMGpUsq5mDjHatqRbFdLEmSr6RvWe9Pe0,100000\n0fvcYz7tLqs6p1P_yg60MStdvBHJnnbhmSPZqIolDFE,100000\n61w7MUruclzCpiaULu4EIryU990E9neE2sloyOLvbVg,100000\nq5-ZM9KXk8uYvZMIqIX-M8v0wEmoZygs5jGK49EOdXY,100000\nr7jogId7A_OlMLAdBmqp2xhzYzHCwbf35r63OEm_adU,100000\n1GHgJuZcl0ydhYz7wZBfnNj0aPOa5n3DudrtdxzW7KM,100000\nrXbHmWO3uPLqm4I8x26gOAiW25gOImxer9ywQME1S64,100000\nqUOhM-xRMksrk3jt5i8Wrlm068WU1vWwTbclFK_pI18,100000\nV20XLd_zPsxjf5WqAddH5cwAVeoi54mbNnP21JGxzbo,100000\nhVbSqm-1bnI-G9T3gGtiMmfK7Pf8i0zO23WUxhdMcAc,100000\n0IU1RkK7hiM7m9rmnOXwGu_02QhVpyAHwdKXVtswMzw,100000\nicEctMjF5gnqC_NU2HoRqNUUSEvpMIpNOqAjbv9S63g,100000\n1njYhiAgEWrihVvRkUH7manm_EpHydYzcQeOC9H3gGM,100000\n6JjvYVrgeiigZ-zQ_6dL5UgWxPvBIXIXwMa3blt4VmI,100000\nQQ68TqO4M6XN_8IjCUBXcF1XAMxQOOcPcuySz-RkOGY,100000\nMqMJDeF9T1_BBtiPjac4SvJMHXchzKLjrbMK8Q5NPIc,100000\necJJ7u5-wU-iP
YTiSbYtHp4x2V2Lbtbb3TtXIyOH1c4,100000\nWj1jovtgWKVVTJ6JGgLjwbb7PxLt3pSRUIXhmNftcAE,100000\neuz1BHurar-6EfFzuXHZytg2ftBVO2EMFD28ZQNS_1A,100000\nw_Ml47RQ3SmsxcDOUFhLXjSVLwWx83NoyKuJvBACga8,100000\nDfSYzyj0Q5zZYSo_g7x7jFrsE2U0J9zakWgKt5JZ2BI,100000\nC2q2BdDclzm8jBFLGSBJnufNByFhVGaVXbOGaqfcmm0,100000\n1sEWR5zgtUdmWJm_LfwcdeiOaqdokSJhSCKCWLQL5Ag,100000\n-bcEnnGAlRCy6dtOYxYFY9lICwLINWRRXXdANkjymic,100000\nE0HW-yksKqzEqF2HJ0Fx3EdhaWW0Knpi_LumHsB1hk8,100000\nA3mdjlqzXvVbS6kHtUbqe1Byoi-CnFrJG5ZKuef3Ogs,100000\nxbIBFBHahAxD0VyC2l0gH7yU4busVGFW6u8qJgX73gc,100000\n55xthd91GM9qp2wK9ep9H_ARy2GATioZpwSp3l-qLVY,100000\nFe8SXHp_WeCxJOoZWDRVbOcn3h_XZFAtv3QcsWqu2pY,100000\n56Y-NCaqmrn4hADi0KTJTLkJusJJxE3XVcgiotAnniI,100000\nlwjZT6foBqwqwCpsc7zdCGhPektWLF9yynvQ4VH6eeo,100000\n7c8vGdyTXmeOOVTZwo2J_L2ZDC62DW5Y2K8vTvtwktE,100000\nJFrHdlHFcHhdtd8BdXYGAQH5_lx4p6Z9QNxYYMWyt0w,100000\njHxuyJNsrzbPbVxNTazfqWscJPpU1AZvmbl1Oa6_HsI,100000\n8SOIdvoHMUx5PkjGLziEAdcFvTF_4s_DHFD3dKoLLhI,100000\nWmlCYiDhYiP7bqyndLaV19pJmpuB__0W5SfqMuWuU38,100000\n_Tjtr4UmyW9-2gClt5JzqYp0Od2Y7-LTAWfkzbP3ojQ,100000\n05lNyHqJVMfOYuFKuB6acm17roptrwJrwxWJUJ33DK8,100000\ngE-0R5ix6WQ1-765zivzI_R0YCZTisBsPgG0wdHS9ys,1000000\n9_666Wkk2GzL0LGd3xhb0jY7HqNy71BaV4sULQlJsBQ,1000000\nNEcMFcdT-aliSrCto8u5uZVfeaIfLyK0HPvOB3pUVDQ,1000000\nWxLW1MWiSWcuwxmvzokahENCbWurzvwcsukFTGrqwdw,1000000\nkzJ8p6DyaqWFdb62S1JTdUKqxo7cs3B_J5SPK4X2c0w,1000000\n\n\n"
  },
  {
    "path": "genesis_data/not_found.html",
    "content": "<html>\n    <head>\n        <title>Page not found.</title>\n        <link href=\"https://fonts.googleapis.com/css?family=Raleway:400,500,600\" rel=\"stylesheet\">\n        <style>\n            body {\n                font-family: \"Raleway\";\n                background: #203038;\n            }\n            .background-hash {\n                opacity: 0;\n                position: absolute;\n                color: rgba(255, 255, 255, 0.1);\n                font-weight: 400;\n            }\n\n            #main-text {\n                font-weight: 500;\n                text-align: center;\n                position: absolute;\n                bottom: 50vh;\n                left: 20vw;\n                right: 20vw;\n            }\n            #main {\n                font-size: 2.5em;\n                color: #03A9F4;\n            }\n            #sub {\n                font-size: 1.8em;\n                color: rgba(255, 255, 255, 1.0);\n            }\n        </style>\n    </head>\n    <body>\n    <div id=\"main-display\">\n        <div id=\"main-text\">\n            <p id=\"main\">This page cannot be found, yet.</p>\n            <p id=\"sub\">You might have to wait for it to be mined into a block.</p>    \n        </div>\n    </div>\n\n    <script>\n        document.addEventListener('DOMContentLoaded', function() {\n            var number_of_elements = 5;\n            var document_body = document.getElementsByTagName('body')[0];\n\n            for(i = 0; i < number_of_elements; i++) {\n                var div = document.createElement('div');\n                div.setAttribute('class', 'background-hash');\n                document.body.appendChild(div);\n                start_animation(div);\n            }\n        });\n\n        function start_animation(element) {\n\n            var body = document.getElementsByTagName('body')[0];\n            var width = body.getBoundingClientRect().width;\n            var height = body.getBoundingClientRect().height;\n\n            var text = generate_hash();\n            element.innerHTML = text;\n            var maxsize = 20;\n            var minsize = 12;\n            var size = (Math.random() * parseInt(maxsize) + parseInt(minsize));\n            element.style.fontSize = size + 'px';\n            var content_width = element.getBoundingClientRect().width;\n            var content_height = element.getBoundingClientRect().height;\n\n            var xpos = (Math.random() * (width - (content_width* 1.2))).toFixed();\n            var ypos = (Math.random() * (height - (content_height * 1.2))).toFixed();\n            var time = ((Math.random() * 10000) + 1500).toFixed();\n\n            element.style.left = xpos + 'px';\n            element.style.top = ypos + 'px';\n\n            fade_in(element, time).then(function() {\n                fade_out(element, time).then(function() {\n                    start_animation(element);\n                });\n            });\n\n        }\n\n        /**\n        * Fades an element in over a given window of time.\n        * @param {Object} [element] - The DOM element in which the transition applies to.\n        * @param {int} [time] - The amount of time for the transition to occur over in ms.\n        */\n        function fade_in(element, time) {\n            return new Promise((resolve, reject) => {\n                var time_per_increment = time / 100;\n\n                var fade_in = setInterval(function() {\n                    if (!element.style.opacity) {\n                        element.style.opacity = 0;\n  
                  }\n\n                    if (element.style.opacity > 0.95) {\n                        clearInterval(fade_in)\n                        resolve();\n                    } else {\n                        element.style.opacity = parseFloat(element.style.opacity) + 0.05;\n                    }\n                }, time_per_increment);\n            });\n        }\n\n        /**\n        * Fades an element out over a given window of time.\n        * @param {Object} [element] - The DOM element in which the transition applies to.\n        * @param {int} [time] - The amount of time for the transition to occur over in ms.\n        */\n        function fade_out(element, time) {\n            return new Promise((resolve, reject) => {\n                var time_per_increment = time / 100;\n\n                var fade_out = setInterval(function() {\n                    if (!element.style.opacity) {\n                        element.style.opacity = 1;\n                    }\n\n                    if (element.style.opacity < 0.05) {\n                        clearInterval(fade_out);\n                        resolve();\n                    } else {\n                        element.style.opacity = parseFloat(element.style.opacity) - 0.05;\n                    }\n                }, time_per_increment);\n            });\n        }\n\n        /**\n        * Generates a fake hash string.\n        */\n        function generate_hash() {\n            var array = new Uint8Array(32);\n            window.crypto.getRandomValues(array);\n            var string = String.fromCharCode.apply(null, new Uint8Array(array));\n\n            return base64_url_encode(string);\n        }\n\n        /**\n        * Encodes the input in base64 encoding.\n        * @param {(string|buffer)} [raw_data] - A raw string or buffer of bytes.\n        * @return {string} - Base64 encoded equivalent of the raw_data.\n        */\n        function base64_encode(raw_data) {\n            var encoded_data = btoa(raw_data);\n\n            return encoded_data;\n        }\n\n        /**\n        * Encodes the input in base64url encoding.\n        * @param {(string|buffer)} [raw_data] - A raw string or buffer of bytes.\n        * @return {string} - Base64url encoded equivalent of the raw_data.\n        */\n        function base64_url_encode(raw_data) {\n            var encoded_data = base64_encode(raw_data);\n            var safe_encoded = encoded_data.replace(/\\+/g, \"-\").replace(/\\//g, \"_\").replace(/\\=/g, \"\");\n\n            return safe_encoded\n        }\n\n\n\n    </script>\n\n    </body>\n</html>\n"
  },
  {
    "path": "http_iface_docs.md",
    "content": "# Arweave HTTP Interface Documentation\n\nYou can run this HTTP interface using Postman [![Postman](https://run.pstmn.io/button.svg)](https://app.getpostman.com/run-collection/8af0090f2db84e979b69) or you can find the documentation [here](https://documenter.getpostman.com/view/5500657/RWgm2g1r).\n\n\n## GET network information\n\nRetrieve the current network information from a specific node.\n\n- **URL**\n  `/info`\n- **Method**\n  GET\n\n#### Example Response\n\nA JSON array containing the network information for the current node.\n\n```javascript\n{\n  \"network\": \"arweave.N.1\",\n  \"version\": \"3\",\n  \"height\": \"2956\",\n  \"blocks\": \"3495\",\n  \"peers\": \"12\"\n}\n```\n\n#### JavaScript Example Request\n\n```javascript\nvar node = 'http://127.0.0.1:1984';\nvar path = '/info';\nvar url = node + path;\nvar xhr = new XMLHttpRequest();\n\nxhr.open('GET', url);\nxhr.onreadystatechange = function() {\n  if(xhr.readystate == 4 && xhr.status == 200) {\n    // Do something.\n  }\n};\nxhr.send();\n```\n\n\n\n\n\n## GET full transaction via ID\n\nRetrieve a JSON transaction record via the specified ID.\n\n- **URL**\n  `/tx/[transaction_id]`\n- **Method**\n  GET\n- **URL Parameters**\n  [transaction_id] : Base64 encoded ID associated with the transaction\n\n\n#### Example Response\n\nA JSON transaction record.\n\n```javascript\n{\n  \"id\": \"VvNF3aLS28MXD_o4Lv0lF9_WcxMibFOp166qDqC1Hlw\",\n  \"last_tx\": \"bUfaJN-KKS1LRh_DlJv4ff1gmdbHP4io-J9x7cLY5is\",\n  \"owner\": \"1Q7RfP...J2x0xc\",\n  \"tags\": [],\n  \"target\": \"\",\n  \"quantity\": \"0\",\n  \"data\": \"3DduMPkwLkE0LjIxM9o\",\n  \"reward\": \"1966476441\",\n  \"signature\": \"RwBICn...Rxqi54\"\n}\n```\n\n\n## GET additional info about the transaction via ID\n\n- **URL**\n  `/tx/[transaction_id]/status`\n- **Method**\n  GET\n- **URL Parameters**\n  [transaction_id] : base64url encoded ID associated with the transaction\n\n\n#### Example Response\n\n```javascript\n{\"block_indep_hash\": \"KCdtB29b5V0rz2hX_sSGfEd5Fw7iTEiuXp5M34dWPEIdhxPqf3rsNyRFUznAhDzb\",\"block_height\":10,\"number_of_confirmations\":3}\n```\n#### JavaScript Example Request\n\n```javascript\nvar node = 'http://127.0.0.1:1984';\nvar path = '/tx/VvNF3aLS28MXD_o4Lv0lF9_WcxMibFOp166qDqC1Hlw';\nvar url = node + path;\nvar xhr = new XMLHttpRequest();\n\nxhr.open('GET', url);\nxhr.onreadystatechange = function() {\n  if(xhr.readystate == 4 && xhr.status == 200) {\n    // Do something.\n  }\n};\nxhr.send();\n```\n\n\n\n\n\n## GET specific transaction fields via ID\n\nRetrieve a string of the requested field for a given transaction.\n\n- **URL**\n  `/tx/[transaction_id]/[field]`\n- **Method**\n  GET\n- **URL Parameters**\n  [transaction_id] : Base64url encoded ID associated with the transaction\n  [field] : A string containing the name of the data field being requested\n- **Fields**\n  id | last_tx | owner | target | quantity | data | reward | signature\n\n\n#### Example Response\n\nA string containing the requested field.\n\n```javascript\n\"bUfaJN-KKS1LRh_DlJv4ff1gmdbHP4io-J9x7cLY5is\"\n```\n\n\n#### JavaScript Example Request\n\n```javascript\nvar node = 'http://127.0.0.1:1984';\nvar path = '/tx/VvNF3aLS28MXD_o4Lv0lF9_WcxMibFOp166qDqC1Hlw/last_tx';\nvar url = node + path;\nvar xhr = new XMLHttpRequest();\n\nxhr.open('GET', url);\nxhr.onreadystatechange = function() {\n  if(xhr.readystate == 4 && xhr.status == 200) {\n    // Do something.\n  }\n};\nxhr.send();\n```\n\n\n\n\n\n\n## GET transaction body as HTML via ID\n\nRetrieve the data segment of the 
transaction body decoded from base64url encoding.\nIf the transaction was an archived website then the result will be browser rendererable HTML.\n\n- **URL**\n  `/tx/[transaction_id]/data.html`\n- **Method**\n  GET\n- **URL Parameters**\n  [transaction_id] : Base64url encoded ID associated with the transaction\n\n\n#### Example Response\n\nA string containing the requested field.\n\n```javascript\n\"Hello World\"\n```\n\n\n#### JavaScript Example Request\n\n```javascript\nvar node = 'http://127.0.0.1:1984';\nvar path = '/tx/B7j_bkDICQyl_y_hBM68zS6-p8-XiFCUmEBaXRroFTM/data.html'\nvar url = node + path;\nvar xhr = new XMLHttpRequest();\n\nxhr.open('GET', url);\nxhr.onreadystatechange = function() {\n  if(xhr.readystate == 4 && xhr.status == 200) {\n    // Do something.\n  }\n};\nxhr.send();\n```\n\n\n\n\n\n## GET estimated transaction price\n\nReturns an estimated cost for a transaction of the given size.\nThe returned amount is in winston (the smallest division of AR, 1 AR = 1000000000000 winston).\n\nThe endpoint is pessimistic, it reports the price as if the network difficulty was smaller by one, to account for the possible difficulty change.\n\n- **URL**\n  `/price/[byte_size]`\n- **Method**\n  GET\n- **URL Parameters**\n  [byte_size] : The size of the transaction's data field in bytes. For financial transactions without associated data, this should be zero.\n\n\n#### Example Response\n\nA string containing the estimated cost of the transaction in Winston.\n\n```javascript\n\"1896296296\"\n```\n\n\n#### JavaScript Example Request\n\n```javascript\nvar node = 'http://127.0.0.1:1984';\nvar path = '/price/2048';\nvar url = node + path;\nvar xhr = new XMLHttpRequest();\n\nxhr.open('GET', url);\nxhr.onreadystatechange = function() {\n  if(xhr.readystate == 4 && xhr.status == 200) {\n    // Do something.\n  }\n};\nxhr.send();\n```\n\n\n\n\n\n\n## GET block via ID\n\nRetrieve a JSON array representing the contents of the block specified via the ID.\n\n- **URL**\n  `/block/hash/[block_id]`\n- **Method**\n  GET\n- **URL Parameters**\n  [block_id] : Base64url encoded ID associated with the block\n\n\n#### Example Response\n\nA JSON array detailing the block.\n\n```javascript\n{\n  \"nonce\": \"c7V-8dLmmqo\",\n  \"previous_block\": \"yeCiFpWcguWtWRJnJ_XOKhQXw6xtiOHh-rAw-RjX0YE\",\n  \"timestamp\": 1517563547,\n  \"last_retarget\": 1517563547,\n  \"diff\": 8,\n  \"height\": 30,\n  \"hash\": \"-3-oyxTcYAgbbNoFyDz8hqs7KCJHI4qb4VdER9Jotbs\",\n  \"indep_hash\": \"oyxTcYAgbbNoFyDz8hqs7KCJHI4qb4VdER9Jotbs\",\n  \"txs\": [...],\n  \"hash_list\": [...],\n  \"wallet_list\": [...],\n  \"reward_addr\": \"unclaimed\"\n}\n```\n\n\n#### JavaScript Example Request\n\n```javascript\nvar node = 'http://127.0.0.1:1984';\nvar path = '/block/hash/oyxTcYAgbbNoFyDz8hqs7KCJHI4qb4VdER9Jotbs';//Use \"indep_hash\" above,not hash\nvar url = node + path;\nvar xhr = new XMLHttpRequest();\n\nxhr.open('GET', url);\nxhr.onreadystatechange = function() {\n  if(xhr.readystate == 4 && xhr.status == 200) {\n    // Do something.\n  }\n};\nxhr.send();\n```\n\n\n\n\n\n\n\n## GET block via height\n\nRetrieve a JSON array representing the contents of the block specified via the block height.\n\n- **URL**\n  `/block/height/[block_height]\n- **Method**\n  GET\n- **URL Parameters**\n  [block_height] : The height at which the block is being requested for\n\n\n#### Example Response\n\nA JSON array detailing the block.\n\n```javascript\n{\n  \"nonce\": \"c7V-8dLmmqo\",\n  \"previous_block\": \"yeCiFpWcguWtWRJnJ_XOKhQXw6xtiOHh-rAw-RjX0YE\",\n  
\"timestamp\": 1517563547,\n  \"last_retarget\": 1517563547,\n  \"diff\": 8,\n  \"height\": 30,\n  \"hash\": \"-3-oyxTcYAgbbNoFyDz8hqs7KCJHI4qb4VdER9Jotbs\",\n  \"indep_hash\": \"oyxTcYAgbbNoFyDz8hqs7KCJHI4qb4VdER9Jotbs\",\n  \"txs\": [...],\n  \"hash_list\": [...],\n  \"wallet_list\": [...],\n  \"reward_addr\": \"unclaimed\"\n}\n```\n\n\n#### JavaScript Example Request\n\n```javascript\nvar node = 'http://127.0.0.1:1984';\nvar path = '/block/height/1101';\nvar url = node + path;\nvar xhr = new XMLHttpRequest();\n\nxhr.open('GET', url);\nxhr.onreadystatechange = function() {\n  if(xhr.readystate == 4 && xhr.status == 200) {\n    // Do something.\n  }\n};\nxhr.send();\n```\n\n\n\n\n\n\n\n\n## GET current block\n\nRetrieve a JSON array representing the contents of the current block, the network head.\n\n- **URL**\n  `/current_block`\n- **Method**\n  GET\n\n\n#### Example Response\n\nA JSON array detailing the block.\n\n```javascript\n{\n  \"nonce\": \"rihlezm7XAc\",\n  \"previous_block\": \"pc-0MvV6lQOWt0O2L3VcSheOfIdymntOBVcloERVbQQ\",\n  \"timestamp\": 1517564276,\n  \"last_retarget\": 1517564044,\n  \"diff\": 24,\n  \"height\": 166,\n  \"hash\": \"mGe34a3DcT8HLE0BfaME38XUelENSjPQA-vcYJG6PGs\",\n  \"indep_hash\": \"ntoWN8DMFSuxPsdF8CelZqP03Gr4GahMBXX8ZkyPA3U\",\n  \"txs\": [...],\n  \"hash_list\": [...],\n  \"wallet_list\": [...],\n  \"reward_addr\": \"unclaimed\"\n}\n```\n\n\n#### JavaScript Example Request\n\n```javascript\nvar node = 'http://127.0.0.1:1984';\nvar path = '/current_block';\nvar url = node + path;\nvar xhr = new XMLHttpRequest();\n\nxhr.open('GET', url);\nxhr.onreadystatechange = function() {\n  if(xhr.readystate == 4 && xhr.status == 200) {\n    // Do something.\n  }\n};\nxhr.send();\n```\n\n\n\n\n\n\n\n## GET wallet balance via address\n\nRetrieve the balance of the wallet specified via the address.\nThe returned amount is in winston (the smallest division of AR, 1 AR = 1000000000000 winston).\n\n- **URL**\n  `/wallet/[wallet_address]/balance`\n- **Method**\n  GET\n- **URL Parameters**\n  [wallet_address] : A base64url encoded SHA256 hash of the raw RSA modulus.\n\n\n#### Example Response\n\nA string containing the balance of the wallet.\n\n```javascript\n\"1249611338095239\"\n```\n\n\n#### JavaScript Example Request\n\n```javascript\nvar node = 'http://127.0.0.1:1984';\nvar path = '/wallet/VukPk7P3qXAS2Q76ejTwC6Y_U_bMl_z6mgLvgSUJIzE/balance';\nvar url = node + path;\nvar xhr = new XMLHttpRequest();\n\nxhr.open('GET', url);\nxhr.onreadystatechange = function() {\n  if(xhr.readystate == 4 && xhr.status == 200) {\n    // Do something.\n  }\n};\nxhr.send();\n```\n\n\n\n\n\n\n\n\n## GET last transaction via address\n\nRetrieve the ID of the last transaction made by the given address.\n\n- **URL**\n  `/wallet/[wallet_address]/last_tx`\n- **Method**\n  GET\n- **URL Parameters**\n  [wallet_address] : A Base64 encoded SHA256 hash of the public key.\n\n\n#### Example Response\n\nA string containing the ID of the last transaction made by the given address.\n\n```javascript\n\"bUfaJN-KKS1LRh_DlJv4ff1gmdbHP4io-J9x7cLY5is\"\n```\n\n\n#### JavaScript Example Request\n\n```javascript\nvar node = 'http://127.0.0.1:1984';\nvar path = '/wallet/VukPk7P3qXAS2Q76ejTwC6Y_U_bMl_z6mgLvgSUJIzE/last_tx';\nvar url = node + path;\nvar xhr = new XMLHttpRequest();\n\nxhr.open('GET', url);\nxhr.onreadystatechange = function() {\n  if(xhr.readystate == 4 && xhr.status == 200) {\n    // Do something.\n  }\n};\nxhr.send();\n```\n\n\n## GET transactions made by the wallet\n\nRetrieve identifiers of 
transactions made by the given wallet.\n\n- **URL**\n  `/wallet/[wallet_address]/txs/[earliest_tx]`\n- **Method**\n  GET\n- **URL Parameters**\n\n  - [wallet_address] : A Base64 encoded SHA256 hash of the public key.\n  - [earliest_tx] (optional) : A Base64 encoded ID of the earliest transaction to fetch. If not specified, all transactions made by the given wallet are returned.\n\n\n#### Example Response\n\nA JSON list of base64url encoded transaction identifiers.\n\n```javascript\n[\"bUfaJN-KKS1LRh_DlJv4ff1gmdbHP4io-J9x7cLY5is\",\"b23...xg\"]\n```\n\n\n#### JavaScript Example Request\n\n```javascript\nvar node = 'http://127.0.0.1:1984';\nvar path = '/wallet/VukPk7P3qXAS2Q76ejTwC6Y_U_bMl_z6mgLvgSUJIzE/txs/bUfaJN-KKS1LRh_DlJv4ff1gmdbHP4io-J9x7cLY5is';\nvar url = node + path;\nvar xhr = new XMLHttpRequest();\n\nxhr.open('GET', url);\nxhr.onreadystatechange = function() {\n  if(xhr.readystate == 4 && xhr.status == 200) {\n    // Do something.\n  }\n};\nxhr.send();\n```\n\n\n## Get transactions sent to the given address\n\nRetrieve identifiers of transfer transactions depositing to the given wallet. The index is partial - only transactions known by the given node are returned.\n\n- **URL**\n  `/wallet/[wallet_address]/deposits/[earliest_deposit]`\n- **Method**\n  GET\n- **URL Parameters**\n\n  - [wallet_address] : A Base64 encoded SHA256 hash of the public key.\n  - [earliest_deposit] (optional) : A Base64 encoded ID of the earliest transaction to fetch. If not specified, all deposits known by the node are fetched.\n\n\n#### Example Response\n\nA JSON list of base64url encoded transaction identifiers.\n\n```javascript\n[\"bUfaJN-KKS1LRh_DlJv4ff1gmdbHP4io-J9x7cLY5is\",\"b23...xg\"]\n```\n\n## GET nodes peer list  \n\nRetrieve the list of peers held by the contacted node.\n\n- **URL**\n  `/peers`\n- **Method**\n  GET\n\n\n#### Example Response\n\nA list containing the IP addresses of all of the nodes peers.\n\n```javascript\n[\n  \"127.0.0.1:1985\",\n  \"127.0.0.1.:1986\"\n]\n```\n\n\n#### JavaScript Example Request\n\n```javascript\nvar node = 'http://127.0.0.1:1984';\nvar path = '/peers';\nvar url = node + path;\nvar xhr = new XMLHttpRequest();\n\nxhr.open('GET', url);\nxhr.onreadystatechange = function() {\n  if(xhr.readystate == 4 && xhr.status == 200) {\n    // Do something.\n  }\n};\nxhr.send();\n```\n\n\n\n\n\n\n\n\n## POST transaction to network\n\nPost a transaction to the network.\n\n- **URL**\n  `/tx`\n\n- **Method**\n  POST\n\n\n#### Data Parameter (Post body)\n\n```javascript\n{\n    \"last_tx\": \"\",  // Base64 encoded ID of the last transaction made by this wallet. Empty if this is the first transaction.\n    \"owner\": \"\",    // The public key making this transaction.\n    \"target\": \"\",   // Base64 encoded SHA256 hash of recipient's public key. Empty for data transactions.\n    \"quantity\": \"\", // Decimal string representation of the amount of sent AR in winston. Empty for data transactions.\n    \"data\": \"\",     // The Base64 encoded data being store in the transaction. 
Empty for transfer transactions.\n    \"reward\": \"\",   // Decimal string representation of the mining reward AR amount in winston.\n    \"signature\": \"\" // Base64 encoded signature of the transaction\n}\n```\n\n#### JavaScript Example Request\n\n```javascript\nvar node = 'http://127.0.0.1:1984';\nvar path = '/tx';\nvar url = node + path;\nvar xhr = new XMLHttpRequest();\nvar post =\n    {\n      \"id\": \"VvNF3aLS28MXD_o4Lv0lF9_WcxMibFOp166qDqC1Hlw\",\n      \"last_tx\": \"bUfaJN-KKS1LRh_DlJv4ff1gmdbHP4io-J9x7cLY5is\",\n      \"owner\": \"1Q7RfP...J2x0xc\",\n      \"tags\": [],\n      \"target\": \"\",\n      \"quantity\": \"0\",\n      \"data\": \"3DduMPkwLkE0LjIxM9o\",\n      \"reward\": \"1966476441\",\n      \"signature\": \"RwBICn...Rxqi54\"\n  };\n\nxhr.open('POST', url);\nxhr.onreadystatechange = function() {\n  if(xhr.readystate == 4 && xhr.status == 200) {\n    // Do something.\n  }\n};\nxhr.send(post);\n```\n\n\n\n> Please note that in the JSON transaction records all winston value fields (quantity and reward) are strings. This is to allow for interoperability between environments that do not accommodate arbitrary-precision arithmetic. JavaScript for instance stores all numbers as double precision floating point values and as such cannot natively express the integer number of winston. Providing these values as strings allows them to be directly loaded into most 'bignum' libraries.\n\n\n\n\n\n# Contact\n\nIf you have questions or comments on the Arweave HTTP interface you can get in touch by\nfinding us on [Twitter](https://twitter.com/ArweaveTeam/), [Reddit](https://www.reddit.com/r/arweave), [Discord](https://discord.gg/2ZpV8nM) or by emailing us at team@arweave.org.\n\n# License\nThe Arweave project is released under GNU General Public License v2.0.\nSee [LICENSE](LICENSE.md) for full license conditions.\n"
  },
  {
    "path": "http_post_unsigned_tx_docs.md",
    "content": "# Internal HTTP API for generating wallets and posting unsigned transactions\n\n## **Warning** only use it if you really really know what you are doing.\n\nThese HTTP endpoints are only available if the `internal_api_secret` startup option is set when `arweave-server` is started.\n\n## Generate a wallet and receive its access code\n\n- **URL**\n  `/wallet`\n\n- **Method**\n  POST\n\n- **Request Headers**\n    * `X-Internal-Api-Secret` : must match `internal_api_secret`\n\n#### Example Response\n\nAn access code which can be used to sign transactions via `POST /unsigned_tx`.\n\n```javascript\n{\"wallet_access_code\":\"UEhkVh0LBqfIj60-EB-yaDSrMpR2_EytWrY0bGJc_AZaiITJ4PrzRZ_xaEH5KBD4\"}\n```\n\n## POST unsigned transaction to the network\n\nPost a transaction to be signed and sent to the network.\n\n- **URL**\n  `/unsigned_tx`\n\n- **Method**\n  POST\n\n- **Request Headers**\n   * `X-Internal-Api-Secret` : must match `internal_api_secret`\n\n#### Data Parameter (Post body)\n\n```javascript\n{\n    \"last_tx\": \"\",            // Base64 encoded ID of the last transaction made by this wallet.\n    \"target\": \"\",             // Base64 encoded SHA256 hash of recipient's public key. Empty for data transactions.\n    \"quantity\": \"\",           // Decimal string representation of the amount of sent AR in winston. Empty for data transactions.\n    \"data\": \"\",               // The Base64 encoded data being store in the transaction. Empty for transfer transactions.\n    \"reward\": \"\",             // Decimal string representation of the mining reward AR amount in winston.\n    \"wallet_access_code\": \"\"  // The wallet access code as returned by the POST /wallet endpoint.\n}\n```\n\n\n#### Example Response\n\nA transaction ID (Base64 encoded hash of the signature).\n\n```javascript\n{\"id\": \"F8ITA-zojpRtUNnULnKasJCHL46rcqQBpSyqBekWnF30S7GCd58LcIcOXhYnYL6U\"}\n```\n"
  },
  {
    "path": "localnet_snapshot/ar_tx_blacklist/ar_tx_blacklist",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:c2a7cfde6e6f8a241eb3437fbaf693fbd70972a87d6568a738779d25d9ded802\nsize 5464\n"
  },
  {
    "path": "localnet_snapshot/ar_tx_blacklist/ar_tx_blacklist_offsets",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:c2a7cfde6e6f8a241eb3437fbaf693fbd70972a87d6568a738779d25d9ded802\nsize 5464\n"
  },
  {
    "path": "localnet_snapshot/ar_tx_blacklist/ar_tx_blacklist_pending_data",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:c2a7cfde6e6f8a241eb3437fbaf693fbd70972a87d6568a738779d25d9ded802\nsize 5464\n"
  },
  {
    "path": "localnet_snapshot/ar_tx_blacklist/ar_tx_blacklist_pending_headers",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:c2a7cfde6e6f8a241eb3437fbaf693fbd70972a87d6568a738779d25d9ded802\nsize 5464\n"
  },
  {
    "path": "localnet_snapshot/ar_tx_blacklist/ar_tx_blacklist_pending_restore_headers",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:c2a7cfde6e6f8a241eb3437fbaf693fbd70972a87d6568a738779d25d9ded802\nsize 5464\n"
  },
  {
    "path": "localnet_snapshot/data_sync_state",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:8d97533661cc29384cc0be8981f8ce15a4f2c54dfb8d3cef3b65bc548063e8b0\nsize 700090\n"
  },
  {
    "path": "localnet_snapshot/header_sync_state",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:479db92a860a592e92fd62762a0f6de494595fb84f660bcbeea343bfd29bcad0\nsize 700042\n"
  },
  {
    "path": "localnet_snapshot/mempool",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:b29d129a466afbe559b487da949f24085df7d01826113ddb8888ac8c8b4ec2a5\nsize 14\n"
  },
  {
    "path": "localnet_snapshot/peers",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:1dbc103f784fa48be48b9d983a0f8372f901890fd5598f64c8b9781d9b0ee678\nsize 33460\n"
  },
  {
    "path": "localnet_snapshot/rocksdb/account_tree_db/000009.sst",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:ebe6813b0fbf07c577beb5dd15927c9f3bb2e620c2e1676333c84b612e952494\nsize 61230933\n"
  },
  {
    "path": "localnet_snapshot/rocksdb/account_tree_db/CURRENT",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:0861415cada612ea5834d56e2cf1055d3e63979b69eb71d32ae9ae394d8306cd\nsize 16\n"
  },
  {
    "path": "localnet_snapshot/rocksdb/account_tree_db/IDENTITY",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:88a6151884275a8da7ba70344f03992cec60c470f6d7624694554faba5a42773\nsize 36\n"
  },
  {
    "path": "localnet_snapshot/rocksdb/account_tree_db/LOCK",
    "content": ""
  },
  {
    "path": "localnet_snapshot/rocksdb/account_tree_db/MANIFEST-000004",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:fadf666120762b32b66cc7c431dc2764664687a1339987cbaf70a16b275d7fc7\nsize 231\n"
  },
  {
    "path": "localnet_snapshot/rocksdb/account_tree_db/OPTIONS-000007",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:6219c1ad01d0668b3b27ce37250aa4189f76ca48d1a7ce6b33a45034c65cdad7\nsize 6270\n"
  },
  {
    "path": "localnet_snapshot/rocksdb/ar_storage_block_db/CURRENT",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:0861415cada612ea5834d56e2cf1055d3e63979b69eb71d32ae9ae394d8306cd\nsize 16\n"
  },
  {
    "path": "localnet_snapshot/rocksdb/ar_storage_block_db/IDENTITY",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:d57680e02d7a8475df317b1c8d0def94d5d1b1208494c89afebf6ab18e4a6e91\nsize 36\n"
  },
  {
    "path": "localnet_snapshot/rocksdb/ar_storage_block_db/LOCK",
    "content": ""
  },
  {
    "path": "localnet_snapshot/rocksdb/ar_storage_block_db/MANIFEST-000004",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:e523d8c767e2a5228ebfd0799b0091933e16f003db341e26db9a9c02302f7a7f\nsize 57\n"
  },
  {
    "path": "localnet_snapshot/rocksdb/ar_storage_block_db/OPTIONS-000007",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:eea8aec5435395643c5e1b608482d1d2f06dee1ea91c1f474347c2864faac3a4\nsize 6274\n"
  },
  {
    "path": "localnet_snapshot/rocksdb/ar_storage_tx_confirmation_db/CURRENT",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:0861415cada612ea5834d56e2cf1055d3e63979b69eb71d32ae9ae394d8306cd\nsize 16\n"
  },
  {
    "path": "localnet_snapshot/rocksdb/ar_storage_tx_confirmation_db/IDENTITY",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:2befe560d21c205b2676ebf59ad5ec735d4d825f3fda70b0fe7fef5146b47864\nsize 36\n"
  },
  {
    "path": "localnet_snapshot/rocksdb/ar_storage_tx_confirmation_db/LOCK",
    "content": ""
  },
  {
    "path": "localnet_snapshot/rocksdb/ar_storage_tx_confirmation_db/MANIFEST-000004",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:e523d8c767e2a5228ebfd0799b0091933e16f003db341e26db9a9c02302f7a7f\nsize 57\n"
  },
  {
    "path": "localnet_snapshot/rocksdb/ar_storage_tx_confirmation_db/OPTIONS-000007",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:848a834e65c92b857647b77f48cc72d749c5ba9f0d848ebd29cee6c7faa0c6af\nsize 6284\n"
  },
  {
    "path": "localnet_snapshot/rocksdb/ar_storage_tx_db/CURRENT",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:0861415cada612ea5834d56e2cf1055d3e63979b69eb71d32ae9ae394d8306cd\nsize 16\n"
  },
  {
    "path": "localnet_snapshot/rocksdb/ar_storage_tx_db/IDENTITY",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:9f6e524b72f97aa94cc2b59980587eed60e330f8c6b66531b52c93f16ba18b82\nsize 36\n"
  },
  {
    "path": "localnet_snapshot/rocksdb/ar_storage_tx_db/LOCK",
    "content": ""
  },
  {
    "path": "localnet_snapshot/rocksdb/ar_storage_tx_db/MANIFEST-000004",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:e523d8c767e2a5228ebfd0799b0091933e16f003db341e26db9a9c02302f7a7f\nsize 57\n"
  },
  {
    "path": "localnet_snapshot/rocksdb/ar_storage_tx_db/OPTIONS-000007",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:95a37a08373ab6e525c8a19339478495631b6702245a84b29d2e43b6ca8889c6\nsize 6271\n"
  },
  {
    "path": "localnet_snapshot/rocksdb/block_index_db/000009.sst",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:7432d330dffc80de80aeccacfd4463b33e71d1036b530a019fa6fd8252d1acc1\nsize 51436345\n"
  },
  {
    "path": "localnet_snapshot/rocksdb/block_index_db/000011.sst",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:e283f0cb4d4beeb1e48b39385c2cf5b67ce9bc6fd80b27e99b7046af0f92cafe\nsize 54196169\n"
  },
  {
    "path": "localnet_snapshot/rocksdb/block_index_db/000013.sst",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:4753761abf73e35d6128b6fce94bbb92b52c27ae23098aac9852e7b2939f476f\nsize 54467463\n"
  },
  {
    "path": "localnet_snapshot/rocksdb/block_index_db/000015.sst",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:f96e0c901344179e4ba749c5367e2a2ef4245dfb48629aeeaa671253df432110\nsize 54400486\n"
  },
  {
    "path": "localnet_snapshot/rocksdb/block_index_db/000017.sst",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:d31f685265ced9aeb4ed7a8370638b3a616d4e668abe3d6a72a5c91886eb8538\nsize 54383883\n"
  },
  {
    "path": "localnet_snapshot/rocksdb/block_index_db/000019.sst",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:b7e85ee62ffddaa2ee725be73b0bfb45ef566cd5b94022ab272a704edb63b2f7\nsize 53604952\n"
  },
  {
    "path": "localnet_snapshot/rocksdb/block_index_db/CURRENT",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:0861415cada612ea5834d56e2cf1055d3e63979b69eb71d32ae9ae394d8306cd\nsize 16\n"
  },
  {
    "path": "localnet_snapshot/rocksdb/block_index_db/IDENTITY",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:ac2d876bd5c5af44e498329d3ecd149908db442214b66a2d6673f8d474d5bdc0\nsize 36\n"
  },
  {
    "path": "localnet_snapshot/rocksdb/block_index_db/LOCK",
    "content": ""
  },
  {
    "path": "localnet_snapshot/rocksdb/block_index_db/MANIFEST-000004",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:feb03baab5b5ea6d5978741dd40403cb012a2b218d4058b058992c8bef1890fa\nsize 1300\n"
  },
  {
    "path": "localnet_snapshot/rocksdb/block_index_db/OPTIONS-000007",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:618e1352efb16fc5fea6f963556b5c2cb9bc8077296349910526b2bdfc8d5ae5\nsize 6269\n"
  },
  {
    "path": "localnet_snapshot/rocksdb/block_time_history_db/CURRENT",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:0861415cada612ea5834d56e2cf1055d3e63979b69eb71d32ae9ae394d8306cd\nsize 16\n"
  },
  {
    "path": "localnet_snapshot/rocksdb/block_time_history_db/IDENTITY",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:6210eeb4c995dbafecba96b593bd358bea1d8e634fcb421a8a768dc41b60f5d3\nsize 36\n"
  },
  {
    "path": "localnet_snapshot/rocksdb/block_time_history_db/LOCK",
    "content": ""
  },
  {
    "path": "localnet_snapshot/rocksdb/block_time_history_db/MANIFEST-000004",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:e523d8c767e2a5228ebfd0799b0091933e16f003db341e26db9a9c02302f7a7f\nsize 57\n"
  },
  {
    "path": "localnet_snapshot/rocksdb/block_time_history_db/OPTIONS-000007",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:8fae387eaa1b0bb58df6b48f4b881f83a4b809caa7e00b7f076af07d13e42803\nsize 6276\n"
  },
  {
    "path": "localnet_snapshot/rocksdb/reward_history_db/CURRENT",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:0861415cada612ea5834d56e2cf1055d3e63979b69eb71d32ae9ae394d8306cd\nsize 16\n"
  },
  {
    "path": "localnet_snapshot/rocksdb/reward_history_db/IDENTITY",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:551f7815cfcd974f1a8e3d2122ca307a3975bd34665ea3cc4c46253f8c8c3288\nsize 36\n"
  },
  {
    "path": "localnet_snapshot/rocksdb/reward_history_db/LOCK",
    "content": ""
  },
  {
    "path": "localnet_snapshot/rocksdb/reward_history_db/MANIFEST-000004",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:e523d8c767e2a5228ebfd0799b0091933e16f003db341e26db9a9c02302f7a7f\nsize 57\n"
  },
  {
    "path": "localnet_snapshot/rocksdb/reward_history_db/OPTIONS-000007",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:b865b50830e8e1c80dc982ed9a8abc74b3d38636d1540959f211ebfde1d720f8\nsize 6272\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/-B7wF8TF5AodemKM2UjeFySwA_-Q12Ai8z9FSqgIEyA.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:94ffc9b1ab8a0cf2617b0b500f12eeb5384d40309e0c9f48ebce6fe0e625daec\nsize 1707\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/0KMeq830vwvxUUM7RLCwE0ve4i0h_XHugbUTCkPNH-M.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:d74073902bb101a48df99e91b3e51e3aa52e40c3c4418bbc636bcfd05eb34041\nsize 8120\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/1QGjyW1AEFlrFAs6VtUcmwOVOEZJjxaBR_z61W9mftI.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:7c4046462367bf38fec408330919bcc2bac870c89b1f2a80c064988d14581a08\nsize 41806\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/1VknqhhAXRQ6hzeZL-IMVBznTFCdiWcwlXhzpLKS8Zk.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:78540758f8d59397f42832666809e9adc102f23a7aacc1a3c5277ea69347676f\nsize 51670\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/1fzKf0Ygc-z3ejpZ1ZLOiNBYDRzViGRdPLtUqRS1nKY.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:b207347d0bdfeb570c6a90cf1478efc0ee27191e50e384899d8ae6b589014759\nsize 801045\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/2dxNaIAvkAuL_N2qpTGSl7d7rU3Hu7d4l4IkYb9jgDU.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:1444aeea018aa54ae25a5ab789c7261831ab600258a45d6b597b2a9aca4997a0\nsize 296983\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/35wYULjhQBiTFh9u-PJz6ki0v7Zi1whk_AhowUt99Ac.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:735798411ac703ff09ef7c5aec0b18366aa8a7cf2d325e7879731ad8e6d391a4\nsize 1159364\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/3DSCNJ5H9Hpyy7auT9qG5vom9jHBrCgjs48w_R6iSJg.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:c9bbf02788b5cd8508d0f685844b7ca6566f6695838c32808d03981b02b77480\nsize 132658\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/4QcodvSlgZnuz5uWGmBARsGUJ5XaYORIO5jYM1dTucI.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:3a546d575d77afa5a6d9209a69db0293ff943ab4e6d726b151707a749c82be77\nsize 664907\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/4tlIV1x4YRWtNMut11ox9SS-lWt3xIzcXnrBBbNxGYs.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:f747c57ca6b2c028f39d54ba6233c5d3d665b3389df53ff810b187978f882be1\nsize 27894\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/5iK4mPnFqGdUxpiZmGtTbj7xoSC2una7sjsbUyZkOmM.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:5c09b493861a658759bea5a6df826736a3f028d177d033e5a4db1eb725183230\nsize 1663\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/7M4KyVB4Wr-Le3Knb7JExgnsXTtG7718JIlhVBNstlE.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:fc68af7f1348231e72f9e5068c6f44e7cb0199ecbe500b61273e7e856bf0671f\nsize 2956\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/7fat_nqzDJCTfJMqyEpOcavt1cZNM-tfSzASJd0wrHo.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:f61fe12e2e9fdb2e30db3b08b782760b855ac7e0b515dcbe4442ab591c6c369e\nsize 1708\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/8CPVZq-zPdMQ2to1P91vl6XBXyL7sLH8-vNclnOCug8.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:0cd0c3c13f98f0825f57e63771d2fafb4ded62c242975de6fece8d62111ed4ee\nsize 1703\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/8TiSScQCv06oS9b8Tt5WBnf7sUVgzPAFGJ3Lq2bt8rY.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:f71cbe941da884b7d27aa9433582cd57461bfd3e41083f3ad10d13554cfa1ec4\nsize 591724\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/8qtH9T9jgYLHH-xi39w1OCNJykqew1O5qzrDkhAxN0U.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:44490825c7035ddc86b6587b2b07c2bd73a3035211c0bebbe1bc5978b66469d1\nsize 446843\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/9hX3cS3Vjr6vAqJW3WtPN665NpLJegcxyaDZO2esElM.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:ab05537199fe081db35ec7f47e710046a8ce074f3cf102aa2cc60dbd773d0e6e\nsize 205671\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/A5oMEDa7ZEm1kjPlXpwjuZd40rqP6eo3GobNGQY4HlY.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:804e9daef0af59e77e867a43be039c70356a96c9d11d232acf784635b0ac11ae\nsize 94081\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/Ace-njSprwHMwZaW5nuD0y1lKFoaafU3T8d7PLBeEIA.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:77009765d454c8468bb7a27976e3ac100fd69861b9522b2c7db8263ad59ed699\nsize 155501\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/AoCuo7S7ewDIqhYheBX6AjShrbyTgIv6Fp1AwQgmGqg.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:fbe83db0824779a860376a54129f584682801fdc86c598d2012fafb24abfe0a8\nsize 563392\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/BFfNP1eCeYIkLiWWAVvHNLzk1N2pxkOChFzQbdv1IiA.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:924683314a8475ad709de86eef1386be8093b794ff79a6ca0e0290103b37f698\nsize 110148\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/B_F4zIV1I5DXM-lR-Ko1tVUTTSmLCOYR7PoY8V8wFas.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:192d0fe4fbec6c56f4a7917fec95c0b3ffac633eeb94e0daf9df4acb0086f891\nsize 4362\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/CCH2h2MzMP7WMh0Xf3GYL7zZDbU7E4CZPJWngp1qmDc.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:99e3d5245cf82a4182c33040b90884bcaa8ccb997e91cf70ef324e7f63a4adc4\nsize 100111\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/CQv9OVOCzntq2DRqNJ9j_WnWPcsniyGRXpt4i_a8Iy0.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:a3b5980caf07a41d0c0a52d3142372664338c84b281c01af1ff7c56852dcbd88\nsize 235225\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/Cdcx7-UZJN324I9L47rrph9dIVy8RwfJa9mY7cJp9gk.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:1b827558452d68e24a47d8181727c79f65dcdbe948c9c76c3152a9cdd35d296c\nsize 1709\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/D29DVKVYAe74sAj9NBQ351rI6SseWZ5MMsSedGtydS8.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:6e1fb83d306ad585b99f9f2b5a32cae6775513454bac984e60879f056871a920\nsize 1709\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/D_3jwPKLfcTpWcrDV1Q7k3D4sMtyfw7vd45D2C9pUNA.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:359ceb534412b628eaf09d962686a828ed9f1e85b6dc20cac89a39e1aca05248\nsize 15114\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/DlRct3GdPx7oYi3MSdmv16CgGWqhLJjbrKcIfU0E48I.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:a00f713cb17426e67937716085e6bb25fa8fafd37213636dcef66ab0d8758d06\nsize 569037\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/EDt8sO0AWKJyNeUxd-U6ihy0rgRKUPjpfRGarEHlOCs.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:b88f228cbead2fe25c81b997beef32cc0fcda3062d54938602c8adf352f33836\nsize 52845\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/EayO1EsmOinnbi-NVa2V7cVraoI0TZ6xE5-sNU7fc94.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:ffe317204936347b6c9adadaa145f50dc4327238b506e2be1c243acfc4c261ad\nsize 1664\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/F3c9tsVvmCiFNxK7hVEzROraVm477QdyQ8t6afBs5E4.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:45a73469f91e4ccf1bede858c5bbba16253e3730e843ca9a16dcddace1ad3b33\nsize 193135\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/FIrCkHY8jVkXcIkWYbMpuQSRYxavkOQ3wtUZPwMS1hM.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:209109372cf5bb05b373f9d25e60ba01461b810c4011ee1927d91de22633296c\nsize 19518\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/FbeSRhJR00VPygimhm47VwirSeBATnlf240hv4a2G4E.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:b9c90985221f79000d52ff1d8d92fdb30efeb93e42a6d54993778ae61137749f\nsize 403896\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/G6JD1n-FXMSyTSryo0HoX7L3i7e4KEFK_ekDMEn9Bcg.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:199930f740bce0cf3071afa3bfc6005ed99a1cfb122eb8226deaa88501afbe81\nsize 48987\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/HOMVwtocaJIRPdCeKgzorJZJq1jw_lVGz0pQ3POj7No.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:5fd3c801bc07c554ae52a1ad5de0f467688c299467f9b8986eb9d78a0b6ad448\nsize 1630\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/Hiu5cti9FefwcvT6xRCIoADUMkuDEm_6pZo92CK3fiw.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:8b4605177358b28dcb2dbc677322fc3b91f671ff909aec48be0ac8ceeb96ae41\nsize 2140\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/HoEZ6sK46bzTg4Jzrfy1kHFzkFQgI2UMm9pm0qJS3as.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:a11dc8f25249f8528f099a6ac30fc98e6af83c1e34c4fb63e0394a348ebe7b41\nsize 312840\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/Hs-Yj4ZE9ACfQIjzS8E-qvxSkQALsCIDHwcLEMnlz90.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:1c56de0144ed3087bf505619042b8e7534455f5192cc27b34617fe6e14bcae3c\nsize 2183\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/Hv0Q5APV6ARfDXDpxI-07R1YFSJAQpxTFh1Z8_nCk3U.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:c6677efcc2da8446061135ed99a646ad9928b4abdc5437850adba8cfb009e760\nsize 1664\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/I4ifBnOF6OQFautfisGFTVIn2NsrvqrdnQ-O7JOMouE.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:99b44653108cadedfbe9e4a8fdede4d7108b2e4b76a8685e6f96a462e8219df8\nsize 490304\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/Ie-fxxzdBweiA0N1ZbzUqXhNI310uDUmaBc3ajlV6YY.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:449483524ed5970adddf15c623af03c26a18c98a8cad7936584b045ebe7d4512\nsize 82093\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/IeEkQUBq3aE2CSbCF2Bk126lLaLZEYjUPJ_IO601tZg.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:a70a4aac76f318c991575385a1c92c29fbe41b66d8d752bc0fac836a33960894\nsize 8671\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/IqJf6iISeiEj3oof9491-jQX4drDZ92VoFuZqNmoixk.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:d80dc01c2f0b5ba26d78ab3b38bf5418333a0fe1b4516c7f9242ec3f6555d843\nsize 597781\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/JDG-HBsrHGDodot2clC3nNkRKV5cvuhRWZjCwVFHG_Q.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:fbe8502514db37c0cb74fa125a97fe4855bf8bda6d5dce455a7b8f24958d4a80\nsize 107653\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/JDS1sGkpC0ua7UGfpLEJSF-jXUnjAs2fa5V7y6rccdY.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:9bd8fd71df829c78f4c073b5c8a3c16ecd15524324151764e2b5082e5cb3e5a7\nsize 47947\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/JNCYRy7XYR_20vvXEAwpT43ovKB23np9yE9cqQfsIJk.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:29840c1de7e7faf20e2aeb2280e666bbea5b59ee2eee8797db4cb23987932218\nsize 237904\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/JUf6alhhrfuL22XuQ0yrZ6_xBFBIQqi85wRxv2nUCMs.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:5a54d775979915767604824bf0831b48dc68a7b86d544b5fd904a851d9c6f9f7\nsize 745383\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/JeP9HaxmjN-TcbCkhKDIQejkGdKTlOgp68O5cy_2GRc.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:11298c590ffa24f41bad18af4b215f3c95eea6f0b6030113757b1fcd3f2390e1\nsize 3726\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/JfTiLBj5Gxr1v7JwoNf9-7sRAiLOrg1AZ6kqwSkEpTc.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:4f4016312c3535d0d069c9c64a0416084acb2591818edde842b3fc1768b24eb4\nsize 1707\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/Jo3rf0JPJR2kCHBqZG71xouWzuOSY-MXJufpfzFl7sE.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:fa11392dd22d4342c19dd9442fe5f33c55fdc0bcbf7400b0bf42c61fab7a3961\nsize 1705\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/K0w8hOO1oCu4sQipWDQGyEFvn6kAXO-M93neMZmRoUc.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:086357c70964972cdccddeba2831fef1dee26b0b9102260a53ab0d99dc167e24\nsize 1709\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/Lt7WJclVu4iYHqGHIYIBia3ABMnvmd5cW4ELIzUTfPE.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:4c1e510f03bcbe79977929f6edbde08f0433306e199b5d847cc852724a52af9d\nsize 1707\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/MCCCpl9AGNAzy3WvM5lniJ88iC3-8NPiiWIsxcLZZxQ.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:e6da268a212d3b1169014be995377fa8609710d83c9d70f066c74298cc7ccf2f\nsize 58629\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/MGDpPk3LsexVpFBF43-FIIvc0vyeEDroYcIONJ6abd0.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:8cd364264d81edc69c2b6e6db764bbcac1721b9b7e11c760182be3f84d4a2f1c\nsize 129770\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/MQD8-8yIZwNC4A006TC1FVZSyCDHeIAN6YpDbTiX2RU.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:2137cd96017ff6ced88f0710cc15c8f4f51357cc2a57bfed546a3c3205fa69a3\nsize 422204\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/MklsZ_cDz470C40UGZUJoVfMeVA89-r7SHxuomBeCPM.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:637436f7c9bbaa07790f0df80c8691d5ecffee30b338da202bed7655a63689ae\nsize 2140\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/NBjbIMFIdd6jFhSZ20izEke9Ju8jMuvYl8O4bqe4wC4.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:7a87521e4beeff05f23cb323a80c016d4ba5bcb571570551b40191bc54803d80\nsize 47262\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/NixeAD5Y_8sQfcrMBWkODQuoXgJouUBmQmQzwTzlaKU.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:52e7939d520c10400da857b670adc52427cd49d3cd6198501ed70d3822840517\nsize 310189\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/OGA55Jyg2c-Jhkx5zDNyiDvbFZiRXF0S_JESMhWAWcs.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:73d30f3dc41adabed66f17b211f927965bf2cb553f7012cefc4a0f030e1ea67f\nsize 782013\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/P5KQo3QSWLzTLWkq3wgJlii11CEUSKMG_O2NMN6y_8c.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:ddcd87e969821488786e4b5f6bd6afae79081b40bf2b2cde5e87b4ae86328bc1\nsize 244515\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/PgxqlgdluUGnmGCal3dgB6PYCd5S7FtBpI0zKDc8-AY.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:deee61d29fc9f7d0102bab68fd320c5339e0b75b1f6152cf063678ec6fe8d1dc\nsize 1673\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/QAQ-134At0mSPVrwBzTTUalyL_zqE_dMR_WggkZvF5E.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:165afbfc7b9eb6daf0f6b98b6c0ca462e9150eaeb1b9de33dfb8b4e31a6614bd\nsize 588285\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/QyQL1TYdwmguUIBjTV-shWqrwS6AwxhZ6lf7Rx-vxH0.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:bcd7f96b5fa558417b42671518985abb33920d344a4fa1cf1dc5b1d9944b1832\nsize 462904\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/R5utplMYRQsJwA9Y63cL3Na4mXtYzE4gWG6g6zwgEQE.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:9ca7e388eababce3a5f8806458451d0bc5add84fe47d5b4810e7c06aad8cc126\nsize 437080\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/RJzScDd1IYIVaVOMo8zV2sXaGE4ZtKxwO2ONPFK-ou4.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:125b994c2c67dba8dcced8b889e0da7a8cfbe21ab12eed6c86ac0bb2094f89ff\nsize 73529\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/Re-7lkSGlYP4SFddz0rrXIF0r4MVYZuagjkVpEm79bY.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:aee377a00717235783f3840413f7dd74283b91d540dfa8f988053f20900cd4a3\nsize 115739\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/TMjINkrJIS3kbGu8bmcVt_34TaFN8lINFQPR_YGzHss.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:d6217030428dbced29da075d2edaaaf1fff0bb39733d35bf9db7c31a8a8b4402\nsize 8431\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/TNj-jk-KpKzz84xb1SRiKqyp8LNBnONxA9SIXs3XU7k.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:e61661211eeec609e6c7f10d3745d553c4e9ab3a81580cad24ab109d7103e0a4\nsize 1664\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/Tg9QZvUPJoAZKRkPhPgQrgnlTY6s9UxRSQaMw6shhOU.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:fa57c66befe411dfb5d7f81cc6de8c2aea67daef946118ec7e53956e398afd59\nsize 47411\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/U2DZlRhnzhZrC7GsVNX0TxnXbHh03P3g-cU4fkHpiXA.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:9aab9de1d36cda0bc42e8a9e754d7eee836770971f523dd7939f96a280578e48\nsize 24403\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/U4o0STLxwOEf42F4DF22ooOoA5Ykdp5j_D1io-4w1lc.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:9b7391cc7d5f0d193b89a63890e46b2971192fdd3c3f0c3c47445272215c4353\nsize 1705\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/UH3C65dDo62rp5ciK3XzyhufE71xorL7r7MWVwdhavk.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:064ecc9e064964d8b4510dd34aef7175b6e6e1d0b36ffa06a3af1d589e55685c\nsize 599103\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/U_1PPd40n2grpuhkMJcMXPVuJhtaQoUWei63iN2rS7o.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:6c8b14abc39aa771884fdd3c5025ac638908f7c283558189594bf83762c4b03c\nsize 582091\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/U_UF7e-hOd5uLIj10fYZVxQ5mXyZUxvMxhWWgAMaj0s.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:ba26cb1cc80d8db7b9db63894d96cab7d767cf9a87967a67e7beb4b48842c987\nsize 97395\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/UbW68tRQtThl9ah8tJb-X_af5M8FHYARiGZFiPGk_90.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:bb1311369a6127ca33e728f643c34aa472cf039c980edd6c94566aeca155fcc1\nsize 229910\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/VL10zUkfmLz5eRxQsZi0G5wsfo8mvyN3p82updP17D4.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:bb9fd29e85c3433cf4b2624dadf1eb6e6716cb71eebf9158606a68b24e4bdf95\nsize 116961\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/ViCjDXb4IEZcXBtlYvTm3HCB6cf4gDbrXCCdvVVgB1g.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:39be5b6fb52e117de611490872381575997ac467febfef08feedd070c6a4442d\nsize 62537\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/WJTACYoRG89VIpjzsIZLIy93U7HoC4OJyLy6WAlqv-0.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:2a867500ec760f986f4dfb45bcbf9d171b26315448d9405969bf649071552211\nsize 62430\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/WwgngUwH7mXX15tdbfcjG_9gX2t8N8wbbfW2N34b3dA.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:a38dc1d10f8cc0492006c8cc0d6c77134ae85ee7061a2ad66aa06817db6c2a0f\nsize 1740\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/XrtNbxWFUGlP-SYqQm8aYawQJU6H7CSyHpRZM1iLdKg.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:db3b3609ff90166af3b5cdee93b04d916aa0f1709c33f12e84b70eb137814343\nsize 1673\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/YMnQwrWWVRmkMs0B41lz-VdixskatlPcY7j4r0iSLbQ.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:e82a596cd35270ffeec53c88b26fe12ab2035e002a25821716a9764804640b23\nsize 1707\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/YcTBCg3mLRFByb1cnjrq9DzEBnnOT9jQtfYEE34QZ1M.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:02b1869c5f7c8eb2dfa5688810c619437f95c5bd1e4db38baf3de485543759a0\nsize 18008\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/Yk_dta-f75GShvyUvXq132pohaNpiQgerfIKJA0vdCw.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:697a104a1a814b1a76e9467d3f1847d8ad7fcb5f67c9f4f2144f7d8539230e4f\nsize 880158\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/Zu9CSLWidXEnbSAQVuXGk62eMrVAGQb4qHmrtQrOQIQ.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:a3d1710f09dc716c1e40055a163cd9f94f6bace69b2fde63730ce3c1298919e1\nsize 1567719\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/_AiF52l4uqTkKOVpQw9hr6l6FCIdWs8PCFtFxEBOopU.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:8145ec8e54bab340bc41a7f2098a0bcb8843d536581a4c6dae49a40dd50ddc17\nsize 92318\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/_BN_07s59sawk5e9YcjHTX2qtYX9q7nCBYrlSWXoEsc.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:c34f7f121221fe03da2748a459854d51cd188e5758d24f27628a649dc64a1082\nsize 37967\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/_KI9ocPARF5JjaDPIbtpqw2hj_qRonw-AERjWOs5ZYM.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:931592ec10513c8dcfc1049c060e6e652180c6f80f4851ad144b1744d68af43e\nsize 63409\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/_gduN41u7Xxac_Gm3pBQI3icoKhOfiRV2TKhDnlyakU.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:a08775295f1dbc6aa37aec9afec725e0f488a79380421e1dc1baffb4d7a2e449\nsize 1708\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/bcbIZq2gy8ivQiUlEch7tjNoCcUMTTLhInMlj9P2P88.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:c0e721cf00c857aa37b6431f4b5ac51fdd6d05c61dcb8b4200a21c36d44efa50\nsize 1707\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/bhEMgsj4Yf5tdCDlwK9KpHmsgVLAsBDPOLtYeUDLw0M.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:3d325fba35ec20330a3ae4aa5a96223b29cf9b72d620fd725bc39e40974d9895\nsize 119653\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/cTmKy32Fbmlybl-WtbyuVFNhO11Efr4e_rGbzwAkPbs.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:a5db61766b85754dc53d1f99216ffd1bf0849fa003f07a2f64b48b07c7b8bb7f\nsize 1662\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/clMyhm_qgwUJq68xb8Yf9EEaN3F7jgdqgKnKgjVRom8.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:c4da2d2d3434290ee9be01ce0a1711ecb3d255b70b06d05f8b9ec85c8b4799ef\nsize 1802\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/d8CQoDBSrekoGZXqTatc7Y5JkHtNviX1D3JD-fxFDmU.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:6052eb02f3c62a7ffde324432f6a8fb4c3346e7e13da3b92f98f3001de4b35a1\nsize 237029\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/dMTZgKHD-NkP3iM5RjFNhppiwfTlYd-Imi9aA6IK0So.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:a7e36e7e3fd45809a677fd240589fbb3bba622c595db57e64c0dac2aea03cc30\nsize 52845\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/eGYHUFl46laNa8v_WjdadvCkIErWqmx0hoia7PCSmSw.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:b6488a80f2590d3b8edf4cc9fdc11d9050d31a7cc42c09817b4824502da08604\nsize 63083\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/ehTWq16I6ixhFOVkpTKi7s4jgYjNzGJ5CoJW3xjHDTE.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:084f6c71c7ea0b1c18ce57e172cb4f1093cd40c15e9bb56e0a62db118bca5d18\nsize 46230\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/fAnOUj-jmlzPMtIN90ZvowG9VUmBtD36MZ8-tRP1Ut4.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:dd8118930526dc47a9bd20094e0f3d98140eccf747b2b015b71013c3c803882f\nsize 227537\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/fr3nkF8AHXTcq9bT_b7x2X7Mun2A--Ssb7eyoKgQEwI.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:29c80411c4080a2ace61e78768dc5184d77bf7b564ebd96fe0b7c1416018375b\nsize 341621\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/goAmthhGPdbYUqbAymyG_MjBUWVdS9OBm78mOoiITHo.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:ea3331aeda40742e1ac214fc9552665d4c02d04ad505d7981b7d156970219209\nsize 222620\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/hB1Hj0mfuh_x3ijhqkw1s3wdCh8qdPz_IMs0MPraVCk.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:f3abc45056b72be91ad35daa7d5d401c80d97f0af97901c00766ddb4574afeec\nsize 23595\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/hMfNPSlINViUDVnor18GgPs0Ut0i9XY7dwM9MVOL-2I.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:6f91f182c3f05fec999522be99feeb9b0cbd2e5b5c11bef14cca886654ab0002\nsize 210027\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/hPnpcoVcfRdkyUyhYSFNhsEcz7nQU0UU-fPSiRalDvw.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:0bc6c7c8e7d94d811754552609b95123547a721173100ff6e464f537967cd7dc\nsize 2358\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/hQvPcHPcBhyxv7GPx-E3bZWiNBhnCpFIDwWa3XBcYEU.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:bfa76b27dedac10615de43b2d39e344e99117737febb86584903e33a3705e830\nsize 2140\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/hXNDNwQ6zA7aHAqvfBj_az9CovV37bJywdgPdb_ooIA.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:617057ed3f076f08ee022d8890d416431f8614e6023920aad1391e9505f41337\nsize 1664\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/hxyn3yZ7-LCgKqfkCljyM7Hq7HJnmPnEKaXoybXJjHo.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:44b131f73559061b36a7971d474930051f953ee27e364dc7d75de69780e46ab5\nsize 112922\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/iWUFDucATDE8gjbsL-9KpOIW9l8Ipsh1wliv4e05xhg.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:e83e9824c1888e9c9d212eb50493a106169e14bdca32d145dbe3d4de34fd1d20\nsize 1791\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/ivWTdg5M9XqjP-Iu4C97r3qZQhotJgfF17g__7EH7VM.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:247c6463baa3b0207d09fa7dd89c911af38792e66a8fd44026ce1fa32cfbefd4\nsize 154760\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/jOFeroI0Oz4TWcOx8mgv4iOZLv6ncbRXFRtJfqS4Pq0.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:73867d7f15dfa0f49cb05735924b239915dc4dda7d3dc6d2a1a382679fff96c3\nsize 106958\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/jStDc8gP5lyHVSFIJiT_2RrXhT26GpAhNItDEje07_Y.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:378a38cc1c634cd6b36fc63b109a64ff288cbc47110f6acaa70eb7aebeba07f4\nsize 1276655\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/kLP-8ILxdLSAQsrC6IwvfqQL6Loq2Q6lqOzwrnb6QoE.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:fe6b658bedba36ff18c087aaa1c5e13002239c69cceadd7ce37f0c5bc59abe01\nsize 456765\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/kVNsLH0kpIkFnBBGWxoIajVLSpvzmsKHpsATPAcR86Y.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:dee5c0396408c4e116ad11e3e48f91d7aa5d81e7ce39e71d7e73e9975f216ef3\nsize 44287\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/ks0ODNqrNY4CCDxJcrgRY324WykCeTiSH4Tmdi30I2E.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:761cfb280c62c24dd5822373c5ad9a8c41464a963d2745a160d47a916ee32f61\nsize 1712\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/lF7NSIz6CNf8WsMNQl8It8HbJem3MAllokozblLdU5A.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:fc65713ec75c86594e33fee24f2952751515fa1d82bf0f23a247aae67510f3b3\nsize 138406\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/ldoaD2NbG9VRhLOXddM1ypoAU3W5gR_zabUWZa4r6lM.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:32994592f5338b3e5a63feb37c7bc5e3d43199b92e456532bf3ae627cd5c11ff\nsize 38813\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/lxtOUAEj-E1jb6J8uGCRlRgJDHJyFOu0O73jQHnAhpg.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:a3e112c6544c6b7c407863cef2fee56762e9d4661f70f1ed40b5d9909e586651\nsize 1707\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/m1DnUoXf7wMtIGkkDZAALobw0GbGehfEMX_jNLvs3i8.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:237957c4fb4c278f68dde4fba9707f515adcf4624d80633ece545a2bdb05f9bf\nsize 601967\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/n1GVITzrvCF95Vz7l6hH7fdYzebDDAJav5z4-9C7lB4.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:d633d744a4ef30be5da0811766354dc1a838dc50db0c249eb341ea3bcdee56ac\nsize 234105\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/ntnx85KcYZ_ZhR6dL2A_p8foCmStgD-69ODoOUdipiA.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:b73dea3f4d2bf9f3d76b24e5a29bd99d342d9c6085257d001f0e747308585ac6\nsize 1710\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/o9ArU5IxydvpJo2iiPI-p4EGBwlpBlyFIfbnz8Qrg6c.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:15971a2d81d62a89b695325157e87213899566eae8d8b91b607d2968357c046a\nsize 1710\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/oNZMr_dB-L40nSUj6Fc19-FGteHQu7ZaRZu9_mgM1BI.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:a9883e0b47d2d05c7cd05d87b554649f8a0f2e1e4fca9390f684fbb4ac93cf29\nsize 12984\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/oO7raEVlJC6KhfK-UbNuppzbYPGdKWbh1e6rOymd_-o.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:5e37e35a8b4a22984555475e1837546d533b3e208942c6755f7689ab3b417155\nsize 52834\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/oiYeEvWqOkaHzCSunznZ09U_tuHqP1UyZkRrKYHgNBw.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:697f697aee17f1812c5b618c55feb0ebd5f30c33c273cf7279e6c36f7b675420\nsize 210533\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/ojgJyXT8qwRXj1hOVx2gbeJDT0xEOIye0o9EbfU2LRM.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:f33643e62bf7ac2fbe1907abb6810597fde7e8dcfdc6bd85c8795024df646089\nsize 289068\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/p0MVPvnv_lkWwfhSuSCgQ3NUj83shBffAx1NKPn4oy8.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:362939649ac22def7c3861aef513319c0ff8e5767aed61fe706bcb84756b771f\nsize 233689\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/p4oyXU5C3T0ZycNhEwBZ0MbpV0j3voWV4mr__3fhOek.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:87aa264f34df934e8aa831cdae2605857cd2c722dc44fecde8bd74b047dc39ab\nsize 41774\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/q8aw85uHTIPxuXcv2Awts4JVVHEMCl7J-61WfnvbYuQ.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:7803cbb60354e1b0eae8a3de9a95fadfbd0cecebec83dafcce9b5830e820b6dd\nsize 9453\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/qDEFXj8hSgOuuqWM52y6pbUX1cyp7bS4qItfctgtVx8.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:29b39d612e3225b2636d477d9f0c988758a170004ce85d09ee6ef29eaec2119c\nsize 116431\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/qHvSpQXYh9RZmXIoIOexmDs0iQgjCubl6KSsgg7cDz8.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:0b58cc8cb236bdb830ecc7ef7ddd9a6ea4f178d1e2bcfe4a67a13229cac09e9c\nsize 89795\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/rAARxLc7tOdjUXEdNmSpOtsJIAw0XS229YHO1KOeUqI.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:e1d00403e69bf3a3d874124b55432c728f28afda9abc02b060dd92bd03a2541e\nsize 18011\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/rTaanqa6Z5KxtBV4Kj2Fu2KKqAWlstE0JeUbZ3AuN3o.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:fcb315ae0caec453a9104c49e1ee1a0575c257b01682d695bc38b07e92f679f7\nsize 295972\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/rY4cJeAtYkg3bnTdqk4Vb0ojEcfS76L4B-iqyvQZ2VA.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:ef9b3b9da518ae39edfe3ed37def0834ea5c27ec6dacd8c53926ffc0caf4bdc6\nsize 67673\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/rcc-B4OWqf0dbVY7Eq6q3pRDHLUjJ8tix8UeLQ4D68w.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:2fd712b03ef582b919af568fc08ed73b4e6001a855432b164e7e2cd6e22a54f9\nsize 1673\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/sEw-yqeADuF0n_M6jTPLrOgH3coalIQHYPLrwM87nmo.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:fcf12acbfc1e5f379a8fe46eb78519b9e423d0fd581959708a7cec6c5898ce0d\nsize 266311\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/sMF6pWIkJFygBbR2IS10liEsjsLAMDja_E9_yUvUgeI.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:30f6ad77dac924e92ce3166e8a17dc9162565a8d865bcaf9077175447ba75b2f\nsize 56136\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/t81tluHdoePSxjq7qG-6TMqBKmQLYr5gupmfvW25Y_o.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:86cc0825c860ca118a5d6c447d86c1c9df5b91084ae2857a2cebcc32657980ee\nsize 800358\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/tn3FQGSVFt_TE5nyQNpuf_gnHdaWF85hZg1iE5hPQSE.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:af5d6a6b09a7ec5683bb62158ac5c03b27e6e31a2fe64e55baabc864eeae4dfc\nsize 9315\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/uNiZ8TfAQ8GWjtbqhVi90qO3U5dl9afmKE1-KbHQYM4.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:e8e22b7c0b8de9bf80c7bc0b7b38c3b71f0d72abaf1bc606c6265ca6262b7fe6\nsize 1088960\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/uOqsnEjVGQCbtrKI7QbHYxbbLUdCKC-792SgZr5KUKM.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:9d716b9fea4d2b7cc34fcd4970b304b794655270def6aec3536aab2f170cad35\nsize 314852\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/ujON59jsellR3M8hq9unBPISOwRgEVUogdi3FG_pVMk.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:2f38947eb1a80fb23cd00a67d6b524efd0c406d1227d9e9de9d6d935d1921847\nsize 220997\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/uoTzfoaN81h2_JyFkrvXTLFMnoSlWiuc9Yu1CmsFkH0.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:e8c830125cce357e14ad500f900e2258e058f4ffd2a5247b8ae9723b430ba398\nsize 198199\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/vDtQzZ9jl6r7yzczhoKhvzekCQYx-qskwYdzQO92eWo.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:85a346aefa8db778071c8fe7594b6f71ac7f1583e032020c019adbf9a7c5241d\nsize 48731\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/vFP1U-4lk3GypDZFceLvRXjoadcB2FRKrcNQf9WjzpQ.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:9a1738f9dcb080deb7219d2cf8f8a8e5f8b072dd13c393cfbae4e5784d298886\nsize 604349\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/vYnzbcbBQbPQB7GKrXzPlz1MuT9cfnNI_NBVajaTnPg.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:060a759611f6fbacfb6bcaba31232c77049185916fb5fb56048defcdb05fd6c0\nsize 49122\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/vvPtX1U0EZS9PMsQBVk3mjD9yS6EHIt0FXdKf2dOELw.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:792c95d15dcfc257d95f16381838f1503979df6f287e9b39547276965a6a2996\nsize 1664\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/wntmnG9yRP9aoioRDILKkmSZqdemR-XDCIKJS-wpRYw.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:902879cde871b3796a621efac4526cfa4ff13ec4ba02a27d8e56e84448f64342\nsize 1664\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/xCUsF5aatMdiiUAkGjg29_TiQGKqXpbzoMsB0yI-Dd8.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:f2fff45e228ff16c8b4bf623063cecfc832ef840484b25e089288d4de69e27d7\nsize 1664\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/xK4fFG-PbnQx6EGmmj1A0JVWQ9Bg7q-FncaU7hHk9ds.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:072a735d4765ea75eb92acffaac2fd0a432dcc0377e327d67a41d086e7dcaef0\nsize 7682\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/xaB3eS6qbtKSrfFACMcYpgxWRtaJfT1kmOVpyaE45tI.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:efea0a5a20a068648ef387dd36ae4c08f8e0e896b320c6e5c8d8022f801b6e03\nsize 475974\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/y9wJkLq6Q0hKSDD67ilFqtMMatw9qpsKM9W2uy2Rfjc.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:43171dc4cbaa683c7ad96827338d10344a4cfacb0e6c11b05635b1b3234efd80\nsize 141311\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/ycjvsn3A9cUMjnbDaSUpf1HRQd4duP9AL1YVwSjwuAQ.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:b80a234c19fde1b4256a6757247d975ae08f7177f386deb8247993d523a21380\nsize 1661\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/yo8VtPVXWBpTqLbLL-ZeOmZTW2HTqTzsf9RPzgHM-bQ.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:2e13b6db4a5ddef8e861dbe870c2392bc50504f80b40a6abd58b6562cad0560a\nsize 1708\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/zNae10gPNkFt5aRVaSL2eSgxZiRDG79B9oDIeYqyzDY.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:6e97d895fc433aa2d811579020d4d1a6b39ed1fc2c7911a4207689f06b05d1a0\nsize 514966\n"
  },
  {
    "path": "localnet_snapshot/seed_txs/zavm_CqSq0KuWfc-E0JccEyrrQzjigxt7yuW1ceYjE0.json",
    "content": "version https://git-lfs.github.com/spec/v1\noid sha256:fcd3606560602d23554cd0b7afc44a8820531ca3ec1c859bb9eec1c2c5001a54\nsize 67399\n"
  },
  {
    "path": "nix/README.md",
    "content": "## Building Arweave in Nix\n\nEasiest way to import arweave as systemd service, is via flakes\n\n```nix\n{\n  inputs.arweave.url = \"github:ArweaveTeam/arweave\";\n  outputs = { self, nixpkgs, arweave }: {\n    nixosSystem = nixpkgs.lib.nixosSystem {\n      modules = [ arweave.nixosModules.\"x86_64-linux\".arweave ];\n    };\n  }\n```\n\nIn non nixos system, the package derivation can be accessed and used as standalone.\n\n```nix\n{\n  inputs.arweave.url = \"github:ArweaveTeam/arweave\";\n  outputs = { self, nixpkgs, arweave }:\n    let\n      system = \"x86_64-linux\";\n      pkgs = import nixpkgs { inherit system; overlays = [ arweave.overlay ]; };\n    in {\n      # your flake here...\n      # pkgs.arweave should exist\n    }\n```\n\nModule extraArgs are also a good way to access pkgs.arweave for overrides if needed\n\n\n```nix\n{\n  inputs.arweave.url = \"github:ArweaveTeam/arweave\";\n  outputs = { self, nixpkgs, arweave }:\n    let\n      system = \"x86_64-linux\";\n      pkgs = import nixpkgs { inherit system; };\n      extraArgs = { inherit pkgs; };\n     in {\n        nixosSystem = nixpkgs.lib.nixosSystem {\n         inherit extraArgs system;\n         modules = [ arweave.nixosModules.\"${system}\".arweave ];\n        };\n     };\n  }\n```\n\n## Using services.arweave\n\nIn your configuration.nix you can enable arweave node as service.\nNote that this is limited to nixos the operating system (as opposed to just nix the package manager).\n\n```nix\n{\n  config = {\n    services.arweave = {\n      enable = true;\n      peer = [\n        \"188.166.200.45\"\n        \"188.166.192.169\"\n        \"163.47.11.64\"\n      ];\n      # see more options below\n    };\n  };\n}\n```\n\n<!--  Generated in nix repl: (builtins.toJSON (builtins.mapAttrs (k: v: if (builtins.typeOf v == \"set\" && builtins.hasAttr \"_type\" v && v._type == \"option\") then {option = k; defaultValue = if (builtins.typeOf v == \"set\") then if (builtins.hasAttr \"defaultText\" v) then v.defaultText.text else v.default else v; description = if (builtins.typeOf v == \"set\") then v.description else v; } else {}) (import ./module.nix (pkgs // {arweave = {};})).options.services.arweave)) -->\n\n_A schema of the available options as json_\n\n```json\n{\n  \"dataDir\": {\n    \"defaultValue\": \"/arweave-data\",\n    \"description\": \"Data directory path for arweave node.\\n\",\n    \"option\": \"dataDir\"\n  },\n  \"enable\": {\n    \"defaultValue\": false,\n    \"description\": \"Whether to enable Enable arweave node as systemd service\\n.\",\n    \"option\": \"enable\"\n  },\n  \"featuresDisable\": {\n    \"defaultValue\": [],\n    \"description\": \"List of features to disable.\\n\",\n    \"option\": \"featuresDisable\"\n  },\n  \"group\": {\n    \"defaultValue\": \"users\",\n    \"description\": \"Run Arweave Node under this group.\",\n    \"option\": \"group\"\n  },\n  \"headerSyncJobs\": {\n    \"defaultValue\": 10,\n    \"description\": \"The pace for which to sync up with historical data.\",\n    \"option\": \"headerSyncJobs\"\n  },\n  \"maxDiskPoolDataRootBufferMb\": {\n    \"defaultValue\": 500,\n    \"description\": \"Max disk-pool buffer size in mb.\",\n    \"option\": \"maxDiskPoolDataRootBufferMb\"\n  },\n  \"maxMiners\": {\n    \"defaultValue\": 0,\n    \"description\": \"Max amount of miners to spawn, 0 means no mining will be performed.\",\n    \"option\": \"maxMiners\"\n  },\n  \"maxParallelBlockIndexRequests\": {\n    \"defaultValue\": 2,\n    \"description\": \"As semaphore, the max 
amount of parallel block index requests to perform.\",\n    \"option\": \"maxParallelBlockIndexRequests\"\n  },\n  \"maxParallelGetAndPackChunkRequests\": {\n    \"defaultValue\": 10,\n    \"description\": \"As semaphore, the max amount of parallel get chunk and pack requests to perform.\",\n    \"option\": \"maxParallelGetAndPackChunkRequests\"\n  },\n  \"maxParallelGetChunkRequests\": {\n    \"defaultValue\": 100,\n    \"description\": \"As semaphore, the max amount of parallel get chunk requests to perform.\",\n    \"option\": \"maxParallelGetChunkRequests\"\n  },\n  \"maxParallelGetSyncRecord\": {\n    \"defaultValue\": 2,\n    \"description\": \"As semaphore, the max amount of parallel get sync record requests to perform.\",\n    \"option\": \"maxParallelGetSyncRecord\"\n  },\n  \"maxParallelGetTxDataRequests\": {\n    \"defaultValue\": 10,\n    \"description\": \"As semaphore, the max amount of parallel get transaction data requests to perform.\",\n    \"option\": \"maxParallelGetTxDataRequests\"\n  },\n  \"maxParallelPostChunkRequests\": {\n    \"defaultValue\": 100,\n    \"description\": \"As semaphore, the max amount of parallel post chunk requests to perform.\",\n    \"option\": \"maxParallelPostChunkRequests\"\n  },\n  \"maxParallelWalletListRequests\": {\n    \"defaultValue\": 2,\n    \"description\": \"As semaphore, the max amount of parallel block index requests to perform.\",\n    \"option\": \"maxParallelWalletListRequests\"\n  },\n  \"metricsDir\": {\n    \"defaultValue\": \"/var/lib/arweave/metrics\",\n    \"description\": \"Directory path for node metric outputs\\n\",\n    \"option\": \"metricsDir\"\n  },\n  \"package\": {\n    \"defaultValue\": \"pkgs.arweave\",\n    \"description\": \"The Arweave expression to use\\n\",\n    \"option\": \"package\"\n  },\n  \"peer\": {\n    \"defaultValue\": [],\n    \"description\": \"List of primary node peers\\n\",\n    \"option\": \"peer\"\n  },\n  \"transactionBlacklists\": {\n    \"defaultValue\": [],\n    \"description\": \"List of paths to textfiles containing blacklisted txids\\n\",\n    \"option\": \"transactionBlacklists\"\n  },\n  \"transactionWhitelists\": {\n    \"defaultValue\": [],\n    \"description\": \"List of paths to textfiles containing whitelisted txids\\n\",\n    \"option\": \"transactionWhitelists\"\n  },\n  \"user\": {\n    \"defaultValue\": \"arweave\",\n    \"description\": \"Run Arweave Node under this user.\",\n    \"option\": \"user\"\n  }\n}\n```\n"
  },
  {
    "path": "nix/arweave.nix",
    "content": "{ pkgs, crashDumpsDir ? null, erlangCookie ? null, vcPatches ? [ ] }:\n\n\nlet\n  gitignoreSrc = fetchFromGitHub {\n    owner = \"hercules-ci\";\n    repo = \"gitignore.nix\";\n    rev = \"a20de23b925fd8264fd7fad6454652e142fd7f73\";\n    sha256 = \"sha256-8DFJjXG8zqoONA1vXtgeKXy68KdJL5UaXR8NtVMUbx8=\";\n  };\n\n  inherit (import gitignoreSrc { inherit (pkgs) lib; }) gitignoreFilterWith;\n  inherit (pkgs) stdenv lib beamPackages fetchFromGitHub fetchHex fetchurl;\n\n  randomx = fetchFromGitHub {\n    owner = \"ArweaveTeam\";\n    repo = \"RandomX\";\n    rev = \"913873c13a2dffb7c4188c39b4eb188f912f523e\";\n    sha256 = \"sha256-obxX/b5o/RY46kCtHOhWMFX29jT5y8oigzVLwZRFHgQ=\";\n    fetchSubmodules = true;\n  };\n\n  buildRebar = beamPackages.buildRebar3.override { openssl = pkgs.openssl_1_1; };\n\n  b64fast = buildRebar rec {\n    name = \"b64fast\";\n    version = \"0.2.2\";\n    beamDeps = [ beamPackages.pc ];\n    compilePort = true;\n\n    src = fetchFromGitHub {\n      owner = \"arweaveteam\";\n      repo = name;\n      rev = \"a0ef55ec66ecf705848716c195bf45665f78818a\";\n      sha256 = \"sha256-CSBsTRqkrQWwX7oxPZWERss5Pk0mE1ETe7s4fhZEUaA=\";\n      fetchSubmodules = true;\n    };\n\n    postBuild = ''\n      env rebar3 pc compile\n    '';\n  };\n\n  erlang-rocksdb = buildRebar rec {\n    name = \"erlang-rocksdb\";\n    version = \"f580865c0bc18b0302a6190d7fa85e68ec0762e0\";\n    beamDeps = [ beamPackages.pc ];\n    nativeBuildInputs = [ pkgs.cmake ];\n    buildInputs = [ pkgs.getconf ];\n    configurePhase = \"true\";\n    src = fetchFromGitHub {\n      owner = \"ArweaveTeam\";\n      repo = name;\n      rev = version;\n      sha256 = \"sha256-hSqQsLEg1d/WdvwZxYlFbTS8wXdAfkDVddVJT+69nz8=\";\n    };\n    postInstall = ''\n      mv $out/lib/erlang/lib/erlang-rocksdb-${version} $out/lib/erlang/lib/rocksdb-1.6.0\n    '';\n  };\n\n  meck = buildRebar rec {\n    name = \"meck\";\n    version = \"0.8.13\";\n    src = fetchHex {\n      inherit version;\n      pkg = name;\n      sha256 = \"sha256-008BPBVttRrVfMVWiRuXIOahwd9f4uFa+ZnITWzr6xo=\";\n    };\n  };\n\n\n  rebar3_hex = buildRebar {\n    name = \"rebar3_hex\";\n    version = \"none\";\n    src = fetchFromGitHub {\n      owner = \"erlef\";\n      repo = \"rebar3_hex\";\n      rev = \"203466094b98fcbed9251efa1deeb69fefd8eb0a\";\n      sha256 = \"gVmoRzinc4MgcdKtqgUBV5/TGeWulP5Cm1pTsSWa07c=\";\n      fetchSubmodules = true;\n    };\n  };\n\n  geas_rebar3 = buildRebar {\n    name = \"geas_rebar3\";\n    version = \"none\";\n    src = fetchFromGitHub {\n      owner = \"crownedgrouse\";\n      repo = \"geas_rebar3\";\n      rev = \"e3170a36af491b8c427652c0c57290011190b1fb\";\n      sha256 = \"ooMalh8zZ94WlCBcvok5xb7a+7fui4/b+gnEEYpn7fE=\";\n    };\n  };\n\n  accept = buildRebar rec {\n    name = \"accept\";\n    version = \"0.3.5\";\n    src = fetchHex {\n      inherit version;\n      pkg = name;\n      sha256 = \"sha256-EbGMIgvMLqtjtUcMA47xDrZ4O8sfzbEapBN976WsG7g=\";\n    };\n  };\n\n  double-conversion = fetchFromGitHub {\n    owner = \"google\";\n    repo = \"double-conversion\";\n    rev = \"32bc443c60c860eb6b4843533a614766d611172e\";\n    sha256 = \"sha256-ysWwhvcVSWnF5HoJW0WB3MYpJ+dvqz3068G/uX9aBlU=\";\n  };\n\n  jiffy = buildRebar rec {\n    name = \"jiffy\";\n    version = \"1.0.8\";\n    nativeBuildInputs = with pkgs; [ gnumake pkg-config ];\n    buildInputs = [ pkgs.gnumake ];\n    configureFlags = [ \"-fno-lto\" ];\n    hardeningDisable = [ \"all\" ];\n\n    src = fetchFromGitHub {\n      owner = \"ArweaveTeam\";\n    
  repo = name;\n      rev = \"82792758e61be7d303a11290f859a7b3b20eaf95\";\n      sha256 = \"R7kbdMh5wOIN/aA7KFrICjlFAym3OJs9sYWrfdU06GM=\";\n    };\n\n    patchPhase = ''\n      sed -i -e 's|-compile.*||g' rebar.config\n      rm -rf c_src/double-conversion\n      cp -rf ${double-conversion}/double-conversion c_src/double-conversion\n      chmod -R +rw c_src/double-conversion\n    '';\n  };\n\n  quantile_estimator = buildRebar rec {\n    name = \"quantile_estimator\";\n    version = \"0.2.1\";\n    src = fetchHex {\n      inherit version;\n      pkg = name;\n      sha256 = \"sha256-KCqKMjyiqEXJ5veH0WY0j3dsHUpB7eYwRtctQi49qUY=\";\n    };\n  };\n\n  prometheus = buildRebar rec {\n    name = \"prometheus\";\n    version = \"4.11.0\";\n    buildInputs = [ quantile_estimator ];\n    src = fetchHex {\n      inherit version;\n      pkg = name;\n      sha256 = \"sha256-cZhiNRqr9N9webBdwIXSu8vjrArDAJ6VZnGx1auIJH0=\";\n    };\n  };\n\n  prometheus_httpd = buildRebar rec {\n    name = \"prometheus_httpd\";\n    version = \"2.1.11\";\n    src = fetchHex {\n      inherit version;\n      pkg = name;\n      sha256 = \"sha256-C76DFFLP35WIU46y9XCybzDDSK2uXpWn2H81pZELz5I=\";\n    };\n  };\n\n  prometheus_cowboy = buildRebar rec {\n    name = \"prometheus_cowboy\";\n    version = \"0.1.8\";\n    src = fetchHex {\n      inherit version;\n      pkg = name;\n      sha256 = \"sha256-uihr7KkwJhhBiJLTe81dxmmmzAAfTrbWr4X/gfP080w=\";\n    };\n  };\n\n  prometheus_process_collector = buildRebar rec {\n    name = \"prometheus_process_collector\";\n    version = \"1.6.0\";\n    buildInputs = [ rebar3_archive_plugin rebar3_hex ];\n    patchPhase = ''\n      rm -rf .git\n    '';\n\n    src = fetchFromGitHub {\n      owner = \"deadtrickster\";\n      repo = name;\n      rev = \"78697537f01a858959a26a9c74db5aad2971b244\";\n      sha256 = \"sha256-3Bb4d63JMdexzAI68Q+ASsj4FfNxQ9OUlG41fhFkMds=\";\n    };\n\n    postInstall = ''\n      mv $out/lib/erlang/lib/prometheus_process_collector-${version}/priv/source.so \\\n        $out/lib/erlang/lib/prometheus_process_collector-${version}/priv/prometheus_process_collector.so\n    '';\n  };\n\n  rebar3_archive_plugin = buildRebar rec {\n    name = \"rebar3_archive_plugin\";\n    version = \"0.0.2\";\n    src = fetchHex {\n      inherit version;\n      pkg = name;\n      sha256 = \"sha256-hMa0F1EdeazKg3WrLHXSD+zG0OK0C/puDz1i3OsyBYQ=\";\n    };\n  };\n\n  rebar3_elvis_plugin = buildRebar rec {\n    name = \"rebar3_elvis_plugin\";\n    version = \"0b7dd1a3808dbe2e2e916ecf3afd1ff24e723021\";\n    src = fetchFromGitHub {\n      owner = \"deadtrickster\";\n      repo = name;\n      rev = version;\n      sha256 = \"zM3WPLlbi05aYqMR5AhlNejBaPa6/nSIlq6CG7uNBoo=\";\n    };\n  };\n\n  cowlib = buildRebar rec {\n    name = \"cowlib\";\n    version = \"e9448e5628c8c1d9083223ff973af8de31a566d1\";\n    src = fetchFromGitHub {\n      owner = \"ninenines\";\n      repo = \"cowlib\";\n      rev = version;\n      sha256 = \"1j7b602hq9ndh0w3s7jcs923jclmiwfdmbfxaljcra5sl23ydwgf\";\n    };\n  };\n\n  cowboy = buildRebar rec {\n    name = \"cowboy\";\n    version = \"2.10.0\";\n    buildInputs = [ cowlib rebar3_archive_plugin ranch ];\n    beamDeps = [ cowlib rebar3_archive_plugin ranch ];\n    plugins = [ beamPackages.pc ];\n    src = fetchHex {\n      inherit version;\n      pkg = name;\n      sha256 = \"sha256-Ov3Mtxg8xvFDyxTTz1H6AOU9ueyAzc1SVIL16ZvEHWs=\";\n    };\n  };\n\n  gun = buildRebar rec {\n    name = \"gun\";\n    version = \"1.3.3\";\n    beamDeps = [ beamPackages.pc geas_rebar3 rebar3_hex 
cowlib ];\n    src = fetchHex {\n      inherit version;\n      pkg = name;\n      sha256 = \"sha256-MQbOFn+clyP4SeT7VOpKTYFOOZauJDocgoslbnSQQeA=\";\n    };\n  };\n\n  ranch = buildRebar rec {\n    name = \"ranch\";\n    version = \"1.8.0\";\n    src = fetchFromGitHub {\n      owner = \"ninenines\";\n      repo = name;\n      rev = version;\n      sha256 = \"sha256-9tFgIQU5rhYE0/EY4NKRNrKoCG2xlZCoSvtihDNXyg4=\";\n    };\n  };\n\n  stopScript = pkgs.writeTextFile {\n    name = \"stop-nix\";\n    text = ''\n      #! ${pkgs.stdenv.shell} -e\n\n      PATH=\n      ROOT_DIR=\n      PROFILE_DIR=\n\n      cd $ROOT_DIR\n      export ERL_EPMD_ADDRESS=127.0.0.1\n\n      erl -pa $(echo $PROFILE_DIR/lib/*/ebin) \\\n        -noshell \\\n        -config config/sys.config \\\n        -name stopper@127.0.0.1 \\\n        -setcookie arweave \\\n        -s ar shutdown arweave@127.0.0.1 -s init stop\n    '';\n  };\n\n  startScript = pkgs.writeTextFile {\n    name = \"start-nix\";\n    text = ''\n      #! ${pkgs.stdenv.shell} -e\n\n      PATH=\n      ROOT_DIR=\n      PROFILE_DIR=\n\n      ${if crashDumpsDir == null then \"\" else \"mkdir -p ${crashDumpsDir}\"}\n      export ERL_CRASH_DUMP=${if crashDumpsDir == null then \"$(pwd)/erl_crash.dump\" else \"${crashDumpsDir}/erl_crash_$(date \\\"+%Y-%m-%d_%H-%M-%S\\\").dump\"}\n      ${if erlangCookie == null then \"\" else \"export ERLANG_COOKIE=${erlangCookie}\"}\n      cd $ROOT_DIR\n      $ROOT_DIR/bin/check-nofile\n      if [ $# -gt 0 ] && [ `uname -s` == \"Darwin\" ]; then\n        RANDOMX_JIT=\"disable randomx_jit\"\n      else\n        RANDOMX_JIT=\n      fi\n\n      : \"''${ERL_EPMD_ADDRESS:=127.0.0.1}\"\n      export ERL_EPMD_ADDRESS\n\n      erl +MBas aobf +MBlmbcs 512 +A100 +SDio100 +A100 +SDio100 +Bi \\\n       -pa $(echo $PROFILE_DIR/lib/*/ebin) \\\n       -config $ROOT_DIR/config/sys.config \\\n       -args_file $ROOT_DIR/config/vm.args.dev \\\n       -run ar main $RANDOMX_JIT \"$@\"\n    '';\n  };\n\n  startScriptForeground = pkgs.writeTextFile {\n    name = \"start-nix-foreground\";\n    text = ''\n      #! 
${pkgs.stdenv.shell} -e\n\n      PATH=\n      ROOT_DIR=\n      PROFILE_DIR=\n\n      ${if crashDumpsDir == null then \"\" else \"mkdir -p ${crashDumpsDir}\"}\n      export ERL_CRASH_DUMP=${if crashDumpsDir == null then \"$(pwd)/erl_crash.dump\" else \"${crashDumpsDir}/erl_crash_$(date \\\"+%Y-%m-%d_%H-%M-%S\\\").dump\"}\n      ${if erlangCookie == null then \"\" else \"export ERLANG_COOKIE=${erlangCookie}\"}\n      cd $PROFILE_DIR\n      $ROOT_DIR/bin/check-nofile\n      if [ $# -gt 0 ] && [ `uname -s` == \"Darwin\" ]; then\n        RANDOMX_JIT=\"disable randomx_jit\"\n      else\n        RANDOMX_JIT=\n      fi\n\n      : \"''${ERL_EPMD_ADDRESS:=127.0.0.1}\"\n      : \"''${ERL_EPMD_PATH:=${pkgs.erlang}/bin}\"\n      export ERL_EPMD_ADDRESS\n      export ERL_EPMD_PATH\n\n      export BINDIR=$ROOT_DIR/erts/bin\n      export EMU=\"beam\"\n      export TERM=\"dumb\"\n      BOOTFILE=$(echo $PROFILE_DIR/releases/*/start.boot | sed -e \"s/\\.boot$//\")\n\n      erlexec -noinput +Bd -boot \"$BOOTFILE\" \\\n       -config $ROOT_DIR/config/sys.config \\\n       -mode embedded \\\n       +MBas aobf +MBlmbcs 512 +A100 +SDio100 +A100 +SDio100 +Bi -pa $(echo $PROFILE_DIR/lib/*/ebin) \\\n       -args_file $ROOT_DIR/config/vm.args.dev \\\n       -run ar main $RANDOMX_JIT \"$@\"\n    '';\n  };\n\n  arweaveSources = ../.;\n  sourcesFilter = src:\n    let\n      srcIgnored = gitignoreFilterWith {\n        basePath = src;\n        extraRules = ''\n          .github/*\n          doc\n        '';\n      };\n    in\n    path: type:\n      srcIgnored path type;\n\n  arweaveVersion = \"2.7.4\";\n\n  mkArweaveApp = { installPhase, profile, releaseType }:\n    beamPackages.rebar3Relx {\n      inherit profile releaseType;\n      pname = \"arweave-${profile}\";\n      version = arweaveVersion;\n      src = lib.cleanSourceWith {\n        filter = sourcesFilter arweaveSources;\n        src = arweaveSources;\n        name = \"arweave-source\";\n      };\n      patches = vcPatches;\n      plugins = [\n        pkgs.beamPackages.pc\n        rebar3_archive_plugin\n        rebar3_elvis_plugin\n      ];\n\n      doStrip = false;\n\n      nativeBuildInputs = with pkgs; [ clang-tools cmake pkg-config ];\n\n      beamDeps = [\n        beamPackages.pc\n        geas_rebar3\n        rebar3_hex\n        b64fast\n        erlang-rocksdb\n        jiffy\n        accept\n        gun\n        ranch\n        cowlib\n        meck\n        cowboy\n        quantile_estimator\n        prometheus\n        prometheus_process_collector\n        prometheus_cowboy\n        prometheus_httpd\n      ];\n\n      buildInputs = with pkgs; [\n        darwin.sigtool\n        erlang\n        git\n        gmp\n        beamPackages.pc\n        ncurses\n        which\n      ];\n\n      postConfigure = ''\n        rm -rf apps/arweave/lib/RandomX\n        mkdir -p apps/arweave/lib/RandomX\n        cp -rf ${randomx}/* apps/arweave/lib/RandomX\n        cp -rf ${jiffy}/lib/erlang/lib/* apps/jiffy\n      '';\n\n      postPatch = ''\n        sed -i -e 's|-arch x86_64|-arch ${pkgs.stdenv.targetPlatform.linuxArch}|g' \\\n          apps/arweave/c_src/Makefile\n        sed -i -e 's|{b64fast,.*|{b64fast, \"0.2.2\"},|g' rebar.config\n        sed -i -e 's|{meck, \"0.8.13\"}||g' rebar.config\n      '';\n\n      installPhase = ''\n        mkdir -p $out/bin\n        cp -rf ./bin/* $out/bin\n        ${installPhase}\n        # broken symlinks fixup\n        rm -f $out/${profile}/rel/arweave/releases/*/{sys.config,vm.args.src}\n        ln -s $out/config/{sys.config,vm.args.src} 
$out/${profile}/rel/arweave/releases/*/\n\n        rm -f $out/${profile}/lib/arweave/{include,priv,src}\n        ln -s $out/${profile}/rel/arweave/lib/arweave-*/{include,priv,src} $out/${profile}/lib/arweave\n\n        rm -f $out/${profile}/lib/jiffy/{include,priv,src}\n        ln -s $out/${profile}/rel/arweave/lib/jiffy-*/{include,priv,src} $out/${profile}/lib/jiffy\n\n        rm -rf $out/${profile}/rel/arweave/lib/jiffy-*/priv\n        cp -rf ${jiffy}/lib/erlang/lib/jiffy-*/priv $out/${profile}/rel/arweave/lib/jiffy-*\n\n        rm -rf $out/${profile}/rel/arweave/lib/arweave-*/priv\n        cp -rf ./apps/arweave/priv $out/${profile}/rel/arweave/lib/arweave-*\n      '';\n    };\n\n  arweaveTestProfile = mkArweaveApp {\n    profile = \"test\";\n    releaseType = \"release\";\n    installPhase = ''\n      mkdir -p $out; cp -rf ./_build/test $out\n      cp -r ./config $out\n      ln -s ${meck}/lib/erlang/lib/meck-${meck.version} $out/test/rel/arweave/lib/\n\n      ARWEAVE_LIB_PATH=$(basename $(echo $out/test/rel/arweave/lib/arweave-*))\n      JIFFY_LIB_PATH=$(basename $(echo $out/test/rel/arweave/lib/jiffy-*))\n\n      rm -f $out/test/rel/arweave/lib/arweave-*\n      rm -f $out/test/rel/arweave/lib/jiffy-*\n\n      ln -s $out/test/lib/arweave $out/test/rel/arweave/lib/$ARWEAVE_LIB_PATH\n      ln -s $out/test/lib/jiffy $out/test/rel/arweave/lib/$JIFFY_LIB_PATH\n    '';\n  };\n  arweaveProdProfile = mkArweaveApp {\n    profile = \"prod\";\n    releaseType = \"release\";\n    installPhase = ''\n      mkdir -p $out/bin; cp -rf ./_build/prod $out\n      cp ${startScript.outPath} $out/bin/start-nix\n      cp ${startScriptForeground.outPath} $out/bin/start-nix-foreground\n      cp ${stopScript.outPath} $out/bin/stop-nix\n\n      chmod +xw $out/bin/start-nix\n      chmod +xw $out/bin/start-nix-foreground\n      chmod +xw $out/bin/stop-nix\n\n      sed -i -e \"s|ROOT_DIR=|ROOT_DIR=$out|g\" $out/bin/start-nix\n      sed -i -e \"s|PROFILE_DIR=|PROFILE_DIR=$out/prod/rel/arweave|g\" $out/bin/start-nix\n      sed -i -e \"s|PATH=|PATH=$PATH:$out/erts/bin|g\" $out/bin/start-nix\n\n      sed -i -e \"s|ROOT_DIR=|ROOT_DIR=$out|g\" $out/bin/start-nix-foreground\n      sed -i -e \"s|PROFILE_DIR=|PROFILE_DIR=$out/prod/rel/arweave|g\" $out/bin/start-nix-foreground\n      sed -i -e \"s|PATH=|PATH=$PATH:$out/erts/bin|g\" $out/bin/start-nix-foreground\n\n      sed -i -e \"s|ROOT_DIR=|ROOT_DIR=$out|g\" $out/bin/stop-nix\n      sed -i -e \"s|PROFILE_DIR=|PROFILE_DIR=$out/prod/rel/arweave|g\" $out/bin/stop-nix\n      sed -i -e \"s|PATH=|PATH=$PATH:$out/erts/bin|g\" $out/bin/stop-nix\n\n      cp -r ./config $out\n      ln -s $out/prod/rel/arweave/erts* $out/erts\n    '';\n  };\nin\n{\n  inherit arweaveTestProfile arweaveProdProfile;\n  arweave = arweaveProdProfile;\n}\n"
  },
  {
    "path": "nix/generate-config.nix",
    "content": "{ arweaveConfig, pkgs, ... }:\n\nlet\n  inherit (pkgs) lib;\n  filterTopLevelNulls = set:\n    let\n      isNotNull = value: value != null;\n    in\n    lib.filterAttrs (name: value: isNotNull value) set;\nin\npkgs.writeText \"config.json\" (builtins.toJSON (filterTopLevelNulls {\n  data_dir = arweaveConfig.dataDir;\n  log_dir = arweaveConfig.logDir;\n  storage_modules = arweaveConfig.storageModules;\n  start_from_block_index = arweaveConfig.startFromBlockIndex;\n  transaction_blacklists = arweaveConfig.transactionBlacklists;\n  transaction_whitelists = arweaveConfig.transactionWhitelists;\n  transaction_blacklist_urls = arweaveConfig.transactionBlacklistURLs;\n  max_disk_pool_buffer_mb = arweaveConfig.maxDiskPoolBufferMb;\n  max_disk_pool_data_root_buffer_mb = arweaveConfig.maxDiskPoolDataRootBufferMb;\n  max_nonce_limiter_validation_thread_count = arweaveConfig.maxVDFValidationThreadCount;\n  block_pollers = arweaveConfig.blockPollers;\n  polling = arweaveConfig.polling;\n  tx_validators = arweaveConfig.txValidators;\n  disable = arweaveConfig.featuresDisable;\n  enable = arweaveConfig.featuresEnable;\n  header_sync_jobs = arweaveConfig.headerSyncJobs;\n  sync_jobs = arweaveConfig.syncJobs;\n  disk_pool_jobs = arweaveConfig.diskPoolJobs;\n  debug = arweaveConfig.debug;\n  packing_rate = arweaveConfig.packingRate;\n  block_throttle_by_ip_interval = arweaveConfig.blockThrottleByIPInterval;\n  block_throttle_by_solution_interval = arweaveConfig.blockThrottleBySolutionInterval;\n  semaphores = {\n    get_chunk = arweaveConfig.maxParallelGetChunkRequests;\n    get_and_pack_chunk = arweaveConfig.maxParallelGetAndPackChunkRequests;\n    get_tx_data = arweaveConfig.maxParallelGetTxDataRequests;\n    post_chunk = arweaveConfig.maxParallelPostChunkRequests;\n    get_block_index = arweaveConfig.maxParallelBlockIndexRequests;\n    get_wallet_list = arweaveConfig.maxParallelWalletListRequests;\n    get_sync_record = arweaveConfig.maxParallelGetSyncRecord;\n    arql = 10;\n    gateway_arql = 10;\n  };\n  requests_per_minute_limit = arweaveConfig.requestsPerMinuteLimit;\n  max_connections = arweaveConfig.maxConnections;\n\n  requests_per_minute_limit_by_ip = lib.lists.foldr\n    (ipObj: acc: acc // {\n      \"${ipObj.ip}\" = {\n        chunk = ipObj.chunkLimit;\n        data_sync_record = ipObj.dataSyncRecordLimit;\n        default = ipObj.defaultLimit;\n      };\n    })\n    { }\n    arweaveConfig.requestsPerMinuteLimitByIp;\n}))\n"
  },
  {
    "path": "nix/module.nix",
    "content": "{ config, lib, pkgs, ... }:\n\nlet\n  cfg = config.services.arweave;\n  arweavePkg = (pkgs.callPackage ./arweave.nix {\n    inherit pkgs;\n    crashDumpsDir = cfg.crashDumpsDir;\n    erlangCookie = cfg.erlangCookie;\n    vcPatches = cfg.patches;\n  }).arweave;\n  generatedConfigFile = \"${import ./generate-config.nix { arweaveConfig = cfg; inherit pkgs; }}\";\n  arweave-service-start =\n    let\n      command = \"${cfg.package}/bin/start-nix-foreground config_file ${cfg.configFile}\";\n      peers = \"${builtins.concatStringsSep \" \" (builtins.concatMap (p: [\"peer\" p]) cfg.peer)}\";\n      vdf-peers = \"${builtins.concatStringsSep \" \" (builtins.concatMap (p: [\"vdf_client_peer\" p]) cfg.vdfClientPeer)}\";\n      vdf-server-peers = \"${builtins.concatStringsSep \" \" (builtins.concatMap (p: [\"vdf_server_trusted_peer\" p]) cfg.vdfServerTrustedPeer)}\";\n    in\n    pkgs.writeScriptBin \"arweave-start\" ''\n      #!${pkgs.bash}/bin/bash\n      # Function to handle termination and cleanup\n      cleanup() {\n        echo \"Terminating erl and killing epmd...\"\n        kill $(${pkgs.procps}/bin/pgrep epmd) || true\n        exit 0\n      }\n\n      # Set up a trap to call the cleanup function when the script is terminated\n      trap cleanup INT TERM\n      ${command} ${peers} ${vdf-peers} ${vdf-server-peers} &\n      ARWEAVE_ERL_PID=$! # capture PID of the background process\n      i=0\n      until [[ \"$(${pkgs.procps}/bin/ps -C beam &> /dev/null)\" -eq 0 || \"$i\" -ge \"200\" ]]\n      do\n        sleep 1\n        i=$((i+1))\n      done\n      if [[ \"$i\" -ge \"200\" ]]; then\n        echo \"beam process failed to start\"\n        exit 0\n      fi\n      echo \"beam process started...\"\n      wait $ARWEAVE_ERL_PID || true\n      counter=0\n      until [[ \"$(ps -C beam &> /dev/null)\" -ne 0 ]] || [[ $counter -ge 30 ]]\n      do\n        sleep 1\n        let counter++\n      done\n      cleanup\n    '';\n\nin\n{\n\n  options.services.arweave = import ./options.nix {\n    inherit lib;\n    defaultArweaveConfigFile = generatedConfigFile;\n    defaultArweavePackage = arweavePkg;\n  };\n\n  config = lib.mkIf cfg.enable {\n    systemd.services.arweave-screen = {\n      enable = false;\n    };\n\n    systemd.services.arweave = {\n      after = [ \"network.target\" ];\n      serviceConfig.Type = \"simple\";\n      serviceConfig.ExecStart = \"${arweave-service-start}/bin/arweave-start\";\n      serviceConfig.TimeoutStartSec = \"60\";\n      serviceConfig.ExecStop = \"${pkgs.bash}/bin/bash -c '${cfg.package}/bin/stop-nix || true; ${pkgs.procps}/bin/pkill beam || true; sleep 15'\";\n      serviceConfig.TimeoutStopSec = \"120\";\n      serviceConfig.RestartKillSignal = \"SIGINT\";\n    };\n  };\n}\n"
  },
  {
    "path": "nix/options.nix",
    "content": "{ lib, defaultArweaveConfigFile ? null, defaultArweavePackage ? null }:\nlet\n  inherit (lib) mkEnableOption literalExpression mkOption mkOptionals mkForce mkForceOption types;\nin\n{\n\n  enable = mkEnableOption ''\n    Enable arweave node as systemd service\n  '';\n\n  peer = mkOption {\n    type = types.nonEmptyListOf types.str;\n    default = [ ];\n    example = [ \"http://domain-or-ip.com:1984\" ];\n    description = ''\n      List of primary node peers\n    '';\n  };\n\n  vdfServerTrustedPeer = mkOption {\n    type = types.listOf types.str;\n    default = [ ];\n    example = [ \"http://domain-or-ip.com:1984\" ];\n    description = ''\n      List of trusted peers to fetch VDF outputs from\n    '';\n  };\n\n  vdfClientPeer = mkOption {\n    type = types.listOf types.str;\n    default = [ ];\n    example = [ \"http://domain-or-ip.com:1984\" ];\n    description = ''\n      List of peers to serve VDF updates to\n    '';\n  };\n\n  package = mkOption {\n    type = types.package;\n    default = defaultArweavePackage;\n    defaultText = literalExpression \"pkgs.arweave\";\n    example = literalExpression \"pkgs.arweave\";\n    description = ''\n      The Arweave expression to use\n    '';\n  };\n\n  dataDir = mkOption {\n    type = types.path;\n    default = \"/arweave-data\";\n    description = ''\n      Data directory path for arweave node.\n    '';\n  };\n\n  logDir = mkOption {\n    type = types.path;\n    default = \"/var/lib/arweave/logs\";\n    description = ''\n      Logging directory path.\n    '';\n  };\n\n  crashDumpsDir = mkOption {\n    type = types.path;\n    default = \"/var/lib/arweave/dumps\";\n    description = ''\n      Crash dumps directory path.\n    '';\n  };\n\n  erlangCookie = mkOption {\n    type = with types; nullOr str;\n    default = null;\n    description = ''\n      Erlang cookie for distributed erlang.\n    '';\n  };\n\n  storageModules = mkOption {\n    type = types.listOf types.str;\n    default = [ ];\n    example = [ \"0,1000000000000,unpacked\" \"1,1000000000000,unpacked\" ];\n    description = ''\n      List of configured storage modules.\n    '';\n  };\n\n  startFromBlockIndex = mkOption {\n    type = types.bool;\n    default = false;\n    description = \"If set, starts from the locally stored state.\";\n  };\n\n  debug = mkOption {\n    type = types.bool;\n    default = false;\n    description = \"Enable debug logging.\";\n  };\n\n  user = mkOption {\n    type = types.str;\n    default = \"arweave\";\n    description = \"Run Arweave Node under this user.\";\n  };\n\n  group = mkOption {\n    type = types.str;\n    default = \"users\";\n    description = \"Run Arweave Node under this group.\";\n  };\n\n  transactionBlacklists = mkOption {\n    type = types.listOf types.str;\n    default = [ ];\n    example = [ \"/user/arweave/blacklist.txt\" ];\n    description = ''\n      List of paths to textfiles containing blacklisted txids and/or byte ranges\n    '';\n  };\n\n  transactionBlacklistURLs = mkOption {\n    type = types.listOf types.str;\n    default = [ ];\n    example = [ \"http://example.org/blacklist.txt\" ];\n    description = ''\n      List of URLs of the endpoints serving blacklisted txids and/or byte ranges\n    '';\n  };\n\n  transactionWhitelists = mkOption {\n    type = types.listOf types.str;\n    default = [ ];\n    example = [ \"/user/arweave/whitelist.txt\" ];\n    description = ''\n      List of paths to textfiles containing whitelisted txids\n    '';\n  };\n\n  maxDiskPoolBufferMb = mkOption {\n    type = types.int;\n    
default = 2000;\n    description = \"Max disk-pool buffer size in MB.\";\n  };\n\n  maxDiskPoolDataRootBufferMb = mkOption {\n    type = types.int;\n    default = 500;\n    description = \"Max disk-pool data-root buffer size in MB.\";\n  };\n\n  blockPollers = mkOption {\n    type = types.int;\n    default = 10;\n    description = \"The number of block polling jobs.\";\n  };\n\n  polling = mkOption {\n    type = types.int;\n    default = 2;\n    description = \"The frequency of block polling, in seconds.\";\n  };\n\n  blockThrottleByIPInterval = mkOption {\n    type = types.int;\n    default = 1000;\n    description = \"\";\n  };\n\n  blockThrottleBySolutionInterval = mkOption {\n    type = types.int;\n    default = 2000;\n    description = \"\";\n  };\n\n  txValidators = mkOption {\n    type = types.int;\n    default = 10;\n    description = \"The number of transaction validation jobs.\";\n  };\n\n  packingRate = mkOption {\n    type = types.int;\n    default = 30;\n    description = \"The maximum number of chunks the node will pack per second.\";\n  };\n\n  featuresDisable = mkOption {\n    type = types.listOf types.str;\n    default = [ ];\n    example = [ \"packing\" ];\n    description = ''\n      List of features to disable.\n    '';\n  };\n\n  featuresEnable = mkOption {\n    type = types.listOf types.str;\n    default = [ ];\n    example = [ \"repair_rocksdb\" ];\n    description = ''\n      List of features to enable.\n    '';\n  };\n\n  headerSyncJobs = mkOption {\n    type = types.int;\n    default = 10;\n    description = \"The pace at which to sync historical headers.\";\n  };\n\n  syncJobs = mkOption {\n    type = types.int;\n    default = 10;\n    description = \"The pace at which to sync historical data.\";\n  };\n\n  diskPoolJobs = mkOption {\n    type = types.int;\n    default = 50;\n    description = \"The number of disk pool jobs to run.\";\n  };\n\n  maxParallelGetChunkRequests = mkOption {\n    type = types.int;\n    default = 100;\n    description = \"As a semaphore, the maximum number of parallel get chunk requests to perform.\";\n  };\n\n  maxParallelGetAndPackChunkRequests = mkOption {\n    type = types.int;\n    default = 10;\n    description = \"As a semaphore, the maximum number of parallel get-and-pack chunk requests to perform.\";\n  };\n\n  maxParallelGetTxDataRequests = mkOption {\n    type = types.int;\n    default = 10;\n    description = \"As a semaphore, the maximum number of parallel get transaction data requests to perform.\";\n  };\n\n  maxParallelPostChunkRequests = mkOption {\n    type = types.int;\n    default = 100;\n    description = \"As a semaphore, the maximum number of parallel post chunk requests to perform.\";\n  };\n\n  maxParallelBlockIndexRequests = mkOption {\n    type = types.int;\n    default = 2;\n    description = \"As a semaphore, the maximum number of parallel block index requests to perform.\";\n  };\n\n  maxParallelWalletListRequests = mkOption {\n    type = types.int;\n    default = 2;\n    description = \"As a semaphore, the maximum number of parallel wallet list requests to perform.\";\n  };\n\n  maxParallelGetSyncRecord = mkOption {\n    type = types.int;\n    default = 2;\n    description = \"As a semaphore, the maximum number of parallel get sync record requests to perform.\";\n  };\n\n  maxVDFValidationThreadCount = mkOption {\n    type = with types; nullOr int;\n    default = null;\n    description = ''\n      The number of threads to use for VDF validation.\n      Note that the default value (null) falls back at runtime\n      to `max(1, 
erlang:system_info(schedulers_online) div 2)`.\n    '';\n  };\n\n  requestsPerMinuteLimit = mkOption {\n    type = types.int;\n    default = 2500;\n    description = \"A rate limiter to prevent the node from receiving too many HTTP requests over a 1-minute period.\";\n  };\n\n  requestsPerMinuteLimitByIp = mkOption {\n    type = types.listOf (types.submodule {\n      options = {\n        ip = mkOption {\n          type = types.str;\n          description = ''\n            IP address of the client to rate limit\n          '';\n        };\n        chunkLimit = mkOption {\n          type = types.int;\n          description = ''\n            The rate limit for chunk data requests over a 1-minute period\n          '';\n        };\n        dataSyncRecordLimit = mkOption {\n          type = types.int;\n          description = ''\n            The rate limit for data_sync_record requests over a 1-minute period\n          '';\n        };\n        defaultLimit = mkOption {\n          type = types.int;\n          description = ''\n            The default rate limit for all other requests over a 1-minute period\n          '';\n        };\n      };\n    });\n    default = [ ];\n    description = \"A rate limiter to prevent the node from receiving too many HTTP requests over a 1-minute period.\";\n  };\n\n  maxConnections = mkOption {\n    type = types.int;\n    default = 1024;\n    description = \"Maximum allowed TCP connections.\";\n  };\n\n  configFile = mkOption {\n    type = types.path;\n    default = defaultArweaveConfigFile;\n    description = \"The generated Arweave config file.\";\n  };\n\n  patches = mkOption {\n    type = types.listOf (types.either types.path types.str);\n    default = [ ];\n    example = [ \"https://example.com/patch1\" ];\n    description = ''\n      List of patches (local paths or URLs) to apply to the sources before building\n    '';\n  };\n\n}\n"
  },
  {
    "path": "notebooks/README.md",
    "content": "# Jupyter Erlang Notebooks\n\nThis directory contains Jupyter notebooks using the Erlang kernel for interactive Arweave tests.\n\n## Prerequisites\n\n- Python 3 with venv + pip\n- Erlang kernel for Jupyter\n\nQuick setup:\n\n```\nscripts/setup_notebook_env.sh\n```\n\nThe setup script writes a local kernelspec under `.tmp/jupyter` and configures\nit to call `ierl_kernel.sh` from the `scripts/` directory (added to PATH by\nthe runner scripts).\n\nOn kernel startup, the wrapper compiles the localnet profile (if needed) and\nadds `_build/localnet/lib` via `ERL_LIBS`, so local modules are available in\nthe notebooks.\n\n## Run headless\n\n```\nscripts/run_notebook_headless.sh pricing_transition_localnet\n```\n\n## Run interactive\n\n```\nscripts/run_notebook.sh\n```\n\nYou can open specific notebooks after jupyter launches.\n\n## Notebook outputs\n\nWhen using the repo scripts, notebooks are configured to strip cell outputs on\nsave. To keep outputs in the file for a session, set `NOTEBOOK_SAVE_OUTPUTS=1`\nbefore launching the notebook (interactive or headless).\n\n## Environment variables\n\n- `ERLANG_JUPYTER_KERNEL`: kernelspec name (default `erlang`)\n- `IERL_URL`: download URL for the ierl escript\n- `IERL_PATH`: local path for the ierl escript (default `.tmp/ierl`)\n- `JUPYTER_DATA_DIR`: where kernelspecs live (default `.tmp/jupyter`)\n- `JUPYTER_CONFIG_DIR`: where Jupyter config lives (default `.jupyter`)\n- `JUPYTER_PORT`: interactive server port (default `8888`)\n- `JUPYTER_OPEN_BROWSER`: set to `true` to open a browser (default `true`)\n- `EXEC_TIMEOUT_SEC`: nbconvert execution timeout in seconds (default `1200`)\n- `LOCALNET_NODE_NAME`: shortname only (default `main-localnet`). If you pass\n  `name@host`, the host is used for RPC node selection, but shortnames are\n  still used for local nodes.\n- `NOTEBOOK_SKIP_COMPILE`: set to `1` to skip compile on kernel startup.\n"
  },
  {
    "path": "notebooks/autoredenomination_localnet.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"3c79455f\",\n   \"metadata\": {},\n   \"source\": [\n    \"# Autoredenomination in localnet\\n\",\n    \"\\n\",\n    \"## Overview\\n\",\n    \"\\n\",\n    \"There is a mechanism in place where Arweave may redenominate its token: every balance, price, and reward\\n\",\n    \"is multiplied by 1000 once the circulating supply crosses a protocol-defined threshold.\\n\",\n    \"\\n\",\n    \"This notebook drives a full redenomination cycle on a running localnet node and validates\\n\",\n    \"that every observable quantity transitions correctly:\\n\",\n    \"\\n\",\n    \"1. **Setup** – connect to the node, compile record accessors, define HTTP and utility helpers.\\n\",\n    \"2. **Pre-redenomination** – mine past the pricing transition, load the mining wallet,\\n\",\n    \"   and override redenomination parameters so the cycle triggers quickly.\\n\",\n    \"3. **Trigger** – submit transactions to push the reward pool past the\\n\",\n    \"   threshold, mine the trigger block, and assert block reward and endowment pool updates.\\n\",\n    \"4. **Redenomination** – mine through the scheduled redenomination height, verify the\\n\",\n    \"   denomination increments, all HTTP pricing/wallet endpoints scale by 1000×, and\\n\",\n    \"   block rewards and endowment pool updates follow the expected formula across the boundary.\\n\",\n    \"5. **Post-redenomination** – validate every HTTP endpoint independently at the new\\n\",\n    \"   denomination.\\n\",\n    \"6. **Cleanup** – restore overridden parameters.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"a8b745a8\",\n   \"metadata\": {},\n   \"source\": [\n    \"## Setup\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"cf75077e\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Connect to the localnet node\\n\",\n    \"\\n\",\n    \"Starts a distributed Erlang node with long names, sets the cookie to `localnet`, and pings `main-localnet@127.0.0.1` to confirm connectivity.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 1,\n   \"id\": \"46391a9c\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:45:52.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:45:52.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:45:52.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:45:52.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"true\\n\"\n      ]\n     },\n     \"execution_count\": 1,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"Cookie = 'localnet',\\n\",\n    \"Node = 'main-localnet@127.0.0.1',\\n\",\n    \"\\n\",\n    \"HostHasDot =\\n\",\n    \"\\tcase string:split(atom_to_list(node()), \\\"@\\\") of\\n\",\n    \"\\t\\t[_Name, Host] ->\\n\",\n    \"\\t\\t\\tcase string:find(Host, \\\".\\\") of\\n\",\n    \"\\t\\t\\t\\tnomatch ->\\n\",\n    \"\\t\\t\\t\\t\\tfalse;\\n\",\n    \"\\t\\t\\t\\t_ ->\\n\",\n    \"\\t\\t\\t\\t\\ttrue\\n\",\n    \"\\t\\t\\tend;\\n\",\n    \"\\t\\t_ ->\\n\",\n    \"\\t\\t\\tfalse\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"_ =\\n\",\n    \"\\tcase {node(), HostHasDot} of\\n\",\n    \"\\t\\t{nonode@nohost, _} ->\\n\",\n    \"\\t\\t\\tnet_kernel:start([list_to_atom(\\\"redenom_notebook@127.0.0.1\\\"), longnames]);\\n\",\n    \"\\t\\t{_, true} ->\\n\",\n    \"\\t\\t\\tok;\\n\",\n    \"\\t\\t{_, false} ->\\n\",\n    
\"\\t\\t\\tnet_kernel:stop(),\\n\",\n    \"\\t\\t\\tnet_kernel:start([list_to_atom(\\\"redenom_notebook@127.0.0.1\\\"), longnames])\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"erlang:set_cookie(node(), Cookie).\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 2,\n   \"id\": \"269375fb\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:45:52.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:45:52.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:45:52.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:45:52.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"{'main-localnet@127.0.0.1',pong}\\n\"\n      ]\n     },\n     \"execution_count\": 2,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"{Node, net_adm:ping(Node)}.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"484f8d02\",\n   \"metadata\": {},\n   \"source\": [\n    \"### RPC and mining helpers\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"766529a1\",\n   \"metadata\": {},\n   \"source\": [\n    \"- `RPCCall(M, F, A)` -- calls `M:F(A)` on the remote node.\\n\",\n    \"- `RPCHeight()` -- returns the current block height.\\n\",\n    \"- `MineUntilHeight(H)` -- asks localnet to mine up to height H and polls until the node reaches it.\\n\",\n    \"- `RPCBlockHashByHeight(H)`, `RPCBlockByHeight(H)` -- fetch block hash/record from the remote node's storage.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 3,\n   \"id\": \"4135962d\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:45:52.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:45:52.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:45:52.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:45:52.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"#Fun<erl_eval.42.105768164>\\n\"\n      ]\n     },\n     \"execution_count\": 3,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"RPCCall = fun(M, F, A) -> rpc:call(Node, M, F, A) end,\\n\",\n    \"RPCHeight = fun() -> RPCCall(ar_node, get_height, []) end,\\n\",\n    \"\\n\",\n    \"WaitForHeight =\\n\",\n    \"\\tfun\\n\",\n    \"\\t\\t(_, _TargetHeight, 0) ->\\n\",\n    \"\\t\\t\\t{error, mine_until_height_timeout};\\n\",\n    \"\\t\\t(Self, TargetHeight, AttemptsLeft) ->\\n\",\n    \"\\t\\t\\tcase RPCHeight() >= TargetHeight of\\n\",\n    \"\\t\\t\\t\\ttrue ->\\n\",\n    \"\\t\\t\\t\\t\\tok;\\n\",\n    \"\\t\\t\\t\\tfalse ->\\n\",\n    \"\\t\\t\\t\\t\\ttimer:sleep(100),\\n\",\n    \"\\t\\t\\t\\t\\tSelf(Self, TargetHeight, AttemptsLeft - 1)\\n\",\n    \"\\t\\t\\tend\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"MineUntilHeight =\\n\",\n    \"\\tfun(TargetHeight) ->\\n\",\n    \"\\t\\tMineResult = RPCCall(ar_localnet, mine_until_height, [TargetHeight]),\\n\",\n    \"\\t\\tok =\\n\",\n    \"\\t\\t\\tcase MineResult of\\n\",\n    \"\\t\\t\\t\\tok ->\\n\",\n    \"\\t\\t\\t\\t\\tok;\\n\",\n    \"\\t\\t\\t\\t[] ->\\n\",\n    \"\\t\\t\\t\\t\\tok;\\n\",\n    \"\\t\\t\\t\\tOther ->\\n\",\n    \"\\t\\t\\t\\t\\t{error, {unexpected_mine_until_height_result, Other}}\\n\",\n    \"\\t\\t\\tend,\\n\",\n    \"\\t\\tWaitForHeight(WaitForHeight, TargetHeight, 1200)\\n\",\n    \"\\tend.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   
\"execution_count\": 4,\n   \"id\": \"a2069d05\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:45:52.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:45:52.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:45:52.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:45:52.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"#Fun<erl_eval.42.105768164>\\n\"\n      ]\n     },\n     \"execution_count\": 4,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"RPCBlockHashByHeight =\\n\",\n    \"\\tfun(Height) ->\\n\",\n    \"\\t\\tRPCCall(ar_block_index, get_element_by_height, [Height])\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"RPCBlockByHeight =\\n\",\n    \"\\tfun(Height) ->\\n\",\n    \"\\t\\tHash = RPCBlockHashByHeight(Height),\\n\",\n    \"\\t\\tRPCCall(ar_storage, read_block, [Hash])\\n\",\n    \"\\tend.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"e95d49fe\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Record accessor modules\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"de137927\",\n   \"metadata\": {},\n   \"source\": [\n    \"Compiles accessor modules at runtime so the notebook can read Erlang record fields:\\n\",\n    \"- `nb_block` -- accessors for `#block{}` fields (height, denomination, reward_pool, reward, etc.).\\n\",\n    \"- `nb_config` -- accessor for `#config.mining_addr`.\\n\",\n    \"- `nb_tx` -- accessors for `#tx{}` fields (reward, denomination, id).\\n\",\n    \"- `nb_pricing` -- exposes `?TOTAL_SUPPLY`, `?GiB`, and computes miner/endowment fee shares using `?MINER_FEE_SHARE`.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 5,\n   \"id\": \"a6ed630a\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:45:52.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:45:52.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:45:52.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:45:52.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"ok\\n\"\n      ]\n     },\n     \"execution_count\": 5,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"TmpDir = \\\".tmp/notebooks/\\\",\\n\",\n    \"ok = filelib:ensure_dir(filename:join([TmpDir, \\\"keep\\\"])),\\n\",\n    \"\\n\",\n    \"CompileModule =\\n\",\n    \"\\tfun(Name, Source) ->\\n\",\n    \"\\t\\tPath = filename:join([TmpDir, Name ++ \\\".erl\\\"]),\\n\",\n    \"\\t\\tok = file:write_file(Path, Source),\\n\",\n    \"\\t\\t{ok, Module, Bin} = compile:file(Path, [binary]),\\n\",\n    \"\\t\\t{module, Module} = code:load_binary(Module, Path, Bin)\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"BlockAccessors =\\n\",\n    \"\\tlists:flatten([\\n\",\n    \"\\t\\t\\\"-module(nb_block).\\\\n\\\",\\n\",\n    \"\\t\\t\\\"-export([height/1, denomination/1, redenomination_height/1, reward_pool/1,\\\\n\\\",\\n\",\n    \"\\t\\t\\\"         reward/1, wallet_list/1, txs/1, weave_size/1, reward_addr/1,\\\\n\\\",\\n\",\n    \"\\t\\t\\\"         debt_supply/1, price_per_gib_minute/1, reward_history/1,\\\\n\\\",\\n\",\n    \"\\t\\t\\\"         kryder_plus_rate_multiplier/1, kryder_plus_rate_multiplier_latch/1]).\\\\n\\\",\\n\",\n    \"\\t\\t\\\"-include_lib(\\\\\\\"arweave/include/ar.hrl\\\\\\\").\\\\n\\\",\\n\",\n    \"\\t\\t\\\"height(B) -> 
B#block.height.\\\\n\\\",\\n\",\n    \"\\t\\t\\\"denomination(B) -> B#block.denomination.\\\\n\\\",\\n\",\n    \"\\t\\t\\\"redenomination_height(B) -> B#block.redenomination_height.\\\\n\\\",\\n\",\n    \"\\t\\t\\\"reward_pool(B) -> B#block.reward_pool.\\\\n\\\",\\n\",\n    \"\\t\\t\\\"reward(B) -> B#block.reward.\\\\n\\\",\\n\",\n    \"\\t\\t\\\"wallet_list(B) -> B#block.wallet_list.\\\\n\\\",\\n\",\n    \"\\t\\t\\\"txs(B) -> B#block.txs.\\\\n\\\",\\n\",\n    \"\\t\\t\\\"weave_size(B) -> B#block.weave_size.\\\\n\\\",\\n\",\n    \"\\t\\t\\\"reward_addr(B) -> B#block.reward_addr.\\\\n\\\",\\n\",\n    \"\\t\\t\\\"debt_supply(B) -> B#block.debt_supply.\\\\n\\\",\\n\",\n    \"\\t\\t\\\"price_per_gib_minute(B) -> B#block.price_per_gib_minute.\\\\n\\\",\\n\",\n    \"\\t\\t\\\"reward_history(B) -> B#block.reward_history.\\\\n\\\",\\n\",\n    \"\\t\\t\\\"kryder_plus_rate_multiplier(B) -> B#block.kryder_plus_rate_multiplier.\\\\n\\\",\\n\",\n    \"\\t\\t\\\"kryder_plus_rate_multiplier_latch(B) -> B#block.kryder_plus_rate_multiplier_latch.\\\\n\\\"\\n\",\n    \"\\t]),\\n\",\n    \"\\n\",\n    \"ConfigAccessors =\\n\",\n    \"\\tlists:flatten([\\n\",\n    \"\\t\\t\\\"-module(nb_config).\\\\n\\\",\\n\",\n    \"\\t\\t\\\"-export([mining_addr/1]).\\\\n\\\",\\n\",\n    \"\\t\\t\\\"-include_lib(\\\\\\\"arweave_config/include/arweave_config.hrl\\\\\\\").\\\\n\\\",\\n\",\n    \"\\t\\t\\\"mining_addr(C) -> C#config.mining_addr.\\\\n\\\"\\n\",\n    \"\\t]),\\n\",\n    \"\\n\",\n    \"TXAccessors =\\n\",\n    \"\\tlists:flatten([\\n\",\n    \"\\t\\t\\\"-module(nb_tx).\\\\n\\\",\\n\",\n    \"\\t\\t\\\"-export([reward/1, denomination/1, id/1]).\\\\n\\\",\\n\",\n    \"\\t\\t\\\"-include_lib(\\\\\\\"arweave/include/ar.hrl\\\\\\\").\\\\n\\\",\\n\",\n    \"\\t\\t\\\"reward(TX) -> TX#tx.reward.\\\\n\\\",\\n\",\n    \"\\t\\t\\\"denomination(TX) -> TX#tx.denomination.\\\\n\\\",\\n\",\n    \"\\t\\t\\\"id(TX) -> TX#tx.id.\\\\n\\\"\\n\",\n    \"\\t]),\\n\",\n    \"\\n\",\n    \"PricingAccessors =\\n\",\n    \"\\tlists:flatten([\\n\",\n    \"\\t\\t\\\"-module(nb_pricing).\\\\n\\\",\\n\",\n    \"\\t\\t\\\"-export([total_supply/0, miner_fee_share/1, endowment_fee_share/1, gib/0]).\\\\n\\\",\\n\",\n    \"\\t\\t\\\"-include_lib(\\\\\\\"arweave/include/ar.hrl\\\\\\\").\\\\n\\\",\\n\",\n    \"\\t\\t\\\"-include_lib(\\\\\\\"arweave/include/ar_pricing.hrl\\\\\\\").\\\\n\\\",\\n\",\n    \"\\t\\t\\\"total_supply() -> ?TOTAL_SUPPLY.\\\\n\\\",\\n\",\n    \"\\t\\t\\\"gib() -> ?GiB.\\\\n\\\",\\n\",\n    \"\\t\\t\\\"miner_fee_share(TXFee) ->\\\\n\\\",\\n\",\n    \"\\t\\t\\\"\\\\t{Dividend, Divisor} = ?MINER_FEE_SHARE,\\\\n\\\",\\n\",\n    \"\\t\\t\\\"\\\\tTXFee * Dividend div Divisor.\\\\n\\\",\\n\",\n    \"\\t\\t\\\"endowment_fee_share(TXFee) ->\\\\n\\\",\\n\",\n    \"\\t\\t\\\"\\\\tTXFee - miner_fee_share(TXFee).\\\\n\\\"\\n\",\n    \"\\t]),\\n\",\n    \"\\n\",\n    \"CompileModule(\\\"nb_block\\\", BlockAccessors),\\n\",\n    \"CompileModule(\\\"nb_config\\\", ConfigAccessors),\\n\",\n    \"CompileModule(\\\"nb_tx\\\", TXAccessors),\\n\",\n    \"CompileModule(\\\"nb_pricing\\\", PricingAccessors),\\n\",\n    \"ok.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"b70f2845\",\n   \"metadata\": {},\n   \"source\": [\n    \"### HTTP helpers\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"7386e71d\",\n   \"metadata\": {},\n   \"source\": [\n    \"Defines HTTP helpers for the localnet node:\\n\",\n    \"- `HTTPGet(URL)` – GET request returning `{ok, Body}` or `{error, Reason}`.\\n\",\n    \"- 
`HTTPPostJSON(URL, Body)` – POST with JSON content type.\\n\",\n    \"- `HTTPGetInteger(URL)` – GET, parse body as a non-negative integer.\\n\",\n    \"- `HTTPGetJSONFee(URL)` – GET, parse the `\\\"fee\\\"` field from a JSON response.\\n\",\n    \"- `HTTPGetJSONMap(URL)` – GET, parse body as a JSON map.\\n\",\n    \"- `HTTPGetJSONList(URL)` – GET, parse body as a JSON list.\\n\",\n    \"- `HTTPAssertNonEmpty(URL)` – GET, assert body is non-empty.\\n\",\n    \"- `HTTPAssertBase64Bytes(URL, ExpectedSize)` – GET, decode base64 body and assert byte size.\\n\",\n    \"- `HTTPGetBlockHash(Height)` – GET block JSON by height, extract `indep_hash`.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 6,\n   \"id\": \"d4a35d59\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:45:52.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:45:52.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:45:52.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:45:52.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"ok\\n\"\n      ]\n     },\n     \"execution_count\": 6,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"{ok, _} = application:ensure_all_started(inets),\\n\",\n    \"\\n\",\n    \"LocalnetHTTPHost =\\n\",\n    \"\\tcase os:getenv(\\\"LOCALNET_HTTP_HOST\\\") of\\n\",\n    \"\\t\\tfalse ->\\n\",\n    \"\\t\\t\\t\\\"127.0.0.1\\\";\\n\",\n    \"\\t\\tValue ->\\n\",\n    \"\\t\\t\\tValue\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"LocalnetHTTPPort =\\n\",\n    \"\\tcase os:getenv(\\\"LOCALNET_HTTP_PORT\\\") of\\n\",\n    \"\\t\\tfalse ->\\n\",\n    \"\\t\\t\\t\\\"1984\\\";\\n\",\n    \"\\t\\tValue ->\\n\",\n    \"\\t\\t\\tValue\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"LocalnetNetworkName =\\n\",\n    \"\\tcase os:getenv(\\\"LOCALNET_HTTP_NETWORK\\\") of\\n\",\n    \"\\t\\tfalse ->\\n\",\n    \"\\t\\t\\tcase os:getenv(\\\"LOCALNET_NETWORK_NAME\\\") of\\n\",\n    \"\\t\\t\\t\\tfalse ->\\n\",\n    \"\\t\\t\\t\\t\\t\\\"arweave.localnet\\\";\\n\",\n    \"\\t\\t\\t\\tValue2 ->\\n\",\n    \"\\t\\t\\t\\t\\tValue2\\n\",\n    \"\\t\\t\\tend;\\n\",\n    \"\\t\\tValue ->\\n\",\n    \"\\t\\t\\tValue\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"BaseUrl = \\\"http://\\\" ++ LocalnetHTTPHost ++ \\\":\\\" ++ LocalnetHTTPPort,\\n\",\n    \"\\n\",\n    \"HTTPGet =\\n\",\n    \"\\tfun(Url) ->\\n\",\n    \"\\t\\tHeaders = [{\\\"x-network\\\", LocalnetNetworkName}],\\n\",\n    \"\\t\\tcase httpc:request(get, {Url, Headers}, [], []) of\\n\",\n    \"\\t\\t\\t{ok, {{_, 200, _}, _RespHeaders, Body}} ->\\n\",\n    \"\\t\\t\\t\\t{ok, Body};\\n\",\n    \"\\t\\t\\t{ok, {{_, Status, _}, _RespHeaders, Body}} ->\\n\",\n    \"\\t\\t\\t\\t{error, {http_status, Status, Body}};\\n\",\n    \"\\t\\t\\tError ->\\n\",\n    \"\\t\\t\\t\\t{error, Error}\\n\",\n    \"\\t\\tend\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"HTTPPostJSON =\\n\",\n    \"\\tfun(Url, Body) ->\\n\",\n    \"\\t\\tHeaders = [{\\\"content-type\\\", \\\"application/json\\\"}, {\\\"x-network\\\", LocalnetNetworkName}],\\n\",\n    \"\\t\\tcase httpc:request(post, {Url, Headers, \\\"application/json\\\", Body}, [], []) of\\n\",\n    \"\\t\\t\\t{ok, {{_, 200, _}, _RespHeaders, RespBody}} ->\\n\",\n    \"\\t\\t\\t\\t{ok, RespBody};\\n\",\n    \"\\t\\t\\t{ok, {{_, Status, _}, _RespHeaders, RespBody}} ->\\n\",\n    \"\\t\\t\\t\\t{error, {http_status, Status, RespBody}};\\n\",\n    
\"\\t\\t\\tError ->\\n\",\n    \"\\t\\t\\t\\t{error, Error}\\n\",\n    \"\\t\\tend\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"HTTPGetInteger =\\n\",\n    \"\\tfun(Url) ->\\n\",\n    \"\\t\\tcase HTTPGet(Url) of\\n\",\n    \"\\t\\t\\t{ok, Body} ->\\n\",\n    \"\\t\\t\\t\\tBin = iolist_to_binary(Body),\\n\",\n    \"\\t\\t\\t\\tcase catch binary_to_integer(Bin) of\\n\",\n    \"\\t\\t\\t\\t\\t{'EXIT', _} ->\\n\",\n    \"\\t\\t\\t\\t\\t\\t{error, {not_integer, Bin}};\\n\",\n    \"\\t\\t\\t\\t\\tValue ->\\n\",\n    \"\\t\\t\\t\\t\\t\\tcase Value >= 0 of\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\ttrue ->\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\t\\tValue;\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\tfalse ->\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\t\\t{error, {negative_integer, Value}}\\n\",\n    \"\\t\\t\\t\\t\\t\\tend\\n\",\n    \"\\t\\t\\t\\tend;\\n\",\n    \"\\t\\t\\t{error, Reason} ->\\n\",\n    \"\\t\\t\\t\\t{error, Reason}\\n\",\n    \"\\t\\tend\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"HTTPGetJSONFee =\\n\",\n    \"\\tfun(Url) ->\\n\",\n    \"\\t\\tcase HTTPGet(Url) of\\n\",\n    \"\\t\\t\\t{ok, Body} ->\\n\",\n    \"\\t\\t\\t\\tMap = jiffy:decode(Body, [return_maps]),\\n\",\n    \"\\t\\t\\t\\tcase maps:get(<<\\\"fee\\\">>, Map, undefined) of\\n\",\n    \"\\t\\t\\t\\t\\tundefined ->\\n\",\n    \"\\t\\t\\t\\t\\t\\t{error, {missing_fee, Map}};\\n\",\n    \"\\t\\t\\t\\t\\tFeeBin ->\\n\",\n    \"\\t\\t\\t\\t\\t\\tcase catch binary_to_integer(FeeBin) of\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\t{'EXIT', _} ->\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\t\\t{error, {not_integer, FeeBin}};\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\tValue ->\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\t\\tcase Value >= 0 of\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\t\\t\\ttrue ->\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\t\\t\\t\\tValue;\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\t\\t\\tfalse ->\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\t\\t\\t\\t{error, {negative_integer, Value}}\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\t\\tend\\n\",\n    \"\\t\\t\\t\\t\\t\\tend\\n\",\n    \"\\t\\t\\t\\tend;\\n\",\n    \"\\t\\t\\t{error, Reason} ->\\n\",\n    \"\\t\\t\\t\\t{error, Reason}\\n\",\n    \"\\t\\tend\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"HTTPGetJSONMap =\\n\",\n    \"\\tfun(Url) ->\\n\",\n    \"\\t\\tcase HTTPGet(Url) of\\n\",\n    \"\\t\\t\\t{ok, Body} ->\\n\",\n    \"\\t\\t\\t\\tcase jiffy:decode(Body, [return_maps]) of\\n\",\n    \"\\t\\t\\t\\t\\tMap when is_map(Map) ->\\n\",\n    \"\\t\\t\\t\\t\\t\\tMap;\\n\",\n    \"\\t\\t\\t\\t\\tOther ->\\n\",\n    \"\\t\\t\\t\\t\\t\\t{error, {not_map, Other}}\\n\",\n    \"\\t\\t\\t\\tend;\\n\",\n    \"\\t\\t\\t{error, Reason} ->\\n\",\n    \"\\t\\t\\t\\t{error, Reason}\\n\",\n    \"\\t\\tend\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"HTTPGetJSONList =\\n\",\n    \"\\tfun(Url) ->\\n\",\n    \"\\t\\tcase HTTPGet(Url) of\\n\",\n    \"\\t\\t\\t{ok, Body} ->\\n\",\n    \"\\t\\t\\t\\tcase jiffy:decode(Body) of\\n\",\n    \"\\t\\t\\t\\t\\tList when is_list(List) ->\\n\",\n    \"\\t\\t\\t\\t\\t\\tList;\\n\",\n    \"\\t\\t\\t\\t\\tOther ->\\n\",\n    \"\\t\\t\\t\\t\\t\\t{error, {not_list, Other}}\\n\",\n    \"\\t\\t\\t\\tend;\\n\",\n    \"\\t\\t\\t{error, Reason} ->\\n\",\n    \"\\t\\t\\t\\t{error, Reason}\\n\",\n    \"\\t\\tend\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"HTTPAssertNonEmpty =\\n\",\n    \"\\tfun(Url) ->\\n\",\n    \"\\t\\tcase HTTPGet(Url) of\\n\",\n    \"\\t\\t\\t{ok, Body} ->\\n\",\n    \"\\t\\t\\t\\tcase byte_size(iolist_to_binary(Body)) > 0 of\\n\",\n    \"\\t\\t\\t\\t\\ttrue ->\\n\",\n    \"\\t\\t\\t\\t\\t\\tok;\\n\",\n    \"\\t\\t\\t\\t\\tfalse 
->\\n\",\n    \"\\t\\t\\t\\t\\t\\t{error, {empty_body, Url}}\\n\",\n    \"\\t\\t\\t\\tend;\\n\",\n    \"\\t\\t\\t{error, Reason} ->\\n\",\n    \"\\t\\t\\t\\t{error, Reason}\\n\",\n    \"\\t\\tend\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"HTTPAssertBase64Bytes =\\n\",\n    \"\\tfun(Url, ExpectedSize) ->\\n\",\n    \"\\t\\tcase HTTPGet(Url) of\\n\",\n    \"\\t\\t\\t{ok, Body} ->\\n\",\n    \"\\t\\t\\t\\tBin = iolist_to_binary(Body),\\n\",\n    \"\\t\\t\\t\\tcase catch ar_util:decode(Bin) of\\n\",\n    \"\\t\\t\\t\\t\\t{'EXIT', _} ->\\n\",\n    \"\\t\\t\\t\\t\\t\\t{error, {invalid_base64, Bin}};\\n\",\n    \"\\t\\t\\t\\t\\tDecoded ->\\n\",\n    \"\\t\\t\\t\\t\\t\\tcase byte_size(Decoded) == ExpectedSize of\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\ttrue ->\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\t\\tok;\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\tfalse ->\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\t\\t{error, {unexpected_size, byte_size(Decoded), ExpectedSize}}\\n\",\n    \"\\t\\t\\t\\t\\t\\tend\\n\",\n    \"\\t\\t\\t\\tend;\\n\",\n    \"\\t\\t\\t{error, Reason} ->\\n\",\n    \"\\t\\t\\t\\t{error, Reason}\\n\",\n    \"\\t\\tend\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"HTTPGetBlockHash =\\n\",\n    \"\\tfun(Height) ->\\n\",\n    \"\\t\\tUrl = BaseUrl ++ \\\"/block/height/\\\" ++ integer_to_list(Height),\\n\",\n    \"\\t\\tMap = HTTPGetJSONMap(Url),\\n\",\n    \"\\t\\tcase maps:get(<<\\\"indep_hash\\\">>, Map, undefined) of\\n\",\n    \"\\t\\t\\tundefined ->\\n\",\n    \"\\t\\t\\t\\t{error, {missing_indep_hash, Height, Map}};\\n\",\n    \"\\t\\t\\tHash when is_binary(Hash) ->\\n\",\n    \"\\t\\t\\t\\tHash;\\n\",\n    \"\\t\\t\\tHash ->\\n\",\n    \"\\t\\t\\t\\tiolist_to_binary(Hash)\\n\",\n    \"\\t\\tend\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"ok.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"utility-helpers-header\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Utility helpers\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"utility-helpers-doc\",\n   \"metadata\": {},\n   \"source\": [\n    \"- `DecodeBase64(Value)` – decodes a base64url-encoded value, returning `{ok, Decoded}` or `{error, Reason}`.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 7,\n   \"id\": \"utility-helpers-code\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:45:52.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:45:52.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:45:52.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:45:52.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"#Fun<erl_eval.42.105768164>\\n\"\n      ]\n     },\n     \"execution_count\": 7,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"DecodeBase64 =\\n\",\n    \"\\tfun(Value) ->\\n\",\n    \"\\t\\tBin = iolist_to_binary(Value),\\n\",\n    \"\\t\\tcase catch ar_util:decode(Bin) of\\n\",\n    \"\\t\\t\\t{'EXIT', _} ->\\n\",\n    \"\\t\\t\\t\\t{error, {invalid_base64, Bin}};\\n\",\n    \"\\t\\t\\tDecoded ->\\n\",\n    \"\\t\\t\\t\\t{ok, Decoded}\\n\",\n    \"\\t\\tend\\n\",\n    \"\\tend.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"8f29972b\",\n   \"metadata\": {},\n   \"source\": [\n    \"## Pre-redenomination\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"a64732c9\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Read chain state\\n\",\n    \"\\n\",\n    \"Reads the current block 
height and displays the starting denomination, redenomination_height, reward_pool, and debt_supply.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 8,\n   \"id\": \"ff2f165f\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:45:52.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:45:52.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:45:53.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:45:53.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"#{height => 2069888,denomination => 4,redenomination_height => 2069886,\\n\",\n       \"  reward_pool => 292873436146306634442575000,debt_supply => 0}\\n\"\n      ]\n     },\n     \"execution_count\": 8,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"Height0 = RPCHeight(),\\n\",\n    \"Block0 = RPCBlockByHeight(Height0),\\n\",\n    \"#{ height => nb_block:height(Block0),\\n\",\n    \"  denomination => nb_block:denomination(Block0),\\n\",\n    \"  redenomination_height => nb_block:redenomination_height(Block0),\\n\",\n    \"  reward_pool => nb_block:reward_pool(Block0),\\n\",\n    \"  debt_supply => nb_block:debt_supply(Block0) }.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"b0217a01\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Load mining address and wallet key\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"089e1b85\",\n   \"metadata\": {},\n   \"source\": [\n    \"Reads the mining address from the node's config and loads the corresponding wallet key pair via RPC. The wallet is used to sign transactions later.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 9,\n   \"id\": \"e3422be8\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:45:53.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:45:53.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:45:53.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:45:53.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"#{mining_addr => <<\\\"yjrOLPSHP1cvZ_y1bSLkLd8lIWhu9dsbSufXm9-QjsY\\\">>}\\n\"\n      ]\n     },\n     \"execution_count\": 9,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"{ok, Config} = RPCCall(arweave_config, get_env, []),\\n\",\n    \"MiningAddr = nb_config:mining_addr(Config),\\n\",\n    \"\\n\",\n    \"#{mining_addr => ar_util:encode(MiningAddr)}.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 10,\n   \"id\": \"e9f8e98b\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:45:53.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:45:53.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:45:53.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:45:53.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"{{{ecdsa,secp256k1},\\n\",\n       \"  <<157,27,57,209,187,167,148,55,118,249,37,206,94,220,143,242,208,115,56,154,\\n\",\n       \"    175,32,127,73,136,71,4,177,87,89,241,31>>,\\n\",\n       \"  <<2,215,4,40,91,61,226,129,118,138,18,7,154,221,140,229,244,16,43,144,42,9,\\n\",\n       \"    26,114,36,102,67,23,84,139,196,122,22>>},\\n\",\n       \" {{ecdsa,secp256k1},\\n\",\n       \"  
<<2,215,4,40,91,61,226,129,118,138,18,7,154,221,140,229,244,16,43,144,42,9,\\n\",\n       \"    26,114,36,102,67,23,84,139,196,122,22>>}}\\n\"\n      ]\n     },\n     \"execution_count\": 10,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"MiningWallet = RPCCall(ar_wallet, load_key, [MiningAddr]).\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"4331a098\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Override redenomination parameters\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"7f02ad39\",\n   \"metadata\": {},\n   \"source\": [\n    \"Saves the current values of `redenomination_threshold`, `redenomination_delay_blocks`, and `locked_rewards_blocks`, then overrides them on the remote node:\\n\",\n    \"- `redenomination_delay_blocks = 2` (redenomination fires 2 blocks after scheduling).\\n\",\n    \"- `locked_rewards_blocks = 1` (rewards unlock after 1 block).\\n\",\n    \"- The redenomination threshold is set later, after mining reveals the available supply.\\n\",\n    \"\\n\",\n    \"Also defines local helpers `Pow1000Local` (computes 1000^N) and `RedenominateLocal` (scales amounts between denominations).\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 11,\n   \"id\": \"e3c51b45\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:45:53.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:45:53.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:45:53.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:45:53.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"#{pool_growth_target => 1000000000,\\n\",\n       \"  prev_threshold => {ok,65707142423137694720779001},\\n\",\n       \"  prev_delay => {ok,2},\\n\",\n       \"  prev_locked_rewards => {ok,1}}\\n\"\n      ]\n     },\n     \"execution_count\": 11,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"PrevRedenomThreshold = RPCCall(application, get_env, [arweave, redenomination_threshold]),\\n\",\n    \"PrevRedenomDelay = RPCCall(application, get_env, [arweave, redenomination_delay_blocks]),\\n\",\n    \"PrevLockedRewards = RPCCall(application, get_env, [arweave, locked_rewards_blocks]),\\n\",\n    \"\\n\",\n    \"Pow1000Local =\\n\",\n    \"\\tfun\\n\",\n    \"\\t\\tPow(0) ->\\n\",\n    \"\\t\\t\\t1;\\n\",\n    \"\\t\\tPow(N) when N > 0 ->\\n\",\n    \"\\t\\t\\t1000 * Pow(N - 1)\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"RedenominateLocal =\\n\",\n    \"\\tfun(Amount, FromDenom, ToDenom) ->\\n\",\n    \"\\t\\tcase ToDenom >= FromDenom of\\n\",\n    \"\\t\\t\\ttrue ->\\n\",\n    \"\\t\\t\\t\\tAmount * Pow1000Local(ToDenom - FromDenom);\\n\",\n    \"\\t\\t\\tfalse ->\\n\",\n    \"\\t\\t\\t\\tAmount div Pow1000Local(FromDenom - ToDenom)\\n\",\n    \"\\t\\tend\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"PoolGrowthTarget = 1000000000,\\n\",\n    \"\\n\",\n    \"ok = RPCCall(application, set_env, [arweave, redenomination_delay_blocks, 2]),\\n\",\n    \"ok = RPCCall(application, set_env, [arweave, locked_rewards_blocks, 1]),\\n\",\n    \"\\n\",\n    \"#{ pool_growth_target => PoolGrowthTarget,\\n\",\n    \"  prev_threshold => PrevRedenomThreshold,\\n\",\n    \"  prev_delay => PrevRedenomDelay,\\n\",\n    \"  prev_locked_rewards => PrevLockedRewards }.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"06abbfae\",\n   \"metadata\": {},\n   
\"source\": [\n    \"### Wallet balance helpers\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"9f4e7767\",\n   \"metadata\": {},\n   \"source\": [\n    \"Defines `WalletBalanceFromBlock(Block, Addr)`, which reads the balance from the block's wallet tree via RPC and redenominates it to the block's denomination. Also defines `MinerBalanceAt(Height)` as a shorthand for the mining address.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 12,\n   \"id\": \"5241f7f7\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:45:53.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:45:53.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:45:53.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:45:53.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"#Fun<erl_eval.42.105768164>\\n\"\n      ]\n     },\n     \"execution_count\": 12,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"Pow1000 = Pow1000Local,\\n\",\n    \"Redenominate = RedenominateLocal,\\n\",\n    \"\\n\",\n    \"WalletBalanceFromBlock =\\n\",\n    \"\\tfun(Block, Addr) ->\\n\",\n    \"\\t\\tRoot = nb_block:wallet_list(Block),\\n\",\n    \"\\t\\tBlockDenom = nb_block:denomination(Block),\\n\",\n    \"\\t\\tAccountsMap = RPCCall(ar_wallets, get, [Root, Addr]),\\n\",\n    \"\\t\\tcase maps:get(Addr, AccountsMap, not_found) of\\n\",\n    \"\\t\\t\\tnot_found ->\\n\",\n    \"\\t\\t\\t\\t0;\\n\",\n    \"\\t\\t\\t{Balance, _LastTX} ->\\n\",\n    \"\\t\\t\\t\\tRedenominate(Balance, 1, BlockDenom);\\n\",\n    \"\\t\\t\\t{Balance, _LastTX, AccountDenom, _Perm} ->\\n\",\n    \"\\t\\t\\t\\tRedenominate(Balance, AccountDenom, BlockDenom)\\n\",\n    \"\\t\\tend\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"MinerBalanceAt =\\n\",\n    \"\\tfun(Height) ->\\n\",\n    \"\\t\\tBlock = RPCBlockByHeight(Height),\\n\",\n    \"\\t\\tWalletBalanceFromBlock(Block, MiningAddr)\\n\",\n    \"\\tend.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"de72ffaf\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Mine past the pricing transition and unlock the first reward\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"30fc3239\",\n   \"metadata\": {},\n   \"source\": [\n    \"Mines `max(3, TransitionEnd - StartHeight)` blocks to ensure the 2.7.2 pricing transition is complete and at least one reward has been unlocked (with `locked_rewards_blocks = 1`, a reward earned at height H is applied at H+1).\\n\",\n    \"\\n\",\n    \"**Assertions:**\\n\",\n    \"- Miner balance is positive after mining.\\n\",\n    \"- Sets the redenomination threshold to `AvailableSupply + 1` (where `AvailableSupply = TotalSupply * 1000^denomination + DebtSupply - RewardPool`), so any further endowment fee will push the circulating supply past the threshold and trigger redenomination.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 13,\n   \"id\": \"d5ad9c9a\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:45:53.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:45:53.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:46:49.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:46:49.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"{2069888,2069891,2069870}\\n\"\n      ]\n     },\n     \"execution_count\": 13,\n     
\"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"StartHeight = RPCHeight(),\\n\",\n    \"TransitionEnd =\\n\",\n    \"\\tRPCCall(ar_pricing_transition, transition_start_2_7_2, []) +\\n\",\n    \"\\t\\tRPCCall(ar_pricing_transition, transition_length_2_7_2, []),\\n\",\n    \"TargetHeight = erlang:max(StartHeight + 3, TransitionEnd),\\n\",\n    \"ok = MineUntilHeight(TargetHeight),\\n\",\n    \"{StartHeight, TargetHeight, TransitionEnd}.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 14,\n   \"id\": \"cf2fa971\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:46:49.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:46:49.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:46:49.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:46:49.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"ok\\n\"\n      ]\n     },\n     \"execution_count\": 14,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"BalanceAfterUnlock = MinerBalanceAt(TargetHeight),\\n\",\n    \"ok = case BalanceAfterUnlock > 0 of\\n\",\n    \"\\ttrue ->\\n\",\n    \"\\t\\tok;\\n\",\n    \"\\tfalse ->\\n\",\n    \"\\t\\t{error, {mining_balance_not_unlocked, BalanceAfterUnlock}}\\n\",\n    \"end.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 15,\n   \"id\": \"175c7fc1\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:46:49.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:46:49.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:46:49.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:46:49.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"#{threshold => 65707142423133030322557425001,pool_growth_target => 1000000000,\\n\",\n       \"  available_supply => 65707142423133030322557425000}\\n\"\n      ]\n     },\n     \"execution_count\": 15,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"BlockAfterTransition = RPCBlockByHeight(RPCHeight()),\\n\",\n    \"DenomAfterTransition = nb_block:denomination(BlockAfterTransition),\\n\",\n    \"TotalSupplyAfterTransition = RedenominateLocal(nb_pricing:total_supply(), 1, DenomAfterTransition),\\n\",\n    \"RewardPoolAfterTransition = nb_block:reward_pool(BlockAfterTransition),\\n\",\n    \"DebtSupplyAfterTransition = nb_block:debt_supply(BlockAfterTransition),\\n\",\n    \"AvailableSupplyAfterTransition =\\n\",\n    \"\\tTotalSupplyAfterTransition + DebtSupplyAfterTransition - RewardPoolAfterTransition,\\n\",\n    \"ThresholdAfterTransition = AvailableSupplyAfterTransition + 1,\\n\",\n    \"ok = RPCCall(application, set_env, [arweave, redenomination_threshold, ThresholdAfterTransition]),\\n\",\n    \"\\n\",\n    \"#{ available_supply => AvailableSupplyAfterTransition,\\n\",\n    \"  threshold => ThresholdAfterTransition,\\n\",\n    \"  pool_growth_target => PoolGrowthTarget }.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"8eb2197a\",\n   \"metadata\": {},\n   \"source\": [\n    \"## Trigger redenomination\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"74fb7004\",\n   \"metadata\": {},\n   \"source\": [\n    \"Builds and submits transactions to push the reward pool past the redenomination threshold. 
No blocks are mined yet; transactions go to the mempool and are mined in the next step.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"b498f821\",\n   \"metadata\": {},\n   \"source\": [\n    \"### TX builder module\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"3e83e9e9\",\n   \"metadata\": {},\n   \"source\": [\n    \"Defines `nb_tx_builder:build_tx/5` (compiled dynamically). Also defines helpers: `GetTXAnchor` (fetches anchor via HTTP), `BuildTX(Reward, Data)` (builds a format-2 TX with a given reward and data payload), and `PostTX(TX)` (serializes and POSTs a transaction).\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 16,\n   \"id\": \"4079c8ff\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:46:49.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:46:49.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:46:49.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:46:49.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"ok\\n\"\n      ]\n     },\n     \"execution_count\": 16,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"TXBuilder =\\n\",\n    \"\\tlists:flatten([\\n\",\n    \"\\t\\t\\\"-module(nb_tx_builder).\\\\n\\\",\\n\",\n    \"\\t\\t\\\"-export([build_tx/5]).\\\\n\\\",\\n\",\n    \"\\t\\t\\\"-include_lib(\\\\\\\"arweave/include/ar.hrl\\\\\\\").\\\\n\\\",\\n\",\n    \"\\t\\t\\\"build_tx(Reward, Data, Anchor, Denomination, Wallet) ->\\\\n\\\",\\n\",\n    \"\\t\\t\\\"\\\\tDataSize = byte_size(Data),\\\\n\\\",\\n\",\n    \"\\t\\t\\\"\\\\tDataRoot =\\\\n\\\",\\n\",\n    \"\\t\\t\\\"\\\\t\\\\tcase DataSize > 0 of\\\\n\\\",\\n\",\n    \"\\t\\t\\\"\\\\t\\\\t\\\\ttrue ->\\\\n\\\",\\n\",\n    \"\\t\\t\\\"\\\\t\\\\t\\\\t\\\\tTreeTX = ar_tx:generate_chunk_tree(#tx{ data = Data }),\\\\n\\\",\\n\",\n    \"\\t\\t\\\"\\\\t\\\\t\\\\t\\\\tTreeTX#tx.data_root;\\\\n\\\",\\n\",\n    \"\\t\\t\\\"\\\\t\\\\t\\\\tfalse ->\\\\n\\\",\\n\",\n    \"\\t\\t\\\"\\\\t\\\\t\\\\t\\\\t<<>>\\\\n\\\",\\n\",\n    \"\\t\\t\\\"\\\\t\\\\tend,\\\\n\\\",\\n\",\n    \"\\t\\t\\\"\\\\tBaseTX = #tx{\\\\n\\\",\\n\",\n    \"\\t\\t\\\"\\\\t\\\\tformat = 2,\\\\n\\\",\\n\",\n    \"\\t\\t\\\"\\\\t\\\\tdata = Data,\\\\n\\\",\\n\",\n    \"\\t\\t\\\"\\\\t\\\\tdata_size = DataSize,\\\\n\\\",\\n\",\n    \"\\t\\t\\\"\\\\t\\\\tdata_root = DataRoot,\\\\n\\\",\\n\",\n    \"\\t\\t\\\"\\\\t\\\\treward = Reward,\\\\n\\\",\\n\",\n    \"\\t\\t\\\"\\\\t\\\\tlast_tx = Anchor,\\\\n\\\",\\n\",\n    \"\\t\\t\\\"\\\\t\\\\ttarget = <<>>,\\\\n\\\",\\n\",\n    \"\\t\\t\\\"\\\\t\\\\tquantity = 0,\\\\n\\\",\\n\",\n    \"\\t\\t\\\"\\\\t\\\\tdenomination = Denomination\\\\n\\\",\\n\",\n    \"\\t\\t\\\"\\\\t},\\\\n\\\",\\n\",\n    \"\\t\\t\\\"\\\\tar_tx:sign(BaseTX, Wallet).\\\\n\\\"\\n\",\n    \"\\t]),\\n\",\n    \"\\n\",\n    \"CompileModule(\\\"nb_tx_builder\\\", TXBuilder),\\n\",\n    \"ok.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 17,\n   \"id\": \"18474861\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:46:49.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:46:49.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:46:49.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:46:49.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"#Fun<erl_eval.42.105768164>\\n\"\n      ]\n     },\n     
\"execution_count\": 17,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"GetTXAnchor =\\n\",\n    \"\\tfun() ->\\n\",\n    \"\\t\\tcase HTTPGet(BaseUrl ++ \\\"/tx_anchor\\\") of\\n\",\n    \"\\t\\t\\t{ok, AnchorB64} ->\\n\",\n    \"\\t\\t\\t\\tar_util:decode(iolist_to_binary(AnchorB64));\\n\",\n    \"\\t\\t\\t{error, Reason} ->\\n\",\n    \"\\t\\t\\t\\terlang:error({tx_anchor_failed, Reason})\\n\",\n    \"\\t\\tend\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"BuildTX =\\n\",\n    \"\\tfun(Reward, Data) ->\\n\",\n    \"\\t\\tAnchor = GetTXAnchor(),\\n\",\n    \"\\t\\tBlock = RPCBlockByHeight(RPCHeight()),\\n\",\n    \"\\t\\tDenom = nb_block:denomination(Block),\\n\",\n    \"\\t\\tnb_tx_builder:build_tx(Reward, Data, Anchor, Denom, MiningWallet)\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"PostTX =\\n\",\n    \"\\tfun(TX) ->\\n\",\n    \"\\t\\tBody = ar_serialize:jsonify(ar_serialize:tx_to_json_struct(TX)),\\n\",\n    \"\\t\\tHTTPPostJSON(BaseUrl ++ \\\"/tx\\\", Body)\\n\",\n    \"\\tend.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"0c11180c\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Compute transaction count\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"d57cd75d\",\n   \"metadata\": {},\n   \"source\": [\n    \"No blocks mined. Queries the minimum fee for 4 bytes via `/price/4/{addr}`. Sets `RewardPerTX = max(100000000, MinFee)`. Splits each fee as `MinerShare = Fee div 21`, `EndowmentShare = Fee - MinerShare`. Computes `RequiredTXs = ceil(PoolGrowthTarget / EndowmentShare)` and caps at `MaxTXs = BalanceAfterUnlock div RewardPerTX`.\\n\",\n    \"\\n\",\n    \"**Assertion:** `TXCount >= RequiredTXs` (the miner has enough balance to cover all needed transactions).\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 18,\n   \"id\": \"ad9d8894\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:46:49.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:46:49.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:46:50.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:46:50.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"#{reward_per_tx => 4897618132653043,miner_share_per_tx => 233219911078716,\\n\",\n       \"  endowment_share => 4664398221574327,required_txs => 1,tx_count => 1}\\n\"\n      ]\n     },\n     \"execution_count\": 18,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"AddrB64 = binary_to_list(ar_util:encode(MiningAddr)),\\n\",\n    \"MinFeeFor4Bytes = HTTPGetInteger(BaseUrl ++ \\\"/price/4/\\\" ++ AddrB64),\\n\",\n    \"RewardPerTX = max(100000000, MinFeeFor4Bytes),\\n\",\n    \"MinerSharePerTX = RewardPerTX div 21,\\n\",\n    \"EndowmentShare = RewardPerTX - MinerSharePerTX,\\n\",\n    \"RequiredTXs = (PoolGrowthTarget + EndowmentShare - 1) div EndowmentShare,\\n\",\n    \"MaxTXs = BalanceAfterUnlock div RewardPerTX,\\n\",\n    \"TXCount = erlang:min(RequiredTXs, MaxTXs),\\n\",\n    \"\\n\",\n    \"ok = case TXCount >= RequiredTXs of\\n\",\n    \"\\ttrue ->\\n\",\n    \"\\t\\tok;\\n\",\n    \"\\tfalse ->\\n\",\n    \"\\t\\t{error, {insufficient_balance, BalanceAfterUnlock, RequiredTXs, RewardPerTX}}\\n\",\n    \"end,\\n\",\n    \"\\n\",\n    \"#{ reward_per_tx => RewardPerTX,\\n\",\n    \"  miner_share_per_tx => MinerSharePerTX,\\n\",\n    \"  endowment_share => EndowmentShare,\\n\",\n    
\"  required_txs => RequiredTXs,\\n\",\n    \"  tx_count => TXCount }.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"daab126e\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Submit transactions\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"f089a33a\",\n   \"metadata\": {},\n   \"source\": [\n    \"Builds all transactions with unique data payloads and posts them via the HTTP `/tx` endpoint. Asserts all submissions succeed.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 19,\n   \"id\": \"cef6699c\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:46:50.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:46:50.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:46:50.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:46:50.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"{tx,2,\\n\",\n       \"    <<139,172,22,196,100,45,173,245,156,11,227,184,28,54,206,233,222,210,38,\\n\",\n       \"      113,148,195,169,70,236,87,68,5,15,80,113,11>>,\\n\",\n       \"    <<253,63,204,233,203,204,115,239,32,134,4,214,184,100,165,193,180,84,171,\\n\",\n       \"      198,148,28,152,234,44,193,176,146,214,230,209,245,44,32,9,254,99,89,31,\\n\",\n       \"      142,22,88,155,13,102,255,102,228>>,\\n\",\n       \"    <<2,215,4,40,91,61,226,129,118,138,18,7,154,221,140,229,244,16,43,144,42,9,\\n\",\n       \"      26,114,36,102,67,23,84,139,196,122,22>>,\\n\",\n       \"    <<202,58,206,44,244,135,63,87,47,103,252,181,109,34,228,45,223,37,33,104,\\n\",\n       \"      110,245,219,27,74,231,215,155,223,144,142,198>>,\\n\",\n       \"    [],<<>>,0,<<\\\"tx_1\\\">>,4,[],\\n\",\n       \"    <<80,150,207,185,223,249,233,156,209,143,234,81,171,110,46,139,135,104,51,\\n\",\n       \"      164,66,97,241,245,151,19,123,99,218,110,6,65>>,\\n\",\n       \"    <<229,118,71,208,177,55,47,223,163,169,232,69,227,132,137,19,98,177,244,99,\\n\",\n       \"      221,219,19,234,232,69,153,146,134,186,52,8,93,109,153,251,116,125,75,153,\\n\",\n       \"      170,0,66,188,202,114,255,42,43,180,4,28,225,108,251,151,22,187,148,7,145,\\n\",\n       \"      196,48,79,0>>,\\n\",\n       \"    4897618132653043,4,\\n\",\n       \"    {ecdsa,secp256k1}}\\n\"\n      ]\n     },\n     \"execution_count\": 19,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"TX1 = BuildTX(RewardPerTX, <<\\\"tx_1\\\">>).\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 20,\n   \"id\": \"38c443d8-03bd-49aa-a62d-afa43dd484bd\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:46:50.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:46:50.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:46:50.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:46:50.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"<<139,172,22,196,100,45,173,245,156,11,227,184,28,54,206,233,222,210,38,113,\\n\",\n       \"  148,195,169,70,236,87,68,5,15,80,113,11>>\\n\"\n      ]\n     },\n     \"execution_count\": 20,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"TXID = nb_tx:id(TX1).\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 21,\n   \"id\": \"b0a69c70-1b91-4c61-9da7-fd61774c2f77\",\n   \"metadata\": {\n    \"execution\": {\n     
\"iopub.execute_input\": \"2026-02-19T20:46:50.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:46:50.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:46:50.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:46:50.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"not_found\\n\"\n      ]\n     },\n     \"execution_count\": 21,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"RPCCall(ar_tx_db, get_error_codes, [TXID]).\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 22,\n   \"id\": \"8d9f53eb\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:46:50.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:46:50.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:46:50.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:46:50.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"ok\\n\"\n      ]\n     },\n     \"execution_count\": 22,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"RemainingTXs =\\n\",\n    \"\\t[BuildTX(RewardPerTX, <<\\\"tx_\\\", (integer_to_binary(N))/binary>>)\\n\",\n    \"\\t\\t|| N <- lists:seq(2, TXCount)],\\n\",\n    \"TXs = [TX1 | RemainingTXs],\\n\",\n    \"Results = [PostTX(TX) || TX <- TXs],\\n\",\n    \"ok = case lists:all(fun({ok, _}) -> true; (_) -> false end, Results) of\\n\",\n    \"\\ttrue ->\\n\",\n    \"\\t\\tok;\\n\",\n    \"\\tfalse ->\\n\",\n    \"\\t\\t{error, {tx_submission_failures, [R || R <- Results, element(1, R) =/= ok]}}\\n\",\n    \"end.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"84a6a171\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Mine the trigger block\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"1deb3bdf\",\n   \"metadata\": {},\n   \"source\": [\n    \"Mines 1 block. The transactions submitted above are included in this block. 
Displays the block's reward_pool and reward.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 23,\n   \"id\": \"3a4e6da8\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:46:50.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:46:50.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:47:17.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:47:17.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"#{reward_pool => 292873436150971032664149327,reward => 123478299179911078716,\\n\",\n       \"  redenom_trigger_block_height => 2069892}\\n\"\n      ]\n     },\n     \"execution_count\": 23,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"RedenomTriggerBlockHeight = RPCHeight() + 1,\\n\",\n    \"ok = MineUntilHeight(RedenomTriggerBlockHeight),\\n\",\n    \"RedenomTriggerBlock = RPCBlockByHeight(RedenomTriggerBlockHeight),\\n\",\n    \"#{ redenom_trigger_block_height => RedenomTriggerBlockHeight,\\n\",\n    \"  reward_pool => nb_block:reward_pool(RedenomTriggerBlock),\\n\",\n    \"  reward => nb_block:reward(RedenomTriggerBlock) }.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"3038cb88\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Assert reward pool and miner reward\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"da295ed0\",\n   \"metadata\": {},\n   \"source\": [\n    \"Reads the trigger block and its transactions. Queries the inflation reward at this height\\n\",\n    \"and the average block interval from the node. Computes expected reward and endowment pool\\n\",\n    \"from the protocol's reward formula:\\n\",\n    \"\\n\",\n    \"**Expected block reward** = `max(Inflation + MinerFeeShare, StorageCost)`, where:\\n\",\n    \"- `Inflation = redenominate(ar_inflation:calculate(Height), 1, PrevDenomination)`\\n\",\n    \"- `MinerFeeShare = sum(TXFee div 21)` over block transactions\\n\",\n    \"- `StorageCost = N_REPLICATIONS * WeaveSize * PricePerGiBMinute * BlockInterval / (60 * GiB)`\\n\",\n    \"\\n\",\n    \"**Expected endowment pool** = `PrevPool + EndowmentFeeShare - max(0, StorageCost - BaseReward)`\\n\",\n    \"\\n\",\n    \"Also defines `ComputeExpectedRewardAndPool` helper reused later for redenomination assertions.\\n\",\n    \"\\n\",\n    \"**Assertions:**\\n\",\n    \"- `block.reward` matches the expected reward within 0.1%.\\n\",\n    \"- `block.reward_pool` matches the expected pool within 0.1%.\\n\",\n    \"\\n\",\n    \"The 0.1% tolerance accounts for `ar_inflation:calculate/1` being queried once\\n\",\n    \"at the trigger block height and reused for nearby blocks. 
The inflation reward\\n\",\n    \"decays with height but changes by less than 0.001% per block at mainnet heights,\\n\",\n    \"so 0.1% is conservative.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 24,\n   \"id\": \"ed28dffe\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:47:17.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:47:17.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:47:17.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:47:17.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"#{miner_fee_share => 233219911078716,endowment_fee_share => 4664398221574327,\\n\",\n       \"  expected_reward => 123478299179911078716,\\n\",\n       \"  expected_pool => 292873436150971032664149327,\\n\",\n       \"  inflation => 123478065960000000000,storage_cost => 0,\\n\",\n       \"  actual_reward => 123478299179911078716,\\n\",\n       \"  actual_pool => 292873436150971032664149327}\\n\"\n      ]\n     },\n     \"execution_count\": 24,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"EnsureTXs =\\n\",\n    \"\\tfun(TXs) ->\\n\",\n    \"\\t\\tcase TXs of\\n\",\n    \"\\t\\t\\t[] -> [];\\n\",\n    \"\\t\\t\\t_ ->\\n\",\n    \"\\t\\t\\t\\tcase lists:any(fun(TX) -> is_binary(TX) end, TXs) of\\n\",\n    \"\\t\\t\\t\\t\\ttrue ->\\n\",\n    \"\\t\\t\\t\\t\\t\\t[RPCCall(ar_storage, read_tx, [TXID]) || TXID <- TXs];\\n\",\n    \"\\t\\t\\t\\t\\tfalse ->\\n\",\n    \"\\t\\t\\t\\t\\t\\tTXs\\n\",\n    \"\\t\\t\\t\\tend\\n\",\n    \"\\t\\tend\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"ComputeFeeShares =\\n\",\n    \"\\tfun(BlockTXs, Denom) ->\\n\",\n    \"\\t\\tcase BlockTXs of\\n\",\n    \"\\t\\t\\t[] ->\\n\",\n    \"\\t\\t\\t\\t{0, 0};\\n\",\n    \"\\t\\t\\t_ ->\\n\",\n    \"\\t\\t\\t\\tlists:foldl(\\n\",\n    \"\\t\\t\\t\\t\\tfun(TX, {MinerAcc, EndowmentAcc}) ->\\n\",\n    \"\\t\\t\\t\\t\\t\\tTXFeeBase = nb_tx:reward(TX),\\n\",\n    \"\\t\\t\\t\\t\\t\\tTXDenom = nb_tx:denomination(TX),\\n\",\n    \"\\t\\t\\t\\t\\t\\tTXFee = RedenominateLocal(TXFeeBase, TXDenom, Denom),\\n\",\n    \"\\t\\t\\t\\t\\t\\tMinerShare = TXFee div 21,\\n\",\n    \"\\t\\t\\t\\t\\t\\t{MinerAcc + MinerShare, EndowmentAcc + (TXFee - MinerShare)}\\n\",\n    \"\\t\\t\\t\\t\\tend,\\n\",\n    \"\\t\\t\\t\\t\\t{0, 0},\\n\",\n    \"\\t\\t\\t\\t\\tBlockTXs\\n\",\n    \"\\t\\t\\t\\t)\\n\",\n    \"\\t\\tend\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"PrevRedenomTriggerBlock = RPCBlockByHeight(RedenomTriggerBlockHeight - 1),\\n\",\n    \"RefInflationBase = RPCCall(ar_inflation, calculate, [RedenomTriggerBlockHeight]),\\n\",\n    \"RefBlockInterval = RPCCall(ar_block_time_history, compute_block_interval,\\n\",\n    \"\\t[PrevRedenomTriggerBlock]),\\n\",\n    \"NReplications = 20,\\n\",\n    \"GiB = nb_pricing:gib(),\\n\",\n    \"\\n\",\n    \"ComputeExpectedRewardAndPool =\\n\",\n    \"\\tfun(Height, Block, PrevBlock, BlockTXs) ->\\n\",\n    \"\\t\\tPrevDenomLocal = nb_block:denomination(PrevBlock),\\n\",\n    \"\\t\\tBlockDenomLocal = nb_block:denomination(Block),\\n\",\n    \"\\t\\tInflationLocal = RedenominateLocal(RefInflationBase, 1, PrevDenomLocal),\\n\",\n    \"\\t\\t{MinerFeeShareLocal, EndowmentFeeShareLocal} =\\n\",\n    \"\\t\\t\\tComputeFeeShares(BlockTXs, PrevDenomLocal),\\n\",\n    \"\\t\\tWeaveSizeLocal = nb_block:weave_size(Block),\\n\",\n    \"\\t\\tPriceLocal = nb_block:price_per_gib_minute(PrevBlock),\\n\",\n    
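\"\\t\\t%% Storage cost: NReplications copies of the full weave, priced per GiB-minute\\n\",\n    \"\\t\\t%% over the expected block interval (div 60 converts seconds to minutes).\\n\",\n    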
\"\\t\\tStorageCostLocal = NReplications * WeaveSizeLocal * PriceLocal\\n\",\n    \"\\t\\t\\t* RefBlockInterval div (60 * GiB),\\n\",\n    \"\\t\\tBaseRewardLocal = InflationLocal + MinerFeeShareLocal,\\n\",\n    \"\\t\\tPrevPoolLocal = nb_block:reward_pool(PrevBlock),\\n\",\n    \"\\t\\tPool2 = PrevPoolLocal + EndowmentFeeShareLocal,\\n\",\n    \"\\t\\t{ExpReward0, ExpPool0} =\\n\",\n    \"\\t\\t\\tcase BaseRewardLocal >= StorageCostLocal of\\n\",\n    \"\\t\\t\\t\\ttrue ->\\n\",\n    \"\\t\\t\\t\\t\\t{BaseRewardLocal, Pool2};\\n\",\n    \"\\t\\t\\t\\tfalse ->\\n\",\n    \"\\t\\t\\t\\t\\tTakeLocal = StorageCostLocal - BaseRewardLocal,\\n\",\n    \"\\t\\t\\t\\t\\tcase TakeLocal > Pool2 of\\n\",\n    \"\\t\\t\\t\\t\\t\\ttrue ->\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\t{StorageCostLocal, 0};\\n\",\n    \"\\t\\t\\t\\t\\t\\tfalse ->\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\t{StorageCostLocal, Pool2 - TakeLocal}\\n\",\n    \"\\t\\t\\t\\t\\tend\\n\",\n    \"\\t\\t\\tend,\\n\",\n    \"\\t\\tExpReward = RedenominateLocal(ExpReward0, PrevDenomLocal, BlockDenomLocal),\\n\",\n    \"\\t\\tExpPool = RedenominateLocal(ExpPool0, PrevDenomLocal, BlockDenomLocal),\\n\",\n    \"\\t\\t#{\\n\",\n    \"\\t\\t\\texpected_reward => ExpReward,\\n\",\n    \"\\t\\t\\texpected_pool => ExpPool,\\n\",\n    \"\\t\\t\\tinflation => InflationLocal,\\n\",\n    \"\\t\\t\\tstorage_cost => StorageCostLocal,\\n\",\n    \"\\t\\t\\tminer_fee_share => MinerFeeShareLocal,\\n\",\n    \"\\t\\t\\tendowment_fee_share => EndowmentFeeShareLocal\\n\",\n    \"\\t\\t}\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"TriggerBlockTXs = EnsureTXs(nb_block:txs(RedenomTriggerBlock)),\\n\",\n    \"TriggerExpected = ComputeExpectedRewardAndPool(\\n\",\n    \"\\tRedenomTriggerBlockHeight, RedenomTriggerBlock,\\n\",\n    \"\\tPrevRedenomTriggerBlock, TriggerBlockTXs),\\n\",\n    \"\\n\",\n    \"ActualReward = nb_block:reward(RedenomTriggerBlock),\\n\",\n    \"ActualPool = nb_block:reward_pool(RedenomTriggerBlock),\\n\",\n    \"ExpectedReward = maps:get(expected_reward, TriggerExpected),\\n\",\n    \"ExpectedPool = maps:get(expected_pool, TriggerExpected),\\n\",\n    \"\\n\",\n    \"RewardTolerance = max(1, ExpectedReward div 1000),\\n\",\n    \"ok = case abs(ActualReward - ExpectedReward) =< RewardTolerance of\\n\",\n    \"\\ttrue ->\\n\",\n    \"\\t\\tok;\\n\",\n    \"\\tfalse ->\\n\",\n    \"\\t\\t{error, {reward_mismatch, ActualReward, ExpectedReward, RewardTolerance}}\\n\",\n    \"end,\\n\",\n    \"\\n\",\n    \"PoolTolerance = max(1, ExpectedPool div 1000),\\n\",\n    \"ok = case abs(ActualPool - ExpectedPool) =< PoolTolerance of\\n\",\n    \"\\ttrue ->\\n\",\n    \"\\t\\tok;\\n\",\n    \"\\tfalse ->\\n\",\n    \"\\t\\t{error, {pool_mismatch, ActualPool, ExpectedPool, PoolTolerance}}\\n\",\n    \"end,\\n\",\n    \"\\n\",\n    \"TriggerExpected#{\\n\",\n    \"\\tactual_reward => ActualReward,\\n\",\n    \"\\tactual_pool => ActualPool\\n\",\n    \"}.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"8a714115\",\n   \"metadata\": {},\n   \"source\": [\n    \"## Redenomination\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"bb426304\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Schedule redenomination\\n\",\n    \"\\n\",\n    \"Mines 1 block. 
Because the redenomination threshold was set just above the circulating\\n\",\n    \"supply, the node schedules redenomination at the next opportunity.\\n\",\n    \"\\n\",\n    \"**Assertion:** The new block's `redenomination_height` is greater than the current height\\n\",\n    \"(redenomination has been scheduled for a future block, with a 2-block delay as configured).\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 25,\n   \"id\": \"42c64336\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:47:17.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:47:17.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:47:44.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:47:44.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"#{scheduled_height => 2069893}\\n\"\n      ]\n     },\n     \"execution_count\": 25,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"ScheduleStartHeight = RPCHeight(),\\n\",\n    \"ScheduleStartBlock = RPCBlockByHeight(ScheduleStartHeight),\\n\",\n    \"PrevRedenomHeight = nb_block:redenomination_height(ScheduleStartBlock),\\n\",\n    \"\\n\",\n    \"ok = MineUntilHeight(ScheduleStartHeight + 1),\\n\",\n    \"ScheduleBlock = RPCBlockByHeight(ScheduleStartHeight + 1),\\n\",\n    \"NewRedenomHeight = nb_block:redenomination_height(ScheduleBlock),\\n\",\n    \"\\n\",\n    \"ok = case NewRedenomHeight > ScheduleStartHeight of\\n\",\n    \"\\ttrue ->\\n\",\n    \"\\t\\tok;\\n\",\n    \"\\tfalse ->\\n\",\n    \"\\t\\t{error, {redenomination_not_scheduled, PrevRedenomHeight, NewRedenomHeight}}\\n\",\n    \"end,\\n\",\n    \"\\n\",\n    \"#{ scheduled_height => NewRedenomHeight }.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"e598ee92\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Snapshot HTTP values before redenomination\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"91e4bf2a\",\n   \"metadata\": {},\n   \"source\": [\n    \"Mines to `NewRedenomHeight` (the block just before redenomination fires). 
Captures all HTTP pricing and wallet values into a `PreHTTP` map for comparison after redenomination.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 26,\n   \"id\": \"578a9950\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:47:44.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:47:44.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:47:44.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:47:44.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"#{price0 => 59246720252252,price1g => 18458446185029267217,\\n\",\n       \"  price0_addr => 59246720252252,price1g_addr => 18458446185029267217,\\n\",\n       \"  price2_0 => 59246720252252,price2_1g => 18458446185029267217,\\n\",\n       \"  price2_0_addr => 59246720252252,price2_1g_addr => 18458446185029267217,\\n\",\n       \"  opt0 => 59246720252252,opt1g => 18458446185029267217,\\n\",\n       \"  opt0_addr => 59246720252252,opt1g_addr => 18458446185029267217,\\n\",\n       \"  v2_0 => 59246720252252,v2_1g => 18458446185029267217,\\n\",\n       \"  v2_0_addr => 59246720252252,v2_1g_addr => 18458446185029267217,\\n\",\n       \"  wallet_balance => 3332960484717335850673,\\n\",\n       \"  reserved_rewards => 123477740282000000000}\\n\"\n      ]\n     },\n     \"execution_count\": 26,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"PreRedenomHeight = NewRedenomHeight,\\n\",\n    \"ok = MineUntilHeight(PreRedenomHeight),\\n\",\n    \"\\n\",\n    \"PreInfoMap = HTTPGetJSONMap(BaseUrl ++ \\\"/info\\\"),\\n\",\n    \"PreInfoHeight =\\n\",\n    \"\\tcase maps:get(<<\\\"height\\\">>, PreInfoMap, undefined) of\\n\",\n    \"\\t\\tHeight when is_integer(Height) ->\\n\",\n    \"\\t\\t\\tHeight;\\n\",\n    \"\\t\\tHeightBin when is_binary(HeightBin) ->\\n\",\n    \"\\t\\t\\tbinary_to_integer(HeightBin);\\n\",\n    \"\\t\\tOther ->\\n\",\n    \"\\t\\t\\terlang:error({unexpected_info_height, Other})\\n\",\n    \"\\tend,\\n\",\n    \"ok = case PreInfoHeight == PreRedenomHeight of\\n\",\n    \"\\ttrue ->\\n\",\n    \"\\t\\tok;\\n\",\n    \"\\tfalse ->\\n\",\n    \"\\t\\t{error, {pre_height_mismatch, PreInfoHeight, PreRedenomHeight}}\\n\",\n    \"end,\\n\",\n    \"\\n\",\n    \"AddrB64 = binary_to_list(ar_util:encode(MiningAddr)),\\n\",\n    \"\\n\",\n    \"PreHTTP = #\\n\",\n    \"{\\n\",\n    \"\\tprice0 => HTTPGetInteger(BaseUrl ++ \\\"/price/0\\\"),\\n\",\n    \"\\tprice1g => HTTPGetInteger(BaseUrl ++ \\\"/price/1000000000\\\"),\\n\",\n    \"\\tprice0_addr => HTTPGetInteger(BaseUrl ++ \\\"/price/0/\\\" ++ AddrB64),\\n\",\n    \"\\tprice1g_addr => HTTPGetInteger(BaseUrl ++ \\\"/price/1000000000/\\\" ++ AddrB64),\\n\",\n    \"\\tprice2_0 => HTTPGetJSONFee(BaseUrl ++ \\\"/price2/0\\\"),\\n\",\n    \"\\tprice2_1g => HTTPGetJSONFee(BaseUrl ++ \\\"/price2/1000000000\\\"),\\n\",\n    \"\\tprice2_0_addr => HTTPGetJSONFee(BaseUrl ++ \\\"/price2/0/\\\" ++ AddrB64),\\n\",\n    \"\\tprice2_1g_addr => HTTPGetJSONFee(BaseUrl ++ \\\"/price2/1000000000/\\\" ++ AddrB64),\\n\",\n    \"\\topt0 => HTTPGetJSONFee(BaseUrl ++ \\\"/optimistic_price/0\\\"),\\n\",\n    \"\\topt1g => HTTPGetJSONFee(BaseUrl ++ \\\"/optimistic_price/1000000000\\\"),\\n\",\n    \"\\topt0_addr => HTTPGetJSONFee(BaseUrl ++ \\\"/optimistic_price/0/\\\" ++ AddrB64),\\n\",\n    \"\\topt1g_addr => HTTPGetJSONFee(BaseUrl ++ \\\"/optimistic_price/1000000000/\\\" ++ AddrB64),\\n\",\n    \"\\tv2_0 => 
HTTPGetInteger(BaseUrl ++ \\\"/v2price/0\\\"),\\n\",\n    \"\\tv2_1g => HTTPGetInteger(BaseUrl ++ \\\"/v2price/1000000000\\\"),\\n\",\n    \"\\tv2_0_addr => HTTPGetInteger(BaseUrl ++ \\\"/v2price/0/\\\" ++ AddrB64),\\n\",\n    \"\\tv2_1g_addr => HTTPGetInteger(BaseUrl ++ \\\"/v2price/1000000000/\\\" ++ AddrB64),\\n\",\n    \"\\twallet_balance => HTTPGetInteger(BaseUrl ++ \\\"/wallet/\\\" ++ AddrB64 ++ \\\"/balance\\\"),\\n\",\n    \"\\treserved_rewards => HTTPGetInteger(BaseUrl ++ \\\"/wallet/\\\" ++ AddrB64 ++ \\\"/reserved_rewards_total\\\")\\n\",\n    \"},\\n\",\n    \"PreHTTP.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"71caee34\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Assert redenomination at scheduled height\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"5d8dd92d\",\n   \"metadata\": {},\n   \"source\": [\n    \"Mines 1 block to `NewRedenomHeight + 1` (the redenomination block). Captures all HTTP\\n\",\n    \"pricing and wallet values in `PostHTTP`.\\n\",\n    \"\\n\",\n    \"**Assertions:**\\n\",\n    \"- `block.denomination` incremented exactly once.\\n\",\n    \"- `/info` height matches; network name unchanged.\\n\",\n    \"- **Wallet balance (exact):** `Post == (Pre + PreBlockReward) * 1000`.\\n\",\n    \"- **Pricing endpoints (approximate):** expected to scale by 1000× but one block of\\n\",\n    \"  economic activity occurs between snapshots. Tolerance: `max(1000, Expected div 10000)`\\n\",\n    \"  (0.01% or 1000 Winston, whichever is larger).\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 27,\n   \"id\": \"2fff60f2\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:47:44.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:47:44.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:47:58.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:47:58.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"#{price0 => 59246720252252719,price1g => 18458446185029267217100,\\n\",\n       \"  price0_addr => 59246720252252719,price1g_addr => 18458446185029267217100,\\n\",\n       \"  price2_0 => 59246720252252719,price2_1g => 18458446185029267217100,\\n\",\n       \"  price2_0_addr => 59246720252252719,\\n\",\n       \"  price2_1g_addr => 18458446185029267217100,opt0 => 59246720252252719,\\n\",\n       \"  opt1g => 18458446185029267217100,opt0_addr => 59246720252252719,\\n\",\n       \"  opt1g_addr => 18458446185029267217100,v2_0 => 59246720252252719,\\n\",\n       \"  v2_1g => 18458446185029267217100,v2_0_addr => 59246720252252719,\\n\",\n       \"  v2_1g_addr => 18458446185029267217100,\\n\",\n       \"  wallet_balance => 3456438224999335850673000,\\n\",\n       \"  reserved_rewards => 123477414604000000000000}\\n\"\n      ]\n     },\n     \"execution_count\": 27,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"RedenomBlockHeight = NewRedenomHeight + 1,\\n\",\n    \"ok = MineUntilHeight(RedenomBlockHeight),\\n\",\n    \"PreRedenomBlock = RPCBlockByHeight(RedenomBlockHeight - 1),\\n\",\n    \"RedenomBlock = RPCBlockByHeight(RedenomBlockHeight),\\n\",\n    \"DenomBefore = nb_block:denomination(PreRedenomBlock),\\n\",\n    \"DenomAt = nb_block:denomination(RedenomBlock),\\n\",\n    \"\\n\",\n    \"ok = case DenomAt == DenomBefore + 1 of\\n\",\n    \"\\ttrue ->\\n\",\n    \"\\t\\tok;\\n\",\n    \"\\tfalse ->\\n\",\n    \"\\t\\t{error, 
{denomination_not_incremented, DenomBefore, DenomAt}}\\n\",\n    \"end,\\n\",\n    \"\\n\",\n    \"PostInfoMap = HTTPGetJSONMap(BaseUrl ++ \\\"/info\\\"),\\n\",\n    \"PostInfoHeight =\\n\",\n    \"\\tcase maps:get(<<\\\"height\\\">>, PostInfoMap, undefined) of\\n\",\n    \"\\t\\tPostH when is_integer(PostH) ->\\n\",\n    \"\\t\\t\\tPostH;\\n\",\n    \"\\t\\tPostHBin when is_binary(PostHBin) ->\\n\",\n    \"\\t\\t\\tbinary_to_integer(PostHBin);\\n\",\n    \"\\t\\tOther2 ->\\n\",\n    \"\\t\\t\\terlang:error({unexpected_info_height, Other2})\\n\",\n    \"\\tend,\\n\",\n    \"ok = case PostInfoHeight == RedenomBlockHeight of\\n\",\n    \"\\ttrue ->\\n\",\n    \"\\t\\tok;\\n\",\n    \"\\tfalse ->\\n\",\n    \"\\t\\t{error, {post_height_mismatch, PostInfoHeight, RedenomBlockHeight}}\\n\",\n    \"end,\\n\",\n    \"\\n\",\n    \"PreNetwork = maps:get(<<\\\"network\\\">>, PreInfoMap, undefined),\\n\",\n    \"PostNetwork = maps:get(<<\\\"network\\\">>, PostInfoMap, undefined),\\n\",\n    \"ok = case PreNetwork == PostNetwork of\\n\",\n    \"\\ttrue ->\\n\",\n    \"\\t\\tok;\\n\",\n    \"\\tfalse ->\\n\",\n    \"\\t\\t{error, {network_changed, PreNetwork, PostNetwork}}\\n\",\n    \"end,\\n\",\n    \"\\n\",\n    \"PostHTTP = #\\n\",\n    \"{\\n\",\n    \"\\tprice0 => HTTPGetInteger(BaseUrl ++ \\\"/price/0\\\"),\\n\",\n    \"\\tprice1g => HTTPGetInteger(BaseUrl ++ \\\"/price/1000000000\\\"),\\n\",\n    \"\\tprice0_addr => HTTPGetInteger(BaseUrl ++ \\\"/price/0/\\\" ++ AddrB64),\\n\",\n    \"\\tprice1g_addr => HTTPGetInteger(BaseUrl ++ \\\"/price/1000000000/\\\" ++ AddrB64),\\n\",\n    \"\\tprice2_0 => HTTPGetJSONFee(BaseUrl ++ \\\"/price2/0\\\"),\\n\",\n    \"\\tprice2_1g => HTTPGetJSONFee(BaseUrl ++ \\\"/price2/1000000000\\\"),\\n\",\n    \"\\tprice2_0_addr => HTTPGetJSONFee(BaseUrl ++ \\\"/price2/0/\\\" ++ AddrB64),\\n\",\n    \"\\tprice2_1g_addr => HTTPGetJSONFee(BaseUrl ++ \\\"/price2/1000000000/\\\" ++ AddrB64),\\n\",\n    \"\\topt0 => HTTPGetJSONFee(BaseUrl ++ \\\"/optimistic_price/0\\\"),\\n\",\n    \"\\topt1g => HTTPGetJSONFee(BaseUrl ++ \\\"/optimistic_price/1000000000\\\"),\\n\",\n    \"\\topt0_addr => HTTPGetJSONFee(BaseUrl ++ \\\"/optimistic_price/0/\\\" ++ AddrB64),\\n\",\n    \"\\topt1g_addr => HTTPGetJSONFee(BaseUrl ++ \\\"/optimistic_price/1000000000/\\\" ++ AddrB64),\\n\",\n    \"\\tv2_0 => HTTPGetInteger(BaseUrl ++ \\\"/v2price/0\\\"),\\n\",\n    \"\\tv2_1g => HTTPGetInteger(BaseUrl ++ \\\"/v2price/1000000000\\\"),\\n\",\n    \"\\tv2_0_addr => HTTPGetInteger(BaseUrl ++ \\\"/v2price/0/\\\" ++ AddrB64),\\n\",\n    \"\\tv2_1g_addr => HTTPGetInteger(BaseUrl ++ \\\"/v2price/1000000000/\\\" ++ AddrB64),\\n\",\n    \"\\twallet_balance => HTTPGetInteger(BaseUrl ++ \\\"/wallet/\\\" ++ AddrB64 ++ \\\"/balance\\\"),\\n\",\n    \"\\treserved_rewards => HTTPGetInteger(BaseUrl ++ \\\"/wallet/\\\" ++ AddrB64 ++ \\\"/reserved_rewards_total\\\")\\n\",\n    \"},\\n\",\n    \"\\n\",\n    \"ParseIntValue =\\n\",\n    \"\\tfun(Value) ->\\n\",\n    \"\\t\\tcase Value of\\n\",\n    \"\\t\\t\\tInt when is_integer(Int) ->\\n\",\n    \"\\t\\t\\t\\tInt;\\n\",\n    \"\\t\\t\\tBin when is_binary(Bin) ->\\n\",\n    \"\\t\\t\\t\\tbinary_to_integer(Bin);\\n\",\n    \"\\t\\t\\tOther ->\\n\",\n    \"\\t\\t\\t\\terlang:error({unexpected_integer_value, Other})\\n\",\n    \"\\t\\tend\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"PreRewardBlock = HTTPGetJSONMap(BaseUrl ++ \\\"/block/height/\\\" ++ integer_to_list(RedenomBlockHeight - 1)),\\n\",\n    \"PreRewardValue = maps:get(<<\\\"reward\\\">>, PreRewardBlock, 
undefined),\\n\",\n    \"PreReward =\\n\",\n    \"\\tcase PreRewardValue of\\n\",\n    \"\\t\\tundefined ->\\n\",\n    \"\\t\\t\\terlang:error({missing_reward, PreRewardBlock});\\n\",\n    \"\\t\\t_ ->\\n\",\n    \"\\t\\t\\tParseIntValue(PreRewardValue)\\n\",\n    \"\\tend,\\n\",\n    \"PreWalletBalance = maps:get(wallet_balance, PreHTTP),\\n\",\n    \"PostWalletBalance = maps:get(wallet_balance, PostHTTP),\\n\",\n    \"ExpectedPostWallet = (PreWalletBalance + PreReward) * 1000,\\n\",\n    \"ok = case PostWalletBalance == ExpectedPostWallet of\\n\",\n    \"\\ttrue ->\\n\",\n    \"\\t\\tok;\\n\",\n    \"\\tfalse ->\\n\",\n    \"\\t\\t{error, {wallet_balance_scale_mismatch, PreWalletBalance, PreReward, PostWalletBalance}}\\n\",\n    \"end,\\n\",\n    \"\\n\",\n    \"Scale = 1000,\\n\",\n    \"\\n\",\n    \"CheckApproxScale =\\n\",\n    \"\\tfun(Key) ->\\n\",\n    \"\\t\\tPreVal = maps:get(Key, PreHTTP),\\n\",\n    \"\\t\\tPostVal = maps:get(Key, PostHTTP),\\n\",\n    \"\\t\\tExpected = PreVal * Scale,\\n\",\n    \"\\t\\tDiff = abs(PostVal - Expected),\\n\",\n    \"\\t\\tMaxDiff = max(1000, Expected div 10000),\\n\",\n    \"\\t\\tcase Diff =< MaxDiff of\\n\",\n    \"\\t\\t\\ttrue ->\\n\",\n    \"\\t\\t\\t\\tok;\\n\",\n    \"\\t\\t\\tfalse ->\\n\",\n    \"\\t\\t\\t\\terlang:error({redenom_approx_scale_mismatch, Key, PreVal, PostVal, Diff, MaxDiff})\\n\",\n    \"\\t\\tend\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"ApproxScaleKeys = [\\n\",\n    \"\\tprice0, price1g, price0_addr, price1g_addr,\\n\",\n    \"\\tprice2_0, price2_1g, price2_0_addr, price2_1g_addr,\\n\",\n    \"\\topt0, opt1g, opt0_addr, opt1g_addr,\\n\",\n    \"\\tv2_0, v2_1g, v2_0_addr, v2_1g_addr,\\n\",\n    \"\\treserved_rewards\\n\",\n    \"],\\n\",\n    \"lists:foreach(CheckApproxScale, ApproxScaleKeys),\\n\",\n    \"\\n\",\n    \"PostHTTP.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"5515c8c7\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Assert block reward and endowment pool around redenomination\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"ef36e6e3\",\n   \"metadata\": {},\n   \"source\": [\n    \"Mines 1 block to `RedenomBlockHeight + 1`. 
Uses `ComputeExpectedRewardAndPool` (defined earlier)\\n\",\n    \"to verify the block reward and endowment pool at three heights around the redenomination\\n\",\n    \"boundary: `RedenomBlockHeight - 1` (pre), `RedenomBlockHeight` (redenomination block),\\n\",\n    \"and `RedenomBlockHeight + 1` (post).\\n\",\n    \"\\n\",\n    \"**Assertions:**\\n\",\n    \"- At each height, `block.reward` matches the expected reward within 0.1%.\\n\",\n    \"- At each height, `block.reward_pool` matches the expected pool within 0.1%.\\n\",\n    \"\\n\",\n    \"See the trigger block cell above for the tolerance justification.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 28,\n   \"id\": \"617e583f\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:47:58.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:47:58.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:48:09.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:48:09.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"#{at =>\\n\",\n       \"      #{height => 2069894,miner_fee_share => 0,endowment_fee_share => 0,\\n\",\n       \"        expected_reward => 123478065960000000000000,\\n\",\n       \"        expected_pool => 292873436150971032664149327000,\\n\",\n       \"        inflation => 123478065960000000000,storage_cost => 0,\\n\",\n       \"        actual_reward => 123477414604000000000000,\\n\",\n       \"        actual_pool => 292873436150971032664149327000},\\n\",\n       \"  pre =>\\n\",\n       \"      #{height => 2069893,miner_fee_share => 0,endowment_fee_share => 0,\\n\",\n       \"        expected_reward => 123478065960000000000,\\n\",\n       \"        expected_pool => 292873436150971032664149327,\\n\",\n       \"        inflation => 123478065960000000000,storage_cost => 0,\\n\",\n       \"        actual_reward => 123477740282000000000,\\n\",\n       \"        actual_pool => 292873436150971032664149327},\\n\",\n       \"  post =>\\n\",\n       \"      #{height => 2069895,miner_fee_share => 0,endowment_fee_share => 0,\\n\",\n       \"        expected_reward => 123478065960000000000000,\\n\",\n       \"        expected_pool => 292873436150971032664149327000,\\n\",\n       \"        inflation => 123478065960000000000000,storage_cost => 0,\\n\",\n       \"        actual_reward => 123477088927000000000000,\\n\",\n       \"        actual_pool => 292873436150971032664149327000}}\\n\"\n      ]\n     },\n     \"execution_count\": 28,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"ok = MineUntilHeight(RedenomBlockHeight + 1),\\n\",\n    \"\\n\",\n    \"AssertRewardAndPool =\\n\",\n    \"\\tfun(CheckHeight) ->\\n\",\n    \"\\t\\tBlock = RPCBlockByHeight(CheckHeight),\\n\",\n    \"\\t\\tPrevBlock = RPCBlockByHeight(CheckHeight - 1),\\n\",\n    \"\\t\\tBlockTXs = EnsureTXs(nb_block:txs(Block)),\\n\",\n    \"\\t\\tExpected = ComputeExpectedRewardAndPool(CheckHeight, Block, PrevBlock, BlockTXs),\\n\",\n    \"\\t\\tActReward = nb_block:reward(Block),\\n\",\n    \"\\t\\tActPool = nb_block:reward_pool(Block),\\n\",\n    \"\\t\\tExpReward = maps:get(expected_reward, Expected),\\n\",\n    \"\\t\\tExpPool = maps:get(expected_pool, Expected),\\n\",\n    \"\\t\\tRTol = max(1, ExpReward div 1000),\\n\",\n    \"\\t\\tok =\\n\",\n    \"\\t\\t\\tcase abs(ActReward - ExpReward) =< RTol of\\n\",\n    \"\\t\\t\\t\\ttrue ->\\n\",\n    \"\\t\\t\\t\\t\\tok;\\n\",\n    
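\"\\t\\t\\t\\t%% Outside the 0.1% tolerance; the error tuple fails the enclosing ok = match.\\n\",\n    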
\"\\t\\t\\t\\tfalse ->\\n\",\n    \"\\t\\t\\t\\t\\t{error, {reward_mismatch, CheckHeight, ActReward, ExpReward, RTol}}\\n\",\n    \"\\t\\t\\tend,\\n\",\n    \"\\t\\tPTol = max(1, ExpPool div 1000),\\n\",\n    \"\\t\\tok =\\n\",\n    \"\\t\\t\\tcase abs(ActPool - ExpPool) =< PTol of\\n\",\n    \"\\t\\t\\t\\ttrue ->\\n\",\n    \"\\t\\t\\t\\t\\tok;\\n\",\n    \"\\t\\t\\t\\tfalse ->\\n\",\n    \"\\t\\t\\t\\t\\t{error, {pool_mismatch, CheckHeight, ActPool, ExpPool, PTol}}\\n\",\n    \"\\t\\t\\tend,\\n\",\n    \"\\t\\tExpected#{\\n\",\n    \"\\t\\t\\theight => CheckHeight,\\n\",\n    \"\\t\\t\\tactual_reward => ActReward,\\n\",\n    \"\\t\\t\\tactual_pool => ActPool\\n\",\n    \"\\t\\t}\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"PreResult = AssertRewardAndPool(RedenomBlockHeight - 1),\\n\",\n    \"AtResult = AssertRewardAndPool(RedenomBlockHeight),\\n\",\n    \"PostResult = AssertRewardAndPool(RedenomBlockHeight + 1),\\n\",\n    \"\\n\",\n    \"#{ pre => PreResult, at => AtResult, post => PostResult }.\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"53ef2398\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Assert miner balance deltas around redenomination\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"cfeb5e87\",\n   \"metadata\": {},\n   \"source\": [\n    \"No blocks mined. Checks the miner balance delta at `RedenomBlockHeight - 1` (pre-redenomination) and `RedenomBlockHeight` (redenomination block).\\n\",\n    \"\\n\",\n    \"For each height H, the expected balance delta at H+1 is `Redenominate(block(H).reward, block(H).denomination, block(H+1).denomination)`. The actual delta is `MinerBalanceAt(H+1) - Redenominate(MinerBalanceAt(H), block(H).denomination, block(H+1).denomination)`. Asserts exact equality.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 29,\n   \"id\": \"6d74e620\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:48:09.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:48:09.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"ok\\n\"\n      ]\n     },\n     \"execution_count\": 29,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"CheckMinerDelta =\\n\",\n    \"\\tfun(RewardHeight) ->\\n\",\n    \"\\t\\tRewardBlock = RPCBlockByHeight(RewardHeight),\\n\",\n    \"\\t\\tApplyBlock = RPCBlockByHeight(RewardHeight + 1),\\n\",\n    \"\\t\\tReward = nb_block:reward(RewardBlock),\\n\",\n    \"\\t\\tRewardDenom = nb_block:denomination(RewardBlock),\\n\",\n    \"\\t\\tApplyDenom = nb_block:denomination(ApplyBlock),\\n\",\n    \"\\t\\tExpectedApplied = Redenominate(Reward, RewardDenom, ApplyDenom),\\n\",\n    \"\\t\\tBalanceBefore = MinerBalanceAt(RewardHeight),\\n\",\n    \"\\t\\tBalanceAfter = MinerBalanceAt(RewardHeight + 1),\\n\",\n    \"\\t\\tBalanceBeforeNormalized = Redenominate(BalanceBefore, RewardDenom, ApplyDenom),\\n\",\n    \"\\t\\tDelta = BalanceAfter - BalanceBeforeNormalized,\\n\",\n    \"\\t\\tcase Delta == ExpectedApplied of\\n\",\n    \"\\t\\t\\ttrue ->\\n\",\n    \"\\t\\t\\t\\tok;\\n\",\n    \"\\t\\t\\tfalse ->\\n\",\n    \"\\t\\t\\t\\t{error, {miner_balance_delta_mismatch, RewardHeight, Delta, ExpectedApplied}}\\n\",\n    \"\\t\\tend\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"ok = CheckMinerDelta(RedenomBlockHeight 
- 1),\\n\",\n    \"ok = CheckMinerDelta(RedenomBlockHeight),\\n\",\n    \"\\n\",\n    \"ok.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"c165093d\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Per-height summary table\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"f30f0dd6\",\n   \"metadata\": {},\n   \"source\": [\n    \"No blocks mined. Displays a summary map for every height from `RedenomTriggerBlockHeight - 2` through `RedenomBlockHeight + 1`, showing denomination, redenomination_height, reward_pool, and miner_balance. This is an informational cell with no assertions.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 30,\n   \"id\": \"19514b7a\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:48:09.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:48:09.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"[#{height => 2069890,denomination => 4,redenomination_height => 2069886,\\n\",\n       \"   reward_pool => 292873436146306634442575000,\\n\",\n       \"   miner_balance => 2962529974195557425000},\\n\",\n       \" #{height => 2069891,denomination => 4,redenomination_height => 2069886,\\n\",\n       \"   reward_pool => 292873436146306634442575000,\\n\",\n       \"   miner_balance => 3086008691515557425000},\\n\",\n       \" #{height => 2069892,denomination => 4,redenomination_height => 2069893,\\n\",\n       \"   reward_pool => 292873436150971032664149327,\\n\",\n       \"   miner_balance => 3209482185537424771957},\\n\",\n       \" #{height => 2069893,denomination => 4,redenomination_height => 2069893,\\n\",\n       \"   reward_pool => 292873436150971032664149327,\\n\",\n       \"   miner_balance => 3332960484717335850673},\\n\",\n       \" #{height => 2069894,denomination => 5,redenomination_height => 2069893,\\n\",\n       \"   reward_pool => 292873436150971032664149327000,\\n\",\n       \"   miner_balance => 3456438224999335850673000},\\n\",\n       \" #{height => 2069895,denomination => 5,redenomination_height => 2069893,\\n\",\n       \"   reward_pool => 292873436150971032664149327000,\\n\",\n       \"   miner_balance => 3579915639603335850673000}]\\n\"\n      ]\n     },\n     \"execution_count\": 30,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"SummaryStart0 = RedenomTriggerBlockHeight - 2,\\n\",\n    \"SummaryStart =\\n\",\n    \"\\tcase SummaryStart0 < 0 of\\n\",\n    \"\\t\\ttrue ->\\n\",\n    \"\\t\\t\\t0;\\n\",\n    \"\\t\\tfalse ->\\n\",\n    \"\\t\\t\\tSummaryStart0\\n\",\n    \"\\tend,\\n\",\n    \"SummaryEnd = RedenomBlockHeight + 1,\\n\",\n    \"Heights = lists:seq(SummaryStart, SummaryEnd),\\n\",\n    \"Summary =\\n\",\n    \"\\t[begin\\n\",\n    \"\\t\\tBlock = RPCBlockByHeight(Height),\\n\",\n    \"\\t\\t#{ height => Height,\\n\",\n    \"\\t\\t  denomination => nb_block:denomination(Block),\\n\",\n    \"\\t\\t  redenomination_height => nb_block:redenomination_height(Block),\\n\",\n    \"\\t\\t  reward_pool => nb_block:reward_pool(Block),\\n\",\n    \"\\t\\t  miner_balance => WalletBalanceFromBlock(Block, MiningAddr) }\\n\",\n    \"\\t end || Height <- Heights],\\n\",\n    \"Summary.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"37e8f267\",\n   \"metadata\": {},\n   \"source\": [\n    \"## 
Post-redenomination HTTP endpoint checks\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"0229eac0\",\n   \"metadata\": {},\n   \"source\": [\n    \"Validates every HTTP pricing, wallet, and block endpoint at the current height\\n\",\n    \"(post-redenomination). Each cell fetches a value from the HTTP API and asserts a property:\\n\",\n    \"\\n\",\n    \"- **`/price/{size}`**: price for 0 bytes and 1 GB. `price(1GB) > price(0)`.\\n\",\n    \"- **`/price/{size}/{addr}`**: with the miner address, equals the no-address variant.\\n\",\n    \"- **`/price2/` and `/optimistic_price/`**: JSON fee endpoints. `price2 == price`,\\n\",\n    \"  `optimistic_price <= price`.\\n\",\n    \"- **`/v2price/`**: positive, monotonic in data size, address variant equals no-address.\\n\",\n    \"- **`/wallet/{addr}/balance`**: equals the per-block balance endpoint.\\n\",\n    \"- **`/wallet/{addr}/last_tx`**: decodes to 32 bytes; the TX JSON has matching `id`\\n\",\n    \"  and non-empty `signature`.\\n\",\n    \"- **`/wallet/{addr}/reserved_rewards_total`**: equals the current block's reward\\n\",\n    \"  (`locked_rewards_blocks = 1`, sole miner).\\n\",\n    \"- **`/tx_anchor`**: decodes to a block hash from the last 10 blocks.\\n\",\n    \"- **`/block/height/{h}`**: returned height matches the request.\\n\",\n    \"- **`/info`**: network name matches localnet.\\n\",\n    \"- **`/tx/pending`**: every entry is a valid 32-byte base64-encoded TX id.\\n\",\n    \"- **Redenomination scaling:** all pricing and wallet endpoints are approximately\\n\",\n    \"  1000× their pre-redenomination values (tolerance: 0.01% or 1000 base units).\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 31,\n   \"id\": \"7e4e23c8\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:48:09.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:48:09.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"#{addr => \\\"yjrOLPSHP1cvZ_y1bSLkLd8lIWhu9dsbSufXm9-QjsY\\\",height => 2069895}\\n\"\n      ]\n     },\n     \"execution_count\": 31,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"AddrB64 = binary_to_list(ar_util:encode(MiningAddr)),\\n\",\n    \"InfoMap = HTTPGetJSONMap(BaseUrl ++ \\\"/info\\\"),\\n\",\n    \"CurrentHeight =\\n\",\n    \"\\tcase maps:get(<<\\\"height\\\">>, InfoMap, undefined) of\\n\",\n    \"\\t\\tCurH when is_integer(CurH) ->\\n\",\n    \"\\t\\t\\tCurH;\\n\",\n    \"\\t\\tCurHBin when is_binary(CurHBin) ->\\n\",\n    \"\\t\\t\\tbinary_to_integer(CurHBin);\\n\",\n    \"\\t\\tOther ->\\n\",\n    \"\\t\\t\\terlang:error({unexpected_info_height, Other})\\n\",\n    \"\\tend,\\n\",\n    \"#{addr => AddrB64, height => CurrentHeight}. \"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"post-http-redenomination-scaling-doc\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Assert post-redenomination values are ~1000× pre-redenomination\\n\",\n    \"\\n\",\n    \"Compares every pricing and wallet endpoint at the current post-redenomination height\\n\",\n    \"against the `PreHTTP` snapshot captured before redenomination. Between the two snapshots,\\n\",\n    \"2 blocks were mined (redenomination block + 1 post block), so values may drift slightly\\n\",\n    \"due to normal economic activity. 
Tolerance: `max(1000, Expected div 10000)` (0.01% or\\n\",\n    \"1000 base units, whichever is larger).\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 32,\n   \"id\": \"post-http-redenomination-scaling\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:48:09.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:48:09.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"ok\\n\"\n      ]\n     },\n     \"execution_count\": 32,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"PostHTTPCheck = #\\n\",\n    \"{\\n\",\n    \"\\tprice0 => HTTPGetInteger(BaseUrl ++ \\\"/price/0\\\"),\\n\",\n    \"\\tprice1g => HTTPGetInteger(BaseUrl ++ \\\"/price/1000000000\\\"),\\n\",\n    \"\\tprice0_addr => HTTPGetInteger(BaseUrl ++ \\\"/price/0/\\\" ++ AddrB64),\\n\",\n    \"\\tprice1g_addr => HTTPGetInteger(BaseUrl ++ \\\"/price/1000000000/\\\" ++ AddrB64),\\n\",\n    \"\\tv2_0 => HTTPGetInteger(BaseUrl ++ \\\"/v2price/0\\\"),\\n\",\n    \"\\tv2_1g => HTTPGetInteger(BaseUrl ++ \\\"/v2price/1000000000\\\"),\\n\",\n    \"\\tv2_0_addr => HTTPGetInteger(BaseUrl ++ \\\"/v2price/0/\\\" ++ AddrB64),\\n\",\n    \"\\tv2_1g_addr => HTTPGetInteger(BaseUrl ++ \\\"/v2price/1000000000/\\\" ++ AddrB64),\\n\",\n    \"\\twallet_balance => HTTPGetInteger(BaseUrl ++ \\\"/wallet/\\\" ++ AddrB64 ++ \\\"/balance\\\"),\\n\",\n    \"\\treserved_rewards => HTTPGetInteger(BaseUrl ++ \\\"/wallet/\\\" ++ AddrB64 ++ \\\"/reserved_rewards_total\\\")\\n\",\n    \"},\\n\",\n    \"\\n\",\n    \"PostHTTPScaleKeys = [\\n\",\n    \"\\tprice0, price1g, price0_addr, price1g_addr,\\n\",\n    \"\\tv2_0, v2_1g, v2_0_addr, v2_1g_addr,\\n\",\n    \"\\treserved_rewards\\n\",\n    \"],\\n\",\n    \"\\n\",\n    \"lists:foreach(\\n\",\n    \"\\tfun(Key) ->\\n\",\n    \"\\t\\tPreVal = maps:get(Key, PreHTTP),\\n\",\n    \"\\t\\tPostVal = maps:get(Key, PostHTTPCheck),\\n\",\n    \"\\t\\tExpected = PreVal * 1000,\\n\",\n    \"\\t\\tDiff = abs(PostVal - Expected),\\n\",\n    \"\\t\\tMaxDiff = max(1000, Expected div 10000),\\n\",\n    \"\\t\\tcase Diff =< MaxDiff of\\n\",\n    \"\\t\\t\\ttrue ->\\n\",\n    \"\\t\\t\\t\\tok;\\n\",\n    \"\\t\\t\\tfalse ->\\n\",\n    \"\\t\\t\\t\\terlang:error({post_redenomination_scale_mismatch,\\n\",\n    \"\\t\\t\\t\\t\\tKey, PreVal, PostVal, Expected, Diff, MaxDiff})\\n\",\n    \"\\t\\tend\\n\",\n    \"\\tend,\\n\",\n    \"\\tPostHTTPScaleKeys\\n\",\n    \"),\\n\",\n    \"\\n\",\n    \"ok.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 33,\n   \"id\": \"3fedfb74\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:48:09.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:48:09.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"59246720252252719\\n\"\n      ]\n     },\n     \"execution_count\": 33,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"Price0 = HTTPGetInteger(BaseUrl ++ \\\"/price/0\\\"),\\n\",\n    \"Price0.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 34,\n   \"id\": \"43fb689b\",\n   
\"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:48:09.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:48:09.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"ok\\n\"\n      ]\n     },\n     \"execution_count\": 34,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"Price1G = HTTPGetInteger(BaseUrl ++ \\\"/price/1000000000\\\"),\\n\",\n    \"ok = case Price1G > Price0 of\\n\",\n    \"\\ttrue ->\\n\",\n    \"\\t\\tok;\\n\",\n    \"\\tfalse ->\\n\",\n    \"\\t\\t{error, {price_not_monotonic, Price0, Price1G}}\\n\",\n    \"end.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 35,\n   \"id\": \"b084310a\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:48:09.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:48:09.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"ok\\n\"\n      ]\n     },\n     \"execution_count\": 35,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"Price0Addr = HTTPGetInteger(BaseUrl ++ \\\"/price/0/\\\" ++ AddrB64),\\n\",\n    \"ok = case Price0Addr == Price0 of\\n\",\n    \"\\ttrue ->\\n\",\n    \"\\t\\tok;\\n\",\n    \"\\tfalse ->\\n\",\n    \"\\t\\t{error, {price_addr_mismatch, Price0Addr, Price0}}\\n\",\n    \"end.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 36,\n   \"id\": \"0244bc6b\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:48:09.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:48:09.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"ok\\n\"\n      ]\n     },\n     \"execution_count\": 36,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"Price1GAddr = HTTPGetInteger(BaseUrl ++ \\\"/price/1000000000/\\\" ++ AddrB64),\\n\",\n    \"ok = case Price1GAddr == Price1G of\\n\",\n    \"\\ttrue ->\\n\",\n    \"\\t\\tok;\\n\",\n    \"\\tfalse ->\\n\",\n    \"\\t\\t{error, {price_addr_mismatch, Price1GAddr, Price1G}}\\n\",\n    \"end.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 37,\n   \"id\": \"5e175c9e\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:48:09.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:48:09.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"ok\\n\"\n      ]\n     },\n     \"execution_count\": 37,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"Price2_0 = HTTPGetJSONFee(BaseUrl ++ \\\"/price2/0\\\"),\\n\",\n    \"ok = case Price2_0 == Price0 of\\n\",\n    \"\\ttrue ->\\n\",\n    \"\\t\\tok;\\n\",\n    \"\\tfalse ->\\n\",\n    \"\\t\\t{error, {price2_mismatch, 
Price2_0, Price0}}\\n\",\n    \"end.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 38,\n   \"id\": \"7e58671c\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:48:09.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:48:09.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"ok\\n\"\n      ]\n     },\n     \"execution_count\": 38,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"Price2_1G = HTTPGetJSONFee(BaseUrl ++ \\\"/price2/1000000000\\\"),\\n\",\n    \"ok = case Price2_1G == Price1G of\\n\",\n    \"\\ttrue ->\\n\",\n    \"\\t\\tok;\\n\",\n    \"\\tfalse ->\\n\",\n    \"\\t\\t{error, {price2_mismatch, Price2_1G, Price1G}}\\n\",\n    \"end.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 39,\n   \"id\": \"0c077c44\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:48:09.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:48:09.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"ok\\n\"\n      ]\n     },\n     \"execution_count\": 39,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"Price2_0Addr = HTTPGetJSONFee(BaseUrl ++ \\\"/price2/0/\\\" ++ AddrB64),\\n\",\n    \"ok = case Price2_0Addr == Price0Addr of\\n\",\n    \"\\ttrue ->\\n\",\n    \"\\t\\tok;\\n\",\n    \"\\tfalse ->\\n\",\n    \"\\t\\t{error, {price2_mismatch, Price2_0Addr, Price0Addr}}\\n\",\n    \"end.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 40,\n   \"id\": \"fcb181b3\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:48:09.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:48:09.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"ok\\n\"\n      ]\n     },\n     \"execution_count\": 40,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"Price2_1GAddr = HTTPGetJSONFee(BaseUrl ++ \\\"/price2/1000000000/\\\" ++ AddrB64),\\n\",\n    \"ok = case Price2_1GAddr == Price1GAddr of\\n\",\n    \"\\ttrue ->\\n\",\n    \"\\t\\tok;\\n\",\n    \"\\tfalse ->\\n\",\n    \"\\t\\t{error, {price2_mismatch, Price2_1GAddr, Price1GAddr}}\\n\",\n    \"end.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 41,\n   \"id\": \"800c0aac\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:48:09.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:48:09.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"ok\\n\"\n      ]\n     },\n     \"execution_count\": 41,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"Opt0 = HTTPGetJSONFee(BaseUrl ++ 
\\\"/optimistic_price/0\\\"),\\n\",\n    \"ok = case Opt0 =< Price0 of\\n\",\n    \"\\ttrue ->\\n\",\n    \"\\t\\tok;\\n\",\n    \"\\tfalse ->\\n\",\n    \"\\t\\t{error, {optimistic_too_high, Opt0, Price0}}\\n\",\n    \"end.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 42,\n   \"id\": \"4a69ee36\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:48:09.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:48:09.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"ok\\n\"\n      ]\n     },\n     \"execution_count\": 42,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"Opt1G = HTTPGetJSONFee(BaseUrl ++ \\\"/optimistic_price/1000000000\\\"),\\n\",\n    \"ok = case Opt1G =< Price1G of\\n\",\n    \"\\ttrue ->\\n\",\n    \"\\t\\tok;\\n\",\n    \"\\tfalse ->\\n\",\n    \"\\t\\t{error, {optimistic_too_high, Opt1G, Price1G}}\\n\",\n    \"end.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 43,\n   \"id\": \"24c08591\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:48:09.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:48:09.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"ok\\n\"\n      ]\n     },\n     \"execution_count\": 43,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"Opt0Addr = HTTPGetJSONFee(BaseUrl ++ \\\"/optimistic_price/0/\\\" ++ AddrB64),\\n\",\n    \"ok = case Opt0Addr =< Price0Addr of\\n\",\n    \"\\ttrue ->\\n\",\n    \"\\t\\tok;\\n\",\n    \"\\tfalse ->\\n\",\n    \"\\t\\t{error, {optimistic_too_high, Opt0Addr, Price0Addr}}\\n\",\n    \"end.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 44,\n   \"id\": \"53865da9\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:48:09.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:48:09.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"ok\\n\"\n      ]\n     },\n     \"execution_count\": 44,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"Opt1GAddr = HTTPGetJSONFee(BaseUrl ++ \\\"/optimistic_price/1000000000/\\\" ++ AddrB64),\\n\",\n    \"ok = case Opt1GAddr =< Price1GAddr of\\n\",\n    \"\\ttrue ->\\n\",\n    \"\\t\\tok;\\n\",\n    \"\\tfalse ->\\n\",\n    \"\\t\\t{error, {optimistic_too_high, Opt1GAddr, Price1GAddr}}\\n\",\n    \"end.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 45,\n   \"id\": \"7615efe7\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:48:09.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:48:09.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"ok\\n\"\n      ]\n    
 },\n     \"execution_count\": 45,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"V2_0 = HTTPGetInteger(BaseUrl ++ \\\"/v2price/0\\\"),\\n\",\n    \"ok = case V2_0 > 0 of\\n\",\n    \"\\ttrue ->\\n\",\n    \"\\t\\tok;\\n\",\n    \"\\tfalse ->\\n\",\n    \"\\t\\t{error, {v2price_not_positive, V2_0}}\\n\",\n    \"end.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 46,\n   \"id\": \"fce9ac9f\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:48:09.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:48:09.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"ok\\n\"\n      ]\n     },\n     \"execution_count\": 46,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"V2_1G = HTTPGetInteger(BaseUrl ++ \\\"/v2price/1000000000\\\"),\\n\",\n    \"ok = case V2_1G > V2_0 of\\n\",\n    \"\\ttrue ->\\n\",\n    \"\\t\\tok;\\n\",\n    \"\\tfalse ->\\n\",\n    \"\\t\\t{error, {v2price_not_monotonic, V2_0, V2_1G}}\\n\",\n    \"end.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 47,\n   \"id\": \"96a051c4\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:48:09.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:48:09.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"ok\\n\"\n      ]\n     },\n     \"execution_count\": 47,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"V2_0Addr = HTTPGetInteger(BaseUrl ++ \\\"/v2price/0/\\\" ++ AddrB64),\\n\",\n    \"ok = case V2_0Addr == V2_0 of\\n\",\n    \"\\ttrue ->\\n\",\n    \"\\t\\tok;\\n\",\n    \"\\tfalse ->\\n\",\n    \"\\t\\t{error, {v2price_addr_mismatch, V2_0Addr, V2_0}}\\n\",\n    \"end.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 48,\n   \"id\": \"1c20fe9d\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:48:09.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:48:09.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"ok\\n\"\n      ]\n     },\n     \"execution_count\": 48,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"V2_1GAddr = HTTPGetInteger(BaseUrl ++ \\\"/v2price/1000000000/\\\" ++ AddrB64),\\n\",\n    \"ok = case V2_1GAddr == V2_1G of\\n\",\n    \"\\ttrue ->\\n\",\n    \"\\t\\tok;\\n\",\n    \"\\tfalse ->\\n\",\n    \"\\t\\t{error, {v2price_addr_mismatch, V2_1GAddr, V2_1G}}\\n\",\n    \"end.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 49,\n   \"id\": \"f86858ec\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:48:09.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:48:09.000000Z\"\n    }\n   },\n 
  \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"ok\\n\"\n      ]\n     },\n     \"execution_count\": 49,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"WalletBalanceHTTP = HTTPGetInteger(BaseUrl ++ \\\"/wallet/\\\" ++ AddrB64 ++ \\\"/balance\\\"),\\n\",\n    \"WalletBalanceAtHeight =\\n\",\n    \"\\tHTTPGetInteger(\\n\",\n    \"\\t\\tBaseUrl ++ \\\"/block/height/\\\" ++ integer_to_list(CurrentHeight) ++ \\\"/wallet/\\\" ++ AddrB64 ++ \\\"/balance\\\"\\n\",\n    \"\\t),\\n\",\n    \"ok = case WalletBalanceHTTP == WalletBalanceAtHeight of\\n\",\n    \"\\ttrue ->\\n\",\n    \"\\t\\tok;\\n\",\n    \"\\tfalse ->\\n\",\n    \"\\t\\t{error, {wallet_balance_mismatch, WalletBalanceHTTP, WalletBalanceAtHeight}}\\n\",\n    \"end.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 50,\n   \"id\": \"a4e360b2\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:48:09.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:48:09.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"ok\\n\"\n      ]\n     },\n     \"execution_count\": 50,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"LastTXB64 =\\n\",\n    \"\\tcase HTTPGet(BaseUrl ++ \\\"/wallet/\\\" ++ AddrB64 ++ \\\"/last_tx\\\") of\\n\",\n    \"\\t\\t{ok, LastTXBody} ->\\n\",\n    \"\\t\\t\\tiolist_to_binary(LastTXBody);\\n\",\n    \"\\t\\t{error, LastTXErr} ->\\n\",\n    \"\\t\\t\\terlang:error({last_tx_failed, LastTXErr})\\n\",\n    \"\\tend,\\n\",\n    \"ok = case DecodeBase64(LastTXB64) of\\n\",\n    \"\\t{ok, LastTXDecoded} ->\\n\",\n    \"\\t\\tcase byte_size(LastTXDecoded) == 32 of\\n\",\n    \"\\t\\t\\ttrue ->\\n\",\n    \"\\t\\t\\t\\tok;\\n\",\n    \"\\t\\t\\tfalse ->\\n\",\n    \"\\t\\t\\t\\t{error, {last_tx_size_mismatch, byte_size(LastTXDecoded)}}\\n\",\n    \"\\t\\tend;\\n\",\n    \"\\t{error, LastTXDecErr} ->\\n\",\n    \"\\t\\t{error, LastTXDecErr}\\n\",\n    \"end,\\n\",\n    \"LastTXMap = HTTPGetJSONMap(BaseUrl ++ \\\"/tx/\\\" ++ binary_to_list(LastTXB64)),\\n\",\n    \"ok = case maps:get(<<\\\"id\\\">>, LastTXMap, undefined) of\\n\",\n    \"\\tundefined ->\\n\",\n    \"\\t\\t{error, {missing_tx_id, LastTXMap}};\\n\",\n    \"\\tLastTXB64 ->\\n\",\n    \"\\t\\tok;\\n\",\n    \"\\tOtherId ->\\n\",\n    \"\\t\\t{error, {last_tx_id_mismatch, OtherId, LastTXB64}}\\n\",\n    \"end,\\n\",\n    \"ok = case maps:get(<<\\\"signature\\\">>, LastTXMap, undefined) of\\n\",\n    \"\\tSig when is_binary(Sig), byte_size(Sig) > 0 ->\\n\",\n    \"\\t\\tok;\\n\",\n    \"\\tOtherSig ->\\n\",\n    \"\\t\\t{error, {invalid_tx_signature, OtherSig}}\\n\",\n    \"end.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 51,\n   \"id\": \"20c58916\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:48:09.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:48:09.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"ok\\n\"\n      ]\n     },\n     \"execution_count\": 51,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n  
 \"source\": [\n    \"ReservedHTTP = HTTPGetInteger(BaseUrl ++ \\\"/wallet/\\\" ++ AddrB64 ++ \\\"/reserved_rewards_total\\\"),\\n\",\n    \"\\n\",\n    \"BlockForReserved = RPCBlockByHeight(CurrentHeight),\\n\",\n    \"ExpectedReserved = nb_block:reward(BlockForReserved),\\n\",\n    \"\\n\",\n    \"ok = case ReservedHTTP == ExpectedReserved of\\n\",\n    \"\\ttrue ->\\n\",\n    \"\\t\\tok;\\n\",\n    \"\\tfalse ->\\n\",\n    \"\\t\\t{error, {reserved_rewards_mismatch, ReservedHTTP, ExpectedReserved}}\\n\",\n    \"end.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 52,\n   \"id\": \"bfbb8047\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:48:09.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:48:10.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:48:10.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"ok\\n\"\n      ]\n     },\n     \"execution_count\": 52,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"AnchorB64 =\\n\",\n    \"\\tcase HTTPGet(BaseUrl ++ \\\"/tx_anchor\\\") of\\n\",\n    \"\\t\\t{ok, AnchorBody} ->\\n\",\n    \"\\t\\t\\tiolist_to_binary(AnchorBody);\\n\",\n    \"\\t\\t{error, AnchorErr} ->\\n\",\n    \"\\t\\t\\terlang:error({tx_anchor_failed, AnchorErr})\\n\",\n    \"\\tend,\\n\",\n    \"AnchorBin =\\n\",\n    \"\\tcase DecodeBase64(AnchorB64) of\\n\",\n    \"\\t\\t{ok, AnchorDecoded} ->\\n\",\n    \"\\t\\t\\tAnchorDecoded;\\n\",\n    \"\\t\\t{error, AnchorDecErr} ->\\n\",\n    \"\\t\\t\\terlang:error({tx_anchor_invalid, AnchorDecErr})\\n\",\n    \"\\tend,\\n\",\n    \"RecentStart0 = CurrentHeight - 10,\\n\",\n    \"RecentStart =\\n\",\n    \"\\tcase RecentStart0 < 0 of\\n\",\n    \"\\t\\ttrue ->\\n\",\n    \"\\t\\t\\t0;\\n\",\n    \"\\t\\tfalse ->\\n\",\n    \"\\t\\t\\tRecentStart0\\n\",\n    \"\\tend,\\n\",\n    \"RecentHeights = lists:seq(RecentStart, CurrentHeight),\\n\",\n    \"RecentHashes =\\n\",\n    \"\\t[begin\\n\",\n    \"\\t\\tHashB64 = HTTPGetBlockHash(Height),\\n\",\n    \"\\t\\tcase DecodeBase64(HashB64) of\\n\",\n    \"\\t\\t\\t{ok, HashBin} ->\\n\",\n    \"\\t\\t\\t\\tHashBin;\\n\",\n    \"\\t\\t\\t{error, HashErr} ->\\n\",\n    \"\\t\\t\\t\\terlang:error({invalid_block_hash, Height, HashErr})\\n\",\n    \"\\t\\tend\\n\",\n    \"\\t end || Height <- RecentHeights],\\n\",\n    \"ok = case lists:member(AnchorBin, RecentHashes) of\\n\",\n    \"\\ttrue ->\\n\",\n    \"\\t\\tok;\\n\",\n    \"\\tfalse ->\\n\",\n    \"\\t\\t{error, {tx_anchor_not_recent, AnchorB64, RecentHeights}}\\n\",\n    \"end.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 53,\n   \"id\": \"b759b251\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:48:10.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:48:10.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:48:10.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:48:10.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"ok\\n\"\n      ]\n     },\n     \"execution_count\": 53,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"BlockMap = HTTPGetJSONMap(BaseUrl ++ \\\"/block/height/\\\" ++ integer_to_list(CurrentHeight)),\\n\",\n    \"ok = case maps:get(<<\\\"height\\\">>, BlockMap, undefined) 
of\\n\",\n    \"\\tCurrentHeight ->\\n\",\n    \"\\t\\tok;\\n\",\n    \"\\tHeightVal ->\\n\",\n    \"\\t\\t{error, {block_height_mismatch, HeightVal, CurrentHeight}}\\n\",\n    \"end.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 54,\n   \"id\": \"ce6f6345\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:48:10.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:48:10.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:48:10.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:48:10.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"ok\\n\"\n      ]\n     },\n     \"execution_count\": 54,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"ok = case maps:get(<<\\\"network\\\">>, InfoMap, undefined) of\\n\",\n    \"\\tundefined ->\\n\",\n    \"\\t\\t{error, {missing_network, InfoMap}};\\n\",\n    \"\\tNetwork ->\\n\",\n    \"\\t\\tcase Network == list_to_binary(LocalnetNetworkName) of\\n\",\n    \"\\t\\t\\ttrue ->\\n\",\n    \"\\t\\t\\t\\tok;\\n\",\n    \"\\t\\t\\tfalse ->\\n\",\n    \"\\t\\t\\t\\t{error, {network_mismatch, Network, LocalnetNetworkName}}\\n\",\n    \"\\t\\tend\\n\",\n    \"end.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 55,\n   \"id\": \"c7c6835b\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:48:10.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:48:10.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:48:10.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:48:10.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"ok\\n\"\n      ]\n     },\n     \"execution_count\": 55,\n     \"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"Pending = HTTPGetJSONList(BaseUrl ++ \\\"/tx/pending\\\"),\\n\",\n    \"ok = case lists:all(\\n\",\n    \"\\tfun(PendingItem) ->\\n\",\n    \"\\t\\tcase DecodeBase64(PendingItem) of\\n\",\n    \"\\t\\t\\t{ok, PendingDec} ->\\n\",\n    \"\\t\\t\\t\\tbyte_size(PendingDec) == 32;\\n\",\n    \"\\t\\t\\t{error, _} ->\\n\",\n    \"\\t\\t\\t\\tfalse\\n\",\n    \"\\t\\tend\\n\",\n    \"\\tend,\\n\",\n    \"\\tPending\\n\",\n    \") of\\n\",\n    \"\\ttrue ->\\n\",\n    \"\\t\\tok;\\n\",\n    \"\\tfalse ->\\n\",\n    \"\\t\\t{error, {pending_invalid_entries, Pending}}\\n\",\n    \"end.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"f1001a0d\",\n   \"metadata\": {},\n   \"source\": [\n    \"## Cleanup\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"5261b331\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Restore overridden parameters\\n\",\n    \"\\n\",\n    \"Restores `redenomination_threshold`, `redenomination_delay_blocks`, and `locked_rewards_blocks` to the values saved before overriding.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 56,\n   \"id\": \"53246420\",\n   \"metadata\": {\n    \"execution\": {\n     \"iopub.execute_input\": \"2026-02-19T20:48:10.000000Z\",\n     \"iopub.status.busy\": \"2026-02-19T20:48:10.000000Z\",\n     \"iopub.status.idle\": \"2026-02-19T20:48:10.000000Z\",\n     \"shell.execute_reply\": \"2026-02-19T20:48:10.000000Z\"\n    }\n   },\n   \"outputs\": [\n    {\n     \"data\": {\n      \"text/plain\": [\n       \"ok\\n\"\n      ]\n     },\n     \"execution_count\": 56,\n     
\"metadata\": {},\n     \"output_type\": \"execute_result\"\n    }\n   ],\n   \"source\": [\n    \"RestoreEnv =\\n\",\n    \"\\tfun(Key, Prev) ->\\n\",\n    \"\\t\\tcase Prev of\\n\",\n    \"\\t\\t\\t{ok, Value} ->\\n\",\n    \"\\t\\t\\t\\tRPCCall(application, set_env, [arweave, Key, Value]);\\n\",\n    \"\\t\\t\\tundefined ->\\n\",\n    \"\\t\\t\\t\\tRPCCall(application, unset_env, [arweave, Key]);\\n\",\n    \"\\t\\t\\t_ ->\\n\",\n    \"\\t\\t\\t\\tRPCCall(application, unset_env, [arweave, Key])\\n\",\n    \"\\t\\tend\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"RestoreEnv(redenomination_threshold, PrevRedenomThreshold),\\n\",\n    \"RestoreEnv(redenomination_delay_blocks, PrevRedenomDelay),\\n\",\n    \"RestoreEnv(locked_rewards_blocks, PrevLockedRewards),\\n\",\n    \"\\n\",\n    \"ok.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"57f813d8-1961-42b1-a72d-5fba85538df6\",\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": []\n  }\n ],\n \"metadata\": {\n  \"kernelspec\": {\n   \"display_name\": \"Erlang\",\n   \"language\": \"erlang\",\n   \"name\": \"erlang\"\n  },\n  \"language_info\": {\n   \"file_extension\": \".erl\",\n   \"name\": \"erlang\",\n   \"version\": \"26.2.1\"\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}\n"
  },
  {
    "path": "notebooks/pricing_transition_localnet.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"089ad431\",\n   \"metadata\": {},\n   \"source\": [\n    \"# Pricing Transition on Localnet\\n\",\n    \"\\n\",\n    \"## Overview\\n\",\n    \"\\n\",\n    \"The Arweave protocol transitions to the new dynamic (v2) pricing over\\n\",\n    \"a multi-phased transition period. The final phase — the **2.7.2 transition** — linearly\\n\",\n    \"interpolates between a cap price (340 Winston/GiB-minute) and the dynamically computed\\n\",\n    \"v2 price over 518,400 blocks (~24 months).\\n\",\n    \"\\n\",\n    \"This notebook validates pricing behavior around the **transition end** — the first block\\n\",\n    \"where pure v2 pricing takes effect with no interpolation or bounds. The localnet snapshot\\n\",\n    \"starts 5 blocks before the transition boundary.\\n\",\n    \"\\n\",\n    \"**Sections:**\\n\",\n    \"1. **Setup** – connect to the node, define RPC helpers and dollar-price conversions.\\n\",\n    \"2. **Transition window** – query the transition parameters and confirm heights.\\n\",\n    \"3. **Mine past transition** – mine ~40 blocks to cross the boundary and then trigger the first\\n\",\n    \"   post-transition price adjustment.\\n\",\n    \"4. **Fetch & display** – collect block pricing data and print a table with\\n\",\n    \"   $\\\\$/GiB$ upload costs (assuming $10/AR).\\n\",\n    \"5. **Validate `is_v2_pricing_height`** – the flag transitions at the right height.\\n\",\n    \"6. **Validate interpolation** – pre-transition target prices follow the interpolation formula.\\n\",\n    \"7. **Validate V2 pricing** – post-transition target prices equal the raw v2 price.\\n\",\n    \"8. **Validate continuity** – no sudden price jump at the boundary.\\n\",\n    \"9. 
**Validate block field evolution** – `price_per_gib_minute` and\\n\",\n    \"   `scheduled_price_per_gib_minute` evolve per the EMA recalculation rule.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"1f4cab16\",\n   \"metadata\": {},\n   \"source\": [\n    \"## Setup\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"356bee12\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Connect to the localnet node\\n\",\n    \"\\n\",\n    \"Starts a distributed Erlang node with long names, sets the cookie to `localnet`,\\n\",\n    \"and pings `main-localnet@127.0.0.1` to confirm connectivity.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"47493a69\",\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"Cookie = 'localnet',\\n\",\n    \"Node = 'main-localnet@127.0.0.1',\\n\",\n    \"\\n\",\n    \"HostHasDot =\\n\",\n    \"\\tcase string:split(atom_to_list(node()), \\\"@\\\") of\\n\",\n    \"\\t\\t[_Name, Host] ->\\n\",\n    \"\\t\\t\\tcase string:find(Host, \\\".\\\") of\\n\",\n    \"\\t\\t\\t\\tnomatch ->\\n\",\n    \"\\t\\t\\t\\t\\tfalse;\\n\",\n    \"\\t\\t\\t\\t_ ->\\n\",\n    \"\\t\\t\\t\\t\\ttrue\\n\",\n    \"\\t\\t\\tend;\\n\",\n    \"\\t\\t_ ->\\n\",\n    \"\\t\\t\\tfalse\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"_ =\\n\",\n    \"\\tcase {node(), HostHasDot} of\\n\",\n    \"\\t\\t{nonode@nohost, _} ->\\n\",\n    \"\\t\\t\\tnet_kernel:start([list_to_atom(\\\"pricing_notebook@127.0.0.1\\\"), longnames]);\\n\",\n    \"\\t\\t{_, true} ->\\n\",\n    \"\\t\\t\\tok;\\n\",\n    \"\\t\\t{_, false} ->\\n\",\n    \"\\t\\t\\tnet_kernel:stop(),\\n\",\n    \"\\t\\t\\tnet_kernel:start([list_to_atom(\\\"pricing_notebook@127.0.0.1\\\"), longnames])\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"true = erlang:set_cookie(node(), Cookie),\\n\",\n    \"ok.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"4c084bc1\",\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"{Node, pong} = {Node, net_adm:ping(Node)},\\n\",\n    \"ok.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"b4553ce6\",\n   \"metadata\": {},\n   \"source\": [\n    \"### RPC and mining helpers\\n\",\n    \"\\n\",\n    \"- `RPCCall(M, F, A)` – calls `M:F(A)` on the remote node (30 s timeout).\\n\",\n    \"- `RPCHeight()` – current block height.\\n\",\n    \"- `MineUntilHeight(H)` – asks localnet to mine to height H and polls until reached.\\n\",\n    \"- `RPCBlockByHeight(H)` – reads the block record at height H.\\n\",\n    \"- `RPCGetPricePerGiBMinute(H, Block)` – `ar_pricing:get_price_per_gib_minute/2`.\\n\",\n    \"- `RPCGetV2PricePerGiBMinute(H, Block)` – `ar_pricing:get_v2_price_per_gib_minute/2`.\\n\",\n    \"- `RPCIsV2PricingHeight(H)` – `ar_pricing_transition:is_v2_pricing_height/1`.\\n\",\n    \"- `RPCGetTxFee(Size, Price, Kryder, H)` – `ar_pricing:get_tx_fee/1`.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"0d539cd3\",\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"RPCCall = fun(M, F, A) -> rpc:call(Node, M, F, A, 30000) end,\\n\",\n    \"RPCHeight = fun() -> RPCCall(ar_node, get_height, []) end,\\n\",\n    \"\\n\",\n    \"WaitForHeight =\\n\",\n    \"\\tfun\\n\",\n    \"\\t\\t(_, _TargetHeight, 0) ->\\n\",\n    \"\\t\\t\\terror(mine_until_height_timeout);\\n\",\n    \"\\t\\t(Self, TargetHeight, AttemptsLeft) ->\\n\",\n    \"\\t\\t\\tcase RPCHeight() >= TargetHeight of\\n\",\n    
\"\\t\\t\\t\\ttrue ->\\n\",\n    \"\\t\\t\\t\\t\\tok;\\n\",\n    \"\\t\\t\\t\\tfalse ->\\n\",\n    \"\\t\\t\\t\\t\\ttimer:sleep(100),\\n\",\n    \"\\t\\t\\t\\t\\tSelf(Self, TargetHeight, AttemptsLeft - 1)\\n\",\n    \"\\t\\t\\tend\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"MineUntilHeight =\\n\",\n    \"\\tfun(TargetHeight) ->\\n\",\n    \"\\t\\tMineResult = RPCCall(ar_localnet, mine_until_height, [TargetHeight]),\\n\",\n    \"\\t\\tok =\\n\",\n    \"\\t\\t\\tcase MineResult of\\n\",\n    \"\\t\\t\\t\\tok ->\\n\",\n    \"\\t\\t\\t\\t\\tok;\\n\",\n    \"\\t\\t\\t\\t[] ->\\n\",\n    \"\\t\\t\\t\\t\\tok;\\n\",\n    \"\\t\\t\\t\\tOther ->\\n\",\n    \"\\t\\t\\t\\t\\terror({unexpected_mine_until_height_result, Other})\\n\",\n    \"\\t\\t\\tend,\\n\",\n    \"\\t\\tWaitForHeight(WaitForHeight, TargetHeight, 6000)\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"RPCBlockHashByHeight =\\n\",\n    \"\\tfun(H) ->\\n\",\n    \"\\t\\tRPCCall(ar_block_index, get_element_by_height, [H])\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"RPCBlockByHeight =\\n\",\n    \"\\tfun(H) ->\\n\",\n    \"\\t\\tHash = RPCBlockHashByHeight(H),\\n\",\n    \"\\t\\tRPCCall(ar_storage, read_block, [Hash])\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"RPCIsV2PricingHeight =\\n\",\n    \"\\tfun(H) ->\\n\",\n    \"\\t\\tRPCCall(ar_pricing_transition, is_v2_pricing_height, [H])\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"RPCGetTxFee =\\n\",\n    \"\\tfun(DataSize, Price, Kryder, H) ->\\n\",\n    \"\\t\\tRPCCall(ar_pricing, get_tx_fee, [{DataSize, Price, Kryder, H}])\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"ok.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"e87bc3d3\",\n   \"metadata\": {},\n   \"source\": [\n    \"### Record accessors and dollar-price helpers\\n\",\n    \"\\n\",\n    \"Compiles `nb_block` at runtime for record field access. 
Defines HTTP helpers\\n\",\n    \"and dollar conversion assuming **$10/AR**:\\n\",\n    \"\\n\",\n    \"- `WinstonToUSD(W, D)` – converts Winston to USD at denomination `D`.\\n\",\n    \"  Formula: $$W × $10 / (10^{12} × 1000^{D−1})$$\\n\",\n    \"- `UploadCostUSD(Price, Kryder, H, D)` – the cost to upload 1 GiB in USD.\\n\",\n    \"  Uses `ar_pricing:get_tx_fee/1` which accounts for perpetual storage\\n\",\n    \"  (200+ years with 0.5 %/year decay), 20 replicas, Kryder+ rate, and the 5 % miner share.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"8c785044\",\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"TmpDir = \\\".tmp/notebooks/\\\",\\n\",\n    \"ok = filelib:ensure_dir(filename:join([TmpDir, \\\"keep\\\"])),\\n\",\n    \"\\n\",\n    \"CompileModule =\\n\",\n    \"\\tfun(Name, Source) ->\\n\",\n    \"\\t\\tPath = filename:join([TmpDir, Name ++ \\\".erl\\\"]),\\n\",\n    \"\\t\\tok = file:write_file(Path, Source),\\n\",\n    \"\\t\\t{ok, Module, Bin} = compile:file(Path, [binary]),\\n\",\n    \"\\t\\t{module, Module} = code:load_binary(Module, Path, Bin)\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"BlockAccessors =\\n\",\n    \"\\tlists:flatten([\\n\",\n    \"\\t\\t\\\"-module(nb_block).\\\\n\\\",\\n\",\n    \"\\t\\t\\\"-export([height/1, price_per_gib_minute/1, scheduled_price_per_gib_minute/1,\\\\n\\\",\\n\",\n    \"\\t\\t\\\"         denomination/1, kryder_plus_rate_multiplier/1]).\\\\n\\\",\\n\",\n    \"\\t\\t\\\"-include_lib(\\\\\\\"arweave/include/ar.hrl\\\\\\\").\\\\n\\\",\\n\",\n    \"\\t\\t\\\"height(B) -> B#block.height.\\\\n\\\",\\n\",\n    \"\\t\\t\\\"price_per_gib_minute(B) -> B#block.price_per_gib_minute.\\\\n\\\",\\n\",\n    \"\\t\\t\\\"scheduled_price_per_gib_minute(B) -> B#block.scheduled_price_per_gib_minute.\\\\n\\\",\\n\",\n    \"\\t\\t\\\"denomination(B) -> B#block.denomination.\\\\n\\\",\\n\",\n    \"\\t\\t\\\"kryder_plus_rate_multiplier(B) -> B#block.kryder_plus_rate_multiplier.\\\\n\\\"\\n\",\n    \"\\t]),\\n\",\n    \"\\n\",\n    \"CompileModule(\\\"nb_block\\\", BlockAccessors),\\n\",\n    \"\\n\",\n    \"%% Remote helper: runs on the node to avoid sending block records over RPC.\\n\",\n    \"%% get_target_and_v2/1 reads the previous block from local storage\\n\",\n    \"%% and computes both the target price and the v2 price on-node.\\n\",\n    \"RemotePricingHelper =\\n\",\n    \"\\tlists:flatten([\\n\",\n    \"\\t\\t\\\"-module(nb_remote_pricing).\\\\n\\\",\\n\",\n    \"\\t\\t\\\"-export([get_target_and_v2/1]).\\\\n\\\",\\n\",\n    \"\\t\\t\\\"-include_lib(\\\\\\\"arweave/include/ar.hrl\\\\\\\").\\\\n\\\",\\n\",\n    \"\\t\\t\\\"get_target_and_v2(Height) ->\\\\n\\\",\\n\",\n    \"\\t\\t\\\"    PrevHash = element(1, ar_block_index:get_element_by_height(Height - 1)),\\\\n\\\",\\n\",\n    \"\\t\\t\\\"    PrevBlock = ar_block_cache:get(block_cache, PrevHash),\\\\n\\\",\\n\",\n    \"\\t\\t\\\"    case PrevBlock of\\\\n\\\",\\n\",\n    \"\\t\\t\\\"        not_found ->\\\\n\\\",\\n\",\n    \"\\t\\t\\\"            {error, error};\\\\n\\\",\\n\",\n    \"\\t\\t\\\"        _ ->\\\\n\\\",\\n\",\n    \"\\t\\t\\\"            V2 = try ar_pricing:get_v2_price_per_gib_minute(Height, PrevBlock)\\\\n\\\",\\n\",\n    \"\\t\\t\\\"                 catch _:_ -> error end,\\\\n\\\",\\n\",\n    \"\\t\\t\\\"            Target = try ar_pricing:get_price_per_gib_minute(Height, PrevBlock)\\\\n\\\",\\n\",\n    \"\\t\\t\\\"                    catch _:_ -> error end,\\\\n\\\",\\n\",\n    \"\\t\\t\\\"            {Target, 
V2}\\\\n\\\",\\n\",\n    \"\\t\\t\\\"    end.\\\\n\\\"\\n\",\n    \"\\t]),\\n\",\n    \"RemotePricingPath = filename:join([TmpDir, \\\"nb_remote_pricing.erl\\\"]),\\n\",\n    \"ok = file:write_file(RemotePricingPath, RemotePricingHelper),\\n\",\n    \"{ok, nb_remote_pricing, RemotePricingBin} = compile:file(RemotePricingPath, [binary]),\\n\",\n    \"{module, nb_remote_pricing} = code:load_binary(nb_remote_pricing, RemotePricingPath, RemotePricingBin),\\n\",\n    \"%% Load on the remote node\\n\",\n    \"{module, nb_remote_pricing} =\\n\",\n    \"\\tRPCCall(code, load_binary, [nb_remote_pricing, RemotePricingPath, RemotePricingBin]),\\n\",\n    \"\\n\",\n    \"RPCGetTargetAndV2 =\\n\",\n    \"\\tfun(H) ->\\n\",\n    \"\\t\\tRPCCall(nb_remote_pricing, get_target_and_v2, [H])\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"{ok, _} = application:ensure_all_started(inets),\\n\",\n    \"\\n\",\n    \"LocalnetHTTPHost =\\n\",\n    \"\\tcase os:getenv(\\\"LOCALNET_HTTP_HOST\\\") of\\n\",\n    \"\\t\\tfalse ->\\n\",\n    \"\\t\\t\\t\\\"127.0.0.1\\\";\\n\",\n    \"\\t\\tV ->\\n\",\n    \"\\t\\t\\tV\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"LocalnetHTTPPort =\\n\",\n    \"\\tcase os:getenv(\\\"LOCALNET_HTTP_PORT\\\") of\\n\",\n    \"\\t\\tfalse ->\\n\",\n    \"\\t\\t\\t\\\"1984\\\";\\n\",\n    \"\\t\\tV ->\\n\",\n    \"\\t\\t\\tV\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"LocalnetNetworkName =\\n\",\n    \"\\tcase os:getenv(\\\"LOCALNET_NETWORK_NAME\\\") of\\n\",\n    \"\\t\\tfalse ->\\n\",\n    \"\\t\\t\\t\\\"arweave.localnet\\\";\\n\",\n    \"\\t\\tV ->\\n\",\n    \"\\t\\t\\tV\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"BaseUrl = \\\"http://\\\" ++ LocalnetHTTPHost ++ \\\":\\\" ++ LocalnetHTTPPort,\\n\",\n    \"\\n\",\n    \"HTTPGet =\\n\",\n    \"\\tfun(Url) ->\\n\",\n    \"\\t\\tHeaders = [{\\\"x-network\\\", LocalnetNetworkName}],\\n\",\n    \"\\t\\tcase httpc:request(get, {Url, Headers}, [], []) of\\n\",\n    \"\\t\\t\\t{ok, {{_, 200, _}, _, Body}} ->\\n\",\n    \"\\t\\t\\t\\t{ok, Body};\\n\",\n    \"\\t\\t\\t{ok, {{_, Status, _}, _, Body}} ->\\n\",\n    \"\\t\\t\\t\\t{error, {http_status, Status, Body}};\\n\",\n    \"\\t\\t\\tError ->\\n\",\n    \"\\t\\t\\t\\t{error, Error}\\n\",\n    \"\\t\\tend\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"HTTPGetInteger =\\n\",\n    \"\\tfun(Url) ->\\n\",\n    \"\\t\\tcase HTTPGet(Url) of\\n\",\n    \"\\t\\t\\t{ok, Body} ->\\n\",\n    \"\\t\\t\\t\\tbinary_to_integer(iolist_to_binary(Body));\\n\",\n    \"\\t\\t\\t{error, Reason} ->\\n\",\n    \"\\t\\t\\t\\t{error, Reason}\\n\",\n    \"\\t\\tend\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"GiB = 1073741824,\\n\",\n    \"\\n\",\n    \"WinstonToUSD =\\n\",\n    \"\\tfun(Winston, Denom) ->\\n\",\n    \"\\t\\tPow = lists:foldl(fun(_, Acc) -> Acc * 1000 end, 1, lists:seq(1, Denom - 1)),\\n\",\n    \"\\t\\tWinston * 10.0 / (1000000000000 * Pow)\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"UploadCostUSD =\\n\",\n    \"\\tfun(Price, Kryder, H, Denom) ->\\n\",\n    \"\\t\\tTxFee = RPCGetTxFee(GiB, Price, Kryder, H),\\n\",\n    \"\\t\\tWinstonToUSD(TxFee, Denom)\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"ok.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"00169a37\",\n   \"metadata\": {},\n   \"source\": [\n    \"## Transition Window\\n\",\n    \"\\n\",\n    \"Queries the 2.7.2 pricing transition parameters via RPC.\\n\",\n    \"\\n\",\n    \"The 2.7.2 transition linearly interpolates between a **start price** (the 2.7.2 cap =\\n\",\n    \"340 Winston/GiB-minute) and the 
dynamic v2 price. The start price is obtained by\\n\",\n    \"evaluating `get_transition_price(TransitionStart, 0)`: at the transition start all weight\\n\",\n    \"is on the start price, so passing a v2 price of 0 yields the start price exactly.\\n\",\n    \"\\n\",\n    \"**Queried values:**\\n\",\n    \"- `ar_pricing_transition:transition_start_2_7_2()` – first height of the 2.7.2 transition.\\n\",\n    \"- `ar_pricing_transition:transition_length_2_7_2()` – number of transition blocks.\\n\",\n    \"- Transition end = start + length (first block with pure v2 pricing).\\n\",\n    \"- `PRICE_ADJUSTMENT_FREQUENCY` = 50 blocks (production/localnet value).\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"4a09108e\",\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"TransitionStart = RPCCall(ar_pricing_transition, transition_start_2_7_2, []),\\n\",\n    \"TransitionLength = RPCCall(ar_pricing_transition, transition_length_2_7_2, []),\\n\",\n    \"TransitionEnd = TransitionStart + TransitionLength,\\n\",\n    \"Height0 = RPCHeight(),\\n\",\n    \"\\n\",\n    \"TransitionStartPrice =\\n\",\n    \"\\tRPCCall(ar_pricing_transition, get_transition_price, [TransitionStart, 0]),\\n\",\n    \"\\n\",\n    \"PriceAdjustFreq = 50,\\n\",\n    \"FirstPostAdjust =\\n\",\n    \"\\tcase TransitionEnd rem PriceAdjustFreq of\\n\",\n    \"\\t\\t0 ->\\n\",\n    \"\\t\\t\\tTransitionEnd;\\n\",\n    \"\\t\\t_ ->\\n\",\n    \"\\t\\t\\t((TransitionEnd div PriceAdjustFreq) + 1) * PriceAdjustFreq\\n\",\n    \"\\tend,\\n\",\n    \"\\n\",\n    \"true = is_integer(TransitionStart),\\n\",\n    \"true = is_integer(TransitionLength),\\n\",\n    \"true = is_integer(TransitionEnd),\\n\",\n    \"true = (TransitionLength > 0),\\n\",\n    \"true = (TransitionEnd == TransitionStart + TransitionLength),\\n\",\n    \"340 = TransitionStartPrice,\\n\",\n    \"SnapshotBlocksBeforeEnd = TransitionEnd - Height0,\\n\",\n    \"5 = SnapshotBlocksBeforeEnd,\\n\",\n    \"\\n\",\n    \"io:format(\\\"  Transition start (2.7.2):       ~p~n\\\", [TransitionStart]),\\n\",\n    \"io:format(\\\"  Transition length:              ~p blocks~n\\\", [TransitionLength]),\\n\",\n    \"io:format(\\\"  Transition end:                 ~p~n\\\", [TransitionEnd]),\\n\",\n    \"io:format(\\\"  Transition start price:         ~p Winston/GiB-min~n\\\", [TransitionStartPrice]),\\n\",\n    \"io:format(\\\"  Current height:                 ~p~n\\\", [Height0]),\\n\",\n    \"io:format(\\\"  Blocks until transition end:    ~p~n\\\", [SnapshotBlocksBeforeEnd]),\\n\",\n    \"io:format(\\\"  Price adjustment frequency:     ~p blocks~n\\\", [PriceAdjustFreq]),\\n\",\n    \"io:format(\\\"  First post-transition adjust:   ~p~n\\\", [FirstPostAdjust]),\\n\",\n    \"ok.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"faf2a84d\",\n   \"metadata\": {},\n   \"source\": [\n    \"## Mine Past Transition End\\n\",\n    \"\\n\",\n    \"The snapshot starts 5 blocks before the transition end. 
We mine past the boundary and\\n\",\n    \"past the first post-transition price-adjustment height so we can validate the EMA\\n\",\n    \"recalculation under pure v2 pricing.\\n\",\n    \"\\n\",\n    \"**Mined blocks:** from the current height to `FirstPostAdjust + 5` (~40 blocks).\\n\",\n    \"**Submitted txs:** none.\\n\",\n    \"**Expected:** mining succeeds without errors through the transition boundary.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"f1ce445b\",\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"MineTarget = FirstPostAdjust + 5,\\n\",\n    \"io:format(\\\"  Mining from ~p to ~p (~p blocks)...~n\\\",\\n\",\n    \"\\t[Height0, MineTarget, MineTarget - Height0]),\\n\",\n    \"ok = MineUntilHeight(MineTarget),\\n\",\n    \"HeightAfterMine = RPCHeight(),\\n\",\n    \"true = (HeightAfterMine >= MineTarget),\\n\",\n    \"io:format(\\\"  Done. Current height: ~p~n\\\", [HeightAfterMine]),\\n\",\n    \"ok.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"59f5936c\",\n   \"metadata\": {},\n   \"source\": [\n    \"## Fetch and Display Pricing Data\\n\",\n    \"\\n\",\n    \"Fetches block records from `TransitionEnd - 5` to `MineTarget` and builds a table\\n\",\n    \"with the following columns:\\n\",\n    \"\\n\",\n    \"| Column | Source |\\n\",\n    \"|--------|--------|\\n\",\n    \"| **Height** | block height |\\n\",\n    \"| **Price** | `block.price_per_gib_minute` (stored, EMA-smoothed; updates every 50 blocks) |\\n\",\n    \"| **Scheduled Price** | `block.scheduled_price_per_gib_minute` (next value for Price at the next adjustment) |\\n\",\n    \"| **Target** | `ar_pricing:get_price_per_gib_minute(H, PrevBlock)` — the price at the given height adjusted for the transition |\\n\",\n    \"| **V2 Price** | `ar_pricing:get_v2_price_per_gib_minute(H, PrevBlock)` — raw dynamic price (no transition) |\\n\",\n    \"| **V2?** | `ar_pricing_transition:is_v2_pricing_height(H)` |\\n\",\n    \"| $\\\\mathbf{\\\\$/GiB}$| Upload cost for 1 GiB in USD at $\\\\$10/AR$ (via `get_tx_fee`, includes decay, 20 replicas, miner share) |\\n\",\n    \"\\n\",\n    \"Why `price_per_gib_minute` can look small: it is a **Winston per GiB-minute unit rate**, not a direct upload fee. 
The upload fee path annualizes this rate and applies replication, Kryder+, and miner-share factors in `get_tx_fee/1`, so a single-digit GiB-minute rate can still produce a meaningful `$ / GiB` upload cost.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"7cc608c1\",\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"RangeStart = TransitionEnd - 5,\\n\",\n    \"RangeEnd = MineTarget,\\n\",\n    \"true = (RangeStart =< RangeEnd),\\n\",\n    \"\\n\",\n    \"Blocks = maps:from_list(\\n\",\n    \"\\t[{H, RPCBlockByHeight(H)} || H <- lists:seq(RangeStart, RangeEnd)]\\n\",\n    \"),\\n\",\n    \"\\n\",\n    \"Heights = lists:seq(RangeStart, RangeEnd),\\n\",\n    \"true = (maps:size(Blocks) == length(Heights)),\\n\",\n    \"\\n\",\n    \"Rows = lists:map(\\n\",\n    \"\\tfun(H) ->\\n\",\n    \"\\t\\tBlock = maps:get(H, Blocks),\\n\",\n    \"\\t\\tPrice = nb_block:price_per_gib_minute(Block),\\n\",\n    \"\\t\\tScheduled = nb_block:scheduled_price_per_gib_minute(Block),\\n\",\n    \"\\t\\tDenom = nb_block:denomination(Block),\\n\",\n    \"\\t\\tKryder = nb_block:kryder_plus_rate_multiplier(Block),\\n\",\n    \"\\t\\tIsV2 = RPCIsV2PricingHeight(H),\\n\",\n    \"\\t\\t{TargetPrice, V2Price} = RPCGetTargetAndV2(H),\\n\",\n    \"\\t\\tUSD = UploadCostUSD(Price, Kryder, H, Denom),\\n\",\n    \"\\t\\t#{ height => H, price => Price, scheduled => Scheduled,\\n\",\n    \"\\t\\t  denomination => Denom, kryder => Kryder,\\n\",\n    \"\\t\\t  is_v2 => IsV2, v2_price => V2Price, target => TargetPrice,\\n\",\n    \"\\t\\t  upload_usd => USD }\\n\",\n    \"\\tend,\\n\",\n    \"\\tHeights),\\n\",\n    \"\\n\",\n    \"TargetErrors = [maps:get(height, R) || R <- Rows, maps:get(target, R) == error],\\n\",\n    \"V2Errors = [maps:get(height, R) || R <- Rows, maps:get(v2_price, R) == error],\\n\",\n    \"[] = TargetErrors,\\n\",\n    \"[] = V2Errors,\\n\",\n    \"\\n\",\n    \"io:format(\\\"~n~10s | ~6s | ~6s | ~7s | ~7s | ~3s | ~s~n\\\",\\n\",\n    \"\\t[\\\"Height\\\", \\\"Price\\\", \\\"Sched\\\", \\\"Target\\\", \\\"V2\\\", \\\"V2?\\\", \\\"$/GiB upload\\\"]),\\n\",\n    \"io:format(\\\"~s~n\\\", [lists:duplicate(72, $-)]),\\n\",\n    \"\\n\",\n    \"lists:foreach(\\n\",\n    \"\\tfun(Row) ->\\n\",\n    \"\\t\\t#{height := H, price := P, scheduled := S, target := T,\\n\",\n    \"\\t\\t  v2_price := V2, is_v2 := IV2, upload_usd := U} = Row,\\n\",\n    \"\\t\\tV2Flag =\\n\",\n    \"\\t\\t\\tcase IV2 of\\n\",\n    \"\\t\\t\\t\\ttrue ->\\n\",\n    \"\\t\\t\\t\\t\\t\\\"yes\\\";\\n\",\n    \"\\t\\t\\t\\tfalse ->\\n\",\n    \"\\t\\t\\t\\t\\t\\\"no \\\"\\n\",\n    \"\\t\\t\\tend,\\n\",\n    \"\\t\\tMark =\\n\",\n    \"\\t\\t\\tcase H of\\n\",\n    \"\\t\\t\\t\\t_ when H == TransitionEnd ->\\n\",\n    \"\\t\\t\\t\\t\\t\\\"  <-- transition end\\\";\\n\",\n    \"\\t\\t\\t\\t_ when H == FirstPostAdjust ->\\n\",\n    \"\\t\\t\\t\\t\\t\\\"  <-- 1st post-adjust\\\";\\n\",\n    \"\\t\\t\\t\\t_ ->\\n\",\n    \"\\t\\t\\t\\t\\t\\\"\\\"\\n\",\n    \"\\t\\t\\tend,\\n\",\n    \"\\t\\tFmtInt =\\n\",\n    \"\\t\\t\\tfun(error) -> \\\"err\\\";\\n\",\n    \"\\t\\t\\t   (N) -> integer_to_list(N)\\n\",\n    \"\\t\\t\\tend,\\n\",\n    \"\\t\\tUStr = lists:flatten(io_lib:format(\\\"$~.4f\\\", [U])),\\n\",\n    \"\\t\\tio:format(\\\"~10s | ~6s | ~6s | ~7s | ~7s | ~3s | ~s~s~n\\\",\\n\",\n    \"\\t\\t\\t[integer_to_list(H), integer_to_list(P), integer_to_list(S),\\n\",\n    \"\\t\\t\\t FmtInt(T), FmtInt(V2), V2Flag, UStr, Mark])\\n\",\n    \"\\tend,\\n\",\n    \"\\tRows),\\n\",\n    
\"\\n\",\n    \"FirstRow = hd(Rows),\\n\",\n    \"Price0 = maps:get(price, FirstRow),\\n\",\n    \"Kryder0 = maps:get(kryder, FirstRow),\\n\",\n    \"HeightPrice0 = maps:get(height, FirstRow),\\n\",\n    \"Denom0 = maps:get(denomination, FirstRow),\\n\",\n    \"UploadWinston0 = RPCGetTxFee(GiB, Price0, Kryder0, HeightPrice0),\\n\",\n    \"UploadUSD0 = UploadCostUSD(Price0, Kryder0, HeightPrice0, Denom0),\\n\",\n    \"io:format(\\\"~n  Sample at height ~p: price_per_gib_minute=~p, fee_1GiB=~p Winston (~.4f USD)~n\\\",\\n\",\n    \"\\t[HeightPrice0, Price0, UploadWinston0, UploadUSD0]),\\n\",\n    \"true = (UploadWinston0 > 0),\\n\",\n    \"\\n\",\n    \"ok.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"fdcb9357\",\n   \"metadata\": {},\n   \"source\": [\n    \"## Validate `is_v2_pricing_height`\\n\",\n    \"\\n\",\n    \"`is_v2_pricing_height(H)` must be `false` for all `H < TransitionEnd`\\n\",\n    \"and `true` for all `H >= TransitionEnd`.\\n\",\n    \"\\n\",\n    \"**Expected:** the flag transitions exactly at `TransitionEnd`.\\n\",\n    \"**Queried:** `ar_pricing_transition:is_v2_pricing_height/1` via RPC (already fetched in `Rows`).\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"5409c33d\",\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"PreV2Rows = [R || R <- Rows, maps:get(height, R) < TransitionEnd],\\n\",\n    \"PostV2Rows = [R || R <- Rows, maps:get(height, R) >= TransitionEnd],\\n\",\n    \"true = (length(PreV2Rows) > 0),\\n\",\n    \"true = (length(PostV2Rows) > 0),\\n\",\n    \"\\n\",\n    \"true = lists:all(\\n\",\n    \"\\tfun(R) -> maps:get(is_v2, R) == false end,\\n\",\n    \"\\tPreV2Rows\\n\",\n    \"),\\n\",\n    \"\\n\",\n    \"true = lists:all(\\n\",\n    \"\\tfun(R) -> maps:get(is_v2, R) == true end,\\n\",\n    \"\\tPostV2Rows\\n\",\n    \"),\\n\",\n    \"\\n\",\n    \"io:format(\\\"  All pre-transition heights:  is_v2 = false  [OK]~n\\\"),\\n\",\n    \"io:format(\\\"  All post-transition heights: is_v2 = true   [OK]~n\\\"),\\n\",\n    \"ok.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"0a1b454e\",\n   \"metadata\": {},\n   \"source\": [\n    \"## Validate Pre-Transition Interpolation\\n\",\n    \"\\n\",\n    \"For every height `H < TransitionEnd`, the target price returned by\\n\",\n    \"`ar_pricing:get_price_per_gib_minute(H, PrevBlock)` must match the interpolation\\n\",\n    \"formula used inside `get_transition_price`:\\n\",\n    \"\\n\",\n    \"```\\n\",\n    \"Interval1 = H - TransitionStart\\n\",\n    \"Interval2 = TransitionEnd - H\\n\",\n    \"Expected  = (StartPrice * Interval2 + V2Price * Interval1) div (Interval1 + Interval2)\\n\",\n    \"```\\n\",\n    \"\\n\",\n    \"where `StartPrice` = 340 (the 2.7.2 cap) and `V2Price = get_v2_price_per_gib_minute(H, PrevBlock)`.\\n\",\n    \"\\n\",\n    \"During the 2.7.2 transition, bounds are `[0, infinity)`, so the `between` clamp is a no-op.\\n\",\n    \"\\n\",\n    \"Expectation note: this catches interpolation/transition arithmetic issues in\\n\",\n    \"`get_price_per_gib_minute/2`, but still depends on `get_v2_price_per_gib_minute/2`\\n\",\n    \"for the V2 component.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"929a34c3\",\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"PreRows = [R || R <- Rows, maps:get(height, R) < TransitionEnd],\\n\",\n    \"PreRowsValid = [R || R <- PreRows,\\n\",\n    \"\\tmaps:get(target, R) /= error, 
maps:get(v2_price, R) /= error],\\n\",\n    \"true = (length(PreRows) > 0),\\n\",\n    \"true = (length(PreRowsValid) > 0),\\n\",\n    \"\\n\",\n    \"lists:foreach(\\n\",\n    \"\\tfun(Row) ->\\n\",\n    \"\\t\\tH = maps:get(height, Row),\\n\",\n    \"\\t\\tV2 = maps:get(v2_price, Row),\\n\",\n    \"\\t\\tTarget = maps:get(target, Row),\\n\",\n    \"\\t\\tInterval1 = H - TransitionStart,\\n\",\n    \"\\t\\tInterval2 = TransitionEnd - H,\\n\",\n    \"\\t\\tExpected = (TransitionStartPrice * Interval2 + V2 * Interval1)\\n\",\n    \"\\t\\t\\tdiv (Interval1 + Interval2),\\n\",\n    \"\\t\\tcase Target == Expected of\\n\",\n    \"\\t\\t\\ttrue ->\\n\",\n    \"\\t\\t\\t\\tok;\\n\",\n    \"\\t\\t\\tfalse ->\\n\",\n    \"\\t\\t\\t\\terror({interpolation_mismatch, H, Target, Expected})\\n\",\n    \"\\t\\tend\\n\",\n    \"\\tend,\\n\",\n    \"\\tPreRowsValid),\\n\",\n    \"\\n\",\n    \"io:format(\\\"  ~p/~p pre-transition heights match interpolation formula  [OK]~n\\\",\\n\",\n    \"\\t[length(PreRowsValid), length(PreRows)]),\\n\",\n    \"ok.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"d8fee1e4\",\n   \"metadata\": {},\n   \"source\": [\n    \"## Validate Post-Transition V2 Pricing\\n\",\n    \"\\n\",\n    \"For every height `H >= TransitionEnd`, the transition is complete and\\n\",\n    \"`get_price_per_gib_minute(H, PrevBlock)` must equal\\n\",\n    \"`get_v2_price_per_gib_minute(H, PrevBlock)` — no interpolation, no bounds.\\n\",\n    \"\\n\",\n    \"Expectation note: this directly validates the transition handoff, but if both\\n\",\n    \"functions shared the same V2 defect it would not catch that defect by itself.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"c7f0ba6b\",\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"PostRows = [R || R <- Rows, maps:get(height, R) >= TransitionEnd],\\n\",\n    \"PostRowsValid = [R || R <- PostRows,\\n\",\n    \"\\tmaps:get(target, R) /= error, maps:get(v2_price, R) /= error],\\n\",\n    \"true = (length(PostRows) > 0),\\n\",\n    \"true = (length(PostRowsValid) > 0),\\n\",\n    \"\\n\",\n    \"lists:foreach(\\n\",\n    \"\\tfun(Row) ->\\n\",\n    \"\\t\\tH = maps:get(height, Row),\\n\",\n    \"\\t\\tV2 = maps:get(v2_price, Row),\\n\",\n    \"\\t\\tTarget = maps:get(target, Row),\\n\",\n    \"\\t\\tcase Target == V2 of\\n\",\n    \"\\t\\t\\ttrue ->\\n\",\n    \"\\t\\t\\t\\tok;\\n\",\n    \"\\t\\t\\tfalse ->\\n\",\n    \"\\t\\t\\t\\terror({v2_price_mismatch, H, Target, V2})\\n\",\n    \"\\t\\tend\\n\",\n    \"\\tend,\\n\",\n    \"\\tPostRowsValid),\\n\",\n    \"\\n\",\n    \"io:format(\\\"  ~p/~p post-transition heights: target == V2 price  [OK]~n\\\",\\n\",\n    \"\\t[length(PostRowsValid), length(PostRows)]),\\n\",\n    \"ok.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"4bb658c6\",\n   \"metadata\": {},\n   \"source\": [\n    \"## Validate Price Continuity at Transition Boundary\\n\",\n    \"\\n\",\n    \"At `TransitionEnd - 1` (last interpolated block), almost all weight is on V2Price:\\n\",\n    \"\\n\",\n    \"```\\n\",\n    \"weight_on_V2 = (TransitionLength - 1) / TransitionLength ~ 0.999998\\n\",\n    \"```\\n\",\n    \"\\n\",\n    \"The gap between the last interpolated price and the first pure-V2 price is at most\\n\",\n    \"`|StartPrice - V2Price| / TransitionLength`, which for `TransitionLength = 518,400`\\n\",\n    \"is negligible.\\n\",\n    \"\\n\",\n    \"**Expected:** relative price change < 0.1 % (1e-3).\\n\",\n    \"**Queried:** 
target prices from `Rows`.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"7f205bf4\",\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"LastPreValid = [R || R <- PreRows, maps:get(target, R) /= error],\\n\",\n    \"FirstPostValid = [R || R <- PostRows, maps:get(target, R) /= error],\\n\",\n    \"true = (length(LastPreValid) > 0),\\n\",\n    \"true = (length(FirstPostValid) > 0),\\n\",\n    \"\\n\",\n    \"LastPreTarget = maps:get(target, lists:last(LastPreValid)),\\n\",\n    \"FirstPostTarget = maps:get(target, hd(FirstPostValid)),\\n\",\n    \"Gap = abs(FirstPostTarget - LastPreTarget),\\n\",\n    \"RelGap = Gap / max(1, LastPreTarget),\\n\",\n    \"io:format(\\\"  Last pre-transition target:   ~p~n\\\", [LastPreTarget]),\\n\",\n    \"io:format(\\\"  First post-transition target:  ~p~n\\\", [FirstPostTarget]),\\n\",\n    \"io:format(\\\"  Absolute gap:                  ~p~n\\\", [Gap]),\\n\",\n    \"io:format(\\\"  Relative gap:                  ~.8f~n\\\", [RelGap]),\\n\",\n    \"true = RelGap < 0.001,\\n\",\n    \"io:format(\\\"  Relative gap < 0.1 %%  [OK]~n\\\"),\\n\",\n    \"ok.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"id\": \"ee9e099f\",\n   \"metadata\": {},\n   \"source\": [\n    \"## Validate Block Price Field Evolution\\n\",\n    \"\\n\",\n    \"The block's `price_per_gib_minute` field (stored on-chain) updates only at\\n\",\n    \"**price adjustment heights** (`Height rem 50 == 0`). The recalculation rule\\n\",\n    \"(post fork 2.7.1) is:\\n\",\n    \"\\n\",\n    \"```\\n\",\n    \"NewPrice     = PrevBlock.scheduled_price_per_gib_minute\\n\",\n    \"TargetPrice  = get_price_per_gib_minute(Height, PrevBlock)\\n\",\n    \"EMAPrice     = (9 * PrevScheduled + TargetPrice) div 10\\n\",\n    \"NewScheduled = max(PrevScheduled div 2, min(PrevScheduled * 2, EMAPrice))\\n\",\n    \"```\\n\",\n    \"\\n\",\n    \"Between adjustments both fields are unchanged.\\n\",\n    \"\\n\",\n    \"The nearest adjustment height **after** the transition end is **`FirstPostAdjust`**\\n\",\n    \"(= `TransitionEnd` rounded up to the next multiple of 50). 
At that height the\\n\",\n    \"target price is a pure v2 price for the first time — this verifies the\\n\",\n    \"recalculation correctly handles the transition boundary.\\n\",\n    \"\\n\",\n    \"Expectation note: this cell derives `ExpectedTargetPrice` from transition math +\\n\",\n    \"`v2_price`, then checks EMA against block fields, instead of reusing the already\\n\",\n    \"queried `target` value for the EMA expectation.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"55a1d0f5\",\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n    \"Pairs = lists:zip(lists:droplast(Rows), tl(Rows)),\\n\",\n    \"true = (length(Pairs) > 0),\\n\",\n    \"\\n\",\n    \"lists:foreach(\\n\",\n    \"\\tfun({PrevRow, CurrRow}) ->\\n\",\n    \"\\t\\tH = maps:get(height, CurrRow),\\n\",\n    \"\\t\\tPrice = maps:get(price, CurrRow),\\n\",\n    \"\\t\\tSched = maps:get(scheduled, CurrRow),\\n\",\n    \"\\t\\tPrevPrice = maps:get(price, PrevRow),\\n\",\n    \"\\t\\tPrevSched = maps:get(scheduled, PrevRow),\\n\",\n    \"\\t\\tIsAdjust = (H rem PriceAdjustFreq == 0),\\n\",\n    \"\\t\\tcase IsAdjust of\\n\",\n    \"\\t\\t\\tfalse ->\\n\",\n    \"\\t\\t\\t\\tcase {Price == PrevPrice, Sched == PrevSched} of\\n\",\n    \"\\t\\t\\t\\t\\t{true, true} ->\\n\",\n    \"\\t\\t\\t\\t\\t\\tok;\\n\",\n    \"\\t\\t\\t\\t\\t_ ->\\n\",\n    \"\\t\\t\\t\\t\\t\\terror({unexpected_price_change, H,\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\t{price, Price, PrevPrice},\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\t{sched, Sched, PrevSched}})\\n\",\n    \"\\t\\t\\t\\tend;\\n\",\n    \"\\t\\t\\ttrue ->\\n\",\n    \"\\t\\t\\t\\tcase Price == PrevSched of\\n\",\n    \"\\t\\t\\t\\t\\ttrue ->\\n\",\n    \"\\t\\t\\t\\t\\t\\tok;\\n\",\n    \"\\t\\t\\t\\t\\tfalse ->\\n\",\n    \"\\t\\t\\t\\t\\t\\terror({price_not_prev_scheduled, H, Price, PrevSched})\\n\",\n    \"\\t\\t\\t\\tend,\\n\",\n    \"\\t\\t\\t\\tV2Price = maps:get(v2_price, CurrRow),\\n\",\n    \"\\t\\t\\t\\tcase V2Price of\\n\",\n    \"\\t\\t\\t\\t\\terror ->\\n\",\n    \"\\t\\t\\t\\t\\t\\terror({v2_price_missing_for_ema_expectation, H});\\n\",\n    \"\\t\\t\\t\\t\\t_ ->\\n\",\n    \"\\t\\t\\t\\t\\t\\tExpectedTargetPrice =\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\tcase H < TransitionEnd of\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\t\\ttrue ->\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\t\\t\\tInterval1 = H - TransitionStart,\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\t\\t\\tInterval2 = TransitionEnd - H,\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\t\\t\\t(TransitionStartPrice * Interval2 + V2Price * Interval1)\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\t\\t\\t\\tdiv (Interval1 + Interval2);\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\t\\tfalse ->\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\t\\t\\tV2Price\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\tend,\\n\",\n    \"\\t\\t\\t\\t\\t\\tEMAPrice = (9 * PrevSched + ExpectedTargetPrice) div 10,\\n\",\n    \"\\t\\t\\t\\t\\t\\tExpectedSched = max(PrevSched div 2,\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\tmin(PrevSched * 2, EMAPrice)),\\n\",\n    \"\\t\\t\\t\\t\\t\\tcase Sched == ExpectedSched of\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\ttrue ->\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\t\\tok;\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\tfalse ->\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\t\\terror({scheduled_price_mismatch, H,\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\t\\t\\tSched, ExpectedSched,\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\t\\t\\t{expected_target, ExpectedTargetPrice},\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\t\\t\\t{ema, EMAPrice},\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\t\\t\\t{v2, V2Price},\\n\",\n    \"\\t\\t\\t\\t\\t\\t\\t\\t\\t{prev_sched, 
PrevSched}})\\n\",\n    \"\\t\\t\\t\\t\\t\\tend\\n\",\n    \"\\t\\t\\t\\tend\\n\",\n    \"\\t\\tend\\n\",\n    \"\\tend,\\n\",\n    \"\\tPairs),\\n\",\n    \"\\n\",\n    \"AdjustHeights = [H || H <- Heights, H rem PriceAdjustFreq == 0],\\n\",\n    \"true = (length(AdjustHeights) > 0),\\n\",\n    \"io:format(\\\"  Block price fields consistent across ~p consecutive blocks  [OK]~n\\\",\\n\",\n    \"\\t[length(Rows)]),\\n\",\n    \"io:format(\\\"  Recalculation verified at adjustment heights: ~p  [OK]~n\\\",\\n\",\n    \"\\t[AdjustHeights]),\\n\",\n    \"ok.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"b978df1d-d7ab-4061-af8a-1042b790eb0e\",\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": []\n  }\n ],\n \"metadata\": {\n  \"kernelspec\": {\n   \"display_name\": \"Erlang\",\n   \"language\": \"erlang\",\n   \"name\": \"erlang\"\n  },\n  \"language_info\": {\n   \"file_extension\": \".erl\",\n   \"name\": \"erlang\",\n   \"version\": \"26.2.1\"\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 5\n}\n"
  },
  {
    "path": "notebooks/test.ipynb",
    "content": "{\n  \"cells\": [\n    {\n      \"cell_type\": \"code\",\n      \"execution_count\": null,\n      \"metadata\": {\n        \"vscode\": {\n          \"languageId\": \"plaintext\"\n        }\n      },\n      \"outputs\": [],\n      \"source\": [\n        \"2 = 2.\"\n      ]\n    }\n  ],\n  \"metadata\": {\n    \"language_info\": {\n      \"name\": \"python\"\n    }\n  },\n  \"nbformat\": 4,\n  \"nbformat_minor\": 2\n}\n"
  },
  {
    "path": "priv/templates/README.md",
    "content": "# relx templates\n\nThese templates are available on [relx github repository](https://github.com/erlware/relx/blob/main/priv/templates).\n\nOnly `extended_bin` is required for now.\n"
  },
  {
    "path": "priv/templates/extended_bin",
    "content": "#!/usr/bin/env bash\n######################################################################\n# EXTRA_DIST_ARGS environment variable can be set to set extra VM\n# arguments.\n######################################################################\n\nset -e\n\n######################################################################\n# Switch to user or dev mode. bin/arweave should be the script used\n# only by users, and bin/arweave-dev should be the one used only\n# for developers.\n######################################################################\ncase ${0##*/}\nin\n  arweave-dev)\n    export ARWEAVE_DEV=1\n    ;;\nesac\n\n######################################################################\n# EPMD Configuration. force epmd to listen on loopback interface.\n######################################################################\nexport ERL_EPMD_ADDRESS=\"${ERL_EPMD_ADDRESS:=127.0.0.1,::1}\"\nexport ERL_EPMD_PORT=\"${ERL_EPMD_PORT:=4369}\"\n\n# http://erlang.org/doc/man/run_erl.html\n# If defined, disables input and output flow control for the pty\n# opend by run_erl. Useful if you want to remove any risk of accidentally\n# blocking the flow control by using Ctrl-S (instead of Ctrl-D to detach),\n# which can result in blocking of the entire Beam process, and in the case\n# of running heart as supervisor even the heart process becomes blocked\n# when writing log message to terminal, leaving the heart process unable\n# to do its work.\nRUN_ERL_DISABLE_FLOWCNTRL=${RUN_ERL_DISABLE_FLOWCNTRL:-true}\nexport RUN_ERL_DISABLE_FLOWCNTRL\nRUN_ERL_LOG_GENERATIONS=${RUN_ERL_LOG_GENERATIONS:-1}\nexport RUN_ERL_LOG_GENERATIONS\nRUN_ERL_LOG_MAXSIZE=${RUN_ERL_LOG_MAXSIZE:-$((100*1024*1024))}\nexport RUN_ERL_LOG_MAXSIZE\nRUN_ERL_LOG_ALIVE_MINUTES=${RUN_ERL_LOG_ALIVE_MINUTES:-15}\nexport RUN_ERL_LOG_ALIVE_MINUTES\n\nif [ \"$TERM\" = \"dumb\" ] || [ -z \"$TERM\" ]; then\n  export TERM=screen\nfi\n\n# OSX does not support readlink '-f' flag, work\n# around that\n# shellcheck disable=SC2039,SC3000-SC4000\ncase $OSTYPE in\n    darwin*)\n\tSCRIPT=$(readlink \"$0\" || true)\n    ;;\n    *)\n\tSCRIPT=$(readlink -f \"$0\" || true)\n    ;;\nesac\n[ -z \"$SCRIPT\" ] && SCRIPT=$0\nexport SCRIPT_DIR=\"$(cd \"$(dirname \"$SCRIPT\")\" && pwd -P)\"\nexport PARENT_DIR=\"$(cd \"${SCRIPT_DIR}/..\" && pwd  -P)\"\nexport SYSTEM_NAME=\"$(uname -s)\"\nexport RELEASE_ROOT_DIR=\"$(cd \"${SCRIPT_DIR}/..\" && pwd -P)\"\nexport REBAR_CONFIG=\"${RELEASE_ROOT_DIR}/rebar.config\"\nexport BUILD_DIR=\"${RELEASE_ROOT_DIR}/_build\"\n\n# let extract release relx information from rebar.config.\n# the following erlang code will read/parse the file\n# and extract the information required. 
In case of issue\n# it print an error message and return 1, else 0.\nextract_release_from_rebar_config() {\n\terl -noshell -eval '\n\t\ttry\n\t\t\t% extract file from REBAR_CONFIG variable\n\t\t\tC = case os:getenv(\"REBAR_CONFIG\") of\n\t\t\t\tfalse -> throw(\"REBAR_CONFIG not set\");\n\t\t\t\tVRC -> VRC\n\t\t\tend,\n\n\t\t\t% read/parse rebar.config\n\t\t\tF = case file:consult(C) of\n\t\t\t\t{ok, FC} -> FC;\n\t\t\t\t{error, EC} -> throw(EC)\n\t\t\tend,\n\n\t\t\t% extract relx section\n\t\t\tR = case proplists:get_value(relx, F) of\n\t\t\t\tundefined -> throw(\"relx section not found\");\n\t\t\t\tRX -> RX\n\t\t\tend,\n\n\t\t\t% extract release section\n\t\t\tV = case lists:keyfind(release, 1, R) of\n\t\t\t\tM = {release, {_, VX}, _} -> VX;\n\t\t\t\t_ -> throw(\"release not found\")\n\t\t\tend,\n\t\t\tio:format(\"~s~n\", [V]),\n\t\t\terlang:halt(0)\n\t\tcatch\n\t\t\t_:E ->\n\t\t\t\tio:format(standard_error, \"error: ~p~n\", [E]),\n\t\t\t\terlang:halt(255)\n\t\tend.\n\t'\n\treturn $?\n}\n\n# Make the value available to variable substitution calls below\n# The following variables are usually hardcoded by rebar3\nexport REL_NAME=\"{{ release_name }}\"\nexport REL_VSN=\"{{ release_version }}\"\nexport RELEASE_NAME=\"{{ release_name }}\"\nexport RELEASE_VSN=\"{{ release_version }}\"\nexport RELEASE_GIT_REV=\"{{ git_rev }}\"\nexport RELEASE_DATETIME=\"{{ datetime }}\"\nexport RELEASE_ERTS=\"{{ release_erts_version }}\"\nexport RELEASE_CC=\"{{ cc_version }}\"\nexport RELEASE_CMAKE=\"{{ cmake_version }}\"\nexport RELEASE_GMAKE=\"{{ gmake_version }}\"\nexport ERTS_VSN=\"{{ erts_vsn }}\"\nexport RELEASE_PROG=\"${SCRIPT}\"\n\n# ensure REL_NAME and RELEASE_NAME variables are set\n# by default, if the script is running from sources,\n# the release name must be arweave.\ntest -z \"${REL_NAME}\" && export REL_NAME=\"arweave\"\ntest -z \"${RELEASE_NAME}\" && export RELEASE_NAME=\"arweave\"\n\n# check REL_VSN variable content. This one is quite important\n# to be able to start arweave.\nif test -z \"${REL_VSN}\"\nthen\n\tREL_VSN=$(extract_release_from_rebar_config)\n\tif test $? -ne 0\n\tthen\n\t\techo \"error: failed to read rebar file\" 1>&2\n\t\texit 1\n\tfi\n\n\tif test -z \"${REL_VSN}\"\n\tthen\n\t\techo \"error: no release found\" 1>&2\n\t\texit 1\n\tfi\n\n\texport REL_VSN\n\texport REL_PATH=\"${BUILD_DIR}/default/rel/${REL_NAME}/${REL_VSN}\"\n\texport REL_PATH_ALT=\"${BUILD_DIR}/default/rel/${REL_NAME}/releases/${REL_VSN}\"\n\tif ! test -e ${REL_PATH}\n\tthen\n\t\techo \"error: ${REL_PATH} does not exist\" 1>&2\n\t\tif ! 
test \"${ARWEAVE_DEV}\"\n\t\tthen\n\t\t\texit 1\n\t\tfi\n\tfi\nfi\n\nexport REL_DIR=\"${RELEASE_ROOT_DIR}/releases/${REL_VSN}\"\nexport RUNNER_LOG_DIR=\"${RUNNER_LOG_DIR:-$RELEASE_ROOT_DIR/logs}\"\nexport ESCRIPT_NAME=\"${ESCRIPT_NAME-$SCRIPT}\"\n\n# if RELX_RPC_TIMEOUT is set then use that\n# otherwise check for NODETOOL_TIMEOUT and convert to seconds\nif [ -z \"$RELX_RPC_TIMEOUT\" ]; then\n    # if NODETOOL_TIMEOUT exists then turn the old nodetool timeout into the rpc timeout\n    if [ -n \"$NODETOOL_TIMEOUT\" ]; then\n\t# will exit the script if NODETOOL_TIMEOUT isn't a number\n\tRELX_RPC_TIMEOUT=$((NODETOOL_TIMEOUT / 1000))\n    else\n\tRELX_RPC_TIMEOUT=60\n    fi\nfi\n\nexport RELX_RPC_TIMEOUT\n\n\n# start/stop/install/upgrade pre/post hooks\nPRE_START_HOOKS=\"{{{ pre_start_hooks }}}\"\nPOST_START_HOOKS=\"{{{ post_start_hooks }}}\"\nPRE_STOP_HOOKS=\"{{{ pre_stop_hooks }}}\"\nPOST_STOP_HOOKS=\"{{{ post_stop_hooks }}}\"\nPRE_INSTALL_UPGRADE_HOOKS=\"{{{ pre_install_upgrade_hooks }}}\"\nPOST_INSTALL_UPGRADE_HOOKS=\"{{{ post_install_upgrade_hooks }}}\"\nSTATUS_HOOK=\"{{{ status_hook }}}\"\nEXTENSIONS=\"{{{ extensions }}}\"\n\n_warning() {\n    printf -- \"warning: %s\\n\" \"${*}\" 1>&2\n}\n\n_error () {\n    printf -- \"error: %s\\n\" \"${*}\" 1>&2\n}\n\n######################################################################\n# Arweave Section\n######################################################################\n\n# Not all systems supports randomx jit\nif test ${SYSTEM_NAME} = \"Darwin\"\nthen\n    export RANDOMX_JIT=\"disable randomx_jit\"\nelse\n    export RANDOMX_JIT=\"\"\nfi\n\n# This variable is the main own used to start arweave\nARWEAVE_OPTS=\"-run ar main ${RANDOM_JIT}\"\n\n######################################################################\n# Arweave System Check Section\n######################################################################\narweave_check() {\n    case \"${1}\" in\n\thelp)\n\t    arweave_check_help\n\t    ;;\n\t*)\n\t    arweave_check_nofile\n\t    arweave_check_hugepages\n\t    ;;\n    esac\n}\n\narweave_check_help() {\n    echo \"Usage: ${REL_NAME} check\"\n    echo \"Check system configuration. Examples:\"\n    echo \"  ${REL_NAME} check\"\n    exit 1\n}\n\narweave_check_nofile() {\n    recommendation=\"1000000\"\n    limit=\"$(ulimit -n)\"\n\n    if [ \"$limit\" -lt \"$recommendation\" ]\n    then\n\t_warning \"************************************************************************\"\n\t_warning \"Your maximum number of open file descriptors is currently set to $limit.\"\n\t_warning \"We recommend setting that limit to $recommendation or higher.\"\n\t_warning \"Otherwise, consider setting your max_connections setting to something\"\n\t_warning \"lower than your file descriptor limit. This value can be check with:\"\n\t_warning \"    sysctl fs.file-max\"\n\t_warning \"or\"\n\t_warning \"    ulimit -n\"\n\t_warning \"see more at https://docs.arweave.org/\"\n\t_warning \"************************************************************************\"\n    fi\n}\n\narweave_check_hugepages() {\n    # execute this check only on linux\n    test ${SYSTEM_NAME} != \"Linux\" && return 0\n    recommendation=\"3500\"\n    value=$(sysctl -n vm.nr_hugepages)\n\n    if test ${value} -lt ${recommendation}\n    then\n\t_warning \"************************************************************************\"\n\t_warning \"huge pages is not configured on this system.\"\n\t_warning \"It should be set to ${recommendation}. 
This value can be check with:\"\n\t_warning \"    sysctl vm.nr_hugepages\"\n\t_warning \"see more at https://docs.arweave.org/\"\n\t_warning \"************************************************************************\"\n    fi\n}\n\n######################################################################\n# Arweave Benchmark Section\n######################################################################\narweave_benchmark() {\n    case \"${1}\"\n    in\n\t2.9)\n\t    shift\n\t    arweave_benchmark_2_9 ${*}\n\t    ;;\n\thash)\n\t    shift\n\t    arweave_benchmark_hash ${*}\n\t    ;;\n\tpacking)\n\t    shift\n\t    arweave_benchmark_packing ${*}\n\t    ;;\n\tvdf)\n\t    shift\n\t    arweave_benchmark_vdf ${*}\n\t    ;;\n\t*)\n\t    arweave_benchmark_help\n\t    ;;\n    esac\n}\n\narweave_benchmark_help() {\n    echo \"Usage: ${REL_NAME} benchmark [2.9|hash|packing|vdf]\"\n    echo \"Execute Arweave benchmarks. Examples:\"\n    echo \"  ${REL_NAME} benchmark 2.9\"\n    echo \"  ${REL_NAME} benchmark hash\"\n    echo \"  ${REL_NAME} benchmark packing\"\n    echo \"  ${REL_NAME} benchmark vdf\"\n    exit 1\n}\n\narweave_benchmark_2_9() {\n    ARWEAVE_OPTS=\"-run ar benchmark_2_9\"\n    echo ${*}\n}\n\narweave_benchmark_hash() {\n    ARWEAVE_OPTS=\"-run ar benchmark_hash\"\n    echo ${*}\n}\n\narweave_benchmark_packing() {\n    ARWEAVE_OPTS=\"-run ar benchmark_packing\"\n    echo ${*}\n}\n\narweave_benchmark_vdf() {\n    ARWEAVE_OPTS=\"-run ar benchmark_vdf\"\n    echo ${*}\n}\n\n######################################################################\n# Arweave Wallet Management Section\n######################################################################\narweave_wallet() {\n    case \"${1}\" in\n\tcreate)\n\t    shift\n\t    arweave_wallet_create ${*}\n\t    ;;\n\t*)\n\t    arweave_wallet_help\n\t    ;;\n    esac\n}\n\narweave_wallet_help() {\n    echo \"Usage: ${REL_NAME} wallet [create]\"\n    echo \"Manage Arweave wallets. Examples:\"\n    echo \"  ${REL_NAME} wallet create rsa\"\n    echo \"  ${REL_NAME} wallet create ecdsa\"\n    exit 1\n}\n\narweave_wallet_create() {\n    case \"${1}\" in\n\trsa)\n\t    shift\n\t    arweave_wallet_create_rsa ${*}\n\t    ;;\n\tecdsa)\n\t    shift\n\t    arweave_wallet_create_ecdsa ${*}\n\t    ;;\n\t*)\n\t    arweave_wallet_create_help\n\t    ;;\n    esac\n}\n\narweave_wallet_create_help() {\n    echo \"Usage: ${REL_NAME} wallet create [rsa|ecdsa]\"\n    echo \"Create Arweave wallet. 
examples:\"\n    echo \"  ${REL_NAME} wallet create rsa\"\n    echo \"  ${REL_NAME} wallet create ecdsa\"\n    exit 1\n}\n\narweave_wallet_create_rsa() {\n    ARWEAVE_OPTS=\"-run ar create_wallet\"\n    echo ${*}\n}\n\narweave_wallet_create_ecdsa() {\n    ARWEAVE_OPTS=\"-run ar create_ecdsa_wallet\"\n    echo ${*}\n}\n\n######################################################################\n# Arweave Data Doctor Section\n######################################################################\narweave_doctor() {\n    ARWEAVE_OPTS=\"-run ar_data_doctor main\"\n    echo ${*}\n}\n\narweave_doctor_help() {\n    echo \"Usage: ${REL_NAME} doctor\"\n    echo \"Execute data doctor analyzer\"\n    exit 1\n}\n\n######################################################################\n# Arweave Developer mode Section\n######################################################################\n# when ARWEAVE_DEV environment variable is set, the release is rebuild\narweave_developer_mode() {\n\t(\n\t    cd ${PARENT_DIR} \\\n\t\t&& ./ar-rebar3 ${ARWEAVE_BUILD_TARGET:-default} release\n\t    sleep 1\n\t)\n}\n\n# check if a command (subcommand) is a developer command.\nis_arweave_developer_command() {\n  local commands=\"test test_e2e\"\n  local value=\"${1}\"\n  if test \"${ARWEAVE_DEV}\"\n  then\n    return 1\n  fi\n\n  for command in ${commands}\n  do\n    if test \"${command}\" = \"${value}\"\n    then\n      return 0\n    fi\n  done\n  return 1\n}\n\n######################################################################\n# Arweave Version Section\n######################################################################\narweave_version() {\n    case \"${1}\" in\n\t*) arweave_version_light\n\t   ;;\n    esac\n}\n\narweave_version_light() {\n    echo \"${RELEASE_NAME} ${RELEASE_VSN} (${RELEASE_GIT_REV}) ${RELEASE_DATETIME}\"\n    echo \"   erts ${RELEASE_ERTS}\"\n    echo \"   ${RELEASE_CC}\"\n    echo \"   ${RELEASE_GMAKE}\"\n    echo \"   ${RELEASE_CMAKE}\"\n    exit 0\n}\n\narweave_version_help() {\n    echo \"Usage: ${REL_NAME} version\"\n    echo \"Return Arweave release\"\n    exit 1\n}\n\n######################################################################\n# test section\n######################################################################\narweave_test() {\n    TEST_CONFIG=\"./config/sys.config\"\n    TEST_PROFILE=\"test\"\n    TEST_NODE_NAME=\"${NODE_NAME:-main-localtest}\"\n    TEST_NODE_HOST=\"${NODE_HOST:-127.0.0.1}\"\n    TEST_COOKIE=\"${COOKIE:-test}\"\n    TEST_MODULE=\"tests\"\n    TEST_LOG=\"main-localtest.out\"\n    arweave_test_run ${*}\n}\n\narweave_test_help() {\n    echo \"Usage: ${REL_NAME} test MODULE\"\n    echo \"Run Arweave Test Suite for module MODULE\"\n}\n\narweave_e2e() {\n    TEST_CONFIG=\"./config/sys.config\"\n    TEST_PROFILE=\"e2e\"\n    TEST_NODE_NAME=\"${NODE_NAME:-main-e2e}\"\n    TEST_NODE_HOST=\"${NODE_HOST:-127.0.0.1}\"\n    TEST_COOKIE=\"${COOKIE:-e2e}\"\n    TEST_MODULE=\"e2e\"\n    TEST_LOG=\"main-e2e.out\"\n    arweave_test_run ${*}\n}\n\narweave_e2e_help() {\n    echo \"Usage: ${REL_NAME} test_e2e MODULE\"\n    echo \"Run Arweave e2e Test Suite for module MODULE\"\n}\n\n# test and e2e features are sharing the same procedures.\narweave_test_run() {\n    (\n\techo -e \"\\033[0;32m===> Enter into ${PARENT_DIR}\\033[0m\"\n\tcd ${PARENT_DIR}\n\n\techo -e \"\\033[0;32m===> Compile ${TEST_PROFILE} profile\\033[0m\"\n\t./ar-rebar3 \"${TEST_PROFILE}\" compile\n\n\t# if a specific test is specified\n\tif test \"${1}\"\n\tthen\n\t    
TEST_NODE=\"${TEST_NODE_NAME}-${1}@${TEST_NODE_HOST}\"\n\telse\n\t    TEST_NODE=\"${TEST_NODE_NAME}@${TEST_NODE_HOST}\"\n\tfi\n\n\tTEST_PATH=\"$(./rebar3 as ${TEST_PROFILE} path)\"\n\tTEST_PATH_BASE=\"$(./rebar3 as ${TEST_PROFILE} path --base)/lib/arweave/test\"\n\tPARAMS=\"-pa ${TEST_PATH} ${TEST_PATH_BASE} -config ${TEST_CONFIG} -noshell\"\n\tENTRY_POINT=\"-run ar ${TEST_MODULE} ${*} -s init stop\"\n\tcommand=\"erl ${PARAMS} -name ${TEST_NODE} -setcookie ${TEST_COOKIE} ${ENTRY_POINT}\"\n\techo -e \"\\033[0;32m===> Execute command ${command}\\033[0m\"\n\tset -xe -o pipefail\n\t${command} | tee \"${TEST_LOG}\"\n\texit $?\n    )\n}\n\n######################################################################\n# Relx section\n######################################################################\nrelx_usage() {\n    command=\"$1\"\n\n    case \"$command\" in\n\tbenchmark)\n\t    arweave_benchmark_help\n\t    ;;\n\tcheck)\n\t    arweave_check_help\n\t    ;;\n\tdoctor)\n\t    arweave_doctor_help\n\t    ;;\n\tversion)\n\t    arweave_version_help\n\t    ;;\n\tpacking)\n\t    arweave_packing_help\n\t    ;;\n\twallet)\n\t    arweave_wallet_help\n\t    ;;\n\tdaemon)\n\t    echo \"Usage: ${REL_NAME} daemon\"\n\t    echo \"Start Arweave as daemon (in background)\"\n\t    ;;\n\tdaemon_attach)\n\t    echo \"Usage: ${REL_NAME} daemon_attach\"\n\t    echo \"Attach to a running Arweave daemonized process\"\n\t    ;;\n\trpc)\n\t    echo \"Usage: $REL_NAME rpc [Mod [Fun [Args]]]]\"\n\t    echo \"Applies the specified function and returns the result.\"\n\t    echo \"Mod must be specified. However, start and [] are assumed\"\n\t    echo \"for unspecified Fun and Args, respectively. Args is to \"\n\t    echo \"be in the same format as for erlang:apply/3 in ERTS.\"\n\t    ;;\n\tescript)\n\t    echo \"Usage: ${REL_NAME} escript [ESCRIPT]\"\n\t    echo \"Execute an Erlang script in the Arweave release environment.\"\n\t    echo \"Note: it will not start Arweave.\"\n\t    ;;\n\t\"eval\")\n\t    echo \"Usage: $REL_NAME eval [Exprs]\"\n\t    echo \"Executes a sequence of Erlang expressions, separated by\"\n\t    echo \"comma (,) and ended with a full stop (.)\"\n\t    ;;\n\tforeground)\n\t    echo \"Usage: $REL_NAME foreground\"\n\t    echo \"Starts the Arweave release in the foreground, meaning all output\"\n\t    echo \"going to stdout but without an interactive shell.\"\n\t    echo \"The entry point is set to -run ar main\"\n\t    ;;\n\tforeground_clean)\n\t    echo \"Usage: $REL_NAME foreground\"\n\t    echo \"Starts the Arweave release in the foreground, meaning all output\"\n\t    echo \"going to stdout but without an interactive shell.\"\n\t    echo \"No entry point is configured\"\n\t    ;;\n\tconsole)\n\t    echo \"Usage: $REL_NAME console\"\n\t    echo \"Starts Arweave with an interactive shell.\"\n\t    ;;\n\tconsole_clean)\n\t    echo \"Usage: ${REL_NAME} console_clean\"\n\t    echo: \"Starts an interactived Erlang shell without Arweave started.\"\n\t    ;;\n\tremote_console|remote|remsh)\n\t    echo \"Usage: $REL_NAME remote\"\n\t    echo \"Attach a remote shell to an already running Erlang node for this release.\"\n\t    ;;\n\treboot)\n\t    echo \"Usage: ${REL_NAME} reboot\"\n\t    echo \"Reboot the entire Arweave VM.\"\n\t    ;;\n\trestart)\n\t    echo \"Usage: ${REL_NAME} restart\"\n\t    echo \"Restart the running applications but not the Arweave VM.\"\n\t    ;;\n\tpid)\n\t    echo \"Usage: ${REL_NAME} pid\"\n\t    echo \"Returns the system PID of Arweave release (if running).\"\n\t    ;;\n\tping)\n\t  
  echo \"Usage: ${REL_NAME} ping\"\n\t    echo \"Checks if the Arweave node is running.\"\n\t    ;;\n\tstatus)\n\t    echo \"Usage: $REL_NAME status\"\n\t    echo \"Obtains node status information through optionally defined hooks.\"\n\t    ;;\n\tstop)\n\t    echo \"Usage: ${REL_NAME} stop\"\n\t    echo \"Stop the Arweave node.\"\n\t    ;;\n\ttest)\n\t    arweave_test_help\n\t    ;;\n\ttest_e2e)\n\t    arweave_e2e_help\n\t    ;;\n\t*)\n\t    # check for extension\n\t    IS_EXTENSION=$(relx_is_extension \"$command\")\n\t    if [ \"$IS_EXTENSION\" = \"1\" ]; then\n\t\tEXTENSION_SCRIPT=$(relx_get_extension_script \"$command\")\n\t\trelx_run_extension \"$EXTENSION_SCRIPT\" help\n\t    else\n\t\tEXTENSIONS=$(echo $EXTENSIONS | sed -e 's/|undefined//g')\n\t\techo \"Usage: ${REL_NAME} [COMMAND] [ARGS]\"\n\t\techo \"\"\n\t\techo \"Arweave Commands:\"\n\t\techo \"\"\n\t\techo \"  benchmark               Run Arweave Benchmarks\"\n\t\techo \"  check                   Check system parameters for Arweave\"\n\t\techo \"  console                 Start Arweave with an interactive Erlang shell\"\n\t\techo \"  console_clean           Start an interactive Erlang shell without the Arweave release's applications\"\n\t\techo \"  daemon                  Start Arweave in the background with run_erl (named pipes)\"\n\t\techo \"  daemon_attach           Connect to Arweave node started as daemon with to_erl (named pipes)\"\n\t\techo \"  doctor                  Start Arweave Data Analyzer tool\"\n\t\techo \"  escript                 Run an escript in the same environment as the Arweave release\"\n\t\techo \"  eval [Exprs]            Run Erlang expressions on Arweave node\"\n\t\techo \"  foreground              Start Arweave with output to stdout\"\n\t\techo \"  foreground_clean        Start Arweave VM without any entry-point as arguments\"\n\t\techo \"  pid                     Print the PID of the Arweave OS process\"\n\t\techo \"  ping                    Print pong if the Arweave node is alive\"\n\t\techo \"  reboot                  Reboot the entire Arweave VM\"\n\t\techo \"  reload                  Restart only Arweave application in the VM\"\n\t\techo \"  remote_console          Connect remote shell to the Arweave node\"\n\t\techo \"  restart                 Restart the running applications but not the Arweave VM\"\n\t\techo \"  rpc [Mod [Fun [Args]]]] Run apply(Mod, Fun, Args) on the Arweave node\"\n\t\techo \"  status                  Verify if the Arweave node is running and then run status hook scripts\"\n\t\techo \"  stop                    Stop the Arweave node\"\n\t\techo \"  version                 Print the Arweave version\"\n\t\techo \"  wallet                  Manage Arweave wallets\"\n\n\t\tif test \"$EXTENSIONS\"\n\t\tthen\n\t\t  echo \"$EXTENSIONS\"\n                fi\n\n\t\tif test \"${ARWEAVE_DEV}\"\n                then\n\t\t  echo \"\"\n\t\t  echo \"Arweave Commands (developer mode):\"\n\t\t  echo \"  test                    Run Arweave test Suite\"\n\t\t  echo \"  test_e2e                Run Arweave e2e Test Suite\"\n\t\tfi\n\t    fi\n\t    ;;\n    esac\n}\n\nfind_erts_dir() {\n    __erts_dir=\"$RELEASE_ROOT_DIR/erts-$ERTS_VSN\"\n    if [ -d \"$__erts_dir\" ]; then\n\tERTS_DIR=\"$__erts_dir\";\n    else\n\t__erl=\"$(command -v erl)\"\n\tcode=\"io:format(\\\"~s\\\", [code:root_dir()]), halt().\"\n\t__erl_root=\"$(\"$__erl\" -boot no_dot_erlang -sasl errlog_type error -noshell -eval \"$code\")\"\n\tERTS_DIR=\"$__erl_root/erts-$ERTS_VSN\"\n\tif [ ! 
-d \"$ERTS_DIR\" ]; then\n\t    erts_version_code=\"io:format(\\\"~s\\\", [erlang:system_info(version)]), halt().\"\n\t    __erts_version=\"$(\"$__erl\" -boot no_dot_erlang -sasl errlog_type error -noshell -eval \"$erts_version_code\")\"\n\t    ERTS_DIR=\"${__erl_root}/erts-${__erts_version}\"\n\t    if [ -d \"$ERTS_DIR\" ]; then\n\t\techo \"Exact ERTS version (${ERTS_VSN}) match not found, instead using ${__erts_version}. The release may fail to run.\" 1>&2\n\t\tERTS_VSN=${__erts_version}\n\t    else\n\t\techo \"Can not run the release. There is no ERTS bundled with the release or found on the system.\"\n\t\texit 1\n\t    fi\n\tfi\n    fi\n}\n\nfind_erl_call() {\n    # users who depend on stdout when running rpc calls must still use nodetool\n    # so we have an overload option to force use of nodetool instead of erl_call\n    if [ \"$USE_NODETOOL\" ]; then\n\tERL_RPC=relx_nodetool\n    else\n\t# only OTP-23 and above have erl_call in the erts bin directory\n\t# and only those versions have the features and bug fixes needed\n\t# to work properly with this script\n\t__erl_call=\"$ERTS_DIR/bin/erl_call\"\n\tif [ -f \"$__erl_call\" ]; then\n\t    ERL_RPC=\"$__erl_call\";\n\telse\n\t    ERL_RPC=relx_nodetool\n\tfi\n    fi\n}\n\n# Get node pid\nrelx_get_pid() {\n    if output=\"$(erl_rpc os getpid 2>/dev/null)\"\n    then\n\techo \"$output\" | sed -e 's/\"//g'\n\treturn 0\n    else\n\techo \"$output\"\n\treturn 1\n    fi\n}\n\nping_or_exit() {\n    if ! erl_rpc erlang is_alive > /dev/null 2>&1; then\n\techo \"Node is not running!\"\n\texit 1\n    fi\n}\n\nrelx_get_nodename() {\n    id=\"longname$(relx_gen_id)-${NAME}\"\n    if [ -z \"$COOKIE\" ]; then\n\t# shellcheck disable=SC2086\n\t\"$BINDIR/erlexec\" -boot \"$REL_DIR\"/start_clean \\\n\t\t\t  -mode interactive \\\n\t\t\t  -boot_var SYSTEM_LIB_DIR \"$SYSTEM_LIB_DIR\" \\\n\t\t\t  -eval '[_,H]=re:split(atom_to_list(node()),\"@\",[unicode,{return,list}]), io:format(\"~s~n\",[H]), halt()' \\\n\t\t\t  -dist_listen false \\\n\t\t\t  ${START_EPMD} \\\n\t\t\t  -noshell \"${NAME_TYPE}\" \"$id\"\n    else\n\t# running with setcookie prevents a ~/.erlang.cookie from being created\n\t# shellcheck disable=SC2086\n\t\"$BINDIR/erlexec\" -boot \"$REL_DIR\"/start_clean \\\n\t\t\t  -mode interactive \\\n\t\t\t  -boot_var SYSTEM_LIB_DIR \"$SYSTEM_LIB_DIR\" \\\n\t\t\t  -eval '[_,H]=re:split(atom_to_list(node()),\"@\",[unicode,{return,list}]), io:format(\"~s~n\",[H]), halt()' \\\n\t\t\t  -setcookie \"${COOKIE}\" \\\n\t\t\t  -dist_listen false \\\n\t\t\t  ${START_EPMD} \\\n\t\t\t  -noshell \"${NAME_TYPE}\" \"$id\"\n    fi\n}\n\n# Connect to a remote node\nrelx_rem_sh() {\n    # Remove remote_nodename when OTP-23 is the oldest version supported by rebar3/relx.\n    # sort the used erts version against 11.0 to see if it is less than 11.0 (OTP-23)\n    # if it is then we must generate a node name to use for the remote node.\n    # But this feature is only for short names in 23.0 (erts 11.0). 
It can be used\n    # for long names with 23.1 (erts 11.1) and above.\n    if [ \"${NAME_TYPE}\" = \"-sname\" ] && [  \"11.0\" = \"$(printf \"%s\\n11.0\" \"${ERTS_VSN}\" | sort -V | head -n1)\" ] ; then\n\n\tremote_nodename=\"${NAME_TYPE} undefined@${RELX_HOSTNAME}\"\n    # if the name type is longnames then make sure this is erts 11.1+\n    elif [ \"${NAME_TYPE}\" = \"-name\" ] && [  \"11.1\" = \"$(printf \"%s\\n11.1\" \"${ERTS_VSN}\" | sort -V | head -n1)\" ] ; then\n\tremote_nodename=\"${NAME_TYPE} undefined@${RELX_HOSTNAME}\"\n    else\n\t# Generate a unique id used to allow multiple remsh to the same node transparently\n\tremote_nodename=\"${NAME_TYPE} remsh$(relx_gen_id)-${NAME}\"\n    fi\n\n    # Get the node's ticktime so that we use the same one\n    TICKTIME=\"$(erl_rpc net_kernel get_net_ticktime)\"\n\n    # Setup remote shell command to control node\n    # -dist_listen is new in OTP-23. It keeps the remote node from binding to a listen port\n    # and implies the option -hidden\n    # shellcheck disable=SC2086\n    exec \"$BINDIR/erlexec\" ${remote_nodename} -remsh \"$NAME\" -boot \"$REL_DIR\"/start_clean -mode interactive \\\n\t -boot_var SYSTEM_LIB_DIR \"$SYSTEM_LIB_DIR\" \\\n\t -setcookie \"$COOKIE\" -hidden -kernel net_ticktime \"$TICKTIME\" \\\n\t -dist_listen false \\\n\t $DIST_ARGS \\\n\t $EXTRA_DIST_ARGS\n}\n\nerl_rpc() {\n    case \"$ERL_RPC\" in\n\t\"relx_nodetool\")\n\t    relx_nodetool rpc \"$@\"\n\t    ;;\n\t*)\n\t    command=$*\n\n\t    # erl_call -R is recommended for generating dynamic node name but is only available in 23.0+\n\t    if [  \"11.0\" = \"$(printf \"%s\\n11.0\" \"${ERTS_VSN}\" | sort -V | head -n1)\" ] ; then\n\t\tDYNAMIC_NAME=\"-R\"\n\t    else\n\t\tDYNAMIC_NAME=\"-r\"\n\t    fi\n\n\t    if [ \"$ADDRESS\" ]; then\n\t\tresult=$(\"$ERL_RPC\" \"${DYNAMIC_NAME}\" -c \"${COOKIE}\" -address \"${ADDRESS}\" -timeout \"${RELX_RPC_TIMEOUT}\" -a \"${command}\")\n\t    else\n\t\tresult=$(\"$ERL_RPC\" \"$NAME_TYPE\" \"$NAME\" \"${DYNAMIC_NAME}\" -c \"${COOKIE}\" -timeout \"${RELX_RPC_TIMEOUT}\" -a \"${command}\")\n\t    fi\n\t    code=$?\n\t    if [ $code -eq 0 ]; then\n\t\techo \"$result\"\n\t    else\n\t\treturn $code\n\t    fi\n\t    ;;\n    esac\n}\n\nerl_eval() {\n    case \"$ERL_RPC\" in\n\t\"relx_nodetool\")\n\t    relx_nodetool eval \"$@\"\n\t    ;;\n\t*)\n\t    local command=\"${*}\"\n\t    if [ \"$ERL_DIST_PORT\" ]; then\n\t\tresult=$(echo \"${command}\" | eval \"$ERL_RPC\" \"${DYNAMIC_NAME}\" -c \"${COOKIE}\" -address \"${ADDRESS}\" -timeout \"${RELX_RPC_TIMEOUT}\" -e)\n\t    else\n\t\tresult=$(echo \"${command}\" | eval \"$ERL_RPC\" \"$NAME_TYPE\" \"$NAME\" \"${DYNAMIC_NAME}\" -c \"${COOKIE}\" -timeout \"${RELX_RPC_TIMEOUT}\" -e)\n\t    fi\n\t    code=$?\n\t    if [ $code -eq 0 ]; then\n\t\techo \"$result\" | sed 's/^{ok, \\(.*\\)}$/\\1/'\n\t    else\n\t\treturn $code\n\t    fi\n\t    ;;\n    esac\n}\n\n\n# Generate a random id\nrelx_gen_id() {\n    # To prevent exhaustion of atoms on target node, optionally avoid\n    # generation of random node prefixes, if it is guaranteed calls\n    # are entirely sequential.\n    if [ -z \"${NODETOOL_NODE_PREFIX}\" ]; then\n\tdd count=1 bs=4 if=/dev/urandom 2> /dev/null | od -x  | head -n1 | awk '{print $2$3}'\n    else\n\techo \"${NODETOOL_NODE_PREFIX}\"\n    fi\n}\n\n# Control a node with nodetool if erl_call isn't from OTP-23+\nrelx_nodetool() {\n    command=\"$1\"; shift\n\n    # Generate a unique id used to allow multiple nodetool calls to the\n    # same node transparently\n    
nodetool_id=\"maint$(relx_gen_id)-${NAME}\"\n\n    if [ -z \"${START_EPMD}\" ]; then\n\tERL_FLAGS=\"${ERL_FLAGS} ${DIST_ARGS} ${EXTRA_DIST_ARGS} ${NAME_TYPE} $nodetool_id -setcookie ${COOKIE} -dist_listen false\" \\\n\t\t \"$ERTS_DIR/bin/escript\" \\\n\t\t \"$ROOTDIR/bin/nodetool\" \\\n\t\t \"$NAME_TYPE\" \"$NAME\" \\\n\t\t \"$command\" \"$@\"\n    else\n\t# shellcheck disable=SC2086\n\tERL_FLAGS=\"${ERL_FLAGS} ${DIST_ARGS} ${EXTRA_DIST_ARGS} ${NAME_TYPE} $nodetool_id -setcookie ${COOKIE} -dist_listen false\" \\\n\t\t \"$ERTS_DIR/bin/escript\" \\\n\t\t \"$ROOTDIR/bin/nodetool\" \\\n\t\t $START_EPMD \"$NAME_TYPE\" \"$NAME\" \"$command\" \"$@\"\n    fi\n}\n\n# Run an escript in the node's environment\nrelx_escript() {\n    scriptpath=\"$1\"; shift\n    export RELEASE_ROOT_DIR\n\n    \"$ERTS_DIR/bin/escript\" \"$ROOTDIR/$scriptpath\" \"$@\"\n}\n\n# Convert {127,0,0,1} to 127.0.0.1 (inet:ntoa/1)\naddr_tuple_to_str() {\n    addr=\"$1\"\n    saved_IFS=\"$IFS\"\n    IFS=\"{,}'\\\" \"\n    # shellcheck disable=SC2086\n    eval set -- $addr\n    IFS=\"$saved_IFS\"\n\n    case $# in\n    4) printf '%u.%u.%u.%u' \"$@\";;\n    8) printf '%.4x:%.4x:%.4x:%.4x:%.4x:%.4x:%.4x:%.4x' \"$@\";;\n    *) echo \"Cannot parse IP address tuple: '$addr'\" 1>&2;;\n    esac\n}\n\nmake_out_file_path() {\n    # Use output directory provided in the RELX_OUT_FILE_PATH environment variable\n    # (default to the current location of vm.args and sys.config)\n    DIR=$(dirname \"$1\")\n    [ -d \"${RELX_OUT_FILE_PATH}\" ] && DIR=\"${RELX_OUT_FILE_PATH}\"\n    FILE=$(basename \"$1\")\n    IN=\"${DIR}/${FILE}\"\n\n    PFX=$(echo \"$IN\"   | awk '{sub(/\\.[^.]+$/, \"\", $0)}1')\n    SFX=$(echo \"$FILE\" | awk -F . '{if (NF>1) print $NF}')\n    if [ \"$RELX_MULTI_NODE\" ]; then\n\techo \"${PFX}.${NAME}.${SFX}\"\n    else\n\techo \"${PFX}.${SFX}\"\n    fi\n}\n\n# Replace environment variables\nreplace_os_vars() {\n    awk '{\n\twhile(match($0,\"[$]{[^}]*}\")) {\n\t    var=substr($0,RSTART+2,RLENGTH -3)\n\t    slen=split(var,arr,\":-\")\n\t    v=arr[1]\n\t    e=ENVIRON[v]\n\t    gsub(\"&\",\"\\\\\\\\\\\\&\",e)\n\t    if(slen > 1 && e==\"\") {\n\t\ti=index(var, \":-\"arr[2])\n\t\tdef=substr(var,i+2)\n\t\tgsub(\"[$]{\"var\"}\",def)\n\t    } else {\n\t\tgsub(\"[$]{\"var\"}\",e)\n\t    }\n\t}\n    }1' < \"$1\" > \"$2\"\n}\n\nadd_path() {\n    # Use $CWD/$1 if exists, otherwise releases/VSN/$1\n    local FILE=${1}; shift\n    local IN_FILE_PATH=${1}; shift\n    local EXTRA_PATHS=${*}\n\n    if [ \"${IN_FILE_PATH}\" ]\n    then\n      echo \"${IN_FILE_PATH}\"\n      return 0\n    fi\n\n    for e in \"${RELEASE_ROOT_DIR}\" \"${REL_DIR}\" ${EXTRA_PATHS}\n    do\n      if [ -f \"${e}/${FILE}\" ]\n      then\n        echo \"${e}/${FILE}\"\n        return 0\n      fi\n    done\n    return 1\n}\n\nmulti_check_replace_os_vars() {\n\tlocal file=\"${1}\"; shift\n\twhile test \"${*}\"\n\tdo\n\t\tlocal path=${1}; shift\n\t\tlocal ret=$(check_replace_os_vars ${file} ${path})\n\t\tif test \"${ret}\"\n\t\tthen\n\t\t\techo ${ret}\n\t\t\treturn 0\n\t\tfi\n\tdone\n\treturn 1\n}\n\ncheck_replace_os_vars() {\n    IN_FILE_PATH=$(add_path \"$1\" \"$2\")\n    OUT_FILE_PATH=\"$IN_FILE_PATH\"\n    SRC_FILE_PATH=\"$IN_FILE_PATH.src\"\n    ORIG_FILE_PATH=\"$IN_FILE_PATH.orig\"\n    if [ -f \"$SRC_FILE_PATH\" ]; then\n\tOUT_FILE_PATH=$(make_out_file_path \"$IN_FILE_PATH\")\n\treplace_os_vars \"$SRC_FILE_PATH\" \"$OUT_FILE_PATH\"\n    elif [ \"$RELX_REPLACE_OS_VARS\" ]; then\n\tOUT_FILE_PATH=$(make_out_file_path \"$IN_FILE_PATH\")\n\t# If vm.args.orig or 
sys.config.orig is present then use that\n\tif [ -f \"$ORIG_FILE_PATH\" ]; then\n\t   IN_FILE_PATH=\"$ORIG_FILE_PATH\"\n\tfi\n\n\t# apply the environment variable substitution to $IN_FILE_PATH\n\t# the result is saved to $OUT_FILE_PATH\n\t# if they are both the same, then ensure that we don't clobber\n\t# the file by saving a backup with the .orig extension\n\tif [ \"$IN_FILE_PATH\" = \"$OUT_FILE_PATH\" ]; then\n\t    cp \"$IN_FILE_PATH\" \"$ORIG_FILE_PATH\"\n\t    replace_os_vars \"$ORIG_FILE_PATH\" \"$OUT_FILE_PATH\"\n\telse\n\t    replace_os_vars \"$IN_FILE_PATH\" \"$OUT_FILE_PATH\"\n\tfi\n    else\n\t# If vm.arg.orig or sys.config.orig is present then use that\n\tif [ -f \"$ORIG_FILE_PATH\" ]; then\n\t    OUT_FILE_PATH=$(make_out_file_path \"$IN_FILE_PATH\")\n\t    cp \"$ORIG_FILE_PATH\" \"$OUT_FILE_PATH\"\n\tfi\n    fi\n    echo \"$OUT_FILE_PATH\"\n}\n\nrelx_run_hooks() {\n    HOOKS=$1\n    for hook in $HOOKS\n    do\n\t# the scripts arguments at this point are separated\n\t# from each other by | , we now replace these\n\t# by empty spaces and give them to the `set`\n\t# command in order to be able to extract them\n\t# separately\n\t# shellcheck disable=SC2046\n\tset $(echo \"$hook\" | sed -e 's/|/ /g')\n\tHOOK_SCRIPT=$1; shift\n\t# all hook locations are expected to be\n\t# relative to the start script location\n\t# shellcheck disable=SC1090,SC2240\n\t[ -f \"$SCRIPT_DIR/$HOOK_SCRIPT\" ] && . \"$SCRIPT_DIR/$HOOK_SCRIPT\" \"$@\"\n    done\n}\n\nrelx_disable_hooks() {\n    PRE_START_HOOKS=\"\"\n    POST_START_HOOKS=\"\"\n    PRE_STOP_HOOKS=\"\"\n    POST_STOP_HOOKS=\"\"\n    PRE_INSTALL_UPGRADE_HOOKS=\"\"\n    POST_INSTALL_UPGRADE_HOOKS=\"\"\n    STATUS_HOOK=\"\"\n}\n\nrelx_is_extension() {\n    EXTENSION=$1\n    case \"$EXTENSION\" in\n\t# {{ extensions }})\n\t#    echo \"1\"\n\t# ;;\n\t*)\n\t    echo \"0\"\n\t;;\n    esac\n}\n\nrelx_get_extension_script() {\n    EXTENSION=$1\n    # below are the extensions declarations\n    # of the form:\n    # foo_extension=\"path/to/foo_script\";bar_extension=\"path/to/bar_script\"\n    {{{extension_declarations}}}\n    # get the command extension (eg. foo) and\n    # obtain the actual script filename that it\n    # refers to (eg. \"path/to/foo_script\"\n    eval echo \"$\"\"${EXTENSION}_extension\"\n}\n\nrelx_run_extension() {\n    # drop the first argument which is the name of the\n    # extension script\n    EXTENSION_SCRIPT=$1\n    shift\n    # all extension script locations are expected to be\n    # relative to the start script location\n    # shellcheck disable=SC1090,SC2240\n    [ -f \"$SCRIPT_DIR/$EXTENSION_SCRIPT\" ] && . \"$SCRIPT_DIR/$EXTENSION_SCRIPT\" \"$@\"\n}\n\n# given a list of arguments, identify the internal ones\n#   --relx-disable-hooks\n# and process them accordingly\nprocess_internal_args() {\n    for arg in \"$@\"\n    do\n      shift\n      case \"$arg\" in\n\t  --relx-disable-hooks)\n\t    relx_disable_hooks\n\t    ;;\n\t  *)\n\t    ;;\n      esac\n    done\n}\n\n# This function takes a list of terms (usually arguments)\n# and split them in two categories, the one before --\n# and the one after. 
The one before is used as Erlang\n# VM parameters and should overwrite default configuration,\n# The last part (LOCAL_PARAMS) contains arweave parameters.\n# This function export LOCAL_PARAMS and VM_PARAMS variables.\nparse_args() {\n\tlocal separator=\"--\"\n\tlocal vm_params=\"\"\n\tlocal params=\"\"\n\twhile test \"${*}\"\n\tdo\n\t\tlocal arg=\"${1}\"\n\t\tif test \"${arg}\" = ${separator}\n\t\tthen\n\t\t\ttest \"${vm_params}\" \\\n\t\t\t\t&& vm_params=\"${vm_params} ${params}\" \\\n\t\t\t\t|| vm_params=\"${params}\"\n\t\t\tparams=\"\"\n\t\telse\n\t\t\ttest \"${params}\" \\\n\t\t\t\t&& params=\"${params} ${arg}\" \\\n\t\t\t\t|| params=\"${arg}\"\n\t\tfi\n\n\t\t# don't forget to shift to remove the previous\n\t\t# argument from the list\n\t\tshift\n\tdone\n\texport VM_PARAMS=\"${vm_params}\"\n\texport LOCAL_PARAMS=\"${params}\"\n}\n\n# if ARWEAVE_DEV environment is defined, then\n# we start by rebuild a release.\nif test \"${ARWEAVE_DEV}\"\nthen\n    arweave_developer_mode\nfi\n\n# process internal arguments\nprocess_internal_args \"$@\"\n\nfind_erts_dir\nfind_erl_call\nexport ROOTDIR=\"$RELEASE_ROOT_DIR\"\nexport BINDIR=\"$ERTS_DIR/bin\"\nexport EMU=\"beam\"\nexport PROGNAME=\"erl\"\nexport LD_LIBRARY_PATH=\"$ERTS_DIR/lib:$LD_LIBRARY_PATH\"\nSYSTEM_LIB_DIR=\"$(dirname \"$ERTS_DIR\")/lib\"\n\n# vm_args configuration, we can use priv/files/vm_args or\n# the path from the release.\nVMARGS_PATH=$(add_path \\\n\tvm.args \\\n\t\"${VMARGS_PATH}\" \\\n\t\"${REL_DIR}\" \\\n\t\"${REL_PATH}\" \\\n\t\"${REL_PATH_ALT}\" \\\n\t\"${RELEASE_ROOT_DIR}/config\" \\\n\t\"${RELEASE_ROOT_DIR}/priv/templates\")\nVMARGS_PATH=$(multi_check_replace_os_vars \\\n\tvm.args \\\n\t\"${VMARGS_PATH}\"\\\n\t\"${REL_DIR}\" \\\n\t\"${REL_PATH}\" \\\n\t\"${REL_PATH_ALT}\" \\\n\t\"${RELEASE_ROOT_DIR}/config\")\nRELX_CONFIG_PATH=$(multi_check_replace_os_vars \\\n\tsys.config \\\n\t\"${RELX_CONFIG_PATH}\" \\\n\t\"${REL_DIR}\" \\\n\t\"${REL_PATH}\" \\\n\t\"${REL_PATH_ALT}\" \\\n\t\"${RELEASE_ROOT_DIR}/config\")\n\n# Check vm.args and other files referenced via -args_file parameters for:\n#    - nonexisting -args_files\n#    - circular dependencies of -args_files\n#    - relative paths in -args_file parameters\n#    - multiple/mixed occurrences of -name and -sname parameters\n#    - missing -name or -sname parameters\n# If all checks pass, extract the target node name\nset +e\nTMP_NAME_ARG=$(awk 'function shell_quote(str)\n{\n    gsub(/'\\''/,\"'\\'\\\\\\\\\\'\\''\", str);\n    return \"'\\''\" str \"'\\''\"\n}\n\nfunction check_name(file)\n{\n    # if file exists, then it should be readable\n    if (system(\"test -f \" shell_quote(file)) == 0 && system(\"test -r \" shell_quote(file)) != 0) {\n\tprint file\" not readable\"\n\texit 3\n    }\n    while ((getline line<file)>0) {\n\tif (line~/^-args_file +/) {\n\t    gsub(/^-args_file +| *$/, \"\", line)\n\t    if (line in files) {\n\t\tprint \"circular reference to \"line\" encountered in \"file\n\t\texit 5\n\t    }\n\t    files[line]=line\n\t    check_name(line)\n\t}\n\telse if (line~/^-s?name +/) {\n\t    if (name!=\"\") {\n\t\tprint \"\\\"\"line\"\\\" parameter found in \"file\" but already specified as \\\"\"name\"\\\"\"\n\t\texit 2\n\t    }\n\t    name=line\n\t}\n    }\n}\n\nBEGIN {\n    split(\"\", files)\n    name=\"\"\n}\n\n{\n    files[FILENAME]=FILENAME\n    check_name(FILENAME)\n    if (name==\"\") {\n\tprint \"need to have exactly one of either -name or -sname parameters but none found\"\n\texit 1\n    }\n    print name\n    exit 0\n}' 
\"$VMARGS_PATH\")\nTMP_NAME_ARG_RC=$?\ncase $TMP_NAME_ARG_RC in\n    0) NAME_ARG=\"$TMP_NAME_ARG\";;\n    *) echo \"$TMP_NAME_ARG\"\n       exit $TMP_NAME_ARG_RC;;\nesac\nunset TMP_NAME_ARG\nunset TMP_NAME_ARG_RC\nset -e\n\n\n# Perform replacement of variables in ${NAME_ARG}\nNAME_ARG=$(eval echo \"${NAME_ARG}\")\n\n# Extract the name type and name from the NAME_ARG for REMSH\nNAME_TYPE=\"$(echo \"$NAME_ARG\" | awk '{print $1}')\"\nNAME=\"$(echo \"$NAME_ARG\" | awk '{print $2}')\"\n\n# Extract dist arguments\nDIST_ARGS=\"\"\nPROTO_DIST=\"$(grep '^-proto_dist' \"$VMARGS_PATH\" || true)\"\nif [ \"$PROTO_DIST\" ]; then\n    DIST_ARGS=\"${PROTO_DIST}\"\nfi\nSTART_EPMD=\"$(grep '^-start_epmd' \"$VMARGS_PATH\" || true)\"\nif [ \"$START_EPMD\" ]; then\n    DIST_ARGS=\"${DIST_ARGS} ${START_EPMD}\"\nfi\nEPMD_MODULE=\"$(grep '^-epmd_module' \"$VMARGS_PATH\" || true)\"\nif [ \"$EPMD_MODULE\" ]; then\n    DIST_ARGS=\"${DIST_ARGS} ${EPMD_MODULE}\"\nfi\nINET_DIST_USE_INTERFACE=\"$(grep '^-kernel  *inet_dist_use_interface' \"$VMARGS_PATH\" || true)\"\nif [ \"$INET_DIST_USE_INTERFACE\" ]; then\n    DIST_ARGS=\"${DIST_ARGS} ${INET_DIST_USE_INTERFACE}\"\nfi\n\nif [ \"$ERL_DIST_PORT\" ]; then\n    if [ \"$INET_DIST_USE_INTERFACE\" ]; then\n\tADDRESS=\"$(addr_tuple_to_str \"${INET_DIST_USE_INTERFACE#*inet_dist_use_interface }\"):$ERL_DIST_PORT\"\n    else\n\tADDRESS=\"$ERL_DIST_PORT\"\n    fi\n    if [  \"11.1\" = \"$(printf \"%s\\n11.1\" \"${ERTS_VSN}\" | sort -V | head -n1)\" ] ; then\n\t# unless set by the user, set start_epmd to false when ERL_DIST_PORT is used\n\tif [ ! \"$START_EPMD\" ]; then\n\t    EXTRA_DIST_ARGS=\"-erl_epmd_port ${ERL_DIST_PORT} -start_epmd false\"\n\telse\n\t    EXTRA_DIST_ARGS=\"-erl_epmd_port ${ERL_DIST_PORT}\"\n\tfi\n    else\n\tERL_DIST_PORT_WARNING=\"ERL_DIST_PORT is set and used to set the port, but doing so on ERTS version ${ERTS_VSN} means remsh/rpc will not work for this release\"\n\tif ! command -v logger > /dev/null 2>&1\n\tthen\n\t    echo \"WARNING: ${ERL_DIST_PORT_WARNING}\"\n\telse\n\t    logger -p warning -t \"${REL_NAME}[$$]\" \"${ERL_DIST_PORT_WARNING}\"\n\tfi\n\tEXTRA_DIST_ARGS=\"-kernel inet_dist_listen_min ${ERL_DIST_PORT} -kernel inet_dist_listen_max ${ERL_DIST_PORT}\"\n    fi\nfi\n\n# Force use of nodetool if proto_dist set as erl_call doesn't support proto_dist\nif [ \"$PROTO_DIST\" ]; then\n    ERL_RPC=relx_nodetool\nfi\n\n# Extract the target cookie\n# Do this before relx_get_nodename so we can use it and not create a ~/.erlang.cookie\nif [ -n \"$RELX_COOKIE\" ]; then\n    COOKIE=\"$RELX_COOKIE\"\nelse\n    COOKIE_ARG=\"$(grep '^-setcookie' \"$VMARGS_PATH\" || true)\"\n    DEFAULT_COOKIE_FILE=\"$HOME/.erlang.cookie\"\n    if [ -z \"$COOKIE_ARG\" ]; then\n\tif [ -f \"$DEFAULT_COOKIE_FILE\" ]; then\n\t    COOKIE=\"$(cat \"$DEFAULT_COOKIE_FILE\")\"\n\telse\n\t    echo \"No cookie is set or found. 
This limits the scripts functionality, installing, upgrading, rpc and getting a list of versions will not work.\"\n\tfi\n    else\n\t# Extract cookie name from COOKIE_ARG\n\tCOOKIE=\"$(echo \"$COOKIE_ARG\" | awk '{print $2}')\"\n    fi\nfi\n\n# User can specify an sname without @hostname\n# This will fail when creating remote shell\n# So here we check for @ and add @hostname if missing\ncase \"${NAME}\" in\n    *@*) ;;                             # Nothing to do\n    *)   NAME=${NAME}@$(relx_get_nodename);;  # Add @hostname\nesac\n\n# Export the variable so that it's available in the 'eval' calls\nexport NAME\n\n# create a variable of just the hostname part of the nodename\nRELX_HOSTNAME=$(echo \"${NAME}\" | cut -d'@' -f2)\n\ntest -z \"$PIPE_DIR\" && PIPE_BASE_DIR='/tmp/erl_pipes/'\nPIPE_DIR=\"${PIPE_DIR:-/tmp/erl_pipes/$NAME/}\"\n\ncd \"$ROOTDIR\"\n\nif is_arweave_developer_command \"${1}\"\nthen\n  relx_usage\n  exit 1\nfi\n\n# Check the first argument for instructions\ncase \"$1\" in\n    check)\n\tshift\n\tarweave_check ${*}\n\t;;\n\n    version)\n\tshift\n\tarweave_version ${*}\n\t;;\n\n    daemon|daemon_boot)\n\tarweave_check\n\tcase \"$1\" in\n\t    daemon)\n\t\tshift\n\t\tSTART_OPTION=\"console\"\n\t\tHEART_OPTION=\"daemon\"\n\t\t;;\n\t    daemon_boot)\n\t\tshift\n\t\tSTART_OPTION=\"console_boot\"\n\t\tHEART_OPTION=\"daemon_boot\"\n\t\t;;\n\tesac\n\n\tARGS=\"$(printf \"'%s' \" \"$@\")\"\n\n\t# shellcheck disable=SC2174\n\ttest -z \"$PIPE_BASE_DIR\" || mkdir -m 1777 -p \"$PIPE_BASE_DIR\"\n\tmkdir -p \"$PIPE_DIR\"\n\tif [ ! -w \"$PIPE_DIR\" ]\n\tthen\n\t    echo \"failed to start, user '$USER' does not have write privileges on '$PIPE_DIR', either delete it or run node as a different user\"\n\t    exit 1\n\tfi\n\n\t# Make sure log directory exists\n\tmkdir -p \"$RUNNER_LOG_DIR\"\n\n\trelx_run_hooks \"$PRE_START_HOOKS\"\n\n\t# check system configuration\n\tarweave_check\n\n\t\"$BINDIR/run_erl\" \\\n\t  -daemon \"$PIPE_DIR\" \\\n\t  \"$RUNNER_LOG_DIR\" \\\n\t  \"exec \\\"$RELEASE_ROOT_DIR/bin/$REL_NAME\\\" \\\"$START_OPTION\\\" ${ARGS}\"\n\n\t# wait for node to be up before running hooks\n\twhile ! erl_rpc erlang is_alive > /dev/null 2>&1\n\tdo\n\t    sleep 1\n\tdone\n\n\trelx_run_hooks \"$POST_START_HOOKS\"\n\t;;\n\n    stop)\n\trelx_run_hooks \"$PRE_STOP_HOOKS\"\n\t# Wait for the node to completely stop...\n\tPID=\"$(relx_get_pid)\"\n\tif ! erl_rpc init stop > /dev/null 2>&1; then\n\t    exit 1\n\tfi\n\twhile kill -s 0 \"$PID\" 2>/dev/null;\n\tdo\n\t    sleep 1\n\tdone\n\n\t# wait for node to be down before running hooks\n\twhile erl_rpc erlang is_alive > /dev/null 2>&1\n\tdo\n\t    sleep 1\n\tdone\n\n\trelx_run_hooks \"$POST_STOP_HOOKS\"\n\t;;\n\n    restart)\n\t## Restart the VM without exiting the process\n\tif ! erl_rpc init restart > /dev/null; then\n\t    exit 1\n\tfi\n\t;;\n\n    reboot)\n\t## Restart the VM completely (uses heart to restart it)\n\tif ! erl_rpc init reboot > /dev/null; then\n\t    exit 1\n\tfi\n\t;;\n\n    reload)\n\t## Reload only arweave application in the vm\n\tRELX_RPC_TIMEOUT=3600\n\n\t# first arweave and prometheus application must be stopped\n\tif erl_eval '[application:stop(A) || A <- [arweave, prometheus]].'\n\tthen\n\t\t# then arweave application can be restarted\n\t\terl_eval 'application:ensure_all_started(arweave).'\n\t\ttest $? -ne 0 && exit 1\n\t\texit $?\n\telse\n\t    exit 1\n\tfi\n\t;;\n    pid)\n\t## Get the VM's pid\n\tif ! 
relx_get_pid; then\n\t    exit 1\n\tfi\n\t;;\n\n    ping)\n\t## See if the VM is alive\n\tping_or_exit\n\n\techo \"pong\"\n\t;;\n\n    escript)\n\t## Run an escript under the node's environment\n\tshift\n\tif ! relx_escript \"$@\"; then\n\t    exit 1\n\tfi\n\t;;\n\n    daemon_attach|attach)\n\tcase \"$1\" in\n\t    attach)\n\t\t# TODO, add here the right annoying message asking users to consider\n\t\t# instead using systemd or some such other init system\n\t\techo \"'attach' has been deprecated, replaced by 'daemon_attach' and will be removed in the short-term, please consult rebar3.org on why you should be\"\\\n\t\t     \"using 'foreground' and an init tool such as 'systemd'\"\n\t\t;;\n\tesac\n\t# Make sure a node IS running\n\tping_or_exit\n\n\tif [ ! -w \"$PIPE_DIR\" ]\n\tthen\n\t    echo \"failed to attach, user '$USER' does not have sufficient privileges on '$PIPE_DIR', please run node as a different user\"\n\t    exit 1\n\tfi\n\n\tshift\n\texec \"$BINDIR/to_erl\" \"$PIPE_DIR\"\n\t;;\n\n    remote_console|remote|remsh)\n\t# Make sure a node IS running\n\tping_or_exit\n\n\tshift\n\trelx_rem_sh\n\t;;\n\n    console|console_clean|console_boot|foreground|foreground_clean|benchmark|wallet|doctor)\n\tFOREGROUNDOPTIONS=\"\"\n\t# .boot file typically just $REL_NAME (ie, the app name)\n\t# however, for debugging, sometimes start_clean.boot is useful.\n\t# For e.g. 'setup', one may even want to name another boot script.\n\tsubcommand=\"${1}\"\n\tcase \"$1\" in\n\t    console)\n\t\tshift\n\t\tif [ -f \"$REL_DIR/$REL_NAME.boot\" ]; then\n\t\t  BOOTFILE=\"$REL_DIR/$REL_NAME\"\n\t\telse\n\t\t  BOOTFILE=\"$REL_DIR/start\"\n\t\tfi\n\t\tARGS=${*}\n\t\t;;\n\t    foreground|foreground_clean|benchmark|wallet|doctor)\n\t\tshift\n\t\t# start up the release in the foreground for use by runit\n\t\t# or other supervision services\n\t\tif [ -f \"$REL_DIR/$REL_NAME.boot\" ]; then\n\t\t  BOOTFILE=\"$REL_DIR/$REL_NAME\"\n\t\telse\n\t\t  BOOTFILE=\"$REL_DIR/start\"\n\t\tfi\n\t\tFOREGROUNDOPTIONS=\"-noinput +Bd\"\n\n\t\t# all these arweave commands are being executed in\n\t\t# foreground mode, ARGS will be modified.\n\t\tcase ${subcommand} in\n\t\t    benchmark)\n\t\t\tarweave_benchmark ${*}\n\t\t\tARGS=$(arweave_benchmark ${*})\n\t\t\t;;\n\t\t    wallet)\n\t\t\tarweave_wallet ${*}\n\t\t\tARGS=$(arweave_wallet ${*})\n\t\t\t;;\n\t\t    doctor)\n\t\t\tarweave_doctor ${*}\n\t\t\tARGS=$(arweave_doctor ${*})\n\t\t\t;;\n\t\t    foreground_clean)\n\t\t\tARWEAVE_OPTS=\"\"\n\t\t\tARGS=${*}\n\t\t\t;;\n\t\t    *)\n\t\t\tARGS=${*}\n\t\t\t;;\n\t\tesac\n\t\t;;\n\t    console_clean)\n\t\tshift\n\t\t# if not set by user use interactive mode for console_clean\n\t\tCODE_LOADING_MODE=\"${CODE_LOADING_MODE:-interactive}\"\n\t\tBOOTFILE=\"$REL_DIR/start_clean\"\n\t\tARGS=${*}\n\t\t;;\n\t    console_boot)\n\t\tshift\n\t\tBOOTFILE=\"$1\"\n\t\tshift\n\t\tARGS=${*}\n\t\t;;\n\tesac\n\n\t# split the argument in two parts based on the previously\n\t# passed args, LOCAL_PARAMS is for arweave, VM_PARAMS is for\n\t# the vm.\n\tparse_args ${ARGS}\n\tARGS=${LOCAL_PARAMS}\n\n\t# if not set by user or console_clean use embedded\n\tCODE_LOADING_MODE=\"${CODE_LOADING_MODE:-embedded}\"\n\n\t# Setup beam-required vars\n\tEMU=\"beam\"\n\tPROGNAME=\"${0#*/}\"\n\n\texport EMU\n\texport PROGNAME\n\n\t# Dump environment info for logging purposes\n\t# shellcheck disable=SC2086\n\techo \"Exec: $BINDIR/erlexec\" \\\n\t    ${VM_PARAMS} \\\n\t    ${EXTRA_DIST_ARGS} \\\n\t    ${FOREGROUNDOPTIONS} \\\n\t    -boot \"$BOOTFILE\" \\\n\t    -mode \"$CODE_LOADING_MODE\" 
\\\n\t    -boot_var SYSTEM_LIB_DIR \"$SYSTEM_LIB_DIR\" \\\n\t    -config \"$RELX_CONFIG_PATH\" \\\n\t    -args_file \"$VMARGS_PATH\" \\\n\t    -- ${ARWEAVE_OPTS} ${ARGS}\n\techo \"Root: $ROOTDIR\"\n\n\t# Log the startup\n\techo \"$RELEASE_ROOT_DIR\"\n\tif ! command -v logger > /dev/null 2>&1\n\tthen\n\t    echo \"${REL_NAME}[$$] Starting up\"\n\telse\n\t    logger -t \"${REL_NAME}[$$]\" \"Starting up\"\n\tfi\n\n\trelx_run_hooks \"$PRE_START_HOOKS\"\n\n\t# check system configuration\n\tarweave_check\n\n\t# Start the VM\n\t# The variabre FOREGROUNDOPTIONS must NOT be quoted.\n\t# shellcheck disable=SC2086\n\texec \"$BINDIR/erlexec\" \\\n\t    ${VM_PARAMS} \\\n\t    ${EXTRA_DIST_ARGS} \\\n\t    ${FOREGROUNDOPTIONS} \\\n\t    -boot \"$BOOTFILE\" \\\n\t    -mode \"$CODE_LOADING_MODE\" \\\n\t    -boot_var SYSTEM_LIB_DIR \"$SYSTEM_LIB_DIR\" \\\n\t    -config \"$RELX_CONFIG_PATH\" \\\n\t    -args_file \"$VMARGS_PATH\" \\\n\t    -- ${ARWEAVE_OPTS} ${ARGS}\n\t# exec will replace the current image and nothing else gets\n\t# executed from this point on, this explains the absence\n\t# of the pre start hook\n\t;;\n    rpc)\n\t# Make sure a node IS running\n\tping_or_exit\n\n\tshift\n\n\terl_rpc \"$@\"\n\t;;\n    eval)\n\t# Make sure a node IS running\n\tping_or_exit\n\n\tshift\n\n\terl_eval \"$@\"\n\t;;\n    status)\n\t# Make sure a node IS running\n\tping_or_exit\n\n\t# shellcheck disable=SC1090,SC2240\n\t[ -n \"${STATUS_HOOK}\" ] && [ -f \"$SCRIPT_DIR/$STATUS_HOOK\" ] && . \"$SCRIPT_DIR/$STATUS_HOOK\" \"$@\"\n\t;;\n    tunnel)\n\t# prepare a tunnel to the remote node\n\tshift\n\ttarget=\"${1}\"\n\t# if epmd is running locally, try to kill it\n\tpgrep epmd && pkill epmd\n\n\t# fetch the port of the remote arweave node\n\tREMOTE_EPMD_PORT=$(ssh ${target} \"epmd -names | sed 1d | awk '$2==\\\"^arweave$\\\" {print $NF}'\")\n\n\t# create a local forward tunnel\n\tssh -L ${ERL_EPMD_PORT}:localhost:${ERL_EPMD_PORT} \\\n\t\t-L ${REMOTE_EPMD_PORT=}:localhost:${REMOTE_EPMD_PORT} \\\n\t\t${target} echo \"epmd tunnel is ready on localhost:${REMOTE_EPMD_PORT}\"\n\t;;\n    remote_observer)\n\t# start observer locally, assuming a tunnel has been previouly\n\t# created\n\tOBSERVER_ID=$(($(date \"+%N\")%6421))\n\terl -name observer-${OBSERVER_ID}@127.0.0.1 \\\n\t\t-setcookie ${COOKIE} \\\n\t\t-hidden -run observer\n\t;;\n    test)\n\tshift\n\tarweave_test ${*}\n\t;;\n    test_e2e)\n\tshift\n\tarweave_e2e ${*}\n\t;;\n    help)\n\tif [ -z \"$2\" ]; then\n\t    relx_usage\n\t    exit 1\n\tfi\n\n\tTOPIC=\"$2\"; shift\n\trelx_usage \"$TOPIC\"\n\t;;\n    *)\n\t# check for extension\n\tIS_EXTENSION=$(relx_is_extension \"$1\")\n\tif [ \"$IS_EXTENSION\" = \"1\" ]; then\n\t    EXTENSION_SCRIPT=$(relx_get_extension_script \"$1\")\n\t    shift\n\t    relx_run_extension \"$EXTENSION_SCRIPT\" \"$@\"\n\t    # all extension scripts are expected to exit\n\telse\n\t    relx_usage \"$1\"\n\tfi\n\texit 1\n\t;;\nesac\n\nexit 0\n"
  },
  {
    "path": "priv/templates/vm_args",
    "content": "######################################################################\n## Default vm arguments templates used by Arweave.\n##\n## Some useful links to configure emulator flags:\n##   https://www.erlang.org/doc/apps/erts/erl_cmd.html#emulator-flags\n##\n## Some useful links on Erlang's memory management:\n##   https://www.erlang-factory.com/static/upload/media/139454517145429lukaslarsson.pdf\n##   https://www.youtube.com/watch?v=nuCYL0X-8f4\n##\n## Note for testing it's sometimes useful to limit the number of\n## schedulers that will be used, to do that: +S 16:16\n######################################################################\n## Name of the node\n-name ${ARNAME:-{{ release_name }}@127.0.0.1}\n\n## Cookie for distributed erlang\n-setcookie ${ARCOOKIE:-{{ release_name }}}\n\n## This is now the default as of OTP-26\n## Multi-time warp mode in combination with time correction is the\n## preferred configuration.\n## It is only not the default in Erlang itself because it could break\n## older systems.\n# +C multi_time_warp\n\n## Uncomment the following line if running in a container.\n## +sbwt none\n\n## Increase number of concurrent ports/sockets\n##-env ERL_MAX_PORTS 4096\n\n## Tweak GC to run more often\n##-env ERL_FULLSWEEP_AFTER 10\n\n## +B [c | d | i]\n## Option c makes Ctrl-C interrupt the current shell instead of\n## invoking the emulator break\n## handler. Option d (same as specifying +B without an extra option)\n## disables the break handler. # Option i makes the emulator ignore any\n## break signal.\n## If option c is used with oldshell on Unix, Ctrl-C will restart the\n## shell process rather than\n## interrupt it.\n## Disable the emulator break handler\n## it easy to accidentally type ctrl-c when trying\n## to reach for ctrl-d. ctrl-c on a live node can\n## have very undesirable results\n+Bi\n\n## Enables the kernel poll functionality.\n+Ktrue\n\n## +A1024: emulator number of threads in the Async long thread pool for linked\n## in drivers, mostly unused\n+A1024\n\n## +SDio1024: emulator Scheduler thread count for Dirty I/O, 200\n## threads for file access\n+SDio1024\n\n## +MBsbct 103424: binary_alloc singleblock carrier threshold (in KiB)\n## (101MiB, default 512KiB). Blocks larger than the threshold are\n## placed in singleblock carriers. However multi-block carriers are\n## more efficient. Since we have so many 100MiB binary blocks due to\n## the recall range, set the threshold so that they are all placed in\n## multi-block carriers and not single-block carriers.\n+MBsbct 103424\n\n## +MBsmbcs 10240: binary_alloc smallest multi-block carrier size (in\n## KiB) (10MiB, default 256KiB).\n+MBsmbcs 10240\n\n## MBlmbcs 410629: binary_alloc largest multi-block carrier size (in\n## KiB) (~401MiB, default 5MiB). Set so that a single multi-block\n## carrier can hold roughly 4 full recall ranges.\n+MBlmbcs 410629\n\n## +MBmmsbc 1024: binary_alloc maximum mseg_alloc singleblock carriers\n## (1024 carriers, default 256). Once exhausted, the emulator will start\n## using sys_alloc rather than mseg_alloc for singleblock carriers.\n## This can be slower.\n+MBmmmbc 1024\n\n## +MBas aobf: emulator Memory Binary Allocation Strategy set to Address\n## Order Best Fit.\n## see: https://www.erlang.org/doc/man/erts_alloc.html#strategy\n+MBas aobf\n\n## Sets scheduler busy wait threshold. Defaults to medium. 
The\n## threshold determines how long schedulers are to busy wait when\n## running out of work before going to sleep.\n+sbwt very_long\n\n## Sets dirty scheduler busy wait threshold.\n+sbwtdcpu very_long\n\n## Sets dirty IO scheduler busy wait threshold\n+sbwtdio very_long\n\n##  Sets scheduler wakeup threshold.\n+swt very_low\n\n##  Sets dirty scheduler wakeup threshold.\n+swtdcpu very_low\n\n## Sets dirty IO scheduler wakeup threshold.\n+swtdio very_low\n"
  },
  {
    "path": "rebar.config",
    "content": "{minimum_otp_vsn, \"26.0\"}.\n\n{deps, [\n\t{b64fast, {git, \"https://github.com/ArweaveTeam/b64fast.git\", {ref, \"58f0502e49bf73b29d95c6d02460d1fb8d2a5273\"}}},\n\t{jiffy, {git, \"https://github.com/ArweaveTeam/jiffy.git\", {ref, \"073da726e07bafb5d140020a9e8765c703da3ef7\"}}},\n\t{ranch, \"1.8.1\"},\n\t{gun, \"2.2.0\"},\n\t{cowboy, \"2.12.0\"},\n\t{cowlib, \"2.15.0\"},\n\t{prometheus, \"4.11.0\"},\n\t{prometheus_process_collector,\n          {git, \"https://github.com/ArweaveTeam/prometheus_process_collector.git\",\n            {ref, \"1362b608ffa4748cdf5dba92b85c981218fd4fa2\"}}},\n\t{prometheus_cowboy, \"0.1.8\"},\n\t{rocksdb,\n\t\t{git, \"https://github.com/ArweaveTeam/erlang-rocksdb.git\",\n\t\t\t{ref, \"0e0b2f051e8f5720ceaea19dc51a7561f2472279\"}}},\n\t{recon, {git, \"https://github.com/ferd/recon.git\", {tag, \"2.5.6\"}}},\n\t{yamerl, {git, \"https://github.com/ArweaveTeam/yamerl\", {ref, \"bdb3b032f972a397c527667254393cd3c8942df3\"}}},\n\t{tomerl, {git, \"https://github.com/ArweaveTeam/tomerl\", {ref, \"be6d7ccf9fe357c5ec3b6411d2245a21b97e48d7\"}}}\n]}.\n\n{overrides, [ \n\t{override, b64fast, [\n\t\t{plugins, [{pc, {git, \"https://github.com/blt/port_compiler.git\", {tag, \"v1.12.0\"}}}]},\n\t\t{artifacts, [\"priv/b64fast.so\"]},\n\t\t{provider_hooks, [\n\t\t\t{post, [\n\t\t\t\t{compile, {pc, compile}},\n\t\t\t\t{clean, {pc, clean}}\n\t\t\t]}\n\t\t]}\n\t]},\n\t% this is a quick and dirty patch due to rebar3\n\t% versioning issue.\n\t% see: https://github.com/erlang/rebar3/issues/2364\n\t{override, gun, [\n\t\t{deps, [\n\t\t\t{cowlib,\".*\",{git,\"https://github.com/ninenines/cowlib\",{tag,\"2.15.0\"}}}\n\t\t]}\n\t]}\n]}.\n\n{relx, [\n\t{release, {arweave, \"2.9.6-alpha1\"}, [\n\t\tyamerl,\n\t\ttomerl,\n\t\tarweave_config,\n                {arweave_limiter, load},\n\t\t{arweave_diagnostic, load},\n\t\t{arweave, load},\n\t\t{recon, load},\n\t\tb64fast,\n\t\tjiffy,\n\t\trocksdb,\n\t\tprometheus_process_collector\n\t]},\n\n\t{sys_config, \"./config/sys.config\"},\n\t{vm_args_src, \"./config/vm.args.src\"},\n\n\t% dynamically generated overlay variable, required for\n\t% extra variables during script generation.\n\t{overlay_vars, \"_vars.config\"},\n\t{overlay, [\n\t\t{template, \"priv/templates/extended_bin\", \"bin/arweave\"},\n\t\t{template, \"priv/templates/extended_bin\", \"bin/arweave-{{release_version}}\"},\n\t\t{template, \"priv/templates/extended_bin\", \"{{output_dir}}/{{release_version}}/bin/arweave\"},\n\t\t{template, \"priv/templates/vm_args\", \"{{output_dir}}/{{release_version}}/vm.args\"},\n\t\t{template, \"priv/templates/vm_args\", \"releases/{{ release_version }}/vm.args\"},\n\t\t{copy, \"bin/start\", \"bin/start\"},\n\t\t{copy, \"bin/stop\", \"bin/stop\"},\n\t\t{copy, \"bin/console\", \"bin/console\"},\n\t\t{copy, \"bin/create-wallet\", \"bin/create-wallet\"},\n\t\t{copy, \"bin/benchmark-hash\", \"bin/benchmark-hash\"},\n\t\t{copy, \"bin/benchmark-packing\", \"bin/benchmark-packing\"},\n\t\t{copy, \"bin/benchmark-vdf\", \"bin/benchmark-vdf\"},\n\t\t{copy, \"bin/data-doctor\", \"bin/data-doctor\"},\n\t\t{copy, \"bin/logs\", \"bin/logs\"},\n\t\t{copy, \"bin/debug-logs\", \"bin/debug-logs\"},\n\t\t{copy, \"genesis_data/not_found.html\", \"genesis_data/not_found.html\"},\n\t\t{copy, \"genesis_data/hash_list_1_0\", \"genesis_data/hash_list_1_0\"},\n\t\t{copy, \"genesis_data/genesis_wallets.csv\", \"genesis_data/genesis_wallets.csv\"},\n\t\t{copy, \"genesis_data/genesis_txs/ZC44Bxrx6AtNJYLwhvpALuINZRBXklme3tpeJbJ2rdw.json\", 
\"genesis_data/genesis_txs/ZC44Bxrx6AtNJYLwhvpALuINZRBXklme3tpeJbJ2rdw.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/6NaT-Mz8QAiQS8atFaOu_ezqZnfu_XaQb-Grng-hvHc.json\", \"genesis_data/genesis_txs/6NaT-Mz8QAiQS8atFaOu_ezqZnfu_XaQb-Grng-hvHc.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/1qVeYpf2sY8Qkz0iVomVPVb15NA7QUtF3eFDoMwa8PI.json\", \"genesis_data/genesis_txs/1qVeYpf2sY8Qkz0iVomVPVb15NA7QUtF3eFDoMwa8PI.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/6GNIVQ-23jPJTxQkQITbSKE7SYm6J3MF4qbSgH3-AXU.json\", \"genesis_data/genesis_txs/6GNIVQ-23jPJTxQkQITbSKE7SYm6J3MF4qbSgH3-AXU.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/3T6mnguMWl8GeiqZWiBZrGXHHtwm12mIWciusoSACkQ.json\", \"genesis_data/genesis_txs/3T6mnguMWl8GeiqZWiBZrGXHHtwm12mIWciusoSACkQ.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/EQh5rYFJ5Z5yESi4DIuvl2n6iVZS899tA6V6rf2Xwhk.json\", \"genesis_data/genesis_txs/EQh5rYFJ5Z5yESi4DIuvl2n6iVZS899tA6V6rf2Xwhk.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/128KaPgVaZyrl8Vuzt795ZlWidERzih15pNDAJgahI0.json\", \"genesis_data/genesis_txs/128KaPgVaZyrl8Vuzt795ZlWidERzih15pNDAJgahI0.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/xiQYsaUMtlIq9DvTyucB4gu0BFC-qnFRIDclLv8wUT8.json\", \"genesis_data/genesis_txs/xiQYsaUMtlIq9DvTyucB4gu0BFC-qnFRIDclLv8wUT8.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/I6s8Z6gEPLQABFstkCoLVv_gdQNGb-uuMMut-R7q2hA.json\", \"genesis_data/genesis_txs/I6s8Z6gEPLQABFstkCoLVv_gdQNGb-uuMMut-R7q2hA.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/kXu3jTQwgYsphIUFbaVGg9rNiil96fNjw0RBa6oPRtU.json\", \"genesis_data/genesis_txs/kXu3jTQwgYsphIUFbaVGg9rNiil96fNjw0RBa6oPRtU.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/CUu1gtu6L5tJxkOAu13tNBGDKECohV8M4qgCOOPNtas.json\", \"genesis_data/genesis_txs/CUu1gtu6L5tJxkOAu13tNBGDKECohV8M4qgCOOPNtas.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/g6TUtTIi_rwlAHNuO6ACsQqIChWACugTPmZxaaJltDM.json\", \"genesis_data/genesis_txs/g6TUtTIi_rwlAHNuO6ACsQqIChWACugTPmZxaaJltDM.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/L9J9SkTWI_Fx5KhujeWGokIchHTSFlSIC0blr0JIz80.json\", \"genesis_data/genesis_txs/L9J9SkTWI_Fx5KhujeWGokIchHTSFlSIC0blr0JIz80.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/AX6ZZxDpFlNhoN5Am5Hi4DER4zOBGVnQm_bse5PfHNw.json\", \"genesis_data/genesis_txs/AX6ZZxDpFlNhoN5Am5Hi4DER4zOBGVnQm_bse5PfHNw.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/3MMMUrHDmjbCn_-TOZJJHvjLBp8PffZKUNfm_Ziy0Vk.json\", \"genesis_data/genesis_txs/3MMMUrHDmjbCn_-TOZJJHvjLBp8PffZKUNfm_Ziy0Vk.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/2vn7V0FR0JMXrVbj3Ofvc_2nvrFYCCpRoFjc7UYpJcA.json\", \"genesis_data/genesis_txs/2vn7V0FR0JMXrVbj3Ofvc_2nvrFYCCpRoFjc7UYpJcA.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/daTnztzTMlA8Ras9XgQ05Fr9ZYwOg4-UDfjW875yQeQ.json\", \"genesis_data/genesis_txs/daTnztzTMlA8Ras9XgQ05Fr9ZYwOg4-UDfjW875yQeQ.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/_QEE09XylMYgab9MYPvrrMy7v1jKWh0bGwqFvsBsO8s.json\", \"genesis_data/genesis_txs/_QEE09XylMYgab9MYPvrrMy7v1jKWh0bGwqFvsBsO8s.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/5WKzIeQrDGC86IQvl2NhRtgPNKHGRA9oyjRByV1F7p4.json\", \"genesis_data/genesis_txs/5WKzIeQrDGC86IQvl2NhRtgPNKHGRA9oyjRByV1F7p4.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/Tnf6b1F67AEV2r9Flj8ktSSHYoV8SeL9dFvHRkavlZo.json\", \"genesis_data/genesis_txs/Tnf6b1F67AEV2r9Flj8ktSSHYoV8SeL9dFvHRkavlZo.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/m1Vv28IVJIuYiToBhxFVp3dA47je3L8WkzSjggAWXAo.json\", \"genesis_data/genesis_txs/m1Vv28IVJIuYiToBhxFVp3dA47je3L8WkzSjggAWXAo.json\"},\n\t\t{copy, 
\"genesis_data/genesis_txs/iPb5JLzNajAzUNByVeIGSEPR0rzGOV5iIYjWpi99APQ.json\", \"genesis_data/genesis_txs/iPb5JLzNajAzUNByVeIGSEPR0rzGOV5iIYjWpi99APQ.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/KOm2FJzmNXa_yjYC-58DkysCdk7FRFMcRmBx3DF6S9A.json\", \"genesis_data/genesis_txs/KOm2FJzmNXa_yjYC-58DkysCdk7FRFMcRmBx3DF6S9A.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/R0Mhun4e-WmLLGxnJq4SDTRqyNvTDTKC-uXuol1s63A.json\", \"genesis_data/genesis_txs/R0Mhun4e-WmLLGxnJq4SDTRqyNvTDTKC-uXuol1s63A.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/g8ZQaQTNUbg-jGeE61og18FrGqpFeZxjFDypGuhT7zI.json\", \"genesis_data/genesis_txs/g8ZQaQTNUbg-jGeE61og18FrGqpFeZxjFDypGuhT7zI.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/DC6gmByeCki7uyXHJhX_A9x3pkMgmJ8Tv6wDRnh7vGs.json\", \"genesis_data/genesis_txs/DC6gmByeCki7uyXHJhX_A9x3pkMgmJ8Tv6wDRnh7vGs.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/y-k4KjdSmwYmIugoObrtx5JWYczlEZBzwBHGMLqNP-0.json\", \"genesis_data/genesis_txs/y-k4KjdSmwYmIugoObrtx5JWYczlEZBzwBHGMLqNP-0.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/OIOqGvvuafD_5J9QzfxyPiNlnqzIcL96i6u4PTUeDmA.json\", \"genesis_data/genesis_txs/OIOqGvvuafD_5J9QzfxyPiNlnqzIcL96i6u4PTUeDmA.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/mcFln0_6FIuLwE9GtMRzmdQts4QALV3dxQkXdgSdO2s.json\", \"genesis_data/genesis_txs/mcFln0_6FIuLwE9GtMRzmdQts4QALV3dxQkXdgSdO2s.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/R2h2i6y-KFxuHukxmHIjSncPZSiS4tpuzH0tD1NAooI.json\", \"genesis_data/genesis_txs/R2h2i6y-KFxuHukxmHIjSncPZSiS4tpuzH0tD1NAooI.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/rC7TOXwflo7w9Ky0ljTYlzdbR0A3g2GVRbRJbIIuBfY.json\", \"genesis_data/genesis_txs/rC7TOXwflo7w9Ky0ljTYlzdbR0A3g2GVRbRJbIIuBfY.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/UdCfZG1jBYUKgeLc13zjRxmQHO4_13B-NigE57jmJ5A.json\", \"genesis_data/genesis_txs/UdCfZG1jBYUKgeLc13zjRxmQHO4_13B-NigE57jmJ5A.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/8gTAwQ3f17PKI9KCX1cjuXCs9F8Hcdz8KyhsecKuCJ0.json\", \"genesis_data/genesis_txs/8gTAwQ3f17PKI9KCX1cjuXCs9F8Hcdz8KyhsecKuCJ0.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/HFUR5ZwLihdaonJWHRHBuLay6cw8ZMV0bM870xhE6Qk.json\", \"genesis_data/genesis_txs/HFUR5ZwLihdaonJWHRHBuLay6cw8ZMV0bM870xhE6Qk.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/QDbVk-efwdVbHDGL1vZO3mQ3g65ol5RR-1wOvPLUkkE.json\", \"genesis_data/genesis_txs/QDbVk-efwdVbHDGL1vZO3mQ3g65ol5RR-1wOvPLUkkE.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/F5R2EA-gM8AtQ9_NymKwtr_Im3_ljMR38ndzCs5c77Y.json\", \"genesis_data/genesis_txs/F5R2EA-gM8AtQ9_NymKwtr_Im3_ljMR38ndzCs5c77Y.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/O6qlkPRgr7H3WLHjVov-CTm-q66Q4TuvhP6GC-c5ZjY.json\", \"genesis_data/genesis_txs/O6qlkPRgr7H3WLHjVov-CTm-q66Q4TuvhP6GC-c5ZjY.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/0qob-AeHGTS5EDamY6Mtsnxf1MCyUk18l09bqHAYQjU.json\", \"genesis_data/genesis_txs/0qob-AeHGTS5EDamY6Mtsnxf1MCyUk18l09bqHAYQjU.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/VuXQZjhUaZ2Hyi6Pl8_VTOu2mUWjoEemYb5TKXPFOS0.json\", \"genesis_data/genesis_txs/VuXQZjhUaZ2Hyi6Pl8_VTOu2mUWjoEemYb5TKXPFOS0.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/Osgzf9EDK9j7TMlqSJ_5Y1rzZgOA6qfR7ktiakLPk4A.json\", \"genesis_data/genesis_txs/Osgzf9EDK9j7TMlqSJ_5Y1rzZgOA6qfR7ktiakLPk4A.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/1xh_NCIFYbprcgNM4AVvZ47jRxsQmJYvCG-L-oEK4iE.json\", \"genesis_data/genesis_txs/1xh_NCIFYbprcgNM4AVvZ47jRxsQmJYvCG-L-oEK4iE.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/DpEoi9F4g952ajGuT4g1HWY-xndyE77dn0VfdNXkrC8.json\", 
\"genesis_data/genesis_txs/DpEoi9F4g952ajGuT4g1HWY-xndyE77dn0VfdNXkrC8.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/Z6IgRWClifhTSnomxJet2WLw8UUaslmqAi2nynj3Ke4.json\", \"genesis_data/genesis_txs/Z6IgRWClifhTSnomxJet2WLw8UUaslmqAi2nynj3Ke4.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/Xjz72yVLd_Qzl8_GfSPqZA1MAkxxhjr2Lsf2tGCj_ZQ.json\", \"genesis_data/genesis_txs/Xjz72yVLd_Qzl8_GfSPqZA1MAkxxhjr2Lsf2tGCj_ZQ.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/r8Yq7Lvx0FjFYyXBLn29UM5Evv4AtGLZ00LCtE_hC60.json\", \"genesis_data/genesis_txs/r8Yq7Lvx0FjFYyXBLn29UM5Evv4AtGLZ00LCtE_hC60.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/7kT0is0QnxdjqkPi0BKamhLW6z6_SK55LMAVKQC6F0M.json\", \"genesis_data/genesis_txs/7kT0is0QnxdjqkPi0BKamhLW6z6_SK55LMAVKQC6F0M.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/-wzIQJ19Hq8Zyf1L85Ga3uGTrdWA2W-UNyr8aH4a4iE.json\", \"genesis_data/genesis_txs/-wzIQJ19Hq8Zyf1L85Ga3uGTrdWA2W-UNyr8aH4a4iE.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/QDBM2PowqCX0eUCKzgV-DgdzeDz5TXLKYS3HVXLyqoo.json\", \"genesis_data/genesis_txs/QDBM2PowqCX0eUCKzgV-DgdzeDz5TXLKYS3HVXLyqoo.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/ZAk05et7CFN69E9NwET2mSRI0ISRigjMEjcy8kbO-Y8.json\", \"genesis_data/genesis_txs/ZAk05et7CFN69E9NwET2mSRI0ISRigjMEjcy8kbO-Y8.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/LJ2QSdjHftgyCOSgy9Ub0OkTTN25rxCY7D7mt6u8Uy8.json\", \"genesis_data/genesis_txs/LJ2QSdjHftgyCOSgy9Ub0OkTTN25rxCY7D7mt6u8Uy8.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/luyHFFFOvjKPqi6nVrxngcHaQ3RwbMDMqVTLqPagHy0.json\", \"genesis_data/genesis_txs/luyHFFFOvjKPqi6nVrxngcHaQ3RwbMDMqVTLqPagHy0.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/eJ2aSQ4nm-i8XAZW2pcRq6GoEjW9K8EBM6w7rLiuSHw.json\", \"genesis_data/genesis_txs/eJ2aSQ4nm-i8XAZW2pcRq6GoEjW9K8EBM6w7rLiuSHw.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/SHxtj5_gLdJMI-6CcspsDbFBuU_74df3I4-sAJkAr6w.json\", \"genesis_data/genesis_txs/SHxtj5_gLdJMI-6CcspsDbFBuU_74df3I4-sAJkAr6w.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/CZ181FVir4NaSJ7JsVb50-xCaZtd3dmKbDer7jpTSyI.json\", \"genesis_data/genesis_txs/CZ181FVir4NaSJ7JsVb50-xCaZtd3dmKbDer7jpTSyI.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/Mk8XJgQPSOIsx_QX_XDPxdEG5NcKgO92q9i37uLZsrs.json\", \"genesis_data/genesis_txs/Mk8XJgQPSOIsx_QX_XDPxdEG5NcKgO92q9i37uLZsrs.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/Y0PLaTBQ73JXn_jHvldOKC3jdbqDbqTMkcW0x65_Jek.json\", \"genesis_data/genesis_txs/Y0PLaTBQ73JXn_jHvldOKC3jdbqDbqTMkcW0x65_Jek.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/G5FyMvm8E0_07vFgz-XISJN3VEviSrbtih9_Wptef9w.json\", \"genesis_data/genesis_txs/G5FyMvm8E0_07vFgz-XISJN3VEviSrbtih9_Wptef9w.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/fx1EmDF4yioha3ms_VbddDQjl4bt6pBLpFCESuEIT6E.json\", \"genesis_data/genesis_txs/fx1EmDF4yioha3ms_VbddDQjl4bt6pBLpFCESuEIT6E.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/EUMtkWCJU0L23RnhXKfQ1wtD3Jh2O-vpFnLcQXynoAQ.json\", \"genesis_data/genesis_txs/EUMtkWCJU0L23RnhXKfQ1wtD3Jh2O-vpFnLcQXynoAQ.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/BYJCPwCLpd9a5K1HFy5F6ZvnemPiPFtV4hz5wMHr1NI.json\", \"genesis_data/genesis_txs/BYJCPwCLpd9a5K1HFy5F6ZvnemPiPFtV4hz5wMHr1NI.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/5dsjbEwH2r-EWCkfOznV4JkCOLSK9vNY-0iqPr4RZUM.json\", \"genesis_data/genesis_txs/5dsjbEwH2r-EWCkfOznV4JkCOLSK9vNY-0iqPr4RZUM.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/71M1E7A4e0PFW_6C0gly77iCg7ykX17647i00eEiA-s.json\", \"genesis_data/genesis_txs/71M1E7A4e0PFW_6C0gly77iCg7ykX17647i00eEiA-s.json\"},\n\t\t{copy, 
\"genesis_data/genesis_txs/98kadyXY0OPfEZKeeZcCyQ7z5mRToZklK-D6f1a-Lxw.json\", \"genesis_data/genesis_txs/98kadyXY0OPfEZKeeZcCyQ7z5mRToZklK-D6f1a-Lxw.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/EvKHSfokNyuiTarFKOuQ_-SaBwtllGpQGc7IFkRfBfc.json\", \"genesis_data/genesis_txs/EvKHSfokNyuiTarFKOuQ_-SaBwtllGpQGc7IFkRfBfc.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/hRTkBAH0k74HlmlWXTWmetXcIFXvM_Zrz3i1JXULZSM.json\", \"genesis_data/genesis_txs/hRTkBAH0k74HlmlWXTWmetXcIFXvM_Zrz3i1JXULZSM.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/p9PJG5GkKZAxLyPJyDYw4_1CmhodHGGGqB785duwVwM.json\", \"genesis_data/genesis_txs/p9PJG5GkKZAxLyPJyDYw4_1CmhodHGGGqB785duwVwM.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/rTY6dpq4KEhZtB-5moP1mWN1CtrTKurv7QSY8wAN758.json\", \"genesis_data/genesis_txs/rTY6dpq4KEhZtB-5moP1mWN1CtrTKurv7QSY8wAN758.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/K_ae8Bfvql0dGhIfRH-R7W-zWoeB95kYGJNi3HjFyrs.json\", \"genesis_data/genesis_txs/K_ae8Bfvql0dGhIfRH-R7W-zWoeB95kYGJNi3HjFyrs.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/z7Xvravldr4BhTI4KPOEWtG325_1ORaLQ4aUPOAe_us.json\", \"genesis_data/genesis_txs/z7Xvravldr4BhTI4KPOEWtG325_1ORaLQ4aUPOAe_us.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/SCN8yn0cQASui1DeV4mMYeQrRn8eXKr7Cp9ll7L3UfI.json\", \"genesis_data/genesis_txs/SCN8yn0cQASui1DeV4mMYeQrRn8eXKr7Cp9ll7L3UfI.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/piTZgtn2oBsWKt09CV8LqH3I3JaVdRjFwjOAJmC-Xp4.json\", \"genesis_data/genesis_txs/piTZgtn2oBsWKt09CV8LqH3I3JaVdRjFwjOAJmC-Xp4.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/gbYMogbLVx3rOmm7K-o3nfGPKauLMLkGMSXcKkXW13Q.json\", \"genesis_data/genesis_txs/gbYMogbLVx3rOmm7K-o3nfGPKauLMLkGMSXcKkXW13Q.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/3BSgxVi4vtVtgMBtDE8xPMqU0PmkiKtKX6P_Iw0kMsM.json\", \"genesis_data/genesis_txs/3BSgxVi4vtVtgMBtDE8xPMqU0PmkiKtKX6P_Iw0kMsM.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/M_wQsQbFGtGiEaH0uW2swBubAnFab3ZcCN8IYWZvVzo.json\", \"genesis_data/genesis_txs/M_wQsQbFGtGiEaH0uW2swBubAnFab3ZcCN8IYWZvVzo.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/un3O49lggBX9raJKb6yuql_QTgZYWakWw5ydwUgUuXY.json\", \"genesis_data/genesis_txs/un3O49lggBX9raJKb6yuql_QTgZYWakWw5ydwUgUuXY.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/wFjsB5Y9GV61NqjCeyPCdkfXKUJOYccq8Bl9aljvwGc.json\", \"genesis_data/genesis_txs/wFjsB5Y9GV61NqjCeyPCdkfXKUJOYccq8Bl9aljvwGc.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/kcb41aN752OE__qEKDQAsbpzCUXMdlzI3clCBuxdVts.json\", \"genesis_data/genesis_txs/kcb41aN752OE__qEKDQAsbpzCUXMdlzI3clCBuxdVts.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/gE-2fjp2ncJ0ZRg12UBfqnCBb75OtAOksEX3wGZguqw.json\", \"genesis_data/genesis_txs/gE-2fjp2ncJ0ZRg12UBfqnCBb75OtAOksEX3wGZguqw.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/SJXMM0tlXown7l3ffjhsiKf311FDTRa7QkKX8tgyEZ8.json\", \"genesis_data/genesis_txs/SJXMM0tlXown7l3ffjhsiKf311FDTRa7QkKX8tgyEZ8.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/EPZ0hBh1wp-7T4JED4v6DOItd-9MNWkRfbLyizDLBsE.json\", \"genesis_data/genesis_txs/EPZ0hBh1wp-7T4JED4v6DOItd-9MNWkRfbLyizDLBsE.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/IACLRsWq-T6aesGEAjfFTZJd2sy7sFvWL7O6FI9A39U.json\", \"genesis_data/genesis_txs/IACLRsWq-T6aesGEAjfFTZJd2sy7sFvWL7O6FI9A39U.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/Dxrsx0xuPVY7oz9yHbL6wOFxo6ws7ycVe778C2bc9J8.json\", \"genesis_data/genesis_txs/Dxrsx0xuPVY7oz9yHbL6wOFxo6ws7ycVe778C2bc9J8.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/_01J_SIBJ164H0EedSfQ8h0dMfqet66WKHwcOFQEsMc.json\", 
\"genesis_data/genesis_txs/_01J_SIBJ164H0EedSfQ8h0dMfqet66WKHwcOFQEsMc.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/OILhne7UcvACtB4peA4osAjRMthaZZSW9OWhe3NpLBw.json\", \"genesis_data/genesis_txs/OILhne7UcvACtB4peA4osAjRMthaZZSW9OWhe3NpLBw.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/06dr4mrXcKlfPbK8t9vWOBCDJznyG-AsKxED-Jr0U88.json\", \"genesis_data/genesis_txs/06dr4mrXcKlfPbK8t9vWOBCDJznyG-AsKxED-Jr0U88.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/SBhaeMSTQm3rS6puYacdT-4wzlnkBlZ1agn6IW6Oyg8.json\", \"genesis_data/genesis_txs/SBhaeMSTQm3rS6puYacdT-4wzlnkBlZ1agn6IW6Oyg8.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/efqI0eDfp0OcYB-Ms5ELukIUr8-qtlX7Ica-ikhVZLU.json\", \"genesis_data/genesis_txs/efqI0eDfp0OcYB-Ms5ELukIUr8-qtlX7Ica-ikhVZLU.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/j2IiBCd5Vf2Q8ciTVxeHbN6JgrXUFiv0xtoMTA_VtqQ.json\", \"genesis_data/genesis_txs/j2IiBCd5Vf2Q8ciTVxeHbN6JgrXUFiv0xtoMTA_VtqQ.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/lFqBd1sEhgw1e_adedkee2hXP9beiNYbF625KV0vObU.json\", \"genesis_data/genesis_txs/lFqBd1sEhgw1e_adedkee2hXP9beiNYbF625KV0vObU.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/rRoy9jsUZ-Y10NIBksSD3P4HcVDfZheloItTTnc8_ZQ.json\", \"genesis_data/genesis_txs/rRoy9jsUZ-Y10NIBksSD3P4HcVDfZheloItTTnc8_ZQ.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/k6UueT0FWSSUbAAH4Uc1Oz6BivunVR0nSMTEILnB_dQ.json\", \"genesis_data/genesis_txs/k6UueT0FWSSUbAAH4Uc1Oz6BivunVR0nSMTEILnB_dQ.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/dYBZuFcCEgGVcfXgS9tmeJsue_qwaCRO3Mg2OHCZh_A.json\", \"genesis_data/genesis_txs/dYBZuFcCEgGVcfXgS9tmeJsue_qwaCRO3Mg2OHCZh_A.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/sfAY_3fQ41LahxW45rXfndEzeHD1eeWJgI9ZaM3slFU.json\", \"genesis_data/genesis_txs/sfAY_3fQ41LahxW45rXfndEzeHD1eeWJgI9ZaM3slFU.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/h0MlFXsvtNQlFwgTh6y7-gjXEj0CbGECgz77EwQsca0.json\", \"genesis_data/genesis_txs/h0MlFXsvtNQlFwgTh6y7-gjXEj0CbGECgz77EwQsca0.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/IwSIt1P5I_mM-gAeAvXiyxRVb73hqkQAMfxLIHbbZYk.json\", \"genesis_data/genesis_txs/IwSIt1P5I_mM-gAeAvXiyxRVb73hqkQAMfxLIHbbZYk.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/bqhG__MMablNhNpiSp8nopeKDCzXy97jLuSBlsKk_u8.json\", \"genesis_data/genesis_txs/bqhG__MMablNhNpiSp8nopeKDCzXy97jLuSBlsKk_u8.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/N6-1fOVDkoeDwKyoNdLxCVoyy-c0EF178A_oQeEchs8.json\", \"genesis_data/genesis_txs/N6-1fOVDkoeDwKyoNdLxCVoyy-c0EF178A_oQeEchs8.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/ez-ItWkyBvBZ6J7_Mobrpqc9RTp6I2JBmkPDV_xCQVY.json\", \"genesis_data/genesis_txs/ez-ItWkyBvBZ6J7_Mobrpqc9RTp6I2JBmkPDV_xCQVY.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/Z5e9G5QMZ_scJQ62qoqUs2XSuhknTuuAIhhGmfg3Ye8.json\", \"genesis_data/genesis_txs/Z5e9G5QMZ_scJQ62qoqUs2XSuhknTuuAIhhGmfg3Ye8.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/Ah6I8y8q0jb15KXjn0PyNfe7FR3v2xobg09Lfj7n1Mo.json\", \"genesis_data/genesis_txs/Ah6I8y8q0jb15KXjn0PyNfe7FR3v2xobg09Lfj7n1Mo.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/VUfaTp1eAzjnbxLR6xx_qQGVn_WOTna3rTolM8wY5BA.json\", \"genesis_data/genesis_txs/VUfaTp1eAzjnbxLR6xx_qQGVn_WOTna3rTolM8wY5BA.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/0FJrLrxrFkVTBwRrzCCh88Gm2tG1xPxg8s_IuRZDVzw.json\", \"genesis_data/genesis_txs/0FJrLrxrFkVTBwRrzCCh88Gm2tG1xPxg8s_IuRZDVzw.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/qU2Gu35-s9wMH1N4g_zMYKCqIStYzBZmRx0XlcIpjyk.json\", \"genesis_data/genesis_txs/qU2Gu35-s9wMH1N4g_zMYKCqIStYzBZmRx0XlcIpjyk.json\"},\n\t\t{copy, 
\"genesis_data/genesis_txs/PySb_0NIjROmsIgwz4kMwC9MVmeY1MwuKdil0WeUzxw.json\", \"genesis_data/genesis_txs/PySb_0NIjROmsIgwz4kMwC9MVmeY1MwuKdil0WeUzxw.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/DJf1SRoKaPo1h3F-7oKIMu4A-r9dXXMjE57WQilPdTk.json\", \"genesis_data/genesis_txs/DJf1SRoKaPo1h3F-7oKIMu4A-r9dXXMjE57WQilPdTk.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/zCOtSnXKGGhXgrWld31Ak9qQA_SjpOqB6n-9sF74rhk.json\", \"genesis_data/genesis_txs/zCOtSnXKGGhXgrWld31Ak9qQA_SjpOqB6n-9sF74rhk.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/DDrS8BD0XTUVJt5E8kwisVTBX4PBWp0lCnSkSD3PJto.json\", \"genesis_data/genesis_txs/DDrS8BD0XTUVJt5E8kwisVTBX4PBWp0lCnSkSD3PJto.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/Yzj2WZ-3q5vKkBJtrmGlVjZND7iqtzvMRafS0TnQiLE.json\", \"genesis_data/genesis_txs/Yzj2WZ-3q5vKkBJtrmGlVjZND7iqtzvMRafS0TnQiLE.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/H_0S6x36tsFH-x1h77jV_zzGGp97V8UjmgC0RZYwbtM.json\", \"genesis_data/genesis_txs/H_0S6x36tsFH-x1h77jV_zzGGp97V8UjmgC0RZYwbtM.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/mvGgGlFTDJ0ukM6Bssd8G8B5PrEppr4Sg1_NTvzzV1U.json\", \"genesis_data/genesis_txs/mvGgGlFTDJ0ukM6Bssd8G8B5PrEppr4Sg1_NTvzzV1U.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/AN48OPO2-1mh4PKtpyoNm7SWJK2j8dF0-TFLU7Z1C9g.json\", \"genesis_data/genesis_txs/AN48OPO2-1mh4PKtpyoNm7SWJK2j8dF0-TFLU7Z1C9g.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/3Q5gJrbqc-PeOvD4QQ4WCNp-f5cYzTyHyg6P9b-WvwM.json\", \"genesis_data/genesis_txs/3Q5gJrbqc-PeOvD4QQ4WCNp-f5cYzTyHyg6P9b-WvwM.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/MOoLwb8S881q3-gM4GK7DuCEoh5CZnF1tMIZG300X58.json\", \"genesis_data/genesis_txs/MOoLwb8S881q3-gM4GK7DuCEoh5CZnF1tMIZG300X58.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/KPNGfBMOznCXZwOVvCXHRR6sVJx1akVkmXTV98lCMKY.json\", \"genesis_data/genesis_txs/KPNGfBMOznCXZwOVvCXHRR6sVJx1akVkmXTV98lCMKY.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/1nu07yo-0eB5GLxIJzzlxZW6nFTFiZ3XCDobJUcNyP4.json\", \"genesis_data/genesis_txs/1nu07yo-0eB5GLxIJzzlxZW6nFTFiZ3XCDobJUcNyP4.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/utAoO_xht393CbJ_7P_ektVYeEpkySWLM-066yJ5HyI.json\", \"genesis_data/genesis_txs/utAoO_xht393CbJ_7P_ektVYeEpkySWLM-066yJ5HyI.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/4UEhkNbsGdJUjx1lJQgX9KorwSf_RRZG8VMW6jMmf8Y.json\", \"genesis_data/genesis_txs/4UEhkNbsGdJUjx1lJQgX9KorwSf_RRZG8VMW6jMmf8Y.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/21Kfm2Apa8QWeqdMqyQAcxg9HbiluZXfQFu4-6xe-AY.json\", \"genesis_data/genesis_txs/21Kfm2Apa8QWeqdMqyQAcxg9HbiluZXfQFu4-6xe-AY.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/chdl-kIl4zG7VcJbKk0Q_5TeGwuH8Xp2YFPLRJJKTWw.json\", \"genesis_data/genesis_txs/chdl-kIl4zG7VcJbKk0Q_5TeGwuH8Xp2YFPLRJJKTWw.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/aGqWG70qjD5P8spXLMtyXnYxS9k7Net-u932EyIFl28.json\", \"genesis_data/genesis_txs/aGqWG70qjD5P8spXLMtyXnYxS9k7Net-u932EyIFl28.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/dn3p_BqD1gIcZQqdA8r6TucwycKGave22IqNjzKSHqI.json\", \"genesis_data/genesis_txs/dn3p_BqD1gIcZQqdA8r6TucwycKGave22IqNjzKSHqI.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/WE5eBi6hEq90HQvDjtJr-EmZATWJthgxh3HPPuQ7410.json\", \"genesis_data/genesis_txs/WE5eBi6hEq90HQvDjtJr-EmZATWJthgxh3HPPuQ7410.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/BRD5ARo8tiY64RqIoxYZ6jwbE-LQT_7jA513nHwWyRE.json\", \"genesis_data/genesis_txs/BRD5ARo8tiY64RqIoxYZ6jwbE-LQT_7jA513nHwWyRE.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/nXGMduBKL3mpsnFNPctfjEa9Z9zlMpdxcRrdkK95D80.json\", 
\"genesis_data/genesis_txs/nXGMduBKL3mpsnFNPctfjEa9Z9zlMpdxcRrdkK95D80.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/fkbFeVpiaAOtvt_-M9_U4HzbA8Elh5sa8xJXObrItYM.json\", \"genesis_data/genesis_txs/fkbFeVpiaAOtvt_-M9_U4HzbA8Elh5sa8xJXObrItYM.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/duSw-WaGKAabAztyg2zkj6hjgaVaRGBrJuvZ5Gd2Pzk.json\", \"genesis_data/genesis_txs/duSw-WaGKAabAztyg2zkj6hjgaVaRGBrJuvZ5Gd2Pzk.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/HSlgnBu2Yxros7zyehPgiu2u7h80dJfCCqrA88UnkB4.json\", \"genesis_data/genesis_txs/HSlgnBu2Yxros7zyehPgiu2u7h80dJfCCqrA88UnkB4.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/DTGNdsYZDXoU1nE82yEjG5ZEssxwUmkFTkM3_i6oSx8.json\", \"genesis_data/genesis_txs/DTGNdsYZDXoU1nE82yEjG5ZEssxwUmkFTkM3_i6oSx8.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/b96k6w6qUyLSSWZlmupyBmav6XYMsdt0xTc2yIUZtOA.json\", \"genesis_data/genesis_txs/b96k6w6qUyLSSWZlmupyBmav6XYMsdt0xTc2yIUZtOA.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/S5Uv2W6erubrzYjzm9QHKij51XE-j-GFdYwcV2uPIAA.json\", \"genesis_data/genesis_txs/S5Uv2W6erubrzYjzm9QHKij51XE-j-GFdYwcV2uPIAA.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/5ynd-L6Z1vrR7Vlyr-rkrga_Jw2ibALkIgldNmsVRcQ.json\", \"genesis_data/genesis_txs/5ynd-L6Z1vrR7Vlyr-rkrga_Jw2ibALkIgldNmsVRcQ.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/ocUISm-0ItAS-N3Ydwe1swo4JmoVpRzWzngFt-pDwfo.json\", \"genesis_data/genesis_txs/ocUISm-0ItAS-N3Ydwe1swo4JmoVpRzWzngFt-pDwfo.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/ByvrfeR4UNmWJwF2fU41mBo6ThFl49u24rEGpbeSI0Q.json\", \"genesis_data/genesis_txs/ByvrfeR4UNmWJwF2fU41mBo6ThFl49u24rEGpbeSI0Q.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/FTYnf3Z3QqEpNzTigfAlGTkgpgCWtFA7R8i-I1ik_Vo.json\", \"genesis_data/genesis_txs/FTYnf3Z3QqEpNzTigfAlGTkgpgCWtFA7R8i-I1ik_Vo.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/1Lwuom2q3FFI2pZz5EYgOzJRymgVWE3F9ZIl4vi3-kU.json\", \"genesis_data/genesis_txs/1Lwuom2q3FFI2pZz5EYgOzJRymgVWE3F9ZIl4vi3-kU.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/QR75we1zHW-qO7dsI932kXX0YrAIyuC2XIDRhfmK-fE.json\", \"genesis_data/genesis_txs/QR75we1zHW-qO7dsI932kXX0YrAIyuC2XIDRhfmK-fE.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/PjeEg7GpKT8twlBkp8GHAsEqfMvmNd3RaAx-l0R_i2w.json\", \"genesis_data/genesis_txs/PjeEg7GpKT8twlBkp8GHAsEqfMvmNd3RaAx-l0R_i2w.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/Kl1zrMIDIC9yW8yLMnSKQYDoV0PY41ymzJQw91qaZvY.json\", \"genesis_data/genesis_txs/Kl1zrMIDIC9yW8yLMnSKQYDoV0PY41ymzJQw91qaZvY.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/ijroBK9n_uKCS97V7iege_5Av2E-tm6ujquAazT_sBI.json\", \"genesis_data/genesis_txs/ijroBK9n_uKCS97V7iege_5Av2E-tm6ujquAazT_sBI.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/Kgr-XWwHYos5Y95ZJ9mAUwjYjj_rP0I-GnWctQDNlp8.json\", \"genesis_data/genesis_txs/Kgr-XWwHYos5Y95ZJ9mAUwjYjj_rP0I-GnWctQDNlp8.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/snWRgSI3vlTOy3RRkuNckM-ws-5lpFiPMpYlLx_zPyk.json\", \"genesis_data/genesis_txs/snWRgSI3vlTOy3RRkuNckM-ws-5lpFiPMpYlLx_zPyk.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/5qRekKepIlFbUhGMq_nNy89bzx_K44e4GmUKYAe9MRU.json\", \"genesis_data/genesis_txs/5qRekKepIlFbUhGMq_nNy89bzx_K44e4GmUKYAe9MRU.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/gyG1bGFt7qkMyUCrKiEfMzMzc3_3PooewqNeJpy-3Xk.json\", \"genesis_data/genesis_txs/gyG1bGFt7qkMyUCrKiEfMzMzc3_3PooewqNeJpy-3Xk.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/Z7gfizrPOypT4Pagg3oli5g8wA8pbKB0ZJnrw-FVyys.json\", \"genesis_data/genesis_txs/Z7gfizrPOypT4Pagg3oli5g8wA8pbKB0ZJnrw-FVyys.json\"},\n\t\t{copy, 
\"genesis_data/genesis_txs/lsuH-ITPI--6KSzhIFclsEAWOSoRQu-8tlnOSxj_Er0.json\", \"genesis_data/genesis_txs/lsuH-ITPI--6KSzhIFclsEAWOSoRQu-8tlnOSxj_Er0.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/xavUY4L0L0nLNVvHiYfBqGL5iqUvdwQ-iY_nLLMB6J4.json\", \"genesis_data/genesis_txs/xavUY4L0L0nLNVvHiYfBqGL5iqUvdwQ-iY_nLLMB6J4.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/3khTH_o8WZHSCzP-AThkmt7zZL-d_lcqUKC8nz7c8lk.json\", \"genesis_data/genesis_txs/3khTH_o8WZHSCzP-AThkmt7zZL-d_lcqUKC8nz7c8lk.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/5FL2C4l-5cTl9wg4CblgIxzko8hGsB5URVA_yTAd4Nk.json\", \"genesis_data/genesis_txs/5FL2C4l-5cTl9wg4CblgIxzko8hGsB5URVA_yTAd4Nk.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/NBxewjnZAfekK0hKmwL_OpF1521JTeIpLk2a2TLDnTk.json\", \"genesis_data/genesis_txs/NBxewjnZAfekK0hKmwL_OpF1521JTeIpLk2a2TLDnTk.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/xC7ski_qpcrRwRkxxHwPZd2lOX6Q---2qdQ4Rr-wxAM.json\", \"genesis_data/genesis_txs/xC7ski_qpcrRwRkxxHwPZd2lOX6Q---2qdQ4Rr-wxAM.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/h7qIFbn0LoexuVwBcjKW7v5A65iQDQFYZUQjuowfIbk.json\", \"genesis_data/genesis_txs/h7qIFbn0LoexuVwBcjKW7v5A65iQDQFYZUQjuowfIbk.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/DQ6WaBfLEMEFhKoMoutuPyO_zFg1hWTDXT13CD8n1nw.json\", \"genesis_data/genesis_txs/DQ6WaBfLEMEFhKoMoutuPyO_zFg1hWTDXT13CD8n1nw.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/8y-ghHqMT2lEHQn86jRXkQ8I5cLWWtKW1CQROp8mzIs.json\", \"genesis_data/genesis_txs/8y-ghHqMT2lEHQn86jRXkQ8I5cLWWtKW1CQROp8mzIs.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/zUFRBcWpPAUyMlojffeTnPgsLo6YgU6JaJgOR0mpBuM.json\", \"genesis_data/genesis_txs/zUFRBcWpPAUyMlojffeTnPgsLo6YgU6JaJgOR0mpBuM.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/0mFNtCi-u34uwOj3BimQTPOT9PgLGE8uqCbtXhnwoKI.json\", \"genesis_data/genesis_txs/0mFNtCi-u34uwOj3BimQTPOT9PgLGE8uqCbtXhnwoKI.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/luQlV_58e9qjm7EZpoO6f5Y1j349Q34UwTW1Lx9J_vE.json\", \"genesis_data/genesis_txs/luQlV_58e9qjm7EZpoO6f5Y1j349Q34UwTW1Lx9J_vE.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/TFX7m_Kf56rV6LNuyQ31NeVoDHJ3x0YqhIv4-IBQ-3s.json\", \"genesis_data/genesis_txs/TFX7m_Kf56rV6LNuyQ31NeVoDHJ3x0YqhIv4-IBQ-3s.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/LFQ5iV6E5wyBbJmJoFJdH39ZxfW-y7mZFKou2H-ONvg.json\", \"genesis_data/genesis_txs/LFQ5iV6E5wyBbJmJoFJdH39ZxfW-y7mZFKou2H-ONvg.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/ud3zGJZA5tPRoitGG1c6HWm9W7iRS4ZF3u6PbZ-blns.json\", \"genesis_data/genesis_txs/ud3zGJZA5tPRoitGG1c6HWm9W7iRS4ZF3u6PbZ-blns.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/dv28G4IsYul7liWrycsx4UKSYHA4sWUY6xFQzRPi4p4.json\", \"genesis_data/genesis_txs/dv28G4IsYul7liWrycsx4UKSYHA4sWUY6xFQzRPi4p4.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/Ykh5TAI6koBN4UTQZ3GNIDr_uHNjlpHH9HsvtEkoWLA.json\", \"genesis_data/genesis_txs/Ykh5TAI6koBN4UTQZ3GNIDr_uHNjlpHH9HsvtEkoWLA.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/BQ2RVL6XY99AIkPKDBCfUfRmJGejkZ8YKgKZc2LewhU.json\", \"genesis_data/genesis_txs/BQ2RVL6XY99AIkPKDBCfUfRmJGejkZ8YKgKZc2LewhU.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/_fLFu_BOzTEPdX35rqUruuyNxi7f_La8T1_JG7pIPd0.json\", \"genesis_data/genesis_txs/_fLFu_BOzTEPdX35rqUruuyNxi7f_La8T1_JG7pIPd0.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/N3lqe8CUwPfChinYVV4OZZQNjtXc26JkOJyqgoKhq7E.json\", \"genesis_data/genesis_txs/N3lqe8CUwPfChinYVV4OZZQNjtXc26JkOJyqgoKhq7E.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/i9xaFWy0avtyCCxQdmWfGNDgh-PaJgIHkNK1pcJzmV8.json\", 
\"genesis_data/genesis_txs/i9xaFWy0avtyCCxQdmWfGNDgh-PaJgIHkNK1pcJzmV8.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/6J1sN2nhGpqe9iJwgdfnxxCK4af88__HoEG8MLeqtyM.json\", \"genesis_data/genesis_txs/6J1sN2nhGpqe9iJwgdfnxxCK4af88__HoEG8MLeqtyM.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/Juzb8MlmGd2qomIUwgfGzIFO7c7ZcY87kJPmqpSkt18.json\", \"genesis_data/genesis_txs/Juzb8MlmGd2qomIUwgfGzIFO7c7ZcY87kJPmqpSkt18.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/y0PrXtX7PonEbIG3uEdu-k-McGeLLAjzUriUTCMTGcw.json\", \"genesis_data/genesis_txs/y0PrXtX7PonEbIG3uEdu-k-McGeLLAjzUriUTCMTGcw.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/1yvqJKdnb9SRRKoBg1m0kWAsSh9S0R5r9T9TE0YHfRQ.json\", \"genesis_data/genesis_txs/1yvqJKdnb9SRRKoBg1m0kWAsSh9S0R5r9T9TE0YHfRQ.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/oRvFwVpHVeo0iysSg2jFOAZKE-hKwbm6mGeZ6VUZmxk.json\", \"genesis_data/genesis_txs/oRvFwVpHVeo0iysSg2jFOAZKE-hKwbm6mGeZ6VUZmxk.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/wnOghJX4aZlbm7SDDb4UUX8_6GZYpYYx3GireamHwAc.json\", \"genesis_data/genesis_txs/wnOghJX4aZlbm7SDDb4UUX8_6GZYpYYx3GireamHwAc.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/4gLPD5njSRtiaJwjcjmNOyI5Vw8sFBQQWOefmy4SPmQ.json\", \"genesis_data/genesis_txs/4gLPD5njSRtiaJwjcjmNOyI5Vw8sFBQQWOefmy4SPmQ.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/bnT7410oaZtnCdurp5jNgLKju9d_RRxhgggnxa5frMQ.json\", \"genesis_data/genesis_txs/bnT7410oaZtnCdurp5jNgLKju9d_RRxhgggnxa5frMQ.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/cgU_TlXi5gJ7hShSBYsS4UVi-sLTtfFv1y1sy2nNhos.json\", \"genesis_data/genesis_txs/cgU_TlXi5gJ7hShSBYsS4UVi-sLTtfFv1y1sy2nNhos.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/cIXdvNTNHJSmA6Rt5UgSNfMcGfvxDnYTa3a1ulS1SiY.json\", \"genesis_data/genesis_txs/cIXdvNTNHJSmA6Rt5UgSNfMcGfvxDnYTa3a1ulS1SiY.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/LixFbPqM1ZZ-5JWo339FMfPCpD_6M85rVK8IVmmt8m8.json\", \"genesis_data/genesis_txs/LixFbPqM1ZZ-5JWo339FMfPCpD_6M85rVK8IVmmt8m8.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/DkBAprUInkCbFa6A_WJJNL1z_PnhEavvyZtF09lmyvw.json\", \"genesis_data/genesis_txs/DkBAprUInkCbFa6A_WJJNL1z_PnhEavvyZtF09lmyvw.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/puLpw8OIIYCOatImKjpV5s0JWyKFq6bXFMz_qSf6mUA.json\", \"genesis_data/genesis_txs/puLpw8OIIYCOatImKjpV5s0JWyKFq6bXFMz_qSf6mUA.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/0biLy8DoOhucpeYzOj5jnopxxwe0XDRfCOMjyz_a74U.json\", \"genesis_data/genesis_txs/0biLy8DoOhucpeYzOj5jnopxxwe0XDRfCOMjyz_a74U.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/hX6nohfkKZ_9ajziHJ6g5V5cIe1EX9H9rg7eScK988s.json\", \"genesis_data/genesis_txs/hX6nohfkKZ_9ajziHJ6g5V5cIe1EX9H9rg7eScK988s.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/GlWMQUuiL80knS07G7NpoYat3w18VMuyLEuC_Pmijng.json\", \"genesis_data/genesis_txs/GlWMQUuiL80knS07G7NpoYat3w18VMuyLEuC_Pmijng.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/f3jE7NK419FZzwkx9VjTkrcX5FEgl2Ky3KSK0vH-wj0.json\", \"genesis_data/genesis_txs/f3jE7NK419FZzwkx9VjTkrcX5FEgl2Ky3KSK0vH-wj0.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/UMk64563QZfxgZr_vKOTDrcp5XJNENF82Pji4a078YY.json\", \"genesis_data/genesis_txs/UMk64563QZfxgZr_vKOTDrcp5XJNENF82Pji4a078YY.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/0EzNUQy_5b7CwNNLVAi7CnameMgnxVh-XyahT2kn74Y.json\", \"genesis_data/genesis_txs/0EzNUQy_5b7CwNNLVAi7CnameMgnxVh-XyahT2kn74Y.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/3ku6XelnvBsaRjoNxDWb_kT_PRlQ88U0pbWURziCj7s.json\", \"genesis_data/genesis_txs/3ku6XelnvBsaRjoNxDWb_kT_PRlQ88U0pbWURziCj7s.json\"},\n\t\t{copy, 
\"genesis_data/genesis_txs/RU5mkM_3UrjRMffwgj7ovDMYxxjhfXvliozhpIqw0sA.json\", \"genesis_data/genesis_txs/RU5mkM_3UrjRMffwgj7ovDMYxxjhfXvliozhpIqw0sA.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/oMP40Kgd9MxLfksmW_HAlGe8Rn1Px8tpF-NOHBfe9oo.json\", \"genesis_data/genesis_txs/oMP40Kgd9MxLfksmW_HAlGe8Rn1Px8tpF-NOHBfe9oo.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/UYoJMT0QxMtB6ctUB-9iQlcx6fF8R3s8ahM4_iF4wiQ.json\", \"genesis_data/genesis_txs/UYoJMT0QxMtB6ctUB-9iQlcx6fF8R3s8ahM4_iF4wiQ.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/96Ijx5TWSxZmZaDH1pteGHFjIYY0aHmGWNHiMYeSYIM.json\", \"genesis_data/genesis_txs/96Ijx5TWSxZmZaDH1pteGHFjIYY0aHmGWNHiMYeSYIM.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/n6TKbsqmGl2m3yH15RAe405vYZQ7DStlvYsHCHp1D0U.json\", \"genesis_data/genesis_txs/n6TKbsqmGl2m3yH15RAe405vYZQ7DStlvYsHCHp1D0U.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/XtDRu-1SyoRL21gpKcxWtxyksVwTF9kvW26hvQ_bPzE.json\", \"genesis_data/genesis_txs/XtDRu-1SyoRL21gpKcxWtxyksVwTF9kvW26hvQ_bPzE.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/YIEEyYfNIRSjzm_gzv6l5CelyL4AOzKX9M4XPXRk2Yo.json\", \"genesis_data/genesis_txs/YIEEyYfNIRSjzm_gzv6l5CelyL4AOzKX9M4XPXRk2Yo.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/NptjIrqZrQMSdLbXAGyQCr8audCzArV3EofsjRCqrQw.json\", \"genesis_data/genesis_txs/NptjIrqZrQMSdLbXAGyQCr8audCzArV3EofsjRCqrQw.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/vWeY4yJSJF9LXogRZb3Qr6QyLtEIL_8IY4bzJ2e7O5I.json\", \"genesis_data/genesis_txs/vWeY4yJSJF9LXogRZb3Qr6QyLtEIL_8IY4bzJ2e7O5I.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/fBVa04p7MEL8BsPpyD_Pwv3uqBnBMVzG9YpXsCwZLtc.json\", \"genesis_data/genesis_txs/fBVa04p7MEL8BsPpyD_Pwv3uqBnBMVzG9YpXsCwZLtc.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/opfZTSNdqaxXZUmaKROD2sd4QkyNDnZE3u1A95eSw4E.json\", \"genesis_data/genesis_txs/opfZTSNdqaxXZUmaKROD2sd4QkyNDnZE3u1A95eSw4E.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/-M5_EBM4MayX8ZpuLFoANHO00c4pdrSmAQbPYv7fq4U.json\", \"genesis_data/genesis_txs/-M5_EBM4MayX8ZpuLFoANHO00c4pdrSmAQbPYv7fq4U.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/8rKBfpmkPlxnnYr6t0xIpUDubdidK0Fpnois7-xQJtc.json\", \"genesis_data/genesis_txs/8rKBfpmkPlxnnYr6t0xIpUDubdidK0Fpnois7-xQJtc.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/rvbM0iB1HJ1YadedIDWjJ95J2XBHWwPAJD4VfpdQpxQ.json\", \"genesis_data/genesis_txs/rvbM0iB1HJ1YadedIDWjJ95J2XBHWwPAJD4VfpdQpxQ.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/_Hf1lw_E6Lyd-0PGkCRQaN10cdEx4M-hl9y-zWiDo8k.json\", \"genesis_data/genesis_txs/_Hf1lw_E6Lyd-0PGkCRQaN10cdEx4M-hl9y-zWiDo8k.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/tOIFTqEef5fQYPzhlkC2Um7rddT6MyrHPzUWXDv_mJc.json\", \"genesis_data/genesis_txs/tOIFTqEef5fQYPzhlkC2Um7rddT6MyrHPzUWXDv_mJc.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/4LwZwAVcaBXhXsP5b4mnE11tUXefuRUTtTibtvoozDQ.json\", \"genesis_data/genesis_txs/4LwZwAVcaBXhXsP5b4mnE11tUXefuRUTtTibtvoozDQ.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/XxgirNr3QGaJTKxPWqK9byYLj7SdbfZudKd9rbynWyM.json\", \"genesis_data/genesis_txs/XxgirNr3QGaJTKxPWqK9byYLj7SdbfZudKd9rbynWyM.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/IvyUOghXQ31LnYE3bYEkS82gTAvpIa1rGGQKmiJuuMk.json\", \"genesis_data/genesis_txs/IvyUOghXQ31LnYE3bYEkS82gTAvpIa1rGGQKmiJuuMk.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/X9biR_ZA-rnpzk4gfLi0-pBSsjjT2l9Rk0VfYwf1WMo.json\", \"genesis_data/genesis_txs/X9biR_ZA-rnpzk4gfLi0-pBSsjjT2l9Rk0VfYwf1WMo.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/8b-7D96aRFJgDm8z5Tg47vBbdjseW0rRi17TYDcaQ5Q.json\", 
\"genesis_data/genesis_txs/8b-7D96aRFJgDm8z5Tg47vBbdjseW0rRi17TYDcaQ5Q.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/0O-UnzBvSFYoMQrbcsKHRH_YqNNylC1n9KWXmm-rr90.json\", \"genesis_data/genesis_txs/0O-UnzBvSFYoMQrbcsKHRH_YqNNylC1n9KWXmm-rr90.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/TUIdVI5yQH50laHvkxgAnTV6uuE2LXXH3pxIe6Q2S7I.json\", \"genesis_data/genesis_txs/TUIdVI5yQH50laHvkxgAnTV6uuE2LXXH3pxIe6Q2S7I.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/0Mxvgz6_wL0FBOxJmHcRcNwiaV8B90whDxG4Vh_GFic.json\", \"genesis_data/genesis_txs/0Mxvgz6_wL0FBOxJmHcRcNwiaV8B90whDxG4Vh_GFic.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/qyMWe-VUOzHXkQviMhNS0wJI_27nvCgDY9iiKANk-lI.json\", \"genesis_data/genesis_txs/qyMWe-VUOzHXkQviMhNS0wJI_27nvCgDY9iiKANk-lI.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/LC-_5GDhs09OvN7r8GPmjMa6A9xSeVtsAmDgYCgspvc.json\", \"genesis_data/genesis_txs/LC-_5GDhs09OvN7r8GPmjMa6A9xSeVtsAmDgYCgspvc.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/sB51Zz1HRjpwrWFhW6ZE2E-n5hl3joqxPQgnMCLX4ZM.json\", \"genesis_data/genesis_txs/sB51Zz1HRjpwrWFhW6ZE2E-n5hl3joqxPQgnMCLX4ZM.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/NEXnMz8Yuw-xfIPprKT2iwx5A1UjWwRHCH7XCpeXIPg.json\", \"genesis_data/genesis_txs/NEXnMz8Yuw-xfIPprKT2iwx5A1UjWwRHCH7XCpeXIPg.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/5OdjYWAipCjWzpqfNoNhyJ673d4pRMNva8la_SFfu_c.json\", \"genesis_data/genesis_txs/5OdjYWAipCjWzpqfNoNhyJ673d4pRMNva8la_SFfu_c.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/drYsyF85HcvC7LM1hkzPPgTj3_zp3amcNVNobBmOxvc.json\", \"genesis_data/genesis_txs/drYsyF85HcvC7LM1hkzPPgTj3_zp3amcNVNobBmOxvc.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/vQ4zTq--De8FHdVnE7sYCemwiaqoZDS4emR_y6o6ZFA.json\", \"genesis_data/genesis_txs/vQ4zTq--De8FHdVnE7sYCemwiaqoZDS4emR_y6o6ZFA.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/zwl046ia6I5VWLRYPJzBI70ypBQN2VlvLH9a_ndNKxA.json\", \"genesis_data/genesis_txs/zwl046ia6I5VWLRYPJzBI70ypBQN2VlvLH9a_ndNKxA.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/xYpSRRpO8ejUGeohlRutNt9qUMgvuZJGkPGCyu1kSas.json\", \"genesis_data/genesis_txs/xYpSRRpO8ejUGeohlRutNt9qUMgvuZJGkPGCyu1kSas.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/wmZTwziFc_VlvYJz_4nyxYd3WxznBmsn5QQyRKDcWXU.json\", \"genesis_data/genesis_txs/wmZTwziFc_VlvYJz_4nyxYd3WxznBmsn5QQyRKDcWXU.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/SWNkfm9ZZPCiYKFg6oIW_IgqJp5Ypbp-Fs9S7YgPm0c.json\", \"genesis_data/genesis_txs/SWNkfm9ZZPCiYKFg6oIW_IgqJp5Ypbp-Fs9S7YgPm0c.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/y6WPKL6MHzZp2ktvb1cETmNMBJyCEPlxdisKlroEBtc.json\", \"genesis_data/genesis_txs/y6WPKL6MHzZp2ktvb1cETmNMBJyCEPlxdisKlroEBtc.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/0ogs8DTdSrNxfE2LzrScPvnyf7CQ7jMdFaS_l0-K-GU.json\", \"genesis_data/genesis_txs/0ogs8DTdSrNxfE2LzrScPvnyf7CQ7jMdFaS_l0-K-GU.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/328-6fOVCfCid4QTxHjkAMkQLMHZgDg-hZo5PnVfp2Q.json\", \"genesis_data/genesis_txs/328-6fOVCfCid4QTxHjkAMkQLMHZgDg-hZo5PnVfp2Q.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/iRF6OnneKHJLhLMdCXpo6LsxVyWIGyklFEpu1bN3cyE.json\", \"genesis_data/genesis_txs/iRF6OnneKHJLhLMdCXpo6LsxVyWIGyklFEpu1bN3cyE.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/g19-Tkf4xuM9golcjx0mA1RkJUYocQJ3uYnH8MU1ePs.json\", \"genesis_data/genesis_txs/g19-Tkf4xuM9golcjx0mA1RkJUYocQJ3uYnH8MU1ePs.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/aPxbCROotxwkdovWbQEhw18UNAzVy-AmjYwjo9lb5u4.json\", \"genesis_data/genesis_txs/aPxbCROotxwkdovWbQEhw18UNAzVy-AmjYwjo9lb5u4.json\"},\n\t\t{copy, 
\"genesis_data/genesis_txs/COXhhpbcLSEe2iP2kp4SDj5NjjBAC8CucsAgOHRF_lc.json\", \"genesis_data/genesis_txs/COXhhpbcLSEe2iP2kp4SDj5NjjBAC8CucsAgOHRF_lc.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/TGdhJ01pPw49A0ZIaCCcYBnL-RPK_3KZH3cA6E9dVqc.json\", \"genesis_data/genesis_txs/TGdhJ01pPw49A0ZIaCCcYBnL-RPK_3KZH3cA6E9dVqc.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/v2UplxDprWwaIwbB6z3KNEj3GjloqM8SinvVahZ1Wpk.json\", \"genesis_data/genesis_txs/v2UplxDprWwaIwbB6z3KNEj3GjloqM8SinvVahZ1Wpk.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/P_pvvzlCIX7Yaiuv6zt1voLcn69gb9jAHPRhHaHjLng.json\", \"genesis_data/genesis_txs/P_pvvzlCIX7Yaiuv6zt1voLcn69gb9jAHPRhHaHjLng.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/WsYJKhqhppBF6_eGbd0OACdu3LU6-CUuMcLeG3ST2qc.json\", \"genesis_data/genesis_txs/WsYJKhqhppBF6_eGbd0OACdu3LU6-CUuMcLeG3ST2qc.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/weff0Y0_3-H7Vy1HrbpIzUmbTM1rZ8Lw0wgDGYmlsrM.json\", \"genesis_data/genesis_txs/weff0Y0_3-H7Vy1HrbpIzUmbTM1rZ8Lw0wgDGYmlsrM.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/oWWJcAiBCxhtWkIqwir4-vTvD3JFpHgZRNIpS-Xjzp4.json\", \"genesis_data/genesis_txs/oWWJcAiBCxhtWkIqwir4-vTvD3JFpHgZRNIpS-Xjzp4.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/Mv-TFhA3639O4JbKzoO3wo8LNPcFwA_vaaOLHfWRfSo.json\", \"genesis_data/genesis_txs/Mv-TFhA3639O4JbKzoO3wo8LNPcFwA_vaaOLHfWRfSo.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/iuTLZ3xxGpaBCggV5xfUkJ6hMdUQKHw6f_vEn6sbmPo.json\", \"genesis_data/genesis_txs/iuTLZ3xxGpaBCggV5xfUkJ6hMdUQKHw6f_vEn6sbmPo.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/tVLYd_62zbU-VPzQPOMHUo9TJR1dvSZ_pAHrC5Ubs8Q.json\", \"genesis_data/genesis_txs/tVLYd_62zbU-VPzQPOMHUo9TJR1dvSZ_pAHrC5Ubs8Q.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/KhQeu3CG_X1zoHbyy99GUlC9gVFFexf6vVPOlLgCj9I.json\", \"genesis_data/genesis_txs/KhQeu3CG_X1zoHbyy99GUlC9gVFFexf6vVPOlLgCj9I.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/o0nw6fU4gPL7Ae45x1BEQr5GkXSzZUrWnZrdIWqgx6w.json\", \"genesis_data/genesis_txs/o0nw6fU4gPL7Ae45x1BEQr5GkXSzZUrWnZrdIWqgx6w.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/K47jh6Jr6TmZeZ_TadmyLLy1V6ZvLNpvV5FWcICohnk.json\", \"genesis_data/genesis_txs/K47jh6Jr6TmZeZ_TadmyLLy1V6ZvLNpvV5FWcICohnk.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/7SfLhJLtevo0zu-1bo8q6zX98WbGgpDNuY6PXbzS_j0.json\", \"genesis_data/genesis_txs/7SfLhJLtevo0zu-1bo8q6zX98WbGgpDNuY6PXbzS_j0.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/C3auX8HXhc2dChmvSBUfgGyYynuAr6P3g0p7420GG78.json\", \"genesis_data/genesis_txs/C3auX8HXhc2dChmvSBUfgGyYynuAr6P3g0p7420GG78.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/m5zFPHB-2VjCgTLStD9TLZwD1CHfLELPKkVXFJGIptM.json\", \"genesis_data/genesis_txs/m5zFPHB-2VjCgTLStD9TLZwD1CHfLELPKkVXFJGIptM.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/MPP4fxmSkvM2BVq8rumeT5yvDNu3QAT_kqpOlAq5s2E.json\", \"genesis_data/genesis_txs/MPP4fxmSkvM2BVq8rumeT5yvDNu3QAT_kqpOlAq5s2E.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/6YbxtptbO-sidrnYdgn0G_CiNBh-az5ZzWrSCP9DYKA.json\", \"genesis_data/genesis_txs/6YbxtptbO-sidrnYdgn0G_CiNBh-az5ZzWrSCP9DYKA.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/mGAMsTqBzau-MjTkMS5Z3g2_nUD-qQWeLtq6qlzkVl0.json\", \"genesis_data/genesis_txs/mGAMsTqBzau-MjTkMS5Z3g2_nUD-qQWeLtq6qlzkVl0.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/CSkFcCmNgvnp7jp7aK0tEGsLWiZVMF-QBkEFaJrAG48.json\", \"genesis_data/genesis_txs/CSkFcCmNgvnp7jp7aK0tEGsLWiZVMF-QBkEFaJrAG48.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/FkZzg_-5eSdFlbq9XnHe3wRhYidHJPXwUQ6YLuJijS0.json\", 
\"genesis_data/genesis_txs/FkZzg_-5eSdFlbq9XnHe3wRhYidHJPXwUQ6YLuJijS0.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/9JWfraRekKtgXiIjssn0tVSzhaCaN682jECsrKtR0_E.json\", \"genesis_data/genesis_txs/9JWfraRekKtgXiIjssn0tVSzhaCaN682jECsrKtR0_E.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/Ms9gCRdVwT9u8-ewYd6c-T0bet-n24n_q_Hn0-BlMow.json\", \"genesis_data/genesis_txs/Ms9gCRdVwT9u8-ewYd6c-T0bet-n24n_q_Hn0-BlMow.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/CbV_CDXgVNjV6fyoBDkYmbAcaC5VsLDYXgEIwj2Ewyo.json\", \"genesis_data/genesis_txs/CbV_CDXgVNjV6fyoBDkYmbAcaC5VsLDYXgEIwj2Ewyo.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/4pNPqxodBesN6jQl51nH17GA1fWYfHVm8cIEfusnPLY.json\", \"genesis_data/genesis_txs/4pNPqxodBesN6jQl51nH17GA1fWYfHVm8cIEfusnPLY.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/0_GKZOdtRH-nc094U5kFBlvQSjPz_oX0tcIroqLFD3U.json\", \"genesis_data/genesis_txs/0_GKZOdtRH-nc094U5kFBlvQSjPz_oX0tcIroqLFD3U.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/G1GqspPmLkJTiT35QUTWBT4def7j5ORSfHCtrYzrrng.json\", \"genesis_data/genesis_txs/G1GqspPmLkJTiT35QUTWBT4def7j5ORSfHCtrYzrrng.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/4ewYAvsgaT-6Oy23qPqK29O_AgfvNbhLvol13yN1PdQ.json\", \"genesis_data/genesis_txs/4ewYAvsgaT-6Oy23qPqK29O_AgfvNbhLvol13yN1PdQ.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/LBTipZADoYfO-9UecE07Z83ijiLl0f2wAGXyRFQqKCY.json\", \"genesis_data/genesis_txs/LBTipZADoYfO-9UecE07Z83ijiLl0f2wAGXyRFQqKCY.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/OaumRLT8oE6J8gqrQ9DrY_grMuSfWtai95VnqrX24hs.json\", \"genesis_data/genesis_txs/OaumRLT8oE6J8gqrQ9DrY_grMuSfWtai95VnqrX24hs.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/DMtXbcR_qHwdYXvkuCGOQARs_QtN9iWPw4x6TTaWOcw.json\", \"genesis_data/genesis_txs/DMtXbcR_qHwdYXvkuCGOQARs_QtN9iWPw4x6TTaWOcw.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/Eeo6rANLMAXonDFLDG2nu7n99O3Ymfk01wYXJBbEixY.json\", \"genesis_data/genesis_txs/Eeo6rANLMAXonDFLDG2nu7n99O3Ymfk01wYXJBbEixY.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/ZEB62vqKvkPK2s_RmxgQ2IhafMxJ_TXCGswrrKLhYiQ.json\", \"genesis_data/genesis_txs/ZEB62vqKvkPK2s_RmxgQ2IhafMxJ_TXCGswrrKLhYiQ.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/TkN4QLdC4tu-_Po50RYwF33shyHcanHSe_BKpryK0JA.json\", \"genesis_data/genesis_txs/TkN4QLdC4tu-_Po50RYwF33shyHcanHSe_BKpryK0JA.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/YfHEyNUGsOUiuqCgHV127cg2Z5Yap9tcQB1LH7tq9ZA.json\", \"genesis_data/genesis_txs/YfHEyNUGsOUiuqCgHV127cg2Z5Yap9tcQB1LH7tq9ZA.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/NvGRQrdis2HV22enpSpPqsb0M8s-pN_nl7eJtalZyC4.json\", \"genesis_data/genesis_txs/NvGRQrdis2HV22enpSpPqsb0M8s-pN_nl7eJtalZyC4.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/L8tkBBP7fyYfK4txqP-fGk_ODOU4UfIgFV79O-qd5vY.json\", \"genesis_data/genesis_txs/L8tkBBP7fyYfK4txqP-fGk_ODOU4UfIgFV79O-qd5vY.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/ISiC3yaTW9KnZmgs39osghIg0HP8ISh77bzH7u2m55Q.json\", \"genesis_data/genesis_txs/ISiC3yaTW9KnZmgs39osghIg0HP8ISh77bzH7u2m55Q.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/IQgiEwMLp1bb6muuB_G7Q3sRaaZ3OZHUSjgshUq5YMU.json\", \"genesis_data/genesis_txs/IQgiEwMLp1bb6muuB_G7Q3sRaaZ3OZHUSjgshUq5YMU.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/07u3F6WH-ohqBclh6UanAQ9Tau089eLJrIYM-8qkAbw.json\", \"genesis_data/genesis_txs/07u3F6WH-ohqBclh6UanAQ9Tau089eLJrIYM-8qkAbw.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/nh2sbgjxu6MmU8yGV00w7X4q4XCJETeYE3zVtcj2ldk.json\", \"genesis_data/genesis_txs/nh2sbgjxu6MmU8yGV00w7X4q4XCJETeYE3zVtcj2ldk.json\"},\n\t\t{copy, 
\"genesis_data/genesis_txs/ydvI6weQPIRj2hcNg4RPqzDpFOhqiTc9iDqQ-fUUl4I.json\", \"genesis_data/genesis_txs/ydvI6weQPIRj2hcNg4RPqzDpFOhqiTc9iDqQ-fUUl4I.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/5Hatfzkj7ivvIsUIDjdOSp-4CdkClH6B7S_SNX0B2-o.json\", \"genesis_data/genesis_txs/5Hatfzkj7ivvIsUIDjdOSp-4CdkClH6B7S_SNX0B2-o.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/1Q2plP5JFTLwdTC27VfIgDJ-ri5h3mVsKxZploTrRmQ.json\", \"genesis_data/genesis_txs/1Q2plP5JFTLwdTC27VfIgDJ-ri5h3mVsKxZploTrRmQ.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/YlalzFjBD8CgZxDlI6eNWE3PIIflHGzXyY9VzPPeCFo.json\", \"genesis_data/genesis_txs/YlalzFjBD8CgZxDlI6eNWE3PIIflHGzXyY9VzPPeCFo.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/vaJOh_TzVSoEgbgDyKz6ABzd_wt2-ouBTe0gA1F3oMY.json\", \"genesis_data/genesis_txs/vaJOh_TzVSoEgbgDyKz6ABzd_wt2-ouBTe0gA1F3oMY.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/f6MY8LMCwGbKZqXd4dkCROQK0qFMjS5OJAbZq-UhMGA.json\", \"genesis_data/genesis_txs/f6MY8LMCwGbKZqXd4dkCROQK0qFMjS5OJAbZq-UhMGA.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/_u44CiJCcYiOrGffgZoQSmUrJe8CfYD7Nw0MdPX0tUw.json\", \"genesis_data/genesis_txs/_u44CiJCcYiOrGffgZoQSmUrJe8CfYD7Nw0MdPX0tUw.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/5mt79Uz6p83vdLtYRiByyWLqLI2GZBeSTutDRmzw7tM.json\", \"genesis_data/genesis_txs/5mt79Uz6p83vdLtYRiByyWLqLI2GZBeSTutDRmzw7tM.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/CEXuGv3KvVtkf5gkV0ip3g1FF-i12WIDo6IOigORIZA.json\", \"genesis_data/genesis_txs/CEXuGv3KvVtkf5gkV0ip3g1FF-i12WIDo6IOigORIZA.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/NPLj86idALmTczSq2vrZdTs0bjI-e-KI0j3EOWWpu54.json\", \"genesis_data/genesis_txs/NPLj86idALmTczSq2vrZdTs0bjI-e-KI0j3EOWWpu54.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/IpwG_74praZjsu9L91_KWYHrVTpEDwyHZrsHgum4Z8o.json\", \"genesis_data/genesis_txs/IpwG_74praZjsu9L91_KWYHrVTpEDwyHZrsHgum4Z8o.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/qX9u_AprdhyXAPGfh3C94x9AbxwWx9nJSs7g8FSwITM.json\", \"genesis_data/genesis_txs/qX9u_AprdhyXAPGfh3C94x9AbxwWx9nJSs7g8FSwITM.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/87ieWrloTFUdW7YjJqJcINd1M_PBWCzA1dIRFzF4RKM.json\", \"genesis_data/genesis_txs/87ieWrloTFUdW7YjJqJcINd1M_PBWCzA1dIRFzF4RKM.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/xSkMzFablxREj8H_RwoMseAFk-TCwaLVIZMHqXh5DHY.json\", \"genesis_data/genesis_txs/xSkMzFablxREj8H_RwoMseAFk-TCwaLVIZMHqXh5DHY.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/NE7AIvW60iQL_6aagNTSiaMpmLfAfRwbxau5FZLA10g.json\", \"genesis_data/genesis_txs/NE7AIvW60iQL_6aagNTSiaMpmLfAfRwbxau5FZLA10g.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/wUhEm861foyWdxy0SI7CvXRcWuohItlX6Ydqo2NvtY8.json\", \"genesis_data/genesis_txs/wUhEm861foyWdxy0SI7CvXRcWuohItlX6Ydqo2NvtY8.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/1QoMjs6Q3XKklJ9LfovRmGbe4bAy9xY247JfDZqN3Eo.json\", \"genesis_data/genesis_txs/1QoMjs6Q3XKklJ9LfovRmGbe4bAy9xY247JfDZqN3Eo.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/24VRr4yT-_fOndcFYtK2oSO-p9Pm6lNtzQv8E-U43Bc.json\", \"genesis_data/genesis_txs/24VRr4yT-_fOndcFYtK2oSO-p9Pm6lNtzQv8E-U43Bc.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/pVZkxPK8F9VFM5lDp0oTBThaw1RvmwG64wIHFChYJKA.json\", \"genesis_data/genesis_txs/pVZkxPK8F9VFM5lDp0oTBThaw1RvmwG64wIHFChYJKA.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/EnPMt9yzTsxLPR5mD9zUvndxicdYBUNzOlcCPvQlOK8.json\", \"genesis_data/genesis_txs/EnPMt9yzTsxLPR5mD9zUvndxicdYBUNzOlcCPvQlOK8.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/0ooE635sVsd6vdhX3Pb8Ufvuqd7XRjfUbG2eXde_CmI.json\", 
\"genesis_data/genesis_txs/0ooE635sVsd6vdhX3Pb8Ufvuqd7XRjfUbG2eXde_CmI.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/mJUxc7XyUp1HV_VRoi_54geidr26I9PUaiNL4msSNxk.json\", \"genesis_data/genesis_txs/mJUxc7XyUp1HV_VRoi_54geidr26I9PUaiNL4msSNxk.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/4bPVo0hCI3E-ry2mBjvOZsBpNwPM108NT0vnJCxCeJw.json\", \"genesis_data/genesis_txs/4bPVo0hCI3E-ry2mBjvOZsBpNwPM108NT0vnJCxCeJw.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/lq4SrnweWCHnEhw_AV69gMLyBrPxYOmOdVdRIXkHwOg.json\", \"genesis_data/genesis_txs/lq4SrnweWCHnEhw_AV69gMLyBrPxYOmOdVdRIXkHwOg.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/B4e9FBfqZGBszHAhZqTq-TNjb-oG7rYdlMWrQa4CPZU.json\", \"genesis_data/genesis_txs/B4e9FBfqZGBszHAhZqTq-TNjb-oG7rYdlMWrQa4CPZU.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/576xa7WLVidNoEcYPhAm7OlyYgbrp7Z1RBIfqLbVFzw.json\", \"genesis_data/genesis_txs/576xa7WLVidNoEcYPhAm7OlyYgbrp7Z1RBIfqLbVFzw.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/IJsiiIbd-Qs39TAJ67hiRJFsBye_rgQdU9GBid_PnZw.json\", \"genesis_data/genesis_txs/IJsiiIbd-Qs39TAJ67hiRJFsBye_rgQdU9GBid_PnZw.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/CMr-rV5FdlQcRBo4loZzj66EFqwHBmA36tWiRMKGigQ.json\", \"genesis_data/genesis_txs/CMr-rV5FdlQcRBo4loZzj66EFqwHBmA36tWiRMKGigQ.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/gXd75eQL5Yzcn1ba51nORAvb6f_surSnz3xcNlLAxEQ.json\", \"genesis_data/genesis_txs/gXd75eQL5Yzcn1ba51nORAvb6f_surSnz3xcNlLAxEQ.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/eGhF0za2qN5WuadlVZ1iak1S5LxXswHRzIa3j_P-sUM.json\", \"genesis_data/genesis_txs/eGhF0za2qN5WuadlVZ1iak1S5LxXswHRzIa3j_P-sUM.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/00nFXThK86Aog_HfLJc9j0nnXzXSlU6VdGC8qZc5ekI.json\", \"genesis_data/genesis_txs/00nFXThK86Aog_HfLJc9j0nnXzXSlU6VdGC8qZc5ekI.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/Znw-6H_ayGJBReeQm9z9WKulBH1ZzrOovdMsNPcIe_Y.json\", \"genesis_data/genesis_txs/Znw-6H_ayGJBReeQm9z9WKulBH1ZzrOovdMsNPcIe_Y.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/AoSTMf_ZxlcY12bK6_sWj02kssD00K4E-vkHx2vRxG4.json\", \"genesis_data/genesis_txs/AoSTMf_ZxlcY12bK6_sWj02kssD00K4E-vkHx2vRxG4.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/Achd6pqJVZ-1vNMLC977Lu8f20eBmgAv4dIddXql51s.json\", \"genesis_data/genesis_txs/Achd6pqJVZ-1vNMLC977Lu8f20eBmgAv4dIddXql51s.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/HTt6lPYQfcIgUxKPjUt3aQrpwE5e3UA4UT2EI9RxSbw.json\", \"genesis_data/genesis_txs/HTt6lPYQfcIgUxKPjUt3aQrpwE5e3UA4UT2EI9RxSbw.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/QJlE99-614f6XzZ-7VctQjX9DYe5wnO21aHSgg1RhnA.json\", \"genesis_data/genesis_txs/QJlE99-614f6XzZ-7VctQjX9DYe5wnO21aHSgg1RhnA.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/h37LQjpChpTPMquvaxpfFeKt_7oAB5ElDzsdbCQ61n0.json\", \"genesis_data/genesis_txs/h37LQjpChpTPMquvaxpfFeKt_7oAB5ElDzsdbCQ61n0.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/LUdFh6g9auj1LRtk8IUwLoY3e91jIkcSyPKuQQekPY4.json\", \"genesis_data/genesis_txs/LUdFh6g9auj1LRtk8IUwLoY3e91jIkcSyPKuQQekPY4.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/YukfPvGxtYmXFF6wJjDiZcvqmH5YItxwsoLbMxWCVFg.json\", \"genesis_data/genesis_txs/YukfPvGxtYmXFF6wJjDiZcvqmH5YItxwsoLbMxWCVFg.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/FmfkuPmh0vkdv_qbjXBUX1sQ-DmwBFbjuC4punobGy0.json\", \"genesis_data/genesis_txs/FmfkuPmh0vkdv_qbjXBUX1sQ-DmwBFbjuC4punobGy0.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/j3l4tvphmVOyVyFkNdS7ulmexBqPqEvsSJrBsjAFJXc.json\", \"genesis_data/genesis_txs/j3l4tvphmVOyVyFkNdS7ulmexBqPqEvsSJrBsjAFJXc.json\"},\n\t\t{copy, 
\"genesis_data/genesis_txs/GypgExivgblZSA-1n7KjdI0SJOyXwFJkuzzPWS4NID8.json\", \"genesis_data/genesis_txs/GypgExivgblZSA-1n7KjdI0SJOyXwFJkuzzPWS4NID8.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/h0sgGEeQQcmSxg8uyiCOigWtI_r2ex-58nk1xso004c.json\", \"genesis_data/genesis_txs/h0sgGEeQQcmSxg8uyiCOigWtI_r2ex-58nk1xso004c.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/M7oOLbk7TPBanLCS0pzkJSbV1CYoJabbsSDe_pCjhEo.json\", \"genesis_data/genesis_txs/M7oOLbk7TPBanLCS0pzkJSbV1CYoJabbsSDe_pCjhEo.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/LiitFWnODMUA7esa_f49IiMEdN7cTKoKw1cgG2J_eNE.json\", \"genesis_data/genesis_txs/LiitFWnODMUA7esa_f49IiMEdN7cTKoKw1cgG2J_eNE.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/x8KM69OVm6lzslK6ccAE-3EX5sW6CUHBZB-1hbc-J0A.json\", \"genesis_data/genesis_txs/x8KM69OVm6lzslK6ccAE-3EX5sW6CUHBZB-1hbc-J0A.json\"},\n\t\t{copy, \"genesis_data/genesis_txs/TGp-18LYjSWQQ36gs5prU-vDgteOL79aywxXoDS-w0c.json\", \"genesis_data/genesis_txs/TGp-18LYjSWQQ36gs5prU-vDgteOL79aywxXoDS-w0c.json\"}\n\t]},\n\n\t{dev_mode, true},\n\t{include_erts, false},\n\n\t{extended_start_script, false},\n\t{extended_start_script_hook, [\n\t\t{post_start, [\n\t\t\twait_for_vm_start,\n\t\t\t{pid, \"/tmp/arweave.pid\"},\n\t\t\t{wait_for_process, ar_sup}\n\t\t]}\n\t]}\n]}.\n\n{pre_hooks, [\n\t{\"(darwin|linux)\", compile, \"make --jobs --max-load 4 -C apps/arweave/lib\"},\n\t{\"(freebsd|netbsd|openbsd)\", compile, \"gmake --jobs --max-load 4 -C apps/arweave/lib\"},\n\t% Compile NIFs\n\t{\"(linux)\", compile, \"env AR=gcc-ar make all -C apps/arweave/c_src\"},\n\t{\"(darwin)\", compile, \"make all -C apps/arweave/c_src\"},\n\t{\"(freebsd|netbsd|openbsd)\", compile, \"gmake all -C apps/arweave/c_src\"}\n]}.\n{post_hooks, [\n\t{\"(darwin|linux)\", clean, \"make -C apps/arweave/lib clean\"},\n\t{\"(freebsd|netbsd|openbsd)\", clean, \"gmake -C apps/arweave/lib clean\"},\n\t% Clean NIFs\n\t{\"(linux|darwin)\", clean, \"make -C apps/arweave/c_src clean\"},\n\t{\"(freebsd|netbsd|openbsd)\", clean, \"gmake -C apps/arweave/c_src clean\"},\n\t{\"(linux|darwin|freebsd|netbsd|openbsd)\", clean, \"make -C apps/arweave/lib/openssl-sha-lite clean\"}\n]}.\n\n{erl_opts, [\n\t{i, \"apps\"}\n]}.\n{profiles, [\n\t{prod, [\n\t\t{relx, [\n\t\t\t{dev_mode, false},\n\t\t\t{include_erts, true}\n\t\t]}\n\t]},\n\t{test, [\n\t\t{deps, [{meck, \"1.1.0\"}]},\n\t\t{erl_opts, [\n\t\t\t{d, 'DEBUG', debug},\n\t\t\t{d, 'FORKS_RESET', true},\n\t\t\t{d, 'NETWORK_NAME', \"arweave.localtest\"},\n\t\t\t{d, 'AR_TEST', true},\n\t\t\t%% 8 * 256 * 1024\n\t\t\t{d, 'PARTITION_SIZE', 2_000_000},\n\t\t\t%% If PARTITION_SIZE changes, REPLICA_2_9_ENTROPY_COUNT might need to change\n\t\t\t%% as well. 
\n\t\t\t%% 2_000_000 / 32_768 = 61.03515625\n\t\t\t%% (61 + 3) = 64 - the nearest multiple of 32\n\t\t\t{d, 'REPLICA_2_9_ENTROPY_COUNT', 64},\n\t\t\t{d, 'STRICT_DATA_SPLIT_THRESHOLD', 786432},\n\t\t\t%% lower multiplier to allow single-block solutions in tests\n\t\t\t{d, 'POA1_DIFF_MULTIPLIER', 1},\n\t\t\t%% use sha256 instead of randomx to speed up tests\n\t\t\t{d, 'STUB_RANDOMX', true},\n\t\t\t%% Configure VDF to be fast for tests, but not too fast to avoid\n\t\t\t%% excessive pressure on the mining server triggering a log flood.\n\t\t\t{d, 'VDF_DIFFICULTY', 6000}, % ~10ms/step\n\t\t\t{d, 'INITIAL_VDF_DIFFICULTY', 6000},\n\t\t\t{d, 'REPLICA_2_9_PACKING_DIFFICULTY', 2}\n\t\t ]},\n\t\t {relx, [\n\t\t\t {overlay, [{template, \"priv/templates/extended_bin\", \"bin/arweave\"}]}\n\t\t ]}\n\t]},\n\t{e2e, [\n\t\t{deps, [{meck, \"1.1.0\"}]},\n\t\t{erl_opts, [\n\t\t\t{src_dirs, [\"src\", \"test\", \"e2e\"]},\n\t\t\t{d, 'DEBUG', debug},\n\t\t\t{d, 'FORKS_RESET', true},\n\t\t\t{d, 'NETWORK_NAME', \"arweave.e2e\"},\n\t\t\t{d, 'AR_TEST', true},\n\t\t\t{d, 'PARTITION_SIZE', 2_000_000},\n\t\t\t%% If PARTITION_SIZE changes, REPLICA_2_9_ENTROPY_COUNT might need to change\n\t\t\t%% as well. \n\t\t\t%% 2_000_000 / 32_768 = 61.03515625\n\t\t\t%% (61 + 3) = 64 - the nearest multiple of 32\n\t\t\t{d, 'REPLICA_2_9_ENTROPY_COUNT', 64},\n\t\t\t{d, 'STRICT_DATA_SPLIT_THRESHOLD', 786432},\n\t\t\t%% The partition upper bound only gets increased when the vdf session changes\n\t\t\t%% (i.e. every ?NONCE_LIMITER_RESET_FREQUENCY VDF steps), so we need to set\n\t\t\t%% the reset frequency low enough that the VDF session can change during a\n\t\t\t%% single e2e test run.\n\t\t\t{d, 'NONCE_LIMITER_RESET_FREQUENCY', 10},\n\t\t\t{d, 'COMPOSITE_PACKING_EXPIRATION_PERIOD_BLOCKS', 0}\n\t\t]},\n\t\t{relx, [\n\t\t\t {overlay, [{template, \"priv/templates/extended_bin\", \"bin/arweave\"}]}\n\t\t ]}\n\t]},\n\t{localnet, [\n\t\t{erl_opts, [\n\t\t\t{src_dirs, [\"src\", \"test\"]},\n\n\t\t\t%% All peers in your localnet must specify the same NETWORK_NAME, and all requests\n\t\t\t%% to nodes in your network must specify NETWORK_NAME in their X-Network header.\n\t\t\t%% If you clear this value, the mainnet will be assumed.\n\t\t\t{d, 'NETWORK_NAME', \"arweave.localnet\"},\n\n\t\t\t%% When a request is received without specifying the X-Network header, this network\n\t\t\t%% name is assumed. Rather than change this, it's better to make sure your clients\n\t\t\t%% specify the X-Network name as this will avoid potential issues (e.g.\n\t\t\t%% accidentally transferring mainnet AR tokens when you only intended to transfer\n\t\t\t%% localnet tokens). 
This variable is provided for situations where you can't\n\t\t\t%% control the client headers, need for them to be able to make requests to your\n\t\t\t%% localnet, and can manage the risk of an accidental mainnet request getting\n\t\t\t%% processed.\n\t\t\t%% {d, 'DEFAULT_NETWORK_NAME', \"arweave.localnet\"},\n\n\t\t\t{d, 'LOCALNET', true},\n\n\t\t\texport_all,\n\t\t\tno_inline\n\t\t]},\n\t\t{relx, [\n\t\t\t{release, {arweave, \"2.9.6-alpha1\"}, [\n\t\t\t\tarweave_config,\n                                arweave_limiter,\n\t\t\t\t{arweave_diagnostic, load},\n\t\t\t\t{arweave, load},\n\t\t\t\t{recon, load},\n\t\t\t\tb64fast,\n\t\t\t\tjiffy,\n\t\t\t\trocksdb,\n\t\t\t\tprometheus_process_collector\n\t\t\t]},\n\t\t\t{dev_mode, false},\n\t\t\t{include_erts, true}\n\t\t]}\n\t]},\n\n\t{testnet, [\n\t\t{deps, [{meck, \"1.1.0\"}]},\n\t\t{erl_opts, [\n\t\t\t%% -------------------------------------------------------------------------------------\n\t\t\t%% Required configuration for testnet\n\t\t\t%% All values below must be set for the testnet to function properly\n\t\t\t%% -------------------------------------------------------------------------------------\n\t\t\t{d, 'TESTNET', true},\n\t\t\t{d, 'NETWORK_NAME', \"arweave.fast.testnet\"},\n\t\t\t{d, 'TEST_WALLET_ADDRESS', \"MXeFJwxb4y3vL4In3oJu60tQGXGCzFzWLwBUxnbutdQ\"},\n\t\t\t{d, 'TOP_UP_TEST_WALLET_AR', 1000000},\n\n\t\t\t%% The following values all assume the testnet is restarted from height 1588329 using\n\t\t\t%% the flag:\n\t\t\t%% start_from_block 3lIjFuR6nMYwELWwQqZxYn_sj1tESmZgk6bVZewwxtr0X6a8mXG0JH7KAV_5AE2s\n\n\t\t\t%% TESTNET_FORK_HEIGHT should meet the following requirements:\n\t\t\t%% 1. Set to a difficulty retargeting height - i.e. a multiple of\n\t\t\t%%    ?RETARGET_BLOCKS (currently 10)\n\t\t\t%% 2. Set to 1 more than the testnet initialization height.\n\t\t\t%% 3. Set to the height of a block which has not yet been mined on the\n\t\t\t%%    testnet, or one which was already mined on the testnet (i.e. after the testnet\n\t\t\t%%    was forked from mainnet)\n\t\t\t%%\n\t\t\t%% For example, if the testnet was forked off mainnet at\n\t\t\t%% height 1265219 (either through the use of start_from_latest_state or\n\t\t\t%% start_from_block), then TESTNET_FORK_HEIGHT should be set to 1265220.\n\t\t\t{d, 'TESTNET_FORK_HEIGHT', 1588330},\n\n\t\t\t%% -------------------------------------------------------------------------------------\n\t\t\t%% Optional configuration for testnet\n\t\t\t%% Any values below here are not required and can be cleared/deleted as needed\n\t\t\t%% -------------------------------------------------------------------------------------\n\t\t\t{d, 'TESTNET_REWARD_HISTORY_BLOCKS', 120},\n\t\t\t{d, 'TESTNET_LEGACY_REWARD_HISTORY_BLOCKS', 40},\n\t\t\t{d, 'TESTNET_LOCKED_REWARDS_BLOCKS', 40},\n\t\t\t{d, 'TESTNET_TARGET_BLOCK_TIME', 45},\n\t\t\t{d, 'FORK_2_9_HEIGHT', 1588500}\n\t\t]},\n\t\t{relx, [\n\t\t\t{dev_mode, false},\n\t\t\t{include_erts, true},\n\t\t\t{overlay, [\n\t\t\t\t{copy, \"scripts/testnet/benchmark\", \"bin/benchmark\"}\n\t\t\t]}\n\t\t]}\n\t]}\n]}.\n\n% for some reason, probably due to the number of modules\n% arweave got, when testing with cover, the compilation\n% takes a while. to avoid this, few modules have been\n% excluded. 
If it's not enough, the arweave application\n% can be disabled as well.\n{cover_excl_mods, [\n\tar_block_pre_validator_sup,\n\tar_bridge_sup,\n\tar_chunk_storage_sup,\n\tar_data_root_sync_sup,\n\tar_data_sync_sup,\n\tar_events_sup,\n \tar_header_sync_sup,\n \tar_http_sup,\n \tar_kv_sup,\n \tar_mining_sup,\n \tar_node_sup,\n \tar_nonce_limiter_sup,\n \tar_packing_sup,\n \tar_peer_worker_sup,\n \tar_poller_sup,\n \tar_rx4096_nif,\n \tar_rx512_nif,\n \tar_rxsquared_nif,\n \tar_storage_sup,\n \tar_sup,\n \tar_sync_record_sup,\n \tar_tx_emitter_sup,\n \tar_vdf_nif,\n \tar_verify_chunks_sup,\n \tar_webhook_sup,\n \tsecp256k1_nif\n]}.\n\n{cover_excl_apps, [\n\tarweave\n]}.\n\n% generate surefire report by default when using eunit with\n% rebar3\n{eunit_opts, [\n\t{report,{eunit_surefire,[{dir,\"./_build/test/surefire\"}]}}\n]}.\n"
  },
  {
    "path": "release_notes/N.2.9.5/README.md",
    "content": "**This is a substantial update. This software was prepared by the Digital History Association, in cooperation from the wider Arweave ecosystem.**\n\nThis release is primarily a bug fix, stability, and performance release. It includes all changes from all of the 2.9.5 alpha releases. Full details can be found in the release notes for each alpha:\n- [alpha1](https://github.com/ArweaveTeam/arweave/releases/tag/N.2.9.5-alpha1)\n- [alpha2](https://github.com/ArweaveTeam/arweave/releases/tag/N.2.9.5-alpha2)\n- [alpha3](https://github.com/ArweaveTeam/arweave/releases/tag/N.2.9.5-alpha3)\n- [alpha4](https://github.com/ArweaveTeam/arweave/releases/tag/N.2.9.5-alpha4)\n- [alpha5](https://github.com/ArweaveTeam/arweave/releases/tag/N.2.9.5-alpha5)\n- [alpha6](https://github.com/ArweaveTeam/arweave/releases/tag/N.2.9.5-alpha6)\n\nSome of the changes described above were to address regressions introduced in a prior alpha. The full set of changes you can expect when upgrading from 2.9.4.1 are described below.\n\nA special call out to the mining community members who installed and tested each of the alpha releases. Their help was critical in addressing regressions, fixing bugs, and implementing imrprovements. Thank you! Full list of contributors [here](#community-involvement).\n\n## New Binaries\n\nThis release includes an updated set of pre-built binaries:\n- Ubuntu 22:04, erlang R26\n- Ubuntu 24:04, erlang R26\n- rocky9, erlang R26\n- MacOS, erlang R26\n\nThe default `linux` release refers to Ubuntu 22:04, erlang R26.\n\nGoing forward we recommend Arweave be built with erlang R26 rather than erlang R24.\n\nThe MacOS binaries are intended to be used for VDF Servers. Packing and mining on MacOS is still unsupported.\n\n## Changes to miner config\n\n- Several changes to options related to repack-in-place. See [below](#support-for-repack-in-place-from-the-replica29-format).\n- `vdf`: see [below](#optimized-vdf).\n- Several changes to options related to the `verify` tool. See [below](#verify-tool-improvements).\n- `disable_replica_2_9_device_limit`: Disable the device limit for the replica.2.9 format. By default, at most one worker will be active per physical disk at a time, setting this flag removes this limit allowing multiple workers to be active on a given physical disk.\n- Several options to manually configure low level network performance. See help for options starting with `network.`, `http_client.` and `http_api.`.\n- `mining_cache_size_mb`: the default is set to 100MiB per partition being mined (e.g. if you leave `mining_cache_size_mb` unset while mining 64 partitions, your mining cache will be set to 6,400 MiB). \n- The process for running multiple nodes on a single server has changed. Each instance will need to set distinct/unique values for the `ARNODE` and `ARCOOKIE` environment variables. Here is an example script to launch 2 nodes one named `exit` and one named `miner`:\n```\n#!/usr/bin/env bash\nARNODE=exit@127.0.0.1 \\\nARCOOKIE=exit \\\nscreen -dmSL arweave.exit -Logfile ./screenlog.exit \\\n    ./bin/start config_file config.exit.json;\n\nARNODE=miner@127.0.0.1 \\\nARCOOKIE=miner \\\nscreen -dmSL arweave.miner -Logfile ./screenlog.miner \\\n    ./bin/start config_file config.miner.json\n```\n\n## Optimized VDF\n\nThis release includes the optimized VDF algorithm developed by Discord user `hihui`. \n\nTo use this optimized VDF algorithm set the `vdf hiopt_m4` config option. 
By default the node will run with the legacy `openssl` implementation.\n\n## Support for repack-in-place from the `replica.2.9` format\n\nThis release introduces support for repack-in-place from `replica.2.9` to `unpacked` or to a different `replica.2.9` address. In addition we've made several performance improvements and fixed a number of edge case bugs which may previously have caused some chunks to be skipped by the repack process.\n\n### Performance\nDue to how replica.2.9 chunks are processed, the parameters for tuning the repack-in-place performance have changed. There are 5 main considerations:\n- **Repack footprint size**: `replica.2.9` chunks are grouped in footprints of chunks. A full footprint is 1024 chunks distributed evenly across a partition.\n- **Repack batch size**: The repack-in-place process reads some number of chunks, repacks them, and then writes them back to disk. The batch size controls how many contiguous chunks are read at once. Previously a batch size of 10 would mean that 10 chunks would be read, repacked, and written. However in order to handle `replica.2.9` data efficiently, the batch size now indicates the number of *footprints* to process at once. So a batch size of 10 means that 10 footprints will be read, repacked, and written. Since a full footprint is 1024 chunks, the amount of memory required to process a batch size of 10 is now 10,240 chunks or roughly 2.5 GiB. \n- **Available RAM**: The footprint size and batch size drive how much RAM is required by the repack-in-place process. And if you're repacking multiple partitions at once, the RAM requirements can grow quickly.\n- **Disk IO**: If you determine that disk IO is your bottleneck, you'd want to increase the batch size as much as you can, as reading contiguous chunks is generally much faster than reading non-contiguous chunks.\n- **CPU**: However in some cases you may find that CPU is your bottleneck - this can happen when repacking from a legacy format like `spora_2_6`, or can happen when repacking many partitions between 2 `replica.2.9` addresses. The saving grace here is that if CPU is your bottleneck, you can reduce your batch size or footprint size to ease off on your memory utilization.\n\nTo control all these factors, repack-in-place has 2 config options:\n- `repack_batch_size`: controls the batch size - i.e. the number of footprints processed at once\n- `repack_cache_size_mb`: sets the total amount of memory to allocate to the repack-in-place process *per* partition. So if you set `repack_cache_size_mb` to `2000` and are repacking 4 partitions, you can expect the repack-in-place process to consume roughly 8 GiB of memory. Note: the node will automatically set the footprint size based on your configured batch and cache sizes - this typically means that it will *reduce* the footprint size as much as needed. A smaller footprint size will *increase* your CPU load as it will result in your node generating the same entropy multiple times. For example, if your footprint size is 256 the node will need to generate the same entropy 4 times in order to process all 1024 chunks in the full footprint.\n\n### Debugging\n\nThis release also includes a new option on the `data-doctor inspect` tool that may help with debugging packing issues. 
\n\n```\n/bin/data-doctor inspect bitmap <data_dir> <storage_module>\n```\n\nExample: `/bin/data-doctor inspect bitmap /opt/data 36,En2eqsVJARnTVOSh723PBXAKGmKgrGSjQ2YIGwE_ZRI.replica.2.9`\n\nThis will generate a bitmap where every pixel represents the packing state of a specific chunk. The bitmap is laid out so that each *vertical* column of pixels is a complete entropy footprint. Here is an example bitmap:\n\n![bitmap_storage_module_5_En2eqsVJARnTVOSh723PBXAKGmKgrGSjQ2YIGwE_ZRI replica 2 9](https://github.com/user-attachments/assets/d84728c6-6cc0-447d-8f98-2db793fd66a4)\n\nThis bitmap shows the state of one node's partition 5 that has been repacked to replica.2.9. The green pixels are chunks that are in the expected replica.2.9 format, the black pixels are chunks that are missing from the miner's dataset, and the pink pixels are chunks that are too small to be packed (prior to partition ~9, users were allowed to pay for chunks that were smaller than 256KiB - these chunks are stored `unpacked` and can't be packed).\n\n## Performance Improvements\n\n- Improvements to both syncing speed and memory use while syncing\n- In our tests using solo as well as coordinated miners configured to mine while syncing many partitions, we observed steady memory use and full expected hashrate. This improves on 2.9.4.1 performance. Notably: the same tests run on 2.9.4.1 showed growing memory use, ultimately causing an OOM.\n- Reduce the volume of unnecessary network traffic due to a flood of `404` requests when trying to sync chunks from a node which only serves `replica.2.9` data. Note: the benefit of this change will only be seen when most of the nodes in the network upgrade.\n- Performance improvements to HTTP handling that should improve performance more generally.\n- Optimization to speed up the collection of peer intervals when syncing. This can improve syncing performance in some situations.\n- Fix a bug which could cause syncing to occasionally stall out.\n- Optimize the shutdown process. This should help with, but not fully address, the slow node shutdown issues.\n- Fix a bug where a VDF client might get pinned to a slow or stalled VDF server.\n- Several updates to the mining cache logic. These changes address a number of edge case performance and memory bloat issues that can occur while mining.\n- Improve the transaction validation performance; this should reduce the frequency of \"desyncs\". I.e. 
nodes should now be able to handle a higher network transaction volume without stalling\n  - Do not delay ready_for_mining on validator nodes\n  - Make sure identical tx-status pairs do not cause extra mempool updates\n  - Cache the owner address once computed for every TX\n- Reduce the time it takes for a node to join the network:\n  - Do not re-download local blocks on join\n  - Do not re-write written txs on join\n  - Reduce per peer retry budget on join 10 -> 5\n- Fix edge case that could occasionally cause a mining pool to reject a replica.2.9 solution.\n- Fix edge case crash that occurred when a coordinated miner timed out while fetching partitions from peers\n- Fix bug where storage module crossing weave end may cause syncing stall\n- Fix bug where crash during peer interval collection may cause syncing stall\n- Fix race condition where we may not detect double-signing\n- Optionally fix broken chunk storage records on the fly\n  - Set `enable fix_broken_chunk_storage_record` to turn the feature on.\n- Several fixes to improve the Arweave shutdown process\n  - Close hanging HTTP connections\n  - Prevent new HTTP connections from being created during shutdown\n  - Fix a timeout issue\n\nIn addition to the above fixes, we have some guidance on how to interpret some new logging and metrics:\n\n### Guidance: 2.9.5 hashrate appears to be slower than 2.9.4.1\n\n(Reported by discord users EvM, Lawso2517, Qwinn)\n\n#### Symptoms\n- 2.9.4.1 hashrate is higher than 2.9.5\n- 2.9.4.1 hashrate when solo mining might even be higher than the \"ideal\" hashrate listed in the mining report or grafana metrics\n\n#### Resolution\nThe 2.9.4.1 hashrate included invalid hashes and the 2.9.5 hashrate, although lower, includes only valid hashes.\n\n#### Root Cause\n2.9.4.1 (and earlier releases) had a bug which caused miners to generate hashes off of entropy in addition to valid packed data. The replica.2.9 data format lays down a full covering of entropy in each storage module before adding packed chunks. The result is that for any storage module with less than 3.6TB of packed data, there is some amount of data on disk that is just entropy. A bug in the 2.9.4.1 mining algorithm generated hashes off of this entropy causing an inflated hashrate. Often the 2.9.4.1 hashrate is above the estimated \"ideal\" hashrate even when solo mining.\n\nAnother symptom of this bug is the `chunk_not_found` error occasionally reported by miners. This occurs under 2.9.4.1 (and earlier releases) when the miner hashes a range of entropy and generates a hash that exceeds the network difficulty. The miner believes this to be a valid solution and begins to build a block. At some point in the block building process the miner has to validate and include the packed chunk data. However since no packed chunk data exists (only entropy), the solution fails and the error is printed.\n\n2.9.5 fixes this bug so that miners correctly exclude entropy data when mining. This means that under 2.9.5 and later releases miners spend fewer resources hashing entropy data, and generate fewer failed solution errors. 
The reported hashrate on 2.9.5 is lower than 2.9.4.1 because the invalid hashes are no longer being counted.\n\n### Guidance: `cache_limit_exceeded` warning during solo and coordinated mining\n\n(Reported by discord users BerryCZ, mousetu, radion_nizametdinov, qq87237850, Qwinn, Vidiot)\n\n#### Symptoms\n- Logs show `mining_worker_failed_to_reserve_cache_space` warnings, with reason set to `cache_limit_exceeded`\n\n#### Resolution\nThe warning, if seen periodically, is expected and safe to ignore.\n\n#### Root Cause\nAll VDF servers - even those with the exact same VDF time - will be on slightly different steps. This is because new VDF epochs are opened roughly every 20 minutes and are opened when a block is added to the chain. Depending on when your VDF server receives that block it may start calculating the new VDF chain earlier or later than other VDF servers.\n\nThis can cause there to be a gap in the VDF steps generated by two different servers even if they are able to compute new VDF steps at the exact same speed.\n\nWhen a VDF server receives a block that is ahead of it in the VDF chain, it is able to quickly validate and use all the new VDF steps. This can cause the associated miners to receive a batch of VDF steps all at once. In these situations, the miner may exceed its mining cache causing the `cache_limit_exceeded` warning.\n\nHowever this ultimately does not materially impact the miner's true hashrate. A miner will process VDF steps in reverse order (latest steps first) as those are the most valuable steps. The steps being dropped from the cache will be the oldest steps. Old steps *may* still be useful, but there is a far greater chance that any solution mined off an old step will be orphaned. The older the VDF step, the less useful it is.\n\n**TLDR:** the warning, if seen periodically, is expected and safe to ignore.\n\n**Exception:** If you are continually seeing the warning (i.e. not in periodic batches, but constantly and all the time) it may indicate that your miner is not able to keep up with its workload. This can indicate a hardware configuration issue (e.g. disk read rates are too slow), or perhaps a hardware capacity issue (E.g. CPU not fast enough to run hashes on all attached storage module), or some other performance-related issue.\n\n\n## Prometheus metrics\n\n- `ar_mempool_add_tx_duration_milliseconds`: The duration in milliseconds it took to add a transaction to the mempool.\n- `reverify_mempool_chunk_duration_milliseconds`: The duration in milliseconds it took to reverify a chunk of transactions in the mempool.\n- `drop_txs_duration_milliseconds`: The duration in milliseconds it took to drop a chunk of transactions from the mempool\n- `del_from_propagation_queue_duration_milliseconds`: The duration in milliseconds it took to remove a transaction from the propagation queue after it was emitted to peers.\n- `chunk_storage_sync_record_check_duration_milliseconds`: The time in milliseconds it took to check the fetched chunk range is actually registered by the chunk storage.\n- `fixed_broken_chunk_storage_records`: The number of fixed broken chunk storage records detected when reading a range of chunks.\n- `mining_solution`: allows tracking mining solutions. Uses labels to differentiate the mining solution state.\n- `chunks_read`: The counter is incremented every time a chunk is read from `chunk_storage`\n- `chunk_read_rate_bytes_per_second`: The rate, in bytes per second, at which chunks are read from storage. 
The type label can be 'raw' or 'repack'.\n- `chunk_write_rate_bytes_per_second`: The rate, in bytes per second, at which chunks are written to storage.\n- `repack_chunk_states`: The count of chunks in each state. 'type' can be 'cache' or 'queue'.\n- `replica_2_9_entropy_generated`: The number of bytes of replica.2.9 entropy generated.\n- `mining_server_chunk_cache_size`: now includes additional label `type` which can take the value `total` or `reserved`.\n- `mining_server_tasks`: Incremented each time the mining server adds a task to the task queue.\n- `mining_vdf_step`: Incremented each time the mining server processes a VDF step.\n- `kryder_plus_rate_multiplier`: Kryder+ rate multiplier.\n- `endowment_pool_take`: Value we take from endowment pool to miner to compensate difference between expected and real reward.\n- `endowment_pool_give`: Value we give to endowment pool from transaction fees.\n\n\n## `verify` Tool Improvements\n\nThis release contains several improvements to the `verify` tool. Several miners have reported block failures due to invalid or missing chunks. The hope is that the `verify` tool improvements in this release will either allow those errors to be healed, or provide more information about the issue.\n\n### New `verify` modes\n\nThe `verify` tool can now be launched in `log` or `purge` modes. In `log` mode the tool will log errors but will not flag the chunks for healing. In `purge` mode all bad chunks will be marked as invalid and flagged to be resynced and repacked.\n\nTo launch in `log` mode specify the `verify log` flag. To launch in `purge` mode specify the `verify purge` flag. Note: `verify true` is no longer valid and will print an error on launch.\n\n### Chunk sampling\n\nThe `verify` tool will now sample 1,000 chunks and do a full unpack and validation of each chunk. This sampling mode is intended to give a statistical measure of how much data might be corrupt. To change the number of chunks sampled you can use the `verify_samples` option. E.g. `verify_samples 500` will have the node sample 500 chunks.\n\n### More invalid scenarios tested\n\nThis latest version of the `verify` tool detects several new types of bad data. The first time you run the `verify` tool we recommend launching it in `log` mode and running it on a single partition. This should avoid any surprises due to the more aggressive detection logic. If the results are as you expect, then you can relaunch in `purge` mode to clean up any bad data. In particular, if you've misnamed your `storage_module` the `verify` tool will invalidate *all* chunks and force a full repack - running in `log` mode first will allow you to catch this error and rename your `storage_module` before purging all data.\n\n\n## Miscellaneous\n\n- Fix several issues which could cause a node to \"desync\". Desyncing occurs when a node gets stuck at one block height and stops advancing.\n- Add TX polling so that a node will pull missing transactions in addition to receiving them via gossip\n- Add support for DNS pools (multiple IPs behind a single DNS address).\n- Add webhooks for the entire mining solution lifecycle. 
New `solution` webhook added with multiple states `solution_rejected`, `solution_stale`, `solution_partial`, `solution_orphaned`, `solution_accepted`, and `solution_confirmed`.\n- Add a `verify` flag to the `benchmark-vdf` script\n  - When running `benchmark-vdf` you can specify the `verify true` flag to have the script verify the VDF output against a slower \"debug\" VDF algorithm.\n- Support CMake 4 on MacOS\n- Bug fixes to address `chunk_not_found` and `sub_chunk_mismatch` errors. \n\n## Community involvement\n\nA huge thank you to all the Mining community members who contributed to this release by testing the alpha releases, providing feedback, and helping us debug issues!\n\nDiscord users (alphabetical order):\n- AraAraTime\n- BerryCZ\n- bigbang\n- BloodHunter\n- Butcher_\n- core_1_\n- dlmx\n- doesn't stay up late\n- dzeto\n- edzo\n- Evalcast\n- EvM\n- Fox Malder\n- grumpy.003\n- hihui\n- Iba Shinu\n- JanP\n- JamsJun\n- JF\n- jimmyjoe7768\n- lawso2517\n- MaSTeRMinD\n- MCB\n- Merdi Kim\n- metagravity\n- Methistos\n- Michael | Artifact\n- mousetu\n- Niiiko\n- qq87237850\n- Qwinn\n- radion_nizametdinov\n- RedMOoN\n- sam\n- sk\n- smash\n- sumimi\n- T777\n- tashilo\n- Thaseus\n- U genius\n- Vidiot\n- Wednesday\n- wybiacx\n"
  },
  {
    "path": "release_notes/N.2.9.5-alpha5/README.md",
    "content": "**This is an alpha update and may not be ready for production use. This software was prepared by the Digital History Association, in cooperation from the wider Arweave ecosystem.**\n\nThis release includes several syncing and mining performance improvements. It passes all automated tests and has undergone a base level of internal testing, but is not considered production ready. We only recommend upgrading if you wish to take advantage of the new performance improvements.\n\n## Performance improvements\n\nIn all cases we ran tests on a full-weave solo miner, as well as a full-weave coordinated mining cluster. We believe the observed performance improvements are generalizable to other miners, but, as always, the performance observed by a given miner is often influenced by many factors that we are not been able to test for. TLDR: your mileage may vary.\n\n### Syncing\n\nImprovements to both syncing speed and memory use while syncing. The improvements address some regressions that were reported in the 2.9.5 alphas, but also improve on 2.9.4.1 performance.\n\n### Mining\n\nThis release addresses the significant hashrate loss that was observed during Coordinated Mining on the 2.9.5 alphas.\n\n### Syncing + Mining\n\nIn our tests using solo as well as coordinated miners configured to mine while syncing many partitions, we observed steady memory use and full expected hashrate. This addresses some regressions that were reported in the 2.9.5 alphas, but also improves on 2.9.4.1 performance. Notably: the same tests run on 2.9.4.1 showed growing memory use, ultimately causing an OOM.\n\n## Community involvement\n\nA huge thank you to all the Mining community members who contributed to this release by identifying and investigating bugs, sharing debug logs and node metrics, and providing guidance on performance tuning!\n\nDiscord users (alphabetical order):\n- BerryCZ\n- Butcher_\n- edzo\n- Evalcast\n- EvM\n- JF\n- lawso2517\n- MaSTeRMinD\n- qq87237850\n- Qwinn\n- radion_nizametdinov\n- RedMOoN\n- smash\n- T777\n- Vidiot"
  },
  {
    "path": "release_notes/N.2.9.5-alpha6/README.md",
    "content": "**This is an alpha update and may not be ready for production use. This software was prepared by the Digital History Association, in cooperation from the wider Arweave ecosystem.**\n\nThis release addresses several of the mining performance issues that had been reported on previous alphas. It passes all automated tests and has undergone a base level of internal testing, but is not considered production ready. We only recommend upgrading if you wish to take advantage of the new performance improvements.\n\n## Fix: Crash during coordinated mining when a solution is found\n\n(Reported by discord user Vidiot)\n\n### Symptoms\n- After mining well for some time, hashrate dropped to 0\n- Logs had messages like: `Generic server ar_mining_server terminating. Reason: {badarg,[{ar_block,compute_h1,3`\n\n## Fix: `session_not_found` error during coordinated mining\n\n(Reported by discord user Qwinn)\n\n### Symptoms\n- Hahrate lower than expected\n- Logs had `mining_worker_failed_to_add_chunk_to_cache` errors, with reason set to `session_not_found`\n\n## Guidance: `cache_limit_exceeded` warning during solo and coordinated mining\n\n(Reported by discord users BerryCZ, mousetu, radion_nizametdinov, qq87237850, Qwinn, Vidiot)\n\n### Symptoms\n- Logs show `mining_worker_failed_to_reserve_cache_space` warnings, with reason set to `cache_limit_exceeded`\n\n### Resolution\nThe warning, if seen periodically, is expected and safe to ignore.\n\n### Root Cause\nAll VDF servers - even those with the exact same VDF time - will be on slightly different steps. This is because new VDF epochs are opened roughly every 20 minutes and are opened when a block is added to the chain. Depending on when your VDF server receives that block it may start calculating the new VDF chain earlier or later than other VDF servers.\n\nThis can cause there to be a gap in the VDF steps generated by two different servers even if they are able to compute new VDF steps at the exact same speed.\n\nWhen a VDF server receives a block that is ahead of it in the VDF chain, it is able to quickly validate and use all the new VDF steps. This can cause the associated miners to receive a batch of VDF steps all at once. In these situations, the miner may exceed its mining cache causing the `cache_limit_exceeded` warning.\n\nHowever this ultimately does not materially impact the miner's true hashrate. A miner will process VDF steps in reverse order (latest steps first) as those are the most valuable steps. The steps being dropped from the cache will be the oldest steps. Old steps *may* still be useful, but there is a far greater chance that any solution mined off an old step will be orphaned. The older the VDF step, the less useful it is.\n\n**TLDR:** the warning, if seen periodically, is expected and safe to ignore.\n\n**Exception:** If you are continually seeing the warning (i.e. not in periodic batches, but constantly and all the time) it may indicate that your miner is not able to keep up with its workload. This can indicate a hardware configuration issue (e.g. disk read rates are too slow), or perhaps a hardware capacity issue (E.g. CPU not fast enough to run hashes on all attached storage module), or some other performance-related issue.\n\n### Guidance\n- This alpha increases the default cache size from 4 steps to 20 VDF steps. 
This should noticeably reduce (but not eliminate) the frequency of the `cache_limit_exceeded` warning\n- If you want to increase it further you can set the `mining_cache_size_mb` option.\n\n## Guidance: 2.9.5-alphaX hashrate appears to be slower than 2.9.4.1\n\n(Reported by discord users EvM, Lawso2517, Qwinn)\n\n### Symptoms\n- 2.9.4.1 hashrate is higher than 2.9.5-alphaX\n- 2.9.4.1 hashrate when solo mining might even be higher than the \"ideal\" hashrate listed in the mining report or grafana metrics\n\n### Resolution\nThe 2.9.4.1 hashrate included invalid hashes and the 2.9.5-alpha6 hashrate, although lower, includes only valid hashes.\n\n### Root Cause\n2.9.4.1 (and earlier releases) had a bug which caused miners to generate hashes off of entropy in addition to valid packed data. The replica.2.9 data format lays down a full covering of entropy in each storage module before adding packed chunks. The result is that for any storage module with less than 3.6TB of packed data, there is some amount of data on disk that is just entropy. A bug in the 2.9.4.1 mining algorithm generated hashes off of this entropy causing an inflated hashrate. Often the 2.9.4.1 hashrate is above the estimated \"ideal\" hashrate even when solo mining.\n\nAnother symptom of this bug is the `chunk_not_found` error occasionally reported by miners. This occurs under 2.9.4.1 (and earlier releases) when the miner hashes a range of entropy and generates a hash that exceeds the network difficulty. The miner believes this to be a valid solution and begins to build a block. At some point in the block building process the miner has to validate and include the packed chunk data. However since no packed chunk data exists (only entropy), the solution fails and the error is printed.\n\n2.9.5-alpha2 fixed this bug so that miners correctly exclude entropy data when mining. This means that under 2.9.5-alpha2 and later releases miners spend fewer resources hashing entropy data, and generate fewer failed solution errors. The reported hashrate on 2.9.5-alpha2 is lower than 2.9.4.1 because the invalid hashes are no longer being counted.\n\n\n## Community involvement\n\nA huge thank you to all the Mining community members who contributed to this release by identifying and investigating bugs, sharing debug logs and node metrics, and providing guidance on performance tuning!\n\nDiscord users (alphabetical order):\n- BerryCZ\n- EvM\n- JanP\n- lawso2517\n- mousetu\n- qq87237850\n- Qwinn\n- radion_nizametdinov\n- smash\n- T777\n- Vidiot"
  },
  {
    "path": "release_notes/N.2.9.5.1/README.md",
    "content": "# Arweave 2.9.5.1 Patch Release Notes\n\n### This release introduces various stability and validation enhancements.\n\nSeveral input validation steps could crash on invalid values, in some cases halting the arweave node. \n\nThe patch includes graceful validation of certain inputs and defensive deserialization of local binaries.\n"
  },
  {
    "path": "release_notes/README.md",
    "content": "# Arweave Releases\n\nProcess for doing an Arweave release\n\n## Run tests\n\n1. Make sure the automated unit tests are green for both:\n   - [Ubuntu](https://github.com/ArweaveTeam/arweave/actions/workflows/test-amd64-ubuntu-22.04.yml)\n   - [MacOS](https://github.com/ArweaveTeam/arweave/actions/workflows/test-arm64-macos-15.yml)\n\n2. Optionally run the `e2e` test locally (these test can take a couple\n   hours to complete): `./bin/e2e`\n\n## Release procedure\n\nThis section explains how arweave is released using Github Actions.\n\n1. find a release version using `N.X.Y.Z.*` format, for example\n   `N.9.8.7-alpha2`. During the next step, it will be called\n   `${release_version}`.\n\n2. create a new release notes containing the instruction of the new\n   release.\n\n```sh\nmkdir release_notes/${release_version}\ntouch release_notes/${release_version}/README.md\ncat > release_notes/${release_version}/README.md <<EOF\n# New release!\n\nHere the message...\nEOF\n```\n\n3. create a new commit containing the release message updating the version names/numbers in:\n   - `rebar.config`\n   - `arweave.app.src`\n\n4. push this commit to master or via a PR.\n\n```sh\ngit push\n```\n\n5. create a new tag and push it to the repository.\n\n```sh\ngit tag ${release_version}\ngit push origin refs/tags/${release_version}\n```\n\n6. If the tag match the required specification, a github action\n   workflow will be executed to generate the artifacts for the\n   release.\n\n7. After the release is complete, create a new commit bumping the\n   `RELEASE_NUMBER` again to differentiate future `master` builds\n   from the latest release.\n   [Example commit.](https://github.com/ArweaveTeam/arweave/commit/882b9e058f18e7eec9fbc5ee8c9b24f089f94c12)\n\n## Patch Release Procedure\n\nThis section explains how arweave is released when a patch is required\non a specific branch.\n\n1. Ensure the git tree is using the latest branches and/or tags\n\n```sh\ngit fetch -a\n```\n\n2. Ensure the current branch used is `master` and pull the latest\n   commits.\n\n```sh\ngit checkout master\ngit pull\n```\n\n3. Find the latest stable release version in use with the format `N.X.Y.Z`.\n\n```sh\ngit tag\n```\n\n4. Create a new branch called `release/N.X.Y.Z` if it does not exist\n   and reset it to the version `N.X.Y.Z`\n\n```sh\ngit checkout -b release/N.X.Y.Z\ngit reset --hard release/N.X.Y.Z\n```\n\n5. Cherry-pick the patch(es)\n\n```sh\ngit cherry-pick fix/fixed-issues-in-ar-node\n```\n\n6. Create a release note explaining what kind of patches have been\n   applied using `N.X.Y.Z.P`.\n\n```sh\nmkdir release_notes/N.X.Y.Z.P\ntouch release_notes/N.X.Y.Z.P/README.md\ncat > release_notes/N.X.Y.Z.P/README.md <<EOF\n# New patch release!\n\nHere the message for the patches...\nEOF\n```\n\n7. Add the release note in the tree\n\n```sh\ngit add release_notes/N.X.Y.Z.P\ngit commit -am \"commit message\"\n```\n\n8. When the last patch has been added, tag the branch by incrementing\n   the `P` (patch) value if needed.\n\n```sh\ngit tag -m \"${tag_message}\" N.X.Y.Z.P\n```\n\n9. Push the changes including the tag. **warning**: don't use `--tags`\n   with git, it could overwrite some existing tags.\n\n```sh\ngit push origin release/N.X.Y.Z\ngit push origin refs/tags/N.X.Y.Z.P\n```\n\n10. github actions should then take the job and create the release\n    automatically.\n"
  },
  {
    "path": "scripts/full_test_modules.txt",
    "content": "# Full test suite module list shared by local runs and CI.\n# Keep one module per line.\n\n# modules\nar\nar_block\nar_block_cache\nar_chain_stats\nar_chunk_copy\nar_chunk_storage\nar_data_sync_coordinator\nar_deep_hash\nar_device_lock\nar_diff_dag\nar_entropy_gen\nar_entropy_cache\nar_entropy_storage\nar_ets_intervals\nar_events\nar_footprint_record\nar_inflation\nar_intervals\nar_join\nar_kv\narweave_limiter_group\narweave_limiter_metrics_collector\nar_merkle\nar_mining_cache\nar_mining_server\nar_mining_stats\nar_node\nar_node_utils\nar_nonce_limiter\nar_packing_server\nar_patricia_tree\nar_peers\nar_peer_intervals\nar_peer_intervals_discovery_test\nar_peer_worker\nar_pricing\nar_repack\nar_repack_fsm\nar_replica_2_9\nar_retarget\nar_serialize\nar_storage_module\nar_storage\nar_sync_buckets\nar_sync_record\nar_tx_db\nar_unbalanced_merkle\nar_util\nar_verify_chunks\nar_wallet\nar_webhook\nar_pool\n\n# standard\nar_base64_compatibility_tests\nar_config_tests\nar_difficulty_tests\nar_get_chunk_tests\nar_header_sync_tests\nar_http_iface_tests\nar_http_util_tests\nar_info_tests\nar_mempool_tests\nar_mine_randomx_tests\nar_mine_vdf_tests\nar_mining_io_tests\nar_mining_worker_tests\nar_poller_tests\nar_replica_2_9_nif_tests\nar_semaphore_tests\nar_start_from_block_tests\nar_tx_blacklist_tests\nar_tx_replay_pool_tests\nar_vdf_tests\n\n# long running\nar_coordinated_mining_tests\nar_data_roots_sync_tests\nar_data_sync_records_footprints_test\nar_data_sync_recovers_from_corruption_test\nar_data_sync_syncs_data_test\nar_data_sync_syncs_after_joining_test\nar_data_sync_mines_off_only_last_chunks_test\nar_data_sync_mines_off_only_second_last_chunks_test\nar_data_sync_disk_pool_rotation_test\nar_data_sync_enqueue_intervals_test\nar_forced_validation_tests\nar_fork_recovery_tests\nar_tx\nar_packing_tests\nar_poa\nar_vdf_block_validation_tests\nar_vdf_server_tests\nar_vdf_external_update_tests\nar_post_block_tests\nar_reject_chunks_tests\nar_cli_parser\n"
  },
  {
    "path": "scripts/github_workflow.sh",
    "content": "#!/bin/bash\n######################################################################\n# This script has been extracted from the github workflow and can\n# then be reused locally outside of a runner.\n#\n# Usage:\n#   ./github_workflow.sh NAMESPACE\n######################################################################\n\n# print the logs if tests fail\n_print_peer_logs() {\n\tpeer=${1}\n\tif ls ${peer}-*.out 2>&1 >/dev/null\n\tthen\n\t\techo -e \"\\033[0;31m===> Test failed, printing the ${peer} node's output...\\033[0m\"\n\t\tcat ${peer}-*.out\n\telse\n\t\techo -e \"\\033[0;31m===> Test failed without ${peer} output...\\033[0m\"\n\tfi\n}\n\n# check if test can be restarted\n_check_retry() {\n\tlocal first_line_peer1\n\techo -e \"\\033[0;32m===> Checking for retry\\033[0m\"\n\n\t# For debugging purposes, print the peer1 output if the tests failed\n\tif ls peer1-*.out 2>&1 >/dev/null\n\tthen\n\t\tfirst_line_peer1=$(head -n 1 peer1-*.out)\n\tfi\n\n\tfirst_line_main=$(head -n 1 main.out)\n\techo -e \"\\033[0;31m===> First line of peer1 node's output: $first_line_peer1\\033[0m\"\n\techo -e \"\\033[0;31m===> First line of main node's output: $first_line_main\\033[0m\"\n\n\t# Check if it is a retryable error\n\tif [[ \"$first_line_peer1\" == \"Protocol 'inet_tcp': register/listen error: \"* ]]\n\tthen\n\t\techo \"Retrying test because of inet_tcp error...\"\n\t\tRETRYABLE=1\n\t\tsleep 1\n\telif [[ \"$first_line_peer1\" == \"Protocol 'inet_tcp': the name\"* ]]\n\tthen\n\t\techo \"Retrying test because of inet_tcp clash...\"\n\t\tRETRYABLE=1\n\t\tsleep 1\n\telif [[ \"$first_line_main\" == *\"econnrefused\"* ]]\n\tthen\n\t\techo \"Retrying test because of econnrefused...\"\n\t\tRETRYABLE=1\n\t\tsleep 1\n\telse\n\t\t_print_peer_logs peer1\n\t\t_print_peer_logs peer2\n\t\t_print_peer_logs peer3\n\t\t_print_peer_logs peer4\n\tfi\n}\n\n# set github environment\n_set_github_env() {\n\tif test -z \"${GITHUB_ENV}\"\n\tthen\n\t\techo \"GITHUB_ENV variable not set\"\n\t\treturn 1\n\tfi\n\n\tlocal exit_code=${1}\n\t# Set the exit_code output variable using Environment Files\n\techo \"exit_code=${exit_code}\" >> ${GITHUB_ENV}\n\treturn 0\n}\n\n######################################################################\n# main script\n######################################################################\nMODE=\"${1}\"\nNAMESPACE_FLAG=\"${2}\"\nPWD=$(pwd)\nEXIT_CODE=0\nexport PATH=\"${PWD}/_build/erts/bin:${PATH}\"\nexport ERL_EPMD_ADDRESS=\"127.0.0.1\"\nexport NAMESPACE=\"${NAMESPACE_FLAG}\"\n\nif test \"${MODE}\" = \"e2e\"\nthen\n\texport ERL_PATH_ADD=\"$(echo ${PWD}/_build/e2e/lib/*/ebin)\"\n\texport ERL_PATH_TEST=\"${PWD}/_build/e2e/lib/arweave/e2e\"\nelse\n\texport ERL_PATH_ADD=\"$(echo ${PWD}/_build/test/lib/*/ebin)\"\n\texport ERL_PATH_TEST=\"${PWD}/_build/test/lib/arweave/test\"\nfi\n\nexport ERL_PATH_CONF=\"${PWD}/config/sys.config\"\nexport ERL_TEST_OPTS=\"-pa ${ERL_PATH_ADD} ${ERL_PATH_TEST} -config ${ERL_PATH_CONF}\"\n\nRETRYABLE=1\nwhile [[ $RETRYABLE -eq 1 ]]\ndo\n\tRETRYABLE=0\n\tset +e\n\tset -x\n\tNODE_NAME=\"main-${NAMESPACE}@127.0.0.1\"\n\tCOOKIE=${NAMESPACE}\n\terl +S 4:4 $ERL_TEST_OPTS \\\n\t\t-noshell \\\n\t\t-name \"${NODE_NAME}\" \\\n\t\t-setcookie \"${COOKIE}\" \\\n\t\t-run ar ${MODE} \"${NAMESPACE}\" \\\n\t\t-s init stop 2>&1 | tee main.out\n\tEXIT_CODE=${PIPESTATUS[0]}\n\tset +x\n\tset -e\n\n\tif [[ ${EXIT_CODE} -ne 0 ]]\n\tthen\n\t\t_check_retry\n\tfi\ndone\n\n# exit with the exit code of the tests\n_set_github_env ${EXIT_CODE}\nexit ${EXIT_CODE}\n"
  },
  {
    "path": "scripts/ierl_kernel.sh",
    "content": "#!/usr/bin/env sh\n\nset -eu\n\nSCRIPT_DIR=\"$(dirname \"$0\")\"\nREPO_ROOT=\"$SCRIPT_DIR/..\"\nIERL_BIN=\"$REPO_ROOT/.venv/bin/ierl\"\n\nif [ -x \"$IERL_BIN\" ]; then\n  exec \"$IERL_BIN\" \"$@\"\nfi\n\nexec ierl \"$@\"\n"
  },
  {
    "path": "scripts/list_test_modules.sh",
    "content": "#!/usr/bin/env bash\nset -euo pipefail\n\nSCRIPT_DIR=\"$(cd \"$(dirname \"$0\")\" && pwd -P)\"\nMODULES_FILE=\"${SCRIPT_DIR}/full_test_modules.txt\"\nFORMAT=\"${1:-plain}\"\n\ncase \"${FORMAT}\" in\n  json)\n    awk '\n      BEGIN {\n        first = 1\n        printf(\"[\")\n      }\n\n      /^[[:space:]]*#/ || /^[[:space:]]*$/ {\n        next\n      }\n\n      {\n        gsub(/^[[:space:]]+|[[:space:]]+$/, \"\", $0)\n        if (first == 0) {\n          printf(\",\")\n        }\n        printf(\"\\\"%s\\\"\", $0)\n        first = 0\n      }\n\n      END {\n        print \"]\"\n      }\n    ' \"${MODULES_FILE}\"\n    ;;\n  plain)\n    awk '\n      /^[[:space:]]*#/ || /^[[:space:]]*$/ {\n        next\n      }\n\n      {\n        gsub(/^[[:space:]]+|[[:space:]]+$/, \"\", $0)\n        print\n      }\n    ' \"${MODULES_FILE}\"\n    ;;\n  *)\n    echo \"Usage: $0 [plain|json]\" >&2\n    exit 1\n    ;;\nesac\n"
  },
  {
    "path": "scripts/run_notebook.sh",
    "content": "#!/usr/bin/env bash\n\nset -euo pipefail\n\nSCRIPT_DIR=\"$(cd \"$(dirname \"$0\")\" && pwd)\"\nREPO_ROOT=\"$(cd \"$SCRIPT_DIR/..\" && pwd)\"\n\nNOTEBOOK_NAME=\"pricing_transition_localnet\"\nNOTEBOOK_PATH=\"\"\nNOTEBOOK_URL_PATH=\"\"\nNOTEBOOK_DIR=\"\"\nNOTEBOOK_ABS_PATH=\"\"\n\nNODE_NAME_FULL=\"main-localnet@127.0.0.1\"\nNODE_COOKIE=\"localnet\"\nJOIN_TIMEOUT_SEC=\"${JOIN_TIMEOUT_SEC:-300}\"\nJOIN_POLL_SEC=\"${JOIN_POLL_SEC:-1}\"\nJUPYTER_PORT=\"${JUPYTER_PORT:-8888}\"\nJUPYTER_OPEN_BROWSER=\"${JUPYTER_OPEN_BROWSER:-true}\"\nJUPYTER_DATA_DIR=\"${JUPYTER_DATA_DIR:-$REPO_ROOT/.tmp/jupyter}\"\nJUPYTER_CONFIG_DIR=\"${JUPYTER_CONFIG_DIR:-$REPO_ROOT/.jupyter}\"\nLOCALNET_HTTP_HOST=\"${LOCALNET_HTTP_HOST:-127.0.0.1}\"\nLOCALNET_HTTP_PORT=\"${LOCALNET_HTTP_PORT:-1984}\"\nLOCALNET_NETWORK_NAME=\"${LOCALNET_NETWORK_NAME:-arweave.localnet}\"\n\nSTARTED_LOCALNET=0\nLOCALNET_PID=\"\"\n\nresolve_notebook() {\n  if [ -z \"$NOTEBOOK_PATH\" ]; then\n    NOTEBOOK_PATH=\"notebooks/${NOTEBOOK_NAME}.ipynb\"\n  fi\n\n  if [ \"${NOTEBOOK_PATH:0:1}\" = \"/\" ]; then\n    NOTEBOOK_ABS_PATH=\"$NOTEBOOK_PATH\"\n  else\n    NOTEBOOK_ABS_PATH=\"$REPO_ROOT/$NOTEBOOK_PATH\"\n  fi\n\n  if [ ! -f \"$NOTEBOOK_ABS_PATH\" ]; then\n    echo \"Notebook not found: $NOTEBOOK_ABS_PATH\"\n    exit 1\n  fi\n\n  NOTEBOOK_DIR=\"$(dirname \"$NOTEBOOK_ABS_PATH\")\"\n  NOTEBOOK_URL_PATH=\"$(basename \"$NOTEBOOK_ABS_PATH\")\"\n}\n\nstart_localnet() {\n  if [ \"$(uname -s)\" == \"Darwin\" ]; then\n    RANDOMX_JIT=\"disable randomx_jit\"\n  else\n    RANDOMX_JIT=\n  fi\n\n  export ERL_EPMD_ADDRESS=127.0.0.1\n\n  ./ar-rebar3 localnet compile\n\n  ERL_LOCALNET_OPTS=\"-pa $(./rebar3 as localnet path) $(./rebar3 as localnet path --base)/lib/arweave/test -config config/sys.config\"\n\n  erl $ERL_LOCALNET_OPTS -name \"$NODE_NAME_FULL\" -setcookie \"$NODE_COOKIE\" -noshell -s ar shell_localnet -eval \"timer:sleep(infinity).\" &\n  LOCALNET_PID=\"$!\"\n  STARTED_LOCALNET=1\n}\n\nfetch_info() {\n  curl -fsS --max-time 2 \\\n    -H \"x-network: ${LOCALNET_NETWORK_NAME}\" \\\n    \"http://${LOCALNET_HTTP_HOST}:${LOCALNET_HTTP_PORT}/info\" 2>/dev/null | tr -d '\\n' || true\n}\n\nparse_info_network() {\n  local info=\"$1\"\n  echo \"$info\" | sed -E -n 's/.*\"network\"[[:space:]]*:[[:space:]]*\"([^\"]*)\".*/\\1/p'\n}\n\nparse_info_height() {\n  local info=\"$1\"\n  echo \"$info\" | sed -E -n 's/.*\"height\"[[:space:]]*:[[:space:]]*(-?[0-9]+).*/\\1/p'\n}\n\nwait_for_info_height() {\n  local start\n  local info\n  local network\n  local height\n  start=\"$(date +%s)\"\n\n  while true; do\n    info=\"$(fetch_info)\"\n    if [ -n \"$info\" ]; then\n      network=\"$(parse_info_network \"$info\")\"\n      if [ -z \"$network\" ]; then\n        echo \"Failed to parse network from /info: $info\"\n        return 1\n      fi\n      if [ \"$network\" != \"$LOCALNET_NETWORK_NAME\" ]; then\n        echo \"Found node at ${LOCALNET_HTTP_HOST}:${LOCALNET_HTTP_PORT} with network ${network}, expected ${LOCALNET_NETWORK_NAME}.\"\n        return 1\n      fi\n\n      height=\"$(parse_info_height \"$info\")\"\n      if [ -z \"$height\" ]; then\n        echo \"Failed to parse height from /info: $info\"\n        return 1\n      fi\n      if [ \"$height\" != \"-1\" ]; then\n        return 0\n      fi\n    fi\n\n    if [ \"$(( $(date +%s) - start ))\" -ge \"$JOIN_TIMEOUT_SEC\" ]; then\n      echo \"Timed out waiting for localnet /info height.\"\n      return 1\n    fi\n\n    sleep \"$JOIN_POLL_SEC\"\n  done\n}\n\ncleanup() {\n  if [ \"$STARTED_LOCALNET\" = 
\"1\" ] && [ -n \"$LOCALNET_PID\" ]; then\n    kill \"$LOCALNET_PID\" >/dev/null 2>&1 || true\n  fi\n}\n\nrun_notebook() {\n  local jupyter_cmd\n  jupyter_cmd=()\n  export PATH=\"$REPO_ROOT/.venv/bin:$REPO_ROOT/scripts:$PATH\"\n\n  if command -v jupyter >/dev/null 2>&1; then\n    jupyter_cmd=(\"jupyter\")\n  elif command -v uv >/dev/null 2>&1 && [ -d \"$REPO_ROOT/.venv\" ]; then\n    jupyter_cmd=(\"uv\" \"run\" \"jupyter\")\n  else\n    jupyter_cmd=(\"jupyter\")\n  fi\n\n  if [ \"$JUPYTER_OPEN_BROWSER\" = \"true\" ]; then\n    JUPYTER_DATA_DIR=\"$JUPYTER_DATA_DIR\" JUPYTER_CONFIG_DIR=\"$JUPYTER_CONFIG_DIR\" \"${jupyter_cmd[@]}\" notebook \\\n      --NotebookApp.use_redirect_file=False \\\n      --NotebookApp.default_url=\"/notebooks/${NOTEBOOK_URL_PATH}\" \\\n      --ServerApp.default_url=\"/notebooks/${NOTEBOOK_URL_PATH}\" \\\n      --NotebookApp.notebook_dir=\"$NOTEBOOK_DIR\" \\\n      --ServerApp.root_dir=\"$NOTEBOOK_DIR\" \\\n      --port \"$JUPYTER_PORT\"\n  else\n    JUPYTER_DATA_DIR=\"$JUPYTER_DATA_DIR\" JUPYTER_CONFIG_DIR=\"$JUPYTER_CONFIG_DIR\" \"${jupyter_cmd[@]}\" notebook \\\n      --NotebookApp.default_url=\"/notebooks/${NOTEBOOK_URL_PATH}\" \\\n      --ServerApp.default_url=\"/notebooks/${NOTEBOOK_URL_PATH}\" \\\n      --NotebookApp.notebook_dir=\"$NOTEBOOK_DIR\" \\\n      --ServerApp.root_dir=\"$NOTEBOOK_DIR\" \\\n      --no-browser \\\n      --port \"$JUPYTER_PORT\"\n  fi\n}\n\ncd \"$REPO_ROOT\"\n\ntrap cleanup EXIT\n\nresolve_notebook\n\nif [ -z \"$(fetch_info)\" ]; then\n  start_localnet\nfi\n\nwait_for_info_height\nrun_notebook\n"
  },
  {
    "path": "scripts/run_notebook_headless.sh",
    "content": "#!/usr/bin/env bash\n\nset -euo pipefail\n\nSCRIPT_DIR=\"$(cd \"$(dirname \"$0\")\" && pwd)\"\nREPO_ROOT=\"$(cd \"$SCRIPT_DIR/..\" && pwd)\"\n\nNOTEBOOK_NAME=\"pricing_transition_localnet\"\nNOTEBOOK_PATH=\"\"\nNOTEBOOK_ARG=\"\"\n\nwhile [ \"$#\" -gt 0 ]; do\n  NOTEBOOK_ARG=\"$1\"\n  shift 1\ndone\n\nNODE_NAME_FULL=\"main-localnet@127.0.0.1\"\nNODE_COOKIE=\"localnet\"\nJOIN_TIMEOUT_SEC=\"${JOIN_TIMEOUT_SEC:-300}\"\nJOIN_POLL_SEC=\"${JOIN_POLL_SEC:-1}\"\nEXEC_TIMEOUT_SEC=\"${EXEC_TIMEOUT_SEC:-1200}\"\nKERNEL_NAME=\"${ERLANG_JUPYTER_KERNEL:-erlang}\"\nJUPYTER_DATA_DIR=\"${JUPYTER_DATA_DIR:-$REPO_ROOT/.tmp/jupyter}\"\nJUPYTER_CONFIG_DIR=\"${JUPYTER_CONFIG_DIR:-$REPO_ROOT/.jupyter}\"\nLOCALNET_HTTP_HOST=\"${LOCALNET_HTTP_HOST:-127.0.0.1}\"\nLOCALNET_HTTP_PORT=\"${LOCALNET_HTTP_PORT:-1984}\"\nLOCALNET_NETWORK_NAME=\"${LOCALNET_NETWORK_NAME:-arweave.localnet}\"\n\nSTARTED_LOCALNET=0\nLOCALNET_PID=\"\"\n\nresolve_notebook() {\n  if [ -z \"$NOTEBOOK_PATH\" ]; then\n    case \"${NOTEBOOK_ARG:-$NOTEBOOK_NAME}\" in\n      *.ipynb|*/*)\n        NOTEBOOK_PATH=\"${NOTEBOOK_ARG:-$NOTEBOOK_NAME}\"\n        ;;\n      *)\n        NOTEBOOK_PATH=\"notebooks/${NOTEBOOK_ARG:-$NOTEBOOK_NAME}.ipynb\"\n        ;;\n    esac\n  fi\n\n  if [ ! -f \"$NOTEBOOK_PATH\" ]; then\n    echo \"Notebook not found: $NOTEBOOK_PATH\"\n    exit 1\n  fi\n}\n\nstart_localnet() {\n  if [ \"$(uname -s)\" == \"Darwin\" ]; then\n    RANDOMX_JIT=\"disable randomx_jit\"\n  else\n    RANDOMX_JIT=\n  fi\n\n  export ERL_EPMD_ADDRESS=127.0.0.1\n\n  ./ar-rebar3 localnet compile\n\n  ERL_LOCALNET_OPTS=\"-pa $(./rebar3 as localnet path) $(./rebar3 as localnet path --base)/lib/arweave/test -config config/sys.config\"\n\n  erl $ERL_LOCALNET_OPTS -name \"$NODE_NAME_FULL\" -setcookie \"$NODE_COOKIE\" -noshell -s ar shell_localnet -eval \"timer:sleep(infinity).\" &\n  LOCALNET_PID=\"$!\"\n  STARTED_LOCALNET=1\n}\n\nfetch_info() {\n  curl -fsS --max-time 2 \\\n    -H \"x-network: ${LOCALNET_NETWORK_NAME}\" \\\n    \"http://${LOCALNET_HTTP_HOST}:${LOCALNET_HTTP_PORT}/info\" 2>/dev/null | tr -d '\\n' || true\n}\n\nparse_info_network() {\n  local info=\"$1\"\n  echo \"$info\" | sed -E -n 's/.*\"network\"[[:space:]]*:[[:space:]]*\"([^\"]*)\".*/\\1/p'\n}\n\nparse_info_height() {\n  local info=\"$1\"\n  echo \"$info\" | sed -E -n 's/.*\"height\"[[:space:]]*:[[:space:]]*(-?[0-9]+).*/\\1/p'\n}\n\nwait_for_info_height() {\n  local start\n  local info\n  local network\n  local height\n  start=\"$(date +%s)\"\n\n  while true; do\n    info=\"$(fetch_info)\"\n    if [ -n \"$info\" ]; then\n      network=\"$(parse_info_network \"$info\")\"\n      if [ -z \"$network\" ]; then\n        echo \"Failed to parse network from /info: $info\"\n        return 1\n      fi\n      if [ \"$network\" != \"$LOCALNET_NETWORK_NAME\" ]; then\n        echo \"Found node at ${LOCALNET_HTTP_HOST}:${LOCALNET_HTTP_PORT} with network ${network}, expected ${LOCALNET_NETWORK_NAME}.\"\n        return 1\n      fi\n\n      height=\"$(parse_info_height \"$info\")\"\n      if [ -z \"$height\" ]; then\n        echo \"Failed to parse height from /info: $info\"\n        return 1\n      fi\n      if [ \"$height\" != \"-1\" ]; then\n        return 0\n      fi\n    fi\n\n    if [ \"$(( $(date +%s) - start ))\" -ge \"$JOIN_TIMEOUT_SEC\" ]; then\n      echo \"Timed out waiting for localnet /info height.\"\n      return 1\n    fi\n\n    sleep \"$JOIN_POLL_SEC\"\n  done\n}\n\ncleanup() {\n  if [ \"$STARTED_LOCALNET\" = \"1\" ] && [ -n \"$LOCALNET_PID\" ]; then\n    kill \"$LOCALNET_PID\" 
>/dev/null 2>&1 || true\n  fi\n}\n\nrun_notebook() {\n  local jupyter_cmd\n  local tmp_dir\n  local tmp_output\n  local tmp_root\n  jupyter_cmd=()\n  export PATH=\"$REPO_ROOT/.venv/bin:$REPO_ROOT/scripts:$PATH\"\n\n  if command -v jupyter >/dev/null 2>&1; then\n    jupyter_cmd=(\"jupyter\")\n  elif command -v uv >/dev/null 2>&1 && [ -d \"$REPO_ROOT/.venv\" ]; then\n    jupyter_cmd=(\"uv\" \"run\" \"jupyter\")\n  else\n    jupyter_cmd=(\"jupyter\")\n  fi\n\n  tmp_root=\"${NOTEBOOK_TMP_DIR:-$REPO_ROOT/.tmp/nbconvert}\"\n  mkdir -p \"$tmp_root\"\n  tmp_dir=\"$(mktemp -d \"$tmp_root/notebook.XXXXXX\")\"\n  tmp_output=\"$(basename \"$NOTEBOOK_PATH\")\"\n\n  JUPYTER_DATA_DIR=\"$JUPYTER_DATA_DIR\" JUPYTER_CONFIG_DIR=\"$JUPYTER_CONFIG_DIR\" \"${jupyter_cmd[@]}\" nbconvert \\\n    --to notebook \\\n    --execute \\\n    --output \"$tmp_output\" \\\n    --output-dir \"$tmp_dir\" \\\n    --ExecutePreprocessor.timeout=\"$EXEC_TIMEOUT_SEC\" \\\n    --ExecutePreprocessor.kernel_name=\"$KERNEL_NAME\" \\\n    \"$NOTEBOOK_PATH\"\n\n  if [ \"${NOTEBOOK_SAVE_OUTPUTS:-}\" = \"1\" ]; then\n    mv \"$tmp_dir/$tmp_output\" \"$NOTEBOOK_PATH\"\n  fi\n\n  rm -rf \"$tmp_dir\"\n}\n\ncd \"$REPO_ROOT\"\n\ntrap cleanup EXIT\n\nresolve_notebook\n\nif [ -z \"$(fetch_info)\" ]; then\n  start_localnet\nfi\n\nwait_for_info_height\nrun_notebook\n"
  },
  {
    "path": "scripts/setup_notebook_env.sh",
    "content": "#!/usr/bin/env bash\n\nset -euo pipefail\n\nSCRIPT_DIR=\"$(cd \"$(dirname \"$0\")\" && pwd)\"\nREPO_ROOT=\"$(cd \"$SCRIPT_DIR/..\" && pwd)\"\n\nERLANG_KERNEL_NAME=\"${ERLANG_JUPYTER_KERNEL:-erlang}\"\nIERL_URL=\"${IERL_URL:-https://github.com/filmor/ierl/releases/latest/download/ierl}\"\nIERL_PATH=\"${IERL_PATH:-$REPO_ROOT/.tmp/ierl}\"\nJUPYTER_DATA_DIR=\"${JUPYTER_DATA_DIR:-$REPO_ROOT/.tmp/jupyter}\"\n\ncd \"$REPO_ROOT\"\n\nmkdir -p \"$REPO_ROOT/.tmp\"\nmkdir -p \"$JUPYTER_DATA_DIR\"\n\nif ! command -v python3 >/dev/null 2>&1; then\n  echo \"python3 is not installed or not on PATH.\"\n  exit 1\nfi\n\nif [ -d \"$REPO_ROOT/.venv\" ]; then\n  echo \"Using existing virtual environment at: $REPO_ROOT/.venv\"\nelse\n  python3 -m venv \"$REPO_ROOT/.venv\"\nfi\n\"$REPO_ROOT/.venv/bin/python\" -m pip install --upgrade pip\n\"$REPO_ROOT/.venv/bin/python\" -m pip install jupyter pandas\n\nif ! command -v curl >/dev/null 2>&1; then\n  echo \"curl is not installed or not on PATH.\"\n  exit 1\nfi\n\nif [ ! -x \"$IERL_PATH\" ]; then\n  curl -L \"$IERL_URL\" -o \"$IERL_PATH\"\n  chmod +x \"$IERL_PATH\"\nfi\n\nif [ -d \"$REPO_ROOT/.venv/bin\" ]; then\n  install -m 0755 \"$IERL_PATH\" \"$REPO_ROOT/.venv/bin/ierl\"\nfi\n\ncat > \"$REPO_ROOT/scripts/ierl_kernel.sh\" <<'EOF'\n#!/usr/bin/env sh\n\nset -eu\n\nSCRIPT_DIR=\"$(dirname \"$0\")\"\nREPO_ROOT=\"$SCRIPT_DIR/..\"\nIERL_BIN=\"$REPO_ROOT/.venv/bin/ierl\"\n\nif [ -x \"$IERL_BIN\" ]; then\n  exec \"$IERL_BIN\" \"$@\"\nfi\n\nexec ierl \"$@\"\nEOF\n\nchmod +x \"$REPO_ROOT/scripts/ierl_kernel.sh\"\n\ninstall_kernel() {\n  local kernel_dir=\"$JUPYTER_DATA_DIR/kernels/$ERLANG_KERNEL_NAME\"\n  local kernel_json=\"$kernel_dir/kernel.json\"\n  local kernel_wrapper=\"$kernel_dir/ierl_kernel.sh\"\n\n  mkdir -p \"$kernel_dir\"\n\n  cat > \"$kernel_wrapper\" <<'EOF'\n#!/usr/bin/env sh\n\nset -eu\n\nSCRIPT_DIR=\"$(cd \"$(dirname \"$0\")\" && pwd)\"\nREPO_ROOT=\"$(cd \"$SCRIPT_DIR/../../../..\" && pwd)\"\nIERL_BIN=\"$REPO_ROOT/.venv/bin/ierl\"\n\nif [ -x \"$IERL_BIN\" ]; then\n  if [ \"${NOTEBOOK_SKIP_COMPILE:-0}\" = \"0\" ]; then\n    if [ ! -f \"$REPO_ROOT/_build/localnet/lib/arweave/ebin/ar_node.beam\" ]; then\n      (cd \"$REPO_ROOT\" && ./ar-rebar3 localnet compile)\n    fi\n  fi\n  if [ -d \"$REPO_ROOT/_build/localnet/lib\" ]; then\n    export ERL_LIBS=\"$REPO_ROOT/_build/localnet/lib\"\n  fi\n  exec \"$IERL_BIN\" \"$@\"\nfi\n\nexec ierl \"$@\"\nEOF\n\n  chmod +x \"$kernel_wrapper\"\n\n  cat > \"$kernel_json\" <<'EOF'\n{\n  \"argv\": [\n    \"{resource_dir}/ierl_kernel.sh\",\n    \"kernel\",\n    \"erlang\",\n    \"-f\",\n    \"{connection_file}\"\n  ],\n  \"display_name\": \"Erlang\",\n  \"language\": \"erlang\"\n}\nEOF\n}\n\ninstall_kernel\n\nif ! PATH=\"$REPO_ROOT/.venv/bin:$PATH\" JUPYTER_DATA_DIR=\"$JUPYTER_DATA_DIR\" \"$REPO_ROOT/.venv/bin/jupyter\" kernelspec list 2>/dev/null | grep -q \"[[:space:]]${ERLANG_KERNEL_NAME}[[:space:]]\"; then\n  echo \"Kernel not found after install: ${ERLANG_KERNEL_NAME}\"\n  echo \"Check kernelspec list: $REPO_ROOT/.venv/bin/jupyter kernelspec list\"\n  exit 1\nfi\n\necho \"Notebook environment ready.\"\n"
  },
  {
    "path": "scripts/surefire_to_html.py",
    "content": "#!/usr/bin/env python3\n######################################################################\n# A simple script to convert a surefire report from XML to HTML.\n# see: https://www.erlang.org/doc/apps/eunit/eunit_surefire.html\n# see: https://maven.apache.org/surefire/maven-surefire-report-plugin/\n######################################################################\nimport xml\nimport sys\nimport xml.etree.ElementTree as ET\n\ndef usage():\n    print(\"Usage: %s [PATH|-]\" % sys.argv[0])\n\ndef main():\n    if len(sys.argv) <= 1:\n        usage()\n        return 1\n    if sys.argv[1] == \"-\":\n        return from_stdin()\n    if sys.argv[1]:\n        return from_file(sys.argv[1])\n    return 1\n\ndef from_stdin():\n    data = sys.stdin.readlines()\n    element = ET.fromstringlist(data)\n    return convert(element)\n\ndef from_file(file):\n    tree = ET.parse(file)\n    element = tree.getroot()\n    return convert(element)\n\ndef convert(element):\n    attrib = element.attrib\n    print(f\"\"\"<table>\n<tr><td>name</td><td>{attrib[\"name\"]}</td></tr>\n<tr><td>tests</td><td>{attrib[\"tests\"]}</td></tr>\n<tr><td>failures</td><td>{attrib[\"failures\"]}</td></tr>\n<tr><td>errors</td><td>{attrib[\"errors\"]}</td></tr>\n<tr><td>skipped</td><td>{attrib[\"skipped\"]}</td></tr>\n<tr><td>time</td><td>{attrib[\"time\"]}</td></tr>\n</table>\"\"\")\n    return 0\n\nif __name__ == '__main__':\n    sys.exit(main())\n"
  },
  {
    "path": "scripts/system_info.sh",
    "content": "#!/usr/bin/env sh\n######################################################################\n# Script used to export information about local software installed,\n# useful in case of debugging.\n######################################################################\nCHECK_SOFTWARE=\"cc cmake cpp clang curl erl g++ gcc git make rsync wget pkg-config\"\nCHECK_LIBS=\"libssl gmp sqlite3 ncurses\"\n\n# function helper to check software version\n_software_version() {\n  local name=\"${1}\"\n  local flag=\"--version\"\n  test \"${name}\" = \"erl\" && flag=\"-version\"\n\n  if which \"${name}\" 2>&1 >/dev/null\n  then\n    local path=$(which \"${name}\")\n    local version=$(${name} ${flag} 2>&1 | head -n1)\n    echo \"${name}:\"\n    echo \"  path: ${path}\"\n    echo \"  version: ${version}\"\n  else\n    echo \"${name}: not found\"\n  fi\n}\n\n# wrapper around erl command to easily evaluate erlang\n# code from the shell\n_erl() {\n  local eval=\"${1}\"\n  local erl=\"erl\n    -mode embedded -noshell -noinput\n    -eval '${eval}.'\n    -eval 'init:stop().'\n  \"\n  eval ${erl}\n}\n\n# print erlang/beam information\n_erlang_version() {\n  if which erl 2>&1 >/dev/null\n  then\n    echo \"erlang/beam:\"\n    _erl '\n      io:format(\"\\ \\ root_dir: ~s~n\", [code:root_dir()]),\n      io:format(\"\\ \\ lib_dir: ~s~n\", [code:lib_dir()]),\n      io:format(\"\\ \\ modules:~n\"),\n      [\n        io:format(\"\\ \\ \\ \\ ~s: ~s~n\",[X,Y])\n      ||\n        {X,Y,_} <- code:all_available()\n      ]\n    '\n  fi\n}\n\n# function helper to check library version\n_lib_version() {\n  local name=\"${1}\"\n  if pkg-config --exists \"${name}\"\n  then\n    echo \"${name}:\"\n    echo \"  version: $(pkg-config --modversion ${name})\"\n    echo \"  flags: $(pkg-config --libs --cflags --define-prefix ${name})\"\n  else\n    echo \"${name}: not found\"\n  fi\n}\n\n######################################################################\n# main script\n######################################################################\n# check software\nfor s in ${CHECK_SOFTWARE}\ndo\n  _software_version ${s}\ndone\n\n# check libraries\nif $(which pkg-config 2>&1 >/dev/null)\nthen\n  for l in ${CHECK_LIBS}\n  do\n    _lib_version ${l}\n  done\nfi\n\n# check specific erlang vm and modules\n_erlang_version\n"
  },
  {
    "path": "scripts/testnet/benchmark",
    "content": "#!/usr/bin/env bash\n\nset -e\n\nSCRIPT_DIR=\"$(dirname \"$0\")\"\n$SCRIPT_DIR/check-nofile\n\n# Sets $ARWEAVE and $COMMAND\nsource $SCRIPT_DIR/arweave.env\n\necho \"Moving the benchmark folder to benchmark.old...\"\nrm -rf benchmark.old\nif [ -d benchmark ]; then\n    mv -i benchmark benchmark.old;\nfi\n\n$ARWEAVE foreground -run ar main $RANDOMX_JIT init mine data_dir benchmark\n"
  },
  {
    "path": "testnet/assert_testnet.sh",
    "content": "#!/bin/bash\n\nARWEAVE_DIR=\"$(cd \"$(dirname \"$0\")/..\" && pwd)\"\n\nALL_NODES+=(\ntestnet-1\ntestnet-2\ntestnet-3\ntestnet-4\ntestnet-5\ntestnet-6\n)\n\n# Get the current hostname\ncurrent_host=$(hostname -f)\n\n# Check if current hostname is in the list of testnet servers\nis_testnet_server=0\nfor server in \"${ALL_NODES[@]}\"; do\n    if [[ \"$current_host\" == \"$server\" ]]; then\n        is_testnet_server=1\n        break\n    fi\ndone\n\n# If not a testnet server, abort\nif [[ \"$is_testnet_server\" -eq 0 ]]; then\n    exit 1\nfi\n\nmkdir -p /arweave-build/mainnet\nmkdir -p /arweave-build/testnet\n"
  },
  {
    "path": "testnet/backup_data.sh",
    "content": "#!/bin/bash\n\nARWEAVE_DIR=\"$(cd \"$(dirname \"$0\")/..\" && pwd)\"\n\nif ! $ARWEAVE_DIR/testnet/assert_testnet.sh; then\n\techo \"Error: This script must be run on a testnet server.\"\n\texit 1\nfi\n\nif [ $# -ne 1 ]; then\n\techo \"backup_data.sh <backup name>\"\n    exit 1\nfi\n\nNAME=$1\nBACKUP_DIR=\"/arweave-backups/${NAME}/\"\n\nif [ -d \"$BACKUP_DIR\" ]; then\n    echo \"Error: Backup directory $BACKUP_DIR already exists.\"\n    exit 1\nfi\n\nset -x\nmkdir -p $BACKUP_DIR\ncp -rf /arweave-data/data_sync_state $BACKUP_DIR\ncp -rf /arweave-data/header_sync_state $BACKUP_DIR\ncp -rf /arweave-data/ar_tx_blacklist $BACKUP_DIR\ncp -rf /arweave-data/disk_cache $BACKUP_DIR\ncp -rf /arweave-data/rocksdb $BACKUP_DIR\ncp -rf /arweave-data/txs $BACKUP_DIR\ncp -rf /arweave-data/wallet_lists $BACKUP_DIR\ncp -rf /arweave-data/wallets $BACKUP_DIR\n{ set +x; } 2>/dev/null\necho\n\n"
  },
  {
    "path": "testnet/clear_data.sh",
    "content": "#!/bin/bash\n\nARWEAVE_DIR=\"$(cd \"$(dirname \"$0\")/..\" && pwd)\"\n\nif ! $ARWEAVE_DIR/testnet/assert_testnet.sh; then\n\techo \"Error: This script must be run on a testnet server.\"\n\texit 1\nfi\n\nread -p \"Do you really want to delete all files and directories in /arweave-data \\\nexcept for storage_modules and wallets? [y/N] \" response\n\nif [[ \"$response\" =~ ^([yY][eE][sS]|[yY])$ ]]; then\n    for item in /arweave-data/*; do\n\t\tfilename=$(basename \"$item\")\n        if [[ \"$filename\" != \"wallets\" && ! \"$filename\" =~ ^storage_module ]]; then\n            echo rm -rf \"$item\"\n\t\t\trm -rf \"$item\"\n        fi\n    done\n    echo \"Cleanup complete!\"\nelse\n    echo \"Operation cancelled.\"\nfi"
  },
  {
    "path": "testnet/config/testnet-1.json",
    "content": "{\n    \"storage_modules\": [\n        \"7,1099511627776,PyDuArRDMyRzK1IR5aoM6woO6YhVTUavldx-ZlTluDk\",\n        \"7,1099511627776,HnjnoDf25mJroiFgY3FJLYw3EsUtcF9LDcJYMI3gKZs\",\n        \"112,1099511627776,HnjnoDf25mJroiFgY3FJLYw3EsUtcF9LDcJYMI3gKZs\"\n    ],\n    \"mining_addr\": \"HnjnoDf25mJroiFgY3FJLYw3EsUtcF9LDcJYMI3gKZs\",\n    \"peers\": [\n        \"testnet-2.arweave.xyz\",\n        \"testnet-3.arweave.xyz\",\n        \"testnet-4.arweave.xyz\",\n        \"testnet-5.arweave.xyz\",\n        \"testnet-6.arweave.xyz\"\n    ],\n    \"coordinated_mining\": true,\n    \"cm_api_secret\": \"testnet_cm_secret\",\n    \"cm_exit_peer\": \"testnet-2.arweave.xyz\",\n    \"cm_peers\": [\n        \"testnet-2.arweave.xyz\",\n        \"testnet-3.arweave.xyz\",\n        \"testnet-6.arweave.xyz\"\n    ],\n    \"debug\": true,\n    \"mine\": true,\n    \"enable\": [\n        \"remove_orphaned_storage_module_data\",\n        \"randomx_large_pages\",\n        \"pack_served_chunks\"\n    ],\n    \"data_dir\": \"/arweave-data\",\n    \"requests_per_minute_limit\": 9000,\n    \"mining_cache_size_mb\": 3200\n}"
  },
  {
    "path": "testnet/config/testnet-2.json",
    "content": "{\n    \"storage_modules\": [\n        \"40,1099511627776,HnjnoDf25mJroiFgY3FJLYw3EsUtcF9LDcJYMI3gKZs\",\n        \"8,1099511627776,HnjnoDf25mJroiFgY3FJLYw3EsUtcF9LDcJYMI3gKZs\"\n    ],\n    \"mining_addr\": \"HnjnoDf25mJroiFgY3FJLYw3EsUtcF9LDcJYMI3gKZs\",\n    \"peers\": [\n        \"testnet-1.arweave.xyz\",\n        \"testnet-3.arweave.xyz\",\n        \"testnet-4.arweave.xyz\",\n        \"testnet-5.arweave.xyz\",\n        \"testnet-6.arweave.xyz\"\n    ],\n    \"vdf_server_trusted_peers\": [\n        \"testnet-4.arweave.xyz\"\n    ],\n    \"coordinated_mining\": true,\n    \"cm_api_secret\": \"testnet_cm_secret\",\n    \"cm_peers\": [\n        \"testnet-1.arweave.xyz\",\n        \"testnet-3.arweave.xyz\",\n        \"testnet-6.arweave.xyz\"\n    ],\n    \"debug\": true,\n    \"mine\": true,\n    \"enable\": [\n        \"remove_orphaned_storage_module_data\",\n        \"randomx_large_pages\",\n        \"pack_served_chunks\"\n    ],\n    \"data_dir\": \"/arweave-data\",\n    \"requests_per_minute_limit\": 9000\n}"
  },
  {
    "path": "testnet/config/testnet-3.json",
    "content": "{\n    \"storage_modules\": [\n        \"51,1099511627776,peHwdWtsEr27dC7SeTT1hoympOePTO7vlJt2zhQMAtg\",\n        \"137969,1073741824,peHwdWtsEr27dC7SeTT1hoympOePTO7vlJt2zhQMAtg\",\n        \"137970,1073741824,peHwdWtsEr27dC7SeTT1hoympOePTO7vlJt2zhQMAtg\"\n    ],\n    \"mining_addr\": \"peHwdWtsEr27dC7SeTT1hoympOePTO7vlJt2zhQMAtg\",\n    \"peers\": [\n        \"testnet-1.arweave.xyz\",\n        \"testnet-2.arweave.xyz\",\n        \"testnet-4.arweave.xyz\",\n        \"testnet-5.arweave.xyz\",\n        \"testnet-6.arweave.xyz\"\n    ],\n    \"vdf_server_trusted_peers\": [\n        \"testnet-4.arweave.xyz\"\n    ],\n    \"debug\": true,\n    \"mine\": true,\n    \"enable\": [\n        \"remove_orphaned_storage_module_data\",\n        \"randomx_large_pages\",\n        \"pack_served_chunks\"\n    ],\n    \"data_dir\": \"/arweave-data\",\n    \"requests_per_minute_limit\": 9000,\n    \"mining_cache_size_mb\": 2400\n}"
  },
  {
    "path": "testnet/config/testnet-4.json",
    "content": "{\n    \"storage_modules\": [\n        \"70,1099511627776,hDdptPiuAlrP5RxEHwZoffm7obIyvvBi40T5PPvp57w\",\n        \"71,1099511627776,hDdptPiuAlrP5RxEHwZoffm7obIyvvBi40T5PPvp57w\",\n        \"245,500000000000,hDdptPiuAlrP5RxEHwZoffm7obIyvvBi40T5PPvp57w\",\n        \"1226,100000000000,hDdptPiuAlrP5RxEHwZoffm7obIyvvBi40T5PPvp57w\"\n    ],\n    \"mining_addr\": \"hDdptPiuAlrP5RxEHwZoffm7obIyvvBi40T5PPvp57w\",\n    \"peers\": [\"testnet-1.arweave.xyz\", \"testnet-2.arweave.xyz\"],\n    \"vdf_client_peers\": [\n        \"testnet-3.arweave.xyz\"\n    ],\n    \"header_sync_jobs\": 0,\n    \"debug\": true,\n    \"mine\": true,\n    \"enable\": [\n        \"remove_orphaned_storage_module_data\",\n        \"public_vdf_server\",\n        \"randomx_large_pages\",\n        \"pack_served_chunks\"\n    ],\n    \"data_dir\": \"/arweave-data\",\n    \"requests_per_minute_limit\": 9000\n}\n"
  },
  {
    "path": "testnet/config/testnet-5.json",
    "content": "{\n    \"storage_modules\": [\n        \"53,1099511627776,v0AnxIi2DhwdyKUEHR_GrHjIGJtv1ImSB2z2ZWyQzSc,repack_in_place,c5if16ZXh5ooCsCVLumeqgl7Z73lGI8xf8PQOvnb_CE.replica.2.9\",\n        \"19,3298534883328,v0AnxIi2DhwdyKUEHR_GrHjIGJtv1ImSB2z2ZWyQzSc,repack_in_place,c5if16ZXh5ooCsCVLumeqgl7Z73lGI8xf8PQOvnb_CE.replica.2.9\",\n        \"20,3298534883328,v0AnxIi2DhwdyKUEHR_GrHjIGJtv1ImSB2z2ZWyQzSc,repack_in_place,c5if16ZXh5ooCsCVLumeqgl7Z73lGI8xf8PQOvnb_CE.replica.2.9\",\n        \"7,1099511627776,v0AnxIi2DhwdyKUEHR_GrHjIGJtv1ImSB2z2ZWyQzSc,repack_in_place,c5if16ZXh5ooCsCVLumeqgl7Z73lGI8xf8PQOvnb_CE.replica.2.9\",\n        \"40,1099511627776,c5if16ZXh5ooCsCVLumeqgl7Z73lGI8xf8PQOvnb_CE.replica.2.9\",\n        \"70,1099511627776,v0AnxIi2DhwdyKUEHR_GrHjIGJtv1ImSB2z2ZWyQzSc,repack_in_place,c5if16ZXh5ooCsCVLumeqgl7Z73lGI8xf8PQOvnb_CE.replica.2.9\",\n        \"71,1099511627776,v0AnxIi2DhwdyKUEHR_GrHjIGJtv1ImSB2z2ZWyQzSc,repack_in_place,c5if16ZXh5ooCsCVLumeqgl7Z73lGI8xf8PQOvnb_CE.replica.2.9\"\n    ],\n    \"mining_addr\": \"c5if16ZXh5ooCsCVLumeqgl7Z73lGI8xf8PQOvnb_CE\",\n    \"peers\": [\n        \"testnet-1.arweave.xyz\",\n        \"testnet-2.arweave.xyz\",\n        \"testnet-3.arweave.xyz\",\n        \"testnet-4.arweave.xyz\",\n        \"testnet-6.arweave.xyz\"\n    ],\n    \"debug\": true,\n    \"mine\": true,\n    \"enable\": [\n        \"remove_orphaned_storage_module_data\",\n        \"randomx_large_pages\",\n        \"pack_served_chunks\"\n    ],\n    \"data_dir\": \"/arweave-data\",\n    \"requests_per_minute_limit\": 9000\n}\n"
  },
  {
    "path": "testnet/config/testnet-6.json",
    "content": "{\n    \"storage_modules\": [\n        \"19,3298534883328,HnjnoDf25mJroiFgY3FJLYw3EsUtcF9LDcJYMI3gKZs\",\n        \"20,3298534883328,HnjnoDf25mJroiFgY3FJLYw3EsUtcF9LDcJYMI3gKZs\",\n        \"40,1099511627776,HnjnoDf25mJroiFgY3FJLYw3EsUtcF9LDcJYMI3gKZs\",\n        \"53,1099511627776,HnjnoDf25mJroiFgY3FJLYw3EsUtcF9LDcJYMI3gKZs\",\n        \"70,1099511627776,HnjnoDf25mJroiFgY3FJLYw3EsUtcF9LDcJYMI3gKZs\",\n        \"71,1099511627776,HnjnoDf25mJroiFgY3FJLYw3EsUtcF9LDcJYMI3gKZs\",\n        \"7,1099511627776,HnjnoDf25mJroiFgY3FJLYw3EsUtcF9LDcJYMI3gKZs\"\n    ],\n    \"mining_addr\": \"HnjnoDf25mJroiFgY3FJLYw3EsUtcF9LDcJYMI3gKZs\",\n    \"peers\": [\n        \"testnet-1.arweave.xyz\",\n        \"testnet-2.arweave.xyz\",\n        \"testnet-3.arweave.xyz\",\n        \"testnet-4.arweave.xyz\",\n        \"testnet-5.arweave.xyz\"\n    ],\n    \"vdf_server_trusted_peers\": [\n        \"testnet-4.arweave.xyz\"\n    ],\n    \"cm_api_secret\": \"testnet_cm_secret\",\n    \"cm_exit_peer\": \"testnet-2.arweave.xyz\",\n    \"cm_peers\": [\n        \"testnet-1.arweave.xyz\",\n        \"testnet-2.arweave.xyz\",\n        \"testnet-3.arweave.xyz\"\n    ],\n    \"debug\": true,\n    \"mine\": true,\n    \"enable\": [\n        \"remove_orphaned_storage_module_data\",\n        \"randomx_large_pages\",\n        \"pack_served_chunks\"\n    ],\n    \"data_dir\": \"/arweave-data\",\n    \"requests_per_minute_limit\": 9000\n}"
  },
  {
    "path": "testnet/rebuild_mainnet.sh",
    "content": "#!/bin/bash\n\nARWEAVE_DIR=\"$(cd \"$(dirname \"$0\")/..\" && pwd)\"\n\nif ! $ARWEAVE_DIR/testnet/assert_testnet.sh; then\n\techo \"Error: This script must be run on a testnet server.\"\n\texit 1\nfi\n\nmkdir -p /arweave-build/mainnet\nrm -rf /arweave-build/mainnet/*\n\necho \"$0 $@\" > /arweave-build/mainnet/build.command\n\ncd $ARWEAVE_DIR\nrm -rf $ARWEAVE_DIR/_build/prod/rel/arweave/*\n$ARWEAVE_DIR/rebar3 as prod tar\ntar xf $ARWEAVE_DIR/_build/prod/rel/arweave/arweave-*.tar.gz -C /arweave-build/mainnet\n\n"
  },
  {
    "path": "testnet/rebuild_testnet.sh",
    "content": "#!/bin/bash\n\nARWEAVE_DIR=\"$(cd \"$(dirname \"$0\")/..\" && pwd)\"\n\nif ! $ARWEAVE_DIR/testnet/assert_testnet.sh; then\n\techo \"Error: This script must be run on a testnet server.\"\n\texit 1\nfi\n\nmkdir -p /arweave-build/testnet\nrm -rf /arweave-build/testnet/*\n\necho \"$0 $@\" > /arweave-build/testnet/build.command\n\ncd $ARWEAVE_DIR\nrm -rf $ARWEAVE_DIR/_build/testnet/rel/arweave/*\n$ARWEAVE_DIR/rebar3 as testnet tar\ntar xf $ARWEAVE_DIR/_build/testnet/rel/arweave/arweave-*.tar.gz -C /arweave-build/testnet\n"
  },
  {
    "path": "testnet/restore_data.sh",
    "content": "#!/bin/bash\n\nARWEAVE_DIR=\"$(cd \"$(dirname \"$0\")/..\" && pwd)\"\n\nif ! $ARWEAVE_DIR/testnet/assert_testnet.sh; then\n\techo \"Error: This script must be run on a testnet server.\"\n\texit 1\nfi\n\nif [ $# -ne 1 ]; then\n\techo \"restore_data.sh <backup name>\"\n    exit 1\nfi\n\nNAME=$1\nBACKUP_DIR=\"/arweave-backups/${NAME}/\"\n\nif [ ! -d \"$BACKUP_DIR\" ]; then\n    echo \"Error: Backup directory $BACKUP_DIR does not exist.\"\n    exit 1\nfi\n\nDIRECTORIES=(\n\t\"data_sync_state\"\n\t\"header_sync_state\"\n    \"ar_tx_blacklist\"\n    \"disk_cache\"\n    \"rocksdb\"\n    \"txs\"\n    \"wallet_lists\"\n    \"wallets\"\n)\n\n# Warn about the deletion\necho \"The following files/directories will be DELETED:\"\nfor DIR in \"${DIRECTORIES[@]}\"; do\n    echo \"/arweave-data/$DIR\"\ndone\n\n# Prompt for confirmation\necho \"Are you sure you want to continue? (yes/no)\"\nread -r RESPONSE\n\nif [[ \"$RESPONSE\" == \"yes\" ]]; then\n    # Proceed with deletion\n\t\n    for DIR in \"${DIRECTORIES[@]}\"; do\n\t\tFULL_PATH=\"/arweave-data/$DIR\"\n        if [ -e \"$FULL_PATH\" ]; then\n\t\t\tset -x\n            rm -rf \"$FULL_PATH\"\n\t\t\t{ set +x; } 2>/dev/null\n        fi\n    done\nelse\n    # Abort the operation\n    echo \"Operation aborted.\"\n\texit 0\nfi\n\nfor DIR in \"${DIRECTORIES[@]}\"; do\n\tset -x\n\tcp -rf $BACKUP_DIR/$DIR /arweave-data/$DIR\n\t{ set +x; } 2>/dev/null\ndone\n\necho\n\n"
  },
  {
    "path": "testnet/start_mainnet.sh",
    "content": "#!/bin/bash\n\nARWEAVE_DIR=\"$(cd \"$(dirname \"$0\")/..\" && pwd)\"\n\nif ! $ARWEAVE_DIR/testnet/assert_testnet.sh; then\n\techo \"Error: This script must be run on a testnet server.\"\n\texit 1\nfi\n\nif [[ ! -f \"/arweave-build/mainnet/bin/start\" ]]; then\n    echo \"Arweave start script not found. Please run rebuild_mainnet.sh first.\"\n\texit 1\nfi\n\nconfig_file=\"$ARWEAVE_DIR/testnet/config/$(hostname -f).json\"\nSCREEN_CMD=\"screen -dmsL arweave /arweave-build/mainnet/bin/start config_file $config_file vdf_server_trusted_peer vdf-server-3.arweave.xyz\"\n\necho \"$SCREEN_CMD\"\necho \"$SCREEN_CMD\" > /arweave-build/mainnet/run.sh\nchmod +x /arweave-build/mainnet/run.sh\n\ncd /arweave-build/mainnet\n./run.sh"
  },
  {
    "path": "testnet/start_testnet.sh",
    "content": "#!/bin/bash\n\n# Function to display help\ndisplay_help() {\n    echo \"Usage: $0 [<extra flags>]\"\n    echo \"   <extra flags>: start_from_block <block> or start_from_latest_state is required when \"\n    echo \"                  launching the pilot node with the start_from_block flag.\"\n}\n\nARWEAVE_DIR=\"$(cd \"$(dirname \"$0\")/..\" && pwd)\"\n\nif ! $ARWEAVE_DIR/testnet/assert_testnet.sh; then\n\techo \"Error: This script must be run on a testnet server.\"\n\texit 1\nfi\n\nif [[ ! -f \"/arweave-build/testnet/bin/start\" ]]; then\n    echo \"Arweave start script not found. Please run rebuild_testnet.sh first.\"\n\texit 1\nfi\n\nnode=$(hostname -f)\nconfig_file=\"$ARWEAVE_DIR/testnet/config/$(hostname -f).json\"\nblacklist=\"transaction_blacklist_url \\\"${BLACKLIST_URL}\\\"\"\nSCREEN_CMD=\"screen -dmsL arweave /arweave-build/testnet/bin/start $blacklist config_file $config_file $*\"\n\necho \"$SCREEN_CMD\"\necho \"$SCREEN_CMD\" > /arweave-build/testnet/run.sh\nchmod +x /arweave-build/testnet/run.sh\n\ncd /arweave-build/testnet\n./run.sh\n"
  }
]