[
  {
    "path": ".dockerignore",
    "content": "./bin/\n./pkg/\n.git\n"
  },
  {
    "path": ".github/CODEOWNERS",
    "content": "# release configuration\n/.release/                              @hashicorp/release-engineering\n/.github/workflows/build.yml            @hashicorp/release-engineering\n"
  },
  {
    "path": ".github/CONTRIBUTING.md",
    "content": "# Contributing to Levant\n\n**First** of all welcome and thank you for even considering to contribute to the Levant project. If you're unsure or afraid of anything, just ask or submit the issue or pull request anyways. You won't be yelled at for giving your best effort.\n\nIf you wish to work on Levant itself or any of its built-in components, you will first need [Go](https://golang.org/) installed on your machine (version 1.9+ is required) and ensure your [GOPATH](https://golang.org/doc/code.html#GOPATH) is correctly configured.\n\n## Issues\n\nRemember to craft your issues in such a way as to help others who might be facing similar challenges. Give your issues meaningful titles, that offer context. Please try to use complete sentences in your issue. Everything is okay to submit as an issue, even questions or ideas.\n\nNot every contribution requires an issue, but all bugs and significant changes to core functionality do. A pull request to fix a bug or implement a core change will not be accepted without a corresponding bug report.\n\n## What Good Issues Look Like\n\nLevant includes a default issue template, please endeavor to provide as much information as possible.\n\n1. **Avoid raising duplicate issues.** Please use the GitHub issue search feature to check whether your bug report or feature request has been mentioned in the past.\n\n1. **Provide Bug Details.** When filing a bug report, include debug logs, version details and stack traces. Your issue should provide:\n    1. Guidance on **how to reproduce the issue.**\n    1. Tell us **what you expected to happen.**\n    1. Tell us **what actually happened.**\n    1. Tell us **what version of Levant and Nomad you're using.**\n\n1. **Provide Feature Details.** When filing a feature request, include background on why you're requesting the feature and how it will be useful to others. 
If you have a design proposal to implement the feature, please include these details so the maintainers can provide feedback on the proposed approach.\n\n## Pull Requests\n\n**All pull requests must include a description.** The description should, at a minimum, provide background on the purpose of the pull request. Consider providing an overview of why the work is taking place; don't assume familiarity with the history. If the pull request is related to an issue, make sure to mention the issue number(s).\n\nTry to keep pull requests tidy, and be prepared for feedback. Everyone is welcome to contribute to Levant, but we do try to maintain a high standard of code quality, so expect review comments. Feel free to open a pull request about anything. **Everyone** is welcome.\n\n## Get Early Feedback\n\nIf you are contributing, do not feel the need to sit on your contribution until it is perfectly polished and complete. It helps everyone involved for you to seek feedback as early as you possibly can. Submitting an early, unfinished version of your contribution for feedback in no way prejudices your chances of getting that contribution accepted, and can save you from putting a lot of work into a contribution that is not suitable for the project.\n\n## Code Review\n\nPull requests will not be merged until they've been code reviewed by at least one maintainer. You should implement any code review feedback unless you strongly object to it. In the event that you object to the code review feedback, you should make your case clearly and calmly. If, after doing so, the feedback is judged to still apply, you must either apply the feedback or withdraw your contribution.\n\n# Tests and Checks\n\nAs Levant continues to mature, additional test harnesses will be implemented. Once these harnesses are in place, tests will be required for all bug fixes and features. No exceptions.\n\n## Linting\n\nAll Go code in your pull request must pass `lint` checks.
You can run lint on all Go files using the `make lint` target. All lint checks are automatically enforced by CI tests.\n\n## Formatting\n\n**Do your best to follow existing conventions you see in the codebase**, and ensure your code is formatted with `go fmt`. You can run `fmt` on all Go files using the `make fmt` target. All format checks are automatically enforced by CI tests.\n\n## Testing\n\nTests are required for all new functionality where practical; certain portions of Levant have no tests, but this should be the exception and not the rule.\n\n# Building\n\nLevant is linted, tested and built using make:\n\n```\nmake\n```\n\nThe resulting binary is stored in the project root directory and named `levant-local`, which can be invoked as required. By default, the binary is built for the host system only. If you wish to build for different architectures and operating systems, please see the [golang docs](https://golang.org/doc/install/source) for the full list of available `GOOS` and `GOARCH` values.\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE.md",
    "content": "**Description**\n\n<!--\nBriefly describe the problem you are having in a few paragraphs.\n-->\n\n**Relevant Nomad job specification file**\n\n```\n(paste your output here)\n```\n\n**Output of `levant version`:**\n\n```\n(paste your output here)\n```\n\n**Output of `consul version`:**\n\n```\n(paste your output here)\n```\n\n**Output of `nomad version`:**\n\n```\n(paste your output here)\n```\n\n**Additional environment details:**\n\n**Debug log outputs from Levant:**\n\n<!--\nIf the log fragment is particularly long, please paste it into a gist or\nsimilar and paste the link here.\n-->\n"
  },
  {
    "path": ".github/dependabot.yml",
    "content": "version: 2\n\nupdates:\n  - package-ecosystem: \"github-actions\"\n    directory: \"/\"\n    schedule:\n      interval: \"daily\""
  },
  {
    "path": ".github/workflows/actionlint.yml",
    "content": "# If the repository is public, be sure to change to GitHub hosted runners\nname: Lint GitHub Actions Workflows\non:\n  pull_request:\npermissions:\n  contents: read\njobs:\n  actionlint:\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2\n      - name: \"Check workflow files\"\n        uses: docker://docker.mirror.hashicorp.services/rhysd/actionlint:latest\n"
  },
  {
    "path": ".github/workflows/build.yml",
    "content": "name: build\non:\n  push:\n  workflow_dispatch:\n  workflow_call:\n\nenv:\n  PKG_NAME: \"levant\"\n\njobs:\n  get-go-version:\n    name: \"Determine Go toolchain version\"\n    runs-on: ubuntu-22.04\n    outputs:\n      go-version: ${{ steps.get-go-version.outputs.go-version }}\n    steps:\n      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2\n      - name: Determine Go version\n        id: get-go-version\n        run: |\n          echo \"Building with Go $(cat .go-version)\"\n          echo \"go-version=$(cat .go-version)\" >> \"$GITHUB_OUTPUT\"\n\n  get-product-version:\n    runs-on: ubuntu-22.04\n    outputs:\n      product-version: ${{ steps.get-product-version.outputs.product-version }}\n    steps:\n      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2\n      - name: get product version\n        id: get-product-version\n        run: |\n          make version\n          echo \"product-version=$(make version)\" >> \"$GITHUB_OUTPUT\"\n\n  generate-metadata-file:\n    needs: get-product-version\n    runs-on: ubuntu-22.04\n    outputs:\n      filepath: ${{ steps.generate-metadata-file.outputs.filepath }}\n    steps:\n      - name: \"Checkout directory\"\n        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2\n      - name: Generate metadata file\n        id: generate-metadata-file\n        uses: hashicorp/actions-generate-metadata@fdbc8803a0e53bcbb912ddeee3808329033d6357 # v1.1.1\n        with:\n          version: ${{ needs.get-product-version.outputs.product-version }}\n          product: ${{ env.PKG_NAME }}\n          repository: ${{ env.PKG_NAME }}\n          repositoryOwner: \"hashicorp\"\n      - uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3\n        if: ${{ !env.ACT }}\n        with:\n          name: metadata.json\n          path: ${{ steps.generate-metadata-file.outputs.filepath }}\n\n  generate-ldflags:\n    needs: 
get-product-version\n    runs-on: ubuntu-22.04\n    outputs:\n      ldflags: ${{ steps.generate-ldflags.outputs.ldflags }}\n    steps:\n      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2\n      - name: Generate ld flags\n        id: generate-ldflags\n        run: |\n          echo \"ldflags=-X 'github.com/hashicorp/levant/version.GitDescribe=v${{ needs.get-product-version.outputs.product-version }}'\" >> \"$GITHUB_OUTPUT\"\n\n  build:\n    needs:\n      - get-go-version\n      - get-product-version\n      - generate-ldflags\n    runs-on: ubuntu-22.04\n    strategy:\n      matrix:\n        goos: [\"linux\", \"darwin\", \"windows\", \"freebsd\"]\n        goarch: [ \"amd64\", \"arm64\"]\n      fail-fast: true\n    name: Go ${{ needs.get-go-version.outputs.go-version }} ${{ matrix.goos }} ${{ matrix.goarch }} build\n    steps:\n      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2\n      - name: Setup go\n        uses: actions/setup-go@f111f3307d8850f501ac008e886eec1fd1932a34 # v5.3.0\n        with:\n          go-version-file: \".go-version\"\n      - name: Build Levant\n        env:\n          GOOS: ${{ matrix.goos }}\n          GOARCH: ${{ matrix.goarch }}\n          GO_LDFLAGS: ${{ needs.generate-ldflags.outputs.ldflags }}\n          CGO_ENABLED: \"0\"\n          BIN_PATH: dist/levant\n        uses: hashicorp/actions-go-build@37358f6098ef21b09542d84a9814ebb843aa4e3e # v1.0.0\n        with:\n          product_name: ${{ env.PKG_NAME }}\n          product_version: ${{ needs.get-product-version.outputs.product-version }}\n          go_version: ${{ needs.get-go-version.outputs.go-version }}\n          os: ${{ matrix.goos }}\n          arch: ${{ matrix.goarch }}\n          reproducible: nope\n          instructions: |-\n            make crt\n      - if: ${{ matrix.goos == 'linux' }}\n        uses: hashicorp/actions-packaging-linux@8d55a640bb30b5508f16757ea908b274564792d4 # v1.7.0\n        with:\n          name: 
\"levant\"\n          description: \"Levant is a templating and deployment tool for HashiCorp Nomad\"\n          arch: ${{ matrix.goarch }}\n          version: ${{ needs.get-product-version.outputs.product-version }}\n          maintainer: \"HashiCorp\"\n          homepage: \"https://github.com/hashicorp/levant\"\n          license: \"MPL-2.0\"\n          binary: \"dist/levant\"\n          deb_depends: \"openssl\"\n          rpm_depends: \"openssl\"\n      - if: ${{ matrix.goos == 'linux' }}\n        name: Determine package file names\n        run: |\n          echo \"RPM_PACKAGE=$(basename out/*.rpm)\" >> \"$GITHUB_ENV\"\n          echo \"DEB_PACKAGE=$(basename out/*.deb)\" >> \"$GITHUB_ENV\"\n      - if: ${{ matrix.goos == 'linux' }}\n        uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3\n        with:\n          name: ${{ env.RPM_PACKAGE }}\n          path: out/${{ env.RPM_PACKAGE }}\n          if-no-files-found: error\n      - if: ${{ matrix.goos == 'linux' }}\n        uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3\n        with:\n          name: ${{ env.DEB_PACKAGE }}\n          path: out/${{ env.DEB_PACKAGE }}\n          if-no-files-found: error\n\n  build-docker-default:\n    needs:\n      - get-product-version\n      - build\n    runs-on: ubuntu-22.04\n    strategy:\n      matrix:\n        arch: [\"arm64\", \"amd64\"]\n      fail-fast: true\n    env:\n      version: ${{ needs.get-product-version.outputs.product-version }}\n    name: Docker ${{ matrix.arch }} default release build\n    steps:\n      - uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29 # v4.1.6\n      - name: Docker Build (Action)\n        uses: hashicorp/actions-docker-build@11d43ef520c65f58683d048ce9b47d6617893c9a # v2.0.0\n        with:\n          smoke_test: |\n            TEST_VERSION=\"$(docker run \"${IMAGE_NAME}\" version | awk '/Levant/{print $2}')\"\n            if [ \"${TEST_VERSION}\" != 
\"v${version}\" ]; then\n              echo \"Test FAILED\"\n              exit 1\n            fi\n            echo \"Test PASSED\"\n          version: ${{ needs.get-product-version.outputs.product-version }}\n          target: release\n          arch: ${{ matrix.arch }}\n          tags: |\n            docker.io/hashicorp/${{ env.PKG_NAME }}:${{ env.version }}\n          dev_tags: |\n            docker.io/hashicorppreview/${{ env.PKG_NAME }}:${{ env.version }}-dev\n            docker.io/hashicorppreview/${{ env.PKG_NAME }}:${{ env.version }}-${{ github.sha }}\n\npermissions:\n  contents: read\n"
  },
  {
    "path": ".github/workflows/ci.yml",
    "content": "name: ci\non:\n  push:\njobs:\n  lint-go:\n    runs-on: ubuntu-latest\n    env:\n      GO_TAGS: ''\n    steps:\n      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2\n      - uses: actions/setup-go@f111f3307d8850f501ac008e886eec1fd1932a34 # v5.3.0\n        with:\n          go-version-file: \".go-version\"\n      - name: Setup golangci-lint\n        run: |-\n          download=https://raw.githubusercontent.com/golangci/golangci-lint/9a8a056e9fe49c0e9ed2287aedce1022c79a115b/install.sh  # v1.52.2\n          curl -sSf \"$download\" | sh -s v1.51.2\n          ./bin/golangci-lint version\n      - run: make check\n  check-deps-go:\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2\n      - uses: actions/setup-go@f111f3307d8850f501ac008e886eec1fd1932a34 # v5.3.0\n        with:\n          go-version-file: \".go-version\"\n      - run: make check-mod\n  test-go:\n    runs-on: ubuntu-latest\n    env:\n      GO_TAGS: ''\n    steps:\n      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2\n      - uses: actions/setup-go@f111f3307d8850f501ac008e886eec1fd1932a34 # v5.3.0\n        with:\n          go-version-file: \".go-version\"\n      - run: make test\n  build-go:\n    runs-on: ubuntu-latest\n    env:\n      GO_TAGS: ''\n    steps:\n      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2\n      - uses: actions/setup-go@f111f3307d8850f501ac008e886eec1fd1932a34 # v5.3.0\n        with:\n          go-version-file: \".go-version\"\n      - run: make dev\npermissions:\n  contents: read\n"
  },
  {
    "path": ".github/workflows/nightly-release-readme.md",
    "content": "Nightly releases are snapshots of the development activity on the Levant project that may include new features and bug fixes scheduled for upcoming releases. These releases are made available to make it easier for users to test their existing build configurations against the latest Levant code base for potential issues or to experiment with new features, with a chance to provide feedback on ways to improve the changes before being released.\n\nAs these releases are snapshots of the latest code, you may encounter an issue compared to the latest stable release. Users are encouraged to run nightly releases in a non production environment. If you encounter an issue, please check our [issue tracker](https://github.com/hashicorp/levant/issues) to see if the issue has already been reported; if a report hasn't been made, please report it so we can review the issue and make any needed fixes.\n\n**Note**: Nightly releases are only available via GitHub Releases, and artifacts are not codesigned or notarized. Distribution via other [Release Channels](https://www.hashicorp.com/official-release-channels) such as the Releases Site or Homebrew is not yet supported.\n"
  },
  {
    "path": ".github/workflows/nightly-release.yml",
    "content": "# This GitHub action triggers a fresh set of Levant builds and publishes them\n# to GitHub Releases under the `nightly` tag.\n# Note that artifacts available via GitHub Releases are not codesigned or\n# notarized.\n# Failures are reported to slack.\nname: Nightly Release\n\non:\n  schedule:\n    # Runs against the default branch every day overnight\n    - cron: \"18 3 * * *\"\n\n  workflow_dispatch:\n\njobs:\n  # Build a fresh set of artifacts\n  build-artifacts:\n    uses: ./.github/workflows/build.yml\n\n  github-release:\n    needs: build-artifacts\n    runs-on: ubuntu-22.04\n    steps:\n      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 #v4.2.2\n\n      - name: Download built artifacts\n        uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0\n        with:\n          path: out/\n\n      # Set BUILD_OUTPUT_LIST to out\\<project>-<version>.<fileext>\\*,out\\...\n      # This is needed to attach the build artifacts to the GitHub Release\n      - name: Set BUILD_OUTPUT_LIST\n        run: |\n          (ls -xm1 out/) > tmp.txt\n          sed 's:.*:out/&/*:' < tmp.txt > tmp2.txt\n          echo \"BUILD_OUTPUT_LIST=$(tr '\\n' ',' < tmp2.txt | perl -ple 'chop')\" >> \"$GITHUB_ENV\"\n          rm -rf tmp.txt && rm -rf tmp2.txt\n\n      - name: Advance nightly tag\n        uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7.0.1\n        with:\n          github-token: ${{ secrets.GITHUB_TOKEN }}\n          script: |\n            try {\n                await github.rest.git.deleteRef({\n                  owner: context.repo.owner,\n                  repo: context.repo.repo,\n                  ref: \"tags/nightly\"\n                })\n            } catch (e) {\n              console.log(\"Warning: The nightly tag doesn't exist yet, so there's nothing to do. 
Trace: \" + e)\n            }\n            await github.rest.git.createRef({\n              owner: context.repo.owner,\n              repo: context.repo.repo,\n              ref: \"refs/tags/nightly\",\n              sha: context.sha\n            })\n\n      # This will create a new GitHub Release called `nightly`\n      # If a release with this name already exists, it will overwrite the existing data\n      - name: Create a nightly GitHub prerelease\n        id: create_prerelease\n        uses: ncipollo/release-action@440c8c1cb0ed28b9f43e4d1d670870f059653174 # v1.16.0\n        with:\n          name: nightly\n          artifacts: \"${{ env.BUILD_OUTPUT_LIST }}\"\n          tag: nightly\n          bodyFile: \".github/workflows/nightly-release-readme.md\"\n          prerelease: true\n          allowUpdates: true\n          removeArtifacts: true\n          draft: false\n          token: ${{ secrets.GITHUB_TOKEN }}\n\n      - name: Publish nightly GitHub prerelease\n        uses: eregon/publish-release@01df127f5e9a3c26935118e22e738d95b59d10ce # v1.0.6\n        env:\n          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}\n        with:\n          release_id: ${{ steps.create_prerelease.outputs.id }}\n\n# Send a slack notification if either job defined above fails\n  slack-notify:\n    needs:\n      - build-artifacts\n      - github-release\n    if: always() && (needs.build-artifacts.result == 'failure' || needs.github-release.result == 'failure')\n    runs-on: ubuntu-latest\n    steps:\n      - name: Notify Slack on Nightly Release Failure\n        uses: hashicorp/actions-slack-status@1a3f63b30bd476aee1f3bd6f9d8f2aacc4f14d81 # v2.0.1\n        with:\n          failure-message: |-\n            :x::moon::nomad-sob: Levant Nightly Release *FAILED* on\n          status: failure\n          slack-webhook-url: ${{ secrets.SLACK_WEBHOOK_URL }}\n\npermissions:\n  contents: write\n"
  },
  {
    "path": ".gitignore",
    "content": ".DS_Store\n\n# Compiled Object files, Static and Dynamic libs (Shared Objects)\n*.o\n*.a\n*.so\n\n# Folders\n_obj\n_test\ndist/\n\n# Build output directory.\n/bin\n/pkg\n\n# Architecture specific extensions/prefixes\n*.[568vq]\n[568vq].out\n\n*.cgo1.go\n*.cgo2.c\n_cgo_defun.c\n_cgo_gotypes.go\n_cgo_export.*\n\n_testmain.go\n\n*.exe\n*.test\n*.prof\n\n/pkg\nlevant-local\n\n.idea\nvendor/*/.hg/*\n\n# assets download path when using bob CLI\n.bob\n"
  },
  {
    "path": ".go-version",
    "content": "1.24.4\n"
  },
  {
    "path": ".golangci.yml",
    "content": "# Copyright (c) HashiCorp, Inc.\n# SPDX-License-Identifier: MPL-2.0\n\nrun:\n  deadline: 5m\n  issues-exit-code: 1\n  tests: true\n\noutput:\n  formats: colored-line-number\n  print-issued-lines: true\n  print-linter-name: true\n\nlinters:\n  enable:\n    - errcheck\n    - goconst\n    - gocyclo\n    - gofmt\n    - goimports\n    - gosec\n    - govet\n    - ineffassign\n    - misspell\n    - prealloc\n    - revive\n    - unconvert\n    - unparam\n    - unused\n  enable-all: false\n  disable-all: true\n"
  },
  {
    "path": ".release/ci.hcl",
    "content": "# Copyright (c) HashiCorp, Inc.\n# SPDX-License-Identifier: MPL-2.0\n\nschema = \"1\"\n\nproject \"levant\" {\n  team = \"nomad\"\n\n  slack {\n    notification_channel = \"C03B5EWFW01\"\n  }\n\n  github {\n    organization = \"hashicorp\"\n    repository   = \"levant\"\n\n    release_branches = [\n      \"main\",\n      \"release/**\",\n    ]\n  }\n}\n\nevent \"build\" {\n  action \"build\" {\n    organization = \"hashicorp\"\n    repository   = \"levant\"\n    workflow     = \"build\"\n  }\n}\n\nevent \"prepare\" {\n  depends = [\"build\"]\n\n  action \"prepare\" {\n    organization = \"hashicorp\"\n    repository   = \"crt-workflows-common\"\n    workflow     = \"prepare\"\n    depends      = [\"build\"]\n  }\n\n  notification {\n    on = \"always\"\n  }\n}\n\n## These are promotion and post-publish events\n## they should be added to the end of the file after the verify event stanza.\n\nevent \"trigger-staging\" {\n  // This event is dispatched by the bob trigger-promotion command\n  // and is required - do not delete.\n}\n\nevent \"promote-staging\" {\n  depends = [\"trigger-staging\"]\n\n  action \"promote-staging\" {\n    organization = \"hashicorp\"\n    repository   = \"crt-workflows-common\"\n    workflow     = \"promote-staging\"\n    config       = \"release-metadata.hcl\"\n  }\n\n  notification {\n    on = \"always\"\n  }\n}\n\nevent \"promote-staging-docker\" {\n  depends = [\"promote-staging\"]\n\n  action \"promote-staging-docker\" {\n    organization = \"hashicorp\"\n    repository   = \"crt-workflows-common\"\n    workflow     = \"promote-staging-docker\"\n  }\n\n  notification {\n    on = \"always\"\n  }\n}\n\nevent \"trigger-production\" {\n  // This event is dispatched by the bob trigger-promotion command\n  // and is required - do not delete.\n}\n\nevent \"promote-production\" {\n  depends = [\"trigger-production\"]\n\n  action \"promote-production\" {\n    organization = \"hashicorp\"\n    repository   = 
\"crt-workflows-common\"\n    workflow     = \"promote-production\"\n  }\n\n  notification {\n    on = \"always\"\n  }\n}\n\nevent \"promote-production-docker\" {\n  depends = [\"promote-production\"]\n\n  action \"promote-production-docker\" {\n    organization = \"hashicorp\"\n    repository   = \"crt-workflows-common\"\n    workflow     = \"promote-production-docker\"\n  }\n\n  notification {\n    on = \"always\"\n  }\n}\n\nevent \"promote-production-packaging\" {\n  depends = [\"promote-production-docker\"]\n\n  action \"promote-production-packaging\" {\n    organization = \"hashicorp\"\n    repository   = \"crt-workflows-common\"\n    workflow     = \"promote-production-packaging\"\n  }\n\n  notification {\n    on = \"always\"\n  }\n}\n"
  },
  {
    "path": ".release/levant-artifacts.hcl",
    "content": "# Copyright (c) HashiCorp, Inc.\n# SPDX-License-Identifier: MPL-2.0\n\nschema = 1\nartifacts {\n  zip = [\n    \"levant_${version}_darwin_amd64.zip\",\n    \"levant_${version}_darwin_arm64.zip\",\n    \"levant_${version}_freebsd_amd64.zip\",\n    \"levant_${version}_freebsd_arm64.zip\",\n    \"levant_${version}_linux_amd64.zip\",\n    \"levant_${version}_linux_arm64.zip\",\n    \"levant_${version}_windows_amd64.zip\",\n    \"levant_${version}_windows_arm64.zip\",\n  ]\n  rpm = [\n    \"levant-${version_linux}-1.aarch64.rpm\",\n    \"levant-${version_linux}-1.x86_64.rpm\",\n  ]\n  deb = [\n    \"levant_${version_linux}-1_amd64.deb\",\n    \"levant_${version_linux}-1_arm64.deb\",\n  ]\n  container = [\n    \"levant_release_linux_amd64_${version}_${commit_sha}.docker.dev.tar\",\n    \"levant_release_linux_amd64_${version}_${commit_sha}.docker.tar\",\n    \"levant_release_linux_arm64_${version}_${commit_sha}.docker.dev.tar\",\n    \"levant_release_linux_arm64_${version}_${commit_sha}.docker.tar\",\n  ]\n}\n"
  },
  {
    "path": ".release/release-metadata.hcl",
    "content": "# Copyright (c) HashiCorp, Inc.\n# SPDX-License-Identifier: MPL-2.0\n\nurl_docker_registry_dockerhub = \"https://hub.docker.com/r/hashicorp/levant\"\nurl_license                   = \"https://github.com/hashicorp/levant/blob/main/LICENSE\"\nurl_source_repository         = \"https://github.com/hashicorp/levant\"\n"
  },
  {
    "path": ".release/security-scan.hcl",
    "content": "# Copyright (c) HashiCorp, Inc.\n# SPDX-License-Identifier: MPL-2.0\n\ncontainer {\n  dependencies    = true\n  alpine_security = true\n\n  secrets {\n    all = true\n  }\n\n  # Triage items that are _safe_ to ignore here. Note that this list should be\n  # periodically cleaned up to remove items that are no longer found by the scanner.\n  triage {\n    suppress {\n      vulnerabilities = [\n        \"GHSA-rx97-6c62-55mf\", // https://github.com/github/advisory-database/pull/5759 TODO(dduzgun): remove when dep updated.\n        \"CVE-2025-46394\",      // busybox@1.37.0-r18 TODO(dduzgun): remove when dep updated.\n        \"CVE-2024-58251\",      // busybox@1.37.0-r18 TODO(dduzgun): remove when dep updated.\n      ]\n    }\n  }\n}\n\nbinary {\n  go_modules = true\n  osv        = true\n  go_stdlib  = true\n  oss_index  = false\n  nvd        = false\n\n  secrets {\n    all = true\n  }\n\n  # Triage items that are _safe_ to ignore here. Note that this list should be\n  # periodically cleaned up to remove items that are no longer found by the scanner.\n  triage {\n    suppress {\n      vulnerabilities = [\n        \"GHSA-rx97-6c62-55mf\", // https://github.com/github/advisory-database/pull/5759 TODO(dduzgun): remove when dep updated.\n        \"CVE-2025-46394\",      // busybox@1.37.0-r18 TODO(dduzgun): remove when dep updated.\n        \"CVE-2024-58251\",      // busybox@1.37.0-r18 TODO(dduzgun): remove when dep updated.\n      ]\n    }\n  }\n}\n"
  },
  {
    "path": "CHANGELOG.md",
    "content": "## 0.4.0 (June 26, 2025)\n\n__BACKWARDS INCOMPATIBILITIES:__\n* cli: Levant no longer supports the deprecated Vault token workflow.\n\nIMPROVEMENT:\n* build: Now builds with Go v1.24\n* deps: Updated Nomad API to v1.10.2\n\n## 0.3.3 (October 5, 2023)\n\nIMPROVEMENTS:\n* build: Now builds with Go v1.21.1 [[GH-507](https://github.com/hashicorp/levant/pull/507)]\n* deps: Updated Nomad dependency to v1.6.2 [[GH-507](https://github.com/hashicorp/levant/pull/507)]\n\n## 0.3.2 (October 20, 2022)\n\nIMPROVEMENTS:\n * build: Now builds with go v1.19.1 [[GH-464](https://github.com/hashicorp/levant/pull/464)]\n * deps: Updated Nomad dependency to v1.4.1. [[GH-464](https://github.com/hashicorp/levant/pull/464)]\n * deps: Updated golang.org/x/text to v0.4.0 [[GH-465](https://github.com/hashicorp/levant/pull/465)]\n\n## 0.3.1 (February 14, 2022)\n\nIMPROVEMENTS:\n* build: Updated Nomad dependency to 1.2.4. [[GH-438](https://github.com/hashicorp/levant/pull/438)]\n\n## 0.3.0 (March 09, 2021)\n\n__BACKWARDS INCOMPATIBILITIES:__\n * template: existing Levant functions that share a name with [sprig](https://github.com/Masterminds/sprig) functions have been renamed to include the prefix `levant` such as `levantEnv`.\n\nBUG FIXES:\n * cli: Fixed panic when dispatching a job. [[GH-348](https://github.com/hashicorp/levant/pull/348)]\n * status-checker: Pass the namespace to the query options when calling the Nomad API [[GH-356](https://github.com/hashicorp/levant/pull/356)]\n * template: Fixed issue with default variables file not being used. [[GH-353](https://github.com/hashicorp/levant/pull/353)]\n\nIMPROVEMENTS:\n * build: Updated Nomad dependency to 1.0.4. [[GH-399](https://github.com/hashicorp/levant/pull/399)]\n * cli: Added `log-level` and `log-format` flags to render command. 
[[GH-346](https://github.com/hashicorp/levant/pull/346)]\n * render: when rendering, send logging to stderr if stdout is not a terminal [[GH-386](https://github.com/hashicorp/levant/pull/386)]\n * template: Added [sprig](https://github.com/Masterminds/sprig) template functions. [[GH-347](https://github.com/hashicorp/levant/pull/347)]\n * template: Added `spewDump` and `spewPrintf` functions for easier debugging. [[GH-344](https://github.com/hashicorp/levant/pull/344)]\n\n## 0.2.9 (27 December 2019)\n\nIMPROVEMENTS:\n * Update vendored version of Nomad to 0.9.6 [GH-313](https://github.com/jrasell/levant/pull/313)\n * Update to Go 1.13 and use modules rather than dep [GH-319](https://github.com/jrasell/levant/pull/319)\n * Remove use of the vendored nomad/structs import to allow easier vendoring [GH-320](https://github.com/jrasell/levant/pull/320)\n * Add template replace function [GH-291](https://github.com/jrasell/levant/pull/291)\n\nBUG FIXES:\n * Use info level logs when no changes are detected [GH-303](https://github.com/jrasell/levant/pull/303)\n\n## 0.2.8 (14 September 2019)\n\nIMPROVEMENTS:\n * Add `-force` flag to deploy CLI command which allows for forcing a deployment even if Levant detects 0 changes on plan [GH-296](https://github.com/jrasell/levant/pull/296)\n\nBUG FIXES:\n * Fix segfault when logging deployID details [GH-286](https://github.com/jrasell/levant/pull/286)\n * Fix error message within scale-in which incorrectly referenced scale-out [GH-285](https://github.com/jrasell/levant/pull/285/files)\n\n## 0.2.7 (19 March 2019)\n\nIMPROVEMENTS:\n * Use `missingkey=zero` rather than error, which allows better use of standard Go templating, particularly conditionals [GH-275](https://github.com/jrasell/levant/pull/275)\n * Added maths functions add, subtract, multiply, divide and modulo to the template rendering process [GH-277](https://github.com/jrasell/levant/pull/277)\n\n## 0.2.6 (25 February 2019)\n\nIMPROVEMENTS:\n * Add the ability to supply a Vault token
to a job during deployment via either a `vault` or `vault-token` flag [GH-258](https://github.com/jrasell/levant/pull/258)\n * New `fileContents` template function which allows the entire contents of a file to be read into the template [GH-261](https://github.com/jrasell/levant/pull/261)\n\nBUG FIXES:\n * Fix a panic when running the scale* deployment watcher due to incorrectly initialized client config [GH-253](https://github.com/jrasell/levant/pull/253)\n * Fix incorrect behaviour when flag `ignore-no-changes` was set [GH-264](https://github.com/jrasell/levant/pull/264)\n * Fix endless deployment loop when Nomad doesn't return a deployment ID [GH-268](https://github.com/jrasell/levant/pull/268)\n\n## 0.2.5 (25 October 2018)\n\nBUG FIXES:\n * Fix panic in deployment where count is not specified due to unsafe count checking on task groups [GH-249](https://github.com/jrasell/levant/pull/249)\n\n## 0.2.4 (24 October 2018)\n\nBUG FIXES:\n * Fix panic in scale commands due to an incorrectly initialized configuration struct [GH-244](https://github.com/jrasell/levant/pull/244)\n * Fix bug where job deploys with task group counts of 0 would hang for 1 hour [GH-246](https://github.com/jrasell/levant/pull/246)\n\n## 0.2.3 (2 October 2018)\n\nIMPROVEMENTS:\n * New `env` template function allows the lookup and substitution of variables by environment variables [GH-225](https://github.com/jrasell/levant/pull/225)\n * Add plan command to allow running a plan whilst using templating [GH-234](https://github.com/jrasell/levant/pull/234)\n * Add `toUpper` and `toLower` template funcs [GH-237](https://github.com/jrasell/levant/pull/237)\n\n## 0.2.2 (6 August 2018)\n\nBUG FIXES:\n * Fix an issue where, if an evaluation had filtered nodes, Levant would exit immediately rather than tracking the deployment, which could still succeed [GH-221](https://github.com/jrasell/levant/pull/221)\n * Fixed failure inspector to report on tasks that are restarting
[GH-82](https://github.com/jrasell/levant/pull/82)\n\n## 0.2.1 (20 July 2018)\n\nIMPROVEMENTS:\n * JSON can now be used as a variable file format [GH-210](https://github.com/jrasell/levant/pull/210)\n * The template funcs now include numerous parse functions to provide greater flexibility [GH-212](https://github.com/jrasell/levant/pull/212)\n * Ability to configure the allow-stale Nomad setting when performing calls, to help in environments with high network latency [GH-185](https://github.com/jrasell/levant/pull/185)\n\nBUG FIXES:\n * Update vendored package of Nomad to fix failures when interacting with jobs configured with update progress_deadline params [GH-216](https://github.com/jrasell/levant/pull/216)\n\n## 0.2.0 (4 July 2018)\n\nIMPROVEMENTS:\n * New `scale-in` and `scale-out` commands allow an operator to manually scale jobs and task groups based on counts or percentages [GH-172](https://github.com/jrasell/levant/pull/172)\n * New template functions allowing the lookup of variables from Consul KVs, ISO-8601 timestamp generation and loops [GH-175](https://github.com/jrasell/levant/pull/175), [GH-202](https://github.com/jrasell/levant/pull/202)\n * Multiple variable files can be passed on each run, allowing for common configuration to be shared across jobs [GH-180](https://github.com/jrasell/levant/pull/180)\n * Provide better command help for deploy and render commands [GH-183](https://github.com/jrasell/levant/pull/184)\n * Add `-ignore-no-changes` flag to deploy CLI command which changes the behaviour to exit 0 even if Levant detects 0 changes on plan [GH-196](https://github.com/jrasell/levant/pull/196)\n\nBUG FIXES:\n * Fix formatting with version summary output which had an erroneous quote [GH-170](https://github.com/jrasell/levant/pull/170)\n\n## 0.1.1 (13 May 2018)\n\nIMPROVEMENTS:\n * Use govvv for builds and to supply additional version information in the version command output [GH-151](https://github.com/jrasell/levant/pull/151)\n * Levant will 
now run Nomad plan before deployments to log the plan diff [GH-153](https://github.com/jrasell/levant/pull/153)\n * Logging can now be output in JSON format and uses contextual data for better processing ability [GH-157](https://github.com/jrasell/levant/pull/157)\n\nBUG FIXES:\n * Fix occasional panic when performing deployment check of a batch job deployment [GH-150](https://github.com/jrasell/levant/pull/150)\n\n## 0.1.0 (18 April 2018)\n\nIMPROVEMENTS:\n * New 'dispatch' command which allows Levant to dispatch Nomad jobs, which then go through Levant's additional job checking [GH-128](https://github.com/jrasell/levant/pull/128)\n * New 'force-batch' deploy flag which allows users to trigger a periodic run on deployment independent of the schedule [GH-110](https://github.com/jrasell/levant/pull/110)\n * Enhanced job status checking for non-service type jobs [GH-96](https://github.com/jrasell/levant/pull/96), [GH-109](https://github.com/jrasell/levant/pull/109)\n * Implement config struct for Levant to track config during run [GH-102](https://github.com/jrasell/levant/pull/102)\n * Test and build Levant with Go version 1.10 [GH-119](https://github.com/jrasell/levant/pull/119), [GH-116](https://github.com/jrasell/levant/pull/116)\n * Add a catchall for unhandled failure cases to log more useful information for the operator [GH-138](https://github.com/jrasell/levant/pull/138)\n * Updated vendored dependency of Nomad to 0.8.0 [GH-137](https://github.com/jrasell/levant/pull/137)\n\nBUG FIXES:\n * Service jobs that don't have an update stanza do not produce deployments and should skip the deployment watcher [GH-99](https://github.com/jrasell/levant/pull/99)\n * Ensure the count updater ignores jobs that are in stopped state [GH-106](https://github.com/jrasell/levant/pull/106)\n * Fix a small formatting issue with the deploy command arg help [GH-111](https://github.com/jrasell/levant/pull/111)\n * Do not run the auto-revert inspector if auto-promote fails 
[GH-122](https://github.com/jrasell/levant/pull/122)\n * Fix issue where allocationStatusChecker logged incorrectly [GH-131](https://github.com/jrasell/levant/pull/131)\n * Add retry to auto-revert checker to ensure the correct deployment is monitored, and not the original [GH-134](https://github.com/jrasell/levant/pull/134)\n\n## 0.0.4 (25 January 2018)\n\nIMPROVEMENTS:\n * Job types of `batch` now undergo checking to confirm the job reaches status of `running` [GH-73](https://github.com/jrasell/levant/pull/73)\n * Vendored Nomad version has been increased to 0.7.1 allowing use of Nomad ACL tokens [GH-76](https://github.com/jrasell/levant/pull/76)\n * Log messages now include the date, time and timezone [GH-80](https://github.com/jrasell/levant/pull/80)\n\nBUG FIXES:\n * Skip health checks for task groups without canaries when performing canary auto-promote health checking [GH-83](https://github.com/jrasell/levant/pull/83)\n * Fix issue where jobs without specified count caused panic [GH-89](https://github.com/jrasell/levant/pull/89)\n\n## 0.0.3 (23 December 2017)\n\nIMPROVEMENTS:\n * Levant can now track Nomad auto-revert of a failed deployment [GH-55](https://github.com/jrasell/levant/pull/55)\n * Provide greater feedback around the variable files passed, CLI variables passed and which variables are being used by Levant. [GH-62](https://github.com/jrasell/levant/pull/62)\n * Levant supports autoloading of default files when running `levant deploy` [GH-37](https://github.com/jrasell/levant/pull/37)\n\nBUG FIXES:\n * Fix issue where Levant did not correctly handle deploying jobs of type `batch` [GH-52](https://github.com/jrasell/levant/pull/52)\n * Fix issue where evaluation errors were not being fully checked [GH-66](https://github.com/jrasell/levant/pull/66)\n * Fix issue in failure_inspector incorrectly handling multi-groups [GH-69](https://github.com/jrasell/levant/pull/69)\n\n## 0.0.2 (29 November 2017)\n\nIMPROVEMENTS:\n * Introduce `-force-count` flag into 
deploy command which disables dynamic count updating, meaning Levant will explicitly use counts defined in the job specification template [GH-33](https://github.com/jrasell/levant/pull/33)\n * Levant deployments now inspect the evaluation results and log any error messages [GH-40](https://github.com/jrasell/levant/pull/40)\n\nBUG FIXES:\n * Fix formatting issue in render command help [GH-28](https://github.com/jrasell/levant/pull/28)\n * Update failure_inspector to cover more failure use cases [GH-27](https://github.com/jrasell/levant/pull/27)\n * Fix a bug where Nomad job types were handled incorrectly [GH-32](https://github.com/jrasell/levant/pull/32)\n * Fix issue where jobs deployed with all task group counts at 0 would cause a failure as no deployment ID is returned [GH-36](https://github.com/jrasell/levant/pull/36)\n\n## 0.0.1 (30 October 2017)\n\n- Initial release.\n"
  },
  {
    "path": "CODE_OF_CONDUCT.md",
    "content": "# Code of Conduct\n\n## 1. Purpose\n\nA primary goal of Levant is to be inclusive to the largest number of contributors, with the most varied and diverse backgrounds possible. As such, we are committed to providing a friendly, safe and welcoming environment for all, regardless of gender, sexual orientation, ability, ethnicity, socioeconomic status, and religion (or lack thereof).\n\nThis code of conduct outlines our expectations for all those who participate in our community, as well as the consequences for unacceptable behavior.\n\nWe invite all those who participate in Levant to help us create safe and positive experiences for everyone.\n\n## 2. Open Source Citizenship\n\nA supplemental goal of this Code of Conduct is to increase open source citizenship by encouraging participants to recognize and strengthen the relationships between our actions and their effects on our community.\n\nCommunities mirror the societies in which they exist and positive action is essential to counteract the many forms of inequality and abuses of power that exist in society.\n\nIf you see someone who is making an extra effort to ensure our community is welcoming, friendly, and encourages all participants to contribute to the fullest extent, we want to know.\n\n## 3. Expected Behavior\n\nThe following behaviors are expected and requested of all community members:\n\n*   Participate in an authentic and active way. In doing so, you contribute to the health and longevity of this community.\n*   Exercise consideration and respect in your speech and actions.\n*   Attempt collaboration before conflict.\n*   Refrain from demeaning, discriminatory, or harassing behavior and speech.\n*   Be mindful of your surroundings and of your fellow participants. 
Alert community leaders if you notice a dangerous situation, someone in distress, or violations of this Code of Conduct, even if they seem inconsequential.\n*   Remember that community event venues may be shared with members of the public; please be respectful to all patrons of these locations.\n\n## 4. Unacceptable Behavior\n\nThe following behaviors are considered harassment and are unacceptable within our community:\n\n*   Violence, threats of violence or violent language directed against another person.\n*   Sexist, racist, homophobic, transphobic, ableist or otherwise discriminatory jokes and language.\n*   Posting or displaying sexually explicit or violent material.\n*   Posting or threatening to post other people’s personally identifying information (\"doxing\").\n*   Personal insults, particularly those related to gender, sexual orientation, race, religion, or disability.\n*   Inappropriate photography or recording.\n*   Inappropriate physical contact. You should have someone’s consent before touching them.\n*   Unwelcome sexual attention. This includes sexualized comments or jokes; inappropriate touching, groping, and unwelcome sexual advances.\n*   Deliberate intimidation, stalking or following (online or in person).\n*   Advocating for, or encouraging, any of the above behavior.\n*   Sustained disruption of community events, including talks and presentations.\n\n## 5. Consequences of Unacceptable Behavior\n\nUnacceptable behavior from any community member, including sponsors and those with decision-making authority, will not be tolerated.\n\nAnyone asked to stop unacceptable behavior is expected to comply immediately.\n\nIf a community member engages in unacceptable behavior, the community organizers may take any action they deem appropriate, up to and including a temporary ban or permanent expulsion from the community without warning (and without refund in the case of a paid event).\n\n## 6. 
Reporting Guidelines\n\nIf you are subject to or witness unacceptable behavior, or have any other concerns, please notify a community organizer as soon as possible: jamesrasell@gmail.com.\n\nAdditionally, community organizers are available to help community members engage with local law enforcement or to otherwise help those experiencing unacceptable behavior feel safe. In the context of in-person events, organizers will also provide escorts as desired by the person experiencing distress.\n\n## 7. Addressing Grievances\n\nIf you feel you have been falsely or unfairly accused of violating this Code of Conduct, you should notify Jrasell with a concise description of your grievance. Your grievance will be handled in accordance with our existing governing policies.\n\n## 8. Scope\n\nWe expect all community participants (contributors, paid or otherwise; sponsors; and other guests) to abide by this Code of Conduct in all community venues–online and in-person–as well as in all one-on-one communications pertaining to community business.\n\nThis code of conduct and its related procedures also apply to unacceptable behavior occurring outside the scope of community activities when such behavior has the potential to adversely affect the safety and well-being of community members.\n\n## 9. Contact info\n\njamesrasell@gmail.com\n\n## 10. License and attribution\n\nThis Code of Conduct is distributed under a [Creative Commons Attribution-ShareAlike license](http://creativecommons.org/licenses/by-sa/3.0/).\n\nPortions of text derived from the [Django Code of Conduct](https://www.djangoproject.com/conduct/) and the [Geek Feminism Anti-Harassment Policy](http://geekfeminism.wikia.com/wiki/Conference_anti-harassment/Policy).\n\nRetrieved on November 22, 2016 from [http://citizencodeofconduct.org/](http://citizencodeofconduct.org/)\n
  },
  {
    "path": "Dockerfile",
    "content": "# Copyright (c) HashiCorp, Inc.\n# SPDX-License-Identifier: MPL-2.0\n\n# This Dockerfile contains multiple targets.\n# Use 'docker build --target=<name> .' to build one.\n\n# ===================================\n#   Non-release images.\n# ===================================\n\n# devbuild compiles the binary\n# -----------------------------------\nFROM golang:1.24 AS devbuild\n\n# Disable CGO to make sure we build static binaries\nENV CGO_ENABLED=0\n\n# Escape the GOPATH\nWORKDIR /build\nCOPY . ./\nRUN go build -o levant .\n\n# dev runs the binary from devbuild\n# -----------------------------------\nFROM alpine:3.22 AS dev\n\nCOPY --from=devbuild /build/levant /bin/\nCOPY ./scripts/docker-entrypoint.sh /\n\nENTRYPOINT [\"/docker-entrypoint.sh\"]\nCMD [\"help\"]\n\n\n# ===================================\n#   Release images.\n# ===================================\n\nFROM alpine:3.22 AS release\n\nARG PRODUCT_NAME=levant\nARG PRODUCT_VERSION\nARG PRODUCT_REVISION\n# TARGETARCH and TARGETOS are set automatically when --platform is provided.\nARG TARGETOS TARGETARCH\n\nLABEL maintainer=\"Nomad Team <nomad@hashicorp.com>\"\nLABEL version=${PRODUCT_VERSION}\nLABEL revision=${PRODUCT_REVISION}\n\nCOPY dist/$TARGETOS/$TARGETARCH/levant /bin/\nCOPY ./scripts/docker-entrypoint.sh /\n\n# Create a non-root user to run the software.\nRUN addgroup $PRODUCT_NAME && \\\n    adduser -S -G $PRODUCT_NAME $PRODUCT_NAME\n\nUSER $PRODUCT_NAME\nENTRYPOINT [\"/docker-entrypoint.sh\"]\nCMD [\"help\"]\n\n# ===================================\n#   Set default target to 'dev'.\n# ===================================\nFROM dev\n"
  },
  {
    "path": "GNUmakefile",
    "content": "SHELL = bash\ndefault: lint test check-mod dev\n\nGIT_COMMIT := $(shell git rev-parse --short HEAD)\nGIT_DIRTY := $(if $(shell git status --porcelain),+CHANGES)\n\nGO_LDFLAGS := \"$(GO_LDFLAGS) -X github.com/hashicorp/levant/version.GitCommit=$(GIT_COMMIT)$(GIT_DIRTY)\"\n\n.PHONY: tools\ntools: ## Install the tools used to test and build\n\t@echo \"==> Installing tools...\"\n\tgo install github.com/golangci/golangci-lint/cmd/golangci-lint@v1.64.5\n\tgo install github.com/hashicorp/hcl/v2/cmd/hclfmt@d0c4fa8b0bbc2e4eeccd1ed2a32c2089ed8c5cf1\n\t@echo \"==> Done\"\n\n\npkg/%/levant: GO_OUT ?= $@\npkg/windows_%/levant: GO_OUT = $@.exe\npkg/%/levant: ## Build Levant for GOOS_GOARCH, e.g. pkg/linux_amd64/levant\n\t@echo \"==> Building $@ with tags $(GO_TAGS)...\"\n\t@CGO_ENABLED=0 \\\n\t\tGOOS=$(firstword $(subst _, ,$*)) \\\n\t\tGOARCH=$(lastword $(subst _, ,$*)) \\\n\t\tgo build -trimpath -ldflags $(GO_LDFLAGS) -tags \"$(GO_TAGS)\" -o \"$(GO_OUT)\"\n\n.PRECIOUS: pkg/%/levant\npkg/%.zip: pkg/%/levant ## Build and zip Levant for GOOS_GOARCH, e.g. pkg/linux_amd64.zip\n\t@echo \"==> Packaging for $@...\"\n\tzip -j $@ $(dir $<)*\n\n.PHONY: crt\ncrt:\n\t@CGO_ENABLED=0 go build -trimpath -ldflags $(GO_LDFLAGS) -tags \"$(GO_TAGS)\" -o \"$(BIN_PATH)\"\n\n\n.PHONY: dev\ndev: #check ## Build for the current development version\n\t@echo \"==> Building Levant...\"\n\t@CGO_ENABLED=0 GO111MODULE=on \\\n\tgo build \\\n\t-ldflags $(GO_LDFLAGS) \\\n\t-o ./bin/levant\n\t@echo \"==> Done\"\n\n.PHONY: test\ntest: ## Test the source code\n\t@echo \"==> Testing source code...\"\n\t@go test -cover -v -race -tags \\\n\t\t\"$(BUILDTAGS)\" $(shell go list ./... 
|grep -v vendor |grep -v test)\n\n.PHONY: acceptance-test\nacceptance-test: ## Run the Levant acceptance tests\n\t@echo \"==> Running $@...\"\n\tgo test -timeout 300s github.com/hashicorp/levant/test -v\n\n.PHONY: check\ncheck: tools lint check-mod ## Lint the source code and check other properties\n\n.PHONY: lint\nlint: hclfmt ## Lint the source code\n\t@echo \"==> Linting source code...\"\n\t@golangci-lint run -j 1\n\t@echo \"==> Done\"\n\n.PHONY: hclfmt\nhclfmt: ## Format HCL files with hclfmt\n\t@echo \"--> Formatting HCL\"\n\t@find . -name '.git' -prune \\\n\t\t\t\t\t-o -name '*fixtures*' -prune \\\n\t        -o \\( -name '*.nomad' -o -name '*.hcl' -o -name '*.tf' \\) \\\n\t      -print0 | xargs -0 hclfmt -w\n\t@if (git status -s | grep -q -e '\\.hcl$$' -e '\\.nomad$$' -e '\\.tf$$'); then echo The following HCL files are out of sync; git status -s | grep -e '\\.hcl$$' -e '\\.nomad$$' -e '\\.tf$$'; exit 1; fi\n\n.PHONY: check-mod\ncheck-mod: ## Checks the Go mod is tidy\n\t@echo \"==> Checking Go mod...\"\n\t@GO111MODULE=on go mod tidy\n\t@if (git status --porcelain | grep -q go.mod); then \\\n\t\techo go.mod needs updating; \\\n\t\tgit --no-pager diff go.mod; \\\n\t\texit 1; fi\n\t@echo \"==> Done\"\n\nHELP_FORMAT=\"    \\033[36m%-25s\\033[0m %s\\n\"\n.PHONY: help\nhelp: ## Display this usage information\n\t@echo \"Levant make commands:\"\n\t@grep -E '^[^ ]+:.*?## .*$$' $(MAKEFILE_LIST) | \\\n\t\tsort | \\\n\t\tawk 'BEGIN {FS = \":.*?## \"}; \\\n\t\t\t{printf $(HELP_FORMAT), $$1, $$2}'\n\n.PHONY: version\nversion:\nifneq (,$(wildcard version/version_ent.go))\n\t@$(CURDIR)/scripts/version.sh version/version.go version/version_ent.go\nelse\n\t@$(CURDIR)/scripts/version.sh version/version.go version/version.go\nendif\n"
  },
  {
    "path": "LICENSE",
    "content": "Copyright (c) 2017 HashiCorp, Inc.\n\nMozilla Public License, version 2.0\n\n1. Definitions\n\n1.1. \"Contributor\"\n\n     means each individual or legal entity that creates, contributes to the\n     creation of, or owns Covered Software.\n\n1.2. \"Contributor Version\"\n\n     means the combination of the Contributions of others (if any) used by a\n     Contributor and that particular Contributor's Contribution.\n\n1.3. \"Contribution\"\n\n     means Covered Software of a particular Contributor.\n\n1.4. \"Covered Software\"\n\n     means Source Code Form to which the initial Contributor has attached the\n     notice in Exhibit A, the Executable Form of such Source Code Form, and\n     Modifications of such Source Code Form, in each case including portions\n     thereof.\n\n1.5. \"Incompatible With Secondary Licenses\"\n     means\n\n     a. that the initial Contributor has attached the notice described in\n        Exhibit B to the Covered Software; or\n\n     b. that the Covered Software was made available under the terms of\n        version 1.1 or earlier of the License, but not also under the terms of\n        a Secondary License.\n\n1.6. \"Executable Form\"\n\n     means any form of the work other than Source Code Form.\n\n1.7. \"Larger Work\"\n\n     means a work that combines Covered Software with other material, in a\n     separate file or files, that is not Covered Software.\n\n1.8. \"License\"\n\n     means this document.\n\n1.9. \"Licensable\"\n\n     means having the right to grant, to the maximum extent possible, whether\n     at the time of the initial grant or subsequently, any and all of the\n     rights conveyed by this License.\n\n1.10. \"Modifications\"\n\n     means any of the following:\n\n     a. any file in Source Code Form that results from an addition to,\n        deletion from, or modification of the contents of Covered Software; or\n\n     b. any new file in Source Code Form that contains any Covered Software.\n\n1.11. 
\"Patent Claims\" of a Contributor\n\n      means any patent claim(s), including without limitation, method,\n      process, and apparatus claims, in any patent Licensable by such\n      Contributor that would be infringed, but for the grant of the License,\n      by the making, using, selling, offering for sale, having made, import,\n      or transfer of either its Contributions or its Contributor Version.\n\n1.12. \"Secondary License\"\n\n      means either the GNU General Public License, Version 2.0, the GNU Lesser\n      General Public License, Version 2.1, the GNU Affero General Public\n      License, Version 3.0, or any later versions of those licenses.\n\n1.13. \"Source Code Form\"\n\n      means the form of the work preferred for making modifications.\n\n1.14. \"You\" (or \"Your\")\n\n      means an individual or a legal entity exercising rights under this\n      License. For legal entities, \"You\" includes any entity that controls, is\n      controlled by, or is under common control with You. For purposes of this\n      definition, \"control\" means (a) the power, direct or indirect, to cause\n      the direction or management of such entity, whether by contract or\n      otherwise, or (b) ownership of more than fifty percent (50%) of the\n      outstanding shares or beneficial ownership of such entity.\n\n\n2. License Grants and Conditions\n\n2.1. Grants\n\n     Each Contributor hereby grants You a world-wide, royalty-free,\n     non-exclusive license:\n\n     a. under intellectual property rights (other than patent or trademark)\n        Licensable by such Contributor to use, reproduce, make available,\n        modify, display, perform, distribute, and otherwise exploit its\n        Contributions, either on an unmodified basis, with Modifications, or\n        as part of a Larger Work; and\n\n     b. 
under Patent Claims of such Contributor to make, use, sell, offer for\n        sale, have made, import, and otherwise transfer either its\n        Contributions or its Contributor Version.\n\n2.2. Effective Date\n\n     The licenses granted in Section 2.1 with respect to any Contribution\n     become effective for each Contribution on the date the Contributor first\n     distributes such Contribution.\n\n2.3. Limitations on Grant Scope\n\n     The licenses granted in this Section 2 are the only rights granted under\n     this License. No additional rights or licenses will be implied from the\n     distribution or licensing of Covered Software under this License.\n     Notwithstanding Section 2.1(b) above, no patent license is granted by a\n     Contributor:\n\n     a. for any code that a Contributor has removed from Covered Software; or\n\n     b. for infringements caused by: (i) Your and any other third party's\n        modifications of Covered Software, or (ii) the combination of its\n        Contributions with other software (except as part of its Contributor\n        Version); or\n\n     c. under Patent Claims infringed by Covered Software in the absence of\n        its Contributions.\n\n     This License does not grant any rights in the trademarks, service marks,\n     or logos of any Contributor (except as may be necessary to comply with\n     the notice requirements in Section 3.4).\n\n2.4. Subsequent Licenses\n\n     No Contributor makes additional grants as a result of Your choice to\n     distribute the Covered Software under a subsequent version of this\n     License (see Section 10.2) or under the terms of a Secondary License (if\n     permitted under the terms of Section 3.3).\n\n2.5. Representation\n\n     Each Contributor represents that the Contributor believes its\n     Contributions are its original creation(s) or it has sufficient rights to\n     grant the rights to its Contributions conveyed by this License.\n\n2.6. 
Fair Use\n\n     This License is not intended to limit any rights You have under\n     applicable copyright doctrines of fair use, fair dealing, or other\n     equivalents.\n\n2.7. Conditions\n\n     Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in\n     Section 2.1.\n\n\n3. Responsibilities\n\n3.1. Distribution of Source Form\n\n     All distribution of Covered Software in Source Code Form, including any\n     Modifications that You create or to which You contribute, must be under\n     the terms of this License. You must inform recipients that the Source\n     Code Form of the Covered Software is governed by the terms of this\n     License, and how they can obtain a copy of this License. You may not\n     attempt to alter or restrict the recipients' rights in the Source Code\n     Form.\n\n3.2. Distribution of Executable Form\n\n     If You distribute Covered Software in Executable Form then:\n\n     a. such Covered Software must also be made available in Source Code Form,\n        as described in Section 3.1, and You must inform recipients of the\n        Executable Form how they can obtain a copy of such Source Code Form by\n        reasonable means in a timely manner, at a charge no more than the cost\n        of distribution to the recipient; and\n\n     b. You may distribute such Executable Form under the terms of this\n        License, or sublicense it under different terms, provided that the\n        license for the Executable Form does not attempt to limit or alter the\n        recipients' rights in the Source Code Form under this License.\n\n3.3. Distribution of a Larger Work\n\n     You may create and distribute a Larger Work under terms of Your choice,\n     provided that You also comply with the requirements of this License for\n     the Covered Software. 
If the Larger Work is a combination of Covered\n     Software with a work governed by one or more Secondary Licenses, and the\n     Covered Software is not Incompatible With Secondary Licenses, this\n     License permits You to additionally distribute such Covered Software\n     under the terms of such Secondary License(s), so that the recipient of\n     the Larger Work may, at their option, further distribute the Covered\n     Software under the terms of either this License or such Secondary\n     License(s).\n\n3.4. Notices\n\n     You may not remove or alter the substance of any license notices\n     (including copyright notices, patent notices, disclaimers of warranty, or\n     limitations of liability) contained within the Source Code Form of the\n     Covered Software, except that You may alter any license notices to the\n     extent required to remedy known factual inaccuracies.\n\n3.5. Application of Additional Terms\n\n     You may choose to offer, and to charge a fee for, warranty, support,\n     indemnity or liability obligations to one or more recipients of Covered\n     Software. However, You may do so only on Your own behalf, and not on\n     behalf of any Contributor. You must make it absolutely clear that any\n     such warranty, support, indemnity, or liability obligation is offered by\n     You alone, and You hereby agree to indemnify every Contributor for any\n     liability incurred by such Contributor as a result of warranty, support,\n     indemnity or liability terms You offer. You may include additional\n     disclaimers of warranty and limitations of liability specific to any\n     jurisdiction.\n\n4. 
Inability to Comply Due to Statute or Regulation\n\n   If it is impossible for You to comply with any of the terms of this License\n   with respect to some or all of the Covered Software due to statute,\n   judicial order, or regulation then You must: (a) comply with the terms of\n   this License to the maximum extent possible; and (b) describe the\n   limitations and the code they affect. Such description must be placed in a\n   text file included with all distributions of the Covered Software under\n   this License. Except to the extent prohibited by statute or regulation,\n   such description must be sufficiently detailed for a recipient of ordinary\n   skill to be able to understand it.\n\n5. Termination\n\n5.1. The rights granted under this License will terminate automatically if You\n     fail to comply with any of its terms. However, if You become compliant,\n     then the rights granted under this License from a particular Contributor\n     are reinstated (a) provisionally, unless and until such Contributor\n     explicitly and finally terminates Your grants, and (b) on an ongoing\n     basis, if such Contributor fails to notify You of the non-compliance by\n     some reasonable means prior to 60 days after You have come back into\n     compliance. Moreover, Your grants from a particular Contributor are\n     reinstated on an ongoing basis if such Contributor notifies You of the\n     non-compliance by some reasonable means, this is the first time You have\n     received notice of non-compliance with this License from such\n     Contributor, and You become compliant prior to 30 days after Your receipt\n     of the notice.\n\n5.2. 
If You initiate litigation against any entity by asserting a patent\n     infringement claim (excluding declaratory judgment actions,\n     counter-claims, and cross-claims) alleging that a Contributor Version\n     directly or indirectly infringes any patent, then the rights granted to\n     You by any and all Contributors for the Covered Software under Section\n     2.1 of this License shall terminate.\n\n5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user\n     license agreements (excluding distributors and resellers) which have been\n     validly granted by You or Your distributors under this License prior to\n     termination shall survive termination.\n\n6. Disclaimer of Warranty\n\n   Covered Software is provided under this License on an \"as is\" basis,\n   without warranty of any kind, either expressed, implied, or statutory,\n   including, without limitation, warranties that the Covered Software is free\n   of defects, merchantable, fit for a particular purpose or non-infringing.\n   The entire risk as to the quality and performance of the Covered Software\n   is with You. Should any Covered Software prove defective in any respect,\n   You (not any Contributor) assume the cost of any necessary servicing,\n   repair, or correction. This disclaimer of warranty constitutes an essential\n   part of this License. No use of  any Covered Software is authorized under\n   this License except under this disclaimer.\n\n7. 
Limitation of Liability\n\n   Under no circumstances and under no legal theory, whether tort (including\n   negligence), contract, or otherwise, shall any Contributor, or anyone who\n   distributes Covered Software as permitted above, be liable to You for any\n   direct, indirect, special, incidental, or consequential damages of any\n   character including, without limitation, damages for lost profits, loss of\n   goodwill, work stoppage, computer failure or malfunction, or any and all\n   other commercial damages or losses, even if such party shall have been\n   informed of the possibility of such damages. This limitation of liability\n   shall not apply to liability for death or personal injury resulting from\n   such party's negligence to the extent applicable law prohibits such\n   limitation. Some jurisdictions do not allow the exclusion or limitation of\n   incidental or consequential damages, so this exclusion and limitation may\n   not apply to You.\n\n8. Litigation\n\n   Any litigation relating to this License may be brought only in the courts\n   of a jurisdiction where the defendant maintains its principal place of\n   business and such litigation shall be governed by laws of that\n   jurisdiction, without reference to its conflict-of-law provisions. Nothing\n   in this Section shall prevent a party's ability to bring cross-claims or\n   counter-claims.\n\n9. Miscellaneous\n\n   This License represents the complete agreement concerning the subject\n   matter hereof. If any provision of this License is held to be\n   unenforceable, such provision shall be reformed only to the extent\n   necessary to make it enforceable. Any law or regulation which provides that\n   the language of a contract shall be construed against the drafter shall not\n   be used to construe this License against a Contributor.\n\n\n10. Versions of the License\n\n10.1. New Versions\n\n      Mozilla Foundation is the license steward. 
Except as provided in Section\n      10.3, no one other than the license steward has the right to modify or\n      publish new versions of this License. Each version will be given a\n      distinguishing version number.\n\n10.2. Effect of New Versions\n\n      You may distribute the Covered Software under the terms of the version\n      of the License under which You originally received the Covered Software,\n      or under the terms of any subsequent version published by the license\n      steward.\n\n10.3. Modified Versions\n\n      If you create software not governed by this License, and you want to\n      create a new license for such software, you may create and use a\n      modified version of this License if you rename the license and remove\n      any references to the name of the license steward (except to note that\n      such modified license differs from this License).\n\n10.4. Distributing Source Code Form that is Incompatible With Secondary\n      Licenses If You choose to distribute Source Code Form that is\n      Incompatible With Secondary Licenses under the terms of this version of\n      the License, the notice described in Exhibit B of this License must be\n      attached.\n\nExhibit A - Source Code Form License Notice\n\n      This Source Code Form is subject to the\n      terms of the Mozilla Public License, v.\n      2.0. If a copy of the MPL was not\n      distributed with this file, You can\n      obtain one at\n      http://mozilla.org/MPL/2.0/.\n\nIf it is not possible or desirable to put the notice in a particular file,\nthen You may include the notice in a location (such as a LICENSE file in a\nrelevant directory) where a recipient would be likely to look for such a\nnotice.\n\nYou may add additional accurate notices of copyright ownership.\n\nExhibit B - \"Incompatible With Secondary Licenses\" Notice\n\n      This Source Code Form is \"Incompatible\n      With Secondary Licenses\", as defined by\n      the Mozilla Public License, v. 
2.0.\n"
  },
  {
    "path": "README.md",
    "content": "# Levant\n\n_Levant v0.4.0 is the final release of Levant. All users are encouraged to migrate to [Nomad Pack](https://github.com/hashicorp/nomad-pack)._\n\n[![Build Status](https://circleci.com/gh/hashicorp/levant.svg?style=svg)](https://circleci.com/gh/hashicorp/levant) [![Discuss](https://img.shields.io/badge/discuss-nomad-00BC7F?style=flat)](https://discuss.hashicorp.com/c/nomad)\n\nLevant is an open source templating and deployment tool for [HashiCorp Nomad][] jobs that provides\nrealtime feedback and detailed failure messages upon deployment issues.\n\n## Features\n\n- **Realtime Feedback**: Using watchers, Levant provides realtime feedback on Nomad job deployments\n  allowing for greater insight and knowledge about application deployments.\n\n- **Advanced Job Status Checking**: Particularly for system and batch jobs, Levant ensures the job,\n  evaluations and allocations all reach the desired state providing feedback at every stage.\n\n- **Dynamic Job Group Counts**: If the Nomad job is currently running on the cluster, Levant dynamically\n  updates the rendered template with the relevant job group counts before deployment.\n\n- **Failure Inspection**: Upon a deployment failure, Levant inspects each allocation and logs information\n  about each event, providing useful information for debugging without the need for querying the cluster\n  retrospectively.\n\n- **Canary Auto Promotion**: In environments with advanced automation and alerting, automatic promotion\n  of canary deployments may be desirable after a certain time threshold. 
Levant allows the user to\n  specify a `canary-auto-promote` time period, which if reached with a healthy set of canaries,\n  automatically promotes the deployment.\n\n- **Multiple Variable File Formats**: Currently Levant supports `.json`, `.tf`, `.yaml`, and `.yml`\n  file extensions for the declaration of template variables.\n\n- **Auto Revert Checking**: In the event that a job deployment does not pass its healthy threshold\n  and the job has auto-revert enabled, Levant tracks the resulting rollback deployment so you can\n  see the exact outcome of the deployment process.\n\n## Download & Install\n\n- Official Levant binaries can be downloaded from the [HashiCorp releases site][releases-hashicorp].\n\n- Levant can be installed via the Go toolchain using `go install github.com/hashicorp/levant@latest`.\n\n- A Docker image can be found on [Docker Hub][levant-docker]. The latest version can be downloaded\n  using `docker pull hashicorp/levant`.\n\n- Levant can be built from source by first cloning the repository with `git clone https://github.com/hashicorp/levant.git`.\n  Once cloned, a binary can be built using the `make dev` command; the resulting binary will be available at\n  `./bin/levant`.\n\n- There is a [Levant Ansible role][levant-ansible] available to help installation on machines. Thanks\n  to @stevenscg for this.\n\n- Pre-built binaries of Levant from versions 0.2.9 and earlier can be downloaded from the [GitHub releases page][releases]. These binaries were released prior to the migration to the HashiCorp organization. For example: `curl -L https://github.com/hashicorp/levant/releases/download/0.2.9/linux-amd64-levant -o levant`\n\n## Templating\n\nLevant includes functionality to perform template variable substitution as well as trigger built-in\ntemplate functions to add timestamps or retrieve information from Consul. 
For full details please\nconsult the [templates][] documentation page.\n\n## Commands\n\nLevant supports a number of command line arguments which provide control over the Levant binary. For\ndetail about each command and its supported flags, please consult the [commands][] documentation page.\n\n## Clients\n\nLevant utilizes the Nomad and Consul official clients and configuration can be done via a number of\nenvironment variables. For detail about these please read through the [clients][] documentation page.\n\n## Contributing\n\nCommunity contributions to Levant are encouraged. Please refer to the [contribution guide][] for\ndetails about hacking on Levant.\n\n[clients]: ./docs/clients.md\n[commands]: ./docs/commands.md\n[templates]: ./docs/templates.md\n[contribution guide]: https://github.com/hashicorp/levant/blob/master/.github/CONTRIBUTING.md\n[hashicorp nomad]: https://www.nomadproject.io/\n[releases]: https://github.com/hashicorp/levant/releases\n[levant-docker]: https://hub.docker.com/r/hashicorp/levant/\n[levant-ansible]: https://github.com/stevenscg/ansible-role-levant\n[releases-hashicorp]: https://releases.hashicorp.com/levant/\n"
  },
  {
    "path": "client/consul.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage client\n\nimport (\n\tconsul \"github.com/hashicorp/consul/api\"\n)\n\n// NewConsulClient is used to create a new client to interact with Consul.\nfunc NewConsulClient(addr string) (*consul.Client, error) {\n\tconfig := consul.DefaultConfig()\n\n\tif addr != \"\" {\n\t\tconfig.Address = addr\n\t}\n\n\tc, err := consul.NewClient(config)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn c, nil\n}\n"
  },
  {
    "path": "client/nomad.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage client\n\nimport (\n\tnomad \"github.com/hashicorp/nomad/api\"\n)\n\n// NewNomadClient is used to create a new client to interact with Nomad.\nfunc NewNomadClient(addr string) (*nomad.Client, error) {\n\tconfig := nomad.DefaultConfig()\n\n\tif addr != \"\" {\n\t\tconfig.Address = addr\n\t}\n\n\tc, err := nomad.NewClient(config)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn c, nil\n}\n"
  },
  {
    "path": "command/deploy.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage command\n\nimport (\n\t\"fmt\"\n\t\"strings\"\n\n\t\"github.com/hashicorp/levant/helper\"\n\t\"github.com/hashicorp/levant/levant\"\n\t\"github.com/hashicorp/levant/levant/structs\"\n\t\"github.com/hashicorp/levant/logging\"\n\t\"github.com/hashicorp/levant/template\"\n\tnomad \"github.com/hashicorp/nomad/api\"\n)\n\n// DeployCommand is the command implementation that allows users to deploy a\n// Nomad job based on passed templates and variables.\ntype DeployCommand struct {\n\tMeta\n}\n\n// Help provides the help information for the deploy command.\nfunc (c *DeployCommand) Help() string {\n\thelpText := `\nUsage: levant deploy [options] [TEMPLATE]\n\n  Deploy a Nomad job based on input templates and variable files. The deploy\n  command supports passing variables individually on the command line. Multiple\n  commands can be passed in the format of -var 'key=value'. Variables passed\n  via the command line take precedence over the same variable declared within\n  a passed variable file.\n\nArguments:\n\n  TEMPLATE nomad job template\n    If no argument is given we look for a single *.nomad file\n\nGeneral Options:\n\n  -address=<http_address>\n    The Nomad HTTP API address including port which Levant will use to make\n    calls.\n\n  -allow-stale\n    Allow stale consistency mode for requests into nomad.\n\n  -canary-auto-promote=<seconds>\n    The time in seconds, after which Levant will auto-promote a canary job\n    if all canaries within the deployment are healthy.\n\n  -consul-address=<addr>\n    The Consul host and port to use when making Consul KeyValue lookups for\n    template rendering.\n\n  -force\n    Execute deployment even though there were no changes.\n\n  -force-batch\n    Forces a new instance of the periodic job. 
A new instance will be created\n    even if it violates the job's prohibit_overlap settings.\n\n  -force-count\n    Use the taskgroup count from the Nomad jobfile instead of the count that\n    is currently set in a running job.\n\n  -ignore-no-changes\n    By default if no changes are detected when running a deployment Levant will\n    exit with a status 1 to indicate a deployment didn't happen. This behaviour\n    can be changed using this flag so that Levant will exit cleanly ensuring CD\n    pipelines don't fail when no changes are detected.\n\n  -log-level=<level>\n    Specify the verbosity level of Levant's logs. Valid values include DEBUG,\n    INFO, and WARN, in decreasing order of verbosity. The default is INFO.\n\n  -log-format=<format>\n    Specify the format of Levant's logs. Valid values are HUMAN or JSON. The\n    default is HUMAN.\n\n  -var-file=<file>\n    Path to a file containing user variables used when rendering the job\n    template. You can repeat this flag multiple times to supply multiple\n    var-files. 
Defaults to levant.(json|yaml|yml|tf).\n`\n\treturn strings.TrimSpace(helpText)\n}\n\n// Synopsis provides a brief summary of the deploy command.\nfunc (c *DeployCommand) Synopsis() string {\n\treturn \"Render and deploy a Nomad job from a template\"\n}\n\n// Run triggers a run of the Levant template and deploy functions.\nfunc (c *DeployCommand) Run(args []string) int {\n\n\tvar err error\n\tvar level, format string\n\n\tconfig := &levant.DeployConfig{\n\t\tClient:   &structs.ClientConfig{},\n\t\tDeploy:   &structs.DeployConfig{},\n\t\tPlan:     &structs.PlanConfig{},\n\t\tTemplate: &structs.TemplateConfig{},\n\t}\n\n\tflags := c.Meta.FlagSet(\"deploy\", FlagSetVars)\n\tflags.Usage = func() { c.UI.Output(c.Help()) }\n\n\tflags.StringVar(&config.Client.Addr, \"address\", \"\", \"\")\n\tflags.BoolVar(&config.Client.AllowStale, \"allow-stale\", false, \"\")\n\tflags.IntVar(&config.Deploy.Canary, \"canary-auto-promote\", 0, \"\")\n\tflags.StringVar(&config.Client.ConsulAddr, \"consul-address\", \"\", \"\")\n\tflags.BoolVar(&config.Deploy.Force, \"force\", false, \"\")\n\tflags.BoolVar(&config.Deploy.ForceBatch, \"force-batch\", false, \"\")\n\tflags.BoolVar(&config.Deploy.ForceCount, \"force-count\", false, \"\")\n\tflags.BoolVar(&config.Plan.IgnoreNoChanges, \"ignore-no-changes\", false, \"\")\n\tflags.StringVar(&level, \"log-level\", \"INFO\", \"\")\n\tflags.StringVar(&format, \"log-format\", \"HUMAN\", \"\")\n\n\tflags.Var((*helper.FlagStringSlice)(&config.Template.VariableFiles), \"var-file\", \"\")\n\n\tif err = flags.Parse(args); err != nil {\n\t\treturn 1\n\t}\n\n\targs = flags.Args()\n\n\tif err = logging.SetupLogger(level, format); err != nil {\n\t\tc.UI.Error(err.Error())\n\t\treturn 1\n\t}\n\n\tif len(args) == 1 {\n\t\tconfig.Template.TemplateFile = args[0]\n\t} else if len(args) == 0 {\n\t\tif config.Template.TemplateFile = helper.GetDefaultTmplFile(); config.Template.TemplateFile == \"\" 
{\n\t\t\tc.UI.Error(c.Help())\n\t\t\tc.UI.Error(\"\\nERROR: Template arg missing and no default template found\")\n\t\t\treturn 1\n\t\t}\n\t} else {\n\t\tc.UI.Error(c.Help())\n\t\treturn 1\n\t}\n\n\tconfig.Template.Job, err = template.RenderJob(config.Template.TemplateFile,\n\t\tconfig.Template.VariableFiles, config.Client.ConsulAddr, &c.Meta.flagVars)\n\tif err != nil {\n\t\tc.UI.Error(fmt.Sprintf(\"[ERROR] levant/command: %v\", err))\n\t\treturn 1\n\t}\n\n\tif config.Deploy.Canary > 0 {\n\t\tif err = c.checkCanaryAutoPromote(config.Template.Job, config.Deploy.Canary); err != nil {\n\t\t\tc.UI.Error(fmt.Sprintf(\"[ERROR] levant/command: %v\", err))\n\t\t\treturn 1\n\t\t}\n\t}\n\n\tif config.Deploy.ForceBatch {\n\t\tif err = c.checkForceBatch(config.Template.Job, config.Deploy.ForceBatch); err != nil {\n\t\t\tc.UI.Error(fmt.Sprintf(\"[ERROR] levant/command: %v\", err))\n\t\t\treturn 1\n\t\t}\n\t}\n\n\tif !config.Deploy.Force {\n\t\tp := levant.PlanConfig{\n\t\t\tClient:   config.Client,\n\t\t\tPlan:     config.Plan,\n\t\t\tTemplate: config.Template,\n\t\t}\n\n\t\tplanSuccess, changes := levant.TriggerPlan(&p)\n\t\tif !planSuccess {\n\t\t\treturn 1\n\t\t} else if !changes && p.Plan.IgnoreNoChanges {\n\t\t\treturn 0\n\t\t}\n\t}\n\n\tsuccess := levant.TriggerDeployment(config, nil)\n\tif !success {\n\t\treturn 1\n\t}\n\n\treturn 0\n}\n\n// checkCanaryAutoPromote ensures that if the canary-auto-promote flag is\n// passed, the job has canary deployments enabled.\nfunc (c *DeployCommand) checkCanaryAutoPromote(job *nomad.Job, canaryAutoPromote int) error {\n\tif canaryAutoPromote == 0 {\n\t\treturn nil\n\t}\n\n\tif job.Update != nil && job.Update.Canary != nil && *job.Update.Canary > 0 {\n\t\treturn nil\n\t}\n\n\tfor _, group := range job.TaskGroups {\n\t\tif group.Update != nil && group.Update.Canary != nil && *group.Update.Canary > 0 {\n\t\t\treturn nil\n\t\t}\n\t}\n\n\treturn fmt.Errorf(\"canary-auto-promote of %v passed but job is not canary enabled\", canaryAutoPromote)\n}\n\n// checkForceBatch ensures that if the force-batch flag is passed, the job is\n// periodic.\nfunc (c *DeployCommand) 
checkForceBatch(job *nomad.Job, forceBatch bool) error {\n\n\tif forceBatch && job.IsPeriodic() {\n\t\treturn nil\n\t}\n\n\treturn fmt.Errorf(\"force-batch passed but job is not periodic\")\n}\n"
  },
  {
    "path": "command/deploy_test.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage command\n\nimport (\n\t\"testing\"\n\n\t\"github.com/hashicorp/levant/template\"\n)\n\nfunc TestDeploy_checkCanaryAutoPromote(t *testing.T) {\n\n\tfVars := make(map[string]interface{})\n\tdepCommand := &DeployCommand{}\n\tcanaryPromote := 30\n\n\tcases := []struct {\n\t\tFile          string\n\t\tCanaryPromote int\n\t\tOutput        error\n\t}{\n\t\t{\n\t\t\tFile:          \"test-fixtures/job_canary.nomad\",\n\t\t\tCanaryPromote: canaryPromote,\n\t\t\tOutput:        nil,\n\t\t},\n\t\t{\n\t\t\tFile:          \"test-fixtures/group_canary.nomad\",\n\t\t\tCanaryPromote: canaryPromote,\n\t\t\tOutput:        nil,\n\t\t},\n\t}\n\n\tfor i, c := range cases {\n\t\tjob, err := template.RenderJob(c.File, []string{}, \"\", &fVars)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"case %d failed: %v\", i, err)\n\t\t}\n\n\t\tout := depCommand.checkCanaryAutoPromote(job, c.CanaryPromote)\n\t\tif out != c.Output {\n\t\t\tt.Fatalf(\"case %d: got \\\"%v\\\"; want %v\", i, out, c.Output)\n\t\t}\n\t}\n}\n\nfunc TestDeploy_checkForceBatch(t *testing.T) {\n\n\tfVars := make(map[string]interface{})\n\tdepCommand := &DeployCommand{}\n\tforceBatch := true\n\n\tcases := []struct {\n\t\tFile       string\n\t\tForceBatch bool\n\t\tOutput     error\n\t}{\n\t\t{\n\t\t\tFile:       \"test-fixtures/periodic_batch.nomad\",\n\t\t\tForceBatch: forceBatch,\n\t\t\tOutput:     nil,\n\t\t},\n\t}\n\n\tfor i, c := range cases {\n\t\tjob, err := template.RenderJob(c.File, []string{}, \"\", &fVars)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"case %d failed: %v\", i, err)\n\t\t}\n\n\t\tout := depCommand.checkForceBatch(job, c.ForceBatch)\n\t\tif out != c.Output {\n\t\t\tt.Fatalf(\"case %d: got \\\"%v\\\"; want %v\", i, out, c.Output)\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "command/dispatch.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage command\n\nimport (\n\t\"fmt\"\n\t\"io\"\n\t\"os\"\n\t\"strings\"\n\n\t\"github.com/hashicorp/levant/levant\"\n\t\"github.com/hashicorp/levant/logging\"\n\tflaghelper \"github.com/hashicorp/nomad/helper/flags\"\n)\n\n// DispatchCommand is the command implementation that allows users to\n// dispatch a Nomad job.\ntype DispatchCommand struct {\n\tMeta\n}\n\n// Help provides the help information for the dispatch command.\nfunc (c *DispatchCommand) Help() string {\n\thelpText := `\nUsage: levant dispatch [options] <parameterized job> [input source]\n\n  Dispatch creates an instance of a parameterized job. A data payload to the\n  dispatched instance can be provided via stdin by using \"-\" or by specifying a\n  path to a file. Metadata can be supplied by using the meta flag one or more\n  times. \n\nGeneral Options:\n\n  -address=<http_address>\n    The Nomad HTTP API address including port which Levant will use to make\n    calls.\n\n  -log-level=<level>\n    Specify the verbosity level of Levant's logs. Valid values include DEBUG,\n    INFO, and WARN, in decreasing order of verbosity. The default is INFO.\n\n  -log-format=<format>\n    Specify the format of Levant's logs. Valid values are HUMAN or JSON. The\n    default is HUMAN.\n\nDispatch Options:\n\n  -meta <key>=<value>\n    Meta takes a key/value pair separated by \"=\". The metadata key will be\n    merged into the job's metadata. The job may define a default value for the\n    key which is overridden when dispatching. The flag can be provided more \n    than once to inject multiple metadata key/value pairs. Arbitrary keys are\n    not allowed. 
The parameterized job must allow the key to be merged.\n`\n\treturn strings.TrimSpace(helpText)\n}\n\n// Synopsis provides a brief summary of the dispatch command.\nfunc (c *DispatchCommand) Synopsis() string {\n\treturn \"Dispatch an instance of a parameterized job\"\n}\n\n// Run triggers a run of the Levant dispatch functions.\nfunc (c *DispatchCommand) Run(args []string) int {\n\n\tvar meta []string\n\tvar addr, logLevel, logFormat string\n\n\tflags := c.Meta.FlagSet(\"dispatch\", FlagSetVars)\n\tflags.Usage = func() { c.UI.Output(c.Help()) }\n\tflags.Var((*flaghelper.StringFlag)(&meta), \"meta\", \"\")\n\tflags.StringVar(&addr, \"address\", \"\", \"\")\n\tflags.StringVar(&logLevel, \"log-level\", \"INFO\", \"\")\n\tflags.StringVar(&logFormat, \"log-format\", \"HUMAN\", \"\")\n\n\tif err := flags.Parse(args); err != nil {\n\t\treturn 1\n\t}\n\n\targs = flags.Args()\n\tif l := len(args); l < 1 || l > 2 {\n\t\tc.UI.Error(c.Help())\n\t\treturn 1\n\t}\n\n\terr := logging.SetupLogger(logLevel, logFormat)\n\tif err != nil {\n\t\tc.UI.Error(fmt.Sprintf(\"Error setting up logging: %v\", err))\n\t\treturn 1\n\t}\n\n\tjob := args[0]\n\tvar payload []byte\n\tvar readErr error\n\n\tif len(args) == 2 {\n\t\tswitch args[1] {\n\t\tcase \"-\":\n\t\t\tpayload, readErr = io.ReadAll(os.Stdin)\n\t\tdefault:\n\t\t\tpayload, readErr = os.ReadFile(args[1])\n\t\t}\n\t\tif readErr != nil {\n\t\t\tc.UI.Error(fmt.Sprintf(\"Error reading input data: %v\", readErr))\n\t\t\treturn 1\n\t\t}\n\t}\n\n\tmetaMap := make(map[string]string, len(meta))\n\tfor _, m := range meta {\n\t\tsplit := strings.SplitN(m, \"=\", 2)\n\t\tif len(split) != 2 {\n\t\t\tc.UI.Error(fmt.Sprintf(\"Error parsing meta value: %v\", m))\n\t\t\treturn 1\n\t\t}\n\t\tmetaMap[split[0]] = split[1]\n\t}\n\n\tsuccess := levant.TriggerDispatch(job, metaMap, payload, addr)\n\tif !success {\n\t\treturn 1\n\t}\n\n\treturn 0\n}\n"
  },
  {
    "path": "command/meta.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage command\n\nimport (\n\t\"bufio\"\n\t\"flag\"\n\t\"io\"\n\n\t\"github.com/hashicorp/levant/helper\"\n\t\"github.com/mitchellh/cli\"\n)\n\n// FlagSetFlags is an enum to define what flags are present in the\n// default FlagSet returned by Meta.FlagSet\ntype FlagSetFlags uint\n\n// Consts which helps us track meta CLI falgs.\nconst (\n\tFlagSetNone        FlagSetFlags = 0\n\tFlagSetBuildFilter FlagSetFlags = 1 << iota\n\tFlagSetVars\n)\n\n// Meta contains the meta-options and functionality that nearly every\n// Levant command inherits.\ntype Meta struct {\n\tUI cli.Ui\n\n\t// These are set by command-line flags\n\tflagVars map[string]interface{}\n}\n\n// FlagSet returns a FlagSet with the common flags that every\n// command implements.\nfunc (m *Meta) FlagSet(n string, fs FlagSetFlags) *flag.FlagSet {\n\tf := flag.NewFlagSet(n, flag.ContinueOnError)\n\n\t// FlagSetVars tells us what variables to use\n\tif fs&FlagSetVars != 0 {\n\t\tf.Var((*helper.Flag)(&m.flagVars), \"var\", \"\")\n\t}\n\n\t// Create an io.Writer that writes to our Ui properly for errors.\n\terrR, errW := io.Pipe()\n\terrScanner := bufio.NewScanner(errR)\n\tgo func() {\n\t\tfor errScanner.Scan() {\n\t\t\tm.UI.Error(errScanner.Text())\n\t\t}\n\t}()\n\tf.SetOutput(errW)\n\n\treturn f\n}\n"
  },
  {
    "path": "command/plan.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage command\n\nimport (\n\t\"fmt\"\n\t\"strings\"\n\n\t\"github.com/hashicorp/levant/helper\"\n\t\"github.com/hashicorp/levant/levant\"\n\t\"github.com/hashicorp/levant/levant/structs\"\n\t\"github.com/hashicorp/levant/logging\"\n\t\"github.com/hashicorp/levant/template\"\n)\n\n// PlanCommand is the command implementation that allows users to plan a\n// Nomad job based on passed templates and variables.\ntype PlanCommand struct {\n\tMeta\n}\n\n// Help provides the help information for the plan command.\nfunc (c *PlanCommand) Help() string {\n\thelpText := `\nUsage: levant plan [options] [TEMPLATE]\n\n  Perform a Nomad plan based on input templates and variable files. The plan\n  command supports passing variables individually on the command line. Multiple\n  commands can be passed in the format of -var 'key=value'. Variables passed\n  via the command line take precedence over the same variable declared within\n  a passed variable file.\n\nArguments:\n\n  TEMPLATE nomad job template\n    If no argument is given we look for a single *.nomad file\n\nGeneral Options:\n\n  -address=<http_address>\n    The Nomad HTTP API address including port which Levant will use to make\n    calls.\n\n  -allow-stale\n    Allow stale consistency mode for requests into nomad.\n\t\t\n  -consul-address=<addr>\n    The Consul host and port to use when making Consul KeyValue lookups for\n    template rendering.\n\n  -force-count\n    Use the taskgroup count from the Nomad jobfile instead of the count that\n    is currently set in a running job.\n\n  -ignore-no-changes\n    By default if no changes are detected when running a plan Levant will\n    exit with a status 1 to indicate there are no changes. 
This behaviour\n    can be changed using this flag so that Levant will exit cleanly ensuring CD\n    pipelines don't fail when no changes are detected.\n\n  -log-level=<level>\n    Specify the verbosity level of Levant's logs. Valid values include DEBUG,\n    INFO, and WARN, in decreasing order of verbosity. The default is INFO.\n\n  -log-format=<format>\n    Specify the format of Levant's logs. Valid values are HUMAN or JSON. The\n    default is HUMAN.\n\n  -var-file=<file>\n    Path to a file containing user variables used when rendering the job\n    template. You can repeat this flag multiple times to supply multiple\n    var-files. Defaults to levant.(json|yaml|yml|tf).\n`\n\treturn strings.TrimSpace(helpText)\n}\n\n// Synopsis provides a brief summary of the plan command.\nfunc (c *PlanCommand) Synopsis() string {\n\treturn \"Render and perform a Nomad job plan from a template\"\n}\n\n// Run triggers a run of the Levant template and plan functions.\nfunc (c *PlanCommand) Run(args []string) int {\n\n\tvar err error\n\tvar level, format string\n\tconfig := &levant.PlanConfig{\n\t\tClient:   &structs.ClientConfig{},\n\t\tPlan:     &structs.PlanConfig{},\n\t\tTemplate: &structs.TemplateConfig{},\n\t}\n\n\tflags := c.Meta.FlagSet(\"plan\", FlagSetVars)\n\tflags.Usage = func() { c.UI.Output(c.Help()) }\n\n\tflags.StringVar(&config.Client.Addr, \"address\", \"\", \"\")\n\tflags.BoolVar(&config.Client.AllowStale, \"allow-stale\", false, \"\")\n\tflags.StringVar(&config.Client.ConsulAddr, \"consul-address\", \"\", \"\")\n\tflags.BoolVar(&config.Plan.IgnoreNoChanges, \"ignore-no-changes\", false, \"\")\n\tflags.StringVar(&level, \"log-level\", \"INFO\", \"\")\n\tflags.StringVar(&format, \"log-format\", \"HUMAN\", \"\")\n\tflags.Var((*helper.FlagStringSlice)(&config.Template.VariableFiles), \"var-file\", \"\")\n\n\tif err = flags.Parse(args); err != nil {\n\t\treturn 1\n\t}\n\n\targs = flags.Args()\n\n\tif err = 
logging.SetupLogger(level, format); err != nil {\n\t\tc.UI.Error(err.Error())\n\t\treturn 1\n\t}\n\n\tif len(args) == 1 {\n\t\tconfig.Template.TemplateFile = args[0]\n\t} else if len(args) == 0 {\n\t\tif config.Template.TemplateFile = helper.GetDefaultTmplFile(); config.Template.TemplateFile == \"\" {\n\t\t\tc.UI.Error(c.Help())\n\t\t\tc.UI.Error(\"\\nERROR: Template arg missing and no default template found\")\n\t\t\treturn 1\n\t\t}\n\t} else {\n\t\tc.UI.Error(c.Help())\n\t\treturn 1\n\t}\n\n\tconfig.Template.Job, err = template.RenderJob(config.Template.TemplateFile,\n\t\tconfig.Template.VariableFiles, config.Client.ConsulAddr, &c.Meta.flagVars)\n\n\tif err != nil {\n\t\tc.UI.Error(fmt.Sprintf(\"[ERROR] levant/command: %v\", err))\n\t\treturn 1\n\t}\n\n\tsuccess, changes := levant.TriggerPlan(config)\n\tif !success {\n\t\treturn 1\n\t} else if !changes && config.Plan.IgnoreNoChanges {\n\t\treturn 0\n\t}\n\n\treturn 0\n}\n"
  },
  {
    "path": "command/render.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage command\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"os\"\n\t\"strings\"\n\n\t\"github.com/hashicorp/levant/helper\"\n\t\"github.com/hashicorp/levant/logging\"\n\t\"github.com/hashicorp/levant/template\"\n)\n\n// RenderCommand is the command implementation that allows users to render a\n// Nomad job template based on passed templates and variables.\ntype RenderCommand struct {\n\tMeta\n}\n\n// Help provides the help information for the template command.\nfunc (c *RenderCommand) Help() string {\n\thelpText := `\nUsage: levant render [options] [TEMPLATE]\n\n  Render a Nomad job template, useful for debugging. Like deploy, the render\n  command also supports passing variables individually on the command line. \n  Multiple vars can be passed in the format of -var 'key=value'. Variables \n  passed via the command line take precedence over the same variable declared\n  within a passed variable file.\n\nArguments:\n\n  TEMPLATE  nomad job template\n    If no argument is given we look for a single *.nomad file\n\nGeneral Options:\n\n  -consul-address=<addr>\n    The Consul host and port to use when making Consul KeyValue lookups for\n    template rendering.\n\n  -log-level=<level>\n    Specify the verbosity level of Levant's logs. Valid values include DEBUG,\n    INFO, and WARN, in decreasing order of verbosity. The default is INFO.\n\n  -log-format=<format>\n    Specify the format of Levant's logs. Valid values are HUMAN or JSON. The\n    default is HUMAN.\n\n  -out=<file>\n    Specify the path to write the rendered template out to, if a file exists at\n    the specified path it will be truncated before rendering. The template will be\n    rendered to stdout if this is not set.\n\n  -var-file=<file>\n    The variables file to render the template with. You can repeat this flag multiple\n    times to supply multiple var-files. 
[default: levant.(json|yaml|yml|tf)]\n`\n\treturn strings.TrimSpace(helpText)\n}\n\n// Synopsis provides a brief summary of the template command.\nfunc (c *RenderCommand) Synopsis() string {\n\treturn \"Render a Nomad job from a template\"\n}\n\n// Run triggers a run of the Levant template functions.\nfunc (c *RenderCommand) Run(args []string) int {\n\n\tvar addr, outPath, templateFile string\n\tvar variables []string\n\tvar err error\n\tvar tpl *bytes.Buffer\n\tvar level, format string\n\n\tflags := c.Meta.FlagSet(\"render\", FlagSetVars)\n\tflags.Usage = func() { c.UI.Output(c.Help()) }\n\n\tflags.StringVar(&addr, \"consul-address\", \"\", \"\")\n\tflags.StringVar(&level, \"log-level\", \"INFO\", \"\")\n\tflags.StringVar(&format, \"log-format\", \"HUMAN\", \"\")\n\tflags.Var((*helper.FlagStringSlice)(&variables), \"var-file\", \"\")\n\tflags.StringVar(&outPath, \"out\", \"\", \"\")\n\n\tif err = flags.Parse(args); err != nil {\n\t\treturn 1\n\t}\n\n\targs = flags.Args()\n\n\tif err = logging.SetupLogger(level, format); err != nil {\n\t\tc.UI.Error(err.Error())\n\t\treturn 1\n\t}\n\n\tif len(args) == 1 {\n\t\ttemplateFile = args[0]\n\t} else if len(args) == 0 {\n\t\tif templateFile = helper.GetDefaultTmplFile(); templateFile == \"\" {\n\t\t\tc.UI.Error(c.Help())\n\t\t\tc.UI.Error(\"\\nERROR: Template arg missing and no default template found\")\n\t\t\treturn 1\n\t\t}\n\t} else {\n\t\tc.UI.Error(c.Help())\n\t\treturn 1\n\t}\n\n\ttpl, err = template.RenderTemplate(templateFile, variables, addr, &c.Meta.flagVars)\n\tif err != nil {\n\t\tc.UI.Error(fmt.Sprintf(\"[ERROR] levant/command: %v\", err))\n\t\treturn 1\n\t}\n\n\tout := os.Stdout\n\tif outPath != \"\" {\n\t\tout, err = os.Create(outPath)\n\t\tif err != nil {\n\t\t\tc.UI.Error(fmt.Sprintf(\"[ERROR] levant/command: %v\", err))\n\t\t\treturn 1\n\t\t}\n\t}\n\n\t_, err = tpl.WriteTo(out)\n\tif err != nil {\n\t\tc.UI.Error(fmt.Sprintf(\"[ERROR] levant/command: %v\", err))\n\t\treturn 1\n\t}\n\n\treturn 0\n}\n"
  },
  {
    "path": "command/scale_in.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage command\n\nimport (\n\t\"strings\"\n\n\t\"github.com/hashicorp/levant/levant/structs\"\n\t\"github.com/hashicorp/levant/logging\"\n\t\"github.com/hashicorp/levant/scale\"\n)\n\n// ScaleInCommand is the command implementation that allows users to scale a\n// Nomad job in.\ntype ScaleInCommand struct {\n\tMeta\n}\n\n// Help provides the help information for the scale-in command.\nfunc (c *ScaleInCommand) Help() string {\n\thelpText := `\nUsage: levant scale-in [options] <job-id>\n\n  Scale a Nomad job and optional task group in.\n\nGeneral Options:\n\n  -address=<http_address>\n    The Nomad HTTP API address including port which Levant will use to make\n    calls.\n\n  -allow-stale\n    Allow stale consistency mode for requests into Nomad.\n\n  -log-level=<level>\n    Specify the verbosity level of Levant's logs. Valid values include DEBUG,\n    INFO, and WARN, in decreasing order of verbosity. The default is INFO.\n\n  -log-format=<format>\n    Specify the format of Levant's logs. Valid values are HUMAN or JSON. The\n    default is HUMAN.\n\nScale In Options:\n\n  -count=<num>\n    The count by which the job and task groups should be scaled in. Only\n    one of count or percent can be passed.\n\n  -percent=<num>\n    A percentage value by which the job and task groups should be scaled in.\n    Counts will be rounded up to ensure required capacity is met. Only\n    one of count or percent can be passed.\n\n  -task-group=<name>\n    The name of the task group you wish to target for scaling. If this is not\n    specified, all task groups within the job will be scaled.\n`\n\treturn strings.TrimSpace(helpText)\n}\n\n// Synopsis provides a brief summary of the scale-in command.\nfunc (c *ScaleInCommand) Synopsis() string {\n\treturn \"Scale in a Nomad job\"\n}\n\n// Run triggers a run of the Levant scale-in functions.\nfunc (c *ScaleInCommand) Run(args []string) int {\n\n\tvar err error\n\tvar logL, logF string\n\n\tconfig := &scale.Config{\n\t\tClient: &structs.ClientConfig{},\n\t\tScale: &structs.ScaleConfig{\n\t\t\tDirection: structs.ScalingDirectionIn,\n\t\t},\n\t}\n\n\tflags := c.Meta.FlagSet(\"scale-in\", FlagSetVars)\n\tflags.Usage = func() { c.UI.Output(c.Help()) }\n\n\tflags.StringVar(&config.Client.Addr, \"address\", \"\", \"\")\n\tflags.BoolVar(&config.Client.AllowStale, \"allow-stale\", false, \"\")\n\tflags.StringVar(&logL, \"log-level\", \"INFO\", \"\")\n\tflags.StringVar(&logF, \"log-format\", \"HUMAN\", \"\")\n\tflags.IntVar(&config.Scale.Count, \"count\", 0, \"\")\n\tflags.IntVar(&config.Scale.Percent, \"percent\", 0, \"\")\n\tflags.StringVar(&config.Scale.TaskGroup, \"task-group\", \"\", \"\")\n\n\tif err = flags.Parse(args); err != nil {\n\t\treturn 1\n\t}\n\n\targs = flags.Args()\n\n\tif len(args) != 1 {\n\t\tc.UI.Error(\"This command takes one argument: <job-id>\")\n\t\treturn 1\n\t}\n\n\tconfig.Scale.JobID = args[0]\n\n\tif config.Scale.Count == 0 && config.Scale.Percent == 0 || config.Scale.Count > 0 && config.Scale.Percent > 0 {\n\t\tc.UI.Error(\"You must set either the -count or -percent flag to scale-in\")\n\t\treturn 1\n\t}\n\n\tif config.Scale.Count > 0 {\n\t\tconfig.Scale.DirectionType = structs.ScalingDirectionTypeCount\n\t}\n\n\tif config.Scale.Percent > 0 {\n\t\tconfig.Scale.DirectionType = structs.ScalingDirectionTypePercent\n\t}\n\n\tif err = logging.SetupLogger(logL, logF); err != nil {\n\t\tc.UI.Error(err.Error())\n\t\treturn 1\n\t}\n\n\tsuccess := scale.TriggerScalingEvent(config)\n\tif !success {\n\t\treturn 1\n\t}\n\n\treturn 0\n}\n"
  },
  {
    "path": "command/scale_out.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage command\n\nimport (\n\t\"strings\"\n\n\t\"github.com/hashicorp/levant/levant/structs\"\n\t\"github.com/hashicorp/levant/logging\"\n\t\"github.com/hashicorp/levant/scale\"\n)\n\n// ScaleOutCommand is the command implementation that allows users to scale a\n// Nomad job out.\ntype ScaleOutCommand struct {\n\tMeta\n}\n\n// Help provides the help information for the scale-out command.\nfunc (c *ScaleOutCommand) Help() string {\n\thelpText := `\nUsage: levant scale-out [options] <job-id>\n\n  Scale a Nomad job and optional task group out.\n\nGeneral Options:\n\n  -address=<http_address>\n    The Nomad HTTP API address including port which Levant will use to make\n    calls.\n\n  -allow-stale\n    Allow stale consistency mode for requests into Nomad.\n\n  -log-level=<level>\n    Specify the verbosity level of Levant's logs. Valid values include DEBUG,\n    INFO, and WARN, in decreasing order of verbosity. The default is INFO.\n\n  -log-format=<format>\n    Specify the format of Levant's logs. Valid values are HUMAN or JSON. The\n    default is HUMAN.\n\nScale Out Options:\n\n  -count=<num>\n    The count by which the job and task groups should be scaled out. Only\n    one of count or percent can be passed.\n\n  -percent=<num>\n    A percentage value by which the job and task groups should be scaled out.\n    Counts will be rounded up to ensure required capacity is met. Only\n    one of count or percent can be passed.\n\n  -task-group=<name>\n    The name of the task group you wish to target for scaling. If this is not\n    specified, all task groups within the job will be scaled.\n`\n\treturn strings.TrimSpace(helpText)\n}\n\n// Synopsis provides a brief summary of the scale-out command.\nfunc (c *ScaleOutCommand) Synopsis() string {\n\treturn \"Scale out a Nomad job\"\n}\n\n// Run triggers a run of the Levant scale-out functions.\nfunc (c *ScaleOutCommand) Run(args []string) int {\n\n\tvar err error\n\tvar logL, logF string\n\n\tconfig := &scale.Config{\n\t\tClient: &structs.ClientConfig{},\n\t\tScale: &structs.ScaleConfig{\n\t\t\tDirection: structs.ScalingDirectionOut,\n\t\t},\n\t}\n\n\tflags := c.Meta.FlagSet(\"scale-out\", FlagSetVars)\n\tflags.Usage = func() { c.UI.Output(c.Help()) }\n\n\tflags.StringVar(&config.Client.Addr, \"address\", \"\", \"\")\n\tflags.BoolVar(&config.Client.AllowStale, \"allow-stale\", false, \"\")\n\tflags.StringVar(&logL, \"log-level\", \"INFO\", \"\")\n\tflags.StringVar(&logF, \"log-format\", \"HUMAN\", \"\")\n\tflags.IntVar(&config.Scale.Count, \"count\", 0, \"\")\n\tflags.IntVar(&config.Scale.Percent, \"percent\", 0, \"\")\n\tflags.StringVar(&config.Scale.TaskGroup, \"task-group\", \"\", \"\")\n\n\tif err = flags.Parse(args); err != nil {\n\t\treturn 1\n\t}\n\n\targs = flags.Args()\n\n\tif len(args) != 1 {\n\t\tc.UI.Error(\"This command takes one argument: <job-id>\")\n\t\treturn 1\n\t}\n\n\tconfig.Scale.JobID = args[0]\n\n\tif config.Scale.Count == 0 && config.Scale.Percent == 0 || config.Scale.Count > 0 && config.Scale.Percent > 0 {\n\t\tc.UI.Error(\"You must set either the -count or -percent flag to scale-out\")\n\t\treturn 1\n\t}\n\n\tif config.Scale.Count > 0 {\n\t\tconfig.Scale.DirectionType = structs.ScalingDirectionTypeCount\n\t}\n\n\tif config.Scale.Percent > 0 {\n\t\tconfig.Scale.DirectionType = structs.ScalingDirectionTypePercent\n\t}\n\n\tif err = logging.SetupLogger(logL, logF); err != nil {\n\t\tc.UI.Error(err.Error())\n\t\treturn 1\n\t}\n\n\tsuccess := scale.TriggerScalingEvent(config)\n\tif !success {\n\t\treturn 1\n\t}\n\n\treturn 0\n}\n"
  },
  {
    "path": "command/test-fixtures/group_canary.nomad",
    "content": "# Copyright (c) HashiCorp, Inc.\n# SPDX-License-Identifier: MPL-2.0\n\njob \"example\" {\n  datacenters = [\"dc1\"]\n  type = \"service\"\n\n  group \"cache\" {\n    update {\n      max_parallel     = 1\n      min_healthy_time = \"10s\"\n      healthy_deadline = \"1m\"\n      auto_revert      = true\n      canary           = 1\n    }\n    count = 1\n    restart {\n      attempts = 10\n      interval = \"5m\"\n      delay = \"25s\"\n      mode = \"delay\"\n    }\n    ephemeral_disk {\n      size = 300\n    }\n    task \"redis\" {\n      artifact {\n        source = \"google.com\"\n      }\n\n      driver = \"docker\"\n      config {\n        image = \"redis:3.2\"\n        port_map {\n          db = 6379\n        }\n      }\n      resources {\n        cpu    = 500\n        memory = 256\n        network {\n          mbits = 10\n          port \"db\" {}\n        }\n      }\n      service {\n        name = \"global-redis-check\"\n        tags = [\"global\", \"cache\"]\n        port = \"db\"\n        check {\n          name     = \"alive\"\n          type     = \"tcp\"\n          interval = \"10s\"\n          timeout  = \"2s\"\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "command/test-fixtures/job_canary.nomad",
    "content": "# Copyright (c) HashiCorp, Inc.\n# SPDX-License-Identifier: MPL-2.0\n\njob \"example\" {\n  datacenters = [\"dc1\"]\n  type = \"service\"\n  update {\n    max_parallel     = 1\n    min_healthy_time = \"10s\"\n    healthy_deadline = \"1m\"\n    auto_revert      = true\n    canary           = 1\n  }\n  group \"cache\" {\n    count = 1\n    restart {\n      attempts = 10\n      interval = \"5m\"\n      delay = \"25s\"\n      mode = \"delay\"\n    }\n    ephemeral_disk {\n      size = 300\n    }\n    task \"redis\" {\n      artifact {\n        source = \"google.com\"\n      }\n\n      driver = \"docker\"\n      config {\n        image = \"redis:3.2\"\n        port_map {\n          db = 6379\n        }\n      }\n      resources {\n        cpu    = 500\n        memory = 256\n        network {\n          mbits = 10\n          port \"db\" {}\n        }\n      }\n      service {\n        name = \"global-redis-check\"\n        tags = [\"global\", \"cache\"]\n        port = \"db\"\n        check {\n          name     = \"alive\"\n          type     = \"tcp\"\n          interval = \"10s\"\n          timeout  = \"2s\"\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "command/test-fixtures/periodic_batch.nomad",
    "content": "# Copyright (c) HashiCorp, Inc.\n# SPDX-License-Identifier: MPL-2.0\n\njob \"periodic_batch_test\" {\n  datacenters = [\"dc1\"]\n  region      = \"global\"\n  type        = \"batch\"\n  priority    = 75\n  periodic {\n    cron             = \"* 1 * * * *\"\n    prohibit_overlap = true\n  }\n  group \"periodic_batch\" {\n    task \"periodic_batch\" {\n      driver = \"docker\"\n      config {\n        image = \"cogniteev/echo\"\n      }\n      resources {\n        cpu    = 100\n        memory = 128\n        network {\n          mbits = 1\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "command/version.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage command\n\nimport (\n\t\"github.com/mitchellh/cli\"\n)\n\nvar _ cli.Command = &VersionCommand{}\n\n// VersionCommand is a Command implementation that prints the version.\ntype VersionCommand struct {\n\tVersion string\n\tUI      cli.Ui\n}\n\n// Help provides the help information for the version command.\nfunc (c *VersionCommand) Help() string {\n\treturn \"\"\n}\n\n// Synopsis provides a brief summary of the version command.\nfunc (c *VersionCommand) Synopsis() string {\n\treturn \"Prints the Levant version\"\n}\n\n// Run executes the version command.\nfunc (c *VersionCommand) Run(_ []string) int {\n\tc.UI.Info(c.Version)\n\treturn 0\n}\n"
  },
  {
    "path": "commands.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage main\n\nimport (\n\t\"fmt\"\n\t\"os\"\n\n\t\"github.com/hashicorp/levant/command\"\n\t\"github.com/hashicorp/levant/version\"\n\t\"github.com/mitchellh/cli\"\n)\n\n// Commands returns the mapping of CLI commands for Levant. The meta parameter\n// lets you set meta options for all commands.\nfunc Commands(metaPtr *command.Meta) map[string]cli.CommandFactory {\n\tif metaPtr == nil {\n\t\tmetaPtr = new(command.Meta)\n\t}\n\n\tmeta := *metaPtr\n\tif meta.UI == nil {\n\t\tmeta.UI = &cli.BasicUi{\n\t\t\tReader:      os.Stdin,\n\t\t\tWriter:      os.Stdout,\n\t\t\tErrorWriter: os.Stderr,\n\t\t}\n\t}\n\n\treturn map[string]cli.CommandFactory{\n\n\t\t\"deploy\": func() (cli.Command, error) {\n\t\t\treturn &command.DeployCommand{\n\t\t\t\tMeta: meta,\n\t\t\t}, nil\n\t\t},\n\t\t\"dispatch\": func() (cli.Command, error) {\n\t\t\treturn &command.DispatchCommand{\n\t\t\t\tMeta: meta,\n\t\t\t}, nil\n\t\t},\n\t\t\"plan\": func() (cli.Command, error) {\n\t\t\treturn &command.PlanCommand{\n\t\t\t\tMeta: meta,\n\t\t\t}, nil\n\t\t},\n\t\t\"render\": func() (cli.Command, error) {\n\t\t\treturn &command.RenderCommand{\n\t\t\t\tMeta: meta,\n\t\t\t}, nil\n\t\t},\n\t\t\"scale-in\": func() (cli.Command, error) {\n\t\t\treturn &command.ScaleInCommand{\n\t\t\t\tMeta: meta,\n\t\t\t}, nil\n\t\t},\n\t\t\"scale-out\": func() (cli.Command, error) {\n\t\t\treturn &command.ScaleOutCommand{\n\t\t\t\tMeta: meta,\n\t\t\t}, nil\n\t\t},\n\t\t\"version\": func() (cli.Command, error) {\n\t\t\treturn &command.VersionCommand{\n\t\t\t\tVersion: fmt.Sprintf(\"Levant %s\", version.GetHumanVersion()),\n\t\t\t\tUI:      meta.UI,\n\t\t\t}, nil\n\t\t},\n\t}\n}\n"
  },
  {
    "path": "docs/README.md",
    "content": "# Levant Documentation\n\nWelcome to the Levant documentation. The `docs` directory aims to host detailed and thorough documentation about Levant, including design rationale. Levant is an open source templating and deployment tool for [HashiCorp Nomad](https://www.nomadproject.io/) jobs that provides real-time feedback and detailed failure messages upon deployment issues.\n\n## Documentation Pages\n\n* [Commands](./commands.md) - detail about the Levant CLI and flags associated with each command.\n* [Templates](./templates.md) - detail about Levant's templating engine and functions.\n* [Clients](./clients.md) - information on Nomad and Consul client configuration options.\n\n## Contributing\n\nIf there is something missing or wrong, contributions to the Levant docs are very welcome and greatly appreciated.\n"
  },
  {
    "path": "docs/clients.md",
    "content": "## Clients\n\nLevant uses Nomad and Consul clients in order to perform its work. Currently only the HTTP address client parameter can be configured for each client via CLI flags, a choice made to keep the number of flags low. In order to further configure the clients you can use environment variables as detailed below.\n\n### Nomad Client\n\nThe project uses the Nomad [Default API Client](https://github.com/hashicorp/nomad/blob/master/api/api.go#L201), which means the following Nomad client parameters used by Levant are configurable via environment variables:\n\n * **NOMAD_ADDR** - The address of the Nomad server.\n * **NOMAD_REGION** - The region of the Nomad servers to forward commands to.\n * **NOMAD_NAMESPACE** - The target namespace for queries and actions bound to a namespace.\n * **NOMAD_CACERT** - Path to a PEM encoded CA cert file to use to verify the Nomad server SSL certificate.\n * **NOMAD_CAPATH** - Path to a directory of PEM encoded CA cert files to verify the Nomad server SSL certificate.\n * **NOMAD_CLIENT_CERT** - Path to a PEM encoded client certificate for TLS authentication to the Nomad server.\n * **NOMAD_CLIENT_KEY** - Path to an unencrypted PEM encoded private key matching the client certificate from `NOMAD_CLIENT_CERT`.\n * **NOMAD_SKIP_VERIFY** - Do not verify the TLS certificate.\n * **NOMAD_TOKEN** - The SecretID of an ACL token to use to authenticate API requests with.\n\n### Consul Client\n\nThe project also uses the Consul [Default API Client](https://github.com/hashicorp/consul/blob/master/api/api.go#L282), which means the following Consul client parameters used by Levant are configurable via environment variables:\n\n * **CONSUL_CACERT** - Path to a CA file to use for TLS when communicating with Consul.\n * **CONSUL_CAPATH** - Path to a directory of CA certificates to use for TLS when communicating with Consul.\n * **CONSUL_CLIENT_CERT** - Path to a client cert file to use for TLS when 'verify_incoming' is enabled.\n * **CONSUL_CLIENT_KEY** - Path to a client key file to use for TLS when 'verify_incoming' is enabled.\n * **CONSUL_HTTP_ADDR** - The address and port of the Consul HTTP agent. The value can be an IP address or DNS address, but it must include the port.\n * **CONSUL_TLS_SERVER_NAME** - The server name to use as the SNI host when connecting via TLS.\n * **CONSUL_HTTP_TOKEN** - ACL token to use in the request. If unspecified, the query will default to the token of the Consul agent at the HTTP address.\n"
  },
  {
    "path": "docs/commands.md",
    "content": "## Commands\n\nLevant supports a number of command line arguments which provide control over the Levant binary. Each command supports the `--help` flag to provide usage assistance.\n\n### Command: `deploy`\n\n`deploy` is the main entry point into Levant for deploying a Nomad job and supports the following flags, which should be followed by the Nomad job template you wish to deploy. Levant also supports autoloading files, whereby it will look in the current working directory for a `levant.[json,yaml,yml,tf]` file and a single `*.nomad` file to use for the command actions.\n\n* **-address** (string: \"http://localhost:4646\") The HTTP API endpoint for Nomad where all calls will be made.\n\n* **-allow-stale** (bool: false) Allow stale consistency mode for requests into Nomad.\n\n* **-canary-auto-promote** (int: 0) The time period in seconds that Levant should wait before attempting to promote a canary deployment.\n\n* **-consul-address** (string: \"localhost:8500\") The Consul host and port to use when making Consul KeyValue lookups for template rendering.\n\n* **-force** (bool: false) Execute the deployment even though there were no changes.\n\n* **-force-batch** (bool: false) Forces a new instance of the periodic job. A new instance will be created even if it violates the job's prohibit_overlap settings.\n\n* **-force-count** (bool: false) Use the taskgroup count from the Nomad job file instead of the count obtained from the running job.\n\n* **-ignore-no-changes** (bool: false) By default, if no changes are detected when running a deployment, Levant will exit with status 1 to indicate a deployment didn't happen. This behaviour can be changed using this flag so that Levant exits cleanly, ensuring CD pipelines don't fail when no changes are detected.\n\n* **-log-level** (string: \"INFO\") The level at which Levant will log. Valid values are DEBUG, INFO, WARN, ERROR and FATAL.\n\n* **-log-format** (string: \"HUMAN\") Specify the format of Levant's logs. Valid values are HUMAN or JSON.\n\n* **-var-file** (string: \"\") The variables file to render the template with. This flag can be specified multiple times to supply multiple variables files.\n\nThe `deploy` command also supports passing variables individually on the command line. Multiple variables can be passed in the format of `-var 'key=value'`. Variables passed via the command line take precedence over the same variable declared within a passed variable file.\n\nFull example:\n\n```\nlevant deploy -log-level=debug -address=nomad.devoops -var-file=var.yaml -var 'var=test' example.nomad\n```\n\n### Command: `dispatch`\n\n`dispatch` allows you to dispatch an instance of a Nomad parameterized job and utilise Levant's advanced job checking features to ensure the job reaches the correct running state.\n\n* **-address** (string: \"http://localhost:4646\") The HTTP API endpoint for Nomad where all calls will be made.\n\n* **-log-level** (string: \"INFO\") The level at which Levant will log. Valid values are DEBUG, INFO, WARN, ERROR and FATAL.\n\n* **-log-format** (string: \"HUMAN\") Specify the format of Levant's logs. Valid values are HUMAN or JSON.\n\n* **-meta** (string: \"key=value\") The metadata key will be merged into the job's metadata. The job may define a default value for the key which is overridden when dispatching. The flag can be provided more than once to inject multiple metadata key/value pairs. Arbitrary keys are not allowed. The parameterized job must allow the key to be merged.\n\nThe command also supports the ability to send a data payload to the dispatched instance. This can be provided via stdin by using \"-\" for the input source or by specifying a path to a file.\n\nFull example:\n\n```\nlevant dispatch -log-level=debug -address=nomad.devoops -meta key=value dispatch_job payload_item\n```\n\n### Command: `plan`\n\n`plan` allows you to perform a Nomad plan of a rendered template job. This is useful for seeing the expected changes before larger deploys.\n\n* **-address** (string: \"http://localhost:4646\") The HTTP API endpoint for Nomad where all calls will be made.\n\n* **-allow-stale** (bool: false) Allow stale consistency mode for requests into Nomad.\n\n* **-consul-address** (string: \"localhost:8500\") The Consul host and port to use when making Consul KeyValue lookups for template rendering.\n\n* **-force-count** (bool: false) Use the taskgroup count from the Nomad job file instead of the count obtained from the running job.\n\n* **-ignore-no-changes** (bool: false) By default, if no changes are detected when running a deployment, Levant will exit with status 1 to indicate a deployment didn't happen. This behaviour can be changed using this flag so that Levant exits cleanly, ensuring CD pipelines don't fail when no changes are detected.\n\n* **-log-level** (string: \"INFO\") The level at which Levant will log. Valid values are DEBUG, INFO, WARN, ERROR and FATAL.\n\n* **-log-format** (string: \"HUMAN\") Specify the format of Levant's logs. Valid values are HUMAN or JSON.\n\n* **-var-file** (string: \"\") The variables file to render the template with. This flag can be specified multiple times to supply multiple variables files.\n\nThe `plan` command also supports passing variables individually on the command line. Multiple variables can be passed in the format of `-var 'key=value'`. Variables passed via the command line take precedence over the same variable declared within a passed variable file.\n\nFull example:\n\n```\nlevant plan -log-level=debug -address=nomad.devoops -var-file=var.yaml -var 'var=test' example.nomad\n```\n\n### Command: `render`\n\n`render` allows rendering of a Nomad job template without deploying, useful when testing or debugging. Levant also supports autoloading files, whereby it will look in the current working directory for a `levant.[json,yaml,yml,tf]` file and a single `*.nomad` file to use for the command actions.\n\n* **-consul-address** (string: \"localhost:8500\") The Consul host and port to use when making Consul KeyValue lookups for template rendering.\n\n* **-log-level** (string: \"INFO\") The level at which Levant will log. Valid values are DEBUG, INFO, WARN, ERROR and FATAL.\n\n* **-log-format** (string: \"HUMAN\") Specify the format of Levant's logs. Valid values are HUMAN or JSON.\n\n* **-var-file** (string: \"\") The variables file to render the template with. This flag can be specified multiple times to supply multiple variables files.\n\n* **-out** (string: \"\") The path to write the rendered template to. The template will be rendered to stdout if this is not set.\n\nLike `deploy`, the `render` command also supports passing variables individually on the command line. Multiple variables can be passed in the format of `-var 'key=value'`. Variables passed via the command line take precedence over the same variable declared within a passed variable file.\n\nFull example:\n\n```\nlevant render -var-file=var.yaml -var 'var=test' example.nomad\n```\n\n### Command: `scale-in`\n\nThe `scale-in` command allows the operator to scale in (down) a Nomad job and, optionally, a task group within that job. This can be particularly helpful in the development and testing of new Nomad jobs, or when resizing existing ones.\n\n* **-address** (string: \"http://localhost:4646\") The HTTP API endpoint for Nomad where all calls will be made.\n\n* **-count** (int: 0) The count by which the job and task groups should be scaled in. Only one of count or percent can be passed.\n\n* **-log-level** (string: \"INFO\") The level at which Levant will log. Valid values are DEBUG, INFO, WARN, ERROR and FATAL.\n\n* **-log-format** (string: \"HUMAN\") Specify the format of Levant's logs. Valid values are HUMAN or JSON.\n\n* **-percent** (int: 0) A percentage value by which the job and task groups should be scaled in. Counts will be rounded up to ensure required capacity is met. Only one of count or percent can be passed.\n\n* **-task-group** (string: \"\") The name of the task group you wish to target for scaling. If this is not specified, all task groups within the job will be scaled.\n\nFull example:\n\n```\nlevant scale-in -count 3 -task-group cache example\n```\n\n### Command: `scale-out`\n\nThe `scale-out` command allows the operator to scale out (up) a Nomad job and, optionally, a task group within that job. This can be particularly helpful in the development and testing of new Nomad jobs, or when resizing existing ones.\n\n* **-address** (string: \"http://localhost:4646\") The HTTP API endpoint for Nomad where all calls will be made.\n\n* **-count** (int: 0) The count by which the job and task groups should be scaled out. Only one of count or percent can be passed.\n\n* **-log-level** (string: \"INFO\") The level at which Levant will log. Valid values are DEBUG, INFO, WARN, ERROR and FATAL.\n\n* **-log-format** (string: \"HUMAN\") Specify the format of Levant's logs. Valid values are HUMAN or JSON.\n\n* **-percent** (int: 0) A percentage value by which the job and task groups should be scaled out. Counts will be rounded up to ensure required capacity is met. Only one of count or percent can be passed.\n\n* **-task-group** (string: \"\") The name of the task group you wish to target for scaling. If this is not specified, all task groups within the job will be scaled.\n\nFull example:\n\n```\nlevant scale-out -percent 30 -task-group cache example\n```\n\n### Command: `version`\n\nThe `version` command displays build information about the running binary, including the release version.\n"
  },
  {
    "path": "docs/templates.md",
    "content": "## Templates\n\nAlongside enhanced deployments of Nomad jobs; Levant provides templating functionality allowing for greater flexibility throughout your Nomad jobs files. It also allows the same job file to be used across each environment you have, meaning your operation maturity is kept high. \n\n### Template Substitution\n\nLevant currently supports `.json`, `.tf`, `.yaml`, and `.yml` file extensions for the declaration of template variables and uses opening and closing double squared brackets `[[ ]]` within the templated job file. This is to ensure there is no clash with existing Nomad interpolation which uses the standard `{{ }}` notation.\n\n#### JSON\n\nJSON as well as YML provide the most flexible variable file format. It allows for descriptive and well organised jobs and variables file as shown below.\n\nExample job template:\n```hcl\nresources {\n    cpu    = [[.resources.cpu]]\n    memory = [[.resources.memory]]\n\n    network {\n        mbits = [[.resources.network.mbits]]\n    }\n}\n```\n\nExample variable file:\n```json\n{\n    \"resources\":{\n        \"cpu\":250,\n        \"memory\":512,\n        \"network\":{\n            \"mbits\":10\n        }\n    }\n}\n```\n\n#### Terraform\n\nTerraform (.tf) is probably the most inflexible of the variable file formats but does provide an easy to follow, descriptive manner in which to work. 
It may also be advantageous to use this format if you use Terraform for infrastructure as code thus allow you to use a consistant file format.\n\nExample job template:\n```hcl\nresources {\n    cpu    = [[.resources_cpu]]\n    memory = [[.resources_memory]]\n\n    network {\n        mbits = [[.resources_network_mbits]]\n    }\n}\n```\n\nExample variable file:\n```hcl\nvariable \"resources_cpu\" {\n  description = \"the CPU in MHz to allocate to the task group\"\n  type        = \"string\"\n  default     = 250\n}\n\nvariable \"resources_memory\" {\n  description = \"the memory in MB to allocate to the task group\"\n  type        = \"string\"\n  default     = 512\n}\n\nvariable \"resources_network_mbits\" {\n  description = \"the network bandwidth in MBits to allocate\"\n  type        = \"string\"\n  default     = 10\n}\n```\n\n#### YAML\n\nExample job template:\n```hcl\nresources {\n    cpu    = [[.resources.cpu]]\n    memory = [[.resources.memory]]\n\n    network {\n        mbits = [[.resources.network.mbits]]\n    }\n}\n```\n\nExample variable file:\n```yaml\n---\nresources:\n  cpu: 250\n  memory: 512\n  network:\n    mbits: 10\n```\n\n### Template Functions\n\nLevant's template rendering supports a number of functions which provide flexibility when deploying jobs. As with the variable substitution, it uses opening and closing double squared brackets `[[ ]]` as not to conflict with Nomad's templating standard. Levant parses job files using the [Go Template library](https://golang.org/pkg/text/template/) which makes available the features of that library as well as the functions described below.\n\nIf you require any additional functions please raise a feature request against the project.\n\n#### consulKey\n\nQuery Consul for the value at the given key path and render the template with the value. 
In the below example the value at the Consul KV path `service/config/cpu` would be `250`.\n\nExample:\n```\n[[ consulKey \"service/config/cpu\" ]]\n```\n\nRender:\n```\n250\n```\n\n#### consulKeyExists\n\nQuery Consul for the value at the given key path. If the key exists, this will return true, false otherwise. This is helpful in controlling the template flow, and adding conditional logic into particular sections of the job file. In this example we could try and control where the job is configured with a particular \"alerting\" setup by checking for the existance of a KV in Consul.\n\nExample:\n```\n{{ if consulKeyExists \"service/config/alerting\" }}\n  <configure alerts>\n{{ else }}\n  <skip configure alerts>\n{{ end }}\n```\n\n#### consulKeyOrDefault\n\nQuery Consul for the value at the given key path. If the key does not exist, the default value will be used instead. In the following example we query the Consul KV path `service/config/database-addr` but there is nothing at that location. If a value did exist at the path, the rendered value would be the KV value. 
This can be helpful when configuring jobs which defaults which are appropriate for local testing and development.\n\nExample:\n```\n[[ consulKeyOrDefault \"service/config/database-addr\" \"localhost:3306\" ]]\n```\n\nRender:\n```\nlocalhost:3306\n```\n\n#### env\n\nReturns the value of the given environment variable.\n\nExample:\n```\n[[ env \"HOME\" ]]\n[[ or (env \"NON_EXISTENT\") \"foo\" ]]\n```\n\nRender:\n```\n/bin/bash\nfoo\n```\n\n\n#### fileContents\n\nReads the entire contents of the specified file and adds it to the template.\n\nExample file contents:\n```\n---\nyaml:\n  - is: everywhere\n```\n\nExample job template:\n```\n[[ fileContents \"/etc/myapp/config\" ]]\n```\n\nRender:\n```\n---\nyaml:\n  - is: everywhere\n```\n\n\n#### loop\n\nAccepts varying parameters and differs its behavior based on those parameters as detailed below.\n\nIf loop is given a single int input, it will loop up to, but not including the given integer from index 0:\n\nExample:\n```\n[[ range $i := loop 3 ]]\nthis-is-loop[[ $i ]][[ end ]]\n```\n\nRender:\n```\nthis-is-output0\nthis-is-output1\nthis-is-output2\n```\n\nIf given two integers, this function will begin at the first integer and loop up to but not including the second integer:\n\nExample:\n```\n[[ range $i := loop 3 6 ]]\nthis-is-loop[[ $i ]][[ end ]]\n```\n\nRender:\n```\nthis-is-output3\nthis-is-output4\nthis-is-output5\n```\n\n#### parseBool\n\nTakes the given string and parses it as a boolean value which can be helpful in performing conditional checks. 
In the below example if the key has a value of \"true\" we could use it to alter what tags are added to the job:\n\nExample:\n```\n[[ if \"true\" | parseBool ]][[ \"beta-release\" ]][[ end ]]\n```\n\nRender:\n```\nbeta-release\n```\n\n#### parseFloat\n\nTakes the given string and parses it as a base-10 float64.\n\nExample:\n```\n[[ \"3.14159265359\" | parseFloat ]]\n```\n\nRender:\n```\n3.14159265359\n```\n\n#### parseInt\n\nTakes the given string and parses it as a base-10 int64 and is typically combined with other helpers such as loop:\n\nExample:\n```\n[[ with $i := consulKey \"service/config/conn_pool\" | parseInt ]][[ range $d := loop $i ]]\nconn-pool-id-[[ $d ]][[ end ]][[ end ]]\n```\n\nRender:\n```\nconn-pool-id-0\nconn-pool-id-1\nconn-pool-id-2\n```\n\n#### parseJSON\n\nTakes the given input and parses the result as JSON. This can allow you to wrap an entire job template as shown below and pull variables from Consul KV for template rendering. The below example is based on the template substitution above and expects the Consul KV to be `{\"resources\":{\"cpu\":250,\"memory\":512,\"network\":{\"mbits\":10}}}`:\n\nExample:\n```\n[[ with $data := consulKey \"service/config/variables\" | parseJSON ]]\nresources {\n    cpu    = [[.resources.cpu]]\n    memory = [[.resources.memory]]\n\n    network {\n        mbits = [[.resources.network.mbits]]\n    }\n}\n[[ end ]]\n```\n\nRender:\n```\nresources {\n    cpu    = 250\n    memory = 512\n\n    network {\n        mbits = 10\n    }\n}\n```\n\n#### parseUint\n\nTakes the given string and parses it as a base-10 int64.\n\nExample:\n```\n[[ \"100\" | parseUint ]]\n```\n\nRender:\n```\n100\n```\n\n#### replace\n\nReplaces all occurrences of the search string with the replacement string.\n\nExample:\n```\n[[ \"Batman and Robin\" | replace \"Robin\" \"Catwoman\" ]]\n```\n\nRender:\n```\nBatman and Catwoman\n```\n\n#### timeNow\n\nReturns the current ISO_8601 standard timestamp as a string in the timezone of the machine the 
rendering was triggered on.\n\nExample:\n```\n[[ timeNow ]]\n```\n\nRender:\n```\n2018-06-25T09:45:08+02:00\n```\n\n#### timeNowUTC\n\nReturns the current ISO 8601 timestamp as a string in UTC.\n\nExample:\n```\n[[ timeNowUTC ]]\n```\n\nRender:\n```\n2018-06-25T07:45:08Z\n```\n\n#### timeNowTimezone\n\nReturns the current ISO 8601 timestamp as a string in the specified timezone. The timezone must be specified according to the entries in the IANA Time Zone database, such as \"Asia/Seoul\". Details of the entries can be found on [Wikipedia](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) or on your local workstation (macOS or BSD) by searching within `/usr/share/zoneinfo`.\n\nExample:\n```\n[[ timeNowTimezone \"Asia/Seoul\" ]]\n```\n\nRender:\n```\n2018-06-25T16:45:08+09:00\n```\n\n#### toLower\n\nTakes the argument as a string and converts it to lowercase.\n\nExample:\n```\n[[ \"QUEUE-NAME\" | toLower ]]\n```\n\nRender:\n```\nqueue-name\n```\n\n#### toUpper\n\nTakes the argument as a string and converts it to uppercase.\n\nExample:\n```\n[[ \"queue-name\" | toUpper ]]\n```\n\nRender:\n```\nQUEUE-NAME\n```\n\n#### add\n\nReturns the sum of the two passed values.\n\nExample:\n```\n[[ add 5 2 ]]\n```\n\nRender:\n```\n7\n```\n\n#### subtract\n\nSubtracts the first value from the second; note the argument order in the example below.\n\nExample:\n```\n[[ subtract 2 5 ]]\n```\n\nRender:\n```\n3\n```\n\n#### multiply\n\nReturns the product of the two values.\n\nExample:\n```\n[[ multiply 4 4 ]]\n```\n\nRender:\n```\n16\n```\n\n#### divide\n\nDivides the second value by the first; note the argument order in the example below.\n\nExample:\n```\n[[ divide 2 6 ]]\n```\n\nRender:\n```\n3\n```\n\n#### modulo\n\nReturns the second value modulo the first.\n\nExample:\n```\n[[ modulo 2 5 ]]\n```\n\nRender:\n```\n1\n```\n\n#### Access Variables Globally\n\nExample config file:\n```yaml\nmy_i32: 1\nmy_array:\n  - \"a\"\n  - \"b\"\n  - \"c\"\nmy_nested:\n  my_data1: \"lorempium\"\n  my_data2: 
\"faker\"\n```\n\nTemplate:\n```\n[[ $.my_i32 ]]\n[[ range $c := $.my_array ]][[ $c ]]-[[ $.my_i32 ]],[[ end ]]\n```\n\nRender:\n```\n1\na1,b1,c1,\n```\n\n#### Sprig Template\nMore about Sprig here: [Sprig](https://masterminds.github.io/sprig/)\n\n#### Sprig Join String\n\nTemplate: \n```\n[[ $.my_array | sprigJoin `-` ]]\n```\n\nRender:\n```\na-b-c\n```\n\n#### Define variable\n\nTemplate:\n```\n[[ with $data := $.my_nested ]]\nENV_x1=[[ $data.my_data1 ]]\nENV_x2=[[ $data.my_data2 ]]\n[[ end ]] \n```\n\nRender:\n```\n\nENV_x1=lorempium\nENV_x2=faker\n\n```\n"
  },
  {
    "path": "go.mod",
    "content": "module github.com/hashicorp/levant\n\ngo 1.24.4\n\n// Use the same version of go-metrics as Nomad.\nreplace github.com/armon/go-metrics => github.com/armon/go-metrics v0.0.0-20230509193637-d9ca9af9f1f9\n\nrequire (\n\tgithub.com/Masterminds/sprig/v3 v3.3.0\n\tgithub.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc\n\tgithub.com/hashicorp/consul/api v1.32.1\n\tgithub.com/hashicorp/go-multierror v1.1.1\n\tgithub.com/hashicorp/nomad v1.10.2\n\tgithub.com/hashicorp/nomad/api v0.0.0-20250620152331-1030760d3f77\n\tgithub.com/hashicorp/terraform v0.13.5\n\tgithub.com/mattn/go-isatty v0.0.20\n\tgithub.com/mitchellh/cli v1.1.5\n\tgithub.com/pkg/errors v0.9.1\n\tgithub.com/rs/zerolog v1.6.0\n\tgithub.com/sean-/conswriter v0.0.0-20180208195008-f5ae3917a627\n\tgithub.com/stretchr/testify v1.10.0\n\tgopkg.in/yaml.v2 v2.4.0\n)\n\nrequire (\n\tdario.cat/mergo v1.0.1 // indirect\n\tgithub.com/Masterminds/goutils v1.1.1 // indirect\n\tgithub.com/Masterminds/semver/v3 v3.3.0 // indirect\n\tgithub.com/agext/levenshtein v1.2.2 // indirect\n\tgithub.com/apparentlymart/go-cidr v1.1.0 // indirect\n\tgithub.com/apparentlymart/go-textseg/v15 v15.0.0 // indirect\n\tgithub.com/apparentlymart/go-versions v1.0.0 // indirect\n\tgithub.com/armon/go-metrics v0.4.1 // indirect\n\tgithub.com/armon/go-radix v1.0.0 // indirect\n\tgithub.com/bgentry/speakeasy v0.1.0 // indirect\n\tgithub.com/bmatcuk/doublestar v1.1.5 // indirect\n\tgithub.com/fatih/color v1.18.0 // indirect\n\tgithub.com/google/go-cmp v0.7.0 // indirect\n\tgithub.com/google/uuid v1.6.0 // indirect\n\tgithub.com/gorilla/websocket v1.5.3 // indirect\n\tgithub.com/hashicorp/cronexpr v1.1.2 // indirect\n\tgithub.com/hashicorp/errwrap v1.1.0 // indirect\n\tgithub.com/hashicorp/go-cleanhttp v0.5.2 // indirect\n\tgithub.com/hashicorp/go-cty-funcs v0.0.0-20200930094925-2721b1e36840 // indirect\n\tgithub.com/hashicorp/go-hclog v1.6.3 // indirect\n\tgithub.com/hashicorp/go-immutable-radix v1.3.1 // 
indirect\n\tgithub.com/hashicorp/go-metrics v0.5.4 // indirect\n\tgithub.com/hashicorp/go-retryablehttp v0.7.7 // indirect\n\tgithub.com/hashicorp/go-rootcerts v1.0.2 // indirect\n\tgithub.com/hashicorp/go-version v1.7.0 // indirect\n\tgithub.com/hashicorp/golang-lru v1.0.2 // indirect\n\tgithub.com/hashicorp/hcl/v2 v2.20.2-0.20240517235513-55d9c02d147d // indirect\n\tgithub.com/hashicorp/hil v0.0.0-20210521165536-27a72121fd40 // indirect\n\tgithub.com/hashicorp/serf v0.10.2 // indirect\n\tgithub.com/hashicorp/terraform-svchost v0.0.0-20191011084731-65d371908596 // indirect\n\tgithub.com/huandu/xstrings v1.5.0 // indirect\n\tgithub.com/mattn/go-colorable v0.1.14 // indirect\n\tgithub.com/mitchellh/copystructure v1.2.0 // indirect\n\tgithub.com/mitchellh/go-homedir v1.1.0 // indirect\n\tgithub.com/mitchellh/go-wordwrap v1.0.1 // indirect\n\tgithub.com/mitchellh/mapstructure v1.5.0 // indirect\n\tgithub.com/mitchellh/reflectwalk v1.0.2 // indirect\n\tgithub.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect\n\tgithub.com/posener/complete v1.2.3 // indirect\n\tgithub.com/sean-/pager v0.0.0-20180208200047-666be9bf53b5 // indirect\n\tgithub.com/shopspring/decimal v1.4.0 // indirect\n\tgithub.com/spf13/afero v1.2.2 // indirect\n\tgithub.com/spf13/cast v1.7.0 // indirect\n\tgithub.com/zclconf/go-cty v1.16.3 // indirect\n\tgithub.com/zclconf/go-cty-yaml v1.1.0 // indirect\n\tgolang.org/x/crypto v0.38.0 // indirect\n\tgolang.org/x/exp v0.0.0-20250506013437-ce4c2cf36ca6 // indirect\n\tgolang.org/x/mod v0.25.0 // indirect\n\tgolang.org/x/net v0.40.0 // indirect\n\tgolang.org/x/oauth2 v0.30.0 // indirect\n\tgolang.org/x/sync v0.15.0 // indirect\n\tgolang.org/x/sys v0.33.0 // indirect\n\tgolang.org/x/text v0.25.0 // indirect\n\tgolang.org/x/tools v0.33.0 // indirect\n\tgopkg.in/yaml.v3 v3.0.1 // indirect\n)\n"
  },
  {
    "path": "go.sum",
    "content": "cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=\ncloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=\ncloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=\ncloud.google.com/go v0.44.1/go.mod h1:iSa0KzasP4Uvy3f1mN/7PiObzGgflwredwwASm/v6AU=\ncloud.google.com/go v0.44.2/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY=\ncloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTjc=\ncloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=\ncloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=\ndario.cat/mergo v1.0.1 h1:Ra4+bf83h2ztPIQYNP99R6m+Y7KfnARDfID+a+vLl4s=\ndario.cat/mergo v1.0.1/go.mod h1:uNxQE+84aUszobStD9th8a29P2fMDhsBdgRYvZOxGmk=\ngithub.com/Azure/azure-sdk-for-go v45.0.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=\ngithub.com/Azure/go-autorest v14.2.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=\ngithub.com/Azure/go-autorest/autorest v0.11.3/go.mod h1:JFgpikqFJ/MleTTxwepExTKnFUKKszPS8UavbQYUMuw=\ngithub.com/Azure/go-autorest/autorest/adal v0.9.0/go.mod h1:/c022QCutn2P7uY+/oQWWNcK9YU+MH96NgK+jErpbcg=\ngithub.com/Azure/go-autorest/autorest/azure/cli v0.4.0/go.mod h1:JljT387FplPzBA31vUcvsetLKF3pec5bdAxjVU4kI2s=\ngithub.com/Azure/go-autorest/autorest/date v0.3.0/go.mod h1:BI0uouVdmngYNUzGWeSYnokU+TrmwEsOqdt8Y6sso74=\ngithub.com/Azure/go-autorest/autorest/mocks v0.4.0/go.mod h1:LTp+uSrOhSkaKrUy935gNZuuIPPVsHlr9DSOxSayd+k=\ngithub.com/Azure/go-autorest/autorest/to v0.4.0/go.mod h1:fE8iZBn7LQR7zH/9XU2NcPR4o9jEImooCeWJcYV/zLE=\ngithub.com/Azure/go-autorest/autorest/validation v0.3.0/go.mod h1:yhLgjC0Wda5DYXl6JAsWyUe4KVNffhoDhG0zVzUMo3E=\ngithub.com/Azure/go-autorest/logger v0.2.0/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZmbF5NWuPV8+WeEW8=\ngithub.com/Azure/go-autorest/tracing v0.6.0/go.mod 
h1:+vhtPC754Xsa23ID7GlGsrdKBpUA79WCAKPPZVC2DeU=\ngithub.com/Azure/go-ntlmssp v0.0.0-20180810175552-4a21cbd618b4/go.mod h1:chxPXzSsl7ZWRAuOIE23GDNzjWuZquvFlgA8xmpunjU=\ngithub.com/Azure/go-ntlmssp v0.0.0-20200615164410-66371956d46c/go.mod h1:chxPXzSsl7ZWRAuOIE23GDNzjWuZquvFlgA8xmpunjU=\ngithub.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=\ngithub.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=\ngithub.com/ChrisTrenkamp/goxpath v0.0.0-20170922090931-c385f95c6022/go.mod h1:nuWgzSkT5PnyOd+272uUmV0dnAnAn42Mk7PiQC5VzN4=\ngithub.com/ChrisTrenkamp/goxpath v0.0.0-20190607011252-c5096ec8773d/go.mod h1:nuWgzSkT5PnyOd+272uUmV0dnAnAn42Mk7PiQC5VzN4=\ngithub.com/DataDog/datadog-go v3.2.0+incompatible/go.mod h1:LButxg5PwREeZtORoXG3tL4fMGNddJ+vMq1mwgfaqoQ=\ngithub.com/Masterminds/goutils v1.1.1 h1:5nUrii3FMTL5diU80unEVvNevw1nH4+ZV4DSLVJLSYI=\ngithub.com/Masterminds/goutils v1.1.1/go.mod h1:8cTjp+g8YejhMuvIA5y2vz3BpJxksy863GQaJW2MFNU=\ngithub.com/Masterminds/semver/v3 v3.1.1/go.mod h1:VPu/7SZ7ePZ3QOrcuXROw5FAcLl4a0cBrbBpGY/8hQs=\ngithub.com/Masterminds/semver/v3 v3.3.0 h1:B8LGeaivUe71a5qox1ICM/JLl0NqZSW5CHyL+hmvYS0=\ngithub.com/Masterminds/semver/v3 v3.3.0/go.mod h1:4V+yj/TJE1HU9XfppCwVMZq3I84lprf4nC11bSS5beM=\ngithub.com/Masterminds/sprig/v3 v3.2.1/go.mod h1:UoaO7Yp8KlPnJIYWTFkMaqPUYKTfGFPhxNuwnnxkKlk=\ngithub.com/Masterminds/sprig/v3 v3.3.0 h1:mQh0Yrg1XPo6vjYXgtf5OtijNAKJRNcTdOOGZe3tPhs=\ngithub.com/Masterminds/sprig/v3 v3.3.0/go.mod h1:Zy1iXRYNqNLUolqCpL4uhk6SHUMAOSCzdgBfDb35Lz0=\ngithub.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ=\ngithub.com/PuerkitoBio/purell v1.0.0/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=\ngithub.com/PuerkitoBio/urlesc v0.0.0-20160726150825-5bd2802263f2/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=\ngithub.com/QcloudApi/qcloud_sign_golang 
v0.0.0-20141224014652-e4130a326409/go.mod h1:1pk82RBxDY/JZnPQrtqHlUFfCctgdorsd9M06fMynOM=\ngithub.com/abdullin/seq v0.0.0-20160510034733-d5467c17e7af/go.mod h1:5Jv4cbFiHJMsVxt52+i0Ha45fjshj6wxYr1r19tB9bw=\ngithub.com/agext/levenshtein v1.2.1/go.mod h1:JEDfjyjHDjOF/1e4FlBE/PkbqA9OfWu2ki2W0IB5558=\ngithub.com/agext/levenshtein v1.2.2 h1:0S/Yg6LYmFJ5stwQeRp6EeOcCbj7xiqQSdNelsXvaqE=\ngithub.com/agext/levenshtein v1.2.2/go.mod h1:JEDfjyjHDjOF/1e4FlBE/PkbqA9OfWu2ki2W0IB5558=\ngithub.com/agl/ed25519 v0.0.0-20170116200512-5312a6153412/go.mod h1:WPjqKcmVOxf0XSf3YxCJs6N6AOSrOx3obionmG7T0y0=\ngithub.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=\ngithub.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=\ngithub.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=\ngithub.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=\ngithub.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d/go.mod h1:rBZYJk541a8SKzHPHnH3zbiI+7dagKZ0cgpgrD7Fyho=\ngithub.com/aliyun/alibaba-cloud-sdk-go v0.0.0-20190329064014-6e358769c32a/go.mod h1:T9M45xf79ahXVelWoOBmH0y4aC1t5kXO5BxwyakgIGA=\ngithub.com/aliyun/aliyun-oss-go-sdk v0.0.0-20190103054945-8205d1f41e70/go.mod h1:T/Aws4fEfogEE9v+HPhhw+CntffsBHJ8nXQCwKr0/g8=\ngithub.com/aliyun/aliyun-tablestore-go-sdk v4.1.2+incompatible/go.mod h1:LDQHRZylxvcg8H7wBIDfvO5g/cy4/sz1iucBlc2l3Jw=\ngithub.com/antchfx/xpath v0.0.0-20190129040759-c8489ed3251e/go.mod h1:Yee4kTMuNiPYJ7nSNorELQMr1J33uOpXDMByNYhvtNk=\ngithub.com/antchfx/xquery v0.0.0-20180515051857-ad5b8c7a47b0/go.mod h1:LzD22aAzDP8/dyiCKFp31He4m2GPjl0AFyzDtZzUu9M=\ngithub.com/apparentlymart/go-cidr v1.0.1/go.mod h1:EBcsNrHc3zQeuaeCeCtQruQm+n9/YjEn/vI25Lg7Gwc=\ngithub.com/apparentlymart/go-cidr v1.1.0 
h1:2mAhrMoF+nhXqxTzSZMUzDHkLjmIHC+Zzn4tdgBZjnU=\ngithub.com/apparentlymart/go-cidr v1.1.0/go.mod h1:EBcsNrHc3zQeuaeCeCtQruQm+n9/YjEn/vI25Lg7Gwc=\ngithub.com/apparentlymart/go-dump v0.0.0-20180507223929-23540a00eaa3/go.mod h1:oL81AME2rN47vu18xqj1S1jPIPuN7afo62yKTNn3XMM=\ngithub.com/apparentlymart/go-dump v0.0.0-20190214190832-042adf3cf4a0 h1:MzVXffFUye+ZcSR6opIgz9Co7WcDx6ZcY+RjfFHoA0I=\ngithub.com/apparentlymart/go-dump v0.0.0-20190214190832-042adf3cf4a0/go.mod h1:oL81AME2rN47vu18xqj1S1jPIPuN7afo62yKTNn3XMM=\ngithub.com/apparentlymart/go-textseg v1.0.0/go.mod h1:z96Txxhf3xSFMPmb5X/1W05FF/Nj9VFpLOpjS5yuumk=\ngithub.com/apparentlymart/go-textseg/v12 v12.0.0/go.mod h1:S/4uRK2UtaQttw1GenVJEynmyUenKwP++x/+DdGV/Ec=\ngithub.com/apparentlymart/go-textseg/v15 v15.0.0 h1:uYvfpb3DyLSCGWnctWKGj857c6ew1u1fNQOlOtuGxQY=\ngithub.com/apparentlymart/go-textseg/v15 v15.0.0/go.mod h1:K8XmNZdhEBkdlyDdvbmmsvpAG721bKi0joRfFdHIWJ4=\ngithub.com/apparentlymart/go-userdirs v0.0.0-20200915174352-b0c018a67c13/go.mod h1:7kfpUbyCdGJ9fDRCp3fopPQi5+cKNHgTE4ZuNrO71Cw=\ngithub.com/apparentlymart/go-versions v1.0.0 h1:4A4CekGuwDUQqc+uTXCrdb9Y98JZsML2sdfNTeVjsK4=\ngithub.com/apparentlymart/go-versions v1.0.0/go.mod h1:YF5j7IQtrOAOnsGkniupEA5bfCjzd7i14yu0shZavyM=\ngithub.com/armon/circbuf v0.0.0-20190214190532-5111143e8da2/go.mod h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o=\ngithub.com/armon/go-metrics v0.0.0-20230509193637-d9ca9af9f1f9 h1:51N4T44k8crLrlHy1zgBKGdYKjzjquaXw/RPbq/bH+o=\ngithub.com/armon/go-metrics v0.0.0-20230509193637-d9ca9af9f1f9/go.mod h1:E6amYzXo6aW1tqzoZGT755KkbgrJsSdpwZ+3JqfkOG4=\ngithub.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=\ngithub.com/armon/go-radix v1.0.0 h1:F4z6KzEeeQIMeLFa97iZU6vupzoecKdU5TX24SNppXI=\ngithub.com/armon/go-radix v1.0.0/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=\ngithub.com/aws/aws-sdk-go v1.15.78/go.mod h1:E3/ieXAlvM0XWO57iftYVDLLvQ824smPP3ATZkfNZeM=\ngithub.com/aws/aws-sdk-go 
v1.31.9/go.mod h1:5zCpMtNQVjRREroY7sYe8lOMRSxkhG6MZveU8YkpAk0=\ngithub.com/baiyubin/aliyun-sts-go-sdk v0.0.0-20180326062324-cfa1a18b161f/go.mod h1:AuiFmCCPBSrqvVMvuqFuk0qogytodnVFVSN5CeJB8Gc=\ngithub.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=\ngithub.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=\ngithub.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=\ngithub.com/bgentry/go-netrc v0.0.0-20140422174119-9fd32a8b3d3d/go.mod h1:6QX/PXZ00z/TKoufEY6K/a0k6AhaJrQKdFe6OfVXsa4=\ngithub.com/bgentry/speakeasy v0.1.0 h1:ByYyxL9InA1OWqxJqqp2A5pYHUrCiAL6K3J+LKSsQkY=\ngithub.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=\ngithub.com/bmatcuk/doublestar v1.1.5 h1:2bNwBOmhyFEFcoB3tGvTD5xanq+4kyOZlB8wFYbMjkk=\ngithub.com/bmatcuk/doublestar v1.1.5/go.mod h1:wiQtGV+rzVYxB7WIlirSN++5HPtPlXEo9MEoZQC/PmE=\ngithub.com/boltdb/bolt v1.3.1/go.mod h1:clJnj/oiGkjum5o1McbSZDSLxVThjynRyGBgiAx27Ps=\ngithub.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=\ngithub.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=\ngithub.com/cheggaaa/pb v1.0.27/go.mod h1:pQciLPpbU0oxA0h+VJYYLxO+XeDQb5pZijXscXHm81s=\ngithub.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=\ngithub.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=\ngithub.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=\ngithub.com/circonus-labs/circonus-gometrics v2.3.1+incompatible/go.mod h1:nmEj6Dob7S7YxXgwXpfOuvO54S+tGdZdw9fuRZt25Ag=\ngithub.com/circonus-labs/circonusllhist v0.1.3/go.mod h1:kMXHVDlOchFAehlya5ePtbp5jckzBHf4XRpQvBOLI+I=\ngithub.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=\ngithub.com/coreos/bbolt 
v1.3.0/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=\ngithub.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=\ngithub.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=\ngithub.com/coreos/go-systemd v0.0.0-20181012123002-c6f51f82210d/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=\ngithub.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=\ngithub.com/davecgh/go-spew v0.0.0-20151105211317-5215b55f46b2/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=\ngithub.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=\ngithub.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=\ngithub.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=\ngithub.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=\ngithub.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=\ngithub.com/dimchansky/utfbom v1.1.0/go.mod h1:rO41eb7gLfo8SF1jd9F8HplJm1Fewwi4mQvIirEdv+8=\ngithub.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4=\ngithub.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=\ngithub.com/docker/spdystream v0.0.0-20160310174837-449fdfce4d96/go.mod h1:Qh8CwZgvJUkLughtfhJv5dyTYa91l1fOUCrgjqmcifM=\ngithub.com/dylanmei/iso8601 v0.1.0/go.mod h1:w9KhXSgIyROl1DefbMYIE7UVSIvELTbMrCfx+QkYnoQ=\ngithub.com/dylanmei/winrmtest v0.0.0-20190225150635-99b7fe2fddf1/go.mod h1:lcy9/2gH1jn/VCLouHA6tOEwLoNVd4GW6zhuKLmHC2Y=\ngithub.com/elazarl/goproxy v0.0.0-20170405201442-c4fc26588b6e/go.mod h1:/Zj4wYkgs4iZTTu3o/KG3Itv/qCCa8VVMlb3i9OVuzc=\ngithub.com/emicklei/go-restful v0.0.0-20170410110728-ff4f55a20633/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=\ngithub.com/envoyproxy/go-control-plane 
v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=\ngithub.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=\ngithub.com/evanphx/json-patch v0.0.0-20190203023257-5858425f7550/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=\ngithub.com/evanphx/json-patch v4.2.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=\ngithub.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=\ngithub.com/fatih/color v1.13.0/go.mod h1:kLAiJbzzSOZDVNGyDpeOxJ47H46qBXwg5ILebYFFOfk=\ngithub.com/fatih/color v1.18.0 h1:S8gINlzdQ840/4pfAwic/ZE0djQEH3wM94VfqLTZcOM=\ngithub.com/fatih/color v1.18.0/go.mod h1:4FelSpRwEGDpQ12mAdzqdOukCy4u8WUtOY6lkT/6HfU=\ngithub.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=\ngithub.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=\ngithub.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=\ngithub.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=\ngithub.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=\ngithub.com/ghodss/yaml v0.0.0-20150909031657-73d445a93680/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=\ngithub.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=\ngithub.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=\ngithub.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=\ngithub.com/go-kit/log v0.1.0/go.mod h1:zbhenjAZHb184qTLMA9ZjW7ThYL0H2mk7Q6pNt4vbaY=\ngithub.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=\ngithub.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=\ngithub.com/go-logfmt/logfmt v0.5.0/go.mod h1:wCYkCAKZfumFQihp8CzCvQ3paCTfi41vtzG1KdI/P7A=\ngithub.com/go-logr/logr v0.1.0/go.mod 
h1:ixOQHD9gLJUVQQ2ZOR7zLEifBX6tGkNJF4QyIY7sIas=\ngithub.com/go-openapi/jsonpointer v0.0.0-20160704185906-46af16f9f7b1/go.mod h1:+35s3my2LFTysnkMfxsJBAMHj/DoqoB9knIWoYG/Vk0=\ngithub.com/go-openapi/jsonreference v0.0.0-20160704190145-13c6e3589ad9/go.mod h1:W3Z9FmVs9qj+KR4zFKmDPGiLdk1D9Rlm7cyMvf57TTg=\ngithub.com/go-openapi/spec v0.0.0-20160808142527-6aced65f8501/go.mod h1:J8+jY1nAiCcj+friV/PDoE1/3eeccG9LYBs0tYvLOWc=\ngithub.com/go-openapi/swag v0.0.0-20160704191624-1d0bd113de87/go.mod h1:DXUve3Dpr1UfpPtxFw+EFuQ41HhCWZfha5jSVRG7C7I=\ngithub.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg=\ngithub.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=\ngithub.com/go-test/deep v1.0.1/go.mod h1:wGDj63lr65AM2AQyKZd/NYHGb0R+1RLqB8NKt3aSFNA=\ngithub.com/go-test/deep v1.0.3 h1:ZrJSEWsXzPOxaZnFteGEfooLba+ju3FYIbOrS+rQd68=\ngithub.com/go-test/deep v1.0.3/go.mod h1:wGDj63lr65AM2AQyKZd/NYHGb0R+1RLqB8NKt3aSFNA=\ngithub.com/gofrs/uuid v3.2.0+incompatible/go.mod h1:b2aQJv3Z4Fp6yNu3cdSllBxTCLRxnplIgP/c0N/04lM=\ngithub.com/gofrs/uuid v3.3.0+incompatible/go.mod h1:b2aQJv3Z4Fp6yNu3cdSllBxTCLRxnplIgP/c0N/04lM=\ngithub.com/gogo/protobuf v0.0.0-20171007142547-342cbe0a0415/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=\ngithub.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=\ngithub.com/gogo/protobuf v1.2.2-0.20190723190241-65acae22fc9d/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=\ngithub.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=\ngithub.com/golang/groupcache v0.0.0-20160516000752-02826c3e7903/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=\ngithub.com/golang/groupcache v0.0.0-20180513044358-24b0969c4cb7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=\ngithub.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=\ngithub.com/golang/mock v1.2.0/go.mod 
h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=\ngithub.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y=\ngithub.com/golang/protobuf v0.0.0-20161109072736-4bd1920723d7/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=\ngithub.com/golang/protobuf v1.1.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=\ngithub.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=\ngithub.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=\ngithub.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=\ngithub.com/golang/protobuf v1.3.4/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=\ngithub.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=\ngithub.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=\ngithub.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=\ngithub.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=\ngithub.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=\ngithub.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=\ngithub.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=\ngithub.com/golang/snappy v0.0.0-20180518054509-2e65f85255db/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=\ngithub.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=\ngithub.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=\ngithub.com/google/btree v1.1.2 h1:xf4v41cLI2Z6FxbKm+8Bu+m8ifhj15JuZ9sa0jZCMUU=\ngithub.com/google/btree v1.1.2/go.mod h1:qOPhT0dTNdNzV6Z/lhRX0YXUafgPLFUh+gZMl761Gm4=\ngithub.com/google/go-cmp v0.2.0/go.mod 
h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=\ngithub.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=\ngithub.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=\ngithub.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=\ngithub.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=\ngithub.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=\ngithub.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=\ngithub.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=\ngithub.com/google/go-querystring v1.0.0/go.mod h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck=\ngithub.com/google/gofuzz v0.0.0-20161122191042-44d81051d367/go.mod h1:HP5RmnzzSNb993RKQDq4+1A4ia9nllfqcQFTQJedwGI=\ngithub.com/google/gofuzz v0.0.0-20170612174753-24818f796faf/go.mod h1:HP5RmnzzSNb993RKQDq4+1A4ia9nllfqcQFTQJedwGI=\ngithub.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=\ngithub.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=\ngithub.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=\ngithub.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=\ngithub.com/google/uuid v1.0.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=\ngithub.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=\ngithub.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=\ngithub.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=\ngithub.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=\ngithub.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=\ngithub.com/googleapis/gax-go/v2 v2.0.5/go.mod 
h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=\ngithub.com/googleapis/gnostic v0.0.0-20170729233727-0c5108395e2d/go.mod h1:sJBsCZ4ayReDTBIg8b9dl28c5xFWyhBTVRp3pOg5EKY=\ngithub.com/gophercloud/gophercloud v0.6.1-0.20191122030953-d8ac278c1c9d/go.mod h1:ozGNgr9KYOVATV5jsgHl/ceCDXGuguqOZAzoQ/2vcNM=\ngithub.com/gophercloud/gophercloud v0.10.1-0.20200424014253-c3bfe50899e5/go.mod h1:gmC5oQqMDOMO1t1gq5DquX/yAU808e/4mzjjDA76+Ss=\ngithub.com/gophercloud/utils v0.0.0-20200423144003-7c72efc7435d/go.mod h1:ehWUbLQJPqS0Ep+CxeD559hsm9pthPXadJNKwZkp43w=\ngithub.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=\ngithub.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=\ngithub.com/gorilla/websocket v1.5.3 h1:saDtZ6Pbx/0u+bgYQ3q96pZgCzfhKXGPqt7kZ72aNNg=\ngithub.com/gorilla/websocket v1.5.3/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=\ngithub.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=\ngithub.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=\ngithub.com/grpc-ecosystem/grpc-gateway v1.8.5/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=\ngithub.com/hashicorp/aws-sdk-go-base v0.6.0/go.mod h1:2fRjWDv3jJBeN6mVWFHV6hFTNeFBx2gpDLQaZNxUVAY=\ngithub.com/hashicorp/consul v0.0.0-20171026175957-610f3c86a089 h1:1eDpXAxTh0iPv+1kc9/gfSI2pxRERDsTk/lNGolwHn8=\ngithub.com/hashicorp/consul v0.0.0-20171026175957-610f3c86a089/go.mod h1:mFrjN1mfidgJfYP1xrJCF+AfRhr6Eaqhb2+sfyn/OOI=\ngithub.com/hashicorp/consul/api v1.32.1 h1:0+osr/3t/aZNAdJX558crU3PEjVrG4x6715aZHRgceE=\ngithub.com/hashicorp/consul/api v1.32.1/go.mod h1:mXUWLnxftwTmDv4W3lzxYCPD199iNLLUyLfLGFJbtl4=\ngithub.com/hashicorp/consul/sdk v0.16.2 h1:cGX/djeEe9r087ARiKVWwVWCF64J+yW0G6ftZMZYbj0=\ngithub.com/hashicorp/consul/sdk v0.16.2/go.mod h1:onxcZjYVsPx5XMveAC/OtoIsdr32fykB7INFltDoRE8=\ngithub.com/hashicorp/cronexpr v1.1.2 
h1:wG/ZYIKT+RT3QkOdgYc+xsKWVRgnxJ1OJtjjy84fJ9A=
github.com/hashicorp/cronexpr v1.1.2/go.mod h1:P4wA0KBl9C5q2hABiMO7cp6jcIg96CDh1Efb3g1PWA4=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-azure-helpers v0.12.0/go.mod h1:Zc3v4DNeX6PDdy7NljlYpnrdac1++qNW0I4U+ofGwpg=
github.com/hashicorp/go-checkpoint v0.5.0/go.mod h1:7nfLNL10NsxqO4iWuW6tWW0HjZuDrwkBuEQsVcpCOgg=
github.com/hashicorp/go-cleanhttp v0.5.0/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
github.com/hashicorp/go-cleanhttp v0.5.1/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
github.com/hashicorp/go-cleanhttp v0.5.2 h1:035FKYIWjmULyFRBKPs8TBQoi0x6d9G4xc9neXJWAZQ=
github.com/hashicorp/go-cleanhttp v0.5.2/go.mod h1:kO/YDlP8L1346E6Sodw+PrpBSV4/SoxCXGY6BqNFT48=
github.com/hashicorp/go-cty-funcs v0.0.0-20200930094925-2721b1e36840 h1:kgvybwEeu0SXktbB2y3uLHX9lklLo+nzUwh59A3jzQc=
github.com/hashicorp/go-cty-funcs v0.0.0-20200930094925-2721b1e36840/go.mod h1:Abjk0jbRkDaNCzsRhOv2iDCofYpX1eVsjozoiK63qLA=
github.com/hashicorp/go-getter v1.4.2-0.20200106182914-9813cbd4eb02/go.mod h1:7qxyCd8rBfcShwsvxgIguu4KbS3l8bUCwg2Umn7RjeY=
github.com/hashicorp/go-hclog v0.0.0-20180709165350-ff2cf002a8dd/go.mod h1:9bjs9uLqI8l75knNv3lV1kA55veR+WUPSiKIWcQHudI=
github.com/hashicorp/go-hclog v0.9.2/go.mod h1:5CU+agLiy3J7N7QjHK5d05KxGsuXiQLrjA0H7acj2lQ=
github.com/hashicorp/go-hclog v1.6.3 h1:Qr2kF+eVWjTiYmU7Y31tYlP1h0q/X3Nl3tPGdaB11/k=
github.com/hashicorp/go-hclog v1.6.3/go.mod h1:W4Qnvbt70Wk/zYJryRzDRU/4r0kIg0PVHBcfoyhpF5M=
github.com/hashicorp/go-immutable-radix v0.0.0-20180129170900-7f3cd4390caa/go.mod h1:6ij3Z20p+OhOkCSrA0gImAWoHYQRGbnlcuk6XYTiaRw=
github.com/hashicorp/go-immutable-radix v1.0.0/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
github.com/hashicorp/go-immutable-radix v1.3.1 h1:DKHmCUm2hRBK510BaiZlwvpD40f8bJFeZnpfm2KLowc=
github.com/hashicorp/go-immutable-radix v1.3.1/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
github.com/hashicorp/go-metrics v0.5.4 h1:8mmPiIJkTPPEbAiV97IxdAGNdRdaWwVap1BU6elejKY=
github.com/hashicorp/go-metrics v0.5.4/go.mod h1:CG5yz4NZ/AI/aQt9Ucm/vdBnbh7fvmv4lxZ350i+QQI=
github.com/hashicorp/go-msgpack v0.5.4/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM=
github.com/hashicorp/go-msgpack v1.1.6-0.20240304204939-8824e8ccc35f h1:/xqzTen8ftnKv3cKa87WEoOLtsDJYFU0ArjrKaPTTkc=
github.com/hashicorp/go-msgpack/v2 v2.1.3 h1:cB1w4Zrk0O3jQBTcFMKqYQWRFfsSQ/TYKNyUUVyCP2c=
github.com/hashicorp/go-msgpack/v2 v2.1.3/go.mod h1:SjlwKKFnwBXvxD/I1bEcfJIBbEJ+MCUn39TxymNR5ZU=
github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk=
github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
github.com/hashicorp/go-plugin v1.3.0/go.mod h1:F9eH4LrE/ZsRdbwhfjs9k9HoDUwAHnYtXdgmf1AVNs0=
github.com/hashicorp/go-retryablehttp v0.5.2/go.mod h1:9B5zBasrRhHXnJnui7y6sL7es7NDiJgTc6Er0maI1Xs=
github.com/hashicorp/go-retryablehttp v0.5.3/go.mod h1:9B5zBasrRhHXnJnui7y6sL7es7NDiJgTc6Er0maI1Xs=
github.com/hashicorp/go-retryablehttp v0.7.7 h1:C8hUCYzor8PIfXHa4UrZkU4VvK8o9ISHxT2Q8+VepXU=
github.com/hashicorp/go-retryablehttp v0.7.7/go.mod h1:pkQpWZeYWskR+D1tR2O5OcBFOxfA7DoAO6xtkuQnHTk=
github.com/hashicorp/go-rootcerts v1.0.0/go.mod h1:K6zTfqpRlCUIjkwsN4Z+hiSfzSTQa6eBIzfwKfwNnHU=
github.com/hashicorp/go-rootcerts v1.0.2 h1:jzhAVGtqPKbwpyCPELlgNWhE1znq+qwJtW5Oi2viEzc=
github.com/hashicorp/go-rootcerts v1.0.2/go.mod h1:pqUvnprVnM5bf7AOirdbb01K4ccR319Vf4pU3K5EGc8=
github.com/hashicorp/go-safetemp v1.0.0/go.mod h1:oaerMy3BhqiTbVye6QuFhFtIceqFoDHxNAB65b+Rj1I=
github.com/hashicorp/go-slug v0.4.1/go.mod h1:I5tq5Lv0E2xcNXNkmx7BSfzi1PsJ2cNjs3cC3LwyhK8=
github.com/hashicorp/go-sockaddr v0.0.0-20180320115054-6d291a969b86/go.mod h1:7Xibr9yA9JjQq1JpNB2Vw7kxv8xerXegt+ozgdvDeDU=
github.com/hashicorp/go-sockaddr v1.0.7 h1:G+pTkSO01HpR5qCxg7lxfsFEZaG+C0VssTy/9dbT+Fw=
github.com/hashicorp/go-sockaddr v1.0.7/go.mod h1:FZQbEYa1pxkQ7WLpyXJ6cbjpT8q0YgQaK/JakXqGyWw=
github.com/hashicorp/go-tfe v0.8.1/go.mod h1:XAV72S4O1iP8BDaqiaPLmL2B4EE6almocnOn8E8stHc=
github.com/hashicorp/go-uuid v1.0.0/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/go-uuid v1.0.1/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/go-uuid v1.0.3 h1:2gKiV6YVmrJ1i2CKKa9obLvRieoRGviZFL26PcT/Co8=
github.com/hashicorp/go-uuid v1.0.3/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/go-version v1.0.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
github.com/hashicorp/go-version v1.1.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
github.com/hashicorp/go-version v1.2.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
github.com/hashicorp/go-version v1.7.0 h1:5tqGy27NaOTB8yJKUZELlFAS/LTKJkrmONwQKeRZfjY=
github.com/hashicorp/go-version v1.7.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v1.0.2 h1:dV3g9Z/unq5DpblPpw+Oqcv4dU/1omnb4Ok8iPY6p1c=
github.com/hashicorp/golang-lru v1.0.2/go.mod h1:iADmTwqILo4mZ8BN3D2Q6+9jd8WM5uGBxy+E8yxSoD4=
github.com/hashicorp/hcl v0.0.0-20170504190234-a4b07c25de5f/go.mod h1:oZtUIOe8dh44I2q6ScRibXws4Ajl+d+nod3AaR9vL5w=
github.com/hashicorp/hcl/v2 v2.0.0/go.mod h1:oVVDG71tEinNGYCxinCYadcmKU9bglqW9pV3txagJ90=
github.com/hashicorp/hcl/v2 v2.6.0/go.mod h1:bQTN5mpo+jewjJgh8jr0JUguIi7qPHUF6yIfAEN3jqY=
github.com/hashicorp/hcl/v2 v2.20.2-0.20240517235513-55d9c02d147d h1:7abftkc86B+tlA/0cDy5f6C4LgWfFOCpsGg3RJZsfbw=
github.com/hashicorp/hcl/v2 v2.20.2-0.20240517235513-55d9c02d147d/go.mod h1:62ZYHrXgPoX8xBnzl8QzbWq4dyDsDtfCRgIq1rbJEvA=
github.com/hashicorp/hil v0.0.0-20190212112733-ab17b08d6590/go.mod h1:n2TSygSNwsLJ76m8qFXTSc7beTb+auJxYdqrnoqwZWE=
github.com/hashicorp/hil v0.0.0-20210521165536-27a72121fd40 h1:ExwaL+hUy1ys2AWDbsbh/lxQS2EVCYxuj0LoyLTdB3Y=
github.com/hashicorp/hil v0.0.0-20210521165536-27a72121fd40/go.mod h1:n2TSygSNwsLJ76m8qFXTSc7beTb+auJxYdqrnoqwZWE=
github.com/hashicorp/memberlist v0.1.0/go.mod h1:ncdBp14cuox2iFOq3kDiquKU6fqsTBc3W6JvZwjxxsE=
github.com/hashicorp/memberlist v0.5.3 h1:tQ1jOCypD0WvMemw/ZhhtH+PWpzcftQvgCorLu0hndk=
github.com/hashicorp/memberlist v0.5.3/go.mod h1:h60o12SZn/ua/j0B6iKAZezA4eDaGsIuPO70eOaJ6WE=
github.com/hashicorp/nomad v1.10.2 h1:0yKNxoCgBGWIeDOcYBzXXdLh6JBsuF7nO9td64flC/s=
github.com/hashicorp/nomad v1.10.2/go.mod h1:S/FXen41m3/BPsKEXfatDxmGJdp8lMhWg9FLBbTKsz4=
github.com/hashicorp/nomad/api v0.0.0-20250620152331-1030760d3f77 h1:5PhbqQJDNugTEhPbA/0uVdK21SKhgkfMmRifTICCqIw=
github.com/hashicorp/nomad/api v0.0.0-20250620152331-1030760d3f77/go.mod h1:y4olHzVXiQolzyk6QD/gqJxQTnnchlTf/QtczFFKwOI=
github.com/hashicorp/serf v0.0.0-20160124182025-e4ec8cc423bb/go.mod h1:h/Ru6tmZazX7WO/GDmwdpS975F019L4t5ng5IgwbNrE=
github.com/hashicorp/serf v0.10.2 h1:m5IORhuNSjaxeljg5DeQVDlQyVkhRIjJDimbkCa8aAc=
github.com/hashicorp/serf v0.10.2/go.mod h1:T1CmSGfSeGfnfNy/w0odXQUR1rfECGd2Qdsp84DjOiY=
github.com/hashicorp/terraform v0.13.5 h1:QgPgb/pOBclUMymDji9255FlseySf9dRcUMzJws9BAQ=
github.com/hashicorp/terraform v0.13.5/go.mod h1:1H1qcnppNc/bBGc7poOfnmmBeQMlF0stEN3haY3emCU=
github.com/hashicorp/terraform-config-inspect v0.0.0-20191212124732-c6ae6269b9d7/go.mod h1:p+ivJws3dpqbp1iP84+npOyAmTTOLMgCzrXd3GSdn/A=
github.com/hashicorp/terraform-svchost v0.0.0-20191011084731-65d371908596 h1:hjyO2JsNZUKT1ym+FAdlBEkGPevazYsmVgIMw7dVELg=
github.com/hashicorp/terraform-svchost v0.0.0-20191011084731-65d371908596/go.mod h1:kNDNcF7sN4DocDLBkQYz73HGKwN1ANB1blq4lIYLYvg=
github.com/hashicorp/vault v0.10.4/go.mod h1:KfSyffbKxoVyspOdlaGVjIuwLobi07qD1bAbosPMpP0=
github.com/hashicorp/yamux v0.0.0-20180604194846-3520598351bb/go.mod h1:+NfK9FKeTrX5uv1uIXGdwYDTeHna2qgaIlx54MXqjAM=
github.com/hashicorp/yamux v0.0.0-20181012175058-2f1d1f20f75d/go.mod h1:+NfK9FKeTrX5uv1uIXGdwYDTeHna2qgaIlx54MXqjAM=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/huandu/xstrings v1.3.1/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE=
github.com/huandu/xstrings v1.3.2/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE=
github.com/huandu/xstrings v1.5.0 h1:2ag3IFq9ZDANvthTwTiqSSZLjDc+BedvHPAp5tJy2TI=
github.com/huandu/xstrings v1.5.0/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE=
github.com/imdario/mergo v0.3.9/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/imdario/mergo v0.3.11/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
github.com/jhump/protoreflect v1.6.0/go.mod h1:eaTn3RZAmMBcV0fifFvlm6VHNz3wSkYyXYWUh7ymB74=
github.com/jmespath/go-jmespath v0.0.0-20160202185014-0b12d6b521d8/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
github.com/jmespath/go-jmespath v0.3.0/go.mod h1:9QtRXoHjLGCJ5IBSaohpXITPlowMeeYCZ7fLUTSywik=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/joyent/triton-go v0.0.0-20180313100802-d8f9c0314926/go.mod h1:U+RSyWxWd04xTqnuOQxnai7XGS2PrPY2cfGoDKtMHjA=
github.com/jpillora/backoff v1.0.0/go.mod h1:J/6gKK9jxlEcS3zixgDgUAsiuZ7yrSoa/FX5e0EB2j4=
github.com/json-iterator/go v0.0.0-20180612202835-f2b4162afba3/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v0.0.0-20180701071628-ab8a2e0c74be/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.7/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.9/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.11/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
github.com/jtolds/gls v4.2.1+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
github.com/kardianos/osext v0.0.0-20190222173326-2bc1f35cddc0/go.mod h1:1NbS8ALrpOvjt0rHPNLyCIeMtbizbir8U//inJ+zuB8=
github.com/keybase/go-crypto v0.0.0-20161004153544-93f5b35093ba/go.mod h1:ghbZscTyKdM07+Fw3KSi0hcJm+AlEUWj8QLlPtijN/M=
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kylelemons/godebug v0.0.0-20170820004349-d65d576e9348/go.mod h1:B69LEHPfb2qLo0BaaOLcbitczOKLWTsrBG9LczfCD4k=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/lib/pq v1.0.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/likexian/gokit v0.0.0-20190309162924-0a377eecf7aa/go.mod h1:QdfYv6y6qPA9pbBA2qXtoT8BMKha6UyNbxWGWl/9Jfk=
github.com/likexian/gokit v0.0.0-20190418170008-ace88ad0983b/go.mod h1:KKqSnk/VVSW8kEyO2vVCXoanzEutKdlBAPohmGXkxCk=
github.com/likexian/gokit v0.0.0-20190501133040-e77ea8b19cdc/go.mod h1:3kvONayqCaj+UgrRZGpgfXzHdMYCAO0KAt4/8n0L57Y=
github.com/likexian/gokit v0.20.15/go.mod h1:kn+nTv3tqh6yhor9BC4Lfiu58SmH8NmQ2PmEl+uM6nU=
github.com/likexian/simplejson-go v0.0.0-20190409170913-40473a74d76d/go.mod h1:Typ1BfnATYtZ/+/shXfFYLrovhFyuKvzwrdOnIDHlmg=
github.com/likexian/simplejson-go v0.0.0-20190419151922-c1f9f0b4f084/go.mod h1:U4O1vIJvIKwbMZKUJ62lppfdvkCdVd2nfMimHK81eec=
github.com/likexian/simplejson-go v0.0.0-20190502021454-d8787b4bfa0b/go.mod h1:3BWwtmKP9cXWwYCr5bkoVDEfLywacOv0s06OBEDpyt8=
github.com/lusis/go-artifactory v0.0.0-20160115162124-7e4ce345df82/go.mod h1:y54tfGmO3NKssKveTEFFzH8C/akrSOy/iW9qEAUDV84=
github.com/mailru/easyjson v0.0.0-20160728113105-d5b7844b561a/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/masterzen/simplexml v0.0.0-20160608183007-4572e39b1ab9/go.mod h1:kCEbxUJlNDEBNbdQMkPSp6yaKcRXVI6f4ddk8Riv4bc=
github.com/masterzen/simplexml v0.0.0-20190410153822-31eea3082786/go.mod h1:kCEbxUJlNDEBNbdQMkPSp6yaKcRXVI6f4ddk8Riv4bc=
github.com/masterzen/winrm v0.0.0-20200615185753-c42b5136ff88/go.mod h1:a2HXwefeat3evJHxFXSayvRHpYEPJYtErl4uIzfaUqY=
github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
github.com/mattn/go-colorable v0.1.1/go.mod h1:FuOcm+DKB9mbwrcAfNl7/TZVBZ6rcnceauSikq3lYCQ=
github.com/mattn/go-colorable v0.1.9/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-colorable v0.1.12/go.mod h1:u5H1YNBxpqRaxsYJYSkiCWKzEfiAb1Gb520KVy5xxl4=
github.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE=
github.com/mattn/go-colorable v0.1.14/go.mod h1:6LmQG8QLFO4G5z1gPvYEzlUgJ2wF+stgPZH1UqBm1s8=
github.com/mattn/go-isatty v0.0.3/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-isatty v0.0.5/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-runewidth v0.0.4/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
github.com/mattn/go-shellwords v1.0.4/go.mod h1:3xCvwCdWdlDJUrvuMn7Wuy9eWs4pE8vqg+NOMyg4B2o=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/miekg/dns v1.0.8/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
github.com/miekg/dns v1.1.66 h1:FeZXOS3VCVsKnEAd+wBkjMC3D2K+ww66Cq3VnCINuJE=
github.com/miekg/dns v1.1.66/go.mod h1:jGFzBsSNbJw6z1HYut1RKBKHA9PBdxeHrZG8J+gC2WE=
github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc=
github.com/mitchellh/cli v1.1.5 h1:OxRIeJXpAMztws/XHlN2vu6imG5Dpq+j61AzAX5fLng=
github.com/mitchellh/cli v1.1.5/go.mod h1:v8+iFts2sPIKUV1ltktPXMCC8fumSKFItNcD2cLtRR4=
github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db/go.mod h1:l0dey0ia/Uv7NcFFVbCLtqEBQbrT4OCwCSKTEv6enCw=
github.com/mitchellh/copystructure v1.0.0/go.mod h1:SNtv71yrdKgLRyLFxmLdkAbkKEFWgYaq1OVrnRcwhnw=
github.com/mitchellh/copystructure v1.2.0 h1:vpKXTN4ewci03Vljg/q9QvCGUDttBOGBIa15WveJJGw=
github.com/mitchellh/copystructure v1.2.0/go.mod h1:qLl+cE2AmVv+CoeAwDPye/v+N2HKCj9FbZEVFJRxO9s=
github.com/mitchellh/go-homedir v1.0.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/go-linereader v0.0.0-20190213213312-1b945b3263eb/go.mod h1:OaY7UOoTkkrX3wRwjpYRKafIkkyeD0UtweSHAWWiqQM=
github.com/mitchellh/go-testing-interface v0.0.0-20171004221916-a61a99592b77/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI=
github.com/mitchellh/go-testing-interface v1.0.0/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI=
github.com/mitchellh/go-wordwrap v0.0.0-20150314170334-ad45545899c7/go.mod h1:ZXFpozHsX6DPmq2I0TCekCxypsnAUbP2oI0UX1GXzOo=
github.com/mitchellh/go-wordwrap v1.0.0/go.mod h1:ZXFpozHsX6DPmq2I0TCekCxypsnAUbP2oI0UX1GXzOo=
github.com/mitchellh/go-wordwrap v1.0.1 h1:TLuKupo69TCn6TQSyGxwI1EblZZEsQ0vMlAFQflz0v0=
github.com/mitchellh/go-wordwrap v1.0.1/go.mod h1:R62XHJLzvMFRBbcrT7m7WgmE1eOyTSsCt+hzestvNj0=
github.com/mitchellh/gox v1.0.1/go.mod h1:ED6BioOGXMswlXa2zxfh/xdd5QhwYliBFn9V18Ap4z4=
github.com/mitchellh/iochan v1.0.0/go.mod h1:JwYml1nuB7xOzsp52dPpHFffvOCDupsG0QubkSMEySY=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY=
github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/mitchellh/panicwrap v1.0.0/go.mod h1:pKvZHwWrZowLUzftuFq7coarnxbBXU4aQh3N0BJOeeA=
github.com/mitchellh/prefixedio v0.0.0-20190213213902-5733675afd51/go.mod h1:kB1naBgV9ORnkiTVeyJOI1DavaJkG4oNIq0Af6ZVKUo=
github.com/mitchellh/reflectwalk v1.0.0/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw=
github.com/mitchellh/reflectwalk v1.0.1/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw=
github.com/mitchellh/reflectwalk v1.0.2 h1:G2LzWKi524PWgd3mLHV8Y5k7s6XUvT0Gef6zxSIeXaQ=
github.com/mitchellh/reflectwalk v1.0.2/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v0.0.0-20180320133207-05fbef0ca5da/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/mozillazg/go-httpheader v0.2.1/go.mod h1:jJ8xECTlalr6ValeXYdOF8fFUISeBAdw6E61aqQma60=
github.com/munnerz/goautoneg v0.0.0-20120707110453-a547fc61f48d/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw=
github.com/nu7hatch/gouuid v0.0.0-20131221200532-179d4d0c4d8d/go.mod h1:YUTz3bUH2ZwIWBy3CJBeOBEugqcmXREj14T+iG/4k4U=
github.com/oklog/run v1.0.0/go.mod h1:dlhp/R75TPv97u0XWUtDeV/lRKWPKSdTuV0TZvrmrQA=
github.com/onsi/ginkgo v0.0.0-20170829012221-11459a886d9c/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.8.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/gomega v0.0.0-20170829124025-dcabb60a477c/go.mod h1:C1qb7wdrVGGVU+Z6iS04AVkA3Q65CEZX59MT0QO5uiA=
github.com/onsi/gomega v0.0.0-20190113212917-5533ce8a0da3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.5.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/packer-community/winrmcp v0.0.0-20180921211025-c76d91c1e7db/go.mod h1:f6Izs6JvFTdnRbziASagjZ2vmf55NSIkC/weStxCHqk=
github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/pascaldekloe/goe v0.1.0 h1:cBOtyMzM9HTpWjXfbbunk26uA6nG3a8n06Wieeh0MwY=
github.com/pascaldekloe/goe v0.1.0/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/pkg/browser v0.0.0-20180916011732-0a3d74bf9ce4/go.mod h1:4OwLy04Bl9Ef3GJJCoec+30X3LQs/0/m4HFRt/2LUSA=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v0.0.0-20151028094244-d8ed2627bdf0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI=
github.com/posener/complete v1.2.1/go.mod h1:6gapUrK/U1TAN7ciCoNRIdVC5sbdBTUh1DKN0g6uH7E=
github.com/posener/complete v1.2.3 h1:NP0eAhjcjImqslEwo/1hq7gpajME0fTLTezBKDqfXqo=
github.com/posener/complete v1.2.3/go.mod h1:WZIdtGGp+qx0sLrYKtIRAruyNpv6hFCicSgv7Sy7s/s=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.3-0.20190127221311-3c4408c8b829/go.mod h1:p2iRAGwDERtqlqzRXnrOVns+ignqQo//hLXqYxZYVNs=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
github.com/prometheus/client_golang v1.4.0/go.mod h1:e9GMxYsXl05ICDXkRhurwBS4Q3OK1iX/F2sw+iXX5zU=
github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
github.com/prometheus/client_golang v1.11.1/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190115171406-56726106282f/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/common v0.2.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.9.1/go.mod h1:yhUN8i9wzaXS3w1O07YhxHEBxD+W35wd8bs7vj7HSQ4=
github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo=
github.com/prometheus/common v0.26.0/go.mod h1:M7rCNAaPfAosfx8veZJCuw84e35h3Cfd9VFqTh1DIvc=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190117184657-bf6a532e95b1/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.8/go.mod h1:7Qr8sr6344vo1JqZ6HhLceV9o3AJ1Ff+GxbHq6oeK9A=
github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rogpeppe/go-internal v1.13.1 h1:KvO1DLK/DRN07sQ1LQKScxyZJuNnedQ5/wKSR38lUII=
github.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o=
github.com/rs/zerolog v1.6.0 h1:MIbb0f94AuCzp0f4qdwfbi20VM8JcjoLbahiBHJEqwY=
github.com/rs/zerolog v1.6.0/go.mod h1:YbFCdg8HfsridGWAh22vktObvhZbQsZXe4/zB0OKkWU=
github.com/satori/go.uuid v1.2.0/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0=
github.com/sean-/conswriter v0.0.0-20180208195008-f5ae3917a627 h1:Tn2Iev07a4oOcAuFna8AJxDOF/M+6OkNbpEZLX30D6M=
github.com/sean-/conswriter v0.0.0-20180208195008-f5ae3917a627/go.mod h1:7zjs06qF79/FKAJpBvFx3P8Ww4UTIMAe+lpNXDHziac=
github.com/sean-/pager v0.0.0-20180208200047-666be9bf53b5 h1:D07EBYJLI26GmLRKNtrs47p8vs/5QqpUX3VcwsAPkEo=
github.com/sean-/pager v0.0.0-20180208200047-666be9bf53b5/go.mod h1:BeybITEsBEg6qbIiqJ6/Bqeq25bCLbL7YFmpaFfJDuM=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529 h1:nn5Wsu0esKSJiIVhscUtVbo7ada43DJhG55ua/hjS5I=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo=
github.com/shoenig/test v1.12.1 h1:mLHfnMv7gmhhP44WrvT+nKSxKkPDiNkIuHGdIGI9RLU=
github.com/shoenig/test v1.12.1/go.mod h1:UxJ6u/x2v/TNs/LoLxBNJRV9DiwBBKYxXSyczsBHFoI=
github.com/shopspring/decimal v1.2.0/go.mod h1:DKyhrW/HYNuLGql+MJL6WCR6knT2jwCFRcu2hWCYk4o=
github.com/shopspring/decimal v1.4.0 h1:bxl37RwXBklmTi0C79JfXCEBD1cqqHt0bbgBAGFp81k=
github.com/shopspring/decimal v1.4.0/go.mod h1:gawqmDU56v4yIKSwfBSFip1HdCCXN8/+DMd9qYNcwME=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/goconvey v0.0.0-20180222194500-ef6db91d284a/go.mod h1:XDJAKZRPZ1CvBcN2aX5YOUTYGHki24fSF0Iv48Ibg0s=
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
github.com/spf13/afero v1.2.2 h1:5jhuqJyZCZf2JRofRvN/nIFgIWNzPa3/Vz8mYylgbWc=
github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cast v1.7.0 h1:ntdiHjuueXFgm5nzDRdOS4yfT43P5Fnud6DH50rz/7w=
github.com/spf13/cast v1.7.0/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo=
github.com/spf13/pflag v0.0.0-20170130214245-9ff6c6923cff/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.1/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.2/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY=
github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
github.com/stretchr/testify v0.0.0-20151208002404-e3a8ff8ce365/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.2/go.mod h1:R6va5+xMeoiuVRoj+gSkQ7d3FALtqAAGI1FQKckRals=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/svanharmelen/jsonapi v0.0.0-20180618144545-0c0828c3f16d/go.mod h1:BSTlc8jOjh0niykqEGVXOLXdi9o0r0kR8tCYiMvjFgw=
github.com/tencentcloud/tencentcloud-sdk-go v3.0.82+incompatible/go.mod h1:0PfYow01SHPMhKY31xa+EFz2RStxIqj6JFAJS+IkCi4=
github.com/tencentyun/cos-go-sdk-v5 v0.0.0-20190808065407-f07404cefc8c/go.mod h1:wk2XFUg6egk4tSDNZtXeKfe2G6690UVyt163PuUxBZk=
github.com/tmc/grpc-websocket-proxy v0.0.0-20171017195756-830351dc03c6/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/tombuildsstuff/giovanni v0.12.0/go.mod h1:qJ5dpiYWkRsuOSXO8wHbee7+wElkLNfWVolcf59N84E=
github.com/tv42/httpunix v0.0.0-20150427012821-b75d8614f926/go.mod h1:9ESjWnEqriFuLhtthL60Sar/7RFoluCcXsuvEwTV5KM=
github.com/ugorji/go v0.0.0-20180813092308-00b869d2f4a5/go.mod h1:hnLbHMwcvSihnDhEfx2/BzKp2xb0Y+ErdfYcrs9tkJQ=
github.com/ulikunitz/xz v0.5.5/go.mod h1:2bypXElzHzzJZwzH67Y6wb67pO62Rzfn7BSiF4ABRW8=
github.com/vmihailenco/msgpack v3.3.3+incompatible/go.mod h1:fy3FlTQTDXWkZ7Bh6AcGMlsjHatGryHQYUTf1ShIgkk=
github.com/vmihailenco/msgpack v4.0.1+incompatible/go.mod h1:fy3FlTQTDXWkZ7Bh6AcGMlsjHatGryHQYUTf1ShIgkk=
github.com/xanzy/ssh-agent v0.2.1/go.mod h1:mLlQY/MoOhWBj+gOGMQkOeiEvkx+8pJSI+0Bx9h2kr4=
github.com/xiang90/probing v0.0.0-20160813154853-07dd2e8dfe18/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/xlab/treeprint v0.0.0-20161029104018-1d6e34225557/go.mod h1:ce1O1j6UtZfjr22oyGxGLbauSBp2YVXpARAosm7dHBg=
github.com/zclconf/go-cty v1.0.0/go.mod h1:xnAOWiHeOqg2nWS62VtQ7pbOu17FtxJNW8RLEih+O3s=
github.com/zclconf/go-cty v1.1.0/go.mod h1:xnAOWiHeOqg2nWS62VtQ7pbOu17FtxJNW8RLEih+O3s=
github.com/zclconf/go-cty v1.2.0/go.mod h1:hOPWgoHbaTUnI5k4D2ld+GRpFJSCe6bCM7m1q/N4PQ8=
github.com/zclconf/go-cty v1.4.0/go.mod h1:nHzOclRkoj++EU9ZjSrZvRG0BXIWt8c7loYc0qXAFGQ=
github.com/zclconf/go-cty v1.5.1/go.mod h1:nHzOclRkoj++EU9ZjSrZvRG0BXIWt8c7loYc0qXAFGQ=
github.com/zclconf/go-cty v1.16.3 h1:osr++gw2T61A8KVYHoQiFbFd1Lh3JOCXc/jFLJXKTxk=
github.com/zclconf/go-cty v1.16.3/go.mod h1:VvMs5i0vgZdhYawQNq5kePSpLAoz8u1xvZgrPIxfnZE=
github.com/zclconf/go-cty-debug v0.0.0-20240509010212-0d6042c53940 h1:4r45xpDWB6ZMSMNJFMOjqrGHynW3DIBuR2H9j0ug+Mo=
github.com/zclconf/go-cty-debug v0.0.0-20240509010212-0d6042c53940/go.mod h1:CmBdvvj3nqzfzJ6nTCIwDTPZ56aVGvDrmztiO5g3qrM=
github.com/zclconf/go-cty-yaml v1.0.2/go.mod h1:IP3Ylp0wQpYm50IHK8OZWKMu6sPJIUgKa8XhiVHura0=
github.com/zclconf/go-cty-yaml v1.1.0 h1:nP+jp0qPHv2IhUVqmQSzjvqAWcObN0KBkUl2rWBdig0=
github.com/zclconf/go-cty-yaml v1.1.0/go.mod h1:9YLUH4g7lOhVWqUbctnVlZ5KLpg7JAprQNgxSZ1Gyxs=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
go.uber.org/atomic v1.3.2/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/zap v1.9.1/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190219172222-a4c6cb3142f2/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190222235706-ffb98f73852f/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190426145343-a29dc8fdc734/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191202143827-86a70503ff7e/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200414173820-0848c9571904/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200422194213-44a606286825/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200820211705-5c72a883971a/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.38.0 h1:jt+WWG8IZlBnVbomuhg2Mdq0+BBQaHbtqHEFEigjUV8=
golang.org/x/crypto v0.38.0/go.mod h1:MvrbAqul58NNYPKnOra203SB9vpuZW0e+RRZV+Ggqjw=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
golang.org/x/exp v0.0.0-20250506013437-ce4c2cf36ca6 h1:y5zboxd6LQAqYIhHnB48p0ByQ/GnQx2BE33L8BOHQkI=
golang.org/x/exp v0.0.0-20250506013437-ce4c2cf36ca6/go.mod h1:U6Lno4MTRCDY+Ba7aCcauB9T60gsv5s4ralQzP72ZoQ=
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.25.0 h1:n7a+ZbQKQA/Ysbyb0/6IbB1H/X41mKgbhfv7AfG/44w=
golang.org/x/mod v0.25.0/go.mod h1:IXM97Txy2VM4PJ3gI61r1YEk/gAj6zAHN3AdZt6S9Ww=
golang.org/x/net v0.0.0-20170114055629-f2499483f923/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180530234432-1e491301e022/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180811021610-c39426892332/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190125091013-d26f9f9a57f3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190206173232-65e2d4e15006/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190812203447-cdfb69ac37fc/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20191009170851-d66e71096ffb/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20191126235420-ef20fe5d7933/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200202094626-16171245cfb2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200602114024-627f9648deb9/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.40.0 h1:79Xs7wF06Gbdcg4kdCCIQArK11Z1hr5POQ6+fIYHNuY=
golang.org/x/net v0.40.0/go.mod h1:y0hY0exeL2Pku80/zKK7tpntoX23cqL3Oa6njdgRtds=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.30.0 h1:dnDm7JmhM45NNpd8FDDeLhK6FwqbOf4MLCM9zb1BOHI=
golang.org/x/oauth2 v0.30.0/go.mod h1:B++QgG3ZKulg6sRPGD/mqlHQs5rB3Ml9erfeDY7xKlU=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.15.0 h1:KWH3jNZsfyT6xfAfKiz6MRNmd46ByHDYaZ7KSkCtdW8=
golang.org/x/sync v0.15.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.0.0-20170830134202-bb24a47a89ea/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190221075227-b4e8571b14e0/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190502175342-a43fa875dd82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190509141414-a5b02f93d862/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190616124812-15dcb6c0061f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191128015809-6d18c012aee9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod 
h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=\ngolang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=\ngolang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=\ngolang.org/x/sys v0.0.0-20200615200032-f1bc736245b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=\ngolang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=\ngolang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=\ngolang.org/x/sys v0.0.0-20210603081109-ebe580a85c40/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=\ngolang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=\ngolang.org/x/sys v0.0.0-20210927094055-39ccf1dd6fa6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=\ngolang.org/x/sys v0.0.0-20220503163025-988cb79eb6c6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=\ngolang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=\ngolang.org/x/sys v0.33.0 h1:q3i8TbbEz+JRD9ywIRlyRAQbM0qF7hu24q3teo2hbuw=\ngolang.org/x/sys v0.33.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=\ngolang.org/x/text v0.0.0-20160726164857-2910a502d2bf/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=\ngolang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=\ngolang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=\ngolang.org/x/text v0.3.1-0.20181227161524-e6919f6577db/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=\ngolang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=\ngolang.org/x/text v0.25.0 h1:qVyWApTSYLk/drJRO5mDlNYskwQznZmkpV2c8q9zls4=\ngolang.org/x/text v0.25.0/go.mod h1:WEdwpYrmk1qmdHvhkSTNPm3app7v4rsT8F2UD6+VHIA=\ngolang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod 
h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=\ngolang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=\ngolang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=\ngolang.org/x/tools v0.0.0-20181011042414-1f849cf54d09/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=\ngolang.org/x/tools v0.0.0-20181030221726-6c7e314b6563/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=\ngolang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=\ngolang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=\ngolang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=\ngolang.org/x/tools v0.0.0-20190312151545-0bb0c0a6e846/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=\ngolang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=\ngolang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=\ngolang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=\ngolang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=\ngolang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=\ngolang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=\ngolang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=\ngolang.org/x/tools v0.0.0-20191203134012-c197fd4bf371/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=\ngolang.org/x/tools v0.33.0 h1:4qz2S3zmRxbGIhDIAgjxvFutSvH5EfnsYrRBj0UI0bc=\ngolang.org/x/tools v0.33.0/go.mod h1:CIJMaWEY88juyUfo7UbgPqbC8rU2OqfAV1h2Qp0oMYI=\ngolang.org/x/xerrors 
v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=\ngolang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=\ngolang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=\ngoogle.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=\ngoogle.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=\ngoogle.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=\ngoogle.golang.org/api v0.9.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=\ngoogle.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=\ngoogle.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=\ngoogle.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=\ngoogle.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=\ngoogle.golang.org/genproto v0.0.0-20170818010345-ee236bd376b0/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=\ngoogle.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=\ngoogle.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=\ngoogle.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=\ngoogle.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=\ngoogle.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=\ngoogle.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=\ngoogle.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=\ngoogle.golang.org/grpc v1.8.0/go.mod 
h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=\ngoogle.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=\ngoogle.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=\ngoogle.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=\ngoogle.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=\ngoogle.golang.org/grpc v1.27.1/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=\ngoogle.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=\ngoogle.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=\ngoogle.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=\ngoogle.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=\ngoogle.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=\ngoogle.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=\ngoogle.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=\ngopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=\ngopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=\ngopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=\ngopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=\ngopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=\ngopkg.in/cheggaaa/pb.v1 v1.0.27/go.mod h1:V/YB90LKu/1FcN3WVnfiiE5oMCibMjukxqG/qStrOgw=\ngopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=\ngopkg.in/inf.v0 v0.9.0/go.mod 
h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=\ngopkg.in/ini.v1 v1.42.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=\ngopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=\ngopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=\ngopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74=\ngopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=\ngopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=\ngopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=\ngopkg.in/yaml.v2 v2.2.5/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=\ngopkg.in/yaml.v2 v2.2.7/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=\ngopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=\ngopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=\ngopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=\ngopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=\ngopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=\ngopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=\ngopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=\nhonnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=\nhonnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=\nhonnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=\nhonnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=\nk8s.io/api v0.0.0-20190620084959-7cf5895f2711/go.mod h1:TBhBqb1AWbBQbW3XRusr7n7E4v2+5ZY8r8sAMnyFC5A=\nk8s.io/apimachinery 
v0.0.0-20190612205821-1799e75a0719/go.mod h1:I4A+glKBHiTgiEjQiCCQfCAIcIMFGt291SmsvcrFzJA=\nk8s.io/apimachinery v0.0.0-20190913080033-27d36303b655/go.mod h1:nL6pwRT8NgfF8TT68DBI8uEePRt89cSvoXUVqbkWHq4=\nk8s.io/client-go v10.0.0+incompatible/go.mod h1:7vJpHMYJwNQCWgzmNV+VYUl1zCObLyodBc8nIyt8L5s=\nk8s.io/gengo v0.0.0-20190128074634-0689ccc1d7d6/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=\nk8s.io/klog v0.0.0-20181102134211-b9b56d5dfc92/go.mod h1:Gq+BEi5rUBO/HRz0bTSXDUcqjScdoY3a9IHpCEIOOfk=\nk8s.io/klog v0.3.1/go.mod h1:Gq+BEi5rUBO/HRz0bTSXDUcqjScdoY3a9IHpCEIOOfk=\nk8s.io/klog v0.4.0/go.mod h1:4Bi6QPql/J/LkTDqv7R/cd3hPo4k2DG6Ptcz060Ez5I=\nk8s.io/klog/v2 v2.0.0/go.mod h1:PBfzABfn139FHAV07az/IF9Wp1bkk3vpT2XSJ76fSDE=\nk8s.io/kube-openapi v0.0.0-20190228160746-b3a7cee44a30/go.mod h1:BXM9ceUBTj2QnfH2MK1odQs778ajze1RxcmP6S8RVVc=\nk8s.io/kube-openapi v0.0.0-20190816220812-743ec37842bf/go.mod h1:1TqjTSzOxsLGIKfj0lK8EeCP7K1iUG65v09OM0/WG5E=\nk8s.io/utils v0.0.0-20200411171748-3d5a2fe318e4/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=\nrsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=\nsigs.k8s.io/structured-merge-diff v0.0.0-20190525122527-15d366b2352e/go.mod h1:wWxsB5ozmmv/SG7nM11ayaAW51xMvak/t1r0CSlcokI=\nsigs.k8s.io/yaml v1.1.0/go.mod h1:UJmg0vDUVViEyp3mgSv9WPwZCDxu4rQW1olrI1uml+o=\n"
  },
  {
    "path": "helper/files.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage helper\n\nimport (\n\t\"os\"\n\t\"path/filepath\"\n\n\t\"github.com/rs/zerolog/log\"\n)\n\n// GetDefaultTmplFile checks the current working directory for *.nomad files.\n// If only 1 is found we return the match.\nfunc GetDefaultTmplFile() (templateFile string) {\n\tif matches, _ := filepath.Glob(\"*.nomad\"); matches != nil {\n\t\tif len(matches) == 1 {\n\t\t\ttemplateFile = matches[0]\n\t\t\tlog.Debug().Msgf(\"helper/files: using templatefile `%v`\", templateFile)\n\t\t\treturn templateFile\n\t\t}\n\t}\n\treturn \"\"\n}\n\n// GetDefaultVarFile checks the current working directory for levant.(yaml|yml|tf) files.\n// The first match is returned.\nfunc GetDefaultVarFile() (varFile string) {\n\tif _, err := os.Stat(\"levant.yaml\"); !os.IsNotExist(err) {\n\t\tlog.Debug().Msg(\"helper/files: using default var-file `levant.yaml`\")\n\t\treturn \"levant.yaml\"\n\t}\n\tif _, err := os.Stat(\"levant.yml\"); !os.IsNotExist(err) {\n\t\tlog.Debug().Msg(\"helper/files: using default var-file `levant.yml`\")\n\t\treturn \"levant.yml\"\n\t}\n\tif _, err := os.Stat(\"levant.json\"); !os.IsNotExist(err) {\n\t\tlog.Debug().Msg(\"helper/files: using default var-file `levant.json`\")\n\t\treturn \"levant.json\"\n\t}\n\tif _, err := os.Stat(\"levant.tf\"); !os.IsNotExist(err) {\n\t\tlog.Debug().Msg(\"helper/files: using default var-file `levant.tf`\")\n\t\treturn \"levant.tf\"\n\t}\n\tlog.Debug().Msg(\"helper/files: no default var-file found\")\n\treturn \"\"\n}\n"
  },
  {
    "path": "helper/files_test.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage helper\n\nimport (\n\t\"os\"\n\t\"reflect\"\n\t\"testing\"\n)\n\nfunc TestHelper_GetDefaultTmplFile(t *testing.T) {\n\td1 := []byte(\"Levant Test Job File\\n\")\n\n\tcases := []struct {\n\t\tTmplFiles []string\n\t\tOutput    string\n\t}{\n\t\t{\n\t\t\t[]string{\"example.nomad\", \"example1.nomad\"},\n\t\t\t\"\",\n\t\t},\n\t\t{\n\t\t\t[]string{\"example.nomad\"},\n\t\t\t\"example.nomad\",\n\t\t},\n\t}\n\n\tfor _, tc := range cases {\n\t\tfor _, f := range tc.TmplFiles {\n\n\t\t\t// Use write file as tmpfile adds a prefix which doesn't work with the\n\t\t\t// GetDefaultTmplFile function.\n\t\t\terr := os.WriteFile(f, d1, 0600)\n\t\t\tif err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\t\t}\n\n\t\tactual := GetDefaultTmplFile()\n\n\t\t// Call explicit Remove as the function is dependant on the number of files\n\t\t// in the target directory.\n\t\tfor _, f := range tc.TmplFiles {\n\t\t\tos.Remove(f)\n\t\t}\n\n\t\tif !reflect.DeepEqual(actual, tc.Output) {\n\t\t\tt.Fatalf(\"got: %#v, expected %#v\", actual, tc.Output)\n\t\t}\n\t}\n\n}\n\nfunc TestHelper_GetDefaultVarFile(t *testing.T) {\n\td1 := []byte(\"Levant Test Variable File\\n\")\n\n\tcases := []struct {\n\t\tVarFile string\n\t}{\n\t\t{\"levant.yaml\"},\n\t\t{\"levant.yml\"},\n\t\t{\"levant.tf\"},\n\t\t{\"\"},\n\t}\n\n\tfor _, tc := range cases {\n\t\tif tc.VarFile != \"\" {\n\n\t\t\t// Use write file as tmpfile adds a prefix which doesn't work with the\n\t\t\t// GetDefaultTmplFile function.\n\t\t\terr := os.WriteFile(tc.VarFile, d1, 0600)\n\t\t\tif err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\t\t}\n\n\t\tactual := GetDefaultVarFile()\n\t\tif !reflect.DeepEqual(actual, tc.VarFile) {\n\t\t\tt.Fatalf(\"got: %#v, expected %#v\", actual, tc.VarFile)\n\t\t}\n\n\t\tos.Remove(tc.VarFile)\n\t}\n}\n"
  },
  {
    "path": "helper/kvflag.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage helper\n\nimport (\n\t\"fmt\"\n\t\"strings\"\n)\n\n// Flag is a flag.Value implementation for parsing user variables\n// from the command-line in the format of '-var key=value'.\ntype Flag map[string]interface{}\n\nfunc (v *Flag) String() string {\n\treturn \"\"\n}\n\n// Set takes a flag variable argument and pulls the correct key and value to\n// create or add to a map.\nfunc (v *Flag) Set(raw string) error {\n\tsplit := strings.SplitN(raw, \"=\", 2)\n\tif len(split) != 2 {\n\t\treturn fmt.Errorf(\"no '=' value in arg: %s\", raw)\n\t}\n\tkeyRaw, value := split[0], split[1]\n\n\tif *v == nil {\n\t\t*v = make(map[string]interface{})\n\t}\n\n\t// Split the variable key based on the nested delimiter to get a list of\n\t// nested keys.\n\tkeys := strings.Split(keyRaw, \".\")\n\n\tlastKeyIdx := len(keys) - 1\n\t// Find the nested map where this value belongs\n\t// create missing maps as we go\n\ttarget := *v\n\tfor i := 0; i < lastKeyIdx; i++ {\n\t\traw, ok := target[keys[i]]\n\t\tif !ok {\n\t\t\traw = make(map[string]interface{})\n\t\t\ttarget[keys[i]] = raw\n\t\t}\n\t\tvar newTarget Flag\n\t\tif newTarget, ok = raw.(map[string]interface{}); !ok {\n\t\t\treturn fmt.Errorf(\"simple value already exists at key %q\", strings.Join(keys[:i+1], \".\"))\n\t\t}\n\t\ttarget = newTarget\n\t}\n\ttarget[keys[lastKeyIdx]] = value\n\treturn nil\n}\n\n// FlagStringSlice is a flag.Value implementation for parsing targets from the\n// command line, e.g. -var-file=aa -var-file=bb\ntype FlagStringSlice []string\n\nfunc (v *FlagStringSlice) String() string {\n\treturn \"\"\n}\n\n// Set is used to append a variable file flag argument to a list of file flag\n// args.\nfunc (v *FlagStringSlice) Set(raw string) error {\n\t*v = append(*v, raw)\n\treturn nil\n}\n"
  },
  {
    "path": "helper/kvflag_test.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage helper\n\nimport (\n\t\"reflect\"\n\t\"testing\"\n\n\t\"github.com/hashicorp/go-multierror\"\n\t\"github.com/stretchr/testify/require\"\n)\n\nfunc TestHelper_Set(t *testing.T) {\n\tcases := []struct {\n\t\tLabel  string\n\t\tInputs []string\n\t\tOutput map[string]interface{}\n\t\tError  bool\n\t}{\n\t\t{\n\t\t\t\"simple value\",\n\t\t\t[]string{\"key=value\"},\n\t\t\tmap[string]interface{}{\"key\": \"value\"},\n\t\t\tfalse,\n\t\t},\n\t\t{\n\t\t\t\"nested replaces simple\",\n\t\t\t[]string{\"key=1\", \"key.nested=2\"},\n\t\t\tnil,\n\t\t\ttrue,\n\t\t},\n\t\t{\n\t\t\t\"simple replaces nested\",\n\t\t\t[]string{\"key.nested=2\", \"key=1\"},\n\t\t\tmap[string]interface{}{\"key\": \"1\"},\n\t\t\tfalse,\n\t\t},\n\t\t{\n\t\t\t\"nested siblings\",\n\t\t\t[]string{\"nested.a=1\", \"nested.b=2\"},\n\t\t\tmap[string]interface{}{\"nested\": map[string]interface{}{\"a\": \"1\", \"b\": \"2\"}},\n\t\t\tfalse,\n\t\t},\n\t\t{\n\t\t\t\"nested singleton\",\n\t\t\t[]string{\"nested.key=value\"},\n\t\t\tmap[string]interface{}{\"nested\": map[string]interface{}{\"key\": \"value\"}},\n\t\t\tfalse,\n\t\t},\n\t\t{\n\t\t\t\"nested with parent\",\n\t\t\t[]string{\"root=a\", \"nested.key=value\"},\n\t\t\tmap[string]interface{}{\"root\": \"a\", \"nested\": map[string]interface{}{\"key\": \"value\"}},\n\t\t\tfalse,\n\t\t},\n\t\t{\n\t\t\t\"empty value\",\n\t\t\t[]string{\"key=\"},\n\t\t\tmap[string]interface{}{\"key\": \"\"},\n\t\t\tfalse,\n\t\t},\n\t\t{\n\t\t\t\"value contains equal sign\",\n\t\t\t[]string{\"key=foo=bar\"},\n\t\t\tmap[string]interface{}{\"key\": \"foo=bar\"},\n\t\t\tfalse,\n\t\t},\n\t\t{\n\t\t\t\"missing equal sign\",\n\t\t\t[]string{\"key\"},\n\t\t\tnil,\n\t\t\ttrue,\n\t\t},\n\t}\n\n\tfor _, tc := range cases {\n\t\tt.Run(tc.Label, func(t *testing.T) {\n\t\t\tf := new(Flag)\n\t\t\tmErr := multierror.Error{}\n\t\t\tfor _, input := range tc.Inputs {\n\t\t\t\terr := 
f.Set(input)\n\t\t\t\tif err != nil {\n\t\t\t\t\tmErr.Errors = append(mErr.Errors, err)\n\t\t\t\t}\n\t\t\t}\n\t\t\tif tc.Error {\n\t\t\t\trequire.Error(t, mErr.ErrorOrNil())\n\t\t\t} else {\n\t\t\t\tactual := map[string]interface{}(*f)\n\t\t\t\trequire.True(t, reflect.DeepEqual(actual, tc.Output))\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "helper/nomad/opts.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage nomad\n\nimport \"github.com/hashicorp/nomad/api\"\n\n// GenerateBlockingQueryOptions generate Nomad API QueryOptions that can be\n// used for blocking. The namespace parameter will be set, if its non-nil.\nfunc GenerateBlockingQueryOptions(ns *string) *api.QueryOptions {\n\tq := api.QueryOptions{WaitIndex: 1}\n\tif ns != nil {\n\t\tq.Namespace = *ns\n\t}\n\treturn &q\n}\n"
  },
  {
    "path": "helper/nomad/opts_test.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage nomad\n\nimport (\n\t\"testing\"\n\n\t\"github.com/hashicorp/nomad/api\"\n\t\"github.com/stretchr/testify/assert\"\n)\n\nfunc Test_GenerateBlockingQueryOptions(t *testing.T) {\n\ttestCases := []struct {\n\t\tinputNS        *string\n\t\texpectedOutput *api.QueryOptions\n\t\tname           string\n\t}{\n\t\t{\n\t\t\tinputNS: nil,\n\t\t\texpectedOutput: &api.QueryOptions{\n\t\t\t\tWaitIndex: 1,\n\t\t\t},\n\t\t\tname: \"nil input namespace\",\n\t\t},\n\t\t{\n\t\t\tinputNS: stringToPtr(\"non-default\"),\n\t\t\texpectedOutput: &api.QueryOptions{\n\t\t\t\tWaitIndex: 1,\n\t\t\t\tNamespace: \"non-default\",\n\t\t\t},\n\t\t\tname: \"non-nil input namespace\",\n\t\t},\n\t}\n\n\tfor _, tc := range testCases {\n\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\tactualOutput := GenerateBlockingQueryOptions(tc.inputNS)\n\t\t\tassert.Equal(t, tc.expectedOutput, actualOutput, tc.name)\n\t\t})\n\t}\n}\n\nfunc stringToPtr(s string) *string { return &s }\n"
  },
  {
    "path": "helper/variable.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage helper\n\nimport (\n\t\"github.com/rs/zerolog/log\"\n)\n\n// VariableMerge merges the passed file variables with the flag variabes to\n// provide a single set of variables. The flagVars will always prevale over file\n// variables.\nfunc VariableMerge(fileVars, flagVars *map[string]interface{}) map[string]interface{} {\n\n\tout := make(map[string]interface{})\n\n\tfor k, v := range *flagVars {\n\t\tlog.Info().Msgf(\"helper/variable: using command line variable with key %s and value %s\", k, v)\n\t\tout[k] = v\n\t}\n\n\tfor k, v := range *fileVars {\n\t\tif _, ok := out[k]; ok {\n\t\t\tlog.Debug().Msgf(\"helper/variable: variable from file with key %s and value %s overridden by CLI var\",\n\t\t\t\tk, v)\n\t\t\tcontinue\n\t\t}\n\t\tlog.Info().Msgf(\"helper/variable: using variable with key %s and value %v from file\", k, v)\n\t\tout[k] = v\n\t}\n\n\treturn out\n}\n"
  },
  {
    "path": "helper/variable_test.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage helper\n\nimport (\n\t\"reflect\"\n\t\"testing\"\n)\n\nfunc TestHelper_VariableMerge(t *testing.T) {\n\n\tflagVars := make(map[string]interface{})\n\tflagVars[\"job_name\"] = \"levantExample\"\n\tflagVars[\"datacentre\"] = \"dc13\"\n\n\tfileVars := make(map[string]interface{})\n\tfileVars[\"job_name\"] = \"levantExampleOverride\"\n\tfileVars[\"CPU_MHz\"] = 500\n\n\texpected := make(map[string]interface{})\n\texpected[\"job_name\"] = \"levantExample\"\n\texpected[\"datacentre\"] = \"dc13\"\n\texpected[\"CPU_MHz\"] = 500\n\n\tres := VariableMerge(&fileVars, &flagVars)\n\n\tif !reflect.DeepEqual(res, expected) {\n\t\tt.Fatalf(\"expected \\n%#v\\n\\n, got \\n\\n%#v\\n\\n\", expected, res)\n\t}\n}\n"
  },
  {
    "path": "levant/auto_revert.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage levant\n\nimport (\n\t\"time\"\n\n\t\"github.com/hashicorp/nomad/api\"\n\tnomad \"github.com/hashicorp/nomad/api\"\n\t\"github.com/rs/zerolog/log\"\n)\n\nfunc (l *levantDeployment) autoRevert(dep *nomad.Deployment) {\n\n\t// Setup a loop in order to retry a race condition whereby Levant may query\n\t// the latest deployment (auto-revert dep) before it has been started.\n\ti := 0\n\tfor i := 0; i < 5; i++ {\n\t\trevertDep, _, err := l.nomad.Jobs().LatestDeployment(dep.JobID, &api.QueryOptions{Namespace: dep.Namespace})\n\t\tif err != nil {\n\t\t\tlog.Error().Msgf(\"levant/auto_revert: unable to query latest deployment of job %s\", dep.JobID)\n\t\t\treturn\n\t\t}\n\n\t\t// Check whether we have got the original deployment ID as a return from\n\t\t// Nomad, and if so, continue the loop to try again.\n\t\tif revertDep.ID == dep.ID {\n\t\t\tlog.Debug().Msgf(\"levant/auto_revert: auto-revert deployment not triggered for job %s, rechecking\", dep.JobID)\n\t\t\ttime.Sleep(1 * time.Second)\n\t\t\tcontinue\n\t\t}\n\n\t\tlog.Info().Msgf(\"levant/auto_revert: beginning deployment watcher for job %s\", dep.JobID)\n\t\tsuccess := l.deploymentWatcher(revertDep.ID)\n\n\t\tif success {\n\t\t\tlog.Info().Msgf(\"levant/auto_revert: auto-revert of job %s was successful\", dep.JobID)\n\t\t\tbreak\n\t\t}\n\n\t\tlog.Error().Msgf(\"levant/auto_revert: auto-revert of job %s failed; POTENTIAL OUTAGE SITUATION\", dep.JobID)\n\t\tl.checkFailedDeployment(&revertDep.ID)\n\t\tbreak\n\t}\n\n\t// At this point we have not been able to get the latest deploymentID that\n\t// is different from the original so we can't perform auto-revert checking.\n\tif i == 5 {\n\t\tlog.Error().Msgf(\"levant/auto_revert: unable to check auto-revert of job %s\", dep.JobID)\n\t}\n}\n\n// checkAutoRevert inspects a Nomad deployment to determine if any TashGroups\n// have been auto-reverted.\nfunc (l *levantDeployment) 
checkAutoRevert(dep *nomad.Deployment) {\n\n\tvar revert bool\n\n\t// Identify whether any of the TaskGroups are enabled for auto-revert and have\n\t// therefore caused the job to enter a deployment to revert to a stable\n\t// version.\n\tfor _, v := range dep.TaskGroups {\n\t\tif v.AutoRevert {\n\t\t\trevert = true\n\t\t\tbreak\n\t\t}\n\t}\n\n\tif revert {\n\t\tlog.Info().Msgf(\"levant/auto_revert: job %v has entered auto-revert state; launching auto-revert checker\",\n\t\t\tdep.JobID)\n\n\t\t// Run the levant autoRevert function.\n\t\tl.autoRevert(dep)\n\t} else {\n\t\tlog.Info().Msgf(\"levant/auto_revert: job %v is not in auto-revert; POTENTIAL OUTAGE SITUATION\", dep.JobID)\n\t}\n}\n"
  },
  {
    "path": "levant/deploy.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage levant\n\nimport (\n\t\"fmt\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/hashicorp/levant/client\"\n\t\"github.com/hashicorp/levant/levant/structs\"\n\tnomad \"github.com/hashicorp/nomad/api\"\n\t\"github.com/pkg/errors\"\n\t\"github.com/rs/zerolog/log\"\n)\n\nconst (\n\tjobStatusRunning = \"running\"\n)\n\n// levantDeployment is the all deployment related objects for this Levant\n// deployment invocation.\ntype levantDeployment struct {\n\tnomad  *nomad.Client\n\tconfig *DeployConfig\n}\n\n// DeployConfig is the set of config structs required to run a Levant deploy.\ntype DeployConfig struct {\n\tDeploy   *structs.DeployConfig\n\tClient   *structs.ClientConfig\n\tPlan     *structs.PlanConfig\n\tTemplate *structs.TemplateConfig\n}\n\n// newLevantDeployment sets up the Levant deployment object and Nomad client\n// to interact with the Nomad API.\nfunc newLevantDeployment(config *DeployConfig, nomadClient *nomad.Client) (*levantDeployment, error) {\n\n\tvar err error\n\tdep := &levantDeployment{}\n\tdep.config = config\n\n\tif nomadClient == nil {\n\t\tdep.nomad, err = client.NewNomadClient(config.Client.Addr)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t} else {\n\t\tdep.nomad = nomadClient\n\t}\n\n\t// Add the JobID as a log context field.\n\tlog.Logger = log.With().Str(structs.JobIDContextField, *config.Template.Job.ID).Logger()\n\n\treturn dep, nil\n}\n\n// TriggerDeployment provides the main entry point into a Levant deployment and\n// is used to setup the clients before triggering the deployment process.\nfunc TriggerDeployment(config *DeployConfig, nomadClient *nomad.Client) bool {\n\n\t// Create our new deployment object.\n\tlevantDep, err := newLevantDeployment(config, nomadClient)\n\tif err != nil {\n\t\tlog.Error().Err(err).Msg(\"levant/deploy: unable to setup Levant deployment\")\n\t\treturn false\n\t}\n\n\t// Run the job validation steps and count 
updater.\n\tpreDepVal := levantDep.preDeployValidate()\n\tif !preDepVal {\n\t\tlog.Error().Msg(\"levant/deploy: pre-deployment validation process failed\")\n\t\treturn false\n\t}\n\n\t// Start the main deployment function.\n\tsuccess := levantDep.deploy()\n\tif !success {\n\t\tlog.Error().Msg(\"levant/deploy: job deployment failed\")\n\t\treturn false\n\t}\n\n\tlog.Info().Msg(\"levant/deploy: job deployment successful\")\n\treturn true\n}\n\nfunc (l *levantDeployment) preDeployValidate() (success bool) {\n\n\t// Validate the job to check it is syntactically correct.\n\tif _, _, err := l.nomad.Jobs().Validate(l.config.Template.Job, nil); err != nil {\n\t\tlog.Error().Err(err).Msg(\"levant/deploy: job validation failed\")\n\t\treturn\n\t}\n\n\t// If job.Type isn't set we can't continue\n\tif l.config.Template.Job.Type == nil {\n\t\tlog.Error().Msgf(\"levant/deploy: Nomad job `type` is not set; should be set to `%s`, `%s` or `%s`\",\n\t\t\tnomad.JobTypeBatch, nomad.JobTypeSystem, nomad.JobTypeService)\n\t\treturn\n\t}\n\n\tif !l.config.Deploy.ForceCount {\n\t\tif err := l.dynamicGroupCountUpdater(); err != nil {\n\t\t\treturn\n\t\t}\n\t}\n\n\treturn true\n}\n\n// deploy triggers a register of the job resulting in a Nomad deployment which\n// is monitored to determine the eventual state.\nfunc (l *levantDeployment) deploy() (success bool) {\n\n\tlog.Info().Msgf(\"levant/deploy: triggering a deployment\")\n\n\teval, _, err := l.nomad.Jobs().Register(l.config.Template.Job, nil)\n\tif err != nil {\n\t\tlog.Error().Err(err).Msg(\"levant/deploy: unable to register job with Nomad\")\n\t\treturn\n\t}\n\n\tif l.config.Deploy.ForceBatch {\n\t\tif eval.EvalID, err = l.triggerPeriodic(l.config.Template.Job.ID); err != nil {\n\t\t\tlog.Error().Err(err).Msg(\"levant/deploy: unable to trigger periodic instance of job\")\n\t\t\treturn\n\t\t}\n\t}\n\n\t// Periodic and parameterized jobs do not return an evaluation and therefore\n\t// can't perform the evaluationInspector unless we are 
forcing an instance of\n\t// periodic which will yield an EvalID.\n\tif !l.config.Template.Job.IsPeriodic() && !l.config.Template.Job.IsParameterized() ||\n\t\tl.config.Template.Job.IsPeriodic() && l.config.Deploy.ForceBatch {\n\n\t\t// Trigger the evaluationInspector to identify any potential errors in the\n\t\t// Nomad evaluation run. As far as I can tell from testing, a single alloc\n\t\t// failure in an evaluation means no allocs will be placed so we exit here.\n\t\terr = l.evaluationInspector(&eval.EvalID)\n\t\tif err != nil {\n\t\t\tlog.Error().Err(err).Msg(\"levant/deploy: evaluation inspection failed\")\n\t\t\treturn\n\t\t}\n\t}\n\n\tif l.isJobZeroCount() {\n\t\treturn true\n\t}\n\n\tswitch *l.config.Template.Job.Type {\n\tcase nomad.JobTypeService:\n\n\t\t// If the service job doesn't have an update stanza, the job will not use\n\t\t// Nomad deployments.\n\t\tif l.config.Template.Job.Update == nil {\n\t\t\tlog.Info().Msg(\"levant/deploy: job is not configured with update stanza, consider adding to use deployments\")\n\t\t\treturn l.jobStatusChecker(&eval.EvalID)\n\t\t}\n\n\t\tlog.Info().Msgf(\"levant/deploy: beginning deployment watcher for job\")\n\n\t\t// Get the deploymentID from the evaluationID so that we can watch the\n\t\t// deployment for end status.\n\t\tdepID, err := l.getDeploymentID(eval.EvalID)\n\t\tif err != nil {\n\t\t\tlog.Error().Err(err).Msgf(\"levant/deploy: unable to get info of evaluation %s\", eval.EvalID)\n\t\t\treturn\n\t\t}\n\n\t\t// Get the success of the deployment and return if we have success.\n\t\tif success = l.deploymentWatcher(depID); success {\n\t\t\treturn\n\t\t}\n\n\t\tdep, _, err := l.nomad.Deployments().Info(depID, nil)\n\t\tif err != nil {\n\t\t\tlog.Error().Err(err).Msgf(\"levant/deploy: unable to query deployment %s for auto-revert check\", depID)\n\t\t\treturn\n\t\t}\n\n\t\t// If the job is not a canary job, then run the auto-revert checker; the\n\t\t// current checking mechanism is slightly hacky and should be updated.\n\t\t// The 
reason for this is that currently the config.Job is populated from the\n\t\t// rendered job and so a user could potentially not set canary, meaning\n\t\t// the field shows as null.\n\t\tif l.config.Template.Job.Update.Canary == nil {\n\t\t\tl.checkAutoRevert(dep)\n\t\t} else if *l.config.Template.Job.Update.Canary == 0 {\n\t\t\tl.checkAutoRevert(dep)\n\t\t}\n\n\tcase nomad.JobTypeBatch:\n\t\treturn l.jobStatusChecker(&eval.EvalID)\n\n\tcase nomad.JobTypeSystem:\n\t\treturn l.jobStatusChecker(&eval.EvalID)\n\n\tdefault:\n\t\tlog.Debug().Msgf(\"levant/deploy: Levant does not support advanced deployments of job type %s\",\n\t\t\t*l.config.Template.Job.Type)\n\t\tsuccess = true\n\t}\n\treturn\n}\n\nfunc (l *levantDeployment) evaluationInspector(evalID *string) error {\n\n\tfor {\n\t\tevalInfo, _, err := l.nomad.Evaluations().Info(*evalID, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\tswitch evalInfo.Status {\n\t\tcase \"complete\", \"failed\", \"canceled\":\n\t\t\tif len(evalInfo.FailedTGAllocs) == 0 {\n\t\t\t\tlog.Info().Msgf(\"levant/deploy: evaluation %s finished successfully\", *evalID)\n\t\t\t\treturn nil\n\t\t\t}\n\n\t\t\tfor group, metrics := range evalInfo.FailedTGAllocs {\n\n\t\t\t\t// Check if any nodes have been exhausted of resources and therefore are\n\t\t\t\t// unable to place allocs.\n\t\t\t\tif metrics.NodesExhausted > 0 {\n\t\t\t\t\tvar exhausted, dimension []string\n\t\t\t\t\tfor e := range metrics.ClassExhausted {\n\t\t\t\t\t\texhausted = append(exhausted, e)\n\t\t\t\t\t}\n\t\t\t\t\tfor d := range metrics.DimensionExhausted {\n\t\t\t\t\t\tdimension = append(dimension, d)\n\t\t\t\t\t}\n\t\t\t\t\tlog.Error().Msgf(\"levant/deploy: task group %s failed to place allocs, failed on %v and exhausted %v\",\n\t\t\t\t\t\tgroup, exhausted, dimension)\n\t\t\t\t}\n\n\t\t\t\t// Check if any node classes were filtered causing alloc placement\n\t\t\t\t// failures.\n\t\t\t\tif len(metrics.ClassFiltered) > 0 {\n\t\t\t\t\tfor f := range metrics.ClassFiltered 
{\n\t\t\t\t\t\tlog.Error().Msgf(\"levant/deploy: task group %s failed to place %v allocs as class \\\"%s\\\" was filtered\",\n\t\t\t\t\t\t\tgroup, len(metrics.ClassFiltered), f)\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\t// Check if any node constraints were filtered causing alloc placement\n\t\t\t\t// failures.\n\t\t\t\tif len(metrics.ConstraintFiltered) > 0 {\n\t\t\t\t\tfor cf := range metrics.ConstraintFiltered {\n\t\t\t\t\t\tlog.Error().Msgf(\"levant/deploy: task group %s failed to place %v allocs as constraint \\\"%s\\\" was filtered\",\n\t\t\t\t\t\t\tgroup, len(metrics.ConstraintFiltered), cf)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t// Do not return an error here; there could well be information from\n\t\t\t// Nomad detailing filtered nodes but the deployment will still be\n\t\t\t// successful. GH-220.\n\t\t\treturn nil\n\n\t\tdefault:\n\t\t\ttime.Sleep(1 * time.Second)\n\t\t\tcontinue\n\t\t}\n\t}\n}\n\nfunc (l *levantDeployment) deploymentWatcher(depID string) (success bool) {\n\n\tvar canaryChan chan interface{}\n\tdeploymentChan := make(chan interface{})\n\n\tt := time.Now()\n\twt := 5 * time.Second\n\n\t// Set up the canaryChan and launch the autoPromote goroutine if autoPromote\n\t// has been enabled.\n\tif l.config.Deploy.Canary > 0 {\n\t\tcanaryChan = make(chan interface{})\n\t\tgo l.canaryAutoPromote(depID, l.config.Deploy.Canary, canaryChan, deploymentChan)\n\t}\n\n\tq := &nomad.QueryOptions{WaitIndex: 1, AllowStale: l.config.Client.AllowStale, WaitTime: wt}\n\n\tfor {\n\n\t\tdep, meta, err := l.nomad.Deployments().Info(depID, q)\n\t\tlog.Debug().Msgf(\"levant/deploy: deployment %v running for %.2fs\", depID, time.Since(t).Seconds())\n\n\t\t// Listen for the deploymentChan closing which indicates Levant should exit\n\t\t// the deployment watcher.\n\t\tselect {\n\t\tcase <-deploymentChan:\n\t\t\treturn false\n\t\tdefault:\n\t\t\tbreak\n\t\t}\n\n\t\tif err != nil {\n\t\t\tlog.Error().Err(err).Msgf(\"levant/deploy: unable to get info of deployment %s\", 
depID)\n\t\t\treturn\n\t\t}\n\n\t\tif meta.LastIndex <= q.WaitIndex {\n\t\t\tcontinue\n\t\t}\n\n\t\tq.WaitIndex = meta.LastIndex\n\n\t\tcont, err := l.checkDeploymentStatus(dep, canaryChan)\n\t\tif err != nil {\n\t\t\treturn false\n\t\t}\n\n\t\tif cont {\n\t\t\tcontinue\n\t\t}\n\t\treturn true\n\t}\n}\n\nfunc (l *levantDeployment) checkDeploymentStatus(dep *nomad.Deployment, shutdownChan chan interface{}) (bool, error) {\n\n\tswitch dep.Status {\n\tcase \"successful\":\n\t\tlog.Info().Msgf(\"levant/deploy: deployment %v has completed successfully\", dep.ID)\n\t\treturn false, nil\n\tcase jobStatusRunning:\n\t\treturn true, nil\n\tdefault:\n\t\tif shutdownChan != nil {\n\t\t\tlog.Debug().Msgf(\"levant/deploy: deployment has status %v, meaning canary auto promote will shut down\", dep.Status)\n\t\t\tclose(shutdownChan)\n\t\t}\n\n\t\tlog.Error().Msgf(\"levant/deploy: deployment %v has status %s\", dep.ID, dep.Status)\n\n\t\t// Launch the failure inspector.\n\t\tl.checkFailedDeployment(&dep.ID)\n\n\t\treturn false, fmt.Errorf(\"deployment failed\")\n\t}\n}\n\n// canaryAutoPromote handles Levant's canary-auto-promote functionality.\nfunc (l *levantDeployment) canaryAutoPromote(depID string, waitTime int, shutdownChan, deploymentChan chan interface{}) {\n\n\t// Set up the AutoPromote timer.\n\tautoPromote := time.After(time.Duration(waitTime) * time.Second)\n\n\tfor {\n\t\tselect {\n\t\tcase <-autoPromote:\n\t\t\tlog.Info().Msgf(\"levant/deploy: auto-promote period %vs has been reached for deployment %s\",\n\t\t\t\twaitTime, depID)\n\n\t\t\t// Check that the deployment is healthy before promoting.\n\t\t\tif healthy := l.checkCanaryDeploymentHealth(depID); !healthy {\n\t\t\t\tlog.Error().Msgf(\"levant/deploy: the canary deployment %s has unhealthy allocations, unable to promote\", depID)\n\t\t\t\tclose(deploymentChan)\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\tlog.Info().Msgf(\"levant/deploy: triggering auto promote of deployment %s\", depID)\n\n\t\t\t// Promote the deployment.\n\t\t\t_, _, err := 
l.nomad.Deployments().PromoteAll(depID, nil)\n\t\t\tif err != nil {\n\t\t\t\tlog.Error().Err(err).Msgf(\"levant/deploy: unable to promote deployment %s\", depID)\n\t\t\t\tclose(deploymentChan)\n\t\t\t\treturn\n\t\t\t}\n\n\t\tcase <-shutdownChan:\n\t\t\tlog.Info().Msg(\"levant/deploy: canary auto promote has been shut down\")\n\t\t\treturn\n\t\t}\n\t}\n}\n\n// checkCanaryDeploymentHealth is used to check the health status of each\n// task-group within a canary deployment.\nfunc (l *levantDeployment) checkCanaryDeploymentHealth(depID string) (healthy bool) {\n\n\tvar unhealthy int\n\n\tdep, _, err := l.nomad.Deployments().Info(depID, &nomad.QueryOptions{AllowStale: l.config.Client.AllowStale})\n\tif err != nil {\n\t\tlog.Error().Err(err).Msgf(\"levant/deploy: unable to query deployment %s for health\", depID)\n\t\treturn\n\t}\n\n\t// Iterate over each task in the deployment to determine its health status. If an\n\t// unhealthy task is found, increment the unhealthy counter.\n\tfor taskName, taskInfo := range dep.TaskGroups {\n\t\t// skip any task groups which are not configured for canary deployments\n\t\tif taskInfo.DesiredCanaries == 0 {\n\t\t\tlog.Debug().Msgf(\"levant/deploy: task %s has no desired canaries, skipping health checks in deployment %s\", taskName, depID)\n\t\t\tcontinue\n\t\t}\n\n\t\tif taskInfo.DesiredCanaries != taskInfo.HealthyAllocs {\n\t\t\tlog.Error().Msgf(\"levant/deploy: task %s has unhealthy allocations in deployment %s\", taskName, depID)\n\t\t\tunhealthy++\n\t\t}\n\t}\n\n\t// If zero unhealthy tasks were found, continue with the auto promotion.\n\tif unhealthy == 0 {\n\t\tlog.Debug().Msgf(\"levant/deploy: deployment %s has 0 unhealthy allocations\", depID)\n\t\thealthy = true\n\t}\n\n\treturn\n}\n\n// triggerPeriodic is used to force an instance of a periodic job outside of the\n// planned schedule. 
This results in an evalID being created that can then be\n// checked in the same fashion as other jobs.\nfunc (l *levantDeployment) triggerPeriodic(jobID *string) (evalID string, err error) {\n\n\tlog.Info().Msg(\"levant/deploy: triggering a run of periodic job\")\n\n\t// Trigger the run if possible and just return both the evalID and the err.\n\t// There is no need to check this here as the caller does this.\n\tevalID, _, err = l.nomad.Jobs().PeriodicForce(*jobID, nil)\n\treturn\n}\n\n// getDeploymentID finds the Nomad deploymentID associated with a Nomad\n// evaluationID. This is only needed as sometimes Nomad initially returns eval\n// info with an empty deploymentID, and a retry is required in order to get the\n// updated response from Nomad.\nfunc (l *levantDeployment) getDeploymentID(evalID string) (depID string, err error) {\n\n\tvar evalInfo *nomad.Evaluation\n\n\ttimeout := time.NewTicker(time.Second * 60)\n\tdefer timeout.Stop()\n\n\tfor {\n\t\tselect {\n\t\tcase <-timeout.C:\n\t\t\terr = errors.New(\"timeout reached on attempting to find deployment ID\")\n\t\t\treturn\n\n\t\tdefault:\n\t\t\tif evalInfo, _, err = l.nomad.Evaluations().Info(evalID, nil); err != nil {\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\tif evalInfo.DeploymentID != \"\" {\n\t\t\t\treturn evalInfo.DeploymentID, nil\n\t\t\t}\n\n\t\t\tlog.Debug().Msgf(\"levant/deploy: Nomad returned an empty deployment for evaluation %v; retrying\", evalID)\n\t\t\ttime.Sleep(2 * time.Second)\n\t\t\tcontinue\n\t\t}\n\t}\n}\n\n// dynamicGroupCountUpdater takes the templated and rendered job and updates the\n// group counts based on the currently deployed job, if it's running.\nfunc (l *levantDeployment) dynamicGroupCountUpdater() error {\n\n\t// Gather information about the current state, if any, of the job on the\n\t// Nomad cluster.\n\trJob, _, err := l.nomad.Jobs().Info(*l.config.Template.Job.Name, &nomad.QueryOptions{Namespace: *l.config.Template.Job.Namespace})\n\n\t// This is a hack due to GH-1849; we check the 
error string for 404, which\n\t// indicates the job is not running, not that there was an error in the API\n\t// call.\n\tif err != nil && strings.Contains(err.Error(), \"404\") {\n\t\tlog.Info().Msg(\"levant/deploy: job is not running, using template file group counts\")\n\t\treturn nil\n\t} else if err != nil {\n\t\tlog.Error().Err(err).Msg(\"levant/deploy: unable to perform job evaluation\")\n\t\treturn err\n\t}\n\n\t// Check that the job is actually running and not in a potentially stopped\n\t// state.\n\tif *rJob.Status != jobStatusRunning {\n\t\treturn nil\n\t}\n\n\tlog.Debug().Msgf(\"levant/deploy: running dynamic job count updater\")\n\n\t// Iterate over the templated job and the Nomad returned job and update group count\n\t// based on matches.\n\tfor _, rGroup := range rJob.TaskGroups {\n\t\tfor _, group := range l.config.Template.Job.TaskGroups {\n\t\t\tif *rGroup.Name == *group.Name {\n\t\t\t\tlog.Info().Msgf(\"levant/deploy: using dynamic count %v for group %s\",\n\t\t\t\t\t*rGroup.Count, *group.Name)\n\t\t\t\tgroup.Count = rGroup.Count\n\t\t\t}\n\t\t}\n\t}\n\treturn nil\n}\n\n// isJobZeroCount checks whether all task groups in the job have a count of zero.\nfunc (l *levantDeployment) isJobZeroCount() bool {\n\tfor _, tg := range l.config.Template.Job.TaskGroups {\n\t\tif tg.Count == nil {\n\t\t\treturn false\n\t\t} else if *tg.Count > 0 {\n\t\t\treturn false\n\t\t}\n\t}\n\treturn true\n}\n"
  },
  {
    "path": "levant/dispatch.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage levant\n\nimport (\n\t\"github.com/hashicorp/levant/client\"\n\t\"github.com/hashicorp/levant/levant/structs\"\n\tnomad \"github.com/hashicorp/nomad/api\"\n\t\"github.com/rs/zerolog/log\"\n)\n\n// TriggerDispatch provides the main entry point into a Levant dispatch and\n// is used to set up the clients before triggering the dispatch process.\nfunc TriggerDispatch(job string, metaMap map[string]string, payload []byte, address string) bool {\n\n\tclient, err := client.NewNomadClient(address)\n\tif err != nil {\n\t\tlog.Error().Msgf(\"levant/dispatch: unable to setup Levant dispatch: %v\", err)\n\t\treturn false\n\t}\n\n\t// TODO: Potential refactor so that dispatch does not need to use the\n\t// levantDeployment object. Requires client refactor.\n\tdep := &levantDeployment{}\n\tdep.nomad = client\n\n\tsuccess := dep.dispatch(job, metaMap, payload)\n\tif !success {\n\t\tlog.Error().Msgf(\"levant/dispatch: dispatch of job %v failed\", job)\n\t\treturn false\n\t}\n\n\tlog.Info().Msgf(\"levant/dispatch: dispatch of job %v successful\", job)\n\treturn true\n}\n\n// dispatch triggers a new instance of a parameterized job, resulting in a\n// Nomad job which is monitored to determine the eventual state.\nfunc (l *levantDeployment) dispatch(job string, metaMap map[string]string, payload []byte) bool {\n\n\t// Initiate the dispatch with the passed meta parameters.\n\teval, _, err := l.nomad.Jobs().Dispatch(job, metaMap, payload, \"\", nil)\n\tif err != nil {\n\t\tlog.Error().Msgf(\"levant/dispatch: %v\", err)\n\t\treturn false\n\t}\n\n\tlog.Info().Msgf(\"levant/dispatch: triggering dispatch against job %s\", job)\n\n\t// If we didn't get an EvaluationID then we cannot continue.\n\tif eval.EvalID == \"\" {\n\t\tlog.Error().Msgf(\"levant/dispatch: dispatched job %s did not return evaluation\", job)\n\t\treturn false\n\t}\n\n\t// In order to correctly run the 
jobStatusChecker we need to correctly\n\t// assign the dispatched job ID/Name based on the invoked job.\n\tl.config = &DeployConfig{\n\t\tTemplate: &structs.TemplateConfig{\n\t\t\tJob: &nomad.Job{\n\t\t\t\tID:   &eval.DispatchedJobID,\n\t\t\t\tName: &eval.DispatchedJobID,\n\t\t\t},\n\t\t},\n\t}\n\n\t// Perform the evaluation inspection to check for any possible\n\t// errors in triggering the dispatched job.\n\terr = l.evaluationInspector(&eval.EvalID)\n\tif err != nil {\n\t\tlog.Error().Msgf(\"levant/dispatch: %v\", err)\n\t\treturn false\n\t}\n\n\treturn l.jobStatusChecker(&eval.EvalID)\n}\n"
  },
  {
    "path": "levant/failure_inspector.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage levant\n\nimport (\n\t\"fmt\"\n\t\"strings\"\n\t\"sync\"\n\n\tnomad \"github.com/hashicorp/nomad/api\"\n\t\"github.com/rs/zerolog/log\"\n)\n\n// checkFailedDeployment helps log information about deployment failures.\nfunc (l *levantDeployment) checkFailedDeployment(depID *string) {\n\n\tvar allocIDs []string\n\n\tallocs, _, err := l.nomad.Deployments().Allocations(*depID, nil)\n\tif err != nil {\n\t\tlog.Error().Msgf(\"levant/failure_inspector: unable to query deployment allocations for deployment %v\",\n\t\t\tdepID)\n\t}\n\n\t// Iterate the allocations on the deployment and create a list of each allocID;\n\t// we only list the ones that have tasks that are not successful.\n\tfor _, alloc := range allocs {\n\t\tfor _, task := range alloc.TaskStates {\n\t\t\t// we need to test for success both for service style jobs and for batch style jobs\n\t\t\tif task.State != \"started\" {\n\t\t\t\tallocIDs = append(allocIDs, alloc.ID)\n\t\t\t\t// once we add the allocation we don't need to add it again\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t}\n\n\t// Set up a WaitGroup so the function doesn't return until all allocations have\n\t// been inspected.\n\tvar wg sync.WaitGroup\n\twg.Add(len(allocIDs))\n\n\t// Inspect each allocation.\n\tfor _, id := range allocIDs {\n\t\tlog.Debug().Msgf(\"levant/failure_inspector: launching allocation inspector for alloc %v\", id)\n\t\tgo l.allocInspector(id, &wg)\n\t}\n\n\twg.Wait()\n}\n\n// allocInspector inspects an allocation's events to log any useful information\n// which may help debug deployment failures.\nfunc (l *levantDeployment) allocInspector(allocID string, wg *sync.WaitGroup) {\n\n\t// Inform the wait group we have finished our task upon completion.\n\tdefer wg.Done()\n\n\tresp, _, err := l.nomad.Allocations().Info(allocID, nil)\n\tif err != nil {\n\t\tlog.Error().Msgf(\"levant/failure_inspector: unable to query alloc %v: %v\", allocID, 
err)\n\t\treturn\n\t}\n\n\t// Iterate over each Task and Event to log any relevant information which may\n\t// help debug deployment failures.\n\tfor _, task := range resp.TaskStates {\n\t\tfor _, event := range task.Events {\n\n\t\t\tvar desc string\n\n\t\t\tswitch event.Type {\n\t\t\tcase nomad.TaskFailedValidation:\n\t\t\t\tif event.ValidationError != \"\" {\n\t\t\t\t\tdesc = event.ValidationError\n\t\t\t\t} else {\n\t\t\t\t\tdesc = \"validation of task failed\"\n\t\t\t\t}\n\t\t\tcase nomad.TaskSetupFailure:\n\t\t\t\tif event.SetupError != \"\" {\n\t\t\t\t\tdesc = event.SetupError\n\t\t\t\t} else {\n\t\t\t\t\tdesc = \"task setup failed\"\n\t\t\t\t}\n\t\t\tcase nomad.TaskDriverFailure:\n\t\t\t\tif event.DriverError != \"\" {\n\t\t\t\t\tdesc = event.DriverError\n\t\t\t\t} else {\n\t\t\t\t\tdesc = \"failed to start task\"\n\t\t\t\t}\n\t\t\tcase nomad.TaskArtifactDownloadFailed:\n\t\t\t\tif event.DownloadError != \"\" {\n\t\t\t\t\tdesc = event.DownloadError\n\t\t\t\t} else {\n\t\t\t\t\tdesc = \"the task failed to download artifacts\"\n\t\t\t\t}\n\t\t\tcase nomad.TaskKilling:\n\t\t\t\tif event.KillReason != \"\" {\n\t\t\t\t\tdesc = fmt.Sprintf(\"the task was killed: %v\", event.KillReason)\n\t\t\t\t} else if event.KillTimeout != 0 {\n\t\t\t\t\tdesc = fmt.Sprintf(\"sent interrupt, waiting %v before force killing\", event.KillTimeout)\n\t\t\t\t} else {\n\t\t\t\t\tdesc = \"the task was sent interrupt\"\n\t\t\t\t}\n\t\t\tcase nomad.TaskKilled:\n\t\t\t\tif event.KillError != \"\" {\n\t\t\t\t\tdesc = event.KillError\n\t\t\t\t} else {\n\t\t\t\t\tdesc = \"the task was successfully killed\"\n\t\t\t\t}\n\t\t\tcase nomad.TaskTerminated:\n\t\t\t\tvar parts []string\n\t\t\t\tparts = append(parts, fmt.Sprintf(\"exit code %d\", event.ExitCode))\n\n\t\t\t\tif event.Signal != 0 {\n\t\t\t\t\tparts = append(parts, fmt.Sprintf(\"signal %d\", event.Signal))\n\t\t\t\t}\n\n\t\t\t\tif event.Message != \"\" {\n\t\t\t\t\tparts = append(parts, fmt.Sprintf(\"exit message %q\", 
event.Message))\n\t\t\t\t}\n\t\t\t\tdesc = strings.Join(parts, \", \")\n\t\t\tcase nomad.TaskNotRestarting:\n\t\t\t\tif event.RestartReason != \"\" {\n\t\t\t\t\tdesc = event.RestartReason\n\t\t\t\t} else {\n\t\t\t\t\tdesc = \"the task exceeded restart policy\"\n\t\t\t\t}\n\t\t\tcase nomad.TaskSiblingFailed:\n\t\t\t\tif event.FailedSibling != \"\" {\n\t\t\t\t\tdesc = fmt.Sprintf(\"task's sibling %q failed\", event.FailedSibling)\n\t\t\t\t} else {\n\t\t\t\t\tdesc = \"task's sibling failed\"\n\t\t\t\t}\n\t\t\tcase nomad.TaskLeaderDead:\n\t\t\t\tdesc = \"leader task in group is dead\"\n\t\t\t}\n\n\t\t\t// If we have matched and have an updated desc then log the appropriate\n\t\t\t// information.\n\t\t\tif desc != \"\" {\n\t\t\t\tlog.Error().Msgf(\"levant/failure_inspector: alloc %s incurred event %s because %s\",\n\t\t\t\t\tallocID, strings.ToLower(event.Type), strings.TrimSpace(desc))\n\t\t\t} else {\n\t\t\t\tlog.Error().Msgf(\"levant/failure_inspector: alloc %s logged for failure; event_type: %s; message: %s\",\n\t\t\t\t\tallocID,\n\t\t\t\t\tstrings.ToLower(event.Type),\n\t\t\t\t\tstrings.ToLower(event.DisplayMessage))\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "levant/job_status_checker.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage levant\n\nimport (\n\tnomadHelper \"github.com/hashicorp/levant/helper/nomad\"\n\tnomad \"github.com/hashicorp/nomad/api\"\n\t\"github.com/rs/zerolog/log\"\n)\n\n// TaskCoordinate is a coordinate for an allocation/task combination\ntype TaskCoordinate struct {\n\tAlloc    string\n\tTaskName string\n}\n\n// jobStatusChecker checks that a job at least reaches a status of\n// running. Depending on the type of job and its configuration it can go through\n// more checks.\nfunc (l *levantDeployment) jobStatusChecker(evalID *string) bool {\n\n\tlog.Debug().Msgf(\"levant/job_status_checker: running job status checker for job\")\n\n\t// Run the initial job status check to ensure the job reaches a state of\n\t// running.\n\tjStatus := l.simpleJobStatusChecker()\n\n\t// Periodic and parameterized batch jobs do not produce evaluations and so\n\t// can only go through the simplest of checks.\n\tif *evalID == \"\" {\n\t\treturn jStatus\n\t}\n\n\t// Job registrations that produce an evaluation can be more thoroughly\n\t// checked even if they don't support Nomad deployments.\n\tif jStatus {\n\t\treturn l.jobAllocationChecker(evalID)\n\t}\n\n\treturn false\n}\n\n// simpleJobStatusChecker is used to check that jobs which do not emit initial\n// evaluations at least reach a job status of running.\nfunc (l *levantDeployment) simpleJobStatusChecker() bool {\n\n\tq := nomadHelper.GenerateBlockingQueryOptions(l.config.Template.Job.Namespace)\n\n\tfor {\n\n\t\tjob, meta, err := l.nomad.Jobs().Info(*l.config.Template.Job.Name, q)\n\t\tif err != nil {\n\t\t\tlog.Error().Err(err).Msg(\"levant/job_status_checker: unable to query job information from Nomad\")\n\t\t\treturn false\n\t\t}\n\n\t\t// If the LastIndex is not greater than our stored WaitIndex, we don't\n\t\t// need to do anything.\n\t\tif meta.LastIndex <= q.WaitIndex {\n\t\t\tcontinue\n\t\t}\n\n\t\t// Check the status of 
the job and proceed accordingly.\n\t\tswitch *job.Status {\n\t\tcase \"running\":\n\t\t\tlog.Info().Msgf(\"levant/job_status_checker: job has status %s\", *job.Status)\n\t\t\treturn true\n\t\tcase \"pending\":\n\t\t\tlog.Debug().Msgf(\"levant/job_status_checker: job has status %s\", *job.Status)\n\t\t\tq.WaitIndex = meta.LastIndex\n\t\t\tcontinue\n\t\tcase \"dead\":\n\t\t\tlog.Error().Msgf(\"levant/job_status_checker: job has status %s\", *job.Status)\n\t\t\treturn false\n\t\t}\n\t}\n}\n\n// jobAllocationChecker is the main entry point into the allocation checker for\n// jobs that do not support Nomad deployments.\nfunc (l *levantDeployment) jobAllocationChecker(evalID *string) bool {\n\n\tq := nomadHelper.GenerateBlockingQueryOptions(l.config.Template.Job.Namespace)\n\n\t// Build our small internal checking struct.\n\tlevantTasks := make(map[TaskCoordinate]string)\n\n\tfor {\n\n\t\tallocs, meta, err := l.nomad.Evaluations().Allocations(*evalID, q)\n\t\tif err != nil {\n\t\t\tlog.Error().Err(err).Msg(\"levant/job_status_checker: unable to query allocs of job from Nomad\")\n\t\t\treturn false\n\t\t}\n\n\t\t// If the LastIndex is not greater than our stored WaitIndex, we don't\n\t\t// need to do anything.\n\t\tif meta.LastIndex <= q.WaitIndex {\n\t\t\tcontinue\n\t\t}\n\n\t\t// If we get here, set the WaitIndex to the latest index.\n\t\tq.WaitIndex = meta.LastIndex\n\n\t\tcomplete, deadTasks := allocationStatusChecker(levantTasks, allocs)\n\n\t\t// If we have no allocations left to track then we can exit and log\n\t\t// information depending on the success.\n\t\tif complete && deadTasks == 0 {\n\t\t\tlog.Info().Msg(\"levant/job_status_checker: all allocations in deployment of job are running\")\n\t\t\treturn true\n\t\t} else if complete && deadTasks > 0 {\n\t\t\treturn false\n\t\t}\n\t}\n}\n\n// allocationStatusChecker is used to check the state of allocations within a\n// job deployment, and 
update Levant's internal tracking on task status based on\n// this. This functionality exists as Nomad does not currently support\n// deployments across all job types.\nfunc allocationStatusChecker(levantTasks map[TaskCoordinate]string, allocs []*nomad.AllocationListStub) (bool, int) {\n\n\tcomplete := true\n\tdeadTasks := 0\n\n\tfor _, alloc := range allocs {\n\t\tfor taskName, task := range alloc.TaskStates {\n\t\t\t// if the state is one we haven't seen yet then we print a message\n\t\t\tif levantTasks[TaskCoordinate{alloc.ID, taskName}] != task.State {\n\t\t\t\tlog.Info().Msgf(\"levant/job_status_checker: task %s in allocation %s now in %s state\",\n\t\t\t\t\ttaskName, alloc.ID, task.State)\n\t\t\t\t// then we record the new state\n\t\t\t\tlevantTasks[TaskCoordinate{alloc.ID, taskName}] = task.State\n\t\t\t}\n\n\t\t\t// then we have some case-specific actions\n\t\t\tswitch levantTasks[TaskCoordinate{alloc.ID, taskName}] {\n\t\t\t// if a task is still pending we are not yet done\n\t\t\tcase \"pending\":\n\t\t\t\tcomplete = false\n\t\t\t\t// if the task is dead we record that\n\t\t\tcase \"dead\":\n\t\t\t\tdeadTasks++\n\t\t\t}\n\t\t}\n\t}\n\treturn complete, deadTasks\n}\n"
  },
  {
    "path": "levant/job_status_checker_test.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage levant\n\nimport (\n\t\"testing\"\n\n\tnomad \"github.com/hashicorp/nomad/api\"\n)\n\nfunc TestJobStatusChecker_allocationStatusChecker(t *testing.T) {\n\n\t// Build our task status maps\n\tlevantTasks1 := make(map[TaskCoordinate]string)\n\tlevantTasks2 := make(map[TaskCoordinate]string)\n\tlevantTasks3 := make(map[TaskCoordinate]string)\n\n\t// Build a small AllocationListStubs with required information.\n\tvar allocs1 []*nomad.AllocationListStub\n\ttaskStates1 := make(map[string]*nomad.TaskState)\n\ttaskStates1[\"task1\"] = &nomad.TaskState{State: \"running\"}\n\tallocs1 = append(allocs1, &nomad.AllocationListStub{\n\t\tID:         \"10246d87-ecd7-21ad-13b2-f0c564647d64\",\n\t\tTaskStates: taskStates1,\n\t})\n\n\tvar allocs2 []*nomad.AllocationListStub\n\ttaskStates2 := make(map[string]*nomad.TaskState)\n\ttaskStates2[\"task1\"] = &nomad.TaskState{State: \"running\"}\n\ttaskStates2[\"task2\"] = &nomad.TaskState{State: \"pending\"}\n\tallocs2 = append(allocs2, &nomad.AllocationListStub{\n\t\tID:         \"20246d87-ecd7-21ad-13b2-f0c564647d64\",\n\t\tTaskStates: taskStates2,\n\t})\n\n\tvar allocs3 []*nomad.AllocationListStub\n\ttaskStates3 := make(map[string]*nomad.TaskState)\n\ttaskStates3[\"task1\"] = &nomad.TaskState{State: \"dead\"}\n\tallocs3 = append(allocs3, &nomad.AllocationListStub{\n\t\tID:         \"30246d87-ecd7-21ad-13b2-f0c564647d64\",\n\t\tTaskStates: taskStates3,\n\t})\n\n\tcases := []struct {\n\t\tlevantTasks      map[TaskCoordinate]string\n\t\tallocs           []*nomad.AllocationListStub\n\t\tdead             int\n\t\texpectedDead     int\n\t\texpectedComplete bool\n\t}{\n\t\t{\n\t\t\tlevantTasks1,\n\t\t\tallocs1,\n\t\t\t0,\n\t\t\t0,\n\t\t\ttrue,\n\t\t},\n\t\t{\n\t\t\tlevantTasks2,\n\t\t\tallocs2,\n\t\t\t0,\n\t\t\t0,\n\t\t\tfalse,\n\t\t},\n\t\t{\n\t\t\tlevantTasks3,\n\t\t\tallocs3,\n\t\t\t0,\n\t\t\t1,\n\t\t\ttrue,\n\t\t},\n\t}\n\n\tfor _, tc := range 
cases {\n\t\tcomplete, dead := allocationStatusChecker(tc.levantTasks, tc.allocs)\n\n\t\tif complete != tc.expectedComplete {\n\t\t\tt.Fatalf(\"expected complete to be %v but got %v\", tc.expectedComplete, complete)\n\t\t}\n\t\tif dead != tc.expectedDead {\n\t\t\tt.Fatalf(\"expected %v dead task(s) but got %v\", tc.expectedDead, dead)\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "levant/plan.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage levant\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/hashicorp/levant/client\"\n\t\"github.com/hashicorp/levant/levant/structs\"\n\tnomad \"github.com/hashicorp/nomad/api\"\n\t\"github.com/rs/zerolog/log\"\n)\n\nconst (\n\tdiffTypeAdded  = \"Added\"\n\tdiffTypeEdited = \"Edited\"\n\tdiffTypeNone   = \"None\"\n)\n\ntype levantPlan struct {\n\tnomad  *nomad.Client\n\tconfig *PlanConfig\n}\n\n// PlanConfig is the set of config structs required to run a Levant plan.\ntype PlanConfig struct {\n\tClient   *structs.ClientConfig\n\tPlan     *structs.PlanConfig\n\tTemplate *structs.TemplateConfig\n}\n\nfunc newPlan(config *PlanConfig) (*levantPlan, error) {\n\n\tvar err error\n\n\tplan := &levantPlan{}\n\tplan.config = config\n\n\tplan.nomad, err = client.NewNomadClient(config.Client.Addr)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn plan, nil\n}\n\n// TriggerPlan initiates a Levant plan run.\nfunc TriggerPlan(config *PlanConfig) (bool, bool) {\n\n\tlp, err := newPlan(config)\n\tif err != nil {\n\t\tlog.Error().Err(err).Msg(\"levant/plan: unable to setup Levant plan\")\n\t\treturn false, false\n\t}\n\n\tchanges, err := lp.plan()\n\tif err != nil {\n\t\tlog.Error().Err(err).Msg(\"levant/plan: error when running plan\")\n\t\treturn false, changes\n\t}\n\n\tif !changes && lp.config.Plan.IgnoreNoChanges {\n\t\tlog.Info().Msg(\"levant/plan: no changes found in job but ignore-no-changes flag set to true\")\n\t} else if !changes && !lp.config.Plan.IgnoreNoChanges {\n\t\tlog.Info().Msg(\"levant/plan: no changes found in job\")\n\t\treturn false, changes\n\t}\n\n\treturn true, changes\n}\n\n// plan is the entry point into running the Levant plan function which logs all\n// changes anticipated by Nomad for the upcoming job registration. 
If there are\n// no planned changes here, return false to indicate we should stop the process.\nfunc (lp *levantPlan) plan() (bool, error) {\n\n\tlog.Debug().Msg(\"levant/plan: triggering Nomad plan\")\n\n\t// Run a plan using the rendered job.\n\tresp, _, err := lp.nomad.Jobs().Plan(lp.config.Template.Job, true, nil)\n\tif err != nil {\n\t\tlog.Error().Err(err).Msg(\"levant/plan: unable to run a job plan\")\n\t\treturn false, err\n\t}\n\n\tswitch resp.Diff.Type {\n\n\t// If the job is new, then don't print the entire diff but just log that it\n\t// is a new registration.\n\tcase diffTypeAdded:\n\t\tlog.Info().Msg(\"levant/plan: job is a new addition to the cluster\")\n\t\treturn true, nil\n\n\t\t// If there are no changes, log the message so the user can see this and\n\t\t// exit the deployment.\n\tcase diffTypeNone:\n\t\tlog.Info().Msg(\"levant/plan: no changes detected for job\")\n\t\treturn false, nil\n\n\t\t// If there are changes, run the planDiff function which is responsible for\n\t\t// iterating through the plan and logging all the planned changes.\n\tcase diffTypeEdited:\n\t\tplanDiff(resp.Diff)\n\t}\n\n\treturn true, nil\n}\n\nfunc planDiff(plan *nomad.JobDiff) {\n\n\t// Iterate through each TaskGroup.\n\tfor _, tg := range plan.TaskGroups {\n\t\tif tg.Type != diffTypeEdited {\n\t\t\tcontinue\n\t\t}\n\t\tfor _, tgo := range tg.Objects {\n\t\t\trecurseObjDiff(tg.Name, \"\", tgo)\n\t\t}\n\n\t\t// Iterate through each Task.\n\t\tfor _, t := range tg.Tasks {\n\t\t\tif t.Type != diffTypeEdited {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tif len(t.Objects) == 0 {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tfor _, o := range t.Objects {\n\t\t\t\trecurseObjDiff(tg.Name, t.Name, o)\n\t\t\t}\n\t\t}\n\t}\n}\n\nfunc recurseObjDiff(g, t string, objDiff *nomad.ObjectDiff) {\n\n\t// If we have reached the end of the object tree, and have an edited type\n\t// with field information then we can iterate on the fields to find those\n\t// which have changed.\n\tif len(objDiff.Objects) == 0 && 
len(objDiff.Fields) > 0 && objDiff.Type == diffTypeEdited {\n\t\tfor _, f := range objDiff.Fields {\n\t\t\tif f.Type != diffTypeEdited {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tlogDiffObj(g, t, objDiff.Name, f.Name, f.Old, f.New)\n\t\t\tcontinue\n\t\t}\n\n\t} else {\n\t\t// Continue to interate through the object diff objects until such time\n\t\t// the above is triggered.\n\t\tfor _, o := range objDiff.Objects {\n\t\t\trecurseObjDiff(g, t, o)\n\t\t}\n\t}\n}\n\n// logDiffObj is a helper function so Levant can log the most accurate and\n// useful plan output messages.\nfunc logDiffObj(g, t, objName, fName, fOld, fNew string) {\n\n\tvar lStart, l string\n\n\t// We will always have at least this information to log.\n\tlEnd := fmt.Sprintf(\"plan indicates change of %s:%s from %s to %s\",\n\t\tobjName, fName, fOld, fNew)\n\n\t// If we have been passed a group name, use this to start the log line.\n\tif g != \"\" {\n\t\tlStart = fmt.Sprintf(\"group %s \", g)\n\t}\n\n\t// If we have been passed a task name, append this to the group name.\n\tif t != \"\" {\n\t\tlStart = lStart + fmt.Sprintf(\"and task %s \", t)\n\t}\n\n\t// Build the final log message.\n\tif lStart != \"\" {\n\t\tl = lStart + lEnd\n\t} else {\n\t\tl = lEnd\n\t}\n\n\tlog.Info().Msgf(\"levant/plan: %s\", l)\n}\n"
  },
  {
    "path": "levant/structs/config.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage structs\n\nimport nomad \"github.com/hashicorp/nomad/api\"\n\nconst (\n\t// JobIDContextField is the logging context field added when interacting\n\t// with jobs.\n\tJobIDContextField = \"job_id\"\n\n\t// ScalingDirectionOut represents a scaling out event; adding to the total number.\n\tScalingDirectionOut = \"Out\"\n\n\t// ScalingDirectionIn represents a scaling in event; removing from the total number.\n\tScalingDirectionIn = \"In\"\n\n\t// ScalingDirectionTypeCount means the scale event will use a change by count.\n\tScalingDirectionTypeCount = \"Count\"\n\n\t// ScalingDirectionTypePercent means the scale event will use a percentage of the current count.\n\tScalingDirectionTypePercent = \"Percent\"\n)\n\n// DeployConfig is the main struct used to configure and run a Levant deployment on\n// a given target job.\ntype DeployConfig struct {\n\t// Canary enables canary autopromote and is the value in seconds to wait\n\t// until attempting to perform autopromote.\n\tCanary int\n\n\t// Force is a boolean flag that can be used to force a deployment\n\t// even though Levant didn't detect any changes.\n\tForce bool\n\n\t// ForceBatch is a boolean flag that can be used to force a run of a periodic\n\t// job upon registration.\n\tForceBatch bool\n\n\t// ForceCount is a boolean flag that can be used to ignore running job counts\n\t// and force the count based on the rendered job file.\n\tForceCount bool\n\n\t// EnvVault is a boolean flag that can be used to enable reading the VAULT_TOKEN\n\t// from the environment.\n\tEnvVault bool\n}\n\n// ClientConfig is the config struct which houses all the information needed to connect\n// to the external services and endpoints.\ntype ClientConfig struct {\n\t// Addr is the Nomad API address to use for all calls and must include both\n\t// protocol and port.\n\tAddr string\n\n\t// ConsulAddr is the Consul API address to use for all calls.\n\tConsulAddr string\n\n\t// AllowStale sets the consistency level for Nomad queries.\n\t// https://www.nomadproject.io/api/index.html#consistency-modes\n\tAllowStale bool\n}\n\n// PlanConfig contains any configuration options that are specific to running a\n// Nomad plan.\ntype PlanConfig struct {\n\t// IgnoreNoChanges is used to allow operators to force Levant to exit cleanly\n\t// even if there are no changes found during the plan.\n\tIgnoreNoChanges bool\n}\n\n// TemplateConfig contains all the job templating configuration options including\n// the rendered job.\ntype TemplateConfig struct {\n\t// Job represents the Nomad Job definition that will be deployed.\n\tJob *nomad.Job\n\n\t// TemplateFile is the job specification template which will be rendered\n\t// before being deployed to the cluster.\n\tTemplateFile string\n\n\t// VariableFiles contains the variables which will be substituted into the\n\t// templateFile before deployment.\n\tVariableFiles []string\n}\n\n// ScaleConfig contains all the scaling specific configuration options.\ntype ScaleConfig struct {\n\t// Count is the count by which the operator has asked to scale the Nomad job,\n\t// and optionally a specific taskgroup within it.\n\tCount int\n\n\t// Direction is the direction in which the scaling will take place and is\n\t// populated by consts.\n\tDirection string\n\n\t// DirectionType is an identifier on whether the operator has specified to\n\t// scale using a count increase or percentage.\n\tDirectionType string\n\n\t// JobID is the Nomad job which will be interacted with for scaling.\n\tJobID string\n\n\t// Percent is the percentage by which the operator has asked to scale the\n\t// Nomad job, and optionally a specific taskgroup within it.\n\tPercent int\n\n\t// TaskGroup is the Nomad job taskgroup which has been selected for scaling.\n\tTaskGroup string\n}\n"
  },
  {
    "path": "logging/logging.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage logging\n\nimport (\n\t\"fmt\"\n\t\"io\"\n\tstdlog \"log\"\n\t\"os\"\n\t\"strings\"\n\n\tisatty \"github.com/mattn/go-isatty\"\n\t\"github.com/rs/zerolog\"\n\t\"github.com/rs/zerolog/log\"\n\t\"github.com/sean-/conswriter\"\n)\n\nvar acceptedLogLevels = []string{\"DEBUG\", \"INFO\", \"WARN\", \"ERROR\", \"FATAL\"}\nvar acceptedLogFormat = []string{\"HUMAN\", \"JSON\"}\n\n// SetupLogger sets the log level and output format.\n// Accepted levels are debug, info, warn, error and fatal.\n// Accepted formats are human or json.\nfunc SetupLogger(level, format string) (err error) {\n\n\tif err = setLogFormat(strings.ToUpper(format)); err != nil {\n\t\treturn err\n\t}\n\n\tif err = setLogLevel(strings.ToUpper(level)); err != nil {\n\t\treturn err\n\t}\n\n\treturn nil\n}\n\nfunc setLogLevel(level string) error {\n\tswitch level {\n\tcase \"DEBUG\":\n\t\tzerolog.SetGlobalLevel(zerolog.DebugLevel)\n\tcase \"INFO\":\n\t\tzerolog.SetGlobalLevel(zerolog.InfoLevel)\n\tcase \"WARN\":\n\t\tzerolog.SetGlobalLevel(zerolog.WarnLevel)\n\tcase \"ERROR\":\n\t\tzerolog.SetGlobalLevel(zerolog.ErrorLevel)\n\tcase \"FATAL\":\n\t\tzerolog.SetGlobalLevel(zerolog.FatalLevel)\n\tdefault:\n\t\treturn fmt.Errorf(\"unsupported log level: %q (supported levels: %s)\", level,\n\t\t\tstrings.Join(acceptedLogLevels, \" \"))\n\t}\n\treturn nil\n}\n\nfunc setLogFormat(format string) error {\n\n\tvar logWriter io.Writer\n\tvar zLog zerolog.Logger\n\n\tif isatty.IsTerminal(os.Stdout.Fd()) ||\n\t\tisatty.IsCygwinTerminal(os.Stdout.Fd()) {\n\t\tlogWriter = conswriter.GetTerminal()\n\t} else {\n\t\tlogWriter = os.Stderr\n\t}\n\n\tswitch format {\n\tcase \"HUMAN\":\n\t\tw := zerolog.ConsoleWriter{\n\t\t\tOut:     logWriter,\n\t\t\tNoColor: true,\n\t\t}\n\t\tzLog = zerolog.New(w).With().Timestamp().Logger()\n\tcase \"JSON\":\n\t\tzLog = zerolog.New(logWriter).With().Timestamp().Logger()\n\tdefault:\n\t\treturn fmt.Errorf(\"unsupported log format: %q (supported formats: %s)\", format,\n\t\t\tstrings.Join(acceptedLogFormat, \" \"))\n\t}\n\n\tlog.Logger = zLog\n\tstdlog.SetFlags(0)\n\tstdlog.SetOutput(zLog)\n\n\treturn nil\n}\n"
  },
  {
    "path": "main.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage main\n\nimport (\n\t\"fmt\"\n\t\"os\"\n\n\t\"github.com/mitchellh/cli\"\n)\n\nfunc main() {\n\tos.Exit(Run(os.Args[1:]))\n}\n\n// Run sets up the commands and triggers RunCustom which invokes the correct\n// run of Levant.\nfunc Run(args []string) int {\n\treturn RunCustom(args, Commands(nil))\n}\n\n// RunCustom is the main function to trigger a run of Levant.\nfunc RunCustom(args []string, commands map[string]cli.CommandFactory) int {\n\t// Get the command line args. We shortcut \"--version\" and \"-v\" to\n\t// just show the version.\n\tfor _, arg := range args {\n\t\tif arg == \"-v\" || arg == \"-version\" || arg == \"--version\" {\n\t\t\tnewArgs := make([]string, len(args)+1)\n\t\t\tnewArgs[0] = \"version\"\n\t\t\tcopy(newArgs[1:], args)\n\t\t\targs = newArgs\n\t\t\tbreak\n\t\t}\n\t}\n\n\t// Build the commands to include in the help now.\n\tcommandsInclude := make([]string, 0, len(commands))\n\tfor k := range commands {\n\t\tcommandsInclude = append(commandsInclude, k)\n\t}\n\n\tcli := &cli.CLI{\n\t\tArgs:     args,\n\t\tCommands: commands,\n\t\tHelpFunc: cli.FilteredHelpFunc(commandsInclude, cli.BasicHelpFunc(\"levant\")),\n\t}\n\n\texitCode, err := cli.Run()\n\tif err != nil {\n\t\t_, pErr := fmt.Fprintf(os.Stderr, \"Error executing CLI: %s\\n\", err.Error())\n\n\t\t// If we are unable to log to stderr; try just printing the error to\n\t\t// provide some insight.\n\t\tif pErr != nil {\n\t\t\tfmt.Print(pErr)\n\t\t}\n\n\t\treturn 1\n\t}\n\n\treturn exitCode\n}\n"
  },
  {
    "path": "scale/scale.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage scale\n\nimport (\n\t\"github.com/hashicorp/levant/client\"\n\t\"github.com/hashicorp/levant/levant\"\n\t\"github.com/hashicorp/levant/levant/structs\"\n\tnomad \"github.com/hashicorp/nomad/api\"\n\t\"github.com/rs/zerolog/log\"\n)\n\n// Config is the set of config structs required to run a Levant scale.\ntype Config struct {\n\tClient *structs.ClientConfig\n\tScale  *structs.ScaleConfig\n}\n\n// TriggerScalingEvent provides the exported entry point into performing a job\n// scale based on user inputs.\nfunc TriggerScalingEvent(config *Config) bool {\n\n\t// Add the JobID as a log context field.\n\tlog.Logger = log.With().Str(structs.JobIDContextField, config.Scale.JobID).Logger()\n\n\tnomadClient, err := client.NewNomadClient(config.Client.Addr)\n\tif err != nil {\n\t\tlog.Error().Msg(\"levant/scale: unable to setup Levant scaling event\")\n\t\treturn false\n\t}\n\n\tjob := updateJob(nomadClient, config)\n\tif job == nil {\n\t\tlog.Error().Msg(\"levant/scale: unable to perform job count update\")\n\t\treturn false\n\t}\n\n\t// Set up a deployment object, as a scaling event is a deployment and should\n\t// go through the same process and code paths as an upgrade.\n\tdeploymentConfig := &levant.DeployConfig{}\n\tdeploymentConfig.Template = &structs.TemplateConfig{Job: job}\n\tdeploymentConfig.Client = config.Client\n\tdeploymentConfig.Deploy = &structs.DeployConfig{ForceCount: true}\n\n\tlog.Info().Msg(\"levant/scale: job will now be deployed with updated counts\")\n\n\t// Trigger a deployment of the updated job which results in the scaling of\n\t// the job and will go through all the deployment tracking until an end\n\t// state is reached.\n\treturn levant.TriggerDeployment(deploymentConfig, nomadClient)\n}\n\n// updateJob gathers information on the current state of the running job and\n// along with the user defined input updates the in-memory job specification\n// to reflect the desired scaled state.\nfunc updateJob(client *nomad.Client, config *Config) *nomad.Job {\n\n\tjob, _, err := client.Jobs().Info(config.Scale.JobID, nil)\n\tif err != nil {\n\t\tlog.Error().Err(err).Msg(\"levant/scale: unable to obtain job information from Nomad\")\n\t\treturn nil\n\t}\n\n\t// You can't scale a job that isn't running; or at least you shouldn't in\n\t// my current opinion.\n\tif *job.Status != \"running\" {\n\t\tlog.Error().Msg(\"levant/scale: job is not in running state\")\n\t\treturn nil\n\t}\n\n\tfor _, group := range job.TaskGroups {\n\n\t\t// If the user has specified a taskgroup to scale, ensure we only change\n\t\t// the count of that specific group.\n\t\tif config.Scale.TaskGroup != \"\" {\n\t\t\tif *group.Name == config.Scale.TaskGroup {\n\t\t\t\tlog.Debug().Msgf(\"levant/scale: scaling action to be requested on taskgroup %s only\",\n\t\t\t\t\tconfig.Scale.TaskGroup)\n\t\t\t\tupdateTaskGroup(config, group)\n\t\t\t}\n\n\t\t\t// If no taskgroup has been specified, all found will have their\n\t\t\t// count updated.\n\t\t} else {\n\t\t\tlog.Debug().Msg(\"levant/scale: scaling action requested on all taskgroups\")\n\t\t\tupdateTaskGroup(config, group)\n\t\t}\n\t}\n\n\treturn job\n}\n\n// updateTaskGroup is tasked with performing the count update based on the user\n// configuration when a group is identified as being marked for scaling.\nfunc updateTaskGroup(config *Config, group *nomad.TaskGroup) {\n\n\tvar c int\n\n\t// If a percentage scale value has been passed, we must convert this to an\n\t// int which represents the count to scale by as Nomad job submissions must\n\t// be done with group counts as desired ints.\n\tswitch config.Scale.DirectionType {\n\tcase structs.ScalingDirectionTypeCount:\n\t\tc = config.Scale.Count\n\tcase structs.ScalingDirectionTypePercent:\n\t\tc = calculateCountBasedOnPercent(*group.Count, config.Scale.Percent)\n\t}\n\n\t// Depending on whether we are scaling-out or scaling-in we need to perform\n\t// the correct maths. There is a little duplication here, but that is to\n\t// provide better logging.\n\tswitch config.Scale.Direction {\n\tcase structs.ScalingDirectionOut:\n\t\tnc := *group.Count + c\n\t\tlog.Info().Msgf(\"levant/scale: task group %s will scale-out from %v to %v\",\n\t\t\t*group.Name, *group.Count, nc)\n\t\t*group.Count = nc\n\n\tcase structs.ScalingDirectionIn:\n\t\tnc := *group.Count - c\n\t\tlog.Info().Msgf(\"levant/scale: task group %s will scale-in from %v to %v\",\n\t\t\t*group.Name, *group.Count, nc)\n\t\t*group.Count = nc\n\t}\n}\n\n// calculateCountBasedOnPercent is a small helper function to turn a percentage\n// based scale event into a relative count.\nfunc calculateCountBasedOnPercent(count, percent int) int {\n\tn := (float64(count) / 100) * float64(percent)\n\treturn int(n + 0.5)\n}\n"
  },
  {
    "path": "scale/scale_test.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage scale\n\nimport (\n\t\"testing\"\n\n\t\"github.com/hashicorp/levant/levant/structs\"\n\tnomad \"github.com/hashicorp/nomad/api\"\n)\n\nfunc TestScale_updateTaskGroup(t *testing.T) {\n\n\tsOut := structs.ScalingDirectionOut\n\tsIn := structs.ScalingDirectionIn\n\tsCount := structs.ScalingDirectionTypeCount\n\tsPercent := structs.ScalingDirectionTypePercent\n\n\tcases := []struct {\n\t\tConfig   *Config\n\t\tGroup    *nomad.TaskGroup\n\t\tEndCount int\n\t}{\n\t\t{\n\t\t\tbuildScalingConfig(sOut, sCount, 100),\n\t\t\tbuildTaskGroup(1000),\n\t\t\t1100,\n\t\t},\n\t\t{\n\t\t\tbuildScalingConfig(sOut, sPercent, 25),\n\t\t\tbuildTaskGroup(100),\n\t\t\t125,\n\t\t},\n\t\t{\n\t\t\tbuildScalingConfig(sIn, sCount, 900),\n\t\t\tbuildTaskGroup(901),\n\t\t\t1,\n\t\t},\n\t\t{\n\t\t\tbuildScalingConfig(sIn, sPercent, 90),\n\t\t\tbuildTaskGroup(100),\n\t\t\t10,\n\t\t},\n\t}\n\n\tfor _, tc := range cases {\n\t\tupdateTaskGroup(tc.Config, tc.Group)\n\n\t\tif tc.EndCount != *tc.Group.Count {\n\t\t\tt.Fatalf(\"got: %#v, expected %#v\", *tc.Group.Count, tc.EndCount)\n\t\t}\n\t}\n}\n\nfunc TestScale_calculateCountBasedOnPercent(t *testing.T) {\n\n\tcases := []struct {\n\t\tCount   int\n\t\tPercent int\n\t\tOutput  int\n\t}{\n\t\t{\n\t\t\t100,\n\t\t\t50,\n\t\t\t50,\n\t\t},\n\t\t{\n\t\t\t3,\n\t\t\t33,\n\t\t\t1,\n\t\t},\n\t\t{\n\t\t\t3,\n\t\t\t10,\n\t\t\t0,\n\t\t},\n\t}\n\n\tfor _, tc := range cases {\n\t\toutput := calculateCountBasedOnPercent(tc.Count, tc.Percent)\n\n\t\tif output != tc.Output {\n\t\t\tt.Fatalf(\"got: %#v, expected %#v\", output, tc.Output)\n\t\t}\n\t}\n}\n\nfunc buildScalingConfig(direction, dType string, number int) *Config {\n\n\tc := &Config{\n\t\tScale: &structs.ScaleConfig{\n\t\t\tDirection:     direction,\n\t\t\tDirectionType: dType,\n\t\t},\n\t}\n\n\tswitch dType {\n\tcase structs.ScalingDirectionTypeCount:\n\t\tc.Scale.Count = number\n\tcase 
structs.ScalingDirectionTypePercent:\n\t\tc.Scale.Percent = number\n\t}\n\n\treturn c\n}\n\nfunc buildTaskGroup(count int) *nomad.TaskGroup {\n\n\tn := \"LevantTest\"\n\tc := count\n\n\tt := &nomad.TaskGroup{\n\t\tName:  &n,\n\t\tCount: &c,\n\t}\n\n\treturn t\n}\n"
  },
  {
    "path": "scripts/docker-entrypoint.sh",
    "content": "#!/usr/bin/env sh\n# Copyright (c) HashiCorp, Inc.\n# SPDX-License-Identifier: MPL-2.0\n\n\nset -e\n\nif [ \"$1\" = 'levant' ]; then\n  shift\nfi\n\nexec levant \"$@\"\n"
  },
  {
    "path": "scripts/version.sh",
    "content": "#!/usr/bin/env bash\n# Copyright (c) HashiCorp, Inc.\n# SPDX-License-Identifier: MPL-2.0\n\n\nversion_file=$1\nversion_metadata_file=$2\nversion=$(awk '$1 == \"Version\" && $2 == \"=\" { gsub(/\"/, \"\", $3); print $3 }' <\"${version_file}\")\nprerelease=$(awk '$1 == \"VersionPrerelease\" && $2 == \"=\" { gsub(/\"/, \"\", $3); print $3 }' <\"${version_file}\")\nmetadata=$(awk '$1 == \"VersionMetadata\" && $2 == \"=\" { gsub(/\"/, \"\", $3); print $3 }' <\"${version_metadata_file}\")\n\nif [ -n \"$metadata\" ] && [ -n \"$prerelease\" ]; then\n    echo \"${version}-${prerelease}+${metadata}\"\nelif [ -n \"$metadata\" ]; then\n    echo \"${version}+${metadata}\"\nelif [ -n \"$prerelease\" ]; then\n    echo \"${version}-${prerelease}\"\nelse\n    echo \"${version}\"\nfi\n"
  },
  {
    "path": "template/funcs.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage template\n\nimport (\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"math\"\n\t\"os\"\n\t\"reflect\"\n\t\"strconv\"\n\t\"strings\"\n\t\"text/template\"\n\t\"time\"\n\t\"unicode\"\n\t\"unicode/utf8\"\n\n\t\"github.com/Masterminds/sprig/v3\"\n\tspewLib \"github.com/davecgh/go-spew/spew\"\n\tconsul \"github.com/hashicorp/consul/api\"\n\t\"github.com/rs/zerolog/log\"\n)\n\n// funcMap builds the template functions and passes the consulClient where this\n// is required.\nfunc funcMap(consulClient *consul.Client) template.FuncMap {\n\tr := template.FuncMap{\n\t\t\"consulKey\":          consulKeyFunc(consulClient),\n\t\t\"consulKeyExists\":    consulKeyExistsFunc(consulClient),\n\t\t\"consulKeyOrDefault\": consulKeyOrDefaultFunc(consulClient),\n\t\t\"env\":                envFunc(),\n\t\t\"fileContents\":       fileContents(),\n\t\t\"loop\":               loop,\n\t\t\"parseBool\":          parseBool,\n\t\t\"parseFloat\":         parseFloat,\n\t\t\"parseInt\":           parseInt,\n\t\t\"parseJSON\":          parseJSON,\n\t\t\"parseUint\":          parseUint,\n\t\t\"replace\":            replace,\n\t\t\"timeNow\":            timeNowFunc,\n\t\t\"timeNowUTC\":         timeNowUTCFunc,\n\t\t\"timeNowTimezone\":    timeNowTimezoneFunc(),\n\t\t\"toLower\":            toLower,\n\t\t\"toUpper\":            toUpper,\n\n\t\t// Maths.\n\t\t\"add\":      add,\n\t\t\"subtract\": subtract,\n\t\t\"multiply\": multiply,\n\t\t\"divide\":   divide,\n\t\t\"modulo\":   modulo,\n\n\t\t// Case Helpers\n\t\t\"firstRuneToUpper\": firstRuneToUpper,\n\t\t\"firstRuneToLower\": firstRuneToLower,\n\t\t\"runeToUpper\":      runeToUpper,\n\t\t\"runeToLower\":      runeToLower,\n\n\t\t//debug.\n\t\t\"spewDump\":   spewDump,\n\t\t\"spewPrintf\": spewPrintf,\n\t}\n\t// Add the Sprig functions to the funcmap\n\tfor k, v := range sprig.FuncMap() {\n\t\t// if there is a name conflict, favor sprig and rename 
original version\n\t\tif origFun, ok := r[k]; ok {\n\t\t\tif name, err := firstRuneToUpper(k); err == nil {\n\t\t\t\tname = \"levant\" + name\n\t\t\t\tlog.Debug().Msgf(\"template/funcs: renaming \\\"%v\\\" function to \\\"%v\\\"\", k, name)\n\t\t\t\tr[name] = origFun\n\t\t\t} else {\n\t\t\t\tlog.Error().Msgf(\"template/funcs: could not add \\\"%v\\\" function. error:%v\", k, err)\n\t\t\t}\n\t\t}\n\t\tr[k] = v\n\t}\n\tr[\"sprigVersion\"] = sprigVersionFunc\n\n\treturn r\n}\n\n// SprigVersion contains the semver of the included sprig library\n// it is used in command/version and provided in the sprig_version\n// template function\nconst SprigVersion = \"3.1.0\"\n\nfunc sprigVersionFunc() func(string) (string, error) {\n\treturn func(_ string) (string, error) {\n\t\treturn SprigVersion, nil\n\t}\n}\n\nfunc consulKeyFunc(consulClient *consul.Client) func(string) (string, error) {\n\treturn func(s string) (string, error) {\n\n\t\tif len(s) == 0 {\n\t\t\treturn \"\", nil\n\t\t}\n\n\t\tkv, _, err := consulClient.KV().Get(s, nil)\n\t\tif err != nil {\n\t\t\treturn \"\", err\n\t\t}\n\n\t\tif kv == nil {\n\t\t\treturn \"\", errors.New(\"Consul KV not found\")\n\t\t}\n\n\t\tv := string(kv.Value[:])\n\t\tlog.Info().Msgf(\"template/funcs: using Consul KV variable with key %s and value %s\",\n\t\t\ts, v)\n\n\t\treturn v, nil\n\t}\n}\n\nfunc consulKeyExistsFunc(consulClient *consul.Client) func(string) (bool, error) {\n\treturn func(s string) (bool, error) {\n\n\t\tif len(s) == 0 {\n\t\t\treturn false, nil\n\t\t}\n\n\t\tkv, _, err := consulClient.KV().Get(s, nil)\n\t\tif err != nil {\n\t\t\treturn false, err\n\t\t}\n\n\t\tif kv == nil {\n\t\t\treturn false, nil\n\t\t}\n\n\t\tlog.Info().Msgf(\"template/funcs: found Consul KV variable with key %s\", s)\n\n\t\treturn true, nil\n\t}\n}\n\nfunc consulKeyOrDefaultFunc(consulClient *consul.Client) func(string, string) (string, error) {\n\treturn func(s, d string) (string, error) {\n\n\t\tif len(s) == 0 
{\n\t\t\tlog.Info().Msgf(\"template/funcs: using default Consul KV variable with value %s\", d)\n\t\t\treturn d, nil\n\t\t}\n\n\t\tkv, _, err := consulClient.KV().Get(s, nil)\n\t\tif err != nil {\n\t\t\treturn \"\", err\n\t\t}\n\n\t\tif kv == nil {\n\t\t\tlog.Info().Msgf(\"template/funcs: using default Consul KV variable with value %s\", d)\n\t\t\treturn d, nil\n\t\t}\n\n\t\tv := string(kv.Value[:])\n\t\tlog.Info().Msgf(\"template/funcs: using Consul KV variable with key %s and value %s\",\n\t\t\ts, v)\n\n\t\treturn v, nil\n\t}\n}\n\nfunc loop(ints ...int64) (<-chan int64, error) {\n\tvar start, stop int64\n\tswitch len(ints) {\n\tcase 1:\n\t\tstart, stop = 0, ints[0]\n\tcase 2:\n\t\tstart, stop = ints[0], ints[1]\n\tdefault:\n\t\treturn nil, fmt.Errorf(\"loop: wrong number of arguments, expected 1 or 2\"+\n\t\t\t\", but got %d\", len(ints))\n\t}\n\n\tch := make(chan int64)\n\n\tgo func() {\n\t\tfor i := start; i < stop; i++ {\n\t\t\tch <- i\n\t\t}\n\t\tclose(ch)\n\t}()\n\n\treturn ch, nil\n}\n\nfunc parseBool(s string) (bool, error) {\n\tif s == \"\" {\n\t\treturn false, nil\n\t}\n\n\tresult, err := strconv.ParseBool(s)\n\tif err != nil {\n\t\treturn false, err\n\t}\n\treturn result, nil\n}\n\nfunc parseFloat(s string) (float64, error) {\n\tif s == \"\" {\n\t\treturn 0.0, nil\n\t}\n\n\tresult, err := strconv.ParseFloat(s, 64)\n\tif err != nil {\n\t\treturn 0, err\n\t}\n\treturn result, nil\n}\n\nfunc parseInt(s string) (int64, error) {\n\tif s == \"\" {\n\t\treturn 0, nil\n\t}\n\n\tresult, err := strconv.ParseInt(s, 10, 64)\n\tif err != nil {\n\t\treturn 0, err\n\t}\n\treturn result, nil\n}\n\nfunc parseJSON(s string) (interface{}, error) {\n\tif s == \"\" {\n\t\treturn map[string]interface{}{}, nil\n\t}\n\n\tvar data interface{}\n\tif err := json.Unmarshal([]byte(s), &data); err != nil {\n\t\treturn nil, err\n\t}\n\treturn data, nil\n}\n\nfunc parseUint(s string) (uint64, error) {\n\tif s == \"\" {\n\t\treturn 0, nil\n\t}\n\n\tresult, err := strconv.ParseUint(s, 
10, 64)\n\tif err != nil {\n\t\treturn 0, err\n\t}\n\treturn result, nil\n}\n\nfunc replace(input, from, to string) string {\n\treturn strings.Replace(input, from, to, -1)\n}\n\nfunc timeNowFunc() string {\n\treturn time.Now().Format(\"2006-01-02T15:04:05Z07:00\")\n}\n\nfunc timeNowUTCFunc() string {\n\treturn time.Now().UTC().Format(\"2006-01-02T15:04:05Z07:00\")\n}\n\nfunc timeNowTimezoneFunc() func(string) (string, error) {\n\treturn func(t string) (string, error) {\n\n\t\tif t == \"\" {\n\t\t\treturn \"\", nil\n\t\t}\n\n\t\tloc, err := time.LoadLocation(t)\n\t\tif err != nil {\n\t\t\treturn \"\", err\n\t\t}\n\n\t\treturn time.Now().In(loc).Format(\"2006-01-02T15:04:05Z07:00\"), nil\n\t}\n}\n\nfunc toLower(s string) (string, error) {\n\treturn strings.ToLower(s), nil\n}\n\nfunc toUpper(s string) (string, error) {\n\treturn strings.ToUpper(s), nil\n}\n\nfunc envFunc() func(string) (string, error) {\n\treturn func(s string) (string, error) {\n\t\tif s == \"\" {\n\t\t\treturn \"\", nil\n\t\t}\n\t\treturn os.Getenv(s), nil\n\t}\n}\n\nfunc fileContents() func(string) (string, error) {\n\treturn func(s string) (string, error) {\n\t\tif s == \"\" {\n\t\t\treturn \"\", nil\n\t\t}\n\t\tcontents, err := os.ReadFile(s)\n\t\tif err != nil {\n\t\t\treturn \"\", err\n\t\t}\n\t\treturn string(contents), nil\n\t}\n}\n\nfunc add(b, a interface{}) (interface{}, error) {\n\tav := reflect.ValueOf(a)\n\tbv := reflect.ValueOf(b)\n\n\tswitch av.Kind() {\n\tcase reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:\n\t\tswitch bv.Kind() {\n\t\tcase reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:\n\t\t\treturn av.Int() + bv.Int(), nil\n\t\tcase reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:\n\t\t\tub := bv.Uint()\n\t\t\tif ub > math.MaxInt {\n\t\t\t\treturn nil, fmt.Errorf(\"uint value overflows int\")\n\t\t\t}\n\t\t\treturn av.Int() + int64(ub), nil\n\t\tcase reflect.Float32, reflect.Float64:\n\t\t\treturn 
float64(av.Int()) + bv.Float(), nil\n\t\tdefault:\n\t\t\treturn nil, fmt.Errorf(\"add: unknown type for %q (%T)\", bv, b)\n\t\t}\n\tcase reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:\n\t\tswitch bv.Kind() {\n\t\tcase reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:\n\t\t\tua := av.Uint()\n\t\t\tif ua > math.MaxInt {\n\t\t\t\treturn nil, fmt.Errorf(\"uint value overflows int\")\n\t\t\t}\n\t\t\treturn int64(ua) + bv.Int(), nil\n\t\tcase reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:\n\t\t\treturn av.Uint() + bv.Uint(), nil\n\t\tcase reflect.Float32, reflect.Float64:\n\t\t\treturn float64(av.Uint()) + bv.Float(), nil\n\t\tdefault:\n\t\t\treturn nil, fmt.Errorf(\"add: unknown type for %q (%T)\", bv, b)\n\t\t}\n\tcase reflect.Float32, reflect.Float64:\n\t\tswitch bv.Kind() {\n\t\tcase reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:\n\t\t\treturn av.Float() + float64(bv.Int()), nil\n\t\tcase reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:\n\t\t\treturn av.Float() + float64(bv.Uint()), nil\n\t\tcase reflect.Float32, reflect.Float64:\n\t\t\treturn av.Float() + bv.Float(), nil\n\t\tdefault:\n\t\t\treturn nil, fmt.Errorf(\"add: unknown type for %q (%T)\", bv, b)\n\t\t}\n\tdefault:\n\t\treturn nil, fmt.Errorf(\"add: unknown type for %q (%T)\", av, a)\n\t}\n}\n\nfunc subtract(b, a interface{}) (interface{}, error) {\n\tav := reflect.ValueOf(a)\n\tbv := reflect.ValueOf(b)\n\n\tswitch av.Kind() {\n\tcase reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:\n\t\tswitch bv.Kind() {\n\t\tcase reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:\n\t\t\treturn av.Int() - bv.Int(), nil\n\t\tcase reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:\n\t\t\tub := bv.Uint()\n\t\t\tif ub > math.MaxInt {\n\t\t\t\treturn nil, fmt.Errorf(\"uint value overflows int\")\n\t\t\t}\n\t\t\treturn 
av.Int() - int64(ub), nil\n\t\tcase reflect.Float32, reflect.Float64:\n\t\t\treturn float64(av.Int()) - bv.Float(), nil\n\t\tdefault:\n\t\t\treturn nil, fmt.Errorf(\"subtract: unknown type for %q (%T)\", bv, b)\n\t\t}\n\tcase reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:\n\t\tswitch bv.Kind() {\n\t\tcase reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:\n\t\t\tua := av.Uint()\n\t\t\tif ua > math.MaxInt {\n\t\t\t\treturn nil, fmt.Errorf(\"uint value overflows int\")\n\t\t\t}\n\t\t\treturn int64(ua) - bv.Int(), nil\n\t\tcase reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:\n\t\t\treturn av.Uint() - bv.Uint(), nil\n\t\tcase reflect.Float32, reflect.Float64:\n\t\t\treturn float64(av.Uint()) - bv.Float(), nil\n\t\tdefault:\n\t\t\treturn nil, fmt.Errorf(\"subtract: unknown type for %q (%T)\", bv, b)\n\t\t}\n\tcase reflect.Float32, reflect.Float64:\n\t\tswitch bv.Kind() {\n\t\tcase reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:\n\t\t\treturn av.Float() - float64(bv.Int()), nil\n\t\tcase reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:\n\t\t\treturn av.Float() - float64(bv.Uint()), nil\n\t\tcase reflect.Float32, reflect.Float64:\n\t\t\treturn av.Float() - bv.Float(), nil\n\t\tdefault:\n\t\t\treturn nil, fmt.Errorf(\"subtract: unknown type for %q (%T)\", bv, b)\n\t\t}\n\tdefault:\n\t\treturn nil, fmt.Errorf(\"subtract: unknown type for %q (%T)\", av, a)\n\t}\n}\n\nfunc multiply(b, a interface{}) (interface{}, error) {\n\tav := reflect.ValueOf(a)\n\tbv := reflect.ValueOf(b)\n\n\tswitch av.Kind() {\n\tcase reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:\n\t\tswitch bv.Kind() {\n\t\tcase reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:\n\t\t\treturn av.Int() * bv.Int(), nil\n\t\tcase reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:\n\t\t\tub := bv.Uint()\n\t\t\tif ub > 
math.MaxInt {\n\t\t\t\treturn nil, fmt.Errorf(\"uint value overflows int\")\n\t\t\t}\n\t\t\treturn av.Int() * int64(ub), nil\n\t\tcase reflect.Float32, reflect.Float64:\n\t\t\treturn float64(av.Int()) * bv.Float(), nil\n\t\tdefault:\n\t\t\treturn nil, fmt.Errorf(\"multiply: unknown type for %q (%T)\", bv, b)\n\t\t}\n\tcase reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:\n\t\tswitch bv.Kind() {\n\t\tcase reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:\n\t\t\tua := av.Uint()\n\t\t\tif ua > math.MaxInt {\n\t\t\t\treturn nil, fmt.Errorf(\"uint value overflows int\")\n\t\t\t}\n\t\t\treturn int64(ua) * bv.Int(), nil\n\t\tcase reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:\n\t\t\treturn av.Uint() * bv.Uint(), nil\n\t\tcase reflect.Float32, reflect.Float64:\n\t\t\treturn float64(av.Uint()) * bv.Float(), nil\n\t\tdefault:\n\t\t\treturn nil, fmt.Errorf(\"multiply: unknown type for %q (%T)\", bv, b)\n\t\t}\n\tcase reflect.Float32, reflect.Float64:\n\t\tswitch bv.Kind() {\n\t\tcase reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:\n\t\t\treturn av.Float() * float64(bv.Int()), nil\n\t\tcase reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:\n\t\t\treturn av.Float() * float64(bv.Uint()), nil\n\t\tcase reflect.Float32, reflect.Float64:\n\t\t\treturn av.Float() * bv.Float(), nil\n\t\tdefault:\n\t\t\treturn nil, fmt.Errorf(\"multiply: unknown type for %q (%T)\", bv, b)\n\t\t}\n\tdefault:\n\t\treturn nil, fmt.Errorf(\"multiply: unknown type for %q (%T)\", av, a)\n\t}\n}\n\nfunc divide(b, a interface{}) (interface{}, error) {\n\tav := reflect.ValueOf(a)\n\tbv := reflect.ValueOf(b)\n\n\tswitch av.Kind() {\n\tcase reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:\n\t\tswitch bv.Kind() {\n\t\tcase reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:\n\t\t\treturn av.Int() / bv.Int(), nil\n\t\tcase reflect.Uint, 
reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:\n\t\t\tub := bv.Uint()\n\t\t\tif ub > math.MaxInt {\n\t\t\t\treturn nil, fmt.Errorf(\"uint value overflows int\")\n\t\t\t}\n\t\t\treturn av.Int() / int64(ub), nil\n\t\tcase reflect.Float32, reflect.Float64:\n\t\t\treturn float64(av.Int()) / bv.Float(), nil\n\t\tdefault:\n\t\t\treturn nil, fmt.Errorf(\"divide: unknown type for %q (%T)\", bv, b)\n\t\t}\n\tcase reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:\n\t\tswitch bv.Kind() {\n\t\tcase reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:\n\t\t\tua := av.Uint()\n\t\t\tif ua > math.MaxInt {\n\t\t\t\treturn nil, fmt.Errorf(\"uint value overflows int\")\n\t\t\t}\n\t\t\treturn int64(ua) / bv.Int(), nil\n\t\tcase reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:\n\t\t\treturn av.Uint() / bv.Uint(), nil\n\t\tcase reflect.Float32, reflect.Float64:\n\t\t\treturn float64(av.Uint()) / bv.Float(), nil\n\t\tdefault:\n\t\t\treturn nil, fmt.Errorf(\"divide: unknown type for %q (%T)\", bv, b)\n\t\t}\n\tcase reflect.Float32, reflect.Float64:\n\t\tswitch bv.Kind() {\n\t\tcase reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:\n\t\t\treturn av.Float() / float64(bv.Int()), nil\n\t\tcase reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:\n\t\t\treturn av.Float() / float64(bv.Uint()), nil\n\t\tcase reflect.Float32, reflect.Float64:\n\t\t\treturn av.Float() / bv.Float(), nil\n\t\tdefault:\n\t\t\treturn nil, fmt.Errorf(\"divide: unknown type for %q (%T)\", bv, b)\n\t\t}\n\tdefault:\n\t\treturn nil, fmt.Errorf(\"divide: unknown type for %q (%T)\", av, a)\n\t}\n}\n\nfunc modulo(b, a interface{}) (interface{}, error) {\n\tav := reflect.ValueOf(a)\n\tbv := reflect.ValueOf(b)\n\n\tswitch av.Kind() {\n\tcase reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:\n\t\tswitch bv.Kind() {\n\t\tcase reflect.Int, reflect.Int8, reflect.Int16, 
reflect.Int32, reflect.Int64:\n\t\t\treturn av.Int() % bv.Int(), nil\n\t\tcase reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:\n\t\t\tub := bv.Uint()\n\t\t\tif ub > math.MaxInt {\n\t\t\t\treturn nil, fmt.Errorf(\"uint value overflows int\")\n\t\t\t}\n\t\t\treturn av.Int() % int64(ub), nil\n\t\tdefault:\n\t\t\treturn nil, fmt.Errorf(\"modulo: unknown type for %q (%T)\", bv, b)\n\t\t}\n\tcase reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:\n\t\tswitch bv.Kind() {\n\t\tcase reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:\n\t\t\tua := av.Uint()\n\t\t\tif ua > math.MaxInt {\n\t\t\t\treturn nil, fmt.Errorf(\"uint value overflows int\")\n\t\t\t}\n\t\t\treturn int64(ua) % bv.Int(), nil\n\t\tcase reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:\n\t\t\treturn av.Uint() % bv.Uint(), nil\n\t\tdefault:\n\t\t\treturn nil, fmt.Errorf(\"modulo: unknown type for %q (%T)\", bv, b)\n\t\t}\n\tdefault:\n\t\treturn nil, fmt.Errorf(\"modulo: unknown type for %q (%T)\", av, a)\n\t}\n}\n\nfunc firstRuneToUpper(s string) (string, error) {\n\treturn runeToUpper(s, 0)\n}\n\nfunc runeToUpper(inString string, runeIndex int) (string, error) {\n\treturn funcOnRune(unicode.ToUpper, inString, runeIndex)\n}\n\nfunc firstRuneToLower(s string) (string, error) {\n\treturn runeToLower(s, 0)\n}\n\nfunc runeToLower(inString string, runeIndex int) (string, error) {\n\treturn funcOnRune(unicode.ToLower, inString, runeIndex)\n}\n\nfunc funcOnRune(inFunc func(rune) rune, inString string, runeIndex int) (string, error) {\n\tif !utf8.ValidString(inString) {\n\t\treturn \"\", errors.New(\"funcOnRune: not a valid UTF-8 string\")\n\t}\n\n\truneCount := utf8.RuneCountInString(inString)\n\n\tif runeIndex > runeCount-1 || runeIndex < 0 {\n\t\treturn \"\", fmt.Errorf(\"funcOnRune: runeIndex out of range (max:%v, provided:%v)\", runeCount-1, runeIndex)\n\t}\n\trunes := []rune(inString)\n\ttransformedRune := 
inFunc(runes[runeIndex])\n\n\tif runes[runeIndex] == transformedRune {\n\t\treturn inString, nil\n\t}\n\trunes[runeIndex] = transformedRune\n\treturn string(runes), nil\n}\n\nfunc spewDump(a interface{}) (string, error) {\n\treturn spewLib.Sdump(a), nil\n}\n\nfunc spewPrintf(format string, args ...interface{}) (string, error) {\n\t// Forward the variadic args rather than passing the slice as one value.\n\treturn spewLib.Sprintf(format, args...), nil\n}\n"
  },
  {
    "path": "template/render.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage template\n\nimport (\n\t\"bytes\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"os\"\n\t\"path\"\n\n\t\"github.com/hashicorp/levant/client\"\n\t\"github.com/hashicorp/levant/helper\"\n\tnomad \"github.com/hashicorp/nomad/api\"\n\t\"github.com/hashicorp/nomad/jobspec2\"\n\t\"github.com/hashicorp/terraform/configs\"\n\t\"github.com/hashicorp/terraform/configs/hcl2shim\"\n\t\"github.com/rs/zerolog/log\"\n\tyaml \"gopkg.in/yaml.v2\"\n)\n\n// RenderJob takes in a template and variables performing a render of the\n// template followed by Nomad jobspec parse.\nfunc RenderJob(templateFile string, variableFiles []string, addr string, flagVars *map[string]interface{}) (job *nomad.Job, err error) {\n\tvar tpl *bytes.Buffer\n\ttpl, err = RenderTemplate(templateFile, variableFiles, addr, flagVars)\n\tif err != nil {\n\t\treturn\n\t}\n\n\treturn jobspec2.Parse(\"\", tpl)\n}\n\n// RenderTemplate is the main entry point to render the template based on the\n// passed variables file.\nfunc RenderTemplate(templateFile string, variableFiles []string, addr string, flagVars *map[string]interface{}) (tpl *bytes.Buffer, err error) {\n\n\tt := &tmpl{}\n\tt.flagVariables = flagVars\n\tt.jobTemplateFile = templateFile\n\tt.variableFiles = variableFiles\n\n\tc, err := client.NewConsulClient(addr)\n\tif err != nil {\n\t\treturn\n\t}\n\n\tt.consulClient = c\n\n\tif len(t.variableFiles) == 0 {\n\t\tlog.Debug().Msgf(\"template/render: no variable file passed, trying defaults\")\n\t\tif defaultVarFile := helper.GetDefaultVarFile(); defaultVarFile != \"\" {\n\t\t\tt.variableFiles = []string{defaultVarFile}\n\t\t\tlog.Debug().Msgf(\"template/render: found default variable file, using %s\", defaultVarFile)\n\t\t}\n\t}\n\n\tmergedVariables := make(map[string]interface{})\n\tfor _, variableFile := range t.variableFiles {\n\t\t// Process the variable file extension and log DEBUG so the template can be\n\t\t// 
correctly rendered.\n\t\tvar ext string\n\t\tif ext = path.Ext(variableFile); ext != \"\" {\n\t\t\tlog.Debug().Msgf(\"template/render: variable file extension %s detected\", ext)\n\t\t}\n\n\t\tvar variables map[string]interface{}\n\t\tswitch ext {\n\t\tcase terraformVarExtension:\n\t\t\tvariables, err = t.parseTFVars(variableFile)\n\t\tcase yamlVarExtension, ymlVarExtension:\n\t\t\tvariables, err = t.parseYAMLVars(variableFile)\n\t\tcase jsonVarExtension:\n\t\t\tvariables, err = t.parseJSONVars(variableFile)\n\t\tdefault:\n\t\t\terr = fmt.Errorf(\"variables file extension %v not supported\", ext)\n\t\t}\n\n\t\tif err != nil {\n\t\t\treturn\n\t\t}\n\t\tfor k, v := range variables {\n\t\t\tmergedVariables[k] = v\n\t\t}\n\t}\n\n\tsrc, err := os.ReadFile(t.jobTemplateFile)\n\tif err != nil {\n\t\treturn\n\t}\n\n\t// If no command line variables are passed, log this at DEBUG to aid\n\t// debugging.\n\tif len(*t.flagVariables) == 0 {\n\t\tlog.Debug().Msgf(\"template/render: no command line variables passed\")\n\t}\n\n\ttpl, err = t.renderTemplate(string(src), mergedVariables)\n\n\treturn\n}\n\nfunc (t *tmpl) parseJSONVars(variableFile string) (variables map[string]interface{}, err error) {\n\n\tjsonFile, err := os.ReadFile(variableFile)\n\tif err != nil {\n\t\treturn\n\t}\n\n\tvariables = make(map[string]interface{})\n\tif err = json.Unmarshal(jsonFile, &variables); err != nil {\n\t\treturn\n\t}\n\n\treturn variables, nil\n}\n\nfunc (t *tmpl) parseTFVars(variableFile string) (map[string]interface{}, error) {\n\n\ttfParser := configs.NewParser(nil)\n\tloadedFile, loadDiags := tfParser.LoadConfigFile(variableFile)\n\tif loadDiags != nil && loadDiags.HasErrors() {\n\t\treturn nil, loadDiags\n\t}\n\tif loadedFile == nil {\n\t\treturn nil, fmt.Errorf(\"hcl returned nil file\")\n\t}\n\n\tvariables := make(map[string]interface{})\n\tfor _, variable := range loadedFile.Variables {\n\t\tvariables[variable.Name] = 
hcl2shim.ConfigValueFromHCL2(variable.Default)\n\t}\n\treturn variables, nil\n}\n\nfunc (t *tmpl) parseYAMLVars(variableFile string) (variables map[string]interface{}, err error) {\n\n\tyamlFile, err := os.ReadFile(variableFile)\n\tif err != nil {\n\t\treturn\n\t}\n\n\tvariables = make(map[string]interface{})\n\tif err = yaml.Unmarshal(yamlFile, &variables); err != nil {\n\t\treturn\n\t}\n\treturn variables, nil\n}\n\nfunc (t *tmpl) renderTemplate(src string, variables map[string]interface{}) (tpl *bytes.Buffer, err error) {\n\n\ttpl = &bytes.Buffer{}\n\n\t// Setup the template file for rendering\n\ttmpl := t.newTemplate()\n\tif tmpl, err = tmpl.Parse(src); err != nil {\n\t\treturn\n\t}\n\n\tif variables != nil {\n\t\t// Merge variables passed on the CLI with those passed through a variables file.\n\t\terr = tmpl.Execute(tpl, helper.VariableMerge(&variables, t.flagVariables))\n\t} else {\n\t\t// No variables file passed; render using any passed CLI variables.\n\t\tlog.Debug().Msgf(\"template/render: variable file not passed\")\n\t\terr = tmpl.Execute(tpl, t.flagVariables)\n\t}\n\n\treturn tpl, err\n}\n"
  },
  {
    "path": "template/render_test.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage template\n\nimport (\n\t\"os\"\n\t\"testing\"\n\n\tnomad \"github.com/hashicorp/nomad/api\"\n)\n\nconst (\n\ttestJobName           = \"levantExample\"\n\ttestJobNameOverwrite  = \"levantExampleOverwrite\"\n\ttestJobNameOverwrite2 = \"levantExampleOverwrite2\"\n\ttestDCName            = \"dc13\"\n\ttestEnvName           = \"GROUP_NAME_ENV\"\n\ttestEnvValue          = \"cache\"\n)\n\nfunc TestTemplater_RenderTemplate(t *testing.T) {\n\n\tvar job *nomad.Job\n\tvar err error\n\n\t// Start with an empty passed var args map.\n\tfVars := make(map[string]interface{})\n\n\t// Test basic TF template render.\n\tjob, err = RenderJob(\"test-fixtures/single_templated.nomad\", []string{\"test-fixtures/test.tf\"}, \"\", &fVars)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif *job.Name != testJobName {\n\t\tt.Fatalf(\"expected %s but got %v\", testJobName, *job.Name)\n\t}\n\tif *job.TaskGroups[0].Tasks[0].Resources.CPU != 1313 {\n\t\tt.Fatalf(\"expected CPU resource %v but got %v\", 1313, *job.TaskGroups[0].Tasks[0].Resources.CPU)\n\t}\n\n\t// Test basic YAML template render.\n\tjob, err = RenderJob(\"test-fixtures/single_templated.nomad\", []string{\"test-fixtures/test.yaml\"}, \"\", &fVars)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif *job.Name != testJobName {\n\t\tt.Fatalf(\"expected %s but got %v\", testJobName, *job.Name)\n\t}\n\tif *job.TaskGroups[0].Tasks[0].Resources.CPU != 1313 {\n\t\tt.Fatalf(\"expected CPU resource %v but got %v\", 1313, *job.TaskGroups[0].Tasks[0].Resources.CPU)\n\t}\n\n\t// Test multiple var-files\n\tjob, err = RenderJob(\"test-fixtures/single_templated.nomad\", []string{\"test-fixtures/test.yaml\", \"test-fixtures/test-overwrite.yaml\"}, \"\", &fVars)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif *job.Name != testJobNameOverwrite {\n\t\tt.Fatalf(\"expected %s but got %v\", testJobNameOverwrite, *job.Name)\n\t}\n\n\t// Test multiple var-files of 
different types\n\tjob, err = RenderJob(\"test-fixtures/single_templated.nomad\", []string{\"test-fixtures/test.tf\", \"test-fixtures/test-overwrite.yaml\"}, \"\", &fVars)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif *job.Name != testJobNameOverwrite {\n\t\tt.Fatalf(\"expected %s but got %v\", testJobNameOverwrite, *job.Name)\n\t}\n\n\t// Test multiple var-files with var-args\n\tfVars[\"job_name\"] = testJobNameOverwrite2\n\tjob, err = RenderJob(\"test-fixtures/single_templated.nomad\", []string{\"test-fixtures/test.tf\", \"test-fixtures/test-overwrite.yaml\"}, \"\", &fVars)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif *job.Name != testJobNameOverwrite2 {\n\t\tt.Fatalf(\"expected %s but got %v\", testJobNameOverwrite2, *job.Name)\n\t}\n\n\t// Test empty var-args and empty variable file render.\n\tjob, err = RenderJob(\"test-fixtures/none_templated.nomad\", []string{}, \"\", &fVars)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif *job.Name != testJobName {\n\t\tt.Fatalf(\"expected %s but got %v\", testJobName, *job.Name)\n\t}\n\n\t// Test var-args only render.\n\tfVars = map[string]interface{}{\"job_name\": testJobName, \"task_resource_cpu\": \"1313\"}\n\tjob, err = RenderJob(\"test-fixtures/single_templated.nomad\", []string{}, \"\", &fVars)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif *job.Name != testJobName {\n\t\tt.Fatalf(\"expected %s but got %v\", testJobName, *job.Name)\n\t}\n\tif *job.TaskGroups[0].Tasks[0].Resources.CPU != 1313 {\n\t\tt.Fatalf(\"expected CPU resource %v but got %v\", 1313, *job.TaskGroups[0].Tasks[0].Resources.CPU)\n\t}\n\n\t// Test var-args and variables file render.\n\tdelete(fVars, \"job_name\")\n\tfVars[\"datacentre\"] = testDCName\n\tos.Setenv(testEnvName, testEnvValue)\n\tjob, err = RenderJob(\"test-fixtures/multi_templated.nomad\", []string{\"test-fixtures/test.yaml\"}, \"\", &fVars)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif *job.Name != testJobName {\n\t\tt.Fatalf(\"expected %s but got %v\", testJobName, 
*job.Name)\n\t}\n\tif job.Datacenters[0] != testDCName {\n\t\tt.Fatalf(\"expected %s but got %v\", testDCName, job.Datacenters[0])\n\t}\n\tif *job.TaskGroups[0].Name != testEnvValue {\n\t\tt.Fatalf(\"expected %s but got %v\", testEnvValue, *job.TaskGroups[0].Name)\n\t}\n}\n"
  },
  {
    "path": "template/template.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage template\n\nimport (\n\t\"text/template\"\n\n\tconsul \"github.com/hashicorp/consul/api\"\n)\n\n// tmpl provides everything needed to fully render a job template using\n// inbuilt functions.\ntype tmpl struct {\n\tconsulClient    *consul.Client\n\tflagVariables   *map[string]interface{}\n\tjobTemplateFile string\n\tvariableFiles   []string\n}\n\nconst (\n\tjsonVarExtension      = \".json\"\n\tterraformVarExtension = \".tf\"\n\tyamlVarExtension      = \".yaml\"\n\tymlVarExtension       = \".yml\"\n\trightDelim            = \"]]\"\n\tleftDelim             = \"[[\"\n)\n\n// newTemplate returns an empty template with default options set\nfunc (t *tmpl) newTemplate() *template.Template {\n\ttmpl := template.New(\"jobTemplate\")\n\ttmpl.Delims(leftDelim, rightDelim)\n\ttmpl.Option(\"missingkey=zero\")\n\ttmpl.Funcs(funcMap(t.consulClient))\n\treturn tmpl\n}\n"
  },
  {
    "path": "template/test-fixtures/missing_var.nomad",
    "content": "# Copyright (c) HashiCorp, Inc.\n# SPDX-License-Identifier: MPL-2.0\n\njob \"[[.job_name]]\" {\n  datacenters = [\"dc1\"]\n  type = \"service\"\n  update {\n    max_parallel     = 1\n    min_healthy_time = \"10s\"\n    healthy_deadline = \"1m\"\n    auto_revert      = true\n  }\n\n  group \"cache\" {\n    count = 1\n    restart {\n      attempts = 10\n      interval = \"5m\"\n      delay = \"25s\"\n      mode = \"delay\"\n    }\n    ephemeral_disk {\n      size = 300\n    }\n    task \"redis\" {\n      artifact {\n        # .binary_url is not set\n        source = \"[[ .binary_url ]]\"\n      }\n\n      driver = \"docker\"\n      config {\n        image = \"redis:3.2\"\n        port_map {\n          db = 6379\n        }\n      }\n      resources {\n        cpu    = 500\n        memory = 256\n        network {\n          mbits = 10\n          port \"db\" {}\n        }\n      }\n      service {\n        name = \"global-redis-check\"\n        tags = [\"global\", \"cache\"]\n        port = \"db\"\n        check {\n          name     = \"alive\"\n          type     = \"tcp\"\n          interval = \"10s\"\n          timeout  = \"2s\"\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "template/test-fixtures/multi_templated.nomad",
    "content": "# Copyright (c) HashiCorp, Inc.\n# SPDX-License-Identifier: MPL-2.0\n\njob \"[[.job_name]]\" {\n  datacenters = [\"[[.datacentre]]\"]\n  type = \"service\"\n  update {\n    max_parallel     = 1\n    min_healthy_time = \"10s\"\n    healthy_deadline = \"1m\"\n    auto_revert      = true\n  }\n\n  group \"[[env \"GROUP_NAME_ENV\"]]\" {\n    count = 1\n    restart {\n      attempts = 10\n      interval = \"5m\"\n      delay = \"25s\"\n      mode = \"delay\"\n    }\n    ephemeral_disk {\n      size = 300\n    }\n    task \"redis\" {\n      driver = \"docker\"\n      config {\n        image = \"redis:3.2\"\n        port_map {\n          db = 6379\n        }\n      }\n      resources {\n        cpu    = 500\n        memory = 256\n        network {\n          mbits = 10\n          port \"db\" {}\n        }\n      }\n      service {\n        name = \"global-redis-check\"\n        tags = [\"global\", \"cache\"]\n        port = \"db\"\n        check {\n          name     = \"alive\"\n          type     = \"tcp\"\n          interval = \"10s\"\n          timeout  = \"2s\"\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "template/test-fixtures/none_templated.nomad",
    "content": "# Copyright (c) HashiCorp, Inc.\n# SPDX-License-Identifier: MPL-2.0\n\njob \"levantExample\" {\n  datacenters = [\"dc1\"]\n  type = \"service\"\n  update {\n    max_parallel     = 1\n    min_healthy_time = \"10s\"\n    healthy_deadline = \"1m\"\n    auto_revert      = true\n  }\n\n  group \"cache\" {\n    count = 1\n    restart {\n      attempts = 10\n      interval = \"5m\"\n      delay = \"25s\"\n      mode = \"delay\"\n    }\n    ephemeral_disk {\n      size = 300\n    }\n    task \"redis\" {\n      driver = \"docker\"\n      config {\n        image = \"redis:3.2\"\n        port_map {\n          db = 6379\n        }\n      }\n      resources {\n        cpu    = 500\n        memory = 256\n        network {\n          mbits = 10\n          port \"db\" {}\n        }\n      }\n      service {\n        name = \"global-redis-check\"\n        tags = [\"global\", \"cache\"]\n        port = \"db\"\n        check {\n          name     = \"alive\"\n          type     = \"tcp\"\n          interval = \"10s\"\n          timeout  = \"2s\"\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "template/test-fixtures/single_templated.nomad",
    "content": "# Copyright (c) HashiCorp, Inc.\n# SPDX-License-Identifier: MPL-2.0\n\njob \"[[.job_name]]\" {\n  datacenters = [\"dc1\"]\n  type = \"service\"\n  update {\n    max_parallel     = 1\n    min_healthy_time = \"10s\"\n    healthy_deadline = \"1m\"\n    auto_revert      = true\n  }\n\n  group \"cache\" {\n    count = 1\n    restart {\n      attempts = 10\n      interval = \"5m\"\n      delay = \"25s\"\n      mode = \"delay\"\n    }\n    ephemeral_disk {\n      size = 300\n    }\n    task \"redis\" {\n      template {\n        data = <<EOH\n        APP_ENV={{ key \"config/app/env\" }}\n        APP_DEBUG={{ key \"config/app/debug\" }}\n        APP_KEY={{ secret \"secret/key\" }}\n        APP_URL={{ key \"config/app/url\" }}\n        EOH\n        destination = \"core/.env\"\n        change_mode = \"noop\"\n      }\n\n      driver = \"docker\"\n      config {\n        image = \"redis:3.2\"\n        port_map {\n          db = 6379\n        }\n      }\n      resources {\n        cpu    = [[.task_resource_cpu]]\n        memory = 256\n        network {\n          mbits = 10\n          port \"db\" {}\n        }\n      }\n      service {\n        name = \"global-redis-check\"\n        tags = [\"global\", \"cache\"]\n        port = \"db\"\n        check {\n          name     = \"alive\"\n          type     = \"tcp\"\n          interval = \"10s\"\n          timeout  = \"2s\"\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "template/test-fixtures/test-overwrite.yaml",
    "content": "# Copyright (c) HashiCorp, Inc.\n# SPDX-License-Identifier: MPL-2.0\n\njob_name: levantExampleOverwrite\n"
  },
  {
    "path": "template/test-fixtures/test.tf",
    "content": "# Copyright (c) HashiCorp, Inc.\n# SPDX-License-Identifier: MPL-2.0\n\nvariable \"job_name\" {\n  default = \"levantExample\"\n}\n\nvariable \"task_resource_cpu\" {\n  description = \"the CPU for the task\"\n  type        = number\n  default     = 1313\n}\n"
  },
  {
    "path": "template/test-fixtures/test.yaml",
    "content": "# Copyright (c) HashiCorp, Inc.\n# SPDX-License-Identifier: MPL-2.0\n\njob_name: levantExample\ntask_resource_cpu: 1313\n"
  },
  {
    "path": "test/acctest/acctest.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage acctest\n\nimport (\n\t\"fmt\"\n\t\"testing\"\n\n\tnomad \"github.com/hashicorp/nomad/api\"\n)\n\n// TestCase is a single test of levant\ntype TestCase struct {\n\t// Steps are run in order, stopping on failure\n\tSteps []TestStep\n\n\t// SetupFunc is called before the steps\n\tSetupFunc TestStateFunc\n\n\t// CleanupFunc is called at the end of the TestCase\n\tCleanupFunc TestStateFunc\n}\n\n// TestStep is a single step within a TestCase\ntype TestStep struct {\n\t// Runner is used to execute the step; can be nil for a check-only step\n\tRunner TestStepRunner\n\n\t// Check is called after Runner if it does not fail\n\tCheck TestStateFunc\n\n\t// ExpectErr allows Runner to fail; use CheckErr to confirm the error is correct\n\tExpectErr bool\n\n\t// CheckErr is called if Runner fails and ExpectErr is true\n\tCheckErr func(error) bool\n}\n\n// TestStepRunner models a runner for a TestStep\ntype TestStepRunner interface {\n\t// Run executes the levant feature under testing\n\tRun(*TestState) error\n}\n\n// TestStateFunc is used to verify acceptance test criteria\ntype TestStateFunc func(*TestState) error\n\n// TestState is the configuration for the TestCase\ntype TestState struct {\n\tJobName   string\n\tNamespace string\n\tNomad     *nomad.Client\n}\n\n// Test executes a single TestCase\nfunc Test(t *testing.T, c TestCase) {\n\tif len(c.Steps) < 1 {\n\t\tt.Fatal(\"must have at least one test step\")\n\t}\n\n\tnomad, err := newNomadClient()\n\tif err != nil {\n\t\tt.Fatalf(\"failed to create nomad client: %s\", err)\n\t}\n\n\tstate := &TestState{\n\t\tJobName: fmt.Sprintf(\"levant-%s\", t.Name()),\n\t\tNomad:   nomad,\n\t}\n\n\tif c.SetupFunc != nil {\n\t\tif err := c.SetupFunc(state); err != nil {\n\t\t\tt.Errorf(\"setup failed: %s\", err)\n\t\t}\n\t}\n\n\tfor i, step := range c.Steps {\n\t\tstepNum := i + 1\n\n\t\tif step.Runner != nil {\n\t\t\terr = 
step.Runner.Run(state)\n\t\t\tif err != nil {\n\t\t\t\tif !step.ExpectErr {\n\t\t\t\t\tt.Errorf(\"step %d/%d failed: %s\", stepNum, len(c.Steps), err)\n\t\t\t\t\tbreak\n\t\t\t\t}\n\n\t\t\t\tif step.CheckErr != nil {\n\t\t\t\t\tok := step.CheckErr(err)\n\t\t\t\t\tif !ok {\n\t\t\t\t\t\tt.Errorf(\"step %d/%d CheckErr failed: %s\", stepNum, len(c.Steps), err)\n\t\t\t\t\t\tbreak\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tif step.Check != nil {\n\t\t\terr = step.Check(state)\n\t\t\tif err != nil {\n\t\t\t\tt.Errorf(\"step %d/%d Check failed: %s\", stepNum, len(c.Steps), err)\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t}\n\n\tif c.CleanupFunc != nil {\n\t\terr = c.CleanupFunc(state)\n\t\tif err != nil {\n\t\t\tt.Errorf(\"cleanup failed: %s\", err)\n\t\t}\n\t}\n}\n\n// CleanupPurgeJob is a cleanup func to purge the TestCase job from Nomad\nfunc CleanupPurgeJob(s *TestState) error {\n\t_, _, err := s.Nomad.Jobs().Deregister(s.JobName, true, &nomad.WriteOptions{Namespace: s.Namespace})\n\treturn err\n}\n\n// CheckDeploymentStatus is a TestStateFunc to check if the latest deployment of\n// the TestCase job matches the desired status\nfunc CheckDeploymentStatus(status string) TestStateFunc {\n\treturn func(s *TestState) error {\n\t\tdeploy, _, err := s.Nomad.Jobs().LatestDeployment(s.JobName, &nomad.QueryOptions{Namespace: s.Namespace})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\t// Guard against a nil deployment (e.g. the job has no deployments yet)\n\t\t// to avoid a nil pointer dereference below.\n\t\tif deploy == nil {\n\t\t\treturn fmt.Errorf(\"no deployment found for job %s\", s.JobName)\n\t\t}\n\n\t\tif deploy.Status != status {\n\t\t\treturn fmt.Errorf(\"deployment %s is in status '%s', expected '%s'\", deploy.ID, deploy.Status, status)\n\t\t}\n\n\t\treturn nil\n\t}\n}\n\n// CheckTaskGroupCount is a TestStateFunc to check a TaskGroup count\nfunc CheckTaskGroupCount(groupName string, count int) TestStateFunc {\n\treturn func(s *TestState) error {\n\t\tjob, _, err := s.Nomad.Jobs().Info(s.JobName, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\tfor _, group := range job.TaskGroups {\n\t\t\tif groupName == *group.Name {\n\t\t\t\tif *group.Count == count {\n\t\t\t\t\treturn 
nil\n\t\t\t\t}\n\n\t\t\t\treturn fmt.Errorf(\"task group %s count is %d, expected %d\", groupName, *group.Count, count)\n\t\t\t}\n\t\t}\n\n\t\treturn fmt.Errorf(\"unable to find task group %s\", groupName)\n\t}\n}\n\n// newNomadClient creates a Nomad API client configurable by NOMAD_\n// env variables or returns an error if Nomad is in an unhealthy state\nfunc newNomadClient() (*nomad.Client, error) {\n\tc, err := nomad.NewClient(nomad.DefaultConfig())\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tresp, err := c.Agent().Health()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tif (resp.Server != nil && !resp.Server.Ok) || (resp.Client != nil && !resp.Client.Ok) {\n\t\treturn nil, fmt.Errorf(\"agent unhealthy\")\n\t}\n\treturn c, nil\n}\n"
  },
  {
    "path": "test/acctest/deploy.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage acctest\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/hashicorp/levant/levant\"\n\t\"github.com/hashicorp/levant/levant/structs\"\n\t\"github.com/hashicorp/levant/template\"\n)\n\n// DeployTestStepRunner implements TestStepRunner to execute a levant deployment\ntype DeployTestStepRunner struct {\n\tFixtureName string\n\n\tVars map[string]interface{}\n\n\tCanary     int\n\tForceBatch bool\n\tForceCount bool\n}\n\n// Run renders the job fixture and triggers a deployment\nfunc (c DeployTestStepRunner) Run(s *TestState) error {\n\tif c.Vars == nil {\n\t\tc.Vars = map[string]interface{}{}\n\t}\n\tc.Vars[\"job_name\"] = s.JobName\n\n\tjob, err := template.RenderJob(\"fixtures/\"+c.FixtureName, []string{}, \"\", &c.Vars)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"error rendering template: %s\", err)\n\t}\n\n\tif job.Namespace != nil {\n\t\ts.Namespace = *job.Namespace\n\t}\n\n\tcfg := &levant.DeployConfig{\n\t\tDeploy: &structs.DeployConfig{\n\t\t\tCanary:     c.Canary,\n\t\t\tForceBatch: c.ForceBatch,\n\t\t\tForceCount: c.ForceCount,\n\t\t},\n\t\tClient: &structs.ClientConfig{},\n\t\tTemplate: &structs.TemplateConfig{\n\t\t\tJob: job,\n\t\t},\n\t}\n\n\tif !levant.TriggerDeployment(cfg, nil) {\n\t\treturn fmt.Errorf(\"deployment failed\")\n\t}\n\n\treturn nil\n}\n"
  },
  {
    "path": "test/deploy_test.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage test\n\nimport (\n\t\"fmt\"\n\t\"testing\"\n\n\t\"github.com/hashicorp/levant/test/acctest\"\n\t\"github.com/hashicorp/nomad/api\"\n)\n\nfunc TestDeploy_basic(t *testing.T) {\n\tacctest.Test(t, acctest.TestCase{\n\t\tSteps: []acctest.TestStep{\n\t\t\t{\n\t\t\t\tRunner: acctest.DeployTestStepRunner{\n\t\t\t\t\tFixtureName: \"deploy_basic.nomad\",\n\t\t\t\t},\n\t\t\t\tCheck: acctest.CheckDeploymentStatus(\"successful\"),\n\t\t\t},\n\t\t},\n\t\tCleanupFunc: acctest.CleanupPurgeJob,\n\t})\n}\n\nfunc TestDeploy_namespace(t *testing.T) {\n\tacctest.Test(t, acctest.TestCase{\n\t\tSetupFunc: func(s *acctest.TestState) error {\n\t\t\tif _, err := s.Nomad.Namespaces().Register(&api.Namespace{Name: \"test\"}, nil); err != nil {\n\t\t\t\treturn fmt.Errorf(\"could not create test namespace: %w\", err)\n\t\t\t}\n\t\t\treturn nil\n\t\t},\n\t\tSteps: []acctest.TestStep{\n\t\t\t{\n\t\t\t\tRunner: acctest.DeployTestStepRunner{\n\t\t\t\t\tFixtureName: \"deploy_namespace.nomad\",\n\t\t\t\t},\n\t\t\t\tCheck: acctest.CheckDeploymentStatus(\"successful\"),\n\t\t\t},\n\t\t},\n\t\tCleanupFunc: func(s *acctest.TestState) error {\n\t\t\tif err := acctest.CleanupPurgeJob(s); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\n\t\t\tif _, err := s.Nomad.Namespaces().Delete(\"test\", nil); err != nil {\n\t\t\t\treturn fmt.Errorf(\"could not delete namespace test: %w\", err)\n\t\t\t}\n\t\t\treturn nil\n\t\t},\n\t})\n}\n\nfunc TestDeploy_driverError(t *testing.T) {\n\tacctest.Test(t, acctest.TestCase{\n\t\tSteps: []acctest.TestStep{\n\t\t\t{\n\t\t\t\tRunner: acctest.DeployTestStepRunner{\n\t\t\t\t\tFixtureName: \"deploy_driver_error.nomad\",\n\t\t\t\t},\n\t\t\t\tExpectErr: true,\n\t\t\t\tCheckErr: func(_ error) bool {\n\t\t\t\t\t// this is a bit pointless without the error bubbled up from levant\n\t\t\t\t\treturn true\n\t\t\t\t},\n\t\t\t},\n\t\t\t{\n\t\t\t\t// allows us to check a job was registered and 
previous step error wasn't a parse failure etc.\n\t\t\t\tCheck: acctest.CheckDeploymentStatus(\"failed\"),\n\t\t\t},\n\t\t},\n\t\tCleanupFunc: acctest.CleanupPurgeJob,\n\t})\n}\n\nfunc TestDeploy_allocError(t *testing.T) {\n\tacctest.Test(t, acctest.TestCase{\n\t\tSteps: []acctest.TestStep{\n\t\t\t{\n\t\t\t\tRunner: acctest.DeployTestStepRunner{\n\t\t\t\t\tFixtureName: \"deploy_alloc_error.nomad\",\n\t\t\t\t},\n\t\t\t\tExpectErr: true,\n\t\t\t\tCheckErr: func(_ error) bool {\n\t\t\t\t\t// this is a bit pointless without the error bubbled up from levant\n\t\t\t\t\treturn true\n\t\t\t\t},\n\t\t\t},\n\t\t\t{\n\t\t\t\t// allows us to check a job was registered and previous step error wasn't a parse failure etc.\n\t\t\t\tCheck: acctest.CheckDeploymentStatus(\"failed\"),\n\t\t\t},\n\t\t},\n\t\tCleanupFunc: acctest.CleanupPurgeJob,\n\t})\n}\n\nfunc TestDeploy_count(t *testing.T) {\n\tacctest.Test(t, acctest.TestCase{\n\t\tSteps: []acctest.TestStep{\n\t\t\t{\n\t\t\t\tRunner: acctest.DeployTestStepRunner{\n\t\t\t\t\tFixtureName: \"deploy_count.nomad\",\n\t\t\t\t\tVars: map[string]interface{}{\n\t\t\t\t\t\t\"count\": \"3\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tCheck: acctest.CheckDeploymentStatus(\"successful\"),\n\t\t\t},\n\t\t\t{\n\t\t\t\tRunner: acctest.DeployTestStepRunner{\n\t\t\t\t\tFixtureName: \"deploy_count.nomad\",\n\t\t\t\t\tVars: map[string]interface{}{\n\t\t\t\t\t\t\"count\": \"1\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tCheck: acctest.CheckDeploymentStatus(\"successful\"),\n\t\t\t},\n\t\t\t{\n\t\t\t\t// expect levant to read counts from the api\n\t\t\t\tCheck: acctest.CheckTaskGroupCount(\"test\", 3),\n\t\t\t},\n\t\t\t{\n\t\t\t\tRunner: acctest.DeployTestStepRunner{\n\t\t\t\t\tFixtureName: \"deploy_count.nomad\",\n\t\t\t\t\tVars: map[string]interface{}{\n\t\t\t\t\t\t\"count\": \"1\",\n\t\t\t\t\t},\n\t\t\t\t\tForceCount: true,\n\t\t\t\t},\n\t\t\t\tCheck: acctest.CheckDeploymentStatus(\"successful\"),\n\t\t\t},\n\t\t\t{\n\t\t\t\tCheck: 
acctest.CheckTaskGroupCount(\"test\", 1),\n\t\t\t},\n\t\t},\n\t\tCleanupFunc: acctest.CleanupPurgeJob,\n\t})\n}\n\nfunc TestDeploy_canary(t *testing.T) {\n\tacctest.Test(t, acctest.TestCase{\n\t\tSteps: []acctest.TestStep{\n\t\t\t{\n\t\t\t\tRunner: acctest.DeployTestStepRunner{\n\t\t\t\t\tFixtureName: \"deploy_canary.nomad\",\n\t\t\t\t\tCanary:      10,\n\t\t\t\t\tVars: map[string]interface{}{\n\t\t\t\t\t\t\"env_version\": \"1\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tCheck: acctest.CheckDeploymentStatus(\"successful\"),\n\t\t\t},\n\t\t\t{\n\t\t\t\tRunner: acctest.DeployTestStepRunner{\n\t\t\t\t\tFixtureName: \"deploy_canary.nomad\",\n\t\t\t\t\tCanary:      10,\n\t\t\t\t\tVars: map[string]interface{}{\n\t\t\t\t\t\t\"env_version\": \"2\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tCheck: acctest.CheckDeploymentStatus(\"successful\"),\n\t\t\t},\n\t\t},\n\t\tCleanupFunc: acctest.CleanupPurgeJob,\n\t})\n}\n\nfunc TestDeploy_lifecycle(t *testing.T) {\n\tacctest.Test(t, acctest.TestCase{\n\t\tSteps: []acctest.TestStep{\n\t\t\t{\n\t\t\t\tRunner: acctest.DeployTestStepRunner{\n\t\t\t\t\tFixtureName: \"deploy_lifecycle.nomad\",\n\t\t\t\t},\n\t\t\t\tCheck: acctest.CheckDeploymentStatus(\"successful\"),\n\t\t\t},\n\t\t},\n\t\tCleanupFunc: acctest.CleanupPurgeJob,\n\t})\n}\n\nfunc TestDeploy_taskScalingStanza(t *testing.T) {\n\tacctest.Test(t, acctest.TestCase{\n\t\tSteps: []acctest.TestStep{\n\t\t\t{\n\t\t\t\tRunner: acctest.DeployTestStepRunner{\n\t\t\t\t\tFixtureName: \"deploy_task_scaling.nomad\",\n\t\t\t\t},\n\t\t\t\tCheck: acctest.CheckDeploymentStatus(\"successful\"),\n\t\t\t},\n\t\t},\n\t\tCleanupFunc: acctest.CleanupPurgeJob,\n\t})\n}\n"
  },
  {
    "path": "test/fixtures/deploy_alloc_error.nomad",
    "content": "# Copyright (c) HashiCorp, Inc.\n# SPDX-License-Identifier: MPL-2.0\n\n# test alloc error with a command failure\n\njob \"[[.job_name]]\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n  update {\n    max_parallel      = 1\n    min_healthy_time  = \"10s\"\n    healthy_deadline  = \"15s\"\n    progress_deadline = \"20s\"\n  }\n\n  group \"test\" {\n    count = 1\n    restart {\n      attempts = 1\n      interval = \"10s\"\n      delay    = \"5s\"\n      mode     = \"fail\"\n    }\n    ephemeral_disk {\n      size = 300\n    }\n    task \"alpine\" {\n      driver = \"docker\"\n      config {\n        image   = \"alpine\"\n        command = \"badcommandname\"\n      }\n      resources {\n        cpu    = 100\n        memory = 128\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "test/fixtures/deploy_basic.nomad",
    "content": "# Copyright (c) HashiCorp, Inc.\n# SPDX-License-Identifier: MPL-2.0\n\n# tests a healthy deployment\n\njob \"[[.job_name]]\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n  update {\n    max_parallel     = 1\n    min_healthy_time = \"10s\"\n    healthy_deadline = \"1m\"\n    auto_revert      = true\n  }\n\n  group \"test\" {\n    count = 1\n    restart {\n      attempts = 10\n      interval = \"5m\"\n      delay    = \"25s\"\n      mode     = \"delay\"\n    }\n    ephemeral_disk {\n      size = 300\n    }\n    task \"alpine\" {\n      driver = \"docker\"\n      config {\n        image   = \"alpine\"\n        command = \"tail\"\n        args    = [\"-f\", \"/dev/null\"]\n      }\n      resources {\n        cpu    = 100\n        memory = 128\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "test/fixtures/deploy_canary.nomad",
    "content": "# Copyright (c) HashiCorp, Inc.\n# SPDX-License-Identifier: MPL-2.0\n\n# tests a canary deployment\n\njob \"[[.job_name]]\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n  update {\n    max_parallel     = 1\n    min_healthy_time = \"2s\"\n    healthy_deadline = \"1m\"\n    auto_revert      = true\n    canary           = 1\n  }\n\n  group \"test\" {\n    count = 1\n    restart {\n      attempts = 10\n      interval = \"5m\"\n      delay    = \"25s\"\n      mode     = \"delay\"\n    }\n    ephemeral_disk {\n      size = 300\n    }\n    task \"alpine\" {\n      driver = \"docker\"\n      config {\n        image   = \"alpine\"\n        command = \"tail\"\n        args    = [\"-f\", \"/dev/null\"]\n      }\n      env {\n        version = \"[[ .env_version ]]\"\n      }\n      resources {\n        cpu    = 100\n        memory = 128\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "test/fixtures/deploy_count.nomad",
    "content": "# Copyright (c) HashiCorp, Inc.\n# SPDX-License-Identifier: MPL-2.0\n\n# tests a healthy deployment with a count\n\njob \"[[.job_name]]\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n  update {\n    max_parallel     = 1\n    min_healthy_time = \"10s\"\n    healthy_deadline = \"1m\"\n    auto_revert      = true\n  }\n\n  group \"test\" {\n    count = [[.count]]\n    restart {\n      attempts = 10\n      interval = \"5m\"\n      delay    = \"25s\"\n      mode     = \"delay\"\n    }\n    ephemeral_disk {\n      size = 300\n    }\n    task \"alpine\" {\n      driver = \"docker\"\n      config {\n        image   = \"alpine\"\n        command = \"tail\"\n        args    = [\"-f\", \"/dev/null\"]\n      }\n      resources {\n        cpu    = 100\n        memory = 128\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "test/fixtures/deploy_driver_error.nomad",
    "content": "# Copyright (c) HashiCorp, Inc.\n# SPDX-License-Identifier: MPL-2.0\n\n# tests driver error with an invalid docker image tag\n\njob \"[[.job_name]]\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n  update {\n    max_parallel      = 1\n    min_healthy_time  = \"10s\"\n    healthy_deadline  = \"15s\"\n    progress_deadline = \"20s\"\n  }\n\n  group \"test\" {\n    count = 1\n    restart {\n      attempts = 1\n      interval = \"10s\"\n      delay    = \"5s\"\n      mode     = \"fail\"\n    }\n    ephemeral_disk {\n      size = 300\n    }\n    task \"alpine\" {\n      driver = \"docker\"\n      config {\n        image = \"alpine:badimagetag\"\n      }\n      resources {\n        cpu    = 100\n        memory = 128\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "test/fixtures/deploy_lifecycle.nomad",
    "content": "# Copyright (c) HashiCorp, Inc.\n# SPDX-License-Identifier: MPL-2.0\n\njob \"[[.job_name]]\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n  update {\n    max_parallel     = 1\n    min_healthy_time = \"10s\"\n    healthy_deadline = \"1m\"\n    auto_revert      = true\n  }\n\n  group \"test\" {\n    count = 1\n    restart {\n      attempts = 10\n      interval = \"5m\"\n      delay    = \"25s\"\n      mode     = \"delay\"\n    }\n    ephemeral_disk {\n      size = 300\n    }\n    task \"init\" {\n      driver = \"docker\"\n      lifecycle {\n        hook = \"prestart\"\n      }\n      config {\n        image   = \"alpine\"\n        command = \"sleep\"\n        args    = [\"5\"]\n      }\n      resources {\n        cpu    = 100\n        memory = 128\n      }\n    }\n    task \"alpine\" {\n      driver = \"docker\"\n      config {\n        image   = \"alpine\"\n        command = \"tail\"\n        args    = [\"-f\", \"/dev/null\"]\n      }\n      resources {\n        cpu    = 100\n        memory = 128\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "test/fixtures/deploy_namespace.nomad",
    "content": "# Copyright (c) HashiCorp, Inc.\n# SPDX-License-Identifier: MPL-2.0\n\n# tests a healthy deployment within a namespace\n\njob \"[[.job_name]]\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n  namespace   = \"test\"\n  update {\n    max_parallel     = 1\n    min_healthy_time = \"10s\"\n    healthy_deadline = \"1m\"\n    auto_revert      = true\n  }\n\n  group \"test\" {\n    count = 1\n    restart {\n      attempts = 10\n      interval = \"5m\"\n      delay    = \"25s\"\n      mode     = \"delay\"\n    }\n    ephemeral_disk {\n      size = 300\n    }\n    task \"alpine\" {\n      driver = \"docker\"\n      config {\n        image   = \"alpine\"\n        command = \"tail\"\n        args    = [\"-f\", \"/dev/null\"]\n      }\n      resources {\n        cpu    = 100\n        memory = 128\n        network {\n          mbits = 10\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "test/fixtures/deploy_task_scaling.nomad",
    "content": "# Copyright (c) HashiCorp, Inc.\n# SPDX-License-Identifier: MPL-2.0\n\njob \"[[.job_name]]\" {\n  datacenters = [\"dc1\"]\n  type        = \"service\"\n  update {\n    max_parallel     = 1\n    min_healthy_time = \"10s\"\n    healthy_deadline = \"1m\"\n    auto_revert      = true\n  }\n\n  group \"test\" {\n    count = 1\n    restart {\n      attempts = 10\n      interval = \"5m\"\n      delay    = \"25s\"\n      mode     = \"delay\"\n    }\n    ephemeral_disk {\n      size = 300\n    }\n    task \"alpine\" {\n      driver = \"docker\"\n      config {\n        image   = \"alpine\"\n        command = \"tail\"\n        args    = [\"-f\", \"/dev/null\"]\n      }\n      resources {\n        cpu    = 100\n        memory = 128\n      }\n      scaling \"cpu\" {\n        policy {\n          check \"95pct\" {\n            strategy \"app-sizing-percentile\" {\n              percentile = \"95\"\n            }\n          }\n        }\n      }\n      scaling \"mem\" {\n        policy {\n          check \"max\" {\n            strategy \"app-sizing-max\" {}\n          }\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "version/version.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage version\n\nimport (\n\t\"fmt\"\n\t\"strings\"\n)\n\nvar (\n\tGitCommit   string\n\tGitDescribe string\n\n\t// Version must conform to the format expected by\n\t// github.com/hashicorp/go-version for tests to work.\n\tVersion = \"0.4.0\"\n\n\t// VersionPrerelease is the pre-release marker for the version. If this is\n\t// \"\" (empty string) then it means that it is a final release. Otherwise,\n\t// this is a pre-release such as \"dev\" (in development), \"beta\", \"rc1\", etc.\n\tVersionPrerelease = \"dev\"\n\n\t// VersionMetadata is metadata further describing the build type.\n\tVersionMetadata = \"\"\n)\n\n// GetHumanVersion composes the parts of the version in a way that's suitable\n// for displaying to humans.\nfunc GetHumanVersion() string {\n\tversion := Version\n\tif GitDescribe != \"\" {\n\t\tversion = GitDescribe\n\t}\n\n\t// Add v as prefix if not present\n\tif !strings.HasPrefix(version, \"v\") {\n\t\tversion = fmt.Sprintf(\"v%s\", version)\n\t}\n\n\trelease := VersionPrerelease\n\tif GitDescribe == \"\" && release == \"\" {\n\t\trelease = \"dev\"\n\t}\n\n\tif release != \"\" {\n\t\tif !strings.HasSuffix(version, \"-\"+release) {\n\t\t\t// if we tagged a prerelease version then the release is in the version already\n\t\t\tversion += fmt.Sprintf(\"-%s\", release)\n\t\t}\n\t\tif GitCommit != \"\" {\n\t\t\tversion += fmt.Sprintf(\" (%s)\", GitCommit)\n\t\t}\n\t}\n\n\t// Strip off any single quotes added by the git information.\n\treturn strings.Replace(version, \"'\", \"\", -1)\n}\n"
  },
  {
    "path": "version/version_test.go",
    "content": "// Copyright (c) HashiCorp, Inc.\n// SPDX-License-Identifier: MPL-2.0\n\npackage version\n\nimport (\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/assert\"\n)\n\nfunc Test_GetHumanVersion(t *testing.T) {\n\ttestCases := []struct {\n\t\tinputCommit     string\n\t\tinputDescribe   string\n\t\tinputVersion    string\n\t\tinputPrerelease string\n\t\texpectedOutput  string\n\t}{\n\t\t{\n\t\t\tinputCommit:     \"440bca3+CHANGES\",\n\t\t\tinputDescribe:   \"\",\n\t\t\tinputVersion:    \"0.1.3\",\n\t\t\tinputPrerelease: \"dev\",\n\t\t\texpectedOutput:  \"v0.1.3-dev (440bca3+CHANGES)\",\n\t\t},\n\t\t{\n\t\t\tinputCommit:     \"440bca3\",\n\t\t\tinputDescribe:   \"\",\n\t\t\tinputVersion:    \"0.6.0\",\n\t\t\tinputPrerelease: \"beta1\",\n\t\t\texpectedOutput:  \"v0.6.0-beta1 (440bca3)\",\n\t\t},\n\t\t{\n\t\t\tinputCommit:     \"440bca3\",\n\t\t\tinputDescribe:   \"v1.0.0\",\n\t\t\tinputVersion:    \"1.0.0\",\n\t\t\tinputPrerelease: \"\",\n\t\t\texpectedOutput:  \"v1.0.0\",\n\t\t},\n\t}\n\n\tfor _, tc := range testCases {\n\t\tGitCommit = tc.inputCommit\n\t\tGitDescribe = tc.inputDescribe\n\t\tVersion = tc.inputVersion\n\t\tVersionPrerelease = tc.inputPrerelease\n\t\tassert.Equal(t, tc.expectedOutput, GetHumanVersion())\n\t}\n}\n"
  }
]