[
  {
    "path": ".dockerignore",
    "content": "# More info: https://docs.docker.com/engine/reference/builder/#dockerignore-file\n# Ignore all files which are not go type\n!**/*.go\n!**/*.mod\n!**/*.sum\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/bug-report.yml",
    "content": "name: Bug Report\ndescription: Report a bug in Eraser\ntitle: \"[BUG] <title>\"\nlabels:\n  - \"bug\"\nbody:\n  - type: markdown\n    attributes:\n      value: |\n        Please search to see if an issue already exists for your bug before continuing.\n        > If you need to report a security issue please see https://github.com/eraser-dev/eraser/security/policy instead.\n  - type: input\n    attributes:\n      label: Version of Eraser\n      placeholder: Release version (e.g. v1.0.0) or `git describe --dirty` output if built from source\n  - type: textarea\n    attributes:\n      label: Expected Behavior\n      description: Briefly describe what you expect to happen.\n  - type: textarea\n    attributes:\n      label: Actual Behavior\n      description: Briefly describe what is actually happening.\n  - type: textarea\n    attributes:\n      label: Steps To Reproduce\n      description: Detailed steps to reproduce the behavior.\n      placeholder: |\n        1. In Kubernetes v1.27.0 ...\n        2. With this config...\n        3. Run '...'\n        4. See error...\n  - type: markdown\n    attributes:\n      value: |\n        Thanks for taking the time to fill out a bug report!\n  - type: checkboxes\n    id: idea\n    attributes:\n      label: \"Are you willing to submit PRs to contribute to this bug fix?\"\n      description: \"This is absolutely not required, but we are happy to guide you in the contribution process especially when you already have a good proposal or understanding of how to implement it. Join us at the `#eraser` channel on the [Kubernetes Slack](https://kubernetes.slack.com/archives/C03Q8KV8YQ4) if you have any questions.\"\n      options:\n        - label: Yes, I am willing to implement it.\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/feature-request.yml",
    "content": "name: Request\ndescription: Request a new feature or propose an enhancement to Eraser\ntitle: \"[REQ] <title>\"\nlabels:\n  - \"enhancement\"\nbody:\n  - type: markdown\n    attributes:\n      value: |\n        Please search to see if an issue already exists for your request before continuing.\n  - type: dropdown\n    attributes:\n      label: What kind of request is this?\n      multiple: false\n      options:\n        - New feature\n        - Improvement of existing experience\n        - Other\n  - type: textarea\n    attributes:\n      label: What is your request or suggestion?\n      placeholder: |\n        e.g. I would like Eraser to add this <feature> so that I can use it in my <scenario>.\n        e.g. When using Eraser the <current behavior> has this <limitation> and it would be better if it has this <improvement>.\n  - type: markdown\n    attributes:\n      value: |\n        Thanks for taking the time to fill out a request!\n  - type: checkboxes\n    id: idea\n    attributes:\n      label: \"Are you willing to submit PRs to contribute to this feature request?\"\n      description: \"This is absolutely not required, but we are happy to guide you in the contribution process especially when you already have a good proposal or understanding of how to implement it. Join us at the `#eraser` channel on the [Kubernetes Slack](https://kubernetes.slack.com/archives/C03Q8KV8YQ4) if you have any questions.\"\n      options:\n        - label: Yes, I am willing to implement it.\n"
  },
  {
    "path": ".github/PULL_REQUEST_TEMPLATE.md",
    "content": "**What this PR does / why we need it**:\n\n**Which issue(s) this PR fixes** *(optional, using `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when the PR gets merged)*:\nFixes #\n\n**Special notes for your reviewer**:\n"
  },
  {
    "path": ".github/dependabot.yml",
    "content": "version: 2\n\nupdates:\n  - package-ecosystem: \"npm\"\n    directory: \"/docs\"\n    schedule:\n      interval: \"weekly\"\n    commit-message:\n      prefix: \"chore\"\n    groups:\n      docusaurus:\n        patterns:\n        - \"@docusaurus/*\"\n\n  - package-ecosystem: \"github-actions\"\n    directory: \"/\"\n    schedule:\n      interval: \"weekly\"\n    commit-message:\n      prefix: \"chore\"\n    groups:\n      all:\n        patterns:\n        - \"*\"\n\n  - package-ecosystem: \"gomod\"\n    directory: \"/\"\n    schedule:\n      interval: \"weekly\"\n    commit-message:\n      prefix: \"chore\"\n    ignore:\n      - dependency-name: \"*\"\n        update-types:\n        - \"version-update:semver-major\"\n        - \"version-update:semver-minor\"\n    groups:\n      k8s:\n        patterns:\n        - \"k8s.io/*\"\n        exclude-patterns:\n        - \"k8s.io/cri-api\"\n\n  - package-ecosystem: docker\n    directory: /\n    schedule:\n      interval: weekly\n\n  - package-ecosystem: docker\n    directory: /build/tooling\n    schedule:\n      interval: weekly\n"
  },
  {
    "path": ".github/semantic.yml",
    "content": "titleOnly: true\ntypes:\n  - build\n  - chore\n  - ci\n  - docs\n  - feat\n  - fix\n  - perf\n  - refactor\n  - revert\n  - style\n  - test\n"
  },
  {
    "path": ".github/workflows/README.md",
    "content": "# GitHub Workflows\n\nThis directory contains all of our workflows used in our GitHub CI/CD pipeline.\n\n## Descriptions\n\n### [Scan Images for Vulnerabilities (Trivy)](scan-images.yaml)\nOur images are scheduled to be scanned for vulnerabilities using Trivy every Monday at 07:00 UTC.\n\n#### Weekly Scans\nBy default, our images are built from the `main` branch, and any vulnerabilities caught are published in the [Github Security tab](https://github.com/eraser-dev/eraser/security).\n\n#### Dispatching a Scan\nWe can do a manual dispatch of the workflow and specify the released version to scan, e.g. `v1.3.0-beta.0`. If left blank, the image will be built off of the branch the workflow is dispatched from.\n\nIf we want to publish those results to our [Github Security tab](https://github.com/eraser-dev/eraser/security), we need to toggle the `upload-results` input to `true`.\n\n#### Scan Results\nThe scan results are automatically stored in the run artifacts. Those can be accessed by going into the workflow run, and under the run's **Summary** there is an **Artifacts** section storing all the images' scan results.\n\nIf the `upload-results` input is set to `true`, any vulnerabilities found will be published in the [Github Security tab](https://github.com/eraser-dev/eraser/security).\n"
  },
  {
    "path": ".github/workflows/build-id.yaml",
    "content": "name: Image build definitions for e2e tests\n\non:\n  workflow_call:\n    outputs:\n      build-id:\n        description: \"random build id to keep things together\"\n        value: ${{ jobs.generate-build-id.outputs.image-build-id }}\n      bucket-id:\n        description: \"docker-images-<build-id>\"\n        value: ${{ jobs.generate-build-id.outputs.bucket-id }}\n\npermissions:\n  contents: read\n\njobs:\n  generate-build-id:\n    name: \"Generate Build ID\"\n    runs-on: ubuntu-latest\n    steps:\n      - name: Harden Runner\n        uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2\n        with:\n          egress-policy: audit\n\n      - id: build-id\n        run: |\n          build_id=\"$(date +%s)\"\n          echo build-id=$build_id | tee -a $GITHUB_OUTPUT\n          echo bucket-id=docker-images-$build_id | tee -a $GITHUB_OUTPUT\n    outputs:\n      image-build-id: ${{ steps.build-id.outputs.build-id }}\n      bucket-id: ${{ steps.build-id.outputs.bucket-id }}\n"
  },
  {
    "path": ".github/workflows/codeql.yaml",
    "content": "name: \"CodeQL\"\n\non:\n  push:\n    branches: [ main ]\n  schedule:\n    - cron: '0 7 * * 1' # Monday at 7:00 AM\n\npermissions: read-all\n\njobs:\n  analyze:\n    name: Analyze\n    runs-on: ubuntu-latest\n    permissions:\n      actions: read\n      contents: read\n      security-events: write\n\n    strategy:\n      fail-fast: false\n      matrix:\n        language: [ 'go' ]\n\n    steps:\n    - name: Harden Runner\n      uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2\n      with:\n        egress-policy: audit\n\n    - name: Checkout repository\n      uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3\n\n    - name: Initialize CodeQL\n      uses: github/codeql-action/init@fdbfb4d2750291e159f0156def62b853c2798ca2\n      with:\n        languages: ${{ matrix.language }}\n\n    - name: Autobuild\n      uses: github/codeql-action/autobuild@fdbfb4d2750291e159f0156def62b853c2798ca2\n\n    - name: Perform CodeQL Analysis\n      uses: github/codeql-action/analyze@fdbfb4d2750291e159f0156def62b853c2798ca2\n"
  },
  {
    "path": ".github/workflows/dep-review.yaml",
    "content": "name: 'Dependency Review'\non: [pull_request]\n\npermissions:\n  contents: read\n\njobs:\n  dependency-review:\n    runs-on: ubuntu-latest\n    steps:\n      - name: Harden Runner\n        uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2\n        with:\n          egress-policy: audit\n\n      - name: 'Checkout Repository'\n        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3\n\n      - name: 'Dependency Review'\n        uses: actions/dependency-review-action@0659a74c94536054bfa5aeb92241f70d680cc78e\n"
  },
  {
    "path": ".github/workflows/deploy_docs.yaml",
    "content": "name: Generate docs website to GitHub Pages\n\non:\n  push:\n    branches:\n      - main\n    paths:\n      - '.github/workflows/deploy_docs.yaml'\n      - 'docs/**'\n  pull_request:\n    branches:\n      - main\n    paths:\n      - '.github/workflows/deploy_docs.yaml'\n      - 'docs/**'\n\npermissions:\n  contents: read\n\njobs:\n  deploy:\n    name: Generate docs website to GitHub Pages\n    runs-on: ubuntu-latest\n    permissions:\n      contents: write\n    defaults:\n      run:\n        working-directory: docs\n    steps:\n      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0\n\n      - name: Harden Runner\n        uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2\n        with:\n          egress-policy: audit\n\n      - name: Setup Node\n        uses: actions/setup-node@2028fbc5c25fe9cf00d9f06a71cc4710d4507903 # v6.0.0\n        with:\n          node-version: 20.x\n\n      - name: Get yarn cache\n        id: yarn-cache\n        run: echo \"dir=$(yarn cache dir)\" > $GITHUB_OUTPUT\n\n      - name: Cache dependencies\n        uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0\n        with:\n          path: ${{ steps.yarn-cache.outputs.dir }}\n          key: ${{ runner.os }}-website-${{ hashFiles('**/yarn.lock') }}\n          restore-keys: |\n            ${{ runner.os }}-website-\n\n      - run: yarn install --frozen-lockfile\n      - run: yarn build\n\n      - name: Deploy to GitHub Pages\n        if: github.ref == 'refs/heads/main' && github.event_name == 'push' && github.repository == 'eraser-dev/eraser'\n        uses: peaceiris/actions-gh-pages@4f9cc6602d3f66b9c108549d475ec49e8ef4d45e # v4.0.0\n        with:\n          github_token: ${{ secrets.GITHUB_TOKEN }}\n          publish_dir: ./docs/build\n          destination_dir: ./docs\n"
  },
  {
    "path": ".github/workflows/e2e-build.yaml",
    "content": "name: Image build definitions for e2e tests\n\non:\n  workflow_call:\n    inputs:\n      bucket-id:\n        required: true\n        type: string\n\njobs:\n  build-remover:\n    name: \"Build remover image for e2e tests\"\n    runs-on: ubuntu-latest\n    timeout-minutes: 10\n    steps:\n      - name: Harden Runner\n        uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2\n        with:\n          egress-policy: audit\n      - name: Set up Go\n        uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0\n        with:\n          go-version: \"1.25\"\n          check-latest: true\n      - name: Setup buildx instance\n        uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1\n        with:\n          use: true\n      - uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0\n        with:\n          key: ${{ runner.OS }}-go-${{ hashFiles('**/go.sum') }}\n          restore-keys: |\n            ${{ runner.os }}-go-\n          path: |\n            ~/go/pkg/mod\n            ~/.cache/go-build\n      - uses: crazy-max/ghaction-github-runtime@3cb05d89e1f492524af3d41a1c98c83bc3025124 # v3.1.0\n      - name: Check out code into the Go module directory\n        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0\n      - run: 'echo ${{ inputs.bucket-id }}'\n      - name: Set env\n        run: |\n          echo REMOVER_REPO=remover >> $GITHUB_ENV\n          echo REMOVER_TAG=test >> $GITHUB_ENV\n      - name: Build remover\n        run: 'make docker-build-remover OUTPUT_TYPE=type=oci,dest=./${REMOVER_REPO}_${REMOVER_TAG}.tar,name=${REMOVER_REPO}:${REMOVER_TAG}'\n      - name: Upload Build Artifacts\n        uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4  # v5.0.0\n        with:\n          name: ${{ inputs.bucket-id }}-remover\n          path: remover_test.tar\n          overwrite: true\n\n  build-trivy-scanner:\n    name: \"Build trivy-scanner image for e2e tests\"\n    runs-on: ubuntu-latest\n    timeout-minutes: 10\n    steps:\n      - name: Harden Runner\n        uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2\n        with:\n          egress-policy: audit\n      - name: Set up Go\n        uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0\n        with:\n          go-version: \"1.25\"\n          check-latest: true\n      - name: Setup buildx instance\n        uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1\n        with:\n          use: true\n      - uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0\n        with:\n          key: ${{ runner.OS }}-go-${{ hashFiles('**/go.sum') }}\n          restore-keys: |\n            ${{ runner.os }}-go-\n          path: |\n            ~/go/pkg/mod\n            ~/.cache/go-build\n      - uses: crazy-max/ghaction-github-runtime@3cb05d89e1f492524af3d41a1c98c83bc3025124 # v3.1.0\n      - name: Check out code into the Go module directory\n        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0\n      - name: Set env\n        run: |\n          echo TRIVY_SCANNER_REPO=scanner >> $GITHUB_ENV\n          echo TRIVY_SCANNER_TAG=test >> $GITHUB_ENV\n      - name: Build trivy-scanner\n        run: 'make docker-build-trivy-scanner OUTPUT_TYPE=type=oci,dest=./${TRIVY_SCANNER_REPO}_${TRIVY_SCANNER_TAG}.tar,name=${TRIVY_SCANNER_REPO}:${TRIVY_SCANNER_TAG}'\n      - name: Upload Build Artifacts\n        
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4  # v5.0.0\n        with:\n          name: ${{ inputs.bucket-id }}-scanner\n          path: scanner_test.tar\n          overwrite: true\n\n  build-manager:\n    name: \"Build manager image for e2e tests\"\n    runs-on: ubuntu-latest\n    timeout-minutes: 10\n    steps:\n      - name: Harden Runner\n        uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2\n        with:\n          egress-policy: audit\n      - name: Set up Go\n        uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0\n        with:\n          go-version: \"1.25\"\n          check-latest: true\n      - name: Setup buildx instance\n        uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1\n        with:\n          use: true\n      - uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0\n        with:\n          key: ${{ runner.OS }}-go-${{ hashFiles('**/go.sum') }}\n          restore-keys: |\n            ${{ runner.os }}-go-\n          path: |\n            ~/go/pkg/mod\n            ~/.cache/go-build\n      - uses: crazy-max/ghaction-github-runtime@3cb05d89e1f492524af3d41a1c98c83bc3025124 # v3.1.0\n      - name: Check out code into the Go module directory\n        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0\n      - name: Set env\n        run: |\n          echo MANAGER_REPO=manager >> $GITHUB_ENV\n          echo MANAGER_TAG=test >> $GITHUB_ENV\n      - name: Build manager\n        run: 'make docker-build-manager OUTPUT_TYPE=type=oci,dest=./${MANAGER_REPO}_${MANAGER_TAG}.tar,name=${MANAGER_REPO}:${MANAGER_TAG}'\n      - name: Upload Build Artifacts\n        uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4  # v5.0.0\n        with:\n          name: ${{ inputs.bucket-id }}-manager\n          path: manager_test.tar\n          overwrite: true\n\n  build-collector:\n    name: \"Build collector image for e2e tests\"\n    runs-on: ubuntu-latest\n    timeout-minutes: 10\n    steps:\n      - name: Harden Runner\n        uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2\n        with:\n          egress-policy: audit\n      - name: Set up Go\n        uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0\n        with:\n          go-version: \"1.25\"\n          check-latest: true\n      - name: Setup buildx instance\n        uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1\n        with:\n          use: true\n      - uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0\n        with:\n          key: ${{ runner.OS }}-go-${{ hashFiles('**/go.sum') }}\n          restore-keys: |\n            ${{ runner.os }}-go-\n          path: |\n            ~/go/pkg/mod\n            ~/.cache/go-build\n      - uses: crazy-max/ghaction-github-runtime@3cb05d89e1f492524af3d41a1c98c83bc3025124 # v3.1.0\n      - name: Check out code into the Go module directory\n        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0\n      - name: Set env\n        run: |\n          echo COLLECTOR_REPO=collector >> $GITHUB_ENV\n          echo COLLECTOR_TAG=test >> $GITHUB_ENV\n      - name: Build collector\n        run: 'make docker-build-collector OUTPUT_TYPE=type=oci,dest=./${COLLECTOR_REPO}_${COLLECTOR_TAG}.tar,name=${COLLECTOR_REPO}:${COLLECTOR_TAG}'\n      - name: Upload Build Artifacts\n        uses: 
actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4  # v5.0.0\n        with:\n          name: ${{ inputs.bucket-id }}-collector\n          path: collector_test.tar\n          overwrite: true\n"
  },
  {
    "path": ".github/workflows/e2e-test.yaml",
    "content": "name: Run E2E tests\n\non:\n  workflow_call:\n    inputs:\n      upgrade-test:\n        required: false\n        type: string\n      bucket-id:\n        required: true\n        type: string\n\npermissions:\n  contents: read\n\njobs:\n  build-e2e-test-list:\n    name: \"Build E2E Test List\"\n    runs-on: ubuntu-latest\n    steps:\n      - name: Harden Runner\n        uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2\n        with:\n          egress-policy: audit\n\n      - name: Check out code into the Go module directory\n        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0\n      - id: set-test-matrix\n        run: |\n          readarray -d '' test_dirs < <(find ./test/e2e/tests -mindepth 1 -type d -print0)\n          json_array=\"$(printf \"%s\\n\" \"${test_dirs[@]}\" | jq -R . | jq -cs)\"\n          echo \"e2e-tests=${json_array}\" > $GITHUB_OUTPUT\n    outputs:\n      e2e-tests: ${{ steps.set-test-matrix.outputs.e2e-tests }}\n  e2e-test:\n    name: \"E2E Tests\"\n    runs-on: ubuntu-latest\n    timeout-minutes: 20\n    needs:\n      - build-e2e-test-list\n    permissions:\n      contents: write\n    strategy:\n      fail-fast: false\n      matrix:\n        KUBERNETES_VERSION: [\"1.27.13\", \"1.28.9\", \"1.29.4\", \"1.30.2\"]\n        E2E_TEST: ${{ fromJson(needs.build-e2e-test-list.outputs.e2e-tests) }}\n    steps:\n      - name: Harden Runner\n        uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2\n        with:\n          egress-policy: audit\n      - name: Check out code into the Go module directory\n        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0\n      - name: Fetch Build Artifacts\n        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0\n        with:\n          pattern: ${{ inputs.bucket-id }}-*\n          path: ${{ github.workspace }}/images\n          merge-multiple: true\n      - name: Set up Go\n        uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0\n        with:\n          go-version: \"1.25\"\n          check-latest: true\n      - name: Set env\n        run: |\n          ARCHIVE_DIR=${{ github.workspace }}/images\n          REMOVER_REPO=remover\n          MANAGER_REPO=manager\n          COLLECTOR_REPO=collector\n          TRIVY_SCANNER_REPO=scanner\n\n          REMOVER_TAG=test\n          MANAGER_TAG=test\n          COLLECTOR_TAG=test\n          TRIVY_SCANNER_TAG=test\n\n          echo REMOVER_REPO=$REMOVER_REPO >> $GITHUB_ENV\n          echo MANAGER_REPO=$MANAGER_REPO >> $GITHUB_ENV\n          echo COLLECTOR_REPO=$COLLECTOR_REPO >> $GITHUB_ENV\n          echo TRIVY_SCANNER_REPO=$TRIVY_SCANNER_REPO >> $GITHUB_ENV\n\n          echo REMOVER_TAG=$REMOVER_TAG >> $GITHUB_ENV\n          echo MANAGER_TAG=$MANAGER_TAG >> $GITHUB_ENV\n          echo COLLECTOR_TAG=$COLLECTOR_TAG >> $GITHUB_ENV\n          echo TRIVY_SCANNER_TAG=$TRIVY_SCANNER_TAG >> $GITHUB_ENV\n          echo ARCHIVE_DIR=$ARCHIVE_DIR >> $GITHUB_ENV\n\n          echo REMOVER_TARBALL_PATH=$ARCHIVE_DIR/${REMOVER_REPO}_${REMOVER_TAG}.tar >> $GITHUB_ENV\n          echo MANAGER_TARBALL_PATH=$ARCHIVE_DIR/${MANAGER_REPO}_${MANAGER_TAG}.tar >> $GITHUB_ENV\n          echo COLLECTOR_TARBALL_PATH=$ARCHIVE_DIR/${COLLECTOR_REPO}_${COLLECTOR_TAG}.tar >> $GITHUB_ENV\n          echo SCANNER_TARBALL_PATH=$ARCHIVE_DIR/${TRIVY_SCANNER_REPO}_${TRIVY_SCANNER_TAG}.tar >> $GITHUB_ENV\n\n          if [[ -n \"${{ inputs.upgrade-test }}\" ]]; 
then\n            echo HELM_UPGRADE_TEST=1 >> $GITHUB_ENV\n          fi\n      - name: Run e2e test\n        run: |\n          make e2e-test \\\n            KUBERNETES_VERSION=${{ matrix.KUBERNETES_VERSION }} \\\n            E2E_TESTS=${{ matrix.E2E_TEST }}\n      - name: Remove slash from E2E_TEST\n        run: |\n          E2E_TEST=${{ matrix.E2E_TEST }}\n          E2E_TEST=${E2E_TEST//\\//_}\n          echo \"E2E_TEST=${E2E_TEST}\" >> $GITHUB_ENV\n      - name: Upload artifacts\n        uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4  # v5.0.0\n        if: always()\n        with:\n          name: test_logs_${{ matrix.KUBERNETES_VERSION }}_${{ env.E2E_TEST }}\n          path: ${{ github.workspace }}/test_logs/\n          retention-days: 1\n          overwrite: true\n"
  },
  {
    "path": ".github/workflows/patch-docs.yaml",
    "content": "name: patch_docs\non:\n  push:\n    tags:\n      - 'v[0-9]+.[0-9]+.[1-9]+' # run this workflow when a new patch version is published\n\npermissions:\n  contents: write\n  pull-requests: write\n\njobs:\n  patch-docs:\n    runs-on: ubuntu-22.04\n    steps:\n      - name: Harden Runner\n        uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2\n        with:\n          egress-policy: audit\n\n      - name: Set release version and target branch for vNext\n        if: github.event_name == 'push'\n        run: |\n          TAG=\"$(echo \"${{ github.ref }}\" | tr -d 'refs/tags/v')\"\n          MAJOR_VERSION=\"$(echo \"${TAG}\" | cut -d '.' -f1)\"\n          echo \"MAJOR_VERSION=${MAJOR_VERSION}\" >> ${GITHUB_ENV}\n          MINOR_VERSION=\"$(echo \"${TAG}\" | cut -d '.' -f2)\"\n          echo \"MINOR_VERSION=${MINOR_VERSION}\" >> ${GITHUB_ENV}\n          PATCH_VERSION=\"$(echo \"${TAG}\" | cut -d '.' -f3)\"\n          echo \"PATCH_VERSION=${PATCH_VERSION}\" >> ${GITHUB_ENV}\n          echo \"TAG=${TAG}\" >> ${GITHUB_ENV}\n      \n      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3\n        with:\n          fetch-depth: 0\n\n      - name: Create release branch if needed # patched docs are always being merged to the main branch\n        run: |\n          git checkout main \n      \n      - name: Create patch version docs\n        run: make patch-version-docs NEWVERSION=v${MAJOR_VERSION}.${MINOR_VERSION}.x TAG=v${TAG} OLDVERSION=v${MAJOR_VERSION}.${MINOR_VERSION}.$((PATCH_VERSION-1))\n      \n      - name: Create release pull request\n        uses: peter-evans/create-pull-request@84ae59a2cdc2258d6fa0732dd66352dddae2a412 # v7.0.9\n        with:\n          commit-message: \"chore: Patch docs for ${{ env.TAG }} release\"\n          title: \"chore: Patch docs for ${{ env.TAG }} release\"\n          branch: \"patch-docs-${{ env.TAG }}\"\n          base: \"main\"\n          signoff: true\n          labels: |\n            release-pr\n            ${{ github.event.inputs.release_version }}\n      "
  },
  {
    "path": ".github/workflows/release-pr.yaml",
    "content": "name: create_release_pull_request\non:\n  push:\n    tags:\n      - 'v[0-9]+.[0-9]+.0' # run this workflow when a new minor version is published\n  workflow_dispatch:\n    inputs:\n      release_version:\n        description: 'Which version are we creating a release pull request for?'\n        required: true\n\npermissions:\n  contents: write\n  pull-requests: write\n\njobs:\n  create-release-pull-request:\n    runs-on: ubuntu-latest\n    steps:\n      - name: Harden Runner\n        uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2\n        with:\n          egress-policy: audit\n\n      - name: Set up Go\n        uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0\n        with:\n          go-version: \"1.25\"\n          check-latest: true\n\n      - name: Set release version and target branch for vNext\n        if: github.event_name == 'push'\n        run: |\n          TAG=\"$(echo \"${{ github.ref }}\" | tr -d 'refs/tags/v')\"\n          MAJOR_VERSION=\"$(echo \"${TAG}\" | cut -d '.' -f1)\"\n          echo \"MAJOR_VERSION=${MAJOR_VERSION}\" >> ${GITHUB_ENV}\n          MINOR_VERSION=\"$(echo \"${TAG}\" | cut -d '.' -f2)\"\n          echo \"MINOR_VERSION=${MINOR_VERSION}\" >> ${GITHUB_ENV}\n\n          # increment the minor version by 1 for vNext\n          echo \"NEWVERSION=v${MAJOR_VERSION}.$((MINOR_VERSION+1)).0-beta.0\" >> ${GITHUB_ENV}\n          # pre-release is always being merged to the main branch\n          echo \"TARGET_BRANCH=main\" >> ${GITHUB_ENV}\n          echo \"TAG=${TAG}\" >> ${GITHUB_ENV}\n\n      - name: Set release version and target branch from input\n        if: github.event_name == 'workflow_dispatch'\n        run: |\n          NEWVERSION=\"${{ github.event.inputs.release_version }}\"\n          echo \"${NEWVERSION}\" | grep -E '^v[0-9]+\\.[0-9]+\\.[0-9](-(beta|rc)\\.[0-9]+)?$' || (echo \"release_version should be in the format vX.Y.Z, vX.Y.Z-beta.A, or vX.Y.Z-rc.B\" && exit 1)\n\n          echo \"NEWVERSION=${NEWVERSION}\" >> ${GITHUB_ENV}\n          echo \"TAG=${NEWVERSION}\" >> ${GITHUB_ENV}\n          MAJOR_VERSION=\"$(echo \"${NEWVERSION}\" | cut -d '.' -f1 | tr -d 'v')\"\n          MINOR_VERSION=\"$(echo \"${NEWVERSION}\" | cut -d '.' 
-f2)\"\n\n          # non-beta releases should always be merged to release branches\n          echo \"TARGET_BRANCH=release-${MAJOR_VERSION}.${MINOR_VERSION}\" >> ${GITHUB_ENV}\n\n          # beta releases should always be merged to main\n          if [[ \"${NEWVERSION}\" =~ \"beta\" ]]; then\n            echo \"TARGET_BRANCH=main\" >> ${GITHUB_ENV}\n          fi\n\n      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3\n        with:\n          fetch-depth: 0\n\n      - name: Create release branch if needed\n        run: |\n          git checkout \"${TARGET_BRANCH}\" && exit 0\n\n          # Create and push release branch if it doesn't exist\n          git checkout -b \"${TARGET_BRANCH}\"\n          git push --set-upstream origin \"${TARGET_BRANCH}\"\n\n      - run: make release-manifest promote-staging-manifest\n\n      - if: github.event_name == 'push'\n        run: make version-docs NEWVERSION=v${MAJOR_VERSION}.${MINOR_VERSION}.x TAG=v${TAG}\n\n      - name: Create release pull request\n        uses: peter-evans/create-pull-request@84ae59a2cdc2258d6fa0732dd66352dddae2a412 # v7.0.9\n        with:\n          commit-message: \"chore: Prepare ${{ env.NEWVERSION }} release\"\n          title: \"chore: Prepare ${{ env.NEWVERSION }} release\"\n          branch: \"release-${{ env.NEWVERSION }}\"\n          base: \"${{ env.TARGET_BRANCH }}\"\n          signoff: true\n"
  },
  {
    "path": ".github/workflows/release.yaml",
    "content": "name: release\n\non:\n  push:\n    # Sequence of patterns matched against refs/tags\n    tags:\n      - 'v*' # Push events to matching v*, i.e. v1.0, v20.15.10\n\nenv:\n  REGISTRY: ghcr.io\n\npermissions:\n  contents: write\n  packages: write\n\njobs:\n  build-publish-release:\n    name: \"release\"\n    runs-on: ubuntu-latest\n    timeout-minutes: 60\n    steps:\n      - name: Harden Runner\n        uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2\n        with:\n          egress-policy: audit\n\n      - name: Check out code into the Go module directory\n        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0\n\n      - name: Setup buildx instance\n        uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1\n        with:\n          use: true\n\n      - uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0\n        with:\n          key: ${{ runner.OS }}-go-${{ hashFiles('**/go.sum') }}\n          restore-keys: |\n            ${{ runner.os }}-go-\n          path: |\n            ~/go/pkg/mod\n            ~/.cache/go-build\n      - uses: crazy-max/ghaction-github-runtime@3cb05d89e1f492524af3d41a1c98c83bc3025124 # v3.1.0\n\n      - name: Get tag\n        run: |\n          echo \"TAG=${GITHUB_REF#refs/tags/}\" >> $GITHUB_ENV\n\n      - name: Log in to the GHCR\n        uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0\n        with:\n          registry: ${{ env.REGISTRY }}\n          username: ${{ github.actor }}\n          password: ${{ secrets.GITHUB_TOKEN }}\n\n      - name: Build eraser-manager\n        run: make docker-build-manager \\\n          CACHE_FROM=type=gha,scope=eraser-manager \\\n          CACHE_TO=type=gha,scope=eraser-manager,mode=max \\\n          PLATFORM=\"linux/amd64,linux/arm64\" \\\n          OUTPUT_TYPE=type=registry \\\n          GENERATE_ATTESTATIONS=true \\\n          MANAGER_IMG=${{ env.REGISTRY }}/${GITHUB_REPOSITORY_OWNER}/eraser-manager:${TAG}\n\n      - name: Build remover\n        run: make docker-build-remover \\\n          CACHE_FROM=type=gha,scope=eraser-node \\\n          CACHE_TO=type=gha,scope=eraser-node,mode=max \\\n          PLATFORM=\"linux/amd64,linux/arm64\" \\\n          OUTPUT_TYPE=type=registry \\\n          GENERATE_ATTESTATIONS=true \\\n          REMOVER_IMG=${{ env.REGISTRY }}/${GITHUB_REPOSITORY_OWNER}/remover:${TAG}\n\n      - name: Build collector\n        run: make docker-build-collector \\\n          CACHE_FROM=type=gha,scope=collector \\\n          CACHE_TO=type=gha,scope=collector,mode=max \\\n          PLATFORM=\"linux/amd64,linux/arm64\" \\\n          OUTPUT_TYPE=type=registry \\\n          GENERATE_ATTESTATIONS=true \\\n          COLLECTOR_IMG=${{ env.REGISTRY }}/${GITHUB_REPOSITORY_OWNER}/collector:${TAG}\n\n      - name: Build Trivy scanner\n        run: make docker-build-trivy-scanner \\\n          CACHE_FROM=type=gha,scope=trivy-scanner \\\n          CACHE_TO=type=gha,scope=trivy-scanner,mode=max \\\n          PLATFORM=\"linux/amd64,linux/arm64\" \\\n          OUTPUT_TYPE=type=registry \\\n          GENERATE_ATTESTATIONS=true \\\n          TRIVY_SCANNER_IMG=${{ env.REGISTRY }}/${GITHUB_REPOSITORY_OWNER}/eraser-trivy-scanner:${TAG}\n\n      - name: Create GitHub release\n        uses: marvinpinto/action-automatic-releases@919008cf3f741b179569b7a6fb4d8860689ab7f0 # v1.2.1\n        with:\n          repo_token: \"${{ secrets.GITHUB_TOKEN }}\"\n          prerelease: false\n\n      - name: 
Publish Helm chart\n        uses: stefanprodan/helm-gh-pages@0ad2bb377311d61ac04ad9eb6f252fb68e207260 # v1.7.0\n        with:\n          token: ${{ secrets.GITHUB_TOKEN }}\n          charts_dir: charts\n          target_dir: charts\n          linting: off\n"
  },
  {
    "path": ".github/workflows/scan-images.yaml",
    "content": "name: Scan Images for Vulnerabilities (Trivy)\nrun-name: Scan ${{ inputs.version == '' && github.ref_name || inputs.version }} images for vulnerabilities ${{ github.event_name == 'schedule' && '(scheduled)' || '' }}\non:\n  schedule:\n    - cron: \"0 7 * * 1\" # Run every Monday at 7:00 AM UTC\n  workflow_dispatch:\n    inputs:\n      version:\n        description: \"Version of Eraser to run Trivy scans against. Leave empty to scan images built from the branch the action is running against.\"\n        type: string\n        required: false\n        default: \"\"\n      upload-results:\n        description: \"Upload results to Github Security?\"\n        type: boolean\n        required: true\n        default: false\n\npermissions: read-all\n\nenv:\n  # Scanning released versions require the project `eraser-dev` as part of the registry name.\n  REGISTRY: ghcr.io/${{ github.event.inputs.version == '' && 'eraser-test' || 'eraser-dev' }}\n  TAG: ${{ github.event.inputs.version == '' && 'test' || github.event.inputs.version }}\n\njobs:\n  scan_vulnerabilities:\n    name: Scan ${{ matrix.data.image }} for vulnerabilities\n    runs-on: ubuntu-latest\n    timeout-minutes: 15\n    strategy:\n      matrix:\n        data:\n          - {image: remover, build_cmd: docker-build-remover, repo_environment_var: REMOVER_REPO}\n          - {image: eraser-manager, build_cmd: docker-build-manager, repo_environment_var: MANAGER_REPO}\n          - {image: collector, build_cmd: docker-build-collector, repo_environment_var: COLLECTOR_REPO}\n          - {image: eraser-trivy-scanner, build_cmd: docker-build-trivy-scanner, repo_environment_var: TRIVY_SCANNER_REPO}\n    steps:\n      - name: Harden Runner\n        uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2\n        with:\n          egress-policy: audit\n\n      - name: Check out code\n        if: github.event_name == 'schedule' || github.event.inputs.version == ''\n        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0\n\n      - name: Build image\n        if: github.event_name == 'schedule' || github.event.inputs.version == ''\n        run: |\n          make ${{ matrix.data.build_cmd }} VERSION=${{ env.TAG }} ${{ matrix.data.repo_environment_var }}=${{ env.REGISTRY }}/${{ matrix.data.image }}\n\n      - name: Scan for vulnerabilities\n        uses: aquasecurity/trivy-action@b6643a29fecd7f34b3597bc6acb0a98b03d33ff8 # 0.33.1\n        with:\n          image-ref: ${{ env.REGISTRY }}/${{ matrix.data.image }}:${{ env.TAG }}\n          vuln-type: 'os,library'\n          ignore-unfixed: true\n          format: 'sarif'\n          output: ${{ matrix.data.image }}-results.sarif\n\n      - uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4  # v5.0.0\n        with:\n          name: ${{ matrix.data.image }} Scan Results\n          path: ${{ matrix.data.image }}-results.sarif\n          overwrite: true\n\n  upload_vulnerabilities:\n    name: Upload ${{ matrix.image }} results to GitHub Security\n    runs-on: ubuntu-latest\n    needs: scan_vulnerabilities\n    if: github.event_name == 'schedule' || (github.event_name == 'workflow_dispatch' && github.event.inputs.upload-results == 'true')\n    permissions:\n      actions: read\n      contents: read\n      security-events: write\n    strategy:\n      matrix:\n        image: [remover, eraser-manager, collector, eraser-trivy-scanner]\n    steps:\n      - name: Harden Runner\n        uses: 
step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2\n        with:\n          egress-policy: audit\n\n      - uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0\n        with:\n          name: ${{ matrix.image }} Scan Results\n          path: ${{ matrix.image }}-results.sarif\n          merge-multiple: true\n\n      - name: Upload results to GitHub Security\n        uses: github/codeql-action/upload-sarif@fdbfb4d2750291e159f0156def62b853c2798ca2 # v2.14.4\n        with:\n          sarif_file: ${{ matrix.image }}-results.sarif\n"
  },
  {
    "path": ".github/workflows/scorecard.yml",
    "content": "name: Scorecard supply-chain security\non:\n  # For Branch-Protection check. Only the default branch is supported. See\n  # https://github.com/ossf/scorecard/blob/main/docs/checks.md#branch-protection\n  branch_protection_rule:\n  # To guarantee Maintained check is occasionally updated. See\n  # https://github.com/ossf/scorecard/blob/main/docs/checks.md#maintained\n  schedule:\n    - cron: '0 17 * * 1'\n  push:\n    branches: [ \"main\" ]\n\n# Declare default permissions as read only.\npermissions: read-all\n\njobs:\n  analysis:\n    name: Scorecard analysis\n    runs-on: ubuntu-latest\n    permissions:\n      # Needed to upload the results to code-scanning dashboard.\n      security-events: write\n      # Needed to publish results and get a badge (see publish_results below).\n      id-token: write\n\n    steps:\n      - name: Harden Runner\n        uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2\n        with:\n          egress-policy: audit\n\n      - name: \"Checkout code\"\n        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v3.1.0\n        with:\n          persist-credentials: false\n\n      - name: \"Run analysis\"\n        uses: ossf/scorecard-action@4eaacf0543bb3f2c246792bd56e8cdeffafb205a # v2.4.3\n        with:\n          results_file: results.sarif\n          results_format: sarif\n          # (Optional) \"write\" PAT token. Uncomment the `repo_token` line below if:\n          # - you want to enable the Branch-Protection check on a *public* repository, or\n          # - you are installing Scorecard on a *private* repository\n          # To create the PAT, follow the steps in https://github.com/ossf/scorecard-action#authentication-with-pat.\n          # repo_token: ${{ secrets.SCORECARD_TOKEN }}\n\n          # Public repositories:\n          #   - Publish results to OpenSSF REST API for easy access by consumers\n          #   - Allows the repository to include the Scorecard badge.\n          #   - See https://github.com/ossf/scorecard-action#publishing-results.\n          # For private repositories:\n          #   - `publish_results` will always be set to `false`, regardless\n          #     of the value entered here.\n          publish_results: true\n\n      # Upload the results as artifacts (optional). Commenting out will disable uploads of run results in SARIF\n      # format to the repository Actions tab.\n      - name: \"Upload artifact\"\n        uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4  # v5.0.0\n        with:\n          name: SARIF file\n          path: results.sarif\n          retention-days: 5\n          overwrite: true\n\n      # Upload the results to GitHub's code scanning dashboard.\n      - name: \"Upload to code-scanning\"\n        uses: github/codeql-action/upload-sarif@fdbfb4d2750291e159f0156def62b853c2798ca2 # v2.2.4\n        with:\n          sarif_file: results.sarif\n"
  },
  {
    "path": ".github/workflows/test.yaml",
    "content": "name: test\non:\n  push:\n    paths-ignore:\n      - \"**.md\"\n      - \"hack/**\"\n      - \"docs/**\"\n  pull_request:\n    paths-ignore:\n      - \"**.md\"\n      - \"hack/**\"\n      - \"docs/**\"\nenv:\n  REGISTRY: ghcr.io\n\npermissions: read-all\n\njobs:\n  generate-bucket-id:\n    name: \"Generate build id for storage\"\n    uses: ./.github/workflows/build-id.yaml\n\n  build-images:\n    name: \"Build images for e2e tests\"\n    uses: ./.github/workflows/e2e-build.yaml\n    needs:\n      - generate-bucket-id\n    with:\n      bucket-id: ${{ needs.generate-bucket-id.outputs.bucket-id }}\n\n  e2e-test:\n    name: \"Run e2e tests\"\n    uses: ./.github/workflows/e2e-test.yaml\n    permissions:\n      contents: write\n    needs:\n      - build-images\n      - generate-bucket-id\n    with:\n      bucket-id: ${{ needs.generate-bucket-id.outputs.bucket-id }}\n\n  lint:\n    name: \"Lint\"\n    runs-on: ubuntu-latest\n    timeout-minutes: 40\n    steps:\n      - name: Harden Runner\n        uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2\n        with:\n          egress-policy: audit\n      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0\n      - name: Set up Go\n        uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0\n        with:\n          go-version: \"1.25\"\n          check-latest: true\n      - name: lint manager\n        uses: golangci/golangci-lint-action@e7fa5ac41e1cf5b7d48e45e42232ce7ada589601 # v9.1.0\n        with:\n          version: latest\n          args: --timeout=10m\n      - name: lint remover\n        uses: golangci/golangci-lint-action@e7fa5ac41e1cf5b7d48e45e42232ce7ada589601 # v9.1.0\n        with:\n          version: latest\n          working-directory: pkg/remover\n          skip-pkg-cache: true\n          args: --timeout=10m\n      - name: lint collector\n        uses: golangci/golangci-lint-action@e7fa5ac41e1cf5b7d48e45e42232ce7ada589601 # v9.1.0\n        with:\n          version: latest\n          working-directory: pkg/collector\n          skip-pkg-cache: true\n          args: --timeout=10m\n      - name: lint trivvy scanner\n        uses: golangci/golangci-lint-action@e7fa5ac41e1cf5b7d48e45e42232ce7ada589601 # v9.1.0\n        with:\n          version: latest\n          working-directory: pkg/scanners/trivy\n          skip-pkg-cache: true\n          args: --timeout=10m\n\n  unit-test:\n    name: \"Unit Tests\"\n    runs-on: ubuntu-latest\n    timeout-minutes: 40\n    steps:\n      - name: Harden Runner\n        uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2\n        with:\n          egress-policy: audit\n      - name: Set up Go\n        uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0\n        with:\n          go-version: \"1.25\"\n          check-latest: true\n      - uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0\n        with:\n          key: ${{ runner.OS }}-go-${{ hashFiles('**/go.sum') }}\n          restore-keys: |\n            ${{ runner.os }}-go-\n          path: |\n            ~/go/pkg/mod\n            ~/.cache/go-build\n      - name: Check out code into the Go module directory\n        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0\n      - name: Unit test\n        run: make test\n      - name: Codecov upload\n        uses: codecov/codecov-action@5a1091511ad55cbe89839c7260b706298ca349f7\n        with:\n          flags: unittests\n          file: 
./cover.out\n          fail_ci_if_error: false\n\n  check-manifest:\n    name: \"Check codegen and manifest\"\n    runs-on: ubuntu-latest\n    timeout-minutes: 10\n    steps:\n      - name: Harden Runner\n        uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2\n        with:\n          egress-policy: audit\n      - name: Check out code into the Go module directory\n        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0\n      - name: Set up Go\n        uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0\n        with:\n          go-version: \"1.25\"\n          check-latest: true\n      - name: Check go.mod and manifests\n        run: |\n          go mod tidy\n          git diff --exit-code\n          make generate manifests\n          git diff --exit-code\n\n  scan_vulnerabilities:\n    name: \"[Trivy] Scan for vulnerabilities\"\n    runs-on: ubuntu-latest\n    timeout-minutes: 15\n    permissions:\n      contents: read\n    steps:\n      - name: Harden Runner\n        uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2\n        with:\n          egress-policy: audit\n\n      - name: Check out code into the Go module directory\n        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0\n\n      - name: Get repo\n        run: |\n          echo \"REPO=$(echo $GITHUB_REPOSITORY | awk '{print tolower($0)}')\" >> $GITHUB_ENV\n\n      - name: Build eraser-manager\n        run: |\n          make docker-build-manager MANAGER_REPO=${{ env.REGISTRY }}/${REPO}-manager MANAGER_TAG=test\n      - name: Build remover\n        run: |\n          make docker-build-remover REMOVER_REPO=${{ env.REGISTRY }}/remover REMOVER_TAG=test\n      - name: Build collector\n        run: |\n          make docker-build-collector COLLECTOR_REPO=${{ env.REGISTRY }}/collector COLLECTOR_TAG=test\n      - name: Build trivy scanner\n        run: |\n          make docker-build-trivy-scanner TRIVY_SCANNER_REPO=${{ env.REGISTRY }}/${REPO}-trivy-scanner TRIVY_SCANNER_TAG=test\n\n      - name: Run trivy for remover\n        uses: aquasecurity/trivy-action@b6643a29fecd7f34b3597bc6acb0a98b03d33ff8\n        with:\n          image-ref: ${{ env.REGISTRY }}/remover:test\n          exit-code: \"1\"\n          ignore-unfixed: true\n          vuln-type: \"os,library\"\n\n      - name: Run trivy for eraser-manager\n        uses: aquasecurity/trivy-action@b6643a29fecd7f34b3597bc6acb0a98b03d33ff8\n        with:\n          image-ref: ${{ env.REGISTRY }}/${{ env.REPO }}-manager:test\n          exit-code: \"1\"\n          ignore-unfixed: true\n          vuln-type: \"os,library\"\n\n      - name: Run trivy for collector\n        uses: aquasecurity/trivy-action@b6643a29fecd7f34b3597bc6acb0a98b03d33ff8\n        with:\n          image-ref: ${{ env.REGISTRY }}/collector:test\n          exit-code: \"1\"\n          ignore-unfixed: true\n          vuln-type: \"os,library\"\n\n      - name: Run trivy for trivy-scanner\n        uses: aquasecurity/trivy-action@b6643a29fecd7f34b3597bc6acb0a98b03d33ff8\n        with:\n          image-ref: ${{ env.REGISTRY }}/${{ env.REPO }}-trivy-scanner:test\n          exit-code: \"1\"\n          ignore-unfixed: true\n          vuln-type: \"os,library\"\n"
  },
  {
    "path": ".github/workflows/upgrade.yaml",
    "content": "name: upgrade\non:\n  push:\n    paths:\n      - \"manifest_staging/charts/**\"\n      - \".github/workflows/upgrade.yaml\"\n\n  pull_request:\n    paths:\n      - \"manifest_staging/charts/**\"\n      - \".github/workflows/upgrade.yaml\"\n\nenv:\n  REGISTRY: ghcr.io\n\npermissions: read-all\n\njobs:\n  generate-bucket-id:\n    name: \"Generate build id for storage\"\n    uses: ./.github/workflows/build-id.yaml\n\n  build-images:\n    name: \"Build images for e2e tests\"\n    uses: ./.github/workflows/e2e-build.yaml\n    needs:\n      - generate-bucket-id\n    with:\n      bucket-id: ${{ needs.generate-bucket-id.outputs.bucket-id }}\n\n  e2e-test:\n    name: \"Run e2e tests\"\n    uses: ./.github/workflows/e2e-test.yaml\n    permissions:\n      contents: write\n    needs:\n      - build-images\n      - generate-bucket-id\n    with:\n      upgrade-test: \"1\"\n      bucket-id: ${{ needs.generate-bucket-id.outputs.bucket-id }}\n"
  },
  {
    "path": ".gitignore",
    "content": "# Binaries for programs and plugins\n*.exe\n*.exe~\n*.dll\n*.so\n*.dylib\nbin\ntestbin/*\n.eraser\n./pkg/eraser/eraser\n# Test binary, build with `go test -c`\n*.test\n\n# Output of the go coverage tool, specifically when used with LiteIDE\n*.out\n\n# Kubernetes Generated files - skip generated files, except for vendored files\n\n!vendor/**/zz_generated.*\n\n# editor and IDE paraphernalia\n.idea\n*.swp\n*.swo\n*~\n\n# history files\n.history\n\n.vscode/\n\n# macOS\n.DS_Store\n\n# docs site\nnode_modules/\n.docusaurus/\n/docs/build/\n\n!/build/\n/build/*\n!/build/tooling/\n!/build/version.sh\n\n# e2e test log outputs\ntest/e2e/tests/eraser_logs/\neraser\nremover\n"
  },
  {
    "path": ".golangci.yaml",
    "content": "version: \"2\"\n\nrun:\n  go: \"1.25\"\n\nlinters:\n  default: none\n  enable:\n    - errcheck\n    - copyloopvar # replacement for exportloopref\n    - forcetypeassert\n    - gocritic\n    - goconst\n    - godot\n    - gosec\n    - govet\n    - ineffassign\n    - misspell\n    # - revive # replacement for golint\n    - staticcheck # includes gosimple and staticcheck\n    - unused\n    - whitespace\n  settings:\n    gocritic:\n      enabled-tags:\n      - performance\n    gosec:\n      excludes:\n      - G108\n    lll:\n      line-length: 200\n    misspell:\n      locale: US\n  exclusions:\n    paths:\n      - \"docs/build/assets/files/.*\\\\.go\"\n\nformatters:\n  enable:\n    - gofmt\n    - gofumpt\n    - goimports\n"
  },
  {
    "path": ".trivyignore",
    "content": "GHSA-6xv5-86q9-7xr8\n"
  },
  {
    "path": "CODEOWNERS",
    "content": "# Global approvers\n*   @ashnamehrotra @pmengelbert @sozercan\n"
  },
  {
    "path": "CODE_OF_CONDUCT.md",
    "content": "# CNCF Code of Conduct\n\nThis project has adopted the [CNCF Community Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md).\n\nResources:\n\n- [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)\n- [Reporting](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)\n"
  },
  {
    "path": "Dockerfile",
    "content": "# syntax=docker/dockerfile:1.6\n\n# Default Trivy binary image, overwritten by Makefile\nARG TRIVY_BINARY_IMG=\"ghcr.io/aquasecurity/trivy:0.67.2\"\nARG BUILDKIT_SBOM_SCAN_STAGE=builder,manager-build,collector-build,remover-build,trivy-scanner-build\n\nFROM --platform=$TARGETPLATFORM $TRIVY_BINARY_IMG AS trivy-binary\n\n# Build the manager binary\nFROM --platform=$BUILDPLATFORM golang:1.25-bookworm AS builder\nWORKDIR /workspace\n# Copy the Go Modules manifests\nCOPY go.mod go.mod\nCOPY go.sum go.sum\n# cache deps before building and copying source so that we don't need to re-download as much\n# and so that source changes don't invalidate our downloaded layer\nENV GOCACHE=/root/gocache\nENV CGO_ENABLED=0\nRUN \\\n    --mount=type=cache,target=${GOCACHE} \\\n    --mount=type=cache,target=/go/pkg/mod \\\n    go mod download\nCOPY . .\n\nARG LDFLAGS\nARG TARGETOS\nARG TARGETARCH\n\nFROM builder AS manager-build\nRUN \\\n    --mount=type=cache,target=${GOCACHE} \\\n    --mount=type=cache,target=/go/pkg/mod \\\n    GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build ${LDFLAGS:+-ldflags \"$LDFLAGS\"} -o out/manager main.go\n\nFROM builder AS collector-build\nRUN \\\n    --mount=type=cache,target=${GOCACHE} \\\n    --mount=type=cache,target=/go/pkg/mod \\\n    GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build ${LDFLAGS:+-ldflags \"$LDFLAGS\"} -o out/collector ./pkg/collector\n\nFROM builder AS remover-build\nRUN \\\n    --mount=type=cache,target=${GOCACHE} \\\n    --mount=type=cache,target=/go/pkg/mod \\\n    GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build ${LDFLAGS:+-ldflags \"$LDFLAGS\"} -o out/remover ./pkg/remover\n\nFROM builder AS trivy-scanner-build\nRUN \\\n    --mount=type=cache,target=${GOCACHE} \\\n    --mount=type=cache,target=/go/pkg/mod \\\n    GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build ${LDFLAGS:+-ldflags \"$LDFLAGS\"} -o out/trivy-scanner ./pkg/scanners/trivy\n\nFROM --platform=$TARGETPLATFORM gcr.io/distroless/static:nonroot AS manager\nWORKDIR /\nCOPY --from=manager-build /workspace/out/manager .\nUSER 65532:65532\nENTRYPOINT [\"/manager\"]\n\nFROM --platform=$TARGETPLATFORM gcr.io/distroless/static:latest as collector\nCOPY --from=collector-build /workspace/out/collector /\nENTRYPOINT [\"/collector\"]\n\nFROM --platform=$TARGETPLATFORM gcr.io/distroless/static:latest as remover\nCOPY --from=remover-build /workspace/out/remover /\nENTRYPOINT [\"/remover\"]\n\nFROM --platform=$TARGETPLATFORM gcr.io/distroless/static:latest as trivy-scanner\nCOPY --from=trivy-scanner-build /workspace/out/trivy-scanner /\nCOPY --from=trivy-binary /usr/local/bin/trivy /\nWORKDIR /var/lib/trivy\nENTRYPOINT [\"/trivy-scanner\"]\n\nFROM gcr.io/distroless/static-debian12:nonroot AS non-vulnerable\nCOPY --from=builder /tmp /tmp\n"
  },
  {
    "path": "LICENSE",
    "content": "\n                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. 
For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. 
Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"[]\"\n      replaced with your own identifying information. (Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright 2023 The Linux Foundation\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License."
  },
  {
    "path": "MAINTAINERS.md",
    "content": "# Maintainers\n\nMaintainers:\n- Sertaç Özercan (@sozercan)\n- Ashna Mehrotra (@ashnamehrotra)\n- Peter Engelbert (@pmengelbert)\n- Brian Goff (@cpuguy83)\n"
  },
  {
    "path": "Makefile",
    "content": "VERSION := v1.5.0-beta.0\n\nMANAGER_TAG ?= ${VERSION}\nTRIVY_SCANNER_TAG ?= ${VERSION}\nCOLLECTOR_TAG ?= ${VERSION}\nREMOVER_TAG ?= ${VERSION}\n\n# Image URL to use all building/pushing image targets\nTRIVY_SCANNER_REPO ?= ghcr.io/eraser-dev/eraser-trivy-scanner\nTRIVY_SCANNER_IMG ?= ${TRIVY_SCANNER_REPO}:${TRIVY_SCANNER_TAG}\nTRIVY_BINARY_REPO ?= ghcr.io/aquasecurity/trivy\nTRIVY_BINARY_TAG ?= 0.67.2\nTRIVY_BINARY_IMG ?= ${TRIVY_BINARY_REPO}:${TRIVY_BINARY_TAG}\nMANAGER_REPO ?= ghcr.io/eraser-dev/eraser-manager\nMANAGER_IMG ?= ${MANAGER_REPO}:${MANAGER_TAG}\nREMOVER_REPO ?= ghcr.io/eraser-dev/remover\nREMOVER_IMG ?= ${REMOVER_REPO}:${REMOVER_TAG}\nCOLLECTOR_REPO ?= ghcr.io/eraser-dev/collector\nCOLLECTOR_IMG ?= ${COLLECTOR_REPO}:${COLLECTOR_TAG}\nVULNERABLE_IMG ?= docker.io/library/alpine:3.7.3\nEOL_IMG ?= docker.io/library/alpine:3.6\nBUSYBOX_BASE_IMG ?= busybox:1.36.0\nNON_VULNERABLE_IMG ?= ghcr.io/eraser-dev/non-vulnerable:latest\nE2E_TESTS ?= $(shell find ./test/e2e/tests/ -mindepth 1 -type d)\nAPI_VERSIONS ?= ./api/v1alpha1,./api/v1,./api/v1alpha2,./api/v1alpha3\n\nHELM_UPGRADE_TEST ?=\nTEST_LOGDIR ?= $(PWD)/test_logs\n\nREMOVER_TARBALL_PATH ?=\nMANAGER_TARBALL_PATH ?=\nCOLLECTOR_TARBALL_PATH ?=\nSCANNER_TARBALL_PATH ?=\n\nKUSTOMIZE_VERSION ?= 3.8.9\nKUBERNETES_VERSION ?= 1.29.2\nNODE_VERSION ?= 20-bullseye-slim\nENVTEST_K8S_VERSION ?= 1.25\nGOLANGCI_LINT_VERSION := 1.43.0\n\nPLATFORM ?= linux\n\n# build variables\nLDFLAGS ?= $(shell build/version.sh \"${VERSION}\")\nERASER_LDFLAGS ?= -extldflags=-static $(LDFLAGS) -w\nTRIVY_SCANNER_LDFLAGS ?= $(ERASER_LDFLAGS) -X 'main.trivyVersion=v$(TRIVY_BINARY_TAG)'\n\n# Get the currently used golang install path (in GOPATH/bin, unless GOBIN is set)\nifeq (,$(shell go env GOBIN))\nGOBIN=$(shell go env GOPATH)/bin\nelse\nGOBIN=$(shell go env GOBIN)\nendif\n\nifdef CACHE_TO\n_CACHE_TO := --cache-to $(CACHE_TO)\nendif\n\nifdef CACHE_FROM\n_CACHE_FROM := --cache-from $(CACHE_FROM)\nendif\n\nifdef GENERATE_ATTESTATIONS\n_ATTESTATIONS := --attest type=sbom --attest type=provenance,mode=max\nendif\n\nIDFLAGS=\nifeq (false,$(shell hack/rootless_docker.sh))\nIDFLAGS=-u $(shell id -u):$(shell id -g)\nendif\n\nOUTPUT_TYPE ?= type=docker\nTOOLS_DIR := hack/tools\nTOOLS_BIN_DIR := $(abspath $(TOOLS_DIR)/bin)\nGO_INSTALL := ./hack/go-install.sh\n\nGOLANGCI_LINT_BIN := golangci-lint\nGOLANGCI_LINT := $(TOOLS_BIN_DIR)/$(GOLANGCI_LINT_BIN)-v$(GOLANGCI_LINT_VERSION)\n\nTEST_COUNT ?= 1\nTIMEOUT ?= 1800s\n\n$(GOLANGCI_LINT):\n\tGOBIN=$(TOOLS_BIN_DIR) $(GO_INSTALL) github.com/golangci/golangci-lint/cmd/golangci-lint $(GOLANGCI_LINT_BIN) v$(GOLANGCI_LINT_VERSION)\n\n# Setting SHELL to bash allows bash commands to be executed by recipes.\n# This is a requirement for 'setup-envtest.sh' in the test target.\n# Options are set to exit when a recipe line exits non-zero or a piped command fails.\nSHELL = /usr/bin/env bash -o pipefail\n.SHELLFLAGS = -ec\n\nall: build\n\n##@ General\n\n# The help target prints out all targets with their descriptions organized\n# beneath their categories. The categories are represented by '##@' and the\n# target descriptions by '##'. The awk commands is responsible for reading the\n# entire set of makefiles included in this invocation, looking for lines of the\n# file as xyz: ## something, and then pretty-format the target and help. 
Then,\n# if there's a line with ##@ something, that gets pretty-printed as a category.\n# More info on the usage of ANSI control characters for terminal formatting:\n# https://en.wikipedia.org/wiki/ANSI_escape_code#SGR_parameters\n# More info on the awk command:\n# http://linuxcommand.org/lc3_adv_awk.php\n\nhelp: ## Display this help.\n\t@awk 'BEGIN {FS = \":.*##\"; printf \"\\nUsage:\\n  make \\033[36m<target>\\033[0m\\n\"} /^[a-zA-Z_0-9-]+:.*?##/ { printf \"  \\033[36m%-15s\\033[0m %s\\n\", $$1, $$2 } /^##@/ { printf \"\\n\\033[1m%s\\033[0m\\n\", substr($$0, 5) } ' $(MAKEFILE_LIST)\n\n##@ Linting\n.PHONY: lint\nlint: $(GOLANGCI_LINT) ## Runs go linting.\n\t$(GOLANGCI_LINT) run -v\n\n##@ Development\n\n#kustomize_\n\nmanifests: __manifest_kustomize __helm_kustomize __controller-gen ## Generates k8s yaml for eraser deployment.\n\t$(CONTROLLER_GEN) \\\n\t\tcrd \\\n\t\trbac:roleName=manager-role \\\n\t\twebhook \\\n\t\tpaths=\"./...\" \\\n\t\toutput:crd:artifacts:config=config/crd/bases\n\trm -rf manifest_staging\n\tmkdir -p manifest_staging/deploy\n\tmkdir -p manifest_staging/charts/eraser\n\t$(MANIFEST_KUSTOMIZE) build /eraser/config/default -o /eraser/manifest_staging/deploy/eraser.yaml\n\t$(HELM_KUSTOMIZE) build \\\n\t\t--load_restrictor LoadRestrictionsNone /eraser/third_party/open-policy-agent/gatekeeper/helmify | \\\n\t\tgo run third_party/open-policy-agent/gatekeeper/helmify/*.go\n\n# Generate code containing DeepCopy, DeepCopyInto, and DeepCopyObject method\n# implementations. Also generate conversions between structs of different API versions.\ngenerate: __conversion-gen __controller-gen\n\t$(CONTROLLER_GEN) object:headerFile=\"hack/boilerplate.go.txt\" paths=\"./api/...\"\n\t$(CONVERSION_GEN) \\\n\t\t--output-base=/eraser \\\n\t\t--input-dirs=$(API_VERSIONS) \\\n\t\t--go-header-file=./hack/boilerplate.go.txt \\\n\t\t--output-file-base=zz_generated.conversion\n\nfmt: ## Run go fmt against code.\n\tgo fmt ./...\n\nvet: ## Run go vet against code.\n\tgo vet ./...\n\ntest: manifests generate fmt vet envtest ## Run unit tests.\n\tKUBEBUILDER_ASSETS=\"$(shell $(ENVTEST) use $(ENVTEST_K8S_VERSION) -p path)\" go test ./... 
-coverprofile cover.out\n\nbusybox-img:\n\tdocker build -t busybox-e2e-test:latest \\\n\t\t-f test/e2e/test-data/Dockerfile.busybox \\\n\t\t--build-arg IMG=$(BUSYBOX_BASE_IMG) test/e2e/test-data\nBUSYBOX_IMG=busybox-e2e-test:latest\n\ncollector-dummy-img:\n\tdocker build -t $(COLLECTOR_REPO):dummy \\\n\t\t-f test/e2e/test-data/Dockerfile.dummyCollector \\\n\t\ttest/e2e/test-data\nCOLLECTOR_IMAGE_DUMMY=$(COLLECTOR_REPO):dummy\n\nvulnerable-img:\n\tdocker pull $(VULNERABLE_IMG)\n\neol-img:\n\tdocker pull $(EOL_IMG)\n\nnon-vulnerable-img:\n\tdocker buildx build \\\n\t\t$(_CACHE_FROM) $(_CACHE_TO) \\\n\t\t--build-arg LDFLAGS=\"$(LDFLAGS)\" \\\n\t\t--platform=\"$(PLATFORM)\" \\\n\t\t--output=$(OUTPUT_TYPE) \\\n\t\t-t ${NON_VULNERABLE_IMG} \\\n\t\t--target non-vulnerable .\n\ncustom-node-v$(KUBERNETES_VERSION):\n\tdocker build -t custom-node:v$(KUBERNETES_VERSION) \\\n\t\t-f test/e2e/test-data/Dockerfile.customNode \\\n\t\t--build-arg KUBERNETES_VERSION=${KUBERNETES_VERSION} test/e2e/test-data\nMODIFIED_NODE_IMAGE=custom-node:v$(KUBERNETES_VERSION)\n\ne2e-test: vulnerable-img eol-img non-vulnerable-img busybox-img collector-dummy-img custom-node-v$(KUBERNETES_VERSION)\n\tfor test in $(E2E_TESTS); do \\\n\t\tCGO_ENABLED=0 \\\n            PROJECT_ABSOLUTE_PATH=$(CURDIR) \\\n            REMOVER_TARBALL_PATH=${REMOVER_TARBALL_PATH} \\\n            MANAGER_TARBALL_PATH=${MANAGER_TARBALL_PATH} \\\n            COLLECTOR_TARBALL_PATH=${COLLECTOR_TARBALL_PATH} \\\n            SCANNER_TARBALL_PATH=${SCANNER_TARBALL_PATH} \\\n\t\t\tHELM_UPGRADE_TEST=${HELM_UPGRADE_TEST} \\\n\t\t\tREMOVER_IMAGE=${REMOVER_IMG} \\\n\t\t\tMANAGER_IMAGE=${MANAGER_IMG} \\\n\t\t\tCOLLECTOR_IMAGE=${COLLECTOR_IMG} \\\n\t\t\tSCANNER_IMAGE=${TRIVY_SCANNER_IMG} \\\n\t\t\tBUSYBOX_IMAGE=${BUSYBOX_IMG} \\\n\t\t\tCOLLECTOR_IMAGE_DUMMY=${COLLECTOR_IMAGE_DUMMY} \\\n\t\t\tVULNERABLE_IMAGE=${VULNERABLE_IMG} \\\n\t\t\tNON_VULNERABLE_IMAGE=${NON_VULNERABLE_IMG} \\\n\t\t\tEOL_IMAGE=${EOL_IMG} \\\n\t\t\tNODE_VERSION=kindest/node:v${KUBERNETES_VERSION} \\\n\t\t\tMODIFIED_NODE_IMAGE=${MODIFIED_NODE_IMAGE} \\\n\t\t\tTEST_LOGDIR=${TEST_LOGDIR} \\\n\t\t\tgo test -count=$(TEST_COUNT) -timeout=$(TIMEOUT) $(TESTFLAGS) -tags=e2e -v $$test ; \\\n\tdone\n\n##@ Build\nbuild: generate fmt vet ## Build manager binary.\n\tgo build -o bin/manager -ldflags \"$(LDFLAGS)\" main.go\n\nrun: manifests generate fmt vet ## Run a controller from your host.\n\tgo run ./main.go\n\ndocker-build-manager: ## Build docker image with the manager.\n\tdocker buildx build \\\n\t\t$(_CACHE_FROM) $(_CACHE_TO) \\\n\t\t$(_ATTESTATIONS) \\\n\t\t--build-arg LDFLAGS=\"$(LDFLAGS)\" \\\n\t\t--platform=\"$(PLATFORM)\" \\\n\t\t--output=$(OUTPUT_TYPE) \\\n\t\t-t ${MANAGER_IMG} \\\n\t\t--target manager .\n\ndocker-build-trivy-scanner: ## Build docker image for trivy-scanner image.\n\tdocker buildx build \\\n\t\t$(_CACHE_FROM) $(_CACHE_TO) \\\n\t\t$(_ATTESTATIONS) \\\n\t\t--build-arg TRIVY_BINARY_IMG=\"$(TRIVY_BINARY_IMG)\" \\\n\t\t--build-arg LDFLAGS=\"$(TRIVY_SCANNER_LDFLAGS)\" \\\n\t\t--platform=\"$(PLATFORM)\" \\\n\t\t--output=$(OUTPUT_TYPE) \\\n\t\t-t ${TRIVY_SCANNER_IMG} \\\n\t\t--target trivy-scanner .\n\ndocker-build-remover: ## Build docker image for remover image.\n\tdocker buildx build \\\n\t\t$(_CACHE_FROM) $(_CACHE_TO) \\\n\t\t$(_ATTESTATIONS) \\\n\t\t--build-arg LDFLAGS=\"$(ERASER_LDFLAGS)\" \\\n\t\t--platform=\"$(PLATFORM)\" \\\n\t\t--output=$(OUTPUT_TYPE) \\\n\t\t-t ${REMOVER_IMG} \\\n\t\t--target remover .\n\ndocker-build-collector:\n\tdocker buildx build \\\n\t\t$(_CACHE_FROM) 
$(_CACHE_TO) \\\n\t\t$(_ATTESTATIONS) \\\n\t\t--build-arg LDFLAGS=\"$(LDFLAGS)\" \\\n\t\t--platform=\"$(PLATFORM)\" \\\n\t\t--output=$(OUTPUT_TYPE) \\\n\t\t-t ${COLLECTOR_IMG} \\\n\t\t--target collector .\n\n##@ Deployment\n\ninstall: __manifest_kustomize ## Install CRDs into the K8s cluster specified in ~/.kube/config.\n\t$(MANIFEST_KUSTOMIZE) build /eraser/config/crd | kubectl apply -f -\n\nuninstall: __manifest_kustomize ## Uninstall CRDs from the K8s cluster specified in ~/.kube/config.\n\t$(MANIFEST_KUSTOMIZE) build /eraser/config/crd | kubectl delete -f -\n\ndeploy: __manifest_kustomize ## Deploy controller to the K8s cluster specified in ~/.kube/config.\n\t$(MANIFEST_KUSTOMIZE) build /eraser/config/default | kubectl apply -f -\n\nundeploy: __manifest_kustomize ## Undeploy controller from the K8s cluster specified in ~/.kube/config.\n\t$(MANIFEST_KUSTOMIZE) build /eraser/config/default | kubectl delete -f -\n\n##@ Release\n\nrelease-manifest: ## Generates manifests for a release.\n\t@sed -i -e 's/^VERSION := .*/VERSION := ${NEWVERSION}/' ./Makefile\n\t@sed -i'' -e 's@image: $(REPOSITORY):.*@image: $(REPOSITORY):'\"$(NEWVERSION)\"'@' ./config/manager/manager.yaml\n\t@sed -i \"s/appVersion: .*/appVersion: ${NEWVERSION}/\" ./third_party/open-policy-agent/gatekeeper/helmify/static/Chart.yaml\n\t@sed -i \"s/version: .*/version: $$(echo ${NEWVERSION} | cut -c2-)/\" ./third_party/open-policy-agent/gatekeeper/helmify/static/Chart.yaml\n\t@sed -Ei 's/(tag:\\s*).*/\\1\"$(NEWVERSION)\"/' ./third_party/open-policy-agent/gatekeeper/helmify/static/values.yaml\n\t@sed -i 's/Current release version: `.*`/Current release version: `'\"${NEWVERSION}\"'`/' ./third_party/open-policy-agent/gatekeeper/helmify/static/README.md\n\t@sed -i 's/https:\\/\\/raw\\.githubusercontent\\.com\\/eraser-dev\\/eraser\\/master\\/deploy\\/eraser\\.yaml.*/https:\\/\\/raw\\.githubusercontent\\.com\\/eraser-dev\\/eraser\\/${NEWVERSION}\\/deploy\\/eraser\\.yaml/' ./docs/docs/installation.md\n\texport\n\t$(MAKE) manifests\n\npromote-staging-manifest: ## Promotes the k8s deployment yaml files to release.\n\t@rm -rf deploy\n\t@cp -r manifest_staging/deploy .\n\t@rm -rf charts\n\t@cp -r manifest_staging/charts .\n\nENVTEST = $(shell pwd)/bin/setup-envtest\n.PHONY: envtest\nenvtest: __tooling-image bin/setup-envtest\n\nbin/setup-envtest:\n\tdocker run --rm -v $(shell pwd)/bin:/go/bin -e GO111MODULE=on eraser-tooling go install sigs.k8s.io/controller-runtime/tools/setup-envtest@v0.0.0-20240320141353-395cfc7486e6\n\n__controller-gen: __tooling-image\nCONTROLLER_GEN=docker run --rm -v $(shell pwd):/eraser eraser-tooling controller-gen\n\n__conversion-gen: __tooling-image\nCONVERSION_GEN=docker run --rm -v $(shell pwd):/eraser eraser-tooling conversion-gen\n\n__manifest_kustomize: __kustomize-manifest-image\nMANIFEST_KUSTOMIZE=docker run --rm -v $(shell pwd)/manifest_staging:/eraser/manifest_staging manifest-kustomize\n\n__helm_kustomize: __kustomize-helm-image\nHELM_KUSTOMIZE=docker run --rm -v $(shell pwd)/manifest_staging:/eraser/manifest_staging -v $(shell pwd)/third_party:/eraser/third_party helm-kustomize\n\n__tooling-image:\n\tdocker build . \\\n\t\t-t eraser-tooling \\\n\t\t-f build/tooling/Dockerfile\n\n__kustomize-helm-image:\n\tdocker build . \\\n\t\t-t helm-kustomize \\\n\t\t--build-arg KUSTOMIZE_VERSION=${KUSTOMIZE_VERSION} \\\n\t\t-f build/tooling/Dockerfile.helm\n\n__kustomize-manifest-image:\n\tdocker build . 
\\\n\t\t-t manifest-kustomize \\\n\t\t--build-arg KUSTOMIZE_VERSION=${KUSTOMIZE_VERSION} \\\n\t\t--build-arg TRIVY_SCANNER_REPO=${TRIVY_SCANNER_REPO} \\\n\t\t--build-arg MANAGER_REPO=${MANAGER_REPO} \\\n\t\t--build-arg REMOVER_REPO=${REMOVER_REPO} \\\n\t\t--build-arg COLLECTOR_REPO=${COLLECTOR_REPO} \\\n\t\t--build-arg MANAGER_TAG=${MANAGER_TAG} \\\n\t\t--build-arg TRIVY_SCANNER_TAG=${TRIVY_SCANNER_TAG} \\\n\t\t--build-arg COLLECTOR_TAG=${COLLECTOR_TAG} \\\n\t\t--build-arg REMOVER_TAG=${REMOVER_TAG} \\\n\t\t-f build/tooling/Dockerfile.manifest\n\n# Tags a new version for docs\n.PHONY: version-docs\nversion-docs:\n\tdocker run --rm \\\n\t\t-v $(shell pwd)/docs:/docs \\\n\t\t-w /docs \\\n\t\t$(IDFLAGS) \\\n\t\tnode:${NODE_VERSION} \\\n\t\tsh -c \"yarn install --frozen-lockfile && yarn run docusaurus docs:version ${NEWVERSION}\"\n\t@sed -i 's/https:\\/\\/raw\\.githubusercontent\\.com\\/eraser-dev\\/eraser\\/main\\/deploy\\/eraser\\.yaml.*/https:\\/\\/raw\\.githubusercontent\\.com\\/eraser-dev\\/eraser\\/${TAG}\\/deploy\\/eraser\\.yaml/' ./docs/versioned_docs/version-${NEWVERSION}/installation.md\n\n.PHONY: patch-version-docs\npatch-version-docs:\n\t@sed -i 's/https:\\/\\/raw\\.githubusercontent\\.com\\/eraser-dev\\/eraser\\/${OLDVERSION}\\/deploy\\/eraser\\.yaml.*/https:\\/\\/raw\\.githubusercontent\\.com\\/eraser-dev\\/eraser\\/${TAG}\\/deploy\\/eraser\\.yaml/' ./docs/versioned_docs/version-${NEWVERSION}/installation.md\n"
  },
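The LDFLAGS plumbing above relies on Go's `-X` linker flag: `TRIVY_SCANNER_LDFLAGS` stamps `main.trivyVersion` into the scanner binary at link time. A minimal sketch of the receiving side — the variable name comes from the Makefile, but the file layout and default value here are illustrative assumptions, not the repo's actual scanner code:

```go
// Sketch only: shows how -ldflags "-X 'main.trivyVersion=...'" lands in a binary.
package main

import "fmt"

// Default value; overwritten at link time, e.g.:
//   go build -ldflags "-X 'main.trivyVersion=v0.67.2'" .
var trivyVersion = "unknown"

func main() {
	fmt.Println("bundled trivy version:", trivyVersion)
}
```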
  {
    "path": "PROJECT",
    "content": "domain: eraser-dev.io\nlayout:\n- go.kubebuilder.io/v3\nprojectName: eraser\nrepo: github.com/eraser-dev/eraser\nresources:\n- api:\n    crdVersion: v1alpha1\n    namespaced: true\n  controller: true\n  domain: eraser-dev.io\n  group: eraser.sh\n  kind: ImageList\n  path: eraser.io/eraser/api/v1alpha1\n  version: v1alpha1\n- controller: true\n  domain: eraser-dev.io\n  group: eraser.sh\n  kind: ImageCollector\n  version: v1alpha1\n- domain: eraser-dev.io\n  group: eraser.sh\n  kind: ImageList\n  version: v1\n- api:\n    crdVersion: v1\n    namespaced: true\n  domain: eraser.io\n  group: eraser.sh\n  kind: EraserConfig\n  path: github.com/eraser-dev/eraser/api/v1alpha1\n  version: v1alpha1\n- domain: eraser.io\n  group: eraser.sh\n  kind: EraserConfig\n  version: v1alpha2\nversion: \"3\"\n"
  },
  {
    "path": "README.md",
    "content": "# Eraser: Cleaning up Images from Kubernetes Nodes\n\n![GitHub](https://img.shields.io/github/license/eraser-dev/eraser)\n[![FOSSA Status](https://app.fossa.com/api/projects/git%2Bgithub.com%2Feraser-dev%2Feraser.svg?type=shield&issueType=license)](https://app.fossa.com/projects/git%2Bgithub.com%2Feraser-dev%2Feraser?ref=badge_shield)\n[![OpenSSF Best Practices](https://www.bestpractices.dev/projects/7622/badge)](https://www.bestpractices.dev/projects/7622)\n[![OpenSSF Scorecard](https://api.securityscorecards.dev/projects/github.com/eraser-dev/eraser/badge)](https://api.securityscorecards.dev/projects/github.com/eraser-dev/eraser)\n\n<img src=\"./images/eraser-logo-color-1c.png\" alt=\"Eraser logo\" width=\"100%\" />\n\nEraser helps Kubernetes admins remove a list of non-running images from all Kubernetes nodes in a cluster.\n\n## Getting started\n\nYou can find a quick start guide in the Eraser [documentation](https://eraser-dev.github.io/eraser/docs/quick-start).\n\n## Demo\n\n![intro](demo/demo.gif)\n\n## Contributing\n\nThere are several ways to get involved:\n\n- Join the [mailing list](https://groups.google.com/u/1/g/eraser-dev) to get notifications for releases, security announcements, etc.\n- Join the [biweekly community meetings](https://docs.google.com/document/d/1Sj5u47K3WUGYNPmQHGFpb52auqZb1FxSlWAQnPADhWI/edit) to discuss development, issues, use cases, etc.\n- Join the `#eraser` channel on the [Kubernetes Slack](https://kubernetes.slack.com/archives/C03Q8KV8YQ4)\n- View the development setup instructions in the [documentation](https://eraser-dev.github.io/eraser/docs/setup)\n\nThis project welcomes contributions and suggestions.\n\nThis project has adopted the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md).\n\n## Support\n\n### How to file issues and get help\n\nThis project uses GitHub Issues to track bugs and feature requests. Please search the [existing issues](https://github.com/eraser-dev/eraser/issues) before filing new issues to avoid duplicates. For new issues, file your bug or feature request as a new Issue.\n\nThe Eraser maintainers will respond to the best of their abilities."
  },
  {
    "path": "api/group.go",
    "content": "/*\nCopyright 2021.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n\thttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\npackage apis\n"
  },
  {
    "path": "api/unversioned/config/config.go",
    "content": "package config\n\nimport (\n\t\"fmt\"\n\t\"sync\"\n\t\"time\"\n\n\tv1 \"k8s.io/api/core/v1\"\n\t\"k8s.io/apimachinery/pkg/api/resource\"\n\n\t\"github.com/eraser-dev/eraser/api/unversioned\"\n\t\"github.com/eraser-dev/eraser/version\"\n)\n\nvar defaultScannerConfig = `\ncacheDir: /var/lib/trivy\ndbRepo: ghcr.io/aquasecurity/trivy-db\ndeleteFailedImages: true\ndeleteEOLImages: true\nvulnerabilities:\n  ignoreUnfixed: false\n  types:\n    - os\n    - library\nsecurityChecks: # need to be documented; determined by trivy, not us\n  - vuln\nseverities:\n  - CRITICAL\n  - HIGH\n  - MEDIUM\n  - LOW\nignoredStatuses:\n`\n\ntype Manager struct {\n\tmtx sync.Mutex\n\tcfg *unversioned.EraserConfig\n}\n\nfunc (m *Manager) Read() (unversioned.EraserConfig, error) {\n\tm.mtx.Lock()\n\tdefer m.mtx.Unlock()\n\n\tif m.cfg == nil {\n\t\treturn unversioned.EraserConfig{}, fmt.Errorf(\"ConfigManager configuration is nil, aborting\")\n\t}\n\n\tcfg := *m.cfg\n\treturn cfg, nil\n}\n\nfunc (m *Manager) Update(newC *unversioned.EraserConfig) error {\n\tm.mtx.Lock()\n\tdefer m.mtx.Unlock()\n\n\tif m.cfg == nil {\n\t\treturn fmt.Errorf(\"ConfigManager configuration is nil, aborting\")\n\t}\n\n\tif newC == nil {\n\t\treturn fmt.Errorf(\"new configuration is nil, aborting\")\n\t}\n\n\t*m.cfg = *newC\n\treturn nil\n}\n\nfunc NewManager(cfg *unversioned.EraserConfig) *Manager {\n\treturn &Manager{\n\t\tmtx: sync.Mutex{},\n\t\tcfg: cfg,\n\t}\n}\n\nconst (\n\tnoDelay = unversioned.Duration(0)\n\toneDay  = unversioned.Duration(time.Hour * 24)\n)\n\nfunc Default() *unversioned.EraserConfig {\n\treturn &unversioned.EraserConfig{\n\t\tManager: unversioned.ManagerConfig{\n\t\t\tRuntime: unversioned.RuntimeSpec{\n\t\t\t\tName:    unversioned.RuntimeContainerd,\n\t\t\t\tAddress: \"unix:///run/containerd/containerd.sock\",\n\t\t\t},\n\t\t\tOTLPEndpoint: \"\",\n\t\t\tLogLevel:     \"info\",\n\t\t\tScheduling: unversioned.ScheduleConfig{\n\t\t\t\tRepeatInterval:   unversioned.Duration(oneDay),\n\t\t\t\tBeginImmediately: true,\n\t\t\t},\n\t\t\tProfile: unversioned.ProfileConfig{\n\t\t\t\tEnabled: false,\n\t\t\t\tPort:    6060,\n\t\t\t},\n\t\t\tImageJob: unversioned.ImageJobConfig{\n\t\t\t\tSuccessRatio: 1.0,\n\t\t\t\tCleanup: unversioned.ImageJobCleanupConfig{\n\t\t\t\t\tDelayOnSuccess: noDelay,\n\t\t\t\t\tDelayOnFailure: oneDay,\n\t\t\t\t},\n\t\t\t},\n\t\t\tPullSecrets: []string{},\n\t\t\tNodeFilter: unversioned.NodeFilterConfig{\n\t\t\t\tType: \"exclude\",\n\t\t\t\tSelectors: []string{\n\t\t\t\t\t\"eraser.sh/cleanup.filter\",\n\t\t\t\t},\n\t\t\t},\n\t\t\tAdditionalPodLabels: map[string]string{},\n\t\t},\n\t\tComponents: unversioned.Components{\n\t\t\tCollector: unversioned.OptionalContainerConfig{\n\t\t\t\tEnabled: false,\n\t\t\t\tContainerConfig: unversioned.ContainerConfig{\n\t\t\t\t\tImage: unversioned.RepoTag{\n\t\t\t\t\t\tRepo: repo(\"collector\"),\n\t\t\t\t\t\tTag:  version.BuildVersion,\n\t\t\t\t\t},\n\t\t\t\t\tRequest: unversioned.ResourceRequirements{\n\t\t\t\t\t\tMem: resource.MustParse(\"25Mi\"),\n\t\t\t\t\t\tCPU: resource.MustParse(\"7m\"),\n\t\t\t\t\t},\n\t\t\t\t\tLimit: unversioned.ResourceRequirements{\n\t\t\t\t\t\tMem: resource.MustParse(\"500Mi\"),\n\t\t\t\t\t\tCPU: resource.Quantity{},\n\t\t\t\t\t},\n\t\t\t\t\tConfig: nil,\n\t\t\t\t},\n\t\t\t},\n\t\t\tScanner: unversioned.OptionalContainerConfig{\n\t\t\t\tEnabled: false,\n\t\t\t\tContainerConfig: unversioned.ContainerConfig{\n\t\t\t\t\tImage: unversioned.RepoTag{\n\t\t\t\t\t\tRepo: repo(\"eraser-trivy-scanner\"),\n\t\t\t\t\t\tTag:  
version.BuildVersion,\n\t\t\t\t\t},\n\t\t\t\t\tRequest: unversioned.ResourceRequirements{\n\t\t\t\t\t\tMem: resource.MustParse(\"500Mi\"),\n\t\t\t\t\t\tCPU: resource.MustParse(\"1000m\"),\n\t\t\t\t\t},\n\t\t\t\t\tLimit: unversioned.ResourceRequirements{\n\t\t\t\t\t\tMem: resource.MustParse(\"2Gi\"),\n\t\t\t\t\t\tCPU: resource.MustParse(\"1500m\"),\n\t\t\t\t\t},\n\t\t\t\t\tConfig:  &defaultScannerConfig,\n\t\t\t\t\tVolumes: []v1.Volume{},\n\t\t\t\t},\n\t\t\t},\n\t\t\tRemover: unversioned.ContainerConfig{\n\t\t\t\tImage: unversioned.RepoTag{\n\t\t\t\t\tRepo: repo(\"remover\"),\n\t\t\t\t\tTag:  version.BuildVersion,\n\t\t\t\t},\n\t\t\t\tRequest: unversioned.ResourceRequirements{\n\t\t\t\t\tMem: resource.MustParse(\"25Mi\"),\n\t\t\t\t\tCPU: resource.MustParse(\"7m\"),\n\t\t\t\t},\n\t\t\t\tLimit: unversioned.ResourceRequirements{\n\t\t\t\t\tMem: resource.MustParse(\"30Mi\"),\n\t\t\t\t\tCPU: resource.Quantity{},\n\t\t\t\t},\n\t\t\t\tConfig: nil,\n\t\t\t},\n\t\t},\n\t}\n}\n\nfunc repo(basename string) string {\n\tif version.DefaultRepo == \"\" {\n\t\treturn basename\n\t}\n\n\treturn fmt.Sprintf(\"%s/%s\", version.DefaultRepo, basename)\n}\n"
  },
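A quick illustration of the `Manager` wrapper defined in `config.go` above: `Default()` produces the baseline `EraserConfig`, `Read()` hands back a shallow copy under the mutex, and `Update()` swaps in a replacement. This usage sketch is repo-external, not part of the codebase:

```go
package main

import (
	"fmt"

	"github.com/eraser-dev/eraser/api/unversioned/config"
)

func main() {
	m := config.NewManager(config.Default())

	// Read returns a shallow copy, so plain-field edits here do not touch
	// the manager's state until Update is called.
	cfg, err := m.Read()
	if err != nil {
		panic(err)
	}
	fmt.Println(cfg.Manager.Runtime.Address) // unix:///run/containerd/containerd.sock

	cfg.Manager.LogLevel = "debug"
	if err := m.Update(&cfg); err != nil {
		panic(err)
	}
}
```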
  {
    "path": "api/unversioned/doc.go",
    "content": "package unversioned\n\n// +kubebuilder:object:generate=true\n"
  },
  {
    "path": "api/unversioned/eraserconfig_types.go",
    "content": "/*\nCopyright 2021.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\npackage unversioned\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"net/url\"\n\t\"time\"\n\n\tcorev1 \"k8s.io/api/core/v1\"\n\t\"k8s.io/apimachinery/pkg/api/resource\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n)\n\ntype (\n\tDuration time.Duration\n\tRuntime  string\n\n\tRuntimeSpec struct {\n\t\tName    Runtime `json:\"name\"`\n\t\tAddress string  `json:\"address\"`\n\t}\n)\n\nconst (\n\tRuntimeContainerd  Runtime = \"containerd\"\n\tRuntimeDockerShim  Runtime = \"dockershim\"\n\tRuntimeCrio        Runtime = \"crio\"\n\tRuntimeNotProvided Runtime = \"\"\n\n\tContainerdPath = \"/run/containerd/containerd.sock\"\n\tDockerPath     = \"/run/dockershim.sock\"\n\tCrioPath       = \"/run/crio/crio.sock\"\n)\n\nfunc ConvertRuntimeToRuntimeSpec(r Runtime) (RuntimeSpec, error) {\n\tvar rs RuntimeSpec\n\n\tswitch r {\n\tcase RuntimeContainerd:\n\t\trs = RuntimeSpec{Name: RuntimeContainerd, Address: fmt.Sprintf(\"unix://%s\", ContainerdPath)}\n\tcase RuntimeDockerShim:\n\t\trs = RuntimeSpec{Name: RuntimeDockerShim, Address: fmt.Sprintf(\"unix://%s\", DockerPath)}\n\tcase RuntimeCrio:\n\t\trs = RuntimeSpec{Name: RuntimeCrio, Address: fmt.Sprintf(\"unix://%s\", CrioPath)}\n\tdefault:\n\t\treturn rs, fmt.Errorf(\"invalid runtime: valid names are %s, %s, %s\", RuntimeContainerd, RuntimeDockerShim, RuntimeCrio)\n\t}\n\n\treturn rs, nil\n}\n\nfunc (td *Duration) UnmarshalJSON(b []byte) error {\n\tvar str string\n\terr := json.Unmarshal(b, &str)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tpd, err := time.ParseDuration(str)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t*td = Duration(pd)\n\treturn nil\n}\n\nfunc (td *Duration) MarshalJSON() ([]byte, error) {\n\treturn []byte(fmt.Sprintf(`\"%s\"`, time.Duration(*td).String())), nil\n}\n\nfunc (r *RuntimeSpec) UnmarshalJSON(b []byte) error {\n\t// create temp RuntimeSpec to prevent recursive error into this function when using unmarshall to check validity of provided RuntimeSpec\n\ttype TempRuntimeSpec struct {\n\t\tName    string `json:\"name\"`\n\t\tAddress string `json:\"address\"`\n\t}\n\tvar rs TempRuntimeSpec\n\terr := json.Unmarshal(b, &rs)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"error unmarshalling into TempRuntimeSpec %v %s\", err, string(b))\n\t}\n\n\tswitch rt := Runtime(rs.Name); rt {\n\t// make sure user provided Runtime is valid\n\tcase RuntimeContainerd, RuntimeDockerShim, RuntimeCrio:\n\t\tif rs.Address != \"\" {\n\t\t\t// check that provided RuntimeAddress is valid\n\t\t\tu, err := url.Parse(rs.Address)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\n\t\t\tswitch u.Scheme {\n\t\t\tcase \"tcp\", \"unix\":\n\t\t\tdefault:\n\t\t\t\treturn fmt.Errorf(\"invalid RuntimeAddress scheme: valid schemes for runtime socket address are `tcp` and `unix`\")\n\t\t\t}\n\n\t\t\tr.Name = Runtime(rs.Name)\n\t\t\tr.Address = rs.Address\n\n\t\t\treturn nil\n\t\t}\n\n\t\t// if RuntimeAddress is not provided, get defaults\n\t\tconverted, err := 
ConvertRuntimeToRuntimeSpec(rt)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\t*r = converted\n\tcase RuntimeNotProvided:\n\t\tif rs.Address != \"\" {\n\t\t\treturn fmt.Errorf(\"runtime name must be provided with address\")\n\t\t}\n\n\t\t// if empty name and address, use containerd as default\n\t\tr.Name = RuntimeContainerd\n\t\tr.Address = fmt.Sprintf(\"unix://%s\", ContainerdPath)\n\tdefault:\n\t\treturn fmt.Errorf(\"invalid runtime: valid names are %s, %s, %s\", RuntimeContainerd, RuntimeDockerShim, RuntimeCrio)\n\t}\n\n\treturn nil\n}\n\n// EDIT THIS FILE!  THIS IS SCAFFOLDING FOR YOU TO OWN!\n// NOTE: json tags are required.  Any new fields you add must have json tags for the fields to be serialized.\n\ntype OptionalContainerConfig struct {\n\tEnabled         bool `json:\"enabled,omitempty\"`\n\tContainerConfig `json:\",inline\"`\n}\n\ntype ContainerConfig struct {\n\tImage   RepoTag              `json:\"image,omitempty\"`\n\tRequest ResourceRequirements `json:\"request,omitempty\"`\n\tLimit   ResourceRequirements `json:\"limit,omitempty\"`\n\tConfig  *string              `json:\"config,omitempty\"`\n\tVolumes []corev1.Volume      `json:\"volumes,omitempty\"`\n}\n\ntype ManagerConfig struct {\n\tRuntime             RuntimeSpec       `json:\"runtime,omitempty\"`\n\tOTLPEndpoint        string            `json:\"otlpEndpoint,omitempty\"`\n\tLogLevel            string            `json:\"logLevel,omitempty\"`\n\tScheduling          ScheduleConfig    `json:\"scheduling,omitempty\"`\n\tProfile             ProfileConfig     `json:\"profile,omitempty\"`\n\tImageJob            ImageJobConfig    `json:\"imageJob,omitempty\"`\n\tPullSecrets         []string          `json:\"pullSecrets,omitempty\"`\n\tNodeFilter          NodeFilterConfig  `json:\"nodeFilter,omitempty\"`\n\tPriorityClassName   string            `json:\"priorityClassName,omitempty\"`\n\tAdditionalPodLabels map[string]string `json:\"additionalPodLabels,omitempty\"`\n}\n\ntype ScheduleConfig struct {\n\tRepeatInterval   Duration `json:\"repeatInterval,omitempty\"`\n\tBeginImmediately bool     `json:\"beginImmediately,omitempty\"`\n}\n\ntype ProfileConfig struct {\n\tEnabled bool `json:\"enabled,omitempty\"`\n\tPort    int  `json:\"port,omitempty\"`\n}\n\ntype ImageJobConfig struct {\n\tSuccessRatio float64               `json:\"successRatio,omitempty\"`\n\tCleanup      ImageJobCleanupConfig `json:\"cleanup,omitempty\"`\n}\n\ntype ImageJobCleanupConfig struct {\n\tDelayOnSuccess Duration `json:\"delayOnSuccess,omitempty\"`\n\tDelayOnFailure Duration `json:\"delayOnFailure,omitempty\"`\n}\n\ntype NodeFilterConfig struct {\n\tType      string   `json:\"type,omitempty\"`\n\tSelectors []string `json:\"selectors,omitempty\"`\n}\n\ntype ResourceRequirements struct {\n\tMem resource.Quantity `json:\"mem,omitempty\"`\n\tCPU resource.Quantity `json:\"cpu,omitempty\"`\n}\n\ntype RepoTag struct {\n\tRepo string `json:\"repo,omitempty\"`\n\tTag  string `json:\"tag,omitempty\"`\n}\n\ntype Components struct {\n\tCollector OptionalContainerConfig `json:\"collector,omitempty\"`\n\tScanner   OptionalContainerConfig `json:\"scanner,omitempty\"`\n\tRemover   ContainerConfig         `json:\"remover,omitempty\"`\n}\n\n//+kubebuilder:object:root=true\n\n// EraserConfig is the Schema for the eraserconfigs API.\ntype EraserConfig struct {\n\tmetav1.TypeMeta `json:\",inline\"`\n\tManager         ManagerConfig `json:\"manager\"`\n\tComponents      Components    `json:\"components\"`\n}\n\nfunc init() {\n\tSchemeBuilder.Register(&EraserConfig{})\n}\n"
  },
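The custom UnmarshalJSON methods above give EraserConfig forgiving parsing: an empty runtime block defaults to containerd, a bare runtime name picks up its well-known socket address, and durations accept any `time.ParseDuration` string. A standalone sketch (not repo code) exercising that behavior:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"

	"github.com/eraser-dev/eraser/api/unversioned"
)

func main() {
	var rs unversioned.RuntimeSpec

	// Empty spec: defaults to containerd and its socket.
	_ = json.Unmarshal([]byte(`{}`), &rs)
	fmt.Println(rs.Name, rs.Address) // containerd unix:///run/containerd/containerd.sock

	// Name only: the well-known address is filled in.
	_ = json.Unmarshal([]byte(`{"name":"crio"}`), &rs)
	fmt.Println(rs.Name, rs.Address) // crio unix:///run/crio/crio.sock

	// Durations parse via time.ParseDuration.
	var d unversioned.Duration
	_ = json.Unmarshal([]byte(`"24h"`), &d)
	fmt.Println(d == unversioned.Duration(24*time.Hour)) // true
}
```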
  {
    "path": "api/unversioned/groupversion_info.go",
    "content": "/*\nCopyright 2021.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\n// Package v1 contains API Schema definitions for the eraser.sh v1 API group\n// +kubebuilder:object:generate=true\n// +groupName=eraser.sh\npackage unversioned\n\nimport (\n\t\"k8s.io/apimachinery/pkg/runtime/schema\"\n\t\"sigs.k8s.io/controller-runtime/pkg/scheme\"\n)\n\nvar (\n\t// GroupVersion is group version used to register these objects.\n\tGroupVersion = schema.GroupVersion{Group: \"eraser.sh\", Version: \"unversioned\"}\n\n\t// SchemeBuilder is used to add go types to the GroupVersionKind scheme.\n\tSchemeBuilder = &scheme.Builder{GroupVersion: GroupVersion}\n\n\t// AddToScheme adds the types in this group-version to the given scheme.\n\tAddToScheme = SchemeBuilder.AddToScheme\n)\n"
  },
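These group-version variables follow the standard controller-runtime scheme-builder pattern; calling `AddToScheme` on a `runtime.Scheme` is all a client or manager needs to work with the eraser.sh types. A hedged usage sketch, external to the repo:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime"

	"github.com/eraser-dev/eraser/api/unversioned"
)

func main() {
	scheme := runtime.NewScheme()
	if err := unversioned.AddToScheme(scheme); err != nil {
		panic(err)
	}
	// EraserConfig registered itself with SchemeBuilder in an init func,
	// so the scheme now recognizes that kind.
	fmt.Println(scheme.Recognizes(unversioned.GroupVersion.WithKind("EraserConfig"))) // true
}
```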
  {
    "path": "api/unversioned/imagejob_types.go",
    "content": "/*\nCopyright 2021.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\n// +kubebuilder:skip\npackage unversioned\n\nimport (\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n)\n\ntype Image struct {\n\tImageID string   `json:\"image_id\"`\n\tNames   []string `json:\"names,omitempty\"`\n\tDigests []string `json:\"digests,omitempty\"`\n}\n\n// EDIT THIS FILE!  THIS IS SCAFFOLDING FOR YOU TO OWN!\n// NOTE: json tags are required.  Any new fields you add must have json tags for the fields to be serialized.\n\n// JobPhase defines the phase of an ImageJob status.\ntype JobPhase string\n\nconst (\n\tPhaseRunning   JobPhase = \"Running\"\n\tPhaseCompleted JobPhase = \"Completed\"\n\tPhaseFailed    JobPhase = \"Failed\"\n)\n\n// ImageJobStatus defines the observed state of ImageJob.\ntype ImageJobStatus struct {\n\t// number of pods that failed\n\tFailed int `json:\"failed\"`\n\n\t// number of pods that completed successfully\n\tSucceeded int `json:\"succeeded\"`\n\n\t// desired number of pods\n\tDesired int `json:\"desired\"`\n\n\t// number of nodes that were skipped e.g. because they are not a linux node\n\tSkipped int `json:\"skipped\"`\n\n\t// job running, successfully completed, or failed\n\tPhase JobPhase `json:\"phase\"`\n\n\t// Time to delay deletion until\n\tDeleteAfter *metav1.Time `json:\"deleteAfter,omitempty\"`\n}\n\n// ImageJob is the Schema for the imagejobs API.\ntype ImageJob struct {\n\tmetav1.TypeMeta   `json:\",inline\"`\n\tmetav1.ObjectMeta `json:\"metadata,omitempty\"`\n\n\tStatus ImageJobStatus `json:\"status,omitempty\"`\n}\n\n// ImageJobList contains a list of ImageJob.\ntype ImageJobList struct {\n\tmetav1.TypeMeta `json:\",inline\"`\n\tmetav1.ListMeta `json:\"metadata,omitempty\"`\n\tItems           []ImageJob `json:\"items\"`\n}\n"
  },
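The Failed/Succeeded/Desired/Skipped counters above are what gets weighed against `ImageJobConfig.SuccessRatio` (which defaults to 1.0 in `config.Default()`). A hypothetical sketch of that arithmetic — illustrative bookkeeping only, not the controller's actual implementation:

```go
package main

import (
	"fmt"

	"github.com/eraser-dev/eraser/api/unversioned"
)

// successRatio excludes skipped nodes (e.g. non-Linux nodes) from the
// denominator; this exact formula is an assumption for illustration.
func successRatio(s unversioned.ImageJobStatus) float64 {
	ran := s.Desired - s.Skipped
	if ran <= 0 {
		return 1.0
	}
	return float64(s.Succeeded) / float64(ran)
}

func main() {
	status := unversioned.ImageJobStatus{Desired: 5, Succeeded: 4, Failed: 0, Skipped: 1}
	fmt.Println(successRatio(status) >= 1.0) // true: meets the default SuccessRatio of 1.0
}
```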
  {
    "path": "api/unversioned/imagelist_types.go",
    "content": "/*\nCopyright 2021.\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n    http://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\n// +kubebuilder:skip\npackage unversioned\n\nimport (\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n)\n\n// ImageListSpec defines the desired state of ImageList.\ntype ImageListSpec struct {\n\t// The list of non-compliant images to delete if non-running.\n\tImages []string `json:\"images\"`\n}\n\n// ImageListStatus defines the observed state of ImageList.\ntype ImageListStatus struct {\n\t// Information when the job was completed.\n\tTimestamp *metav1.Time `json:\"timestamp\"`\n\t// Number of nodes that successfully ran the job\n\tSuccess int64 `json:\"success\"`\n\t// Number of nodes that failed to run the job\n\tFailed int64 `json:\"failed\"`\n\t// Number of nodes that were skipped due to a skip selector\n\tSkipped int64 `json:\"skipped\"`\n}\n\n// ImageList is the Schema for the imagelists API.\ntype ImageList struct {\n\tmetav1.TypeMeta   `json:\",inline\"`\n\tmetav1.ObjectMeta `json:\"metadata,omitempty\"`\n\n\tSpec   ImageListSpec   `json:\"spec,omitempty\"`\n\tStatus ImageListStatus `json:\"status,omitempty\"`\n}\n\n// ImageListList contains a list of ImageList.\ntype ImageListList struct {\n\tmetav1.TypeMeta `json:\",inline\"`\n\tmetav1.ListMeta `json:\"metadata,omitempty\"`\n\tItems           []ImageList `json:\"items\"`\n}\n"
  },
  {
    "path": "api/unversioned/zz_generated.deepcopy.go",
    "content": "//go:build !ignore_autogenerated\n\n/*\nCopyright 2021.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\n// Code generated by controller-gen. DO NOT EDIT.\n\npackage unversioned\n\nimport (\n\t\"k8s.io/api/core/v1\"\n\truntime \"k8s.io/apimachinery/pkg/runtime\"\n)\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *Components) DeepCopyInto(out *Components) {\n\t*out = *in\n\tin.Collector.DeepCopyInto(&out.Collector)\n\tin.Scanner.DeepCopyInto(&out.Scanner)\n\tin.Remover.DeepCopyInto(&out.Remover)\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Components.\nfunc (in *Components) DeepCopy() *Components {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(Components)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ContainerConfig) DeepCopyInto(out *ContainerConfig) {\n\t*out = *in\n\tout.Image = in.Image\n\tin.Request.DeepCopyInto(&out.Request)\n\tin.Limit.DeepCopyInto(&out.Limit)\n\tif in.Config != nil {\n\t\tin, out := &in.Config, &out.Config\n\t\t*out = new(string)\n\t\t**out = **in\n\t}\n\tif in.Volumes != nil {\n\t\tin, out := &in.Volumes, &out.Volumes\n\t\t*out = make([]v1.Volume, len(*in))\n\t\tfor i := range *in {\n\t\t\t(*in)[i].DeepCopyInto(&(*out)[i])\n\t\t}\n\t}\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ContainerConfig.\nfunc (in *ContainerConfig) DeepCopy() *ContainerConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ContainerConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *EraserConfig) DeepCopyInto(out *EraserConfig) {\n\t*out = *in\n\tout.TypeMeta = in.TypeMeta\n\tin.Manager.DeepCopyInto(&out.Manager)\n\tin.Components.DeepCopyInto(&out.Components)\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new EraserConfig.\nfunc (in *EraserConfig) DeepCopy() *EraserConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(EraserConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.\nfunc (in *EraserConfig) DeepCopyObject() runtime.Object {\n\tif c := in.DeepCopy(); c != nil {\n\t\treturn c\n\t}\n\treturn nil\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. 
in must be non-nil.\nfunc (in *Image) DeepCopyInto(out *Image) {\n\t*out = *in\n\tif in.Names != nil {\n\t\tin, out := &in.Names, &out.Names\n\t\t*out = make([]string, len(*in))\n\t\tcopy(*out, *in)\n\t}\n\tif in.Digests != nil {\n\t\tin, out := &in.Digests, &out.Digests\n\t\t*out = make([]string, len(*in))\n\t\tcopy(*out, *in)\n\t}\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Image.\nfunc (in *Image) DeepCopy() *Image {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(Image)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ImageJob) DeepCopyInto(out *ImageJob) {\n\t*out = *in\n\tout.TypeMeta = in.TypeMeta\n\tin.ObjectMeta.DeepCopyInto(&out.ObjectMeta)\n\tin.Status.DeepCopyInto(&out.Status)\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ImageJob.\nfunc (in *ImageJob) DeepCopy() *ImageJob {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ImageJob)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ImageJobCleanupConfig) DeepCopyInto(out *ImageJobCleanupConfig) {\n\t*out = *in\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ImageJobCleanupConfig.\nfunc (in *ImageJobCleanupConfig) DeepCopy() *ImageJobCleanupConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ImageJobCleanupConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ImageJobConfig) DeepCopyInto(out *ImageJobConfig) {\n\t*out = *in\n\tout.Cleanup = in.Cleanup\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ImageJobConfig.\nfunc (in *ImageJobConfig) DeepCopy() *ImageJobConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ImageJobConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ImageJobList) DeepCopyInto(out *ImageJobList) {\n\t*out = *in\n\tout.TypeMeta = in.TypeMeta\n\tin.ListMeta.DeepCopyInto(&out.ListMeta)\n\tif in.Items != nil {\n\t\tin, out := &in.Items, &out.Items\n\t\t*out = make([]ImageJob, len(*in))\n\t\tfor i := range *in {\n\t\t\t(*in)[i].DeepCopyInto(&(*out)[i])\n\t\t}\n\t}\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ImageJobList.\nfunc (in *ImageJobList) DeepCopy() *ImageJobList {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ImageJobList)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ImageJobStatus) DeepCopyInto(out *ImageJobStatus) {\n\t*out = *in\n\tif in.DeleteAfter != nil {\n\t\tin, out := &in.DeleteAfter, &out.DeleteAfter\n\t\t*out = (*in).DeepCopy()\n\t}\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ImageJobStatus.\nfunc (in *ImageJobStatus) DeepCopy() *ImageJobStatus {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ImageJobStatus)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. 
in must be non-nil.\nfunc (in *ImageList) DeepCopyInto(out *ImageList) {\n\t*out = *in\n\tout.TypeMeta = in.TypeMeta\n\tin.ObjectMeta.DeepCopyInto(&out.ObjectMeta)\n\tin.Spec.DeepCopyInto(&out.Spec)\n\tin.Status.DeepCopyInto(&out.Status)\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ImageList.\nfunc (in *ImageList) DeepCopy() *ImageList {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ImageList)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ImageListList) DeepCopyInto(out *ImageListList) {\n\t*out = *in\n\tout.TypeMeta = in.TypeMeta\n\tin.ListMeta.DeepCopyInto(&out.ListMeta)\n\tif in.Items != nil {\n\t\tin, out := &in.Items, &out.Items\n\t\t*out = make([]ImageList, len(*in))\n\t\tfor i := range *in {\n\t\t\t(*in)[i].DeepCopyInto(&(*out)[i])\n\t\t}\n\t}\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ImageListList.\nfunc (in *ImageListList) DeepCopy() *ImageListList {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ImageListList)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ImageListSpec) DeepCopyInto(out *ImageListSpec) {\n\t*out = *in\n\tif in.Images != nil {\n\t\tin, out := &in.Images, &out.Images\n\t\t*out = make([]string, len(*in))\n\t\tcopy(*out, *in)\n\t}\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ImageListSpec.\nfunc (in *ImageListSpec) DeepCopy() *ImageListSpec {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ImageListSpec)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ImageListStatus) DeepCopyInto(out *ImageListStatus) {\n\t*out = *in\n\tif in.Timestamp != nil {\n\t\tin, out := &in.Timestamp, &out.Timestamp\n\t\t*out = (*in).DeepCopy()\n\t}\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ImageListStatus.\nfunc (in *ImageListStatus) DeepCopy() *ImageListStatus {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ImageListStatus)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ManagerConfig) DeepCopyInto(out *ManagerConfig) {\n\t*out = *in\n\tout.Runtime = in.Runtime\n\tout.Scheduling = in.Scheduling\n\tout.Profile = in.Profile\n\tout.ImageJob = in.ImageJob\n\tif in.PullSecrets != nil {\n\t\tin, out := &in.PullSecrets, &out.PullSecrets\n\t\t*out = make([]string, len(*in))\n\t\tcopy(*out, *in)\n\t}\n\tin.NodeFilter.DeepCopyInto(&out.NodeFilter)\n\tif in.AdditionalPodLabels != nil {\n\t\tin, out := &in.AdditionalPodLabels, &out.AdditionalPodLabels\n\t\t*out = make(map[string]string, len(*in))\n\t\tfor key, val := range *in {\n\t\t\t(*out)[key] = val\n\t\t}\n\t}\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ManagerConfig.\nfunc (in *ManagerConfig) DeepCopy() *ManagerConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ManagerConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. 
in must be non-nil.\nfunc (in *NodeFilterConfig) DeepCopyInto(out *NodeFilterConfig) {\n\t*out = *in\n\tif in.Selectors != nil {\n\t\tin, out := &in.Selectors, &out.Selectors\n\t\t*out = make([]string, len(*in))\n\t\tcopy(*out, *in)\n\t}\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NodeFilterConfig.\nfunc (in *NodeFilterConfig) DeepCopy() *NodeFilterConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(NodeFilterConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *OptionalContainerConfig) DeepCopyInto(out *OptionalContainerConfig) {\n\t*out = *in\n\tin.ContainerConfig.DeepCopyInto(&out.ContainerConfig)\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new OptionalContainerConfig.\nfunc (in *OptionalContainerConfig) DeepCopy() *OptionalContainerConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(OptionalContainerConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ProfileConfig) DeepCopyInto(out *ProfileConfig) {\n\t*out = *in\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ProfileConfig.\nfunc (in *ProfileConfig) DeepCopy() *ProfileConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ProfileConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *RepoTag) DeepCopyInto(out *RepoTag) {\n\t*out = *in\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RepoTag.\nfunc (in *RepoTag) DeepCopy() *RepoTag {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(RepoTag)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ResourceRequirements) DeepCopyInto(out *ResourceRequirements) {\n\t*out = *in\n\tout.Mem = in.Mem.DeepCopy()\n\tout.CPU = in.CPU.DeepCopy()\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ResourceRequirements.\nfunc (in *ResourceRequirements) DeepCopy() *ResourceRequirements {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ResourceRequirements)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *RuntimeSpec) DeepCopyInto(out *RuntimeSpec) {\n\t*out = *in\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RuntimeSpec.\nfunc (in *RuntimeSpec) DeepCopy() *RuntimeSpec {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(RuntimeSpec)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ScheduleConfig) DeepCopyInto(out *ScheduleConfig) {\n\t*out = *in\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ScheduleConfig.\nfunc (in *ScheduleConfig) DeepCopy() *ScheduleConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ScheduleConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n"
  },
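The generated deepcopy helpers above exist so objects can be handed out without aliasing: slices, maps, and pointers are cloned rather than copied by reference. A tiny repo-external check of that property:

```go
package main

import (
	"fmt"

	"github.com/eraser-dev/eraser/api/unversioned"
)

func main() {
	orig := &unversioned.ImageListSpec{Images: []string{"alpine:3.7.3"}}
	dup := orig.DeepCopy()

	// Mutating the copy's backing slice does not leak into the original.
	dup.Images[0] = "changed"
	fmt.Println(orig.Images[0]) // still "alpine:3.7.3"
}
```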
  {
    "path": "api/v1/doc.go",
    "content": "// +k8s:conversion-gen=github.com/eraser-dev/eraser/api/unversioned\npackage v1\n"
  },
  {
    "path": "api/v1/groupversion_info.go",
    "content": "/*\nCopyright 2021.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\n// Package v1 contains API Schema definitions for the eraser.sh v1 API group\n// +kubebuilder:object:generate=true\n// +groupName=eraser.sh\npackage v1\n\nimport (\n\truntime \"k8s.io/apimachinery/pkg/runtime\"\n\t\"k8s.io/apimachinery/pkg/runtime/schema\"\n\t\"sigs.k8s.io/controller-runtime/pkg/scheme\"\n)\n\nvar (\n\t// GroupVersion is group version used to register these objects.\n\tGroupVersion = schema.GroupVersion{Group: \"eraser.sh\", Version: \"v1\"}\n\n\t// SchemeBuilder is used to add go types to the GroupVersionKind scheme.\n\tSchemeBuilder = &scheme.Builder{GroupVersion: GroupVersion}\n\n\tlocalSchemeBuilder = runtime.NewSchemeBuilder(SchemeBuilder.AddToScheme)\n\n\t// AddToScheme adds the types in this group-version to the given scheme.\n\tAddToScheme = SchemeBuilder.AddToScheme\n)\n"
  },
  {
    "path": "api/v1/imagejob_types.go",
    "content": "/*\nCopyright 2021.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\npackage v1\n\nimport (\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n)\n\ntype Image struct {\n\tImageID string   `json:\"image_id\"`\n\tNames   []string `json:\"names,omitempty\"`\n\tDigests []string `json:\"digests,omitempty\"`\n}\n\n// EDIT THIS FILE!  THIS IS SCAFFOLDING FOR YOU TO OWN!\n// NOTE: json tags are required.  Any new fields you add must have json tags for the fields to be serialized.\n\n// JobPhase defines the phase of an ImageJob status.\ntype JobPhase string\n\nconst (\n\tPhaseRunning   JobPhase = \"Running\"\n\tPhaseCompleted JobPhase = \"Completed\"\n\tPhaseFailed    JobPhase = \"Failed\"\n)\n\n// ImageJobStatus defines the observed state of ImageJob.\ntype ImageJobStatus struct {\n\t// number of pods that failed\n\tFailed int `json:\"failed\"`\n\n\t// number of pods that completed successfully\n\tSucceeded int `json:\"succeeded\"`\n\n\t// desired number of pods\n\tDesired int `json:\"desired\"`\n\n\t// number of nodes that were skipped e.g. because they are not a linux node\n\tSkipped int `json:\"skipped\"`\n\n\t// job running, successfully completed, or failed\n\tPhase JobPhase `json:\"phase\"`\n\n\t// Time to delay deletion until\n\tDeleteAfter *metav1.Time `json:\"deleteAfter,omitempty\"`\n}\n\n// +kubebuilder:object:root=true\n// +kubebuilder:subresource:status\n// +kubebuilder:resource:scope=\"Cluster\"\n// +kubebuilder:storageversion\n// ImageJob is the Schema for the imagejobs API.\ntype ImageJob struct {\n\tmetav1.TypeMeta   `json:\",inline\"`\n\tmetav1.ObjectMeta `json:\"metadata,omitempty\"`\n\n\tStatus ImageJobStatus `json:\"status,omitempty\"`\n}\n\n// +kubebuilder:object:root=true\n// +kubebuilder:storageversion\n// ImageJobList contains a list of ImageJob.\ntype ImageJobList struct {\n\tmetav1.TypeMeta `json:\",inline\"`\n\tmetav1.ListMeta `json:\"metadata,omitempty\"`\n\tItems           []ImageJob `json:\"items\"`\n}\n\nfunc init() {\n\tSchemeBuilder.Register(&ImageJob{}, &ImageJobList{})\n}\n"
  },
  {
    "path": "api/v1/imagelist_types.go",
    "content": "/*\nCopyright 2021.\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n    http://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\npackage v1\n\nimport (\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n)\n\n// ImageListSpec defines the desired state of ImageList.\ntype ImageListSpec struct {\n\t// The list of non-compliant images to delete if non-running.\n\tImages []string `json:\"images\"`\n}\n\n// ImageListStatus defines the observed state of ImageList.\ntype ImageListStatus struct {\n\t// Information when the job was completed.\n\tTimestamp *metav1.Time `json:\"timestamp\"`\n\t// Number of nodes that successfully ran the job\n\tSuccess int64 `json:\"success\"`\n\t// Number of nodes that failed to run the job\n\tFailed int64 `json:\"failed\"`\n\t// Number of nodes that were skipped due to a skip selector\n\tSkipped int64 `json:\"skipped\"`\n}\n\n// +kubebuilder:object:root=true\n// +kubebuilder:subresource:status\n// +kubebuilder:resource:scope=\"Cluster\"\n// +kubebuilder:storageversion\n// ImageList is the Schema for the imagelists API.\ntype ImageList struct {\n\tmetav1.TypeMeta   `json:\",inline\"`\n\tmetav1.ObjectMeta `json:\"metadata,omitempty\"`\n\n\tSpec   ImageListSpec   `json:\"spec,omitempty\"`\n\tStatus ImageListStatus `json:\"status,omitempty\"`\n}\n\n// +kubebuilder:object:root=true\n// +kubebuilder:storageversion\n// ImageListList contains a list of ImageList.\ntype ImageListList struct {\n\tmetav1.TypeMeta `json:\",inline\"`\n\tmetav1.ListMeta `json:\"metadata,omitempty\"`\n\tItems           []ImageList `json:\"items\"`\n}\n\nfunc init() {\n\tSchemeBuilder.Register(&ImageList{}, &ImageListList{})\n}\n"
  },
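  {
    "path": "docs/examples/imagelist_construction.go",
    "content": "// Hypothetical example added for illustration only; this file is not part of\n// the Eraser source tree. It shows how a cluster-scoped eraser.sh/v1\n// ImageList object is constructed in Go from the types defined above.\npackage main\n\nimport (\n\t\"fmt\"\n\n\teraserv1 \"github.com/eraser-dev/eraser/api/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n)\n\nfunc main() {\n\til := &eraserv1.ImageList{\n\t\tTypeMeta: metav1.TypeMeta{\n\t\t\tAPIVersion: eraserv1.GroupVersion.String(), // eraser.sh/v1\n\t\t\tKind:       \"ImageList\",\n\t\t},\n\t\tObjectMeta: metav1.ObjectMeta{Name: \"imagelist\"},\n\t\tSpec: eraserv1.ImageListSpec{\n\t\t\t// Non-compliant images to delete from nodes when not running.\n\t\t\tImages: []string{\"docker.io/library/alpine:3.7\"},\n\t\t},\n\t}\n\n\tfmt.Println(il.Name, il.Spec.Images)\n}\n"
  },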
  {
    "path": "api/v1/zz_generated.conversion.go",
    "content": "//go:build !ignore_autogenerated\n// +build !ignore_autogenerated\n\n/*\nCopyright 2021.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n// Code generated by conversion-gen. DO NOT EDIT.\n\npackage v1\n\nimport (\n\tunsafe \"unsafe\"\n\n\tunversioned \"github.com/eraser-dev/eraser/api/unversioned\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\tconversion \"k8s.io/apimachinery/pkg/conversion\"\n\truntime \"k8s.io/apimachinery/pkg/runtime\"\n)\n\nfunc init() {\n\tlocalSchemeBuilder.Register(RegisterConversions)\n}\n\n// RegisterConversions adds conversion functions to the given scheme.\n// Public to allow building arbitrary schemes.\nfunc RegisterConversions(s *runtime.Scheme) error {\n\tif err := s.AddGeneratedConversionFunc((*Image)(nil), (*unversioned.Image)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1_Image_To_unversioned_Image(a.(*Image), b.(*unversioned.Image), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.Image)(nil), (*Image)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_Image_To_v1_Image(a.(*unversioned.Image), b.(*Image), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*ImageJob)(nil), (*unversioned.ImageJob)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1_ImageJob_To_unversioned_ImageJob(a.(*ImageJob), b.(*unversioned.ImageJob), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.ImageJob)(nil), (*ImageJob)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_ImageJob_To_v1_ImageJob(a.(*unversioned.ImageJob), b.(*ImageJob), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*ImageJobList)(nil), (*unversioned.ImageJobList)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1_ImageJobList_To_unversioned_ImageJobList(a.(*ImageJobList), b.(*unversioned.ImageJobList), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.ImageJobList)(nil), (*ImageJobList)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_ImageJobList_To_v1_ImageJobList(a.(*unversioned.ImageJobList), b.(*ImageJobList), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*ImageJobStatus)(nil), (*unversioned.ImageJobStatus)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1_ImageJobStatus_To_unversioned_ImageJobStatus(a.(*ImageJobStatus), b.(*unversioned.ImageJobStatus), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.ImageJobStatus)(nil), (*ImageJobStatus)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn 
Convert_unversioned_ImageJobStatus_To_v1_ImageJobStatus(a.(*unversioned.ImageJobStatus), b.(*ImageJobStatus), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*ImageList)(nil), (*unversioned.ImageList)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1_ImageList_To_unversioned_ImageList(a.(*ImageList), b.(*unversioned.ImageList), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.ImageList)(nil), (*ImageList)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_ImageList_To_v1_ImageList(a.(*unversioned.ImageList), b.(*ImageList), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*ImageListList)(nil), (*unversioned.ImageListList)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1_ImageListList_To_unversioned_ImageListList(a.(*ImageListList), b.(*unversioned.ImageListList), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.ImageListList)(nil), (*ImageListList)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_ImageListList_To_v1_ImageListList(a.(*unversioned.ImageListList), b.(*ImageListList), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*ImageListSpec)(nil), (*unversioned.ImageListSpec)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1_ImageListSpec_To_unversioned_ImageListSpec(a.(*ImageListSpec), b.(*unversioned.ImageListSpec), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.ImageListSpec)(nil), (*ImageListSpec)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_ImageListSpec_To_v1_ImageListSpec(a.(*unversioned.ImageListSpec), b.(*ImageListSpec), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*ImageListStatus)(nil), (*unversioned.ImageListStatus)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1_ImageListStatus_To_unversioned_ImageListStatus(a.(*ImageListStatus), b.(*unversioned.ImageListStatus), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.ImageListStatus)(nil), (*ImageListStatus)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_ImageListStatus_To_v1_ImageListStatus(a.(*unversioned.ImageListStatus), b.(*ImageListStatus), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\nfunc autoConvert_v1_Image_To_unversioned_Image(in *Image, out *unversioned.Image, s conversion.Scope) error {\n\tout.ImageID = in.ImageID\n\tout.Names = *(*[]string)(unsafe.Pointer(&in.Names))\n\tout.Digests = *(*[]string)(unsafe.Pointer(&in.Digests))\n\treturn nil\n}\n\n// Convert_v1_Image_To_unversioned_Image is an autogenerated conversion function.\nfunc Convert_v1_Image_To_unversioned_Image(in *Image, out *unversioned.Image, s conversion.Scope) error {\n\treturn autoConvert_v1_Image_To_unversioned_Image(in, out, s)\n}\n\nfunc autoConvert_unversioned_Image_To_v1_Image(in *unversioned.Image, out *Image, s conversion.Scope) error {\n\tout.ImageID = in.ImageID\n\tout.Names = *(*[]string)(unsafe.Pointer(&in.Names))\n\tout.Digests = *(*[]string)(unsafe.Pointer(&in.Digests))\n\treturn nil\n}\n\n// 
Convert_unversioned_Image_To_v1_Image is an autogenerated conversion function.\nfunc Convert_unversioned_Image_To_v1_Image(in *unversioned.Image, out *Image, s conversion.Scope) error {\n\treturn autoConvert_unversioned_Image_To_v1_Image(in, out, s)\n}\n\nfunc autoConvert_v1_ImageJob_To_unversioned_ImageJob(in *ImageJob, out *unversioned.ImageJob, s conversion.Scope) error {\n\tout.ObjectMeta = in.ObjectMeta\n\tif err := Convert_v1_ImageJobStatus_To_unversioned_ImageJobStatus(&in.Status, &out.Status, s); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// Convert_v1_ImageJob_To_unversioned_ImageJob is an autogenerated conversion function.\nfunc Convert_v1_ImageJob_To_unversioned_ImageJob(in *ImageJob, out *unversioned.ImageJob, s conversion.Scope) error {\n\treturn autoConvert_v1_ImageJob_To_unversioned_ImageJob(in, out, s)\n}\n\nfunc autoConvert_unversioned_ImageJob_To_v1_ImageJob(in *unversioned.ImageJob, out *ImageJob, s conversion.Scope) error {\n\tout.ObjectMeta = in.ObjectMeta\n\tif err := Convert_unversioned_ImageJobStatus_To_v1_ImageJobStatus(&in.Status, &out.Status, s); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// Convert_unversioned_ImageJob_To_v1_ImageJob is an autogenerated conversion function.\nfunc Convert_unversioned_ImageJob_To_v1_ImageJob(in *unversioned.ImageJob, out *ImageJob, s conversion.Scope) error {\n\treturn autoConvert_unversioned_ImageJob_To_v1_ImageJob(in, out, s)\n}\n\nfunc autoConvert_v1_ImageJobList_To_unversioned_ImageJobList(in *ImageJobList, out *unversioned.ImageJobList, s conversion.Scope) error {\n\tout.ListMeta = in.ListMeta\n\tout.Items = *(*[]unversioned.ImageJob)(unsafe.Pointer(&in.Items))\n\treturn nil\n}\n\n// Convert_v1_ImageJobList_To_unversioned_ImageJobList is an autogenerated conversion function.\nfunc Convert_v1_ImageJobList_To_unversioned_ImageJobList(in *ImageJobList, out *unversioned.ImageJobList, s conversion.Scope) error {\n\treturn autoConvert_v1_ImageJobList_To_unversioned_ImageJobList(in, out, s)\n}\n\nfunc autoConvert_unversioned_ImageJobList_To_v1_ImageJobList(in *unversioned.ImageJobList, out *ImageJobList, s conversion.Scope) error {\n\tout.ListMeta = in.ListMeta\n\tout.Items = *(*[]ImageJob)(unsafe.Pointer(&in.Items))\n\treturn nil\n}\n\n// Convert_unversioned_ImageJobList_To_v1_ImageJobList is an autogenerated conversion function.\nfunc Convert_unversioned_ImageJobList_To_v1_ImageJobList(in *unversioned.ImageJobList, out *ImageJobList, s conversion.Scope) error {\n\treturn autoConvert_unversioned_ImageJobList_To_v1_ImageJobList(in, out, s)\n}\n\nfunc autoConvert_v1_ImageJobStatus_To_unversioned_ImageJobStatus(in *ImageJobStatus, out *unversioned.ImageJobStatus, s conversion.Scope) error {\n\tout.Failed = in.Failed\n\tout.Succeeded = in.Succeeded\n\tout.Desired = in.Desired\n\tout.Skipped = in.Skipped\n\tout.Phase = unversioned.JobPhase(in.Phase)\n\tout.DeleteAfter = (*metav1.Time)(unsafe.Pointer(in.DeleteAfter))\n\treturn nil\n}\n\n// Convert_v1_ImageJobStatus_To_unversioned_ImageJobStatus is an autogenerated conversion function.\nfunc Convert_v1_ImageJobStatus_To_unversioned_ImageJobStatus(in *ImageJobStatus, out *unversioned.ImageJobStatus, s conversion.Scope) error {\n\treturn autoConvert_v1_ImageJobStatus_To_unversioned_ImageJobStatus(in, out, s)\n}\n\nfunc autoConvert_unversioned_ImageJobStatus_To_v1_ImageJobStatus(in *unversioned.ImageJobStatus, out *ImageJobStatus, s conversion.Scope) error {\n\tout.Failed = in.Failed\n\tout.Succeeded = in.Succeeded\n\tout.Desired = in.Desired\n\tout.Skipped = 
in.Skipped\n\tout.Phase = JobPhase(in.Phase)\n\tout.DeleteAfter = (*metav1.Time)(unsafe.Pointer(in.DeleteAfter))\n\treturn nil\n}\n\n// Convert_unversioned_ImageJobStatus_To_v1_ImageJobStatus is an autogenerated conversion function.\nfunc Convert_unversioned_ImageJobStatus_To_v1_ImageJobStatus(in *unversioned.ImageJobStatus, out *ImageJobStatus, s conversion.Scope) error {\n\treturn autoConvert_unversioned_ImageJobStatus_To_v1_ImageJobStatus(in, out, s)\n}\n\nfunc autoConvert_v1_ImageList_To_unversioned_ImageList(in *ImageList, out *unversioned.ImageList, s conversion.Scope) error {\n\tout.ObjectMeta = in.ObjectMeta\n\tif err := Convert_v1_ImageListSpec_To_unversioned_ImageListSpec(&in.Spec, &out.Spec, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_v1_ImageListStatus_To_unversioned_ImageListStatus(&in.Status, &out.Status, s); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// Convert_v1_ImageList_To_unversioned_ImageList is an autogenerated conversion function.\nfunc Convert_v1_ImageList_To_unversioned_ImageList(in *ImageList, out *unversioned.ImageList, s conversion.Scope) error {\n\treturn autoConvert_v1_ImageList_To_unversioned_ImageList(in, out, s)\n}\n\nfunc autoConvert_unversioned_ImageList_To_v1_ImageList(in *unversioned.ImageList, out *ImageList, s conversion.Scope) error {\n\tout.ObjectMeta = in.ObjectMeta\n\tif err := Convert_unversioned_ImageListSpec_To_v1_ImageListSpec(&in.Spec, &out.Spec, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_unversioned_ImageListStatus_To_v1_ImageListStatus(&in.Status, &out.Status, s); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// Convert_unversioned_ImageList_To_v1_ImageList is an autogenerated conversion function.\nfunc Convert_unversioned_ImageList_To_v1_ImageList(in *unversioned.ImageList, out *ImageList, s conversion.Scope) error {\n\treturn autoConvert_unversioned_ImageList_To_v1_ImageList(in, out, s)\n}\n\nfunc autoConvert_v1_ImageListList_To_unversioned_ImageListList(in *ImageListList, out *unversioned.ImageListList, s conversion.Scope) error {\n\tout.ListMeta = in.ListMeta\n\tout.Items = *(*[]unversioned.ImageList)(unsafe.Pointer(&in.Items))\n\treturn nil\n}\n\n// Convert_v1_ImageListList_To_unversioned_ImageListList is an autogenerated conversion function.\nfunc Convert_v1_ImageListList_To_unversioned_ImageListList(in *ImageListList, out *unversioned.ImageListList, s conversion.Scope) error {\n\treturn autoConvert_v1_ImageListList_To_unversioned_ImageListList(in, out, s)\n}\n\nfunc autoConvert_unversioned_ImageListList_To_v1_ImageListList(in *unversioned.ImageListList, out *ImageListList, s conversion.Scope) error {\n\tout.ListMeta = in.ListMeta\n\tout.Items = *(*[]ImageList)(unsafe.Pointer(&in.Items))\n\treturn nil\n}\n\n// Convert_unversioned_ImageListList_To_v1_ImageListList is an autogenerated conversion function.\nfunc Convert_unversioned_ImageListList_To_v1_ImageListList(in *unversioned.ImageListList, out *ImageListList, s conversion.Scope) error {\n\treturn autoConvert_unversioned_ImageListList_To_v1_ImageListList(in, out, s)\n}\n\nfunc autoConvert_v1_ImageListSpec_To_unversioned_ImageListSpec(in *ImageListSpec, out *unversioned.ImageListSpec, s conversion.Scope) error {\n\tout.Images = *(*[]string)(unsafe.Pointer(&in.Images))\n\treturn nil\n}\n\n// Convert_v1_ImageListSpec_To_unversioned_ImageListSpec is an autogenerated conversion function.\nfunc Convert_v1_ImageListSpec_To_unversioned_ImageListSpec(in *ImageListSpec, out *unversioned.ImageListSpec, s conversion.Scope) error {\n\treturn 
autoConvert_v1_ImageListSpec_To_unversioned_ImageListSpec(in, out, s)\n}\n\nfunc autoConvert_unversioned_ImageListSpec_To_v1_ImageListSpec(in *unversioned.ImageListSpec, out *ImageListSpec, s conversion.Scope) error {\n\tout.Images = *(*[]string)(unsafe.Pointer(&in.Images))\n\treturn nil\n}\n\n// Convert_unversioned_ImageListSpec_To_v1_ImageListSpec is an autogenerated conversion function.\nfunc Convert_unversioned_ImageListSpec_To_v1_ImageListSpec(in *unversioned.ImageListSpec, out *ImageListSpec, s conversion.Scope) error {\n\treturn autoConvert_unversioned_ImageListSpec_To_v1_ImageListSpec(in, out, s)\n}\n\nfunc autoConvert_v1_ImageListStatus_To_unversioned_ImageListStatus(in *ImageListStatus, out *unversioned.ImageListStatus, s conversion.Scope) error {\n\tout.Timestamp = (*metav1.Time)(unsafe.Pointer(in.Timestamp))\n\tout.Success = in.Success\n\tout.Failed = in.Failed\n\tout.Skipped = in.Skipped\n\treturn nil\n}\n\n// Convert_v1_ImageListStatus_To_unversioned_ImageListStatus is an autogenerated conversion function.\nfunc Convert_v1_ImageListStatus_To_unversioned_ImageListStatus(in *ImageListStatus, out *unversioned.ImageListStatus, s conversion.Scope) error {\n\treturn autoConvert_v1_ImageListStatus_To_unversioned_ImageListStatus(in, out, s)\n}\n\nfunc autoConvert_unversioned_ImageListStatus_To_v1_ImageListStatus(in *unversioned.ImageListStatus, out *ImageListStatus, s conversion.Scope) error {\n\tout.Timestamp = (*metav1.Time)(unsafe.Pointer(in.Timestamp))\n\tout.Success = in.Success\n\tout.Failed = in.Failed\n\tout.Skipped = in.Skipped\n\treturn nil\n}\n\n// Convert_unversioned_ImageListStatus_To_v1_ImageListStatus is an autogenerated conversion function.\nfunc Convert_unversioned_ImageListStatus_To_v1_ImageListStatus(in *unversioned.ImageListStatus, out *ImageListStatus, s conversion.Scope) error {\n\treturn autoConvert_unversioned_ImageListStatus_To_v1_ImageListStatus(in, out, s)\n}\n"
  },
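  {
    "path": "docs/examples/scheme_conversion.go",
    "content": "// Hypothetical example added for illustration only; this file is not part of\n// the Eraser source tree. It registers the generated conversions above on a\n// runtime.Scheme and converts a v1 object to its unversioned counterpart.\npackage main\n\nimport (\n\t\"fmt\"\n\n\tunversioned \"github.com/eraser-dev/eraser/api/unversioned\"\n\teraserv1 \"github.com/eraser-dev/eraser/api/v1\"\n\truntime \"k8s.io/apimachinery/pkg/runtime\"\n)\n\nfunc main() {\n\ts := runtime.NewScheme()\n\tif err := eraserv1.RegisterConversions(s); err != nil {\n\t\tpanic(err)\n\t}\n\n\tsrc := &eraserv1.ImageListSpec{Images: []string{\"nginx:1.14.2\"}}\n\tdst := &unversioned.ImageListSpec{}\n\n\t// Scheme.Convert dispatches to\n\t// Convert_v1_ImageListSpec_To_unversioned_ImageListSpec registered above.\n\tif err := s.Convert(src, dst, nil); err != nil {\n\t\tpanic(err)\n\t}\n\n\tfmt.Println(dst.Images) // [nginx:1.14.2]\n}\n"
  },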
  {
    "path": "api/v1/zz_generated.deepcopy.go",
    "content": "//go:build !ignore_autogenerated\n\n/*\nCopyright 2021.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\n// Code generated by controller-gen. DO NOT EDIT.\n\npackage v1\n\nimport (\n\t\"k8s.io/apimachinery/pkg/runtime\"\n)\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *Image) DeepCopyInto(out *Image) {\n\t*out = *in\n\tif in.Names != nil {\n\t\tin, out := &in.Names, &out.Names\n\t\t*out = make([]string, len(*in))\n\t\tcopy(*out, *in)\n\t}\n\tif in.Digests != nil {\n\t\tin, out := &in.Digests, &out.Digests\n\t\t*out = make([]string, len(*in))\n\t\tcopy(*out, *in)\n\t}\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Image.\nfunc (in *Image) DeepCopy() *Image {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(Image)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ImageJob) DeepCopyInto(out *ImageJob) {\n\t*out = *in\n\tout.TypeMeta = in.TypeMeta\n\tin.ObjectMeta.DeepCopyInto(&out.ObjectMeta)\n\tin.Status.DeepCopyInto(&out.Status)\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ImageJob.\nfunc (in *ImageJob) DeepCopy() *ImageJob {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ImageJob)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.\nfunc (in *ImageJob) DeepCopyObject() runtime.Object {\n\tif c := in.DeepCopy(); c != nil {\n\t\treturn c\n\t}\n\treturn nil\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ImageJobList) DeepCopyInto(out *ImageJobList) {\n\t*out = *in\n\tout.TypeMeta = in.TypeMeta\n\tin.ListMeta.DeepCopyInto(&out.ListMeta)\n\tif in.Items != nil {\n\t\tin, out := &in.Items, &out.Items\n\t\t*out = make([]ImageJob, len(*in))\n\t\tfor i := range *in {\n\t\t\t(*in)[i].DeepCopyInto(&(*out)[i])\n\t\t}\n\t}\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ImageJobList.\nfunc (in *ImageJobList) DeepCopy() *ImageJobList {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ImageJobList)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.\nfunc (in *ImageJobList) DeepCopyObject() runtime.Object {\n\tif c := in.DeepCopy(); c != nil {\n\t\treturn c\n\t}\n\treturn nil\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. 
in must be non-nil.\nfunc (in *ImageJobStatus) DeepCopyInto(out *ImageJobStatus) {\n\t*out = *in\n\tif in.DeleteAfter != nil {\n\t\tin, out := &in.DeleteAfter, &out.DeleteAfter\n\t\t*out = (*in).DeepCopy()\n\t}\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ImageJobStatus.\nfunc (in *ImageJobStatus) DeepCopy() *ImageJobStatus {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ImageJobStatus)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ImageList) DeepCopyInto(out *ImageList) {\n\t*out = *in\n\tout.TypeMeta = in.TypeMeta\n\tin.ObjectMeta.DeepCopyInto(&out.ObjectMeta)\n\tin.Spec.DeepCopyInto(&out.Spec)\n\tin.Status.DeepCopyInto(&out.Status)\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ImageList.\nfunc (in *ImageList) DeepCopy() *ImageList {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ImageList)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.\nfunc (in *ImageList) DeepCopyObject() runtime.Object {\n\tif c := in.DeepCopy(); c != nil {\n\t\treturn c\n\t}\n\treturn nil\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ImageListList) DeepCopyInto(out *ImageListList) {\n\t*out = *in\n\tout.TypeMeta = in.TypeMeta\n\tin.ListMeta.DeepCopyInto(&out.ListMeta)\n\tif in.Items != nil {\n\t\tin, out := &in.Items, &out.Items\n\t\t*out = make([]ImageList, len(*in))\n\t\tfor i := range *in {\n\t\t\t(*in)[i].DeepCopyInto(&(*out)[i])\n\t\t}\n\t}\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ImageListList.\nfunc (in *ImageListList) DeepCopy() *ImageListList {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ImageListList)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.\nfunc (in *ImageListList) DeepCopyObject() runtime.Object {\n\tif c := in.DeepCopy(); c != nil {\n\t\treturn c\n\t}\n\treturn nil\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ImageListSpec) DeepCopyInto(out *ImageListSpec) {\n\t*out = *in\n\tif in.Images != nil {\n\t\tin, out := &in.Images, &out.Images\n\t\t*out = make([]string, len(*in))\n\t\tcopy(*out, *in)\n\t}\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ImageListSpec.\nfunc (in *ImageListSpec) DeepCopy() *ImageListSpec {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ImageListSpec)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ImageListStatus) DeepCopyInto(out *ImageListStatus) {\n\t*out = *in\n\tif in.Timestamp != nil {\n\t\tin, out := &in.Timestamp, &out.Timestamp\n\t\t*out = (*in).DeepCopy()\n\t}\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ImageListStatus.\nfunc (in *ImageListStatus) DeepCopy() *ImageListStatus {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ImageListStatus)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n"
  },
  {
    "path": "api/v1alpha1/config/config.go",
    "content": "package config\n\nimport (\n\t\"fmt\"\n\t\"time\"\n\n\tv1alpha1 \"github.com/eraser-dev/eraser/api/v1alpha1\"\n\t\"github.com/eraser-dev/eraser/version\"\n\t\"k8s.io/apimachinery/pkg/api/resource\"\n)\n\nvar defaultScannerConfig = `\ncacheDir: /var/lib/trivy\ndbRepo: ghcr.io/aquasecurity/trivy-db\ndeleteFailedImages: true\ndeleteEOLImages: true\nvulnerabilities:\n  ignoreUnfixed: false\n  types:\n    - os\n    - library\nsecurityChecks: # need to be documented; determined by trivy, not us\n  - vuln\nseverities:\n  - CRITICAL\n  - HIGH\n  - MEDIUM\n  - LOW\n`\n\nconst (\n\tnoDelay = v1alpha1.Duration(0)\n\toneDay  = v1alpha1.Duration(time.Hour * 24)\n)\n\nfunc Default() *v1alpha1.EraserConfig {\n\treturn &v1alpha1.EraserConfig{\n\t\tManager: v1alpha1.ManagerConfig{\n\t\t\tRuntime:      \"containerd\",\n\t\t\tOTLPEndpoint: \"\",\n\t\t\tLogLevel:     \"info\",\n\t\t\tScheduling: v1alpha1.ScheduleConfig{\n\t\t\t\tRepeatInterval:   v1alpha1.Duration(oneDay),\n\t\t\t\tBeginImmediately: true,\n\t\t\t},\n\t\t\tProfile: v1alpha1.ProfileConfig{\n\t\t\t\tEnabled: false,\n\t\t\t\tPort:    6060,\n\t\t\t},\n\t\t\tImageJob: v1alpha1.ImageJobConfig{\n\t\t\t\tSuccessRatio: 1.0,\n\t\t\t\tCleanup: v1alpha1.ImageJobCleanupConfig{\n\t\t\t\t\tDelayOnSuccess: noDelay,\n\t\t\t\t\tDelayOnFailure: oneDay,\n\t\t\t\t},\n\t\t\t},\n\t\t\tPullSecrets: []string{},\n\t\t\tNodeFilter: v1alpha1.NodeFilterConfig{\n\t\t\t\tType: \"exclude\",\n\t\t\t\tSelectors: []string{\n\t\t\t\t\t\"eraser.sh/cleanup.filter\",\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t\tComponents: v1alpha1.Components{\n\t\t\tCollector: v1alpha1.OptionalContainerConfig{\n\t\t\t\tEnabled: false,\n\t\t\t\tContainerConfig: v1alpha1.ContainerConfig{\n\t\t\t\t\tImage: v1alpha1.RepoTag{\n\t\t\t\t\t\tRepo: repo(\"collector\"),\n\t\t\t\t\t\tTag:  version.BuildVersion,\n\t\t\t\t\t},\n\t\t\t\t\tRequest: v1alpha1.ResourceRequirements{\n\t\t\t\t\t\tMem: resource.MustParse(\"25Mi\"),\n\t\t\t\t\t\tCPU: resource.MustParse(\"7m\"),\n\t\t\t\t\t},\n\t\t\t\t\tLimit: v1alpha1.ResourceRequirements{\n\t\t\t\t\t\tMem: resource.MustParse(\"500Mi\"),\n\t\t\t\t\t\tCPU: resource.Quantity{},\n\t\t\t\t\t},\n\t\t\t\t\tConfig: nil,\n\t\t\t\t},\n\t\t\t},\n\t\t\tScanner: v1alpha1.OptionalContainerConfig{\n\t\t\t\tEnabled: false,\n\t\t\t\tContainerConfig: v1alpha1.ContainerConfig{\n\t\t\t\t\tImage: v1alpha1.RepoTag{\n\t\t\t\t\t\tRepo: repo(\"eraser-trivy-scanner\"),\n\t\t\t\t\t\tTag:  version.BuildVersion,\n\t\t\t\t\t},\n\t\t\t\t\tRequest: v1alpha1.ResourceRequirements{\n\t\t\t\t\t\tMem: resource.MustParse(\"500Mi\"),\n\t\t\t\t\t\tCPU: resource.MustParse(\"1000m\"),\n\t\t\t\t\t},\n\t\t\t\t\tLimit: v1alpha1.ResourceRequirements{\n\t\t\t\t\t\tMem: resource.MustParse(\"2Gi\"),\n\t\t\t\t\t\tCPU: resource.MustParse(\"1500m\"),\n\t\t\t\t\t},\n\t\t\t\t\tConfig: &defaultScannerConfig,\n\t\t\t\t},\n\t\t\t},\n\t\t\tEraser: v1alpha1.ContainerConfig{\n\t\t\t\tImage: v1alpha1.RepoTag{\n\t\t\t\t\tRepo: repo(\"eraser\"),\n\t\t\t\t\tTag:  version.BuildVersion,\n\t\t\t\t},\n\t\t\t\tRequest: v1alpha1.ResourceRequirements{\n\t\t\t\t\tMem: resource.MustParse(\"25Mi\"),\n\t\t\t\t\tCPU: resource.MustParse(\"7m\"),\n\t\t\t\t},\n\t\t\t\tLimit: v1alpha1.ResourceRequirements{\n\t\t\t\t\tMem: resource.MustParse(\"30Mi\"),\n\t\t\t\t\tCPU: resource.Quantity{},\n\t\t\t\t},\n\t\t\t\tConfig: nil,\n\t\t\t},\n\t\t},\n\t}\n}\n\nfunc repo(basename string) string {\n\tif version.DefaultRepo == \"\" {\n\t\treturn basename\n\t}\n\n\treturn fmt.Sprintf(\"%s/%s\", version.DefaultRepo, basename)\n}\n"
  },
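  {
    "path": "docs/examples/override_default_config.go",
    "content": "// Hypothetical example added for illustration only; this file is not part of\n// the Eraser source tree. It shows how the Default() configuration above\n// could be loaded and selectively overridden before use.\npackage main\n\nimport (\n\t\"fmt\"\n\t\"time\"\n\n\tv1alpha1 \"github.com/eraser-dev/eraser/api/v1alpha1\"\n\t\"github.com/eraser-dev/eraser/api/v1alpha1/config\"\n)\n\nfunc main() {\n\tcfg := config.Default()\n\n\t// Shorten the cleanup schedule from the default of one day to 12 hours\n\t// and enable the (disabled-by-default) scanner component.\n\tcfg.Manager.Scheduling.RepeatInterval = v1alpha1.Duration(12 * time.Hour)\n\tcfg.Components.Scanner.Enabled = true\n\n\tfmt.Println(cfg.Manager.Runtime, cfg.Components.Scanner.Image.Repo)\n}\n"
  },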
  {
    "path": "api/v1alpha1/custom_conversions.go",
    "content": "package v1alpha1\n\nimport (\n\tunversioned \"github.com/eraser-dev/eraser/api/unversioned\"\n\tconversion \"k8s.io/apimachinery/pkg/conversion\"\n)\n\n//nolint:revive\nfunc Convert_v1alpha1_ManagerConfig_To_unversioned_ManagerConfig(in *ManagerConfig, out *unversioned.ManagerConfig, s conversion.Scope) error {\n\treturn autoConvert_v1alpha1_ManagerConfig_To_unversioned_ManagerConfig(in, out, s)\n}\n\n//nolint:revive\nfunc manualConvert_v1alpha1_Runtime_To_unversioned_RuntimeSpec(in *Runtime, out *unversioned.RuntimeSpec, _ conversion.Scope) error {\n\tout.Name = unversioned.Runtime(string(*in))\n\n\trs, err := unversioned.ConvertRuntimeToRuntimeSpec(out.Name)\n\tif err != nil {\n\t\treturn err\n\t}\n\tout.Address = rs.Address\n\n\treturn nil\n}\n\n//nolint:revive\nfunc Convert_v1alpha1_Runtime_To_unversioned_RuntimeSpec(in *Runtime, out *unversioned.RuntimeSpec, s conversion.Scope) error {\n\treturn manualConvert_v1alpha1_Runtime_To_unversioned_RuntimeSpec(in, out, s)\n}\n\n//nolint:revive\nfunc Convert_unversioned_ManagerConfig_To_v1alpha1_ManagerConfig(in *unversioned.ManagerConfig, out *ManagerConfig, s conversion.Scope) error {\n\treturn autoConvert_unversioned_ManagerConfig_To_v1alpha1_ManagerConfig(in, out, s)\n}\n\n//nolint:revive\nfunc manualConvert_unversioned_RuntimeSpec_To_v1alpha1_Runtime(in *unversioned.RuntimeSpec, out *Runtime, _ conversion.Scope) error {\n\t*out = Runtime(in.Name)\n\treturn nil\n}\n\n//nolint:revive\nfunc Convert_unversioned_RuntimeSpec_To_v1alpha1_Runtime(in *unversioned.RuntimeSpec, out *Runtime, s conversion.Scope) error {\n\treturn manualConvert_unversioned_RuntimeSpec_To_v1alpha1_Runtime(in, out, s)\n}\n"
  },
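  {
    "path": "docs/examples/runtime_spec_conversion.go",
    "content": "// Hypothetical example added for illustration only; this file is not part of\n// the Eraser source tree. It exercises the manual Runtime -> RuntimeSpec\n// conversion above, which resolves the runtime's default socket address via\n// unversioned.ConvertRuntimeToRuntimeSpec.\npackage main\n\nimport (\n\t\"fmt\"\n\n\tunversioned \"github.com/eraser-dev/eraser/api/unversioned\"\n\tv1alpha1 \"github.com/eraser-dev/eraser/api/v1alpha1\"\n)\n\nfunc main() {\n\tin := v1alpha1.RuntimeContainerd\n\tvar out unversioned.RuntimeSpec\n\n\t// The conversion ignores its scope argument, so nil is safe here.\n\tif err := v1alpha1.Convert_v1alpha1_Runtime_To_unversioned_RuntimeSpec(&in, &out, nil); err != nil {\n\t\tpanic(err)\n\t}\n\n\tfmt.Println(out.Name, out.Address)\n}\n"
  },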
  {
    "path": "api/v1alpha1/doc.go",
    "content": "// Package v1alpha1 contains API Schema definitions for the eraser v1alpha1 API version.\n// +k8s:conversion-gen=github.com/eraser-dev/eraser/api/unversioned\npackage v1alpha1\n"
  },
  {
    "path": "api/v1alpha1/eraserconfig_types.go",
    "content": "/*\nCopyright 2021.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\npackage v1alpha1\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"time\"\n\n\t\"github.com/eraser-dev/eraser/api/unversioned\"\n\tcorev1 \"k8s.io/api/core/v1\"\n\t\"k8s.io/apimachinery/pkg/api/resource\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\t\"k8s.io/apimachinery/pkg/conversion\"\n)\n\ntype (\n\tDuration time.Duration\n\tRuntime  string\n)\n\nconst (\n\tRuntimeContainerd Runtime = \"containerd\"\n\tRuntimeDockerShim Runtime = \"dockershim\"\n\tRuntimeCrio       Runtime = \"crio\"\n)\n\nfunc (td *Duration) UnmarshalJSON(b []byte) error {\n\tvar str string\n\terr := json.Unmarshal(b, &str)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tpd, err := time.ParseDuration(str)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t*td = Duration(pd)\n\treturn nil\n}\n\nfunc (td *Duration) MarshalJSON() ([]byte, error) {\n\treturn []byte(fmt.Sprintf(`\"%s\"`, time.Duration(*td).String())), nil\n}\n\nfunc (r *Runtime) UnmarshalJSON(b []byte) error {\n\tvar str string\n\terr := json.Unmarshal(b, &str)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tswitch rt := Runtime(str); rt {\n\tcase RuntimeContainerd, RuntimeDockerShim, RuntimeCrio:\n\t\t*r = rt\n\tdefault:\n\t\treturn fmt.Errorf(\"cannot determine runtime type: %s. valid values are containerd, dockershim, or crio\", str)\n\t}\n\n\treturn nil\n}\n\n// EDIT THIS FILE!  THIS IS SCAFFOLDING FOR YOU TO OWN!\n// NOTE: json tags are required. 
Any new fields you add must have json tags for the fields to be serialized.\n\ntype OptionalContainerConfig struct {\n\tEnabled         bool `json:\"enabled,omitempty\"`\n\tContainerConfig `json:\",inline\"`\n}\n\ntype ContainerConfig struct {\n\tImage   RepoTag              `json:\"image,omitempty\"`\n\tRequest ResourceRequirements `json:\"request,omitempty\"`\n\tLimit   ResourceRequirements `json:\"limit,omitempty\"`\n\tConfig  *string              `json:\"config,omitempty\"`\n\tVolumes []corev1.Volume      `json:\"volumes,omitempty\"`\n}\n\ntype ManagerConfig struct {\n\tRuntime           Runtime          `json:\"runtime,omitempty\"`\n\tOTLPEndpoint      string           `json:\"otlpEndpoint,omitempty\"`\n\tLogLevel          string           `json:\"logLevel,omitempty\"`\n\tScheduling        ScheduleConfig   `json:\"scheduling,omitempty\"`\n\tProfile           ProfileConfig    `json:\"profile,omitempty\"`\n\tImageJob          ImageJobConfig   `json:\"imageJob,omitempty\"`\n\tPullSecrets       []string         `json:\"pullSecrets,omitempty\"`\n\tNodeFilter        NodeFilterConfig `json:\"nodeFilter,omitempty\"`\n\tPriorityClassName string           `json:\"priorityClassName,omitempty\"`\n}\n\ntype ScheduleConfig struct {\n\tRepeatInterval   Duration `json:\"repeatInterval,omitempty\"`\n\tBeginImmediately bool     `json:\"beginImmediately,omitempty\"`\n}\n\ntype ProfileConfig struct {\n\tEnabled bool `json:\"enabled,omitempty\"`\n\tPort    int  `json:\"port,omitempty\"`\n}\n\ntype ImageJobConfig struct {\n\tSuccessRatio float64               `json:\"successRatio,omitempty\"`\n\tCleanup      ImageJobCleanupConfig `json:\"cleanup,omitempty\"`\n}\n\ntype ImageJobCleanupConfig struct {\n\tDelayOnSuccess Duration `json:\"delayOnSuccess,omitempty\"`\n\tDelayOnFailure Duration `json:\"delayOnFailure,omitempty\"`\n}\n\ntype NodeFilterConfig struct {\n\tType      string   `json:\"type,omitempty\"`\n\tSelectors []string `json:\"selectors,omitempty\"`\n}\n\ntype ResourceRequirements struct {\n\tMem resource.Quantity `json:\"mem,omitempty\"`\n\tCPU resource.Quantity `json:\"cpu,omitempty\"`\n}\n\ntype RepoTag struct {\n\tRepo string `json:\"repo,omitempty\"`\n\tTag  string `json:\"tag,omitempty\"`\n}\n\ntype Components struct {\n\tCollector OptionalContainerConfig `json:\"collector,omitempty\"`\n\tScanner   OptionalContainerConfig `json:\"scanner,omitempty\"`\n\tEraser    ContainerConfig         `json:\"eraser,omitempty\"`\n}\n\n//+kubebuilder:object:root=true\n\n// EraserConfig is the Schema for the eraserconfigs API.\ntype EraserConfig struct {\n\tmetav1.TypeMeta `json:\",inline\"`\n\tManager         ManagerConfig `json:\"manager\"`\n\tComponents      Components    `json:\"components\"`\n}\n\nfunc init() {\n\tSchemeBuilder.Register(&EraserConfig{})\n}\n\n// In future versions of EraserConfig (for example, v1alpha2), the\n// .Components.Eraser field has been renamed to .Components.Remover\n// conversion-gen is unable to make the conversion automatically,\n// and provides stubs for these functions, with a warning that they\n// will not work properly. 
Because they are called by other generated\n// functions, the names of these functions cannot change.\nfunc Convert_v1alpha1_Components_To_unversioned_Components(in *Components, out *unversioned.Components, s conversion.Scope) error { //nolint:revive\n\tif err := Convert_v1alpha1_OptionalContainerConfig_To_unversioned_OptionalContainerConfig(&in.Collector, &out.Collector, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_v1alpha1_OptionalContainerConfig_To_unversioned_OptionalContainerConfig(&in.Scanner, &out.Scanner, s); err != nil {\n\t\treturn err\n\t}\n\treturn Convert_v1alpha1_ContainerConfig_To_unversioned_ContainerConfig(&in.Eraser, &out.Remover, s)\n}\n\nfunc Convert_unversioned_Components_To_v1alpha1_Components(in *unversioned.Components, out *Components, s conversion.Scope) error { //nolint:revive\n\tif err := Convert_unversioned_OptionalContainerConfig_To_v1alpha1_OptionalContainerConfig(&in.Collector, &out.Collector, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_unversioned_OptionalContainerConfig_To_v1alpha1_OptionalContainerConfig(&in.Scanner, &out.Scanner, s); err != nil {\n\t\treturn err\n\t}\n\treturn Convert_unversioned_ContainerConfig_To_v1alpha1_ContainerConfig(&in.Remover, &out.Eraser, s)\n}\n"
  },
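  {
    "path": "docs/examples/duration_runtime_json.go",
    "content": "// Hypothetical example added for illustration only; this file is not part of\n// the Eraser source tree. It shows the custom JSON handling defined above:\n// Duration round-trips Go duration strings, and Runtime rejects anything but\n// containerd, dockershim, or crio.\npackage main\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"time\"\n\n\tv1alpha1 \"github.com/eraser-dev/eraser/api/v1alpha1\"\n)\n\nfunc main() {\n\tvar d v1alpha1.Duration\n\tif err := json.Unmarshal([]byte(`\"90m\"`), &d); err != nil {\n\t\tpanic(err)\n\t}\n\tfmt.Println(time.Duration(d)) // 1h30m0s\n\n\tvar r v1alpha1.Runtime\n\terr := json.Unmarshal([]byte(`\"rkt\"`), &r)\n\tfmt.Println(err) // cannot determine runtime type: rkt. valid values are containerd, dockershim, or crio\n}\n"
  },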
  {
    "path": "api/v1alpha1/groupversion_info.go",
    "content": "/*\nCopyright 2021.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\n// Package v1 contains API Schema definitions for the eraser.sh v1 API group\n// +kubebuilder:object:generate=true\n// +groupName=eraser.sh\npackage v1alpha1\n\nimport (\n\truntime \"k8s.io/apimachinery/pkg/runtime\"\n\t\"k8s.io/apimachinery/pkg/runtime/schema\"\n\t\"sigs.k8s.io/controller-runtime/pkg/scheme\"\n)\n\nvar (\n\t// GroupVersion is group version used to register these objects.\n\tGroupVersion = schema.GroupVersion{Group: \"eraser.sh\", Version: \"v1alpha1\"}\n\n\t// SchemeBuilder is used to add go types to the GroupVersionKind scheme.\n\tSchemeBuilder = &scheme.Builder{GroupVersion: GroupVersion}\n\n\tlocalSchemeBuilder = runtime.NewSchemeBuilder(SchemeBuilder.AddToScheme)\n\n\t// AddToScheme adds the types in this group-version to the given scheme.\n\tAddToScheme = SchemeBuilder.AddToScheme\n)\n"
  },
  {
    "path": "api/v1alpha1/imagejob_types.go",
    "content": "/*\nCopyright 2021.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\npackage v1alpha1\n\nimport (\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n)\n\ntype Image struct {\n\tImageID string   `json:\"image_id\"`\n\tNames   []string `json:\"names,omitempty\"`\n\tDigests []string `json:\"digests,omitempty\"`\n}\n\n// EDIT THIS FILE!  THIS IS SCAFFOLDING FOR YOU TO OWN!\n// NOTE: json tags are required.  Any new fields you add must have json tags for the fields to be serialized.\n\n// JobPhase defines the phase of an ImageJob status.\ntype JobPhase string\n\nconst (\n\tPhaseRunning   JobPhase = \"Running\"\n\tPhaseCompleted JobPhase = \"Completed\"\n\tPhaseFailed    JobPhase = \"Failed\"\n)\n\n// ImageJobStatus defines the observed state of ImageJob.\ntype ImageJobStatus struct {\n\t// number of pods that failed\n\tFailed int `json:\"failed\"`\n\n\t// number of pods that completed successfully\n\tSucceeded int `json:\"succeeded\"`\n\n\t// desired number of pods\n\tDesired int `json:\"desired\"`\n\n\t// number of nodes that were skipped e.g. because they are not a linux node\n\tSkipped int `json:\"skipped\"`\n\n\t// job running, successfully completed, or failed\n\tPhase JobPhase `json:\"phase\"`\n\n\t// Time to delay deletion until\n\tDeleteAfter *metav1.Time `json:\"deleteAfter,omitempty\"`\n}\n\n// +kubebuilder:object:root=true\n// +kubebuilder:subresource:status\n// +kubebuilder:resource:scope=\"Cluster\"\n// +kubebuilder:deprecatedversion:warning=\"v1alpha1 of the eraser API has been deprecated. Please migrate to v1.\"\n// ImageJob is the Schema for the imagejobs API.\ntype ImageJob struct {\n\tmetav1.TypeMeta   `json:\",inline\"`\n\tmetav1.ObjectMeta `json:\"metadata,omitempty\"`\n\n\tStatus ImageJobStatus `json:\"status,omitempty\"`\n}\n\n// +kubebuilder:object:root=true\n\n// ImageJobList contains a list of ImageJob.\ntype ImageJobList struct {\n\tmetav1.TypeMeta `json:\",inline\"`\n\tmetav1.ListMeta `json:\"metadata,omitempty\"`\n\tItems           []ImageJob `json:\"items\"`\n}\n\nfunc init() {\n\tSchemeBuilder.Register(&ImageJob{}, &ImageJobList{})\n}\n"
  },
  {
    "path": "api/v1alpha1/imagelist_types.go",
    "content": "/*\nCopyright 2021.\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n    http://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\npackage v1alpha1\n\nimport (\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n)\n\n// ImageListSpec defines the desired state of ImageList.\ntype ImageListSpec struct {\n\t// The list of non-compliant images to delete if non-running.\n\tImages []string `json:\"images\"`\n}\n\n// ImageListStatus defines the observed state of ImageList.\ntype ImageListStatus struct {\n\t// Information when the job was completed.\n\tTimestamp *metav1.Time `json:\"timestamp\"`\n\t// Number of nodes that successfully ran the job\n\tSuccess int64 `json:\"success\"`\n\t// Number of nodes that failed to run the job\n\tFailed int64 `json:\"failed\"`\n\t// Number of nodes that were skipped due to a skip selector\n\tSkipped int64 `json:\"skipped\"`\n}\n\n// +kubebuilder:object:root=true\n// +kubebuilder:subresource:status\n// +kubebuilder:resource:scope=\"Cluster\"\n// +kubebuilder:deprecatedversion:warning=\"v1alpha1 of the eraser API has been deprecated. Please migrate to v1.\"\n// ImageList is the Schema for the imagelists API.\ntype ImageList struct {\n\tmetav1.TypeMeta   `json:\",inline\"`\n\tmetav1.ObjectMeta `json:\"metadata,omitempty\"`\n\n\tSpec   ImageListSpec   `json:\"spec,omitempty\"`\n\tStatus ImageListStatus `json:\"status,omitempty\"`\n}\n\n// +kubebuilder:object:root=true\n\n// ImageListList contains a list of ImageList.\ntype ImageListList struct {\n\tmetav1.TypeMeta `json:\",inline\"`\n\tmetav1.ListMeta `json:\"metadata,omitempty\"`\n\tItems           []ImageList `json:\"items\"`\n}\n\nfunc init() {\n\tSchemeBuilder.Register(&ImageList{}, &ImageListList{})\n}\n"
  },
  {
    "path": "api/v1alpha1/zz_generated.conversion.go",
    "content": "//go:build !ignore_autogenerated\n// +build !ignore_autogenerated\n\n/*\nCopyright 2021.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n// Code generated by conversion-gen. DO NOT EDIT.\n\npackage v1alpha1\n\nimport (\n\tunsafe \"unsafe\"\n\n\tunversioned \"github.com/eraser-dev/eraser/api/unversioned\"\n\tv1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\tconversion \"k8s.io/apimachinery/pkg/conversion\"\n\truntime \"k8s.io/apimachinery/pkg/runtime\"\n)\n\nfunc init() {\n\tlocalSchemeBuilder.Register(RegisterConversions)\n}\n\n// RegisterConversions adds conversion functions to the given scheme.\n// Public to allow building arbitrary schemes.\nfunc RegisterConversions(s *runtime.Scheme) error {\n\tif err := s.AddGeneratedConversionFunc((*ContainerConfig)(nil), (*unversioned.ContainerConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha1_ContainerConfig_To_unversioned_ContainerConfig(a.(*ContainerConfig), b.(*unversioned.ContainerConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.ContainerConfig)(nil), (*ContainerConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_ContainerConfig_To_v1alpha1_ContainerConfig(a.(*unversioned.ContainerConfig), b.(*ContainerConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*EraserConfig)(nil), (*unversioned.EraserConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha1_EraserConfig_To_unversioned_EraserConfig(a.(*EraserConfig), b.(*unversioned.EraserConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.EraserConfig)(nil), (*EraserConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_EraserConfig_To_v1alpha1_EraserConfig(a.(*unversioned.EraserConfig), b.(*EraserConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*Image)(nil), (*unversioned.Image)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha1_Image_To_unversioned_Image(a.(*Image), b.(*unversioned.Image), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.Image)(nil), (*Image)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_Image_To_v1alpha1_Image(a.(*unversioned.Image), b.(*Image), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*ImageJob)(nil), (*unversioned.ImageJob)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha1_ImageJob_To_unversioned_ImageJob(a.(*ImageJob), b.(*unversioned.ImageJob), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.ImageJob)(nil), (*ImageJob)(nil), func(a, b interface{}, scope 
conversion.Scope) error {\n\t\treturn Convert_unversioned_ImageJob_To_v1alpha1_ImageJob(a.(*unversioned.ImageJob), b.(*ImageJob), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*ImageJobCleanupConfig)(nil), (*unversioned.ImageJobCleanupConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha1_ImageJobCleanupConfig_To_unversioned_ImageJobCleanupConfig(a.(*ImageJobCleanupConfig), b.(*unversioned.ImageJobCleanupConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.ImageJobCleanupConfig)(nil), (*ImageJobCleanupConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_ImageJobCleanupConfig_To_v1alpha1_ImageJobCleanupConfig(a.(*unversioned.ImageJobCleanupConfig), b.(*ImageJobCleanupConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*ImageJobConfig)(nil), (*unversioned.ImageJobConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha1_ImageJobConfig_To_unversioned_ImageJobConfig(a.(*ImageJobConfig), b.(*unversioned.ImageJobConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.ImageJobConfig)(nil), (*ImageJobConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_ImageJobConfig_To_v1alpha1_ImageJobConfig(a.(*unversioned.ImageJobConfig), b.(*ImageJobConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*ImageJobList)(nil), (*unversioned.ImageJobList)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha1_ImageJobList_To_unversioned_ImageJobList(a.(*ImageJobList), b.(*unversioned.ImageJobList), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.ImageJobList)(nil), (*ImageJobList)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_ImageJobList_To_v1alpha1_ImageJobList(a.(*unversioned.ImageJobList), b.(*ImageJobList), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*ImageJobStatus)(nil), (*unversioned.ImageJobStatus)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha1_ImageJobStatus_To_unversioned_ImageJobStatus(a.(*ImageJobStatus), b.(*unversioned.ImageJobStatus), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.ImageJobStatus)(nil), (*ImageJobStatus)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_ImageJobStatus_To_v1alpha1_ImageJobStatus(a.(*unversioned.ImageJobStatus), b.(*ImageJobStatus), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*ImageList)(nil), (*unversioned.ImageList)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha1_ImageList_To_unversioned_ImageList(a.(*ImageList), b.(*unversioned.ImageList), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.ImageList)(nil), (*ImageList)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_ImageList_To_v1alpha1_ImageList(a.(*unversioned.ImageList), b.(*ImageList), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := 
s.AddGeneratedConversionFunc((*ImageListList)(nil), (*unversioned.ImageListList)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha1_ImageListList_To_unversioned_ImageListList(a.(*ImageListList), b.(*unversioned.ImageListList), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.ImageListList)(nil), (*ImageListList)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_ImageListList_To_v1alpha1_ImageListList(a.(*unversioned.ImageListList), b.(*ImageListList), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*ImageListSpec)(nil), (*unversioned.ImageListSpec)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha1_ImageListSpec_To_unversioned_ImageListSpec(a.(*ImageListSpec), b.(*unversioned.ImageListSpec), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.ImageListSpec)(nil), (*ImageListSpec)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_ImageListSpec_To_v1alpha1_ImageListSpec(a.(*unversioned.ImageListSpec), b.(*ImageListSpec), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*ImageListStatus)(nil), (*unversioned.ImageListStatus)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha1_ImageListStatus_To_unversioned_ImageListStatus(a.(*ImageListStatus), b.(*unversioned.ImageListStatus), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.ImageListStatus)(nil), (*ImageListStatus)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_ImageListStatus_To_v1alpha1_ImageListStatus(a.(*unversioned.ImageListStatus), b.(*ImageListStatus), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*NodeFilterConfig)(nil), (*unversioned.NodeFilterConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha1_NodeFilterConfig_To_unversioned_NodeFilterConfig(a.(*NodeFilterConfig), b.(*unversioned.NodeFilterConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.NodeFilterConfig)(nil), (*NodeFilterConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_NodeFilterConfig_To_v1alpha1_NodeFilterConfig(a.(*unversioned.NodeFilterConfig), b.(*NodeFilterConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*OptionalContainerConfig)(nil), (*unversioned.OptionalContainerConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha1_OptionalContainerConfig_To_unversioned_OptionalContainerConfig(a.(*OptionalContainerConfig), b.(*unversioned.OptionalContainerConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.OptionalContainerConfig)(nil), (*OptionalContainerConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_OptionalContainerConfig_To_v1alpha1_OptionalContainerConfig(a.(*unversioned.OptionalContainerConfig), b.(*OptionalContainerConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*ProfileConfig)(nil), (*unversioned.ProfileConfig)(nil), func(a, b 
interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha1_ProfileConfig_To_unversioned_ProfileConfig(a.(*ProfileConfig), b.(*unversioned.ProfileConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.ProfileConfig)(nil), (*ProfileConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_ProfileConfig_To_v1alpha1_ProfileConfig(a.(*unversioned.ProfileConfig), b.(*ProfileConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*RepoTag)(nil), (*unversioned.RepoTag)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha1_RepoTag_To_unversioned_RepoTag(a.(*RepoTag), b.(*unversioned.RepoTag), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.RepoTag)(nil), (*RepoTag)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_RepoTag_To_v1alpha1_RepoTag(a.(*unversioned.RepoTag), b.(*RepoTag), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*ResourceRequirements)(nil), (*unversioned.ResourceRequirements)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha1_ResourceRequirements_To_unversioned_ResourceRequirements(a.(*ResourceRequirements), b.(*unversioned.ResourceRequirements), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.ResourceRequirements)(nil), (*ResourceRequirements)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_ResourceRequirements_To_v1alpha1_ResourceRequirements(a.(*unversioned.ResourceRequirements), b.(*ResourceRequirements), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*ScheduleConfig)(nil), (*unversioned.ScheduleConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha1_ScheduleConfig_To_unversioned_ScheduleConfig(a.(*ScheduleConfig), b.(*unversioned.ScheduleConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.ScheduleConfig)(nil), (*ScheduleConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_ScheduleConfig_To_v1alpha1_ScheduleConfig(a.(*unversioned.ScheduleConfig), b.(*ScheduleConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddConversionFunc((*unversioned.Components)(nil), (*Components)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_Components_To_v1alpha1_Components(a.(*unversioned.Components), b.(*Components), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddConversionFunc((*unversioned.ManagerConfig)(nil), (*ManagerConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_ManagerConfig_To_v1alpha1_ManagerConfig(a.(*unversioned.ManagerConfig), b.(*ManagerConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddConversionFunc((*unversioned.RuntimeSpec)(nil), (*Runtime)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_RuntimeSpec_To_v1alpha1_Runtime(a.(*unversioned.RuntimeSpec), b.(*Runtime), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddConversionFunc((*Components)(nil), (*unversioned.Components)(nil), func(a, b interface{}, 
scope conversion.Scope) error {\n\t\treturn Convert_v1alpha1_Components_To_unversioned_Components(a.(*Components), b.(*unversioned.Components), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddConversionFunc((*ManagerConfig)(nil), (*unversioned.ManagerConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha1_ManagerConfig_To_unversioned_ManagerConfig(a.(*ManagerConfig), b.(*unversioned.ManagerConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddConversionFunc((*Runtime)(nil), (*unversioned.RuntimeSpec)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha1_Runtime_To_unversioned_RuntimeSpec(a.(*Runtime), b.(*unversioned.RuntimeSpec), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\nfunc autoConvert_v1alpha1_Components_To_unversioned_Components(in *Components, out *unversioned.Components, s conversion.Scope) error {\n\tif err := Convert_v1alpha1_OptionalContainerConfig_To_unversioned_OptionalContainerConfig(&in.Collector, &out.Collector, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_v1alpha1_OptionalContainerConfig_To_unversioned_OptionalContainerConfig(&in.Scanner, &out.Scanner, s); err != nil {\n\t\treturn err\n\t}\n\t// WARNING: in.Eraser requires manual conversion: does not exist in peer-type\n\treturn nil\n}\n\nfunc autoConvert_unversioned_Components_To_v1alpha1_Components(in *unversioned.Components, out *Components, s conversion.Scope) error {\n\tif err := Convert_unversioned_OptionalContainerConfig_To_v1alpha1_OptionalContainerConfig(&in.Collector, &out.Collector, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_unversioned_OptionalContainerConfig_To_v1alpha1_OptionalContainerConfig(&in.Scanner, &out.Scanner, s); err != nil {\n\t\treturn err\n\t}\n\t// WARNING: in.Remover requires manual conversion: does not exist in peer-type\n\treturn nil\n}\n\nfunc autoConvert_v1alpha1_ContainerConfig_To_unversioned_ContainerConfig(in *ContainerConfig, out *unversioned.ContainerConfig, s conversion.Scope) error {\n\tif err := Convert_v1alpha1_RepoTag_To_unversioned_RepoTag(&in.Image, &out.Image, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_v1alpha1_ResourceRequirements_To_unversioned_ResourceRequirements(&in.Request, &out.Request, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_v1alpha1_ResourceRequirements_To_unversioned_ResourceRequirements(&in.Limit, &out.Limit, s); err != nil {\n\t\treturn err\n\t}\n\tout.Config = (*string)(unsafe.Pointer(in.Config))\n\tout.Volumes = *(*[]v1.Volume)(unsafe.Pointer(&in.Volumes))\n\treturn nil\n}\n\n// Convert_v1alpha1_ContainerConfig_To_unversioned_ContainerConfig is an autogenerated conversion function.\nfunc Convert_v1alpha1_ContainerConfig_To_unversioned_ContainerConfig(in *ContainerConfig, out *unversioned.ContainerConfig, s conversion.Scope) error {\n\treturn autoConvert_v1alpha1_ContainerConfig_To_unversioned_ContainerConfig(in, out, s)\n}\n\nfunc autoConvert_unversioned_ContainerConfig_To_v1alpha1_ContainerConfig(in *unversioned.ContainerConfig, out *ContainerConfig, s conversion.Scope) error {\n\tif err := Convert_unversioned_RepoTag_To_v1alpha1_RepoTag(&in.Image, &out.Image, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_unversioned_ResourceRequirements_To_v1alpha1_ResourceRequirements(&in.Request, &out.Request, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_unversioned_ResourceRequirements_To_v1alpha1_ResourceRequirements(&in.Limit, &out.Limit, s); 
err != nil {\n\t\treturn err\n\t}\n\tout.Config = (*string)(unsafe.Pointer(in.Config))\n\tout.Volumes = *(*[]v1.Volume)(unsafe.Pointer(&in.Volumes))\n\treturn nil\n}\n\n// Convert_unversioned_ContainerConfig_To_v1alpha1_ContainerConfig is an autogenerated conversion function.\nfunc Convert_unversioned_ContainerConfig_To_v1alpha1_ContainerConfig(in *unversioned.ContainerConfig, out *ContainerConfig, s conversion.Scope) error {\n\treturn autoConvert_unversioned_ContainerConfig_To_v1alpha1_ContainerConfig(in, out, s)\n}\n\nfunc autoConvert_v1alpha1_EraserConfig_To_unversioned_EraserConfig(in *EraserConfig, out *unversioned.EraserConfig, s conversion.Scope) error {\n\tif err := Convert_v1alpha1_ManagerConfig_To_unversioned_ManagerConfig(&in.Manager, &out.Manager, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_v1alpha1_Components_To_unversioned_Components(&in.Components, &out.Components, s); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// Convert_v1alpha1_EraserConfig_To_unversioned_EraserConfig is an autogenerated conversion function.\nfunc Convert_v1alpha1_EraserConfig_To_unversioned_EraserConfig(in *EraserConfig, out *unversioned.EraserConfig, s conversion.Scope) error {\n\treturn autoConvert_v1alpha1_EraserConfig_To_unversioned_EraserConfig(in, out, s)\n}\n\nfunc autoConvert_unversioned_EraserConfig_To_v1alpha1_EraserConfig(in *unversioned.EraserConfig, out *EraserConfig, s conversion.Scope) error {\n\tif err := Convert_unversioned_ManagerConfig_To_v1alpha1_ManagerConfig(&in.Manager, &out.Manager, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_unversioned_Components_To_v1alpha1_Components(&in.Components, &out.Components, s); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// Convert_unversioned_EraserConfig_To_v1alpha1_EraserConfig is an autogenerated conversion function.\nfunc Convert_unversioned_EraserConfig_To_v1alpha1_EraserConfig(in *unversioned.EraserConfig, out *EraserConfig, s conversion.Scope) error {\n\treturn autoConvert_unversioned_EraserConfig_To_v1alpha1_EraserConfig(in, out, s)\n}\n\nfunc autoConvert_v1alpha1_Image_To_unversioned_Image(in *Image, out *unversioned.Image, s conversion.Scope) error {\n\tout.ImageID = in.ImageID\n\tout.Names = *(*[]string)(unsafe.Pointer(&in.Names))\n\tout.Digests = *(*[]string)(unsafe.Pointer(&in.Digests))\n\treturn nil\n}\n\n// Convert_v1alpha1_Image_To_unversioned_Image is an autogenerated conversion function.\nfunc Convert_v1alpha1_Image_To_unversioned_Image(in *Image, out *unversioned.Image, s conversion.Scope) error {\n\treturn autoConvert_v1alpha1_Image_To_unversioned_Image(in, out, s)\n}\n\nfunc autoConvert_unversioned_Image_To_v1alpha1_Image(in *unversioned.Image, out *Image, s conversion.Scope) error {\n\tout.ImageID = in.ImageID\n\tout.Names = *(*[]string)(unsafe.Pointer(&in.Names))\n\tout.Digests = *(*[]string)(unsafe.Pointer(&in.Digests))\n\treturn nil\n}\n\n// Convert_unversioned_Image_To_v1alpha1_Image is an autogenerated conversion function.\nfunc Convert_unversioned_Image_To_v1alpha1_Image(in *unversioned.Image, out *Image, s conversion.Scope) error {\n\treturn autoConvert_unversioned_Image_To_v1alpha1_Image(in, out, s)\n}\n\nfunc autoConvert_v1alpha1_ImageJob_To_unversioned_ImageJob(in *ImageJob, out *unversioned.ImageJob, s conversion.Scope) error {\n\tout.ObjectMeta = in.ObjectMeta\n\tif err := Convert_v1alpha1_ImageJobStatus_To_unversioned_ImageJobStatus(&in.Status, &out.Status, s); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// Convert_v1alpha1_ImageJob_To_unversioned_ImageJob 
is an autogenerated conversion function.\nfunc Convert_v1alpha1_ImageJob_To_unversioned_ImageJob(in *ImageJob, out *unversioned.ImageJob, s conversion.Scope) error {\n\treturn autoConvert_v1alpha1_ImageJob_To_unversioned_ImageJob(in, out, s)\n}\n\nfunc autoConvert_unversioned_ImageJob_To_v1alpha1_ImageJob(in *unversioned.ImageJob, out *ImageJob, s conversion.Scope) error {\n\tout.ObjectMeta = in.ObjectMeta\n\tif err := Convert_unversioned_ImageJobStatus_To_v1alpha1_ImageJobStatus(&in.Status, &out.Status, s); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// Convert_unversioned_ImageJob_To_v1alpha1_ImageJob is an autogenerated conversion function.\nfunc Convert_unversioned_ImageJob_To_v1alpha1_ImageJob(in *unversioned.ImageJob, out *ImageJob, s conversion.Scope) error {\n\treturn autoConvert_unversioned_ImageJob_To_v1alpha1_ImageJob(in, out, s)\n}\n\nfunc autoConvert_v1alpha1_ImageJobCleanupConfig_To_unversioned_ImageJobCleanupConfig(in *ImageJobCleanupConfig, out *unversioned.ImageJobCleanupConfig, s conversion.Scope) error {\n\tout.DelayOnSuccess = unversioned.Duration(in.DelayOnSuccess)\n\tout.DelayOnFailure = unversioned.Duration(in.DelayOnFailure)\n\treturn nil\n}\n\n// Convert_v1alpha1_ImageJobCleanupConfig_To_unversioned_ImageJobCleanupConfig is an autogenerated conversion function.\nfunc Convert_v1alpha1_ImageJobCleanupConfig_To_unversioned_ImageJobCleanupConfig(in *ImageJobCleanupConfig, out *unversioned.ImageJobCleanupConfig, s conversion.Scope) error {\n\treturn autoConvert_v1alpha1_ImageJobCleanupConfig_To_unversioned_ImageJobCleanupConfig(in, out, s)\n}\n\nfunc autoConvert_unversioned_ImageJobCleanupConfig_To_v1alpha1_ImageJobCleanupConfig(in *unversioned.ImageJobCleanupConfig, out *ImageJobCleanupConfig, s conversion.Scope) error {\n\tout.DelayOnSuccess = Duration(in.DelayOnSuccess)\n\tout.DelayOnFailure = Duration(in.DelayOnFailure)\n\treturn nil\n}\n\n// Convert_unversioned_ImageJobCleanupConfig_To_v1alpha1_ImageJobCleanupConfig is an autogenerated conversion function.\nfunc Convert_unversioned_ImageJobCleanupConfig_To_v1alpha1_ImageJobCleanupConfig(in *unversioned.ImageJobCleanupConfig, out *ImageJobCleanupConfig, s conversion.Scope) error {\n\treturn autoConvert_unversioned_ImageJobCleanupConfig_To_v1alpha1_ImageJobCleanupConfig(in, out, s)\n}\n\nfunc autoConvert_v1alpha1_ImageJobConfig_To_unversioned_ImageJobConfig(in *ImageJobConfig, out *unversioned.ImageJobConfig, s conversion.Scope) error {\n\tout.SuccessRatio = in.SuccessRatio\n\tif err := Convert_v1alpha1_ImageJobCleanupConfig_To_unversioned_ImageJobCleanupConfig(&in.Cleanup, &out.Cleanup, s); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// Convert_v1alpha1_ImageJobConfig_To_unversioned_ImageJobConfig is an autogenerated conversion function.\nfunc Convert_v1alpha1_ImageJobConfig_To_unversioned_ImageJobConfig(in *ImageJobConfig, out *unversioned.ImageJobConfig, s conversion.Scope) error {\n\treturn autoConvert_v1alpha1_ImageJobConfig_To_unversioned_ImageJobConfig(in, out, s)\n}\n\nfunc autoConvert_unversioned_ImageJobConfig_To_v1alpha1_ImageJobConfig(in *unversioned.ImageJobConfig, out *ImageJobConfig, s conversion.Scope) error {\n\tout.SuccessRatio = in.SuccessRatio\n\tif err := Convert_unversioned_ImageJobCleanupConfig_To_v1alpha1_ImageJobCleanupConfig(&in.Cleanup, &out.Cleanup, s); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// Convert_unversioned_ImageJobConfig_To_v1alpha1_ImageJobConfig is an autogenerated conversion function.\nfunc 
Convert_unversioned_ImageJobConfig_To_v1alpha1_ImageJobConfig(in *unversioned.ImageJobConfig, out *ImageJobConfig, s conversion.Scope) error {\n\treturn autoConvert_unversioned_ImageJobConfig_To_v1alpha1_ImageJobConfig(in, out, s)\n}\n\nfunc autoConvert_v1alpha1_ImageJobList_To_unversioned_ImageJobList(in *ImageJobList, out *unversioned.ImageJobList, s conversion.Scope) error {\n\tout.ListMeta = in.ListMeta\n\tout.Items = *(*[]unversioned.ImageJob)(unsafe.Pointer(&in.Items))\n\treturn nil\n}\n\n// Convert_v1alpha1_ImageJobList_To_unversioned_ImageJobList is an autogenerated conversion function.\nfunc Convert_v1alpha1_ImageJobList_To_unversioned_ImageJobList(in *ImageJobList, out *unversioned.ImageJobList, s conversion.Scope) error {\n\treturn autoConvert_v1alpha1_ImageJobList_To_unversioned_ImageJobList(in, out, s)\n}\n\nfunc autoConvert_unversioned_ImageJobList_To_v1alpha1_ImageJobList(in *unversioned.ImageJobList, out *ImageJobList, s conversion.Scope) error {\n\tout.ListMeta = in.ListMeta\n\tout.Items = *(*[]ImageJob)(unsafe.Pointer(&in.Items))\n\treturn nil\n}\n\n// Convert_unversioned_ImageJobList_To_v1alpha1_ImageJobList is an autogenerated conversion function.\nfunc Convert_unversioned_ImageJobList_To_v1alpha1_ImageJobList(in *unversioned.ImageJobList, out *ImageJobList, s conversion.Scope) error {\n\treturn autoConvert_unversioned_ImageJobList_To_v1alpha1_ImageJobList(in, out, s)\n}\n\nfunc autoConvert_v1alpha1_ImageJobStatus_To_unversioned_ImageJobStatus(in *ImageJobStatus, out *unversioned.ImageJobStatus, s conversion.Scope) error {\n\tout.Failed = in.Failed\n\tout.Succeeded = in.Succeeded\n\tout.Desired = in.Desired\n\tout.Skipped = in.Skipped\n\tout.Phase = unversioned.JobPhase(in.Phase)\n\tout.DeleteAfter = (*metav1.Time)(unsafe.Pointer(in.DeleteAfter))\n\treturn nil\n}\n\n// Convert_v1alpha1_ImageJobStatus_To_unversioned_ImageJobStatus is an autogenerated conversion function.\nfunc Convert_v1alpha1_ImageJobStatus_To_unversioned_ImageJobStatus(in *ImageJobStatus, out *unversioned.ImageJobStatus, s conversion.Scope) error {\n\treturn autoConvert_v1alpha1_ImageJobStatus_To_unversioned_ImageJobStatus(in, out, s)\n}\n\nfunc autoConvert_unversioned_ImageJobStatus_To_v1alpha1_ImageJobStatus(in *unversioned.ImageJobStatus, out *ImageJobStatus, s conversion.Scope) error {\n\tout.Failed = in.Failed\n\tout.Succeeded = in.Succeeded\n\tout.Desired = in.Desired\n\tout.Skipped = in.Skipped\n\tout.Phase = JobPhase(in.Phase)\n\tout.DeleteAfter = (*metav1.Time)(unsafe.Pointer(in.DeleteAfter))\n\treturn nil\n}\n\n// Convert_unversioned_ImageJobStatus_To_v1alpha1_ImageJobStatus is an autogenerated conversion function.\nfunc Convert_unversioned_ImageJobStatus_To_v1alpha1_ImageJobStatus(in *unversioned.ImageJobStatus, out *ImageJobStatus, s conversion.Scope) error {\n\treturn autoConvert_unversioned_ImageJobStatus_To_v1alpha1_ImageJobStatus(in, out, s)\n}\n\nfunc autoConvert_v1alpha1_ImageList_To_unversioned_ImageList(in *ImageList, out *unversioned.ImageList, s conversion.Scope) error {\n\tout.ObjectMeta = in.ObjectMeta\n\tif err := Convert_v1alpha1_ImageListSpec_To_unversioned_ImageListSpec(&in.Spec, &out.Spec, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_v1alpha1_ImageListStatus_To_unversioned_ImageListStatus(&in.Status, &out.Status, s); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// Convert_v1alpha1_ImageList_To_unversioned_ImageList is an autogenerated conversion function.\nfunc Convert_v1alpha1_ImageList_To_unversioned_ImageList(in *ImageList, out 
*unversioned.ImageList, s conversion.Scope) error {\n\treturn autoConvert_v1alpha1_ImageList_To_unversioned_ImageList(in, out, s)\n}\n\nfunc autoConvert_unversioned_ImageList_To_v1alpha1_ImageList(in *unversioned.ImageList, out *ImageList, s conversion.Scope) error {\n\tout.ObjectMeta = in.ObjectMeta\n\tif err := Convert_unversioned_ImageListSpec_To_v1alpha1_ImageListSpec(&in.Spec, &out.Spec, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_unversioned_ImageListStatus_To_v1alpha1_ImageListStatus(&in.Status, &out.Status, s); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// Convert_unversioned_ImageList_To_v1alpha1_ImageList is an autogenerated conversion function.\nfunc Convert_unversioned_ImageList_To_v1alpha1_ImageList(in *unversioned.ImageList, out *ImageList, s conversion.Scope) error {\n\treturn autoConvert_unversioned_ImageList_To_v1alpha1_ImageList(in, out, s)\n}\n\nfunc autoConvert_v1alpha1_ImageListList_To_unversioned_ImageListList(in *ImageListList, out *unversioned.ImageListList, s conversion.Scope) error {\n\tout.ListMeta = in.ListMeta\n\tout.Items = *(*[]unversioned.ImageList)(unsafe.Pointer(&in.Items))\n\treturn nil\n}\n\n// Convert_v1alpha1_ImageListList_To_unversioned_ImageListList is an autogenerated conversion function.\nfunc Convert_v1alpha1_ImageListList_To_unversioned_ImageListList(in *ImageListList, out *unversioned.ImageListList, s conversion.Scope) error {\n\treturn autoConvert_v1alpha1_ImageListList_To_unversioned_ImageListList(in, out, s)\n}\n\nfunc autoConvert_unversioned_ImageListList_To_v1alpha1_ImageListList(in *unversioned.ImageListList, out *ImageListList, s conversion.Scope) error {\n\tout.ListMeta = in.ListMeta\n\tout.Items = *(*[]ImageList)(unsafe.Pointer(&in.Items))\n\treturn nil\n}\n\n// Convert_unversioned_ImageListList_To_v1alpha1_ImageListList is an autogenerated conversion function.\nfunc Convert_unversioned_ImageListList_To_v1alpha1_ImageListList(in *unversioned.ImageListList, out *ImageListList, s conversion.Scope) error {\n\treturn autoConvert_unversioned_ImageListList_To_v1alpha1_ImageListList(in, out, s)\n}\n\nfunc autoConvert_v1alpha1_ImageListSpec_To_unversioned_ImageListSpec(in *ImageListSpec, out *unversioned.ImageListSpec, s conversion.Scope) error {\n\tout.Images = *(*[]string)(unsafe.Pointer(&in.Images))\n\treturn nil\n}\n\n// Convert_v1alpha1_ImageListSpec_To_unversioned_ImageListSpec is an autogenerated conversion function.\nfunc Convert_v1alpha1_ImageListSpec_To_unversioned_ImageListSpec(in *ImageListSpec, out *unversioned.ImageListSpec, s conversion.Scope) error {\n\treturn autoConvert_v1alpha1_ImageListSpec_To_unversioned_ImageListSpec(in, out, s)\n}\n\nfunc autoConvert_unversioned_ImageListSpec_To_v1alpha1_ImageListSpec(in *unversioned.ImageListSpec, out *ImageListSpec, s conversion.Scope) error {\n\tout.Images = *(*[]string)(unsafe.Pointer(&in.Images))\n\treturn nil\n}\n\n// Convert_unversioned_ImageListSpec_To_v1alpha1_ImageListSpec is an autogenerated conversion function.\nfunc Convert_unversioned_ImageListSpec_To_v1alpha1_ImageListSpec(in *unversioned.ImageListSpec, out *ImageListSpec, s conversion.Scope) error {\n\treturn autoConvert_unversioned_ImageListSpec_To_v1alpha1_ImageListSpec(in, out, s)\n}\n\nfunc autoConvert_v1alpha1_ImageListStatus_To_unversioned_ImageListStatus(in *ImageListStatus, out *unversioned.ImageListStatus, s conversion.Scope) error {\n\tout.Timestamp = (*metav1.Time)(unsafe.Pointer(in.Timestamp))\n\tout.Success = in.Success\n\tout.Failed = in.Failed\n\tout.Skipped = in.Skipped\n\treturn 
nil\n}\n\n// Convert_v1alpha1_ImageListStatus_To_unversioned_ImageListStatus is an autogenerated conversion function.\nfunc Convert_v1alpha1_ImageListStatus_To_unversioned_ImageListStatus(in *ImageListStatus, out *unversioned.ImageListStatus, s conversion.Scope) error {\n\treturn autoConvert_v1alpha1_ImageListStatus_To_unversioned_ImageListStatus(in, out, s)\n}\n\nfunc autoConvert_unversioned_ImageListStatus_To_v1alpha1_ImageListStatus(in *unversioned.ImageListStatus, out *ImageListStatus, s conversion.Scope) error {\n\tout.Timestamp = (*metav1.Time)(unsafe.Pointer(in.Timestamp))\n\tout.Success = in.Success\n\tout.Failed = in.Failed\n\tout.Skipped = in.Skipped\n\treturn nil\n}\n\n// Convert_unversioned_ImageListStatus_To_v1alpha1_ImageListStatus is an autogenerated conversion function.\nfunc Convert_unversioned_ImageListStatus_To_v1alpha1_ImageListStatus(in *unversioned.ImageListStatus, out *ImageListStatus, s conversion.Scope) error {\n\treturn autoConvert_unversioned_ImageListStatus_To_v1alpha1_ImageListStatus(in, out, s)\n}\n\nfunc autoConvert_v1alpha1_ManagerConfig_To_unversioned_ManagerConfig(in *ManagerConfig, out *unversioned.ManagerConfig, s conversion.Scope) error {\n\tif err := Convert_v1alpha1_Runtime_To_unversioned_RuntimeSpec(&in.Runtime, &out.Runtime, s); err != nil {\n\t\treturn err\n\t}\n\tout.OTLPEndpoint = in.OTLPEndpoint\n\tout.LogLevel = in.LogLevel\n\tif err := Convert_v1alpha1_ScheduleConfig_To_unversioned_ScheduleConfig(&in.Scheduling, &out.Scheduling, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_v1alpha1_ProfileConfig_To_unversioned_ProfileConfig(&in.Profile, &out.Profile, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_v1alpha1_ImageJobConfig_To_unversioned_ImageJobConfig(&in.ImageJob, &out.ImageJob, s); err != nil {\n\t\treturn err\n\t}\n\tout.PullSecrets = *(*[]string)(unsafe.Pointer(&in.PullSecrets))\n\tif err := Convert_v1alpha1_NodeFilterConfig_To_unversioned_NodeFilterConfig(&in.NodeFilter, &out.NodeFilter, s); err != nil {\n\t\treturn err\n\t}\n\tout.PriorityClassName = in.PriorityClassName\n\treturn nil\n}\n\nfunc autoConvert_unversioned_ManagerConfig_To_v1alpha1_ManagerConfig(in *unversioned.ManagerConfig, out *ManagerConfig, s conversion.Scope) error {\n\tif err := Convert_unversioned_RuntimeSpec_To_v1alpha1_Runtime(&in.Runtime, &out.Runtime, s); err != nil {\n\t\treturn err\n\t}\n\tout.OTLPEndpoint = in.OTLPEndpoint\n\tout.LogLevel = in.LogLevel\n\tif err := Convert_unversioned_ScheduleConfig_To_v1alpha1_ScheduleConfig(&in.Scheduling, &out.Scheduling, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_unversioned_ProfileConfig_To_v1alpha1_ProfileConfig(&in.Profile, &out.Profile, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_unversioned_ImageJobConfig_To_v1alpha1_ImageJobConfig(&in.ImageJob, &out.ImageJob, s); err != nil {\n\t\treturn err\n\t}\n\tout.PullSecrets = *(*[]string)(unsafe.Pointer(&in.PullSecrets))\n\tif err := Convert_unversioned_NodeFilterConfig_To_v1alpha1_NodeFilterConfig(&in.NodeFilter, &out.NodeFilter, s); err != nil {\n\t\treturn err\n\t}\n\tout.PriorityClassName = in.PriorityClassName\n\t// WARNING: in.AdditionalPodLabels requires manual conversion: does not exist in peer-type\n\treturn nil\n}\n\nfunc autoConvert_v1alpha1_NodeFilterConfig_To_unversioned_NodeFilterConfig(in *NodeFilterConfig, out *unversioned.NodeFilterConfig, s conversion.Scope) error {\n\tout.Type = in.Type\n\tout.Selectors = *(*[]string)(unsafe.Pointer(&in.Selectors))\n\treturn nil\n}\n\n// 
Convert_v1alpha1_NodeFilterConfig_To_unversioned_NodeFilterConfig is an autogenerated conversion function.\nfunc Convert_v1alpha1_NodeFilterConfig_To_unversioned_NodeFilterConfig(in *NodeFilterConfig, out *unversioned.NodeFilterConfig, s conversion.Scope) error {\n\treturn autoConvert_v1alpha1_NodeFilterConfig_To_unversioned_NodeFilterConfig(in, out, s)\n}\n\nfunc autoConvert_unversioned_NodeFilterConfig_To_v1alpha1_NodeFilterConfig(in *unversioned.NodeFilterConfig, out *NodeFilterConfig, s conversion.Scope) error {\n\tout.Type = in.Type\n\tout.Selectors = *(*[]string)(unsafe.Pointer(&in.Selectors))\n\treturn nil\n}\n\n// Convert_unversioned_NodeFilterConfig_To_v1alpha1_NodeFilterConfig is an autogenerated conversion function.\nfunc Convert_unversioned_NodeFilterConfig_To_v1alpha1_NodeFilterConfig(in *unversioned.NodeFilterConfig, out *NodeFilterConfig, s conversion.Scope) error {\n\treturn autoConvert_unversioned_NodeFilterConfig_To_v1alpha1_NodeFilterConfig(in, out, s)\n}\n\nfunc autoConvert_v1alpha1_OptionalContainerConfig_To_unversioned_OptionalContainerConfig(in *OptionalContainerConfig, out *unversioned.OptionalContainerConfig, s conversion.Scope) error {\n\tout.Enabled = in.Enabled\n\tif err := Convert_v1alpha1_ContainerConfig_To_unversioned_ContainerConfig(&in.ContainerConfig, &out.ContainerConfig, s); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// Convert_v1alpha1_OptionalContainerConfig_To_unversioned_OptionalContainerConfig is an autogenerated conversion function.\nfunc Convert_v1alpha1_OptionalContainerConfig_To_unversioned_OptionalContainerConfig(in *OptionalContainerConfig, out *unversioned.OptionalContainerConfig, s conversion.Scope) error {\n\treturn autoConvert_v1alpha1_OptionalContainerConfig_To_unversioned_OptionalContainerConfig(in, out, s)\n}\n\nfunc autoConvert_unversioned_OptionalContainerConfig_To_v1alpha1_OptionalContainerConfig(in *unversioned.OptionalContainerConfig, out *OptionalContainerConfig, s conversion.Scope) error {\n\tout.Enabled = in.Enabled\n\tif err := Convert_unversioned_ContainerConfig_To_v1alpha1_ContainerConfig(&in.ContainerConfig, &out.ContainerConfig, s); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// Convert_unversioned_OptionalContainerConfig_To_v1alpha1_OptionalContainerConfig is an autogenerated conversion function.\nfunc Convert_unversioned_OptionalContainerConfig_To_v1alpha1_OptionalContainerConfig(in *unversioned.OptionalContainerConfig, out *OptionalContainerConfig, s conversion.Scope) error {\n\treturn autoConvert_unversioned_OptionalContainerConfig_To_v1alpha1_OptionalContainerConfig(in, out, s)\n}\n\nfunc autoConvert_v1alpha1_ProfileConfig_To_unversioned_ProfileConfig(in *ProfileConfig, out *unversioned.ProfileConfig, s conversion.Scope) error {\n\tout.Enabled = in.Enabled\n\tout.Port = in.Port\n\treturn nil\n}\n\n// Convert_v1alpha1_ProfileConfig_To_unversioned_ProfileConfig is an autogenerated conversion function.\nfunc Convert_v1alpha1_ProfileConfig_To_unversioned_ProfileConfig(in *ProfileConfig, out *unversioned.ProfileConfig, s conversion.Scope) error {\n\treturn autoConvert_v1alpha1_ProfileConfig_To_unversioned_ProfileConfig(in, out, s)\n}\n\nfunc autoConvert_unversioned_ProfileConfig_To_v1alpha1_ProfileConfig(in *unversioned.ProfileConfig, out *ProfileConfig, s conversion.Scope) error {\n\tout.Enabled = in.Enabled\n\tout.Port = in.Port\n\treturn nil\n}\n\n// Convert_unversioned_ProfileConfig_To_v1alpha1_ProfileConfig is an autogenerated conversion function.\nfunc 
Convert_unversioned_ProfileConfig_To_v1alpha1_ProfileConfig(in *unversioned.ProfileConfig, out *ProfileConfig, s conversion.Scope) error {\n\treturn autoConvert_unversioned_ProfileConfig_To_v1alpha1_ProfileConfig(in, out, s)\n}\n\nfunc autoConvert_v1alpha1_RepoTag_To_unversioned_RepoTag(in *RepoTag, out *unversioned.RepoTag, s conversion.Scope) error {\n\tout.Repo = in.Repo\n\tout.Tag = in.Tag\n\treturn nil\n}\n\n// Convert_v1alpha1_RepoTag_To_unversioned_RepoTag is an autogenerated conversion function.\nfunc Convert_v1alpha1_RepoTag_To_unversioned_RepoTag(in *RepoTag, out *unversioned.RepoTag, s conversion.Scope) error {\n\treturn autoConvert_v1alpha1_RepoTag_To_unversioned_RepoTag(in, out, s)\n}\n\nfunc autoConvert_unversioned_RepoTag_To_v1alpha1_RepoTag(in *unversioned.RepoTag, out *RepoTag, s conversion.Scope) error {\n\tout.Repo = in.Repo\n\tout.Tag = in.Tag\n\treturn nil\n}\n\n// Convert_unversioned_RepoTag_To_v1alpha1_RepoTag is an autogenerated conversion function.\nfunc Convert_unversioned_RepoTag_To_v1alpha1_RepoTag(in *unversioned.RepoTag, out *RepoTag, s conversion.Scope) error {\n\treturn autoConvert_unversioned_RepoTag_To_v1alpha1_RepoTag(in, out, s)\n}\n\nfunc autoConvert_v1alpha1_ResourceRequirements_To_unversioned_ResourceRequirements(in *ResourceRequirements, out *unversioned.ResourceRequirements, s conversion.Scope) error {\n\tout.Mem = in.Mem\n\tout.CPU = in.CPU\n\treturn nil\n}\n\n// Convert_v1alpha1_ResourceRequirements_To_unversioned_ResourceRequirements is an autogenerated conversion function.\nfunc Convert_v1alpha1_ResourceRequirements_To_unversioned_ResourceRequirements(in *ResourceRequirements, out *unversioned.ResourceRequirements, s conversion.Scope) error {\n\treturn autoConvert_v1alpha1_ResourceRequirements_To_unversioned_ResourceRequirements(in, out, s)\n}\n\nfunc autoConvert_unversioned_ResourceRequirements_To_v1alpha1_ResourceRequirements(in *unversioned.ResourceRequirements, out *ResourceRequirements, s conversion.Scope) error {\n\tout.Mem = in.Mem\n\tout.CPU = in.CPU\n\treturn nil\n}\n\n// Convert_unversioned_ResourceRequirements_To_v1alpha1_ResourceRequirements is an autogenerated conversion function.\nfunc Convert_unversioned_ResourceRequirements_To_v1alpha1_ResourceRequirements(in *unversioned.ResourceRequirements, out *ResourceRequirements, s conversion.Scope) error {\n\treturn autoConvert_unversioned_ResourceRequirements_To_v1alpha1_ResourceRequirements(in, out, s)\n}\n\nfunc autoConvert_v1alpha1_ScheduleConfig_To_unversioned_ScheduleConfig(in *ScheduleConfig, out *unversioned.ScheduleConfig, s conversion.Scope) error {\n\tout.RepeatInterval = unversioned.Duration(in.RepeatInterval)\n\tout.BeginImmediately = in.BeginImmediately\n\treturn nil\n}\n\n// Convert_v1alpha1_ScheduleConfig_To_unversioned_ScheduleConfig is an autogenerated conversion function.\nfunc Convert_v1alpha1_ScheduleConfig_To_unversioned_ScheduleConfig(in *ScheduleConfig, out *unversioned.ScheduleConfig, s conversion.Scope) error {\n\treturn autoConvert_v1alpha1_ScheduleConfig_To_unversioned_ScheduleConfig(in, out, s)\n}\n\nfunc autoConvert_unversioned_ScheduleConfig_To_v1alpha1_ScheduleConfig(in *unversioned.ScheduleConfig, out *ScheduleConfig, s conversion.Scope) error {\n\tout.RepeatInterval = Duration(in.RepeatInterval)\n\tout.BeginImmediately = in.BeginImmediately\n\treturn nil\n}\n\n// Convert_unversioned_ScheduleConfig_To_v1alpha1_ScheduleConfig is an autogenerated conversion function.\nfunc Convert_unversioned_ScheduleConfig_To_v1alpha1_ScheduleConfig(in 
*unversioned.ScheduleConfig, out *ScheduleConfig, s conversion.Scope) error {\n\treturn autoConvert_unversioned_ScheduleConfig_To_v1alpha1_ScheduleConfig(in, out, s)\n}\n"
  },
  {
    "path": "api/v1alpha1/zz_generated.deepcopy.go",
    "content": "//go:build !ignore_autogenerated\n\n/*\nCopyright 2021.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\n// Code generated by controller-gen. DO NOT EDIT.\n\npackage v1alpha1\n\nimport (\n\t\"k8s.io/api/core/v1\"\n\t\"k8s.io/apimachinery/pkg/runtime\"\n)\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *Components) DeepCopyInto(out *Components) {\n\t*out = *in\n\tin.Collector.DeepCopyInto(&out.Collector)\n\tin.Scanner.DeepCopyInto(&out.Scanner)\n\tin.Eraser.DeepCopyInto(&out.Eraser)\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Components.\nfunc (in *Components) DeepCopy() *Components {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(Components)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ContainerConfig) DeepCopyInto(out *ContainerConfig) {\n\t*out = *in\n\tout.Image = in.Image\n\tin.Request.DeepCopyInto(&out.Request)\n\tin.Limit.DeepCopyInto(&out.Limit)\n\tif in.Config != nil {\n\t\tin, out := &in.Config, &out.Config\n\t\t*out = new(string)\n\t\t**out = **in\n\t}\n\tif in.Volumes != nil {\n\t\tin, out := &in.Volumes, &out.Volumes\n\t\t*out = make([]v1.Volume, len(*in))\n\t\tfor i := range *in {\n\t\t\t(*in)[i].DeepCopyInto(&(*out)[i])\n\t\t}\n\t}\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ContainerConfig.\nfunc (in *ContainerConfig) DeepCopy() *ContainerConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ContainerConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *EraserConfig) DeepCopyInto(out *EraserConfig) {\n\t*out = *in\n\tout.TypeMeta = in.TypeMeta\n\tin.Manager.DeepCopyInto(&out.Manager)\n\tin.Components.DeepCopyInto(&out.Components)\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new EraserConfig.\nfunc (in *EraserConfig) DeepCopy() *EraserConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(EraserConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.\nfunc (in *EraserConfig) DeepCopyObject() runtime.Object {\n\tif c := in.DeepCopy(); c != nil {\n\t\treturn c\n\t}\n\treturn nil\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. 
in must be non-nil.\nfunc (in *Image) DeepCopyInto(out *Image) {\n\t*out = *in\n\tif in.Names != nil {\n\t\tin, out := &in.Names, &out.Names\n\t\t*out = make([]string, len(*in))\n\t\tcopy(*out, *in)\n\t}\n\tif in.Digests != nil {\n\t\tin, out := &in.Digests, &out.Digests\n\t\t*out = make([]string, len(*in))\n\t\tcopy(*out, *in)\n\t}\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Image.\nfunc (in *Image) DeepCopy() *Image {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(Image)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ImageJob) DeepCopyInto(out *ImageJob) {\n\t*out = *in\n\tout.TypeMeta = in.TypeMeta\n\tin.ObjectMeta.DeepCopyInto(&out.ObjectMeta)\n\tin.Status.DeepCopyInto(&out.Status)\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ImageJob.\nfunc (in *ImageJob) DeepCopy() *ImageJob {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ImageJob)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.\nfunc (in *ImageJob) DeepCopyObject() runtime.Object {\n\tif c := in.DeepCopy(); c != nil {\n\t\treturn c\n\t}\n\treturn nil\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ImageJobCleanupConfig) DeepCopyInto(out *ImageJobCleanupConfig) {\n\t*out = *in\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ImageJobCleanupConfig.\nfunc (in *ImageJobCleanupConfig) DeepCopy() *ImageJobCleanupConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ImageJobCleanupConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ImageJobConfig) DeepCopyInto(out *ImageJobConfig) {\n\t*out = *in\n\tout.Cleanup = in.Cleanup\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ImageJobConfig.\nfunc (in *ImageJobConfig) DeepCopy() *ImageJobConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ImageJobConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ImageJobList) DeepCopyInto(out *ImageJobList) {\n\t*out = *in\n\tout.TypeMeta = in.TypeMeta\n\tin.ListMeta.DeepCopyInto(&out.ListMeta)\n\tif in.Items != nil {\n\t\tin, out := &in.Items, &out.Items\n\t\t*out = make([]ImageJob, len(*in))\n\t\tfor i := range *in {\n\t\t\t(*in)[i].DeepCopyInto(&(*out)[i])\n\t\t}\n\t}\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ImageJobList.\nfunc (in *ImageJobList) DeepCopy() *ImageJobList {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ImageJobList)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.\nfunc (in *ImageJobList) DeepCopyObject() runtime.Object {\n\tif c := in.DeepCopy(); c != nil {\n\t\treturn c\n\t}\n\treturn nil\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. 
in must be non-nil.\nfunc (in *ImageJobStatus) DeepCopyInto(out *ImageJobStatus) {\n\t*out = *in\n\tif in.DeleteAfter != nil {\n\t\tin, out := &in.DeleteAfter, &out.DeleteAfter\n\t\t*out = (*in).DeepCopy()\n\t}\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ImageJobStatus.\nfunc (in *ImageJobStatus) DeepCopy() *ImageJobStatus {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ImageJobStatus)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ImageList) DeepCopyInto(out *ImageList) {\n\t*out = *in\n\tout.TypeMeta = in.TypeMeta\n\tin.ObjectMeta.DeepCopyInto(&out.ObjectMeta)\n\tin.Spec.DeepCopyInto(&out.Spec)\n\tin.Status.DeepCopyInto(&out.Status)\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ImageList.\nfunc (in *ImageList) DeepCopy() *ImageList {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ImageList)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.\nfunc (in *ImageList) DeepCopyObject() runtime.Object {\n\tif c := in.DeepCopy(); c != nil {\n\t\treturn c\n\t}\n\treturn nil\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ImageListList) DeepCopyInto(out *ImageListList) {\n\t*out = *in\n\tout.TypeMeta = in.TypeMeta\n\tin.ListMeta.DeepCopyInto(&out.ListMeta)\n\tif in.Items != nil {\n\t\tin, out := &in.Items, &out.Items\n\t\t*out = make([]ImageList, len(*in))\n\t\tfor i := range *in {\n\t\t\t(*in)[i].DeepCopyInto(&(*out)[i])\n\t\t}\n\t}\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ImageListList.\nfunc (in *ImageListList) DeepCopy() *ImageListList {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ImageListList)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.\nfunc (in *ImageListList) DeepCopyObject() runtime.Object {\n\tif c := in.DeepCopy(); c != nil {\n\t\treturn c\n\t}\n\treturn nil\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ImageListSpec) DeepCopyInto(out *ImageListSpec) {\n\t*out = *in\n\tif in.Images != nil {\n\t\tin, out := &in.Images, &out.Images\n\t\t*out = make([]string, len(*in))\n\t\tcopy(*out, *in)\n\t}\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ImageListSpec.\nfunc (in *ImageListSpec) DeepCopy() *ImageListSpec {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ImageListSpec)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. 
in must be non-nil.\nfunc (in *ImageListStatus) DeepCopyInto(out *ImageListStatus) {\n\t*out = *in\n\tif in.Timestamp != nil {\n\t\tin, out := &in.Timestamp, &out.Timestamp\n\t\t*out = (*in).DeepCopy()\n\t}\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ImageListStatus.\nfunc (in *ImageListStatus) DeepCopy() *ImageListStatus {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ImageListStatus)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ManagerConfig) DeepCopyInto(out *ManagerConfig) {\n\t*out = *in\n\tout.Scheduling = in.Scheduling\n\tout.Profile = in.Profile\n\tout.ImageJob = in.ImageJob\n\tif in.PullSecrets != nil {\n\t\tin, out := &in.PullSecrets, &out.PullSecrets\n\t\t*out = make([]string, len(*in))\n\t\tcopy(*out, *in)\n\t}\n\tin.NodeFilter.DeepCopyInto(&out.NodeFilter)\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ManagerConfig.\nfunc (in *ManagerConfig) DeepCopy() *ManagerConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ManagerConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *NodeFilterConfig) DeepCopyInto(out *NodeFilterConfig) {\n\t*out = *in\n\tif in.Selectors != nil {\n\t\tin, out := &in.Selectors, &out.Selectors\n\t\t*out = make([]string, len(*in))\n\t\tcopy(*out, *in)\n\t}\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NodeFilterConfig.\nfunc (in *NodeFilterConfig) DeepCopy() *NodeFilterConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(NodeFilterConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *OptionalContainerConfig) DeepCopyInto(out *OptionalContainerConfig) {\n\t*out = *in\n\tin.ContainerConfig.DeepCopyInto(&out.ContainerConfig)\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new OptionalContainerConfig.\nfunc (in *OptionalContainerConfig) DeepCopy() *OptionalContainerConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(OptionalContainerConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ProfileConfig) DeepCopyInto(out *ProfileConfig) {\n\t*out = *in\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ProfileConfig.\nfunc (in *ProfileConfig) DeepCopy() *ProfileConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ProfileConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *RepoTag) DeepCopyInto(out *RepoTag) {\n\t*out = *in\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RepoTag.\nfunc (in *RepoTag) DeepCopy() *RepoTag {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(RepoTag)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. 
in must be non-nil.\nfunc (in *ResourceRequirements) DeepCopyInto(out *ResourceRequirements) {\n\t*out = *in\n\tout.Mem = in.Mem.DeepCopy()\n\tout.CPU = in.CPU.DeepCopy()\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ResourceRequirements.\nfunc (in *ResourceRequirements) DeepCopy() *ResourceRequirements {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ResourceRequirements)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ScheduleConfig) DeepCopyInto(out *ScheduleConfig) {\n\t*out = *in\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ScheduleConfig.\nfunc (in *ScheduleConfig) DeepCopy() *ScheduleConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ScheduleConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n"
  },
  {
    "path": "api/v1alpha2/config/config.go",
    "content": "package config\n\nimport (\n\t\"fmt\"\n\t\"time\"\n\n\t\"github.com/eraser-dev/eraser/api/v1alpha2\"\n\t\"github.com/eraser-dev/eraser/version\"\n\t\"k8s.io/apimachinery/pkg/api/resource\"\n)\n\nvar defaultScannerConfig = `\ncacheDir: /var/lib/trivy\ndbRepo: ghcr.io/aquasecurity/trivy-db\ndeleteFailedImages: true\ndeleteEOLImages: true\nvulnerabilities:\n  ignoreUnfixed: false\n  types:\n    - os\n    - library\nsecurityChecks: # need to be documented; determined by trivy, not us\n  - vuln\nseverities:\n  - CRITICAL\n  - HIGH\n  - MEDIUM\n  - LOW\n`\n\nconst (\n\tnoDelay = v1alpha2.Duration(0)\n\toneDay  = v1alpha2.Duration(time.Hour * 24)\n)\n\nfunc Default() *v1alpha2.EraserConfig {\n\treturn &v1alpha2.EraserConfig{\n\t\tManager: v1alpha2.ManagerConfig{\n\t\t\tRuntime:      \"containerd\",\n\t\t\tOTLPEndpoint: \"\",\n\t\t\tLogLevel:     \"info\",\n\t\t\tScheduling: v1alpha2.ScheduleConfig{\n\t\t\t\tRepeatInterval:   v1alpha2.Duration(oneDay),\n\t\t\t\tBeginImmediately: true,\n\t\t\t},\n\t\t\tProfile: v1alpha2.ProfileConfig{\n\t\t\t\tEnabled: false,\n\t\t\t\tPort:    6060,\n\t\t\t},\n\t\t\tImageJob: v1alpha2.ImageJobConfig{\n\t\t\t\tSuccessRatio: 1.0,\n\t\t\t\tCleanup: v1alpha2.ImageJobCleanupConfig{\n\t\t\t\t\tDelayOnSuccess: noDelay,\n\t\t\t\t\tDelayOnFailure: oneDay,\n\t\t\t\t},\n\t\t\t},\n\t\t\tPullSecrets: []string{},\n\t\t\tNodeFilter: v1alpha2.NodeFilterConfig{\n\t\t\t\tType: \"exclude\",\n\t\t\t\tSelectors: []string{\n\t\t\t\t\t\"eraser.sh/cleanup.filter\",\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t\tComponents: v1alpha2.Components{\n\t\t\tCollector: v1alpha2.OptionalContainerConfig{\n\t\t\t\tEnabled: false,\n\t\t\t\tContainerConfig: v1alpha2.ContainerConfig{\n\t\t\t\t\tImage: v1alpha2.RepoTag{\n\t\t\t\t\t\tRepo: repo(\"collector\"),\n\t\t\t\t\t\tTag:  version.BuildVersion,\n\t\t\t\t\t},\n\t\t\t\t\tRequest: v1alpha2.ResourceRequirements{\n\t\t\t\t\t\tMem: resource.MustParse(\"25Mi\"),\n\t\t\t\t\t\tCPU: resource.MustParse(\"7m\"),\n\t\t\t\t\t},\n\t\t\t\t\tLimit: v1alpha2.ResourceRequirements{\n\t\t\t\t\t\tMem: resource.MustParse(\"500Mi\"),\n\t\t\t\t\t\tCPU: resource.Quantity{},\n\t\t\t\t\t},\n\t\t\t\t\tConfig: nil,\n\t\t\t\t},\n\t\t\t},\n\t\t\tScanner: v1alpha2.OptionalContainerConfig{\n\t\t\t\tEnabled: false,\n\t\t\t\tContainerConfig: v1alpha2.ContainerConfig{\n\t\t\t\t\tImage: v1alpha2.RepoTag{\n\t\t\t\t\t\tRepo: repo(\"eraser-trivy-scanner\"),\n\t\t\t\t\t\tTag:  version.BuildVersion,\n\t\t\t\t\t},\n\t\t\t\t\tRequest: v1alpha2.ResourceRequirements{\n\t\t\t\t\t\tMem: resource.MustParse(\"500Mi\"),\n\t\t\t\t\t\tCPU: resource.MustParse(\"1000m\"),\n\t\t\t\t\t},\n\t\t\t\t\tLimit: v1alpha2.ResourceRequirements{\n\t\t\t\t\t\tMem: resource.MustParse(\"2Gi\"),\n\t\t\t\t\t\tCPU: resource.MustParse(\"1500m\"),\n\t\t\t\t\t},\n\t\t\t\t\tConfig: &defaultScannerConfig,\n\t\t\t\t},\n\t\t\t},\n\t\t\tRemover: v1alpha2.ContainerConfig{\n\t\t\t\tImage: v1alpha2.RepoTag{\n\t\t\t\t\tRepo: repo(\"remover\"),\n\t\t\t\t\tTag:  version.BuildVersion,\n\t\t\t\t},\n\t\t\t\tRequest: v1alpha2.ResourceRequirements{\n\t\t\t\t\tMem: resource.MustParse(\"25Mi\"),\n\t\t\t\t\tCPU: resource.MustParse(\"7m\"),\n\t\t\t\t},\n\t\t\t\tLimit: v1alpha2.ResourceRequirements{\n\t\t\t\t\tMem: resource.MustParse(\"30Mi\"),\n\t\t\t\t\tCPU: resource.Quantity{},\n\t\t\t\t},\n\t\t\t\tConfig: nil,\n\t\t\t},\n\t\t},\n\t}\n}\n\nfunc repo(basename string) string {\n\tif version.DefaultRepo == \"\" {\n\t\treturn basename\n\t}\n\n\treturn fmt.Sprintf(\"%s/%s\", version.DefaultRepo, basename)\n}\n"
  },
  {
    "path": "api/v1alpha2/custom_conversions.go",
    "content": "package v1alpha2\n\nimport (\n\tunversioned \"github.com/eraser-dev/eraser/api/unversioned\"\n\tconversion \"k8s.io/apimachinery/pkg/conversion\"\n)\n\n//nolint:revive\nfunc Convert_v1alpha2_ManagerConfig_To_unversioned_ManagerConfig(in *ManagerConfig, out *unversioned.ManagerConfig, s conversion.Scope) error {\n\treturn autoConvert_v1alpha2_ManagerConfig_To_unversioned_ManagerConfig(in, out, s)\n}\n\n//nolint:revive\nfunc manualConvert_v1alpha2_Runtime_To_unversioned_RuntimeSpec(in *Runtime, out *unversioned.RuntimeSpec, _ conversion.Scope) error {\n\tout.Name = unversioned.Runtime(string(*in))\n\n\trs, err := unversioned.ConvertRuntimeToRuntimeSpec(out.Name)\n\tif err != nil {\n\t\treturn err\n\t}\n\tout.Address = rs.Address\n\n\treturn nil\n}\n\n//nolint:revive\nfunc Convert_v1alpha2_Runtime_To_unversioned_RuntimeSpec(in *Runtime, out *unversioned.RuntimeSpec, s conversion.Scope) error {\n\treturn manualConvert_v1alpha2_Runtime_To_unversioned_RuntimeSpec(in, out, s)\n}\n\n//nolint:revive\nfunc Convert_unversioned_ManagerConfig_To_v1alpha2_ManagerConfig(in *unversioned.ManagerConfig, out *ManagerConfig, s conversion.Scope) error {\n\treturn autoConvert_unversioned_ManagerConfig_To_v1alpha2_ManagerConfig(in, out, s)\n}\n\n//nolint:revive\nfunc manualConvert_unversioned_RuntimeSpec_To_v1alpha2_Runtime(in *unversioned.RuntimeSpec, out *Runtime, _ conversion.Scope) error {\n\t*out = Runtime(in.Name)\n\treturn nil\n}\n\n//nolint:revive\nfunc Convert_unversioned_RuntimeSpec_To_v1alpha2_Runtime(in *unversioned.RuntimeSpec, out *Runtime, s conversion.Scope) error {\n\treturn manualConvert_unversioned_RuntimeSpec_To_v1alpha2_Runtime(in, out, s)\n}\n"
  },
  {
    "path": "api/v1alpha2/doc.go",
    "content": "// Package v1alpha2 contains API Schema definitions for the eraser v1alpha2 API version.\n// +k8s:conversion-gen=github.com/eraser-dev/eraser/api/unversioned\npackage v1alpha2\n"
  },
  {
    "path": "api/v1alpha2/eraserconfig_types.go",
    "content": "/*\nCopyright 2021.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\npackage v1alpha2\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"time\"\n\n\tcorev1 \"k8s.io/api/core/v1\"\n\t\"k8s.io/apimachinery/pkg/api/resource\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n)\n\ntype (\n\tDuration time.Duration\n\tRuntime  string\n)\n\nconst (\n\tRuntimeContainerd Runtime = \"containerd\"\n\tRuntimeDockerShim Runtime = \"dockershim\"\n\tRuntimeCrio       Runtime = \"crio\"\n)\n\nfunc (td *Duration) UnmarshalJSON(b []byte) error {\n\tvar str string\n\terr := json.Unmarshal(b, &str)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tpd, err := time.ParseDuration(str)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t*td = Duration(pd)\n\treturn nil\n}\n\nfunc (r *Runtime) UnmarshalJSON(b []byte) error {\n\tvar str string\n\terr := json.Unmarshal(b, &str)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tswitch rt := Runtime(str); rt {\n\tcase RuntimeContainerd, RuntimeDockerShim, RuntimeCrio:\n\t\t*r = rt\n\tdefault:\n\t\treturn fmt.Errorf(\"cannot determine runtime type: %s. valid values are containerd, dockershim, or crio\", str)\n\t}\n\n\treturn nil\n}\n\nfunc (td *Duration) MarshalJSON() ([]byte, error) {\n\treturn []byte(fmt.Sprintf(`\"%s\"`, time.Duration(*td).String())), nil\n}\n\n// EDIT THIS FILE!  THIS IS SCAFFOLDING FOR YOU TO OWN!\n// NOTE: json tags are required.  
Any new fields you add must have json tags for the fields to be serialized.\n\ntype OptionalContainerConfig struct {\n\tEnabled         bool `json:\"enabled,omitempty\"`\n\tContainerConfig `json:\",inline\"`\n}\n\ntype ContainerConfig struct {\n\tImage   RepoTag              `json:\"image,omitempty\"`\n\tRequest ResourceRequirements `json:\"request,omitempty\"`\n\tLimit   ResourceRequirements `json:\"limit,omitempty\"`\n\tConfig  *string              `json:\"config,omitempty\"`\n\tVolumes []corev1.Volume      `json:\"volumes,omitempty\"`\n}\n\ntype ManagerConfig struct {\n\tRuntime           Runtime          `json:\"runtime,omitempty\"`\n\tOTLPEndpoint      string           `json:\"otlpEndpoint,omitempty\"`\n\tLogLevel          string           `json:\"logLevel,omitempty\"`\n\tScheduling        ScheduleConfig   `json:\"scheduling,omitempty\"`\n\tProfile           ProfileConfig    `json:\"profile,omitempty\"`\n\tImageJob          ImageJobConfig   `json:\"imageJob,omitempty\"`\n\tPullSecrets       []string         `json:\"pullSecrets,omitempty\"`\n\tNodeFilter        NodeFilterConfig `json:\"nodeFilter,omitempty\"`\n\tPriorityClassName string           `json:\"priorityClassName,omitempty\"`\n}\n\ntype ScheduleConfig struct {\n\tRepeatInterval   Duration `json:\"repeatInterval,omitempty\"`\n\tBeginImmediately bool     `json:\"beginImmediately,omitempty\"`\n}\n\ntype ProfileConfig struct {\n\tEnabled bool `json:\"enabled,omitempty\"`\n\tPort    int  `json:\"port,omitempty\"`\n}\n\ntype ImageJobConfig struct {\n\tSuccessRatio float64               `json:\"successRatio,omitempty\"`\n\tCleanup      ImageJobCleanupConfig `json:\"cleanup,omitempty\"`\n}\n\ntype ImageJobCleanupConfig struct {\n\tDelayOnSuccess Duration `json:\"delayOnSuccess,omitempty\"`\n\tDelayOnFailure Duration `json:\"delayOnFailure,omitempty\"`\n}\n\ntype NodeFilterConfig struct {\n\tType      string   `json:\"type,omitempty\"`\n\tSelectors []string `json:\"selectors,omitempty\"`\n}\n\ntype ResourceRequirements struct {\n\tMem resource.Quantity `json:\"mem,omitempty\"`\n\tCPU resource.Quantity `json:\"cpu,omitempty\"`\n}\n\ntype RepoTag struct {\n\tRepo string `json:\"repo,omitempty\"`\n\tTag  string `json:\"tag,omitempty\"`\n}\n\ntype Components struct {\n\tCollector OptionalContainerConfig `json:\"collector,omitempty\"`\n\tScanner   OptionalContainerConfig `json:\"scanner,omitempty\"`\n\tRemover   ContainerConfig         `json:\"remover,omitempty\"`\n}\n\n//+kubebuilder:object:root=true\n\n// EraserConfig is the Schema for the eraserconfigs API.\ntype EraserConfig struct {\n\tmetav1.TypeMeta `json:\",inline\"`\n\tManager         ManagerConfig `json:\"manager\"`\n\tComponents      Components    `json:\"components\"`\n}\n\nfunc init() {\n\tSchemeBuilder.Register(&EraserConfig{})\n}\n"
  },
  {
    "path": "api/v1alpha2/groupversion_info.go",
    "content": "/*\nCopyright 2021.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\n// Package v1alpha2 contains API Schema definitions for the eraser.sh v1alpha2 API group\n// +kubebuilder:object:generate=true\n// +groupName=eraser.sh\npackage v1alpha2\n\nimport (\n\truntime \"k8s.io/apimachinery/pkg/runtime\"\n\t\"k8s.io/apimachinery/pkg/runtime/schema\"\n\t\"sigs.k8s.io/controller-runtime/pkg/scheme\"\n)\n\nvar (\n\t// GroupVersion is group version used to register these objects.\n\tGroupVersion = schema.GroupVersion{Group: \"eraser.sh\", Version: \"v1alpha2\"}\n\n\t// SchemeBuilder is used to add go types to the GroupVersionKind scheme.\n\tSchemeBuilder = &scheme.Builder{GroupVersion: GroupVersion}\n\n\tlocalSchemeBuilder = runtime.NewSchemeBuilder(SchemeBuilder.AddToScheme)\n\n\t// AddToScheme adds the types in this group-version to the given scheme.\n\tAddToScheme = SchemeBuilder.AddToScheme\n)\n"
  },
  {
    "path": "api/v1alpha2/zz_generated.conversion.go",
    "content": "//go:build !ignore_autogenerated\n// +build !ignore_autogenerated\n\n/*\nCopyright 2021.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n// Code generated by conversion-gen. DO NOT EDIT.\n\npackage v1alpha2\n\nimport (\n\tunsafe \"unsafe\"\n\n\tunversioned \"github.com/eraser-dev/eraser/api/unversioned\"\n\tv1 \"k8s.io/api/core/v1\"\n\tconversion \"k8s.io/apimachinery/pkg/conversion\"\n\truntime \"k8s.io/apimachinery/pkg/runtime\"\n)\n\nfunc init() {\n\tlocalSchemeBuilder.Register(RegisterConversions)\n}\n\n// RegisterConversions adds conversion functions to the given scheme.\n// Public to allow building arbitrary schemes.\nfunc RegisterConversions(s *runtime.Scheme) error {\n\tif err := s.AddGeneratedConversionFunc((*Components)(nil), (*unversioned.Components)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha2_Components_To_unversioned_Components(a.(*Components), b.(*unversioned.Components), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.Components)(nil), (*Components)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_Components_To_v1alpha2_Components(a.(*unversioned.Components), b.(*Components), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*ContainerConfig)(nil), (*unversioned.ContainerConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha2_ContainerConfig_To_unversioned_ContainerConfig(a.(*ContainerConfig), b.(*unversioned.ContainerConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.ContainerConfig)(nil), (*ContainerConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_ContainerConfig_To_v1alpha2_ContainerConfig(a.(*unversioned.ContainerConfig), b.(*ContainerConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*EraserConfig)(nil), (*unversioned.EraserConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha2_EraserConfig_To_unversioned_EraserConfig(a.(*EraserConfig), b.(*unversioned.EraserConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.EraserConfig)(nil), (*EraserConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_EraserConfig_To_v1alpha2_EraserConfig(a.(*unversioned.EraserConfig), b.(*EraserConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*ImageJobCleanupConfig)(nil), (*unversioned.ImageJobCleanupConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha2_ImageJobCleanupConfig_To_unversioned_ImageJobCleanupConfig(a.(*ImageJobCleanupConfig), b.(*unversioned.ImageJobCleanupConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := 
s.AddGeneratedConversionFunc((*unversioned.ImageJobCleanupConfig)(nil), (*ImageJobCleanupConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_ImageJobCleanupConfig_To_v1alpha2_ImageJobCleanupConfig(a.(*unversioned.ImageJobCleanupConfig), b.(*ImageJobCleanupConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*ImageJobConfig)(nil), (*unversioned.ImageJobConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha2_ImageJobConfig_To_unversioned_ImageJobConfig(a.(*ImageJobConfig), b.(*unversioned.ImageJobConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.ImageJobConfig)(nil), (*ImageJobConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_ImageJobConfig_To_v1alpha2_ImageJobConfig(a.(*unversioned.ImageJobConfig), b.(*ImageJobConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*NodeFilterConfig)(nil), (*unversioned.NodeFilterConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha2_NodeFilterConfig_To_unversioned_NodeFilterConfig(a.(*NodeFilterConfig), b.(*unversioned.NodeFilterConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.NodeFilterConfig)(nil), (*NodeFilterConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_NodeFilterConfig_To_v1alpha2_NodeFilterConfig(a.(*unversioned.NodeFilterConfig), b.(*NodeFilterConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*OptionalContainerConfig)(nil), (*unversioned.OptionalContainerConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha2_OptionalContainerConfig_To_unversioned_OptionalContainerConfig(a.(*OptionalContainerConfig), b.(*unversioned.OptionalContainerConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.OptionalContainerConfig)(nil), (*OptionalContainerConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_OptionalContainerConfig_To_v1alpha2_OptionalContainerConfig(a.(*unversioned.OptionalContainerConfig), b.(*OptionalContainerConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*ProfileConfig)(nil), (*unversioned.ProfileConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha2_ProfileConfig_To_unversioned_ProfileConfig(a.(*ProfileConfig), b.(*unversioned.ProfileConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.ProfileConfig)(nil), (*ProfileConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_ProfileConfig_To_v1alpha2_ProfileConfig(a.(*unversioned.ProfileConfig), b.(*ProfileConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*RepoTag)(nil), (*unversioned.RepoTag)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha2_RepoTag_To_unversioned_RepoTag(a.(*RepoTag), b.(*unversioned.RepoTag), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.RepoTag)(nil), (*RepoTag)(nil), func(a, b interface{}, scope 
conversion.Scope) error {\n\t\treturn Convert_unversioned_RepoTag_To_v1alpha2_RepoTag(a.(*unversioned.RepoTag), b.(*RepoTag), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*ResourceRequirements)(nil), (*unversioned.ResourceRequirements)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha2_ResourceRequirements_To_unversioned_ResourceRequirements(a.(*ResourceRequirements), b.(*unversioned.ResourceRequirements), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.ResourceRequirements)(nil), (*ResourceRequirements)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_ResourceRequirements_To_v1alpha2_ResourceRequirements(a.(*unversioned.ResourceRequirements), b.(*ResourceRequirements), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*ScheduleConfig)(nil), (*unversioned.ScheduleConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha2_ScheduleConfig_To_unversioned_ScheduleConfig(a.(*ScheduleConfig), b.(*unversioned.ScheduleConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.ScheduleConfig)(nil), (*ScheduleConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_ScheduleConfig_To_v1alpha2_ScheduleConfig(a.(*unversioned.ScheduleConfig), b.(*ScheduleConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddConversionFunc((*unversioned.ManagerConfig)(nil), (*ManagerConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_ManagerConfig_To_v1alpha2_ManagerConfig(a.(*unversioned.ManagerConfig), b.(*ManagerConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddConversionFunc((*unversioned.RuntimeSpec)(nil), (*Runtime)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_RuntimeSpec_To_v1alpha2_Runtime(a.(*unversioned.RuntimeSpec), b.(*Runtime), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddConversionFunc((*ManagerConfig)(nil), (*unversioned.ManagerConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha2_ManagerConfig_To_unversioned_ManagerConfig(a.(*ManagerConfig), b.(*unversioned.ManagerConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddConversionFunc((*Runtime)(nil), (*unversioned.RuntimeSpec)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha2_Runtime_To_unversioned_RuntimeSpec(a.(*Runtime), b.(*unversioned.RuntimeSpec), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\nfunc autoConvert_v1alpha2_Components_To_unversioned_Components(in *Components, out *unversioned.Components, s conversion.Scope) error {\n\tif err := Convert_v1alpha2_OptionalContainerConfig_To_unversioned_OptionalContainerConfig(&in.Collector, &out.Collector, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_v1alpha2_OptionalContainerConfig_To_unversioned_OptionalContainerConfig(&in.Scanner, &out.Scanner, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_v1alpha2_ContainerConfig_To_unversioned_ContainerConfig(&in.Remover, &out.Remover, s); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// Convert_v1alpha2_Components_To_unversioned_Components is an autogenerated conversion function.\nfunc 
Convert_v1alpha2_Components_To_unversioned_Components(in *Components, out *unversioned.Components, s conversion.Scope) error {\n\treturn autoConvert_v1alpha2_Components_To_unversioned_Components(in, out, s)\n}\n\nfunc autoConvert_unversioned_Components_To_v1alpha2_Components(in *unversioned.Components, out *Components, s conversion.Scope) error {\n\tif err := Convert_unversioned_OptionalContainerConfig_To_v1alpha2_OptionalContainerConfig(&in.Collector, &out.Collector, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_unversioned_OptionalContainerConfig_To_v1alpha2_OptionalContainerConfig(&in.Scanner, &out.Scanner, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_unversioned_ContainerConfig_To_v1alpha2_ContainerConfig(&in.Remover, &out.Remover, s); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// Convert_unversioned_Components_To_v1alpha2_Components is an autogenerated conversion function.\nfunc Convert_unversioned_Components_To_v1alpha2_Components(in *unversioned.Components, out *Components, s conversion.Scope) error {\n\treturn autoConvert_unversioned_Components_To_v1alpha2_Components(in, out, s)\n}\n\nfunc autoConvert_v1alpha2_ContainerConfig_To_unversioned_ContainerConfig(in *ContainerConfig, out *unversioned.ContainerConfig, s conversion.Scope) error {\n\tif err := Convert_v1alpha2_RepoTag_To_unversioned_RepoTag(&in.Image, &out.Image, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_v1alpha2_ResourceRequirements_To_unversioned_ResourceRequirements(&in.Request, &out.Request, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_v1alpha2_ResourceRequirements_To_unversioned_ResourceRequirements(&in.Limit, &out.Limit, s); err != nil {\n\t\treturn err\n\t}\n\tout.Config = (*string)(unsafe.Pointer(in.Config))\n\tout.Volumes = *(*[]v1.Volume)(unsafe.Pointer(&in.Volumes))\n\treturn nil\n}\n\n// Convert_v1alpha2_ContainerConfig_To_unversioned_ContainerConfig is an autogenerated conversion function.\nfunc Convert_v1alpha2_ContainerConfig_To_unversioned_ContainerConfig(in *ContainerConfig, out *unversioned.ContainerConfig, s conversion.Scope) error {\n\treturn autoConvert_v1alpha2_ContainerConfig_To_unversioned_ContainerConfig(in, out, s)\n}\n\nfunc autoConvert_unversioned_ContainerConfig_To_v1alpha2_ContainerConfig(in *unversioned.ContainerConfig, out *ContainerConfig, s conversion.Scope) error {\n\tif err := Convert_unversioned_RepoTag_To_v1alpha2_RepoTag(&in.Image, &out.Image, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_unversioned_ResourceRequirements_To_v1alpha2_ResourceRequirements(&in.Request, &out.Request, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_unversioned_ResourceRequirements_To_v1alpha2_ResourceRequirements(&in.Limit, &out.Limit, s); err != nil {\n\t\treturn err\n\t}\n\tout.Config = (*string)(unsafe.Pointer(in.Config))\n\tout.Volumes = *(*[]v1.Volume)(unsafe.Pointer(&in.Volumes))\n\treturn nil\n}\n\n// Convert_unversioned_ContainerConfig_To_v1alpha2_ContainerConfig is an autogenerated conversion function.\nfunc Convert_unversioned_ContainerConfig_To_v1alpha2_ContainerConfig(in *unversioned.ContainerConfig, out *ContainerConfig, s conversion.Scope) error {\n\treturn autoConvert_unversioned_ContainerConfig_To_v1alpha2_ContainerConfig(in, out, s)\n}\n\nfunc autoConvert_v1alpha2_EraserConfig_To_unversioned_EraserConfig(in *EraserConfig, out *unversioned.EraserConfig, s conversion.Scope) error {\n\tif err := Convert_v1alpha2_ManagerConfig_To_unversioned_ManagerConfig(&in.Manager, &out.Manager, s); err != nil 
{\n\t\treturn err\n\t}\n\tif err := Convert_v1alpha2_Components_To_unversioned_Components(&in.Components, &out.Components, s); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// Convert_v1alpha2_EraserConfig_To_unversioned_EraserConfig is an autogenerated conversion function.\nfunc Convert_v1alpha2_EraserConfig_To_unversioned_EraserConfig(in *EraserConfig, out *unversioned.EraserConfig, s conversion.Scope) error {\n\treturn autoConvert_v1alpha2_EraserConfig_To_unversioned_EraserConfig(in, out, s)\n}\n\nfunc autoConvert_unversioned_EraserConfig_To_v1alpha2_EraserConfig(in *unversioned.EraserConfig, out *EraserConfig, s conversion.Scope) error {\n\tif err := Convert_unversioned_ManagerConfig_To_v1alpha2_ManagerConfig(&in.Manager, &out.Manager, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_unversioned_Components_To_v1alpha2_Components(&in.Components, &out.Components, s); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// Convert_unversioned_EraserConfig_To_v1alpha2_EraserConfig is an autogenerated conversion function.\nfunc Convert_unversioned_EraserConfig_To_v1alpha2_EraserConfig(in *unversioned.EraserConfig, out *EraserConfig, s conversion.Scope) error {\n\treturn autoConvert_unversioned_EraserConfig_To_v1alpha2_EraserConfig(in, out, s)\n}\n\nfunc autoConvert_v1alpha2_ImageJobCleanupConfig_To_unversioned_ImageJobCleanupConfig(in *ImageJobCleanupConfig, out *unversioned.ImageJobCleanupConfig, s conversion.Scope) error {\n\tout.DelayOnSuccess = unversioned.Duration(in.DelayOnSuccess)\n\tout.DelayOnFailure = unversioned.Duration(in.DelayOnFailure)\n\treturn nil\n}\n\n// Convert_v1alpha2_ImageJobCleanupConfig_To_unversioned_ImageJobCleanupConfig is an autogenerated conversion function.\nfunc Convert_v1alpha2_ImageJobCleanupConfig_To_unversioned_ImageJobCleanupConfig(in *ImageJobCleanupConfig, out *unversioned.ImageJobCleanupConfig, s conversion.Scope) error {\n\treturn autoConvert_v1alpha2_ImageJobCleanupConfig_To_unversioned_ImageJobCleanupConfig(in, out, s)\n}\n\nfunc autoConvert_unversioned_ImageJobCleanupConfig_To_v1alpha2_ImageJobCleanupConfig(in *unversioned.ImageJobCleanupConfig, out *ImageJobCleanupConfig, s conversion.Scope) error {\n\tout.DelayOnSuccess = Duration(in.DelayOnSuccess)\n\tout.DelayOnFailure = Duration(in.DelayOnFailure)\n\treturn nil\n}\n\n// Convert_unversioned_ImageJobCleanupConfig_To_v1alpha2_ImageJobCleanupConfig is an autogenerated conversion function.\nfunc Convert_unversioned_ImageJobCleanupConfig_To_v1alpha2_ImageJobCleanupConfig(in *unversioned.ImageJobCleanupConfig, out *ImageJobCleanupConfig, s conversion.Scope) error {\n\treturn autoConvert_unversioned_ImageJobCleanupConfig_To_v1alpha2_ImageJobCleanupConfig(in, out, s)\n}\n\nfunc autoConvert_v1alpha2_ImageJobConfig_To_unversioned_ImageJobConfig(in *ImageJobConfig, out *unversioned.ImageJobConfig, s conversion.Scope) error {\n\tout.SuccessRatio = in.SuccessRatio\n\tif err := Convert_v1alpha2_ImageJobCleanupConfig_To_unversioned_ImageJobCleanupConfig(&in.Cleanup, &out.Cleanup, s); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// Convert_v1alpha2_ImageJobConfig_To_unversioned_ImageJobConfig is an autogenerated conversion function.\nfunc Convert_v1alpha2_ImageJobConfig_To_unversioned_ImageJobConfig(in *ImageJobConfig, out *unversioned.ImageJobConfig, s conversion.Scope) error {\n\treturn autoConvert_v1alpha2_ImageJobConfig_To_unversioned_ImageJobConfig(in, out, s)\n}\n\nfunc autoConvert_unversioned_ImageJobConfig_To_v1alpha2_ImageJobConfig(in *unversioned.ImageJobConfig, out 
*ImageJobConfig, s conversion.Scope) error {\n\tout.SuccessRatio = in.SuccessRatio\n\tif err := Convert_unversioned_ImageJobCleanupConfig_To_v1alpha2_ImageJobCleanupConfig(&in.Cleanup, &out.Cleanup, s); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// Convert_unversioned_ImageJobConfig_To_v1alpha2_ImageJobConfig is an autogenerated conversion function.\nfunc Convert_unversioned_ImageJobConfig_To_v1alpha2_ImageJobConfig(in *unversioned.ImageJobConfig, out *ImageJobConfig, s conversion.Scope) error {\n\treturn autoConvert_unversioned_ImageJobConfig_To_v1alpha2_ImageJobConfig(in, out, s)\n}\n\nfunc autoConvert_v1alpha2_ManagerConfig_To_unversioned_ManagerConfig(in *ManagerConfig, out *unversioned.ManagerConfig, s conversion.Scope) error {\n\tif err := Convert_v1alpha2_Runtime_To_unversioned_RuntimeSpec(&in.Runtime, &out.Runtime, s); err != nil {\n\t\treturn err\n\t}\n\tout.OTLPEndpoint = in.OTLPEndpoint\n\tout.LogLevel = in.LogLevel\n\tif err := Convert_v1alpha2_ScheduleConfig_To_unversioned_ScheduleConfig(&in.Scheduling, &out.Scheduling, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_v1alpha2_ProfileConfig_To_unversioned_ProfileConfig(&in.Profile, &out.Profile, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_v1alpha2_ImageJobConfig_To_unversioned_ImageJobConfig(&in.ImageJob, &out.ImageJob, s); err != nil {\n\t\treturn err\n\t}\n\tout.PullSecrets = *(*[]string)(unsafe.Pointer(&in.PullSecrets))\n\tif err := Convert_v1alpha2_NodeFilterConfig_To_unversioned_NodeFilterConfig(&in.NodeFilter, &out.NodeFilter, s); err != nil {\n\t\treturn err\n\t}\n\tout.PriorityClassName = in.PriorityClassName\n\treturn nil\n}\n\nfunc autoConvert_unversioned_ManagerConfig_To_v1alpha2_ManagerConfig(in *unversioned.ManagerConfig, out *ManagerConfig, s conversion.Scope) error {\n\tif err := Convert_unversioned_RuntimeSpec_To_v1alpha2_Runtime(&in.Runtime, &out.Runtime, s); err != nil {\n\t\treturn err\n\t}\n\tout.OTLPEndpoint = in.OTLPEndpoint\n\tout.LogLevel = in.LogLevel\n\tif err := Convert_unversioned_ScheduleConfig_To_v1alpha2_ScheduleConfig(&in.Scheduling, &out.Scheduling, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_unversioned_ProfileConfig_To_v1alpha2_ProfileConfig(&in.Profile, &out.Profile, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_unversioned_ImageJobConfig_To_v1alpha2_ImageJobConfig(&in.ImageJob, &out.ImageJob, s); err != nil {\n\t\treturn err\n\t}\n\tout.PullSecrets = *(*[]string)(unsafe.Pointer(&in.PullSecrets))\n\tif err := Convert_unversioned_NodeFilterConfig_To_v1alpha2_NodeFilterConfig(&in.NodeFilter, &out.NodeFilter, s); err != nil {\n\t\treturn err\n\t}\n\tout.PriorityClassName = in.PriorityClassName\n\t// WARNING: in.AdditionalPodLabels requires manual conversion: does not exist in peer-type\n\treturn nil\n}\n\nfunc autoConvert_v1alpha2_NodeFilterConfig_To_unversioned_NodeFilterConfig(in *NodeFilterConfig, out *unversioned.NodeFilterConfig, s conversion.Scope) error {\n\tout.Type = in.Type\n\tout.Selectors = *(*[]string)(unsafe.Pointer(&in.Selectors))\n\treturn nil\n}\n\n// Convert_v1alpha2_NodeFilterConfig_To_unversioned_NodeFilterConfig is an autogenerated conversion function.\nfunc Convert_v1alpha2_NodeFilterConfig_To_unversioned_NodeFilterConfig(in *NodeFilterConfig, out *unversioned.NodeFilterConfig, s conversion.Scope) error {\n\treturn autoConvert_v1alpha2_NodeFilterConfig_To_unversioned_NodeFilterConfig(in, out, s)\n}\n\nfunc autoConvert_unversioned_NodeFilterConfig_To_v1alpha2_NodeFilterConfig(in *unversioned.NodeFilterConfig, 
out *NodeFilterConfig, s conversion.Scope) error {\n\tout.Type = in.Type\n\tout.Selectors = *(*[]string)(unsafe.Pointer(&in.Selectors))\n\treturn nil\n}\n\n// Convert_unversioned_NodeFilterConfig_To_v1alpha2_NodeFilterConfig is an autogenerated conversion function.\nfunc Convert_unversioned_NodeFilterConfig_To_v1alpha2_NodeFilterConfig(in *unversioned.NodeFilterConfig, out *NodeFilterConfig, s conversion.Scope) error {\n\treturn autoConvert_unversioned_NodeFilterConfig_To_v1alpha2_NodeFilterConfig(in, out, s)\n}\n\nfunc autoConvert_v1alpha2_OptionalContainerConfig_To_unversioned_OptionalContainerConfig(in *OptionalContainerConfig, out *unversioned.OptionalContainerConfig, s conversion.Scope) error {\n\tout.Enabled = in.Enabled\n\tif err := Convert_v1alpha2_ContainerConfig_To_unversioned_ContainerConfig(&in.ContainerConfig, &out.ContainerConfig, s); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// Convert_v1alpha2_OptionalContainerConfig_To_unversioned_OptionalContainerConfig is an autogenerated conversion function.\nfunc Convert_v1alpha2_OptionalContainerConfig_To_unversioned_OptionalContainerConfig(in *OptionalContainerConfig, out *unversioned.OptionalContainerConfig, s conversion.Scope) error {\n\treturn autoConvert_v1alpha2_OptionalContainerConfig_To_unversioned_OptionalContainerConfig(in, out, s)\n}\n\nfunc autoConvert_unversioned_OptionalContainerConfig_To_v1alpha2_OptionalContainerConfig(in *unversioned.OptionalContainerConfig, out *OptionalContainerConfig, s conversion.Scope) error {\n\tout.Enabled = in.Enabled\n\tif err := Convert_unversioned_ContainerConfig_To_v1alpha2_ContainerConfig(&in.ContainerConfig, &out.ContainerConfig, s); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// Convert_unversioned_OptionalContainerConfig_To_v1alpha2_OptionalContainerConfig is an autogenerated conversion function.\nfunc Convert_unversioned_OptionalContainerConfig_To_v1alpha2_OptionalContainerConfig(in *unversioned.OptionalContainerConfig, out *OptionalContainerConfig, s conversion.Scope) error {\n\treturn autoConvert_unversioned_OptionalContainerConfig_To_v1alpha2_OptionalContainerConfig(in, out, s)\n}\n\nfunc autoConvert_v1alpha2_ProfileConfig_To_unversioned_ProfileConfig(in *ProfileConfig, out *unversioned.ProfileConfig, s conversion.Scope) error {\n\tout.Enabled = in.Enabled\n\tout.Port = in.Port\n\treturn nil\n}\n\n// Convert_v1alpha2_ProfileConfig_To_unversioned_ProfileConfig is an autogenerated conversion function.\nfunc Convert_v1alpha2_ProfileConfig_To_unversioned_ProfileConfig(in *ProfileConfig, out *unversioned.ProfileConfig, s conversion.Scope) error {\n\treturn autoConvert_v1alpha2_ProfileConfig_To_unversioned_ProfileConfig(in, out, s)\n}\n\nfunc autoConvert_unversioned_ProfileConfig_To_v1alpha2_ProfileConfig(in *unversioned.ProfileConfig, out *ProfileConfig, s conversion.Scope) error {\n\tout.Enabled = in.Enabled\n\tout.Port = in.Port\n\treturn nil\n}\n\n// Convert_unversioned_ProfileConfig_To_v1alpha2_ProfileConfig is an autogenerated conversion function.\nfunc Convert_unversioned_ProfileConfig_To_v1alpha2_ProfileConfig(in *unversioned.ProfileConfig, out *ProfileConfig, s conversion.Scope) error {\n\treturn autoConvert_unversioned_ProfileConfig_To_v1alpha2_ProfileConfig(in, out, s)\n}\n\nfunc autoConvert_v1alpha2_RepoTag_To_unversioned_RepoTag(in *RepoTag, out *unversioned.RepoTag, s conversion.Scope) error {\n\tout.Repo = in.Repo\n\tout.Tag = in.Tag\n\treturn nil\n}\n\n// Convert_v1alpha2_RepoTag_To_unversioned_RepoTag is an autogenerated conversion function.\nfunc 
Convert_v1alpha2_RepoTag_To_unversioned_RepoTag(in *RepoTag, out *unversioned.RepoTag, s conversion.Scope) error {\n\treturn autoConvert_v1alpha2_RepoTag_To_unversioned_RepoTag(in, out, s)\n}\n\nfunc autoConvert_unversioned_RepoTag_To_v1alpha2_RepoTag(in *unversioned.RepoTag, out *RepoTag, s conversion.Scope) error {\n\tout.Repo = in.Repo\n\tout.Tag = in.Tag\n\treturn nil\n}\n\n// Convert_unversioned_RepoTag_To_v1alpha2_RepoTag is an autogenerated conversion function.\nfunc Convert_unversioned_RepoTag_To_v1alpha2_RepoTag(in *unversioned.RepoTag, out *RepoTag, s conversion.Scope) error {\n\treturn autoConvert_unversioned_RepoTag_To_v1alpha2_RepoTag(in, out, s)\n}\n\nfunc autoConvert_v1alpha2_ResourceRequirements_To_unversioned_ResourceRequirements(in *ResourceRequirements, out *unversioned.ResourceRequirements, s conversion.Scope) error {\n\tout.Mem = in.Mem\n\tout.CPU = in.CPU\n\treturn nil\n}\n\n// Convert_v1alpha2_ResourceRequirements_To_unversioned_ResourceRequirements is an autogenerated conversion function.\nfunc Convert_v1alpha2_ResourceRequirements_To_unversioned_ResourceRequirements(in *ResourceRequirements, out *unversioned.ResourceRequirements, s conversion.Scope) error {\n\treturn autoConvert_v1alpha2_ResourceRequirements_To_unversioned_ResourceRequirements(in, out, s)\n}\n\nfunc autoConvert_unversioned_ResourceRequirements_To_v1alpha2_ResourceRequirements(in *unversioned.ResourceRequirements, out *ResourceRequirements, s conversion.Scope) error {\n\tout.Mem = in.Mem\n\tout.CPU = in.CPU\n\treturn nil\n}\n\n// Convert_unversioned_ResourceRequirements_To_v1alpha2_ResourceRequirements is an autogenerated conversion function.\nfunc Convert_unversioned_ResourceRequirements_To_v1alpha2_ResourceRequirements(in *unversioned.ResourceRequirements, out *ResourceRequirements, s conversion.Scope) error {\n\treturn autoConvert_unversioned_ResourceRequirements_To_v1alpha2_ResourceRequirements(in, out, s)\n}\n\nfunc autoConvert_v1alpha2_ScheduleConfig_To_unversioned_ScheduleConfig(in *ScheduleConfig, out *unversioned.ScheduleConfig, s conversion.Scope) error {\n\tout.RepeatInterval = unversioned.Duration(in.RepeatInterval)\n\tout.BeginImmediately = in.BeginImmediately\n\treturn nil\n}\n\n// Convert_v1alpha2_ScheduleConfig_To_unversioned_ScheduleConfig is an autogenerated conversion function.\nfunc Convert_v1alpha2_ScheduleConfig_To_unversioned_ScheduleConfig(in *ScheduleConfig, out *unversioned.ScheduleConfig, s conversion.Scope) error {\n\treturn autoConvert_v1alpha2_ScheduleConfig_To_unversioned_ScheduleConfig(in, out, s)\n}\n\nfunc autoConvert_unversioned_ScheduleConfig_To_v1alpha2_ScheduleConfig(in *unversioned.ScheduleConfig, out *ScheduleConfig, s conversion.Scope) error {\n\tout.RepeatInterval = Duration(in.RepeatInterval)\n\tout.BeginImmediately = in.BeginImmediately\n\treturn nil\n}\n\n// Convert_unversioned_ScheduleConfig_To_v1alpha2_ScheduleConfig is an autogenerated conversion function.\nfunc Convert_unversioned_ScheduleConfig_To_v1alpha2_ScheduleConfig(in *unversioned.ScheduleConfig, out *ScheduleConfig, s conversion.Scope) error {\n\treturn autoConvert_unversioned_ScheduleConfig_To_v1alpha2_ScheduleConfig(in, out, s)\n}\n"
  },
  {
    "path": "api/v1alpha2/zz_generated.deepcopy.go",
    "content": "//go:build !ignore_autogenerated\n\n/*\nCopyright 2021.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\n// Code generated by controller-gen. DO NOT EDIT.\n\npackage v1alpha2\n\nimport (\n\t\"k8s.io/api/core/v1\"\n\t\"k8s.io/apimachinery/pkg/runtime\"\n)\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *Components) DeepCopyInto(out *Components) {\n\t*out = *in\n\tin.Collector.DeepCopyInto(&out.Collector)\n\tin.Scanner.DeepCopyInto(&out.Scanner)\n\tin.Remover.DeepCopyInto(&out.Remover)\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Components.\nfunc (in *Components) DeepCopy() *Components {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(Components)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ContainerConfig) DeepCopyInto(out *ContainerConfig) {\n\t*out = *in\n\tout.Image = in.Image\n\tin.Request.DeepCopyInto(&out.Request)\n\tin.Limit.DeepCopyInto(&out.Limit)\n\tif in.Config != nil {\n\t\tin, out := &in.Config, &out.Config\n\t\t*out = new(string)\n\t\t**out = **in\n\t}\n\tif in.Volumes != nil {\n\t\tin, out := &in.Volumes, &out.Volumes\n\t\t*out = make([]v1.Volume, len(*in))\n\t\tfor i := range *in {\n\t\t\t(*in)[i].DeepCopyInto(&(*out)[i])\n\t\t}\n\t}\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ContainerConfig.\nfunc (in *ContainerConfig) DeepCopy() *ContainerConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ContainerConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *EraserConfig) DeepCopyInto(out *EraserConfig) {\n\t*out = *in\n\tout.TypeMeta = in.TypeMeta\n\tin.Manager.DeepCopyInto(&out.Manager)\n\tin.Components.DeepCopyInto(&out.Components)\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new EraserConfig.\nfunc (in *EraserConfig) DeepCopy() *EraserConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(EraserConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.\nfunc (in *EraserConfig) DeepCopyObject() runtime.Object {\n\tif c := in.DeepCopy(); c != nil {\n\t\treturn c\n\t}\n\treturn nil\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. 
in must be non-nil.\nfunc (in *ImageJobCleanupConfig) DeepCopyInto(out *ImageJobCleanupConfig) {\n\t*out = *in\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ImageJobCleanupConfig.\nfunc (in *ImageJobCleanupConfig) DeepCopy() *ImageJobCleanupConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ImageJobCleanupConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ImageJobConfig) DeepCopyInto(out *ImageJobConfig) {\n\t*out = *in\n\tout.Cleanup = in.Cleanup\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ImageJobConfig.\nfunc (in *ImageJobConfig) DeepCopy() *ImageJobConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ImageJobConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ManagerConfig) DeepCopyInto(out *ManagerConfig) {\n\t*out = *in\n\tout.Scheduling = in.Scheduling\n\tout.Profile = in.Profile\n\tout.ImageJob = in.ImageJob\n\tif in.PullSecrets != nil {\n\t\tin, out := &in.PullSecrets, &out.PullSecrets\n\t\t*out = make([]string, len(*in))\n\t\tcopy(*out, *in)\n\t}\n\tin.NodeFilter.DeepCopyInto(&out.NodeFilter)\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ManagerConfig.\nfunc (in *ManagerConfig) DeepCopy() *ManagerConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ManagerConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *NodeFilterConfig) DeepCopyInto(out *NodeFilterConfig) {\n\t*out = *in\n\tif in.Selectors != nil {\n\t\tin, out := &in.Selectors, &out.Selectors\n\t\t*out = make([]string, len(*in))\n\t\tcopy(*out, *in)\n\t}\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NodeFilterConfig.\nfunc (in *NodeFilterConfig) DeepCopy() *NodeFilterConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(NodeFilterConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *OptionalContainerConfig) DeepCopyInto(out *OptionalContainerConfig) {\n\t*out = *in\n\tin.ContainerConfig.DeepCopyInto(&out.ContainerConfig)\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new OptionalContainerConfig.\nfunc (in *OptionalContainerConfig) DeepCopy() *OptionalContainerConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(OptionalContainerConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ProfileConfig) DeepCopyInto(out *ProfileConfig) {\n\t*out = *in\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ProfileConfig.\nfunc (in *ProfileConfig) DeepCopy() *ProfileConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ProfileConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. 
in must be non-nil.\nfunc (in *RepoTag) DeepCopyInto(out *RepoTag) {\n\t*out = *in\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RepoTag.\nfunc (in *RepoTag) DeepCopy() *RepoTag {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(RepoTag)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ResourceRequirements) DeepCopyInto(out *ResourceRequirements) {\n\t*out = *in\n\tout.Mem = in.Mem.DeepCopy()\n\tout.CPU = in.CPU.DeepCopy()\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ResourceRequirements.\nfunc (in *ResourceRequirements) DeepCopy() *ResourceRequirements {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ResourceRequirements)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ScheduleConfig) DeepCopyInto(out *ScheduleConfig) {\n\t*out = *in\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ScheduleConfig.\nfunc (in *ScheduleConfig) DeepCopy() *ScheduleConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ScheduleConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n"
  },
  {
    "path": "api/v1alpha3/config/config.go",
    "content": "package config\n\nimport (\n\t\"fmt\"\n\t\"time\"\n\n\t\"github.com/eraser-dev/eraser/api/v1alpha3\"\n\t\"github.com/eraser-dev/eraser/version\"\n\t\"k8s.io/apimachinery/pkg/api/resource\"\n)\n\nvar defaultScannerConfig = `\ncacheDir: /var/lib/trivy\ndbRepo: ghcr.io/aquasecurity/trivy-db\ndeleteFailedImages: true\ndeleteEOLImages: true\nvulnerabilities:\n  ignoreUnfixed: false\n  types:\n    - os\n    - library\nsecurityChecks: # need to be documented; determined by trivy, not us\n  - vuln\nseverities:\n  - CRITICAL\n  - HIGH\n  - MEDIUM\n  - LOW\n`\n\nconst (\n\tnoDelay = v1alpha3.Duration(0)\n\toneDay  = v1alpha3.Duration(time.Hour * 24)\n)\n\nfunc Default() *v1alpha3.EraserConfig {\n\treturn &v1alpha3.EraserConfig{\n\t\tManager: v1alpha3.ManagerConfig{\n\t\t\tRuntime: v1alpha3.RuntimeSpec{\n\t\t\t\tName:    v1alpha3.RuntimeContainerd,\n\t\t\t\tAddress: \"unix:///run/containerd/containerd.sock\",\n\t\t\t},\n\t\t\tOTLPEndpoint: \"\",\n\t\t\tLogLevel:     \"info\",\n\t\t\tScheduling: v1alpha3.ScheduleConfig{\n\t\t\t\tRepeatInterval:   v1alpha3.Duration(oneDay),\n\t\t\t\tBeginImmediately: true,\n\t\t\t},\n\t\t\tProfile: v1alpha3.ProfileConfig{\n\t\t\t\tEnabled: false,\n\t\t\t\tPort:    6060,\n\t\t\t},\n\t\t\tImageJob: v1alpha3.ImageJobConfig{\n\t\t\t\tSuccessRatio: 1.0,\n\t\t\t\tCleanup: v1alpha3.ImageJobCleanupConfig{\n\t\t\t\t\tDelayOnSuccess: noDelay,\n\t\t\t\t\tDelayOnFailure: oneDay,\n\t\t\t\t},\n\t\t\t},\n\t\t\tPullSecrets: []string{},\n\t\t\tNodeFilter: v1alpha3.NodeFilterConfig{\n\t\t\t\tType: \"exclude\",\n\t\t\t\tSelectors: []string{\n\t\t\t\t\t\"eraser.sh/cleanup.filter\",\n\t\t\t\t},\n\t\t\t},\n\t\t\tAdditionalPodLabels: map[string]string{},\n\t\t},\n\t\tComponents: v1alpha3.Components{\n\t\t\tCollector: v1alpha3.OptionalContainerConfig{\n\t\t\t\tEnabled: false,\n\t\t\t\tContainerConfig: v1alpha3.ContainerConfig{\n\t\t\t\t\tImage: v1alpha3.RepoTag{\n\t\t\t\t\t\tRepo: repo(\"collector\"),\n\t\t\t\t\t\tTag:  version.BuildVersion,\n\t\t\t\t\t},\n\t\t\t\t\tRequest: v1alpha3.ResourceRequirements{\n\t\t\t\t\t\tMem: resource.MustParse(\"25Mi\"),\n\t\t\t\t\t\tCPU: resource.MustParse(\"7m\"),\n\t\t\t\t\t},\n\t\t\t\t\tLimit: v1alpha3.ResourceRequirements{\n\t\t\t\t\t\tMem: resource.MustParse(\"500Mi\"),\n\t\t\t\t\t\tCPU: resource.Quantity{},\n\t\t\t\t\t},\n\t\t\t\t\tConfig: nil,\n\t\t\t\t},\n\t\t\t},\n\t\t\tScanner: v1alpha3.OptionalContainerConfig{\n\t\t\t\tEnabled: false,\n\t\t\t\tContainerConfig: v1alpha3.ContainerConfig{\n\t\t\t\t\tImage: v1alpha3.RepoTag{\n\t\t\t\t\t\tRepo: repo(\"eraser-trivy-scanner\"),\n\t\t\t\t\t\tTag:  version.BuildVersion,\n\t\t\t\t\t},\n\t\t\t\t\tRequest: v1alpha3.ResourceRequirements{\n\t\t\t\t\t\tMem: resource.MustParse(\"500Mi\"),\n\t\t\t\t\t\tCPU: resource.MustParse(\"1000m\"),\n\t\t\t\t\t},\n\t\t\t\t\tLimit: v1alpha3.ResourceRequirements{\n\t\t\t\t\t\tMem: resource.MustParse(\"2Gi\"),\n\t\t\t\t\t\tCPU: resource.MustParse(\"1500m\"),\n\t\t\t\t\t},\n\t\t\t\t\tConfig: &defaultScannerConfig,\n\t\t\t\t},\n\t\t\t},\n\t\t\tRemover: v1alpha3.ContainerConfig{\n\t\t\t\tImage: v1alpha3.RepoTag{\n\t\t\t\t\tRepo: repo(\"remover\"),\n\t\t\t\t\tTag:  version.BuildVersion,\n\t\t\t\t},\n\t\t\t\tRequest: v1alpha3.ResourceRequirements{\n\t\t\t\t\tMem: resource.MustParse(\"25Mi\"),\n\t\t\t\t\tCPU: resource.MustParse(\"7m\"),\n\t\t\t\t},\n\t\t\t\tLimit: v1alpha3.ResourceRequirements{\n\t\t\t\t\tMem: resource.MustParse(\"30Mi\"),\n\t\t\t\t\tCPU: resource.Quantity{},\n\t\t\t\t},\n\t\t\t\tConfig: nil,\n\t\t\t},\n\t\t},\n\t}\n}\n\nfunc repo(basename string) string 
{\n\tif version.DefaultRepo == \"\" {\n\t\treturn basename\n\t}\n\n\treturn fmt.Sprintf(\"%s/%s\", version.DefaultRepo, basename)\n}\n"
  },
  {
    "path": "api/v1alpha3/doc.go",
    "content": "// +k8s:conversion-gen=github.com/eraser-dev/eraser/api/unversioned\npackage v1alpha3\n"
  },
  {
    "path": "api/v1alpha3/eraserconfig_types.go",
    "content": "/*\nCopyright 2021.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\npackage v1alpha3\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"net/url\"\n\t\"time\"\n\n\tcorev1 \"k8s.io/api/core/v1\"\n\t\"k8s.io/apimachinery/pkg/api/resource\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n)\n\ntype (\n\tDuration time.Duration\n\tRuntime  string\n\n\tRuntimeSpec struct {\n\t\tName    Runtime `json:\"name\"`\n\t\tAddress string  `json:\"address\"`\n\t}\n)\n\nconst (\n\tRuntimeContainerd  Runtime = \"containerd\"\n\tRuntimeDockerShim  Runtime = \"dockershim\"\n\tRuntimeCrio        Runtime = \"crio\"\n\tRuntimeNotProvided Runtime = \"\"\n\n\tContainerdPath = \"/run/containerd/containerd.sock\"\n\tDockerPath     = \"/run/dockershim.sock\"\n\tCrioPath       = \"/run/crio/crio.sock\"\n)\n\nfunc ConvertRuntimeToRuntimeSpec(r Runtime) (RuntimeSpec, error) {\n\tvar rs RuntimeSpec\n\n\tswitch r {\n\tcase RuntimeContainerd:\n\t\trs = RuntimeSpec{Name: RuntimeContainerd, Address: fmt.Sprintf(\"unix://%s\", ContainerdPath)}\n\tcase RuntimeDockerShim:\n\t\trs = RuntimeSpec{Name: RuntimeDockerShim, Address: fmt.Sprintf(\"unix://%s\", DockerPath)}\n\tcase RuntimeCrio:\n\t\trs = RuntimeSpec{Name: RuntimeCrio, Address: fmt.Sprintf(\"unix://%s\", CrioPath)}\n\tdefault:\n\t\treturn rs, fmt.Errorf(\"invalid runtime: valid names are %s, %s, %s\", RuntimeContainerd, RuntimeDockerShim, RuntimeCrio)\n\t}\n\n\treturn rs, nil\n}\n\nfunc (td *Duration) UnmarshalJSON(b []byte) error {\n\tvar str string\n\terr := json.Unmarshal(b, &str)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tpd, err := time.ParseDuration(str)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t*td = Duration(pd)\n\treturn nil\n}\n\nfunc (td *Duration) MarshalJSON() ([]byte, error) {\n\treturn []byte(fmt.Sprintf(`\"%s\"`, time.Duration(*td).String())), nil\n}\n\nfunc (r *RuntimeSpec) UnmarshalJSON(b []byte) error {\n\t// create temp RuntimeSpec to prevent recursive error into this function when using unmarshall to check validity of provided RuntimeSpec\n\ttype TempRuntimeSpec struct {\n\t\tName    string `json:\"name\"`\n\t\tAddress string `json:\"address\"`\n\t}\n\tvar rs TempRuntimeSpec\n\terr := json.Unmarshal(b, &rs)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"error unmarshalling into TempRuntimeSpec %v %s\", err, string(b))\n\t}\n\n\tswitch rt := Runtime(rs.Name); rt {\n\t// make sure user provided Runtime is valid\n\tcase RuntimeContainerd, RuntimeDockerShim, RuntimeCrio:\n\t\tif rs.Address != \"\" {\n\t\t\t// check that provided RuntimeAddress is valid\n\t\t\tu, err := url.Parse(rs.Address)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\n\t\t\tswitch u.Scheme {\n\t\t\tcase \"tcp\", \"unix\":\n\t\t\tdefault:\n\t\t\t\treturn fmt.Errorf(\"invalid RuntimeAddress scheme: valid schemes for runtime socket address are `tcp` and `unix`\")\n\t\t\t}\n\n\t\t\tr.Name = Runtime(rs.Name)\n\t\t\tr.Address = rs.Address\n\n\t\t\treturn nil\n\t\t}\n\n\t\t// if RuntimeAddress is not provided, get defaults\n\t\tconverted, err := 
ConvertRuntimeToRuntimeSpec(rt)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\t*r = converted\n\tcase RuntimeNotProvided:\n\t\tif rs.Address != \"\" {\n\t\t\treturn fmt.Errorf(\"runtime name must be provided when an address is specified\")\n\t\t}\n\n\t\t// if both name and address are empty, default to containerd\n\t\tr.Name = RuntimeContainerd\n\t\tr.Address = fmt.Sprintf(\"unix://%s\", ContainerdPath)\n\tdefault:\n\t\treturn fmt.Errorf(\"invalid runtime: valid names are %s, %s, %s\", RuntimeContainerd, RuntimeDockerShim, RuntimeCrio)\n\t}\n\n\treturn nil\n}\n\n// EDIT THIS FILE!  THIS IS SCAFFOLDING FOR YOU TO OWN!\n// NOTE: json tags are required.  Any new fields you add must have json tags for the fields to be serialized.\n\ntype OptionalContainerConfig struct {\n\tEnabled         bool `json:\"enabled,omitempty\"`\n\tContainerConfig `json:\",inline\"`\n}\n\ntype ContainerConfig struct {\n\tImage   RepoTag              `json:\"image,omitempty\"`\n\tRequest ResourceRequirements `json:\"request,omitempty\"`\n\tLimit   ResourceRequirements `json:\"limit,omitempty\"`\n\tConfig  *string              `json:\"config,omitempty\"`\n\tVolumes []corev1.Volume      `json:\"volumes,omitempty\"`\n}\n\ntype ManagerConfig struct {\n\tRuntime             RuntimeSpec       `json:\"runtime,omitempty\"`\n\tOTLPEndpoint        string            `json:\"otlpEndpoint,omitempty\"`\n\tLogLevel            string            `json:\"logLevel,omitempty\"`\n\tScheduling          ScheduleConfig    `json:\"scheduling,omitempty\"`\n\tProfile             ProfileConfig     `json:\"profile,omitempty\"`\n\tImageJob            ImageJobConfig    `json:\"imageJob,omitempty\"`\n\tPullSecrets         []string          `json:\"pullSecrets,omitempty\"`\n\tNodeFilter          NodeFilterConfig  `json:\"nodeFilter,omitempty\"`\n\tPriorityClassName   string            `json:\"priorityClassName,omitempty\"`\n\tAdditionalPodLabels map[string]string `json:\"additionalPodLabels,omitempty\"`\n}\n\ntype ScheduleConfig struct {\n\tRepeatInterval   Duration `json:\"repeatInterval,omitempty\"`\n\tBeginImmediately bool     `json:\"beginImmediately,omitempty\"`\n}\n\ntype ProfileConfig struct {\n\tEnabled bool `json:\"enabled,omitempty\"`\n\tPort    int  `json:\"port,omitempty\"`\n}\n\ntype ImageJobConfig struct {\n\tSuccessRatio float64               `json:\"successRatio,omitempty\"`\n\tCleanup      ImageJobCleanupConfig `json:\"cleanup,omitempty\"`\n}\n\ntype ImageJobCleanupConfig struct {\n\tDelayOnSuccess Duration `json:\"delayOnSuccess,omitempty\"`\n\tDelayOnFailure Duration `json:\"delayOnFailure,omitempty\"`\n}\n\ntype NodeFilterConfig struct {\n\tType      string   `json:\"type,omitempty\"`\n\tSelectors []string `json:\"selectors,omitempty\"`\n}\n\ntype ResourceRequirements struct {\n\tMem resource.Quantity `json:\"mem,omitempty\"`\n\tCPU resource.Quantity `json:\"cpu,omitempty\"`\n}\n\ntype RepoTag struct {\n\tRepo string `json:\"repo,omitempty\"`\n\tTag  string `json:\"tag,omitempty\"`\n}\n\ntype Components struct {\n\tCollector OptionalContainerConfig `json:\"collector,omitempty\"`\n\tScanner   OptionalContainerConfig `json:\"scanner,omitempty\"`\n\tRemover   ContainerConfig         `json:\"remover,omitempty\"`\n}\n\n//+kubebuilder:object:root=true\n\n// EraserConfig is the Schema for the eraserconfigs API.\ntype EraserConfig struct {\n\tmetav1.TypeMeta `json:\",inline\"`\n\tManager         ManagerConfig `json:\"manager\"`\n\tComponents      Components    `json:\"components\"`\n}\n\nfunc init() {\n\tSchemeBuilder.Register(&EraserConfig{})\n}\n"
  },
  {
    "path": "api/v1alpha3/groupversion_info.go",
    "content": "/*\nCopyright 2021.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\n// Package v1alpha3 contains API Schema definitions for the eraser.sh v1alpha3 API group\n// +kubebuilder:object:generate=true\n// +groupName=eraser.sh\npackage v1alpha3\n\nimport (\n\truntime \"k8s.io/apimachinery/pkg/runtime\"\n\t\"k8s.io/apimachinery/pkg/runtime/schema\"\n\t\"sigs.k8s.io/controller-runtime/pkg/scheme\"\n)\n\nvar (\n\t// GroupVersion is group version used to register these objects.\n\tGroupVersion = schema.GroupVersion{Group: \"eraser.sh\", Version: \"v1alpha3\"}\n\n\t// SchemeBuilder is used to add go types to the GroupVersionKind scheme.\n\tSchemeBuilder = &scheme.Builder{GroupVersion: GroupVersion}\n\n\tlocalSchemeBuilder = runtime.NewSchemeBuilder(SchemeBuilder.AddToScheme)\n\n\t// AddToScheme adds the types in this group-version to the given scheme.\n\tAddToScheme = SchemeBuilder.AddToScheme\n)\n"
  },
  {
    "path": "api/v1alpha3/runtime_spec_test.go",
    "content": "package v1alpha3\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"testing\"\n)\n\nfunc TestConvertRuntimeToRuntimeSpec(t *testing.T) {\n\ttype testCase struct {\n\t\tinput     Runtime\n\t\texpected  RuntimeSpec\n\t\tshouldErr bool\n\t}\n\n\ttests := map[string]testCase{\n\t\t\"Containerd\": {\n\t\t\tinput:     RuntimeContainerd,\n\t\t\texpected:  RuntimeSpec{Name: RuntimeContainerd, Address: fmt.Sprintf(\"unix://%s\", ContainerdPath)},\n\t\t\tshouldErr: false,\n\t\t},\n\t\t\"DockerShim\": {\n\t\t\tinput:     RuntimeDockerShim,\n\t\t\texpected:  RuntimeSpec{Name: RuntimeDockerShim, Address: fmt.Sprintf(\"unix://%s\", DockerPath)},\n\t\t\tshouldErr: false,\n\t\t},\n\t\t\"Crio\": {\n\t\t\tinput:     RuntimeCrio,\n\t\t\texpected:  RuntimeSpec{Name: RuntimeCrio, Address: fmt.Sprintf(\"unix://%s\", CrioPath)},\n\t\t\tshouldErr: false,\n\t\t},\n\t\t\"InvalidRuntime\": {\n\t\t\tinput:     \"invalid\",\n\t\t\texpected:  RuntimeSpec{},\n\t\t\tshouldErr: true,\n\t\t},\n\t}\n\n\tfor name, test := range tests {\n\t\tt.Run(name, func(t *testing.T) {\n\t\t\tresult, err := ConvertRuntimeToRuntimeSpec(test.input)\n\n\t\t\tif test.shouldErr && err == nil {\n\t\t\t\tt.Errorf(\"Expected an error but got nil\")\n\t\t\t}\n\n\t\t\tif !test.shouldErr && err != nil {\n\t\t\t\tt.Errorf(\"Error: %v\", err)\n\t\t\t}\n\n\t\t\tif result != test.expected {\n\t\t\t\tt.Errorf(\"Unexpected result. Expected %v, but got %v\", test.expected, result)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestUnmarshalJSON(t *testing.T) {\n\ttype testCase struct {\n\t\tinput     []byte\n\t\texpected  RuntimeSpec\n\t\tshouldErr bool\n\t}\n\n\ttests := map[string]testCase{\n\t\t\"ValidContainerd\": {\n\t\t\tinput:     []byte(`{\"name\": \"containerd\", \"address\": \"unix:///run/containerd/containerd.sock\"}`),\n\t\t\texpected:  RuntimeSpec{Name: RuntimeContainerd, Address: fmt.Sprintf(\"unix://%s\", ContainerdPath)},\n\t\t\tshouldErr: false,\n\t\t},\n\t\t\"ValidDockerShim\": {\n\t\t\tinput:     []byte(`{\"name\": \"dockershim\", \"address\": \"unix:///run/dockershim.sock\"}`),\n\t\t\texpected:  RuntimeSpec{Name: RuntimeDockerShim, Address: fmt.Sprintf(\"unix://%s\", DockerPath)},\n\t\t\tshouldErr: false,\n\t\t},\n\t\t\"ValidCrio\": {\n\t\t\tinput:     []byte(`{\"name\": \"crio\", \"address\": \"unix:///run/crio/crio.sock\"}`),\n\t\t\texpected:  RuntimeSpec{Name: RuntimeCrio, Address: fmt.Sprintf(\"unix://%s\", CrioPath)},\n\t\t\tshouldErr: false,\n\t\t},\n\t\t\"InvalidName\": {\n\t\t\tinput:     []byte(`{\"name\": \"invalid\", \"address\": \"unix:///invalid\"}`),\n\t\t\texpected:  RuntimeSpec{},\n\t\t\tshouldErr: true,\n\t\t},\n\t\t\"InvalidAddressScheme\": {\n\t\t\tinput:     []byte(`{\"name\": \"containerd\", \"address\": \"http://invalid\"}`),\n\t\t\texpected:  RuntimeSpec{},\n\t\t\tshouldErr: true,\n\t\t},\n\t}\n\n\tfor name, test := range tests {\n\t\tt.Run(name, func(t *testing.T) {\n\t\t\tvar rs RuntimeSpec\n\t\t\terr := json.Unmarshal(test.input, &rs)\n\n\t\t\tif test.shouldErr && err == nil {\n\t\t\t\tt.Error(\"Expected an error but got nil\")\n\t\t\t}\n\n\t\t\tif !test.shouldErr && err != nil {\n\t\t\t\tt.Errorf(\"Error: %v\", err)\n\t\t\t}\n\n\t\t\tif rs != test.expected {\n\t\t\t\tt.Errorf(\"Unexpected result. Expected %v, but got %v\", test.expected, rs)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "api/v1alpha3/zz_generated.conversion.go",
    "content": "//go:build !ignore_autogenerated\n// +build !ignore_autogenerated\n\n/*\nCopyright 2021.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n// Code generated by conversion-gen. DO NOT EDIT.\n\npackage v1alpha3\n\nimport (\n\tunsafe \"unsafe\"\n\n\tunversioned \"github.com/eraser-dev/eraser/api/unversioned\"\n\tv1 \"k8s.io/api/core/v1\"\n\tconversion \"k8s.io/apimachinery/pkg/conversion\"\n\truntime \"k8s.io/apimachinery/pkg/runtime\"\n)\n\nfunc init() {\n\tlocalSchemeBuilder.Register(RegisterConversions)\n}\n\n// RegisterConversions adds conversion functions to the given scheme.\n// Public to allow building arbitrary schemes.\nfunc RegisterConversions(s *runtime.Scheme) error {\n\tif err := s.AddGeneratedConversionFunc((*Components)(nil), (*unversioned.Components)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha3_Components_To_unversioned_Components(a.(*Components), b.(*unversioned.Components), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.Components)(nil), (*Components)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_Components_To_v1alpha3_Components(a.(*unversioned.Components), b.(*Components), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*ContainerConfig)(nil), (*unversioned.ContainerConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha3_ContainerConfig_To_unversioned_ContainerConfig(a.(*ContainerConfig), b.(*unversioned.ContainerConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.ContainerConfig)(nil), (*ContainerConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_ContainerConfig_To_v1alpha3_ContainerConfig(a.(*unversioned.ContainerConfig), b.(*ContainerConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*EraserConfig)(nil), (*unversioned.EraserConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha3_EraserConfig_To_unversioned_EraserConfig(a.(*EraserConfig), b.(*unversioned.EraserConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.EraserConfig)(nil), (*EraserConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_EraserConfig_To_v1alpha3_EraserConfig(a.(*unversioned.EraserConfig), b.(*EraserConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*ImageJobCleanupConfig)(nil), (*unversioned.ImageJobCleanupConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha3_ImageJobCleanupConfig_To_unversioned_ImageJobCleanupConfig(a.(*ImageJobCleanupConfig), b.(*unversioned.ImageJobCleanupConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := 
s.AddGeneratedConversionFunc((*unversioned.ImageJobCleanupConfig)(nil), (*ImageJobCleanupConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_ImageJobCleanupConfig_To_v1alpha3_ImageJobCleanupConfig(a.(*unversioned.ImageJobCleanupConfig), b.(*ImageJobCleanupConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*ImageJobConfig)(nil), (*unversioned.ImageJobConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha3_ImageJobConfig_To_unversioned_ImageJobConfig(a.(*ImageJobConfig), b.(*unversioned.ImageJobConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.ImageJobConfig)(nil), (*ImageJobConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_ImageJobConfig_To_v1alpha3_ImageJobConfig(a.(*unversioned.ImageJobConfig), b.(*ImageJobConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*ManagerConfig)(nil), (*unversioned.ManagerConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha3_ManagerConfig_To_unversioned_ManagerConfig(a.(*ManagerConfig), b.(*unversioned.ManagerConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.ManagerConfig)(nil), (*ManagerConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_ManagerConfig_To_v1alpha3_ManagerConfig(a.(*unversioned.ManagerConfig), b.(*ManagerConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*NodeFilterConfig)(nil), (*unversioned.NodeFilterConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha3_NodeFilterConfig_To_unversioned_NodeFilterConfig(a.(*NodeFilterConfig), b.(*unversioned.NodeFilterConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.NodeFilterConfig)(nil), (*NodeFilterConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_NodeFilterConfig_To_v1alpha3_NodeFilterConfig(a.(*unversioned.NodeFilterConfig), b.(*NodeFilterConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*OptionalContainerConfig)(nil), (*unversioned.OptionalContainerConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha3_OptionalContainerConfig_To_unversioned_OptionalContainerConfig(a.(*OptionalContainerConfig), b.(*unversioned.OptionalContainerConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.OptionalContainerConfig)(nil), (*OptionalContainerConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_OptionalContainerConfig_To_v1alpha3_OptionalContainerConfig(a.(*unversioned.OptionalContainerConfig), b.(*OptionalContainerConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*ProfileConfig)(nil), (*unversioned.ProfileConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha3_ProfileConfig_To_unversioned_ProfileConfig(a.(*ProfileConfig), b.(*unversioned.ProfileConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.ProfileConfig)(nil), 
(*ProfileConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_ProfileConfig_To_v1alpha3_ProfileConfig(a.(*unversioned.ProfileConfig), b.(*ProfileConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*RepoTag)(nil), (*unversioned.RepoTag)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha3_RepoTag_To_unversioned_RepoTag(a.(*RepoTag), b.(*unversioned.RepoTag), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.RepoTag)(nil), (*RepoTag)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_RepoTag_To_v1alpha3_RepoTag(a.(*unversioned.RepoTag), b.(*RepoTag), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*ResourceRequirements)(nil), (*unversioned.ResourceRequirements)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha3_ResourceRequirements_To_unversioned_ResourceRequirements(a.(*ResourceRequirements), b.(*unversioned.ResourceRequirements), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.ResourceRequirements)(nil), (*ResourceRequirements)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_ResourceRequirements_To_v1alpha3_ResourceRequirements(a.(*unversioned.ResourceRequirements), b.(*ResourceRequirements), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*RuntimeSpec)(nil), (*unversioned.RuntimeSpec)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha3_RuntimeSpec_To_unversioned_RuntimeSpec(a.(*RuntimeSpec), b.(*unversioned.RuntimeSpec), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.RuntimeSpec)(nil), (*RuntimeSpec)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_RuntimeSpec_To_v1alpha3_RuntimeSpec(a.(*unversioned.RuntimeSpec), b.(*RuntimeSpec), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*ScheduleConfig)(nil), (*unversioned.ScheduleConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_v1alpha3_ScheduleConfig_To_unversioned_ScheduleConfig(a.(*ScheduleConfig), b.(*unversioned.ScheduleConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\tif err := s.AddGeneratedConversionFunc((*unversioned.ScheduleConfig)(nil), (*ScheduleConfig)(nil), func(a, b interface{}, scope conversion.Scope) error {\n\t\treturn Convert_unversioned_ScheduleConfig_To_v1alpha3_ScheduleConfig(a.(*unversioned.ScheduleConfig), b.(*ScheduleConfig), scope)\n\t}); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\nfunc autoConvert_v1alpha3_Components_To_unversioned_Components(in *Components, out *unversioned.Components, s conversion.Scope) error {\n\tif err := Convert_v1alpha3_OptionalContainerConfig_To_unversioned_OptionalContainerConfig(&in.Collector, &out.Collector, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_v1alpha3_OptionalContainerConfig_To_unversioned_OptionalContainerConfig(&in.Scanner, &out.Scanner, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_v1alpha3_ContainerConfig_To_unversioned_ContainerConfig(&in.Remover, &out.Remover, s); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// 
Convert_v1alpha3_Components_To_unversioned_Components is an autogenerated conversion function.\nfunc Convert_v1alpha3_Components_To_unversioned_Components(in *Components, out *unversioned.Components, s conversion.Scope) error {\n\treturn autoConvert_v1alpha3_Components_To_unversioned_Components(in, out, s)\n}\n\nfunc autoConvert_unversioned_Components_To_v1alpha3_Components(in *unversioned.Components, out *Components, s conversion.Scope) error {\n\tif err := Convert_unversioned_OptionalContainerConfig_To_v1alpha3_OptionalContainerConfig(&in.Collector, &out.Collector, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_unversioned_OptionalContainerConfig_To_v1alpha3_OptionalContainerConfig(&in.Scanner, &out.Scanner, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_unversioned_ContainerConfig_To_v1alpha3_ContainerConfig(&in.Remover, &out.Remover, s); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// Convert_unversioned_Components_To_v1alpha3_Components is an autogenerated conversion function.\nfunc Convert_unversioned_Components_To_v1alpha3_Components(in *unversioned.Components, out *Components, s conversion.Scope) error {\n\treturn autoConvert_unversioned_Components_To_v1alpha3_Components(in, out, s)\n}\n\nfunc autoConvert_v1alpha3_ContainerConfig_To_unversioned_ContainerConfig(in *ContainerConfig, out *unversioned.ContainerConfig, s conversion.Scope) error {\n\tif err := Convert_v1alpha3_RepoTag_To_unversioned_RepoTag(&in.Image, &out.Image, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_v1alpha3_ResourceRequirements_To_unversioned_ResourceRequirements(&in.Request, &out.Request, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_v1alpha3_ResourceRequirements_To_unversioned_ResourceRequirements(&in.Limit, &out.Limit, s); err != nil {\n\t\treturn err\n\t}\n\tout.Config = (*string)(unsafe.Pointer(in.Config))\n\tout.Volumes = *(*[]v1.Volume)(unsafe.Pointer(&in.Volumes))\n\treturn nil\n}\n\n// Convert_v1alpha3_ContainerConfig_To_unversioned_ContainerConfig is an autogenerated conversion function.\nfunc Convert_v1alpha3_ContainerConfig_To_unversioned_ContainerConfig(in *ContainerConfig, out *unversioned.ContainerConfig, s conversion.Scope) error {\n\treturn autoConvert_v1alpha3_ContainerConfig_To_unversioned_ContainerConfig(in, out, s)\n}\n\nfunc autoConvert_unversioned_ContainerConfig_To_v1alpha3_ContainerConfig(in *unversioned.ContainerConfig, out *ContainerConfig, s conversion.Scope) error {\n\tif err := Convert_unversioned_RepoTag_To_v1alpha3_RepoTag(&in.Image, &out.Image, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_unversioned_ResourceRequirements_To_v1alpha3_ResourceRequirements(&in.Request, &out.Request, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_unversioned_ResourceRequirements_To_v1alpha3_ResourceRequirements(&in.Limit, &out.Limit, s); err != nil {\n\t\treturn err\n\t}\n\tout.Config = (*string)(unsafe.Pointer(in.Config))\n\tout.Volumes = *(*[]v1.Volume)(unsafe.Pointer(&in.Volumes))\n\treturn nil\n}\n\n// Convert_unversioned_ContainerConfig_To_v1alpha3_ContainerConfig is an autogenerated conversion function.\nfunc Convert_unversioned_ContainerConfig_To_v1alpha3_ContainerConfig(in *unversioned.ContainerConfig, out *ContainerConfig, s conversion.Scope) error {\n\treturn autoConvert_unversioned_ContainerConfig_To_v1alpha3_ContainerConfig(in, out, s)\n}\n\nfunc autoConvert_v1alpha3_EraserConfig_To_unversioned_EraserConfig(in *EraserConfig, out *unversioned.EraserConfig, s conversion.Scope) error {\n\tif err := 
Convert_v1alpha3_ManagerConfig_To_unversioned_ManagerConfig(&in.Manager, &out.Manager, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_v1alpha3_Components_To_unversioned_Components(&in.Components, &out.Components, s); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// Convert_v1alpha3_EraserConfig_To_unversioned_EraserConfig is an autogenerated conversion function.\nfunc Convert_v1alpha3_EraserConfig_To_unversioned_EraserConfig(in *EraserConfig, out *unversioned.EraserConfig, s conversion.Scope) error {\n\treturn autoConvert_v1alpha3_EraserConfig_To_unversioned_EraserConfig(in, out, s)\n}\n\nfunc autoConvert_unversioned_EraserConfig_To_v1alpha3_EraserConfig(in *unversioned.EraserConfig, out *EraserConfig, s conversion.Scope) error {\n\tif err := Convert_unversioned_ManagerConfig_To_v1alpha3_ManagerConfig(&in.Manager, &out.Manager, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_unversioned_Components_To_v1alpha3_Components(&in.Components, &out.Components, s); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// Convert_unversioned_EraserConfig_To_v1alpha3_EraserConfig is an autogenerated conversion function.\nfunc Convert_unversioned_EraserConfig_To_v1alpha3_EraserConfig(in *unversioned.EraserConfig, out *EraserConfig, s conversion.Scope) error {\n\treturn autoConvert_unversioned_EraserConfig_To_v1alpha3_EraserConfig(in, out, s)\n}\n\nfunc autoConvert_v1alpha3_ImageJobCleanupConfig_To_unversioned_ImageJobCleanupConfig(in *ImageJobCleanupConfig, out *unversioned.ImageJobCleanupConfig, s conversion.Scope) error {\n\tout.DelayOnSuccess = unversioned.Duration(in.DelayOnSuccess)\n\tout.DelayOnFailure = unversioned.Duration(in.DelayOnFailure)\n\treturn nil\n}\n\n// Convert_v1alpha3_ImageJobCleanupConfig_To_unversioned_ImageJobCleanupConfig is an autogenerated conversion function.\nfunc Convert_v1alpha3_ImageJobCleanupConfig_To_unversioned_ImageJobCleanupConfig(in *ImageJobCleanupConfig, out *unversioned.ImageJobCleanupConfig, s conversion.Scope) error {\n\treturn autoConvert_v1alpha3_ImageJobCleanupConfig_To_unversioned_ImageJobCleanupConfig(in, out, s)\n}\n\nfunc autoConvert_unversioned_ImageJobCleanupConfig_To_v1alpha3_ImageJobCleanupConfig(in *unversioned.ImageJobCleanupConfig, out *ImageJobCleanupConfig, s conversion.Scope) error {\n\tout.DelayOnSuccess = Duration(in.DelayOnSuccess)\n\tout.DelayOnFailure = Duration(in.DelayOnFailure)\n\treturn nil\n}\n\n// Convert_unversioned_ImageJobCleanupConfig_To_v1alpha3_ImageJobCleanupConfig is an autogenerated conversion function.\nfunc Convert_unversioned_ImageJobCleanupConfig_To_v1alpha3_ImageJobCleanupConfig(in *unversioned.ImageJobCleanupConfig, out *ImageJobCleanupConfig, s conversion.Scope) error {\n\treturn autoConvert_unversioned_ImageJobCleanupConfig_To_v1alpha3_ImageJobCleanupConfig(in, out, s)\n}\n\nfunc autoConvert_v1alpha3_ImageJobConfig_To_unversioned_ImageJobConfig(in *ImageJobConfig, out *unversioned.ImageJobConfig, s conversion.Scope) error {\n\tout.SuccessRatio = in.SuccessRatio\n\tif err := Convert_v1alpha3_ImageJobCleanupConfig_To_unversioned_ImageJobCleanupConfig(&in.Cleanup, &out.Cleanup, s); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// Convert_v1alpha3_ImageJobConfig_To_unversioned_ImageJobConfig is an autogenerated conversion function.\nfunc Convert_v1alpha3_ImageJobConfig_To_unversioned_ImageJobConfig(in *ImageJobConfig, out *unversioned.ImageJobConfig, s conversion.Scope) error {\n\treturn autoConvert_v1alpha3_ImageJobConfig_To_unversioned_ImageJobConfig(in, out, s)\n}\n\nfunc 
autoConvert_unversioned_ImageJobConfig_To_v1alpha3_ImageJobConfig(in *unversioned.ImageJobConfig, out *ImageJobConfig, s conversion.Scope) error {\n\tout.SuccessRatio = in.SuccessRatio\n\tif err := Convert_unversioned_ImageJobCleanupConfig_To_v1alpha3_ImageJobCleanupConfig(&in.Cleanup, &out.Cleanup, s); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// Convert_unversioned_ImageJobConfig_To_v1alpha3_ImageJobConfig is an autogenerated conversion function.\nfunc Convert_unversioned_ImageJobConfig_To_v1alpha3_ImageJobConfig(in *unversioned.ImageJobConfig, out *ImageJobConfig, s conversion.Scope) error {\n\treturn autoConvert_unversioned_ImageJobConfig_To_v1alpha3_ImageJobConfig(in, out, s)\n}\n\nfunc autoConvert_v1alpha3_ManagerConfig_To_unversioned_ManagerConfig(in *ManagerConfig, out *unversioned.ManagerConfig, s conversion.Scope) error {\n\tif err := Convert_v1alpha3_RuntimeSpec_To_unversioned_RuntimeSpec(&in.Runtime, &out.Runtime, s); err != nil {\n\t\treturn err\n\t}\n\tout.OTLPEndpoint = in.OTLPEndpoint\n\tout.LogLevel = in.LogLevel\n\tif err := Convert_v1alpha3_ScheduleConfig_To_unversioned_ScheduleConfig(&in.Scheduling, &out.Scheduling, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_v1alpha3_ProfileConfig_To_unversioned_ProfileConfig(&in.Profile, &out.Profile, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_v1alpha3_ImageJobConfig_To_unversioned_ImageJobConfig(&in.ImageJob, &out.ImageJob, s); err != nil {\n\t\treturn err\n\t}\n\tout.PullSecrets = *(*[]string)(unsafe.Pointer(&in.PullSecrets))\n\tif err := Convert_v1alpha3_NodeFilterConfig_To_unversioned_NodeFilterConfig(&in.NodeFilter, &out.NodeFilter, s); err != nil {\n\t\treturn err\n\t}\n\tout.PriorityClassName = in.PriorityClassName\n\tout.AdditionalPodLabels = *(*map[string]string)(unsafe.Pointer(&in.AdditionalPodLabels))\n\treturn nil\n}\n\n// Convert_v1alpha3_ManagerConfig_To_unversioned_ManagerConfig is an autogenerated conversion function.\nfunc Convert_v1alpha3_ManagerConfig_To_unversioned_ManagerConfig(in *ManagerConfig, out *unversioned.ManagerConfig, s conversion.Scope) error {\n\treturn autoConvert_v1alpha3_ManagerConfig_To_unversioned_ManagerConfig(in, out, s)\n}\n\nfunc autoConvert_unversioned_ManagerConfig_To_v1alpha3_ManagerConfig(in *unversioned.ManagerConfig, out *ManagerConfig, s conversion.Scope) error {\n\tif err := Convert_unversioned_RuntimeSpec_To_v1alpha3_RuntimeSpec(&in.Runtime, &out.Runtime, s); err != nil {\n\t\treturn err\n\t}\n\tout.OTLPEndpoint = in.OTLPEndpoint\n\tout.LogLevel = in.LogLevel\n\tif err := Convert_unversioned_ScheduleConfig_To_v1alpha3_ScheduleConfig(&in.Scheduling, &out.Scheduling, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_unversioned_ProfileConfig_To_v1alpha3_ProfileConfig(&in.Profile, &out.Profile, s); err != nil {\n\t\treturn err\n\t}\n\tif err := Convert_unversioned_ImageJobConfig_To_v1alpha3_ImageJobConfig(&in.ImageJob, &out.ImageJob, s); err != nil {\n\t\treturn err\n\t}\n\tout.PullSecrets = *(*[]string)(unsafe.Pointer(&in.PullSecrets))\n\tif err := Convert_unversioned_NodeFilterConfig_To_v1alpha3_NodeFilterConfig(&in.NodeFilter, &out.NodeFilter, s); err != nil {\n\t\treturn err\n\t}\n\tout.PriorityClassName = in.PriorityClassName\n\tout.AdditionalPodLabels = *(*map[string]string)(unsafe.Pointer(&in.AdditionalPodLabels))\n\treturn nil\n}\n\n// Convert_unversioned_ManagerConfig_To_v1alpha3_ManagerConfig is an autogenerated conversion function.\nfunc Convert_unversioned_ManagerConfig_To_v1alpha3_ManagerConfig(in 
*unversioned.ManagerConfig, out *ManagerConfig, s conversion.Scope) error {\n\treturn autoConvert_unversioned_ManagerConfig_To_v1alpha3_ManagerConfig(in, out, s)\n}\n\nfunc autoConvert_v1alpha3_NodeFilterConfig_To_unversioned_NodeFilterConfig(in *NodeFilterConfig, out *unversioned.NodeFilterConfig, s conversion.Scope) error {\n\tout.Type = in.Type\n\tout.Selectors = *(*[]string)(unsafe.Pointer(&in.Selectors))\n\treturn nil\n}\n\n// Convert_v1alpha3_NodeFilterConfig_To_unversioned_NodeFilterConfig is an autogenerated conversion function.\nfunc Convert_v1alpha3_NodeFilterConfig_To_unversioned_NodeFilterConfig(in *NodeFilterConfig, out *unversioned.NodeFilterConfig, s conversion.Scope) error {\n\treturn autoConvert_v1alpha3_NodeFilterConfig_To_unversioned_NodeFilterConfig(in, out, s)\n}\n\nfunc autoConvert_unversioned_NodeFilterConfig_To_v1alpha3_NodeFilterConfig(in *unversioned.NodeFilterConfig, out *NodeFilterConfig, s conversion.Scope) error {\n\tout.Type = in.Type\n\tout.Selectors = *(*[]string)(unsafe.Pointer(&in.Selectors))\n\treturn nil\n}\n\n// Convert_unversioned_NodeFilterConfig_To_v1alpha3_NodeFilterConfig is an autogenerated conversion function.\nfunc Convert_unversioned_NodeFilterConfig_To_v1alpha3_NodeFilterConfig(in *unversioned.NodeFilterConfig, out *NodeFilterConfig, s conversion.Scope) error {\n\treturn autoConvert_unversioned_NodeFilterConfig_To_v1alpha3_NodeFilterConfig(in, out, s)\n}\n\nfunc autoConvert_v1alpha3_OptionalContainerConfig_To_unversioned_OptionalContainerConfig(in *OptionalContainerConfig, out *unversioned.OptionalContainerConfig, s conversion.Scope) error {\n\tout.Enabled = in.Enabled\n\tif err := Convert_v1alpha3_ContainerConfig_To_unversioned_ContainerConfig(&in.ContainerConfig, &out.ContainerConfig, s); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// Convert_v1alpha3_OptionalContainerConfig_To_unversioned_OptionalContainerConfig is an autogenerated conversion function.\nfunc Convert_v1alpha3_OptionalContainerConfig_To_unversioned_OptionalContainerConfig(in *OptionalContainerConfig, out *unversioned.OptionalContainerConfig, s conversion.Scope) error {\n\treturn autoConvert_v1alpha3_OptionalContainerConfig_To_unversioned_OptionalContainerConfig(in, out, s)\n}\n\nfunc autoConvert_unversioned_OptionalContainerConfig_To_v1alpha3_OptionalContainerConfig(in *unversioned.OptionalContainerConfig, out *OptionalContainerConfig, s conversion.Scope) error {\n\tout.Enabled = in.Enabled\n\tif err := Convert_unversioned_ContainerConfig_To_v1alpha3_ContainerConfig(&in.ContainerConfig, &out.ContainerConfig, s); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// Convert_unversioned_OptionalContainerConfig_To_v1alpha3_OptionalContainerConfig is an autogenerated conversion function.\nfunc Convert_unversioned_OptionalContainerConfig_To_v1alpha3_OptionalContainerConfig(in *unversioned.OptionalContainerConfig, out *OptionalContainerConfig, s conversion.Scope) error {\n\treturn autoConvert_unversioned_OptionalContainerConfig_To_v1alpha3_OptionalContainerConfig(in, out, s)\n}\n\nfunc autoConvert_v1alpha3_ProfileConfig_To_unversioned_ProfileConfig(in *ProfileConfig, out *unversioned.ProfileConfig, s conversion.Scope) error {\n\tout.Enabled = in.Enabled\n\tout.Port = in.Port\n\treturn nil\n}\n\n// Convert_v1alpha3_ProfileConfig_To_unversioned_ProfileConfig is an autogenerated conversion function.\nfunc Convert_v1alpha3_ProfileConfig_To_unversioned_ProfileConfig(in *ProfileConfig, out *unversioned.ProfileConfig, s conversion.Scope) error {\n\treturn 
autoConvert_v1alpha3_ProfileConfig_To_unversioned_ProfileConfig(in, out, s)\n}\n\nfunc autoConvert_unversioned_ProfileConfig_To_v1alpha3_ProfileConfig(in *unversioned.ProfileConfig, out *ProfileConfig, s conversion.Scope) error {\n\tout.Enabled = in.Enabled\n\tout.Port = in.Port\n\treturn nil\n}\n\n// Convert_unversioned_ProfileConfig_To_v1alpha3_ProfileConfig is an autogenerated conversion function.\nfunc Convert_unversioned_ProfileConfig_To_v1alpha3_ProfileConfig(in *unversioned.ProfileConfig, out *ProfileConfig, s conversion.Scope) error {\n\treturn autoConvert_unversioned_ProfileConfig_To_v1alpha3_ProfileConfig(in, out, s)\n}\n\nfunc autoConvert_v1alpha3_RepoTag_To_unversioned_RepoTag(in *RepoTag, out *unversioned.RepoTag, s conversion.Scope) error {\n\tout.Repo = in.Repo\n\tout.Tag = in.Tag\n\treturn nil\n}\n\n// Convert_v1alpha3_RepoTag_To_unversioned_RepoTag is an autogenerated conversion function.\nfunc Convert_v1alpha3_RepoTag_To_unversioned_RepoTag(in *RepoTag, out *unversioned.RepoTag, s conversion.Scope) error {\n\treturn autoConvert_v1alpha3_RepoTag_To_unversioned_RepoTag(in, out, s)\n}\n\nfunc autoConvert_unversioned_RepoTag_To_v1alpha3_RepoTag(in *unversioned.RepoTag, out *RepoTag, s conversion.Scope) error {\n\tout.Repo = in.Repo\n\tout.Tag = in.Tag\n\treturn nil\n}\n\n// Convert_unversioned_RepoTag_To_v1alpha3_RepoTag is an autogenerated conversion function.\nfunc Convert_unversioned_RepoTag_To_v1alpha3_RepoTag(in *unversioned.RepoTag, out *RepoTag, s conversion.Scope) error {\n\treturn autoConvert_unversioned_RepoTag_To_v1alpha3_RepoTag(in, out, s)\n}\n\nfunc autoConvert_v1alpha3_ResourceRequirements_To_unversioned_ResourceRequirements(in *ResourceRequirements, out *unversioned.ResourceRequirements, s conversion.Scope) error {\n\tout.Mem = in.Mem\n\tout.CPU = in.CPU\n\treturn nil\n}\n\n// Convert_v1alpha3_ResourceRequirements_To_unversioned_ResourceRequirements is an autogenerated conversion function.\nfunc Convert_v1alpha3_ResourceRequirements_To_unversioned_ResourceRequirements(in *ResourceRequirements, out *unversioned.ResourceRequirements, s conversion.Scope) error {\n\treturn autoConvert_v1alpha3_ResourceRequirements_To_unversioned_ResourceRequirements(in, out, s)\n}\n\nfunc autoConvert_unversioned_ResourceRequirements_To_v1alpha3_ResourceRequirements(in *unversioned.ResourceRequirements, out *ResourceRequirements, s conversion.Scope) error {\n\tout.Mem = in.Mem\n\tout.CPU = in.CPU\n\treturn nil\n}\n\n// Convert_unversioned_ResourceRequirements_To_v1alpha3_ResourceRequirements is an autogenerated conversion function.\nfunc Convert_unversioned_ResourceRequirements_To_v1alpha3_ResourceRequirements(in *unversioned.ResourceRequirements, out *ResourceRequirements, s conversion.Scope) error {\n\treturn autoConvert_unversioned_ResourceRequirements_To_v1alpha3_ResourceRequirements(in, out, s)\n}\n\nfunc autoConvert_v1alpha3_RuntimeSpec_To_unversioned_RuntimeSpec(in *RuntimeSpec, out *unversioned.RuntimeSpec, s conversion.Scope) error {\n\tout.Name = unversioned.Runtime(in.Name)\n\tout.Address = in.Address\n\treturn nil\n}\n\n// Convert_v1alpha3_RuntimeSpec_To_unversioned_RuntimeSpec is an autogenerated conversion function.\nfunc Convert_v1alpha3_RuntimeSpec_To_unversioned_RuntimeSpec(in *RuntimeSpec, out *unversioned.RuntimeSpec, s conversion.Scope) error {\n\treturn autoConvert_v1alpha3_RuntimeSpec_To_unversioned_RuntimeSpec(in, out, s)\n}\n\nfunc autoConvert_unversioned_RuntimeSpec_To_v1alpha3_RuntimeSpec(in *unversioned.RuntimeSpec, out *RuntimeSpec, s conversion.Scope) 
error {\n\tout.Name = Runtime(in.Name)\n\tout.Address = in.Address\n\treturn nil\n}\n\n// Convert_unversioned_RuntimeSpec_To_v1alpha3_RuntimeSpec is an autogenerated conversion function.\nfunc Convert_unversioned_RuntimeSpec_To_v1alpha3_RuntimeSpec(in *unversioned.RuntimeSpec, out *RuntimeSpec, s conversion.Scope) error {\n\treturn autoConvert_unversioned_RuntimeSpec_To_v1alpha3_RuntimeSpec(in, out, s)\n}\n\nfunc autoConvert_v1alpha3_ScheduleConfig_To_unversioned_ScheduleConfig(in *ScheduleConfig, out *unversioned.ScheduleConfig, s conversion.Scope) error {\n\tout.RepeatInterval = unversioned.Duration(in.RepeatInterval)\n\tout.BeginImmediately = in.BeginImmediately\n\treturn nil\n}\n\n// Convert_v1alpha3_ScheduleConfig_To_unversioned_ScheduleConfig is an autogenerated conversion function.\nfunc Convert_v1alpha3_ScheduleConfig_To_unversioned_ScheduleConfig(in *ScheduleConfig, out *unversioned.ScheduleConfig, s conversion.Scope) error {\n\treturn autoConvert_v1alpha3_ScheduleConfig_To_unversioned_ScheduleConfig(in, out, s)\n}\n\nfunc autoConvert_unversioned_ScheduleConfig_To_v1alpha3_ScheduleConfig(in *unversioned.ScheduleConfig, out *ScheduleConfig, s conversion.Scope) error {\n\tout.RepeatInterval = Duration(in.RepeatInterval)\n\tout.BeginImmediately = in.BeginImmediately\n\treturn nil\n}\n\n// Convert_unversioned_ScheduleConfig_To_v1alpha3_ScheduleConfig is an autogenerated conversion function.\nfunc Convert_unversioned_ScheduleConfig_To_v1alpha3_ScheduleConfig(in *unversioned.ScheduleConfig, out *ScheduleConfig, s conversion.Scope) error {\n\treturn autoConvert_unversioned_ScheduleConfig_To_v1alpha3_ScheduleConfig(in, out, s)\n}\n"
  },
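  {
    "path": "docs/examples/conversion_roundtrip.go",
    "content": "// NOTE (editor's sketch): hypothetical usage example, not a file from the\n// repository. It shows how the generated conversion functions above are\n// typically exercised: the generated registration entry point (conventionally\n// named RegisterConversions) is added to a runtime.Scheme, and scheme.Convert\n// then dispatches to the registered Convert_* functions. The import paths and\n// the RegisterConversions name are assumptions based on conversion-gen\n// conventions.\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"k8s.io/apimachinery/pkg/runtime\"\n\n\t\"github.com/eraser-dev/eraser/api/unversioned\"\n\tv1alpha3 \"github.com/eraser-dev/eraser/api/v1alpha3\"\n)\n\nfunc main() {\n\tscheme := runtime.NewScheme()\n\t// Wire up every generated Convert_* function from zz_generated.conversion.go.\n\tif err := v1alpha3.RegisterConversions(scheme); err != nil {\n\t\tpanic(err)\n\t}\n\n\tin := &v1alpha3.ScheduleConfig{BeginImmediately: true}\n\tout := &unversioned.ScheduleConfig{}\n\t// Dispatches to Convert_v1alpha3_ScheduleConfig_To_unversioned_ScheduleConfig.\n\tif err := scheme.Convert(in, out, nil); err != nil {\n\t\tpanic(err)\n\t}\n\tfmt.Println(out.BeginImmediately) // true\n}\n"
  },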
  {
    "path": "api/v1alpha3/zz_generated.deepcopy.go",
    "content": "//go:build !ignore_autogenerated\n\n/*\nCopyright 2021.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\n// Code generated by controller-gen. DO NOT EDIT.\n\npackage v1alpha3\n\nimport (\n\t\"k8s.io/api/core/v1\"\n\t\"k8s.io/apimachinery/pkg/runtime\"\n)\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *Components) DeepCopyInto(out *Components) {\n\t*out = *in\n\tin.Collector.DeepCopyInto(&out.Collector)\n\tin.Scanner.DeepCopyInto(&out.Scanner)\n\tin.Remover.DeepCopyInto(&out.Remover)\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Components.\nfunc (in *Components) DeepCopy() *Components {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(Components)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ContainerConfig) DeepCopyInto(out *ContainerConfig) {\n\t*out = *in\n\tout.Image = in.Image\n\tin.Request.DeepCopyInto(&out.Request)\n\tin.Limit.DeepCopyInto(&out.Limit)\n\tif in.Config != nil {\n\t\tin, out := &in.Config, &out.Config\n\t\t*out = new(string)\n\t\t**out = **in\n\t}\n\tif in.Volumes != nil {\n\t\tin, out := &in.Volumes, &out.Volumes\n\t\t*out = make([]v1.Volume, len(*in))\n\t\tfor i := range *in {\n\t\t\t(*in)[i].DeepCopyInto(&(*out)[i])\n\t\t}\n\t}\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ContainerConfig.\nfunc (in *ContainerConfig) DeepCopy() *ContainerConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ContainerConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *EraserConfig) DeepCopyInto(out *EraserConfig) {\n\t*out = *in\n\tout.TypeMeta = in.TypeMeta\n\tin.Manager.DeepCopyInto(&out.Manager)\n\tin.Components.DeepCopyInto(&out.Components)\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new EraserConfig.\nfunc (in *EraserConfig) DeepCopy() *EraserConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(EraserConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.\nfunc (in *EraserConfig) DeepCopyObject() runtime.Object {\n\tif c := in.DeepCopy(); c != nil {\n\t\treturn c\n\t}\n\treturn nil\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. 
in must be non-nil.\nfunc (in *ImageJobCleanupConfig) DeepCopyInto(out *ImageJobCleanupConfig) {\n\t*out = *in\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ImageJobCleanupConfig.\nfunc (in *ImageJobCleanupConfig) DeepCopy() *ImageJobCleanupConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ImageJobCleanupConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ImageJobConfig) DeepCopyInto(out *ImageJobConfig) {\n\t*out = *in\n\tout.Cleanup = in.Cleanup\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ImageJobConfig.\nfunc (in *ImageJobConfig) DeepCopy() *ImageJobConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ImageJobConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ManagerConfig) DeepCopyInto(out *ManagerConfig) {\n\t*out = *in\n\tout.Runtime = in.Runtime\n\tout.Scheduling = in.Scheduling\n\tout.Profile = in.Profile\n\tout.ImageJob = in.ImageJob\n\tif in.PullSecrets != nil {\n\t\tin, out := &in.PullSecrets, &out.PullSecrets\n\t\t*out = make([]string, len(*in))\n\t\tcopy(*out, *in)\n\t}\n\tin.NodeFilter.DeepCopyInto(&out.NodeFilter)\n\tif in.AdditionalPodLabels != nil {\n\t\tin, out := &in.AdditionalPodLabels, &out.AdditionalPodLabels\n\t\t*out = make(map[string]string, len(*in))\n\t\tfor key, val := range *in {\n\t\t\t(*out)[key] = val\n\t\t}\n\t}\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ManagerConfig.\nfunc (in *ManagerConfig) DeepCopy() *ManagerConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ManagerConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *NodeFilterConfig) DeepCopyInto(out *NodeFilterConfig) {\n\t*out = *in\n\tif in.Selectors != nil {\n\t\tin, out := &in.Selectors, &out.Selectors\n\t\t*out = make([]string, len(*in))\n\t\tcopy(*out, *in)\n\t}\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NodeFilterConfig.\nfunc (in *NodeFilterConfig) DeepCopy() *NodeFilterConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(NodeFilterConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *OptionalContainerConfig) DeepCopyInto(out *OptionalContainerConfig) {\n\t*out = *in\n\tin.ContainerConfig.DeepCopyInto(&out.ContainerConfig)\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new OptionalContainerConfig.\nfunc (in *OptionalContainerConfig) DeepCopy() *OptionalContainerConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(OptionalContainerConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. 
in must be non-nil.\nfunc (in *ProfileConfig) DeepCopyInto(out *ProfileConfig) {\n\t*out = *in\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ProfileConfig.\nfunc (in *ProfileConfig) DeepCopy() *ProfileConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ProfileConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *RepoTag) DeepCopyInto(out *RepoTag) {\n\t*out = *in\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RepoTag.\nfunc (in *RepoTag) DeepCopy() *RepoTag {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(RepoTag)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ResourceRequirements) DeepCopyInto(out *ResourceRequirements) {\n\t*out = *in\n\tout.Mem = in.Mem.DeepCopy()\n\tout.CPU = in.CPU.DeepCopy()\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ResourceRequirements.\nfunc (in *ResourceRequirements) DeepCopy() *ResourceRequirements {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ResourceRequirements)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *RuntimeSpec) DeepCopyInto(out *RuntimeSpec) {\n\t*out = *in\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RuntimeSpec.\nfunc (in *RuntimeSpec) DeepCopy() *RuntimeSpec {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(RuntimeSpec)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ScheduleConfig) DeepCopyInto(out *ScheduleConfig) {\n\t*out = *in\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ScheduleConfig.\nfunc (in *ScheduleConfig) DeepCopy() *ScheduleConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ScheduleConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n"
  },
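  {
    "path": "docs/examples/deepcopy_semantics.go",
    "content": "// NOTE (editor's sketch): hypothetical example, not a file from the\n// repository. It illustrates the semantics of the generated deep-copy code\n// above: DeepCopy allocates fresh backing storage for pointer, slice, and map\n// fields, so mutating the copy never aliases the original. The v1alpha3\n// import path is an assumption.\npackage main\n\nimport (\n\t\"fmt\"\n\n\tv1alpha3 \"github.com/eraser-dev/eraser/api/v1alpha3\"\n)\n\nfunc main() {\n\torig := &v1alpha3.ManagerConfig{\n\t\tPullSecrets:         []string{\"regcred\"},\n\t\tAdditionalPodLabels: map[string]string{\"team\": \"infra\"},\n\t}\n\n\tcopied := orig.DeepCopy()\n\tcopied.PullSecrets[0] = \"changed\"\n\tcopied.AdditionalPodLabels[\"team\"] = \"changed\"\n\n\t// The original is untouched because the slice and map were deep-copied.\n\tfmt.Println(orig.PullSecrets[0])              // regcred\n\tfmt.Println(orig.AdditionalPodLabels[\"team\"]) // infra\n}\n"
  },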
  {
    "path": "build/version.sh",
    "content": "#!/bin/bash\n# borrowed from sigs.k8s.io/cluster-api-provider-azure/hack/version.sh and modified\n\nset -o errexit\nset -o pipefail\n\nversion::get_version_vars() {\n    # shellcheck disable=SC1083\n    GIT_COMMIT=\"$(git rev-parse HEAD^{commit})\"\n\n    if git_status=$(git status --porcelain 2>/dev/null) && [[ -z ${git_status} ]]; then\n        GIT_TREE_STATE=\"clean\"\n    else\n        GIT_TREE_STATE=\"dirty\"\n    fi\n\n    # borrowed from k8s.io/hack/lib/version.sh\n    # Use git describe to find the version based on tags.\n    if GIT_VERSION=$(git describe --tags --abbrev=14 2>/dev/null); then\n        # This translates the \"git describe\" to an actual semver.org\n        # compatible semantic version that looks something like this:\n        #   v1.1.0-alpha.0.6+84c76d1142ea4d\n        # shellcheck disable=SC2001\n        DASHES_IN_VERSION=$(echo \"${GIT_VERSION}\" | sed \"s/[^-]//g\")\n        if [[ \"${DASHES_IN_VERSION}\" == \"---\" ]] ; then\n            # We have distance to subversion (v1.1.0-subversion-1-gCommitHash)\n            # shellcheck disable=SC2001\n            GIT_VERSION=$(echo \"${GIT_VERSION}\" | sed \"s/-\\([0-9]\\{1,\\}\\)-g\\([0-9a-f]\\{14\\}\\)$/.\\1\\-\\2/\")\n        elif [[ \"${DASHES_IN_VERSION}\" == \"--\" ]] ; then\n            # We have distance to base tag (v1.1.0-1-gCommitHash)\n            # shellcheck disable=SC2001\n            GIT_VERSION=$(echo \"${GIT_VERSION}\" | sed \"s/-g\\([0-9a-f]\\{14\\}\\)$/-\\1/\")\n            # TODO: What should the output of this command look like?\n            # For example, v1.1.0-32-gfeb4736460af8f maps to v1.1.0-32-f, do we want the trailing \"-f\" or not?\n        fi\n\n        if [[ \"${GIT_TREE_STATE}\" == \"dirty\" ]]; then\n            # git describe --dirty only considers changes to existing files, but\n            # that is problematic since new untracked .go files affect the build,\n            # so use our idea of \"dirty\" from git status instead.\n            GIT_VERSION+=\"-dirty\"\n        fi\n\n        # Try to match the \"git describe\" output to a regex to try to extract\n        # the \"major\" and \"minor\" versions and whether this is the exact tagged\n        # version or whether the tree is between two tagged versions.\n        if [[ \"${GIT_VERSION}\" =~ ^v([0-9]+)\\.([0-9]+)(\\.[0-9]+)?([-].*)?([+].*)?$ ]]; then\n            GIT_MAJOR=${BASH_REMATCH[1]}\n            GIT_MINOR=${BASH_REMATCH[2]}\n        fi\n\n        # If GIT_VERSION is not a valid Semantic Version, then exit with error\n        if ! [[ \"${GIT_VERSION}\" =~ ^v([0-9]+)\\.([0-9]+)(\\.[0-9]+)?(-[0-9A-Za-z.-]+)?(\\+[0-9A-Za-z.-]+)?$ ]]; then\n            echo \"GIT_VERSION should be a valid Semantic Version. 
Current value: ${GIT_VERSION}\"\n            echo \"Please see more details here: https://semver.org\"\n            exit 1\n        fi\n    fi\n\n    if [[ -z ${SOURCE_DATE_EPOCH} ]]; then\n        SOURCE_DATE=\"$(git show -s --format=%cI HEAD)\"\n        SOURCE_DATE_EPOCH=\"$(date -u --date \"${SOURCE_DATE}\" +%s)\"\n    fi\n}\n\n# Prints the value that needs to be passed to the -ldflags parameter of go build\nversion::ldflags() {\n    version::get_version_vars\n\n    local -a ldflags\n    function add_ldflag() {\n        local key=${1}\n        local val=${2}\n        ldflags+=(\n            \"-X 'github.com/eraser-dev/eraser/version.${key}=${val}'\"\n        )\n    }\n\n    add_ldflag \"buildTime\" \"${SOURCE_DATE_EPOCH}\"\n    add_ldflag \"vcsCommit\" \"${GIT_COMMIT}\"\n    add_ldflag \"vcsState\" \"${GIT_TREE_STATE}\"\n\n    if [[ ! -z ${GIT_VERSION} ]]; then\n        add_ldflag \"BuildVersion\" \"${GIT_VERSION}\"\n        add_ldflag \"vcsMajor\" \"${GIT_MAJOR}\"\n        add_ldflag \"vcsMinor\" \"${GIT_MINOR}\"\n    elif [[ -n $1 ]]; then\n        add_ldflag \"BuildVersion\" \"$1\"\n    fi\n\n    # The -ldflags parameter takes a single string, so join the output.\n    echo \"${ldflags[*]-}\"\n}\n\nversion::ldflags $1\n"
  },
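  {
    "path": "docs/examples/version_vars_sketch.go",
    "content": "// NOTE (editor's sketch): hypothetical illustration, not a file from the\n// repository. build/version.sh (above) prints flags of the form\n//\n//\t-X 'github.com/eraser-dev/eraser/version.<key>=<value>'\n//\n// intended for use as:\n//\n//\tgo build -ldflags \"$(build/version.sh)\" ./...\n//\n// which implies the version package declares matching package-level string\n// variables, roughly as sketched here. The exact declarations in the real\n// package are assumptions inferred from the ldflags keys.\npackage version\n\nvar (\n\tbuildTime string // set from ${SOURCE_DATE_EPOCH}\n\tvcsCommit string // set from ${GIT_COMMIT}\n\tvcsState  string // set from ${GIT_TREE_STATE}: \"clean\" or \"dirty\"\n\tvcsMajor  string // set from ${GIT_MAJOR} when GIT_VERSION is available\n\tvcsMinor  string // set from ${GIT_MINOR} when GIT_VERSION is available\n)\n\n// BuildVersion is set from GIT_VERSION when the tree can be described by a\n// tag, or from the script's first argument otherwise.\nvar BuildVersion string\n"
  },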
  {
    "path": "config/crd/bases/_.yaml",
    "content": "---\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n  annotations:\n    controller-gen.kubebuilder.io/version: v0.9.0\n  creationTimestamp: null\nspec:\n  group: \"\"\n  names:\n    kind: \"\"\n    plural: \"\"\n  scope: \"\"\n  versions: null\n"
  },
  {
    "path": "config/crd/bases/eraser.sh_imagejobs.yaml",
    "content": "---\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n  annotations:\n    controller-gen.kubebuilder.io/version: v0.14.0\n  name: imagejobs.eraser.sh\nspec:\n  group: eraser.sh\n  names:\n    kind: ImageJob\n    listKind: ImageJobList\n    plural: imagejobs\n    singular: imagejob\n  scope: Cluster\n  versions:\n  - name: v1\n    schema:\n      openAPIV3Schema:\n        description: ImageJob is the Schema for the imagejobs API.\n        properties:\n          apiVersion:\n            description: |-\n              APIVersion defines the versioned schema of this representation of an object.\n              Servers should convert recognized schemas to the latest internal value, and\n              may reject unrecognized values.\n              More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n            type: string\n          kind:\n            description: |-\n              Kind is a string value representing the REST resource this object represents.\n              Servers may infer this from the endpoint the client submits requests to.\n              Cannot be updated.\n              In CamelCase.\n              More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n            type: string\n          metadata:\n            type: object\n          status:\n            description: ImageJobStatus defines the observed state of ImageJob.\n            properties:\n              deleteAfter:\n                description: Time to delay deletion until\n                format: date-time\n                type: string\n              desired:\n                description: desired number of pods\n                type: integer\n              failed:\n                description: number of pods that failed\n                type: integer\n              phase:\n                description: job running, successfully completed, or failed\n                type: string\n              skipped:\n                description: number of nodes that were skipped e.g. because they are\n                  not a linux node\n                type: integer\n              succeeded:\n                description: number of pods that completed successfully\n                type: integer\n            required:\n            - desired\n            - failed\n            - phase\n            - skipped\n            - succeeded\n            type: object\n        type: object\n    served: true\n    storage: true\n    subresources:\n      status: {}\n  - deprecated: true\n    deprecationWarning: v1alpha1 of the eraser API has been deprecated. 
Please migrate\n      to v1.\n    name: v1alpha1\n    schema:\n      openAPIV3Schema:\n        description: ImageJob is the Schema for the imagejobs API.\n        properties:\n          apiVersion:\n            description: |-\n              APIVersion defines the versioned schema of this representation of an object.\n              Servers should convert recognized schemas to the latest internal value, and\n              may reject unrecognized values.\n              More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n            type: string\n          kind:\n            description: |-\n              Kind is a string value representing the REST resource this object represents.\n              Servers may infer this from the endpoint the client submits requests to.\n              Cannot be updated.\n              In CamelCase.\n              More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n            type: string\n          metadata:\n            type: object\n          status:\n            description: ImageJobStatus defines the observed state of ImageJob.\n            properties:\n              deleteAfter:\n                description: Time to delay deletion until\n                format: date-time\n                type: string\n              desired:\n                description: desired number of pods\n                type: integer\n              failed:\n                description: number of pods that failed\n                type: integer\n              phase:\n                description: job running, successfully completed, or failed\n                type: string\n              skipped:\n                description: number of nodes that were skipped e.g. because they are\n                  not a linux node\n                type: integer\n              succeeded:\n                description: number of pods that completed successfully\n                type: integer\n            required:\n            - desired\n            - failed\n            - phase\n            - skipped\n            - succeeded\n            type: object\n        type: object\n    served: true\n    storage: false\n    subresources:\n      status: {}\n"
  },
  {
    "path": "config/crd/bases/eraser.sh_imagelists.yaml",
    "content": "---\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n  annotations:\n    controller-gen.kubebuilder.io/version: v0.14.0\n  name: imagelists.eraser.sh\nspec:\n  group: eraser.sh\n  names:\n    kind: ImageList\n    listKind: ImageListList\n    plural: imagelists\n    singular: imagelist\n  scope: Cluster\n  versions:\n  - name: v1\n    schema:\n      openAPIV3Schema:\n        description: ImageList is the Schema for the imagelists API.\n        properties:\n          apiVersion:\n            description: |-\n              APIVersion defines the versioned schema of this representation of an object.\n              Servers should convert recognized schemas to the latest internal value, and\n              may reject unrecognized values.\n              More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n            type: string\n          kind:\n            description: |-\n              Kind is a string value representing the REST resource this object represents.\n              Servers may infer this from the endpoint the client submits requests to.\n              Cannot be updated.\n              In CamelCase.\n              More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n            type: string\n          metadata:\n            type: object\n          spec:\n            description: ImageListSpec defines the desired state of ImageList.\n            properties:\n              images:\n                description: The list of non-compliant images to delete if non-running.\n                items:\n                  type: string\n                type: array\n            required:\n            - images\n            type: object\n          status:\n            description: ImageListStatus defines the observed state of ImageList.\n            properties:\n              failed:\n                description: Number of nodes that failed to run the job\n                format: int64\n                type: integer\n              skipped:\n                description: Number of nodes that were skipped due to a skip selector\n                format: int64\n                type: integer\n              success:\n                description: Number of nodes that successfully ran the job\n                format: int64\n                type: integer\n              timestamp:\n                description: Information when the job was completed.\n                format: date-time\n                type: string\n            required:\n            - failed\n            - skipped\n            - success\n            - timestamp\n            type: object\n        type: object\n    served: true\n    storage: true\n    subresources:\n      status: {}\n  - deprecated: true\n    deprecationWarning: v1alpha1 of the eraser API has been deprecated. 
Please migrate\n      to v1.\n    name: v1alpha1\n    schema:\n      openAPIV3Schema:\n        description: ImageList is the Schema for the imagelists API.\n        properties:\n          apiVersion:\n            description: |-\n              APIVersion defines the versioned schema of this representation of an object.\n              Servers should convert recognized schemas to the latest internal value, and\n              may reject unrecognized values.\n              More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n            type: string\n          kind:\n            description: |-\n              Kind is a string value representing the REST resource this object represents.\n              Servers may infer this from the endpoint the client submits requests to.\n              Cannot be updated.\n              In CamelCase.\n              More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n            type: string\n          metadata:\n            type: object\n          spec:\n            description: ImageListSpec defines the desired state of ImageList.\n            properties:\n              images:\n                description: The list of non-compliant images to delete if non-running.\n                items:\n                  type: string\n                type: array\n            required:\n            - images\n            type: object\n          status:\n            description: ImageListStatus defines the observed state of ImageList.\n            properties:\n              failed:\n                description: Number of nodes that failed to run the job\n                format: int64\n                type: integer\n              skipped:\n                description: Number of nodes that were skipped due to a skip selector\n                format: int64\n                type: integer\n              success:\n                description: Number of nodes that successfully ran the job\n                format: int64\n                type: integer\n              timestamp:\n                description: Information when the job was completed.\n                format: date-time\n                type: string\n            required:\n            - failed\n            - skipped\n            - success\n            - timestamp\n            type: object\n        type: object\n    served: true\n    storage: false\n    subresources:\n      status: {}\n"
  },
  {
    "path": "config/crd/kustomization.yaml",
    "content": "# This kustomization.yaml is not intended to be run by itself,\n# since it depends on service name and namespace that are out of this kustomize package.\n# It should be run by config/default\nresources:\n  - bases/eraser.sh_imagelists.yaml\n  - bases/eraser.sh_imagejobs.yaml\n#+kubebuilder:scaffold:crdkustomizeresource\n\npatchesStrategicMerge:\n# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix.\n# patches here are for enabling the conversion webhook for each CRD\n#- patches/webhook_in_imagelists.yaml\n#- patches/webhook_in_eraserconfigs.yaml\n#+kubebuilder:scaffold:crdkustomizewebhookpatch\n\n# [CERTMANAGER] To enable webhook, uncomment all the sections with [CERTMANAGER] prefix.\n# patches here are for enabling the CA injection for each CRD\n#- patches/cainjection_in_imagelists.yaml\n#- patches/cainjection_in_eraserconfigs.yaml\n#+kubebuilder:scaffold:crdkustomizecainjectionpatch\n\n# the following config is for teaching kustomize how to do kustomization for CRDs.\nconfigurations:\n  - kustomizeconfig.yaml\n"
  },
  {
    "path": "config/crd/kustomizeconfig.yaml",
    "content": "# This file is for teaching kustomize how to substitute name and namespace reference in CRD\nnameReference:\n- kind: Service\n  version: v1alpha1\n  fieldSpecs:\n  - kind: CustomResourceDefinition\n    version: v1\n    group: apiextensions.k8s.io\n    path: spec/conversion/webhook/clientConfig/service/name\n\nnamespace:\n- kind: CustomResourceDefinition\n  version: v1\n  group: apiextensions.k8s.io\n  path: spec/conversion/webhook/clientConfig/service/namespace\n  create: false\n\nvarReference:\n- path: metadata/annotations\n"
  },
  {
    "path": "config/crd/patches/cainjection_in_eraserconfigs.yaml",
    "content": "# The following patch adds a directive for certmanager to inject CA into the CRD\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n  annotations:\n    cert-manager.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)\n  name: eraserconfigs.eraser.sh\n"
  },
  {
    "path": "config/crd/patches/cainjection_in_imagelists.yaml",
    "content": "# The following patch adds a directive for certmanager to inject CA into the CRD\napiVersion: apiextensions.k8s.io/v1alpha1\nkind: CustomResourceDefinition\nmetadata:\n  annotations:\n    cert-manager.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)\n  name: imagelists.eraser.sh\n"
  },
  {
    "path": "config/crd/patches/webhook_in_eraserconfigs.yaml",
    "content": "# The following patch enables a conversion webhook for the CRD\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n  name: eraserconfigs.eraser.sh\nspec:\n  conversion:\n    strategy: Webhook\n    webhook:\n      clientConfig:\n        service:\n          namespace: system\n          name: webhook-service\n          path: /convert\n      conversionReviewVersions:\n      - v1\n"
  },
  {
    "path": "config/crd/patches/webhook_in_imagelists.yaml",
    "content": "# The following patch enables a conversion webhook for the CRD\napiVersion: apiextensions.k8s.io/v1alpha1\nkind: CustomResourceDefinition\nmetadata:\n  name: imagelists.eraser.sh\nspec:\n  conversion:\n    strategy: Webhook\n    webhook:\n      clientConfig:\n        service:\n          namespace: system\n          name: webhook-service\n          path: /convert\n      conversionReviewVersions:\n      - v1alpha1\n      - v1\n"
  },
  {
    "path": "config/default/kustomization.yaml",
    "content": "# Adds namespace to all resources.\nnamespace: eraser-system\n\n# Value of this field is prepended to the\n# names of all resources, e.g. a deployment named\n# \"wordpress\" becomes \"alices-wordpress\".\n# Note that it should also match with the prefix (text before '-') of the namespace\n# field above.\nnamePrefix: eraser-\n\n# Labels to add to all resources and selectors.\n#commonLabels:\n#  someName: someValue\n\nbases:\n- ../crd\n- ../rbac\n- ../manager\n# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix including the one in\n# crd/kustomization.yaml\n#- ../webhook\n# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER'. 'WEBHOOK' components are required.\n#- ../certmanager\n# [PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'.\n#- ../prometheus\n\npatchesStrategicMerge:\n# Protect the /metrics endpoint by putting it behind auth.\n# If you want your controller-manager to expose the /metrics\n# endpoint w/o any authn/z, please comment the following line.\n# - manager_auth_proxy_patch.yaml\n\n# Mount the controller config file for loading manager configurations\n# through a ComponentConfig type\n# - manager_config_patch.yaml\n\n# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix including the one in\n# crd/kustomization.yaml\n#- manager_webhook_patch.yaml\n\n# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER'.\n# Uncomment 'CERTMANAGER' sections in crd/kustomization.yaml to enable the CA injection in the admission webhooks.\n# 'CERTMANAGER' needs to be enabled to use ca injection\n#- webhookcainjection_patch.yaml\n\n# the following config is for teaching kustomize how to do var substitution\nvars:\n# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER' prefix.\n#- name: CERTIFICATE_NAMESPACE # namespace of the certificate CR\n#  objref:\n#    kind: Certificate\n#    group: cert-manager.io\n#    version: v1\n#    name: serving-cert # this name should match the one in certificate.yaml\n#  fieldref:\n#    fieldpath: metadata.namespace\n#- name: CERTIFICATE_NAME\n#  objref:\n#    kind: Certificate\n#    group: cert-manager.io\n#    version: v1\n#    name: serving-cert # this name should match the one in certificate.yaml\n#- name: SERVICE_NAMESPACE # namespace of the service\n#  objref:\n#    kind: Service\n#    version: v1\n#    name: webhook-service\n#  fieldref:\n#    fieldpath: metadata.namespace\n#- name: SERVICE_NAME\n#  objref:\n#    kind: Service\n#    version: v1\n#    name: webhook-service\n"
  },
  {
    "path": "config/default/manager_auth_proxy_patch.yaml",
    "content": "# This patch inject a sidecar container which is a HTTP proxy for the\n# controller manager, it performs RBAC authorization against the Kubernetes API using SubjectAccessReviews.\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: controller-manager\n  namespace: system\nspec:\n  template:\n    spec:\n      containers:\n      - name: kube-rbac-proxy\n        image: gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0\n        args:\n        - \"--secure-listen-address=0.0.0.0:8443\"\n        - \"--upstream=http://127.0.0.1:8080/\"\n        - \"--logtostderr=true\"\n        - \"--v=10\"\n        ports:\n        - containerPort: 8443\n          name: https\n      - name: manager\n        args:\n        - \"--health-probe-bind-address=:8081\"\n        - \"--metrics-bind-address=127.0.0.1:8080\"\n        - \"--leader-elect\"\n"
  },
  {
    "path": "config/manager/controller_manager_config.yaml",
    "content": "apiVersion: eraser.sh/v1alpha3\nkind: EraserConfig\nmanager:\n  runtime:\n    name: containerd\n    address: unix:///run/containerd/containerd.sock\n  otlpEndpoint: \"\"\n  logLevel: info\n  scheduling:\n    repeatInterval: 24h\n    beginImmediately: true\n  profile:\n    enabled: false\n    port: 6060\n  imageJob:\n    successRatio: 1.0\n    cleanup:\n      delayOnSuccess: 0s\n      delayOnFailure: 24h\n  pullSecrets: [] # image pull secrets for collector/scanner/eraser\n  priorityClassName: \"\" # priority class name for collector/scanner/eraser\n  additionalPodLabels: {}\n  nodeFilter:\n    type: exclude # must be either exclude|include\n    selectors:\n      - eraser.sh/cleanup.filter\n      - kubernetes.io/os=windows\ncomponents:\n  collector:\n    enabled: true\n    image:\n      repo: COLLECTOR_REPO\n      tag: COLLECTOR_TAG\n    request:\n      mem: 25Mi\n      cpu: 7m\n    limit:\n      mem: 500Mi\n      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#how-pods-with-resource-limits-are-run\n      cpu: 0\n  scanner:\n    enabled: true\n    image:\n      repo: SCANNER_REPO # supply custom image for custom scanner\n      tag: SCANNER_TAG\n    request:\n      mem: 500Mi\n      cpu: 1000m\n    limit:\n      mem: 2Gi\n      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#how-pods-with-resource-limits-are-run\n      cpu: 0\n    # The config needs to be passed through to the scanner as yaml, as a\n    # single string. Because we allow custom scanner images, the scanner is\n    # responsible for defining a schema, parsing, and validating.\n    config: |\n      # this is the schema for the provided 'trivy-scanner'. custom scanners\n      # will define their own configuration.\n      cacheDir: /var/lib/trivy\n      dbRepo: ghcr.io/aquasecurity/trivy-db\n      deleteFailedImages: true\n      deleteEOLImages: true\n      vulnerabilities:\n        ignoreUnfixed: false\n        types:\n          - os\n          - library\n        securityChecks:\n          - vuln\n        severities:\n          - CRITICAL\n          - HIGH\n          - MEDIUM\n          - LOW\n        ignoredStatuses:\n      timeout:\n        total: 23h\n        perImage: 1h\n    volumes: []\n  remover:\n    image:\n      repo: REMOVER_REPO\n      tag: REMOVER_TAG\n    request:\n      mem: 25Mi\n      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#how-pods-with-resource-limits-are-run\n      cpu: 0\n    limit:\n      mem: 30Mi\n      cpu: 0\n"
  },
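  {
    "path": "docs/examples/load_eraser_config.go",
    "content": "// NOTE (editor's sketch): hypothetical example, not a file from the\n// repository. controller_manager_config.yaml (above) is mounted into the\n// manager pod at /config and passed via --config (see\n// config/manager/patch.yaml). This sketch shows one way such a file could be\n// decoded into the v1alpha3 EraserConfig type with sigs.k8s.io/yaml; the\n// import paths are assumptions, and the real manager may load it differently.\npackage main\n\nimport (\n\t\"fmt\"\n\t\"os\"\n\n\t\"sigs.k8s.io/yaml\"\n\n\tv1alpha3 \"github.com/eraser-dev/eraser/api/v1alpha3\"\n)\n\nfunc main() {\n\traw, err := os.ReadFile(\"/config/controller_manager_config.yaml\")\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\n\t// sigs.k8s.io/yaml converts YAML to JSON and decodes via the types' JSON\n\t// tags, the usual pattern for Kubernetes-style config objects.\n\tvar cfg v1alpha3.EraserConfig\n\tif err := yaml.Unmarshal(raw, &cfg); err != nil {\n\t\tpanic(err)\n\t}\n\n\tfmt.Println(cfg.Manager.Scheduling.RepeatInterval)\n\tfmt.Println(cfg.Components.Scanner.Enabled)\n}\n"
  },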
  {
    "path": "config/manager/kustomization.yaml",
    "content": "resources:\n- manager.yaml\n\ngeneratorOptions:\n  disableNameSuffixHash: true\n\n\napiVersion: kustomize.config.k8s.io/v1beta1\nkind: Kustomization\nimages:\n- name: controller\n  newName: ghcr.io/eraser-dev/eraser-manager\n  newTag: v1.0.0-beta.3\n\n# DO NOT CHANGE FORMATTING:\n# This must be deleted for helm chart generation, so it should all be on one line.\nconfigMapGenerator: [ { \"files\": [\"controller_manager_config.yaml\"], \"name\": \"manager-config\" } ]\n# DO NOT CHANGE FORMATTING:\n# This must be deleted for helm chart generation, so it should all be on one line.\npatches: [{\"path\":\"patch.yaml\",\"target\":{\"kind\":\"Deployment\"}}]\n"
  },
  {
    "path": "config/manager/manager.yaml",
    "content": "apiVersion: v1\nkind: Namespace\nmetadata:\n  labels:\n    control-plane: controller-manager\n  name: system\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: controller-manager\n  namespace: system\n  labels:\n    control-plane: controller-manager\nspec:\n  selector:\n    matchLabels:\n      control-plane: controller-manager\n  replicas: 1\n  template:\n    metadata:\n      labels:\n        control-plane: controller-manager\n    spec:\n      nodeSelector:\n        kubernetes.io/os: linux\n      containers:\n      - command:\n        - /manager\n        args: []\n        image: controller:latest\n        name: manager\n        env:\n        - name: POD_NAMESPACE\n          valueFrom:\n            fieldRef:\n              apiVersion: v1\n              fieldPath: metadata.namespace\n        - name: OTEL_SERVICE_NAME\n          value: eraser-manager\n        securityContext:\n          allowPrivilegeEscalation: false\n          readOnlyRootFilesystem: true\n          runAsUser: 65532\n          runAsGroup: 65532\n          runAsNonRoot: true\n          seccompProfile:\n            type: RuntimeDefault\n          capabilities:\n            drop:\n              - ALL\n        livenessProbe:\n          httpGet:\n            path: /healthz\n            port: 8081\n          initialDelaySeconds: 15\n          periodSeconds: 20\n        readinessProbe:\n          httpGet:\n            path: /readyz\n            port: 8081\n          initialDelaySeconds: 5\n          periodSeconds: 10\n        resources:\n          limits:\n            memory: 30Mi\n          requests:\n            cpu: 100m\n            memory: 20Mi\n      serviceAccountName: controller-manager\n      terminationGracePeriodSeconds: 10\n"
  },
  {
    "path": "config/manager/patch.yaml",
    "content": "- op: add\n  path: /spec/template/spec/containers/0/volumeMounts\n  value: []\n- op: add\n  path: /spec/template/spec/containers/0/volumeMounts/-\n  value: { \"name\": \"manager-config\", \"mountPath\": \"/config\" }\n- op: add\n  path: /spec/template/spec/volumes\n  value: []\n- op: add\n  path: /spec/template/spec/volumes/-\n  value: { \"name\": \"manager-config\", \"configMap\": { \"name\": \"manager-config\" } }\n- op: add\n  path: /spec/template/spec/containers/0/args/-\n  value: \"--config=/config/controller_manager_config.yaml\"\n"
  },
  {
    "path": "config/prometheus/kustomization.yaml",
    "content": "resources:\n- monitor.yaml\n"
  },
  {
    "path": "config/prometheus/monitor.yaml",
    "content": "\n# Prometheus Monitor Service (Metrics)\napiVersion: monitoring.coreos.com/v1alpha1\nkind: ServiceMonitor\nmetadata:\n  labels:\n    control-plane: controller-manager\n  name: controller-manager-metrics-monitor\n  namespace: system\nspec:\n  endpoints:\n    - path: /metrics\n      port: https\n      scheme: https\n      bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token\n      tlsConfig:\n        insecureSkipVerify: true\n  selector:\n    matchLabels:\n      control-plane: controller-manager\n"
  },
  {
    "path": "config/rbac/auth_proxy_client_clusterrole.yaml",
    "content": "apiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n  name: metrics-reader\nrules:\n- nonResourceURLs:\n  - \"/metrics\"\n  verbs:\n  - get\n"
  },
  {
    "path": "config/rbac/auth_proxy_role.yaml",
    "content": "apiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n  name: proxy-role\nrules:\n- apiGroups:\n  - authentication.k8s.io\n  resources:\n  - tokenreviews\n  verbs:\n  - create\n- apiGroups:\n  - authorization.k8s.io\n  resources:\n  - subjectaccessreviews\n  verbs:\n  - create\n"
  },
  {
    "path": "config/rbac/auth_proxy_role_binding.yaml",
    "content": "apiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: proxy-rolebinding\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: proxy-role\nsubjects:\n- kind: ServiceAccount\n  name: controller-manager\n  namespace: system\n"
  },
  {
    "path": "config/rbac/auth_proxy_service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  labels:\n    control-plane: controller-manager\n  name: controller-manager-metrics-service\n  namespace: system\nspec:\n  ports:\n  - name: https\n    port: 8443\n    targetPort: https\n  selector:\n    control-plane: controller-manager\n"
  },
  {
    "path": "config/rbac/cluster_role_binding.yaml",
    "content": "apiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: manager-rolebinding\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: manager-role\nsubjects:\n- kind: ServiceAccount\n  name: controller-manager\n  namespace: system\n"
  },
  {
    "path": "config/rbac/imagejob_pods_service.yaml",
    "content": "apiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: imagejob-pods\n  namespace: system\n"
  },
  {
    "path": "config/rbac/imagelist_editor_role.yaml",
    "content": "# permissions for end users to edit imagelists.\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n  name: imagelist-editor-role\nrules:\n- apiGroups:\n  - eraser.sh\n  resources:\n  - imagelists\n  verbs:\n  - create\n  - delete\n  - get\n  - list\n  - patch\n  - update\n  - watch\n- apiGroups:\n  - eraser.sh\n  resources:\n  - imagelists/status\n  verbs:\n  - get\n"
  },
  {
    "path": "config/rbac/imagelist_viewer_role.yaml",
    "content": "# permissions for end users to view imagelists.\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n  name: imagelist-viewer-role\nrules:\n- apiGroups:\n  - eraser.sh\n  resources:\n  - imagelists\n  verbs:\n  - get\n  - list\n  - watch\n- apiGroups:\n  - eraser.sh\n  resources:\n  - imagelists/status\n  verbs:\n  - get\n"
  },
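  {
    "path": "docs/examples/imagelist.yaml",
    "content": "# Illustrative example, not a generated manifest: a minimal ImageList that\n# the editor/viewer ClusterRoles above grant access to. The imagelist\n# controller only acts on the list named \"imagelist\"; the image reference\n# below is a placeholder.\napiVersion: eraser.sh/v1\nkind: ImageList\nmetadata:\n  name: imagelist\nspec:\n  images:\n    - docker.io/library/alpine:3.7\n"
  },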
  {
    "path": "config/rbac/kustomization.yaml",
    "content": "resources:\n# All RBAC will be applied under this service account in\n# the deployment namespace. You may comment out this resource\n# if your manager will use a service account that exists at\n# runtime. Be sure to update RoleBinding and ClusterRoleBinding\n# subjects if changing service account names.\n- service_account.yaml\n- role.yaml\n- role_binding.yaml\n- imagejob_pods_service.yaml\n- cluster_role_binding.yaml\n# Comment the following 4 lines if you want to disable\n# the auth proxy (https://github.com/brancz/kube-rbac-proxy)\n# which protects your /metrics endpoint.\n# - auth_proxy_service.yaml\n# - auth_proxy_role.yaml\n# - auth_proxy_role_binding.yaml\n# - auth_proxy_client_clusterrole.yaml\n# uncomment the following if you want to enable leader election\n# - leader_election_role.yaml\n# - leader_election_role_binding.yaml\n"
  },
  {
    "path": "config/rbac/leader_election_role.yaml",
    "content": "# permissions to do leader election.\napiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n  name: leader-election-role\nrules:\n- apiGroups:\n  - \"\"\n  resources:\n  - configmaps\n  verbs:\n  - get\n  - list\n  - watch\n  - create\n  - update\n  - patch\n  - delete\n- apiGroups:\n  - coordination.k8s.io\n  resources:\n  - leases\n  verbs:\n  - get\n  - list\n  - watch\n  - create\n  - update\n  - patch\n  - delete\n- apiGroups:\n  - \"\"\n  resources:\n  - events\n  verbs:\n  - create\n  - patch\n"
  },
  {
    "path": "config/rbac/leader_election_role_binding.yaml",
    "content": "apiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n  name: leader-election-rolebinding\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: Role\n  name: leader-election-role\nsubjects:\n- kind: ServiceAccount\n  name: controller-manager\n  namespace: system\n"
  },
  {
    "path": "config/rbac/role.yaml",
    "content": "---\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n  name: manager-role\nrules:\n- apiGroups:\n  - \"\"\n  resources:\n  - nodes\n  verbs:\n  - get\n  - list\n  - watch\n- apiGroups:\n  - eraser.sh\n  resources:\n  - imagejobs\n  verbs:\n  - create\n  - delete\n  - get\n  - list\n  - watch\n- apiGroups:\n  - eraser.sh\n  resources:\n  - imagejobs/status\n  verbs:\n  - get\n  - patch\n  - update\n- apiGroups:\n  - eraser.sh\n  resources:\n  - imagelists\n  verbs:\n  - get\n  - list\n  - watch\n- apiGroups:\n  - eraser.sh\n  resources:\n  - imagelists/status\n  verbs:\n  - get\n  - patch\n  - update\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n  name: manager-role\n  namespace: system\nrules:\n- apiGroups:\n  - \"\"\n  resources:\n  - configmaps\n  verbs:\n  - create\n  - delete\n  - get\n  - list\n  - patch\n  - update\n  - watch\n- apiGroups:\n  - \"\"\n  resources:\n  - pods\n  verbs:\n  - create\n  - delete\n  - get\n  - list\n  - update\n  - watch\n- apiGroups:\n  - \"\"\n  resources:\n  - podtemplates\n  verbs:\n  - create\n  - delete\n  - get\n  - list\n  - patch\n  - update\n  - watch\n"
  },
  {
    "path": "config/rbac/role_binding.yaml",
    "content": "apiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n  name: manager-rolebinding\n  namespace: system\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: Role\n  name: manager-role\nsubjects:\n- kind: ServiceAccount\n  name: controller-manager\n  namespace: system\n"
  },
  {
    "path": "config/rbac/service_account.yaml",
    "content": "apiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: controller-manager\n  namespace: system\n"
  },
  {
    "path": "controllers/configmap/configmap.go",
    "content": "package configmap\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"math/rand\"\n\t\"os\"\n\t\"os/signal\"\n\t\"syscall\"\n\n\tcorev1 \"k8s.io/api/core/v1\"\n\t\"k8s.io/apimachinery/pkg/types\"\n\t\"sigs.k8s.io/controller-runtime/pkg/controller\"\n\t\"sigs.k8s.io/controller-runtime/pkg/event\"\n\t\"sigs.k8s.io/controller-runtime/pkg/handler\"\n\t\"sigs.k8s.io/controller-runtime/pkg/predicate\"\n\t\"sigs.k8s.io/controller-runtime/pkg/source\"\n\n\t\"go.opentelemetry.io/otel\"\n\t\"k8s.io/apimachinery/pkg/runtime\"\n\t\"sigs.k8s.io/controller-runtime/pkg/client\"\n\n\t\"github.com/eraser-dev/eraser/api/unversioned/config\"\n\teraserv1 \"github.com/eraser-dev/eraser/api/v1\"\n\tcontrollerUtils \"github.com/eraser-dev/eraser/controllers/util\"\n\t\"github.com/eraser-dev/eraser/pkg/metrics\"\n\teraserUtils \"github.com/eraser-dev/eraser/pkg/utils\"\n\tsdkmetric \"go.opentelemetry.io/otel/sdk/metric\"\n\tctrl \"sigs.k8s.io/controller-runtime\"\n\tlogf \"sigs.k8s.io/controller-runtime/pkg/log\"\n\t\"sigs.k8s.io/controller-runtime/pkg/manager\"\n\t\"sigs.k8s.io/controller-runtime/pkg/reconcile\"\n)\n\nvar (\n\tlog      = logf.Log.WithName(\"controller\").WithValues(\"process\", \"configmap-controller\")\n\tprovider *sdkmetric.MeterProvider\n\n\tconfigmap = types.NamespacedName{\n\t\tNamespace: eraserUtils.GetNamespace(),\n\t\tName:      controllerUtils.EraserConfigmapName,\n\t}\n)\n\n// ImageListReconciler reconciles a ImageList object.\ntype Reconciler struct {\n\tclient.Client\n\tscheme       *runtime.Scheme\n\teraserConfig *config.Manager\n}\n\nfunc Add(mgr manager.Manager, cfg *config.Manager) error {\n\tr, err := newReconciler(mgr, cfg)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tc, err := controller.New(\"imagelist-controller\", mgr, controller.Options{\n\t\tReconciler: r,\n\t})\n\tif err != nil {\n\t\treturn err\n\t}\n\n\terr = c.Watch(\n\t\tsource.Kind(mgr.GetCache(), &corev1.ConfigMap{}),\n\t\t&handler.EnqueueRequestForObject{},\n\t\tpredicate.ResourceVersionChangedPredicate{},\n\t\tpredicate.Funcs{\n\t\t\tUpdateFunc: func(e event.UpdateEvent) bool {\n\t\t\t\tcfg, ok := e.ObjectNew.(*corev1.ConfigMap)\n\t\t\t\tn := types.NamespacedName{Namespace: cfg.GetNamespace(), Name: cfg.GetName()}\n\n\t\t\t\tif !ok || n != configmap {\n\t\t\t\t\treturn false\n\t\t\t\t}\n\n\t\t\t\tlog.Info(\"configmap was updated, reloading\")\n\t\t\t\treturn true\n\t\t\t},\n\t\t\tDeleteFunc:  controllerUtils.NeverOnDelete,\n\t\t\tGenericFunc: controllerUtils.NeverOnGeneric,\n\t\t\tCreateFunc:  controllerUtils.NeverOnCreate,\n\t\t},\n\t)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\treturn nil\n}\n\n// newReconciler returns a new reconcile.Reconciler.\nfunc newReconciler(mgr manager.Manager, cfg *config.Manager) (reconcile.Reconciler, error) {\n\tc, err := cfg.Read()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\totlpEndpoint := c.Manager.OTLPEndpoint\n\tif otlpEndpoint != \"\" {\n\t\tctx, cancel := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)\n\t\tdefer cancel()\n\n\t\t_, _, provider = metrics.ConfigureMetrics(ctx, log, otlpEndpoint)\n\t\totel.SetMeterProvider(provider)\n\t}\n\n\trec := &Reconciler{\n\t\tClient:       mgr.GetClient(),\n\t\tscheme:       mgr.GetScheme(),\n\t\teraserConfig: cfg,\n\t}\n\n\treturn rec, nil\n}\n\n// +kubebuilder:rbac:groups=\"\",resources=configmaps,verbs=get;list;watch\n// +kubebuilder:rbac:groups=\"\",resources=pods,verbs=get;list;watch,delete\nfunc (r *Reconciler) Reconcile(ctx context.Context, _ ctrl.Request) (ctrl.Result, error) {\n\tj := 
eraserv1.ImageJobList{}\n\terr := r.List(ctx, &j)\n\tif err != nil {\n\t\treturn ctrl.Result{}, err\n\t}\n\n\tjobs := j.Items\n\tfor i := range jobs {\n\t\tif jobs[i].Status.Phase == eraserv1.PhaseRunning {\n\t\t\treturn ctrl.Result{}, fmt.Errorf(\"job is currently running, deferring configmap update\")\n\t\t}\n\t}\n\n\tp := corev1.PodList{}\n\terr = r.List(ctx, &p, client.MatchingLabels{\n\t\t\"control-plane\": \"controller-manager\",\n\t})\n\tif err != nil {\n\t\treturn ctrl.Result{}, err\n\t}\n\tpods := p.Items\n\n\tif len(pods) == 0 {\n\t\treturn ctrl.Result{}, nil\n\t}\n\n\tpod := pods[0]\n\tfor i := range pods[1:] {\n\t\tif pods[i].Status.Phase == corev1.PodPhase(corev1.PodRunning) {\n\t\t\tpod = pods[i]\n\t\t\tbreak\n\t\t}\n\t}\n\n\t// the configmap is mounted to the filesystem, but the normal\n\t// reconciliation loop will not update it on the node's filesystem until\n\t// about 60-90 seconds later. updating the annotations will trigger an\n\t// almost immediate update, which is monitored by an inotify watch set up in\n\t// the main() function.\n\t//\n\t// the annotation only needs to be different from the previous value, so we\n\t// don't need cryptographically sound random numbers here. the following\n\t// comment disables the linter which prefers random numbers from the\n\t// crypto/rand library.\n\t//nolint:all\n\tnewVersion := fmt.Sprintf(\"%d\", rand.Int63())\n\tif pod.Annotations == nil {\n\t\tpod.Annotations = make(map[string]string)\n\t}\n\tpod.Annotations[\"eraser.sh/configVersion\"] = newVersion\n\n\terr = r.Update(ctx, &pod)\n\tif err != nil {\n\t\treturn ctrl.Result{}, err\n\t}\n\treturn ctrl.Result{}, nil\n}\n"
  },
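  {
    "path": "docs/examples/force-config-reload.sh",
    "content": "#!/usr/bin/env sh\n# Illustrative sketch, not shipped by the project: the configmap controller\n# above propagates config changes quickly by bumping the\n# eraser.sh/configVersion annotation on the manager pod, which makes the\n# kubelet re-sync the mounted configmap almost immediately. The same effect\n# can be reproduced by hand; pod name and namespace are placeholders.\nkubectl annotate pod <manager-pod> -n <eraser-namespace> \\\n  eraser.sh/configVersion=\"$(date +%s)\" --overwrite\n"
  },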
  {
    "path": "controllers/controller.go",
    "content": "// Package controllers implements Kubernetes controllers for eraser resources.\npackage controllers\n\nimport (\n\t\"errors\"\n\n\t\"github.com/eraser-dev/eraser/api/unversioned/config\"\n\t\"github.com/eraser-dev/eraser/controllers/configmap\"\n\t\"github.com/eraser-dev/eraser/controllers/imagecollector\"\n\t\"github.com/eraser-dev/eraser/controllers/imagejob\"\n\t\"github.com/eraser-dev/eraser/controllers/imagelist\"\n\t\"k8s.io/apimachinery/pkg/api/meta\"\n\tctrl \"sigs.k8s.io/controller-runtime\"\n\t\"sigs.k8s.io/controller-runtime/pkg/manager\"\n)\n\ntype controllerSetupFunc func(manager.Manager, *config.Manager) error\n\nvar (\n\tcontrollerLog = ctrl.Log.WithName(\"controllerRuntimeLogger\")\n\n\tcontrollerAddFuncs = []controllerSetupFunc{\n\t\timagelist.Add,\n\t\timagejob.Add,\n\t\timagecollector.Add,\n\t\tconfigmap.Add,\n\t}\n)\n\nfunc SetupWithManager(m manager.Manager, cfg *config.Manager) error {\n\tcontrollerLog.Info(\"set up with manager\")\n\tfor _, f := range controllerAddFuncs {\n\t\tif err := f(m, cfg); err != nil {\n\t\t\tvar kindMatchErr *meta.NoKindMatchError\n\t\t\tif errors.As(err, &kindMatchErr) {\n\t\t\t\tcontrollerLog.Info(\"CRD %v is not installed\", kindMatchErr.GroupKind)\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\treturn err\n\t\t}\n\t}\n\treturn nil\n}\n"
  },
  {
    "path": "controllers/imagecollector/imagecollector_controller.go",
    "content": "/*\nCopyright 2021.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\npackage imagecollector\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"os\"\n\t\"os/signal\"\n\t\"path/filepath\"\n\t\"strconv\"\n\t\"syscall\"\n\t\"time\"\n\n\t\"go.opentelemetry.io/otel\"\n\t\"k8s.io/apimachinery/pkg/runtime\"\n\tctrl \"sigs.k8s.io/controller-runtime\"\n\t\"sigs.k8s.io/controller-runtime/pkg/client\"\n\t\"sigs.k8s.io/controller-runtime/pkg/event\"\n\t\"sigs.k8s.io/controller-runtime/pkg/handler\"\n\t\"sigs.k8s.io/controller-runtime/pkg/manager\"\n\t\"sigs.k8s.io/controller-runtime/pkg/reconcile\"\n\n\t\"github.com/eraser-dev/eraser/api/unversioned/config\"\n\teraserv1 \"github.com/eraser-dev/eraser/api/v1\"\n\teraserv1alpha1 \"github.com/eraser-dev/eraser/api/v1alpha1\"\n\t\"github.com/eraser-dev/eraser/controllers/util\"\n\n\t\"sigs.k8s.io/controller-runtime/pkg/controller\"\n\tlogf \"sigs.k8s.io/controller-runtime/pkg/log\"\n\t\"sigs.k8s.io/controller-runtime/pkg/predicate\"\n\t\"sigs.k8s.io/controller-runtime/pkg/source\"\n\n\t\"github.com/eraser-dev/eraser/pkg/logger\"\n\t\"github.com/eraser-dev/eraser/pkg/metrics\"\n\teraserUtils \"github.com/eraser-dev/eraser/pkg/utils\"\n\tsdkmetric \"go.opentelemetry.io/otel/sdk/metric\"\n\tcorev1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\t\"k8s.io/apimachinery/pkg/labels\"\n\t\"k8s.io/apimachinery/pkg/types\"\n)\n\nconst (\n\townerLabelValue  = \"imagecollector\"\n\tconfigVolumeName = \"eraser-config\"\n)\n\nvar (\n\tlog        = logf.Log.WithName(\"controller\").WithValues(\"process\", \"imagecollector-controller\")\n\tstartTime  time.Time\n\townerLabel labels.Selector\n\texporter   sdkmetric.Exporter\n\treader     sdkmetric.Reader\n\tprovider   *sdkmetric.MeterProvider\n)\n\nfunc init() {\n\tvar err error\n\n\townerLabelString := fmt.Sprintf(\"%s=%s\", util.ImageJobOwnerLabelKey, ownerLabelValue)\n\townerLabel, err = labels.Parse(ownerLabelString)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n}\n\n// ImageCollectorReconciler reconciles a ImageCollector object.\ntype Reconciler struct {\n\tclient.Client\n\tScheme       *runtime.Scheme\n\teraserConfig *config.Manager\n}\n\nfunc Add(mgr manager.Manager, cfg *config.Manager) error {\n\tc, err := cfg.Read()\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tcollCfg := c.Components.Collector\n\tif !collCfg.Enabled {\n\t\t// don't add controller, but don't throw an error either\n\t\treturn nil\n\t}\n\n\tr, err := newReconciler(mgr, cfg)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\treturn add(mgr, r)\n}\n\n// newReconciler returns a new reconcile.Reconciler.\nfunc newReconciler(mgr manager.Manager, cfg *config.Manager) (*Reconciler, error) {\n\tc, err := cfg.Read()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\totlpEndpoint := c.Manager.OTLPEndpoint\n\tif otlpEndpoint != \"\" {\n\t\tctx, cancel := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)\n\t\tdefer cancel()\n\n\t\texporter, reader, provider = metrics.ConfigureMetrics(ctx, log, 
otlpEndpoint)\n\t\totel.SetMeterProvider(provider)\n\t}\n\n\trec := &Reconciler{\n\t\tClient:       mgr.GetClient(),\n\t\tScheme:       mgr.GetScheme(),\n\t\teraserConfig: cfg,\n\t}\n\n\treturn rec, nil\n}\n\nfunc add(mgr manager.Manager, r *Reconciler) error {\n\tlog.Info(\"add collector controller\")\n\t// Create a new controller\n\tc, err := controller.New(\"imagecollector-controller\", mgr, controller.Options{\n\t\tReconciler: r,\n\t})\n\tif err != nil {\n\t\treturn err\n\t}\n\n\terr = c.Watch(\n\t\tsource.Kind(mgr.GetCache(), &eraserv1.ImageJob{}),\n\t\t&handler.EnqueueRequestForObject{}, predicate.Funcs{\n\t\t\t// Do nothing on Create, Delete, or Generic events\n\t\t\tCreateFunc:  util.NeverOnCreate,\n\t\t\tDeleteFunc:  util.NeverOnDelete,\n\t\t\tGenericFunc: util.NeverOnGeneric,\n\t\t\tUpdateFunc: func(e event.UpdateEvent) bool {\n\t\t\t\tif job, ok := e.ObjectNew.(*eraserv1.ImageJob); ok && util.IsCompletedOrFailed(job.Status.Phase) {\n\t\t\t\t\treturn ownerLabel.Matches(labels.Set(job.Labels))\n\t\t\t\t}\n\n\t\t\t\treturn false\n\t\t\t},\n\t\t},\n\t)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tch := make(chan event.GenericEvent)\n\terr = c.Watch(&source.Channel{\n\t\tSource: ch,\n\t}, &handler.EnqueueRequestForObject{})\n\tif err != nil {\n\t\treturn err\n\t}\n\n\teraserConfig, err := r.eraserConfig.Read()\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tscheduleCfg := eraserConfig.Manager.Scheduling\n\tdelay := time.Duration(scheduleCfg.RepeatInterval)\n\tif scheduleCfg.BeginImmediately {\n\t\tdelay = 0 * time.Second\n\t}\n\n\tlog.V(1).Info(\"delay\", \"delay\", delay)\n\n\t// runs the provided function after the specified delay\n\t_ = time.AfterFunc(delay, func() {\n\t\tlog.Info(\"Queueing first ImageCollector reconcile...\")\n\t\tch <- event.GenericEvent{\n\t\t\tObject: &eraserv1.ImageJob{\n\t\t\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\t\t\tName: \"first-reconcile\",\n\t\t\t\t},\n\t\t\t},\n\t\t}\n\t})\n\n\treturn nil\n}\n\n//+kubebuilder:rbac:groups=eraser.sh,resources=imagelists,verbs=get;list;watch\n//+kubebuilder:rbac:groups=\"\",namespace=\"system\",resources=podtemplates,verbs=get;list;watch;create;update;patch;delete\n//+kubebuilder:rbac:groups=eraser.sh,resources=imagelists/status,verbs=get;update;patch\n//+kubebuilder:rbac:groups=\"\",resources=nodes,verbs=get;list;watch\n//+kubebuilder:rbac:groups=\"\",namespace=\"system\",resources=pods,verbs=get;list;watch;update;create;delete\n\n// Reconcile is part of the main kubernetes reconciliation loop which aims to\n// move the current state of the cluster closer to the desired state.\n// TODO(user): Modify the Reconcile function to compare the state specified by\n// the ImageCollector object against the actual cluster state, and then\n// perform operations to make the cluster state reflect the state specified by\n// the user.\n//\n// For more details, check Reconcile and its Result here:\n// - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.11.2/pkg/reconcile\nfunc (r *Reconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {\n\tlog.Info(\"ImageCollector Reconcile\")\n\tdefer log.Info(\"done reconcile\")\n\n\timageJobList := &eraserv1.ImageJobList{}\n\tif err := r.List(ctx, imageJobList); err != nil {\n\t\tlog.Info(\"could not list imagejobs\")\n\t\treturn ctrl.Result{}, err\n\t}\n\n\tif req.Name == \"first-reconcile\" {\n\t\tfor idx := range imageJobList.Items {\n\t\t\tif err := r.Delete(ctx, &imageJobList.Items[idx]); err != nil {\n\t\t\t\tlog.Info(\"error cleaning up previous 
imagejobs\")\n\t\t\t\treturn ctrl.Result{}, err\n\t\t\t}\n\t\t}\n\t\treturn r.createImageJob(ctx)\n\t}\n\n\tswitch len(imageJobList.Items) {\n\tcase 0:\n\t\t// If we reach this point, reconcile has been called on a timer, and we want to begin a\n\t\t// collector ImageJob\n\t\treturn r.createImageJob(ctx)\n\tcase 1:\n\t\t// an imagejob has just completed; proceed to imagelist creation.\n\t\treturn r.handleCompletedImageJob(ctx, &imageJobList.Items[0])\n\tdefault:\n\t\treturn ctrl.Result{}, fmt.Errorf(\"more than one collector ImageJobs are scheduled\")\n\t}\n}\n\nfunc (r *Reconciler) handleJobDeletion(ctx context.Context, job *eraserv1.ImageJob) (ctrl.Result, error) {\n\tuntil := time.Until(job.Status.DeleteAfter.Time)\n\tif until > 0 {\n\t\tlog.Info(\"Delaying imagejob delete\", \"job\", job.Name, \"deleteAter\", job.Status.DeleteAfter)\n\t\treturn ctrl.Result{RequeueAfter: until}, nil\n\t}\n\n\tlog.Info(\"Deleting imagejob\", \"job\", job.Name)\n\terr := r.Delete(ctx, job)\n\tif err != nil {\n\t\treturn ctrl.Result{}, err\n\t}\n\n\ttemplate := corev1.PodTemplate{}\n\tif err := r.Get(ctx,\n\t\ttypes.NamespacedName{\n\t\t\tNamespace: eraserUtils.GetNamespace(),\n\t\t\tName:      job.GetName(),\n\t\t},\n\t\t&template,\n\t); err != nil {\n\t\treturn ctrl.Result{}, err\n\t}\n\n\tlog.Info(\"Deleting pod template\", \"template\", template.Name)\n\tif err := r.Delete(ctx, &template); err != nil {\n\t\treturn ctrl.Result{}, err\n\t}\n\n\tlog.Info(\"end job deletion\")\n\treturn ctrl.Result{}, nil\n}\n\nfunc (r *Reconciler) createImageJob(ctx context.Context) (ctrl.Result, error) {\n\teraserConfig, err := r.eraserConfig.Read()\n\tif err != nil {\n\t\treturn ctrl.Result{}, err\n\t}\n\n\tmgrCfg := eraserConfig.Manager\n\tcompCfg := eraserConfig.Components\n\n\tscanCfg := compCfg.Scanner\n\tcollectorCfg := compCfg.Collector\n\teraserCfg := compCfg.Remover\n\n\tscanDisabled := !scanCfg.Enabled\n\tstartTime = time.Now()\n\n\tremoverImg := *util.RemoverImage\n\tif removerImg == \"\" {\n\t\tiCfg := eraserCfg.Image\n\t\tremoverImg = fmt.Sprintf(\"%s:%s\", iCfg.Repo, iCfg.Tag)\n\t}\n\n\tlog.V(1).Info(\"removerImg\", \"removerImg\", removerImg)\n\n\tiCfg := collectorCfg.Image\n\tcollectorImg := fmt.Sprintf(\"%s:%s\", iCfg.Repo, iCfg.Tag)\n\n\tprofileConfig := eraserConfig.Manager.Profile\n\tprofileArgs := []string{\n\t\t\"--enable-pprof=\" + strconv.FormatBool(profileConfig.Enabled),\n\t\tfmt.Sprintf(\"--pprof-port=%d\", profileConfig.Port),\n\t}\n\n\tcollArgs := []string{\"--scan-disabled=\" + strconv.FormatBool(scanDisabled)}\n\tcollArgs = append(collArgs, profileArgs...)\n\n\tremoverArgs := []string{\"--log-level=\" + logger.GetLevel()}\n\tremoverArgs = append(removerArgs, profileArgs...)\n\n\tpullSecrets := []corev1.LocalObjectReference{}\n\tfor _, secret := range eraserConfig.Manager.PullSecrets {\n\t\tpullSecrets = append(pullSecrets, corev1.LocalObjectReference{Name: secret})\n\t}\n\n\tjobTemplate := corev1.PodTemplateSpec{\n\t\tSpec: corev1.PodSpec{\n\t\t\tVolumes: []corev1.Volume{\n\t\t\t\t{\n\t\t\t\t\t// EmptyDir default\n\t\t\t\t\tName: \"shared-data\",\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tName: configVolumeName,\n\t\t\t\t\tVolumeSource: corev1.VolumeSource{\n\t\t\t\t\t\tConfigMap: &corev1.ConfigMapVolumeSource{\n\t\t\t\t\t\t\tLocalObjectReference: corev1.LocalObjectReference{\n\t\t\t\t\t\t\t\tName: util.EraserConfigmapName,\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\tImagePullSecrets:  pullSecrets,\n\t\t\tRestartPolicy:     
corev1.RestartPolicyNever,\n\t\t\tPriorityClassName: eraserConfig.Manager.PriorityClassName,\n\t\t\tContainers: []corev1.Container{\n\t\t\t\t{\n\t\t\t\t\tName:            \"collector\",\n\t\t\t\t\tImage:           collectorImg,\n\t\t\t\t\tImagePullPolicy: corev1.PullIfNotPresent,\n\t\t\t\t\tArgs:            collArgs,\n\t\t\t\t\tVolumeMounts: []corev1.VolumeMount{\n\t\t\t\t\t\t{MountPath: \"/run/eraser.sh/shared-data\", Name: \"shared-data\"},\n\t\t\t\t\t},\n\t\t\t\t\tResources: corev1.ResourceRequirements{\n\t\t\t\t\t\tRequests: corev1.ResourceList{\n\t\t\t\t\t\t\t\"cpu\":    collectorCfg.Request.CPU,\n\t\t\t\t\t\t\t\"memory\": collectorCfg.Request.Mem,\n\t\t\t\t\t\t},\n\t\t\t\t\t\tLimits: corev1.ResourceList{\n\t\t\t\t\t\t\t\"memory\": collectorCfg.Limit.Mem,\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tName:            \"remover\",\n\t\t\t\t\tImage:           removerImg,\n\t\t\t\t\tImagePullPolicy: corev1.PullIfNotPresent,\n\t\t\t\t\tArgs:            removerArgs,\n\t\t\t\t\tVolumeMounts: []corev1.VolumeMount{\n\t\t\t\t\t\t{MountPath: \"/run/eraser.sh/shared-data\", Name: \"shared-data\"},\n\t\t\t\t\t},\n\t\t\t\t\tResources: corev1.ResourceRequirements{\n\t\t\t\t\t\tRequests: corev1.ResourceList{\n\t\t\t\t\t\t\t\"cpu\":    eraserCfg.Request.CPU,\n\t\t\t\t\t\t\t\"memory\": eraserCfg.Request.Mem,\n\t\t\t\t\t\t},\n\t\t\t\t\t\tLimits: corev1.ResourceList{\n\t\t\t\t\t\t\t\"memory\": eraserCfg.Limit.Mem,\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tSecurityContext: eraserUtils.SharedSecurityContext,\n\t\t\t\t\tEnv: []corev1.EnvVar{\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tName:  \"OTEL_EXPORTER_OTLP_ENDPOINT\",\n\t\t\t\t\t\t\tValue: mgrCfg.OTLPEndpoint,\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tName:  \"OTEL_SERVICE_NAME\",\n\t\t\t\t\t\t\tValue: \"remover\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\tServiceAccountName: \"eraser-imagejob-pods\",\n\t\t},\n\t}\n\n\tjob := &eraserv1alpha1.ImageJob{\n\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\tGenerateName: \"imagejob-\",\n\t\t\tLabels: map[string]string{\n\t\t\t\tutil.ImageJobOwnerLabelKey: ownerLabelValue,\n\t\t\t},\n\t\t},\n\t}\n\n\tif !scanDisabled {\n\t\tiCfg := scanCfg.Image\n\t\tscannerImg := fmt.Sprintf(\"%s:%s\", iCfg.Repo, iCfg.Tag)\n\n\t\tcfgDirname := \"/config\"\n\t\tcfgFilename := filepath.Join(cfgDirname, \"controller_manager_config.yaml\")\n\t\tscannerArgs := []string{fmt.Sprintf(\"--config=%s\", cfgFilename)}\n\t\tscannerArgs = append(scannerArgs, profileArgs...)\n\n\t\tscannerContainer := corev1.Container{\n\t\t\tName:  \"trivy-scanner\",\n\t\t\tImage: scannerImg,\n\t\t\tArgs:  scannerArgs,\n\t\t\tVolumeMounts: []corev1.VolumeMount{\n\t\t\t\t{MountPath: \"/run/eraser.sh/shared-data\", Name: \"shared-data\"},\n\t\t\t\t{MountPath: cfgDirname, Name: configVolumeName},\n\t\t\t},\n\t\t\tResources: corev1.ResourceRequirements{\n\t\t\t\tRequests: corev1.ResourceList{\n\t\t\t\t\t\"memory\": scanCfg.Request.Mem,\n\t\t\t\t\t\"cpu\":    scanCfg.Request.CPU,\n\t\t\t\t},\n\t\t\t\tLimits: corev1.ResourceList{\n\t\t\t\t\t\"memory\": scanCfg.Limit.Mem,\n\t\t\t\t},\n\t\t\t},\n\t\t\t// env vars for exporting metrics\n\t\t\tEnv: []corev1.EnvVar{\n\t\t\t\t{\n\t\t\t\t\tName:  \"OTEL_EXPORTER_OTLP_ENDPOINT\",\n\t\t\t\t\tValue: mgrCfg.OTLPEndpoint,\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tName:  \"OTEL_SERVICE_NAME\",\n\t\t\t\t\tValue: \"trivy-scanner\",\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tName:  \"ERASER_RUNTIME_NAME\",\n\t\t\t\t\tValue: string(mgrCfg.Runtime.Name),\n\t\t\t\t},\n\t\t\t},\n\t\t}\n\n\t\tlog.Info(\"extra mount for scanner 
starts\")\n\t\tscannerVolumes := compCfg.Scanner.Volumes\n\t\tif len(scannerVolumes) != 0 {\n\t\t\tjobTemplate.Spec.Volumes = append(jobTemplate.Spec.Volumes, scannerVolumes...)\n\t\t\tscannerVolumeMounts := []corev1.VolumeMount{}\n\t\t\tfor idx := range scannerVolumes {\n\t\t\t\tvolume := scannerVolumes[idx]\n\t\t\t\tif volume.HostPath == nil {\n\t\t\t\t\tlog.Error(fmt.Errorf(\"volume hostPath is nil\"), \"invalid volume\", \"volumeName\", volume.Name)\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\tscannerVolumeMounts = append(scannerVolumeMounts, corev1.VolumeMount{\n\t\t\t\t\tName:      volume.Name,\n\t\t\t\t\tMountPath: volume.HostPath.Path,\n\t\t\t\t\tReadOnly:  true,\n\t\t\t\t})\n\t\t\t}\n\t\t\tscannerContainer.VolumeMounts = append(scannerContainer.VolumeMounts, scannerVolumeMounts...)\n\t\t}\n\n\t\tjobTemplate.Spec.Containers = append(jobTemplate.Spec.Containers, scannerContainer)\n\t}\n\n\tconfigmapList := &corev1.ConfigMapList{}\n\tif err := r.List(ctx, configmapList, client.InNamespace(eraserUtils.GetNamespace())); err != nil {\n\t\tlog.Info(\"Could not get list of configmaps\")\n\t\treturn reconcile.Result{}, err\n\t}\n\n\texclusionMount, exclusionVolume, err := util.GetExclusionVolume(configmapList)\n\tif err != nil {\n\t\tlog.Info(\"Could not get exclusion mounts and volumes\")\n\t\treturn reconcile.Result{}, err\n\t}\n\n\tfor i := range jobTemplate.Spec.Containers {\n\t\tjobTemplate.Spec.Containers[i].VolumeMounts = append(jobTemplate.Spec.Containers[i].VolumeMounts, exclusionMount...)\n\t}\n\n\tjobTemplate.Spec.Volumes = append(jobTemplate.Spec.Volumes, exclusionVolume...)\n\n\terr = r.Create(ctx, job)\n\tif err != nil {\n\t\tlog.Info(\"Could not create collector ImageJob\")\n\t\treturn reconcile.Result{}, err\n\t}\n\n\t// get manager pod with label control-plane=controller-manager\n\tpodList := corev1.PodList{}\n\tif err := r.List(ctx, &podList, client.InNamespace(eraserUtils.GetNamespace()), client.MatchingLabels{\"control-plane\": \"controller-manager\"}); err != nil {\n\t\tlog.Info(\"Unable to list controller-manager pod\")\n\t}\n\tif len(podList.Items) != 1 {\n\t\tlog.Info(\"Incorrect number of controller-manager pods\", \"number of pods\", len(podList.Items))\n\t}\n\tmanagerPod := &podList.Items[0]\n\n\tnamespace := eraserUtils.GetNamespace()\n\ttemplate := corev1.PodTemplate{\n\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\tName:      job.GetName(),\n\t\t\tNamespace: namespace,\n\t\t\tOwnerReferences: []metav1.OwnerReference{\n\t\t\t\t*metav1.NewControllerRef(managerPod, managerPod.GroupVersionKind()),\n\t\t\t},\n\t\t},\n\t\tTemplate: jobTemplate,\n\t}\n\n\terr = r.Create(ctx, &template)\n\tif err != nil {\n\t\tlog.Error(err, \"Could not create collector PodTemplate\")\n\t\treturn reconcile.Result{}, err\n\t}\n\n\tlog.Info(\"Successfully created collector ImageJob\", \"job\", job.Name)\n\treturn reconcile.Result{}, nil\n}\n\nfunc (r *Reconciler) handleCompletedImageJob(ctx context.Context, childJob *eraserv1.ImageJob) (ctrl.Result, error) {\n\tvar err error\n\tvar timeRemaining time.Duration\n\teraserConfig, err := r.eraserConfig.Read()\n\tif err != nil {\n\t\treturn ctrl.Result{}, err\n\t}\n\n\totlpEndpoint := eraserConfig.Manager.OTLPEndpoint\n\trepeatInterval := time.Duration(eraserConfig.Manager.Scheduling.RepeatInterval)\n\n\tcleanupCfg := eraserConfig.Manager.ImageJob.Cleanup\n\tsuccessDelay := time.Duration(cleanupCfg.DelayOnSuccess)\n\terrDelay := time.Duration(cleanupCfg.DelayOnFailure)\n\n\tswitch phase := childJob.Status.Phase; phase {\n\tcase 
eraserv1.PhaseCompleted:\n\t\tlog.Info(\"completed phase\")\n\t\tif childJob.Status.DeleteAfter == nil {\n\t\t\tchildJob.Status.DeleteAfter = util.After(time.Now(), int64(successDelay.Seconds()))\n\t\t\tif err := r.Status().Update(ctx, childJob); err != nil {\n\t\t\t\tlog.Info(\"Could not update Delete After for job \" + childJob.Name)\n\t\t\t}\n\t\t\treturn ctrl.Result{}, nil\n\t\t}\n\n\t\tif otlpEndpoint != \"\" {\n\t\t\t// record metrics\n\t\t\tif err := metrics.RecordMetricsController(ctx, otel.GetMeterProvider(), float64(time.Since(startTime).Seconds()), int64(childJob.Status.Succeeded), int64(childJob.Status.Failed)); err != nil {\n\t\t\t\tlog.Error(err, \"error recording metrics\")\n\t\t\t}\n\t\t\tmetrics.ExportMetrics(log, exporter, reader)\n\t\t}\n\n\t\ttimeRemaining = repeatInterval - successDelay\n\t\tif res, err := r.handleJobDeletion(ctx, childJob); err != nil || res.RequeueAfter > 0 {\n\t\t\treturn res, err\n\t\t}\n\tcase eraserv1.PhaseFailed:\n\t\tlog.Info(\"failed phase\")\n\t\tif childJob.Status.DeleteAfter == nil {\n\t\t\tchildJob.Status.DeleteAfter = util.After(time.Now(), int64(errDelay.Seconds()))\n\t\t\tif err := r.Status().Update(ctx, childJob); err != nil {\n\t\t\t\tlog.Info(\"Could not update Delete After for job \" + childJob.Name)\n\t\t\t}\n\t\t\treturn ctrl.Result{}, nil\n\t\t}\n\n\t\tif otlpEndpoint != \"\" {\n\t\t\t// record metrics, using seconds for consistency with the completed phase\n\t\t\tif err := metrics.RecordMetricsController(ctx, otel.GetMeterProvider(), float64(time.Since(startTime).Seconds()), int64(childJob.Status.Succeeded), int64(childJob.Status.Failed)); err != nil {\n\t\t\t\tlog.Error(err, \"error recording metrics\")\n\t\t\t}\n\t\t\tmetrics.ExportMetrics(log, exporter, reader)\n\t\t}\n\n\t\ttimeRemaining = repeatInterval - errDelay\n\t\tif res, err := r.handleJobDeletion(ctx, childJob); err != nil || res.RequeueAfter > 0 {\n\t\t\treturn res, err\n\t\t}\n\tdefault:\n\t\terr = errors.New(\"should not reach this point for imagejob\")\n\t\tlog.Error(err, \"imagejob not in completed or failed phase\", \"imagejob\", childJob)\n\t}\n\n\tif timeRemaining <= 0 {\n\t\treturn ctrl.Result{Requeue: true}, err\n\t}\n\n\treturn ctrl.Result{RequeueAfter: timeRemaining}, err\n}\n"
  },
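  {
    "path": "docs/examples/scanner-volumes-config.yaml",
    "content": "# Illustrative sketch, not a generated file: extra volumes for the\n# trivy-scanner container, consumed by createImageJob in\n# controllers/imagecollector/imagecollector_controller.go. Only hostPath\n# volumes are honored (anything else is skipped with an error log), and each\n# one is mounted read-only at its host path. The top-level YAML keys and the\n# cache path are assumptions for illustration.\ncomponents:\n  scanner:\n    volumes:\n      - name: trivy-cache\n        hostPath:\n          path: /var/lib/trivy\n"
  },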
  {
    "path": "controllers/imagejob/imagejob_controller.go",
    "content": "/*\nCopyright 2021.\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n    http://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\npackage imagejob\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"net/url\"\n\t\"os\"\n\t\"strings\"\n\t\"time\"\n\n\t\"golang.org/x/exp/slices\"\n\tcorev1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\t\"k8s.io/apimachinery/pkg/labels\"\n\t\"k8s.io/apimachinery/pkg/runtime\"\n\t\"k8s.io/apimachinery/pkg/types\"\n\t\"k8s.io/apimachinery/pkg/util/wait\"\n\tctrl \"sigs.k8s.io/controller-runtime\"\n\t\"sigs.k8s.io/controller-runtime/pkg/client\"\n\t\"sigs.k8s.io/controller-runtime/pkg/controller\"\n\t\"sigs.k8s.io/controller-runtime/pkg/event\"\n\t\"sigs.k8s.io/controller-runtime/pkg/handler\"\n\tlogf \"sigs.k8s.io/controller-runtime/pkg/log\"\n\t\"sigs.k8s.io/controller-runtime/pkg/manager\"\n\t\"sigs.k8s.io/controller-runtime/pkg/predicate\"\n\t\"sigs.k8s.io/controller-runtime/pkg/reconcile\"\n\t\"sigs.k8s.io/controller-runtime/pkg/source\"\n\t\"sigs.k8s.io/kind/pkg/errors\"\n\n\t\"github.com/eraser-dev/eraser/api/unversioned\"\n\t\"github.com/eraser-dev/eraser/api/unversioned/config\"\n\teraserv1 \"github.com/eraser-dev/eraser/api/v1\"\n\tcontrollerUtils \"github.com/eraser-dev/eraser/controllers/util\"\n\teraserUtils \"github.com/eraser-dev/eraser/pkg/utils\"\n)\n\nconst (\n\tdefaultFilterLabel   = \"eraser.sh/cleanup.filter\"\n\twindowsFilterLabel   = \"kubernetes.io/os=windows\"\n\timageJobTypeLabelKey = \"eraser.sh/type\"\n\tcollectorJobType     = \"collector\"\n\tmanualJobType        = \"manual\"\n\tremoverContainer     = \"remover\"\n\tmanagerLabelValue    = \"controller-manager\"\n\tmanagerLabelKey      = \"control-plane\"\n)\n\nvar log = logf.Log.WithName(\"controller\").WithValues(\"process\", \"imagejob-controller\")\n\nvar defaultTolerations = []corev1.Toleration{\n\t{\n\t\tOperator: corev1.TolerationOpExists,\n\t},\n}\n\nfunc Add(mgr manager.Manager, cfg *config.Manager) error {\n\treturn add(mgr, newReconciler(mgr, cfg))\n}\n\n// newReconciler returns a new reconcile.Reconciler.\nfunc newReconciler(mgr manager.Manager, cfg *config.Manager) reconcile.Reconciler {\n\trec := &Reconciler{\n\t\tClient:       mgr.GetClient(),\n\t\tscheme:       mgr.GetScheme(),\n\t\teraserConfig: cfg,\n\t}\n\n\treturn rec\n}\n\n// ImageJobReconciler reconciles a ImageJob object.\ntype Reconciler struct {\n\tclient.Client\n\tscheme       *runtime.Scheme\n\teraserConfig *config.Manager\n}\n\n// add adds a new Controller to mgr with r as the reconcile.Reconciler.\nfunc add(mgr manager.Manager, r reconcile.Reconciler) error {\n\t// Create a new controller\n\tc, err := controller.New(\"imagejob-controller\", mgr, controller.Options{\n\t\tReconciler: r,\n\t})\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// Watch for changes to ImageJob\n\terr = c.Watch(source.Kind(mgr.GetCache(), &eraserv1.ImageJob{}), &handler.EnqueueRequestForObject{}, predicate.Funcs{\n\t\tUpdateFunc: func(e event.UpdateEvent) bool {\n\t\t\tif job, ok := e.ObjectNew.(*eraserv1.ImageJob); ok && 
controllerUtils.IsCompletedOrFailed(job.Status.Phase) {\n\t\t\t\treturn false // handled by Owning controller\n\t\t\t}\n\n\t\t\treturn true\n\t\t},\n\t\tCreateFunc:  controllerUtils.AlwaysOnCreate,\n\t\tGenericFunc: controllerUtils.NeverOnGeneric,\n\t\tDeleteFunc:  controllerUtils.NeverOnDelete,\n\t})\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// Watch for changes to pods created by ImageJob (eraser pods)\n\terr = c.Watch(\n\t\tsource.Kind(mgr.GetCache(), &corev1.Pod{}),\n\t\thandler.EnqueueRequestForOwner(mgr.GetScheme(), mgr.GetRESTMapper(), &corev1.PodTemplate{}),\n\t\tpredicate.Funcs{\n\t\t\tCreateFunc: func(e event.CreateEvent) bool {\n\t\t\t\treturn e.Object.GetNamespace() == eraserUtils.GetNamespace()\n\t\t\t},\n\t\t\tUpdateFunc: func(e event.UpdateEvent) bool {\n\t\t\t\treturn e.ObjectNew.GetNamespace() == eraserUtils.GetNamespace()\n\t\t\t},\n\t\t\tDeleteFunc: func(e event.DeleteEvent) bool {\n\t\t\t\treturn e.Object.GetNamespace() == eraserUtils.GetNamespace()\n\t\t\t},\n\t\t},\n\t)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// watch for changes to imagejob podTemplate (owned by controller manager pod)\n\terr = c.Watch(\n\t\tsource.Kind(mgr.GetCache(), &corev1.PodTemplate{}),\n\t\thandler.EnqueueRequestForOwner(mgr.GetScheme(), mgr.GetRESTMapper(), &corev1.Pod{}),\n\t\tpredicate.Funcs{\n\t\t\tCreateFunc: func(e event.CreateEvent) bool {\n\t\t\t\townerLabels, ok := e.Object.GetLabels()[managerLabelKey]\n\t\t\t\treturn ok && ownerLabels == managerLabelValue\n\t\t\t},\n\t\t\tUpdateFunc: func(e event.UpdateEvent) bool {\n\t\t\t\townerLabels, ok := e.ObjectNew.GetLabels()[managerLabelKey]\n\t\t\t\treturn ok && ownerLabels == managerLabelValue\n\t\t\t},\n\t\t\tDeleteFunc: func(e event.DeleteEvent) bool {\n\t\t\t\townerLabels, ok := e.Object.GetLabels()[managerLabelKey]\n\t\t\t\treturn ok && ownerLabels == managerLabelValue\n\t\t\t},\n\t\t},\n\t)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\treturn nil\n}\n\n//+kubebuilder:rbac:groups=eraser.sh,resources=imagejobs,verbs=get;list;watch;create;delete\n//+kubebuilder:rbac:groups=\"\",namespace=\"system\",resources=podtemplates,verbs=get;list;watch;create;update;patch;delete\n//+kubebuilder:rbac:groups=eraser.sh,resources=imagejobs/status,verbs=get;update;patch\n//+kubebuilder:rbac:groups=\"\",namespace=\"system\",resources=configmaps,verbs=get;list;watch;create;update;patch;delete\n\n// Reconcile is part of the main kubernetes reconciliation loop which aims to\n// move the current state of the cluster closer to the desired state.\n// TODO(user): Modify the Reconcile function to compare the state specified by\n// the ImageJob object against the actual cluster state, and then\n// perform operations to make the cluster state reflect the state specified by\n// the user.\n//\n// For more details, check Reconcile and its Result here:\n// - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.8.3/pkg/reconcile\nfunc (r *Reconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {\n\timageJob := &eraserv1.ImageJob{}\n\tif err := r.Get(ctx, req.NamespacedName, imageJob); err != nil {\n\t\timageJob.Status.Phase = eraserv1.PhaseFailed\n\t\tif err := r.updateJobStatus(ctx, imageJob); err != nil {\n\t\t\treturn ctrl.Result{}, err\n\t\t}\n\t\treturn ctrl.Result{}, client.IgnoreNotFound(err)\n\t}\n\n\tswitch imageJob.Status.Phase {\n\tcase \"\":\n\t\tif err := r.handleNewJob(ctx, imageJob); err != nil {\n\t\t\treturn ctrl.Result{}, fmt.Errorf(\"reconcile new: %w\", err)\n\t\t}\n\tcase eraserv1.PhaseRunning:\n\t\tif err := 
r.handleRunningJob(ctx, imageJob); err != nil {\n\t\t\treturn ctrl.Result{}, fmt.Errorf(\"reconcile running: %w\", err)\n\t\t}\n\tcase eraserv1.PhaseCompleted, eraserv1.PhaseFailed:\n\t\tbreak // this is handled by the Owning controller\n\tdefault:\n\t\treturn ctrl.Result{}, fmt.Errorf(\"reconcile: unexpected imagejob phase: %s\", imageJob.Status.Phase)\n\t}\n\n\treturn ctrl.Result{}, nil\n}\n\nfunc podListOptions(jobTemplate *corev1.PodTemplate) client.ListOptions {\n\tvar set map[string]string\n\n\tif jobTemplate.Template.Spec.Containers[0].Name == removerContainer {\n\t\tset = map[string]string{imageJobTypeLabelKey: manualJobType}\n\t} else {\n\t\tset = map[string]string{imageJobTypeLabelKey: collectorJobType}\n\t}\n\n\treturn client.ListOptions{\n\t\tNamespace:     eraserUtils.GetNamespace(),\n\t\tLabelSelector: labels.SelectorFromSet(set),\n\t}\n}\n\nfunc (r *Reconciler) handleRunningJob(ctx context.Context, imageJob *eraserv1.ImageJob) error {\n\t// get eraser pods\n\tpodList := &corev1.PodList{}\n\n\ttemplate := corev1.PodTemplate{}\n\tnamespace := eraserUtils.GetNamespace()\n\n\terr := r.Get(ctx, types.NamespacedName{\n\t\tName:      imageJob.GetName(),\n\t\tNamespace: namespace,\n\t}, &template)\n\tif err != nil {\n\t\timageJob.Status = eraserv1.ImageJobStatus{\n\t\t\tPhase:       eraserv1.PhaseFailed,\n\t\t\tDeleteAfter: controllerUtils.After(time.Now(), 1),\n\t\t}\n\t\treturn r.updateJobStatus(ctx, imageJob)\n\t}\n\n\tlistOpts := podListOptions(&template)\n\terr = r.List(ctx, podList, &listOpts)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tfailed := 0\n\tsuccess := 0\n\tskipped := imageJob.Status.Skipped\n\n\tif !podsComplete(podList.Items) {\n\t\treturn nil\n\t}\n\n\t// if all pods are complete, job is complete\n\t// get status of pods\n\tfor i := range podList.Items {\n\t\tif podList.Items[i].Status.Phase == corev1.PodSucceeded {\n\t\t\tsuccess++\n\t\t} else {\n\t\t\tfailed++\n\t\t}\n\t}\n\n\timageJob.Status = eraserv1.ImageJobStatus{\n\t\tDesired:   imageJob.Status.Desired,\n\t\tSucceeded: success,\n\t\tSkipped:   skipped,\n\t\tFailed:    failed,\n\t\tPhase:     eraserv1.PhaseCompleted,\n\t}\n\n\tsuccessAndSkipped := success + skipped\n\n\teraserConfig, err := r.eraserConfig.Read()\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tmanagerConfig := eraserConfig.Manager\n\tsuccessRatio := managerConfig.ImageJob.SuccessRatio\n\n\t// compute the ratio in floating point; integer division would truncate\n\t// to zero for anything below 100%.\n\tactualRatio := float64(successAndSkipped) / float64(imageJob.Status.Desired)\n\tif actualRatio < successRatio {\n\t\tlog.Info(\n\t\t\t\"Marking job as failed\",\n\t\t\t\"success ratio\", successRatio,\n\t\t\t\"actual ratio\", actualRatio,\n\t\t)\n\t\timageJob.Status.Phase = eraserv1.PhaseFailed\n\t}\n\n\treturn r.updateJobStatus(ctx, imageJob)\n}\n\nfunc (r *Reconciler) handleNewJob(ctx context.Context, imageJob *eraserv1.ImageJob) error {\n\tnodes := &corev1.NodeList{}\n\terr := r.List(ctx, nodes)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\ttemplate := corev1.PodTemplate{}\n\terr = r.Get(ctx,\n\t\ttypes.NamespacedName{\n\t\t\tNamespace: eraserUtils.GetNamespace(),\n\t\t\tName:      imageJob.GetName(),\n\t\t},\n\t\t&template,\n\t)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\timageJob.Status = eraserv1.ImageJobStatus{\n\t\tDesired:   len(nodes.Items),\n\t\tSucceeded: 0,\n\t\tSkipped:   0, // placeholder, updated below\n\t\tFailed:    0,\n\t\tPhase:     eraserv1.PhaseRunning,\n\t}\n\n\tskipped := 0\n\tvar nodeList []corev1.Node\n\n\tlog := log.WithValues(\"job\", imageJob.Name)\n\n\tenv := []corev1.EnvVar{\n\t\t{Name: \"NODE_NAME\", ValueFrom: &corev1.EnvVarSource{FieldRef: 
&corev1.ObjectFieldSelector{FieldPath: \"spec.nodeName\"}}},\n\t}\n\n\teraserConfig, err := r.eraserConfig.Read()\n\tif err != nil {\n\t\treturn err\n\t}\n\tlog.V(1).Info(\"configuration used\", \"manager\", eraserConfig.Manager, \"components\", eraserConfig.Components)\n\n\tfilterOpts := eraserConfig.Manager.NodeFilter\n\tif !slices.Contains(filterOpts.Selectors, defaultFilterLabel) {\n\t\tfilterOpts.Selectors = append(filterOpts.Selectors, defaultFilterLabel)\n\t}\n\n\tswitch filterOpts.Type {\n\tcase \"exclude\":\n\t\tnodeList, skipped, err = filterOutSkippedNodes(nodes, filterOpts.Selectors)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\tcase \"include\":\n\t\tnodeList, skipped, err = selectIncludedNodes(nodes, filterOpts.Selectors)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\tdefault:\n\t\treturn errors.Errorf(\"invalid node filter option\")\n\t}\n\n\timageJob.Status.Skipped = skipped\n\tif err := r.updateJobStatus(ctx, imageJob); err != nil {\n\t\treturn err\n\t}\n\n\tvar namespacedNames []types.NamespacedName\n\tpodSpecTemplate := template.Template.Spec\n\tfor i := range nodeList {\n\t\tlog := log.WithValues(\"node\", nodeList[i].Name)\n\t\tpodSpec, err := copyAndFillTemplateSpec(&podSpecTemplate, env, &nodeList[i], &eraserConfig.Manager.Runtime)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\tcontainerName := podSpec.Containers[0].Name\n\t\tnodeName := nodeList[i].Name\n\n\t\tpod := &corev1.Pod{\n\t\t\tTypeMeta: metav1.TypeMeta{},\n\t\t\tSpec:     *podSpec,\n\t\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\t\tNamespace:    eraserUtils.GetNamespace(),\n\t\t\t\tGenerateName: \"eraser-\" + nodeName + \"-\",\n\t\t\t\tOwnerReferences: []metav1.OwnerReference{\n\t\t\t\t\t*metav1.NewControllerRef(&template, template.GroupVersionKind()),\n\t\t\t\t},\n\t\t\t},\n\t\t}\n\n\t\tpod.Labels = map[string]string{}\n\n\t\tfor k, v := range eraserConfig.Manager.AdditionalPodLabels {\n\t\t\tpod.Labels[k] = v\n\t\t}\n\n\t\tif containerName == removerContainer {\n\t\t\tpod.Labels[imageJobTypeLabelKey] = manualJobType\n\t\t} else {\n\t\t\tpod.Labels[imageJobTypeLabelKey] = collectorJobType\n\t\t}\n\n\t\terr = r.Create(ctx, pod)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\tlog.Info(\"Started \"+containerName+\" pod on node\", \"nodeName\", nodeName)\n\t\tnamespacedNames = append(namespacedNames, types.NamespacedName{Name: pod.Name, Namespace: pod.Namespace})\n\t}\n\n\tfor _, namespacedName := range namespacedNames {\n\t\t//nolint:staticcheck // SA1019: TODO: Replace with PollUntilContextTimeout in future refactor\n\t\tif err := wait.PollImmediate(time.Nanosecond, time.Minute*5, r.isPodReady(ctx, namespacedName)); err != nil {\n\t\t\tlog.Error(err, \"timed out waiting for pod to leave pending state\", \"pod NamespacedName\", namespacedName)\n\t\t}\n\t}\n\n\treturn nil\n}\n\nfunc (r *Reconciler) isPodReady(ctx context.Context, namespacedName types.NamespacedName) wait.ConditionFunc {\n\treturn func() (bool, error) {\n\t\tcurrentPod := &corev1.Pod{}\n\n\t\tif err := r.Get(ctx, namespacedName, currentPod); err != nil {\n\t\t\treturn false, client.IgnoreNotFound(err)\n\t\t}\n\n\t\treturn currentPod.Status.Phase != corev1.PodPhase(corev1.PodPending), nil\n\t}\n}\n\n// SetupWithManager sets up the controller with the Manager.\nfunc (r *Reconciler) SetupWithManager(mgr ctrl.Manager) error {\n\tlog.Info(\"imagejob set up with manager\")\n\treturn ctrl.NewControllerManagedBy(mgr).\n\t\tFor(&eraserv1.ImageJob{}).\n\t\tComplete(r)\n}\n\nfunc podsComplete(podList []corev1.Pod) bool {\n\tfor i := range 
podList {\n\t\tif podList[i].Status.Phase == corev1.PodRunning || podList[i].Status.Phase == corev1.PodPending {\n\t\t\treturn containersFailed(&podList[i])\n\t\t}\n\t}\n\treturn true\n}\n\nfunc containersFailed(pod *corev1.Pod) bool {\n\tstatuses := pod.Status.ContainerStatuses\n\tfor i := range statuses {\n\t\tif statuses[i].State.Terminated != nil && statuses[i].State.Terminated.ExitCode != 0 {\n\t\t\treturn true\n\t\t}\n\t}\n\treturn false\n}\n\nfunc (r *Reconciler) updateJobStatus(ctx context.Context, imageJob *eraserv1.ImageJob) error {\n\tif imageJob.Name != \"\" {\n\t\tif err := r.Status().Update(ctx, imageJob); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc selectIncludedNodes(nodes *corev1.NodeList, includeNodesSelectors []string) ([]corev1.Node, int, error) {\n\tskipped := 0\n\tnodeList := make([]corev1.Node, 0, len(nodes.Items))\n\nnodes:\n\tfor i := range nodes.Items {\n\t\tlog := log.WithValues(\"node\", nodes.Items[i].Name)\n\t\tskipped++\n\t\tnodeName := nodes.Items[i].Name\n\t\tfor _, includeNodesSelectors := range includeNodesSelectors {\n\t\t\tincludedLabels, err := labels.Parse(includeNodesSelectors)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, -1, err\n\t\t\t}\n\n\t\t\tlog.V(1).Info(\"includedLabels\", \"includedLabels\", includedLabels)\n\t\t\tlog.V(1).Info(\"nodeLabels\", \"nodeLabels\", nodes.Items[i].Labels)\n\t\t\tif includedLabels.Matches(labels.Set(nodes.Items[i].Labels)) {\n\t\t\t\tlog.Info(\"node is included because it matched the specified labels\",\n\t\t\t\t\t\"nodeName\", nodeName,\n\t\t\t\t\t\"labels\", nodes.Items[i].Labels,\n\t\t\t\t\t\"specifiedSelectors\", includeNodesSelectors,\n\t\t\t\t)\n\n\t\t\t\tnodeList = append(nodeList, nodes.Items[i])\n\t\t\t\tskipped--\n\t\t\t\tcontinue nodes\n\t\t\t}\n\t\t}\n\t}\n\n\treturn nodeList, skipped, nil\n}\n\nfunc filterOutSkippedNodes(nodes *corev1.NodeList, skipNodesSelectors []string) ([]corev1.Node, int, error) {\n\tskipped := 0\n\tnodeList := make([]corev1.Node, 0, len(nodes.Items))\n\nnodes:\n\tfor i := range nodes.Items {\n\t\tlog := log.WithValues(\"node\", nodes.Items[i].Name)\n\n\t\tnodeName := nodes.Items[i].Name\n\t\tfor _, skipNodesSelector := range skipNodesSelectors {\n\t\t\tskipLabels, err := labels.Parse(skipNodesSelector)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, -1, err\n\t\t\t}\n\n\t\t\tlog.V(1).Info(\"skipLabels\", \"skipLabels\", skipLabels)\n\t\t\tlog.V(1).Info(\"nodeLabels\", \"nodeLabels\", nodes.Items[i].Labels)\n\t\t\tif skipLabels.Matches(labels.Set(nodes.Items[i].Labels)) {\n\t\t\t\tlog.Info(\"node will be skipped because it matched the specified labels\",\n\t\t\t\t\t\"nodeName\", nodeName,\n\t\t\t\t\t\"labels\", nodes.Items[i].Labels,\n\t\t\t\t\t\"specifiedSelectors\", skipNodesSelectors,\n\t\t\t\t)\n\n\t\t\t\tskipped++\n\t\t\t\tcontinue nodes\n\t\t\t}\n\t\t}\n\n\t\tnodeList = append(nodeList, nodes.Items[i])\n\t}\n\n\treturn nodeList, skipped, nil\n}\n\nfunc copyAndFillTemplateSpec(templateSpecTemplate *corev1.PodSpec, env []corev1.EnvVar, node *corev1.Node, runtimeSpec *unversioned.RuntimeSpec) (*corev1.PodSpec, error) {\n\tnodeName := node.Name\n\n\tu, err := url.Parse(runtimeSpec.Address)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tvolumes := []corev1.Volume{\n\t\t{Name: \"runtime-sock-volume\", VolumeSource: corev1.VolumeSource{HostPath: &corev1.HostPathVolumeSource{Path: u.Path}}},\n\t}\n\n\tvolumeMounts := []corev1.VolumeMount{\n\t\t{MountPath: controllerUtils.CRIPath, Name: \"runtime-sock-volume\"},\n\t}\n\n\ttemplateSpec := 
templateSpecTemplate.DeepCopy()\n\ttemplateSpec.Tolerations = defaultTolerations\n\n\teraserImg := &templateSpec.Containers[0]\n\teraserImg.VolumeMounts = append(eraserImg.VolumeMounts, volumeMounts...)\n\teraserImg.Env = append(eraserImg.Env, env...)\n\n\tif len(templateSpec.Containers) > 1 {\n\t\tcollectorImg := &templateSpec.Containers[1]\n\t\tcollectorImg.VolumeMounts = append(collectorImg.VolumeMounts, volumeMounts...)\n\t\tcollectorImg.Env = append(collectorImg.Env, env...)\n\t}\n\n\tif len(templateSpec.Containers) > 2 {\n\t\tscannerImg := &templateSpec.Containers[2]\n\t\tscannerImg.VolumeMounts = append(scannerImg.VolumeMounts, volumeMounts...)\n\t\tscannerImg.Env = append(scannerImg.Env,\n\t\t\tcorev1.EnvVar{\n\t\t\t\tName:  controllerUtils.EnvVarContainerdNamespaceKey,\n\t\t\t\tValue: controllerUtils.EnvVarContainerdNamespaceValue,\n\t\t\t},\n\t\t)\n\t\tscannerImg.Env = append(scannerImg.Env, env...)\n\t}\n\n\tsecrets := os.Getenv(\"ERASER_PULL_SECRET_NAMES\")\n\tif secrets != \"\" {\n\t\tfor _, secret := range strings.Split(secrets, \",\") {\n\t\t\ttemplateSpec.ImagePullSecrets = append(templateSpec.ImagePullSecrets, corev1.LocalObjectReference{Name: secret})\n\t\t}\n\t}\n\n\ttemplateSpec.Volumes = append(volumes, templateSpec.Volumes...)\n\ttemplateSpec.NodeName = nodeName\n\n\treturn templateSpec, nil\n}\n"
  },
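  {
    "path": "docs/examples/node-filter-config.yaml",
    "content": "# Illustrative sketch, not a generated file: the nodeFilter options read by\n# handleNewJob in controllers/imagejob/imagejob_controller.go. With type\n# \"exclude\", nodes matching any selector are skipped; with type \"include\",\n# eraser pods run only on matching nodes. The default selector\n# eraser.sh/cleanup.filter is always appended. The top-level YAML keys are\n# assumptions based on the Go field names (Manager.NodeFilter.Type/Selectors).\nmanager:\n  nodeFilter:\n    type: exclude\n    selectors:\n      - kubernetes.io/os=windows\n"
  },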
  {
    "path": "controllers/imagelist/imagelist_controller.go",
    "content": "/*\nCopyright 2021.\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n    http://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\npackage imagelist\n\nimport (\n\t\"context\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"os\"\n\t\"os/signal\"\n\t\"path/filepath\"\n\t\"syscall\"\n\t\"time\"\n\n\t\"go.opentelemetry.io/otel\"\n\tcorev1 \"k8s.io/api/core/v1\"\n\t\"k8s.io/apimachinery/pkg/api/errors\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\t\"k8s.io/apimachinery/pkg/labels\"\n\t\"k8s.io/apimachinery/pkg/runtime\"\n\t\"k8s.io/apimachinery/pkg/types\"\n\tctrl \"sigs.k8s.io/controller-runtime\"\n\t\"sigs.k8s.io/controller-runtime/pkg/client\"\n\t\"sigs.k8s.io/controller-runtime/pkg/controller\"\n\t\"sigs.k8s.io/controller-runtime/pkg/event\"\n\t\"sigs.k8s.io/controller-runtime/pkg/handler\"\n\tlogf \"sigs.k8s.io/controller-runtime/pkg/log\"\n\t\"sigs.k8s.io/controller-runtime/pkg/manager\"\n\t\"sigs.k8s.io/controller-runtime/pkg/predicate\"\n\t\"sigs.k8s.io/controller-runtime/pkg/reconcile\"\n\t\"sigs.k8s.io/controller-runtime/pkg/source\"\n\n\t\"github.com/eraser-dev/eraser/api/unversioned/config\"\n\teraserv1 \"github.com/eraser-dev/eraser/api/v1\"\n\t\"github.com/eraser-dev/eraser/controllers/util\"\n\t\"github.com/eraser-dev/eraser/pkg/logger\"\n\t\"github.com/eraser-dev/eraser/pkg/metrics\"\n\teraserUtils \"github.com/eraser-dev/eraser/pkg/utils\"\n\n\tsdkmetric \"go.opentelemetry.io/otel/sdk/metric\"\n)\n\nconst (\n\timgListPath     = \"/run/eraser.sh/imagelist\"\n\townerLabelValue = \"imagelist-controller\"\n)\n\nvar (\n\tlog        = logf.Log.WithName(\"controller\").WithValues(\"process\", \"imagelist-controller\")\n\timageList  = types.NamespacedName{Name: \"imagelist\"}\n\townerLabel labels.Selector\n\tstartTime  time.Time\n\texporter   sdkmetric.Exporter\n\treader     sdkmetric.Reader\n\tprovider   *sdkmetric.MeterProvider\n)\n\nfunc init() {\n\tvar err error\n\townerLabelString := fmt.Sprintf(\"%s=%s\", util.ImageJobOwnerLabelKey, ownerLabelValue)\n\townerLabel, err = labels.Parse(ownerLabelString)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n}\n\nfunc Add(mgr manager.Manager, cfg *config.Manager) error {\n\tr, err := newReconciler(mgr, cfg)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\treturn add(mgr, r)\n}\n\n// newReconciler returns a new reconcile.Reconciler.\nfunc newReconciler(mgr manager.Manager, cfg *config.Manager) (reconcile.Reconciler, error) {\n\tc, err := cfg.Read()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\totlpEndpoint := c.Manager.OTLPEndpoint\n\tif otlpEndpoint != \"\" {\n\t\tctx, cancel := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)\n\t\tdefer cancel()\n\n\t\texporter, reader, provider = metrics.ConfigureMetrics(ctx, log, otlpEndpoint)\n\t\totel.SetMeterProvider(provider)\n\t}\n\n\trec := &Reconciler{\n\t\tClient:       mgr.GetClient(),\n\t\tscheme:       mgr.GetScheme(),\n\t\teraserConfig: cfg,\n\t}\n\n\treturn rec, nil\n}\n\n// ImageJobReconciler reconciles a ImageJob object.\ntype ImageJobReconciler struct {\n\tclient.Client\n}\n\n// ImageListReconciler reconciles a ImageList 
object.\ntype Reconciler struct {\n\tclient.Client\n\tscheme       *runtime.Scheme\n\teraserConfig *config.Manager\n}\n\n//+kubebuilder:rbac:groups=eraser.sh,resources=imagelists,verbs=get;list;watch\n//+kubebuilder:rbac:groups=\"\",namespace=\"system\",resources=podtemplates,verbs=get;list;watch;create;update;patch;delete\n//+kubebuilder:rbac:groups=eraser.sh,resources=imagelists/status,verbs=get;update;patch\n//+kubebuilder:rbac:groups=\"\",resources=nodes,verbs=get;list;watch\n//+kubebuilder:rbac:groups=\"\",namespace=\"system\",resources=pods,verbs=get;list;watch;update;create;delete\n\n// Reconcile is part of the main kubernetes reconciliation loop which aims to\n// move the current state of the cluster closer to the desired state.\n// TODO(user): Modify the Reconcile function to compare the state specified by\n// the ImageList object against the actual cluster state, and then\n// perform operations to make the cluster state reflect the state specified by\n// the user.\n//\n// For more details, check Reconcile and its Result here:\n// - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.8.3/pkg/reconcile\nfunc (r *Reconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {\n\t// Ignore unsupported lists\n\tif req.NamespacedName != imageList {\n\t\tlog.Info(\"Ignoring unsupported imagelist name\", \"name\", req.Name)\n\t\treturn reconcile.Result{}, nil\n\t}\n\n\timageList := eraserv1.ImageList{}\n\terr := r.Get(ctx, req.NamespacedName, &imageList)\n\tif err != nil {\n\t\treturn ctrl.Result{}, client.IgnoreNotFound(err)\n\t}\n\n\tjobList := eraserv1.ImageJobList{}\n\terr = r.List(ctx, &jobList)\n\tif client.IgnoreNotFound(err) != nil {\n\t\treturn ctrl.Result{}, err\n\t}\n\n\titems := util.FilterJobListByOwner(jobList.Items, metav1.NewControllerRef(&imageList, imageList.GroupVersionKind()))\n\n\tswitch len(items) {\n\tcase 0:\n\t\treturn r.handleImageListEvent(ctx, &imageList)\n\tcase 1:\n\t\tjob := items[0]\n\n\t\t// If we got here because of a completed ImageJob:\n\t\tif util.IsCompletedOrFailed(job.Status.Phase) {\n\t\t\treturn r.handleJobListEvent(ctx, &imageList, &job)\n\t\t}\n\n\t\t// If we got here due to an update to the ImageList, and there is an ImageJob already running,\n\t\t// keep requeueing it until that job is completed.\n\t\treturn ctrl.Result{RequeueAfter: time.Minute}, nil\n\tdefault:\n\t\treturn ctrl.Result{}, fmt.Errorf(\"there are multiple child imagejobs running\")\n\t}\n}\n\nfunc (r *Reconciler) handleJobListEvent(ctx context.Context, imageList *eraserv1.ImageList, job *eraserv1.ImageJob) (ctrl.Result, error) {\n\tphase := job.Status.Phase\n\tif phase == eraserv1.PhaseCompleted || phase == eraserv1.PhaseFailed {\n\t\terr := r.handleJobCompletion(ctx, imageList, job)\n\t\tif err != nil {\n\t\t\treturn ctrl.Result{}, err\n\t\t}\n\n\t\teraserConfig, err := r.eraserConfig.Read()\n\t\tif err != nil {\n\t\t\treturn ctrl.Result{}, err\n\t\t}\n\n\t\tcleanupCfg := eraserConfig.Manager.ImageJob.Cleanup\n\t\tsuccessDelay := time.Duration(cleanupCfg.DelayOnSuccess)\n\t\terrDelay := time.Duration(cleanupCfg.DelayOnFailure)\n\n\t\tif job.Status.DeleteAfter == nil {\n\t\t\tswitch job.Status.Phase {\n\t\t\tcase eraserv1.PhaseCompleted:\n\t\t\t\tjob.Status.DeleteAfter = util.After(time.Now(), int64(successDelay.Seconds()))\n\t\t\tcase eraserv1.PhaseFailed:\n\t\t\t\tjob.Status.DeleteAfter = util.After(time.Now(), int64(errDelay.Seconds()))\n\t\t\t}\n\n\t\t\tif err := r.Status().Update(ctx, job); err != nil {\n\t\t\t\tlog.Info(\"Could not update Delete 
After for job \" + job.Name)\n\t\t\t}\n\t\t\treturn ctrl.Result{}, nil\n\t\t}\n\n\t\totlpEndpoint := eraserConfig.Manager.OTLPEndpoint\n\t\tif otlpEndpoint != \"\" {\n\t\t\t// record metrics\n\t\t\tif err := metrics.RecordMetricsController(ctx, otel.GetMeterProvider(), float64(time.Since(startTime).Seconds()), int64(job.Status.Succeeded), int64(job.Status.Failed)); err != nil {\n\t\t\t\tlog.Error(err, \"error recording metrics\")\n\t\t\t}\n\t\t\tmetrics.ExportMetrics(log, exporter, reader)\n\t\t}\n\n\t\treturn r.handleJobDeletion(ctx, job)\n\t}\n\n\treturn ctrl.Result{}, fmt.Errorf(\"unexpected job phase: '%s'\", job.Status.Phase)\n}\n\nfunc (r *Reconciler) handleJobDeletion(ctx context.Context, job *eraserv1.ImageJob) (ctrl.Result, error) {\n\tuntil := time.Until(job.Status.DeleteAfter.Time)\n\tif until > 0 {\n\t\tlog.Info(\"Delaying imagejob delete\", \"job\", job.Name, \"deleteAter\", job.Status.DeleteAfter)\n\t\treturn ctrl.Result{RequeueAfter: until}, nil\n\t}\n\n\tlog.Info(\"Deleting imagejob\", \"job\", job.Name)\n\terr := r.Delete(ctx, job)\n\tif err != nil {\n\t\treturn ctrl.Result{}, err\n\t}\n\n\ttemplate := corev1.PodTemplate{}\n\tif err := r.Get(ctx,\n\t\ttypes.NamespacedName{\n\t\t\tNamespace: eraserUtils.GetNamespace(),\n\t\t\tName:      job.GetName(),\n\t\t},\n\t\t&template,\n\t); err != nil {\n\t\treturn ctrl.Result{}, err\n\t}\n\n\tlog.Info(\"Deleting pod template\", \"template\", template.Name)\n\tif err := r.Delete(ctx, &template); err != nil {\n\t\treturn ctrl.Result{}, err\n\t}\n\n\treturn ctrl.Result{}, nil\n}\n\nfunc (r *Reconciler) handleImageListEvent(ctx context.Context, imageList *eraserv1.ImageList) (ctrl.Result, error) {\n\timgListJSON, err := json.Marshal(imageList.Spec.Images)\n\tif err != nil {\n\t\treturn ctrl.Result{}, fmt.Errorf(\"marshal image list: %w\", err)\n\t}\n\n\tconfigMap := corev1.ConfigMap{\n\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\tGenerateName: \"imagelist-\",\n\t\t\tNamespace:    eraserUtils.GetNamespace(),\n\t\t},\n\t\tImmutable: eraserUtils.BoolPtr(true),\n\t\tData:      map[string]string{\"images\": string(imgListJSON)},\n\t}\n\tif err := r.Create(ctx, &configMap); err != nil {\n\t\treturn ctrl.Result{}, fmt.Errorf(\"create configmap: %w\", err)\n\t}\n\n\tconfigName := configMap.Name\n\targs := []string{\n\t\t\"--imagelist=\" + filepath.Join(imgListPath, \"images\"),\n\t\t\"--log-level=\" + logger.GetLevel(),\n\t}\n\n\teraserConfig, err := r.eraserConfig.Read()\n\tif err != nil {\n\t\treturn ctrl.Result{}, err\n\t}\n\n\teraserContainerCfg := eraserConfig.Components.Remover\n\timageCfg := eraserContainerCfg.Image\n\timage := fmt.Sprintf(\"%s:%s\", imageCfg.Repo, imageCfg.Tag)\n\n\tpullSecrets := []corev1.LocalObjectReference{}\n\tfor _, secret := range eraserConfig.Manager.PullSecrets {\n\t\tpullSecrets = append(pullSecrets, corev1.LocalObjectReference{Name: secret})\n\t}\n\n\tjobTemplate := corev1.PodTemplateSpec{\n\t\tSpec: corev1.PodSpec{\n\t\t\tVolumes: []corev1.Volume{\n\t\t\t\t{\n\t\t\t\t\tName: configName,\n\t\t\t\t\tVolumeSource: corev1.VolumeSource{\n\t\t\t\t\t\tConfigMap: &corev1.ConfigMapVolumeSource{LocalObjectReference: corev1.LocalObjectReference{Name: configName}},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\tImagePullSecrets:  pullSecrets,\n\t\t\tRestartPolicy:     corev1.RestartPolicyNever,\n\t\t\tPriorityClassName: eraserConfig.Manager.PriorityClassName,\n\t\t\tContainers: []corev1.Container{\n\t\t\t\t{\n\t\t\t\t\tName:            \"remover\",\n\t\t\t\t\tImage:           image,\n\t\t\t\t\tImagePullPolicy: 
corev1.PullIfNotPresent,\n\t\t\t\t\tArgs:            args,\n\t\t\t\t\tVolumeMounts: []corev1.VolumeMount{\n\t\t\t\t\t\t{MountPath: imgListPath, Name: configName},\n\t\t\t\t\t},\n\t\t\t\t\tResources: corev1.ResourceRequirements{\n\t\t\t\t\t\tRequests: corev1.ResourceList{\n\t\t\t\t\t\t\t\"cpu\":    eraserContainerCfg.Request.CPU,\n\t\t\t\t\t\t\t\"memory\": eraserContainerCfg.Request.Mem,\n\t\t\t\t\t\t},\n\t\t\t\t\t\tLimits: corev1.ResourceList{\n\t\t\t\t\t\t\t\"memory\": eraserContainerCfg.Limit.Mem,\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tSecurityContext: eraserUtils.SharedSecurityContext,\n\t\t\t\t\t// env vars for exporting metrics\n\t\t\t\t\tEnv: []corev1.EnvVar{\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tName:  \"OTEL_EXPORTER_OTLP_ENDPOINT\",\n\t\t\t\t\t\t\tValue: eraserConfig.Manager.OTLPEndpoint,\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tName:  \"OTEL_SERVICE_NAME\",\n\t\t\t\t\t\t\tValue: \"remover\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\tServiceAccountName: \"eraser-imagejob-pods\",\n\t\t},\n\t}\n\n\tjob := &eraserv1.ImageJob{\n\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\tGenerateName: \"imagejob-\",\n\t\t\tLabels: map[string]string{\n\t\t\t\tutil.ImageJobOwnerLabelKey: ownerLabelValue,\n\t\t\t},\n\t\t\tOwnerReferences: []metav1.OwnerReference{\n\t\t\t\t*metav1.NewControllerRef(imageList, eraserv1.GroupVersion.WithKind(\"ImageList\")),\n\t\t\t},\n\t\t},\n\t}\n\n\tconfigmapList := &corev1.ConfigMapList{}\n\tif err := r.List(ctx, configmapList); err != nil {\n\t\tlog.Info(\"Could not get list of configmaps\")\n\t\treturn reconcile.Result{}, err\n\t}\n\n\texclusionMount, exclusionVolume, err := util.GetExclusionVolume(configmapList)\n\tif err != nil {\n\t\tlog.Info(\"Could not get exclusion mounts and volumes\")\n\t\treturn reconcile.Result{}, err\n\t}\n\n\tfor i := range jobTemplate.Spec.Containers {\n\t\tjobTemplate.Spec.Containers[i].VolumeMounts = append(jobTemplate.Spec.Containers[i].VolumeMounts, exclusionMount...)\n\t}\n\n\tjobTemplate.Spec.Volumes = append(jobTemplate.Spec.Volumes, exclusionVolume...)\n\n\terr = r.Create(ctx, job)\n\tstartTime = time.Now()\n\tlog.Info(\"creating imagejob\", \"job\", job.Name)\n\n\tif err != nil {\n\t\tif errors.IsNotFound(err) {\n\t\t\treturn reconcile.Result{}, nil\n\t\t}\n\t\treturn reconcile.Result{}, err\n\t}\n\n\t// get manager pod with label control-plane=controller-manager\n\tpodList := corev1.PodList{}\n\tif err = r.List(ctx, &podList, client.InNamespace(eraserUtils.GetNamespace()), client.MatchingLabels{\"control-plane\": \"controller-manager\"}); err != nil {\n\t\tlog.Info(\"Unable to list controller-manager pod\")\n\t}\n\tif len(podList.Items) != 1 {\n\t\tlog.Info(\"Incorrect number of controller-manager pods\", \"number of pods\", len(podList.Items))\n\t}\n\tmanagerPod := &podList.Items[0]\n\n\ttemplate := corev1.PodTemplate{\n\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\tName:      job.GetName(),\n\t\t\tNamespace: eraserUtils.GetNamespace(),\n\t\t\tOwnerReferences: []metav1.OwnerReference{\n\t\t\t\t*metav1.NewControllerRef(managerPod, managerPod.GroupVersionKind()),\n\t\t\t},\n\t\t},\n\t\tTemplate: jobTemplate,\n\t}\n\n\terr = r.Create(ctx, &template)\n\tif err != nil {\n\t\treturn reconcile.Result{}, err\n\t}\n\n\tconfigMap.OwnerReferences = []metav1.OwnerReference{*metav1.NewControllerRef(job, eraserv1.GroupVersion.WithKind(\"ImageJob\"))}\n\terr = r.Update(ctx, &configMap)\n\tif err != nil {\n\t\treturn reconcile.Result{}, err\n\t}\n\n\treturn ctrl.Result{}, nil\n}\n\nfunc (r *Reconciler) handleJobCompletion(ctx 
context.Context, imageList *eraserv1.ImageList, job *eraserv1.ImageJob) error {\n\tnow := metav1.Now()\n\n\timageList.Status.Success = int64(job.Status.Succeeded)\n\timageList.Status.Failed = int64(job.Status.Failed)\n\timageList.Status.Skipped = int64(job.Status.Skipped)\n\timageList.Status.Timestamp = &now\n\n\terr := r.Status().Update(ctx, imageList)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\treturn nil\n}\n\nfunc add(mgr manager.Manager, r reconcile.Reconciler) error {\n\tc, err := controller.New(\"imagelist-controller\", mgr, controller.Options{\n\t\tReconciler: r,\n\t})\n\tif err != nil {\n\t\treturn err\n\t}\n\n\terr = c.Watch(\n\t\tsource.Kind(mgr.GetCache(), &eraserv1.ImageList{}),\n\t\t&handler.EnqueueRequestForObject{}, predicate.GenerationChangedPredicate{})\n\tif err != nil {\n\t\treturn err\n\t}\n\terr = c.Watch(\n\t\tsource.Kind(mgr.GetCache(), &eraserv1.ImageJob{}),\n\t\thandler.EnqueueRequestForOwner(mgr.GetScheme(), mgr.GetRESTMapper(), &eraserv1.ImageList{}),\n\t\tpredicate.Funcs{\n\t\t\t// Do nothing on Create, Delete, or Generic events\n\t\t\tCreateFunc:  util.NeverOnCreate,\n\t\t\tDeleteFunc:  util.NeverOnDelete,\n\t\t\tGenericFunc: util.NeverOnGeneric,\n\t\t\tUpdateFunc: func(e event.UpdateEvent) bool {\n\t\t\t\tif job, ok := e.ObjectNew.(*eraserv1.ImageJob); ok && util.IsCompletedOrFailed(job.Status.Phase) {\n\t\t\t\t\treturn ownerLabel.Matches(labels.Set(job.Labels))\n\t\t\t\t}\n\n\t\t\t\treturn false\n\t\t\t},\n\t\t},\n\t)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\treturn nil\n}\n"
  },
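Note on `handleImageListEvent` above: it marshals the ImageList's `spec.images` into an immutable ConfigMap, which each remover pod mounts at `/run/eraser.sh/imagelist` and reads via the `--imagelist` flag. A sketch of the generated object follows; the name suffix is produced by the API server from `GenerateName`, so the one shown is hypothetical:

```yaml
# Illustrative shape of the ConfigMap created by handleImageListEvent.
apiVersion: v1
kind: ConfigMap
metadata:
  name: imagelist-abc12   # hypothetical generated name (GenerateName: "imagelist-")
  namespace: eraser-system
immutable: true
data:
  # JSON-marshaled copy of the ImageList's spec.images field
  images: '["docker.io/library/alpine:3.7.3"]'
```

After the template is created, the controller sets the ImageJob as the ConfigMap's owner, so the ConfigMap is garbage-collected along with the job.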
  {
    "path": "controllers/suite_test.go",
    "content": "/*\nCopyright 2021.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\npackage controllers\n\nimport (\n\t\"path/filepath\"\n\t\"testing\"\n\n\tginkgov2 \"github.com/onsi/ginkgo/v2\"\n\tgomega \"github.com/onsi/gomega\"\n\t\"k8s.io/client-go/kubernetes/scheme\"\n\t\"sigs.k8s.io/controller-runtime/pkg/client\"\n\t\"sigs.k8s.io/controller-runtime/pkg/envtest\"\n\tlogf \"sigs.k8s.io/controller-runtime/pkg/log\"\n\t\"sigs.k8s.io/controller-runtime/pkg/log/zap\"\n\n\teraserv1alpha2 \"github.com/eraser-dev/eraser/api/v1alpha2\"\n\t//+kubebuilder:scaffold:imports\n)\n\n// These tests use Ginkgo (BDD-style Go testing framework). Refer to\n// http://onsi.github.io/ginkgo/ to learn more about Ginkgo.\n\nvar (\n\tk8sClient client.Client\n\ttestEnv   *envtest.Environment\n)\n\nfunc TestAPIs(t *testing.T) {\n\tgomega.RegisterFailHandler(ginkgov2.Fail)\n\tginkgov2.RunSpecs(t, \"Controller Suite\")\n}\n\nvar _ = ginkgov2.BeforeSuite(func() {\n\tlogf.SetLogger(zap.New(zap.WriteTo(ginkgov2.GinkgoWriter), zap.UseDevMode(true)))\n\n\tginkgov2.By(\"bootstrapping test environment\")\n\ttestEnv = &envtest.Environment{\n\t\tCRDDirectoryPaths:     []string{filepath.Join(\"..\", \"config\", \"crd\", \"bases\")},\n\t\tErrorIfCRDPathMissing: true,\n\t}\n\n\tcfg, err := testEnv.Start()\n\tgomega.Expect(err).NotTo(gomega.HaveOccurred())\n\tgomega.Expect(cfg).NotTo(gomega.BeNil())\n\n\terr = eraserv1alpha2.AddToScheme(scheme.Scheme)\n\tgomega.Expect(err).NotTo(gomega.HaveOccurred())\n\n\t//+kubebuilder:scaffold:scheme\n\n\tk8sClient, err = client.New(cfg, client.Options{Scheme: scheme.Scheme})\n\tgomega.Expect(err).NotTo(gomega.HaveOccurred())\n\tgomega.Expect(k8sClient).NotTo(gomega.BeNil())\n})\n\nvar _ = ginkgov2.AfterSuite(func() {\n\tginkgov2.By(\"tearing down the test environment\")\n\terr := testEnv.Stop()\n\tgomega.Expect(err).NotTo(gomega.HaveOccurred())\n})\n"
  },
  {
    "path": "controllers/util/util.go",
    "content": "// Package util provides utility functions and constants for eraser controllers.\npackage util\n\nimport (\n\t\"flag\"\n\t\"os\"\n\t\"time\"\n\n\teraserv1 \"github.com/eraser-dev/eraser/api/v1\"\n\tbatchv1 \"k8s.io/api/batch/v1\"\n\tcorev1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\t\"k8s.io/apimachinery/pkg/labels\"\n\t\"sigs.k8s.io/controller-runtime/pkg/event\"\n)\n\nvar (\n\tRemoverImage        = flag.String(\"remover-image\", \"\", \"remover image\")\n\tEraserConfigmapName = \"eraser-manager-config\"\n)\n\nfunc init() {\n\tif configmapName := os.Getenv(\"ERASER_CONFIGMAP_NAME\"); configmapName != \"\" {\n\t\tEraserConfigmapName = configmapName\n\t}\n}\n\nconst (\n\tImageJobOwnerLabelKey = \"eraser.sh/job-owner\"\n\n\texclusionLabel = \"eraser.sh/exclude.list=true\"\n\n\tEnvVarContainerdNamespaceKey   = \"CONTAINERD_NAMESPACE\"\n\tEnvVarContainerdNamespaceValue = \"k8s.io\"\n\tCRIPath                        = \"/run/cri/cri.sock\"\n)\n\nfunc NeverOnCreate(_ event.CreateEvent) bool {\n\treturn false\n}\n\nfunc NeverOnDelete(_ event.DeleteEvent) bool {\n\treturn false\n}\n\nfunc NeverOnGeneric(_ event.GenericEvent) bool {\n\treturn false\n}\n\nfunc NeverOnUpdate(_ event.UpdateEvent) bool {\n\treturn false\n}\n\nfunc AlwaysOnCreate(_ event.CreateEvent) bool {\n\treturn true\n}\n\nfunc AlwaysOnDelete(_ event.DeleteEvent) bool {\n\treturn true\n}\n\nfunc AlwaysOnGeneric(_ event.GenericEvent) bool {\n\treturn true\n}\n\nfunc AlwaysOnUpdate(_ event.UpdateEvent) bool {\n\treturn true\n}\n\nfunc IsCompletedOrFailed(p eraserv1.JobPhase) bool {\n\treturn (p == eraserv1.PhaseCompleted || p == eraserv1.PhaseFailed)\n}\n\nfunc FilterJobListByOwner(jobs []eraserv1.ImageJob, owner *metav1.OwnerReference) []eraserv1.ImageJob {\n\tret := []eraserv1.ImageJob{}\n\n\tfor i := range jobs {\n\t\tjob := jobs[i]\n\n\t\tfor j := range job.OwnerReferences {\n\t\t\tor := job.OwnerReferences[j]\n\n\t\t\tif or.UID == owner.UID {\n\t\t\t\tret = append(ret, job)\n\t\t\t\tbreak // inner\n\t\t\t}\n\t\t}\n\t}\n\n\treturn ret\n}\n\nfunc FilterBatchJobListByOwner(jobs []batchv1.Job, owner *metav1.OwnerReference) []batchv1.Job {\n\tret := []batchv1.Job{}\n\n\tfor i := range jobs {\n\t\tjob := jobs[i]\n\n\t\tfor j := range job.OwnerReferences {\n\t\t\tor := job.OwnerReferences[j]\n\n\t\t\tif or.UID == owner.UID {\n\t\t\t\tret = append(ret, job)\n\t\t\t\tbreak // inner\n\t\t\t}\n\t\t}\n\t}\n\n\treturn ret\n}\n\nfunc After(t time.Time, seconds int64) *metav1.Time {\n\tnewT := metav1.NewTime(t.Add(time.Duration(seconds) * time.Second))\n\treturn &newT\n}\n\nfunc GetExclusionVolume(configmapList *corev1.ConfigMapList) ([]corev1.VolumeMount, []corev1.Volume, error) {\n\tvar exclusionMount []corev1.VolumeMount\n\tvar exclusionVolume []corev1.Volume\n\n\tselector, err := labels.Parse(exclusionLabel)\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\n\tfor i := range configmapList.Items {\n\t\tcm := configmapList.Items[i]\n\t\tif selector.Matches(labels.Set(cm.Labels)) {\n\t\t\texclusionMount = append(exclusionMount, corev1.VolumeMount{MountPath: \"exclude-\" + cm.Name, Name: cm.Name})\n\t\t\texclusionVolume = append(exclusionVolume, corev1.Volume{\n\t\t\t\tName: cm.Name,\n\t\t\t\tVolumeSource: corev1.VolumeSource{\n\t\t\t\t\tConfigMap: &corev1.ConfigMapVolumeSource{LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name}},\n\t\t\t\t},\n\t\t\t})\n\t\t}\n\t}\n\n\treturn exclusionMount, exclusionVolume, nil\n}\n"
  },
  {
    "path": "demo/README.md",
    "content": "# Eraser Demo\nThis demo is using [demo magic](https://github.com/paxtonhare/demo-magic) to execute. \n\n## Prerequisites\nThe following should be installed and pathed:\n- [Kind](https://kind.sigs.k8s.io/docs/user/quick-start/)\n- [Kubectl](https://kubernetes.io/docs/reference/kubectl/)\n- [Docker](https://www.docker.com/)\n- [Helm](https://helm.sh/docs/intro/install/)\n\nGet Helm repo:\n```\n$ helm repo add eraser https://eraser-dev.github.io/eraser/charts\n```\n\nRun `demo.sh` to start the demo!\n```\n$ ./demo.sh\n```\n"
  },
  {
    "path": "demo/demo-magic.sh",
    "content": "#!/usr/bin/env bash\n\n###############################################################################\n#\n# demo-magic.sh\n#\n# Copyright (c) 2015 Paxton Hare\n#\n# This script lets you script demos in bash. It runs through your demo script when you press\n# ENTER. It simulates typing and runs commands.\n#\n###############################################################################\n\n# the speed to \"type\" the text\nTYPE_SPEED=20\n\n# no wait after \"p\" or \"pe\"\nNO_WAIT=false\n\n# if > 0, will pause for this amount of seconds before automatically proceeding with any p or pe\nPROMPT_TIMEOUT=0\n\n# don't show command number unless user specifies it\nSHOW_CMD_NUMS=false\n\n\n# handy color vars for pretty prompts\nBLACK=\"\\033[0;30m\"\nBLUE=\"\\033[0;34m\"\nGREEN=\"\\033[0;32m\"\nGREY=\"\\033[0;90m\"\nCYAN=\"\\033[0;36m\"\nRED=\"\\033[0;31m\"\nPURPLE=\"\\033[0;35m\"\nBROWN=\"\\033[0;33m\"\nWHITE=\"\\033[1;37m\"\nCOLOR_RESET=\"\\033[0m\"\n\nC_NUM=0\n\n# prompt and command color which can be overriden\nDEMO_PROMPT=\"$ \"\nDEMO_CMD_COLOR=$WHITE\nDEMO_COMMENT_COLOR=$GREY\n\n##\n# prints the script usage\n##\nfunction usage() {\n  echo -e \"\"\n  echo -e \"Usage: $0 [options]\"\n  echo -e \"\"\n  echo -e \"\\tWhere options is one or more of:\"\n  echo -e \"\\t-h\\tPrints Help text\"\n  echo -e \"\\t-d\\tDebug mode. Disables simulated typing\"\n  echo -e \"\\t-n\\tNo wait\"\n  echo -e \"\\t-w\\tWaits max the given amount of seconds before proceeding with demo (e.g. '-w5')\"\n  echo -e \"\"\n}\n\n##\n# wait for user to press ENTER\n# if $PROMPT_TIMEOUT > 0 this will be used as the max time for proceeding automatically\n##\nfunction wait() {\n  if [[ \"$PROMPT_TIMEOUT\" == \"0\" ]]; then\n    read -rs\n  else\n    read -rst \"$PROMPT_TIMEOUT\"\n  fi\n}\n\n##\n# print command only. 
Useful for when you want to pretend to run a command\n#\n# takes 1 param - the string command to print\n#\n# usage: p \"ls -l\"\n#\n##\nfunction p() {\n  if [[ ${1:0:1} == \"#\" ]]; then\n    cmd=$DEMO_COMMENT_COLOR$1$COLOR_RESET\n  else\n    cmd=$DEMO_CMD_COLOR$1$COLOR_RESET\n  fi\n\n  # render the prompt\n  x=$(PS1=\"$DEMO_PROMPT\" \"$BASH\" --norc -i </dev/null 2>&1 | sed -n '${s/^\\(.*\\)exit$/\\1/p;}')\n\n  # show command number is selected\n  if $SHOW_CMD_NUMS; then\n   printf \"[$((++C_NUM))] $x\"\n  else\n   printf \"$x\"\n  fi\n\n  # wait for the user to press a key before typing the command\n  if [ $NO_WAIT = false ]; then\n    wait\n  fi\n\n  if [[ -z $TYPE_SPEED ]]; then\n    echo -en \"$cmd\"\n  else\n    echo -en \"$cmd\" | pv -qL $[$TYPE_SPEED+(-2 + RANDOM%5)];\n  fi\n\n  # wait for the user to press a key before moving on\n  if [ $NO_WAIT = false ]; then\n    wait\n  fi\n  echo \"\"\n}\n\n##\n# Prints and executes a command\n#\n# takes 1 parameter - the string command to run\n#\n# usage: pe \"ls -l\"\n#\n##\nfunction pe() {\n  # print the command\n  p \"$@\"\n  run_cmd \"$@\"\n}\n\n##\n# print and executes a command immediately\n#\n# takes 1 parameter - the string command to run\n#\n# usage: pei \"ls -l\"\n#\n##\nfunction pei {\n  NO_WAIT=true pe \"$@\"\n}\n\n##\n# Enters script into interactive mode\n#\n# and allows newly typed commands to be executed within the script\n#\n# usage : cmd\n#\n##\nfunction cmd() {\n  # render the prompt\n  x=$(PS1=\"$DEMO_PROMPT\" \"$BASH\" --norc -i </dev/null 2>&1 | sed -n '${s/^\\(.*\\)exit$/\\1/p;}')\n  printf \"$x\\033[0m\"\n  read command\n  run_cmd \"${command}\"\n}\n\nfunction run_cmd() {\n  function handle_cancel() {\n    printf \"\"\n  }\n\n  trap handle_cancel SIGINT\n  stty -echoctl\n  eval \"$@\"\n  stty echoctl\n  trap - SIGINT\n}\n\n\nfunction check_pv() {\n  command -v pv >/dev/null 2>&1 || {\n\n    echo \"\"\n    echo -e \"${RED}##############################################################\"\n    echo \"# HOLD IT!! I require pv but it's not installed.  Aborting.\" >&2;\n    echo -e \"${RED}##############################################################\"\n    echo \"\"\n    echo -e \"${COLOR_RESET}Installing pv:\"\n    echo \"\"\n    echo -e \"${BLUE}Mac:${COLOR_RESET} $ brew install pv\"\n    echo \"\"\n    echo -e \"${BLUE}Other:${COLOR_RESET} http://www.ivarch.com/programs/pv.shtml\"\n    echo -e \"${COLOR_RESET}\"\n    exit 1;\n  }\n}\n\ncheck_pv\n#\n# handle some default params\n# -h for help\n# -d for disabling simulated typing\n#\nwhile getopts \":dhncw:\" opt; do\n  case $opt in\n    h)\n      usage\n      exit 1\n      ;;\n    d)\n      unset TYPE_SPEED\n      ;;\n    n)\n      NO_WAIT=true\n      ;;\n    c)\n      SHOW_CMD_NUMS=true\n      ;;\n    w)\n      PROMPT_TIMEOUT=$OPTARG\n      ;;\n  esac\ndone\n"
  },
  {
    "path": "demo/demo.sh",
    "content": "#!/bin/bash\n\n########################\n# include the magic\n########################\n. demo-magic.sh\n\n# boostrap environment\npei \"kind create cluster --name eraser-demo\"\nsleep 10\npei \"kubectl apply -f ds.yaml\"\nsleep 10\nclear\n\n# demo commands\npei \"kubectl get pods\"\nsleep 10\npei \"kubectl delete daemonset alpine\"\nsleep 5\npei \"kubectl get pods\"\npei \"docker exec eraser-demo-control-plane ctr -n k8s.io images list | grep alpine\"\npei \"helm install -n eraser-system eraser eraser/eraser --create-namespace\"\nwait\npei \"kubectl get po -n eraser-system\"\nwait\npei \"kubectl get po -n eraser-system\"\npei \"docker exec eraser-demo-control-plane ctr -n k8s.io images list | grep alpine\"\n\n# teardown environment\nkind delete cluster --name eraser-demo"
  },
  {
    "path": "demo/ds.yaml",
    "content": "apiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n  name: alpine\nspec:\n  selector:\n    matchLabels:\n      app: alpine\n  template:\n    metadata:\n      labels:\n        app: alpine\n    spec:\n      containers:\n      - name: alpine\n        image: docker.io/library/alpine:3.7.3\n"
  },
  {
    "path": "deploy/eraser.yaml",
    "content": "apiVersion: v1\nkind: Namespace\nmetadata:\n  labels:\n    control-plane: controller-manager\n  name: eraser-system\n---\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n  annotations:\n    controller-gen.kubebuilder.io/version: v0.14.0\n  name: imagejobs.eraser.sh\nspec:\n  group: eraser.sh\n  names:\n    kind: ImageJob\n    listKind: ImageJobList\n    plural: imagejobs\n    singular: imagejob\n  scope: Cluster\n  versions:\n  - name: v1\n    schema:\n      openAPIV3Schema:\n        description: ImageJob is the Schema for the imagejobs API.\n        properties:\n          apiVersion:\n            description: |-\n              APIVersion defines the versioned schema of this representation of an object.\n              Servers should convert recognized schemas to the latest internal value, and\n              may reject unrecognized values.\n              More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n            type: string\n          kind:\n            description: |-\n              Kind is a string value representing the REST resource this object represents.\n              Servers may infer this from the endpoint the client submits requests to.\n              Cannot be updated.\n              In CamelCase.\n              More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n            type: string\n          metadata:\n            type: object\n          status:\n            description: ImageJobStatus defines the observed state of ImageJob.\n            properties:\n              deleteAfter:\n                description: Time to delay deletion until\n                format: date-time\n                type: string\n              desired:\n                description: desired number of pods\n                type: integer\n              failed:\n                description: number of pods that failed\n                type: integer\n              phase:\n                description: job running, successfully completed, or failed\n                type: string\n              skipped:\n                description: number of nodes that were skipped e.g. because they are not a linux node\n                type: integer\n              succeeded:\n                description: number of pods that completed successfully\n                type: integer\n            required:\n            - desired\n            - failed\n            - phase\n            - skipped\n            - succeeded\n            type: object\n        type: object\n    served: true\n    storage: true\n    subresources:\n      status: {}\n  - deprecated: true\n    deprecationWarning: v1alpha1 of the eraser API has been deprecated. 
Please migrate to v1.\n    name: v1alpha1\n    schema:\n      openAPIV3Schema:\n        description: ImageJob is the Schema for the imagejobs API.\n        properties:\n          apiVersion:\n            description: |-\n              APIVersion defines the versioned schema of this representation of an object.\n              Servers should convert recognized schemas to the latest internal value, and\n              may reject unrecognized values.\n              More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n            type: string\n          kind:\n            description: |-\n              Kind is a string value representing the REST resource this object represents.\n              Servers may infer this from the endpoint the client submits requests to.\n              Cannot be updated.\n              In CamelCase.\n              More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n            type: string\n          metadata:\n            type: object\n          status:\n            description: ImageJobStatus defines the observed state of ImageJob.\n            properties:\n              deleteAfter:\n                description: Time to delay deletion until\n                format: date-time\n                type: string\n              desired:\n                description: desired number of pods\n                type: integer\n              failed:\n                description: number of pods that failed\n                type: integer\n              phase:\n                description: job running, successfully completed, or failed\n                type: string\n              skipped:\n                description: number of nodes that were skipped e.g. 
because they are not a linux node\n                type: integer\n              succeeded:\n                description: number of pods that completed successfully\n                type: integer\n            required:\n            - desired\n            - failed\n            - phase\n            - skipped\n            - succeeded\n            type: object\n        type: object\n    served: true\n    storage: false\n    subresources:\n      status: {}\n---\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n  annotations:\n    controller-gen.kubebuilder.io/version: v0.14.0\n  name: imagelists.eraser.sh\nspec:\n  group: eraser.sh\n  names:\n    kind: ImageList\n    listKind: ImageListList\n    plural: imagelists\n    singular: imagelist\n  scope: Cluster\n  versions:\n  - name: v1\n    schema:\n      openAPIV3Schema:\n        description: ImageList is the Schema for the imagelists API.\n        properties:\n          apiVersion:\n            description: |-\n              APIVersion defines the versioned schema of this representation of an object.\n              Servers should convert recognized schemas to the latest internal value, and\n              may reject unrecognized values.\n              More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n            type: string\n          kind:\n            description: |-\n              Kind is a string value representing the REST resource this object represents.\n              Servers may infer this from the endpoint the client submits requests to.\n              Cannot be updated.\n              In CamelCase.\n              More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n            type: string\n          metadata:\n            type: object\n          spec:\n            description: ImageListSpec defines the desired state of ImageList.\n            properties:\n              images:\n                description: The list of non-compliant images to delete if non-running.\n                items:\n                  type: string\n                type: array\n            required:\n            - images\n            type: object\n          status:\n            description: ImageListStatus defines the observed state of ImageList.\n            properties:\n              failed:\n                description: Number of nodes that failed to run the job\n                format: int64\n                type: integer\n              skipped:\n                description: Number of nodes that were skipped due to a skip selector\n                format: int64\n                type: integer\n              success:\n                description: Number of nodes that successfully ran the job\n                format: int64\n                type: integer\n              timestamp:\n                description: Information when the job was completed.\n                format: date-time\n                type: string\n            required:\n            - failed\n            - skipped\n            - success\n            - timestamp\n            type: object\n        type: object\n    served: true\n    storage: true\n    subresources:\n      status: {}\n  - deprecated: true\n    deprecationWarning: v1alpha1 of the eraser API has been deprecated. 
Please migrate to v1.\n    name: v1alpha1\n    schema:\n      openAPIV3Schema:\n        description: ImageList is the Schema for the imagelists API.\n        properties:\n          apiVersion:\n            description: |-\n              APIVersion defines the versioned schema of this representation of an object.\n              Servers should convert recognized schemas to the latest internal value, and\n              may reject unrecognized values.\n              More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n            type: string\n          kind:\n            description: |-\n              Kind is a string value representing the REST resource this object represents.\n              Servers may infer this from the endpoint the client submits requests to.\n              Cannot be updated.\n              In CamelCase.\n              More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n            type: string\n          metadata:\n            type: object\n          spec:\n            description: ImageListSpec defines the desired state of ImageList.\n            properties:\n              images:\n                description: The list of non-compliant images to delete if non-running.\n                items:\n                  type: string\n                type: array\n            required:\n            - images\n            type: object\n          status:\n            description: ImageListStatus defines the observed state of ImageList.\n            properties:\n              failed:\n                description: Number of nodes that failed to run the job\n                format: int64\n                type: integer\n              skipped:\n                description: Number of nodes that were skipped due to a skip selector\n                format: int64\n                type: integer\n              success:\n                description: Number of nodes that successfully ran the job\n                format: int64\n                type: integer\n              timestamp:\n                description: Information when the job was completed.\n                format: date-time\n                type: string\n            required:\n            - failed\n            - skipped\n            - success\n            - timestamp\n            type: object\n        type: object\n    served: true\n    storage: false\n    subresources:\n      status: {}\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: eraser-controller-manager\n  namespace: eraser-system\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: eraser-imagejob-pods\n  namespace: eraser-system\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n  name: eraser-manager-role\n  namespace: eraser-system\nrules:\n- apiGroups:\n  - \"\"\n  resources:\n  - configmaps\n  verbs:\n  - create\n  - delete\n  - get\n  - list\n  - patch\n  - update\n  - watch\n- apiGroups:\n  - \"\"\n  resources:\n  - pods\n  verbs:\n  - create\n  - delete\n  - get\n  - list\n  - update\n  - watch\n- apiGroups:\n  - \"\"\n  resources:\n  - podtemplates\n  verbs:\n  - create\n  - delete\n  - get\n  - list\n  - patch\n  - update\n  - watch\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n  name: eraser-manager-role\nrules:\n- apiGroups:\n  - \"\"\n  resources:\n  - nodes\n  verbs:\n  - get\n  - list\n  - watch\n- apiGroups:\n  - eraser.sh\n  resources:\n  - imagejobs\n  verbs:\n  - create\n  - 
delete\n  - get\n  - list\n  - watch\n- apiGroups:\n  - eraser.sh\n  resources:\n  - imagejobs/status\n  verbs:\n  - get\n  - patch\n  - update\n- apiGroups:\n  - eraser.sh\n  resources:\n  - imagelists\n  verbs:\n  - get\n  - list\n  - watch\n- apiGroups:\n  - eraser.sh\n  resources:\n  - imagelists/status\n  verbs:\n  - get\n  - patch\n  - update\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n  name: eraser-manager-rolebinding\n  namespace: eraser-system\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: Role\n  name: eraser-manager-role\nsubjects:\n- kind: ServiceAccount\n  name: eraser-controller-manager\n  namespace: eraser-system\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: eraser-manager-rolebinding\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: eraser-manager-role\nsubjects:\n- kind: ServiceAccount\n  name: eraser-controller-manager\n  namespace: eraser-system\n---\napiVersion: v1\ndata:\n  controller_manager_config.yaml: |\n    apiVersion: eraser.sh/v1alpha3\n    kind: EraserConfig\n    manager:\n      runtime:\n        name: containerd\n        address: unix:///run/containerd/containerd.sock\n      otlpEndpoint: \"\"\n      logLevel: info\n      scheduling:\n        repeatInterval: 24h\n        beginImmediately: true\n      profile:\n        enabled: false\n        port: 6060\n      imageJob:\n        successRatio: 1.0\n        cleanup:\n          delayOnSuccess: 0s\n          delayOnFailure: 24h\n      pullSecrets: [] # image pull secrets for collector/scanner/eraser\n      priorityClassName: \"\" # priority class name for collector/scanner/eraser\n      additionalPodLabels: {}\n      nodeFilter:\n        type: exclude # must be either exclude|include\n        selectors:\n          - eraser.sh/cleanup.filter\n          - kubernetes.io/os=windows\n    components:\n      collector:\n        enabled: true\n        image:\n          repo: ghcr.io/eraser-dev/collector\n          tag: v1.5.0-beta.0\n        request:\n          mem: 25Mi\n          cpu: 7m\n        limit:\n          mem: 500Mi\n          # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#how-pods-with-resource-limits-are-run\n          cpu: 0\n      scanner:\n        enabled: true\n        image:\n          repo: ghcr.io/eraser-dev/eraser-trivy-scanner # supply custom image for custom scanner\n          tag: v1.5.0-beta.0\n        request:\n          mem: 500Mi\n          cpu: 1000m\n        limit:\n          mem: 2Gi\n          # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#how-pods-with-resource-limits-are-run\n          cpu: 0\n        # The config needs to be passed through to the scanner as yaml, as a\n        # single string. Because we allow custom scanner images, the scanner is\n        # responsible for defining a schema, parsing, and validating.\n        config: |\n          # this is the schema for the provided 'trivy-scanner'. 
custom scanners\n          # will define their own configuration.\n          cacheDir: /var/lib/trivy\n          dbRepo: ghcr.io/aquasecurity/trivy-db\n          deleteFailedImages: true\n          deleteEOLImages: true\n          vulnerabilities:\n            ignoreUnfixed: false\n            types:\n              - os\n              - library\n            securityChecks:\n              - vuln\n            severities:\n              - CRITICAL\n              - HIGH\n              - MEDIUM\n              - LOW\n            ignoredStatuses:\n          timeout:\n            total: 23h\n            perImage: 1h\n        volumes: []\n      remover:\n        image:\n          repo: ghcr.io/eraser-dev/remover\n          tag: v1.5.0-beta.0\n        request:\n          mem: 25Mi\n          # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#how-pods-with-resource-limits-are-run\n          cpu: 0\n        limit:\n          mem: 30Mi\n          cpu: 0\nkind: ConfigMap\nmetadata:\n  name: eraser-manager-config\n  namespace: eraser-system\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  labels:\n    control-plane: controller-manager\n  name: eraser-controller-manager\n  namespace: eraser-system\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      control-plane: controller-manager\n  template:\n    metadata:\n      labels:\n        control-plane: controller-manager\n    spec:\n      containers:\n      - args:\n        - --config=/config/controller_manager_config.yaml\n        command:\n        - /manager\n        env:\n        - name: POD_NAMESPACE\n          valueFrom:\n            fieldRef:\n              apiVersion: v1\n              fieldPath: metadata.namespace\n        - name: OTEL_SERVICE_NAME\n          value: eraser-manager\n        image: ghcr.io/eraser-dev/eraser-manager:v1.5.0-beta.0\n        livenessProbe:\n          httpGet:\n            path: /healthz\n            port: 8081\n          initialDelaySeconds: 15\n          periodSeconds: 20\n        name: manager\n        readinessProbe:\n          httpGet:\n            path: /readyz\n            port: 8081\n          initialDelaySeconds: 5\n          periodSeconds: 10\n        resources:\n          limits:\n            memory: 30Mi\n          requests:\n            cpu: 100m\n            memory: 20Mi\n        securityContext:\n          allowPrivilegeEscalation: false\n          capabilities:\n            drop:\n            - ALL\n          readOnlyRootFilesystem: true\n          runAsGroup: 65532\n          runAsNonRoot: true\n          runAsUser: 65532\n          seccompProfile:\n            type: RuntimeDefault\n        volumeMounts:\n        - mountPath: /config\n          name: manager-config\n      nodeSelector:\n        kubernetes.io/os: linux\n      serviceAccountName: eraser-controller-manager\n      terminationGracePeriodSeconds: 10\n      volumes:\n      - configMap:\n          name: eraser-manager-config\n        name: manager-config\n"
  },
  {
    "path": "docs/README.md",
    "content": "# Website\n\nThis website is built using [Docusaurus 2](https://docusaurus.io/), a modern static website generator.\n\n### Installation\n\n```\n$ yarn\n```\n\n### Local Development\n\n```\n$ yarn start\n```\n\nThis command starts a local development server and opens up a browser window. Most changes are reflected live without having to restart the server.\n\n### Build\n\n```\n$ yarn build\n```\n\nThis command generates static content into the `build` directory and can be served using any static contents hosting service.\n\n### Deployment\n\nUsing SSH:\n\n```\n$ USE_SSH=true yarn deploy\n```\n\nNot using SSH:\n\n```\n$ GIT_USER=<Your GitHub username> yarn deploy\n```\n\nIf you are using GitHub pages for hosting, this command is a convenient way to build the website and push to the `gh-pages` branch.\n\n## Search\n\nEraser docs website uses Algolia DocSearch service. Please see [here](https://docusaurus.io/docs/search) for more information.\n\nIf the search index has any issues:\n\n1. Go to [Algolia search dashboard](https://www.algolia.com/apps/X8MU4GEC0G/explorer/browse/eraser)\n1. Click manage index and export configuration\n1. Delete the index\n1. Import saved configuration\n1. Go to [Algolia crawler](https://crawler.algolia.com/admin/crawlers/acc2bdb5-4780-433f-a3e9-bb3b49598320/overview) and restart crawling manually (takes about a few minutes to crawl). This is scheduled to run every week automatically.\n"
  },
  {
    "path": "docs/babel.config.js",
    "content": "module.exports = {\n  presets: [require.resolve('@docusaurus/core/lib/babel/preset')],\n};\n"
  },
  {
    "path": "docs/design/README.md",
    "content": "# Design Docs\n\n## Implemented\n* [ImageList and Collector/Scanner Design](https://docs.google.com/document/d/1Rz1bkZKZSLVMjC_w8WLASPDUjfU80tjV-XWUXZ8vq3I/edit?usp=sharing)\n\n## Proposed\n* [Image exclusion](https://docs.google.com/document/d/1ksUziJzNSVpwCagqmAzZOKllwbmZg0q2xJNUDUcQP2U/edit)"
  },
  {
    "path": "docs/docs/architecture.md",
    "content": "---\ntitle: Architecture\n---\nAt a high level, Eraser has two main modes of operation: manual and automated.\n\nManual image removal involves supplying a list of images to remove; Eraser then\ndeploys pods to clean up the images you supplied.\n\nAutomated image removal runs on a timer. By default, the automated process\nremoves images based on the results of a vulnerability scan. The default\nvulnerability scanner is Trivy, but others can be provided in its place. Or,\nthe scanner can be disabled altogether, in which case Eraser acts as a garbage\ncollector -- it will remove all non-running images in your cluster.\n\n## Manual image cleanup\n\n<img title=\"manual cleanup\" src=\"/eraser/docs/img/eraser_manual.png\" />\n\n## Automated analysis, scanning, and cleanup\n\n<img title=\"automated cleanup\" src=\"/eraser/docs/img/eraser_timer.png\" />\n"
  },
  {
    "path": "docs/docs/code-of-conduct.md",
    "content": "---\ntitle: Code of Conduct\n---\n\nThis project has adopted the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md).\n\nResources:\n\n- [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)\n- [Code of Conduct Reporting](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)\n"
  },
  {
    "path": "docs/docs/contributing.md",
    "content": "---\ntitle: Contributing\n---\n\nThere are several ways to get involved with Eraser\n\n- Join the [mailing list](https://groups.google.com/u/1/g/eraser-dev) to get notifications for releases, security announcements, etc.\n- Participate in the [biweekly community meetings](https://docs.google.com/document/d/1Sj5u47K3WUGYNPmQHGFpb52auqZb1FxSlWAQnPADhWI/edit) to disucss development, issues, use cases, etc.\n- Join the `#eraser` channel on the [Kubernetes Slack](https://slack.k8s.io/)\n- View the [development setup instructions](https://eraser-dev.github.io/eraser/docs/development)\n\nThis project welcomes contributions and suggestions.\n\nThis project has adopted the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)."
  },
  {
    "path": "docs/docs/custom-scanner.md",
    "content": "---\ntitle: Custom Scanner\n---\n\n## Creating a Custom Scanner\nTo create a custom scanner for non-compliant images, use the following [template](https://github.com/eraser-dev/eraser-scanner-template/).\n\nIn order to customize your scanner, start by creating a `NewImageProvider()`. The ImageProvider interface can be found can be found [here](../../pkg/scanners/template/scanner_template.go). \n\nThe ImageProvider will allow you to retrieve the list of all non-running and non-excluded images from the collector container through the `ReceiveImages()` function. Process these images with your customized scanner and threshold, and use `SendImages()` to pass the images found non-compliant to the eraser container for removal. Finally, complete the scanning process by calling `Finish()`.\n\nWhen complete, provide your custom scanner image to Eraser in deployment.\n"
  },
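To make the `ReceiveImages()`/`SendImages()`/`Finish()` flow described in custom-scanner.md concrete, here is a minimal sketch of a scanner's main loop. It is not the definitive API: the zero-argument `NewImageProvider()` call and the exact method signatures are assumptions for illustration only; consult `pkg/scanners/template/scanner_template.go` for the real interface.

```go
package main

import (
	"log"

	"github.com/eraser-dev/eraser/pkg/scanners/template"
)

func main() {
	// Assumption: NewImageProvider takes optional configuration;
	// none is passed here for brevity.
	provider := template.NewImageProvider()

	// Retrieve all non-running, non-excluded images from the collector container.
	images, err := provider.ReceiveImages()
	if err != nil {
		log.Fatal(err)
	}

	// A real scanner would filter this list with its own policy and
	// thresholds; for illustration, every collected image is flagged.
	nonCompliant := images

	// Pass the non-compliant images to the eraser container for removal.
	// Assumption: the real method may take additional parameters.
	if err := provider.SendImages(nonCompliant); err != nil {
		log.Fatal(err)
	}

	// Signal that scanning is complete.
	if err := provider.Finish(); err != nil {
		log.Fatal(err)
	}
}
```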
  {
    "path": "docs/docs/customization.md",
    "content": "---\ntitle: Customization\n---\n\n## Overview\n\nEraser uses a configmap to configure its behavior. The configmap is part of the\ndeployment and it is not necessary to deploy it manually. Once deployed, the configmap\ncan be edited at any time:\n\n```bash\nkubectl edit configmap --namespace eraser-system eraser-manager-config\n```\n\nIf an eraser job is already running, the changes will not take effect until the job completes.\nThe configuration is in yaml.\n\n## Key Concepts\n\n### Basic architecture\n\nThe _manager_ runs as a pod in your cluster and manages _ImageJobs_. Think of\nan _ImageJob_ as a unit of work, performed on every node in your cluster. Each\nnode runs a sub-job. The goal of the _ImageJob_ is to assess the images on your\ncluster's nodes, and to remove the images you don't want. There are two stages:\n1. Assessment\n1. Removal.\n\n\n### Scheduling\n\nAn _ImageJob_ can either be created on-demand (see [Manual Removal](https://eraser-dev.github.io/eraser/docs/manual-removal)),\nor they can be spawned on a timer like a cron job. On-demand jobs skip the\nassessment stage and get right down to the business of removing the images you\nspecified. The behavior of an on-demand job is quite different from that of\ntimed jobs.\n\n### Fault Tolerance\n\nBecause an _ImageJob_ runs on every node in your cluster, and the conditions on\neach node may vary widely, some of the sub-jobs may fail. If you cannot\ntolerate any failure, set the `manager.imageJob.successRatio` property to\n`1.0`. If 75% success sounds good to you, set it to `0.75`. In that case, if\nfewer than 75% of the pods spawned by the _ImageJob_ report success, the job as\na whole will be marked as a failure.\n\nThis is mainly to help diagnose error conditions. As such, you can set\n`manager.imageJob.cleanup.delayOnFailure` to a long value so that logs can be\ncaptured before the spawned pods are cleaned up.\n\n### Excluding Nodes\n\nFor various reasons, you may want to prevent Eraser from scheduling pods on\ncertain nodes. To do so, the nodes can be given a special label. By default,\nthis label is `eraser.sh/cleanup.filter`, but you can configure the behavior with\nthe options under `manager.nodeFilter`. The [table](#detailed-options) provides more detail.\n\n### Configuring Components\n\nAn _ImageJob_ is made up of various sub-jobs, with one sub-job for each node.\nThese sub-jobs can be broken down further into three stages.\n1. Collection (What is on the node?)\n1. Scanning (What images conform to the policy I've provided?)\n1. Removal (Remove images based on the results of the above)\n\nOf the above stages, only Removal is mandatory. The others can be disabled.\nFurthermore, manually triggered _ImageJobs_ will skip right to removal, even if\nEraser is configured to collect and scan. Collection and Scanning will only\ntake place when:\n1. The collector and/or scanner `components` are enabled, AND\n1. The job was *not* triggered manually by creating an _ImageList_.\n\nDisabling scanner will remove all non-running images by default.\n\n### Swapping out components\n\nThe collector, scanner, and remover components can all be swapped out. This\nenables you to build and host the images yourself. In addition, the scanner's\nbehavior can be completely tailored to your needs by swapping out the default\nimage with one of your own. 
To specify the images, use the\n`components.<component>.image.repo` and `components.<component>.image.tag`,\nwhere `<component>` is one of `collector`, `scanner`, or `remover`.\n\n## Universal Options\n\nThe following portions of the configmap apply no matter how you spawn your\n_ImageJob_. The values provided below are the defaults. For more detail on\nthese options, see the [table](#detailed-options).\n\n```yaml\nmanager:\n  runtime:\n    name: containerd\n    address: unix:///run/containerd/containerd.sock\n  otlpEndpoint: \"\" # empty string disables OpenTelemetry\n  logLevel: info\n  profile:\n    enabled: false\n    port: 6060\n  imageJob:\n    successRatio: 1.0\n    cleanup:\n      delayOnSuccess: 0s\n      delayOnFailure: 24h\n  pullSecrets: [] # image pull secrets for collector/scanner/remover\n  priorityClassName: \"\" # priority class name for collector/scanner/remover\n  additionalPodLabels: {}\n  extraScannerVolumes: {}\n  extraScannerVolumeMounts: {}\n  nodeFilter:\n    type: exclude # must be either exclude|include\n    selectors:\n      - eraser.sh/cleanup.filter\n      - kubernetes.io/os=windows\ncomponents:\n  remover:\n    image:\n      repo: ghcr.io/eraser-dev/remover\n      tag: v1.0.0\n    request:\n      mem: 25Mi\n      cpu: 0\n    limit:\n      mem: 30Mi\n      cpu: 1000m\n```\n\n## Component Options\n\n```yaml\ncomponents:\n  collector:\n    enabled: true\n    image:\n      repo: ghcr.io/eraser-dev/collector\n      tag: v1.0.0\n    request:\n      mem: 25Mi\n      cpu: 7m\n    limit:\n      mem: 500Mi\n      cpu: 0\n  scanner:\n    enabled: true\n    image:\n      repo: ghcr.io/eraser-dev/eraser-trivy-scanner\n      tag: v1.0.0\n    request:\n      mem: 500Mi\n      cpu: 1000m\n    limit:\n      mem: 2Gi\n      cpu: 0\n    config: |\n      # this is the schema for the provided 'trivy-scanner'. custom scanners\n      # will define their own configuration. see the below\n  remover:\n    image:\n      repo: ghcr.io/eraser-dev/remover\n      tag: v1.0.0\n    request:\n      mem: 25Mi\n      cpu: 0\n    limit:\n      mem: 30Mi\n      cpu: 1000m\n```\n\n## Scanner Options\n\nThese options can be provided to `components.scanner.config`. They will be\npassed through  as a string to the scanner container and parsed there. If you\nwant to configure your own scanner, you must provide some way to parse this.\n\nBelow are the values recognized by the provided `eraser-trivy-scanner` image.\nValues provided below are the defaults.\n\n```yaml\ncacheDir: /var/lib/trivy # The file path inside the container to store the cache\ndbRepo: ghcr.io/aquasecurity/trivy-db # The container registry from which to fetch the trivy database\ndeleteFailedImages: true # if true, remove images for which scanning fails, regardless of why it failed\ndeleteEOLImages: true # if true, remove images that have reached their end-of-life date\nvulnerabilities:\n  ignoreUnfixed: true # consider the image compliant if there are no known fixes for the vulnerabilities found.\n  types: # a list of vulnerability types. for more info, see trivy's documentation.\n    - os\n    - library\n  securityChecks: # see trivy's documentation for more information\n    - vuln\n  severities: # in this case, only flag images with CRITICAL vulnerability for removal\n    - CRITICAL\n  ignoredStatuses: # a list of trivy statuses to ignore. 
See https://aquasecurity.github.io/trivy/v0.44/docs/configuration/filtering/#by-status.\ntimeout:\n  total: 23h # if scanning isn't completed before this much time elapses, abort the whole scan\n  perImage: 1h # if scanning a single image exceeds this time, scanning will be aborted\n```\n\n## Detailed Options\n\n| Option | Description | Default |\n| --- | --- | --- |\n| manager.runtime.name | The runtime to use for the manager's containers. Must be one of containerd, crio, or dockershim. It is assumed that your nodes are all using the same runtime, and there is currently no way to configure multiple runtimes. | containerd |\n| manager.runtime.address | The runtime socket address to use for the containers. Can provide a custom address for containerd and dockershim runtimes, but not for crio due to Trivy restrictions. | unix:///run/containerd/containerd.sock |\n| manager.otlpEndpoint | The endpoint to send OpenTelemetry data to. If empty, data will not be sent. | \"\" |\n| manager.logLevel | The log level for the manager's containers. Must be one of debug, info, warn, error, dpanic, panic, or fatal. | info |\n| manager.scheduling.repeatInterval | Use only when collector and/or scanner are enabled. This is like a cron job, and will spawn an _ImageJob_ at the interval provided. | 24h |\n| manager.scheduling.beginImmediately | If set to true, the first _ImageJob_ will run immediately. If false, the job will not be spawned until after the interval (above) has elapsed. | true |\n| manager.profile.enabled | Whether to enable profiling for the manager's containers. This is for debugging with `go tool pprof`. | false |\n| manager.profile.port | The port on which to expose the profiling endpoint. | 6060 |\n| manager.imageJob.successRatio | The ratio of successful image jobs required before a cleanup is performed. | 1.0 |\n| manager.imageJob.cleanup.delayOnSuccess | The amount of time to wait after a successful image job before performing cleanup. | 0s |\n| manager.imageJob.cleanup.delayOnFailure | The amount of time to wait after a failed image job before performing cleanup. | 24h |\n| manager.pullSecrets | The image pull secrets to use for collector, scanner, and remover containers. | [] |\n| manager.priorityClassName | The priority class to use for collector, scanner, and remover containers. | \"\" |\n| manager.additionalPodLabels | Additional labels for all pods that the controller creates at runtime. | `{}` |\n| manager.nodeFilter.type | The type of node filter to use. Must be either \"exclude\" or \"include\". | exclude |\n| manager.nodeFilter.selectors | A list of selectors used to filter nodes. | [] |\n| components.collector.enabled | Whether to enable the collector component. | true |\n| components.collector.image.repo | The repository containing the collector image. | ghcr.io/eraser-dev/collector |\n| components.collector.image.tag | The tag of the collector image. | v1.0.0 |\n| components.collector.request.mem | The amount of memory to request for the collector container. | 25Mi |\n| components.collector.request.cpu | The amount of CPU to request for the collector container. | 7m |\n| components.collector.limit.mem | The maximum amount of memory the collector container is allowed to use. | 500Mi |\n| components.collector.limit.cpu | The maximum amount of CPU the collector container is allowed to use. | 0 |\n| components.scanner.enabled | Whether to enable the scanner component. | true |\n| components.scanner.image.repo | The repository containing the scanner image. 
| ghcr.io/eraser-dev/eraser-trivy-scanner |\n| components.scanner.image.tag | The tag of the scanner image. | v1.0.0 |\n| components.scanner.request.mem | The amount of memory to request for the scanner container. | 500Mi |\n| components.scanner.request.cpu | The amount of CPU to request for the scanner container. | 1000m |\n| components.scanner.limit.mem | The maximum amount of memory the scanner container is allowed to use. | 2Gi |\n| components.scanner.limit.cpu | The maximum amount of CPU the scanner container is allowed to use. | 0 |\n| components.scanner.config | The configuration to pass to the scanner container, as a YAML string. | See YAML below |\n| components.scanner.volumes | Extra volumes for scanner. | `{}` |\n| components.remover.image.repo | The repository containing the remover image. | ghcr.io/eraser-dev/remover |\n| components.remover.image.tag | The tag of the remover image. | v1.0.0 |\n| components.remover.request.mem | The amount of memory to request for the remover container. | 25Mi |\n| components.remover.request.cpu | The amount of CPU to request for the remover container. | 0 |\n"
  },
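One practical note on the "Excluding Nodes" options documented above: with the default `nodeFilter` (`type: exclude`, selector `eraser.sh/cleanup.filter`), exempting a node is a single label away. A sketch, where `my-node` is a placeholder node name:

```bash
# Label a node so the default exclude filter skips it.
# "my-node" is a placeholder for one of your node names.
kubectl label node my-node eraser.sh/cleanup.filter=true
```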
  {
    "path": "docs/docs/exclusion.md",
    "content": "---\ntitle: Exclusion\n---\n\n## Excluding registries, repositories, and images\nEraser can exclude registries (example, `docker.io/library/*`) and also specific images with a tag (example, `docker.io/library/ubuntu:18.04`) or digest (example, `sha256:80f31da1ac7b312ba29d65080fd...`) from its removal process.\n\nTo exclude any images or registries from the removal, create configmap(s) with the label `eraser.sh/exclude.list=true` in the eraser-system namespace with a JSON file holding the excluded images.\n\n```bash\n$ cat > sample.json <<\"EOF\"\n{\n  \"excluded\": [\n    \"docker.io/library/*\",\n    \"ghcr.io/eraser-dev/test:latest\"\n  ]\n}\nEOF\n\n$ kubectl create configmap excluded --from-file=sample.json --namespace=eraser-system\n$ kubectl label configmap excluded eraser.sh/exclude.list=true -n eraser-system\n```\n\n## Exempting Nodes from the Eraser Pipeline\nExempting nodes from cleanup was added in v1.0.0. When deploying Eraser, you can specify whether there is a list of nodes you would like to `include` or `exclude` from the cleanup process using the configmap. For more information, see the section on [customization](https://eraser-dev.github.io/eraser/docs/customization).\n"
  },
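For the node-exemption workflow that exclusion.md defers to the customization page, the relevant section of the `eraser-manager-config` configmap is `manager.nodeFilter`; the defaults below are taken verbatim from deploy/eraser.yaml:

```yaml
# Default nodeFilter settings; switch type to "include" to run Eraser only
# on nodes matching the selectors instead of skipping them.
manager:
  nodeFilter:
    type: exclude # must be either exclude|include
    selectors:
      - eraser.sh/cleanup.filter
      - kubernetes.io/os=windows
```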
  {
    "path": "docs/docs/faq.md",
    "content": "---\ntitle: FAQ\n---\n## Why am I still seeing vulnerable images?\nEraser currently targets **non-running** images, so any vulnerable images that are currently running will not be removed. In addition, the default vulnerability scanning with Trivy removes images with `CRITICAL` vulnerabilities. Any images with lower vulnerabilities will not be removed. This can be configured using the [configmap](https://eraser-dev.github.io/eraser/docs/customization#scanner-options).\n\n## How is Eraser different from Kubernetes garbage collection?\nThe native garbage collection in Kubernetes works a bit differently than Eraser. By default, garbage collection begins when disk usage reaches 85%, and stops when it gets down to 80%. More details about Kubernetes garbage collection can be found in the [Kubernetes documentation](https://kubernetes.io/docs/concepts/architecture/garbage-collection/), and configuration options can be found in the [Kubelet documentation](https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/). \n\nThere are a couple core benefits to using Eraser for image cleanup:\n* Eraser can be configured to use image vulnerability data when making determinations on image removal\n* By interfacing directly with the container runtime, Eraser can clean up images that are not managed by Kubelet and Kubernetes\n"
  },
  {
    "path": "docs/docs/installation.md",
    "content": "---\ntitle: Installation\n---\n\n## Manifest\n\nTo install Eraser with the manifest file, run the following command:\n\n```bash\nkubectl apply -f https://raw.githubusercontent.com/eraser-dev/eraser/main/deploy/eraser.yaml\n```\n\n## Helm\n\nIf you'd like to install and manage Eraser with Helm, follow the install instructions [here](https://github.com/eraser-dev/eraser/blob/main/charts/eraser/README.md)\n"
  },
  {
    "path": "docs/docs/introduction.md",
    "content": "---\ntitle: Introduction\nslug: /\n---\n\n# Introduction\n\nWhen deploying to Kubernetes, it's common for pipelines to build and push images to a cluster, but it's much less common for these images to be cleaned up. This can lead to accumulating bloat on the disk, and a host of non-compliant images lingering on the nodes.\n\nThe current garbage collection process deletes images based on a percentage of load, but this process does not consider the vulnerability state of the images. **Eraser** aims to provide a simple way to determine the state of an image, and delete it if it meets the specified criteria.\n"
  },
  {
    "path": "docs/docs/manual-removal.md",
    "content": "---\ntitle: Manual Removal\n---\n\nCreate an `ImageList` and specify the images you would like to remove. In this case, the image `docker.io/library/alpine:3.7.3` will be removed.\n\n```shell\ncat <<EOF | kubectl apply -f -\napiVersion: eraser.sh/v1alpha1\nkind: ImageList\nmetadata:\n  name: imagelist\nspec:\n  images:\n    - docker.io/library/alpine:3.7.3   # use \"*\" for all non-running images\nEOF\n```\n\n> `ImageList` is a cluster-scoped resource and must be called imagelist. `\"*\"` can be specified to remove all non-running images instead of individual images.\n\nCreating an `ImageList` should trigger an `ImageJob` that will deploy Eraser pods on every node to perform the removal given the list of images.\n\n```shell\n$ kubectl get pods -n eraser-system\neraser-system        eraser-controller-manager-55d54c4fb6-dcglq   1/1     Running   0          9m8s\neraser-system        eraser-kind-control-plane                    1/1     Running   0          11s\neraser-system        eraser-kind-worker                           1/1     Running   0          11s\neraser-system        eraser-kind-worker2                          1/1     Running   0          11s\n```\n\nPods will run to completion and the images will be removed.\n\n```shell\n$ kubectl get pods -n eraser-system\neraser-system        eraser-controller-manager-6d6d5594d4-phl2q   1/1     Running     0          4m16s\neraser-system        eraser-kind-control-plane                    0/1     Completed   0          22s\neraser-system        eraser-kind-worker                           0/1     Completed   0          22s\neraser-system        eraser-kind-worker2                          0/1     Completed   0          22s\n```\n\nThe `ImageList` custom resource status field will contain the status of the last job. The success and failure counts indicate the number of nodes the Eraser agent was run on.\n\n```shell\n$ kubectl describe ImageList imagelist\n...\nStatus:\n  Failed:     0\n  Success:    3\n  Timestamp:  2022-02-25T23:41:55Z\n...\n```\n\nVerify the unused images are removed.\n\n```shell\n$ docker exec kind-worker ctr -n k8s.io images list | grep alpine\n```\n\nIf the image has been successfully removed, there will be no output.\n"
  },
  {
    "path": "docs/docs/metrics.md",
    "content": "---\ntitle: Metrics\n---\n\nTo view Eraser metrics, you will need to deploy an Open Telemetry collector in the 'eraser-system' namespace, and an exporter. An example collector with a Prometheus exporter is [otelcollector.yaml](https://github.com/eraser-dev/eraser/blob/main/test/e2e/test-data/otelcollector.yaml), and the endpoint can be specified using the [configmap](https://eraser-dev.github.io/eraser/docs/customization#universal-options). In this example, we are logging the collected data to the otel-collector pod, and exporting metrics through Prometheus at 'http://localhost:8889/metrics', but a separate exporter can also be configured.\n\nBelow is the list of metrics provided by Eraser per run:\n\n#### Eraser\n```yaml\n- count\n\t- name: images_removed_run_total\n\t\t- description: Total images removed by eraser\n```\n\n #### Scanner\n ```yaml\n- count\n\t- name: vulnerable_images_run_total\n\t\t- description: Total vulnerable images detected\n ```\n\n #### ImageJob\n ```yaml\n - count\n\t- name: imagejob_run_total\n\t\t- description: Total ImageJobs scheduled\n\t- name: pods_completed_run_total\n\t\t- description: Total pods completed\n\t-  name: pods_failed_run_total\n\t\t- description: Total pods failed\n- summary\n\t- name: imagejob_duration_run_seconds\n\t\t- description: Total time for ImageJobs scheduled to complete\n```\n"
  },
  {
    "path": "docs/docs/quick-start.md",
    "content": "---\ntitle: Quick Start\n---\n\nThis tutorial demonstrates the functionality of Eraser and validates that non-running images are removed succesfully.\n\n## Deploy a DaemonSet\n\nAfter following the [install instructions](installation.md), we'll apply a demo `DaemonSet`. For illustrative purposes, a DaemonSet is applied and deleted so the non-running images remain on all nodes. The alpine image with the `3.7.3` tag will be used in this example. This is an image with a known critical vulnerability.\n\nFirst, apply the `DaemonSet`:\n\n```shell\ncat <<EOF | kubectl apply -f -\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n  name: alpine\nspec:\n  selector:\n    matchLabels:\n      app: alpine\n  template:\n    metadata:\n      labels:\n        app: alpine\n    spec:\n      containers:\n      - name: alpine\n        image: docker.io/library/alpine:3.7.3\nEOF\n```\n\nNext, verify that the Pods are running or completed. After the `alpine` Pods complete, you may see a `CrashLoopBackoff` status. This is expected behavior from the `alpine` image and can be ignored for the tutorial.\n\n```shell\n$ kubectl get pods\nNAME          READY   STATUS      RESTARTS     AGE\nalpine-2gh9c   1/1     Running     1 (3s ago)   6s\nalpine-hljp9   0/1     Completed   1 (3s ago)   6s\n```\n\nDelete the DaemonSet:\n\n```shell\n$ kubectl delete daemonset alpine\n```\n\nVerify that the Pods have been deleted:\n\n```shell\n$ kubectl get pods\nNo resources found in default namespace.\n```\n\nTo verify that the `alpine` images are still on the nodes, exec into one of the worker nodes and list the images. If you are not using a kind cluster or Docker for your container nodes, you will need to adjust the exec command accordingly.\n\nList the nodes:\n\n```shell\n$ kubectl get nodes\nNAME                 STATUS   ROLES           AGE   VERSION\nkind-control-plane   Ready    control-plane   45m   v1.24.0\nkind-worker          Ready    <none>          45m   v1.24.0\nkind-worker2         Ready    <none>          44m   v1.24.0\n```\n\nList the images then filter for `alpine`:\n\n```shell\n$ docker exec kind-worker ctr -n k8s.io images list | grep alpine\ndocker.io/library/alpine:3.7.3                                                                             application/vnd.docker.distribution.manifest.list.v2+json sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10 2.0 MiB   linux/386,linux/amd64,linux/arm/v6,linux/arm64/v8,linux/ppc64le,linux/s390x  io.cri-containerd.image=managed\ndocker.io/library/alpine@sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10           application/vnd.docker.distribution.manifest.list.v2+json sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10 2.0 MiB   linux/386,linux/amd64,linux/arm/v6,linux/arm64/v8,linux/ppc64le,linux/s390x  io.cri-containerd.image=managed\n\n```\n\n## Automatically Cleaning Images\n\nAfter deploying Eraser, it will automatically clean images in a regular interval. This interval can be set using the `manager.scheduling.repeatInterval` setting in the [configmap](https://eraser-dev.github.io/eraser/docs/customization#detailed-options). The default interval is 24 hours (`24h`). 
Valid time units for the interval are \"ns\", \"us\" (or \"µs\"), \"ms\", \"s\", \"m\", and \"h\".\n\nEraser will schedule eraser pods on each node in the cluster, and each pod will contain 3 containers (collector, scanner, and remover) that will run to completion.\n\n```shell\n$ kubectl get pods -n eraser-system\nNAMESPACE            NAME                                         READY   STATUS      RESTARTS         AGE\neraser-system        eraser-kind-control-plane-sb789           0/3     Completed   0                26m\neraser-system        eraser-kind-worker-j84hm                  0/3     Completed   0                26m\neraser-system        eraser-kind-worker2-4lbdr                 0/3     Completed   0                26m\neraser-system        eraser-controller-manager-86cdb4cbf9-x8d7q   1/1     Running     0                26m\n```\n\nThe collector container sends the list of all images to the scanner container, which scans them and reports non-compliant images to the remover container, which in turn removes those that are non-running. Once all pods are completed, they will be automatically cleaned up.\n\n> If you want to remove all the images periodically, you can skip the scanner container by setting the `components.scanner.enabled` value to `false` using the [configmap](https://eraser-dev.github.io/eraser/docs/customization#detailed-options). In this case, each pod will hold 2 containers: collector and remover.\n\n```shell\n$ kubectl get pods -n eraser-system\nNAMESPACE            NAME                                         READY   STATUS      RESTARTS         AGE\neraser-system        eraser-kind-control-plane-ksk2b           0/2     Completed   0                50s\neraser-system        eraser-kind-worker-cpgqc                  0/2     Completed   0                50s\neraser-system        eraser-kind-worker2-k25df                 0/2     Completed   0                50s\neraser-system        eraser-controller-manager-86cdb4cbf9-x8d7q   1/1     Running     0                55s\n```\n"
  },
  {
    "path": "docs/docs/release-management.md",
    "content": "# Release Management\n\n## Overview\n\nThis document describes Eraser project release management, which includes release versioning, supported releases, and supported upgrades.\n\n## Legend\n\n- **X.Y.Z** refers to the version (git tag) of Eraser that is released. This is the version of the Eraser images and the Chart version.\n- **Breaking changes** refer to schema changes, flag changes, and behavior changes of Eraser that may require a clean installation during upgrade, and it may introduce changes that could break backward compatibility.\n- **Milestone** should be designed to include feature sets to accommodate 2 months release cycles including test gates. GitHub's milestones are used by maintainers to manage each release. PRs and Issues for each release should be created as part of a corresponding milestone.\n- **Patch releases** refer to applicable fixes, including security fixes, may be backported to support releases, depending on severity and feasibility.\n- **Test gates** should include soak tests and upgrade tests from the last minor version.\n\n## Release Versioning\n\nAll releases will be of the form _vX.Y.Z_ where X is the major version, Y is the minor version and Z is the patch version. This project strictly follows semantic versioning.\n\nThe rest of the doc will cover the release process for the following kinds of releases:\n\n**Major Releases**\n\nNo plan to move to 2.0.0 unless there is a major design change like an incompatible API change in the project\n\n**Minor Releases**\n\n- X.Y.0-alpha.W, W >= 0 (Branch : main)\n    - Released as needed before we cut a beta X.Y release\n    - Alpha release, cut from master branch\n- X.Y.0-beta.W, W >= 0 (Branch : main)\n    - Released as needed before we cut a stable X.Y release\n    - More stable than the alpha release to signal users to test things out\n    - Beta release, cut from master branch\n- X.Y.0-rc.W, W >= 0 (Branch : main)\n    - Released as needed before we cut a stable X.Y release\n    - soak for ~ 2 weeks before cutting a stable release\n    - Release candidate release, cut from master branch\n- X.Y.0 (Branch: main)\n    - Released as needed\n    - Stable release, cut from master when X.Y milestone is complete\n\n**Patch Releases**\n\n- Patch Releases X.Y.Z, Z > 0 (Branch: release-X.Y, only cut when a patch is needed)\n    - No breaking changes\n    - Applicable fixes, including security fixes, may be cherry-picked from master into the latest supported minor release-X.Y branches.\n    - Patch release, cut from a release-X.Y branch\n\n## Supported Releases\n\nApplicable fixes, including security fixes, may be cherry-picked into the release branch, depending on severity and feasibility. Patch releases are cut from that branch as needed.\n\nWe expect users to stay reasonably up-to-date with the versions of Eraser they use in production, but understand that it may take time to upgrade. We expect users to be running approximately the latest patch release of a given minor release and encourage users to upgrade as soon as possible.\n\nWe expect to \"support\" n (current) and n-1 major.minor releases. \"Support\" means we expect users to be running that version in production. 
For example, when v1.2.0 comes out, v1.0.x will no longer be supported for patches, and we encourage users to upgrade to a supported version as soon as possible.\n\n## Supported Kubernetes Versions\n\nEraser is assumed to be compatible with the [current Kubernetes Supported Versions](https://kubernetes.io/releases/patch-releases/#detailed-release-history-for-active-branches) per [Kubernetes Supported Versions policy](https://kubernetes.io/releases/version-skew-policy/).\n\nFor example, if Eraser _supported_ versions are v1.2 and v1.1, and Kubernetes _supported_ versions are v1.22, v1.23, v1.24, then all supported Eraser versions (v1.2, v1.1) are assumed to be compatible with all supported Kubernetes versions (v1.22, v1.23, v1.24). If Kubernetes v1.25 is released later, then Eraser v1.2 and v1.1 will be assumed to be compatible with v1.25 if those Eraser versions are still supported at that time.\n\nIf you choose to use Eraser with a version of Kubernetes that it does not support, you are using it at your own risk.\n\n## Acknowledgement\n\nThis document builds on the ideas and implementations of release processes from projects like Kubernetes and Helm."
  },
  {
    "path": "docs/docs/releasing.md",
    "content": "---\ntitle: Releasing\n---\n\n## Create Release Pull Request\n\n1. Go to `create_release_pull_request` workflow under actions.\n2. Select run workflow, and use the workflow from your branch. \n3. Input release version with the semantic version identifying the release.\n4. Click run workflow and review the PR created by github-actions.\n\n# Releasing\n\n5. Once the PR is merged to `main`, tag that commit with release version and push tags to remote repository.\n\n   ```\n   git checkout <BRANCH NAME>\n   git pull origin <BRANCH NAME>\n   git tag -a <NEW VERSION> -m '<NEW VERSION>'\n   git push origin <NEW VERSION>\n   ```\n6. Pushing the release tag will trigger GitHub Actions to trigger `release` job.\n   This will build the `ghcr.io/eraser-dev/remover`, `ghcr.io/eraser-dev/eraser-manager`, `ghcr.io/eraser-dev/collector`, and `ghcr.io/eraser-dev/eraser-trivy-scanner` images automatically, then publish the new release tag.\n\n## Publishing\n\n1. GitHub Action will create a new release, review and edit it at https://github.com/eraser-dev/eraser/releases\n\n## Notifying\n\n1. Send an email to the [Eraser mailing list](https://groups.google.com/g/eraser-dev) announcing the release, with links to GitHub.\n2. Post a message on the [Eraser Slack channel](https://kubernetes.slack.com/archives/C03Q8KV8YQ4) with the same information."
  },
  {
    "path": "docs/docs/setup.md",
    "content": "---\ntitle: Setup\n---\n\n# Development Setup\n\nThis document describes the steps to get started with development.\nYou can either utilize [Codespaces](https://docs.github.com/en/codespaces/overview) or setup a local environment.\n\n## Local Setup\n\n### Prerequisites:\n\n- [go](https://go.dev/) with version 1.17 or later.\n- [docker](https://docs.docker.com/get-docker/)\n- [kind](https://kind.sigs.k8s.io/)\n- `make`\n\n### Get things running\n\n- Get dependencies with `go get`\n\n- This project uses `make`. You can utilize `make help` to see available targets. For local deployment make targets help to build, test and deploy.\n\n### Making changes\n\nPlease refer to [Development Reference](#development-reference) for more details on the specific commands.\n\nTo test your changes on a cluster:\n\n```bash\n# generate necessary api files (optional - only needed if changes to api folder).\nmake generate\n\n# build applicable images\nmake docker-build-manager MANAGER_IMG=eraser-manager:dev\nmake docker-build-remover REMOVER_IMG=remover:dev\nmake docker-build-collector COLLECTOR_IMG=collector:dev\nmake docker-build-trivy-scanner TRIVY_SCANNER_IMG=eraser-trivy-scanner:dev\n\n# make sure updated image is present on cluster (e.g., see kind example below)\nkind load docker-image \\\n        eraser-manager:dev \\\n        eraser-trivy-scanner:dev \\\n        remover:dev \\\n        collector:dev\n\nmake manifests\nmake deploy\n\n# to remove the deployment\nmake undeploy\n```\n\nTo test your changes to manager locally:\n\n```bash\nmake run\n```\n\nExample Output:\n\n```\nyou@local:~/eraser$ make run\ndocker build . \\\n        -t eraser-tooling \\\n        -f build/tooling/Dockerfile\n[+] Building 7.8s (8/8) FINISHED\n => => naming to docker.io/library/eraser-tooling                           0.0s\ndocker run -v /home/eraser/config:/config -w /config/manager \\\n        registry.k8s.io/kustomize/kustomize:v3.8.9 edit set image controller=eraser-manager:dev\ndocker run -v /home/eraser:/eraser eraser-tooling controller-gen \\\n        crd \\\n        rbac:roleName=manager-role \\\n        webhook \\\n        paths=\"./...\" \\\n        output:crd:artifacts:config=config/crd/bases\nrm -rf manifest_staging\nmkdir -p manifest_staging/deploy\ndocker run --rm -v /home/eraser:/eraser \\\n        registry.k8s.io/kustomize/kustomize:v3.8.9 build \\\n        /eraser/config/default -o /eraser/manifest_staging/deploy/eraser.yaml\ndocker run -v /home/eraser:/eraser eraser-tooling controller-gen object:headerFile=\"hack/boilerplate.go.txt\" paths=\"./...\"\ngo fmt ./...\ngo vet ./...\ngo run ./main.go\n{\"level\":\"info\",\"ts\":1652985685.1663408,\"logger\":\"controller-runtime.metrics\",\"msg\":\"Metrics server is starting to listen\",\"addr\":\":8080\"}\n...\n```\n\n## Development Reference\n\nEraser is using tooling from [kubebuilder](https://github.com/kubernetes-sigs/kubebuilder). For Eraser this tooling is containerized into the `eraser-tooling` image. The `make` targets can use this tooling and build the image when necessary.\n\nYou can override the default configuration using environment variables. 
Below you can find a reference of targets and configuration options.\n\n### Common Configuration\n\n| Environment Variable | Description                                                                                   |\n| -------------------- | --------------------------------------------------------------------------------------------- |\n| VERSION              | Specifies the version (i.e., the image tag) of eraser to be used.                             |\n| MANAGER_IMG          | Defines the image url for the Eraser manager. Used for tagging, pulling and pushing the image |\n| REMOVER_IMG          | Defines the image url for the Eraser remover. Used for tagging, pulling and pushing the image |\n| COLLECTOR_IMG        | Defines the image url for the Collector. Used for tagging, pulling and pushing the image      |\n\n### Linting\n\n- `make lint`\n\nLints the go code.\n\n| Environment Variable | Description                                             |\n| -------------------- | ------------------------------------------------------- |\n| GOLANGCI_LINT        | Specifies the go linting binary to be used for linting. |\n\n### Development\n\n- `make generate`\n\nGenerates necessary files for the k8s api stored under `api/v1alpha1/zz_generated.deepcopy.go`. See the [kubebuilder docs](https://book.kubebuilder.io/cronjob-tutorial/other-api-files.html) for details.\n\n- `make manifests`\n\nGenerates the eraser deployment yaml files under `manifest_staging/deploy`.\n\nConfiguration Options:\n\n| Environment Variable | Description                                         |\n| -------------------- | --------------------------------------------------- |\n| REMOVER_IMG          | Defines the image url for the Eraser remover.       |\n| MANAGER_IMG          | Defines the image url for the Eraser manager.       |\n| KUSTOMIZE_VERSION    | Defines the Kustomize version for generating manifests. |\n\n- `make test`\n\nRuns the unit tests for the eraser project.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                 |\n| -------------------- | ----------------------------------------------------------- |\n| ENVTEST              | Specifies the envtest setup binary.                         |\n| ENVTEST_K8S_VERSION  | Specifies the Kubernetes version for envtest setup command. |\n\n- `make e2e-test`\n\nRuns e2e tests on a cluster.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                                                   |\n| -------------------- | ------------------------------------------------------------------------------------------------------------- |\n| REMOVER_IMG          | Eraser remover image to be used for e2e test.                                                                 |\n| MANAGER_IMG          | Eraser manager image to be used for e2e test.                                                                 |\n| KUBERNETES_VERSION   | Kubernetes version for e2e test.                                                                              |\n| TEST_COUNT           | Sets repetition for test. Please refer to [go docs](https://pkg.go.dev/cmd/go#hdr-Testing_flags) for details. |\n| TIMEOUT              | Sets timeout for test. Please refer to [go docs](https://pkg.go.dev/cmd/go#hdr-Testing_flags) for details.    |\n| TESTFLAGS            | Sets additional test flags.                                                                                   |\n\n### Build\n\n- `make build`\n\nBuilds the eraser manager binaries.\n\n- `make run`\n\nRuns the eraser manager on your local machine.\n\n- `make docker-build-manager`\n\nBuilds the docker image for the eraser manager.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                                                                                            |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| CACHE_FROM           | Sets the target of the buildx --cache-from flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-from). |\n| CACHE_TO             | Sets the target of the buildx --cache-to flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-to).     |\n| PLATFORM             | Sets the target platform for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#platform).               |\n| OUTPUT_TYPE          | Sets the output for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#output).                          |\n| MANAGER_IMG          | Specifies the target repository, image name and tag for building the image.                                                                            |\n\n- `make docker-push-manager`\n\nPushes the docker image for the eraser manager.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                              |\n| -------------------- | ------------------------------------------------------------------------ |\n| MANAGER_IMG          | Specifies the target repository, image name and tag of the image to push. |\n\n- `make docker-build-remover`\n\nBuilds the docker image for the eraser remover.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                                                                                            |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| CACHE_FROM           | Sets the target of the buildx --cache-from flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-from). |\n| CACHE_TO             | Sets the target of the buildx --cache-to flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-to).     |\n| PLATFORM             | Sets the target platform for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#platform).               |\n| OUTPUT_TYPE          | Sets the output for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#output).                          |\n| REMOVER_IMG          | Specifies the target repository, image name and tag for building the image.                                                                            |\n\n- `make docker-push-remover`\n\nPushes the docker image for the eraser remover.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                              |\n| -------------------- | ------------------------------------------------------------------------ |\n| REMOVER_IMG          | Specifies the target repository, image name and tag of the image to push. |\n\n- `make docker-build-collector`\n\nBuilds the docker image for the eraser collector.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                                                                                            |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| CACHE_FROM           | Sets the target of the buildx --cache-from flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-from). |\n| CACHE_TO             | Sets the target of the buildx --cache-to flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-to).     |\n| PLATFORM             | Sets the target platform for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#platform).               |\n| OUTPUT_TYPE          | Sets the output for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#output).                          |\n| COLLECTOR_IMG        | Specifies the target repository, image name and tag for building the image.                                                                            |\n\n- `make docker-push-collector`\n\nPushes the docker image for the eraser collector.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                              |\n| -------------------- | ------------------------------------------------------------------------ |\n| COLLECTOR_IMG        | Specifies the target repository, image name and tag of the image to push. |\n\n### Deployment\n\n- `make install`\n\nInstalls CRDs into the K8s cluster specified in ~/.kube/config.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                      |\n| -------------------- | ---------------------------------------------------------------- |\n| KUSTOMIZE_VERSION    | Kustomize version used to generate k8s resources for deployment. |\n\n- `make uninstall`\n\nUninstalls CRDs from the K8s cluster specified in ~/.kube/config.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                      |\n| -------------------- | ---------------------------------------------------------------- |\n| KUSTOMIZE_VERSION    | Kustomize version used to generate k8s resources for deployment. |\n\n- `make deploy`\n\nDeploys eraser to the cluster specified in ~/.kube/config.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                            |\n| -------------------- | ---------------------------------------------------------------------- |\n| KUSTOMIZE_VERSION    | Kustomize version used to generate k8s resources for deployment.       |\n| MANAGER_IMG          | Specifies the eraser manager image version to be used for deployment.  |\n\n- `make undeploy`\n\nUndeploys the controller from the K8s cluster specified in ~/.kube/config.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                               |\n| -------------------- | ------------------------------------------------------------------------- |\n| KUSTOMIZE_VERSION    | Kustomize version used to generate k8s resources that need to be removed. |\n\n### Release\n\n- `make release-manifest`\n\nGenerates k8s manifest files for a release.\n\nConfiguration Options:\n\n| Environment Variable | Description                          |\n| -------------------- | ------------------------------------ |\n| NEWVERSION           | Sets the new version in the Makefile |\n\n- `make promote-staging-manifest`\n\nPromotes the k8s deployment yaml files to release.\n"
  },
  {
    "path": "docs/docs/trivy.md",
    "content": "---\ntitle: Trivy\n---\n\n## Trivy Provider Options\nThe Trivy provider is used in Eraser for image scanning and detecting vulnerabilities. See [Customization](https://eraser-dev.github.io/eraser/docs/customization#scanner-options) for more details on configuring the scanner.\n"
  },
  {
    "path": "docs/docusaurus.config.js",
    "content": "// @ts-check\n// Note: type annotations allow type checking and IDEs autocompletion\n\nconst lightCodeTheme = require('prism-react-renderer').themes.github;\nconst darkCodeTheme = require('prism-react-renderer').themes.dracula;\n\n/** @type {import('@docusaurus/types').Config} */\nconst config = {\n  title: 'Eraser Docs',\n  url: 'https://eraser-dev.github.io',\n  baseUrl: '/eraser/docs/',\n  onBrokenLinks: 'warn',\n  onBrokenMarkdownLinks: 'warn',\n  favicon: 'img/favicon.ico',\n  trailingSlash: false,\n\n  // GitHub pages deployment config.\n  // If you aren't using GitHub pages, you don't need these.\n  organizationName: 'eraser-dev', // Usually your GitHub org/user name.\n  projectName: 'Eraser', // Usually your repo name.\n  deploymentBranch: 'gh-pages',\n\n  // Even if you don't use internalization, you can use this field to set useful\n  // metadata like html lang. For example, if your site is Chinese, you may want\n  // to replace \"en\" with \"zh-Hans\".\n  i18n: {\n    defaultLocale: 'en',\n    locales: ['en'],\n  },\n\n  presets: [\n    [\n      'classic',\n      /** @type {import('@docusaurus/preset-classic').Options} */\n      ({\n        docs: {\n          sidebarPath: require.resolve('./sidebars.js'),\n          routeBasePath: '/'\n        },\n        blog: false,\n        theme: {\n          customCss: require.resolve('./src/css/custom.css'),\n        },\n        gtag: {\n          trackingID: 'G-QV5PNCJ560',\n          anonymizeIP: true,\n        },\n      }),\n    ],\n  ],\n\n  themeConfig:\n    /** @type {import('@docusaurus/preset-classic').ThemeConfig} */\n    ({\n      navbar: {\n        title: 'Eraser',\n        logo: {\n          alt: 'Eraser Logo',\n          src: 'img/eraser.svg',\n        },\n        items: [\n          {\n            type: 'docsVersionDropdown',\n            position: 'right',\n          },\n          {\n            href: 'https://github.com/eraser-dev/eraser',\n            position: 'right',\n            className: 'header-github-link',\n            'aria-label': 'GitHub repository',\n          },\n        ],\n      },\n      footer: {\n        style: 'dark',\n        copyright: `Copyright © ${new Date().getFullYear()} Linux Foundation. The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our <a href=\"https://www.linuxfoundation.org/legal/trademark-usage\">Trademark Usage page</a>.`,\n      },\n      prism: {\n        theme: lightCodeTheme,\n        darkTheme: darkCodeTheme,\n      },\n      algolia: {\n        appId: 'X8MU4GEC0G',\n        apiKey: 'aaca7901c07e616a7ec2e1e1f9670809',\n        indexName: 'eraser',\n      },\n      colorMode: {\n        defaultMode: 'light',\n        disableSwitch: false,\n        respectPrefersColorScheme: true,\n      }\n    }),\n};\n\nmodule.exports = config;\n"
  },
  {
    "path": "docs/package.json",
    "content": "{\n  \"name\": \"website\",\n  \"version\": \"0.0.0\",\n  \"private\": true,\n  \"scripts\": {\n    \"docusaurus\": \"docusaurus\",\n    \"start\": \"docusaurus start\",\n    \"build\": \"docusaurus build\",\n    \"swizzle\": \"docusaurus swizzle\",\n    \"deploy\": \"docusaurus deploy\",\n    \"clear\": \"docusaurus clear\",\n    \"serve\": \"docusaurus serve\",\n    \"write-translations\": \"docusaurus write-translations\",\n    \"write-heading-ids\": \"docusaurus write-heading-ids\"\n  },\n  \"dependencies\": {\n    \"@docusaurus/core\": \"^3.9.2\",\n    \"@docusaurus/preset-classic\": \"^3.9.2\",\n    \"@docusaurus/theme-classic\": \"^3.9.2\",\n    \"@mdx-js/react\": \"^3.1.1\",\n    \"clsx\": \"^2.1.1\",\n    \"got\": \"^14.6.5\",\n    \"js-yaml\": \"^3.14.2\",\n    \"on-headers\": \"^1.1.0\",\n    \"path-to-regexp\": \"^1.8.0\",\n    \"prism-react-renderer\": \"^2.4.1\",\n    \"react\": \"^19.2.0\",\n    \"react-dom\": \"^19.2.0\",\n    \"react-router\": \"^6.28.1\",\n    \"react-router-dom\": \"^6.28.1\",\n    \"trim\": \"^1.0.1\"\n  },\n  \"devDependencies\": {\n    \"@docusaurus/module-type-aliases\": \"3.9.2\"\n  },\n  \"browserslist\": {\n    \"production\": [\n      \">0.5%\",\n      \"not dead\",\n      \"not op_mini all\"\n    ],\n    \"development\": [\n      \"last 1 chrome version\",\n      \"last 1 firefox version\",\n      \"last 1 safari version\"\n    ]\n  },\n  \"resolutions\": {\n    \"trim\": \"^0.0.3\",\n    \"got\": \"^11.8.5\",\n    \"js-yaml\": \"^3.14.2\",\n    \"path-to-regexp\": \"^1.8.0\",\n    \"on-headers\": \"^1.1.0\"\n  }\n}\n"
  },
  {
    "path": "docs/sidebars.js",
    "content": "/**\n * Creating a sidebar enables you to:\n - create an ordered group of docs\n - render a sidebar for each doc of that group\n - provide next/previous navigation\n\n The sidebars can be generated from the filesystem, or explicitly defined here.\n\n Create as many sidebars as you want.\n */\n\n// @ts-check\n\n/** @type {import('@docusaurus/plugin-content-docs').SidebarsConfig} */\nconst sidebars = {\n  sidebar: [\n    'introduction',\n    'installation',\n    'quick-start',\n    'architecture',\n    {\n      type: 'category',\n      label: 'Topics',\n      collapsible: true,\n      collapsed: false,\n      items: [\n        'manual-removal',\n        'exclusion',\n        'customization',\n        'metrics'\n      ]\n    },\n    {\n      type: 'category',\n      label: 'Development',\n      collapsible: true,\n      collapsed: false,\n      items: [\n        'setup',\n        'releasing',\n      ]\n    },\n    {\n      type: 'category',\n      label: 'Scanning',\n      collapsible: true,\n      collapsed: false,\n      items: [\n        'custom-scanner',\n        'trivy',\n      ]\n    },\n    'faq',\n    'contributing',\n    'code-of-conduct',\n  ]\n};\n\nmodule.exports = sidebars;\n"
  },
  {
    "path": "docs/src/css/custom.css",
    "content": "/**\n * Any CSS included here will be global. The classic template\n * bundles Infima by default. Infima is a CSS framework designed to\n * work well for content-centric websites.\n */\n\n/* You can override the default Infima variables here. */\n:root {\n  --ifm-color-primary: #2e8555;\n  --ifm-color-primary-dark: #29784c;\n  --ifm-color-primary-darker: #277148;\n  --ifm-color-primary-darkest: #205d3b;\n  --ifm-color-primary-light: #33925d;\n  --ifm-color-primary-lighter: #359962;\n  --ifm-color-primary-lightest: #3cad6e;\n  --ifm-code-font-size: 95%;\n  --docusaurus-highlighted-code-line-bg: rgba(0, 0, 0, 0.1);\n}\n\n/* For readability concerns, you should choose a lighter palette in dark mode. */\n[data-theme='dark'] {\n  --ifm-color-primary: #25c2a0;\n  --ifm-color-primary-dark: #21af90;\n  --ifm-color-primary-darker: #1fa588;\n  --ifm-color-primary-darkest: #1a8870;\n  --ifm-color-primary-light: #29d5b0;\n  --ifm-color-primary-lighter: #32d8b4;\n  --ifm-color-primary-lightest: #4fddbf;\n  --docusaurus-highlighted-code-line-bg: rgba(0, 0, 0, 0.3);\n}\n\n.header-github-link:hover {\n  opacity: 0.6;\n}\n\n.header-github-link:before {\n  content: '';\n  width: 24px;\n  height: 24px;\n  display: flex;\n  background: url(\"data:image/svg+xml,%3Csvg viewBox='0 0 24 24' xmlns='http://www.w3.org/2000/svg'%3E%3Cpath d='M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12'/%3E%3C/svg%3E\")\n    no-repeat;\n}\n\nhtml[data-theme='dark'] .header-github-link:before {\n  background: url(\"data:image/svg+xml,%3Csvg viewBox='0 0 24 24' xmlns='http://www.w3.org/2000/svg'%3E%3Cpath fill='white' d='M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12'/%3E%3C/svg%3E\")\n    no-repeat;\n}"
  },
  {
    "path": "docs/static/.nojekyll",
    "content": ""
  },
  {
    "path": "docs/versioned_docs/version-v0.4.x/architecture.md",
    "content": "---\ntitle: Architecture\n---\nAt a high level, Eraser has two main modes of operation: manual and automated.\n\nManual image removal involves supplying a list of images to remove; Eraser then\ndeploys pods to clean up the images you supplied.\n\nAutomated image removal runs on a timer. By default, the automated process\nremoves images based on the results of a vulnerability scan. The default\nvulnerability scanner is Trivy, but others can be provided in its place. Or,\nthe scanner can be disabled altogether, in which case Eraser acts as a garbage\ncollector -- it will remove all non-running images in your cluster.\n\n## Manual image cleanup\n\nNote: metrics are not yet implemented in Eraser v0.4.x, but will be available in the upcoming v1.0.0 release.\n\n<img title=\"manual cleanup\" src=\"/eraser/docs/img/eraser_manual.png\" />\n\n## Automated analysis, scanning, and cleanup\n\n<img title=\"automated cleanup\" src=\"/eraser/docs/img/eraser_timer.png\" />\n"
  },
  {
    "path": "docs/versioned_docs/version-v0.4.x/code-of-conduct.md",
    "content": "---\ntitle: Code of Conduct\n---\n\nThis project has adopted the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md).\n\nResources:\n\n- [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)\n- [Code of Conduct Reporting](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)\n"
  },
  {
    "path": "docs/versioned_docs/version-v0.4.x/contributing.md",
    "content": "---\ntitle: Contributing\n---\n\nThere are several ways to get involved with Eraser\n\n- Join the [mailing list](https://groups.google.com/u/1/g/eraser-dev) to get notifications for releases, security announcements, etc.\n- Participate in the [biweekly community meetings](https://docs.google.com/document/d/1Sj5u47K3WUGYNPmQHGFpb52auqZb1FxSlWAQnPADhWI/edit) to disucss development, issues, use cases, etc.\n- Join the `#eraser` channel on the [Kubernetes Slack](https://slack.k8s.io/)\n- View the [development setup instructions](https://eraser-dev.github.io/eraser/docs/development)\n\nThis project welcomes contributions and suggestions.\n\nThis project has adopted the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md).\n"
  },
  {
    "path": "docs/versioned_docs/version-v0.4.x/custom-scanner.md",
    "content": "---\ntitle: Custom Scanner\n---\n\n## Creating a Custom Scanner\nTo create a custom scanner for non-compliant images, provide your scanner image to Eraser in deployment.\n\nIn order for the custom scanner to communicate with the collector and eraser containers, utilize `ReadCollectScanPipe()` to get the list of all non-running images to scan from collector. Then, use `WriteScanErasePipe()` to pass the images found non-compliant by your scanner to eraser for removal. Both functions can be found in [util](../../../pkg/utils/utils.go).\n"
  },
  {
    "path": "docs/versioned_docs/version-v0.4.x/customization.md",
    "content": "---\ntitle: Customization\n---\n\nBy default, successful jobs will be deleted after a period of time. You can change this behavior by setting the following flags in the eraser-controller-manager:\n\n- `--job-cleanup-on-success-delay`: Duration to delay job deletion after successful runs. 0 means no delay. Defaults to `0`.\n- `--job-cleanup-on-error-delay`: Duration to delay job deletion after errored runs. 0 means no delay. Defaults to `24h`. \n- `--job-success-ratio`: Ratio of successful/total runs to consider a job successful. 1.0 means all runs must succeed. Defaults to `1.0`.  \n\nFor duration, valid time units are \"ns\", \"us\" (or \"µs\"), \"ms\", \"s\", \"m\", \"h\".\n"
  },
  {
    "path": "docs/versioned_docs/version-v0.4.x/exclusion.md",
    "content": "---\ntitle: Exclusion\n---\n\n## Excluding registries, repositories, and images\nEraser can exclude registries (example, `docker.io/library/*`) and also specific images with a tag (example, `docker.io/library/ubuntu:18.04`) or digest (example, `sha256:80f31da1ac7b312ba29d65080fd...`) from its removal process.\n\nTo exclude any images or registries from the removal, create configmap(s) with the label `eraser.sh/exclude.list=true` in the eraser-system namespace with a JSON file holding the excluded images.\n\n```bash\n$ cat > sample.json <<EOF\n{\"excluded\": [\"docker.io/library/*\", \"ghcr.io/eraser-dev/test:latest\"]}\nEOF\n\n$ kubectl create configmap excluded --from-file=excluded=sample.json --namespace=eraser-system\n$ kubectl label configmap excluded eraser.sh/exclude.list=true -n eraser-system\n```\n\n## Exempting Nodes from the Eraser Pipeline\nExempting nodes with `--filter-nodes` is added in v0.3.0. When deploying Eraser, you can specify whether there is a list of nodes you would like to `include` or `exclude` from the cleanup process using the `--filter-nodes` argument. \n\n_See [Eraser Helm Chart](https://github.com/eraser-dev/eraser/blob/main/charts/eraser/README.md) for more information on deployment._\n\nNodes with the selector `eraser.sh/cleanup.filter` will be filtered accordingly. \n- If `include` is provided, eraser and collector pods will only be scheduled on nodes with the selector `eraser.sh/cleanup.filter`. \n- If `exclude` is provided, eraser and collector pods will be scheduled on all nodes besides those with the selector `eraser.sh/cleanup.filter`.\n\nUnless specified, the default value of `--filter-nodes` is `exclude`. Because Windows nodes are not supported, they will always be excluded regardless of the `eraser.sh/cleanup.filter` label or the value of `--filter-nodes`.\n\nAdditional node selectors can be provided through the `--filter-nodes-selector` flag.\n"
  },
  {
    "path": "docs/versioned_docs/version-v0.4.x/faq.md",
    "content": "---\ntitle: FAQ\n---\n## Why am I still seeing vulnerable images?\nEraser currently targets **non-running** images, so any vulnerable images that are currently running will not be removed. In addition, the default vulnerability scanning with Trivy removes images with `CRITICAL` vulnerabilities. Any images with lower vulnerabilities will not be removed. This can be configured with the `--severity` flag."
  },
  {
    "path": "docs/versioned_docs/version-v0.4.x/installation.md",
    "content": "---\ntitle: Installation\n---\n\n## Manifest\n\nTo install Eraser with the manifest file, run the following command:\n\n```bash\nkubectl apply -f https://raw.githubusercontent.com/eraser-dev/eraser/v0.4.0/deploy/eraser.yaml\n```\n\n## Helm\n\nIf you'd like to install and manage Eraser with Helm, follow the install instructions [here](https://github.com/eraser-dev/eraser/blob/main/charts/eraser/README.md)\n"
  },
  {
    "path": "docs/versioned_docs/version-v0.4.x/introduction.md",
    "content": "---\ntitle: Introduction\nslug: /\n---\n\n# Introduction\n\nWhen deploying to Kubernetes, it's common for pipelines to build and push images to a cluster, but it's much less common for these images to be cleaned up. This can lead to accumulating bloat on the disk, and a host of non-compliant images lingering on the nodes.\n\nThe current garbage collection process deletes images based on a percentage of load, but this process does not consider the vulnerability state of the images. **Eraser** aims to provide a simple way to determine the state of an image, and delete it if it meets the specified criteria.\n"
  },
  {
    "path": "docs/versioned_docs/version-v0.4.x/manual-removal.md",
    "content": "---\ntitle: Manual Removal\n---\n\nCreate an `ImageList` and specify the images you would like to remove. In this case, the image `docker.io/library/alpine:3.7.3` will be removed.\n\n```shell\ncat <<EOF | kubectl apply -f -\napiVersion: eraser.sh/v1alpha1\nkind: ImageList\nmetadata:\n  name: imagelist\nspec:\n  images:\n    - docker.io/library/alpine:3.7.3   # use \"*\" for all non-running images\nEOF\n```\n\n> `ImageList` is a cluster-scoped resource and must be called imagelist. `\"*\"` can be specified to remove all non-running images instead of individual images.\n\nCreating an `ImageList` should trigger an `ImageJob` that will deploy Eraser pods on every node to perform the removal given the list of images.\n\n```shell\n$ kubectl get pods -n eraser-system\neraser-system        eraser-controller-manager-55d54c4fb6-dcglq   1/1     Running   0          9m8s\neraser-system        eraser-kind-control-plane                    1/1     Running   0          11s\neraser-system        eraser-kind-worker                           1/1     Running   0          11s\neraser-system        eraser-kind-worker2                          1/1     Running   0          11s\n```\n\nPods will run to completion and the images will be removed.\n\n```shell\n$ kubectl get pods -n eraser-system\neraser-system        eraser-controller-manager-6d6d5594d4-phl2q   1/1     Running     0          4m16s\neraser-system        eraser-kind-control-plane                    0/1     Completed   0          22s\neraser-system        eraser-kind-worker                           0/1     Completed   0          22s\neraser-system        eraser-kind-worker2                          0/1     Completed   0          22s\n```\n\nThe `ImageList` custom resource status field will contain the status of the last job. The success and failure counts indicate the number of nodes the Eraser agent was run on.\n\n```shell\n$ kubectl describe ImageList imagelist\n...\nStatus:\n  Failed:     0\n  Success:    3\n  Timestamp:  2022-02-25T23:41:55Z\n...\n```\n\nVerify the unused images are removed.\n\n```shell\n$ docker exec kind-worker ctr -n k8s.io images list | grep alpine\n```\n\nIf the image has been successfully removed, there will be no output.\n"
  },
  {
    "path": "docs/versioned_docs/version-v0.4.x/quick-start.md",
    "content": "---\ntitle: Quick Start\n---\n\nThis tutorial demonstrates the functionality of Eraser and validates that non-running images are removed succesfully.\n\n## Deploy a DaemonSet\n\nAfter following the [install instructions](installation.md), we'll apply a demo `DaemonSet`. For illustrative purposes, a DaemonSet is applied and deleted so the non-running images remain on all nodes. The alpine image with the `3.7.3` tag will be used in this example. This is an image with a known critical vulnerability.\n\nFirst, apply the `DaemonSet`:\n\n```shell\ncat <<EOF | kubectl apply -f -\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n  name: alpine\nspec:\n  selector:\n    matchLabels:\n      app: alpine\n  template:\n    metadata:\n      labels:\n        app: alpine\n    spec:\n      containers:\n      - name: alpine\n        image: docker.io/library/alpine:3.7.3\nEOF\n```\n\nNext, verify that the Pods are running or completed. After the `alpine` Pods complete, you may see a `CrashLoopBackoff` status. This is expected behavior from the `alpine` image and can be ignored for the tutorial.\n\n```shell\n$ kubectl get pods\nNAME          READY   STATUS      RESTARTS     AGE\nalpine-2gh9c   1/1     Running     1 (3s ago)   6s\nalpine-hljp9   0/1     Completed   1 (3s ago)   6s\n```\n\nDelete the DaemonSet:\n\n```shell\n$ kubectl delete daemonset alpine\n```\n\nVerify that the Pods have been deleted:\n\n```shell\n$ kubectl get pods\nNo resources found in default namespace.\n```\n\nTo verify that the `alpine` images are still on the nodes, exec into one of the worker nodes and list the images. If you are not using a kind cluster or Docker for your container nodes, you will need to adjust the exec command accordingly.\n\nList the nodes:\n\n```shell\n$ kubectl get nodes\nNAME                 STATUS   ROLES           AGE   VERSION\nkind-control-plane   Ready    control-plane   45m   v1.24.0\nkind-worker          Ready    <none>          45m   v1.24.0\nkind-worker2         Ready    <none>          44m   v1.24.0\n```\n\nList the images then filter for `alpine`:\n\n```shell\n$ docker exec kind-worker ctr -n k8s.io images list | grep alpine\ndocker.io/library/alpine:3.7.3                                                                             application/vnd.docker.distribution.manifest.list.v2+json sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10 2.0 MiB   linux/386,linux/amd64,linux/arm/v6,linux/arm64/v8,linux/ppc64le,linux/s390x  io.cri-containerd.image=managed\ndocker.io/library/alpine@sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10           application/vnd.docker.distribution.manifest.list.v2+json sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10 2.0 MiB   linux/386,linux/amd64,linux/arm/v6,linux/arm64/v8,linux/ppc64le,linux/s390x  io.cri-containerd.image=managed\n\n```\n\n## Automatically Cleaning Images\n\nAfter deploying Eraser, it will automatically clean images in a regular interval. This interval can be set by `--repeat-period` argument to `eraser-controller-manager`. The default interval is 24 hours (`24h`). 
Valid time units for the interval are \"ns\", \"us\" (or \"µs\"), \"ms\", \"s\", \"m\", and \"h\".\n\nEraser will schedule collector pods on each node in the cluster, and each pod will contain 3 containers (collector, scanner, and eraser) that will run to completion.\n\n```shell\n$ kubectl get pods -n eraser-system\nNAMESPACE            NAME                                         READY   STATUS      RESTARTS         AGE\neraser-system        collector-kind-control-plane-sb789           0/3     Completed   0                26m\neraser-system        collector-kind-worker-j84hm                  0/3     Completed   0                26m\neraser-system        collector-kind-worker2-4lbdr                 0/3     Completed   0                26m\neraser-system        eraser-controller-manager-86cdb4cbf9-x8d7q   1/1     Running     0                26m\n```\n\nThe collector container sends the list of all images to the scanner container, which scans them and reports non-compliant images to the eraser container, which in turn removes those that are non-running. Once all pods are completed, they will be automatically cleaned up.\n\n> If you want to remove all the images periodically, you can skip the scanner container by removing the `--scanner-image` argument. If you are deploying with Helm, use `--set scanner.image.repository=\"\"` to remove the scanner image. In this case, each collector pod will hold 2 containers: collector and eraser.\n\n```shell\n$ kubectl get pods -n eraser-system\nNAMESPACE            NAME                                         READY   STATUS      RESTARTS         AGE\neraser-system        collector-kind-control-plane-ksk2b           0/2     Completed   0                50s\neraser-system        collector-kind-worker-cpgqc                  0/2     Completed   0                50s\neraser-system        collector-kind-worker2-k25df                 0/2     Completed   0                50s\neraser-system        eraser-controller-manager-86cdb4cbf9-x8d7q   1/1     Running     0                55s\n```\n"
  },
  {
    "path": "docs/versioned_docs/version-v0.4.x/releasing.md",
    "content": "---\ntitle: Releasing\n---\n\n## Overview\n\nThe release process consists of three phases: versioning, building, and publishing.\n\nVersioning involves maintaining the following files:\n\n- **Makefile** - the Makefile contains a VERSION variable that defines the version of the project.\n- **manager.yaml** - the controller-manager deployment yaml contains the latest release tag image of the project.\n- **eraser.yaml** - the eraser.yaml contains all eraser resources to be deployed to a cluster including the latest release tag image of the project.\n\nThe steps below explain how to update these files. In addition, the repository should be tagged with the semantic version identifying the release.\n\nBuilding involves obtaining a copy of the repository and triggering a build as part of the GitHub Actions CI pipeline.\n\nPublishing involves creating a release tag and creating a new _Release_ on GitHub.\n\n## Versioning\n\n1. Obtain a copy of the repository.\n\n   ```\n   git clone git@github.com:eraser-dev/eraser.git\n   ```\n\n1. If this is a patch release for a release branch, check out applicable branch, such as `release-0.1`. If not, branch should be `main`\n\n1. Execute the release-patch target to generate patch. Give the semantic version of the release:\n\n   ```\n   make release-manifest NEWVERSION=vX.Y.Z\n   ```\n\n1. Promote staging manifest to release.\n\n   ```\n   make promote-staging-manifest\n   ```\n\n1. If it's a new minor release (e.g. v0.**4**.x -> 0.**5**.0), tag docs to be versioned. Make sure to keep patch version as `.x` for a minor release.\n\n\t```\n\tmake version-docs NEWVERSION=v0.5.x\n\t```\n\n1. Preview the changes:\n\n   ```\n   git status\n   git diff\n   ```\n\n## Building and releasing\n\n1. Commit the changes and push to remote repository to create a pull request.\n\n   ```\n   git checkout -b release-<NEW VERSION>\n   git commit -a -s -m \"Prepare <NEW VERSION> release\"\n   git push <YOUR FORK>\n   ```\n\n2. Once the PR is merged to `main` or `release` branch (`<BRANCH NAME>` below), tag that commit with release version and push tags to remote repository.\n\n   ```\n   git checkout <BRANCH NAME>\n   git pull origin <BRANCH NAME>\n   git tag -a <NEW VERSION> -m '<NEW VERSION>'\n   git push origin <NEW VERSION>\n   ```\n\n3. Pushing the release tag will trigger GitHub Actions to trigger `release` job.\n   This will build the `ghcr.io/eraser-dev/eraser` and `ghcr.io/eraser-dev/eraser-manager` images automatically, then publish the new release tag.\n\n## Publishing\n\n1. GitHub Action will create a new release, review and edit it at https://github.com/eraser-dev/eraser/releases\n"
  },
  {
    "path": "docs/versioned_docs/version-v0.4.x/setup.md",
    "content": "---\ntitle: Setup\n---\n\n# Development Setup\n\nThis document describes the steps to get started with development.\nYou can either utilize [Codespaces](https://docs.github.com/en/codespaces/overview) or set up a local environment.\n\n## Local Setup\n\n### Prerequisites:\n\n- [go](https://go.dev/) with version 1.17 or later.\n- [docker](https://docs.docker.com/get-docker/)\n- [kind](https://kind.sigs.k8s.io/)\n- `make`\n\n### Get things running\n\n- Get dependencies with `go get`\n\n- This project uses `make`. You can utilize `make help` to see available targets. For local deployment, the make targets help you build, test, and deploy.\n\n### Making changes\n\nPlease refer to [Development Reference](#development-reference) for more details on the specific commands.\n\nTo test your changes on a cluster:\n\n```bash\n# generate necessary api files (optional - only needed if there are changes in the api folder).\nmake generate\n\n# build applicable images\nmake docker-build-manager MANAGER_IMG=eraser-manager:dev\nmake docker-build-eraser ERASER_IMG=eraser:dev\nmake docker-build-collector COLLECTOR_IMG=collector:dev\nmake docker-build-trivy-scanner TRIVY_SCANNER_IMG=eraser-trivy-scanner:dev\n\n# make sure the updated images are present on the cluster (e.g., see the kind example below)\nkind load docker-image \\\n        eraser-manager:dev \\\n        eraser-trivy-scanner:dev \\\n        eraser:dev \\\n        collector:dev\n\nmake manifests\nmake deploy\n\n# to remove the deployment\nmake undeploy\n```\n\nTo test your changes to the manager locally:\n\n```bash\nmake run\n```\n\nExample Output:\n\n```\nyou@local:~/eraser$ make run\ndocker build . \\\n        -t eraser-tooling \\\n        -f build/tooling/Dockerfile\n[+] Building 7.8s (8/8) FINISHED\n => => naming to docker.io/library/eraser-tooling                           0.0s\ndocker run -v /home/eraser/config:/config -w /config/manager \\\n        registry.k8s.io/kustomize/kustomize:v3.8.9 edit set image controller=eraser-manager:dev\ndocker run -v /home/eraser:/eraser eraser-tooling controller-gen \\\n        crd \\\n        rbac:roleName=manager-role \\\n        webhook \\\n        paths=\"./...\" \\\n        output:crd:artifacts:config=config/crd/bases\nrm -rf manifest_staging\nmkdir -p manifest_staging/deploy\ndocker run --rm -v /home/eraser:/eraser \\\n        registry.k8s.io/kustomize/kustomize:v3.8.9 build \\\n        /eraser/config/default -o /eraser/manifest_staging/deploy/eraser.yaml\ndocker run -v /home/eraser:/eraser eraser-tooling controller-gen object:headerFile=\"hack/boilerplate.go.txt\" paths=\"./...\"\ngo fmt ./...\ngo vet ./...\ngo run ./main.go\n{\"level\":\"info\",\"ts\":1652985685.1663408,\"logger\":\"controller-runtime.metrics\",\"msg\":\"Metrics server is starting to listen\",\"addr\":\":8080\"}\n...\n```\n\n## Development Reference\n\nEraser uses tooling from [kubebuilder](https://github.com/kubernetes-sigs/kubebuilder). For Eraser, this tooling is containerized in the `eraser-tooling` image. The `make` targets can use this tooling and build the image when necessary.\n\nYou can override the default configuration using environment variables.
Below you can find a reference of targets and configuration options.\n\n### Common Configuration\n\n| Environment Variable | Description                                                                                   |\n| -------------------- | --------------------------------------------------------------------------------------------- |\n| VERSION              | Specifies the version (i.e., the image tag) of eraser to be used.                             |\n| MANAGER_IMG          | Defines the image url for the Eraser manager. Used for tagging, pulling and pushing the image |\n| ERASER_IMG           | Defines the image url for the Eraser. Used for tagging, pulling and pushing the image         |\n| COLLECTOR_IMG        | Defines the image url for the Collector. Used for tagging, pulling and pushing the image      |\n\n### Linting\n\n- `make lint`\n\nLints the go code.\n\n| Environment Variable | Description                                             |\n| -------------------- | ------------------------------------------------------- |\n| GOLANGCI_LINT        | Specifies the go linting binary to be used for linting. |\n\n### Development\n\n- `make generate`\n\nGenerates necessary files for the k8s api stored under `api/v1alpha1/zz_generated.deepcopy.go`. See the [kubebuilder docs](https://book.kubebuilder.io/cronjob-tutorial/other-api-files.html) for details.\n\n- `make manifests`\n\nGenerates the eraser deployment yaml files under `manifest_staging/deploy`.\n\nConfiguration Options:\n\n| Environment Variable | Description                                        |\n| -------------------- | -------------------------------------------------- |\n| ERASER_IMG           | Defines the image url for the Eraser.              |\n| MANAGER_IMG          | Defines the image url for the Eraser manager.      |\n| KUSTOMIZE_VERSION    | Define Kustomize version for generating manifests. |\n\n- `make test`\n\nRuns the unit tests for the eraser project.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                 |\n| -------------------- | ----------------------------------------------------------- |\n| ENVTEST              | Specifies the envtest setup binary.                         |\n| ENVTEST_K8S_VERSION  | Specifies the Kubernetes version for envtest setup command. |\n\n- `make e2e-test`\n\nRuns e2e tests on a cluster.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                                                   |\n| -------------------- | ------------------------------------------------------------------------------------------------------------- |\n| ERASER_IMG           | Eraser image to be used for e2e test.                                                                         |\n| MANAGER_IMG          | Eraser manager image to be used for e2e test.                                                                 |\n| KUBERNETES_VERSION   | Kubernetes version for e2e test.                                                                              |\n| TEST_COUNT           | Sets repetition for test. Please refer to [go docs](https://pkg.go.dev/cmd/go#hdr-Testing_flags) for details. |\n| TIMEOUT              | Sets timeout for test. Please refer to [go docs](https://pkg.go.dev/cmd/go#hdr-Testing_flags) for details.    
|\n| TESTFLAGS            | Sets additional test flags                                                                                    |\n\n### Build\n\n- `make build`\n\nBuilds the eraser manager binaries.\n\n- `make run`\n\nRuns the eraser manager on your local machine.\n\n- `make docker-build-manager`\n\nBuilds the docker image for the eraser manager.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                                                                                            |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| CACHE_FROM           | Sets the target of the buildx --cache-from flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-from). |\n| CACHE_TO             | Sets the target of the buildx --cache-to flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-to).     |\n| PLATFORM             | Sets the target platform for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#platform).               |\n| OUTPUT_TYPE          | Sets the output for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#output).                          |\n| MANAGER_IMG          | Specifies the target repository, image name and tag for building image.                                                                                |\n\n- `make docker-push-manager`\n\nPushes the docker image for the eraser manager.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                             |\n| -------------------- | ----------------------------------------------------------------------- |\n| MANAGER_IMG          | Specifies the target repository, image name and tag for building image. |\n\n- `make docker-build-eraser`\n\nBuilds the docker image for the eraser.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                                                                                            |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| CACHE_FROM           | Sets the target of the buildx --cache-from flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-from). |\n| CACHE_TO             | Sets the target of the buildx --cache-to flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-to).     |\n| PLATFORM             | Sets the target platform for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#platform).               |\n| OUTPUT_TYPE          | Sets the output for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#output).                          |\n| ERASER_IMG           | Specifies the target repository, image name and tag for building image.                                                                                
|\n\n- `make docker-push-eraser`\n\nPushes the docker image for the eraser.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                             |\n| -------------------- | ----------------------------------------------------------------------- |\n| ERASER_IMG           | Specifies the target repository, image name and tag for building image. |\n\n- `make docker-build-collector`\n\nBuilds the docker image for the eraser collector.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                                                                                            |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| CACHE_FROM           | Sets the target of the buildx --cache-from flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-from). |\n| CACHE_TO             | Sets the target of the buildx --cache-to flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-to).     |\n| PLATFORM             | Sets the target platform for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#platform).               |\n| OUTPUT_TYPE          | Sets the output for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#output).                          |\n| COLLECTOR_IMG        | Specifies the target repository, image name and tag for building image.                                                                                |\n\n- `make docker-push-collector`\n\nPushes the docker image for the eraser collector.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                             |\n| -------------------- | ----------------------------------------------------------------------- |\n| COLLECTOR_IMG        | Specifies the target repository, image name and tag for building image. |\n\n### Deployment\n\n- `make install`\n\nInstalls CRDs into the K8s cluster specified in ~/.kube/config.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                      |\n| -------------------- | ---------------------------------------------------------------- |\n| KUSTOMIZE_VERSION    | Kustomize version used to generate k8s resources for deployment. |\n\n- `make uninstall`\n\nUninstalls CRDs from the K8s cluster specified in ~/.kube/config.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                      |\n| -------------------- | ---------------------------------------------------------------- |\n| KUSTOMIZE_VERSION    | Kustomize version used to generate k8s resources for deployment. |\n\n- `make deploy`\n\nDeploys eraser to the cluster specified in ~/.kube/config.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                          |\n| -------------------- | -------------------------------------------------------------------- |\n| KUSTOMIZE_VERSION    | Kustomize version used to generate k8s resources for deployment.     
|\n| MANAGER_IMG          | Specifies the eraser manager image version to be used for deployment. |\n\n- `make undeploy`\n\nUndeploys the controller from the K8s cluster specified in ~/.kube/config.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                               |\n| -------------------- | ------------------------------------------------------------------------- |\n| KUSTOMIZE_VERSION    | Kustomize version used to generate k8s resources that need to be removed. |\n\n### Release\n\n- `make release-manifest`\n\nGenerates the k8s manifest files for a release.\n\nConfiguration Options:\n\n| Environment Variable | Description                          |\n| -------------------- | ------------------------------------ |\n| NEWVERSION           | Sets the new version in the Makefile |\n\n- `make promote-staging-manifest`\n\nPromotes the k8s deployment yaml files to release.\n
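\nFor example, the release targets above can be combined as follows (illustrative version number):\n\n```bash\nmake release-manifest NEWVERSION=v0.4.1\nmake promote-staging-manifest\n```\n"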
  },
  {
    "path": "docs/versioned_docs/version-v0.4.x/trivy.md",
    "content": "---\ntitle: Trivy\n---\n\n## Trivy Provider Options\nThe trivy provider is used in Eraser for image scanning and detecting vulnerabilities. The following arguments can be supplied to the trivy scanner container to specify which images will be detected for removal:\n\n* --ignore-unfixed: boolean to report only fixed vulnerabilities (default true)\n* --security-checks: comma-separated list of what security issues to detect (default \"vuln\")\n* --severity: comma-separated list of severity levels to report (default \"CRITICAL\")\n* --delete-scan-failed-images: boolean to delete images for which scanning has failed (default true)\n"
  },
  {
    "path": "docs/versioned_docs/version-v0.5.x/architecture.md",
    "content": "---\ntitle: Architecture\n---\nAt a high level, Eraser has two main modes of operation: manual and automated.\n\nManual image removal involves supplying a list of images to remove; Eraser then\ndeploys pods to clean up the images you supplied.\n\nAutomated image removal runs on a timer. By default, the automated process\nremoves images based on the results of a vulnerability scan. The default\nvulnerability scanner is Trivy, but others can be provided in its place. Or,\nthe scanner can be disabled altogether, in which case Eraser acts as a garbage\ncollector -- it will remove all non-running images in your cluster.\n\n## Manual image cleanup\n\nNote: metrics are not yet implemented in Eraser v0.5.x, but will be available in the upcoming v1.0.0 release.\n\n<img title=\"manual cleanup\" src=\"/eraser/docs/img/eraser_manual.png\" />\n\n## Automated analysis, scanning, and cleanup\n\n<img title=\"automated cleanup\" src=\"/eraser/docs/img/eraser_timer.png\" />\n"
  },
  {
    "path": "docs/versioned_docs/version-v0.5.x/code-of-conduct.md",
    "content": "---\ntitle: Code of Conduct\n---\n\nThis project has adopted the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md).\n\nResources:\n\n- [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)\n- [Code of Conduct Reporting](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)\n"
  },
  {
    "path": "docs/versioned_docs/version-v0.5.x/contributing.md",
    "content": "---\ntitle: Contributing\n---\n\nThere are several ways to get involved with Eraser:\n\n- Join the [mailing list](https://groups.google.com/u/1/g/eraser-dev) to get notifications for releases, security announcements, etc.\n- Participate in the [biweekly community meetings](https://docs.google.com/document/d/1Sj5u47K3WUGYNPmQHGFpb52auqZb1FxSlWAQnPADhWI/edit) to discuss development, issues, use cases, etc.\n- Join the `#eraser` channel on the [Kubernetes Slack](https://slack.k8s.io/)\n- View the [development setup instructions](https://eraser-dev.github.io/eraser/docs/development)\n\nThis project welcomes contributions and suggestions.\n\nThis project has adopted the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)."
  },
  {
    "path": "docs/versioned_docs/version-v0.5.x/custom-scanner.md",
    "content": "---\ntitle: Custom Scanner\n---\n\n## Creating a Custom Scanner\nTo create a custom scanner for non-compliant images, use the following [template](https://github.com/eraser-dev/eraser-scanner-template/).\n\nIn order to customize your scanner, start by creating a `NewImageProvider()`. The ImageProvider interface can be found can be found [here](../../../pkg/scanners/template/scanner_template.go). \n\nThe ImageProvider will allow you to retrieve the list of all non-running and non-excluded images from the collector container through the `ReceiveImages()` function. Process these images with your customized scanner and threshold, and use `SendImages()` to pass the images found non-compliant to the eraser container for removal. Finally, complete the scanning process by calling `Finish()`.\n\nWhen complete, provide your custom scanner image to Eraser in deployment.\n"
  },
  {
    "path": "docs/versioned_docs/version-v0.5.x/customization.md",
    "content": "---\ntitle: Customization\n---\n\nBy default, successful jobs will be deleted after a period of time. You can change this behavior by setting the following flags in the eraser-controller-manager:\n\n- `--job-cleanup-on-success-delay`: Duration to delay job deletion after successful runs. 0 means no delay. Defaults to `0`.\n- `--job-cleanup-on-error-delay`: Duration to delay job deletion after errored runs. 0 means no delay. Defaults to `24h`. \n- `--job-success-ratio`: Ratio of successful/total runs to consider a job successful. 1.0 means all runs must succeed. Defaults to `1.0`.  \n\nFor duration, valid time units are \"ns\", \"us\" (or \"µs\"), \"ms\", \"s\", \"m\", \"h\".\n"
  },
  {
    "path": "docs/versioned_docs/version-v0.5.x/exclusion.md",
    "content": "---\ntitle: Exclusion\n---\n\n## Excluding registries, repositories, and images\nEraser can exclude registries (example, `docker.io/library/*`) and also specific images with a tag (example, `docker.io/library/ubuntu:18.04`) or digest (example, `sha256:80f31da1ac7b312ba29d65080fd...`) from its removal process.\n\nTo exclude any images or registries from the removal, create configmap(s) with the label `eraser.sh/exclude.list=true` in the eraser-system namespace with a JSON file holding the excluded images.\n\n```bash\n$ cat > sample.json <<EOF\n{\"excluded\": [\"docker.io/library/*\", \"ghcr.io/eraser-dev/test:latest\"]}\nEOF\n\n$ kubectl create configmap excluded --from-file=sample.json --namespace=eraser-system\n$ kubectl label configmap excluded eraser.sh/exclude.list=true -n eraser-system\n```\n\n## Exempting Nodes from the Eraser Pipeline\nExempting nodes with `--filter-nodes` is added in v0.3.0. When deploying Eraser, you can specify whether there is a list of nodes you would like to `include` or `exclude` from the cleanup process using the `--filter-nodes` argument. \n\n_See [Eraser Helm Chart](https://github.com/eraser-dev/eraser/blob/main/charts/eraser/README.md) for more information on deployment._\n\nNodes with the selector `eraser.sh/cleanup.filter` will be filtered accordingly. \n- If `include` is provided, eraser and collector pods will only be scheduled on nodes with the selector `eraser.sh/cleanup.filter`. \n- If `exclude` is provided, eraser and collector pods will be scheduled on all nodes besides those with the selector `eraser.sh/cleanup.filter`.\n\nUnless specified, the default value of `--filter-nodes` is `exclude`. Because Windows nodes are not supported, they will always be excluded regardless of the `eraser.sh/cleanup.filter` label or the value of `--filter-nodes`.\n\nAdditional node selectors can be provided through the `--filter-nodes-selector` flag.\n"
  },
  {
    "path": "docs/versioned_docs/version-v0.5.x/faq.md",
    "content": "---\ntitle: FAQ\n---\n## Why am I still seeing vulnerable images?\nEraser currently targets **non-running** images, so any vulnerable images that are currently running will not be removed. In addition, the default vulnerability scanning with Trivy removes images with `CRITICAL` vulnerabilities. Any images with lower-severity vulnerabilities will not be removed. This can be configured with the `--severity` flag."
  },
  {
    "path": "docs/versioned_docs/version-v0.5.x/installation.md",
    "content": "---\ntitle: Installation\n---\n\n## Manifest\n\nTo install Eraser with the manifest file, run the following command:\n\n```bash\nkubectl apply -f https://raw.githubusercontent.com/eraser-dev/eraser/v0.5.0/deploy/eraser.yaml\n```\n\n## Helm\n\nIf you'd like to install and manage Eraser with Helm, follow the install instructions [here](https://github.com/eraser-dev/eraser/blob/main/charts/eraser/README.md)\n"
  },
  {
    "path": "docs/versioned_docs/version-v0.5.x/introduction.md",
    "content": "---\ntitle: Introduction\nslug: /\n---\n\n# Introduction\n\nWhen deploying to Kubernetes, it's common for pipelines to build and push images to a cluster, but it's much less common for these images to be cleaned up. This can lead to accumulating bloat on the disk, and a host of non-compliant images lingering on the nodes.\n\nThe current garbage collection process deletes images based on a percentage of load, but this process does not consider the vulnerability state of the images. **Eraser** aims to provide a simple way to determine the state of an image, and delete it if it meets the specified criteria.\n"
  },
  {
    "path": "docs/versioned_docs/version-v0.5.x/manual-removal.md",
    "content": "---\ntitle: Manual Removal\n---\n\nCreate an `ImageList` and specify the images you would like to remove. In this case, the image `docker.io/library/alpine:3.7.3` will be removed.\n\n```shell\ncat <<EOF | kubectl apply -f -\napiVersion: eraser.sh/v1alpha1\nkind: ImageList\nmetadata:\n  name: imagelist\nspec:\n  images:\n    - docker.io/library/alpine:3.7.3   # use \"*\" for all non-running images\nEOF\n```\n\n> `ImageList` is a cluster-scoped resource and must be called imagelist. `\"*\"` can be specified to remove all non-running images instead of individual images.\n\nCreating an `ImageList` should trigger an `ImageJob` that will deploy Eraser pods on every node to perform the removal given the list of images.\n\n```shell\n$ kubectl get pods -n eraser-system\neraser-system        eraser-controller-manager-55d54c4fb6-dcglq   1/1     Running   0          9m8s\neraser-system        eraser-kind-control-plane                    1/1     Running   0          11s\neraser-system        eraser-kind-worker                           1/1     Running   0          11s\neraser-system        eraser-kind-worker2                          1/1     Running   0          11s\n```\n\nPods will run to completion and the images will be removed.\n\n```shell\n$ kubectl get pods -n eraser-system\neraser-system        eraser-controller-manager-6d6d5594d4-phl2q   1/1     Running     0          4m16s\neraser-system        eraser-kind-control-plane                    0/1     Completed   0          22s\neraser-system        eraser-kind-worker                           0/1     Completed   0          22s\neraser-system        eraser-kind-worker2                          0/1     Completed   0          22s\n```\n\nThe `ImageList` custom resource status field will contain the status of the last job. The success and failure counts indicate the number of nodes the Eraser agent was run on.\n\n```shell\n$ kubectl describe ImageList imagelist\n...\nStatus:\n  Failed:     0\n  Success:    3\n  Timestamp:  2022-02-25T23:41:55Z\n...\n```\n\nVerify the unused images are removed.\n\n```shell\n$ docker exec kind-worker ctr -n k8s.io images list | grep alpine\n```\n\nIf the image has been successfully removed, there will be no output.\n"
  },
  {
    "path": "docs/versioned_docs/version-v0.5.x/quick-start.md",
    "content": "---\ntitle: Quick Start\n---\n\nThis tutorial demonstrates the functionality of Eraser and validates that non-running images are removed successfully.\n\n## Deploy a DaemonSet\n\nAfter following the [install instructions](installation.md), we'll apply a demo `DaemonSet`. For illustrative purposes, a DaemonSet is applied and deleted so the non-running images remain on all nodes. The alpine image with the `3.7.3` tag will be used in this example. This is an image with a known critical vulnerability.\n\nFirst, apply the `DaemonSet`:\n\n```shell\ncat <<EOF | kubectl apply -f -\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n  name: alpine\nspec:\n  selector:\n    matchLabels:\n      app: alpine\n  template:\n    metadata:\n      labels:\n        app: alpine\n    spec:\n      containers:\n      - name: alpine\n        image: docker.io/library/alpine:3.7.3\nEOF\n```\n\nNext, verify that the Pods are running or completed. After the `alpine` Pods complete, you may see a `CrashLoopBackOff` status. This is expected behavior from the `alpine` image and can be ignored for the tutorial.\n\n```shell\n$ kubectl get pods\nNAME          READY   STATUS      RESTARTS     AGE\nalpine-2gh9c   1/1     Running     1 (3s ago)   6s\nalpine-hljp9   0/1     Completed   1 (3s ago)   6s\n```\n\nDelete the DaemonSet:\n\n```shell\n$ kubectl delete daemonset alpine\n```\n\nVerify that the Pods have been deleted:\n\n```shell\n$ kubectl get pods\nNo resources found in default namespace.\n```\n\nTo verify that the `alpine` images are still on the nodes, exec into one of the worker nodes and list the images. If you are not using a kind cluster or Docker for your container nodes, you will need to adjust the exec command accordingly.\n\nList the nodes:\n\n```shell\n$ kubectl get nodes\nNAME                 STATUS   ROLES           AGE   VERSION\nkind-control-plane   Ready    control-plane   45m   v1.24.0\nkind-worker          Ready    <none>          45m   v1.24.0\nkind-worker2         Ready    <none>          44m   v1.24.0\n```\n\nList the images then filter for `alpine`:\n\n```shell\n$ docker exec kind-worker ctr -n k8s.io images list | grep alpine\ndocker.io/library/alpine:3.7.3                                                                             application/vnd.docker.distribution.manifest.list.v2+json sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10 2.0 MiB   linux/386,linux/amd64,linux/arm/v6,linux/arm64/v8,linux/ppc64le,linux/s390x  io.cri-containerd.image=managed\ndocker.io/library/alpine@sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10           application/vnd.docker.distribution.manifest.list.v2+json sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10 2.0 MiB   linux/386,linux/amd64,linux/arm/v6,linux/arm64/v8,linux/ppc64le,linux/s390x  io.cri-containerd.image=managed\n\n```\n\n## Automatically Cleaning Images\n\nAfter deploying Eraser, it will automatically clean images at a regular interval. This interval can be set with the `--repeat-period` argument to the `eraser-controller-manager`. The default interval is 24 hours (`24h`).
Valid time units are \"ns\", \"us\" (or \"µs\"), \"ms\", \"s\", \"m\", \"h\".\n\nEraser will schedule collector pods on each node in the cluster, and each pod will contain 3 containers (collector, scanner, and eraser) that will run to completion.\n\n```shell\n$ kubectl get pods -n eraser-system\nNAMESPACE            NAME                                         READY   STATUS      RESTARTS         AGE\neraser-system        collector-kind-control-plane-sb789           0/3     Completed   0                26m\neraser-system        collector-kind-worker-j84hm                  0/3     Completed   0                26m\neraser-system        collector-kind-worker2-4lbdr                 0/3     Completed   0                26m\neraser-system        eraser-controller-manager-86cdb4cbf9-x8d7q   1/1     Running     0                26m\n```\n\nThe collector container sends the list of all images to the scanner container, which scans them and reports non-compliant images to the eraser container, which in turn removes the non-compliant images that are not running. Once all pods have completed, they will be automatically cleaned up.\n\n> If you want to remove all the images periodically, you can skip the scanner container by removing the `--scanner-image` argument. If you are deploying with Helm, use `--set scanner.image.repository=\"\"` to remove the scanner image. In this case, each collector pod will hold 2 containers: collector and eraser.\n\n```shell\n$ kubectl get pods -n eraser-system\nNAMESPACE            NAME                                         READY   STATUS      RESTARTS         AGE\neraser-system        collector-kind-control-plane-ksk2b           0/2     Completed   0                50s\neraser-system        collector-kind-worker-cpgqc                  0/2     Completed   0                50s\neraser-system        collector-kind-worker2-k25df                 0/2     Completed   0                50s\neraser-system        eraser-controller-manager-86cdb4cbf9-x8d7q   1/1     Running     0                55s\n```\n
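\nTo verify that the images were removed, you can list the images on a node again (kind example; adjust the exec command if you are not using kind):\n\n```shell\n$ docker exec kind-worker ctr -n k8s.io images list | grep alpine\n```\n\nIf the removal succeeded, the command will produce no output.\n"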
  },
  {
    "path": "docs/versioned_docs/version-v0.5.x/releasing.md",
    "content": "---\ntitle: Releasing\n---\n\n## Overview\n\nThe release process consists of three phases: versioning, building, and publishing.\n\nVersioning involves maintaining the following files:\n\n- **Makefile** - the Makefile contains a VERSION variable that defines the version of the project.\n- **manager.yaml** - the controller-manager deployment yaml contains the latest release tag image of the project.\n- **eraser.yaml** - the eraser.yaml contains all eraser resources to be deployed to a cluster including the latest release tag image of the project.\n\nThe steps below explain how to update these files. In addition, the repository should be tagged with the semantic version identifying the release.\n\nBuilding involves obtaining a copy of the repository and triggering a build as part of the GitHub Actions CI pipeline.\n\nPublishing involves creating a release tag and creating a new _Release_ on GitHub.\n\n## Versioning\n\n1. Obtain a copy of the repository.\n\n   ```\n   git clone git@github.com:eraser-dev/eraser.git\n   ```\n\n1. If this is a patch release for a release branch, check out the applicable branch, such as `release-0.1`. If not, the branch should be `main`.\n\n1. Execute the `release-manifest` target to generate the release manifest, giving the semantic version of the release:\n\n   ```\n   make release-manifest NEWVERSION=vX.Y.Z\n   ```\n\n1. Promote the staging manifest to release.\n\n   ```\n   make promote-staging-manifest\n   ```\n\n1. If it's a new minor release (e.g. v0.**4**.x -> 0.**5**.0), tag docs to be versioned. Make sure to keep the patch version as `.x` for a minor release.\n\n\t```\n\tmake version-docs NEWVERSION=v0.5.x\n\t```\n\n1. Preview the changes:\n\n   ```\n   git status\n   git diff\n   ```\n\n## Building and releasing\n\n1. Commit the changes and push to the remote repository to create a pull request.\n\n   ```\n   git checkout -b release-<NEW VERSION>\n   git commit -a -s -m \"Prepare <NEW VERSION> release\"\n   git push <YOUR FORK>\n   ```\n\n2. Once the PR is merged to the `main` or `release` branch (`<BRANCH NAME>` below), tag that commit with the release version and push the tag to the remote repository.\n\n   ```\n   git checkout <BRANCH NAME>\n   git pull origin <BRANCH NAME>\n   git tag -a <NEW VERSION> -m '<NEW VERSION>'\n   git push origin <NEW VERSION>\n   ```\n\n3. Pushing the release tag will trigger GitHub Actions to run the `release` job.\n   This will build the `ghcr.io/eraser-dev/eraser` and `ghcr.io/eraser-dev/eraser-manager` images automatically, then publish the new release tag.\n\n## Publishing\n\n1. GitHub Actions will create a new release; review and edit it at https://github.com/eraser-dev/eraser/releases\n"
  },
  {
    "path": "docs/versioned_docs/version-v0.5.x/setup.md",
    "content": "---\ntitle: Setup\n---\n\n# Development Setup\n\nThis document describes the steps to get started with development.\nYou can either utilize [Codespaces](https://docs.github.com/en/codespaces/overview) or set up a local environment.\n\n## Local Setup\n\n### Prerequisites:\n\n- [go](https://go.dev/) with version 1.17 or later.\n- [docker](https://docs.docker.com/get-docker/)\n- [kind](https://kind.sigs.k8s.io/)\n- `make`\n\n### Get things running\n\n- Get dependencies with `go get`\n\n- This project uses `make`. You can utilize `make help` to see available targets. For local deployment, the make targets help you build, test, and deploy.\n\n### Making changes\n\nPlease refer to [Development Reference](#development-reference) for more details on the specific commands.\n\nTo test your changes on a cluster:\n\n```bash\n# generate necessary api files (optional - only needed if there are changes in the api folder).\nmake generate\n\n# build applicable images\nmake docker-build-manager MANAGER_IMG=eraser-manager:dev\nmake docker-build-eraser ERASER_IMG=eraser:dev\nmake docker-build-collector COLLECTOR_IMG=collector:dev\nmake docker-build-trivy-scanner TRIVY_SCANNER_IMG=eraser-trivy-scanner:dev\n\n# make sure the updated images are present on the cluster (e.g., see the kind example below)\nkind load docker-image \\\n        eraser-manager:dev \\\n        eraser-trivy-scanner:dev \\\n        eraser:dev \\\n        collector:dev\n\nmake manifests\nmake deploy\n\n# to remove the deployment\nmake undeploy\n```\n\nTo test your changes to the manager locally:\n\n```bash\nmake run\n```\n\nExample Output:\n\n```\nyou@local:~/eraser$ make run\ndocker build . \\\n        -t eraser-tooling \\\n        -f build/tooling/Dockerfile\n[+] Building 7.8s (8/8) FINISHED\n => => naming to docker.io/library/eraser-tooling                           0.0s\ndocker run -v /home/eraser/config:/config -w /config/manager \\\n        registry.k8s.io/kustomize/kustomize:v3.8.9 edit set image controller=eraser-manager:dev\ndocker run -v /home/eraser:/eraser eraser-tooling controller-gen \\\n        crd \\\n        rbac:roleName=manager-role \\\n        webhook \\\n        paths=\"./...\" \\\n        output:crd:artifacts:config=config/crd/bases\nrm -rf manifest_staging\nmkdir -p manifest_staging/deploy\ndocker run --rm -v /home/eraser:/eraser \\\n        registry.k8s.io/kustomize/kustomize:v3.8.9 build \\\n        /eraser/config/default -o /eraser/manifest_staging/deploy/eraser.yaml\ndocker run -v /home/eraser:/eraser eraser-tooling controller-gen object:headerFile=\"hack/boilerplate.go.txt\" paths=\"./...\"\ngo fmt ./...\ngo vet ./...\ngo run ./main.go\n{\"level\":\"info\",\"ts\":1652985685.1663408,\"logger\":\"controller-runtime.metrics\",\"msg\":\"Metrics server is starting to listen\",\"addr\":\":8080\"}\n...\n```\n\n## Development Reference\n\nEraser uses tooling from [kubebuilder](https://github.com/kubernetes-sigs/kubebuilder). For Eraser, this tooling is containerized in the `eraser-tooling` image. The `make` targets can use this tooling and build the image when necessary.\n\nYou can override the default configuration using environment variables.
Below you can find a reference of targets and configuration options.\n\n### Common Configuration\n\n| Environment Variable | Description                                                                                   |\n| -------------------- | --------------------------------------------------------------------------------------------- |\n| VERSION              | Specifies the version (i.e., the image tag) of eraser to be used.                             |\n| MANAGER_IMG          | Defines the image url for the Eraser manager. Used for tagging, pulling and pushing the image |\n| ERASER_IMG           | Defines the image url for the Eraser. Used for tagging, pulling and pushing the image         |\n| COLLECTOR_IMG        | Defines the image url for the Collector. Used for tagging, pulling and pushing the image      |\n\n### Linting\n\n- `make lint`\n\nLints the go code.\n\n| Environment Variable | Description                                             |\n| -------------------- | ------------------------------------------------------- |\n| GOLANGCI_LINT        | Specifies the go linting binary to be used for linting. |\n\n### Development\n\n- `make generate`\n\nGenerates necessary files for the k8s api stored under `api/v1alpha1/zz_generated.deepcopy.go`. See the [kubebuilder docs](https://book.kubebuilder.io/cronjob-tutorial/other-api-files.html) for details.\n\n- `make manifests`\n\nGenerates the eraser deployment yaml files under `manifest_staging/deploy`.\n\nConfiguration Options:\n\n| Environment Variable | Description                                        |\n| -------------------- | -------------------------------------------------- |\n| ERASER_IMG           | Defines the image url for the Eraser.              |\n| MANAGER_IMG          | Defines the image url for the Eraser manager.      |\n| KUSTOMIZE_VERSION    | Define Kustomize version for generating manifests. |\n\n- `make test`\n\nRuns the unit tests for the eraser project.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                 |\n| -------------------- | ----------------------------------------------------------- |\n| ENVTEST              | Specifies the envtest setup binary.                         |\n| ENVTEST_K8S_VERSION  | Specifies the Kubernetes version for envtest setup command. |\n\n- `make e2e-test`\n\nRuns e2e tests on a cluster.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                                                   |\n| -------------------- | ------------------------------------------------------------------------------------------------------------- |\n| ERASER_IMG           | Eraser image to be used for e2e test.                                                                         |\n| MANAGER_IMG          | Eraser manager image to be used for e2e test.                                                                 |\n| KUBERNETES_VERSION   | Kubernetes version for e2e test.                                                                              |\n| TEST_COUNT           | Sets repetition for test. Please refer to [go docs](https://pkg.go.dev/cmd/go#hdr-Testing_flags) for details. |\n| TIMEOUT              | Sets timeout for test. Please refer to [go docs](https://pkg.go.dev/cmd/go#hdr-Testing_flags) for details.    
|\n| TESTFLAGS            | Sets additional test flags                                                                                    |\n\n### Build\n\n- `make build`\n\nBuilds the eraser manager binaries.\n\n- `make run`\n\nRuns the eraser manager on your local machine.\n\n- `make docker-build-manager`\n\nBuilds the docker image for the eraser manager.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                                                                                            |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| CACHE_FROM           | Sets the target of the buildx --cache-from flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-from). |\n| CACHE_TO             | Sets the target of the buildx --cache-to flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-to).     |\n| PLATFORM             | Sets the target platform for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#platform).               |\n| OUTPUT_TYPE          | Sets the output for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#output).                          |\n| MANAGER_IMG          | Specifies the target repository, image name and tag for building image.                                                                                |\n\n- `make docker-push-manager`\n\nPushes the docker image for the eraser manager.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                             |\n| -------------------- | ----------------------------------------------------------------------- |\n| MANAGER_IMG          | Specifies the target repository, image name and tag for building image. |\n\n- `make docker-build-eraser`\n\nBuilds the docker image for the eraser.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                                                                                            |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| CACHE_FROM           | Sets the target of the buildx --cache-from flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-from). |\n| CACHE_TO             | Sets the target of the buildx --cache-to flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-to).     |\n| PLATFORM             | Sets the target platform for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#platform).               |\n| OUTPUT_TYPE          | Sets the output for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#output).                          |\n| ERASER_IMG           | Specifies the target repository, image name and tag for building image.                                                                                
|\n\n- `make docker-push-eraser`\n\nPushes the docker image for the eraser.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                             |\n| -------------------- | ----------------------------------------------------------------------- |\n| ERASER_IMG           | Specifies the target repository, image name and tag for building image. |\n\n- `make docker-build-collector`\n\nBuilds the docker image for the eraser collector.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                                                                                            |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| CACHE_FROM           | Sets the target of the buildx --cache-from flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-from). |\n| CACHE_TO             | Sets the target of the buildx --cache-to flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-to).     |\n| PLATFORM             | Sets the target platform for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#platform).               |\n| OUTPUT_TYPE          | Sets the output for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#output).                          |\n| COLLECTOR_IMG        | Specifies the target repository, image name and tag for building image.                                                                                |\n\n- `make docker-push-collector`\n\nPushes the docker image for the eraser collector.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                             |\n| -------------------- | ----------------------------------------------------------------------- |\n| COLLECTOR_IMG        | Specifies the target repository, image name and tag for building image. |\n\n### Deployment\n\n- `make install`\n\nInstalls CRDs into the K8s cluster specified in ~/.kube/config.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                      |\n| -------------------- | ---------------------------------------------------------------- |\n| KUSTOMIZE_VERSION    | Kustomize version used to generate k8s resources for deployment. |\n\n- `make uninstall`\n\nUninstalls CRDs from the K8s cluster specified in ~/.kube/config.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                      |\n| -------------------- | ---------------------------------------------------------------- |\n| KUSTOMIZE_VERSION    | Kustomize version used to generate k8s resources for deployment. |\n\n- `make deploy`\n\nDeploys eraser to the cluster specified in ~/.kube/config.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                          |\n| -------------------- | -------------------------------------------------------------------- |\n| KUSTOMIZE_VERSION    | Kustomize version used to generate k8s resources for deployment.     
|\n| MANAGER_IMG          | Specifies the eraser manager image version to be used for deployment. |\n\n- `make undeploy`\n\nUndeploys the controller from the K8s cluster specified in ~/.kube/config.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                               |\n| -------------------- | ------------------------------------------------------------------------- |\n| KUSTOMIZE_VERSION    | Kustomize version used to generate k8s resources that need to be removed. |\n\n### Release\n\n- `make release-manifest`\n\nGenerates the k8s manifest files for a release.\n\nConfiguration Options:\n\n| Environment Variable | Description                          |\n| -------------------- | ------------------------------------ |\n| NEWVERSION           | Sets the new version in the Makefile |\n\n- `make promote-staging-manifest`\n\nPromotes the k8s deployment yaml files to release.\n
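\nFor example, the release targets above can be combined as follows (illustrative version number):\n\n```bash\nmake release-manifest NEWVERSION=v0.5.1\nmake promote-staging-manifest\n```\n"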
  },
  {
    "path": "docs/versioned_docs/version-v0.5.x/trivy.md",
    "content": "---\ntitle: Trivy\n---\n\n## Trivy Provider Options\nThe trivy provider is used in Eraser for image scanning and detecting vulnerabilities. The following arguments can be supplied to the trivy scanner container to specify which images will be detected for removal:\n\n* --ignore-unfixed: boolean to report only fixed vulnerabilities (default true)\n* --security-checks: comma-separated list of what security issues to detect (default \"vuln\")\n* --severity: comma-separated list of severity levels to report (default \"CRITICAL\")\n* --delete-scan-failed-images: boolean to delete images for which scanning has failed (default true)\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.0.x/architecture.md",
    "content": "---\ntitle: Architecture\n---\nAt a high level, Eraser has two main modes of operation: manual and automated.\n\nManual image removal involves supplying a list of images to remove; Eraser then\ndeploys pods to clean up the images you supplied.\n\nAutomated image removal runs on a timer. By default, the automated process\nremoves images based on the results of a vulnerability scan. The default\nvulnerability scanner is Trivy, but others can be provided in its place. Or,\nthe scanner can be disabled altogether, in which case Eraser acts as a garbage\ncollector -- it will remove all non-running images in your cluster.\n\n## Manual image cleanup\n\n<img title=\"manual cleanup\" src=\"/eraser/docs/img/eraser_manual.png\" />\n\n## Automated analysis, scanning, and cleanup\n\n<img title=\"automated cleanup\" src=\"/eraser/docs/img/eraser_timer.png\" />\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.0.x/code-of-conduct.md",
    "content": "---\ntitle: Code of Conduct\n---\n\nThis project has adopted the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md).\n\nResources:\n\n- [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)\n- [Code of Conduct Reporting](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)"
  },
  {
    "path": "docs/versioned_docs/version-v1.0.x/contributing.md",
    "content": "---\ntitle: Contributing\n---\n\nThere are several ways to get involved with Eraser:\n\n- Join the [mailing list](https://groups.google.com/u/1/g/eraser-dev) to get notifications for releases, security announcements, etc.\n- Participate in the [biweekly community meetings](https://docs.google.com/document/d/1Sj5u47K3WUGYNPmQHGFpb52auqZb1FxSlWAQnPADhWI/edit) to discuss development, issues, use cases, etc.\n- Join the `#eraser` channel on the [Kubernetes Slack](https://slack.k8s.io/)\n- View the [development setup instructions](https://eraser-dev.github.io/eraser/docs/development)\n\nThis project welcomes contributions and suggestions.\n\nThis project has adopted the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)."
  },
  {
    "path": "docs/versioned_docs/version-v1.0.x/custom-scanner.md",
    "content": "---\ntitle: Custom Scanner\n---\n\n## Creating a Custom Scanner\nTo create a custom scanner for non-compliant images, use the following [template](https://github.com/eraser-dev/eraser-scanner-template/).\n\nIn order to customize your scanner, start by creating a `NewImageProvider()`. The ImageProvider interface can be found can be found [here](../../../pkg/scanners/template/scanner_template.go). \n\nThe ImageProvider will allow you to retrieve the list of all non-running and non-excluded images from the collector container through the `ReceiveImages()` function. Process these images with your customized scanner and threshold, and use `SendImages()` to pass the images found non-compliant to the eraser container for removal. Finally, complete the scanning process by calling `Finish()`.\n\nWhen complete, provide your custom scanner image to Eraser in deployment.\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.0.x/customization.md",
    "content": "---\ntitle: Customization\n---\n\n## Overview\n\nEraser uses a configmap to configure its behavior. The configmap is part of the\ndeployment and it is not necessary to deploy it manually. Once deployed, the configmap\ncan be edited at any time:\n\n```bash\nkubectl edit configmap --namespace eraser-system eraser-manager-config\n```\n\nIf an eraser job is already running, the changes will not take effect until the job completes.\nThe configuration is in YAML.\n\n## Key Concepts\n\n### Basic architecture\n\nThe _manager_ runs as a pod in your cluster and manages _ImageJobs_. Think of\nan _ImageJob_ as a unit of work, performed on every node in your cluster. Each\nnode runs a sub-job. The goal of the _ImageJob_ is to assess the images on your\ncluster's nodes, and to remove the images you don't want. There are two stages:\n1. Assessment\n1. Removal.\n\n### Scheduling\n\nAn _ImageJob_ can either be created on-demand (see [Manual Removal](https://eraser-dev.github.io/eraser/docs/manual-removal)),\nor it can be spawned on a timer like a cron job. On-demand jobs skip the\nassessment stage and get right down to the business of removing the images you\nspecified. The behavior of an on-demand job is quite different from that of\ntimed jobs.\n\n### Fault Tolerance\n\nBecause an _ImageJob_ runs on every node in your cluster, and the conditions on\neach node may vary widely, some of the sub-jobs may fail. If you cannot\ntolerate any failure, set the `manager.imageJob.successRatio` property to\n`1.0`. If 75% success sounds good to you, set it to `0.75`. In that case, if\nfewer than 75% of the pods spawned by the _ImageJob_ report success, the job as\na whole will be marked as a failure.\n\nThis is mainly to help diagnose error conditions. As such, you can set\n`manager.imageJob.cleanup.delayOnFailure` to a long value so that logs can be\ncaptured before the spawned pods are cleaned up.\n\n### Excluding Nodes\n\nFor various reasons, you may want to prevent Eraser from scheduling pods on\ncertain nodes. To do so, the nodes can be given a special label. By default,\nthis label is `eraser.sh/cleanup.filter`, but you can configure the behavior with\nthe options under `manager.nodeFilter`. The [table](#detailed-options) provides more detail.\n\n### Configuring Components\n\nAn _ImageJob_ is made up of various sub-jobs, with one sub-job for each node.\nThese sub-jobs can be broken down further into three stages.\n1. Collection (What is on the node?)\n1. Scanning (What images conform to the policy I've provided?)\n1. Removal (Remove images based on the results of the above)\n\nOf the above stages, only Removal is mandatory. The others can be disabled.\nFurthermore, manually triggered _ImageJobs_ will skip right to removal, even if\nEraser is configured to collect and scan. Collection and Scanning will only\ntake place when:\n1. The collector and/or scanner `components` are enabled, AND\n1. The job was *not* triggered manually by creating an _ImageList_.\n\nDisabling the scanner will remove all non-running images by default.\n\n### Swapping out components\n\nThe collector, scanner, and eraser components can all be swapped out. This\nenables you to build and host the images yourself. In addition, the scanner's\nbehavior can be completely tailored to your needs by swapping out the default\nimage with one of your own. To specify the images, use the\n`components.<component>.image.repo` and `components.<component>.image.tag`,\nwhere `<component>` is one of `collector`, `scanner`, or `eraser`.\n\n
## Universal Options\n\nThe following portions of the configmap apply no matter how you spawn your\n_ImageJob_. The values provided below are the defaults. For more detail on\nthese options, see the [table](#detailed-options).\n\n```yaml\nmanager:\n  runtime: containerd\n  otlpEndpoint: \"\" # empty string disables OpenTelemetry\n  logLevel: info\n  profile:\n    enabled: false\n    port: 6060\n  imageJob:\n    successRatio: 1.0\n    cleanup:\n      delayOnSuccess: 0s\n      delayOnFailure: 24h\n  pullSecrets: [] # image pull secrets for collector/scanner/eraser\n  priorityClassName: \"\" # priority class name for collector/scanner/eraser\n  nodeFilter:\n    type: exclude # must be either exclude|include\n    selectors:\n      - eraser.sh/cleanup.filter\n      - kubernetes.io/os=windows\ncomponents:\n  eraser:\n    image:\n      repo: ghcr.io/eraser-dev/eraser\n      tag: v1.0.0\n    request:\n      mem: 25Mi\n      cpu: 0\n    limit:\n      mem: 30Mi\n      cpu: 1000m\n```\n\n## Component Options\n\n```yaml\ncomponents:\n  collector:\n    enabled: true\n    image:\n      repo: ghcr.io/eraser-dev/collector\n      tag: v1.0.0\n    request:\n      mem: 25Mi\n      cpu: 7m\n    limit:\n      mem: 500Mi\n      cpu: 0\n  scanner:\n    enabled: true\n    image:\n      repo: ghcr.io/eraser-dev/eraser-trivy-scanner\n      tag: v1.0.0\n    request:\n      mem: 500Mi\n      cpu: 1000m\n    limit:\n      mem: 2Gi\n      cpu: 0\n    config: |\n      # this is the schema for the provided 'trivy-scanner'. custom scanners\n      # will define their own configuration. see below.\n  eraser:\n    image:\n      repo: ghcr.io/eraser-dev/eraser\n      tag: v1.0.0\n    request:\n      mem: 25Mi\n      cpu: 0\n    limit:\n      mem: 30Mi\n      cpu: 1000m\n```\n\n## Scanner Options\n\nThese options can be provided to `components.scanner.config`. They will be\npassed through as a string to the scanner container and parsed there. If you\nwant to configure your own scanner, you must provide some way to parse this.\n\nBelow are the values recognized by the provided `eraser-trivy-scanner` image.\nValues provided below are the defaults.\n\n```yaml\ncacheDir: /var/lib/trivy # The file path inside the container to store the cache\ndbRepo: ghcr.io/aquasecurity/trivy-db # The container registry from which to fetch the trivy database\ndeleteFailedImages: true # if true, remove images for which scanning fails, regardless of why it failed\nvulnerabilities:\n  ignoreUnfixed: true # consider the image compliant if there are no known fixes for the vulnerabilities found.\n  types: # a list of vulnerability types. for more info, see trivy's documentation.\n    - os\n    - library\n  securityChecks: # see trivy's documentation for more information\n    - vuln\n  severities: # in this case, only flag images with CRITICAL vulnerability for removal\n    - CRITICAL\ntimeout:\n  total: 23h # if scanning isn't completed before this much time elapses, abort the whole scan\n  perImage: 1h # if scanning a single image exceeds this time, scanning will be aborted\n```\n\n## Detailed Options\n\n| Option | Description | Default |\n| --- | --- | --- |\n| manager.runtime | The runtime to use for the manager's containers. Must be one of containerd, crio, or dockershim. 
It is assumed that your nodes are all using the same runtime, and there is currently no way to configure multiple runtimes. | containerd |\n| manager.otlpEndpoint | The endpoint to send OpenTelemetry data to. If empty, data will not be sent. | \"\" |\n| manager.logLevel | The log level for the manager's containers. Must be one of debug, info, warn, error, dpanic, panic, or fatal. | info |\n| manager.scheduling.repeatInterval | Use only when collector and/or scanner are enabled. This is like a cron job, and will spawn an _ImageJob_ at the interval provided. | 24h |\n| manager.scheduling.beginImmediately | If set to true, the first _ImageJob_ will run immediately. If false, the job will not be spawned until after the interval (above) has elapsed. | true |\n| manager.profile.enabled | Whether to enable profiling for the manager's containers. This is for debugging with `go tool pprof`. | false |\n| manager.profile.port | The port on which to expose the profiling endpoint. | 6060 |\n| manager.imageJob.successRatio | The ratio of successful image jobs required before a cleanup is performed. | 1.0 |\n| manager.imageJob.cleanup.delayOnSuccess | The amount of time to wait after a successful image job before performing cleanup. | 0s |\n| manager.imageJob.cleanup.delayOnFailure | The amount of time to wait after a failed image job before performing cleanup. | 24h |\n| manager.pullSecrets | The image pull secrets to use for collector, scanner, and eraser containers. | [] |\n| manager.priorityClassName | The priority class to use for collector, scanner, and eraser containers. | \"\" |\n| manager.nodeFilter.type | The type of node filter to use. Must be either \"exclude\" or \"include\". | exclude |\n| manager.nodeFilter.selectors | A list of selectors used to filter nodes. | [] |\n| components.collector.enabled | Whether to enable the collector component. | true |\n| components.collector.image.repo | The repository containing the collector image. | ghcr.io/eraser-dev/collector |\n| components.collector.image.tag | The tag of the collector image. | v1.0.0 |\n| components.collector.request.mem | The amount of memory to request for the collector container. | 25Mi |\n| components.collector.request.cpu | The amount of CPU to request for the collector container. | 7m |\n| components.collector.limit.mem | The maximum amount of memory the collector container is allowed to use. | 500Mi |\n| components.collector.limit.cpu | The maximum amount of CPU the collector container is allowed to use. | 0 |\n| components.scanner.enabled | Whether to enable the scanner component. | true |\n| components.scanner.image.repo | The repository containing the scanner image. | ghcr.io/eraser-dev/eraser-trivy-scanner |\n| components.scanner.image.tag | The tag of the scanner image. | v1.0.0 |\n| components.scanner.request.mem | The amount of memory to request for the scanner container. | 500Mi |\n| components.scanner.request.cpu | The amount of CPU to request for the scanner container. | 1000m |\n| components.scanner.limit.mem | The maximum amount of memory the scanner container is allowed to use. | 2Gi |\n| components.scanner.limit.cpu | The maximum amount of CPU the scanner container is allowed to use. | 0 |\n| components.scanner.config | The configuration to pass to the scanner container, as a YAML string. | See YAML below |\n| components.eraser.image.repo | The repository containing the eraser image. | ghcr.io/eraser-dev/eraser |\n| components.eraser.image.tag | The tag of the eraser image. 
| v1.0.0 |\n| components.eraser.request.mem | The amount of memory to request for the eraser container. | 25Mi |\n| components.eraser.request.cpu | The amount of CPU to request for the eraser container. | 0 |\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.0.x/exclusion.md",
    "content": "---\ntitle: Exclusion\n---\n\n## Excluding registries, repositories, and images\nEraser can exclude registries (example, `docker.io/library/*`) and also specific images with a tag (example, `docker.io/library/ubuntu:18.04`) or digest (example, `sha256:80f31da1ac7b312ba29d65080fd...`) from its removal process.\n\nTo exclude any images or registries from the removal, create configmap(s) with the label `eraser.sh/exclude.list=true` in the eraser-system namespace with a JSON file holding the excluded images.\n\n```bash\n$ cat > sample.json <<\"EOF\"\n{\n  \"excluded\": [\n    \"docker.io/library/*\",\n    \"ghcr.io/eraser-dev/test:latest\"\n  ]\n}\nEOF\n\n$ kubectl create configmap excluded --from-file=sample.json --namespace=eraser-system\n$ kubectl label configmap excluded eraser.sh/exclude.list=true -n eraser-system\n```\n\n## Exempting Nodes from the Eraser Pipeline\nExempting nodes from cleanup was added in v1.0.0. When deploying Eraser, you can specify whether there is a list of nodes you would like to `include` or `exclude` from the cleanup process using the configmap. For more information, see the section on [customization](https://eraser-dev.github.io/eraser/docs/customization).\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.0.x/faq.md",
    "content": "---\ntitle: FAQ\n---\n## Why am I still seeing vulnerable images?\nEraser currently targets **non-running** images, so any vulnerable images that are currently running will not be removed. In addition, the default vulnerability scanning with Trivy removes images with `CRITICAL` vulnerabilities. Any images with lower vulnerabilities will not be removed. This can be configured using the [configmap](https://eraser-dev.github.io/eraser/docs/customization#scanner-options).\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.0.x/installation.md",
    "content": "---\ntitle: Installation\n---\n\n## Manifest\n\nTo install Eraser with the manifest file, run the following command:\n\n```bash\nkubectl apply -f https://raw.githubusercontent.com/eraser-dev/eraser/v1.0.0/deploy/eraser.yaml\n```\n\n## Helm\n\nIf you'd like to install and manage Eraser with Helm, follow the install instructions [here](https://github.com/eraser-dev/eraser/blob/main/charts/eraser/README.md)\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.0.x/introduction.md",
    "content": "---\ntitle: Introduction\nslug: /\n---\n\n# Introduction\n\nWhen deploying to Kubernetes, it's common for pipelines to build and push images to a cluster, but it's much less common for these images to be cleaned up. This can lead to accumulating bloat on the disk, and a host of non-compliant images lingering on the nodes.\n\nThe current garbage collection process deletes images based on a percentage of load, but this process does not consider the vulnerability state of the images. **Eraser** aims to provide a simple way to determine the state of an image, and delete it if it meets the specified criteria.\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.0.x/manual-removal.md",
    "content": "---\ntitle: Manual Removal\n---\n\nCreate an `ImageList` and specify the images you would like to remove. In this case, the image `docker.io/library/alpine:3.7.3` will be removed.\n\n```shell\ncat <<EOF | kubectl apply -f -\napiVersion: eraser.sh/v1\nkind: ImageList\nmetadata:\n  name: imagelist\nspec:\n  images:\n    - docker.io/library/alpine:3.7.3   # use \"*\" for all non-running images\nEOF\n```\n\n> `ImageList` is a cluster-scoped resource and must be called imagelist. `\"*\"` can be specified to remove all non-running images instead of individual images.\n\nCreating an `ImageList` should trigger an `ImageJob` that will deploy Eraser pods on every node to perform the removal given the list of images.\n\n```shell\n$ kubectl get pods -n eraser-system\neraser-system        eraser-controller-manager-55d54c4fb6-dcglq   1/1     Running   0          9m8s\neraser-system        eraser-kind-control-plane                    1/1     Running   0          11s\neraser-system        eraser-kind-worker                           1/1     Running   0          11s\neraser-system        eraser-kind-worker2                          1/1     Running   0          11s\n```\n\nPods will run to completion and the images will be removed.\n\n```shell\n$ kubectl get pods -n eraser-system\neraser-system        eraser-controller-manager-6d6d5594d4-phl2q   1/1     Running     0          4m16s\neraser-system        eraser-kind-control-plane                    0/1     Completed   0          22s\neraser-system        eraser-kind-worker                           0/1     Completed   0          22s\neraser-system        eraser-kind-worker2                          0/1     Completed   0          22s\n```\n\nThe `ImageList` custom resource status field will contain the status of the last job. The success and failure counts indicate the number of nodes the Eraser agent was run on.\n\n```shell\n$ kubectl describe ImageList imagelist\n...\nStatus:\n  Failed:     0\n  Success:    3\n  Timestamp:  2022-02-25T23:41:55Z\n...\n```\n\nVerify the unused images are removed.\n\n```shell\n$ docker exec kind-worker ctr -n k8s.io images list | grep alpine\n```\n\nIf the image has been successfully removed, there will be no output.\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.0.x/metrics.md",
    "content": "---\ntitle: Metrics\n---\n\nTo view Eraser metrics, you will need to deploy an Open Telemetry collector in the 'eraser-system' namespace, and an exporter. An example collector with a Prometheus exporter is [otelcollector.yaml](https://github.com/eraser-dev/eraser/blob/main/test/e2e/test-data/otelcollector.yaml), and the endpoint can be specified using the [configmap](https://eraser-dev.github.io/eraser/docs/customization#universal-options). In this example, we are logging the collected data to the otel-collector pod, and exporting metrics through Prometheus at 'http://localhost:8889/metrics', but a separate exporter can also be configured.\n\nBelow is the list of metrics provided by Eraser per run:\n\n#### Eraser\n```yaml\n- count\n\t- name: images_removed_run_total\n\t\t- description: Total images removed by eraser\n```\n\n #### Scanner\n ```yaml\n- count\n\t- name: vulnerable_images_run_total\n\t\t- description: Total vulnerable images detected\n ```\n\n #### ImageJob\n ```yaml\n - count\n\t- name: imagejob_run_total\n\t\t- description: Total ImageJobs scheduled\n\t- name: pods_completed_run_total\n\t\t- description: Total pods completed\n\t-  name: pods_failed_run_total\n\t\t- description: Total pods failed\n- summary\n\t- name: imagejob_duration_run_seconds\n\t\t- description: Total time for ImageJobs scheduled to complete\n```\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.0.x/quick-start.md",
    "content": "---\ntitle: Quick Start\n---\n\nThis tutorial demonstrates the functionality of Eraser and validates that non-running images are removed succesfully.\n\n## Deploy a DaemonSet\n\nAfter following the [install instructions](installation.md), we'll apply a demo `DaemonSet`. For illustrative purposes, a DaemonSet is applied and deleted so the non-running images remain on all nodes. The alpine image with the `3.7.3` tag will be used in this example. This is an image with a known critical vulnerability.\n\nFirst, apply the `DaemonSet`:\n\n```shell\ncat <<EOF | kubectl apply -f -\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n  name: alpine\nspec:\n  selector:\n    matchLabels:\n      app: alpine\n  template:\n    metadata:\n      labels:\n        app: alpine\n    spec:\n      containers:\n      - name: alpine\n        image: docker.io/library/alpine:3.7.3\nEOF\n```\n\nNext, verify that the Pods are running or completed. After the `alpine` Pods complete, you may see a `CrashLoopBackoff` status. This is expected behavior from the `alpine` image and can be ignored for the tutorial.\n\n```shell\n$ kubectl get pods\nNAME          READY   STATUS      RESTARTS     AGE\nalpine-2gh9c   1/1     Running     1 (3s ago)   6s\nalpine-hljp9   0/1     Completed   1 (3s ago)   6s\n```\n\nDelete the DaemonSet:\n\n```shell\n$ kubectl delete daemonset alpine\n```\n\nVerify that the Pods have been deleted:\n\n```shell\n$ kubectl get pods\nNo resources found in default namespace.\n```\n\nTo verify that the `alpine` images are still on the nodes, exec into one of the worker nodes and list the images. If you are not using a kind cluster or Docker for your container nodes, you will need to adjust the exec command accordingly.\n\nList the nodes:\n\n```shell\n$ kubectl get nodes\nNAME                 STATUS   ROLES           AGE   VERSION\nkind-control-plane   Ready    control-plane   45m   v1.24.0\nkind-worker          Ready    <none>          45m   v1.24.0\nkind-worker2         Ready    <none>          44m   v1.24.0\n```\n\nList the images then filter for `alpine`:\n\n```shell\n$ docker exec kind-worker ctr -n k8s.io images list | grep alpine\ndocker.io/library/alpine:3.7.3                                                                             application/vnd.docker.distribution.manifest.list.v2+json sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10 2.0 MiB   linux/386,linux/amd64,linux/arm/v6,linux/arm64/v8,linux/ppc64le,linux/s390x  io.cri-containerd.image=managed\ndocker.io/library/alpine@sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10           application/vnd.docker.distribution.manifest.list.v2+json sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10 2.0 MiB   linux/386,linux/amd64,linux/arm/v6,linux/arm64/v8,linux/ppc64le,linux/s390x  io.cri-containerd.image=managed\n\n```\n\n## Automatically Cleaning Images\n\nAfter deploying Eraser, it will automatically clean images in a regular interval. This interval can be set using the `manager.scheduling.repeatInterval` setting in the [configmap](https://eraser-dev.github.io/eraser/docs/customization#detailed-options). The default interval is 24 hours (`24h`). 
Valid time units are \"ns\", \"us\" (or \"µs\"), \"ms\", \"s\", \"m\", \"h\".\n\nEraser will schedule collector pods on each node in the cluster, and each pod will contain 3 containers: collector, scanner, and eraser, which run to completion.\n\n```shell\n$ kubectl get pods -n eraser-system\nNAMESPACE            NAME                                         READY   STATUS      RESTARTS         AGE\neraser-system        collector-kind-control-plane-sb789           0/3     Completed   0                26m\neraser-system        collector-kind-worker-j84hm                  0/3     Completed   0                26m\neraser-system        collector-kind-worker2-4lbdr                 0/3     Completed   0                26m\neraser-system        eraser-controller-manager-86cdb4cbf9-x8d7q   1/1     Running     0                26m\n```\n\nThe collector container sends the list of all images to the scanner container, which scans them and reports non-compliant images to the eraser container, which in turn removes those that are not running. Once all pods are completed, they will be automatically cleaned up.\n\n> If you want to remove all the images periodically, you can skip the scanner container by setting the `components.scanner.enabled` value to `false` using the [configmap](https://eraser-dev.github.io/eraser/docs/customization#detailed-options). In this case, each collector pod will hold 2 containers: collector and eraser.\n\n```shell\n$ kubectl get pods -n eraser-system\nNAMESPACE            NAME                                         READY   STATUS      RESTARTS         AGE\neraser-system        collector-kind-control-plane-ksk2b           0/2     Completed   0                50s\neraser-system        collector-kind-worker-cpgqc                  0/2     Completed   0                50s\neraser-system        collector-kind-worker2-k25df                 0/2     Completed   0                50s\neraser-system        eraser-controller-manager-86cdb4cbf9-x8d7q   1/1     Running     0                55s\n```\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.0.x/releasing.md",
    "content": "---\ntitle: Releasing\n---\n\n## Create Release Pull Request\n\n1. Go to `create_release_pull_request` workflow under actions.\n2. Select run workflow, and use the workflow from your branch. \n3. Input release version with the semantic version identifying the release.\n4. Click run workflow and review the PR created by github-actions.\n\n# Releasing\n\n5. Once the PR is merged to `main`, tag that commit with release version and push tags to remote repository.\n\n   ```\n   git checkout <BRANCH NAME>\n   git pull origin <BRANCH NAME>\n   git tag -a <NEW VERSION> -m '<NEW VERSION>'\n   git push origin <NEW VERSION>\n   ```\n6. Pushing the release tag will trigger GitHub Actions to trigger `release` job.\n   This will build the `ghcr.io/eraser-dev/eraser`, `ghcr.io/eraser-dev/eraser-manager`, `ghcr.io/eraser-dev/collector`, and `ghcr.io/eraser-dev/eraser-trivy-scanner` images automatically, then publish the new release tag.\n\n## Publishing\n\n1. GitHub Action will create a new release, review and edit it at https://github.com/eraser-dev/eraser/releases"
  },
  {
    "path": "docs/versioned_docs/version-v1.0.x/setup.md",
    "content": "---\ntitle: Setup\n---\n\n# Development Setup\n\nThis document describes the steps to get started with development.\nYou can either utilize [Codespaces](https://docs.github.com/en/codespaces/overview) or setup a local environment.\n\n## Local Setup\n\n### Prerequisites:\n\n- [go](https://go.dev/) with version 1.17 or later.\n- [docker](https://docs.docker.com/get-docker/)\n- [kind](https://kind.sigs.k8s.io/)\n- `make`\n\n### Get things running\n\n- Get dependencies with `go get`\n\n- This project uses `make`. You can utilize `make help` to see available targets. For local deployment make targets help to build, test and deploy.\n\n### Making changes\n\nPlease refer to [Development Reference](#development-reference) for more details on the specific commands.\n\nTo test your changes on a cluster:\n\n```bash\n# generate necessary api files (optional - only needed if changes to api folder).\nmake generate\n\n# build applicable images\nmake docker-build-manager MANAGER_IMG=eraser-manager:dev\nmake docker-build-eraser ERASER_IMG=eraser:dev\nmake docker-build-collector COLLECTOR_IMG=collector:dev\nmake docker-build-trivy-scanner TRIVY_SCANNER_IMG=eraser-trivy-scanner:dev\n\n# make sure updated image is present on cluster (e.g., see kind example below)\nkind load docker-image \\\n        eraser-manager:dev \\\n        eraser-trivy-scanner:dev \\\n        eraser:dev \\\n        collector:dev\n\nmake manifests\nmake deploy\n\n# to remove the deployment\nmake undeploy\n```\n\nTo test your changes to manager locally:\n\n```bash\nmake run\n```\n\nExample Output:\n\n```\nyou@local:~/eraser$ make run\ndocker build . \\\n        -t eraser-tooling \\\n        -f build/tooling/Dockerfile\n[+] Building 7.8s (8/8) FINISHED\n => => naming to docker.io/library/eraser-tooling                           0.0s\ndocker run -v /home/eraser/config:/config -w /config/manager \\\n        registry.k8s.io/kustomize/kustomize:v3.8.9 edit set image controller=eraser-manager:dev\ndocker run -v /home/eraser:/eraser eraser-tooling controller-gen \\\n        crd \\\n        rbac:roleName=manager-role \\\n        webhook \\\n        paths=\"./...\" \\\n        output:crd:artifacts:config=config/crd/bases\nrm -rf manifest_staging\nmkdir -p manifest_staging/deploy\ndocker run --rm -v /home/eraser:/eraser \\\n        registry.k8s.io/kustomize/kustomize:v3.8.9 build \\\n        /eraser/config/default -o /eraser/manifest_staging/deploy/eraser.yaml\ndocker run -v /home/eraser:/eraser eraser-tooling controller-gen object:headerFile=\"hack/boilerplate.go.txt\" paths=\"./...\"\ngo fmt ./...\ngo vet ./...\ngo run ./main.go\n{\"level\":\"info\",\"ts\":1652985685.1663408,\"logger\":\"controller-runtime.metrics\",\"msg\":\"Metrics server is starting to listen\",\"addr\":\":8080\"}\n...\n```\n\n## Development Reference\n\nEraser is using tooling from [kubebuilder](https://github.com/kubernetes-sigs/kubebuilder). For Eraser this tooling is containerized into the `eraser-tooling` image. The `make` targets can use this tooling and build the image when necessary.\n\nYou can override the default configuration using environment variables. 
Below you can find a reference of targets and configuration options.\n\n### Common Configuration\n\n| Environment Variable | Description                                                                                    |\n| -------------------- | ---------------------------------------------------------------------------------------------- |\n| VERSION              | Specifies the version (i.e., the image tag) of eraser to be used.                              |\n| MANAGER_IMG          | Defines the image URL for the Eraser manager. Used for tagging, pulling, and pushing the image. |\n| ERASER_IMG           | Defines the image URL for the Eraser. Used for tagging, pulling, and pushing the image.         |\n| COLLECTOR_IMG        | Defines the image URL for the Collector. Used for tagging, pulling, and pushing the image.      |\n\n### Linting\n\n- `make lint`\n\nLints the go code.\n\n| Environment Variable | Description                                             |\n| -------------------- | ------------------------------------------------------- |\n| GOLANGCI_LINT        | Specifies the go linting binary to be used for linting. |\n\n### Development\n\n- `make generate`\n\nGenerates necessary files for the k8s API stored under `api/v1alpha1/zz_generated.deepcopy.go`. See the [kubebuilder docs](https://book.kubebuilder.io/cronjob-tutorial/other-api-files.html) for details.\n\n- `make manifests`\n\nGenerates the eraser deployment yaml files under `manifest_staging/deploy`.\n\nConfiguration Options:\n\n| Environment Variable | Description                                              |\n| -------------------- | -------------------------------------------------------- |\n| ERASER_IMG           | Defines the image URL for the Eraser.                    |\n| MANAGER_IMG          | Defines the image URL for the Eraser manager.            |\n| KUSTOMIZE_VERSION    | Defines the Kustomize version for generating manifests.  |\n\n- `make test`\n\nRuns the unit tests for the eraser project.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                 |\n| -------------------- | ----------------------------------------------------------- |\n| ENVTEST              | Specifies the envtest setup binary.                         |\n| ENVTEST_K8S_VERSION  | Specifies the Kubernetes version for envtest setup command. |\n\n- `make e2e-test`\n\nRuns e2e tests on a cluster.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                                                       |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------ |\n| ERASER_IMG           | Eraser image to be used for e2e test.                                                                             |\n| MANAGER_IMG          | Eraser manager image to be used for e2e test.                                                                     |\n| KUBERNETES_VERSION   | Kubernetes version for e2e test.                                                                                  |\n| TEST_COUNT           | Sets the repetition count for tests. Please refer to [go docs](https://pkg.go.dev/cmd/go#hdr-Testing_flags) for details. |\n| TIMEOUT              | Sets the timeout for tests. Please refer to [go docs](https://pkg.go.dev/cmd/go#hdr-Testing_flags) for details. 
|\n| TESTFLAGS            | Sets additional test flags.                                                                                       |\n\n### Build\n\n- `make build`\n\nBuilds the eraser manager binaries.\n\n- `make run`\n\nRuns the eraser manager on your local machine.\n\n- `make docker-build-manager`\n\nBuilds the docker image for the eraser manager.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                                                                                            |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| CACHE_FROM           | Sets the target of the buildx --cache-from flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-from). |\n| CACHE_TO             | Sets the target of the buildx --cache-to flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-to).     |\n| PLATFORM             | Sets the target platform for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#platform).               |\n| OUTPUT_TYPE          | Sets the output for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#output).                          |\n| MANAGER_IMG          | Specifies the target repository, image name, and tag for building the image.                                                                           |\n\n- `make docker-push-manager`\n\nPushes the docker image for the eraser manager.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                  |\n| -------------------- | ----------------------------------------------------------------------------- |\n| MANAGER_IMG          | Specifies the target repository, image name, and tag for building the image. |\n\n- `make docker-build-eraser`\n\nBuilds the docker image for the eraser.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                                                                                            |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| CACHE_FROM           | Sets the target of the buildx --cache-from flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-from). |\n| CACHE_TO             | Sets the target of the buildx --cache-to flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-to).     |\n| PLATFORM             | Sets the target platform for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#platform).               |\n| OUTPUT_TYPE          | Sets the output for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#output).                          |\n| ERASER_IMG           | Specifies the target repository, image name, and tag for building the image. 
|\n\n- `make docker-push-eraser`\n\nPushes the docker image for the eraser.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                  |\n| -------------------- | ----------------------------------------------------------------------------- |\n| ERASER_IMG           | Specifies the target repository, image name, and tag for building the image. |\n\n- `make docker-build-collector`\n\nBuilds the docker image for the eraser collector.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                                                                                            |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| CACHE_FROM           | Sets the target of the buildx --cache-from flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-from). |\n| CACHE_TO             | Sets the target of the buildx --cache-to flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-to).     |\n| PLATFORM             | Sets the target platform for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#platform).               |\n| OUTPUT_TYPE          | Sets the output for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#output).                          |\n| COLLECTOR_IMG        | Specifies the target repository, image name, and tag for building the image.                                                                           |\n\n- `make docker-push-collector`\n\nPushes the docker image for the eraser collector.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                  |\n| -------------------- | ----------------------------------------------------------------------------- |\n| COLLECTOR_IMG        | Specifies the target repository, image name, and tag for building the image. |\n\n### Deployment\n\n- `make install`\n\nInstalls CRDs into the K8s cluster specified in ~/.kube/config.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                      |\n| -------------------- | ---------------------------------------------------------------- |\n| KUSTOMIZE_VERSION    | Kustomize version used to generate k8s resources for deployment. |\n\n- `make uninstall`\n\nUninstalls CRDs from the K8s cluster specified in ~/.kube/config.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                      |\n| -------------------- | ---------------------------------------------------------------- |\n| KUSTOMIZE_VERSION    | Kustomize version used to generate k8s resources for deployment. |\n\n- `make deploy`\n\nDeploys eraser to the cluster specified in ~/.kube/config.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                          |\n| -------------------- | -------------------------------------------------------------------- |\n| KUSTOMIZE_VERSION    | Kustomize version used to generate k8s resources for deployment. 
|\n| MANAGER_IMG          | Specifies the eraser manager image version to be used for deployment. |\n\n- `make undeploy`\n\nUndeploys the controller from the K8s cluster specified in ~/.kube/config.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                               |\n| -------------------- | ------------------------------------------------------------------------- |\n| KUSTOMIZE_VERSION    | Kustomize version used to generate k8s resources that need to be removed. |\n\n### Release\n\n- `make release-manifest`\n\nGenerates k8s manifest files for a release.\n\nConfiguration Options:\n\n| Environment Variable | Description                           |\n| -------------------- | ------------------------------------- |\n| NEWVERSION           | Sets the new version in the Makefile. |\n\n- `make promote-staging-manifest`\n\nPromotes the k8s deployment yaml files to release.\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.0.x/trivy.md",
    "content": "---\ntitle: Trivy\n---\n\n## Trivy Provider Options\nThe trivy provider is used in Eraser for image scanning and detecting vulnerabilities. See [Customization](https://eraser-dev.github.io/eraser/docs/customization#scanner-options) for more details on configuring the scanner.\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.1.x/architecture.md",
    "content": "---\ntitle: Architecture\n---\nAt a high level, Eraser has two main modes of operation: manual and automated.\n\nManual image removal involves supplying a list of images to remove; Eraser then\ndeploys pods to clean up the images you supplied.\n\nAutomated image removal runs on a timer. By default, the automated process\nremoves images based on the results of a vulnerability scan. The default\nvulnerability scanner is Trivy, but others can be provided in its place. Or,\nthe scanner can be disabled altogether, in which case Eraser acts as a garbage\ncollector -- it will remove all non-running images in your cluster.\n\n## Manual image cleanup\n\n<img title=\"manual cleanup\" src=\"/eraser/docs/img/eraser_manual.png\" />\n\n## Automated analysis, scanning, and cleanup\n\n<img title=\"automated cleanup\" src=\"/eraser/docs/img/eraser_timer.png\" />\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.1.x/code-of-conduct.md",
    "content": "---\ntitle: Code of Conduct\n---\n\nThis project has adopted the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md).\n\nResources:\n\n- [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)\n- [Code of Conduct Reporting](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.1.x/contributing.md",
    "content": "---\ntitle: Contributing\n---\n\nThere are several ways to get involved with Eraser\n\n- Join the [mailing list](https://groups.google.com/u/1/g/eraser-dev) to get notifications for releases, security announcements, etc.\n- Participate in the [biweekly community meetings](https://docs.google.com/document/d/1Sj5u47K3WUGYNPmQHGFpb52auqZb1FxSlWAQnPADhWI/edit) to disucss development, issues, use cases, etc.\n- Join the `#eraser` channel on the [Kubernetes Slack](https://slack.k8s.io/)\n- View the [development setup instructions](https://eraser-dev.github.io/eraser/docs/development)\n\nThis project welcomes contributions and suggestions.\n\nThis project has adopted the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)."
  },
  {
    "path": "docs/versioned_docs/version-v1.1.x/custom-scanner.md",
    "content": "---\ntitle: Custom Scanner\n---\n\n## Creating a Custom Scanner\nTo create a custom scanner for non-compliant images, use the following [template](https://github.com/eraser-dev/eraser-scanner-template/).\n\nIn order to customize your scanner, start by creating a `NewImageProvider()`. The ImageProvider interface can be found can be found [here](../../../pkg/scanners/template/scanner_template.go). \n\nThe ImageProvider will allow you to retrieve the list of all non-running and non-excluded images from the collector container through the `ReceiveImages()` function. Process these images with your customized scanner and threshold, and use `SendImages()` to pass the images found non-compliant to the eraser container for removal. Finally, complete the scanning process by calling `Finish()`.\n\nWhen complete, provide your custom scanner image to Eraser in deployment.\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.1.x/customization.md",
    "content": "---\ntitle: Customization\n---\n\n## Overview\n\nEraser uses a configmap to configure its behavior. The configmap is part of the\ndeployment and it is not necessary to deploy it manually. Once deployed, the configmap\ncan be edited at any time:\n\n```bash\nkubectl edit configmap --namespace eraser-system eraser-manager-config\n```\n\nIf an eraser job is already running, the changes will not take effect until the job completes.\nThe configuration is in yaml.\n\n## Key Concepts\n\n### Basic architecture\n\nThe _manager_ runs as a pod in your cluster and manages _ImageJobs_. Think of\nan _ImageJob_ as a unit of work, performed on every node in your cluster. Each\nnode runs a sub-job. The goal of the _ImageJob_ is to assess the images on your\ncluster's nodes, and to remove the images you don't want. There are two stages:\n1. Assessment\n1. Removal.\n\n\n### Scheduling\n\nAn _ImageJob_ can either be created on-demand (see [Manual Removal](https://eraser-dev.github.io/eraser/docs/manual-removal)),\nor they can be spawned on a timer like a cron job. On-demand jobs skip the\nassessment stage and get right down to the business of removing the images you\nspecified. The behavior of an on-demand job is quite different from that of\ntimed jobs.\n\n### Fault Tolerance\n\nBecause an _ImageJob_ runs on every node in your cluster, and the conditions on\neach node may vary widely, some of the sub-jobs may fail. If you cannot\ntolerate any failure, set the `manager.imageJob.successRatio` property to\n`1.0`. If 75% success sounds good to you, set it to `0.75`. In that case, if\nfewer than 75% of the pods spawned by the _ImageJob_ report success, the job as\na whole will be marked as a failure.\n\nThis is mainly to help diagnose error conditions. As such, you can set\n`manager.imageJob.cleanup.delayOnFailure` to a long value so that logs can be\ncaptured before the spawned pods are cleaned up.\n\n### Excluding Nodes\n\nFor various reasons, you may want to prevent Eraser from scheduling pods on\ncertain nodes. To do so, the nodes can be given a special label. By default,\nthis label is `eraser.sh/cleanup.filter`, but you can configure the behavior with\nthe options under `manager.nodeFilter`. The [table](#detailed-options) provides more detail.\n\n### Configuring Components\n\nAn _ImageJob_ is made up of various sub-jobs, with one sub-job for each node.\nThese sub-jobs can be broken down further into three stages.\n1. Collection (What is on the node?)\n1. Scanning (What images conform to the policy I've provided?)\n1. Removal (Remove images based on the results of the above)\n\nOf the above stages, only Removal is mandatory. The others can be disabled.\nFurthermore, manually triggered _ImageJobs_ will skip right to removal, even if\nEraser is configured to collect and scan. Collection and Scanning will only\ntake place when:\n1. The collector and/or scanner `components` are enabled, AND\n1. The job was *not* triggered manually by creating an _ImageList_.\n\nDisabling scanner will remove all non-running images by default.\n\n### Swapping out components\n\nThe collector, scanner, and remover components can all be swapped out. This\nenables you to build and host the images yourself. In addition, the scanner's\nbehavior can be completely tailored to your needs by swapping out the default\nimage with one of your own. 
To specify the images, use the\n`components.<component>.image.repo` and `components.<component>.image.tag`\nfields, where `<component>` is one of `collector`, `scanner`, or `remover`.\n\n## Universal Options\n\nThe following portions of the configmap apply no matter how you spawn your\n_ImageJob_. The values provided below are the defaults. For more detail on\nthese options, see the [table](#detailed-options).\n\n```yaml\nmanager:\n  runtime: containerd\n  otlpEndpoint: \"\" # empty string disables OpenTelemetry\n  logLevel: info\n  profile:\n    enabled: false\n    port: 6060\n  imageJob:\n    successRatio: 1.0\n    cleanup:\n      delayOnSuccess: 0s\n      delayOnFailure: 24h\n  pullSecrets: [] # image pull secrets for collector/scanner/remover\n  priorityClassName: \"\" # priority class name for collector/scanner/remover\n  nodeFilter:\n    type: exclude # must be either exclude|include\n    selectors:\n      - eraser.sh/cleanup.filter\n      - kubernetes.io/os=windows\ncomponents:\n  remover:\n    image:\n      repo: ghcr.io/eraser-dev/remover\n      tag: v1.0.0\n    request:\n      mem: 25Mi\n      cpu: 0\n    limit:\n      mem: 30Mi\n      cpu: 1000m\n```\n\n## Component Options\n\n```yaml\ncomponents:\n  collector:\n    enabled: true\n    image:\n      repo: ghcr.io/eraser-dev/collector\n      tag: v1.0.0\n    request:\n      mem: 25Mi\n      cpu: 7m\n    limit:\n      mem: 500Mi\n      cpu: 0\n  scanner:\n    enabled: true\n    image:\n      repo: ghcr.io/eraser-dev/eraser-trivy-scanner\n      tag: v1.0.0\n    request:\n      mem: 500Mi\n      cpu: 1000m\n    limit:\n      mem: 2Gi\n      cpu: 0\n    config: |\n      # this is the schema for the provided 'trivy-scanner'. custom scanners\n      # will define their own configuration. see the section below\n  remover:\n    image:\n      repo: ghcr.io/eraser-dev/remover\n      tag: v1.0.0\n    request:\n      mem: 25Mi\n      cpu: 0\n    limit:\n      mem: 30Mi\n      cpu: 1000m\n```\n\n## Scanner Options\n\nThese options can be provided to `components.scanner.config`. They will be\npassed through as a string to the scanner container and parsed there. If you\nwant to configure your own scanner, you must provide some way to parse this.\n\nBelow are the values recognized by the provided `eraser-trivy-scanner` image.\nValues provided below are the defaults.\n\n```yaml\ncacheDir: /var/lib/trivy # The file path inside the container to store the cache\ndbRepo: ghcr.io/aquasecurity/trivy-db # The container registry from which to fetch the trivy database\ndeleteFailedImages: true # if true, remove images for which scanning fails, regardless of why it failed\ndeleteEOLImages: true # if true, remove images that have reached their end-of-life date\nvulnerabilities:\n  ignoreUnfixed: true # consider the image compliant if there are no known fixes for the vulnerabilities found.\n  types: # a list of vulnerability types. for more info, see trivy's documentation.\n    - os\n    - library\n  securityChecks: # see trivy's documentation for more information\n    - vuln\n  severities: # in this case, only flag images with CRITICAL vulnerability for removal\n    - CRITICAL\ntimeout:\n  total: 23h # if scanning isn't completed before this much time elapses, abort the whole scan\n  perImage: 1h # if scanning a single image exceeds this time, scanning will be aborted\n```\n\n## Detailed Options\n\n| Option | Description | Default |\n| --- | --- | --- |\n| manager.runtime | The runtime to use for the manager's containers. Must be one of containerd, crio, or dockershim. 
It is assumed that your nodes are all using the same runtime, and there is currently no way to configure multiple runtimes. | containerd |\n| manager.otlpEndpoint | The endpoint to send OpenTelemetry data to. If empty, data will not be sent. | \"\" |\n| manager.logLevel | The log level for the manager's containers. Must be one of debug, info, warn, error, dpanic, panic, or fatal. | info |\n| manager.scheduling.repeatInterval | Use only when collector and/or scanner are enabled. This is like a cron job, and will spawn an _ImageJob_ at the interval provided. | 24h |\n| manager.scheduling.beginImmediately | If set to true, the first _ImageJob_ will run immediately. If false, the job will not be spawned until after the interval (above) has elapsed. | true |\n| manager.profile.enabled | Whether to enable profiling for the manager's containers. This is for debugging with `go tool pprof`. | false |\n| manager.profile.port | The port on which to expose the profiling endpoint. | 6060 |\n| manager.imageJob.successRatio | The ratio of successful image jobs required before a cleanup is performed. | 1.0 |\n| manager.imageJob.cleanup.delayOnSuccess | The amount of time to wait after a successful image job before performing cleanup. | 0s |\n| manager.imageJob.cleanup.delayOnFailure | The amount of time to wait after a failed image job before performing cleanup. | 24h |\n| manager.pullSecrets | The image pull secrets to use for collector, scanner, and remover containers. | [] |\n| manager.priorityClassName | The priority class to use for collector, scanner, and remover containers. | \"\" |\n| manager.nodeFilter.type | The type of node filter to use. Must be either \"exclude\" or \"include\". | exclude |\n| manager.nodeFilter.selectors | A list of selectors used to filter nodes. | [] |\n| components.collector.enabled | Whether to enable the collector component. | true |\n| components.collector.image.repo | The repository containing the collector image. | ghcr.io/eraser-dev/collector |\n| components.collector.image.tag | The tag of the collector image. | v1.0.0 |\n| components.collector.request.mem | The amount of memory to request for the collector container. | 25Mi |\n| components.collector.request.cpu | The amount of CPU to request for the collector container. | 7m |\n| components.collector.limit.mem | The maximum amount of memory the collector container is allowed to use. | 500Mi |\n| components.collector.limit.cpu | The maximum amount of CPU the collector container is allowed to use. | 0 |\n| components.scanner.enabled | Whether to enable the scanner component. | true |\n| components.scanner.image.repo | The repository containing the scanner image. | ghcr.io/eraser-dev/eraser-trivy-scanner |\n| components.scanner.image.tag | The tag of the scanner image. | v1.0.0 |\n| components.scanner.request.mem | The amount of memory to request for the scanner container. | 500Mi |\n| components.scanner.request.cpu | The amount of CPU to request for the scanner container. | 1000m |\n| components.scanner.limit.mem | The maximum amount of memory the scanner container is allowed to use. | 2Gi |\n| components.scanner.limit.cpu | The maximum amount of CPU the scanner container is allowed to use. | 0 |\n| components.scanner.config | The configuration to pass to the scanner container, as a YAML string. | See YAML below |\n| components.remover.image.repo | The repository containing the remover image. | ghcr.io/eraser-dev/remover |\n| components.remover.image.tag | The tag of the remover image. 
| v1.0.0 |\n| components.remover.request.mem | The amount of memory to request for the remover container. | 25Mi |\n| components.remover.request.cpu | The amount of CPU to request for the remover container. | 0 |\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.1.x/exclusion.md",
    "content": "---\ntitle: Exclusion\n---\n\n## Excluding registries, repositories, and images\nEraser can exclude registries (example, `docker.io/library/*`) and also specific images with a tag (example, `docker.io/library/ubuntu:18.04`) or digest (example, `sha256:80f31da1ac7b312ba29d65080fd...`) from its removal process.\n\nTo exclude any images or registries from the removal, create configmap(s) with the label `eraser.sh/exclude.list=true` in the eraser-system namespace with a JSON file holding the excluded images.\n\n```bash\n$ cat > sample.json <<\"EOF\"\n{\n  \"excluded\": [\n    \"docker.io/library/*\",\n    \"ghcr.io/eraser-dev/test:latest\"\n  ]\n}\nEOF\n\n$ kubectl create configmap excluded --from-file=sample.json --namespace=eraser-system\n$ kubectl label configmap excluded eraser.sh/exclude.list=true -n eraser-system\n```\n\n## Exempting Nodes from the Eraser Pipeline\nExempting nodes from cleanup was added in v1.0.0. When deploying Eraser, you can specify whether there is a list of nodes you would like to `include` or `exclude` from the cleanup process using the configmap. For more information, see the section on [customization](https://eraser-dev.github.io/eraser/docs/customization).\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.1.x/faq.md",
    "content": "---\ntitle: FAQ\n---\n## Why am I still seeing vulnerable images?\nEraser currently targets **non-running** images, so any vulnerable images that are currently running will not be removed. In addition, the default vulnerability scanning with Trivy removes images with `CRITICAL` vulnerabilities. Any images with lower vulnerabilities will not be removed. This can be configured using the [configmap](https://eraser-dev.github.io/eraser/docs/customization#scanner-options).\n\n## How is Eraser different from Kubernetes garbage collection?\nThe native garbage collection in Kubernetes works a bit differently than Eraser. By default, garbage collection begins when disk usage reaches 85%, and stops when it gets down to 80%. More details about Kubernetes garbage collection can be found in the [Kubernetes documentation](https://kubernetes.io/docs/concepts/architecture/garbage-collection/), and configuration options can be found in the [Kubelet documentation](https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/). \n\nThere are a couple core benefits to using Eraser for image cleanup:\n* Eraser can be configured to use image vulnerability data when making determinations on image removal\n* By interfacing directly with the container runtime, Eraser can clean up images that are not managed by Kubelet and Kubernetes\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.1.x/installation.md",
    "content": "---\ntitle: Installation\n---\n\n## Manifest\n\nTo install Eraser with the manifest file, run the following command:\n\n```bash\nkubectl apply -f https://raw.githubusercontent.com/eraser-dev/eraser/v1.1.0/deploy/eraser.yaml\n```\n\n## Helm\n\nIf you'd like to install and manage Eraser with Helm, follow the install instructions [here](https://github.com/eraser-dev/eraser/blob/main/charts/eraser/README.md)\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.1.x/introduction.md",
    "content": "---\ntitle: Introduction\nslug: /\n---\n\n# Introduction\n\nWhen deploying to Kubernetes, it's common for pipelines to build and push images to a cluster, but it's much less common for these images to be cleaned up. This can lead to accumulating bloat on the disk, and a host of non-compliant images lingering on the nodes.\n\nThe current garbage collection process deletes images based on a percentage of load, but this process does not consider the vulnerability state of the images. **Eraser** aims to provide a simple way to determine the state of an image, and delete it if it meets the specified criteria.\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.1.x/manual-removal.md",
    "content": "---\ntitle: Manual Removal\n---\n\nCreate an `ImageList` and specify the images you would like to remove. In this case, the image `docker.io/library/alpine:3.7.3` will be removed.\n\n```shell\ncat <<EOF | kubectl apply -f -\napiVersion: eraser.sh/v1alpha1\nkind: ImageList\nmetadata:\n  name: imagelist\nspec:\n  images:\n    - docker.io/library/alpine:3.7.3   # use \"*\" for all non-running images\nEOF\n```\n\n> `ImageList` is a cluster-scoped resource and must be called imagelist. `\"*\"` can be specified to remove all non-running images instead of individual images.\n\nCreating an `ImageList` should trigger an `ImageJob` that will deploy Eraser pods on every node to perform the removal given the list of images.\n\n```shell\n$ kubectl get pods -n eraser-system\neraser-system        eraser-controller-manager-55d54c4fb6-dcglq   1/1     Running   0          9m8s\neraser-system        eraser-kind-control-plane                    1/1     Running   0          11s\neraser-system        eraser-kind-worker                           1/1     Running   0          11s\neraser-system        eraser-kind-worker2                          1/1     Running   0          11s\n```\n\nPods will run to completion and the images will be removed.\n\n```shell\n$ kubectl get pods -n eraser-system\neraser-system        eraser-controller-manager-6d6d5594d4-phl2q   1/1     Running     0          4m16s\neraser-system        eraser-kind-control-plane                    0/1     Completed   0          22s\neraser-system        eraser-kind-worker                           0/1     Completed   0          22s\neraser-system        eraser-kind-worker2                          0/1     Completed   0          22s\n```\n\nThe `ImageList` custom resource status field will contain the status of the last job. The success and failure counts indicate the number of nodes the Eraser agent was run on.\n\n```shell\n$ kubectl describe ImageList imagelist\n...\nStatus:\n  Failed:     0\n  Success:    3\n  Timestamp:  2022-02-25T23:41:55Z\n...\n```\n\nVerify the unused images are removed.\n\n```shell\n$ docker exec kind-worker ctr -n k8s.io images list | grep alpine\n```\n\nIf the image has been successfully removed, there will be no output.\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.1.x/metrics.md",
    "content": "---\ntitle: Metrics\n---\n\nTo view Eraser metrics, you will need to deploy an Open Telemetry collector in the 'eraser-system' namespace, and an exporter. An example collector with a Prometheus exporter is [otelcollector.yaml](https://github.com/eraser-dev/eraser/blob/main/test/e2e/test-data/otelcollector.yaml), and the endpoint can be specified using the [configmap](https://eraser-dev.github.io/eraser/docs/customization#universal-options). In this example, we are logging the collected data to the otel-collector pod, and exporting metrics through Prometheus at 'http://localhost:8889/metrics', but a separate exporter can also be configured.\n\nBelow is the list of metrics provided by Eraser per run:\n\n#### Eraser\n```yaml\n- count\n\t- name: images_removed_run_total\n\t\t- description: Total images removed by eraser\n```\n\n #### Scanner\n ```yaml\n- count\n\t- name: vulnerable_images_run_total\n\t\t- description: Total vulnerable images detected\n ```\n\n #### ImageJob\n ```yaml\n - count\n\t- name: imagejob_run_total\n\t\t- description: Total ImageJobs scheduled\n\t- name: pods_completed_run_total\n\t\t- description: Total pods completed\n\t-  name: pods_failed_run_total\n\t\t- description: Total pods failed\n- summary\n\t- name: imagejob_duration_run_seconds\n\t\t- description: Total time for ImageJobs scheduled to complete\n```\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.1.x/quick-start.md",
    "content": "---\ntitle: Quick Start\n---\n\nThis tutorial demonstrates the functionality of Eraser and validates that non-running images are removed succesfully.\n\n## Deploy a DaemonSet\n\nAfter following the [install instructions](installation.md), we'll apply a demo `DaemonSet`. For illustrative purposes, a DaemonSet is applied and deleted so the non-running images remain on all nodes. The alpine image with the `3.7.3` tag will be used in this example. This is an image with a known critical vulnerability.\n\nFirst, apply the `DaemonSet`:\n\n```shell\ncat <<EOF | kubectl apply -f -\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n  name: alpine\nspec:\n  selector:\n    matchLabels:\n      app: alpine\n  template:\n    metadata:\n      labels:\n        app: alpine\n    spec:\n      containers:\n      - name: alpine\n        image: docker.io/library/alpine:3.7.3\nEOF\n```\n\nNext, verify that the Pods are running or completed. After the `alpine` Pods complete, you may see a `CrashLoopBackoff` status. This is expected behavior from the `alpine` image and can be ignored for the tutorial.\n\n```shell\n$ kubectl get pods\nNAME          READY   STATUS      RESTARTS     AGE\nalpine-2gh9c   1/1     Running     1 (3s ago)   6s\nalpine-hljp9   0/1     Completed   1 (3s ago)   6s\n```\n\nDelete the DaemonSet:\n\n```shell\n$ kubectl delete daemonset alpine\n```\n\nVerify that the Pods have been deleted:\n\n```shell\n$ kubectl get pods\nNo resources found in default namespace.\n```\n\nTo verify that the `alpine` images are still on the nodes, exec into one of the worker nodes and list the images. If you are not using a kind cluster or Docker for your container nodes, you will need to adjust the exec command accordingly.\n\nList the nodes:\n\n```shell\n$ kubectl get nodes\nNAME                 STATUS   ROLES           AGE   VERSION\nkind-control-plane   Ready    control-plane   45m   v1.24.0\nkind-worker          Ready    <none>          45m   v1.24.0\nkind-worker2         Ready    <none>          44m   v1.24.0\n```\n\nList the images then filter for `alpine`:\n\n```shell\n$ docker exec kind-worker ctr -n k8s.io images list | grep alpine\ndocker.io/library/alpine:3.7.3                                                                             application/vnd.docker.distribution.manifest.list.v2+json sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10 2.0 MiB   linux/386,linux/amd64,linux/arm/v6,linux/arm64/v8,linux/ppc64le,linux/s390x  io.cri-containerd.image=managed\ndocker.io/library/alpine@sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10           application/vnd.docker.distribution.manifest.list.v2+json sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10 2.0 MiB   linux/386,linux/amd64,linux/arm/v6,linux/arm64/v8,linux/ppc64le,linux/s390x  io.cri-containerd.image=managed\n\n```\n\n## Automatically Cleaning Images\n\nAfter deploying Eraser, it will automatically clean images in a regular interval. This interval can be set using the `manager.scheduling.repeatInterval` setting in the [configmap](https://eraser-dev.github.io/eraser/docs/customization#detailed-options). The default interval is 24 hours (`24h`). 
Valid time units are \"ns\", \"us\" (or \"µs\"), \"ms\", \"s\", \"m\", \"h\".\n\nEraser will schedule eraser pods on each node in the cluster, and each pod will contain 3 containers: collector, scanner, and remover, which run to completion.\n\n```shell\n$ kubectl get pods -n eraser-system\nNAMESPACE            NAME                                         READY   STATUS      RESTARTS         AGE\neraser-system        eraser-kind-control-plane-sb789           0/3     Completed   0                26m\neraser-system        eraser-kind-worker-j84hm                  0/3     Completed   0                26m\neraser-system        eraser-kind-worker2-4lbdr                 0/3     Completed   0                26m\neraser-system        eraser-controller-manager-86cdb4cbf9-x8d7q   1/1     Running     0                26m\n```\n\nThe collector container sends the list of all images to the scanner container, which scans them and reports non-compliant images to the remover container, which in turn removes those that are not running. Once all pods are completed, they will be automatically cleaned up.\n\n> If you want to remove all the images periodically, you can skip the scanner container by setting the `components.scanner.enabled` value to `false` using the [configmap](https://eraser-dev.github.io/eraser/docs/customization#detailed-options). In this case, each pod will hold 2 containers: collector and remover.\n\n```shell\n$ kubectl get pods -n eraser-system\nNAMESPACE            NAME                                         READY   STATUS      RESTARTS         AGE\neraser-system        eraser-kind-control-plane-ksk2b           0/2     Completed   0                50s\neraser-system        eraser-kind-worker-cpgqc                  0/2     Completed   0                50s\neraser-system        eraser-kind-worker2-k25df                 0/2     Completed   0                50s\neraser-system        eraser-controller-manager-86cdb4cbf9-x8d7q   1/1     Running     0                55s\n```\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.1.x/releasing.md",
    "content": "---\ntitle: Releasing\n---\n\n## Create Release Pull Request\n\n1. Go to `create_release_pull_request` workflow under actions.\n2. Select run workflow, and use the workflow from your branch. \n3. Input release version with the semantic version identifying the release.\n4. Click run workflow and review the PR created by github-actions.\n\n# Releasing\n\n5. Once the PR is merged to `main`, tag that commit with release version and push tags to remote repository.\n\n   ```\n   git checkout <BRANCH NAME>\n   git pull origin <BRANCH NAME>\n   git tag -a <NEW VERSION> -m '<NEW VERSION>'\n   git push origin <NEW VERSION>\n   ```\n6. Pushing the release tag will trigger GitHub Actions to trigger `release` job.\n   This will build the `ghcr.io/eraser-dev/remover`, `ghcr.io/eraser-dev/eraser-manager`, `ghcr.io/eraser-dev/collector`, and `ghcr.io/eraser-dev/eraser-trivy-scanner` images automatically, then publish the new release tag.\n\n## Publishing\n\n1. GitHub Action will create a new release, review and edit it at https://github.com/eraser-dev/eraser/releases"
  },
  {
    "path": "docs/versioned_docs/version-v1.1.x/setup.md",
    "content": "---\ntitle: Setup\n---\n\n# Development Setup\n\nThis document describes the steps to get started with development.\nYou can either utilize [Codespaces](https://docs.github.com/en/codespaces/overview) or setup a local environment.\n\n## Local Setup\n\n### Prerequisites:\n\n- [go](https://go.dev/) with version 1.17 or later.\n- [docker](https://docs.docker.com/get-docker/)\n- [kind](https://kind.sigs.k8s.io/)\n- `make`\n\n### Get things running\n\n- Get dependencies with `go get`\n\n- This project uses `make`. You can utilize `make help` to see available targets. For local deployment make targets help to build, test and deploy.\n\n### Making changes\n\nPlease refer to [Development Reference](#development-reference) for more details on the specific commands.\n\nTo test your changes on a cluster:\n\n```bash\n# generate necessary api files (optional - only needed if changes to api folder).\nmake generate\n\n# build applicable images\nmake docker-build-manager MANAGER_IMG=eraser-manager:dev\nmake docker-build-remover REMOVER_IMG=remover:dev\nmake docker-build-collector COLLECTOR_IMG=collector:dev\nmake docker-build-trivy-scanner TRIVY_SCANNER_IMG=eraser-trivy-scanner:dev\n\n# make sure updated image is present on cluster (e.g., see kind example below)\nkind load docker-image \\\n        eraser-manager:dev \\\n        eraser-trivy-scanner:dev \\\n        remover:dev \\\n        collector:dev\n\nmake manifests\nmake deploy\n\n# to remove the deployment\nmake undeploy\n```\n\nTo test your changes to manager locally:\n\n```bash\nmake run\n```\n\nExample Output:\n\n```\nyou@local:~/eraser$ make run\ndocker build . \\\n        -t eraser-tooling \\\n        -f build/tooling/Dockerfile\n[+] Building 7.8s (8/8) FINISHED\n => => naming to docker.io/library/eraser-tooling                           0.0s\ndocker run -v /home/eraser/config:/config -w /config/manager \\\n        registry.k8s.io/kustomize/kustomize:v3.8.9 edit set image controller=eraser-manager:dev\ndocker run -v /home/eraser:/eraser eraser-tooling controller-gen \\\n        crd \\\n        rbac:roleName=manager-role \\\n        webhook \\\n        paths=\"./...\" \\\n        output:crd:artifacts:config=config/crd/bases\nrm -rf manifest_staging\nmkdir -p manifest_staging/deploy\ndocker run --rm -v /home/eraser:/eraser \\\n        registry.k8s.io/kustomize/kustomize:v3.8.9 build \\\n        /eraser/config/default -o /eraser/manifest_staging/deploy/eraser.yaml\ndocker run -v /home/eraser:/eraser eraser-tooling controller-gen object:headerFile=\"hack/boilerplate.go.txt\" paths=\"./...\"\ngo fmt ./...\ngo vet ./...\ngo run ./main.go\n{\"level\":\"info\",\"ts\":1652985685.1663408,\"logger\":\"controller-runtime.metrics\",\"msg\":\"Metrics server is starting to listen\",\"addr\":\":8080\"}\n...\n```\n\n## Development Reference\n\nEraser is using tooling from [kubebuilder](https://github.com/kubernetes-sigs/kubebuilder). For Eraser this tooling is containerized into the `eraser-tooling` image. The `make` targets can use this tooling and build the image when necessary.\n\nYou can override the default configuration using environment variables. 
Below you can find a reference of targets and configuration options.\n\n### Common Configuration\n\n| Environment Variable | Description                                                                                    |\n| -------------------- | ---------------------------------------------------------------------------------------------- |\n| VERSION              | Specifies the version (i.e., the image tag) of eraser to be used.                              |\n| MANAGER_IMG          | Defines the image url for the Eraser manager. Used for tagging, pulling and pushing the image  |\n| REMOVER_IMG          | Defines the image url for the Eraser remover. Used for tagging, pulling and pushing the image  |\n| COLLECTOR_IMG        | Defines the image url for the Collector. Used for tagging, pulling and pushing the image       |\n\n### Linting\n\n- `make lint`\n\nLints the go code.\n\n| Environment Variable | Description                                             |\n| -------------------- | ------------------------------------------------------- |\n| GOLANGCI_LINT        | Specifies the go linting binary to be used for linting. |\n\n### Development\n\n- `make generate`\n\nGenerates necessary files for the k8s api stored under `api/v1alpha1/zz_generated.deepcopy.go`. See the [kubebuilder docs](https://book.kubebuilder.io/cronjob-tutorial/other-api-files.html) for details.\n\n- `make manifests`\n\nGenerates the eraser deployment yaml files under `manifest_staging/deploy`.\n\nConfiguration Options:\n\n| Environment Variable | Description                                              |\n| -------------------- | -------------------------------------------------------- |\n| REMOVER_IMG          | Defines the image url for the Eraser remover.            |\n| MANAGER_IMG          | Defines the image url for the Eraser manager.            |\n| KUSTOMIZE_VERSION    | Defines the Kustomize version for generating manifests.  |\n\n- `make test`\n\nRuns the unit tests for the eraser project.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                 |\n| -------------------- | ----------------------------------------------------------- |\n| ENVTEST              | Specifies the envtest setup binary.                         |\n| ENVTEST_K8S_VERSION  | Specifies the Kubernetes version for envtest setup command. |\n\n- `make e2e-test`\n\nRuns e2e tests on a cluster.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                                                              |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------ |\n| REMOVER_IMG          | Eraser remover image to be used for e2e test.                                                                            |\n| MANAGER_IMG          | Eraser manager image to be used for e2e test.                                                                            |\n| KUBERNETES_VERSION   | Kubernetes version for e2e test.                                                                                         |\n| TEST_COUNT           | Sets the number of test repetitions. Please refer to [go docs](https://pkg.go.dev/cmd/go#hdr-Testing_flags) for details. |\n| TIMEOUT              | Sets the timeout for tests. Please refer to [go docs](https://pkg.go.dev/cmd/go#hdr-Testing_flags) for details.          
|\n| TESTFLAGS            | Sets additional test flags.                                                                                   |\n\n### Build\n\n- `make build`\n\nBuilds the eraser manager binaries.\n\n- `make run`\n\nRuns the eraser manager on your local machine.\n\n- `make docker-build-manager`\n\nBuilds the docker image for the eraser manager.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                                                                                            |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| CACHE_FROM           | Sets the target of the buildx --cache-from flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-from). |\n| CACHE_TO             | Sets the target of the buildx --cache-to flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-to).     |\n| PLATFORM             | Sets the target platform for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#platform).               |\n| OUTPUT_TYPE          | Sets the output for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#output).                          |\n| MANAGER_IMG          | Specifies the target repository, image name and tag for building image.                                                                                |\n\n- `make docker-push-manager`\n\nPushes the docker image for the eraser manager.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                             |\n| -------------------- | ----------------------------------------------------------------------- |\n| MANAGER_IMG          | Specifies the target repository, image name and tag for building image. |\n\n- `make docker-build-remover`\n\nBuilds the docker image for the eraser remover.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                                                                                            |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| CACHE_FROM           | Sets the target of the buildx --cache-from flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-from). |\n| CACHE_TO             | Sets the target of the buildx --cache-to flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-to).     |\n| PLATFORM             | Sets the target platform for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#platform).               |\n| OUTPUT_TYPE          | Sets the output for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#output).                          |\n| REMOVER_IMG          | Specifies the target repository, image name and tag for building image.                                                                                
|\n\n- `make docker-push-remover`\n\nPushes the docker image for the eraser remover.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                             |\n| -------------------- | ----------------------------------------------------------------------- |\n| REMOVER_IMG          | Specifies the target repository, image name and tag for building image. |\n\n- `make docker-build-collector`\n\nBuilds the docker image for the eraser collector.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                                                                                            |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| CACHE_FROM           | Sets the target of the buildx --cache-from flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-from). |\n| CACHE_TO             | Sets the target of the buildx --cache-to flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-to).     |\n| PLATFORM             | Sets the target platform for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#platform).               |\n| OUTPUT_TYPE          | Sets the output for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#output).                          |\n| COLLECTOR_IMG        | Specifies the target repository, image name and tag for building image.                                                                                |\n\n- `make docker-push-collector`\n\nPushes the docker image for the eraser collector.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                             |\n| -------------------- | ----------------------------------------------------------------------- |\n| COLLECTOR_IMG        | Specifies the target repository, image name and tag for building image. |\n\n### Deployment\n\n- `make install`\n\nInstalls CRDs into the K8s cluster specified in ~/.kube/config.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                      |\n| -------------------- | ---------------------------------------------------------------- |\n| KUSTOMIZE_VERSION    | Kustomize version used to generate k8s resources for deployment. |\n\n- `make uninstall`\n\nUninstalls CRDs from the K8s cluster specified in ~/.kube/config.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                      |\n| -------------------- | ---------------------------------------------------------------- |\n| KUSTOMIZE_VERSION    | Kustomize version used to generate k8s resources for deployment. |\n\n- `make deploy`\n\nDeploys eraser to the cluster specified in ~/.kube/config.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                            |\n| -------------------- | ---------------------------------------------------------------------- |\n| KUSTOMIZE_VERSION    | Kustomize version used to generate k8s resources for deployment.      
|\n| MANAGER_IMG          | Specifies the eraser manager image version to be used for deployment. |\n\n- `make undeploy`\n\nUndeploys the controller from the K8s cluster specified in ~/.kube/config.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                               |\n| -------------------- | ------------------------------------------------------------------------- |\n| KUSTOMIZE_VERSION    | Kustomize version used to generate k8s resources that need to be removed. |\n\n### Release\n\n- `make release-manifest`\n\nGenerates k8s manifest files for a release.\n\nConfiguration Options:\n\n| Environment Variable | Description                           |\n| -------------------- | ------------------------------------- |\n| NEWVERSION           | Sets the new version in the Makefile. |\n\n- `make promote-staging-manifest`\n\nPromotes the k8s deployment yaml files to release.\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.1.x/trivy.md",
    "content": "---\ntitle: Trivy\n---\n\n## Trivy Provider Options\nThe trivy provider is used in Eraser for image scanning and detecting vulnerabilities. See [Customization](https://eraser-dev.github.io/eraser/docs/customization#scanner-options) for more details on configuring the scanner.\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.2.x/architecture.md",
    "content": "---\ntitle: Architecture\n---\nAt a high level, Eraser has two main modes of operation: manual and automated.\n\nManual image removal involves supplying a list of images to remove; Eraser then\ndeploys pods to clean up the images you supplied.\n\nAutomated image removal runs on a timer. By default, the automated process\nremoves images based on the results of a vulnerability scan. The default\nvulnerability scanner is Trivy, but others can be provided in its place. Or,\nthe scanner can be disabled altogether, in which case Eraser acts as a garbage\ncollector -- it will remove all non-running images in your cluster.\n\n## Manual image cleanup\n\n<img title=\"manual cleanup\" src=\"/eraser/docs/img/eraser_manual.png\" />\n\n## Automated analysis, scanning, and cleanup\n\n<img title=\"automated cleanup\" src=\"/eraser/docs/img/eraser_timer.png\" />\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.2.x/code-of-conduct.md",
    "content": "---\ntitle: Code of Conduct\n---\n\nThis project has adopted the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md).\n\nResources:\n\n- [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)\n- [Code of Conduct Reporting](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.2.x/contributing.md",
    "content": "---\ntitle: Contributing\n---\n\nThere are several ways to get involved with Eraser\n\n- Join the [mailing list](https://groups.google.com/u/1/g/eraser-dev) to get notifications for releases, security announcements, etc.\n- Participate in the [biweekly community meetings](https://docs.google.com/document/d/1Sj5u47K3WUGYNPmQHGFpb52auqZb1FxSlWAQnPADhWI/edit) to disucss development, issues, use cases, etc.\n- Join the `#eraser` channel on the [Kubernetes Slack](https://slack.k8s.io/)\n- View the [development setup instructions](https://eraser-dev.github.io/eraser/docs/development)\n\nThis project welcomes contributions and suggestions.\n\nThis project has adopted the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)."
  },
  {
    "path": "docs/versioned_docs/version-v1.2.x/custom-scanner.md",
    "content": "---\ntitle: Custom Scanner\n---\n\n## Creating a Custom Scanner\nTo create a custom scanner for non-compliant images, use the following [template](https://github.com/eraser-dev/eraser-scanner-template/).\n\nIn order to customize your scanner, start by creating a `NewImageProvider()`. The ImageProvider interface can be found can be found [here](../../../pkg/scanners/template/scanner_template.go). \n\nThe ImageProvider will allow you to retrieve the list of all non-running and non-excluded images from the collector container through the `ReceiveImages()` function. Process these images with your customized scanner and threshold, and use `SendImages()` to pass the images found non-compliant to the eraser container for removal. Finally, complete the scanning process by calling `Finish()`.\n\nWhen complete, provide your custom scanner image to Eraser in deployment.\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.2.x/customization.md",
    "content": "---\ntitle: Customization\n---\n\n## Overview\n\nEraser uses a configmap to configure its behavior. The configmap is part of the\ndeployment and it is not necessary to deploy it manually. Once deployed, the configmap\ncan be edited at any time:\n\n```bash\nkubectl edit configmap --namespace eraser-system eraser-manager-config\n```\n\nIf an eraser job is already running, the changes will not take effect until the job completes.\nThe configuration is in yaml.\n\n## Key Concepts\n\n### Basic architecture\n\nThe _manager_ runs as a pod in your cluster and manages _ImageJobs_. Think of\nan _ImageJob_ as a unit of work, performed on every node in your cluster. Each\nnode runs a sub-job. The goal of the _ImageJob_ is to assess the images on your\ncluster's nodes, and to remove the images you don't want. There are two stages:\n1. Assessment\n1. Removal.\n\n\n### Scheduling\n\nAn _ImageJob_ can either be created on-demand (see [Manual Removal](https://eraser-dev.github.io/eraser/docs/manual-removal)),\nor they can be spawned on a timer like a cron job. On-demand jobs skip the\nassessment stage and get right down to the business of removing the images you\nspecified. The behavior of an on-demand job is quite different from that of\ntimed jobs.\n\n### Fault Tolerance\n\nBecause an _ImageJob_ runs on every node in your cluster, and the conditions on\neach node may vary widely, some of the sub-jobs may fail. If you cannot\ntolerate any failure, set the `manager.imageJob.successRatio` property to\n`1.0`. If 75% success sounds good to you, set it to `0.75`. In that case, if\nfewer than 75% of the pods spawned by the _ImageJob_ report success, the job as\na whole will be marked as a failure.\n\nThis is mainly to help diagnose error conditions. As such, you can set\n`manager.imageJob.cleanup.delayOnFailure` to a long value so that logs can be\ncaptured before the spawned pods are cleaned up.\n\n### Excluding Nodes\n\nFor various reasons, you may want to prevent Eraser from scheduling pods on\ncertain nodes. To do so, the nodes can be given a special label. By default,\nthis label is `eraser.sh/cleanup.filter`, but you can configure the behavior with\nthe options under `manager.nodeFilter`. The [table](#detailed-options) provides more detail.\n\n### Configuring Components\n\nAn _ImageJob_ is made up of various sub-jobs, with one sub-job for each node.\nThese sub-jobs can be broken down further into three stages.\n1. Collection (What is on the node?)\n1. Scanning (What images conform to the policy I've provided?)\n1. Removal (Remove images based on the results of the above)\n\nOf the above stages, only Removal is mandatory. The others can be disabled.\nFurthermore, manually triggered _ImageJobs_ will skip right to removal, even if\nEraser is configured to collect and scan. Collection and Scanning will only\ntake place when:\n1. The collector and/or scanner `components` are enabled, AND\n1. The job was *not* triggered manually by creating an _ImageList_.\n\nDisabling scanner will remove all non-running images by default.\n\n### Swapping out components\n\nThe collector, scanner, and remover components can all be swapped out. This\nenables you to build and host the images yourself. In addition, the scanner's\nbehavior can be completely tailored to your needs by swapping out the default\nimage with one of your own. 
To specify the images, use the\n`components.<component>.image.repo` and `components.<component>.image.tag`,\nwhere `<component>` is one of `collector`, `scanner`, or `remover`.\n\n## Universal Options\n\nThe following portions of the configmap apply no matter how you spawn your\n_ImageJob_. The values provided below are the defaults. For more detail on\nthese options, see the [table](#detailed-options).\n\n```yaml\nmanager:\n  runtime: containerd\n  otlpEndpoint: \"\" # empty string disables OpenTelemetry\n  logLevel: info\n  profile:\n    enabled: false\n    port: 6060\n  imageJob:\n    successRatio: 1.0\n    cleanup:\n      delayOnSuccess: 0s\n      delayOnFailure: 24h\n  pullSecrets: [] # image pull secrets for collector/scanner/remover\n  priorityClassName: \"\" # priority class name for collector/scanner/remover\n  nodeFilter:\n    type: exclude # must be either exclude|include\n    selectors:\n      - eraser.sh/cleanup.filter\n      - kubernetes.io/os=windows\ncomponents:\n  remover:\n    image:\n      repo: ghcr.io/eraser-dev/remover\n      tag: v1.0.0\n    request:\n      mem: 25Mi\n      cpu: 0\n    limit:\n      mem: 30Mi\n      cpu: 1000m\n```\n\n## Component Options\n\n```yaml\ncomponents:\n  collector:\n    enabled: true\n    image:\n      repo: ghcr.io/eraser-dev/collector\n      tag: v1.0.0\n    request:\n      mem: 25Mi\n      cpu: 7m\n    limit:\n      mem: 500Mi\n      cpu: 0\n  scanner:\n    enabled: true\n    image:\n      repo: ghcr.io/eraser-dev/eraser-trivy-scanner\n      tag: v1.0.0\n    request:\n      mem: 500Mi\n      cpu: 1000m\n    limit:\n      mem: 2Gi\n      cpu: 0\n    config: |\n      # this is the schema for the provided 'trivy-scanner'. custom scanners\n      # will define their own configuration. see below.\n  remover:\n    image:\n      repo: ghcr.io/eraser-dev/remover\n      tag: v1.0.0\n    request:\n      mem: 25Mi\n      cpu: 0\n    limit:\n      mem: 30Mi\n      cpu: 1000m\n```\n\n## Scanner Options\n\nThese options can be provided to `components.scanner.config`. They will be\npassed through as a string to the scanner container and parsed there. If you\nwant to configure your own scanner, you must provide some way to parse this.\n\nBelow are the values recognized by the provided `eraser-trivy-scanner` image.\nValues provided below are the defaults.\n\n```yaml\ncacheDir: /var/lib/trivy # The file path inside the container to store the cache\ndbRepo: ghcr.io/aquasecurity/trivy-db # The container registry from which to fetch the trivy database\ndeleteFailedImages: true # if true, remove images for which scanning fails, regardless of why it failed\ndeleteEOLImages: true # if true, remove images that have reached their end-of-life date\nvulnerabilities:\n  ignoreUnfixed: true # consider the image compliant if there are no known fixes for the vulnerabilities found.\n  types: # a list of vulnerability types. for more info, see trivy's documentation.\n    - os\n    - library\n  securityChecks: # see trivy's documentation for more information\n    - vuln\n  severities: # in this case, only flag images with CRITICAL vulnerability for removal\n    - CRITICAL\ntimeout:\n  total: 23h # if scanning isn't completed before this much time elapses, abort the whole scan\n  perImage: 1h # if scanning a single image exceeds this time, scanning will be aborted\n```\n\n## Detailed Options\n\n| Option | Description | Default |\n| --- | --- | --- |\n| manager.runtime | The runtime to use for the manager's containers. Must be one of containerd, crio, or dockershim. 
It is assumed that your nodes are all using the same runtime, and there is currently no way to configure multiple runtimes. | containerd |\n| manager.otlpEndpoint | The endpoint to send OpenTelemetry data to. If empty, data will not be sent. | \"\" |\n| manager.logLevel | The log level for the manager's containers. Must be one of debug, info, warn, error, dpanic, panic, or fatal. | info |\n| manager.scheduling.repeatInterval | Use only when collector and/or scanner are enabled. This is like a cron job, and will spawn an _ImageJob_ at the interval provided. | 24h |\n| manager.scheduling.beginImmediately | If set to true, the first _ImageJob_ will run immediately. If false, the job will not be spawned until after the interval (above) has elapsed. | true |\n| manager.profile.enabled | Whether to enable profiling for the manager's containers. This is for debugging with `go tool pprof`. | false |\n| manager.profile.port | The port on which to expose the profiling endpoint. | 6060 |\n| manager.imageJob.successRatio | The ratio of successful image jobs required before a cleanup is performed. | 1.0 |\n| manager.imageJob.cleanup.delayOnSuccess | The amount of time to wait after a successful image job before performing cleanup. | 0s |\n| manager.imageJob.cleanup.delayOnFailure | The amount of time to wait after a failed image job before performing cleanup. | 24h |\n| manager.pullSecrets | The image pull secrets to use for collector, scanner, and remover containers. | [] |\n| manager.priorityClassName | The priority class to use for collector, scanner, and remover containers. | \"\" |\n| manager.nodeFilter.type | The type of node filter to use. Must be either \"exclude\" or \"include\". | exclude |\n| manager.nodeFilter.selectors | A list of selectors used to filter nodes. | [] |\n| components.collector.enabled | Whether to enable the collector component. | true |\n| components.collector.image.repo | The repository containing the collector image. | ghcr.io/eraser-dev/collector |\n| components.collector.image.tag | The tag of the collector image. | v1.0.0 |\n| components.collector.request.mem | The amount of memory to request for the collector container. | 25Mi |\n| components.collector.request.cpu | The amount of CPU to request for the collector container. | 7m |\n| components.collector.limit.mem | The maximum amount of memory the collector container is allowed to use. | 500Mi |\n| components.collector.limit.cpu | The maximum amount of CPU the collector container is allowed to use. | 0 |\n| components.scanner.enabled | Whether to enable the scanner component. | true |\n| components.scanner.image.repo | The repository containing the scanner image. | ghcr.io/eraser-dev/eraser-trivy-scanner |\n| components.scanner.image.tag | The tag of the scanner image. | v1.0.0 |\n| components.scanner.request.mem | The amount of memory to request for the scanner container. | 500Mi |\n| components.scanner.request.cpu | The amount of CPU to request for the scanner container. | 1000m |\n| components.scanner.limit.mem | The maximum amount of memory the scanner container is allowed to use. | 2Gi |\n| components.scanner.limit.cpu | The maximum amount of CPU the scanner container is allowed to use. | 0 |\n| components.scanner.config | The configuration to pass to the scanner container, as a YAML string. | See YAML below |\n| components.remover.image.repo | The repository containing the remover image. | ghcr.io/eraser-dev/remover |\n| components.remover.image.tag | The tag of the remover image. 
| v1.0.0 |\n| components.remover.request.mem | The amount of memory to request for the remover container. | 25Mi |\n| components.remover.request.cpu | The amount of CPU to request for the remover container. | 0 |\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.2.x/exclusion.md",
    "content": "---\ntitle: Exclusion\n---\n\n## Excluding registries, repositories, and images\nEraser can exclude registries (example, `docker.io/library/*`) and also specific images with a tag (example, `docker.io/library/ubuntu:18.04`) or digest (example, `sha256:80f31da1ac7b312ba29d65080fd...`) from its removal process.\n\nTo exclude any images or registries from the removal, create configmap(s) with the label `eraser.sh/exclude.list=true` in the eraser-system namespace with a JSON file holding the excluded images.\n\n```bash\n$ cat > sample.json <<\"EOF\"\n{\n  \"excluded\": [\n    \"docker.io/library/*\",\n    \"ghcr.io/eraser-dev/test:latest\"\n  ]\n}\nEOF\n\n$ kubectl create configmap excluded --from-file=sample.json --namespace=eraser-system\n$ kubectl label configmap excluded eraser.sh/exclude.list=true -n eraser-system\n```\n\n## Exempting Nodes from the Eraser Pipeline\nExempting nodes from cleanup was added in v1.0.0. When deploying Eraser, you can specify whether there is a list of nodes you would like to `include` or `exclude` from the cleanup process using the configmap. For more information, see the section on [customization](https://eraser-dev.github.io/eraser/docs/customization).\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.2.x/faq.md",
    "content": "---\ntitle: FAQ\n---\n## Why am I still seeing vulnerable images?\nEraser currently targets **non-running** images, so any vulnerable images that are currently running will not be removed. In addition, the default vulnerability scanning with Trivy removes images with `CRITICAL` vulnerabilities. Any images with lower vulnerabilities will not be removed. This can be configured using the [configmap](https://eraser-dev.github.io/eraser/docs/customization#scanner-options).\n\n## How is Eraser different from Kubernetes garbage collection?\nThe native garbage collection in Kubernetes works a bit differently than Eraser. By default, garbage collection begins when disk usage reaches 85%, and stops when it gets down to 80%. More details about Kubernetes garbage collection can be found in the [Kubernetes documentation](https://kubernetes.io/docs/concepts/architecture/garbage-collection/), and configuration options can be found in the [Kubelet documentation](https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/). \n\nThere are a couple core benefits to using Eraser for image cleanup:\n* Eraser can be configured to use image vulnerability data when making determinations on image removal\n* By interfacing directly with the container runtime, Eraser can clean up images that are not managed by Kubelet and Kubernetes\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.2.x/installation.md",
    "content": "---\ntitle: Installation\n---\n\n## Manifest\n\nTo install Eraser with the manifest file, run the following command:\n\n```bash\nkubectl apply -f https://raw.githubusercontent.com/eraser-dev/eraser/v1.2.0/deploy/eraser.yaml\n```\n\n## Helm\n\nIf you'd like to install and manage Eraser with Helm, follow the install instructions [here](https://github.com/eraser-dev/eraser/blob/main/charts/eraser/README.md)\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.2.x/introduction.md",
    "content": "---\ntitle: Introduction\nslug: /\n---\n\n# Introduction\n\nWhen deploying to Kubernetes, it's common for pipelines to build and push images to a cluster, but it's much less common for these images to be cleaned up. This can lead to accumulating bloat on the disk, and a host of non-compliant images lingering on the nodes.\n\nThe current garbage collection process deletes images based on a percentage of load, but this process does not consider the vulnerability state of the images. **Eraser** aims to provide a simple way to determine the state of an image, and delete it if it meets the specified criteria.\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.2.x/manual-removal.md",
    "content": "---\ntitle: Manual Removal\n---\n\nCreate an `ImageList` and specify the images you would like to remove. In this case, the image `docker.io/library/alpine:3.7.3` will be removed.\n\n```shell\ncat <<EOF | kubectl apply -f -\napiVersion: eraser.sh/v1alpha1\nkind: ImageList\nmetadata:\n  name: imagelist\nspec:\n  images:\n    - docker.io/library/alpine:3.7.3   # use \"*\" for all non-running images\nEOF\n```\n\n> `ImageList` is a cluster-scoped resource and must be called imagelist. `\"*\"` can be specified to remove all non-running images instead of individual images.\n\nCreating an `ImageList` should trigger an `ImageJob` that will deploy Eraser pods on every node to perform the removal given the list of images.\n\n```shell\n$ kubectl get pods -n eraser-system\neraser-system        eraser-controller-manager-55d54c4fb6-dcglq   1/1     Running   0          9m8s\neraser-system        eraser-kind-control-plane                    1/1     Running   0          11s\neraser-system        eraser-kind-worker                           1/1     Running   0          11s\neraser-system        eraser-kind-worker2                          1/1     Running   0          11s\n```\n\nPods will run to completion and the images will be removed.\n\n```shell\n$ kubectl get pods -n eraser-system\neraser-system        eraser-controller-manager-6d6d5594d4-phl2q   1/1     Running     0          4m16s\neraser-system        eraser-kind-control-plane                    0/1     Completed   0          22s\neraser-system        eraser-kind-worker                           0/1     Completed   0          22s\neraser-system        eraser-kind-worker2                          0/1     Completed   0          22s\n```\n\nThe `ImageList` custom resource status field will contain the status of the last job. The success and failure counts indicate the number of nodes the Eraser agent was run on.\n\n```shell\n$ kubectl describe ImageList imagelist\n...\nStatus:\n  Failed:     0\n  Success:    3\n  Timestamp:  2022-02-25T23:41:55Z\n...\n```\n\nVerify the unused images are removed.\n\n```shell\n$ docker exec kind-worker ctr -n k8s.io images list | grep alpine\n```\n\nIf the image has been successfully removed, there will be no output.\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.2.x/metrics.md",
    "content": "---\ntitle: Metrics\n---\n\nTo view Eraser metrics, you will need to deploy an Open Telemetry collector in the 'eraser-system' namespace, and an exporter. An example collector with a Prometheus exporter is [otelcollector.yaml](https://github.com/eraser-dev/eraser/blob/main/test/e2e/test-data/otelcollector.yaml), and the endpoint can be specified using the [configmap](https://eraser-dev.github.io/eraser/docs/customization#universal-options). In this example, we are logging the collected data to the otel-collector pod, and exporting metrics through Prometheus at 'http://localhost:8889/metrics', but a separate exporter can also be configured.\n\nBelow is the list of metrics provided by Eraser per run:\n\n#### Eraser\n```yaml\n- count\n\t- name: images_removed_run_total\n\t\t- description: Total images removed by eraser\n```\n\n #### Scanner\n ```yaml\n- count\n\t- name: vulnerable_images_run_total\n\t\t- description: Total vulnerable images detected\n ```\n\n #### ImageJob\n ```yaml\n - count\n\t- name: imagejob_run_total\n\t\t- description: Total ImageJobs scheduled\n\t- name: pods_completed_run_total\n\t\t- description: Total pods completed\n\t-  name: pods_failed_run_total\n\t\t- description: Total pods failed\n- summary\n\t- name: imagejob_duration_run_seconds\n\t\t- description: Total time for ImageJobs scheduled to complete\n```\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.2.x/quick-start.md",
    "content": "---\ntitle: Quick Start\n---\n\nThis tutorial demonstrates the functionality of Eraser and validates that non-running images are removed succesfully.\n\n## Deploy a DaemonSet\n\nAfter following the [install instructions](installation.md), we'll apply a demo `DaemonSet`. For illustrative purposes, a DaemonSet is applied and deleted so the non-running images remain on all nodes. The alpine image with the `3.7.3` tag will be used in this example. This is an image with a known critical vulnerability.\n\nFirst, apply the `DaemonSet`:\n\n```shell\ncat <<EOF | kubectl apply -f -\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n  name: alpine\nspec:\n  selector:\n    matchLabels:\n      app: alpine\n  template:\n    metadata:\n      labels:\n        app: alpine\n    spec:\n      containers:\n      - name: alpine\n        image: docker.io/library/alpine:3.7.3\nEOF\n```\n\nNext, verify that the Pods are running or completed. After the `alpine` Pods complete, you may see a `CrashLoopBackoff` status. This is expected behavior from the `alpine` image and can be ignored for the tutorial.\n\n```shell\n$ kubectl get pods\nNAME          READY   STATUS      RESTARTS     AGE\nalpine-2gh9c   1/1     Running     1 (3s ago)   6s\nalpine-hljp9   0/1     Completed   1 (3s ago)   6s\n```\n\nDelete the DaemonSet:\n\n```shell\n$ kubectl delete daemonset alpine\n```\n\nVerify that the Pods have been deleted:\n\n```shell\n$ kubectl get pods\nNo resources found in default namespace.\n```\n\nTo verify that the `alpine` images are still on the nodes, exec into one of the worker nodes and list the images. If you are not using a kind cluster or Docker for your container nodes, you will need to adjust the exec command accordingly.\n\nList the nodes:\n\n```shell\n$ kubectl get nodes\nNAME                 STATUS   ROLES           AGE   VERSION\nkind-control-plane   Ready    control-plane   45m   v1.24.0\nkind-worker          Ready    <none>          45m   v1.24.0\nkind-worker2         Ready    <none>          44m   v1.24.0\n```\n\nList the images then filter for `alpine`:\n\n```shell\n$ docker exec kind-worker ctr -n k8s.io images list | grep alpine\ndocker.io/library/alpine:3.7.3                                                                             application/vnd.docker.distribution.manifest.list.v2+json sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10 2.0 MiB   linux/386,linux/amd64,linux/arm/v6,linux/arm64/v8,linux/ppc64le,linux/s390x  io.cri-containerd.image=managed\ndocker.io/library/alpine@sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10           application/vnd.docker.distribution.manifest.list.v2+json sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10 2.0 MiB   linux/386,linux/amd64,linux/arm/v6,linux/arm64/v8,linux/ppc64le,linux/s390x  io.cri-containerd.image=managed\n\n```\n\n## Automatically Cleaning Images\n\nAfter deploying Eraser, it will automatically clean images in a regular interval. This interval can be set using the `manager.scheduling.repeatInterval` setting in the [configmap](https://eraser-dev.github.io/eraser/docs/customization#detailed-options). The default interval is 24 hours (`24h`). 
Valid time units are \"ns\", \"us\" (or \"µs\"), \"ms\", \"s\", \"m\", \"h\".\n\nEraser will schedule eraser pods on each node in the cluster, and each pod will contain three containers (collector, scanner, and remover) that run to completion.\n\n```shell\n$ kubectl get pods -n eraser-system\nNAMESPACE            NAME                                         READY   STATUS      RESTARTS         AGE\neraser-system        eraser-kind-control-plane-sb789           0/3     Completed   0                26m\neraser-system        eraser-kind-worker-j84hm                  0/3     Completed   0                26m\neraser-system        eraser-kind-worker2-4lbdr                 0/3     Completed   0                26m\neraser-system        eraser-controller-manager-86cdb4cbf9-x8d7q   1/1     Running     0                26m\n```\n\nThe collector container sends the list of all images to the scanner container, which scans them and reports non-compliant images to the remover container; the remover then deletes any of those images that are not running. Once all pods have completed, they are automatically cleaned up.\n\n> If you want to remove all the images periodically, you can skip the scanner container by setting the `components.scanner.enabled` value to `false` using the [configmap](https://eraser-dev.github.io/eraser/docs/customization#detailed-options). In this case, each pod will hold two containers: collector and remover.\n\n```shell\n$ kubectl get pods -n eraser-system\nNAMESPACE            NAME                                         READY   STATUS      RESTARTS         AGE\neraser-system        eraser-kind-control-plane-ksk2b           0/2     Completed   0                50s\neraser-system        eraser-kind-worker-cpgqc                  0/2     Completed   0                50s\neraser-system        eraser-kind-worker2-k25df                 0/2     Completed   0                50s\neraser-system        eraser-controller-manager-86cdb4cbf9-x8d7q   1/1     Running     0                55s\n```\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.2.x/releasing.md",
    "content": "---\ntitle: Releasing\n---\n\n## Create Release Pull Request\n\n1. Go to `create_release_pull_request` workflow under actions.\n2. Select run workflow, and use the workflow from your branch. \n3. Input release version with the semantic version identifying the release.\n4. Click run workflow and review the PR created by github-actions.\n\n# Releasing\n\n5. Once the PR is merged to `main`, tag that commit with release version and push tags to remote repository.\n\n   ```\n   git checkout <BRANCH NAME>\n   git pull origin <BRANCH NAME>\n   git tag -a <NEW VERSION> -m '<NEW VERSION>'\n   git push origin <NEW VERSION>\n   ```\n6. Pushing the release tag will trigger GitHub Actions to trigger `release` job.\n   This will build the `ghcr.io/eraser-dev/remover`, `ghcr.io/eraser-dev/eraser-manager`, `ghcr.io/eraser-dev/collector`, and `ghcr.io/eraser-dev/eraser-trivy-scanner` images automatically, then publish the new release tag.\n\n## Publishing\n\n1. GitHub Action will create a new release, review and edit it at https://github.com/eraser-dev/eraser/releases"
  },
  {
    "path": "docs/versioned_docs/version-v1.2.x/setup.md",
    "content": "---\ntitle: Setup\n---\n\n# Development Setup\n\nThis document describes the steps to get started with development.\nYou can either utilize [Codespaces](https://docs.github.com/en/codespaces/overview) or setup a local environment.\n\n## Local Setup\n\n### Prerequisites:\n\n- [go](https://go.dev/) with version 1.17 or later.\n- [docker](https://docs.docker.com/get-docker/)\n- [kind](https://kind.sigs.k8s.io/)\n- `make`\n\n### Get things running\n\n- Get dependencies with `go get`\n\n- This project uses `make`. You can utilize `make help` to see available targets. For local deployment make targets help to build, test and deploy.\n\n### Making changes\n\nPlease refer to [Development Reference](#development-reference) for more details on the specific commands.\n\nTo test your changes on a cluster:\n\n```bash\n# generate necessary api files (optional - only needed if changes to api folder).\nmake generate\n\n# build applicable images\nmake docker-build-manager MANAGER_IMG=eraser-manager:dev\nmake docker-build-remover REMOVER_IMG=remover:dev\nmake docker-build-collector COLLECTOR_IMG=collector:dev\nmake docker-build-trivy-scanner TRIVY_SCANNER_IMG=eraser-trivy-scanner:dev\n\n# make sure updated image is present on cluster (e.g., see kind example below)\nkind load docker-image \\\n        eraser-manager:dev \\\n        eraser-trivy-scanner:dev \\\n        remover:dev \\\n        collector:dev\n\nmake manifests\nmake deploy\n\n# to remove the deployment\nmake undeploy\n```\n\nTo test your changes to manager locally:\n\n```bash\nmake run\n```\n\nExample Output:\n\n```\nyou@local:~/eraser$ make run\ndocker build . \\\n        -t eraser-tooling \\\n        -f build/tooling/Dockerfile\n[+] Building 7.8s (8/8) FINISHED\n => => naming to docker.io/library/eraser-tooling                           0.0s\ndocker run -v /home/eraser/config:/config -w /config/manager \\\n        registry.k8s.io/kustomize/kustomize:v3.8.9 edit set image controller=eraser-manager:dev\ndocker run -v /home/eraser:/eraser eraser-tooling controller-gen \\\n        crd \\\n        rbac:roleName=manager-role \\\n        webhook \\\n        paths=\"./...\" \\\n        output:crd:artifacts:config=config/crd/bases\nrm -rf manifest_staging\nmkdir -p manifest_staging/deploy\ndocker run --rm -v /home/eraser:/eraser \\\n        registry.k8s.io/kustomize/kustomize:v3.8.9 build \\\n        /eraser/config/default -o /eraser/manifest_staging/deploy/eraser.yaml\ndocker run -v /home/eraser:/eraser eraser-tooling controller-gen object:headerFile=\"hack/boilerplate.go.txt\" paths=\"./...\"\ngo fmt ./...\ngo vet ./...\ngo run ./main.go\n{\"level\":\"info\",\"ts\":1652985685.1663408,\"logger\":\"controller-runtime.metrics\",\"msg\":\"Metrics server is starting to listen\",\"addr\":\":8080\"}\n...\n```\n\n## Development Reference\n\nEraser is using tooling from [kubebuilder](https://github.com/kubernetes-sigs/kubebuilder). For Eraser this tooling is containerized into the `eraser-tooling` image. The `make` targets can use this tooling and build the image when necessary.\n\nYou can override the default configuration using environment variables. 
Below you can find a reference of targets and configuration options.\n\n### Common Configuration\n\n| Environment Variable | Description                                                                                    |\n| -------------------- | ---------------------------------------------------------------------------------------------- |\n| VERSION              | Specifies the version (i.e., the image tag) of eraser to be used.                              |\n| MANAGER_IMG          | Defines the image url for the Eraser manager. Used for tagging, pulling and pushing the image  |\n| REMOVER_IMG          | Defines the image url for the Eraser remover. Used for tagging, pulling and pushing the image  |\n| COLLECTOR_IMG        | Defines the image url for the Collector. Used for tagging, pulling and pushing the image       |\n\n### Linting\n\n- `make lint`\n\nLints the go code.\n\n| Environment Variable | Description                                             |\n| -------------------- | ------------------------------------------------------- |\n| GOLANGCI_LINT        | Specifies the go linting binary to be used for linting. |\n\n### Development\n\n- `make generate`\n\nGenerates necessary files for the k8s api stored under `api/v1alpha1/zz_generated.deepcopy.go`. See the [kubebuilder docs](https://book.kubebuilder.io/cronjob-tutorial/other-api-files.html) for details.\n\n- `make manifests`\n\nGenerates the eraser deployment yaml files under `manifest_staging/deploy`.\n\nConfiguration Options:\n\n| Environment Variable | Description                                              |\n| -------------------- | -------------------------------------------------------- |\n| REMOVER_IMG          | Defines the image url for the Eraser remover.            |\n| MANAGER_IMG          | Defines the image url for the Eraser manager.            |\n| KUSTOMIZE_VERSION    | Defines the Kustomize version for generating manifests.  |\n\n- `make test`\n\nRuns the unit tests for the eraser project.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                 |\n| -------------------- | ----------------------------------------------------------- |\n| ENVTEST              | Specifies the envtest setup binary.                         |\n| ENVTEST_K8S_VERSION  | Specifies the Kubernetes version for envtest setup command. |\n\n- `make e2e-test`\n\nRuns e2e tests on a cluster.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                                                              |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------ |\n| REMOVER_IMG          | Eraser remover image to be used for e2e test.                                                                            |\n| MANAGER_IMG          | Eraser manager image to be used for e2e test.                                                                            |\n| KUBERNETES_VERSION   | Kubernetes version for e2e test.                                                                                         |\n| TEST_COUNT           | Sets the number of test repetitions. Please refer to [go docs](https://pkg.go.dev/cmd/go#hdr-Testing_flags) for details. |\n| TIMEOUT              | Sets the timeout for tests. Please refer to [go docs](https://pkg.go.dev/cmd/go#hdr-Testing_flags) for details.          
|\n| TESTFLAGS            | Sets additional test flags.                                                                                   |\n\n### Build\n\n- `make build`\n\nBuilds the eraser manager binaries.\n\n- `make run`\n\nRuns the eraser manager on your local machine.\n\n- `make docker-build-manager`\n\nBuilds the docker image for the eraser manager.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                                                                                            |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| CACHE_FROM           | Sets the target of the buildx --cache-from flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-from). |\n| CACHE_TO             | Sets the target of the buildx --cache-to flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-to).     |\n| PLATFORM             | Sets the target platform for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#platform).               |\n| OUTPUT_TYPE          | Sets the output for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#output).                          |\n| MANAGER_IMG          | Specifies the target repository, image name and tag for building image.                                                                                |\n\n- `make docker-push-manager`\n\nPushes the docker image for the eraser manager.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                             |\n| -------------------- | ----------------------------------------------------------------------- |\n| MANAGER_IMG          | Specifies the target repository, image name and tag for building image. |\n\n- `make docker-build-remover`\n\nBuilds the docker image for the eraser remover.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                                                                                            |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| CACHE_FROM           | Sets the target of the buildx --cache-from flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-from). |\n| CACHE_TO             | Sets the target of the buildx --cache-to flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-to).     |\n| PLATFORM             | Sets the target platform for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#platform).               |\n| OUTPUT_TYPE          | Sets the output for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#output).                          |\n| REMOVER_IMG          | Specifies the target repository, image name and tag for building image.                                                                                
|\n\n- `make docker-push-remover`\n\nPushes the docker image for the eraser remover.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                             |\n| -------------------- | ----------------------------------------------------------------------- |\n| REMOVER_IMG          | Specifies the target repository, image name and tag for building image. |\n\n- `make docker-build-collector`\n\nBuilds the docker image for the eraser collector.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                                                                                            |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| CACHE_FROM           | Sets the target of the buildx --cache-from flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-from). |\n| CACHE_TO             | Sets the target of the buildx --cache-to flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-to).     |\n| PLATFORM             | Sets the target platform for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#platform).               |\n| OUTPUT_TYPE          | Sets the output for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#output).                          |\n| COLLECTOR_IMG        | Specifies the target repository, image name and tag for building image.                                                                                |\n\n- `make docker-push-collector`\n\nPushes the docker image for the eraser collector.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                             |\n| -------------------- | ----------------------------------------------------------------------- |\n| COLLECTOR_IMG        | Specifies the target repository, image name and tag for building image. |\n\n### Deployment\n\n- `make install`\n\nInstalls CRDs into the K8s cluster specified in ~/.kube/config.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                      |\n| -------------------- | ---------------------------------------------------------------- |\n| KUSTOMIZE_VERSION    | Kustomize version used to generate k8s resources for deployment. |\n\n- `make uninstall`\n\nUninstalls CRDs from the K8s cluster specified in ~/.kube/config.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                      |\n| -------------------- | ---------------------------------------------------------------- |\n| KUSTOMIZE_VERSION    | Kustomize version used to generate k8s resources for deployment. |\n\n- `make deploy`\n\nDeploys eraser to the cluster specified in ~/.kube/config.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                            |\n| -------------------- | ---------------------------------------------------------------------- |\n| KUSTOMIZE_VERSION    | Kustomize version used to generate k8s resources for deployment.      
|\n| MANAGER_IMG          | Specifies the eraser manager image version to be used for deployment |\n\n- `make undeploy`\n\nUndeploy controller from the K8s cluster specified in ~/.kube/config.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                               |\n| -------------------- | ------------------------------------------------------------------------- |\n| KUSTOMIZE_VERSION    | Kustomize version used to generate k8s resources that need to be removed. |\n\n### Release\n\n- `make release-manifest`\n\nGenerates k8s manifests files for a release.\n\nConfiguration Options:\n\n| Environment Variable | Description                          |\n| -------------------- | ------------------------------------ |\n| NEWVERSION           | Sets the new version in the Makefile |\n\n- `make promote-staging-manifest`\n\nPromotes the k8s deployment yaml files to release.\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.2.x/trivy.md",
    "content": "---\ntitle: Trivy\n---\n\n## Trivy Provider Options\nThe trivy provider is used in Eraser for image scanning and detecting vulnerabilities. See [Customization](https://eraser-dev.github.io/eraser/docs/customization#scanner-options) for more details on configuring the scanner.\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.3.x/architecture.md",
    "content": "---\ntitle: Architecture\n---\nAt a high level, Eraser has two main modes of operation: manual and automated.\n\nManual image removal involves supplying a list of images to remove; Eraser then\ndeploys pods to clean up the images you supplied.\n\nAutomated image removal runs on a timer. By default, the automated process\nremoves images based on the results of a vulnerability scan. The default\nvulnerability scanner is Trivy, but others can be provided in its place. Or,\nthe scanner can be disabled altogether, in which case Eraser acts as a garbage\ncollector -- it will remove all non-running images in your cluster.\n\n## Manual image cleanup\n\n<img title=\"manual cleanup\" src=\"/eraser/docs/img/eraser_manual.png\" />\n\n## Automated analysis, scanning, and cleanup\n\n<img title=\"automated cleanup\" src=\"/eraser/docs/img/eraser_timer.png\" />\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.3.x/code-of-conduct.md",
    "content": "---\ntitle: Code of Conduct\n---\n\nThis project has adopted the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md).\n\nResources:\n\n- [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)\n- [Code of Conduct Reporting](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.3.x/contributing.md",
    "content": "---\ntitle: Contributing\n---\n\nThere are several ways to get involved with Eraser\n\n- Join the [mailing list](https://groups.google.com/u/1/g/eraser-dev) to get notifications for releases, security announcements, etc.\n- Participate in the [biweekly community meetings](https://docs.google.com/document/d/1Sj5u47K3WUGYNPmQHGFpb52auqZb1FxSlWAQnPADhWI/edit) to disucss development, issues, use cases, etc.\n- Join the `#eraser` channel on the [Kubernetes Slack](https://slack.k8s.io/)\n- View the [development setup instructions](https://eraser-dev.github.io/eraser/docs/development)\n\nThis project welcomes contributions and suggestions.\n\nThis project has adopted the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)."
  },
  {
    "path": "docs/versioned_docs/version-v1.3.x/custom-scanner.md",
    "content": "---\ntitle: Custom Scanner\n---\n\n## Creating a Custom Scanner\nTo create a custom scanner for non-compliant images, use the following [template](https://github.com/eraser-dev/eraser-scanner-template/).\n\nIn order to customize your scanner, start by creating a `NewImageProvider()`. The ImageProvider interface can be found can be found [here](../../../pkg/scanners/template/scanner_template.go). \n\nThe ImageProvider will allow you to retrieve the list of all non-running and non-excluded images from the collector container through the `ReceiveImages()` function. Process these images with your customized scanner and threshold, and use `SendImages()` to pass the images found non-compliant to the eraser container for removal. Finally, complete the scanning process by calling `Finish()`.\n\nWhen complete, provide your custom scanner image to Eraser in deployment.\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.3.x/customization.md",
    "content": "---\ntitle: Customization\n---\n\n## Overview\n\nEraser uses a configmap to configure its behavior. The configmap is part of the\ndeployment and it is not necessary to deploy it manually. Once deployed, the configmap\ncan be edited at any time:\n\n```bash\nkubectl edit configmap --namespace eraser-system eraser-manager-config\n```\n\nIf an eraser job is already running, the changes will not take effect until the job completes.\nThe configuration is in yaml.\n\n## Key Concepts\n\n### Basic architecture\n\nThe _manager_ runs as a pod in your cluster and manages _ImageJobs_. Think of\nan _ImageJob_ as a unit of work, performed on every node in your cluster. Each\nnode runs a sub-job. The goal of the _ImageJob_ is to assess the images on your\ncluster's nodes, and to remove the images you don't want. There are two stages:\n1. Assessment\n1. Removal.\n\n\n### Scheduling\n\nAn _ImageJob_ can either be created on-demand (see [Manual Removal](https://eraser-dev.github.io/eraser/docs/manual-removal)),\nor they can be spawned on a timer like a cron job. On-demand jobs skip the\nassessment stage and get right down to the business of removing the images you\nspecified. The behavior of an on-demand job is quite different from that of\ntimed jobs.\n\n### Fault Tolerance\n\nBecause an _ImageJob_ runs on every node in your cluster, and the conditions on\neach node may vary widely, some of the sub-jobs may fail. If you cannot\ntolerate any failure, set the `manager.imageJob.successRatio` property to\n`1.0`. If 75% success sounds good to you, set it to `0.75`. In that case, if\nfewer than 75% of the pods spawned by the _ImageJob_ report success, the job as\na whole will be marked as a failure.\n\nThis is mainly to help diagnose error conditions. As such, you can set\n`manager.imageJob.cleanup.delayOnFailure` to a long value so that logs can be\ncaptured before the spawned pods are cleaned up.\n\n### Excluding Nodes\n\nFor various reasons, you may want to prevent Eraser from scheduling pods on\ncertain nodes. To do so, the nodes can be given a special label. By default,\nthis label is `eraser.sh/cleanup.filter`, but you can configure the behavior with\nthe options under `manager.nodeFilter`. The [table](#detailed-options) provides more detail.\n\n### Configuring Components\n\nAn _ImageJob_ is made up of various sub-jobs, with one sub-job for each node.\nThese sub-jobs can be broken down further into three stages.\n1. Collection (What is on the node?)\n1. Scanning (What images conform to the policy I've provided?)\n1. Removal (Remove images based on the results of the above)\n\nOf the above stages, only Removal is mandatory. The others can be disabled.\nFurthermore, manually triggered _ImageJobs_ will skip right to removal, even if\nEraser is configured to collect and scan. Collection and Scanning will only\ntake place when:\n1. The collector and/or scanner `components` are enabled, AND\n1. The job was *not* triggered manually by creating an _ImageList_.\n\nDisabling scanner will remove all non-running images by default.\n\n### Swapping out components\n\nThe collector, scanner, and remover components can all be swapped out. This\nenables you to build and host the images yourself. In addition, the scanner's\nbehavior can be completely tailored to your needs by swapping out the default\nimage with one of your own. 
To specify the images, use the\n`components.<component>.image.repo` and `components.<component>.image.tag`,\nwhere `<component>` is one of `collector`, `scanner`, or `remover`.\n\n## Universal Options\n\nThe following portions of the configmap apply no matter how you spawn your\n_ImageJob_. The values provided below are the defaults. For more detail on\nthese options, see the [table](#detailed-options).\n\n```yaml\nmanager:\n  runtime:\n    name: containerd\n    address: unix:///run/containerd/containerd.sock\n  otlpEndpoint: \"\" # empty string disables OpenTelemetry\n  logLevel: info\n  profile:\n    enabled: false\n    port: 6060\n  imageJob:\n    successRatio: 1.0\n    cleanup:\n      delayOnSuccess: 0s\n      delayOnFailure: 24h\n  pullSecrets: [] # image pull secrets for collector/scanner/remover\n  priorityClassName: \"\" # priority class name for collector/scanner/remover\n  nodeFilter:\n    type: exclude # must be either exclude|include\n    selectors:\n      - eraser.sh/cleanup.filter\n      - kubernetes.io/os=windows\ncomponents:\n  remover:\n    image:\n      repo: ghcr.io/eraser-dev/remover\n      tag: v1.0.0\n    request:\n      mem: 25Mi\n      cpu: 0\n    limit:\n      mem: 30Mi\n      cpu: 1000m\n```\n\n## Component Options\n\n```yaml\ncomponents:\n  collector:\n    enabled: true\n    image:\n      repo: ghcr.io/eraser-dev/collector\n      tag: v1.0.0\n    request:\n      mem: 25Mi\n      cpu: 7m\n    limit:\n      mem: 500Mi\n      cpu: 0\n  scanner:\n    enabled: true\n    image:\n      repo: ghcr.io/eraser-dev/eraser-trivy-scanner\n      tag: v1.0.0\n    request:\n      mem: 500Mi\n      cpu: 1000m\n    limit:\n      mem: 2Gi\n      cpu: 0\n    config: |\n      # this is the schema for the provided 'trivy-scanner'. custom scanners\n      # will define their own configuration. see below\n  remover:\n    image:\n      repo: ghcr.io/eraser-dev/remover\n      tag: v1.0.0\n    request:\n      mem: 25Mi\n      cpu: 0\n    limit:\n      mem: 30Mi\n      cpu: 1000m\n```\n\n## Scanner Options\n\nThese options can be provided to `components.scanner.config`. They will be\npassed through as a string to the scanner container and parsed there. If you\nwant to configure your own scanner, you must provide some way to parse this.\n\nBelow are the values recognized by the provided `eraser-trivy-scanner` image.\nValues provided below are the defaults.\n\n```yaml\ncacheDir: /var/lib/trivy # The file path inside the container to store the cache\ndbRepo: ghcr.io/aquasecurity/trivy-db # The container registry from which to fetch the trivy database\ndeleteFailedImages: true # if true, remove images for which scanning fails, regardless of why it failed\ndeleteEOLImages: true # if true, remove images that have reached their end-of-life date\nvulnerabilities:\n  ignoreUnfixed: true # consider the image compliant if there are no known fixes for the vulnerabilities found.\n  types: # a list of vulnerability types. for more info, see trivy's documentation.\n    - os\n    - library\n  securityChecks: # see trivy's documentation for more information\n    - vuln\n  severities: # in this case, only flag images with CRITICAL vulnerability for removal\n    - CRITICAL\n  ignoredStatuses: # a list of trivy statuses to ignore. See https://aquasecurity.github.io/trivy/v0.44/docs/configuration/filtering/#by-status.
timeout:\n  total: 23h # if scanning isn't completed before this much time elapses, abort the whole scan\n  perImage: 1h # if scanning a single image exceeds this time, scanning will be aborted\n```\n\n## Detailed Options\n\n| Option | Description | Default |\n| --- | --- | --- |\n| manager.runtime.name | The runtime to use for the manager's containers. Must be one of containerd, crio, or dockershim. It is assumed that your nodes are all using the same runtime, and there is currently no way to configure multiple runtimes. | containerd |\n| manager.runtime.address | The runtime socket address to use for the containers. Can provide a custom address for containerd and dockershim runtimes, but not for crio due to Trivy restrictions. | unix:///run/containerd/containerd.sock |\n| manager.otlpEndpoint | The endpoint to send OpenTelemetry data to. If empty, data will not be sent. | \"\" |\n| manager.logLevel | The log level for the manager's containers. Must be one of debug, info, warn, error, dpanic, panic, or fatal. | info |\n| manager.scheduling.repeatInterval | Use only when collector and/or scanner are enabled. This is like a cron job, and will spawn an _ImageJob_ at the interval provided. | 24h |\n| manager.scheduling.beginImmediately | If set to true, the first _ImageJob_ will run immediately. If false, the job will not be spawned until after the interval (above) has elapsed. | true |\n| manager.profile.enabled | Whether to enable profiling for the manager's containers. This is for debugging with `go tool pprof`. | false |\n| manager.profile.port | The port on which to expose the profiling endpoint. | 6060 |\n| manager.imageJob.successRatio | The ratio of successful image jobs required before a cleanup is performed. | 1.0 |\n| manager.imageJob.cleanup.delayOnSuccess | The amount of time to wait after a successful image job before performing cleanup. | 0s |\n| manager.imageJob.cleanup.delayOnFailure | The amount of time to wait after a failed image job before performing cleanup. | 24h |\n| manager.pullSecrets | The image pull secrets to use for collector, scanner, and remover containers. | [] |\n| manager.priorityClassName | The priority class to use for collector, scanner, and remover containers. | \"\" |\n| manager.nodeFilter.type | The type of node filter to use. Must be either \"exclude\" or \"include\". | exclude |\n| manager.nodeFilter.selectors | A list of selectors used to filter nodes. | [] |\n| components.collector.enabled | Whether to enable the collector component. | true |\n| components.collector.image.repo | The repository containing the collector image. | ghcr.io/eraser-dev/collector |\n| components.collector.image.tag | The tag of the collector image. | v1.0.0 |\n| components.collector.request.mem | The amount of memory to request for the collector container. | 25Mi |\n| components.collector.request.cpu | The amount of CPU to request for the collector container. | 7m |\n| components.collector.limit.mem | The maximum amount of memory the collector container is allowed to use. | 500Mi |\n| components.collector.limit.cpu | The maximum amount of CPU the collector container is allowed to use. | 0 |\n| components.scanner.enabled | Whether to enable the scanner component. | true |\n| components.scanner.image.repo | The repository containing the scanner image. | ghcr.io/eraser-dev/eraser-trivy-scanner |
| components.scanner.image.tag | The tag of the scanner image. | v1.0.0 |\n| components.scanner.request.mem | The amount of memory to request for the scanner container. | 500Mi |\n| components.scanner.request.cpu | The amount of CPU to request for the scanner container. | 1000m |\n| components.scanner.limit.mem | The maximum amount of memory the scanner container is allowed to use. | 2Gi |\n| components.scanner.limit.cpu | The maximum amount of CPU the scanner container is allowed to use. | 0 |\n| components.scanner.config | The configuration to pass to the scanner container, as a YAML string. | See YAML below |\n| components.remover.image.repo | The repository containing the remover image. | ghcr.io/eraser-dev/remover |\n| components.remover.image.tag | The tag of the remover image. | v1.0.0 |\n| components.remover.request.mem | The amount of memory to request for the remover container. | 25Mi |\n| components.remover.request.cpu | The amount of CPU to request for the remover container. | 0 |\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.3.x/exclusion.md",
    "content": "---\ntitle: Exclusion\n---\n\n## Excluding registries, repositories, and images\nEraser can exclude registries (example, `docker.io/library/*`) and also specific images with a tag (example, `docker.io/library/ubuntu:18.04`) or digest (example, `sha256:80f31da1ac7b312ba29d65080fd...`) from its removal process.\n\nTo exclude any images or registries from the removal, create configmap(s) with the label `eraser.sh/exclude.list=true` in the eraser-system namespace with a JSON file holding the excluded images.\n\n```bash\n$ cat > sample.json <<\"EOF\"\n{\n  \"excluded\": [\n    \"docker.io/library/*\",\n    \"ghcr.io/eraser-dev/test:latest\"\n  ]\n}\nEOF\n\n$ kubectl create configmap excluded --from-file=sample.json --namespace=eraser-system\n$ kubectl label configmap excluded eraser.sh/exclude.list=true -n eraser-system\n```\n\n## Exempting Nodes from the Eraser Pipeline\nExempting nodes from cleanup was added in v1.0.0. When deploying Eraser, you can specify whether there is a list of nodes you would like to `include` or `exclude` from the cleanup process using the configmap. For more information, see the section on [customization](https://eraser-dev.github.io/eraser/docs/customization).\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.3.x/faq.md",
    "content": "---\ntitle: FAQ\n---\n## Why am I still seeing vulnerable images?\nEraser currently targets **non-running** images, so any vulnerable images that are currently running will not be removed. In addition, the default vulnerability scanning with Trivy removes images with `CRITICAL` vulnerabilities. Any images with lower vulnerabilities will not be removed. This can be configured using the [configmap](https://eraser-dev.github.io/eraser/docs/customization#scanner-options).\n\n## How is Eraser different from Kubernetes garbage collection?\nThe native garbage collection in Kubernetes works a bit differently than Eraser. By default, garbage collection begins when disk usage reaches 85%, and stops when it gets down to 80%. More details about Kubernetes garbage collection can be found in the [Kubernetes documentation](https://kubernetes.io/docs/concepts/architecture/garbage-collection/), and configuration options can be found in the [Kubelet documentation](https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/). \n\nThere are a couple core benefits to using Eraser for image cleanup:\n* Eraser can be configured to use image vulnerability data when making determinations on image removal\n* By interfacing directly with the container runtime, Eraser can clean up images that are not managed by Kubelet and Kubernetes\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.3.x/installation.md",
    "content": "---\ntitle: Installation\n---\n\n## Manifest\n\nTo install Eraser with the manifest file, run the following command:\n\n```bash\nkubectl apply -f https://raw.githubusercontent.com/eraser-dev/eraser/v1.3.0/deploy/eraser.yaml\n```\n\n## Helm\n\nIf you'd like to install and manage Eraser with Helm, follow the install instructions [here](https://github.com/eraser-dev/eraser/blob/main/charts/eraser/README.md)\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.3.x/introduction.md",
    "content": "---\ntitle: Introduction\nslug: /\n---\n\n# Introduction\n\nWhen deploying to Kubernetes, it's common for pipelines to build and push images to a cluster, but it's much less common for these images to be cleaned up. This can lead to accumulating bloat on the disk, and a host of non-compliant images lingering on the nodes.\n\nThe current garbage collection process deletes images based on a percentage of load, but this process does not consider the vulnerability state of the images. **Eraser** aims to provide a simple way to determine the state of an image, and delete it if it meets the specified criteria.\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.3.x/manual-removal.md",
    "content": "---\ntitle: Manual Removal\n---\n\nCreate an `ImageList` and specify the images you would like to remove. In this case, the image `docker.io/library/alpine:3.7.3` will be removed.\n\n```shell\ncat <<EOF | kubectl apply -f -\napiVersion: eraser.sh/v1alpha1\nkind: ImageList\nmetadata:\n  name: imagelist\nspec:\n  images:\n    - docker.io/library/alpine:3.7.3   # use \"*\" for all non-running images\nEOF\n```\n\n> `ImageList` is a cluster-scoped resource and must be called imagelist. `\"*\"` can be specified to remove all non-running images instead of individual images.\n\nCreating an `ImageList` should trigger an `ImageJob` that will deploy Eraser pods on every node to perform the removal given the list of images.\n\n```shell\n$ kubectl get pods -n eraser-system\neraser-system        eraser-controller-manager-55d54c4fb6-dcglq   1/1     Running   0          9m8s\neraser-system        eraser-kind-control-plane                    1/1     Running   0          11s\neraser-system        eraser-kind-worker                           1/1     Running   0          11s\neraser-system        eraser-kind-worker2                          1/1     Running   0          11s\n```\n\nPods will run to completion and the images will be removed.\n\n```shell\n$ kubectl get pods -n eraser-system\neraser-system        eraser-controller-manager-6d6d5594d4-phl2q   1/1     Running     0          4m16s\neraser-system        eraser-kind-control-plane                    0/1     Completed   0          22s\neraser-system        eraser-kind-worker                           0/1     Completed   0          22s\neraser-system        eraser-kind-worker2                          0/1     Completed   0          22s\n```\n\nThe `ImageList` custom resource status field will contain the status of the last job. The success and failure counts indicate the number of nodes the Eraser agent was run on.\n\n```shell\n$ kubectl describe ImageList imagelist\n...\nStatus:\n  Failed:     0\n  Success:    3\n  Timestamp:  2022-02-25T23:41:55Z\n...\n```\n\nVerify the unused images are removed.\n\n```shell\n$ docker exec kind-worker ctr -n k8s.io images list | grep alpine\n```\n\nIf the image has been successfully removed, there will be no output.\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.3.x/metrics.md",
    "content": "---\ntitle: Metrics\n---\n\nTo view Eraser metrics, you will need to deploy an Open Telemetry collector in the 'eraser-system' namespace, and an exporter. An example collector with a Prometheus exporter is [otelcollector.yaml](https://github.com/eraser-dev/eraser/blob/main/test/e2e/test-data/otelcollector.yaml), and the endpoint can be specified using the [configmap](https://eraser-dev.github.io/eraser/docs/customization#universal-options). In this example, we are logging the collected data to the otel-collector pod, and exporting metrics through Prometheus at 'http://localhost:8889/metrics', but a separate exporter can also be configured.\n\nBelow is the list of metrics provided by Eraser per run:\n\n#### Eraser\n```yaml\n- count\n\t- name: images_removed_run_total\n\t\t- description: Total images removed by eraser\n```\n\n #### Scanner\n ```yaml\n- count\n\t- name: vulnerable_images_run_total\n\t\t- description: Total vulnerable images detected\n ```\n\n #### ImageJob\n ```yaml\n - count\n\t- name: imagejob_run_total\n\t\t- description: Total ImageJobs scheduled\n\t- name: pods_completed_run_total\n\t\t- description: Total pods completed\n\t-  name: pods_failed_run_total\n\t\t- description: Total pods failed\n- summary\n\t- name: imagejob_duration_run_seconds\n\t\t- description: Total time for ImageJobs scheduled to complete\n```\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.3.x/quick-start.md",
    "content": "---\ntitle: Quick Start\n---\n\nThis tutorial demonstrates the functionality of Eraser and validates that non-running images are removed succesfully.\n\n## Deploy a DaemonSet\n\nAfter following the [install instructions](installation.md), we'll apply a demo `DaemonSet`. For illustrative purposes, a DaemonSet is applied and deleted so the non-running images remain on all nodes. The alpine image with the `3.7.3` tag will be used in this example. This is an image with a known critical vulnerability.\n\nFirst, apply the `DaemonSet`:\n\n```shell\ncat <<EOF | kubectl apply -f -\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n  name: alpine\nspec:\n  selector:\n    matchLabels:\n      app: alpine\n  template:\n    metadata:\n      labels:\n        app: alpine\n    spec:\n      containers:\n      - name: alpine\n        image: docker.io/library/alpine:3.7.3\nEOF\n```\n\nNext, verify that the Pods are running or completed. After the `alpine` Pods complete, you may see a `CrashLoopBackoff` status. This is expected behavior from the `alpine` image and can be ignored for the tutorial.\n\n```shell\n$ kubectl get pods\nNAME          READY   STATUS      RESTARTS     AGE\nalpine-2gh9c   1/1     Running     1 (3s ago)   6s\nalpine-hljp9   0/1     Completed   1 (3s ago)   6s\n```\n\nDelete the DaemonSet:\n\n```shell\n$ kubectl delete daemonset alpine\n```\n\nVerify that the Pods have been deleted:\n\n```shell\n$ kubectl get pods\nNo resources found in default namespace.\n```\n\nTo verify that the `alpine` images are still on the nodes, exec into one of the worker nodes and list the images. If you are not using a kind cluster or Docker for your container nodes, you will need to adjust the exec command accordingly.\n\nList the nodes:\n\n```shell\n$ kubectl get nodes\nNAME                 STATUS   ROLES           AGE   VERSION\nkind-control-plane   Ready    control-plane   45m   v1.24.0\nkind-worker          Ready    <none>          45m   v1.24.0\nkind-worker2         Ready    <none>          44m   v1.24.0\n```\n\nList the images then filter for `alpine`:\n\n```shell\n$ docker exec kind-worker ctr -n k8s.io images list | grep alpine\ndocker.io/library/alpine:3.7.3                                                                             application/vnd.docker.distribution.manifest.list.v2+json sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10 2.0 MiB   linux/386,linux/amd64,linux/arm/v6,linux/arm64/v8,linux/ppc64le,linux/s390x  io.cri-containerd.image=managed\ndocker.io/library/alpine@sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10           application/vnd.docker.distribution.manifest.list.v2+json sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10 2.0 MiB   linux/386,linux/amd64,linux/arm/v6,linux/arm64/v8,linux/ppc64le,linux/s390x  io.cri-containerd.image=managed\n\n```\n\n## Automatically Cleaning Images\n\nAfter deploying Eraser, it will automatically clean images in a regular interval. This interval can be set using the `manager.scheduling.repeatInterval` setting in the [configmap](https://eraser-dev.github.io/eraser/docs/customization#detailed-options). The default interval is 24 hours (`24h`). 
Valid time units are \"ns\", \"us\" (or \"µs\"), \"ms\", \"s\", \"m\", \"h\".\n\nEraser will schedule eraser pods to each node in the cluster, and each pod will contain 3 containers: collector, scanner, and remover that will run to completion.\n\n```shell\n$ kubectl get pods -n eraser-system\nNAMESPACE            NAME                                         READY   STATUS      RESTARTS         AGE\neraser-system        eraser-kind-control-plane-sb789           0/3     Completed   0                26m\neraser-system        eraser-kind-worker-j84hm                  0/3     Completed   0                26m\neraser-system        eraser-kind-worker2-4lbdr                 0/3     Completed   0                26m\neraser-system        eraser-controller-manager-86cdb4cbf9-x8d7q   1/1     Running     0                26m\n```\n\nThe collector container sends the list of all images to the scanner container, which scans and reports non-compliant images to the remover container for removal of images that are non-running. Once all pods are completed, they will be automatically cleaned up. \n\n> If you want to remove all the images periodically, you can skip the scanner container by setting the `components.scanner.enabled` value to `false` using the [configmap](https://eraser-dev.github.io/eraser/docs/customization#detailed-options). In this case, each collector pod will hold 2 containers: collector and remover.\n\n```shell\n$ kubectl get pods -n eraser-system\nNAMESPACE            NAME                                         READY   STATUS      RESTARTS         AGE\neraser-system        eraser-kind-control-plane-ksk2b           0/2     Completed   0                50s\neraser-system        eraser-kind-worker-cpgqc                  0/2     Completed   0                50s\neraser-system        eraser-kind-worker2-k25df                 0/2     Completed   0                50s\neraser-system        eraser-controller-manager-86cdb4cbf9-x8d7q   1/1     Running     0                55s\n```\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.3.x/release-management.md",
    "content": "# Release Management\n\n## Overview\n\nThis document describes Eraser project release management, which includes release versioning, supported releases, and supported upgrades.\n\n## Legend\n\n- **X.Y.Z** refers to the version (git tag) of Eraser that is released. This is the version of the Eraser images and the Chart version.\n- **Breaking changes** refer to schema changes, flag changes, and behavior changes of Eraser that may require a clean installation during upgrade, and it may introduce changes that could break backward compatibility.\n- **Milestone** should be designed to include feature sets to accommodate 2 months release cycles including test gates. GitHub's milestones are used by maintainers to manage each release. PRs and Issues for each release should be created as part of a corresponding milestone.\n- **Patch releases** refer to applicable fixes, including security fixes, may be backported to support releases, depending on severity and feasibility.\n- **Test gates** should include soak tests and upgrade tests from the last minor version.\n\n## Release Versioning\n\nAll releases will be of the form _vX.Y.Z_ where X is the major version, Y is the minor version and Z is the patch version. This project strictly follows semantic versioning.\n\nThe rest of the doc will cover the release process for the following kinds of releases:\n\n**Major Releases**\n\nNo plan to move to 2.0.0 unless there is a major design change like an incompatible API change in the project\n\n**Minor Releases**\n\n- X.Y.0-alpha.W, W >= 0 (Branch : main)\n    - Released as needed before we cut a beta X.Y release\n    - Alpha release, cut from master branch\n- X.Y.0-beta.W, W >= 0 (Branch : main)\n    - Released as needed before we cut a stable X.Y release\n    - More stable than the alpha release to signal users to test things out\n    - Beta release, cut from master branch\n- X.Y.0-rc.W, W >= 0 (Branch : main)\n    - Released as needed before we cut a stable X.Y release\n    - soak for ~ 2 weeks before cutting a stable release\n    - Release candidate release, cut from master branch\n- X.Y.0 (Branch: main)\n    - Released as needed\n    - Stable release, cut from master when X.Y milestone is complete\n\n**Patch Releases**\n\n- Patch Releases X.Y.Z, Z > 0 (Branch: release-X.Y, only cut when a patch is needed)\n    - No breaking changes\n    - Applicable fixes, including security fixes, may be cherry-picked from master into the latest supported minor release-X.Y branches.\n    - Patch release, cut from a release-X.Y branch\n\n## Supported Releases\n\nApplicable fixes, including security fixes, may be cherry-picked into the release branch, depending on severity and feasibility. Patch releases are cut from that branch as needed.\n\nWe expect users to stay reasonably up-to-date with the versions of Eraser they use in production, but understand that it may take time to upgrade. We expect users to be running approximately the latest patch release of a given minor release and encourage users to upgrade as soon as possible.\n\nWe expect to \"support\" n (current) and n-1 major.minor releases. \"Support\" means we expect users to be running that version in production. 
For example, when v1.2.0 comes out, v1.0.x will no longer be supported for patches, and we encourage users to upgrade to a supported version as soon as possible.\n\n## Supported Kubernetes Versions\n\nEraser is assumed to be compatible with the [current Kubernetes Supported Versions](https://kubernetes.io/releases/patch-releases/#detailed-release-history-for-active-branches) per [Kubernetes Supported Versions policy](https://kubernetes.io/releases/version-skew-policy/).\n\nFor example, if Eraser _supported_ versions are v1.2 and v1.1, and Kubernetes _supported_ versions are v1.22, v1.23, v1.24, then all supported Eraser versions (v1.2, v1.1) are assumed to be compatible with all supported Kubernetes versions (v1.22, v1.23, v1.24). If Kubernetes v1.25 is released later, then Eraser v1.2 and v1.1 will be assumed to be compatible with v1.25 if those Eraser versions are still supported at that time.\n\nIf you choose to use Eraser with a version of Kubernetes that it does not support, you are using it at your own risk.\n\n## Acknowledgement\n\nThis document builds on the ideas and implementations of release processes from projects like Kubernetes and Helm."
  },
  {
    "path": "docs/versioned_docs/version-v1.3.x/releasing.md",
    "content": "---\ntitle: Releasing\n---\n\n## Create Release Pull Request\n\n1. Go to `create_release_pull_request` workflow under actions.\n2. Select run workflow, and use the workflow from your branch. \n3. Input release version with the semantic version identifying the release.\n4. Click run workflow and review the PR created by github-actions.\n\n# Releasing\n\n5. Once the PR is merged to `main`, tag that commit with release version and push tags to remote repository.\n\n   ```\n   git checkout <BRANCH NAME>\n   git pull origin <BRANCH NAME>\n   git tag -a <NEW VERSION> -m '<NEW VERSION>'\n   git push origin <NEW VERSION>\n   ```\n6. Pushing the release tag will trigger GitHub Actions to trigger `release` job.\n   This will build the `ghcr.io/eraser-dev/remover`, `ghcr.io/eraser-dev/eraser-manager`, `ghcr.io/eraser-dev/collector`, and `ghcr.io/eraser-dev/eraser-trivy-scanner` images automatically, then publish the new release tag.\n\n## Publishing\n\n1. GitHub Action will create a new release, review and edit it at https://github.com/eraser-dev/eraser/releases\n\n## Notifying\n\n1. Send an email to the [Eraser mailing list](https://groups.google.com/g/eraser-dev) announcing the release, with links to GitHub.\n2. Post a message on the [Eraser Slack channel](https://kubernetes.slack.com/archives/C03Q8KV8YQ4) with the same information."
  },
  {
    "path": "docs/versioned_docs/version-v1.3.x/setup.md",
    "content": "---\ntitle: Setup\n---\n\n# Development Setup\n\nThis document describes the steps to get started with development.\nYou can either utilize [Codespaces](https://docs.github.com/en/codespaces/overview) or setup a local environment.\n\n## Local Setup\n\n### Prerequisites:\n\n- [go](https://go.dev/) with version 1.17 or later.\n- [docker](https://docs.docker.com/get-docker/)\n- [kind](https://kind.sigs.k8s.io/)\n- `make`\n\n### Get things running\n\n- Get dependencies with `go get`\n\n- This project uses `make`. You can utilize `make help` to see available targets. For local deployment make targets help to build, test and deploy.\n\n### Making changes\n\nPlease refer to [Development Reference](#development-reference) for more details on the specific commands.\n\nTo test your changes on a cluster:\n\n```bash\n# generate necessary api files (optional - only needed if changes to api folder).\nmake generate\n\n# build applicable images\nmake docker-build-manager MANAGER_IMG=eraser-manager:dev\nmake docker-build-remover REMOVER_IMG=remover:dev\nmake docker-build-collector COLLECTOR_IMG=collector:dev\nmake docker-build-trivy-scanner TRIVY_SCANNER_IMG=eraser-trivy-scanner:dev\n\n# make sure updated image is present on cluster (e.g., see kind example below)\nkind load docker-image \\\n        eraser-manager:dev \\\n        eraser-trivy-scanner:dev \\\n        remover:dev \\\n        collector:dev\n\nmake manifests\nmake deploy\n\n# to remove the deployment\nmake undeploy\n```\n\nTo test your changes to manager locally:\n\n```bash\nmake run\n```\n\nExample Output:\n\n```\nyou@local:~/eraser$ make run\ndocker build . \\\n        -t eraser-tooling \\\n        -f build/tooling/Dockerfile\n[+] Building 7.8s (8/8) FINISHED\n => => naming to docker.io/library/eraser-tooling                           0.0s\ndocker run -v /home/eraser/config:/config -w /config/manager \\\n        registry.k8s.io/kustomize/kustomize:v3.8.9 edit set image controller=eraser-manager:dev\ndocker run -v /home/eraser:/eraser eraser-tooling controller-gen \\\n        crd \\\n        rbac:roleName=manager-role \\\n        webhook \\\n        paths=\"./...\" \\\n        output:crd:artifacts:config=config/crd/bases\nrm -rf manifest_staging\nmkdir -p manifest_staging/deploy\ndocker run --rm -v /home/eraser:/eraser \\\n        registry.k8s.io/kustomize/kustomize:v3.8.9 build \\\n        /eraser/config/default -o /eraser/manifest_staging/deploy/eraser.yaml\ndocker run -v /home/eraser:/eraser eraser-tooling controller-gen object:headerFile=\"hack/boilerplate.go.txt\" paths=\"./...\"\ngo fmt ./...\ngo vet ./...\ngo run ./main.go\n{\"level\":\"info\",\"ts\":1652985685.1663408,\"logger\":\"controller-runtime.metrics\",\"msg\":\"Metrics server is starting to listen\",\"addr\":\":8080\"}\n...\n```\n\n## Development Reference\n\nEraser is using tooling from [kubebuilder](https://github.com/kubernetes-sigs/kubebuilder). For Eraser this tooling is containerized into the `eraser-tooling` image. The `make` targets can use this tooling and build the image when necessary.\n\nYou can override the default configuration using environment variables. 
Below you can find a reference of targets and configuration options.\n\n### Common Configuration\n\n| Environment Variable | Description                                                                                      |\n| -------------------- | ------------------------------------------------------------------------------------------------ |\n| VERSION              | Specifies the version (i.e., the image tag) of eraser to be used.                                |\n| MANAGER_IMG          | Defines the image url for the Eraser manager. Used for tagging, pulling and pushing the image    |\n| REMOVER_IMG          | Defines the image url for the Eraser remover. Used for tagging, pulling and pushing the image    |\n| COLLECTOR_IMG        | Defines the image url for the Eraser collector. Used for tagging, pulling and pushing the image  |\n\n### Linting\n\n- `make lint`\n\nLints the go code.\n\n| Environment Variable | Description                                             |\n| -------------------- | ------------------------------------------------------- |\n| GOLANGCI_LINT        | Specifies the go linting binary to be used for linting. |\n\n### Development\n\n- `make generate`\n\nGenerates necessary files for the k8s api stored under `api/v1alpha1/zz_generated.deepcopy.go`. See the [kubebuilder docs](https://book.kubebuilder.io/cronjob-tutorial/other-api-files.html) for details.\n\n- `make manifests`\n\nGenerates the eraser deployment yaml files under `manifest_staging/deploy`.\n\nConfiguration Options:\n\n| Environment Variable | Description                                              |\n| -------------------- | -------------------------------------------------------- |\n| REMOVER_IMG          | Defines the image url for the Eraser remover.            |\n| MANAGER_IMG          | Defines the image url for the Eraser manager.            |\n| KUSTOMIZE_VERSION    | Defines the Kustomize version for generating manifests.  |\n\n- `make test`\n\nRuns the unit tests for the eraser project.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                 |\n| -------------------- | ----------------------------------------------------------- |\n| ENVTEST              | Specifies the envtest setup binary.                         |\n| ENVTEST_K8S_VERSION  | Specifies the Kubernetes version for envtest setup command. |\n\n- `make e2e-test`\n\nRuns e2e tests on a cluster.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                                                   |\n| -------------------- | -------------------------------------------------------------------------------------------------------------- |\n| REMOVER_IMG          | Eraser remover image to be used for e2e test.                                                                 |\n| MANAGER_IMG          | Eraser manager image to be used for e2e test.                                                                 |\n| KUBERNETES_VERSION   | Kubernetes version for e2e test.                                                                              |\n| TEST_COUNT           | Sets repetition for test. Please refer to [go docs](https://pkg.go.dev/cmd/go#hdr-Testing_flags) for details. |\n| TIMEOUT              | Sets timeout for test. Please refer to [go docs](https://pkg.go.dev/cmd/go#hdr-Testing_flags) for details.    |\n| TESTFLAGS            | Sets additional test flags                                                                                    |\n\n### Build\n\n- `make build`\n\nBuilds the eraser manager binaries.\n\n- `make run`\n\nRuns the eraser manager on your local machine.\n\n- `make docker-build-manager`\n\nBuilds the docker image for the eraser manager.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                                                                                            |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| CACHE_FROM           | Sets the target of the buildx --cache-from flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-from). |\n| CACHE_TO             | Sets the target of the buildx --cache-to flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-to).     |\n| PLATFORM             | Sets the target platform for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#platform).               |\n| OUTPUT_TYPE          | Sets the output for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#output).                          |\n| MANAGER_IMG          | Specifies the target repository, image name and tag for building the image.                                                                            |\n\n- `make docker-push-manager`\n\nPushes the docker image for the eraser manager.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                  |\n| -------------------- | ---------------------------------------------------------------------------- |\n| MANAGER_IMG          | Specifies the target repository, image name and tag for building the image.  |\n\n- `make docker-build-remover`\n\nBuilds the docker image for the eraser remover.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                                                                                            |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| CACHE_FROM           | Sets the target of the buildx --cache-from flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-from). |\n| CACHE_TO             | Sets the target of the buildx --cache-to flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-to).     |\n| PLATFORM             | Sets the target platform for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#platform).               |\n| OUTPUT_TYPE          | Sets the output for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#output).                          |\n| REMOVER_IMG          | Specifies the target repository, image name and tag for building the image.                                                                            |\n\n- `make docker-push-remover`\n\nPushes the docker image for the eraser remover.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                  |\n| -------------------- | ---------------------------------------------------------------------------- |\n| REMOVER_IMG          | Specifies the target repository, image name and tag for building the image.  |\n\n- `make docker-build-collector`\n\nBuilds the docker image for the eraser collector.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                                                                                            |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| CACHE_FROM           | Sets the target of the buildx --cache-from flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-from). |\n| CACHE_TO             | Sets the target of the buildx --cache-to flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-to).     |\n| PLATFORM             | Sets the target platform for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#platform).               |\n| OUTPUT_TYPE          | Sets the output for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#output).                          |\n| COLLECTOR_IMG        | Specifies the target repository, image name and tag for building the image.                                                                            |\n\n- `make docker-push-collector`\n\nPushes the docker image for the eraser collector.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                  |\n| -------------------- | ---------------------------------------------------------------------------- |\n| COLLECTOR_IMG        | Specifies the target repository, image name and tag for building the image.  |\n\n### Deployment\n\n- `make install`\n\nInstalls CRDs into the K8s cluster specified in ~/.kube/config.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                      |\n| -------------------- | ---------------------------------------------------------------- |\n| KUSTOMIZE_VERSION    | Kustomize version used to generate k8s resources for deployment. |\n\n- `make uninstall`\n\nUninstalls CRDs from the K8s cluster specified in ~/.kube/config.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                      |\n| -------------------- | ---------------------------------------------------------------- |\n| KUSTOMIZE_VERSION    | Kustomize version used to generate k8s resources for deployment. |\n\n- `make deploy`\n\nDeploys eraser to the cluster specified in ~/.kube/config.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                            |\n| -------------------- | ---------------------------------------------------------------------- |\n| KUSTOMIZE_VERSION    | Kustomize version used to generate k8s resources for deployment.       |\n| MANAGER_IMG          | Specifies the eraser manager image version to be used for deployment.  |\n\n- `make undeploy`\n\nUndeploys the controller from the K8s cluster specified in ~/.kube/config.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                               |\n| -------------------- | ------------------------------------------------------------------------- |\n| KUSTOMIZE_VERSION    | Kustomize version used to generate k8s resources that need to be removed. |\n\n### Release\n\n- `make release-manifest`\n\nGenerates k8s manifest files for a release.\n\nConfiguration Options:\n\n| Environment Variable | Description                          |\n| -------------------- | ------------------------------------ |\n| NEWVERSION           | Sets the new version in the Makefile |\n\n- `make promote-staging-manifest`\n\nPromotes the k8s deployment yaml files to release.\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.3.x/trivy.md",
    "content": "---\ntitle: Trivy\n---\n\n## Trivy Provider Options\nThe Trivy provider is used in Eraser for image scanning and detecting vulnerabilities. See [Customization](https://eraser-dev.github.io/eraser/docs/customization#scanner-options) for more details on configuring the scanner.\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.4.x/architecture.md",
    "content": "---\ntitle: Architecture\n---\nAt a high level, Eraser has two main modes of operation: manual and automated.\n\nManual image removal involves supplying a list of images to remove; Eraser then\ndeploys pods to clean up the images you supplied.\n\nAutomated image removal runs on a timer. By default, the automated process\nremoves images based on the results of a vulnerability scan. The default\nvulnerability scanner is Trivy, but others can be provided in its place. Or,\nthe scanner can be disabled altogether, in which case Eraser acts as a garbage\ncollector -- it will remove all non-running images in your cluster.\n\n## Manual image cleanup\n\n<img title=\"manual cleanup\" src=\"/eraser/docs/img/eraser_manual.png\" />\n\n## Automated analysis, scanning, and cleanup\n\n<img title=\"automated cleanup\" src=\"/eraser/docs/img/eraser_timer.png\" />\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.4.x/code-of-conduct.md",
    "content": "---\ntitle: Code of Conduct\n---\n\nThis project has adopted the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md).\n\nResources:\n\n- [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)\n- [Code of Conduct Reporting](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.4.x/contributing.md",
    "content": "---\ntitle: Contributing\n---\n\nThere are several ways to get involved with Eraser\n\n- Join the [mailing list](https://groups.google.com/u/1/g/eraser-dev) to get notifications for releases, security announcements, etc.\n- Participate in the [biweekly community meetings](https://docs.google.com/document/d/1Sj5u47K3WUGYNPmQHGFpb52auqZb1FxSlWAQnPADhWI/edit) to disucss development, issues, use cases, etc.\n- Join the `#eraser` channel on the [Kubernetes Slack](https://slack.k8s.io/)\n- View the [development setup instructions](https://eraser-dev.github.io/eraser/docs/development)\n\nThis project welcomes contributions and suggestions.\n\nThis project has adopted the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)."
  },
  {
    "path": "docs/versioned_docs/version-v1.4.x/custom-scanner.md",
    "content": "---\ntitle: Custom Scanner\n---\n\n## Creating a Custom Scanner\nTo create a custom scanner for non-compliant images, use the following [template](https://github.com/eraser-dev/eraser-scanner-template/).\n\nIn order to customize your scanner, start by creating a `NewImageProvider()`. The ImageProvider interface can be found can be found [here](../../pkg/scanners/template/scanner_template.go). \n\nThe ImageProvider will allow you to retrieve the list of all non-running and non-excluded images from the collector container through the `ReceiveImages()` function. Process these images with your customized scanner and threshold, and use `SendImages()` to pass the images found non-compliant to the eraser container for removal. Finally, complete the scanning process by calling `Finish()`.\n\nWhen complete, provide your custom scanner image to Eraser in deployment.\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.4.x/customization.md",
    "content": "---\ntitle: Customization\n---\n\n## Overview\n\nEraser uses a configmap to configure its behavior. The configmap is part of the\ndeployment and it is not necessary to deploy it manually. Once deployed, the configmap\ncan be edited at any time:\n\n```bash\nkubectl edit configmap --namespace eraser-system eraser-manager-config\n```\n\nIf an eraser job is already running, the changes will not take effect until the job completes.\nThe configuration is in yaml.\n\n## Key Concepts\n\n### Basic architecture\n\nThe _manager_ runs as a pod in your cluster and manages _ImageJobs_. Think of\nan _ImageJob_ as a unit of work, performed on every node in your cluster. Each\nnode runs a sub-job. The goal of the _ImageJob_ is to assess the images on your\ncluster's nodes, and to remove the images you don't want. There are two stages:\n1. Assessment\n1. Removal.\n\n\n### Scheduling\n\nAn _ImageJob_ can either be created on-demand (see [Manual Removal](https://eraser-dev.github.io/eraser/docs/manual-removal)),\nor they can be spawned on a timer like a cron job. On-demand jobs skip the\nassessment stage and get right down to the business of removing the images you\nspecified. The behavior of an on-demand job is quite different from that of\ntimed jobs.\n\n### Fault Tolerance\n\nBecause an _ImageJob_ runs on every node in your cluster, and the conditions on\neach node may vary widely, some of the sub-jobs may fail. If you cannot\ntolerate any failure, set the `manager.imageJob.successRatio` property to\n`1.0`. If 75% success sounds good to you, set it to `0.75`. In that case, if\nfewer than 75% of the pods spawned by the _ImageJob_ report success, the job as\na whole will be marked as a failure.\n\nThis is mainly to help diagnose error conditions. As such, you can set\n`manager.imageJob.cleanup.delayOnFailure` to a long value so that logs can be\ncaptured before the spawned pods are cleaned up.\n\n### Excluding Nodes\n\nFor various reasons, you may want to prevent Eraser from scheduling pods on\ncertain nodes. To do so, the nodes can be given a special label. By default,\nthis label is `eraser.sh/cleanup.filter`, but you can configure the behavior with\nthe options under `manager.nodeFilter`. The [table](#detailed-options) provides more detail.\n\n### Configuring Components\n\nAn _ImageJob_ is made up of various sub-jobs, with one sub-job for each node.\nThese sub-jobs can be broken down further into three stages.\n1. Collection (What is on the node?)\n1. Scanning (What images conform to the policy I've provided?)\n1. Removal (Remove images based on the results of the above)\n\nOf the above stages, only Removal is mandatory. The others can be disabled.\nFurthermore, manually triggered _ImageJobs_ will skip right to removal, even if\nEraser is configured to collect and scan. Collection and Scanning will only\ntake place when:\n1. The collector and/or scanner `components` are enabled, AND\n1. The job was *not* triggered manually by creating an _ImageList_.\n\nDisabling scanner will remove all non-running images by default.\n\n### Swapping out components\n\nThe collector, scanner, and remover components can all be swapped out. This\nenables you to build and host the images yourself. In addition, the scanner's\nbehavior can be completely tailored to your needs by swapping out the default\nimage with one of your own. 
To specify the images, use the\n`components.<component>.image.repo` and `components.<component>.image.tag`,\nwhere `<component>` is one of `collector`, `scanner`, or `remover`.\n\n## Universal Options\n\nThe following portions of the configmap apply no matter how you spawn your\n_ImageJob_. The values provided below are the defaults. For more detail on\nthese options, see the [table](#detailed-options).\n\n```yaml\nmanager:\n  runtime:\n    name: containerd\n    address: unix:///run/containerd/containerd.sock\n  otlpEndpoint: \"\" # empty string disables OpenTelemetry\n  logLevel: info\n  profile:\n    enabled: false\n    port: 6060\n  imageJob:\n    successRatio: 1.0\n    cleanup:\n      delayOnSuccess: 0s\n      delayOnFailure: 24h\n  pullSecrets: [] # image pull secrets for collector/scanner/remover\n  priorityClassName: \"\" # priority class name for collector/scanner/remover\n  additionalPodLabels: {}\n  extraScannerVolumes: {}\n  extraScannerVolumeMounts: {}\n  nodeFilter:\n    type: exclude # must be either exclude|include\n    selectors:\n      - eraser.sh/cleanup.filter\n      - kubernetes.io/os=windows\ncomponents:\n  remover:\n    image:\n      repo: ghcr.io/eraser-dev/remover\n      tag: v1.0.0\n    request:\n      mem: 25Mi\n      cpu: 0\n    limit:\n      mem: 30Mi\n      cpu: 1000m\n```\n\n## Component Options\n\n```yaml\ncomponents:\n  collector:\n    enabled: true\n    image:\n      repo: ghcr.io/eraser-dev/collector\n      tag: v1.0.0\n    request:\n      mem: 25Mi\n      cpu: 7m\n    limit:\n      mem: 500Mi\n      cpu: 0\n  scanner:\n    enabled: true\n    image:\n      repo: ghcr.io/eraser-dev/eraser-trivy-scanner\n      tag: v1.0.0\n    request:\n      mem: 500Mi\n      cpu: 1000m\n    limit:\n      mem: 2Gi\n      cpu: 0\n    config: |\n      # this is the schema for the provided 'trivy-scanner'. custom scanners\n      # will define their own configuration. see below\n  remover:\n    image:\n      repo: ghcr.io/eraser-dev/remover\n      tag: v1.0.0\n    request:\n      mem: 25Mi\n      cpu: 0\n    limit:\n      mem: 30Mi\n      cpu: 1000m\n```\n\n## Scanner Options\n\nThese options can be provided to `components.scanner.config`. They will be\npassed through as a string to the scanner container and parsed there. If you\nwant to configure your own scanner, you must provide some way to parse this.\n\nBelow are the values recognized by the provided `eraser-trivy-scanner` image.\nValues provided below are the defaults.\n\n```yaml\ncacheDir: /var/lib/trivy # The file path inside the container to store the cache\ndbRepo: ghcr.io/aquasecurity/trivy-db # The container registry from which to fetch the trivy database\ndeleteFailedImages: true # if true, remove images for which scanning fails, regardless of why it failed\ndeleteEOLImages: true # if true, remove images that have reached their end-of-life date\nvulnerabilities:\n  ignoreUnfixed: true # consider the image compliant if there are no known fixes for the vulnerabilities found.\n  types: # a list of vulnerability types. for more info, see trivy's documentation.\n    - os\n    - library\n  securityChecks: # see trivy's documentation for more information\n    - vuln\n  severities: # in this case, only flag images with CRITICAL vulnerability for removal\n    - CRITICAL\n  ignoredStatuses: # a list of trivy statuses to ignore. See https://aquasecurity.github.io/trivy/v0.44/docs/configuration/filtering/#by-status.
timeout:\n  total: 23h # if scanning isn't completed before this much time elapses, abort the whole scan\n  perImage: 1h # if scanning a single image exceeds this time, scanning will be aborted\n```\n\n## Detailed Options\n\n| Option | Description | Default |\n| --- | --- | --- |\n| manager.runtime.name | The runtime to use for the manager's containers. Must be one of containerd, crio, or dockershim. It is assumed that your nodes are all using the same runtime, and there is currently no way to configure multiple runtimes. | containerd |\n| manager.runtime.address | The runtime socket address to use for the containers. Can provide a custom address for containerd and dockershim runtimes, but not for crio due to Trivy restrictions. | unix:///run/containerd/containerd.sock |\n| manager.otlpEndpoint | The endpoint to send OpenTelemetry data to. If empty, data will not be sent. | \"\" |\n| manager.logLevel | The log level for the manager's containers. Must be one of debug, info, warn, error, dpanic, panic, or fatal. | info |\n| manager.scheduling.repeatInterval | Used only when the collector and/or scanner are enabled. This works like a cron job, and will spawn an _ImageJob_ at the interval provided. | 24h |\n| manager.scheduling.beginImmediately | If set to true, the first _ImageJob_ will run immediately. If false, the job will not be spawned until after the interval (above) has elapsed. | true |\n| manager.profile.enabled | Whether to enable profiling for the manager's containers. This is for debugging with `go tool pprof`. | false |\n| manager.profile.port | The port on which to expose the profiling endpoint. | 6060 |\n| manager.imageJob.successRatio | The ratio of successful image jobs required before a cleanup is performed. | 1.0 |\n| manager.imageJob.cleanup.delayOnSuccess | The amount of time to wait after a successful image job before performing cleanup. | 0s |\n| manager.imageJob.cleanup.delayOnFailure | The amount of time to wait after a failed image job before performing cleanup. | 24h |\n| manager.pullSecrets | The image pull secrets to use for collector, scanner, and remover containers. | [] |\n| manager.priorityClassName | The priority class to use for collector, scanner, and remover containers. | \"\" |\n| manager.additionalPodLabels | Additional labels for all pods that the controller creates at runtime. | `{}` |\n| manager.nodeFilter.type | The type of node filter to use. Must be either \"exclude\" or \"include\". | exclude |\n| manager.nodeFilter.selectors | A list of selectors used to filter nodes. | [] |\n| components.collector.enabled | Whether to enable the collector component. | true |\n| components.collector.image.repo | The repository containing the collector image. | ghcr.io/eraser-dev/collector |\n| components.collector.image.tag | The tag of the collector image. | v1.0.0 |\n| components.collector.request.mem | The amount of memory to request for the collector container. | 25Mi |\n| components.collector.request.cpu | The amount of CPU to request for the collector container. | 7m |\n| components.collector.limit.mem | The maximum amount of memory the collector container is allowed to use. | 500Mi |\n| components.collector.limit.cpu | The maximum amount of CPU the collector container is allowed to use. | 0 |\n| components.scanner.enabled | Whether to enable the scanner component. | true |\n| components.scanner.image.repo | The repository containing the scanner image. 
| ghcr.io/eraser-dev/eraser-trivy-scanner |\n| components.scanner.image.tag | The tag of the scanner image. | v1.0.0 |\n| components.scanner.request.mem | The amount of memory to request for the scanner container. | 500Mi |\n| components.scanner.request.cpu | The amount of CPU to request for the scanner container. | 1000m |\n| components.scanner.limit.mem | The maximum amount of memory the scanner container is allowed to use. | 2Gi |\n| components.scanner.limit.cpu | The maximum amount of CPU the scanner container is allowed to use. | 0 |\n| components.scanner.config | The configuration to pass to the scanner container, as a YAML string. | See YAML below |\n| components.scanner.volumes | Extra volumes for scanner. | `{}` |\n| components.remover.image.repo | The repository containing the remover image. | ghcr.io/eraser-dev/remover |\n| components.remover.image.tag | The tag of the remover image. | v1.0.0 |\n| components.remover.request.mem | The amount of memory to request for the remover container. | 25Mi |\n| components.remover.request.cpu | The amount of CPU to request for the remover container. | 0 |\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.4.x/exclusion.md",
    "content": "---\ntitle: Exclusion\n---\n\n## Excluding registries, repositories, and images\nEraser can exclude registries (example, `docker.io/library/*`) and also specific images with a tag (example, `docker.io/library/ubuntu:18.04`) or digest (example, `sha256:80f31da1ac7b312ba29d65080fd...`) from its removal process.\n\nTo exclude any images or registries from the removal, create configmap(s) with the label `eraser.sh/exclude.list=true` in the eraser-system namespace with a JSON file holding the excluded images.\n\n```bash\n$ cat > sample.json <<\"EOF\"\n{\n  \"excluded\": [\n    \"docker.io/library/*\",\n    \"ghcr.io/eraser-dev/test:latest\"\n  ]\n}\nEOF\n\n$ kubectl create configmap excluded --from-file=sample.json --namespace=eraser-system\n$ kubectl label configmap excluded eraser.sh/exclude.list=true -n eraser-system\n```\n\n## Exempting Nodes from the Eraser Pipeline\nExempting nodes from cleanup was added in v1.0.0. When deploying Eraser, you can specify whether there is a list of nodes you would like to `include` or `exclude` from the cleanup process using the configmap. For more information, see the section on [customization](https://eraser-dev.github.io/eraser/docs/customization).\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.4.x/faq.md",
    "content": "---\ntitle: FAQ\n---\n## Why am I still seeing vulnerable images?\nEraser currently targets **non-running** images, so any vulnerable images that are currently running will not be removed. In addition, the default vulnerability scanning with Trivy removes images with `CRITICAL` vulnerabilities. Any images with lower vulnerabilities will not be removed. This can be configured using the [configmap](https://eraser-dev.github.io/eraser/docs/customization#scanner-options).\n\n## How is Eraser different from Kubernetes garbage collection?\nThe native garbage collection in Kubernetes works a bit differently than Eraser. By default, garbage collection begins when disk usage reaches 85%, and stops when it gets down to 80%. More details about Kubernetes garbage collection can be found in the [Kubernetes documentation](https://kubernetes.io/docs/concepts/architecture/garbage-collection/), and configuration options can be found in the [Kubelet documentation](https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/). \n\nThere are a couple core benefits to using Eraser for image cleanup:\n* Eraser can be configured to use image vulnerability data when making determinations on image removal\n* By interfacing directly with the container runtime, Eraser can clean up images that are not managed by Kubelet and Kubernetes\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.4.x/installation.md",
    "content": "---\ntitle: Installation\n---\n\n## Manifest\n\nTo install Eraser with the manifest file, run the following command:\n\n```bash\nkubectl apply -f https://raw.githubusercontent.com/eraser-dev/eraser/v1.4.1/deploy/eraser.yaml\n```\n\n## Helm\n\nIf you'd like to install and manage Eraser with Helm, follow the install instructions [here](https://github.com/eraser-dev/eraser/blob/main/charts/eraser/README.md)\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.4.x/introduction.md",
    "content": "---\ntitle: Introduction\nslug: /\n---\n\n# Introduction\n\nWhen deploying to Kubernetes, it's common for pipelines to build and push images to a cluster, but it's much less common for these images to be cleaned up. This can lead to accumulating bloat on the disk, and a host of non-compliant images lingering on the nodes.\n\nThe current garbage collection process deletes images based on a percentage of load, but this process does not consider the vulnerability state of the images. **Eraser** aims to provide a simple way to determine the state of an image, and delete it if it meets the specified criteria.\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.4.x/manual-removal.md",
    "content": "---\ntitle: Manual Removal\n---\n\nCreate an `ImageList` and specify the images you would like to remove. In this case, the image `docker.io/library/alpine:3.7.3` will be removed.\n\n```shell\ncat <<EOF | kubectl apply -f -\napiVersion: eraser.sh/v1alpha1\nkind: ImageList\nmetadata:\n  name: imagelist\nspec:\n  images:\n    - docker.io/library/alpine:3.7.3   # use \"*\" for all non-running images\nEOF\n```\n\n> `ImageList` is a cluster-scoped resource and must be called imagelist. `\"*\"` can be specified to remove all non-running images instead of individual images.\n\nCreating an `ImageList` should trigger an `ImageJob` that will deploy Eraser pods on every node to perform the removal given the list of images.\n\n```shell\n$ kubectl get pods -n eraser-system\neraser-system        eraser-controller-manager-55d54c4fb6-dcglq   1/1     Running   0          9m8s\neraser-system        eraser-kind-control-plane                    1/1     Running   0          11s\neraser-system        eraser-kind-worker                           1/1     Running   0          11s\neraser-system        eraser-kind-worker2                          1/1     Running   0          11s\n```\n\nPods will run to completion and the images will be removed.\n\n```shell\n$ kubectl get pods -n eraser-system\neraser-system        eraser-controller-manager-6d6d5594d4-phl2q   1/1     Running     0          4m16s\neraser-system        eraser-kind-control-plane                    0/1     Completed   0          22s\neraser-system        eraser-kind-worker                           0/1     Completed   0          22s\neraser-system        eraser-kind-worker2                          0/1     Completed   0          22s\n```\n\nThe `ImageList` custom resource status field will contain the status of the last job. The success and failure counts indicate the number of nodes the Eraser agent was run on.\n\n```shell\n$ kubectl describe ImageList imagelist\n...\nStatus:\n  Failed:     0\n  Success:    3\n  Timestamp:  2022-02-25T23:41:55Z\n...\n```\n\nVerify the unused images are removed.\n\n```shell\n$ docker exec kind-worker ctr -n k8s.io images list | grep alpine\n```\n\nIf the image has been successfully removed, there will be no output.\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.4.x/metrics.md",
    "content": "---\ntitle: Metrics\n---\n\nTo view Eraser metrics, you will need to deploy an Open Telemetry collector in the 'eraser-system' namespace, and an exporter. An example collector with a Prometheus exporter is [otelcollector.yaml](https://github.com/eraser-dev/eraser/blob/main/test/e2e/test-data/otelcollector.yaml), and the endpoint can be specified using the [configmap](https://eraser-dev.github.io/eraser/docs/customization#universal-options). In this example, we are logging the collected data to the otel-collector pod, and exporting metrics through Prometheus at 'http://localhost:8889/metrics', but a separate exporter can also be configured.\n\nBelow is the list of metrics provided by Eraser per run:\n\n#### Eraser\n```yaml\n- count\n\t- name: images_removed_run_total\n\t\t- description: Total images removed by eraser\n```\n\n #### Scanner\n ```yaml\n- count\n\t- name: vulnerable_images_run_total\n\t\t- description: Total vulnerable images detected\n ```\n\n #### ImageJob\n ```yaml\n - count\n\t- name: imagejob_run_total\n\t\t- description: Total ImageJobs scheduled\n\t- name: pods_completed_run_total\n\t\t- description: Total pods completed\n\t-  name: pods_failed_run_total\n\t\t- description: Total pods failed\n- summary\n\t- name: imagejob_duration_run_seconds\n\t\t- description: Total time for ImageJobs scheduled to complete\n```\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.4.x/quick-start.md",
    "content": "---\ntitle: Quick Start\n---\n\nThis tutorial demonstrates the functionality of Eraser and validates that non-running images are removed succesfully.\n\n## Deploy a DaemonSet\n\nAfter following the [install instructions](installation.md), we'll apply a demo `DaemonSet`. For illustrative purposes, a DaemonSet is applied and deleted so the non-running images remain on all nodes. The alpine image with the `3.7.3` tag will be used in this example. This is an image with a known critical vulnerability.\n\nFirst, apply the `DaemonSet`:\n\n```shell\ncat <<EOF | kubectl apply -f -\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n  name: alpine\nspec:\n  selector:\n    matchLabels:\n      app: alpine\n  template:\n    metadata:\n      labels:\n        app: alpine\n    spec:\n      containers:\n      - name: alpine\n        image: docker.io/library/alpine:3.7.3\nEOF\n```\n\nNext, verify that the Pods are running or completed. After the `alpine` Pods complete, you may see a `CrashLoopBackoff` status. This is expected behavior from the `alpine` image and can be ignored for the tutorial.\n\n```shell\n$ kubectl get pods\nNAME          READY   STATUS      RESTARTS     AGE\nalpine-2gh9c   1/1     Running     1 (3s ago)   6s\nalpine-hljp9   0/1     Completed   1 (3s ago)   6s\n```\n\nDelete the DaemonSet:\n\n```shell\n$ kubectl delete daemonset alpine\n```\n\nVerify that the Pods have been deleted:\n\n```shell\n$ kubectl get pods\nNo resources found in default namespace.\n```\n\nTo verify that the `alpine` images are still on the nodes, exec into one of the worker nodes and list the images. If you are not using a kind cluster or Docker for your container nodes, you will need to adjust the exec command accordingly.\n\nList the nodes:\n\n```shell\n$ kubectl get nodes\nNAME                 STATUS   ROLES           AGE   VERSION\nkind-control-plane   Ready    control-plane   45m   v1.24.0\nkind-worker          Ready    <none>          45m   v1.24.0\nkind-worker2         Ready    <none>          44m   v1.24.0\n```\n\nList the images then filter for `alpine`:\n\n```shell\n$ docker exec kind-worker ctr -n k8s.io images list | grep alpine\ndocker.io/library/alpine:3.7.3                                                                             application/vnd.docker.distribution.manifest.list.v2+json sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10 2.0 MiB   linux/386,linux/amd64,linux/arm/v6,linux/arm64/v8,linux/ppc64le,linux/s390x  io.cri-containerd.image=managed\ndocker.io/library/alpine@sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10           application/vnd.docker.distribution.manifest.list.v2+json sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10 2.0 MiB   linux/386,linux/amd64,linux/arm/v6,linux/arm64/v8,linux/ppc64le,linux/s390x  io.cri-containerd.image=managed\n\n```\n\n## Automatically Cleaning Images\n\nAfter deploying Eraser, it will automatically clean images in a regular interval. This interval can be set using the `manager.scheduling.repeatInterval` setting in the [configmap](https://eraser-dev.github.io/eraser/docs/customization#detailed-options). The default interval is 24 hours (`24h`). 
Eraser will schedule eraser pods on each node in the cluster, and each pod will contain 3 containers (collector, scanner, and remover) that will run to completion.\n\n```shell\n$ kubectl get pods -n eraser-system\nNAMESPACE            NAME                                         READY   STATUS      RESTARTS         AGE\neraser-system        eraser-kind-control-plane-sb789           0/3     Completed   0                26m\neraser-system        eraser-kind-worker-j84hm                  0/3     Completed   0                26m\neraser-system        eraser-kind-worker2-4lbdr                 0/3     Completed   0                26m\neraser-system        eraser-controller-manager-86cdb4cbf9-x8d7q   1/1     Running     0                26m\n```\n\nThe collector container sends the list of all images on the node to the scanner container, which scans them and reports non-compliant images to the remover container, which in turn removes any non-compliant images that are not running. Once all pods are completed, they will be automatically cleaned up.\n\n> If you want to remove all the images periodically, you can skip the scanner container by setting the `components.scanner.enabled` value to `false` using the [configmap](https://eraser-dev.github.io/eraser/docs/customization#detailed-options). In this case, each pod will hold 2 containers: collector and remover.\n\n```shell\n$ kubectl get pods -n eraser-system\nNAMESPACE            NAME                                         READY   STATUS      RESTARTS         AGE\neraser-system        eraser-kind-control-plane-ksk2b           0/2     Completed   0                50s\neraser-system        eraser-kind-worker-cpgqc                  0/2     Completed   0                50s\neraser-system        eraser-kind-worker2-k25df                 0/2     Completed   0                50s\neraser-system        eraser-controller-manager-86cdb4cbf9-x8d7q   1/1     Running     0                55s\n```\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.4.x/release-management.md",
    "content": "# Release Management\n\n## Overview\n\nThis document describes Eraser project release management, which includes release versioning, supported releases, and supported upgrades.\n\n## Legend\n\n- **X.Y.Z** refers to the version (git tag) of Eraser that is released. This is the version of the Eraser images and the Chart version.\n- **Breaking changes** refer to schema changes, flag changes, and behavior changes of Eraser that may require a clean installation during upgrade, and it may introduce changes that could break backward compatibility.\n- **Milestone** should be designed to include feature sets to accommodate 2 months release cycles including test gates. GitHub's milestones are used by maintainers to manage each release. PRs and Issues for each release should be created as part of a corresponding milestone.\n- **Patch releases** refer to applicable fixes, including security fixes, may be backported to support releases, depending on severity and feasibility.\n- **Test gates** should include soak tests and upgrade tests from the last minor version.\n\n## Release Versioning\n\nAll releases will be of the form _vX.Y.Z_ where X is the major version, Y is the minor version and Z is the patch version. This project strictly follows semantic versioning.\n\nThe rest of the doc will cover the release process for the following kinds of releases:\n\n**Major Releases**\n\nNo plan to move to 2.0.0 unless there is a major design change like an incompatible API change in the project\n\n**Minor Releases**\n\n- X.Y.0-alpha.W, W >= 0 (Branch : main)\n    - Released as needed before we cut a beta X.Y release\n    - Alpha release, cut from master branch\n- X.Y.0-beta.W, W >= 0 (Branch : main)\n    - Released as needed before we cut a stable X.Y release\n    - More stable than the alpha release to signal users to test things out\n    - Beta release, cut from master branch\n- X.Y.0-rc.W, W >= 0 (Branch : main)\n    - Released as needed before we cut a stable X.Y release\n    - soak for ~ 2 weeks before cutting a stable release\n    - Release candidate release, cut from master branch\n- X.Y.0 (Branch: main)\n    - Released as needed\n    - Stable release, cut from master when X.Y milestone is complete\n\n**Patch Releases**\n\n- Patch Releases X.Y.Z, Z > 0 (Branch: release-X.Y, only cut when a patch is needed)\n    - No breaking changes\n    - Applicable fixes, including security fixes, may be cherry-picked from master into the latest supported minor release-X.Y branches.\n    - Patch release, cut from a release-X.Y branch\n\n## Supported Releases\n\nApplicable fixes, including security fixes, may be cherry-picked into the release branch, depending on severity and feasibility. Patch releases are cut from that branch as needed.\n\nWe expect users to stay reasonably up-to-date with the versions of Eraser they use in production, but understand that it may take time to upgrade. We expect users to be running approximately the latest patch release of a given minor release and encourage users to upgrade as soon as possible.\n\nWe expect to \"support\" n (current) and n-1 major.minor releases. \"Support\" means we expect users to be running that version in production. 
For example, when v1.2.0 comes out, v1.0.x will no longer be supported for patches, and we encourage users to upgrade to a supported version as soon as possible.\n\n## Supported Kubernetes Versions\n\nEraser is assumed to be compatible with the [current Kubernetes Supported Versions](https://kubernetes.io/releases/patch-releases/#detailed-release-history-for-active-branches) per [Kubernetes Supported Versions policy](https://kubernetes.io/releases/version-skew-policy/).\n\nFor example, if Eraser _supported_ versions are v1.2 and v1.1, and Kubernetes _supported_ versions are v1.22, v1.23, v1.24, then all supported Eraser versions (v1.2, v1.1) are assumed to be compatible with all supported Kubernetes versions (v1.22, v1.23, v1.24). If Kubernetes v1.25 is released later, then Eraser v1.2 and v1.1 will be assumed to be compatible with v1.25 if those Eraser versions are still supported at that time.\n\nIf you choose to use Eraser with a version of Kubernetes that it does not support, you are using it at your own risk.\n\n## Acknowledgement\n\nThis document builds on the ideas and implementations of release processes from projects like Kubernetes and Helm."
  },
  {
    "path": "docs/versioned_docs/version-v1.4.x/releasing.md",
    "content": "---\ntitle: Releasing\n---\n\n## Create Release Pull Request\n\n1. Go to `create_release_pull_request` workflow under actions.\n2. Select run workflow, and use the workflow from your branch. \n3. Input release version with the semantic version identifying the release.\n4. Click run workflow and review the PR created by github-actions.\n\n# Releasing\n\n5. Once the PR is merged to `main`, tag that commit with release version and push tags to remote repository.\n\n   ```\n   git checkout <BRANCH NAME>\n   git pull origin <BRANCH NAME>\n   git tag -a <NEW VERSION> -m '<NEW VERSION>'\n   git push origin <NEW VERSION>\n   ```\n6. Pushing the release tag will trigger GitHub Actions to trigger `release` job.\n   This will build the `ghcr.io/eraser-dev/remover`, `ghcr.io/eraser-dev/eraser-manager`, `ghcr.io/eraser-dev/collector`, and `ghcr.io/eraser-dev/eraser-trivy-scanner` images automatically, then publish the new release tag.\n\n## Publishing\n\n1. GitHub Action will create a new release, review and edit it at https://github.com/eraser-dev/eraser/releases\n\n## Notifying\n\n1. Send an email to the [Eraser mailing list](https://groups.google.com/g/eraser-dev) announcing the release, with links to GitHub.\n2. Post a message on the [Eraser Slack channel](https://kubernetes.slack.com/archives/C03Q8KV8YQ4) with the same information."
  },
  {
    "path": "docs/versioned_docs/version-v1.4.x/setup.md",
    "content": "---\ntitle: Setup\n---\n\n# Development Setup\n\nThis document describes the steps to get started with development.\nYou can either utilize [Codespaces](https://docs.github.com/en/codespaces/overview) or setup a local environment.\n\n## Local Setup\n\n### Prerequisites:\n\n- [go](https://go.dev/) with version 1.17 or later.\n- [docker](https://docs.docker.com/get-docker/)\n- [kind](https://kind.sigs.k8s.io/)\n- `make`\n\n### Get things running\n\n- Get dependencies with `go get`\n\n- This project uses `make`. You can utilize `make help` to see available targets. For local deployment make targets help to build, test and deploy.\n\n### Making changes\n\nPlease refer to [Development Reference](#development-reference) for more details on the specific commands.\n\nTo test your changes on a cluster:\n\n```bash\n# generate necessary api files (optional - only needed if changes to api folder).\nmake generate\n\n# build applicable images\nmake docker-build-manager MANAGER_IMG=eraser-manager:dev\nmake docker-build-remover REMOVER_IMG=remover:dev\nmake docker-build-collector COLLECTOR_IMG=collector:dev\nmake docker-build-trivy-scanner TRIVY_SCANNER_IMG=eraser-trivy-scanner:dev\n\n# make sure updated image is present on cluster (e.g., see kind example below)\nkind load docker-image \\\n        eraser-manager:dev \\\n        eraser-trivy-scanner:dev \\\n        remover:dev \\\n        collector:dev\n\nmake manifests\nmake deploy\n\n# to remove the deployment\nmake undeploy\n```\n\nTo test your changes to manager locally:\n\n```bash\nmake run\n```\n\nExample Output:\n\n```\nyou@local:~/eraser$ make run\ndocker build . \\\n        -t eraser-tooling \\\n        -f build/tooling/Dockerfile\n[+] Building 7.8s (8/8) FINISHED\n => => naming to docker.io/library/eraser-tooling                           0.0s\ndocker run -v /home/eraser/config:/config -w /config/manager \\\n        registry.k8s.io/kustomize/kustomize:v3.8.9 edit set image controller=eraser-manager:dev\ndocker run -v /home/eraser:/eraser eraser-tooling controller-gen \\\n        crd \\\n        rbac:roleName=manager-role \\\n        webhook \\\n        paths=\"./...\" \\\n        output:crd:artifacts:config=config/crd/bases\nrm -rf manifest_staging\nmkdir -p manifest_staging/deploy\ndocker run --rm -v /home/eraser:/eraser \\\n        registry.k8s.io/kustomize/kustomize:v3.8.9 build \\\n        /eraser/config/default -o /eraser/manifest_staging/deploy/eraser.yaml\ndocker run -v /home/eraser:/eraser eraser-tooling controller-gen object:headerFile=\"hack/boilerplate.go.txt\" paths=\"./...\"\ngo fmt ./...\ngo vet ./...\ngo run ./main.go\n{\"level\":\"info\",\"ts\":1652985685.1663408,\"logger\":\"controller-runtime.metrics\",\"msg\":\"Metrics server is starting to listen\",\"addr\":\":8080\"}\n...\n```\n\n## Development Reference\n\nEraser is using tooling from [kubebuilder](https://github.com/kubernetes-sigs/kubebuilder). For Eraser this tooling is containerized into the `eraser-tooling` image. The `make` targets can use this tooling and build the image when necessary.\n\nYou can override the default configuration using environment variables. 
### Common Configuration\n\n| Environment Variable | Description                                                                                            |\n| -------------------- | ------------------------------------------------------------------------------------------------------ |\n| VERSION              | Specifies the version (i.e., the image tag) of eraser to be used.                                      |\n| MANAGER_IMG          | Defines the image url for the Eraser manager. Used for tagging, pulling and pushing the image          |\n| REMOVER_IMG          | Defines the image url for the Eraser remover. Used for tagging, pulling and pushing the image          |\n| COLLECTOR_IMG        | Defines the image url for the Collector. Used for tagging, pulling and pushing the image               |\n\n### Linting\n\n- `make lint`\n\nLints the go code.\n\n| Environment Variable | Description                                             |\n| -------------------- | ------------------------------------------------------- |\n| GOLANGCI_LINT        | Specifies the go linting binary to be used for linting. |\n\n### Development\n\n- `make generate`\n\nGenerates necessary files for the k8s api stored under `api/v1alpha1/zz_generated.deepcopy.go`. See the [kubebuilder docs](https://book.kubebuilder.io/cronjob-tutorial/other-api-files.html) for details.\n\n- `make manifests`\n\nGenerates the eraser deployment yaml files under `manifest_staging/deploy`.\n\nConfiguration Options:\n\n| Environment Variable | Description                                         |\n| -------------------- | --------------------------------------------------- |\n| REMOVER_IMG          | Defines the image url for the Eraser remover.       |\n| MANAGER_IMG          | Defines the image url for the Eraser manager.       |\n| KUSTOMIZE_VERSION    | Defines the Kustomize version for generating manifests. |\n\n- `make test`\n\nRuns the unit tests for the eraser project.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                 |\n| -------------------- | ----------------------------------------------------------- |\n| ENVTEST              | Specifies the envtest setup binary.                         |\n| ENVTEST_K8S_VERSION  | Specifies the Kubernetes version for envtest setup command. |\n\n- `make e2e-test`\n\nRuns e2e tests on a cluster.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                                                   |\n| -------------------- | ------------------------------------------------------------------------------------------------------------- |\n| REMOVER_IMG          | Eraser remover image to be used for e2e test.                                                                 |\n| MANAGER_IMG          | Eraser manager image to be used for e2e test.                                                                 |\n| KUBERNETES_VERSION   | Kubernetes version for e2e test.                                                                              |\n| TEST_COUNT           | Sets repetition for test. Please refer to [go docs](https://pkg.go.dev/cmd/go#hdr-Testing_flags) for details. |\n| TIMEOUT              | Sets timeout for test. Please refer to [go docs](https://pkg.go.dev/cmd/go#hdr-Testing_flags) for details.    |\n| TESTFLAGS            | Sets additional test flags.                                                                                   |\n\n
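A sketch of a customized e2e run (the values are illustrative):\n\n```bash\nmake e2e-test TIMEOUT=60m TESTFLAGS=\"-v\"\n```\n\n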
### Build\n\n- `make build`\n\nBuilds the eraser manager binaries.\n\n- `make run`\n\nRuns the eraser manager on your local machine.\n\n- `make docker-build-manager`\n\nBuilds the docker image for the eraser manager.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                                                                                            |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| CACHE_FROM           | Sets the target of the buildx --cache-from flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-from). |\n| CACHE_TO             | Sets the target of the buildx --cache-to flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-to).     |\n| PLATFORM             | Sets the target platform for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#platform).               |\n| OUTPUT_TYPE          | Sets the output for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#output).                          |\n| MANAGER_IMG          | Specifies the target repository, image name and tag for building the image.                                                                            |\n\n- `make docker-push-manager`\n\nPushes the docker image for the eraser manager.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                              |\n| -------------------- | ------------------------------------------------------------------------ |\n| MANAGER_IMG          | Specifies the target repository, image name and tag for pushing the image. |\n\n- `make docker-build-remover`\n\nBuilds the docker image for the eraser remover.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                                                                                            |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| CACHE_FROM           | Sets the target of the buildx --cache-from flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-from). |\n| CACHE_TO             | Sets the target of the buildx --cache-to flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-to).     |\n| PLATFORM             | Sets the target platform for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#platform).               |\n| OUTPUT_TYPE          | Sets the output for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#output).                          |\n| REMOVER_IMG          | Specifies the target repository, image name and tag for building the image.                                                                            |\n\n
- `make docker-push-remover`\n\nPushes the docker image for the eraser remover.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                              |\n| -------------------- | ------------------------------------------------------------------------ |\n| REMOVER_IMG          | Specifies the target repository, image name and tag for pushing the image. |\n\n- `make docker-build-collector`\n\nBuilds the docker image for the eraser collector.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                                                                                                            |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| CACHE_FROM           | Sets the target of the buildx --cache-from flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-from). |\n| CACHE_TO             | Sets the target of the buildx --cache-to flag [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-to).     |\n| PLATFORM             | Sets the target platform for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#platform).               |\n| OUTPUT_TYPE          | Sets the output for buildx [see buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_build/#output).                          |\n| COLLECTOR_IMG        | Specifies the target repository, image name and tag for building the image.                                                                            |\n\n- `make docker-push-collector`\n\nPushes the docker image for the eraser collector.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                              |\n| -------------------- | ------------------------------------------------------------------------ |\n| COLLECTOR_IMG        | Specifies the target repository, image name and tag for pushing the image. |\n\n### Deployment\n\n- `make install`\n\nInstalls CRDs into the K8s cluster specified in ~/.kube/config.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                      |\n| -------------------- | ---------------------------------------------------------------- |\n| KUSTOMIZE_VERSION    | Kustomize version used to generate k8s resources for deployment. |\n\n- `make uninstall`\n\nUninstalls CRDs from the K8s cluster specified in ~/.kube/config.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                      |\n| -------------------- | ---------------------------------------------------------------- |\n| KUSTOMIZE_VERSION    | Kustomize version used to generate k8s resources for deployment. |\n\n- `make deploy`\n\nDeploys eraser to the cluster specified in ~/.kube/config.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                            |\n| -------------------- | ---------------------------------------------------------------------- |\n| KUSTOMIZE_VERSION    | Kustomize version used to generate k8s resources for deployment.       |\n| MANAGER_IMG          | Specifies the eraser manager image version to be used for deployment.  |\n\n
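For example (a sketch; the image reference is illustrative):\n\n```bash\nmake deploy MANAGER_IMG=eraser-manager:dev\n```\n\n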
- `make undeploy`\n\nUndeploys the controller from the K8s cluster specified in ~/.kube/config.\n\nConfiguration Options:\n\n| Environment Variable | Description                                                               |\n| -------------------- | ------------------------------------------------------------------------- |\n| KUSTOMIZE_VERSION    | Kustomize version used to generate the k8s resources that need to be removed. |\n\n### Release\n\n- `make release-manifest`\n\nGenerates the k8s manifest files for a release.\n\nConfiguration Options:\n\n| Environment Variable | Description                          |\n| -------------------- | ------------------------------------ |\n| NEWVERSION           | Sets the new version in the Makefile. |\n\n- `make promote-staging-manifest`\n\nPromotes the k8s deployment yaml files to release.\n"
  },
  {
    "path": "docs/versioned_docs/version-v1.4.x/trivy.md",
    "content": "---\ntitle: Trivy\n---\n\n## Trivy Provider Options\nThe Trivy provider is used in Eraser for image scanning and detecting vulnerabilities. See [Customization](https://eraser-dev.github.io/eraser/docs/customization#scanner-options) for more details on configuring the scanner.\n"
  },
  {
    "path": "docs/versioned_sidebars/version-v0.4.x-sidebars.json",
    "content": "{\n  \"sidebar\": [\n    \"introduction\",\n    \"installation\",\n    \"quick-start\",\n    \"architecture\",\n    {\n      \"type\": \"category\",\n      \"label\": \"Topics\",\n      \"collapsible\": true,\n      \"collapsed\": false,\n      \"items\": [\n        \"manual-removal\",\n        \"exclusion\",\n        \"customization\"\n      ]\n    },\n    {\n      \"type\": \"category\",\n      \"label\": \"Development\",\n      \"collapsible\": true,\n      \"collapsed\": false,\n      \"items\": [\n        \"setup\",\n        \"releasing\"\n      ]\n    },\n    {\n      \"type\": \"category\",\n      \"label\": \"Scanning\",\n      \"collapsible\": true,\n      \"collapsed\": false,\n      \"items\": [\n        \"custom-scanner\",\n        \"trivy\"\n      ]\n    },\n    \"faq\",\n    \"contributing\",\n    \"code-of-conduct\"\n  ]\n}\n"
  },
  {
    "path": "docs/versioned_sidebars/version-v0.5.x-sidebars.json",
    "content": "{\n  \"sidebar\": [\n    \"introduction\",\n    \"installation\",\n    \"quick-start\",\n    \"architecture\",\n    {\n      \"type\": \"category\",\n      \"label\": \"Topics\",\n      \"collapsible\": true,\n      \"collapsed\": false,\n      \"items\": [\n        \"manual-removal\",\n        \"exclusion\",\n        \"customization\"\n      ]\n    },\n    {\n      \"type\": \"category\",\n      \"label\": \"Development\",\n      \"collapsible\": true,\n      \"collapsed\": false,\n      \"items\": [\n        \"setup\",\n        \"releasing\"\n      ]\n    },\n    {\n      \"type\": \"category\",\n      \"label\": \"Scanning\",\n      \"collapsible\": true,\n      \"collapsed\": false,\n      \"items\": [\n        \"custom-scanner\",\n        \"trivy\"\n      ]\n    },\n    \"faq\",\n    \"contributing\",\n    \"code-of-conduct\"\n  ]\n}\n"
  },
  {
    "path": "docs/versioned_sidebars/version-v1.0.x-sidebars.json",
    "content": "{\n  \"sidebar\": [\n    \"introduction\",\n    \"installation\",\n    \"quick-start\",\n    \"architecture\",\n    {\n      \"type\": \"category\",\n      \"label\": \"Topics\",\n      \"collapsible\": true,\n      \"collapsed\": false,\n      \"items\": [\n        \"manual-removal\",\n        \"exclusion\",\n        \"customization\",\n        \"metrics\"\n      ]\n    },\n    {\n      \"type\": \"category\",\n      \"label\": \"Development\",\n      \"collapsible\": true,\n      \"collapsed\": false,\n      \"items\": [\n        \"setup\",\n        \"releasing\"\n      ]\n    },\n    {\n      \"type\": \"category\",\n      \"label\": \"Scanning\",\n      \"collapsible\": true,\n      \"collapsed\": false,\n      \"items\": [\n        \"custom-scanner\",\n        \"trivy\"\n      ]\n    },\n    \"faq\",\n    \"contributing\",\n    \"code-of-conduct\"\n  ]\n}\n"
  },
  {
    "path": "docs/versioned_sidebars/version-v1.1.x-sidebars.json",
    "content": "{\n  \"sidebar\": [\n    \"introduction\",\n    \"installation\",\n    \"quick-start\",\n    \"architecture\",\n    {\n      \"type\": \"category\",\n      \"label\": \"Topics\",\n      \"collapsible\": true,\n      \"collapsed\": false,\n      \"items\": [\n        \"manual-removal\",\n        \"exclusion\",\n        \"customization\",\n        \"metrics\"\n      ]\n    },\n    {\n      \"type\": \"category\",\n      \"label\": \"Development\",\n      \"collapsible\": true,\n      \"collapsed\": false,\n      \"items\": [\n        \"setup\",\n        \"releasing\"\n      ]\n    },\n    {\n      \"type\": \"category\",\n      \"label\": \"Scanning\",\n      \"collapsible\": true,\n      \"collapsed\": false,\n      \"items\": [\n        \"custom-scanner\",\n        \"trivy\"\n      ]\n    },\n    \"faq\",\n    \"contributing\",\n    \"code-of-conduct\"\n  ]\n}\n"
  },
  {
    "path": "docs/versioned_sidebars/version-v1.2.x-sidebars.json",
    "content": "{\n  \"sidebar\": [\n    \"introduction\",\n    \"installation\",\n    \"quick-start\",\n    \"architecture\",\n    {\n      \"type\": \"category\",\n      \"label\": \"Topics\",\n      \"collapsible\": true,\n      \"collapsed\": false,\n      \"items\": [\n        \"manual-removal\",\n        \"exclusion\",\n        \"customization\",\n        \"metrics\"\n      ]\n    },\n    {\n      \"type\": \"category\",\n      \"label\": \"Development\",\n      \"collapsible\": true,\n      \"collapsed\": false,\n      \"items\": [\n        \"setup\",\n        \"releasing\"\n      ]\n    },\n    {\n      \"type\": \"category\",\n      \"label\": \"Scanning\",\n      \"collapsible\": true,\n      \"collapsed\": false,\n      \"items\": [\n        \"custom-scanner\",\n        \"trivy\"\n      ]\n    },\n    \"faq\",\n    \"contributing\",\n    \"code-of-conduct\"\n  ]\n}\n"
  },
  {
    "path": "docs/versioned_sidebars/version-v1.3.x-sidebars.json",
    "content": "{\n  \"sidebar\": [\n    \"introduction\",\n    \"installation\",\n    \"quick-start\",\n    \"architecture\",\n    {\n      \"type\": \"category\",\n      \"label\": \"Topics\",\n      \"collapsible\": true,\n      \"collapsed\": false,\n      \"items\": [\n        \"manual-removal\",\n        \"exclusion\",\n        \"customization\",\n        \"metrics\"\n      ]\n    },\n    {\n      \"type\": \"category\",\n      \"label\": \"Development\",\n      \"collapsible\": true,\n      \"collapsed\": false,\n      \"items\": [\n        \"setup\",\n        \"releasing\"\n      ]\n    },\n    {\n      \"type\": \"category\",\n      \"label\": \"Scanning\",\n      \"collapsible\": true,\n      \"collapsed\": false,\n      \"items\": [\n        \"custom-scanner\",\n        \"trivy\"\n      ]\n    },\n    \"faq\",\n    \"contributing\",\n    \"code-of-conduct\"\n  ]\n}\n"
  },
  {
    "path": "docs/versioned_sidebars/version-v1.4.x-sidebars.json",
    "content": "{\n  \"sidebar\": [\n    \"introduction\",\n    \"installation\",\n    \"quick-start\",\n    \"architecture\",\n    {\n      \"type\": \"category\",\n      \"label\": \"Topics\",\n      \"collapsible\": true,\n      \"collapsed\": false,\n      \"items\": [\n        \"manual-removal\",\n        \"exclusion\",\n        \"customization\",\n        \"metrics\"\n      ]\n    },\n    {\n      \"type\": \"category\",\n      \"label\": \"Development\",\n      \"collapsible\": true,\n      \"collapsed\": false,\n      \"items\": [\n        \"setup\",\n        \"releasing\"\n      ]\n    },\n    {\n      \"type\": \"category\",\n      \"label\": \"Scanning\",\n      \"collapsible\": true,\n      \"collapsed\": false,\n      \"items\": [\n        \"custom-scanner\",\n        \"trivy\"\n      ]\n    },\n    \"faq\",\n    \"contributing\",\n    \"code-of-conduct\"\n  ]\n}\n"
  },
  {
    "path": "docs/versions.json",
    "content": "[\n  \"v1.4.x\",\n  \"v1.3.x\",\n  \"v1.2.x\",\n  \"v1.1.x\",\n  \"v1.0.x\",\n  \"v0.5.x\",\n  \"v0.4.x\"\n]\n"
  },
  {
    "path": "go.mod",
    "content": "module github.com/eraser-dev/eraser\n\ngo 1.24.0\n\ntoolchain go1.24.9\n\nrequire (\n\tgithub.com/aquasecurity/trivy v0.51.2\n\tgithub.com/aquasecurity/trivy-db v0.0.0-20241209111357-8c398f13db0e // indirect\n\tgithub.com/go-logr/logr v1.4.3\n\tgithub.com/onsi/ginkgo/v2 v2.14.0\n\tgithub.com/onsi/gomega v1.30.0\n\tgithub.com/stretchr/testify v1.10.0\n\tgo.opentelemetry.io/otel v1.37.0\n\tgo.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.37.0\n\tgo.opentelemetry.io/otel/metric v1.37.0\n\tgo.opentelemetry.io/otel/sdk v1.37.0\n\tgo.opentelemetry.io/otel/sdk/metric v1.37.0\n\tgo.uber.org/zap v1.27.0\n\tgolang.org/x/exp v0.0.0-20241009180824-f66d83c29e7c\n\tgolang.org/x/sys v0.38.0\n\tgoogle.golang.org/grpc v1.73.1\n\tk8s.io/api v0.31.2\n\tk8s.io/apimachinery v0.31.2\n\tk8s.io/client-go v0.31.2\n\t// keeping this on 0.25 as updating to 0.26 will remove CRI v1alpha2 version\n\tk8s.io/cri-api v0.27.1\n\tk8s.io/klog/v2 v2.130.1\n\tk8s.io/utils v0.0.0-20240711033017-18e509b52bc8\n\toras.land/oras-go v1.2.5\n\tsigs.k8s.io/controller-runtime v0.16.6\n\tsigs.k8s.io/e2e-framework v0.0.8\n\tsigs.k8s.io/kind v0.15.0\n\tsigs.k8s.io/yaml v1.4.0\n)\n\nrequire (\n\tgithub.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24 // indirect\n\tgithub.com/BurntSushi/toml v1.4.0 // indirect\n\tgithub.com/Microsoft/hcsshim v0.12.9 // indirect\n\tgithub.com/alessio/shellescape v1.4.1 // indirect\n\tgithub.com/beorn7/perks v1.0.1 // indirect\n\tgithub.com/cenkalti/backoff/v5 v5.0.2 // indirect\n\tgithub.com/cespare/xxhash/v2 v2.3.0 // indirect\n\tgithub.com/containerd/containerd v1.7.29 // indirect\n\tgithub.com/containerd/errdefs v0.3.0 // indirect\n\tgithub.com/containerd/log v0.1.0 // indirect\n\tgithub.com/containerd/platforms v0.2.1 // indirect\n\tgithub.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect\n\tgithub.com/distribution/reference v0.6.0 // indirect\n\tgithub.com/docker/cli v27.3.1+incompatible // indirect\n\tgithub.com/docker/distribution v2.8.3+incompatible // indirect\n\tgithub.com/docker/docker v28.0.0+incompatible // indirect\n\tgithub.com/docker/docker-credential-helpers v0.8.2 // indirect\n\tgithub.com/docker/go-connections v0.5.0 // indirect\n\tgithub.com/docker/go-metrics v0.0.1 // indirect\n\tgithub.com/emicklei/go-restful/v3 v3.11.0 // indirect\n\tgithub.com/evanphx/json-patch/v5 v5.8.0 // indirect\n\tgithub.com/fatih/color v1.16.0 // indirect\n\tgithub.com/felixge/httpsnoop v1.0.4 // indirect\n\tgithub.com/fsnotify/fsnotify v1.7.0 // indirect\n\tgithub.com/go-logr/stdr v1.2.2 // indirect\n\tgithub.com/go-logr/zapr v1.3.0 // indirect\n\tgithub.com/go-openapi/jsonpointer v0.21.0 // indirect\n\tgithub.com/go-openapi/jsonreference v0.21.0 // indirect\n\tgithub.com/go-openapi/swag v0.23.0 // indirect\n\tgithub.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572 // indirect\n\tgithub.com/gogo/protobuf v1.3.2 // indirect\n\tgithub.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect\n\tgithub.com/golang/protobuf v1.5.4 // indirect\n\tgithub.com/google/gnostic-models v0.6.9-0.20230804172637-c7be7c783f49 // indirect\n\tgithub.com/google/go-cmp v0.7.0 // indirect\n\tgithub.com/google/go-containerregistry v0.20.2 // indirect\n\tgithub.com/google/gofuzz v1.2.0 // indirect\n\tgithub.com/google/pprof v0.0.0-20230406165453-00490a63f317 // indirect\n\tgithub.com/google/uuid v1.6.0 // indirect\n\tgithub.com/gorilla/mux v1.8.1 // indirect\n\tgithub.com/grpc-ecosystem/grpc-gateway/v2 v2.27.1 // 
indirect\n\tgithub.com/imdario/mergo v0.3.16 // indirect\n\tgithub.com/inconshreveable/mousetrap v1.1.0 // indirect\n\tgithub.com/josharian/intern v1.0.0 // indirect\n\tgithub.com/json-iterator/go v1.1.12 // indirect\n\tgithub.com/klauspost/compress v1.17.11 // indirect\n\tgithub.com/mailru/easyjson v0.7.7 // indirect\n\tgithub.com/mattn/go-colorable v0.1.13 // indirect\n\tgithub.com/mattn/go-isatty v0.0.20 // indirect\n\tgithub.com/moby/locker v1.0.1 // indirect\n\tgithub.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect\n\tgithub.com/modern-go/reflect2 v1.0.2 // indirect\n\tgithub.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect\n\tgithub.com/opencontainers/go-digest v1.0.0 // indirect\n\tgithub.com/opencontainers/image-spec v1.1.0 // indirect\n\tgithub.com/package-url/packageurl-go v0.1.2 // indirect\n\tgithub.com/pelletier/go-toml v1.9.5 // indirect\n\tgithub.com/pkg/errors v0.9.1 // indirect\n\tgithub.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect\n\tgithub.com/prometheus/client_golang v1.20.5 // indirect\n\tgithub.com/prometheus/client_model v0.6.1 // indirect\n\tgithub.com/prometheus/common v0.55.0 // indirect\n\tgithub.com/prometheus/procfs v0.15.1 // indirect\n\tgithub.com/samber/lo v1.47.0 // indirect\n\tgithub.com/sirupsen/logrus v1.9.3 // indirect\n\tgithub.com/spf13/cobra v1.8.1 // indirect\n\tgithub.com/spf13/pflag v1.0.5 // indirect\n\tgithub.com/vladimirvivien/gexe v0.1.1 // indirect\n\tgo.opentelemetry.io/auto/sdk v1.1.0 // indirect\n\tgo.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.53.0 // indirect\n\tgo.opentelemetry.io/otel/trace v1.37.0 // indirect\n\tgo.opentelemetry.io/proto/otlp v1.7.0 // indirect\n\tgo.uber.org/multierr v1.11.0 // indirect\n\tgolang.org/x/crypto v0.45.0 // indirect\n\tgolang.org/x/net v0.47.0 // indirect\n\tgolang.org/x/oauth2 v0.30.0 // indirect\n\tgolang.org/x/sync v0.18.0 // indirect\n\tgolang.org/x/term v0.37.0 // indirect\n\tgolang.org/x/text v0.31.0 // indirect\n\tgolang.org/x/time v0.12.0 // indirect\n\tgolang.org/x/tools v0.38.0 // indirect\n\tgolang.org/x/xerrors v0.0.0-20231012003039-104605ab7028 // indirect\n\tgomodules.xyz/jsonpatch/v2 v2.4.0 // indirect\n\tgoogle.golang.org/genproto/googleapis/api v0.0.0-20250603155806-513f23925822 // indirect\n\tgoogle.golang.org/genproto/googleapis/rpc v0.0.0-20250603155806-513f23925822 // indirect\n\tgoogle.golang.org/protobuf v1.36.6 // indirect\n\tgopkg.in/inf.v0 v0.9.1 // indirect\n\tgopkg.in/yaml.v2 v2.4.0 // indirect\n\tgopkg.in/yaml.v3 v3.0.1 // indirect\n\tk8s.io/apiextensions-apiserver v0.31.1 // indirect\n\tk8s.io/component-base v0.31.2 // indirect\n\tk8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340 // indirect\n\tsigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect\n\tsigs.k8s.io/structured-merge-diff/v4 v4.4.1 // indirect\n)\n\nreplace (\n\t// v0.3.1-0.20230104082527-d6f58551be3f is taken from github.com/moby/buildkit v0.11.0\n\t// spdx logic write on v0.3.0 and incompatible with v0.3.1-0.20230104082527-d6f58551be3f\n\tgithub.com/spdx/tools-golang => github.com/spdx/tools-golang v0.3.0\n\tk8s.io/api => k8s.io/api v0.28.12\n\tk8s.io/apiextensions-apiserver => k8s.io/apiextensions-apiserver v0.28.12\n\tk8s.io/apimachinery => k8s.io/apimachinery v0.28.12\n\tk8s.io/apiserver => k8s.io/apiserver v0.28.12\n\tk8s.io/cli-runtime => k8s.io/cli-runtime v0.28.12\n\tk8s.io/client-go => k8s.io/client-go v0.28.12\n\tk8s.io/cloud-provider => k8s.io/cloud-provider v0.28.12\n\tk8s.io/cluster-bootstrap => 
k8s.io/cluster-bootstrap v0.28.12\n\tk8s.io/code-generator => k8s.io/code-generator v0.28.12\n\tk8s.io/component-base => k8s.io/component-base v0.28.12\n\tk8s.io/component-helpers => k8s.io/component-helpers v0.28.12\n\tk8s.io/controller-manager => k8s.io/controller-manager v0.28.12\n\t// Pin CRI-API to version that still has v1alpha2 runtime API\n\tk8s.io/cri-api => k8s.io/cri-api v0.25.0\n\tk8s.io/csi-translation-lib => k8s.io/csi-translation-lib v0.28.12\n\tk8s.io/kube-aggregator => k8s.io/kube-aggregator v0.28.12\n\tk8s.io/kube-controller-manager => k8s.io/kube-controller-manager v0.28.12\n\tk8s.io/kube-proxy => k8s.io/kube-proxy v0.28.12\n\tk8s.io/kubectl => k8s.io/kubectl v0.27.16\n\tk8s.io/kubelet => k8s.io/kubelet v0.27.16\n\tk8s.io/legacy-cloud-providers => k8s.io/legacy-cloud-providers v0.27.16\n\tk8s.io/metrics => k8s.io/metrics v0.27.16\n\tk8s.io/mount-utils => k8s.io/mount-utils v0.27.16\n\tk8s.io/pod-security-admission => k8s.io/pod-security-admission v0.27.16\n\tk8s.io/sample-apiserver => k8s.io/sample-apiserver v0.27.16\n\tk8s.io/sample-cli-plugin => k8s.io/sample-cli-plugin v0.27.16\n\tk8s.io/sample-controller => k8s.io/sample-controller v0.27.16\n\t// Updated to fix types.AuthConfig compatibility with Docker v27.x\n\toras.land/oras-go => oras.land/oras-go v1.2.5\n)\n"
  },
  {
    "path": "go.sum",
    "content": "github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24 h1:bvDV9vkmnHYOMsOr4WLk+Vo07yKIzd94sVoIqshQ4bU=\ngithub.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24/go.mod h1:8o94RPi1/7XTJvwPpRSzSUedZrtlirdB3r9Z20bi2f8=\ngithub.com/BurntSushi/toml v1.0.0/go.mod h1:CxXYINrC8qIiEnFrOxCa7Jy5BFHlXnUU2pbicEuybxQ=\ngithub.com/BurntSushi/toml v1.4.0 h1:kuoIxZQy2WRRk1pttg9asf+WVv6tWQuBNVmK8+nqPr0=\ngithub.com/BurntSushi/toml v1.4.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho=\ngithub.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY=\ngithub.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU=\ngithub.com/Microsoft/hcsshim v0.12.9 h1:2zJy5KA+l0loz1HzEGqyNnjd3fyZA31ZBCGKacp6lLg=\ngithub.com/Microsoft/hcsshim v0.12.9/go.mod h1:fJ0gkFAna6ukt0bLdKB8djt4XIJhF/vEPuoIWYVvZ8Y=\ngithub.com/Shopify/logrus-bugsnag v0.0.0-20171204204709-577dee27f20d h1:UrqY+r/OJnIp5u0s1SbQ8dVfLCZJsnvazdBP5hS4iRs=\ngithub.com/Shopify/logrus-bugsnag v0.0.0-20171204204709-577dee27f20d/go.mod h1:HI8ITrYtUY+O+ZhtlqUnD8+KwNPOyugEhfP9fdUIaEQ=\ngithub.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=\ngithub.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=\ngithub.com/alessio/shellescape v1.4.1 h1:V7yhSDDn8LP4lc4jS8pFkt0zCnzVJlG5JXy9BVKJUX0=\ngithub.com/alessio/shellescape v1.4.1/go.mod h1:PZAiSCk0LJaZkiCSkPv8qIobYglO3FPpyFjDCtHLS30=\ngithub.com/aquasecurity/trivy v0.51.2 h1:C5rb5TsEiwGEKQzKc4f2qsJVd5uG+C2aMx+zF+7KOWY=\ngithub.com/aquasecurity/trivy v0.51.2/go.mod h1:/O2z/ySpHOiVOpiPGwZny3EFs/7Jis6et0nn6mlf6n4=\ngithub.com/aquasecurity/trivy-db v0.0.0-20241209111357-8c398f13db0e h1:O5j5SeCNBrXApgBTOobO06q4LMxJxIhcSGE7H6Y154E=\ngithub.com/aquasecurity/trivy-db v0.0.0-20241209111357-8c398f13db0e/go.mod h1:gS8VhlNxhraiq60BBnJw9kGtjeMspQ9E8pX24jCL4jg=\ngithub.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=\ngithub.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=\ngithub.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=\ngithub.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=\ngithub.com/bshuster-repo/logrus-logstash-hook v1.0.0 h1:e+C0SB5R1pu//O4MQ3f9cFuPGoOVeF2fE4Og9otCc70=\ngithub.com/bshuster-repo/logrus-logstash-hook v1.0.0/go.mod h1:zsTqEiSzDgAa/8GZR7E1qaXrhYNDKBYy5/dWPTIflbk=\ngithub.com/bugsnag/bugsnag-go v0.0.0-20141110184014-b1d153021fcd h1:rFt+Y/IK1aEZkEHchZRSq9OQbsSzIT/OrI8YFFmRIng=\ngithub.com/bugsnag/bugsnag-go v0.0.0-20141110184014-b1d153021fcd/go.mod h1:2oa8nejYd4cQ/b0hMIopN0lCRxU0bueqREvZLWFrtK8=\ngithub.com/bugsnag/osext v0.0.0-20130617224835-0dd3f918b21b h1:otBG+dV+YK+Soembjv71DPz3uX/V/6MMlSyD9JBQ6kQ=\ngithub.com/bugsnag/osext v0.0.0-20130617224835-0dd3f918b21b/go.mod h1:obH5gd0BsqsP2LwDJ9aOkm/6J86V6lyAXCoQWGw3K50=\ngithub.com/bugsnag/panicwrap v0.0.0-20151223152923-e2c28503fcd0 h1:nvj0OLI3YqYXer/kZD8Ri1aaunCxIEsOst1BVJswV0o=\ngithub.com/bugsnag/panicwrap v0.0.0-20151223152923-e2c28503fcd0/go.mod h1:D/8v3kj0zr8ZAKg1AQ6crr+5VwKN5eIywRkfhyM/+dE=\ngithub.com/cenkalti/backoff/v5 v5.0.2 h1:rIfFVxEf1QsI7E1ZHfp/B4DF/6QBAUhmgkxc0H7Zss8=\ngithub.com/cenkalti/backoff/v5 v5.0.2/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw=\ngithub.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=\ngithub.com/cespare/xxhash/v2 v2.3.0/go.mod 
h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=\ngithub.com/containerd/cgroups v1.1.0 h1:v8rEWFl6EoqHB+swVNjVoCJE8o3jX7e8nqBGPLaDFBM=\ngithub.com/containerd/cgroups/v3 v3.0.3 h1:S5ByHZ/h9PMe5IOQoN7E+nMc2UcLEM/V48DGDJ9kip0=\ngithub.com/containerd/cgroups/v3 v3.0.3/go.mod h1:8HBe7V3aWGLFPd/k03swSIsGjZhHI2WzJmticMgVuz0=\ngithub.com/containerd/containerd v1.7.29 h1:90fWABQsaN9mJhGkoVnuzEY+o1XDPbg9BTC9QTAHnuE=\ngithub.com/containerd/containerd v1.7.29/go.mod h1:azUkWcOvHrWvaiUjSQH0fjzuHIwSPg1WL5PshGP4Szs=\ngithub.com/containerd/continuity v0.4.4 h1:/fNVfTJ7wIl/YPMHjf+5H32uFhl63JucB34PlCpMKII=\ngithub.com/containerd/continuity v0.4.4/go.mod h1:/lNJvtJKUQStBzpVQ1+rasXO1LAWtUQssk28EZvJ3nE=\ngithub.com/containerd/errdefs v0.3.0 h1:FSZgGOeK4yuT/+DnF07/Olde/q4KBoMsaamhXxIMDp4=\ngithub.com/containerd/errdefs v0.3.0/go.mod h1:+YBYIdtsnF4Iw6nWZhJcqGSg/dwvV7tyJ/kCkyJ2k+M=\ngithub.com/containerd/errdefs/pkg v0.3.0 h1:9IKJ06FvyNlexW690DXuQNx2KA2cUJXx151Xdx3ZPPE=\ngithub.com/containerd/errdefs/pkg v0.3.0/go.mod h1:NJw6s9HwNuRhnjJhM7pylWwMyAkmCQvQ4GpJHEqRLVk=\ngithub.com/containerd/log v0.1.0 h1:TCJt7ioM2cr/tfR8GPbGf9/VRAX8D2B4PjzCpfX540I=\ngithub.com/containerd/log v0.1.0/go.mod h1:VRRf09a7mHDIRezVKTRCrOq78v577GXq3bSa3EhrzVo=\ngithub.com/containerd/platforms v0.2.1 h1:zvwtM3rz2YHPQsF2CHYM8+KtB5dvhISiXh5ZpSBQv6A=\ngithub.com/containerd/platforms v0.2.1/go.mod h1:XHCb+2/hzowdiut9rkudds9bE5yJ7npe7dG/wG+uFPw=\ngithub.com/containerd/typeurl v1.0.2 h1:Chlt8zIieDbzQFzXzAeBEF92KhExuE4p9p92/QmY7aY=\ngithub.com/containerd/typeurl/v2 v2.2.0 h1:6NBDbQzr7I5LHgp34xAXYF5DOTQDn05X58lsPEmzLso=\ngithub.com/containerd/typeurl/v2 v2.2.0/go.mod h1:8XOOxnyatxSWuG8OfsZXVnAF4iZfedjS/8UHSPJnX4g=\ngithub.com/cpuguy83/go-md2man/v2 v2.0.1/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=\ngithub.com/cpuguy83/go-md2man/v2 v2.0.4/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=\ngithub.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=\ngithub.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=\ngithub.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=\ngithub.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=\ngithub.com/distribution/distribution/v3 v3.0.0-20221208165359-362910506bc2 h1:aBfCb7iqHmDEIp6fBvC/hQUddQfg+3qdYjwzaiP9Hnc=\ngithub.com/distribution/distribution/v3 v3.0.0-20221208165359-362910506bc2/go.mod h1:WHNsWjnIn2V1LYOrME7e8KxSeKunYHsxEm4am0BUtcI=\ngithub.com/distribution/reference v0.6.0 h1:0IXCQ5g4/QMHHkarYzh5l+u8T3t73zM5QvfrDyIgxBk=\ngithub.com/distribution/reference v0.6.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E=\ngithub.com/docker/cli v27.3.1+incompatible h1:qEGdFBF3Xu6SCvCYhc7CzaQTlBmqDuzxPDpigSyeKQQ=\ngithub.com/docker/cli v27.3.1+incompatible/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=\ngithub.com/docker/distribution v2.8.3+incompatible h1:AtKxIZ36LoNK51+Z6RpzLpddBirtxJnzDrHLEKxTAYk=\ngithub.com/docker/distribution v2.8.3+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=\ngithub.com/docker/docker v28.0.0+incompatible h1:Olh0KS820sJ7nPsBKChVhk5pzqcwDR15fumfAd/p9hM=\ngithub.com/docker/docker v28.0.0+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=\ngithub.com/docker/docker-credential-helpers v0.8.2 h1:bX3YxiGzFP5sOXWc3bTPEXdEaZSeVMrFgOr3T+zrFAo=\ngithub.com/docker/docker-credential-helpers v0.8.2/go.mod 
h1:P3ci7E3lwkZg6XiHdRKft1KckHiO9a2rNtyFbZ/ry9M=\ngithub.com/docker/go-connections v0.5.0 h1:USnMq7hx7gwdVZq1L49hLXaFtUdTADjXGp+uj1Br63c=\ngithub.com/docker/go-connections v0.5.0/go.mod h1:ov60Kzw0kKElRwhNs9UlUHAE/F9Fe6GLaXnqyDdmEXc=\ngithub.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c h1:+pKlWGMw7gf6bQ+oDZB4KHQFypsfjYlq/C4rfL7D3g8=\ngithub.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c/go.mod h1:Uw6UezgYA44ePAFQYUehOuCzmy5zmg/+nl2ZfMWGkpA=\ngithub.com/docker/go-metrics v0.0.1 h1:AgB/0SvBxihN0X8OR4SjsblXkbMvalQ8cjmtKQ2rQV8=\ngithub.com/docker/go-metrics v0.0.1/go.mod h1:cG1hvH2utMXtqgqqYE9plW6lDxS3/5ayHzueweSI3Vw=\ngithub.com/docker/libtrust v0.0.0-20160708172513-aabc10ec26b7 h1:UhxFibDNY/bfvqU5CAUmr9zpesgbU6SWc8/B4mflAE4=\ngithub.com/docker/libtrust v0.0.0-20160708172513-aabc10ec26b7/go.mod h1:cyGadeNEkKy96OOhEzfZl+yxihPEzKnqJwvfuSUqbZE=\ngithub.com/emicklei/go-restful/v3 v3.11.0 h1:rAQeMHw1c7zTmncogyy8VvRZwtkmkZ4FxERmMY4rD+g=\ngithub.com/emicklei/go-restful/v3 v3.11.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=\ngithub.com/evanphx/json-patch v5.7.0+incompatible h1:vgGkfT/9f8zE6tvSCe74nfpAVDQ2tG6yudJd8LBksgI=\ngithub.com/evanphx/json-patch v5.7.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=\ngithub.com/evanphx/json-patch/v5 v5.6.0/go.mod h1:G79N1coSVB93tBe7j6PhzjmR3/2VvlbKOFpnXhI9Bw4=\ngithub.com/evanphx/json-patch/v5 v5.8.0 h1:lRj6N9Nci7MvzrXuX6HFzU8XjmhPiXPlsKEy1u0KQro=\ngithub.com/evanphx/json-patch/v5 v5.8.0/go.mod h1:VNkHZ/282BpEyt/tObQO8s5CMPmYYq14uClGH4abBuQ=\ngithub.com/fatih/color v1.16.0 h1:zmkK9Ngbjj+K0yRhTVONQh1p/HknKYSlNT+vZCzyokM=\ngithub.com/fatih/color v1.16.0/go.mod h1:fL2Sau1YI5c0pdGEVCbKQbLXB6edEj1ZgiY4NijnWvE=\ngithub.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=\ngithub.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=\ngithub.com/fsnotify/fsnotify v1.7.0 h1:8JEhPFa5W2WU7YfeZzPNqzMP6Lwt7L2715Ggo0nosvA=\ngithub.com/fsnotify/fsnotify v1.7.0/go.mod h1:40Bi/Hjc2AVfZrqy+aj+yEI+/bRxZnMJyTJwOpGvigM=\ngithub.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=\ngithub.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=\ngithub.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=\ngithub.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=\ngithub.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=\ngithub.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=\ngithub.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=\ngithub.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=\ngithub.com/go-logr/zapr v1.3.0 h1:XGdV8XW8zdwFiwOA2Dryh1gj2KRQyOOoNmBy4EplIcQ=\ngithub.com/go-logr/zapr v1.3.0/go.mod h1:YKepepNBd1u/oyhd/yQmtjVXmm9uML4IXUgMOwR8/Gg=\ngithub.com/go-openapi/jsonpointer v0.21.0 h1:YgdVicSA9vH5RiHs9TZW5oyafXZFc6+2Vc1rr/O9oNQ=\ngithub.com/go-openapi/jsonpointer v0.21.0/go.mod h1:IUyH9l/+uyhIYQ/PXVA41Rexl+kOkAPDdXEYns6fzUY=\ngithub.com/go-openapi/jsonreference v0.21.0 h1:Rs+Y7hSXT83Jacb7kFyjn4ijOuVGSvOdF2+tg1TRrwQ=\ngithub.com/go-openapi/jsonreference v0.21.0/go.mod h1:LmZmgsrTkVg9LG4EaHeY8cBDslNPMo06cago5JNLkm4=\ngithub.com/go-openapi/swag v0.23.0 h1:vsEVJDUo2hPJ2tu0/Xc+4noaxyEffXNIs3cOULZ+GrE=\ngithub.com/go-openapi/swag v0.23.0/go.mod h1:esZ8ITTYEsH1V2trKHjAN8Ai7xHb8RV+YSZ577vPjgQ=\ngithub.com/go-stack/stack v1.8.0/go.mod 
h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=\ngithub.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572 h1:tfuBGBXKqDEevZMzYi5KSi8KkcZtzBcTgAUUtapy0OI=\ngithub.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572/go.mod h1:9Pwr4B2jHnOSGXyyzV8ROjYa2ojvAY6HCGYYfMoC3Ls=\ngithub.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=\ngithub.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=\ngithub.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=\ngithub.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE=\ngithub.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=\ngithub.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=\ngithub.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=\ngithub.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=\ngithub.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=\ngithub.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=\ngithub.com/gomodule/redigo v1.8.2 h1:H5XSIre1MB5NbPYFp+i1NBbb5qN1W8Y8YAQoAYbkm8k=\ngithub.com/gomodule/redigo v1.8.2/go.mod h1:P9dn9mFrCBvWhGE1wpxx6fgq7BAeLBk+UUUzlpkBYO0=\ngithub.com/google/gnostic-models v0.6.9-0.20230804172637-c7be7c783f49 h1:0VpGH+cDhbDtdcweoyCVsF3fhN8kejK6rFe/2FFX2nU=\ngithub.com/google/gnostic-models v0.6.9-0.20230804172637-c7be7c783f49/go.mod h1:BkkQ4L1KS1xMt2aWSPStnn55ChGC0DPOn2FQYj+f25M=\ngithub.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=\ngithub.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=\ngithub.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=\ngithub.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=\ngithub.com/google/go-containerregistry v0.20.2 h1:B1wPJ1SN/S7pB+ZAimcciVD+r+yV/l/DSArMxlbwseo=\ngithub.com/google/go-containerregistry v0.20.2/go.mod h1:z38EKdKh4h7IP2gSfUUqEvalZBqs6AoLeWfUy34nQC8=\ngithub.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=\ngithub.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=\ngithub.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=\ngithub.com/google/pprof v0.0.0-20230406165453-00490a63f317 h1:hFhpt7CTmR3DX+b4R19ydQFtofxT0Sv3QsKNMVQYTMQ=\ngithub.com/google/pprof v0.0.0-20230406165453-00490a63f317/go.mod h1:79YE0hCXdHag9sBkw2o+N/YnZtTkXi0UT9Nnixa5eYk=\ngithub.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=\ngithub.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=\ngithub.com/gorilla/handlers v1.5.1 h1:9lRY6j8DEeeBT10CvO9hGW0gmky0BprnvDI5vfhUHH4=\ngithub.com/gorilla/handlers v1.5.1/go.mod h1:t8XrUpc4KVXb7HGyJ4/cEnwQiaxrX/hz1Zv/4g96P1Q=\ngithub.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY=\ngithub.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ=\ngithub.com/grpc-ecosystem/grpc-gateway/v2 v2.27.1 h1:X5VWvz21y3gzm9Nw/kaUeku/1+uBhcekkmy4IkffJww=\ngithub.com/grpc-ecosystem/grpc-gateway/v2 v2.27.1/go.mod h1:Zanoh4+gvIgluNqcfMVTJueD4wSS5hT7zTt4Mrutd90=\ngithub.com/hashicorp/golang-lru v0.6.0 h1:uL2shRDx7RTrOrTCUZEGP/wJUFiUI8QT6E7z5o8jga4=\ngithub.com/hashicorp/golang-lru v0.6.0/go.mod 
h1:iADmTwqILo4mZ8BN3D2Q6+9jd8WM5uGBxy+E8yxSoD4=\ngithub.com/imdario/mergo v0.3.16 h1:wwQJbIsHYGMUyLSPrEq1CT16AhnhNJQ51+4fdHUnCl4=\ngithub.com/imdario/mergo v0.3.16/go.mod h1:WBLT9ZmE3lPoWsEzCh9LPo3TiwVN+ZKEjmz+hD27ysY=\ngithub.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=\ngithub.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=\ngithub.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=\ngithub.com/jessevdk/go-flags v1.4.0/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI=\ngithub.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=\ngithub.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=\ngithub.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=\ngithub.com/json-iterator/go v1.1.7/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=\ngithub.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=\ngithub.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=\ngithub.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=\ngithub.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=\ngithub.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=\ngithub.com/klauspost/compress v1.17.11 h1:In6xLpyWOi1+C7tXUUWv2ot1QvBjxevKAaI6IXrJmUc=\ngithub.com/klauspost/compress v1.17.11/go.mod h1:pMDklpSncoRMuLFrf1W9Ss9KT+0rH90U12bZKk7uwG0=\ngithub.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=\ngithub.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=\ngithub.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=\ngithub.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=\ngithub.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=\ngithub.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=\ngithub.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=\ngithub.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=\ngithub.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0=\ngithub.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=\ngithub.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=\ngithub.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=\ngithub.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94=\ngithub.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=\ngithub.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=\ngithub.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=\ngithub.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=\ngithub.com/moby/locker v1.0.1 h1:fOXqR41zeveg4fFODix+1Ch4mj/gT0NE1XJbp/epuBg=\ngithub.com/moby/locker v1.0.1/go.mod h1:S7SDdo5zpBK84bzzVlKr2V0hz+7x9hWbYC/kq7oQppc=\ngithub.com/moby/sys/mountinfo v0.6.2 h1:BzJjoreD5BMFNmD9Rus6gdd1pLuecOFPt8wC+Vygl78=\ngithub.com/moby/sys/mountinfo v0.6.2/go.mod h1:IJb6JQeOklcdMU9F5xQ8ZALD+CUr5VlGpwtX+VE0rpI=\ngithub.com/moby/sys/userns v0.1.0 
h1:tVLXkFOxVu9A64/yh59slHVv9ahO9UIev4JZusOLG/g=\ngithub.com/moby/sys/userns v0.1.0/go.mod h1:IHUYgu/kao6N8YZlp9Cf444ySSvCmDlmzUcYfDHOl28=\ngithub.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=\ngithub.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=\ngithub.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=\ngithub.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=\ngithub.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=\ngithub.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=\ngithub.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=\ngithub.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=\ngithub.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=\ngithub.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=\ngithub.com/onsi/ginkgo/v2 v2.14.0 h1:vSmGj2Z5YPb9JwCWT6z6ihcUvDhuXLc3sJiqd3jMKAY=\ngithub.com/onsi/ginkgo/v2 v2.14.0/go.mod h1:JkUdW7JkN0V6rFvsHcJ478egV3XH9NxpD27Hal/PhZw=\ngithub.com/onsi/gomega v1.30.0 h1:hvMK7xYz4D3HapigLTeGdId/NcfQx1VHMJc60ew99+8=\ngithub.com/onsi/gomega v1.30.0/go.mod h1:9sxs+SwGrKI0+PWe4Fxa9tFQQBG5xSsSbMXOI8PPpoQ=\ngithub.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=\ngithub.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=\ngithub.com/opencontainers/image-spec v1.1.0 h1:8SG7/vwALn54lVB/0yZ/MMwhFrPYtpEHQb2IpWsCzug=\ngithub.com/opencontainers/image-spec v1.1.0/go.mod h1:W4s4sFTMaBeK1BQLXbG4AdM2szdn85PY75RI83NrTrM=\ngithub.com/package-url/packageurl-go v0.1.2 h1:0H2DQt6DHd/NeRlVwW4EZ4oEI6Bn40XlNPRqegcxuo4=\ngithub.com/package-url/packageurl-go v0.1.2/go.mod h1:uQd4a7Rh3ZsVg5j0lNyAfyxIeGde9yrlhjF78GzeW0c=\ngithub.com/pelletier/go-toml v1.9.4/go.mod h1:u1nR/EPcESfeI/szUZKdtJ0xRNbUoANCkoOuaOx1Y+c=\ngithub.com/pelletier/go-toml v1.9.5 h1:4yBQzkHv+7BHq2PQUZF3Mx0IYxG7LsP222s7Agd3ve8=\ngithub.com/pelletier/go-toml v1.9.5/go.mod h1:u1nR/EPcESfeI/szUZKdtJ0xRNbUoANCkoOuaOx1Y+c=\ngithub.com/phayes/freeport v0.0.0-20220201140144-74d24b5ae9f5 h1:Ii+DKncOVM8Cu1Hc+ETb5K+23HdAMvESYE3ZJ5b5cMI=\ngithub.com/phayes/freeport v0.0.0-20220201140144-74d24b5ae9f5/go.mod h1:iIss55rKnNBTvrwdmkUpLnDpZoAHvWaiq5+iMmen4AE=\ngithub.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=\ngithub.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=\ngithub.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=\ngithub.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=\ngithub.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=\ngithub.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=\ngithub.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=\ngithub.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=\ngithub.com/prometheus/client_golang v1.0.0/go.mod 
h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=\ngithub.com/prometheus/client_golang v1.1.0/go.mod h1:I1FGZT9+L76gKKOs5djB6ezCbFQP1xR9D75/vuwEF3g=\ngithub.com/prometheus/client_golang v1.20.5 h1:cxppBPuYhUnsO6yo/aoRol4L7q7UFfdm+bR9r+8l63Y=\ngithub.com/prometheus/client_golang v1.20.5/go.mod h1:PIEt8X02hGcP8JWbeHyeZ53Y/jReSnHgO035n//V5WE=\ngithub.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=\ngithub.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=\ngithub.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=\ngithub.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=\ngithub.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=\ngithub.com/prometheus/common v0.6.0/go.mod h1:eBmuwkDJBwy6iBfxCBob6t6dR6ENT/y+J+Zk0j9GMYc=\ngithub.com/prometheus/common v0.55.0 h1:KEi6DK7lXW/m7Ig5i47x0vRzuBsHuvJdi5ee6Y3G1dc=\ngithub.com/prometheus/common v0.55.0/go.mod h1:2SECS4xJG1kd8XF9IcM1gMX6510RAEL65zxzNImwdc8=\ngithub.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=\ngithub.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=\ngithub.com/prometheus/procfs v0.0.3/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ=\ngithub.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc=\ngithub.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=\ngithub.com/rogpeppe/go-internal v1.13.1 h1:KvO1DLK/DRN07sQ1LQKScxyZJuNnedQ5/wKSR38lUII=\ngithub.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o=\ngithub.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=\ngithub.com/samber/lo v1.47.0 h1:z7RynLwP5nbyRscyvcD043DWYoOcYRv3mV8lBeqOCLc=\ngithub.com/samber/lo v1.47.0/go.mod h1:RmDH9Ct32Qy3gduHQuKJ3gW1fMHAnE/fAzQuf6He5cU=\ngithub.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=\ngithub.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=\ngithub.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=\ngithub.com/spf13/cobra v1.4.0/go.mod h1:Wo4iy3BUC+X2Fybo0PDqwJIv3dNRiZLHQymsfxlB84g=\ngithub.com/spf13/cobra v1.8.1 h1:e5/vxKd/rZsfSJMUX1agtjeTDf+qv1/JdBF8gg5k9ZM=\ngithub.com/spf13/cobra v1.8.1/go.mod h1:wHxEcudfqmLYa8iTfL+OuZPbBZkmvliBWKIezN3kD9Y=\ngithub.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=\ngithub.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=\ngithub.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=\ngithub.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=\ngithub.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=\ngithub.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=\ngithub.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=\ngithub.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=\ngithub.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=\ngithub.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=\ngithub.com/vladimirvivien/gexe v0.1.1 
h1:2A0SBaOSKH+cwLVdt6H+KkHZotZWRNLlWygANGw5DxE=\ngithub.com/vladimirvivien/gexe v0.1.1/go.mod h1:LHQL00w/7gDUKIak24n801ABp8C+ni6eBht9vGVst8w=\ngithub.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=\ngithub.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=\ngithub.com/yvasiyarov/go-metrics v0.0.0-20140926110328-57bccd1ccd43 h1:+lm10QQTNSBd8DVTNGHx7o/IKu9HYDvLMffDhbyLccI=\ngithub.com/yvasiyarov/go-metrics v0.0.0-20140926110328-57bccd1ccd43/go.mod h1:aX5oPXxHm3bOH+xeAttToC8pqch2ScQN/JoXYupl6xs=\ngithub.com/yvasiyarov/gorelic v0.0.0-20141212073537-a9bba5b9ab50 h1:hlE8//ciYMztlGpl/VA+Zm1AcTPHYkHJPbHqE6WJUXE=\ngithub.com/yvasiyarov/gorelic v0.0.0-20141212073537-a9bba5b9ab50/go.mod h1:NUSPSUX/bi6SeDMUh6brw0nXpxHnc96TguQh0+r/ssA=\ngithub.com/yvasiyarov/newrelic_platform_go v0.0.0-20140908184405-b21fdbd4370f h1:ERexzlUfuTvpE74urLSbIQW0Z/6hF9t8U4NsJLaioAY=\ngithub.com/yvasiyarov/newrelic_platform_go v0.0.0-20140908184405-b21fdbd4370f/go.mod h1:GlGEuHIJweS1mbCqG+7vt2nvWLzLLnRHbXz5JKd/Qbg=\ngo.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0=\ngo.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo=\ngo.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=\ngo.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=\ngo.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.53.0 h1:4K4tsIXefpVJtvA/8srF4V4y0akAoPHkIslgAkjixJA=\ngo.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.53.0/go.mod h1:jjdQuTGVsXV4vSs+CJ2qYDeDPf9yIJV23qlIzBm73Vg=\ngo.opentelemetry.io/otel v1.37.0 h1:9zhNfelUvx0KBfu/gb+ZgeAfAgtWrfHJZcAqFC228wQ=\ngo.opentelemetry.io/otel v1.37.0/go.mod h1:ehE/umFRLnuLa/vSccNq9oS1ErUlkkK71gMcN34UG8I=\ngo.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.37.0 h1:9PgnL3QNlj10uGxExowIDIZu66aVBwWhXmbOp1pa6RA=\ngo.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.37.0/go.mod h1:0ineDcLELf6JmKfuo0wvvhAVMuxWFYvkTin2iV4ydPQ=\ngo.opentelemetry.io/otel/metric v1.37.0 h1:mvwbQS5m0tbmqML4NqK+e3aDiO02vsf/WgbsdpcPoZE=\ngo.opentelemetry.io/otel/metric v1.37.0/go.mod h1:04wGrZurHYKOc+RKeye86GwKiTb9FKm1WHtO+4EVr2E=\ngo.opentelemetry.io/otel/sdk v1.37.0 h1:ItB0QUqnjesGRvNcmAcU0LyvkVyGJ2xftD29bWdDvKI=\ngo.opentelemetry.io/otel/sdk v1.37.0/go.mod h1:VredYzxUvuo2q3WRcDnKDjbdvmO0sCzOvVAiY+yUkAg=\ngo.opentelemetry.io/otel/sdk/metric v1.37.0 h1:90lI228XrB9jCMuSdA0673aubgRobVZFhbjxHHspCPc=\ngo.opentelemetry.io/otel/sdk/metric v1.37.0/go.mod h1:cNen4ZWfiD37l5NhS+Keb5RXVWZWpRE+9WyVCpbo5ps=\ngo.opentelemetry.io/otel/trace v1.37.0 h1:HLdcFNbRQBE2imdSEgm/kwqmQj1Or1l/7bW6mxVK7z4=\ngo.opentelemetry.io/otel/trace v1.37.0/go.mod h1:TlgrlQ+PtQO5XFerSPUYG0JSgGyryXewPGyayAWSBS0=\ngo.opentelemetry.io/proto/otlp v1.7.0 h1:jX1VolD6nHuFzOYso2E73H85i92Mv8JQYk0K9vz09os=\ngo.opentelemetry.io/proto/otlp v1.7.0/go.mod h1:fSKjH6YJ7HDlwzltzyMj036AJ3ejJLCgCSHGj4efDDo=\ngo.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=\ngo.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=\ngo.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=\ngo.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=\ngo.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8=\ngo.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=\ngolang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod 
h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=\ngolang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=\ngolang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=\ngolang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=\ngolang.org/x/crypto v0.45.0 h1:jMBrvKuj23MTlT0bQEOBcAE0mjg8mK9RXFhRH6nyF3Q=\ngolang.org/x/crypto v0.45.0/go.mod h1:XTGrrkGJve7CYK7J8PEww4aY7gM3qMCElcJQ8n8JdX4=\ngolang.org/x/exp v0.0.0-20241009180824-f66d83c29e7c h1:7dEasQXItcW1xKJ2+gg5VOiBnqWrJc+rq0DPKyvvdbY=\ngolang.org/x/exp v0.0.0-20241009180824-f66d83c29e7c/go.mod h1:NQtJDoLvd6faHhE7m4T/1IY708gDefGGjR/iUW8yQQ8=\ngolang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=\ngolang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=\ngolang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=\ngolang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=\ngolang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=\ngolang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=\ngolang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=\ngolang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=\ngolang.org/x/net v0.47.0 h1:Mx+4dIFzqraBXUugkia1OOvlD6LemFo1ALMHjrXDOhY=\ngolang.org/x/net v0.47.0/go.mod h1:/jNxtkgq5yWUGYkaZGqo27cfGZ1c5Nen03aYrrKpVRU=\ngolang.org/x/oauth2 v0.30.0 h1:dnDm7JmhM45NNpd8FDDeLhK6FwqbOf4MLCM9zb1BOHI=\ngolang.org/x/oauth2 v0.30.0/go.mod h1:B++QgG3ZKulg6sRPGD/mqlHQs5rB3Ml9erfeDY7xKlU=\ngolang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=\ngolang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=\ngolang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=\ngolang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=\ngolang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=\ngolang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=\ngolang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=\ngolang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=\ngolang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=\ngolang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=\ngolang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=\ngolang.org/x/sys v0.0.0-20190801041406-cbf593c0f2f3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=\ngolang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=\ngolang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=\ngolang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=\ngolang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod 
h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=\ngolang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=\ngolang.org/x/sys v0.38.0 h1:3yZWxaJjBmCWXqhN1qh02AkOnCQ1poK6oF+a7xWL6Gc=\ngolang.org/x/sys v0.38.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=\ngolang.org/x/term v0.37.0 h1:8EGAD0qCmHYZg6J17DvsMy9/wJ7/D/4pV/wfnld5lTU=\ngolang.org/x/term v0.37.0/go.mod h1:5pB4lxRNYYVZuTLmy8oR2BH8dflOR+IbTYFD8fi3254=\ngolang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=\ngolang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=\ngolang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM=\ngolang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM=\ngolang.org/x/time v0.12.0 h1:ScB/8o8olJvc+CQPWrK3fPZNfh7qgwCrY0zJmoEQLSE=\ngolang.org/x/time v0.12.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=\ngolang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=\ngolang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=\ngolang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=\ngolang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=\ngolang.org/x/tools v0.38.0 h1:Hx2Xv8hISq8Lm16jvBZ2VQf+RLmbd7wVUsALibYI/IQ=\ngolang.org/x/tools v0.38.0/go.mod h1:yEsQ/d/YK8cjh0L6rZlY8tgtlKiBNTL14pGDJPJpYQs=\ngolang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=\ngolang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=\ngolang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=\ngolang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=\ngolang.org/x/xerrors v0.0.0-20231012003039-104605ab7028 h1:+cNy6SZtPcJQH3LJVLOSmiC7MMxXNOb3PU/VUEz+EhU=\ngolang.org/x/xerrors v0.0.0-20231012003039-104605ab7028/go.mod h1:NDW/Ps6MPRej6fsCIbMTohpP40sJ/P/vI1MoTEGwX90=\ngomodules.xyz/jsonpatch/v2 v2.4.0 h1:Ci3iUJyx9UeRx7CeFN8ARgGbkESwJK+KB9lLcWxY/Zw=\ngomodules.xyz/jsonpatch/v2 v2.4.0/go.mod h1:AH3dM2RI6uoBZxn3LVrfvJ3E0/9dG4cSrbuBJT4moAY=\ngoogle.golang.org/genproto/googleapis/api v0.0.0-20250603155806-513f23925822 h1:oWVWY3NzT7KJppx2UKhKmzPq4SRe0LdCijVRwvGeikY=\ngoogle.golang.org/genproto/googleapis/api v0.0.0-20250603155806-513f23925822/go.mod h1:h3c4v36UTKzUiuaOKQ6gr3S+0hovBtUrXzTG/i3+XEc=\ngoogle.golang.org/genproto/googleapis/rpc v0.0.0-20250603155806-513f23925822 h1:fc6jSaCT0vBduLYZHYrBBNY4dsWuvgyff9noRNDdBeE=\ngoogle.golang.org/genproto/googleapis/rpc v0.0.0-20250603155806-513f23925822/go.mod h1:qQ0YXyHHx3XkvlzUtpXDkS29lDSafHMZBAZDc03LQ3A=\ngoogle.golang.org/grpc v1.73.1 h1:4fUIxjPNPmuxBHa5OZH4nBgi6pXo1o9rKSqzJF/VrHs=\ngoogle.golang.org/grpc v1.73.1/go.mod h1:50sbHOUqWoCQGI8V2HQLJM0B+LMlIUjNSZmow7EVBQc=\ngoogle.golang.org/protobuf v1.36.6 h1:z1NpPI8ku2WgiWnf+t9wTPsn6eP1L7ksHUlkfLvd9xY=\ngoogle.golang.org/protobuf v1.36.6/go.mod h1:jduwjTPXsFjZGTmRluh+L6NjiWu7pchiJ2/5YcXBHnY=\ngopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=\ngopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=\ngopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=\ngopkg.in/check.v1 
v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=\ngopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=\ngopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=\ngopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=\ngopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=\ngopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=\ngopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=\ngopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=\ngopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=\ngopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=\ngotest.tools/v3 v3.4.0 h1:ZazjZUfuVeZGLAmlKKuyv3IKP5orXcwtOwDQH6YVr6o=\ngotest.tools/v3 v3.4.0/go.mod h1:CtbdzLSsqVhDgMtKsx03ird5YTGB3ar27v0u/yKBW5g=\nk8s.io/api v0.28.12 h1:C2hpsaso18pqn0Dmkfnbv/YCctozTC3KGGuZ6bF7zhQ=\nk8s.io/api v0.28.12/go.mod h1:qjswI+whxvf9LAKD4sEYHfy+WgHGWeH+H5sCRQMwZAQ=\nk8s.io/apiextensions-apiserver v0.28.12 h1:6GA64rylk5q0mbXfHHFVgfL1jx/4p6RU+Y+ni2DUuZc=\nk8s.io/apiextensions-apiserver v0.28.12/go.mod h1:Len29ySvb/fnrXvioTxg2l6iFi97B53Bm3/jBMBllCE=\nk8s.io/apimachinery v0.28.12 h1:VepMEVOi9o7L/4wMAXJq+3BK9tqBIeerTB+HSOTKeo0=\nk8s.io/apimachinery v0.28.12/go.mod h1:zUG757HaKs6Dc3iGtKjzIpBfqTM4yiRsEe3/E7NX15o=\nk8s.io/client-go v0.28.12 h1:li7iRPRQF3vDki6gTxT/kXWJvw3BkJSdjVPVhDTZQec=\nk8s.io/client-go v0.28.12/go.mod h1:yEzH2Z+nEGlrnKyHJWcJsbOr5tGdIj04dj1TVQOg0wE=\nk8s.io/component-base v0.28.12 h1:ZNq6QFFGCPjaAzWqYHaQRoAY5seoK3vP0pZOjgxOzNc=\nk8s.io/component-base v0.28.12/go.mod h1:8zI5TmGuHX6R5Lay61Ox7wb+dsEENl0NBmVSiHMQu1c=\nk8s.io/cri-api v0.25.0 h1:INwdXsCDSA/0hGNdPxdE2dQD6ft/5K1EaKXZixvSQxg=\nk8s.io/cri-api v0.25.0/go.mod h1:J1rAyQkSJ2Q6I+aBMOVgg2/cbbebso6FNa0UagiR0kc=\nk8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk=\nk8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE=\nk8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340 h1:BZqlfIlq5YbRMFko6/PM7FjZpUb45WallggurYhKGag=\nk8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340/go.mod h1:yD4MZYeKMBwQKVht279WycxKyM84kkAx2DPrTXaeb98=\nk8s.io/utils v0.0.0-20240711033017-18e509b52bc8 h1:pUdcCO1Lk/tbT5ztQWOBi5HBgbBP1J8+AsQnQCKsi8A=\nk8s.io/utils v0.0.0-20240711033017-18e509b52bc8/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=\noras.land/oras-go v1.2.5 h1:XpYuAwAb0DfQsunIyMfeET92emK8km3W4yEzZvUbsTo=\noras.land/oras-go v1.2.5/go.mod h1:PuAwRShRZCsZb7g8Ar3jKKQR/2A/qN+pkYxIOd/FAoo=\nsigs.k8s.io/controller-runtime v0.16.6 h1:FiXwTuFF5ZJKmozfP2Z0j7dh6kmxP4Ou1KLfxgKKC3I=\nsigs.k8s.io/controller-runtime v0.16.6/go.mod h1:+dQzkZxnylD0u49e0a+7AR+vlibEBaThmPca7lTyUsI=\nsigs.k8s.io/e2e-framework v0.0.8 h1:5cKzNv8d7cAVKrgnYnQxLP0TS1pQbyGNSTk1CI3aV1c=\nsigs.k8s.io/e2e-framework v0.0.8/go.mod h1:fIMqwZHUiENwPxMXlsvwPXW5br6T2paZYrWodd/ZoaA=\nsigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd h1:EDPBXCAspyGV4jQlpZSudPeMmr1bNJefnuqLsRAsHZo=\nsigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd/go.mod h1:B8JuhiUyNFVKdsE8h686QcCxMaH6HrOAZj4vswFpcB0=\nsigs.k8s.io/kind v0.15.0 h1:Fskj234L4hjQlsScCgeYvCBIRt06cjLzc7+kbr1u8Tg=\nsigs.k8s.io/kind v0.15.0/go.mod h1:cKTqagdRyUQmihhBOd+7p43DpOPRn9rHsUC08K1Jbsk=\nsigs.k8s.io/structured-merge-diff/v4 v4.4.1 h1:150L+0vs/8DA78h1u02ooW1/fFq/Lwr+sGiqlzvrtq4=\nsigs.k8s.io/structured-merge-diff/v4 v4.4.1/go.mod 
h1:N8hJocpFajUSSeSJ9bOZ77VzejKZaXsTtZo4/u7Io08=\nsigs.k8s.io/yaml v1.3.0/go.mod h1:GeOyir5tyXNByN85N/dRIT9es5UQNerPYEKK56eTBm8=\nsigs.k8s.io/yaml v1.4.0 h1:Mk1wCc2gy/F0THH0TAp1QYyJNzRm2KCLy3o5ASXVI5E=\nsigs.k8s.io/yaml v1.4.0/go.mod h1:Ejl7/uTz7PSA4eKMyQCUTnhZYNmLIl+5c2lQPGR2BPY=\n"
  },
  {
    "path": "hack/boilerplate.go.txt",
    "content": "/*\nCopyright 2021.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/"
  },
  {
    "path": "hack/go-install.sh",
    "content": "#!/usr/bin/env bash\n# https://github.com/kubernetes-sigs/cluster-api-provider-azure/blob/master/scripts/go_install.sh\n\nset -o errexit\nset -o nounset\nset -o pipefail\n\nif [[ -z \"${1}\" ]]; then\n  echo \"must provide module as first parameter\"\n  exit 1\nfi\n\nif [[ -z \"${2}\" ]]; then\n  echo \"must provide binary name as second parameter\"\n  exit 1\nfi\n\nif [[ -z \"${3}\" ]]; then\n  echo \"must provide version as third parameter\"\n  exit 1\nfi\n\nif [[ -z \"${GOBIN}\" ]]; then\n  echo \"GOBIN is not set. Must set GOBIN to install the bin in a specified directory.\"\n  exit 1\nfi\n\ntmp_dir=$(mktemp -d -t goinstall_XXXXXXXXXX)\nfunction clean {\n  rm -rf \"${tmp_dir}\"\n}\ntrap clean EXIT\n\nrm \"${GOBIN}/${2}\"* || true\n\ncd \"${tmp_dir}\"\n\n# create a new module in the tmp directory\ngo mod init fake/mod\n\n# install the golang module specified as the first argument\ngo install \"${1}@${3}\"\nmv \"${GOBIN}/${2}\" \"${GOBIN}/${2}-${3}\"\nln -sf \"${GOBIN}/${2}-${3}\" \"${GOBIN}/${2}\"\n"
  },
  {
    "path": "hack/rootless_docker.sh",
    "content": "#!/usr/bin/env bash\n\nreadarray -t uids < <(ps -C dockerd -o uid=)\n[[ \"${#uids[@]}\" != \"1\" ]] && exit 1\n\nif [ ${uids[0]} -ne 0 ]; then\n    echo true\n    exit 0\nfi\n\necho false\nexit 1\n"
  },
  {
    "path": "main.go",
    "content": "/*\nCopyright 2021.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\npackage main\n\nimport (\n\t\"context\"\n\t\"flag\"\n\t\"fmt\"\n\t\"net/http\"\n\t_ \"net/http/pprof\"\n\t\"os\"\n\t\"time\"\n\n\t// Import all Kubernetes client auth plugins (e.g. Azure, GCP, OIDC, etc.)\n\t// to ensure that exec-entrypoint and run can make use of them.\n\n\t_ \"k8s.io/client-go/plugin/pkg/client/auth\"\n\t\"k8s.io/utils/inotify\"\n\t\"sigs.k8s.io/yaml\"\n\n\tcorev1 \"k8s.io/api/core/v1\"\n\t\"k8s.io/apimachinery/pkg/conversion\"\n\t\"k8s.io/apimachinery/pkg/fields\"\n\t\"k8s.io/apimachinery/pkg/runtime\"\n\tutilruntime \"k8s.io/apimachinery/pkg/util/runtime\"\n\tclientgoscheme \"k8s.io/client-go/kubernetes/scheme\"\n\tctrl \"sigs.k8s.io/controller-runtime\"\n\t\"sigs.k8s.io/controller-runtime/pkg/cache\"\n\t\"sigs.k8s.io/controller-runtime/pkg/client\"\n\t\"sigs.k8s.io/controller-runtime/pkg/healthz\"\n\t\"sigs.k8s.io/controller-runtime/pkg/metrics/server\"\n\t\"sigs.k8s.io/controller-runtime/pkg/webhook\"\n\n\t\"github.com/eraser-dev/eraser/api/unversioned\"\n\t\"github.com/eraser-dev/eraser/api/unversioned/config\"\n\teraserv1 \"github.com/eraser-dev/eraser/api/v1\"\n\teraserv1alpha1 \"github.com/eraser-dev/eraser/api/v1alpha1\"\n\tv1alpha1Config \"github.com/eraser-dev/eraser/api/v1alpha1/config\"\n\teraserv1alpha2 \"github.com/eraser-dev/eraser/api/v1alpha2\"\n\tv1alpha2Config \"github.com/eraser-dev/eraser/api/v1alpha2/config\"\n\teraserv1alpha3 \"github.com/eraser-dev/eraser/api/v1alpha3\"\n\tv1alpha3Config \"github.com/eraser-dev/eraser/api/v1alpha3/config\"\n\t\"github.com/eraser-dev/eraser/controllers\"\n\t\"github.com/eraser-dev/eraser/pkg/logger\"\n\t\"github.com/eraser-dev/eraser/pkg/utils\"\n\t\"github.com/eraser-dev/eraser/version\"\n\t//+kubebuilder:scaffold:imports\n)\n\nvar (\n\tscheme   = runtime.NewScheme()\n\tsetupLog = ctrl.Log.WithName(\"setup\")\n\n\tfromV1alpha1 = eraserv1alpha1.Convert_v1alpha1_EraserConfig_To_unversioned_EraserConfig\n\tfromV1alpha2 = eraserv1alpha2.Convert_v1alpha2_EraserConfig_To_unversioned_EraserConfig\n\tfromV1alpha3 = eraserv1alpha3.Convert_v1alpha3_EraserConfig_To_unversioned_EraserConfig\n)\n\nfunc init() {\n\tutilruntime.Must(clientgoscheme.AddToScheme(scheme))\n\n\tutilruntime.Must(eraserv1alpha1.AddToScheme(scheme))\n\tutilruntime.Must(eraserv1.AddToScheme(scheme))\n\t//+kubebuilder:scaffold:scheme\n}\n\ntype apiVersion struct {\n\tAPIVersion string `json:\"apiVersion\"`\n}\n\ntype convertFunc[T any] func(*T, *unversioned.EraserConfig, conversion.Scope) error\n\nfunc main() {\n\tctx, cancel := context.WithCancel(context.Background())\n\n\tgo func() {\n\t\t<-ctx.Done()\n\t\tos.Exit(1)\n\t}()\n\n\tvar configFile string\n\tflag.StringVar(&configFile, \"config\", \"\",\n\t\t\"The controller will load its initial configuration from this file. \"+\n\t\t\t\"Omit this flag to use the default configuration values. 
\"+\n\t\t\t\"Command-line flags override configuration from this file.\")\n\tflag.Parse()\n\n\tif err := logger.Configure(); err != nil {\n\t\tsetupLog.Error(err, \"unable to configure logger\")\n\t\tos.Exit(1)\n\t}\n\n\t// these can all be overwritten using EraserConfig.\n\toptions := ctrl.Options{\n\t\tScheme:                 scheme,\n\t\tMetrics:                server.Options{BindAddress: \":8889\"},\n\t\tWebhookServer:          webhook.NewServer(webhook.Options{Port: 9443}),\n\t\tHealthProbeBindAddress: \":8081\",\n\t\tLeaderElection:         false,\n\t\tCache: cache.Options{\n\t\t\tByObject: map[client.Object]cache.ByObject{\n\t\t\t\t// to watch eraser pods\n\t\t\t\t&corev1.Pod{}: {\n\t\t\t\t\tField: fields.OneTermEqualSelector(\"metadata.namespace\", utils.GetNamespace()),\n\t\t\t\t},\n\t\t\t\t// to watch eraser podTemplates\n\t\t\t\t&corev1.PodTemplate{}: {\n\t\t\t\t\tField: fields.OneTermEqualSelector(\"metadata.namespace\", utils.GetNamespace()),\n\t\t\t\t},\n\t\t\t\t// to watch eraser-manager-configs\n\t\t\t\t&corev1.ConfigMap{}: {\n\t\t\t\t\tField: fields.OneTermEqualSelector(\"metadata.namespace\", utils.GetNamespace()),\n\t\t\t\t},\n\t\t\t\t// to watch ImageJobs\n\t\t\t\t&eraserv1.ImageJob{}: {},\n\t\t\t\t// to watch ImageLists\n\t\t\t\t&eraserv1.ImageList{}: {},\n\t\t\t},\n\t\t},\n\t}\n\n\tif configFile == \"\" {\n\t\tsetupLog.Error(fmt.Errorf(\"config file was not supplied\"), \"aborting\")\n\t\tos.Exit(1)\n\t}\n\n\tcfg, err := getConfig(configFile)\n\tif err != nil {\n\t\tsetupLog.Error(err, \"error getting configuration\")\n\t\tos.Exit(1)\n\t}\n\n\tsetupLog.V(1).Info(\"eraser config\",\n\t\t\"manager\", cfg.Manager,\n\t\t\"components\", cfg.Components,\n\t\t\"options\", fmt.Sprintf(\"%#v\\n\", options),\n\t\t\"typeMeta\", fmt.Sprintf(\"%#v\\n\", cfg.TypeMeta),\n\t)\n\n\teraserOpts := config.NewManager(cfg)\n\tmanagerOpts := cfg.Manager\n\n\twatcher, err := setupWatcher(configFile)\n\tif err != nil {\n\t\tsetupLog.Error(err, \"unable to set up configuration file watch\")\n\t\tos.Exit(1)\n\t}\n\n\tgo startConfigWatch(cancel, watcher, eraserOpts, configFile)\n\n\tif managerOpts.Profile.Enabled {\n\t\tgo func() {\n\t\t\tserver := &http.Server{\n\t\t\t\tAddr:              fmt.Sprintf(\"localhost:%d\", managerOpts.Profile.Port),\n\t\t\t\tReadHeaderTimeout: 3 * time.Second,\n\t\t\t}\n\t\t\terr := server.ListenAndServe()\n\t\t\tsetupLog.Error(err, \"pprof server failed\")\n\t\t}()\n\t}\n\n\tconfig := ctrl.GetConfigOrDie()\n\tconfig.UserAgent = version.GetUserAgent(\"manager\")\n\n\tsetupLog.Info(\"setting up manager\", \"userAgent\", config.UserAgent)\n\n\tmgr, err := ctrl.NewManager(config, options)\n\tif err != nil {\n\t\tsetupLog.Error(err, \"unable to start manager\")\n\t\tos.Exit(1)\n\t}\n\n\tsetupLog.Info(\"setup controllers\")\n\tif err = controllers.SetupWithManager(mgr, eraserOpts); err != nil {\n\t\tsetupLog.Error(err, \"unable to setup controllers\")\n\t\tos.Exit(1)\n\t}\n\n\t//+kubebuilder:scaffold:builder\n\n\tif err := mgr.AddHealthzCheck(\"healthz\", healthz.Ping); err != nil {\n\t\tsetupLog.Error(err, \"unable to set up health check\")\n\t\tos.Exit(1)\n\t}\n\tif err := mgr.AddReadyzCheck(\"readyz\", healthz.Ping); err != nil {\n\t\tsetupLog.Error(err, \"unable to set up ready check\")\n\t\tos.Exit(1)\n\t}\n\n\tsetupLog.Info(\"starting manager\")\n\tif err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {\n\t\tsetupLog.Error(err, \"problem running manager\")\n\t\tos.Exit(1)\n\t}\n}\n\nfunc getConfig(configFile string) (*unversioned.EraserConfig, error) 
{\n\t//nolint:gosec // G304: Reading config file is intended functionality\n\tfileBytes, err := os.ReadFile(configFile)\n\tif err != nil {\n\t\tsetupLog.Error(err, \"configuration is either missing or invalid\")\n\t\tos.Exit(1)\n\t}\n\n\tvar av apiVersion\n\tif err := yaml.Unmarshal(fileBytes, &av); err != nil {\n\t\tsetupLog.Error(err, \"cannot unmarshal yaml\", \"bytes\", string(fileBytes), \"apiVersion\", av)\n\t\tos.Exit(1)\n\t}\n\n\tswitch av.APIVersion {\n\tcase \"eraser.sh/v1alpha1\":\n\t\treturn getUnversioned(fileBytes, v1alpha1Config.Default(), fromV1alpha1)\n\tcase \"eraser.sh/v1alpha2\":\n\t\treturn getUnversioned(fileBytes, v1alpha2Config.Default(), fromV1alpha2)\n\tcase \"eraser.sh/v1alpha3\":\n\t\treturn getUnversioned(fileBytes, v1alpha3Config.Default(), fromV1alpha3)\n\tdefault:\n\t\t// return a non-nil error so the caller does not proceed with a nil config\n\t\terr := fmt.Errorf(\"unknown api version\")\n\t\tsetupLog.Error(err, \"error\", \"apiVersion\", av.APIVersion)\n\t\treturn nil, err\n\t}\n}\n\nfunc getUnversioned[T any](b []byte, defaults *T, convert convertFunc[T]) (*unversioned.EraserConfig, error) {\n\tcfg := defaults\n\n\tif err := yaml.Unmarshal(b, cfg); err != nil {\n\t\tsetupLog.Error(err, \"configuration is either missing or invalid\")\n\t\treturn nil, err\n\t}\n\n\tvar unv unversioned.EraserConfig\n\tif err := convert(cfg, &unv, nil); err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn &unv, nil\n}\n\n// Kubernetes manages configmap volume updates by creating a new file,\n// changing the symlink, then deleting the old file. Hence, we want to\n// watch for IN_DELETE_SELF events. In case the watch is dropped, we need to\n// reestablish it, so we also watch for IN_IGNORED.\n// See https://ahmet.im/blog/kubernetes-inotify/ for more information.\nfunc setupWatcher(configFile string) (*inotify.Watcher, error) {\n\twatcher, err := inotify.NewWatcher()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\terr = watcher.AddWatch(configFile, inotify.InDeleteSelf|inotify.InIgnored)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn watcher, nil\n}\n\nfunc startConfigWatch(cancel context.CancelFunc, watcher *inotify.Watcher, eraserOpts *config.Manager, filename string) {\n\tfor {\n\t\tselect {\n\t\tcase ev := <-watcher.Event:\n\t\t\t// by default inotify removes a watch on a file on an IN_DELETE_SELF\n\t\t\t// event, so we have to remove and reinstate the watch\n\t\t\tsetupLog.V(1).Info(\"event\", \"event\", ev)\n\t\t\tif ev.Mask&inotify.InIgnored != 0 {\n\t\t\t\terr := watcher.RemoveWatch(filename)\n\t\t\t\tif err != nil {\n\t\t\t\t\tsetupLog.Error(err, \"unable to remove watch on config\")\n\t\t\t\t}\n\n\t\t\t\terr = watcher.AddWatch(filename, inotify.InDeleteSelf|inotify.InIgnored)\n\t\t\t\tif err != nil {\n\t\t\t\t\tsetupLog.Error(err, \"unable to set up new watch on configuration\")\n\t\t\t\t}\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tvar err error\n\t\t\toldConfig := new(unversioned.EraserConfig)\n\n\t\t\t*oldConfig, err = eraserOpts.Read()\n\t\t\tif err != nil {\n\t\t\t\tsetupLog.Error(err, \"configuration could not be read\", \"event\", ev, \"filename\", filename)\n\t\t\t}\n\n\t\t\tnewConfig, err := getConfig(filename)\n\t\t\tif err != nil {\n\t\t\t\tsetupLog.Error(err, \"configuration is missing or invalid\", \"event\", ev, \"filename\", filename)\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tif err = eraserOpts.Update(newConfig); err != nil {\n\t\t\t\tsetupLog.Error(err, \"configuration update failed\")\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\t// read back the new configuration\n\t\t\t*newConfig, err = eraserOpts.Read()\n\t\t\tif err != nil {\n\t\t\t\tsetupLog.Error(err, \"unable to 
read back new configuration\")\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tif needsRestart(oldConfig, newConfig) {\n\t\t\t\tsetupLog.Info(\"configurations differ in an irreconcilable way, restarting\", \"old\", oldConfig.Components, \"new\", newConfig.Components)\n\t\t\t\t// restarts the manager\n\t\t\t\tcancel()\n\t\t\t}\n\n\t\t\tsetupLog.V(1).Info(\"new configuration\", \"manager\", newConfig.Manager, \"components\", newConfig.Components)\n\t\tcase err := <-watcher.Error:\n\t\t\tsetupLog.Error(err, \"file watcher error\")\n\t\t}\n\t}\n}\n\nfunc needsRestart(oldConfig, newConfig *unversioned.EraserConfig) bool {\n\ttype check struct {\n\t\tcollector bool\n\t\tscanner   bool\n\t}\n\n\toldComponents := check{collector: oldConfig.Components.Collector.Enabled, scanner: oldConfig.Components.Scanner.Enabled}\n\tnewComponents := check{collector: newConfig.Components.Collector.Enabled, scanner: newConfig.Components.Scanner.Enabled}\n\treturn oldComponents != newComponents\n}\n"
  },
  {
    "path": "manifest_staging/deploy/eraser.yaml",
    "content": "apiVersion: v1\nkind: Namespace\nmetadata:\n  labels:\n    control-plane: controller-manager\n  name: eraser-system\n---\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n  annotations:\n    controller-gen.kubebuilder.io/version: v0.14.0\n  name: imagejobs.eraser.sh\nspec:\n  group: eraser.sh\n  names:\n    kind: ImageJob\n    listKind: ImageJobList\n    plural: imagejobs\n    singular: imagejob\n  scope: Cluster\n  versions:\n  - name: v1\n    schema:\n      openAPIV3Schema:\n        description: ImageJob is the Schema for the imagejobs API.\n        properties:\n          apiVersion:\n            description: |-\n              APIVersion defines the versioned schema of this representation of an object.\n              Servers should convert recognized schemas to the latest internal value, and\n              may reject unrecognized values.\n              More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n            type: string\n          kind:\n            description: |-\n              Kind is a string value representing the REST resource this object represents.\n              Servers may infer this from the endpoint the client submits requests to.\n              Cannot be updated.\n              In CamelCase.\n              More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n            type: string\n          metadata:\n            type: object\n          status:\n            description: ImageJobStatus defines the observed state of ImageJob.\n            properties:\n              deleteAfter:\n                description: Time to delay deletion until\n                format: date-time\n                type: string\n              desired:\n                description: desired number of pods\n                type: integer\n              failed:\n                description: number of pods that failed\n                type: integer\n              phase:\n                description: job running, successfully completed, or failed\n                type: string\n              skipped:\n                description: number of nodes that were skipped e.g. because they are not a linux node\n                type: integer\n              succeeded:\n                description: number of pods that completed successfully\n                type: integer\n            required:\n            - desired\n            - failed\n            - phase\n            - skipped\n            - succeeded\n            type: object\n        type: object\n    served: true\n    storage: true\n    subresources:\n      status: {}\n  - deprecated: true\n    deprecationWarning: v1alpha1 of the eraser API has been deprecated. 
Please migrate to v1.\n    name: v1alpha1\n    schema:\n      openAPIV3Schema:\n        description: ImageJob is the Schema for the imagejobs API.\n        properties:\n          apiVersion:\n            description: |-\n              APIVersion defines the versioned schema of this representation of an object.\n              Servers should convert recognized schemas to the latest internal value, and\n              may reject unrecognized values.\n              More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n            type: string\n          kind:\n            description: |-\n              Kind is a string value representing the REST resource this object represents.\n              Servers may infer this from the endpoint the client submits requests to.\n              Cannot be updated.\n              In CamelCase.\n              More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n            type: string\n          metadata:\n            type: object\n          status:\n            description: ImageJobStatus defines the observed state of ImageJob.\n            properties:\n              deleteAfter:\n                description: Time to delay deletion until\n                format: date-time\n                type: string\n              desired:\n                description: desired number of pods\n                type: integer\n              failed:\n                description: number of pods that failed\n                type: integer\n              phase:\n                description: job running, successfully completed, or failed\n                type: string\n              skipped:\n                description: number of nodes that were skipped e.g. 
because they are not a linux node\n                type: integer\n              succeeded:\n                description: number of pods that completed successfully\n                type: integer\n            required:\n            - desired\n            - failed\n            - phase\n            - skipped\n            - succeeded\n            type: object\n        type: object\n    served: true\n    storage: false\n    subresources:\n      status: {}\n---\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\nmetadata:\n  annotations:\n    controller-gen.kubebuilder.io/version: v0.14.0\n  name: imagelists.eraser.sh\nspec:\n  group: eraser.sh\n  names:\n    kind: ImageList\n    listKind: ImageListList\n    plural: imagelists\n    singular: imagelist\n  scope: Cluster\n  versions:\n  - name: v1\n    schema:\n      openAPIV3Schema:\n        description: ImageList is the Schema for the imagelists API.\n        properties:\n          apiVersion:\n            description: |-\n              APIVersion defines the versioned schema of this representation of an object.\n              Servers should convert recognized schemas to the latest internal value, and\n              may reject unrecognized values.\n              More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n            type: string\n          kind:\n            description: |-\n              Kind is a string value representing the REST resource this object represents.\n              Servers may infer this from the endpoint the client submits requests to.\n              Cannot be updated.\n              In CamelCase.\n              More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n            type: string\n          metadata:\n            type: object\n          spec:\n            description: ImageListSpec defines the desired state of ImageList.\n            properties:\n              images:\n                description: The list of non-compliant images to delete if non-running.\n                items:\n                  type: string\n                type: array\n            required:\n            - images\n            type: object\n          status:\n            description: ImageListStatus defines the observed state of ImageList.\n            properties:\n              failed:\n                description: Number of nodes that failed to run the job\n                format: int64\n                type: integer\n              skipped:\n                description: Number of nodes that were skipped due to a skip selector\n                format: int64\n                type: integer\n              success:\n                description: Number of nodes that successfully ran the job\n                format: int64\n                type: integer\n              timestamp:\n                description: Information when the job was completed.\n                format: date-time\n                type: string\n            required:\n            - failed\n            - skipped\n            - success\n            - timestamp\n            type: object\n        type: object\n    served: true\n    storage: true\n    subresources:\n      status: {}\n  - deprecated: true\n    deprecationWarning: v1alpha1 of the eraser API has been deprecated. 
Please migrate to v1.\n    name: v1alpha1\n    schema:\n      openAPIV3Schema:\n        description: ImageList is the Schema for the imagelists API.\n        properties:\n          apiVersion:\n            description: |-\n              APIVersion defines the versioned schema of this representation of an object.\n              Servers should convert recognized schemas to the latest internal value, and\n              may reject unrecognized values.\n              More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n            type: string\n          kind:\n            description: |-\n              Kind is a string value representing the REST resource this object represents.\n              Servers may infer this from the endpoint the client submits requests to.\n              Cannot be updated.\n              In CamelCase.\n              More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n            type: string\n          metadata:\n            type: object\n          spec:\n            description: ImageListSpec defines the desired state of ImageList.\n            properties:\n              images:\n                description: The list of non-compliant images to delete if non-running.\n                items:\n                  type: string\n                type: array\n            required:\n            - images\n            type: object\n          status:\n            description: ImageListStatus defines the observed state of ImageList.\n            properties:\n              failed:\n                description: Number of nodes that failed to run the job\n                format: int64\n                type: integer\n              skipped:\n                description: Number of nodes that were skipped due to a skip selector\n                format: int64\n                type: integer\n              success:\n                description: Number of nodes that successfully ran the job\n                format: int64\n                type: integer\n              timestamp:\n                description: Information when the job was completed.\n                format: date-time\n                type: string\n            required:\n            - failed\n            - skipped\n            - success\n            - timestamp\n            type: object\n        type: object\n    served: true\n    storage: false\n    subresources:\n      status: {}\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: eraser-controller-manager\n  namespace: eraser-system\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: eraser-imagejob-pods\n  namespace: eraser-system\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n  name: eraser-manager-role\n  namespace: eraser-system\nrules:\n- apiGroups:\n  - \"\"\n  resources:\n  - configmaps\n  verbs:\n  - create\n  - delete\n  - get\n  - list\n  - patch\n  - update\n  - watch\n- apiGroups:\n  - \"\"\n  resources:\n  - pods\n  verbs:\n  - create\n  - delete\n  - get\n  - list\n  - update\n  - watch\n- apiGroups:\n  - \"\"\n  resources:\n  - podtemplates\n  verbs:\n  - create\n  - delete\n  - get\n  - list\n  - patch\n  - update\n  - watch\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n  name: eraser-manager-role\nrules:\n- apiGroups:\n  - \"\"\n  resources:\n  - nodes\n  verbs:\n  - get\n  - list\n  - watch\n- apiGroups:\n  - eraser.sh\n  resources:\n  - imagejobs\n  verbs:\n  - create\n  - 
delete\n  - get\n  - list\n  - watch\n- apiGroups:\n  - eraser.sh\n  resources:\n  - imagejobs/status\n  verbs:\n  - get\n  - patch\n  - update\n- apiGroups:\n  - eraser.sh\n  resources:\n  - imagelists\n  verbs:\n  - get\n  - list\n  - watch\n- apiGroups:\n  - eraser.sh\n  resources:\n  - imagelists/status\n  verbs:\n  - get\n  - patch\n  - update\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n  name: eraser-manager-rolebinding\n  namespace: eraser-system\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: Role\n  name: eraser-manager-role\nsubjects:\n- kind: ServiceAccount\n  name: eraser-controller-manager\n  namespace: eraser-system\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: eraser-manager-rolebinding\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: eraser-manager-role\nsubjects:\n- kind: ServiceAccount\n  name: eraser-controller-manager\n  namespace: eraser-system\n---\napiVersion: v1\ndata:\n  controller_manager_config.yaml: |\n    apiVersion: eraser.sh/v1alpha3\n    kind: EraserConfig\n    manager:\n      runtime:\n        name: containerd\n        address: unix:///run/containerd/containerd.sock\n      otlpEndpoint: \"\"\n      logLevel: info\n      scheduling:\n        repeatInterval: 24h\n        beginImmediately: true\n      profile:\n        enabled: false\n        port: 6060\n      imageJob:\n        successRatio: 1.0\n        cleanup:\n          delayOnSuccess: 0s\n          delayOnFailure: 24h\n      pullSecrets: [] # image pull secrets for collector/scanner/eraser\n      priorityClassName: \"\" # priority class name for collector/scanner/eraser\n      additionalPodLabels: {}\n      nodeFilter:\n        type: exclude # must be either exclude|include\n        selectors:\n          - eraser.sh/cleanup.filter\n          - kubernetes.io/os=windows\n    components:\n      collector:\n        enabled: true\n        image:\n          repo: ghcr.io/eraser-dev/collector\n          tag: v1.5.0-beta.0\n        request:\n          mem: 25Mi\n          cpu: 7m\n        limit:\n          mem: 500Mi\n          # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#how-pods-with-resource-limits-are-run\n          cpu: 0\n      scanner:\n        enabled: true\n        image:\n          repo: ghcr.io/eraser-dev/eraser-trivy-scanner # supply custom image for custom scanner\n          tag: v1.5.0-beta.0\n        request:\n          mem: 500Mi\n          cpu: 1000m\n        limit:\n          mem: 2Gi\n          # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#how-pods-with-resource-limits-are-run\n          cpu: 0\n        # The config needs to be passed through to the scanner as yaml, as a\n        # single string. Because we allow custom scanner images, the scanner is\n        # responsible for defining a schema, parsing, and validating.\n        config: |\n          # this is the schema for the provided 'trivy-scanner'. 
custom scanners\n          # will define their own configuration.\n          cacheDir: /var/lib/trivy\n          dbRepo: ghcr.io/aquasecurity/trivy-db\n          deleteFailedImages: true\n          deleteEOLImages: true\n          vulnerabilities:\n            ignoreUnfixed: false\n            types:\n              - os\n              - library\n            securityChecks:\n              - vuln\n            severities:\n              - CRITICAL\n              - HIGH\n              - MEDIUM\n              - LOW\n            ignoredStatuses:\n          timeout:\n            total: 23h\n            perImage: 1h\n        volumes: []\n      remover:\n        image:\n          repo: ghcr.io/eraser-dev/remover\n          tag: v1.5.0-beta.0\n        request:\n          mem: 25Mi\n          # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#how-pods-with-resource-limits-are-run\n          cpu: 0\n        limit:\n          mem: 30Mi\n          cpu: 0\nkind: ConfigMap\nmetadata:\n  name: eraser-manager-config\n  namespace: eraser-system\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  labels:\n    control-plane: controller-manager\n  name: eraser-controller-manager\n  namespace: eraser-system\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      control-plane: controller-manager\n  template:\n    metadata:\n      labels:\n        control-plane: controller-manager\n    spec:\n      containers:\n      - args:\n        - --config=/config/controller_manager_config.yaml\n        command:\n        - /manager\n        env:\n        - name: POD_NAMESPACE\n          valueFrom:\n            fieldRef:\n              apiVersion: v1\n              fieldPath: metadata.namespace\n        - name: OTEL_SERVICE_NAME\n          value: eraser-manager\n        image: ghcr.io/eraser-dev/eraser-manager:v1.5.0-beta.0\n        livenessProbe:\n          httpGet:\n            path: /healthz\n            port: 8081\n          initialDelaySeconds: 15\n          periodSeconds: 20\n        name: manager\n        readinessProbe:\n          httpGet:\n            path: /readyz\n            port: 8081\n          initialDelaySeconds: 5\n          periodSeconds: 10\n        resources:\n          limits:\n            memory: 30Mi\n          requests:\n            cpu: 100m\n            memory: 20Mi\n        securityContext:\n          allowPrivilegeEscalation: false\n          capabilities:\n            drop:\n            - ALL\n          readOnlyRootFilesystem: true\n          runAsGroup: 65532\n          runAsNonRoot: true\n          runAsUser: 65532\n          seccompProfile:\n            type: RuntimeDefault\n        volumeMounts:\n        - mountPath: /config\n          name: manager-config\n      nodeSelector:\n        kubernetes.io/os: linux\n      serviceAccountName: eraser-controller-manager\n      terminationGracePeriodSeconds: 10\n      volumes:\n      - configMap:\n          name: eraser-manager-config\n        name: manager-config\n"
  },
  {
    "path": "pkg/collector/collector.go",
    "content": "package main\n\nimport (\n\t\"encoding/json\"\n\t\"flag\"\n\t\"fmt\"\n\t\"io\"\n\t\"net/http\"\n\t_ \"net/http/pprof\"\n\t\"os\"\n\t\"time\"\n\n\t\"github.com/eraser-dev/eraser/pkg/cri\"\n\t\"github.com/eraser-dev/eraser/pkg/logger\"\n\t\"golang.org/x/sys/unix\"\n\tlogf \"sigs.k8s.io/controller-runtime/pkg/log\"\n\n\tutil \"github.com/eraser-dev/eraser/pkg/utils\"\n)\n\nvar (\n\tenableProfile = flag.Bool(\"enable-pprof\", false, \"enable pprof profiling\")\n\tprofilePort   = flag.Int(\"pprof-port\", 6060, \"port for pprof profiling. defaulted to 6060 if unspecified\")\n\tscanDisabled  = flag.Bool(\"scan-disabled\", false, \"boolean for if scanner container is disabled\")\n\n\t// Timeout  of connecting to server (default: 5m).\n\ttimeout  = 5 * time.Minute\n\tlog      = logf.Log.WithName(\"collector\")\n\texcluded map[string]struct{}\n)\n\nfunc main() {\n\tflag.Parse()\n\n\tif *enableProfile {\n\t\tgo func() {\n\t\t\tserver := &http.Server{\n\t\t\t\tAddr:              fmt.Sprintf(\"localhost:%d\", *profilePort),\n\t\t\t\tReadHeaderTimeout: 3 * time.Second,\n\t\t\t}\n\t\t\terr := server.ListenAndServe()\n\t\t\tlog.Error(err, \"pprof server failed\")\n\t\t}()\n\t}\n\n\tif err := logger.Configure(); err != nil {\n\t\tfmt.Fprintln(os.Stderr, \"Error setting up logger:\", err)\n\t\tos.Exit(1)\n\t}\n\n\tclient, err := cri.NewCollectorClient(util.CRIPath)\n\tif err != nil {\n\t\tlog.Error(err, \"failed to get image client\")\n\t\tos.Exit(1)\n\t}\n\n\texcluded, err = util.ParseExcluded()\n\tif os.IsNotExist(err) {\n\t\tlog.Info(\"configmaps for exclusion do not exist\")\n\t} else if err != nil {\n\t\tlog.Error(err, \"failed to parse exclusion list\")\n\t\tos.Exit(1)\n\t}\n\tif len(excluded) == 0 {\n\t\tlog.Info(\"no images to exclude\")\n\t}\n\n\t// finalImages of type []Image\n\tfinalImages, err := getImages(client)\n\tif err != nil {\n\t\tlog.Error(err, \"failed to list all images\")\n\t\tos.Exit(1)\n\t}\n\tlog.Info(\"images collected\", \"finalImages:\", finalImages)\n\n\tdata, err := json.Marshal(finalImages)\n\tif err != nil {\n\t\tlog.Error(err, \"failed to encode finalImages\")\n\t\tos.Exit(1)\n\t}\n\n\tpath := util.CollectScanPath\n\n\tif *scanDisabled {\n\t\tpath = util.ScanErasePath\n\t}\n\n\tif err := unix.Mkfifo(path, util.PipeMode); err != nil {\n\t\tlog.Error(err, \"failed to create pipe\", \"pipeFile\", path)\n\t\tos.Exit(1)\n\t}\n\n\t//nolint:gosec // G304: Opening pipe file is intended functionality\n\tfile, err := os.OpenFile(path, os.O_WRONLY, 0)\n\tif err != nil {\n\t\tlog.Error(err, \"failed to open pipe\", \"pipeFile\", path)\n\t\tos.Exit(1)\n\t}\n\n\tif _, err := file.Write(data); err != nil {\n\t\tlog.Error(err, \"failed to write to pipe\", \"pipeFile\", path)\n\t\tos.Exit(1)\n\t}\n\n\tif err := file.Close(); err != nil {\n\t\tlog.Error(err, \"failed to close pipe\", \"pipeFile\", path)\n\t\tos.Exit(1)\n\t}\n\tif err := unix.Mkfifo(util.EraseCompleteCollectPath, util.PipeMode); err != nil {\n\t\tlog.Error(err, \"failed to create pipe\", \"pipeFile\", util.EraseCompleteCollectPath)\n\t\tos.Exit(1)\n\t}\n\n\tfile, err = os.OpenFile(util.EraseCompleteCollectPath, os.O_RDONLY, 0)\n\tif err != nil {\n\t\tlog.Error(err, \"failed to open pipe\", \"pipeFile\", util.EraseCompleteCollectPath)\n\t\tos.Exit(1)\n\t}\n\n\tdata, err = io.ReadAll(file)\n\tif err != nil {\n\t\tlog.Error(err, \"failed to read pipe\", \"pipeFile\", util.EraseCompleteCollectPath)\n\t\tos.Exit(1)\n\t}\n\n\tif err := file.Close(); err != nil {\n\t\tlog.Error(err, \"failed to close pipe\", 
\"pipeFile\", util.EraseCompleteCollectPath)\n\t\tos.Exit(1)\n\t}\n\n\tif string(data) != util.EraseCompleteMessage {\n\t\tlog.Info(\"garbage in pipe\", \"pipeFile\", util.EraseCompleteCollectPath, \"in_pipe\", string(data))\n\t\tos.Exit(1)\n\t}\n}\n"
  },
  {
    "path": "pkg/collector/helpers.go",
    "content": "package main\n\nimport (\n\t\"context\"\n\n\t\"github.com/eraser-dev/eraser/api/unversioned\"\n\t\"github.com/eraser-dev/eraser/pkg/cri\"\n\tutil \"github.com/eraser-dev/eraser/pkg/utils\"\n)\n\nfunc getImages(c cri.Collector) ([]unversioned.Image, error) {\n\tbackgroundContext, cancel := context.WithTimeout(context.Background(), timeout)\n\tdefer cancel()\n\n\timages, err := c.ListImages(backgroundContext)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tallImages := make([]unversioned.Image, 0, len(images))\n\t// map with key: imageID, value: repoTag list (contains full name of image)\n\tidToImageMap := make(map[string]unversioned.Image)\n\n\tfor _, img := range images {\n\t\trepoTags := []string{}\n\t\trepoTags = append(repoTags, img.RepoTags...)\n\n\t\tnewImg := unversioned.Image{\n\t\t\tImageID: img.Id,\n\t\t\tNames:   repoTags,\n\t\t}\n\n\t\tdigests, errs := util.ProcessRepoDigests(img.RepoDigests)\n\t\tfor _, err := range errs {\n\t\t\tlog.Error(err, \"error processing digest\")\n\t\t}\n\n\t\tnewImg.Digests = append(newImg.Digests, digests...)\n\n\t\tallImages = append(allImages, newImg)\n\t\tidToImageMap[img.Id] = newImg\n\t}\n\n\tcontainers, err := c.ListContainers(backgroundContext)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\t// Images that are running\n\t// map of (digest | name) -> imageID\n\trunningImages := util.GetRunningImages(containers, idToImageMap)\n\n\t// Images that aren't running\n\t// map of (digest | name) -> imageID\n\tnonRunningImages := util.GetNonRunningImages(runningImages, allImages, idToImageMap)\n\n\tfinalImages := make([]unversioned.Image, 0, len(images))\n\n\t// empty map to keep track of repeated digest values due to both name and digest being present as keys in nonRunningImages\n\tchecked := make(map[string]struct{})\n\n\tfor _, imageID := range nonRunningImages {\n\t\tif _, alreadyChecked := checked[imageID]; alreadyChecked {\n\t\t\tcontinue\n\t\t}\n\n\t\tchecked[imageID] = struct{}{}\n\t\timg := idToImageMap[imageID]\n\n\t\tcurrImage := unversioned.Image{\n\t\t\tImageID: imageID,\n\t\t\tNames:   img.Names,\n\t\t\tDigests: img.Digests,\n\t\t}\n\n\t\tif !util.IsExcluded(excluded, currImage.ImageID, idToImageMap) {\n\t\t\tfinalImages = append(finalImages, currImage)\n\t\t}\n\t}\n\n\treturn finalImages, nil\n}\n"
  },
  {
    "path": "pkg/cri/client.go",
    "content": "package cri\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\n\t\"github.com/eraser-dev/eraser/pkg/utils\"\n\t\"google.golang.org/grpc\"\n\tv1 \"k8s.io/cri-api/pkg/apis/runtime/v1\"\n\tv1alpha2 \"k8s.io/cri-api/pkg/apis/runtime/v1alpha2\"\n)\n\nconst (\n\tRuntimeV1       runtimeVersion = \"v1\"\n\tRuntimeV1Alpha2 runtimeVersion = \"v1alpha2\"\n)\n\ntype (\n\tCollector interface {\n\t\tListImages(context.Context) ([]*v1.Image, error)\n\t\tListContainers(context.Context) ([]*v1.Container, error)\n\t}\n\n\tRemover interface {\n\t\tCollector\n\t\tDeleteImage(context.Context, string) error\n\t}\n\n\truntimeTryFunc func(context.Context, *grpc.ClientConn) (string, error)\n)\n\nfunc NewCollectorClient(socketPath string) (Collector, error) {\n\treturn NewRemoverClient(socketPath)\n}\n\nfunc NewRemoverClient(socketPath string) (Remover, error) {\n\tctx := context.Background()\n\n\tconn, err := utils.GetConn(ctx, socketPath)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn newClientWithFallback(ctx, conn)\n}\n\nfunc newClientWithFallback(ctx context.Context, conn *grpc.ClientConn) (Remover, error) {\n\terrs := new(errors)\n\tfuncs := []runtimeTryFunc{tryV1, tryV1Alpha2}\n\n\tfor _, f := range funcs {\n\t\tversion, err := f(ctx, conn)\n\t\tif err != nil {\n\t\t\terrs.Append(err)\n\t\t\tcontinue\n\t\t}\n\n\t\tclient, err := getClientFromRuntimeVersion(conn, version)\n\t\tif err != nil {\n\t\t\terrs.Append(err)\n\t\t\tcontinue\n\t\t}\n\n\t\treturn client, nil\n\t}\n\n\treturn nil, errs\n}\n\nfunc tryV1Alpha2(ctx context.Context, conn *grpc.ClientConn) (string, error) {\n\truntimeClientV1Alpha2 := v1alpha2.NewRuntimeServiceClient(conn)\n\treq2 := v1alpha2.VersionRequest{}\n\n\trespv1Alpha2, err := runtimeClientV1Alpha2.Version(ctx, &req2)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\treturn respv1Alpha2.RuntimeApiVersion, err\n}\n\nfunc tryV1(ctx context.Context, conn *grpc.ClientConn) (string, error) {\n\truntimeClient := v1.NewRuntimeServiceClient(conn)\n\treq := v1.VersionRequest{}\n\n\tresp, err := runtimeClient.Version(ctx, &req)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\treturn resp.RuntimeApiVersion, err\n}\n\nfunc getClientFromRuntimeVersion(conn *grpc.ClientConn, runtimeAPIVersion string) (Remover, error) {\n\tswitch runtimeAPIVersion {\n\tcase string(RuntimeV1):\n\t\timageClient := v1.NewImageServiceClient(conn)\n\t\truntimeClient := v1.NewRuntimeServiceClient(conn)\n\t\treturn &v1Client{\n\t\t\timages:  imageClient,\n\t\t\truntime: runtimeClient,\n\t\t}, nil\n\tcase string(RuntimeV1Alpha2):\n\t\truntimeClient := v1alpha2.NewRuntimeServiceClient(conn)\n\t\timageClient := v1alpha2.NewImageServiceClient(conn)\n\t\treturn &v1alpha2Client{\n\t\t\timages:  imageClient,\n\t\t\truntime: runtimeClient,\n\t\t}, nil\n\t}\n\n\treturn nil, fmt.Errorf(\"unrecognized CRI version: '%s'\", runtimeAPIVersion)\n}\n"
  },
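  {
    "path": "docs/examples/cri_client/main.go",
    "content": "package main\n\n// Illustrative sketch, not part of the upstream tree: shows how the\n// version-negotiating client in pkg/cri is typically consumed. The socket\n// path and the print-only loop are assumptions for demonstration; in-cluster\n// callers use util.CRIPath instead.\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"os\"\n\n\t\"github.com/eraser-dev/eraser/pkg/cri\"\n)\n\nfunc main() {\n\t// NewRemoverClient probes CRI v1 first and falls back to v1alpha2.\n\tclient, err := cri.NewRemoverClient(\"/run/cri/cri.sock\")\n\tif err != nil {\n\t\tfmt.Fprintln(os.Stderr, \"failed to connect to CRI socket:\", err)\n\t\tos.Exit(1)\n\t}\n\n\timages, err := client.ListImages(context.Background())\n\tif err != nil {\n\t\tfmt.Fprintln(os.Stderr, \"failed to list images:\", err)\n\t\tos.Exit(1)\n\t}\n\n\tfor _, img := range images {\n\t\tfmt.Println(img.Id, img.RepoTags)\n\t}\n}\n"
  },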
  {
    "path": "pkg/cri/client_v1.go",
    "content": "package cri\n\nimport (\n\t\"context\"\n\n\t\"google.golang.org/grpc/codes\"\n\t\"google.golang.org/grpc/status\"\n\tv1 \"k8s.io/cri-api/pkg/apis/runtime/v1\"\n)\n\ntype (\n\tv1Client struct {\n\t\timages  v1.ImageServiceClient\n\t\truntime v1.RuntimeServiceClient\n\t}\n)\n\nfunc (c *v1Client) ListContainers(ctx context.Context) (list []*v1.Container, err error) {\n\tresp, err := c.runtime.ListContainers(ctx, new(v1.ListContainersRequest))\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn resp.Containers, nil\n}\n\nfunc (c *v1Client) ListImages(ctx context.Context) (list []*v1.Image, err error) {\n\trequest := &v1.ListImagesRequest{Filter: nil}\n\n\tresp, err := c.images.ListImages(ctx, request)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn resp.Images, nil\n}\n\nfunc (c *v1Client) DeleteImage(ctx context.Context, image string) (err error) {\n\tif image == \"\" {\n\t\treturn err\n\t}\n\n\trequest := &v1.RemoveImageRequest{Image: &v1.ImageSpec{Image: image}}\n\n\t_, err = c.images.RemoveImage(ctx, request)\n\tif err != nil {\n\t\tif status.Code(err) == codes.NotFound {\n\t\t\treturn nil\n\t\t}\n\t\treturn err\n\t}\n\n\treturn nil\n}\n"
  },
  {
    "path": "pkg/cri/client_v1alpha2.go",
    "content": "package cri\n\nimport (\n\t\"context\"\n\n\t\"google.golang.org/grpc/codes\"\n\t\"google.golang.org/grpc/status\"\n\tv1 \"k8s.io/cri-api/pkg/apis/runtime/v1\"\n\tv1alpha2 \"k8s.io/cri-api/pkg/apis/runtime/v1alpha2\"\n)\n\ntype (\n\tv1alpha2Client struct {\n\t\timages  v1alpha2.ImageServiceClient\n\t\truntime v1alpha2.RuntimeServiceClient\n\t}\n)\n\nfunc (c *v1alpha2Client) ListContainers(ctx context.Context) (list []*v1.Container, err error) {\n\tresp, err := c.runtime.ListContainers(ctx, new(v1alpha2.ListContainersRequest))\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn convertContainers(resp.Containers), nil\n}\n\nfunc (c *v1alpha2Client) ListImages(ctx context.Context) (list []*v1.Image, err error) {\n\trequest := &v1alpha2.ListImagesRequest{Filter: nil}\n\n\tresp, err := c.images.ListImages(ctx, request)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn convertImages(resp.Images), nil\n}\n\nfunc (c *v1alpha2Client) DeleteImage(ctx context.Context, image string) (err error) {\n\tif image == \"\" {\n\t\treturn err\n\t}\n\n\trequest := &v1alpha2.RemoveImageRequest{Image: &v1alpha2.ImageSpec{Image: image}}\n\n\t_, err = c.images.RemoveImage(ctx, request)\n\tif err != nil {\n\t\tif status.Code(err) == codes.NotFound {\n\t\t\treturn nil\n\t\t}\n\t\treturn err\n\t}\n\n\treturn nil\n}\n\nfunc convertContainers(list []*v1alpha2.Container) []*v1.Container {\n\tv1s := []*v1.Container{}\n\n\tfor _, c := range list {\n\t\tv1s = append(v1s, convertContainer(c))\n\t}\n\n\treturn v1s\n}\n\nfunc convertImages(list []*v1alpha2.Image) []*v1.Image {\n\tv1s := []*v1.Image{}\n\n\tfor _, c := range list {\n\t\tv1s = append(v1s, convertImage(c))\n\t}\n\n\treturn v1s\n}\n\nfunc convertContainer(c *v1alpha2.Container) *v1.Container {\n\tif c == nil {\n\t\treturn nil\n\t}\n\n\tcont := &v1.Container{\n\t\tId:           c.Id,\n\t\tPodSandboxId: c.PodSandboxId,\n\t\tImageRef:     c.ImageRef,\n\t\tState:        v1.ContainerState(c.State),\n\t\tCreatedAt:    c.CreatedAt,\n\t\tLabels:       c.Labels,\n\t\tAnnotations:  c.Annotations,\n\t}\n\n\tif c.Image != nil {\n\t\tcont.Image = &v1.ImageSpec{\n\t\t\tImage:       c.Image.Image,\n\t\t\tAnnotations: c.Image.Annotations,\n\t\t}\n\t}\n\n\tif c.Metadata != nil {\n\t\tcont.Metadata = &v1.ContainerMetadata{\n\t\t\tName:    c.Metadata.Name,\n\t\t\tAttempt: c.Metadata.Attempt,\n\t\t}\n\t}\n\n\treturn cont\n}\n\nfunc convertImage(i *v1alpha2.Image) *v1.Image {\n\tif i == nil {\n\t\treturn nil\n\t}\n\n\timg := &v1.Image{\n\t\tId:          i.Id,\n\t\tRepoTags:    i.RepoTags,\n\t\tRepoDigests: i.RepoDigests,\n\t\tSize_:       i.Size_,\n\t\tUsername:    i.Username,\n\t\tPinned:      i.Pinned,\n\t}\n\n\tif i.Spec != nil {\n\t\timg.Spec = &v1.ImageSpec{\n\t\t\tImage:       i.Spec.Image,\n\t\t\tAnnotations: i.Spec.Annotations,\n\t\t}\n\t}\n\n\tif i.Uid != nil {\n\t\timg.Uid = &v1.Int64Value{\n\t\t\tValue: i.Uid.Value,\n\t\t}\n\t}\n\n\treturn img\n}\n"
  },
  {
    "path": "pkg/cri/util.go",
    "content": "package cri\n\nimport (\n\t\"strings\"\n)\n\ntype (\n\truntimeVersion string\n\n\terrors []error\n)\n\nfunc (errs errors) Error() string {\n\ts := make([]string, 0, len(errs))\n\tfor _, err := range errs {\n\t\ts = append(s, err.Error())\n\t}\n\n\treturn strings.Join(s, \"\\n\")\n}\n\nfunc (errs *errors) Append(err error) {\n\tif err == nil {\n\t\treturn\n\t}\n\t*errs = append(*errs, err)\n}\n"
  },
  {
    "path": "pkg/logger/zap.go",
    "content": "package logger\n\nimport (\n\t\"flag\"\n\t\"fmt\"\n\n\t\"github.com/go-logr/logr\"\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/zapcore\"\n\tklog \"k8s.io/klog/v2\"\n\tctrl \"sigs.k8s.io/controller-runtime\"\n\tcrzap \"sigs.k8s.io/controller-runtime/pkg/log/zap\"\n)\n\nvar logLevel = flag.String(\"log-level\", zapcore.InfoLevel.String(),\n\tfmt.Sprintf(\"Log verbosity level. Supported values (in order of detail) are %q, %q, %q, and %q.\",\n\t\tzapcore.DebugLevel.String(),\n\t\tzapcore.InfoLevel.String(),\n\t\tzapcore.WarnLevel.String(),\n\t\tzapcore.ErrorLevel.String()))\n\n// GetLevel gets the configured log level.\nfunc GetLevel() string {\n\treturn *logLevel\n}\n\n// Configure configures a singleton logger for use from controller-runtime.\nfunc Configure() error {\n\tvar zapLevel zapcore.Level\n\tif err := zapLevel.UnmarshalText([]byte(*logLevel)); err != nil {\n\t\treturn fmt.Errorf(\"unable to parse log level: %w: %s\", err, *logLevel)\n\t}\n\n\tvar logger logr.Logger\n\tswitch zapLevel {\n\tcase zap.DebugLevel:\n\t\tcfg := zap.NewDevelopmentEncoderConfig()\n\t\tlogger = crzap.New(crzap.UseDevMode(true), crzap.Encoder(zapcore.NewConsoleEncoder(cfg)), crzap.Level(zapLevel))\n\t\tctrl.SetLogger(logger)\n\t\tklog.SetLogger(logger)\n\tdefault:\n\t\tcfg := zap.NewProductionEncoderConfig()\n\t\tlogger = crzap.New(crzap.UseDevMode(false), crzap.Encoder(zapcore.NewJSONEncoder(cfg)), crzap.Level(zapLevel))\n\t}\n\n\tctrl.SetLogger(logger)\n\tklog.SetLogger(logger)\n\n\treturn nil\n}\n"
  },
  {
    "path": "pkg/metrics/metrics.go",
    "content": "package metrics\n\nimport (\n\t\"context\"\n\t\"os\"\n\n\t\"github.com/go-logr/logr\"\n\t\"go.opentelemetry.io/otel/attribute\"\n\t\"go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp\"\n\t\"go.opentelemetry.io/otel/metric\"\n\t\"go.opentelemetry.io/otel/sdk/instrumentation\"\n\tsdkmetric \"go.opentelemetry.io/otel/sdk/metric\"\n\t\"go.opentelemetry.io/otel/sdk/metric/metricdata\"\n)\n\nconst (\n\tImagesRemovedCounter     = \"images_removed_run_total\"\n\tImagesRemovedDescription = \"total images removed\"\n)\n\nfunc ConfigureMetrics(ctx context.Context, log logr.Logger, endpoint string) (sdkmetric.Exporter, sdkmetric.Reader, *sdkmetric.MeterProvider) {\n\texporter, err := otlpmetrichttp.New(ctx, otlpmetrichttp.WithInsecure(), otlpmetrichttp.WithEndpoint(endpoint))\n\tif err != nil {\n\t\tlog.Error(err, \"error initializing exporter\")\n\t\treturn nil, nil, nil\n\t}\n\n\treader := sdkmetric.NewPeriodicReader(exporter)\n\n\tdurationInstrument := sdkmetric.Instrument{\n\t\tName:  \"imagejob_duration_run_seconds\",\n\t\tScope: instrumentation.Scope{Name: \"eraser\"},\n\t}\n\n\tdurationStream := sdkmetric.Stream{\n\t\tName: \"imagejob_duration_run_seconds\",\n\t\tUnit: \"s\",\n\t\tAggregation: sdkmetric.AggregationExplicitBucketHistogram{\n\t\t\tBoundaries: []float64{0, 10, 20, 30, 40, 50, 60},\n\t\t},\n\t}\n\n\thistogramView := sdkmetric.NewView(durationInstrument, durationStream)\n\n\tprovider := sdkmetric.NewMeterProvider(sdkmetric.WithReader(reader), sdkmetric.WithView(histogramView))\n\n\treturn exporter, reader, provider\n}\n\nfunc ExportMetrics(log logr.Logger, exporter sdkmetric.Exporter, reader sdkmetric.Reader) {\n\tctxB := context.Background()\n\n\tvar m metricdata.ResourceMetrics\n\terr := reader.Collect(ctxB, &m)\n\tif err != nil {\n\t\tlog.Error(err, \"failed to collect metrics\")\n\t\treturn\n\t}\n\tif err := exporter.Export(ctxB, &m); err != nil {\n\t\tlog.Error(err, \"failed to export metrics\")\n\t}\n}\n\nfunc RecordMetricsRemover(ctx context.Context, p metric.MeterProvider, totalRemoved int64) error {\n\tcounter, err := p.Meter(\"eraser\").Int64Counter(ImagesRemovedCounter, metric.WithDescription(ImagesRemovedDescription), metric.WithUnit(\"1\"))\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tcounter.Add(ctx, totalRemoved, metric.WithAttributes(attribute.String(\"node name\", os.Getenv(\"NODE_NAME\"))))\n\treturn nil\n}\n\nfunc RecordMetricsScanner(ctx context.Context, p metric.MeterProvider, totalVulnerable int) error {\n\tcounter, err := p.Meter(\"eraser\").Int64Counter(\"vulnerable_images_run_total\", metric.WithDescription(\"total vulnerable images\"), metric.WithUnit(\"1\"))\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tcounter.Add(ctx, int64(totalVulnerable), metric.WithAttributes(attribute.String(\"node name\", os.Getenv(\"NODE_NAME\"))))\n\treturn nil\n}\n\nfunc RecordMetricsController(ctx context.Context, p metric.MeterProvider, jobDuration float64, podsCompleted int64, podsFailed int64) error {\n\tduration, err := p.Meter(\"eraser\").Float64Histogram(\"imagejob_duration_run_seconds\", metric.WithDescription(\"duration of imagejob\"), metric.WithUnit(\"s\"))\n\tif err != nil {\n\t\treturn err\n\t}\n\tduration.Record(ctx, jobDuration)\n\n\tcompleted, err := p.Meter(\"eraser\").Int64Counter(\"pods_completed_run_total\", metric.WithDescription(\"total pods completed\"), metric.WithUnit(\"1\"))\n\tif err != nil {\n\t\treturn err\n\t}\n\tcompleted.Add(ctx, podsCompleted)\n\n\tfailed, err := 
p.Meter(\"eraser\").Int64Counter(\"pods_failed_run_total\", metric.WithDescription(\"total pods failed\"), metric.WithUnit(\"1\"))\n\tif err != nil {\n\t\treturn err\n\t}\n\tfailed.Add(ctx, podsFailed)\n\n\tjobTotal, err := p.Meter(\"eraser\").Int64Counter(\"imagejob_run_total\", metric.WithDescription(\"total number of imagejobs completed\"), metric.WithUnit(\"1\"))\n\tif err != nil {\n\t\treturn err\n\t}\n\tjobTotal.Add(ctx, 1)\n\n\treturn nil\n}\n"
  },
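  {
    "path": "docs/examples/metrics/main.go",
    "content": "package main\n\n// Illustrative sketch, not part of the upstream tree: wires together the\n// helpers from pkg/metrics the same way the scanner template does. The\n// endpoint value is an assumption; in-cluster it comes from the\n// OTEL_EXPORTER_OTLP_ENDPOINT environment variable.\n\nimport (\n\t\"context\"\n\n\t\"github.com/eraser-dev/eraser/pkg/metrics\"\n\tlogf \"sigs.k8s.io/controller-runtime/pkg/log\"\n)\n\nfunc main() {\n\tctx := context.Background()\n\tlog := logf.Log.WithName(\"metrics-example\")\n\n\texporter, reader, provider := metrics.ConfigureMetrics(ctx, log, \"otel-collector:4318\")\n\tif provider == nil {\n\t\treturn // exporter could not be initialized; error already logged\n\t}\n\n\t// record one removed image, then flush everything collected so far\n\tif err := metrics.RecordMetricsRemover(ctx, provider, 1); err != nil {\n\t\tlog.Error(err, \"error recording metrics\")\n\t\treturn\n\t}\n\tmetrics.ExportMetrics(log, exporter, reader)\n}\n"
  },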
  {
    "path": "pkg/metrics/metrics_test.go",
    "content": "package metrics\n\nimport (\n\t\"context\"\n\t\"testing\"\n\n\t\"github.com/go-logr/logr\"\n\t\"github.com/stretchr/testify/assert\"\n\t\"github.com/stretchr/testify/require\"\n\t\"go.opentelemetry.io/otel\"\n\t\"go.opentelemetry.io/otel/metric\"\n\tsdkmetric \"go.opentelemetry.io/otel/sdk/metric\"\n\t\"go.opentelemetry.io/otel/sdk/metric/metricdata\"\n)\n\nfunc TestConfigureMetrics(t *testing.T) {\n\texporter, reader, provider := ConfigureMetrics(context.Background(), logr.Discard(), \"otel-collector:4318\")\n\tif exporter == nil {\n\t\tt.Fatal(\"unable to configure exporter\")\n\t}\n\tif reader == nil {\n\t\tt.Fatal(\"unable to configure exporter\")\n\t}\n\tif provider == nil {\n\t\tt.Fatal(\"unable to configure exporter\")\n\t}\n\n\totel.SetMeterProvider(provider)\n}\n\nfunc TestRecordMetrics(t *testing.T) {\n\tif err := RecordMetricsRemover(context.Background(), otel.GetMeterProvider(), 1); err != nil {\n\t\tt.Fatal(\"could not record eraser metrics\")\n\t}\n\n\tif err := RecordMetricsScanner(context.Background(), otel.GetMeterProvider(), 1); err != nil {\n\t\tt.Fatal(\"could not record scanner metrics\")\n\t}\n\n\tif err := RecordMetricsController(context.Background(), otel.GetMeterProvider(), 1.0, 1, 1); err != nil {\n\t\tt.Fatal(\"could not record scanner metrics\")\n\t}\n}\n\nfunc TestMeterCreatesInstrument(t *testing.T) {\n\ttestCases := []struct {\n\t\tname string\n\t\tfn   func(*testing.T, metric.Meter)\n\t}{\n\t\t{\n\t\t\tname: \"AsyncInt64Count\",\n\t\t\tfn: func(t *testing.T, m metric.Meter) {\n\t\t\t\tctr, err := m.Int64Counter(ImagesRemovedCounter)\n\t\t\t\tassert.NoError(t, err)\n\t\t\t\tctr.Add(context.Background(), 1)\n\t\t\t\tassert.NoError(t, err)\n\t\t\t},\n\t\t},\n\t}\n\n\tfor _, tt := range testCases {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\trdr := sdkmetric.NewManualReader()\n\t\t\tm := sdkmetric.NewMeterProvider(sdkmetric.WithReader(rdr)).Meter(\"eraser\")\n\n\t\t\ttt.fn(t, m)\n\n\t\t\tvar rm metricdata.ResourceMetrics\n\t\t\terr := rdr.Collect(context.Background(), &rm)\n\t\t\tassert.NoError(t, err)\n\n\t\t\trequire.Len(t, rm.ScopeMetrics, 1)\n\t\t\tsm := rm.ScopeMetrics[0]\n\t\t\trequire.Len(t, sm.Metrics, 1)\n\t\t\tgot := sm.Metrics[0]\n\n\t\t\tif got.Name != ImagesRemovedCounter {\n\t\t\t\tt.Error(\"ImagesRemovedCounter not created\")\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "pkg/scanners/template/scanner_template.go",
    "content": "package template\n\nimport (\n\t\"context\"\n\t\"io\"\n\t\"os\"\n\t\"os/signal\"\n\t\"syscall\"\n\n\t\"github.com/eraser-dev/eraser/api/unversioned\"\n\t\"github.com/go-logr/logr\"\n\t\"golang.org/x/sys/unix\"\n\n\t\"github.com/eraser-dev/eraser/pkg/metrics\"\n\tutil \"github.com/eraser-dev/eraser/pkg/utils\"\n\t\"go.opentelemetry.io/otel\"\n\tlogf \"sigs.k8s.io/controller-runtime/pkg/log\"\n)\n\n// interface for custom scanners to communicate with Eraser.\ntype ImageProvider interface {\n\t// receive list of all non-running, non-excluded images from collector container to process.\n\tReceiveImages() ([]unversioned.Image, error)\n\n\t// sends non-compliant images found to remover container for removal.\n\tSendImages(nonCompliantImages, failedImages []unversioned.Image) error\n\n\t// completes scanner communication process - required after custom scanning finishes.\n\tFinish() error\n}\n\ntype config struct {\n\tctx                    context.Context\n\tlog                    logr.Logger\n\tdeleteScanFailedImages bool\n\tdeleteEOLImages        bool\n\treportMetrics          bool\n}\n\ntype ConfigFunc func(*config)\n\nfunc NewImageProvider(funcs ...ConfigFunc) ImageProvider {\n\t// default config\n\tcfg := &config{\n\t\tctx:                    context.Background(),\n\t\tlog:                    logf.Log.WithName(\"scanner\"),\n\t\tdeleteScanFailedImages: true,\n\t\treportMetrics:          false,\n\t}\n\n\t// apply user config\n\tfor _, f := range funcs {\n\t\tf(cfg)\n\t}\n\n\treturn cfg\n}\n\nfunc (cfg *config) ReceiveImages() ([]unversioned.Image, error) {\n\tvar err error\n\n\tif err := unix.Mkfifo(util.EraseCompleteScanPath, util.PipeMode); err != nil {\n\t\tcfg.log.Error(err, \"failed to create pipe\", \"pipeName\", util.EraseCompleteScanPath)\n\t\treturn nil, err\n\t}\n\n\terr = os.Chmod(util.EraseCompleteScanPath, 0o600)\n\tif err != nil {\n\t\tcfg.log.Error(err, \"unable to enable pipe for writing\", \"pipeName\", util.EraseCompleteScanPath)\n\t\treturn nil, err\n\t}\n\n\tallImages, err := util.ReadCollectScanPipe(cfg.ctx)\n\tif err != nil {\n\t\tcfg.log.Error(err, \"unable to read images from collect scan pipe\")\n\t\treturn nil, err\n\t}\n\n\treturn allImages, nil\n}\n\nfunc (cfg *config) SendImages(nonCompliantImages, failedImages []unversioned.Image) error {\n\tif cfg.deleteScanFailedImages {\n\t\tnonCompliantImages = append(nonCompliantImages, failedImages...)\n\t}\n\n\tif err := util.WriteScanErasePipe(nonCompliantImages); err != nil {\n\t\tcfg.log.Error(err, \"unable to write non-compliant images to scan erase pipe\")\n\t\treturn err\n\t}\n\n\tif cfg.reportMetrics {\n\t\tctx, cancel := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)\n\t\tdefer cancel()\n\n\t\texporter, reader, provider := metrics.ConfigureMetrics(ctx, cfg.log, os.Getenv(\"OTEL_EXPORTER_OTLP_ENDPOINT\"))\n\t\totel.SetMeterProvider(provider)\n\n\t\tif err := metrics.RecordMetricsScanner(ctx, otel.GetMeterProvider(), len(nonCompliantImages)); err != nil {\n\t\t\tcfg.log.Error(err, \"error recording metrics\")\n\t\t\treturn err\n\t\t}\n\n\t\tmetrics.ExportMetrics(cfg.log, exporter, reader)\n\t}\n\treturn nil\n}\n\nfunc (cfg *config) Finish() error {\n\tfile, err := os.OpenFile(util.EraseCompleteScanPath, os.O_RDONLY, 0)\n\tif err != nil {\n\t\tcfg.log.Error(err, \"failed to open pipe\", \"pipeName\", util.EraseCompleteScanPath)\n\t\treturn err\n\t}\n\n\tdata, err := io.ReadAll(file)\n\tif err != nil {\n\t\tcfg.log.Error(err, \"failed to read pipe\", \"pipeName\", 
util.EraseCompleteScanPath)\n\t\treturn err\n\t}\n\n\tif err := file.Close(); err != nil {\n\t\tcfg.log.Error(err, \"failed to close pipe\", \"pipeName\", util.EraseCompleteScanPath)\n\t\treturn err\n\t}\n\n\tif string(data) != util.EraseCompleteMessage {\n\t\tcfg.log.Info(\"garbage in pipe\", \"pipeName\", util.EraseCompleteScanPath, \"in_pipe\", string(data))\n\t\treturn err\n\t}\n\n\tcfg.log.Info(\"scanning complete, exiting\")\n\treturn nil\n}\n\n// provide custom context.\nfunc WithContext(ctx context.Context) ConfigFunc {\n\treturn func(cfg *config) {\n\t\tcfg.ctx = ctx\n\t}\n}\n\n// sets deleteScanFailedImages flag.\nfunc WithDeleteScanFailedImages(deleteScanFailedImages bool) ConfigFunc {\n\treturn func(cfg *config) {\n\t\tcfg.deleteScanFailedImages = deleteScanFailedImages\n\t}\n}\n\n// sets deleteEOLimages flag.\nfunc WithDeleteEOLImages(deleteEOLImages bool) ConfigFunc {\n\treturn func(cfg *config) {\n\t\tcfg.deleteEOLImages = deleteEOLImages\n\t}\n}\n\n// provide custom logger.\nfunc WithLogger(log logr.Logger) ConfigFunc {\n\treturn func(cfg *config) {\n\t\tcfg.log = log\n\t}\n}\n\n// sets boolean for recording metrics.\nfunc WithMetrics(reportMetrics bool) ConfigFunc {\n\treturn func(cfg *config) {\n\t\tcfg.reportMetrics = reportMetrics\n\t}\n}\n"
  },
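  {
    "path": "docs/examples/custom_scanner/main.go",
    "content": "package main\n\n// Illustrative sketch of a custom scanner built on pkg/scanners/template,\n// not part of the upstream tree. It receives the collector's image list,\n// flags every image whose ID appears in a hypothetical deny list, and hands\n// the result to the remover. The deny-list policy is an assumption; a real\n// scanner substitutes its own compliance check here.\n\nimport (\n\t\"os\"\n\n\t\"github.com/eraser-dev/eraser/api/unversioned\"\n\t\"github.com/eraser-dev/eraser/pkg/scanners/template\"\n\tlogf \"sigs.k8s.io/controller-runtime/pkg/log\"\n)\n\nfunc main() {\n\tlog := logf.Log.WithName(\"scanner\").WithValues(\"provider\", \"example\")\n\n\tprovider := template.NewImageProvider(\n\t\ttemplate.WithLogger(log),\n\t\ttemplate.WithMetrics(false),\n\t)\n\n\tallImages, err := provider.ReceiveImages()\n\tif err != nil {\n\t\tos.Exit(1)\n\t}\n\n\t// hypothetical policy: any image ID in this set is non-compliant\n\tdenied := map[string]struct{}{\"sha256:deadbeef\": {}}\n\n\tnonCompliant := []unversioned.Image{}\n\tfor _, img := range allImages {\n\t\tif _, bad := denied[img.ImageID]; bad {\n\t\t\tnonCompliant = append(nonCompliant, img)\n\t\t}\n\t}\n\n\t// second argument is the failed-scan list; none in this sketch\n\tif err := provider.SendImages(nonCompliant, nil); err != nil {\n\t\tos.Exit(1)\n\t}\n\n\t// blocks until the remover reports completion over the shared pipe\n\tif err := provider.Finish(); err != nil {\n\t\tos.Exit(1)\n\t}\n}\n"
  },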
  {
    "path": "pkg/scanners/trivy/helpers.go",
    "content": "package main\n\nimport (\n\t\"os\"\n\n\tunversioned \"github.com/eraser-dev/eraser/api/unversioned\"\n\t\"k8s.io/apimachinery/pkg/util/yaml\"\n)\n\nfunc loadConfig(filename string) (Config, error) {\n\tcfg := *DefaultConfig()\n\n\t//nolint:gosec // G304: Reading config file is intended functionality\n\tb, err := os.ReadFile(filename)\n\tif err != nil {\n\t\tlog.Error(err, \"unable to read eraser config\")\n\t\treturn cfg, err\n\t}\n\n\tvar eraserConfig unversioned.EraserConfig\n\terr = yaml.Unmarshal(b, &eraserConfig)\n\tif err != nil {\n\t\tlog.Error(err, \"unable to unmarshal eraser config\")\n\t}\n\n\tscanCfgYaml := eraserConfig.Components.Scanner.Config\n\tscanCfgBytes := []byte(\"\")\n\tif scanCfgYaml != nil {\n\t\tscanCfgBytes = []byte(*scanCfgYaml)\n\t}\n\n\terr = yaml.Unmarshal(scanCfgBytes, &cfg)\n\tif err != nil {\n\t\tlog.Error(err, \"unable to unmarshal scanner config\")\n\t\treturn cfg, err\n\t}\n\n\treturn cfg, nil\n}\n"
  },
  {
    "path": "pkg/scanners/trivy/trivy.go",
    "content": "package main\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"flag\"\n\t\"fmt\"\n\t\"net/http\"\n\t\"os\"\n\t\"time\"\n\n\t\"github.com/eraser-dev/eraser/api/unversioned\"\n\n\t_ \"net/http/pprof\"\n\n\ttrivylogger \"github.com/aquasecurity/trivy/pkg/log\"\n\t\"github.com/eraser-dev/eraser/pkg/logger\"\n\t\"github.com/eraser-dev/eraser/pkg/scanners/template\"\n\t\"github.com/eraser-dev/eraser/pkg/utils\"\n\tlogf \"sigs.k8s.io/controller-runtime/pkg/log\"\n)\n\nconst (\n\tgeneralErr = 1\n\n\tseverityCritical = \"CRITICAL\"\n\tseverityHigh     = \"HIGH\"\n\tseverityMedium   = \"MEDIUM\"\n\tseverityLow      = \"LOW\"\n\tseverityUnknown  = \"UNKNOWN\"\n\n\tvulnTypeOs      = \"os\"\n\tvulnTypeLibrary = \"library\"\n\n\tsecurityCheckVuln   = \"vuln\"\n\tsecurityCheckConfig = \"config\"\n\tsecurityCheckSecret = \"secret\"\n\n\tstatusUnknown            = \"unknown\"\n\tstatusAffected           = \"affected\"\n\tstatusFixed              = \"fixed\"\n\tstatusUnderInvestigation = \"under_investigation\"\n\tstatusWillNotFix         = \"will_not_fix\"\n\tstatusFixDeferred        = \"fix_deferred\"\n\tstatusEndOfLife          = \"end_of_life\"\n)\n\nvar (\n\tconfig        = flag.String(\"config\", \"\", \"path to the configuration file\")\n\tenableProfile = flag.Bool(\"enable-pprof\", false, \"enable pprof profiling\")\n\tprofilePort   = flag.Int(\"pprof-port\", 6060, \"port for pprof profiling. defaulted to 6060 if unspecified\")\n\n\tlog = logf.Log.WithName(\"scanner\").WithValues(\"provider\", \"trivy\")\n\n\t// This can be overwritten by the linker.\n\ttrivyVersion = \"dev\"\n)\n\nfunc main() {\n\tflag.Parse()\n\n\terr := logger.Configure()\n\tif err != nil {\n\t\tfmt.Fprintf(os.Stderr, \"error setting up logger: %s\", err)\n\t\tos.Exit(generalErr)\n\t}\n\n\tlog.Info(\"trivy version\", \"trivy version\", trivyVersion)\n\tlog.Info(\"config\", \"config\", *config)\n\n\tuserConfig := *DefaultConfig()\n\tif *config != \"\" {\n\t\tvar err error\n\t\tuserConfig, err = loadConfig(*config)\n\t\tif err != nil {\n\t\t\tlog.Error(err, \"unable to read config\")\n\t\t\tos.Exit(generalErr)\n\t\t}\n\t}\n\n\tlog.V(1).Info(\"userConfig\",\n\t\t\"json\", userConfig,\n\t\t\"struct\", fmt.Sprintf(\"%#v\\n\", userConfig),\n\t)\n\n\tif *enableProfile {\n\t\tgo runProfileServer()\n\t}\n\n\trecordMetrics := os.Getenv(\"OTEL_EXPORTER_OTLP_ENDPOINT\") != \"\"\n\n\tctx := context.Background()\n\tprovider := template.NewImageProvider(\n\t\ttemplate.WithContext(ctx),\n\t\ttemplate.WithLogger(log),\n\t\ttemplate.WithMetrics(recordMetrics),\n\t\ttemplate.WithDeleteScanFailedImages(userConfig.DeleteFailedImages),\n\t\ttemplate.WithDeleteEOLImages(userConfig.DeleteEOLImages),\n\t)\n\n\tallImages, err := provider.ReceiveImages()\n\tif err != nil {\n\t\tlog.Error(err, \"unable to read images from provider\")\n\t\tos.Exit(generalErr)\n\t}\n\n\ts, err := initScanner(&userConfig)\n\tif err != nil {\n\t\tlog.Error(err, \"error initializing scanner\")\n\t}\n\n\tvulnerableImages, failedImages, err := scan(s, allImages)\n\tif err != nil {\n\t\tlog.Error(err, \"total image scan timed out\")\n\t}\n\n\tlog.Info(\"Vulnerable\", \"Images\", vulnerableImages, \"Total count\", len(vulnerableImages))\n\n\tif len(failedImages) > 0 {\n\t\tlog.Info(\"Failed\", \"Images\", failedImages)\n\t}\n\n\terr = provider.SendImages(vulnerableImages, failedImages)\n\tif err != nil {\n\t\tlog.Error(err, \"unable to write images\")\n\t}\n\n\tlog.Info(\"scanning complete, waiting for remover to finish...\")\n\terr = provider.Finish()\n\tif err != nil 
{\n\t\tlog.Error(err, \"unable to complete scanning process\")\n\t}\n\n\tlog.Info(\"remover job completed, shutting down...\")\n}\n\nfunc runProfileServer() {\n\tserver := &http.Server{\n\t\tAddr:              fmt.Sprintf(\"localhost:%d\", *profilePort),\n\t\tReadHeaderTimeout: 3 * time.Second,\n\t}\n\terr := server.ListenAndServe()\n\tlog.Error(err, \"pprof server failed\")\n}\n\nfunc initScanner(userConfig *Config) (Scanner, error) {\n\tif userConfig == nil {\n\t\treturn nil, fmt.Errorf(\"invalid trivy scanner config\")\n\t}\n\n\ttrivylogger.InitLogger(false, false)\n\n\tuserConfig.Runtime = unversioned.RuntimeSpec{\n\t\tName:    unversioned.Runtime(os.Getenv(utils.EnvEraserRuntimeName)),\n\t\tAddress: utils.CRIPath,\n\t}\n\n\ttotalTimeout := time.Duration(userConfig.Timeout.Total)\n\ttimer := time.NewTimer(totalTimeout)\n\n\tvar s Scanner = &ImageScanner{\n\t\tconfig: *userConfig,\n\t\ttimer:  timer,\n\t}\n\treturn s, nil\n}\n\nfunc scan(s Scanner, allImages []unversioned.Image) ([]unversioned.Image, []unversioned.Image, error) {\n\tvulnerableImages := make([]unversioned.Image, 0, len(allImages))\n\tfailedImages := make([]unversioned.Image, 0, len(allImages))\n\t// track total scan job time\n\n\tfor idx, img := range allImages {\n\t\tselect {\n\t\tcase <-s.Timer().C:\n\t\t\tfailedImages = append(failedImages, allImages[idx:]...)\n\t\t\treturn vulnerableImages, failedImages, errors.New(\"image scan total timeout exceeded\")\n\t\tdefault:\n\t\t\t// Logs scan failures\n\t\t\tstatus, err := s.Scan(img)\n\t\t\tif err != nil {\n\t\t\t\tfailedImages = append(failedImages, img)\n\t\t\t\tlog.Error(err, \"scan failed\")\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tswitch status {\n\t\t\tcase StatusNonCompliant:\n\t\t\t\tlog.Info(\"vulnerable image found\", \"img\", img)\n\t\t\t\tvulnerableImages = append(vulnerableImages, img)\n\t\t\tcase StatusFailed:\n\t\t\t\tfailedImages = append(failedImages, img)\n\t\t\t}\n\t\t}\n\t}\n\n\treturn vulnerableImages, failedImages, nil\n}\n"
  },
  {
    "path": "pkg/scanners/trivy/trivy_test.go",
    "content": "package main\n"
  },
  {
    "path": "pkg/scanners/trivy/types.go",
    "content": "package main\n\nimport (\n\t\"bytes\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"os\"\n\t\"os/exec\"\n\t\"strings\"\n\t\"time\"\n\n\ttrivyTypes \"github.com/aquasecurity/trivy/pkg/types\"\n\t\"github.com/eraser-dev/eraser/api/unversioned\"\n\t\"github.com/eraser-dev/eraser/pkg/utils\"\n)\n\nconst (\n\tStatusFailed ScanStatus = iota\n\tStatusNonCompliant\n\tStatusOK\n\tImgSrcPodman     = \"podman\"\n\tImgSrcDocker     = \"docker\"\n\tImgSrcContainerd = \"containerd\"\n)\n\nconst (\n\ttrivyCommandName        = \"/trivy\"\n\ttrivyImageArg           = \"image\"\n\ttrivyJSONFormatFlag     = \"--format=json\"\n\ttrivyCacheDirFlag       = \"--cache-dir\"\n\ttrivyTimeoutFlag        = \"--timeout\"\n\ttrivyDBRepoFlag         = \"--db-repository\"\n\ttrivyIgnoreUnfixedFlag  = \"--ignore-unfixed\"\n\ttrivyVulnTypesFlag      = \"--vuln-type\"\n\ttrivySecurityChecksFlag = \"--scanners\"\n\ttrivySeveritiesFlag     = \"--severity\"\n\ttrivyRuntimeFlag        = \"--image-src\"\n\ttrivyIgnoreStatusFlag   = \"--ignore-status\"\n)\n\ntype (\n\tConfig struct {\n\t\tRuntime            unversioned.RuntimeSpec `json:\"runtime,omitempty\"`\n\t\tCacheDir           string                  `json:\"cacheDir,omitempty\"`\n\t\tDBRepo             string                  `json:\"dbRepo,omitempty\"`\n\t\tDeleteFailedImages bool                    `json:\"deleteFailedImages,omitempty\"`\n\t\tDeleteEOLImages    bool                    `json:\"deleteEOLImages,omitempty\"`\n\t\tVulnerabilities    VulnConfig              `json:\"vulnerabilities,omitempty\"`\n\t\tTimeout            TimeoutConfig           `json:\"timeout,omitempty\"`\n\t}\n\n\tVulnConfig struct {\n\t\tIgnoreUnfixed   bool     `json:\"ignoreUnfixed,omitempty\"`\n\t\tTypes           []string `json:\"types,omitempty\"`\n\t\tSecurityChecks  []string `json:\"securityChecks,omitempty\"`\n\t\tSeverities      []string `json:\"severities,omitempty\"`\n\t\tIgnoredStatuses []string `json:\"ignoredStatuses,omitempty\"`\n\t}\n\n\tTimeoutConfig struct {\n\t\tTotal    unversioned.Duration `json:\"total,omitempty\"`\n\t\tPerImage unversioned.Duration `json:\"perImage,omitempty\"`\n\t}\n\n\tScanStatus int\n\n\tScanner interface {\n\t\tScan(unversioned.Image) (ScanStatus, error)\n\t\tTimer() *time.Timer\n\t}\n)\n\nfunc DefaultConfig() *Config {\n\treturn &Config{\n\t\tRuntime: unversioned.RuntimeSpec{\n\t\t\tName:    unversioned.RuntimeContainerd,\n\t\t\tAddress: utils.CRIPath,\n\t\t},\n\t\tCacheDir:           \"/var/lib/trivy\",\n\t\tDBRepo:             \"ghcr.io/aquasecurity/trivy-db\",\n\t\tDeleteFailedImages: true,\n\t\tDeleteEOLImages:    true,\n\t\tVulnerabilities: VulnConfig{\n\t\t\tIgnoreUnfixed: false,\n\t\t\tTypes: []string{\n\t\t\t\tvulnTypeOs,\n\t\t\t\tvulnTypeLibrary,\n\t\t\t},\n\t\t\tSecurityChecks:  []string{securityCheckVuln},\n\t\t\tSeverities:      []string{severityCritical, severityHigh, severityMedium, severityLow},\n\t\t\tIgnoredStatuses: []string{},\n\t\t},\n\t\tTimeout: TimeoutConfig{\n\t\t\tTotal:    unversioned.Duration(time.Hour * 23),\n\t\t\tPerImage: unversioned.Duration(time.Hour),\n\t\t},\n\t}\n}\n\nfunc (c *Config) cliArgs(ref string) []string {\n\targs := []string{}\n\n\t// Global options\n\targs = append(args, trivyJSONFormatFlag)\n\n\tif c.CacheDir != \"\" {\n\t\targs = append(args, trivyCacheDirFlag, c.CacheDir)\n\t}\n\n\tif c.Timeout.PerImage != 0 {\n\t\targs = append(args, trivyTimeoutFlag, time.Duration(c.Timeout.PerImage).String())\n\t}\n\n\truntimeVar, err := c.getRuntimeVar()\n\tif err != nil {\n\t\tlog.Error(err, \"invalid 
runtime provided\")\n\t}\n\n\targs = append(args, trivyImageArg, trivyRuntimeFlag, runtimeVar)\n\n\tif c.DBRepo != \"\" {\n\t\targs = append(args, trivyDBRepoFlag, c.DBRepo)\n\t}\n\n\tif c.Vulnerabilities.IgnoreUnfixed {\n\t\targs = append(args, trivyIgnoreUnfixedFlag)\n\t}\n\n\tif len(c.Vulnerabilities.Types) > 0 {\n\t\tallVulnTypes := strings.Join(c.Vulnerabilities.Types, \",\")\n\t\targs = append(args, trivyVulnTypesFlag, allVulnTypes)\n\t}\n\n\tif len(c.Vulnerabilities.SecurityChecks) > 0 {\n\t\tallSecurityChecks := strings.Join(c.Vulnerabilities.SecurityChecks, \",\")\n\t\targs = append(args, trivySecurityChecksFlag, allSecurityChecks)\n\t}\n\n\tif len(c.Vulnerabilities.Severities) > 0 {\n\t\tallSeverities := strings.Join(c.Vulnerabilities.Severities, \",\")\n\t\targs = append(args, trivySeveritiesFlag, allSeverities)\n\t}\n\n\tif len(c.Vulnerabilities.IgnoredStatuses) > 0 {\n\t\tallIgnoredStatuses := strings.Join(c.Vulnerabilities.IgnoredStatuses, \",\")\n\t\targs = append(args, trivyIgnoreStatusFlag, allIgnoredStatuses)\n\t}\n\n\targs = append(args, ref)\n\n\treturn args\n}\n\nfunc (c *Config) getRuntimeVar() (string, error) {\n\tvar imgsrc string\n\truntimeName := c.Runtime.Name\n\tswitch runtimeName {\n\tcase unversioned.RuntimeCrio:\n\t\timgsrc = ImgSrcPodman\n\tcase unversioned.RuntimeDockerShim:\n\t\timgsrc = ImgSrcDocker\n\tcase unversioned.RuntimeContainerd, unversioned.Runtime(\"\"):\n\t\timgsrc = ImgSrcContainerd\n\tdefault:\n\t\treturn \"\", fmt.Errorf(\"invalid runtime provided: %q\", runtimeName)\n\t}\n\treturn imgsrc, nil\n}\n\ntype ImageScanner struct {\n\tconfig Config\n\ttimer  *time.Timer\n}\n\nfunc (s *ImageScanner) Scan(img unversioned.Image) (ScanStatus, error) {\n\trefs := make([]string, 0, len(img.Names)+len(img.Digests))\n\trefs = append(refs, img.Digests...)\n\trefs = append(refs, img.Names...)\n\tscanSucceeded := false\n\n\tlog.Info(\"scanning image with id\", \"imageID\", img.ImageID, \"refs\", refs)\n\tfor i := 0; i < len(refs) && !scanSucceeded; i++ {\n\t\tlog.Info(\"scanning image with ref\", \"ref\", refs[i])\n\n\t\tstdout := new(bytes.Buffer)\n\t\tstderr := new(bytes.Buffer)\n\n\t\tcliArgs := s.config.cliArgs(refs[i])\n\t\t//nolint:gosec // G204: Trivy subprocess execution is intended functionality\n\t\tcmd := exec.Command(trivyCommandName, cliArgs...)\n\t\tcmd.Stdout = stdout\n\t\tcmd.Stderr = stderr\n\t\tcmd.Env = append(cmd.Env, os.Environ()...)\n\t\tcmd.Env = setRuntimeSocketEnvVars(cmd, s.config.Runtime)\n\n\t\tlog.V(1).Info(\"scanning image ref\", \"ref\", refs[i], \"cli_invocation\", fmt.Sprintf(\"%s %s\", trivyCommandName, strings.Join(cliArgs, \" \")), \"env\", cmd.Env)\n\t\tif err := cmd.Run(); err != nil {\n\t\t\tlog.Error(err, \"error scanning image\", \"imageID\", img.ImageID, \"reference\", refs[i], \"stderr\", stderr.String())\n\t\t\tcontinue\n\t\t}\n\n\t\tvar report trivyTypes.Report\n\t\tif err := json.Unmarshal(stdout.Bytes(), &report); err != nil {\n\t\t\tlog.Error(err, \"error unmarshaling report\", \"imageID\", img.ImageID, \"reference\", refs[i], \"report\", stdout.String(), \"stderr\", stderr.String())\n\t\t\tcontinue\n\t\t}\n\n\t\tif s.config.DeleteEOLImages {\n\t\t\tif report.Metadata.OS != nil && report.Metadata.OS.Eosl {\n\t\t\t\tlog.Info(\"image is end of life\", \"imageID\", img.ImageID, \"reference\", refs[i])\n\t\t\t\treturn StatusNonCompliant, nil\n\t\t\t}\n\t\t}\n\n\t\tfor j := range report.Results {\n\t\t\tif len(report.Results[j].Vulnerabilities) > 0 {\n\t\t\t\treturn StatusNonCompliant, nil\n\t\t\t}\n\t\t}\n\n\t\t// 
causes a break from the loop\n\t\tscanSucceeded = true\n\t}\n\n\tstatus := StatusOK\n\tif !scanSucceeded {\n\t\tstatus = StatusFailed\n\t}\n\n\treturn status, nil\n}\n\nfunc setRuntimeSocketEnvVars(cmd *exec.Cmd, runtime unversioned.RuntimeSpec) []string {\n\tenvKey := \"CONTAINERD_ADDRESS\"\n\tenvVal := utils.CRIPath\n\n\tswitch runtime.Name {\n\tcase unversioned.RuntimeDockerShim:\n\t\tenvKey = \"DOCKER_HOST\"\n\tcase unversioned.RuntimeCrio:\n\t\tinfoParent, err := os.Stat(\"/run/cri\")\n\t\tif err != nil {\n\t\t\tlog.Error(err, \"unable to get permissions for cri directory\")\n\t\t}\n\n\t\tinfoSocket, err := os.Stat(utils.CRIPath)\n\t\tif err != nil {\n\t\t\tlog.Error(err, \"unable to get permissions for cri socket\")\n\t\t}\n\n\t\tif err := os.Mkdir(\"/run/podman\", infoParent.Mode().Perm()); err != nil {\n\t\t\tlog.Error(err, \"unable to create /run/podman dir\")\n\t\t}\n\n\t\tif err := os.Symlink(utils.CRIPath, \"/run/podman/podman.sock\"); err != nil {\n\t\t\tlog.Error(err, \"unable to create symlink between CRI path and /run/podman/podman.sock\")\n\t\t}\n\n\t\tif err := os.Chmod(\"/run/podman/podman.sock\", infoSocket.Mode().Perm()); err != nil {\n\t\t\tlog.Error(err, \"unable to change /run/podman/podman.sock permissions\")\n\t\t}\n\t\tenvKey = \"XDG_RUNTIME_DIR\"\n\t\tenvVal = \"/run\"\n\t}\n\n\treturn append(cmd.Env, fmt.Sprintf(\"%s=%s\", envKey, envVal))\n}\n\nfunc (s *ImageScanner) Timer() *time.Timer {\n\treturn s.timer\n}\n\nvar _ Scanner = &ImageScanner{}\n"
  },
  {
    "path": "pkg/scanners/trivy/types_test.go",
    "content": "package main\n\nimport (\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/eraser-dev/eraser/api/unversioned\"\n)\n\nconst ref = \"image:tag\"\n\nvar testDuration = unversioned.Duration(100000000000)\n\nfunc init() {\n}\n\nfunc TestCLIArgs(t *testing.T) {\n\ttype testCell struct {\n\t\tdesc     string\n\t\tconfig   Config\n\t\texpected []string\n\t}\n\n\ttests := []testCell{\n\t\t{\n\t\t\tdesc:   \"empty config\",\n\t\t\tconfig: Config{},\n\t\t\t// default container runtime is containerd\n\t\t\texpected: []string{\"--format=json\", \"image\", \"--image-src\", ImgSrcContainerd, ref},\n\t\t},\n\t\t{\n\t\t\tdesc:     \"DeleteFailedImages has no effect\",\n\t\t\tconfig:   Config{DeleteFailedImages: true},\n\t\t\texpected: []string{\"--format=json\", \"image\", \"--image-src\", ImgSrcContainerd, ref},\n\t\t},\n\t\t{\n\t\t\tdesc:     \"DeleteEOLImages has no effect\",\n\t\t\tconfig:   Config{DeleteEOLImages: true},\n\t\t\texpected: []string{\"--format=json\", \"image\", \"--image-src\", ImgSrcContainerd, ref},\n\t\t},\n\t\t{\n\t\t\tdesc:     \"alternative runtime crio\",\n\t\t\tconfig:   Config{Runtime: unversioned.RuntimeSpec{Name: unversioned.RuntimeCrio, Address: unversioned.CrioPath}},\n\t\t\texpected: []string{\"--format=json\", \"image\", \"--image-src\", ImgSrcPodman, ref},\n\t\t},\n\t\t{\n\t\t\tdesc:     \"alternative runtime dockershim\",\n\t\t\tconfig:   Config{Runtime: unversioned.RuntimeSpec{Name: unversioned.RuntimeDockerShim, Address: unversioned.DockerPath}},\n\t\t\texpected: []string{\"--format=json\", \"image\", \"--image-src\", ImgSrcDocker, ref},\n\t\t},\n\t\t{\n\t\t\tdesc:     \"with cachedir\",\n\t\t\tconfig:   Config{CacheDir: \"/var/lib/trivy\"},\n\t\t\texpected: []string{\"--format=json\", \"--cache-dir\", \"/var/lib/trivy\", \"image\", \"--image-src\", ImgSrcContainerd, ref},\n\t\t},\n\t\t{\n\t\t\tdesc:     \"with custom db repo\",\n\t\t\tconfig:   Config{DBRepo: \"example.test/db/repo\"},\n\t\t\texpected: []string{\"--format=json\", \"image\", \"--image-src\", ImgSrcContainerd, \"--db-repository\", \"example.test/db/repo\", ref},\n\t\t},\n\t\t{\n\t\t\tdesc:     \"ignore unfixed\",\n\t\t\tconfig:   Config{Vulnerabilities: VulnConfig{IgnoreUnfixed: true}},\n\t\t\texpected: []string{\"--format=json\", \"image\", \"--image-src\", ImgSrcContainerd, \"--ignore-unfixed\", ref},\n\t\t},\n\t\t{\n\t\t\tdesc:     \"specify vulnerability types\",\n\t\t\tconfig:   Config{Vulnerabilities: VulnConfig{Types: []string{\"library\", \"os\"}}},\n\t\t\texpected: []string{\"--format=json\", \"image\", \"--image-src\", ImgSrcContainerd, \"--vuln-type\", \"library,os\", ref},\n\t\t},\n\t\t{\n\t\t\tdesc:     \"specify security checks / scanners\",\n\t\t\tconfig:   Config{Vulnerabilities: VulnConfig{SecurityChecks: []string{\"license\", \"vuln\"}}},\n\t\t\texpected: []string{\"--format=json\", \"image\", \"--image-src\", ImgSrcContainerd, \"--scanners\", \"license,vuln\", ref},\n\t\t},\n\t\t{\n\t\t\tdesc:     \"specify severities\",\n\t\t\tconfig:   Config{Vulnerabilities: VulnConfig{Severities: []string{\"LOW\", \"MEDIUM\"}}},\n\t\t\texpected: []string{\"--format=json\", \"image\", \"--image-src\", ImgSrcContainerd, \"--severity\", \"LOW,MEDIUM\", ref},\n\t\t},\n\t\t{\n\t\t\tdesc:     \"specify statuses to ignore\",\n\t\t\tconfig:   Config{Vulnerabilities: VulnConfig{IgnoredStatuses: []string{statusUnknown, statusFixed, statusWillNotFix}}},\n\t\t\texpected: []string{\"--format=json\", \"image\", \"--image-src\", ImgSrcContainerd, \"--ignore-status\", 
\"unknown,fixed,will_not_fix\", ref},\n\t\t},\n\t\t{\n\t\t\tdesc:     \"total timeout has no effect\",\n\t\t\tconfig:   Config{Timeout: TimeoutConfig{Total: testDuration}},\n\t\t\texpected: []string{\"--format=json\", \"image\", \"--image-src\", ImgSrcContainerd, ref},\n\t\t},\n\t\t{\n\t\t\tdesc:     \"per-image timeout\",\n\t\t\tconfig:   Config{Timeout: TimeoutConfig{PerImage: testDuration}},\n\t\t\texpected: []string{\"--format=json\", \"--timeout\", \"1m40s\", \"image\", \"--image-src\", ImgSrcContainerd, ref},\n\t\t},\n\t\t{\n\t\t\tdesc:   \"all global options\",\n\t\t\tconfig: Config{CacheDir: \"/var/lib/trivy\", Timeout: TimeoutConfig{PerImage: testDuration}},\n\t\t\t// these are output in a consistent order\n\t\t\texpected: []string{\"--format=json\", \"--cache-dir\", \"/var/lib/trivy\", \"--timeout\", \"1m40s\", \"image\", \"--image-src\", \"containerd\", ref},\n\t\t},\n\t\t{\n\t\t\tdesc: \"all `image` options\",\n\t\t\tconfig: Config{\n\t\t\t\tRuntime: unversioned.RuntimeSpec{\n\t\t\t\t\tName:    unversioned.RuntimeCrio,\n\t\t\t\t\tAddress: unversioned.CrioPath,\n\t\t\t\t},\n\t\t\t\tDBRepo: \"example.test/db/repo\",\n\t\t\t\tVulnerabilities: VulnConfig{\n\t\t\t\t\tIgnoreUnfixed:   true,\n\t\t\t\t\tTypes:           []string{\"library\", \"os\"},\n\t\t\t\t\tSecurityChecks:  []string{\"license\", \"vuln\"},\n\t\t\t\t\tSeverities:      []string{\"LOW\", \"MEDIUM\"},\n\t\t\t\t\tIgnoredStatuses: []string{statusUnknown, statusFixed},\n\t\t\t\t},\n\t\t\t},\n\t\t\texpected: []string{\n\t\t\t\t\"--format=json\", \"image\", \"--image-src\", ImgSrcPodman, \"--db-repository\", \"example.test/db/repo\", \"--ignore-unfixed\",\n\t\t\t\t\"--vuln-type\", \"library,os\", \"--scanners\", \"license,vuln\", \"--severity\", \"LOW,MEDIUM\", \"--ignore-status\", \"unknown,fixed\", ref,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tdesc: \"all options\",\n\t\t\tconfig: Config{\n\t\t\t\tCacheDir: \"/var/lib/trivy\",\n\t\t\t\tTimeout:  TimeoutConfig{PerImage: testDuration},\n\t\t\t\tRuntime: unversioned.RuntimeSpec{\n\t\t\t\t\tName:    unversioned.RuntimeCrio,\n\t\t\t\t\tAddress: unversioned.CrioPath,\n\t\t\t\t},\n\t\t\t\tDBRepo: \"example.test/db/repo\",\n\t\t\t\tVulnerabilities: VulnConfig{\n\t\t\t\t\tIgnoreUnfixed:   true,\n\t\t\t\t\tTypes:           []string{\"os\"},\n\t\t\t\t\tSecurityChecks:  []string{\"license\", \"vuln\"},\n\t\t\t\t\tSeverities:      []string{\"CRITICAL\"},\n\t\t\t\t\tIgnoredStatuses: []string{statusUnknown, statusFixed},\n\t\t\t\t},\n\t\t\t},\n\t\t\texpected: []string{\n\t\t\t\t\"--format=json\", \"--cache-dir\", \"/var/lib/trivy\", \"--timeout\", \"1m40s\", \"image\", \"--image-src\", ImgSrcPodman,\n\t\t\t\t\"--db-repository\", \"example.test/db/repo\", \"--ignore-unfixed\", \"--vuln-type\", \"os\", \"--scanners\",\n\t\t\t\t\"license,vuln\", \"--severity\", \"CRITICAL\", \"--ignore-status\", \"unknown,fixed\", ref,\n\t\t\t},\n\t\t},\n\t}\n\n\tfor _, tt := range tests {\n\t\tt.Run(tt.desc, func(t *testing.T) {\n\t\t\tactual := tt.config.cliArgs(ref)\n\t\t\tif len(actual) != len(tt.expected) {\n\t\t\t\tt.Logf(\"expected resulting length to be %d, was actually %d\", len(actual), len(tt.expected))\n\t\t\t\tt.Fail()\n\t\t\t}\n\n\t\t\tfor i := 0; i < len(actual); i++ {\n\t\t\t\tif actual[i] != tt.expected[i] {\n\t\t\t\t\tt.Logf(\"expected argument %s in position %d, was actually %s\", tt.expected[i], i, actual[i])\n\t\t\t\t\tt.Fail()\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif t.Failed() {\n\t\t\t\tt.Logf(\"expected result `%s`, but got `%s`\", strings.Join(tt.expected, \" \"), strings.Join(actual, \" 
\"))\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "pkg/utils/flag.go",
    "content": "package utils\n\nimport (\n\t\"fmt\"\n)\n\ntype MultiFlag []string\n\nfunc (nss *MultiFlag) String() string {\n\treturn fmt.Sprintf(\"%#v\", nss)\n}\n\nfunc (nss *MultiFlag) Set(s string) error {\n\t*nss = append(*nss, s)\n\treturn nil\n}\n"
  },
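  {
    "path": "pkg/utils/flag_example_test.go",
    "content": "package utils\n\n// Illustrative usage sketch: this file and its test are assumptions added\n// for documentation, not part of the original source tree. Because MultiFlag\n// satisfies the standard library's flag.Value interface, it can be\n// registered with flag.Var so that a repeatable flag accumulates every\n// occurrence into a []string.\n\nimport (\n\t\"flag\"\n\t\"testing\"\n)\n\nfunc TestMultiFlagCollectsRepeatedValues(t *testing.T) {\n\tvar nss MultiFlag\n\tfs := flag.NewFlagSet(\"example\", flag.ContinueOnError)\n\tfs.Var(&nss, \"namespace\", \"may be repeated\")\n\n\t// Each occurrence of --namespace appends to the underlying slice.\n\tif err := fs.Parse([]string{\"--namespace\", \"kube-system\", \"--namespace\", \"eraser-system\"}); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tif len(nss) != 2 || nss[0] != \"kube-system\" || nss[1] != \"eraser-system\" {\n\t\tt.Fatalf(\"unexpected values: %#v\", nss)\n\t}\n}\n"
  },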
  {
    "path": "pkg/utils/pod_info.go",
    "content": "package utils\n\nimport \"os\"\n\nfunc GetNamespace() string {\n\tns, found := os.LookupEnv(\"POD_NAMESPACE\")\n\tif !found {\n\t\treturn \"eraser-system\"\n\t}\n\treturn ns\n}\n"
  },
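  {
    "path": "pkg/utils/pod_info_example_test.go",
    "content": "package utils\n\n// Illustrative sketch: this file is an assumption added for documentation,\n// not part of the original source tree. GetNamespace reads POD_NAMESPACE\n// (typically injected via the Kubernetes downward API) and falls back to\n// \"eraser-system\" when the variable is unset.\n\nimport \"testing\"\n\nfunc TestGetNamespaceUsesEnvOverride(t *testing.T) {\n\t// t.Setenv restores the previous value when the test finishes.\n\tt.Setenv(\"POD_NAMESPACE\", \"custom-ns\")\n\n\tif got := GetNamespace(); got != \"custom-ns\" {\n\t\tt.Errorf(\"expected %q, got %q\", \"custom-ns\", got)\n\t}\n}\n"
  },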
  {
    "path": "pkg/utils/security_context.go",
    "content": "package utils\n\nimport (\n\tcorev1 \"k8s.io/api/core/v1\"\n)\n\nvar trueval = true\n\nvar SharedSecurityContext = &corev1.SecurityContext{\n\tCapabilities: &corev1.Capabilities{\n\t\tDrop: []corev1.Capability{\"ALL\"},\n\t},\n\tReadOnlyRootFilesystem: &trueval,\n\tSeccompProfile: &corev1.SeccompProfile{\n\t\tType: corev1.SeccompProfileTypeRuntimeDefault,\n\t},\n}\n"
  },
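  {
    "path": "pkg/utils/security_context_example_test.go",
    "content": "package utils\n\n// Illustrative sketch: this file is an assumption added for documentation,\n// not part of the original source tree. SharedSecurityContext drops all\n// capabilities, enforces a read-only root filesystem, and applies the\n// RuntimeDefault seccomp profile; the test below shows it attached to a\n// container spec.\n\nimport (\n\t\"testing\"\n\n\tcorev1 \"k8s.io/api/core/v1\"\n)\n\nfunc TestSharedSecurityContextHardening(t *testing.T) {\n\tc := corev1.Container{Name: \"example\", SecurityContext: SharedSecurityContext}\n\n\tif c.SecurityContext.ReadOnlyRootFilesystem == nil || !*c.SecurityContext.ReadOnlyRootFilesystem {\n\t\tt.Error(\"expected a read-only root filesystem\")\n\t}\n\tif c.SecurityContext.SeccompProfile.Type != corev1.SeccompProfileTypeRuntimeDefault {\n\t\tt.Error(\"expected the RuntimeDefault seccomp profile\")\n\t}\n}\n"
  },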
  {
    "path": "pkg/utils/utils.go",
    "content": "package utils\n\nimport (\n\t\"context\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"net\"\n\t\"net/url\"\n\t\"os\"\n\t\"strings\"\n\t\"time\"\n\n\t\"golang.org/x/sys/unix\"\n\t\"google.golang.org/grpc\"\n\t\"google.golang.org/grpc/credentials/insecure\"\n\tv1 \"k8s.io/cri-api/pkg/apis/runtime/v1\"\n\n\t\"github.com/eraser-dev/eraser/api/unversioned\"\n)\n\nconst (\n\t// unixProtocol is the network protocol of unix socket.\n\tunixProtocol             = \"unix\"\n\tPipeMode                 = 0o644\n\tScanErasePath            = \"/run/eraser.sh/shared-data/scanErase\"\n\tCollectScanPath          = \"/run/eraser.sh/shared-data/collectScan\"\n\tEraseCompleteCollectPath = \"/run/eraser.sh/shared-data/eraseCompleteCollect\"\n\tEraseCompleteMessage     = \"complete\"\n\tEraseCompleteScanPath    = \"/run/eraser.sh/shared-data/eraseCompleteScan\"\n\n\tCRIPath = \"/run/cri/cri.sock\"\n\n\tEnvEraserRuntimeName = \"ERASER_RUNTIME_NAME\"\n)\n\ntype ExclusionList struct {\n\tExcluded []string `json:\"excluded\"`\n}\n\nvar (\n\tErrProtocolNotSupported  = errors.New(\"protocol not supported\")\n\tErrEndpointDeprecated    = errors.New(\"endpoint is deprecated, please consider using full url format\")\n\tErrOnlySupportUnixSocket = errors.New(\"only support unix socket endpoint\")\n)\n\nfunc GetConn(ctx context.Context, socketPath string) (conn *grpc.ClientConn, err error) {\n\taddr, dialer, err := getAddressAndDialer(socketPath)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\t//nolint:staticcheck // SA1019: grpc.DialContext is deprecated but maintains required blocking behavior\n\treturn grpc.DialContext(\n\t\tctx,\n\t\taddr,\n\t\t//nolint:staticcheck // SA1019: grpc.WithBlock is deprecated but ensures synchronous CRI connection\n\t\tgrpc.WithBlock(),\n\t\tgrpc.WithTransportCredentials(insecure.NewCredentials()),\n\t\tgrpc.WithContextDialer(dialer),\n\t)\n}\n\nfunc getAddressAndDialer(endpoint string) (string, func(ctx context.Context, addr string) (net.Conn, error), error) {\n\tprotocol, addr, err := ParseEndpointWithFallbackProtocol(endpoint, unixProtocol)\n\tif err != nil {\n\t\treturn \"\", nil, err\n\t}\n\tif protocol != unixProtocol {\n\t\treturn \"\", nil, ErrOnlySupportUnixSocket\n\t}\n\n\treturn addr, dial, nil\n}\n\nfunc dial(ctx context.Context, addr string) (net.Conn, error) {\n\treturn (&net.Dialer{}).DialContext(ctx, unixProtocol, addr)\n}\n\nfunc ParseEndpointWithFallbackProtocol(endpoint string, fallbackProtocol string) (protocol string, addr string, err error) {\n\tif protocol, addr, err = ParseEndpoint(endpoint); err != nil && protocol == \"\" {\n\t\tfallbackEndpoint := fallbackProtocol + \"://\" + endpoint\n\t\tprotocol, addr, err = ParseEndpoint(fallbackEndpoint)\n\t\tif err != nil {\n\t\t\treturn \"\", \"\", err\n\t\t}\n\t}\n\treturn protocol, addr, err\n}\n\nfunc ParseEndpoint(endpoint string) (string, string, error) {\n\tu, err := url.Parse(endpoint)\n\tif err != nil {\n\t\treturn \"\", \"\", fmt.Errorf(\"error while parsing: %w\", err)\n\t}\n\n\tswitch u.Scheme {\n\tcase \"tcp\":\n\t\treturn \"tcp\", u.Host, nil\n\tcase \"unix\":\n\t\treturn \"unix\", u.Path, nil\n\tcase \"\":\n\t\treturn \"\", \"\", fmt.Errorf(\"using %q as %w\", endpoint, ErrEndpointDeprecated)\n\tdefault:\n\t\treturn u.Scheme, \"\", fmt.Errorf(\"%q: %w\", u.Scheme, ErrProtocolNotSupported)\n\t}\n}\n\nfunc ListImages(ctx context.Context, images v1.ImageServiceClient) (list []*v1.Image, err error) {\n\trequest := &v1.ListImagesRequest{Filter: nil}\n\n\tresp, err := 
images.ListImages(ctx, request)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn resp.Images, nil\n}\n\nfunc ListContainers(ctx context.Context, runtime v1.RuntimeServiceClient) (list []*v1.Container, err error) {\n\tresp, err := runtime.ListContainers(ctx, new(v1.ListContainersRequest))\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn resp.Containers, nil\n}\n\nfunc GetRunningImages(containers []*v1.Container, idToImageMap map[string]unversioned.Image) map[string]string {\n\t// Images that are running\n\t// map of (digest | tag) -> digest\n\trunningImages := make(map[string]string)\n\tfor _, container := range containers {\n\t\tcurr := container.Image\n\t\timageID := curr.GetImage()\n\t\trunningImages[imageID] = imageID\n\n\t\tfor _, name := range idToImageMap[imageID].Names {\n\t\t\trunningImages[name] = imageID\n\t\t}\n\n\t\tfor _, digest := range idToImageMap[imageID].Digests {\n\t\t\trunningImages[digest] = imageID\n\t\t}\n\t}\n\treturn runningImages\n}\n\nfunc GetNonRunningImages(runningImages map[string]string, allImages []unversioned.Image, idToImageMap map[string]unversioned.Image) map[string]string {\n\t// Images that aren't running\n\t// map of (digest | tag) -> digest\n\tnonRunningImages := make(map[string]string)\n\n\tfor _, img := range allImages {\n\t\timageID := img.ImageID\n\t\tif _, isRunning := runningImages[imageID]; !isRunning {\n\t\t\tnonRunningImages[imageID] = imageID\n\n\t\t\tfor _, name := range idToImageMap[imageID].Names {\n\t\t\t\tnonRunningImages[name] = imageID\n\t\t\t}\n\n\t\t\tfor _, digest := range idToImageMap[imageID].Digests {\n\t\t\t\tnonRunningImages[digest] = imageID\n\t\t\t}\n\t\t}\n\t}\n\n\treturn nonRunningImages\n}\n\nfunc IsExcluded(excluded map[string]struct{}, img string, idToImageMap map[string]unversioned.Image) bool {\n\tif len(excluded) == 0 {\n\t\treturn false\n\t}\n\n\t// check if img excluded by digest\n\tif _, contains := excluded[img]; contains {\n\t\treturn true\n\t}\n\n\t// check if img excluded by name\n\tfor _, imgName := range idToImageMap[img].Names {\n\t\tif _, contains := excluded[imgName]; contains {\n\t\t\treturn true\n\t\t}\n\t}\n\n\tfor _, digest := range idToImageMap[img].Digests {\n\t\tif _, contains := excluded[digest]; contains {\n\t\t\treturn true\n\t\t}\n\t}\n\n\t// look for excluded repository values and names without tag\n\tfor key := range excluded {\n\t\t// if excluded key ends in /*, check image with pattern match\n\t\tif strings.HasSuffix(key, \"/*\") {\n\t\t\t// store repository name\n\t\t\trepo := strings.Split(key, \"*\")\n\n\t\t\t// check if img is part of repo\n\t\t\tif match := strings.HasPrefix(img, repo[0]); match {\n\t\t\t\treturn true\n\t\t\t}\n\n\t\t\t// retrieve and check by name in the case img is digest\n\t\t\tfor _, imgName := range idToImageMap[img].Names {\n\t\t\t\tif match := strings.HasPrefix(imgName, repo[0]); match {\n\t\t\t\t\treturn true\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tfor _, digest := range idToImageMap[img].Digests {\n\t\t\t\tif match := strings.HasPrefix(digest, repo[0]); match {\n\t\t\t\t\treturn true\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t// if excluded key ends in :*, check image with pattern match\n\t\tif strings.HasSuffix(key, \":*\") {\n\t\t\t// store image name\n\t\t\timagePath := strings.Split(key, \":\")\n\n\t\t\tif match := strings.HasPrefix(img, imagePath[0]); match {\n\t\t\t\treturn true\n\t\t\t}\n\n\t\t\t// retrieve and check by name in the case img is digest\n\t\t\tfor _, imgName := range idToImageMap[img].Names {\n\t\t\t\tif match := strings.HasPrefix(imgName, imagePath[0]); match {\n\t\t\t\t\treturn true\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tfor _, digest := range idToImageMap[img].Digests {\n\t\t\t\tif match := strings.HasPrefix(digest, imagePath[0]); match {\n\t\t\t\t\treturn true\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\treturn false\n}\n\nfunc ParseImageList(path string) ([]string, error) {\n\timagelist := []string{}\n\t//nolint:gosec // G304: Reading image list file is intended functionality\n\tdata, err := os.ReadFile(path)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tif err := json.Unmarshal(data, &imagelist); err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn imagelist, nil\n}\n\nfunc ParseExcluded() (map[string]struct{}, error) {\n\texcludedMap := make(map[string]struct{})\n\tvar excludedList []string\n\n\tfiles, err := os.ReadDir(\"./\")\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tfor _, file := range files {\n\t\tif strings.HasPrefix(file.Name(), \"exclude-\") {\n\t\t\ttemp, err := readConfigMap(file.Name())\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\texcludedList = append(excludedList, temp...)\n\t\t}\n\t}\n\n\tfor _, img := range excludedList {\n\t\texcludedMap[img] = struct{}{}\n\t}\n\n\treturn excludedMap, nil\n}\n\nfunc BoolPtr(b bool) *bool {\n\treturn &b\n}\n\nfunc readConfigMap(path string) ([]string, error) {\n\tvar fileName string\n\n\tfiles, err := os.ReadDir(path)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tfor _, f := range files {\n\t\tif strings.HasSuffix(f.Name(), \".json\") {\n\t\t\tfileName = f.Name()\n\t\t\tbreak\n\t\t}\n\t}\n\n\tvar images []string\n\t//nolint:gosec // G304: Reading excluded images file is intended functionality\n\tdata, err := os.ReadFile(path + \"/\" + fileName)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tvar result ExclusionList\n\tif err := json.Unmarshal(data, &result); err != nil {\n\t\treturn nil, err\n\t}\n\n\timages = append(images, result.Excluded...)\n\n\treturn images, nil\n}\n\nfunc ReadCollectScanPipe(ctx context.Context) ([]unversioned.Image, error) {\n\ttimer := time.NewTimer(time.Second)\n\tif !timer.Stop() {\n\t\t<-timer.C\n\t}\n\tdefer timer.Stop()\n\n\tvar f *os.File\n\tfor {\n\t\tvar err error\n\n\t\tf, err = os.OpenFile(CollectScanPath, os.O_RDONLY, 0)\n\t\tif err == nil {\n\t\t\tbreak\n\t\t}\n\t\tif !os.IsNotExist(err) {\n\t\t\treturn nil, err\n\t\t}\n\n\t\ttimer.Reset(time.Second)\n\t\tselect {\n\t\tcase <-ctx.Done():\n\t\t\treturn nil, ctx.Err()\n\t\tcase <-timer.C:\n\t\t\tcontinue\n\t\t}\n\t}\n\tdefer f.Close()\n\n\t// the pipe carries a JSON-encoded []unversioned.Image\n\tdata, err := io.ReadAll(f)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tallImages := []unversioned.Image{}\n\tif err = json.Unmarshal(data, &allImages); err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn allImages, nil\n}\n\nfunc WriteScanErasePipe(vulnerableImages []unversioned.Image) error {\n\tdata, err := json.Marshal(vulnerableImages)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif err = unix.Mkfifo(ScanErasePath, PipeMode); err != nil {\n\t\treturn err\n\t}\n\n\tfile, err := os.OpenFile(ScanErasePath, os.O_WRONLY, 0)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif _, err := file.Write(data); err != nil {\n\t\treturn err\n\t}\n\n\treturn file.Close()\n}\n\nfunc ProcessRepoDigests(repoDigests []string) ([]string, []error) {\n\tdigests := []string{}\n\terrs := []error{}\n\n\tdigestSet := make(map[string]struct{})\n\tfor _, repoDigest := range repoDigests {\n\t\ts := strings.Split(repoDigest, \"@\")\n\t\tif len(s) < 2 {\n\t\t\terrs = 
append(errs, fmt.Errorf(\"repoDigest not formatted correctly: %s\", repoDigest))\n\t\t\tcontinue\n\t\t}\n\t\tdigest := s[1]\n\t\tdigestSet[digest] = struct{}{}\n\t}\n\n\tfor digest := range digestSet {\n\t\tdigests = append(digests, digest)\n\t}\n\n\treturn digests, errs\n}\n"
  },
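  {
    "path": "pkg/utils/utils_example_test.go",
    "content": "package utils\n\n// Illustrative sketches: this file and its inputs are assumptions added for\n// documentation, not part of the original source tree. ProcessRepoDigests\n// splits \"repo@digest\" references, deduplicates the digest portion, and\n// reports malformed entries; IsExcluded honors trailing \"/*\" (repository)\n// and \":*\" (tag) wildcards in the exclusion list.\n\nimport (\n\t\"testing\"\n\n\t\"github.com/eraser-dev/eraser/api/unversioned\"\n)\n\nfunc TestProcessRepoDigestsSketch(t *testing.T) {\n\tdigests, errs := ProcessRepoDigests([]string{\n\t\t\"docker.io/library/nginx@sha256:aaaa\",\n\t\t// same digest as above, deduplicated\n\t\t\"example.test/mirror/nginx@sha256:aaaa\",\n\t\t// malformed: missing the \"@\" separator, reported as an error\n\t\t\"no-digest-separator\",\n\t})\n\n\tif len(digests) != 1 || digests[0] != \"sha256:aaaa\" {\n\t\tt.Errorf(\"unexpected digests: %v\", digests)\n\t}\n\tif len(errs) != 1 {\n\t\tt.Errorf(\"expected exactly one error, got %v\", errs)\n\t}\n}\n\nfunc TestIsExcludedRepoWildcardSketch(t *testing.T) {\n\texcluded := map[string]struct{}{\"docker.io/library/*\": {}}\n\tidToImageMap := map[string]unversioned.Image{\n\t\t\"sha256:aaaa\": {Names: []string{\"docker.io/library/alpine:3.7.3\"}},\n\t}\n\n\t// The image ID itself does not match, but one of its names falls under\n\t// the excluded repository prefix.\n\tif !IsExcluded(excluded, \"sha256:aaaa\", idToImageMap) {\n\t\tt.Error(\"expected image to be excluded via repository wildcard\")\n\t}\n}\n"
  },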
  {
    "path": "pkg/utils/utils_test.go",
    "content": "package utils\n\nimport (\n\t\"errors\"\n\t\"fmt\"\n\t\"net/url\"\n\t\"testing\"\n)\n\nfunc TestParseEndpointWithFallBackProtocol(t *testing.T) {\n\ttestCases := []struct {\n\t\tendpoint         string\n\t\tfallbackProtocol string\n\t\tprotocol         string\n\t\taddr             string\n\t\terrCheck         func(t *testing.T, err error)\n\t}{\n\t\t{\n\t\t\tendpoint:         fmt.Sprintf(\"unix://%s\", CRIPath),\n\t\t\tfallbackProtocol: \"unix\",\n\t\t\tprotocol:         \"unix\",\n\t\t\taddr:             CRIPath,\n\t\t\terrCheck: func(t *testing.T, err error) {\n\t\t\t\tif err != nil {\n\t\t\t\t\tt.Error(err)\n\t\t\t\t}\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tendpoint:         \"192.168.123.132\",\n\t\t\tfallbackProtocol: \"unix\",\n\t\t\tprotocol:         \"unix\",\n\t\t\taddr:             \"\",\n\t\t\terrCheck: func(t *testing.T, err error) {\n\t\t\t\tif err != nil {\n\t\t\t\t\tt.Error(err)\n\t\t\t\t}\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tendpoint:         \"tcp://localhost:8080\",\n\t\t\tfallbackProtocol: \"unix\",\n\t\t\tprotocol:         \"tcp\",\n\t\t\taddr:             \"localhost:8080\",\n\t\t\terrCheck: func(t *testing.T, err error) {\n\t\t\t\tif err != nil {\n\t\t\t\t\tt.Error(err)\n\t\t\t\t}\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tendpoint:         \"  \",\n\t\t\tfallbackProtocol: \"unix\",\n\t\t\tprotocol:         \"\",\n\t\t\taddr:             \"\",\n\t\t\terrCheck: func(t *testing.T, err error) {\n\t\t\t\tas := &url.Error{}\n\t\t\t\tif !errors.As(err, &as) {\n\t\t\t\t\tt.Error(err)\n\t\t\t\t}\n\t\t\t},\n\t\t},\n\t}\n\n\tfor _, tc := range testCases {\n\t\tp, a, e := ParseEndpointWithFallbackProtocol(tc.endpoint, tc.fallbackProtocol)\n\n\t\tif p != tc.protocol || a != tc.addr {\n\t\t\tt.Errorf(\"Test fails\")\n\t\t}\n\n\t\ttc.errCheck(t, e)\n\t}\n}\n\nfunc TestParseEndpoint(t *testing.T) {\n\ttestCases := []struct {\n\t\tendpoint string\n\t\tprotocol string\n\t\taddr     string\n\t\terrCheck func(t *testing.T, err error)\n\t}{\n\t\t{\n\t\t\tendpoint: fmt.Sprintf(\"unix://%s\", CRIPath),\n\t\t\tprotocol: \"unix\",\n\t\t\taddr:     CRIPath,\n\t\t\terrCheck: func(t *testing.T, err error) {\n\t\t\t\tif err != nil {\n\t\t\t\t\tt.Error(err)\n\t\t\t\t}\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tendpoint: \"192.168.123.132\",\n\t\t\tprotocol: \"\",\n\t\t\taddr:     \"\",\n\t\t\terrCheck: func(t *testing.T, err error) {\n\t\t\t\tif !errors.Is(err, ErrEndpointDeprecated) {\n\t\t\t\t\tt.Error(err)\n\t\t\t\t}\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tendpoint: \"https://myaccount.blob.core.windows.net/mycontainer/myblob\",\n\t\t\tprotocol: \"https\",\n\t\t\taddr:     \"\",\n\t\t\terrCheck: func(t *testing.T, err error) {\n\t\t\t\tif !errors.Is(err, ErrProtocolNotSupported) {\n\t\t\t\t\tt.Error(err)\n\t\t\t\t}\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tendpoint: \"unix://  \",\n\t\t\tprotocol: \"\",\n\t\t\taddr:     \"\",\n\t\t\terrCheck: func(t *testing.T, err error) {\n\t\t\t\tas := &url.Error{}\n\t\t\t\tif !errors.As(err, &as) {\n\t\t\t\t\tt.Error(err)\n\t\t\t\t}\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tc := range testCases {\n\t\tp, a, e := ParseEndpoint(tc.endpoint)\n\n\t\tif p != tc.protocol || a != tc.addr {\n\t\t\tt.Errorf(\"Test fails\")\n\t\t}\n\n\t\ttc.errCheck(t, e)\n\t}\n}\n\nfunc TestGetAddressAndDialer(t *testing.T) {\n\ttestCases := []struct {\n\t\tendpoint string\n\t\taddr     string\n\t\terr      error\n\t}{\n\t\t{\n\t\t\tendpoint: fmt.Sprintf(\"unix://%s\", CRIPath),\n\t\t\taddr:     CRIPath,\n\t\t\terr:      nil,\n\t\t},\n\t\t{\n\t\t\tendpoint: \"localhost:8080\",\n\t\t\taddr:     \"\",\n\t\t\terr:      
ErrProtocolNotSupported,\n\t\t},\n\t\t{\n\t\t\tendpoint: \"tcp://localhost:8080\",\n\t\t\taddr:     \"\",\n\t\t\terr:      ErrOnlySupportUnixSocket,\n\t\t},\n\t}\n\n\tfor _, tc := range testCases {\n\t\ta, _, e := getAddressAndDialer(tc.endpoint)\n\t\tif a != tc.addr || !errors.Is(e, tc.err) {\n\t\t\tt.Errorf(\"Test fails\")\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "test/e2e/kind-config-custom-runtime.yaml",
    "content": "kind: Cluster\napiVersion: kind.x-k8s.io/v1alpha4\nnodes:\n- role: control-plane\n- role: worker\n- role: worker\ncontainerdConfigPatches:\n- |\n  [grpc]\n    address = \"/fake/socket/address.sock\"\nkubeadmConfigPatches:\n- |\n  kind: InitConfiguration\n  nodeRegistration:\n    criSocket: \"/fake/socket/address.sock\"\n- |\n  kind: JoinConfiguration\n  nodeRegistration:\n    criSocket: \"/fake/socket/address.sock\"\n"
  },
  {
    "path": "test/e2e/kind-config.yaml",
    "content": "kind: Cluster\napiVersion: kind.x-k8s.io/v1alpha4\nnodes:\n- role: control-plane\n- role: worker\n- role: worker\n"
  },
  {
    "path": "test/e2e/test-data/Dockerfile.busybox",
    "content": "ARG IMG\nFROM ${IMG}\n\nRUN echo 'echo \"$@\"; sleep 360;' > /script.sh && chmod +x /script.sh\n\nENTRYPOINT [\"/bin/sh\", \"/script.sh\"]\n"
  },
  {
    "path": "test/e2e/test-data/Dockerfile.customNode",
    "content": "ARG KUBERNETES_VERSION\nFROM kindest/node:v${KUBERNETES_VERSION}\n\nENV CONTAINERD_ADDRESS=\"/fake/socket/address.sock\"\n"
  },
  {
    "path": "test/e2e/test-data/Dockerfile.dummyCollector",
    "content": "FROM busybox:latest\n\nENTRYPOINT [\"yes\"]\n"
  },
  {
    "path": "test/e2e/test-data/eraser_v1_imagelist.yaml",
    "content": "apiVersion: eraser.sh/v1\nkind: ImageList\nmetadata:\n  name: imagelist\nspec:\n  images:\n    - sha256:2834dc507516af02784808c5f48b7cbe38b8ed5d0f4837f16e78d00deb7e7767\n    - docker.io/library/nginx:latest\n    - nginx\n"
  },
  {
    "path": "test/e2e/test-data/eraser_v1alpha1_imagelist.yaml",
    "content": "apiVersion: eraser.sh/v1alpha1\nkind: ImageList\nmetadata:\n  name: imagelist\nspec:\n  images:\n    - sha256:2834dc507516af02784808c5f48b7cbe38b8ed5d0f4837f16e78d00deb7e7767\n    - docker.io/library/nginx:latest\n    - nginx\n"
  },
  {
    "path": "test/e2e/test-data/eraser_v1alpha1_imagelist_updated.yaml",
    "content": "apiVersion: eraser.sh/v1alpha1\nkind: ImageList\nmetadata:\n  name: imagelist\nspec:\n  images:\n    - \"*\"\n"
  },
  {
    "path": "test/e2e/test-data/helm-empty-values.yaml",
    "content": "# This file exists to prevent regression in the tests. A situation arose in\n# which the helm keys were wrong in the e2e tests, resulting in helm receiving\n# the default values for image repository and tag. This resulted in false\n# positives because the cluster used in the e2e test would pull in the default\n# image from a registry.\n#\n# For all e2e tests using helm, this file should be provided on the command\n# line using `helm install -f`. This ensures that the default values are never\n# used. The correct values will be supplied by `--set` flags further to the\n# right on the command line.\n#\n# Below, randomized names are used to guarantee that if the wrong helm keys are\n# used, the test will fail.\n\nruntimeConfig:\n  apiVersion: eraser.sh/v1alpha3\n  manager:\n    runtime:\n      name: containerd\n      address: unix:///run/containerd/containerd.sock\n  components:\n    collector:\n      image:\n        repo: \"ychoimvthinanopp\"\n        tag: \"wpsestipmlioujqd\"\n    scanner:\n      image:\n        repo: \"aezoqcrjrsmxryrn\"\n        tag: \"mwojcakgxrqcudmn\"\n    remover:\n      image:\n        repo: \"eultpoofmlmfdthr\"\n        tag: \"otosqrwfwrgdrvzo\"\n\ndeploy:\n  image:\n    repo: \"tbiuomsxwcpmnpqi\"\n    tag: \"pgtyeohbgckhpuvz\"\n    pullPolicy: IfNotPresent\n"
  },
  {
    "path": "test/e2e/test-data/helm-test-config.yaml",
    "content": "controllerManager:\n  config:\n    manager:\n      imageJob:\n        cleanup:\n          delayOnSuccess: 1m\n    components:\n      scanner:\n        config: |\n          deleteFailedImages: false\n          deleteEOLImages: true\n"
  },
  {
    "path": "test/e2e/test-data/imagelist_alpine.yaml",
    "content": "apiVersion: eraser.sh/v1alpha1\nkind: ImageList\nmetadata:\n  name: imagelist\nspec:\n  images:\n    - docker.io/library/alpine:3.7.3\n"
  },
  {
    "path": "test/e2e/test-data/otelcollector.yaml",
    "content": "apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: otel-collector-conf\n  labels:\n    app: opentelemetry\n    component: otel-collector-conf\ndata:\n  otel-collector-config: |\n    receivers:\n      otlp:\n        protocols:\n          http:\n\n    exporters:\n      logging:\n        loglevel: debug\n      prometheus:\n        endpoint: \"0.0.0.0:8889\"\n        send_timestamps: true\n        metric_expiration: 180m\n\n    service:\n      telemetry:\n        logs:\n          encoding: json\n      pipelines:\n        metrics:\n          receivers:\n            - otlp\n          exporters:\n            - logging\n            - prometheus\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: otel-collector\n  labels:\n    app: opentelemetry\n    component: otel-collector\nspec:\n  ports:\n  - name: otlp-http # Default endpoint for OpenTelemetry HTTP receiver.\n    port: 4318\n    protocol: TCP\n  - name: prometheus\n    port: 80\n    targetPort: 8889\n    protocol: TCP\n  selector:\n    component: otel-collector\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: otel-collector\n  labels:\n    app: opentelemetry\n    component: otel-collector\nspec:\n  selector:\n    matchLabels:\n      app: opentelemetry\n      component: otel-collector\n  minReadySeconds: 5\n  progressDeadlineSeconds: 120\n  replicas: 1\n  template:\n    metadata:\n      labels:\n        app: opentelemetry\n        component: otel-collector\n    spec:\n      containers:\n      - command:\n          - \"/otelcol\"\n          - \"--config=/conf/otel-collector-config.yaml\"\n        image: otel/opentelemetry-collector:0.61.0\n        name: otel-collector\n        resources:\n          limits:\n            memory: 2Gi\n          requests:\n            cpu: 200m\n            memory: 400Mi\n        ports:\n        - containerPort: 4318 # Default endpoint for OpenTelemetry receiver.\n        - containerPort: 8889 # Endpoint for Prometheus exporter.\n        volumeMounts:\n        - name: otel-collector-config-vol\n          mountPath: /conf\n      volumes:\n        - configMap:\n            name: otel-collector-conf\n            items:\n              - key: otel-collector-config\n                path: otel-collector-config.yaml\n          name: otel-collector-config-vol\n"
  },
  {
    "path": "test/e2e/tests/collector_delete_deployment/eraser_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"context\"\n\t\"testing\"\n\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\n\tcorev1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\t\"k8s.io/apimachinery/pkg/labels\"\n\t\"sigs.k8s.io/e2e-framework/klient/wait\"\n\t\"sigs.k8s.io/e2e-framework/klient/wait/conditions\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/features\"\n)\n\nfunc TestDeleteDeployment(t *testing.T) {\n\tdeleteDeploymentFeat := features.New(\"Delete deployment should delete eraser pods\").\n\t\tAssess(\"Non-vulnerable image successfully deleted from all nodes\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tctxT, cancel := context.WithTimeout(ctx, util.Timeout)\n\t\t\tdefer cancel()\n\t\t\tutil.CheckImageRemoved(ctxT, t, util.GetClusterNodes(t), util.NonVulnerableImage)\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Delete deployment\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tif err := util.GetPodLogs(t); err != nil {\n\t\t\t\tt.Error(\"error getting eraser pod logs\", err)\n\t\t\t}\n\n\t\t\tif err := util.KubectlDelete(cfg.KubeconfigFile(), util.TestNamespace, []string{\"deployment\", \"eraser-controller-manager\"}); err != nil {\n\t\t\t\tt.Error(\"unable to delete eraser-controller-manager deployment\")\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Check eraser pods are deleted\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tif err := util.GetPodLogs(t); err != nil {\n\t\t\t\tt.Error(\"error getting eraser pod logs\", err)\n\t\t\t}\n\n\t\t\tc, err := cfg.NewClient()\n\t\t\tif err != nil {\n\t\t\t\tt.Fatal(\"Failed to create new client\", err)\n\t\t\t}\n\n\t\t\tvar ls corev1.PodList\n\t\t\terr = c.Resources().List(ctx, &ls, func(o *metav1.ListOptions) {\n\t\t\t\to.LabelSelector = labels.SelectorFromSet(map[string]string{util.ImageJobTypeLabelKey: util.CollectorLabel}).String()\n\t\t\t})\n\t\t\tif err != nil {\n\t\t\t\tt.Errorf(\"could not list pods: %v\", err)\n\t\t\t}\n\n\t\t\terr = wait.For(conditions.New(c.Resources()).ResourcesDeleted(&ls), wait.WithTimeout(util.Timeout))\n\t\t\tif err != nil {\n\t\t\t\tt.Errorf(\"error waiting for pods to be deleted: %v\", err)\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tFeature()\n\n\tutil.Testenv.Test(t, deleteDeploymentFeat)\n}\n"
  },
  {
    "path": "test/e2e/tests/collector_delete_deployment/main_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"os\"\n\t\"testing\"\n\n\teraserv1alpha1 \"github.com/eraser-dev/eraser/api/v1alpha1\"\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\tutilruntime \"k8s.io/apimachinery/pkg/util/runtime\"\n\t\"k8s.io/client-go/kubernetes/scheme\"\n\t\"sigs.k8s.io/e2e-framework/pkg/env\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envfuncs\"\n)\n\nfunc TestMain(m *testing.M) {\n\tutilruntime.Must(eraserv1alpha1.AddToScheme(scheme.Scheme))\n\n\tremoverImage := util.ParsedImages.RemoverImage\n\tmanagerImage := util.ParsedImages.ManagerImage\n\tcollectorImage := util.ParsedImages.CollectorImage\n\n\tutil.Testenv = env.NewWithConfig(envconf.New())\n\t// Create KinD Cluster\n\tutil.Testenv.Setup(\n\t\tenvfuncs.CreateKindClusterWithConfig(util.KindClusterName, util.NodeVersion, util.KindConfigPath),\n\t\tenvfuncs.CreateNamespace(util.TestNamespace),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.ManagerImage, util.ManagerTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.RemoverImage, util.RemoverTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.RemoverImage, util.RemoverTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.CollectorImage, util.CollectorTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.NonVulnerableImage, \"\"),\n\t\tutil.HelmDeployLatestEraserRelease(util.TestNamespace,\n\t\t\t\"--set\", util.ScannerEnable.Set(\"false\"),\n\t\t\t\"--set\", util.CollectorEnable.Set(\"false\"),\n\t\t\t\"--set\", util.RemoverImageRepo.Set(removerImage.Repo),\n\t\t\t\"--set\", util.RemoverImageTag.Set(removerImage.Tag),\n\t\t\t\"--set\", util.ManagerImageRepo.Set(managerImage.Repo),\n\t\t\t\"--set\", util.ManagerImageTag.Set(managerImage.Tag),\n\t\t),\n\t\tutil.UpgradeEraserHelm(util.TestNamespace,\n\t\t\t\"--set\", util.ScannerEnable.Set(\"false\"),\n\t\t\t\"--set\", util.RemoverImageRepo.Set(removerImage.Repo),\n\t\t\t\"--set\", util.RemoverImageTag.Set(removerImage.Tag),\n\t\t\t\"--set\", util.CollectorEnable.Set(\"true\"),\n\t\t\t\"--set\", util.CollectorImageRepo.Set(collectorImage.Repo),\n\t\t\t\"--set\", util.CollectorImageTag.Set(collectorImage.Tag),\n\t\t\t\"--set\", util.ManagerImageRepo.Set(managerImage.Repo),\n\t\t\t\"--set\", util.ManagerImageTag.Set(managerImage.Tag),\n\t\t\t\"--set\", util.CleanupOnSuccessDelay.Set(\"2m\"),\n\t\t),\n\t).Finish(\n\t\tenvfuncs.DestroyKindCluster(util.KindClusterName),\n\t)\n\tos.Exit(util.Testenv.Run(m))\n}\n"
  },
  {
    "path": "test/e2e/tests/collector_delete_manager/eraser_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"context\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\n\teraserv1alpha1 \"github.com/eraser-dev/eraser/api/v1alpha1\"\n\tcorev1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\t\"k8s.io/apimachinery/pkg/labels\"\n\t\"sigs.k8s.io/e2e-framework/klient/wait\"\n\t\"sigs.k8s.io/e2e-framework/klient/wait/conditions\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/features\"\n)\n\nfunc TestDeleteManager(t *testing.T) {\n\tdeleteManagerFeat := features.New(\"Deleting manager pod while current ImageJob is running should delete ImageJob and restart\").\n\t\tAssess(\"Wait for eraser pods running\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tc, err := cfg.NewClient()\n\t\t\tif err != nil {\n\t\t\t\tt.Fatal(\"Failed to create new client\", err)\n\t\t\t}\n\n\t\t\terr = wait.For(\n\t\t\t\tutil.NumPodsPresentForLabel(ctx, c, 3, util.ImageJobTypeLabelKey+\"=\"+util.CollectorLabel),\n\t\t\t\twait.WithTimeout(time.Minute*2),\n\t\t\t\twait.WithInterval(time.Millisecond*500),\n\t\t\t)\n\t\t\tif err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Delete controller-manager pod\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tc, err := cfg.NewClient()\n\t\t\tif err != nil {\n\t\t\t\tt.Fatal(\"Failed to create new client\", err)\n\t\t\t}\n\n\t\t\t// get manager pod\n\t\t\tvar podList corev1.PodList\n\t\t\terr = c.Resources().List(ctx, &podList, func(o *metav1.ListOptions) {\n\t\t\t\to.LabelSelector = labels.SelectorFromSet(map[string]string{util.ManagerLabelKey: util.ManagerLabelValue}).String()\n\t\t\t})\n\t\t\tif err != nil {\n\t\t\t\tt.Errorf(\"could not list manager pods: %v\", err)\n\t\t\t}\n\n\t\t\tif len(podList.Items) != 1 {\n\t\t\t\tt.Error(\"incorrect number of manager pods: \", len(podList.Items))\n\t\t\t}\n\n\t\t\t// get current ImageJob before deleting manager pod\n\t\t\tvar jobList eraserv1alpha1.ImageJobList\n\t\t\terr = c.Resources().List(ctx, &jobList)\n\t\t\tif err != nil {\n\t\t\t\tt.Errorf(\"could not list ImageJob: %v\", err)\n\t\t\t}\n\n\t\t\tt.Log(\"job\", jobList.Items[0], \"name\", jobList.Items[0].Name)\n\n\t\t\tif len(jobList.Items) != 1 {\n\t\t\t\tt.Error(\"incorrect number of ImageJobs: \", len(jobList.Items))\n\t\t\t}\n\n\t\t\t// delete manager pod\n\t\t\tif err := util.KubectlDelete(cfg.KubeconfigFile(), util.TestNamespace, []string{\"pod\", podList.Items[0].Name}); err != nil {\n\t\t\t\tt.Error(\"unable to delete eraser-controller-manager pod\")\n\t\t\t}\n\n\t\t\t// wait for deletion of ImageJob\n\t\t\terr = wait.For(conditions.New(c.Resources()).ResourcesDeleted(&jobList), wait.WithTimeout(util.Timeout))\n\t\t\tif err != nil {\n\t\t\t\tt.Errorf(\"error waiting for ImageJob to be deleted: %v\", err)\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tFeature()\n\n\tutil.Testenv.Test(t, deleteManagerFeat)\n}\n"
  },
  {
    "path": "test/e2e/tests/collector_delete_manager/main_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"os\"\n\t\"testing\"\n\n\teraserv1alpha1 \"github.com/eraser-dev/eraser/api/v1alpha1\"\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\tutilruntime \"k8s.io/apimachinery/pkg/util/runtime\"\n\t\"k8s.io/client-go/kubernetes/scheme\"\n\t\"sigs.k8s.io/e2e-framework/pkg/env\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envfuncs\"\n)\n\nfunc TestMain(m *testing.M) {\n\tutilruntime.Must(eraserv1alpha1.AddToScheme(scheme.Scheme))\n\n\tremoverImage := util.ParsedImages.RemoverImage\n\tmanagerImage := util.ParsedImages.ManagerImage\n\tcollectorImage := util.ParsedImages.CollectorImage\n\n\tutil.Testenv = env.NewWithConfig(envconf.New())\n\t// Create KinD Cluster\n\tutil.Testenv.Setup(\n\t\tenvfuncs.CreateKindClusterWithConfig(util.KindClusterName, util.NodeVersion, util.KindConfigPath),\n\t\tenvfuncs.CreateNamespace(util.TestNamespace),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.ManagerImage, util.ManagerTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.RemoverImage, util.RemoverTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.RemoverImage, util.RemoverTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.CollectorDummyImage, \"\"),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.NonVulnerableImage, \"\"),\n\t\tutil.HelmDeployLatestEraserRelease(util.TestNamespace,\n\t\t\t\"--set\", util.ScannerEnable.Set(\"false\"),\n\t\t\t\"--set\", util.CollectorEnable.Set(\"false\"),\n\t\t\t\"--set\", util.RemoverImageRepo.Set(removerImage.Repo),\n\t\t\t\"--set\", util.RemoverImageTag.Set(removerImage.Tag),\n\t\t\t\"--set\", util.ManagerImageRepo.Set(managerImage.Repo),\n\t\t\t\"--set\", util.ManagerImageTag.Set(managerImage.Tag),\n\t\t),\n\t\tutil.UpgradeEraserHelm(util.TestNamespace,\n\t\t\t\"--set\", util.ScannerEnable.Set(\"false\"),\n\t\t\t\"--set\", util.RemoverImageRepo.Set(removerImage.Repo),\n\t\t\t\"--set\", util.RemoverImageTag.Set(removerImage.Tag),\n\t\t\t\"--set\", util.CollectorEnable.Set(\"true\"),\n\t\t\t\"--set\", util.CollectorImageRepo.Set(collectorImage.Repo),\n\t\t\t\"--set\", util.CollectorImageTag.Set(\"dummy\"),\n\t\t\t\"--set\", util.ManagerImageRepo.Set(managerImage.Repo),\n\t\t\t\"--set\", util.ManagerImageTag.Set(managerImage.Tag),\n\t\t\t\"--set\", util.CleanupOnSuccessDelay.Set(\"2m\"),\n\t\t),\n\t).Finish(\n\t\tenvfuncs.DestroyKindCluster(util.KindClusterName),\n\t)\n\tos.Exit(util.Testenv.Run(m))\n}\n"
  },
  {
    "path": "test/e2e/tests/collector_disable_scan/eraser_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"context\"\n\t\"testing\"\n\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/features\"\n)\n\nfunc TestDisableScanner(t *testing.T) {\n\tdisableScanFeat := features.New(\"Scanner can be disabled\").\n\t\t// non-vulnerable image should be deleted from all nodes when scanner is disabled and we prune with collector\n\t\tAssess(\"Non-vulnerable image successfully deleted from all nodes\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tctxT, cancel := context.WithTimeout(ctx, util.Timeout)\n\t\t\tdefer cancel()\n\t\t\tutil.CheckImageRemoved(ctxT, t, util.GetClusterNodes(t), util.NonVulnerableImage)\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Get logs\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tif err := util.GetPodLogs(t); err != nil {\n\t\t\t\tt.Error(\"error getting eraser pod logs\", err)\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tFeature()\n\n\tutil.Testenv.Test(t, disableScanFeat)\n}\n"
  },
  {
    "path": "test/e2e/tests/collector_disable_scan/main_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"os\"\n\t\"testing\"\n\n\teraserv1alpha1 \"github.com/eraser-dev/eraser/api/v1alpha1\"\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\tutilruntime \"k8s.io/apimachinery/pkg/util/runtime\"\n\t\"k8s.io/client-go/kubernetes/scheme\"\n\t\"sigs.k8s.io/e2e-framework/pkg/env\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envfuncs\"\n)\n\nfunc TestMain(m *testing.M) {\n\tutilruntime.Must(eraserv1alpha1.AddToScheme(scheme.Scheme))\n\n\tremoverImage := util.ParsedImages.RemoverImage\n\tmanagerImage := util.ParsedImages.ManagerImage\n\tcollectorImage := util.ParsedImages.CollectorImage\n\n\tutil.Testenv = env.NewWithConfig(envconf.New())\n\t// Create KinD Cluster\n\tutil.Testenv.Setup(\n\t\tenvfuncs.CreateKindClusterWithConfig(util.KindClusterName, util.NodeVersion, util.KindConfigPath),\n\t\tenvfuncs.CreateNamespace(util.TestNamespace),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.ManagerImage, util.ManagerTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.RemoverImage, util.RemoverTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.RemoverImage, util.RemoverTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.CollectorImage, util.CollectorTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.NonVulnerableImage, \"\"),\n\t\tutil.HelmDeployLatestEraserRelease(util.TestNamespace,\n\t\t\t\"--set\", util.ScannerEnable.Set(\"false\"),\n\t\t\t\"--set\", util.CollectorEnable.Set(\"false\"),\n\t\t\t\"--set\", util.RemoverImageRepo.Set(removerImage.Repo),\n\t\t\t\"--set\", util.RemoverImageTag.Set(removerImage.Tag),\n\t\t\t\"--set\", util.ManagerImageRepo.Set(managerImage.Repo),\n\t\t\t\"--set\", util.ManagerImageTag.Set(managerImage.Tag),\n\t\t),\n\t\tutil.UpgradeEraserHelm(util.TestNamespace,\n\t\t\t\"--set\", util.ScannerEnable.Set(\"false\"),\n\t\t\t\"--set\", util.RemoverImageRepo.Set(removerImage.Repo),\n\t\t\t\"--set\", util.RemoverImageTag.Set(removerImage.Tag),\n\t\t\t\"--set\", util.CollectorEnable.Set(\"true\"),\n\t\t\t\"--set\", util.CollectorImageRepo.Set(collectorImage.Repo),\n\t\t\t\"--set\", util.CollectorImageTag.Set(collectorImage.Tag),\n\t\t\t\"--set\", util.ManagerImageRepo.Set(managerImage.Repo),\n\t\t\t\"--set\", util.ManagerImageTag.Set(managerImage.Tag),\n\t\t\t\"--set\", util.CleanupOnSuccessDelay.Set(\"1m\"),\n\t\t),\n\t).Finish(\n\t\tenvfuncs.DestroyKindCluster(util.KindClusterName),\n\t)\n\tos.Exit(util.Testenv.Run(m))\n}\n"
  },
  {
    "path": "test/e2e/tests/collector_ensure_scan/eraser_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"context\"\n\t\"testing\"\n\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/features\"\n)\n\nfunc TestEnsureScannerFunctions(t *testing.T) {\n\tcollectScanErasePipelineFeat := features.New(\"Collector pods should run automatically, trigger the scanner, then the eraser pods. Helm test.\").\n\t\tAssess(\"Vulnerable and EOL images are successfully deleted from all nodes\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tctxT, cancel := context.WithTimeout(ctx, util.Timeout)\n\t\t\tdefer cancel()\n\t\t\tutil.CheckImageRemoved(ctxT, t, util.GetClusterNodes(t), util.Alpine)\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Get logs\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tif err := util.GetPodLogs(t); err != nil {\n\t\t\t\tt.Error(\"error getting eraser pod logs\", err)\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tFeature()\n\n\tutil.Testenv.Test(t, collectScanErasePipelineFeat)\n}\n"
  },
  {
    "path": "test/e2e/tests/collector_ensure_scan/main_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"os\"\n\t\"testing\"\n\n\teraserv1alpha1 \"github.com/eraser-dev/eraser/api/v1alpha1\"\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\tutilruntime \"k8s.io/apimachinery/pkg/util/runtime\"\n\t\"k8s.io/client-go/kubernetes/scheme\"\n\t\"sigs.k8s.io/e2e-framework/pkg/env\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envfuncs\"\n)\n\nfunc TestMain(m *testing.M) {\n\tutilruntime.Must(eraserv1alpha1.AddToScheme(scheme.Scheme))\n\n\tremoverImage := util.ParsedImages.RemoverImage\n\tmanagerImage := util.ParsedImages.ManagerImage\n\tcollectorImage := util.ParsedImages.CollectorImage\n\tscannerImage := util.ParsedImages.ScannerImage\n\n\tutil.Testenv = env.NewWithConfig(envconf.New())\n\t// Create KinD Cluster\n\tutil.Testenv.Setup(\n\t\tenvfuncs.CreateKindClusterWithConfig(util.KindClusterName, util.NodeVersion, util.KindConfigPath),\n\t\tenvfuncs.CreateNamespace(util.TestNamespace),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.ManagerImage, util.ManagerTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.RemoverImage, util.RemoverTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.CollectorImage, util.CollectorTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.VulnerableImage, \"\"),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.NonVulnerableImage, \"\"),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.EOLImage, \"\"),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.ScannerImage, util.ScannerTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.RemoverImage, util.RemoverTarballPath),\n\t\tutil.HelmDeployLatestEraserRelease(util.TestNamespace,\n\t\t\t\"--set\", util.ScannerEnable.Set(\"false\"),\n\t\t\t\"--set\", util.CollectorEnable.Set(\"false\"),\n\t\t\t\"--set\", util.RemoverImageRepo.Set(removerImage.Repo),\n\t\t\t\"--set\", util.RemoverImageTag.Set(removerImage.Tag),\n\t\t\t\"--set\", util.ManagerImageRepo.Set(managerImage.Repo),\n\t\t\t\"--set\", util.ManagerImageTag.Set(managerImage.Tag),\n\t\t),\n\t\tutil.UpgradeEraserHelm(util.TestNamespace,\n\t\t\t\"--set\", util.ScannerEnable.Set(\"true\"),\n\t\t\t\"--set\", util.ScannerImageRepo.Set(scannerImage.Repo),\n\t\t\t\"--set\", util.ScannerImageTag.Set(scannerImage.Tag),\n\t\t\t\"--set\", util.RemoverImageRepo.Set(removerImage.Repo),\n\t\t\t\"--set\", util.RemoverImageTag.Set(removerImage.Tag),\n\t\t\t\"--set\", util.CollectorEnable.Set(\"true\"),\n\t\t\t\"--set\", util.CollectorImageRepo.Set(collectorImage.Repo),\n\t\t\t\"--set\", util.CollectorImageTag.Set(collectorImage.Tag),\n\t\t\t\"--set\", util.ManagerImageRepo.Set(managerImage.Repo),\n\t\t\t\"--set\", util.ManagerImageTag.Set(managerImage.Tag),\n\t\t\t\"--set\", util.CleanupOnSuccessDelay.Set(\"1m\"),\n\t\t\t// set deleteFailedImages to FALSE to catch a broken scanner\n\t\t\t\"--set-json\", util.ScannerConfig.Set(util.ScannerConfigNoDeleteFailedJSON),\n\t\t),\n\t).Finish(\n\t\tenvfuncs.DestroyKindCluster(util.KindClusterName),\n\t)\n\tos.Exit(util.Testenv.Run(m))\n}\n"
  },
  {
    "path": "test/e2e/tests/collector_pipeline/eraser_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"context\"\n\t\"testing\"\n\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\tcorev1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\t\"k8s.io/apimachinery/pkg/labels\"\n\n\t\"sigs.k8s.io/e2e-framework/klient/wait\"\n\t\"sigs.k8s.io/e2e-framework/klient/wait/conditions\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/features\"\n)\n\nfunc TestCollectScanErasePipeline(t *testing.T) {\n\tcollectScanErasePipelineFeat := features.New(\"Collector pods should run automatically, trigger the scanner, then the eraser pods. Manifest deployment test.\").\n\t\tAssess(\"Vulnerable and EOL images are successfully deleted from all nodes\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tctxT, cancel := context.WithTimeout(ctx, util.Timeout)\n\t\t\tdefer cancel()\n\t\t\tutil.CheckImageRemoved(ctxT, t, util.GetClusterNodes(t), util.Alpine)\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Pods from imagejobs are cleaned up\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tc, err := cfg.NewClient()\n\t\t\tif err != nil {\n\t\t\t\tt.Fatal(\"Failed to create new client\", err)\n\t\t\t}\n\n\t\t\tvar ls corev1.PodList\n\t\t\terr = c.Resources().List(ctx, &ls, func(o *metav1.ListOptions) {\n\t\t\t\to.LabelSelector = labels.SelectorFromSet(map[string]string{util.ImageJobTypeLabelKey: util.CollectorLabel}).String()\n\t\t\t})\n\t\t\tif err != nil {\n\t\t\t\tt.Errorf(\"could not list pods: %v\", err)\n\t\t\t}\n\n\t\t\terr = wait.For(conditions.New(c.Resources()).ResourcesDeleted(&ls), wait.WithTimeout(util.Timeout))\n\t\t\tif err != nil {\n\t\t\t\tt.Errorf(\"error waiting for pods to be deleted: %v\", err)\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tFeature()\n\n\tutil.Testenv.Test(t, collectScanErasePipelineFeat)\n}\n"
  },
  {
    "path": "test/e2e/tests/collector_pipeline/main_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"os\"\n\t\"testing\"\n\n\teraserv1alpha1 \"github.com/eraser-dev/eraser/api/v1alpha1\"\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\tutilruntime \"k8s.io/apimachinery/pkg/util/runtime\"\n\t\"k8s.io/client-go/kubernetes/scheme\"\n\t\"sigs.k8s.io/e2e-framework/pkg/env\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envfuncs\"\n)\n\nfunc TestMain(m *testing.M) {\n\tutilruntime.Must(eraserv1alpha1.AddToScheme(scheme.Scheme))\n\n\tremover := util.ParsedImages.RemoverImage\n\tcollector := util.ParsedImages.CollectorImage\n\tscanner := util.ParsedImages.ScannerImage\n\tmanager := util.ParsedImages.ManagerImage\n\n\tutil.Testenv = env.NewWithConfig(envconf.New())\n\t// Create KinD Cluster\n\tutil.Testenv.Setup(\n\t\tenvfuncs.CreateKindClusterWithConfig(util.KindClusterName, util.NodeVersion, util.KindConfigPath),\n\t\tenvfuncs.CreateNamespace(util.EraserNamespace),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.ManagerImage, util.ManagerTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.RemoverImage, util.RemoverTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.ScannerImage, util.ScannerTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.CollectorImage, util.CollectorTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.VulnerableImage, \"\"),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.EOLImage, \"\"),\n\t\tutil.MakeDeploy(map[string]string{\n\t\t\t\"REMOVER_REPO\":       remover.Repo,\n\t\t\t\"MANAGER_REPO\":       manager.Repo,\n\t\t\t\"TRIVY_SCANNER_REPO\": scanner.Repo,\n\t\t\t\"COLLECTOR_REPO\":     collector.Repo,\n\t\t\t\"REMOVER_TAG\":        remover.Tag,\n\t\t\t\"MANAGER_TAG\":        manager.Tag,\n\t\t\t\"TRIVY_SCANNER_TAG\":  scanner.Tag,\n\t\t\t\"COLLECTOR_TAG\":      collector.Tag,\n\t\t}),\n\t).Finish(\n\t\tenvfuncs.DestroyKindCluster(util.KindClusterName),\n\t)\n\tos.Exit(util.Testenv.Run(m))\n}\n"
  },
  {
    "path": "test/e2e/tests/collector_runtime_config/eraser_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"context\"\n\t\"testing\"\n\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/features\"\n)\n\nfunc TestCustomRuntimeAddress(t *testing.T) {\n\tcollectRuntimeFeat := features.New(\"Collector pods should run automatically, trigger the scanner, then the eraser pods. Helm test with custom runtime socket address.\").\n\t\tAssess(\"Vulnerable and EOL images are successfully deleted from all nodes\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tctxT, cancel := context.WithTimeout(ctx, util.Timeout)\n\t\t\tdefer cancel()\n\t\t\tutil.CheckImageRemoved(ctxT, t, util.GetClusterNodes(t), util.Alpine)\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Get logs\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tif err := util.GetPodLogs(t); err != nil {\n\t\t\t\tt.Error(\"error getting eraser pod logs\", err)\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tFeature()\n\n\tutil.Testenv.Test(t, collectRuntimeFeat)\n}\n"
  },
  {
    "path": "test/e2e/tests/collector_runtime_config/main_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"os\"\n\t\"testing\"\n\n\teraserv1alpha1 \"github.com/eraser-dev/eraser/api/v1alpha1\"\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\tutilruntime \"k8s.io/apimachinery/pkg/util/runtime\"\n\t\"k8s.io/client-go/kubernetes/scheme\"\n\t\"sigs.k8s.io/e2e-framework/pkg/env\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envfuncs\"\n)\n\nfunc TestMain(m *testing.M) {\n\tutilruntime.Must(eraserv1alpha1.AddToScheme(scheme.Scheme))\n\n\tremoverImage := util.ParsedImages.RemoverImage\n\tmanagerImage := util.ParsedImages.ManagerImage\n\tcollectorImage := util.ParsedImages.CollectorImage\n\tscannerImage := util.ParsedImages.ScannerImage\n\n\tutil.Testenv = env.NewWithConfig(envconf.New())\n\n\t// Create KinD Cluster\n\tutil.Testenv.Setup(\n\t\tenvfuncs.CreateKindClusterWithConfig(util.KindClusterName, util.ModifiedNodeImage, util.KindConfigCustomRuntimePath),\n\t\tenvfuncs.CreateNamespace(util.TestNamespace),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.ManagerImage, util.ManagerTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.RemoverImage, util.RemoverTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.CollectorImage, util.CollectorTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.VulnerableImage, \"\"),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.NonVulnerableImage, \"\"),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.EOLImage, \"\"),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.ScannerImage, util.ScannerTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.RemoverImage, util.RemoverTarballPath),\n\t\tutil.HelmDeployLatestEraserRelease(util.TestNamespace,\n\t\t\t\"--set\", util.ScannerEnable.Set(\"false\"),\n\t\t\t\"--set\", util.CollectorEnable.Set(\"false\"),\n\t\t\t\"--set\", util.RemoverImageRepo.Set(removerImage.Repo),\n\t\t\t\"--set\", util.RemoverImageTag.Set(removerImage.Tag),\n\t\t\t\"--set\", util.ManagerImageRepo.Set(managerImage.Repo),\n\t\t\t\"--set\", util.ManagerImageTag.Set(managerImage.Tag),\n\t\t),\n\t\tutil.UpgradeEraserHelm(util.TestNamespace,\n\t\t\t\"--set\", util.ScannerEnable.Set(\"true\"),\n\t\t\t\"--set\", util.ScannerImageRepo.Set(scannerImage.Repo),\n\t\t\t\"--set\", util.ScannerImageTag.Set(scannerImage.Tag),\n\t\t\t\"--set\", util.RemoverImageRepo.Set(removerImage.Repo),\n\t\t\t\"--set\", util.RemoverImageTag.Set(removerImage.Tag),\n\t\t\t\"--set\", util.CollectorEnable.Set(\"true\"),\n\t\t\t\"--set\", util.CollectorImageRepo.Set(collectorImage.Repo),\n\t\t\t\"--set\", util.CollectorImageTag.Set(collectorImage.Tag),\n\t\t\t\"--set\", util.ManagerImageRepo.Set(managerImage.Repo),\n\t\t\t\"--set\", util.ManagerImageTag.Set(managerImage.Tag),\n\t\t\t\"--set\", util.CleanupOnSuccessDelay.Set(\"1m\"),\n\t\t\t// set deleteFailedImages to FALSE to catch a broken scanner\n\t\t\t\"--set-json\", util.ScannerConfig.Set(util.ScannerConfigNoDeleteFailedJSON),\n\t\t\t// set custom runtime socket as runtime address\n\t\t\t\"--set\", util.CustomRuntimeAddress.Set(\"unix:///fake/socket/address.sock\"),\n\t\t\t\"--set\", util.CustomRuntimeName.Set(\"containerd\"),\n\t\t),\n\t).Finish(\n\t\tenvfuncs.DestroyKindCluster(util.KindClusterName),\n\t)\n\tos.Exit(util.Testenv.Run(m))\n}\n"
  },
  {
    "path": "test/e2e/tests/collector_skip_excluded/eraser_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"context\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\n\tcorev1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\t\"k8s.io/apimachinery/pkg/labels\"\n\t\"sigs.k8s.io/e2e-framework/klient/wait\"\n\t\"sigs.k8s.io/e2e-framework/klient/wait/conditions\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/features\"\n)\n\nfunc TestCollectorExcluded(t *testing.T) {\n\tcollectorExcluded := features.New(\"ImageCollector should not remove excluded images\").\n\t\tAssess(\"Collector pods completed\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tc, err := cfg.NewClient()\n\t\t\tif err != nil {\n\t\t\t\tt.Fatal(\"Failed to create new client\", err)\n\t\t\t}\n\n\t\t\tvar ls corev1.PodList\n\t\t\terr = c.Resources().List(ctx, &ls, func(o *metav1.ListOptions) {\n\t\t\t\to.LabelSelector = labels.SelectorFromSet(map[string]string{util.ImageJobTypeLabelKey: util.CollectorLabel}).String()\n\t\t\t})\n\t\t\tif err != nil {\n\t\t\t\tt.Errorf(\"could not list pods: %v\", err)\n\t\t\t}\n\n\t\t\tfor _, pod := range ls.Items {\n\t\t\t\terr = wait.For(conditions.New(c.Resources()).PodPhaseMatch(&pod, corev1.PodSucceeded), wait.WithTimeout(time.Minute*3))\n\t\t\t\tif err != nil {\n\t\t\t\t\tt.Log(\"collector pod unsuccessful\", pod.Name)\n\t\t\t\t}\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Alpine image is not removed\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\t_, cancel := context.WithTimeout(ctx, util.Timeout)\n\t\t\tdefer cancel()\n\t\t\tutil.CheckImagesExist(t, util.GetClusterNodes(t), util.VulnerableImage)\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Non-vulnerable image is not removed\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\t_, cancel := context.WithTimeout(ctx, util.Timeout)\n\t\t\tdefer cancel()\n\t\t\tutil.CheckImagesExist(t, util.GetClusterNodes(t), util.NonVulnerableImage)\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Get logs\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tif err := util.GetPodLogs(t); err != nil {\n\t\t\t\tt.Error(\"error getting eraser pod logs\", err)\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tFeature()\n\n\tutil.Testenv.Test(t, collectorExcluded)\n}\n"
  },
  {
    "path": "test/e2e/tests/collector_skip_excluded/main_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"os\"\n\t\"testing\"\n\n\teraserv1alpha1 \"github.com/eraser-dev/eraser/api/v1alpha1\"\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\tutilruntime \"k8s.io/apimachinery/pkg/util/runtime\"\n\t\"k8s.io/client-go/kubernetes/scheme\"\n\t\"sigs.k8s.io/e2e-framework/pkg/env\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envfuncs\"\n)\n\nfunc TestMain(m *testing.M) {\n\tutilruntime.Must(eraserv1alpha1.AddToScheme(scheme.Scheme))\n\n\tremoverImage := util.ParsedImages.RemoverImage\n\tmanagerImage := util.ParsedImages.ManagerImage\n\tcollectorImage := util.ParsedImages.CollectorImage\n\tscannerImage := util.ParsedImages.ScannerImage\n\n\tutil.Testenv = env.NewWithConfig(envconf.New())\n\t// Create KinD Cluster\n\tutil.Testenv.Setup(\n\t\tenvfuncs.CreateKindClusterWithConfig(util.KindClusterName, util.NodeVersion, util.KindConfigPath),\n\t\tenvfuncs.CreateNamespace(util.TestNamespace),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.ManagerImage, util.ManagerTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.RemoverImage, util.RemoverTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.CollectorImage, util.CollectorTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.VulnerableImage, \"\"),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.NonVulnerableImage, \"\"),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.ScannerImage, util.ScannerTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.RemoverImage, util.RemoverTarballPath),\n\t\tutil.CreateExclusionList(util.TestNamespace, \"{\\\"excluded\\\": [\\\"docker.io/library/alpine:*\\\"]}\"),\n\t\tutil.CreateExclusionList(util.TestNamespace, \"{\\\"excluded\\\": [\\\"\"+util.NonVulnerableImage+\"\\\"]}\"),\n\t\tutil.HelmDeployLatestEraserRelease(util.TestNamespace,\n\t\t\t\"--set\", util.ScannerEnable.Set(\"false\"),\n\t\t\t\"--set\", util.CollectorEnable.Set(\"false\"),\n\t\t\t\"--set\", util.RemoverImageRepo.Set(removerImage.Repo),\n\t\t\t\"--set\", util.RemoverImageTag.Set(removerImage.Tag),\n\t\t\t\"--set\", util.ManagerImageRepo.Set(managerImage.Repo),\n\t\t\t\"--set\", util.ManagerImageTag.Set(managerImage.Tag),\n\t\t),\n\t\tutil.UpgradeEraserHelm(util.TestNamespace,\n\t\t\t\"--set\", util.ScannerEnable.Set(\"true\"),\n\t\t\t\"--set\", util.ScannerImageRepo.Set(scannerImage.Repo),\n\t\t\t\"--set\", util.ScannerImageTag.Set(scannerImage.Tag),\n\t\t\t\"--set\", util.RemoverImageRepo.Set(removerImage.Repo),\n\t\t\t\"--set\", util.RemoverImageTag.Set(removerImage.Tag),\n\t\t\t\"--set\", util.CollectorEnable.Set(\"true\"),\n\t\t\t\"--set\", util.CollectorImageRepo.Set(collectorImage.Repo),\n\t\t\t\"--set\", util.CollectorImageTag.Set(collectorImage.Tag),\n\t\t\t\"--set\", util.ManagerImageRepo.Set(managerImage.Repo),\n\t\t\t\"--set\", util.ManagerImageTag.Set(managerImage.Tag),\n\t\t\t\"--set\", util.CleanupOnSuccessDelay.Set(\"1m\"),\n\t\t),\n\t).Finish(\n\t\tenvfuncs.DestroyKindCluster(util.KindClusterName),\n\t)\n\tos.Exit(util.Testenv.Run(m))\n}\n"
  },
  {
    "path": "test/e2e/tests/configmap_update/eraser_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\n\tcorev1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\t\"k8s.io/apimachinery/pkg/labels\"\n\t\"sigs.k8s.io/e2e-framework/klient/wait\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/features\"\n)\n\nconst (\n\tnumPods       = 3\n\tconfigKey     = \"controller_manager_config.yaml\"\n\tconfigmapName = \"eraser-manager-config\"\n)\n\nvar ()\n\nfunc TestConfigmapUpdate(t *testing.T) {\n\tmetrics := features.New(\"Updating the remover image in the configmap should cause the manager to deploy using the new image\").\n\t\tAssess(\"Update configmap, change remover image to busybox\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tclient, err := cfg.NewClient()\n\t\t\tif err != nil {\n\t\t\t\tt.Error(\"Failed to create new client\", err)\n\t\t\t}\n\n\t\t\tconfigMap := corev1.ConfigMap{}\n\t\t\terr = client.Resources().Get(ctx, configmapName, util.TestNamespace, &configMap)\n\t\t\tif err != nil {\n\t\t\t\tt.Error(\"Unable to get configmap\", err)\n\t\t\t}\n\n\t\t\tbbSplit := strings.Split(util.BusyboxImage, \":\")\n\t\t\tbbRepo := bbSplit[0]\n\t\t\tbbTag := bbSplit[1]\n\n\t\t\tcmString := fmt.Sprintf(`---\napiVersion: eraser.sh/v1alpha2\nkind: EraserConfig\ncomponents:\n  remover:\n    image:\n      repo: %s\n      tag: %s\n    `, bbRepo, bbTag)\n\n\t\t\tconfigMap.Data[configKey] = cmString\n\t\t\terr = client.Resources().Update(ctx, &configMap)\n\t\t\tif err != nil {\n\t\t\t\tt.Error(\"unable to update configmap\", err)\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Deploy Imagelist\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\t// deploy imagelist config\n\t\t\tif err := util.DeployEraserConfig(cfg.KubeconfigFile(), cfg.Namespace(), util.ImagelistAlpinePath); err != nil {\n\t\t\t\tt.Error(\"Failed to deploy image list config\", err)\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Check eraser pods for change in configuration\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tc, err := cfg.NewClient()\n\t\t\tif err != nil {\n\t\t\t\tt.Error(\"Failed to create new client\", err)\n\t\t\t}\n\n\t\t\terr = wait.For(\n\t\t\t\tutil.NumPodsPresentForLabel(ctx, c, numPods, util.ImageJobTypeLabelKey+\"=\"+util.ManualLabel),\n\t\t\t\twait.WithTimeout(time.Minute*2),\n\t\t\t\twait.WithInterval(time.Millisecond*500),\n\t\t\t)\n\t\t\tif err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\n\t\t\tvar ls corev1.PodList\n\t\t\terr = c.Resources().List(ctx, &ls, func(o *metav1.ListOptions) {\n\t\t\t\to.LabelSelector = labels.SelectorFromSet(map[string]string{util.ImageJobTypeLabelKey: util.ManualLabel}).String()\n\t\t\t})\n\t\t\tif err != nil {\n\t\t\t\tt.Errorf(\"could not list pods: %v\", err)\n\t\t\t}\n\n\t\t\tfor i := range ls.Items {\n\t\t\t\t// there will only be the remover container in an imagelist deployment\n\t\t\t\tcontainer := ls.Items[i].Spec.Containers[0]\n\t\t\t\timage := container.Image\n\t\t\t\tif image != util.BusyboxImage {\n\t\t\t\t\tt.Errorf(\"pod %s has image %s, should be %s\", ls.Items[i].GetName(), image, util.BusyboxImage)\n\t\t\t\t}\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Get logs\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tif err := util.GetPodLogs(t); err != 
nil {\n\t\t\t\tt.Error(\"error getting eraser pod logs\", err)\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tFeature()\n\n\tutil.Testenv.Test(t, metrics)\n}\n"
  },
  {
    "path": "test/e2e/tests/configmap_update/main_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"os\"\n\t\"testing\"\n\n\teraserv1alpha1 \"github.com/eraser-dev/eraser/api/v1alpha1\"\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\tutilruntime \"k8s.io/apimachinery/pkg/util/runtime\"\n\t\"k8s.io/client-go/kubernetes/scheme\"\n\t\"sigs.k8s.io/e2e-framework/pkg/env\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envfuncs\"\n)\n\nfunc TestMain(m *testing.M) {\n\tutilruntime.Must(eraserv1alpha1.AddToScheme(scheme.Scheme))\n\n\tremoverImage := util.ParsedImages.RemoverImage\n\tmanagerImage := util.ParsedImages.ManagerImage\n\n\tutil.Testenv = env.NewWithConfig(envconf.New())\n\t// Create KinD Cluster\n\tutil.Testenv.Setup(\n\t\tenvfuncs.CreateKindClusterWithConfig(util.KindClusterName, util.NodeVersion, util.KindConfigPath),\n\t\tenvfuncs.CreateNamespace(util.TestNamespace),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.ManagerImage, util.ManagerTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.RemoverImage, util.RemoverTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.BusyboxImage, \"\"),\n\t\tutil.HelmDeployLatestEraserRelease(util.TestNamespace,\n\t\t\t\"--set\", util.ScannerEnable.Set(\"false\"),\n\t\t\t\"--set\", util.CollectorEnable.Set(\"false\"),\n\t\t\t\"--set\", util.RemoverImageRepo.Set(removerImage.Repo),\n\t\t\t\"--set\", util.RemoverImageTag.Set(removerImage.Tag),\n\t\t\t\"--set\", util.ManagerImageRepo.Set(managerImage.Repo),\n\t\t\t\"--set\", util.ManagerImageTag.Set(managerImage.Tag),\n\t\t),\n\t\tutil.UpgradeEraserHelm(util.TestNamespace,\n\t\t\t\"--set\", util.CollectorEnable.Set(\"false\"),\n\t\t\t\"--set\", util.ScannerEnable.Set(\"false\"),\n\t\t\t\"--set\", util.RemoverImageRepo.Set(removerImage.Repo),\n\t\t\t\"--set\", util.RemoverImageTag.Set(removerImage.Tag),\n\t\t\t\"--set\", util.ManagerImageRepo.Set(managerImage.Repo),\n\t\t\t\"--set\", util.ManagerImageTag.Set(managerImage.Tag),\n\t\t\t\"--set\", util.CleanupOnSuccessDelay.Set(\"1m\"),\n\t\t),\n\t).Finish(\n\t\tenvfuncs.DestroyKindCluster(util.KindClusterName),\n\t)\n\tos.Exit(util.Testenv.Run(m))\n}\n"
  },
  {
    "path": "test/e2e/tests/helm_pull_secret/eraser_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"context\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\n\tcorev1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\t\"k8s.io/apimachinery/pkg/labels\"\n\t\"sigs.k8s.io/e2e-framework/klient/wait\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/features\"\n)\n\nconst (\n\texpectedPods = 4\n)\n\nfunc TestHelmPullSecret(t *testing.T) {\n\tpullSecretsPropagated := features.New(\"Image Pull Secrets\").\n\t\tAssess(\"All pods should have the correct pull secret\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tc, err := cfg.NewClient()\n\t\t\tif err != nil {\n\t\t\t\tt.Fatal(\"Failed to create new client\", err)\n\t\t\t}\n\n\t\t\terr = wait.For(\n\t\t\t\tutil.NumPodsPresentForLabel(ctx, c, 3, util.ImageJobTypeLabelKey+\"=\"+util.CollectorLabel),\n\t\t\t\twait.WithTimeout(time.Minute*2),\n\t\t\t\twait.WithInterval(time.Millisecond*500),\n\t\t\t)\n\t\t\tif err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\n\t\t\tvar ls corev1.PodList\n\t\t\terr = c.Resources().List(ctx, &ls, func(o *metav1.ListOptions) {\n\t\t\t\to.LabelSelector = labels.SelectorFromSet(map[string]string{util.ImageJobTypeLabelKey: util.CollectorLabel}).String()\n\t\t\t})\n\t\t\tif err != nil {\n\t\t\t\tt.Errorf(\"could not list pods: %v\", err)\n\t\t\t}\n\n\t\t\tvar ls2 corev1.PodList\n\t\t\terr = c.Resources().List(ctx, &ls2, func(o *metav1.ListOptions) {\n\t\t\t\to.LabelSelector = labels.SelectorFromSet(map[string]string{\"control-plane\": \"controller-manager\"}).String()\n\t\t\t})\n\n\t\t\titems := append(ls.Items, ls2.Items...)\n\t\t\tif len(items) != expectedPods {\n\t\t\t\tt.Errorf(\"incorrect number of pods for eraser deployment. should be %d but was %d\", expectedPods, len(items))\n\t\t\t}\n\n\t\t\tfor _, pod := range items {\n\t\t\t\tfound := false\n\t\t\t\tfor _, secret := range pod.Spec.ImagePullSecrets {\n\t\t\t\t\tif secret.Name == util.ImagePullSecret {\n\t\t\t\t\t\tfound = true\n\t\t\t\t\t\tbreak\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tif !found {\n\t\t\t\t\tt.Errorf(\"pod %s does not have secret set\", pod.ObjectMeta.Name)\n\t\t\t\t}\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Get logs\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tif err := util.GetPodLogs(t); err != nil {\n\t\t\t\tt.Error(\"error getting eraser pod logs\", err)\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tFeature()\n\n\tutil.Testenv.Test(t, pullSecretsPropagated)\n}\n"
  },
  {
    "path": "test/e2e/tests/helm_pull_secret/main_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"os\"\n\t\"testing\"\n\n\teraserv1alpha1 \"github.com/eraser-dev/eraser/api/v1alpha1\"\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\tutilruntime \"k8s.io/apimachinery/pkg/util/runtime\"\n\t\"k8s.io/client-go/kubernetes/scheme\"\n\t\"sigs.k8s.io/e2e-framework/pkg/env\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envfuncs\"\n)\n\nfunc TestMain(m *testing.M) {\n\tutilruntime.Must(eraserv1alpha1.AddToScheme(scheme.Scheme))\n\n\tremoverImage := util.ParsedImages.RemoverImage\n\tmanagerImage := util.ParsedImages.ManagerImage\n\tcollectorImage := util.ParsedImages.CollectorImage\n\tscannerImage := util.ParsedImages.ScannerImage\n\n\tutil.Testenv = env.NewWithConfig(envconf.New())\n\t// Create KinD Cluster\n\tutil.Testenv.Setup(\n\t\tenvfuncs.CreateKindClusterWithConfig(util.KindClusterName, util.NodeVersion, util.KindConfigPath),\n\t\tenvfuncs.CreateNamespace(util.TestNamespace),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.ManagerImage, util.ManagerTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.RemoverImage, util.RemoverTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.CollectorImage, util.CollectorTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.ScannerImage, util.ScannerTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.RemoverImage, util.RemoverTarballPath),\n\t\tutil.HelmDeployLatestEraserRelease(util.TestNamespace,\n\t\t\t\"--set\", util.ScannerEnable.Set(\"false\"),\n\t\t\t\"--set\", util.CollectorEnable.Set(\"false\"),\n\t\t\t\"--set\", util.RemoverImageRepo.Set(removerImage.Repo),\n\t\t\t\"--set\", util.RemoverImageTag.Set(removerImage.Tag),\n\t\t\t\"--set\", util.ManagerImageRepo.Set(managerImage.Repo),\n\t\t\t\"--set\", util.ManagerImageTag.Set(managerImage.Tag),\n\t\t),\n\t\tutil.UpgradeEraserHelm(util.TestNamespace,\n\t\t\t\"--set\", util.ScannerEnable.Set(\"true\"),\n\t\t\t\"--set\", util.ScannerImageRepo.Set(scannerImage.Repo),\n\t\t\t\"--set\", util.ScannerImageTag.Set(scannerImage.Tag),\n\t\t\t\"--set\", util.RemoverImageRepo.Set(removerImage.Repo),\n\t\t\t\"--set\", util.RemoverImageTag.Set(removerImage.Tag),\n\t\t\t\"--set\", util.CollectorEnable.Set(\"true\"),\n\t\t\t\"--set\", util.CollectorImageRepo.Set(collectorImage.Repo),\n\t\t\t\"--set\", util.CollectorImageTag.Set(collectorImage.Tag),\n\t\t\t\"--set\", util.ManagerImageRepo.Set(managerImage.Repo),\n\t\t\t\"--set\", util.ManagerImageTag.Set(managerImage.Tag),\n\t\t\t\"--set-json\", util.ImagePullSecrets.Set(util.ImagePullSecretJSON),\n\t\t\t\"--set\", util.CleanupOnSuccessDelay.Set(\"1m\"),\n\t\t),\n\t).Finish(\n\t\tenvfuncs.DestroyKindCluster(util.KindClusterName),\n\t)\n\tos.Exit(util.Testenv.Run(m))\n}\n"
  },
  {
    "path": "test/e2e/tests/helm_pull_secret_imagelist/eraser_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"context\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\n\teraserv1 \"github.com/eraser-dev/eraser/api/v1\"\n\tcorev1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\t\"k8s.io/apimachinery/pkg/labels\"\n\t\"sigs.k8s.io/e2e-framework/klient/wait\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/features\"\n)\n\nconst (\n\texpectedPods = 4\n)\n\nfunc TestHelmPullSecretImagelist(t *testing.T) {\n\tpullSecretsPropagated := features.New(\"Image Pull Secrets\").\n\t\tAssess(\"All pods should have the correct pull secret\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tc, err := cfg.NewClient()\n\t\t\tif err != nil {\n\t\t\t\tt.Fatal(\"Failed to create new client\", err)\n\t\t\t}\n\n\t\t\timgList := &eraserv1.ImageList{\n\t\t\t\tObjectMeta: metav1.ObjectMeta{Name: util.Prune},\n\t\t\t\tSpec: eraserv1.ImageListSpec{\n\t\t\t\t\tImages: []string{\"*\"},\n\t\t\t\t},\n\t\t\t}\n\t\t\tif err := cfg.Client().Resources().Create(ctx, imgList); err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\n\t\t\terr = wait.For(\n\t\t\t\tutil.NumPodsPresentForLabel(ctx, c, 3, util.ImageJobTypeLabelKey+\"=\"+util.ManualLabel),\n\t\t\t\twait.WithTimeout(time.Minute*2),\n\t\t\t\twait.WithInterval(time.Millisecond*500),\n\t\t\t)\n\t\t\tif err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\n\t\t\tvar ls corev1.PodList\n\t\t\terr = c.Resources().List(ctx, &ls, func(o *metav1.ListOptions) {\n\t\t\t\to.LabelSelector = labels.SelectorFromSet(map[string]string{util.ImageJobTypeLabelKey: util.ManualLabel}).String()\n\t\t\t})\n\t\t\tif err != nil {\n\t\t\t\tt.Errorf(\"could not list pods: %v\", err)\n\t\t\t}\n\n\t\t\tvar ls2 corev1.PodList\n\t\t\terr = c.Resources().List(ctx, &ls2, func(o *metav1.ListOptions) {\n\t\t\t\to.LabelSelector = labels.SelectorFromSet(map[string]string{\"control-plane\": \"controller-manager\"}).String()\n\t\t\t})\n\n\t\t\titems := append(ls.Items, ls2.Items...)\n\t\t\tif len(items) != expectedPods {\n\t\t\t\tt.Errorf(\"incorrect number of pods for eraser deployment. should be %d but was %d\", expectedPods, len(items))\n\t\t\t}\n\n\t\t\tfor _, pod := range items {\n\t\t\t\tfound := false\n\t\t\t\tfor _, secret := range pod.Spec.ImagePullSecrets {\n\t\t\t\t\tif secret.Name == util.ImagePullSecret {\n\t\t\t\t\t\tfound = true\n\t\t\t\t\t\tbreak\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tif !found {\n\t\t\t\t\tt.Errorf(\"pod %s does not have secret set\", pod.ObjectMeta.Name)\n\t\t\t\t}\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Get logs\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tif err := util.GetPodLogs(t); err != nil {\n\t\t\t\tt.Error(\"error getting eraser pod logs\", err)\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tFeature()\n\n\tutil.Testenv.Test(t, pullSecretsPropagated)\n}\n"
  },
  {
    "path": "test/e2e/tests/helm_pull_secret_imagelist/main_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"os\"\n\t\"testing\"\n\n\teraserv1 \"github.com/eraser-dev/eraser/api/v1\"\n\teraserv1alpha1 \"github.com/eraser-dev/eraser/api/v1alpha1\"\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\tutilruntime \"k8s.io/apimachinery/pkg/util/runtime\"\n\t\"k8s.io/client-go/kubernetes/scheme\"\n\t\"sigs.k8s.io/e2e-framework/pkg/env\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envfuncs\"\n)\n\nfunc TestMain(m *testing.M) {\n\tutilruntime.Must(eraserv1alpha1.AddToScheme(scheme.Scheme))\n\tutilruntime.Must(eraserv1.AddToScheme(scheme.Scheme))\n\n\tremoverImage := util.ParsedImages.RemoverImage\n\tmanagerImage := util.ParsedImages.ManagerImage\n\n\tutil.Testenv = env.NewWithConfig(envconf.New())\n\t// Create KinD Cluster\n\tutil.Testenv.Setup(\n\t\tenvfuncs.CreateKindClusterWithConfig(util.KindClusterName, util.NodeVersion, util.KindConfigPath),\n\t\tenvfuncs.CreateNamespace(util.TestNamespace),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.ManagerImage, util.ManagerTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.RemoverImage, util.RemoverTarballPath),\n\t\tutil.DeployEraserHelm(util.TestNamespace,\n\t\t\t\"--set\", util.ScannerEnable.Set(\"false\"),\n\t\t\t\"--set\", util.RemoverImageRepo.Set(removerImage.Repo),\n\t\t\t\"--set\", util.RemoverImageTag.Set(removerImage.Tag),\n\t\t\t\"--set\", util.CollectorEnable.Set(\"false\"),\n\t\t\t\"--set\", util.ManagerImageRepo.Set(managerImage.Repo),\n\t\t\t\"--set\", util.ManagerImageTag.Set(managerImage.Tag),\n\t\t\t\"--set-json\", util.ImagePullSecrets.Set(util.ImagePullSecretJSON),\n\t\t\t\"--set\", util.CleanupOnSuccessDelay.Set(\"1m\"),\n\t\t),\n\t).Finish(\n\t\tenvfuncs.DestroyKindCluster(util.KindClusterName),\n\t)\n\tos.Exit(util.Testenv.Run(m))\n}\n"
  },
  {
    "path": "test/e2e/tests/imagelist_alias/eraser_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"context\"\n\t\"testing\"\n\n\teraserv1 \"github.com/eraser-dev/eraser/api/v1\"\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\tcorev1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\n\t\"sigs.k8s.io/e2e-framework/klient/wait\"\n\t\"sigs.k8s.io/e2e-framework/klient/wait/conditions\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/features\"\n)\n\ntype nodeString string\n\nconst (\n\tnginxOneName            = \"nginxone\"\n\tnginxTwoName            = \"nginxtwo\"\n\tnodeNameKey  nodeString = \"nodeName\"\n)\n\nfunc TestEnsureAliasedImageRemoved(t *testing.T) {\n\taliasFix := features.New(\"Specifying an image alias in the image list will delete the underlying image\").\n\t\t// Deploy 3 deployments with different images\n\t\t// We'll shutdown two of them, run eraser with `*`, then check that the images for the removed deployments are removed from the cluster.\n\t\tSetup(func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\t// Ensure that both nginx:one and nginx:two are tags for the same image digest\n\t\t\t_, err := util.DockerPullImage(util.NginxLatest)\n\t\t\tif err != nil {\n\t\t\t\tt.Error(\"failed to pull nginx image\", err)\n\t\t\t}\n\n\t\t\t// Schedule two pods on a single node. Both pods will create containers from the same image,\n\t\t\t// but each pod refers to that same image by a different tag.\n\t\t\tnodeName := util.GetClusterNodes(t)[0]\n\n\t\t\t// At ghcr.io/eraser-dev/eraser/e2e-test/nginx there is a repository\n\t\t\t// containing three tags. The three tags are `latest`, `one` and\n\t\t\t// `two`. They are all aliases for the same image; only the name\n\t\t\t// differs. These images are maintained there in order to avoid\n\t\t\t// sideloading images into the kind cluster, which has a known bug\n\t\t\t// associated with it. 
See https://github.com/containerd/containerd/issues/7698\n\t\t\t// for more information.\n\t\t\tnginxOnePod := util.NewPod(cfg.Namespace(), util.NginxAliasOne, nginxOneName, nodeName)\n\t\t\tctx = context.WithValue(ctx, nodeNameKey, nodeName)\n\n\t\t\tif err := cfg.Client().Resources().Create(ctx, nginxOnePod); err != nil {\n\t\t\t\tt.Error(\"Failed to create the nginx pod\", err)\n\t\t\t}\n\n\t\t\tnginxTwoPod := util.NewPod(cfg.Namespace(), util.NginxAliasTwo, nginxTwoName, nodeName)\n\t\t\tif err := cfg.Client().Resources().Create(ctx, nginxTwoPod); err != nil {\n\t\t\t\tt.Error(\"Failed to create the nginx pod\", err)\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Pods successfully deployed\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tclient, err := cfg.NewClient()\n\t\t\tif err != nil {\n\t\t\t\tt.Error(\"Failed to create new client\", err)\n\t\t\t}\n\n\t\t\tresultPod := corev1.Pod{\n\t\t\t\tObjectMeta: metav1.ObjectMeta{Name: nginxOneName, Namespace: cfg.Namespace()},\n\t\t\t}\n\n\t\t\terr = wait.For(conditions.New(client.Resources()).PodConditionMatch(&resultPod, corev1.PodReady, corev1.ConditionTrue), wait.WithTimeout(util.Timeout))\n\t\t\tif err != nil {\n\t\t\t\tt.Error(\"pod not deployed\", err)\n\t\t\t}\n\n\t\t\tresultPod = corev1.Pod{\n\t\t\t\tObjectMeta: metav1.ObjectMeta{Name: nginxTwoName, Namespace: cfg.Namespace()},\n\t\t\t}\n\n\t\t\terr = wait.For(conditions.New(client.Resources()).PodConditionMatch(&resultPod, corev1.PodReady, corev1.ConditionTrue), wait.WithTimeout(util.Timeout))\n\t\t\tif err != nil {\n\t\t\t\tt.Error(\"pod not deployed\", err)\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Pods successfully deleted\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tvar (\n\t\t\t\tnginxOnePod corev1.Pod\n\t\t\t\tnginxTwoPod corev1.Pod\n\t\t\t)\n\n\t\t\tclient, err := cfg.NewClient()\n\t\t\tif err != nil {\n\t\t\t\tt.Error(\"Failed to create new client\", err)\n\t\t\t}\n\n\t\t\tif err := client.Resources().Get(ctx, nginxOneName, util.TestNamespace, &nginxOnePod); err != nil {\n\t\t\t\tt.Error(\"Failed to get the pod\", err)\n\t\t\t}\n\n\t\t\tif err := client.Resources().Get(ctx, nginxTwoName, util.TestNamespace, &nginxTwoPod); err != nil {\n\t\t\t\tt.Error(\"Failed to get the pod\", err)\n\t\t\t}\n\n\t\t\t// Delete the pods, so they will be cleaned up\n\t\t\tif err := client.Resources().Delete(ctx, &nginxOnePod); err != nil {\n\t\t\t\tt.Error(\"Failed to delete the pod\", err)\n\t\t\t}\n\n\t\t\tif err := client.Resources().Delete(ctx, &nginxTwoPod); err != nil {\n\t\t\t\tt.Error(\"Failed to delete the pod\", err)\n\t\t\t}\n\n\t\t\ttoDelete := corev1.PodList{\n\t\t\t\tItems: []corev1.Pod{nginxOnePod, nginxTwoPod},\n\t\t\t}\n\t\t\terr = wait.For(conditions.New(client.Resources()).ResourcesDeleted(&toDelete))\n\t\t\tif err != nil {\n\t\t\t\tt.Error(\"failed to delete pods\", err)\n\t\t\t}\n\n\t\t\tnodeName, ok := ctx.Value(nodeNameKey).(string)\n\t\t\tif !ok {\n\t\t\t\tt.Error(\"something is terribly wrong with the nodeName value\")\n\t\t\t}\n\n\t\t\tif err := wait.For(util.ContainerNotPresentOnNode(nodeName, nginxOneName), wait.WithTimeout(util.Timeout)); err != nil {\n\t\t\t\t// Let's not mark this as an error\n\t\t\t\t// We only have this to prevent race conditions with the eraser spinning up\n\t\t\t\tt.Logf(\"error while waiting for deployment deletion: %v\", err)\n\t\t\t}\n\n\t\t\tif err := wait.For(util.ContainerNotPresentOnNode(nodeName, nginxTwoName), 
wait.WithTimeout(util.Timeout)); err != nil {\n\t\t\t\t// Let's not mark this as an error\n\t\t\t\t// We only have this to prevent race conditions with the eraser spinning up\n\t\t\t\tt.Logf(\"error while waiting for deployment deletion: %v\", err)\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Image deleted when referencing by alias\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\timgList := &eraserv1.ImageList{\n\t\t\t\tObjectMeta: metav1.ObjectMeta{Name: util.Prune},\n\t\t\t\tSpec: eraserv1.ImageListSpec{\n\t\t\t\t\tImages: []string{util.NginxAliasTwo},\n\t\t\t\t},\n\t\t\t}\n\t\t\tif err := cfg.Client().Resources().Create(ctx, imgList); err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\n\t\t\tnodeName, ok := ctx.Value(nodeNameKey).(string)\n\t\t\tif !ok {\n\t\t\t\tt.Error(\"something is terribly wrong with the nodeName value\")\n\t\t\t}\n\n\t\t\tctxT, cancel := context.WithTimeout(ctx, util.Timeout)\n\t\t\tdefer cancel()\n\t\t\tutil.CheckImageRemoved(ctxT, t, []string{nodeName}, util.Nginx)\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Get logs\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tif err := util.GetPodLogs(t); err != nil {\n\t\t\t\tt.Error(\"error getting eraser pod logs\", err)\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tFeature()\n\n\tutil.Testenv.Test(t, aliasFix)\n}\n"
  },
  {
    "path": "test/e2e/tests/imagelist_alias/main_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"os\"\n\t\"testing\"\n\n\teraserv1 \"github.com/eraser-dev/eraser/api/v1\"\n\teraserv1alpha1 \"github.com/eraser-dev/eraser/api/v1alpha1\"\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\tutilruntime \"k8s.io/apimachinery/pkg/util/runtime\"\n\t\"k8s.io/client-go/kubernetes/scheme\"\n\t\"sigs.k8s.io/e2e-framework/pkg/env\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envfuncs\"\n)\n\nfunc TestMain(m *testing.M) {\n\tutilruntime.Must(eraserv1alpha1.AddToScheme(scheme.Scheme))\n\tutilruntime.Must(eraserv1.AddToScheme(scheme.Scheme))\n\n\tutil.Testenv = env.NewWithConfig(envconf.New())\n\t// Create KinD Cluster\n\n\tremoverImage := util.ParsedImages.RemoverImage\n\tmanagerImage := util.ParsedImages.ManagerImage\n\n\tutil.Testenv.Setup(\n\t\tenvfuncs.CreateKindClusterWithConfig(util.KindClusterName, util.NodeVersion, util.KindConfigPath),\n\t\tenvfuncs.CreateNamespace(util.TestNamespace),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.ManagerImage, util.ManagerTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.RemoverImage, util.RemoverTarballPath),\n\t\tutil.HelmDeployLatestEraserRelease(util.TestNamespace,\n\t\t\t\"--set\", util.ScannerEnable.Set(\"false\"),\n\t\t\t\"--set\", util.CollectorEnable.Set(\"false\"),\n\t\t\t\"--set\", util.RemoverImageRepo.Set(removerImage.Repo),\n\t\t\t\"--set\", util.RemoverImageTag.Set(removerImage.Tag),\n\t\t\t\"--set\", util.ManagerImageRepo.Set(managerImage.Repo),\n\t\t\t\"--set\", util.ManagerImageTag.Set(managerImage.Tag),\n\t\t),\n\t\tutil.UpgradeEraserHelm(util.TestNamespace,\n\t\t\t\"--set\", util.CollectorEnable.Set(\"false\"),\n\t\t\t\"--set\", util.ScannerEnable.Set(\"false\"),\n\t\t\t\"--set\", util.RemoverImageRepo.Set(removerImage.Repo),\n\t\t\t\"--set\", util.RemoverImageTag.Set(removerImage.Tag),\n\t\t\t\"--set\", util.ManagerImageRepo.Set(managerImage.Repo),\n\t\t\t\"--set\", util.ManagerImageTag.Set(managerImage.Tag),\n\t\t\t\"--set\", util.CleanupOnSuccessDelay.Set(\"1m\"),\n\t\t),\n\t).Finish(\n\t\tenvfuncs.DestroyKindCluster(util.KindClusterName),\n\t)\n\tos.Exit(util.Testenv.Run(m))\n}\n"
  },
  {
    "path": "test/e2e/tests/imagelist_change/eraser_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"context\"\n\t\"testing\"\n\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\tappsv1 \"k8s.io/api/apps/v1\"\n\tcorev1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\t\"k8s.io/apimachinery/pkg/labels\"\n\n\t\"sigs.k8s.io/e2e-framework/klient/wait\"\n\t\"sigs.k8s.io/e2e-framework/klient/wait/conditions\"\n\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/features\"\n)\n\nfunc TestUpdateImageList(t *testing.T) {\n\timglistChangeFeat := features.New(\"Updating the Imagelist should trigger an ImageJob\").\n\t\tSetup(func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\t// Deploy 2 deployments with different images (nginx, redis)\n\t\t\tnginxDep := util.NewDeployment(cfg.Namespace(), util.Nginx, 2, map[string]string{\"app\": util.Nginx}, corev1.Container{Image: util.Nginx, Name: util.Nginx})\n\t\t\tif err := cfg.Client().Resources().Create(ctx, nginxDep); err != nil {\n\t\t\t\tt.Error(\"Failed to create the nginx dep\", err)\n\t\t\t}\n\n\t\t\tutil.NewDeployment(cfg.Namespace(), util.Redis, 2, map[string]string{\"app\": util.Redis}, corev1.Container{Image: util.Redis, Name: util.Redis})\n\t\t\terr := cfg.Client().Resources().Create(ctx, util.NewDeployment(cfg.Namespace(), util.Redis, 2, map[string]string{\"app\": util.Redis}, corev1.Container{Image: util.Redis, Name: util.Redis}))\n\t\t\tif err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Deployments successfully deployed\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tclient, err := cfg.NewClient()\n\t\t\tif err != nil {\n\t\t\t\tt.Error(\"Failed to create new client\", err)\n\t\t\t}\n\n\t\t\tnginxDep := appsv1.Deployment{\n\t\t\t\tObjectMeta: metav1.ObjectMeta{Name: util.Nginx, Namespace: cfg.Namespace()},\n\t\t\t}\n\n\t\t\tif err = wait.For(conditions.New(client.Resources()).DeploymentConditionMatch(&nginxDep, appsv1.DeploymentAvailable, corev1.ConditionTrue),\n\t\t\t\twait.WithTimeout(util.Timeout)); err != nil {\n\t\t\t\tt.Fatal(\"nginx deployment not found\", err)\n\t\t\t}\n\t\t\tctx = context.WithValue(ctx, util.Nginx, &nginxDep)\n\n\t\t\tredisDep := appsv1.Deployment{\n\t\t\t\tObjectMeta: metav1.ObjectMeta{Name: util.Redis, Namespace: cfg.Namespace()},\n\t\t\t}\n\t\t\tif err = wait.For(conditions.New(client.Resources()).DeploymentConditionMatch(&redisDep, appsv1.DeploymentAvailable, corev1.ConditionTrue),\n\t\t\t\twait.WithTimeout(util.Timeout)); err != nil {\n\t\t\t\tt.Fatal(\"redis deployment not found\", err)\n\t\t\t}\n\t\t\tctx = context.WithValue(ctx, util.Redis, &redisDep)\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Remove deployments so the images aren't running\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\t// Here we remove the redis and nginx deployments\n\t\t\tvar redisPods corev1.PodList\n\t\t\tif err := cfg.Client().Resources().List(ctx, &redisPods, func(o *metav1.ListOptions) {\n\t\t\t\to.LabelSelector = labels.SelectorFromSet(map[string]string{\"app\": util.Redis}).String()\n\t\t\t}); err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\t\t\tif len(redisPods.Items) != 2 {\n\t\t\t\tt.Fatal(\"missing pods in redis deployment\")\n\t\t\t}\n\n\t\t\tvar nginxPods corev1.PodList\n\t\t\tif err := cfg.Client().Resources().List(ctx, &nginxPods, func(o *metav1.ListOptions) {\n\t\t\t\to.LabelSelector = labels.SelectorFromSet(map[string]string{\"app\": 
util.Nginx}).String()\n\t\t\t}); err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\t\t\tif len(nginxPods.Items) != 2 {\n\t\t\t\tt.Fatal(\"missing pods in nginx deployment\")\n\t\t\t}\n\n\t\t\terr := cfg.Client().Resources().Delete(ctx, ctx.Value(util.Redis).(*appsv1.Deployment))\n\t\t\tif err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\t\t\terr = cfg.Client().Resources().Delete(ctx, ctx.Value(util.Nginx).(*appsv1.Deployment))\n\t\t\tif err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\n\t\t\tfor _, nodeName := range util.GetClusterNodes(t) {\n\t\t\t\terr := wait.For(util.ContainerNotPresentOnNode(nodeName, util.Redis), wait.WithTimeout(util.Timeout))\n\t\t\t\tif err != nil {\n\t\t\t\t\t// Let's not mark this as an error\n\t\t\t\t\t// We only have this to prevent race conditions with the eraser spinning up\n\t\t\t\t\tt.Logf(\"error while waiting for deployment deletion: %v\", err)\n\t\t\t\t}\n\t\t\t}\n\t\t\tfor _, nodeName := range util.GetClusterNodes(t) {\n\t\t\t\terr := wait.For(util.ContainerNotPresentOnNode(nodeName, util.Nginx), wait.WithTimeout(util.Timeout))\n\t\t\t\tif err != nil {\n\t\t\t\t\t// Let's not mark this as an error\n\t\t\t\t\t// We only have this to prevent race conditions with the eraser spinning up\n\t\t\t\t\tt.Logf(\"error while waiting for deployment deletion: %v\", err)\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Deploy imagelist to remove nginx\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\t// deploy imageJob config\n\t\t\tif err := util.DeployEraserConfig(cfg.KubeconfigFile(), cfg.Namespace(), util.EraserV1Alpha1ImagelistPath); err != nil {\n\t\t\t\tt.Error(\"Failed to deploy image list config\", err)\n\t\t\t}\n\n\t\t\tctxT, cancel := context.WithTimeout(ctx, util.Timeout)\n\t\t\tdefer cancel()\n\t\t\tutil.CheckImageRemoved(ctxT, t, util.GetClusterNodes(t), util.Nginx)\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Update imagelist to prune rest of images\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\t// deploy imageJob config\n\t\t\tif err := util.DeployEraserConfig(cfg.KubeconfigFile(), cfg.Namespace(), util.EraserV1Alpha1ImagelistUpdatedPath); err != nil {\n\t\t\t\tt.Error(\"Failed to deploy image list config\", err)\n\t\t\t}\n\n\t\t\tctxT, cancel := context.WithTimeout(ctx, util.Timeout)\n\t\t\tdefer cancel()\n\t\t\tutil.CheckImageRemoved(ctxT, t, util.GetClusterNodes(t), util.Redis)\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Get logs\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tif err := util.GetPodLogs(t); err != nil {\n\t\t\t\tt.Error(\"error getting eraser pod logs\", err)\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tFeature()\n\n\tutil.Testenv.Test(t, imglistChangeFeat)\n}\n"
  },
  {
    "path": "test/e2e/tests/imagelist_change/main_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"os\"\n\t\"testing\"\n\n\teraserv1alpha1 \"github.com/eraser-dev/eraser/api/v1alpha1\"\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\tutilruntime \"k8s.io/apimachinery/pkg/util/runtime\"\n\t\"k8s.io/client-go/kubernetes/scheme\"\n\t\"sigs.k8s.io/e2e-framework/pkg/env\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envfuncs\"\n)\n\nfunc TestMain(m *testing.M) {\n\tutilruntime.Must(eraserv1alpha1.AddToScheme(scheme.Scheme))\n\n\tremoverImage := util.ParsedImages.RemoverImage\n\tmanagerImage := util.ParsedImages.ManagerImage\n\n\tutil.Testenv = env.NewWithConfig(envconf.New())\n\t// Create KinD Cluster\n\tutil.Testenv.Setup(\n\t\tenvfuncs.CreateKindClusterWithConfig(util.KindClusterName, util.NodeVersion, util.KindConfigPath),\n\t\tenvfuncs.CreateNamespace(util.TestNamespace),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.ManagerImage, util.ManagerTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.RemoverImage, util.RemoverTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.RemoverImage, util.RemoverTarballPath),\n\t\tutil.HelmDeployLatestEraserRelease(util.TestNamespace,\n\t\t\t\"--set\", util.ScannerEnable.Set(\"false\"),\n\t\t\t\"--set\", util.CollectorEnable.Set(\"false\"),\n\t\t\t\"--set\", util.RemoverImageRepo.Set(removerImage.Repo),\n\t\t\t\"--set\", util.RemoverImageTag.Set(removerImage.Tag),\n\t\t\t\"--set\", util.ManagerImageRepo.Set(managerImage.Repo),\n\t\t\t\"--set\", util.ManagerImageTag.Set(managerImage.Tag),\n\t\t),\n\t\tutil.UpgradeEraserHelm(util.TestNamespace,\n\t\t\t\"--set\", util.CollectorEnable.Set(\"false\"),\n\t\t\t\"--set\", util.ScannerEnable.Set(\"false\"),\n\t\t\t\"--set\", util.RemoverImageRepo.Set(removerImage.Repo),\n\t\t\t\"--set\", util.RemoverImageTag.Set(removerImage.Tag),\n\t\t\t\"--set\", util.ManagerImageRepo.Set(managerImage.Repo),\n\t\t\t\"--set\", util.ManagerImageTag.Set(managerImage.Tag),\n\t\t),\n\t).Finish(\n\t\tenvfuncs.DestroyKindCluster(util.KindClusterName),\n\t)\n\tos.Exit(util.Testenv.Run(m))\n}\n"
  },
  {
    "path": "test/e2e/tests/imagelist_exclusion_list/eraser_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"context\"\n\t\"testing\"\n\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\tappsv1 \"k8s.io/api/apps/v1\"\n\tcorev1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\t\"k8s.io/apimachinery/pkg/labels\"\n\n\t\"sigs.k8s.io/e2e-framework/klient/wait\"\n\t\"sigs.k8s.io/e2e-framework/klient/wait/conditions\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/features\"\n)\n\nfunc TestExclusionList(t *testing.T) {\n\texcludedImageFeat := features.New(\"Verify Eraser will skip excluded images\").\n\t\tSetup(func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tpodSelectorLabels := map[string]string{\"app\": util.Nginx}\n\t\t\tnginxDep := util.NewDeployment(cfg.Namespace(), util.Nginx, 2, podSelectorLabels, corev1.Container{Image: util.Nginx, Name: util.Nginx})\n\t\t\tif err := cfg.Client().Resources().Create(ctx, nginxDep); err != nil {\n\t\t\t\tt.Error(\"Failed to create the dep\", err)\n\t\t\t}\n\t\t\tif err := util.DeleteImageListsAndJobs(cfg.KubeconfigFile()); err != nil {\n\t\t\t\tt.Error(\"Failed to clean eraser obejcts \", err)\n\t\t\t}\n\n\t\t\t// create excluded configmap and add docker.io/library/*\n\t\t\texcluded := corev1.ConfigMap{\n\t\t\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\t\t\tName:      \"excluded\",\n\t\t\t\t\tNamespace: cfg.Namespace(),\n\t\t\t\t\tLabels:    map[string]string{\"eraser.sh/exclude.list\": \"true\"},\n\t\t\t\t},\n\t\t\t\tData: map[string]string{\"test.json\": \"{\\\"excluded\\\": [\\\"docker.io/library/*\\\"]}\"},\n\t\t\t}\n\t\t\tif err := cfg.Client().Resources().Create(ctx, &excluded); err != nil {\n\t\t\t\tt.Error(\"failed to create excluded configmap\", err)\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"deployment successfully deployed\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tclient, err := cfg.NewClient()\n\t\t\tif err != nil {\n\t\t\t\tt.Error(\"Failed to create new client\", err)\n\t\t\t}\n\n\t\t\tresultDeployment := appsv1.Deployment{\n\t\t\t\tObjectMeta: metav1.ObjectMeta{Name: util.Nginx, Namespace: cfg.Namespace()},\n\t\t\t}\n\n\t\t\tif err = wait.For(conditions.New(client.Resources()).DeploymentConditionMatch(&resultDeployment, appsv1.DeploymentAvailable, corev1.ConditionTrue),\n\t\t\t\twait.WithTimeout(util.Timeout)); err != nil {\n\t\t\t\tt.Error(\"deployment not found\", err)\n\t\t\t}\n\n\t\t\treturn context.WithValue(ctx, util.Nginx, &resultDeployment)\n\t\t}).\n\t\tAssess(\"Check image remains in all nodes\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\t// delete deployment\n\t\t\tclient, err := cfg.NewClient()\n\t\t\tif err != nil {\n\t\t\t\tt.Error(\"Failed to create new client\", err)\n\t\t\t}\n\n\t\t\tvar pods corev1.PodList\n\t\t\terr = client.Resources().List(ctx, &pods, func(o *metav1.ListOptions) {\n\t\t\t\to.LabelSelector = labels.SelectorFromSet(labels.Set{\"app\": util.Nginx}).String()\n\t\t\t})\n\t\t\tif err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\n\t\t\tdep := ctx.Value(util.Nginx).(*appsv1.Deployment)\n\t\t\tif err := client.Resources().Delete(ctx, dep); err != nil {\n\t\t\t\tt.Error(\"Failed to delete the dep\", err)\n\t\t\t}\n\n\t\t\tfor _, nodeName := range util.GetClusterNodes(t) {\n\t\t\t\terr := wait.For(util.ContainerNotPresentOnNode(nodeName, util.Nginx), wait.WithTimeout(util.Timeout))\n\t\t\t\tif err != nil {\n\t\t\t\t\tt.Logf(\"error while waiting for 
deployment deletion: %v\", err)\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t// create imagelist to trigger deletion\n\t\t\tif err := util.DeployEraserConfig(cfg.KubeconfigFile(), cfg.Namespace(), util.EraserV1ImagelistPath); err != nil {\n\t\t\t\tt.Error(\"Failed to deploy image list config\", err)\n\t\t\t}\n\n\t\t\t_, cancel := context.WithTimeout(ctx, util.Timeout)\n\t\t\tdefer cancel()\n\t\t\t// since docker.io/library/* was excluded, nginx should still exist following deletion\n\t\t\tutil.CheckImagesExist(t, util.GetClusterNodes(t), util.Nginx)\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Get logs\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tif err := util.GetPodLogs(t); err != nil {\n\t\t\t\tt.Error(\"error getting eraser pod logs\", err)\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tFeature()\n\n\tutil.Testenv.Test(t, excludedImageFeat)\n}\n"
  },
  {
    "path": "test/e2e/tests/imagelist_exclusion_list/main_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"os\"\n\t\"testing\"\n\n\teraserv1alpha1 \"github.com/eraser-dev/eraser/api/v1alpha1\"\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\tutilruntime \"k8s.io/apimachinery/pkg/util/runtime\"\n\t\"k8s.io/client-go/kubernetes/scheme\"\n\t\"sigs.k8s.io/e2e-framework/pkg/env\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envfuncs\"\n)\n\nfunc TestMain(m *testing.M) {\n\tutilruntime.Must(eraserv1alpha1.AddToScheme(scheme.Scheme))\n\n\tremoverImage := util.ParsedImages.RemoverImage\n\tmanagerImage := util.ParsedImages.ManagerImage\n\n\tutil.Testenv = env.NewWithConfig(envconf.New())\n\t// Create KinD Cluster\n\tutil.Testenv.Setup(\n\t\tenvfuncs.CreateKindClusterWithConfig(util.KindClusterName, util.NodeVersion, util.KindConfigPath),\n\t\tenvfuncs.CreateNamespace(util.TestNamespace),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.ManagerImage, util.ManagerTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.RemoverImage, util.RemoverTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.RemoverImage, util.RemoverTarballPath),\n\t\tutil.HelmDeployLatestEraserRelease(util.TestNamespace,\n\t\t\t\"--set\", util.ScannerEnable.Set(\"false\"),\n\t\t\t\"--set\", util.CollectorEnable.Set(\"false\"),\n\t\t\t\"--set\", util.RemoverImageRepo.Set(removerImage.Repo),\n\t\t\t\"--set\", util.RemoverImageTag.Set(removerImage.Tag),\n\t\t\t\"--set\", util.ManagerImageRepo.Set(managerImage.Repo),\n\t\t\t\"--set\", util.ManagerImageTag.Set(managerImage.Tag),\n\t\t),\n\t\tutil.UpgradeEraserHelm(util.TestNamespace,\n\t\t\t\"--set\", util.CollectorEnable.Set(\"false\"),\n\t\t\t\"--set\", util.ScannerEnable.Set(\"false\"),\n\t\t\t\"--set\", util.RemoverImageRepo.Set(removerImage.Repo),\n\t\t\t\"--set\", util.RemoverImageTag.Set(removerImage.Tag),\n\t\t\t\"--set\", util.ManagerImageRepo.Set(managerImage.Repo),\n\t\t\t\"--set\", util.ManagerImageTag.Set(managerImage.Tag),\n\t\t\t\"--set\", util.CleanupOnSuccessDelay.Set(\"1m\"),\n\t\t),\n\t).Finish(\n\t\tenvfuncs.DestroyKindCluster(util.KindClusterName),\n\t)\n\tos.Exit(util.Testenv.Run(m))\n}\n"
  },
  {
    "path": "test/e2e/tests/imagelist_include_nodes/eraser_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"context\"\n\t\"testing\"\n\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\tappsv1 \"k8s.io/api/apps/v1\"\n\tcorev1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\t\"k8s.io/apimachinery/pkg/labels\"\n\tclientgo \"k8s.io/client-go/kubernetes\"\n\n\t\"sigs.k8s.io/e2e-framework/klient/wait\"\n\t\"sigs.k8s.io/e2e-framework/klient/wait/conditions\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/features\"\n)\n\nfunc TestIncludeNodes(t *testing.T) {\n\tincludeNodesFeat := features.New(\"Applying the eraser.sh/cleanup.filter label to a node should only schedule ImageJob pods on that node\").\n\t\tSetup(func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\t// fetch node info\n\t\t\tc := cfg.Client().RESTConfig()\n\t\t\tk8sClient, err := clientgo.NewForConfig(c)\n\t\t\tif err != nil {\n\t\t\t\tt.Error(\"unable to obtain k8s client from config\", err)\n\t\t\t}\n\n\t\t\tpodSelectorLabels := map[string]string{\"app\": util.Nginx}\n\t\t\tnginxDep := util.NewDeployment(cfg.Namespace(), util.Nginx, 2, podSelectorLabels, corev1.Container{Image: util.Nginx, Name: util.Nginx})\n\t\t\tif err := cfg.Client().Resources().Create(ctx, nginxDep); err != nil {\n\t\t\t\tt.Error(\"Failed to create the dep\", err)\n\t\t\t}\n\n\t\t\tnodeList, err := k8sClient.CoreV1().Nodes().List(ctx, metav1.ListOptions{LabelSelector: util.FilterNodeSelector})\n\t\t\tif err != nil {\n\t\t\t\tt.Errorf(\"unable to list node %s\\n%#v\", util.FilterNodeSelector, err)\n\t\t\t}\n\n\t\t\tif len(nodeList.Items) != 1 {\n\t\t\t\tt.Errorf(\"List operation for selector %s resulted in the wrong number of nodes\", util.FilterNodeSelector)\n\t\t\t}\n\n\t\t\tnodeInclude := &nodeList.Items[0]\n\t\t\tnodeInclude.ObjectMeta.Labels[util.FilterLabelKey] = util.FilterLabelValue\n\n\t\t\tnodeInclude, err = k8sClient.CoreV1().Nodes().Update(ctx, nodeInclude, metav1.UpdateOptions{})\n\t\t\tif err != nil {\n\t\t\t\tt.Errorf(\"unable to update node %#v with label {%s: %s}\\nerror: %#v\", nodeInclude, util.FilterLabelKey, util.FilterLabelValue, err)\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Deployment and labelling the node have succeeded\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tc := cfg.Client().RESTConfig()\n\t\t\tk8sClient, err := clientgo.NewForConfig(c)\n\t\t\tif err != nil {\n\t\t\t\tt.Error(\"unable to obtain k8s client from config\", err)\n\t\t\t}\n\n\t\t\terr = wait.For(func() (bool, error) {\n\t\t\t\tnodeList, err := k8sClient.CoreV1().Nodes().List(ctx, metav1.ListOptions{LabelSelector: util.FilterLabelKey})\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn false, err\n\t\t\t\t}\n\n\t\t\t\treturn len(nodeList.Items) == 1, nil\n\t\t\t}, wait.WithTimeout(util.Timeout))\n\t\t\tif err != nil {\n\t\t\t\tt.Errorf(\"error while waiting for selector %s to be added to node\\n%#v\", util.FilterNodeSelector, err)\n\t\t\t}\n\n\t\t\tresultDeployment := appsv1.Deployment{\n\t\t\t\tObjectMeta: metav1.ObjectMeta{Name: util.Nginx, Namespace: cfg.Namespace()},\n\t\t\t}\n\n\t\t\tif err = wait.For(\n\t\t\t\tconditions.New(cfg.Client().Resources()).DeploymentConditionMatch(&resultDeployment, appsv1.DeploymentAvailable, corev1.ConditionTrue),\n\t\t\t\twait.WithTimeout(util.Timeout),\n\t\t\t); err != nil {\n\t\t\t\tt.Error(\"deployment not found\", err)\n\t\t\t}\n\n\t\t\treturn context.WithValue(ctx, util.Nginx, 
&resultDeployment)\n\t\t}).\n\t\tAssess(\"Node(s) successfully included\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\t// delete deployment\n\t\t\tclient, err := cfg.NewClient()\n\t\t\tif err != nil {\n\t\t\t\tt.Error(\"Failed to create new client\", err)\n\t\t\t}\n\n\t\t\tvar pods corev1.PodList\n\t\t\terr = client.Resources().List(ctx, &pods, func(o *metav1.ListOptions) {\n\t\t\t\to.LabelSelector = labels.SelectorFromSet(labels.Set{\"app\": util.Nginx}).String()\n\t\t\t})\n\t\t\tif err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\n\t\t\tdep := ctx.Value(util.Nginx).(*appsv1.Deployment)\n\t\t\tif err := client.Resources().Delete(ctx, dep); err != nil {\n\t\t\t\tt.Error(\"Failed to delete the dep\", err)\n\t\t\t}\n\n\t\t\terr = wait.For(util.ContainerNotPresentOnNode(util.FilterNodeName, util.Nginx), wait.WithTimeout(util.Timeout))\n\t\t\tif err != nil {\n\t\t\t\t// Let's not mark this as an error\n\t\t\t\t// We only have this to prevent race conditions with the eraser spinning up\n\t\t\t\tt.Logf(\"error while waiting for deployment deletion: %v\", err)\n\t\t\t}\n\n\t\t\t// deploy imageJob config\n\t\t\tif err = util.DeployEraserConfig(cfg.KubeconfigFile(), cfg.Namespace(), util.EraserV1Alpha1ImagelistPath); err != nil {\n\t\t\t\tt.Error(\"Failed to deploy image list config\", err)\n\t\t\t}\n\n\t\t\t// get pod logs before imagejob is deleted\n\t\t\tif err := util.GetPodLogs(t); err != nil {\n\t\t\t\tt.Error(\"error getting eraser pod logs\", err)\n\t\t\t}\n\n\t\t\tctxT, cancel := context.WithTimeout(ctx, util.Timeout)\n\t\t\tdefer cancel()\n\n\t\t\t// ensure image is removed from filtered node.\n\t\t\tutil.CheckImageRemoved(ctxT, t, []string{util.FilterNodeName}, util.Nginx)\n\n\t\t\t// Wait for the imagejob to be completed by checking for its nonexistence in the cluster\n\t\t\terr = wait.For(util.ImagejobNotInCluster(cfg.KubeconfigFile()), wait.WithTimeout(util.Timeout))\n\t\t\tif err != nil {\n\t\t\t\tt.Logf(\"error while waiting for imagejob cleanup: %v\", err)\n\t\t\t}\n\n\t\t\tclusterNodes := util.GetClusterNodes(t)\n\t\t\tclusterNodes = util.DeleteStringFromSlice(clusterNodes, util.FilterNodeName)\n\n\t\t\t// the imagejob has done its work, so now we can check the node to make sure it didn't remove the images from the remaining nodes\n\t\t\tutil.CheckImagesExist(t, clusterNodes, util.Nginx)\n\n\t\t\treturn ctx\n\t\t}).\n\t\tFeature()\n\n\tutil.Testenv.Test(t, includeNodesFeat)\n}\n"
  },
  {
    "path": "test/e2e/tests/imagelist_include_nodes/main_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"os\"\n\t\"testing\"\n\n\teraserv1alpha1 \"github.com/eraser-dev/eraser/api/v1alpha1\"\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\tutilruntime \"k8s.io/apimachinery/pkg/util/runtime\"\n\t\"k8s.io/client-go/kubernetes/scheme\"\n\t\"sigs.k8s.io/e2e-framework/pkg/env\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envfuncs\"\n)\n\nfunc TestMain(m *testing.M) {\n\tutilruntime.Must(eraserv1alpha1.AddToScheme(scheme.Scheme))\n\n\tremoverImage := util.ParsedImages.RemoverImage\n\tmanagerImage := util.ParsedImages.ManagerImage\n\n\tutil.Testenv = env.NewWithConfig(envconf.New())\n\t// Create KinD Cluster\n\tutil.Testenv.Setup(\n\t\tenvfuncs.CreateKindClusterWithConfig(util.KindClusterName, util.NodeVersion, util.KindConfigPath),\n\t\tenvfuncs.CreateNamespace(util.TestNamespace),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.ManagerImage, util.ManagerTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.RemoverImage, util.RemoverTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.RemoverImage, util.RemoverTarballPath),\n\t\tutil.HelmDeployLatestEraserRelease(util.TestNamespace,\n\t\t\t\"--set\", util.ScannerEnable.Set(\"false\"),\n\t\t\t\"--set\", util.CollectorEnable.Set(\"false\"),\n\t\t\t\"--set\", util.RemoverImageRepo.Set(removerImage.Repo),\n\t\t\t\"--set\", util.RemoverImageTag.Set(removerImage.Tag),\n\t\t\t\"--set\", util.ManagerImageRepo.Set(managerImage.Repo),\n\t\t\t\"--set\", util.ManagerImageTag.Set(managerImage.Tag),\n\t\t),\n\t\tutil.UpgradeEraserHelm(util.TestNamespace,\n\t\t\t\"--set\", util.CollectorEnable.Set(\"false\"),\n\t\t\t\"--set\", util.ScannerEnable.Set(\"false\"),\n\t\t\t\"--set\", util.RemoverImageRepo.Set(removerImage.Repo),\n\t\t\t\"--set\", util.RemoverImageTag.Set(removerImage.Tag),\n\t\t\t\"--set\", util.ManagerImageRepo.Set(managerImage.Repo),\n\t\t\t\"--set\", util.ManagerImageTag.Set(managerImage.Tag),\n\t\t\t\"--set\", util.FilterNodesType.Set(\"include\"),\n\t\t\t\"--set\", util.CleanupOnSuccessDelay.Set(\"1m\"),\n\t\t),\n\t).Finish(\n\t\tenvfuncs.DestroyKindCluster(util.KindClusterName),\n\t)\n\tos.Exit(util.Testenv.Run(m))\n}\n"
  },
  {
    "path": "test/e2e/tests/imagelist_prune_images/eraser_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"context\"\n\t\"testing\"\n\n\teraserv1 \"github.com/eraser-dev/eraser/api/v1\"\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\tappsv1 \"k8s.io/api/apps/v1\"\n\tcorev1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\t\"k8s.io/apimachinery/pkg/labels\"\n\n\t\"sigs.k8s.io/e2e-framework/klient/wait\"\n\t\"sigs.k8s.io/e2e-framework/klient/wait/conditions\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/features\"\n)\n\nfunc TestPrune(t *testing.T) {\n\tpruneImagesFeat := features.New(\"Prune all non-running images from cluster\").\n\t\t// Deploy 3 deployments with different images\n\t\t// We'll shutdown two of them, run eraser with `*`, then check that the images for the removed deployments are removed from the cluster.\n\t\tSetup(func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tnginxDep := util.NewDeployment(cfg.Namespace(), util.Nginx, 2, map[string]string{\"app\": util.Nginx}, corev1.Container{Image: util.Nginx, Name: util.Nginx})\n\t\t\tif err := cfg.Client().Resources().Create(ctx, nginxDep); err != nil {\n\t\t\t\tt.Error(\"Failed to create the nginx dep\", err)\n\t\t\t}\n\n\t\t\tutil.NewDeployment(cfg.Namespace(), util.Redis, 2, map[string]string{\"app\": util.Redis}, corev1.Container{Image: util.Redis, Name: util.Redis})\n\t\t\terr := cfg.Client().Resources().Create(ctx, util.NewDeployment(cfg.Namespace(), util.Redis, 2, map[string]string{\"app\": util.Redis}, corev1.Container{Image: util.Redis, Name: util.Redis}))\n\t\t\tif err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\n\t\t\tutil.NewDeployment(cfg.Namespace(), util.Caddy, 2, map[string]string{\"app\": util.Caddy}, corev1.Container{Image: util.Caddy, Name: util.Caddy})\n\t\t\tif err := cfg.Client().Resources().Create(ctx, util.NewDeployment(cfg.Namespace(), util.Caddy, 2, map[string]string{\"app\": util.Caddy}, corev1.Container{Image: util.Caddy, Name: util.Caddy})); err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Deployments successfully deployed\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tclient, err := cfg.NewClient()\n\t\t\tif err != nil {\n\t\t\t\tt.Error(\"Failed to create new client\", err)\n\t\t\t}\n\n\t\t\tnginxDep := appsv1.Deployment{\n\t\t\t\tObjectMeta: metav1.ObjectMeta{Name: util.Nginx, Namespace: cfg.Namespace()},\n\t\t\t}\n\n\t\t\tif err = wait.For(conditions.New(client.Resources()).DeploymentConditionMatch(&nginxDep, appsv1.DeploymentAvailable, corev1.ConditionTrue),\n\t\t\t\twait.WithTimeout(util.Timeout)); err != nil {\n\t\t\t\tt.Fatal(\"nginx deployment not found\", err)\n\t\t\t}\n\t\t\tctx = context.WithValue(ctx, util.Nginx, &nginxDep)\n\n\t\t\tredisDep := appsv1.Deployment{\n\t\t\t\tObjectMeta: metav1.ObjectMeta{Name: util.Redis, Namespace: cfg.Namespace()},\n\t\t\t}\n\t\t\tif err = wait.For(conditions.New(client.Resources()).DeploymentConditionMatch(&redisDep, appsv1.DeploymentAvailable, corev1.ConditionTrue),\n\t\t\t\twait.WithTimeout(util.Timeout)); err != nil {\n\t\t\t\tt.Fatal(\"redis deployment not found\", err)\n\t\t\t}\n\t\t\tctx = context.WithValue(ctx, util.Redis, &redisDep)\n\n\t\t\tcaddyDep := appsv1.Deployment{\n\t\t\t\tObjectMeta: metav1.ObjectMeta{Name: util.Caddy, Namespace: cfg.Namespace()},\n\t\t\t}\n\t\t\tif err = wait.For(conditions.New(client.Resources()).DeploymentConditionMatch(&caddyDep, appsv1.DeploymentAvailable, 
corev1.ConditionTrue),\n\t\t\t\twait.WithTimeout(util.Timeout)); err != nil {\n\t\t\t\tt.Fatal(\"caddy deployment not found\", err)\n\t\t\t}\n\t\t\tctx = context.WithValue(ctx, util.Caddy, &caddyDep)\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Remove some of the deployments so the images aren't running\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\t// Here we remove the redis and caddy deployments\n\t\t\t// Keep nginx running and ensure nginx is not deleted.\n\t\t\tvar redisPods corev1.PodList\n\t\t\tif err := cfg.Client().Resources().List(ctx, &redisPods, func(o *metav1.ListOptions) {\n\t\t\t\to.LabelSelector = labels.SelectorFromSet(map[string]string{\"app\": util.Redis}).String()\n\t\t\t}); err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\t\t\tif len(redisPods.Items) != 2 {\n\t\t\t\tt.Fatal(\"missing pods in redis deployment\")\n\t\t\t}\n\n\t\t\tvar caddyPods corev1.PodList\n\t\t\tif err := cfg.Client().Resources().List(ctx, &caddyPods, func(o *metav1.ListOptions) {\n\t\t\t\to.LabelSelector = labels.SelectorFromSet(map[string]string{\"app\": util.Caddy}).String()\n\t\t\t}); err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\t\t\tif len(caddyPods.Items) != 2 {\n\t\t\t\tt.Fatal(\"missing pods in caddy deployment\")\n\t\t\t}\n\n\t\t\terr := cfg.Client().Resources().Delete(ctx, ctx.Value(util.Redis).(*appsv1.Deployment))\n\t\t\tif err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\t\t\terr = cfg.Client().Resources().Delete(ctx, ctx.Value(util.Caddy).(*appsv1.Deployment))\n\t\t\tif err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\n\t\t\tfor _, nodeName := range util.GetClusterNodes(t) {\n\t\t\t\terr := wait.For(util.ContainerNotPresentOnNode(nodeName, util.Redis), wait.WithTimeout(util.Timeout))\n\t\t\t\tif err != nil {\n\t\t\t\t\t// Let's not mark this as an error\n\t\t\t\t\t// We only have this to prevent race conditions with the eraser spinning up\n\t\t\t\t\tt.Logf(\"error while waiting for deployment deletion: %v\", err)\n\t\t\t\t}\n\t\t\t}\n\t\t\tfor _, nodeName := range util.GetClusterNodes(t) {\n\t\t\t\terr := wait.For(util.ContainerNotPresentOnNode(nodeName, util.Caddy), wait.WithTimeout(util.Timeout))\n\t\t\t\tif err != nil {\n\t\t\t\t\t// Let's not mark this as an error\n\t\t\t\t\t// We only have this to prevent race conditions with the eraser spinning up\n\t\t\t\t\tt.Logf(\"error while waiting for deployment deletion: %v\", err)\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"All non-running images are removed from the cluster\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\timgList := &eraserv1.ImageList{\n\t\t\t\tObjectMeta: metav1.ObjectMeta{Name: util.Prune},\n\t\t\t\tSpec: eraserv1.ImageListSpec{\n\t\t\t\t\tImages: []string{\"*\"},\n\t\t\t\t},\n\t\t\t}\n\n\t\t\tif err := cfg.Client().Resources().Create(ctx, imgList); err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\t\t\tctx = context.WithValue(ctx, util.Prune, imgList)\n\n\t\t\t// The first check could take some extra time, where as things should be done already for the 2nd check.\n\t\t\t// So we'll give plenty of time and fail slow here.\n\t\t\tctxT, cancel := context.WithTimeout(ctx, util.Timeout)\n\t\t\tdefer cancel()\n\t\t\tutil.CheckImageRemoved(ctxT, t, util.GetClusterNodes(t), util.Redis)\n\n\t\t\tctxT, cancel = context.WithTimeout(ctx, util.Timeout)\n\t\t\tdefer cancel()\n\t\t\tutil.CheckImageRemoved(ctxT, t, util.GetClusterNodes(t), util.Caddy)\n\n\t\t\t// Make sure nginx is still there\n\t\t\tutil.CheckImagesExist(t, util.GetClusterNodes(t), 
util.Nginx)\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Get logs\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tif err := util.GetPodLogs(t); err != nil {\n\t\t\t\tt.Error(\"error getting eraser pod logs\", err)\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tFeature()\n\n\tutil.Testenv.Test(t, pruneImagesFeat)\n}\n"
  },
  {
    "path": "test/e2e/tests/imagelist_prune_images/main_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"os\"\n\t\"testing\"\n\n\teraserv1 \"github.com/eraser-dev/eraser/api/v1\"\n\teraserv1alpha1 \"github.com/eraser-dev/eraser/api/v1alpha1\"\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\tutilruntime \"k8s.io/apimachinery/pkg/util/runtime\"\n\t\"k8s.io/client-go/kubernetes/scheme\"\n\t\"sigs.k8s.io/e2e-framework/pkg/env\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envfuncs\"\n)\n\nfunc TestMain(m *testing.M) {\n\tutilruntime.Must(eraserv1alpha1.AddToScheme(scheme.Scheme))\n\tutilruntime.Must(eraserv1.AddToScheme(scheme.Scheme))\n\n\tremoverImage := util.ParsedImages.RemoverImage\n\tmanagerImage := util.ParsedImages.ManagerImage\n\n\tutil.Testenv = env.NewWithConfig(envconf.New())\n\t// Create KinD Cluster\n\tutil.Testenv.Setup(\n\t\tenvfuncs.CreateKindClusterWithConfig(util.KindClusterName, util.NodeVersion, util.KindConfigPath),\n\t\tenvfuncs.CreateNamespace(util.TestNamespace),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.ManagerImage, util.ManagerTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.RemoverImage, util.RemoverTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.RemoverImage, util.RemoverTarballPath),\n\t\tutil.HelmDeployLatestEraserRelease(util.TestNamespace,\n\t\t\t\"--set\", util.ScannerEnable.Set(\"false\"),\n\t\t\t\"--set\", util.CollectorEnable.Set(\"false\"),\n\t\t\t\"--set\", util.RemoverImageRepo.Set(removerImage.Repo),\n\t\t\t\"--set\", util.RemoverImageTag.Set(removerImage.Tag),\n\t\t\t\"--set\", util.ManagerImageRepo.Set(managerImage.Repo),\n\t\t\t\"--set\", util.ManagerImageTag.Set(managerImage.Tag),\n\t\t),\n\t\tutil.UpgradeEraserHelm(util.TestNamespace,\n\t\t\t\"--set\", util.CollectorEnable.Set(\"false\"),\n\t\t\t\"--set\", util.ScannerEnable.Set(\"false\"),\n\t\t\t\"--set\", util.RemoverImageRepo.Set(removerImage.Repo),\n\t\t\t\"--set\", util.RemoverImageTag.Set(removerImage.Tag),\n\t\t\t\"--set\", util.ManagerImageRepo.Set(managerImage.Repo),\n\t\t\t\"--set\", util.ManagerImageTag.Set(managerImage.Tag),\n\t\t\t\"--set\", util.CleanupOnSuccessDelay.Set(\"1m\"),\n\t\t),\n\t).Finish(\n\t\tenvfuncs.DestroyKindCluster(util.KindClusterName),\n\t)\n\tos.Exit(util.Testenv.Run(m))\n}\n"
  },
  {
    "path": "test/e2e/tests/imagelist_rm_images/eraser_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"context\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\tappsv1 \"k8s.io/api/apps/v1\"\n\tcorev1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\t\"k8s.io/apimachinery/pkg/labels\"\n\n\t\"sigs.k8s.io/e2e-framework/klient/k8s/resources\"\n\t\"sigs.k8s.io/e2e-framework/klient/wait\"\n\t\"sigs.k8s.io/e2e-framework/klient/wait/conditions\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/features\"\n)\n\nconst (\n\trestartTimeout = time.Minute\n)\n\nfunc TestImageListTriggersRemoverImageJob(t *testing.T) {\n\trmImageFeat := features.New(\"An ImageList should trigger a remover ImageJob\").\n\t\tSetup(func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tpodSelectorLabels := map[string]string{\"app\": util.Nginx}\n\t\t\tnginxDep := util.NewDeployment(cfg.Namespace(), util.Nginx, 2, podSelectorLabels, corev1.Container{Image: util.Nginx, Name: util.Nginx})\n\t\t\tif err := cfg.Client().Resources().Create(ctx, nginxDep); err != nil {\n\t\t\t\tt.Error(\"Failed to create the dep\", err)\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"deployment successfully deployed\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tclient, err := cfg.NewClient()\n\t\t\tif err != nil {\n\t\t\t\tt.Error(\"Failed to create new client\", err)\n\t\t\t}\n\n\t\t\tresultDeployment := appsv1.Deployment{\n\t\t\t\tObjectMeta: metav1.ObjectMeta{Name: util.Nginx, Namespace: cfg.Namespace()},\n\t\t\t}\n\n\t\t\tif err = wait.For(conditions.New(client.Resources()).DeploymentConditionMatch(&resultDeployment, appsv1.DeploymentAvailable, corev1.ConditionTrue),\n\t\t\t\twait.WithTimeout(util.Timeout)); err != nil {\n\t\t\t\tt.Error(\"deployment not found\", err)\n\t\t\t}\n\n\t\t\treturn context.WithValue(ctx, util.Nginx, &resultDeployment)\n\t\t}).\n\t\tAssess(\"Images successfully deleted from all nodes\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\t// delete deployment\n\t\t\tclient, err := cfg.NewClient()\n\t\t\tif err != nil {\n\t\t\t\tt.Error(\"Failed to create new client\", err)\n\t\t\t}\n\n\t\t\tvar pods corev1.PodList\n\t\t\terr = client.Resources().List(ctx, &pods, func(o *metav1.ListOptions) {\n\t\t\t\to.LabelSelector = labels.SelectorFromSet(labels.Set{\"app\": util.Nginx}).String()\n\t\t\t})\n\t\t\tif err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\n\t\t\tdep := ctx.Value(util.Nginx).(*appsv1.Deployment)\n\t\t\tif err := client.Resources().Delete(ctx, dep); err != nil {\n\t\t\t\tt.Error(\"Failed to delete the dep\", err)\n\t\t\t}\n\n\t\t\tfor _, nodeName := range util.GetClusterNodes(t) {\n\t\t\t\terr := wait.For(util.ContainerNotPresentOnNode(nodeName, util.Nginx), wait.WithTimeout(util.Timeout))\n\t\t\t\tif err != nil {\n\t\t\t\t\t// Let's not mark this as an error\n\t\t\t\t\t// We only have this to prevent race conditions with the eraser spinning up\n\t\t\t\t\tt.Logf(\"error while waiting for deployment deletion: %v\", err)\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t// deploy imageJob config\n\t\t\tif err := util.DeployEraserConfig(cfg.KubeconfigFile(), cfg.Namespace(), util.EraserV1Alpha1ImagelistPath); err != nil {\n\t\t\t\tt.Error(\"Failed to deploy image list config\", err)\n\t\t\t}\n\n\t\t\tpodNames := []string{}\n\t\t\t// get eraser pod name\n\t\t\terr = wait.For(func() (bool, error) {\n\t\t\t\tl := corev1.PodList{}\n\t\t\t\terr = 
client.Resources().List(ctx, &l, resources.WithLabelSelector(util.ImageJobTypeLabelKey+\"=\"+util.ManualLabel))\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn false, err\n\t\t\t\t}\n\n\t\t\t\tif len(l.Items) != 3 {\n\t\t\t\t\treturn false, nil\n\t\t\t\t}\n\n\t\t\t\tfor _, pod := range l.Items {\n\t\t\t\t\tpodNames = append(podNames, pod.ObjectMeta.Name)\n\t\t\t\t}\n\t\t\t\treturn true, nil\n\t\t\t}, wait.WithTimeout(time.Minute*2), wait.WithInterval(time.Millisecond*500))\n\t\t\tif err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\n\t\t\t// wait for those specific pods to no longer exist, so that when we\n\t\t\t// check later for an accidental redeployment, we are sure it is\n\t\t\t// actually a new deployment.\n\t\t\terr = wait.For(func() (bool, error) {\n\t\t\t\tvar l corev1.PodList\n\t\t\t\terr = client.Resources().List(ctx, &l, resources.WithLabelSelector(util.ImageJobTypeLabelKey+\"=\"+util.ManualLabel))\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn false, err\n\t\t\t\t}\n\n\t\t\t\tif len(l.Items) == 0 {\n\t\t\t\t\treturn true, nil\n\t\t\t\t}\n\n\t\t\t\tfor _, name := range podNames {\n\t\t\t\t\tfor _, pod := range l.Items {\n\t\t\t\t\t\tif name == pod.ObjectMeta.Name {\n\t\t\t\t\t\t\treturn false, nil\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\treturn true, nil\n\t\t\t}, wait.WithTimeout(util.Timeout), wait.WithInterval(time.Millisecond*500))\n\t\t\tif err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\t\t\tt.Logf(\"initial eraser deployment cleaned up\")\n\n\t\t\tctxT, cancel := context.WithTimeout(ctx, time.Minute*3)\n\t\t\tdefer cancel()\n\t\t\tutil.CheckImageRemoved(ctxT, t, util.GetClusterNodes(t), util.Nginx)\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Eraser job was not restarted\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\t// until a timeout is reached, make sure there are no pods matching\n\t\t\t// the selector eraser.sh/type=manual\n\t\t\tclient := cfg.Client()\n\t\t\tctxT2, cancel := context.WithTimeout(ctx, restartTimeout)\n\t\t\tdefer cancel()\n\t\t\tutil.CheckDeploymentCleanedUp(ctxT2, t, client)\n\n\t\t\treturn ctx\n\t\t}).\n\t\tFeature()\n\n\tutil.Testenv.Test(t, rmImageFeat)\n}\n"
  },
  {
    "path": "test/e2e/tests/imagelist_rm_images/main_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"os\"\n\t\"testing\"\n\n\teraserv1alpha1 \"github.com/eraser-dev/eraser/api/v1alpha1\"\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\tutilruntime \"k8s.io/apimachinery/pkg/util/runtime\"\n\t\"k8s.io/client-go/kubernetes/scheme\"\n\t\"sigs.k8s.io/e2e-framework/pkg/env\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envfuncs\"\n)\n\nfunc TestMain(m *testing.M) {\n\tutilruntime.Must(eraserv1alpha1.AddToScheme(scheme.Scheme))\n\n\tremoverImage := util.ParsedImages.RemoverImage\n\tmanagerImage := util.ParsedImages.ManagerImage\n\tcollectorImage := util.ParsedImages.CollectorImage\n\n\tutil.Testenv = env.NewWithConfig(envconf.New())\n\t// Create KinD Cluster\n\tutil.Testenv.Setup(\n\t\tenvfuncs.CreateKindClusterWithConfig(util.KindClusterName, util.NodeVersion, util.KindConfigPath),\n\t\tenvfuncs.CreateNamespace(util.TestNamespace),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.ManagerImage, util.ManagerTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.RemoverImage, util.RemoverTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.RemoverImage, util.RemoverTarballPath),\n\t\tutil.HelmDeployLatestEraserRelease(util.TestNamespace,\n\t\t\t\"--set\", util.ScannerEnable.Set(\"false\"),\n\t\t\t\"--set\", util.CollectorEnable.Set(\"false\"),\n\t\t\t\"--set\", util.RemoverImageRepo.Set(removerImage.Repo),\n\t\t\t\"--set\", util.RemoverImageTag.Set(removerImage.Tag),\n\t\t\t\"--set\", util.ManagerImageRepo.Set(managerImage.Repo),\n\t\t\t\"--set\", util.ManagerImageTag.Set(managerImage.Tag),\n\t\t),\n\t\tutil.UpgradeEraserHelm(util.TestNamespace,\n\t\t\t\"--set\", util.ScannerEnable.Set(\"false\"),\n\t\t\t\"--set\", util.CollectorEnable.Set(\"true\"),\n\t\t\t\"--set\", util.CollectorImageRepo.Set(collectorImage.Repo),\n\t\t\t\"--set\", util.CollectorImageTag.Set(collectorImage.Tag),\n\t\t\t\"--set\", util.RemoverImageRepo.Set(removerImage.Repo),\n\t\t\t\"--set\", util.RemoverImageTag.Set(removerImage.Tag),\n\t\t\t\"--set\", util.ManagerImageRepo.Set(managerImage.Repo),\n\t\t\t\"--set\", util.ManagerImageTag.Set(managerImage.Tag),\n\t\t\t\"--set\", util.ScheduleImmediate.Set(\"false\"),\n\t\t),\n\t).Finish(\n\t\tenvfuncs.DestroyKindCluster(util.KindClusterName),\n\t)\n\tos.Exit(util.Testenv.Run(m))\n}\n"
  },
  {
    "path": "test/e2e/tests/imagelist_skip_nodes/eraser_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"context\"\n\t\"testing\"\n\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\tappsv1 \"k8s.io/api/apps/v1\"\n\tcorev1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\t\"k8s.io/apimachinery/pkg/labels\"\n\tclientgo \"k8s.io/client-go/kubernetes\"\n\n\t\"sigs.k8s.io/e2e-framework/klient/wait\"\n\t\"sigs.k8s.io/e2e-framework/klient/wait/conditions\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/features\"\n)\n\nfunc TestSkipNodes(t *testing.T) {\n\tskipNodesFeat := features.New(\"Applying the eraser.sh/cleanup.filter label to a node should prevent ImageJob pods from being scheduled on that node\").\n\t\tSetup(func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\t// fetch node info\n\t\t\tc := cfg.Client().RESTConfig()\n\t\t\tk8sClient, err := clientgo.NewForConfig(c)\n\t\t\tif err != nil {\n\t\t\t\tt.Error(\"unable to obtain k8s client from config\", err)\n\t\t\t}\n\n\t\t\tpodSelectorLabels := map[string]string{\"app\": util.Nginx}\n\t\t\tnginxDep := util.NewDeployment(cfg.Namespace(), util.Nginx, 2, podSelectorLabels, corev1.Container{Image: util.Nginx, Name: util.Nginx})\n\t\t\tif err := cfg.Client().Resources().Create(ctx, nginxDep); err != nil {\n\t\t\t\tt.Error(\"Failed to create the dep\", err)\n\t\t\t}\n\n\t\t\tnodeList, err := k8sClient.CoreV1().Nodes().List(ctx, metav1.ListOptions{LabelSelector: util.FilterNodeSelector})\n\t\t\tif err != nil {\n\t\t\t\tt.Errorf(\"unable to list node %s\\n%#v\", util.FilterNodeSelector, err)\n\t\t\t}\n\n\t\t\tif len(nodeList.Items) != 1 {\n\t\t\t\tt.Errorf(\"List operation for selector %s resulted in the wrong number of nodes\", util.FilterNodeSelector)\n\t\t\t}\n\n\t\t\tnodeToSkip := &nodeList.Items[0]\n\t\t\tnodeToSkip.ObjectMeta.Labels[util.FilterLabelKey] = util.FilterLabelValue\n\n\t\t\tnodeToSkip, err = k8sClient.CoreV1().Nodes().Update(ctx, nodeToSkip, metav1.UpdateOptions{})\n\t\t\tif err != nil {\n\t\t\t\tt.Errorf(\"unable to update node %#v with label {%s: %s}\\nerror: %#v\", nodeToSkip, util.FilterLabelKey, util.FilterLabelValue, err)\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Deployment and labelling the node have succeeded\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tc := cfg.Client().RESTConfig()\n\t\t\tk8sClient, err := clientgo.NewForConfig(c)\n\t\t\tif err != nil {\n\t\t\t\tt.Error(\"unable to obtain k8s client from config\", err)\n\t\t\t}\n\n\t\t\terr = wait.For(func() (bool, error) {\n\t\t\t\tnodeList, err := k8sClient.CoreV1().Nodes().List(ctx, metav1.ListOptions{LabelSelector: util.FilterLabelKey})\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn false, err\n\t\t\t\t}\n\n\t\t\t\treturn len(nodeList.Items) == 1, nil\n\t\t\t}, wait.WithTimeout(util.Timeout))\n\t\t\tif err != nil {\n\t\t\t\tt.Errorf(\"error while waiting for selector%s to be added to node\\n%#v\", util.FilterNodeSelector, err)\n\t\t\t}\n\n\t\t\tresultDeployment := appsv1.Deployment{\n\t\t\t\tObjectMeta: metav1.ObjectMeta{Name: util.Nginx, Namespace: cfg.Namespace()},\n\t\t\t}\n\n\t\t\tif err = wait.For(\n\t\t\t\tconditions.New(cfg.Client().Resources()).DeploymentConditionMatch(&resultDeployment, appsv1.DeploymentAvailable, corev1.ConditionTrue),\n\t\t\t\twait.WithTimeout(util.Timeout),\n\t\t\t); err != nil {\n\t\t\t\tt.Error(\"deployment not found\", err)\n\t\t\t}\n\n\t\t\treturn context.WithValue(ctx, util.Nginx, 
&resultDeployment)\n\t\t}).\n\t\tAssess(\"Node(s) successfully skipped\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\t// delete deployment\n\t\t\tclient, err := cfg.NewClient()\n\t\t\tif err != nil {\n\t\t\t\tt.Error(\"Failed to create new client\", err)\n\t\t\t}\n\n\t\t\tvar pods corev1.PodList\n\t\t\terr = client.Resources().List(ctx, &pods, func(o *metav1.ListOptions) {\n\t\t\t\to.LabelSelector = labels.SelectorFromSet(labels.Set{\"app\": util.Nginx}).String()\n\t\t\t})\n\t\t\tif err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\n\t\t\tdep := ctx.Value(util.Nginx).(*appsv1.Deployment)\n\t\t\tif err := client.Resources().Delete(ctx, dep); err != nil {\n\t\t\t\tt.Error(\"Failed to delete the dep\", err)\n\t\t\t}\n\n\t\t\tclusterNodes := util.GetClusterNodes(t)\n\t\t\tclusterNodes = util.DeleteStringFromSlice(clusterNodes, util.FilterNodeName)\n\n\t\t\tfor _, nodeName := range clusterNodes {\n\t\t\t\terr := wait.For(util.ContainerNotPresentOnNode(nodeName, util.Nginx), wait.WithTimeout(util.Timeout))\n\t\t\t\tif err != nil {\n\t\t\t\t\t// Let's not mark this as an error\n\t\t\t\t\t// We only have this to prevent race conditions with the eraser spinning up\n\t\t\t\t\tt.Logf(\"error while waiting for deployment deletion: %v\", err)\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t// deploy imageJob config\n\t\t\tif err := util.DeployEraserConfig(cfg.KubeconfigFile(), cfg.Namespace(), util.EraserV1Alpha1ImagelistPath); err != nil {\n\t\t\t\tt.Error(\"Failed to deploy image list config\", err)\n\t\t\t}\n\n\t\t\tctxT, cancel := context.WithTimeout(ctx, util.Timeout)\n\t\t\tdefer cancel()\n\n\t\t\t// ensure images are removed from all nodes except the one we are skipping. remove the node we are skipping from the list of nodes.\n\t\t\tutil.CheckImageRemoved(ctxT, t, clusterNodes, util.Nginx)\n\n\t\t\t// get pod logs before imagejob is deleted\n\t\t\tif err := util.GetPodLogs(t); err != nil {\n\t\t\t\tt.Error(\"error getting eraser pod logs\", err)\n\t\t\t}\n\n\t\t\t// Wait for the imagejob to be completed by checking for its nonexistence in the cluster\n\t\t\terr = wait.For(util.ImagejobNotInCluster(cfg.KubeconfigFile()), wait.WithTimeout(util.Timeout))\n\t\t\tif err != nil {\n\t\t\t\tt.Logf(\"error while waiting for imagejob cleanup: %v\", err)\n\t\t\t}\n\n\t\t\t// the imagejob has done its work, so now we can check the node to make sure it didn't remove the image\n\t\t\tutil.CheckImagesExist(t, []string{util.FilterNodeName}, util.Nginx)\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Get logs\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tif err := util.GetPodLogs(t); err != nil {\n\t\t\t\tt.Error(\"error getting eraser pod logs\", err)\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tFeature()\n\n\tutil.Testenv.Test(t, skipNodesFeat)\n}\n"
  },
  {
    "path": "test/e2e/tests/imagelist_skip_nodes/main_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"os\"\n\t\"testing\"\n\n\teraserv1alpha1 \"github.com/eraser-dev/eraser/api/v1alpha1\"\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\tutilruntime \"k8s.io/apimachinery/pkg/util/runtime\"\n\t\"k8s.io/client-go/kubernetes/scheme\"\n\t\"sigs.k8s.io/e2e-framework/pkg/env\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envfuncs\"\n)\n\nfunc TestMain(m *testing.M) {\n\tutilruntime.Must(eraserv1alpha1.AddToScheme(scheme.Scheme))\n\n\tremoverImage := util.ParsedImages.RemoverImage\n\tmanagerImage := util.ParsedImages.ManagerImage\n\n\tutil.Testenv = env.NewWithConfig(envconf.New())\n\t// Create KinD Cluster\n\tutil.Testenv.Setup(\n\t\tenvfuncs.CreateKindClusterWithConfig(util.KindClusterName, util.NodeVersion, util.KindConfigPath),\n\t\tenvfuncs.CreateNamespace(util.TestNamespace),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.ManagerImage, util.ManagerTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.RemoverImage, util.RemoverTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.RemoverImage, util.RemoverTarballPath),\n\t\tutil.HelmDeployLatestEraserRelease(util.TestNamespace,\n\t\t\t\"--set\", util.ScannerEnable.Set(\"false\"),\n\t\t\t\"--set\", util.CollectorEnable.Set(\"false\"),\n\t\t\t\"--set\", util.RemoverImageRepo.Set(removerImage.Repo),\n\t\t\t\"--set\", util.RemoverImageTag.Set(removerImage.Tag),\n\t\t\t\"--set\", util.ManagerImageRepo.Set(managerImage.Repo),\n\t\t\t\"--set\", util.ManagerImageTag.Set(managerImage.Tag),\n\t\t),\n\t\tutil.UpgradeEraserHelm(util.TestNamespace,\n\t\t\t\"--set\", util.CollectorEnable.Set(\"false\"),\n\t\t\t\"--set\", util.ScannerEnable.Set(\"false\"),\n\t\t\t\"--set\", util.RemoverImageRepo.Set(removerImage.Repo),\n\t\t\t\"--set\", util.RemoverImageTag.Set(removerImage.Tag),\n\t\t\t\"--set\", util.ManagerImageRepo.Set(managerImage.Repo),\n\t\t\t\"--set\", util.ManagerImageTag.Set(managerImage.Tag),\n\t\t\t\"--set\", util.CleanupOnSuccessDelay.Set(\"1m\"),\n\t\t),\n\t).Finish(\n\t\tenvfuncs.DestroyKindCluster(util.KindClusterName),\n\t)\n\tos.Exit(util.Testenv.Run(m))\n}\n"
  },
  {
    "path": "test/e2e/tests/metrics_test_disable_scanner/eraser_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"context\"\n\t\"regexp\"\n\t\"strconv\"\n\t\"testing\"\n\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/features\"\n)\n\nconst (\n\texpectedImagesRemoved = 3\n)\n\nfunc TestMetricsWithScannerDisabled(t *testing.T) {\n\tmetrics := features.New(\"Images_removed_run_total metric should report 1\").\n\t\tAssess(\"Alpine image is removed\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tctxT, cancel := context.WithTimeout(ctx, util.Timeout)\n\t\t\tdefer cancel()\n\t\t\tutil.CheckImageRemoved(ctxT, t, util.GetClusterNodes(t), util.VulnerableImage)\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Get logs\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tif err := util.GetPodLogs(t); err != nil {\n\t\t\t\tt.Error(\"error getting eraser pod logs\", err)\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Check images_removed_run_total metric\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tif _, err := util.KubectlCurlPod(cfg.KubeconfigFile(), cfg.Namespace()); err != nil {\n\t\t\t\tt.Error(err, \"error running curl pod\")\n\t\t\t}\n\n\t\t\tif _, err := util.KubectlWait(cfg.KubeconfigFile(), \"temp\", cfg.Namespace()); err != nil {\n\t\t\t\tt.Error(err, \"error waiting for temp curl pod\")\n\t\t\t}\n\n\t\t\toutput, err := util.KubectlExecCurl(cfg.KubeconfigFile(), \"temp\", \"http://otel-collector/metrics\", cfg.Namespace())\n\t\t\tif err != nil {\n\t\t\t\tt.Error(err, \"error with otlp curl request\")\n\t\t\t}\n\n\t\t\tr := regexp.MustCompile(`images_removed_run_total{job=\"remover\",node_name=\".+\"} (\\d+)`)\n\t\t\tresults := r.FindAllStringSubmatch(output, -1)\n\n\t\t\ttotalRemoved := 0\n\t\t\tfor i := range results {\n\t\t\t\tval, _ := strconv.Atoi(results[i][1])\n\t\t\t\ttotalRemoved += val\n\t\t\t}\n\n\t\t\tif totalRemoved < expectedImagesRemoved {\n\t\t\t\tt.Error(\"images_removed_run_total incorrect, expected \", expectedImagesRemoved, \"got\", totalRemoved)\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Get logs\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tif err := util.GetPodLogs(t); err != nil {\n\t\t\t\tt.Error(\"error getting eraser pod logs\", err)\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tFeature()\n\n\tutil.Testenv.Test(t, metrics)\n}\n"
  },
  {
    "path": "test/e2e/tests/metrics_test_disable_scanner/main_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"os\"\n\t\"testing\"\n\n\teraserv1alpha1 \"github.com/eraser-dev/eraser/api/v1alpha1\"\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\tutilruntime \"k8s.io/apimachinery/pkg/util/runtime\"\n\t\"k8s.io/client-go/kubernetes/scheme\"\n\t\"sigs.k8s.io/e2e-framework/pkg/env\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envfuncs\"\n)\n\nfunc TestMain(m *testing.M) {\n\tutilruntime.Must(eraserv1alpha1.AddToScheme(scheme.Scheme))\n\n\tremoverImage := util.ParsedImages.RemoverImage\n\tmanagerImage := util.ParsedImages.ManagerImage\n\tcollectorImg := util.ParsedImages.CollectorImage\n\n\tutil.Testenv = env.NewWithConfig(envconf.New())\n\t// Create KinD Cluster\n\tutil.Testenv.Setup(\n\t\tenvfuncs.CreateKindClusterWithConfig(util.KindClusterName, util.NodeVersion, util.KindConfigPath),\n\t\tenvfuncs.CreateNamespace(util.TestNamespace),\n\t\tutil.DeployOtelCollector(util.TestNamespace),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.ManagerImage, util.ManagerTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.CollectorImage, util.CollectorTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.RemoverImage, util.RemoverTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.VulnerableImage, \"\"),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.RemoverImage, util.RemoverTarballPath),\n\t\tutil.HelmDeployLatestEraserRelease(util.TestNamespace,\n\t\t\t\"--set\", util.ScannerEnable.Set(\"false\"),\n\t\t\t\"--set\", util.CollectorEnable.Set(\"false\"),\n\t\t\t\"--set\", util.RemoverImageRepo.Set(removerImage.Repo),\n\t\t\t\"--set\", util.RemoverImageTag.Set(removerImage.Tag),\n\t\t\t\"--set\", util.ManagerImageRepo.Set(managerImage.Repo),\n\t\t\t\"--set\", util.ManagerImageTag.Set(managerImage.Tag),\n\t\t),\n\t\tutil.UpgradeEraserHelm(util.TestNamespace,\n\t\t\t\"--set\", util.ScannerEnable.Set(\"false\"),\n\t\t\t\"--set\", util.CollectorEnable.Set(\"true\"),\n\t\t\t\"--set\", util.CollectorImageRepo.Set(collectorImg.Repo),\n\t\t\t\"--set\", util.CollectorImageTag.Set(collectorImg.Tag),\n\t\t\t\"--set\", util.RemoverImageRepo.Set(removerImage.Repo),\n\t\t\t\"--set\", util.RemoverImageTag.Set(removerImage.Tag),\n\t\t\t\"--set\", util.ManagerImageRepo.Set(managerImage.Repo),\n\t\t\t\"--set\", util.ManagerImageTag.Set(managerImage.Tag),\n\t\t\t\"--set\", util.OTLPEndpoint.Set(\"otel-collector:4318\"),\n\t\t\t\"--set\", util.CleanupOnSuccessDelay.Set(\"1m\"),\n\t\t),\n\t).Finish(\n\t\tenvfuncs.DestroyKindCluster(util.KindClusterName),\n\t)\n\tos.Exit(util.Testenv.Run(m))\n}\n"
  },
  {
    "path": "test/e2e/tests/metrics_test_eraser/eraser_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"context\"\n\t\"regexp\"\n\t\"strconv\"\n\t\"testing\"\n\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/features\"\n)\n\nconst (\n\texpectedImagesRemoved = 3\n)\n\nfunc TestMetricsEraserOnly(t *testing.T) {\n\tmetrics := features.New(\"Images_removed_run_total metric should report 1\").\n\t\tAssess(\"Alpine image is removed\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\t// deploy imagelist config\n\t\t\tif err := util.DeployEraserConfig(cfg.KubeconfigFile(), cfg.Namespace(), util.ImagelistAlpinePath); err != nil {\n\t\t\t\tt.Error(\"Failed to deploy image list config\", err)\n\t\t\t}\n\n\t\t\tctxT, cancel := context.WithTimeout(ctx, util.Timeout)\n\t\t\tdefer cancel()\n\t\t\tutil.CheckImageRemoved(ctxT, t, util.GetClusterNodes(t), util.VulnerableImage)\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Get logs\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tif err := util.GetPodLogs(t); err != nil {\n\t\t\t\tt.Error(\"error getting eraser pod logs\", err)\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Check images_removed_run_total metric\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tif _, err := util.KubectlCurlPod(cfg.KubeconfigFile(), cfg.Namespace()); err != nil {\n\t\t\t\tt.Error(err, \"error running curl pod\")\n\t\t\t}\n\n\t\t\tif _, err := util.KubectlWait(cfg.KubeconfigFile(), \"temp\", cfg.Namespace()); err != nil {\n\t\t\t\tt.Error(err, \"error waiting for temp curl pod\")\n\t\t\t}\n\n\t\t\toutput, err := util.KubectlExecCurl(cfg.KubeconfigFile(), \"temp\", \"http://otel-collector/metrics\", cfg.Namespace())\n\t\t\tif err != nil {\n\t\t\t\tt.Error(err, \"error with otlp curl request\")\n\t\t\t}\n\n\t\t\tr := regexp.MustCompile(`images_removed_run_total{job=\"remover\",node_name=\".+\"} (\\d+)`)\n\t\t\tresults := r.FindAllStringSubmatch(output, -1)\n\n\t\t\ttotalRemoved := 0\n\t\t\tfor i := range results {\n\t\t\t\tval, _ := strconv.Atoi(results[i][1])\n\t\t\t\ttotalRemoved += val\n\t\t\t}\n\n\t\t\tif totalRemoved != expectedImagesRemoved {\n\t\t\t\tt.Error(\"images_removed_run_total incorrect, expected \", expectedImagesRemoved, \"got\", totalRemoved)\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Get logs\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tif err := util.GetPodLogs(t); err != nil {\n\t\t\t\tt.Error(\"error getting eraser pod logs\", err)\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tFeature()\n\n\tutil.Testenv.Test(t, metrics)\n}\n"
  },
  {
    "path": "test/e2e/tests/metrics_test_eraser/main_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"os\"\n\t\"testing\"\n\n\teraserv1alpha1 \"github.com/eraser-dev/eraser/api/v1alpha1\"\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\tutilruntime \"k8s.io/apimachinery/pkg/util/runtime\"\n\t\"k8s.io/client-go/kubernetes/scheme\"\n\t\"sigs.k8s.io/e2e-framework/pkg/env\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envfuncs\"\n)\n\nfunc TestMain(m *testing.M) {\n\tutilruntime.Must(eraserv1alpha1.AddToScheme(scheme.Scheme))\n\n\tremoverImage := util.ParsedImages.RemoverImage\n\tmanagerImage := util.ParsedImages.ManagerImage\n\n\tutil.Testenv = env.NewWithConfig(envconf.New())\n\t// Create KinD Cluster\n\tutil.Testenv.Setup(\n\t\tenvfuncs.CreateKindClusterWithConfig(util.KindClusterName, util.NodeVersion, util.KindConfigPath),\n\t\tenvfuncs.CreateNamespace(util.TestNamespace),\n\t\tutil.DeployOtelCollector(util.TestNamespace),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.ManagerImage, util.ManagerTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.RemoverImage, util.RemoverTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.VulnerableImage, \"\"),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.RemoverImage, util.RemoverTarballPath),\n\t\tutil.HelmDeployLatestEraserRelease(util.TestNamespace,\n\t\t\t\"--set\", util.ScannerEnable.Set(\"false\"),\n\t\t\t\"--set\", util.CollectorEnable.Set(\"false\"),\n\t\t\t\"--set\", util.RemoverImageRepo.Set(removerImage.Repo),\n\t\t\t\"--set\", util.RemoverImageTag.Set(removerImage.Tag),\n\t\t\t\"--set\", util.ManagerImageRepo.Set(managerImage.Repo),\n\t\t\t\"--set\", util.ManagerImageTag.Set(managerImage.Tag),\n\t\t),\n\t\tutil.UpgradeEraserHelm(util.TestNamespace,\n\t\t\t\"--set\", util.CollectorEnable.Set(\"false\"),\n\t\t\t\"--set\", util.ScannerEnable.Set(\"false\"),\n\t\t\t\"--set\", util.RemoverImageRepo.Set(removerImage.Repo),\n\t\t\t\"--set\", util.RemoverImageTag.Set(removerImage.Tag),\n\t\t\t\"--set\", util.ManagerImageRepo.Set(managerImage.Repo),\n\t\t\t\"--set\", util.ManagerImageTag.Set(managerImage.Tag),\n\t\t\t\"--set\", util.OTLPEndpoint.Set(\"otel-collector:4318\"),\n\t\t\t\"--set\", util.CleanupOnSuccessDelay.Set(\"1m\"),\n\t\t),\n\t).Finish(\n\t\tenvfuncs.DestroyKindCluster(util.KindClusterName),\n\t)\n\tos.Exit(util.Testenv.Run(m))\n}\n"
  },
  {
    "path": "test/e2e/tests/metrics_test_scanner/eraser_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"context\"\n\t\"regexp\"\n\t\"strconv\"\n\t\"testing\"\n\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/features\"\n)\n\nconst (\n\texpectedVulnerableImages = 3\n)\n\nfunc TestMetricsWithScanner(t *testing.T) {\n\tmetrics := features.New(\"Images_removed_run_total and vulnerable_images_run_total metrics should report >= 3\").\n\t\tAssess(\"Alpine image is removed\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tctxT, cancel := context.WithTimeout(ctx, util.Timeout)\n\t\t\tdefer cancel()\n\t\t\tutil.CheckImageRemoved(ctxT, t, util.GetClusterNodes(t), util.VulnerableImage)\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Get logs\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tif err := util.GetPodLogs(t); err != nil {\n\t\t\t\tt.Error(\"error getting eraser pod logs\", err)\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Check images_removed_run_total metric\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tif _, err := util.KubectlCurlPod(cfg.KubeconfigFile(), cfg.Namespace()); err != nil {\n\t\t\t\tt.Error(err, \"error running curl pod\")\n\t\t\t}\n\n\t\t\tif _, err := util.KubectlWait(cfg.KubeconfigFile(), \"temp\", cfg.Namespace()); err != nil {\n\t\t\t\tt.Error(err, \"error waiting for temp curl pod\")\n\t\t\t}\n\n\t\t\toutput, err := util.KubectlExecCurl(cfg.KubeconfigFile(), \"temp\", \"http://otel-collector/metrics\", cfg.Namespace())\n\t\t\tif err != nil {\n\t\t\t\tt.Error(err, \"error with otlp curl request\")\n\t\t\t}\n\n\t\t\tr := regexp.MustCompile(`images_removed_run_total{job=\"remover\",node_name=\".+\"} (\\d+)`)\n\t\t\tresults := r.FindAllStringSubmatch(output, -1)\n\n\t\t\ttotalRemoved := 0\n\t\t\tfor i := range results {\n\t\t\t\tval, _ := strconv.Atoi(results[i][1])\n\t\t\t\ttotalRemoved += val\n\t\t\t}\n\n\t\t\tif totalRemoved < 3 {\n\t\t\t\tt.Error(\"images_removed_run_total incorrect, expected 3, got\", totalRemoved)\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Check vulnerable_images_run_total metric\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\toutput, err := util.KubectlExecCurl(cfg.KubeconfigFile(), \"temp\", \"http://otel-collector/metrics\", cfg.Namespace())\n\t\t\tif err != nil {\n\t\t\t\tt.Error(err, \"error with otlp curl request\")\n\t\t\t}\n\n\t\t\tr := regexp.MustCompile(`vulnerable_images_run_total{job=\"trivy-scanner\",node_name=\".+\"} (\\d+)`)\n\t\t\tresults := r.FindAllStringSubmatch(output, -1)\n\n\t\t\ttotalVulnerable := 0\n\t\t\tfor i := range results {\n\t\t\t\tval, _ := strconv.Atoi(results[i][1])\n\t\t\t\ttotalVulnerable += val\n\t\t\t}\n\n\t\t\tif totalVulnerable < expectedVulnerableImages {\n\t\t\t\tt.Error(\"vulnerable_images_run_total incorrect, expected \", expectedVulnerableImages, \"got\", totalVulnerable)\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tAssess(\"Get logs\", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {\n\t\t\tif err := util.GetPodLogs(t); err != nil {\n\t\t\t\tt.Error(\"error getting eraser pod logs\", err)\n\t\t\t}\n\n\t\t\treturn ctx\n\t\t}).\n\t\tFeature()\n\n\tutil.Testenv.Test(t, metrics)\n}\n"
  },
  {
    "path": "test/e2e/tests/metrics_test_scanner/main_test.go",
    "content": "//go:build e2e\n// +build e2e\n\npackage e2e\n\nimport (\n\t\"os\"\n\t\"testing\"\n\n\teraserv1alpha1 \"github.com/eraser-dev/eraser/api/v1alpha1\"\n\t\"github.com/eraser-dev/eraser/test/e2e/util\"\n\tutilruntime \"k8s.io/apimachinery/pkg/util/runtime\"\n\t\"k8s.io/client-go/kubernetes/scheme\"\n\t\"sigs.k8s.io/e2e-framework/pkg/env\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envfuncs\"\n)\n\nfunc TestMain(m *testing.M) {\n\tutilruntime.Must(eraserv1alpha1.AddToScheme(scheme.Scheme))\n\n\tremoverImage := util.ParsedImages.RemoverImage\n\tmanagerImage := util.ParsedImages.ManagerImage\n\tcollectorImage := util.ParsedImages.CollectorImage\n\tscannerImage := util.ParsedImages.ScannerImage\n\n\tutil.Testenv = env.NewWithConfig(envconf.New())\n\t// Create KinD Cluster\n\tutil.Testenv.Setup(\n\t\tenvfuncs.CreateKindClusterWithConfig(util.KindClusterName, util.NodeVersion, util.KindConfigPath),\n\t\tenvfuncs.CreateNamespace(util.TestNamespace),\n\t\tutil.DeployOtelCollector(util.TestNamespace),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.ManagerImage, util.ManagerTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.RemoverImage, util.RemoverTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.CollectorImage, util.CollectorTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.ScannerImage, util.ScannerTarballPath),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.VulnerableImage, \"\"),\n\t\tutil.LoadImageToCluster(util.KindClusterName, util.RemoverImage, util.RemoverTarballPath),\n\t\tutil.HelmDeployLatestEraserRelease(util.TestNamespace,\n\t\t\t\"--set\", util.ScannerEnable.Set(\"false\"),\n\t\t\t\"--set\", util.CollectorEnable.Set(\"false\"),\n\t\t\t\"--set\", util.RemoverImageRepo.Set(removerImage.Repo),\n\t\t\t\"--set\", util.RemoverImageTag.Set(removerImage.Tag),\n\t\t\t\"--set\", util.ManagerImageRepo.Set(managerImage.Repo),\n\t\t\t\"--set\", util.ManagerImageTag.Set(managerImage.Tag),\n\t\t),\n\t\tutil.UpgradeEraserHelm(util.TestNamespace,\n\t\t\t\"--set\", util.OTLPEndpoint.Set(\"otel-collector:4318\"),\n\t\t\t\"--set\", util.ScannerEnable.Set(\"true\"),\n\t\t\t\"--set\", util.CollectorEnable.Set(\"true\"),\n\t\t\t\"--set\", util.CollectorImageRepo.Set(collectorImage.Repo),\n\t\t\t\"--set\", util.CollectorImageTag.Set(collectorImage.Tag),\n\t\t\t\"--set\", util.ScannerImageRepo.Set(scannerImage.Repo),\n\t\t\t\"--set\", util.ScannerImageTag.Set(scannerImage.Tag),\n\t\t\t\"--set\", util.RemoverImageRepo.Set(removerImage.Repo),\n\t\t\t\"--set\", util.RemoverImageTag.Set(removerImage.Tag),\n\t\t\t\"--set\", util.ManagerImageRepo.Set(managerImage.Repo),\n\t\t\t\"--set\", util.ManagerImageTag.Set(managerImage.Tag),\n\t\t\t\"--set\", util.CleanupOnSuccessDelay.Set(\"1m\"),\n\t\t),\n\t).Finish(\n\t\tenvfuncs.DestroyKindCluster(util.KindClusterName),\n\t)\n\tos.Exit(util.Testenv.Run(m))\n}\n"
  },
  {
    "path": "test/e2e/util/kubectl.go",
    "content": "// https://raw.githubusercontent.com/Azure/secrets-store-csi-driver-provider-azure/master/test/e2e/framework/exec/kubectl.go\npackage util\n\nimport (\n\t\"fmt\"\n\t\"os/exec\"\n\t\"strings\"\n\n\tklog \"k8s.io/klog/v2\"\n)\n\n// KubectlApply executes \"kubectl apply\" given a list of arguments.\nfunc KubectlApply(kubeconfigPath, namespace string, args []string) error {\n\targs = append([]string{\n\t\t\"apply\",\n\t\tfmt.Sprintf(\"--kubeconfig=%s\", kubeconfigPath),\n\t\tfmt.Sprintf(\"--namespace=%s\", namespace),\n\t}, args...)\n\n\t_, err := Kubectl(args)\n\treturn err\n}\n\n// HelmInstall executes \"helm install\" given a list of arguments.\nfunc HelmInstall(kubeconfigPath, namespace string, args []string) error {\n\targs = append([]string{\n\t\t\"install\",\n\t\t\"eraser-e2e-test\",\n\t\t\"--wait\",\n\t\t\"--debug\",\n\t\t\"--create-namespace\",\n\t\tfmt.Sprintf(\"--kubeconfig=%s\", kubeconfigPath),\n\t\tfmt.Sprintf(\"--namespace=%s\", namespace),\n\t}, args...)\n\n\t_, err := Helm(args)\n\treturn err\n}\n\n// HelmUpgrade executes \"helm upgrade\" given a list of arguments.\nfunc HelmUpgrade(kubeconfigPath, namespace string, args []string) error {\n\targs = append([]string{\n\t\t\"upgrade\",\n\t\t\"eraser-e2e-test\",\n\t\t\"--wait\",\n\t\t\"--debug\",\n\t\tfmt.Sprintf(\"--kubeconfig=%s\", kubeconfigPath),\n\t\tfmt.Sprintf(\"--namespace=%s\", namespace),\n\t}, args...)\n\n\t_, err := Helm(args)\n\treturn err\n}\n\n// HelmUninstall executes \"helm uninstall\" given a list of arguments.\nfunc HelmUninstall(kubeconfigPath, namespace string, args []string) error {\n\targs = append([]string{\n\t\t\"uninstall\",\n\t\t\"eraser-e2e-test\",\n\t\tfmt.Sprintf(\"--kubeconfig=%s\", kubeconfigPath),\n\t\tfmt.Sprintf(\"--namespace=%s\", namespace),\n\t}, args...)\n\n\t_, err := Helm(args)\n\treturn err\n}\n\n// KubectlDelete executes \"kubectl delete\" given a list of arguments.\nfunc KubectlDelete(kubeconfigPath, namespace string, args []string) error {\n\targs = append([]string{\n\t\t\"delete\",\n\t\tfmt.Sprintf(\"--kubeconfig=%s\", kubeconfigPath),\n\t\tfmt.Sprintf(\"--namespace=%s\", namespace),\n\t}, args...)\n\n\t_, err := Kubectl(args)\n\treturn err\n}\n\nfunc KubectlExecCurl(kubeconfigPath, podName string, endpoint, namespace string) (string, error) {\n\targs := []string{\n\t\t\"exec\",\n\t\t\"-i\",\n\t\tpodName,\n\t\t\"-n\",\n\t\tnamespace,\n\t\t\"--kubeconfig\",\n\t\tkubeconfigPath,\n\t\t\"--\",\n\t\t\"curl\",\n\t\tendpoint,\n\t}\n\n\treturn Kubectl(args)\n}\n\nfunc KubectlWait(kubeconfigPath, podName, namespace string) (string, error) {\n\targs := []string{\n\t\t\"wait\",\n\t\t\"--for=condition=Ready\",\n\t\tfmt.Sprintf(\"--kubeconfig=%s\", kubeconfigPath),\n\t\t\"--timeout=120s\",\n\t\t\"pod\",\n\t\tpodName,\n\t\t\"-n\",\n\t\tnamespace,\n\t}\n\n\treturn Kubectl(args)\n}\n\n// KubectlLogs executes \"kubectl logs\" given a list of arguments.\nfunc KubectlLogs(kubeconfigPath, podName, containerName, namespace string, extraArgs ...string) (string, error) {\n\targs := []string{\n\t\t\"logs\",\n\t\tfmt.Sprintf(\"--kubeconfig=%s\", kubeconfigPath),\n\t\tfmt.Sprintf(\"--namespace=%s\", namespace),\n\t\tpodName,\n\t}\n\n\tif containerName != \"\" {\n\t\targs = append(args, fmt.Sprintf(\"-c=%s\", containerName))\n\t}\n\n\targs = append(args, extraArgs...)\n\n\treturn Kubectl(args)\n}\n\n// KubectlDescribe executes \"kubectl describe\" given a list of arguments.\nfunc KubectlDescribe(kubeconfigPath, podName, namespace string) (string, error) {\n\targs := 
[]string{\n\t\t\"describe\",\n\t\t\"pod\",\n\t\tpodName,\n\t\tfmt.Sprintf(\"--kubeconfig=%s\", kubeconfigPath),\n\t\tfmt.Sprintf(\"--namespace=%s\", namespace),\n\t}\n\treturn Kubectl(args)\n}\n\n// KubectlCurlPod starts a long-running pod named \"temp\" using the curlimages/curl image, for making HTTP requests from inside the cluster.\nfunc KubectlCurlPod(kubeconfigPath, namespace string) (string, error) {\n\targs := []string{\n\t\t\"run\",\n\t\t\"temp\",\n\t\t\"-n\",\n\t\tnamespace,\n\t\t\"--image\",\n\t\t\"curlimages/curl\",\n\t\t\"--kubeconfig\",\n\t\tkubeconfigPath,\n\t\t\"--\",\n\t\t\"tail\",\n\t\t\"-f\",\n\t\t\"/dev/null\",\n\t}\n\treturn Kubectl(args)\n}\n\n// KubectlGet executes \"kubectl get\" given a list of arguments.\nfunc KubectlGet(kubeconfigPath string, otherArgs ...string) (string, error) {\n\targs := []string{\n\t\tfmt.Sprintf(\"--kubeconfig=%s\", kubeconfigPath),\n\t\t\"get\",\n\t}\n\targs = append(args, otherArgs...)\n\n\treturn Kubectl(args)\n}\n\nfunc Kubectl(args []string) (string, error) {\n\tklog.Infof(\"kubectl %s\", strings.Join(args, \" \"))\n\n\tcmd := exec.Command(\"kubectl\", args...)\n\n\tstdoutStderr, err := cmd.CombinedOutput()\n\toutput := strings.TrimSpace(string(stdoutStderr))\n\tif err != nil {\n\t\terr = fmt.Errorf(\"%w: %s\", err, output)\n\t}\n\n\treturn output, err\n}\n\nfunc KubectlBackground(args []string) error {\n\tklog.Infof(\"kubectl %s\", strings.Join(args, \" \"))\n\n\tcmd := exec.Command(\"kubectl\", args...)\n\n\tif err := cmd.Start(); err != nil {\n\t\treturn fmt.Errorf(\"failed to start cmd: %v\", err)\n\t}\n\n\treturn nil\n}\n\nfunc Helm(args []string) (string, error) {\n\tklog.Infof(\"helm %s\", strings.Join(args, \" \"))\n\n\tcmd := exec.Command(\"helm\", args...)\n\n\tstdoutStderr, err := cmd.CombinedOutput()\n\toutput := strings.TrimSpace(string(stdoutStderr))\n\tif err != nil {\n\t\terr = fmt.Errorf(\"%w: %s\", err, output)\n\t}\n\n\treturn output, err\n}\n"
  },
  {
    "path": "test/e2e/util/utils.go",
    "content": "package util\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"os\"\n\t\"os/exec\"\n\t\"path/filepath\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n\n\tappsv1 \"k8s.io/api/apps/v1\"\n\tcorev1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\tklog \"k8s.io/klog/v2\"\n\t\"oras.land/oras-go/pkg/registry\"\n\t\"sigs.k8s.io/controller-runtime/pkg/client\"\n\t\"sigs.k8s.io/e2e-framework/klient\"\n\t\"sigs.k8s.io/e2e-framework/klient/k8s/resources\"\n\t\"sigs.k8s.io/e2e-framework/klient/wait\"\n\t\"sigs.k8s.io/e2e-framework/klient/wait/conditions\"\n\t\"sigs.k8s.io/e2e-framework/pkg/env\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envfuncs\"\n\t\"sigs.k8s.io/kind/pkg/cluster\"\n\n\teraserv1 \"github.com/eraser-dev/eraser/api/v1\"\n\n\tpkgUtil \"github.com/eraser-dev/eraser/pkg/utils\"\n)\n\nconst (\n\tproviderResourceChartDir  = \"manifest_staging/charts\"\n\tproviderResourceDeployDir = \"manifest_staging/deploy\"\n\tpublishedHelmRepo         = \"https://eraser-dev.github.io/eraser/charts\"\n\n\tKindClusterName  = \"eraser-e2e-test\"\n\tProviderResource = \"eraser.yaml\"\n\n\tAlpine        = \"alpine\"\n\tNginx         = \"nginx\"\n\tNginxLatest   = \"ghcr.io/eraser-dev/eraser/e2e-test/nginx:latest\"\n\tNginxAliasOne = \"ghcr.io/eraser-dev/eraser/e2e-test/nginx:one\"\n\tNginxAliasTwo = \"ghcr.io/eraser-dev/eraser/e2e-test/nginx:two\"\n\tRedis         = \"redis\"\n\tCaddy         = \"caddy\"\n\n\tImageCollectorShared = \"imagecollector-shared\"\n\tPrune                = \"imagelist\"\n\tImagePullSecret      = \"testsecret\"\n\tFilterNodeName       = \"eraser-e2e-test-worker\"\n\tFilterNodeSelector   = \"kubernetes.io/hostname=eraser-e2e-test-worker\"\n\tFilterLabelKey       = \"eraser.sh/cleanup.filter\"\n\tFilterLabelValue     = \"true\"\n)\n\nconst (\n\tCollectorEnable    = HelmPath(\"runtimeConfig.components.collector.enabled\")\n\tCollectorImageRepo = HelmPath(\"runtimeConfig.components.collector.image.repo\")\n\tCollectorImageTag  = HelmPath(\"runtimeConfig.components.collector.image.tag\")\n\n\tScannerConfig    = HelmPath(\"runtimeConfig.components.scanner.config\")\n\tScannerEnable    = HelmPath(\"runtimeConfig.components.scanner.enabled\")\n\tScannerImageRepo = HelmPath(\"runtimeConfig.components.scanner.image.repo\")\n\tScannerImageTag  = HelmPath(\"runtimeConfig.components.scanner.image.tag\")\n\n\tRemoverImageRepo = HelmPath(\"runtimeConfig.components.remover.image.repo\")\n\tRemoverImageTag  = HelmPath(\"runtimeConfig.components.remover.image.tag\")\n\n\tManagerImageRepo = HelmPath(\"deploy.image.repo\")\n\tManagerImageTag  = HelmPath(\"deploy.image.tag\")\n\n\tImagePullSecrets = HelmPath(\"runtimeConfig.manager.pullSecrets\")\n\tOTLPEndpoint     = HelmPath(\"runtimeConfig.manager.otlpEndpoint\")\n\n\tCleanupOnSuccessDelay = HelmPath(\"runtimeConfig.manager.imageJob.cleanup.delayOnSuccess\")\n\tFilterNodesType       = HelmPath(\"runtimeConfig.manager.nodeFilter.type\")\n\tScheduleImmediate     = HelmPath(\"runtimeConfig.manager.scheduling.beginImmediately\")\n\n\tCustomRuntimeAddress = HelmPath(\"runtimeConfig.manager.runtime.address\")\n\tCustomRuntimeName    = HelmPath(\"runtimeConfig.manager.runtime.name\")\n\n\tCollectorLabel       = \"collector\"\n\tManualLabel          = \"manual\"\n\tImageJobTypeLabelKey = \"eraser.sh/type\"\n\tManagerLabelKey      = \"control-plane\"\n\tManagerLabelValue    = \"controller-manager\"\n)\n\nvar (\n\tTestenv             env.Environment\n\tRemoverImage        = 
os.Getenv(\"REMOVER_IMAGE\")\n\tManagerImage        = os.Getenv(\"MANAGER_IMAGE\")\n\tCollectorImage      = os.Getenv(\"COLLECTOR_IMAGE\")\n\tScannerImage        = os.Getenv(\"SCANNER_IMAGE\")\n\tVulnerableImage     = os.Getenv(\"VULNERABLE_IMAGE\")\n\tNonVulnerableImage  = os.Getenv(\"NON_VULNERABLE_IMAGE\")\n\tEOLImage            = os.Getenv(\"EOL_IMAGE\")\n\tBusyboxImage        = os.Getenv(\"BUSYBOX_IMAGE\")\n\tCollectorDummyImage = os.Getenv(\"COLLECTOR_IMAGE_DUMMY\")\n\n\tRemoverTarballPath   = os.Getenv(\"REMOVER_TARBALL_PATH\")\n\tManagerTarballPath   = os.Getenv(\"MANAGER_TARBALL_PATH\")\n\tCollectorTarballPath = os.Getenv(\"COLLECTOR_TARBALL_PATH\")\n\tScannerTarballPath   = os.Getenv(\"SCANNER_TARBALL_PATH\")\n\n\tProjectAbsDir                      = os.Getenv(\"PROJECT_ABSOLUTE_PATH\")\n\tE2EPath                            = filepath.Join(ProjectAbsDir, \"test\", \"e2e\")\n\tTestDataPath                       = filepath.Join(E2EPath, \"test-data\")\n\tKindConfigPath                     = filepath.Join(E2EPath, \"kind-config.yaml\")\n\tKindConfigCustomRuntimePath        = filepath.Join(E2EPath, \"kind-config-custom-runtime.yaml\")\n\tHelmEmptyValuesPath                = filepath.Join(TestDataPath, \"helm-empty-values.yaml\")\n\tChartPath                          = filepath.Join(ProjectAbsDir, providerResourceChartDir)\n\tDeployPath                         = filepath.Join(ProjectAbsDir, providerResourceDeployDir)\n\tOTELCollectorConfigPath            = filepath.Join(TestDataPath, \"otelcollector.yaml\")\n\tEraserV1Alpha1ImagelistUpdatedPath = filepath.Join(TestDataPath, \"eraser_v1alpha1_imagelist_updated.yaml\")\n\tEraserV1Alpha1ImagelistPath        = filepath.Join(TestDataPath, \"eraser_v1alpha1_imagelist.yaml\")\n\tEraserV1ImagelistPath              = filepath.Join(TestDataPath, \"eraser_v1_imagelist.yaml\")\n\tImagelistAlpinePath                = filepath.Join(TestDataPath, \"imagelist_alpine.yaml\")\n\n\tNodeVersion       = os.Getenv(\"NODE_VERSION\")\n\tModifiedNodeImage = os.Getenv(\"MODIFIED_NODE_IMAGE\")\n\tTestNamespace     = envconf.RandomName(\"test-ns\", 16)\n\tEraserNamespace   = pkgUtil.GetNamespace()\n\tTestLogDir        = os.Getenv(\"TEST_LOGDIR\")\n\n\tParsedImages        *Images\n\tTimeout             = time.Minute * 20\n\tImagePullSecretJSON = fmt.Sprintf(`[\"%s\"]`, ImagePullSecret)\n\n\tScannerConfigNoDeleteFailedJSON = `\"{ \\\"cacheDir\\\": \\\"/var/lib/trivy\\\", \\\"dbRepo\\\": \\\"ghcr.io/aquasecurity/trivy-db\\\", \\\"deleteFailedImages\\\": false, \\\"deleteEOLImages\\\": true, \\\"vulnerabilities\\\": null, \\\"ignoreUnfixed\\\": true, \\\"types\\\": [ \\\"os\\\", \\\"library\\\" ], \\\"securityChecks\\\": [ \\\"vuln\\\" ], \\\"severities\\\": [ \\\"CRITICAL\\\", \\\"HIGH\\\", \\\"MEDIUM\\\", \\\"LOW\\\" ] }\"`\n\n\tManagerAdditionalArgs = HelmSet{\n\t\tkey:  \"controllerManager.additionalArgs\",\n\t\targs: []string{\"--delete-scan-failed-images=false\"},\n\t}\n)\n\ntype (\n\tRepoTag struct {\n\t\tRepo string\n\t\tTag  string\n\t}\n\n\tImages struct {\n\t\tCollectorImage RepoTag\n\t\tRemoverImage   RepoTag\n\t\tManagerImage   RepoTag\n\t\tScannerImage   RepoTag\n\t}\n\n\tHelmPath string\n\n\tHelmSet struct {\n\t\tkey  string\n\t\targs []string\n\t}\n)\n\nfunc (hp HelmPath) Set(val string) string {\n\treturn fmt.Sprintf(\"%s=%s\", hp, val)\n}\n\nfunc (hs *HelmSet) Set(val ...string) *HelmSet {\n\ths.args = append(hs.args, val...)\n\treturn hs\n}\n\nfunc (hs *HelmSet) String() string {\n\treturn fmt.Sprintf(\"%s={%s}\", hs.key, strings.Join(hs.args, 
\",\"))\n}\n\nfunc init() {\n\tvar err error\n\tParsedImages, err = parsedImages(RemoverImage, ManagerImage, CollectorImage, ScannerImage)\n\tif err != nil {\n\t\tklog.Error(err)\n\t\tpanic(err)\n\t}\n}\n\nfunc toRepoTag(ref registry.Reference) RepoTag {\n\tvar repoTag RepoTag\n\n\trepoTag.Repo = fmt.Sprintf(\"%s/%s\", ref.Registry, ref.Repository)\n\tif repoTag.Repo == \"/\" {\n\t\trepoTag.Repo = \"\"\n\t}\n\n\trepoTag.Tag = ref.Reference\n\treturn repoTag\n}\n\nfunc parsedImages(removerImage, managerImage, collectorImage, scannerImage string) (*Images, error) {\n\tremoverRepoTag, err := parseRepoTag(removerImage)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tcollectorRepoTag, err := parseRepoTag(collectorImage)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tmanagerRepoTag, err := parseRepoTag(managerImage)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tscannerRepoTag, err := parseRepoTag(scannerImage)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn &Images{\n\t\tCollectorImage: collectorRepoTag,\n\t\tRemoverImage:   removerRepoTag,\n\t\tManagerImage:   managerRepoTag,\n\t\tScannerImage:   scannerRepoTag,\n\t}, nil\n}\n\nfunc parseRepoTag(img string) (RepoTag, error) {\n\tif img == \"\" {\n\t\treturn RepoTag{}, nil\n\t}\n\n\tref, err := registry.ParseReference(img)\n\tif err == nil {\n\t\treturn toRepoTag(ref), nil\n\t}\n\n\t// if true, this is an \"unpublished\" image, without a registry\n\tif parts := strings.Split(img, \"/\"); len(parts) == 1 {\n\t\t// the parser doesn't like unpublished images, so supply a dummy registry and pass it back to the parser\n\t\tvar result registry.Reference\n\t\tresult, err = registry.ParseReference(fmt.Sprintf(\"dummy.co/%s\", img))\n\t\tif err == nil {\n\t\t\treturn RepoTag{\n\t\t\t\t// the registry info is discarded since it was a dummy registry\n\t\t\t\tRepo: result.Repository,\n\t\t\t\tTag:  result.Reference,\n\t\t\t}, nil\n\t\t}\n\t}\n\n\treturn RepoTag{}, err\n}\n\nfunc LoadImageToCluster(clusterName, imageRef, tarballPath string) env.Func {\n\tif strings.HasSuffix(tarballPath, \".tar\") {\n\t\treturn envfuncs.LoadImageArchiveToCluster(clusterName, tarballPath)\n\t}\n\n\treturn envfuncs.LoadDockerImageToCluster(clusterName, imageRef)\n}\n\nfunc HelmDeployLatestEraserRelease(namespace string, extraArgs ...string) env.Func {\n\treturn func(ctx context.Context, cfg *envconf.Config) (context.Context, error) {\n\t\tif os.Getenv(\"HELM_UPGRADE_TEST\") == \"\" {\n\t\t\treturn ctx, nil\n\t\t}\n\n\t\tscriptTemplate := `\n            helm repo add eraser '%[1]s'\n            helm repo update\n        `\n\n\t\tscript := fmt.Sprintf(scriptTemplate, publishedHelmRepo)\n\t\t//nolint:gosec // G204: Subprocess execution is intended for e2e test setup\n\t\taddEraserRepoCmd := exec.Command(\"bash\", \"-ec\", script)\n\n\t\tif _, err := addEraserRepoCmd.CombinedOutput(); err != nil {\n\t\t\treturn ctx, err\n\t\t}\n\n\t\tallArgs := []string{\"-f\", HelmEmptyValuesPath}\n\t\tallArgs = append(allArgs, \"eraser/eraser\")\n\t\tallArgs = append(allArgs, extraArgs...)\n\n\t\tif err := HelmInstall(cfg.KubeconfigFile(), namespace, allArgs); err != nil {\n\t\t\treturn ctx, err\n\t\t}\n\n\t\tclient, err := cfg.NewClient()\n\t\tif err != nil {\n\t\t\tklog.ErrorS(err, \"Failed to create new Client\")\n\t\t\treturn ctx, err\n\t\t}\n\n\t\t// wait for the deployment to finish becoming available\n\t\teraserManagerDep := appsv1.Deployment{\n\t\t\tObjectMeta: metav1.ObjectMeta{Name: \"eraser-controller-manager\", Namespace: namespace},\n\t\t}\n\n\t\tif err := 
wait.For(conditions.New(client.Resources()).DeploymentConditionMatch(&eraserManagerDep, appsv1.DeploymentAvailable, corev1.ConditionTrue),\n\t\t\twait.WithTimeout(Timeout)); err != nil {\n\t\t\tklog.ErrorS(err, \"failed to deploy eraser manager\")\n\n\t\t\treturn ctx, err\n\t\t}\n\n\t\treturn ctx, nil\n\t}\n}\n\nfunc IsNotFound(err error) bool {\n\treturn err != nil && client.IgnoreNotFound(err) == nil\n}\n\nfunc NewDeployment(namespace, name string, replicas int32, labels map[string]string, containers ...corev1.Container) *appsv1.Deployment {\n\tif len(containers) == 0 {\n\t\tcontainers = []corev1.Container{\n\t\t\t{Image: \"nginx\", Name: \"nginx\"},\n\t\t}\n\t}\n\treturn &appsv1.Deployment{\n\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\tName:      name,\n\t\t\tNamespace: namespace,\n\t\t},\n\t\tSpec: appsv1.DeploymentSpec{\n\t\t\tReplicas: &replicas,\n\t\t\tSelector: &metav1.LabelSelector{\n\t\t\t\tMatchLabels: labels,\n\t\t\t},\n\t\t\tTemplate: corev1.PodTemplateSpec{\n\t\t\t\tObjectMeta: metav1.ObjectMeta{Labels: labels},\n\t\t\t\tSpec: corev1.PodSpec{\n\t\t\t\t\tAffinity: &corev1.Affinity{\n\t\t\t\t\t\tPodAntiAffinity: &corev1.PodAntiAffinity{\n\t\t\t\t\t\t\tRequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tLabelSelector: &metav1.LabelSelector{\n\t\t\t\t\t\t\t\t\t\tMatchLabels: labels,\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\tTopologyKey: \"kubernetes.io/hostname\",\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tContainers: containers,\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t}\n}\n\nfunc NewPod(namespace, image, name, nodeName string) *corev1.Pod {\n\treturn &corev1.Pod{\n\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\tName:      name,\n\t\t\tNamespace: namespace,\n\t\t},\n\t\tSpec: corev1.PodSpec{\n\t\t\tNodeName: nodeName,\n\t\t\tContainers: []corev1.Container{\n\t\t\t\t{\n\t\t\t\t\tName:  name,\n\t\t\t\t\tImage: image,\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t}\n}\n\n// deploy eraser config.\nfunc DeployEraserConfig(kubeConfig, namespace, fileName string) error {\n\terrApply := KubectlApply(kubeConfig, namespace, []string{\"-f\", fileName})\n\tif errApply != nil {\n\t\treturn errApply\n\t}\n\n\treturn nil\n}\n\nfunc NumPodsPresentForLabel(ctx context.Context, client klient.Client, num int, label string) func() (bool, error) {\n\treturn func() (bool, error) {\n\t\tvar pods corev1.PodList\n\t\terr := client.Resources().List(ctx, &pods, resources.WithLabelSelector(label))\n\t\tif err != nil {\n\t\t\treturn false, err\n\t\t}\n\n\t\treturn len(pods.Items) == num, nil\n\t}\n}\n\nfunc ContainerNotPresentOnNode(nodeName, containerName string) func() (bool, error) {\n\treturn func() (bool, error) {\n\t\toutput, err := ListNodeContainers(nodeName)\n\t\tif err != nil {\n\t\t\treturn false, err\n\t\t}\n\n\t\treturn !strings.Contains(output, containerName), nil\n\t}\n}\n\nfunc ImagejobNotInCluster(kubeconfigPath string) func() (bool, error) {\n\treturn func() (bool, error) {\n\t\toutput, err := KubectlGet(kubeconfigPath, \"imagejob\")\n\t\tif err != nil {\n\t\t\treturn false, err\n\t\t}\n\n\t\treturn strings.Contains(output, \"No resources\"), nil\n\t}\n}\n\nfunc GetImageJob(ctx context.Context, cfg *envconf.Config) (eraserv1.ImageJob, error) {\n\tc, err := cfg.NewClient()\n\tif err != nil {\n\t\treturn eraserv1.ImageJob{}, err\n\t}\n\n\tvar ls eraserv1.ImageJobList\n\terr = c.Resources().List(ctx, &ls)\n\tif err != nil {\n\t\treturn eraserv1.ImageJob{}, err\n\t}\n\n\tif len(ls.Items) != 1 {\n\t\treturn eraserv1.ImageJob{}, 
errors.New(\"only one imagejob should be present\")\n\t}\n\n\treturn ls.Items[0], nil\n}\n\nfunc ListNodeContainers(nodeName string) (string, error) {\n\targs := []string{\n\t\t\"exec\",\n\t\tnodeName,\n\t\t\"ctr\",\n\t\t\"-n\",\n\t\t\"k8s.io\",\n\t\t\"containers\",\n\t\t\"list\",\n\t}\n\n\t//nolint:gosec // G204: Docker subprocess execution is intended for e2e tests\n\tcmd := exec.Command(\"docker\", args...)\n\tstdoutStderr, err := cmd.CombinedOutput()\n\toutput := strings.TrimSpace(string(stdoutStderr))\n\tif err != nil {\n\t\terr = fmt.Errorf(\"%w: %s\", err, output)\n\t}\n\n\treturn output, err\n}\n\nfunc ListNodeImages(nodeName string) (string, error) {\n\targs := []string{\n\t\t\"exec\",\n\t\tnodeName,\n\t\t\"ctr\",\n\t\t\"-n\",\n\t\t\"k8s.io\",\n\t\t\"images\",\n\t\t\"list\",\n\t}\n\n\t//nolint:gosec // G204: Docker subprocess execution is intended for e2e tests\n\tcmd := exec.Command(\"docker\", args...)\n\tstdoutStderr, err := cmd.CombinedOutput()\n\toutput := strings.TrimSpace(string(stdoutStderr))\n\tif err != nil {\n\t\terr = fmt.Errorf(\"%w: %s\", err, output)\n\t}\n\n\treturn output, err\n}\n\n// This lists nodes in the cluster, filtering out the control-plane.\nfunc GetClusterNodes(t *testing.T) []string {\n\tt.Helper()\n\tprovider := cluster.NewProvider(cluster.ProviderWithDocker())\n\n\tnodeList, err := provider.ListNodes(KindClusterName)\n\tif err != nil {\n\t\tt.Fatal(\"Cannot list Kind node list\", err)\n\t}\n\tvar ourNodes []string\n\tfor i := range nodeList {\n\t\tn := nodeList[i].String()\n\t\tif !strings.Contains(n, \"control-plane\") {\n\t\t\tourNodes = append(ourNodes, n)\n\t\t}\n\t}\n\n\treturn ourNodes\n}\n\nfunc CheckImagesExist(t *testing.T, nodes []string, images ...string) {\n\tt.Helper()\n\n\tfor _, node := range nodes {\n\t\tnodeImages, err := ListNodeImages(node)\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Cannot list images on node %s: %v\", node, err)\n\t\t\tcontinue\n\t\t}\n\n\t\tfor _, image := range images {\n\t\t\tif !strings.Contains(nodeImages, image) {\n\t\t\t\tt.Errorf(\"image %s missing on node %s\", image, node)\n\t\t\t}\n\t\t}\n\t}\n}\n\nfunc CheckDeploymentCleanedUp(ctx context.Context, t *testing.T, client klient.Client) {\n\tt.Helper()\n\n\tfor {\n\t\tselect {\n\t\tcase <-ctx.Done():\n\t\t\treturn\n\t\tdefault:\n\t\t\tvar pods corev1.PodList\n\t\t\terr := client.Resources().List(ctx, &pods, resources.WithLabelSelector(ImageJobTypeLabelKey+\"=\"+ManualLabel))\n\t\t\tif err != nil {\n\t\t\t\tt.Fatalf(\"error listing images: %s\", err)\n\t\t\t}\n\n\t\t\tif len(pods.Items) > 0 {\n\t\t\t\tt.Errorf(\"imagejob got restarted when it shouldn't: %d manual pods still present\", len(pods.Items))\n\t\t\t\tt.FailNow()\n\t\t\t}\n\t\t}\n\t\ttime.Sleep(time.Second * 2)\n\t}\n}\n\nfunc CheckImageRemoved(ctx context.Context, t *testing.T, nodes []string, images ...string) {\n\tt.Helper()\n\n\tcleaned := make(map[string]bool)\n\tfor len(cleaned) < len(nodes) {\n\t\tselect {\n\t\tcase <-ctx.Done():\n\t\t\tt.Error(\"timeout waiting for images to be cleaned\")\n\t\t\treturn\n\t\tdefault:\n\t\t}\n\t\tfor _, node := range nodes {\n\t\t\tdone := cleaned[node]\n\t\t\tif done {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tnodeImages, err := ListNodeImages(node)\n\t\t\tif err != nil {\n\t\t\t\tt.Error(\"Cannot list images\", err)\n\t\t\t}\n\n\t\t\tvar found int\n\t\t\tfor _, img := range images {\n\t\t\t\tif !strings.Contains(nodeImages, img) {\n\t\t\t\t\tfound++\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif found == len(images) {\n\t\t\t\tcleaned[node] = 
true\n\t\t\t}\n\t\t}\n\t\ttime.Sleep(time.Second)\n\t}\n}\n\nfunc DockerPullImage(image string) (string, error) {\n\targs := []string{\"pull\", image}\n\t//nolint:gosec // G204: Docker subprocess execution is intended for e2e tests\n\tcmd := exec.Command(\"docker\", args...)\n\n\tstdoutStderr, err := cmd.CombinedOutput()\n\toutput := strings.TrimSpace(string(stdoutStderr))\n\tif err != nil {\n\t\terr = fmt.Errorf(\"%w: %s\", err, output)\n\t}\n\n\treturn output, err\n}\n\nfunc DockerTagImage(image, tag string) (string, error) {\n\targs := []string{\"tag\", image, tag}\n\t//nolint:gosec // G204: Docker subprocess execution is intended for e2e tests\n\tcmd := exec.Command(\"docker\", args...)\n\n\tstdoutStderr, err := cmd.CombinedOutput()\n\toutput := strings.TrimSpace(string(stdoutStderr))\n\tif err != nil {\n\t\terr = fmt.Errorf(\"%w: %s\", err, output)\n\t}\n\n\treturn output, err\n}\n\nfunc DeleteImageListsAndJobs(kubeConfig string) error {\n\tif err := KubectlDelete(kubeConfig, \"\", []string{\"imagejob\", \"--all\"}); err != nil {\n\t\treturn err\n\t}\n\treturn KubectlDelete(kubeConfig, \"\", []string{\"imagelist\", \"--all\"})\n}\n\nfunc DeleteStringFromSlice(slice []string, s string) []string {\n\tidx := -1\n\tfor i, cmp := range slice {\n\t\tif cmp == s {\n\t\t\tidx = i\n\t\t\tbreak\n\t\t}\n\t}\n\n\tif idx >= 0 {\n\t\tl := len(slice)\n\t\tslice[l-1], slice[idx] = slice[idx], slice[l-1]\n\t\treturn slice[:l-1]\n\t}\n\n\treturn slice\n}\n\nfunc DeployEraserHelm(namespace string, args ...string) env.Func {\n\treturn func(ctx context.Context, cfg *envconf.Config) (context.Context, error) {\n\t\tproviderResourceAbsolutePath := filepath.Join(ChartPath, \"eraser\")\n\n\t\t// start deployment\n\t\tallArgs := []string{providerResourceAbsolutePath, \"-f\", HelmEmptyValuesPath}\n\t\tallArgs = append(allArgs, args...)\n\t\tif err := HelmInstall(cfg.KubeconfigFile(), namespace, allArgs); err != nil {\n\t\t\treturn ctx, err\n\t\t}\n\n\t\tclient, err := cfg.NewClient()\n\t\tif err != nil {\n\t\t\tklog.ErrorS(err, \"Failed to create new Client\")\n\t\t\treturn ctx, err\n\t\t}\n\n\t\t// wait for the deployment to finish becoming available\n\t\teraserManagerDep := appsv1.Deployment{\n\t\t\tObjectMeta: metav1.ObjectMeta{Name: \"eraser-controller-manager\", Namespace: namespace},\n\t\t}\n\n\t\tif err = wait.For(conditions.New(client.Resources()).DeploymentConditionMatch(&eraserManagerDep, appsv1.DeploymentAvailable, corev1.ConditionTrue),\n\t\t\twait.WithTimeout(Timeout)); err != nil {\n\t\t\tklog.ErrorS(err, \"failed to deploy eraser manager\")\n\n\t\t\treturn ctx, err\n\t\t}\n\n\t\treturn ctx, nil\n\t}\n}\n\nfunc UpgradeEraserHelm(namespace string, args ...string) env.Func {\n\treturn func(ctx context.Context, cfg *envconf.Config) (context.Context, error) {\n\t\tproviderResourceAbsolutePath := filepath.Join(ChartPath, \"eraser\")\n\n\t\t// start deployment\n\t\tallArgs := []string{providerResourceAbsolutePath, \"-f\", HelmEmptyValuesPath}\n\t\tallArgs = append(allArgs, args...)\n\t\tif os.Getenv(\"HELM_UPGRADE_TEST\") == \"\" {\n\t\t\tallArgs = append(allArgs, \"--install\")\n\t\t}\n\n\t\tif err := HelmUpgrade(cfg.KubeconfigFile(), namespace, allArgs); err != nil {\n\t\t\treturn ctx, err\n\t\t}\n\n\t\tclient, err := cfg.NewClient()\n\t\tif err != nil {\n\t\t\tklog.ErrorS(err, \"Failed to create new Client\")\n\t\t\treturn ctx, err\n\t\t}\n\n\t\t// wait for the deployment to finish becoming 
available\n\t\teraserManagerDep := appsv1.Deployment{\n\t\t\tObjectMeta: metav1.ObjectMeta{Name: \"eraser-controller-manager\", Namespace: namespace},\n\t\t}\n\n\t\tif err = wait.For(conditions.New(client.Resources()).DeploymentConditionMatch(&eraserManagerDep, appsv1.DeploymentAvailable, corev1.ConditionTrue),\n\t\t\twait.WithTimeout(Timeout)); err != nil {\n\t\t\tklog.ErrorS(err, \"failed to upgrade eraser manager\")\n\n\t\t\treturn ctx, err\n\t\t}\n\n\t\treturn ctx, nil\n\t}\n}\n\nfunc DeployOtelCollector(namespace string) env.Func {\n\treturn func(ctx context.Context, cfg *envconf.Config) (context.Context, error) {\n\t\t// start otelcollector deployment\n\t\totelargs := []string{\"-f\", OTELCollectorConfigPath}\n\t\tif err := KubectlApply(cfg.KubeconfigFile(), namespace, otelargs); err != nil {\n\t\t\treturn ctx, err\n\t\t}\n\n\t\tclient, err := cfg.NewClient()\n\t\tif err != nil {\n\t\t\tklog.ErrorS(err, \"Failed to create new Client\")\n\t\t\treturn ctx, err\n\t\t}\n\n\t\t// wait for the deployment to finish becoming available\n\t\totelCollectorDep := appsv1.Deployment{\n\t\t\tObjectMeta: metav1.ObjectMeta{Name: \"otel-collector\", Namespace: namespace},\n\t\t}\n\n\t\tif err = wait.For(conditions.New(client.Resources()).DeploymentConditionMatch(&otelCollectorDep, appsv1.DeploymentAvailable, corev1.ConditionTrue),\n\t\t\twait.WithTimeout(Timeout)); err != nil {\n\t\t\tklog.ErrorS(err, \"failed to deploy otelcollector\")\n\n\t\t\treturn ctx, err\n\t\t}\n\n\t\treturn ctx, nil\n\t}\n}\n\nfunc GetPodLogs(t *testing.T) error {\n\tfor _, nodeName := range []string{\"eraser-e2e-test-control-plane\", \"eraser-e2e-test-worker\", \"eraser-e2e-test-worker2\"} {\n\t\ttestName := strings.Split(t.Name(), \"/\")[0]\n\t\tpath := filepath.Join(TestLogDir, testName, nodeName)\n\t\tif err := os.MkdirAll(path, 0o750); err != nil {\n\t\t\tt.Logf(\"error: %s\", err)\n\t\t\tcontinue\n\t\t}\n\n\t\tt.Logf(`docker cp %s:/var/log/containers %s`, nodeName, path)\n\t\tcmd := exec.Command(\"docker\", \"cp\", nodeName+\":/var/log/containers\", path) //nolint:gosec\n\t\toutput, err := cmd.CombinedOutput()\n\t\tif err != nil {\n\t\t\tt.Logf(\"error: %s\\n%s\", err, string(output))\n\t\t\tcontinue\n\t\t}\n\n\t\tt.Logf(`docker cp %s:/var/log/pods %s`, nodeName, path)\n\t\tcmd2 := exec.Command(\"docker\", \"cp\", nodeName+\":/var/log/pods\", path) //nolint:gosec\n\t\toutput, err = cmd2.CombinedOutput()\n\t\tif err != nil {\n\t\t\tt.Logf(\"error: %s\\n%s\", err, string(output))\n\t\t}\n\t}\n\n\treturn nil\n}\n\nfunc MakeDeploy(envVars map[string]string) env.Func {\n\treturn func(ctx context.Context, _ *envconf.Config) (context.Context, error) {\n\t\targs := []string{\"deploy\"}\n\t\tfor k, v := range envVars {\n\t\t\targs = append(args, fmt.Sprintf(\"%s=%s\", k, v))\n\t\t}\n\n\t\tcmd := exec.Command(\"make\", args...)\n\t\tcmd.Dir = ProjectAbsDir\n\n\t\tout, err := cmd.CombinedOutput()\n\t\tif err != nil {\n\t\t\tfmt.Fprint(os.Stderr, string(out))\n\t\t\treturn ctx, err\n\t\t}\n\n\t\tklog.Info(string(out))\n\n\t\treturn ctx, nil\n\t}\n}\n\nfunc DeployEraserManifest(namespace, fileName string) env.Func {\n\treturn func(ctx context.Context, cfg *envconf.Config) (context.Context, error) {\n\t\tif err := DeployEraserConfig(cfg.KubeconfigFile(), namespace, filepath.Join(DeployPath, fileName)); err != nil {\n\t\t\treturn ctx, err\n\t\t}\n\n\t\treturn ctx, nil\n\t}\n}\n\nfunc CreateExclusionList(namespace string, list string) env.Func {\n\treturn func(ctx context.Context, cfg *envconf.Config) (context.Context, error) 
{\n\t\tc, err := cfg.NewClient()\n\t\tif err != nil {\n\t\t\treturn ctx, err\n\t\t}\n\n\t\t// create an excluded configmap containing the caller-provided exclusion list\n\t\texcluded := corev1.ConfigMap{\n\t\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\t\tGenerateName: \"excluded\",\n\t\t\t\tNamespace:    namespace,\n\t\t\t\tLabels:       map[string]string{\"eraser.sh/exclude.list\": \"true\"},\n\t\t\t},\n\t\t\tData: map[string]string{\"excluded.json\": list},\n\t\t}\n\t\tif err := c.Resources().Create(ctx, &excluded); err != nil {\n\t\t\treturn ctx, err\n\t\t}\n\n\t\tcMap := corev1.ConfigMap{}\n\t\terr = wait.For(func() (bool, error) {\n\t\t\terr := c.Resources().Get(ctx, excluded.Name, namespace, &cMap)\n\t\t\tif IsNotFound(err) {\n\t\t\t\treturn false, nil\n\t\t\t}\n\n\t\t\tif err != nil {\n\t\t\t\treturn false, err\n\t\t\t}\n\n\t\t\tif cMap.Name == excluded.Name {\n\t\t\t\treturn true, nil\n\t\t\t}\n\n\t\t\treturn false, nil\n\t\t}, wait.WithTimeout(Timeout))\n\t\tif err != nil {\n\t\t\treturn ctx, err\n\t\t}\n\n\t\treturn ctx, nil\n\t}\n}\n"
  },
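  {
    "path": "test/e2e/util/example_setup_sketch_test.go",
    "content": "// Illustrative sketch, not part of the upstream suite: shows how the env.Func\n// helpers above (DeployEraserHelm, DeployOtelCollector, MakeDeploy, ...) are\n// meant to be composed with sigs.k8s.io/e2e-framework in a TestMain. The\n// module import path and the \"eraser-system\" namespace are assumptions made\n// for illustration only.\npackage util_test\n\nimport (\n\t\"os\"\n\t\"testing\"\n\n\t\"sigs.k8s.io/e2e-framework/pkg/env\"\n\t\"sigs.k8s.io/e2e-framework/pkg/envconf\"\n\n\tutil \"github.com/eraser-dev/eraser/test/e2e/util\"\n)\n\nvar testenv env.Environment\n\nfunc TestMain(m *testing.M) {\n\ttestenv = env.NewWithConfig(envconf.New())\n\n\t// Each helper returns an env.Func, so setup steps chain declaratively and\n\t// run in order before any test executes.\n\ttestenv.Setup(\n\t\tutil.DeployEraserHelm(\"eraser-system\"),\n\t)\n\n\tos.Exit(testenv.Run(m))\n}\n"
  },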
  {
    "path": "test/e2e/util/utils_test.go",
    "content": "package util\n\nimport (\n\t\"testing\"\n\n\t\"k8s.io/klog/v2\"\n)\n\nfunc TestParseRepoTag(t *testing.T) {\n\tcases := []struct {\n\t\tinput     string\n\t\texpected  RepoTag\n\t\texpectErr bool\n\t}{\n\t\t{\n\t\t\tinput: \"ghcr.io/repo/one/two:three\",\n\t\t\texpected: RepoTag{\n\t\t\t\tRepo: \"ghcr.io/repo/one/two\",\n\t\t\t\tTag:  \"three\",\n\t\t\t},\n\t\t\texpectErr: false,\n\t\t},\n\t\t{\n\t\t\tinput: \"ghcr.io/one:two\",\n\t\t\texpected: RepoTag{\n\t\t\t\tRepo: \"ghcr.io/one\",\n\t\t\t\tTag:  \"two\",\n\t\t\t},\n\t\t\texpectErr: false,\n\t\t},\n\t\t{\n\t\t\tinput: \"eraser:e2e-test\",\n\t\t\texpected: RepoTag{\n\t\t\t\tRepo: \"eraser\",\n\t\t\t\tTag:  \"e2e-test\",\n\t\t\t},\n\t\t\texpectErr: false,\n\t\t},\n\t\t{\n\t\t\tinput: \"eraser@sha256:4dca0fd5f424a31b03ab807cbae77eb32bf2d089eed1cee154b3afed458de0dc\",\n\t\t\texpected: RepoTag{\n\t\t\t\tRepo: \"eraser\",\n\t\t\t\tTag:  \"sha256:4dca0fd5f424a31b03ab807cbae77eb32bf2d089eed1cee154b3afed458de0dc\",\n\t\t\t},\n\t\t\texpectErr: false,\n\t\t},\n\t\t{\n\t\t\tinput: \"eraser:sha256:4dca0fd5f424a31b03ab807cbae77eb32bf2d089eed1cee154b3afed458de0dc\",\n\t\t\texpected: RepoTag{\n\t\t\t\tRepo: \"eraser\",\n\t\t\t\tTag:  \"sha256:4dca0fd5f424a31b03ab807cbae77eb32bf2d089eed1cee154b3afed458de0dc\",\n\t\t\t},\n\t\t\texpectErr: false,\n\t\t},\n\t\t{\n\t\t\tinput: \"eraser@sha256:4dca0fd5f4:4a31b03ab807cbae77eb32bf2d089eed1cee154b3afed458de0dc\",\n\t\t\texpected: RepoTag{\n\t\t\t\tRepo: \"\",\n\t\t\t\tTag:  \"\",\n\t\t\t},\n\t\t\texpectErr: true,\n\t\t},\n\t\t{\n\t\t\tinput: \"docker.io/nginx@sha256:4dca0fd5f424a31b03ab807cbae77eb32bf2d089eed1cee154b3afed458de0dc\",\n\t\t\texpected: RepoTag{\n\t\t\t\tRepo: \"docker.io/nginx\",\n\t\t\t\tTag:  \"sha256:4dca0fd5f424a31b03ab807cbae77eb32bf2d089eed1cee154b3afed458de0dc\",\n\t\t\t},\n\t\t\texpectErr: false,\n\t\t},\n\t\t{\n\t\t\tinput: \"docker.io/library/nginx@sha256:4dca0fd5f424a31b03ab807cbae77eb32bf2d089eed1cee154b3afed458de0dc\",\n\t\t\texpected: RepoTag{\n\t\t\t\tRepo: \"docker.io/library/nginx\",\n\t\t\t\tTag:  \"sha256:4dca0fd5f424a31b03ab807cbae77eb32bf2d089eed1cee154b3afed458de0dc\",\n\t\t\t},\n\t\t\texpectErr: false,\n\t\t},\n\t\t{\n\t\t\tinput: \"docker.io/nginx@sha256:4dca0fd5f4\",\n\t\t\texpected: RepoTag{\n\t\t\t\tRepo: \"\",\n\t\t\t\tTag:  \"\",\n\t\t\t},\n\t\t\texpectErr: true,\n\t\t},\n\t\t{\n\t\t\tinput: \"docker.io/nginx@sha256:gggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggggg\",\n\t\t\texpected: RepoTag{\n\t\t\t\tRepo: \"\",\n\t\t\t\tTag:  \"\",\n\t\t\t},\n\t\t\texpectErr: true,\n\t\t},\n\t\t{\n\t\t\tinput: \"docker.io/library/nginx@sha123:4dca0fd5f424a31b03ab807cbae77eb32bf2d089eed1cee154b3afed458de0dc\",\n\t\t\texpected: RepoTag{\n\t\t\t\tRepo: \"\",\n\t\t\t\tTag:  \"\",\n\t\t\t},\n\t\t\texpectErr: true,\n\t\t},\n\t\t{\n\t\t\tinput: \"\",\n\t\t\texpected: RepoTag{\n\t\t\t\tRepo: \"\",\n\t\t\t\tTag:  \"\",\n\t\t\t},\n\t\t\texpectErr: false,\n\t\t},\n\t\t{\n\t\t\tinput: \":\",\n\t\t\texpected: RepoTag{\n\t\t\t\tRepo: \"\",\n\t\t\t\tTag:  \"\",\n\t\t\t},\n\t\t\texpectErr: true,\n\t\t},\n\t\t{\n\t\t\tinput: \"/\",\n\t\t\texpected: RepoTag{\n\t\t\t\tRepo: \"\",\n\t\t\t\tTag:  \"\",\n\t\t\t},\n\t\t\texpectErr: true,\n\t\t},\n\t}\n\n\tfor _, c := range cases {\n\t\tresult, err := parseRepoTag(c.input)\n\t\tif err != nil {\n\t\t\tif c.expectErr {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tklog.Errorf(\"error from parsing function: %#v\\ninput: %s\\nexpected: %#v\\ngot:      %#v\", err, c.input, c.expected, result)\n\t\t\tt.FailNow()\n\t\t}\n\n\t\tif c.expectErr 
{\n\t\t\tklog.Errorf(\"expected error parsing reference `%s`, but did not receive one\", c.input)\n\t\t\tt.Fail()\n\t\t}\n\n\t\tif result.Repo != c.expected.Repo || result.Tag != c.expected.Tag {\n\t\t\tklog.Errorf(\"wrong result\\ninput: %s\\nexpected: %#v\\ngot:      %#v\", c.input, c.expected, result)\n\t\t\tt.Fail()\n\t\t}\n\t}\n}\n"
  },
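  {
    "path": "test/e2e/util/delete_string_sketch_test.go",
    "content": "// Illustrative sketch, not part of the upstream suite: documents the\n// swap-with-last removal strategy used by DeleteStringFromSlice, which is O(1)\n// per removal but does not preserve element order.\npackage util\n\nimport \"testing\"\n\nfunc TestDeleteStringFromSliceSketch(t *testing.T) {\n\tgot := DeleteStringFromSlice([]string{\"a\", \"b\", \"c\"}, \"a\")\n\tif len(got) != 2 {\n\t\tt.Fatalf(\"expected 2 elements, got %d\", len(got))\n\t}\n\n\t// \"a\" is swapped with the final element before truncation, so the result\n\t// is {\"c\", \"b\"}, not {\"b\", \"c\"}.\n\tif got[0] != \"c\" || got[1] != \"b\" {\n\t\tt.Fatalf(\"unexpected order: %v\", got)\n\t}\n\n\t// removing a missing element returns the slice unchanged\n\tif got = DeleteStringFromSlice(got, \"zzz\"); len(got) != 2 {\n\t\tt.Fatalf(\"expected slice to be unchanged, got %v\", got)\n\t}\n}\n"
  },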
  {
    "path": "third_party/open-policy-agent/gatekeeper/helmify/LICENSE",
    "content": "                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. 
For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. 
Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"[]\"\n      replaced with your own identifying information. (Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright [yyyy] [name of copyright owner]\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License."
  },
  {
    "path": "third_party/open-policy-agent/gatekeeper/helmify/README.md",
    "content": "# gatekeeper/helmify\n\nForked from https://github.com/open-policy-agent/gatekeeper (v3.5.0-rc.1).\n\nThe helmify helps auto-generate the helm chart from manifest to avoid any drifts\n\nThe original code can be found at https://github.com/open-policy-agent/gatekeeper/tree/master/cmd/build/helmify."
  },
  {
    "path": "third_party/open-policy-agent/gatekeeper/helmify/kustomization.yaml",
    "content": "namespace: \"{{ .Release.Namespace }}\"\ncommonLabels:\n  app.kubernetes.io/name: '{{ template \"eraser.name\" . }}'\n  helm.sh/chart: '{{ template \"eraser.name\" . }}'\n  app.kubernetes.io/managed-by: '{{ .Release.Service }}'\n  app.kubernetes.io/instance: \"{{ .Release.Name }}\"\nbases:\n  - \"../../../../config/default\"\npatchesStrategicMerge:\n  - kustomize-for-helm.yaml\npatchesJson6902:\n  # these are defined in the chart values rather than hard-coded\n  - target:\n      kind: Deployment\n      name: eraser-controller-manager\n    patch: |-\n      - op: remove\n        path: /spec/template/spec/containers/0/resources/limits\n      - op: remove\n        path: /spec/template/spec/containers/0/resources/requests\n      - op: remove\n        path: /spec/template/spec/nodeSelector/kubernetes.io~1os\n"
  },
  {
    "path": "third_party/open-policy-agent/gatekeeper/helmify/kustomize-for-helm.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: eraser-controller-manager\n  namespace: eraser-system\nspec:\n  template:\n    metadata:\n      labels:\n        HELMSUBST_DEPLOYMENT_CONTROLLER_MANAGER_ADDITIONALPODLABELS: \"\"\n    spec:\n      HELMSUBST_DEPLOYMENT_CONTROLLER_MANAGER_PULL_SECRETS: \"\"\n      volumes:\n      - name: eraser-manager-config\n        configMap:\n          name: eraser-manager-config\n      containers:\n      - name: manager\n        args:\n        - --config=/config/controller_manager_config.yaml\n        - HELMSUBST_DEPLOYMENT_CONTROLLER_MANAGER_ADDITIONAL_ARGS\n        command:\n        - /manager\n        image: \"{{ .Values.deploy.image.repo }}:{{ .Values.deploy.image.tag | default .Chart.AppVersion }}\"\n        imagePullPolicy: \"{{ .Values.deploy.image.pullPolicy }}\"\n        livenessProbe:\n          httpGet:\n            path: /healthz\n            port: 8081\n          initialDelaySeconds: 15\n          periodSeconds: 20\n        readinessProbe:\n          httpGet:\n            path: /readyz\n            port: 8081\n          initialDelaySeconds: 5\n          periodSeconds: 10\n        resources:\n          HELMSUBST_DEPLOYMENT_CONTROLLER_MANAGER_CONTAINER_RESOURCES: \"\"\n        securityContext:\n          allowPrivilegeEscalation: false\n        volumeMounts:\n        - name: eraser-manager-config\n          mountPath: /config\n      nodeSelector:\n        HELMSUBST_DEPLOYMENT_CONTROLLER_MANAGER_NODESELECTOR: \"\"\n      tolerations:\n        HELMSUBST_DEPLOYMENT_CONTROLLER_MANAGER_TOLERATIONS: \"\"\n      affinity:\n        HELMSUBST_DEPLOYMENT_CONTROLLER_MANAGER_AFFINITY: \"\"\n      priorityClassName: \"{{ .Values.deploy.priorityClassName }}\"\n"
  },
  {
    "path": "third_party/open-policy-agent/gatekeeper/helmify/main.go",
    "content": "package main\n\nimport (\n\t\"bufio\"\n\t\"flag\"\n\t\"fmt\"\n\t\"log\"\n\t\"os\"\n\t\"path\"\n\t\"regexp\"\n\t\"strings\"\n)\n\nvar outputDir = flag.String(\"output-dir\", \"manifest_staging/charts/eraser\", \"The root directory in which to write the Helm chart\")\n\nvar kindRegex = regexp.MustCompile(`(?m)^kind:[\\s]+([\\S]+)[\\s]*$`)\n\n// use exactly two spaces to be sure we are capturing metadata.name.\nvar nameRegex = regexp.MustCompile(`(?m)^  name:[\\s]+([\\S]+)[\\s]*$`)\n\nfunc extractKind(s string) (string, error) {\n\tmatches := kindRegex.FindStringSubmatch(s)\n\tif len(matches) != 2 {\n\t\treturn \"\", fmt.Errorf(\"%s does not have a kind\", s)\n\t}\n\treturn strings.Trim(matches[1], `\"'`), nil\n}\n\nfunc extractName(s string) (string, error) {\n\tmatches := nameRegex.FindStringSubmatch(s)\n\tif len(matches) != 2 {\n\t\treturn \"\", fmt.Errorf(\"%s does not have a name\", s)\n\t}\n\treturn strings.Trim(matches[1], `\"'`), nil\n}\n\ntype kindSet struct {\n\tbyKind map[string][]string\n}\n\nfunc (ks *kindSet) Add(obj string) error {\n\tkind, err := extractKind(obj)\n\tif err != nil {\n\t\treturn err\n\t}\n\tobjs, ok := ks.byKind[kind]\n\tif !ok {\n\t\tobjs = []string{obj}\n\t} else {\n\t\tobjs = append(objs, obj)\n\t}\n\tks.byKind[kind] = objs\n\treturn nil\n}\n\nfunc (ks *kindSet) Write() error {\n\tfor kind, objs := range ks.byKind {\n\t\tsubPath := \"templates\"\n\t\tnameExtractor := extractName\n\n\t\tif kind == \"Namespace\" {\n\t\t\tcontinue\n\t\t}\n\n\t\tfor _, obj := range objs {\n\t\t\tname, err := nameExtractor(obj)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tfileName := fmt.Sprintf(\"%s-%s.yaml\", strings.ToLower(name), strings.ToLower(kind))\n\t\t\tdestFile := path.Join(*outputDir, subPath, fileName)\n\t\t\tfmt.Printf(\"Writing %s\\n\", destFile)\n\n\t\t\tif err := os.WriteFile(destFile, []byte(obj), 0o600); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc doReplacements(obj string) string {\n\tfor old, new := range replacements {\n\t\tobj = strings.ReplaceAll(obj, old, new)\n\t}\n\treturn obj\n}\n\nfunc copyStaticFiles(root string, subdirs ...string) error {\n\tp := path.Join(append([]string{root}, subdirs...)...)\n\tfiles, err := os.ReadDir(p)\n\tif err != nil {\n\t\treturn err\n\t}\n\tfor _, f := range files {\n\t\tnewSubDirs := append([]string{}, subdirs...)\n\t\tnewSubDirs = append(newSubDirs, f.Name())\n\t\tdestination := path.Join(append([]string{*outputDir}, newSubDirs...)...)\n\t\tif f.IsDir() {\n\t\t\tfmt.Printf(\"Making %s\\n\", destination)\n\t\t\tif err := os.Mkdir(destination, 0o750); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif err := copyStaticFiles(root, newSubDirs...); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t} else {\n\t\t\tcontents, err := os.ReadFile(path.Join(p, f.Name())) // #nosec G304\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tfmt.Printf(\"Writing %s\\n\", destination)\n\t\t\tif err := os.WriteFile(destination, contents, 0o600); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc main() {\n\tflag.Parse()\n\tscanner := bufio.NewScanner(os.Stdin)\n\tkinds := kindSet{byKind: make(map[string][]string)}\n\tb := strings.Builder{}\n\tnotate := func() {\n\t\tobj := doReplacements(b.String())\n\t\tb.Reset()\n\t\tif err := kinds.Add(obj); err != nil {\n\t\t\tlog.Fatalf(\"Error adding object: %s, %s\", err, b.String())\n\t\t}\n\t}\n\n\tfor scanner.Scan() {\n\t\tif strings.HasPrefix(scanner.Text(), \"---\") {\n\t\t\tif b.Len() > 0 
{\n\t\t\t\tnotate()\n\t\t\t}\n\t\t} else {\n\t\t\tb.WriteString(scanner.Text())\n\t\t\tb.WriteString(\"\\n\")\n\t\t}\n\t}\n\tif b.Len() > 0 {\n\t\tnotate()\n\t}\n\tif err := copyStaticFiles(\"third_party/open-policy-agent/gatekeeper/helmify/static\"); err != nil {\n\t\tlog.Fatal(err)\n\t}\n\tif err := kinds.Write(); err != nil {\n\t\tlog.Fatal(err)\n\t}\n}\n"
  },
  {
    "path": "third_party/open-policy-agent/gatekeeper/helmify/replacements.go",
    "content": "package main\n\nvar replacements = map[string]string{\n\t`HELMSUBST_DEPLOYMENT_CONTROLLER_MANAGER_CONTAINER_RESOURCES: \"\"`: `{{- toYaml .Values.deploy.resources | nindent 10 }}`,\n\t`HELMSUBST_DEPLOYMENT_CONTROLLER_MANAGER_NODESELECTOR: \"\"`:        `{{- toYaml .Values.deploy.nodeSelector | nindent 8 }}`,\n\t`HELMSUBST_DEPLOYMENT_CONTROLLER_MANAGER_TOLERATIONS: \"\"`:         `{{- toYaml .Values.deploy.tolerations | nindent 8 }}`,\n\t`HELMSUBST_DEPLOYMENT_CONTROLLER_MANAGER_AFFINITY: \"\"`:            `{{- toYaml .Values.deploy.affinity | nindent 8 }}`,\n\t`- HELMSUBST_DEPLOYMENT_CONTROLLER_MANAGER_ADDITIONAL_ARGS`:       `{{- if .Values.deploy.additionalArgs }}{{- range .Values.deploy.additionalArgs }}{{ nindent 8 \"- \" }}{{ . }}{{- end -}}{{ end }}`,\n\t`HELMSUBST_CONTROLLER_MANAGER_CONFIG_YAML`:                        `{{- toYaml .Values.runtimeConfig | nindent 4 }}`,\n\t`HELMSUBST_DEPLOYMENT_CONTROLLER_MANAGER_ADDITIONALPODLABELS: \"\"`: `{{- if .Values.deploy.additionalPodLabels }}{{- toYaml .Values.deploy.additionalPodLabels | nindent 8 }}{{end}}`,\n\n\t`HELMSUBST_DEPLOYMENT_CONTROLLER_MANAGER_PULL_SECRETS: \"\"`: `{{- if .Values.runtimeConfig.manager.pullSecrets }}\n      imagePullSecrets:\n        {{- range .Values.runtimeConfig.manager.pullSecrets }}\n        - name: {{ . }}\n        {{- end }}\n      {{- end }}`,\n}\n"
  },
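  {
    "path": "third_party/open-policy-agent/gatekeeper/helmify/replacements_sketch_test.go",
    "content": "// Illustrative sketch, not part of the upstream fork: shows how a HELMSUBST\n// placeholder emitted by kustomize-for-helm.yaml is rewritten into a Helm\n// template expression by doReplacements in main.go.\npackage main\n\nimport (\n\t\"strings\"\n\t\"testing\"\n)\n\nfunc TestDoReplacementsSketch(t *testing.T) {\n\tin := \"      nodeSelector:\\n        HELMSUBST_DEPLOYMENT_CONTROLLER_MANAGER_NODESELECTOR: \\\"\\\"\\n\"\n\tout := doReplacements(in)\n\n\tif strings.Contains(out, \"HELMSUBST\") {\n\t\tt.Fatalf(\"placeholder was not replaced: %q\", out)\n\t}\n\tif !strings.Contains(out, \".Values.deploy.nodeSelector\") {\n\t\tt.Fatalf(\"expected a Helm template expression, got: %q\", out)\n\t}\n}\n"
  },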
  {
    "path": "third_party/open-policy-agent/gatekeeper/helmify/static/.helmignore",
    "content": "# Patterns to ignore when building packages.\n# This supports shell glob matching, relative path matching, and\n# negation (prefixed with !). Only one pattern per line.\n.DS_Store\n# Common VCS dirs\n.git/\n.gitignore\n.bzr/\n.bzrignore\n.hg/\n.hgignore\n.svn/\n# Common backup files\n*.swp\n*.bak\n*.tmp\n*.orig\n*~\n# Various IDEs\n.project\n.idea/\n*.tmproj\n.vscode/\n"
  },
  {
    "path": "third_party/open-policy-agent/gatekeeper/helmify/static/Chart.yaml",
    "content": "apiVersion: v2\nname: eraser\ndescription: A Helm chart for Eraser\ntype: application\nversion: 1.5.0-beta.0\nappVersion: v1.5.0-beta.0\nhome: https://github.com/eraser-dev/eraser\nsources:\n  - https://github.com/eraser-dev/eraser.git\n"
  },
  {
    "path": "third_party/open-policy-agent/gatekeeper/helmify/static/README.md",
    "content": "# Eraser Helm Chart\n\n## Contributing Changes\n\nThis Helm chart is autogenerated from the Eraser static manifest. The generator code lives under third_party/open-policy-agent/gatekeeper/helmify. To make modifications to this template, please edit kustomization.yaml, kustomize-for-helm.yaml and replacements.go under that directory and then run make manifests. Your changes will show up in the manifest_staging directory and will be promoted to the root charts directory the next time an Eraser release is cut.\n\n## Get Repo Info\n\n```console\nhelm repo add eraser https://eraser-dev.github.io/eraser/charts\nhelm repo update\n```\n\n_See [helm repo](https://helm.sh/docs/helm/helm_repo/) for command documentation._\n\n## Install Chart\n\n```console\n# Helm install with eraser-system namespace already created\n$ helm install -n eraser-system [RELEASE_NAME] eraser/eraser\n\n# Helm install and create namespace\n$ helm install -n eraser-system [RELEASE_NAME] eraser/eraser --create-namespace\n\n```\n\n_See [parameters](#parameters) below._\n\n_See [helm install](https://helm.sh/docs/helm/helm_install/) for command documentation._\n\n## Parameters\n\n| Parameter                                       | Description                                                                                          | Default                        |\n| ----------------------------------------------- | ---------------------------------------------------------------------------------------------------- | ------------------------------ |\n| runtimeConfig.health                            | Settings for the health server.                                                                      | `{}`                           |\n| runtimeConfig.metrics                           | Settings for the metrics server.                                                                     | `{}`                           |\n| runtimeConfig.webhook                           | Settings for the webhook server.                                                                     | `{}`                           |\n| runtimeConfig.leaderElection                    | Settings for leader election.                                                                        | `{}`                           |\n| runtimeConfig.manager.runtime                   | The container runtime to use.                                                                        | `containerd`                   |\n| runtimeConfig.manager.otlpEndpoint              | The OTLP endpoint to send metrics to.                                                                 | `\"\"`                           |\n| runtimeConfig.manager.logLevel                  | The logging level for the manager.                                                                   | `info`                         |\n| runtimeConfig.manager.scheduling                | Settings for scheduling.                                                                             | `{}`                           |\n| runtimeConfig.manager.profile                   | Settings for the profiler.                                                                           | `{}`                           |\n| runtimeConfig.manager.imageJob.successRatio     | The minimum ratio of successful image jobs required for the overall job to be considered successful. | `1.0`                          |\n| runtimeConfig.manager.imageJob.cleanup          | Settings for image job cleanup.                          
                                            | `{}`                           |\n| runtimeConfig.manager.pullSecrets               | Image pull secrets for collector/scanner/eraser.                                                     | `[]`                           |\n| runtimeConfig.manager.priorityClassName         | Priority class name for collector/scanner/eraser.                                                    | `\"\"`                           |\n| runtimeConfig.manager.additionalPodLabels       | Additional labels for all pods that the controller creates at runtime.                               | `{}`                           |\n| runtimeConfig.manager.nodeFilter                | Filter for nodes.                                                                                    | `{}`                           |\n| runtimeConfig.components.collector              | Settings for the collector component.                                                                | `{ enabled: true }`           |\n| runtimeConfig.components.scanner                | Settings for the scanner component.                                                                  | `{ enabled: true }`           |\n| runtimeConfig.components.eraser                 | Settings for the eraser component.                                                                   | `{}`                           |\n| deploy.image.repo                               | Repository for the image.                                                                            | `ghcr.io/eraser-dev/eraser-manager` |\n| deploy.image.pullPolicy                         | Policy for pulling the image.                                                                        | `IfNotPresent`                 |\n| deploy.image.tag                                | Overrides the default image tag.                                                                     | `\"\"`                           |\n| deploy.additionalArgs                           | Additional arguments to pass to the command.                                                         | `[]`                           |\n| deploy.priorityClassName                        | Priority class name.                                                                                 | `\"\"`                           |\n| deploy.additionalPodLabels                      | Additional labels for the controller pod.                                                            | `{}`                           |\n| deploy.securityContext.allowPrivilegeEscalation | Whether to allow privilege escalation.                                                               | `false`                        |\n| deploy.resources.limits.memory                  | Memory limit for the resources.                                                                      | `30Mi`                         |\n| deploy.resources.requests.cpu                   | CPU request for the resources.                                                                       | `100m`                         |\n| deploy.resources.requests.memory                | Memory request for the resources.                                                                    | `20Mi`                         |\n| deploy.nodeSelector                             | Node Selector for manager.                                                                           
| `kubernetes.io/os: linux`      |\n| deploy.tolerations                              | Tolerations for the manager.                                                                         | `[]`                           |\n| deploy.affinity                                 | Affinity for the manager.                                                                            | `{}`                           |\n| nameOverride                                    | Override name if needed.                                                                             | `\"\"`                           |\n"
  },
  {
    "path": "third_party/open-policy-agent/gatekeeper/helmify/static/templates/_helpers.tpl",
    "content": "{{/*\nReturn the name of the chart. Use Values.nameOverride but if null use Chart.Name\n*/}}\n{{- define \"eraser.name\" -}}\n{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix \"-\" }}\n{{- end -}}\n"
  },
  {
    "path": "third_party/open-policy-agent/gatekeeper/helmify/static/templates/configmap.yaml",
    "content": "apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: eraser-manager-config\n  namespace: \"{{ .Release.Namespace }}\"\ndata:\n  controller_manager_config.yaml: |\n    {{- toYaml .Values.runtimeConfig | nindent 4 }}\n"
  },
  {
    "path": "third_party/open-policy-agent/gatekeeper/helmify/static/values.yaml",
    "content": "runtimeConfig:\n  apiVersion: eraser.sh/v1alpha3\n  kind: EraserConfig\n  health: {}\n    # healthProbeBindAddress: :8081\n  metrics: {}\n    # bindAddress: 127.0.0.1:8080\n  webhook: {}\n    # port: 9443\n  leaderElection: {}\n    # leaderElect: true\n    # resourceName: e29e094a.k8s.io\n  manager:\n    runtime:\n      name: containerd\n      address: unix:///run/containerd/containerd.sock\n    otlpEndpoint: \"\"\n    logLevel: info\n    scheduling: {}\n      # repeatInterval: \"\"\n      # beginImmediately: true\n    profile: {}\n      # enabled: false\n      # port: 0\n    imageJob:\n      successRatio: 1.0\n      cleanup: {}\n        # delayOnSuccess: \"\"\n        # delayOnFailure: \"\"\n    pullSecrets: [] # image pull secrets for collector/scanner/eraser\n    priorityClassName: \"\" # priority class name for collector/scanner/eraser\n    additionalPodLabels: {}\n    nodeFilter:\n      type: exclude # must be either exclude|include\n      selectors:\n        - eraser.sh/cleanup.filter\n        - kubernetes.io/os=windows\n  components:\n    collector:\n      enabled: true\n      image:\n        # repo: \"\"\n        tag: \"v1.5.0-beta.0\"\n      request: {}\n        # mem: \"\"\n        # cpu: \"\"\n      limit: {}\n        # mem: \"\"\n        # cpu: \"\"\n    scanner:\n      enabled: true\n      image:\n        # repo: \"\"\n        tag: \"v1.5.0-beta.0\"\n      request: {}\n        # mem: \"\"\n        # cpu: \"\"\n      limit: {}\n        # mem: \"\"\n        # cpu: \"\"\n      config: \"\" # |\n        # cacheDir: /var/lib/trivy\n        # dbRepo: ghcr.io/aquasecurity/trivy-db\n        # deleteFailedImages: true\n        # deleteEOLImages: true\n        # vulnerabilities:\n        #   ignoreUnfixed: false\n        #   types:\n        #     - os\n        #     - library\n        #   securityChecks:\n        #     - vuln\n        #   severities:\n        #     - CRITICAL\n        #     - HIGH\n        #     - MEDIUM\n        #     - LOW\n        #   ignoredStatuses:\n        # timeout:\n        #   total: 23h\n        #   perImage: 1h\n    remover:\n      image:\n        # repo: \"\"\n        tag: \"v1.5.0-beta.0\"\n      request: {}\n        # mem: \"\"\n        # cpu: \"\"\n      limit: {}\n        # mem: \"\"\n        # cpu: \"\"\n\ndeploy:\n  image:\n    repo: ghcr.io/eraser-dev/eraser-manager\n    pullPolicy: IfNotPresent\n    # Overrides the image tag whose default is the chart appVersion.\n    tag: \"v1.5.0-beta.0\"\n  additionalArgs: []\n  priorityClassName: \"\"\n  additionalPodLabels: {}\n\n  securityContext:\n    allowPrivilegeEscalation: false\n\n  resources:\n    limits:\n      memory: 30Mi\n    requests:\n      cpu: 100m\n      memory: 20Mi\n\n  nodeSelector:\n    kubernetes.io/os: linux\n\n  tolerations: []\n\n  affinity: {}\n\nnameOverride: \"\"\n"
  },
  {
    "path": "version/version.go",
    "content": "// Package version provides build version and information for the eraser application.\npackage version\n\nimport (\n\t\"fmt\"\n\t\"runtime\"\n)\n\nvar (\n\t// BuildVersion is version set on build.\n\tBuildVersion string\n\t// DefaultRepo is the default repo for images.\n\tDefaultRepo = \"ghcr.io/eraser-dev\"\n\t// buildTime is the date for the binary build.\n\tbuildTime string\n\t// vcsCommit is the commit hash for the binary build.\n\tvcsCommit string\n)\n\n// GetUserAgent returns a user agent of the format eraser/<component>/<version> (<goos>/<goarch>) <commit>/<timestamp>.\nfunc GetUserAgent(component string) string {\n\treturn fmt.Sprintf(\"eraser/%s/%s (%s/%s) %s/%s\", component, BuildVersion, runtime.GOOS, runtime.GOARCH, vcsCommit, buildTime)\n}\n"
  },
  {
    "path": "version/version_test.go",
    "content": "package version\n\nimport (\n\t\"fmt\"\n\t\"runtime\"\n\t\"strings\"\n\t\"testing\"\n)\n\nfunc TestGetUserAgent(t *testing.T) {\n\tbuildTime = \"Now\"\n\tBuildVersion = \"version\"\n\tvcsCommit = \"hash\"\n\n\texpected := fmt.Sprintf(\"eraser/manager/%s (%s/%s) %s/%s\", BuildVersion, runtime.GOOS, runtime.GOARCH, vcsCommit, buildTime)\n\tactual := GetUserAgent(\"manager\")\n\tif !strings.EqualFold(expected, actual) {\n\t\tt.Fatalf(\"expected: %s, got: %s\", expected, actual)\n\t}\n}\n"
  }
]