Repository: particuleio/terraform-kubernetes-addons
Branch: main
Commit: 864c7b98f6d7
Files: 136
Total size: 819.1 KB

Directory structure:
terraform-kubernetes-addons/

├── .github/
│   ├── CONTRIBUTING.md
│   ├── ISSUE_TEMPLATE/
│   │   ├── bug_report.md
│   │   └── feature_request.md
│   ├── PULL_REQUEST_TEMPLATE.md
│   ├── renovate.json
│   └── workflows/
│       ├── pr-title.yml
│       ├── pre-commit.yml
│       ├── release.yml
│       └── stale-actions.yaml
├── .gitignore
├── .mergify.yml
├── .pre-commit-config.yaml
├── .python-version
├── .releaserc.json
├── .terraform-docs.yml
├── CODEOWNERS
├── LICENSE
├── README.md
├── admiralty.tf
├── cert-manager-csi-driver.tf
├── cert-manager.tf
├── csi-external-snapshotter.tf
├── flux2.tf
├── grafana-mcp.tf
├── helm-dependencies.yaml
├── ingress-nginx.tf
├── k8gb.tf
├── karma.tf
├── keda.tf
├── kong-crds.tf
├── kong.tf
├── kube-prometheus-crd.tf
├── kube-prometheus.tf
├── linkerd-viz.tf
├── linkerd.tf
├── linkerd2-cni.tf
├── locals.tf
├── loki-stack.tf
├── metrics-server.tf
├── modules/
│   ├── aws/
│   │   ├── .terraform-docs.yml
│   │   ├── README.md
│   │   ├── aws-ebs-csi-driver.tf
│   │   ├── aws-efs-csi-driver.tf
│   │   ├── aws-for-fluent-bit.tf
│   │   ├── aws-load-balancer-controller.tf
│   │   ├── aws-node-termination-handler.tf
│   │   ├── cert-manager.tf
│   │   ├── cluster-autoscaler.tf
│   │   ├── cni-metrics-helper.tf
│   │   ├── data.tf
│   │   ├── examples/
│   │   │   └── README.md
│   │   ├── external-dns.tf
│   │   ├── iam/
│   │   │   ├── aws-ebs-csi-driver.json
│   │   │   ├── aws-ebs-csi-driver_kms.json
│   │   │   ├── aws-efs-csi-driver.json
│   │   │   └── aws-load-balancer-controller.json
│   │   ├── ingress-nginx.tf
│   │   ├── karpenter.tf
│   │   ├── kube-prometheus.tf
│   │   ├── locals-aws.tf
│   │   ├── loki-stack.tf
│   │   ├── prometheus-cloudwatch-exporter.tf
│   │   ├── s3-logging.tf
│   │   ├── secrets-store-csi-driver-provider-aws.tf
│   │   ├── templates/
│   │   │   ├── cert-manager-cluster-issuers.yaml.tpl
│   │   │   └── cni-metrics-helper.yaml.tpl
│   │   ├── thanos-memcached.tf
│   │   ├── thanos-storegateway.tf
│   │   ├── thanos-tls-querier.tf
│   │   ├── thanos.tf
│   │   ├── tigera-operator.tf
│   │   ├── variables-aws.tf
│   │   ├── velero.tf
│   │   ├── versions.tf
│   │   ├── victoria-metrics-k8s-stack.tf
│   │   └── yet-another-cloudwatch-exporter.tf
│   ├── azure/
│   │   ├── .terraform-docs.yml
│   │   ├── README.md
│   │   ├── ingress-nginx.tf
│   │   └── version.tf
│   ├── google/
│   │   ├── .terraform-docs.yml
│   │   ├── README.md
│   │   ├── cert-manager.tf
│   │   ├── data.tf
│   │   ├── external-dns.tf
│   │   ├── ingress-nginx.tf
│   │   ├── ip-masq-agent.tf
│   │   ├── kube-prometheus.tf
│   │   ├── loki-stack.tf
│   │   ├── manifests/
│   │   │   └── gke-ip-masq/
│   │   │       ├── ip-masq-agent-configmap.yaml
│   │   │       └── ip-masq-agent-daemonset.yaml
│   │   ├── templates/
│   │   │   ├── cert-manager-cluster-issuers.yaml.j2
│   │   │   ├── cert-manager-cluster-issuers.yaml.tpl
│   │   │   └── cni-metrics-helper.yaml.tpl
│   │   ├── thanos-memcached.tf
│   │   ├── thanos-receive.tf
│   │   ├── thanos-storegateway.tf
│   │   ├── thanos-tls-querier.tf
│   │   ├── thanos.tf
│   │   ├── variables-google.tf
│   │   ├── velero.tf
│   │   ├── versions.tf
│   │   └── victoria-metrics-k8s-stack.tf
│   └── scaleway/
│       ├── .terraform-docs.yml
│       ├── README.md
│       ├── cert-manager.tf
│       ├── examples/
│       │   └── README.md
│       ├── external-dns.tf
│       ├── ingress-nginx.tf
│       ├── kube-prometheus.tf
│       ├── locals-scaleway.tf
│       ├── loki-stack.tf
│       ├── templates/
│       │   └── cert-manager-cluster-issuers.yaml.tpl
│       ├── thanos-memcached.tf
│       ├── thanos-storegateway.tf
│       ├── thanos-tls-querier.tf
│       ├── thanos.tf
│       ├── variables-scaleway.tf
│       ├── velero.tf
│       ├── versions.tf
│       └── victoria-metrics-k8s-stack.tf
├── node-problem-detector.tf
├── priority-class.tf
├── prometheus-adapter.tf
├── prometheus-blackbox-exporter.tf
├── promtail.tf
├── reloader.tf
├── sealed-secrets.tf
├── secrets-store-csi-driver.tf
├── templates/
│   ├── cert-manager-cluster-issuers.yaml.tpl
│   └── cert-manager-csi-driver.yaml.tpl
├── tigera-operator.tf
├── traefik.tf
├── variables.tf
├── versions.tf
└── victoria-metrics-k8s-stack.tf

================================================
FILE CONTENTS
================================================

================================================
FILE: .github/CONTRIBUTING.md
================================================
# Contributing

When contributing to this repository, please first discuss the change you wish to make via issue,
email, or any other method with the owners of this repository before making a change.

Please note we have a code of conduct, please follow it in all your interactions with the project.

## Pull Request Process

1. Ensure any install or build dependencies are removed before the end of the layer when doing a build.
2. Update the README.md with details of changes to the interface, this includes new environment variables, exposed ports, useful file locations, and container parameters.
3. Once all outstanding comments and checklist items have been addressed, your contribution will be merged! Merged PRs will trigger a new release.

## Checklists for contributions

- [ ] Add a [semantic prefix](#semantic-pull-requests) to your PR title or commits (at least one of your commit groups)
- [ ] CI tests are passing
- [ ] README.md has been updated after any changes to variables and outputs. See https://github.com/particuleio/terraform-kubernetes-addons/#doc-generation

## Semantic Pull Requests

To generate the changelog, Pull Requests or Commits must be semantic and follow the conventional prefixes below:

- `feat:` for new features
- `fix:` for bug fixes
- `improvement:` for enhancements
- `docs:` for documentation and examples
- `refactor:` for code refactoring
- `test:` for tests
- `ci:` for CI purpose
- `chore:` for miscellaneous chores

The `chore` prefix is skipped during changelog generation. It can be used, for example, for a `chore: update changelog` commit message.
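For illustration only, a few hypothetical PR titles that would satisfy these rules (the scopes `charts` and `tf` mirror the ones Renovate uses in this repository; the component names are made up):

```text
feat(charts): add support for a new addon chart
fix(tf): correct a variable default for an addon
docs: add usage example to README.md
chore: update changelog
```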


================================================
FILE: .github/ISSUE_TEMPLATE/bug_report.md
================================================
---
name: Bug report
about: Create a report to help us improve
title: "[bug]"
labels: bug
assignees: ArchiFleKs

---

## Describe the bug

A clear and concise description of what the bug is.

## What is the current behavior?


## How to reproduce? Please include a code sample if relevant.


## What's the expected behavior?


## Are you able to fix this problem and submit a PR? Link here if you have already.


## Environment details

* Affected module version:
* OS:
* Terraform version:
* Kubernetes version:

## Any other relevant info


================================================
FILE: .github/ISSUE_TEMPLATE/feature_request.md
================================================
---
name: Feature request
about: Suggest an idea for this project
title: "[enhancement]"
labels: enhancement
assignees: ArchiFleKs

---

**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

**Describe the solution you'd like**
A clear and concise description of what you want to happen.

**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.

**Additional context**
Add any other context or screenshots about the feature request here.


================================================
FILE: .github/PULL_REQUEST_TEMPLATE.md
================================================
# Pull request title

## Description

Please explain the changes you made here and link to any relevant issues.

### Checklist

- [ ] CI tests are passing
- [ ] README.md has been updated after any changes to variables and outputs. See https://github.com/particuleio/terraform-kubernetes-addons/#doc-generation


================================================
FILE: .github/renovate.json
================================================
{
  "extends": [
    ":separateMajorReleases",
    ":ignoreUnstable",
    ":prImmediately",
    ":updateNotScheduled",
    ":disableRateLimiting",
    ":ignoreModulesAndTests",
    ":gitSignOff",
    "group:monorepos",
    "group:recommended",
    "helpers:disableTypesNodeMajor",
    "workarounds:all",
    ":automergeDigest",
    ":automergeMinor",
    ":dependencyDashboard"
  ],
  "baseBranchPatterns": [
    "main"
  ],
  "enabledManagers": [
    "helmv3",
    "github-actions",
    "pre-commit",
    "terraform"
  ],
  "semanticCommits": "enabled",
  "platformAutomerge": false,
  "helmv3": {
    "enabled": true,
    "managerFilePatterns": [
      "/(^|/)helm-dependencies.yaml$/"
    ]
  },
  "commitMessageExtra": "to {{newVersion}} (was {{currentVersion}})",
  "prHourlyLimit": 0,
  "packageRules": [
    {
      "matchManagers": [
        "github-actions"
      ],
      "semanticCommitScope": "ci",
      "semanticCommitType": "chore"
    },
    {
      "matchManagers": [
        "pre-commit"
      ],
      "semanticCommitScope": "ci",
      "semanticCommitType": "chore"
    },
    {
      "matchManagers": [
        "helmv3"
      ],
      "semanticCommitScope": "charts",
      "semanticCommitType": "fix",
      "matchUpdateTypes": [
        "patch",
        "digest"
      ]
    },
    {
      "matchManagers": [
        "helmv3"
      ],
      "semanticCommitScope": "charts",
      "semanticCommitType": "feat",
      "matchUpdateTypes": [
        "major",
        "minor"
      ]
    },
    {
      "matchManagers": [
        "helmv3",
        "github-actions",
        "pre-commit"
      ],
      "matchUpdateTypes": [
        "minor",
        "patch",
        "digest"
      ],
      "addLabels": [
        "automerge"
      ]
    },
    {
      "matchManagers": [
        "terraform"
      ],
      "semanticCommitScope": "tf",
      "semanticCommitType": "feat",
      "automerge": false
    }
  ]
}
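The `helmv3` manager above is pointed at `helm-dependencies.yaml`, which Renovate parses using the Helm `Chart.yaml` dependencies schema. A minimal sketch of an entry it would track (the chart name, repository URL, and version here are illustrative, not taken from the actual file):

```yaml
# helm-dependencies.yaml — hypothetical entry for illustration
dependencies:
  - name: cert-manager
    repository: https://charts.jetstack.io
    version: v1.14.0
```

When a newer chart version is published, Renovate opens a PR bumping `version`, with the commit type/scope (`fix(charts)` or `feat(charts)`) chosen by the `packageRules` above.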


================================================
FILE: .github/workflows/pr-title.yml
================================================
name: 'Validate PR title'

on:
  pull_request_target:
    types:
      - opened
      - edited
      - synchronize

jobs:
  main:
    name: Validate PR title
    runs-on: ubuntu-latest
    steps:
      # Please look up the latest version from
      # https://github.com/amannn/action-semantic-pull-request/releases
      - uses: amannn/action-semantic-pull-request@v6.1.1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          # Configure which types are allowed.
          # Default: https://github.com/commitizen/conventional-commit-types
          types: |
            fix
            feat
            docs
            ci
            chore
          # Configure that a scope must always be provided.
          requireScope: false
          # Configure additional validation for the subject based on a regex.
          # This example ensures the subject starts with an uppercase character.
          # subjectPattern: ^[A-Z].+$
          # If `subjectPattern` is configured, you can use this property to override
          # the default error message that is shown when the pattern doesn't match.
          # The variables `subject` and `title` can be used within the message.
          # subjectPatternError: |
          #   The subject "{subject}" found in the pull request title "{title}"
          #   didn't match the configured pattern. Please ensure that the subject
          #   starts with an uppercase character.
          # For work-in-progress PRs you can typically use draft pull requests
          # from GitHub. However, private repositories on the free plan don't have
          # this option and therefore this action allows you to opt-in to using the
          # special "[WIP]" prefix to indicate this state. This will avoid the
          # validation of the PR title and the pull request checks remain pending.
          # Note that a second check will be reported if this is enabled.
          wip: true
          # When using "Squash and merge" on a PR with only one commit, GitHub
          # will suggest using that commit message instead of the PR title for the
          # merge commit, and it's easy to commit this by mistake. Enable this option
          # to also validate the commit message for one commit PRs.
          validateSingleCommit: false


================================================
FILE: .github/workflows/pre-commit.yml
================================================
name: Pre-Commit

on:
  pull_request:
    branches:
      - main
      - master
  workflow_dispatch:

env:
  TERRAFORM_DOCS_VERSION: v0.21.0
  TFLINT_VERSION: v0.61.0

jobs:
  collectInputs:
    name: Collect workflow inputs
    runs-on: ubuntu-latest
    outputs:
      directories: ${{ steps.dirs.outputs.directories }}
    steps:
      - name: Checkout
        uses: actions/checkout@v6

      - name: Get root directories
        id: dirs
        uses: clowdhaus/terraform-composite-actions/directories@v1.14.0

  preCommitMinVersions:
    name: Min TF pre-commit
    needs: collectInputs
    runs-on: ubuntu-latest
    strategy:
      matrix:
        directory: ${{ fromJson(needs.collectInputs.outputs.directories) }}
    steps:
      - name: Checkout
        uses: actions/checkout@v6

      - name: Terraform min/max versions
        id: minMax
        uses: clowdhaus/terraform-min-max@v3.0.1
        with:
          directory: ${{ matrix.directory }}

      - name: Pre-commit Terraform ${{ steps.minMax.outputs.minVersion }}
        # Run only validate pre-commit check on min version supported
        if: ${{ matrix.directory !=  '.' }}
        uses: clowdhaus/terraform-composite-actions/pre-commit@v1.14.0
        with:
          terraform-version: ${{ steps.minMax.outputs.minVersion }}
          terraform-docs-version: ${{ env.TERRAFORM_DOCS_VERSION }}
          tflint-version: ${{ env.TFLINT_VERSION }}
          args: "terraform_validate --color=always --show-diff-on-failure --files ${{ matrix.directory }}/*"

      - name: Pre-commit Terraform ${{ steps.minMax.outputs.minVersion }}
        # Run only validate pre-commit check on min version supported
        if: ${{ matrix.directory ==  '.' }}
        uses: clowdhaus/terraform-composite-actions/pre-commit@v1.14.0
        with:
          terraform-version: ${{ steps.minMax.outputs.minVersion }}
          terraform-docs-version: ${{ env.TERRAFORM_DOCS_VERSION }}
          tflint-version: ${{ env.TFLINT_VERSION }}
          args: "terraform_validate --color=always --show-diff-on-failure --files $(ls *.tf)"

  preCommitMaxVersion:
    name: Max TF pre-commit
    runs-on: ubuntu-latest
    needs: collectInputs
    steps:
      - name: Checkout
        uses: actions/checkout@v6
        with:
          ref: ${{ github.event.pull_request.head.ref }}
          repository: ${{github.event.pull_request.head.repo.full_name}}

      - name: Terraform min/max versions
        id: minMax
        uses: clowdhaus/terraform-min-max@v3.0.1

      - name: Pre-commit Terraform ${{ steps.minMax.outputs.maxVersion }}
        uses: clowdhaus/terraform-composite-actions/pre-commit@v1.14.0
        with:
          terraform-version: ${{ steps.minMax.outputs.maxVersion }}
          terraform-docs-version: ${{ env.TERRAFORM_DOCS_VERSION }}
          tflint-version: ${{ env.TFLINT_VERSION }}


================================================
FILE: .github/workflows/release.yml
================================================
name: Release

on:
  push:
    branches:
    - release

jobs:
  terraform-release:
    if: github.ref == 'refs/heads/release'
    name: 'terraform:release'
    runs-on: ubuntu-latest
    steps:
    - name: Checkout
      uses: actions/checkout@v6

    - name: Semantic Release
      uses: cycjimmy/semantic-release-action@v3
      with:
        branches: |
          [
            'release'
          ]
      env:
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}


================================================
FILE: .github/workflows/stale-actions.yaml
================================================
name: 'Mark or close stale issues and PRs'
on:
  schedule:
    - cron: '0 0 * * *'

jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/stale@v10
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          # Mark issues and PRs as stale
          days-before-stale: 30
          stale-issue-label: stale
          stale-pr-label: stale
          stale-issue-message: |
            This issue has been automatically marked as stale because it has been open 30 days
            with no activity. Remove stale label or comment or this issue will be closed in 10 days
          stale-pr-message: |
            This PR has been automatically marked as stale because it has been open 30 days
            with no activity. Remove stale label or comment or this PR will be closed in 10 days
          # Issues/PRs are exempt from stale if they have these labels or belong to a milestone
          exempt-issue-labels: bug,wip,on-hold
          exempt-pr-labels: bug,wip,on-hold
          exempt-all-milestones: true
          # Close operations
          # The stale label is removed automatically if there is new activity.
          days-before-close: 10
          delete-branch: true
          close-issue-message: This issue was automatically closed after being stale for 10 days
          close-pr-message: This PR was automatically closed after being stale for 10 days


================================================
FILE: .gitignore
================================================
.terragrunt-cache
.terraform
.terraform.lock.hcl
.idea
.sisyphus


================================================
FILE: .mergify.yml
================================================
pull_request_rules:
  - name: Automatically approve Renovate PRs (patch/minor)
    conditions:
      - author=renovate[bot]
      - label=automerge
    actions:
      review:
        type: APPROVE

  - name: Automatically merge Renovate PRs (patch/minor)
    conditions:
      - author=renovate[bot]
      - base=main
      - label=automerge
      - "#approved-reviews-by>=1"
      - check-success=Max TF pre-commit
      - check-success=Validate PR title
    actions:
      merge:
        method: squash

  - name: Automatic merge on approval
    conditions:
      - base=main
      - "#approved-reviews-by>=1"
    actions:
      merge:
        method: squash

  - name: Automatic merge on approval release
    conditions:
      - base=release
      - "#approved-reviews-by>=1"
    actions:
      merge:
        method: merge


================================================
FILE: .pre-commit-config.yaml
================================================
repos:
- repo: https://github.com/antonbabenko/pre-commit-terraform
  rev: v1.105.0
  hooks:
    - id: terraform_fmt
    - id: terraform_validate
      args:
        - --hook-config=--retry-once-with-cleanup=true
        - --tf-init-args=-upgrade
    - id: terraform_docs
- repo: https://github.com/pre-commit/pre-commit-hooks
  rev: v6.0.0
  hooks:
    - id: check-merge-conflict
    - id: end-of-file-fixer
- repo: https://github.com/renovatebot/pre-commit-hooks
  rev: 43.110.9
  hooks:
    - id: renovate-config-validator
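Contributors would typically run these hooks locally before opening a PR. A sketch, assuming `pre-commit` plus the Terraform toolchain (`terraform`, `terraform-docs`, `tflint`) are already installed:

```sh
# Install pre-commit (any of pip/pipx/brew works) and register the git hook
pip install pre-commit
pre-commit install

# Run all configured hooks (terraform_fmt, terraform_validate, terraform_docs, ...)
pre-commit run --all-files
```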


================================================
FILE: .python-version
================================================
3.x


================================================
FILE: .releaserc.json
================================================
{
  "plugins": [
    "@semantic-release/commit-analyzer",
    "@semantic-release/release-notes-generator",
    "@semantic-release/github"
  ]
}


================================================
FILE: .terraform-docs.yml
================================================
settings:
  lockfile: false


================================================
FILE: CODEOWNERS
================================================
# This is a comment.
# Each line is a file pattern followed by one or more owners.

# These owners will be the default owners for everything in
# the repo. Unless a later match takes precedence,
# @global-owner1 and @global-owner2 will be requested for
# review when someone opens a pull request.
*       @particuleio/team


================================================
FILE: LICENSE
================================================
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.


================================================
FILE: README.md
================================================
# terraform-kubernetes-addons

[![semantic-release](https://img.shields.io/badge/%20%20%F0%9F%93%A6%F0%9F%9A%80-semantic--release-e10079.svg)](https://github.com/semantic-release/semantic-release)
[![terraform-kubernetes-addons](https://github.com/particuleio/terraform-kubernetes-addons/workflows/terraform-kubernetes-addons/badge.svg)](https://github.com/particuleio/terraform-kubernetes-addons/actions?query=workflow%3Aterraform-kubernetes-addons)

## Main components

| Name                                                                                                                          | Description                                                                                      | Generic             | AWS                 | Scaleway            | GCP                 | Azure               |
|------|-------------|:-------:|:---:|:--------:|:---:|:-----:|
| [admiralty](https://admiralty.io/)                                                                                            | A system of Kubernetes controllers that intelligently schedules workloads across clusters        | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  |
| [aws-ebs-csi-driver](https://github.com/kubernetes-sigs/aws-ebs-csi-driver)                                                   | Enable new feature and the use of `gp3` volumes                                                  | N/A                 | :heavy_check_mark:  | N/A                 | N/A                 | N/A                 |
| [aws-efs-csi-driver](https://github.com/kubernetes-sigs/aws-efs-csi-driver)                                                   | Enable EFS Support                                                                               | N/A                 | :heavy_check_mark:  | N/A                 | N/A                 | N/A                 |
| [aws-for-fluent-bit](https://github.com/aws/aws-for-fluent-bit)                                                               | CloudWatch logging with Fluent Bit instead of Fluentd                                            | N/A                 | :heavy_check_mark:  | N/A                 | N/A                 | N/A                 |
| [aws-load-balancer-controller](https://aws.amazon.com/about-aws/whats-new/2020/10/introducing-aws-load-balancer-controller/)  | Use AWS ALB/NLB for ingress and services                                                         | N/A                 | :heavy_check_mark:  | N/A                 | N/A                 | N/A                 |
| [aws-node-termination-handler](https://github.com/aws/aws-node-termination-handler)                                           | Manage spot instance lifecycle                                                                   | N/A                 | :heavy_check_mark:  | N/A                 | N/A                 | N/A                 |
| [aws-calico](https://github.com/aws/eks-charts/tree/master/stable/aws-calico)                                                 | Use Calico for network policies                                                                  | N/A                 | :heavy_check_mark:  | N/A                 | N/A                 | N/A                 |
| [secrets-store-csi-driver-provider-aws](https://github.com/aws/secrets-store-csi-driver-provider-aws) | AWS Secret Store and Parameter store driver for secret store CSI driver | :heavy_check_mark:  | N/A  | N/A  | N/A  | N/A  |
| [cert-manager](https://github.com/jetstack/cert-manager)                                                                      | Automatically generates TLS certificates, supports ACME v2                                       | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  | N/A                 |
| [cluster-autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler)                                 | Scale worker nodes based on workload                                                             | N/A                 | :heavy_check_mark:  | Included            | Included            | Included            |
| [cni-metrics-helper](https://docs.aws.amazon.com/eks/latest/userguide/cni-metrics-helper.html)                                | Provides CloudWatch metrics for the VPC CNI plugin                                               | N/A                 | :heavy_check_mark:  | N/A                 | N/A                 | N/A                 |
| [external-dns](https://github.com/kubernetes-sigs/external-dns)                                                               | Sync ingress and service DNS records in Route53                                                  | :x:                 | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  | :x:                 |
| [flux2](https://github.com/fluxcd/flux2)                                                                                      | Open and extensible continuous delivery solution for Kubernetes. Powered by GitOps Toolkit       | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  |
| [ingress-nginx](https://github.com/kubernetes/ingress-nginx)                                                                  | Processes `Ingress` objects and acts as an HTTP/HTTPS proxy (compatible with cert-manager)       | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  | :x:                 |
| [k8gb](https://www.k8gb.io/)                                                                                                  | A cloud native Kubernetes Global Balancer                                                        | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  |
| [karma](https://github.com/prymitive/karma)                                                                                   | An alertmanager dashboard                                                                        | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  |
| [keda](https://github.com/kedacore/keda)                                                                                      | Kubernetes Event-driven Autoscaling                                                              | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  |
| [kong](https://konghq.com/kong)                                                                                               | API Gateway ingress controller                                                                   | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  | :x:                 | :x:                 |
| [kube-prometheus-stack](https://github.com/prometheus-operator/kube-prometheus)                                               | Monitoring / Alerting / Dashboards                                                               | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  | :x:                 | :x:                 |
| [loki-stack](https://grafana.com/oss/loki/)                                                                                   | Grafana Loki logging stack                                                                       | :heavy_check_mark:  | :heavy_check_mark:  | :construction:      | :x:                 | :x:                 |
| [promtail](https://grafana.com/docs/loki/latest/clients/promtail/)                                                            | Ship logs to Loki from other clusters (e.g. over mTLS)                                           | :construction:      | :heavy_check_mark:  | :construction:      | :x:                 | :x:                 |
| [prometheus-adapter](https://github.com/kubernetes-sigs/prometheus-adapter)                                                   | Prometheus metrics for use with the autoscaling/v2 Horizontal Pod Autoscaler in Kubernetes 1.6+  | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  |
| [prometheus-cloudwatch-exporter](https://github.com/prometheus/cloudwatch_exporter)                                           | An exporter for Amazon CloudWatch, for Prometheus.                                               | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  |
| [prometheus-blackbox-exporter](https://github.com/prometheus/blackbox_exporter)                                               | The blackbox exporter allows blackbox probing of endpoints over HTTP, HTTPS, DNS, TCP and ICMP.  | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  |
| [rabbitmq-cluster-operator](https://github.com/rabbitmq/cluster-operator)                                                     | The RabbitMQ Cluster Operator automates provisioning and management of RabbitMQ clusters.        | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  |
| [metrics-server](https://github.com/kubernetes-sigs/metrics-server)                                                           | Enable the metrics API and horizontal pod autoscaling (HPA)                                      | :heavy_check_mark:  | :heavy_check_mark:  | Included            | Included            | Included            |
| [node-problem-detector](https://github.com/kubernetes/node-problem-detector)                                                  | Forwards node problems to Kubernetes events                                                      | :heavy_check_mark:  | :heavy_check_mark:  | Included            | Included            | Included            |
| [secrets-store-csi-driver](https://github.com/kubernetes-sigs/secrets-store-csi-driver) | Secrets Store CSI driver for Kubernetes secrets - Integrates secrets stores with Kubernetes via a CSI volume. | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  |
| [sealed-secrets](https://github.com/bitnami-labs/sealed-secrets)                                                              | Technology agnostic, store secrets on git                                                        | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:  |
| [thanos](https://thanos.io/)                                                                                                  | Open source, highly available Prometheus setup with long term storage capabilities               | :x:                 | :heavy_check_mark:  | :construction:      | :x:                 | :x:                 |
| [thanos-memcached](https://thanos.io/tip/components/query-frontend.md/#memcached)                                             | Memcached caching for the Thanos Query Frontend                                                  | :x:                 | :heavy_check_mark:  | :construction:      | :x:                 | :x:                 |
| [thanos-storegateway](https://thanos.io/)                                                                                     | Additional storegateway to query multiple object stores                                          | :x:                 | :heavy_check_mark:  | :construction:      | :x:                 | :x:                 |
| [thanos-tls-querier](https://thanos.io/tip/operating/cross-cluster-tls-communication.md/)                                     | Thanos TLS querier for cross cluster collection                                                  | :x:                 | :heavy_check_mark:  | :construction:      | :x:                 | :x:                 |

## Submodules

Submodules provide cloud-provider-specific configuration, such as IAM roles
on AWS. For a vanilla Kubernetes cluster, the generic addons should be used.

Contributions supporting new cloud providers are welcome.

* [AWS](./modules/aws)
* [Scaleway](./modules/scaleway)
* [GCP](./modules/google)
* [Azure](./modules/azure)

## Doc generation

Code formatting and documentation for variables and outputs are generated with
[pre-commit-terraform
hooks](https://github.com/antonbabenko/pre-commit-terraform), which use
[terraform-docs](https://github.com/terraform-docs/terraform-docs).

Follow [these
instructions](https://github.com/antonbabenko/pre-commit-terraform#how-to-install)
to install pre-commit locally.

Then install `terraform-docs` with
`go install github.com/terraform-docs/terraform-docs@latest`
or `brew install terraform-docs`.

## Contributing

Report issues, questions, and feature requests in the
[issues](https://github.com/particuleio/terraform-kubernetes-addons/issues/new)
section.

Full contributing [guidelines are covered
here](https://github.com/particuleio/terraform-kubernetes-addons/blob/main/.github/CONTRIBUTING.md).

<!-- BEGIN_TF_DOCS -->
## Requirements

| Name | Version |
| ---- | ------- |
| <a name="requirement_terraform"></a> [terraform](#requirement\_terraform) | >= 1.5.7 |
| <a name="requirement_flux"></a> [flux](#requirement\_flux) | ~> 1.0 |
| <a name="requirement_github"></a> [github](#requirement\_github) | ~> 6.0 |
| <a name="requirement_helm"></a> [helm](#requirement\_helm) | ~> 3.0 |
| <a name="requirement_http"></a> [http](#requirement\_http) | >= 3 |
| <a name="requirement_kubectl"></a> [kubectl](#requirement\_kubectl) | ~> 2.0 |
| <a name="requirement_kubernetes"></a> [kubernetes](#requirement\_kubernetes) | ~> 2.0, != 2.12 |
| <a name="requirement_tls"></a> [tls](#requirement\_tls) | ~> 4.0 |
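
The constraints above can be reproduced in a consuming root module. A sketch
of the corresponding `terraform` block, with provider source addresses
inferred from the resource documentation links in this README:

```hcl
terraform {
  required_version = ">= 1.5.7"

  required_providers {
    flux       = { source = "fluxcd/flux", version = "~> 1.0" }
    github     = { source = "integrations/github", version = "~> 6.0" }
    helm       = { source = "hashicorp/helm", version = "~> 3.0" }
    http       = { source = "hashicorp/http", version = ">= 3" }
    kubectl    = { source = "alekc/kubectl", version = "~> 2.0" }
    kubernetes = { source = "hashicorp/kubernetes", version = "~> 2.0, != 2.12" }
    tls        = { source = "hashicorp/tls", version = "~> 4.0" }
  }
}
```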

## Providers

| Name | Version |
| ---- | ------- |
| <a name="provider_flux"></a> [flux](#provider\_flux) | ~> 1.0 |
| <a name="provider_github"></a> [github](#provider\_github) | ~> 6.0 |
| <a name="provider_helm"></a> [helm](#provider\_helm) | ~> 3.0 |
| <a name="provider_http"></a> [http](#provider\_http) | >= 3 |
| <a name="provider_kubectl"></a> [kubectl](#provider\_kubectl) | ~> 2.0 |
| <a name="provider_kubernetes"></a> [kubernetes](#provider\_kubernetes) | ~> 2.0, != 2.12 |
| <a name="provider_random"></a> [random](#provider\_random) | n/a |
| <a name="provider_time"></a> [time](#provider\_time) | n/a |
| <a name="provider_tls"></a> [tls](#provider\_tls) | ~> 4.0 |

## Modules

No modules.

## Resources

| Name | Type |
| ---- | ---- |
| [flux_bootstrap_git.flux](https://registry.terraform.io/providers/fluxcd/flux/latest/docs/resources/bootstrap_git) | resource |
| [github_branch_default.main](https://registry.terraform.io/providers/integrations/github/latest/docs/resources/branch_default) | resource |
| [github_repository.main](https://registry.terraform.io/providers/integrations/github/latest/docs/resources/repository) | resource |
| [github_repository_deploy_key.main](https://registry.terraform.io/providers/integrations/github/latest/docs/resources/repository_deploy_key) | resource |
| [helm_release.admiralty](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.cert-manager](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.cert-manager-csi-driver](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.grafana-mcp](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.ingress-nginx](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.k8gb](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.karma](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.keda](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.kong](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.kube-prometheus-stack](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.linkerd-control-plane](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.linkerd-crds](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.linkerd-viz](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.linkerd2-cni](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.loki-stack](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.metrics-server](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.node-problem-detector](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.prometheus-adapter](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.prometheus-blackbox-exporter](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.promtail](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.reloader](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.sealed-secrets](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.secrets-store-csi-driver](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.tigera-operator](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.traefik](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.victoria-metrics-k8s-stack](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [kubectl_manifest.calico_crds](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/resources/manifest) | resource |
| [kubectl_manifest.cert-manager_cluster_issuers](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/resources/manifest) | resource |
| [kubectl_manifest.csi-external-snapshotter](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/resources/manifest) | resource |
| [kubectl_manifest.kong_crds](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/resources/manifest) | resource |
| [kubectl_manifest.linkerd](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/resources/manifest) | resource |
| [kubectl_manifest.linkerd-viz](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/resources/manifest) | resource |
| [kubectl_manifest.prometheus-operator_crds](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/resources/manifest) | resource |
| [kubectl_manifest.tigera-operator_crds](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/resources/manifest) | resource |
| [kubernetes_config_map.loki-stack_grafana_ds](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/config_map) | resource |
| [kubernetes_namespace.admiralty](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) | resource |
| [kubernetes_namespace.cert-manager](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) | resource |
| [kubernetes_namespace.flux2](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) | resource |
| [kubernetes_namespace.grafana-mcp](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) | resource |
| [kubernetes_namespace.ingress-nginx](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) | resource |
| [kubernetes_namespace.k8gb](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) | resource |
| [kubernetes_namespace.karma](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) | resource |
| [kubernetes_namespace.keda](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) | resource |
| [kubernetes_namespace.kong](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) | resource |
| [kubernetes_namespace.kube-prometheus-stack](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) | resource |
| [kubernetes_namespace.linkerd](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) | resource |
| [kubernetes_namespace.linkerd-viz](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) | resource |
| [kubernetes_namespace.linkerd2-cni](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) | resource |
| [kubernetes_namespace.loki-stack](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) | resource |
| [kubernetes_namespace.metrics-server](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) | resource |
| [kubernetes_namespace.node-problem-detector](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) | resource |
| [kubernetes_namespace.prometheus-adapter](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) | resource |
| [kubernetes_namespace.prometheus-blackbox-exporter](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) | resource |
| [kubernetes_namespace.promtail](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) | resource |
| [kubernetes_namespace.reloader](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) | resource |
| [kubernetes_namespace.sealed-secrets](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) | resource |
| [kubernetes_namespace.secrets-store-csi-driver](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) | resource |
| [kubernetes_namespace.tigera-operator](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) | resource |
| [kubernetes_namespace.traefik](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) | resource |
| [kubernetes_namespace.victoria-metrics-k8s-stack](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) | resource |
| [kubernetes_network_policy.admiralty_allow_namespace](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.admiralty_default_deny](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.cert-manager_allow_control_plane](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.cert-manager_allow_monitoring](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.cert-manager_allow_namespace](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.cert-manager_default_deny](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.flux2_allow_monitoring](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.flux2_allow_namespace](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.grafana-mcp_allow_namespace](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.grafana-mcp_default_deny](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.ingress-nginx_allow_control_plane](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.ingress-nginx_allow_ingress](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.ingress-nginx_allow_linkerd_viz](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.ingress-nginx_allow_monitoring](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.ingress-nginx_allow_namespace](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.ingress-nginx_default_deny](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.k8gb_allow_namespace](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.k8gb_default_deny](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.karma_allow_ingress](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.karma_allow_namespace](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.karma_default_deny](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.keda_allow_namespace](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.keda_default_deny](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.kong_allow_ingress](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.kong_allow_monitoring](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.kong_allow_namespace](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.kong_default_deny](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.kube-prometheus-stack_allow_control_plane](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.kube-prometheus-stack_allow_ingress](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.kube-prometheus-stack_allow_namespace](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.kube-prometheus-stack_default_deny](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.linkerd-viz_allow_control_plane](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.linkerd-viz_allow_monitoring](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.linkerd-viz_allow_namespace](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.linkerd-viz_default_deny](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.linkerd2-cni_allow_namespace](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.linkerd2-cni_default_deny](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.loki-stack_allow_ingress](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.loki-stack_allow_namespace](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.loki-stack_default_deny](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.metrics-server_allow_control_plane](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.metrics-server_allow_namespace](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.metrics-server_default_deny](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.npd_allow_namespace](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.npd_default_deny](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.prometheus-adapter_allow_namespace](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.prometheus-adapter_default_deny](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.prometheus-blackbox-exporter_allow_namespace](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.prometheus-blackbox-exporter_default_deny](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.promtail_allow_ingress](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.promtail_allow_namespace](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.promtail_default_deny](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.reloader_allow_namespace](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.reloader_default_deny](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.sealed-secrets_allow_namespace](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.sealed-secrets_default_deny](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.secrets-store-csi-driver_allow_namespace](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.secrets-store-csi-driver_default_deny](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.tigera-operator_allow_namespace](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.tigera-operator_default_deny](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.traefik_allow_ingress](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.traefik_allow_monitoring](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.traefik_allow_namespace](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.traefik_default_deny](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.victoria-metrics-k8s-stack_allow_control_plane](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.victoria-metrics-k8s-stack_allow_ingress](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.victoria-metrics-k8s-stack_allow_namespace](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_network_policy.victoria-metrics-k8s-stack_default_deny](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/network_policy) | resource |
| [kubernetes_priority_class.kubernetes_addons](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/priority_class) | resource |
| [kubernetes_priority_class.kubernetes_addons_ds](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/priority_class) | resource |
| [kubernetes_secret.linkerd_trust_anchor](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/secret) | resource |
| [kubernetes_secret.loki-stack-ca](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/secret) | resource |
| [kubernetes_secret.promtail-tls](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/secret) | resource |
| [kubernetes_secret.webhook_issuer_tls](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/secret) | resource |
| [random_string.grafana_password](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/string) | resource |
| [time_sleep.cert-manager_sleep](https://registry.terraform.io/providers/hashicorp/time/latest/docs/resources/sleep) | resource |
| [tls_cert_request.promtail-csr](https://registry.terraform.io/providers/hashicorp/tls/latest/docs/resources/cert_request) | resource |
| [tls_locally_signed_cert.promtail-cert](https://registry.terraform.io/providers/hashicorp/tls/latest/docs/resources/locally_signed_cert) | resource |
| [tls_private_key.identity](https://registry.terraform.io/providers/hashicorp/tls/latest/docs/resources/private_key) | resource |
| [tls_private_key.linkerd_trust_anchor](https://registry.terraform.io/providers/hashicorp/tls/latest/docs/resources/private_key) | resource |
| [tls_private_key.loki-stack-ca-key](https://registry.terraform.io/providers/hashicorp/tls/latest/docs/resources/private_key) | resource |
| [tls_private_key.promtail-key](https://registry.terraform.io/providers/hashicorp/tls/latest/docs/resources/private_key) | resource |
| [tls_private_key.webhook_issuer_tls](https://registry.terraform.io/providers/hashicorp/tls/latest/docs/resources/private_key) | resource |
| [tls_self_signed_cert.linkerd_trust_anchor](https://registry.terraform.io/providers/hashicorp/tls/latest/docs/resources/self_signed_cert) | resource |
| [tls_self_signed_cert.loki-stack-ca-cert](https://registry.terraform.io/providers/hashicorp/tls/latest/docs/resources/self_signed_cert) | resource |
| [tls_self_signed_cert.webhook_issuer_tls](https://registry.terraform.io/providers/hashicorp/tls/latest/docs/resources/self_signed_cert) | resource |
| [github_repository.main](https://registry.terraform.io/providers/integrations/github/latest/docs/data-sources/repository) | data source |
| [http_http.calico_crds](https://registry.terraform.io/providers/hashicorp/http/latest/docs/data-sources/http) | data source |
| [http_http.csi-external-snapshotter](https://registry.terraform.io/providers/hashicorp/http/latest/docs/data-sources/http) | data source |
| [http_http.kong_crds](https://registry.terraform.io/providers/hashicorp/http/latest/docs/data-sources/http) | data source |
| [http_http.prometheus-operator_crds](https://registry.terraform.io/providers/hashicorp/http/latest/docs/data-sources/http) | data source |
| [http_http.prometheus-operator_version](https://registry.terraform.io/providers/hashicorp/http/latest/docs/data-sources/http) | data source |
| [http_http.tigera-operator_crds](https://registry.terraform.io/providers/hashicorp/http/latest/docs/data-sources/http) | data source |
| [kubectl_file_documents.calico_crds](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/data-sources/file_documents) | data source |
| [kubectl_file_documents.csi-external-snapshotter](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/data-sources/file_documents) | data source |
| [kubectl_file_documents.kong_crds](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/data-sources/file_documents) | data source |
| [kubectl_file_documents.tigera-operator_crds](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/data-sources/file_documents) | data source |
| [kubectl_path_documents.cert-manager_cluster_issuers](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/data-sources/path_documents) | data source |

## Inputs

| Name | Description | Type | Default | Required |
| ---- | ----------- | ---- | ------- | :------: |
| <a name="input_admiralty"></a> [admiralty](#input\_admiralty) | Customize admiralty chart, see `admiralty.tf` for supported values | `any` | `{}` | no |
| <a name="input_cert-manager"></a> [cert-manager](#input\_cert-manager) | Customize cert-manager chart, see `cert-manager.tf` for supported values | `any` | `{}` | no |
| <a name="input_cert-manager-csi-driver"></a> [cert-manager-csi-driver](#input\_cert-manager-csi-driver) | Customize cert-manager-csi-driver chart, see `cert-manager-csi-driver.tf` for supported values | `any` | `{}` | no |
| <a name="input_cluster-autoscaler"></a> [cluster-autoscaler](#input\_cluster-autoscaler) | Customize cluster-autoscaler chart, see `cluster-autoscaler.tf` for supported values | `any` | `{}` | no |
| <a name="input_cluster-name"></a> [cluster-name](#input\_cluster-name) | Name of the Kubernetes cluster | `string` | `"sample-cluster"` | no |
| <a name="input_csi-external-snapshotter"></a> [csi-external-snapshotter](#input\_csi-external-snapshotter) | Customize csi-external-snapshotter, see `csi-external-snapshotter.tf` for supported values | `any` | `{}` | no |
| <a name="input_external-dns"></a> [external-dns](#input\_external-dns) | Map of maps for external-dns configuration, see `external-dns.tf` for supported values | `any` | `{}` | no |
| <a name="input_flux2"></a> [flux2](#input\_flux2) | Customize Flux chart, see `flux2.tf` for supported values | `any` | `{}` | no |
| <a name="input_grafana-mcp"></a> [grafana-mcp](#input\_grafana-mcp) | Customize grafana-mcp chart, see `grafana-mcp.tf` for supported values | `any` | `{}` | no |
| <a name="input_helm_defaults"></a> [helm\_defaults](#input\_helm\_defaults) | Customize default Helm behavior | `any` | `{}` | no |
| <a name="input_ingress-nginx"></a> [ingress-nginx](#input\_ingress-nginx) | Customize ingress-nginx chart, see `ingress-nginx.tf` for supported values | `any` | `{}` | no |
| <a name="input_ip-masq-agent"></a> [ip-masq-agent](#input\_ip-masq-agent) | Configure ip masq agent chart, see `ip-masq-agent.tf` for supported values. This addon works only on GCP. | `any` | `{}` | no |
| <a name="input_k8gb"></a> [k8gb](#input\_k8gb) | Customize k8gb chart, see `k8gb.tf` for supported values | `any` | `{}` | no |
| <a name="input_karma"></a> [karma](#input\_karma) | Customize karma chart, see `karma.tf` for supported values | `any` | `{}` | no |
| <a name="input_keda"></a> [keda](#input\_keda) | Customize keda chart, see `keda.tf` for supported values | `any` | `{}` | no |
| <a name="input_kong"></a> [kong](#input\_kong) | Customize kong-ingress chart, see `kong.tf` for supported values | `any` | `{}` | no |
| <a name="input_kube-prometheus-stack"></a> [kube-prometheus-stack](#input\_kube-prometheus-stack) | Customize kube-prometheus-stack chart, see `kube-prometheus.tf` for supported values | `any` | `{}` | no |
| <a name="input_labels_prefix"></a> [labels\_prefix](#input\_labels\_prefix) | Custom label prefix used for network policy namespace matching | `string` | `"particule.io"` | no |
| <a name="input_linkerd"></a> [linkerd](#input\_linkerd) | Customize linkerd chart, see `linkerd.tf` for supported values | `any` | `{}` | no |
| <a name="input_linkerd-viz"></a> [linkerd-viz](#input\_linkerd-viz) | Customize linkerd-viz chart, see `linkerd-viz.tf` for supported values | `any` | `{}` | no |
| <a name="input_linkerd2"></a> [linkerd2](#input\_linkerd2) | Customize linkerd2 chart, see `linkerd2.tf` for supported values | `any` | `{}` | no |
| <a name="input_linkerd2-cni"></a> [linkerd2-cni](#input\_linkerd2-cni) | Customize linkerd2-cni chart, see `linkerd2-cni.tf` for supported values | `any` | `{}` | no |
| <a name="input_loki-stack"></a> [loki-stack](#input\_loki-stack) | Customize loki-stack chart, see `loki-stack.tf` for supported values | `any` | `{}` | no |
| <a name="input_metrics-server"></a> [metrics-server](#input\_metrics-server) | Customize metrics-server chart, see `metrics-server.tf` for supported values | `any` | `{}` | no |
| <a name="input_npd"></a> [npd](#input\_npd) | Customize node-problem-detector chart, see `npd.tf` for supported values | `any` | `{}` | no |
| <a name="input_priority-class"></a> [priority-class](#input\_priority-class) | Customize a priority class for addons | `any` | `{}` | no |
| <a name="input_priority-class-ds"></a> [priority-class-ds](#input\_priority-class-ds) | Customize a priority class for addons daemonsets | `any` | `{}` | no |
| <a name="input_prometheus-adapter"></a> [prometheus-adapter](#input\_prometheus-adapter) | Customize prometheus-adapter chart, see `prometheus-adapter.tf` for supported values | `any` | `{}` | no |
| <a name="input_prometheus-blackbox-exporter"></a> [prometheus-blackbox-exporter](#input\_prometheus-blackbox-exporter) | Customize prometheus-blackbox-exporter chart, see `prometheus-blackbox-exporter.tf` for supported values | `any` | `{}` | no |
| <a name="input_promtail"></a> [promtail](#input\_promtail) | Customize promtail chart, see `loki-stack.tf` for supported values | `any` | `{}` | no |
| <a name="input_reloader"></a> [reloader](#input\_reloader) | Customize reloader chart, see `reloader.tf` for supported values | `any` | `{}` | no |
| <a name="input_sealed-secrets"></a> [sealed-secrets](#input\_sealed-secrets) | Customize sealed-secrets chart, see `sealed-secrets.tf` for supported values | `any` | `{}` | no |
| <a name="input_secrets-store-csi-driver"></a> [secrets-store-csi-driver](#input\_secrets-store-csi-driver) | Customize secrets-store-csi-driver chart, see `secrets-store-csi-driver.tf` for supported values | `any` | `{}` | no |
| <a name="input_thanos"></a> [thanos](#input\_thanos) | Customize thanos chart, see `thanos.tf` for supported values | `any` | `{}` | no |
| <a name="input_thanos-memcached"></a> [thanos-memcached](#input\_thanos-memcached) | Customize thanos-memcached chart, see `thanos.tf` for supported values | `any` | `{}` | no |
| <a name="input_thanos-receive"></a> [thanos-receive](#input\_thanos-receive) | Customize thanos chart, see `thanos-receive.tf` for supported values | `any` | `{}` | no |
| <a name="input_thanos-storegateway"></a> [thanos-storegateway](#input\_thanos-storegateway) | Customize thanos-storegateway chart, see `thanos.tf` for supported values | `any` | `{}` | no |
| <a name="input_thanos-tls-querier"></a> [thanos-tls-querier](#input\_thanos-tls-querier) | Customize thanos-tls-querier chart, see `thanos.tf` for supported values | `any` | `{}` | no |
| <a name="input_thanos-tls-querier-ca-cert"></a> [thanos-tls-querier-ca-cert](#input\_thanos-tls-querier-ca-cert) | TLS CA certificate, used to generate the client mTLS materials | `string` | `""` | no |
| <a name="input_thanos-tls-querier-ca-private-key"></a> [thanos-tls-querier-ca-private-key](#input\_thanos-tls-querier-ca-private-key) | TLS CA private key, used to generate the client mTLS materials | `string` | `""` | no |
| <a name="input_tigera-operator"></a> [tigera-operator](#input\_tigera-operator) | Customize tigera-operator chart, see `tigera-operator.tf` for supported values | `any` | `{}` | no |
| <a name="input_traefik"></a> [traefik](#input\_traefik) | Customize traefik chart, see `traefik.tf` for supported values | `any` | `{}` | no |
| <a name="input_velero"></a> [velero](#input\_velero) | Customize velero chart, see `velero.tf` for supported values | `any` | `{}` | no |
| <a name="input_victoria-metrics-k8s-stack"></a> [victoria-metrics-k8s-stack](#input\_victoria-metrics-k8s-stack) | Customize Victoria Metrics chart, see `victoria-metrics-k8s-stack.tf` for supported values | `any` | `{}` | no |

## Outputs

| Name | Description |
| ---- | ----------- |
| <a name="output_grafana_password"></a> [grafana\_password](#output\_grafana\_password) | n/a |
| <a name="output_loki-stack-ca"></a> [loki-stack-ca](#output\_loki-stack-ca) | n/a |
| <a name="output_loki-stack-ca-key"></a> [loki-stack-ca-key](#output\_loki-stack-ca-key) | n/a |
| <a name="output_promtail-cert"></a> [promtail-cert](#output\_promtail-cert) | n/a |
| <a name="output_promtail-key"></a> [promtail-key](#output\_promtail-key) | n/a |
<!-- END_TF_DOCS -->
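
A minimal usage sketch of the inputs documented above. The registry source string and the chosen input values are assumptions for illustration, not part of the module's documentation; any of the `any`-typed inputs accepts a map that is merged over the defaults defined in the corresponding `.tf` file:

```hcl
# Hypothetical root-module configuration; adapt source/version to your setup.
module "addons" {
  source       = "particuleio/addons/kubernetes" # assumed registry path
  cluster-name = "my-cluster"

  # Each addon is toggled and tuned via its map input, merged over
  # the defaults in the matching .tf file (e.g. cert-manager.tf).
  cert-manager = {
    enabled    = true
    acme_email = "ops@example.com"
  }

  ingress-nginx = {
    enabled = true
  }
}
```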


================================================
FILE: admiralty.tf
================================================
locals {
  admiralty = merge(
    local.helm_defaults,
    {
      name                   = local.helm_dependencies[index(local.helm_dependencies.*.name, "admiralty")].name
      chart                  = local.helm_dependencies[index(local.helm_dependencies.*.name, "admiralty")].name
      repository             = local.helm_dependencies[index(local.helm_dependencies.*.name, "admiralty")].repository
      chart_version          = local.helm_dependencies[index(local.helm_dependencies.*.name, "admiralty")].version
      namespace              = "admiralty"
      enabled                = false
      create_ns              = true
      default_network_policy = true
    },
    var.admiralty
  )

  values_admiralty = <<-VALUES
    VALUES
}

resource "kubernetes_namespace" "admiralty" {
  count = local.admiralty["enabled"] && local.admiralty["create_ns"] ? 1 : 0

  metadata {
    labels = {
      name = local.admiralty["namespace"]
    }

    name = local.admiralty["namespace"]
  }
}

resource "helm_release" "admiralty" {
  count                 = local.admiralty["enabled"] ? 1 : 0
  repository            = local.admiralty["repository"]
  name                  = local.admiralty["name"]
  chart                 = local.admiralty["chart"]
  version               = local.admiralty["chart_version"]
  timeout               = local.admiralty["timeout"]
  force_update          = local.admiralty["force_update"]
  recreate_pods         = local.admiralty["recreate_pods"]
  wait                  = local.admiralty["wait"]
  atomic                = local.admiralty["atomic"]
  cleanup_on_fail       = local.admiralty["cleanup_on_fail"]
  dependency_update     = local.admiralty["dependency_update"]
  disable_crd_hooks     = local.admiralty["disable_crd_hooks"]
  disable_webhooks      = local.admiralty["disable_webhooks"]
  render_subchart_notes = local.admiralty["render_subchart_notes"]
  replace               = local.admiralty["replace"]
  reset_values          = local.admiralty["reset_values"]
  reuse_values          = local.admiralty["reuse_values"]
  skip_crds             = local.admiralty["skip_crds"]
  verify                = local.admiralty["verify"]
  values = [
    local.values_admiralty,
    local.admiralty["extra_values"]
  ]
  namespace = local.admiralty["create_ns"] ? kubernetes_namespace.admiralty.*.metadata.0.name[count.index] : local.admiralty["namespace"]
}

resource "kubernetes_network_policy" "admiralty_default_deny" {
  count = local.admiralty["enabled"] && local.admiralty["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${local.admiralty["namespace"]}-${local.admiralty["name"]}-default-deny"
    namespace = local.admiralty["namespace"]
  }

  spec {
    pod_selector {
    }
    policy_types = ["Ingress"]
  }
}

resource "kubernetes_network_policy" "admiralty_allow_namespace" {
  count = local.admiralty["enabled"] && local.admiralty["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${local.admiralty["namespace"]}-${local.admiralty["name"]}-default-namespace"
    namespace = local.admiralty["namespace"]
  }

  spec {
    pod_selector {
    }

    ingress {
      from {
        namespace_selector {
          match_labels = {
            name = local.admiralty["namespace"]
          }
        }
      }
    }

    policy_types = ["Ingress"]
  }
}


================================================
FILE: cert-manager-csi-driver.tf
================================================
locals {

  cert-manager-csi-driver = merge(
    local.helm_defaults,
    {
      name                   = local.helm_dependencies[index(local.helm_dependencies.*.name, "cert-manager-csi-driver")].name
      chart                  = local.helm_dependencies[index(local.helm_dependencies.*.name, "cert-manager-csi-driver")].name
      repository             = local.helm_dependencies[index(local.helm_dependencies.*.name, "cert-manager-csi-driver")].repository
      chart_version          = local.helm_dependencies[index(local.helm_dependencies.*.name, "cert-manager-csi-driver")].version
      enabled                = local.cert-manager.csi_driver
      default_network_policy = true
      namespace              = local.cert-manager.namespace
    },
    var.cert-manager-csi-driver
  )

  values_cert-manager-csi-driver = <<VALUES
tolerations:
  - operator: "Exists"
VALUES

}

resource "helm_release" "cert-manager-csi-driver" {
  count                 = local.cert-manager-csi-driver["enabled"] ? 1 : 0
  repository            = local.cert-manager-csi-driver["repository"]
  name                  = local.cert-manager-csi-driver["name"]
  chart                 = local.cert-manager-csi-driver["chart"]
  version               = local.cert-manager-csi-driver["chart_version"]
  timeout               = local.cert-manager-csi-driver["timeout"]
  force_update          = local.cert-manager-csi-driver["force_update"]
  recreate_pods         = local.cert-manager-csi-driver["recreate_pods"]
  wait                  = local.cert-manager-csi-driver["wait"]
  atomic                = local.cert-manager-csi-driver["atomic"]
  cleanup_on_fail       = local.cert-manager-csi-driver["cleanup_on_fail"]
  dependency_update     = local.cert-manager-csi-driver["dependency_update"]
  disable_crd_hooks     = local.cert-manager-csi-driver["disable_crd_hooks"]
  disable_webhooks      = local.cert-manager-csi-driver["disable_webhooks"]
  render_subchart_notes = local.cert-manager-csi-driver["render_subchart_notes"]
  replace               = local.cert-manager-csi-driver["replace"]
  reset_values          = local.cert-manager-csi-driver["reset_values"]
  reuse_values          = local.cert-manager-csi-driver["reuse_values"]
  skip_crds             = local.cert-manager-csi-driver["skip_crds"]
  verify                = local.cert-manager-csi-driver["verify"]
  values = [
    local.values_cert-manager-csi-driver,
    local.cert-manager-csi-driver["extra_values"]
  ]
  namespace = local.cert-manager-csi-driver.namespace

  depends_on = [
    helm_release.cert-manager
  ]
}


================================================
FILE: cert-manager.tf
================================================
locals {

  cert-manager = merge(
    local.helm_defaults,
    {
      name                      = local.helm_dependencies[index(local.helm_dependencies.*.name, "cert-manager")].name
      chart                     = local.helm_dependencies[index(local.helm_dependencies.*.name, "cert-manager")].name
      repository                = local.helm_dependencies[index(local.helm_dependencies.*.name, "cert-manager")].repository
      chart_version             = local.helm_dependencies[index(local.helm_dependencies.*.name, "cert-manager")].version
      namespace                 = "cert-manager"
      service_account_name      = "cert-manager"
      enabled                   = false
      default_network_policy    = true
      acme_email                = "contact@acme.com"
      acme_http01_enabled       = false
      acme_http01_ingress_class = ""
      allowed_cidrs             = ["0.0.0.0/0"]
      csi_driver                = false
    },
    var.cert-manager
  )

  values_cert-manager = <<VALUES
global:
  priorityClassName: ${local.priority-class["create"] ? kubernetes_priority_class.kubernetes_addons[0].metadata[0].name : ""}
serviceAccount:
  name: ${local.cert-manager["service_account_name"]}
prometheus:
  servicemonitor:
    enabled: ${local.kube-prometheus-stack["enabled"] || local.victoria-metrics-k8s-stack["enabled"]}
securityContext:
  fsGroup: 1001
crds:
  enabled: true
VALUES

}

resource "kubernetes_namespace" "cert-manager" {
  count = local.cert-manager["enabled"] ? 1 : 0

  metadata {
    annotations = {
      "certmanager.k8s.io/disable-validation" = "true"
    }

    labels = {
      name = local.cert-manager["namespace"]
    }

    name = local.cert-manager["namespace"]
  }
}

resource "helm_release" "cert-manager" {
  count                 = local.cert-manager["enabled"] ? 1 : 0
  repository            = local.cert-manager["repository"]
  name                  = local.cert-manager["name"]
  chart                 = local.cert-manager["chart"]
  version               = local.cert-manager["chart_version"]
  timeout               = local.cert-manager["timeout"]
  force_update          = local.cert-manager["force_update"]
  recreate_pods         = local.cert-manager["recreate_pods"]
  wait                  = local.cert-manager["wait"]
  atomic                = local.cert-manager["atomic"]
  cleanup_on_fail       = local.cert-manager["cleanup_on_fail"]
  dependency_update     = local.cert-manager["dependency_update"]
  disable_crd_hooks     = local.cert-manager["disable_crd_hooks"]
  disable_webhooks      = local.cert-manager["disable_webhooks"]
  render_subchart_notes = local.cert-manager["render_subchart_notes"]
  replace               = local.cert-manager["replace"]
  reset_values          = local.cert-manager["reset_values"]
  reuse_values          = local.cert-manager["reuse_values"]
  skip_crds             = local.cert-manager["skip_crds"]
  verify                = local.cert-manager["verify"]
  values = [
    local.values_cert-manager,
    local.cert-manager["extra_values"]
  ]
  namespace = kubernetes_namespace.cert-manager.*.metadata.0.name[count.index]

  depends_on = [
    kubectl_manifest.prometheus-operator_crds
  ]
}

data "kubectl_path_documents" "cert-manager_cluster_issuers" {
  pattern = "${path.module}/templates/cert-manager-cluster-issuers.yaml.tpl"
  vars = {
    acme_email                = local.cert-manager["acme_email"]
    acme_http01_enabled       = local.cert-manager["acme_http01_enabled"]
    acme_http01_ingress_class = local.cert-manager["acme_http01_ingress_class"]
  }
}

resource "time_sleep" "cert-manager_sleep" {
  count           = local.cert-manager["enabled"] && local.cert-manager["acme_http01_enabled"] ? length(data.kubectl_path_documents.cert-manager_cluster_issuers.documents) : 0
  depends_on      = [helm_release.cert-manager]
  create_duration = "120s"
}

resource "kubectl_manifest" "cert-manager_cluster_issuers" {
  count     = local.cert-manager["enabled"] && local.cert-manager["acme_http01_enabled"] ? length(data.kubectl_path_documents.cert-manager_cluster_issuers.documents) : 0
  yaml_body = element(data.kubectl_path_documents.cert-manager_cluster_issuers.documents, count.index)
  depends_on = [
    helm_release.cert-manager,
    kubernetes_namespace.cert-manager,
    time_sleep.cert-manager_sleep
  ]
}

resource "kubernetes_network_policy" "cert-manager_default_deny" {
  count = local.cert-manager["enabled"] && local.cert-manager["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${kubernetes_namespace.cert-manager.*.metadata.0.name[count.index]}-default-deny"
    namespace = kubernetes_namespace.cert-manager.*.metadata.0.name[count.index]
  }

  spec {
    pod_selector {
    }
    policy_types = ["Ingress"]
  }
}

resource "kubernetes_network_policy" "cert-manager_allow_namespace" {
  count = local.cert-manager["enabled"] && local.cert-manager["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${kubernetes_namespace.cert-manager.*.metadata.0.name[count.index]}-allow-namespace"
    namespace = kubernetes_namespace.cert-manager.*.metadata.0.name[count.index]
  }

  spec {
    pod_selector {
    }

    ingress {
      from {
        namespace_selector {
          match_labels = {
            name = kubernetes_namespace.cert-manager.*.metadata.0.name[count.index]
          }
        }
      }
    }

    policy_types = ["Ingress"]
  }
}

resource "kubernetes_network_policy" "cert-manager_allow_monitoring" {
  count = local.cert-manager["enabled"] && local.cert-manager["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${kubernetes_namespace.cert-manager.*.metadata.0.name[count.index]}-allow-monitoring"
    namespace = kubernetes_namespace.cert-manager.*.metadata.0.name[count.index]
  }

  spec {
    pod_selector {
    }

    ingress {
      ports {
        port     = "9402"
        protocol = "TCP"
      }

      from {
        namespace_selector {
          match_labels = {
            "${local.labels_prefix}/component" = "monitoring"
          }
        }
      }
    }

    policy_types = ["Ingress"]
  }
}

resource "kubernetes_network_policy" "cert-manager_allow_control_plane" {
  count = local.cert-manager["enabled"] && local.cert-manager["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${kubernetes_namespace.cert-manager.*.metadata.0.name[count.index]}-allow-control-plane"
    namespace = kubernetes_namespace.cert-manager.*.metadata.0.name[count.index]
  }

  spec {
    pod_selector {
      match_expressions {
        key      = "app.kubernetes.io/name"
        operator = "In"
        values   = ["webhook"]
      }
    }

    ingress {
      ports {
        port     = "10250"
        protocol = "TCP"
      }

      dynamic "from" {
        for_each = local.cert-manager["allowed_cidrs"]
        content {
          ip_block {
            cidr = from.value
          }
        }
      }
    }

    policy_types = ["Ingress"]
  }
}


================================================
FILE: csi-external-snapshotter.tf
================================================
locals {

  csi-external-snapshotter = merge(
    {
      enabled = false
      version = "v8.1.0"
    },
    var.csi-external-snapshotter
  )

  csi-external-snapshotter_yaml_files = [
    "https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/${local.csi-external-snapshotter.version}/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
    "https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/${local.csi-external-snapshotter.version}/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml",
    "https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/${local.csi-external-snapshotter.version}/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml",
    "https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/${local.csi-external-snapshotter.version}/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml",
    "https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/${local.csi-external-snapshotter.version}/deploy/kubernetes/snapshot-controller/rbac-snapshot-controller.yaml"
  ]

  csi-external-snapshotter_apply = local.csi-external-snapshotter["enabled"] ? [for v in data.kubectl_file_documents.csi-external-snapshotter[0].documents : {
    data : yamldecode(v)
    content : v
    }
  ] : null

}

data "http" "csi-external-snapshotter" {
  for_each = local.csi-external-snapshotter.enabled ? toset(local.csi-external-snapshotter_yaml_files) : []
  url      = each.key
}

data "kubectl_file_documents" "csi-external-snapshotter" {
  count   = local.csi-external-snapshotter.enabled ? 1 : 0
  content = join("\n---\n", [for k, v in data.http.csi-external-snapshotter : v.response_body])
}

resource "kubectl_manifest" "csi-external-snapshotter" {
  for_each  = local.csi-external-snapshotter.enabled ? { for v in local.csi-external-snapshotter_apply : lower(join("/", compact([v.data.apiVersion, v.data.kind, lookup(v.data.metadata, "namespace", ""), v.data.metadata.name]))) => v.content } : {}
  yaml_body = each.value
}


================================================
FILE: flux2.tf
================================================
locals {

  # GITHUB_TOKEN must be set for the GitHub provider to work.
  # GITHUB_ORGANIZATION should be set when deploying to an organization
  # other than your GitHub user account.

  flux2 = merge(
    {
      enabled                  = false
      create_ns                = true
      namespace                = "flux-system"
      path                     = "gitops/clusters/${var.cluster-name}"
      version                  = "v2.6.1"
      create_github_repository = false
      repository               = "gitops"
      repository_visibility    = "public"
      branch                   = "main"
      components_extra         = ["image-reflector-controller", "image-automation-controller"]
      read_only                = false
      default_network_policy   = true
    },
    var.flux2
  )
}
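
# A minimal example override of `var.flux2` (illustrative values only; the
# GitHub provider still needs GITHUB_TOKEN exported in the environment, and
# GITHUB_ORGANIZATION when targeting an organization):
#
# flux2 = {
#   enabled                  = true
#   create_github_repository = true
#   repository               = "gitops"
#   repository_visibility    = "private"
#   branch                   = "main"
# }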

resource "kubernetes_namespace" "flux2" {
  count = local.flux2["enabled"] && local.flux2["create_ns"] ? 1 : 0

  metadata {
    labels = {
      name = local.flux2["namespace"]
    }

    name = local.flux2["namespace"]
  }
  lifecycle {
    ignore_changes = [
      metadata[0].annotations,
      metadata[0].labels,
    ]
  }
}

resource "tls_private_key" "identity" {
  count       = local.flux2["enabled"] ? 1 : 0
  algorithm   = "ECDSA"
  ecdsa_curve = "P521"
}

data "github_repository" "main" {
  count = local.flux2["enabled"] && !local.flux2["create_github_repository"] ? 1 : 0
  name  = local.flux2["repository"]
}

resource "github_repository" "main" {
  count      = local.flux2["enabled"] && local.flux2["create_github_repository"] ? 1 : 0
  name       = local.flux2["repository"]
  visibility = local.flux2["repository_visibility"]
  auto_init  = true
}

resource "github_branch_default" "main" {
  count      = local.flux2["enabled"] && local.flux2["create_github_repository"] ? 1 : 0
  repository = local.flux2["create_github_repository"] ? github_repository.main[0].name : data.github_repository.main[0].name
  branch     = local.flux2["branch"]
}

resource "github_repository_deploy_key" "main" {
  count      = local.flux2["enabled"] ? 1 : 0
  title      = "flux-${local.flux2["create_github_repository"] ? github_repository.main[0].name : local.flux2["repository"]}-${local.flux2["branch"]}"
  repository = local.flux2["create_github_repository"] ? github_repository.main[0].name : data.github_repository.main[0].name
  key        = tls_private_key.identity[0].public_key_openssh
  read_only  = local.flux2["read_only"]
}

resource "flux_bootstrap_git" "flux" {
  count = local.flux2["enabled"] ? 1 : 0

  depends_on = [
    github_repository_deploy_key.main,
    kubernetes_namespace.flux2
  ]

  path                    = local.flux2["path"]
  version                 = local.flux2["version"]
  namespace               = local.flux2["namespace"]
  cluster_domain          = try(local.flux2["cluster_domain"], null)
  components              = try(local.flux2["components"], null)
  components_extra        = try(local.flux2["components_extra"], null)
  disable_secret_creation = try(local.flux2["disable_secret_creation"], null)
  image_pull_secret       = try(local.flux2["image_pull_secrets"], null)
  interval                = try(local.flux2["interval"], null)
  kustomization_override  = try(local.flux2["kustomization_override"], null)
  log_level               = try(local.flux2["log_level"], null)
  network_policy          = try(local.flux2["network_policy"], null)
  recurse_submodules      = try(local.flux2["recurse_submodules"], null)
  registry                = try(local.flux2["registry"], null)
  secret_name             = try(local.flux2["secret_name"], null)
  toleration_keys         = try(local.flux2["toleration_keys"], null)
  watch_all_namespaces    = try(local.flux2["watch_all_namespaces"], null)

}

resource "kubernetes_network_policy" "flux2_allow_monitoring" {
  count = local.flux2["enabled"] && local.flux2["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${local.flux2["create_ns"] ? kubernetes_namespace.flux2.*.metadata.0.name[count.index] : local.flux2["namespace"]}-allow-monitoring"
    namespace = local.flux2["create_ns"] ? kubernetes_namespace.flux2.*.metadata.0.name[count.index] : local.flux2["namespace"]
  }

  spec {
    pod_selector {
    }

    ingress {
      ports {
        port     = "8080"
        protocol = "TCP"
      }

      from {
        namespace_selector {
          match_labels = {
            "${local.labels_prefix}/component" = "monitoring"
          }
        }
      }
    }

    policy_types = ["Ingress"]
  }
}

resource "kubernetes_network_policy" "flux2_allow_namespace" {
  count = local.flux2["enabled"] && local.flux2["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${local.flux2["create_ns"] ? kubernetes_namespace.flux2.*.metadata.0.name[count.index] : local.flux2["namespace"]}-allow-namespace"
    namespace = local.flux2["create_ns"] ? kubernetes_namespace.flux2.*.metadata.0.name[count.index] : local.flux2["namespace"]
  }

  spec {
    pod_selector {
    }

    ingress {
      from {
        namespace_selector {
          match_labels = {
            name = local.flux2["create_ns"] ? kubernetes_namespace.flux2.*.metadata.0.name[count.index] : local.flux2["namespace"]
          }
        }
      }
    }

    policy_types = ["Ingress"]
  }
}


================================================
FILE: grafana-mcp.tf
================================================
locals {
  grafana-mcp = merge(
    local.helm_defaults,
    {
      name                   = local.helm_dependencies[index(local.helm_dependencies.*.name, "grafana-mcp")].name
      chart                  = local.helm_dependencies[index(local.helm_dependencies.*.name, "grafana-mcp")].name
      repository             = local.helm_dependencies[index(local.helm_dependencies.*.name, "grafana-mcp")].repository
      chart_version          = local.helm_dependencies[index(local.helm_dependencies.*.name, "grafana-mcp")].version
      namespace              = "telemetry"
      create_ns              = false
      enabled                = false
      default_network_policy = true
    },
    var.grafana-mcp
  )

  values_grafana-mcp = <<VALUES
VALUES
}

resource "kubernetes_namespace" "grafana-mcp" {
  count = local.grafana-mcp["enabled"] && local.grafana-mcp["create_ns"] ? 1 : 0

  metadata {
    labels = {
      name = local.grafana-mcp["namespace"]
    }

    name = local.grafana-mcp["namespace"]
  }
}

resource "helm_release" "grafana-mcp" {
  count                 = local.grafana-mcp["enabled"] ? 1 : 0
  repository            = local.grafana-mcp["repository"]
  name                  = local.grafana-mcp["name"]
  chart                 = local.grafana-mcp["chart"]
  version               = local.grafana-mcp["chart_version"]
  timeout               = local.grafana-mcp["timeout"]
  force_update          = local.grafana-mcp["force_update"]
  recreate_pods         = local.grafana-mcp["recreate_pods"]
  wait                  = local.grafana-mcp["wait"]
  atomic                = local.grafana-mcp["atomic"]
  cleanup_on_fail       = local.grafana-mcp["cleanup_on_fail"]
  dependency_update     = local.grafana-mcp["dependency_update"]
  disable_crd_hooks     = local.grafana-mcp["disable_crd_hooks"]
  disable_webhooks      = local.grafana-mcp["disable_webhooks"]
  render_subchart_notes = local.grafana-mcp["render_subchart_notes"]
  replace               = local.grafana-mcp["replace"]
  reset_values          = local.grafana-mcp["reset_values"]
  reuse_values          = local.grafana-mcp["reuse_values"]
  skip_crds             = local.grafana-mcp["skip_crds"]
  verify                = local.grafana-mcp["verify"]
  values = [
    local.values_grafana-mcp,
    local.grafana-mcp["extra_values"]
  ]
  namespace = local.grafana-mcp["create_ns"] ? kubernetes_namespace.grafana-mcp.*.metadata.0.name[count.index] : local.grafana-mcp["namespace"]
}

resource "kubernetes_network_policy" "grafana-mcp_default_deny" {
  count = local.grafana-mcp["create_ns"] && local.grafana-mcp["enabled"] && local.grafana-mcp["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${kubernetes_namespace.grafana-mcp.*.metadata.0.name[count.index]}-default-deny"
    namespace = kubernetes_namespace.grafana-mcp.*.metadata.0.name[count.index]
  }

  spec {
    pod_selector {
    }
    policy_types = ["Ingress"]
  }
}

resource "kubernetes_network_policy" "grafana-mcp_allow_namespace" {
  count = local.grafana-mcp["create_ns"] && local.grafana-mcp["enabled"] && local.grafana-mcp["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${kubernetes_namespace.grafana-mcp.*.metadata.0.name[count.index]}-allow-namespace"
    namespace = kubernetes_namespace.grafana-mcp.*.metadata.0.name[count.index]
  }

  spec {
    pod_selector {
    }

    ingress {
      from {
        namespace_selector {
          match_labels = {
            name = kubernetes_namespace.grafana-mcp.*.metadata.0.name[count.index]
          }
        }
      }
    }

    policy_types = ["Ingress"]
  }
}


================================================
FILE: helm-dependencies.yaml
================================================
apiVersion: v2
name: Handle terraform-kubernetes-addons helm chart dependencies update
version: 1.0.0
dependencies:
  - name: admiralty
    version: 0.13.2
    repository: https://charts.admiralty.io
  - name: secrets-store-csi-driver
    version: 1.5.6
    repository: https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts
  - name: aws-ebs-csi-driver
    version: 2.58.0
    repository: https://kubernetes-sigs.github.io/aws-ebs-csi-driver
  - name: aws-efs-csi-driver
    version: 4.0.0
    repository: https://kubernetes-sigs.github.io/aws-efs-csi-driver
  - name: aws-for-fluent-bit
    version: 0.2.0
    repository: https://aws.github.io/eks-charts
  - name: aws-load-balancer-controller
    version: 3.2.1
    repository: https://aws.github.io/eks-charts
  - name: aws-node-termination-handler
    version: 0.21.0
    repository: https://aws.github.io/eks-charts
  - name: cert-manager
    version: v1.20.1
    repository: https://charts.jetstack.io
  - name: cert-manager-csi-driver
    version: v0.13.0
    repository: https://charts.jetstack.io
  - name: cluster-autoscaler
    version: 9.56.0
    repository: https://kubernetes.github.io/autoscaler
  - name: external-dns
    version: 1.20.0
    repository: https://kubernetes-sigs.github.io/external-dns/
  - name: flux
    version: 1.13.3
    repository: https://charts.fluxcd.io
  - name: grafana-mcp
    version: 0.3.1
    repository: https://grafana.github.io/helm-charts
  - name: ingress-nginx
    version: 4.15.1
    repository: https://kubernetes.github.io/ingress-nginx
  - name: k8gb
    version: v0.19.0
    repository: https://www.k8gb.io
  - name: karma
    version: 1.7.2
    repository: https://charts.helm.sh/stable
  - name: karpenter
    version: 1.11.0
    repository: oci://public.ecr.aws/karpenter
  - name: keda
    version: 2.19.0
    repository: https://kedacore.github.io/charts
  - name: kong
    version: 3.2.0
    repository: https://charts.konghq.com
  - name: kube-prometheus-stack
    version: 83.3.0
    repository: https://prometheus-community.github.io/helm-charts
  - name: linkerd2-cni
    version: 30.12.2
    repository: https://helm.linkerd.io/stable
  - name: linkerd-control-plane
    version: 1.16.11
    repository: https://helm.linkerd.io/stable
  - name: linkerd-crds
    version: 1.8.0
    repository: https://helm.linkerd.io/stable
  - name: linkerd-viz
    version: 30.12.11
    repository: https://helm.linkerd.io/stable
  - name: loki
    version: 6.55.0
    repository: https://grafana.github.io/helm-charts
  - name: promtail
    version: 6.17.1
    repository: https://grafana.github.io/helm-charts
  - name: metrics-server
    version: 3.13.0
    repository: https://kubernetes-sigs.github.io/metrics-server/
  - name: node-problem-detector
    version: 2.4.0
    repository: oci://ghcr.io/deliveryhero/helm-charts
  - name: prometheus-adapter
    version: 5.3.0
    repository: https://prometheus-community.github.io/helm-charts
  - name: prometheus-cloudwatch-exporter
    version: 0.28.1
    repository: https://prometheus-community.github.io/helm-charts
  - name: prometheus-blackbox-exporter
    version: 11.9.1
    repository: https://prometheus-community.github.io/helm-charts
  - name: scaleway-webhook
    version: v0.0.1
    repository: https://particuleio.github.io/charts
  - name: sealed-secrets
    version: 2.18.4
    repository: https://bitnami-labs.github.io/sealed-secrets
  - name: oci://registry-1.docker.io/bitnamicharts/thanos
    version: 15.9.2
    repository: ""
  - name: tigera-operator
    version: v3.31.4
    repository: https://docs.projectcalico.org/charts
  - name: traefik
    version: 39.0.7
    repository: https://helm.traefik.io/traefik
  - name: oci://registry-1.docker.io/bitnamicharts/memcached
    version: 7.5.3
    repository: ""
  - name: velero
    version: 12.0.0
    repository: https://vmware-tanzu.github.io/helm-charts
  - name: victoria-metrics-k8s-stack
    version: 0.72.6
    repository: https://victoriametrics.github.io/helm-charts/
  - name: yet-another-cloudwatch-exporter
    version: 0.14.0
    repository: https://nerdswords.github.io/yet-another-cloudwatch-exporter
  - name: reloader
    version: 2.2.9
    repository: https://stakater.github.io/stakater-charts


================================================
FILE: ingress-nginx.tf
================================================
locals {

  ingress-nginx = merge(
    local.helm_defaults,
    {
      name                   = local.helm_dependencies[index(local.helm_dependencies.*.name, "ingress-nginx")].name
      chart                  = local.helm_dependencies[index(local.helm_dependencies.*.name, "ingress-nginx")].name
      repository             = local.helm_dependencies[index(local.helm_dependencies.*.name, "ingress-nginx")].repository
      chart_version          = local.helm_dependencies[index(local.helm_dependencies.*.name, "ingress-nginx")].version
      namespace              = "ingress-nginx"
      enabled                = false
      default_network_policy = true
      ingress_cidrs          = ["0.0.0.0/0"]
      linkerd-viz-enabled    = false
      linkerd-viz-namespace  = "linkerd-viz"
      allowed_cidrs          = ["0.0.0.0/0"]
      extra_ns_labels        = {}
      extra_ns_annotations   = {}
    },
    var.ingress-nginx
  )

  values_ingress-nginx = <<VALUES
controller:
  allowSnippetAnnotations: true
  metrics:
    enabled: ${local.kube-prometheus-stack["enabled"] || local.victoria-metrics-k8s-stack["enabled"]}
    serviceMonitor:
      enabled: ${local.kube-prometheus-stack["enabled"] || local.victoria-metrics-k8s-stack["enabled"]}
  updateStrategy:
    type: RollingUpdate
  kind: "DaemonSet"
  publishService:
    enabled: true
  priorityClassName: ${local.priority-class-ds["create"] ? kubernetes_priority_class.kubernetes_addons_ds[0].metadata[0].name : ""}
  admissionWebhooks:
    patch:
      podAnnotations:
        linkerd.io/inject: disabled
VALUES

}
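
# A minimal example override of `var.ingress-nginx` (illustrative values only;
# `extra_values` is rendered after `values_ingress-nginx` by the helm_release
# below, so it can override any of the defaults above):
#
# ingress-nginx = {
#   enabled       = true
#   ingress_cidrs = ["203.0.113.0/24"]
#   extra_values  = <<-EXTRA_VALUES
#     controller:
#       service:
#         externalTrafficPolicy: Local
#   EXTRA_VALUES
# }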

resource "kubernetes_namespace" "ingress-nginx" {
  count = local.ingress-nginx["enabled"] ? 1 : 0

  metadata {
    labels = merge({
      name                               = local.ingress-nginx["namespace"]
      "${local.labels_prefix}/component" = "ingress"
      },
    local.ingress-nginx["extra_ns_labels"])

    annotations = merge(
      local.ingress-nginx["extra_ns_annotations"]
    )

    name = local.ingress-nginx["namespace"]
  }
}

resource "helm_release" "ingress-nginx" {
  count                 = local.ingress-nginx["enabled"] ? 1 : 0
  repository            = local.ingress-nginx["repository"]
  name                  = local.ingress-nginx["name"]
  chart                 = local.ingress-nginx["chart"]
  version               = local.ingress-nginx["chart_version"]
  timeout               = local.ingress-nginx["timeout"]
  force_update          = local.ingress-nginx["force_update"]
  recreate_pods         = local.ingress-nginx["recreate_pods"]
  wait                  = local.ingress-nginx["wait"]
  atomic                = local.ingress-nginx["atomic"]
  cleanup_on_fail       = local.ingress-nginx["cleanup_on_fail"]
  dependency_update     = local.ingress-nginx["dependency_update"]
  disable_crd_hooks     = local.ingress-nginx["disable_crd_hooks"]
  disable_webhooks      = local.ingress-nginx["disable_webhooks"]
  render_subchart_notes = local.ingress-nginx["render_subchart_notes"]
  replace               = local.ingress-nginx["replace"]
  reset_values          = local.ingress-nginx["reset_values"]
  reuse_values          = local.ingress-nginx["reuse_values"]
  skip_crds             = local.ingress-nginx["skip_crds"]
  verify                = local.ingress-nginx["verify"]
  values = [
    local.values_ingress-nginx,
    local.ingress-nginx["extra_values"],
  ]
  namespace = kubernetes_namespace.ingress-nginx.*.metadata.0.name[count.index]

  depends_on = [
    kubectl_manifest.prometheus-operator_crds
  ]
}

resource "kubernetes_network_policy" "ingress-nginx_default_deny" {
  count = local.ingress-nginx["enabled"] && local.ingress-nginx["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${kubernetes_namespace.ingress-nginx.*.metadata.0.name[count.index]}-default-deny"
    namespace = kubernetes_namespace.ingress-nginx.*.metadata.0.name[count.index]
  }

  spec {
    pod_selector {
    }
    policy_types = ["Ingress"]
  }
}

resource "kubernetes_network_policy" "ingress-nginx_allow_namespace" {
  count = local.ingress-nginx["enabled"] && local.ingress-nginx["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${kubernetes_namespace.ingress-nginx.*.metadata.0.name[count.index]}-allow-namespace"
    namespace = kubernetes_namespace.ingress-nginx.*.metadata.0.name[count.index]
  }

  spec {
    pod_selector {
    }

    ingress {
      from {
        namespace_selector {
          match_labels = {
            name = kubernetes_namespace.ingress-nginx.*.metadata.0.name[count.index]
          }
        }
      }
    }

    policy_types = ["Ingress"]
  }
}

resource "kubernetes_network_policy" "ingress-nginx_allow_ingress" {
  count = local.ingress-nginx["enabled"] && local.ingress-nginx["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${kubernetes_namespace.ingress-nginx.*.metadata.0.name[count.index]}-allow-ingress"
    namespace = kubernetes_namespace.ingress-nginx.*.metadata.0.name[count.index]
  }

  spec {
    pod_selector {
      match_expressions {
        key      = "app.kubernetes.io/name"
        operator = "In"
        values   = ["ingress-nginx"]
      }
    }

    ingress {
      ports {
        port     = "http"
        protocol = "TCP"
      }
      ports {
        port     = "https"
        protocol = "TCP"
      }

      dynamic "from" {
        for_each = local.ingress-nginx["ingress_cidrs"]
        content {
          ip_block {
            cidr = from.value
          }
        }
      }
    }

    policy_types = ["Ingress"]
  }
}

resource "kubernetes_network_policy" "ingress-nginx_allow_monitoring" {
  count = local.ingress-nginx["enabled"] && local.ingress-nginx["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${kubernetes_namespace.ingress-nginx.*.metadata.0.name[count.index]}-allow-monitoring"
    namespace = kubernetes_namespace.ingress-nginx.*.metadata.0.name[count.index]
  }

  spec {
    pod_selector {
    }

    ingress {
      ports {
        port     = "metrics"
        protocol = "TCP"
      }

      from {
        namespace_selector {
          match_labels = {
            "${local.labels_prefix}/component" = "monitoring"
          }
        }
      }
    }

    policy_types = ["Ingress"]
  }
}

resource "kubernetes_network_policy" "ingress-nginx_allow_control_plane" {
  count = local.ingress-nginx["enabled"] && local.ingress-nginx["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${kubernetes_namespace.ingress-nginx.*.metadata.0.name[count.index]}-allow-control-plane"
    namespace = kubernetes_namespace.ingress-nginx.*.metadata.0.name[count.index]
  }

  spec {
    pod_selector {
      match_expressions {
        key      = "app.kubernetes.io/name"
        operator = "In"
        values   = ["ingress-nginx"]
      }
    }

    ingress {
      ports {
        port     = "webhook"
        protocol = "TCP"
      }

      dynamic "from" {
        for_each = local.ingress-nginx["allowed_cidrs"]
        content {
          ip_block {
            cidr = from.value
          }
        }
      }
    }

    policy_types = ["Ingress"]
  }
}

resource "kubernetes_network_policy" "ingress-nginx_allow_linkerd_viz" {
  count = local.ingress-nginx["enabled"] && (local.linkerd-viz["enabled"] || local.ingress-nginx["linkerd-viz-enabled"]) && local.ingress-nginx["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${kubernetes_namespace.ingress-nginx.*.metadata.0.name[count.index]}-allow-linkerd-viz"
    namespace = kubernetes_namespace.ingress-nginx.*.metadata.0.name[count.index]
  }

  spec {
    pod_selector {
    }

    ingress {
      from {
        namespace_selector {
          match_labels = {
            name = local.linkerd-viz["enabled"] ? local.linkerd-viz["namespace"] : local.ingress-nginx["linkerd-viz-namespace"]
          }
        }
      }
    }

    policy_types = ["Ingress"]
  }
}


================================================
FILE: k8gb.tf
================================================
locals {
  k8gb = merge(
    local.helm_defaults,
    {
      name                   = local.helm_dependencies[index(local.helm_dependencies.*.name, "k8gb")].name
      chart                  = local.helm_dependencies[index(local.helm_dependencies.*.name, "k8gb")].name
      repository             = local.helm_dependencies[index(local.helm_dependencies.*.name, "k8gb")].repository
      chart_version          = local.helm_dependencies[index(local.helm_dependencies.*.name, "k8gb")].version
      namespace              = "k8gb"
      enabled                = false
      create_ns              = true
      default_network_policy = false
    },
    var.k8gb
  )

  values_k8gb = <<-VALUES
    VALUES
}

resource "kubernetes_namespace" "k8gb" {
  count = local.k8gb["enabled"] && local.k8gb["create_ns"] ? 1 : 0

  metadata {
    labels = {
      name = local.k8gb["namespace"]
    }

    name = local.k8gb["namespace"]
  }
}

resource "helm_release" "k8gb" {
  count                 = local.k8gb["enabled"] ? 1 : 0
  repository            = local.k8gb["repository"]
  name                  = local.k8gb["name"]
  chart                 = local.k8gb["chart"]
  version               = local.k8gb["chart_version"]
  timeout               = local.k8gb["timeout"]
  force_update          = local.k8gb["force_update"]
  recreate_pods         = local.k8gb["recreate_pods"]
  wait                  = local.k8gb["wait"]
  atomic                = local.k8gb["atomic"]
  cleanup_on_fail       = local.k8gb["cleanup_on_fail"]
  dependency_update     = local.k8gb["dependency_update"]
  disable_crd_hooks     = local.k8gb["disable_crd_hooks"]
  disable_webhooks      = local.k8gb["disable_webhooks"]
  render_subchart_notes = local.k8gb["render_subchart_notes"]
  replace               = local.k8gb["replace"]
  reset_values          = local.k8gb["reset_values"]
  reuse_values          = local.k8gb["reuse_values"]
  skip_crds             = local.k8gb["skip_crds"]
  verify                = local.k8gb["verify"]
  values = [
    local.values_k8gb,
    local.k8gb["extra_values"]
  ]
  namespace = local.k8gb["create_ns"] ? kubernetes_namespace.k8gb.*.metadata.0.name[count.index] : local.k8gb["namespace"]
}

resource "kubernetes_network_policy" "k8gb_default_deny" {
  count = local.k8gb["enabled"] && local.k8gb["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${local.k8gb["namespace"]}-${local.k8gb["name"]}-default-deny"
    namespace = local.k8gb["namespace"]
  }

  spec {
    pod_selector {
    }
    policy_types = ["Ingress"]
  }
}

resource "kubernetes_network_policy" "k8gb_allow_namespace" {
  count = local.k8gb["enabled"] && local.k8gb["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${local.k8gb["namespace"]}-${local.k8gb["name"]}-allow-namespace"
    namespace = local.k8gb["namespace"]
  }

  spec {
    pod_selector {
    }

    ingress {
      from {
        namespace_selector {
          match_labels = {
            name = local.k8gb["namespace"]
          }
        }
      }
    }

    policy_types = ["Ingress"]
  }
}


================================================
FILE: karma.tf
================================================
locals {
  karma = merge(
    local.helm_defaults,
    {
      name                   = local.helm_dependencies[index(local.helm_dependencies.*.name, "karma")].name
      chart                  = local.helm_dependencies[index(local.helm_dependencies.*.name, "karma")].name
      repository             = local.helm_dependencies[index(local.helm_dependencies.*.name, "karma")].repository
      chart_version          = local.helm_dependencies[index(local.helm_dependencies.*.name, "karma")].version
      namespace              = "monitoring"
      create_ns              = false
      enabled                = false
      default_network_policy = true
    },
    var.karma
  )

  values_karma = <<VALUES
VALUES

}

resource "kubernetes_namespace" "karma" {
  count = local.karma["enabled"] && local.karma["create_ns"] ? 1 : 0

  metadata {
    labels = {
      name                               = local.karma["namespace"]
      "${local.labels_prefix}/component" = "monitoring"
    }

    name = local.karma["namespace"]
  }
}

resource "helm_release" "karma" {
  count                 = local.karma["enabled"] ? 1 : 0
  repository            = local.karma["repository"]
  name                  = local.karma["name"]
  chart                 = local.karma["chart"]
  version               = local.karma["chart_version"]
  timeout               = local.karma["timeout"]
  force_update          = local.karma["force_update"]
  recreate_pods         = local.karma["recreate_pods"]
  wait                  = local.karma["wait"]
  atomic                = local.karma["atomic"]
  cleanup_on_fail       = local.karma["cleanup_on_fail"]
  dependency_update     = local.karma["dependency_update"]
  disable_crd_hooks     = local.karma["disable_crd_hooks"]
  disable_webhooks      = local.karma["disable_webhooks"]
  render_subchart_notes = local.karma["render_subchart_notes"]
  replace               = local.karma["replace"]
  reset_values          = local.karma["reset_values"]
  reuse_values          = local.karma["reuse_values"]
  skip_crds             = local.karma["skip_crds"]
  verify                = local.karma["verify"]
  values = [
    local.values_karma,
    local.karma["extra_values"]
  ]
  namespace = local.karma["create_ns"] ? kubernetes_namespace.karma.*.metadata.0.name[count.index] : local.karma["namespace"]

  depends_on = [
    kubectl_manifest.prometheus-operator_crds
  ]
}

resource "kubernetes_network_policy" "karma_default_deny" {
  count = local.karma["create_ns"] && local.karma["enabled"] && local.karma["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${kubernetes_namespace.karma.*.metadata.0.name[count.index]}-default-deny"
    namespace = kubernetes_namespace.karma.*.metadata.0.name[count.index]
  }

  spec {
    pod_selector {
    }
    policy_types = ["Ingress"]
  }
}

resource "kubernetes_network_policy" "karma_allow_namespace" {
  count = local.karma["create_ns"] && local.karma["enabled"] && local.karma["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${kubernetes_namespace.karma.*.metadata.0.name[count.index]}-allow-namespace"
    namespace = kubernetes_namespace.karma.*.metadata.0.name[count.index]
  }

  spec {
    pod_selector {
    }

    ingress {
      from {
        namespace_selector {
          match_labels = {
            name = kubernetes_namespace.karma.*.metadata.0.name[count.index]
          }
        }
      }
    }

    policy_types = ["Ingress"]
  }
}

resource "kubernetes_network_policy" "karma_allow_ingress" {
  count = local.karma["enabled"] && local.karma["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${local.karma["create_ns"] ? kubernetes_namespace.karma.*.metadata.0.name[count.index] : local.karma["namespace"]}-allow-ingress-karma"
    namespace = local.karma["create_ns"] ? kubernetes_namespace.karma.*.metadata.0.name[count.index] : local.karma["namespace"]
  }

  spec {
    pod_selector {
    }

    ingress {
      from {
        namespace_selector {
          match_labels = {
            "${local.labels_prefix}/component" = "ingress"
          }
        }
      }
    }

    policy_types = ["Ingress"]
  }
}


================================================
FILE: keda.tf
================================================
locals {
  keda = merge(
    local.helm_defaults,
    {
      name                   = local.helm_dependencies[index(local.helm_dependencies.*.name, "keda")].name
      chart                  = local.helm_dependencies[index(local.helm_dependencies.*.name, "keda")].name
      repository             = local.helm_dependencies[index(local.helm_dependencies.*.name, "keda")].repository
      chart_version          = local.helm_dependencies[index(local.helm_dependencies.*.name, "keda")].version
      namespace              = "keda"
      create_ns              = false
      enabled                = false
      default_network_policy = true
    },
    var.keda
  )

  values_keda = <<VALUES
VALUES
}

resource "kubernetes_namespace" "keda" {
  count = local.keda["enabled"] && local.keda["create_ns"] ? 1 : 0

  metadata {
    labels = {
      name                               = local.keda["namespace"]
      "${local.labels_prefix}/component" = "keda"
    }

    name = local.keda["namespace"]
  }
}

resource "helm_release" "keda" {
  count                 = local.keda["enabled"] ? 1 : 0
  repository            = local.keda["repository"]
  name                  = local.keda["name"]
  chart                 = local.keda["chart"]
  version               = local.keda["chart_version"]
  timeout               = local.keda["timeout"]
  force_update          = local.keda["force_update"]
  recreate_pods         = local.keda["recreate_pods"]
  wait                  = local.keda["wait"]
  atomic                = local.keda["atomic"]
  cleanup_on_fail       = local.keda["cleanup_on_fail"]
  dependency_update     = local.keda["dependency_update"]
  disable_crd_hooks     = local.keda["disable_crd_hooks"]
  disable_webhooks      = local.keda["disable_webhooks"]
  render_subchart_notes = local.keda["render_subchart_notes"]
  replace               = local.keda["replace"]
  reset_values          = local.keda["reset_values"]
  reuse_values          = local.keda["reuse_values"]
  skip_crds             = local.keda["skip_crds"]
  verify                = local.keda["verify"]
  values = [
    local.values_keda,
    local.keda["extra_values"]
  ]
  namespace = local.keda["create_ns"] ? kubernetes_namespace.keda.*.metadata.0.name[count.index] : local.keda["namespace"]

  depends_on = [
    kubectl_manifest.prometheus-operator_crds
  ]
}

resource "kubernetes_network_policy" "keda_default_deny" {
  count = local.keda["create_ns"] && local.keda["enabled"] && local.keda["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${kubernetes_namespace.keda.*.metadata.0.name[count.index]}-default-deny"
    namespace = kubernetes_namespace.keda.*.metadata.0.name[count.index]
  }

  spec {
    pod_selector {
    }
    policy_types = ["Ingress"]
  }
}

resource "kubernetes_network_policy" "keda_allow_namespace" {
  count = local.keda["create_ns"] && local.keda["enabled"] && local.keda["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${kubernetes_namespace.keda.*.metadata.0.name[count.index]}-allow-namespace"
    namespace = kubernetes_namespace.keda.*.metadata.0.name[count.index]
  }

  spec {
    pod_selector {
    }

    ingress {
      from {
        namespace_selector {
          match_labels = {
            name = kubernetes_namespace.keda.*.metadata.0.name[count.index]
          }
        }
      }
    }

    policy_types = ["Ingress"]
  }
}


================================================
FILE: kong-crds.tf
================================================
locals {

  kong_crd_version = "kong-${local.kong.chart_version}"

  kong_crds = "https://raw.githubusercontent.com/Kong/charts/${local.kong_crd_version}/charts/kong/crds/custom-resource-definitions.yaml"

  kong_crds_apply = local.kong.enabled && local.kong.manage_crds ? [
    for v in data.kubectl_file_documents.kong_crds[0].documents : {
      data    = yamldecode(v)
      content = v
    }
  ] : null
}

data "http" "kong_crds" {
  count = local.kong.enabled && local.kong.manage_crds ? 1 : 0
  url   = local.kong_crds
}

data "kubectl_file_documents" "kong_crds" {
  count   = local.kong.enabled && local.kong.manage_crds ? 1 : 0
  content = data.http.kong_crds[0].response_body
}

resource "kubectl_manifest" "kong_crds" {
  for_each          = local.kong.enabled && local.kong.manage_crds ? { for v in local.kong_crds_apply : lower(join("/", compact([v.data.apiVersion, v.data.kind, lookup(v.data.metadata, "namespace", ""), v.data.metadata.name]))) => v.content } : {}
  yaml_body         = each.value
  server_side_apply = true
  force_conflicts   = true
}


================================================
FILE: kong.tf
================================================
locals {

  kong = merge(
    local.helm_defaults,
    {
      name                   = local.helm_dependencies[index(local.helm_dependencies.*.name, "kong")].name
      chart                  = local.helm_dependencies[index(local.helm_dependencies.*.name, "kong")].name
      repository             = local.helm_dependencies[index(local.helm_dependencies.*.name, "kong")].repository
      chart_version          = local.helm_dependencies[index(local.helm_dependencies.*.name, "kong")].version
      namespace              = "kong"
      enabled                = false
      default_network_policy = true
      ingress_cidrs          = ["0.0.0.0/0"]
      manage_crds            = true
    },
    var.kong
  )

  values_kong = <<VALUES
ingressController:
  enabled: true
  installCRDs: false
  resources:
    requests:
      cpu: 50m
      memory: 64Mi
postgresql:
  enabled: false
env:
  database: "off"
admin:
  type: ClusterIP
autoscaling:
  enabled: true
replicaCount: 2
serviceMonitor:
  enabled: ${local.kube-prometheus-stack["enabled"] || local.victoria-metrics-k8s-stack["enabled"]}
resources:
  requests:
    cpu: 100m
    memory: 128Mi
VALUES
}
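Every default above can be overridden through `var.kong`, like the other addons. A sketch of a root-module override (values are illustrative, keys mirror the defaults above):

```hcl
# Hypothetical override merged into local.kong via var.kong.
kong = {
  enabled       = true
  manage_crds   = false            # skip the kubectl-applied CRDs in kong-crds.tf
  ingress_cidrs = ["10.0.0.0/8"]   # narrows the allow-ingress NetworkPolicy below
  extra_values  = <<-VALUES
    replicaCount: 3
  VALUES
}
```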

resource "kubernetes_namespace" "kong" {
  count = local.kong["enabled"] ? 1 : 0

  metadata {
    labels = {
      name                               = local.kong["namespace"]
      "${local.labels_prefix}/component" = "ingress"
    }

    name = local.kong["namespace"]
  }
}

resource "helm_release" "kong" {
  count                 = local.kong["enabled"] ? 1 : 0
  repository            = local.kong["repository"]
  name                  = local.kong["name"]
  chart                 = local.kong["chart"]
  version               = local.kong["chart_version"]
  timeout               = local.kong["timeout"]
  force_update          = local.kong["force_update"]
  recreate_pods         = local.kong["recreate_pods"]
  wait                  = local.kong["wait"]
  atomic                = local.kong["atomic"]
  cleanup_on_fail       = local.kong["cleanup_on_fail"]
  dependency_update     = local.kong["dependency_update"]
  disable_crd_hooks     = local.kong["disable_crd_hooks"]
  disable_webhooks      = local.kong["disable_webhooks"]
  render_subchart_notes = local.kong["render_subchart_notes"]
  replace               = local.kong["replace"]
  reset_values          = local.kong["reset_values"]
  reuse_values          = local.kong["reuse_values"]
  skip_crds             = local.kong["skip_crds"]
  verify                = local.kong["verify"]
  values = [
    local.values_kong,
    local.kong["extra_values"]
  ]
  namespace = kubernetes_namespace.kong.*.metadata.0.name[count.index]

  depends_on = [
    kubectl_manifest.prometheus-operator_crds
  ]
}

resource "kubernetes_network_policy" "kong_default_deny" {
  count = local.kong["enabled"] && local.kong["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${kubernetes_namespace.kong.*.metadata.0.name[count.index]}-default-deny"
    namespace = kubernetes_namespace.kong.*.metadata.0.name[count.index]
  }

  spec {
    pod_selector {
    }
    policy_types = ["Ingress"]
  }
}

resource "kubernetes_network_policy" "kong_allow_namespace" {
  count = local.kong["enabled"] && local.kong["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${kubernetes_namespace.kong.*.metadata.0.name[count.index]}-allow-namespace"
    namespace = kubernetes_namespace.kong.*.metadata.0.name[count.index]
  }

  spec {
    pod_selector {
    }

    ingress {
      from {
        namespace_selector {
          match_labels = {
            name = kubernetes_namespace.kong.*.metadata.0.name[count.index]
          }
        }
      }
    }

    policy_types = ["Ingress"]
  }
}

resource "kubernetes_network_policy" "kong_allow_ingress" {
  count = local.kong["enabled"] && local.kong["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${kubernetes_namespace.kong.*.metadata.0.name[count.index]}-allow-ingress"
    namespace = kubernetes_namespace.kong.*.metadata.0.name[count.index]
  }

  spec {
    pod_selector {
      match_expressions {
        key      = "app.kubernetes.io/name"
        operator = "In"
        values   = ["kong"]
      }
    }

    ingress {
      ports {
        port     = "8000"
        protocol = "TCP"
      }
      ports {
        port     = "8443"
        protocol = "TCP"
      }

      dynamic "from" {
        for_each = local.kong["ingress_cidrs"]
        content {
          ip_block {
            cidr = from.value
          }
        }
      }
    }

    policy_types = ["Ingress"]
  }
}

resource "kubernetes_network_policy" "kong_allow_monitoring" {
  count = local.kong["enabled"] && local.kong["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${kubernetes_namespace.kong.*.metadata.0.name[count.index]}-allow-monitoring"
    namespace = kubernetes_namespace.kong.*.metadata.0.name[count.index]
  }

  spec {
    pod_selector {
    }

    ingress {
      ports {
        port     = "metrics"
        protocol = "TCP"
      }

      from {
        namespace_selector {
          match_labels = {
            "${local.labels_prefix}/component" = "monitoring"
          }
        }
      }
    }

    policy_types = ["Ingress"]
  }
}


================================================
FILE: kube-prometheus-crd.tf
================================================
locals {

  prometheus-operator_crd_version = (local.victoria-metrics-k8s-stack.enabled && local.victoria-metrics-k8s-stack.install_prometheus_operator_crds) || (local.kube-prometheus-stack.enabled && local.kube-prometheus-stack.manage_crds) ? yamldecode(data.http.prometheus-operator_version.0.response_body).appVersion : ""

  prometheus-operator_crds = [
    "https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/${local.prometheus-operator_crd_version}/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagerconfigs.yaml",
    "https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/${local.prometheus-operator_crd_version}/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagers.yaml",
    "https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/${local.prometheus-operator_crd_version}/example/prometheus-operator-crd/monitoring.coreos.com_podmonitors.yaml",
    "https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/${local.prometheus-operator_crd_version}/example/prometheus-operator-crd/monitoring.coreos.com_probes.yaml",
    "https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/${local.prometheus-operator_crd_version}/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml",
    "https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/${local.prometheus-operator_crd_version}/example/prometheus-operator-crd/monitoring.coreos.com_prometheusrules.yaml",
    "https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/${local.prometheus-operator_crd_version}/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml",
    "https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/${local.prometheus-operator_crd_version}/example/prometheus-operator-crd/monitoring.coreos.com_thanosrulers.yaml"
  ]

  prometheus-operator_chart = "https://raw.githubusercontent.com/prometheus-community/helm-charts/kube-prometheus-stack-${local.kube-prometheus-stack.chart_version}/charts/kube-prometheus-stack/Chart.yaml"

  prometheus-operator_crds_apply = (local.victoria-metrics-k8s-stack.enabled && local.victoria-metrics-k8s-stack.install_prometheus_operator_crds) || (local.kube-prometheus-stack.enabled && local.kube-prometheus-stack.manage_crds) ? { for k, v in data.http.prometheus-operator_crds : lower(join("/", compact([yamldecode(v.response_body).apiVersion, yamldecode(v.response_body).kind, lookup(yamldecode(v.response_body).metadata, "namespace", ""), yamldecode(v.response_body).metadata.name]))) => v.response_body
  } : null

}

data "http" "prometheus-operator_version" {
  count = (local.victoria-metrics-k8s-stack.enabled && local.victoria-metrics-k8s-stack.install_prometheus_operator_crds) || (local.kube-prometheus-stack.enabled && local.kube-prometheus-stack.manage_crds) ? 1 : 0
  url   = local.prometheus-operator_chart
}

data "http" "prometheus-operator_crds" {
  for_each = (local.victoria-metrics-k8s-stack.enabled && local.victoria-metrics-k8s-stack.install_prometheus_operator_crds) || (local.kube-prometheus-stack.enabled && local.kube-prometheus-stack.manage_crds) ? toset(local.prometheus-operator_crds) : []
  url      = each.key
}

resource "kubectl_manifest" "prometheus-operator_crds" {
  for_each          = (local.victoria-metrics-k8s-stack.enabled && local.victoria-metrics-k8s-stack.install_prometheus_operator_crds) || (local.kube-prometheus-stack.enabled && local.kube-prometheus-stack.manage_crds) ? local.prometheus-operator_crds_apply : {}
  yaml_body         = each.value
  server_side_apply = true
  force_conflicts   = true
}
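The `for_each` key above is the lower-cased `apiVersion/kind/namespace/name` tuple of each fetched manifest, so re-applies are stable across refreshes. A minimal sketch of the same derivation for a single document (the sample YAML is illustrative):

```hcl
locals {
  sample_doc = <<-YAML
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: servicemonitors.monitoring.coreos.com
  YAML

  sample = yamldecode(local.sample_doc)

  # compact() drops the empty namespace of this cluster-scoped resource:
  # => "apiextensions.k8s.io/v1/customresourcedefinition/servicemonitors.monitoring.coreos.com"
  sample_key = lower(join("/", compact([
    local.sample.apiVersion,
    local.sample.kind,
    lookup(local.sample.metadata, "namespace", ""),
    local.sample.metadata.name,
  ])))
}
```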


================================================
FILE: kube-prometheus.tf
================================================
locals {

  kube-prometheus-stack = merge(
    local.helm_defaults,
    {
      name                   = local.helm_dependencies[index(local.helm_dependencies.*.name, "kube-prometheus-stack")].name
      chart                  = local.helm_dependencies[index(local.helm_dependencies.*.name, "kube-prometheus-stack")].name
      repository             = local.helm_dependencies[index(local.helm_dependencies.*.name, "kube-prometheus-stack")].repository
      chart_version          = local.helm_dependencies[index(local.helm_dependencies.*.name, "kube-prometheus-stack")].version
      namespace              = "monitoring"
      enabled                = false
      allowed_cidrs          = ["0.0.0.0/0"]
      default_network_policy = true
      manage_crds            = true
    },
    var.kube-prometheus-stack
  )

  values_kube-prometheus-stack = <<VALUES
grafana:
  rbac:
    pspEnabled: false
  adminPassword: ${join(",", random_string.grafana_password.*.result)}
  dashboardProviders:
    dashboardproviders.yaml:
      apiVersion: 1
      providers:
      - name: 'default'
        orgId: 1
        folder: ''
        type: file
        disableDeletion: false
        editable: true
        options:
          path: /var/lib/grafana/dashboards/default
prometheus-node-exporter:
  priorityClassName: ${local.priority-class-ds["create"] ? kubernetes_priority_class.kubernetes_addons_ds[0].metadata[0].name : ""}
prometheus:
  prometheusSpec:
    priorityClassName: ${local.priority-class["create"] ? kubernetes_priority_class.kubernetes_addons[0].metadata[0].name : ""}
alertmanager:
  alertmanagerSpec:
    priorityClassName: ${local.priority-class["create"] ? kubernetes_priority_class.kubernetes_addons[0].metadata[0].name : ""}
prometheusOperator:
  admissionWebhooks:
    patch:
      podAnnotations:
        linkerd.io/inject: disabled
VALUES

  values_dashboard_kong = <<VALUES
grafana:
  dashboards:
    default:
      kong-dash:
        gnetId: 7424
        revision: 6
        datasource: ${local.kube-prometheus-stack.enabled ? "Prometheus" : local.victoria-metrics-k8s-stack.enabled ? "VictoriaMetrics" : ""}
VALUES

  values_dashboard_ingress-nginx = <<VALUES
grafana:
  dashboards:
    default:
      nginx-ingress:
        url: https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/grafana/dashboards/nginx.json
VALUES

  values_dashboard_cert-manager = <<VALUES
grafana:
  dashboards:
    default:
      cert-manager:
        gnetId: 11001
        revision: 1
        datasource: ${local.kube-prometheus-stack.enabled ? "Prometheus" : local.victoria-metrics-k8s-stack.enabled ? "VictoriaMetrics" : ""}
VALUES

  values_dashboard_node_exporter = <<VALUES
grafana:
  dashboards:
    default:
      node-exporter-full:
        gnetId: 1860
        revision: 21
        datasource: ${local.kube-prometheus-stack.enabled ? "Prometheus" : local.victoria-metrics-k8s-stack.enabled ? "VictoriaMetrics" : ""}
      node-exporter:
        gnetId: 11074
        revision: 9
        datasource: ${local.kube-prometheus-stack.enabled ? "Prometheus" : local.victoria-metrics-k8s-stack.enabled ? "VictoriaMetrics" : ""}
VALUES
}
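As with the other addons, these defaults merge under `var.kube-prometheus-stack`. A sketch of a root-module override (values are illustrative):

```hcl
# Hypothetical override merged into local.kube-prometheus-stack.
kube-prometheus-stack = {
  enabled       = true
  allowed_cidrs = ["172.16.0.0/12"]  # narrows the allow-control-plane policy below
  extra_values  = <<-VALUES
    grafana:
      ingress:
        enabled: true
  VALUES
}
```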


resource "kubernetes_namespace" "kube-prometheus-stack" {
  count = local.kube-prometheus-stack["enabled"] ? 1 : 0

  metadata {
    labels = {
      name                               = local.kube-prometheus-stack["namespace"]
      "${local.labels_prefix}/component" = "monitoring"
    }

    name = local.kube-prometheus-stack["namespace"]
  }

  lifecycle {
    ignore_changes = [
      metadata[0].annotations,
      metadata[0].labels,
    ]
  }
}

resource "random_string" "grafana_password" {
  count   = local.kube-prometheus-stack["enabled"] ? 1 : 0
  length  = 16
  special = false
}

resource "helm_release" "kube-prometheus-stack" {
  count                 = local.kube-prometheus-stack["enabled"] ? 1 : 0
  repository            = local.kube-prometheus-stack["repository"]
  name                  = local.kube-prometheus-stack["name"]
  chart                 = local.kube-prometheus-stack["chart"]
  version               = local.kube-prometheus-stack["chart_version"]
  timeout               = local.kube-prometheus-stack["timeout"]
  force_update          = local.kube-prometheus-stack["force_update"]
  recreate_pods         = local.kube-prometheus-stack["recreate_pods"]
  wait                  = local.kube-prometheus-stack["wait"]
  atomic                = local.kube-prometheus-stack["atomic"]
  cleanup_on_fail       = local.kube-prometheus-stack["cleanup_on_fail"]
  dependency_update     = local.kube-prometheus-stack["dependency_update"]
  disable_crd_hooks     = local.kube-prometheus-stack["disable_crd_hooks"]
  disable_webhooks      = local.kube-prometheus-stack["disable_webhooks"]
  render_subchart_notes = local.kube-prometheus-stack["render_subchart_notes"]
  replace               = local.kube-prometheus-stack["replace"]
  reset_values          = local.kube-prometheus-stack["reset_values"]
  reuse_values          = local.kube-prometheus-stack["reuse_values"]
  skip_crds             = local.kube-prometheus-stack["skip_crds"]
  verify                = local.kube-prometheus-stack["verify"]
  values = compact([
    local.values_kube-prometheus-stack,
    local.kube-prometheus-stack["extra_values"],
    local.kong["enabled"] ? local.values_dashboard_kong : null,
    local.cert-manager["enabled"] ? local.values_dashboard_cert-manager : null,
    local.ingress-nginx["enabled"] ? local.values_dashboard_ingress-nginx : null,
    local.values_dashboard_node_exporter
  ])
  namespace = kubernetes_namespace.kube-prometheus-stack.*.metadata.0.name[count.index]

  depends_on = [
    helm_release.ingress-nginx,
    kubectl_manifest.prometheus-operator_crds
  ]
}

resource "kubernetes_network_policy" "kube-prometheus-stack_default_deny" {
  count = local.kube-prometheus-stack["enabled"] && local.kube-prometheus-stack["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${kubernetes_namespace.kube-prometheus-stack.*.metadata.0.name[count.index]}-default-deny"
    namespace = kubernetes_namespace.kube-prometheus-stack.*.metadata.0.name[count.index]
  }

  spec {
    pod_selector {
    }
    policy_types = ["Ingress"]
  }
}

resource "kubernetes_network_policy" "kube-prometheus-stack_allow_namespace" {
  count = local.kube-prometheus-stack["enabled"] && local.kube-prometheus-stack["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${kubernetes_namespace.kube-prometheus-stack.*.metadata.0.name[count.index]}-allow-namespace"
    namespace = kubernetes_namespace.kube-prometheus-stack.*.metadata.0.name[count.index]
  }

  spec {
    pod_selector {
    }

    ingress {
      from {
        namespace_selector {
          match_labels = {
            name = kubernetes_namespace.kube-prometheus-stack.*.metadata.0.name[count.index]
          }
        }
      }
    }

    policy_types = ["Ingress"]
  }
}

resource "kubernetes_network_policy" "kube-prometheus-stack_allow_ingress" {
  count = local.kube-prometheus-stack["enabled"] && local.kube-prometheus-stack["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${kubernetes_namespace.kube-prometheus-stack.*.metadata.0.name[count.index]}-allow-ingress"
    namespace = kubernetes_namespace.kube-prometheus-stack.*.metadata.0.name[count.index]
  }

  spec {
    pod_selector {
    }

    ingress {
      from {
        namespace_selector {
          match_labels = {
            "${local.labels_prefix}/component" = "ingress"
          }
        }
      }
    }

    policy_types = ["Ingress"]
  }
}

resource "kubernetes_network_policy" "kube-prometheus-stack_allow_control_plane" {
  count = local.kube-prometheus-stack["enabled"] && local.kube-prometheus-stack["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${kubernetes_namespace.kube-prometheus-stack.*.metadata.0.name[count.index]}-allow-control-plane"
    namespace = kubernetes_namespace.kube-prometheus-stack.*.metadata.0.name[count.index]
  }

  spec {
    pod_selector {
      match_expressions {
        key      = "app"
        operator = "In"
        values   = ["${local.kube-prometheus-stack["name"]}-operator"]
      }
    }

    ingress {
      ports {
        port     = "10250"
        protocol = "TCP"
      }

      dynamic "from" {
        for_each = local.kube-prometheus-stack["allowed_cidrs"]
        content {
          ip_block {
            cidr = from.value
          }
        }
      }
    }

    policy_types = ["Ingress"]
  }
}

output "grafana_password" {
  value     = element(concat(random_string.grafana_password.*.result, [""]), 0)
  sensitive = true
}
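Because the password is generated by `random_string` and marked `sensitive`, a calling module must re-export it explicitly to surface it. A sketch assuming this module is instantiated as `module.addons` (the instance name is hypothetical):

```hcl
# Hypothetical root-module passthrough of the generated Grafana password.
output "grafana_admin_password" {
  value     = module.addons.grafana_password
  sensitive = true
}
```

It can then be read with `terraform output -raw grafana_admin_password`.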


================================================
FILE: linkerd-viz.tf
================================================
locals {
  linkerd-viz = merge(
    local.helm_defaults,
    {
      name                   = local.helm_dependencies[index(local.helm_dependencies.*.name, "linkerd-viz")].name
      chart                  = local.helm_dependencies[index(local.helm_dependencies.*.name, "linkerd-viz")].name
      repository             = local.helm_dependencies[index(local.helm_dependencies.*.name, "linkerd-viz")].repository
      chart_version          = local.helm_dependencies[index(local.helm_dependencies.*.name, "linkerd-viz")].version
      namespace              = "linkerd-viz"
      create_ns              = true
      enabled                = local.linkerd.enabled
      default_network_policy = true
      allowed_cidrs          = ["0.0.0.0/0"]
      ha                     = true
    },
    var.linkerd-viz
  )

  values_linkerd-viz = <<-VALUES
    linkerdNamespace: ${local.linkerd["namespace"]}
    VALUES

  values_linkerd-viz_ha = <<-VALUES
    #
    # The below is taken from: https://github.com/linkerd/linkerd2/blob/main/viz/charts/linkerd-viz/values-ha.yaml
    #

    # This values.yaml file contains the values needed to enable HA mode.
    # Usage:
    #   helm install -f values.yaml -f values-ha.yaml

    enablePodAntiAffinity: true

    # nodeAffinity:

    resources: &ha_resources
      cpu: &ha_resources_cpu
        limit: ""
        request: 100m
      memory:
        limit: 250Mi
        request: 50Mi

    # tap configuration
    tap:
      replicas: 3
      resources: *ha_resources

    # web configuration
    dashboard:
      resources: *ha_resources

    # prometheus configuration
    prometheus:
      resources:
        cpu:
          limit: ""
          request: 300m
        memory:
          limit: 8192Mi
          request: 300Mi
    VALUES

  linkerd-viz_manifests = {
    prometheus-servicemonitor         = <<-VALUES
      apiVersion: monitoring.coreos.com/v1
      kind: ServiceMonitor
      metadata:
        labels:
          k8s-app: linkerd-prometheus
          release: monitoring
        name: linkerd-federate
        namespace: ${local.linkerd-viz.namespace}
      spec:
        endpoints:
        - interval: 30s
          scrapeTimeout: 30s
          params:
            match[]:
            - '{job="linkerd-proxy"}'
            - '{job="linkerd-controller"}'
          path: /federate
          port: admin-http
          honorLabels: true
          relabelings:
          - action: keep
            regex: '^prometheus$'
            sourceLabels:
            - '__meta_kubernetes_pod_container_name'
        jobLabel: app
        namespaceSelector:
          matchNames:
          - ${local.linkerd-viz.namespace}
        selector:
          matchLabels:
            component: prometheus
      VALUES
    allow-prometheus-admin-federation = <<-VALUES
      apiVersion: policy.linkerd.io/v1beta1
      kind: ServerAuthorization
      metadata:
        namespace: ${local.linkerd-viz.namespace}
        name: prometheus-admin-federation
      spec:
        server:
          name: prometheus-admin
        client:
          unauthenticated: true
      VALUES
  }
}

resource "kubernetes_namespace" "linkerd-viz" {
  count = local.linkerd-viz["enabled"] && local.linkerd-viz["create_ns"] ? 1 : 0

  metadata {
    labels = {
      name                   = local.linkerd-viz["namespace"]
      "linkerd.io/extension" = "viz"
    }

    annotations = {
      "linkerd.io/inject"             = "enabled"
      "config.linkerd.io/proxy-await" = "enabled"
    }

    name = local.linkerd-viz["namespace"]
  }
}

resource "kubectl_manifest" "linkerd-viz" {
  for_each  = local.linkerd-viz.enabled && local.kube-prometheus-stack.enabled ? local.linkerd-viz_manifests : {}
  yaml_body = each.value
}

resource "helm_release" "linkerd-viz" {
  count                 = local.linkerd-viz["enabled"] ? 1 : 0
  repository            = local.linkerd-viz["repository"]
  name                  = local.linkerd-viz["name"]
  chart                 = local.linkerd-viz["chart"]
  version               = local.linkerd-viz["chart_version"]
  timeout               = local.linkerd-viz["timeout"]
  force_update          = local.linkerd-viz["force_update"]
  recreate_pods         = local.linkerd-viz["recreate_pods"]
  wait                  = local.linkerd-viz["wait"]
  atomic                = local.linkerd-viz["atomic"]
  cleanup_on_fail       = local.linkerd-viz["cleanup_on_fail"]
  dependency_update     = local.linkerd-viz["dependency_update"]
  disable_crd_hooks     = local.linkerd-viz["disable_crd_hooks"]
  disable_webhooks      = local.linkerd-viz["disable_webhooks"]
  render_subchart_notes = local.linkerd-viz["render_subchart_notes"]
  replace               = local.linkerd-viz["replace"]
  reset_values          = local.linkerd-viz["reset_values"]
  reuse_values          = local.linkerd-viz["reuse_values"]
  skip_crds             = local.linkerd-viz["skip_crds"]
  verify                = local.linkerd-viz["verify"]
  values = compact([
    local.values_linkerd-viz,
    local.linkerd-viz["extra_values"],
    local.linkerd-viz.ha ? local.values_linkerd-viz_ha : null
  ])
  namespace = local.linkerd-viz["create_ns"] ? kubernetes_namespace.linkerd-viz.*.metadata.0.name[count.index] : local.linkerd-viz["namespace"]

  depends_on = [helm_release.linkerd-control-plane]
}

resource "kubernetes_network_policy" "linkerd-viz_default_deny" {
  count = local.linkerd-viz["create_ns"] && local.linkerd-viz["enabled"] && local.linkerd-viz["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${kubernetes_namespace.linkerd-viz.*.metadata.0.name[count.index]}-default-deny"
    namespace = kubernetes_namespace.linkerd-viz.*.metadata.0.name[count.index]
  }

  spec {
    pod_selector {
    }
    policy_types = ["Ingress"]
  }
}

resource "kubernetes_network_policy" "linkerd-viz_allow_namespace" {
  count = local.linkerd-viz["create_ns"] && local.linkerd-viz["enabled"] && local.linkerd-viz["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${kubernetes_namespace.linkerd-viz.*.metadata.0.name[count.index]}-allow-namespace"
    namespace = kubernetes_namespace.linkerd-viz.*.metadata.0.name[count.index]
  }

  spec {
    pod_selector {
    }

    ingress {
      from {
        namespace_selector {
          match_labels = {
            name = kubernetes_namespace.linkerd-viz.*.metadata.0.name[count.index]
          }
        }
      }
    }

    policy_types = ["Ingress"]
  }
}

resource "kubernetes_network_policy" "linkerd-viz_allow_control_plane" {
  count = local.linkerd-viz["enabled"] && local.linkerd-viz["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${kubernetes_namespace.linkerd-viz.*.metadata.0.name[count.index]}-allow-control-plane"
    namespace = kubernetes_namespace.linkerd-viz.*.metadata.0.name[count.index]
  }

  spec {
    pod_selector {
    }

    ingress {
      ports {
        port     = "8089"
        protocol = "TCP"
      }

      dynamic "from" {
        for_each = local.linkerd-viz["allowed_cidrs"]
        content {
          ip_block {
            cidr = from.value
          }
        }
      }
    }

    policy_types = ["Ingress"]
  }
}

resource "kubernetes_network_policy" "linkerd-viz_allow_monitoring" {
  count = local.linkerd-viz["enabled"] && local.linkerd-viz["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${kubernetes_namespace.linkerd-viz.*.metadata.0.name[count.index]}-allow-monitoring"
    namespace = kubernetes_namespace.linkerd-viz.*.metadata.0.name[count.index]
  }

  spec {
    pod_selector {
    }

    ingress {
      from {
        namespace_selector {
          match_labels = {
            "${local.labels_prefix}/component" = "monitoring"
          }
        }
      }
    }

    policy_types = ["Ingress"]
  }
}


================================================
FILE: linkerd.tf
================================================
locals {
  linkerd = merge(
    local.helm_defaults,
    {
      name               = local.helm_dependencies[index(local.helm_dependencies.*.name, "linkerd-control-plane")].name
      chart              = local.helm_dependencies[index(local.helm_dependencies.*.name, "linkerd-control-plane")].name
      repository         = local.helm_dependencies[index(local.helm_dependencies.*.name, "linkerd-control-plane")].repository
      chart_version      = local.helm_dependencies[index(local.helm_dependencies.*.name, "linkerd-control-plane")].version
      namespace          = "linkerd"
      create_ns          = true
      enabled            = false
      trust_anchor_pem   = null
      cluster_dns_domain = "cluster.local"
      ha                 = true
    },
    var.linkerd
  )

  values_linkerd = <<-VALUES
    identity:
      issuer:
        scheme: kubernetes.io/tls
    identityTrustAnchorsPEM: |
      ${indent(2, local.linkerd.enabled ? local.linkerd["trust_anchor_pem"] == null ? tls_self_signed_cert.linkerd_trust_anchor.0.cert_pem : local.linkerd["trust_anchor_pem"] : "")}
    policyValidator:
      externalSecret: true
      caBundle: |
        ${indent(4, local.linkerd.enabled ? tls_self_signed_cert.webhook_issuer_tls.0.cert_pem : "")}
    proxyInjector:
      externalSecret: true
      caBundle: |
        ${indent(4, local.linkerd.enabled ? tls_self_signed_cert.webhook_issuer_tls.0.cert_pem : "")}
    profileValidator:
      externalSecret: true
      caBundle: |
        ${indent(4, local.linkerd.enabled ? tls_self_signed_cert.webhook_issuer_tls.0.cert_pem : "")}
    VALUES

  values_linkerd_ha = <<-VALUES
    #
    # The below is taken from: https://github.com/linkerd/linkerd2/blob/main/charts/linkerd-control-plane/values-ha.yaml
    #

    # This values.yaml file contains the values needed to enable HA mode.
    # Usage:
    #   helm install -f values-ha.yaml

    # -- Create PodDisruptionBudget resources for each control plane workload
    enablePodDisruptionBudget: true

    # -- Specify a deployment strategy for each control plane workload
    deploymentStrategy:
      rollingUpdate:
        maxUnavailable: 1
        maxSurge: 25%

    # -- add PodAntiAffinity to each control plane workload
    enablePodAntiAffinity: true

    # nodeAffinity:

    # proxy configuration
    proxy:
      resources:
        cpu:
          request: 100m
        memory:
          limit: 250Mi
          request: 20Mi

    # controller configuration
    controllerReplicas: 3
    controllerResources: &controller_resources
      cpu: &controller_resources_cpu
        limit: ""
        request: 100m
      memory:
        limit: 250Mi
        request: 50Mi
    destinationResources: *controller_resources

    # identity configuration
    identityResources:
      cpu: *controller_resources_cpu
      memory:
        limit: 250Mi
        request: 10Mi

    # heartbeat configuration
    heartbeatResources: *controller_resources

    # proxy injector configuration
    proxyInjectorResources: *controller_resources
    webhookFailurePolicy: Fail

    # service profile validator configuration
    spValidatorResources: *controller_resources
    VALUES

  linkerd_manifests = {
    linkerd-trust-anchor = <<-VALUES
      apiVersion: cert-manager.io/v1
      kind: Issuer
      metadata:
        name: linkerd-trust-anchor
        namespace: ${local.linkerd.namespace}
      spec:
        ca:
          secretName: linkerd-trust-anchor
      VALUES

    linkerd-identity-issuer = <<-VALUES
      apiVersion: cert-manager.io/v1
      kind: Certificate
      metadata:
        name: linkerd-identity-issuer
        namespace: ${local.linkerd.namespace}
      spec:
        secretName: linkerd-identity-issuer
        revisionHistoryLimit: 3
        duration: 48h
        renewBefore: 25h
        issuerRef:
          name: linkerd-trust-anchor
          kind: Issuer
        commonName: identity.linkerd.${local.linkerd.cluster_dns_domain}
        dnsNames:
        - identity.linkerd.${local.linkerd.cluster_dns_domain}
        isCA: true
        privateKey:
          algorithm: ECDSA
        usages:
        - cert sign
        - crl sign
        - server auth
        - client auth
      VALUES

    webhook-issuer = <<-VALUES
      apiVersion: cert-manager.io/v1
      kind: Issuer
      metadata:
        name: webhook-issuer
        namespace: ${local.linkerd.namespace}
      spec:
        ca:
          secretName: webhook-issuer-tls
      VALUES

    linkerd-policy-validator = <<-VALUES
      apiVersion: cert-manager.io/v1
      kind: Certificate
      metadata:
        name: linkerd-policy-validator
        namespace: ${local.linkerd.namespace}
      spec:
        secretName: linkerd-policy-validator-k8s-tls
        duration: 24h
        renewBefore: 1h
        issuerRef:
          name: webhook-issuer
          kind: Issuer
        commonName: linkerd-policy-validator.${local.linkerd.namespace}.svc
        dnsNames:
        - linkerd-policy-validator.${local.linkerd.namespace}.svc
        isCA: false
        privateKey:
          algorithm: ECDSA
          encoding: PKCS8
        usages:
        - server auth
      VALUES

    linkerd-proxy-injector = <<-VALUES
      apiVersion: cert-manager.io/v1
      kind: Certificate
      metadata:
        name: linkerd-proxy-injector
        namespace: ${local.linkerd.namespace}
      spec:
        secretName: linkerd-proxy-injector-k8s-tls
        revisionHistoryLimit: 3
        duration: 24h
        renewBefore: 1h
        issuerRef:
          name: webhook-issuer
          kind: Issuer
        commonName: linkerd-proxy-injector.${local.linkerd.namespace}.svc
        dnsNames:
        - linkerd-proxy-injector.${local.linkerd.namespace}.svc
        isCA: false
        privateKey:
          algorithm: ECDSA
        usages:
        - server auth
      VALUES

    linkerd-sp-validator = <<-VALUES
      apiVersion: cert-manager.io/v1
      kind: Certificate
      metadata:
        name: linkerd-sp-validator
        namespace: ${local.linkerd.namespace}
      spec:
        secretName: linkerd-sp-validator-k8s-tls
        revisionHistoryLimit: 3
        duration: 24h
        renewBefore: 1h
        issuerRef:
          name: webhook-issuer
          kind: Issuer
        commonName: linkerd-sp-validator.${local.linkerd.namespace}.svc
        dnsNames:
        - linkerd-sp-validator.${local.linkerd.namespace}.svc
        isCA: false
        privateKey:
          algorithm: ECDSA
        usages:
        - server auth
      VALUES
  }

  linkerd-crds = merge(
    local.helm_defaults,
    {
      name          = local.helm_dependencies[index(local.helm_dependencies.*.name, "linkerd-crds")].name
      chart         = local.helm_dependencies[index(local.helm_dependencies.*.name, "linkerd-crds")].name
      repository    = local.helm_dependencies[index(local.helm_dependencies.*.name, "linkerd-crds")].repository
      chart_version = local.helm_dependencies[index(local.helm_dependencies.*.name, "linkerd-crds")].version
      namespace     = "linkerd"
      create_ns     = false
      enabled       = local.linkerd["enabled"] && !local.linkerd["skip_crds"]
    },
  )
}

resource "tls_private_key" "linkerd_trust_anchor" {
  count       = local.linkerd["enabled"] && local.linkerd["trust_anchor_pem"] == null ? 1 : 0
  algorithm   = "ECDSA"
  ecdsa_curve = "P256"
}

resource "tls_self_signed_cert" "linkerd_trust_anchor" {
  count                 = local.linkerd["enabled"] && local.linkerd["trust_anchor_pem"] == null ? 1 : 0
  private_key_pem       = tls_private_key.linkerd_trust_anchor.0.private_key_pem
  validity_period_hours = 87600
  early_renewal_hours   = 78840
  is_ca_certificate     = true

  subject {
    common_name = "root.linkerd.${local.linkerd.cluster_dns_domain}"
  }

  allowed_uses = [
    "cert_signing",
    "crl_signing",
  ]
}

resource "kubernetes_secret" "linkerd_trust_anchor" {
  count = local.linkerd["enabled"] && local.linkerd["trust_anchor_pem"] == null ? 1 : 0
  metadata {
    name      = "linkerd-trust-anchor"
    namespace = local.linkerd.create_ns ? kubernetes_namespace.linkerd.0.metadata[0].name : local.linkerd.namespace
  }

  data = {
    "tls.crt" = tls_self_signed_cert.linkerd_trust_anchor.0.cert_pem
    "tls.key" = tls_private_key.linkerd_trust_anchor.0.private_key_pem
  }

  type = "kubernetes.io/tls"
}

resource "tls_private_key" "webhook_issuer_tls" {
  count       = local.linkerd["enabled"] ? 1 : 0
  algorithm   = "ECDSA"
  ecdsa_curve = "P256"
}

resource "tls_self_signed_cert" "webhook_issuer_tls" {
  count                 = local.linkerd["enabled"] ? 1 : 0
  private_key_pem       = tls_private_key.webhook_issuer_tls.0.private_key_pem
  validity_period_hours = 87600
  early_renewal_hours   = 78840
  is_ca_certificate     = true

  subject {
    common_name = "webhook.linkerd.${local.linkerd.cluster_dns_domain}"
  }

  allowed_uses = [
    "cert_signing",
    "crl_signing",
  ]
}

resource "kubernetes_secret" "webhook_issuer_tls" {
  count = local.linkerd["enabled"] ? 1 : 0
  metadata {
    name      = "webhook-issuer-tls"
    namespace = local.linkerd.create_ns ? kubernetes_namespace.linkerd.0.metadata[0].name : local.linkerd.namespace
  }

  data = {
    "tls.crt" = tls_self_signed_cert.webhook_issuer_tls.0.cert_pem
    "tls.key" = tls_private_key.webhook_issuer_tls.0.private_key_pem
  }

  type = "kubernetes.io/tls"
}

resource "kubernetes_namespace" "linkerd" {
  count = local.linkerd["enabled"] && local.linkerd["create_ns"] ? 1 : 0

  metadata {
    labels = {
      name                                  = local.linkerd["namespace"]
      "linkerd.io/is-control-plane"         = "true"
      "config.linkerd.io/admission-webhook" = "disabled"
      "linkerd.io/control-plane-ns"         = local.linkerd.namespace
    }

    annotations = {
      "linkerd.io/inject" = "disabled"
    }

    name = local.linkerd["namespace"]
  }
}

resource "helm_release" "linkerd-control-plane" {
  count                 = local.linkerd["enabled"] ? 1 : 0
  repository            = local.linkerd["repository"]
  name                  = local.linkerd["name"]
  chart                 = local.linkerd["chart"]
  version               = local.linkerd["chart_version"]
  timeout               = local.linkerd["timeout"]
  force_update          = local.linkerd["force_update"]
  recreate_pods         = local.linkerd["recreate_pods"]
  wait                  = local.linkerd["wait"]
  atomic                = local.linkerd["atomic"]
  cleanup_on_fail       = local.linkerd["cleanup_on_fail"]
  dependency_update     = local.linkerd["dependency_update"]
  disable_crd_hooks     = local.linkerd["disable_crd_hooks"]
  disable_webhooks      = local.linkerd["disable_webhooks"]
  render_subchart_notes = local.linkerd["render_subchart_notes"]
  replace               = local.linkerd["replace"]
  reset_values          = local.linkerd["reset_values"]
  reuse_values          = local.linkerd["reuse_values"]
  skip_crds             = local.linkerd["skip_crds"]
  verify                = local.linkerd["verify"]
  values = compact([
    local.values_linkerd,
    local.linkerd["extra_values"],
    local.linkerd.ha ? local.values_linkerd_ha : null
  ])
  namespace = local.linkerd["create_ns"] ? kubernetes_namespace.linkerd.*.metadata.0.name[count.index] : local.linkerd["namespace"]

  depends_on = [
    helm_release.linkerd2-cni,
    helm_release.linkerd-crds
  ]
}

resource "kubectl_manifest" "linkerd" {
  for_each  = local.linkerd.enabled ? local.linkerd_manifests : {}
  yaml_body = each.value
}

resource "helm_release" "linkerd-crds" {
  count                 = local.linkerd["enabled"] && !local.linkerd["skip_crds"] ? 1 : 0
  repository            = local.linkerd["repository"]
  name                  = local.linkerd-crds["name"]
  chart                 = local.linkerd-crds["chart"]
  version               = local.linkerd-crds["chart_version"]
  timeout               = local.linkerd["timeout"]
  force_update          = local.linkerd["force_update"]
  recreate_pods         = local.linkerd["recreate_pods"]
  wait                  = local.linkerd["wait"]
  atomic                = local.linkerd["atomic"]
  cleanup_on_fail       = local.linkerd["cleanup_on_fail"]
  dependency_update     = local.linkerd["dependency_update"]
  disable_crd_hooks     = local.linkerd["disable_crd_hooks"]
  disable_webhooks      = local.linkerd["disable_webhooks"]
  render_subchart_notes = local.linkerd["render_subchart_notes"]
  replace               = local.linkerd["replace"]
  reset_values          = local.linkerd["reset_values"]
  reuse_values          = local.linkerd["reuse_values"]
  skip_crds             = local.linkerd["skip_crds"]
  verify                = local.linkerd["verify"]
  values                = []
  namespace             = local.linkerd["create_ns"] ? kubernetes_namespace.linkerd.*.metadata.0.name[count.index] : local.linkerd["namespace"]
}


================================================
FILE: linkerd2-cni.tf
================================================
locals {
  linkerd2-cni = merge(
    local.helm_defaults,
    {
      name                   = local.helm_dependencies[index(local.helm_dependencies.*.name, "linkerd2-cni")].name
      chart                  = local.helm_dependencies[index(local.helm_dependencies.*.name, "linkerd2-cni")].name
      repository             = local.helm_dependencies[index(local.helm_dependencies.*.name, "linkerd2-cni")].repository
      chart_version          = local.helm_dependencies[index(local.helm_dependencies.*.name, "linkerd2-cni")].version
      namespace              = "linkerd-cni"
      create_ns              = true
      enabled                = local.linkerd.enabled
      cni_conflist_filename  = "10-calico.conflist"
      default_network_policy = true
    },
    var.linkerd2-cni
  )

  values_linkerd2-cni = <<VALUES
    VALUES
}

resource "kubernetes_namespace" "linkerd2-cni" {
  count = local.linkerd2-cni["enabled"] && local.linkerd2-cni["create_ns"] ? 1 : 0

  metadata {
    labels = {
      name                                  = local.linkerd2-cni["namespace"]
      "config.linkerd.io/admission-webhook" = "disabled"
      "linkerd.io/cni-resource"             = "true"
    }

    annotations = {
      "linkerd.io/inject" = "disabled"
    }

    name = local.linkerd2-cni["namespace"]
  }
}

resource "helm_release" "linkerd2-cni" {
  count                 = local.linkerd2-cni["enabled"] ? 1 : 0
  repository            = local.linkerd2-cni["repository"]
  name                  = local.linkerd2-cni["name"]
  chart                 = local.linkerd2-cni["chart"]
  version               = local.linkerd2-cni["chart_version"]
  timeout               = local.linkerd2-cni["timeout"]
  force_update          = local.linkerd2-cni["force_update"]
  recreate_pods         = local.linkerd2-cni["recreate_pods"]
  wait                  = local.linkerd2-cni["wait"]
  atomic                = local.linkerd2-cni["atomic"]
  cleanup_on_fail       = local.linkerd2-cni["cleanup_on_fail"]
  dependency_update     = local.linkerd2-cni["dependency_update"]
  disable_crd_hooks     = local.linkerd2-cni["disable_crd_hooks"]
  disable_webhooks      = local.linkerd2-cni["disable_webhooks"]
  render_subchart_notes = local.linkerd2-cni["render_subchart_notes"]
  replace               = local.linkerd2-cni["replace"]
  reset_values          = local.linkerd2-cni["reset_values"]
  reuse_values          = local.linkerd2-cni["reuse_values"]
  skip_crds             = local.linkerd2-cni["skip_crds"]
  verify                = local.linkerd2-cni["verify"]
  values = [
    local.values_linkerd2-cni,
    local.linkerd2-cni["extra_values"]
  ]
  namespace = local.linkerd2-cni["create_ns"] ? kubernetes_namespace.linkerd2-cni.*.metadata.0.name[count.index] : local.linkerd2-cni["namespace"]
}

resource "kubernetes_network_policy" "linkerd2-cni_default_deny" {
  count = local.linkerd2-cni["create_ns"] && local.linkerd2-cni["enabled"] && local.linkerd2-cni["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${kubernetes_namespace.linkerd2-cni.*.metadata.0.name[count.index]}-default-deny"
    namespace = kubernetes_namespace.linkerd2-cni.*.metadata.0.name[count.index]
  }

  spec {
    pod_selector {
    }
    policy_types = ["Ingress"]
  }
}

resource "kubernetes_network_policy" "linkerd2-cni_allow_namespace" {
  count = local.linkerd2-cni["create_ns"] && local.linkerd2-cni["enabled"] && local.linkerd2-cni["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${kubernetes_namespace.linkerd2-cni.*.metadata.0.name[count.index]}-allow-namespace"
    namespace = kubernetes_namespace.linkerd2-cni.*.metadata.0.name[count.index]
  }

  spec {
    pod_selector {
    }

    ingress {
      from {
        namespace_selector {
          match_labels = {
            name = kubernetes_namespace.linkerd2-cni.*.metadata.0.name[count.index]
          }
        }
      }
    }

    policy_types = ["Ingress"]
  }
}


================================================
FILE: locals.tf
================================================
locals {

  labels_prefix = var.labels_prefix != null ? var.labels_prefix : "particule.io"

  helm_defaults_defaults = {
    atomic                = false
    cleanup_on_fail       = false
    dependency_update     = false
    disable_crd_hooks     = false
    disable_webhooks      = false
    force_update          = false
    recreate_pods         = false
    render_subchart_notes = true
    replace               = false
    reset_values          = false
    reuse_values          = false
    skip_crds             = false
    timeout               = 3600
    verify                = false
    wait                  = true
    extra_values          = ""
  }

  helm_defaults = merge(
    local.helm_defaults_defaults,
    var.helm_defaults
  )

  helm_dependencies = yamldecode(file("${path.module}/helm-dependencies.yaml"))["dependencies"]
}


================================================
FILE: loki-stack.tf
================================================
locals {
  loki-stack = merge(
    local.helm_defaults,
    {
      name                   = local.helm_dependencies[index(local.helm_dependencies.*.name, "loki")].name
      chart                  = local.helm_dependencies[index(local.helm_dependencies.*.name, "loki")].name
      repository             = local.helm_dependencies[index(local.helm_dependencies.*.name, "loki")].repository
      chart_version          = local.helm_dependencies[index(local.helm_dependencies.*.name, "loki")].version
      namespace              = "monitoring"
      create_ns              = false
      enabled                = false
      default_network_policy = true
      generate_ca            = true
      trusted_ca_content     = null
      create_promtail_cert   = true
      create_grafana_ds_cm   = true
    },
    var.loki-stack
  )

  values_loki-stack = <<VALUES
priorityClassName: ${local.priority-class["create"] ? kubernetes_priority_class.kubernetes_addons[0].metadata[0].name : ""}
serviceMonitor:
  enabled: ${local.kube-prometheus-stack["enabled"] || local.victoria-metrics-k8s-stack["enabled"]}
gateway:
  service:
    labels:
      prometheus.io/service-monitor: "false"
VALUES
}

resource "kubernetes_namespace" "loki-stack" {
  count = local.loki-stack["enabled"] && local.loki-stack["create_ns"] ? 1 : 0

  metadata {
    labels = {
      name                               = local.loki-stack["namespace"]
      "${local.labels_prefix}/component" = "monitoring"
    }

    name = local.loki-stack["namespace"]
  }
}

resource "kubernetes_config_map" "loki-stack_grafana_ds" {
  count = local.loki-stack["enabled"] && local.loki-stack["create_grafana_ds_cm"] ? 1 : 0
  metadata {
    name      = "${local.loki-stack["name"]}-grafana-ds"
    namespace = local.loki-stack["namespace"]
    labels = {
      grafana_datasource = "1"
    }
  }

  data = {
    "datasource.yml" = <<-VALUES
      datasources:
      - access: proxy
        editable: true
        isDefault: false
        name: Loki
        orgId: 1
        type: loki
        url: http://${local.loki-stack["name"]}-gateway
        version: 1
      VALUES
  }
}

resource "helm_release" "loki-stack" {
  count                 = local.loki-stack["enabled"] ? 1 : 0
  repository            = local.loki-stack["repository"]
  name                  = local.loki-stack["name"]
  chart                 = local.loki-stack["chart"]
  version               = local.loki-stack["chart_version"]
  timeout               = local.loki-stack["timeout"]
  force_update          = local.loki-stack["force_update"]
  recreate_pods         = local.loki-stack["recreate_pods"]
  wait                  = local.loki-stack["wait"]
  atomic                = local.loki-stack["atomic"]
  cleanup_on_fail       = local.loki-stack["cleanup_on_fail"]
  dependency_update     = local.loki-stack["dependency_update"]
  disable_crd_hooks     = local.loki-stack["disable_crd_hooks"]
  disable_webhooks      = local.loki-stack["disable_webhooks"]
  render_subchart_notes = local.loki-stack["render_subchart_notes"]
  replace               = local.loki-stack["replace"]
  reset_values          = local.loki-stack["reset_values"]
  reuse_values          = local.loki-stack["reuse_values"]
  skip_crds             = local.loki-stack["skip_crds"]
  verify                = local.loki-stack["verify"]
  values = [
    local.values_loki-stack,
    local.loki-stack["extra_values"]
  ]
  namespace = local.loki-stack["create_ns"] ? kubernetes_namespace.loki-stack.*.metadata.0.name[count.index] : local.loki-stack["namespace"]

  depends_on = [
    kubectl_manifest.prometheus-operator_crds
  ]
}

resource "tls_private_key" "loki-stack-ca-key" {
  count       = local.loki-stack["enabled"] && local.loki-stack["generate_ca"] ? 1 : 0
  algorithm   = "ECDSA"
  ecdsa_curve = "P384"
}

resource "tls_self_signed_cert" "loki-stack-ca-cert" {
  count             = local.loki-stack["enabled"] && local.loki-stack["generate_ca"] ? 1 : 0
  private_key_pem   = tls_private_key.loki-stack-ca-key[0].private_key_pem
  is_ca_certificate = true

  subject {
    common_name  = var.cluster-name
    organization = var.cluster-name
  }

  validity_period_hours = 87600

  allowed_uses = [
    "cert_signing"
  ]
}

resource "kubernetes_network_policy" "loki-stack_default_deny" {
  count = local.loki-stack["create_ns"] && local.loki-stack["enabled"] && local.loki-stack["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${kubernetes_namespace.loki-stack.*.metadata.0.name[count.index]}-default-deny"
    namespace = kubernetes_namespace.loki-stack.*.metadata.0.name[count.index]
  }

  spec {
    pod_selector {
    }
    policy_types = ["Ingress"]
  }
}

resource "kubernetes_network_policy" "loki-stack_allow_namespace" {
  count = local.loki-stack["create_ns"] && local.loki-stack["enabled"] && local.loki-stack["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${kubernetes_namespace.loki-stack.*.metadata.0.name[count.index]}-allow-namespace"
    namespace = kubernetes_namespace.loki-stack.*.metadata.0.name[count.index]
  }

  spec {
    pod_selector {
    }

    ingress {
      from {
        namespace_selector {
          match_labels = {
            name = kubernetes_namespace.loki-stack.*.metadata.0.name[count.index]
          }
        }
      }
    }

    policy_types = ["Ingress"]
  }
}

resource "kubernetes_network_policy" "loki-stack_allow_ingress" {
  count = local.loki-stack["create_ns"] && local.loki-stack["enabled"] && local.loki-stack["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${kubernetes_namespace.loki-stack.*.metadata.0.name[count.index]}-allow-ingress"
    namespace = kubernetes_namespace.loki-stack.*.metadata.0.name[count.index]
  }

  spec {
    pod_selector {
    }

    ingress {
      from {
        namespace_selector {
          match_labels = {
            "${local.labels_prefix}/component" = "ingress"
          }
        }
      }
    }

    policy_types = ["Ingress"]
  }
}

resource "kubernetes_secret" "loki-stack-ca" {
  count = local.loki-stack["enabled"] && (local.loki-stack["generate_ca"] || local.loki-stack["trusted_ca_content"] != null) ? 1 : 0
  metadata {
    name      = "${local.loki-stack["name"]}-ca"
    namespace = local.loki-stack["create_ns"] ? kubernetes_namespace.loki-stack.*.metadata.0.name[count.index] : local.loki-stack["namespace"]
  }

  data = {
    "ca.crt" = local.loki-stack["generate_ca"] ? tls_self_signed_cert.loki-stack-ca-cert[count.index].cert_pem : local.loki-stack["trusted_ca_content"]
  }
}

resource "tls_private_key" "promtail-key" {
  count       = local.loki-stack["enabled"] && local.loki-stack["generate_ca"] && local.loki-stack["create_promtail_cert"] ? 1 : 0
  algorithm   = "ECDSA"
  ecdsa_curve = "P384"
}

resource "tls_cert_request" "promtail-csr" {
  count           = local.loki-stack["enabled"] && local.loki-stack["generate_ca"] && local.loki-stack["create_promtail_cert"] ? 1 : 0
  private_key_pem = tls_private_key.promtail-key[count.index].private_key_pem

  subject {
    common_name = "promtail"
  }

  dns_names = [
    "promtail"
  ]
}

resource "tls_locally_signed_cert" "promtail-cert" {
  count              = local.loki-stack["enabled"] && local.loki-stack["generate_ca"] && local.loki-stack["create_promtail_cert"] ? 1 : 0
  cert_request_pem   = tls_cert_request.promtail-csr[count.index].cert_request_pem
  ca_private_key_pem = tls_private_key.loki-stack-ca-key[count.index].private_key_pem
  ca_cert_pem        = tls_self_signed_cert.loki-stack-ca-cert[count.index].cert_pem

  validity_period_hours = 8760

  allowed_uses = [
    "key_encipherment",
    "digital_signature",
    "client_auth"
  ]
}

output "loki-stack-ca" {
  value = element(concat(tls_self_signed_cert.loki-stack-ca-cert[*].cert_pem, [""]), 0)
}

output "loki-stack-ca-key" {
  value     = element(concat(tls_private_key.loki-stack-ca-key[*].private_key_pem, [""]), 0)
  sensitive = true
}

output "promtail-key" {
  value     = element(concat(tls_private_key.promtail-key[*].private_key_pem, [""]), 0)
  sensitive = true
}

output "promtail-cert" {
  value     = element(concat(tls_locally_signed_cert.promtail-cert[*].cert_pem, [""]), 0)
  sensitive = true
}


================================================
FILE: metrics-server.tf
================================================
locals {
  metrics-server = merge(
    local.helm_defaults,
    {
      name                   = local.helm_dependencies[index(local.helm_dependencies.*.name, "metrics-server")].name
      chart                  = local.helm_dependencies[index(local.helm_dependencies.*.name, "metrics-server")].name
      repository             = local.helm_dependencies[index(local.helm_dependencies.*.name, "metrics-server")].repository
      chart_version          = local.helm_dependencies[index(local.helm_dependencies.*.name, "metrics-server")].version
      namespace              = "metrics-server"
      enabled                = false
      default_network_policy = true
      allowed_cidrs          = ["0.0.0.0/0"]
    },
    var.metrics-server
  )

  values_metrics-server = <<VALUES
apiService:
  create: true
priorityClassName: ${local.priority-class["create"] ? kubernetes_priority_class.kubernetes_addons[0].metadata[0].name : ""}
VALUES

}

resource "kubernetes_namespace" "metrics-server" {
  count = local.metrics-server["enabled"] ? 1 : 0

  metadata {
    labels = {
      name = local.metrics-server["namespace"]
    }

    name = local.metrics-server["namespace"]
  }
}

resource "helm_release" "metrics-server" {
  count                 = local.metrics-server["enabled"] ? 1 : 0
  repository            = local.metrics-server["repository"]
  name                  = local.metrics-server["name"]
  chart                 = local.metrics-server["chart"]
  version               = local.metrics-server["chart_version"]
  timeout               = local.metrics-server["timeout"]
  force_update          = local.metrics-server["force_update"]
  recreate_pods         = local.metrics-server["recreate_pods"]
  wait                  = local.metrics-server["wait"]
  atomic                = local.metrics-server["atomic"]
  cleanup_on_fail       = local.metrics-server["cleanup_on_fail"]
  dependency_update     = local.metrics-server["dependency_update"]
  disable_crd_hooks     = local.metrics-server["disable_crd_hooks"]
  disable_webhooks      = local.metrics-server["disable_webhooks"]
  render_subchart_notes = local.metrics-server["render_subchart_notes"]
  replace               = local.metrics-server["replace"]
  reset_values          = local.metrics-server["reset_values"]
  reuse_values          = local.metrics-server["reuse_values"]
  skip_crds             = local.metrics-server["skip_crds"]
  verify                = local.metrics-server["verify"]
  values = [
    local.values_metrics-server,
    local.metrics-server["extra_values"]
  ]
  namespace = kubernetes_namespace.metrics-server.*.metadata.0.name[count.index]
}

resource "kubernetes_network_policy" "metrics-server_default_deny" {
  count = local.metrics-server["enabled"] && local.metrics-server["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${kubernetes_namespace.metrics-server.*.metadata.0.name[count.index]}-default-deny"
    namespace = kubernetes_namespace.metrics-server.*.metadata.0.name[count.index]
  }

  spec {
    pod_selector {
    }
    policy_types = ["Ingress"]
  }
}

resource "kubernetes_network_policy" "metrics-server_allow_namespace" {
  count = local.metrics-server["enabled"] && local.metrics-server["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${kubernetes_namespace.metrics-server.*.metadata.0.name[count.index]}-allow-namespace"
    namespace = kubernetes_namespace.metrics-server.*.metadata.0.name[count.index]
  }

  spec {
    pod_selector {
    }

    ingress {
      from {
        namespace_selector {
          match_labels = {
            name = kubernetes_namespace.metrics-server.*.metadata.0.name[count.index]
          }
        }
      }
    }

    policy_types = ["Ingress"]
  }
}

resource "kubernetes_network_policy" "metrics-server_allow_control_plane" {
  count = local.metrics-server["enabled"] && local.metrics-server["default_network_policy"] ? 1 : 0

  metadata {
    name      = "${kubernetes_namespace.metrics-server.*.metadata.0.name[count.index]}-allow-control-plane"
    namespace = kubernetes_namespace.metrics-server.*.metadata.0.name[count.index]
  }

  spec {
    pod_selector {
      match_expressions {
        key      = "app.kubernetes.io/name"
        operator = "In"
        values   = ["metrics-server"]
      }
    }

    ingress {
      ports {
        port     = "10250"
        protocol = "TCP"
      }

      dynamic "from" {
        for_each = local.metrics-server["allowed_cidrs"]
        content {
          ip_block {
            cidr = from.value
          }
        }
      }
    }

    policy_types = ["Ingress"]
  }
}


================================================
FILE: modules/aws/.terraform-docs.yml
================================================
settings:
  lockfile: false


================================================
FILE: modules/aws/README.md
================================================
# terraform-kubernetes-addons:aws

[![semantic-release](https://img.shields.io/badge/%20%20%F0%9F%93%A6%F0%9F%9A%80-semantic--release-e10079.svg)](https://github.com/semantic-release/semantic-release)
[![terraform-kubernetes-addons](https://github.com/particuleio/terraform-kubernetes-addons/workflows/terraform-kubernetes-addons/badge.svg)](https://github.com/particuleio/terraform-kubernetes-addons/actions?query=workflow%3Aterraform-kubernetes-addons)

## About

Provides various Kubernetes addons that are commonly used with Kubernetes clusters running on AWS.

## Documentation

User guides, feature documentation, and examples are available [here](https://github.com/particuleio/teks/).

## IAM permissions

This module can use [IRSA](https://aws.amazon.com/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/) (IAM Roles for Service Accounts) to grant each addon fine-grained IAM permissions.
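A minimal invocation sketch: enabling individual addons is done by passing a map per addon (each map is merged over the module's defaults, as shown in the repository's `locals.tf` files). The `source` address and cluster name below are illustrative assumptions, not the module's documented interface — refer to the linked user guides for complete, tested examples.

```hcl
# Hypothetical usage sketch: addon maps override per-addon defaults.
module "addons" {
  # Assumed registry path; verify against the published module address.
  source = "particuleio/addons/kubernetes//modules/aws"

  cluster-name = "my-cluster" # illustrative value

  metrics-server = {
    enabled = true
  }

  loki-stack = {
    enabled   = true
    namespace = "monitoring"
  }
}
```

Unset keys (timeouts, `atomic`, `wait`, and other Helm options) fall back to `helm_defaults`, which can itself be overridden globally via the `helm_defaults` variable.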

<!-- BEGIN_TF_DOCS -->
## Requirements

| Name | Version |
| ---- | ------- |
| <a name="requirement_terraform"></a> [terraform](#requirement\_terraform) | >= 1.5.7 |
| <a name="requirement_aws"></a> [aws](#requirement\_aws) | >= 6.28 |
| <a name="requirement_flux"></a> [flux](#requirement\_flux) | ~> 1.0 |
| <a name="requirement_github"></a> [github](#requirement\_github) | ~> 6.0 |
| <a name="requirement_helm"></a> [helm](#requirement\_helm) | ~> 3.0 |
| <a name="requirement_http"></a> [http](#requirement\_http) | >= 3 |
| <a name="requirement_kubectl"></a> [kubectl](#requirement\_kubectl) | ~> 2.0 |
| <a name="requirement_kubernetes"></a> [kubernetes](#requirement\_kubernetes) | ~> 2.0, != 2.12 |
| <a name="requirement_tls"></a> [tls](#requirement\_tls) | ~> 4.0 |

## Providers

| Name | Version |
| ---- | ------- |
| <a name="provider_aws"></a> [aws](#provider\_aws) | >= 6.28 |
| <a name="provider_flux"></a> [flux](#provider\_flux) | ~> 1.0 |
| <a name="provider_github"></a> [github](#provider\_github) | ~> 6.0 |
| <a name="provider_helm"></a> [helm](#provider\_helm) | ~> 3.0 |
| <a name="provider_http"></a> [http](#provider\_http) | >= 3 |
| <a name="provider_kubectl"></a> [kubectl](#provider\_kubectl) | ~> 2.0 |
| <a name="provider_kubernetes"></a> [kubernetes](#provider\_kubernetes) | ~> 2.0, != 2.12 |
| <a name="provider_random"></a> [random](#provider\_random) | n/a |
| <a name="provider_time"></a> [time](#provider\_time) | n/a |
| <a name="provider_tls"></a> [tls](#provider\_tls) | ~> 4.0 |

## Modules

| Name | Source | Version |
| ---- | ------ | ------- |
| <a name="module_iam_assumable_role_aws-ebs-csi-driver"></a> [iam\_assumable\_role\_aws-ebs-csi-driver](#module\_iam\_assumable\_role\_aws-ebs-csi-driver) | terraform-aws-modules/iam/aws//modules/iam-role | ~> 6.0 |
| <a name="module_iam_assumable_role_aws-efs-csi-driver"></a> [iam\_assumable\_role\_aws-efs-csi-driver](#module\_iam\_assumable\_role\_aws-efs-csi-driver) | terraform-aws-modules/iam/aws//modules/iam-role | ~> 6.0 |
| <a name="module_iam_assumable_role_aws-for-fluent-bit"></a> [iam\_assumable\_role\_aws-for-fluent-bit](#module\_iam\_assumable\_role\_aws-for-fluent-bit) | terraform-aws-modules/iam/aws//modules/iam-role | ~> 6.0 |
| <a name="module_iam_assumable_role_aws-load-balancer-controller"></a> [iam\_assumable\_role\_aws-load-balancer-controller](#module\_iam\_assumable\_role\_aws-load-balancer-controller) | terraform-aws-modules/iam/aws//modules/iam-role | ~> 6.0 |
| <a name="module_iam_assumable_role_cert-manager"></a> [iam\_assumable\_role\_cert-manager](#module\_iam\_assumable\_role\_cert-manager) | terraform-aws-modules/iam/aws//modules/iam-role | ~> 6.0 |
| <a name="module_iam_assumable_role_cluster-autoscaler"></a> [iam\_assumable\_role\_cluster-autoscaler](#module\_iam\_assumable\_role\_cluster-autoscaler) | terraform-aws-modules/iam/aws//modules/iam-role | ~> 6.0 |
| <a name="module_iam_assumable_role_cni-metrics-helper"></a> [iam\_assumable\_role\_cni-metrics-helper](#module\_iam\_assumable\_role\_cni-metrics-helper) | terraform-aws-modules/iam/aws//modules/iam-role | ~> 6.0 |
| <a name="module_iam_assumable_role_external-dns"></a> [iam\_assumable\_role\_external-dns](#module\_iam\_assumable\_role\_external-dns) | terraform-aws-modules/iam/aws//modules/iam-role | ~> 6.0 |
| <a name="module_iam_assumable_role_kube-prometheus-stack_grafana"></a> [iam\_assumable\_role\_kube-prometheus-stack\_grafana](#module\_iam\_assumable\_role\_kube-prometheus-stack\_grafana) | terraform-aws-modules/iam/aws//modules/iam-role | ~> 6.0 |
| <a name="module_iam_assumable_role_kube-prometheus-stack_thanos"></a> [iam\_assumable\_role\_kube-prometheus-stack\_thanos](#module\_iam\_assumable\_role\_kube-prometheus-stack\_thanos) | terraform-aws-modules/iam/aws//modules/iam-role | ~> 6.0 |
| <a name="module_iam_assumable_role_loki-stack"></a> [iam\_assumable\_role\_loki-stack](#module\_iam\_assumable\_role\_loki-stack) | terraform-aws-modules/iam/aws//modules/iam-role | ~> 6.0 |
| <a name="module_iam_assumable_role_prometheus-cloudwatch-exporter"></a> [iam\_assumable\_role\_prometheus-cloudwatch-exporter](#module\_iam\_assumable\_role\_prometheus-cloudwatch-exporter) | terraform-aws-modules/iam/aws//modules/iam-role | ~> 6.0 |
| <a name="module_iam_assumable_role_thanos"></a> [iam\_assumable\_role\_thanos](#module\_iam\_assumable\_role\_thanos) | terraform-aws-modules/iam/aws//modules/iam-role | ~> 6.0 |
| <a name="module_iam_assumable_role_thanos-storegateway"></a> [iam\_assumable\_role\_thanos-storegateway](#module\_iam\_assumable\_role\_thanos-storegateway) | terraform-aws-modules/iam/aws//modules/iam-role | ~> 6.0 |
| <a name="module_iam_assumable_role_velero"></a> [iam\_assumable\_role\_velero](#module\_iam\_assumable\_role\_velero) | terraform-aws-modules/iam/aws//modules/iam-role | ~> 6.0 |
| <a name="module_iam_assumable_role_yet-another-cloudwatch-exporter"></a> [iam\_assumable\_role\_yet-another-cloudwatch-exporter](#module\_iam\_assumable\_role\_yet-another-cloudwatch-exporter) | terraform-aws-modules/iam/aws//modules/iam-role | ~> 6.0 |
| <a name="module_karpenter"></a> [karpenter](#module\_karpenter) | terraform-aws-modules/eks/aws//modules/karpenter | ~> 21.0 |
| <a name="module_kube-prometheus-stack_thanos_bucket"></a> [kube-prometheus-stack\_thanos\_bucket](#module\_kube-prometheus-stack\_thanos\_bucket) | terraform-aws-modules/s3-bucket/aws | ~> 5.0 |
| <a name="module_loki_bucket"></a> [loki\_bucket](#module\_loki\_bucket) | terraform-aws-modules/s3-bucket/aws | ~> 5.0 |
| <a name="module_s3_logging_bucket"></a> [s3\_logging\_bucket](#module\_s3\_logging\_bucket) | terraform-aws-modules/s3-bucket/aws | ~> 5.0 |
| <a name="module_security-group-efs-csi-driver"></a> [security-group-efs-csi-driver](#module\_security-group-efs-csi-driver) | terraform-aws-modules/security-group/aws//modules/nfs | ~> 5.0 |
| <a name="module_thanos_bucket"></a> [thanos\_bucket](#module\_thanos\_bucket) | terraform-aws-modules/s3-bucket/aws | ~> 5.0 |
| <a name="module_velero_thanos_bucket"></a> [velero\_thanos\_bucket](#module\_velero\_thanos\_bucket) | terraform-aws-modules/s3-bucket/aws | ~> 5.0 |
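The bucket modules listed above are all instantiated from the public `terraform-aws-modules/s3-bucket/aws` registry module. As an illustrative sketch only (bucket name and tags below are placeholder assumptions, not values from this repository), a minimal standalone instantiation looks like:

```hcl
# Illustrative sketch: wiring up one of the S3 bucket modules listed above.
# "my-cluster-thanos-metrics" and the tags are placeholders, not repo defaults.
module "thanos_bucket" {
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = "~> 5.0"

  bucket        = "my-cluster-thanos-metrics"
  force_destroy = true

  tags = {
    Environment = "example"
  }
}
```

In this repository the equivalent module blocks live in `modules/aws/thanos.tf`, `modules/aws/loki-stack.tf`, and friends, with inputs derived from the addon locals rather than hardcoded values.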

## Resources

| Name | Type |
| ---- | ---- |
| [aws_cloudwatch_log_group.aws-for-fluent-bit](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudwatch_log_group) | resource |
| [aws_efs_file_system.aws-efs-csi-driver](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/efs_file_system) | resource |
| [aws_efs_mount_target.aws-efs-csi-driver](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/efs_mount_target) | resource |
| [aws_iam_policy.aws-ebs-csi-driver](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) | resource |
| [aws_iam_policy.aws-efs-csi-driver](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) | resource |
| [aws_iam_policy.aws-for-fluent-bit](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) | resource |
| [aws_iam_policy.aws-load-balancer-controller](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) | resource |
| [aws_iam_policy.cert-manager](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) | resource |
| [aws_iam_policy.cluster-autoscaler](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) | resource |
| [aws_iam_policy.cni-metrics-helper](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) | resource |
| [aws_iam_policy.external-dns](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) | resource |
| [aws_iam_policy.karpenter_additional](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) | resource |
| [aws_iam_policy.kube-prometheus-stack_grafana](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) | resource |
| [aws_iam_policy.kube-prometheus-stack_thanos](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) | resource |
| [aws_iam_policy.loki-stack](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) | resource |
| [aws_iam_policy.prometheus-cloudwatch-exporter](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) | resource |
| [aws_iam_policy.thanos](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) | resource |
| [aws_iam_policy.thanos-storegateway](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) | resource |
| [aws_iam_policy.velero](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) | resource |
| [aws_iam_policy.yet-another-cloudwatch-exporter](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) | resource |
| [aws_kms_alias.aws-ebs-csi-driver](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/kms_alias) | resource |
| [aws_kms_key.aws-ebs-csi-driver](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/kms_key) | resource |
| [flux_bootstrap_git.flux](https://registry.terraform.io/providers/fluxcd/flux/latest/docs/resources/bootstrap_git) | resource |
| [github_branch_default.main](https://registry.terraform.io/providers/integrations/github/latest/docs/resources/branch_default) | resource |
| [github_repository.main](https://registry.terraform.io/providers/integrations/github/latest/docs/resources/repository) | resource |
| [github_repository_deploy_key.main](https://registry.terraform.io/providers/integrations/github/latest/docs/resources/repository_deploy_key) | resource |
| [helm_release.admiralty](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.aws-ebs-csi-driver](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.aws-efs-csi-driver](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.aws-for-fluent-bit](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.aws-load-balancer-controller](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.aws-node-termination-handler](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.cert-manager](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.cert-manager-csi-driver](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.cluster-autoscaler](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.external-dns](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.grafana-mcp](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.ingress-nginx](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.k8gb](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.karma](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.karpenter](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.keda](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.kong](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.kube-prometheus-stack](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.linkerd-control-plane](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.linkerd-crds](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.linkerd-viz](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.linkerd2-cni](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.loki-stack](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.metrics-server](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.node-problem-detector](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.prometheus-adapter](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.prometheus-blackbox-exporter](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.prometheus-cloudwatch-exporter](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.promtail](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.reloader](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.sealed-secrets](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.secrets-store-csi-driver](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.thanos](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.thanos-memcached](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.thanos-storegateway](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.thanos-tls-querier](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.tigera-operator](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.traefik](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.velero](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.victoria-metrics-k8s-stack](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [helm_release.yet-another-cloudwatch-exporter](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [kubectl_manifest.aws-ebs-csi-driver_vsc](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/resources/manifest) | resource |
| [kubectl_manifest.calico_crds](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/resources/manifest) | resource |
| [kubectl_manifest.cert-manager_cluster_issuers](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/resources/manifest) | resource |
| [kubectl_manifest.cni-metrics-helper](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/resources/manifest) | resource |
| [kubectl_manifest.csi-external-snapshotter](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/resources/manifest) | resource |
| [kubectl_manifest.kong_crds](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/resources/manifest) | resource |
| [kubectl_manifest.linkerd](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/resources/manifest) | resource |
| [kubectl_manifest.linkerd-viz](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/resources/manifest) | resource |
| [kubectl_manifest.prometheus-operator_crds](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/resources/manifest) | resource |
| [kubectl_manifest.secrets-store-csi-driver-provider-aws](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/resources/manifest) | resource |
| [kubectl_manifest.tigera-operator_crds](https://registry.terraform.io/providers/alekc/kubectl/latest/docs/resources/manifest) | resource |
| [kubernetes_config_map.loki-stack_grafana_ds](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/config_map) | resource |
| [kubernetes_namespace.admiralty](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) | resource |
| [kubernetes_namespace.aws-ebs-csi-driver](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) | resource |
| [kubernetes_namespace.aws-efs-csi-driver](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) | resource |
| [kubernetes_namespace.aws-for-fluent-bit](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) | resource |
| [kubernetes_namespace.aws-load-balancer-controller](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace) | resource |
    "chars": 77,
    "preview": "data \"google_project\" \"current\" {}\n\ndata \"google_client_config\" \"current\" {}\n"
  },
  {
    "path": "modules/google/external-dns.tf",
    "chars": 7048,
    "preview": "locals {\n\n  external-dns = { for k, v in var.external-dns : k => merge(\n    local.helm_defaults,\n    {\n      chart      "
  },
  {
    "path": "modules/google/ingress-nginx.tf",
    "chars": 8058,
    "preview": "locals {\n\n  ingress-nginx = merge(\n    local.helm_defaults,\n    {\n      name                   = local.helm_dependencies"
  },
  {
    "path": "modules/google/ip-masq-agent.tf",
    "chars": 467,
    "preview": "locals {\n  ip-masq-agent = merge(\n    {\n      enabled = false\n    },\n    var.ip-masq-agent\n  )\n}\n\ndata \"kubectl_filename"
  },
  {
    "path": "modules/google/kube-prometheus.tf",
    "chars": 19518,
    "preview": "locals {\n  kube-prometheus-stack = merge(\n    local.helm_defaults,\n    {\n      name                                  = l"
  },
  {
    "path": "modules/google/loki-stack.tf",
    "chars": 11898,
    "preview": "locals {\n  loki-stack = merge(\n    local.helm_defaults,\n    {\n      name                   = local.helm_dependencies[ind"
  },
  {
    "path": "modules/google/manifests/gke-ip-masq/ip-masq-agent-configmap.yaml",
    "chars": 241,
    "preview": "---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: ip-masq-agent\n  namespace: kube-system\ndata:\n  config: |\n    nonMas"
  },
  {
    "path": "modules/google/manifests/gke-ip-masq/ip-masq-agent-daemonset.yaml",
    "chars": 1150,
    "preview": "---\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n  name: ip-masq-agent\n  namespace: kube-system\nspec:\n  selector:\n    m"
  },
  {
    "path": "modules/google/templates/cert-manager-cluster-issuers.yaml.j2",
    "chars": 1926,
    "preview": "---\napiVersion: cert-manager.io/v1\nkind: ClusterIssuer\nmetadata:\n  name: letsencrypt-staging\nspec:\n  acme:\n    server: h"
  },
  {
    "path": "modules/google/templates/cert-manager-cluster-issuers.yaml.tpl",
    "chars": 1240,
    "preview": "---\napiVersion: cert-manager.io/v1\nkind: ClusterIssuer\nmetadata:\n  name: letsencrypt-staging\nspec:\n  acme:\n    server: h"
  },
  {
    "path": "modules/google/templates/cni-metrics-helper.yaml.tpl",
    "chars": 1977,
    "preview": "---\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: cni-metrics-helper\nroleRef:\n  ap"
  },
  {
    "path": "modules/google/thanos-memcached.tf",
    "chars": 2376,
    "preview": "locals {\n\n  thanos-memcached = merge(\n    local.helm_defaults,\n    {\n      chart         = local.helm_dependencies[index"
  },
  {
    "path": "modules/google/thanos-receive.tf",
    "chars": 11221,
    "preview": "locals {\n\n  thanos-receive = merge(\n    local.helm_defaults,\n    {\n      name                    = \"thanos\"\n      chart "
  },
  {
    "path": "modules/google/thanos-storegateway.tf",
    "chars": 4622,
    "preview": "locals {\n\n  thanos-storegateway = { for k, v in var.thanos-storegateway : k => merge(\n    local.helm_defaults,\n    {\n   "
  },
  {
    "path": "modules/google/thanos-tls-querier.tf",
    "chars": 7296,
    "preview": "locals {\n\n  thanos-tls-querier = { for k, v in var.thanos-tls-querier : k => merge(\n    local.helm_defaults,\n    {\n     "
  },
  {
    "path": "modules/google/thanos.tf",
    "chars": 16233,
    "preview": "locals {\n\n  thanos = merge(\n    local.helm_defaults,\n    {\n      name                            = \"thanos\"\n      chart "
  },
  {
    "path": "modules/google/variables-google.tf",
    "chars": 815,
    "preview": "variable \"google\" {\n  description = \"GCP provider customization\"\n  type        = any\n  default     = {}\n}\n\nvariable \"pro"
  },
  {
    "path": "modules/google/velero.tf",
    "chars": 9205,
    "preview": "locals {\n  velero = merge(\n    local.helm_defaults,\n    {\n      name                    = local.helm_dependencies[index("
  },
  {
    "path": "modules/google/versions.tf",
    "chars": 661,
    "preview": "terraform {\n  required_version = \">= 1.3\"\n  required_providers {\n    google      = \">= 4.69\"\n    google-beta = \">= 4.69\""
  },
  {
    "path": "modules/google/victoria-metrics-k8s-stack.tf",
    "chars": 7197,
    "preview": "locals {\n  victoria-metrics-k8s-stack = merge(\n    local.helm_defaults,\n    {\n      name                             = l"
  },
  {
    "path": "modules/scaleway/.terraform-docs.yml",
    "chars": 28,
    "preview": "settings:\n  lockfile: false\n"
  },
  {
    "path": "modules/scaleway/README.md",
    "chars": 37302,
    "preview": "# terraform-kubernetes-addons:scaleway\n\n[![semantic-release](https://img.shields.io/badge/%20%20%F0%9F%93%A6%F0%9F%9A%80"
  },
  {
    "path": "modules/scaleway/cert-manager.tf",
    "chars": 11843,
    "preview": "locals {\n\n  cert-manager = merge(\n    local.helm_defaults,\n    {\n      name                                  = local.hel"
  },
  {
    "path": "modules/scaleway/examples/README.md",
    "chars": 93,
    "preview": "## Examples\n\nExamples are located in [tkap](https://github.com/particuleio/tkap) repository.\n"
  },
  {
    "path": "modules/scaleway/external-dns.tf",
    "chars": 5729,
    "preview": "locals {\n\n  external-dns = merge(\n    local.helm_defaults,\n    {\n      name                   = local.helm_dependencies["
  },
  {
    "path": "modules/scaleway/ingress-nginx.tf",
    "chars": 8153,
    "preview": "locals {\n\n  ingress-nginx = merge(\n    local.helm_defaults,\n    {\n      name                   = local.helm_dependencies"
  },
  {
    "path": "modules/scaleway/kube-prometheus.tf",
    "chars": 14283,
    "preview": "locals {\n  kube-prometheus-stack = merge(\n    local.helm_defaults,\n    {\n      name                     = local.helm_dep"
  },
  {
    "path": "modules/scaleway/locals-scaleway.tf",
    "chars": 280,
    "preview": "locals {\n\n  scaleway_defaults = {\n    scw_access_key              = \"\"\n    scw_secret_key              = \"\"\n    scw_defa"
  },
  {
    "path": "modules/scaleway/loki-stack.tf",
    "chars": 9548,
    "preview": "locals {\n  loki-stack = merge(\n    local.helm_defaults,\n    {\n      name                   = local.helm_dependencies[ind"
  },
  {
    "path": "modules/scaleway/templates/cert-manager-cluster-issuers.yaml.tpl",
    "chars": 3388,
    "preview": "---\napiVersion: cert-manager.io/v1\nkind: ClusterIssuer\nmetadata:\n  name: letsencrypt-staging\nspec:\n  acme:\n    server: h"
  },
  {
    "path": "modules/scaleway/thanos-memcached.tf",
    "chars": 2376,
    "preview": "locals {\n\n  thanos-memcached = merge(\n    local.helm_defaults,\n    {\n      chart         = local.helm_dependencies[index"
  },
  {
    "path": "modules/scaleway/thanos-storegateway.tf",
    "chars": 3691,
    "preview": "locals {\n\n  thanos-storegateway = { for k, v in var.thanos-storegateway : k => merge(\n    local.helm_defaults,\n    {\n   "
  },
  {
    "path": "modules/scaleway/thanos-tls-querier.tf",
    "chars": 7296,
    "preview": "locals {\n\n  thanos-tls-querier = { for k, v in var.thanos-tls-querier : k => merge(\n    local.helm_defaults,\n    {\n     "
  },
  {
    "path": "modules/scaleway/thanos.tf",
    "chars": 10773,
    "preview": "locals {\n\n  thanos = merge(\n    local.helm_defaults,\n    {\n      name                    = \"thanos\"\n      chart         "
  },
  {
    "path": "modules/scaleway/variables-scaleway.tf",
    "chars": 477,
    "preview": "variable \"scaleway\" {\n  description = \"Scaleway provider customization\"\n  type        = any\n  default     = {}\n}\n\nvariab"
  },
  {
    "path": "modules/scaleway/velero.tf",
    "chars": 5582,
    "preview": "locals {\n  velero = merge(\n    local.helm_defaults,\n    {\n      name                   = local.helm_dependencies[index(l"
  },
  {
    "path": "modules/scaleway/versions.tf",
    "chars": 721,
    "preview": "terraform {\n  required_version = \">= 1.5.7\"\n  required_providers {\n    helm = {\n      source  = \"hashicorp/helm\"\n      v"
  },
  {
    "path": "modules/scaleway/victoria-metrics-k8s-stack.tf",
    "chars": 7261,
    "preview": "locals {\n  victoria-metrics-k8s-stack = merge(\n    local.helm_defaults,\n    {\n      name                             = l"
  },
  {
    "path": "node-problem-detector.tf",
    "chars": 3387,
    "preview": "locals {\n  npd = merge(\n    local.helm_defaults,\n    {\n      name                   = local.helm_dependencies[index(loca"
  },
  {
    "path": "priority-class.tf",
    "chars": 739,
    "preview": "locals {\n  priority-class-ds = merge(\n    {\n      create = true\n      name   = \"kubernetes-addons-ds\"\n      value  = \"10"
  },
  {
    "path": "prometheus-adapter.tf",
    "chars": 4231,
    "preview": "locals {\n  prometheus-adapter = merge(\n    local.helm_defaults,\n    {\n      name                   = local.helm_dependen"
  },
  {
    "path": "prometheus-blackbox-exporter.tf",
    "chars": 4740,
    "preview": "locals {\n  prometheus-blackbox-exporter = merge(\n    local.helm_defaults,\n    {\n      name                   = local.hel"
  },
  {
    "path": "promtail.tf",
    "chars": 6321,
    "preview": "locals {\n\n  promtail = merge(\n    local.helm_defaults,\n    {\n      name                   = local.helm_dependencies[inde"
  },
  {
    "path": "reloader.tf",
    "chars": 3374,
    "preview": "locals {\n\n  reloader = merge(\n    local.helm_defaults,\n    {\n      name                   = local.helm_dependencies[inde"
  },
  {
    "path": "sealed-secrets.tf",
    "chars": 3665,
    "preview": "locals {\n\n  sealed-secrets = merge(\n    local.helm_defaults,\n    {\n      name                   = local.helm_dependencie"
  },
  {
    "path": "secrets-store-csi-driver.tf",
    "chars": 4321,
    "preview": "locals {\n  secrets-store-csi-driver = merge(\n    local.helm_defaults,\n    {\n      name                   = local.helm_de"
  },
  {
    "path": "templates/cert-manager-cluster-issuers.yaml.tpl",
    "chars": 770,
    "preview": "---\napiVersion: cert-manager.io/v1\nkind: ClusterIssuer\nmetadata:\n  name: letsencrypt-staging\nspec:\n  acme:\n    server: h"
  },
  {
    "path": "templates/cert-manager-csi-driver.yaml.tpl",
    "chars": 3362,
    "preview": "apiVersion: storage.k8s.io/v1beta1\nkind: CSIDriver\nmetadata:\n  name: csi.cert-manager.io\nspec:\n  podInfoOnMount: true\n  "
  },
  {
    "path": "tigera-operator.tf",
    "chars": 6395,
    "preview": "locals {\n  tigera-operator = merge(\n    local.helm_defaults,\n    {\n      name                   = local.helm_dependencie"
  },
  {
    "path": "traefik.tf",
    "chars": 4940,
    "preview": "locals {\n\n  traefik = merge(\n    local.helm_defaults,\n    {\n      name                   = local.helm_dependencies[index"
  },
  {
    "path": "variables.tf",
    "chars": 6992,
    "preview": "variable \"admiralty\" {\n  description = \"Customize admiralty chart, see `admiralty.tf` for supported values\"\n  type      "
  },
  {
    "path": "versions.tf",
    "chars": 635,
    "preview": "terraform {\n  required_version = \">= 1.5.7\"\n  required_providers {\n    helm = {\n      source  = \"hashicorp/helm\"\n      v"
  },
  {
    "path": "victoria-metrics-k8s-stack.tf",
    "chars": 7134,
    "preview": "locals {\n  victoria-metrics-k8s-stack = merge(\n    local.helm_defaults,\n    {\n      name                             = l"
  }
]

About this extraction

This page contains the full source code of the particuleio/terraform-kubernetes-addons GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 136 files (819.1 KB), approximately 213.2k tokens. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.
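The file index above is a JSON array of objects with `path`, `chars`, and `preview` fields. As a minimal Python sketch (the function name `summarize_index` is illustrative, not part of GitExtract), the per-file `chars` counts can be parsed and totaled to cross-check the stated file count and size:

```python
import json

def summarize_index(raw: str) -> dict:
    """Summarize a GitExtract-style file index: a JSON array of
    {"path": ..., "chars": ..., "preview": ...} entries."""
    entries = json.loads(raw)
    total_chars = sum(e["chars"] for e in entries)
    return {
        "files": len(entries),
        "total_kb": round(total_chars / 1024, 1),
    }

# Example with two entries shaped like the index above:
sample = ('[{"path": "a.tf", "chars": 1024, "preview": ""},'
          ' {"path": "b.tf", "chars": 512, "preview": ""}]')
print(summarize_index(sample))  # {'files': 2, 'total_kb': 1.5}
```

Run against the full index, this should reproduce the 136-file / 819.1 KB figures quoted above.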

Extracted by GitExtract, a free GitHub-repository-to-text converter for AI, built by Nikandr Surkov.
