Full Code of xunleii/terraform-module-k3s for AI

Repository: xunleii/terraform-module-k3s
Branch: master
Commit: 1ae2afd8013f
Files: 47
Total size: 127.8 KB

Directory structure:
terraform-module-k3s/

├── .devcontainer/
│   └── devcontainer.json
├── .github/
│   ├── ISSUE_TEMPLATE/
│   │   ├── bug-report.yaml
│   │   └── config.yaml
│   ├── labels.yaml
│   ├── renovate.json
│   └── workflows/
│       ├── github.documentation.yaml
│       ├── github.labeler.yaml
│       ├── github.stale.yaml
│       ├── security.terraform.yaml
│       ├── security.workflows.yaml
│       ├── templates.terraform.pull_requests.lint.yaml
│       ├── templates.terraform.pull_requests.plan.yaml
│       ├── terraform.lint.yaml
│       └── terraform.plan.yaml
├── .github_changelog_generator
├── .gitignore
├── .terraform-docs.yaml
├── .tool-versions
├── CHANGELOG.md
├── LICENSE
├── README.md
├── Taskfile.yaml
├── agent_nodes.tf
├── examples/
│   ├── civo-k3s/
│   │   ├── README.md
│   │   ├── k3s.tf
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── versions.tf
│   ├── do-k3s/
│   │   ├── README.md
│   │   ├── k3s.tf
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── versions.tf
│   └── hcloud-k3s/
│       ├── README.md
│       ├── k3s.tf
│       ├── main.tf
│       ├── outputs.tf
│       ├── variables.tf
│       └── versions.tf
├── k3s_certificates.tf
├── k3s_version.tf
├── main.tf
├── outputs.tf
├── renovate.json
├── server_nodes.tf
├── variables.tf
└── versions.tf
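
Before the raw file contents, a quick orientation: this repository is consumed as a Terraform module. The sketch below is a minimal, hypothetical invocation only — attribute names such as `servers`, `agents`, `ip`, and `connection`, and all host values, are assumptions based on typical usage; the authoritative input contract is in `variables.tf` below (the `examples/` directories show real invocations).

```hcl
# Hypothetical minimal invocation (registry source per the include rule
# in .terraform-docs.yaml). All names and addresses are placeholders.
module "k3s" {
  source = "xunleii/k3s/module"

  servers = {
    "k3s-server-1" = {
      ip = "10.0.0.1" # internal node IP announced to the cluster
      connection = {
        host = "203.0.113.10" # public IP used for SSH provisioning
        user = "root"
      }
    }
  }

  agents = {
    "k3s-agent-1" = {
      ip = "10.0.0.2"
      connection = {
        host = "203.0.113.11"
        user = "root"
      }
    }
  }
}
```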

================================================
FILE CONTENTS
================================================

================================================
FILE: .devcontainer/devcontainer.json
================================================
{
  "$schema": "https://raw.githubusercontent.com/devcontainers/spec/main/schemas/devContainer.schema.json",
  "name": "k3s Terraform module - Dev Container",
  "image": "mcr.microsoft.com/vscode/devcontainers/universal",
  "features": {
    "ghcr.io/devcontainers-contrib/features/yamllint:2.0.9": {},
    "ghcr.io/devcontainers/features/terraform:1.4.2": {
      "version": "1.6.2"
    },
    "ghcr.io/devcontainers-contrib/features/go-task:1.0.5": {},
    "ghcr.io/dhoeric/features/terraform-docs:1.0.0": {
      "version": "0.16.0"
    },
    "ghcr.io/itsmechlark/features/act:1.0.0": {},
    "ghcr.io/itsmechlark/features/trivy:1.0.0": {}
  },
  "customizations": {
    "vscode": {
      "extensions": [
        "bierner.github-markdown-preview",
        "github.copilot",
        "ms-vscode.makefile-tools",
        "redhat.vscode-yaml",
        "tylerharris.terraform-link-docs",
        "yzhang.markdown-all-in-one",
        "task.vscode-task"
      ]
    }
  }
}

================================================
FILE: .github/ISSUE_TEMPLATE/bug-report.yaml
================================================
name: Bug Report
description: File a bug report for this project
title: ":bug: "
labels: ["kind/bug"]
projects: ["xunleii/2"]

body:
  - type: markdown
    attributes:
      value: |
        Before opening a new issue, please search existing issues.

        ----

        Thank you for filing a bug report! Please fill out the sections below to help us reproduce the bug.

  - type: textarea
    id: what_happened
    attributes:
      label: ":fire: What happened?"
      description: Describe the issue you are experiencing here
    validations:
      required: true
  - type: textarea
    id: what_expected
    attributes:
      label: ":+1: What did you expect to happen?"
      description: Describe what you expected to happen here
    validations:
      required: false
  - type: textarea
    id: how_reproduce
    attributes:
      label: ":mag: How can we reproduce the issue?"
      description: Describe how to reproduce the problem in as much detail as possible
    validations:
      required: true

  - type: input
    id: module_version
    attributes:
      label: ":wrench: Module version"
      description: Please provide the version of the module you are using
    validations:
      required: true
  - type: input
    id: terraform_version
    attributes:
      label: ":wrench: Terraform version"
      description: Please provide the version of Terraform you are using
    validations:
      required: true

  - type: textarea
    id: provider_list
    attributes:
      label: ":wrench: Terraform providers"
      description: List all the providers you are using with their version (copy the output of `terraform providers`)
    validations:
      required: true

  - type: textarea
    id: additional_info
    attributes:
      label: ":clipboard: Additional information"
      description: Please provide any additional information that might be useful
    validations:
      required: false


================================================
FILE: .github/ISSUE_TEMPLATE/config.yaml
================================================
blank_issues_enabled: true


================================================
FILE: .github/labels.yaml
================================================
- name: kind/bug
  description: Something isn't working
  color: D73A4A
- name: kind/dependencies
  description: Dependencies upgrade
  color: 2B098D
- name: kind/documentation
  description: Improvements or additions to documentation
  color: 0075CA
- name: kind/enhancement
  description: New feature or request
  color: A2EEEF
- name: kind/question
  description: Further information is requested
  color: D876E3

- name: size/XS
  color: 008000
- name: size/S
  color: 008000
- name: size/M
  color: FFFF00
- name: size/L
  color: FF0000
- name: size/XL
  color: FF0000

- name: status/stale
  description: This issue has not had recent activity
  color: 6A5ACD
- name: no-stale
  description: This issue cannot be marked as stale
  color: 6A5ACD

- name: terraform:plan
  description: Invoke Terraform plan workflow on the current PR
  color: 7A55CC

- name: duplicate
  description: This issue or pull request already exists
  color: CFD3D7
- name: good first issue
  description: Good for newcomers
  color: 7057FF
- name: help wanted
  description: Extra attention is needed
  color: 008672
- name: invalid
  description: This doesn't seem right
  color: E4E669
- name: wontfix
  description: This will not be worked on
  color: FFFFFF


================================================
FILE: .github/renovate.json
================================================
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",

  "assignAutomerge": true,
  "automergeStrategy": "auto",
  "dependencyDashboard": true,
  "labels": ["kind/dependencies"],
  "prConcurrentLimit": 5,
  "prHourlyLimit": 0
}


================================================
FILE: .github/workflows/github.documentation.yaml
================================================
---
name: '[bot] Update documentation assets (master only)'
on:
  push:
    branches: [master]

jobs:
  generate-docs-assets:
    name: Update documentation assets
    runs-on: ubuntu-latest
    permissions:
      contents: write
    steps:
      - uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
      - uses: heinrichreimer/github-changelog-generator-action@e60b5a2bd9fcd88dadf6345ff8327863fb8b490f # v2.4
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
      # NOTE: seems impossible to use terraform-docs/gh-actions with EndBug/add-and-commit... so
      #       we will do everything manually
      - name: Generate README.md with terraform-docs
        run: |
          mkdir --parent .terraform-docs
          curl -L "https://github.com/terraform-docs/terraform-docs/releases/download/v0.16.0/terraform-docs-v0.16.0-$(uname)-amd64.tar.gz" | tar -xvzC .terraform-docs
          chmod +x .terraform-docs/terraform-docs
          
          .terraform-docs/terraform-docs .
      - uses: EndBug/add-and-commit@1bad3abcf0d6ec49a5857d124b0bfb52dc7bb081 # v9.1.3
        with:
          default_author: github_actions
          message: "Update documentation assets"


================================================
FILE: .github/workflows/github.labeler.yaml
================================================
---
name: '[bot] Synchronize labels'
on:
  push:
    branches: [master]
    paths: [.github/workflows/github.labeler.yaml, .github/labels.yaml]
  schedule:
    - cron: '0 0 * * *'

jobs:
  sync:
    name: Synchronize labels
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
      - uses: micnncim/action-label-syncer@3abd5ab72fda571e69fffd97bd4e0033dd5f495c # v1.3.0
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          manifest: .github/labels.yaml
          prune: true


================================================
FILE: .github/workflows/github.stale.yaml
================================================
---
name: '[bot] Close stale issues and PRs'
on:
  schedule:
    - cron: '0 0 * * *'

jobs:
  stale:
    runs-on: ubuntu-latest
    permissions:
      contents: write
      issues: write
      pull-requests: write
    steps:
      - uses: actions/stale@28ca1036281a5e5922ead5184a1bbf96e5fc984e # v9.0.0
        with:
          days-before-close: 7
          days-before-stale: 30
          exempt-issue-labels: no-stale
          exempt-pr-labels: no-stale
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          stale-issue-label: status/stale
          stale-issue-message: 'This issue has been automatically marked as stale because it has not had recent activity. If the issue still persists, please leave a comment and it will be reopened.'
          stale-pr-label: status/stale
          stale-pr-message: 'This pull request has been automatically marked as stale because it has not had recent activity. If the pull request still needs attention, please leave a comment and it will be reopened.'


================================================
FILE: .github/workflows/security.terraform.yaml
================================================
name: Security hardening (Terraform)

on:
  pull_request:

jobs:
  trivy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
      - uses: aquasecurity/trivy-action@d43c1f16c00cfd3978dde6c07f4bbcf9eb6993ca # 0.16.1
        with:
          scan-type: config
          scan-ref: .
          exit-code: 1
          severity: UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL


================================================
FILE: .github/workflows/security.workflows.yaml
================================================
name: Security hardening (Github Actions workflows)

on:
  pull_request:
    types: [opened, synchronize]
    paths: [".github/workflows/**"]

jobs:
  ci_harden_security:
    name: Github Action security hardening
    runs-on: ubuntu-latest
    permissions:
      security-events: write
    steps:
      - uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1

      - name: Lint your Github Actions
        run: |
          curl -O https://raw.githubusercontent.com/rhysd/actionlint/main/.github/actionlint-matcher.json

          echo "::add-matcher::actionlint-matcher.json"
          bash <(curl https://raw.githubusercontent.com/rhysd/actionlint/main/scripts/download-actionlint.bash)
          ./actionlint -color

      - name: Ensure SHA pinned actions
        uses: zgosalvez/github-actions-ensure-sha-pinned-actions@70c4af2ed5282c51ba40566d026d6647852ffa3e # v5.0.1


================================================
FILE: .github/workflows/templates.terraform.pull_requests.lint.yaml
================================================
name: IaaS - Terraform CI (for pull requests) - Lint

on:
  workflow_call:
    inputs:
      terraform_workdir:
        description: Working directory where Terraform files are
        required: false
        default: "."
        type: string
      terraform_version:
        description: Terraform version that we should use (latest by default)
        required: false
        type: string

jobs:
  # `terraform fmt` checks that TF files are in a canonical format; `terraform validate` checks them for HCL issues
  terraform_validate:
    name: Terraform files validation
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
      - uses: hashicorp/setup-terraform@5e8dbf3c6d9deaf4193ca7a8fb23f2ac83bb6c85 # v4.0.0
        with:
          terraform_version: ${{ inputs.terraform_version }}
      - name: Pre-hook Terraform workflow
        id: pre
        run: |
          # Setup `workdir` suffix used to give more information during execution
          if [[ '${{ inputs.terraform_workdir }}' == '.' ]]; then
            echo "workdir=" >> "${GITHUB_OUTPUT}"
          else
            echo "workdir=(${{ inputs.terraform_workdir }})" >> "${GITHUB_OUTPUT}"
          fi

      # --- `terraform fmt`
      - name: Check if all Terraform configuration files are in a canonical format ${{ steps.pre.outputs.workdir }}
        id: fmt
        run: terraform fmt -check -recursive -diff -no-color
        working-directory: ${{ inputs.terraform_workdir }}
      - uses: marocchino/sticky-pull-request-comment@67d0dec7b07ed060a405f9b2a64b8ab319fdd7db # v2.9.2
        if: failure() && steps.fmt.outcome == 'failure'
        with:
          recreate: true
          header: tf::${{ steps.pre.outputs.workdir }}
          message: |
            # Terraform CI/CD ${{ steps.pre.outputs.workdir }}

            - [ ] :paintbrush: Check if all Terraform configuration files are in a canonical format

            ### 🚫 Failure reason
            ```terraform
            ${{ steps.fmt.outputs.stdout }}
            ```
            <br/>

            > _Report based on commit ${{ github.sha }} (authored by **@${{ github.actor }}**).  See [`actions#${{ github.run_id }}`](https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}) for more details._

      # --- `terraform init`
      - name: Initialize Terraform working directory ${{ steps.pre.outputs.workdir }}
        id: init
        env:
          TF_IN_AUTOMATION: yes
        run: terraform init -no-color -backend=false
        working-directory: ${{ inputs.terraform_workdir }}
      - uses: marocchino/sticky-pull-request-comment@67d0dec7b07ed060a405f9b2a64b8ab319fdd7db # v2.9.2
        if: failure() && steps.init.outcome == 'failure'
        with:
          recreate: true
          header: tf::${{ steps.pre.outputs.workdir }}
          message: |
            # Terraform CI/CD ${{ steps.pre.outputs.workdir }}

            - [x] :paintbrush: Check if all Terraform configuration files are in a canonical format
            - [ ] :hammer_and_wrench: Validate the configuration files

            ### 🚫 Failure reason
            ```
            ${{ steps.init.outputs.stderr }}
            ```
            <br/>

            > _Report based on commit ${{ github.sha }} (authored by **@${{ github.actor }}**).  See [`actions#${{ github.run_id }}`](https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}) for more details._

      # --- `terraform validate`
      - name: Validate the configuration files ${{ steps.pre.outputs.workdir }}
        id: validate
        env:
          TF_IN_AUTOMATION: yes
        run: terraform validate -no-color
        working-directory: ${{ inputs.terraform_workdir }}
      - uses: marocchino/sticky-pull-request-comment@67d0dec7b07ed060a405f9b2a64b8ab319fdd7db # v2.9.2
        if: failure() && steps.validate.outcome == 'failure'
        with:
          recreate: true
          header: tf::${{ steps.pre.outputs.workdir }}
          message: |
            # Terraform CI/CD ${{ steps.pre.outputs.workdir }}

            - [x] :paintbrush: Check if all Terraform configuration files are in a canonical format
            - [ ] :hammer_and_wrench: Validate the configuration files

            ### 🚫 Failure reason
            ```
            ${{ steps.validate.outputs.stderr }}
            ```
            <br/>

            > _Report based on commit ${{ github.sha }} (authored by **@${{ github.actor }}**).  See [`actions#${{ github.run_id }}`](https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}) for more details._
      - uses: marocchino/sticky-pull-request-comment@67d0dec7b07ed060a405f9b2a64b8ab319fdd7db # v2.9.2
        if: success()
        with:
          recreate: true
          header: tf::${{ steps.pre.outputs.workdir }}
          message: |
            # Terraform CI/CD ${{ steps.pre.outputs.workdir }}

            - [x] :paintbrush: Check if all Terraform configuration files are in a canonical format
            - [x] :hammer_and_wrench: Validate the configuration files

            > _Report based on commit ${{ github.sha }} (authored by **@${{ github.actor }}**).  See [`actions#${{ github.run_id }}`](https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}) for more details._

================================================
FILE: .github/workflows/templates.terraform.pull_requests.plan.yaml
================================================
name: IaaS - Terraform CI (for pull requests) - Plan

on:
  workflow_call:
    inputs:
      after_lint:
        default: true
        description: Is this workflow run after lint?
        required: false
        type: boolean

      env:
        description: List of environment variables to set (YAML formatted)
        required: false
        type: string

      terraform_vars:
        description: Terraform variables (YAML formatted)
        required: false
        type: string

      terraform_version:
        description: Terraform version that we should use (latest by default)
        required: false
        type: string
      terraform_workdir:
        description: Working directory where Terraform files are
        required: false
        default: "."
        type: string

    secrets:
      env:
        description: List of sensitive environment variables to set (YAML formatted)
        required: false

      terraform_vars:
        description: Sensitive Terraform variables (YAML formatted)
        required: false

jobs:
  # Terraform plan generates the speculative execution plan
  terraform_plan:
    name: Generate a speculative execution plan
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
      - uses: hashicorp/setup-terraform@5e8dbf3c6d9deaf4193ca7a8fb23f2ac83bb6c85 # v4.0.0
        with:
          terraform_version: ${{ inputs.terraform_version }}
      - name: Pre-hook Terraform workflow
        id: pre
        run: |
          import os
          import yaml

          # Setup `workdir` suffix used to give more information during execution

          with open(os.getenv('GITHUB_OUTPUT'), 'a') as output:
            if '${{ inputs.terraform_workdir }}' == '.':
              output.write('workdir=\n')
            else:
              output.write('workdir=(${{ inputs.terraform_workdir }})\n')

            if '${{ inputs.after_lint }}' == 'true':
              output.write('lint_fmt_success=- [x] :paintbrush: Check if all Terraform configuration files are in a canonical format\n')
              output.write('lint_val_success=- [x] :hammer_and_wrench: Validate the configuration files\n')
            else:
              output.write('lint_fmt_success=\n')
              output.write('lint_val_success=\n')

          # Import Terraform variables
  
          tf_env = '''
          ${{ inputs.env }}
          ${{ secrets.env }}
          '''

          tf_vars = '''
          ${{ inputs.terraform_vars }}
          ${{ secrets.terraform_vars }}
          '''

          with open(os.getenv('GITHUB_ENV'), 'a') as env:
            if tf_env.strip():
              for var in yaml.safe_load(tf_env).items():
                env.write('%s=%s\n' % var)
            if tf_vars.strip():
              for var in yaml.safe_load(tf_vars).items():
                env.write('TF_VAR_%s=%s\n' % var)
        shell: python

      # --- `terraform init`
      - name: Initialize Terraform working directory ${{ steps.pre.outputs.workdir }}
        id: init
        env:
          TF_IN_AUTOMATION: yes
        run: terraform init -no-color -backend=false
        working-directory: ${{ inputs.terraform_workdir }}
      - uses: marocchino/sticky-pull-request-comment@67d0dec7b07ed060a405f9b2a64b8ab319fdd7db # v2.9.2
        if: failure() && steps.init.outcome == 'failure'
        with:
          recreate: true
          header: tf::${{ steps.pre.outputs.workdir }}
          message: |
            # Terraform CI/CD ${{ steps.pre.outputs.workdir }}

            ${{ steps.pre.outputs.lint_fmt_success }}
            ${{ steps.pre.outputs.lint_val_success }}
            - [ ] :scroll: Generate a speculative execution plan

            ### 🚫 Failure reason
            ```
            ${{ steps.init.outputs.stderr }}
            ```
            <br/>

            > _Report based on commit ${{ github.sha }} (authored by **@${{ github.actor }}**).  See [`actions#${{ github.run_id }}`](https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}) for more details._

      # --- `terraform plan`
      - name: Generate a speculative execution plan ${{ steps.pre.outputs.workdir }}
        id: plan
        env:
          TF_IN_AUTOMATION: yes
        run: terraform plan -input=false -no-color -parallelism=30 -compact-warnings
        working-directory: ${{ inputs.terraform_workdir }}
      - uses: marocchino/sticky-pull-request-comment@67d0dec7b07ed060a405f9b2a64b8ab319fdd7db # v2.9.2
        if: failure() && steps.plan.outcome == 'failure'
        with:
          recreate: true
          header: tf::${{ steps.pre.outputs.workdir }}
          message: |
            # Terraform CI/CD ${{ steps.pre.outputs.workdir }}

            ${{ steps.pre.outputs.lint_fmt_success }}
            ${{ steps.pre.outputs.lint_val_success }}
            - [ ] :scroll: Generate a speculative execution plan

            ### 🚫 Failure reason
            ```
            ${{ steps.plan.outputs.stderr }}
            ```
            <br/>

            > _Report based on commit ${{ github.sha }} (authored by **@${{ github.actor }}**).  See [`actions#${{ github.run_id }}`](https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}) for more details._
      - uses: marocchino/sticky-pull-request-comment@67d0dec7b07ed060a405f9b2a64b8ab319fdd7db # v2.9.2
        if: success()
        with:
          recreate: true
          header: tf::${{ steps.pre.outputs.workdir }}
          message: |
            # Terraform CI/CD ${{ steps.pre.outputs.workdir }}

            ${{ steps.pre.outputs.lint_fmt_success }}
            ${{ steps.pre.outputs.lint_val_success }}
            - [x] :scroll: Generate a speculative execution plan

            ### Terraform Plan output
            ```terraform
            ${{ steps.plan.outputs.stdout }}
            ```
            <br/>

            > _Report based on commit ${{ github.sha }} (authored by **@${{ github.actor }}**).  See [`actions#${{ github.run_id }}`](https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}) for more details._

================================================
FILE: .github/workflows/terraform.lint.yaml
================================================
name: Terraform HCL validation (PRs only)

on:
  pull_request:
    paths: ["**.tf"]

permissions:
  pull-requests: write

jobs:
  terraform-module-k3s:
    name: Terraform module
    uses: ./.github/workflows/templates.terraform.pull_requests.lint.yaml

  examples_hcloud-k3s:
    name: Hetzner Cloud
    needs: [terraform-module-k3s]
    uses: ./.github/workflows/templates.terraform.pull_requests.lint.yaml
    with:
      terraform_workdir: examples/hcloud-k3s

  examples_civo-k3s:
    name: CIVO
    needs: [terraform-module-k3s]
    uses: ./.github/workflows/templates.terraform.pull_requests.lint.yaml
    with:
      terraform_workdir: examples/civo-k3s


================================================
FILE: .github/workflows/terraform.plan.yaml
================================================
name: Terraform plan validation (PRs only)

on:
  pull_request:
    types: [labeled]

permissions:
  pull-requests: write

jobs:
  examples_hcloud-k3s:
    name: Hetzner Cloud
    if: ${{ github.event.label.name == 'terraform:plan' }}
    permissions:
      pull-requests: write
    secrets:
      env: |
        HCLOUD_TOKEN: ${{ secrets.HCLOUD_TOKEN }}
    uses: ./.github/workflows/templates.terraform.pull_requests.plan.yaml
    with:
      terraform_vars: |
        ssh_key: ''
      terraform_workdir: examples/hcloud-k3s

  unlabel-pull-request:
    if: always()
    name: Remove 'terraform:plan' label
    needs: [examples_hcloud-k3s]
    runs-on: ubuntu-latest
    steps:
      - name: Unlabel 'terraform:plan'
        uses: actions-ecosystem/action-remove-labels@d05162525702062b6bdef750ed8594fc024b3ed7
        with:
          labels: terraform:plan


================================================
FILE: .github_changelog_generator
================================================
add-sections={"dependencies":{"prefix":"**Dependencies upgrades:**", "labels":["kind/dependencies"]}}
project=terraform-module-k3s
user=xunleii


================================================
FILE: .gitignore
================================================
# Created by https://www.toptal.com/developers/gitignore/api/linux,windows,macos,terraform
# Edit at https://www.toptal.com/developers/gitignore?templates=linux,windows,macos,terraform

### Linux ###
*~

# temporary files which can be created if a process still has a handle open of a deleted file
.fuse_hidden*

# KDE directory preferences
.directory

# Linux trash folder which might appear on any partition or disk
.Trash-*

# .nfs files are created when an open file is removed but is still being accessed
.nfs*

### macOS ###
# General
.DS_Store
.AppleDouble
.LSOverride

# Icon must end with two \r
Icon


# Thumbnails
._*

# Files that might appear in the root of a volume
.DocumentRevisions-V100
.fseventsd
.Spotlight-V100
.TemporaryItems
.Trashes
.VolumeIcon.icns
.com.apple.timemachine.donotpresent

# Directories potentially created on remote AFP share
.AppleDB
.AppleDesktop
Network Trash Folder
Temporary Items
.apdisk

### macOS Patch ###
# iCloud generated files
*.icloud

### Terraform ###
# Local .terraform directories
**/.terraform/*

# .tfstate files
*.tfstate
*.tfstate.*

# Crash log files
crash.log
crash.*.log

# Exclude all .tfvars files, which are likely to contain sensitive data, such as
# password, private keys, and other secrets. These should not be part of version
# control as they are data points which are potentially sensitive and subject
# to change depending on the environment.
*.tfvars
*.tfvars.json

# Ignore override files as they are usually used to override resources locally and so
# are not checked in
override.tf
override.tf.json
*_override.tf
*_override.tf.json

# Include override files you do wish to add to version control using negated pattern
# !example_override.tf

# Include tfplan files to ignore the plan output of command: terraform plan -out=tfplan
# example: *tfplan*

# Ignore CLI configuration files
.terraformrc
terraform.rc

# Ignore temporary TF docs folder
.terraform-docs/

### Windows ###
# Windows thumbnail cache files
Thumbs.db
Thumbs.db:encryptable
ehthumbs.db
ehthumbs_vista.db

# Dump file
*.stackdump

# Folder config file
[Dd]esktop.ini

# Recycle Bin used on file shares
$RECYCLE.BIN/

# Windows Installer files
*.cab
*.msi
*.msix
*.msm
*.msp

# Windows shortcuts
*.lnk

# End of https://www.toptal.com/developers/gitignore/api/linux,windows,macos,terraform

### Taskfile ###
# Ignore taskfile generated files
.task/


================================================
FILE: .terraform-docs.yaml
================================================
formatter: "markdown table"

content: |-
  ## Example _(based on [Hetzner Cloud example](examples/hcloud-k3s))_

  ```hcl
  {{ include "examples/hcloud-k3s/k3s.tf" | replace "./../.." "xunleii/k3s/module" }}
  ```

  {{ .Inputs | replace "\"|\"" "\"\\|\"" }}

  {{ .Outputs }}

  {{ .Providers }}

output:
  file: README.md
  mode: inject
  template: |-
    <!-- BEGIN_TF_DOCS -->
    {{ .Content }}
    <!-- END_TF_DOCS -->

sort:
  enabled: true
  by: required


================================================
FILE: .tool-versions
================================================
act 0.2.57
task 3.31.0
terraform 1.14.6
terraform-docs 0.20.0
trivy 0.69.3


================================================
FILE: CHANGELOG.md
================================================
# Changelog

## [Unreleased](https://github.com/xunleii/terraform-module-k3s/tree/HEAD)

[Full Changelog](https://github.com/xunleii/terraform-module-k3s/compare/v3.4.0...HEAD)

**Dependencies upgrades:**

- chore\(deps\): update hashicorp/setup-terraform action to v4 [\#225](https://github.com/xunleii/terraform-module-k3s/pull/225) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update zgosalvez/github-actions-ensure-sha-pinned-actions action to v5 [\#223](https://github.com/xunleii/terraform-module-k3s/pull/223) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update dependency trivy to v0.69.3 [\#221](https://github.com/xunleii/terraform-module-k3s/pull/221) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update dependency terraform to v1.14.6 [\#214](https://github.com/xunleii/terraform-module-k3s/pull/214) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update ghcr.io/devcontainers/features/terraform docker tag to v1.4.2 [\#213](https://github.com/xunleii/terraform-module-k3s/pull/213) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update marocchino/sticky-pull-request-comment action to v2.9.2 [\#203](https://github.com/xunleii/terraform-module-k3s/pull/203) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update dependency terraform-docs to v0.20.0 [\#202](https://github.com/xunleii/terraform-module-k3s/pull/202) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update zgosalvez/github-actions-ensure-sha-pinned-actions action to v3.0.23 - autoclosed [\#201](https://github.com/xunleii/terraform-module-k3s/pull/201) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update ghcr.io/devcontainers/features/terraform docker tag to v1.3.10 [\#200](https://github.com/xunleii/terraform-module-k3s/pull/200) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update dependency terraform-docs to v0.17.0 [\#163](https://github.com/xunleii/terraform-module-k3s/pull/163) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update actions/stale action to v9 [\#162](https://github.com/xunleii/terraform-module-k3s/pull/162) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update heinrichreimer/github-changelog-generator-action action to v2.4 [\#161](https://github.com/xunleii/terraform-module-k3s/pull/161) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update dependency trivy to v0.48.2 [\#160](https://github.com/xunleii/terraform-module-k3s/pull/160) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update aquasecurity/trivy-action action to v0.16.1 [\#159](https://github.com/xunleii/terraform-module-k3s/pull/159) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update dependency act to v0.2.57 [\#158](https://github.com/xunleii/terraform-module-k3s/pull/158) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update zgosalvez/github-actions-ensure-sha-pinned-actions action to v3.0.3 [\#157](https://github.com/xunleii/terraform-module-k3s/pull/157) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update dependency terraform to v1.6.6 [\#156](https://github.com/xunleii/terraform-module-k3s/pull/156) ([renovate[bot]](https://github.com/apps/renovate))

**Closed issues:**

- :unicorn: Add Certificate data as outputs [\#182](https://github.com/xunleii/terraform-module-k3s/issues/182)
- Servers must have an odd number of nodes [\#172](https://github.com/xunleii/terraform-module-k3s/issues/172)
- :bug:  Cannot scale up server nodes  [\#153](https://github.com/xunleii/terraform-module-k3s/issues/153)
- Taking a node out of the configuration keeps the node within the cluster but cordoned [\#139](https://github.com/xunleii/terraform-module-k3s/issues/139)
- Consider Integration Testing with k3d [\#133](https://github.com/xunleii/terraform-module-k3s/issues/133)
- K3s Cluster Node\(s\) Upgrade [\#132](https://github.com/xunleii/terraform-module-k3s/issues/132)
- Unable to use on Windows Terraform [\#95](https://github.com/xunleii/terraform-module-k3s/issues/95)

**Merged pull requests:**

- let k8s\_ca\_certificates\_install depend on var.depends\_on\_ [\#164](https://github.com/xunleii/terraform-module-k3s/pull/164) ([sschaeffner](https://github.com/sschaeffner))

## [v3.4.0](https://github.com/xunleii/terraform-module-k3s/tree/v3.4.0) (2023-11-26)

[Full Changelog](https://github.com/xunleii/terraform-module-k3s/compare/v3.3.0...v3.4.0)

**Dependencies upgrades:**

- chore\(deps\): update dependency terraform to v1.6.4 [\#154](https://github.com/xunleii/terraform-module-k3s/pull/154) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update zgosalvez/github-actions-ensure-sha-pinned-actions action to v3 [\#152](https://github.com/xunleii/terraform-module-k3s/pull/152) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update hashicorp/setup-terraform action to v3 [\#151](https://github.com/xunleii/terraform-module-k3s/pull/151) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update zgosalvez/github-actions-ensure-sha-pinned-actions action to v1.4.1 [\#150](https://github.com/xunleii/terraform-module-k3s/pull/150) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update marocchino/sticky-pull-request-comment action to v2.8.0 [\#149](https://github.com/xunleii/terraform-module-k3s/pull/149) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update dependency trivy to v0.47.0 [\#148](https://github.com/xunleii/terraform-module-k3s/pull/148) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update aquasecurity/trivy-action action to v0.14.0 [\#147](https://github.com/xunleii/terraform-module-k3s/pull/147) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update hashicorp/setup-terraform action to v2.0.3 [\#146](https://github.com/xunleii/terraform-module-k3s/pull/146) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update dependency terraform to v1.6.3 [\#145](https://github.com/xunleii/terraform-module-k3s/pull/145) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update actions/checkout action to v4 [\#137](https://github.com/xunleii/terraform-module-k3s/pull/137) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update terraform http to v3.4.0 [\#130](https://github.com/xunleii/terraform-module-k3s/pull/130) ([renovate[bot]](https://github.com/apps/renovate))

**Closed issues:**

- When generate\_ca\_certificates = false, module does not export any kubeconfig [\#143](https://github.com/xunleii/terraform-module-k3s/issues/143)
- Refresh kubeconfig when terraform state is lost [\#142](https://github.com/xunleii/terraform-module-k3s/issues/142)
- terraform destroy gets stuck while draining the last node [\#138](https://github.com/xunleii/terraform-module-k3s/issues/138)
- cdktf compatibility  [\#135](https://github.com/xunleii/terraform-module-k3s/issues/135)
- hcloud-k3s doesnt work with v3.3.0 [\#127](https://github.com/xunleii/terraform-module-k3s/issues/127)
- Generated kubeconfig cannot be used \(certificate signed by unknown authority\) [\#107](https://github.com/xunleii/terraform-module-k3s/issues/107)
- Cluster CA certificate is not trusted [\#85](https://github.com/xunleii/terraform-module-k3s/issues/85)
- Windows Terraform - SSH authentication failed [\#43](https://github.com/xunleii/terraform-module-k3s/issues/43)
- Custom k3s cluster name inside of the admin kubeconfig  [\#144](https://github.com/xunleii/terraform-module-k3s/issues/144)
- 🚧 Refresh this repository [\#140](https://github.com/xunleii/terraform-module-k3s/issues/140)
- Error "Variable `name` is deprecated" [\#136](https://github.com/xunleii/terraform-module-k3s/issues/136)

**Merged pull requests:**

- :recycle: Cleanup this repository [\#141](https://github.com/xunleii/terraform-module-k3s/pull/141) ([xunleii](https://github.com/xunleii))
- fix--multi\_server-install [\#131](https://github.com/xunleii/terraform-module-k3s/pull/131) ([jacaudi](https://github.com/jacaudi))
- Fix k3s\_install\_env\_vars and hcloud-k3s example issues [\#128](https://github.com/xunleii/terraform-module-k3s/pull/128) ([xunleii](https://github.com/xunleii))

## [v3.3.0](https://github.com/xunleii/terraform-module-k3s/tree/v3.3.0) (2023-05-14)

[Full Changelog](https://github.com/xunleii/terraform-module-k3s/compare/v3.2.0...v3.3.0)

**Dependencies upgrades:**

- chore\(deps\): update endbug/add-and-commit action to v9.1.3 [\#123](https://github.com/xunleii/terraform-module-k3s/pull/123) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update terraform http to v3.3.0 [\#122](https://github.com/xunleii/terraform-module-k3s/pull/122) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update actions/checkout action to v3.5.2 [\#121](https://github.com/xunleii/terraform-module-k3s/pull/121) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update terraform random to v3.5.1 [\#120](https://github.com/xunleii/terraform-module-k3s/pull/120) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update actions/checkout action to v3.5.0 [\#119](https://github.com/xunleii/terraform-module-k3s/pull/119) ([renovate[bot]](https://github.com/apps/renovate))
- Update actions/checkout action to v3.4.0 [\#118](https://github.com/xunleii/terraform-module-k3s/pull/118) ([renovate[bot]](https://github.com/apps/renovate))
- Update actions/checkout action to v3.3.0 [\#108](https://github.com/xunleii/terraform-module-k3s/pull/108) ([renovate[bot]](https://github.com/apps/renovate))
- Update xunleii/github-actions-grimoire digest to 0ab2cd9 [\#106](https://github.com/xunleii/terraform-module-k3s/pull/106) ([renovate[bot]](https://github.com/apps/renovate))
- Update actions/checkout action to v3.1.0 [\#105](https://github.com/xunleii/terraform-module-k3s/pull/105) ([renovate[bot]](https://github.com/apps/renovate))
- Update EndBug/add-and-commit action to v9.1.1 [\#102](https://github.com/xunleii/terraform-module-k3s/pull/102) ([renovate[bot]](https://github.com/apps/renovate))
- Update Terraform http to v3 [\#101](https://github.com/xunleii/terraform-module-k3s/pull/101) ([renovate[bot]](https://github.com/apps/renovate))
- Update Terraform tls to v4 [\#100](https://github.com/xunleii/terraform-module-k3s/pull/100) ([renovate[bot]](https://github.com/apps/renovate))

**Closed issues:**

- API URL broken in build script when using dual stack configs [\#111](https://github.com/xunleii/terraform-module-k3s/issues/111)
- Deprecated attribute with Terraform 1.3.7 [\#110](https://github.com/xunleii/terraform-module-k3s/issues/110)
- Error: Invalid Attribute Value Match  [\#104](https://github.com/xunleii/terraform-module-k3s/issues/104)

**Merged pull requests:**

- Update workflows generating documentation assets [\#125](https://github.com/xunleii/terraform-module-k3s/pull/125) ([xunleii](https://github.com/xunleii))
- feat\(k3s\_env\_vars\): introduce k3s\_install\_env\_vars [\#124](https://github.com/xunleii/terraform-module-k3s/pull/124) ([FalcoSuessgott](https://github.com/FalcoSuessgott))
- Dual-stack & IPv6 fixes [\#113](https://github.com/xunleii/terraform-module-k3s/pull/113) ([djh00t](https://github.com/djh00t))
- Update providers and fix \#110 [\#112](https://github.com/xunleii/terraform-module-k3s/pull/112) ([xunleii](https://github.com/xunleii))
- Add support for INSTALL\_K3S\_SELINUX\_WARN [\#109](https://github.com/xunleii/terraform-module-k3s/pull/109) ([hobbypunk90](https://github.com/hobbypunk90))

## [v3.2.0](https://github.com/xunleii/terraform-module-k3s/tree/v3.2.0) (2022-10-18)

[Full Changelog](https://github.com/xunleii/terraform-module-k3s/compare/v3.1.0...v3.2.0)

**Dependencies upgrades:**

- Update actions-ecosystem/action-remove-labels digest to d051625 [\#103](https://github.com/xunleii/terraform-module-k3s/pull/103) ([renovate[bot]](https://github.com/apps/renovate))
- Update EndBug/add-and-commit action to v9.0.1 [\#99](https://github.com/xunleii/terraform-module-k3s/pull/99) ([renovate[bot]](https://github.com/apps/renovate))
- Update xunleii/github-actions-grimoire digest to 42f3d38 [\#98](https://github.com/xunleii/terraform-module-k3s/pull/98) ([renovate[bot]](https://github.com/apps/renovate))
- Update actions/checkout action to v3 [\#97](https://github.com/xunleii/terraform-module-k3s/pull/97) ([renovate[bot]](https://github.com/apps/renovate))
- Update EndBug/add-and-commit action to v9 [\#94](https://github.com/xunleii/terraform-module-k3s/pull/94) ([renovate[bot]](https://github.com/apps/renovate))
- Update Hetzner Cloud example [\#93](https://github.com/xunleii/terraform-module-k3s/pull/93) ([xunleii](https://github.com/xunleii))
- Update actions/checkout action to v2.4.2 [\#89](https://github.com/xunleii/terraform-module-k3s/pull/89) ([renovate[bot]](https://github.com/apps/renovate))
- Update xunleii/github-actions-grimoire digest to 7b2b767 [\#87](https://github.com/xunleii/terraform-module-k3s/pull/87) ([renovate[bot]](https://github.com/apps/renovate))
- Update actions/checkout action to v3 [\#86](https://github.com/xunleii/terraform-module-k3s/pull/86) ([renovate[bot]](https://github.com/apps/renovate))

**Closed issues:**

- Error sensitive var.servers [\#84](https://github.com/xunleii/terraform-module-k3s/issues/84)
- Publish a new version on the Terraform registry  [\#79](https://github.com/xunleii/terraform-module-k3s/issues/79)

**Merged pull requests:**

- fix: typo in variables.tf [\#96](https://github.com/xunleii/terraform-module-k3s/pull/96) ([Tchoupinax](https://github.com/Tchoupinax))
- Fix some Github Action issues [\#92](https://github.com/xunleii/terraform-module-k3s/pull/92) ([xunleii](https://github.com/xunleii))
- Reenable auto changelog generation [\#91](https://github.com/xunleii/terraform-module-k3s/pull/91) ([xunleii](https://github.com/xunleii))
- Add missing permission on github actions workflow [\#90](https://github.com/xunleii/terraform-module-k3s/pull/90) ([xunleii](https://github.com/xunleii))
- addressing changes in recent hashicorp tls provider [\#88](https://github.com/xunleii/terraform-module-k3s/pull/88) ([ptu](https://github.com/ptu))
- Generate Changelog automatically [\#82](https://github.com/xunleii/terraform-module-k3s/pull/82) ([xunleii](https://github.com/xunleii))

## [v3.1.0](https://github.com/xunleii/terraform-module-k3s/tree/v3.1.0) (2022-01-04)

[Full Changelog](https://github.com/xunleii/terraform-module-k3s/compare/v3.0.0...v3.1.0)

**Dependencies upgrades:**

- chore\(deps\): update commitlint monorepo \(major\) [\#78](https://github.com/xunleii/terraform-module-k3s/pull/78) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update actions/checkout action to v2.4.0 [\#77](https://github.com/xunleii/terraform-module-k3s/pull/77) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update commitlint monorepo to v15 \(major\) [\#76](https://github.com/xunleii/terraform-module-k3s/pull/76) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update zgosalvez/github-actions-ensure-sha-pinned-actions action to v1.1.1 [\#75](https://github.com/xunleii/terraform-module-k3s/pull/75) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update dependency husky to v7.0.4 [\#74](https://github.com/xunleii/terraform-module-k3s/pull/74) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update marocchino/sticky-pull-request-comment action to v2.2.0 [\#73](https://github.com/xunleii/terraform-module-k3s/pull/73) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update actions/checkout action to v2.3.5 [\#72](https://github.com/xunleii/terraform-module-k3s/pull/72) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update wagoid/commitlint-github-action action to v4.1.9 [\#71](https://github.com/xunleii/terraform-module-k3s/pull/71) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update dependency @commitlint/cli to v13.2.1 [\#70](https://github.com/xunleii/terraform-module-k3s/pull/70) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update marocchino/sticky-pull-request-comment action to v2.1.1 [\#68](https://github.com/xunleii/terraform-module-k3s/pull/68) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update terraform random to v3 [\#65](https://github.com/xunleii/terraform-module-k3s/pull/65) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update terraform null to v3 [\#64](https://github.com/xunleii/terraform-module-k3s/pull/64) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update terraform http to v2 [\#63](https://github.com/xunleii/terraform-module-k3s/pull/63) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update dependency husky to v7 [\#62](https://github.com/xunleii/terraform-module-k3s/pull/62) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): update commitlint monorepo to v13 \(major\) [\#61](https://github.com/xunleii/terraform-module-k3s/pull/61) ([renovate[bot]](https://github.com/apps/renovate))
- chore\(deps\): pin dependencies [\#58](https://github.com/xunleii/terraform-module-k3s/pull/58) ([renovate[bot]](https://github.com/apps/renovate))

**Merged pull requests:**

- Remove commit lint dependencies [\#81](https://github.com/xunleii/terraform-module-k3s/pull/81) ([xunleii](https://github.com/xunleii))
- Output the Kubernetes cluster secret [\#80](https://github.com/xunleii/terraform-module-k3s/pull/80) ([orf](https://github.com/orf))
- Add Hacktoberfest labels [\#69](https://github.com/xunleii/terraform-module-k3s/pull/69) ([xunleii](https://github.com/xunleii))
- Rewrite CI/CD workflows [\#67](https://github.com/xunleii/terraform-module-k3s/pull/67) ([xunleii](https://github.com/xunleii))
- Add new use\_sudo input to the documentation [\#66](https://github.com/xunleii/terraform-module-k3s/pull/66) ([Corwind](https://github.com/Corwind))
- add option to use kubectl with sudo [\#57](https://github.com/xunleii/terraform-module-k3s/pull/57) ([Corwind](https://github.com/Corwind))
- Configure Renovate [\#56](https://github.com/xunleii/terraform-module-k3s/pull/56) ([renovate[bot]](https://github.com/apps/renovate))
- Fix civo example [\#55](https://github.com/xunleii/terraform-module-k3s/pull/55) ([debovema](https://github.com/debovema))

## [v3.0.0](https://github.com/xunleii/terraform-module-k3s/tree/v3.0.0) (2021-06-27)

[Full Changelog](https://github.com/xunleii/terraform-module-k3s/compare/v2.2.4...v3.0.0)

**Closed issues:**

- rename variable name to cluster\_domain [\#53](https://github.com/xunleii/terraform-module-k3s/issues/53)
- Pod and Service cidrs must be passed on all masters \(not just the 1st one\) [\#52](https://github.com/xunleii/terraform-module-k3s/issues/52)
- Hetzner example doesn't work [\#50](https://github.com/xunleii/terraform-module-k3s/issues/50)
- mkdir: cannot create directory ‘/var/lib/rancher’: Permission denied [\#42](https://github.com/xunleii/terraform-module-k3s/issues/42)

**Merged pull requests:**

- Resolve issues \#52 & \#53 [\#54](https://github.com/xunleii/terraform-module-k3s/pull/54) ([xunleii](https://github.com/xunleii))

## [v2.2.4](https://github.com/xunleii/terraform-module-k3s/tree/v2.2.4) (2021-04-30)

[Full Changelog](https://github.com/xunleii/terraform-module-k3s/compare/v2.2.3...v2.2.4)

**Closed issues:**

- Failed to join the cluster with the same name [\#26](https://github.com/xunleii/terraform-module-k3s/issues/26)

**Merged pull requests:**

- Enhancing 'Hetzner example' docs [\#51](https://github.com/xunleii/terraform-module-k3s/pull/51) ([NicoWde](https://github.com/NicoWde))
- Add support for provisioning without logging in as root [\#49](https://github.com/xunleii/terraform-module-k3s/pull/49) ([caleb-devops](https://github.com/caleb-devops))

## [v2.2.3](https://github.com/xunleii/terraform-module-k3s/tree/v2.2.3) (2021-02-17)

[Full Changelog](https://github.com/xunleii/terraform-module-k3s/compare/v2.2.2...v2.2.3)

**Merged pull requests:**

- fix: add \*\_drain to kubernetes\_ready [\#48](https://github.com/xunleii/terraform-module-k3s/pull/48) ([xunleii](https://github.com/xunleii))

## [v2.2.2](https://github.com/xunleii/terraform-module-k3s/tree/v2.2.2) (2021-02-13)

[Full Changelog](https://github.com/xunleii/terraform-module-k3s/compare/v2.2.1...v2.2.2)

**Merged pull requests:**

- feat: add dependency endpoint to allow sychronizing k3s install & provisionning [\#47](https://github.com/xunleii/terraform-module-k3s/pull/47) ([xunleii](https://github.com/xunleii))

## [v2.2.1](https://github.com/xunleii/terraform-module-k3s/tree/v2.2.1) (2021-02-10)

[Full Changelog](https://github.com/xunleii/terraform-module-k3s/compare/v2.2.0...v2.2.1)

**Closed issues:**

- failed to start k3s node with label `node-role.kubernetes.io/***` [\#45](https://github.com/xunleii/terraform-module-k3s/issues/45)
- register: metadata.name: Invalid value [\#44](https://github.com/xunleii/terraform-module-k3s/issues/44)
- Fix this stupid CI [\#38](https://github.com/xunleii/terraform-module-k3s/issues/38)

**Merged pull requests:**

- fix: correct some installation issues \(\#44 & \#45\) [\#46](https://github.com/xunleii/terraform-module-k3s/pull/46) ([xunleii](https://github.com/xunleii))
- Generate Kubeconfig file [\#37](https://github.com/xunleii/terraform-module-k3s/pull/37) ([guitcastro](https://github.com/guitcastro))
- removed missing additional\_flags from readme [\#36](https://github.com/xunleii/terraform-module-k3s/pull/36) ([guitcastro](https://github.com/guitcastro))
- doc: update README [\#35](https://github.com/xunleii/terraform-module-k3s/pull/35) ([xunleii](https://github.com/xunleii))

## [v2.2.0](https://github.com/xunleii/terraform-module-k3s/tree/v2.2.0) (2021-01-03)

[Full Changelog](https://github.com/xunleii/terraform-module-k3s/compare/v2.1.0...v2.2.0)

**Closed issues:**

- kube\_config output missing  [\#41](https://github.com/xunleii/terraform-module-k3s/issues/41)
- NodeNotFound when trying to update nodes [\#31](https://github.com/xunleii/terraform-module-k3s/issues/31)

**Merged pull requests:**

- Try to fix this CI.... another time [\#40](https://github.com/xunleii/terraform-module-k3s/pull/40) ([xunleii](https://github.com/xunleii))
- Fix doc typo in readme [\#39](https://github.com/xunleii/terraform-module-k3s/pull/39) ([DblK](https://github.com/DblK))

## [v2.1.0](https://github.com/xunleii/terraform-module-k3s/tree/v2.1.0) (2020-08-26)

[Full Changelog](https://github.com/xunleii/terraform-module-k3s/compare/v2.0.1...v2.1.0)

**Closed issues:**

- Deprecation of network\_id in `hcloud_server_network` [\#29](https://github.com/xunleii/terraform-module-k3s/issues/29)
- Remove or fix the 'latest' feature [\#27](https://github.com/xunleii/terraform-module-k3s/issues/27)
- Agent not update when k3s version changes [\#24](https://github.com/xunleii/terraform-module-k3s/issues/24)
- Need actions to test automatically PR [\#5](https://github.com/xunleii/terraform-module-k3s/issues/5)

**Merged pull requests:**

- fix: repair Terraform workflow \(CI\) [\#33](https://github.com/xunleii/terraform-module-k3s/pull/33) ([xunleii](https://github.com/xunleii))
- Make sure the node is up before trying to use it. [\#32](https://github.com/xunleii/terraform-module-k3s/pull/32) ([tedsteen](https://github.com/tedsteen))
- fix: replace network\_id with subnet\_id [\#30](https://github.com/xunleii/terraform-module-k3s/pull/30) ([solidnerd](https://github.com/solidnerd))
- fix: use k3s update channels for latest releases instead of github [\#28](https://github.com/xunleii/terraform-module-k3s/pull/28) ([solidnerd](https://github.com/solidnerd))

## [v2.0.1](https://github.com/xunleii/terraform-module-k3s/tree/v2.0.1) (2020-05-31)

[Full Changelog](https://github.com/xunleii/terraform-module-k3s/compare/v2.0.0...v2.0.1)

**Closed issues:**

- CI needs to be fixed before v2 release [\#22](https://github.com/xunleii/terraform-module-k3s/issues/22)

**Merged pull requests:**

- fix: do not uninstall k3s during upgrade [\#25](https://github.com/xunleii/terraform-module-k3s/pull/25) ([xunleii](https://github.com/xunleii))

## [v2.0.0](https://github.com/xunleii/terraform-module-k3s/tree/v2.0.0) (2020-05-31)

[Full Changelog](https://github.com/xunleii/terraform-module-k3s/compare/v1.7.0...v2.0.0)

**Closed issues:**

- Server taints flags are not used [\#20](https://github.com/xunleii/terraform-module-k3s/issues/20)
- Make it possible to have additional flags per agent [\#18](https://github.com/xunleii/terraform-module-k3s/issues/18)

**Merged pull requests:**

- fix: update Github Actions worflow [\#23](https://github.com/xunleii/terraform-module-k3s/pull/23) ([xunleii](https://github.com/xunleii))
- feat: rewrote module [\#21](https://github.com/xunleii/terraform-module-k3s/pull/21) ([xunleii](https://github.com/xunleii))
- Additional flags per instance [\#19](https://github.com/xunleii/terraform-module-k3s/pull/19) ([tedsteen](https://github.com/tedsteen))

## [v1.7.0](https://github.com/xunleii/terraform-module-k3s/tree/v1.7.0) (2020-01-31)

[Full Changelog](https://github.com/xunleii/terraform-module-k3s/compare/v1.6.3...v1.7.0)

**Merged pull requests:**

- feat: add node taints & labels [\#17](https://github.com/xunleii/terraform-module-k3s/pull/17) ([xunleii](https://github.com/xunleii))

## [v1.6.3](https://github.com/xunleii/terraform-module-k3s/tree/v1.6.3) (2019-12-28)

[Full Changelog](https://github.com/xunleii/terraform-module-k3s/compare/v1.6.2...v1.6.3)

**Merged pull requests:**

- fix: use node\_name field in node deletion [\#16](https://github.com/xunleii/terraform-module-k3s/pull/16) ([xunleii](https://github.com/xunleii))

## [v1.6.2](https://github.com/xunleii/terraform-module-k3s/tree/v1.6.2) (2019-12-21)

[Full Changelog](https://github.com/xunleii/terraform-module-k3s/compare/v1.6.1...v1.6.2)

**Merged pull requests:**

- feat: use name in agent nodes [\#15](https://github.com/xunleii/terraform-module-k3s/pull/15) ([xunleii](https://github.com/xunleii))

## [v1.6.1](https://github.com/xunleii/terraform-module-k3s/tree/v1.6.1) (2019-12-04)

[Full Changelog](https://github.com/xunleii/terraform-module-k3s/compare/v1.6.0...v1.6.1)

**Merged pull requests:**

- feat: upload installer [\#14](https://github.com/xunleii/terraform-module-k3s/pull/14) ([xunleii](https://github.com/xunleii))

## [v1.6.0](https://github.com/xunleii/terraform-module-k3s/tree/v1.6.0) (2019-12-04)

[Full Changelog](https://github.com/xunleii/terraform-module-k3s/compare/v1.5.0...v1.6.0)

**Merged pull requests:**

- refact: rename node roles in server and agent [\#13](https://github.com/xunleii/terraform-module-k3s/pull/13) ([xunleii](https://github.com/xunleii))
- Refact clean module [\#12](https://github.com/xunleii/terraform-module-k3s/pull/12) ([xunleii](https://github.com/xunleii))

## [v1.5.0](https://github.com/xunleii/terraform-module-k3s/tree/v1.5.0) (2019-12-01)

[Full Changelog](https://github.com/xunleii/terraform-module-k3s/compare/v1.4.0...v1.5.0)

## [v1.4.0](https://github.com/xunleii/terraform-module-k3s/tree/v1.4.0) (2019-11-27)

[Full Changelog](https://github.com/xunleii/terraform-module-k3s/compare/v1.3.2...v1.4.0)

**Merged pull requests:**

- refact: clean custom flags feature [\#11](https://github.com/xunleii/terraform-module-k3s/pull/11) ([xunleii](https://github.com/xunleii))

## [v1.3.2](https://github.com/xunleii/terraform-module-k3s/tree/v1.3.2) (2019-11-27)

[Full Changelog](https://github.com/xunleii/terraform-module-k3s/compare/v1.3.1...v1.3.2)

**Merged pull requests:**

- fix: join custom arguments [\#10](https://github.com/xunleii/terraform-module-k3s/pull/10) ([xunleii](https://github.com/xunleii))

## [v1.3.1](https://github.com/xunleii/terraform-module-k3s/tree/v1.3.1) (2019-11-27)

[Full Changelog](https://github.com/xunleii/terraform-module-k3s/compare/v1.2.3...v1.3.1)

**Merged pull requests:**

- feat: add custom arguments [\#9](https://github.com/xunleii/terraform-module-k3s/pull/9) ([xunleii](https://github.com/xunleii))

## [v1.2.3](https://github.com/xunleii/terraform-module-k3s/tree/v1.2.3) (2019-11-24)

[Full Changelog](https://github.com/xunleii/terraform-module-k3s/compare/v1.2.2...v1.2.3)

**Merged pull requests:**

- fix: remove warning 'quoted keywords are now deprecated' [\#8](https://github.com/xunleii/terraform-module-k3s/pull/8) ([xunleii](https://github.com/xunleii))

## [v1.2.2](https://github.com/xunleii/terraform-module-k3s/tree/v1.2.2) (2019-11-16)

[Full Changelog](https://github.com/xunleii/terraform-module-k3s/compare/v1.2.1...v1.2.2)

**Merged pull requests:**

- feat: add Terraform actions [\#6](https://github.com/xunleii/terraform-module-k3s/pull/6) ([xunleii](https://github.com/xunleii))

## [v1.2.1](https://github.com/xunleii/terraform-module-k3s/tree/v1.2.1) (2019-11-16)

[Full Changelog](https://github.com/xunleii/terraform-module-k3s/compare/v1.2.0...v1.2.1)

## [v1.2.0](https://github.com/xunleii/terraform-module-k3s/tree/v1.2.0) (2019-11-16)

[Full Changelog](https://github.com/xunleii/terraform-module-k3s/compare/v1.1.0...v1.2.0)

**Closed issues:**

- Remove 'scp' dependency [\#3](https://github.com/xunleii/terraform-module-k3s/issues/3)

**Merged pull requests:**

- Remove 'scp' dependency [\#4](https://github.com/xunleii/terraform-module-k3s/pull/4) ([xunleii](https://github.com/xunleii))

## [v1.1.0](https://github.com/xunleii/terraform-module-k3s/tree/v1.1.0) (2019-11-03)

[Full Changelog](https://github.com/xunleii/terraform-module-k3s/compare/v1.0.0...v1.1.0)

**Closed issues:**

- Impossible to remove one \(several\) minion node\(s\) [\#1](https://github.com/xunleii/terraform-module-k3s/issues/1)

**Merged pull requests:**

- \#1 - fix removable node [\#2](https://github.com/xunleii/terraform-module-k3s/pull/2) ([xunleii](https://github.com/xunleii))

## [v1.0.0](https://github.com/xunleii/terraform-module-k3s/tree/v1.0.0) (2019-11-02)

[Full Changelog](https://github.com/xunleii/terraform-module-k3s/compare/ccc49fe3f98ef7a9885dcf5ae3efb087048497f9...v1.0.0)



\* *This Changelog was automatically generated by [github_changelog_generator](https://github.com/github-changelog-generator/github-changelog-generator)*


================================================
FILE: LICENSE
================================================
MIT License

Copyright (c) 2019 Alexandre NICOLAIE

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


================================================
FILE: README.md
================================================
# terraform-module-k3s
![Terraform Version](https://img.shields.io/badge/terraform-≈_1.0-blueviolet)
[![GitHub tag (latest SemVer)](https://img.shields.io/github/v/tag/xunleii/terraform-module-k3s?label=registry)](https://registry.terraform.io/modules/xunleii/k3s)
[![GitHub issues](https://img.shields.io/github/issues/xunleii/terraform-module-k3s)](https://github.com/xunleii/terraform-module-k3s/issues)
[![Open Source Helpers](https://www.codetriage.com/xunleii/terraform-module-k3s/badges/users.svg)](https://www.codetriage.com/xunleii/terraform-module-k3s)
[![MIT Licensed](https://img.shields.io/badge/license-MIT-green.svg)](https://tldrlegal.com/license/mit-license)

Terraform module that creates a [k3s](https://k3s.io/) cluster with multi-server support and management of node annotations, labels, and taints.


## :warning: Security disclosure

Because Terraform has deprecated the use of external references in `destroy` provisioners, this module must store information inside each resource in order to support features such as automatic node draining and field management. As a result, several fields, including the `connection` block, are persisted in your Terraform state.
This means that any password or private key used in the `connection` block is **stored in clear text** in that state.  
**Please store your Terraform state securely if you use a private key or password in the `connection` block.**

<!-- BEGIN_TF_DOCS -->
## Example _(based on [Hetzner Cloud example](examples/hcloud-k3s))_

```hcl
module "k3s" {
  source = "xunleii/k3s/module"

  depends_on_    = hcloud_server.agents
  k3s_version    = "latest"
  cluster_domain = "cluster.local"
  cidr = {
    pods     = "10.42.0.0/16"
    services = "10.43.0.0/16"
  }
  drain_timeout  = "30s"
  managed_fields = ["label", "taint"] // ignore annotations

  global_flags = [
    "--flannel-iface ens10",
    "--kubelet-arg cloud-provider=external" // required to use https://github.com/hetznercloud/hcloud-cloud-controller-manager
  ]

  servers = {
    for i in range(length(hcloud_server.control_planes)) :
    hcloud_server.control_planes[i].name => {
      ip = hcloud_server_network.control_planes[i].ip
      connection = {
        host        = hcloud_server.control_planes[i].ipv4_address
        private_key = trimspace(tls_private_key.ed25519_provisioning.private_key_pem)
      }
      flags = [
        "--disable-cloud-controller",
        "--tls-san ${hcloud_server.control_planes[0].ipv4_address}",
      ]
      annotations = { "server_id" : i } // these annotations will not be managed by this module
    }
  }

  agents = {
    for i in range(length(hcloud_server.agents)) :
    "${hcloud_server.agents[i].name}_node" => {
      name = hcloud_server.agents[i].name
      ip   = hcloud_server_network.agents_network[i].ip
      connection = {
        host        = hcloud_server.agents[i].ipv4_address
        private_key = trimspace(tls_private_key.ed25519_provisioning.private_key_pem)
      }

      labels = { "node.kubernetes.io/pool" = hcloud_server.agents[i].labels.nodepool }
      taints = { "dedicated" : hcloud_server.agents[i].labels.nodepool == "gpu" ? "gpu:NoSchedule" : null }
    }
  }
}
```
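Beyond provisioning, the module exposes outputs that other resources can consume. The sketch below is a minimal, hedged example of persisting the admin kubeconfig to disk; it assumes the module exposes a `kube_config` output (kubeconfig generation was added in #37/#41) — check the module's `outputs.tf` for the exact output name and shape before relying on this.

```hcl
# Minimal sketch: persist the admin kubeconfig produced by the module.
# Assumes a `kube_config` output exists on the module (verify in outputs.tf).
# Uses local_sensitive_file (hashicorp/local >= 2.2) so the content is
# marked sensitive and written with restrictive permissions.
resource "local_sensitive_file" "kubeconfig" {
  content         = module.k3s.kube_config
  filename        = "${path.module}/kubeconfig.yaml"
  file_permission = "0600"
}
```

Note that the kubeconfig, like the `connection` credentials, ends up in the Terraform state as well as on disk, so the same state-security caveat applies.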

## Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| <a name="input_servers"></a> [servers](#input\_servers) | K3s server nodes definition. The key is used as node name if no name is provided. | `map(any)` | n/a | yes |
| <a name="input_agents"></a> [agents](#input\_agents) | K3s agent nodes definitions. The key is used as node name if no name is provided. | `map(any)` | `{}` | no |
| <a name="input_cidr"></a> [cidr](#input\_cidr) | K3s network CIDRs (see https://rancher.com/docs/k3s/latest/en/installation/install-options/). | <pre>object({<br>    pods     = string<br>    services = string<br>  })</pre> | <pre>{<br>  "pods": "10.42.0.0/16",<br>  "services": "10.43.0.0/16"<br>}</pre> | no |
| <a name="input_cluster_domain"></a> [cluster\_domain](#input\_cluster\_domain) | K3s cluster domain name (see https://rancher.com/docs/k3s/latest/en/installation/install-options/). | `string` | `"cluster.local"` | no |
| <a name="input_depends_on_"></a> [depends\_on\_](#input\_depends\_on\_) | Resource dependency of this module. | `any` | `null` | no |
| <a name="input_drain_timeout"></a> [drain\_timeout](#input\_drain\_timeout) | The length of time to wait before giving up the node draining. Infinite by default. | `string` | `"0s"` | no |
| <a name="input_generate_ca_certificates"></a> [generate\_ca\_certificates](#input\_generate\_ca\_certificates) | If true, this module will generate the CA certificates (see https://github.com/rancher/k3s/issues/1868#issuecomment-639690634). Otherwise Rancher will generate them. This is required to generate the kubeconfig. | `bool` | `true` | no |
| <a name="input_global_flags"></a> [global\_flags](#input\_global\_flags) | Add additional installation flags, used by all nodes (see https://rancher.com/docs/k3s/latest/en/installation/install-options/). | `list(string)` | `[]` | no |
| <a name="input_k3s_install_env_vars"></a> [k3s\_install\_env\_vars](#input\_k3s\_install\_env\_vars) | Map of environment variables that are passed to the k3s installation script (see https://docs.k3s.io/reference/env-variables). | `map(string)` | `{}` | no |
| <a name="input_k3s_version"></a> [k3s\_version](#input\_k3s\_version) | Specify the k3s version. You can choose from the following release channels or pin the version directly | `string` | `"latest"` | no |
| <a name="input_kubernetes_certificates"></a> [kubernetes\_certificates](#input\_kubernetes\_certificates) | A list of maps of certificate-name.[crt/key] : certificate-value to be copied to /var/lib/rancher/k3s/server/tls. If this option is used, generate\_ca\_certificates will be treated as false. | <pre>list(<br>    object({<br>      file_name    = string,<br>      file_content = string<br>    })<br>  )</pre> | `[]` | no |
| <a name="input_managed_fields"></a> [managed\_fields](#input\_managed\_fields) | List of fields which must be managed by this module (can be annotation, label and/or taint). | `list(string)` | <pre>[<br>  "annotation",<br>  "label",<br>  "taint"<br>]</pre> | no |
| <a name="input_name"></a> [name](#input\_name) | K3s cluster domain name (see https://rancher.com/docs/k3s/latest/en/installation/install-options/). This input is deprecated and will be removed in the next major release. Use `cluster_domain` instead. | `string` | `"!!!DEPRECATED!!!"` | no |
| <a name="input_separator"></a> [separator](#input\_separator) | Separator used to separates node name and field name (used to manage annotations, labels and taints). | `string` | `"\|"` | no |
| <a name="input_use_sudo"></a> [use\_sudo](#input\_use\_sudo) | Whether or not to use kubectl with sudo during cluster setup. | `bool` | `false` | no |

## Outputs

| Name | Description |
|------|-------------|
| <a name="output_kube_config"></a> [kube\_config](#output\_kube\_config) | Generated kubeconfig. |
| <a name="output_kubernetes"></a> [kubernetes](#output\_kubernetes) | Authentication credentials of Kubernetes (full administrator). |
| <a name="output_kubernetes_cluster_secret"></a> [kubernetes\_cluster\_secret](#output\_kubernetes\_cluster\_secret) | Secret token used to join nodes to the cluster |
| <a name="output_kubernetes_ready"></a> [kubernetes\_ready](#output\_kubernetes\_ready) | Dependency endpoint to synchronize k3s installation and provisioning. |
| <a name="output_summary"></a> [summary](#output\_summary) | Current state of k3s (version & nodes). |
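
For example, the `kubernetes` output can configure the `hashicorp/kubernetes` provider directly. A sketch; the attribute names match those used elsewhere in this README:

```hcl
provider "kubernetes" {
  host                   = module.k3s.kubernetes.api_endpoint
  cluster_ca_certificate = module.k3s.kubernetes.cluster_ca_certificate
  client_certificate     = module.k3s.kubernetes.client_certificate
  client_key             = module.k3s.kubernetes.client_key
}
```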

## Providers

| Name | Version |
|------|---------|
| <a name="provider_http"></a> [http](#provider\_http) | ~> 3.0 |
| <a name="provider_null"></a> [null](#provider\_null) | ~> 3.0 |
| <a name="provider_random"></a> [random](#provider\_random) | ~> 3.0 |
| <a name="provider_tls"></a> [tls](#provider\_tls) | ~> 4.0 |
<!-- END_TF_DOCS -->

## Frequently Asked Questions

### How to customise the generated `kubeconfig`

It is sometimes necessary to modify the context or the cluster name to adapt the `kubeconfig` to a third-party tool or to avoid conflicts with existing tools. Although this is not the role of this module, it can easily be done with its outputs:

```hcl
module "k3s" {
  ...
}

locals {
  kubeconfig = yamlencode({
    apiVersion      = "v1"
    kind            = "Config"
    current-context = "my-context-name"
    contexts = [{
      context = {
        cluster = "my-cluster-name"
        user    = "my-user-name"
      }
      name = "my-context-name"
    }]
    clusters = [{
      cluster = {
        certificate-authority-data = base64encode(module.k3s.kubernetes.cluster_ca_certificate)
        server                     = module.k3s.kubernetes.api_endpoint
      }
      name = "my-cluster-name"
    }]
    users = [{
      user = {
        client-certificate-data = base64encode(module.k3s.kubernetes.client_certificate)
        client-key-data         = base64encode(module.k3s.kubernetes.client_key)
      }
      name = "my-user-name"
    }]
  })
}
```
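
To make this customised `kubeconfig` available to external tools, it can then be written to disk, for example with the `local_file` resource from the `hashicorp/local` provider (the output path here is arbitrary):

```hcl
resource "local_file" "kubeconfig" {
  content         = local.kubeconfig
  filename        = "${path.module}/kubeconfig.yaml" # arbitrary output path
  file_permission = "0600"                           # keep credentials private
}
```

Remember that this file contains cluster-admin credentials, so it should be excluded from version control.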

## License
`terraform-module-k3s` is released under the **MIT License**. See the bundled [LICENSE](LICENSE) file for details.

#
*Generated with :heart: by [terraform-docs](https://github.com/terraform-docs/terraform-docs)*


================================================
FILE: Taskfile.yaml
================================================
# yaml-language-server: $schema=https://taskfile.dev/schema.json
version: "3"

tasks:
  default: { cmds: [task --list], silent: true }

  dev:lint:
    aliases: [lint]
    cmds:
      - terraform fmt -recursive
    desc: Lint terraform code

  examples:hcloud:setup:
    aliases: [test, dev:test]
    cmds:
      - terraform init
      - terraform validate
      - terraform apply -auto-approve
      - terraform output -raw kubeconfig > kubeconfig~
    desc: Test this terraform module on Hetzner Cloud
    dir: examples/hcloud-k3s
    generates:
      - kubeconfig~
    interactive: true
    requires:
      vars: [HCLOUD_TOKEN]
    sources:
      # - "../../*.tf"
      - "*.tf"

  examples:hcloud:teardown:
    cmds:
      - terraform destroy -auto-approve
      - rm -f kubeconfig~
    desc: Remove all resources created by test:hcloud:setup
    dir: examples/hcloud-k3s
    interactive: true
    preconditions:
      - sh: test -f kubeconfig~
        msg: Run `test:hcloud:setup` first
    prompt: Are you sure you want to destroy all resources created by `test:hcloud:setup`?
    requires:
      vars: [HCLOUD_TOKEN]

  e2e:hcloud:
    aliases: [e2e]
    cmds:
      - task: examples:hcloud:setup
      - defer: task examples:hcloud:teardown
      - kubectl --kubeconfig examples/hcloud-k3s/kubeconfig~ get nodes
    desc: Run e2e tests on Hetzner Cloud


================================================
FILE: agent_nodes.tf
================================================
locals {
  // Generate a map of all agents annotations in order to manage them through this module. This
  // generation is made in two steps:
  // - generate a list of objects representing all annotations, following this
  //   'template' {key = node_name|annotation_name, value = annotation_value}
  // - generate a map based on the generated list (using the field key as map key)
  agent_annotations_list = flatten([
    for nk, nv in var.agents : [
      // Because we need the node name and the annotation name when we remove the annotation resource, we need
      // to share them through the annotation key (each.value is not available on destruction).
      for ak, av in try(nv.annotations, {}) : av == null ? { key : "" } : { key : "${nk}${var.separator}${ak}", value : av }
    ]
  ])
  agent_annotations = local.managed_annotation_enabled ? { for o in local.agent_annotations_list : o.key => o.value if o.key != "" } : {}

  // Generate a map of all agents labels in order to manage them through this module. This
  // generation is made in two steps, following the same process as the annotations map.
  agent_labels_list = flatten([
    for nk, nv in var.agents : [
      // Because we need the node name and the label name when we remove the label resource, we need
      // to share them through the label key (each.value is not available on destruction).
      for lk, lv in try(nv.labels, {}) : lv == null ? { key : "" } : { key : "${nk}${var.separator}${lk}", value : lv }
    ]
  ])
  agent_labels = local.managed_label_enabled ? { for o in local.agent_labels_list : o.key => o.value if o.key != "" } : {}

  // Generate a map of all agents taints in order to manage them through this module. This
  // generation is made in two steps, following the same process as the annotations map.
  agent_taints_list = flatten([
    for nk, nv in var.agents : [
      // Because we need the node name and the taint name when we remove the taint resource, we need
      // to share them through the taint key (each.value is not available on destruction).
      for tk, tv in try(nv.taints, {}) : tv == null ? { key : "" } : { key : "${nk}${var.separator}${tk}", value : tv }
    ]
  ])
  agent_taints = local.managed_taint_enabled ? { for o in local.agent_taints_list : o.key => o.value if o.key != "" } : {}

  // Generate a map of all calculated agent fields, used during k3s installation.
  agents_metadata = {
    for key, agent in var.agents :
    key => {
      name = try(agent.name, key)
      ip   = agent.ip

      flags = join(" ", compact(concat(
        [
          "--node-ip ${agent.ip}",
          "--node-name '${try(agent.name, key)}'",
          "--server https://${local.root_advertise_ip_k3s}:6443",
          "--token ${nonsensitive(random_password.k3s_cluster_secret.result)}", # NOTE: nonsensitive is used to show logs during provisioning
        ],
        var.global_flags,
        try(agent.flags, []),
        [for key, value in try(agent.taints, {}) : "--node-taint '${key}=${value}'" if value != null]
      )))

      immutable_fields_hash = sha1(join("", concat(
        [var.cluster_domain],
        var.global_flags,
        try(agent.flags, []),
      )))
    }
  }
  kubectl_cmd = var.use_sudo ? "sudo kubectl" : "kubectl"
}

// Install k3s agent
resource "null_resource" "agents_install" {
  for_each = var.agents

  depends_on = [null_resource.servers_install]
  triggers = {
    on_immutable_changes = local.agents_metadata[each.key].immutable_fields_hash
    on_new_version       = local.k3s_version
  }

  connection {
    type = try(each.value.connection.type, "ssh")

    host     = try(each.value.connection.host, each.value.ip)
    user     = try(each.value.connection.user, null)
    password = try(each.value.connection.password, null)
    port     = try(each.value.connection.port, null)
    timeout  = try(each.value.connection.timeout, null)

    script_path    = try(each.value.connection.script_path, null)
    private_key    = try(each.value.connection.private_key, null)
    certificate    = try(each.value.connection.certificate, null)
    agent          = try(each.value.connection.agent, null)
    agent_identity = try(each.value.connection.agent_identity, null)
    host_key       = try(each.value.connection.host_key, null)

    https    = try(each.value.connection.https, null)
    insecure = try(each.value.connection.insecure, null)
    use_ntlm = try(each.value.connection.use_ntlm, null)
    cacert   = try(each.value.connection.cacert, null)

    bastion_host        = try(each.value.connection.bastion_host, null)
    bastion_host_key    = try(each.value.connection.bastion_host_key, null)
    bastion_port        = try(each.value.connection.bastion_port, null)
    bastion_user        = try(each.value.connection.bastion_user, null)
    bastion_password    = try(each.value.connection.bastion_password, null)
    bastion_private_key = try(each.value.connection.bastion_private_key, null)
    bastion_certificate = try(each.value.connection.bastion_certificate, null)
  }

  // Upload k3s install script
  provisioner "file" {
    content     = data.http.k3s_installer.response_body
    destination = "/tmp/k3s-installer"
  }

  // Install k3s
  provisioner "remote-exec" {
    inline = [
      "${local.install_env_vars} INSTALL_K3S_VERSION=${local.k3s_version} sh /tmp/k3s-installer agent ${local.agents_metadata[each.key].flags}",
      "until systemctl is-active --quiet k3s-agent.service; do sleep 1; done"
    ]
  }
}

// Drain the k3s node on destruction in order to safely move all workloads to another node.
resource "null_resource" "agents_drain" {
  for_each = var.agents

  depends_on = [null_resource.agents_install]
  triggers = {
    // Because some fields must be used on destruction, we need to store them into the current
    // object. The only way to do that is to use triggers to store these fields.
    agent_name      = local.agents_metadata[split(var.separator, each.key)[0]].name
    connection_json = base64encode(jsonencode(local.root_server_connection))
    drain_timeout   = var.drain_timeout
    kubectl_cmd     = local.kubectl_cmd
  }
  // Because we use triggers as memory area, we need to ignore all changes on it.
  lifecycle { ignore_changes = [triggers] }

  connection {
    type = jsondecode(base64decode(self.triggers.connection_json)).type

    host     = jsondecode(base64decode(self.triggers.connection_json)).host
    user     = jsondecode(base64decode(self.triggers.connection_json)).user
    password = jsondecode(base64decode(self.triggers.connection_json)).password
    port     = jsondecode(base64decode(self.triggers.connection_json)).port
    timeout  = jsondecode(base64decode(self.triggers.connection_json)).timeout

    script_path    = jsondecode(base64decode(self.triggers.connection_json)).script_path
    private_key    = jsondecode(base64decode(self.triggers.connection_json)).private_key
    certificate    = jsondecode(base64decode(self.triggers.connection_json)).certificate
    agent          = jsondecode(base64decode(self.triggers.connection_json)).agent
    agent_identity = jsondecode(base64decode(self.triggers.connection_json)).agent_identity
    host_key       = jsondecode(base64decode(self.triggers.connection_json)).host_key

    https    = jsondecode(base64decode(self.triggers.connection_json)).https
    insecure = jsondecode(base64decode(self.triggers.connection_json)).insecure
    use_ntlm = jsondecode(base64decode(self.triggers.connection_json)).use_ntlm
    cacert   = jsondecode(base64decode(self.triggers.connection_json)).cacert

    bastion_host        = jsondecode(base64decode(self.triggers.connection_json)).bastion_host
    bastion_host_key    = jsondecode(base64decode(self.triggers.connection_json)).bastion_host_key
    bastion_port        = jsondecode(base64decode(self.triggers.connection_json)).bastion_port
    bastion_user        = jsondecode(base64decode(self.triggers.connection_json)).bastion_user
    bastion_password    = jsondecode(base64decode(self.triggers.connection_json)).bastion_password
    bastion_private_key = jsondecode(base64decode(self.triggers.connection_json)).bastion_private_key
    bastion_certificate = jsondecode(base64decode(self.triggers.connection_json)).bastion_certificate
  }

  provisioner "remote-exec" {
    when = destroy
    inline = [
      "${self.triggers.kubectl_cmd} drain ${self.triggers.agent_name} --delete-local-data --force --ignore-daemonsets --timeout=${self.triggers.drain_timeout}"
    ]
  }
}

// Add/remove manually annotation on k3s agent
resource "null_resource" "agents_annotation" {
  for_each = local.agent_annotations

  depends_on = [null_resource.agents_install]
  triggers = {
    agent_name       = local.agents_metadata[split(var.separator, each.key)[0]].name
    annotation_name  = split(var.separator, each.key)[1]
    on_value_changes = each.value

    // Because some fields must be used on destruction, we need to store them into the current
    // object. The only way to do that is to use triggers to store these fields.
    connection_json = base64encode(jsonencode(local.root_server_connection))
    kubectl_cmd     = local.kubectl_cmd
  }
  // Because we don't care about connection modifications, we ignore their changes.
  lifecycle { ignore_changes = [triggers["connection_json"], triggers["kubectl_cmd"]] }

  connection {
    type = jsondecode(base64decode(self.triggers.connection_json)).type

    host     = jsondecode(base64decode(self.triggers.connection_json)).host
    user     = jsondecode(base64decode(self.triggers.connection_json)).user
    password = jsondecode(base64decode(self.triggers.connection_json)).password
    port     = jsondecode(base64decode(self.triggers.connection_json)).port
    timeout  = jsondecode(base64decode(self.triggers.connection_json)).timeout

    script_path    = jsondecode(base64decode(self.triggers.connection_json)).script_path
    private_key    = jsondecode(base64decode(self.triggers.connection_json)).private_key
    certificate    = jsondecode(base64decode(self.triggers.connection_json)).certificate
    agent          = jsondecode(base64decode(self.triggers.connection_json)).agent
    agent_identity = jsondecode(base64decode(self.triggers.connection_json)).agent_identity
    host_key       = jsondecode(base64decode(self.triggers.connection_json)).host_key

    https    = jsondecode(base64decode(self.triggers.connection_json)).https
    insecure = jsondecode(base64decode(self.triggers.connection_json)).insecure
    use_ntlm = jsondecode(base64decode(self.triggers.connection_json)).use_ntlm
    cacert   = jsondecode(base64decode(self.triggers.connection_json)).cacert

    bastion_host        = jsondecode(base64decode(self.triggers.connection_json)).bastion_host
    bastion_host_key    = jsondecode(base64decode(self.triggers.connection_json)).bastion_host_key
    bastion_port        = jsondecode(base64decode(self.triggers.connection_json)).bastion_port
    bastion_user        = jsondecode(base64decode(self.triggers.connection_json)).bastion_user
    bastion_password    = jsondecode(base64decode(self.triggers.connection_json)).bastion_password
    bastion_private_key = jsondecode(base64decode(self.triggers.connection_json)).bastion_private_key
    bastion_certificate = jsondecode(base64decode(self.triggers.connection_json)).bastion_certificate
  }

  provisioner "remote-exec" {
    inline = [
      "until ${self.triggers.kubectl_cmd} get node ${self.triggers.agent_name}; do sleep 1; done",
      "${self.triggers.kubectl_cmd} annotate --overwrite node ${self.triggers.agent_name} ${self.triggers.annotation_name}=${self.triggers.on_value_changes}"
    ]
  }

  provisioner "remote-exec" {
    when = destroy
    inline = [
      "${self.triggers.kubectl_cmd} annotate node ${self.triggers.agent_name} ${self.triggers.annotation_name}-"
    ]
  }
}

// Add/remove manually label on k3s agent
resource "null_resource" "agents_label" {
  for_each = local.agent_labels

  depends_on = [null_resource.agents_install]
  triggers = {
    agent_name       = local.agents_metadata[split(var.separator, each.key)[0]].name
    label_name       = split(var.separator, each.key)[1]
    on_value_changes = each.value

    // Because some fields must be used on destruction, we need to store them into the current
    // object. The only way to do that is to use triggers to store these fields.
    connection_json = base64encode(jsonencode(local.root_server_connection))
    kubectl_cmd     = local.kubectl_cmd
  }
  // Because we dont care about connection modification, we ignore its changes.
  lifecycle { ignore_changes = [triggers["connection_json"], triggers["kubectl_cmd"]] }

  connection {
    type = jsondecode(base64decode(self.triggers.connection_json)).type

    host     = jsondecode(base64decode(self.triggers.connection_json)).host
    user     = jsondecode(base64decode(self.triggers.connection_json)).user
    password = jsondecode(base64decode(self.triggers.connection_json)).password
    port     = jsondecode(base64decode(self.triggers.connection_json)).port
    timeout  = jsondecode(base64decode(self.triggers.connection_json)).timeout

    script_path    = jsondecode(base64decode(self.triggers.connection_json)).script_path
    private_key    = jsondecode(base64decode(self.triggers.connection_json)).private_key
    certificate    = jsondecode(base64decode(self.triggers.connection_json)).certificate
    agent          = jsondecode(base64decode(self.triggers.connection_json)).agent
    agent_identity = jsondecode(base64decode(self.triggers.connection_json)).agent_identity
    host_key       = jsondecode(base64decode(self.triggers.connection_json)).host_key

    https    = jsondecode(base64decode(self.triggers.connection_json)).https
    insecure = jsondecode(base64decode(self.triggers.connection_json)).insecure
    use_ntlm = jsondecode(base64decode(self.triggers.connection_json)).use_ntlm
    cacert   = jsondecode(base64decode(self.triggers.connection_json)).cacert

    bastion_host        = jsondecode(base64decode(self.triggers.connection_json)).bastion_host
    bastion_host_key    = jsondecode(base64decode(self.triggers.connection_json)).bastion_host_key
    bastion_port        = jsondecode(base64decode(self.triggers.connection_json)).bastion_port
    bastion_user        = jsondecode(base64decode(self.triggers.connection_json)).bastion_user
    bastion_password    = jsondecode(base64decode(self.triggers.connection_json)).bastion_password
    bastion_private_key = jsondecode(base64decode(self.triggers.connection_json)).bastion_private_key
    bastion_certificate = jsondecode(base64decode(self.triggers.connection_json)).bastion_certificate
  }

  provisioner "remote-exec" {
    inline = [
      "until ${self.triggers.kubectl_cmd} get node ${self.triggers.agent_name}; do sleep 1; done",
      "${self.triggers.kubectl_cmd} label --overwrite node ${self.triggers.agent_name} ${self.triggers.label_name}=${self.triggers.on_value_changes}"
    ]
  }

  provisioner "remote-exec" {
    when = destroy
    inline = [
      "${self.triggers.kubectl_cmd} label node ${self.triggers.agent_name} ${self.triggers.label_name}-"
    ]
  }
}

// Add manually taint on k3s agent
resource "null_resource" "agents_taint" {
  for_each = local.agent_taints

  depends_on = [null_resource.agents_install]
  triggers = {
    agent_name       = local.agents_metadata[split(var.separator, each.key)[0]].name
    taint_name       = split(var.separator, each.key)[1]
    on_value_changes = each.value

    // Because some fields must be used on destruction, we need to store them into the current
    // object. The only way to do that is to use triggers to store these fields.
    connection_json = base64encode(jsonencode(local.root_server_connection))
    kubectl_cmd     = local.kubectl_cmd
  }
  // Because we dont care about connection modification, we ignore its changes.
  lifecycle { ignore_changes = [triggers["connection_json"], triggers["kubectl_cmd"]] }

  connection {
    type = jsondecode(base64decode(self.triggers.connection_json)).type

    host     = jsondecode(base64decode(self.triggers.connection_json)).host
    user     = jsondecode(base64decode(self.triggers.connection_json)).user
    password = jsondecode(base64decode(self.triggers.connection_json)).password
    port     = jsondecode(base64decode(self.triggers.connection_json)).port
    timeout  = jsondecode(base64decode(self.triggers.connection_json)).timeout

    script_path    = jsondecode(base64decode(self.triggers.connection_json)).script_path
    private_key    = jsondecode(base64decode(self.triggers.connection_json)).private_key
    certificate    = jsondecode(base64decode(self.triggers.connection_json)).certificate
    agent          = jsondecode(base64decode(self.triggers.connection_json)).agent
    agent_identity = jsondecode(base64decode(self.triggers.connection_json)).agent_identity
    host_key       = jsondecode(base64decode(self.triggers.connection_json)).host_key

    https    = jsondecode(base64decode(self.triggers.connection_json)).https
    insecure = jsondecode(base64decode(self.triggers.connection_json)).insecure
    use_ntlm = jsondecode(base64decode(self.triggers.connection_json)).use_ntlm
    cacert   = jsondecode(base64decode(self.triggers.connection_json)).cacert

    bastion_host        = jsondecode(base64decode(self.triggers.connection_json)).bastion_host
    bastion_host_key    = jsondecode(base64decode(self.triggers.connection_json)).bastion_host_key
    bastion_port        = jsondecode(base64decode(self.triggers.connection_json)).bastion_port
    bastion_user        = jsondecode(base64decode(self.triggers.connection_json)).bastion_user
    bastion_password    = jsondecode(base64decode(self.triggers.connection_json)).bastion_password
    bastion_private_key = jsondecode(base64decode(self.triggers.connection_json)).bastion_private_key
    bastion_certificate = jsondecode(base64decode(self.triggers.connection_json)).bastion_certificate
  }

  provisioner "remote-exec" {
    inline = [
      "until ${self.triggers.kubectl_cmd} get node ${self.triggers.agent_name}; do sleep 1; done",
      "${self.triggers.kubectl_cmd} taint node ${self.triggers.agent_name} ${self.triggers.taint_name}=${self.triggers.on_value_changes} --overwrite"
    ]
  }

  provisioner "remote-exec" {
    when = destroy
    inline = [
      "${self.triggers.kubectl_cmd} taint node ${self.triggers.agent_name} ${self.triggers.taint_name}-"
    ]
  }
}


================================================
FILE: examples/civo-k3s/README.md
================================================
#  K3S example for Civo

Configuration in this directory creates the instances and resources for a k3s cluster.

## Usage

> [!warning]
> **Note that this example may create resources which cost money. Run `terraform destroy` when you don't need these resources.**

```bash
$ export CIVO_TOKEN=...
$ terraform init
$ terraform apply
```


================================================
FILE: examples/civo-k3s/k3s.tf
================================================
module "k3s" {
  source = "./../.."

  depends_on_    = civo_instance.node_instances
  k3s_version    = "latest"
  cluster_domain = "civo_k3s"

  drain_timeout            = "60s"
  managed_fields           = ["label"]
  generate_ca_certificates = true

  global_flags = [for instance in civo_instance.node_instances : "--tls-san ${instance.public_ip}"]

  servers = {
    # The node name will be automatically provided by
    # the module using the field name... any usage of
    # --node-name in additional_flags will be ignored

    for instance in civo_instance.node_instances :
    instance.hostname => {
      ip = instance.private_ip
      connection = {
        timeout  = "60s"
        type     = "ssh"
        host     = instance.public_ip
        password = instance.initial_password
        user     = "root"
      }

      labels = { "node.kubernetes.io/type" = "master" }
    }
  }
}


================================================
FILE: examples/civo-k3s/main.tf
================================================
data "civo_disk_image" "ubuntu" {
  filter {
    key      = "name"
    values   = ["ubuntu"]
    match_by = "re"
  }

  sort {
    key       = "version"
    direction = "desc"
  }
}

data "civo_instances_size" "node_size" {
  filter {
    key    = "name"
    values = ["g3.small"]
  }
}

resource "civo_instance" "node_instances" {
  count      = 3
  hostname   = "node-${count.index + 1}"
  size       = data.civo_instances_size.node_size.sizes[0].name
  disk_image = data.civo_disk_image.ubuntu.diskimages[0].id
}


================================================
FILE: examples/civo-k3s/outputs.tf
================================================
output "summary" {
  value = module.k3s.summary
}

output "kubeconfig" {
  value     = module.k3s.kube_config
  sensitive = true
}


================================================
FILE: examples/civo-k3s/versions.tf
================================================
terraform {
  required_providers {
    civo = {
      source  = "civo/civo"
      version = "~>0.10.10"
    }
  }
  required_version = "~> 1.0"
}


================================================
FILE: examples/do-k3s/README.md
================================================
#  K3S example for Digital Ocean

Configuration in this directory creates the instances and resources for a k3s cluster.

## Usage

> [!warning]
> **Note that this example may create resources which cost money. Run `terraform destroy` when you don't need these resources.**

```bash
$ export DIGITALOCEAN_TOKEN=...
$ terraform init
$ terraform apply
```


================================================
FILE: examples/do-k3s/k3s.tf
================================================
module "k3s" {
  source = "./../.."

  depends_on_    = digitalocean_droplet.node_instances
  k3s_version    = "latest"
  cluster_domain = "do_k3s"

  drain_timeout            = "60s"
  managed_fields           = ["label"]
  generate_ca_certificates = true

  global_flags = [for instance in digitalocean_droplet.node_instances : "--tls-san ${instance.ipv4_address}"]

  servers = {
    # The node name will be automatically provided by
    # the module using the field name... any usage of
    # --node-name in additional_flags will be ignored

    for instance in digitalocean_droplet.node_instances :
    instance.name => {
      ip = instance.ipv4_address_private
      connection = {
        timeout     = "60s"
        type        = "ssh"
        host        = instance.ipv4_address
        private_key = trimspace(tls_private_key.ed25519_provisioning.private_key_pem)
      }

      labels = { "node.kubernetes.io/type" = "master" }
    }
  }
}


================================================
FILE: examples/do-k3s/main.tf
================================================
data "digitalocean_image" "ubuntu" {
  slug = "ubuntu-22-04-x64"
}

resource "tls_private_key" "ed25519_provisioning" {
  algorithm = "ED25519"
}

resource "digitalocean_ssh_key" "default" {
  name       = "K3S terraform module - Provisioning SSH key"
  public_key = trimspace(tls_private_key.ed25519_provisioning.public_key_openssh)
}

resource "digitalocean_droplet" "node_instances" {
  count = 3

  image    = data.digitalocean_image.ubuntu.slug
  name     = "k3s-node-${count.index}"
  region   = "ams3"
  size     = "s-1vcpu-2gb"
  ssh_keys = [digitalocean_ssh_key.default.fingerprint]
}


================================================
FILE: examples/do-k3s/outputs.tf
================================================
output "summary" {
  value = module.k3s.summary
}

output "kubeconfig" {
  value     = module.k3s.kube_config
  sensitive = true
}

output "ssh_private_key" {
  description = "Generated SSH private key."
  value       = tls_private_key.ed25519_provisioning.private_key_openssh
  sensitive   = true
}


================================================
FILE: examples/do-k3s/versions.tf
================================================
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "2.31.0"
    }
  }
  required_version = "~> 1.0"
}


================================================
FILE: examples/hcloud-k3s/README.md
================================================
#  K3S example for Hetzner-Cloud

Configuration in this directory creates k3s cluster resources, including a network, a subnet and instances.

## Usage

> [!WARNING]
> **This example may create resources which cost money. Run `terraform destroy` when you don't need these resources.**

```bash
$ export HCLOUD_TOKEN=...
$ terraform init
$ terraform apply
```

## How to connect to a node?

```bash
terraform output -raw ssh_private_key | ssh-add -
ssh root@NODE-IP
```
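Once the cluster is up, the module's `kubeconfig` output (marked sensitive, hence `-raw`) can be exported to use `kubectl` locally; the file path below is only an example:

```bash
# Export the generated kubeconfig and point kubectl at it
terraform output -raw kubeconfig > kubeconfig.yaml
export KUBECONFIG=$PWD/kubeconfig.yaml
kubectl get nodes
```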


================================================
FILE: examples/hcloud-k3s/k3s.tf
================================================
module "k3s" {
  source = "./../.."

  depends_on_    = hcloud_server.agents
  k3s_version    = "latest"
  cluster_domain = "cluster.local"
  cidr = {
    pods     = "10.42.0.0/16"
    services = "10.43.0.0/16"
  }
  drain_timeout  = "30s"
  managed_fields = ["label", "taint"] // ignore annotations

  global_flags = [
    "--flannel-iface ens10",
    "--kubelet-arg cloud-provider=external" // required to use https://github.com/hetznercloud/hcloud-cloud-controller-manager
  ]

  servers = {
    for i in range(length(hcloud_server.control_planes)) :
    hcloud_server.control_planes[i].name => {
      ip = hcloud_server_network.control_planes[i].ip
      connection = {
        host        = hcloud_server.control_planes[i].ipv4_address
        private_key = trimspace(tls_private_key.ed25519_provisioning.private_key_pem)
      }
      flags = [
        "--disable-cloud-controller",
        "--tls-san ${hcloud_server.control_planes[0].ipv4_address}",
      ]
      annotations = { "server_id" : i } // these annotations will not be managed by this module
    }
  }

  agents = {
    for i in range(length(hcloud_server.agents)) :
    "${hcloud_server.agents[i].name}_node" => {
      name = hcloud_server.agents[i].name
      ip   = hcloud_server_network.agents_network[i].ip
      connection = {
        host        = hcloud_server.agents[i].ipv4_address
        private_key = trimspace(tls_private_key.ed25519_provisioning.private_key_pem)
      }

      labels = { "node.kubernetes.io/pool" = hcloud_server.agents[i].labels.nodepool }
      taints = { "dedicated" : hcloud_server.agents[i].labels.nodepool == "gpu" ? "gpu:NoSchedule" : null }
    }
  }
}


================================================
FILE: examples/hcloud-k3s/main.tf
================================================
data "hcloud_image" "ubuntu" {
  name = "ubuntu-20.04"
}

resource "tls_private_key" "ed25519_provisioning" {
  algorithm = "ED25519"
}

resource "hcloud_ssh_key" "default" {
  name       = "K3S terraform module - Provisioning SSH key"
  public_key = trimspace(tls_private_key.ed25519_provisioning.public_key_openssh)
}

resource "hcloud_network" "k3s" {
  name     = "k3s-network"
  ip_range = "10.0.0.0/8"
}

resource "hcloud_network_subnet" "k3s_nodes" {
  type         = "server"
  network_id   = hcloud_network.k3s.id
  network_zone = "eu-central"
  ip_range     = "10.254.1.0/24"
}

resource "hcloud_server_network" "control_planes" {
  count     = var.servers_num
  subnet_id = hcloud_network_subnet.k3s_nodes.id
  server_id = hcloud_server.control_planes[count.index].id
  ip        = cidrhost(hcloud_network_subnet.k3s_nodes.ip_range, 1 + count.index)
}

resource "hcloud_server_network" "agents_network" {
  count     = length(hcloud_server.agents)
  server_id = hcloud_server.agents[count.index].id
  subnet_id = hcloud_network_subnet.k3s_nodes.id
  ip        = cidrhost(hcloud_network_subnet.k3s_nodes.ip_range, 1 + var.servers_num + count.index)
}

resource "hcloud_server" "control_planes" {
  count = var.servers_num
  name  = "k3s-control-plane-${count.index}"

  image       = data.hcloud_image.ubuntu.name
  server_type = "cx11"

  ssh_keys = [hcloud_ssh_key.default.id]
  labels = {
    provisioner = "terraform",
    engine      = "k3s",
    node_type   = "control-plane"
  }
}


resource "hcloud_server" "agents" {
  count = var.agents_num
  name  = "k3s-agent-${count.index}"

  image       = data.hcloud_image.ubuntu.name
  server_type = "cx11"

  ssh_keys = [hcloud_ssh_key.default.id]
  labels = {
    provisioner = "terraform",
    engine      = "k3s",
    node_type   = "agent",
    nodepool    = count.index % 3 == 0 ? "gpu" : "general",
  }
}

================================================
FILE: examples/hcloud-k3s/outputs.tf
================================================
output "summary" {
  value = module.k3s.summary
}

output "kubeconfig" {
  value     = module.k3s.kube_config
  sensitive = true
}

output "ssh_private_key" {
  description = "Generated SSH private key."
  value       = tls_private_key.ed25519_provisioning.private_key_openssh
  sensitive   = true
}


================================================
FILE: examples/hcloud-k3s/variables.tf
================================================
variable "servers_num" {
  description = "Number of control plane nodes."
  type        = number
  default     = 3
}

variable "agents_num" {
  description = "Number of agent nodes."
  type        = number
  default     = 3
}

================================================
FILE: examples/hcloud-k3s/versions.tf
================================================
terraform {
  required_providers {
    hcloud = {
      source  = "hetznercloud/hcloud"
      version = "1.44.1"
    }
  }
  required_version = "~> 1.0"
}


================================================
FILE: k3s_certificates.tf
================================================
locals {
  should_generate_certificates = var.generate_ca_certificates && length(var.kubernetes_certificates) == 0
  certificates_names           = var.generate_ca_certificates ? ["client-ca", "server-ca", "request-header-key-ca"] : []
  certificates_types           = { for s in local.certificates_names : index(local.certificates_names, s) => s }
  certificates_by_type = { for s in local.certificates_names : s =>
    tls_self_signed_cert.kubernetes_ca_certs[index(local.certificates_names, s)].cert_pem
  }
  certificates_files = flatten(
    [
      [for s in local.certificates_names :
        flatten([
          {
            "file_name"    = "${s}.key"
            "file_content" = tls_private_key.kubernetes_ca[index(local.certificates_names, s)].private_key_pem
          },
          {
            "file_name"    = "${s}.crt"
            "file_content" = tls_self_signed_cert.kubernetes_ca_certs[index(local.certificates_names, s)].cert_pem
          }
        ])
      ]
      , var.kubernetes_certificates
    ]
  )
  cluster_ca_certificate = var.generate_ca_certificates ? local.certificates_by_type["server-ca"] : null
  client_certificate     = var.generate_ca_certificates ? tls_locally_signed_cert.master_user[0].cert_pem : null
  client_key             = var.generate_ca_certificates ? tls_private_key.master_user[0].private_key_pem : null
}

# Keys
resource "tls_private_key" "kubernetes_ca" {
  count = var.generate_ca_certificates ? 3 : 0

  algorithm   = "ECDSA"
  ecdsa_curve = "P384"
}

# certs
resource "tls_self_signed_cert" "kubernetes_ca_certs" {
  for_each = local.certificates_types

  validity_period_hours = 876600 # 100 years
  allowed_uses          = ["digital_signature", "key_encipherment", "cert_signing"]
  private_key_pem       = tls_private_key.kubernetes_ca[each.key].private_key_pem
  is_ca_certificate     = true

  subject {
    common_name = "kubernetes-${each.value}"
  }
}

# master-login cert
resource "tls_private_key" "master_user" {
  count = var.generate_ca_certificates ? 1 : 0

  algorithm   = "ECDSA"
  ecdsa_curve = "P384"
}

resource "tls_cert_request" "master_user" {
  count = var.generate_ca_certificates ? 1 : 0

  private_key_pem = tls_private_key.master_user[0].private_key_pem

  subject {
    common_name  = "master-user"
    organization = "system:masters"
  }
}

resource "tls_locally_signed_cert" "master_user" {
  count = var.generate_ca_certificates ? 1 : 0

  cert_request_pem   = tls_cert_request.master_user[0].cert_request_pem
  ca_private_key_pem = tls_private_key.kubernetes_ca[0].private_key_pem
  ca_cert_pem        = tls_self_signed_cert.kubernetes_ca_certs[0].cert_pem

  validity_period_hours = 876600

  allowed_uses = [
    "key_encipherment",
    "digital_signature",
    "client_auth"
  ]
}


================================================
FILE: k3s_version.tf
================================================
// Fetch the latest k3s version from the release channels
data "http" "k3s_version" {
  url = "https://update.k3s.io/v1-release/channels"
}

// Fetch the k3s installation script
data "http" "k3s_installer" {
  url = "https://raw.githubusercontent.com/rancher/k3s/${jsondecode(data.http.k3s_version.response_body).data[1].latest}/install.sh"
}

locals {
  // Use the fetched version if 'latest' is specified
  k3s_version = var.k3s_version == "latest" ? jsondecode(data.http.k3s_version.response_body).data[1].latest : var.k3s_version
}


================================================
FILE: main.tf
================================================
// Generate the k3s token used by all nodes to join the cluster
resource "random_password" "k3s_cluster_secret" {
  length  = 48
  special = false
}

locals {
  managed_annotation_enabled = contains(var.managed_fields, "annotation")
  managed_label_enabled      = contains(var.managed_fields, "label")
  managed_taint_enabled      = contains(var.managed_fields, "taint")
}

// null_resource used as a dependency aggregation point.
resource "null_resource" "kubernetes_ready" {
  depends_on = [
    null_resource.servers_install, null_resource.servers_drain, null_resource.servers_annotation, null_resource.servers_label, null_resource.servers_taint,
    null_resource.agents_install, null_resource.agents_drain, null_resource.agents_annotation, null_resource.agents_label, null_resource.agents_taint,
  ]
}


================================================
FILE: outputs.tf
================================================
output "kubernetes" {
  description = "Authentication credentials of Kubernetes (full administrator)."
  value = {
    cluster_ca_certificate = local.cluster_ca_certificate
    client_certificate     = local.client_certificate
    client_key             = local.client_key
    api_endpoint           = "https://${local.root_server_connection.host}:6443"
    password               = null
    username               = null
  }
  sensitive = true
}

output "kube_config" {
  description = "Generated kubeconfig."
  value = var.generate_ca_certificates == false ? null : yamlencode({
    apiVersion = "v1"
    clusters = [{
      cluster = {
        certificate-authority-data = base64encode(local.cluster_ca_certificate)
        server                     = "https://${local.root_server_connection.host}:6443"
      }
      name = var.cluster_domain
    }]
    contexts = [{
      context = {
        cluster = var.cluster_domain
        user    = "master-user"
      }
      name = var.cluster_domain
    }]
    current-context = var.cluster_domain
    kind            = "Config"
    preferences     = {}
    users = [{
      user = {
        client-certificate-data = base64encode(local.client_certificate)
        client-key-data         = base64encode(local.client_key)
      }
      name = "master-user"
    }]
  })
  sensitive = true
}

output "summary" {
  description = "Current state of k3s (version & nodes)."
  value = {
    version = local.k3s_version
    servers = [
      for key, server in var.servers :
      {
        name        = local.servers_metadata[key].name
        annotations = try(server.annotations, [])
        labels      = try(server.labels, [])
        taints      = try(server.taints, [])
      }
    ]
    agents = [
      for key, agent in var.agents :
      {
        name        = local.agents_metadata[key].name
        annotations = try(agent.annotations, [])
        labels      = try(agent.labels, [])
        taints      = try(agent.taints, [])
      }
    ]
  }
}

output "kubernetes_ready" {
  description = "Dependency endpoint to synchronize k3s installation and provisioning."
  value       = null_resource.kubernetes_ready
}

output "kubernetes_cluster_secret" {
  description = "Secret token used to join nodes to the cluster"
  value       = random_password.k3s_cluster_secret.result
  sensitive   = true
}


================================================
FILE: renovate.json
================================================
{
  "extends": ["config:base"],
  "labels": ["kind/dependencies"]
}


================================================
FILE: server_nodes.tf
================================================
locals {
  // Convenience locals used to easily access the first k3s server's values
  root_server_name = keys(var.servers)[0]

  // Get the first address from the comma-separated IP list
  root_advertise_ip = split(",", values(var.servers)[0].ip)[0]

  // If root_advertise_ip is an IPv6 address, wrap it in square brackets for k3s URLs; otherwise leave it raw
  root_advertise_ip_k3s = can(regex("::", local.root_advertise_ip)) ? "[${local.root_advertise_ip}]" : local.root_advertise_ip

  // String representation of all extra k3s installation environment variables
  install_env_vars = join(" ", [for k, v in var.k3s_install_env_vars : "${k}=${v}"])

  root_server_connection = {
    type = try(var.servers[local.root_server_name].connection.type, "ssh")

    host     = try(var.servers[local.root_server_name].connection.host, var.servers[local.root_server_name].ip)
    user     = try(var.servers[local.root_server_name].connection.user, null)
    password = try(var.servers[local.root_server_name].connection.password, null)
    port     = try(var.servers[local.root_server_name].connection.port, null)
    timeout  = try(var.servers[local.root_server_name].connection.timeout, null)

    script_path    = try(var.servers[local.root_server_name].connection.script_path, null)
    private_key    = try(var.servers[local.root_server_name].connection.private_key, null)
    certificate    = try(var.servers[local.root_server_name].connection.certificate, null)
    agent          = try(var.servers[local.root_server_name].connection.agent, null)
    agent_identity = try(var.servers[local.root_server_name].connection.agent_identity, null)
    host_key       = try(var.servers[local.root_server_name].connection.host_key, null)

    https    = try(var.servers[local.root_server_name].connection.https, null)
    insecure = try(var.servers[local.root_server_name].connection.insecure, null)
    use_ntlm = try(var.servers[local.root_server_name].connection.use_ntlm, null)
    cacert   = try(var.servers[local.root_server_name].connection.cacert, null)

    bastion_host        = try(var.servers[local.root_server_name].connection.bastion_host, null)
    bastion_host_key    = try(var.servers[local.root_server_name].connection.bastion_host_key, null)
    bastion_port        = try(var.servers[local.root_server_name].connection.bastion_port, null)
    bastion_user        = try(var.servers[local.root_server_name].connection.bastion_user, null)
    bastion_password    = try(var.servers[local.root_server_name].connection.bastion_password, null)
    bastion_private_key = try(var.servers[local.root_server_name].connection.bastion_private_key, null)
    bastion_certificate = try(var.servers[local.root_server_name].connection.bastion_certificate, null)
  }

  // Generate a map of all server annotations in order to manage them through this module. This
  // generation is done in two steps:
  // - generate a list of objects representing all annotations, following this
  //   'template': {key = node_name|annotation_name, value = annotation_value}
  // - generate a map based on the generated list (using the field key as map key)
  server_annotations_list = flatten([
    for nk, nv in var.servers : [
      // Because we need the node name and annotation name when we remove the annotation resource, we need
      // to share them through the annotation key (each.value is not available on destruction).
      for ak, av in try(nv.annotations, {}) : av == null ? { key : "" } : { key : "${nk}${var.separator}${ak}", value : av }
    ]
  ])
  server_annotations = local.managed_annotation_enabled ? { for o in local.server_annotations_list : o.key => o.value if o.key != "" } : {}

  // Generate a map of all server labels in order to manage them through this module. This
  // generation is done in two steps, following the same process as the annotations map.
  server_labels_list = flatten([
    for nk, nv in var.servers : [
      // Because we need the node name and label name when we remove the label resource, we need
      // to share them through the label key (each.value is not available on destruction).
      for lk, lv in try(nv.labels, {}) : lv == null ? { key : "" } : { key : "${nk}${var.separator}${lk}", value : lv }
    ]
  ])
  server_labels = local.managed_label_enabled ? { for o in local.server_labels_list : o.key => o.value if o.key != "" } : {}

  // Generate a map of all server taints in order to manage them through this module. This
  // generation is done in two steps, following the same process as the annotations map.
  server_taints_list = flatten([
    for nk, nv in var.servers : [
      // Because we need the node name and taint name when we remove the taint resource, we need
      // to share them through the taint key (each.value is not available on destruction).
      for tk, tv in try(nv.taints, {}) : tv == null ? { key : "" } : { key : "${nk}${var.separator}${tk}", value : tv }
    ]
  ])
  server_taints = local.managed_taint_enabled ? { for o in local.server_taints_list : o.key => o.value if o.key != "" } : {}

  // Generate a map of all calculated server fields, used during k3s installation.
  servers_metadata = {
    for key, server in var.servers :
    key => {
      name = try(server.name, key)
      ip   = server.ip

      flags = join(" ", compact(concat(
        key == local.root_server_name ?
        // For the first server node, add all configuration flags
        [
          "--advertise-address ${local.root_advertise_ip}",
          "--node-ip ${server.ip}",
          "--node-name '${try(server.name, key)}'",
          "--cluster-domain '${var.cluster_domain}'",
          "--cluster-cidr ${var.cidr.pods}",
          "--service-cidr ${var.cidr.services}",
          "--token ${nonsensitive(random_password.k3s_cluster_secret.result)}", # NOTE: nonsensitive is used to show logs during provisioning
          length(var.servers) > 1 ? "--cluster-init" : "",
        ] :
        // For the other server nodes, join the cluster through the first node (which manages the cluster configuration)
        [
          "--node-ip ${server.ip}",
          "--node-name '${try(server.name, key)}'",
          "--server https://${local.root_advertise_ip_k3s}:6443",
          "--cluster-domain '${var.cluster_domain}'",
          "--cluster-cidr ${var.cidr.pods}",
          "--service-cidr ${var.cidr.services}",
          "--token ${nonsensitive(random_password.k3s_cluster_secret.result)}", # NOTE: nonsensitive is used to show logs during provisioning
        ],
        var.global_flags,
        try(server.flags, []),
        [for key, value in try(server.taints, {}) : "--node-taint '${key}=${value}'" if value != null]
      )))

      immutable_fields_hash = sha1(join("", concat(
        [var.cluster_domain, var.cidr.pods, var.cidr.services],
        var.global_flags,
        try(server.flags, []),
      )))
    }
  }
}

// Upload the generated CA certificates on the first server node
resource "null_resource" "k8s_ca_certificates_install" {
  count = length(local.certificates_files)

  depends_on = [var.depends_on_]

  connection {
    type = try(local.root_server_connection.type, "ssh")

    host     = try(local.root_server_connection.host, local.root_server_connection.ip)
    user     = try(local.root_server_connection.user, null)
    password = try(local.root_server_connection.password, null)
    port     = try(local.root_server_connection.port, null)
    timeout  = try(local.root_server_connection.timeout, null)

    script_path    = try(local.root_server_connection.script_path, null)
    private_key    = try(local.root_server_connection.private_key, null)
    certificate    = try(local.root_server_connection.certificate, null)
    agent          = try(local.root_server_connection.agent, null)
    agent_identity = try(local.root_server_connection.agent_identity, null)
    host_key       = try(local.root_server_connection.host_key, null)

    https    = try(local.root_server_connection.https, null)
    insecure = try(local.root_server_connection.insecure, null)
    use_ntlm = try(local.root_server_connection.use_ntlm, null)
    cacert   = try(local.root_server_connection.cacert, null)

    bastion_host        = try(local.root_server_connection.bastion_host, null)
    bastion_host_key    = try(local.root_server_connection.bastion_host_key, null)
    bastion_port        = try(local.root_server_connection.bastion_port, null)
    bastion_user        = try(local.root_server_connection.bastion_user, null)
    bastion_password    = try(local.root_server_connection.bastion_password, null)
    bastion_private_key = try(local.root_server_connection.bastion_private_key, null)
    bastion_certificate = try(local.root_server_connection.bastion_certificate, null)
  }

  provisioner "remote-exec" {
    inline = [
      <<-EOT
      # --- use sudo if we are not already root ---
      [ $(id -u) -eq 0 ] || exec sudo -n $0 $@

      mkdir -p /var/lib/rancher/k3s/server/tls/
      echo '${local.certificates_files[count.index].file_content}' > /var/lib/rancher/k3s/server/tls/${local.certificates_files[count.index].file_name}
      EOT
    ]
  }
}

resource "null_resource" "servers_install" {
  for_each = var.servers

  depends_on = [var.depends_on_, null_resource.k8s_ca_certificates_install]
  triggers = {
    on_immutable_changes = local.servers_metadata[each.key].immutable_fields_hash
    on_new_version       = local.k3s_version
  }

  connection {
    type = try(each.value.connection.type, "ssh")

    host     = try(each.value.connection.host, each.value.ip)
    user     = try(each.value.connection.user, null)
    password = try(each.value.connection.password, null)
    port     = try(each.value.connection.port, null)
    timeout  = try(each.value.connection.timeout, null)

    script_path    = try(each.value.connection.script_path, null)
    private_key    = try(each.value.connection.private_key, null)
    certificate    = try(each.value.connection.certificate, null)
    agent          = try(each.value.connection.agent, null)
    agent_identity = try(each.value.connection.agent_identity, null)
    host_key       = try(each.value.connection.host_key, null)

    https    = try(each.value.connection.https, null)
    insecure = try(each.value.connection.insecure, null)
    use_ntlm = try(each.value.connection.use_ntlm, null)
    cacert   = try(each.value.connection.cacert, null)

    bastion_host        = try(each.value.connection.bastion_host, null)
    bastion_host_key    = try(each.value.connection.bastion_host_key, null)
    bastion_port        = try(each.value.connection.bastion_port, null)
    bastion_user        = try(each.value.connection.bastion_user, null)
    bastion_password    = try(each.value.connection.bastion_password, null)
    bastion_private_key = try(each.value.connection.bastion_private_key, null)
    bastion_certificate = try(each.value.connection.bastion_certificate, null)
  }

  // Upload k3s file
  provisioner "file" {
    content     = data.http.k3s_installer.response_body
    destination = "/tmp/k3s-installer"
  }

  // Install k3s server
  provisioner "remote-exec" {
    inline = [
      "${local.install_env_vars} INSTALL_K3S_VERSION=${local.k3s_version} sh /tmp/k3s-installer server ${local.servers_metadata[each.key].flags}",
      "until ${local.kubectl_cmd} get node ${local.servers_metadata[each.key].name}; do sleep 1; done"
    ]
  }
}

// Drain the k3s node on destruction in order to safely move all workloads to another node.
resource "null_resource" "servers_drain" {
  for_each = var.servers

  depends_on = [null_resource.servers_install]
  triggers = {
    server_name     = local.servers_metadata[split(var.separator, each.key)[0]].name
    connection_json = base64encode(jsonencode(local.root_server_connection))
    drain_timeout   = var.drain_timeout
    kubectl_cmd     = local.kubectl_cmd
  }
  lifecycle { ignore_changes = [triggers] }

  connection {
    type = jsondecode(base64decode(self.triggers.connection_json)).type

    host     = jsondecode(base64decode(self.triggers.connection_json)).host
    user     = jsondecode(base64decode(self.triggers.connection_json)).user
    password = jsondecode(base64decode(self.triggers.connection_json)).password
    port     = jsondecode(base64decode(self.triggers.connection_json)).port
    timeout  = jsondecode(base64decode(self.triggers.connection_json)).timeout

    script_path    = jsondecode(base64decode(self.triggers.connection_json)).script_path
    private_key    = jsondecode(base64decode(self.triggers.connection_json)).private_key
    certificate    = jsondecode(base64decode(self.triggers.connection_json)).certificate
    agent          = jsondecode(base64decode(self.triggers.connection_json)).agent
    agent_identity = jsondecode(base64decode(self.triggers.connection_json)).agent_identity
    host_key       = jsondecode(base64decode(self.triggers.connection_json)).host_key

    https    = jsondecode(base64decode(self.triggers.connection_json)).https
    insecure = jsondecode(base64decode(self.triggers.connection_json)).insecure
    use_ntlm = jsondecode(base64decode(self.triggers.connection_json)).use_ntlm
    cacert   = jsondecode(base64decode(self.triggers.connection_json)).cacert

    bastion_host        = jsondecode(base64decode(self.triggers.connection_json)).bastion_host
    bastion_host_key    = jsondecode(base64decode(self.triggers.connection_json)).bastion_host_key
    bastion_port        = jsondecode(base64decode(self.triggers.connection_json)).bastion_port
    bastion_user        = jsondecode(base64decode(self.triggers.connection_json)).bastion_user
    bastion_password    = jsondecode(base64decode(self.triggers.connection_json)).bastion_password
    bastion_private_key = jsondecode(base64decode(self.triggers.connection_json)).bastion_private_key
    bastion_certificate = jsondecode(base64decode(self.triggers.connection_json)).bastion_certificate
  }

  provisioner "remote-exec" {
    when = destroy
    inline = [
      "${self.triggers.kubectl_cmd} drain ${self.triggers.server_name} --delete-local-data --force --ignore-daemonsets --timeout=${self.triggers.drain_timeout}"
    ]
  }
}

// Manually add/remove annotations on k3s server nodes
resource "null_resource" "servers_annotation" {
  for_each = local.server_annotations

  depends_on = [null_resource.servers_install]
  triggers = {
    server_name      = local.servers_metadata[split(var.separator, each.key)[0]].name
    annotation_name  = split(var.separator, each.key)[1]
    on_value_changes = each.value

    connection_json = base64encode(jsonencode(local.root_server_connection))
    kubectl_cmd     = local.kubectl_cmd
  }
  lifecycle { ignore_changes = [triggers["connection_json"], triggers["kubectl_cmd"]] }

  connection {
    type = jsondecode(base64decode(self.triggers.connection_json)).type

    host     = jsondecode(base64decode(self.triggers.connection_json)).host
    user     = jsondecode(base64decode(self.triggers.connection_json)).user
    password = jsondecode(base64decode(self.triggers.connection_json)).password
    port     = jsondecode(base64decode(self.triggers.connection_json)).port
    timeout  = jsondecode(base64decode(self.triggers.connection_json)).timeout

    script_path    = jsondecode(base64decode(self.triggers.connection_json)).script_path
    private_key    = jsondecode(base64decode(self.triggers.connection_json)).private_key
    certificate    = jsondecode(base64decode(self.triggers.connection_json)).certificate
    agent          = jsondecode(base64decode(self.triggers.connection_json)).agent
    agent_identity = jsondecode(base64decode(self.triggers.connection_json)).agent_identity
    host_key       = jsondecode(base64decode(self.triggers.connection_json)).host_key

    https    = jsondecode(base64decode(self.triggers.connection_json)).https
    insecure = jsondecode(base64decode(self.triggers.connection_json)).insecure
    use_ntlm = jsondecode(base64decode(self.triggers.connection_json)).use_ntlm
    cacert   = jsondecode(base64decode(self.triggers.connection_json)).cacert

    bastion_host        = jsondecode(base64decode(self.triggers.connection_json)).bastion_host
    bastion_host_key    = jsondecode(base64decode(self.triggers.connection_json)).bastion_host_key
    bastion_port        = jsondecode(base64decode(self.triggers.connection_json)).bastion_port
    bastion_user        = jsondecode(base64decode(self.triggers.connection_json)).bastion_user
    bastion_password    = jsondecode(base64decode(self.triggers.connection_json)).bastion_password
    bastion_private_key = jsondecode(base64decode(self.triggers.connection_json)).bastion_private_key
    bastion_certificate = jsondecode(base64decode(self.triggers.connection_json)).bastion_certificate
  }

  provisioner "remote-exec" {
    inline = [
      "${self.triggers.kubectl_cmd} annotate --overwrite node ${self.triggers.server_name} ${self.triggers.annotation_name}=${self.triggers.on_value_changes}"
    ]
  }

  provisioner "remote-exec" {
    when = destroy
    inline = [
      "${self.triggers.kubectl_cmd} annotate node ${self.triggers.server_name} ${self.triggers.annotation_name}-"
    ]
  }
}

// Manually add/remove labels on k3s server nodes
resource "null_resource" "servers_label" {
  for_each = local.server_labels

  depends_on = [null_resource.servers_install]
  triggers = {
    server_name      = local.servers_metadata[split(var.separator, each.key)[0]].name
    label_name       = split(var.separator, each.key)[1]
    on_value_changes = each.value

    connection_json = base64encode(jsonencode(local.root_server_connection))
    kubectl_cmd     = local.kubectl_cmd
  }
  lifecycle { ignore_changes = [triggers["connection_json"], triggers["kubectl_cmd"]] }

  connection {
    type = jsondecode(base64decode(self.triggers.connection_json)).type

    host     = jsondecode(base64decode(self.triggers.connection_json)).host
    user     = jsondecode(base64decode(self.triggers.connection_json)).user
    password = jsondecode(base64decode(self.triggers.connection_json)).password
    port     = jsondecode(base64decode(self.triggers.connection_json)).port
    timeout  = jsondecode(base64decode(self.triggers.connection_json)).timeout

    script_path    = jsondecode(base64decode(self.triggers.connection_json)).script_path
    private_key    = jsondecode(base64decode(self.triggers.connection_json)).private_key
    certificate    = jsondecode(base64decode(self.triggers.connection_json)).certificate
    agent          = jsondecode(base64decode(self.triggers.connection_json)).agent
    agent_identity = jsondecode(base64decode(self.triggers.connection_json)).agent_identity
    host_key       = jsondecode(base64decode(self.triggers.connection_json)).host_key

    https    = jsondecode(base64decode(self.triggers.connection_json)).https
    insecure = jsondecode(base64decode(self.triggers.connection_json)).insecure
    use_ntlm = jsondecode(base64decode(self.triggers.connection_json)).use_ntlm
    cacert   = jsondecode(base64decode(self.triggers.connection_json)).cacert

    bastion_host        = jsondecode(base64decode(self.triggers.connection_json)).bastion_host
    bastion_host_key    = jsondecode(base64decode(self.triggers.connection_json)).bastion_host_key
    bastion_port        = jsondecode(base64decode(self.triggers.connection_json)).bastion_port
    bastion_user        = jsondecode(base64decode(self.triggers.connection_json)).bastion_user
    bastion_password    = jsondecode(base64decode(self.triggers.connection_json)).bastion_password
    bastion_private_key = jsondecode(base64decode(self.triggers.connection_json)).bastion_private_key
    bastion_certificate = jsondecode(base64decode(self.triggers.connection_json)).bastion_certificate
  }

  provisioner "remote-exec" {
    inline = [
      "${self.triggers.kubectl_cmd} label --overwrite node ${self.triggers.server_name} ${self.triggers.label_name}=${self.triggers.on_value_changes}"
    ]
  }

  provisioner "remote-exec" {
    when = destroy
    inline = [
      "${self.triggers.kubectl_cmd} label node ${self.triggers.server_name} ${self.triggers.label_name}-"
    ]
  }
}

// Manually add/remove taints on k3s server nodes
resource "null_resource" "servers_taint" {
  for_each = local.server_taints

  depends_on = [null_resource.servers_install]
  triggers = {
    server_name      = local.servers_metadata[split(var.separator, each.key)[0]].name
    taint_name       = split(var.separator, each.key)[1]
    on_value_changes = each.value

    connection_json = base64encode(jsonencode(local.root_server_connection))
    kubectl_cmd     = local.kubectl_cmd
  }
  lifecycle { ignore_changes = [triggers["connection_json"], triggers["kubectl_cmd"]] }

  connection {
    type = jsondecode(base64decode(self.triggers.connection_json)).type

    host     = jsondecode(base64decode(self.triggers.connection_json)).host
    user     = jsondecode(base64decode(self.triggers.connection_json)).user
    password = jsondecode(base64decode(self.triggers.connection_json)).password
    port     = jsondecode(base64decode(self.triggers.connection_json)).port
    timeout  = jsondecode(base64decode(self.triggers.connection_json)).timeout

    script_path    = jsondecode(base64decode(self.triggers.connection_json)).script_path
    private_key    = jsondecode(base64decode(self.triggers.connection_json)).private_key
    certificate    = jsondecode(base64decode(self.triggers.connection_json)).certificate
    agent          = jsondecode(base64decode(self.triggers.connection_json)).agent
    agent_identity = jsondecode(base64decode(self.triggers.connection_json)).agent_identity
    host_key       = jsondecode(base64decode(self.triggers.connection_json)).host_key

    https    = jsondecode(base64decode(self.triggers.connection_json)).https
    insecure = jsondecode(base64decode(self.triggers.connection_json)).insecure
    use_ntlm = jsondecode(base64decode(self.triggers.connection_json)).use_ntlm
    cacert   = jsondecode(base64decode(self.triggers.connection_json)).cacert

    bastion_host        = jsondecode(base64decode(self.triggers.connection_json)).bastion_host
    bastion_host_key    = jsondecode(base64decode(self.triggers.connection_json)).bastion_host_key
    bastion_port        = jsondecode(base64decode(self.triggers.connection_json)).bastion_port
    bastion_user        = jsondecode(base64decode(self.triggers.connection_json)).bastion_user
    bastion_password    = jsondecode(base64decode(self.triggers.connection_json)).bastion_password
    bastion_private_key = jsondecode(base64decode(self.triggers.connection_json)).bastion_private_key
    bastion_certificate = jsondecode(base64decode(self.triggers.connection_json)).bastion_certificate
  }

  provisioner "remote-exec" {
    inline = [
      "${self.triggers.kubectl_cmd} taint node ${self.triggers.server_name} ${self.triggers.taint_name}=${self.triggers.on_value_changes} --overwrite"
    ]
  }

  provisioner "remote-exec" {
    when = destroy
    inline = [
      "${self.triggers.kubectl_cmd} taint node ${self.triggers.server_name} ${self.triggers.taint_name}-"
    ]
  }
}
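The two resources above round-trip the whole SSH connection object through `base64encode(jsonencode(...))` into a single string trigger, then decode it attribute by attribute with `jsondecode(base64decode(...))`. Destroy-time provisioners may only reference `self.triggers`, which must be a map of strings, so the connection details have to be flattened into one string. A minimal sketch of the same pattern, using a hypothetical resource and made-up connection values:

```hcl
resource "null_resource" "example" {
  triggers = {
    # Serialize the connection object into a single string trigger so the
    # destroy-time provisioner below can still reach the host.
    connection_json = base64encode(jsonencode({
      type = "ssh"
      host = "203.0.113.10" # hypothetical address
      user = "root"
    }))
  }

  connection {
    type = jsondecode(base64decode(self.triggers.connection_json)).type
    host = jsondecode(base64decode(self.triggers.connection_json)).host
    user = jsondecode(base64decode(self.triggers.connection_json)).user
  }

  provisioner "remote-exec" {
    when   = destroy
    inline = ["echo 'cleaning up'"]
  }
}
```

Base64 encoding keeps the JSON payload as one opaque string in state, avoiding any quoting issues inside the triggers map.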


================================================
FILE: variables.tf
================================================
variable "depends_on_" {
  description = "Resource dependency of this module."
  default     = null
}

variable "k3s_version" {
  description = "Specify the k3s version. You can use a release channel (e.g. latest, stable) or pin a version directly."
  type        = string
  default     = "latest"
}

variable "k3s_install_env_vars" {
  description = "Map of environment variables passed to the k3s installation script (see https://docs.k3s.io/reference/env-variables)"
  type        = map(string)
  default     = {}

  validation {
    condition     = !can(var.k3s_install_env_vars["INSTALL_K3S_VERSION"])
    error_message = "Environment variable \"INSTALL_K3S_VERSION\" needs to be set via variable k3s_version"
  }
}

variable "name" {
  description = "K3s cluster domain name (see https://rancher.com/docs/k3s/latest/en/installation/install-options/). This input is deprecated and will be removed in the next major release. Use `cluster_domain` instead."
  type        = string
  default     = "!!!DEPRECATED!!!"

  validation {
    condition     = var.name == "!!!DEPRECATED!!!"
    error_message = "Variable `name` is deprecated, use `cluster_domain` instead. It will be removed at the next major release."
  }
}

variable "cluster_domain" {
  description = "K3s cluster domain name (see https://rancher.com/docs/k3s/latest/en/installation/install-options/)."
  type        = string
  default     = "cluster.local"
}

variable "generate_ca_certificates" {
  description = "If true, this module will generate the CA certificates (see https://github.com/rancher/k3s/issues/1868#issuecomment-639690634). Otherwise, Rancher will generate them. This is required to generate the kubeconfig"
  type        = bool
  default     = true
}

variable "kubernetes_certificates" {
  description = "A list of objects mapping certificate-name.[crt/key] to certificate content, copied to /var/lib/rancher/k3s/server/tls. If this option is used, generate_ca_certificates will be treated as false"
  type = list(
    object({
      file_name    = string,
      file_content = string
    })
  )
  default = []
}

variable "cidr" {
  description = "K3s network CIDRs (see https://rancher.com/docs/k3s/latest/en/installation/install-options/)."
  type = object({
    pods     = string
    services = string
  })
  default = {
    pods     = "10.42.0.0/16"
    services = "10.43.0.0/16"
  }
}

variable "drain_timeout" {
  description = "The length of time to wait before giving up on draining the node (infinite by default)."
  type        = string
  default     = "0s"
}

variable "global_flags" {
  description = "Add additional installation flags, used by all nodes (see https://rancher.com/docs/k3s/latest/en/installation/install-options/)."
  type        = list(string)
  default     = []
}

variable "servers" {
  description = "K3s server nodes definitions. The key is used as the node name if no name is provided."
  type        = map(any)

  validation {
    condition     = length(var.servers) > 0
    error_message = "At least one server node must be provided."
  }
  validation {
    condition     = length(var.servers) % 2 == 1
    error_message = "Servers must have an odd number of nodes."
  }
  validation {
    condition     = can(values(var.servers)[*].ip)
    error_message = "Field servers.<name>.ip is required."
  }
  validation {
    condition     = !can(values(var.servers)[*].connection) || !contains([for v in var.servers : can(tomap(v.connection))], false)
    error_message = "Field servers.<name>.connection must be a valid Terraform connection."
  }
  validation {
    condition     = !can(values(var.servers)[*].flags) || !contains([for v in var.servers : can(tolist(v.flags))], false)
    error_message = "Field servers.<name>.flags must be a list of string (see: https://docs.k3s.io/cli/server)."
  }
  validation {
    condition     = !can(values(var.servers)[*].annotations) || !contains([for v in var.servers : can(tomap(v.annotations))], false)
    error_message = "Field servers.<name>.annotations must be a map of string."
  }
  validation {
    condition     = !can(values(var.servers)[*].labels) || !contains([for v in var.servers : can(tomap(v.labels))], false)
    error_message = "Field servers.<name>.labels must be a map of string."
  }
  validation {
    condition     = !can(values(var.servers)[*].taints) || !contains([for v in var.servers : can(tomap(v.taints))], false)
    error_message = "Field servers.<name>.taints must be a map of string."
  }
}
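The validations above require at least one server, an odd node count (for etcd quorum), and an `ip` per node, with `connection`, `annotations`, `labels`, and `taints` decodable as maps and `flags` as a list. A hypothetical `servers` value satisfying all of them (all names and addresses are made up):

```hcl
servers = {
  node-0 = {
    ip         = "10.0.0.1"                                # required
    connection = { host = "203.0.113.1", user = "root" }   # must decode as a map
    flags      = ["--disable=traefik"]                     # must decode as a list
    labels     = { "node.kubernetes.io/pool" = "control" } # must decode as a map
  }
  node-1 = { ip = "10.0.0.2", connection = { host = "203.0.113.2", user = "root" } }
  node-2 = { ip = "10.0.0.3", connection = { host = "203.0.113.3", user = "root" } }
}
```

With two or four nodes the `length(var.servers) % 2 == 1` validation fails at plan time, before any provisioning runs.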

variable "agents" {
  description = "K3s agent nodes definitions. The key is used as the node name if no name is provided."
  type        = map(any)
  default     = {}

  validation {
    condition     = can(values(var.agents)[*].ip)
    error_message = "Field agents.<name>.ip is required."
  }
  validation {
    condition     = !can(values(var.agents)[*].connection) || !contains([for v in var.agents : can(tomap(v.connection))], false)
    error_message = "Field agents.<name>.connection must be a valid Terraform connection."
  }
  validation {
    condition     = !can(values(var.agents)[*].flags) || !contains([for v in var.agents : can(tolist(v.flags))], false)
    error_message = "Field agents.<name>.flags must be a list of string (see: https://docs.k3s.io/cli/agent)."
  }
  validation {
    condition     = !can(values(var.agents)[*].annotations) || !contains([for v in var.agents : can(tomap(v.annotations))], false)
    error_message = "Field agents.<name>.annotations must be a map of string."
  }
  validation {
    condition     = !can(values(var.agents)[*].labels) || !contains([for v in var.agents : can(tomap(v.labels))], false)
    error_message = "Field agents.<name>.labels must be a map of string."
  }
  validation {
    condition     = !can(values(var.agents)[*].taints) || !contains([for v in var.agents : can(tomap(v.taints))], false)
    error_message = "Field agents.<name>.taints must be a map of string."
  }
}

variable "managed_fields" {
  description = "List of fields which must be managed by this module (can be annotation, label and/or taint)."
  type        = list(string)
  default     = ["annotation", "label", "taint"]
}

variable "separator" {
  description = "Separator used to separate node names from field names (used to manage annotations, labels and taints)."
  default     = "|"
}

variable "use_sudo" {
  description = "Whether or not to use kubectl with sudo during cluster setup."
  default     = false
  type        = bool
}
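Taken together, a sketch of how the module might be called with the variables defined above. The node IPs and SSH hosts are assumptions; the bundled examples use a local `source = "./../.."` path, adjust it to your layout:

```hcl
module "k3s" {
  source = "./../.." # local path, as in the bundled examples

  k3s_version    = "latest" # a channel, or a pinned version such as "v1.30.2+k3s1"
  cluster_domain = "cluster.local"
  cidr = {
    pods     = "10.42.0.0/16"
    services = "10.43.0.0/16"
  }
  drain_timeout = "60s"
  use_sudo      = true

  servers = {
    node-0 = { ip = "10.0.0.1", connection = { host = "203.0.113.1", user = "root" } }
    node-1 = { ip = "10.0.0.2", connection = { host = "203.0.113.2", user = "root" } }
    node-2 = { ip = "10.0.0.3", connection = { host = "203.0.113.3", user = "root" } }
  }
  agents = {
    agent-0 = { ip = "10.0.1.1", connection = { host = "203.0.113.11", user = "root" } }
  }
}
```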


================================================
FILE: versions.tf
================================================
terraform {
  required_providers {
    http = {
      source  = "hashicorp/http"
      version = "~> 3.0"
    }
    null = {
      source  = "hashicorp/null"
      version = "~> 3.0"
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.0"
    }
    tls = {
      source  = "hashicorp/tls"
      version = "~> 4.0"
    }
  }

  required_version = "~> 1.0"
}