Repository: controlplaneio/netassert
Branch: master
Commit: bcb1d6125b85
Files: 118
Total size: 508.2 KB
Directory structure:
gitextract_igc5nwum/
├── .dockerignore
├── .editorconfig
├── .github/
│ ├── ISSUE_TEMPLATE/
│ │ ├── bug_report.md
│ │ ├── feature_request.md
│ │ └── question.md
│ ├── PULL_REQUEST_TEMPLATE/
│ │ └── pull_request_template.md
│ └── workflows/
│ ├── build.yaml
│ └── release.yaml
├── .gitignore
├── .goreleaser.yaml
├── .hadolint.yaml
├── .yamllint.yaml
├── CHANGELOG.md
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── Dockerfile
├── LICENSE
├── README.md
├── SECURITY.md
├── cmd/
│ └── netassert/
│ └── cli/
│ ├── common.go
│ ├── gen_result.go
│ ├── main.go
│ ├── ping.go
│ ├── root.go
│ ├── run.go
│ ├── validate.go
│ └── version.go
├── download.sh
├── e2e/
│ ├── README.md
│ ├── clusters/
│ │ ├── aws-eks-terraform-module/
│ │ │ ├── eks.tf
│ │ │ ├── outputs.tf
│ │ │ ├── variables.tf
│ │ │ └── vpc.tf
│ │ ├── eks-with-calico-cni/
│ │ │ ├── calico-3.26.4.yaml
│ │ │ └── terraform/
│ │ │ ├── main.tf
│ │ │ └── vars.tf
│ │ ├── eks-with-vpc-cni/
│ │ │ └── terraform/
│ │ │ ├── main.tf
│ │ │ └── vars.tf
│ │ ├── gke-dataplanev2/
│ │ │ ├── main.tf
│ │ │ └── variables.tf
│ │ ├── gke-vpc/
│ │ │ ├── main.tf
│ │ │ └── variables.tf
│ │ └── kind/
│ │ └── kind-config.yaml
│ ├── e2e_test.go
│ ├── helpers/
│ │ ├── common.go
│ │ ├── eks.go
│ │ ├── gke.go
│ │ └── kind.go
│ └── manifests/
│ ├── networkpolicies.yaml
│ ├── test-cases.yaml
│ └── workload.yaml
├── fluxcd-demo/
│ ├── README.md
│ ├── fluxcd-helmconfig.yaml
│ ├── helm/
│ │ ├── Chart.yaml
│ │ ├── templates/
│ │ │ ├── _helpers.tpl
│ │ │ ├── deployment.yaml
│ │ │ ├── pod1-pod2.yaml
│ │ │ ├── post-deploy-tests.yaml
│ │ │ └── statefulset.yaml
│ │ └── values.yaml
│ └── kind-cluster.yaml
├── go.mod
├── go.sum
├── helm/
│ ├── Chart.yaml
│ ├── README.md
│ ├── templates/
│ │ ├── NOTES.txt
│ │ ├── _helpers.tpl
│ │ ├── clusterrole.yaml
│ │ ├── clusterrolebinding.yaml
│ │ ├── configmap.yaml
│ │ ├── job.yaml
│ │ └── serviceaccount.yaml
│ └── values.yaml
├── internal/
│ ├── data/
│ │ ├── read.go
│ │ ├── read_test.go
│ │ ├── tap.go
│ │ ├── tap_test.go
│ │ ├── testdata/
│ │ │ ├── dir-without-yaml-files/
│ │ │ │ └── .gitkeep
│ │ │ ├── invalid/
│ │ │ │ ├── duplicated-names.yaml
│ │ │ │ ├── empty-resources.yaml
│ │ │ │ ├── host-as-dst-udp.yaml
│ │ │ │ ├── host-as-source.yaml
│ │ │ │ ├── missing-fields.yaml
│ │ │ │ ├── multiple-dst-blocks.yaml
│ │ │ │ ├── not-a-list.yaml
│ │ │ │ └── wrong-test-values.yaml
│ │ │ ├── invalid-duplicated-names/
│ │ │ │ ├── input1.yaml
│ │ │ │ └── input2.yaml
│ │ │ └── valid/
│ │ │ ├── empty.yaml
│ │ │ └── multi.yaml
│ │ ├── types.go
│ │ └── types_test.go
│ ├── engine/
│ │ ├── engine.go
│ │ ├── engine_daemonset_test.go
│ │ ├── engine_deployment_test.go
│ │ ├── engine_mocks_test.go
│ │ ├── engine_pod_test.go
│ │ ├── engine_statefulset_test.go
│ │ ├── interface.go
│ │ ├── run_tcp.go
│ │ ├── run_tcp_test.go
│ │ └── run_udp.go
│ ├── kubeops/
│ │ ├── client.go
│ │ ├── container_test.go
│ │ ├── containers.go
│ │ ├── daemonset.go
│ │ ├── daemonset_test.go
│ │ ├── deployment.go
│ │ ├── deployment_test.go
│ │ ├── pod.go
│ │ ├── pod_test.go
│ │ ├── statefulset.go
│ │ ├── statefulset_test.go
│ │ └── string_gen.go
│ └── logger/
│ └── hclog.go
├── justfile
└── rbac/
├── cluster-role.yaml
└── cluster-rolebinding.yaml
================================================
FILE CONTENTS
================================================
================================================
FILE: .dockerignore
================================================
Dockerfile*
Jenkinsfile*
**/.terraform
.git/
.idea/
*.iml
.gcloudignore
================================================
FILE: .editorconfig
================================================
root = true
[*]
end_of_line = lf
indent_style = space
indent_size = 2
insert_final_newline = true
max_line_length = 120
trim_trailing_whitespace = true
[*.py]
indent_size = 4
[{Makefile,makefile,**.mk}]
indent_style = tab
[*.sh]
indent_style = space
indent_size = 2
shell_variant = bash # like -ln=posix
binary_next_line = true # like -bn
switch_case_indent = true # like -ci
space_redirects = true # like -sr
keep_padding = false # like -kp
================================================
FILE: .github/ISSUE_TEMPLATE/bug_report.md
================================================
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: bug
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behaviour:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behaviour**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Additional context**
Add any other context about the problem here.
================================================
FILE: .github/ISSUE_TEMPLATE/feature_request.md
================================================
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
For example: I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
================================================
FILE: .github/ISSUE_TEMPLATE/question.md
================================================
---
name: Question
about: Post a question about the project
title: ''
labels: question
assignees: ''
---
**Your question**
A clear and concise question.
**Additional context**
Add any other context about your question here.
================================================
FILE: .github/PULL_REQUEST_TEMPLATE/pull_request_template.md
================================================
---
name: Pull Request
about: A pull request
title: ''
labels: ''
assignees: ''
---
[pull_requests]: https://github.com/controlplaneio/netassert/pulls?q=is%3Apr+is%3Aopen+sort%3Aupdated-desc
<!-- You can erase any parts of this template not applicable to your Pull Request. -->
**All Submissions.**
- [ ] Have you followed the guidelines in our [Contributing document](../../CONTRIBUTING.md)?
- [ ] Have you checked to ensure there aren't other open [Pull Requests][pull_requests] for the same update/change?
**Code Submissions.**
- [ ] Does your submission pass linting, tests, and security analysis?
**Changes to Core Features.**
- [ ] Have you added an explanation of what your changes do and why you'd like us to include them?
- [ ] Have you written new tests for your core changes, as applicable?
================================================
FILE: .github/workflows/build.yaml
================================================
name: Lint and Build
on:
push:
tags-ignore:
- '*'
branches:
- '*'
pull_request:
branches: ['main', 'master']
jobs:
lint:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@f43a0e5ff2bd294095638e18286ca9a3d1956744 # v3
- name: Run golangci-lint
uses: reviewdog/action-golangci-lint@f9bba13753278f6a73b27a56a3ffb1bfda90ed71 # v2
with:
go_version: "1.25.4"
fail_level: "none"
build:
runs-on: ubuntu-latest
needs: lint
steps:
- name: Checkout source code
uses: actions/checkout@f43a0e5ff2bd294095638e18286ca9a3d1956744 # v3
- name: Setup Go
uses: actions/setup-go@be3c94b385c4f180051c996d336f57a34c397495 # v3
with:
go-version: '1.25.4'
- name: Install dependencies
run: go mod download
- name: Test
run: go test -race -v ./...
- name: E2E Test
env:
KIND_E2E_TESTS: yes
run: go test -timeout 20m -v ./e2e/...
- name: Build
run: go build -v ./...
- name: Build an image from Dockerfile
run: |
docker build -t controlplane/netassert:${{ github.sha }} .
- name: Run Trivy vulnerability scanner
uses: aquasecurity/trivy-action@master
with:
image-ref: 'controlplane/netassert:${{ github.sha }}'
format: 'table'
ignore-unfixed: true
exit-code: '1'
vuln-type: 'os,library'
severity: 'CRITICAL,HIGH,MEDIUM'
================================================
FILE: .github/workflows/release.yaml
================================================
name: release
on:
push:
tags:
- "v[0-9]+.[0-9]+.[0-9]+"
- "v[0-9]+.[0-9]+.[0-9]+-testing[0-9]+"
permissions:
contents: write
packages: write
id-token: write
attestations: write
env:
GH_REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
RELEASE_VERSION: ${{ github.ref_name }}
SCANNER_IMG_VERSION: v1.0.11
SNIFFER_IMG_VERSION: v1.1.9
jobs:
goreleaser:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5
- name: Set up Go
uses: actions/setup-go@be3c94b385c4f180051c996d336f57a34c397495 # v3
with:
go-version: '1.25.4'
- uses: anchore/sbom-action/download-syft@f8bdd1d8ac5e901a77a92f111440fdb1b593736b # v0.20.6
- name: Run GoReleaser
uses: goreleaser/goreleaser-action@5fdedb94abba051217030cc86d4523cf3f02243d # v4
with:
distribution: goreleaser
args: release --clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
docker:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5
# - name: Extract metadata (tags, labels) for Docker
# id: meta
# uses: docker/metadata-action@9ec57ed1fcdbf14dcef7dfbe97b2010124a938b7
# with:
# images: ${{ env.GH_REGISTRY }}/${{ env.IMAGE_NAME }}
- name: Set up QEMU
uses: docker/setup-qemu-action@c7c53464625b32c7a7e944ae62b3e17d2b600130 # v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3
- name: Install cosign
uses: sigstore/cosign-installer@398d4b0eeef1380460a10c8013a76f728fb906ac # v3
- name: Log in to the GitHub Container registry
uses: docker/login-action@65b78e6e13532edd9afa3aa52ac7964289d1a9c1
with:
registry: ${{ env.GH_REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Login to Docker Hub
uses: docker/login-action@465a07811f14bebb1938fbed4728c6a1ff8901fc # v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build and push
id: buildpush
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6
with:
platforms: linux/amd64,linux/arm64
sbom: true
provenance: mode=max
push: true
tags: |
docker.io/controlplane/netassert:${{ env.RELEASE_VERSION }}
docker.io/controlplane/netassert:latest
${{ env.GH_REGISTRY }}/${{ env.IMAGE_NAME }}:${{ env.RELEASE_VERSION }}
${{ env.GH_REGISTRY }}/${{ env.IMAGE_NAME }}:latest
build-args: |
VERSION=${{ env.RELEASE_VERSION }}
SCANNER_IMG_VERSION=${{ env.SCANNER_IMG_VERSION }}
SNIFFER_IMG_VERSION=${{ env.SNIFFER_IMG_VERSION }}
- name: Sign artifact
run: |
cosign sign --yes \
"${{ env.GH_REGISTRY }}/${{ env.IMAGE_NAME }}@${{ steps.buildpush.outputs.digest }}"
cosign sign --yes \
"docker.io/controlplane/netassert@${{ steps.buildpush.outputs.digest }}"
helm:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5
- name: Set up Helm
uses: azure/setup-helm@1a275c3b69536ee54be43f2070a358922e12c8d4 # v4
- name: Setup yq
uses: mikefarah/yq@065b200af9851db0d5132f50bc10b1406ea5c0a8 # v4
- name: Log in to GitHub Container Registry
run: |
echo "${{ secrets.GITHUB_TOKEN }}" | helm registry login ghcr.io -u ${{ github.actor }} --password-stdin
- name: Prepare and package Helm chart
run: |
CLEAN_VERSION=$(echo "$RELEASE_VERSION" | sed 's/^v//')
echo "Using chart version and appVersion: $CLEAN_VERSION"
yq -i ".image.tag = \"${RELEASE_VERSION}\"" ./helm/values.yaml
yq -i ".version = \"${CLEAN_VERSION}\"" ./helm/Chart.yaml
yq -i ".appVersion = \"${CLEAN_VERSION}\"" ./helm/Chart.yaml
helm package ./helm -d .
- name: Push Helm chart to GHCR
run: |
CLEAN_VERSION=$(echo "$RELEASE_VERSION" | sed 's/^v//')
helm push "./netassert-${CLEAN_VERSION}.tgz" oci://ghcr.io/${{ github.repository_owner }}/charts
================================================
FILE: .gitignore
================================================
# Secrets #
###########
*.pem
*.key
*_rsa
# Compiled source #
###################
*.com
*.class
*.dll
*.exe
*.o
*.so
*.pyc
# Packages #
############
# it's better to unpack these files and commit the raw source
# git has its own built in compression methods
*.7z
*.dmg
*.gz
*.iso
*.jar
*.rar
*.tar
*.zip
# Logs and databases #
######################
*.log
*.sqlite
pip-log.txt
# OS generated files #
######################
.DS_Store?
ehthumbs.db
Icon?
Thumbs.db
# IDE generated files #
#######################
.idea/
*.iml
atlassian-ide-plugin.xml
# Test Files #
##############
test/log
.coverage
.tox
nosetests.xml
# Package Managed Files #
#########################
bower_components/
vendor/
composer.lock
node_modules/
.npm/
venv/
.venv/
.venv2/
.venv3/
# temporary files #
###################
*.*swp
nohup.out
*.tmp
# Virtual machines #
####################
.vagrant/
# Pythonics #
#############
*.py[cod]
# Packages
*.egg
*.egg-info
dist
build
eggs
parts
var
sdist
develop-eggs
.installed.cfg
lib
lib64
# Translations
*.mo
# Mr Developer
.mr.developer.cfg
.project
.pydevproject
# Complexity
output/*.html
output/*/index.html
# Sphinx
docs/_build
.scratch.md
conf/.config/keybase/
# Pipenv
Pipfile*
# backup files
*.backup
*.notworking
internal/types-not-used/
demo.yaml
.idea
cmd/netassert/cli/netassert
cmd/netassert/cli/results.tap
internal/logger/*.old
cmd/netassert/cli/cli
# Terraform
.terraform
*.tfstate
*.tfstate.*
crash.log
crash.*.log
*.tfvars
*.tfvars.json
override.tf
override.tf.json
*_override.tf
*_override.tf.json
.terraformrc
terraform.rc
*.lock.hcl*
# Kubeconfig
*.kubeconfig
# CLI
/cmd/netassert/cli/*.sh
abc
netassert-*-*-kubeconfig
bin
results.tap
================================================
FILE: .goreleaser.yaml
================================================
builds:
- id: netassert
env:
- CGO_ENABLED=0
ldflags:
- -s
- -w
- -X main.version={{.Tag}}
- -X main.gitHash={{.FullCommit}}
- -X main.buildDate={{.Date}}
goos:
- linux
- darwin
- windows
goarch:
- amd64
- arm
- arm64
goarm:
- 6
- 7
main: ./cmd/netassert/cli/
binary: netassert
archives:
- id: netassert
name_template: "{{ .ProjectName }}_{{ .Tag }}_{{ .Os }}_{{ .Arch }}{{ if .Arm }}v{{ .Arm }}{{ end }}"
format: tar.gz
format_overrides:
- goos: windows
format: zip
files:
- LICENSE
wrap_in_directory: false
checksum:
algorithm: sha256
name_template: 'checksums-sha256.txt'
changelog:
sort: asc
sboms:
- id: archive
artifacts: archive
- id: source
artifacts: source
================================================
FILE: .hadolint.yaml
================================================
---
ignored:
- DL3018 # Pin versions in apk add.
- DL3022 # COPY --from alias
================================================
FILE: .yamllint.yaml
================================================
---
extends: default
ignore: |
test/bin/
test/asset/
rules:
comments:
min-spaces-from-content: 1
line-length:
max: 120
truthy:
check-keys: false
================================================
FILE: CHANGELOG.md
================================================
# Changelog
All notable changes to this project will be documented in this file.
## Table of Contents
- [2.0.3](#203)
- [2.0.2](#202)
- [2.0.1](#201)
- [2.0.0](#200)
- [0.1.0](#010)
---
## `2.0.3`
- test with latest version of Kubernetes and update to Go 1.21
- update e2e tests with latest version of EKS and GKE and Calico CNI
## `2.0.2`
- integrate e2e tests with network policies
- fix a bug in udp testing
## `2.0.1`
- fix release naming
## `2.0.0`
- complete rewrite of the tool in Go, with unit and integration tests
- leverages the ephemeral container support in Kubernetes > v1.25
- test case(s) are written in YAML
- support for Pods, StatefulSets, DaemonSets and Deployments which are directly referred through their names in the test suites
- artifacts are available for download
## `0.1.0`
- initial release
- no artifacts available
================================================
FILE: CODE_OF_CONDUCT.md
================================================
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, gender identity and expression, level of experience,
education, socio-economic status, nationality, personal appearance, race,
religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior
may be reported by contacting Andrew Martin andy(at)control-plane.io.
All complaints will be reviewed and investigated and will result in a response that is deemed
necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of
an incident. Further details of specific enforcement policies may be
posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant](https://www.contributor-covenant.org), version 1.4,
available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
================================================
FILE: CONTRIBUTING.md
================================================
# Contributing to NetAssert
:+1::tada: First off, thanks for taking the time to contribute! :tada::+1:
`NetAssert` is Apache 2.0 licensed and accepts contributions via GitHub pull requests.
The following is a set of guidelines for contributing to `NetAssert`. We generally apply stricter rules because it is a security
tool, but don't let that discourage you from creating your PR: it can be fixed incrementally to meet the rules. Also feel
free to propose changes to this document in a pull request.
## Table Of Contents
- [Contributing to NetAssert](#contributing-to-netassert)
- [Table Of Contents](#table-of-contents)
- [Code of Conduct](#code-of-conduct)
- [I Don't Want To Read This Whole Thing I Just Have a Question!!!](#i-dont-want-to-read-this-whole-thing-i-just-have-a-question)
- [What Should I Know Before I Get Started?](#what-should-i-know-before-i-get-started)
- [How Can I Contribute?](#how-can-i-contribute)
- [Reporting Bugs](#reporting-bugs)
- [Before Submitting a Bug Report](#before-submitting-a-bug-report)
- [How Do I Submit a (Good) Bug Report?](#how-do-i-submit-a-good-bug-report)
- [Suggesting Enhancements](#suggesting-enhancements)
- [Before Submitting an Enhancement Suggestion](#before-submitting-an-enhancement-suggestion)
- [How Do I Submit A (Good) Enhancement Suggestion?](#how-do-i-submit-a-good-enhancement-suggestion)
- [Your First Code Contribution](#your-first-code-contribution)
- [Development](#development)
- [Pull Requests](#pull-requests)
- [Style Guides](#style-guides)
- [Git Commit Messages](#git-commit-messages)
- [General Style Guide](#general-style-guide)
- [GoLang Style Guide](#golang-style-guide)
- [Documentation Style Guide](#documentation-style-guide)
---
## Code of Conduct
This project and everyone participating in it are governed by the [Code of Conduct](CODE_OF_CONDUCT.md). By participating, you
are expected to uphold this code. Please report unacceptable behaviour to [andy@control-plane.io](mailto:andy@control-plane.io).
## I Don't Want To Read This Whole Thing I Just Have a Question!!!
We have an official message board with a detailed FAQ, where the community chimes in with helpful advice if you have questions.
We also have an issue template for questions [here](https://github.com/controlplaneio/netassert/issues/new).
## What Should I Know Before I Get Started?
NetAssert has three components:
- [NetAssert](https://github.com/controlplaneio/netassert): This is responsible for orchestrating the tests and is also known as `netassert-engine`
- [NetAssertv2-packet-sniffer](https://github.com/controlplaneio/netassertv2-packet-sniffer): This is the sniffer component that is utilised during a UDP test and is injected to the destination/target Pod as an ephemeral container
- [NetAssertv2-l4-client](https://github.com/controlplaneio/netassertv2-l4-client): This is the scanner component that is injected as the scanner ephemeral container onto the source Pod and is utilised during both TCP and UDP tests
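The ephemeral-container injection described above can be sketched with `kubectl debug`, which is the manual equivalent of what the engine automates. This is only an illustration: the Pod name, namespace, image reference, and container name below are placeholders, not NetAssert's actual defaults.

```shell
# Hypothetical sketch: attach a scanner container to a running source Pod.
# `kubectl debug` adds an ephemeral container to the Pod without restarting it.
# All names and the image tag here are illustrative placeholders.
CMD="kubectl debug pod/source-pod --namespace demo \
  --image=controlplane/netassertv2-l4-client:v1.0.11 \
  --container=netassert-scanner"
echo "$CMD"
```

NetAssert drives the same Kubernetes ephemeral-container API programmatically, which is why it requires Kubernetes v1.25 or later, where the feature is generally available.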
## How Can I Contribute?
### Reporting Bugs
This section guides you through submitting a bug report for `NetAssert`. Following these guidelines helps maintainers and the
community understand your report, reproduce the behaviour, and find related reports.
Before creating bug reports, please check [this list](#before-submitting-a-bug-report) as you might find out that you
don't need to create one. When you are creating a bug report, please [include as many details as possible](#how-do-i-submit-a-good-bug-report).
Fill out the issue template for bugs; the information it asks for helps us resolve issues faster.
> **Note:** If you find a **Closed** issue that seems like it is the same thing that you're experiencing, open a new issue
> and include a link to the original issue in the body of your new one.
#### Before Submitting a Bug Report
- **Perform a [cursory search](https://github.com/search?q=+is:issue+user:controlplaneio)** to see if the problem has already
been reported. If it has **and the issue is still open**, add a comment to the existing issue instead of opening a new
one
#### How Do I Submit a (Good) Bug Report?
Bugs are tracked as [GitHub issues](https://guides.github.com/features/issues/). Create an issue on that repository and
provide the following information by filling in the issue template [here](https://github.com/controlplaneio/netassert/issues/new).
Explain the problem and include additional details to help maintainers reproduce the problem:
- **Use a clear and descriptive title** for the issue to identify the problem
- **Describe the exact steps which reproduce the problem** in as much detail as possible. For example, start by explaining how you ran netassert, e.g. the exact command you used
- **Provide specific examples to demonstrate the steps**. Include links to files or GitHub projects, or copy/pasteable
snippets, which you use in those examples. If you're providing snippets in the issue, use [Markdown code blocks](https://help.github.com/articles/markdown-basics/#multiple-lines)
- **Describe the behaviour you observed after following the steps** and point out what exactly is the problem with that behaviour
- **Explain which behaviour you expected to see instead and why.**
Provide more context by answering these questions:
- **Did the problem start happening recently** (e.g. after updating to a new version of netassert) or was this always a problem?
- If the problem started happening recently, **can you reproduce the problem in an older version of netassert?** What's the
most recent version in which the problem doesn't happen? You can download older versions of netassert from
[the releases page](https://github.com/controlplaneio/netassert/releases)
- **Can you reliably reproduce the issue?** If not, provide details about how often the problem happens and under which conditions
it normally happens
- If the problem is related to parsing test files, **does the problem happen for all files and projects or only some?** Is there
anything else special about the files you are using? Please include them in your report; censor any sensitive information,
but ensure the issue still exists with the censored file
### Suggesting Enhancements
This section guides you through submitting an enhancement suggestion for netassert, including completely new features and minor
improvements to existing functionality. Following these guidelines helps maintainers and the community understand your suggestion
and find related suggestions.
Before creating enhancement suggestions, please check [this list](#before-submitting-an-enhancement-suggestion) as you might
find out that you don't need to create one. When you are creating an enhancement suggestion, please
[include as many details as possible](#how-do-i-submit-a-good-enhancement-suggestion). Fill in the feature request
template, including the steps that you imagine you would take if the feature you're requesting existed.
#### Before Submitting an Enhancement Suggestion
- **Perform a [cursory search](https://github.com/search?q=+is:issue+user:controlplaneio)** to see if the enhancement has
already been suggested. If it has, add a comment to the existing issue instead of opening a new one
#### How Do I Submit A (Good) Enhancement Suggestion?
Enhancement suggestions are tracked as [GitHub issues](https://guides.github.com/features/issues/). Create an issue on this
repository and provide the following information:
- **Use a clear and descriptive title** for the issue to identify the suggestion
- **Provide a step-by-step description of the suggested enhancement** in as much detail as possible
- **Provide specific examples to demonstrate the steps**. Include copy/pasteable snippets which you use in those examples,
as [Markdown code blocks](https://help.github.com/articles/markdown-basics/#multiple-lines)
- **Describe the current behaviour** and **explain which behaviour you expected to see instead** and why
- **Explain why this enhancement would be useful** to most netassert users and isn't something that can or should be implemented
as a separate community project
- **List some other tools where this enhancement exists.**
- **Specify which version of netassert you're using.** You can get the exact version by running `netassert version` in your terminal
- **Specify the name and version of the OS you're using.**
### Your First Code Contribution
Unsure where to begin contributing to `netassert`? You can start by looking through these `Good First Issue` and `Help Wanted`
issues:
- [Good First Issue issues][good_first_issue] - issues which should only require a few lines of code, and a test or two
- [Help wanted issues][help_wanted] - issues which should be a bit more involved than `Good First Issue` issues
Both issue lists are sorted by total number of comments. While not perfect, the number of comments is a reasonable proxy for
the impact a given change will have.
#### Development
Download the latest version of [just](https://github.com/casey/just/releases). To build the project, run `just build`; the resulting binary will be in `cmd/netassert/cli/netassert`. To run the `unit` tests, use `just test`. A separate `README.md` in the `e2e` folder at the root of this project covers `end-to-end` testing.
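As a rough sketch, `just build` is assumed to wrap a plain `go build` with the ldflags declared in `.goreleaser.yaml` (`-s -w`, plus `main.version` and `main.gitHash` injection); the version string below is illustrative, not an actual release tag.

```shell
# Sketch (assumption): the go build invocation `just build` is believed to wrap,
# mirroring the ldflags in .goreleaser.yaml. VERSION is a dev placeholder.
VERSION="v0.0.0-dev"
GIT_HASH="$(git rev-parse HEAD 2> /dev/null || echo unknown)"
BUILD_CMD="go build -ldflags \"-s -w -X main.version=${VERSION} -X main.gitHash=${GIT_HASH}\" -o cmd/netassert/cli/netassert ./cmd/netassert/cli/"
echo "$BUILD_CMD"
```

Check the `justfile` at the repository root for the authoritative recipe definitions.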
### Pull Requests
The process described here has several goals:
- Maintain the quality of `netassert`
- Fix problems that are important to users
- Engage the community in working toward the best possible netassert
- Enable a sustainable system for netassert's maintainers to review contributions
Please follow these steps to have your contribution considered by the maintainers:
<!-- markdownlint-disable no-inline-html -->
1. Follow all instructions in the template
2. Follow the [style guides](#style-guides)
3. After you submit your pull request, verify that all [status checks](https://help.github.com/articles/about-status-checks/)
are passing
<details>
<summary>What if the status checks are failing?</summary>
If a status check is failing, and you believe that the failure is unrelated to your change, please leave a comment on
the pull request explaining why you believe the failure is unrelated. A maintainer will re-run the status check for
you. If we conclude that the failure was a false positive, then we will open an issue to track that problem with our
status check suite.
</details>
<!-- markdownlint-enable no-inline-html -->
While the prerequisites above must be satisfied prior to having your pull request reviewed, the reviewer(s) may ask you to
complete additional tests or make other changes before your pull request can ultimately be accepted.
## Style Guides
### Git Commit Messages
- It's strongly preferred that you [GPG-sign][commit_signing] your commits if you can
- Follow [Conventional Commits](https://www.conventionalcommits.org)
- Use the present tense ("add feature" not "added feature")
- Use the imperative mood ("move cursor to..." not "moves cursor to...")
- Limit the first line to 72 characters or less
- Reference issues and pull requests liberally after the first line
### General Style Guide
Look at installing an `.editorconfig` plugin or configure your editor to match the `.editorconfig` file in the root of the
repository.
### GoLang Style Guide
All Go code is linted with [golangci-lint](https://golangci-lint.run/).
For formatting, rely on `gofmt` to handle styling.
### Documentation Style Guide
All markdown code is linted with [markdownlint-cli](https://github.com/igorshubovych/markdownlint-cli).
[good_first_issue]:https://github.com/controlplaneio/netassert/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc+label%3A%22good+first+issue%22+sort%3Acomments-desc
[help_wanted]: https://github.com/controlplaneio/netassert/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc+label%3A%22help+wanted%22
[commit_signing]: https://docs.github.com/en/free-pro-team@latest/github/authenticating-to-github/managing-commit-signature-verification
================================================
FILE: Dockerfile
================================================
FROM golang:1.25-alpine AS builder
ARG VERSION
ARG SCANNER_IMG_VERSION
ARG SNIFFER_IMG_VERSION
COPY . /build
WORKDIR /build
RUN go mod download && \
CGO_ENABLED=0 GO111MODULE=on go build -ldflags="-X 'main.appName=NetAssert' -X 'main.version=${VERSION}' -X 'main.scannerImgVersion=${SCANNER_IMG_VERSION}' -X 'main.snifferImgVersion=${SNIFFER_IMG_VERSION}'" -v -o /netassertv2 cmd/netassert/cli/*.go && \
ls -ltr /netassertv2
FROM gcr.io/distroless/base:nonroot
COPY --from=builder /netassertv2 /usr/bin/netassertv2
ENTRYPOINT [ "/usr/bin/netassertv2" ]
================================================
FILE: LICENSE
================================================
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2017 control-plane.io
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
================================================
FILE: README.md
================================================
# Netassert
[![Testing Workflow][testing_workflow_badge]][testing_workflow_badge]
[![Release Workflow][release_workflow_badge]][release_workflow_badge]
`NetAssert` is a command-line tool that enables you to check the network connectivity between Kubernetes objects such as Pods, Deployments, DaemonSets, and StatefulSets, as well as test their connectivity to remote hosts or IP addresses. `NetAssert` v2 is a rewrite of the original `NetAssert` tool in Go that uses the ephemeral container support in Kubernetes to verify network connectivity. `NetAssert` tests are defined in YAML. `NetAssert` **currently supports the TCP and UDP protocols**:
- To perform a TCP test, only a [`scanner`](https://github.com/controlplaneio/netassertv2-l4-client) container is used. This container requires neither privileges nor any Linux capabilities.
- To run a UDP test, a [`sniffer`](https://github.com/controlplaneio/netassertv2-packet-sniffer) ephemeral container is injected into the target Pod, which requires the `cap_net_raw` capability to read data from the network interface. During UDP testing, `NetAssert` therefore runs both the `scanner` and the `sniffer` container images, injected as ephemeral containers into the running Pods.
The [`sniffer`](https://github.com/controlplaneio/netassertv2-packet-sniffer) and [`scanner`](https://github.com/controlplaneio/netassertv2-l4-client) container images can be downloaded from:
- `docker.io/controlplane/netassertv2-l4-client:latest`
- Used for both TCP and UDP testing and acts as a Layer 4 (TCP/UDP) client
- Requires neither privileges nor any Linux capabilities.
- `docker.io/controlplane/netassertv2-packet-sniffer:latest`
- Used for UDP testing only; injected at the destination to capture packets and search for a specific string in the payload
- Requires the `cap_net_raw` capability to read data from the network interface
`NetAssert` utilises the above containers during a test and configures them using *environment variables*. The list of environment variables each image understands can be found [here](https://github.com/controlplaneio/netassertv2-packet-sniffer) and [here](https://github.com/controlplaneio/netassertv2-l4-client). It is possible to override the `sniffer` and `scanner` images from the command line during a run, so you can also bring your own container image(s) as long as they support the same environment variables.
<img src="./img/demo.gif">
## Installation
- Please download the latest stable version of `NetAssert` from the [releases](https://github.com/controlplaneio/netassert/releases) page. The binary is available for Linux, macOS and Windows.
- If you are on Unix/Linux, you can also use the [download.sh](./download.sh) script to download the latest version of `NetAssert` into the current directory:
```bash
curl -sL https://raw.githubusercontent.com/controlplaneio/netassert/master/download.sh | bash
```
## Test specification
`NetAssert` v2 tests are written in YAML. A test file is a YAML document containing a list of `NetAssert` tests. Each test supports the following keys:
- **name**: a scalar representing the name of the connection
- **type**: a scalar representing the type of connection, only "k8s" is supported at this time
- **protocol**: a scalar representing the protocol used for the connection, which must be "tcp" or "udp"
- **targetPort**: an integer scalar representing the target port used by the connection
- **timeoutSeconds**: an integer scalar representing the timeout for the connection in seconds
- **attempts**: an integer scalar representing the number of connection attempts for the test
- **exitCode**: an integer scalar representing the expected exit code from the ephemeral/debug container(s)
- **src**: a mapping representing the source Kubernetes resource, which has the following keys:
- **k8sResource**: a mapping representing a Kubernetes resource with the following keys:
- **kind**: a scalar representing the kind of the Kubernetes resource, which can be `deployment`, `statefulset`, `daemonset` or `pod`
- **name**: a scalar representing the name of the Kubernetes resource
- **namespace**: a scalar representing the namespace of the Kubernetes resource
- **dst**: a mapping representing the destination Kubernetes resource or host, **which can have only one of the following keys**, i.e. both `k8sResource` and `host` **are not supported at the same time**:
- **k8sResource**: a mapping representing a Kubernetes resource with the following keys:
- **kind**: a scalar representing the kind of the Kubernetes resource, which can be `deployment`, `statefulset`, `daemonset` or `pod`
- **name**: a scalar representing the name of the Kubernetes resource
- **namespace**: a scalar representing the namespace of the Kubernetes resource
- **host**: a mapping representing a host/node with the following key:
- **name**: a scalar representing the name or IP address of the host/node. (Note: a `host` destination is only allowed when the protocol is "tcp"; UDP tests require a `k8sResource` destination)
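The destination rules above can be sketched as a small validation function. This is an illustrative sketch, not NetAssert's actual implementation; the type and function names (`Dst`, `validateDst`, and so on) are invented for the example:

```go
package main

import (
	"errors"
	"fmt"
)

// K8sResource identifies a Kubernetes workload by kind, name and namespace.
type K8sResource struct {
	Kind, Name, Namespace string
}

// Host identifies a remote host or IP address by name.
type Host struct {
	Name string
}

// Dst is the destination of a test: exactly one of K8sResource or Host.
type Dst struct {
	K8sResource *K8sResource
	Host        *Host
}

// validateDst enforces the rules described above: dst must carry exactly
// one of k8sResource or host, and host destinations are TCP-only.
func validateDst(protocol string, dst Dst) error {
	if dst.K8sResource != nil && dst.Host != nil {
		return errors.New("dst: k8sResource and host are mutually exclusive")
	}
	if dst.K8sResource == nil && dst.Host == nil {
		return errors.New("dst: one of k8sResource or host is required")
	}
	if dst.Host != nil && protocol != "tcp" {
		return fmt.Errorf("dst: host destinations require protocol tcp, got %q", protocol)
	}
	return nil
}

func main() {
	// A UDP test pointed at a host destination is rejected.
	err := validateDst("udp", Dst{Host: &Host{Name: "control-plane.io"}})
	fmt.Println(err)
}
```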
<details><summary>This is an example of a test file that can be consumed by the NetAssert utility</summary>
```yaml
---
- name: busybox-deploy-to-echoserver-deploy
type: k8s
protocol: tcp
targetPort: 8080
timeoutSeconds: 67
attempts: 3
exitCode: 0
src:
k8sResource:
kind: deployment
name: busybox
namespace: busybox
dst:
k8sResource:
kind: deployment
name: echoserver
namespace: echoserver
#######
#######
- name: busybox-deploy-to-core-dns
type: k8s
protocol: udp
targetPort: 53
timeoutSeconds: 67
attempts: 3
exitCode: 0
src:
k8sResource:
kind: deployment
name: busybox
namespace: busybox
dst:
k8sResource:
kind: deployment
name: coredns
namespace: kube-system
######
######
- name: busybox-deploy-to-web-statefulset
type: k8s
protocol: tcp
targetPort: 80
timeoutSeconds: 67
attempts: 3
exitCode: 0
src:
k8sResource: # this is type endpoint
kind: deployment
name: busybox
namespace: busybox
dst:
k8sResource: ## this is type endpoint
kind: statefulset
name: web
namespace: web
###
###
- name: fluentd-daemonset-to-web-statefulset
type: k8s
protocol: tcp
targetPort: 80
timeoutSeconds: 67
attempts: 3
exitCode: 0
src:
k8sResource: # this is type endpoint
kind: daemonset
name: fluentd
namespace: fluentd
dst:
k8sResource: ## this is type endpoint
kind: statefulset
name: web
namespace: web
###
####
- name: busybox-deploy-to-control-plane-dot-io
type: k8s
protocol: tcp
targetPort: 80
timeoutSeconds: 67
attempts: 3
exitCode: 0
src:
k8sResource: # type endpoint
kind: deployment
name: busybox
namespace: busybox
dst:
host: # type host or node or machine
name: control-plane.io
###
###
- name: test-from-pod1-to-pod2
type: k8s
protocol: tcp
targetPort: 80
timeoutSeconds: 67
attempts: 3
exitCode: 0
src:
k8sResource: ##
kind: pod
name: pod1
namespace: pod1
dst:
k8sResource:
kind: pod
name: pod2
namespace: pod2
###
###
- name: busybox-deploy-to-fake-host
type: k8s
protocol: tcp
targetPort: 333
timeoutSeconds: 67
attempts: 3
exitCode: 1
src:
k8sResource: # type endpoint
kind: deployment
name: busybox
namespace: busybox
dst:
host: # type host or node or machine
name: 0.0.0.0
...
```
</details>
## Components
`NetAssert` has three main components:
- [NetAssert](https://github.com/controlplaneio/netassert): This is responsible for orchestrating the tests and is also known as `Netassert-Engine` or simply the `Engine`
- [NetAssertv2-packet-sniffer](https://github.com/controlplaneio/netassertv2-packet-sniffer): This is the sniffer component that is used during a UDP test and is injected into the destination/target Pod as an ephemeral container
- [NetAssertv2-l4-client](https://github.com/controlplaneio/netassertv2-l4-client): This is the scanner component that is injected into the source Pod as an ephemeral container and is used during both TCP and UDP tests
## Detailed steps/flow of tests
All the tests are read from a YAML file or a directory (step **1**) and the results are written in the [TAP format](https://testanything.org/) (step **5** for UDP, step **4** for TCP). The tests are performed in two different ways depending on whether a TCP or UDP connection is used.
### UDP test
<img src="./img/udp-test.svg">
- Validate the test spec and ensure that the `src` and `dst` fields are correct: for UDP tests, both of them must be of type `k8sResource`
- Find a running Pod called `dstPod` in the object defined by the `dst.k8sResource` field. Ensure that the Pod is in running state and has an IP address allocated by the CNI
- Find a running Pod called `srcPod` in the object defined by the `src.k8sResource` field. Ensure that the Pod is in running state and has an IP address allocated by the CNI
- Generate a random UUID, which will be used by both ephemeral containers
- Inject the `netassert-l4-client` as an ephemeral container into the `srcPod` (step **2**) and set the port and protocol according to the test specification. Set the target host to the IP address of the previously found `dstPod`, and the message to be sent over the UDP connection to the random UUID generated in the previous step. At the same time, inject the `netassertv2-packet-sniffer` (step **3**) as an ephemeral container into the `dstPod`, using the protocol, search string, number of matches and timeout defined in the test specification. The search-string environment variable is set to the UUID generated in the previous step, which is expected to be found in the data sent by the scanner when the connections succeed.
- Poll the status of the ephemeral containers (step **4**)
- Ensure that the `netassertv2-packet-sniffer` ephemeral sniffer container's exit status matches the one defined in the test specification
- Ensure that the `netassert-l4-client` exits with an exit status of zero. This should always be the case, as UDP is not a connection-oriented protocol.
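The UUID handshake at the heart of the UDP test can be illustrated with a short sketch: the engine generates a random token, the scanner sends it in its datagrams, and the sniffer counts how many captured payloads contain it. The helper names below (`newToken`, `matches`) are invented for illustration and are not NetAssert's actual code:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
	"strings"
)

// newToken returns a random hex token, standing in for the UUID that the
// engine passes to both ephemeral containers.
func newToken() string {
	b := make([]byte, 16)
	_, _ = rand.Read(b)
	return hex.EncodeToString(b)
}

// matches counts how many captured payloads contain the search string,
// mirroring the sniffer's search-string / number-of-matches check.
func matches(payloads []string, search string) int {
	n := 0
	for _, p := range payloads {
		if strings.Contains(p, search) {
			n++
		}
	}
	return n
}

func main() {
	token := newToken()
	// Pretend the scanner sent three datagrams and the sniffer captured
	// two of them plus some unrelated traffic.
	captured := []string{token, "unrelated traffic", token}
	fmt.Println(matches(captured, token)) // prints 2
}
```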
### TCP test
<img src="./img/tcp-test.svg">
- Validate the test spec and ensure that the `src` field is of type `k8sResource`
- Find a running Pod called `srcPod` in the object defined by the `src.k8sResource` field. Ensure that the Pod is in a running state and has an IP address
- Check if `dst` has `k8sResource` defined as a child object. If so, find a running Pod defined by the `dst.k8sResource`
- Inject the `netassert-l4-client` as an ephemeral container into the `srcPod` (step **2**). Configure the `netassert-l4-client` similarly to the UDP case. If the `dst` field is set to `host`, then use the host `name` field as the scanner target host
- Poll the status of the ephemeral container (step **3**)
- Ensure that the exit code of that container matches the `exitCode` field defined in the test specification
## Development
- You will need Go version 1.25.x or higher, plus the latest version of [just](https://github.com/casey/just/releases)
- To build the project, run `just build`; the resulting binary will be in `cmd/netassert/cli/netassert`
- To run the unit tests, use `just test`
- There is a separate [README.md](./e2e/README.md) that details `end-to-end` testing
## Quick testing
### Spinning up the environment
- Make sure you have installed [`kind`](https://kind.sigs.k8s.io/) and its prerequisites
- Make sure you have also installed [`just`](https://github.com/casey/just/releases)
- Download the `NetAssert` binary from the [release](https://github.com/controlplaneio/netassert/releases) page:
```bash
❯ VERSION="v2.1.3" # change it to the version you want to install
❯ OS_ARCH=linux_amd64 # change it to your OS/architecture (for reference check the NetAssert releases page)
❯ curl -L -o netassert.tar.gz https://github.com/controlplaneio/netassert/releases/download/${VERSION}/netassert_${VERSION}_${OS_ARCH}.tar.gz
❯ mkdir -p bin/netassert && tar -xzf netassert.tar.gz -C bin/netassert
```
- Alternatively, you can build `NetAssert` from source:
```bash
❯ just build
```
- You will also need a working Kubernetes cluster with ephemeral/debug container support and a CNI that supports Network Policies. You can spin one up quickly using the `justfile` included in the repo:
```bash
❯ just kind-down ; just kind-up
❯ just calico-apply
```
- Wait for all the nodes to become ready:
```bash
❯ kubectl get nodes -w
```
### Running the sample tests
- To use the sample tests, you first need to create the network policies and Kubernetes resources:
```bash
❯ just k8s-apply
kubectl apply -f ./e2e/manifests/workload.yaml
namespace/fluentd created
daemonset.apps/fluentd created
namespace/echoserver created
namespace/busybox created
deployment.apps/echoserver created
deployment.apps/busybox created
namespace/pod1 created
namespace/pod2 created
pod/pod2 created
pod/pod1 created
namespace/web created
statefulset.apps/web created
```
```bash
❯ just netpol-apply
kubectl apply -f ./e2e/manifests/networkpolicies.yaml
networkpolicy.networking.k8s.io/web created
```
- Wait for the workload to become ready (the workload pods are the ones created by `just k8s-apply` in the previous step):
```bash
❯ kubectl get pods -A
busybox busybox-6c85d76fdc-r8gtp 1/1 Running 0 76s
echoserver echoserver-64bd7c5dc6-ldwh9 1/1 Running 0 76s
fluentd fluentd-5pp9c 1/1 Running 0 76s
fluentd fluentd-8vvp9 1/1 Running 0 76s
fluentd fluentd-9jblb 1/1 Running 0 76s
fluentd fluentd-jnlql 1/1 Running 0 76s
kube-system calico-kube-controllers-565c89d6df-8mwk9 1/1 Running 0 117s
kube-system calico-node-2sqhw 1/1 Running 0 117s
kube-system calico-node-4sxpn 1/1 Running 0 117s
kube-system calico-node-5gtg7 1/1 Running 0 117s
kube-system calico-node-kxjq8 1/1 Running 0 117s
kube-system coredns-7d764666f9-74xgb 1/1 Running 0 2m29s
kube-system coredns-7d764666f9-jvnr4 1/1 Running 0 2m29s
kube-system etcd-packet-test-control-plane 1/1 Running 0 2m35s
kube-system kube-apiserver-packet-test-control-plane 1/1 Running 0 2m35s
kube-system kube-controller-manager-packet-test-control-plane 1/1 Running 0 2m35s
kube-system kube-proxy-4xjp2 1/1 Running 0 2m27s
kube-system kube-proxy-b28pw 1/1 Running 0 2m29s
kube-system kube-proxy-p9smj 1/1 Running 0 2m27s
kube-system kube-proxy-xb2wq 1/1 Running 0 2m27s
kube-system kube-scheduler-packet-test-control-plane 1/1 Running 0 2m35s
local-path-storage local-path-provisioner-67b8995b4b-jf8lc 1/1 Running 0 2m29s
pod1 pod1 1/1 Running 0 75s
pod2 pod2 1/1 Running 0 76s
web web-0 1/1 Running 0 75s
web web-1 1/1 Running 0 31s
```
- Run the `netassert` binary, pointing it at the test cases:
```bash
❯ bin/netassert run --input-file ./e2e/manifests/test-cases.yaml
❯ cat results.tap
TAP version 14
1..9
ok 1 - busybox-deploy-to-echoserver-deploy
ok 2 - busybox-deploy-to-echoserver-deploy-2
ok 3 - fluentd-deamonset-to-echoserver-deploy
ok 4 - busybox-deploy-to-web-statefulset
ok 5 - web-statefulset-to-busybox-deploy
ok 6 - fluentd-daemonset-to-web-statefulset
ok 7 - busybox-deploy-to-control-plane-dot-io
ok 8 - test-from-pod1-to-pod2
ok 9 - busybox-deploy-to-fake-host
```
- To see the results when a check fails, run:
```bash
❯ just netpol-rm-apply
kubectl delete -f ./e2e/manifests/networkpolicies.yaml
networkpolicy.networking.k8s.io "web" deleted
❯ bin/netassert run --input-file ./e2e/manifests/test-cases.yaml
❯ cat results.tap
TAP version 14
1..9
ok 1 - busybox-deploy-to-echoserver-deploy
ok 2 - busybox-deploy-to-echoserver-deploy-2
ok 3 - fluentd-deamonset-to-echoserver-deploy
ok 4 - busybox-deploy-to-web-statefulset
not ok 5 - web-statefulset-to-busybox-deploy
---
reason: ephemeral container netassertv2-client-aihlpxcys exit code for test web-statefulset-to-busybox-deploy
is 0 instead of 1
...
ok 6 - fluentd-daemonset-to-web-statefulset
ok 7 - busybox-deploy-to-control-plane-dot-io
ok 8 - test-from-pod1-to-pod2
ok 9 - busybox-deploy-to-fake-host
```
## Compatibility
NetAssert is designed for compatibility with Kubernetes versions that support ephemeral containers. We have thoroughly tested NetAssert with Kubernetes versions 1.25 to 1.35, confirming compatibility and performance stability.
For broader validation, our team has also executed comprehensive [end-to-end tests](./e2e/README.md) against various Kubernetes distributions and CNIs, as detailed below:
| Kubernetes Distribution | Supported Version | Container Network Interface (CNI) |
|-------------------------|-------------------|-----------------------------------|
| Amazon EKS | 1.34 and higher | AWS VPC CNI |
| Amazon EKS | 1.34 and higher | Calico (Version 3.26 or later) |
| Google GKE | 1.33 and higher | Google Cloud Platform VPC CNI |
| Google GKE | 1.33 and higher | Google Cloud Dataplane V2 |
## Checking for ephemeral container support
You can check for ephemeral container support using the following command:
```bash
❯ netassert ping
2023-03-27T11:25:28.421+0100 [INFO] [NetAssert-v2.0.0]: ✅ Successfully pinged /healthz endpoint of the Kubernetes server
2023-03-27T11:25:28.425+0100 [INFO] [NetAssert-v2.0.0]: ✅ Ephemeral containers are supported by the Kubernetes server
```
## Increasing logging verbosity
You can increase the logging level to `debug` by passing the `--log-level` argument:
```bash
❯ netassert run --input-file ./e2e/manifests/test-cases.yaml --log-level=debug
```
## RBAC Configuration
This tool can be run according to the Principle of Least Privilege (PoLP) by properly configuring the RBAC.
The list of required permissions can be found in the `netassert` ClusterRole `rbac/cluster-role.yaml`, which can be redefined as a Role for namespacing reasons if needed. This role can then be bound to a principal through either a RoleBinding or a ClusterRoleBinding, depending on whether the scope of the role is to be namespaced. The ClusterRoleBinding `rbac/cluster-rolebinding.yaml` is an example where the user `netassert-user` is assigned the role `netassert` using a cluster-wide binding called `netassert`.
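Assuming the rules in the shipped ClusterRole (the authoritative list lives in `rbac/cluster-role.yaml`), a namespaced Role and RoleBinding might look roughly like the sketch below. The rule list here is an assumption for illustration only; verify it against the shipped ClusterRole before use:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: netassert
  namespace: busybox   # hypothetical target namespace
rules:
  # Assumed: read workloads and manage ephemeral containers on their Pods.
  - apiGroups: [""]
    resources: ["pods", "pods/ephemeralcontainers"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: ["apps"]
    resources: ["deployments", "statefulsets", "daemonsets"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: netassert
  namespace: busybox
subjects:
  - kind: User
    name: netassert-user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: netassert
  apiGroup: rbac.authorization.k8s.io
```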
## Limitations
- When performing UDP scanning, the sniffer container [image](https://github.com/controlplaneio/netassertv2-packet-sniffer) needs the `cap_net_raw` capability so that it can bind to and read packets from the network interface. As a result, admission controllers or other security mechanisms must be configured to allow the `sniffer` image to run with this capability. Currently, the security context used by the ephemeral sniffer container looks like the following:
```yaml
...
...
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_RAW
runAsNonRoot: true
...
...
```
- Although they do not consume any resources, ephemeral containers that are injected as part of the test(s) by `NetAssert` will remain in the Pod specification
- Service meshes are not currently supported
## E2E Tests
- Please check this [README.md](./e2e/README.md)
[testing_workflow_badge]: https://github.com/controlplaneio/netassert/actions/workflows/build.yaml/badge.svg
[release_workflow_badge]: https://github.com/controlplaneio/netassert/actions/workflows/release.yaml/badge.svg
================================================
FILE: SECURITY.md
================================================
# Security Policy
## Our Security Address
Contact: `security@control-plane.io`
Encryption: `https://keybase.io/sublimino/pgp_keys.asc`
Disclosure: `Full`
================================================
FILE: cmd/netassert/cli/common.go
================================================
package main
import (
"errors"
"github.com/controlplaneio/netassert/v2/internal/data"
"github.com/controlplaneio/netassert/v2/internal/kubeops"
"github.com/hashicorp/go-hclog"
)
// loadTestCases - reads tests from a file or a directory
func loadTestCases(testCasesFile, testCasesDir string) (data.Tests, error) {
if testCasesFile == "" && testCasesDir == "" {
return nil, errors.New("either an input file or an input dir containing the tests must be provided using " +
"flags (--input-file or --input-dir)")
}
if testCasesFile != "" && testCasesDir != "" {
return nil, errors.New("input must be either a file or a directory but not both i.e use one of " +
"the flags --input-file or --input-dir")
}
var (
testCases data.Tests
err error
)
switch {
case testCasesDir != "":
testCases, err = data.ReadTestsFromDir(testCasesDir)
case testCasesFile != "":
testCases, err = data.ReadTestsFromFile(testCasesFile)
}
return testCases, err
}
// createService - creates a new kubernetes operations service
func createService(kubeconfigPath string, l hclog.Logger) (*kubeops.Service, error) {
// if the user has supplied a kubeConfig file location then
if kubeconfigPath != "" {
return kubeops.NewServiceFromKubeConfigFile(kubeconfigPath, l)
}
return kubeops.NewDefaultService(l)
}
================================================
FILE: cmd/netassert/cli/gen_result.go
================================================
package main
import (
"fmt"
"os"
"github.com/hashicorp/go-hclog"
"github.com/controlplaneio/netassert/v2/internal/data"
)
// genResult - prints results to stdout and writes them to a TAP file
func genResult(testCases data.Tests, tapFile string, lg hclog.Logger) error {
failedTestCases := 0
for _, v := range testCases {
if v.Pass {
lg.Info("✅ Test Result", "Name", v.Name, "Pass", v.Pass)
continue
}
lg.Info("❌ Test Result", "Name", v.Name, "Pass", v.Pass, "FailureReason", v.FailureReason)
failedTestCases++
}
tf, err := os.Create(tapFile)
if err != nil {
return fmt.Errorf("unable to create tap file %q: %w", tapFile, err)
}
if err := testCases.TAPResult(tf); err != nil {
return fmt.Errorf("unable to generate tap results: %w", err)
}
if err := tf.Close(); err != nil {
return fmt.Errorf("unable to close tap file %q: %w", tapFile, err)
}
lg.Info("✍ Wrote test results to a TAP file", "fileName", tapFile)
if failedTestCases > 0 {
return fmt.Errorf("%d test case(s) failed", failedTestCases)
}
return nil
}
================================================
FILE: cmd/netassert/cli/main.go
================================================
package main
import (
"os"
_ "go.uber.org/automaxprocs"
)
func main() {
if err := rootCmd.Execute(); err != nil {
os.Exit(1)
}
}
================================================
FILE: cmd/netassert/cli/ping.go
================================================
package main
import (
"context"
"fmt"
"os"
"time"
"github.com/hashicorp/go-hclog"
"github.com/spf13/cobra"
"github.com/controlplaneio/netassert/v2/internal/kubeops"
"github.com/controlplaneio/netassert/v2/internal/logger"
)
const (
apiServerHealthEndpoint = `/healthz` // health endpoint for the K8s server
)
type pingCmdConfig struct {
KubeConfig string
PingTimeout time.Duration
}
var pingCmdCfg = pingCmdConfig{}
var pingCmd = &cobra.Command{
Use: "ping",
Short: "pings the K8s API server over HTTP(S) and checks that it supports ephemeral containers.",
Long: "pings the K8s API server over HTTP(S) to see if it is alive and also checks if the server has support for " +
"ephemeral/debug containers.",
Run: func(cmd *cobra.Command, args []string) {
ctx, cancel := context.WithTimeout(context.Background(), pingCmdCfg.PingTimeout)
defer cancel()
lg := logger.NewHCLogger("info", fmt.Sprintf("%s-%s", appName, version), os.Stdout)
k8sSvc, err := createService(pingCmdCfg.KubeConfig, lg)
if err != nil {
lg.Error("Ping failed, unable to build K8s Client", "error", err)
os.Exit(1)
}
ping(ctx, lg, k8sSvc)
},
Version: rootCmd.Version,
}
// ping - pings the health endpoint of the K8s API server and checks that ephemeral containers are supported
func ping(ctx context.Context, lg hclog.Logger, k8sSvc *kubeops.Service) {
if err := k8sSvc.PingHealthEndpoint(ctx, apiServerHealthEndpoint); err != nil {
lg.Error("Ping failed", "error", err)
os.Exit(1)
}
lg.Info("✅ Successfully pinged " + apiServerHealthEndpoint + " endpoint of the Kubernetes server")
if err := k8sSvc.CheckEphemeralContainerSupport(ctx); err != nil {
lg.Error("❌ Ephemeral containers are not supported by the Kubernetes server",
"error", err)
os.Exit(1)
}
lg.Info("✅ Ephemeral containers are supported by the Kubernetes server")
}
func init() {
pingCmd.Flags().DurationVarP(&pingCmdCfg.PingTimeout, "timeout", "t", 60*time.Second,
"Timeout for the ping command")
pingCmd.Flags().StringVarP(&pingCmdCfg.KubeConfig, "kubeconfig", "k", "", "path to kubeconfig file")
}
================================================
FILE: cmd/netassert/cli/root.go
================================================
package main
import (
"fmt"
"github.com/spf13/cobra"
)
// these variables are overwritten at build time using ldflags
var (
version = "v2.0.0-dev" // netassert version
appName = "NetAssert" // name of the application
gitHash = "" // the git hash of the build
buildDate = "" // build date, will be injected by the build system
scannerImgVersion = "latest" // scanner container image version
snifferImgVersion = "latest" // sniffer container image version
)
var rootCmd = &cobra.Command{
Use: "netassert",
Short: "NetAssert is a command line utility to test network connectivity between Kubernetes objects",
Long: "NetAssert is a command line utility to test network connectivity between Kubernetes objects.\n" +
"It currently supports Deployment, Pod, StatefulSet and DaemonSet.\nYou can check the traffic flow between these objects or from these " +
"objects to a remote host or an IP address.\n\nBuilt by ControlPlane https://control-plane.io",
Version: fmt.Sprintf("\nBuilt by ControlPlane https://control-plane.io\n"+
"Version: %s\nCommit Hash: %s\nBuild Date: %s\n",
version, gitHash, buildDate),
}
func init() {
// add our subcommands
rootCmd.AddCommand(runCmd)
rootCmd.AddCommand(validateCmd)
rootCmd.AddCommand(versionCmd)
rootCmd.AddCommand(pingCmd)
}
================================================
FILE: cmd/netassert/cli/run.go
================================================
package main
import (
"context"
"fmt"
"os"
"os/signal"
"syscall"
"time"
"github.com/hashicorp/go-hclog"
"github.com/spf13/cobra"
"github.com/controlplaneio/netassert/v2/internal/engine"
"github.com/controlplaneio/netassert/v2/internal/logger"
)
// runCmdConfig - configuration for the run command
type runCmdConfig struct {
TapFile string
SuffixLength int
SnifferContainerImage string
SnifferContainerPrefix string
ScannerContainerImage string
ScannerContainerPrefix string
PauseInSeconds int
PacketCaptureInterface string
KubeConfig string
TestCasesFile string
TestCasesDir string
LogLevel string
}
// Initialize with default values
var runCmdCfg = runCmdConfig{
TapFile: "results.tap", // name of the default TAP file where the results will be written
SuffixLength: 9, // suffix length of the random string to be appended to the container name
SnifferContainerImage: fmt.Sprintf("%s:%s", "docker.io/controlplane/netassertv2-packet-sniffer", snifferImgVersion),
SnifferContainerPrefix: "netassertv2-sniffer",
ScannerContainerImage: fmt.Sprintf("%s:%s", "docker.io/controlplane/netassertv2-l4-client", scannerImgVersion),
ScannerContainerPrefix: "netassertv2-client",
PauseInSeconds: 1, // seconds to pause before each test case
PacketCaptureInterface: `eth0`, // the interface used by the sniffer image to capture traffic
LogLevel: "info", // log level
}
var runCmd = &cobra.Command{
Use: "run",
Short: "Run the NetAssert tests from the specified input file or input directory",
Long: "Run the program with the specified source file or source directory. Only one of the two " +
"flags (--input-file and --input-dir) can be used at a time. The --input-dir " +
"flag only reads the first level of the directory and does not recursively scan it.",
Run: func(cmd *cobra.Command, args []string) {
lg := logger.NewHCLogger(runCmdCfg.LogLevel, fmt.Sprintf("%s-%s", appName, version), os.Stdout)
if err := runTests(lg); err != nil {
lg.Error(" ❌ Failed to successfully run all the tests", "error", err)
os.Exit(1)
}
},
Version: rootCmd.Version,
}
// runTests - runs the NetAssert test(s)
func runTests(lg hclog.Logger) error {
testCases, err := loadTestCases(runCmdCfg.TestCasesFile, runCmdCfg.TestCasesDir)
if err != nil {
return fmt.Errorf("unable to load test cases: %w", err)
}
k8sSvc, err := createService(runCmdCfg.KubeConfig, lg)
if err != nil {
return fmt.Errorf("failed to build K8s client: %w", err)
}
ctx := context.Background()
ctx, cancel := signal.NotifyContext(ctx, os.Interrupt, syscall.SIGTERM, syscall.SIGQUIT)
defer cancel()
// ping the kubernetes cluster and check to see if
// it is alive and that it has support for ephemeral container(s)
ping(ctx, lg, k8sSvc)
// initialise our test runner
testRunner := engine.New(k8sSvc, lg)
// initialise our done signal
done := make(chan struct{})
// run the tests in a separate goroutine and signal completion via the done channel
go func() {
defer func() {
// notify the done channel once the test runner has finished
done <- struct{}{}
}()
// run the tests
testRunner.RunTests(
ctx, // context to use
testCases, // net assert test cases
runCmdCfg.SnifferContainerPrefix, // prefix used for the sniffer container name
runCmdCfg.SnifferContainerImage, // sniffer container image location
runCmdCfg.ScannerContainerPrefix, // scanner container prefix used in the container name
runCmdCfg.ScannerContainerImage, // scanner container image location
runCmdCfg.SuffixLength, // length of random string that will be appended to the snifferContainerPrefix and scannerContainerPrefix
time.Duration(runCmdCfg.PauseInSeconds)*time.Second, // pause duration between each test
runCmdCfg.PacketCaptureInterface, // the interface used by the sniffer image to capture traffic
)
}()
// Wait for the tests to finish or for the context to be canceled
select {
case <-done:
// all our tests have finished running
case <-ctx.Done():
lg.Info("Received signal from OS", "msg", ctx.Err())
// context has been cancelled, we wait for our test runner to finish
<-done
}
return genResult(testCases, runCmdCfg.TapFile, lg)
}
func init() {
// Bind flags to the runCmd
runCmd.Flags().StringVarP(&runCmdCfg.TapFile, "tap", "t", runCmdCfg.TapFile, "output tap file containing the tests results")
runCmd.Flags().IntVarP(&runCmdCfg.SuffixLength, "suffix-length", "s", runCmdCfg.SuffixLength, "length of the random suffix that will be appended to the scanner/sniffer containers")
runCmd.Flags().StringVarP(&runCmdCfg.SnifferContainerImage, "sniffer-image", "i", runCmdCfg.SnifferContainerImage, "container image to be used as sniffer")
runCmd.Flags().StringVarP(&runCmdCfg.SnifferContainerPrefix, "sniffer-prefix", "p", runCmdCfg.SnifferContainerPrefix, "prefix of the sniffer container")
runCmd.Flags().StringVarP(&runCmdCfg.ScannerContainerImage, "scanner-image", "c", runCmdCfg.ScannerContainerImage, "container image to be used as scanner")
runCmd.Flags().StringVarP(&runCmdCfg.ScannerContainerPrefix, "scanner-prefix", "x", runCmdCfg.ScannerContainerPrefix, "prefix of the scanner debug container name")
runCmd.Flags().IntVarP(&runCmdCfg.PauseInSeconds, "pause-sec", "P", runCmdCfg.PauseInSeconds, "number of seconds to pause before running each test case")
runCmd.Flags().StringVarP(&runCmdCfg.PacketCaptureInterface, "interface", "n", runCmdCfg.PacketCaptureInterface, "the network interface used by the sniffer container to capture packets")
runCmd.Flags().StringVarP(&runCmdCfg.TestCasesFile, "input-file", "f", runCmdCfg.TestCasesFile, "input test file that contains a list of netassert tests")
runCmd.Flags().StringVarP(&runCmdCfg.TestCasesDir, "input-dir", "d", runCmdCfg.TestCasesDir, "input test directory that contains a list of netassert test files")
runCmd.Flags().StringVarP(&runCmdCfg.KubeConfig, "kubeconfig", "k", runCmdCfg.KubeConfig, "path to kubeconfig file")
runCmd.Flags().StringVarP(&runCmdCfg.LogLevel, "log-level", "l", "info", "set log level (info, debug or trace)")
}
================================================
FILE: cmd/netassert/cli/validate.go
================================================
package main
import (
"fmt"
"os"
"github.com/spf13/cobra"
)
// validateCmdConfig - config for validate sub-command
type validateCmdConfig struct {
TestCasesFile string
TestCasesDir string
}
var (
validateCmdCfg validateCmdConfig // config for validate sub-command that will be used in the package
validateCmd = &cobra.Command{
Use: "validate",
Short: "verify the syntax and semantic correctness of netassert test(s) in a test file or folder. Only one of the " +
"two flags (--input-file and --input-dir) can be used at a time.",
Run: validateTestCases,
Version: rootCmd.Version,
}
)
// validateTestCases - validates test cases from file or directory
func validateTestCases(cmd *cobra.Command, args []string) {
_, err := loadTestCases(validateCmdCfg.TestCasesFile, validateCmdCfg.TestCasesDir)
if err != nil {
fmt.Println("❌ Validation of test cases failed:", err)
os.Exit(1)
}
fmt.Println("✅ All test cases are syntactically and semantically valid")
}
func init() {
validateCmd.Flags().StringVarP(&validateCmdCfg.TestCasesFile, "input-file", "f", "", "input test file that contains a list of netassert tests")
validateCmd.Flags().StringVarP(&validateCmdCfg.TestCasesDir, "input-dir", "d", "", "input test directory that contains a list of netassert test files")
}
================================================
FILE: cmd/netassert/cli/version.go
================================================
package main
import (
"fmt"
"os"
"github.com/spf13/cobra"
"github.com/controlplaneio/netassert/v2/internal/logger"
)
var versionCmd = &cobra.Command{
Use: "version",
Short: "Prints the version and other details associated with the program",
SilenceUsage: false,
Run: versionDetails,
}
// versionDetails - prints build information to stdout
func versionDetails(cmd *cobra.Command, args []string) {
root := cmd.Root()
root.SetArgs([]string{"--version"})
if err := root.Execute(); err != nil {
lg := logger.NewHCLogger(runCmdCfg.LogLevel, fmt.Sprintf("%s-%s", appName, version), os.Stdout)
lg.Error("Failed to get version details", "error", err)
os.Exit(1)
}
}
================================================
FILE: download.sh
================================================
#!/bin/bash
set -euo pipefail
USER='controlplaneio'
REPO='netassert'
BINARY='netassert'
PWD=$(pwd)
LATEST=$(curl --silent "https://api.github.com/repos/$USER/$REPO/releases/latest" | grep '"tag_name":' | cut -d'"' -f4)
echo "Found latest release: $LATEST"
OS=$(uname -s | tr '[:upper:]' '[:lower:]')
echo "OS: $OS"
ARCH=$(uname -m)
if [[ "$ARCH" == "x86_64" ]]; then
ARCH="amd64"
fi
echo "ARCH: $ARCH"
FILE="${BINARY}_${LATEST}_${OS}_${ARCH}.tar.gz"
DOWNLOAD_URL="https://github.com/controlplaneio/${REPO}/releases/download/${LATEST}/${FILE}"
CHECKSUM_URL="https://github.com/controlplaneio/${REPO}/releases/download/${LATEST}/checksums-sha256.txt"
echo "[+] Downloading latest checksums from ${CHECKSUM_URL}"
if ! curl -sfLo "checksums.txt" "$CHECKSUM_URL"; then
echo "Failed to download checksums"
exit 1
fi
echo "[+] Downloading latest tarball from ${DOWNLOAD_URL}"
if ! curl -sfLO "$DOWNLOAD_URL"; then
echo "Failed to download tarball"
exit 1
fi
echo "[+] Verifying checksums"
if ! sha256sum -c checksums.txt --ignore-missing; then
echo "[+] Checksum verification failed"
exit 1
fi
echo "[+] Downloaded file verified successfully"
## extract the tarball
echo "[+] Extracting the downloaded tarball in directory ${PWD}"
if ! tar -xzf "${FILE}"; then
echo "[+] Failed to extract the downloaded tarball"
exit 1
fi
echo "[+] Tarball extracted successfully"
if [[ ! -f "${BINARY}" ]]; then
echo "[+] ${BINARY} file was not found in the current path"
exit 1
fi
echo "[+] You can now run netassert from ${PWD}/${BINARY}"
================================================
FILE: e2e/README.md
================================================
# End-to-End (E2E) Tests
The E2E tests use `terraform` and `terratest` to spin up GKE and EKS clusters, plus a local Kind cluster. There are five tests in total:
- AWS EKS 1.34 with AWS VPC CNI
- Test AWS EKS 1.34 with the default AWS VPC CNI, which supports Network Policies
- AWS EKS 1.34 with Calico CNI
- Test AWS EKS 1.34 with Calico CNI v3.35.0. As part of the test, the AWS VPC CNI is uninstalled and Calico is installed
- GCP GKE 1.33 with GCP VPC CNI
- GKE 1.33 with GCP VPC CNI
- GCP GKE 1.33 with GCP Dataplane V2
- GKE 1.33 with Dataplane V2, which is based on Cilium
- Kind k8s 1.35 with Calico CNI
- Kind with Calico CNI
*Each test is skipped if the corresponding environment variable is not set.*
| Test | Environment variable |
|-------------------------------------|----------------------|
| AWS EKS with AWS VPC CNI | EKS_VPC_E2E_TESTS |
| AWS EKS with Calico CNI | EKS_CALICO_E2E_TESTS |
| GCP GKE with GCP VPC CNI | GKE_VPC_E2E_TESTS |
| GCP GKE with Dataplane V2 | GKE_DPV2_E2E_TESTS |
| Kind with Calico CNI | KIND_E2E_TESTS |
## Running tests
- Make sure you have installed `kubectl` and the `AWS CLI v2`
- Make sure you have also installed `gke-gcloud-auth-plugin` for kubectl by following this [link](https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke)
For AWS EKS tests, make sure you have valid AWS credentials:
```bash
❯ aws sso login
❯ export EKS_VPC_E2E_TESTS=yes
❯ export EKS_CALICO_E2E_TESTS=yes
```
For GCP GKE tests, make sure you export the GCP project name and set it as the default project:
```bash
❯ export GOOGLE_PROJECT=<your_project>
❯ gcloud config set project <your_project>
❯ gcloud auth application-default login
❯ export GKE_VPC_E2E_TESTS=yes
❯ export GKE_DPV2_E2E_TESTS=yes
```
Run the tests using the following command:
```bash
# from the root of the project
❯ go test -timeout=91m -v ./e2e/... -count=1
```
Tests can be configured by updating values in [end-to-end test helpers](./helpers/)
## Azure AKS Integration
Currently, end-to-end testing of NetAssert with Azure Kubernetes Service (AKS) is not scheduled. However, we do not foresee any architectural reasons that would prevent successful integration.
### Network Policy Support
There are [three primary approaches](https://learn.microsoft.com/en-us/azure/aks/use-network-policies) for supporting network policies in AKS.
If the requirement is limited to **Linux nodes only** (excluding Windows), the recommended solution is [Azure CNI powered by Cilium](https://learn.microsoft.com/en-us/azure/aks/azure-cni-powered-by-cilium).
### Deployment via Terraform
For deploying a testing cluster, the Container Network Interface (CNI) configuration appears straightforward. It can likely be handled via a single parameter in the `azurerm` provider, specifically the [`network_policy` argument](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster#network_policy-1).
*Note: This Terraform configuration has yet to be validated.*
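As an unvalidated sketch of that argument in use (resource names, region and node sizing below are purely illustrative, and `azurerm_resource_group.example` is an assumed pre-existing resource):

```hcl
# Unvalidated sketch: an AKS cluster with network policy enforcement enabled.
resource "azurerm_kubernetes_cluster" "netassert_e2e" {
  name                = "netassert-e2e"   # illustrative name
  location            = "uksouth"         # illustrative region
  resource_group_name = azurerm_resource_group.example.name
  dns_prefix          = "netassert-e2e"

  default_node_pool {
    name       = "default"
    node_count = 2
    vm_size    = "Standard_D2s_v3"
  }

  identity {
    type = "SystemAssigned"
  }

  network_profile {
    network_plugin = "azure"
    # The `network_policy` argument referenced above; "calico" and "azure"
    # are among the documented values.
    network_policy = "calico"
  }
}
```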
================================================
FILE: e2e/clusters/aws-eks-terraform-module/eks.tf
================================================
provider "aws" {
region = var.region
}
data "aws_availability_zones" "available" {}
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "~> 20.0"
#version = "~> 19"
cluster_name = var.cluster_name
cluster_version = var.cluster_version
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets
cluster_endpoint_public_access = true
enable_cluster_creator_admin_permissions = true
cluster_addons = {
vpc-cni = {
before_compute = true
most_recent = true
configuration_values = jsonencode({
#resolve_conflicts_on_update = "OVERWRITE"
enableNetworkPolicy = var.enable_vpc_network_policies ? "true" : "false"
})
}
}
eks_managed_node_groups = {
example = {
name = "${var.node_group_name}1"
# Starting on 1.30, AL2023 is the default AMI type for EKS managed node groups
ami_type = "AL2023_x86_64_STANDARD"
instance_types = ["t3.medium"]
min_size = 0
max_size = 3
desired_size = var.desired_size
}
}
# Extend node-to-node security group rules
node_security_group_additional_rules = {
ingress_self_all = {
description = "Node to node all ports/protocols"
protocol = "-1"
from_port = 0
to_port = 0
type = "ingress"
self = true
}
egress_all = {
description = "Node all egress"
protocol = "-1"
from_port = 0
to_port = 0
type = "egress"
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}
}
}
resource "null_resource" "generate_kubeconfig" {
depends_on = [module.eks]
provisioner "local-exec" {
command = "aws eks update-kubeconfig --region ${var.region} --name ${module.eks.cluster_name} --kubeconfig ${var.kubeconfig_file}"
}
}
================================================
FILE: e2e/clusters/aws-eks-terraform-module/outputs.tf
================================================
output "cluster_endpoint" {
description = "Endpoint for EKS control plane"
value = module.eks.cluster_endpoint
}
output "cluster_security_group_id" {
description = "Security group ids attached to the cluster control plane"
value = module.eks.cluster_security_group_id
}
output "region" {
description = "AWS region"
value = var.region
}
output "cluster_name" {
description = "Kubernetes Cluster Name"
value = module.eks.cluster_name
}
================================================
FILE: e2e/clusters/aws-eks-terraform-module/variables.tf
================================================
variable "region" {
description = "AWS region"
type = string
}
variable "cluster_version" {
description = "The AWS EKS cluster version"
type = string
}
variable "cluster_name" {
type = string
description = "name of the cluster and VPC"
}
variable "kubeconfig_file" {
type = string
description = "name of the file that contains the kubeconfig information"
default = ".kubeconfig"
}
variable "desired_size" {
type = number
description = "desired size of the worker node pool"
default = 0
}
variable "node_group_name" {
type = string
description = "prefix of the node group"
default = "group"
}
variable "enable_vpc_network_policies" {
type = bool
description = "enable or disable vpc network policies"
}
================================================
FILE: e2e/clusters/aws-eks-terraform-module/vpc.tf
================================================
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
// set VPC name same as the EKS cluster name
name = var.cluster_name
version = "~> 5.0"
cidr = "10.0.0.0/16"
azs = slice(data.aws_availability_zones.available.names, 0, 3)
private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
public_subnets = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]
enable_nat_gateway = true
single_nat_gateway = true
enable_dns_hostnames = true
enable_dns_support = true
public_subnet_tags = {
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
"kubernetes.io/role/elb" = 1
}
private_subnet_tags = {
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
"kubernetes.io/role/internal-elb" = 1
}
tags = {
owner = "prefix"
environment = "test"
}
}
================================================
FILE: e2e/clusters/eks-with-calico-cni/calico-3.26.4.yaml
================================================
---
# Source: calico/templates/calico-kube-controllers.yaml
# This manifest creates a Pod Disruption Budget for Controller to allow K8s Cluster Autoscaler to evict
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: calico-kube-controllers
namespace: kube-system
labels:
k8s-app: calico-kube-controllers
spec:
maxUnavailable: 1
selector:
matchLabels:
k8s-app: calico-kube-controllers
---
# Source: calico/templates/calico-kube-controllers.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-kube-controllers
namespace: kube-system
---
# Source: calico/templates/calico-node.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-node
namespace: kube-system
---
# Source: calico/templates/calico-node.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-cni-plugin
namespace: kube-system
---
# Source: calico/templates/calico-config.yaml
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
name: calico-config
namespace: kube-system
data:
# Typha is disabled.
typha_service_name: "none"
# Configure the backend to use.
calico_backend: "bird"
# Configure the MTU to use for workload interfaces and tunnels.
# By default, MTU is auto-detected, and explicitly setting this field should not be required.
# You can override auto-detection by providing a non-zero value.
veth_mtu: "0"
# The CNI network configuration to install on each node. The special
# values in this config will be automatically populated.
cni_network_config: |-
{
"name": "k8s-pod-network",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "calico",
"log_level": "info",
"log_file_path": "/var/log/calico/cni/cni.log",
"datastore_type": "kubernetes",
"nodename": "__KUBERNETES_NODE_NAME__",
"mtu": __CNI_MTU__,
"ipam": {
"type": "calico-ipam"
},
"policy": {
"type": "k8s"
},
"kubernetes": {
"kubeconfig": "__KUBECONFIG_FILEPATH__"
}
},
{
"type": "portmap",
"snat": true,
"capabilities": {"portMappings": true}
},
{
"type": "bandwidth",
"capabilities": {"bandwidth": true}
}
]
}
---
# Source: calico/templates/kdd-crds.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: bgpconfigurations.crd.projectcalico.org
spec:
group: crd.projectcalico.org
names:
kind: BGPConfiguration
listKind: BGPConfigurationList
plural: bgpconfigurations
singular: bgpconfiguration
preserveUnknownFields: false
scope: Cluster
versions:
- name: v1
schema:
openAPIV3Schema:
description: BGPConfiguration contains the configuration for any BGP routing.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: BGPConfigurationSpec contains the values of the BGP configuration.
properties:
asNumber:
description: 'ASNumber is the default AS number used by a node. [Default:
64512]'
format: int32
type: integer
bindMode:
description: BindMode indicates whether to listen for BGP connections
on all addresses (None) or only on the node's canonical IP address
Node.Spec.BGP.IPvXAddress (NodeIP). Default behaviour is to listen
for BGP connections on all addresses.
type: string
communities:
description: Communities is a list of BGP community values and their
arbitrary names for tagging routes.
items:
description: Community contains standard or large community value
and its name.
properties:
name:
description: Name given to community value.
type: string
value:
description: Value must be of format `aa:nn` or `aa:nn:mm`.
For standard community use `aa:nn` format, where `aa` and
`nn` are 16 bit number. For large community use `aa:nn:mm`
format, where `aa`, `nn` and `mm` are 32 bit number. Where,
`aa` is an AS Number, `nn` and `mm` are per-AS identifier.
pattern: ^(\d+):(\d+)$|^(\d+):(\d+):(\d+)$
type: string
type: object
type: array
ignoredInterfaces:
description: IgnoredInterfaces indicates the network interfaces that
needs to be excluded when reading device routes.
items:
type: string
type: array
listenPort:
description: ListenPort is the port where BGP protocol should listen.
Defaults to 179
maximum: 65535
minimum: 1
type: integer
logSeverityScreen:
description: 'LogSeverityScreen is the log severity above which logs
are sent to the stdout. [Default: INFO]'
type: string
nodeMeshMaxRestartTime:
description: Time to allow for software restart for node-to-mesh peerings. When
specified, this is configured as the graceful restart timeout. When
not specified, the BIRD default of 120s is used. This field can
only be set on the default BGPConfiguration instance and requires
that NodeMesh is enabled
type: string
nodeMeshPassword:
description: Optional BGP password for full node-to-mesh peerings.
This field can only be set on the default BGPConfiguration instance
and requires that NodeMesh is enabled
properties:
secretKeyRef:
description: Selects a key of a secret in the node pod's namespace.
properties:
key:
description: The key of the secret to select from. Must be
a valid secret key.
type: string
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
optional:
description: Specify whether the Secret or its key must be
defined
type: boolean
required:
- key
type: object
type: object
nodeToNodeMeshEnabled:
description: 'NodeToNodeMeshEnabled sets whether full node to node
BGP mesh is enabled. [Default: true]'
type: boolean
prefixAdvertisements:
description: PrefixAdvertisements contains per-prefix advertisement
configuration.
items:
description: PrefixAdvertisement configures advertisement properties
for the specified CIDR.
properties:
cidr:
description: CIDR for which properties should be advertised.
type: string
communities:
description: Communities can be list of either community names
already defined in `Specs.Communities` or community value
of format `aa:nn` or `aa:nn:mm`. For standard community use
`aa:nn` format, where `aa` and `nn` are 16 bit number. For
large community use `aa:nn:mm` format, where `aa`, `nn` and
`mm` are 32 bit number. Where,`aa` is an AS Number, `nn` and
`mm` are per-AS identifier.
items:
type: string
type: array
type: object
type: array
serviceClusterIPs:
description: ServiceClusterIPs are the CIDR blocks from which service
cluster IPs are allocated. If specified, Calico will advertise these
blocks, as well as any cluster IPs within them.
items:
description: ServiceClusterIPBlock represents a single allowed ClusterIP
CIDR block.
properties:
cidr:
type: string
type: object
type: array
serviceExternalIPs:
description: ServiceExternalIPs are the CIDR blocks for Kubernetes
Service External IPs. Kubernetes Service ExternalIPs will only be
advertised if they are within one of these blocks.
items:
description: ServiceExternalIPBlock represents a single allowed
External IP CIDR block.
properties:
cidr:
type: string
type: object
type: array
serviceLoadBalancerIPs:
description: ServiceLoadBalancerIPs are the CIDR blocks for Kubernetes
Service LoadBalancer IPs. Kubernetes Service status.LoadBalancer.Ingress
IPs will only be advertised if they are within one of these blocks.
items:
description: ServiceLoadBalancerIPBlock represents a single allowed
LoadBalancer IP CIDR block.
properties:
cidr:
type: string
type: object
type: array
type: object
type: object
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
# Source: calico/templates/kdd-crds.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: (devel)
creationTimestamp: null
name: bgpfilters.crd.projectcalico.org
spec:
group: crd.projectcalico.org
names:
kind: BGPFilter
listKind: BGPFilterList
plural: bgpfilters
singular: bgpfilter
scope: Cluster
versions:
- name: v1
schema:
openAPIV3Schema:
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: BGPFilterSpec contains the IPv4 and IPv6 filter rules of
the BGP Filter.
properties:
exportV4:
description: The ordered set of IPv4 BGPFilter rules acting on exporting
routes to a peer.
items:
description: BGPFilterRuleV4 defines a BGP filter rule consisting
a single IPv4 CIDR block and a filter action for this CIDR.
properties:
action:
type: string
cidr:
type: string
matchOperator:
type: string
required:
- action
- cidr
- matchOperator
type: object
type: array
exportV6:
description: The ordered set of IPv6 BGPFilter rules acting on exporting
routes to a peer.
items:
description: BGPFilterRuleV6 defines a BGP filter rule consisting
a single IPv6 CIDR block and a filter action for this CIDR.
properties:
action:
type: string
cidr:
type: string
matchOperator:
type: string
required:
- action
- cidr
- matchOperator
type: object
type: array
importV4:
description: The ordered set of IPv4 BGPFilter rules acting on importing
routes from a peer.
items:
description: BGPFilterRuleV4 defines a BGP filter rule consisting
a single IPv4 CIDR block and a filter action for this CIDR.
properties:
action:
type: string
cidr:
type: string
matchOperator:
type: string
required:
- action
- cidr
- matchOperator
type: object
type: array
importV6:
description: The ordered set of IPv6 BGPFilter rules acting on importing
routes from a peer.
items:
                  description: BGPFilterRuleV6 defines a BGP filter rule consisting
                    of a single IPv6 CIDR block and a filter action for this CIDR.
properties:
action:
type: string
cidr:
type: string
matchOperator:
type: string
required:
- action
- cidr
- matchOperator
type: object
type: array
type: object
type: object
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
# Source: calico/templates/kdd-crds.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: bgppeers.crd.projectcalico.org
spec:
group: crd.projectcalico.org
names:
kind: BGPPeer
listKind: BGPPeerList
plural: bgppeers
singular: bgppeer
preserveUnknownFields: false
scope: Cluster
versions:
- name: v1
schema:
openAPIV3Schema:
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: BGPPeerSpec contains the specification for a BGPPeer resource.
properties:
asNumber:
description: The AS Number of the peer.
format: int32
type: integer
filters:
description: The ordered set of BGPFilters applied on this BGP peer.
items:
type: string
type: array
keepOriginalNextHop:
              description: Option to keep the original nexthop field when routes
                are sent to a BGP peer. Setting this to "true" configures the selected
                BGPPeer's node to use "next hop keep;" instead of the default "next
                hop self;" in the relevant branch of "bird.cfg".
type: boolean
maxRestartTime:
description: Time to allow for software restart. When specified,
this is configured as the graceful restart timeout. When not specified,
the BIRD default of 120s is used.
type: string
node:
description: The node name identifying the Calico node instance that
is targeted by this peer. If this is not set, and no nodeSelector
is specified, then this BGP peer selects all nodes in the cluster.
type: string
nodeSelector:
description: Selector for the nodes that should have this peering. When
this is set, the Node field must be empty.
type: string
numAllowedLocalASNumbers:
description: Maximum number of local AS numbers that are allowed in
the AS path for received routes. This removes BGP loop prevention
                and should only be used if absolutely necessary.
format: int32
type: integer
password:
description: Optional BGP password for the peerings generated by this
BGPPeer resource.
properties:
secretKeyRef:
description: Selects a key of a secret in the node pod's namespace.
properties:
key:
description: The key of the secret to select from. Must be
a valid secret key.
type: string
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
optional:
description: Specify whether the Secret or its key must be
defined
type: boolean
required:
- key
type: object
type: object
peerIP:
description: The IP address of the peer followed by an optional port
                number to peer with. If a port number is given, the format should be
                `[<IPv6>]:<port>` for IPv6 or `<IPv4>:<port>` for IPv4. If the optional
                port number is not set, and this peer IP and ASNumber belong to a
                calico/node with ListenPort set in BGPConfiguration, then that port
                is used to peer.
type: string
peerSelector:
description: Selector for the remote nodes to peer with. When this
is set, the PeerIP and ASNumber fields must be empty. For each
peering between the local node and selected remote nodes, we configure
an IPv4 peering if both ends have NodeBGPSpec.IPv4Address specified,
and an IPv6 peering if both ends have NodeBGPSpec.IPv6Address specified. The
remote AS number comes from the remote node's NodeBGPSpec.ASNumber,
or the global default if that is not set.
type: string
reachableBy:
description: Add an exact, i.e. /32, static route toward peer IP in
order to prevent route flapping. ReachableBy contains the address
                of the gateway through which the peer can be reached.
type: string
sourceAddress:
description: Specifies whether and how to configure a source address
for the peerings generated by this BGPPeer resource. Default value
"UseNodeIP" means to configure the node IP as the source address. "None"
means not to configure a source address.
type: string
ttlSecurity:
description: TTLSecurity enables the generalized TTL security mechanism
(GTSM) which protects against spoofed packets by ignoring received
packets with a smaller than expected TTL value. The provided value
is the number of hops (edges) between the peers.
type: integer
type: object
type: object
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
# Source: calico/templates/kdd-crds.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: blockaffinities.crd.projectcalico.org
spec:
group: crd.projectcalico.org
names:
kind: BlockAffinity
listKind: BlockAffinityList
plural: blockaffinities
singular: blockaffinity
preserveUnknownFields: false
scope: Cluster
versions:
- name: v1
schema:
openAPIV3Schema:
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: BlockAffinitySpec contains the specification for a BlockAffinity
resource.
properties:
cidr:
type: string
deleted:
description: Deleted indicates that this block affinity is being deleted.
This field is a string for compatibility with older releases that
mistakenly treat this field as a string.
type: string
node:
type: string
state:
type: string
required:
- cidr
- deleted
- node
- state
type: object
type: object
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
# Source: calico/templates/kdd-crds.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: (devel)
creationTimestamp: null
name: caliconodestatuses.crd.projectcalico.org
spec:
group: crd.projectcalico.org
names:
kind: CalicoNodeStatus
listKind: CalicoNodeStatusList
plural: caliconodestatuses
singular: caliconodestatus
preserveUnknownFields: false
scope: Cluster
versions:
- name: v1
schema:
openAPIV3Schema:
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: CalicoNodeStatusSpec contains the specification for a CalicoNodeStatus
resource.
properties:
classes:
description: Classes declares the types of information to monitor
for this calico/node, and allows for selective status reporting
about certain subsets of information.
items:
type: string
type: array
node:
description: The node name identifies the Calico node instance for
node status.
type: string
updatePeriodSeconds:
description: UpdatePeriodSeconds is the period at which CalicoNodeStatus
should be updated. Set to 0 to disable CalicoNodeStatus refresh.
Maximum update period is one day.
format: int32
type: integer
type: object
status:
description: CalicoNodeStatusStatus defines the observed state of CalicoNodeStatus.
No validation needed for status since it is updated by Calico.
properties:
agent:
description: Agent holds agent status on the node.
properties:
birdV4:
description: BIRDV4 represents the latest observed status of bird4.
properties:
lastBootTime:
description: LastBootTime holds the value of lastBootTime
from bird.ctl output.
type: string
lastReconfigurationTime:
description: LastReconfigurationTime holds the value of lastReconfigTime
from bird.ctl output.
type: string
routerID:
description: Router ID used by bird.
type: string
state:
description: The state of the BGP Daemon.
type: string
version:
description: Version of the BGP daemon
type: string
type: object
birdV6:
description: BIRDV6 represents the latest observed status of bird6.
properties:
lastBootTime:
description: LastBootTime holds the value of lastBootTime
from bird.ctl output.
type: string
lastReconfigurationTime:
description: LastReconfigurationTime holds the value of lastReconfigTime
from bird.ctl output.
type: string
routerID:
description: Router ID used by bird.
type: string
state:
description: The state of the BGP Daemon.
type: string
version:
description: Version of the BGP daemon
type: string
type: object
type: object
bgp:
description: BGP holds node BGP status.
properties:
numberEstablishedV4:
description: The total number of IPv4 established bgp sessions.
type: integer
numberEstablishedV6:
description: The total number of IPv6 established bgp sessions.
type: integer
numberNotEstablishedV4:
description: The total number of IPv4 non-established bgp sessions.
type: integer
numberNotEstablishedV6:
description: The total number of IPv6 non-established bgp sessions.
type: integer
peersV4:
description: PeersV4 represents IPv4 BGP peers status on the node.
items:
description: CalicoNodePeer contains the status of BGP peers
on the node.
properties:
peerIP:
description: IP address of the peer whose condition we are
reporting.
type: string
since:
description: Since the state or reason last changed.
type: string
state:
description: State is the BGP session state.
type: string
type:
description: Type indicates whether this peer is configured
                            via the node-to-node mesh, or via an explicit global or
per-node BGPPeer object.
type: string
type: object
type: array
peersV6:
description: PeersV6 represents IPv6 BGP peers status on the node.
items:
description: CalicoNodePeer contains the status of BGP peers
on the node.
properties:
peerIP:
description: IP address of the peer whose condition we are
reporting.
type: string
since:
description: Since the state or reason last changed.
type: string
state:
description: State is the BGP session state.
type: string
type:
description: Type indicates whether this peer is configured
                            via the node-to-node mesh, or via an explicit global or
per-node BGPPeer object.
type: string
type: object
type: array
required:
- numberEstablishedV4
- numberEstablishedV6
- numberNotEstablishedV4
- numberNotEstablishedV6
type: object
lastUpdated:
description: LastUpdated is a timestamp representing the server time
                when the CalicoNodeStatus object was last updated. It is represented in
RFC3339 form and is in UTC.
format: date-time
nullable: true
type: string
routes:
description: Routes reports routes known to the Calico BGP daemon
on the node.
properties:
routesV4:
description: RoutesV4 represents IPv4 routes on the node.
items:
description: CalicoNodeRoute contains the status of BGP routes
on the node.
properties:
destination:
description: Destination of the route.
type: string
gateway:
description: Gateway for the destination.
type: string
interface:
description: Interface for the destination
type: string
learnedFrom:
description: LearnedFrom contains information regarding
where this route originated.
properties:
peerIP:
description: If sourceType is NodeMesh or BGPPeer, IP
address of the router that sent us this route.
type: string
sourceType:
description: Type of the source where a route is learned
from.
type: string
type: object
type:
description: Type indicates if the route is being used for
forwarding or not.
type: string
type: object
type: array
routesV6:
description: RoutesV6 represents IPv6 routes on the node.
items:
description: CalicoNodeRoute contains the status of BGP routes
on the node.
properties:
destination:
description: Destination of the route.
type: string
gateway:
description: Gateway for the destination.
type: string
interface:
description: Interface for the destination
type: string
learnedFrom:
description: LearnedFrom contains information regarding
where this route originated.
properties:
peerIP:
description: If sourceType is NodeMesh or BGPPeer, IP
address of the router that sent us this route.
type: string
sourceType:
description: Type of the source where a route is learned
from.
type: string
type: object
type:
description: Type indicates if the route is being used for
forwarding or not.
type: string
type: object
type: array
type: object
type: object
type: object
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
# Source: calico/templates/kdd-crds.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: clusterinformations.crd.projectcalico.org
spec:
group: crd.projectcalico.org
names:
kind: ClusterInformation
listKind: ClusterInformationList
plural: clusterinformations
singular: clusterinformation
preserveUnknownFields: false
scope: Cluster
versions:
- name: v1
schema:
openAPIV3Schema:
description: ClusterInformation contains the cluster specific information.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: ClusterInformationSpec contains the values of describing
the cluster.
properties:
calicoVersion:
description: CalicoVersion is the version of Calico that the cluster
is running
type: string
clusterGUID:
description: ClusterGUID is the GUID of the cluster
type: string
clusterType:
description: ClusterType describes the type of the cluster
type: string
datastoreReady:
description: DatastoreReady is used during significant datastore migrations
to signal to components such as Felix that it should wait before
accessing the datastore.
type: boolean
variant:
description: Variant declares which variant of Calico should be active.
type: string
type: object
type: object
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
# Source: calico/templates/kdd-crds.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: felixconfigurations.crd.projectcalico.org
spec:
group: crd.projectcalico.org
names:
kind: FelixConfiguration
listKind: FelixConfigurationList
plural: felixconfigurations
singular: felixconfiguration
preserveUnknownFields: false
scope: Cluster
versions:
- name: v1
schema:
openAPIV3Schema:
        description: FelixConfiguration contains the configuration for Felix.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: FelixConfigurationSpec contains the values of the Felix configuration.
properties:
allowIPIPPacketsFromWorkloads:
description: 'AllowIPIPPacketsFromWorkloads controls whether Felix
will add a rule to drop IPIP encapsulated traffic from workloads
[Default: false]'
type: boolean
allowVXLANPacketsFromWorkloads:
description: 'AllowVXLANPacketsFromWorkloads controls whether Felix
will add a rule to drop VXLAN encapsulated traffic from workloads
[Default: false]'
type: boolean
awsSrcDstCheck:
description: 'Set source-destination-check on AWS EC2 instances. Accepted
value must be one of "DoNothing", "Enable" or "Disable". [Default:
DoNothing]'
enum:
- DoNothing
- Enable
- Disable
type: string
bpfConnectTimeLoadBalancingEnabled:
description: 'BPFConnectTimeLoadBalancingEnabled when in BPF mode,
controls whether Felix installs the connection-time load balancer. The
connect-time load balancer is required for the host to be able to
reach Kubernetes services and it improves the performance of pod-to-service
connections. The only reason to disable it is for debugging purposes. [Default:
true]'
type: boolean
bpfDSROptoutCIDRs:
description: BPFDSROptoutCIDRs is a list of CIDRs which are excluded
                from DSR. That is, clients in those CIDRs will access nodeports
as if BPFExternalServiceMode was set to Tunnel.
items:
type: string
type: array
bpfDataIfacePattern:
description: BPFDataIfacePattern is a regular expression that controls
which interfaces Felix should attach BPF programs to in order to
catch traffic to/from the network. This needs to match the interfaces
that Calico workload traffic flows over as well as any interfaces
that handle incoming traffic to nodeports and services from outside
the cluster. It should not match the workload interfaces (usually
named cali...).
type: string
bpfDisableUnprivileged:
description: 'BPFDisableUnprivileged, if enabled, Felix sets the kernel.unprivileged_bpf_disabled
sysctl to disable unprivileged use of BPF. This ensures that unprivileged
users cannot access Calico''s BPF maps and cannot insert their own
BPF programs to interfere with Calico''s. [Default: true]'
type: boolean
bpfEnabled:
description: 'BPFEnabled, if enabled Felix will use the BPF dataplane.
[Default: false]'
type: boolean
bpfEnforceRPF:
description: 'BPFEnforceRPF enforce strict RPF on all host interfaces
with BPF programs regardless of what is the per-interfaces or global
setting. Possible values are Disabled, Strict or Loose. [Default:
Loose]'
type: string
bpfExtToServiceConnmark:
              description: 'BPFExtToServiceConnmark in BPF mode, controls a 32-bit
                mark that is set on connections from an external client to a local
                service. This mark allows us to control how packets of that connection
                are routed within the host and how routing is interpreted by the RPF
                check. [Default: 0]'
type: integer
bpfExternalServiceMode:
description: 'BPFExternalServiceMode in BPF mode, controls how connections
from outside the cluster to services (node ports and cluster IPs)
are forwarded to remote workloads. If set to "Tunnel" then both
request and response traffic is tunneled to the remote node. If
set to "DSR", the request traffic is tunneled but the response traffic
is sent directly from the remote node. In "DSR" mode, the remote
node appears to use the IP of the ingress node; this requires a
permissive L2 network. [Default: Tunnel]'
type: string
bpfHostConntrackBypass:
              description: 'BPFHostConntrackBypass controls whether to bypass Linux
conntrack in BPF mode for workloads and services. [Default: true
- bypass Linux conntrack]'
type: boolean
bpfKubeProxyEndpointSlicesEnabled:
description: BPFKubeProxyEndpointSlicesEnabled in BPF mode, controls
whether Felix's embedded kube-proxy accepts EndpointSlices or not.
type: boolean
bpfKubeProxyIptablesCleanupEnabled:
description: 'BPFKubeProxyIptablesCleanupEnabled, if enabled in BPF
mode, Felix will proactively clean up the upstream Kubernetes kube-proxy''s
iptables chains. Should only be enabled if kube-proxy is not running. [Default:
true]'
type: boolean
bpfKubeProxyMinSyncPeriod:
description: 'BPFKubeProxyMinSyncPeriod, in BPF mode, controls the
minimum time between updates to the dataplane for Felix''s embedded
kube-proxy. Lower values give reduced set-up latency. Higher values
reduce Felix CPU usage by batching up more work. [Default: 1s]'
type: string
bpfL3IfacePattern:
description: BPFL3IfacePattern is a regular expression that allows
                listing tunnel devices like wireguard or vxlan (i.e., L3 devices)
in addition to BPFDataIfacePattern. That is, tunnel interfaces not
created by Calico, that Calico workload traffic flows over as well
as any interfaces that handle incoming traffic to nodeports and
services from outside the cluster.
type: string
bpfLogLevel:
description: 'BPFLogLevel controls the log level of the BPF programs
when in BPF dataplane mode. One of "Off", "Info", or "Debug". The
logs are emitted to the BPF trace pipe, accessible with the command
`tc exec bpf debug`. [Default: Off].'
type: string
bpfMapSizeConntrack:
description: 'BPFMapSizeConntrack sets the size for the conntrack
map. This map must be large enough to hold an entry for each active
connection. Warning: changing the size of the conntrack map can
cause disruption.'
type: integer
bpfMapSizeIPSets:
description: BPFMapSizeIPSets sets the size for ipsets map. The IP
sets map must be large enough to hold an entry for each endpoint
matched by every selector in the source/destination matches in network
policy. Selectors such as "all()" can result in large numbers of
entries (one entry per endpoint in that case).
type: integer
bpfMapSizeIfState:
description: BPFMapSizeIfState sets the size for ifstate map. The
ifstate map must be large enough to hold an entry for each device
(host + workloads) on a host.
type: integer
bpfMapSizeNATAffinity:
type: integer
bpfMapSizeNATBackend:
              description: BPFMapSizeNATBackend sets the size for the NAT backend
                map. This is the total number of endpoints, which is typically larger
                than the number of services.
type: integer
bpfMapSizeNATFrontend:
description: BPFMapSizeNATFrontend sets the size for nat front end
map. FrontendMap should be large enough to hold an entry for each
nodeport, external IP and each port in each service.
type: integer
bpfMapSizeRoute:
description: BPFMapSizeRoute sets the size for the routes map. The
routes map should be large enough to hold one entry per workload
and a handful of entries per host (enough to cover its own IPs and
tunnel IPs).
type: integer
bpfPSNATPorts:
anyOf:
- type: integer
- type: string
description: 'BPFPSNATPorts sets the range from which we randomly
pick a port if there is a source port collision. This should be
within the ephemeral range as defined by RFC 6056 (1024–65535) and
preferably outside the ephemeral ranges used by common operating
systems. Linux uses 32768–60999, while others mostly use the IANA
defined range 49152–65535. It is not necessarily a problem if this
                range overlaps with the operating systems'' ranges. Both ends of the range
are inclusive. [Default: 20000:29999]'
pattern: ^.*
x-kubernetes-int-or-string: true
bpfPolicyDebugEnabled:
description: BPFPolicyDebugEnabled when true, Felix records detailed
information about the BPF policy programs, which can be examined
with the calico-bpf command-line tool.
type: boolean
chainInsertMode:
description: 'ChainInsertMode controls whether Felix hooks the kernel''s
top-level iptables chains by inserting a rule at the top of the
chain or by appending a rule at the bottom. insert is the safe default
since it prevents Calico''s rules from being bypassed. If you switch
to append mode, be sure that the other rules in the chains signal
acceptance by falling through to the Calico rules, otherwise the
Calico policy will be bypassed. [Default: insert]'
type: string
dataplaneDriver:
description: DataplaneDriver filename of the external dataplane driver
to use. Only used if UseInternalDataplaneDriver is set to false.
type: string
dataplaneWatchdogTimeout:
description: "DataplaneWatchdogTimeout is the readiness/liveness timeout
used for Felix's (internal) dataplane driver. Increase this value
if you experience spurious non-ready or non-live events when Felix
                is under heavy load. Decrease the value to get Felix to report non-live
or non-ready more quickly. [Default: 90s] \n Deprecated: replaced
by the generic HealthTimeoutOverrides."
type: string
debugDisableLogDropping:
type: boolean
debugMemoryProfilePath:
type: string
debugSimulateCalcGraphHangAfter:
type: string
debugSimulateDataplaneHangAfter:
type: string
defaultEndpointToHostAction:
description: 'DefaultEndpointToHostAction controls what happens to
traffic that goes from a workload endpoint to the host itself (after
the traffic hits the endpoint egress policy). By default Calico
blocks traffic from workload endpoints to the host itself with an
iptables "DROP" action. If you want to allow some or all traffic
from endpoint to host, set this parameter to RETURN or ACCEPT. Use
RETURN if you have your own rules in the iptables "INPUT" chain;
Calico will insert its rules at the top of that chain, then "RETURN"
packets to the "INPUT" chain once it has completed processing workload
endpoint egress policy. Use ACCEPT to unconditionally accept packets
from workloads after processing workload endpoint egress policy.
[Default: Drop]'
type: string
deviceRouteProtocol:
              description: This defines the route protocol added to programmed device
                routes; by default this will be RTPROT_BOOT when left blank.
type: integer
deviceRouteSourceAddress:
description: This is the IPv4 source address to use on programmed
device routes. By default the source address is left blank, leaving
the kernel to choose the source address used.
type: string
deviceRouteSourceAddressIPv6:
description: This is the IPv6 source address to use on programmed
device routes. By default the source address is left blank, leaving
the kernel to choose the source address used.
type: string
disableConntrackInvalidCheck:
type: boolean
endpointReportingDelay:
type: string
endpointReportingEnabled:
type: boolean
externalNodesList:
              description: ExternalNodesCIDRList is a list of CIDRs of external non-Calico
                nodes which may source tunnel traffic and have the tunneled traffic
                be accepted at Calico nodes.
items:
type: string
type: array
failsafeInboundHostPorts:
description: 'FailsafeInboundHostPorts is a list of UDP/TCP ports
and CIDRs that Felix will allow incoming traffic to host endpoints
on irrespective of the security policy. This is useful to avoid
accidentally cutting off a host with incorrect configuration. For
back-compatibility, if the protocol is not specified, it defaults
to "tcp". If a CIDR is not specified, it will allow traffic from
all addresses. To disable all inbound host ports, use the value
none. The default value allows ssh access and DHCP. [Default: tcp:22,
udp:68, tcp:179, tcp:2379, tcp:2380, tcp:6443, tcp:6666, tcp:6667]'
items:
                  description: ProtoPort is a combination of protocol, port, and CIDR.
Protocol and port must be specified.
properties:
net:
type: string
port:
type: integer
protocol:
type: string
required:
- port
- protocol
type: object
type: array
failsafeOutboundHostPorts:
description: 'FailsafeOutboundHostPorts is a list of UDP/TCP ports
and CIDRs that Felix will allow outgoing traffic from host endpoints
to irrespective of the security policy. This is useful to avoid
accidentally cutting off a host with incorrect configuration. For
back-compatibility, if the protocol is not specified, it defaults
                to "tcp". If a CIDR is not specified, it will allow traffic to
all addresses. To disable all outbound host ports, use the value
none. The default value opens etcd''s standard ports to ensure that
Felix does not get cut off from etcd as well as allowing DHCP and
DNS. [Default: tcp:179, tcp:2379, tcp:2380, tcp:6443, tcp:6666,
tcp:6667, udp:53, udp:67]'
items:
                  description: ProtoPort is a combination of protocol, port, and CIDR.
Protocol and port must be specified.
properties:
net:
type: string
port:
type: integer
protocol:
type: string
required:
- port
- protocol
type: object
type: array
featureDetectOverride:
description: FeatureDetectOverride is used to override feature detection
based on auto-detected platform capabilities. Values are specified
                in a comma-separated list with no spaces, for example "SNATFullyRandom=true,MASQFullyRandom=false,RestoreSupportsLock=".
                "true" or "false" will force the feature; empty or omitted values
                are auto-detected.
type: string
featureGates:
description: FeatureGates is used to enable or disable tech-preview
                Calico features. Values are specified in a comma-separated list with
                no spaces, for example "BPFConnectTimeLoadBalancingWorkaround=enabled,XyZ=false".
This is used to enable features that are not fully production ready.
type: string
floatingIPs:
description: FloatingIPs configures whether or not Felix will program
non-OpenStack floating IP addresses. (OpenStack-derived floating
IPs are always programmed, regardless of this setting.)
enum:
- Enabled
- Disabled
type: string
genericXDPEnabled:
description: 'GenericXDPEnabled enables Generic XDP so network cards
that don''t support XDP offload or driver modes can use XDP. This
is not recommended since it doesn''t provide better performance
than iptables. [Default: false]'
type: boolean
healthEnabled:
type: boolean
healthHost:
type: string
healthPort:
type: integer
healthTimeoutOverrides:
description: HealthTimeoutOverrides allows the internal watchdog timeouts
of individual subcomponents to be overridden. This is useful for
working around "false positive" liveness timeouts that can occur
in particularly stressful workloads or if CPU is constrained. For
a list of active subcomponents, see Felix's logs.
items:
properties:
name:
type: string
timeout:
type: string
required:
- name
- timeout
type: object
type: array
interfaceExclude:
description: 'InterfaceExclude is a comma-separated list of interfaces
that Felix should exclude when monitoring for host endpoints. The
default value ensures that Felix ignores Kubernetes'' IPVS dummy
interface, which is used internally by kube-proxy. If you want to
exclude multiple interface names using a single value, the list
supports regular expressions. For regular expressions you must wrap
the value with ''/''. For example having values ''/^kube/,veth1''
will exclude all interfaces that begin with ''kube'' and also the
interface ''veth1''. [Default: kube-ipvs0]'
type: string
interfacePrefix:
description: 'InterfacePrefix is the interface name prefix that identifies
workload endpoints and so distinguishes them from host endpoint
interfaces. Note: in environments other than bare metal, the orchestrators
configure this appropriately. For example our Kubernetes and Docker
integrations set the ''cali'' value, and our OpenStack integration
sets the ''tap'' value. [Default: cali]'
type: string
interfaceRefreshInterval:
description: InterfaceRefreshInterval is the period at which Felix
rescans local interfaces to verify their state. The rescan can be
disabled by setting the interval to 0.
type: string
ipipEnabled:
description: 'IPIPEnabled overrides whether Felix should configure
an IPIP interface on the host. Optional as Felix determines this
based on the existing IP pools. [Default: nil (unset)]'
type: boolean
ipipMTU:
description: 'IPIPMTU is the MTU to set on the tunnel device. See
Configuring MTU [Default: 1440]'
type: integer
ipsetsRefreshInterval:
description: 'IpsetsRefreshInterval is the period at which Felix re-checks
all iptables state to ensure that no other process has accidentally
broken Calico''s rules. Set to 0 to disable iptables refresh. [Default:
90s]'
type: string
iptablesBackend:
description: IptablesBackend specifies which backend of iptables will
be used. The default is Auto.
type: string
iptablesFilterAllowAction:
type: string
iptablesFilterDenyAction:
description: IptablesFilterDenyAction controls what happens to traffic
that is denied by network policy. By default Calico blocks traffic
with an iptables "DROP" action. If you want to use "REJECT" action
instead you can configure it in here.
type: string
iptablesLockFilePath:
description: 'IptablesLockFilePath is the location of the iptables
lock file. You may need to change this if the lock file is not in
its standard location (for example if you have mapped it into Felix''s
container at a different path). [Default: /run/xtables.lock]'
type: string
iptablesLockProbeInterval:
description: 'IptablesLockProbeInterval is the time that Felix will
wait between attempts to acquire the iptables lock if it is not
available. Lower values make Felix more responsive when the lock
is contended, but use more CPU. [Default: 50ms]'
type: string
iptablesLockTimeout:
description: 'IptablesLockTimeout is the time that Felix will wait
for the iptables lock, or 0 to disable. To use this feature, Felix
must share the iptables lock file with all other processes that
also take the lock. When running Felix inside a container, this
requires the /run directory of the host to be mounted into the calico/node
or calico/felix container. [Default: 0s disabled]'
type: string
iptablesMangleAllowAction:
type: string
iptablesMarkMask:
description: 'IptablesMarkMask is the mask that Felix selects its
IPTables Mark bits from. Should be a 32 bit hexadecimal number with
at least 8 bits set, none of which clash with any other mark bits
in use on the system. [Default: 0xff000000]'
format: int32
type: integer
iptablesNATOutgoingInterfaceFilter:
type: string
iptablesPostWriteCheckInterval:
description: 'IptablesPostWriteCheckInterval is the period after Felix
has done a write to the dataplane that it schedules an extra read
back in order to check the write was not clobbered by another process.
This should only occur if another application on the system doesn''t
respect the iptables lock. [Default: 1s]'
type: string
iptablesRefreshInterval:
description: 'IptablesRefreshInterval is the period at which Felix
re-checks the IP sets in the dataplane to ensure that no other process
has accidentally broken Calico''s rules. Set to 0 to disable IP
sets refresh. Note: the default for this value is lower than the
other refresh intervals as a workaround for a Linux kernel bug that
was fixed in kernel version 4.11. If you are using v4.11 or greater
you may want to set this to a higher value to reduce Felix CPU
usage. [Default: 10s]'
type: string
ipv6Support:
description: IPv6Support controls whether Felix enables support for
IPv6 (if supported by the in-use dataplane).
type: boolean
kubeNodePortRanges:
description: 'KubeNodePortRanges holds list of port ranges used for
service node ports. Only used if felix detects kube-proxy running
in ipvs mode. Felix uses these ranges to separate host and workload
traffic. [Default: 30000:32767].'
items:
anyOf:
- type: integer
- type: string
pattern: ^.*
x-kubernetes-int-or-string: true
type: array
logDebugFilenameRegex:
description: LogDebugFilenameRegex controls which source code files
have their Debug log output included in the logs. Only logs from
files with names that match the given regular expression are included. The
filter only applies to Debug level logs.
type: string
logFilePath:
description: 'LogFilePath is the full path to the Felix log. Set to
none to disable file logging. [Default: /var/log/calico/felix.log]'
type: string
logPrefix:
description: 'LogPrefix is the log prefix that Felix uses when rendering
LOG rules. [Default: calico-packet]'
type: string
logSeverityFile:
description: 'LogSeverityFile is the log severity above which logs
are sent to the log file. [Default: Info]'
type: string
logSeverityScreen:
description: 'LogSeverityScreen is the log severity above which logs
are sent to the stdout. [Default: Info]'
type: string
logSeveritySys:
description: 'LogSeveritySys is the log severity above which logs
are sent to the syslog. Set to None for no logging to syslog. [Default:
Info]'
type: string
maxIpsetSize:
type: integer
metadataAddr:
description: 'MetadataAddr is the IP address or domain name of the
server that can answer VM queries for cloud-init metadata. In OpenStack,
this corresponds to the machine running nova-api (or in Ubuntu,
nova-api-metadata). A value of none (case insensitive) means that
Felix should not set up any NAT rule for the metadata path. [Default:
127.0.0.1]'
type: string
metadataPort:
description: 'MetadataPort is the port of the metadata server. This,
combined with global.MetadataAddr (if not ''None''), is used to
set up a NAT rule, from 169.254.169.254:80 to MetadataAddr:MetadataPort.
In most cases this should not need to be changed [Default: 8775].'
type: integer
mtuIfacePattern:
description: MTUIfacePattern is a regular expression that controls
which interfaces Felix should scan in order to calculate the host's
MTU. This should not match workload interfaces (usually named cali...).
type: string
natOutgoingAddress:
description: NATOutgoingAddress specifies an address to use when performing
source NAT for traffic in a natOutgoing pool that is leaving the
network. By default the address used is an address on the interface
the traffic is leaving on (ie it uses the iptables MASQUERADE target)
type: string
natPortRange:
anyOf:
- type: integer
- type: string
description: NATPortRange specifies the range of ports that is used
for port mapping when doing outgoing NAT. When unset the default
behavior of the network stack is used.
pattern: ^.*
x-kubernetes-int-or-string: true
netlinkTimeout:
type: string
openstackRegion:
description: 'OpenstackRegion is the name of the region that a particular
Felix belongs to. In a multi-region Calico/OpenStack deployment,
this must be configured somehow for each Felix (here in the datamodel,
or in felix.cfg or the environment on each compute node), and must
match the [calico] openstack_region value configured in neutron.conf
on each node. [Default: Empty]'
type: string
policySyncPathPrefix:
description: 'PolicySyncPathPrefix is used by Felix to communicate
policy changes to external services, like Application layer policy.
[Default: Empty]'
type: string
prometheusGoMetricsEnabled:
description: 'PrometheusGoMetricsEnabled disables Go runtime metrics
collection, which the Prometheus client does by default, when set
to false. This reduces the number of metrics reported, reducing
Prometheus load. [Default: true]'
type: boolean
prometheusMetricsEnabled:
description: 'PrometheusMetricsEnabled enables the Prometheus metrics
server in Felix if set to true. [Default: false]'
type: boolean
prometheusMetricsHost:
description: 'PrometheusMetricsHost is the host that the Prometheus
metrics server should bind to. [Default: empty]'
type: string
prometheusMetricsPort:
description: 'PrometheusMetricsPort is the TCP port that the Prometheus
metrics server should bind to. [Default: 9091]'
type: integer
prometheusProcessMetricsEnabled:
description: 'PrometheusProcessMetricsEnabled disables process metrics
collection, which the Prometheus client does by default, when set
to false. This reduces the number of metrics reported, reducing
Prometheus load. [Default: true]'
type: boolean
prometheusWireGuardMetricsEnabled:
description: 'PrometheusWireGuardMetricsEnabled disables wireguard
metrics collection, which the Prometheus client does by default,
when set to false. This reduces the number of metrics reported,
reducing Prometheus load. [Default: true]'
type: boolean
removeExternalRoutes:
description: Whether or not to remove device routes that have not
been programmed by Felix. Disabling this will allow external applications
to also add device routes. This is enabled by default, which means
Felix will remove externally added routes.
type: boolean
reportingInterval:
description: 'ReportingInterval is the interval at which Felix reports
its status into the datastore or 0 to disable. Must be non-zero
in OpenStack deployments. [Default: 30s]'
type: string
reportingTTL:
description: 'ReportingTTL is the time-to-live setting for process-wide
status reports. [Default: 90s]'
type: string
routeRefreshInterval:
description: 'RouteRefreshInterval is the period at which Felix re-checks
the routes in the dataplane to ensure that no other process has
accidentally broken Calico''s rules. Set to 0 to disable route refresh.
[Default: 90s]'
type: string
routeSource:
description: 'RouteSource configures where Felix gets its routing
information. - WorkloadIPs: use workload endpoints to construct
routes. - CalicoIPAM: the default - use IPAM data to construct routes.'
type: string
routeSyncDisabled:
description: RouteSyncDisabled will disable all operations performed
on the route table. Set to true to run in network-policy mode only.
type: boolean
routeTableRange:
description: Deprecated in favor of RouteTableRanges. Calico programs
additional Linux route tables for various purposes. RouteTableRange
specifies the indices of the route tables that Calico should use.
properties:
max:
type: integer
min:
type: integer
required:
- max
- min
type: object
routeTableRanges:
description: Calico programs additional Linux route tables for various
purposes. RouteTableRanges specifies a set of table index ranges
that Calico should use. Deprecates `RouteTableRange`, overrides `RouteTableRange`.
items:
properties:
max:
type: integer
min:
type: integer
required:
- max
- min
type: object
type: array
serviceLoopPrevention:
description: 'When service IP advertisement is enabled, prevent routing
loops to service IPs that are not in use, by dropping or rejecting
packets that do not get DNAT''d by kube-proxy. Unless set to "Disabled",
in which case such routing loops continue to be allowed. [Default:
Drop]'
type: string
sidecarAccelerationEnabled:
description: 'SidecarAccelerationEnabled enables experimental sidecar
acceleration [Default: false]'
type: boolean
usageReportingEnabled:
description: 'UsageReportingEnabled reports anonymous Calico version
number and cluster size to projectcalico.org. Logs warnings returned
by the usage server. For example, if a significant security vulnerability
has been discovered in the version of Calico being used. [Default:
true]'
type: boolean
usageReportingInitialDelay:
description: 'UsageReportingInitialDelay controls the minimum delay
before Felix makes a report. [Default: 300s]'
type: string
usageReportingInterval:
description: 'UsageReportingInterval controls the interval at which
Felix makes reports. [Default: 86400s]'
type: string
useInternalDataplaneDriver:
description: UseInternalDataplaneDriver, if true, directs Felix to
use its internal dataplane programming logic. If false, Felix launches
an external dataplane driver and communicates with it over protobuf.
type: boolean
vxlanEnabled:
description: 'VXLANEnabled overrides whether Felix should create the
VXLAN tunnel device for IPv4 VXLAN networking. Optional as Felix
determines this based on the existing IP pools. [Default: nil (unset)]'
type: boolean
vxlanMTU:
description: 'VXLANMTU is the MTU to set on the IPv4 VXLAN tunnel
device. See Configuring MTU [Default: 1410]'
type: integer
vxlanMTUV6:
description: 'VXLANMTUV6 is the MTU to set on the IPv6 VXLAN tunnel
device. See Configuring MTU [Default: 1390]'
type: integer
vxlanPort:
type: integer
vxlanVNI:
type: integer
wireguardEnabled:
description: 'WireguardEnabled controls whether Wireguard is enabled
for IPv4 (encapsulating IPv4 traffic over an IPv4 underlay network).
[Default: false]'
type: boolean
wireguardEnabledV6:
description: 'WireguardEnabledV6 controls whether Wireguard is enabled
for IPv6 (encapsulating IPv6 traffic over an IPv6 underlay network).
[Default: false]'
type: boolean
wireguardHostEncryptionEnabled:
description: 'WireguardHostEncryptionEnabled controls whether Wireguard
host-to-host encryption is enabled. [Default: false]'
type: boolean
wireguardInterfaceName:
description: 'WireguardInterfaceName specifies the name to use for
the IPv4 Wireguard interface. [Default: wireguard.cali]'
type: string
wireguardInterfaceNameV6:
description: 'WireguardInterfaceNameV6 specifies the name to use for
the IPv6 Wireguard interface. [Default: wg-v6.cali]'
type: string
wireguardKeepAlive:
description: 'WireguardKeepAlive controls Wireguard PersistentKeepalive
option. Set 0 to disable. [Default: 0]'
type: string
wireguardListeningPort:
description: 'WireguardListeningPort controls the listening port used
by IPv4 Wireguard. [Default: 51820]'
type: integer
wireguardListeningPortV6:
description: 'WireguardListeningPortV6 controls the listening port
used by IPv6 Wireguard. [Default: 51821]'
type: integer
wireguardMTU:
description: 'WireguardMTU controls the MTU on the IPv4 Wireguard
interface. See Configuring MTU [Default: 1440]'
type: integer
wireguardMTUV6:
description: 'WireguardMTUV6 controls the MTU on the IPv6 Wireguard
interface. See Configuring MTU [Default: 1420]'
type: integer
wireguardRoutingRulePriority:
description: 'WireguardRoutingRulePriority controls the priority value
to use for the Wireguard routing rule. [Default: 99]'
type: integer
workloadSourceSpoofing:
description: WorkloadSourceSpoofing controls whether pods can use
the allowedSourcePrefixes annotation to send traffic with a source
IP address that is not theirs. This is disabled by default. When
set to "Any", pods can request any prefix.
type: string
xdpEnabled:
description: 'XDPEnabled enables XDP acceleration for suitable untracked
incoming deny rules. [Default: true]'
type: boolean
xdpRefreshInterval:
description: 'XDPRefreshInterval is the period at which Felix re-checks
all XDP state to ensure that no other process has accidentally broken
Calico''s BPF maps or attached programs. Set to 0 to disable XDP
refresh. [Default: 90s]'
type: string
type: object
type: object
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
# Source: calico/templates/kdd-crds.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: globalnetworkpolicies.crd.projectcalico.org
spec:
group: crd.projectcalico.org
names:
kind: GlobalNetworkPolicy
listKind: GlobalNetworkPolicyList
plural: globalnetworkpolicies
singular: globalnetworkpolicy
preserveUnknownFields: false
scope: Cluster
versions:
- name: v1
schema:
openAPIV3Schema:
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
properties:
applyOnForward:
description: ApplyOnForward indicates to apply the rules in this policy
on forward traffic.
type: boolean
doNotTrack:
description: DoNotTrack indicates whether packets matched by the rules
in this policy should go through the data plane's connection tracking,
such as Linux conntrack. If True, the rules in this policy are
applied before any data plane connection tracking, and packets allowed
by this policy are marked as not to be tracked.
type: boolean
egress:
description: The ordered set of egress rules. Each rule contains
a set of packet match criteria and a corresponding action to apply.
items:
description: "A Rule encapsulates a set of match criteria and an
action. Both selector-based security Policy and security Profiles
reference rules - separated out as a list of rules for both ingress
and egress packet matching. \n Each positive match criteria has
a negated version, prefixed with \"Not\". All the match criteria
within a rule must be satisfied for a packet to match. A single
rule can contain the positive and negative version of a match
and both must be satisfied for the rule to match."
properties:
action:
type: string
destination:
description: Destination contains the match criteria that apply
to destination entity.
properties:
namespaceSelector:
description: "NamespaceSelector is an optional field that
contains a selector expression. Only traffic that originates
from (or terminates at) endpoints within the selected
namespaces will be matched. When both NamespaceSelector
and another selector are defined on the same rule, then
only workload endpoints that are matched by both selectors
will be selected by the rule. \n For NetworkPolicy, an
empty NamespaceSelector implies that the Selector is limited
to selecting only workload endpoints in the same namespace
as the NetworkPolicy. \n For NetworkPolicy, `global()`
NamespaceSelector implies that the Selector is limited
to selecting only GlobalNetworkSet or HostEndpoint. \n
For GlobalNetworkPolicy, an empty NamespaceSelector implies
the Selector applies to workload endpoints across all
namespaces."
type: string
nets:
description: Nets is an optional field that restricts the
rule to only apply to traffic that originates from (or
terminates at) IP addresses in any of the given subnets.
items:
type: string
type: array
notNets:
description: NotNets is the negated version of the Nets
field.
items:
type: string
type: array
notPorts:
description: NotPorts is the negated version of the Ports
field. Since only some protocols have ports, if any ports
are specified it requires the Protocol match in the Rule
to be set to "TCP" or "UDP".
items:
anyOf:
- type: integer
- type: string
pattern: ^.*
x-kubernetes-int-or-string: true
type: array
notSelector:
description: NotSelector is the negated version of the Selector
field. See Selector field for subtleties with negated
selectors.
type: string
ports:
description: "Ports is an optional field that restricts
the rule to only apply to traffic that has a source (destination)
port that matches one of these ranges/values. This value
is a list of integers or strings that represent ranges
of ports. \n Since only some protocols have ports, if
any ports are specified it requires the Protocol match
in the Rule to be set to \"TCP\" or \"UDP\"."
items:
anyOf:
- type: integer
- type: string
pattern: ^.*
x-kubernetes-int-or-string: true
type: array
selector:
description: "Selector is an optional field that contains
a selector expression (see Policy for sample syntax).
\ Only traffic that originates from (terminates at) endpoints
matching the selector will be matched. \n Note that: in
addition to the negated version of the Selector (see NotSelector
below), the selector expression syntax itself supports
negation. The two types of negation are subtly different.
One negates the set of matched endpoints, the other negates
the whole match: \n \tSelector = \"!has(my_label)\" matches
packets that are from other Calico-controlled \tendpoints
that do not have the label \"my_label\". \n \tNotSelector
= \"has(my_label)\" matches packets that are not from
Calico-controlled \tendpoints that do have the label \"my_label\".
\n The effect is that the latter will accept packets from
non-Calico sources whereas the former is limited to packets
from Calico-controlled endpoints."
type: string
serviceAccounts:
description: ServiceAccounts is an optional field that restricts
the rule to only apply to traffic that originates from
(or terminates at) a pod running as a matching service
account.
properties:
names:
description: Names is an optional field that restricts
the rule to only apply to traffic that originates
from (or terminates at) a pod running as a service
account whose name is in the list.
items:
type: string
type: array
selector:
description: Selector is an optional field that restricts
the rule to only apply to traffic that originates
from (or terminates at) a pod running as a service
account that matches the given label selector. If
both Names and Selector are specified then they are
AND'ed.
type: string
type: object
services:
description: "Services is an optional field that contains
options for matching Kubernetes Services. If specified,
only traffic that originates from or terminates at endpoints
within the selected service(s) will be matched, and only
to/from each endpoint's port. \n Services cannot be specified
on the same rule as Selector, NotSelector, NamespaceSelector,
Nets, NotNets or ServiceAccounts. \n Ports and NotPorts
can only be specified with Services on ingress rules."
properties:
name:
description: Name specifies the name of a Kubernetes
Service to match.
type: string
namespace:
description: Namespace specifies the namespace of the
given Service. If left empty, the rule will match
within this policy's namespace.
type: string
type: object
type: object
http:
description: HTTP contains match criteria that apply to HTTP
requests.
properties:
methods:
description: Methods is an optional field that restricts
the rule to apply only to HTTP requests that use one of
the listed HTTP Methods (e.g. GET, PUT, etc.) Multiple
methods are OR'd together.
items:
type: string
type: array
paths:
description: 'Paths is an optional field that restricts
the rule to apply to HTTP requests that use one of the
listed HTTP Paths. Multiple paths are OR''d together.
e.g: - exact: /foo - prefix: /bar NOTE: Each entry may
ONLY specify either a `exact` or a `prefix` match. The
validator will check for it.'
items:
description: 'HTTPPath specifies an HTTP path to match.
It may be either of the form: exact: <path>: which matches
the path exactly or prefix: <path-prefix>: which matches
the path prefix'
properties:
exact:
type: string
prefix:
type: string
type: object
type: array
type: object
icmp:
description: ICMP is an optional field that restricts the rule
to apply to a specific type and code of ICMP traffic. This
should only be specified if the Protocol field is set to "ICMP"
or "ICMPv6".
properties:
code:
description: Match on a specific ICMP code. If specified,
the Type value must also be specified. This is a technical
limitation imposed by the kernel's iptables firewall,
which Calico uses to enforce the rule.
type: integer
type:
description: Match on a specific ICMP type. For example
a value of 8 refers to ICMP Echo Request (i.e. pings).
type: integer
type: object
ipVersion:
description: IPVersion is an optional field that restricts the
rule to only match a specific IP version.
type: integer
metadata:
description: Metadata contains additional information for this
rule
properties:
annotations:
additionalProperties:
type: string
description: Annotations is a set of key value pairs that
give extra information about the rule
type: object
type: object
notICMP:
description: NotICMP is the negated version of the ICMP field.
properties:
code:
description: Match on a specific ICMP code. If specified,
the Type value must also be specified. This is a technical
limitation imposed by the kernel's iptables firewall,
which Calico uses to enforce the rule.
type: integer
type:
description: Match on a specific ICMP type. For example
a value of 8 refers to ICMP Echo Request (i.e. pings).
type: integer
type: object
notProtocol:
anyOf:
- type: integer
- type: string
description: NotProtocol is the negated version of the Protocol
field.
pattern: ^.*
x-kubernetes-int-or-string: true
protocol:
anyOf:
- type: integer
- type: string
description: "Protocol is an optional field that restricts the
rule to only apply to traffic of a specific IP protocol. Required
if any of the EntityRules contain Ports (because ports only
apply to certain protocols). \n Must be one of these string
values: \"TCP\", \"UDP\", \"ICMP\", \"ICMPv6\", \"SCTP\",
\"UDPLite\" or an integer in the range 1-255."
pattern: ^.*
x-kubernetes-int-or-string: true
source:
description: Source contains the match criteria that apply to
source entity.
properties:
namespaceSelector:
description: "NamespaceSelector is an optional field that
contains a selector expression. Only traffic that originates
from (or terminates at) endpoints within the selected
namespaces will be matched. When both NamespaceSelector
and another selector are defined on the same rule, then
only workload endpoints that are matched by both selectors
will be selected by the rule. \n For NetworkPolicy, an
empty NamespaceSelector implies that the Selector is limited
to selecting only workload endpoints in the same namespace
as the NetworkPolicy. \n For NetworkPolicy, `global()`
NamespaceSelector implies that the Selector is limited
to selecting only GlobalNetworkSet or HostEndpoint. \n
For GlobalNetworkPolicy, an empty NamespaceSelector implies
the Selector applies to workload endpoints across all
namespaces."
type: string
nets:
description: Nets is an optional field that restricts the
rule to only apply to traffic that originates from (or
terminates at) IP addresses in any of the given subnets.
items:
type: string
type: array
notNets:
description: NotNets is the negated version of the Nets
field.
items:
type: string
type: array
notPorts:
description: NotPorts is the negated version of the Ports
field. Since only some protocols have ports, if any ports
are specified it requires the Protocol match in the Rule
to be set to "TCP" or "UDP".
items:
anyOf:
- type: integer
- type: string
pattern: ^.*
x-kubernetes-int-or-string: true
type: array
notSelector:
description: NotSelector is the negated version of the Selector
field. See Selector field for subtleties with negated
selectors.
type: string
ports:
description: "Ports is an optional field that restricts
the rule to only apply to traffic that has a source (destination)
port that matches one of these ranges/values. This value
is a list of integers or strings that represent ranges
of ports. \n Since only some protocols have ports, if
any ports are specified it requires the Protocol match
in the Rule to be set to \"TCP\" or \"UDP\"."
items:
anyOf:
- type: integer
- type: string
pattern: ^.*
x-kubernetes-int-or-string: true
type: array
selector:
description: "Selector is an optional field that contains
a selector expression (see Policy for sample syntax).
\ Only traffic that originates from (terminates at) endpoints
matching the selector will be matched. \n Note that: in
addition to the negated version of the Selector (see NotSelector
below), the selector expression syntax itself supports
negation. The two types of negation are subtly different.
One negates the set of matched endpoints, the other negates
the whole match: \n \tSelector = \"!has(my_label)\" matches
packets that are from other Calico-controlled \tendpoints
that do not have the label \"my_label\". \n \tNotSelector
= \"has(my_label)\" matches packets that are not from
Calico-controlled \tendpoints that do have the label \"my_label\".
\n The effect is that the latter will accept packets from
non-Calico sources whereas the former is limited to packets
from Calico-controlled endpoints."
type: string
serviceAccounts:
description: ServiceAccounts is an optional field that restricts
the rule to only apply to traffic that originates from
(or terminates at) a pod running as a matching service
account.
properties:
names:
description: Names is an optional field that restricts
the rule to only apply to traffic that originates
from (or terminates at) a pod running as a service
account whose name is in the list.
items:
type: string
type: array
selector:
description: Selector is an optional field that restricts
the rule to only apply to traffic that originates
from (or terminates at) a pod running as a service
account that matches the given label selector. If
both Names and Selector are specified then they are
AND'ed.
type: string
type: object
services:
description: "Services is an optional field that contains
options for matching Kubernetes Services. If specified,
only traffic that originates from or terminates at endpoints
within the selected service(s) will be matched, and only
to/from each endpoint's port. \n Services cannot be specified
on the same rule as Selector, NotSelector, NamespaceSelector,
Nets, NotNets or ServiceAccounts. \n Ports and NotPorts
can only be specified with Services on ingress rules."
properties:
name:
description: Name specifies the name of a Kubernetes
Service to match.
type: string
namespace:
description: Namespace specifies the namespace of the
given Service. If left empty, the rule will match
within this policy's namespace.
type: string
type: object
type: object
required:
- action
type: object
type: array
ingress:
description: The ordered set of ingress rules. Each rule contains
a set of packet match criteria and a corresponding action to apply.
items:
description: "A Rule encapsulates a set of match criteria and an
action. Both selector-based security Policy and security Profiles
reference rules - separated out as a list of rules for both ingress
and egress packet matching. \n Each positive match criteria has
a negated version, prefixed with \"Not\". All the match criteria
within a rule must be satisfied for a packet to match. A single
rule can contain the positive and negative version of a match
and both must be satisfied for the rule to match."
properties:
action:
type: string
destination:
description: Destination contains the match criteria that apply
to destination entity.
properties:
namespaceSelector:
description: "NamespaceSelector is an optional field that
contains a selector expression. Only traffic that originates
from (or terminates at) endpoints within the selected
namespaces will be matched. When both NamespaceSelector
and another selector are defined on the same rule, then
only workload endpoints that are matched by both selectors
will be selected by the rule. \n For NetworkPolicy, an
empty NamespaceSelector implies that the Selector is limited
to selecting only workload endpoints in the same namespace
as the NetworkPolicy. \n For NetworkPolicy, `global()`
NamespaceSelector implies that the Selector is limited
to selecting only GlobalNetworkSet or HostEndpoint. \n
For GlobalNetworkPolicy, an empty NamespaceSelector implies
the Selector applies to workload endpoints across all
namespaces."
type: string
nets:
description: Nets is an optional field that restricts the
rule to only apply to traffic that originates from (or
terminates at) IP addresses in any of the given subnets.
items:
type: string
type: array
notNets:
description: NotNets is the negated version of the Nets
field.
items:
type: string
type: array
notPorts:
description: NotPorts is the negated version of the Ports
field. Since only some protocols have ports, if any ports
are specified it requires the Protocol match in the Rule
to be set to "TCP" or "UDP".
items:
anyOf:
SYMBOL INDEX (174 symbols across 42 files)
FILE: cmd/netassert/cli/common.go
function loadTestCases (line 12) | func loadTestCases(testCasesFile, testCasesDir string) (data.Tests, erro...
function createService (line 39) | func createService(kubeconfigPath string, l hclog.Logger) (*kubeops.Serv...
FILE: cmd/netassert/cli/gen_result.go
function genResult (line 13) | func genResult(testCases data.Tests, tapFile string, lg hclog.Logger) er...
FILE: cmd/netassert/cli/main.go
function main (line 9) | func main() {
FILE: cmd/netassert/cli/ping.go
constant apiServerHealthEndpoint (line 17) | apiServerHealthEndpoint = `/healthz`
type pingCmdConfig (line 20) | type pingCmdConfig struct
function ping (line 49) | func ping(ctx context.Context, lg hclog.Logger, k8sSvc *kubeops.Service) {
function init (line 67) | func init() {
FILE: cmd/netassert/cli/root.go
function init (line 31) | func init() {
FILE: cmd/netassert/cli/run.go
type runCmdConfig (line 19) | type runCmdConfig struct
function runTests (line 67) | func runTests(lg hclog.Logger) error {
function init (line 126) | func init() {
FILE: cmd/netassert/cli/validate.go
type validateCmdConfig (line 11) | type validateCmdConfig struct
function validateTestCases (line 29) | func validateTestCases(cmd *cobra.Command, args []string) {
function init (line 40) | func init() {
FILE: cmd/netassert/cli/version.go
function versionDetails (line 20) | func versionDetails(cmd *cobra.Command, args []string) {
FILE: e2e/e2e_test.go
constant suffixLength (line 26) | suffixLength = 9
constant snifferContainerImage (line 27) | snifferContainerImage = "docker.io/controlplane/netassertv2-packet-sni...
constant snifferContainerPrefix (line 28) | snifferContainerPrefix = "netassertv2-sniffer"
constant scannerContainerImage (line 29) | scannerContainerImage = "docker.io/controlplane/netassertv2-l4-client:...
constant scannerContainerPrefix (line 30) | scannerContainerPrefix = "netassertv2-client"
constant pauseInSeconds (line 31) | pauseInSeconds = 5
constant packetCaputureInterface (line 32) | packetCaputureInterface = `eth0`
constant testCasesFile (line 33) | testCasesFile = `./manifests/test-cases.yaml`
constant resultFile (line 34) | resultFile = "result.log"
type MinimalK8sObject (line 45) | type MinimalK8sObject struct
function TestMain (line 65) | func TestMain(m *testing.M) {
function TestKind (line 70) | func TestKind(t *testing.T) {
function TestGKEWithVPC (line 81) | func TestGKEWithVPC(t *testing.T) {
function TestGKEWithDataPlaneV2 (line 92) | func TestGKEWithDataPlaneV2(t *testing.T) {
function TestEKSWithVPC (line 103) | func TestEKSWithVPC(t *testing.T) {
function TestEKSWithCalico (line 115) | func TestEKSWithCalico(t *testing.T) {
function waitUntilManifestReady (line 127) | func waitUntilManifestReady(t *testing.T, svc *kubeops.Service, manifest...
function createTestDestroy (line 175) | func createTestDestroy(t *testing.T, gc helpers.GenericCluster) {
function runTests (line 243) | func runTests(ctx context.Context, t *testing.T, svc *kubeops.Service, n...
FILE: e2e/helpers/common.go
constant VPC (line 6) | VPC NetworkMode = "vpc"
constant DataPlaneV2 (line 7) | DataPlaneV2 NetworkMode = "dataplanev2"
constant Calico (line 8) | Calico NetworkMode = "calico"
type NetworkMode (line 11) | type NetworkMode
type GenericCluster (line 13) | type GenericCluster interface
FILE: e2e/helpers/eks.go
type EKSCluster (line 18) | type EKSCluster struct
method Create (line 66) | func (g *EKSCluster) Create(t *testing.T) {
method installCalico (line 78) | func (g *EKSCluster) installCalico(t *testing.T) {
method Destroy (line 122) | func (g *EKSCluster) Destroy(t *testing.T) {
method KubeConfigGet (line 128) | func (g *EKSCluster) KubeConfigGet() string {
method SkipNetPolTests (line 132) | func (g *EKSCluster) SkipNetPolTests() bool {
function NewEKSCluster (line 29) | func NewEKSCluster(t *testing.T, terraformDir, clusterNameSuffix string,...
FILE: e2e/helpers/gke.go
type GKECluster (line 9) | type GKECluster struct
method Create (line 46) | func (g *GKECluster) Create(t *testing.T) {
method Destroy (line 54) | func (g *GKECluster) Destroy(t *testing.T) {
method KubeConfigGet (line 60) | func (g *GKECluster) KubeConfigGet() string {
method SkipNetPolTests (line 64) | func (g *GKECluster) SkipNetPolTests() bool {
function NewGKECluster (line 20) | func NewGKECluster(t *testing.T, terraformDir, clusterNameSuffix string,...
FILE: e2e/helpers/kind.go
type KindCluster (line 12) | type KindCluster struct
method Create (line 33) | func (k *KindCluster) Create(t *testing.T) {
method Destroy (line 60) | func (k *KindCluster) Destroy(t *testing.T) {
method KubeConfigGet (line 71) | func (k *KindCluster) KubeConfigGet() string {
method SkipNetPolTests (line 75) | func (k *KindCluster) SkipNetPolTests() bool {
function NewKindCluster (line 20) | func NewKindCluster(t *testing.T, WorkspaceDir string, clusterNameSuffix...
FILE: internal/data/read.go
constant fileExtensionYAML (line 11) | fileExtensionYAML = `.yaml`
constant fileExtensionYML (line 12) | fileExtensionYML = `.yml`
function ReadTestsFromDir (line 17) | func ReadTestsFromDir(path string) (Tests, error) {
function ReadTestsFromFile (line 64) | func ReadTestsFromFile(fileName string) (Tests, error) {
FILE: internal/data/read_test.go
function TestReadTestFile (line 10) | func TestReadTestFile(t *testing.T) {
function TestReadTestsFromDir (line 50) | func TestReadTestsFromDir(t *testing.T) {
FILE: internal/data/tap.go
method TAPResult (line 11) | func (ts *Tests) TAPResult(w io.Writer) error {
FILE: internal/data/tap_test.go
function TestTests_TAPResult (line 10) | func TestTests_TAPResult(t *testing.T) {
FILE: internal/data/types.go
type Protocol (line 12) | type Protocol
constant ProtocolTCP (line 16) | ProtocolTCP Protocol = "tcp"
constant ProtocolUDP (line 19) | ProtocolUDP Protocol = "udp"
type K8sResourceKind (line 23) | type K8sResourceKind
constant KindDeployment (line 26) | KindDeployment K8sResourceKind = "deployment"
constant KindStatefulSet (line 27) | KindStatefulSet K8sResourceKind = "statefulset"
constant KindDaemonSet (line 28) | KindDaemonSet K8sResourceKind = "daemonset"
constant KindPod (line 29) | KindPod K8sResourceKind = "pod"
type TestType (line 42) | type TestType
constant K8sTest (line 45) | K8sTest TestType = "k8s"
type K8sResource (line 54) | type K8sResource struct
method validate (line 95) | func (r *K8sResource) validate() error {
type Src (line 62) | type Src struct
method validate (line 161) | func (d *Src) validate() error {
type Host (line 67) | type Host struct
method validate (line 127) | func (h *Host) validate() error {
type Dst (line 72) | type Dst struct
method validate (line 140) | func (d *Dst) validate() error {
type Test (line 78) | type Test struct
method validate (line 174) | func (te *Test) validate() error {
method setDefaults (line 254) | func (te *Test) setDefaults() {
method UnmarshalYAML (line 306) | func (te *Test) UnmarshalYAML(node *yaml.Node) error {
type Tests (line 93) | type Tests
method Validate (line 234) | func (ts *Tests) Validate() error {
method UnmarshalYAML (line 269) | func (ts *Tests) UnmarshalYAML(node *yaml.Node) error {
function NewFromReader (line 286) | func NewFromReader(r io.Reader) (Tests, error) {
FILE: internal/data/types_test.go
function TestNewFromReader (line 12) | func TestNewFromReader(t *testing.T) {
FILE: internal/engine/engine.go
type Engine (line 18) | type Engine struct
method GetPod (line 29) | func (e *Engine) GetPod(ctx context.Context, res *data.K8sResource) (*...
method RunTests (line 50) | func (e *Engine) RunTests(
method RunTest (line 109) | func (e *Engine) RunTest(
function New (line 24) | func New(service NetAssertTestRunner, log hclog.Logger) *Engine {
function cancellableDelay (line 90) | func cancellableDelay(ctx context.Context, duration time.Duration) {
FILE: internal/engine/engine_daemonset_test.go
function TestEngine_GetPod_DaemonSet (line 17) | func TestEngine_GetPod_DaemonSet(t *testing.T) {
FILE: internal/engine/engine_deployment_test.go
function TestEngine_GetPod_Deployment (line 17) | func TestEngine_GetPod_Deployment(t *testing.T) {
FILE: internal/engine/engine_mocks_test.go
type MockNetAssertTestRunner (line 17) | type MockNetAssertTestRunner struct
method EXPECT (line 35) | func (m *MockNetAssertTestRunner) EXPECT() *MockNetAssertTestRunnerMoc...
method BuildEphemeralScannerContainer (line 40) | func (m *MockNetAssertTestRunner) BuildEphemeralScannerContainer(arg0,...
method BuildEphemeralSnifferContainer (line 55) | func (m *MockNetAssertTestRunner) BuildEphemeralSnifferContainer(arg0,...
method GetExitStatusOfEphemeralContainer (line 70) | func (m *MockNetAssertTestRunner) GetExitStatusOfEphemeralContainer(ar...
method GetPod (line 85) | func (m *MockNetAssertTestRunner) GetPod(arg0 context.Context, arg1, a...
method GetPodInDaemonSet (line 100) | func (m *MockNetAssertTestRunner) GetPodInDaemonSet(arg0 context.Conte...
method GetPodInDeployment (line 115) | func (m *MockNetAssertTestRunner) GetPodInDeployment(arg0 context.Cont...
method GetPodInStatefulSet (line 130) | func (m *MockNetAssertTestRunner) GetPodInStatefulSet(arg0 context.Con...
method LaunchEphemeralContainerInPod (line 145) | func (m *MockNetAssertTestRunner) LaunchEphemeralContainerInPod(arg0 c...
type MockNetAssertTestRunnerMockRecorder (line 23) | type MockNetAssertTestRunnerMockRecorder struct
method BuildEphemeralScannerContainer (line 49) | func (mr *MockNetAssertTestRunnerMockRecorder) BuildEphemeralScannerCo...
method BuildEphemeralSnifferContainer (line 64) | func (mr *MockNetAssertTestRunnerMockRecorder) BuildEphemeralSnifferCo...
method GetExitStatusOfEphemeralContainer (line 79) | func (mr *MockNetAssertTestRunnerMockRecorder) GetExitStatusOfEphemera...
method GetPod (line 94) | func (mr *MockNetAssertTestRunnerMockRecorder) GetPod(arg0, arg1, arg2...
method GetPodInDaemonSet (line 109) | func (mr *MockNetAssertTestRunnerMockRecorder) GetPodInDaemonSet(arg0,...
method GetPodInDeployment (line 124) | func (mr *MockNetAssertTestRunnerMockRecorder) GetPodInDeployment(arg0...
method GetPodInStatefulSet (line 139) | func (mr *MockNetAssertTestRunnerMockRecorder) GetPodInStatefulSet(arg...
method LaunchEphemeralContainerInPod (line 155) | func (mr *MockNetAssertTestRunnerMockRecorder) LaunchEphemeralContaine...
function NewMockNetAssertTestRunner (line 28) | func NewMockNetAssertTestRunner(ctrl *gomock.Controller) *MockNetAssertT...
FILE: internal/engine/engine_pod_test.go
function TestEngine_GetPod_Pod (line 17) | func TestEngine_GetPod_Pod(t *testing.T) {
FILE: internal/engine/engine_statefulset_test.go
function TestEngine_GetPod_StatefulSet (line 17) | func TestEngine_GetPod_StatefulSet(t *testing.T) {
FILE: internal/engine/interface.go
type PodGetter (line 11) | type PodGetter interface
type EphemeralContainerOperator (line 19) | type EphemeralContainerOperator interface
type NetAssertTestRunner (line 57) | type NetAssertTestRunner interface
FILE: internal/engine/run_tcp.go
method RunTCPTest (line 15) | func (e *Engine) RunTCPTest(
method CheckExitStatusOfEphContainer (line 108) | func (e *Engine) CheckExitStatusOfEphContainer(
FILE: internal/engine/run_tcp_test.go
function TestEngine_RunTCPTest (line 38) | func TestEngine_RunTCPTest(t *testing.T) {
FILE: internal/engine/run_udp.go
constant defaultNetInt (line 15) | defaultNetInt = `eth0`
constant defaultSnapLen (line 16) | defaultSnapLen = 1024
constant ephemeralContainersExtraSeconds (line 17) | ephemeralContainersExtraSeconds = 23
constant attemptsMultiplier (line 18) | attemptsMultiplier = 3
method RunUDPTest (line 22) | func (e *Engine) RunUDPTest(
FILE: internal/kubeops/client.go
function generateKubernetesClient (line 16) | func generateKubernetesClient() (kubernetes.Interface, error) {
function genK8sClientFromKubeConfigFile (line 37) | func genK8sClientFromKubeConfigFile(kubeConfigPath string) (kubernetes.I...
type Service (line 52) | type Service struct
function New (line 58) | func New(client kubernetes.Interface, l hclog.Logger) *Service {
function NewDefaultService (line 66) | func NewDefaultService(l hclog.Logger) (*Service, error) {
function NewServiceFromKubeConfigFile (line 79) | func NewServiceFromKubeConfigFile(kubeConfigPath string, l hclog.Logger)...
FILE: internal/kubeops/container_test.go
function TestLaunchEphemeralContainerInPod_InvalidEphemeralContainer (line 14) | func TestLaunchEphemeralContainerInPod_InvalidEphemeralContainer(t *test...
FILE: internal/kubeops/containers.go
method LaunchEphemeralContainerInPod (line 19) | func (svc *Service) LaunchEphemeralContainerInPod(
method BuildEphemeralSnifferContainer (line 66) | func (svc *Service) BuildEphemeralSnifferContainer(
method BuildEphemeralScannerContainer (line 126) | func (svc *Service) BuildEphemeralScannerContainer(
method GetExitStatusOfEphemeralContainer (line 178) | func (svc *Service) GetExitStatusOfEphemeralContainer(
FILE: internal/kubeops/daemonset.go
method GetPodInDaemonSet (line 14) | func (svc *Service) GetPodInDaemonSet(ctx context.Context, name, namespa...
FILE: internal/kubeops/daemonset_test.go
function createDaemonSet (line 17) | func createDaemonSet(client kubernetes.Interface,
function deaemonSetPod (line 64) | func deaemonSetPod(
function TestGetPodInDaemonSet (line 112) | func TestGetPodInDaemonSet(t *testing.T) {
FILE: internal/kubeops/deployment.go
method GetPodInDeployment (line 15) | func (svc *Service) GetPodInDeployment(ctx context.Context, name, namesp...
FILE: internal/kubeops/deployment_test.go
function getDeploymentObject (line 19) | func getDeploymentObject(name, namespace string, replicaSize int32) *app...
function replicaSetWithOwnerSetToDeployment (line 68) | func replicaSetWithOwnerSetToDeployment(deploy *appsv1.Deployment, size ...
function podWithOwnerSetToReplicaSet (line 91) | func podWithOwnerSetToReplicaSet(rs *appsv1.ReplicaSet) *corev1.Pod {
function TestGetPodInDeployment (line 111) | func TestGetPodInDeployment(t *testing.T) {
FILE: internal/kubeops/pod.go
constant ephContainerGroup (line 19) | ephContainerGroup = ""
constant ephContainerVersion (line 20) | ephContainerVersion = "v1"
constant ephContainerKind (line 21) | ephContainerKind = "Pod"
constant ephContainerRes (line 22) | ephContainerRes = "pods/ephemeralcontainers"
method GetPod (line 26) | func (svc *Service) GetPod(ctx context.Context, name, namespace string) ...
function getRandomPodFromPodList (line 59) | func getRandomPodFromPodList(ownerObj metav1.Object, podList *corev1.Pod...
method CheckEphemeralContainerSupport (line 100) | func (svc *Service) CheckEphemeralContainerSupport(ctx context.Context) ...
function checkResourceSupport (line 119) | func checkResourceSupport(
method PingHealthEndpoint (line 143) | func (svc *Service) PingHealthEndpoint(ctx context.Context, endpoint str...
method WaitForPodInResourceReady (line 153) | func (svc *Service) WaitForPodInResourceReady(name, namespace, resourceT...
FILE: internal/kubeops/pod_test.go
function TestGetPod (line 14) | func TestGetPod(t *testing.T) {
FILE: internal/kubeops/statefulset.go
method GetPodInStatefulSet (line 15) | func (svc *Service) GetPodInStatefulSet(ctx context.Context, name, names...
FILE: internal/kubeops/statefulset_test.go
function createStatefulSet (line 20) | func createStatefulSet(client kubernetes.Interface, name, namespace stri...
function createStatefulSetPod (line 88) | func createStatefulSetPod(
function TestGetPodInStatefulSet (line 119) | func TestGetPodInStatefulSet(t *testing.T) {
FILE: internal/kubeops/string_gen.go
constant charset (line 10) | charset = "abcdefghijklmnopqrstuvwxyz123456789"
function NewUUIDString (line 14) | func NewUUIDString() (string, error) {
function RandString (line 24) | func RandString(length int) string {
FILE: internal/logger/hclog.go
function NewHCLogger (line 12) | func NewHCLogger(logLevel, appName string, w io.Writer) hclog.Logger {
Condensed preview — 118 files, each showing path, character count, and a content snippet.
[
{
"path": ".dockerignore",
"chars": 73,
"preview": "Dockerfile*\nJenkinsfile*\n**/.terraform\n.git/\n\n.idea/\n*.iml\n.gcloudignore\n"
},
{
"path": ".editorconfig",
"chars": 468,
"preview": "root = true\n\n[*]\nend_of_line = lf\nindent_style = space\nindent_size = 2\ninsert_final_newline = true\nmax_line_length = 120"
},
{
"path": ".github/ISSUE_TEMPLATE/bug_report.md",
"chars": 538,
"preview": "---\nname: Bug report\nabout: Create a report to help us improve\ntitle: ''\nlabels: bug\nassignees: ''\n---\n\n**Describe the b"
},
{
"path": ".github/ISSUE_TEMPLATE/feature_request.md",
"chars": 603,
"preview": "---\nname: Feature request\nabout: Suggest an idea for this project\ntitle: ''\nlabels: ''\nassignees: ''\n---\n\n**Is your feat"
},
{
"path": ".github/ISSUE_TEMPLATE/question.md",
"chars": 226,
"preview": "---\nname: Question\nabout: Post a question about the project\ntitle: ''\nlabels: question\nassignees: ''\n---\n\n**Your questio"
},
{
"path": ".github/PULL_REQUEST_TEMPLATE/pull_request_template.md",
"chars": 808,
"preview": "---\nname: Pull Request\nabout: A pull request\ntitle: ''\nlabels: ''\nassignees: ''\n---\n\n[pull_requests]: https://github.com"
},
{
"path": ".github/workflows/build.yaml",
"chars": 1627,
"preview": "name: Lint and Build\non:\n push:\n tags-ignore:\n - '*'\n branches:\n - '*'\n pull_request:\n branches: ['ma"
},
{
"path": ".github/workflows/release.yaml",
"chars": 4560,
"preview": "name: release\n\non:\n push:\n tags:\n - \"v[0-9]+.[0-9]+.[0-9]+\"\n - \"v[0-9]+.[0-9]+.[0-9]+-testing[0-9]+\"\n\nperm"
},
{
"path": ".gitignore",
"chars": 1697,
"preview": "# Secrets #\n###########\n*.pem\n*.key\n*_rsa\n\n# Compiled source #\n###################\n*.com\n*.class\n*.dll\n*.exe\n*.o\n*.so\n*."
},
{
"path": ".goreleaser.yaml",
"chars": 745,
"preview": "builds:\n- id: netassert\n env:\n - CGO_ENABLED=0\n ldflags:\n - -s\n - -w\n - -X main.version={{.Tag}}\n - -X main.gitHa"
},
{
"path": ".hadolint.yaml",
"chars": 82,
"preview": "---\nignored:\n - DL3018 # Pin versions in apk add.\n - DL3022 # COPY --from alias\n"
},
{
"path": ".yamllint.yaml",
"chars": 168,
"preview": "---\nextends: default\n\nignore: |\n test/bin/\n test/asset/\nrules:\n comments:\n min-spaces-from-content: 1\n line-lengt"
},
{
"path": "CHANGELOG.md",
"chars": 860,
"preview": "# Changelog\n\nAll notable changes to this project will be documented in this file.\n\n## Table of Contents\n\n- [2.0.3](#203)"
},
{
"path": "CODE_OF_CONDUCT.md",
"chars": 3204,
"preview": "# Contributor Covenant Code of Conduct\n\n## Our Pledge\n\nIn the interest of fostering an open and welcoming environment, w"
},
{
"path": "CONTRIBUTING.md",
"chars": 12065,
"preview": "# Contributing to NetAssert\n\n:+1::tada: First off, thanks for taking the time to contribute! :tada::+1:\n\n`NetAssert` is "
},
{
"path": "Dockerfile",
"chars": 518,
"preview": "FROM golang:1.25-alpine AS builder\n\nARG VERSION\n\nCOPY . /build\nWORKDIR /build\n\nRUN go mod download && \\\n CGO_ENABLED="
},
{
"path": "LICENSE",
"chars": 11346,
"preview": " Apache License\n Version 2.0, January 2004\n "
},
{
"path": "README.md",
"chars": 21093,
"preview": "# Netassert\n\n[![Testing Workflow][testing_workflow_badge]][testing_workflow_badge]\n[![Release Workflow][release_workflow"
},
{
"path": "SECURITY.md",
"chars": 156,
"preview": "# Security Policy\n\n## Our Security Address\n\nContact: `security@control-plane.io`\nEncryption: `https://keybase.io/sublimi"
},
{
"path": "cmd/netassert/cli/common.go",
"chars": 1321,
"preview": "package main\n\nimport (\n\t\"errors\"\n\n\t\"github.com/controlplaneio/netassert/v2/internal/data\"\n\t\"github.com/controlplaneio/ne"
},
{
"path": "cmd/netassert/cli/gen_result.go",
"chars": 1106,
"preview": "package main\n\nimport (\n\t\"fmt\"\n\t\"os\"\n\n\t\"github.com/hashicorp/go-hclog\"\n\n\t\"github.com/controlplaneio/netassert/v2/internal"
},
{
"path": "cmd/netassert/cli/main.go",
"chars": 138,
"preview": "package main\n\nimport (\n\t\"os\"\n\n\t_ \"go.uber.org/automaxprocs\"\n)\n\nfunc main() {\n\tif err := rootCmd.Execute(); err != nil {\n"
},
{
"path": "cmd/netassert/cli/ping.go",
"chars": 2149,
"preview": "package main\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"os\"\n\t\"time\"\n\n\t\"github.com/hashicorp/go-hclog\"\n\t\"github.com/spf13/cobra\"\n\n\t\"gi"
},
{
"path": "cmd/netassert/cli/root.go",
"chars": 1360,
"preview": "package main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/spf13/cobra\"\n)\n\n// these variables are overwritten at build time using ldfla"
},
{
"path": "cmd/netassert/cli/run.go",
"chars": 6548,
"preview": "package main\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"os\"\n\t\"os/signal\"\n\t\"syscall\"\n\t\"time\"\n\n\t\"github.com/hashicorp/go-hclog\"\n\t\"githu"
},
{
"path": "cmd/netassert/cli/validate.go",
"chars": 1313,
"preview": "package main\n\nimport (\n\t\"fmt\"\n\t\"os\"\n\n\t\"github.com/spf13/cobra\"\n)\n\n// validateCmdConfig - config for validate sub-command"
},
{
"path": "cmd/netassert/cli/version.go",
"chars": 710,
"preview": "package main\n\nimport (\n\t\"fmt\"\n\t\"os\"\n\n\t\"github.com/spf13/cobra\"\n\n\t\"github.com/controlplaneio/netassert/v2/internal/logger"
},
{
"path": "download.sh",
"chars": 1543,
"preview": "#!/bin/bash\nset -euo pipefail\n\nUSER='controlplaneio'\nREPO='netassert'\nBINARY='netassert'\nPWD=$(pwd)\nLATEST=$(curl --sile"
},
{
"path": "e2e/README.md",
"chars": 3092,
"preview": "# End-to-End(E2E) Tests\n\nThe E2E tests uses `terraform` and `terratest` to spin up GKE and EKS clusters. There are altog"
},
{
"path": "e2e/clusters/aws-eks-terraform-module/eks.tf",
"chars": 1936,
"preview": "provider \"aws\" {\n region = var.region\n}\n\ndata \"aws_availability_zones\" \"available\" {}\n\n\n\nmodule \"eks\" {\n source = \"te"
},
{
"path": "e2e/clusters/aws-eks-terraform-module/outputs.tf",
"chars": 477,
"preview": "output \"cluster_endpoint\" {\n description = \"Endpoint for EKS control plane\"\n value = module.eks.cluster_endpoint"
},
{
"path": "e2e/clusters/aws-eks-terraform-module/variables.tf",
"chars": 803,
"preview": "variable \"region\" {\n description = \"AWS region\"\n type = string\n}\n\nvariable \"cluster_version\" {\n description = "
},
{
"path": "e2e/clusters/aws-eks-terraform-module/vpc.tf",
"chars": 851,
"preview": "module \"vpc\" {\n source = \"terraform-aws-modules/vpc/aws\"\n\n // set VPC name same as the EKS cluster name\n name = var."
},
{
"path": "e2e/clusters/eks-with-calico-cni/calico-3.26.4.yaml",
"chars": 244728,
"preview": "---\n# Source: calico/templates/calico-kube-controllers.yaml\n# This manifest creates a Pod Disruption Budget for Controll"
},
{
"path": "e2e/clusters/eks-with-calico-cni/terraform/main.tf",
"chars": 384,
"preview": "\n// we spin up an EKS cluster\nmodule \"eks_cluster_calico_cni\" {\n source = \"../../aws-eks-terraform-module\"\n region = v"
},
{
"path": "e2e/clusters/eks-with-calico-cni/terraform/vars.tf",
"chars": 803,
"preview": "variable \"region\" {\n description = \"AWS region\"\n type = string\n}\n\nvariable \"cluster_version\" {\n description = "
},
{
"path": "e2e/clusters/eks-with-vpc-cni/terraform/main.tf",
"chars": 383,
"preview": "\n// we spin up an EKS cluster\nmodule \"eks_cluster_vpc_cni\" {\n source = \"../../aws-eks-terraform-module\"\n region = var."
},
{
"path": "e2e/clusters/eks-with-vpc-cni/terraform/vars.tf",
"chars": 803,
"preview": "variable \"region\" {\n description = \"AWS region\"\n type = string\n}\n\nvariable \"cluster_version\" {\n description = "
},
{
"path": "e2e/clusters/gke-dataplanev2/main.tf",
"chars": 696,
"preview": "terraform {\n required_providers {\n google = {\n source = \"hashicorp/google\"\n version = \"~> 4.57.0\"\n }\n "
},
{
"path": "e2e/clusters/gke-dataplanev2/variables.tf",
"chars": 222,
"preview": "variable \"zone\" {\n type = string\n}\n\nvariable \"cluster_name\" {\n type = string\n}\n\nvariable \"cluster_version\" {\n type = "
},
{
"path": "e2e/clusters/gke-vpc/main.tf",
"chars": 745,
"preview": "terraform {\n required_providers {\n google = {\n source = \"hashicorp/google\"\n version = \"~> 4.57.0\"\n }\n "
},
{
"path": "e2e/clusters/gke-vpc/variables.tf",
"chars": 222,
"preview": "variable \"zone\" {\n type = string\n}\n\nvariable \"cluster_name\" {\n type = string\n}\n\nvariable \"cluster_version\" {\n type = "
},
{
"path": "e2e/clusters/kind/kind-config.yaml",
"chars": 422,
"preview": "---\nkind: Cluster\napiVersion: kind.x-k8s.io/v1alpha4\n# 1 control plane node and 3 workers\nnodes:\n # the control plane n"
},
{
"path": "e2e/e2e_test.go",
"chars": 8210,
"preview": "package e2e\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"io\"\n\t\"os\"\n\t\"slices\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/gruntwork-io"
},
{
"path": "e2e/helpers/common.go",
"chars": 308,
"preview": "package helpers\n\nimport \"testing\"\n\nconst (\n\tVPC NetworkMode = \"vpc\"\n\tDataPlaneV2 NetworkMode = \"dataplanev2\"\n\tCa"
},
{
"path": "e2e/helpers/eks.go",
"chars": 3726,
"preview": "package helpers\n\nimport (\n\t\"context\"\n\t\"os\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/gruntwork-io/terratest/modules/k8s\"\n\t\"github"
},
{
"path": "e2e/helpers/gke.go",
"chars": 1529,
"preview": "package helpers\n\nimport (\n\t\"testing\"\n\n\t\"github.com/gruntwork-io/terratest/modules/terraform\"\n)\n\ntype GKECluster struct {"
},
{
"path": "e2e/helpers/kind.go",
"chars": 1893,
"preview": "package helpers\n\nimport (\n\t\"os\"\n\t\"testing\"\n\n\t\"github.com/gruntwork-io/terratest/modules/k8s\"\n\t\"sigs.k8s.io/kind/pkg/clus"
},
{
"path": "e2e/manifests/networkpolicies.yaml",
"chars": 187,
"preview": "kind: NetworkPolicy\napiVersion: networking.k8s.io/v1\nmetadata:\n namespace: web\n name: web\nspec:\n podSelector:\n mat"
},
{
"path": "e2e/manifests/test-cases.yaml",
"chars": 3097,
"preview": "---\n- name: busybox-deploy-to-echoserver-deploy\n type: k8s\n protocol: tcp\n targetPort: 8080\n timeoutSeconds: 269\n a"
},
{
"path": "e2e/manifests/workload.yaml",
"chars": 3669,
"preview": "---\napiVersion: v1\nkind: Namespace\nmetadata:\n name: fluentd\n---\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n name: f"
},
{
"path": "fluxcd-demo/README.md",
"chars": 3861,
"preview": "# 🚀 FluxCD Demo Guide\n\nThis guide walks you through setting up a **FluxCD** demo environment using **kind** (Kubernetes "
},
{
"path": "fluxcd-demo/fluxcd-helmconfig.yaml",
"chars": 669,
"preview": "apiVersion: source.toolkit.fluxcd.io/v1\nkind: HelmRepository\nmetadata:\n name: demo-repo\n namespace: default\nspec:\n ty"
},
{
"path": "fluxcd-demo/helm/Chart.yaml",
"chars": 202,
"preview": "apiVersion: v1\ndescription: fluxcd-demo\nname: fluxcd-demo\nversion: 0.0.1-dev\nappVersion: 0.0.1-dev\ndependencies:\n- name:"
},
{
"path": "fluxcd-demo/helm/templates/_helpers.tpl",
"chars": 1057,
"preview": "{{/* vim: set filetype=mustache: */}}\n{{/*\nExpand the name of the chart.\n*/}}\n{{- define \"fluxcd-demo.name\" -}}\n{{- defa"
},
{
"path": "fluxcd-demo/helm/templates/deployment.yaml",
"chars": 1800,
"preview": "---\napiVersion: v1\nkind: Namespace\nmetadata:\n name: echoserver\n---\napiVersion: v1\nkind: Namespace\nmetadata:\n name: bus"
},
{
"path": "fluxcd-demo/helm/templates/pod1-pod2.yaml",
"chars": 727,
"preview": "\n---\napiVersion: v1\nkind: Namespace\nmetadata:\n name: pod1\n---\napiVersion: v1\nkind: Namespace\nmetadata:\n name: pod2\n---"
},
{
"path": "fluxcd-demo/helm/templates/post-deploy-tests.yaml",
"chars": 2964,
"preview": "apiVersion: v1\ndata:\n test.yaml: |\n ---\n - name: busybox-deploy-to-echoserver-deploy\n type: k8s\n protoc"
},
{
"path": "fluxcd-demo/helm/templates/statefulset.yaml",
"chars": 481,
"preview": "---\napiVersion: v1\nkind: Namespace\nmetadata:\n name: web\n---\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: {{ "
},
{
"path": "fluxcd-demo/helm/values.yaml",
"chars": 1,
"preview": "\n"
},
{
"path": "fluxcd-demo/kind-cluster.yaml",
"chars": 342,
"preview": "kind: Cluster\napiVersion: kind.x-k8s.io/v1alpha4\ncontainerdConfigPatches:\n - |-\n [plugins.\"io.containerd.grpc.v1.cri"
},
{
"path": "go.mod",
"chars": 8021,
"preview": "module github.com/controlplaneio/netassert/v2\n\ngo 1.25.4\n\nrequire (\n\tgithub.com/google/uuid v1.6.0\n\tgithub.com/gruntwork"
},
{
"path": "go.sum",
"chars": 33029,
"preview": "al.essio.dev/pkg/shellescape v1.5.1 h1:86HrALUujYS/h+GtqoB26SBEdkWfmMI6FubjXlsXyho=\nal.essio.dev/pkg/shellescape v1.5.1/"
},
{
"path": "helm/Chart.yaml",
"chars": 200,
"preview": "apiVersion: v1\ndescription: NetAssert\nname: netassert\nversion: 1.0.0-dev\nappVersion: 1.0.0-dev\nhome: https://github.com/"
},
{
"path": "helm/README.md",
"chars": 0,
"preview": ""
},
{
"path": "helm/templates/NOTES.txt",
"chars": 0,
"preview": ""
},
{
"path": "helm/templates/_helpers.tpl",
"chars": 1290,
"preview": "{{/* vim: set filetype=mustache: */}}\n{{/*\nExpand the name of the chart.\n*/}}\n{{- define \"netassert.name\" -}}\n{{- defaul"
},
{
"path": "helm/templates/clusterrole.yaml",
"chars": 717,
"preview": "apiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: {{ template \"netassert.fullname\" . }}\n ann"
},
{
"path": "helm/templates/clusterrolebinding.yaml",
"chars": 602,
"preview": "apiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: {{ template \"netassert.fullname\" . }"
},
{
"path": "helm/templates/configmap.yaml",
"chars": 439,
"preview": "{{- if ne .Values.mode \"post-deploy\" }}\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: {{ template \"netassert.fullname"
},
{
"path": "helm/templates/job.yaml",
"chars": 2654,
"preview": "apiVersion: batch/v1\nkind: Job\nmetadata:\n name: {{ template \"netassert.fullname\" . }}\n annotations:\n {{- include \"n"
},
{
"path": "helm/templates/serviceaccount.yaml",
"chars": 336,
"preview": "apiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: {{ template \"netassert.fullname\" . }}\n annotations:\n {{- inclu"
},
{
"path": "helm/values.yaml",
"chars": 484,
"preview": "\n\nmode: post-deploy\njob:\n parallelism: 1\n completions: 1\n activeDeadlineSeconds: 900\n backoffLimit: 0\n ttlSecondsAf"
},
{
"path": "internal/data/read.go",
"chars": 1944,
"preview": "package data\n\nimport (\n\t\"fmt\"\n\t\"os\"\n\t\"path/filepath\"\n)\n\n// List of file extensions we support\nconst (\n\tfileExtensionYAML"
},
{
"path": "internal/data/read_test.go",
"chars": 2341,
"preview": "package data\n\nimport (\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/require\"\n)\n\nfunc TestReadTestFile(t *testing"
},
{
"path": "internal/data/tap.go",
"chars": 920,
"preview": "package data\n\nimport (\n\t\"fmt\"\n\t\"io\"\n\n\t\"gopkg.in/yaml.v2\"\n)\n\n// TAPResult - outputs result of tests into a TAP format\nfun"
},
{
"path": "internal/data/tap_test.go",
"chars": 1146,
"preview": "package data\n\nimport (\n\t\"bytes\"\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/require\"\n)\n\nfunc TestTests_TAPResult(t *testin"
},
{
"path": "internal/data/testdata/dir-without-yaml-files/.gitkeep",
"chars": 0,
"preview": ""
},
{
"path": "internal/data/testdata/invalid/duplicated-names.yaml",
"chars": 382,
"preview": "- name: testname\n type: k8s\n targetPort: 80\n exitCode: 0\n src:\n k8sResource:\n kind: deployment\n name: d"
},
{
"path": "internal/data/testdata/invalid/empty-resources.yaml",
"chars": 119,
"preview": "- name: testname\n type: k8s\n targetPort: 100000\n exitCode: 0\n src:\n k8sResource:\n dumb: 44\n dst:\n host:"
},
{
"path": "internal/data/testdata/invalid/host-as-dst-udp.yaml",
"chars": 207,
"preview": "- name: testname\n type: k8s\n protocol: udp\n targetPort: 80\n exitCode: 0\n src:\n k8sResource:\n kind: deployme"
},
{
"path": "internal/data/testdata/invalid/host-as-source.yaml",
"chars": 191,
"preview": "- name: testname\n type: k8s\n targetPort: 80\n exitCode: 0\n src:\n host:\n name: \"1.1.1.1\"\n dst:\n k8sResourc"
},
{
"path": "internal/data/testdata/invalid/missing-fields.yaml",
"chars": 98,
"preview": "- attempts: -2\n timeoutSeconds: -2\n targetPort: 100000\n exitCode: 0\n src:\n wrong: 44\n dst:"
},
{
"path": "internal/data/testdata/invalid/multiple-dst-blocks.yaml",
"chars": 320,
"preview": "- name: testname2\n type: k8s\n protocol: udp\n timeoutSeconds: 50\n attempts: 15\n targetPort: 8080\n exitCode: 1\n src"
},
{
"path": "internal/data/testdata/invalid/not-a-list.yaml",
"chars": 28,
"preview": "this:\nshould: \"be\"\na: \"list\""
},
{
"path": "internal/data/testdata/invalid/wrong-test-values.yaml",
"chars": 245,
"preview": "- name: testname\n type: k8s\n protocol: nonexisting\n attempts: -2\n timeoutSeconds: -2\n targetPort: 100000\n exitCode"
},
{
"path": "internal/data/testdata/invalid-duplicated-names/input1.yaml",
"chars": 191,
"preview": "- name: testname\n type: k8s\n targetPort: 80\n exitCode: 0\n src:\n k8sResource:\n kind: deployment\n name: d"
},
{
"path": "internal/data/testdata/invalid-duplicated-names/input2.yaml",
"chars": 191,
"preview": "- name: testname\n type: k8s\n targetPort: 80\n exitCode: 0\n src:\n k8sResource:\n kind: deployment\n name: d"
},
{
"path": "internal/data/testdata/valid/empty.yaml",
"chars": 0,
"preview": ""
},
{
"path": "internal/data/testdata/valid/multi.yaml",
"chars": 479,
"preview": "- name: testname\n type: k8s\n targetPort: 80\n exitCode: 0\n src:\n k8sResource:\n kind: deployment\n name: d"
},
{
"path": "internal/data/types.go",
"chars": 7517,
"preview": "package data\n\nimport (\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\n\t\"gopkg.in/yaml.v3\"\n)\n\n// Protocol - represents the Layer 4 protocol\ntype"
},
{
"path": "internal/data/types_test.go",
"chars": 3864,
"preview": "package data\n\nimport (\n\t\"os\"\n\t\"path/filepath\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/require\"\n)\n\nfunc Test"
},
{
"path": "internal/engine/engine.go",
"chars": 5249,
"preview": "//go:generate mockgen -destination=engine_mocks_test.go -package=engine github.com/controlplaneio/netassert/internal/eng"
},
{
"path": "internal/engine/engine_daemonset_test.go",
"chars": 1834,
"preview": "package engine\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"testing\"\n\n\t\"github.com/hashicorp/go-hclog\"\n\t\"github.com/stretchr/testify/re"
},
{
"path": "internal/engine/engine_deployment_test.go",
"chars": 1839,
"preview": "package engine\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"testing\"\n\n\t\"github.com/hashicorp/go-hclog\"\n\t\"github.com/stretchr/testify/re"
},
{
"path": "internal/engine/engine_mocks_test.go",
"chars": 7344,
"preview": "// Code generated by MockGen. DO NOT EDIT.\n// Source: github.com/controlplaneio/netassert/internal/engine (interfaces: N"
},
{
"path": "internal/engine/engine_pod_test.go",
"chars": 1693,
"preview": "package engine\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"testing\"\n\n\t\"github.com/hashicorp/go-hclog\"\n\t\"github.com/stretchr/testify/re"
},
{
"path": "internal/engine/engine_statefulset_test.go",
"chars": 1874,
"preview": "package engine\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"testing\"\n\n\t\"github.com/hashicorp/go-hclog\"\n\t\"github.com/stretchr/testify/re"
},
{
"path": "internal/engine/interface.go",
"chars": 2290,
"preview": "package engine\n\nimport (\n\t\"context\"\n\t\"time\"\n\n\tcorev1 \"k8s.io/api/core/v1\"\n)\n\n// PodGetter - gets a running Pod from vari"
},
{
"path": "internal/engine/run_tcp.go",
"chars": 4332,
"preview": "package engine\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"strconv\"\n\t\"time\"\n\n\t\"github.com/controlplaneio/netassert/v2/internal/data\"\n\t"
},
{
"path": "internal/engine/run_tcp_test.go",
"chars": 2297,
"preview": "package engine\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/hashicorp/go-hclog\"\n\t\"github.com/stretchr"
},
{
"path": "internal/engine/run_udp.go",
"chars": 6939,
"preview": "package engine\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"strconv\"\n\t\"time\"\n\n\t\"github.com/controlplaneio/netassert/v2/internal/data\"\n\t"
},
{
"path": "internal/kubeops/client.go",
"chars": 2627,
"preview": "package kubeops\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/hashicorp/go-hclog\"\n\t\"k8s.io/client-go/kubernetes\"\n\t\"k8s.io/client-go/res"
},
{
"path": "internal/kubeops/container_test.go",
"chars": 2933,
"preview": "package kubeops\n\nimport (\n\t\"context\"\n\t\"testing\"\n\n\t\"github.com/hashicorp/go-hclog\"\n\t\"github.com/stretchr/testify/require\""
},
{
"path": "internal/kubeops/containers.go",
"chars": 7748,
"preview": "package kubeops\n\nimport (\n\t\"context\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"strconv\"\n\t\"time\"\n\n\tcorev1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k"
},
{
"path": "internal/kubeops/daemonset.go",
"chars": 1479,
"preview": "package kubeops\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\n\tcorev1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\""
},
{
"path": "internal/kubeops/daemonset_test.go",
"chars": 6048,
"preview": "package kubeops\n\nimport (\n\t\"context\"\n\t\"testing\"\n\n\t\"github.com/hashicorp/go-hclog\"\n\t\"github.com/stretchr/testify/require\""
},
{
"path": "internal/kubeops/deployment.go",
"chars": 2969,
"preview": "package kubeops\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\n\tappsv1 \"k8s.io/api/apps/v1\"\n\tcorev1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/a"
},
{
"path": "internal/kubeops/deployment_test.go",
"chars": 5347,
"preview": "package kubeops\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"testing\"\n\n\t\"github.com/hashicorp/go-hclog\"\n\t\"github.com/stretchr/testify/r"
},
{
"path": "internal/kubeops/pod.go",
"chars": 5572,
"preview": "package kubeops\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"log\"\n\t\"math/rand\"\n\t\"strings\"\n\t\"time\"\n\n\tcorev1 \"k8s.io/api/core/v1\"\n\tapierr"
},
{
"path": "internal/kubeops/pod_test.go",
"chars": 3005,
"preview": "package kubeops\n\nimport (\n\t\"context\"\n\t\"testing\"\n\n\t\"github.com/hashicorp/go-hclog\"\n\t\"github.com/stretchr/testify/require\""
},
{
"path": "internal/kubeops/statefulset.go",
"chars": 1975,
"preview": "package kubeops\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\n\tcorev1 \"k8s.io/api/core/v1\"\n\tapierrors \"k8s.io/apimachinery/pkg/api/errors"
},
{
"path": "internal/kubeops/statefulset_test.go",
"chars": 5317,
"preview": "package kubeops\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"testing\"\n\n\t\"github.com/hashicorp/go-hclog\"\n\t\"github.com/stretchr/testify/r"
},
{
"path": "internal/kubeops/string_gen.go",
"chars": 603,
"preview": "package kubeops\n\nimport (\n\t\"math/rand\"\n\n\t\"github.com/google/uuid\"\n)\n\nconst (\n\tcharset = \"abcdefghijklmnopqrstuvwxyz12345"
},
{
"path": "internal/logger/hclog.go",
"chars": 675,
"preview": "package logger\n\nimport (\n\t\"fmt\"\n\t\"io\"\n\t\"strings\"\n\n\t\"github.com/hashicorp/go-hclog\"\n)\n\n// NewHCLogger - return an instanc"
},
{
"path": "justfile",
"chars": 1490,
"preview": "default:\n just --list\n\nversion := \"0.0.1\"\n\n# build the binary in ./bin folder\nbuild:\n\tgo build -o bin/netassert cmd/n"
},
{
"path": "rbac/cluster-role.yaml",
"chars": 404,
"preview": "apiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: netassert\nrules:\n- apiGroups:\n - \"\"\n - \"a"
},
{
"path": "rbac/cluster-rolebinding.yaml",
"chars": 262,
"preview": "apiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: netassert\nroleRef:\n apiGroup: rbac."
}
]